{"_id": "632589828c8b9fca2c3a59e97451fde8fa7d188d", "title": "A hybrid of genetic algorithm and particle swarm optimization for recurrent network design", "text": "An evolutionary recurrent network which automates the design of recurrent neural/fuzzy networks using a new evolutionary learning algorithm is proposed in this paper. This new evolutionary learning algorithm is based on a hybrid of genetic algorithm (GA) and particle swarm optimization (PSO), and is thus called HGAPSO. In HGAPSO, individuals in a new generation are created, not only by crossover and mutation operation as in GA, but also by PSO. The concept of elite strategy is adopted in HGAPSO, where the upper-half of the best-performing individuals in a population are regarded as elites. However, instead of being reproduced directly to the next generation, these elites are first enhanced. The group constituted by the elites is regarded as a swarm, and each elite corresponds to a particle within it. In this regard, the elites are enhanced by PSO, an operation which mimics the maturing phenomenon in nature. These enhanced elites constitute half of the population in the new generation, whereas the other half is generated by performing crossover and mutation operation on these enhanced elites. HGAPSO is applied to recurrent neural/fuzzy network design as follows. For recurrent neural network, a fully connected recurrent neural network is designed and applied to a temporal sequence production problem. For recurrent fuzzy network design, a Takagi-Sugeno-Kang-type recurrent fuzzy network is designed and applied to dynamic plant control. The performance of HGAPSO is compared to both GA and PSO in these recurrent networks design problems, demonstrating its superiority."}
{"_id": "86e87db2dab958f1bd5877dc7d5b8105d6e31e46", "title": "A Hybrid EP and SQP for Dynamic Economic Dispatch with Nonsmooth Fuel Cost Function", "text": "Dynamic economic dispatch (DED) is one of the main functions of power generation operation and control. It determines the optimal settings of generator units with predicted load demand over a certain period of time. The objective is to operate an electric power system most economically while the system is operating within its security limits. This paper proposes a new hybrid methodology for solving DED. The proposed method is developed in such a way that a simple evolutionary programming (EP) is applied as a based level search, which can give a good direction to the optimal global region, and a local search sequential quadratic programming (SQP) is used as a fine tuning to determine the optimal solution at the final. Ten units test system with nonsmooth fuel cost function is used to illustrate the effectiveness of the proposed method compared with those obtained from EP and SQP alone."}
{"_id": "2a047d8c4c2a4825e0f0305294e7da14f8de6fd3", "title": "Genetic Fuzzy Systems - Evolutionary Tuning and Learning of Fuzzy Knowledge Bases", "text": "It's not surprisingly when entering this site to get the book. One of the popular books now is the genetic fuzzy systems evolutionary tuning and learning of fuzzy knowledge bases. You may be confused because you can't find the book in the book store around your city. Commonly, the popular book will be sold quickly. And when you have found the store to buy the book, it will be so hurt when you run out of it. This is why, searching for this popular book in this website will give you benefit. You will not run out of this book."}
{"_id": "506172b0e0dd4269bdcfe96dda9ea9d8602bbfb6", "title": "A modified particle swarm optimizer", "text": "In this paper, we introduce a new parameter, called inertia weight, into the original particle swarm optimizer. Simulations have been done to illustrate the signilicant and effective impact of this new parameter on the particle swarm optimizer."}
{"_id": "51317b6082322a96b4570818b7a5ec8b2e330f2f", "title": "Identification and control of dynamic systems using recurrent fuzzy neural networks", "text": "This paper proposes a recurrent fuzzy neural network (RFNN) structure for identifying and controlling nonlinear dynamic systems. The RFNN is inherently a recurrent multilayered connectionist network for realizing fuzzy inference using dynamic fuzzy rules. Temporal relations are embedded in the network by adding feedback connections in the second layer of the fuzzy neural network (FNN). The RFNN expands the basic ability of the FNN to cope with temporal problems. In addition, results for the FNNfuzzy inference engine, universal approximation, and convergence analysis are extended to the RFNN. For the control problem, we present the direct and indirect adaptive control approaches using the RFNN. Based on the Lyapunov stability approach, rigorous proofs are presented to guarantee the convergence of the RFNN by choosing appropriate learning rates. Finally, the RFNN is applied in several simulations (time series prediction, identification, and control of nonlinear systems). The results confirm the effectiveness of the RFNN."}
{"_id": "857a8c6c46b0a85ed6019f5830294872f2f1dcf5", "title": "Separate face and body selectivity on the fusiform gyrus.", "text": "Recent reports of a high response to bodies in the fusiform face area (FFA) challenge the idea that the FFA is exclusively selective for face stimuli. We examined this claim by conducting a functional magnetic resonance imaging experiment at both standard (3.125 x 3.125 x 4.0 mm) and high resolution (1.4 x 1.4 x 2.0 mm). In both experiments, regions of interest (ROIs) were defined using data from blocked localizer runs. Within each ROI, we measured the mean peak response to a variety of stimulus types in independent data from a subsequent event-related experiment. Our localizer scans identified a fusiform body area (FBA), a body-selective region reported recently by Peelen and Downing (2005) that is anatomically distinct from the extrastriate body area. The FBA overlapped with and was adjacent to the FFA in all but two participants. Selectivity of the FFA to faces and FBA to bodies was stronger for the high-resolution scans, as expected from the reduction in partial volume effects. When new ROIs were constructed for the high-resolution experiment by omitting the voxels showing overlapping selectivity for both bodies and faces in the localizer scans, the resulting FFA* ROI showed no response above control objects for body stimuli, and the FBA* ROI showed no response above control objects for face stimuli. These results demonstrate strong selectivities in distinct but adjacent regions in the fusiform gyrus for only faces in one region (the FFA*) and only bodies in the other (the FBA*)."}
{"_id": "12f107016fd3d062dff88a00d6b0f5f81f00522d", "title": "Scheduling for Reduced CPU Energy", "text": "The energy usage of computer systems is becoming more important, especially for battery operated systems. Displays, disks, and cpus, in that order, use the most energy. Reducing the energy used by displays and disks has been studied elsewhere; this paper considers a new method for reducing the energy used by the cpu. We introduce a new metric for cpu energy performance, millions-of-instructions-per-joule (MIPJ). We examine a class of methods to reduce MIPJ that are characterized by dynamic control of system clock speed by the operating system scheduler. Reducing clock speed alone does not reduce MIPJ, since to do the same work the system must run longer. However, a number of methods are available for reducing energy with reduced clock-speed, such as reducing the voltage [Chandrakasan et al 1992][Horowitz 1993] or using reversible [Younis and Knight 1993] or adiabatic logic [Athas et al 1994].\n What are the right scheduling algorithms for taking advantage of reduced clock-speed, especially in the presence of applications demanding ever more instructions-per-second? We consider several methods for varying the clock speed dynamically under control of the operating system, and examine the performance of these methods against workstation traces. The primary result is that by adjusting the clock speed at a fine grain, substantial CPU energy can be saved with a limited impact on performance."}
{"_id": "1ae0ac5e13134df7a0d670fc08c2b404f1e3803c", "title": "A data mining approach for location prediction in mobile environments", "text": "Mobility prediction is one of the most essential issues that need to be explored for mobility management in mobile computing systems. In this paper, we propose a new algorithm for predicting the next inter-cell movement of a mobile user in a Personal Communication Systems network. In the first phase of our threephase algorithm, user mobility patterns are mined from the history of mobile user trajectories. In the second phase, mobility rules are extracted from these patterns, and in the last phase, mobility predictions are accomplished by using these rules. The performance of the proposed algorithm is evaluated through simulation as compared to two other prediction methods. The performance results obtained in terms of Precision and Recall indicate that our method can make more accurate predictions than the other methods. 2004 Elsevier B.V. All rights reserved."}
{"_id": "7d3c9c4064b588d5d8c7c0cb398118aac239c71b", "title": "$\\mathsf {pSCAN}$ : Fast and Exact Structural Graph Clustering", "text": "We study the problem of structural graph clustering, a fundamental problem in managing and analyzing graph data. Given an undirected unweighted graph, structural graph clustering is to assign vertices to clusters, and to identify the sets of hub vertices and outlier vertices as well, such that vertices in the same cluster are densely connected to each other while vertices in different clusters are loosely connected. In this paper, we develop a new two-step paradigm for scalable structural graph clustering based on our three observations. Then, we present a $\\mathsf {pSCAN}$ approach, within the paradigm, aiming to reduce the number of structural similarity computations, and propose optimization techniques to speed up checking whether two vertices are structure-similar. $\\mathsf {pSCAN}$ outputs exactly the same clusters as the existing approaches $\\mathsf {SCAN}$ and $\\mathsf {SCAN\\text{++}}$ , and we prove that $\\mathsf {pSCAN}$ is worst-case optimal. Moreover, we propose efficient techniques for updating the clusters when the input graph dynamically changes, and we also extend our techniques to other similarity measures, e.g., Jaccard similarity. Performance studies on large real and synthetic graphs demonstrate the efficiency of our new approach and our dynamic cluster maintenance techniques. Noticeably, for the twitter graph with 1 billion edges, our approach takes 25 minutes while the state-of-the-art approach cannot finish even after 24 hours."}
{"_id": "305c45fb798afdad9e6d34505b4195fa37c2ee4f", "title": "Synthesis, properties, and applications of iron nanoparticles.", "text": "Iron, the most ubiquitous of the transition metals and the fourth most plentiful element in the Earth's crust, is the structural backbone of our modern infrastructure. It is therefore ironic that as a nanoparticle, iron has been somewhat neglected in favor of its own oxides, as well as other metals such as cobalt, nickel, gold, and platinum. This is unfortunate, but understandable. Iron's reactivity is important in macroscopic applications (particularly rusting), but is a dominant concern at the nanoscale. Finely divided iron has long been known to be pyrophoric, which is a major reason that iron nanoparticles have not been more fully studied to date. This extreme reactivity has traditionally made iron nanoparticles difficult to study and inconvenient for practical applications. Iron however has a great deal to offer at the nanoscale, including very potent magnetic and catalytic properties. Recent work has begun to take advantage of iron's potential, and work in this field appears to be blossoming."}
{"_id": "9f234867df1f335a76ea07933e4ae1bd34eeb48a", "title": "Automatic Machine Translation Evaluation: A Qualitative Approach", "text": "ADVERTIMENT. La consulta d\u2019aquesta tesi queda condicionada a l\u2019acceptaci\u00f3 de les seg\u00fcents condicions d'\u00fas: La difusi\u00f3 d\u2019aquesta tesi per mitj\u00e0 del servei TDX (www.tdx.cat) i a trav\u00e9s del Dip\u00f2sit Digital de la UB (diposit.ub.edu) ha estat autoritzada pels titulars dels drets de propietat intel\u00b7lectual \u00fanicament per a usos privats emmarcats en activitats d\u2019investigaci\u00f3 i doc\u00e8ncia. No s\u2019autoritza la seva reproducci\u00f3 amb finalitats de lucre ni la seva difusi\u00f3 i posada a disposici\u00f3 des d\u2019un lloc ali\u00e8 al servei TDX ni al Dip\u00f2sit Digital de la UB. No s\u2019autoritza la presentaci\u00f3 del seu contingut en una finestra o marc ali\u00e8 a TDX o al Dip\u00f2sit Digital de la UB (framing). Aquesta reserva de drets afecta tant al resum de presentaci\u00f3 de la tesi com als seus continguts. En la utilitzaci\u00f3 o cita de parts de la tesi \u00e9s obligat indicar el nom de la persona autora."}
{"_id": "5ebfcd50c56e51aada28ccecd041db5e002f5862", "title": "Gualzru's Path to the Advertisement World", "text": "This paper describes the genesis of Gualzru, a robot commissioned by a large Spanish technological company to provide advertisement services in open public spaces. Gualzru has to stand by at an interactive panel observing the people passing by and, at some point, select a promising candidate and approach her to initiate a conversation. After a small verbal interaction, the robot is supposed to convince the passerby to walk back to the panel, leaving the rest of the selling task to an interactive software embedded in it. The whole design and building process took less than three years of team composed of five groups at different geographical locations. We describe here the lessons learned during this period of time, from different points of view including the hardware, software, architectural decisions and team collaboration issues."}
{"_id": "73a7144e072356b5c9512bd4a87b22457d33760c", "title": "Treatment-Response Models for Counterfactual Reasoning with Continuous-time, Continuous-valued Interventions", "text": "Treatment effects can be estimated from observational data as the difference in potential outcomes. In this paper, we address the challenge of estimating the potential outcome when treatment-dose levels can vary continuously over time. Further, the outcome variable may not be measured at a regular frequency. Our proposed solution represents the treatment response curves using linear time-invariant dynamical systems\u2014this provides a flexible means for modeling response over time to highly variable dose curves. Moreover, for multivariate data, the proposed method: uncovers shared structure in treatment response and the baseline across multiple markers; and, flexibly models challenging correlation structure both across and within signals over time. For this, we build upon the framework of multiple-output Gaussian Processes. On simulated and a challenging clinical dataset, we show significant gains in accuracy over stateof-the-art models."}
{"_id": "c2aa3c7fd59a43c949844e98569429261dba36e6", "title": "Planar Helical Antenna of Circular Polarization", "text": "A planar helical antenna is presented for achieving wideband end-fire radiation of circular polarization while maintaining a very low profile. The helix is formed using printed strips with straight-edge connections implemented by plated viaholes. The currents flowing on the strips and along via-holes of the helix contribute to the horizontal and vertical polarizations, respectively. Besides, the current on the ground plane is utilized to weaken the strong amplitude of the horizontal electric field generated by the one on the strips. Thus, a good circular polarization can be achieved. Furthermore, a tapered helix and conducting side-walls are employed to broaden the axial ratio (AR) bandwidth as well as to improve the end-fire radiation pattern. The designed antenna operates at the center frequency of 10 GHz. Simulated results show that the planar helical antenna achieves wide-impedance bandwidth (|S11| <; -10 dB) from 7.4 to 12.8 GHz (54%) and 3-dB AR bandwidth from 8.2 to 11.6 GHz (34%), while retaining a thickness of only 0.11\u03bb0 at the center frequency. A prototype of the proposed antenna is fabricated and tested. Measured results are in good agreement with simulated ones."}
{"_id": "befdf0eb1a3d2e0d404e7fbdb43438be7ae607e5", "title": "Body Composition Changes After Very-Low-Calorie Ketogenic Diet in Obesity Evaluated by 3 Standardized Methods.", "text": "Context\nCommon concerns when using low-calorie diets as a treatment for obesity are the reduction in fat-free mass, mostly muscular mass, that occurs together with the fat mass (FM) loss, and determining the best methodologies to evaluate body composition changes.\n\n\nObjective\nThis study aimed to evaluate the very-low-calorie ketogenic (VLCK) diet-induced changes in body composition of obese patients and to compare 3 different methodologies used to evaluate those changes.\n\n\nDesign\nTwenty obese patients followed a VLCK diet for 4 months. Body composition assessment was performed by dual-energy X-ray absorptiometry (DXA), multifrequency bioelectrical impedance (MF-BIA), and air displacement plethysmography (ADP) techniques. Muscular strength was also assessed. Measurements were performed at 4 points matched with the ketotic phases (basal, maximum ketosis, ketosis declining, and out of ketosis).\n\n\nResults\nAfter 4 months the VLCK diet induced a -20.2 \u00b1 4.5 kg weight loss, at expenses of reductions in fat mass (FM) of -16.5 \u00b1 5.1 kg (DXA), -18.2 \u00b1 5.8 kg (MF-BIA), and -17.7 \u00b1 9.9 kg (ADP). A substantial decrease was also observed in the visceral FM. The mild but marked reduction in fat-free mass occurred at maximum ketosis, primarily as a result of changes in total body water, and was recovered thereafter. No changes in muscle strength were observed. A strong correlation was evidenced between the 3 methods of assessing body composition.\n\n\nConclusion\nThe VLCK diet-induced weight loss was mainly at the expense of FM and visceral mass; muscle mass and strength were preserved. Of the 3 body composition techniques used, the MF-BIA method seems more convenient in the clinical setting."}
{"_id": "506d4ca228f81715946ed1ad8d9205fad20fddfe", "title": "Measuring pictorial balance perception at first glance using Japanese calligraphy", "text": "According to art theory, pictorial balance acts to unify picture elements into a cohesive composition. For asymmetrical compositions, balancing elements is thought to be similar to balancing mechanical weights in a framework of symmetry axes. Assessment of preference for balance (APB), based on the symmetry-axes framework suggested in Arnheim R, 1974 Art and Visual Perception: A Psychology of the Creative Eye (Berkeley, CA: University of California Press), successfully matched subject balance ratings of images of geometrical shapes over unlimited viewing time. We now examine pictorial balance perception of Japanese calligraphy during first fixation, isolated from later cognitive processes, comparing APB measures with results from balance-rating and comparison tasks. Results show high between-task correlation, but low correlation with APB. We repeated the rating task, expanding the image set to include five rotations of each image, comparing balance perception of artist and novice participant groups. Rotation has no effect on APB balance computation but dramatically affects balance rating, especially for art experts. We analyze the variety of rotation effects and suggest that, rather than depending on element size and position relative to symmetry axes, first fixation balance processing derives from global processes such as grouping of lines and shapes, object recognition, preference for horizontal and vertical elements, closure, and completion, enhanced by vertical symmetry."}
{"_id": "772205182fbb6ad842df4a6cd937741145eeece0", "title": "Smoking and cervical cancer: pooled analysis of the IARC multi-centric case\u2013control study", "text": "Background: Smoking has long been suspected to be a risk factor for cervical cancer. However, not all previous studies have properly controlled for the effect of human papillomavirus (HPV) infection, which has now been established as a virtually necessary cause of cervical cancer. To evaluate the role of smoking as a cofactor of progression from HPV infection to cancer, we performed a pooled analysis of 10 previously published case\u2013control studies. This analysis is part of a series of analyses of cofactors of HPV in the aetiology of cervical cancer. Methods: Data were pooled from eight case\u2013control studies of invasive cervical carcinoma (ICC) and two of carcinoma in situ (CIS) from four continents. All studies used a similar protocol and questionnaires and included a PCR-based evaluation of HPV DNA in cytological smears or biopsy specimens. Only subjects positive for HPV DNA were included in the analysis. A total of 1463 squamous cell ICC cases were analyzed, along with 211 CIS cases, 124 adeno- or adeno-squamous ICC cases and 254 control women. Pooled odds ratios (OR) and 95% confidence intervals (CI) were estimated using logistic regression models controlling for sexual and non-sexual confounding factors. Results: There was an excess risk for ever smoking among HPV positive women (OR 2.17 95%CI 1.46\u20133.22). When results were analyzed by histological type, an excess risk was observed among cases of squamous cell carcinoma for current smokers (OR 2.30, 95%CI 1.31\u20134.04) and ex-smokers (OR 1.80, 95%CI 0.95\u20133.44). No clear pattern of association with risk was detected for adenocarcinomas, although the number of cases with this histologic type was limited. Conclusions: Smoking increases the risk of cervical cancer among HPV positive women. The results of our study are consistent with the few previously conducted studies of smoking and cervical cancer that have adequately controlled for HPV infection. Recent increasing trends of smoking among young women could have a serious impact on cervical cancer incidence in the coming years."}
{"_id": "d2018e51b772aba852e54ccc0ba7f0b7c2792115", "title": "Breathing Detection: Towards a Miniaturized, Wearable, Battery-Operated Monitoring System", "text": "This paper analyzes the main challenges associated with noninvasive, continuous, wearable, and long-term breathing monitoring. The characteristics of an acoustic breathing signal from a miniature sensor are studied in the presence of sources of noise and interference artifacts that affect the signal. Based on these results, an algorithm has been devised to detect breathing. It is possible to implement the algorithm on a single integrated circuit, making it suitable for a miniature sensor device. The algorithm is tested in the presence of noise sources on five subjects and shows an average success rate of 91.3% (combined true positives and true negatives)."}
{"_id": "cc76f5d348ab6c3a20ab4adb285fc1ad96d3c009", "title": "Speech-driven 3 D Facial Animation with Implicit Emotional Awareness : A Deep Learning Approach", "text": "We introduce a long short-term memory recurrent neural network (LSTM-RNN) approach for real-time facial animation, which automatically estimates head rotation and facial action unit activations of a speaker from just her speech. Specifically, the time-varying contextual non-linear mapping between audio stream and visual facial movements is realized by training a LSTM neural network on a large audio-visual data corpus. In this work, we extract a set of acoustic features from input audio, including Mel-scaled spectrogram, Mel frequency cepstral coefficients and chromagram that can effectively represent both contextual progression and emotional intensity of the speech. Output facial movements are characterized by 3D rotation and blending expression weights of a blendshape model, which can be used directly for animation. Thus, even though our model does not explicitly predict the affective states of the target speaker, her emotional manifestation is recreated via expression weights of the face model. Experiments on an evaluation dataset of different speakers across a wide range of affective states demonstrate promising results of our approach in real-time speech-driven facial animation."}
{"_id": "1b2a0e8af5c1f18e47e71244973ce4ace4ac6034", "title": "Compressed Nonparametric Language Modelling", "text": "Hierarchical Pitman-Yor Process priors are compelling methods for learning language models, outperforming point-estimate based methods. However, these models remain unpopular due to computational and statistical inference issues, such as memory and time usage, as well as poor mixing of sampler. In this work we propose a novel framework which represents the HPYP model compactly using compressed suffix trees. Then, we develop an efficient approximate inference scheme in this framework that has a much lower memory footprint compared to full HPYP and is fast in the inference time. The experimental results illustrate that our model can be built on significantly larger datasets compared to previous HPYP models, while being several orders of magnitudes smaller, fast for training and inference, and outperforming the perplexity of the state-of-the-art Modified Kneser-Ney countbased LM smoothing by up to 15%."}
{"_id": "c9d41f115eae5e03c5ed45c663d9435cb66ec942", "title": "Dissecting and Reassembling Color Correction Algorithms for Image Stitching", "text": "This paper introduces a new compositional framework for classifying color correction methods according to their two main computational units. The framework was used to dissect fifteen among the best color correction algorithms and the computational units so derived, with the addition of four new units specifically designed for this work, were then reassembled in a combinatorial way to originate about one hundred distinct color correction methods, most of which never considered before. The above color correction methods were tested on three different existing datasets, including both real and artificial color transformations, plus a novel dataset of real image pairs categorized according to the kind of color alterations induced by specific acquisition setups. Differently from previous evaluations, special emphasis was given to effectiveness in real world applications, such as image mosaicing and stitching, where robustness with respect to strong image misalignments and light scattering effects is required. Experimental evidence is provided for the first time in terms of the most recent perceptual image quality metrics, which are known to be the closest to human judgment. Comparative results show that combinations of the new computational units are the most effective for real stitching scenarios, regardless of the specific source of color alteration. On the other hand, in the case of accurate image alignment and artificial color alterations, the best performing methods either use one of the new computational units, or are made up of fresh combinations of existing units."}
{"_id": "b579366db457216b0548220bf369ab9eb183a0cc", "title": "An analysis on the significance of ticket analytics and defect analysis from software quality perspective", "text": "Software even though intangible should undergo evolution to fit into the ever changing real world scenarios. Each issue faced by the development and service team directly reflects in the quality of the software product. According to the related work, very few research is going on in the field of ticket and its related incident; a part of corrective maintenance. In depth research on incident tickets should be viewed as critical since, it provides information related to the kind of maintenance activities that is performed in any timestamp. Therefore classifying and analyzing tickets becomes a critical task in managing the operations of the service since each incident will be having a service level agreement associated with it. Further, incident analysis is essential to identify the patterns associated. Due to the existence of huge population of software in each organization and millions of incidents get reported per software product every year, it is practically impossible to manually analyze all the tickets. This paper focuses on projecting the importance of ticket to maintain the quality of software products and also distinguish it from the defect associated with a software system. This paper projects the importance of identifying defects in software as well as handling the incident related tickets and resolving it when viewed from the perspective of quality. It also gives an overview of the scope defect analysis and ticket analytics provide to the researchers."}
{"_id": "f69253e97f487b9d77b72553a9115fc814e3ed51", "title": "Clickbait Convolutional Neural Network", "text": "With the development of online advertisements, clickbait spread wider and wider. Clickbait dissatisfies users because the article content does not match their expectation. Thus, clickbait detection has attracted more and more attention recently. Traditional clickbait-detection methods rely on heavy feature engineering and fail to distinguish clickbait from normal headlines precisely because of the limited information in headlines. A convolutional neural network is useful for clickbait detection, since it utilizes pretrained Word2Vec to understand the headlines semantically, and employs different kernels to find various characteristics of the headlines. However, different types of articles tend to use different ways to draw users\u2019 attention, and a pretrained Word2Vec model cannot distinguish these different ways. To address this issue, we propose a clickbait convolutional neural network (CBCNN) to consider not only the overall characteristics but also specific characteristics from different article types. Our experimental results show that our method outperforms traditional clickbait-detection algorithms and the TextCNN model in terms of precision, recall and accuracy."}
{"_id": "6c9bd4bd7e30470e069f8600dadb4fd6d2de6bc1", "title": "A Database of Narrative Schemas", "text": "This paper describes a new language resource of events and semantic roles that characterize real-world situations. Narrative schemas contain sets of related events (edit and publish), a temporal ordering of the events (edit before publish), and the semantic roles of the participants (authors publish books). This type of world knowledge was central to early research in natural language understanding. Scripts were one of the main formalisms, representing common sequences of events that occur in the world. Unfortunately, most of this knowledge was hand-coded and time consuming to create. Current machine learning techniques, as well as a new approach to learning through coreference chains, has allowed us to automatically extract rich event structure from open domain text in the form of narrative schemas. The narrative schema resource described in this paper contains approximately 5000 unique events combined into schemas of varying sizes. We describe the resource, how it is learned, and a new evaluation of the coverage of these schemas over unseen documents."}
{"_id": "a72daf1fc4b1fc16d3c8a2e33f9aac6e17461d9a", "title": "User-Oriented Context Suggestion", "text": "Recommender systems have been used in many domains to assist users' decision making by providing item recommendations and thereby reducing information overload. Context-aware recommender systems go further, incorporating the variability of users' preferences across contexts, and suggesting items that are appropriate in different contexts. In this paper, we present a novel recommendation task, \"Context Suggestion\", whereby the system recommends contexts in which items may be selected. We introduce the motivations behind the notion of context suggestion and discuss several potential solutions. In particular, we focus specifically on user-oriented context suggestion which involves recommending appropriate contexts based on a user's profile. We propose extensions of well-known context-aware recommendation algorithms such as tensor factorization and deviation-based contextual modeling and adapt them as methods to recommend contexts instead of items. In our empirical evaluation, we compare the proposed solutions to several baseline algorithms using four real-world data sets."}
{"_id": "585da6b6355f3536e1b12b30ef4c3ea54b955f2d", "title": "Brand followers' retweeting behavior on Twitter: How brand relationships influence brand electronic word-of-mouth", "text": "Twitter, the popular microblogging site, has received increasing attention as a unique communication tool that facilitates electronic word-of-mouth (eWOM). To gain greater insight into this potential, this study investigates how consumers\u2019 relationships with brands influence their engagement in retweeting brand messages on Twitter. Data from a survey of 315 Korean consumers who currently follow brands on Twitter show that those who retweet brand messages outscore those who do not on brand identification, brand trust, community commitment, community membership intention, Twitter usage frequency, and total number of postings. 2014 Elsevier Ltd. All rights reserved."}
{"_id": "d18cc66f7f87e041dec544a0b843496085ab54e1", "title": "Memory, navigation and theta rhythm in the hippocampal-entorhinal system", "text": "Theories on the functions of the hippocampal system are based largely on two fundamental discoveries: the amnestic consequences of removing the hippocampus and associated structures in the famous patient H.M. and the observation that spiking activity of hippocampal neurons is associated with the spatial position of the rat. In the footsteps of these discoveries, many attempts were made to reconcile these seemingly disparate functions. Here we propose that mechanisms of memory and planning have evolved from mechanisms of navigation in the physical world and hypothesize that the neuronal algorithms underlying navigation in real and mental space are fundamentally the same. We review experimental data in support of this hypothesis and discuss how specific firing patterns and oscillatory dynamics in the entorhinal cortex and hippocampus can support both navigation and memory."}
{"_id": "22fc3af1fb55d48f3c03cd96f277503e92541c60", "title": "Predictive Control of Power Converters: Designs With Guaranteed Performance", "text": "In this work, a cost function design based on Lyapunov stability concepts for finite control set model predictive control is proposed. This predictive controller design allows one to characterize the performance of the controlled converter, while providing sufficient conditions for local stability for a class of power converters. Simulation and experimental results on a buck dc-dc converter and a two-level dc-ac inverter are conducted to validate the effectiveness of our proposal."}
{"_id": "4114c89bec92ebde7c20d12d0303281983ed1df8", "title": "Design and Implementation of a Fast Dynamic Packet Filter", "text": "This paper presents Swift, a packet filter for high-performance packet capture on commercial off-the-shelf hardware. The key features of the Swift include: 1) extremely lowfilter update latency for dynamic packet filtering, and 2) gigabits-per-second high-speed packet processing. Based on complex instruction set computer (CISC) instruction set architecture (ISA), Swift achieves the former with an instruction set design that avoids the need for compilation and security checking, and the latter by mainly utilizing single instruction, multiple data (SIMD). We implement Swift in the Linux 2.6 kernel for both i386 and \u00d786-64 architectures and extensively evaluate its dynamic and static filtering performance on multiple machines with different hardware setups. We compare Swift to BPF (the BSD packet filter)--the de facto standard for packet filtering in modern operating systems--and hand-coded optimized C filters that are used for demonstrating possible performance gains. For dynamic filtering tasks, Swift is at least three orders of magnitude faster than BPF in terms of filter update latency. For static filtering tasks, Swift outperforms BPF up to three times in terms of packet processing speed and achieves much closer performance to the optimized C filters. We also show that Swift can harness the processing power of hardware SIMD instructions by virtue of its SIMD-capable instruction set."}
{"_id": "8e508720cdb495b7821bf6e43c740eeb5f3a444a", "title": "Learning Scalable Deep Kernels with Recurrent\nStructure", "text": "Many applications in speech, robotics, finance, and biology deal with sequential data, where ordering matters and recurrent structures are common. However, this structure cannot be easily captured by standard kernel functions. To model such structure, we propose expressive closed-form kernel functions for Gaussian processes. The resulting model, GP-LSTM, fully encapsulates the inductive biases of long short-term memory (LSTM) recurrent networks, while retaining the non-parametric probabilistic advantages of Gaussian processes. We learn the properties of the proposed kernels by optimizing the Gaussian process marginal likelihood using a new provably convergent semi-stochastic gradient procedure, and exploit the structure of these kernels for scalable training and prediction. This approach provides a practical representation for Bayesian LSTMs. We demonstrate state-of-the-art performance on several benchmarks, and thoroughly investigate a consequential autonomous driving application, where the predictive uncertainties provided by GP-LSTM are uniquely valuable."}
{"_id": "110599f48c30251aba60f68b8484a7b0307bcb87", "title": "SemEval-2015 Task 11: Sentiment Analysis of Figurative Language in Twitter", "text": "This report summarizes the objectives and evaluation of the SemEval 2015 task on the sentiment analysis of figurative language on Twitter (Task 11). This is the first sentiment analysis task wholly dedicated to analyzing figurative language on Twitter. Specifically, three broad classes of figurative language are considered: irony, sarcasm and metaphor. Gold standard sets of 8000 training tweets and 4000 test tweets were annotated using workers on the crowdsourcing platform CrowdFlower. Participating systems were required to provide a fine-grained sentiment score on an 11-point scale (-5 to +5, including 0 for neutral intent) for each tweet, and systems were evaluated against the gold standard using both a Cosinesimilarity and a Mean-Squared-Error measure."}
{"_id": "4b53f660eb6cfe9180f9e609ad94df8606724a3d", "title": "Text mining of news-headlines for FOREX market prediction: A Multi-layer Dimension Reduction Algorithm with semantics and sentiment", "text": "In this paper a novel approach is proposed to predict intraday directional-movements of a currency-pair in the foreign exchange market based on the text of breaking financial news-headlines. The motivation behind this work is twofold: First, although market-prediction through text-mining is shown to be a promising area of work in the literature, the text-mining approaches utilized in it at this stage are not much beyond basic ones as it is still an emerging field. This work is an effort to put more emphasis on the text-mining methods and tackle some specific aspects thereof that are weak in previous works, namely: the problem of high dimensionality as well as the problem of ignoring sentiment and semantics in dealing with textual language. This research assumes that addressing these aspects of text-mining have an impact on the quality of the achieved results. The proposed system proves this assumption to be right. The second part of the motivation is to research a specific market, namely, the foreign exchange market, which seems not to have been researched in the previous works based on predictive text-mining. Therefore, results of this work also successfully demonstrate a predictive relationship between this specific market-type and the textual data of news. Besides the above two main components of the motivation, there are other specific aspects that make the setup of the proposed system and the conducted experiment unique, for example, the use of news article-headlines only and not news article-bodies, which enables usage of short pieces of text rather than long ones; or the use of general financial breaking news without any further filtration. In order to accomplish the above, this work produces a multi-layer algorithm that tackles each of the mentioned aspects of the text-mining problem at a designated layer. The first layer is termed the Semantic Abstraction Layer and addresses the problem of co-reference in text mining that is contributing to sparsity. Co-reference occurs when two or more words in a text corpus refer to the same concept. This work produces a custom approach by the name of Heuristic-Hypernyms Feature-Selection which creates a way to recognize words with the same parent-word to be regarded as one entity. As a result, prediction accuracy increases significantly at this layer which is attributed to appropriate noise-reduction from the feature-space. The second layer is termed Sentiment Integration Layer, which integrates sentiment analysis capability into the algorithm by proposing a sentiment weight by the name of SumScore that reflects investors\u2019 sentiment. Additionally, this layer reduces the dimensions by eliminating those that are of zero value in terms of sentiment and thereby improves prediction accuracy. The third layer encompasses a dynamic model creation algorithm, termed Synchronous Targeted Feature Reduction (STFR). It is suitable for the challenge at hand whereby the mining of a stream of text is concerned. It updates the models with the most recent information available and, more importantly, it ensures that the dimensions are reduced to the absolute minimum. 
The algorithm and each of its layers are extensively evaluated using real market data and news content across multiple years and have proven to be solid and superior to any other comparable solution. The proposed techniques implemented in the system, result in significantly high directional-accuracies of up to 83.33%. On top of a well-rounded multifaceted algorithm, this work contributes a much needed research framework for this context with a test-bed of data that must make future research endeavors more convenient. The produced algorithm is scalable and its modular design allows improvement in each of its layers in future research. This paper provides ample details to reproduce the entire system and the conducted experiments. 2014 Elsevier Ltd. All rights reserved. A. Khadjeh Nassirtoussi et al. / Expert Systems with Applications 42 (2015) 306\u2013324 307"}
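The Semantic Abstraction Layer's parent-word idea can be illustrated with WordNet hypernyms. A minimal sketch using NLTK's WordNet interface; collapsing each word onto its first sense's first hypernym is an illustrative simplification, not the paper's full Heuristic-Hypernyms Feature-Selection procedure:

```python
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

def parent_word(word):
    """Map a word onto a shared parent concept via its WordNet hypernym,
    so that sibling terms can be counted as one feature. Using only the
    first sense and one hypernym level is a simplification."""
    synsets = wn.synsets(word)
    if not synsets:
        return word                      # out-of-vocabulary: keep as-is
    hypernyms = synsets[0].hypernyms()
    return hypernyms[0].lemma_names()[0] if hypernyms else word

for w in ("dollar", "euro", "rupee"):
    print(w, "->", parent_word(w))       # ideally all collapse to one parent
```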
{"_id": "7f90ef42f22d4f9b86d33b0ad7f16261273c8612", "title": "BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network", "text": "a r t i c l e i n f o a b s t r a c t We present an automatic approach to the construction of BabelNet, a very large, wide-coverage multilingual semantic network. Key to our approach is the integration of lexicographic and encyclopedic knowledge from WordNet and Wikipedia. In addition, Machine Translation is applied to enrich the resource with lexical information for all languages. We first conduct in vitro experiments on new and existing gold-standard datasets to show the high quality and coverage of BabelNet. We then show that our lexical resource can be used successfully to perform both monolingual and cross-lingual Word Sense Disambiguation: thanks to its wide lexical coverage and novel semantic relations, we are able to achieve state-of the-art results on three different SemEval evaluation tasks."}
{"_id": "033b62167e7358c429738092109311af696e9137", "title": "Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews", "text": "This paper presents a simple unsupervised learning algorithm for classifying reviews as recommended (thumbs up) or not recommended (thumbs down). The classification of a review is predicted by the average semantic orientation of the phrases in the review that contain adjectives or adverbs. A phrase has a positive semantic orientation when it has good associations (e.g., \u201csubtle nuances\u201d) and a negative semantic orientation when it has bad associations (e.g., \u201cvery cavalier\u201d). In this paper, the semantic orientation of a phrase is calculated as the mutual information between the given phrase and the word \u201cexcellent\u201d minus the mutual information between the given phrase and the word \u201cpoor\u201d. A review is classified as recommended if the average semantic orientation of its phrases is positive. The algorithm achieves an average accuracy of 74% when evaluated on 410 reviews from Epinions, sampled from four different domains (reviews of automobiles, banks, movies, and travel destinations). The accuracy ranges from 84% for automobile reviews to 66% for movie reviews."}
{"_id": "105a0b3826710356e218685f87b20fe39c64c706", "title": "Opinion observer: analyzing and comparing opinions on the Web", "text": "The Web has become an excellent source for gathering consumer opinions. There are now numerous Web sites containing such opinions, e.g., customer reviews of products, forums, discussion groups, and blogs. This paper focuses on online customer reviews of products. It makes two contributions. First, it proposes a novel framework for analyzing and comparing consumer opinions of competing products. A prototype system called Opinion Observer is also implemented. The system is such that with a single glance of its visualization, the user is able to clearly see the strengths and weaknesses of each product in the minds of consumers in terms of various product features. This comparison is useful to both potential customers and product manufacturers. For a potential customer, he/she can see a visual side-by-side and feature-by-feature comparison of consumer opinions on these products, which helps him/her to decide which product to buy. For a product manufacturer, the comparison enables it to easily gather marketing intelligence and product benchmarking information. Second, a new technique based on language pattern mining is proposed to extract product features from Pros and Cons in a particular type of reviews. Such features form the basis for the above comparison. Experimental results show that the technique is highly effective and outperform existing methods significantly."}
{"_id": "9ea16bc34448ca9d713f4501f1a6215a26746372", "title": "A survey of software testing practices in alberta", "text": "Software organizations have typically de-emphasized the importance of software testing. In this paper, the results of a regional survey of software testing and software quality assurance techniques are described. Researchers conducted the study during the summer and fall of 2002 by surveying software organizations in the Province of Alberta. Results indicate that Alberta-based organizations tend to test less than their counterparts in the United States. The results also indicate that Alberta software organizations tend to train fewer personnel on testing-related topics. This practice has the potential for a two-fold impact: first, the ability to detect trends that lead to reduced quality and to identify the root causes of reductions in product quality may suffer from the lack of testing. This consequence is serious enough to warrant consideration, since overall quality may suffer from the reduced ability to detect and eliminate process or product defects. Second, the organization may have a more difficult time adopting methodologies such as extreme programming. This is significant because other industry studies have concluded that many software organizations have tried or will in the next few years try some form of agile method. Newer approaches to software development like extreme programming increase the extent to which teams rely on testing skills. Organizations should consider their testing skill level as a key indication of their readiness for adopting software development techniques such as test-driven development, extreme programming, agile modelling, or other agile methods."}
{"_id": "746cafc676374114198c414d6426ec2f50e0ff80", "title": "Analysis and Design of Average Current Mode Control Using a Describing-Function-Based Equivalent Circuit Model", "text": "This paper proposes a small-signal model for average current mode control based on an equivalent circuit. The model uses a three-terminal equivalent circuit model based on a linearized describing function method to include the feedback effect of the sideband frequency components of the inductor current. The model extends the results obtained in peak current mode control to average current mode control. The proposed small-signal model is accurate up to half switching frequency, predicting the subharmonic instability. The proposed model is verified using SIMPLIS simulation and hardware experiments, which show good agreement with the measurement results. Based on the proposed model, new feedback design guidelines are presented. The proposed design guidelines are compared with several conventional, widely used design criteria. By designing the external ramp following the proposed design guidelines, the quality factor of the double poles at half of the switching frequency in the control-to-output transfer function can be precisely controlled. This helps the feedback loop design to achieve wide control bandwidth and proper damping."}
{"_id": "2b337d6a72c8c2b1d97097dc24ec0e9a8d4c2186", "title": "Using deep learning for short text understanding", "text": "Classifying short texts to one category or clustering semantically related texts is challenging, and the importance of both is growing due to the rise of microblogging platforms, digital news feeds, and the like. We can accomplish this classifying and clustering with the help of a deep neural network which produces compact binary representations of a short text, and can assign the same category to texts that have similar binary representations. But problems arise when there is little contextual information on the short texts, which makes it difficult for the deep neural network to produce similar binary codes for semantically related texts. We propose to address this issue using semantic enrichment. This is accomplished by taking the nouns, and verbs used in the short texts and generating the concepts and co-occurring words with the help of those terms. The nouns are used to generate concepts within the given short text, whereas the verbs are used to prune the ambiguous context (if any) present in the text. The enriched text then goes through a deep neural network to produce a prediction label for that short text representing it\u2019s category."}
{"_id": "1d53a898850b8d055db80ba99c59c89b080dfc4c", "title": "MVOR: A Multi-view RGB-D Operating Room Dataset for 2D and 3D Human Pose Estimation", "text": "Person detection and pose estimation is a key requirement to develop intelligent context-aware assistance systems. To foster the development of human pose estimation methods and their applications in the Operating Room (OR), we release the Multi-View Operating Room (MVOR) dataset, the first public dataset recorded during real clinical interventions. It consists of 732 synchronized multi-view frames recorded by three RGB-D cameras in a hybrid OR. It also includes the visual challenges present in such environments, such as occlusions and clutter. We provide camera calibration parameters, color and depth frames, human bounding boxes, and 2D/3D pose annotations. In this paper, we present the dataset, its annotations, as well as baseline results from several recent person detection and 2D/3D pose estimation methods. Since we need to blur some parts of the images to hide identity and nudity in the released dataset, we also present a comparative study of how the baselines have been impacted by the blurring. Results show a large margin for improvement and suggest that the MVOR dataset can be useful to compare the performance of the different methods."}
{"_id": "954d0346b5cdf3f1ec0fcc74ae5aadc5b733adc0", "title": "Beyond engagement analytics: which online mixed-data factors predict student learning outcomes?", "text": "This mixed-method study focuses on online learning analytics, a research area of importance. Several important student attributes and their online activities are examined to identify what seems to work best to predict higher grades. The purpose is to explore the relationships between student grade and key learning engagement factors using a large sample from an online undergraduate business course at an accredited American university (n\u00a0=\u00a0228). Recent studies have discounted the ability to predict student learning outcomes from big data analytics but a few significant indicators have been found by some researchers. Current studies tend to use quantitative factors in learning analytics to forecast outcomes. This study extends that work by testing the common quantitative predictors of learning outcome, but qualitative data is also examined to triangulate the evidence. Pre and post testing of information technology understanding is done at the beginning of the course. First quantitative data is collected, and depending on the hypothesis test results, qualitative data is collected and analyzed with text analytics to uncover patterns. Moodle engagement analytics indicators are tested as predictors in the model. Data is also taken from the Moodle system logs. Qualitative data is collected from student reflection essays. The result was a significant General Linear Model with four online interaction predictors that captured 77.5\u00a0% of grade variance in an undergraduate business course."}
{"_id": "483b94374944293d2a6d36cc1c97f0544ce3c79c", "title": "Which Hotel attributes Matter ? A review of previous and a framework for future research", "text": "A lot of effort has been made in the last decades to reveal, which hotel attributes guest care about. Due to the high costs that are typically involved with investments in the hotel industry, it makes a lot of sense to study, which product components the travellers appreciate. This study reveals that hotel attribute research turns out to be a wide and extremely heterogeneous field of research. The authors review empirical studies investigating the importance of hotel attributes, provide attribute rankings and suggest a framework for past and future research projects in the field, based on the dimensions \u201cfocus of research\u201d, \u201drisk versus utility\u201d and \u201ctrade-off versus no trade-off questioning situation\u201d."}
{"_id": "54c377407242e74e7c08e4a49e61837fd9ce2b25", "title": "On Power Quality of Variable-Speed Constant-Frequency Aircraft Electric Power Systems", "text": "In this paper, a comprehensive model of the variable-speed constant-frequency aircraft electric power system is developed to study the performance characteristics of the system and, in particular, the system power quality over a frequency range of operation of 400 Hz to 800 Hz. A fully controlled active power filter is designed to regulate the load terminal voltage, eliminate harmonics, correct supply power factor, and minimize the effect of unbalanced loads. The control algorithm for the active power filter (APF) is based on the perfect harmonic cancellation method which provides a three-phase reference supply current in phase with its positive-sequence fundamental voltage. The proposed APF is integrated into the model of a 90-kVA advanced aircraft electric power system under VSCF operation. The performance characteristics of the system are studied with the frequency of the generator's output voltage varied from 400 Hz to 800 Hz under different loading conditions. Several case studies are presented including dc loads as well as passive and dynamic ac loads. The power quality characteristics of the studied aircraft electric power system with the proposed active filter are shown to be in compliance with the most recent military aircraft electrical standards MIL-STD-704F as well as with the IEEE Std. 519."}
{"_id": "9d1940f843c448cc378214ff6bad3c1279b1911a", "title": "Shape-aware Instance Segmentation", "text": "We address the problem of instance-level semantic segmentation, which aims at jointly detecting, segmenting and classifying every individual object in an image. In this context, existing methods typically propose candidate objects, usually as bounding boxes, and directly predict a binary mask within each such proposal. As a consequence, they cannot recover from errors in the object candidate generation process, such as too small or shifted boxes. In this paper, we introduce a novel object segment representation based on the distance transform of the object masks. We then design an object mask network (OMN) with a new residual-deconvolution architecture that infers such a representation and decodes it into the final binary object mask. This allows us to predict masks that go beyond the scope of the bounding boxes and are thus robust to inaccurate object candidates. We integrate our OMN into a Multitask Network Cascade framework, and learn the resulting shape-aware instance segmentation (SAIS) network in an end-to-end manner. Our experiments on the PASCAL VOC 2012 and the CityScapes datasets demonstrate the benefits of our approach, which outperforms the state-of-the-art in both object proposal generation and instance segmentation."}
{"_id": "4d0130e95925b00a2d1ecba931a1a05a74370f3f", "title": "RT-Mover: a rough terrain mobile robot with a simple leg-wheel hybrid mechanism", "text": "There is a strong demand in many fields for practical robots, such as a porter robot and a personal mobility robot, that can move over rough terrain while carrying a load horizontally. We have developed a robot, called RT-Mover, which shows adequate mobility performance on targeted types of rough terrain. It has four drivable wheels and two leg-like axles but only five active shafts. A strength of this robot is that it realizes both a leg mode and a wheel mode in a simple mechanism. In this paper, the mechanical design concept is discussed. With an emphasis on minimizing the number of drive shafts, a mechanism is designed for a four-wheeled mobile body that is widely used in practical locomotive machinery. Also, strategies for moving on rough terrain are proposed. The kinematics, stability, and control of RT-Mover are also described in detail. Some typical cases of rough terrain for wheel mode and leg mode are selected, and the robot\u2019s ability of locomotion is assessed through simulations and experiments. In each case, the robot is able to move over rough terrain while maintaining the horizontal orientation of its platform."}
{"_id": "0651f838d918586ec1df66450c3d324602c9f59e", "title": "Privacy attacks in social media using photo tagging networks: a case study with Facebook", "text": "Social-networking users unknowingly reveal certain kinds of personal information that malicious attackers could profit from to perpetrate significant privacy breaches. This paper quantitatively demonstrates how the simple act of tagging pictures on the social-networking site of Facebook could reveal private user attributes that are extremely sensitive. Our results suggest that photo tags can be used to help predicting some, but not all, of the analyzed attributes. We believe our analysis make users aware of significant breaches of their privacy and could inform the design of new privacy-preserving ways of tagging pictures on social-networking sites."}
{"_id": "4c9774c5e57a4b7535eb19f6584f75c8b9c2cdcc", "title": "A framework based on RSA and AES encryption algorithms for cloud computing services", "text": "Cloud computing is an emerging computing model in which resources of the computing communications are provided as services over the Internet. Privacy and security of cloud storage services are very important and become a challenge in cloud computing due to loss of control over data and its dependence on the cloud computing provider. While there is a huge amount of transferring data in cloud system, the risk of accessing data by attackers raises. Considering the problem of building a secure cloud storage service, current scheme is proposed which is based on combination of RSA and AES encryption methods to share the data among users in a secure cloud system. The proposed method allows providing difficulty for attackers as well as reducing the time of information transmission between user and cloud data storage."}
{"_id": "e645cbd3aaeab56858f1e752677b8792d7377d14", "title": "BCSAT : A Benchmark Corpus for Sentiment Analysis in Telugu Using Word-level Annotations", "text": "The presented work aims at generating a systematically annotated corpus that can support the enhancement of sentiment analysis tasks in Telugu using wordlevel sentiment annotations. From OntoSenseNet, we extracted 11,000 adjectives, 253 adverbs, 8483 verbs and sentiment annotation is being done by language experts. We discuss the methodology followed for the polarity annotations and validate the developed resource. This work aims at developing a benchmark corpus, as an extension to SentiWordNet, and baseline accuracy for a model where lexeme annotations are applied for sentiment predictions. The fundamental aim of this paper is to validate and study the possibility of utilizing machine learning algorithms, word-level sentiment annotations in the task of automated sentiment identification. Furthermore, accuracy is improved by annotating the bi-grams extracted from the target corpus."}
{"_id": "40f5430ef326838d5b7ce018f62e51c188d7cdd7", "title": "Effects of quiz-style information presentation on user understanding", "text": "This paper proposes quiz-style information presentation for interactive systems as a means to improve user understanding in educational tasks. Since the nature of quizzes can highly motivate users to stay voluntarily engaged in the interaction and keep their attention on receiving information, it is expected that information presented as quizzes can be better understood by users. To verify the effectiveness of the approach, we implemented read-out and quiz systems and performed comparison experiments using human subjects. In the task of memorizing biographical facts, the results showed that user understanding for the quiz system was significantly better than that for the read-out system, and that the subjects were more willing to use the quiz system despite the long duration of the quizzes. This indicates that quiz-style information presentation promotes engagement in the interaction with the system, leading to the improved user understanding."}
{"_id": "24bbff699187ad6bf37e447627de1ca25267a770", "title": "Research on continuous auditing: A bibliometric analysis", "text": "This paper presents the results of a bibliometric study about the evolution of research on Continuous Auditing. This study has as main motivation to find reasons for the very slow evolvement of research on this topic. In addition, Continuous Auditing is one of the features of the emerging concept of Continuous Assurance. Thus, considering that Continuous Assurance represents numerous advantages for the organizational performance, this study also intends to understand if there is a relation between the evolution of research on Continuous Auditing and the still very low maturity levels of continuous assurance solutions. This study shows that the number of publications is considerably reduced and that the slow growth of research on Continuous Auditing may be contributing to the lack of maturity of Continuous Assurance."}
{"_id": "abd0478f1572d8ecdca4738df3e4b3bd116d7b42", "title": "Dispositional Factors in Internet Use: Personality Versus Cognitive Style", "text": "This study directly tests the effect of personality and cognitive style on three measures of Internet use. The results support the use of personality\u2014but not cognitive style\u2014as an antecedent variable. After controlling for computer anxiety, selfefficacy, and gender, including the \u201cBig Five\u201d personality factors in the analysis significantly adds to the predictive capabilities of the dependent variables. Including cognitive style does not. The results are discussed in terms of the role of personality and cognitive style in models of technology adoption and use."}
{"_id": "a64f48f9810c4788236f31dc2a9b87dd02977c3e", "title": "Voice quality evaluation of recent open source codecs", "text": "\u2022 Averaged frequency responses at different 16, and 24 kHz. External sampling rate does not tell the internal sampling rate. \u2022 Supported signal bandwidth depends on bitrate, but no documentation exists bandwidths were found out expirementally \u2022 We tested 32kHz sampling with 16ms frame length. There is also 8 ms lookahead. \u2022 The results show that bitrates below 32 kbit/s are not useable for voice applications.The voice quality is much worse than with SILK or bitrates shown in steady state"}
{"_id": "76eea8436996c7e9c8f7ad3dac34a12865edab24", "title": "Chain Replication for Supporting High Throughput and Availability", "text": "Chain replication is a new approach to coordinating clusters of fail-stop storage servers. The approach is intended for supporting large-scale storage services that exhibit high throughput and availability without sacrificing strong consistency guarantees. Besides outlining the chain replication protocols themselves, simulation experiments explore the performance characteristics of a prototype implementation. Throughput, availability, and several objectplacement strategies (including schemes based on distributed hash table routing) are discussed."}
{"_id": "522a7178e501018e442c03f4b93e29f62ae1eb64", "title": "Deep Voice 2 : Multi-Speaker Neural Text-to-Speech", "text": "We introduce a technique for augmenting neural text-to-speech (TTS) with lowdimensional trainable speaker embeddings to generate different voices from a single model. As a starting point, we show improvements over the two state-ofthe-art approaches for single-speaker neural TTS: Deep Voice 1 and Tacotron. We introduce Deep Voice 2, which is based on a similar pipeline with Deep Voice 1, but constructed with higher performance building blocks and demonstrates a significant audio quality improvement over Deep Voice 1. We improve Tacotron by introducing a post-processing neural vocoder, and demonstrate a significant audio quality improvement. We then demonstrate our technique for multi-speaker speech synthesis for both Deep Voice 2 and Tacotron on two multi-speaker TTS datasets. We show that a single neural TTS system can learn hundreds of unique voices from less than half an hour of data per speaker, while achieving high audio quality synthesis and preserving the speaker identities almost perfectly."}
{"_id": "ccbcaf528a222d04f40fd03b3cb89d5f78acbdc6", "title": "A Literature Review on Kidney Disease Prediction using Data Mining Classification Technique", "text": "-The huge amounts of data generated by healthcare transactions are too complex and voluminous to be processed and analyzed by traditional methods. Data mining provides the methodology and technology to transform these mounds of data into useful information for decision making. The Healthcare industry is generally \u201cinformation rich\u201d, which is not feasible to handle manually. These large amounts of data are very important in the field of data mining to extract useful information and generate relationships amongst the attributes. Kidney disease is a complex task which requires much experience and knowledge. Kidney disease is a silent killer in developed countries and one of the main contributors to disease burden in developing countries. In the health care industry the data mining is mainly used for predicting the diseases from the datasets. The Data mining classification techniques, namely Decision trees, ANN, Naive Bayes are analyzed on Kidney disease data set. Keywords--Data Mining, Kidney Disease, Decision tree, Naive Bayes, ANN, K-NN, SVM, Rough Set, Logistic Regression, Genetic Algorithms (GAs) / Evolutionary Programming (EP), Clustering"}
{"_id": "30f46fdfe1fdab60bdecaa27aaa94526dfd87ac1", "title": "Real-Time 3D Reconstruction and 6-DoF Tracking with an Event Camera", "text": "We propose a method which can perform real-time 3D reconstruction from a single hand-held event camera with no additional sensing, and works in unstructured scenes of which it has no prior knowledge. It is based on three decoupled probabilistic filters, each estimating 6-DoF camera motion, scene logarithmic (log) intensity gradient and scene inverse depth relative to a keyframe, and we build a real-time graph of these to track and model over an extended local workspace. We also upgrade the gradient estimate for each keyframe into an intensity image, allowing us to recover a real-time video-like intensity sequence with spatial and temporal super-resolution from the low bit-rate input event stream. To the best of our knowledge, this is the first algorithm provably able to track a general 6D motion along with reconstruction of arbitrary structure including its intensity and the reconstruction of grayscale video that exclusively relies on event camera data."}
{"_id": "892fea843d58852a835f38087bc3b5102123f567", "title": "Multiple ramp schemes", "text": "A (t; k; n; S) ramp scheme is a protocol to distribute a secret s chosen inS among a setP of n participants in such a way that: 1) sets of participants of cardinality greater than or equal to k can reconstruct the secrets; 2) sets of participants of cardinality less than or equal tot have no information on s, whereas 3) sets of participants of cardinality greater than t and less thank might have \u201csome\u201d information on s. In this correspondence we analyze multiple ramp schemes, which are protocols to share many secrets among a set P of participants, using different ramp schemes. In particular, we prove a tight lower bound on the size of the shares held by each participant and on the dealer\u2019s randomness in multiple ramp schemes."}
{"_id": "ce148df015fc488ac6fc022dac3da53c141e0ea8", "title": "Protein function in precision medicine: deep understanding with machine learning.", "text": "Precision medicine and personalized health efforts propose leveraging complex molecular, medical and family history, along with other types of personal data toward better life. We argue that this ambitious objective will require advanced and specialized machine learning solutions. Simply skimming some low-hanging results off the data wealth might have limited potential. Instead, we need to better understand all parts of the system to define medically relevant causes and effects: how do particular sequence variants affect particular proteins and pathways? How do these effects, in turn, cause the health or disease-related phenotype? Toward this end, deeper understanding will not simply diffuse from deeper machine learning, but from more explicit focus on understanding protein function, context-specific protein interaction networks, and impact of variation on both."}
{"_id": "38d34b02820020aac7f060e84bb6c01b4dee665a", "title": "The impact of design management and process management on quality : an empirical investigation", "text": "\u017d . Design management and process management are two important elements of total quality management TQM implementation. They are drastically different in their targets of improvement, visibility, and techniques. In this paper, we establish a framework for identifying the synergistic linkages of design and process management to the operational quality \u017d . \u017d . outcomes during the manufacturing process internal quality and upon the field usage of the products external quality . Through a study of quality practices in 418 manufacturing plants from multiple industries, we empirically demonstrate that both design and process management efforts have an equal positive impact on internal quality outcomes such as scrap, rework, defects, performance, and external quality outcomes such as complaints, warranty, litigation, market share. A detailed contingency analysis shows that the proposed model of synergies between design and process management holds true for large and small firms; for firms with different levels of TQM experience; and in different industries with varying levels of competition, logistical complexity of production, or production process characteristics. Finally, the results also suggest that organizational learning enables mature TQM firms to implement both design and process efforts more rigorously and their synergy helps these firms to attain better quality outcomes. These findings indicate that, to attain superior quality outcomes, firms need to balance their design and process management efforts and persevere with long-term \u017d . implementation of these efforts. Because the study spans all of the manufacturing sectors SIC 20 through 39 , these conclusions should help firms in any industry revisit their priorities in terms of the relative efforts in design management and process management. q 2000 Elsevier Science B.V. All rights reserved."}
{"_id": "b09b43cacd45fd922f7f85b1f8514cb4a775ca5d", "title": "A Web Service Discovery Approach Based on Common Topic Groups Extraction", "text": "Web services have attracted much attention from distributed application designers and developers because of their roles in abstraction and interoperability among heterogeneous software systems, and a growing number of distributed software applications have been published as Web services on the Internet. Faced with the increasing numbers of Web services and service users, researchers in the services computing field have attempted to address a challenging issue, i.e., how to quickly find the suitable ones according to user queries. Many previous studies have been reported towards this direction. In this paper, a novel Web service discovery approach based on topic models is presented. The proposed approach mines common topic groups from the service-topic distribution matrix generated by topic modeling, and the extracted common topic groups can then be leveraged to match user queries to relevant Web services, so as to make a better trade-off between the accuracy of service discovery and the number of candidate Web services. Experiment results conducted on two publicly-available data sets demonstrate that, compared with several widely used approaches, the proposed approach can maintain the performance of service discovery at an elevated level by greatly decreasing the number of candidate Web services, thus leading to faster response time."}
{"_id": "c108437a57bd8f8eaed9e26360ee100074e3f3fc", "title": "Computational Capabilities of Graph Neural Networks", "text": "In this paper, we will consider the approximation properties of a recently introduced neural network model called graph neural network (GNN), which can be used to process-structured data inputs, e.g., acyclic graphs, cyclic graphs, and directed or undirected graphs. This class of neural networks implements a function tau(G, n) isin R m that maps a graph G and one of its nodes n onto an m-dimensional Euclidean space. We characterize the functions that can be approximated by GNNs, in probability, up to any prescribed degree of precision. This set contains the maps that satisfy a property called preservation of the unfolding equivalence, and includes most of the practically useful functions on graphs; the only known exception is when the input graph contains particular patterns of symmetries when unfolding equivalence may not be preserved. The result can be considered an extension of the universal approximation property established for the classic feedforward neural networks (FNNs). Some experimental examples are used to show the computational capabilities of the proposed model."}
{"_id": "28d3ec156472c35ea8e1b7acad969b725111fe56", "title": "Hipikat: a project memory for software development", "text": "Sociological and technical difficulties, such as a lack of informal encounters, can make it difficult for new members of noncollocated software development teams to learn from their more experienced colleagues. To address this situation, we have developed a tool, named Hipikat that provides developers with efficient and effective access to the group memory for a software development project that is implicitly formed by all of the artifacts produced during the development. This project memory is built automatically with little or no change to existing work practices. After describing the Hipikat tool, we present two studies investigating Hipikat's usefulness in software modification tasks. One study evaluated the usefulness of Hipikat's recommendations on a sample of 20 modification tasks performed on the Eclipse Java IDE during the development of release 2.1 of the Eclipse software. We describe the study, present quantitative measures of Hipikat's performance, and describe in detail three cases that illustrate a range of issues that we have identified in the results. In the other study, we evaluated whether software developers who are new to a project can benefit from the artifacts that Hipikat recommends from the project memory. We describe the study, present qualitative observations, and suggest implications of using project memory as a learning aid for project newcomers."}
{"_id": "334c4806912d851ef2117e67728cfa624dbec9a3", "title": "A Metrics Suite for Object Oriented Design", "text": "Given the central role that software development plays in the delivery and application of information technology, managers are increasingly focusing on process improvement in the software development area. This demand has spurred the provision of a number of new and/or improved approaches to software development, with perhaps the most prominent being object-orientation (OO). In addition, the focus on process improvement has increased the demand for software measures, or metrics with which to manage the process. The need for such metrics is particularly acute when an organization is adopting a new technology for which established practices have yet to be developed. This research addresses these needs through the development and implementation of a new suite of metrics for OO design. Metrics developed in previous research, while contributing to the field\u2019s understanding of software development processes, have generally been subject to serious criticisms, including the lack of a theoretical base. Following Wand and Weber, the theoretical base chosen for the metrics was the ontology of Bunge. Six design metrics are developed, and then analytically evaluated against Weyuker\u2019s proposed set of measurement principles. An automated data collection tool was then developed and implemented to collect an empirical sample of these metrics at two field sites in order to demonstrate their feasibility and suggest ways in which managers may use these metrics for process improvement. \u201cA Metrics Suite For Object Oriented Design\u201d Shyam R. Chidamber Chris F. Kemerer Index Terms CR"}
{"_id": "383ca85aaca9f306ea7ae04fb0b6b76f1e393395", "title": "Two case studies of open source software development: Apache and Mozilla", "text": "According to its proponents, open source style software development has the capacity to compete successfully, and perhaps in many cases displace, traditional commercial development methods. In order to begin investigating such claims, we examine data from two major open source projects, the Apache web server and the Mozilla browser. By using email archives of source code change history and problem reports we quantify aspects of developer participation, core team size, code ownership, productivity, defect density, and problem resolution intervals for these OSS projects. We develop several hypotheses by comparing the Apache project with several commercial projects. We then test and refine several of these hypotheses, based on an analysis of Mozilla data. We conclude with thoughts about the prospects for high-performance commercial/open source process hybrids."}
{"_id": "3ea9cd35f39e8c128f39f13148e91466715f4ee2", "title": "A File Comparison Program", "text": "A file comparison program produces a list of differences between two files. These differences can be couched in terms of lines, e.g. by telling which lines must be inserted, deleted or moved to convert the first file to the second. Alternatively, the list of differences can identify individual bytes. Byte-oriented comparisons are useful with non-text files, such as compiled programs, that are not divided into lines. The approach adopted here is to generate only instructions to insert or delete entire lines. Since lines are treated as indivisible objects, files can be treated as containing lines consisting of a single symbol. In other words, an n-line file is modelled by a string of n symbols. In more formal terms, the file comparison problem can be rephrased as follows. The edit distance between two strings of symbols is the length of a shortest sequence of insertions and deletions that will convert the first string to the second. T h e goal, then, is to write a program that computes the edit distance between two arbitrary strings of symbols. In addition, the program must explicitly produce a shortest possible edit script (i.e. sequence of edit commands) for the given strings. Other approaches have been tried. For example, Tichy ' discusses a file-comparison tool that determines how one file can be constructed from another by copying blocks of lines and appending lines. However, the ability to economically generate shortestpossible edit scripts depends critically on the repertoire of instructions that are allowed in the scripts.2 File comparison algorithms have a number of potential uses beside merely producing a set of edit commands to be read by someone trying to understand the evolution of a program or document. For example, the edit scripts might be text editor instructions that are saved to avoid the expense of storing nearly identical files. Rather than storing"}
{"_id": "508119a50e3d4e8b7116c1b56a002de492b2270b", "title": "Object Detection Featuring 3D Audio Localization for Microsoft HoloLens - A Deep Learning based Sensor Substitution Approach for the Blind", "text": "Finding basic objects on a daily basis is a difficult but common task for blind people. This paper demonstrates the implementation of a wearable, deep learning backed, object detection approach in the context of visual impairment or blindness. The prototype aims to substitute the impaired eye of the user and replace it with technical sensors. By scanning its surroundings, the prototype provides a situational overview of objects around the device. Object detection has been implemented using a near real-time, deep learning model named YOLOv2. The model supports the detection of 9000 objects. The prototype can display and read out the name of augmented objects which can be selected by voice commands and used as directional guides for the user, using 3D audio feedback. A distance announcement of a selected object is derived from the HoloLens\u2019s spatial model. The wearable solution offers the opportunity to efficiently locate objects to support orientation without extensive training of the user. Preliminary evaluation covered the detection rate of speech recognition and the response times of the server."}
{"_id": "c0a39b1b64100b929ec77d33232513ec72089a2e", "title": "English as a Formal Specification Language", "text": "PENG is a computer-processable controlled natural language designed for writing unambiguous and precise specifications. PENG covers a strict subset of standard English and is precisely defined by a controlled grammar and a controlled lexicon. In contrast to other controlled languages, the author does not need to know the grammatical restrictions explicitly. ECOLE, a look-ahead text editor, indicates the restrictions while the specification is written. The controlled lexicon contains domain-specific content words that can be defined by the author on the fly and predefined function words. Specifications written in PENG can be deterministically translated into discourse representations structures to cope with anaphora and presuppositions and also into first-order predicate logic. To test the formal properties of PENG, we reformulated Schubert\u2019s steamroller puzzle in PENG, translated the resulting specification via discourse representation structures into first-order predicate logic with equality, and proved the steamroller\u2019s conclusion with OTTER, a standard theorem prover."}
{"_id": "f9cf246008d745f883914d925567bb36df806613", "title": "Automatic Retraction and Full-Cycle Operation for a Class of Airborne Wind Energy Generators", "text": "Airborne wind energy systems aim to harvest the power of winds blowing at altitudes higher than what conventional wind turbines reach. They employ a tethered flying structure, usually a wing, and exploit the aerodynamic lift to produce electrical power. In the case of ground-based systems, where the traction force on the tether is used to drive a generator on the ground, a two-phase power cycle is carried out: one phase to produce power, where the tether is reeled out under high traction force, and a second phase where the tether is recoiled under lower load. The problem of controlling a tethered wing in this second phase, the retraction phase, is addressed here, by proposing two possible control strategies. Theoretical analyses, numerical simulations, and experimental results are presented to show the performance of the two approaches. Finally, the experimental results of complete autonomous power generation cycles are reported and compared with those in first-principle models."}
{"_id": "53c544145d2fe5fe8c44584f44f36f74393b983e", "title": "Simulation of object and human skin formations in a grasping task", "text": "This paper addresses the problem of simulating deformations between objects and the hand of a synthetic character during a grasping process. A numerical method based on finite element theory allows us to take into account the active forces of the fingers on the object and the reactive forces of the object on the fingers. The method improves control of synthetic human behavior in a task level animation system because it provides information about the environment of a synthetic human and so can be compared to the sense of touch. Finite element theory currently used in engineering seems one of the best approaches for modeling both elastic and plastic deformation of objects, as well as shocks with or without penetration between deformable objects. We show that intrinsic properties of the method based on composition/decomposition of elements have an impact in computer animation. We also state that the use of the same method for modeling both objects and human bodies improves the modeling both objects and human bodies improves the modeling of the contacts between them. Moreover, it allows a realistic envelope deformation of the human fingers comparable to existing methods. To show what we can expect from the method, we apply it to the grasping and pressing of a ball. Our solution to the grasping problem is based on displacement commands instead of force commands used in robotics and human behavior."}
{"_id": "0eaa75861d9e17f2c95bd3f80f48db95bf68a50c", "title": "Electromigration and its impact on physical design in future technologies", "text": "Electromigration (EM) is one of the key concerns going forward for interconnect reliability in integrated circuit (IC) design. Although analog designers have been aware of the EM problem for some time, digital circuits are also being affected now. This talk addresses basic design issues and their effects on electromigration during interconnect physical design. The intention is to increase current density limits in the interconnect by adopting electromigration-inhibiting measures, such as short-length and reservoir effects. Exploitation of these effects at the layout stage can provide partial relief of EM concerns in IC design flows in future."}
{"_id": "24ff5027e7042aeead47ef3071f1a023243078bb", "title": "Optimizing Space-Air-Ground Integrated Networks by Artificial Intelligence", "text": "It is widely acknowledged that the development of traditional terrestrial communication technologies cannot provide all users with fair and high quality services due to the scarce network resource and limited coverage areas. To complement the terrestrial connection, especially for users in rural, disasterstricken, or other difficult-to-serve areas, satellites, unmanned aerial vehicles (UAVs), and balloons have been utilized to relay the communication signals. On the basis, Space-Air-Ground Integrated Networks (SAGINs) have been proposed to improve the users\u2019 Quality of Experience (QoE). However, compared with existing networks such as ad hoc networks and cellular networks, the SAGINs are much more complex due to the various characteristics of three network segments. To improve the performance of SAGINs, researchers are facing many unprecedented challenges. In this paper, we propose the Artificial Intelligence (AI) technique to optimize the SAGINs, as the AI technique has shown its predominant advantages in many applications. We first analyze several main challenges of SAGINs and explain how these problems can be solved by AI. Then, we consider the satellite traffic balance as an example and propose a deep learning based method to improve the traffic control performance. Simulation results evaluate that the deep learning technique can be an efficient tool to improve the performance of SAGINs."}
{"_id": "2c6835e8bdb8c70a9c3aa9bd2578b01dd1b93114", "title": "RY-ON WITH CLOTHING REGION", "text": "We propose a virtual try-on method based on generative adversarial networks (GANs). By considering clothing regions, this method enables us to reflect the pattern of clothes better than Conditional Analogy GAN (CAGAN), an existing virtual try-on method based on GANs. Our method first obtains the clothing region on a person by using a human parsing model learned with a large-scale dataset. Next, using the acquired region, the clothing part is removed from a human image. A desired clothing image is added to the blank area. The network learns how to apply new clothing to the area of people\u2019s clothing. Results demonstrate the possibility of reflecting a clothing pattern. Furthermore, an image of the clothes that the person is originally wearing becomes unnecessary during testing. In experiments, we generate images using images gathered from Zaland (a fashion E-commerce site)."}
{"_id": "38a70884a93dd6912404519a779cc497965feff1", "title": "Stereotypes of individuals with learning disabilities: views of college students with and without learning disabilities.", "text": "To explore possible reasons for low self-identification rates among undergraduates with learning disabilities (LD), we asked students (38 with LD, 100 without LD) attending two large, public, research-intensive universities to respond to a questionnaire designed to assess stereotypes about individuals with LD and conceptions of ability. Responses were coded into six categories of stereotypes about LD (low intelligence, compensation possible, process deficit, nonspecific insurmountable condition, working the system, and other), and into three categories of conceptions of intelligence (entity, incremental, neither). Consistent with past findings, the most frequent metastereotype reported by individuals in both groups related to generally low ability. In addition, students with LD were more likely to espouse views of intelligence as a fixed trait. As a whole, the study's findings have implications for our understanding of factors that influence self-identification and self-advocacy at the postsecondary level."}
{"_id": "cc6dc5a3e8a18a0aaab7cbe8cee22bf3ac92f0bf", "title": "Concurrency control methods in distributed database: A review and comparison", "text": "In the last years, remarkable improvements have been made in the ability of distributed database systems performance. A distributed database is composed of some sites which are connected to each other through network connections. In this system, if good harmonization isn't made between different transactions, it may result in database incoherence. Nowadays, because of the complexity of many sites and their connection methods, it is difficult to extend different models in distributed database serially. The principle goal of concurrency control in distributed database is to ensure not interfering in accessibility of common database by different sites. Different concurrency control algorithms have been suggested to use in distributed database systems. In this paper, some available methods have been introduced and compared for concurrency control in distributed database."}
{"_id": "45e2e2a327ea696411b212492b053fd328963cc3", "title": "Health App Use Among US Mobile Phone Users: Analysis of Trends by Chronic Disease Status", "text": "BACKGROUND\nMobile apps hold promise for serving as a lifestyle intervention in public health to promote wellness and attenuate chronic conditions, yet little is known about how individuals with chronic illness use or perceive mobile apps.\n\n\nOBJECTIVE\nThe objective of this study was to explore behaviors and perceptions about mobile phone-based apps for health among individuals with chronic conditions.\n\n\nMETHODS\nData were collected from a national cross-sectional survey of 1604 mobile phone users in the United States that assessed mHealth use, beliefs, and preferences. This study examined health app use, reason for download, and perceived efficacy by chronic condition.\n\n\nRESULTS\nAmong participants, having between 1 and 5 apps was reported by 38.9% (314/807) of respondents without a condition and by 6.6% (24/364) of respondents with hypertension. Use of health apps was reported 2 times or more per day by 21.3% (172/807) of respondents without a condition, 2.7% (10/364) with hypertension, 13.1% (26/198) with obesity, 12.3% (20/163) with diabetes, 12.0% (32/267) with depression, and 16.6% (53/319) with high cholesterol. Results of the logistic regression did not indicate a significant difference in health app download between individuals with and without chronic conditions (P>.05). Compared with individuals with poor health, health app download was more likely among those with self-reported very good health (odds ratio [OR] 3.80, 95% CI 2.38-6.09, P<.001) and excellent health (OR 4.77, 95% CI 2.70-8.42, P<.001). Similarly, compared with individuals who report never or rarely engaging in physical activity, health app download was more likely among those who report exercise 1 day per week (OR 2.47, 95% CI 1.6-3.83, P<.001), 2 days per week (OR 4.77, 95% CI 3.27-6.94, P<.001), 3 to 4 days per week (OR 5.00, 95% CI 3.52-7.10, P<.001), and 5 to 7 days per week (OR 4.64, 95% CI 3.11-6.92, P<.001). All logistic regression results controlled for age, sex, and race or ethnicity.\n\n\nCONCLUSIONS\nResults from this study suggest that individuals with poor self-reported health and low rates of physical activity, arguably those who stand to benefit most from health apps, were least likely to report download and use these health tools."}
{"_id": "71795f9f511f6948dd67aff7e9725c08ff1a4c94", "title": "Hadoop+Aparapi: Making heterogenous MapReduce programming easier", "text": "Lately, programmers have started to take advantage of GPU capabilities of cloud-based machines. Using the GPUs can decrease the number of nodes required to perform the computation by increasing the productivity per node. We combine Hadoop, a widely-used MapReduce framework, with Aparapi, a new Java-to-OpenCL conversion tool from AMD. We propose an easy-to-use API which allows easy implementation of MapReduce algorithms that make use of the GPU. Our API improves upon Hadoop by further hiding the complexity of GPU programming, thus allowing the programmer to concentrate on her algorithm. We also propose an accompanying refactoring that allows the programmer to specify the GPU part of their map computation by using very lightweight annotation."}
{"_id": "8cfb12304856268ee438ccb16e4b87960c7349e0", "title": "A Review on Internet of Things (IoT)", "text": "Internet, a revolutionary invention, is always transforming into some new kind of hardware and software making it unavoidable for anyone. The form of communication that we see now is either human-human or human-device, but the Internet of Things (IoT) promises a great future for the internet where the type of communication is machine-machine (M2M). This paper aims to provide a comprehensive overview of the IoT scenario and reviews its enabling technologies and the sensor networks. Also, it describes a six-layered architecture of IoT and points out the related key challenges."}
{"_id": "a39faa00248abb3984317f2d6830f485cb5e1a0d", "title": "of Analytical Chemistry Wearable and Implantable Sensors for Biomedical Applications Hatice", "text": "Mobile health technologies offer great promise for reducing healthcare costs and improving patient care. Wearable and implantable technologies are contributing to a transformation in the mobile health era in terms of improving healthcare and health outcomes and providing real-time guidance on improved health management and tracking. In this article, we review the biomedical applications of wearable and implantable medical devices and sensors, ranging from monitoring to prevention of diseases, as well as the materials used in the fabrication of these devices and the standards for wireless medical devices and mobile applications. We conclude by discussing some of the technical challenges in wearable and implantable technology and possible solutions for overcoming these difficulties."}
{"_id": "e749e6311e25eb8081672742e78c427ce5979552", "title": "Effective application of process improvement patterns to business processes", "text": "Improving the operational effectiveness and efficiency of processes is a fundamental task of business process management (BPM). There exist many proposals of process improvement patterns (PIPs) as practices that aim at supporting this goal. Selecting and implementing relevant PIPs are therefore an important prerequisite for establishing process-aware information systems in enterprises. Nevertheless, there is still a gap regarding the validation of PIPs with respect to their actual business value for a specific application scenario before implementation investments are incurred. Based on empirical research as well as experiences from BPM projects, this paper proposes a method to tackle this challenge. Our approach toward the assessment of process improvement patterns considers real-world constraints such as the role of senior stakeholders or the cost of adapting available IT systems. In addition, it outlines process improvement potentials that arise from the information technology infrastructure available to organizations, particularly regarding the combination of enterprise resource planning with business process intelligence. Our approach is illustrated along a real-world business process from human resource management. The latter covers a transactional volume of about 29,000 process instances over a period of 1\u00a0year. Overall, our approach enables both practitioners and researchers to reasonably assess PIPs before taking any process implementation decision."}
{"_id": "d5ecb372f6cbdfb52588fbb4a54be21d510009d0", "title": "A Study of Birth Order , Academic Performance , and Personality", "text": "This study aimed to investigate birth order effect on personality and academic performance amongst 120 Malaysians. Besides, it also aimed to examine the relationship between personality and academic achievement. Thirty firstborns, 30 middle children, 30 lastborns, and 30 only children, who shared the mean age of 20.0 years (SD= 1.85), were recruited into this study. Participants\u2019 Sijil Pelajaran Malaysia (SPM) results were recorded and their personality was assessed by Ten Item Personality Inventory (TIPI). Results indicated that participants of different birth positions did not differ significantly in terms of personality and academic performance. However, Pearson\u2019s correlation showed that extraversion correlated positively with academic performance. Keywordsbirth order; personality; academic achievement"}
{"_id": "6193ece762c15b7d8a958dc64c37e858cd873b8a", "title": "A compact printed log-periodic antenna with loaded stub", "text": "A compact printed log-periodic dipole antenna (LPDA) with distributed inductive load has been presented in this paper. By adding a stub on the top of each element, the dimension of the LPAD can be reduced by 60%. The antenna has obtained an impedance bandwidth of 10GHz (8GHz-18GHz). According to the simulation results, the designed structure achieves stable radiation patterns throughout the operating frequency band."}
{"_id": "2d9416485091e6af3619c4bc9323a0887d450c8a", "title": "LSDA: Large Scale Detection Through Adaptation", "text": "A major challenge in scaling object detection is the difficulty of obtaining labeled images for large numbers of categories. Recently, deep convolutional neural networks (CNNs) have emerged as clear winners on object classification benchmarks, in part due to training with 1.2M+ labeled classification images. Unfortunately, only a small fraction of those labels are available for the detection task. It is much cheaper and easier to collect large quantities of image-level labels from search engines than it is to collect detection data and label it with precise bounding boxes. In this paper, we propose Large Scale Detection through Adaptation (LSDA), an algorithm which learns the difference between the two tasks and transfers this knowledge to classifiers for categories without bounding box annotated data, turning them into detectors. Our method has the potential to enable detection for the tens of thousands of categories that lack bounding box annotations, yet have plenty of classification data. Evaluation on the ImageNet LSVRC-2013 detection challenge demonstrates the efficacy of our approach. This algorithm enables us to produce a >7.6K detector by using available classification data from leaf nodes in the ImageNet tree. We additionally demonstrate how to modify our architecture to produce a fast detector (running at 2fps for the 7.6K detector). Models and software are available at lsda.berkeleyvision.org."}
{"_id": "0f28cbfe0674e0af4899d21dd90f6f5d0d5c3f1b", "title": "The Parable of Google Flu: Traps in Big Data Analysis", "text": "In February 2013, Google Flu Trends (GFT) made headlines but not for a reason that Google executives or the creators of the flu tracking system would have hoped. Nature reported that GFT was predicting more than double the proportion of doctor visits for influenza-like illness (ILI) than the Centers for Disease Control and Prevention (CDC), which bases its estimates on surveillance reports from laboratories across the United States (1, 2). This happened despite the fact that GFT was built to predict CDC reports. Given that GFT is often held up as an exemplary use of big data (3, 4), what lessons can we draw from this error?"}
{"_id": "cc43c080340817029fd497536cc9bd39b0a76dd2", "title": "Human Mobility in a Continuum Approach", "text": "Human mobility is investigated using a continuum approach that allows to calculate the probability to observe a trip to any arbitrary region, and the fluxes between any two regions. The considered description offers a general and unified framework, in which previously proposed mobility models like the gravity model, the intervening opportunities model, and the recently introduced radiation model are naturally resulting as special cases. A new form of radiation model is derived and its validity is investigated using observational data offered by commuting trips obtained from the United States census data set, and the mobility fluxes extracted from mobile phone data collected in a western European country. The new modeling paradigm offered by this description suggests that the complex topological features observed in large mobility and transportation networks may be the result of a simple stochastic process taking place on an inhomogeneous landscape."}
{"_id": "15e8961e8f9d1fb5060c3284a5bdcc09f2fc1ba6", "title": "Automated Diagnosis of Glaucoma Using Digital Fundus Images", "text": "Glaucoma is a disease of the optic nerve caused by the increase in the intraocular pressure of the eye. Glaucoma mainly affects the optic disc by increasing the cup size. It can lead to the blindness if it is not detected and treated in proper time. The detection of glaucoma through Optical Coherence Tomography (OCT) and Heidelberg Retinal Tomography (HRT) is very expensive. This paper presents a novel method for glaucoma detection using digital fundus images. Digital image processing techniques, such as preprocessing, morphological operations and thresholding, are widely used for the automatic detection of optic disc, blood vessels and computation of the features. We have extracted features such as cup to disc (c/d) ratio, ratio of the distance between optic disc center and optic nerve head to diameter of the optic disc, and the ratio of blood vessels area in inferior-superior side to area of blood vessel in the nasal-temporal side. These features are validated by classifying the normal and glaucoma images using neural network classifier. The results presented in this paper indicate that the features are clinically significant in the detection of glaucoma. Our system is able to classify the glaucoma automatically with a sensitivity and specificity of 100% and 80% respectively."}
{"_id": "36b3865f944c74c6d782c26dfe7be04ef9664a67", "title": "Conditional Generative Adversarial Networks for Speech Enhancement and Noise-Robust Speaker Verification", "text": "Improving speech system performance in noisy environments remains a challenging task, and speech enhancement (SE) is one of the effective techniques to solve the problem. Motivated by the promising results of generative adversarial networks (GANs) in a variety of image processing tasks, we explore the potential of conditional GANs (cGANs) for SE, and in particular, we make use of the image processing framework proposed by Isola et al. [1] to learn a mapping from the spectrogram of noisy speech to an enhanced counterpart. The SE cGAN consists of two networks, trained in an adversarial manner: a generator that tries to enhance the input noisy spectrogram, and a discriminator that tries to distinguish between enhanced spectrograms provided by the generator and clean ones from the database using the noisy spectrogram as a condition. We evaluate the performance of the cGAN method in terms of perceptual evaluation of speech quality (PESQ), short-time objective intelligibility (STOI), and equal error rate (EER) of speaker verification (an example application). Experimental results show that the cGAN method overall outperforms the classical short-time spectral amplitude minimum mean square error (STSA-MMSE) SE algorithm, and is comparable to a deep neural network-based SE approach (DNN-SE)."}
{"_id": "bb192e0208548831de1475b11859f2114121c662", "title": "Direct and Indirect Discrimination Prevention Methods", "text": "Along with privacy, discrimination is a very import ant issue when considering the legal and ethical aspects of data mini ng. It is more than obvious that most people do not want to be discriminated because of their gender, religion, nationality, age and so on, especially when those att ribu es are used for making decisions about them like giving them a job, loan, insu rance, etc. Discovering such potential biases and eliminating them from the traini ng data without harming their decision-making utility is therefore highly desirab le. For this reason, antidiscrimination techniques including discrimination discovery and prevention have been introduced in data mining. Discrimination prev ention consists of inducing patterns that do not lead to discriminatory decisio n even if the original training datasets are inherently biased. In this chapter, by focusing on the discrimination prevention, we present a taxonomy for classifying a d examining discrimination prevention methods. Then, we introduce a group of p re-processing discrimination prevention methods and specify the different featur es of each approach and how these approaches deal with direct or indirect discr imination. A presentation of metrics used to evaluate the performance of those app ro ches is also given. Finally, we conclude our study by enumerating interesting fu ture directions in this research body."}
{"_id": "1935e0986939ea6ef2afa01eeef94dbfea6fb6da", "title": "Markowitz Revisited: Mean-Variance Models in Financial Portfolio Analysis", "text": "Mean-variance portfolio analysis provided the first quantitative treatment of the tradeoff between profit and risk. We describe in detail the interplay between objective and constraints in a number of single-period variants, including semivariance models. Particular emphasis is laid on avoiding the penalization of overperformance. The results are then used as building blocks in the development and theoretical analysis of multiperiod models based on scenario trees. A key property is the possibility of removing surplus money in future decisions, yielding approximate downside risk minimization."}
{"_id": "1ea03bc28a14ade633d5a7fe9af71328834d45ab", "title": "Federated Learning for Keyword Spotting", "text": "We propose a practical approach based on federated learning to solve out-of-domain issues with continuously running embedded speech-based models such as wake word detectors. We conduct an extensive empirical study of the federated averaging algorithm for the \u201cHey Snips\u201d wake word based on a crowdsourced dataset that mimics a federation of wake word users. We empirically demonstrate that using an adaptive averaging strategy inspired from Adam in place of standard weighted model averaging highly reduces the number of communication rounds required to reach our target performance. The associated upstream communication costs per user are estimated at 8 MB, which is a reasonable in the context of smart home voice assistants. Additionally, the dataset used for these experiments is being open sourced with the aim of fostering further transparent research in the application of federated learning to speech data."}
{"_id": "55ca165fa6091973674b12ea8fa3f1a3a1e50a6d", "title": "Sample-Based Tree Search with Fixed and Adaptive State Abstractions", "text": "Sample-based tree search (SBTS) is an approach to solving Markov decision problems based on constructing a lookahead search tree using random samples from a generative model of the MDP. It encompasses Monte Carlo tree search (MCTS) algorithms like UCT as well as algorithms such as sparse sampling. SBTS is well-suited to solving MDPs with large state spaces due to the relative insensitivity of SBTS algorithms to the size of the state space. The limiting factor in the performance of SBTS tends to be the exponential dependence of sample complexity on the depth of the search tree. The number of samples required to build a search tree is O((|A|B)), where |A| is the number of available actions, B is the number of possible random outcomes of taking an action, and d is the depth of the tree. State abstraction can be used to reduce B by aggregating random outcomes together into abstract states. Recent work has shown that abstract tree search often performs substantially better than tree search conducted in the ground state space. This paper presents a theoretical and empirical evaluation of tree search with both fixed and adaptive state abstractions. We derive a bound on regret due to state abstraction in tree search that decomposes abstraction error into three components arising from properties of the abstraction and the search algorithm. We describe versions of popular SBTS algorithms that use fixed state abstractions, and we introduce the Progressive Abstraction Refinement in Sparse Sampling (PARSS) algorithm, which adapts its abstraction during search. We evaluate PARSS as well as sparse sampling with fixed abstractions on 12 experimental problems, and find that PARSS outperforms search with a fixed abstraction and that search with even highly inaccurate fixed abstractions outperforms search without abstraction. These results establish progressive abstraction refinement as a promising basis for new tree search algorithms, and we propose directions for future work within the progressive refinement framework."}
{"_id": "2ae40898406df0a3732acc54f147c1d377f54e2a", "title": "Query by Committee", "text": "We propose an algorithm called query by commitee, in which a committee of students is trained on the same data set. The next query is chosen according to the principle of maximal disagreement. The algorithm is studied for two toy models: the high-low game and perceptron learning of another perceptron. As the number of queries goes to infinity, the committee algorithm yields asymptotically finite information gain. This leads to generalization error that decreases exponentially with the number of examples. This in marked contrast to learning from randomly chosen inputs, for which the information gain approaches zero and the generalization error decreases with a relatively slow inverse power law. We suggest that asymptotically finite information gain may be an important characteristic of good query algorithms."}
{"_id": "49e85869fa2cbb31e2fd761951d0cdfa741d95f3", "title": "Adaptive Manifold Learning", "text": "Manifold learning algorithms seek to find a low-dimensional parameterization of high-dimensional data. They heavily rely on the notion of what can be considered as local, how accurately the manifold can be approximated locally, and, last but not least, how the local structures can be patched together to produce the global parameterization. In this paper, we develop algorithms that address two key issues in manifold learning: 1) the adaptive selection of the local neighborhood sizes when imposing a connectivity structure on the given set of high-dimensional data points and 2) the adaptive bias reduction in the local low-dimensional embedding by accounting for the variations in the curvature of the manifold as well as its interplay with the sampling density of the data set. We demonstrate the effectiveness of our methods for improving the performance of manifold learning algorithms using both synthetic and real-world data sets."}
{"_id": "bf07d60ba6d6c6b8cabab72dfce06f203782df8f", "title": "Manifold-Learning-Based Feature Extraction for Classification of Hyperspectral Data: A Review of Advances in Manifold Learning", "text": "Advances in hyperspectral sensing provide new capability for characterizing spectral signatures in a wide range of physical and biological systems, while inspiring new methods for extracting information from these data. HSI data often lie on sparse, nonlinear manifolds whose geometric and topological structures can be exploited via manifold-learning techniques. In this article, we focused on demonstrating the opportunities provided by manifold learning for classification of remotely sensed data. However, limitations and opportunities remain both for research and applications. Although these methods have been demonstrated to mitigate the impact of physical effects that affect electromagnetic energy traversing the atmosphere and reflecting from a target, nonlinearities are not always exhibited in the data, particularly at lower spatial resolutions, so users should always evaluate the inherent nonlinearity in the data. Manifold learning is data driven, and as such, results are strongly dependent on the characteristics of the data, and one method will not consistently provide the best results. Nonlinear manifold-learning methods require parameter tuning, although experimental results are typically stable over a range of values, and have higher computational overhead than linear methods, which is particularly relevant for large-scale remote sensing data sets. Opportunities for advancing manifold learning also exist for analysis of hyperspectral and multisource remotely sensed data. Manifolds are assumed to be inherently smooth, an assumption that some data sets may violate, and data often contain classes whose spectra are distinctly different, resulting in multiple manifolds or submanifolds that cannot be readily integrated with a single manifold representation. Developing appropriate characterizations that exploit the unique characteristics of these submanifolds for a particular data set is an open research problem for which hierarchical manifold structures appear to have merit. To date, most work in manifold learning has focused on feature extraction from single images, assuming stationarity across the scene. Research is also needed in joint exploitation of global and local embedding methods in dynamic, multitemporal environments and integration with semisupervised and active learning."}
{"_id": "01996726f44253807537cec68393f1fce6a9cafa", "title": "Stochastic Neighbor Embedding", "text": "We describe a probabilistic approach to the task of placing objects, described by high-dimensional vectors or by pairwise dissimilarities, in a low-dimensional space in a way that preserves neighbor identities. A Gaussian is centered on each object in the high-dimensional space and the densities under this Gaussian (or the given dissimilarities) are used to define a probability distribution over all the potential neighbors of the object. The aim of the embedding is to approximate this distribution as well as possible when the same operation is performed on the low-dimensional \u201cimages\u201d of the objects. A natural cost function is a sum of Kullback-Leibler divergences, one per object, which leads to a simple gradient for adjusting the positions of the low-dimensional images. Unlike other dimensionality reduction methods, this probabilistic framework makes it easy to represent each object by a mixture of widely separated low-dimensional images. This allows ambiguous objects, like the document count vector for the word \u201cbank\u201d, to have versions close to the images of both \u201criver\u201d and \u201cfinance\u201d without forcing the images of outdoor concepts to be located close to those of corporate concepts."}
{"_id": "0e1431fa42d76c44911b07078610d4b9254bd4ce", "title": "Nonlinear Component Analysis as a Kernel Eigenvalue Problem", "text": "A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear mapfor instance, the space of all possible five-pixel products in 16 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition."}
{"_id": "40cfac582cafeadb0e09e5f020e2febf5cbd4986", "title": "Leveraging graph topology and semantic context for pharmacovigilance through twitter-streams", "text": "Adverse drug events (ADEs) constitute one of the leading causes of post-therapeutic death and their identification constitutes an important challenge of modern precision medicine. Unfortunately, the onset and effects of ADEs are often underreported complicating timely intervention. At over 500 million posts per day, Twitter is a commonly used social media platform. The ubiquity of day-to-day personal information exchange on Twitter makes it a promising target for data mining for ADE identification and intervention. Three technical challenges are central to this problem: (1) identification of salient medical keywords in (noisy) tweets, (2) mapping drug-effect relationships, and (3) classification of such relationships as adverse or non-adverse. We use a bipartite graph-theoretic representation called a drug-effect graph (DEG) for modeling drug and side effect relationships by representing the drugs and side effects as vertices. We construct individual DEGs on two data sources. The first DEG is constructed from the drug-effect relationships found in FDA package inserts as recorded in the SIDER database. The second DEG is constructed by mining the history of Twitter users. We use dictionary-based information extraction to identify medically-relevant concepts in tweets. Drugs, along with co-occurring symptoms are connected with edges weighted by temporal distance and frequency. Finally, information from the SIDER DEG is integrate with the Twitter DEG and edges are classified as either adverse or non-adverse using supervised machine learning. We examine both graph-theoretic and semantic features for the classification task. The proposed approach can identify adverse drug effects with high accuracy with precision exceeding 85\u00a0% and F1 exceeding 81\u00a0%. When compared with leading methods at the state-of-the-art, which employ un-enriched graph-theoretic analysis alone, our method leads to improvements ranging between 5 and 8\u00a0% in terms of the aforementioned measures. Additionally, we employ our method to discover several ADEs which, though present in medical literature and Twitter-streams, are not represented in the SIDER databases. We present a DEG integration model as a powerful formalism for the analysis of drug-effect relationships that is general enough to accommodate diverse data sources, yet rigorous enough to provide a strong mechanism for ADE identification."}
{"_id": "292eee24017356768f1f50b72701ea636dba7982", "title": "IMPLICIT SHAPE MODELS FOR OBJECT DETECTION IN 3D POINT CLOUDS", "text": "We present a method for automatic object localization and recognition in 3D point clouds representing outdoor urban scenes. The method is based on the implicit shape models (ISM) framework, which recognizes objects by voting for their center locations. It requires only few training examples per class, which is an important property for practical use. We also introduce and evaluate an improved version of the spin image descriptor, more robust to point density variation and uncertainty in normal direction estimation. Our experiments reveal a significant impact of these modifications on the recognition performance. We compare our results against the state-of-the-art method and get significant improvement in both precision and recall on the Ohio dataset, consisting of combined aerial and terrestrial LiDAR scans of 150,000 m of urban area in total."}
{"_id": "ffd7ac9b4fff641d461091d5237321f83bae5216", "title": "Multi-task Learning for Maritime Traffic Surveillance from AIS Data Streams", "text": "In a world of global trading, maritime safety, security and efficiency are crucial issues. We propose a multi-task deep learning framework for vessel monitoring using Automatic Identification System (AIS) data streams. We combine recurrent neural networks with latent variable modeling and an embedding of AIS messages to a new representation space to jointly address key issues to be dealt with when considering AIS data streams: massive amount of streaming data, noisy data and irregular timesampling. We demonstrate the relevance of the proposed deep learning framework on real AIS datasets for a three-task setting, namely trajectory reconstruction, anomaly detection and vessel type identification."}
{"_id": "6385cd92746386c82a69ffdc3bc0a9da9f64f721", "title": "The Dysphagia Outcome and Severity Scale", "text": "The Dysphagia Outcome and Severity Scale (DOSS) is a simple, easy-to-use, 7-point scale developed to systematically rate the functional severity of dysphagia based on objective assessment and make recommendations for diet level, independence level, and type of nutrition. Intra- and interjudge reliabilities of the DOSS was established by four clinicians on 135 consecutive patients who underwent a modified barium swallow procedure at a large teaching hospital. Patients were assigned a severity level, independence level, and nutritional level based on three areas most associated with final recommendations: oral stage bolus transfer, pharyngeal stage retention, and airway protection. Results indicate high interrater (90%) and intrarater (93%) agreement with this scale. Implications are suggested for use of the DOSS in documenting functional outcomes of swallowing and diet status based on objective assessment."}
{"_id": "1d18fba47004a4cf2643c41ca82f6b04904bb134", "title": "Depth Map Super-Resolution Considering View Synthesis Quality", "text": "Accurate and high-quality depth maps are required in lots of 3D applications, such as multi-view rendering, 3D reconstruction and 3DTV. However, the resolution of captured depth image is much lower than that of its corresponding color image, which affects its application performance. In this paper, we propose a novel depth map super-resolution (SR) method by taking view synthesis quality into account. The proposed approach mainly includes two technical contributions. First, since the captured low-resolution (LR) depth map may be corrupted by noise and occlusion, we propose a credibility based multi-view depth maps fusion strategy, which considers the view synthesis quality and interview correlation, to refine the LR depth map. Second, we propose a view synthesis quality based trilateral depth-map up-sampling method, which considers depth smoothness, texture similarity and view synthesis quality in the up-sampling filter. Experimental results demonstrate that the proposed method outperforms state-of-the-art depth SR methods for both super-resolved depth maps and synthesized views. Furthermore, the proposed method is robust to noise and achieves promising results under noise-corruption conditions."}
{"_id": "922b5eaa5ca03b12d9842b7b84e0e420ccd2feee", "title": "A New Approach to Linear Filtering and Prediction Problems", "text": "AN IMPORTANT class of theoretical and practical problems in communication and control is of a statistical nature. Such problems are: (i) Prediction of random signals; (ii) separation of random signals from random noise; (iii) detection of signals of known form (pulses, sinusoids) in the presence of random noise. In his pioneering work, Wiener [1]3 showed that problems (i) and (ii) lead to the so-called Wiener-Hopf integral equation; he also gave a method (spectral factorization) for the solution of this integral equation in the practically important special case of stationary statistics and rational spectra. Many extensions and generalizations followed Wiener\u2019s basic work. Zadeh and Ragazzini solved the finite-memory case [2]. Concurrently and independently of Bode and Shannon [3], they also gave a simplified method [2) of solution. Booton discussed the nonstationary Wiener-Hopf equation [4]. These results are now in standard texts [5-6]. A somewhat different approach along these main lines has been given recently by Darlington [7]. For extensions to sampled signals, see, e.g., Franklin [8], Lees [9]. Another approach based on the eigenfunctions of the WienerHopf equation (which applies also to nonstationary problems whereas the preceding methods in general don\u2019t), has been pioneered by Davis [10] and applied by many others, e.g., Shinbrot [11], Blum [12], Pugachev [13], Solodovnikov [14]. In all these works, the objective is to obtain the specification of a linear dynamic system (Wiener filter) which accomplishes the prediction, separation, or detection of a random signal.4 \u2014\u2014\u2014 1 This research was supported in part by the U. S. Air Force Office of Scientific Research under Contract AF 49 (638)-382. 2 7212 Bellona Ave. 3 Numbers in brackets designate References at end of paper. 4 Of course, in general these tasks may be done better by nonlinear filters. At present, however, little or nothing is known about how to obtain (both theoretically and practically) these nonlinear filters. Contributed by the Instruments and Regulators Division and presented at the Instruments and Regulators Conference, March 29\u2013 Apri1 2, 1959, of THE AMERICAN SOCIETY OF MECHANICAL ENGINEERS. NOTE: Statements and opinions advanced in papers are to be understood as individual expressions of their authors and not those of the Society. Manuscript received at ASME Headquarters, February 24, 1959. Paper No. 59-IRD\u201411. A New Approach to Linear Filtering and Prediction Problems"}
{"_id": "9ee859bec32fa8240de0273faff6f20e16e21cc6", "title": "Detection of exudates in fundus photographs using convolutional neural networks", "text": "Diabetic retinopathy is one of the leading causes of preventable blindness in the developed world. Early diagnosis of diabetic retinopathy enables timely treatment and in order to achieve it a major effort will have to be invested into screening programs and especially into automated screening programs. Detection of exudates is very important for early diagnosis of diabetic retinopathy. Deep neural networks have proven to be a very promising machine learning technique, and have shown excellent results in different compute vision problems. In this paper we show that convolutional neural networks can be effectively used in order to detect exudates in color fundus photographs."}
{"_id": "13d4c2f76a7c1a4d0a71204e1d5d263a3f5a7986", "title": "Random Forests", "text": "Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International conference, ***, 148\u2013156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression."}
{"_id": "aeb7eaf29e16c82d9c0038a10d5b12d32b26ab60", "title": "Multi-task Learning Of Deep Neural Networks For Audio Visual Automatic Speech Recognition", "text": "Multi-task learning (MTL) involves the simultaneous training of two or more related tasks over shared representations. In this work, we apply MTL to audio-visual automatic speech recognition(AV-ASR). Our primary task is to learn a mapping between audio-visual fused features and frame labels obtained from acoustic GMM/HMM model. This is combined with an auxiliary task which maps visual features to frame labels obtained from a separate visual GMM/HMM model. The MTL model is tested at various levels of babble noise and the results are compared with a base-line hybrid DNN-HMM AVASR model. Our results indicate that MTL is especially useful at higher level of noise. Compared to base-line, upto 7% relative improvement in WER is reported at -3 SNR dB1."}
{"_id": "45fe9fac928c9e64c96b5feda318a09f1b0228dd", "title": "Ethnicity estimation with facial images", "text": "We have advanced an effort to develop vision based human understanding technologies for realizing human-friendly machine interfaces. Visual information, such as gender, age ethnicity, and facial expression play an important role in face-to-face communication. This paper addresses a novel approach for ethnicity classification with facial images. In this approach, the Gabor wavelets transformation and retina sampling are combined to extract key facial features, and support vector machines that are used for ethnicity classification. Our system, based on this approach, has achieved approximately 94% for ethnicity estimation under various lighting conditions."}
{"_id": "2dbaedea8ac09b11d9da8767eb73b6b821890661", "title": "Bullying and victimization in adolescence: concurrent and stable roles and psychological health symptoms.", "text": "From an initial sample of 1278 Italian students, the authors selected 537 on the basis of their responses to a self-report bully and victim questionnaire. Participants' ages ranged from 13 to 20 years (M = 15.12 years, SD = 1.08 years). The authors compared the concurrent psychological symptoms of 4 participant groups (bullies, victims, bully/victims [i.e., bullies who were also victims of bullying], and uninvolved students). Of participants, 157 were in the bullies group, 140 were in the victims group, 81 were in the bully/victims group, and 159 were in the uninvolved students group. The results show that bullies reported a higher level of externalizing problems, victims reported more internalizing symptoms, and bully/victims reported both a higher level of externalizing problems and more internalizing symptoms. The authors divided the sample into 8 groups on the basis of the students' recollection of their earlier school experiences and of their present role. The authors classified the participants as stable versus late bullies, victims, bully/victims, or uninvolved students. The authors compared each stable group with its corresponding late group and found that stable victims and stable bully/victims reported higher degrees of anxiety, depression, and withdrawal than did the other groups. The authors focus their discussion on the role of chronic peer difficulties in relation to adolescents' symptoms and well-being."}
{"_id": "d26ce29a109f8ccb42d7b0d312c70a6a750018ce", "title": "Side gate HiGT with low dv/dt noise and low loss", "text": "This paper presents a novel side gate HiGT (High-conductivity IGBT) that incorporates historical changes of gate structures for planar and trench gate IGBTs. Side gate HiGT has a side-wall gate, and the opposite side of channel region for side-wall gate is covered by a thick oxide layer to reduce Miller capacitance (Cres). In addition, side gate HiGT has no floating p-layer, which causes the excess Vge overshoot. The proposed side gate HiGT has 75% smaller Cres than the conventional trench gate IGBT. The excess Vge overshoot during turn-on is effectively suppressed, and Eon + Err can be reduced by 34% at the same diode's recovery dv/dt. Furthermore, side gate HiGT has sufficiently rugged RBSOA and SCSOA."}
{"_id": "0db78d548f914135ad16c0d6618890de52c0c065", "title": "Incremental Dialogue System Faster than and Preferred to its Nonincremental Counterpart", "text": "Current dialogue systems generally operate in a pipelined, modular fashion on one complete utterance at a time. Evidence from human language understanding shows that human understanding operates incrementally and makes use of multiple sources of information during the parsing process, including traditionally \u201clater\u201d components such as pragmatics. In this paper we describe a spoken dialogue system that understands language incrementally, provides visual feedback on possible referents during the course of the user\u2019s utterance, and allows for overlapping speech and actions. We further present findings from an empirical study showing that the resulting dialogue system is faster overall than its nonincremental counterpart. Furthermore, the incremental system is preferred to its nonincremental counterpart \u2013 beyond what is accounted for by factors such as speed and accuracy. These results indicate that successful incremental understanding systems will improve both performance and usability."}
{"_id": "eec44862b2d58434ca7706224bc0e9437a2bc791", "title": "The balanced scorecard: a foundation for the strategic management of information systems", "text": "\u017d . The balanced scorecard BSC has emerged as a decision support tool at the strategic management level. Many business leaders now evaluate corporate performance by supplementing financial accounting data with goal-related measures from the following perspectives: customer, internal business process, and learning and growth. It is argued that the BSC concept can be adapted to assist those managing business functions, organizational units and individual projects. This article develops a \u017d . balanced scorecard for information systems IS that measures and evaluates IS activities from the following perspectives: business value, user orientation, internal process, and future readiness. Case study evidence suggests that a balanced IS scorecard can be the foundation for a strategic IS management system provided that certain development guidelines are followed, appropriate metrics are identified, and key implementation obstacles are overcome. q 1999 Elsevier Science B.V. All rights reserved."}
{"_id": "4828a00f623651a9780b945980530fb6b3cb199a", "title": "Adversarial Machine Learning at Scale", "text": "Adversarial examples are malicious inputs designed to fool machine learning models. They often transfer from one model to another, allowing attackers to mount black box attacks without knowledge of the target model\u2019s parameters. Adversarial training is the process of explicitly training a model on adversarial examples, in order to make it more robust to attack or to reduce its test error on clean inputs. So far, adversarial training has primarily been applied to small problems. In this research, we apply adversarial training to ImageNet (Russakovsky et al., 2014). Our contributions include: (1) recommendations for how to succesfully scale adversarial training to large models and datasets, (2) the observation that adversarial training confers robustness to single-step attack methods, (3) the finding that multi-step attack methods are somewhat less transferable than singlestep attack methods, so single-step attacks are the best for mounting black-box attacks, and (4) resolution of a \u201clabel leaking\u201d effect that causes adversarially trained models to perform better on adversarial examples than on clean examples, because the adversarial example construction process uses the true label and the model can learn to exploit regularities in the construction process."}
{"_id": "f7d3b7986255f2e5e2402e84d7d7c5e583d7cb05", "title": "A Combined Model- and Learning-Based Framework for Interaction-Aware Maneuver Prediction", "text": "This paper presents a novel online-capable interaction-aware intention and maneuver prediction framework for dynamic environments. The main contribution is the combination of model-based interaction-aware intention estimation with maneuver-based motion prediction based on supervised learning. The advantages of this framework are twofold. On one hand, expert knowledge in the form of heuristics is integrated, which simplifies the modeling of the interaction. On the other hand, the difficulties associated with the scalability and data sparsity of the algorithm due to the so-called curse of dimensionality can be reduced, as a reduced feature space is sufficient for supervised learning. The proposed algorithm can be used for highly automated driving or as a prediction module for advanced driver assistance systems without the need of intervehicle communication. At the start of the algorithm, the motion intention of each driver in a traffic scene is predicted in an iterative manner using the game-theoretic idea of stochastic multiagent simulation. This approach provides an interpretation of what other drivers intend to do and how they interact with surrounding traffic. By incorporating this information into a Bayesian network classifier, the developed framework achieves a significant improvement in terms of reliable prediction time and precision compared with other state-of-the-art approaches. By means of experimental results in real traffic on highways, the validity of the proposed concept and its online capability is demonstrated. Furthermore, its performance is quantitatively evaluated using appropriate statistical measures."}
{"_id": "7616624dd230c42f6397a9a48094cf4611c02ab8", "title": "Frequency Tracking Control for a Capacitor-Charging Parallel Resonant Converter with Phase-Locked Loop", "text": "This study investigates a phase-locked loop (PLL) controlled parallel resonant converter (PRC) for a pulse power capacitor charging application. The dynamic nature of the capacitor charging is such that it causes a shift in the resonant frequency of the PRC. Using the proposed control method, the PRC can be optimized to operate with its maximum power capability and guarantee ZVS operation, even when the input voltage and resonant tank parameters vary. The detailed implementation of the PLL controller, as well as the determination of dead-time and leading time, is presented in this paper. Simulation and experimental results verify the performance of the proposed control method."}
{"_id": "4902805fe1e2f292f6beed7593154e686d7f6dc2", "title": "A Novel Connectionist System for Unconstrained Handwriting Recognition", "text": "Recognizing lines of unconstrained handwritten text is a challenging task. The difficulty of segmenting cursive or overlapping characters, combined with the need to exploit surrounding context, has led to low recognition rates for even the best current recognizers. Most recent progress in the field has been made either through improved preprocessing or through advances in language modeling. Relatively little work has been done on the basic recognition algorithms. Indeed, most systems rely on the same hidden Markov models that have been used for decades in speech and handwriting recognition, despite their well-known shortcomings. This paper proposes an alternative approach based on a novel type of recurrent neural network, specifically designed for sequence labeling tasks where the data is hard to segment and contains long-range bidirectional interdependencies. In experiments on two large unconstrained handwriting databases, our approach achieves word recognition accuracies of 79.7 percent on online data and 74.1 percent on offline data, significantly outperforming a state-of-the-art HMM-based system. In addition, we demonstrate the network's robustness to lexicon size, measure the individual influence of its hidden layers, and analyze its use of context. Last, we provide an in-depth discussion of the differences between the network and HMMs, suggesting reasons for the network's superior performance."}
{"_id": "e48df18774fbaff8b70b0231a02c3ccf1ebdf784", "title": "Enhancing computer vision to detect face spoofing attack utilizing a single frame from a replay video attack using deep learning", "text": "Recently, automatic face recognition has been applied in many web and mobile applications. Developers integrate and implement face recognition as an access control into these applications. However, face recognition authentication is vulnerable to several attacks especially when an attacker presents a 2-D printed image or recorded video frames in front of the face sensor system to gain access as a legitimate user. This paper introduces a non-intrusive method to detect face spoofing attacks that utilize a single frame of sequenced frames. We propose a specialized deep convolution neural network to extract complex and high features of the input diffused frame. We tested our method on the Replay Attack dataset which consists of 1200 short videos of both real-access and spoofing attacks. An extensive experimental analysis was conducted that demonstrated better results when compared to previous static algorithms results."}
{"_id": "1663f1b811c0ea542c1d128ff129cdf5cd7f9c44", "title": "ISAR - radar imaging of targets with complicated motion", "text": "ISAR imaging is described for general motion of a radar target. ISAR imaging may be seen as a 3D to 2D projection, and the importance of the ISAR image projection plane is stated. For general motion, ISAR images are often smeared when using FFT processing. Time frequency methods are used to analyze such images, and to form sharp images. A given smeared image is shown to be the result of changes both in scale and in the projection plane orientation."}
{"_id": "792e9c588e3426ec55f630fffefa439fc17e0406", "title": "Closing the Loop: Evaluating a Measurement Instrument for Maturity Model Design", "text": "To support the systematic improvement of business intelligence (BI) in organizations, we have designed and refined a BI maturity model (BIMM) and a respective measurement instrument (MI) in prior research. In this study, we devise an evaluation strategy, and evaluate the validity of the designed measurement artifact. Through cluster analysis of maturity assessments of 92 organizations, we identify characteristic BI maturity scenarios and representative cases for the relevant scenarios. For evaluating the designed instrument, we compare its results with insights obtained from in-depth interviews in the respective companies. A close match between our model's quantitative maturity assessments and the maturity levels from the qualitative analyses indicates that the MI correctly assesses BI maturity. The applied evaluation approach has the potential to be re-used in other design research studies where the validity of utility claims is often hard to prove."}
{"_id": "2907cde029f349948680a3690500d4cf09b5be96", "title": "An architecture for scalable, universal speech recognition", "text": "This thesis describes MultiSphinx, a concurrent architecture for scalable, low-latency automatic speech recognition. We first consider the problem of constructing a universal \u201ccore\u201d speech recognizer on top of which domain and task specific adaptation layers can be constructed. We then show that when this problem is restricted to that of expanding the search space from a \u201ccore\u201d vocabulary to a superset of this vocabulary across multiple passes of search, it allows us to effectively \u201cfactor\u201d a recognizer into components of roughly equal complexity. We present simple but effective algorithms for constructing the reduced vocabulary and associated statistical language model from an existing system. Finally, we describe the MultiSphinx decoder architecture, which allows multiple passes of recognition to operate concurrently and incrementally, either in multiple threads in the same process, or across multiple processes on separate machines, and which allows the best possible partial results, including confidence scores, to be obtained at any time during the recognition process."}
{"_id": "86436e9d0c98e7133c7d00d8875bcf0720ad3882", "title": "' SMART \u2019 CANE FOR THE VISUALLY IMPAIRED : DESIGN AND CONTROLLED FIELD TESTING OF AN AFFORDABLE OBSTACLE DETECTION SYSTEM", "text": "(USA) respectively. Since graduation, they have been associated with this project in voluntary and individual capacity. Sandeep Singh Gujral, an occupational therapist was an intern at the IIT Delhi from August-November 2009 and conducted user training for the trials."}
{"_id": "ee548bd6b96cf1d748def335c1517c2deea1b3f5", "title": "Forecasting Nike's sales using Facebook data", "text": "This paper tests whether accurate sales forecasts for Nike are possible from Facebook data and how events related to Nike affect the activity on Nike's Facebook pages. The paper draws from the AIDA sales framework (Awareness, Interest, Desire, and Action) from the domain of marketing and employs the method of social set analysis from the domain of computational social science to model sales from Big Social Data. The dataset consists of (a) selection of Nike's Facebook pages with the number of likes, comments, posts etc. that have been registered for each page per day and (b) business data in terms of quarterly global sales figures published in Nike's financial reports. An event study is also conducted using the Social Set Visualizer (SoSeVi). The findings suggest that Facebook data does have informational value. Some of the simple regression models have a high forecasting accuracy. The multiple regressions have a lower forecasting accuracy and cause analysis barriers due to data set characteristics such as perfect multicollinearity. The event study found abnormal activity around several Nike specific events but inferences about those activity spikes, whether they are purely event-related or coincidences, can only be determined after detailed case-by-case text analysis. Our findings help assess the informational value of Big Social Data for a company's marketing strategy, sales operations and supply chain."}
{"_id": "dcbebb8fbd3ebef2816ebe0f7da12340a5725a8b", "title": "Wiktionary-Based Word Embeddings", "text": "Vectorial representations of words have grown remarkably popular in natural language processing and machine translation. The recent surge in deep learning-inspired methods for producing distributed representations has been widely noted even outside these fields. Existing representations are typically trained on large monolingual corpora using context-based prediction models. In this paper, we propose extending pre-existing word representations by exploiting Wiktionary. This process results in a substantial extension of the original word vector representations, yielding a large multilingual dictionary of word embeddings. We believe that this resource can enable numerous monolingual and cross-lingual applications, as evidenced in a series of monolingual and cross-lingual semantic experiments that we have conducted."}
{"_id": "356f5f4224ee090a11e83a7e3cc130b2fdb0e612", "title": "Concept Based Query Expansion", "text": "Query expansion methods have been studied for a long time - with debatable success in many instances. In this paper we present a probabilistic query expansion model based on a similarity thesaurus which was constructed automatically. A similarity thesaurus reflects domain knowledge about the particular collection from which it is constructed. We address the two important issues with query expansion: the selection and the weighting of additional search terms. In contrast to earlier methods, our queries are expanded by adding those terms that are most similar to the concept of the query, rather than selecting terms that are similar to the query terms. Our experiments show that this kind of query expansion results in a notable improvement in the retrieval effectiveness when measured using both recall-precision and usefulness."}
{"_id": "37b0f219c1f2fbc4b432b24a5fe91dd733f19b7f", "title": "An Association Thesaurus for Information Retrieval", "text": "Although commonly used in both commercial and experimental information retrieval systems, thesauri have not demonstrated consistent beneets for retrieval performance, and it is diicult to construct a thesaurus automatically for large text databases. In this paper, an approach, called PhraseFinder, is proposed to construct collection-dependent association thesauri automatically using large full-text document collections. The association thesaurus can be accessed through natural language queries in INQUERY, an information retrieval system based on the probabilistic inference network. Experiments are conducted in IN-QUERY to evaluate diierent types of association thesauri, and thesauri constructed for a variety of collections."}
{"_id": "ca56018ed7042d8528b5a7cd8f332c5737b53b1f", "title": "Experiments in Automatic Statistical Thesaurus Construction", "text": "A well constructed thesaurus has long been recognized as a valuable tool in the effective operation of an information retrieval system. This paper reports the results of experiments designed to determine the validity of an approach to the automatic construction of global thesauri (described originally by Crouch in [1] and [2] based on a clustering of the document collection. The authors validate the approach by showing that the use of thesauri generated by this method results in substantial improvements in retrieval effectiveness in four test collections. The term discrimination value theory, used in the thesaurus generation algorithm to determine a term's membership in a particular thesaurus class, is found not to be useful in distinguishing a \u201cgood\u201d from an \u201cindifferent\u201d or \u201cpoor\u201d thesaurus class). In conclusion, the authors suggest an alternate approach to automatic thesaurus construction which greatly simplifies the work of producing viable thesaurus classes. Experimental results show that the alternate approach described herein in some cases produces thesauri which are comparable in retrieval effectiveness to those produced by the first method at much lower cost."}
{"_id": "e50a316f97c9a405aa000d883a633bd5707f1a34", "title": "Term-Weighting Approaches in Automatic Text Retrieval", "text": "The experimental evidence accumulated over the past 20 years indicates that text indexing systems based on the assignment of appropriately weighted single terms produce retrieval results that are superior to those obtainable with other more elaborate text representations. These results depend crucially on the choice of effective termweighting systems. This article summarizes the insights gained in automatic term weighting, and provides baseline single-term-indexing models with which other more elaborate content analysis procedures can be compared. 1. AUTOMATIC TEXT ANALYSIS In the late 195Os, Luhn [l] first suggested that automatic text retrieval systems could be designed based on a comparison of content identifiers attached both to the stored texts and to the users\u2019 information queries. Typically, certain words extracted from the texts of documents and queries would be used for content identification; alternatively, the content representations could be chosen manually by trained indexers familiar with the subject areas under consideration and with the contents of the document collections. In either case, the documents would be represented by term vectors of the form D= (ti,tj,...ytp) (1) where each tk identifies a content term assigned to some sample document D. Analogously, the information requests, or queries, would be represented either in vector form, or in the form of Boolean statements. Thus, a typical query Q might be formulated as Q = (qa,qbr.. . ,4r) (2)"}
{"_id": "fac511cc5079d432c7e010eee2d5a8d67136ecdd", "title": "Build-to-order supply chain management : a literature review and framework for development", "text": "The build-to-order supply chain management (BOSC) strategy has recently attracted the attention of both researchers and practitioners, given its successful implementation in many companies including Dell computers, Compaq, and BMW. The growing number of articles on BOSC in the literature is an indication of the importance of the strategy and of its role in improving the competitiveness of an organization. The objective of a BOSC strategy is to meet the requirements of individual customers by leveraging the advantages of outsourcing and information technology. There are not many research articles that provide an overview of BOSC, despite the fact that this strategy is being promoted as the operations paradigm of the future. The main objective of this research is to (i) review the concepts of BOSC, (ii) develop definitions of BOSC, (iii) classify the literature based on a suitable classification scheme, leading to some useful insights into BOSC and some future research directions, (iv) review the selected articles on BOSC for their contribution to the development and operations of BOSC, (v) develop a framework for BOSC, and (vi) suggest some future research directions. The literature has been reviewed based on the following four major areas of decision-making: organizational competitiveness, the development and implementation of BOSC, the operations of BOSC, and information technology in BOSC. Some of the important observations are: (a) there is a lack of adequate research on the design and control of BOSC, (b) there is a need for further research on the implementation of BOSC, (c) human resource issues in BOSC have been ignored, (d) issues of product commonality and modularity from the perspective of partnership or supplier development require further attention and (e) the trade-off between responsiveness and the cost of logistics needs further study. The paper ends with concluding remarks. # 2004 Elsevier B.V. All rights reserved."}
{"_id": "6ac15e819701cd0d077d8157711c4c402106722c", "title": "Team MIT Urban Challenge Technical Report", "text": "This technical report describes Team MIT's approach to the DARPA Urban Challenge. We have developed a novel strategy for using many inexpensive sensors, mounted on the vehicle periphery, and calibrated with a new cross\u00admodal calibration technique. Lidar, camera, and radar data streams are processed using an innovative, locally smooth state representation that provides robust perception for real\u00ad time autonomous control. A resilient planning and control architecture has been developed for driving in traffic, comprised of an innovative combination of well\u00adproven algorithms for mission planning, situational planning, situational interpretation, and trajectory control. These innovations are being incorporated in two new robotic vehicles equipped for autonomous driving in urban environments, with extensive testing on a DARPA site visit course. Experimental results demonstrate all basic navigation and some basic traffic behaviors, including unoccupied autonomous driving, lane following using pure\u00adpursuit control and our local frame perception strategy, obstacle avoidance using kino\u00addynamic RRT path planning, U\u00adturns, and precedence evaluation amongst other cars at intersections using our situational interpreter. We are working to extend these approaches to advanced navigation and traffic scenarios. \u2020 Executive Summary This technical report describes Team MIT's approach to the DARPA Urban Challenge. We have developed a novel strategy for using many inexpensive sensors, mounted on the vehicle periphery, and calibrated with a new cross-modal calibration technique. Lidar, camera, and radar data streams are processed using an innovative, locally smooth state representation that provides robust perception for real-time autonomous control. A resilient planning and control architecture has been developed for driving in traffic, comprised of an innovative combination of well-proven algorithms for mission planning, situational planning, situational interpretation, and trajectory control. These innovations are being incorporated in two new robotic vehicles equipped for autonomous driving in urban environments, with extensive testing on a DARPA site visit course. Experimental results demonstrate all basic navigation and some basic traffic behaviors, including unoccupied autonomous driving, lane following using pure-pursuit control and our local frame perception strategy, obstacle avoidance using kino-dynamic RRT path planning, U-turns, and precedence evaluation amongst other cars at intersections using our situational interpreter. We are working to extend these approaches to advanced navigation and traffic scenarios. DISCLAIMER: The information contained in this paper does not represent the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency (DARPA) or the Department of Defense. DARPA does not guarantee the accuracy or reliability of the information in this paper. Additional support \u2026"}
{"_id": "035e9cd81d5dd3b1621fb14e00b452959daffddf", "title": "Platforms in healthcare innovation ecosystems: The lens of an innovation intermediary", "text": "Healthcare innovation has made progressive strides. Innovative solutions now tend to incorporate device integration, data collection and data analysis linked across a diverse range of actors building platform-centric healthcare ecosystems. The interconnectedness and inter-disciplinarity of the ecosystems bring with it a number of vital issues around how to strategically manage such a complex system. This paper highlights the importance of innovation intermediaries particularly in a platform-centric ecosystem such as the healthcare industry. It serves as a reminder of why it is important for healthcare technologists to consider proactive ways to contribute to the innovation ecosystem by creating devices with the platform perspective in mind."}
{"_id": "f4968d7a9a330edf09f95c45170ad67d2f340fc7", "title": "Clustering Sensors in Wireless Ad Hoc Networks Operating in a Threat Environment", "text": "Sensors in a data fusion environment over hostile territory are geographically dispersed and change location with time. In order to collect and process data from these sensors an equally flexible network of fusion beds (i.e., clusterheads) is required. To account for the hostile environment, we allow communication links between sensors and clusterheads to be unreliable. We develop a mixed integer linear programming (MILP) model to determine the clusterhead location strategy that maximizes the expected data covered minus the clusterhead reassignments, over a time horizon. A column generation (CG) heuristic is developed for this problem. Computational results show that CG performs much faster than a standard commercial solver and the typical optimality gap for large problems is less than 5%. Improvements to the basic model in the areas of modeling link failure, consideration of bandwidth capacity, and clusterhead changeover cost estimation are also discussed."}
{"_id": "e5a837696c7a5d329a6a540a0d2de912a010ac47", "title": "Tableau Methods for Modal and Temporal Logics", "text": "This document is a complete draft of a chapter by Rajeev Gor e on \\Tableau Methods for Modal and Temporal Logics\" which is part of the \\Handbook of Tableau Methods\", edited"}
{"_id": "f0fcb064acefae20d0a7293a868c1a9822f388bb", "title": "General Spectral Camera Lens Simulation", "text": "We present a camera lens simulation model capable of producing advanced photographic phenomena in a general spectral Monte Carlo image rendering system. Our approach incorporates insights from geometrical diffraction theory, from optical engineering and from glass science. We show how to efficiently simulate all five monochromatic aberrations, spherical and coma aberration, astigmatism, field curvature and distortion. We also consider chromatic aberration, lateral colour and aperture diffraction. The inclusion of Fresnel reflection generates correct lens flares and we present an optimized sampling method for path generation."}
{"_id": "412b93429b7b7b307cc64702cdfa91630210a52e", "title": "An optimal technology mapping algorithm for delay optimization in lookup-table based FPGA designs", "text": "In this paper we present a polynomial time technology mapping algorithm, called Flow-Map, that optimally solves the LUT-based FPGA technology mapping problem for depth minimization for general Boolean networks. This theoretical breakthrough makes a sharp contrast with the fact that conventional technology mapping problem in library-based designs is NP-hard. A key step in Flow-Map is to compute a minimum height K-feasible cut in a network, solved by network flow computation. Our algorithm also effectively minimizes the number of LUTs by maximizing the volume of each cut and by several postprocessing operations. We tested the Flow-Map algorithm on a set of benchmarks and achieved reductions on both the network depth and the number of LUTs in mapping solutions as compared with previous algorithms."}
{"_id": "10727ad1dacea19dc813e50b014ca08e27dcd065", "title": "A gender-specific behavioral analysis of mobile device usage data", "text": "Mobile devices provide a continuous data stream of contextual behavioral information which can be leveraged for a variety of user services, such as in personalizing ads and customizing home screens. In this paper, we aim to better understand gender-related behavioral patterns found in application, Bluetooth, and Wi-Fi usage. Using a dataset which consists of approximately 19 months of data collected from 189 subjects, gender classification is performed using 1,000 features related to the frequency of events yielding up to 91.8% accuracy. Then, we present a behavioral analysis of application traffic using techniques commonly used for web browsing activity as an alternative data exploration approach. Finally, we conclude with a discussion on impersonation attacks, where we aim to determine if one gender is less vulnerable to unauthorized access on their mobile device."}
{"_id": "e275f643c97ca1f4c7715635bb72cf02df928d06", "title": "From Databases to Big Data", "text": ""}
{"_id": "5f85b28af230828675997aea48e939576996e16e", "title": "Educational game design for online education", "text": "The use of educational games in learning environments is an increasingly relevant trend. The motivational and immersive traits of game-based learning have been deeply studied in the literature, but the systematic design and implementation of educational games remain an elusive topic. In this study some relevant requirements for the design of educational games in online education are analyzed, and a general game design method that includes adaptation and assessment features is proposed. Finally, a particular implementation of that design is described in light of its applicability to other implementations and environments. 2008 Elsevier Ltd. All rights reserved."}
{"_id": "328bfd1d0229bc4973277f893abd1eb288159fc9", "title": "A review of the literature on the aging adult skull and face: implications for forensic science research and applications.", "text": "This paper is a summary of findings of adult age-related craniofacial morphological changes. Our aims are two-fold: (1) through a review of the literature we address the factors influencing craniofacial aging, and (2) the general ways in which a head and face age in adulthood. We present findings on environmental and innate influences on face aging, facial soft tissue age changes, and bony changes in the craniofacial and dentoalveolar skeleton. We then briefly address the relevance of this information to forensic science research and applications, such as the development of computer facial age-progression and face recognition technologies, and contributions to forensic sketch artistry."}
{"_id": "7ecaca8db190608dc4482999e19b1593cc6ad4e5", "title": "Feature-based survey of model transformation approaches", "text": "Concrete Figure 5 Features of the body of a domain: (A) patterns and (B) logic Language Paradigm Value Specification Element Creation Implicit Explicit Logic Constraint Object-Oriented Functional Procedural Logic Value Binding Imperative Assignment IBM SYSTEMS JOURNAL, VOL 45, NO 3, 2006 CZARNECKI AND HELSEN 629 operating on one model from the parts operating on other models. For example, classical rewrite rules have an LHS operating on the source model and an RHS operating on the target model. In other approaches, such as a rule implemented as a Java program, there might not be any such syntactic distinction. Multidirectionality Multidirectionality refers to the ability to execute a rule in different directions (see Figure 4A). Rules supporting multidirectionality are usually defined over in/out-domains. Multidirectional rules are available in MTF and QVT Relations. Application condition Transformation rules in some approaches may have an application condition (see Figure 4A) that must be true in order for the rule to be executed. An example is the when-clause in QVT Relations (Example 1). Intermediate structure The execution of a rule may require the creation of some additional intermediate structures (see Figure 4A) which are not part of the models being transformed. These structures are often temporary and require their own metamodel. A particular example of intermediate structures are traceability links. In contrast to other intermediate structures, traceability links are usually persisted. Even if traceability links are not persisted, some approaches, such as AGG and VIATRA, rely on them to prevent multiple \u2018\u2018firings\u2019\u2019 of a rule for the same input element. Parameterization The simplest kind of parameterization is the use of control parameters that allow passing values as control flags (Figure 7). Control parameters are useful for implementing policies. For example, a transformation from class models to relational schemas could have a control parameter specifying which of the alternative patterns of object-relational mapping should be used in a given execution. 7 Generics allow passing data types, including model element types, as parameters. Generics can help make transformation rules more reusable. Generic transformations have been described by Varr\u00f3 and Pataricza. 17 Finally, higher-order rules take other rules as parameters and may provide even higher levels of reuse and abstraction. Stratego 64 is an example of a term rewriting language for program transformation supporting higher-order rules. We are currently not aware of any model transformation approaches with a first class support for higherorder rules. Reflection and aspects Some authors advocate the support for reflection and aspects (Figure 4) in transformation languages. Reflection is supported in ATL by allowing reflective access to transformation rules during the execution of transformations. An aspect-oriented extension of MTL was proposed by Silaghi et al. 65 Reflection and aspects can be used to express concerns that crosscut several rules, such as custom traceability management policies. 66 Rule application control: Location determination A rule needs to be applied to a specific location within its source scope. As there may be more than one match for a rule within a given source scope, we need a strategy for determining the application locations (Figure 8A). 
The strategy could be deterministic, nondeterministic, or interactive. For example, a deterministic strategy could exploit some standard traversal strategy (such as depth first) over the containment hierarchy in the source. Stratego 64 is an example of a term rewriting language with a rich mechanism for expressing traversal in tree structures. Examples of nondeterministic strategies include one-point application, where a rule is applied to one nondeterministically selected location, and concurrent application, where one rule is Figure 6 Typing Untyped Syntactically Typed Semantically Typed Typing Figure 7 Parameterization Control Parameters Generics Higher-Order Rules Parameterization CZARNECKI AND HELSEN IBM SYSTEMS JOURNAL, VOL 45, NO 3, 2006 630 Figure 8 Model transformation approach features: (A) location determination, (B) rule scheduling, (C) rule organization, (D) source-target relationship, (E) incrementality, (F) directionality, and (G) tracing Concurrent One-Point Non-Deterministic Deterministic Interactive Rule Application Strategy A"}
{"_id": "4f0a21f94152f68f102d90c6f63f2eb8638eacc6", "title": "Robotic versus Open Partial Nephrectomy: A Systematic Review and Meta-Analysis", "text": "OBJECTIVES\nTo critically review the currently available evidence of studies comparing robotic partial nephrectomy (RPN) and open partial nephrectomy (OPN).\n\n\nMATERIALS AND METHODS\nA comprehensive review of the literature from Pubmed, Web of Science and Scopus was performed in October 2013. All relevant studies comparing RPN with OPN were included for further screening. A cumulative meta-analysis of all comparative studies was performed and publication bias was assessed by a funnel plot.\n\n\nRESULTS\nEight studies were included for the analysis, including a total of 3418 patients (757 patients in the robotic group and 2661 patients in the open group). Although RPN procedures had a longer operative time (weighted mean difference [WMD]: 40.89; 95% confidence interval [CI], 14.39-67.40; p\u200a=\u200a0.002), patients in this group benefited from a lower perioperative complication rate (19.3% for RPN and 29.5% for OPN; odds ratio [OR]: 0.53; 95%CI, 0.42-0.67; p<0.00001), shorter hospital stay (WMD: -2.78; 95%CI, -3.36 to -1.92; p<0.00001), less estimated blood loss(WMD: -106.83; 95%CI, -176.4 to -37.27; p\u200a=\u200a0.003). Transfusions, conversion to radical nephrectomy, ischemia time and estimated GFR change, margin status, and overall cost were comparable between the two techniques. The main limitation of the present meta-analysis is the non-randomization of all included studies.\n\n\nCONCLUSIONS\nRPN appears to be an efficient alternative to OPN with the advantages of a lower rate of perioperative complications, shorter length of hospital stay and less blood loss. Nevertheless, high quality prospective randomized studies with longer follow-up period are needed to confirm these findings."}
{"_id": "d464735cc3f8cb515a96f1ac346d42e8d7a28671", "title": "Black-Box Calibration for ADCs With Hard Nonlinear Errors Using a Novel INL-Based Additive Code: A Pipeline ADC Case Study", "text": "This paper presents a digital nonlinearity calibration technique for ADCs with strong input\u2013output discontinuities between adjacent codes, such as pipeline, algorithmic, and SAR ADCs with redundancy. In this kind of converter, the ADC transfer function often involves multivalued regions, where conventional integral-nonlinearity (INL)-based calibration methods tend to miscalibrate, negatively affecting the ADC\u2019s performance. As a solution to this problem, this paper proposes a novel INL-based calibration which incorporates information from the ADC\u2019s internal signals to provide a robust estimation of static nonlinear errors for multivalued ADCs. The method is fully generalizable and can be applied to any existing design as long as there is access to internal digital signals. In pipeline or subranging ADCs, this implies access to partial subcodes before digital correction; for algorithmic or SAR ADCs, conversion bit/bits per cycle are used. As a proof-of-concept demonstrator, the experimental results for a 1.2 V 23 mW 130 nm-CMOS pipeline ADC with a SINAD of 58.4 dBc (in nominal conditions without calibration) is considered. In a stressed situation with 0.95 V of supply, the ADC has SINAD values of 47.8 dBc and 56.1 dBc, respectively, before and after calibration (total power consumption, including the calibration logic, being 15.4 mW)."}
{"_id": "b1be4ea2ce9edcdcb4455643d0d21094f4f0a772", "title": "WebSelF: A Web Scraping Framework", "text": "We present WebSelF, a framework for web scraping which models the process of web scraping and decomposes it into four conceptually independent, reusable, and composable constituents. We have validated our framework through a full parameterized implementation that is flexible enough to capture previous work on web scraping. We conducted an experiment that evaluated several qualitatively different web scraping constituents (including previous work and combinations hereof) on about 11,000 HTML pages on daily versions of 17 web sites over a period of more than one year. Our framework solves three concrete problems with current web scraping and our experimental results indicate that composition of previous and our new techniques achieve a higher degree of accuracy, precision and specificity than existing tech-"}
{"_id": "a7e2814ec5db800d2f8c4313fd436e9cf8273821", "title": "KNN Model-Based Approach in Classification", "text": "The k-Nearest-Neighbours (kNN) is a simple but effective method for classification. The major drawbacks with respect to kNN are (1) its low efficiency being a lazy learning method prohibits it in many applications such as dynamic web mining for a large repository, and (2) its dependency on the selection of a \u201cgood value\u201d for k. In this paper, we propose a novel kNN type method for classification that is aimed at overcoming these shortcomings. Our method constructs a kNN model for the data, which replaces the data to serve as the basis of classification. The value of k is automatically determined, is varied for different data, and is optimal in terms of classification accuracy. The construction of the model reduces the dependency on k and makes classification faster. Experiments were carried out on some public datasets collected from the UCI machine learning repository in order to test our method. The experimental results show that the kNN based model compares well with C5.0 and kNN in terms of classification accuracy, but is more efficient than the standard kNN."}
{"_id": "0fafcc5fc916ac93e73f3a708f4023720bbe2faf", "title": "A new retexturing method for virtual fitting room using Kinect 2 camera", "text": "This research work proposes a new method for garment retexturing using a single static image along with depth information obtained using the Microsoft Kinect 2 camera. First the garment is segmented out from the image and texture domain coordinates are computed for each pixel of the shirt using 3D information. After that shading is applied on the new colours from the texture image by applying linear stretching of the luminance of the segmented garment. The proposed method is colour and pattern invariant and results in to visually realistic retexturing. The proposed method has been tested on various images and it is shown that it generally performs better and produces more realistic results compared to the state-of-the-art methods. The proposed method can be an application for the virtual fitting room."}
{"_id": "b323c4d8f284dd27b9bc8c8be5bee3cd30e2c8ca", "title": "Classification of Various Neighborhood Operations for the Nurse Scheduling Problem", "text": "Since the nurse scheduling problem (NSP) is a problem of finding a feasible solution, the solution space must include infeasible solutions to solve it using a local search algorithm. However, the solution space consisting of all the solutions is so large that the search requires much CPU time. In the NSP, some constraints have higher priority. Thus, we can define the solution space to be the set of solutions satisfying some of the important constraints, which are called the elementary constraints. The connectivity of the solution space is also important for the performance. However, the connectivity is not obvious when the solution space consists only of solutions satisfying the elementary constraints and is composed of small neighborhoods. This paper gives theoretical support for using 4-opt-type neighborhood operations by discussing the connectivity of its solution space and the size of the neighborhood. Another interesting point in our model is a special case of the NSP corresponds to the bipartite transportation problem, and our result also applies to it."}
{"_id": "00dbf46a7a4ba6222ac5d44c1a8c09f261e5693c", "title": "Ecient Sparse Matrix-Vector Multiplication on CUDA", "text": "The massive parallelism of graphics processing units (GPUs) offers tremendous performance in many high-performance computing applications. While dense linear algebra readily maps to such platforms, harnessing this potential for sparse matrix computations presents additional challenges. Given its role in iterative methods for solving sparse linear systems and eigenvalue problems, sparse matrix-vector multiplication (SpMV) is of singular importance in sparse linear algebra. In this paper we discuss data structures and algorithms for SpMV that are efficiently implemented on the CUDA platform for the fine-grained parallel architecture of the GPU. Given the memory-bound nature of SpMV, we emphasize memory bandwidth efficiency and compact storage formats. We consider a broad spectrum of sparse matrices, from those that are well-structured and regular to highly irregular matrices with large imbalances in the distribution of nonzeros per matrix row. We develop methods to exploit several common forms of matrix structure while offering alternatives which accommodate greater irregularity. On structured, grid-based matrices we achieve performance of 36 GFLOP/s in single precision and 16 GFLOP/s in double precision on a GeForce GTX 280 GPU. For unstructured finite-element matrices, we observe performance in excess of 15 GFLOP/s and 10 GFLOP/s in single and double precision respectively. These results compare favorably to prior state-of-the-art studies of SpMV methods on conventional multicore processors. Our double precision SpMV performance is generally two and a half times that of a Cell BE with 8 SPEs and more than ten times greater than that of a quad-core Intel Clovertown system."}
{"_id": "0cab85c646bc4572101044cb22d944e3685732b5", "title": "High-speed VLSI architectures for the AES algorithm", "text": "This paper presents novel high-speed architectures for the hardware implementation of the Advanced Encryption Standard (AES) algorithm. Unlike previous works which rely on look-up tables to implement the SubBytes and InvSubBytes transformations of the AES algorithm, the proposed design employs combinational logic only. As a direct consequence, the unbreakable delay incurred by look-up tables in the conventional approaches is eliminated, and the advantage of subpipelining can be further explored. Furthermore, composite field arithmetic is employed to reduce the area requirements, and different implementations for the inversion in subfield GF(2/sup 4/) are compared. In addition, an efficient key expansion architecture suitable for the subpipelined round units is also presented. Using the proposed architecture, a fully subpipelined encryptor with 7 substages in each round unit can achieve a throughput of 21.56 Gbps on a Xilinx XCV1000 e-8 bg560 device in non-feedback modes, which is faster and is 79% more efficient in terms of equivalent throughput/slice than the fastest previous FPGA implementation known to date."}
{"_id": "29e65e682764c6dcc33cdefd8150521893fc2c94", "title": "Improving Real-Time Captioning Experiences for Deaf and Hard of Hearing Students", "text": "We take a qualitative approach to understanding deaf and hard of hearing (DHH) students' experiences with real-time captioning as an access technology in mainstream university classrooms. We consider both existing human-based captioning as well as new machine-based solutions that use automatic speech recognition (ASR). We employed a variety of qualitative research methods to gather data about students' captioning experiences including in-class observations, interviews, diary studies, and usability evaluations. We also conducted a co-design workshop with 8 stakeholders after our initial research findings. Our results show that accuracy and reliability of the technology are still the most important issues across captioning solutions. However, we additionally found that current captioning solutions tend to limit students' autonomy in the classroom and present a variety of user experience shortcomings, such as complex setups, poor feedback and limited control over caption presentation. Based on these findings, we propose design requirements and recommend features for real-time captioning in mainstream classrooms."}
{"_id": "704885bae7e9c5a37528be854b8bd2f24d1e641c", "title": "An Empirical Examination of the Effects of Web Personalization at Different Stages of Decision-Making", "text": "Personalization agents are incorporated in many websites to tailor content and interfaces for individual users. But in contrast to the proliferation of personalized web services worldwide, empirical research studying the effects of web personalization is scant. How does exposure to personalized offers affect subsequent product consideration and choice outcome? Drawing on the literature in HCI and consumer behavior, this research examines the effects of web personalization on users\u2019 information processing and expectations through different decision-making stages. A study using a personalized ring-tone download website was conducted. Our findings provide empirical evidence of the effects of web personalization. In particular, when consumers are forming their consideration sets, the agents have the capacity to help them discover new products and generate demand for unfamiliar products. Once the users have arrived at their choice, however, the persuasive effects from the personalization agent diminish. These results establish that adaptive role of personalization agents at different stages of decision-making."}
{"_id": "6e91e18f89027bb01c791ce54b2f7a1fc8d8ce9c", "title": "Pacific kids DASH for health (PacDASH) randomized, controlled trial with DASH eating plan plus physical activity improves fruit and vegetable intake and diastolic blood pressure in children.", "text": "BACKGROUND\nPacific Kids DASH for Health (PacDASH) aimed to improve child diet and physical activity (PA) level and prevent excess weight gain and elevation in blood pressure (BP) at 9 months.\n\n\nMETHODS\nPacDASH was a two-arm, randomized, controlled trial (ClinicalTrials.gov: NCT00905411). Eighty-five 5- to 8-year-olds in the 50th-99th percentile for BMI were randomly assigned to treatment (n=41) or control (n=44) groups; 62 completed the 9-month trial. Sixty-two percent were female. Mean age was 7.1\u00b10.95 years. Race/ethnicity was Asian (44%), Native Hawaiian or Other Pacific Islander (28%), white (21%), or other race/ethnicity (7%). Intervention was provided at baseline and 3, 6 and 9 months, with monthly supportive mailings between intervention visits, and a follow-up visit at 15 months to observe maintenance. Diet and PA were assessed by 2-day log. Body size, composition, and BP were measured. The intervention effect on diet and PA, body size and composition, and BP by the end of the intervention was tested using an F test from a mixed regression model, after adjustment for sex, age, and ethnic group.\n\n\nRESULTS\nFruit and vegetable (FV) intake decreased less in the treatment than control group (p=0.04). Diastolic BP (DBP) was 12 percentile units lower in the treatment than control group after 9 months of intervention (p=0.01). There were no group differences in systolic BP (SBP) or body size/composition.\n\n\nCONCLUSIONS\nThe PacDASH trial enhanced FV intake and DBP, but not SBP or body size/composition."}
{"_id": "18d5fe1bd87db26b7037d81f4170cf4ebe320e00", "title": "Efficient Time-Domain Image Formation with Precise Topography Accommodation for General Bistatic SAR Configurations", "text": "Due to the lack of an appropriate symmetry in the acquisition geometry, general bistatic synthetic aperture radar (SAR) cannot benefit from the two main properties of low-to-moderate resolution monostatic SAR: azimuth-invariance and topography-insensitivity. The precise accommodation of azimuth-variance and topography is a real challenge for efficent image formation algorithms working in the Fourier domain, but can be quite naturally handled by time-domain approaches. We present an efficient and practical implementation of a generalised bistatic SAR image formation algorithm with an accurate accommodation of these two effects. The algorithm has a common structure with the monostatic fast-factorised backprojection (FFBP), and is therefore based on subaperture processing. The images computed over the different subapertures are displayed in an advantageous elliptical coordinate system capable of incorporating the topographic information of the imaged scene in an analogous manner as topography-dependent monostatic SAR algorithms do. Analytical expressions for the Nyquist requirements using this coordinate system are derived. The overall discussion includes practical implementation hints and a realistic computational burden estimation. The algorithm is tested with both simulated and actual bistatic SAR data. The actual data correspond to the spaceborne-airborne experiment between TerraSAR-X and F-SAR performed in 2007 and to the DLR-ONERA airborne experiment carried out in 2003. The presented approach proves its suitability for the precise SAR focussing of the data acquired in general bistatic configurations."}
{"_id": "2f585530aea2968d799a9109aeb202b03ba977f4", "title": "Machine Comprehension Based on Learning to Rank", "text": "Machine comprehension plays an essential role in NLP and has been widely explored with dataset like MCTest. However, this dataset is too simple and too small for learning true reasoning abilities. (Hermann et al., 2015) therefore release a large scale news article dataset and propose a deep LSTM reader system for machine comprehension. However, the training process is expensive. We therefore try feature-engineered approach with semantics on the new dataset to see how traditional machine learning technique and semantics can help with machine comprehension. Meanwhile, our proposed L2R reader system achieves good performance with efficiency and less training data."}
{"_id": "5dd2b359fa6a0f7ebe3ca123df01b852bef08421", "title": "The challenges of building mobile underwater wireless networks for aquatic applications", "text": "The large-scale mobile underwater wireless sensor network (UWSN) is a novel networking paradigm to explore aqueous environments. However, the characteristics of mobile UWSNs, such as low communication bandwidth, large propagation delay, floating node mobility, and high error probability, are significantly different from ground-based wireless sensor networks. The novel networking paradigm poses interdisciplinary challenges that will require new technological solutions. In particular, in this article we adopt a top-down approach to explore the research challenges in mobile UWSN design. Along the layered protocol stack, we proceed roughly from the top application layer to the bottom physical layer. At each layer, a set of new design intricacies is studied. The conclusion is that building scalable mobile UWSNs is a challenge that must be answered by interdisciplinary efforts of acoustic communications, signal processing, and mobile acoustic network protocol design."}
{"_id": "0435ef56d5ba8461021cf9e5d811a014ed4e98ac", "title": "A transmission control scheme for media access in sensor networks", "text": "We study the problem of media access control in the novel regime of sensor networks, where unique application behavior and tight constraints in computation power, storage, energy resources, and radio technology have shaped this design space to be very different from that found in traditional mobile computing regime. Media access control in sensor networks must not only be energy efficient but should also allow fair bandwidth allocation to the infrastructure for all nodes in a multihop network. We propose an adaptive rate control mechanism aiming to support these two goals and find that such a scheme is most effective in achieving our fairness goal while being energy efficient for both low and high duty cycle of network traffic."}
{"_id": "11897106a75d5ba3bf5fb4677943bd6f33daadf7", "title": "Challenges for efficient communication in underwater acoustic sensor networks", "text": "Ocean bottom sensor nodes can be used for oceanographic data collection, pollution monitoring, offshore exploration and tactical surveillance applications. Moreover, Unmanned or Autonomous Underwater Vehicles (UUVs, AUVs), equipped with sensors, will find application in exploration of natural undersea resources and gathering of scientific data in collaborative monitoring missions. Underwater acoustic networking is the enabling technology for these applications. Underwater Networks consist of a variable number of sensors and vehicles that are deployed to perform collaborative monitoring tasks over a given area.In this paper, several fundamental key aspects of underwater acoustic communications are investigated. Different architectures for two-dimensional and three-dimensional underwater sensor networks are discussed, and the underwater channel is characterized. The main challenges for the development of efficient networking solutions posed by the underwater environment are detailed at all layers of the protocol stack. Furthermore, open research issues are discussed and possible solution approaches are outlined."}
{"_id": "1e7e4d2b2be08c01f1347dee53e99473d2a9e83d", "title": "The WHOI micro-modem: an acoustic communications and navigation system for multiple platforms", "text": "The micro-modem is a compact, low-power, underwater acoustic communications and navigation subsystem. It has the capability to perform low-rate frequency-hopping frequency-shift keying (FH-FSK), variable rate phase-coherent keying (PSK), and two different types of long base line navigation, narrow-band and broadband. The system can be configured to transmit in four different bands from 3 to 30 kHz, with a larger board required for the lowest frequency. The user interface is based on the NMEA standard, which is a serial port specification. The modem also includes a simple built-in networking capability which supports up to 16 units in a polled or random-access mode and has an acknowledgement capability which supports guaranteed delivery transactions. The paper contains a detailed system description and results from several tests are also presented"}
{"_id": "93074961701ca6983a6c0fdc2964d11e282b8bae", "title": "The state of the art in underwater acoustic telemetry", "text": "Progress in underwater acoustic telemetry since 1982 is reviewed within a framework of six current research areas: (1) underwater channel physics, channel simulations, and measurements; (2) receiver structures; (3) diversity exploitation; (4) error control coding; (5) networked systems; and (6) alternative modulation strategies. Advances in each of these areas as well as perspectives on the future challenges facing them are presented. A primary thesis of this paper is that increased integration of high-fidelity channel models into ongoing underwater telemetry research is needed if the performance envelope (defined in terms of range, rate, and channel complexity) of underwater modems is to expand."}
{"_id": "5e296b53252db11667011b55e78bd4a67cd3e547", "title": "Performance of Store Brands: A Cross-Country Analysis of Consumer Store Brand Preferences, Perceptions, and Risk", "text": "This paper empirically studies consumer choice behavior in regard to store brands in the US, UK and Spain. Store brand market shares differ by country and they are usually much higher in Europe than in the US. However, there is surprisingly little work in marketing that empirically studies the reasons that underlie higher market shares associated with store brands in Europe over the US. In this paper, we empirically study the notion that the differential success of store brands in the US versus in Europe is the higher brand equity that store brands command in Europe over the US. We use a framework based on previous work to conduct our analysis: consumer brand choice under uncertainty, and brands as signals of product positions. More specifically, we examine whether uncertainty about quality (or, the positioning of the brand in the product space), perceived quality of store brands versus national brands, consistency in store brand offerings over time, as well as consumer attitudes towards risk, quality and price underlie the differential success of store brands at least partially in the US versus Europe. We propose and estimate a model that explicitly incorporates the impact of uncertainty on consumer behavior. We compare 1) levels of uncertainty associated with store brands versus national brands, 2) consistency in product positions over time for both national and store brands, 3) relative quality levels of national versus store brands, and 4) consumer sensitivity to price, quality and risk across the three countries we study. The model is estimated on scanner panel data on detergent in the US, UK and Spanish markets, and on toilet paper and margarine in the US and Spain. We find that consumer learning and perceived risk (and the associated brand equity), as well as consumer attitude towards risk, quality and price, play an important role in consumers\u2019 store versus national brand choices and contribute to the differences in relative success of store brands across the countries we study."}
{"_id": "f5dbb5a0aeb5823d704077b372d0dd989d4ebe1c", "title": "Approximate queuing analysis for IEEE 802.15.4 sensor network", "text": "Wireless sensor networks (WSNs) have attracted much attention in recent years for their unique characteristics and wide use in many different applications. Especially, in military networks, all sensor motes are deployed randomly and densely. How can we optimize the number of deployed nodes (sensor node and sink) with a QoS guarantee (minimal end-to-end delay and packet drop)? In this paper, using the M/M/1 queuing model we propose a deployment optimization model for non-beacon-mode 802.15.4 sensor networks. The simulation results show that the proposed model is a promising approach for deploying the sensor network."}
{"_id": "1fbb66a9407470e1da332c4ef69cdc34e169a3d7", "title": "A Baseline for General Music Object Detection with Deep Learning", "text": "Deep learning is bringing breakthroughs to many computer vision subfields including Optical Music Recognition (OMR), which has seen a series of improvements to musical symbol detection achieved by using generic deep learning models. However, so far, each such proposal has been based on a specific dataset and different evaluation criteria, which made it difficult to quantify the new deep learning-based state-of-the-art and assess the relative merits of these detection models on music scores. In this paper, a baseline for general detection of musical symbols with deep learning is presented. We consider three datasets of heterogeneous typology but with the same annotation format, three neural models of different nature, and establish their performance in terms of a common evaluation standard. The experimental results confirm that the direct music object detection with deep learning is indeed promising, but at the same time illustrates some of the domain-specific shortcomings of the general detectors. A qualitative comparison then suggests avenues for OMR improvement, based both on properties of the detection model and how the datasets are defined. To the best of our knowledge, this is the first time that competing music object detection systems from the machine learning paradigm are directly compared to each other. We hope that this work will serve as a reference to measure the progress of future developments of OMR in music object detection."}
{"_id": "4b07c5cda3dab4e48bd39bee91c5eba2642037b2", "title": "System for the Measurement of Cathodic Currents in Electrorefining Processes That Employ Multicircuital Technology", "text": "This paper presents a measurement system of cathodic currents for copper electrorefining processes using multicircuital technology with optibar intercell bars. The proposed system is based on current estimation using 55 magnetic field sensors per intercell bar. Current values are sampled and stored every 5 min for seven days in a compatible SQL database. The method does not affect the normal operation of the process and does not require any structural modifications. The system for online measurement of 40 cells involving 2090 sensors is in operation in an electrorefinery site."}
{"_id": "a2c1cf2de9e822c2d662d520083832853ac39f9d", "title": "An Inductive 2-D Position Detection IC With 99.8% Accuracy for Automotive EMR Gear Control System", "text": "In this paper, the analog front end (AFE) for an inductive position sensor in an automotive electromagnetic resonance gear control applications is presented. To improve the position detection accuracy, a coil driver with an automatic two-step impedance calibration is proposed which, despite the load variation, provides the desired driving capability by controlling the main driver size. Also, a time shared analog-to-digital converter (ADC) is proposed to convert eight-phase signals while reducing the current consumption and area to 1/8 of the conventional structure. A relaxation oscillator with temperature compensation is proposed to generate a constant clock frequency in vehicle temperature conditions. This chip is fabricated using a 0.18- $\\mu \\text{m}$ CMOS process and the die area is 2 mm $\\times 1.5$ mm. The power consumption of the AFE is 23.1 mW from the supply voltage of 3.3 V to drive one transmitter (Tx) coil and eight receiver (Rx) coils. The measured position detection accuracy is greater than 99.8 %. The measurement of the Tx shows a driving capability higher than 35 mA with respect to the load change."}
{"_id": "2cfac4e999538728a8e11e2b7784433e80ca6a38", "title": "Transcriptional regulatory networks in Saccharomyces cerevisiae.", "text": "We have determined how most of the transcriptional regulators encoded in the eukaryote Saccharomyces cerevisiae associate with genes across the genome in living cells. Just as maps of metabolic networks describe the potential pathways that may be used by a cell to accomplish metabolic processes, this network of regulator-gene interactions describes potential pathways yeast cells can use to regulate global gene expression programs. We use this information to identify network motifs, the simplest units of network architecture, and demonstrate that an automated process can use motifs to assemble a transcriptional regulatory network structure. Our results reveal that eukaryotic cellular functions are highly connected through networks of transcriptional regulators that regulate other transcriptional regulators."}
{"_id": "526ece284aacc3ab8e3d4e839a9512dbbd27867b", "title": "Online crowdsourcing: Rating annotators and obtaining cost-effective labels", "text": "Labeling large datasets has become faster, cheaper, and easier with the advent of crowdsourcing services like Amazon Mechanical Turk. How can one trust the labels obtained from such services? We propose a model of the labeling process which includes label uncertainty, as well a multi-dimensional measure of the annotators' ability. From the model we derive an online algorithm that estimates the most likely value of the labels and the annotator abilities. It finds and prioritizes experts when requesting labels, and actively excludes unreliable annotators. Based on labels already obtained, it dynamically chooses which images will be labeled next, and how many labels to request in order to achieve a desired level of confidence. Our algorithm is general and can handle binary, multi-valued, and continuous annotations (e.g. bounding boxes). Experiments on a dataset containing more than 50,000 labels show that our algorithm reduces the number of labels required, and thus the total cost of labeling, by a large factor while keeping error rates low on a variety of datasets."}
{"_id": "55b4b9b303da1b0e8f938a80636ec95f7af235fd", "title": "Neural Question Answering at BioASQ 5B", "text": "This paper describes our submission to the 2017 BioASQ challenge. We participated in Task B, Phase B which is concerned with biomedical question answering (QA). We focus on factoid and list question, using an extractive QA model, that is, we restrict our system to output substrings of the provided text snippets. At the core of our system, we use FastQA, a state-ofthe-art neural QA system. We extended it with biomedical word embeddings and changed its answer layer to be able to answer list questions in addition to factoid questions. We pre-trained the model on a large-scale open-domain QA dataset, SQuAD, and then fine-tuned the parameters on the BioASQ training set. With our approach, we achieve state-of-the-art results on factoid questions and competitive results on list questions."}
{"_id": "6653956ab027cadb035673055c91ce1c7767e140", "title": "Compact Planar Microstrip Branch-Line Couplers Using the Quasi-Lumped Elements Approach With Nonsymmetrical and Symmetrical T-Shaped Structure", "text": "A class of the novel compact-size branch-line couplers using the quasi-lumped elements approach with symmetrical or nonsymmetrical T-shaped structures is proposed in this paper. The design equations have been derived, and two circuits using the quasi-lumped elements approach were realized for physical measurements. This novel design occupies only 29% of the area of the conventional approach at 2.4 GHz. In addition, a third circuit was designed by using the same formula implementing a symmetrical T-shaped structure and occupied both the internal and external area of the coupler. This coupler achieved 500-MHz bandwidth while the phase difference between S21 and S31 is within 90degplusmn1deg. Thus, the bandwidth is not only 25% wider than that of the conventional coupler, but occupies only 70% of the circuit area compared to the conventional design. All three proposed couplers can be implemented by using the standard printed-circuit-board etching processes without any implementation of lumped elements, bonding wires, and via-holes, making it very useful for wireless communication systems"}
{"_id": "fa27e993f88c12ef3cf5bbc6eb3b0b4f9de15e86", "title": "The Anatomy of Motivation: An Evolutionary-Ecological Approach", "text": "There have been few attempts to bring evolutionary theory to the study of human motivation. From this perspective motives can be considered psychological mechanisms to produce behavior that solves evolutionarily important tasks in the human niche. From the dimensions of the human niche we deduce eight human needs: optimize the number and survival of gene copies; maintain bodily integrity; avoid external threats; optimize sexual, environmental, and social capital; and acquire reproductive and survival skills. These needs then serve as the foundation for a necessary and sufficient list of 15 human motives, which we label: lust, hunger, comfort, fear, disgust, attract, love, nurture, create, hoard, affiliate, status, justice, curiosity, and play. We show that these motives are consistent with evidence from the current literature. This approach provides us with a precise vocabulary for talking about motivation, the lack of which has hampered progress in behavioral science. Developing testable theories about the structure and function of motives is essential to the project of understanding the organization of animal cognition and learning, as well as for the applied behavioral sciences."}
{"_id": "6de09e48d42d14c4079e5d8a6a58485341b41cad", "title": "Students\u2019 opinions on blended learning and its implementation in terms of their learning styles", "text": "The purpose of this article is to examine students\u2019 views on the blended learning method and its use in relation to the students\u2019 individual learning style. The study was conducted with 31 senior students. Web based media together with face to face classroom settings were used in the blended learning framework. A scale of Students\u2019 Views on Blended Learning and its implementation, Kolb\u2019s Learning Style Inventory, Pre-Information Form and open ended questions were used to gather data. The majority of the students\u2019 fell into assimilators, accommodators and convergers learning styles. Results revealed that students\u2019 views on blended learning method and its use are quite positive."}
{"_id": "b4d4a78ecc68fd8fe9235864e0b1878cb9e9f84b", "title": "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems (Reprint)", "text": "An encryption method is presented with the novel property that publicly revealing an encryption key does not thereby reveal the corresponding decryption key. This has two important consequences:Couriers or other secure means are not needed to transmit keys, since a message can be enciphered using an encryption key publicly revealed by the intended recipient. Only he can decipher the message, since only he knows the corresponding decryption key.\nA message can be \u201csigned\u201d using a privately held decryption key. Anyone can verify this signature using the corresponding publicly revealed encryption key. Signatures cannot be forged, and a signer cannot later deny the validity of his signature. This has obvious applications in \u201celectronic mail\u201d and \u201celectronic funds transfer\u201d systems. A message is encrypted by representing it as a number M, raising M to a publicly specified power e, and then taking the remainder when the result is divided by the publicly specified product, n, of two large secret prime numbers p and q. Decryption is similar; only a different, secret, power d is used, where e * d = 1(mod (p - 1) * (q - 1)). The security of the system rests in part on the difficulty of factoring the published divisor, n."}
{"_id": "628c2bcfbd6b604e2d154c7756840d3a5907470f", "title": "Blockchain Platform for Industrial Internet of Things", "text": "Internet of Things (IoT) are being adopted for industrial and manufacturing applications such as manufacturing automation, remote machine diagnostics, prognostic health management of industrial machines and supply chain management. CloudBased Manufacturing is a recent on-demand model of manufacturing that is leveraging IoT technologies. While Cloud-Based Manufacturing enables on-demand access to manufacturing resources, a trusted intermediary is required for transactions between the users who wish to avail manufacturing services. We present a decentralized, peer-to-peer platform called BPIIoT for Industrial Internet of Things based on the Block chain technology. With the use of Blockchain technology, the BPIIoT platform enables peers in a decentralized, trustless, peer-to-peer network to interact with each other without the need for a trusted intermediary."}
{"_id": "0fd0e3854ee696148e978ec33d5c042554cd4d23", "title": "Tree Edit Models for Recognizing Textual Entailments, Paraphrases, and Answers to Questions", "text": "This document provides additional details about the experiments described in (Heilman and Smith, 2010). Note that while this document provides information about the datasets and experimental methods, it does not provide further results. If you have any further questions, please feel free to contact the first author. The preprocessed datasets (i.e., tagged and parsed) will be made available for research purposes upon request."}
{"_id": "3e27e210c9a41d9901164fd4c24e549616d1958a", "title": "CompNet: Neural networks growing via the compact network morphism", "text": "It is often the case that the performance of a neural network can be improved by adding layers. In real-world practices, we always train dozens of neural network architectures in parallel which is a wasteful process. We explored CompNet, in which case we morph a well-trained neural network to a deeper one where network function can be preserved and the added layer is compact. The work of the paper makes two contributions: a). The modified network can converge fast and keep the same functionality so that we do not need to train from scratch again; b). The layer size of the added layer in the neural network is controlled by removing the redundant parameters with sparse optimization. This differs from previous network morphism approaches which tend to add more neurons or channels beyond the actual requirements and result in redundance of the model. The method is illustrated using several neural network structures on different data sets including MNIST and CIFAR10."}
{"_id": "c11de783900e118e3d3de74efca5435c98b11e7c", "title": "Driver Drowsiness Detection System", "text": "Sleepiness or fatigue in drivers driving for long hours is the major cause of accidents on highways worldwide. The International statistics shows that a large number of road accidents are caused by driver fatigue. Therefore, a system that can detect oncoming driver fatigue and issue timely warning could help in preventing many accidents, and consequently save money and reduce personal suffering. The authors have made an attempt to design a system that uses video camera that points directly towards the driver\u201fs face in order to detect fatigue. If the fatigue is detected a warning signal is issued to alert the driver. The authors have worked on the video files recorded by the camera. Video file is converted into frames.Once the eyes are located from each frame, by determining the energy value of each frame one can determine whether the eyes are open or close. A particular condition is set for the energy values of open and close eyes. If the average of the energy value for 5 consecutive frames falls in a given condition then the driver will be detected as drowsy and issues a warning signal. The algorithm is proposed, implemented, tested, and found working satisfactorily."}
{"_id": "1e55bb7c095d3ea15bccb3df920c546ec54c86b5", "title": "Enterprise resource planning: A taxonomy of critical factors", "text": ""}
{"_id": "23a3dc2af47b13fe63189df63dbdda068b854cdd", "title": "Computable Elastic Distances Between Shapes", "text": "We define distances between geometric curves by the square root of the minimal energy required to transform one curve into the other. The energy is formally defined from a left invariant Riemannian distance on an infinite dimensional group acting on the curves, which can be explicitly computed. The obtained distance boils down to a variational problem for which an optimal matching between the curves has to be computed. An analysis of the distance when the curves are polygonal leads to a numerical procedure for the solution of the variational problem, which can efficiently be implemented, as illustrated by experiments."}
{"_id": "37726c30352bd235c2a832b6c16633c2b11b8913", "title": "Supervised Locally Linear Embedding", "text": "Locally linear embedding (LLE) is a recently proposed method for unsupervised nonlinear dimensionality reduction. It has a number of attractive features: it does not require an iterative algorithm, and just a few parameters need to be set. Two extensions of LLE to supervised feature extraction were independently proposed by the authors of this paper. Here, both methods are unified in a common framework and applied to a number of benchmark data sets. Results show that they perform very well on high-dimensional data which exhibits a manifold structure."}
{"_id": "8213dc79a49f6bd8e1f396e66db1d8503d85f566", "title": "Differentially Private Histogram Publication for Dynamic Datasets: an Adaptive Sampling Approach", "text": "Differential privacy has recently become a de facto standard for private statistical data release. Many algorithms have been proposed to generate differentially private histograms or synthetic data. However, most of them focus on \"one-time\" release of a static dataset and do not adequately address the increasing need of releasing series of dynamic datasets in real time. A straightforward application of existing histogram methods on each snapshot of such dynamic datasets will incur high accumulated error due to the composibility of differential privacy and correlations or overlapping users between the snapshots. In this paper, we address the problem of releasing series of dynamic datasets in real time with differential privacy, using a novel adaptive distance-based sampling approach. Our first method, DSFT, uses a fixed distance threshold and releases a differentially private histogram only when the current snapshot is sufficiently different from the previous one, i.e., with a distance greater than a predefined threshold. Our second method, DSAT, further improves DSFT and uses a dynamic threshold adaptively adjusted by a feedback control mechanism to capture the data dynamics. Extensive experiments on real and synthetic datasets demonstrate that our approach achieves better utility than baseline methods and existing state-of-the-art methods."}
{"_id": "d318b1cbb00282eea7fc5789f97d859181fc165e", "title": "LBP based recursive averaging for babble noise reduction applied to automatic speech recognition", "text": "Improved automatic speech recognition (ASR) in babble noise conditions continues to pose major challenges. In this paper, we propose a new local binary pattern (LBP) based speech presence indicator (SPI) to distinguish speech and non-speech components. Babble noise is subsequently estimated using recursive averaging. In the speech enhancement system optimally-modified log-spectral amplitude (OMLSA) uses the estimated noise spectrum obtained from the LBP based recursive averaging (LRA). The performance of the LRA speech enhancement system is compared to the conventional improved minima controlled recursive averaging (IMCRA). Segmental SNR improvements and perceptual evaluations of speech quality (PESQ) scores show that LRA offers superior babble noise reduction compared to the IMCRA system. Hidden Markov model (HMM) based word recognition results show a corresponding improvement."}
{"_id": "f7b3544567c32512be1bef4093a78075e59bdc11", "title": "Using flipped classroom approach to teach computer programming", "text": "Flipped classroom approach has been increasingly adopted in higher institutions. Although this approach has many advantages, there are also many challenges that should be considered. In this paper, we discuss the suitability of this approach to teach computer programming, and we report on our pilot experience of using this approach at Qatar University to teach one subject of computer programming course. It is found that students has positive attitude to this approach, it improves their learning. However, the main challenge was how to involve some of the students in online learning activities."}
{"_id": "ba965d68ea7a58f6a5676c47a5c81fad21959ef6", "title": "Towards a New Structural Model of the Sense of Humor: Preliminary Findings", "text": "In this article some formal, content-related and procedural considerations towards the sense of humor are articulated and the analysis of both everyday humor behavior and of comic styles leads to the initial proposal of a four factormodel of humor (4FMH). This model is tested in a new dataset and it is also examined whether two forms of comic styles (benevolent humor and moral mockery) do fit in. The model seems to be robust but further studies on the structure of the sense of humor as a personality trait are required."}
{"_id": "3e7aabaffdb4b05e701c544451dce55ad96e9401", "title": "Wi-Fi Fingerprint-Based Indoor Positioning: Recent Advances and Comparisons", "text": "The growing commercial interest in indoor location-based services (ILBS) has spurred recent development of many indoor positioning techniques. Due to the absence of Global Positioning System (GPS) signal, many other signals have been proposed for indoor usage. Among them, Wi-Fi (802.11) emerges as a promising one due to the pervasive deployment of wireless LANs (WLANs). In particular, Wi-Fi fingerprinting has been attracting much attention recently because it does not require line-of-sight measurement of access points (APs) and achieves high applicability in complex indoor environment. This survey overviews recent advances on two major areas of Wi-Fi fingerprint localization: advanced localization techniques and efficient system deployment. Regarding advanced techniques to localize users, we present how to make use of temporal or spatial signal patterns, user collaboration, and motion sensors. Regarding efficient system deployment, we discuss recent advances on reducing offline labor-intensive survey, adapting to fingerprint changes, calibrating heterogeneous devices for signal collection, and achieving energy efficiency for smartphones. We study and compare the approaches through our deployment experiences, and discuss some future directions."}
{"_id": "8acaebdf9569adafb03793b23e77bf4ac8c09f83", "title": "Design of spoof surface plasmon polariton based terahertz delay lines", "text": "We present the analysis and design of fixed physical length, spoof Surface Plasmon Polariton based waveguides with adjustable delay at terahertz frequencies. The adjustable delay is obtained using Corrugated Planar Goubau Lines (CPGL) by changing its corrugation depth without changing the total physical length of the waveguide. Our simulation results show that electrical lengths of 237.9\u00b0, 220.6\u00b0, and 310.6\u00b0 can be achieved by physical lengths of 250 \u03bcm and 200 \u03bcm at 0.25, 0.275, and 0.3 THz, respectively, for demonstration purposes. These simulations results are also consistent with our analytical calculations using the physical parameter and material properties. When we combine pairs of same length delay lines as if they are two branches of a terahertz phase shifter, we achieved an error rate of relative phase shift estimation better than 5.8%. To the best of our knowledge, this is the first-time demonstration of adjustable spoof Surface Plasmon Polariton based CPGL delay lines. The idea can be used for obtaining tunable delay lines with fixed lengths and phase shifters for the terahertz band circuitry."}
{"_id": "10d710c01acb10c4aea702926d21697935656c3d", "title": "Infrared Colorization Using Deep Convolutional Neural Networks", "text": "This paper proposes a method for transferring the RGB color spectrum to near-infrared (NIR) images using deep multi-scale convolutional neural networks. A direct and integrated transfer between NIR and RGB pixels is trained. The trained model does not require any user guidance or a reference image database in the recall phase to produce images with a natural appearance. To preserve the rich details of the NIR image, its high frequency features are transferred to the estimated RGB image. The presented approach is trained and evaluated on a real-world dataset containing a large amount of road scene images in summer. The dataset was captured by a multi-CCD NIR/RGB camera, which ensures a perfect pixel to pixel registration."}
{"_id": "325d145af5f38943e469da6369ab26883a3fd69e", "title": "Colorful Image Colorization", "text": "Given a grayscale photograph as input, this paper attacks the problem of hallucinating a plausible color version of the photograph. This problem is clearly underconstrained, so previous approaches have either relied on significant user interaction or resulted in desaturated colorizations. We propose a fully automatic approach that produces vibrant and realistic colorizations. We embrace the underlying uncertainty of the problem by posing it as a classification task and use class-rebalancing at training time to increase the diversity of colors in the result. The system is implemented as a feed-forward pass in a CNN at test time and is trained on over a million color images. We evaluate our algorithm using a \u201ccolorization Turing test,\u201d asking human participants to choose between a generated and ground truth color image. Our method successfully fools humans on 32% of the trials, significantly higher than previous methods. Moreover, we show that colorization can be a powerful pretext task for self-supervised feature learning, acting as a cross-channel encoder. This approach results in state-of-the-art performance on several feature learning benchmarks."}
{"_id": "326a0914dcdf7f42b5e1c2887174476728ca1b9d", "title": "Wasserstein GAN", "text": "The problem this paper is concerned with is that of unsupervised learning. Mainly, what does it mean to learn a probability distribution? The classical answer to this is to learn a probability density. This is often done by defining a parametric family of densities (P\u03b8)\u03b8\u2208Rd and finding the one that maximized the likelihood on our data: if we have real data examples {x}i=1, we would solve the problem"}
{"_id": "5287d8fef49b80b8d500583c07e935c7f9798933", "title": "Generative Adversarial Text to Image Synthesis", "text": "Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions."}
{"_id": "57bbbfea63019a57ef658a27622c357978400a50", "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation", "text": ""}
{"_id": "6ec02fb5bfc307911c26741fb3804f16d8ad299c", "title": "Active learning for on-road vehicle detection: a comparative study", "text": "In recent years, active learning has emerged as a powerful tool in building robust systems for object detection using computer vision. Indeed, active learning approaches to on-road vehicle detection have achieved impressive results. While active learning approaches for object detection have been explored and presented in the literature, few studies have been performed to comparatively assess costs and merits. In this study, we provide a cost-sensitive analysis of three popular active learning methods for on-road vehicle detection. The generality of active learning findings is demonstrated via learning experiments performed with detectors based on histogram of oriented gradient features and SVM classification (HOG\u2013SVM), and Haar-like features and Adaboost classification (Haar\u2013Adaboost). Experimental evaluation has been performed on static images and real-world on-road vehicle datasets. Learning approaches are assessed in terms of the time spent annotating, data required, recall, and precision."}
{"_id": "c2b6755543c3f7c71adb3e14eb06179f27b6ad5d", "title": "HyFlex nickel-titanium rotary instruments after clinical use: metallurgical properties.", "text": "AIM\nTo analyse the type and location of defects in HyFlex CM instruments after clinical use in a graduate endodontic programme and to examine the impact of clinical use on their metallurgical properties.\n\n\nMETHODOLOGY\nA total of 468 HyFlex CM instruments discarded from a graduate endodontic programme were collected after use in three teeth. The incidence and type of instrument defects were analysed. The lateral surfaces of the defect instruments were examined by scanning electron microscopy. New and clinically used instruments were examined by differential scanning calorimetry (DSC) and x-ray diffraction (XRD). Vickers hardness was measured with a 200-g load near the flutes for new and clinically used axially sectioned instruments. Data were analysed using one-way anova or Tukey's multiple comparison test.\n\n\nRESULTS\nOf the 468 HyFlex instruments collected, no fractures were observed and 16 (3.4%) revealed deformation. Of all the unwound instruments, size 20, .04 taper unwound the most often (n\u00a0=\u00a05) followed by size 25, .08 taper (n\u00a0=\u00a04). The trend of DSC plots of new instruments and clinically used (with and without defects) instruments groups were very similar. The DSC analyses showed that HyFlex instruments had an austenite transformation completion or austenite-finish (Af ) temperature exceeding 37\u00a0\u00b0C. The Af temperatures of HyFlex instruments (with or without defects) after multiple clinical use were much lower than in new instruments (P\u00a0<\u00a00.05). The enthalpy values for the transformation from martensitic to austenitic on deformed instruments were smaller than in the new instruments at the tip region (P\u00a0<\u00a00.05). XRD results showed that NiTi instruments had austenite and martensite structure on both new and used HyFlex instruments at room temperature. No significant difference in microhardness was detected amongst new and used instruments (with and without defects).\n\n\nCONCLUSIONS\nThe risk of HyFlex instruments fracture in the canal is very low when instruments are discarded after three cases of clinical use. New HyFlex instruments were a mixture of martensite and austenite structure at body temperature. Multiple clinical use caused significant changes in the microstructural properties of HyFlex instruments. Smaller instruments should be considered as single-use."}
{"_id": "a25f6d05c8191be01f736073fa2bc20c03ad7ad8", "title": "Integrated control of a multi-fingered hand and arm using proximity sensors on the fingertips", "text": "In this study, we propose integrated control of a robotic hand and arm using only proximity sensing from the fingertips. An integrated control scheme for the fingers and for the arm enables quick control of the position and posture of the arm by placing the fingertips adjacent to the surface of an object to be grasped. The arm control scheme enables adjustments based on errors in hand position and posture that would be impossible to achieve by finger motions alone, thus allowing the fingers to grasp an object in a laterally symmetric grasp. This can prevent grasp failures such as a finger pushing the object out of the hand or knocking the object over. Proposed control of the arm and hand allowed correction of position errors on the order of several centimeters. For example, an object on a workbench that is in an uncertain positional relation with the robot, with an inexpensive optical sensor such as a Kinect, which only provides coarse image data, would be sufficient for grasping an object."}
{"_id": "13317a497f4dc5f62a15dbdc135dd3ea293474df", "title": "Do Multi-Sense Embeddings Improve Natural Language Understanding?", "text": "Learning a distinct representation for each sense of an ambiguous word could lead to more powerful and fine-grained models of vector-space representations. Yet while \u2018multi-sense\u2019 methods have been proposed and tested on artificial wordsimilarity tasks, we don\u2019t know if they improve real natural language understanding tasks. In this paper we introduce a multisense embedding model based on Chinese Restaurant Processes that achieves state of the art performance on matching human word similarity judgments, and propose a pipelined architecture for incorporating multi-sense embeddings into language un-"}
{"_id": "7ec6b06b0f421b80ca25994c7aa106106c7bfb50", "title": "Design and Simulation of Bridgeless PFC Boost Rectifiers", "text": "This work presents new three-level unidirectional single-phase PFC rectifier topologies well-suited for applications targeting high efficiency and/or high power density. The characteristics of a selected novel rectifier topology, including its principles of operation, modulation strategy, PID control scheme, and a power circuit design related analysis are presented. Finally, a 220-V/3-kW laboratory prototype is constructed and used in order to verify the characteristics of the new converter, which include remarkably low switching losses and single ac-side boost inductor, that allow for a 98.6% peak efficiency with a switching frequency of 140 kHz."}
{"_id": "c62ba57869099f20c8bcefd9b38ce5d8b4b3db56", "title": "Computational models of trust and reputation: agents, evolutionary games, and social networks", "text": "Many recent studies of trust and reputation are made in the context of commercial reputation or rating systems for online communities. Most of these systems have been constructed without a formal rating model or much regard for our sociological understanding of these concepts. We first provide a critical overview of the state of research on trust and reputation. We then propose a formal quantitative model for the rating process. Based on this model, we formulate two personalized rating schemes and demonstrate their effectiveness at inferring trust experimentally using a simulated dataset and a real world movie-rating dataset. Our experiments show that the popular global rating scheme widely used in commercial electronic communities is inferior to our personalized rating schemes when sufficient ratings among members are available. The level of sufficiency is then discussed. In comparison with other models of reputation, we quantitatively show that our framework provides significantly better estimations of reputation. \"Better\" is discussed with respect to a rating process and specific games as defined in this work. Secondly, we propose a mathematical framework for modeling trust and reputation that is rooted in findings from the social sciences. In particular, our framework makes explicit the importance of social information (i.e., indirect channels of inference) in aiding members of a social network choose whom they want to partner with or to avoid. Rating systems that make use of such indirect channels of inference are necessarily personalized in nature, catering to the individual context of the rater. Finally, we have extended our trust and reputation framework toward addressing a fundamental problem for social science and biology: evolution of cooperation. We show that by providing an indirect inference mechanism for the propagation of trust and reputation, cooperation among selfish agents can be explained for a set of game theoretic simulations. For these simulations in particular, our proposal is shown to have provided more cooperative agent communities than existing schemes are able to. Thesis Supervisor: Peter Szolovits Title: Professor of Electrical Engineering and Computer Science"}
{"_id": "61df37b2c1f731e2b6bcb1ae2c2b7670b917284c", "title": "Surface Management System Field Trial Results", "text": "NASA Ames Research Center, in cooperation with the FAA, has completed research and development of a proof-ofconcept Surface Management System (SMS). This paper reports on two recent SMS field tests as well as final performance and benefits analyses. Field tests and analysis support the conclusion that substantial portions of SMS technology are ready for transfer to the FAA and deployment throughout the National Airspace System (NAS). Other SMS capabilities were accepted in concept but require additional refinement for inclusion in subsequent development spirals. SMS is a decision support tool that helps operational specialists at Air Traffic Control (ATC) and NAS user facilities to collaboratively manage the movements of aircraft on the surface of busy airports, thereby improving capacity, efficiency, and flexibility. SMS provides accurate predictions of the future demand and how that demand will affect airport resources \u2013 information that is not currently available. The resulting shared awareness enables the Air Traffic Control Tower (ATCT), Terminal Radar Approach Control (TRACON), Air Route Traffic Control Center (ARTCC), and air carriers to coordinate traffic management decisions. Furthermore, SMS uses its ability to predict how future demand will play out on the surface to evaluate the effect of various traffic management decisions in advance of implementing them, to plan and advise surface operations. The SMS concept, displays, and algorithms were evaluated through a series of field tests at Memphis International Airport (MEM). An operational trial in September, 2003 evaluated SMS traffic management components, such as runway configuration change planning; shadow testing in January, 2004 tested tactical components (e.g., Approval Request (APREQ) coordination, sequencing for departure, and Expected Departure Clearance Time (EDCT) compliance). Participants in these evaluations rated the SMS concept and many of the traffic management displays very positively. Local and Ground controller displays will require integration with other automation systems. Feedback from FAA and NAS user participants support the conclusion that SMS algorithms currently provide information that has acceptable and beneficial accuracy for traffic management applications. Performance analysis results document the current accuracy of SMS algorithms. Benefits/cost analysis of delay cost reduction due to SMS provides the business case for SMS deployment."}
{"_id": "dcff311940942dcf81db5073e551a87e1710e52a", "title": "Recognizing Malicious Intention in an Intrusion Detection Process", "text": "Generally, theintrudermustperformseveralactions,organizedin anintrusionscenario, to achieve hisor hermaliciousobjective.Wearguethatintrusionscenarioscan bemodelledasa planningprocessandwesuggestmodellinga maliciousobjectiveas anattemptto violatea givensecurityrequirement. Our proposalis thento extendthe definitionof attackcorrelationpresentedin [CM02] to correlateattackswith intrusion objectivesThis notionis usefulto decideif a sequenceof correlatedactionscanlead to a securityrequirementviolation.This approachprovidesthesecurityadministrator with aglobalview of whathappensin thesystem.In particular, it controlsunobserved actionsthroughhypothesisgeneration,clustersrepeatedactionsin a singlescenario, recognizesintrudersthatarechangingtheir intrusionobjectivesandis efficient to detectvariationsof anintrusionscenario.Thisapproachcanalsobeusedto eliminatea category of falsepositivesthatcorrespondto falseattacks,that is actionsthatarenot furthercorrelatedto anintrusionobjective."}
{"_id": "7ffdf4d92b4bc5690249ed98e51e1699f39d0e71", "title": "Reconfigurable RF MEMS Phased Array Antenna Integrated Within a Liquid Crystal Polymer (LCP) System-on-Package", "text": "For the first time, a fully integrated phased array antenna with radio frequency microelectromechanical systems (RF MEMS) switches on a flexible, organic substrate is demonstrated above 10 GHz. A low noise amplifier (LNA), MEMS phase shifter, and 2 times 2 patch antenna array are integrated into a system-on-package (SOP) on a liquid crystal polymer substrate. Two antenna arrays are compared; one implemented using a single-layer SOP and the second with a multilayer SOP. Both implementations are low-loss and capable of 12deg of beam steering. The design frequency is 14 GHz and the measured return loss is greater than 12 dB for both implementations. The use of an LNA allows for a much higher radiated power level. These antennas can be customized to meet almost any size, frequency, and performance needed. This research furthers the state-of-the-art for organic SOP devices."}
{"_id": "3aa41f8fdb6a4523e2cd95365bb6c7499ad29708", "title": "iCanTrace : Avatar Personalization through Selfie Sketches", "text": "This paper introduces a novel system that allows users to generate customized cartoon avatars through a sketching interface. The rise of social media and personalized gaming has given a need for personalized virtual appearances. Avatars, self-curated and customized images to represent oneself, have become a common means of expressing oneself in these new media. Avatar creation platforms face the challenge of granting user significant control over the avatar creation, and the challenge of encumbering the user with too many choices in their avatar customization. This paper demonstrates a sketch-guided avatar customization system and its potential to simplify the avatar creation process. Author"}
{"_id": "ba02b6125ba47ff3629f1d09d1bada28169c2b32", "title": "Teaching Syntax by Adversarial Distraction", "text": "Existing entailment datasets mainly pose problems which can be answered without attention to grammar or word order. Learning syntax requires comparing examples where different grammar and word order change the desired classification. We introduce several datasets based on synthetic transformations of natural entailment examples in SNLI or FEVER, to teach aspects of grammar and word order. We show that without retraining, popular entailment models are unaware that these syntactic differences change meaning. With retraining, some but not all popular entailment models can learn to compare the syntax properly."}
{"_id": "cb25c33ba56db92b7da4d5080f73fba07cb914a3", "title": "A large-stroke flexure fast tool servo with new displacement amplifier", "text": "As the rapid progress of science and technology, the free-form surface optical component has played an important role in spaceflight, aviation, national defense, and other areas of the technology. While the technology of fast tool servo (FTS) is the most promising method for the machining of free-form surface optical component. However, the shortcomings of short-stroke of fast tool servo device have constrained the development of free-form surface optical component. To address this problem, a new large-stroke flexible FTS device is proposed in this paper. A series of mechanism modeling and optimal designs are carried out via compliance matrix theory, pseudo-rigid body theory, and Particle Swarm Optimization (PSO) algorithm, respectively. The mechanism performance of the large-stroke FTS device is verified by the Finite Element Analysis (FEA) method. For this study, a piezoelectric (PZT) actuator P-840.60 that can travel to 90 \u00b5m under open-loop control is employed, the results of experiment indicate that the maximum of output displacement can achieve 258.3\u00b5m, and the bandwidth can achieve around 316.84 Hz. Both theoretical analysis and the test results of prototype uniformly verify that the presented FTS device can meet the demand of the actual microstructure processing."}
{"_id": "3d4bae33c2ccc0a6597f80e27cbeed64990b95bd", "title": "Mindfulness practice leads to increases in regional brain gray matter density", "text": "Therapeutic interventions that incorporate training in mindfulness meditation have become increasingly popular, but to date little is known about neural mechanisms associated with these interventions. Mindfulness-Based Stress Reduction (MBSR), one of the most widely used mindfulness training programs, has been reported to produce positive effects on psychological well-being and to ameliorate symptoms of a number of disorders. Here, we report a controlled longitudinal study to investigate pre-post changes in brain gray matter concentration attributable to participation in an MBSR program. Anatomical magnetic resonance (MR) images from 16 healthy, meditation-na\u00efve participants were obtained before and after they underwent the 8-week program. Changes in gray matter concentration were investigated using voxel-based morphometry, and compared with a waiting list control group of 17 individuals. Analyses in a priori regions of interest confirmed increases in gray matter concentration within the left hippocampus. Whole brain analyses identified increases in the posterior cingulate cortex, the temporo-parietal junction, and the cerebellum in the MBSR group compared with the controls. The results suggest that participation in MBSR is associated with changes in gray matter concentration in brain regions involved in learning and memory processes, emotion regulation, self-referential processing, and perspective taking."}
{"_id": "217af49622a4e51b6d1b9b6c75726eaf1355a903", "title": "Animating pictures with stochastic motion textures", "text": "In this paper, we explore the problem of enhancing still pictures with subtly animated motions. We limit our domain to scenes containing passive elements that respond to natural forces in some fashion. We use a semi-automatic approach, in which a human user segments the scene into a series of layers to be individually animated. Then, a \"stochastic motion texture\" is automatically synthesized using a spectral method, i.e., the inverse Fourier transform of a filtered noise spectrum. The motion texture is a time-varying 2D displacement map, which is applied to each layer. The resulting warped layers are then recomposited to form the animated frames. The result is a looping video texture created from a single still image, which has the advantages of being more controllable and of generally higher image quality and resolution than a video texture created from a video source. We demonstrate the technique on a variety of photographs and paintings."}
{"_id": "10d79507f0f2e2d2968bf3a962e1daffc8bd44f0", "title": "Modeling the statistical time and angle of arrival characteristics of an indoor multipath channel", "text": "Most previously proposed statistical models for the indoor multipath channel include only time of arrival characteristics. However, in order to use statistical models in simulating or analyzing the performance of systems employing spatial diversity combining, information about angle of arrival statistics is also required. Ideally, it would be desirable to characterize the full spare-time nature of the channel. In this paper, a system is described that was used to collect simultaneous time and angle of arrival data at 7 GHz. Data processing methods are outlined, and results obtained from data taken in two different buildings are presented. Based on the results, a model is proposed that employs the clustered \"double Poisson\" time-of-arrival model proposed by Saleh and Valenzuela (1987). The observed angular distribution is also clustered with uniformly distributed clusters and arrivals within clusters that have a Laplacian distribution."}
{"_id": "d00ef607a10e5be00a9e05504ab9771c0b05d4ea", "title": "Analysis and Comparison of a Fast Turn-On Series IGBT Stack and High-Voltage-Rated Commercial IGBTS", "text": "High-voltage-rated solid-state switches such as insulated-gate bipolar transistors (IGBTs) are commercially available up to 6.5 kV. Such voltage ratings are attractive for pulsed power and high-voltage switch-mode converter applications. However, as the IGBT voltage ratings increase, the rate of current rise and fall are generally reduced. This tradeoff is difficult to avoid as IGBTs must maintain a low resistance in the epitaxial or drift region layer. For high-voltage-rated IGBTs with thick drift regions to support the reverse voltage, the required high carrier concentrations are injected at turn on and removed at turn off, which slows the switching speed. An option for faster switching is to series multiple, lower voltage-rated IGBTs. An IGBT-stack prototype with six, 1200 V rated IGBTs in series has been experimentally tested. The six-series IGBT stack consists of individual, optically isolated, gate drivers and aluminum cooling plates for forced air cooling which results in a compact package. Each IGBT is overvoltage protected by transient voltage suppressors. The turn-on current rise time of the six-series IGBT stack and a single 6.5 kV rated IGBT has been experimentally measured in a pulsed resistive-load, capacitor discharge circuit. The IGBT stack has also been compared to two IGBT modules in series, each rated at 3.3 kV, in a boost circuit application switching at 9 kHz and producing an output of 5 kV. The six-series IGBT stack results in improved turn-on switching speed, and significantly higher power boost converter efficiency due to a reduced current tail during turn off. The experimental test parameters and the results of the comparison tests are discussed in the following paper"}
{"_id": "b63a60e666c4c0335d8de4581eaaa3f71e8e0e54", "title": "A Nonlinear-Disturbance-Observer-Based DC-Bus Voltage Control for a Hybrid AC/DC Microgrid", "text": "DC-bus voltage control is an important task in the operation of a dc or a hybrid ac/dc microgrid system. To improve the dc-bus voltage control dynamics, traditional approaches attempt to measure and feedforward the load or source power in the dc-bus control scheme. However, in a microgrid system with distributed dc sources and loads, the traditional feedforward-based methods need remote measurement with communications. In this paper, a nonlinear disturbance observer (NDO) based dc-bus voltage control is proposed, which does not need the remote measurement and enables the important \u201cplug-and-play\u201d feature. Based on this observer, a novel dc-bus voltage control scheme is developed to suppress the transient fluctuations of dc-bus voltage and improve the power quality in such a microgrid system. Details on the design of the observer, the dc-bus controller and the pulsewidth-modulation (PWM) dead-time compensation are provided in this paper. The effects of possible dc-bus capacitance variation are also considered. The performance of the proposed control strategy has been successfully verified in a 30 kVA hybrid microgrid including ac/dc buses, battery energy storage system, and photovoltaic (PV) power generation system."}
{"_id": "6b557c35514d4b6bd75cebdaa2151517f5e820e2", "title": "Prediction , operations , and condition monitoring in wind energy", "text": "Recent developments in wind energy research including wind speed prediction, wind turbine control, operations of hybrid power systems, as well as condition monitoring and fault detection are surveyed. Approaches based on statistics, physics, and data mining for wind speed prediction at different time scales are reviewed. Comparative analysis of prediction results reported in the literature is presented. Studies of classical and intelligent control of wind turbines involving different objectives and strategies are reported. Models for planning operations of different hybrid power systems including wind generation for various objectives are addressed. Methodologies for condition monitoring and fault detection are discussed. Future research directions in wind energy are proposed. 2013 Elsevier Ltd. All rights reserved."}
{"_id": "e81f115f2ac725f27ea6549f4de0a71b3a3f6a5c", "title": "NEUROPSI: a brief neuropsychological test battery in Spanish with norms by age and educational level.", "text": "The purpose of this research was to develop, standardize, and test the reliability of a short neuropsychological test battery in the Spanish language. This neuropsychological battery was named \"NEUROPSI,\" and was developed to assess briefly a wide spectrum of cognitive functions, including orientation, attention, memory, language, visuoperceptual abilities, and executive functions. The NEUROPSI includes items that are relevant for Spanish-speaking communities. It can be applied to illiterates and low educational groups. Administration time is 25 to 30 min. Normative data were collected from 800 monolingual Spanish-speaking individuals, ages 16 to 85 years. Four age groups were used: (1) 16 to 30 years, (2) 31 to 50 years, (3) 51 to 65 years, and (4) 66 to 85 years. Data also are analyzed and presented within 4 different educational levels that were represented in this sample; (1) illiterates (zero years of school); (2) 1 to 4 years of school; (2) 5 to 9 years of school; and (3) 10 or more years of formal education. The effects of age and education, as well as the factor structure of the NEUROPSI are analyzed. The NEUROPSI may fulfill the need for brief, reliable, and objective evaluation of a broad range of cognitive functions in Spanish-speaking populations."}
{"_id": "382c057c0be037340e7d6494fc3a580b9d6b958c", "title": "Should TED talks be teaching us something?", "text": "The nonprofit phenomenon \u201cTED,\u201d the brand name for the concepts of Technology Education and Design, was born in 1984. It launched into pop culture stardom in 2006 when the organization\u2019s curators began offering short, free, unrestricted, and educational video segments. Known as \u201cTEDTalks,\u201d these informational segments are designed to be no longer than 18 minutes in length and provide succinct, targeted enlightenment on various topics or ideas that are deemed \u201cworth spreading.\u201d TED Talks, often delivered in sophisticated studios with trendy backdrops, follow a format that focuses learners on the presenter and limited, extremely purposeful visual aids. Topics range from global warming to running to the developing world. Popular TED Talks, such as Sir Ken Robinson\u2019s \u201cSchools Kill Creatively\u201d or Dan Gilbert\u2019s \u201cWhy Are We Happy?\u201d can easily garner well over a million views. TED Talks are a curious phenomenon for educators to observe. They are in many ways the antithesis of traditional lectures, which are typically 60-120 minutes in length and delivered in cavernous halls by faculty members engaged in everyday academic lives. Perhaps the formality of the lecture is the biggest superficial difference in comparison to casual TEDTalks (Table 1). However, TED Talks are not as unstructured as they may appear. Presenters are well coached and instructed to follow a specific presentation formula, whichmaximizes storyboarding and highlights passion for the subject. While learning is not formally assessed, TED Talks do seem to accomplish their goals of spreading ideas while sparking curiosity within the learner. The fact that some presentations have been viewed more than 16 million times points to the effectiveness of the platform in at least reaching learners and stimulating a desire to click, listen, and learn.Moreover, the TEDTalks website is the fourth most popular technology website and the single most popular conference and events website in the world. The TED phenomenon may have both direct and subliminal messages for academia. Perhaps an initial question to ponder is whether the TED phenomenon is a logical grassroots educational evolution or a reaction to the digital generation and their preference for learning that occurs \u201cwherever, whenever.\u201d The diverse cross-section of TED devotees ranging in background and age would seem to provide evidence that the platform does not solely appeal to younger generations of learners. Instead, it suggests that adult learners are either more drawn to digital learning than they think they are or than they are likely to admit. The perceived efficacy of TED once again calls into question the continued reliance of academia on the lecture as the primary currency of learning. TED Talks do not convey large chunks of information but rather present grander ideas. Would TED-like educational modules or blocks of 18-20 minutes be more likely to pique student curiosity across a variety of pharmacy topics, maintain attention span, and improve retention? Many faculty members who are recognized as outstanding teachers or lecturers might confess that they already teach through a TED-like lens. 
Collaterally, TED Talks or TED-formatted learning experiences might be ideal springboards for incorporation into inverted or flipped classroom environments where information is gathered and learned at home, while ideas are analyzed, debated, and assimilated within the classroom. Unarguably, TED Talks have given scientists and other researchers a real-time, mass media driven opportunity to disseminate their research, ideas, and theories that might otherwise have gone unnoticed. Similar platforms or approaches may be able to provide opportunities for the academy to further transmit research to the general public. The TED approach to idea dissemination is not without its critics. Several authors have criticized TED for flattening or dumbing down ideas so they fit into a preconceived, convenient format that is primarily designed to entertain. Consequently, the oversimplified ideas and concepts may provoke little effort from the learner to analyze data, theory, or controversy."}
{"_id": "36eff99a7f23cec395e4efc80ff7f937934c7be6", "title": "Geometry and Meaning", "text": "Geometry and Meaning is an interesting book about a relationship between geometry and logic defined on certain types of abstract spaces and how that intimate relationship might be exploited when applied in computational linguistics. It is also about an approach to information retrieval, because the analysis of natural language, especially negation, is applied to problems in IR, and indeed illustrated throughout the book by simple examples using search engines. It is refreshing to see IR issues tackled from a different point of view than the standard vector space (Salton, 1968). It is an enjoyable read, as intended by the author, and succeeds as a sort of tourist guide to the subject in hand. The early part of the book concentrates on the introduction of a number of elementary concepts from mathematics: graph theory, linear algebra (especially vector spaces), lattice theory, and logic. These concepts are well motivated and illustrated with good examples, mostly of a classificatory or taxonomic kind. One of the major goals of the book is to argue that non-classical logic, in the form of a quantum logic, is a candidate for analyzing language and its underlying logic, with a promise that such an approach could lead to improved search engines. The argument for this is aided by copious references to early philosophers, scientists, and mathematicians, creating the impression that when Aristotle, Descartes, Boole, and Grassmann were laying the foundations for taxonomy, analytical geometry, logic, and vector spaces, they had a more flexible and broader view of these subjects than is current. This is especially true of logic. Thus the historical approach taken to introducing quantum logic (chapter 7) is to show that this particular kind of logic and its interpretation in vector space were inherent in some of the ideas of these earlier thinkers. Widdows claims that Aristotle was never respected for his mathematics and that Grassmann\u2019s Ausdehnungslehre was largely ignored and left in obscurity. Whether Aristotle was never admired for his mathematics I am unable to judge, but certainly Alfred North Whitehead (1925) was not complimentary when he said:"}
{"_id": "f0d82cbac15c4379677d815c9d32f7044b19d869", "title": "Emerging Frontiers of Neuroengineering: A Network Science of Brain Connectivity.", "text": "Neuroengineering is faced with unique challenges in repairing or replacing complex neural systems that are composed of many interacting parts. These interactions form intricate patterns over large spatiotemporal scales and produce emergent behaviors that are difficult to predict from individual elements. Network science provides a particularly appropriate framework in which to study and intervene in such systems by treating neural elements (cells, volumes) as nodes in a graph and neural interactions (synapses, white matter tracts) as edges in that graph. Here, we review the emerging discipline of network neuroscience, which uses and develops tools from graph theory to better understand and manipulate neural systems from micro- to macroscales. We present examples of how human brain imaging data are being modeled with network analysis and underscore potential pitfalls. We then highlight current computational and theoretical frontiers and emphasize their utility in informing diagnosis and monitoring, brain-machine interfaces, and brain stimulation. A flexible and rapidly evolving enterprise, network neuroscience provides a set of powerful approaches and fundamental insights that are critical for the neuroengineer's tool kit."}
{"_id": "f7d5f8c60972c18812925715f685ce8ae5d5659d", "title": "A new exact method for the two-dimensional orthogonal packing problem", "text": "The two-dimensional orthogonal packing problem (2OPP ) consists of determining if a set of rectangles (items) can be packed into one rectangle of fixed size (bin). In this paper we propose two exact algorithms for solving this problem. The first algorithm is an improvement on a classical branch&bound method, whereas the second algorithm is based on a two-step enumerative method. We also describe reduction procedures and lower bounds which can be used within the branch&bound method. We report computational experiments for randomly generated benchmarks, which demonstrate the efficiency of both methods."}
{"_id": "90c1104142203c8ead18882d49bfea8aec23e758", "title": "Sensitivity and diagnosticity of NASA-TLX and simplified SWAT to assess the mental workload associated with operating an agricultural sprayer.", "text": "The objectives of the present study were: a) to investigate three continuous variants of the NASA-Task Load Index (TLX) (standard NASA (CNASA), average NASA (C1NASA) and principal component NASA (PCNASA)) and five different variants of the simplified subjective workload assessment technique (SSWAT) (continuous standard SSWAT (CSSWAT), continuous average SSWAT (C1SSWAT), continuous principal component SSWAT (PCSSWAT), discrete event-based SSWAT (D1SSWAT) and discrete standard SSWAT (DSSWAT)) in terms of their sensitivity and diagnosticity to assess the mental workload associated with agricultural spraying; b) to compare and select the best variants of NASA-TLX and SSWAT for future mental workload research in the agricultural domain. A total of 16 male university students (mean 30.4 +/- 12.5 years) participated in this study. All the participants were trained to drive an agricultural spraying simulator. Sensitivity was assessed by the ability of the scales to report the maximum change in workload ratings due to the change in illumination and difficulty levels. In addition, the factor loading method was used to quantify sensitivity. The diagnosticity was assessed by the ability of the scale to diagnose the change in task levels from single to dual. Among all the variants of NASA-TLX and SSWAT, PCNASA and discrete variants of SSWAT showed the highest sensitivity and diagnosticity. Moreover, among all the variants of NASA and SSWAT, the discrete variants of SSWAT showed the highest sensitivity and diagnosticity but also high between-subject variability. The continuous variants of both scales had relatively low sensitivity and diagnosticity and also low between-subject variability. Hence, when selecting a scale for future mental workload research in the agricultural domain, a researcher should decide what to compromise: 1) between-subject variability or 2) sensitivity and diagnosticity. STATEMENT OF RELEVANCE: The use of subjective workload scales is very popular in mental workload research. The present study investigated the different variants of two popular workload rating scales (i.e. NASA-TLX and SSWAT) in terms of their sensitivity and diagnositicity and selected the best variants of each scale for future mental workload research."}
{"_id": "b1cbfd6c1e7f8a77e6c1e6db6cd0625e3bd785ef", "title": "Stadium Hashing: Scalable and Flexible Hashing on GPUs", "text": "Hashing is one of the most fundamental operations that provides a means for a program to obtain fast access to large amounts of data. Despite the emergence of GPUs as many-threaded general purpose processors, high performance parallel data hashing solutions for GPUs are yet to receive adequate attention. Existing hashing solutions for GPUs not only impose restrictions (e.g., inability to concurrently execute insertion and retrieval operations, limitation on the size of key-value data pairs) that limit their applicability, their performance does not scale to large hash tables that must be kept out-of-core in the host memory. In this paper we present Stadium Hashing (Stash) that is scalable to large hash tables and practical as it does not impose the aforementioned restrictions. To support large out-of-core hash tables, Stash uses a compact data structure named ticket-board that is separate from hash table buckets and is held inside GPU global memory. Ticket-board locally resolves significant portion of insertion and lookup operations and hence, by reducing accesses to the host memory, it accelerates the execution of these operations. Split design of the ticket-board also enables arbitrarily large keys and values. Unlike existing methods, Stash naturally supports concurrent insertions and retrievals due to its use of double hashing as the collision resolution strategy. Furthermore, we propose Stash with collaborative lanes (clStash) that enhances GPU's SIMD resource utilization for batched insertions during hash table creation. For concurrent insertion and retrieval streams, Stadium hashing can be up to 2 and 3 times faster than GPU Cuckoo hashing for in-core and out-of-core tables respectively."}
{"_id": "20f5b475effb8fd0bf26bc72b4490b033ac25129", "title": "Real time detection of lane markers in urban streets", "text": "We present a robust and real time approach to lane marker detection in urban streets. It is based on generating a top view of the road, filtering using selective oriented Gaussian filters, using RANSAC line fitting to give initial guesses to a new and fast RANSAC algorithm for fitting Bezier Splines, which is then followed by a post-processing step. Our algorithm can detect all lanes in still images of the street in various conditions, while operating at a rate of 50 Hz and achieving comparable results to previous techniques."}
{"_id": "27edbcf8c6023905db4de18a4189c2093ab39b23", "title": "Robust Lane Detection and Tracking in Challenging Scenarios", "text": "A lane-detection system is an important component of many intelligent transportation systems. We present a robust lane-detection-and-tracking algorithm to deal with challenging scenarios such as a lane curvature, worn lane markings, lane changes, and emerging, ending, merging, and splitting lanes. We first present a comparative study to find a good real-time lane-marking classifier. Once detection is done, the lane markings are grouped into lane-boundary hypotheses. We group left and right lane boundaries separately to effectively handle merging and splitting lanes. A fast and robust algorithm, based on random-sample consensus and particle filtering, is proposed to generate a large number of hypotheses in real time. The generated hypotheses are evaluated and grouped based on a probabilistic framework. The suggested framework effectively combines a likelihood-based object-recognition algorithm with a Markov-style process (tracking) and can also be applied to general-part-based object-tracking problems. An experimental result on local streets and highways shows that the suggested algorithm is very reliable."}
{"_id": "4d2cd0b25c5b0f69b6976752ebca43ec5f04a461", "title": "Lane detection and tracking using B-Snake", "text": "In this paper, we proposed a B-Snake based lane detection and tracking algorithm without any cameras\u2019 parameters. Compared with other lane models, the B-Snake based lane model is able to describe a wider range of lane structures since B-Spline can form any arbitrary shape by a set of control points. The problems of detecting both sides of lane markings (or boundaries) have been merged here as the problem of detecting the mid-line of the lane, by using the knowledge of the perspective parallel lines. Furthermore, a robust algorithm, called CHEVP, is presented for providing a good initial position for the B-Snake. Also, a minimum error method by Minimum Mean Square Error (MMSE) is proposed to determine the control points of the B-Snake model by the overall image forces on two sides of lane. Experimental results show that the proposed method is robust against noise, shadows, and illumination variations in the captured road images. It is also applicable to the marked and the unmarked roads, as well as the dash and the solid paint line roads. q 2003 Elsevier B.V. All rights reserved."}
{"_id": "1c0f7854c14debcc34368e210568696a01c40573", "title": "Using vanishing points for camera calibration", "text": "In this article a new method for the calibration of a vision system which consists of two (or more) cameras is presented. The proposed method, which uses simple properties of vanishing points, is divided into two steps. In the first step, the intrinsic parameters of each camera, that is, the focal length and the location of the intersection between the optical axis and the image plane, are recovered from a single image of a cube. In the second step, the extrinsic parameters of a pair of cameras, that is, the rotation matrix and the translation vector which describe the rigid motion between the coordinate systems fixed in the two cameras are estimated from an image stereo pair of a suitable planar pattern. Firstly, by matching the corresponding vanishing points in the two images the rotation matrix can be computed, then the translation vector is estimated by means of a simple triangulation. The robustness of the method against noise is discussed, and the conditions for optimal estimation of the rotation matrix are derived. Extensive experimentation shows that the precision that can be achieved with the proposed method is sufficient to efficiently perform machine vision tasks that require camera calibration, like depth from stereo and motion from image sequence."}
{"_id": "235aff8bdb65654163110b35f268de6933814c49", "title": "Realtime lane tracking of curved local road", "text": "A lane detection system is an important component of many intelligent transportation systems. We present a robust realtime lane tracking algorithm for a curved local road. First, we present a comparative study to find a good realtime lane marking classifier. Once lane markings are detected, they are grouped into many lane boundary hypotheses represented by constrained cubic spline curves. We present a robust hypothesis generation algorithm using a particle filtering technique and a RANSAC (random sample concensus) algorithm. We introduce a probabilistic approach to group lane boundary hypotheses into left and right lane boundaries. The proposed grouping approach can be applied to general part-based object tracking problems. It incorporates a likelihood-based object recognition technique into a Markov-style process. An experimental result on local streets shows that the suggested algorithm is very reliable"}
{"_id": "514ee2a4d6dec51d726012bd74b32b1e05f13271", "title": "The Ontological Foundation of REA Enterprise Information Systems", "text": "Philosophers have studied ontologies for centuries in their search for a systematic explanation of existence: \u201cWhat kind of things exist?\u201d Recently, ontologies have emerged as a major research topic in the fields of artificial intelligence and knowledge management where they address the content issue: \u201cWhat kind of things should we represent?\u201d The answer to that question differs with the scope of the ontology. Ontologies that are subject-independent are called upper-level ontologies, and they attempt to define concepts that are shared by all domains, such as time and space. Domain ontologies, on the other hand, attempt to define the things that are relevant to a specific application domain. Both types of ontologies are becoming increasingly important in the era of the Internet where consistent and machine-readable semantic definitions of economic phenomena become the language of e-commerce. In this paper, we propose the conceptual accounting framework of the Resource-Event-Agent (REA) model of McCarthy (1982) as an enterprise domain ontology, and we build upon the initial ontology work of Geerts and McCarthy (2000) which explored REA with respect to the ontological categorizations of John Sowa (1999). Because of its conceptual modeling heritage, REA already resembles an established ontology in many declarative (categories) and procedural (axioms) respects, and we also propose here to extend formally that framework both (1) vertically in terms of entrepreneurial logic (value chains) and workflow detail, and (2) horizontally in terms of type and commitment images of enterprise economic phenomena. A strong emphasis throughout the paper is given to the microeconomic foundations of the category definitions."}
{"_id": "944692d5d33fbc5f42294a8310380e0b057a1320", "title": "Dual- and Multiband U-Slot Patch Antennas", "text": "A wide band patch antenna fed by an L-probe can be designed for dual- and multi-band application by cutting U-slots on the patch. Simulation and measurement results are presented to illustrate this design."}
{"_id": "6800fbe3314be9f638fb075e15b489d1aadb3030", "title": "Advances in Collaborative Filtering", "text": "The collaborative filtering (CF) approach to recommenders has recently enjoyed much interest and progress. The fact that it played a central role within the recently completed Netflix competition has contributed to its popularity. This chapter surveys the recent progress in the field. Matrix factorization techniques, which became a first choice for implementing CF, are described together with recent innovations. We also describe several extensions that bring competitive accuracy into neighborhood methods, which used to dominate the field. The chapter demonstrates how to utilize temporal models and implicit feedback to extend models accuracy. In passing, we include detailed descriptions of some the central methods developed for tackling the challenge of the Netflix Prize competition."}
{"_id": "12bbec48c8fde83ea276402ffedd2e241e978a12", "title": "VirtualTable: a projection augmented reality game", "text": "VirtualTable is a projection augmented reality installation where users are engaged in an interactive tower defense game. The installation runs continuously and is designed to attract people to a table, which the game is projected onto. Any number of players can join the game for an optional period of time. The goal is to prevent the virtual stylized soot balls, spawning on one side of the table, from reaching the cheese. To stop them, the players can place any kind of object on the table, that then will become part of the game. Depending on the object, it will become either a wall, an obstacle for the soot balls, or a tower, that eliminates them within a physical range. The number of enemies is dependent on the number of objects in the field, forcing the players to use strategy and collaboration and not the sheer number of objects to win the game."}
{"_id": "ffd76d49439c078a6afc246e6d0638a01ad563f8", "title": "A Context-Aware Usability Model for Mobile Health Applications", "text": "Mobile healthcare is a fast growing area of research that capitalizes on mobile technologies and wearables to provide realtime and continuous monitoring and analysis of vital signs of users. Yet, most of the current applications are developed for general population without taking into consideration the context and needs of different user groups. Designing and developing mobile health applications and diaries according to the user context can significantly improve the quality of user interaction and encourage the application use. In this paper, we propose a user context model and a set of usability attributes for developing mobile applications in healthcare. The proposed model and the selected attributes are integrated into a mobile application development framework to provide user-centered and context-aware guidelines. To validate our framework, a mobile diary was implemented for patients undergoing Peritoneal Dialysis (PD) and tested with real users."}
{"_id": "8deafc34941a79b9cfc348ab63ec51752c7b1cde", "title": "New approach for clustering of big data: DisK-means", "text": "The exponential growth in the amount of data gathered from various sources has resulted in the need for more efficient algorithms to quickly analyze large datasets. Clustering techniques, like K-Means are useful in analyzing data in a parallel fashion. K-Means largely depends upon a proper initialization to produce optimal results. K-means++ initialization algorithm provides solution based on providing an initial set of centres to the K-Means algorithm. However, its inherent sequential nature makes it suffer from various limitations when applied to large datasets. For instance, it makes k iterations to find k centres. In this paper, we present an algorithm that attempts to overcome the drawbacks of previous algorithms. Our work provides a method to select a good initial seeding in less time, facilitating fast and accurate cluster analysis over large datasets."}
{"_id": "455d562bf02dcb5161c98668a5f5e470d02b70b8", "title": "A probabilistic constrained clustering for transfer learning and image category discovery", "text": "Neural network-based clustering has recently gained popularity, and in particular a constrained clustering formulation has been proposed to perform transfer learning and image category discovery using deep learning. The core idea is to formulate a clustering objective with pairwise constraints that can be used to train a deep clustering network; therefore the cluster assignments and their underlying feature representations are jointly optimized end-toend. In this work, we provide a novel clustering formulation to address scalability issues of previous work in terms of optimizing deeper networks and larger amounts of categories. The proposed objective directly minimizes the negative log-likelihood of cluster assignment with respect to the pairwise constraints, has no hyper-parameters, and demonstrates improved scalability and performance on both supervised learning and unsupervised transfer learning."}
{"_id": "e6bef595cb78bcad4880aea6a3a73ecd32fbfe06", "title": "Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach", "text": "The exponential increase in the availability of online reviews and recommendations makes sentiment classification an interesting topic in academic and industrial research. Reviews can span so many different domains that it is difficult to gather annotated training data for all of them. Hence, this paper studies the problem of domain adaptation for sentiment classifiers, hereby a system is trained on labeled reviews from one source domain but is meant to be deployed on another. We propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion. Sentiment classifiers trained with this high-level feature representation clearly outperform state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products. Furthermore, this method scales well and allowed us to successfully perform domain adaptation on a larger industrial-strength dataset of 22 domains."}
{"_id": "d77d2ab03f891d8f0822083020486a6de1f2900f", "title": "EEG Classification of Different Imaginary Movements within the Same Limb", "text": "The task of discriminating the motor imagery of different movements within the same limb using electroencephalography (EEG) signals is challenging because these imaginary movements have close spatial representations on the motor cortex area. There is, however, a pressing need to succeed in this task. The reason is that the ability to classify different same-limb imaginary movements could increase the number of control dimensions of a brain-computer interface (BCI). In this paper, we propose a 3-class BCI system that discriminates EEG signals corresponding to rest, imaginary grasp movements, and imaginary elbow movements. Besides, the differences between simple motor imagery and goal-oriented motor imagery in terms of their topographical distributions and classification accuracies are also being investigated. To the best of our knowledge, both problems have not been explored in the literature. Based on the EEG data recorded from 12 able-bodied individuals, we have demonstrated that same-limb motor imagery classification is possible. For the binary classification of imaginary grasp and elbow (goal-oriented) movements, the average accuracy achieved is 66.9%. For the 3-class problem of discriminating rest against imaginary grasp and elbow movements, the average classification accuracy achieved is 60.7%, which is greater than the random classification accuracy of 33.3%. Our results also show that goal-oriented imaginary elbow movements lead to a better classification performance compared to simple imaginary elbow movements. This proposed BCI system could potentially be used in controlling a robotic rehabilitation system, which can assist stroke patients in performing task-specific exercises."}
{"_id": "de0f84359078ec9ba79f4d0061fe73f6cac6591c", "title": "Single-Stage Single-Switch Four-Output Resonant LED Driver With High Power Factor and Passive Current Balancing", "text": "A resonant single-stage single-switch four-output LED driver with high power factor and passive current balancing is proposed. By controlling one output current, the other output currents of four-output LED driver can be controlled via passive current balancing, which makes its control simple. When magnetizing inductor current operates in critical conduction mode, unity power factor is achieved. The proposed LED driver uses only one active switch and one magnetic component, thus it benefits from low cost, small volume, and light weight. Moreover, high-efficiency performance is achieved due to single-stage power conversion and soft-switching characteristics. The characteristics of the proposed LED driver are studied in this paper and experimental results of two 110-W four-output isolated LED drivers are provided to verify the studied results."}
{"_id": "1924ae6773f09efcfc791454d42a3ec53207a815", "title": "Flexible Ambiguity Resolution and Incompleteness Detection in Requirements Descriptions via an Indicator-Based Configuration of Text Analysis Pipelines", "text": "Natural language software requirements descriptions enable end users to formulate their wishes and expectations for a future software product without much prior knowledge in requirements engineering. However, these descriptions are susceptible to linguistic inaccuracies such as ambiguities and incompleteness that can harm the development process. There is a number of software solutions that can detect deficits in requirements descriptions and partially solve them, but they are often hard to use and not suitable for end users. For this reason, we develop a software system that helps end-users to create unambiguous and complete requirements descriptions by combining existing expert tools and controlling them using automatic compensation strategies. In order to recognize the necessity of individual compensation methods in the descriptions, we have developed linguistic indicators, which we present in this paper. Based on these indicators, the whole text analysis pipeline is ad-hoc configured and thus adapted to the individual circumstances of a requirements description."}
{"_id": "727774c3a911d45ea6fe2d4ad66fd3b453a18c99", "title": "Correlating low-level image statistics with users - rapid aesthetic and affective judgments of web pages", "text": "In this paper, we report a study that examines the relationship between image-based computational analyses of web pages and users' aesthetic judgments about the same image material. Web pages were iteratively decomposed into quadrants of minimum entropy (quadtree decomposition) based on low-level image statistics, to permit a characterization of these pages in terms of their respective organizational symmetry, balance and equilibrium. These attributes were then evaluated for their correlation with human participants' subjective ratings of the same web pages on four aesthetic and affective dimensions. Several of these correlations were quite large and revealed interesting patterns in the relationship between low-level (i.e., pixel-level) image statistics and design-relevant dimensions."}
{"_id": "21c76cc8ebfb9c112c2594ce490b47e458b50e31", "title": "American Sign Language Recognition Using Leap Motion Sensor", "text": "In this paper, we present an American Sign Language recognition system using a compact and affordable 3D motion sensor. The palm-sized Leap Motion sensor provides a much more portable and economical solution than Cyblerglove or Microsoft kinect used in existing studies. We apply k-nearest neighbor and support vector machine to classify the 26 letters of the English alphabet in American Sign Language using the derived features from the sensory data. The experiment result shows that the highest average classification rate of 72.78% and 79.83% was achieved by k-nearest neighbor and support vector machine respectively. We also provide detailed discussions on the parameter setting in machine learning methods and accuracy of specific alphabet letters in this paper."}
{"_id": "519f5892938d4423cecc999b6e489b72fc0d0ca7", "title": "Cognitive, emotional, and behavioral considerations for chronic pain management in the Ehlers-Danlos syndrome hypermobility-type: a narrative review.", "text": "BACKGROUND\nEhlers-Danlos syndrome (EDS) hypermobility-type is the most common hereditary disorder of the connective tissue. The tissue fragility characteristic of this condition leads to multi-systemic symptoms in which pain, often severe, chronic, and disabling, is the most experienced. Clinical observations suggest that the complex patient with EDS hypermobility-type is refractory toward several biomedical and physical approaches. In this context and in accordance with the contemporary conceptualization of pain (biopsychosocial perspective), the identification of psychological aspects involved in the pain experience can be useful to improve interventions for this under-recognized pathology.\n\n\nPURPOSE\nReview of the literature on joint hypermobility and EDS hypermobility-type concerning psychological factors linked to pain chronicity and disability.\n\n\nMETHODS\nA comprehensive search was performed using scientific online databases and references lists, encompassing publications reporting quantitative and qualitative research as well as unpublished literature.\n\n\nRESULTS\nDespite scarce research, psychological factors associated with EDS hypermobility-type that potentially affect pain chronicity and disability were identified. These are cognitive problems and attention to body sensations, negative emotions, and unhealthy patterns of activity (hypo/hyperactivity).\n\n\nCONCLUSIONS\nAs in other chronic pain conditions, these aspects should be more explored in EDS hypermobility-type, and integrated into chronic pain prevention and management programs. Implications for Rehabilitation Clinicians should be aware that joint hypermobility may be associated with other health problems, and in its presence suspect a heritable disorder of connective tissue such as the Ehlers-Danlos syndrome (EDS) hypermobility-type, in which chronic pain is one of the most frequent and invalidating symptoms. It is necessary to explore the psychosocial functioning of patients as part of the overall chronic pain management in the EDS hypermobility-type, especially when they do not respond to biomedical approaches as psychological factors may be operating against rehabilitation. Further research on the psychological factors linked to pain chronicity and disability in the EDS hypermobility-type is needed."}
{"_id": "7d2fda30e52c39431dbb90ae065da036a55acdc7", "title": "A brief review: factors affecting the length of the rest interval between resistance exercise sets.", "text": "Research has indicated that multiple sets are superior to single sets for maximal strength development. However, whether maximal strength gains are achieved may depend on the ability to sustain a consistent number of repetitions over consecutive sets. A key factor that determines the ability to sustain repetitions is the length of rest interval between sets. The length of the rest interval is commonly prescribed based on the training goal, but may vary based on several other factors. The purpose of this review was to discuss these factors in the context of different training goals. When training for muscular strength, the magnitude of the load lifted is a key determinant of the rest interval prescribed between sets. For loads less than 90% of 1 repetition maximum, 3-5 minutes rest between sets allows for greater strength increases through the maintenance of training intensity. However, when testing for maximal strength, 1-2 minutes rest between sets might be sufficient between repeated attempts. When training for muscular power, a minimum of 3 minutes rest should be prescribed between sets of repeated maximal effort movements (e.g., plyometric jumps). When training for muscular hypertrophy, consecutive sets should be performed prior to when full recovery has taken place. Shorter rest intervals of 30-60 seconds between sets have been associated with higher acute increases in growth hormone, which may contribute to the hypertrophic effect. When training for muscular endurance, an ideal strategy might be to perform resistance exercises in a circuit, with shorter rest intervals (e.g., 30 seconds) between exercises that involve dissimilar muscle groups, and longer rest intervals (e.g., 3 minutes) between exercises that involve similar muscle groups. In summary, the length of the rest interval between sets is only 1 component of a resistance exercise program directed toward different training goals. Prescribing the appropriate rest interval does not ensure a desired outcome if other components such as intensity and volume are not prescribed appropriately."}
{"_id": "fe0643f3405c22fe7ca0b7d1274a812d6e3e5a11", "title": "Silicon carbide power MOSFETs: Breakthrough performance from 900 V up to 15 kV", "text": "Since Cree, Inc.'s 2nd generation 4H-SiC MOSFETs were commercially released with a specific on-resistance (RON, SP) of 5 m\u03a9\u00b7cm2 for a 1200 V-rating in early 2013, we have further optimized the device design and fabrication processes as well as greatly expanded the voltage ratings from 900 V up to 15 kV for a much wider range of high-power, high-frequency, and high-voltage energy-conversion and transmission applications. Using these next-generation SiC MOSFETs, we have now achieved new breakthrough performance for voltage ratings from 900 V up to 15 kV with a RON, SP as low as 2.3 m\u03a9\u00b7cm2 for a breakdown voltage (BV) of 1230 V and 900 V-rating, 2.7 m\u03a9\u00b7cm2 for a BV of 1620 V and 1200 V-rating, 3.38 m\u03a9\u00b7cm2 for a BV of 1830 V and 1700 V-rating, 10.6 m\u03a9\u00b7cm2 for a BV of 4160 V and 3300 V-rating, 123 m\u03a9\u00b7cm2 for a BV of 12 kV and 10 kV-rating, and 208 m\u03a9\u00b7cm2 for a BV of 15.5 kV and 15 kV-rating. In addition, due to the lack of current tailing during the bipolar device switching turn-off, the SiC MOSFETs reported in this work exhibit incredibly high frequency switching performance over their silicon counter parts."}
{"_id": "011d4ccb74f32f597df54ac8037a7903bd95038b", "title": "The evolution of human skin coloration.", "text": "Skin color is one of the most conspicuous ways in which humans vary and has been widely used to define human races. Here we present new evidence indicating that variations in skin color are adaptive, and are related to the regulation of ultraviolet (UV) radiation penetration in the integument and its direct and indirect effects on fitness. Using remotely sensed data on UV radiation levels, hypotheses concerning the distribution of the skin colors of indigenous peoples relative to UV levels were tested quantitatively in this study for the first time. The major results of this study are: (1) skin reflectance is strongly correlated with absolute latitude and UV radiation levels. The highest correlation between skin reflectance and UV levels was observed at 545 nm, near the absorption maximum for oxyhemoglobin, suggesting that the main role of melanin pigmentation in humans is regulation of the effects of UV radiation on the contents of cutaneous blood vessels located in the dermis. (2) Predicted skin reflectances deviated little from observed values. (3) In all populations for which skin reflectance data were available for males and females, females were found to be lighter skinned than males. (4) The clinal gradation of skin coloration observed among indigenous peoples is correlated with UV radiation levels and represents a compromise solution to the conflicting physiological requirements of photoprotection and vitamin D synthesis. The earliest members of the hominid lineage probably had a mostly unpigmented or lightly pigmented integument covered with dark black hair, similar to that of the modern chimpanzee. The evolution of a naked, darkly pigmented integument occurred early in the evolution of the genus Homo. A dark epidermis protected sweat glands from UV-induced injury, thus insuring the integrity of somatic thermoregulation. Of greater significance to individual reproductive success was that highly melanized skin protected against UV-induced photolysis of folate (Branda & Eaton, 1978, Science201, 625-626; Jablonski, 1992, Proc. Australas. Soc. Hum. Biol.5, 455-462, 1999, Med. Hypotheses52, 581-582), a metabolite essential for normal development of the embryonic neural tube (Bower & Stanley, 1989, The Medical Journal of Australia150, 613-619; Medical Research Council Vitamin Research Group, 1991, The Lancet338, 31-37) and spermatogenesis (Cosentino et al., 1990, Proc. Natn. Acad. Sci. U.S.A.87, 1431-1435; Mathur et al., 1977, Fertility Sterility28, 1356-1360).As hominids migrated outside of the tropics, varying degrees of depigmentation evolved in order to permit UVB-induced synthesis of previtamin D(3). The lighter color of female skin may be required to permit synthesis of the relatively higher amounts of vitamin D(3)necessary during pregnancy and lactation. Skin coloration in humans is adaptive and labile. Skin pigmentation levels have changed more than once in human evolution. Because of this, skin coloration is of no value in determining phylogenetic relationships among modern human groups."}
{"_id": "d87d70ecd0fdf0976cebbeaeacf25ad9872ffde1", "title": "Robust and false positive free watermarking in IWT domain using SVD and ABC", "text": "Watermarking is used to protect the copyrighted materials from being misused and help us to know the lawful ownership. The security of any watermarking scheme is always a prime concern for the developer. In this work, the robustness and security issue of IWT (integer wavelet transform) and SVD (singular value decomposition) based watermarking is explored. Generally, SVD based watermarking techniques suffer with an issue of false positive problem. This leads to even authenticating the wrong owner. We are proposing a novel solution to this false positive problem; that arises in SVD based approach. Firstly, IWT is employed on the host image and then SVD is performed on this transformed host. The properties of IWT and SVD help in achieving high value of robustness. Singular values are used for the watermark embedding. In order to further improve the quality of watermarking, the optimization of scaling factor (mixing ratio) is performed with the help of artificial bee colony (ABC) algorithm. A comparison with other schemes is performed to show the superiority of proposed scheme. & 2015 Elsevier Ltd. All rights reserved."}
{"_id": "ae3ebe6c69fdb19e12d3218a5127788fae269c10", "title": "A Literature Survey of Benchmark Functions For Global Optimization Problems", "text": "Test functions are important to validate and compare the performance of optimization algorithms. There have been many test or benchmark functions reported in the literature; however, there is no standard list or set of benchmark functions. Ideally, test functions should have diverse properties so that can be truly useful to test new algorithms in an unbiased way. For this purpose, we have reviewed and compiled a rich set of 175 benchmark functions for unconstrained optimization problems with diverse properties in terms of modality, separability, and valley landscape. This is by far the most complete set of functions so far in the literature, and tt can be expected this complete set of functions can be used for validation of new optimization in the future."}
{"_id": "d28235adc2c8c6fdfaa474bc2bab931129149fd6", "title": "Approaches to Measuring the Difficulty of Games in Dynamic Difficulty Adjustment Systems", "text": "In this article, three approaches are proposed for measuring difficulty that can be useful in developing Dynamic Difficulty Adjustment (DDA) systems in different game genres. Our analysis of the existing DDA systems shows that there are three ways to measure the difficulty of the game: using the formal model of gameplay, using the features of the game, and direct examination of the player. These approaches are described in this article and supplemented by appropriate examples of DDA implementations. In addition, the article describes the distinction between task complexity and task difficulty in DDA systems. Separating task complexity (especially the structural one) is suggested, which is an objective characteristic of the task, and task difficulty, which is related to the interaction between the task and the task performer."}
{"_id": "5c881260bcc64070b2b33c10d28f23f793b8344f", "title": "A low-voltage, low quiescent current, low drop-out regulator", "text": "The demand for low voltage, low drop-out (LDO) regulators is increasing because of the growing demand for portable electronics, i.e., cellular phones, pagers, laptops, etc. LDOs are used coherently with dc-dc converters as well as standalone parts. In power supply systems, they are typically cascaded onto switching regulators to suppress noise and provide a low noise output. The need for low voltage is innate to portable low power devices and corroborated by lower breakdown voltages resulting from reductions in feature size. Low quiescent current in a battery operated system is an intrinsic performance parameter because it partially determines battery life. This paper discusses some techniques that enable the practical realizations of low quiescent current LDOs at low voltages and in existing technologies. The proposed circuit exploits the frequency response dependence on load-current to minimize quiescent current flow. Moreover, the output current capabilities of MOS power transistors are enhanced and drop-out voltages are decreased for a given device size. Other applications, like dc-dc converters, can also reap the benefits of these enhanced MOS devices. An LDO prototype incorporating the aforementioned techniques was fabricated. The circuit was operable down to input voltages of 1 V with a zero-load quiescent current flow of 23 \u03bcA. Moreover, the regulator provided 18 and 50 mA of output current at input voltages of 1 and 1.2 V respectively."}
{"_id": "950ff860dbc8a24fc638ac942ce9c1f51fb24899", "title": "Where to Go Next: A Spatio-temporal LSTM model for Next POI Recommendation", "text": "Next Point-of-Interest (POI) recommendation is of great value for both location-based service providers and users. Recently Recurrent Neural Networks (RNNs) have been proved to be effective on sequential recommendation tasks. However, existing RNN solutions rarely consider the spatio-temporal intervals between neighbor checkins, which are essential for modeling user check-in behaviors in next POI recommendation. In this paper, we propose a new variant of LSTM, named STLSTM, which implements time gates and distance gates into LSTM to capture the spatio-temporal relation between successive check-ins. Specifically, one time gate and one distance gate are designed to control short-term interest update, and another time gate and distance gate are designed to control long-term interest update. Furthermore, to reduce the number of parameters and improve efficiency, we further integrate coupled input and forget gates with our proposed model. Finally, we evaluate the proposed model using four real-world datasets from various location-based social networks. Our experimental results show that our model significantly outperforms the state-of-the-art approaches for next POI recommendation."}
{"_id": "f99a50ce62845c62d9fcdec277e0857350534cc9", "title": "Absorptive Frequency-Selective Transmission Structure With Square-Loop Hybrid Resonator", "text": "A novel design of an absorptive frequency-selective transmission structure (AFST) is proposed. This structure is based on the design of a frequency-dependent lossy layer with square-loop hybrid resonator (SLHR). The parallel resonance provided by the hybrid resonator is utilized to bypass the lossy path and improve the insertion loss. Meanwhile, the series resonance of the hybrid resonator is used for expanding the upper absorption bandwidth. Furthermore, the absorption for out-of-band frequencies is achieved by using four metallic strips with lumped resistors, which are connected with the SLHR. The quantity of lumped elements required in a unit cell can be reduced by at least 50% compared to previous structures. The design guidelines are explained with the aid of an equivalent circuit model. Both simulation and experiment results are presented to demonstrate the performance of our AFST. It is shown that an insertion loss of 0.29 dB at 6.1 GHz and a 112.4% 10 dB reflection reduction bandwidth are obtained under the normal incidence."}
{"_id": "26f70336acf7247a35d3c0be6308fe29f25d2872", "title": "Implementation of AES-GCM encryption algorithm for high performance and low power architecture Using FPGA", "text": "Evaluation of the Advanced Encryption Standard (AES) algorithm in FPGA is proposed here. This Evaluation is compared with other works to show the efficiency. Here we are concerned about two major purposes. The first is to define some of the terms and concepts behind basic cryptographic methods, and to offer a way to compare the myriad cryptographic schemes in use today. The second is to provide some real examples of cryptography in use today. The design uses an iterative looping approach with block and key size of 128 bits, lookup table implementation of S-box. This gives low complexity architecture and easily achieves low latency as well as high throughput. Simulation results, performance results are presented and compared with previous reported designs. Since its acceptance as the adopted symmetric-key algorithm, the Advanced Encryption Standard (AES) and its recently standardized authentication Galois/Counter Mode (GCM) have been utilized in various security-constrained applications. Many of the AES-GCM applications are power and resource constrained and requires efficient hardware implementations. In this project, AES-GCM algorithms are evaluated and optimized to identify the high-performance and low-power architectures. The Advanced Encryption Standard (AES) is a specification for the encryption of electronic data. The Cipher Block Chaining (CBC) mode is a confidentiality mode whose encryption process features the combining (\u201cchaining\u201d) of the plaintext blocks with the previous Cipher text blocks. The CBC mode requires an IV to combine with the first plaintext block. The IV need not be secret, but it must be unpredictable. Also, the integrity of the IV should be protected. Galois/Counter Mode (GCM) is a block cipher mode of operation that uses universal hashing over a binary Galois field to provide authenticated encryption. Galois Hash is used for authentication, and the Advanced Encryption Standard (AES) block cipher is used for encryption in counter mode of operation. To obtain the least-complexity S-box, the formulations for the Galois Field (GF) sub-field inversions in GF (24) are optimized By conducting exhaustive simulations for the input transitions, we analyze the synthesis of the AES S-boxes considering the switching activities, gate-level net lists, and parasitic information. Finally, by implementation of AES-GCM the high-performance GF (2128) multiplier architectures, gives the detailed information of its performance. An optimized coding for the implementation of Advanced Encryption Standard-Galois Counter Mode has been developed. The speed factor of the algorithm implementation has been targeted and a software code in Verilog HDL has been developed. This implementation is useful in wireless security like military communication and mobile telephony where there is a grayer emphasis on the speed of communication."}
{"_id": "03f64a5989e4d2ecab989d9724ad4cc58f976daf", "title": "Multi-Document Summarization using Sentence-based Topic Models", "text": "Most of the existing multi-document summarization methods decompose the documents into sentences and work directly in the sentence space using a term-sentence matrix. However, the knowledge on the document side, i.e. the topics embedded in the documents, can help the context understanding and guide the sentence selection in the summarization procedure. In this paper, we propose a new Bayesian sentence-based topic model for summarization by making use of both the term-document and term-sentence associations. An efficient variational Bayesian algorithm is derived for model parameter estimation. Experimental results on benchmark data sets show the effectiveness of the proposed model for the multi-document summarization task."}
{"_id": "9a1b3247fc7f0abf892a40884169e0ed10d3b684", "title": "Intrusion detection by machine learning: A review", "text": "The popularity of using Internet contains some risks of network attacks. Intrusion detection is one major research problem in network security, whose aim is to identify unusual access or attacks to secure internal networks. In literature, intrusion detection systems have been approached by various machine learning techniques. However, there is no a review paper to examine and understand the current status of using machine learning techniques to solve the intrusion detection problems. This chapter reviews 55 related studies in the period between 2000 and 2007 focusing on developing single, hybrid, and ensemble classifiers. Related studies are compared by their classifier design, datasets used, and other experimental setups. Current achievements and limitations in developing intrusion detection systems by machine learning are present and discussed. A number of future research directions are also provided. 2009 Elsevier Ltd. All rights reserved."}
{"_id": "a10d128fd95710308dfee83953c5b26293b9ede7", "title": "Combining OpenFlow and sFlow for an effective and scalable anomaly detection and mitigation mechanism on SDN environments", "text": "Software Defined Networks (SDNs) based on the OpenFlow (OF) protocol export controlplane programmability of switched substrates. As a result, rich functionality in traffic management, load balancing, routing, firewall configuration, etc. that may pertain to specific flows they control, may be easily developed. In this paper we extend these functionalities with an efficient and scalable mechanism for performing anomaly detection and mitigation in SDN architectures. Flow statistics may reveal anomalies triggered by large scale malicious events (typically massive Distributed Denial of Service attacks) and subsequently assist networked resource owners/operators to raise mitigation policies against these threats. First, we demonstrate that OF statistics collection and processing overloads the centralized control plane, introducing scalability issues. Second, we propose a modular architecture for the separation of the data collection process from the SDN control plane with the employment of sFlow monitoring data. We then report experimental results that compare its performance against native OF approaches that use standard flow table statistics. Both alternatives are evaluated using an entropy-based method on high volume real network traffic data collected from a university campus network. The packet traces were fed to hardware and software OF devices in order to assess flow-based datagathering and related anomaly detection options. We subsequently present experimental results that demonstrate the effectiveness of the proposed sFlow-based mechanism compared to the native OF approach, in terms of overhead imposed on usage of system resources. Finally, we conclude by demonstrating that once a network anomaly is detected and identified, the OF protocol can effectively mitigate it via flow table modifications. 2013 Elsevier B.V. All rights reserved."}
{"_id": "c84b10c01a84f26fe8a1c978c919fbe5a9f9a661", "title": "Software-Defined Networking: A Comprehensive Survey", "text": "The Internet has led to the creation of a digital society, where (almost) everything is connected and is accessible from anywhere. However, despite their widespread adoption, traditional IP networks are complex and very hard to manage. It is both difficult to configure the network according to predefined policies, and to reconfigure it to respond to faults, load, and changes. To make matters even more difficult, current networks are also vertically integrated: the control and data planes are bundled together. Software-defined networking (SDN) is an emerging paradigm that promises to change this state of affairs, by breaking vertical integration, separating the network\u2019s control logic from the underlying routers and switches, promoting (logical) centralization of network control, and introducing the ability to program the network. The separation of concerns, introduced between the definition of network policies, their implementation in switching hardware, and the forwarding of traffic, is key to the desired flexibility: by breaking the network control problem into tractable pieces, SDN makes it easier to create and introduce new abstractions in networking, simplifying network management and facilitating network evolution. In this paper, we present a comprehensive survey on SDN. We start by introducing the motivation for SDN, explain its main concepts and how it differs from traditional networking, its roots, and the standardization activities regarding this novel paradigm. Next, we present the key building blocks of an SDN infrastructure using a bottom-up, layered approach. We provide an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications. We also look at cross-layer problems such as debugging and troubleshooting. In an effort to anticipate the future evolution of this new paradigm, we discuss the main ongoing research efforts and challenges of SDN. In particular, we address the design of switches and control platforms with a focus on aspects such as resiliency, scalability, performance, security, and dependabilityVas well as new opportunities for carrier transport networks and cloud providers. Last but not least, we analyze the position of SDN as a key enabler of a software-defined"}
{"_id": "1821fbfc03a45af816a8d7aef50321654b0aeec0", "title": "Revisiting Traffic Anomaly Detection Using Software Defined Networking", "text": "Despite their exponential growth, home and small office/home office networks continue to be poorly managed. Consequently, security of hosts in most home networks is easily compromised and these hosts are in turn used for largescale malicious activities without the home users\u2019 knowledge. We argue that the advent of Software Defined Networking (SDN) provides a unique opportunity to effectively detect and contain network security problems in home and home office networks. We show how four prominent traffic anomaly detection algorithms can be implemented in an SDN context using Openflow compliant switches and NOX as a controller. Our experiments indicate that these algorithms are significantly more accurate in identifying malicious activities in the home networks as compared to the ISP. Furthermore, the efficiency analysis of our SDN implementations on a programmable home network router indicates that the anomaly detectors can operate at line rates without introducing any performance penalties for the home network traffic."}
{"_id": "3192a953370bc8bf4b906261e8e2596355d2b610", "title": "A clean slate 4D approach to network control and management", "text": "Today's data networks are surprisingly fragile and difficult to manage. We argue that the root of these problems lies in the complexity of the control and management planes--the software and protocols coordinating network elements--and particularly the way the decision logic and the distributed-systems issues are inexorably intertwined. We advocate a complete refactoring of the functionality and propose three key principles--network-level objectives, network-wide views, and direct control--that we believe should underlie a new architecture. Following these principles, we identify an extreme design point that we call \"4D,\" after the architecture's four planes: decision, dissemination, discovery, and data. The 4D architecture completely separates an AS's decision logic from pro-tocols that govern the interaction among network elements. The AS-level objectives are specified in the decision plane, and en-forced through direct configuration of the state that drives how the data plane forwards packets. In the 4D architecture, the routers and switches simply forward packets at the behest of the decision plane, and collect measurement data to aid the decision plane in controlling the network. Although 4D would involve substantial changes to today's control and management planes, the format of data packets does not need to change; this eases the deployment path for the 4D architecture, while still enabling substantial innovation in network control and management. We hope that exploring an extreme design point will help focus the attention of the research and industrial communities on this crucially important and intellectually challenging area."}
{"_id": "883e3a3950968ebf8d03d3281076671538660c7c", "title": "Sensing spatial distribution of urban land use by integrating points-of-interest and Google Word2Vec model", "text": "Urban land use information plays an essential role in a wide variety of urban planning and environmental monitoring processes. During the past few decades, with the rapid technological development of remote sensing (RS), geographic information systems (GIS) and geospatial big data, numerous methods have been developed to identify urban land use at a fine scale. Points-of-interest (POIs) have been widely used to extract information pertaining to urban land use types and functional zones. However, it is difficult to quantify the relationship between spatial distributions of POIs and regional land use types due to a lack of reliable models. Previous methods may ignore abundant spatial features that can be extracted from POIs. In this study, we establish an innovative framework that detects urban land use distributions at the scale of traffic analysis zones (TAZs) by integrating Baidu POIs and a Word2Vec model. This framework was implemented using a Google open-source model of a deep-learning language in 2013. First, data for the Pearl River Delta (PRD) are transformed into a TAZ-POI corpus using a greedy algorithmby considering the spatial distributions of TAZs and inner POIs. Then, high-dimensional characteristic vectors of POIs and TAZs are extracted using the Word2Vec model. Finally, to validate the reliability of the POI/TAZ vectors, we implement a K-Means-based clustering model to analyze correlations between the POI/TAZ vectors and deploy TAZ vectors to identify urban land use types using a random forest algorithm (RFA) model. Compared with some state-of-the-art probabilistic topic models (PTMs), the proposed method can efficiently obtain the highest accuracy (OA = 0.8728, kappa = 0.8399). Moreover, the results can be used to help urban planners to monitor dynamic urban land use and evaluate the impact of urban planning schemes. ARTICLE HISTORY Received 21 March 2016 Accepted 28 September 2016"}
{"_id": "b0f7423f93e7c6e506c115771ef82440077a732a", "title": "Full virtualization based ARINC 653 partitioning", "text": "As the number of electronic components of avionics systems is significantly increasing, it is desirable to run several avionics software on a single computing device. In such system, providing a seamless way to integrate separate applications on a computing device is a very critical issue as the Integrated Modular Avionics (IMA) concept addresses. In this context, the ARINC 653 standard defines resource partitioning of avionics application software. The virtualization technology has very high potential of providing an optimal implementation of the partition concept. In this paper, we study supports for full virtualization based ARINC 653 partitioning. The supports include extension of XML-based configuration file format and hierarchical scheduler for temporal partitioning. We show that our implementation can support well-known VMMs, such as VirtualBox and VMware and present basic performance numbers."}
{"_id": "5fa463ad51c0fda19cf6a32d851a12eec5e872b1", "title": "Human Identification From Freestyle Walks Using Posture-Based Gait Feature", "text": "With the increase of terrorist threats around the world, human identification research has become a sought after area of research. Unlike standard biometric recognition techniques, gait recognition is a non-intrusive technique. Both data collection and classification processes can be done without a subject\u2019s cooperation. In this paper, we propose a new model-based gait recognition technique called postured-based gait recognition. It consists of two elements: posture-based features and posture-based classification. Posture-based features are composed of displacements of all joints between current and adjacent frames and center-of-body (CoB) relative coordinates of all joints, where the coordinates of each joint come from its relative position to four joints: hip-center, hip-left, hip-right, and spine joints, from the front forward. The CoB relative coordinate system is a critical part to handle the different observation angle issue. In posture-based classification, postured-based gait features of all frames are considered. The dominant subject becomes a classification result. The postured-based gait recognition technique outperforms the existing techniques in both fixed direction and freestyle walk scenarios, where turning around and changing directions are involved. This suggests that a set of postures and quick movements are sufficient to identify a person. The proposed technique also performs well under the gallery-size test and the cumulative match characteristic test, which implies that the postured-based gait recognition technique is not gallery-size sensitive and is a good potential tool for forensic and surveillance use."}
{"_id": "602f775577a5458e8b6c5d5a3cdccc7bb183662c", "title": "Comparing comprehension measured by multiple-choice and open-ended questions.", "text": "This study compared the nature of text comprehension as measured by multiple-choice format and open-ended format questions. Participants read a short text while explaining preselected sentences. After reading the text, participants answered open-ended and multiple-choice versions of the same questions based on their memory of the text content. The results indicated that performance on open-ended questions was correlated with the quality of self-explanations, but performance on multiple-choice questions was correlated with the level of prior knowledge related to the text. These results suggest that open-ended and multiple-choice format questions measure different aspects of comprehension processes. The results are discussed in terms of dual process theories of text comprehension."}
{"_id": "ebeca41ac60c2151137a45fcc5d1a70a419cad65", "title": "Current location-based next POI recommendation", "text": "Availability of large volume of community contributed location data enables a lot of location providing services and these services have attracted many industries and academic researchers by its importance. In this paper we propose the new recommender system that recommends the new POI for next hours. First we find the users with similar check-in sequences and depict their check-in sequences as a directed graph, then find the users current location. To recommend the new POI recommendation for next hour we refer to the directed graph we have created. Our algorithm considers both the temporal factor i.e., recommendation time, and the spatial(distance) at the same time. We conduct an experiment on random data collected from Foursquare and Gowalla. Experiment results show that our proposed model outperforms the collaborative-filtering based state-of-the-art recommender techniques."}
{"_id": "08952d434a9b6f1dc9281f2693b2dd855edcda6b", "title": "SiRiUS: Securing Remote Untrusted Storage", "text": "This paper presents SiRiUS, a secure file system designed to be layered over insecure network and P2P file systems such as NFS, CIFS, OceanStore, and Yahoo! Briefcase. SiRiUS assumes the network storage is untrusted and provides its own read-write cryptographic access control for file level sharing. Key management and revocation is simple with minimal out-of-band communication. File system freshness guarantees are supported by SiRiUS using hash tree constructions. SiRiUS contains a novel method of performing file random access in a cryptographic file system without the use of a block server. Extensions to SiRiUS include large scale group sharing using the NNL key revocation construction. Our implementation of SiRiUS performs well relative to the underlying file system despite using cryptographic operations."}
{"_id": "adeca3a75008d92cb52f5f2561dda7005a8814a4", "title": "Calibrated fuzzy AHP for current bank account selection", "text": "0957-4174/$ see front matter 2012 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2012.12.089 \u21d1 Corresponding author. Tel.: +44 23 92 844171. E-mail addresses: Alessio.Ishizaka@port.ac.uk (A. I com (N.H. Nguyen). Fuzzy AHP is a hybrid method that combines Fuzzy Set Theory and AHP. It has been developed to take into account uncertainty and imprecision in the evaluations. Fuzzy Set Theory requires the definition of a membership function. At present, there are no indications of how these membership functions can be constructed. In this paper, a way to calibrate the membership functions with comparisons given by the decision-maker on alternatives with known measures is proposed. This new technique is illustrated in a study measuring the most important factors in selecting a student current account. 2012 Elsevier Ltd. All rights reserved."}
{"_id": "539b15c0215582d12e2228d486374651c21ac75d", "title": "Lane-Change Fuzzy Control in Autonomous Vehicles for the Overtaking Maneuver", "text": "The automation of the overtaking maneuver is considered to be one of the toughest challenges in the development of autonomous vehicles. This operation involves two vehicles (the overtaking and the overtaken) cooperatively driving, as well as the surveillance of any other vehicles that are involved in the maneuver. This operation consists of two lane changes-one from the right to the left lane of the road, and the other is to return to the right lane after passing. Lane-change maneuvers have been used to move into or out of a circulation lane or platoon; however, overtaking operations have not received much coverage in the literature. In this paper, we present an overtaking system for autonomous vehicles equipped with path-tracking and lane-change capabilities. The system uses fuzzy controllers that mimic human behavior and reactions during overtaking maneuvers. The system is based on the information that is supplied by a high-precision Global Positioning System and a wireless network environment. It is able to drive an automated vehicle and overtake a second vehicle that is driving in the same lane of the road."}
{"_id": "a306754e556446a5199e258f464fd6e26be547fe", "title": "Safety and Efficacy of Selective Neurectomy of the Gastrocnemius Muscle for Calf Reduction in 300 Cases", "text": "Liposuction alone is not always sufficient to correct the shape of the lower leg, and muscle reduction may be necessary. To assess the outcomes of a new technique of selective neurectomy of the gastrocnemius muscle to correct calf hypertrophy. Between October 2007 and May 2010, 300 patients underwent neurectomy of the medial and lateral heads of the gastrocnemius muscle at the Department of\u00a0Cosmetic and Plastic Surgery, the Second People\u2019s Hospital of Guangdong Province (Guangzhou, China) to correct the shape of their lower legs. Follow-up data from these 300 patients were analyzed retrospectively. Cosmetic results were evaluated independently by the surgeon, the patient, and a third party. Preoperative and postoperative calf circumferences were compared. The Fugl-Meyer motor function assessment was evaluated 3\u00a0months after surgery. The average reduction in calf circumference was 3.2\u00a0\u00b1\u00a01.2\u00a0cm. The Fugl-Meyer scores were normal in all patients both before and 3\u00a0months after surgery. A normal calf shape was achieved in all patients. Six patients complained of fatigue while walking and four of scar pigmentation, but in all cases, this resolved within 6\u00a0months. Calf asymmetry was observed in only two patients. The present series suggests that neurectomy of the medial and lateral heads of the gastrocnemius muscle may be safe and effective for correcting the shape of the calves. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 ."}
{"_id": "f31e0932a2f35a6d7feff20977ce08b5b5398c60", "title": "Structure of the tendon connective tissue.", "text": "Tendons consist of collagen (mostly type I collagen) and elastin embedded in a proteoglycan-water matrix with collagen accounting for 65-80% and elastin approximately 1-2% of the dry mass of the tendon. These elements are produced by tenoblasts and tenocytes, which are the elongated fibroblasts and fibrocytes that lie between the collagen fibers, and are organized in a complex hierarchical scheme to form the tendon proper. Soluble tropocollagen molecules form cross-links to create insoluble collagen molecules which then aggregate progressively into microfibrils and then into electronmicroscopically clearly visible units, the collagen fibrils. A bunch of collagen fibrils forms a collagen fiber, which is the basic unit of a tendon. A fine sheath of connective tissue called endotenon invests each collagen fiber and binds fibers together. A bunch of collagen fibers forms a primary fiber bundle, and a group of primary fiber bundles forms a secondary fiber bundle. A group of secondary fiber bundles, in turn, forms a tertiary bundle, and the tertiary bundles make up the tendon. The entire tendon is surrounded by a fine connective tissue sheath called epitenon. The three-dimensional ultrastructure of tendon fibers and fiber bundles is complex. Within one collagen fiber, the fibrils are oriented not only longitudinally but also transversely and horizontally. The longitudinal fibers do not run only parallel but also cross each other, forming spirals. Some of the individual fibrils and fibril groups form spiral-type plaits. The basic function of the tendon is to transmit the force created by the muscle to the bone, and, in this way, make joint movement possible. The complex macro- and microstructure of tendons and tendon fibers make this possible. During various phases of movements, the tendons are exposed not only to longitudinal but also to transversal and rotational forces. In addition, they must be prepared to withstand direct contusions and pressures. The above-described three-dimensional internal structure of the fibers forms a buffer medium against forces of various directions, thus preventing damage and disconnection of the fibers."}
{"_id": "6939327c1732e027130f0706b6279f78b8ecd2b7", "title": "Flexible Container-Based Computing Platform on Cloud for Scientific Workflows", "text": "Cloud computing is expected to be a promising solution for scientific computing. In this paper, we propose a flexible container-based computing platform to run scientific workflows on cloud. We integrate Galaxy, a popular biology workflow system, with four famous container cluster systems. Preliminary evaluation shows that container cluster systems introduce negligible performance overhead for data intensive scientific workflows, meanwhile, they are able to solve tool installation problem, guarantee reproducibility and improve resource utilization. Moreover, we implement four ways of using Docker, the most popular container tool, for our platform. Docker in Docker and Sibling Docker, which run everything within containers, both help scientists easily deploy our platform on any clouds in a few minutes."}
{"_id": "545dd72cd0357995144bb19bef132bcc67a52667", "title": "Voiced-Unvoiced Classification of Speech Using a Neural Network Trained with LPC Coefficients", "text": "Voiced-Unvoiced classification (V-UV) is a well understood but still not perfectly solved problem. It tackles the problem of determining whether a signal frame contains harmonic content or not. This paper presents a new approach to this problem using a conventional multi-layer perceptron neural network trained with linear predictive coding (LPC) coefficients. LPC is a method that results in a number of coefficients that can be transformed to the envelope of the spectrum of the input frame. As a spectrum is suitable for determining the harmonic content, so are the LPC-coefficients. The proposed neural network works reasonably well compared to other approaches and has been evaluated on a small dataset of 4 different speakers."}
{"_id": "89cbcc1e740a4591443ff4765a6ae8df0fdf5554", "title": "Piaget \u2019 s Constructivism , Papert \u2019 s Constructionism : What \u2019 s the difference ?", "text": "What is the difference between Piaget's constructivism and Papert\u2019s \u201cconstructionism\u201d? Beyond the mere play on the words, I think the distinction holds, and that integrating both views can enrich our understanding of how people learn and grow. Piaget\u2019s constructivism offers a window into what children are interested in, and able to achieve, at different stages of their development. The theory describes how children\u2019s ways of doing and thinking evolve over time, and under which circumstance children are more likely to let go of\u2014or hold onto\u2014 their currently held views. Piaget suggests that children have very good reasons not to abandon their worldviews just because someone else, be it an expert, tells them they\u2019re wrong. Papert\u2019s constructionism, in contrast, focuses more on the art of learning, or \u2018learning to learn\u2019, and on the significance of making things in learning. Papert is interested in how learners engage in a conversation with [their own or other people\u2019s] artifacts, and how these conversations boost self-directed learning, and ultimately facilitate the construction of new knowledge. He stresses the importance of tools, media, and context in human development. Integrating both perspectives illuminates the processes by which individuals come to make sense of their experience, gradually optimizing their interactions with the world"}
{"_id": "19c05a149bb20f27dd0eca0ec3ac847390b2d100", "title": "Microphone array processing for distant speech recognition: Towards real-world deployment", "text": "Distant speech recognition (DSR) holds out the promise of providing a natural human computer interface in that it enables verbal interactions with computers without the necessity of donning intrusive body- or head-mounted devices. Recognizing distant speech robustly, however, remains a challenge. This paper provides a overview of DSR systems based on microphone arrays. In particular, we present recent work on acoustic beamforming for DSR, along with experimental results verifying the effectiveness of the various algorithms described here; beginning from a word error rate (WER) of 14.3% with a single microphone of a 64-channel linear array, our state-of-the-art DSR system achieved a WER of 5.3%, which was comparable to that of 4.2% obtained with a lapel microphone. Furthermore, we report the results of speech recognition experiments on data captured with a popular device, the Kinect [1]. Even for speakers at a distance of four meters from the Kinect, our DSR system achieved acceptable recognition performance on a large vocabulary task, a WER of 24.1%, beginning from a WER of 42.5% with a single array channel."}
{"_id": "142bd1d4e41e5e29bdd87e0d5a145f3c708a3f44", "title": "Ford Campus vision and lidar data set", "text": "This paper describes a data set collected by an autonomous ground vehicle testbed, based upon a modified Ford F-250 pickup truck. The vehicle is outfitted with a professional (Applanix POS-LV) and consumer (Xsens MTi-G) inertial measurement unit (IMU), a Velodyne 3D-lidar scanner, two push-broom forward looking Riegl lidars, and a Point Grey Ladybug3 omnidirectional camera system. Here we present the time-registered data from these sensors mounted on the vehicle, collected while driving the vehicle around the Ford Research campus and downtown Dearborn, Michigan during NovemberDecember 2009. The vehicle path trajectory in these data sets contain several large and small-scale loop closures, which should be useful for testing various state of the art computer vision and simultaneous localization and mapping (SLAM) algorithms. Fig. 1. The modified Ford F-250 pickup truck."}
{"_id": "1de3c8ddf30b9d6389aebc3bfa8a02a169a7368b", "title": "Mining frequent closed graphs on evolving data streams", "text": "Graph mining is a challenging task by itself, and even more so when processing data streams which evolve in real-time. Data stream mining faces hard constraints regarding time and space for processing, and also needs to provide for concept drift detection. In this paper we present a framework for studying graph pattern mining on time-varying streams. Three new methods for mining frequent closed subgraphs are presented. All methods work on coresets of closed subgraphs, compressed representations of graph sets, and maintain these sets in a batch-incremental manner, but use different approaches to address potential concept drift. An evaluation study on datasets comprising up to four million graphs explores the strength and limitations of the proposed methods. To the best of our knowledge this is the first work on mining frequent closed subgraphs in non-stationary data streams."}
{"_id": "31ea3186aa7072a9e25218efe229f5ee3cca3316", "title": "A ug 2 01 7 Reinforced Video Captioning with Entailment Rewards", "text": "Sequence-to-sequence models have shown promising improvements on the temporal task of video captioning, but they optimize word-level cross-entropy loss during training. First, using policy gradient and mixed-loss methods for reinforcement learning, we directly optimize sentence-level task-based metrics (as rewards), achieving significant improvements over the baseline, based on both automatic metrics and human evaluation on multiple datasets. Next, we propose a novel entailment-enhanced reward (CIDEnt) that corrects phrase-matching based metrics (such as CIDEr) to only allow for logically-implied partial matches and avoid contradictions, achieving further significant improvements over the CIDEr-reward model. Overall, our CIDEnt-reward model achieves the new state-of-the-art on the MSR-VTT dataset."}
{"_id": "4b944d518b88beeb9b2376975400cabd6e919957", "title": "SDN and Virtualization Solutions for the Internet of Things: A Survey", "text": "The imminent arrival of the Internet of Things (IoT), which consists of a vast number of devices with heterogeneous characteristics, means that future networks need a new architecture to accommodate the expected increase in data generation. Software defined networking (SDN) and network virtualization (NV) are two technologies that promise to cost-effectively provide the scale and versatility necessary for IoT services. In this paper, we survey the state of the art on the application of SDN and NV to IoT. To the best of our knowledge, we are the first to provide a comprehensive description of every possible IoT implementation aspect for the two technologies. We start by outlining the ways of combining SDN and NV. Subsequently, we present how the two technologies can be used in the mobile and cellular context, with emphasis on forthcoming 5G networks. Afterward, we move to the study of wireless sensor networks, arguably the current foremost example of an IoT network. Finally, we review some general SDN-NV-enabled IoT architectures, along with real-life deployments and use-cases. We conclude by giving directions for future research on this topic."}
{"_id": "fa16642fe405382cbd407ce1bc22213561185aba", "title": "Non-Invasive Glucose Meter for Android-Based Devices", "text": "This study helps in monitoring blood glucose level of a patient with the aid of an android device non-invasively. Diabetes is a metabolic disease characterized by high level of sugar in the blood, and considered as the fastest growing long-term disease affecting millions of people globally. The study measures the blood glucose level using sensor patch through diffused reflectance spectra on the inner side of the forearm. The Arduino microcontroller does the processing of the information from the sensor patch while the Bluetooth module wirelessly transmits to the android device the measured glucose level for storing, interpreting and displaying. Results showed that there is no significant between the measured values using the commercially available glucose meter and the created device. Based on ISO 15197 standard 39 of the 40 trials conducted, or 97.5% fell within the acceptable range."}
{"_id": "a360a526794df3aa8de96f83df171769a4022642", "title": "Joint Text Embedding for Personalized Content-based Recommendation", "text": "Learning a good representation of text is key to many recommendation applications. Examples include news recommendation where texts to be recommended are constantly published everyday. However, most existing recommendation techniques, such as matrix factorization based methods, mainly rely on interaction histories to learn representations of items. While latent factors of items can be learned e\u0082ectively from user interaction data, in many cases, such data is not available, especially for newly emerged items. In this work, we aim to address the problem of personalized recommendation for completely new items with text information available. We cast the problem as a personalized text ranking problem and propose a general framework that combines text embedding with personalized recommendation. Users and textual content are embedded into latent feature space. \u008ce text embedding function can be learned end-to-end by predicting user interactions with items. To alleviate sparsity in interaction data, and leverage large amount of text data with li\u008ale or no user interactions, we further propose a joint text embedding model that incorporates unsupervised text embedding with a combination module. Experimental results show that our model can signi\u0080cantly improve the e\u0082ectiveness of recommendation systems on real-world datasets."}
{"_id": "1aa60b5ae893cd93a221bf71b6b264f5aa5ca6b8", "title": "Why Not?", "text": "As humans, we have expectations for the results of any action, e.g. we expect at least one student to be returned when we query a university database for student records. When these expectations are not met, traditional database users often explore datasets via a series of slightly altered SQL queries. Yet most database access is via limited interfaces that deprive end users of the ability to alter their query in any way to garner better understanding of the dataset and result set. Users are unable to question why a particular data item is Not in the result set of a given query. In this work, we develop a model for answers to WHY NOT? queries. We show through a user study the usefulness of our answers, and describe two algorithms for finding the manipulation that discarded the data item of interest. Moreover, we work through two different methods for tracing the discarded data item that can be used with either algorithm. Using our algorithms, it is feasible for users to find the manipulation that excluded the data item of interest, and can eliminate the need for exhausting debugging."}
{"_id": "f39e21382458bf723e207d0ac649680f9b4dde4a", "title": "Recognition of Offline Handwritten Chinese Characters Using the Tesseract Open Source OCR Engine", "text": "Due to the complex structure and handwritten deformation, the offline handwritten Chinese characters recognition has been one of the most challenging problems. In this paper, an offline handwritten Chinese character recognition tool has been developed based on the Tesseract open source OCR engine. The tool mainly contributes on the following two points: First, a handwritten Chinese character features library is generated, which is independent of a specific user's writing style, Second, by preprocessing the input image and adjusting the Tesseract engine, multiple candidate recognition results are output based on weight ranking. The recognition accuracy rate of this tool is above 88% for both known user test set and unknown user test set. It has shown that the Tesseract engine is feasible for offline handwritten Chinese character recognition to a certain degree."}
{"_id": "3cc0c9a9917f9ed032376fa467838e720701e783", "title": "Gal4 in the Drosophila female germline", "text": "The modular Gal4 system has proven to be an extremely useful tool for conditional gene expression in Drosophila. One limitation has been the inability of the system to work in the female germline. A modified Gal4 system that works throughout oogenesis is presented here. To achieve germline expression, it was critical to change the basal promoter and 3'-UTR in the Gal4-responsive expression vector (generating UASp). Basal promoters and heterologous 3'-UTRs are often considered neutral, but as shown here, can endow qualitative tissue-specificity to a chimeric transcript. The modified Gal4 system was used to investigate the role of the Drosophila FGF homologue branchless, ligand for the FGF receptor breathless, in border cell migration. FGF signaling guides tracheal cell migration in the embryo. However, misexpression of branchless in the ovary had no effect on border cell migration. Thus border cells and tracheal cells appear to be guided differently."}
{"_id": "e79b34f6779095a73ba4604291d84bc26802b35e", "title": "Improving Relation Extraction by Pre-trained Language Representations", "text": "Current state-of-the-art relation extraction methods typically rely on a set of lexical, syntactic, and semantic features, explicitly computed in a pre-processing step. Training feature extraction models requires additional annotated language resources, which severely restricts the applicability and portability of relation extraction to novel languages. Similarly, pre-processing introduces an additional source of error. To address these limitations, we introduce TRE, a Transformer for Relation Extraction. Unlike previous relation extraction models, TRE uses pre-trained deep language representations instead of explicit linguistic features to inform the relation classification and combines it with the self-attentive Transformer architecture to effectively model long-range dependencies between entity mentions. TRE allows us to learn implicit linguistic features solely from plain text corpora by unsupervised pre-training, before fine-tuning the learned language representations on the relation extraction task. TRE obtains a new state-of-the-art result on the TACRED and SemEval 2010 Task 8 datasets, achieving a test F1 of 67.4 and 87.1, respectively. Furthermore, we observe a significant increase in sample efficiency. With only 20% of the training examples, TRE matches the performance of our baselines and our model trained from scratch on 100% of the TACRED dataset. We open-source our trained models, experiments, and source code."}
{"_id": "bad43ffc1c7d07db5990f631334bfa3157a6b134", "title": "Plate-laminated corporate-feed slotted waveguide array antenna at 350-GHz band by silicon process", "text": "A corporate feed slotted waveguide array antenna with broadband characteristics in term of gain in the 350 GHz band is achieved by measurement for the first time. The etching accuracy for thin laminated plates of the diffusion bonding process with conventional chemical etching is limited to \u00b120\u03bcm. This limits the use of this process for antenna fabrication in the submillimeter wave band where the fabrication tolerances are very severe. To improve the etching accuracy of the thin laminated plates, a new fabrication process has been developed. Each silicon wafer is etched by DRIE (deep reactive ion etcher) and is plated by gold on the surface. This new fabrication process provides better fabrication tolerances about \u00b15 \u03bcm using wafer bond aligner. The thin laminated wafers are then bonded with the diffusion bonding process under high temperature and high pressure. To validate the proposed antenna concepts, an antenna prototype has been designed and fabricated in the 350 GHz band. The 3dB-down gain bandwidth is about 44.6 GHz by this silicon process while it was about 15GHz by the conventional process using metal plates in measurement."}
{"_id": "0dacd4593ba6bce441bae37fc3ff7f3b70408ee1", "title": "Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds", "text": "Convex empirical risk minimization is a basic tool in machine learning and statistics. We provide new algorithms and matching lower bounds for differentially private convex empirical risk minimization assuming only that each data point's contribution to the loss function is Lipschitz and that the domain of optimization is bounded. We provide a separate set of algorithms and matching lower bounds for the setting in which the loss functions are known to also be strongly convex. Our algorithms run in polynomial time, and in some cases even match the optimal nonprivate running time (as measured by oracle complexity). We give separate algorithms (and lower bounds) for (\u03b5, 0)and (\u03b5, \u03b4)-differential privacy; perhaps surprisingly, the techniques used for designing optimal algorithms in the two cases are completely different. Our lower bounds apply even to very simple, smooth function families, such as linear and quadratic functions. This implies that algorithms from previous work can be used to obtain optimal error rates, under the additional assumption that the contributions of each data point to the loss function is smooth. We show that simple approaches to smoothing arbitrary loss functions (in order to apply previous techniques) do not yield optimal error rates. In particular, optimal algorithms were not previously known for problems such as training support vector machines and the high-dimensional median."}
{"_id": "24d800e6681a129b7787cbb05d0e224acad70e8d", "title": "Exposure: A Passive DNS Analysis Service to Detect and Report Malicious Domains", "text": "A wide range of malicious activities rely on the domain name service (DNS) to manage their large, distributed networks of infected machines. As a consequence, the monitoring and analysis of DNS queries has recently been proposed as one of the most promising techniques to detect and blacklist domains involved in malicious activities (e.g., phishing, spam, botnets command-and-control, etc.). EXPOSURE is a system we designed to detect such domains in real time, by applying 15 unique features grouped in four categories.\n We conducted a controlled experiment with a large, real-world dataset consisting of billions of DNS requests. The extremely positive results obtained in the tests convinced us to implement our techniques and deploy it as a free, online service. In this article, we present the Exposure system and describe the results and lessons learned from 17 months of its operation. Over this amount of time, the service detected over 100K malicious domains. The statistics about the time of usage, number of queries, and target IP addresses of each domain are also published on a daily basis on the service Web page."}
{"_id": "32334506f746e83367cecb91a0ab841e287cd958", "title": "Practical privacy: the SuLQ framework", "text": "We consider a statistical database in which a trusted administrator introduces noise to the query responses with the goal of maintaining privacy of individual database entries. In such a database, a query consists of a pair (S, f) where S is a set of rows in the database and f is a function mapping database rows to {0, 1}. The true answer is \u03a3i\u03b5S f(di), and a noisy version is released as the response to the query. Results of Dinur, Dwork, and Nissim show that a strong form of privacy can be maintained using a surprisingly small amount of noise -- much less than the sampling error -- provided the total number of queries is sublinear in the number of database rows. We call this query and (slightly) noisy reply the SuLQ (Sub-Linear Queries) primitive. The assumption of sublinearity becomes reasonable as databases grow increasingly large.We extend this work in two ways. First, we modify the privacy analysis to real-valued functions f and arbitrary row types, as a consequence greatly improving the bounds on noise required for privacy. Second, we examine the computational power of the SuLQ primitive. We show that it is very powerful indeed, in that slightly noisy versions of the following computations can be carried out with very few invocations of the primitive: principal component analysis, k means clustering, the Perceptron Algorithm, the ID3 algorithm, and (apparently!) all algorithms that operate in the in the statistical query learning model [11]."}
{"_id": "49934d08d42ed9e279a82cbad2086377443c8a75", "title": "Differentially Private Empirical Risk Minimization", "text": "Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the \u03b5-differential privacy definition due to Dwork et al. (2006). First we apply the output perturbation ideas of Dwork et al. (2006), to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance."}
{"_id": "61efdc56bc6c034e9d13a0c99d0b651a78bfc596", "title": "Differentially Private Distributed Constrained Optimization", "text": "Many resource allocation problems can be formulated as an optimization problem whose constraints contain sensitive information about participating users. This paper concerns a class of resource allocation problems whose objective function depends on the aggregate allocation (i.e., the sum of individual allocations); in particular, we investigate distributed algorithmic solutions that preserve the privacy of participating users. Without privacy considerations, existing distributed algorithms normally consist of a central entity computing and broadcasting certain public coordination signals to participating users. However, the coordination signals often depend on user information, so that an adversary who has access to the coordination signals can potentially decode information on individual users and put user privacy at risk. We present a distributed optimization algorithm that preserves differential privacy, which is a strong notion that guarantees user privacy regardless of any auxiliary information an adversary may have. The algorithm achieves privacy by perturbing the public signals with additive noise, whose magnitude is determined by the sensitivity of the projection operation onto user-specified constraints. By viewing the differentially private algorithm as an implementation of stochastic gradient descent, we are able to derive a bound for the suboptimality of the algorithm. We illustrate the implementation of our algorithm via a case study of electric vehicle charging. Specifically, we derive the sensitivity and present numerical simulations for the algorithm. Through numerical simulations, we are able to investigate various aspects of the algorithm when being used in practice, including the choice of step size, number of iterations, and the trade-off between privacy level and suboptimality."}
{"_id": "c7788c34ba1387f1e437a2f83e1931f0c64d8e4e", "title": "The role of transparency in recommender systems", "text": "Recommender Systems act as a personalized decision guides, aiding users in decisions on matters related to personal taste. Most previous research on Recommender Systems has focused on the statistical accuracy of the algorithms driving the systems, with little emphasis on interface issues and the user's perspective. The goal of this research was to examine the role of transprency (user understanding of why a particular recommendation was made) in Recommender Systems. To explore this issue, we conducted a user study of five music Recommender Systems. Preliminary results indicate that users like and feel more confident about recommendations that they perceive as transparent."}
{"_id": "7731c8a1c56fdfa149759a8bb7b81464da0b15c1", "title": "Recognizing Abnormal Heart Sounds Using Deep Learning", "text": "The work presented here applies deep learning to the task of automated cardiac auscultation, i.e. recognizing abnormalities in heart sounds. We describe an automated heart sound classification algorithm that combines the use of time-frequency heat map representations with a deep convolutional neural network (CNN). Given the cost-sensitive nature of misclassification, our CNN architecture is trained using a modified loss function that directly optimizes the trade-off between sensitivity and specificity. We evaluated our algorithm at the 2016 PhysioNet Computing in Cardiology challenge where the objective was to accurately classify normal and abnormal heart sounds from single, short, potentially noisy recordings. Our entry to the challenge achieved a final specificity of 0.95, sensitivity of 0.73 and overall score of 0.84. We achieved the greatest specificity score out of all challenge entries and, using just a single CNN, our algorithm differed in overall score by only 0.02 compared to the top place finisher, which used an ensemble approach."}
{"_id": "17a00f26b68f40fb03e998a7eef40437dd40e561", "title": "The Tire as an Intelligent Sensor", "text": "Active safety systems are based upon the accurate and fast estimation of the value of important dynamical variables such as forces, load transfer, actual tire-road friction (kinetic friction) muk, and maximum tire-road friction available (potential friction) mup. Measuring these parameters directly from tires offers the potential for improving significantly the performance of active safety systems. We present a distributed architecture for a data-acquisition system that is based on a number of complex intelligent sensors inside the tire that form a wireless sensor network with coordination nodes placed on the body of the car. The design of this system has been extremely challenging due to the very limited available energy combined with strict application requirements for data rate, delay, size, weight, and reliability in a highly dynamical environment. Moreover, it required expertise in multiple engineering disciplines, including control-system design, signal processing, integrated-circuit design, communications, real-time software design, antenna design, energy scavenging, and system assembly."}
{"_id": "190dcdb71a119ec830d6e7e6e01bb42c6c10c2f3", "title": "Surgical precision JIT compilers", "text": "Just-in-time (JIT) compilation of running programs provides more optimization opportunities than offline compilation. Modern JIT compilers, such as those in virtual machines like Oracle's HotSpot for Java or Google's V8 for JavaScript, rely on dynamic profiling as their key mechanism to guide optimizations. While these JIT compilers offer good average performance, their behavior is a black box and the achieved performance is highly unpredictable.\n In this paper, we propose to turn JIT compilation into a precision tool by adding two essential and generic metaprogramming facilities: First, allow programs to invoke JIT compilation explicitly. This enables controlled specialization of arbitrary code at run-time, in the style of partial evaluation. It also enables the JIT compiler to report warnings and errors to the program when it is unable to compile a code path in the demanded way. Second, allow the JIT compiler to call back into the program to perform compile-time computation. This lets the program itself define the translation strategy for certain constructs on the fly and gives rise to a powerful JIT macro facility that enables \"smart\" libraries to supply domain-specific compiler optimizations or safety checks.\n We present Lancet, a JIT compiler framework for Java bytecode that enables such a tight, two-way integration with the running program. Lancet itself was derived from a high-level Java bytecode interpreter: staging the interpreter using LMS (Lightweight Modular Staging) produced a simple bytecode compiler. Adding abstract interpretation turned the simple compiler into an optimizing compiler. This fact provides compelling evidence for the scalability of the staged-interpreter approach to compiler construction.\n In the case of Lancet, JIT macros also provide a natural interface to existing LMS-based toolchains such as the Delite parallelism and DSL framework, which can now serve as accelerator macros for arbitrary JVM bytecode."}
{"_id": "f5888af5e5353eb74d37ec50e9840e58b1992953", "title": "An LDA-Based Approach to Scientific Paper Recommendation", "text": "Recommendation of scientific papers is a task aimed to support researchers in accessing relevant articles from a large pool of unseen articles. When writing a paper, a researcher focuses on the topics related to her/his scientific domain, by using a technical language. The core idea of this paper is to exploit the topics related to the researchers scientific production (authored articles) to formally define her/his profile; in particular we propose to employ topic modeling to formally represent the user profile, and language modeling to formally represent each unseen paper. The recommendation technique we propose relies on the assessment of the closeness of the language used in the researchers papers and the one employed in the unseen papers. The proposed approach exploits a reliable knowledge source for building the user profile, and it alleviates the cold-start problem, typical of collaborative filtering techniques. We also present a preliminary evaluation of our approach on the DBLP."}
{"_id": "1f8be49d63c694ec71c2310309cd02a2d8dd457f", "title": "Adaptive Laplace Mechanism: Differential Privacy Preservation in Deep Learning", "text": "In this paper, we focus on developing a novel mechanism to preserve differential privacy in deep neural networks, such that: (1) The privacy budget consumption is totally independent of the number of training steps; (2) It has the ability to adaptively inject noise into features based on the contribution of each to the output; and (3) It could be applied in a variety of different deep neural networks. To achieve this, we figure out a way to perturb affine transformations of neurons, and loss functions used in deep neural networks. In addition, our mechanism intentionally adds \"more noise\" into features which are \"less relevant\" to the model output, and vice-versa. Our theoretical analysis further derives the sensitivities and error bounds of our mechanism. Rigorous experiments conducted on MNIST and CIFAR-10 datasets show that our mechanism is highly effective and outperforms existing solutions."}
{"_id": "31e9d9458471b4a0cfc6cf1de219b10af0f37239", "title": "Why do you play World of Warcraft? An in-depth exploration of self-reported motivations to play online and in-game behaviours in the virtual world of Azeroth", "text": "Massively multiplayer online role-playing games (MMORPGs) are video games in which players create an avatar that evolves and interacts with other avatars in a persistent virtual world. Motivations to play MMORPGs are heterogeneous (e.g. achievement, socialisation, immersion in virtual worlds). This study investigates in detail the relationships between self-reported motives and actual in-game behaviours. We recruited a sample of 690 World of Warcraft players (the most popular MMORPG) who agreed to have their avatar monitored for 8 months. Participants completed an initial online survey about their motives to play. Their actual in-game behaviours were measured through the game\u2019s official database (the Armory website). Results showed specific associations between motives and in-game behaviours. Moreover, longitudinal analyses revealed that teamworkand competition-oriented motives are the most accurate predictors of fast progression in the game. In addition, although specific associations exist between problematic use and certain motives (e.g. advancement, escapism), longitudinal analyses showed that high involvement in the game is not necessarily associated with a negative impact upon daily living. 2012 Elsevier Ltd. All rights reserved."}
{"_id": "33127e014cf537192c33a5b0e4b62df2a7b1869f", "title": "Policy ratification", "text": "It is not sufficient to merely check the syntax of new policies before they are deployed in a system; policies need to be analyzed for their interactions with each other and with their local environment. That is, policies need to go through a ratification process. We believe policy ratification becomes an essential part of system management as the number of policies in the system increases and as the system administration becomes more decentralized. In this paper, we focus on the basic tasks involved in policy ratification. To a large degree, these basic tasks can be performed independent of policy model and language and require little domain-specific knowledge. We present algorithms from constraint, linear, and logic programming disciplines to help perform ratification tasks. We provide an algorithm to efficiently assign priorities to the policies based on relative policy preferences indicated by policy administrators. Finally, with an example, we show how these algorithms have been integrated with our policy system to provide feedback to a policy administrator regarding potential interactions of policies with each other and with their deployment environment."}
{"_id": "c6b5c1cc565c878db50ad20aafd804284558ad02", "title": "Centrality in valued graphs : A measure of betweenness based on network flow", "text": "A new measure of centrality, C,, is introduced. It is based on the concept of network flows. While conceptually similar to Freeman\u2019s original measure, Ca, the new measure differs from the original in two important ways. First, C, is defined for both valued and non-valued graphs. This makes C, applicable to a wider variety of network datasets. Second, the computation of C, is not based on geodesic paths as is C, but on all the independent paths between all pairs of points in the network."}
{"_id": "2ccca721c20ad1d8503ede36fe310626070de640", "title": "Distributed Energy Resources Topology Identification via Graphical Modeling", "text": "Distributed energy resources (DERs), such as photovoltaic, wind, and gas generators, are connected to the grid more than ever before, which introduces tremendous changes in the distribution grid. Due to these changes, it is important to understand where these DERs are connected in order to sustainably operate the distribution grid. But the exact distribution system topology is difficult to obtain due to frequent distribution grid reconfigurations and insufficient knowledge about new components. In this paper, we propose a methodology that utilizes new data from sensor-equipped DER devices to obtain the distribution grid topology. Specifically, a graphical model is presented to describe the probabilistic relationship among different voltage measurements. With power flow analysis, a mutual information-based identification algorithm is proposed to deal with tree and partially meshed networks. Simulation results show highly accurate connectivity identification in the IEEE standard distribution test systems and Electric Power Research Institute test systems."}
{"_id": "8eb3ebd0a1d8a26c7070543180d233f841b79850", "title": "Performance of Reliable Transport Protocol over IEEE 802.11 Wireless LAN: Analysis and Enhancement", "text": "IEEE 802.11 Medium Access Control(MAC) is proposed to support asynchronous and time bounded delivery of radio data packets in infrastructure and ad hoc networks. The basis of the IEEE 802.11 WLAN MAC protocol is Distributed Coordination Function(DCF), which is a Carrier Sense Multiple Access with Collision Avoidance(CSMA/CA) with binary slotted exponential back-off scheme. Since IEEE 802.11 MAC has its own characteristics that are different from other wireless MAC protocols, the performance of reliable transport protocol over 802.11 needs further study. This paper proposes a scheme named DCF+, which is compatible with DCF, to enhance the performance of reliable transport protocol over WLAN. To analyze the performance of DCF and DCF+, this paper also introduces an analytical model to compute the saturated throughput of WLAN. Comparing with other models, this model is shown to be able to predict the behaviors of 802.11 more accurately. Moreover, DCF+ is able to improve the performance of TCP over WLAN, which is verified by modeling and elaborate simulation results."}
{"_id": "5574763d870bae0fd3fd6d3014297942a045f60a", "title": "Utilization of Data mining Approaches for Prediction of Life Threatening Diseases Survivability", "text": "Data mining now-a-days plays an important role in prediction of diseases in health care industry. The Health care industry utilizes data mining Techniques and finds out the information which is hidden in the data set. Many diagnoses have been done for predicting diseases. Without knowing the knowledge of profound medicine and clinical experience the treatment goes wrong. The time taken to recover from diseases depends on patients' severity. For finding out the disease, number of test needs to be taken by patient. In most cases not all test become more effective. And at last it leads to the death of the patient. Many experiments have been conducted by comparing the performance of predictive data mining for reducing the number of test taken by the patient indirectly. This research paper is to present a survey on predicting the presence of life threatening diseases which causes to death and list out the various classification algorithms that has been used with number of attributes for prediction."}
{"_id": "6273df9def7c011bc21cd42a4029d4b7c7c48c2e", "title": "A 45GHz Doherty power amplifier with 23% PAE and 18dBm output power, in 45nm SOI CMOS", "text": "A 45GHz Doherty power amplifier is implemented in 45nm SOI CMOS. Two-stack FET amplifiers are used as main and auxiliary amplifiers, allowing a supply voltage of 2.5V and high output power. The use of slow-wave coplanar waveguides (CPW) improves the PAE and gain by approximately 3% and 1dB, and reduces the die area by 20%. This amplifier exhibits more than 18dBm saturated output power, with peak power gain of 7dB. It occupies 0.64mm2 while achieving a peak PAE of 23%; at 6dB back-off the PAE is 17%."}
{"_id": "1e396464e440e6032be3f035a9a6837c32c9d2c0", "title": "Review of Micro Thermoelectric Generator", "text": "Used for thermal energy harvesting, thermoelectric generator (TEG) can convert heat into electricity directly. Structurally, the main part of TEG is the thermopile, which consists of thermocouples connected in series electrically and in parallel thermally. Benefiting from massive progress achieved in a microelectromechanical systems technology, micro TEG ( $\\mu$ -TEG) with advantages of small volume and high output voltage has obtained attention in recent 20 years. The review gives a comprehensive survey of the development and current status of $\\mu$ -TEG. First, the principle of operation is introduced and some key parameters used for characterizing the performance of $\\mu$ -TEG are highlighted. Next, $\\mu$ -TEGs are classified from the perspectives of structure, material, and fabrication technology. Then, almost all the relevant works are summarized for the convenience of comparison and reference. Summarized information includes the structure, material property, fabrication technology, output performance, and so on. This will provide readers with an overall evaluation of different studies and guide them in choosing the suitable $\\mu$ -TEGs for their applications. In addition, the existing and potential applications of $\\mu$ -TEG are shown, especially the applications in the Internet of things. Finally, we summarize the challenges encountered in improving the output power of $\\mu$ -TEG and predicted that more researchers would focus their efforts on the flexible structure $\\mu$ -TEG, and combination of $\\mu$ -TEG and other energy harvestings. With the emergence of more low-power devices and the gradual improvement of ZT value of the thermoelectric material, $\\mu$ -TEG is promising for applications in various fields. [2017-0610]"}
{"_id": "4c11a7b668dee651cc2d8eb2eaf8665449b1738f", "title": "Modern Release Engineering in a Nutshell -- Why Researchers Should Care", "text": "The release engineering process is the process that brings high quality code changes from a developer's workspace to the end user, encompassing code change integration, continuous integration, build system specifications, infrastructure-as-code, deployment and release. Recent practices of continuous delivery, which bring new content to the end user in days or hours rather than months or years, have generated a surge of industry-driven interest in the release engineering pipeline. This paper argues that the involvement of researchers is essential, by providing a brief introduction to the six major phases of the release engineering pipeline, a roadmap of future research, and a checklist of three major ways that the release engineering process of a system under study can invalidate the findings of software engineering studies. The main take-home message is that, while release engineering technology has flourished tremendously due to industry, empirical validation of best practices and the impact of the release engineering process on (amongst others) software quality is largely missing and provides major research opportunities."}
{"_id": "9f6db3f5809a9d1b9f1c70d9d30382a0bd8be8d0", "title": "A Review on Performance Analysis of Cloud Computing Services for Scientific Computing", "text": "Cloud computing has emerged as a very important commercial infrastructure that promises to reduce the need for maintaining costly computing facilities by organizations and institutes. Through the use of virtualization and time sharing of resources, clouds serve with a single set of physical resources as a large user base with altogether different needs. Thus, the clouds have the promise to provide to their owners the benefits of an economy of calibration and, at the same time, become a substitute for scientists to clusters, grids, and parallel production conditions. However, the present commercial clouds have been built to support web and small database workloads, which are very different from common scientific computing workloads. Furthermore, the use of virtualization and resource time sharing may introduce significant performance penalties for the demanding scientific computing workloads. In this paper, we analyze the performance of cloud computing services for scientific computing workloads. This paper evaluate the presence in real scientific computing workloads of Many-Task Computing users, that is, of users who employ loosely coupled applications comprising many tasks to achieve their scientific goals. Our effective method demonstrates to yield comparative and even better results than the more complex state-of-the-art techniques but has the advantage to be appropriate for real-time applications."}
{"_id": "6fcccd6def46a4dd50f85df4d4c011bd9f1855af", "title": "Cedalion: a language for language oriented programming", "text": "Language Oriented Programming (LOP) is a paradigm that puts domain specific programming languages (DSLs) at the center of the software development process. Currently, there are three main approaches to LOP: (1) the use of internal DSLs, implemented as libraries in a given host language; (2) the use of external DSLs, implemented as interpreters or compilers in an external language; and (3) the use of language workbenches, which are integrated development environments (IDEs) for defining and using external DSLs. In this paper, we contribute: (4) a novel language-oriented approach to LOP for defining and using internal DSLs. While language workbenches adapt internal DSL features to overcome some of the limitations of external DSLs, our approach adapts language workbench features to overcome some of the limitations of internal DSLs. We introduce Cedalion, an LOP host language for internal DSLs, featuring static validation and projectional editing. To validate our approach we present a case study in which Cedalion was used by biologists in designing a DNA microarray for molecular Biology research."}
{"_id": "7cbbe0025b71a265c6bee195b5595cfad397a734", "title": "Health chair: implicitly sensing heart and respiratory rate", "text": "People interact with chairs frequently, making them a potential location to perform implicit health sensing that requires no additional effort by users. We surveyed 550 participants to understand how people sit in chairs and inform the design of a chair that detects heart and respiratory rate from the armrests and backrests of the chair respectively. In a laboratory study with 18 participants, we evaluated a range of common sitting positions to determine when heart rate and respiratory rate detection was possible (32% of the time for heart rate, 52% for respiratory rate) and evaluate the accuracy of the detected rate (83% for heart rate, 73% for respiratory rate). We discuss the challenges of moving this sensing to the wild by evaluating an in-situ study totaling 40 hours with 11 participants. We show that, as an implicit sensor, the chair can collect vital signs data from its occupant through natural interaction with the chair."}
{"_id": "a00a757b26d5c4f53b628a9c565990cdd0e51876", "title": "The BURCHAK corpus: a Challenge Data Set for Interactive Learning of Visually Grounded Word Meanings", "text": "We motivate and describe a new freely available human-human dialogue data set for interactive learning of visually grounded word meanings through ostensive definition by a tutor to a learner. The data has been collected using a novel, character-by-character variant of the DiET chat tool (Healey et al., 2003; Mills and Healey, submitted) with a novel task, where a Learner needs to learn invented visual attribute words (such as \u201cburchak\u201d for square) from a tutor. As such, the text-based interactions closely resemble face-to-face conversation and thus contain many of the linguistic phenomena encountered in natural, spontaneous dialogue. These include selfand other-correction, mid-sentence continuations, interruptions, overlaps, fillers, and hedges. We also present a generic n-gram framework for building user (i.e. tutor) simulations from this type of incremental data, which is freely available to researchers. We show that the simulations produce outputs that are similar to the original data (e.g. 78% turn match similarity). Finally, we train and evaluate a Reinforcement Learning dialogue control agent for learning visually grounded word meanings, trained from the BURCHAK corpus. The learned policy shows comparable performance to a rulebased system built previously."}
{"_id": "b49e31fe5948b3ca4552ac69dd7a735607467f1c", "title": "GUSS: Solving Collections of Data Related Models Within GAMS", "text": "In many applications, optimization of a collection of problems is required where each problem is structurally the same, but in which some or all of the data defining the instance is updated. Such models are easily specified within modern modeling systems, but have often been slow to solve due to the time needed to regenerate the instance, and the inability to use advance solution information (such as basis factorizations) from previous solves as the collection is processed. We describe a new language extension, GUSS, that gathers data from different sources/symbols to define the collection of models (called scenarios), updates a base model instance with this scenario data and solves the updated model instance and scatters the scenario results to symbols in the GAMS database. We demonstrate the utility of this approach in three applications, namely data envelopment analysis, cross validation and stochastic dual dynamic programming. The language extensions are available for general use in all versions of GAMS starting with release 23.7."}
{"_id": "5914781bde18606e55e8f7683f55889df91576ec", "title": "30 + years of research and practice of outsourcing \u2013 Exploring the past and anticipating the future", "text": "Article history: Received 7 January 2008 Received in revised form 24 June 2008 Accepted 31 July 2008 Available online 5 April 2009 Outsourcing is a phenomenon that as a practice originated in the 1950s, but it was not until the 1980s when the strategy became widely adopted in organizations. Since then, the strategy has evolved from a strictly cost focused approach towards more cooperative nature, in which cost is only one, often secondary, decision-making criterion. In the development of the strategy, three broad and somewhat overlapping, yet distinct phases can be identified: the era of the Big Bang, the era of the Bandwagon, and the era of Barrierless Organizations. This paper illustrates that the evolution of the practice has caused several contradictions among researchers, as well as led to the situation where the theoretical background of the phenomenon has recently become much richer. Through examining existing research, this paper intends to identify the development of outsourcing strategy from a practical as well as a theoretical perspective from its birth up to today. In addition, through providing insights from managers in the information technology industry, this paper aims at providing a glimpse from the future \u2013 that is \u2013 what may be the future directions and research issues in this complex phenomenon? \u00a9 2009 Elsevier Inc. All rights reserved."}
{"_id": "423455ad8afb9b2534c0954a5e61c95bea611801", "title": "Virtualizing I/O Devices on VMware Workstation's Hosted Virtual Machine Monitor", "text": "Virtual machines were developed by IBM in the 1960\u2019s to provide concurrent, interactive access to a mainframe computer. Each virtual machine is a replica of the underlying physical machine and users are given the illusion of running directly on the physical machine. Virtual machines also provide benefits like isolation and resource sharing, and the ability to run multiple flavors and configurations of operating systems. VMware Workstation brings such mainframe-class virtual machine technology to PC-based desktop and workstation computers. This paper focuses on VMware Workstation\u2019s approach to virtualizing I/O devices. PCs have a staggering variety of hardware, and are usually pre-installed with an operating system. Instead of replacing the pre-installed OS, VMware Workstation uses it to host a user-level application (VMApp) component, as well as to schedule a privileged virtual machine monitor (VMM) component. The VMM directly provides high-performance CPU virtualization while the VMApp uses the host OS to virtualize I/O devices and shield the VMM from the variety of devices. A crucial question is whether virtualizing devices via such a hosted architecture can meet the performance required of high throughput, low latency devices. To this end, this paper studies the virtualization and performance of an Ethernet adapter on VMware Workstation. Results indicate that with optimizations, VMware Workstation\u2019s hosted virtualization architecture can match native I/O throughput on standard PCs. Although a straightforward hosted implementation is CPU-limited due to virtualization overhead on a 733 MHz Pentium R III system on a 100 Mb/s Ethernet, a series of optimizations targeted at reducing CPU utilization allows the system to match native network throughput. Further optimizations are discussed both within and outside a hosted architecture."}
{"_id": "c5788be735f3caadc7d0d3147aa52fd4a6036ec4", "title": "Detecting epistasis in human complex traits", "text": "Genome-wide association studies (GWASs) have become the focus of the statistical analysis of complex traits in humans, successfully shedding light on several aspects of genetic architecture and biological aetiology. Single-nucleotide polymorphisms (SNPs) are usually modelled as having additive, cumulative and independent effects on the phenotype. Although evidently a useful approach, it is often argued that this is not a realistic biological model and that epistasis (that is, the statistical interaction between SNPs) should be included. The purpose of this Review is to summarize recent directions in methodology for detecting epistasis and to discuss evidence of the role of epistasis in human complex trait variation. We also discuss the relevance of epistasis in the context of GWASs and potential hazards in the interpretation of statistical interaction terms."}
{"_id": "d3569f184b7083c0433bf00fa561736ae6f8d31e", "title": "Interactive Entity Resolution in Relational Data: A Visual Analytic Tool and Its Evaluation", "text": "Databases often contain uncertain and imprecise references to real-world entities. Entity resolution, the process of reconciling multiple references to underlying real-world entities, is an important data cleaning process required before accurate visualization or analysis of the data is possible. In many cases, in addition to noisy data describing entities, there is data describing the relationships among the entities. This relational data is important during the entity resolution process; it is useful both for the algorithms which determine likely database references to be resolved and for visual analytic tools which support the entity resolution process. In this paper, we introduce a novel user interface, D-Dupe, for interactive entity resolution in relational data. D-Dupe effectively combines relational entity resolution algorithms with a novel network visualization that enables users to make use of an entity's relational context for making resolution decisions. Since resolution decisions often are interdependent, D-Dupe facilitates understanding this complex process through animations which highlight combined inferences and a history mechanism which allows users to inspect chains of resolution decisions. An empirical study with 12 users confirmed the benefits of the relational context visualization on the performance of entity resolution tasks in relational data in terms of time as well as users' confidence and satisfaction."}
{"_id": "c630196c34533903b48e546897d46df27c844bc2", "title": "High-power-transfer-density capacitive wireless power transfer system for electric vehicle charging", "text": "This paper introduces a large air-gap capacitive wireless power transfer (WPT) system for electric vehicle charging that achieves a power transfer density exceeding the state-of-the-art by more than a factor of four. This high power transfer density is achieved by operating at a high switching frequency (6.78 MHz), combined with an innovative approach to designing matching networks that enable effective power transfer at this high frequency. In this approach, the matching networks are designed such that the parasitic capacitances present in a vehicle charging environment are absorbed and utilized as part of the wireless power transfer mechanism. A new modeling approach is developed to simplify the complex network of parasitic capacitances into equivalent capacitances that are directly utilized as the matching network capacitors. A systematic procedure to accurately measure these equivalent capacitances is also presented. A prototype capacitive WPT system with 150 cm2 coupling plates, operating at 6.78 MHz and incorporating matching networks designed using the proposed approach, is built and tested. The prototype system transfers 589 W of power across a 12-cm air gap, achieving a power transfer density of 19.6 kW/m2."}
{"_id": "1750a3716a03aaacdfbb0e25214beaa5e1e2b6ee", "title": "Ontology Development 101 : A Guide to Creating Your First Ontology", "text": "1 Why develop an ontology? In recent years the development of ontologies\u2014explicit formal specifications of the terms in the domain and relations among them (Gruber 1993)\u2014has been moving from the realm of ArtificialIntelligence laboratories to the desktops of domain experts. Ontologies have become common on the World-Wide Web. The ontologies on the Web range from large taxonomies categorizing Web sites (such as on Yahoo!) to categorizations of products for sale and their features (such as on Amazon.com). The WWW Consortium (W3C) is developing the Resource Description Framework (Brickley and Guha 1999), a language for encoding knowledge on Web pages to make it understandable to electronic agents searching for information. The Defense Advanced Research Projects Agency (DARPA), in conjunction with the W3C, is developing DARPA Agent Markup Language (DAML) by extending RDF with more expressive constructs aimed at facilitating agent interaction on the Web (Hendler and McGuinness 2000). Many disciplines now develop standardized ontologies that domain experts can use to share and annotate information in their fields. Medicine, for example, has produced large, standardized, structured vocabularies such as SNOMED (Price and Spackman 2000) and the semantic network of the Unified Medical Language System (Humphreys and Lindberg 1993). Broad general-purpose ontologies are emerging as well. For example, the United Nations Development Program and Dun & Bradstreet combined their efforts to develop the UNSPSC ontology which provides terminology for products and services (www.unspsc.org). An ontology defines a common vocabulary for researchers who need to share information in a domain. It includes machine-interpretable definitions of basic concepts in the domain and relations among them. Why would someone want to develop an ontology? Some of the reasons are:"}
{"_id": "7c459c36e19629ff0dfb4bd0e541cc5d2d3f03e0", "title": "Generic Taxonomy of Social Engineering Attack", "text": "Social engineering is a type of attack that allows unauthorized access to a system to achieve specific objective. Commonly, the purpose is to obtain information for social engineers. Some successful social engineering attacks get victims\u2019 information via human based retrieval approach, example technique terms as dumpster diving or shoulder surfing attack to get access to password. Alternatively, victims\u2019 information also can be stolen using technical-based method such as from pop-up windows, email or web sites to get the password or other sensitive information. This research performed a preliminary analysis on social engineering attack taxonomy that emphasized on types of technical-based social engineering attack. Results from the analysis become a guideline in proposing a new generic taxonomy of Social Engineering Attack (SEA)."}
{"_id": "bf003bb2d52304fea114d824bc0bf7bfbc7c3106", "title": "Dissecting social engineering", "text": ""}
{"_id": "10466df2b511239674d8487101229193c011a657", "title": "The urgency for effective user privacy-education to counter social engineering attacks on secure computer systems", "text": "Trusted people can fail to be trustworthy when it comes to protecting their aperture of access to secure computer systems due to inadequate education, negligence, and various social pressures. People are often the weakest link in an otherwise secure computer system and, consequently, are targeted for social engineering attacks. Social Engineering is a technique used by hackers or other attackers to gain access to information technology systems by getting the needed information (for example, a username and password) from a person rather than breaking into the system through electronic or algorithmic hacking methods. Such attacks can occur on both a physical and psychological level. The physical setting for these attacks occurs where a victim feels secure: often the workplace, the phone, the trash, and even on-line. Psychology is often used to create a rushed or officious ambiance that helps the social engineer to cajole information about accessing the system from an employee.\n Data privacy legislation in the United States and international countries that imposes privacy standards and fines for negligent or willful non-compliance increases the urgency to measure the trustworthiness of people and systems. One metric for determining compliance is to simulate, by audit, a social engineering attack upon an organization required to follow data privacy standards. Such an organization commits to protect the confidentiality of personal data with which it is entrusted.\n This paper presents the results of an approved social engineering audit made without notice within an organization where data security is a concern. Areas emphasized include experiences between the Social Engineer and the audited users, techniques used by the Social Engineer, and other findings from the audit. Possible steps to mitigate exposure to the dangers of Social Engineering through improved user education are reviewed."}
{"_id": "24b4076e2f58325f5d86ba1ca1f00b08a56fb682", "title": "Ontologies: principles, methods and applications", "text": "This paper is intended to serve as a comprehensive introduction to the emerging eld concerned with the design and use of ontologies. We observe that disparate backgrounds, languages, tools, and techniques are a major barrier to e ective communication among people, organisations, and/or software systems. We show how the development and implementation of an explicit account of a shared understanding (i.e. an `ontology') in a given subject area, can improve such communication, which in turn, can give rise to greater reuse and sharing, inter-operability, and more reliable software. After motivating their need, we clarify just what ontologies are and what purposes they serve. We outline a methodology for developing and evaluating ontologies, rst discussing informal techniques, concerning such issues as scoping, handling ambiguity, reaching agreement and producing de nitions. We then consider the bene ts of and describe, a more formal approach. We re-visit the scoping phase, and discuss the role of formal languages and techniques in the speci cation, implementation and evaluation of ontologies. Finally, we review the state of the art and practice in this emerging eld, considering various case studies, software tools for ontology development, key research issues and future prospects. AIAI-TR-191 Ontologies Page i"}
{"_id": "90b16f97715a18a52b6a00b69411083bdb0460a0", "title": "Highly Sensitive, Flexible, and Wearable Pressure Sensor Based on a Giant Piezocapacitive Effect of Three-Dimensional Microporous Elastomeric Dielectric Layer.", "text": "We report a flexible and wearable pressure sensor based on the giant piezocapacitive effect of a three-dimensional (3-D) microporous dielectric elastomer, which is capable of highly sensitive and stable pressure sensing over a large tactile pressure range. Due to the presence of micropores within the elastomeric dielectric layer, our piezocapacitive pressure sensor is highly deformable by even very small amounts of pressure, leading to a dramatic increase in its sensitivity. Moreover, the gradual closure of micropores under compression increases the effective dielectric constant, thereby further enhancing the sensitivity of the sensor. The 3-D microporous dielectric layer with serially stacked springs of elastomer bridges can cover a much wider pressure range than those of previously reported micro-/nanostructured sensing materials. We also investigate the applicability of our sensor to wearable pressure-sensing devices as an electronic pressure-sensing skin in robotic fingers as well as a bandage-type pressure-sensing device for pulse monitoring at the human wrist. Finally, we demonstrate a pressure sensor array pad for the recognition of spatially distributed pressure information on a plane. Our sensor, with its excellent pressure-sensing performance, marks the realization of a true tactile pressure sensor presenting highly sensitive responses to the entire tactile pressure range, from ultralow-force detection to high weights generated by human activity."}
{"_id": "8f24560a66651fdb94eef61339527004fda8283b", "title": "Human-Interactive Subgoal Supervision for Efficient Inverse Reinforcement Learning", "text": "Humans are able to understand and perform complex tasks by strategically structuring the tasks into incremental steps or subgoals. For a robot attempting to learn to perform a sequential task with critical subgoal states, such states can provide a natural opportunity for interaction with a human expert. This paper analyzes the benefit of incorporating a notion of subgoals into Inverse Reinforcement Learning (IRL) with a Human-In-The-Loop (HITL) framework. The learning process is interactive, with a human expert first providing input in the form of full demonstrations along with some subgoal states. These subgoal states define a set of subtasks for the learning agent to complete in order to achieve the final goal. The learning agent queries for partial demonstrations corresponding to each subtask as needed when the agent struggles with the subtask. The proposed Human Interactive IRL (HI-IRL) framework is evaluated on several discrete path-planning tasks. We demonstrate that subgoal-based interactive structuring of the learning task results in significantly more efficient learning, requiring only a fraction of the demonstration data needed for learning the underlying reward function with the baseline IRL model."}
{"_id": "747a58918524d15aca29885af3e1bc87313eb312", "title": "A step toward irrationality: using emotion to change belief", "text": "Emotions have a powerful impact on behavior and beliefs. The goal of our research is to create general computational models of this interplay of emotion, cognition and behavior to inform the design of virtual humans. Here, we address an aspect of emotional behavior that has been studied extensively in the psychological literature but largely ignored by computational approaches, emotion-focused coping. Rather than motivating external action, emotion-focused coping strategies alter beliefs in response to strong emotions. For example an individual may alter beliefs about the importance of a goal that is being threatened, thereby reducing their distress. We present a preliminary model of emotion-focused coping and discuss how coping processes, in general, can be coupled to emotions and behavior. The approach is illustrated within a virtual reality training environment where the models are used to create virtual human characters in high-stress social situations."}
{"_id": "332648a09d6ded93926829dbd81ac9dddf31d5b9", "title": "Perching and takeoff of a robotic insect on overhangs using switchable electrostatic adhesion", "text": "For aerial robots, maintaining a high vantage point for an extended time is crucial in many applications. However, available on-board power and mechanical fatigue constrain their flight time, especially for smaller, battery-powered aircraft. Perching on elevated structures is a biologically inspired approach to overcome these limitations. Previous perching robots have required specific material properties for the landing sites, such as surface asperities for spines, or ferromagnetism. We describe a switchable electroadhesive that enables controlled perching and detachment on nearly any material while requiring approximately three orders of magnitude less power than required to sustain flight. These electroadhesives are designed, characterized, and used to demonstrate a flying robotic insect able to robustly perch on a wide range of materials, including glass, wood, and a natural leaf."}
{"_id": "1facb3308307312789e1db7f0a0904ac9c9e7179", "title": "Key parameters influencing the behavior of Steel Plate Shear Walls (SPSW)", "text": "The complex behavior of Steel Plate Shear Walls (SPSW) is investigated herein through nonlinear FE simulations. A 3D detailed FE model is developed and validated utilizing experimental results available in the literature. The influence of key parameters on the structural behavior is investigated. The considered parameters are: the infill plate thickness, the beam size, the column size, the infill plate material grade and the frame material grade. Several structural responses are used as criteria to quantify their influence on the SPSW behavior. The evaluated structural responses are: yield strength, yield displacement, ultimate strength, initial stiffness and secondary stiffness. The results show that, overall the most influential parameter is the infill plate thickness followed by the beam size. Also, it was found that the least influential parameter is the frame material grade."}
{"_id": "236f183be06d824122da59ffb79e501d1a537486", "title": "Design for Reliability of Low-voltage, Switched-capacitor Circuits", "text": "Design for Reliability of Low-voltage, Switched-capacitor Circuits by Andrew Masami Abo Doctor of Philosophy in Engineering University of California, Berkeley Professor Paul R. Gray, Chair Analog, switched-capacitor circuits play a critical role in mixed-signal, analogto-digital interfaces. They implement a large class of functions, such as sampling, filtering, and digitization. Furthermore, their implementation makes them suitable for integration with complex, digital-signal-processing blocks in a compatible, low-cost technology\u2013particularly CMOS. Even as an increasingly larger amount of signal processing is done in the digital domain, this critical, analogto-digital interface is fundamentally necessary. Examples of some integrated applications include camcorders, wireless LAN transceivers, digital set-top boxes, and others. Advances in CMOS technology, however, are driving the operating voltage of integrated circuits increasingly lower. As device dimensions shrink, the applied voltages will need to be proportionately scaled in order to guarantee long-term reliability and manage power density. The reliability constraints of the technology dictate that the analog circuitry operate at the same low voltage as the digital circuitry. Furthermore, in achieving low-voltage operation, the reliability constraints of the technology must not be violated. This work examines the voltage limitations of CMOS technology and how analog circuits can maximize the utility of MOS devices without degrading relia-"}
{"_id": "83834cd33996ed0b00e3e0fca3cda413d7ed79ff", "title": "DWCMM: The Data Warehouse Capability Maturity Model", "text": "Data Warehouses and Business Intelligence have become popular fields of research in recent years. Unfortunately, in daily practice many Data Warehouse and Business Intelligence solutions still fail to help organizations make better decisions and increase their profitability, due to intransparent complexities and project interdependencies. In addition, emerging application domains such as Mobile Learning & Analytics heavily depend on a wellstructured data foundation with a longitudinally prepared architecture. Therefore, this research presents the Data Warehouse Capability Maturity Model (DWCMM) which encompasses both technical and organizational aspects involved in developing a Data Warehouse environment. The DWCMM can be used to help organizations assess their current Data Warehouse solution and provide them with guidelines for future improvements. The DWCMM consists of a maturity matrix and a maturity assessment questionnaire with 60 questions. The DWCMM has been evaluated empirically through expert interviews and case studies. We conclude that the DWCMM can be successfully applied in practice and that organizations can intelligibly utilize the DWCMM as a quickscan instrument to jumpstart their Data Warehouse and Business Intelligence improvement processes."}
{"_id": "89a9ad85d8343a622aaa8c072beacaf8df1f0464", "title": "Multiple-resonator-based bandpass filters", "text": "This article describes a class of recently developed multiple-mode-resonator-based bandpass filters for ultra-wide-band (UWB) transmission systems. These filters have many attractive features, including a simple design, compact size, low loss and good linearity in the UWB, enhanced out-of-band rejection, and easy integration with other circuits/antennas. In this article, we present a variety of multiple-mode resonators with stepped-impedance or stub-loaded nonuniform configurations and analyze their properties based on the transmission line theory. Along with the frequency dispersion of parallel-coupled transmission lines, we design and implement various filter structures on planar, uniplanar, and hybrid transmission line geometries."}
{"_id": "1ca75a68d6769df095ac3864d86bca21e9650985", "title": "Enhanced ARP: preventing ARP poisoning-based man-in-the-middle attacks", "text": "In this letter, an enhanced version of Address Resolution Protocol (ARP) is proposed to prevent ARP poisoning-based Man-in-the-Middle (MITM) attacks. The proposed mechanism is based on the following concept. When a node knows the correct Media Access Control (MAC) address for a given IP address, if it retains the IP/MAC address mapping while that machine is alive, then MITM attack is impossible for that IP address. In order to prevent MITM attacks even for a new IP address, a voting-based resolution mechanism is proposed. The proposed scheme is backward compatible with existing ARP and incrementally deployable."}
{"_id": "9c13e54760455a50482cda070c70448ecf30d68c", "title": "Time series classification with ensembles of elastic distance measures", "text": "Several alternative distance measures for comparing time series have recently been proposed and evaluated on time series classification (TSC) problems. These include variants of dynamic time warping (DTW), such as weighted and derivative DTW, and edit distance-based measures, including longest common subsequence, edit distance with real penalty, time warp with edit, and move\u2013split\u2013merge. These measures have the common characteristic that they operate in the time domain and compensate for potential localised misalignment through some elastic adjustment. Our aim is to experimentally test two hypotheses related to these distance measures. Firstly, we test whether there is any significant difference in accuracy for TSC problems between nearest neighbour classifiers using these distance measures. Secondly, we test whether combining these elastic distance measures through simple ensemble schemes gives significantly better accuracy. We test these hypotheses by carrying out one of the largest experimental studies ever conducted into time series classification. Our first key finding is that there is no significant difference between the elastic distance measures in terms of classification accuracy on our data sets. Our second finding, and the major contribution of this work, is to define an ensemble classifier that significantly outperforms the individual classifiers. We also demonstrate that the ensemble is more accurate than approaches not based in the time domain. Nearly all TSC papers in the data mining literature cite DTW (with warping window set through cross validation) as the benchmark for comparison. We believe that our ensemble is the first ever classifier to significantly outperform DTW and as such raises the bar for future work in this area."}
{"_id": "8c76872375aa79acb26871c93da76d90dfb0a950", "title": "Recovering punctuation marks for automatic speech recognition", "text": "This paper shows results of recovering punctuation over speech transcriptions for a Portuguese broadcast news corpus. The approach is based on maximum entropy models and uses word, part-of-speech, time and speaker information. The contribution of each type of feature is analyzed individually. Separate results for each focus condition are given, making it possible to analyze the differences of performance between planned and spontaneous speech."}
{"_id": "7e2eb3402ea7eacf182bccc3f8bb685636098d2c", "title": "Optical character recognition of the Orthodox Hellenic Byzantine Music notation", "text": "In this paper we present for the first time, the development of a new system for the off-line optical recognition of the characters used in the Orthodox Hellenic Byzantine Music Notation, that has been established since 1814. We describe the structure of the new system and propose algorithms for the recognition of the 71 distinct character classes, based on Wavelets, 4-projections and other structural and statistical features. Using a Nearest Neighbor classifier, combined with a post classification schema and a tree-structured classification philosophy, an accuracy of 99.4 % was achieved, in a database of about 18,000 byzantine character patterns that have been developed for the needs of the system. Optical music recognition Off-line character recognition, Byzantine Music, Byzantine Music Notation, Wavelets, Projections, Neural Networks Contour processing, Nearest Neighbor Classifier Byzantine Music Data Base"}
{"_id": "26d4ab9b60b91bb610202b58fa1766951fedb9e9", "title": "DRAW: A Recurrent Neural Network For Image Generation", "text": "This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye."}
{"_id": "4805aee558489b5413ce5434737043148537f62f", "title": "A Comparison of Features for Android Malware Detection", "text": "With the increase in mobile device use, there is a greater need for increasingly sophisticated malware detection algorithms. The research presented in this paper examines two types of features of Android applications, permission requests and system calls, as a way to detect malware. We are able to differentiate between benign and malicious apps by applying a machine learning algorithm. The model that is presented here achieved a classification accuracy of around 80% using permissions and 60% using system calls for a relatively small dataset. In the future, different machine learning algorithms will be examined to see if there is a more suitable algorithm. More features will also be taken into account and the training set will be expanded."}
{"_id": "608ec914e356ff5e5782c908016958bf650a946f", "title": "CogALex-V Shared Task: GHHH - Detecting Semantic Relations via Word Embeddings", "text": "This paper describes our system submission to the CogALex-2016 Shared Task on Corpus-Based Identification of Semantic Relations. Our system won first place for Task-1 and second place for Task-2. The evaluation results of our system on the test set is 88.1% (79.0% for TRUE only) f-measure for Task-1 on detecting semantic similarity, and 76.0% (42.3% when excluding RANDOM) for Task-2 on identifying fine-grained semantic relations. In our experiments, we try word analogy, linear regression, and multi-task Convolutional Neural Networks (CNNs) with word embeddings from publicly available word vectors. We found that linear regression performs better in the binary classification (Task-1), while CNNs have better performance in the multi-class semantic classification (Task-2). We assume that word analogy is more suited for deterministic answers rather than handling the ambiguity of one-to-many and many-to-many relationships. We also show that classifier performance could benefit from balancing the distribution of labels in the training data."}
{"_id": "5ecbb84d51e2a23dadd496d7c6ab10cf277d4452", "title": "A 5-DOF rotation-symmetric parallel manipulator with one unconstrained tool rotation", "text": "This paper introduces a novel 5-DOF parallel manipulator with a rotation-symmetric arm system. The manipulator is unorthodox since one degree of freedom of its manipulated platform is unconstrained. Such a manipulator is still useful in a wide range of applications utilizing a rotation-symmetric tool. The manipulator workspace is analyzed for singularities and collisions. The rotation-symmetric arm system leads to a large positional workspace in relation to the footprint of the manipulator. With careful choice of structural parameters, the rotational workspace of the tool is also sizeable."}
{"_id": "c79ddcef4bdf56c5467143b32e53b23825c17eff", "title": "A Framework based on SDN and Containers for Dynamic Service Chains on IoT Gateways", "text": "In this paper, we describe a new approach for managing service function chains in scenarios where data from Internet of Things (IoT) devices is partially processed at the network edge. Our framework is enabled by two emerging technologies, Software-Defined Networking (SDN) and container based virtualization, which ensure several benefits in terms of flexibility, easy programmability, and versatility. These features are well suitable with the increasingly stringent requirements of IoT applications, and allow a dynamic and automated network service chaining. An extensive performance evaluation, which has been carried out by means of a testbed, seeks to understand how our proposed framework performs in terms of computational overhead, network bandwidth, and energy consumption. By accounting for the constraints of typical IoT gateways, our evaluation tries to shed light on the actual deployability of the framework on low-power nodes."}
{"_id": "489555f05e316015d24d2a1fdd9663d4b85eb60f", "title": "Diagnostic Accuracy of Clinical Tests for Morton's Neuroma Compared With Ultrasonography.", "text": "The aim of the present study was to assess the diagnostic accuracy of 7 clinical tests for Morton's neuroma (MN) compared with ultrasonography (US). Forty patients (54 feet) were diagnosed with MN using predetermined clinical criteria. These patients were subsequently referred for US, which was performed by a single, experienced musculoskeletal radiologist. The clinical test results were compared against the US findings. MN was confirmed on US at the site of clinical diagnosis in 53 feet (98%). The operational characteristics of the clinical tests performed were as follows: thumb index finger squeeze (96% sensitivity, 96% accuracy), Mulder's click (61% sensitivity, 62% accuracy), foot squeeze (41% sensitivity, 41% accuracy), plantar percussion (37% sensitivity, 36% accuracy), dorsal percussion (33% sensitivity, 26% accuracy), and light touch and pin prick (26% sensitivity, 25% accuracy). No correlation was found between the size of MN on US and the positive clinical tests, except for Mulder's click. The size of MN was significantly larger in patients with a positive Mulder's click (10.9 versus 8.5 mm, p = .016). The clinical assessment was comparable to US in diagnosing MN. The thumb index finger squeeze test was the most sensitive screening test for the clinical diagnosis of MN."}
{"_id": "206723950b10580ced733cbacbfc23c85b268e13", "title": "Why Lurkers Lurk", "text": "The goal of this paper is to address the question: \u2018why do lurkers lurk?\u2019 Lurkers reportedly makeup the majority of members in online groups, yet little is known about them. Without insight into lurkers, our understanding of online groups is incomplete. Ignoring, dismissing, or misunderstanding lurking distorts knowledge of life online and may lead to inappropriate design of online environments. To investigate lurking, the authors carried out a study of lurking using in-depth, semi-structured interviews with ten members of online groups. 79 reasons for lurking and seven lurkers\u2019 needs are identified from the interview transcripts. The analysis reveals that lurking is a strategic activity involving more than just reading posts. Reasons for lurking are categorized and a gratification model is proposed to explain lurker behavior."}
{"_id": "22d185c7ba066468f9ff1df03f1910831076e943", "title": "Learning Better Embeddings for Rare Words Using Distributional Representations", "text": "There are two main types of word representations: low-dimensional embeddings and high-dimensional distributional vectors, in which each dimension corresponds to a context word. In this paper, we initialize an embedding-learning model with distributional vectors. Evaluation on word similarity shows that this initialization significantly increases the quality of embeddings for rare words."}
{"_id": "f1b3400e49a929d9f5bd1b15081a13120abc3906", "title": "Text comparison using word vector representations and dimensionality reduction", "text": "This paper describes a technique to compare large text sources using word vector representations (word2vec) and dimensionality reduction (tSNE) and how it can be implemented using Python. The technique provides a bird\u2019s-eye view of text sources, e.g. text summaries and their source material, and enables users to explore text sources like a geographical map. Word vector representations capture many linguistic properties such as gender, tense, plurality and even semantic concepts like \"capital city of\". Using dimensionality reduction, a 2D map can be computed where semantically similar words are close to each other. The technique uses the word2vec model from the gensim Python library and t-SNE from scikit-learn."}
{"_id": "e49d662652885e9b71622713838c840cca9d33ed", "title": "Engineering Quality and Reliability in Technology-Assisted Review", "text": "The objective of technology-assisted review (\"TAR\") is to find as much relevant information as possible with reasonable effort. Quality is a measure of the extent to which a TAR method achieves this objective, while reliability is a measure of how consistently it achieves an acceptable result. We are concerned with how to define, measure, and achieve high quality and high reliability in TAR. When quality is defined using the traditional goal-post method of specifying a minimum acceptable recall threshold, the quality and reliability of a TAR method are both, by definition, equal to the probability of achieving the threshold. Assuming this definition of quality and reliability, we show how to augment any TAR method to achieve guaranteed reliability, for a quantifiable level of additional review effort. We demonstrate this result by augmenting the TAR method supplied as the baseline model implementation for the TREC 2015 Total Recall Track, measuring reliability and effort for 555 topics from eight test collections. While our empirical results corroborate our claim of guaranteed reliability, we observe that the augmentation strategy may entail disproportionate effort, especially when the number of relevant documents is low. To address this limitation, we propose stopping criteria for the model implementation that may be applied with no additional review effort, while achieving empirical reliability that compares favorably to the provably reliable method. We further argue that optimizing reliability according to the traditional goal-post method is inconsistent with certain subjective aspects of quality, and that optimizing a Taguchi quality loss function may be more apt."}
{"_id": "e75cb14344eaeec987aa571d0009d0e02ec48a63", "title": "Design of Highly Integrated Mechatronic Gear Selector Levers for Automotive Shift-by-Wire Systems", "text": "Increased requirements regarding ergonomic comfort, limited space, weight reduction, and electronic automation of functions and safety features are on the rise for future automotive gear levers. At the same time, current mechanical gear levers have restrictions to achieve this. In this paper, we present a monostable, miniaturized mechatronic gear lever to fulfill these requirements for automotive applications. This solution describes a gear lever for positioning in the center console of a car to achieve optimal ergonomics for dynamic driving, which enables both automatic and manual gear switching. In this paper, we describe the sensor and actuator concept, safety concept, recommended shift pattern, mechanical design, and the electronic integration of this shift-by-wire system in a typical automotive bus communication network. The main contribution of this paper is a successful system design and the integration of a mechatronic system in new applications for optimizing the human-machine interface inside road vehicles."}
{"_id": "66c410a2567e96dcff135bf6582cb26c9df765c4", "title": "Batch Identification Game Model for Invalid Signatures in Wireless Mobile Networks", "text": "Secure access is one of the fundamental problems in wireless mobile networks. Digital signature is a widely used technique to protect messages\u2019 authenticity and nodes\u2019 identities. From the practical perspective, to ensure the quality of services in wireless mobile networks, ideally the process of signature verification should introduce minimum delay. Batch cryptography technique is a powerful tool to reduce verification time. However, most of the existing works focus on designing batch verification algorithms for wireless mobile networks without sufficiently considering the impact of invalid signatures, which can lead to verification failures and performance degradation. In this paper, we propose a Batch Identification Game Model (BIGM) in wireless mobile networks, enabling nodes to find invalid signatures with reasonable delay no matter whether the game scenario is complete information or incomplete information. Specifically, we analyze and prove the existence of Nash Equilibriums (NEs) in both scenarios, to select the dominant algorithm for identifying invalid signatures. To optimize the identification algorithm selection, we propose a self-adaptive auto-match protocol which estimates the strategies and states of attackers based on historical information. Comprehensive simulation results in terms of NE reasonability, algorithm selection accuracy, and identification delay are provided to demonstrate that BIGM can identify invalid signatures more efficiently than existing algorithms."}
{"_id": "9a59a3719bf08105d4632898ee178bd982da2204", "title": "International Journal of Advanced Robotic Systems Design of a Control System for an Autonomous Vehicle Based on Adaptive-PID Regular Paper", "text": "The autonomous vehicle is a mobile robot integrating multi\u2010sensor navigation and positioning, intelligent decision making and control technology. This paper presents the control system architecture of the autonomous vehicle, called \u201cIntelligent Pioneer\u201d, and the path tracking and stability of motion to effectively navigate in unknown environments is discussed. In this approach, a two degree\u2010of\u2010freedom dynamic model is developed to formulate the path\u2010tracking problem in state space format. For controlling the instantaneous path error, traditional controllers have difficulty in guaranteeing performance and stability over a wide range of parameter changes and disturbances. Therefore, a newly developed adaptive\u2010PID controller will be used. By using this approach the flexibility of the vehicle control system will be increased and achieving great advantages. Throughout, we provide examples and results from Intelligent Pioneer and the autonomous vehicle using this approach competed in the 2010 and 2011 Future Challenge of China. Intelligent Pioneer finished all of the competition programmes and won first position in 2010 and third position in 2011."}
{"_id": "17ebe1eb19655543a6b876f91d41917488e70f55", "title": "Random synaptic feedback weights support error backpropagation for deep learning", "text": "The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning."}
{"_id": "57b199e1d22752c385c34191c1058bcabb850d9f", "title": "Getting Formal with Dopamine and Reward", "text": "Recent neurophysiological studies reveal that neurons in certain brain structures carry specific signals about past and future rewards. Dopamine neurons display a short-latency, phasic reward signal indicating the difference between actual and predicted rewards. The signal is useful for enhancing neuronal processing and learning behavioral reactions. It is distinctly different from dopamine's tonic enabling of numerous behavioral processes. Neurons in the striatum, frontal cortex, and amygdala also process reward information but provide more differentiated information for identifying and anticipating rewards and organizing goal-directed behavior. The different reward signals have complementary functions, and the optimal use of rewards in voluntary behavior would benefit from interactions between the signals. Addictive psychostimulant drugs may exert their action by amplifying the dopamine reward signal."}
{"_id": "7592f8a1d4fa2703b75cad6833775da2ff72fe7b", "title": "Deep Big Multilayer Perceptrons for Digit Recognition", "text": "The competitive MNIST handwritten digit recognition benchmark has a long history of broken records since 1998. The most recent advancement by others dates back 8 years (error rate 0.4%). Good old on-line back-propagation for plain multi-layer perceptrons yields a very low 0.35% error rate on the MNIST handwritten digits benchmark with a single MLP and 0.31% with a committee of seven MLP. All we need to achieve this until 2011 best result are many hidden layers, many neurons per layer, numerous deformed training images to avoid overfitting, and graphics cards to greatly speed up learning."}
{"_id": "9539a0c4f8766c08dbaf96561cf6f1f409f5d3f9", "title": "Feature-based attention influences motion processing gain in macaque visual cortex", "text": "Changes in neural responses based on spatial attention have been demonstrated in many areas of visual cortex, indicating that the neural correlate of attention is an enhanced response to stimuli at an attended location and reduced responses to stimuli elsewhere. Here we demonstrate non-spatial, feature-based attentional modulation of visual motion processing, and show that attention increases the gain of direction-selective neurons in visual cortical area MT without narrowing the direction-tuning curves. These findings place important constraints on the neural mechanisms of attention and we propose to unify the effects of spatial location, direction of motion and other features of the attended stimuli in a \u2018feature similarity gain model\u2019 of attention."}
{"_id": "cbcd9f32b526397f88d18163875d04255e72137f", "title": "Gradient-based learning applied to document recognition", "text": ""}
{"_id": "19e2ad92d0f6ad3a9c76e957a0463be9ac244203", "title": "Condition monitoring of helicopter drive shafts using quadratic-nonlinearity metric based on cross-bispectrum", "text": "Based on cross-bispectrum, quadratic-nonlinearity coupling between two vibration signals is proposed and used to assess health conditions of rotating shafts in an AH-64D helicopter tail rotor drive train. Vibration data are gathered from two bearings supporting the shaft in an experimental helicopter drive train simulating different shaft conditions, namely, baseline, misalignment, imbalance, and combination of misalignment and imbalance. The proposed metric shows better capabilities in distinguishing different shaft settings than the conventional linear coupling based on cross-power spectrum."}
{"_id": "7b24aa024ca2037b097cfcb2ea73a60ab497b80e", "title": "Internet security architecture", "text": "Fear of security breaches has been a major r eason f or the business world\u2019 s reluctance to embrace the Inter net as a viable means of communication. A widely adopted solution consists of physically separating private networks from the rest of Internet using firewalls. This paper discusses the curr ent cryptographic security measures available for the Internet infrastructur e as an alternative to physical segregation. First the IPsec ar chitecture including security protocols in the Internet Layer and the related key management pr oposals are introduced. The transport layer security protocol and security issues in the netw ork control and management are then presented. The paper is addr essed to r eaders with a basic understanding of common security mechanisms including encryption, authentication and key exchange techniques."}
{"_id": "525dc4242b21df23ba4e1ec0748cf46de0e8f5c0", "title": "Client attachment, attachment to the therapist and client-therapist attachment match: how do they relate to change in psychodynamic psychotherapy?", "text": "OBJECTIVE\nWe examined the associations between client attachment, client attachment to the therapist, and symptom change, as well as the effects of client-therapist attachment match on outcome. Clients (n = 67) and their therapists (n = 27) completed the ECR to assess attachment.\n\n\nMETHOD\nClients completed also the Client Attachment to Therapist scale three times (early, middle, and late sessions) and the OQ-45 at intake and four times over the course of a year of psychodynamic psychotherapy.\n\n\nRESULTS\nClients characterized by avoidant attachment and by avoidant attachment to their therapist showed the least improvement. A low-avoidant client-therapist attachment match led to a greater decrease in symptom distress than when a low-avoidant therapist treated a high-avoidant client.\n\n\nCONCLUSIONS\nThese findings suggest the importance of considering client-therapist attachment matching and the need to pay attention to the special challenges involved in treating avoidant clients in order to facilitate progress in psychotherapy."}
{"_id": "476edaffb4e613303012e7321dd319ba23abd0c3", "title": "Prioritized multi-task compliance control of redundant manipulators", "text": "We propose a new approach for dynamic control of redundant manipulators to deal with multiple prioritized tasks at the same time by utilizing null space projection techniques. The compliance control law is based on a new representation of the dynamics wherein specific null space velocity coordinates are introduced. These allow to efficiently exploit the kinematic redundancy according to the task hierarchy and lead to a dynamics formulation with block-diagonal inertia matrix. The compensation of velocity-dependent coupling terms between the tasks by an additional passive feedback action facilitates a stability analysis for the complete hierarchy based on semi-definite Lyapunov functions. No external forces have to be measured. Finally, the performance of the control approach is evaluated in experiments on a torque-controlled robot. \u00a9 2015 Elsevier Ltd. All rights reserved."}
{"_id": "fdb1a478c6c566729a82424b3d6b37ca76c8b85e", "title": "The Concept of Flow", "text": "What constitutes a good life? Few questions are of more fundamental importance to a positive psychology. Flow research has yielded one answer, providing an understanding of experiences during which individuals are fully involved in the present moment. Viewed through the experiential lens of flow, a good life is one that is characterized by complete absorption in what one does. In this chapter, we describe the flow model of optimal experience and optimal development, explain how flow and related constructs have been measured, discuss recent work in this area, and identify some promising directions for future research."}
{"_id": "7817db7b898a3458035174d914a7570d0b0efb7b", "title": "Corporate social responsibility and bank customer satisfaction A research agenda", "text": "Purpose \u2013 The purpose of this paper is to explore the relationship between corporate social responsibility (CSR) and customer outcomes. Design/methodology/approach \u2013 This paper reviews the literature on CSR effects and satisfaction, noting gaps in the literature. Findings \u2013 A series of propositions is put forward to guide future research endeavours. Research limitations/implications \u2013 By understanding the likely impact on customer satisfaction of CSR initiatives vis-\u00e0-vis customer-centric initiatives, the academic research community can assist managers to understand how to best allocate company resources in situations of low customer satisfaction. Such endeavours are managerially relevant and topical. Researchers seeking to test the propositions put forward in this paper would be able to gain links with, and possibly attract funding from, banks to conduct their research. Such endeavours may assist researchers to redefine the stakeholder view by placing customers at the centre of a network of stakeholders. Practical implications \u2013 An understanding of how to best allocate company resources to increase the proportion of satisfied customers will allow bank marketers to reduce customer churn and hence increase market share and profits. Originality/value \u2013 Researchers have not previously conducted a comparative analysis of the effects of different CSR initiatives on customer satisfaction, nor considered whether more customer-centric initiatives are likely to be more effective in increasing the proportion of satisfied customers."}
{"_id": "60cc377d4d2b885594906d58bacb5732e8a04eb9", "title": "Essential Layers, Artifacts, and Dependencies of Enterprise Architecture", "text": "After a period where implementation speed was more important than integration, consistency and reduction of complexity, architectural considerations have become a key issue of information management in recent years again. Enterprise architecture is widely accepted as an essential mechanism for ensuring agility and consistency, compliance and efficiency. Although standards like TOGAF and FEAF have developed, however, there is no common agreement on which architecture layers, which artifact types and which dependencies constitute the essence of enterprise architecture. This paper contributes to the identification of essential elements of enterprise architecture by (1) specifying enterprise architecture as a hierarchical, multilevel system comprising aggregation hierarchies, architecture layers and views, (2) discussing enterprise architecture frameworks with regard to essential elements, (3) proposing interfacing requirements of enterprise architecture with other architecture models and (4) matching these findings with current enterprise architecture practice in several large companies."}
{"_id": "46e0faacf50c8053d38fb3cf2da7fbbfb2932977", "title": "Agent-based control for decentralised demand side management in the smart grid", "text": "Central to the vision of the smart grid is the deployment of smart meters that will allow autonomous software agents, representing the consumers, to optimise their use of devices and heating in the smart home while interacting with the grid. However, without some form of coordination, the population of agents may end up with overly-homogeneous optimised consumption patterns that may generate significant peaks in demand in the grid. These peaks, in turn, reduce the efficiency of the overall system, increase carbon emissions, and may even, in the worst case, cause blackouts. Hence, in this paper, we introduce a novel model of a Decentralised Demand Side Management (DDSM) mechanism that allows agents, by adapting the deferment of their loads based on grid prices, to coordinate in a decentralised manner. Specifically, using average UK consumption profiles for 26M homes, we demonstrate that, through an emergent coordination of the agents, the peak demand of domestic consumers in the grid can be reduced by up to 17% and carbon emissions by up to 6%. We also show that our DDSM mechanism is robust to the increasing electrification of heating in UK homes (i.e., it exhibits a similar efficiency)."}
{"_id": "3d9e919a4de74089f94f5a1b2a167c66c19a241d", "title": "Maxillary length at 11-14 weeks of gestation in fetuses with trisomy 21.", "text": "OBJECTIVE\nTo determine the value of measuring maxillary length at 11-14 weeks of gestation in screening for trisomy 21.\n\n\nMETHODS\nIn 970 fetuses ultrasound examination was carried out for measurement of crown-rump length (CRL), nuchal translucency and maxillary length, and to determine if the nasal bone was present or absent, immediately before chorionic villus sampling for karyotyping at 11-14 weeks of gestation. In 60 cases the maxillary length was measured twice by the same operator to calculate the intraobserver variation in measurements.\n\n\nRESULTS\nThe median gestation was 12 (range, 11-14) weeks. The maxilla was successfully examined in all cases. The mean difference between paired measurements of maxillary length was -0.012 mm and the 95% limits of agreement were -0.42 (95% CI, -0.47 to -0.37) to 0.40 (95% CI, 0.35 to 0.44) mm. The fetal karyotype was normal in 839 pregnancies and abnormal in 131, including 88 cases of trisomy 21. In the chromosomally normal group the maxillary length increased significantly with CRL from a mean of 4.8 mm at a CRL of 45 mm to 8.3 mm at a CRL of 84 mm. In the trisomy 21 fetuses the maxillary length was significantly shorter than normal by 0.7 mm and in the trisomy 21 fetuses with absent nasal bone the maxilla was shorter than in those with present nasal bone by 0.5 mm. In fetuses with other chromosomal defects there were no significant differences from normal in the maxillary length.\n\n\nCONCLUSION\nAt 11-14 weeks of gestation, maxillary length in trisomy 21 fetuses is significantly shorter than in normal fetuses."}
{"_id": "db9531c2677ab3eeaaf434ccb18ca354438560d6", "title": "From e-commerce to social commerce: A close look at design features", "text": "E-commerce is undergoing an evolution through the adoption of Web 2.0 capabilities to enhance customer participation and achieve greater economic value. This new phenomenon is commonly referred to as social commerce, however it has not yet been fully understood. In addition to the lack of a stable and agreed-upon definition, there is little research on social commerce and no significant research dedicated to the design of social commerce platforms. This study offers literature review to explain the concept of social commerce, tracks its nascent state-of-the-art, and discusses relevant design features as they relate to e-commerce and Web 2.0. We propose a new model and a set of principles for guiding social commerce design. We also apply the model and guidelines to two leading social commerce platforms, Amazon and Starbucks on Facebook. The findings indicate that, for any social commerce website, it is critical to achieve a minimum set of social commerce design features. These design features must cover all the layers of the proposed model, including the individual, conversation, community and commerce levels. 2012 Elsevier B.V. All rights reserved."}
{"_id": "9d420ad78af7366384f77b29e62a93a0325ace77", "title": "A spectrogram-based audio fingerprinting system for content-based copy detection", "text": "This paper presents a novel audio fingerprinting method that is highly robust to a variety of audio distortions. It is based on an unconventional audio fingerprint generation scheme. The robustness is achieved by generating different versions of the spectrogram matrix of the audio signal by using a threshold based on the average of the spectral values to prune this matrix. We transform each version of this pruned spectrogram matrix into a 2-D binary image. Multiple versions of these 2-D images suppress noise to a varying degree. This varying degree of noise suppression improves likelihood of one of the images matching a reference image. To speed up matching, we convert each image into an n-dimensional vector, and perform a nearest neighbor search based on this n-dimensional vector. We give results with two different feature parameters and their combination. We test this method on TRECVID 2010 content-based copy detection evaluation dataset, and we validate the performance on TRECVID 2009 dataset also. Experimental results show the effectiveness of these features even when the audio is distorted. We compare the proposed method to two state-of-the-art audio copy detection systems, namely NN-based and Shazam systems. Our method by far outperforms Shazam system for all audio transformations (or distortions) in terms of detection performance, number of missed queries and localization accuracy. Compared to NN-based system, our approach reduces minimal Normalized Detection Cost Rate (min NDCR) by 23\u00a0% and improves localization accuracy by 24\u00a0%."}
{"_id": "8c15753cbb921f1b0ce4cd09b83415152212dbef", "title": "More than Just Two Sexes: The Neural Correlates of Voice Gender Perception in Gender Dysphoria", "text": "Gender dysphoria (also known as \"transsexualism\") is characterized as a discrepancy between anatomical sex and gender identity. Research points towards neurobiological influences. Due to the sexually dimorphic characteristics of the human voice, voice gender perception provides a biologically relevant function, e.g. in the context of mating selection. There is evidence for a better recognition of voices of the opposite sex and a differentiation of the sexes in its underlying functional cerebral correlates, namely the prefrontal and middle temporal areas. This fMRI study investigated the neural correlates of voice gender perception in 32 male-to-female gender dysphoric individuals (MtFs) compared to 20 non-gender dysphoric men and 19 non-gender dysphoric women. Participants indicated the sex of 240 voice stimuli modified in semitone steps in the direction to the other gender. Compared to men and women, MtFs showed differences in a neural network including the medial prefrontal gyrus, the insula, and the precuneus when responding to male vs. female voices. With increased voice morphing men recruited more prefrontal areas compared to women and MtFs, while MtFs revealed a pattern more similar to women. On a behavioral and neuronal level, our results support the feeling of MtFs reporting they cannot identify with their assigned sex."}
{"_id": "9281495c7ffc4d6d6e5305281c200f9b02ba70db", "title": "Security and compliance challenges in complex IT outsourcing arrangements: A multi-stakeholder perspective", "text": "Complex IT outsourcing arrangements promise numerous benefits such as increased cost predictability and reduced costs, higher flexibility and scalability upon demand. Organizations trying to realize these benefits, however, face several security and compliance challenges. In this article, we investigate the pressure to take action with respect to such challenges and discuss avenues toward promising responses. We collected perceptions on security and compliance challenges from multiple stakeholders by means of a series of interviews and an online survey, first, to analyze the current and future relevance of the challenges as well as potential adverse effects on organizational performance and, second, to discuss the nature and scope of potential responses. The survey participants confirmed the current and future relevance of the six challenges auditing clouds, managing heterogeneity of services, coordinating involved parties, managing relationships between clients and vendors, localizing and migrating data and coping with lack of security awareness. Additionally, they perceived these challenges as affecting organizational performance adversely in case they are not properly addressed. Responses in form of organizational measures were considered more promising than technical ones concerning all challenges except localizing and migrating data, for which the opposite was true. Balancing relational and contractual governance as well as employing specific client and vendor capabilities is essential for the success of IT outsourcing arrangements, yet do not seem sufficient to overcome the investigated challenges. Innovations connecting the technical perspective of utility software with the business perspective of application software relevant for security and compliance management, however, nourish the hope that the benefits associated with complex IT outsourcing arrangements can be realized in the foreseeable future whilst addressing the security and compliance challenges. a 2013 Elsevier Ltd. All rights reserved. 61. .fraunhofer.de (D. Bachlechner), stefan.thalmann@uibk.ac.at (S. Thalmann), ronald.maier@ ier Ltd. All rights reserved. c om p u t e r s & s e c u r i t y 4 0 ( 2 0 1 4 ) 3 8e5 9 39"}
{"_id": "919fa5c3a4f9c3c1c7ba407ccbac8ab72ba68566", "title": "Detection of high variability in gene expression from single-cell RNA-seq profiling", "text": "The advancement of the next-generation sequencing technology enables mapping gene expression at the single-cell level, capable of tracking cell heterogeneity and determination of cell subpopulations using single-cell RNA sequencing (scRNA-seq). Unlike the objectives of conventional RNA-seq where differential expression analysis is the integral component, the most important goal of scRNA-seq is to identify highly variable genes across a population of cells, to account for the discrete nature of single-cell gene expression and uniqueness of sequencing library preparation protocol for single-cell sequencing. However, there is lack of generic expression variation model for different scRNA-seq data sets. Hence, the objective of this study is to develop a gene expression variation model (GEVM), utilizing the relationship between coefficient of variation (CV) and average expression level to address the over-dispersion of single-cell data, and its corresponding statistical significance to quantify the variably expressed genes (VEGs). We have built a simulation framework that generated scRNA-seq data with different number of cells, model parameters, and variation levels. We implemented our GEVM and demonstrated the robustness by using a set of simulated scRNA-seq data under different conditions. We evaluated the regression robustness using root-mean-square error (RMSE) and assessed the parameter estimation process by varying initial model parameters that deviated from homogeneous cell population. We also applied the GEVM on real scRNA-seq data to test the performance under distinct cases. In this paper, we proposed a gene expression variation model that can be used to determine significant variably expressed genes. Applying the model to the simulated single-cell data, we observed robust parameter estimation under different conditions with minimal root mean square errors. We also examined the model on two distinct scRNA-seq data sets using different single-cell protocols and determined the VEGs. Obtaining VEGs allowed us to observe possible subpopulations, providing further evidences of cell heterogeneity. With the GEVM, we can easily find out significant variably expressed genes in different scRNA-seq data sets."}
{"_id": "d4caec47eeabb2eca3ce9e39b1fae5424634c731", "title": "Design and control of underactuated tendon-driven mechanisms", "text": "Many robotic hands or prosthetic hands have been developed in the last several decades, and many use tendon-driven mechanisms for their transmissions. Robotic hands are now built with underactuated mechanisms, which have fewer actuators than degrees of freedom, to reduce mechanical complexity or to realize a biomimetic motion such as flexion of an index finger. The design is heuristic and it is useful to develop design methods for the underactuated mechanisms. This paper classifies mechanisms driven by tendons into three classes, and proposes a design method for them. The two classes are related to underactuated tendon-driven mechanisms, and these have been used without distinction so far. An index finger robot, which has four active tendons and two passive tendons, is developed and controlled with the proposed method."}
{"_id": "8d6ca2dae1a6d1e71626be6167b9f25d2ce6dbcc", "title": "Semi-Supervised Learning with the Deep Rendering Mixture Model", "text": "Semi-supervised learning algorithms reduce the high cost of acquiring labeled training data by using both labeled and unlabeled data during learning. Deep Convolutional Networks (DCNs) have achieved great success in supervised tasks and as such have been widely employed in the semi-supervised learning. In this paper we leverage the recently developed Deep Rendering Mixture Model (DRMM), a probabilistic generative model that models latent nuisance variation, and whose inference algorithm yields DCNs. We develop an EM algorithm for the DRMM to learn from both labeled and unlabeled data. Guided by the theory of the DRMM, we introduce a novel nonnegativity constraint and a variational inference term. We report state-of-the-art performance on MNIST and SVHN and competitive results on CIFAR10. We also probe deeper into how a DRMM trained in a semi-supervised setting represents latent nuisance variation using synthetically rendered images. Taken together, our work provides a unified framework for supervised, unsupervised, and semisupervised learning."}
{"_id": "9bfc34ca3d3dd17ecdcb092f2a056da6cb824acd", "title": "Visual analytics of spatial interaction patterns for pandemic decision support", "text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material. Population mobility, i.e. the movement and contact of individuals across geographic space, is one of the essential factors that determine the course of a pandemic disease spread. This research views both individual-based daily activities and a pandemic spread as spatial interaction problems, where locations interact with each other via the visitors that they share or the virus that is transmitted from one place to another. The research proposes a general visual analytic approach to synthesize very large spatial interaction data and discover interesting (and unknown) patterns. The proposed approach involves a suite of visual and computational techniques, including (1) a new graph partitioning method to segment a very large interaction graph into a moderate number of spatially contiguous subgraphs (regions); (2) a reorderable matrix, with regions 'optimally' ordered on the diagonal, to effectively present a holistic view of major spatial interaction patterns; and (3) a modified flow map, interactively linked to the reorderable matrix, to enable pattern interpretation in a geographical context. The implemented system is able to visualize both people's daily movements and a disease spread over space in a similar way. The discovered spatial interaction patterns provide valuable insight for designing effective pandemic mitigation strategies and supporting decision-making in time-critical situations."}
{"_id": "0428c79e5be359ccd13d63205b5e06037404967b", "title": "On Bayesian Upper Confidence Bounds for Bandit Problems", "text": "Stochastic bandit problems have been analyzed from two different perspectives: a frequentist view, where the parameter is a deterministic unknown quantity, and a Bayesian approach, where the parameter is drawn from a prior distribution. We show in this paper that methods derived from this second perspective prove optimal when evaluated using the frequentist cumulated regret as a measure of performance. We give a general formulation for a class of Bayesian index policies that rely on quantiles of the posterior distribution. For binary bandits, we prove that the corresponding algorithm, termed BayesUCB, satisfies finite-time regret bounds that imply its asymptotic optimality. More generally, Bayes-UCB appears as an unifying framework for several variants of the UCB algorithm addressing different bandit problems (parametric multi-armed bandits, Gaussian bandits with unknown mean and variance, linear bandits). But the generality of the Bayesian approach makes it possible to address more challenging models. In particular, we show how to handle linear bandits with sparsity constraints by resorting to Gibbs sampling."}
{"_id": "e030aa1ea57ee47d3f3a0ce05b7e983f95115f1a", "title": "Psychometric Properties of Physical Activity and Leisure Motivation Scale in Farsi: an International Collaborative Project on Motivation for Physical Activity and Leisure.", "text": "BACKGROUND\nGiven the importance of regular physical activity, it is crucial to evaluate the factors favoring participation in physical activity. We aimed to report the psychometric analysis of the Farsi version of the Physical Activity and Leisure Motivation Scale (PALMS).\n\n\nMETHODS\nThe Farsi version of PALMS was completed by 406 healthy adult individuals to test its factor structure and concurrent validity and reliability.\n\n\nRESULTS\nConducting the exploratory factor analysis revealed nine factors that accounted for 64.6% of the variances. The PALMS reliability was supported with a high internal consistency of 0.91 and a high test-retest reliability of 0.97 (95% CI: 0.97-0.98). The association between the PALMS and its previous version Recreational Exercise Motivation Measure scores was strongly significant (r= 0.86, P < 0.001).\n\n\nCONCLUSION\nWe have shown that the Farsi version of the PALMS appears to be a valuable instrument to measure motivation for physical activity and leisure."}
{"_id": "6570489a6294a5845adfd195a50a226f78a139c1", "title": "An extended online purchase intention model for middle-aged online users", "text": "This article focuses on examining the determinants and mediators of the purchase intention of nononline purchasers between ages 31 and 60 who mostly have strong purchasing power. It propose anew online purchase intention model by integrating the technology acceptance model with additional determinants and adding habitual online usage as a new mediator. Based on a sample of more than 300 middle-aged non-online purchasers, beyond some situationally-specific predictor variables, online purchasing attitude and habitual online usage are key mediators. Personal awareness of security only affects habitual online usage, t indicating a concern of middle-aged users. Habitual online usage is a"}
{"_id": "1eb92d883dab2bc6a408245f4766f4c5d52f7545", "title": "Maximum Complex Task Assignment: Towards Tasks Correlation in Spatial Crowdsourcing", "text": "Spatial crowdsourcing has gained emerging interest from both research communities and industries. Most of current spatial crowdsourcing frameworks assume independent and atomic tasks. However, there could be some cases that one needs to crowdsource a spatial complex task which consists of some spatial sub-tasks (i.e., tasks related to a specific location). The spatial complex task's assignment requires assignments of all of its sub-tasks. The currently available frameworks are inapplicable to such kind of tasks. In this paper, we introduce a novel approach to crowdsource spatial complex tasks. We first formally define the Maximum Complex Task Assignment (MCTA) problem and propose alternative solutions. Subsequently, we perform various experiments using both real and synthetic datasets to investigate and verify the usability of our proposed approach."}
{"_id": "44298a4cf816fe8d55c663337932724407ae772b", "title": "A survey on policy search algorithms for learning robot controllers in a handful of trials", "text": "Most policy search algorithms require thousands of training episodes to find an effective policy, which is often infeasible with a physical robot. This survey article focuses on the extreme other end of the spectrum: how can a robot adapt with only a handful of trials (a dozen) and a few minutes? By analogy with the word \u201cbig-data\u201d, we refer to this challenge as \u201cmicro-data reinforcement learning\u201d. We show that a first strategy is to leverage prior knowledge on the policy structure (e.g., dynamic movement primitives), on the policy parameters (e.g., demonstrations), or on the dynamics (e.g., simulators). A second strategy is to create data-driven surrogate models of the expected reward (e.g., Bayesian optimization) or the dynamical model (e.g., model-based policy search), so that the policy optimizer queries the model instead of the real system. Overall, all successful micro-data algorithms combine these two strategies by varying the kind of model and prior knowledge. The current scientific challenges essentially revolve around scaling up to complex robots (e.g., humanoids), designing generic priors, and optimizing the computing time."}
{"_id": "35318f1dcc88c8051911ba48815c47d424626a92", "title": "Visual Analysis of TED Talk Topic Trends", "text": "TED Talks are short, powerful talks given by some of the world's brightest minds - scientists, philanthropists, businessmen, artists, and many others. Funded by members and advertising, these talks are free to access by the public on the TED website and TED YouTube channel, and many videos have become viral phenomena. In this research project, we perform a visual analysis of TED Talk videos and playlists to gain a good understanding of the trends and relationships between TED Talk topics."}
{"_id": "a42569c671b5f9d0fe2007af55199d668dae491b", "title": "Fine-grained Concept Linking using Neural Networks in Healthcare", "text": "To unlock the wealth of the healthcare data, we often need to link the real-world text snippets to the referred medical concepts described by the canonical descriptions. However, existing healthcare concept linking methods, such as dictionary-based and simple machine learning methods, are not effective due to the word discrepancy between the text snippet and the canonical concept description, and the overlapping concept meaning among the fine-grained concepts. To address these challenges, we propose a Neural Concept Linking (NCL) approach for accurate concept linking using systematically integrated neural networks. We call the novel neural network architecture as the COMposite AttentIonal encode-Decode neural network (COM-AID). COM-AID performs an encode-decode process that encodes a concept into a vector and decodes the vector into a text snippet with the help of two devised contexts. On the one hand, it injects the textual context into the neural network through the attention mechanism, so that the word discrepancy can be overcome from the semantic perspective. On the other hand, it incorporates the structural context into the neural network through the attention mechanism, so that minor concept meaning differences can be enlarged and effectively differentiated. Empirical studies on two real-world datasets confirm that the NCL produces accurate concept linking results and significantly outperforms state-of-the-art techniques."}
{"_id": "30f2b6834d6f2322da204f36ad24ddf43cc45d33", "title": "Structural XML Classification in Concept Drifting Data Streams", "text": "Classification of large, static collections of XML data has been intensively studied in the last several years. Recently however, the data processing paradigm is shifting from static to streaming data, where documents have to be processed online using limited memory and class definitions can change with time in an event called concept drift. As most existing XML classifiers are capable of processing only static data, there is a need to develop new approaches dedicated for streaming environments. In this paper, we propose a new classification algorithm for XML data streams called XSC. The algorithm uses incrementally mined frequent subtrees and a tree-subtree similarity measure to classify new documents in an associative manner. The proposed approach is experimentally evaluated against eight state-of-the-art stream classifiers on real and synthetic data. The results show that XSC performs significantly better than competitive algorithms in terms of accuracy and memory usage."}
{"_id": "14829636fee5a1cf8dee9737849a8e2bdaf9a91f", "title": "Bitter to Better - How to Make Bitcoin a Better Currency", "text": "Bitcoin is a distributed digital currency which has attracted a substantial number of users. We perform an in-depth investigation to understand what made Bitcoin so successful, while decades of research on cryptographic e-cash has not lead to a large-scale deployment. We ask also how Bitcoin could become a good candidate for a long-lived stable currency. In doing so, we identify several issues and attacks of Bitcoin, and propose suitable techniques to address them."}
{"_id": "35fe18606529d82ce3fc90961dd6813c92713b3c", "title": "SoK: Research Perspectives and Challenges for Bitcoin and Cryptocurrencies", "text": "Bit coin has emerged as the most successful cryptographic currency in history. Within two years of its quiet launch in 2009, Bit coin grew to comprise billions of dollars of economic value despite only cursory analysis of the system's design. Since then a growing literature has identified hidden-but-important properties of the system, discovered attacks, proposed promising alternatives, and singled out difficult future challenges. Meanwhile a large and vibrant open-source community has proposed and deployed numerous modifications and extensions. We provide the first systematic exposition Bit coin and the many related crypto currencies or 'altcoins.' Drawing from a scattered body of knowledge, we identify three key components of Bit coin's design that can be decoupled. This enables a more insightful analysis of Bit coin's properties and future stability. We map the design space for numerous proposed modifications, providing comparative analyses for alternative consensus mechanisms, currency allocation mechanisms, computational puzzles, and key management tools. We survey anonymity issues in Bit coin and provide an evaluation framework for analyzing a variety of privacy-enhancing proposals. Finally we provide new insights on what we term disinter mediation protocols, which absolve the need for trusted intermediaries in an interesting set of applications. We identify three general disinter mediation strategies and provide a detailed comparison."}
{"_id": "3d16ed355757fc13b7c6d7d6d04e6e9c5c9c0b78", "title": "Majority Is Not Enough: Bitcoin Mining Is Vulnerable", "text": ""}
{"_id": "5e86853f533c88a1996455d955a2e20ac47b3878", "title": "Information propagation in the Bitcoin network", "text": "Bitcoin is a digital currency that unlike traditional currencies does not rely on a centralized authority. Instead Bitcoin relies on a network of volunteers that collectively implement a replicated ledger and verify transactions. In this paper we analyze how Bitcoin uses a multi-hop broadcast to propagate transactions and blocks through the network to update the ledger replicas. We then use the gathered information to verify the conjecture that the propagation delay in the network is the primary cause for blockchain forks. Blockchain forks should be avoided as they are symptomatic for inconsistencies among the replicas in the network. We then show what can be achieved by pushing the current protocol to its limit with unilateral changes to the client's behavior."}
{"_id": "5fb1285e05bbd78d0094fe8061c644ea09d9da8d", "title": "Double-spending fast payments in bitcoin", "text": "Bitcoin is a decentralized payment system that relies on Proof-of-Work (PoW) to verify payments. Nowadays, Bitcoin is increasingly used in a number of fast payment scenarios, where the time between the exchange of currency and goods is short (in the order of few seconds). While the Bitcoin payment verification scheme is designed to prevent double-spending, our results show that the system requires tens of minutes to verify a transaction and is therefore inappropriate for fast payments. An example of this use of Bitcoin was recently reported in the media: Bitcoins were used as a form of \\emph{fast} payment in a local fast-food restaurant. Until now, the security of fast Bitcoin payments has not been studied. In this paper, we analyze the security of using Bitcoin for fast payments. We show that, unless appropriate detection techniques are integrated in the current Bitcoin implementation, double-spending attacks on fast payments succeed with overwhelming probability and can be mounted at low cost. We further show that the measures recommended by Bitcoin developers for the use of Bitcoin in fast payments are not always effective in detecting double-spending; we show that if those recommendations are integrated in future Bitcoin implementations, double-spending attacks on Bitcoin will still be possible. Finally, we propose and implement a modification to the existing Bitcoin implementation that ensures the detection of double-spending attacks against fast payments."}
{"_id": "d2920567fb66bc69d92ab2208f6455e37ce6138b", "title": "Disruptive Innovation : Removing the Innovators \u2019 Dilemma", "text": "The objectives of this research are to co-create understanding and knowledge on the phenomenon of disruptive innovation in order to provide pragmatic clarity on the term\u2019s meaning, impact and implications. This will address the academic audience\u2019s gap in knowledge and provide help to practitioners wanting to understand how disruptive innovation can be fostered as part of a major competitive strategy. This paper reports on the first eighteen months of a three year academic and industrial investigation. It presents a new pragmatic definition drawn from the literature and an overview of the conceptual framework for disruptive innovation that was co-created via the collaborative efforts of academia and industry. The barriers to disruptive innovation are presented and a best practice case study of how one company is overcoming these barriers is described. The remainder of the research, which is supported by a European Commission co-sponsored project called Disrupt-it, will focus on developing and validating tools to help overcome these barriers. Thomond, P., Herzberg, T. and Lettice, F. (2003). \"Disruptive Innovation: Removing the Innovators\u2019 Dilemma\". Knowledge into Practice British Academy of Management Annual Conference, Harrogate, UK, September 2003. 2 1.0. Introduction and Background. In his ground breaking book \u201cThe Innovator\u2019s Dilemma: When New Technologies Cause Great Firms to Fail\u201d, Clayton Christensen first coined the phrase \u2018disruptive technologies\u2019. He showed that time and again almost all the organisations that have \u2018died\u2019 or been displaced from their industries because of a new paradigm of customer offering could see the disruption coming but did nothing until it was too late (Christensen, 1997). They assess the new approaches or technologies and frame them as either deficient or as an unlikely threat much to the managers\u2019 regret and the organisation\u2019s demise (Christensen 2002). In the early 1990s, major airlines such as British Airways decided that the opportunities afforded by a low-cost, point-to point no frills strategy such as that introduced by the newly formed Ryanair was an unlikely threat. By the mid-1990\u2019s other newcomers such as easyJet had embraced Ryanair\u2019s foresight and before long, the \u2018low cost\u2019 approach had captured a large segment of the market. Low-cost no frills proved a hit with European travellers but not with the established airlines who had either ignored the threat or failed to capitalise on the approach. Today DVD technology and Charles Schwab are seen to be having a similar impact upon the VHS industry and Merrill Lynch respectively, however, disruption is not just a recent phenomenon it has firm foundations as a trend in the past that will also inevitably occur in the future. Examples of past disruptive innovations would include the introduction of the telegraph and its impact upon businesses like Pony Express and the transistor\u2019s impact upon the companies that produced cathode ray tubes. Future predictions include the impact of Light Emitting Diode\u2019 (L.E.D.) technology and its potential to completely disrupt the traditional light bulb sector and its supporting industries. 
More optimistically, Christensen (2002) further shows that the process of disruptive innovation has been one of the fundamental causal mechanisms through which access to life-improving products and services has been increased and the basis on which long-term organisational survival could be ensured (Christensen, 1997). In spite of the proclaimed importance of disruptive innovation and the ever-increasing interest from both the business and academic press alike, there still appears to be a disparity between rhetoric and reality. To date, the multifaceted and interrelated issues of disruptive innovation have not been investigated in depth. The phenomenon with examples has been described by a number of authors (Christensen, 1997; Moore, 1995; Gilbert and Bower, 2002) and practitioner-orientated writers have begun to offer strategies for responding to disruptive change (Charitou and Markides, 2003; Rigby and Corbett, 2002; Rafi and Kampas, 2002). However, a deep, integrated understanding of the entire subject is missing. In particular, there is an industrial need and an academic gap in knowledge in the pragmatic comprehension of how organisations can understand and foster disruptive innovation as part of a major competitive strategy. The objectives of this research are to co-create understanding and knowledge on the phenomenon of disruptive innovation in order to provide pragmatic clarity on the term\u2019s meaning, impact and implications. This will address the academic audience\u2019s gap in knowledge and provide help to practitioners wanting to understand how disruptive innovation can be fostered as part of a major competitive strategy. The current paper reports on the first eighteen months of a three-year academic and industrial investigation. It presents a new pragmatic definition drawn from the literature and an overview of the conceptual framework for disruptive innovation that was co-created via the collaborative efforts of academia and industry. The barriers to disruptive innovation are presented and a best practice case study of how one company is overcoming these barriers is described. The research contributes to \u201cDisrupt-it\u201d, a \u20ac3 million project for the Information Society Technologies Commission under the 5th Framework Program of the European Union, which will focus on developing and validating tools to help organisations foster disruptive innovation. 2.0 Understanding the Phenomenon of Disruptive Innovation. \u2018Disruptive Innovation\u2019, \u2018Disruptive Technologies\u2019 and \u2018Disruptive Business Strategies\u2019 are emerging and increasingly prominent business terms that are used to describe a form of revolutionary change. They are receiving ever more academic and industrial attention, yet these terms are still poorly defined and not well understood. A key objective of this research is to improve the understanding of disruptive innovation by drawing together multiple perspectives on the topic, as shown in Figure 1, into a more holistic and comprehensive definition. Much of the past investigation into discontinuous and disruptive innovation has been path-dependent upon the researchers\u2019 investigative history. 
For example, Hamel\u2019s strategy background leads him to see disruptive innovation through the lens of the \u2018business model\u2019; whereas Christensen\u2019s technologically orientated past leads to a focus on \u2018disruptive technologies\u2019. What many researchers share is the view that firms need to periodically engage in the process of revolutionary change for long-term survival, and this is not a new phenomenon (Christensen, 1997; Christensen and Rosenbloom, 1995; Hamel, 2000; Schumpeter, 1975; Tushman and Anderson, 1986; Tushman and Nadler, 1986; Gilbert and Bower, 2002; Rigby and Corbett, 2002; Charitou and Markides, 2003; Foster and Kaplan, 2001; Thomond and Lettice, 2002). Disruptive innovation has also been defined as \u201ca technology, product or process that creeps up from below an existing business and threatens to displace it. Typically, the disrupter offers lower performance and less functionality... The product or process is good enough for a meaningful number of customers \u2013 indeed some don\u2019t buy the older version\u2019s higher functionality and welcome the disruption\u2019s simplicity. And gradually, the new product or process improves to the point where it displaces the incumbent.\u201d (Rafi and Kampas, 2002, p. 8). This definition borrows heavily from the work of Christensen (1997), which in turn has some of its origins in the findings of Dosi (1982). For example, each of the cases of disruptive innovation mentioned thus far represents a new paradigm of customer offering. Dosi (1982) claims that these can be represented as discontinuities in trajectories of progress as defined within earlier paradigms, where a technological paradigm is a pattern of solutions for selected technological problems. In fact, new paradigms redefine the future meaning of progress and a new class of problems becomes the target of normal incremental innovation (Dosi, 1982). Therefore, disruptive innovations appear to typify a particular type of \u2018discontinuous innovation\u2019 (a term which has received much more academic attention). The same characteristics are found, except that disruptive innovations first establish their commercial footing in new or simple market niches by enabling customers to do things that only specialists could do before (e.g. low-cost European airlines are opening up air travel to thousands that did not fly before) and that these new offerings, through a period of exploitation, migrate upmarket and eventually redefine the paradigms and value propositions on which the existing industry is based (Christensen, 1997, 2002; Moore, 1995; Charitou & Markides, 2003; Hamel, 2000"}
{"_id": "5e90e57fccafbc78ecbac1a78c546b7db9a468ce", "title": "Finding new terminology in very large corpora", "text": "Most technical and scientific terms are comprised of complex, multi-word noun phrases but certainly not all noun phrases are technical or scientific terms. The distinction of specific terminology from common non-specific noun phrases can be based on the observation that terms reveal a much lesser degree of distributional variation than non-specific noun phrases. We formalize the limited paradigmatic modifiability of terms and, subsequently, test the corresponding algorithm on bigram, trigram and quadgram noun phrases extracted from a 104-million-word biomedical text corpus. Using an already existing and community-wide curated biomedical terminology as an evaluation gold standard, we show that our algorithm significantly outperforms standard term identification measures and, therefore, qualifies as a high-performant building block for any terminology identification system. We also provide empirical evidence that the superiority of our approach, beyond a 10-million-word threshold, is essentially domain- and corpus-size-independent."}
{"_id": "991891e3aa226766dcb4ad7221045599f8607685", "title": "Review of axial flux induction motor for automotive applications", "text": "Hybrid and electric vehicles have been the focus of many academic and industrial studies to reduce transport pollution; they are now established products. In hybrid and electric vehicles, the drive motor should have high torque density, high power density, high efficiency, strong physical structure and variable speed range. An axial flux induction motor is an interesting solution, where the motor is a double sided axial flux machine. This can significantly increase torque density. In this paper a review of the axial flux motor for automotive applications, and the different possible topologies for the axial field motor, are presented."}
{"_id": "11b111cbe79e5733fea28e4b9ff99fe7b4a4585c", "title": "Generalized vulnerability extrapolation using abstract syntax trees", "text": "The discovery of vulnerabilities in source code is a key for securing computer systems. While specific types of security flaws can be identified automatically, in the general case the process of finding vulnerabilities cannot be automated and vulnerabilities are mainly discovered by manual analysis. In this paper, we propose a method for assisting a security analyst during auditing of source code. Our method proceeds by extracting abstract syntax trees from the code and determining structural patterns in these trees, such that each function in the code can be described as a mixture of these patterns. This representation enables us to decompose a known vulnerability and extrapolate it to a code base, such that functions potentially suffering from the same flaw can be suggested to the analyst. We evaluate our method on the source code of four popular open-source projects: LibTIFF, FFmpeg, Pidgin and Asterisk. For three of these projects, we are able to identify zero-day vulnerabilities by inspecting only a small fraction of the code bases."}
{"_id": "0dbed89ea3296f351eb986cc02678c7a33d50945", "title": "A Combinatorial Noise Model for Quantum Computer Simulation", "text": "Quantum computers (QCs) have many potential hardware implementations ranging from solid-state silicon-based structures to electron-spin qubits on liquid helium. However, all QCs must contend with gate infidelity and qubit state decoherence over time. Quantum error correcting codes (QECCs) have been developed to protect program qubit states from such noise. Previously, Monte Carlo noise simulators have been developed to model the effectiveness of QECCs in combating decoherence. The downside to this random sampling approach is that it may take days or weeks to produce enough samples for an accurate measurement. We present an alternative noise modeling approach that performs combinatorial analysis rather than random sampling. This model tracks the progression of the most likely error states of the quantum program through its course of execution. This approach has the potential for enormous speedups versus the previous Monte Carlo methodology. We have found speedups with the combinatorial model on the order of 100X-1,000X over the Monte Carlo approach when analyzing applications utilizing the [[7,1,3]] QECC. The combinatorial noise model has significant memory requirements, and we analyze its scaling properties relative to the size of the quantum program. Due to its speedup, this noise model is a valuable alternative to traditional Monte Carlo simulation."}
{"_id": "47f0455d65a0823c70ce7cce9749f3abd826e0a7", "title": "Random Walk with Restart on Large Graphs Using Block Elimination", "text": "Given a large graph, how can we calculate the relevance between nodes fast and accurately? Random walk with restart (RWR) provides a good measure for this purpose and has been applied to diverse data mining applications including ranking, community detection, link prediction, and anomaly detection. Since calculating RWR from scratch takes a long time, various preprocessing methods, most of which are related to inverting adjacency matrices, have been proposed to speed up the calculation. However, these methods do not scale to large graphs because they usually produce large dense matrices that do not fit into memory. In addition, the existing methods are inappropriate when graphs dynamically change because the expensive preprocessing task needs to be computed repeatedly.\n In this article, we propose Bear, a fast, scalable, and accurate method for computing RWR on large graphs. Bear has two versions: a preprocessing method BearS for static graphs and an incremental update method BearD for dynamic graphs. BearS consists of the preprocessing step and the query step. In the preprocessing step, BearS reorders the adjacency matrix of a given graph so that it contains a large and easy-to-invert submatrix, and precomputes several matrices including the Schur complement of the submatrix. In the query step, BearS quickly computes the RWR scores for a given query node using a block elimination approach with the matrices computed in the preprocessing step. For dynamic graphs, BearD efficiently updates the changed parts in the preprocessed matrices of BearS based on the observation that only small parts of the preprocessed matrices change when few edges are inserted or deleted. Through extensive experiments, we show that BearS significantly outperforms other state-of-the-art methods in terms of preprocessing and query speed, space efficiency, and accuracy. We also show that BearD quickly updates the preprocessed matrices and immediately computes queries when the graph changes."}
{"_id": "239222aead65a66be698036d04e4af6eaa24b77b", "title": "An energy-efficient unequal clustering mechanism for wireless sensor networks", "text": "Clustering provides an effective way for prolonging the lifetime of a wireless sensor network. Current clustering algorithms usually utilize two techniques, selecting cluster heads with more residual energy and rotating cluster heads periodically, to distribute the energy consumption among nodes in each cluster and extend the network lifetime. However, they rarely consider the hot spots problem in multihop wireless sensor networks. When cluster heads cooperate with each other to forward their data to the base station, the cluster heads closer to the base station are burdened with heavy relay traffic and tend to die early, leaving areas of the network uncovered and causing network partition. To address the problem, we propose an energy-efficient unequal clustering (EEUC) mechanism for periodical data gathering in wireless sensor networks. It partitions the nodes into clusters of unequal size, and clusters closer to the base station have smaller sizes than those farther away from the base station. Thus cluster heads closer to the base station can preserve some energy for the inter-cluster data forwarding. We also propose an energy-aware multihop routing protocol for the inter-cluster communication. Simulation results show that our unequal clustering mechanism balances the energy consumption well among all sensor nodes and achieves an obvious improvement on the network lifetime"}
{"_id": "d19f938c790f0ffd8fa7fccc9fd7c40758a29f94", "title": "Art-Bots: Toward Chat-Based Conversational Experiences in Museums", "text": ""}
{"_id": "7a5ae36df3f08df85dfaa21fead748f830d5e4fa", "title": "Learning Bound for Parameter Transfer Learning", "text": "We consider a transfer-learning problem by using the parameter transfer approach, where a suitable parameter of feature mapping is learned through one task and applied to another objective task. Then, we introduce the notion of the local stability and parameter transfer learnability of parametric feature mapping, and thereby derive a learning bound for parameter transfer algorithms. As an application of parameter transfer learning, we discuss the performance of sparse coding in selftaught learning. Although self-taught learning algorithms with plentiful unlabeled data often show excellent empirical performance, their theoretical analysis has not been studied. In this paper, we also provide the first theoretical learning bound for self-taught learning."}
{"_id": "2b695f4060e78f9977a3da1c01a07a05a3f94b28", "title": "Analyzing Posture and Affect in Task-Oriented Tutoring", "text": "Intelligent tutoring systems research aims to produce systems that meet or exceed the effectiveness of one on one expert human tutoring. Theory and empirical study suggest that affective states of the learner must be addressed to achieve this goal. While many affective measures can be utilized, posture offers the advantages of non intrusiveness and ease of interpretation. This paper presents an accurate posture estimation algorithm applied to a computer mediated tutoring corpus of depth recordings. Analyses of posture and session level student reports of engagement and cognitive load identified significant patterns. The results indicate that disengagement and frustration may coincide with closer postural positions and more movement, while focused attention and less frustration occur with more distant, stable postural positions. It is hoped that this work will lead to intelligent tutoring systems that recognize a greater breadth of affective expression through channels of posture and gesture."}
{"_id": "c28bcaab43e57b9b03f09fd2237669634da8a741", "title": "Contributions of the prefrontal cortex to the neural basis of human decision making", "text": "The neural basis of decision making has been an elusive concept largely due to the many subprocesses associated with it. Recent efforts involving neuroimaging, neuropsychological studies, and animal work indicate that the prefrontal cortex plays a central role in several of these subprocesses. The frontal lobes are involved in tasks ranging from making binary choices to making multi-attribute decisions that require explicit deliberation and integration of diverse sources of information. In categorizing different aspects of decision making, a division of the prefrontal cortex into three primary regions is proposed. (1) The orbitofrontal and ventromedial areas are most relevant to deciding based on reward values and contribute affective information regarding decision attributes and options. (2) Dorsolateral prefrontal cortex is critical in making decisions that call for the consideration of multiple sources of information, and may recruit separable areas when making well defined versus poorly defined decisions. (3) The anterior and ventral cingulate cortex appear especially relevant in sorting among conflicting options, as well as signaling outcome-relevant information. This topic is broadly relevant to cognitive neuroscience as a discipline, as it generally comprises several aspects of cognition and may involve numerous brain regions depending on the situation. The review concludes with a summary of how these regions may interact in deciding and possible future research directions for the field."}
{"_id": "3ec40e4f549c49b048cd29aeb0223e709abc5565", "title": "Image-based Airborne LiDAR Point Cloud Encoding for 3 D Building Model Retrieval", "text": "With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have been available on web-based model-sharing platforms with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse these existing 3D building models instead of reconstruction from point clouds. To efficiently retrieve models, the models in databases are compactly encoded by using a shape descriptor generally. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems because of the efficient scene scanning and spatial information collection. Using Point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than that by using 3D models. Because that the building roof is more informative than other parts in the airborne LiDAR point cloud, an image-based approach is proposed to encode both point clouds from input queries and 3D models in databases. The main goal of data encoding is that the models in the database and input point clouds can be consistently encoded. Firstly, top-view depth images of buildings are generated to represent the geometry surface of a building roof. Secondly, geometric features are extracted from depth images based on height, edge and plane of building. Finally, descriptors can be extracted by spatial histograms and used in 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In experiments, a database including about 900,000 3D models collected from the Internet is used for evaluation of data retrieval. The results of the proposed method show a clear superiority over related methods."}
{"_id": "2433254a9df37729159daa5eeec56123e122518e", "title": "THE ROLE OF DIGITAL AND SOCIAL MEDIA MARKETING IN CONSUMER BEHAVIOR", "text": "This article reviews recently published research about consumers in digital and social media marketing settings. Five themes are identified: (i) consumer digital culture, (ii) responses to digital advertising, (iii) effects of digital environments on consumer behavior, (iv) mobile environments, and (v) online word of mouth (WOM). Collectively these articles shed light from many different angles on how consumers experience, influence, and are influenced by the digital environments in which they are situated as part of their daily lives. Much is still to be understood, and existing knowledge tends to be disproportionately focused on WOM, which is only part of the digital consumer experience. Several directions for future research are advanced to encourage researchers to consider a broader range of phenomena."}
{"_id": "399bc455dcbaf9eb0b4144d0bc721ac4bb7c8d59", "title": "A Spreadsheet Algebra for a Direct Data Manipulation Query Interface", "text": "A spreadsheet-like \"direct manipulation\" interface is more intuitive for many non-technical database users compared to traditional alternatives, such as visual query builders. The construction of such a direct manipulation interfacemay appear straightforward, but there are some significant challenges. First, individual direct manipulation operations cannot be too complex, so expressive power has to be achieved through composing (long) sequences of small operations. Second, all intermediate results are visible to the user, so grouping and ordering are material after every small step. Third, users often find the need to modify previously specified queries. Since manipulations are specified one step at a time, there is no actual queryexpression to modify. Suitable means must be provided to address this need. Fourth, the order in which manipulations are performed by the user should not affect the results obtained, to avoid user confusion. We address the aforementioned challenges by designing a new spreadsheet algebra that: i) operates on recursively grouped multi-sets, ii) contains a selectively designed set of operators capable of expressing at least all single-block SQL queries and can be intuitively implemented in a spreadsheet, iii) enables query modification by the notion of modifiable query state, and iv) requires no ordering in unary data manipulation operators since they are all designed to commute. We built a prototype implementation of the spreadsheet algebra and show, through user studies with non-technical subjects, that the resultant query interface is easier to use than a standard commercial visual query builder."}
{"_id": "1eff385c88fd1fdd1c03fd3fb573de2530b73f99", "title": "OBJECTIVE SELF-AWARENESS THEORY : RECENT PROGRESS AND ENDURING PROBLEMS By :", "text": "Objective self-awareness theory has undergone fundamental changes in the 3 decades since Duval and Wicklund's (1972) original formulation. We review new evidence that bears on the basic tenets of the theory. Many of the assumptions of self-awareness theory require revision, particularly how expectancies influence approach and avoidance of self-standard discrepancies; the nature of standards, especially when they are changed; and the role of causal attribution in directing discrepancy reduction. However, several unresolved conceptual issues remain; future theoretical and empirical directions are discussed. Article: The human dilemma is that which arises out of a man's capacity to experience himself as both subject and object at the same time. Both are necessary--for the science of psychology, for therapy, and for gratifying living. (May, 1967, p. 8) Although psychological perspectives on the self have a long history (e.g., Cooley, 1902; James, 1890; Mead, 1934), experimental research on the self has emerged only within the last 40 years. One of the earliest \"self theories\" was objective self-awareness (OSA) theory (Duval & Wicklund, 1972). OSA theory was concerned with the self-reflexive quality of the consciousness. Just as people can apprehend the existence of environmental stimuli, they can be aware of their own existence: \"When attention is directed inward and the individual's consciousness is focused on himself, he is the object of his own consciousness--hence 'objective' self awareness\" (Duval & Wicklund, 1972, p. 2). This is contrasted with \"subjective self-awareness\" that results when attention is directed away from the self and the person \"experiences himself as the source of perception and action\" (Duval & Wicklund, 1972, p. 3). By this, Duval and Wicklund (1972,chap. 3) meant consciousness of one's existence on an organismic level, in which such existence is undifferentiated as a separate and distinct object in the world. OSA theory has stimulated a lot of research and informed basic issues in social psychology, such as emotion (Scheier & Carver, 1977), attribution (Duval & Wicklund, 1973), attitude--behavior consistency (Gibbons, 1983), self-standard comparison (Duval & Lalwani, 1999), prosocial behavior (Froming, Nasby, & McManus, 1998), deindividuation (Diener, 1979), stereotyping (Macrae, Bodenhausen, & Milne, 1998), self-assessment (Silvia & Gendolla, in press), terror management (Arndt, Greenberg, Simon, Pyszczynski, & Solomon, 1998; Silvia, 2001), and group dynamics (Duval, 1976; Mullen, 1983). Self-focused attention is also fundamental to a host of clinical and health phenomena (Hull, 1981; Ingram, 1990; Pyszczynski, Hamilton, Greenberg, & Becker, 1991; Wells & Matthews, 1994). The study of self-focused attention continues to be a dynamic and active research area. A lot of research relevant to basic theoretical issues has been conducted since the last maj or review (Gibbons, 1990). Recent research has made progress in understanding links between self-awareness and causal attribution, the effects of expectancies on self-standard discrepancy reduction, and the nature of standards--the dynamics of selfawareness are now viewed quite differently. We review these recent developments[1] and hope that a conceptual integration of new findings will further stimulate research on self-focused attention. 
However, there is still much conceptual work left to be done, and many basic issues remain murky and controversial. We discuss these unresolved issues and sketch the beginnings of some possible solutions. Original Theory The original statement of OSA theory (Duval & Wicklund, 1972) employed only a few constructs, relations, and processes. The theory assumed that the orientation of conscious attention was the essence of self-evaluation. Focusing attention on the self brought about objective self-awareness, which initiated an automatic comparison of the self against standards. The self was defined very broadly as the person's knowledge of the person. A standard was \"defined as a mental representation of correct behavior, attitudes, and traits ... All of the standards of correctness taken together define what a 'correct' person is\" (Duval & Wicklund, 1972, pp. 3, 4). This simple system consisting of self, standards, and attentional focus was assumed to operate according to gestalt consistency principles (Heider, 1960). If a discrepancy was found between self and standards, negative affect was said to arise. This aversive state then motivated the restoration of consistency. Two behavioral routes were proposed. People could either actively change their actions, attitudes, or traits to be more congruent with the representations of the standard or could avoid the self-focusing stimuli and circumstances. Avoidance effectively terminates the comparison process and hence all self-evaluation. Early research found solid support for these basic ideas (Carver, 1975; Gibbons & Wicklund, 1976; Wicklund & Duval, 1971). Duval and Wicklund (1972) also assumed that objective self-awareness would generally be an aversive state--the probability that at least one self-standard discrepancy exists is quite high. This was the first assumption to be revised. Later work found that self-awareness can be a positive state when people are congruent with their standards (Greenberg & Musham, 1981; Ickes, Wicklund, & Ferris, 1973). New Developments OSA theory has grown considerably from the original statement (Duval & Wicklund, 1972). Our review focuses primarily on core theoretical developments since the last review (Gibbons, 1990). Other interesting aspects, such as interpersonal processes and interoceptive accuracy, have not changed significantly since previous reviews (Gibbons, 1990; Silvia & Gendolla, in press). We will also overlook the many clinical consequences of self-awareness; these have been exhaustively reviewed elsewhere (Pyszczynski et al., 1991; Wells & Matthews, 1994). Expectancies and Avoiding Self-Awareness Reducing a discrepancy or avoiding self-focus are equally effective ways of reducing the negative affect resulting from a discrepancy. When do people do one or the other? The original theory was not very specific about when approach versus avoidance would occur. Duval and Wicklund (1972) did, however, speculate that two factors should be relevant. The first was whether people felt they could effectively reduce the discrepancy; the second was whether the discrepancy was small or large. In their translation of OSA theory into a \"test--operate--test--exit\" (TOTE) feedback system, Carver, Blaney, and Scheier (1979a, 1979b) suggested that expectancies regarding outcome favorability determine approach versus avoidance behavior. When a self-standard discrepancy is recognized, people implicitly appraise their likelihood of reducing the discrepancy (cf. Bandura, 1977; Lazarus, 1966). 
If eventual discrepancy reduction is perceived as likely, people will try to achieve the standard. When expectations regarding improvement are unfavorable, however, people will try to avoid self-focus. Later research and theory (Duval, Duval, & Mulilis, 1992) refined Duval and Wicklund's (1972) speculations and the notion of outcome favorability. Expectancies are not simply and dichotomously favorable or unfavorable--they connote a person's rate of progress in discrepancy reduction relative to the magnitude of the discrepancy. More specifically, people will try to reduce a discrepancy to the extent they believe that their rate of progress is sufficient relative to the magnitude of the problem. Those who believe their rate of progress to be insufficient will avoid. To test this hypothesis, participants were told they were either highly (90%) or mildly (10%) discrepant from an experimental standard (Duval et al., 1992, Study 1). Participants were then given the opportunity to engage in a remedial task that was guaranteed by the experimenter to totally eliminate their deficiency provided that they worked on the task for 2 hr and 10 min. However, the rate at which working on the task would reduce the discrepancy was varied. In the low rate of progress conditions individuals were shown a performance curve indicating no progress until the last 20 min of working on the remedial task. During the last 20 min discrepancy was reduced to zero. In the constant rate of progress condition, participants were shown a performance curve in which progress toward discrepancy reduction began immediately and continued throughout such efforts with 30% of the deficiency being reduced in the first 30 min of activity and totally eliminated after 2 hr and 10 min. Results indicated that persons who believed the discrepancy to be mild and progress to be constant worked on the remedial task; those who perceived the problem to be mild but the rate of progress to be low avoided this activity. However, participants who thought that the discrepancy was substantial and the rate of progress only constant avoided working on the remedial task relative to those in the mild discrepancy and constant rate of progress condition. These results were conceptually replicated in a second experiment (Duval et al., 1992) using participants' time to complete the total 2 hr and 10 min of remedial work as the dependent measure. This pattern suggests that the rate of progress was sufficient relative to the magnitude of the discrepancy in the mild discrepancy and constant rate of progress condition; this in turn promoted approaching the problem. In the high discrepancy and constant rate of progress condition and the high and mild discrepancy and low rate of progress conditions, rate of progress was insufficient and promoted avoiding the problem. In a third experiment (Duval et al., 1992), people were again led to believe that they were either highly or mildly discrepant from a standard on an intellectual dimension and then given the opportunity to reduce that deficiency by working on a remedial task. A"}
{"_id": "0341cd2fb49a56697edaf03b05734f44d0e41f89", "title": "An empirical study on dependence clusters for effort-aware fault-proneness prediction", "text": "A dependence cluster is a set of mutually inter-dependent program elements. Prior studies have found that large dependence clusters are prevalent in software systems. It has been suggested that dependence clusters have potentially harmful effects on software quality. However, little empirical evidence has been provided to support this claim. The study presented in this paper investigates the relationship between dependence clusters and software quality at the function-level with a focus on effort-aware fault-proneness prediction. The investigation first analyzes whether or not larger dependence clusters tend to be more fault-prone. Second, it investigates whether the proportion of faulty functions inside dependence clusters is significantly different from the proportion of faulty functions outside dependence clusters. Third, it examines whether or not functions inside dependence clusters playing a more important role than others are more fault-prone. Finally, based on two groups of functions (i.e., functions inside and outside dependence clusters), the investigation considers a segmented fault-proneness prediction model. Our experimental results, based on five well-known open-source systems, show that (1) larger dependence clusters tend to be more fault-prone; (2) the proportion of faulty functions inside dependence clusters is significantly larger than the proportion of faulty functions outside dependence clusters; (3) functions inside dependence clusters that play more important roles are more fault-prone; (4) our segmented prediction model can significantly improve the effectiveness of effort-aware fault-proneness prediction in both ranking and classification scenarios. These findings help us better understand how dependence clusters influence software quality."}
{"_id": "b0f16acfa4efce9c24100ec330b82fb8a28feeec", "title": "Reinforcement Learning in Continuous State and Action Spaces", "text": "Many traditional reinforcement-learning algorithms have been designed for problems with small finite state and action spaces. Learn ing in such discrete problems can been difficult, due to noise and delayed reinfor cements. However, many real-world problems have continuous state or action sp aces, which can make learning a good decision policy even more involved. In this c apter we discuss how to automatically find good decision policies in continuous d omains. Because analytically computing a good policy from a continuous model c an be infeasible, in this chapter we mainly focus on methods that explicitly up date a representation of a value function, a policy or both. We discuss conside rations in choosing an appropriate representation for these functions and disc uss gradient-based and gradient-free ways to update the parameters. We show how to a pply these methods to reinforcement-learning problems and discuss many speci fic algorithms. Amongst others, we cover gradient-based temporal-difference lear ning, evolutionary strategies, policy-gradient algorithms and (natural) actor-cri ti methods. We discuss the advantages of different approaches and compare the perform ance of a state-of-theart actor-critic method and a state-of-the-art evolutiona ry strategy empirically."}
{"_id": "2bf8acb0bd8b0fde644b91c5dd4bef2e8119e61e", "title": "Decision Support based on Bio-PEPA Modeling and Decision Tree Induction: A New Approach, Applied to a Tuberculosis Case Study", "text": "The problem of selecting determinant features generating appropriate model structure is a challenge in epidemiological modelling. Disease spread is highly complex, and experts develop their understanding of its dynamic over years. There is an increasing variety and volume of epidemiological data which adds to the potential confusion. The authors propose here to make use of that data to better understand disease systems. Decision tree techniques have been extensively used to extract pertinent information and improve decision making. In this paper, the authors propose an innovative structured approach combining decision tree induction with Bio-PEPA computational modelling, and illustrate the approach through application to tuberculosis. By using decision tree induction, the enhanced Bio-PEPA model shows considerable improvement over the initial model with regard to the simulated results matching observed data. The key finding is that the developer expresses a realistic predictive model using relevant features, thus considering this approach as decision support, empowers the epidemiologist in his policy decision making. KEywoRDS Bio-PEPA Modelling, Data Mining, Decision Support, Decision Tree Induction, Epidemiology, Modelling and Simulation, Optimisation, Refinement, Tuberculosis"}
{"_id": "e3ab7a95af2c0efc92f146f8667ff95e46da84f1", "title": "On Optimizing VLC Networks for Downlink Multi-User Transmission: A Survey", "text": "The evolving explosion in high data rate services and applications will soon require the use of untapped, abundant unregulated spectrum of the visible light for communications to adequately meet the demands of the fifth-generation (5G) mobile technologies. Radio-frequency (RF) networks are proving to be scarce to cover the escalation in data rate services. Visible light communication (VLC) has emerged as a great potential solution, either in replacement of, or complement to, existing RF networks, to support the projected traffic demands. Despite of the prolific advantages of VLC networks, VLC faces many challenges that must be resolved in the near future to achieve a full standardization and to be integrated to future wireless systems. Here, we review the new, emerging research in the field of VLC networks and lay out the challenges, technological solutions, and future work predictions. Specifically, we first review the VLC channel capacity derivation, discuss the performance metrics and the associated variables; the optimization of VLC networks are also discussed, including resources and power allocation techniques, user-to-access point (AP) association and APs-toclustered-users-association, APs coordination techniques, nonorthogonal multiple access (NOMA) VLC networks, simultaneous energy harvesting and information transmission using the visible light, and the security issue in VLC networks. Finally, we propose several open research problems to optimize the various VLC networks by maximizing either the sum rate, fairness, energy efficiency, secrecy rate, or harvested energy."}
{"_id": "b1b5646683557b38468344dff09ae921a5a4b345", "title": "Comparison of CoAP and MQTT Performance Over Capillary Radios", "text": "The IoT protocols used in the application layer, namely the Constraint Application Protocol (CoAP) and Message Queue Telemetry Transport (MQTT) have dependencies to the transport layer. The choice of transport, Transmission Control Protocol (TCP) or the User Datagram Protocol(UDP), on the other hand, has an impact on the Internet of Things (IoT) application level performance, especially over a wireless medium. The motivation of this work is to look at the impact of the protocol stack on performance over two different wireless medium realizations, namely Bluetooth Low Energy and Wi-Fi. The use case studied is infrequent small reports sent from the sensor device to a central cloud storage over a last mile radio access link. We find that while CoAP/UDP based transport performs consistently better both in terms of latency and power consumption over both links, MQTT/TCP may also work when the use requirements allow for longerlatency providing better reliability. All in all, the full connectivity stack needs to be considered when designing an IoT deployment."}
{"_id": "cd5b7d8fb4f8dc3872e773ec24460c9020da91ed", "title": "Design of a compact high power phased array for 5G FD-MIMO system at 29 GHz", "text": "This paper presents a new design concept of a beam steerable high gain phased array antenna based on WR28 waveguide at 29 GHz frequency for fifth generation (5G) full dimension multiple input multiple output (FD-MIMO) system. The 8\u00d78 planar phased array is fed by a three dimensional beamformer to obtain volumetric beam scanning ranging from \u221260 to +60 degrees both in azimuth and elevation direction. Beamforming network (BFN) is designed using 16 set of 8\u00d78 Butler matrix beamformer to get 64 beam states, which control the horizontal and vertical angle. This is a new concept to design waveguide based high power three-dimensional beamformer for volumetric multibeam in Ka band for 5G application. The maximum gain of phased array is 28.5 dBi that covers 28.9 GHz to 29.4 GHz frequency band."}
{"_id": "b4cbe50b8988e7c9c1a7b982bfb6c708bb3ce3e8", "title": "Development and evaluation of low cost game-based balance rehabilitation tool using the microsoft kinect sensor", "text": "The use of the commercial video games as rehabilitation tools, such as the Nintendo WiiFit, has recently gained much interest in the physical therapy arena. Motion tracking controllers such as the Nintendo Wiimote are not sensitive enough to accurately measure performance in all components of balance. Additionally, users can figure out how to \"cheat\" inaccurate trackers by performing minimal movement (e.g. wrist twisting a Wiimote instead of a full arm swing). Physical rehabilitation requires accurate and appropriate tracking and feedback of performance. To this end, we are developing applications that leverage recent advances in commercial video game technology to provide full-body control of animated virtual characters. A key component of our approach is the use of newly available low cost depth sensing camera technology that provides markerless full-body tracking on a conventional PC. The aim of this research was to develop and assess an interactive game-based rehabilitation tool for balance training of adults with neurological injury."}
{"_id": "6a2311d02aea97f7fe4e78c8bd2a53091364dc3b", "title": "Aesthetics and Entropy III . Aesthetic measures 2", "text": "1 Consultant, Maplewood, MN USA 4 * Correspondence: sahyun@infionline.net; 1-(651)-927-9686 5 6 Abstract: We examined a series of real-world, pictorial photographs with varying 7 characteristics, along with their modification by noise addition and unsharp masking. As 8 response metrics we used three different versions of the aesthetic measure originally 9 proposed by Birkhoff. The first aesthetic measure, which has been used in other studies, 10 and which we used in our previous work as well, showed a preference for the least 11 complex of the images. It provided no justification for noise addition, but did reveal 12 enhancement on unsharp masking. Optimum level of unsharp masking varied with the 13 image, but was predictable from the individual image\u2019s GIF compressibility. We expect this 14 result to be useful for guiding the processing of pictorial photographic imagery. The 15 second aesthetic measure, that of informational aesthetics based on entropy alone failed 16 to provide useful discrimination among the images or the conditions of their modification. 17 A third measure, derived from the concepts of entropy maximization, as well as the 18 hypothesized preference of observers for \u201csimpler\u201d, i.e., more compressible, images, 19 yielded qualitatively the same results as the more traditional version of the measure. 20 Differences among the photographs and the conditions of their modification were more 21 clearly defined with this metric, however. 22"}
{"_id": "97aef787d63aef75e6f8055cdac3771f8649f21a", "title": "A Syllable-based Technique for Word Embeddings of Korean Words", "text": "Word embedding has become a fundamental component to many NLP tasks such as named entity recognition and machine translation. However, popular models that learn such embeddings are unaware of the morphology of words, so it is not directly applicable to highly agglutinative languages such as Korean. We propose a syllable-based learning model for Korean using a convolutional neural network, in which word representation is composed of trained syllable vectors. Our model successfully produces morphologically meaningful representation of Korean words compared to the original Skip-gram embeddings. The results also show that it is quite robust to the Out-of-Vocabulary problem."}
{"_id": "d9d8aafe6856025f2c2b7c70f5e640e03b6bcd46", "title": "Anti-phishing based on automated individual white-list", "text": "In phishing and pharming, users could be easily tricked into submitting their username/passwords into fraudulent web sites whose appearances look similar as the genuine ones. The traditional blacklist approach for anti-phishing is partially effective due to its partial list of global phishing sites. In this paper, we present a novel anti-phishing approach named Automated Individual White-List (AIWL). AIWL automatically tries to maintain a white-list of user's all familiar Login User Interfaces (LUIs) of web sites. Once a user tries to submit his/her confidential information to an LUI that is not in the white-list, AIWL will alert the user to the possible attack. Next, AIWL can efficiently defend against pharming attacks, because AIWL will alert the user when the legitimate IP is maliciously changed; the legitimate IP addresses, as one of the contents of LUI, are recorded in the white-list and our experiment shows that popular web sites' IP addresses are basically stable. Furthermore, we use Na\u00efve Bayesian classifier to automatically maintain the white-list in AIWL. Finally, we conclude through experiments that AIWL is an efficient automated tool specializing in detecting phishing and pharming."}
{"_id": "34feeafb5ff7757b67cf5c46da0869ffb9655310", "title": "Perpetual environmentally powered sensor networks", "text": "Environmental energy is an attractive power source for low power wireless sensor networks. We present Prometheus, a system that intelligently manages energy transfer for perpetual operation without human intervention or servicing. Combining positive attributes of different energy storage elements and leveraging the intelligence of the microprocessor, we introduce an efficient multi-stage energy transfer system that reduces the common limitations of single energy storage systems to achieve near perpetual operation. We present our design choices, tradeoffs, circuit evaluations, performance analysis, and models. We discuss the relationships between system components and identify optimal hardware choices to meet an application's needs. Finally we present our implementation of a real system that uses solar energy to power Berkeley's Telos Mote. Our analysis predicts the system will operate for 43 years under 1% load, 4 years under 10% load, and 1 year under 100% load. Our implementation uses a two stage storage system consisting of supercapacitors (primary buffer) and a lithium rechargeable battery (secondary buffer). The mote has full knowledge of power levels and intelligently manages energy transfer to maximize lifetime."}
{"_id": "3689220c58f89e9e19cc0df51c0a573884486708", "title": "AmbiMax: Autonomous Energy Harvesting Platform for Multi-Supply Wireless Sensor Nodes", "text": "AmbiMax is an energy harvesting circuit and a supercapacitor based energy storage system for wireless sensor nodes (WSN). Previous WSNs attempt to harvest energy from various sources, and some also use supercapacitors instead of batteries to address the battery aging problem. However, they either waste much available energy due to impedance mismatch, or they require active digital control that incurs overhead, or they work with only one specific type of source. AmbiMax addresses these problems by first performing maximum power point tracking (MPPT) autonomously, and then charges supercapacitors at maximum efficiency. Furthermore, AmbiMax is modular and enables composition of multiple energy harvesting sources including solar, wind, thermal, and vibration, each with a different optimal size. Experimental results on a real WSN platform, Eco, show that AmbiMax successfully manages multiple power sources simultaneously and autonomously at several times the efficiency of the current state-of-the-art for WSNs"}
{"_id": "4833d690f7e0a4020ef48c1a537dbb5b8b9b04c6", "title": "Integrated photovoltaic maximum power point tracking converter", "text": "A low-power low-cost highly efficient maximum power point tracker (MPPT) to be integrated into a photovoltaic (PV) panel is proposed. This can result in a 25% energy enhancement compared to a standard photovoltaic panel, while performing functions like battery voltage regulation and matching of the PV array with the load. Instead of using an externally connected MPPT, it is proposed to use an integrated MPPT converter as part of the PV panel. It is proposed that this integrated MPPT uses a simple controller in order to be cost effective. Furthermore, the converter has to be very efficient, in order to transfer more energy to the load than a directly coupled system. This is achieved by using a simple soft-switched topology. A much higher conversion efficiency at lower cost will then result, making the MPPT an affordable solution for small PV energy systems."}
{"_id": "61c1d66defb225eda47462d1bc393906772c9196", "title": "Hardware design experiences in ZebraNet", "text": "The enormous potential for wireless sensor networks to make a positive impact on our society has spawned a great deal of research on the topic, and this research is now producing environment-ready systems. Current technology limits coupled with widely-varying application requirements lead to a diversity of hardware platforms for different portions of the design space. In addition, the unique energy and reliability constraints of a system that must function for months at a time without human intervention mean that demands on sensor network hardware are different from the demands on standard integrated circuits. This paper describes our experiences designing sensor nodes and low level software to control them.\n In the ZebraNet system we use GPS technology to record fine-grained position data in order to track long term animal migrations [14]. The ZebraNet hardware is composed of a 16-bit TI microcontroller, 4 Mbits of off-chip flash memory, a 900 MHz radio, and a low-power GPS chip. In this paper, we discuss our techniques for devising efficient power supplies for sensor networks, methods of managing the energy consumption of the nodes, and methods of managing the peripheral devices including the radio, flash, and sensors. We conclude by evaluating the design of the ZebraNet nodes and discussing how it can be improved. Our lessons learned in developing this hardware can be useful both in designing future sensor nodes and in using them in real systems."}
{"_id": "576803b930ef44b79028048569e7ea321c1cecb0", "title": "Adaptive Computer-Based Training Increases on the Job Performance of X-Ray Screeners", "text": "Due to severe terrorist attacks in recent years, aviation security issues have moved into the focus of politicians as well as the general public. Effective screening of passenger bags using state-of-the-art X-ray screening systems is essential to prevent terrorist attacks. The performance of the screening process depends critically on the security personnel, because they decide whether bags are OK or whether they might contain a prohibited item. Screening X-ray images of passenger bags for dangerous and prohibited items effectively and efficiently is a demanding object recognition task. Effectiveness of computer-based training (CBT) on X-ray detection performance was assessed using computer-based tests and on the job performance measures using threat image projection (TIP). It was found that adaptive CBT is a powerful tool to increase detection performance and efficiency of screeners in X-ray image interpretation. Moreover, the results of training could be generalized to the real life situation as shown in the increased detection performance in TIP not only for trained items, but also for new (untrained) items. These results illustrate that CBT is a very useful tool to increase airport security from a human factors perspective."}
{"_id": "6c1ccc66420136488cf34c1ffe707afefd8b00b9", "title": "Rotation-discriminating template matching based on Fourier coefficients of radial projections with robustness to scaling and partial occlusion", "text": "We consider brightness/contrast-invariant and rotation-discriminating template matching that searches an image to analyze A for a query image Q. We propose to use the complex coefficients of the discrete Fourier transform of the radial projections to compute new rotation-invariant local features. These coefficients can be efficiently obtained via FFT. We classify templates in \u201cstable\u201d and \u201cunstable\u201d ones and argue that any local feature-based template matching may fail to find unstable templates. We extract several stable sub-templates of Q and find them in A by comparing the features. The matchings of the sub-templates are combined using the Hough transform. As the features of A are computed only once, the algorithm can find quickly many different sub-templates in A, and it is suitable for: finding many query images in A; multi-scale searching and partial occlusion-robust template matching."}
{"_id": "3370784dacf9df1e54384190dad40b817520ba3a", "title": "Haswell: The Fourth-Generation Intel Core Processor", "text": "Haswell, Intel's fourth-generation core processor architecture, delivers a range of client parts, a converged core for the client and server, and technologies used across many products. It uses an optimized version of Intel 22-nm process technology. Haswell provides enhancements in power-performance efficiency, power management, form factor and cost, core and uncore microarchitecture, and the core's instruction set."}
{"_id": "146da74cd886acbd4a593a55f0caacefa99714a6", "title": "Working model of Self-driving car using Convolutional Neural Network, Raspberry Pi and Arduino", "text": "The evolution of Artificial Intelligence has served as the catalyst in the field of technology. We can now develop things which was once just an imagination. One of such creation is the birth of self-driving car. Days have come where one can do their work or even sleep in the car and without even touching the steering wheel, accelerator you will still be able to reach your target destination safely. This paper proposes a working model of self-driving car which is capable of driving from one location to the other or to say on different types of tracks such as curved tracks, straight tracks and straight followed by curved tracks. A camera module is mounted over the top of the car along with Raspberry Pi sends the images from real world to the Convolutional Neural Network which then predicts one of the following directions. i.e. right, left, forward or stop which is then followed by sending a signal from the Arduino to the controller of the remote controlled car and as a result of it the car moves in the desired direction without any human intervention."}
{"_id": "6fd62c67b281956c3f67eb53fafaea83b2f0b4fb", "title": "Taking perspective into account in a communicative task", "text": "Previous neuroimaging studies of spatial perspective taking have tended not to activate the brain's mentalising network. We predicted that a task that requires the use of perspective taking in a communicative context would lead to the activation of mentalising regions. In the current task, participants followed auditory instructions to move objects in a set of shelves. A 2x2 factorial design was employed. In the Director factor, two directors (one female and one male) either stood behind or next to the shelves, or were replaced by symbolic cues. In the Object factor, participants needed to use the cues (position of the directors or symbolic cues) to select one of three possible objects, or only one object could be selected. Mere presence of the Directors was associated with activity in the superior dorsal medial prefrontal cortex (MPFC) and the superior/middle temporal sulci, extending into the extrastriate body area and the posterior superior temporal sulcus (pSTS), regions previously found to be responsive to human bodies and faces respectively. The interaction between the Director and Object factors, which requires participants to take into account the perspective of the director, led to additional recruitment of the superior dorsal MPFC, a region activated when thinking about dissimilar others' mental states, and the middle temporal gyri, extending into the left temporal pole. Our results show that using perspective taking in a communicative context, which requires participants to think not only about what the other person sees but also about his/her intentions, leads to the recruitment of superior dorsal MPFC and parts of the social brain network."}
{"_id": "30b1447fbfdbd887a9c896a2b0d80177fc17c94e", "title": "3-Axis Magnetic Sensor Array System for Tracking Magnet's Position and Orientation", "text": "In medical diagnoses and treatments, e.g., the endoscopy, the dosage transition monitoring, it is often desirable to wirelessly track an object that moves through the human GI tract. In this paper, we present a magnetic localization and orientation system for such applications. This system uses a small magnet enclosed in the object to serve as excitation source. It does not require the connection wire and power supply for excitation signal. When the magnet moves, it establishes a static magnetic field around, whose intensity is related to the magnet's position and orientation. With the magnetic sensors, the magnetic intensities in some pre-determined spatial points can be detected, and the magnet's position and orientation parameters can be calculated based on an appropriate algorithm. Here, we propose a real-time tracking system built by Honeywell 3-axis magnetic sensors, HMC1053, as well as the computer sampling circuit. The results show that satisfactory tracking accuracy (average localization error is 3.3 mm) can be achieved using a sensor array with enough number of the 3-axis magnetic sensors"}
{"_id": "b551feaa696da1ba44c31e081555e50358c6eca9", "title": "A Polymer-Based Capacitive Sensing Array for Normal and Shear Force Measurement", "text": "In this work, we present the development of a polymer-based capacitive sensing array. The proposed device is capable of measuring normal and shear forces, and can be easily realized by using micromachining techniques and flexible printed circuit board (FPCB) technologies. The sensing array consists of a polydimethlysiloxane (PDMS) structure and a FPCB. Each shear sensing element comprises four capacitive sensing cells arranged in a 2 \u00d7 2 array, and each capacitive sensing cell has two sensing electrodes and a common floating electrode. The sensing electrodes as well as the metal interconnect for signal scanning are implemented on the FPCB, while the floating electrodes are patterned on the PDMS structure. This design can effectively reduce the complexity of the capacitive structures, and thus makes the device highly manufacturable. The characteristics of the devices with different dimensions were measured and discussed. A scanning circuit was also designed and implemented. The measured maximum sensitivity is 1.67%/mN. The minimum resolvable force is 26 mN measured by the scanning circuit. The capacitance distributions induced by normal and shear forces were also successfully captured by the sensing array."}
{"_id": "bb17e8858b0d3a5eba2bb91f45f4443d3e10b7cd", "title": "The Balanced Scorecard: Translating Strategy Into Action", "text": ""}
{"_id": "01cac0a7c2a3240cb77a1e090694a104785f78f5", "title": "Workflow Automation: Overview and Research Issues", "text": "Workflow management systems, a relatively recent technology, are designed to make work more efficient, integrate heterogeneous application systems, and support interorganizational processes in electronic commerce applications. In this paper, we introduce the field of workflow automation, the subject of this special issue of Information Systems Frontiers. In the first part of the paper, we provide basic definitions and frameworks to aid understanding of workflow management technologies. In the remainder of the paper, we discuss technical and management research opportunities in this field and discuss the other contributions to the special issue."}
{"_id": "4cdad9b059b5077fcce00fb8bcb4e381edd353bd", "title": "A Novel Anomaly Detection Scheme Based on Principal Component Classifier", "text": "This paper proposes a novel scheme that uses robust principal component classifier in intrusion detection problems where the training data may be unsupervised. Assuming that anomalies can be treated as outliers, an intrusion predictive model is constructed from the major and minor principal components of the normal instances. A measure of the difference of an anomaly from the normal instance is the distance in the principal component space. The distance based on the major components that account for 50% of the total variation and the minor components whose eigenvalues less than 0.20 is shown to work well. The experiments with KDD Cup 1999 data demonstrate that the proposed method achieves 98.94% in recall and 97.89% in precision with the false alarm rate 0.92% and outperforms the nearest neighbor method, density-based local outliers (LOF) approach, and the outlier detection algorithm based on Canberra metric."}
{"_id": "ca20f466791f4b051ef3b8d2bf63789d33c562c9", "title": "CredFinder: A real-time tweets credibility assessing system", "text": "Lately, Twitter has grown to be one of the most favored ways of disseminating information to people around the globe. However, the main challenge faced by the users is how to assess the credibility of information posted through this social network in real time. In this paper, we present a real-time content credibility assessment system named CredFinder, which is capable of measuring the trustworthiness of information through user analysis and content analysis. The proposed system is capable of providing a credibility score for each user's tweets. Hence, it provides users with the opportunity to judge the credibility of information faster. CredFinder consists of two parts: a frontend in the form of an extension to the Chrome browser that collects tweets in real time from a Twitter search or a user-timeline page and a backend that analyzes the collected tweets and assesses their credibility."}
{"_id": "9923edf7815c720aa0d6d58a28332806ae91b224", "title": "Design of an Ultra-Wideband Pulse Generator Based on Avalanche Transistor", "text": "Based on the avalanche effect of avalanche transistor, a kind of ultra-wideband nanosecond pulse circuit has been designed, whose frequency, pulse width and amplitude are tunable. In this paper, the principle, structure and selection of components' parameters in the circuit are analyzed in detail. The circuit generates periodic negative pulse, whose pulse full width is 890 ps and pulse amplitude is -11.2 V in simulation mode. By setting up circuit for experiment and changing parameters properly, a kind of ultra-wideband pulse with pulse width of 2.131 ns and pulse amplitude of -9.23 V is achieved. With the features such as simple structure, stable and reliable performance and low cost, this pulse generator is applicable to ultra-wideband wireless communication system."}
{"_id": "33b424698c2b7602dcb579513c34fe20cc3ae669", "title": "A 0.5ps 1.4mW 50MS/s Nyquist bandwidth time amplifier based two-step flash-\u0394\u03a3 time-to-digital converter", "text": "We propose a 50-MS/s two-step flash-\u0394\u03a3 time-to-digital converter (TDC) using stable time amplifiers (TAs). The TDC demonstrates low-levels of shaped quantization noise. The system is simulated in 40-nm CMOS and consumes 1.3 mA from a 1.1 V supply. The bandwidth is broadened to Nyquist rate. At frequencies below 25 MHz, the integrated TDC error is as low as 143 fsrms, which is equal to an equivalent TDC resolution of 0.5 ps."}
{"_id": "7fabd0639750563e0fb09df341e0e62ef4d6e1fb", "title": "Brain-computer interfaces: communication and restoration of movement in paralysis.", "text": "The review describes the status of brain-computer or brain-machine interface research. We focus on non-invasive brain-computer interfaces (BCIs) and their clinical utility for direct brain communication in paralysis and motor restoration in stroke. A large gap between the promises of invasive animal and human BCI preparations and the clinical reality characterizes the literature: while intact monkeys learn to execute more or less complex upper limb movements with spike patterns from motor brain regions alone without concomitant peripheral motor activity usually after extensive training, clinical applications in human diseases such as amyotrophic lateral sclerosis and paralysis from stroke or spinal cord lesions show only limited success, with the exception of verbal communication in paralysed and locked-in patients. BCIs based on electroencephalographic potentials or oscillations are ready to undergo large clinical studies and commercial production as an adjunct or a major assisted communication device for paralysed and locked-in patients. However, attempts to train completely locked-in patients with BCI communication after entering the complete locked-in state with no remaining eye movement failed. We propose that a lack of contingencies between goal directed thoughts and intentions may be at the heart of this problem. Experiments with chronically curarized rats support our hypothesis; operant conditioning and voluntary control of autonomic physiological functions turned out to be impossible in this preparation. In addition to assisted communication, BCIs consisting of operant learning of EEG slow cortical potentials and sensorimotor rhythm were demonstrated to be successful in drug resistant focal epilepsy and attention deficit disorder. First studies of non-invasive BCIs using sensorimotor rhythm of the EEG and MEG in restoration of paralysed hand movements in chronic stroke and single cases of high spinal cord lesions show some promise, but need extensive evaluation in well-controlled experiments. Invasive BMIs based on neuronal spike patterns, local field potentials or electrocorticogram may constitute the strategy of choice in severe cases of stroke and spinal cord paralysis. Future directions of BCI research should include the regulation of brain metabolism and blood flow and electrical and magnetic stimulation of the human brain (invasive and non-invasive). A series of studies using BOLD response regulation with functional magnetic resonance imaging (fMRI) and near infrared spectroscopy demonstrated a tight correlation between voluntary changes in brain metabolism and behaviour."}
{"_id": "a31b795f8defb59889df8f13321e057192d64f73", "title": "iCONCUR: informed consent for clinical data and bio-sample use for research", "text": "Background\nImplementation of patient preferences for use of electronic health records for research has been traditionally limited to identifiable data. Tiered e-consent for use of de-identified data has traditionally been deemed unnecessary or impractical for implementation in clinical settings.\n\n\nMethods\nWe developed a web-based tiered informed consent tool called informed consent for clinical data and bio-sample use for research (iCONCUR) that honors granular patient preferences for use of electronic health record data in research. We piloted this tool in 4 outpatient clinics of an academic medical center.\n\n\nResults\nOf patients offered access to iCONCUR, 394 agreed to participate in this study, among whom 126 patients accessed the website to modify their records according to data category and data recipient. The majority consented to share most of their data and specimens with researchers. Willingness to share was greater among participants from an Human Immunodeficiency Virus (HIV) clinic than those from internal medicine clinics. The number of items declined was higher for for-profit institution recipients. Overall, participants were most willing to share demographics and body measurements and least willing to share family history and financial data. Participants indicated that having granular choices for data sharing was appropriate, and that they liked being informed about who was using their data for what purposes, as well as about outcomes of the research.\n\n\nConclusion\nThis study suggests that a tiered electronic informed consent system is a workable solution that respects patient preferences, increases satisfaction, and does not significantly affect participation in research."}
{"_id": "7aa1866adbc2b4758c04d8484e5bf22e4cce9cc9", "title": "Automotive radome design - fishnet structure for 79 GHz", "text": "Metamaterials are considered as an option for quasi-optical matching of automotive radar radomes, lowering transmission loss and minimizing reflections. This paper shows a fishnet structure design for the 79 GHz band which is suitable for this type of matching and which exhibits a negative index of refraction. The measured transmission loss is 0.9 dB at 79 GHz. A tolerance study concerning copper plating, substrate permittivity, oblique incidence, and polarization is shown. Quasi-optical measurements were done in the range of 60 \u2013 90 GHz, which agree with simulated results."}
{"_id": "b95dd9e28f2126aac27da8b0378d3b9487d8b73d", "title": "Avatar-independent scripting for real-time gesture animation", "text": "When animation of a humanoid figure is to be generated at run-time, instead of by replaying precomposed motion clips, some method is required of specifying the avatar\u2019s movements in a form from which the required motion data can be automatically generated. This form must be of a more abstract nature than raw motion data: ideally, it should be independent of the particular avatar\u2019s proportions, and both writable by hand and suitable for automatic generation from higher-level descriptions of the required actions. We describe here the development and implementation of such a scripting language for the particular area of sign languages of the deaf, called SiGML (Signing Gesture Markup Language), based on the existing HamNoSys notation for sign languages. We conclude by suggesting how this work may be extended to more general animation for interactive virtual reality applications."}
{"_id": "267718d3b9399a5eab90a1b1701e78369696e8fe", "title": "Analyzing multiple configurations of a C program", "text": "Preprocessor conditionals are heavily used in C programs since they allow the source code to be configured for different platforms or capabilities. However, preprocessor conditionals, as well as other preprocessor directives, are not part of the C language. They need to be evaluated and removed, and so a single configuration selected, before parsing can take place. Most analysis and program understanding tools run on this preprocessed version of the code so their results are based on a single configuration. This paper describes the approach of CRefactory, a refactoring tool for C programs. A refactoring tool cannot consider only a single configuration: changing the code for one configuration may break the rest of the code. CRefactory analyses the program for all possible configurations simultaneously. CRefactory also preserves preprocessor directives and integrates them in the internal representations. The paper also presents metrics from two case studies to show that CRefactory's program representation is practical."}
{"_id": "63abfb7d2d35d60a5dc2cc884251f9fee5d46963", "title": "The contribution of electrophysiology to functional connectivity mapping", "text": "A powerful way to probe brain function is to assess the relationship between simultaneous changes in activity across different parts of the brain. In recent years, the temporal activity correlation between brain areas has frequently been taken as a measure of their functional connections. Evaluating 'functional connectivity' in this way is particularly popular in the fMRI community, but has also drawn interest among electrophysiologists. Like hemodynamic fluctuations observed with fMRI, electrophysiological signals display significant temporal fluctuations, even in the absence of a stimulus. These neural fluctuations exhibit a correlational structure over a wide range of spatial and temporal scales. Initial evidence suggests that certain aspects of this correlational structure bear a high correspondence to so-called functional networks defined using fMRI. The growing family of methods to study activity covariation, combined with the diverse neural mechanisms that contribute to the spontaneous fluctuations, has somewhat blurred the operational concept of functional connectivity. What is clear is that spontaneous activity is a conspicuous, energy-consuming feature of the brain. Given its prominence and its practical applications for the functional connectivity mapping of brain networks, it is of increasing importance that we understand its neural origins as well as its contribution to normal brain function."}
{"_id": "1a1f0d0abcbdaa2d487f0a46dba1ca097774012d", "title": "Practical Backscatter Communication Systems for Battery-Free Internet of Things: A Tutorial and Survey of Recent Research", "text": "Backscatter presents an emerging ultralow-power wireless communication paradigm. The ability to offer submilliwatt power consumption makes it a competitive core technology for Internet of Things (IoT) applications. In this article, we provide a tutorial of backscatter communication from the signal processing perspective as well as a survey of the recent research activities in this domain, primarily focusing on bistatic backscatter systems. We also discuss the unique real-world applications empowered by backscatter communication and identify open questions in this domain. We believe this article will shed light on the low-power wireless connectivity design toward building and deploying IoT services in the wild."}
{"_id": "6fd78d20e6f51d872f07cde9350f4d31078ff723", "title": "OF A TUNABLE STIFFNESS COMPOSITE LEG FOR DYNAMIC LOCOMOTION", "text": "Passively compliant legs have been instrumental in the development of dynamically running legged robots. Having properly tuned leg springs is essential for stable, robust and energetically efficient running at high speeds. Recent simulation studies indicate that having variable stiffness legs, as animals do, can significantly improve the speed and stability of these robots in changing environmental conditions. However, to date, the mechanical complexities of designing usefully robust tunable passive compliance into legs has precluded their implementation on practical running robots. This paper describes a new design of a \u201dstructurally controlled variable stiffness\u201d leg for a hexapedal running robot. This new leg improves on previous designs\u2019 performance and enables runtime modification of leg stiffness in a small, lightweight, and rugged package. Modeling and leg test experiments are presented that characterize the improvement in stiffness range, energy storage, and dynamic coupling properties of these legs. We conclude that this variable stiffness leg design is now ready for implementation and testing on a dynamical running robot."}
{"_id": "d1c4907b1b225f61059915a06a3726706860c71e", "title": "An in-depth study of the promises and perils of mining GitHub", "text": "With over 10 million git repositories, GitHub is becoming one of the most important sources of software artifacts on the Internet. Researchers mine the information stored in GitHub\u2019s event logs to understand how its users employ the site to collaborate on software, but so far there have been no studies describing the quality and properties of the available GitHub data. We document the results of an empirical study aimed at understanding the characteristics of the repositories and users in GitHub; we see how users take advantage of GitHub\u2019s main features and how their activity is tracked on GitHub and related datasets to point out misalignment between the real and mined data. Our results indicate that while GitHub is a rich source of data on software development, mining GitHub for research purposes should take various potential perils into consideration. For example, we show that the majority of the projects are personal and inactive, and that almost 40 % of all pull requests do not appear as merged even though they were. Also, approximately half of GitHub\u2019s registered users do not have public activity, while the activity of GitHub users in repositories is not always easy to pinpoint. We use our identified perils to see if they can pose validity threats; we review selected papers from the MSR 2014 Mining Challenge and see if there are potential impacts to consider. We provide a set of recommendations for software engineering researchers on how to approach the data in GitHub."}
{"_id": "9c1ebae0eea2aa27fed13c71dc98dc0f67dd52a0", "title": "Unsupervised Segmentation of 3D Medical Images Based on Clustering and Deep Representation Learning", "text": "This paper presents a novel unsupervised segmentation method for 3D medical images. Convolutional neural networks (CNNs) have brought significant advances in image segmentation. However, most of the recent methods rely on supervised learning, which requires large amounts of manually annotated data. Thus, it is challenging for these methods to cope with the growing amount of medical images. This paper proposes a unified approach to unsupervised deep representation learning and clustering for segmentation. Our proposed method consists of two phases. In the first phase, we learn deep feature representations of training patches from a target image using joint unsupervised learning (JULE) that alternately clusters representations generated by a CNN and updates the CNN parameters using cluster labels as supervisory signals. We extend JULE to 3D medical images by utilizing 3D convolutions throughout the CNN architecture. In the second phase, we apply k-means to the deep representations from the trained CNN and then project cluster labels to the target image in order to obtain the fully segmented image. We evaluated our methods on three images of lung cancer specimens scanned with micro-computed tomography (micro-CT). The automatic segmentation of pathological regions in micro-CT could further contribute to the pathological examination process. Hence, we aim to automatically divide each image into the regions of invasive carcinoma, noninvasive carcinoma, and normal tissue. Our experiments show the potential abilities of unsupervised deep representation learning for medical image segmentation."}
{"_id": "27a693acee22752fa66f442b8d52b7f3c83134c7", "title": "Optimal Multiserver Configuration for Profit Maximization in Cloud Computing", "text": "As cloud computing becomes more and more popular, understanding the economics of cloud computing becomes critically important. To maximize the profit, a service provider should understand both service charges and business costs, and how they are determined by the characteristics of the applications and the configuration of a multiserver system. The problem of optimal multiserver configuration for profit maximization in a cloud computing environment is studied. Our pricing model takes such factors into considerations as the amount of a service, the workload of an application environment, the configuration of a multiserver system, the service-level agreement, the satisfaction of a consumer, the quality of a service, the penalty of a low-quality service, the cost of renting, the cost of energy consumption, and a service provider's margin and profit. Our approach is to treat a multiserver system as an M/M/m queuing model, such that our optimization problem can be formulated and solved analytically. Two server speed and power consumption models are considered, namely, the idle-speed model and the constant-speed model. The probability density function of the waiting time of a newly arrived service request is derived. The expected service charge to a service request is calculated. The expected net business gain in one unit of time is obtained. Numerical calculations of the optimal server size and the optimal server speed are demonstrated."}
{"_id": "e8217edd7376c26c714757a362724f81f3afbee0", "title": "Overview on Additive Manufacturing Technologies", "text": "This paper provides an overview on the main additive manufacturing/3D printing technologies suitable for many satellite applications and, in particular, radio-frequency components. In fact, nowadays they have become capable of producing complex net-shaped or nearly net-shaped parts in materials that can be directly used as functional parts, including polymers, metals, ceramics, and composites. These technologies represent the solution for low-volume, high-value, and highly complex parts and products."}
{"_id": "738f4d2137fc767b1802963b5e45a2216c27b77c", "title": "Churn Prediction : Does Technology Matter ?", "text": "The aim of this paper is to identify the most suitable model for churn prediction based on three different techniques. The paper identifies the variables that affect churn in reverence of customer complaints data and provides a comparative analysis of neural networks, regression trees and regression in their capabilities of predicting customer churn. Keywords\u2014Churn, Decision Trees, Neural Networks, Regression."}
{"_id": "ae341ad66824e1f30a2675fd50742b97794c8f57", "title": "Learning from imbalanced data sets with boosting and data generation: the DataBoost-IM approach", "text": "Learning from imbalanced data sets, where the number of examples of one (majority) class is much higher than the others, presents an important challenge to the machine learning community. Traditional machine learning algorithms may be biased towards the majority class, thus producing poor predictive accuracy over the minority class. In this paper, we describe a new approach that combines boosting, an ensemble-based learning algorithm, with data generation to improve the predictive power of classifiers against imbalanced data sets consisting of two classes. In the DataBoost-IM method, hard examples from both the majority and minority classes are identified during execution of the boosting algorithm. Subsequently, the hard examples are used to separately generate synthetic examples for the majority and minority classes. The synthetic data are then added to the original training set, and the class distribution and the total weights of the different classes in the new training set are rebalanced. The DataBoost-IM method was evaluated, in terms of the F-measures, G-mean and overall accuracy, against seventeen highly and moderately imbalanced data sets using decision trees as base classifiers. Our results are promising and show that the DataBoost-IM method compares well in comparison with a base classifier, a standard benchmarking boosting algorithm and three advanced boosting-based algorithms for imbalanced data set. Results indicate that our approach does not sacrifice one class in favor of the other, but produces high predictions against both minority and majority classes."}
{"_id": "090a6772a1d69f07bfe7e89f99934294a0dac1b9", "title": "Two Modifications of CNN", "text": ""}
{"_id": "0df013671e9e901a9126deb4957e22e3d937b1a5", "title": "Prototype and Feature Selection by Sampling and Random Mutation Hill Climbing Algorithms", "text": "With the goal of reducing computational costs without sacrificing accuracy, we describe two algorithms to find sets of prototypes for nearest neighbor classification. Here, the term \u201cprototypes\u201d refers to the reference instances used in a nearest neighbor computation \u2014 the instances with respect to which similarity is assessed in order to assign a class to a new data item. Both algorithms rely on stochastic techniques to search the space of sets of prototypes and are simple to implement. The first is a Monte Carlo sampling algorithm; the second applies random mutation hill climbing. On four datasets we show that only three or four prototypes sufficed to give predictive accuracy equal or superior to a basic nearest neighbor algorithm whose run-time storage costs were approximately 10 to 200 times greater. We briefly investigate how random mutation hill climbing may be applied to select features and prototypes simultaneously. Finally, we explain the performance of the sampling algorithm on these datasets in terms of a statistical measure of the extent of clustering displayed by the target classes."}
{"_id": "32f7aef5c13c715b00b966eaaba5dd2fe35df1a4", "title": "Bayesian network classifiers for identifying the slope of the customer lifecycle of long-life customers", "text": "Undoubtedly, Customer Relationship Management (CRM) has gained its importance through the statement that acquiring a new customer is several times more costly than retaining and selling additional products to existing customers. Consequently, marketing practitioners are currently often focusing on retaining customers for as long as possible. However, recent findings in relationship marketing literature have shown that large differences exist within the group of long-life customers in terms of spending and spending evolution. Therefore, this paper focuses on introducing a measure of a customer\u2019s future spending evolution that might improve relationship marketing decision making. In this study, from a marketing point of view, we focus on predicting whether a newly acquired customer will increase or decrease his/her future spending from initial purchase information. This is essentially a classification task. The main contribution of this study lies in comparing and evaluating several Bayesian network classifiers with statistical and other artificial intelligence techniques for the purpose of classifying customers in the binary classification problem at hand. Certain Bayesian network classifiers have been recently proposed in the artificial"}
{"_id": "25519ce6a924f5890180eacfa6e66203048f5dd1", "title": "Big Data : New Tricks for Econometrics", "text": "5 Nowadays computers are in the middle of most economic transactions. 6 These \u201ccomputer-mediated transactions\u201d generate huge amounts of 7 data, and new tools can be used to manipulate and analyze this data. 8 This essay offers a brief introduction to some of these tools and meth9 ods. 10 Computers are now involved in many economic transactions and can cap11 ture data associated with these transactions, which can then be manipulated 12 and analyzed. Conventional statistical and econometric techniques such as 13 regression often work well but there are issues unique to big data sets that 14 may require different tools. 15 First, the sheer size of the data involved may require more powerful data 16 manipulation tools. Second, we may have more potential predictors than 17 appropriate for estimation, so we need to do some kind of variable selection. 18 Third, large data sets may allow for more flexible relationships than simple 19 \u2217Hal Varian is Chief Economist, Google Inc., Mountain View, California, and Emeritus Professor of Economics, University of California, Berkeley, California. Thanks to Jeffrey Oldham, Tom Zhang, Rob On, Pierre Grinspan, Jerry Friedman, Art Owen, Steve Scott, Bo Cowgill, Brock Noland, Daniel Stonehill, Robert Snedegar, Gary King, the editors of this journal for comments on earlier versions of this paper. 1 linear models. Machine learning techniques such as decision trees, support 20 vector machines, neural nets, deep learning and so on may allow for more 21 effective ways to model complex relationships. 22 In this essay I will describe a few of these tools for manipulating and an23 alyzing big data. I believe that these methods have a lot to offer and should 24 be more widely known and used by economists. In fact, my standard advice 25 to graduate students these days is \u201cgo to the computer science department 26 and take a class in machine learning.\u201d There have been very fruitful collabo27 rations between computer scientists and statisticians in the last decade or so, 28 and I expect collaborations between computer scientists and econometricians 29 will also be productive in the future. 30 1 Tools to manipulate big data 31 Economists have historically dealt with data that fits in a spreadsheet, but 32 that is changing as new more detailed data becomes available; see Einav 33 and Levin [2013] for several examples and discussion. If you have more 34 than a million or so rows in a spreadsheet, you probably want to store it in a 35 relational database, such as MySQL. Relational databases offer a flexible way 36 to store, manipulate and retrieve data using a Structured Query Language 37 (SQL) which is easy to learn and very useful for dealing with medium-sized 38 data sets. 39 However, if you have several gigabytes of data or several million observa40 tions, standard relational databases become unwieldy. Databases to manage 41 data of this size are generically known as \u201cNoSQL\u201d databases. The term is 42 used rather loosely, but is sometimes interpreted as meaning \u201cnot only SQL.\u201d 43 NoSQL databases are more primitive than SQL databases in terms of data 44 manipulation capabilities but can handle larger amounts of data. 45 Due to the rise of computer mediated transactions, many companies have 46 found it necessary to develop systems to process billions of transactions per 47"}
{"_id": "634aa5d051512ee4b831e6210a234fb2d9b9d623", "title": "Cooperative Intersection Management: A Survey", "text": "Intersection management is one of the most challenging problems within the transport system. Traffic light-based methods have been efficient but are not able to deal with the growing mobility and social challenges. On the other hand, the advancements of automation and communications have enabled cooperative intersection management, where road users, infrastructure, and traffic control centers are able to communicate and coordinate the traffic safely and efficiently. Major techniques and solutions for cooperative intersections are surveyed in this paper for both signalized and nonsignalized intersections, whereas focuses are put on the latter. Cooperative methods, including time slots and space reservation, trajectory planning, and virtual traffic lights, are discussed in detail. Vehicle collision warning and avoidance methods are discussed to deal with uncertainties. Concerning vulnerable road users, pedestrian collision avoidance methods are discussed. In addition, an introduction to major projects related to cooperative intersection management is presented. A further discussion of the presented works is given with highlights of future research topics. This paper serves as a comprehensive survey of the field, aiming at stimulating new methods and accelerating the advancement of automated and cooperative intersections."}
{"_id": "593b0a74211460f424d471ab7155a0a05c5fd342", "title": "Sequential Mining: Patterns and Algorithms Analysis", "text": "This paper presents and analysis the common existing sequential pattern mining algorithms. It presents a classifying study of sequential pattern-mining algorithms into five extensive classes. First, on the basis of Apriori-based algorithm, second on Breadth First Search-based strategy, third on Depth First Search strategy, fourth on sequential closed-pattern algorithm and five on the basis of incremental pattern mining algorithms. At the end, a comparative analysis is done on the basis of important key features supported by various algorithms. This study gives an enhancement in the understanding of the approaches of sequential"}
{"_id": "b5e04a538ecb428c4cfef9784fe1f7d1c193cd1a", "title": "Microstrip-fed circular substrate integrated waveguide (SIW) cavity resonator and antenna", "text": "Substrate integrated waveguide (SIW) cavity resonators are among the emerging group of SIW-based circuit components that are gaining popularity and increasingly employed in integrated microwave and mm-wave circuits. The SIW cavities offer a significantly enhanced performance in comparison to the previously available planar microwave resonators. The high quality factor of the waveguide-based cavity resonators enables designing microwave oscillators with very low phase noise as well as compact high-gain antennas [1\u20134]. The SIW-cavity-based antennas also show much promise for implementation of low-cost lightweight fully-integrated high-gain antennas that find application in ultra-light communication satellites, low payload spacecrafts, high-frequency radar, and sensors. In this paper, a circular SIW cavity resonator, which is fed by a microstrip and via probe, is presented. The microstrip feed is optimized to achieve a return loss of better than 20dB from both simulation and measurements. A resonance frequency of 16.79GHz and quality factor of 76.3 are determined from vector network analyzer (VNA) measurements. Due to the folded slot on the upper conductor layer, the resonator is an efficient cavity backed antenna. A maximum gain of over 7dB is measured in an anechoic chamber at the resonance frequency of 16.79 GHz."}
{"_id": "35de4258058f02a31cd0a0882b5bcc14d7a06697", "title": "Quality driven web services composition", "text": "The process-driven composition of Web services is emerging as a promising approach to integrate business applications within and across organizational boundaries. In this approach, individual Web services are federated into composite Web services whose business logic is expressed as a process model. The tasks of this process model are essentially invocations to functionalities offered by the underlying component services. Usually, several component services are able to execute a given task, although with different levels of pricing and quality. In this paper, we advocate that the selection of component services should be carried out during the execution of a composite service, rather than at design-time. In addition, this selection should consider multiple criteria (e.g., price, duration, reliability), and it should take into account global constraints and preferences set by the user (e.g., budget constraints). Accordingly, the paper proposes a global planning approach to optimally select component services during the execution of a composite service. Service selection is formulated as an optimization problem which can be solved using efficient linear programming methods. Experimental results show that this global planning approach outperforms approaches in which the component services are selected individually for each task in a composite service."}
{"_id": "87d696d7dce4fed554430f100d0f2aaee9f73bc5", "title": "Navigating the massive world of reddit: using backbone networks to map user interests in social media", "text": "In the massive online worlds of social media, users frequently rely on organizing themselves around specific topics of interest to find and engage with like-minded people. However, navigating these massive worlds and finding topics of specific interest often proves difficult because the worlds are mostly organized haphazardly, leaving users to find relevant interests by word of mouth or using a basic search feature. Here, we report on a method using the backbone of a network to create a map of the primary topics of interest in any social network. To demonstrate the method, we build an interest map for the social news web site reddit and show how such a map could be used to navigate a social media world. Moreover, we analyze the network properties of the reddit social network and find that it has a scale-free, small-world, and modular community structure, much like other online social networks such as Facebook and Twitter. We suggest that the integration of interest maps into popular social media platforms will assist users in organizing themselves into more specific interest groups, which will help alleviate the overcrowding effect often observed in large online communities. Subjects Network Science and Online Social Networks, Visual Analytics"}
{"_id": "1d145b63fd065c562ed2fecb3f34643fc9653b60", "title": "Examining the Technology Acceptance Model Using Physician Acceptance of Telemedicine Technology", "text": "The rapid growth of investment in information technology (IT) by organizations worldwide has made user acceptance an increasingly critical technology implementation a d management issue. While such acceptance has received fairly extensive attention from previous research, additional efforts are needed to examine or validate existing research results, particularly those involving different technologies, user populations, and/or organizational contexts. In response, this paper reports a research work that examined the applicability of the Technology Acceptance Model (\u03a4\u0391\u039c) in explaining physicians' decisions to accept telemedicine technology in the health-care context. The technology, the user group, and the organizational context are all new to IT acceptance/adoption research. The study also addressed a pragmatic technology management need resulting from millions of dollars invested by healthcare organizations in developing and implementing telemedicine programs in recent years. The model's overall fit, explanatory power, and the individual causal links that it postulates were evaluated by examining the acceptance of telemedicine technology among physicians practicing at public tertiary hospitals in Hong Kong. Our results suggested that \u03a4\u0391\u039c was able to provide a reasonable depiction of physicians' intention to use telemedicine technology. Perceived usefulness was found to be a significant determinant ofattitude and intention but perceived ease of use was not. The relatively low R-square of the model suggests both the limitations of the parsimonious model and the need for incorporating additional factors or integrating with other IT acceptance models in order to improve its specificity and explanatory utility in a health-care context. Based on the study findings, implications for user technology acceptance research and telemedicine management are discussed."}
{"_id": "d7ab41adebaec9272c2797512a021482a594d040", "title": "DevOps for Developers", "text": "ed descriptions of machines by using a DSL while enjoying the full power of scripting languages (in both Puppet and Chef, you can describe behavior in the Ruby language (a dynamic, general-purpose object-oriented programming language), see http://www.ruby-lang. org/en/). Declarative descriptions of target behavior (i.e., what the system must be). Thus, running the scripts will always lead to the same end result. Management of code in version control. By using a version control system as the leading medium, you do not need to adjust the machines manually (which is not reproducible). Synchronization of environments by using a version control system and automatic provisioning of environments. Continuous integration servers, such as Jenkins, simply have to listen to the path in the version control system to detect changes. Then the configuration management tool (e.g., Puppet) ensures that the corresponding machines apply the behavior that is described in version control. Using tools such as Jenkins (see Chapter 8) and Puppet and Vagrant (see Chapter 9), complete setups, including virtualizations, can be managed automatically. Sharing of scripts (e.g., Puppet manifests). A cross-functional team that includes development and operations can develop this function. Sharing the scripts in the version control system enables all parties, particularly development and operations, to use those scripts to set up their respective environments: test environments (used by development) and production environments (managed by operations). Automation is an essential backbone of DevOps (see Chapter 3 and Chapter 8 for more information on automation). Automation is the use of solutions to reduce the need for human work. Automation can ensure that the software is built the same way each time, that the team sees every change made to the software, and that the software is tested and reviewed in the same way every day so that no defects slip through or are introduced through human error. In software development projects, a high level of automation is a prerequisite for quickly delivering the best quality and for obtaining feedback from stakeholders early and often. Automating aspects of DevOps helps to make parts of the process transparent for the whole team and also helps deploy software to different target environments in the same way. You can best improve what you measure; and to measure something usefully, you need a process that delivers results in a reproducible way. DevOps addresses aspects similar to those tackled by Agile development, but the former focuses on breaking down the walls between developers and operations workers. The challenge is to communicate the benefits of DevOps to both development and operations teams. Both groups may be reluctant to start implementing the shift toward DevOps because their day is already full of activities. So why should they be concerned with the work of others? Why should DEVOPS FOR DEVELOPERS 31 operations want to use unfamiliar tools and adjust their daily routines when their self-made, isolated solutions have worked just fine for years? Because of this resistance, the incentives and commitment provided by upper management are important. Incentives alone are not enough: unified processes and tool chains are also important. Upper management will also resist by questioning the wisdom of implementing DevOps if the concrete benefits are not visible. 
Better cash flow and improved time to market are hard to measure. Management asks questions that address the core problems of software engineering while ignoring the symptoms: how can the company achieve maximal earnings in a short period of time? How can requirements be made stable and delivered to customers quickly? These results and visions should be measured with metrics that are shared by development and operations. Existing metrics can be further used or replaced by metrics that accurately express business value. One example of an end-to-end metric is the cycle time, which we will discuss in detail in Chapter 3."}
{"_id": "bca1bf790987bfb8fccf4e158a5c9fab3ab371ac", "title": "Naive Bayes Word Sense Induction", "text": "We introduce an extended naive Bayes model for word sense induction (WSI) and apply it to a WSI task. The extended model incorporates the idea the words closer to the target word are more relevant in predicting its sense. The proposed model is very simple yet effective when evaluated on SemEval-2010 WSI data."}
{"_id": "53f3edfeb22de82c7a4b4a02209d296526eee38c", "title": "Dispatching optimization and routing guidance for emergency vehicles in disaster", "text": "Based on the problem that disasters occur frequently all over the world recently. This paper aims to develop dispatching optimization and dynamical routing guidance techniques for emergency vehicles under disaster conditions, so as to reduce emergency response time and avoid further possible deterioration of disaster situation. As to dispatching for emergency vehicles, firstly, classify the casualties into several regions based on the pickup locations, quantity and severity of casualties by an adaptive spectral clustering method, and then work out dispatching strategies for emergency vehicles by k-means clustering method based on the distance among casualties regions, emergency supply stations and hospitals. As to routing guidance for emergency vehicles, centrally dynamic route guidance system based on parallel computing technology is presented to offer safe, reliable and fast routes for emergency vehicles, which are subject to the network's impedance function based on real-time forecasted travel time. Finally, the algorithms presented in this paper are validated based on the platform of ArcGIS by generating casualties randomly in random areas and damaging the simulation network of Changchun city randomly."}
{"_id": "ef51ff88c525751e2d09f245a3bedc40cf364961", "title": "Breaking CAPTCHAs on the Dark Web 11 February , 2018", "text": "On the Dark Web, several websites inhibit automated scraping attempts by employing CAPTCHAs. Scraping important content from a website is possible if these CAPTCHAs are solved by a web scraper. For this purpose, a Machine Learning tool is used, TensorFlow and an Optical Character Recognition tool, Tesseract to solve simple CAPTCHAs. Two sets of CATPCHAs, which are also used on some Dark Web websites, were generated for testing purposes. Tesseract achieved a success rate of 27.6% and 13.7% for set 1 and 2, respectively. A total of three models were created for TensorFlow. One model per set of CAPTCHAs and one model with the two sets mixed together. TensorFlow achieved a success rate of 94.6%, 99.7%, and 70.1% for the first, second, and mixed set, respectively. The initial investment to train TensorFlow can take up to two days to train for a single type of CAPTCHA, depending on implementation efficiency and hardware. The CAPTCHA images, including the answers, are also a requirement for training TensorFlow. Whereas Tesseract can be used on-demand without need for prior training."}
{"_id": "0d4fca03c4748fcac491809f0f73cde401972e28", "title": "Business Intelligence", "text": "Business intelligence systems combine operational data with analytical tools to present complex and competitive information to planners and decision makers. The objective is to improve the timeliness and quality of inputs to the decision process. Business Intelligence is used to understand the capabilities available in the firm; the state of the art, trends, and future directions in the markets, the technologies, and the regulatory environment in which the firm competes; and the actions of competitors and the implications of these actions. The emergence of the data warehouse as a repository, advances in data cleansing, increased capabilities of hardware and software, and the emergence of the web architecture all combine to create a richer business intelligence environment than was available previously. Although business intelligence systems are widely used in industry, research about them is limited. This paper, in addition to being a tutorial, proposes a BI framework and potential research topics. The framework highlights the importance of unstructured data and discusses the need to develop BI tools for its acquisition, integration, cleanup, search, analysis, and delivery. In addition, this paper explores a matrix for BI data types (structured vs. unstructured) and data sources (internal and external) to guide research."}
{"_id": "22b22af6c27e6d4348ed9d131ec119ba48d8301e", "title": "Automated API Property Inference Techniques", "text": "Frameworks and libraries offer reusable and customizable functionality through Application Programming Interfaces (APIs). Correctly using large and sophisticated APIs can represent a challenge due to hidden assumptions and requirements. Numerous approaches have been developed to infer properties of APIs, intended to guide their use by developers. With each approach come new definitions of API properties, new techniques for inferring these properties, and new ways to assess their correctness and usefulness. This paper provides a comprehensive survey of over a decade of research on automated property inference for APIs. Our survey provides a synthesis of this complex technical field along different dimensions of analysis: properties inferred, mining techniques, and empirical results. In particular, we derive a classification and organization of over 60 techniques into five different categories based on the type of API property inferred: unordered usage patterns, sequential usage patterns, behavioral specifications, migration mappings, and general information."}
{"_id": "e66efb82b1c3982c6451923d73e870e95339c3a6", "title": "Trajectory pattern mining: Exploring semantic and time information", "text": "With the development of GPS and the popularity of smart phones and wearable devices, users can easily log their daily trajectories. Prior works have elaborated on mining trajectory patterns from raw trajectories. Trajectory patterns consist of hot regions and the sequential relationships among them, where hot regions refer the spatial regions with a higher density of data points. Note that some hot regions do not have any meaning for users. Moreover, trajectory patterns do not have explicit time information or semantic information. To enrich trajectory patterns, we propose semantic trajectory patterns which are referred to as the moving patterns with spatial, temporal, and semantic attributes. Given a user trajectory, we aim at mining frequent semantic trajectory patterns. Explicitly, we extract the three attributes from a raw trajectory, and convert it into a semantic mobility sequence. Given such a semantic mobility sequence, we propose two algorithms to discover frequent semantic trajectory patterns. The first algorithm, MB (standing for matching-based algorithm), is a naive method to find frequent semantic trajectory patterns. It generates all possible patterns and extracts the occurrence of the patterns from the semantic mobility sequence. The second algorithm, PS (standing for PrefixSpan-based algorithm), is developed to efficiently mine semantic trajectory patterns. Due to the good efficiency of PrefixSpan, algorithm PS will fully utilize the advantage of PrefixSpan. Since the semantic mobility sequence contains three attributes, we need to further transform it into a raw sequence before using algorithm PrefixSpan. Therefore, we propose the SS algorithm (standing for sequence symbolization algorithm) to achieve this purpose. To evaluate our proposed algorithms, we conducted experiments on the real datasets of Google Location History, and the experimental results show the effectiveness and efficiency of our proposed algorithms."}
{"_id": "331cd0d53df0254213557cee2d9f0a2109ba16d8", "title": "Performance Analysis of Modified LLC Resonant Converter", "text": "In this paper a modified form of the most efficient resonant LLC series parallel converter configuration is proposed. The proposed system comprises of an additional LC circuit synchronized with the existing resonant tank of LLC configuration (LLC-LC configuration). With the development of power electronics devices, resonant converters have been proved to be more efficient than conventional converters as they employ soft switching technique. Among the three basic configurations of resonant converter, Series Resonant Converter (SRC), Parallel Resonant Converter (PRC) and Series Parallel Resonant Converter (SPRC), the LLC configuration under SPRC is proved to be most efficient providing narrow switching frequency range for wide range of load variation, improved efficiency and providing ZVS capability even under no load condition. The modified LLC configuration i.e., LLC-LC configuration offers better efficiency as well as better output voltage and gain. The efficiency tends to increase with increase in input voltage and hence these are suitable for high input voltage operation. The simulation and analysis has been done for full bridge configuration of the switching circuit and the results are presented"}
{"_id": "85fc21452fe92532ec89444055880aadb0eacf4c", "title": "Deep Recurrent Neural Networks for seizure detection and early seizure detection systems", "text": "Epilepsy is common neurological diseases, affecting about 0.6-0.8 % of world population. Epileptic patients suffer from chronic unprovoked seizures, which can result in broad spectrum of debilitating medical and social consequences. Since seizures, in general, occur infrequently and are unpredictable, automated seizure detection systems are recommended to screen for seizures during long-term electroencephalogram (EEG) recordings. In addition, systems for early seizure detection can lead to the development of new types of intervention systems that are designed to control or shorten the duration of seizure events. In this article, we investigate the utility of recurrent neural networks (RNNs) in designing seizure detection and early seizure detection systems. We propose a deep learning framework via the use of Gated Recurrent Unit (GRU) RNNs for seizure detection. We use publicly available data in order to evaluate our method and demonstrate very promising evaluation results with overall accuracy close to 100 %. We also systematically investigate the application of our method for early seizure warning systems. Our method can detect about 98% of seizure events within the first 5 seconds of the overall epileptic seizure duration."}
{"_id": "8f6c14f6743d8f9a8ab0be99a50fb51a123ab62c", "title": "Document Image Binarization Using Recurrent Neural Networks", "text": "In the context of document image analysis, image binarization is an important preprocessing step for other document analysis algorithms, but also relevant on its own by improving the readability of images of historical documents. While historical document image binarization is challenging due to common image degradations, such as bleedthrough, faded ink or stains, achieving good binarization performance in a timely manner is a worthwhile goal to facilitate efficient information extraction from historical documents. In this paper, we propose a recurrent neural network based algorithm using Grid Long Short-Term Memory cells for image binarization, as well as a pseudo F-Measure based weighted loss function. We evaluate the binarization and execution performance of our algorithm for different choices of footprint size, scale factor and loss function. Our experiments show a significant trade-off between binarization time and quality for different footprint sizes. However, we see no statistically significant difference when using different scale factors and only limited differences for different loss functions. Lastly, we compare the binarization performance of our approach with the best performing algorithm in the 2016 handwritten document image binarization contest and show that both algorithms perform equally well."}
{"_id": "61dad02743d5333e942677836052b814bef4bad8", "title": "A collaborative filtering framework based on fuzzy association rules and multiple-level similarity", "text": "The rapid development of Internet technologies in recent decades has imposed a heavy information burden on users. This has led to the popularity of recommender systems, which provide advice to users about items they may like to examine. Collaborative Filtering (CF) is the most promising technique in recommender systems, providing personalized recommendations to users based on their previously expressed preferences and those of other similar users. This paper introduces a CF framework based on Fuzzy Association Rules and Multiple-level Similarity (FARAMS). FARAMS extended existing techniques by using fuzzy association rule mining, and takes advantage of product similarities in taxonomies to address data sparseness and nontransitive associations. Experimental results show that FARAMS improves prediction quality, as compared to similar approaches."}
{"_id": "3ddac15bd47bc0745db4297d30be71af43adf0bb", "title": "Greed is good: Approximating independent sets in sparse and bounded-degree graphs", "text": "Theminimum-degree greedy algorithm, or Greedy for short, is a simple and well-studied method for finding independent sets in graphs. We show that it achieves a performance ratio of (\u0394+2)/3 for approximating independent sets in graphs with degree bounded by \u0394. The analysis yields a precise characterization of the size of the independent sets found by the algorithm as a function of the independence number, as well as a generalization of Tur\u00e1n\u2019s bound. We also analyze the algorithm when run in combination with a known preprocessing technique, and obtain an improved $$(2\\bar d + 3)/5$$ performance ratio on graphs with average degree $$\\bar d$$ , improving on the previous best $$(\\bar d + 1)/2$$ of Hochbaum. Finally, we present an efficient parallel and distributed algorithm attaining the performance guarantees of Greedy."}
{"_id": "bb6e6e3251bbb80587bdb5064e24b55d728529b1", "title": "Mixed Methods Research : A Research Paradigm Whose Time Has Come", "text": "14 The purposes of this article are to position mixed methods research (mixed research is a synonym) as the natural complement to traditional qualitative and quantitative research, to present pragmatism as offering an attractive philosophical partner for mixed methods research, and to provide a framework for designing and conducting mixed methods research. In doing this, we briefly review the paradigm \u201cwars\u201d and incompatibility thesis, we show some commonalities between quantitative and qualitative research, we explain the tenets of pragmatism, we explain the fundamental principle of mixed research and how to apply it, we provide specific sets of designs for the two major types of mixed methods research (mixed-model designs and mixed-method designs), and, finally, we explain mixed methods research as following (recursively) an eight-step process. A key feature of mixed methods research is its methodological pluralism or eclecticism, which frequently results in superior research (compared to monomethod research). Mixed methods research will be successful as more investigators study and help advance its concepts and as they regularly practice it."}
{"_id": "d8e682b2f33b7c765d879717d68cdf262dba871e", "title": "Development Emails Content Analyzer: Intention Mining in Developer Discussions (T)", "text": "Written development communication (e.g. mailing lists, issue trackers) constitutes a precious source of information to build recommenders for software engineers, for example aimed at suggesting experts, or at redocumenting existing source code. In this paper we propose a novel, semi-supervised approach named DECA (Development Emails Content Analyzer) that uses Natural Language Parsing to classify the content of development emails according to their purpose (e.g. feature request, opinion asking, problem discovery, solution proposal, information giving etc), identifying email elements that can be used for specific tasks. A study based on data from Qt and Ubuntu, highlights a high precision (90%) and recall (70%) of DECA in classifying email content, outperforming traditional machine learning strategies. Moreover, we successfully used DECA for re-documenting source code of Eclipse and Lucene, improving the recall, while keeping high precision, of a previous approach based on ad-hoc heuristics."}
{"_id": "4e56ab1afd8a515a0a0b351fbf1b1d08624d0cc2", "title": "Shrink Globally , Act Locally : Sparse Bayesian Regularization and Prediction", "text": "We study the classic problem of choosing a prior distribution for a location parameter \u03b2 = (\u03b21, . . . , \u03b2p) as p grows large. First, we study the standard \u201cglobal-local shrinkage\u201d approach, based on scale mixtures of normals. Two theorems are presented which characterize certain desirable properties of shrinkage priors for sparse problems. Next, we review some recent results showing how L\u00e9vy processes can be used to generate infinite-dimensional versions of standard normal scale-mixture priors, along with new priors that have yet to be seriously studied in the literature. This approach provides an intuitive framework both for generating new regularization penalties and shrinkage rules, and for performing asymptotic analysis on existing models."}
{"_id": "64eb627f5e2048892edbeab44567516af4a43b2e", "title": "A randomized trial of a low-carbohydrate diet for obesity.", "text": "BACKGROUND\nDespite the popularity of the low-carbohydrate, high-protein, high-fat (Atkins) diet, no randomized, controlled trials have evaluated its efficacy.\n\n\nMETHODS\nWe conducted a one-year, multicenter, controlled trial involving 63 obese men and women who were randomly assigned to either a low-carbohydrate, high-protein, high-fat diet or a low-calorie, high-carbohydrate, low-fat (conventional) diet. Professional contact was minimal to replicate the approach used by most dieters.\n\n\nRESULTS\nSubjects on the low-carbohydrate diet had lost more weight than subjects on the conventional diet at 3 months (mean [+/-SD], -6.8+/-5.0 vs. -2.7+/-3.7 percent of body weight; P=0.001) and 6 months (-7.0+/-6.5 vs. -3.2+/-5.6 percent of body weight, P=0.02), but the difference at 12 months was not significant (-4.4+/-6.7 vs. -2.5+/-6.3 percent of body weight, P=0.26). After three months, no significant differences were found between the groups in total or low-density lipoprotein cholesterol concentrations. The increase in high-density lipoprotein cholesterol concentrations and the decrease in triglyceride concentrations were greater among subjects on the low-carbohydrate diet than among those on the conventional diet throughout most of the study. Both diets significantly decreased diastolic blood pressure and the insulin response to an oral glucose load.\n\n\nCONCLUSIONS\nThe low-carbohydrate diet produced a greater weight loss (absolute difference, approximately 4 percent) than did the conventional diet for the first six months, but the differences were not significant at one year. The low-carbohydrate diet was associated with a greater improvement in some risk factors for coronary heart disease. Adherence was poor and attrition was high in both groups. Longer and larger studies are required to determine the long-term safety and efficacy of low-carbohydrate, high-protein, high-fat diets."}
{"_id": "0dadc024bb2e9cb675165fdc7a13d55f5c732636", "title": "Overall C as a measure of discrimination in survival analysis: model specific population value and confidence interval estimation.", "text": "The assessment of the discrimination ability of a survival analysis model is a problem of considerable theoretical interest and important practical applications. This issue is, however, more complex than evaluating the performance of a linear or logistic regression. Several different measures have been proposed in the biostatistical literature. In this paper we investigate the properties of the overall C index introduced by Harrell as a natural extension of the ROC curve area to survival analysis. We develop the overall C index as a parameter describing the performance of a given model applied to the population under consideration and discuss the statistic used as its sample estimate. We discover a relationship between the overall C and the modified Kendall's tau and construct a confidence interval for our measure based on the asymptotic normality of its estimate. Then we investigate via simulations the length and coverage probability of this interval. Finally, we present a real life example evaluating the performance of a Framingham Heart Study model."}
{"_id": "9afa9c1c650d915c1b6f56b458ff3759bc26bf09", "title": "The apnea-ECG database", "text": "Sleep apnea is a sleep disorder with a high prevalence in the adult male population. Sleep apnea is regarded as an independent risk factor,for cardiovascular sequelae such as ischemic heart attacks and stroke. The diagnosis of sleep apnea requires polysomnographic studies in sleep laboratories with expensive equipment and attending personnel. Sleep apnea can be treated effectively using nasal ventilation therapy (nCPAP). Early recognition and selection of patients with sleep related breathing disorders is an important task. Although it has been suggested that this can be done on the basis of the ECG, careful quantitative studies of the accuracy of such techniques are needed. An annotated database with 70 nighttime ECG recordings has been created to support such studies. The annotations were based on visual scoring of disordered breathing during sleep."}
{"_id": "1d70f0d7bd782c65273bc689b6ada8723e52d7a3", "title": "Empirical comparison of algorithms for network community detection", "text": "Detecting clusters or communities in large real-world graphs such as large social or information networks is a problem of considerable interest. In practice, one typically chooses an objective function that captures the intuition of a network cluster as set of nodes with better internal connectivity than external connectivity, and then one applies approximation algorithms or heuristics to extract sets of nodes that are related to the objective function and that \"look like\" good communities for the application of interest.\n In this paper, we explore a range of network community detection methods in order to compare them and to understand their relative performance and the systematic biases in the clusters they identify. We evaluate several common objective functions that are used to formalize the notion of a network community, and we examine several different classes of approximation algorithms that aim to optimize such objective functions. In addition, rather than simply fixing an objective and asking for an approximation to the best cluster of any size, we consider a size-resolved version of the optimization problem. Considering community quality as a function of its size provides a much finer lens with which to examine community detection algorithms, since objective functions and approximation algorithms often have non-obvious size-dependent behavior."}
{"_id": "312a2edbec5fae34beaf33faa059d37d04cb7235", "title": "Community detection algorithms: a comparative analysis.", "text": "Uncovering the community structure exhibited by real networks is a crucial step toward an understanding of complex systems that goes beyond the local organization of their constituents. Many algorithms have been proposed so far, but none of them has been subjected to strict tests to evaluate their performance. Most of the sporadic tests performed so far involved small networks with known community structure and/or artificial graphs with a simplified structure, which is very uncommon in real systems. Here we test several methods against a recently introduced class of benchmark graphs, with heterogeneous distributions of degree and community size. The methods are also tested against the benchmark by Girvan and Newman [Proc. Natl. Acad. Sci. U.S.A. 99, 7821 (2002)] and on random graphs. As a result of our analysis, three recent algorithms introduced by Rosvall and Bergstrom [Proc. Natl. Acad. Sci. U.S.A. 104, 7327 (2007); Proc. Natl. Acad. Sci. U.S.A. 105, 1118 (2008)], Blondel [J. Stat. Mech.: Theory Exp. (2008), P10008], and Ronhovde and Nussinov [Phys. Rev. E 80, 016109 (2009)] have an excellent performance, with the additional advantage of low computational complexity, which enables one to analyze large systems."}
{"_id": "3e656e08d2b8d1bf84db56090f4053316b01c10f", "title": "Benchmarks for testing community detection algorithms on directed and weighted graphs with overlapping communities.", "text": "Many complex networks display a mesoscopic structure with groups of nodes sharing many links with the other nodes in their group and comparatively few with nodes of different groups. This feature is known as community structure and encodes precious information about the organization and the function of the nodes. Many algorithms have been proposed but it is not yet clear how they should be tested. Recently we have proposed a general class of undirected and unweighted benchmark graphs, with heterogeneous distributions of node degree and community size. An increasing attention has been recently devoted to develop algorithms able to consider the direction and the weight of the links, which require suitable benchmark graphs for testing. In this paper we extend the basic ideas behind our previous benchmark to generate directed and weighted networks with built-in community structure. We also consider the possibility that nodes belong to more communities, a feature occurring in real systems, such as social networks. As a practical application, we show how modularity optimization performs on our benchmark."}
{"_id": "56ff48f2b22014d5f59fd2db2b0fc0c651038de1", "title": "Link communities reveal multiscale complexity in networks", "text": "Networks have become a key approach to understanding systems of interacting objects, unifying the study of diverse phenomena including biological organisms and human society. One crucial step when studying the structure and dynamics of networks is to identify communities: groups of related nodes that correspond to functional subunits such as protein complexes or social spheres. Communities in networks often overlap such that nodes simultaneously belong to several groups. Meanwhile, many networks are known to possess hierarchical organization, where communities are recursively grouped into a hierarchical structure. However, the fact that many real networks have communities with pervasive overlap, where each and every node belongs to more than one group, has the consequence that a global hierarchy of nodes cannot capture the relationships between overlapping groups. Here we reinvent communities as groups of links rather than nodes and show that this unorthodox approach successfully reconciles the antagonistic organizing principles of overlapping communities and hierarchy. In contrast to the existing literature, which has entirely focused on grouping nodes, link communities naturally incorporate overlap while revealing hierarchical organization. We find relevant link communities in many networks, including major biological networks such as protein\u2013protein interaction and metabolic networks, and show that a large social network contains hierarchically organized community structures spanning inner-city to regional scales while maintaining pervasive overlap. Our results imply that link communities are fundamental building blocks that reveal overlap and hierarchical organization in networks to be two aspects of the same phenomenon."}
{"_id": "62bb7ce6ae6ed38f0ae4d304d56e8edfba1870d0", "title": "Topic-link LDA: joint models of topic and author community", "text": "Given a large-scale linked document collection, such as a collection of blog posts or a research literature archive, there are two fundamental problems that have generated a lot of interest in the research community. One is to identify a set of high-level topics covered by the documents in the collection; the other is to uncover and analyze the social network of the authors of the documents. So far these problems have been viewed as separate problems and considered independently from each other. In this paper we argue that these two problems are in fact inter-dependent and should be addressed together. We develop a Bayesian hierarchical approach that performs topic modeling and author community discovery in one unified framework. The effectiveness of our model is demonstrated on two blog data sets in different domains and one research paper citation data from CiteSeer."}
{"_id": "b7543053ab5e44a6e0fdfc6f9b9d3451011569b6", "title": "The role of brand logos in fi rm performance", "text": "a r t i c l e i n f o Keywords: Brand logos Brand management Aesthetics Commitment Brand extensions Firm performance This research demonstrates that the positive effects of brand logos on customer brand commitment and firm performance derive not from enabling brand identification, as is currently understood, but primarily from facilitating customer self-identity/expressiveness, representing a brand's functional benefits, and offering aesthetic appeal. This study examines whether brand names or visual symbols as logos are more effective at creating these benefits and whether or not the impact of the three aforementioned brand logo benefits on customer brand commitment and firm performance is contingent on the extent to which a firm leverages its brand (i.e., employs brand extensions to different product categories)."}
{"_id": "f07fd927971c40261dd7cef1ad6d2360b23fe294", "title": "A greedy approach to sparse canonical correlation analysis", "text": "We consider the problem of sparse canonical correlation analysis (CCA), i.e., the search for two linear combi nations, one for each multivariate, that yield maximum correlation using a specified number of variables. We propose an efficient numeri cal approximation based on a direct greedy approach which bound s the correlation at each stage. The method is specifically des igned to cope with large data sets and its computational complexit y depends only on the sparsity levels. We analyze the algorith m\u2019s performance through the tradeoff between correlation and parsimony. The results of numerical simulation suggest that a significant portion of the correlation may be captured using a relatively small number of variables. In addition, we exami ne the use of sparse CCA as a regularization method when the number of available samples is small compared to the dimensions of t he multivariates. I. I NTRODUCTION Canonical correlation analysis (CCA), introduced by Harol d Hotelling [1], is a standard technique in multivariate data n lysis for extracting common features from a pair of data sourc es [2], [3]. Each of these data sources generates a random vecto r that we call a multivariate. Unlike classical dimensionali ty reduction methods which address one multivariate, CCA take s into account the statistical relations between samples fro m two spaces of possibly different dimensions and structure. In particular, it searches for two linear combinations, one fo r each multivariate, in order to maximize their correlation. It is used in different disciplines as a stand-alone tool or as a preprocessing step for other statistical methods. Further more, CCA is a generalized framework which includes numerous classical methods in statistics, e.g., Principal Componen t Analysis (PCA), Partial Least Squares (PLS) and Multiple Linear Regression (MLR) [4]. CCA has recently regained attention with the advent of kernel CCA and its application to independent component analysis [5], [6]. The last decade has witnessed a growing interest in the search for sparse representations of signals and sparse numerical methods. Thus, we consider the problem of sparse CCA, i.e., the search for linear combinations with maximal correlation using a small number of variables. The quest for sparsity can be motivated through various reasonings. First is the ability to interpret and visualize the results. A small number of variables allows us to get the \u201cbig picture\u201d, while sacrificing some of the small details. Moreover, spars e representations enable the use of computationally efficien t The first two authors contributed equally to this manuscript . This work was supported in part by an AFOSR MURI under Grant FA9550-06-1-0 324. numerical methods, compression techniques, as well as nois e reduction algorithms. The second motivation for sparsity i s regularization and stability. One of the main vulnerabilit ies of CCA is its sensitivity to a small number of observations. Thu s, regularized methods such as ridge CCA [7] must be used. In this context, sparse CCA is a subset selection scheme which allows us to reduce the dimensions of the vectors and obtain a stable solution. To the best of our knowledge the first reference to sparse CCA appeared in [2] where backward and stepwise subset selection were proposed. 
This discussion was of qualitativ e nature and no specific numerical algorithm was proposed. Recently, increasing demands for multidimensional data pr ocessing and decreasing computational cost has caused the topic to rise to prominence once again [8]\u2013[13]. The main disadvantages with these current solutions is that there is no direct control over the sparsity and it is difficult (and nonintuitive) to select their optimal hyperparameters. In add ition, the computational complexity of most of these methods is too high for practical applications with high dimensional data sets. Sparse CCA has also been implicitly addressed in [9], [14] an d is intimately related to the recent results on sparse PCA [9] , [15]\u2013[17]. Indeed, our proposed solution is an extension of the results in [17] to CCA. The main contribution of this work is twofold. First, we derive CCA algorithms with direct control over the sparsity in each of the multivariates and examine their performance. Our computationally efficient methods are specifically aime d at understanding the relations between two data sets of larg e dimensions. We adopt a forward (or backward) greedy approach which is based on sequentially picking (or dropping) variables. At each stage, we bound the optimal CCA solution and bypass the need to resolve the full problem. Moreover, the computational complexity of the forward greedy method does not depend on the dimensions of the data but only on the sparsity parameters. Numerical simulation results show th at a significant portion of the correlation can be efficiently cap tured using a relatively low number of non-zero coefficients. Our second contribution is investigation of sparse CCA as a regularization method. Using empirical simulations we examin e the use of the different algorithms when the dimensions of the multivariates are larger than (or of the same order of) the number of samples and demonstrate the advantage of sparse CCA. In this context, one of the advantages of the greedy approach is that it generates the full sparsity path i n a single run and allows for efficient parameter tuning using"}
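A naive rendering of the forward greedy scheme is sketched below. Unlike the paper's method, which bounds the correlation at each stage to avoid re-solving the full problem, this illustration simply recomputes the first canonical correlation for every candidate variable; numpy is assumed and the data are invented.

```python
# Naive sketch of forward greedy variable selection for sparse CCA. This
# recomputes the first canonical correlation for each candidate, rather than
# bounding it as the paper's algorithm does.
import numpy as np

def first_cancorr(X, Y, reg=1e-6):
    """First canonical correlation between column blocks X and Y (ridge-regularized)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Sxx = Xc.T @ Xc + reg * np.eye(X.shape[1])
    Syy = Yc.T @ Yc + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc
    Lx, Ly = np.linalg.cholesky(Sxx), np.linalg.cholesky(Syy)
    M = np.linalg.solve(Lx, Sxy)       # Lx^{-1} Sxy
    M = np.linalg.solve(Ly, M.T).T     # Lx^{-1} Sxy Ly^{-T}: singular values = canonical corrs
    return np.linalg.svd(M, compute_uv=False)[0]

def greedy_sparse_cca(X, Y, k):
    """Pick k columns of X, one at a time, greedily maximizing correlation with Y."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = max(remaining, key=lambda j: first_cancorr(X[:, selected + [j]], Y))
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
Y = rng.normal(size=(200, 3))
X = rng.normal(size=(200, 10))
X[:, 4] += Y[:, 0]                    # plant one genuinely correlated variable
print(greedy_sparse_cca(X, Y, 2))     # column 4 should be selected first
```

The ridge term reg plays the role of the regularization that the introduction motivates when samples are scarce relative to dimension.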
{"_id": "cbf796150ff01714244f09ccc16f16ffc471ffdb", "title": "Matching Web Tables with Knowledge Base Entities: From Entity Lookups to Entity Embeddings", "text": "Web tables constitute valuable sources of information for various applications, ranging from Web search to Knowledge Base (KB) augmentation. An underlying common requirement is to annotate the rows of Web tables with semantically rich descriptions of entities published in Web KBs. In this paper, we evaluate three unsupervised annotation methods: (a) a lookup-based method which relies on the minimal entity context provided in Web tables to discover correspondences to the KB, (b) a semantic embeddings method that exploits a vectorial representation of the rich entity context in a KB to identify the most relevant subset of entities in the Web table, and (c) an ontology matching method, which exploits schematic and instance information of entities available both in a KB and a Web table. Our experimental evaluation is conducted using two existing benchmark data sets in addition to a new large-scale benchmark created using Wikipedia tables. Our results show that: 1) our novel lookup-based method outperforms state-of-theart lookup-based methods, 2) the semantic embeddings method outperforms lookup-based methods in one benchmark data set, and 3) the lack of a rich schema in Web tables can limit the ability of ontology matching tools in performing high-quality table annotation. As a result, we propose a hybrid method that significantly outperforms individual methods on all the benchmarks."}
{"_id": "a742d81416a1da97af53e0a9748c16f37fd61b40", "title": "Blood Code: The History and Future of Video Game Censorship", "text": "INTRODUCTION ................................................................................... 571 I. FIRST AMENDMENT BACKGROUND ....................................... 573 II. THE ANALOGOUS HISTORIES OF FILMS AND VIDEO GAMES ....................................................................................... 576 A. Film Controversy and the Formation of the MPAA ................ 576 B. Early Video Game Controversy and the Formation of the ESRB ................................................................................... 580 C. Doom and Columbine ........................................................... 584 D. Jack Thompson and Grand Theft Auto ................................... 586 III. WHY VIDEO GAMES SHOULD NOT BE TREATED DIFFERENTLY THAN FILMS .................................................... 593 A. Violent and Sexual Content in Video Games is Distinguishable from Pornography and Obscenity. .................. 594 B. Violent Game Content is Similar to Violent Film Content. ..... 596 C. Positive Social Aspects of Violent Gaming............................... 597 D. Desensitization Will Lead to a Decrease in Political Outrage. ............................................................................... 604 IV. EXISTING VIDEO GAME JURISPRUDENCE .............................. 605 V. RATINGS AND LABELS AS UNCONSTITUTIONAL CENSORSHIP.............................................................................. 607 CONCLUSION ....................................................................................... 609"}
{"_id": "0814694a247a9b6e38dde34ab95067a63f67e458", "title": "Characterizing the life cycle of online news stories using social media reactions", "text": "This paper presents a study of the life cycle of news articles posted online. We describe the interplay between website visitation patterns and social media reactions to news content. We show that we can use this hybrid observation method to characterize distinct classes of articles. We also find that social media reactions can help predict future visitation patterns early and accurately. We validate our methods using qualitative analysis as well as quantitative analysis on data from a large international news network, for a set of articles generating more than 3,000,000 visits and 200,000 social media reactions. We show that it is possible to model accurately the overall traffic articles will ultimately receive by observing the first ten to twenty minutes of social media reactions. Achieving the same prediction accuracy with visits alone would require to wait for three hours of data. We also describe significant improvements on the accuracy of the early prediction of shelf-life for news stories."}
{"_id": "275f66e845043217d5c37328b5e71a178302469f", "title": "Improving www proxies performance with greedy-dual-size-frequency caching policy", "text": "Web, HTTP, WWW proxies, caching policies, replacement algorithms, performance Web proxy caches are used to improve performance of the WWW. Since the majority of Web documents are static documents, caching them at WWW proxies reduces both network traffic and response time. One of the keys to better proxy cache performance is an efficient caching policy which keeps in the cache popular documents and replaces rarely used ones. This paper introduces the Greedy-Dual-Size-Frequency caching policy to maximize hit and byte hit rates for WWW proxies. Proposed caching strategy incorporates in a simple way the most important characteristics of the file and its accesses such as file size, file access frequency and recentness of the last access. Greedy-Dual-Size-Frequency is an improvement of Greedy-Dual-Size algorithm \u2013 the current champion among the replacement strategies proposed for Web proxy caches."}
{"_id": "1c8ea4d0687eae94871ee1916da4445e08f29076", "title": "Modeling Review Argumentation for Robust Sentiment Analysis", "text": "Most text classification approaches model text at the lexical and syntactic level only, lacking domain robustness and explainability. In tasks like sentiment analysis, such approaches can result in limited effectiveness if the texts to be classified consist of a series of arguments. In this paper, we claim that even a shallow model of the argumentation of a text allows for an effective and more robust classification, while providing intuitive explanations of the classification results. Here, we apply this idea to the supervised prediction of sentiment scores for reviews. We combine existing approaches from sentiment analysis with novel features that compare the overall argumentation structure of the given review text to a learned set of common sentiment flow patterns. Our evaluation in two domains demonstrates the benefit of modeling argumentation for text classification in terms of effectiveness and robustness."}
{"_id": "f698467f1dd781f652c9839379ccc548a9aa4af1", "title": "Accounting for the effects of accountability.", "text": "This article reviews the now extensive research literature addressing the impact of accountability on a wide range of social judgments and choices. It focuses on 4 issues: (a) What impact do various accountability ground rules have on thoughts, feelings, and action? (b) Under what conditions will accountability attenuate, have no effect on, or amplify cognitive biases? (c) Does accountability alter how people think or merely what people say they think? and (d) What goals do accountable decision makers seek to achieve? In addition, this review explores the broader implications of accountability research. It highlights the utility of treating thought as a process of internalized dialogue; the importance of documenting social and institutional boundary conditions on putative cognitive biases; and the potential to craft empirical answers to such applied problems as how to structure accountability relationships in organizations."}
{"_id": "49afbe880b8bd419605beb84d3382647bf8e50ea", "title": "An Effective Gated and Attention-Based Neural Network Model for Fine-Grained Financial Target-Dependent Sentiment Analysis", "text": ""}
{"_id": "c1927578a61df3c5a33f6bca9f9bd5c181e1d5ac", "title": "Security Issues in the Internet of Things ( IoT ) : A Comprehensive Study", "text": "Wireless communication networks are highly prone to security threats. The major applications of wireless communication networks are in military, business, healthcare, retail, and transportations. These systems use wired, cellular, or adhoc networks. Wireless sensor networks, actuator networks, and vehicular networks have received a great attention in society and industry. In recent years, the Internet of Things (IoT) has received considerable research attention. The IoT is considered as future of the internet. In future, IoT will play a vital role and will change our living styles, standards, as well as business models. The usage of IoT in different applications is expected to rise rapidly in the coming years. The IoT allows billions of devices, peoples, and services to connect with others and exchange information. Due to the increased usage of IoT devices, the IoT networks are prone to various security attacks. The deployment of efficient security and privacy protocols in IoT networks is extremely needed to ensure confidentiality, authentication, access control, and integrity, among others. In this paper, an extensive comprehensive study on security and privacy issues in IoT networks is provided. Keywords\u2014Internet of Things (IoT); security issues in IoT; security; privacy"}
{"_id": "b5f6ee9baa07301bba6e187bd9380686a72866c6", "title": "Networks of the Brain", "text": "Models of Network Growth All networks, whether they are social, technological, or biological, are the result of a growth process. Many of these networks continue to grow for prolonged periods of time, continually modifying their connectivity structure throughout their entire existence. For example, the World Wide Web has grown from a small number of cross-linked documents in the early 1 990s to an estimated 30 billion indexed web pages in 2009.3 The extraordinary growth of the Web continues unabated and has occurred without any top-down design, yet the topology of its hyperlink structure exhibits characteristic statistical patterns (Pastor-Satorras and Vespig\u00ad nani, 2004). Other technological networks such as the power grid, global transportation networks, or mobile communication networks continue to grow and evolve, each displaying characteristic patterns of expansion and elaboration. Growth and change in social and organizational"}
{"_id": "d0eab53a6b20bfca924b85fcfb0ee76bfde6d4ef", "title": "Comparative Analysis of ADS-B Verification Techniques", "text": "ADS-B is one of many Federal Aviation Administration (FAA) regulated technologies used to monitor air traffic with high precision, while reducing dependencies on dated and costly radar equipment [1]. The FAA hopes to decrease the separation between aircraft, reduce risk of collision as air traffic density increases, save fuel costs, and increase situational awareness of both commercial and general aviation aircraft within United States airspace. Several aviation technology experts have expressed concern over the security of the ADS-B protocol [2] [3]. ADS-B has an open and well known data format, which is broadcast on known frequencies. This means that the protocol is highly susceptible to radio frequency (RF) attacks such as eavesdropping, jamming, and spoofing. Eavesdropping and jamming will be reviewed in Section 3.4. While eavesdropping and jamming attacks are well studied, due to their applicability in many radio technologies, spoofing attacks against ADS-B are particular to this system. As such, the latter is the focus of our research. This paper evaluates so-called Kalman Filtering and Group Validation techniques (described below) in order to assess which would be a better position verification method of ADS-B signals. The parameters for the comparative analysis include both technical feasibility and practical implementation of each position verification technique. The goal is to offer a practical position verification process which could be implemented with limited government funding within the next 10 years."}
{"_id": "4c2fedecddcae64514ad99b7301ad6e04654f10d", "title": "Deep Learning and Its Applications to Signal and Information Processing [Exploratory DSP]", "text": "The purpose of this article is to introduce the readers to the emerging technologies enabled by deep learning and to review the research work conducted in this area that is of direct relevance to signal processing. We also point out, in our view, the future research directions that may attract interests of and require efforts from more signal processing researchers and practitioners in this emerging area for advancing signal and information processing technology and applications."}
{"_id": "d0c9acb277da76aebf56c021cb02b51cdfbb56b8", "title": "The Anti-vaccination Movement: A Regression in Modern Medicine", "text": "There have been recent trends of parents in Western countries refusing to vaccinate their children due to numerous reasons and perceived fears. While opposition to vaccines is as old as the vaccines themselves, there has been a recent surge in the opposition to vaccines in general, specifically against the MMR (measles, mumps, and rubella) vaccine, most notably since the rise in prominence of the notorious British ex-physician, Andrew Wakefield,\u00a0and his works. This has caused multiple measles outbreaks in Western countries where the measles virus was previously considered eliminated. This paper evaluates and reviews the origins of the anti-vaccination movement, the reasons behind the recent strengthening of the movement, role of the internet in the spread of anti-vaccination ideas, and the repercussions in terms of public health and safety."}
{"_id": "e170ca6dad1221f4bb2e4fc3d42a182e23026b80", "title": "Emotion control in collaborative learning situations: do students regulate emotions evoked by social challenges?", "text": "BACKGROUND\nDuring recent decades, self-regulated learning (SRL) has become a major research field. SRL successfully integrates the cognitive and motivational components of learning. Self-regulation is usually seen as an individual process, with the social aspects of regulation conceptualized as one aspect of the context. However, recent research has begun to investigate whether self-regulation processes are complemented by socially shared regulation processes.\n\n\nAIMS\nThe presented study investigated what kind of socio-emotional challenges students experience during collaborative learning and whether the students regulate the emotions evoked during these situations. The interplay of the emotion regulation processes between the individual and the group was also studied.\n\n\nSAMPLE\nThe sample for this study was 63 teacher education students who studied in groups of three to five during three collaborative learning tasks.\n\n\nMETHOD\nStudents' interpretations of experienced social challenges and their attempts to regulate emotions evoked by these challenges were collected following each task using the Adaptive Instrument for the Regulation of Emotions.\n\n\nRESULTS\nThe results indicated that students experienced a variety of social challenges. Students also reported the use of shared regulation in addition to self-regulation. Finally, the results suggested that intrinsic group dynamics are derived from both individual and social elements of collaborative situations.\n\n\nCONCLUSION\nThe findings of the study support the assumption that students can regulate emotions collaboratively as well as individually. The study contributes to our understanding of the social aspects of emotional regulation in collaborative learning contexts."}
{"_id": "6f8a13a1a7eba8966627775c32ae59dafd91cedc", "title": "PAPILLON: expressive eyes for interactive characters", "text": "PAPILLON is a technology for designing highly expressive animated eyes for interactive characters, robots and toys. Expressive eyes are essential in any form of face-to-face communication [2] and designing them has been a critical challenge in robotics, as well as in interactive character and toy development."}
{"_id": "e33b9ee3c575c6a38b873888e796d29f59d98e04", "title": "Heart-Brain Neurodynamics : The Making of Emotions", "text": "As pervasive and vital as they are in human experience, emotions have long remained an enigma to science. This monograph explores recent scientific advances that clarify central controversies in the study of emotion, including the relationship between intellect and emotion, and the historical debate on the source of emotional experience. Particular attention is given to the intriguing body of research illuminating the critical role of ascending input from the body to the brain in the generation and perception of emotions. This discussion culminates in the presentation of a systems-oriented model of emotion in which the brain functions as a complex pattern-matching system, continually processing input from both the external and internal environments. From this perspective it is shown that the heart is a key component of the emotional system, thus providing a physiological basis for the long-acknowledged link between the heart and our emotional life."}
{"_id": "19b7e0786d9e093fdd8c8751dac0c4eb0aea0b74", "title": "The James-Lange theory of emotions: a critical examination and an alternative theory. By Walter B. Cannon, 1927.", "text": ""}
{"_id": "224f751d4691515b3f8010d12660c70dd62336b8", "title": "DeepHand: Robust Hand Pose Estimation by Completing a Matrix Imputed with Deep Features", "text": "We propose DeepHand to estimate the 3D pose of a hand using depth data from commercial 3D sensors. We discriminatively train convolutional neural networks to output a low dimensional activation feature given a depth map. This activation feature vector is representative of the global or local joint angle parameters of a hand pose. We efficiently identify 'spatial' nearest neighbors to the activation feature, from a database of features corresponding to synthetic depth maps, and store some 'temporal' neighbors from previous frames. Our matrix completion algorithm uses these 'spatio-temporal' activation features and the corresponding known pose parameter values to estimate the unknown pose parameters of the input feature vector. Our database of activation features supplements large viewpoint coverage and our hierarchical estimation of pose parameters is robust to occlusions. We show that our approach compares favorably to state-of-the-art methods while achieving real time performance (\u2248 32 FPS) on a standard computer."}
{"_id": "486eb944129e0d90a7d2ef8d6085fd482c9be6c5", "title": "Correctness By Construction: Better Can Also Be Cheaper", "text": "I n December 1999 CrossTalk [3], David Cook provided a well-reasoned historical analysis of programming language development and considered the role languages play in the software development process. The article was valuable because it showed that programming language developments are not sufficient to ensure success; however, it would be dangerous to conclude from this that they are not necessary for success. Cook rightly identifies other issues such as requirements capture, specifications, and verification and validation (V&V) that need to be addressed. Perhaps we need to look at programming languages not just in terms of their ability to code some particular design but in the influence the language has on some of these other vital aspects of the development process. The key notion is that of the benefit of a precise language or language subset. If the term subset has set anyone thinking \" oh no, not another coding standard, \" then read on, the topic is much more interesting and useful than that! Language Issues Programming languages have evolved in three main ways. First came improvements in structure; then attempts at improving compile-time error detection through such things as strong typing; and, most significantly , facilities to improve our ability to express abstractions. All of these have shaped the way we think about problem solving. However, programming languages have not evolved in their precision of expression. In fact, they may have actually gotten worse since the meaning of a sample of machine code is exact and unequivocal , whereas the meaning of the constructs of typical modern high-order languages are substantially less certain. The evolution of C into C++ certainly improved its ability to express design abstractions but, if anything, the predictability of the compiled code decreased. These ambiguities arise either from deficiencies in the original language definition or from implementation freedoms given to the compiler writer for ease of implementation or efficiency reasons. None of this may look like a very serious problem. We can still do code walk-throughs and reviews and, after all, we still have to do dynamic testing that should flush out any remaining ambiguities. In fact the evidence is quite strong that it does matter because it creates an environment where we are encouraged to make little attempt to reason about the software we are producing at each stage of its development. Since we typically do not have formal mathematical specifications and we use imprecise \u2026"}
{"_id": "15fc05b9da56764192d56036721a6a19239c07fc", "title": "A lifelong learning perspective for mobile robot control", "text": "Abstmct-Designing robots that learn by themselves to perform complex real-world tasks is a still-open challenge for the field of Robotics and Artificial Intelligence. In this paper we present the robot learning problem as a lifelong problem, in which a robot faces a collection of tasks over its entire lifetime. Such a scenario provides the opportunity to gather general-purpose knowledge that transfers across tasks. We illustrate a particular learning mechanism, explanation-based neural network learning, that transfers knowledge between related tasks via neural network action models. The learning approach is illustrated using a mobile robot, equipped with visual, ultrasonic and laser sensors. In less than 10 minutes operation time, the robot is able to learn to navigate to a marked target object in a natural office environment."}
{"_id": "5a3e2899deed746f1513708f1f0f24a25f4a0750", "title": "3D Point Cloud Registration for Localization Using a Deep Neural Network Auto-Encoder", "text": "We present an algorithm for registration between a large-scale point cloud and a close-proximity scanned point cloud, providing a localization solution that is fully independent of prior information about the initial positions of the two point cloud coordinate systems. The algorithm, denoted LORAX, selects super-points–local subsets of points–and describes the geometric structure of each with a low-dimensional descriptor. These descriptors are then used to infer potential matching regions for an efficient coarse registration process, followed by a fine-tuning stage. The set of super-points is selected by covering the point clouds with overlapping spheres, and then filtering out those of low-quality or nonsalient regions. The descriptors are computed using state-of-the-art unsupervised machine learning, utilizing the technology of deep neural network based auto-encoders. Abstract This novel framework provides a strong alternative to the common practice of using manually designed key-point descriptors for coarse point cloud registration. Utilizing super-points instead of key-points allows the available geometrical data to be better exploited to find the correct transformation. Encoding local 3D geometric structures using a deep neural network auto-encoder instead of traditional descriptors continues the trend seen in other computer vision applications and indeed leads to superior results. The algorithm is tested on challenging point cloud registration datasets, and its advantages over previous approaches as well as its robustness to density changes, noise, and missing data are shown."}
{"_id": "89a3c09e0a4c54f89a7237be92d3385116030efc", "title": "Phishing Detection Taxonomy for Mobile Device", "text": "Phishing is one of the social engineering attacks and currently hit on mobile devices. Based on security report by Lookout [1], 30% of Lookout users clicking on an unsafe link per year by using mobile device. Few phishing detection techniques have been applied on mobile device. However, review on phishing detection technique on the detection technique redundant is still need. This paper addresses the current trend phishing detection for mobile device and identifies significant criterion to improve phishing detection techniques on mobile device. Thus, existing research on phishing detection technique for computer and mobile device will be compared and analysed. Hence, outcome of the analysis becomes a guideline in proposing generic phishing detection taxonomy for mobile device."}
{"_id": "c59f39796e0f8e733f44b1cfe374cfe76834dcf8", "title": "Bandwidth formula for Linear FMCW radar waveforms", "text": "The International Telecommunications Union provides recommendations regarding spectral emission bounds for primary radar systems. These bounds are currently in review and are defined in terms of spectral occupancy, necessary bandwidth, 40dB bandwidth and out-of-band roll-off rates. Here we derive out-of-band domain spectral envelopes, bandwidth formula and roll-off rates, for various Linear FMCW radar waveforms including sawtooth (LFMCW), Quadratic Phase Coded LFMCW, LFM Pulse Train, and Hann amplitude tapered LFMCW."}
{"_id": "585654039c441a15cdda936902f0f1f9b7498a89", "title": "Gold Fingers: 3D Targets for Evaluating Capacitive Readers", "text": "With capacitive fingerprint readers being increasingly used for access control as well as for smartphone unlock and payments, there is a growing interest among metrology agencies (e.g., the National Institute of Standards and Technology) to develop standard artifacts (targets) and procedures for repeatable evaluation of capacitive readers. We present our design and fabrication procedures to create conductive 3D targets (gold fingers) for capacitive readers. Wearable 3D targets with known feature markings (e.g., fingerprint ridge flow and ridge spacing) are first fabricated using a high-resolution 3D printer. A sputter coating process is subsequently used to deposit a thin layer (~300 nm) of conductive materials (titanium and gold) on 3D printed targets. The wearable gold finger targets are used to evaluate a PIV-certified single-finger capacitive reader as well as small-area capacitive readers embedded in smartphones and access control terminals. In additional, we show that a simple procedure to create 3D printed spoofs with conductive carbon coating is able to successfully spoof a PIV-certified single-finger capacitive reader as well as a capacitive reader embedded in an access control terminal."}
{"_id": "7e0f013e85eff9b089f58d9a3e98605ae1a7ba18", "title": "On Training Targets for Supervised Speech Separation", "text": "Formulation of speech separation as a supervised learning problem has shown considerable promise. In its simplest form, a supervised learning algorithm, typically a deep neural network, is trained to learn a mapping from noisy features to a time-frequency representation of the target of interest. Traditionally, the ideal binary mask (IBM) is used as the target because of its simplicity and large speech intelligibility gains. The supervised learning framework, however, is not restricted to the use of binary targets. In this study, we evaluate and compare separation results by using different training targets, including the IBM, the target binary mask, the ideal ratio mask (IRM), the short-time Fourier transform spectral magnitude and its corresponding mask (FFT-MASK), and the Gammatone frequency power spectrum. Our results in various test conditions reveal that the two ratio mask targets, the IRM and the FFT-MASK, outperform the other targets in terms of objective intelligibility and quality metrics. In addition, we find that masking based targets, in general, are significantly better than spectral envelope based targets. We also present comparisons with recent methods in non-negative matrix factorization and speech enhancement, which show clear performance advantages of supervised speech separation."}
{"_id": "c1cd441dad61b9d9d294a19a7043adb1582f786b", "title": "Speech recognition using factorial hidden Markov models for separation in the feature space", "text": "This paper proposes an algorithm for the recognition and separation of speech signals in non-stationary noise, such as another speaker. We present a method to combine hidden Markov models (HMMs) trained for the speech and noise into a factorial HMM to model the mixture signal. Robustness is obtained by separating the speech and noise signals in a feature domain, which discards unnecessary information. We use mel-cepstral coefficients (MFCCs) as features, and estimate the distribution of mixture MFCCs from the distributions of the target speech and noise. A decoding algorithm is proposed for finding the state transition paths and estimating gains for the speech and noise from a mixture signal. Simulations were carried out using speech material where two speakers were mixed at various levels, and even for high noise level (9 dB above the speech level), the method produced relatively good (60% word recognition accuracy) results. Audio demonstrations are available at www.cs.tut.fi/ \u0303tuomasv."}
{"_id": "ecd4bc32bb2717c96f76dd100fcd1255a07bd656", "title": "Roles of Pre-Training and Fine-Tuning in Context-Dependent DBN-HMMs for Real-World Speech Recognition", "text": "Recently, deep learning techniques have been successfully applied to automatic speech recognition tasks -first to phonetic recognition with context-independent deep belief network (DBN) hidden Markov models (HMMs) and later to large vocabulary continuous speech recognition using context-dependent (CD) DBN-HMMs. In this paper, we report our most recent experiments designed to understand the roles of the two main phases of the DBN learning -pre-training and fine tuning -in the recognition performance of a CD-DBN-HMM based large-vocabulary speech recognizer. As expected, we show that pre-training can initialize weights to a point in the space where fine-tuning can be effective and thus is crucial in training deep structured models. However, a moderate increase of the amount of unlabeled pre-training data has an insignificant effect on the final recognition results as long as the original training size is sufficiently large to initialize the DBN weights. On the other hand, with additional labeled training data, the fine-tuning phase of DBN training can significantly improve the recognition accuracy."}
{"_id": "0b3cfbf79d50dae4a16584533227bb728e3522aa", "title": "Long Short-Term Memory", "text": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O. 1. Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."}
{"_id": "0c7d7b4c546e38a4097a97bf1d16a60012916758", "title": "The Kaldi Speech Recognition Toolkit", "text": "We describe the design of Kaldi, a free, open-source toolkit for speech recognition research. Kaldi provides a speech recognition system based on finite-state transducers (using the freely available OpenFst), together with detailed documentation and scripts for building complete recognition systems. Kaldi is written is C++, and the core library supports modeling of arbitrary phonetic-context sizes, acoustic modeling with subspace Gaussian mixture models (SGMM) as well as standard Gaussian mixture models, together with all commonly used linear and affine transforms. Kaldi is released under the Apache License v2.0, which is highly nonrestrictive, making it suitable for a wide community of users."}
{"_id": "9eb67ca57fecc691853636507e2b852de3f56fac", "title": "Analysis of the Paragraph Vector Model for Information Retrieval", "text": "Previous studies have shown that semantically meaningful representations of words and text can be acquired through neural embedding models. In particular, paragraph vector (PV) models have shown impressive performance in some natural language processing tasks by estimating a document (topic) level language model. Integrating the PV models with traditional language model approaches to retrieval, however, produces unstable performance and limited improvements. In this paper, we formally discuss three intrinsic problems of the original PV model that restrict its performance in retrieval tasks. We also describe modifications to the model that make it more suitable for the IR task, and show their impact through experiments and case studies. The three issues we address are (1) the unregulated training process of PV is vulnerable to short document over-fitting that produces length bias in the final retrieval model; (2) the corpus-based negative sampling of PV leads to a weighting scheme for words that overly suppresses the importance of frequent words; and (3) the lack of word-context information makes PV unable to capture word substitution relationships."}
{"_id": "6c7e55e3b53029296097ee07ba75b6b6a98b14e5", "title": "Intrusion detection system based on the analysis of time intervals of CAN messages for in-vehicle network", "text": "Controller Area Network (CAN) bus in the vehicles is a de facto standard for serial communication to provide an efficient, reliable and economical link between Electronic Control Units (ECU). However, CAN bus does not have enough security features to protect itself from inside or outside attacks. Intrusion Detection System (IDS) is one of the best ways to enhance the vehicle security level. Unlike the traditional IDS for network security, IDS for vehicle requires light-weight detection algorithm because of the limitations of the computing power of electronic devices reside in cars. In this paper, we propose a light-weight intrusion detection algorithm for in-vehicle network based on the analysis of time intervals of CAN messages. We captured CAN messages from the cars made by a famous manufacturer and performed three kinds of message injection attacks. As a result, we find the time interval is a meaningful feature to detect attacks in the CAN traffic. Also, our intrusion detection system detects all of message injection attacks without making false positive errors."}
{"_id": "b3ff14e4c9b939841dec4d877256e47d12817638", "title": "Automatic Gloss Finding for a Knowledge Base using Ontological Constraints", "text": "While there has been much research on automatically constructing structured Knowledge Bases (KBs), most of it has focused on generating facts to populate a KB. However, a useful KB must go beyond facts. For example, glosses (short natural language definitions) have been found to be very useful in tasks such as Word Sense Disambiguation. However, the important problem of Automatic Gloss Finding, i.e., assigning glosses to entities in an initially gloss-free KB, is relatively unexplored. We address that gap in this paper. In particular, we propose GLOFIN, a hierarchical semi-supervised learning algorithm for this problem which makes effective use of limited amounts of supervision and available ontological constraints. To the best of our knowledge, GLOFIN is the first system for this task. Through extensive experiments on real-world datasets, we demonstrate GLOFIN's effectiveness. It is encouraging to see that GLOFIN outperforms other state-of-the-art SSL algorithms, especially in low supervision settings. We also demonstrate GLOFIN's robustness to noise through experiments on a wide variety of KBs, ranging from user contributed (e.g., Freebase) to automatically constructed (e.g., NELL). To facilitate further research in this area, we have made the datasets and code used in this paper publicly available."}
{"_id": "6eb662ef35ec514429c5ba533b212a3a512c3517", "title": "Monocular-SLAM-based navigation for autonomous micro helicopters in GPS-denied environments", "text": "Autonomous micro aerial vehicles (MAVs) will soon play a major role in tasks such as search and rescue, environment monitoring, surveillance, and inspection. They allow us to easily access environments to which no humans or other vehicles can get access. This reduces the risk for both the people and the environment. For the above applications, it is, however, a requirement that the vehicle is able to navigate without using GPS, or without relying on a preexisting map, or without specific assumptions about the environment. This will allow operations in unstructured, unknown, and GPS-denied environments. We present a novel solution for the task of autonomous navigation of a micro helicopter through a completely unknown environment by using solely a single camera and inertial sensors onboard. Many existing solutions suffer from the problem of drift in the xy plane or from the dependency on a clean GPS signal. The novelty in the here-presented approach is to use a monocular simultaneous localization and mapping (SLAM) framework to stabilize the vehicle in six degrees of freedom. This way, we overcome the problem of both the drift and the GPS dependency. The pose estimated by the visual SLAM algorithm is used in a linear optimal controller that allows us to perform all basic maneuvers such as hovering, set point and trajectory following, vertical takeoff, and landing. All calculations including SLAM and controller are running in real time and online while the helicopter is flying. No offline processing or preprocessing is done. We show real experiments that demonstrate that the vehicle can fly autonomously in an unknown and unstructured environment. To the best of our knowledge, the here-presented work describes the first aerial vehicle that uses onboard monocular vision as a main sensor to navigate through an unknown GPS-denied environment and independently of any external artificial aids. C \u00a9 2011 Wiley Periodicals, Inc."}
{"_id": "0286bef6d6da7990a2c50aefd5543df2ce481fbb", "title": "Combinatorial Pure Exploration of Multi-Armed Bandits", "text": "We study the combinatorial pure exploration (CPE) problem in the stochastic multi-armed bandit setting, where a learner explores a set of arms with the objective of identifying the optimal member of a decision class, which is a collection of subsets of arms with certain combinatorial structures such as size-K subsets, matchings, spanning trees or paths, etc. The CPE problem represents a rich class of pure exploration tasks which covers not only many existing models but also novel cases where the object of interest has a nontrivial combinatorial structure. In this paper, we provide a series of results for the general CPE problem. We present general learning algorithms which work for all decision classes that admit offline maximization oracles in both fixed confidence and fixed budget settings. We prove problem-dependent upper bounds of our algorithms. Our analysis exploits the combinatorial structures of the decision classes and introduces a new analytic tool. We also establish a general problem-dependent lower bound for the CPE problem. Our results show that the proposed algorithms achieve the optimal sample complexity (within logarithmic factors) for many decision classes. In addition, applying our results back to the problems of top-K arms identification and multiple bandit best arms identification, we recover the best available upper bounds up to constant factors and partially resolve a conjecture on the lower bounds."}
{"_id": "3f6f45584d7f71e47118bdcd12826995998871d1", "title": "How to Use the SOINN Software: User's Guide (Version 1.0)", "text": "The Self-Organizing Incremental Neural Network (SOINN) is an unsupervised classifier that is capable of online incremental learning. Studies have been performed not only for improving the SOINN, but also for applying it to various problems. Furthermore, using the SOINN, more intelligent functions are achieved, such as association, reasoning, and so on. In this paper, we show how to use the SOINN software and to apply it to the above problems."}
{"_id": "185f3fcc78ea32ddbad3a5ebdaefa9504bfb3f5e", "title": "Pattern Recognition and Computer Vision", "text": "Chinese painting is distinct from other art in that the painting elements are exhibited by complex water-and-ink diffusion and shows gray, white and black visual effect. Rendering such a water-and-ink painting with polychrome style is a challenging problem. In this paper, we propose a novel style transfer method for Chinese painting. We firstly decompose the Chinese painting with adaptive patches based on its structure, and locally colorize the painting. Then, the colorized image is used for guiding the process of texture transfer that is modeled in Markov Random Field (MRF). More precisely, we improve the classic texture transfer algorithm by modifying the compatibility functions for searching the optimal matching, with the chromatism information. The experiment results show that proposed adaptive patches can well preserve the original content while match the example style. Moreover, we present the transfer results with our method and recent style transfer algorithms, in order to make a comparison."}
{"_id": "0b6aab9ce7910938e0d60c0764dc1c09d3219b05", "title": "Combining document representations for known-item search", "text": "This paper investigates the pre-conditions for successful combination of document representations formed from structural markup for the task of known-item search. As this task is very similar to work in meta-search and data fusion, we adapt several hypotheses from those research areas and investigate them in this context. To investigate these hypotheses, we present a mixture-based language model and also examine many of the current meta-search algorithms. We find that compatible output from systems is important for successful combination of document representations. We also demonstrate that combining low performing document representations can improve performance, but not consistently. We find that the techniques best suited for this task are robust to the inclusion of poorly performing document representations. We also explore the role of variance of results across systems and its impact on the performance of fusion, with the surprising result that the correct documents have higher variance across document representations than highly ranking incorrect documents."}
{"_id": "a57623e6f0de3775513b436510b2d6cd9343dc5f", "title": "Text Segmentation with Topic Modeling and Entity Coherence", "text": "This paper describes a system which uses entity and topic coherence for improved Text Segmentation (TS) accuracy. First, Linear Dirichlet Allocation (LDA) algorithm was used to obtain topics for sentences in the document. We then performed entity mapping across a window in order to discover the transition of entities within sentences. We used the information obtained to support our LDA-based boundary detection for proper boundary adjustment. We report the significance of the entity coherence approach as well as the superiority of our algorithm over existing works."}
{"_id": "cd472598052666440b8063e7259b35b78a45d757", "title": "Factors Affecting Online Impulse Buying : Evidence from Chinese Social Commerce Environment", "text": "First, the purpose of this study is to examine the impact of situational variables, scarcity and serendipity, on online impulse buying (OIB) in Chinese social commerce (SC) environment. Second, the study further assesses the moderating role of five dimensions of hedonic shopping value. Data were gathered from 671 online shoppers who come from two metropolitan cities of China, Beijing, and Shanghai. Structure equation modeling utilized was generated by AMOS 23 version to test the study hypotheses. The results confirm that situational factors positively influence the online impulse buying among Chinese online shoppers in SC environment. Four dimensions of hedonic shopping value (social shopping, relaxation shopping, adventure shopping and idea shopping) positively moderate the relationship between serendipity and OIB; value shopping is insignificant with moderation effect. The finding is helpful to the online retailers and SC web developers by recommending them to take the scarcity and serendipity in their consideration. These factors have the potential to motivate the consumers to initiate the hedonic shopping aptitude to urge to buy impulsively. Unlike the previous work which remained unsuccessful in incorporating all factors into one study, this study has incorporated irrational and unplanned consumption along with rational and planned one in the same research."}
{"_id": "75225142acc421a15cb1cd5b633f6de5fc036586", "title": "Short Tamil sentence similarity calculation using knowledge-based and corpus-based similarity measures", "text": "Sentence similarity calculation plays an important role in text processing-related research. Many unsupervised techniques such as knowledge-based techniques, corpus-based techniques, string similarity based techniques, and graph alignment techniques are available to measure sentence similarity. However, none of these techniques have been experimented with Tamil. In this paper, we present the first-ever system to measure semantic similarity for Tamil short phrases using a hybrid approach that makes use of knowledge-based and corpus-based techniques. We tested this system with 2000 general sentence pairs and 100 mathematical sentence pairs. For the dataset of 2000 sentence pairs, this approach achieved a Mean Squared Error of 0.195 and a Pearson Correlation factor of 0.815. For the 100 mathematical sentence pairs, this approach achieved an 85% of accuracy."}
{"_id": "b8759b1ea437802d9a1c2a99d22932a960b7beec", "title": "Firewall Security: Policies, Testing and Performance Evaluation", "text": "This paper explores the firewall security and performance relationship for distributed systems. Experiments are conducted to set firewall security into seven different levels and to quantify their performance impacts. These firewall security levels are formulated, designed, implemented, and tested phase by phase under an experimental environment in which all performed tests are evaluated and compared. Based on the test results, the impacts of the various firewall security levels on system performance with respect to transaction time and latency are measured and analyzed. It is interesting to note that the intuitive belief about security to performance, i.e. the more security would result in less performance, does not always hold in the firewall testing. The results reveal that the significant impact from enhanced security on performance could only be observed under some particular scenarios and thus their relationships are not necessarily inversely related. We also discuss the tradeoff between security and performance."}
{"_id": "4df321947a2ac4365584a01d78a780913b171cf5", "title": "Datasets for Aspect-Based Sentiment Analysis in French", "text": "Aspect Based Sentiment Analysis (ABSA) is the task of mining and summarizing opinions from text about specific entities and their aspects. This article describes two datasets for the development and testing of ABSA systems for French which comprise user reviews annotated with relevant entities, aspects and polarity values. The first dataset contains 457 restaurant reviews (2365 sentences) for training and testing ABSA systems, while the second contains 162 museum reviews (655 sentences) dedicated to out-of-domain evaluation. Both datasets were built as part of SemEval-2016 Task 5 \u201cAspect-Based Sentiment Analysis\u201d where seven different languages were represented, and are publicly available for research purposes. This article provides examples and statistics by annotation type, summarizes the annotation guidelines and discusses their cross-lingual applicability. It also explains how the data was used for evaluation in the SemEval ABSA task and briefly presents the results obtained for French."}
{"_id": "dd8da9a4a0a5ef1f19ca71caf5eba11192dd2c41", "title": "A survey of domain adaptation for statistical machine translation", "text": "Differences in domains of language use between training data and test data have often been reported to result in performance degradation for phrase-based machine translation models. Throughout the past decade or so, a large body of work aimed at exploring domain-adaptation methods to improve system performance in the face of such domain differences. This paper provides a systematic survey of domain-adaptation methods for phrase-based machine-translation systems. The survey starts out with outlining the sources of errors in various components of phrase-based models due to domain change, including lexical selection, reordering and optimization. Subsequently, it outlines the different research lines to domain adaptation in the literature, and surveys the existing work within these research lines, discussing how these approaches differ and how they relate to each other."}
{"_id": "5fbe9d4e616632972e86c31fbb4b1dff4897e59e", "title": "Design of adaptive hypermedia learning systems: A cognitive style approach", "text": "In the past decade, a number of adaptive hypermedia learning systems have been developed. However, most of these systems tailor presentation content and navigational support solely according to students\u2019 prior knowledge. On the other hand, previous research suggested that cognitive styles significantly affect student learning because they refer to how learners process and organize information. To this end, the study presented in this paper developed an adaptive hypermedia learning system tailored to students\u2019 cognitive styles, with an emphasis on Pask\u2019s Holist\u2013Serialist dimension. How students react to this adaptive hypermedia learning system, including both learning performance and perceptions, was examined in this study. Forty-four undergraduate and postgraduate students participated in the study. The findings indicated that, in general, adapting to cognitive styles improves student learning. The results also showed that the adaptive hypermedia learning system have more effects on students\u2019 perceptions than performance. The implications of these results for the design of adaptive hypermedia learning systems are discussed. 2010 Elsevier Ltd. All rights reserved."}
{"_id": "242b5b545bb17879a73161134bc84d5ba3e3cf35", "title": "Memory resource management in VMware ESX server", "text": "VMware ESX Server is a thin software layer designed to multiplex hardware resources efficiently among virtual machines running unmodified commodity operating systems. This paper introduces several novel ESX Server mechanisms and policies for managing memory. A ballooning technique reclaims the pages considered least valuable by the operating system running in a virtual machine. An idle memory tax achieves efficient memory utilization while maintaining performance isolation guarantees. Content-based page sharing and hot I/O page remapping exploit transparent page remapping to eliminate redundancy and reduce copying overheads. These techniques are combined to efficiently support virtual machine workloads that overcommit memory."}
{"_id": "596785ca2d338ebcdeac1fc29bf5357045574b2b", "title": "An architectural pattern language of cloud-based applications", "text": "The properties of clouds -- elasticity, pay-per-use, and standardization of the runtime infrastructure -- enable cloud providers and users alike to benefit from economies of scale, faster provisioning times, and reduced runtime costs. However, to achieve these benefits, application architects and developers have to respect the characteristics of the cloud environment.\n To reduce the complexity of cloud application architectures, we propose a pattern-based approach for cloud application design and development. We defined a pattern format to describe the principles of cloud computing, available cloud offerings, and cloud application architectures. Based on this format we developed an architectural pattern language of cloud-based applications: through interrelation of patterns for cloud offering descriptions and cloud application architectures, developers are guided during the identification of cloud environments and architecture patterns applicable to their problems. We cover the proceeding how we identified patterns in various information sources and existing productively used applications, give an overview of previously discovered patterns, and introduce one new pattern. Further, we propose a framework for the organizations of patterns and the guidance of developers during pattern instantiation."}
{"_id": "c70ad19c90491e2de8de686b6a49f9bbe44692c0", "title": "Seeing with Humans: Gaze-Assisted Neural Image Captioning", "text": "Gaze reflects how humans process visual scenes and is therefore increasingly used in computer vision systems. Previous works demonstrated the potential of gaze for object-centric tasks, such as object localization and recognition, but it remains unclear if gaze can also be beneficial for scene-centric tasks, such as image captioning. We present a new perspective on gaze-assisted image captioning by studying the interplay between human gaze and the attention mechanism of deep neural networks. Using a public large-scale gaze dataset, we first assess the relationship between state-of-the-art object and scene recognition models, bottom-up visual saliency, and human gaze. We then propose a novel split attention model for image captioning. Our model integrates human gaze information into an attention-based long short-term memory architecture, and allows the algorithm to allocate attention selectively to both fixated and non-fixated image regions. Through evaluation on the COCO/SALICON datasets we show that our method improves image captioning performance and that gaze can complement machine attention for semantic scene understanding tasks."}
{"_id": "0d836a0461e9c21fa7a25622115de55b81ceb446", "title": "Estimation of Presentations Skills Based on Slides and Audio Features", "text": "This paper proposes a simple estimation of the quality of student oral presentations. It is based on the study and analysis of features extracted from the audio and digital slides of 448 presentations. The main goal of this work is to automatically predict the values assigned by professors to different criteria in a presentation evaluation rubric. Machine Learning methods were used to create several models that classify students in two clusters: high and low performers. The models created from slide features were accurate up to 65%. The most relevant features for the slide-base models were: number of words, images, and tables, and the maximum font size. The audio-based models reached up to 69% of accuracy, with pitch and filled pauses related features being the most significant. The relatively high degrees of accuracy obtained with these very simple features encourage the development of automatic estimation tools for improving presentation skills."}
{"_id": "1627ce7f1429366829df3d49e28b8ecd7f7597b5", "title": "Diplomat: Using Delegations to Protect Community Repositories", "text": "Community repositories, such as Docker Hub, PyPI, and RubyGems, are bustling marketplaces that distribute software. Even though these repositories use common software signing techniques (e.g., GPG and TLS), attackers can still publish malicious packages after a server compromise. This is mainly because a community repository must have immediate access to signing keys in order to certify the large number of new projects that are registered each day. This work demonstrates that community repositories can offer compromise-resilience and real-time project registration by employing mechanisms that disambiguate trust delegations. This is done through two delegation mechanisms that provide flexibility in the amount of trust assigned to different keys. Using this idea we implement Diplomat, a software update framework that supports security models with different security / usability tradeoffs. By leveraging Diplomat, a community repository can achieve near-perfect compromise-resilience while allowing real-time project registration. For example, when Diplomat is deployed and configured to maximize security on Python\u2019s community repository, less than 1% of users will be at risk even if an attacker controls the repository and is undetected for a month. Diplomat is being integrated by Ruby, CoreOS, Haskell, OCaml, and Python, and has already been deployed by Flynn, LEAP, and Docker."}
{"_id": "69613390ca76bf103791ef251e1568deb5fe91dd", "title": "Satellite Image Classification Methods and Techniques: A Review", "text": "Satellite image classification process involves grouping the image pixel values into meaningful categories. Several satellite image classification methods and techniques are available. Satellite image classification methods can be broadly classified into three categories 1) automatic 2) manual and 3) hybrid. All three methods have their own advantages and disadvantages. Majority of the satellite image classification methods fall under first category. Satellite image classification needs selection of appropriate classification method based on the requirements. The current research work is a study on satellite image classification methods and techniques. The research work also compares various researcher's comparative results on satellite image classification methods."}
{"_id": "456f85fb61fa5f137431e6d12c5fc73cc2ebaced", "title": "Biometric Gait Authentication Using Accelerometer Sensor", "text": "This paper presents a biometric user authentication based on a person\u2019s gait. Unlike most previous gait recognition approaches, which are based on machine vision techniques, in our approach gait patterns are extracted from a physical device attached to the lower leg. From the output of the device accelerations in three directions: vertical, forward-backward, and sideways motion of the lower leg are obtained. A combination of these accelerations is used for authentication. Applying two different methods, histogram similarity and cycle length, equal error rates (EER) of 5% and 9% were achieved, respectively."}
{"_id": "4e116ae01d873ad67fb2ab6da5cb4feeb24bbcb5", "title": "Decision making regarding Smith-Petersen vs. pedicle subtraction osteotomy vs. vertebral column resection for spinal deformity.", "text": "STUDY DESIGN\nAuthor experience and literature review.\n\n\nOBJECTIVES\nTo investigate and discuss decision-making on when to perform a Smith-Petersen osteotomy as opposed to a pedicle subtraction procedure and/or a vertebral column resection.\n\n\nSUMMARY OF BACKGROUND DATA\nArticles have been published regarding Smith-Petersen osteotomies, pedicle subtraction procedures, and vertebral column resections. Expectations and complications have been reviewed. However, decision-making regarding which of the 3 procedures is most useful for a particular spinal deformity case is not clearly investigated.\n\n\nMETHODS\nDiscussed in this manuscript is the author's experience and the literature regarding the operative options for a fixed coronal or sagittal deformity.\n\n\nRESULTS\nThere are roles for Smith-Petersen osteotomy, pedicle subtraction, and vertebral column resection. Each has specific applications and potential complications.\n\n\nCONCLUSION\nAs the magnitude of resection increases, the ability to correct deformity improves, but also the risk of complication increases. Therein, an understanding of potential applications and complications is helpful."}
{"_id": "80ca5505a9ba00e91283519a75e01840a15a74bf", "title": "dCompaction: Delayed Compaction for the LSM-Tree", "text": "Key-value (KV) stores have become a backbone of large-scale applications in today\u2019s data centers. Write-optimized data structures like the Log-Structured Merge-tree (LSM-tree) and their variants are widely used in KV storage systems like BigTable and RocksDB. Conventional LSM-tree organizes KV items into multiple, successively larger components, and uses compaction to push KV items from one smaller component to another adjacent larger component until the KV items reach the largest component. Unfortunately, current compaction scheme incurs significant write amplification due to repeated KV item reads and writes, and then results in poor throughput. We propose a new compaction scheme, delayed compaction (dCompaction), that decreases write amplification. dCompaction postpones some compactions and gather them into the following compaction. In this way, it avoids KV item reads and writes during compaction, and consequently improves the throughput of LSM-tree based KV stores. We implement dCompaction on RocksDB, and conduct extensive experiments. Validation using YCSB framework shows that compared with RocksDB dCompaction has about 30% write performance improvements and also comparable read performance."}
{"_id": "131e4a4a40c29737f39e8cb0f4e59864ca1a1b34", "title": "LaneQuest: An accurate and energy-efficient lane detection system", "text": "Current outdoor localization techniques fail to provide the required accuracy for estimating the car's lane. In this paper, we present LaneQuest: a system that leverages the ubiquitous and low-energy inertial sensors available in commodity smart-phones to provide an accurate estimate of the car's current lane. LaneQuest leverages hints from the phone sensors about the surrounding environment to detect the car's lane. For example, a car making a right turn most probably will be in the right-most lane, a car passing by a pothole will be in a specific lane, and the car's angular velocity when driving through a curve reflects its lane. Our investigation shows that there are amble opportunities in the environment, i.e. lane \u201canchors\u201d, that provide cues about the car's lane. To handle the ambiguous location, sensors noise, and fuzzy lane anchors; LaneQuest employs a novel probabilistic lane estimation algorithm. Furthermore, it uses an unsupervised crowd-sourcing approach to learn the position and lane-span distribution of the different lane-level anchors. Our evaluation results from implementation on different android devices and 260Km driving traces by 13 drivers in different cities shows that LaneQuest can detect the different lane-level anchors with an average precision and recall of more than 90%. This leads to an accurate detection of the exact car's lane position 80% of the time, increasing to 89% of the time to within one lane. This comes with a low-energy footprint, allowing LaneQuest to be implemented on the energy-constrained mobile devices."}
{"_id": "3cbf0dcbc36a8f70e9b2b2f46b16e5057cbd9a7d", "title": "Meaningful Clustering of Senses Helps Boost Word Sense Disambiguation Performance", "text": "Fine-grained sense distinctions are one of the major obstacles to successful Word Sense Disambiguation. In this paper, we present a method for reducing the granularity of the WordNet sense inventory based on the mapping to a manually crafted dictionary encoding sense hierarchies, namely the Oxford Dictionary of English. We assess the quality of the mapping and the induced clustering, and evaluate the performance of coarse WSD systems in the Senseval-3 English all-words task."}
{"_id": "528fa9bb03644ba752fb9491be49b9dd1bce1d52", "title": "SemEval-2012 Task 6: A Pilot on Semantic Textual Similarity", "text": "Semantic Textual Similarity (STS) measures the degree of semantic equivalence between two texts. This paper presents the results of the STS pilot task in Semeval. The training data contained 2000 sentence pairs from previously existing paraphrase datasets and machine translation evaluation resources. The test data also comprised 2000 sentences pairs for those datasets, plus two surprise datasets with 400 pairs from a different machine translation evaluation corpus and 750 pairs from a lexical resource mapping exercise. The similarity of pairs of sentences was rated on a 0-5 scale (low to high similarity) by human judges using Amazon Mechanical Turk, with high Pearson correlation scores, around 90%. 35 teams participated in the task, submitting 88 runs. The best results scored a Pearson correlation>80%, well above a simple lexical baseline that only scored a 31% correlation. This pilot task opens an exciting way ahead, although there are still open issues, specially the evaluation metric."}
{"_id": "d87ceda3042f781c341ac17109d1e94a717f5f60", "title": "WordNet : an electronic lexical database", "text": "WordNet is perhaps the most important and widely used lexical resource for natural language processing systems up to now. WordNet: An Electronic Lexical Database, edited by Christiane Fellbaum, discusses the design of WordNet from both theoretical and historical perspectives, provides an up-to-date description of the lexical database, and presents a set of applications of WordNet. The book contains a foreword by George Miller, an introduction by Christiane Fellbaum, seven chapters from the Cognitive Sciences Laboratory of Princeton University, where WordNet was produced, and nine chapters contributed by scientists from elsewhere. Miller's foreword offers a fascinating account of the history of WordNet. He discusses the presuppositions of such a lexical database, how the top-level noun categories were determined, and the sources of the words in WordNet. He also writes about the evolution of WordNet from its original incarnation as a dictionary browser to a broad-coverage lexicon, and the involvement of different people during its various stages of development over a decade. It makes very interesting reading for casual and serious users of WordNet and anyone who is grateful for the existence of WordNet. The book is organized in three parts. Part I is about WordNet itself and consists of four chapters: \"Nouns in WordNet\" by George Miller, \"Modifiers in WordNet\" by Katherine Miller, \"A semantic network of English verbs\" by Christiane Fellbaum, and \"Design and implementation of the WordNet lexical database and search software\" by Randee Tengi. These chapters are essentially updated versions of four papers from Miller (1990). Compared with the earlier papers, the chapters in this book focus more on the underlying assumptions and rationales behind the design decisions. The description of the information contained in WordNet, however, is not as detailed as in Miller (1990). The main new additions in these chapters include an explanation of sense grouping in George Miller's chapter, a section about adverbs in Katherine Miller's chapter, observations about autohyponymy (one sense of a word being a hyponym of another sense of the same word) and autoantonymy (one sense of a word being an antonym of another sense of the same word) in Fellbaum's chapter, and Tengi's description of the Grinder, a program that converts the files the lexicographers work with to searchable lexical databases. The three papers in Part II are characterized as \"extensions, enhancements and"}
{"_id": "1528def1ddbd2deb261ebb873479f27f48251031", "title": "Clustering WordNet word senses", "text": "This paper presents the results of a set of methods to cluster WordNet word senses. The methods rely on different information sources: confusion matrixes from Senseval-2 Word Sense Disambiguation systems, translation similarities, hand-tagged examples of the target word senses and examples obtained automatically from the web for the target word senses. The clustering results have been evaluated using the coarsegrained word senses provided for the lexical sample in Senseval-2. We have used Cluto, a general clustering environment, in order to test different clustering algorithms. The best results are obtained for the automatically obtained examples, yielding purity values up to 84% on average over 20 nouns."}
{"_id": "2445089d4277ccbec3727fecfe73eaa4cc57e414", "title": "(Meta-) Evaluation of Machine Translation", "text": "This paper evaluates the translation quality of machine translation systems for 8 language pairs: translating French, German, Spanish, and Czech to English and back. We carried out an extensive human evaluation which allowed us not only to rank the different MT systems, but also to perform higher-level analysis of the evaluation process. We measured timing and intraand inter-annotator agreement for three types of subjective evaluation. We measured the correlation of automatic evaluation metrics with human judgments. This meta-evaluation reveals surprising facts about the most commonly used methodologies."}
{"_id": "3764e0bfcc4a196eb020d483d8c2f1822206a444", "title": "LSTM-Based Hierarchical Denoising Network for Android Malware Detection", "text": "Mobile security is an important issue on Android platform. Most malware detection methods based on machine learning models heavily rely on expert knowledge for manual feature engineering, which are still difficult to fully describe malwares. In this paper, we present LSTM-based hierarchical denoise network (HDN), a novel static Android malware detection method which uses LSTM to directly learn from the raw opcode sequences extracted from decompiled Android files. However, most opcode sequences are too long for LSTM to train due to the gradient vanishing problem. Hence, HDN uses a hierarchical structure, whose first-level LSTM parallelly computes on opcode subsequences (we called them method blocks) to learn the dense representations; then the secondlevel LSTM can learn and detect malware through method block sequences. Considering that malicious behavior only appears in partial sequence segments, HDN uses method block denoise module (MBDM) for data denoising by adaptive gradient scaling strategy based on loss cache. We evaluate and compare HDN with the latest mainstream researches on three datasets. The results show that HDN outperforms these Android malware detection methods,and it is able to capture longer sequence features and has better detection efficiency thanN-gram-based malware detection which is similar to our method."}
{"_id": "41bb7d4546fbc95d55342b621f95edad3b06717e", "title": "A Study on False Channel Condition Reporting Attacks in Wireless Networks", "text": "Wireless networking protocols are increasingly being designed to exploit a user's measured channel condition; we call such protocols channel-aware. Each user reports the measured channel condition to a manager of wireless resources and a channel-aware protocol uses these reports to determine how resources are allocated to users. In a channel-aware protocol, each user's reported channel condition affects the performance of every other user. The deployment of channel-aware protocols increases the risks posed by false channel-condition feedback. In this paper, we study what happens in the presence of an attacker that falsely reports its channel condition. We perform case studies on channel-aware network protocols to understand how an attack can use false feedback and how much the attack can affect network performance. The results of the case studies show that we need a secure channel condition estimation algorithm to fundamentally defend against the channel-condition misreporting attack. We design such an algorithm and evaluate our algorithm through analysis and simulation. Our evaluation quantifies the effect of our algorithm on system performance as well as the security and the performance of our algorithm."}
{"_id": "fe025433b702bf6e946610e0dba77f7dd16ae821", "title": "Extreme-angle broadband metamaterial lens.", "text": "For centuries, the conventional approach to lens design has been to grind the surfaces of a uniform material in such a manner as to sculpt the paths that rays of light follow as they transit through the interfaces. Refractive lenses formed by this procedure of bending the surfaces can be of extremely high quality, but are nevertheless limited by geometrical and wave aberrations that are inherent to the manner in which light refracts at the interface between two materials. Conceptually, a more natural--but usually less convenient--approach to lens design would be to vary the refractive index throughout an entire volume of space. In this manner, far greater control can be achieved over the ray trajectories. Here, we demonstrate how powerful emerging techniques in the field of transformation optics can be used to harness the flexibility of gradient index materials for imaging applications. In particular we design and experimentally demonstrate a lens that is broadband (more than a full decade bandwidth), has a field-of-view approaching 180 degrees and zero f-number. Measurements on a metamaterial implementation of the lens illustrate the practicality of transformation optics to achieve a new class of optical devices."}
{"_id": "9b9da80c186d8f6e7fa35747a6543d78e36f17e8", "title": "HMOG: New Behavioral Biometric Features for Continuous Authentication of Smartphone Users", "text": "We introduce hand movement, orientation, and grasp (HMOG), a set of behavioral features to continuously authenticate smartphone users. HMOG features unobtrusively capture subtle micro-movement and orientation dynamics resulting from how a user grasps, holds, and taps on the smartphone. We evaluated authentication and biometric key generation (BKG) performance of HMOG features on data collected from 100 subjects typing on a virtual keyboard. Data were collected under two conditions: 1) sitting and 2) walking. We achieved authentication equal error rates (EERs) as low as 7.16% (walking) and 10.05% (sitting) when we combined HMOG, tap, and keystroke features. We performed experiments to investigate why HMOG features perform well during walking. Our results suggest that this is due to the ability of HMOG features to capture distinctive body movements caused by walking, in addition to the hand-movement dynamics from taps. With BKG, we achieved the EERs of 15.1% using HMOG combined with taps. In comparison, BKG using tap, key hold, and swipe features had EERs between 25.7% and 34.2%. We also analyzed the energy consumption of HMOG feature extraction and computation. Our analysis shows that HMOG features extracted at a 16-Hz sensor sampling rate incurred a minor overhead of 7.9% without sacrificing authentication accuracy. Two points distinguish our work from current literature: 1) we present the results of a comprehensive evaluation of three types of features (HMOG, keystroke, and tap) and their combinations under the same experimental conditions and 2) we analyze the features from three perspectives (authentication, BKG, and energy consumption on smartphones)."}
{"_id": "4e0e664450094cc786898a5e1ef3727135ecfcd8", "title": "Accurate nonrigid 3D human body surface reconstruction using commodity depth sensors", "text": "1Department of Computer Science, School of Engineering and Applied Science, Institute for Computer Graphics, The George Washington University, 800 22nd Street NW Suite 3400, Washington, DC 20052, USA 2Department of Epidemiology and Biostatistics, Milken Institute School of Public Health, The George Washington University, 800 22nd Street NW Suite 7680, Washington, DC 20052, USA 3Department of Computer Science, School of Engineering and Applied Science, and Department of Pediatrics, School of Medicine and Health Sciences, Institute for Computer Graphics, The George Washington University, 800 22nd Street NW Suite 5830, Washington, DC 20052, USA"}
{"_id": "de0597313056b05fd7dd6b2d5e031cfb96564920", "title": "Transductive Adversarial Networks (TAN)", "text": "Transductive Adversarial Networks (TAN) is a novel domain-adaptation machine learning framework that is designed for learning a conditional probability distribution on unlabelled input data in a target domain, while also only having access to: (1) easily obtained labelled data from a related source domain, which may have a different conditional probability distribution than the target domain, and (2) a marginalised prior distribution on the labels for the target domain. TAN leverages a fully adversarial training procedure and a unique generator/encoder architecture which approximates the transductive combination of the available sourceand target-domain data. A benefit of TAN is that it allows the distance between the sourceand target-domain label-vector marginal probability distributions to be greater than 0 (i.e. different tasks across the source and target domains) whereas other domain-adaptation algorithms require this distance to equal 0 (i.e. a single task across the source and target domains). TAN can, however, still handle the latter case and is a more generalised approach to this case. Another benefit of TAN is that due to being a fully adversarial algorithm, it has the potential to accurately approximate highly complex distributions. Theoretical analysis demonstrates the viability of the TAN framework."}
{"_id": "620c1821f67fd39051fe0863567ac702ce27a72a", "title": "Modeling, simulation, and development of a robotic dolphin prototype", "text": "Abilities of sea animals and efficiency of fish swimming are a few of the impressive solutions of nature. In this paper, design, modeling, simulation and development studies of a robotic dolphin prototype, entirely inspired by the bottlenose dolphin (Tursiops truncatus), are presented. The first section focuses on the design principles and core design features of the prototype. In the second section, modeling and simulation studies which consist of hydrodynamics, kinematics and dynamical analysis of the robotic dolphin are presented. Dynamical simulations of the underwater behavior of the prototype are included in this section. The third section focuses on the general prototype development from mechanical construction to control system structure. Finally in the last section, experimental results obtained through the development of the prototype are discussed."}
{"_id": "7da851a9d67b6c8ee7e985928f91ee577c529f2e", "title": "An Empirical Comparison of Topics in Twitter and Traditional Media", "text": "Twitter as a new form of social media can potentially contain much useful information, but content analysis on Twitter has not been well studied. In particular, it is not clear whether as an information source Twitter can be simply regarded as a faster news feed that covers mostly the same information as traditional news media. In This paper we empirically compare the content of Twitter with a traditional news medium, New York Times, using unsupervised topic modeling. We use a Twitter-LDA model to discover topics from a representative sample of the entire Twitter. We then use text mining techniques to compare these Twitter topics with topics from New York Times, taking into consideration of topic categories and types. We find that although Twitter and New York Times cover similar categories and types of topics, the distributions of topic categories and types are quite different. Furthermore, there are Twitter-specific topics and NYT-specific topics, and they tend to belong to certain topic categories and types. We also study the relation between the proportions of opinionated tweets and retweets and topic categories and types, and find some interesting dependence. To the best of our knowledge, ours is the first comprehensive empirical comparison between Twitter and traditional news media."}
{"_id": "0b3983a6ad65f7f480d63d5a8d3e9f5c9c57e06a", "title": "Binary Rewriting of an Operating System Kernel \u2217", "text": "This paper deals with some of the issues that arise in the context of binary rewriting and instrumentation of an operatin g system kernel. OS kernels are very different from ordinary application code in many ways, e.g., they contain a significant amount of hand-written assembly code. Binary rewriting is an attractive approach for processing OS kernel code for several reasons, e.g., it provides a uniform way to handl e heterogeneity in code due to a combination of source code, assembly code and legacy code such as in device drivers. However, because of the many differences between ordinary application code and OS kernel code, binary rewriting techniques that work for application code do not always carry over directly to kernel code. This paper describes some of the issues that arise in this context, and the approaches we have taken to address them. A key goal when developing our system was to deal in a systematic manner with the various peculiarities seen in low-level systems code, and reason about the safety and correctness of code transformation s, without requiring significant deviations from the regular d evelopmental path. For example, a precondition we assumed was that no compiler or linker modifications should be required to use it and the tool should be able to process kernel binaries in the same way as it does ordinary applications."}
{"_id": "c4cfdcf19705f9095fb60fb2e569a9253a475f11", "title": "Towards Context-Aware Interaction Recognition for Visual Relationship Detection", "text": "Recognizing how objects interact with each other is a crucial task in visual recognition. If we define the context of the interaction to be the objects involved, then most current methods can be categorized as either: (i) training a single classifier on the combination of the interaction and its context; or (ii) aiming to recognize the interaction independently of its explicit context. Both methods suffer limitations: the former scales poorly with the number of combinations and fails to generalize to unseen combinations, while the latter often leads to poor interaction recognition performance due to the difficulty of designing a contextindependent interaction classifier.,,To mitigate those drawbacks, this paper proposes an alternative, context-aware interaction recognition framework. The key to our method is to explicitly construct an interaction classifier which combines the context, and the interaction. The context is encoded via word2vec into a semantic space, and is used to derive a classification result for the interaction. The proposed method still builds one classifier for one interaction (as per type (ii) above), but the classifier built is adaptive to context via weights which are context dependent. The benefit of using the semantic space is that it naturally leads to zero-shot generalizations in which semantically similar contexts (subject-object pairs) can be recognized as suitable contexts for an interaction, even if they were not observed in the training set. Our method also scales with the number of interaction-context pairs since our model parameters do not increase with the number of interactions. Thus our method avoids the limitation of both approaches. We demonstrate experimentally that the proposed framework leads to improved performance for all investigated interaction representations and datasets."}
{"_id": "39cc0e6c1b85d052c998a1c5949fe51baa96f0c5", "title": "Effective resource management for enhancing performance of 2D and 3D stencils on GPUs", "text": "GPUs are an attractive target for data parallel stencil computations prevalent in scientific computing and image processing applications. Many tiling schemes, such as overlapped tiling and split tiling, have been proposed in past to improve the performance of stencil computations. While effective for 2D stencils, these techniques do not achieve the desired improvements for 3D stencils due to the hardware constraints of GPU.\n A major challenge in optimizing stencil computations is to effectively utilize all resources available on the GPU. In this paper we develop a tiling strategy that makes better use of resources like shared memory and register file available on the hardware. We present a systematic methodology to reason about which strategy should be employed for a given stencil and also discuss implementation choices that have a significant effect on the achieved performance. Applying these techniques to various 2D and 3D stencils gives a performance improvement of 200-400% over existing tools that target such computations."}
{"_id": "a98242e420179d2a080069f5a02c6603fb5cfe3d", "title": "Zero-Current Switching Switched-Capacitor Zero-Voltage-Gap Automatic Equalization System for Series Battery String", "text": "The series battery string or supercapacitor string automatic equalization system based on quasi-resonant switched-capacitor converter is presented in this paper. It realizes the zero-voltage gap between cells and allows maximum energy recovery in a series battery system or supercapacitor system. It not only inherits the advantage of conventional switched-capacitor battery cell balancing system, but also overcomes the drawback of conduction loss, switching loss, and finite voltage difference among battery cells. All switches are MOSFET and controlled by just a pair of complementary signals in synchronous trigger pattern and the resonant tanks operate alternatively between the two states of charging and discharging. Zero-current switching and zero-voltage gap are achieved in this paper. Different resonant tank designs can meet the needs of different balancing time to meet the needs of different energy storage devices. Experimental results indicate that the efficiency of the system is high exceeding 98%. The system is very suitable for balancing used in battery management system."}
{"_id": "dcaeb29ad3307e2bdab2218416c81cb0c4e548b2", "title": "End-to-end Speech Recognition Using Lattice-free MMI", "text": "We present our work on end-to-end training of acoustic models using the lattice-free maximum mutual information (LF-MMI) objective function in the context of hidden Markov models. By end-to-end training, we mean flat-start training of a single DNN in one stage without using any previously trained models, forced alignments, or building state-tying decision trees. We use full biphones to enable context-dependent modeling without trees, and show that our end-to-end LF-MMI approach can achieve comparable results to regular LF-MMI on well-known large vocabulary tasks. We also compare with other end-to-end methods such as CTC in character-based and lexicon-free settings and show 5 to 25 percent relative reduction in word error rates on different large vocabulary tasks while using significantly smaller models."}
{"_id": "6b4593128ddfcbe006b51de0549596f24e724ff0", "title": "Counting of cigarettes in cigarette packets using LabVIEW", "text": "The proposed work presents a technique for an automated count of cigarette in cigarette packets. Proposed work is based on application of image processing techniques in LabVIEW platform. An objective of the proposed work is to count the number of cigarettes in a packet. National Instrument's Smart camera is used to capture images of cigarette packets moving in packaging line and process the data to fulfill the above objective. The technique was subjected to offline testing on more than 50 number of cigarette packets and the results obtained are found to be satisfactory in all cases."}
{"_id": "5d7e3fb23f2a14ffdab31f18051c9b8ff573db4e", "title": "LQR, double-PID and pole placement stabilization and tracking control of single link inverted pendulum", "text": "This paper presents the dynamic behaviour of a nonlinear single link inverted pendulum-on-cart system based on Lagrange Equation. The nonlinear model linearization was presented based on Taylor series approximation. LQR, double-PID and simple pole placement control techniques were proposed for upright stabilization and tracking controls of the system. Simulations results for the various control techniques subjected to a unity magnitude pulse input torque with and without disturbance were compared. The performances of the proposed controllers were investigated based on response time specifications and level of disturbance rejection. Thus, the performance of LQR is more reliable and satisfactory. Finally, future work suggestions were made."}
{"_id": "15fc8ce6630616cce1681f049391bdb4e186192b", "title": "Defect Detection in Textures through the Use of Entropy as a Means for Automatically Selecting the Wavelet Decomposition Level", "text": "This paper presents a robust method for defect detection in textures, entropy-based automatic selection of the wavelet decomposition level (EADL), based on a wavelet reconstruction scheme, for detecting defects in a wide variety of structural and statistical textures. Two main features are presented. One of the new features is an original use of the normalized absolute function value (NABS) calculated from the wavelet coefficients derived at various different decomposition levels in order to identify textures where the defect can be isolated by eliminating the texture pattern in the first decomposition level. The second is the use of Shannon's entropy, calculated over detail subimages, for automatic selection of the band for image reconstruction, which, unlike other techniques, such as those based on the co-occurrence matrix or on energy calculation, provides a lower decomposition level, thus avoiding excessive degradation of the image, allowing a more accurate defect segmentation. A metric analysis of the results of the proposed method with nine different thresholding algorithms determined that selecting the appropriate thresholding method is important to achieve optimum performance in defect detection. As a consequence, several different thresholding algorithms depending on the type of texture are proposed."}
{"_id": "29fbadeb12389a2fee4f0739cfc62c0ea9399de1", "title": "The influence of futures markets on real time price stabilization in electricity markets", "text": "Markets can interact with power systems in ways that can render an otherwise stable market and an otherwise stable power system into an unstable overall system. This unstable system will be characterized not only by fluctuating prices that do not settle to constant values, but, more worrisome, it creates the possibility of inducing slow electromechanical oscillations if left unchecked. This will tend to happen as a result of \"price chasing\" on the part of suppliers that can react (and over-react) to changing system prices. This paper examines the role that futures markets may have on the clearing prices and on altering the volatility and potential instability of real time prices and generator output."}
{"_id": "ecfc4792299f58390d85753b60ee227b6282ccbc", "title": "Ubiquitous enterprise service adaptations based on contextual user behavior", "text": "Recent advances in mobile technologies and infrastructures have created the demand for ubiquitous access to enterprise services from mobile handheld devices. Further, with the invention of new interaction devices, the context in which the services are being used becomes an integral part of the activity carried out with the system. Traditional human\u2013computer interface (HCI) theories are now inadequate for developing these context-aware applications, as we believe that the notion of context should be extended to different categories: computing contexts, user contexts, and physical contexts for ubiquitous computing. This demands a new paradigm for system requirements elicitation and design in order to make good use of such extended context information captured from mobile user behavior. Instead of redesigning or adapting existing enterprise services in an ad hoc manner, we introduce a methodology for the elicitation of context-aware adaptation requirements and the matching of context-awareness features to the target context by capability matching. For the implementation of such adaptations, we propose the use of three tiers of views: user interface views, data views, and process views. This approach centers on a novel notion of process views to ubiquitous service adaptation, where mobile users may execute a more concise version or modified procedure of the original process according to their behavior under different contexts. The process view also serves as the key mechanism for integrating user interface views and data views. Based on this model, we analyze the design and implementation issues of some common ubiquitous access situations and show how to adapt them systematically into a context-aware application by considering the requirements of a ubiquitous enterprise information system."}
{"_id": "edd6b6cd62d4c3b5d288721510e579be62c941d6", "title": "Conditional image generation using feature-matching GAN", "text": "Generative Adversarial Net is a frontier method of generative models for images, audios and videos. In this paper, we focus on conditional image generation and introduce conditional Feature-Matching Generative Adversarial Net to generate images from category labels. By visualizing state-of-art discriminative conditional generative models, we find these networks do not gain clear semantic concepts. Thus we design the loss function in the light of metric learning to measure semantic distance. The proposed model is evaluated on several well-known datasets. It is shown to be of higher perceptual quality and better diversity then existing generative models."}
{"_id": "7a050d2f0c83a65996e1261b52e6523f24d0bac2", "title": "Reactance-domain ESPRIT algorithm for a hexagonally shaped seven-element ESPAR antenna", "text": "A direction-of-arrival (DoA) method that combines the reactance-domain (RD) technique and the ESPRIT algorithm is proposed for use with the 7-element electronically steerable parasitic array radiator (ESPAR) for the estimation of noncoherent sources. Simulations show that the method could resolve up to three incoming signals with an estimation performance that depends on the signal's angle of arrival. Moreover, the method is compared with the Cramer-Rao lower bound (CRB) and the MUSIC asymptotic error variance, both modified for the RD technique. Numerical comparison between this lower bound and the MUSIC algorithm confirmed that the proposed method can achieve the CRB and provide high-precision DoA estimation with a level of performance that is sufficient for many DoA finding applications. The proposed method could be demonstrated by means of experiments on DOA estimation conducted in an anechoic chamber."}
{"_id": "f512a4ae0f6b2d8d03c54a6405d2697a74f7256a", "title": "A quick MST-based algorithm to obtain Pathfinder networks (\u221e,n-1)", "text": "Network scaling algorithms such as the Pathfinder algorithm are used to prunemany different kinds of networks, including citation networks, randomnetworks, and social networks. However, this algorithm suffers from run time problems for large networks and online processing due to its O(n4) time complexity. In this article, we introduce a new alternative, the MST-Pathfinder algorithm, which will allow us to prune the original network to get its PFNET(\u221e, n \u22121) in justO(n2 \u00b7 logn) time.Theunderlying idea comes from the fact that the union (superposition) of all the Minimum Spanning Trees extracted from a given network is equivalent to the PFNET resulting from the Pathfinder algorithmparameterized by a specific set of values (r = \u221e and q = n \u22121), those usually considered in many different applications. Although this property is well-known in the literature, it seems that no algorithm based on it has been proposed, up to now, to decrease the high computational cost of the original Pathfinder algorithm.We also present a mathematical proof of the correctness of this new alternative and test its good efficiency in two different case studies: one dedicated to the post-processing of large random graphs, and the other one to a real world case in which medium networks obtained by a cocitation analysis of the scientific domains in different countries are pruned."}
{"_id": "b49af9c4ab31528d37122455e4caf5fdeefec81a", "title": "Smart homes and their users: a systematic analysis and key challenges", "text": "Published research on smart homes and their users is growing exponentially, yet a clear understanding of who these users are and how they might use smart home technologies is missing from a field being overwhelmingly pushed by technology developers. Through a systematic analysis of peer-reviewed literature on smart homes and their users, this paper takes stock of the dominant research themes and the linkages and disconnects between them. Key findings within each of nine themes are analysed, grouped into three: (1) views of the smart home\u2014functional, instrumental, socio-technical; (2) users and the use of the smart home\u2014prospective users, interactions and decisions, using technologies in the home; and (3) challenges for realising the smart home\u2014hardware and software, design, domestication. These themes are integrated into an organising framework for future research that identifies the presence or absence of cross-cutting relationships between different understandings of smart homes and their users. The usefulness of the organising framework is illustrated in relation to two major concerns\u2014privacy and control\u2014that have been narrowly interpreted to date, precluding deeper insights and potential solutions. Future research on smart homes and their users can benefit by exploring and developing cross-cutting relationships between the research themes identified."}
{"_id": "c4a3da33ac6bcc9acd962f3bbb92d2387a62aed2", "title": "Mobile application for Indonesian medicinal plants identification using Fuzzy Local Binary Pattern and Fuzzy Color Histogram", "text": "This research proposed a new mobile application based on Android operating system for identifying Indonesian medicinal plant images based on texture and color features of digital leaf images. In the experiments we used 51 species of Indonesian medicinal plants and each species consists of 48 images, so the total images used in this research are 2,448 images. This research investigates effectiveness of the fusion between the Fuzzy Local Binary Pattern (FLBP) and the Fuzzy Color Histogram (FCH) in order to identify medicinal plants. The FLBP method is used for extracting leaf image texture. The FCH method is used for extracting leaf image color. The fusion of FLBP and FCH is done by using Product Decision Rules (PDR) method. This research used Probabilistic Neural Network (PNN) classifier for classifying medicinal plant species. The experimental results show that the fusion between FLBP and FCH can improve the average accuracy of medicinal plants identification. The accuracy of identification using fusion of FLBP and FCH is 74.51%. This application is very important to help people identifying and finding information about Indonesian medicinal plant."}
{"_id": "de704a0347322014abc4b3ecc27e86bdc5fac2fd", "title": "KNOWLEDGE ACQUISITION AND CYBER SICKNESS : A COMPARISON OF VR DEVICES IN VIRTUAL TOURS", "text": "SCIENCE JOURNAL 2015 | JUNE | SCIENCE JOURNAL | 613"}
{"_id": "b648d73edd1a533decd22eec2e7722b96746ceae", "title": "weedNet: Dense Semantic Weed Classification Using Multispectral Images and MAV for Smart Farming", "text": "Selective weed treatment is a critical step in autonomous crop management as related to crop health and yield. However, a key challenge is reliable and accurate weed detection to minimize damage to surrounding plants. In this letter, we present an approach for dense semantic weed classification with multispectral images collected by a micro aerial vehicle (MAV). We use the recently developed encoder\u2013decoder cascaded convolutional neural network, SegNet, that infers dense semantic classes while allowing any number of input image channels and class balancing with our sugar beet and weed datasets. To obtain training datasets, we established an experimental field with varying herbicide levels resulting in field plots containing only either crop or weed, enabling us to use the normalized difference vegetation index as a distinguishable feature for automatic ground truth generation. We train six models with different numbers of input channels and condition (fine tune) it to achieve $\\sim$0.8 F1-score and 0.78 area under the curve classification metrics. For the model deployment, an embedded Graphics Processing Unit (GPU) system (Jetson TX2) is tested for MAV integration. Dataset used in this letter is released to support the community and future work."}
{"_id": "7d712ea3803485467a46b46b71242477560c18f0", "title": "A Novel Differential-Fed Patch Antenna on Stepped-Impedance Resonator With Enhanced Bandwidth Under Dual-Resonance", "text": "A novel design concept to enhance the bandwidth of a differential-fed patch antenna using the dual-resonant radiation of a stepped-impedance resonator (SIR) is proposed. The SIR is composed of two distinctive portions: the radiating patch and a pair of open stubs. Initially, based on the transmission line model, the first and second odd-order radiative resonant modes, i.e., TM10 and TM30, of this SIR-typed patch antenna are extensively investigated. It is demonstrated that the frequency ratio between the dual-resonant modes can be fully controlled by the electrical length and the impedance ratios between the open stub and radiating patch. After that, the SIR-typed patch antenna is reshaped with stepped ground plane in order to increase the impedance ratio as highly required for wideband radiation. With this arrangement, these two radiative modes are merged with each other, resulting in a wide impedance bandwidth with a stable radiation pattern under dual-resonant radiation. Finally, the proposed antenna is designed, fabricated, and measured. It is verified in experiment that the impedance bandwidth (|Sdd11| <; -10 dB) of the proposed antenna has gained tremendous increment up to 10% (0.85-0.94 GHz) with two attenuation poles. Most importantly, the antenna has achieved a stable gain varying from 7.4 to 8.5 dB within the whole operating band, while keeping low-cross polarization."}
{"_id": "9f7b1b9d4f7dc2dcf850dcb311ecd309be380226", "title": "Design and Analysis of a Low-Profile and Broadband Microstrip Monopolar Patch Antenna", "text": "A new microstrip monopolar patch antenna is proposed and analyzed. The antenna has a wide bandwidth and a monopole like radiation pattern. Such antenna is constructed on a circular patch antenna that is shorted concentrically with a set of conductive vias. The antenna is analyzed using a cavity model. The cavity model analysis not only distinguishes each resonating mode and gives a physical insight into each mode of the antenna, but also provides a guideline to design a broadband monopolar patch antenna that utilizes two modes (TM01 and TM02 modes). Both modes provide a monopole like radiation pattern. The proposed antenna has a simple structure with a low profile of 0.024 wavelengths, and yields a wide impedance bandwidth of 18% and a maximum gain of 6 dBi."}
{"_id": "1965a7d9a3eb0727c054fb235b1758c8ffbb8e22", "title": "Circularly Polarized U-Slot Antenna", "text": "Circularly polarized single-layer U-slot microstrip patch antenna has been proposed. The suggested asymmetrical U-slot can generate the two orthogonal modes for circular polarization without chamfering any corner of the probe-fed square patch microstrip antenna. A parametric study has been carried out to investigate the effects caused by different arm lengths of the U-slot. The thickness of the foam substrate is about 8.5% of the wavelength at the operating frequency. The 3 dB axial ratio bandwidth of the antenna is 4%. Both experimental and theoretical results of the antenna have been presented and discussed. Circular polarization, printed antennas, U-slot."}
{"_id": "4f86fdb8312794929b9a11770fba271c5bf886fa", "title": "A Broadband Center-Fed Circular Patch-Ring Antenna With a Monopole Like Radiation Pattern", "text": "A center-fed circular microstrip patch antenna with a coupled annular ring is presented. This antenna has a low profile configuration with a monopole like radiation pattern. Compared to the center-fed circular patch antenna (CPA), the proposed antenna has a large bandwidth and similar radiation pattern. The proposed antenna is fabricated and tested. It resonates at 5.8 GHz, the corresponding impedance bandwidth and gain are 12.8% and 5.7 dBi, respectively. Very good agreement between the measurement and simulation for the return loss and radiation patterns is achieved."}
{"_id": "9462cd1ec2e404b22f76c88b6149d1e84683acb7", "title": "Printed Meandering Probe-Fed Circularly Polarized Patch Antenna With Wide Bandwidth", "text": "In this letter, a wideband compact circularly polarized (CP) patch antenna is proposed. This patch antenna consists of a printed meandering probe (M-probe) and truncated patches that excite orthogonal resonant modes to generate a wideband CP operation. The stacked patch is employed to further improve the axial-ratio (AR) bandwidth to fit the 5G Wi-Fi application. The proposed antenna achieves 42.3% impedance bandwidth and 16.8% AR bandwidth, respectively. The average gain within the AR bandwidth is 6.6 dBic with less than 0.5 dB variation. This work demonstrates a bandwidth broadening technique of an M-probe fed CP patch antenna. It is the first study to investigate and exhibit the M-probe could also provide the wideband characteristics in the dielectric loaded patch antenna. The potential applications of the antenna are 5G Wi-Fi and satellite communication systems."}
{"_id": "c9ed18a4a52503ede1f50691ff77efdf26acedd5", "title": "A PFC based BLDC motor drive using a Bridgeless Zeta converter", "text": "This paper deals with a PFC (Power Factor Corrected) Bridgeless Zeta converter based VSI (Voltage Source Inverter) fed BLDC (Brushless DC) motor drive. The speed control is achieved by controlling the voltage at the DC bus of VSI using a single voltage sensor. This facilitates the operation of VSI in fundamental frequency switching mode (Electronic Commutation of BLDC motor) in place of high frequency PWM (Pulse Width Modulation) switching for speed control. This leads to low switching losses in VSI and thus improves the efficiency of the drive. Moreover, a bridgeless configuration is used to reduce the conduction losses of DBR (Diode Bridge Rectifier). The bridgeless Zeta converter working in DCM (Discontinuous Conduction Mode) is used which utilizes a voltage follower approach thus requiring a single voltage sensor for speed control and PFC operation. The proposed drive is designed to operate over a wide range of speed control and under wide variation in supply voltages with high power factor and low harmonic distortion in the supply current at AC mains. An improved power quality is achieved with performance indices satisfying the international PQ (Power Quality) standards such as IEC-61000-3-2."}
{"_id": "d6002a6cc8b5fc2218754aed970aac91c8d8e7e9", "title": "Discriminatively Trained Templates for 3D Object Detection: A Real Time Scalable Approach", "text": "In this paper we propose a new method for detecting multiple specific 3D objects in real time. We start from the template-based approach based on the LINE2D/LINEMOD representation introduced recently by Hinterstoisser et al., yet extend it in two ways. First, we propose to learn the templates in a discriminative fashion. We show that this can be done online during the collection of the example images, in just a few milliseconds, and has a big impact on the accuracy of the detector. Second, we propose a scheme based on cascades that speeds up detection. Since detection of an object is fast, new objects can be added with very low cost, making our approach scale well. In our experiments, we easily handle 10-30 3D objects at frame rates above 10fps using a single CPU core. We outperform the state-of-the-art both in terms of speed as well as in terms of accuracy, as validated on 3 different datasets. This holds both when using monocular color images (with LINE2D) and when using RGBD images (with LINEMOD). Moreover, we propose a challenging new dataset made of 12 objects, for future competing methods on monocular color images."}
{"_id": "aaf0a8925f8564e904abc46b3ad1f9aa2b120cf6", "title": "Vector filtering for color imaging", "text": "Vector processing operations use essential spectral and spatial information to remove noise and localize microarray spots. The proposed fully automated vector technique can be easily implemented in either hardware or software; and incorporated in any existing microarray image analysis and gene expression tool."}
{"_id": "89c59ac45c267a1c12b9d0ed4af88f6d6c619683", "title": "Entity Extraction, Linking, Classification, and Tagging for Social Media: A Wikipedia-Based Approach", "text": "Many applications that process social data, such as tweets, must extract entities from tweets (e.g., \u201cObama\u201d and \u201cHawaii\u201d in \u201cObama went to Hawaii\u201d), link them to entities in a knowledge base (e.g., Wikipedia), classify tweets into a set of predefined topics, and assign descriptive tags to tweets. Few solutions exist today to solve these problems for social data, and they are limited in important ways. Further, even though several industrial systems such as OpenCalais have been deployed to solve these problems for text data, little if any has been published about them, and it is unclear if any of the systems has been tailored for social media. In this paper we describe in depth an end-to-end industrial system that solves these problems for social data. The system has been developed and used heavily in the past three years, first at Kosmix, a startup, and later at WalmartLabs. We show how our system uses a Wikipedia-based global \u201creal-time\u201d knowledge base that is well suited for social data, how we interleave the tasks in a synergistic fashion, how we generate and use contexts and social signals to improve task accuracy, and how we scale the system to the entire Twitter firehose. We describe experiments that show that our system outperforms current approaches. Finally we describe applications of the system at Kosmix and WalmartLabs, and lessons learned."}
{"_id": "0b632048208c9c6b48b636f9f7ef8a5466325488", "title": "A Scenario-Adaptive Driving Behavior Prediction Approach to Urban Autonomous Driving", "text": "Driving through dynamically changing traffic scenarios is a highly challenging task for autonomous vehicles, especially on urban roadways. Prediction of surrounding vehicles\u2019 driving behaviors plays a crucial role in autonomous vehicles. Most traditional driving behavior prediction models work only for a specific traffic scenario and cannot be adapted to different scenarios. In addition, priori driving knowledge was never considered sufficiently. This study proposes a novel scenario-adaptive approach to solve these problems. A novel ontology model was developed to model traffic scenarios. Continuous features of driving behavior were learned by Hidden Markov Models (HMMs). Then, a knowledge base was constructed to specify the model adaptation strategies and store priori probabilities based on the scenario\u2019s characteristics. Finally, the target vehicle\u2019s future behavior was predicted considering both a posteriori probabilities and a priori probabilities. The proposed approach was sufficiently evaluated with a real autonomous vehicle. The application scope of traditional models can be extended to a variety of scenarios, while the prediction performance can be improved by the consideration of priori knowledge. For lane-changing behaviors, the prediction time horizon can be extended by up to 56% (0.76 s) on average. Meanwhile, long-term prediction precision can be enhanced by over 26%."}
{"_id": "41d103f751d47f0c140d21c5baa4981b3d4c9a76", "title": "Commonsense Causal Reasoning Using Millions of Personal Stories", "text": "The personal stories that people write in their Internet weblogs include a substantial amount of information about the causal relationships between everyday events. In this paper we describe our efforts to use millions of these stories for automated commonsense causal reasoning. Casting the commonsense causal reasoning problem as a Choice of Plausible Alternatives, we describe four experiments that compare various statistical and information retrieval approaches to exploit causal information in story corpora. The top performing system in these experiments uses a simple co-occurrence statistic between words in the causal antecedent and consequent, calculated as the Pointwise Mutual Information between words in a corpus of millions of personal stories."}
{"_id": "c9d1bcdb95aa748940b85508fd7277622f74c0a4", "title": "Rigor in Information Systems Positivist Case Research: Current Practices", "text": "Case research has commanded respect in the information systems (IS) discipline for at least a decade. Notwithstanding the relevance and potential value of case studies, this methodological approach was once considered to be one of the least systematic. Toward the end of the 1980s, the issue of whether IS case research was rigorously conducted was first raised. Researchers from our field (e.g., Benbasat et al. 1987; Lee 1989) and from other disciplines (e.g., Eisenhardt 1989; Yin 1994) called for more rigor in case research and, through theirrecommendations, contributed to the advancement of the case study methodology. Considering these contributions, the present study seeks to determine the extent to which the field of IS has advanced in its operational use of case study method. Precisely, it investigates the level of methodological rigor in positivist IS case research conducted over the past decade. To fulfill this objective, we identified and coded 183 case articles from seven major IS journals. Evaluation attributes or criteria considered in the present review focus on three main areas, namely, design issues, data collection, and data analysis. While the level of methodological rigor has experienced modest progress with respect to some specific attributes, the overall assessed rigor is somewhat equivocal and there are still significant areas for improvement. One of the keys is to include better documentation particularly regarding issues related to the data collection and"}
{"_id": "30584b8d5bf99e51139e7ca9a8c04637480b73ca", "title": "Design of Compact Wide Stopband Microstrip Low-pass Filter using T-shaped Resonator", "text": "In this letter, a compact microstrip low-pass filter (LPF) using T-shaped resonator with wide stopband is presented. The proposed LPF has capability to remove the eighth harmonic and a low insertion loss of 0.12 dB. The bandstop structure using stepped impendence resonator and two open-circuit stubs are used to design a wide stopband with attenuation level better than \u221220 dB from 3.08 up to 22 GHz. The proposed filter with \u22123-dB cutoff frequency of 2.68 GHz has been designed, fabricated, and measured. The operating of the LPF is investigated based on equivalent circuit model. Simulation results are verified by measurement results and excellent agreement between them is observed."}
{"_id": "bb9936dc85acfc794636140f02644f4f29a754c9", "title": "On-Line Analytical Processing", "text": "On-line analytical processing (OLAP) describes an approach to decision support, which aims to extract knowledge from a data warehouse, or more specifically, from data marts. Its main idea is providing navigation through data to non-expert users, so that they are able to interactively generate ad hoc queries without the intervention of IT professionals. This name was introduced in contrast to on-line transactional processing (OLTP), so that it reflected the different requirements and characteristics between these classes of uses. The concept falls in the area of business intelligence."}
{"_id": "566199b865312f259d0cf694d71d6a51462e0fb8", "title": "Mining Roles with Multiple Objectives", "text": "With the growing adoption of Role-Based Access Control (RBAC) in commercial security and identity management products, how to facilitate the process of migrating a non-RBAC system to an RBAC system has become a problem with significant business impact. Researchers have proposed to use data mining techniques to discover roles to complement the costly top-down approaches for RBAC system construction. An important problem is how to construct RBAC systems with low complexity. In this article, we define the notion of weighted structural complexity measure and propose a role mining algorithm that mines RBAC systems with low structural complexity. Another key problem that has not been adequately addressed by existing role mining approaches is how to discover roles with semantic meanings. In this article, we study the problem in two primary settings with different information availability. When the only information is user-permission relation, we propose to discover roles whose semantic meaning is based on formal concept lattices. We argue that the theory of formal concept analysis provides a solid theoretical foundation for mining roles from a user-permission relation. When user-attribute information is also available, we propose to create roles that can be explained by expressions of user-attributes. Since an expression of attributes describes a real-world concept, the corresponding role represents a real-world concept as well. Furthermore, the algorithms we propose balance the semantic guarantee of roles with system complexity. Finally, we indicate how to create a hybrid approach combining top-down candidate roles. Our experimental results demonstrate the effectiveness of our approaches."}
{"_id": "0f0133873e0ddf9db8e190ccf44a07249c16ba10", "title": "Data Fusion by Matrix Factorization", "text": "For most problems in science and engineering we can obtain data sets that describe the observed system from various perspectives and record the behavior of its individual components. Heterogeneous data sets can be collectively mined by data fusion. Fusion can focus on a specific target relation and exploit directly associated data together with contextual data and data about system's constraints. In the paper we describe a data fusion approach with penalized matrix tri-factorization (DFMF) that simultaneously factorizes data matrices to reveal hidden associations. The approach can directly consider any data that can be expressed in a matrix, including those from feature-based representations, ontologies, associations and networks. We demonstrate the utility of DFMF for gene function prediction task with eleven different data sources and for prediction of pharmacologic actions by fusing six data sources. Our data fusion algorithm compares favorably to alternative data integration approaches and achieves higher accuracy than can be obtained from any single data source alone."}
{"_id": "34c15e94066894d92b5b0ec6817b8a246c009aaa", "title": "Cloud Service Negotiation: A Research Roadmap", "text": "Cloud services are Internet-based XaaS (X as a Service) services, where X can be hardware, software or applications. As Cloud consumers value QoS (Quality of Service), Cloud providers should make certain service level commitments in order to achieve business success. This paper argues for Cloud service negotiation. It outlines a research roadmap, reviews the state of the art, and reports our work on Cloud service negotiation. Three research problems that we formulate are QoS measurement, QoS negotiation, and QoS enforcement. To address QoS measurement, we pioneer a quality model named CLOUDQUAL for Cloud services. To address QoS negotiation, we propose a tradeoff negotiation approach for Cloud services, which can achieve a higher utility. We also give some ideas to solve QoS enforcement, and balance utility and success rate for QoS negotiation."}
{"_id": "08d6c0f860378a8c56b4ba7f347429970f70e3bd", "title": "Classification of brain disease in magnetic resonance images using two-stage local feature fusion", "text": "BACKGROUND\nMany classification methods have been proposed based on magnetic resonance images. Most methods rely on measures such as volume, the cerebral cortical thickness and grey matter density. These measures are susceptible to the performance of registration and limited in representation of anatomical structure. This paper proposes a two-stage local feature fusion method, in which deformable registration is not desired and anatomical information is represented from moderate scale.\n\n\nMETHODS\nKeypoints are firstly extracted from scale-space to represent anatomical structure. Then, two kinds of local features are calculated around the keypoints, one for correspondence and the other for representation. Scores are assigned for keypoints to quantify their effect in classification. The sum of scores for all effective keypoints is used to determine which group the test subject belongs to.\n\n\nRESULTS\nWe apply this method to magnetic resonance images of Alzheimer's disease and Parkinson's disease. The advantage of local feature in correspondence and representation contributes to the final classification. With the help of local feature (Scale Invariant Feature Transform, SIFT) in correspondence, the performance becomes better. Local feature (Histogram of Oriented Gradient, HOG) extracted from 16\u00d716 cell block obtains better results compared with 4\u00d74 and 8\u00d78 cell block.\n\n\nDISCUSSION\nThis paper presents a method which combines the effect of SIFT descriptor in correspondence and the representation ability of HOG descriptor in anatomical structure. This method has the potential in distinguishing patients with brain disease from controls."}
{"_id": "e3bb879045c5807a950047d91e65c15e7f087313", "title": "Community Detection in Social Networks Based on Influential Nodes", "text": "Large-scale social networks emerged rapidly in recent years. Social networks have become complex networks. The structure of social networks is an important research area and has attracted much scientific interest. Community is an important structure in social networks. In this paper, we propose a community detection algorithm based on influential nodes. First, we introduce how to find influential nodes based on random walk. Then we combine the algorithm with order statistics theory to find community structure. We apply our algorithm in three classical data sets and compare to other algorithms. Our community detection algorithm is proved to be effective in the experiments. Our algorithm also has applications in data mining and recommendations."}
{"_id": "0414c4cc1974e6d3e69d9f2986e5bb9fb1af4701", "title": "Natural Language Processing using NLTK and WordNet", "text": "Natural Language Processing is a theoretically motivated range of computational techniques for analysing and representing naturally occurring texts at one or more levels of linguistic analysis for the purpose of achieving human-like language processing for a range of tasks or applications [1]. To perform natural language processing a variety of tools and platform have been developed, in our case we will discuss about NLTK for Python.The Natural Language Toolkit, or more commonly NLTK, is a suite of libraries and programs for symbolic and statistical natural language processing (NLP) for the Python programming language[2]. It provides easy-to-use interfaces to many corpora and lexical resources such as WordNet, along with a suite of text processing libraries for classification, tokenization, stemming, tagging, parsing, and semantic reasoning. In this paper we discuss different approaches for natural language processing using NLTK."}
{"_id": "7beabda9986a546cbf4f6c66de9c9aa7ea92a7ac", "title": "Arabic Automatic Speech Recognition Enhancement", "text": "In this paper, we propose three approaches for Arabic automatic speech recognition. For pronunciation modeling, we propose a pronunciation variant generation with decision tree. For acoustic modeling, we propose the Hybrid approach to adapt the native acoustic model using another native acoustic model. Regarding the language model, we improve the language model using processed text. The experimental results show that the proposed pronunciation model approach has reduction in WER around 1%. The acoustic modeling reduce the WER by 1.2% and the adapted language modeling show reduction in WER by 1.9%."}
{"_id": "2a1c06ea40469064f2419df481a6e6aab7b09cdf", "title": "Driving Hotzenplotz: A Hybrid Interface for Vehicle Control Aiming to Maximize Pleasure in Highway Driving", "text": "A prerequisite to foster proliferation of automated driving is common system acceptance. However, different users groups (novice, enthusiasts) decline automation, which could be, in turn, problematic for a successful market launch. We see a feasible solution in the combination of the advantages of manual (autonomy) and automated (increased safety) driving. Hence, we've developed the Hotzenplotz interface, combining possibility-driven design with psychological user needs. A simulator study (N=30) was carried-out to assess user experience with subjective criteria (Need Scale, PANAS/-X, HEMA, AttrakDiff) and quantitative measures (driving behavior, HR/HRV) in different conditions. Our results confirm that pure AD is significantly less able to satisfy user needs compared to manual driving and make people feeling bored/out of control. In contrast, the Hotzenplotz interface has proven to reduce the negative effects of AD. Our implication is that drivers should be provided with different control options to secure acceptance and avoid deskilling."}
{"_id": "715250b1a76178cbc057de701524db7b2695234c", "title": "The Design and Implementation of CoGaDB: A Column-oriented GPU-accelerated DBMS", "text": "Nowadays, the performance of processors is primarily bound by a fixed energy budget, the power wall. This forces hardware vendors to optimize processors for specific tasks, which leads to an increasingly heterogeneous hardware landscape. Although efficient algorithms for modern processors such as GPUs are heavily investigated, we also need to prepare the database optimizer to handle computations on heterogeneous processors. GPUs are an interesting base for case studies, because they already offer many difficulties we will face tomorrow. In this paper, we present CoGaDB, a main-memory DBMS with built-in GPU acceleration, which is optimized for OLAP workloads. CoGaDB uses the self-tuning optimizer framework HyPE to build a hardware-oblivious optimizer, which learns cost models for database operators and efficiently distributes a workload on available processors. Furthermore, CoGaDB implements efficient algorithms on CPU and GPU and efficiently supports star joins. We show in this paper, how these novel techniques interact with each other in a single system. Our evaluation shows that CoGaDB quickly adapts to the underlying hardware by increasing the accuracy of its cost models at runtime."}
{"_id": "e4c98cd116c0b86387354208bc97c1dc3c79d16d", "title": "Understanding Stories with Large-Scale Common Sense", "text": "Story understanding systems need to be able to perform commonsense reasoning, specifically regarding characters\u2019 goals and their associated actions. Some efforts have been made to form large-scale commonsense knowledge bases, but integrating that knowledge into story understanding systems remains a challenge. We have implemented the Aspire system, an application of large-scale commonsense knowledge to story understanding. Aspire extends Genesis, a rule-based story understanding system, with tens of thousands of goalrelated assertions from the commonsense semantic network ConceptNet. Aspire uses ConceptNet\u2019s knowledge to infer plausible implicit character goals and story causal connections at a scale unprecedented in the space of story understanding. Genesis\u2019s rule-based inference enables precise story analysis, while ConceptNet\u2019s relatively inexact but widely applicable knowledge provides a significant breadth of coverage difficult to achieve solely using rules. Genesis uses Aspire\u2019s inferences to answer questions about stories, and these answers were found to be plausible in a small study. Though we focus on Genesis and ConceptNet, demonstrating the value of supplementing precise reasoning systems with large-scale, scruffy commonsense knowledge is our primary contribution."}
{"_id": "79b4723a010c66c2f3fdcd3bd79dba1c3a3e2d28", "title": "A comprehensive model of PMOS NBTI degradation", "text": "Negative bias temperature instability has become an important reliability concern for ultra-scaled Silicon IC technology with significant implications for both analog and digital circuit design. In this paper, we construct a comprehensive model for NBTI phenomena within the framework of the standard reaction\u2013diffusion model. We demonstrate how to solve the reaction\u2013diffusion equations in a way that emphasizes the physical aspects of the degradation process and allows easy generalization of the existing work. We also augment this basic reaction\u2013diffusion model by including the temperature and field-dependence of the NBTI phenomena so that reliability projections can be made under arbitrary circuit operating conditions. 2004 Published by Elsevier Ltd."}
{"_id": "8e5ba7d60a4d9f425deeb3a05f3124fe6686b29a", "title": "Neural correlates of experimentally induced flow experiences", "text": "Flow refers to a positive, activity-associated, subjective experience under conditions of a perceived fit between skills and task demands. Using functional magnetic resonance perfusion imaging, we investigated the neural correlates of flow in a sample of 27 human subjects. Experimentally, in the flow condition participants worked on mental arithmetic tasks at challenging task difficulty which was automatically and continuously adjusted to individuals' skill level. Experimental settings of \"boredom\" and \"overload\" served as comparison conditions. The experience of flow was associated with relative increases in neural activity in the left anterior inferior frontal gyrus (IFG) and the left putamen. Relative decreases in neural activity were observed in the medial prefrontal cortex (MPFC) and the amygdala (AMY). Subjective ratings of the flow experience were significantly associated with changes in neural activity in the IFG, AMY, and, with trend towards significance, in the MPFC. We conclude that neural activity changes in these brain regions reflect psychological processes that map on the characteristic features of flow: coding of increased outcome probability (putamen), deeper sense of cognitive control (IFG), decreased self-referential processing (MPFC), and decreased negative arousal (AMY)."}
{"_id": "23e4844f33adaf2e26195ffc2a7514a2e45fd33d", "title": "Discovering Structure in the Universe of Attribute Names", "text": "Recently, search engines have invested significant effort to answering entity\u2013attribute queries from structured data, but have focused mostly on queries for frequent attributes. In parallel, several research efforts have demonstrated that there is a long tail of attributes, often thousands per class of entities, that are of interest to users. Researchers are beginning to leverage these new collections of attributes to expand the ontologies that power search engines and to recognize entity\u2013 attribute queries. Because of the sheer number of potential attributes, such tasks require us to impose some structure on this long and heavy tail of attributes. This paper introduces the problem of organizing the attributes by expressing the compositional structure of their names as a rule-based grammar. These rules offer a compact and rich semantic interpretation of multi-word attributes, while generalizing from the observed attributes to new unseen ones. The paper describes an unsupervised learning method to generate such a grammar automatically from a large set of attribute names. Experiments show that our method can discover a precise grammar over 100,000 attributes of Countries while providing a 40-fold compaction over the attribute names. Furthermore, our grammar enables us to increase the precision of attributes from 47% to more than 90% with only a minimal curation effort. Thus, our approach provides an efficient and scalable way to expand ontologies with attributes of user interest."}
{"_id": "025cdba37d191dc73859c51503e91b0dcf466741", "title": "Fingerprint image enhancement using GWT and DMF", "text": "Fingerprint image enhancement is an essential preprocessing step in fingerprint recognition applications. In this paper we introduce an approach that extracts simultaneously orientation and frequency of local ridge in the fingerprint image by Gabor wavelet filter bank and use them in Gabor Filtering of image. Furthermore, we describes a robust approach to fingerprint image enhancement, which is based on integration of Gabor Filters and Directional Median Filter(DMF). In fact, Gaussian-distributed noises are reduced effectively by Gabor Filters and impulse noises by DMF. the proposed DMF not only can finish its original tasks, it can also join broken fingerprint ridges, fill out the holes of fingerprint images, smooth irregular ridges as well as remove some annoying small artifacts between ridges. Experimental results show our method to be superior to those described in the literature."}
{"_id": "cccb533cd14259b92ec7cd71b1d3f679ef251394", "title": "Combining haptic human-machine interaction with predictive path planning for lane-keeping and collision avoidance systems", "text": "This paper presents a first approach for a haptic human-machine interface combined with a novel lane-keeping and collision-avoidance assistance system approach, as well as the results of a first exploration study with human test drivers. The assistance system approach is based on a potential field predictive path planning algorithm that incorporates the drivers wishes commanded by the steering wheel angle, the brake pedal or throttle, and the intended maneuver. For the design of the haptic human-machine interface the assistance torque characteristic at the handwheel is shaped and the path planning parameters are held constant. In the exploration, both driving data as well as questionnaires are evaluated. The results show good acceptance for the lane-keeping assistance while the collision avoidance assistance needs to be improved."}
{"_id": "fbbe5bf055f997cbd1ed1f3b72b1d630771e358e", "title": "Learning patterns of university student retention", "text": "Learning predictors for student retention is very difficult. After reviewing the literature, it is evident that there is considerable room for improvement in the current state of the art. As shown in this paper, improvements are possible if we (a) explore a wide range of learning methods; (b) take care when selecting attributes; (c) assess the efficacy of the learned theory not just by its median performance, but also by the variance in that performance; (d) study the delta of student factors between those who stay and those who are retained. Using these techniques, for the goal of predicting if students will remain for the first three years of an undergraduate degree, the following factors were found to be informative: family background and family\u2019s social-economic status, high school GPA"}
{"_id": "03dbbd987ea1fd5307ba5ae2f56d88e4f465b88c", "title": "Anonymizing sequential releases", "text": "An organization makes a new release as new information become available, releases a tailored view for each data request, releases sensitive information and identifying information separately. The availability of related releases sharpens the identification of individuals by a global quasi-identifier consisting of attributes from related releases. Since it is not an option to anonymize previously released data, the current release must be anonymized to ensure that a global quasi-identifier is not effective for identification. In this paper, we study the sequential anonymization problem under this assumption. A key question is how to anonymize the current release so that it cannot be linked to previous releases yet remains useful for its own release purpose. We introduce the lossy join, a negative property in relational database design, as a way to hide the join relationship among releases, and propose a scalable and practical solution."}
{"_id": "3dfce4601c3f413605399267b3314b90dc4b3362", "title": "Protecting Respondents' Identities in Microdata Release", "text": "\u00d0Today's globally networked society places great demand on the dissemination and sharing of information. While in the past released information was mostly in tabular and statistical form, many situations call today for the release of specific data (microdata). In order to protect the anonymity of the entities (called respondents) to which information refers, data holders often remove or encrypt explicit identifiers such as names, addresses, and phone numbers. Deidentifying data, however, provides no guarantee of anonymity. Released information often contains other data, such as race, birth date, sex, and ZIP code, that can be linked to publicly available information to reidentify respondents and inferring information that was not intended for disclosure. In this paper we address the problem of releasing microdata while safeguarding the anonymity of the respondents to which the data refer. The approach is based on the definition of k-anonymity. A table provides k-anonymity if attempts to link explicitly identifying information to its content map the information to at least k entities. We illustrate how k-anonymity can be provided without compromising the integrity (or truthfulness) of the information released by using generalization and suppression techniques. We introduce the concept of minimal generalization that captures the property of the release process not to distort the data more than needed to achieve k-anonymity, and present an algorithm for the computation of such a generalization. We also discuss possible preference policies to choose among different minimal"}
{"_id": "5c15b11610d7c3ee8d6d99846c276795c072eec3", "title": "Generalizing Data to Provide Anonymity when Disclosing Information (Abstract)", "text": "The proliferation of information on the Internet and access to fast computers with large storage capacities has increased the volume of information collected and disseminated about individuals. The existence os these other data sources makes it much easier to re-identify individuals whose private information is released in data believed to be anonymous. At the same time, increasing demands are made on organizations to release individualized data rather than aggregate statistical information. Even when explicit identi ers, such as name and phone number, are removed or encrypted when releasing individualized data, other characteristic data, which we term quasi-identi ers, can exist which allow the data recipient to re-identify individuals to whom the data refer. In this paper, we provide a computational disclosure technique for releasing information from a private table such that the identity of any individual to whom the released data refer cannot be de nitively recognized. Our approach protects against linking to other data. It is based on the concepts of generalization, by which stored values can be replaced with semantically consistent and truthful but less precise alternatives, and of k-anonymity . A table is said to provide k-anonymity when the contained data do not allow the recipient to associate the released information to a set of individuals smaller than k. We introduce the notions of generalized table and of minimal generalization of a table with respect to a k-anonymity requirement. As an optimization problem, the objective is to minimally distort the data while providing adequate protection. We describe an algorithm that, given a table, e ciently computes a preferred minimal generalization to provide anonymity."}
{"_id": "9492fb8d3b3ce09451fc1df46d5e3c200095f5eb", "title": "Bottom-Up Generalization : A Data Mining Solution to Privacy Protection", "text": "In recent years, privacy-preserving data mining has been studied extensively, because of the wide proliferation of sensitive information on the internet. This paper investigates data mining as a technique for masking data; therefore, termed data mining based privacy protection. This approach incorporates partially the requirement of a targeted data mining task into the process of masking data so that essential structure is preserved in the masked data. The following privacy problem is considered in this paper: a data holder wants to release a version of data for building classification models, but wants to protect against linking the released data to an external source for inferring sensitive information. An iterative bottom-up generalization is adapted from data mining to generalize the data. The generalized data remains useful to classification but becomes difficult to link to other sources. The generalization space is specified by a hierarchical structure of generalizations. A key is identifying the best generalization to climb up the hierarchy at each iteration."}
{"_id": "b30706600c01e23e11b303842fe548d62bf3ecf8", "title": "M-invariance: towards privacy preserving re-publication of dynamic datasets", "text": "The previous literature of privacy preserving data publication has focused on performing \"one-time\" releases. Specifically, none of the existing solutions supports re-publication of the microdata, after it has been updated with insertions and deletions. This is a serious drawback, because currently a publisher cannot provide researchers with the most recent dataset continuously.\n This paper remedies the drawback. First, we reveal the characteristics of the re-publication problem that invalidate the conventional approaches leveraging k-anonymity and l-diversity. Based on rigorous theoretical analysis, we develop a new generalization principle m-invariance that effectively limits the risk of privacy disclosure in re-publication. We accompany the principle with an algorithm, which computes privacy-guarded relations that permit retrieval of accurate aggregate information about the original microdata. Our theoretical results are confirmed by extensive experiments with real data."}
{"_id": "43d7f7a090630db3ed0d7fca9d649c8562aeaaa9", "title": "Evaluating Sketchiness as a Visual Variable for the Depiction of Qualitative Uncertainty", "text": "We report on results of a series of user studies on the perception of four visual variables that are commonly used in the literature to depict uncertainty. To the best of our knowledge, we provide the first formal evaluation of the use of these variables to facilitate an easier reading of uncertainty in visualizations that rely on line graphical primitives. In addition to blur, dashing and grayscale, we investigate the use of `sketchiness' as a visual variable because it conveys visual impreciseness that may be associated with data quality. Inspired by work in non-photorealistic rendering and by the features of hand-drawn lines, we generate line trajectories that resemble hand-drawn strokes of various levels of proficiency-ranging from child to adult strokes-where the amount of perturbations in the line corresponds to the level of uncertainty in the data. Our results show that sketchiness is a viable alternative for the visualization of uncertainty in lines and is as intuitive as blur; although people subjectively prefer dashing style over blur, grayscale and sketchiness. We discuss advantages and limitations of each technique and conclude with design considerations on how to deploy these visual variables to effectively depict various levels of uncertainty for line marks."}
{"_id": "99aba9948302a8bd2cdf46a809a59e34d1b57d86", "title": "An Architecture for Local Energy Generation, Distribution, and Sharing", "text": "The United States electricity grid faces significant problems resulting from fundamental design principles that limit its ability to handle the key energy challenges of the 21st century. We propose an innovative electric power architecture, rooted in lessons learned from the Internet and microgrids, which addresses these problems while interfacing gracefully into the current grid to allow for non-disruptive incremental adoption. Such a system, which we term a \"Local\" grid, is controlled by intelligent power switches (IPS), and can consist of loads, energy sources, and energy storage. The desired result of the proposed architecture is to produce a grid network designed for distributed renewable energy, prevalent energy storage, and stable autonomous systems. We will describe organizing principles of such a system that ensure well-behaved operation, such as requirements for communication and energy transfer protocols, regulation and control schemes, and market-based rules of operation."}
{"_id": "30b32f4a6341b5809428df1271bdb707f2418362", "title": "A Sequential Neural Encoder With Latent Structured Description for Modeling Sentences", "text": "In this paper, we propose a sequential neural encoder with latent structured description SNELSD for modeling sentences. This model introduces latent chunk-level representations into conventional sequential neural encoders, i.e., recurrent neural networks with long short-term memory LSTM units, to consider the compositionality of languages in semantic modeling. An SNELSD model has a hierarchical structure that includes a detection layer and a description layer. The detection layer predicts the boundaries of latent word chunks in an input sentence and derives a chunk-level vector for each word. The description layer utilizes modified LSTM units to process these chunk-level vectors in a recurrent manner and produces sequential encoding outputs. These output vectors are further concatenated with word vectors or the outputs of a chain LSTM encoder to obtain the final sentence representation. All the model parameters are learned in an end-to-end manner without a dependency on additional text chunking or syntax parsing. A natural language inference task and a sentiment analysis task are adopted to evaluate the performance of our proposed model. The experimental results demonstrate the effectiveness of the proposed SNELSD model on exploring task-dependent chunking patterns during the semantic modeling of sentences. Furthermore, the proposed method achieves better performance than conventional chain LSTMs and tree-structured LSTMs on both tasks."}
{"_id": "36cd10d9afacdc26a019d44ff3d39a8cf3fd4a9a", "title": "Advanced Gate Drive Unit With Closed-Loop $di_{{C}}/dt$ Control", "text": "This paper describes the design and the experimental investigation of a gate drive unit with closed-loop control of the collector current slope diC/dt for multichip insulated-gate bipolar transistors (IGBTs). Compared to a pure resistive gate drive, the proposed diC/dt control offers the ability to adjust the collector current slope freely which helps to find an optimized relation between switching losses and secure operation of the freewheeling diode for every type of IGBT. Based on the description of IGBT's switching behavior, the design and the realization of the gate drive are presented. The test setup and the comparison of switching tests with and without the proposed diC/dt control are discussed."}
{"_id": "efa2be90d7f48e11da39c3ed3fa14579fd367f9c", "title": "3-Dimensional Localization via RFID Tag Array", "text": "In this paper, we propose 3DLoc, which performs 3-dimensional localization on the tagged objects by using the RFID tag arrays. 3DLoc deploys three arrays of RFID tags on three mutually orthogonal surfaces of each object. When performing 3D localization, 3DLoc continuously moves the RFID antenna and scans the tagged objects in a 2-dimensional space right in front of the tagged objects. It then estimates the object's 3D position according to the phases from the tag arrays. By referring to the fixed layout of the tag array, we use Angle of Arrival-based schemes to accurately estimate the tagged objects' orientation and 3D coordinates in the 3D space. To suppress the localization errors caused by the multipath effect, we use the linear relationship of the AoA parameters to remove the unexpected outliers from the estimated results. We have implemented a prototype system and evaluated the actual performance in the real complex environment. The experimental results show that 3DLoc achieves the mean accuracy of 10cm in free space and 15.3cm in the multipath environment for the tagged object."}
{"_id": "3d676791081e7b16a4ead9924fc03bac587d181d", "title": "Inductive Learning of Answer Set Programs from Noisy Examples", "text": "In recent years, non-monotonic Inductive Logic Programming has received growing interest. Specifically, several new learning frameworks and algorithms have been introduced for learning under the answer set semantics, allowing the learning of common-sense knowledge involving defaults and exceptions, which are essential aspects of human reasoning. In this paper, we present a noise-tolerant generalisation of the learning from answer sets framework. We evaluate our ILASP3 system, both on synthetic and on real datasets, represented in the new framework. In particular, we show that on many of the datasets ILASP3 achieves a higher accuracy than other ILP systems that have previously been applied to the datasets, including a recently proposed differentiable learning framework."}
{"_id": "7d46c3648f76b5542d8ecd582f01f155e6b248d5", "title": "On Formal and Cognitive Semantics for Semantic Computing", "text": "Semantics is the meaning of symbols, notations, concepts, functions, and behaviors, as well as their relations that can be deduced onto a set of predefined entities and/or known concepts. Semantic computing is an emerging computational methodology that models and implements computational structures and behaviors at semantic or knowledge level beyond that of symbolic data. In semantic computing, formal semantics can be classified into the categories of to be, to have, and to do semantics. This paper presents a comprehensive survey of formal and cognitive semantics for semantic computing in the fields of computational linguistics, software science, computational intelligence, cognitive computing, and denotational mathematics. A set of novel formal semantics, such as deductive semantics, concept-algebra-based semantics, and visual semantics, is introduced that forms a theoretical and cognitive foundation for semantic computing. Applications of formal semantics in semantic computing are presented in case studies on semantic cognition of natural languages, semantic analyses of computing behaviors, behavioral semantics of human cognitive processes, and visual semantic algebra for image and visual object manipulations."}
{"_id": "927aa1bd7c0e122ae77224e92821f2eaafc96cf9", "title": "GENDER RECOGNITION FROM FACES USING BANDLET TRANSFORMATION", "text": "Gender Recognition is important in different commercial and law enforcement applications. In this paper we have proposed a gender recognition system through facial images. We have used a different technique that involves Bandlet Transform instead of previously used Wavelet Transform, which is a multi-resolution technique and more efficiently provides the edges of images, and then mean is combined to create the feature vectors of the images. To classify the images for gender, we have used fuzzy c mean clustering. Experimental results have shown that average 97.1% accuracy have been achieved using this technique when SUMS database was used and 93.3% was achieved when FERET database was used. Keywords----Bandlet, Gender Recognition, Fuzzy C-mean, Multi Resolution."}
{"_id": "86cd8da6c6b35d99b75aaaaf7be55c78a31948eb", "title": "Evolution of the internal fixation of long bone fractures. The scientific basis of biological internal fixation: choosing a new balance between stability and biology.", "text": "The advent of 'biological internal fixation' is an important development in the surgical management of fractures. Locked nailing has demonstrated that flexible fixation without precise reduction results in reliable healing. While external fixators are mainly used today to provide temporary fixation in fractures after severe injury, the internal fixator offers flexible fixation, maintaining the advantages of the external fixator but allowing long-term treatment. The internal fixator resembles a plate but functions differently. It is based on pure splinting rather than compression. The resulting flexible stabilisation induces the formation of callus. With the use of locked threaded bolts, the application of the internal fixator foregoes the need of adaptation of the shape of the splint to that of the bone during surgery. Thus, it is possible to apply the internal fixator as a minimally invasive percutaneous osteosynthesis (MIPO). Minimal surgical trauma and flexible fixation allow prompt healing when the blood supply to bone is maintained or can be restored early. The scientific basis of the fixation and function of these new implants has been reviewed. The biomechanical aspects principally address the degree of instability which may be tolerated by fracture healing under different biological conditions. Fractures may heal spontaneously in spite of gross instability while minimal, even non-visible, instability may be deleterious for rigidly fixed small fracture gaps. The theory of strain offers an explanation for the maximum instability which will be tolerated and the minimal degree required for induction of callus formation. The biological aspects of damage to the blood supply, necrosis and temporary porosity explain the importance of avoiding extensive contact of the implant with bone. The phenomenon of bone loss and stress protection has a biological rather than a mechanical explanation. The same mechanism of necrosis-induced internal remodelling may explain the basic process of direct healing."}
{"_id": "59ae5541e1dc71ff33b4b6e14cbbc5c3d46fc506", "title": "A Survey on Clustering Algorithms for Wireless Sensor Networks", "text": "A wireless sensor network (WSN)consisting of a large number of tiny sensors can be an effective tool for gathering data in diverse kinds of environments. The data collected by each sensor is communicated to the base station, which forwards the data to the end user. Clustering is introduced to WSNs because it has proven to be an effective approach to provide better data aggregation and scalability for large WSNs. Clustering also conserves the limited energy resources of the sensors. This paper synthesises existing clustering algorithms news's and highlights the challenges in clustering."}
{"_id": "ce7f81e898f1c78468df2480294806864899549e", "title": "80\u00baC, 50-Gb/s Directly Modulated InGaAlAs BH-DFB Lasers", "text": "Direct modulation at 50 Gb/s of 1.3-\u03bcm InGaAlAs DFB lasers operating at up to 80\u00b0C was experimentally demonstrated by using a ridge-shaped buried heterostructure."}
{"_id": "c955e319595b8ae051fe18f193490ba56c4f0254", "title": "Platform-as-a-Service (PaaS): The Next Hype of Cloud Computing", "text": "Cloud Computing is expected to become the driving force of information technology to revolutionize the future. Presently number of companies is trying to adopt this new technology either as service providers, enablers or vendors. In this way the cloud market is estimated be likely to emerge at a remarkable rate. Under the whole cloud umbrella, PaaS seems to have a relatively small market share. However, it is expected to offer much more as it is compared with its counterparts SaaS and IaaS. This paper is aimed to assess and analyze the future of PaaS technology. Year 2015 named as \u201cthe year of PaaS\u201d. It means that PaaS technology has established strong roots and ready to hit the market with better technology services. This research will discuss future PaaS market trends, growth and business competitors. In the current dynamic era, several companies in the market are offering PaaS services. This research will also outline some of the top service providers (proprietary & open source) to discuss their current technology status and present a futuristic look into their services and business strategies. Analysis of the present and future PaaS technology infrastructure will also be a major discussion in this paper."}
{"_id": "e1a7cf4e4760bb580dd67255fbe872bac33ae28b", "title": "Hybrid CMOS/memristor circuits", "text": "This is a brief review of recent work on the prospective hybrid CMOS/memristor circuits. Such hybrids combine the flexibility, reliability and high functionality of the CMOS subsystem with very high density of nanoscale thin film resistance switching devices operating on different physical principles. Simulation and initial experimental results demonstrate that performance of CMOS/memristor circuits for several important applications is well beyond scaling limits of conventional VLSI paradigm."}
{"_id": "91d513af1f667f64c9afc55ea1f45b0be7ba08d4", "title": "Automatic Face Image Quality Prediction", "text": "Face image quality can be defined as a measure of the utility of a face image to automatic face recognition. In this work, we propose (and compare) two methods for automatic face image quality based on target face quality values from (i) human assessments of face image quality (matcherindependent), and (ii) quality values computed from similarity scores (matcher-dependent). A support vector regression model trained on face features extracted using a deep convolutional neural network (ConvNet) is used to predict the quality of a face image. The proposed methods are evaluated on two unconstrained face image databases, LFW and IJB-A, which both contain facial variations with multiple quality factors. Evaluation of the proposed automatic face image quality measures shows we are able to reduce the FNMR at 1% FMR by at least 13% for two face matchers (a COTS matcher and a ConvNet matcher) by using the proposed face quality to select subsets of face images and video frames for matching templates (i.e., multiple faces per subject) in the IJB-A protocol. To our knowledge, this is the first work to utilize human assessments of face image quality in designing a predictor of unconstrained face quality that is shown to be effective in cross-database evaluation."}
{"_id": "2e3522d8d9c30f34e18161246c2f6dac7a9ae04d", "title": "The Neural Architecture of the Language Comprehension Network: Converging Evidence from Lesion and Connectivity Analyses", "text": "While traditional models of language comprehension have focused on the left posterior temporal cortex as the neurological basis for language comprehension, lesion and functional imaging studies indicate the involvement of an extensive network of cortical regions. However, the full extent of this network and the white matter pathways that contribute to it remain to be characterized. In an earlier voxel-based lesion-symptom mapping analysis of data from aphasic patients (Dronkers et al., 2004), several brain regions in the left hemisphere were found to be critical for language comprehension: the left posterior middle temporal gyrus, the anterior part of Brodmann's area 22 in the superior temporal gyrus (anterior STG/BA22), the posterior superior temporal sulcus (STS) extending into Brodmann's area 39 (STS/BA39), the orbital part of the inferior frontal gyrus (BA47), and the middle frontal gyrus (BA46). Here, we investigated the white matter pathways associated with these regions using diffusion tensor imaging from healthy subjects. We also used resting-state functional magnetic resonance imaging data to assess the functional connectivity profiles of these regions. Fiber tractography and functional connectivity analyses indicated that the left MTG, anterior STG/BA22, STS/BA39, and BA47 are part of a richly interconnected network that extends to additional frontal, parietal, and temporal regions in the two hemispheres. The inferior occipito-frontal fasciculus, the arcuate fasciculus, and the middle and inferior longitudinal fasciculi, as well as transcallosal projections via the tapetum were found to be the most prominent white matter pathways bridging the regions important for language comprehension. The left MTG showed a particularly extensive structural and functional connectivity pattern which is consistent with the severity of the impairments associated with MTG lesions and which suggests a central role for this region in language comprehension."}
{"_id": "a58fe54d2677fb01533c4dc2d8ef958a2cb921cb", "title": "Distribution of Eigenvalues for 2x2 MIMO Channel Capacity Based on Indoor Measurements", "text": "Based on the non line-of-sight (NLOS) indoor measurements performed in the corridors on the third floor of the Electronics and Telecommunications Research Institute (ETRI) building in Daejeon city, Republic of Korea, we investigated the distribution of the eigenvalues of HH*, where H denotes a 2\u00d72 multiple-input multiple-output (MIMO) channel matrix. Using the observation that the distribution of the measured eigenvalues matches well with the Gamma distribution, we propose a model of eigenvalues as Gamma distributed random variables that prominently feature both transmitting and receiving correlations. Using the model with positive integer k_i, i=1, 2, which is the shape parameter of a Gamma distribution, we derive the closed-form ergodic capacity of the 2\u00d72 MIMO channel. Validation results show that the proposed model enables the evaluation of the outage and ergodic capacities of the correlated 2\u00d72 MIMO channel in the NLOS indoor corridor environment."}
{"_id": "d4b29432c3e7bd4e2d06935169c91f781f441160", "title": "Speech recognition of Malayalam numbers", "text": "Digit speech recognition is important in many applications such as automatic data entry, PIN entry, voice dialing telephone, automated banking system, etc. This paper presents speaker independent speech recognition system for Malayalam digits. The system employs Mel frequency cepstrum coefficient (MFCC) as feature for signal processing and Hidden Markov model (HMM) for recognition. The system is trained with 21 male and female voices in the age group of 20 to 40 years and there was 98.5% word recognition accuracy (94.8% sentence recognition accuracy) on a test set of continuous digit recognition task."}
{"_id": "517a461a8839733e34c9025154de3d6275543642", "title": "Beyond Independent Relevance: Methods and Evaluation Metrics for Subtopic Retrieval", "text": "We present a non-traditional retrieval problem we call subtopic retrieval. The subtopic retrieval problem is concerned with finding documents that cover many different subtopics of a query topic. In such a problem, the utility of a document in a ranking is dependent on other documents in the ranking, violating the assumption of independent relevance which is assumed in most traditional retrieval methods. Subtopic retrieval poses challenges for evaluating performance, as well as for developing effective algorithms. We propose a framework for evaluating subtopic retrieval which generalizes the traditional precision and recall metrics by accounting for intrinsic topic difficulty as well as redundancy in documents. We propose and systematically evaluate several methods for performing subtopic retrieval using statistical language models and a maximal marginal relevance (MMR) ranking strategy. A mixture model combined with query likelihood relevance ranking is shown to modestly outperform a baseline relevance ranking on a data set used in the TREC interactive track."}
{"_id": "6b3abd1a6bf9c9564147cfda946c447955d01804", "title": "Kaleido: You Can Watch It But Cannot Record It", "text": "Recently a number of systems have been developed to implement and improve the visual communication over screen-camera links. In this paper we study an opposite problem: how to prevent unauthorized users from videotaping a video played on a screen, such as in a theater, while do not affect the viewing experience of legitimate audiences. We propose and develop a light-weight hardware-free system, called Kaleido, that ensures these properties by taking advantage of the limited disparities between the screen-eye channel and the screen-camera channel. Kaleido does not require any extra hardware and is purely based on re-encoding the original video frame into multiple frames used for displaying. We extensively test our system Kaleido using a variety of smartphone cameras. Our experiments confirm that Kaleido preserves the high-quality screen-eye channel while reducing the secondary screen-camera channel quality significantly."}
{"_id": "528407a9c5b41f81366bbe5cf8058dadcb139fea", "title": "A new design of XOR-XNOR gates for low power application", "text": "XOR and XNOR gate plays an important role in digital systems including arithmetic and encryption circuits. This paper proposes a combination of XOR-XNOR gate using 6-transistors for low power applications. Comparison between a best existing XOR-XNOR have been done by simulating the proposed and other design using 65nm CMOS technology in Cadence environment. The simulation results demonstrate the delay, power consumption and power-delay product (PDP) at different supply voltages ranging from 0.6V to 1.2V. The results show that the proposed design has lower power dissipation and has a full voltage swing."}
{"_id": "17580101d22476be5ad1655abb48d967c78a3875", "title": "Twenty-Five Years of Research on Women Farmers in Africa : Lessons and Implications for Agricultural Research Institutions with an Annotated Bibliography", "text": ".................................................................................................................... iv Acknowledgments ...................................................................................................... iv Introduction ............................................................................................................... 1 Labor ..........................................................................................................................2 Gender Division of Labor .................................................................................... 2 Household Labor Availability ............................................................................... 6 Agricultural Labor Markets .................................................................................. 8 Conclusions: Labor and Gender .......................................................................... 9 Land ........................................................................................................................ 10 Access to Land ................................................................................................... 10 Security of Land ................................................................................................ 11 Changing Access to Land ................................................................................... 11 Access to Other Inputs .............................................................................................. 12 Access to Credit ................................................................................................. 13 Access to Fertilizer ............................................................................................. 14 Access to Extension and Information ................................................................. 15 Access to Mechanization .................................................................................... 16 Gender Issues in Access to Inputs: Summary...................................................... 16 Outputs .................................................................................................................... 17 Household Decision-Making .................................................................................... 18 Cooperative Bargaining and Collective Models .................................................. 19 Noncooperative Bargaining Models ................................................................... 19 Conclusions .............................................................................................................. 21 References ................................................................................................................. 23 Annotated Bibliography ............................................................................................ 27"}
{"_id": "205917e8e885b3c2c6c31c90f9de7f411a24ec22", "title": "Sensor Based PUF IoT Authentication Model for a Smart Home with Private Blockchain", "text": "With ubiquitous adoption of connected sensors, actuators and smart devices are finding inroads into daily life. Internet of Things (IoT) authentication is rapidly transforming from classical cryptographic user-centric knowledge based approaches to device signature based automated methodologies to corroborate identity between claimant and a verifier. Physical Unclonable Function (PUF) based IoT authentication mechanisms are gaining widespread interest as users are required to access IoT devices in real time while also expecting execution of sensitive (even physical) IoT actions immediately. This paper, delineates combination of BlockChain and Sensor based PUF authentication mechanism for solving real-time but non-repudiable access to IoT devices in a Smart Home by utilizing a mining less consensus mechanism for the provision of immutable assurance to users' and IoT devices' transactions i.e. commands, status alerts, actions etc."}
{"_id": "112f07f90d4395623a51725e90bf9d91b89e559a", "title": "Defending the morality of violent video games", "text": "The effect of violent video games is among the most widely discussed topics in media studies, and for good reason. These games are immensely popular, but many seem morally objectionable. Critics attack them for a number of reasons ranging from their capacity to teach players weapons skills to their ability to directly cause violent actions. This essay shows that many of these criticisms are misguided. Theoretical and empirical arguments against violent video games often suffer from a number of significant shortcomings that make them ineffective. This essay argues that video games are defensible from the perspective of Kantian, Aristotelian, and utilitarian moral theories."}
{"_id": "cdbb15ee448ee4dc6db16a540a40fbb035d1f4ca", "title": "Education-specific Tag Recommendation in CQA Systems", "text": "Systems for Community Question Answering (CQA) are well-known on the open web (e.g. Stack Overflow or Quora). They have been recently adopted also for use in educational domain (mostly in MOOCs) to mediate communication between students and teachers. As students are only novices in topics they learn about, they may need various scaffoldings to achieve effective question answering. In this work, we focus specifically on automatic recommendation of tags classifying students' questions. We propose a novel method that can automatically analyze a text of a question and suggest appropriate tags to an asker. The method takes specifics of educational domain into consideration by a two-step recommendation process in which tags reflecting course structure are recommended at first and consequently supplemented with additional related tags.\n Evaluation of the method on data from CS50 MOOC at Stack Exchange platform showed that the proposed method achieved higher performance in comparison with a baseline method (tag recommendation without taking educational specifics into account)."}
{"_id": "598caa7f431930892a78f1083453b1c0ba29e725", "title": "Teaching a Machine to Read Maps With Deep Reinforcement Learning", "text": "The ability to use a 2D map to navigate a complex 3D environment is quite remarkable, and even difficult for many humans. Localization and navigation is also an important problem in domains such as robotics, and has recently become a focus of the deep reinforcement learning community. In this paper we teach a reinforcement learning agent to read a map in order to find the shortest way out of a random maze it has never seen before. Our system combines several state-of-theart methods such as A3C and incorporates novel elements such as a recurrent localization cell. Our agent learns to localize itself based on 3D first person images and an approximate orientation angle. The agent generalizes well to bigger mazes, showing that it learned useful localization and naviga-"}
{"_id": "13082af1fd6bb9bfe63e73cf007de1655b7f9ae0", "title": "Machine learning in automated text categorization", "text": "The automated categorization (or classification) of texts into predefined categories has witnessed a booming interest in the last 10 years, due to the increased availability of documents in digital form and the ensuing need to organize them. In the research community the dominant approach to this problem is based on machine learning techniques: a general inductive process automatically builds a classifier by learning, from a set of preclassified documents, the characteristics of the categories. The advantages of this approach over the knowledge engineering approach (consisting in the manual definition of a classifier by domain experts) are a very good effectiveness, considerable savings in terms of expert labor power, and straightforward portability to different domains. This survey discusses the main approaches to text categorization that fall within the machine learning paradigm. We will discuss in detail issues pertaining to three different problems, namely, document representation, classifier construction, and classifier evaluation."}
{"_id": "6eb69624732a589e91816cc7722be0b0cdd28767", "title": "Discovering actionable patterns in event data", "text": "Applications such as those for systems management and intrusion detection employ an automated real-time operation system in which sensor data are collected and processed in real time. Although such a system effectively reduces the need for operation staff, it requires constructing and maintaining correlation rules. Currently, rule construction requires experts to identify problem patterns, a process that is timeconsuming and error-prone. In this paper, we propose reducing this burden by mining historical data that are readily available. Specifically, we first present efficient algorithms to mine three types of important patterns from historical event data: event bursts, periodic patterns, and mutually dependent patterns. We then discuss a framework for efficiently mining events that have multiple attributes. Last, we present Event Correlation Constructor\u2014a tool that validates and extends correlation knowledge."}
{"_id": "cd866d4510e397dbc18156f8d840d7745943cc1a", "title": "A tutorial on hidden Markov models and selected applications in speech recognition", "text": ""}
{"_id": "0a283fb395343cd26984425306ca24c85b09ccdb", "title": "Automatic Indexing Based on Bayesian Inference Networks", "text": "In this paper, a Bayesian inference network model for automatic indexing with index terms (descriptors) from a prescribed vocabulary is presented. It requires an indexing dictionary with rules mapping terms of the respective subject field onto descriptors and inverted lists for terms occuring in a set of documents of the subject field and descriptors manually assigned to these documents. The indexing dictionary can be derived automatically from a set of manually indexed documents. An application of the network model is described, followed by an indexing example and some experimental results about the indexing performance of the network model."}
{"_id": "1ac5b0628ff249c388ff5ca934a9ccbec577cbd7", "title": "Beyond Market Baskets: Generalizing Association Rules to Correlations", "text": "One of the most well-studied problems in data mining is mining for association rules in market basket data. Association rules, whose significance is measured via support and confidence, are intended to identify rules of the type, \u201cA customer purchasing item A often also purchases item B.\u201d Motivated by the goal of generalizing beyond market baskets and the association rules used with them, we develop the notion of mining rules that identify correlations (generalizing associations), and we consider both the absence and presence of items as a basis for generating rules. We propose measuring significance of associations via the chi-squared test for correlation from classical statistics. This leads to a measure that is upward closed in the itemset lattice, enabling us to reduce the mining problem to the search for a border between correlated and uncorrelated itemsets in the lattice. We develop pruning strategies and devise an efficient algorithm for the resulting problem. We demonstrate its effectiveness by testing it on census data and finding term dependence in a corpus of text documents, as well as on synthetic data."}
{"_id": "16afeecd0f4dbccdd8e281e0e7e443bd08681da1", "title": "Measuring the Impact of Network Performance on Cloud-Based Speech Recognition Applications", "text": "Cloud-based speech recognition systems enhance Web surfing, transportation, health care, etc. For example, using voice commands helps drivers search the Internet without affecting traffic safety risks. User frustration with network traffic problems can affect the usability of these applications. The performance of these type of applications should be robust in difficult network conditions. We evaluate the performance of several client-server speech recognition applications, under various network conditions. We measure transcription delay and accuracy of each application under different packet loss and jitter values. Results of our study show that performance of client-server speech recognition systems is affected by jitter and packet loss; which commonly occur in WiFi and cellular networks."}
{"_id": "41f3b1aebde4c342211185d2b5e339a60ceff9e2", "title": "Friction modeling in linear chemical-mechanical planarization", "text": "In this article, we develop an analytical model of the relationship between the wafer/pad friction and process configuration. We also provide experimental validation of this model for in situ process monitoring. CMP thus demonstrates that the knowledge and methodologies developed for friction modeling and control can be used to advance the understanding, monitoring, and control of semiconductor manufacturing processes. Meanwhile, relevant issues and challenges in real-time monitoring of CMP are presented as sources of future development."}
{"_id": "b972f638b7c4ed22e1bcb573520bb232ea88cda5", "title": "Efficient, Safe, and Probably Approximately Complete Learning of Action Models", "text": "In this paper we explore the theoretical boundaries of planning in a setting where no model of the agent\u2019s actions is given. Instead of an action model, a set of successfully executed plans are given and the task is to generate a plan that is safe, i.e., guaranteed to achieve the goal without failing. To this end, we show how to learn a conservative model of the world in which actions are guaranteed to be applicable. This conservative model is then given to an off-the-shelf classical planner, resulting in a plan that is guaranteed to achieve the goal. However, this reduction from a model-free planning to a model-based planning is not complete: in some cases a plan will not be found even when such exists. We analyze the relation between the number of observed plans and the likelihood that our conservative approach will indeed fail to solve a solvable problem. Our analysis show that the number of trajectories needed scales gracefully."}
{"_id": "e38b9f339e858c8ac95679737a0852d21c48d89c", "title": "Pressure characteristics at the stump/socket interface in transtibial amputees using an adaptive prosthetic foot.", "text": "BACKGROUND\nThe technological advances that have been made in developing highly functional prostheses are promising for very active patients but we do not yet know whether they cause an increase in biomechanical load along with possibly negative consequences for pressure conditions in the socket. Therefore, this study monitored the socket pressure at specific locations of the stump when using a microprocessor-controlled adaptive prosthetic ankle under different walking conditions.\n\n\nMETHODS\nTwelve unilateral transtibial amputees between 43 and 59 years of age were provided with the Proprio-Foot (Ossur) and underwent an instrumented 3D gait analysis in level, stair, and incline walking, including synchronous data capturing of socket pressure. Peak pressures and pressure time integrals (PTI) at three different locations were compared for five walking conditions with and without using the device's ankle adaptation mode.\n\n\nFINDINGS\nHighest peak pressures of 2.4 k Pa/kg were found for incline ascent at the calf muscle as compared to 2.1 k Pa/kg in level walking with large inter-individual variance. In stair ascent a strong correlation was found between maximum knee moment and socket pressure. The most significant pressure changes relative to level walking were seen in ramp descent anteriorly towards the stump end, with PTI values being almost twice as high as those in level walking. Adapting the angle of the prosthesis on stairs and ramps modified the pressure data such that they were closer to those in level walking.\n\n\nINTERPRETATION\nPressure at the stump depends on the knee moments involved in each walking condition. Adapting the prosthetic ankle angle is a valuable means of modifying joint kinetics and thereby the pressure distribution at the stump. However, large inter-individual differences in local pressures underline the importance of individual socket fitting."}
{"_id": "6c237c3638eefe1eb39212f801cd857bedc004ee", "title": "Exploiting Electronic Health Records to Mine Drug Effects on Laboratory Test Results", "text": "The proliferation of Electronic Health Records (EHRs) challenges data miners to discover potential and previously unknown patterns from a large collection of medical data. One of the tasks that we address in this paper is to reveal previously unknown effects of drugs on laboratory test results. We propose a method that leverages drug information to find a meaningful list of drugs that have an effect on the laboratory result. We formulate the problem as a convex non smooth function and develop a proximal gradient method to optimize it. The model has been evaluated on two important use cases: lowering low-density lipoproteins and glycated hemoglobin test results. The experimental results provide evidence that the proposed method is more accurate than the state-of-the-art method, rediscover drugs that are known to lower the levels of laboratory test results, and most importantly, discover additional potential drugs that may also lower these levels."}
{"_id": "ebae5af27aafd39358a46c83c1409885773254dd", "title": "A Survey of Vehicular Ad hoc Networks Routing Protocols", "text": "In recent years, the aspect of vehicular ad hoc network (VANET) is becoming an interesting research area; VANET is a mobile ad hoc network considered as a special case of mobile ad hoc network (MANET). Similar to MANET, VANET is characterized as autonomous and self-configured wireless network. However, VANET has very dynamic topology, large and variable network size, and constrained mobility; these characteristics led to the need for efficient routing and resource saving VANET protocols, to fit with different VANET environments. These differences render traditional MANET\u2019s protocols unsuitable for VANET. The aim of this work is to give a survey of the VANETs routing mechanisms, this paper gives an overview of Vehicular ad hoc networks (VANETs) and the existing VANET routing protocols; mainly it focused on vehicle to vehicle (V2V) communication and protocols. The paper also represents the general outlines and goals of VANETs, investigates different routing schemes that have been developed for VANETs, as well as providing classifications of VANET routing protocols (focusing on two classification forms), and gives summarized comparisons between different classes in the context of their methodologies used, strengths, and limitations of each class scheme compared to other classes. Finally, it extracts the current trends and the challenges for efficient routing mechanisms in VANETs."}
{"_id": "d012519a924e41aa7ff49d9b6be58033bd60fd9c", "title": "Predicting hospital admission at emergency department triage using machine learning", "text": "OBJECTIVE\nTo predict hospital admission at the time of ED triage using patient history in addition to information collected at triage.\n\n\nMETHODS\nThis retrospective study included all adult ED visits between March 2014 and July 2017 from one academic and two community emergency rooms that resulted in either admission or discharge. A total of 972 variables were extracted per patient visit. Samples were randomly partitioned into training (80%), validation (10%), and test (10%) sets. We trained a series of nine binary classifiers using logistic regression (LR), gradient boosting (XGBoost), and deep neural networks (DNN) on three dataset types: one using only triage information, one using only patient history, and one using the full set of variables. Next, we tested the potential benefit of additional training samples by training models on increasing fractions of our data. Lastly, variables of importance were identified using information gain as a metric to create a low-dimensional model.\n\n\nRESULTS\nA total of 560,486 patient visits were included in the study, with an overall admission risk of 29.7%. Models trained on triage information yielded a test AUC of 0.87 for LR (95% CI 0.86-0.87), 0.87 for XGBoost (95% CI 0.87-0.88) and 0.87 for DNN (95% CI 0.87-0.88). Models trained on patient history yielded an AUC of 0.86 for LR (95% CI 0.86-0.87), 0.87 for XGBoost (95% CI 0.87-0.87) and 0.87 for DNN (95% CI 0.87-0.88). Models trained on the full set of variables yielded an AUC of 0.91 for LR (95% CI 0.91-0.91), 0.92 for XGBoost (95% CI 0.92-0.93) and 0.92 for DNN (95% CI 0.92-0.92). All algorithms reached maximum performance at 50% of the training set or less. A low-dimensional XGBoost model built on ESI level, outpatient medication counts, demographics, and hospital usage statistics yielded an AUC of 0.91 (95% CI 0.91-0.91).\n\n\nCONCLUSION\nMachine learning can robustly predict hospital admission using triage information and patient history. The addition of historical information improves predictive performance significantly compared to using triage information alone, highlighting the need to incorporate these variables into prediction models."}
{"_id": "89a523135fc9cb3b0eb6cade2d1eab1b17ea42f4", "title": "Stress sensitivity of fault seismicity : a comparison between limited-offset oblique and major strike-slip faults", "text": "We present a new three-dimensional inventory of the southern San Francisco Bay area faults and use it to calculate stress applied principally by the 1989 M=7.1 Loma Prieta earthquake, and to compare fault seismicity rates before and after 1989. The major high-angle right-lateral faults exhibit a different response to the stress change than do minor oblique (rightlateral/thrust) faults. Seismicity on oblique-slip faults in the southern Santa Clara Valley thrust belt increased where the faults were unclamped. The strong dependence of seismicity change on normal stress change implies a high coefficient of static friction. In contrast, we observe that faults with significant offset (> 50-100 km) behave differently; microseismicity on the Hayward fault diminished where right-lateral shear stress was reduced, and where it was unclamped by the Loma Prieta earthquake. We observe a similar response on the San Andreas fault zone in southern California after the Landers earthquake sequence. Additionally, the offshore San Gregorio fault shows a seismicity rate increase where right-lateral/oblique shear stress was increased by the Loma Prieta earthquake despite also being clamped by it. These responses are consistent with either a low coefficient of static friction or high pore fluid pressures within the fault zones. We can explain the different behavior of the two styles of faults if those with large cumulative offset become impermeable through gouge buildup; coseismically pressurized pore fluids could be trapped and negate imposed normal stress changes, whereas in more limited offset faults fluids could rapidly escape. The difference in behavior between minor and major faults may explain why frictional failure criteria that apply intermediate coefficients of static friction can be effective in describing the broad distributions of aftershocks that follow large earthquakes, since many of these events occur both inside and outside major fault zones."}
{"_id": "5d6959c6d37ed3cc910cd436865a4c2a73284c7c", "title": "Photoplethysmography and its detailed pulse waveform analysis for arterial stiffness", "text": "Arterial stiffness index is one of the biomechanical indices of vascular healthiness. These indexes are based on detailed pulse waveform analysis which is presented here. After photoplethysmographyic (PPG) pulse wave measurement, we decompose the pulse waveform for the estimation and determination of arterial elasticity. Firstly, it is electro-optically measured PPG signal and by electromechanical film (EMFi) measured signal that are analyzed and investigated by dividing each wave into five logarithmic normal function components. For both the PPG and EMFi waveform we can find very easily a good fit between the original and overlapped and summed wave components. Each wave component is assumed to resemble certain phenomenon in the arteries and certain indexes can be calculated for example based on the mutual timing of the components. Several studies have demonstrated that these kinds of indexes calculated based on actual biomechanical processed can predict future cardiovascular events. Many dynamic factors, e.g., arterial stiffness, depend on fixed structural features of the arterial wall. For more accurate description, arterial stiffness is estimated based on pulse wave decomposition analysis in the radial measured by EMFi and PPG method and tibial arterial walls measured by PPG method parallelly. Elucidation of the precise relationship between endothelial function and arterial stiffness can be done through biomechanics. However, arterial wall elasticity awaits still further biomechanical studies with clinical relations and the influence of arterial flexibility, resistance and ageing inside of the radial pulse waveform."}
{"_id": "cb0c85c4eb75016a7098ca0c452e13812b9c95e9", "title": "Iterated learning and the evolution of language", "text": "Iterated learning describes the process whereby an individual learns their behaviour by exposure to another individual's behaviour, who themselves learnt it in the same way. It can be seen as a key mechanism of cultural evolution. We review various methods for understanding how behaviour is shaped by the iterated learning process: computational agent-based simulations; mathematical modelling; and laboratory experiments in humans and non-human animals. We show how this framework has been used to explain the origins of structure in language, and argue that cultural evolution must be considered alongside biological evolution in explanations of language origins."}
{"_id": "dc53c638f58bf3982c5a6ed82002d56c955763c2", "title": "Effective and efficient correlation analysis with application to Market Basket Analysis and Network Community Detection", "text": "Finding the most interesting correlations among items is essential for problems in many commercial, medical, and scientific domains. For example, what kinds of items should be recommended with regard to what has been purchased by a customer? How to arrange the store shelf in order to increase sales? How to partition the whole social network into several communities for successful advertising campaigns? Which set of individuals on a social network should we target to convince in order to trigger a large cascade of further adoptions? When conducting correlation analysis, traditional methods have both effectiveness and efficiency problems, which will be addressed in this dissertation. Here, we explore the effectiveness problem in three ways. First, we expand the set of desirable properties and study the property satisfaction for different correlation measures. Second, we study different techniques to adjust original correlation measure, and propose two new correlation measures: the Simplified \u03c7 with Continuity Correction and the Simplified \u03c7 with Support. Third, we study the upper and lower bounds of different measures and categorize them by the bound differences. Combining with the above three directions, we provide guidelines for users to choose the proper measure according to their situations. With the proper correlation measure, we start to solve the efficiency problem for a large dataset. Here, we propose a fully-correlated itemset (FCI) framework to decouple the correlation measure from the need for efficient search. By wrapping the desired measure in our FCI framework, we take advantage of the desired measure\u2019s superiority in evaluating itemsets, eliminate itemsets with irrelevant items, and achieve good computational performance. In addition, we identify a 1-dimensional monotone property of the upper bound of any good correlation measure, and different 2-dimensional"}
{"_id": "afad9773e74db1927cff4c284dee8afbc4fb849d", "title": "Circularly polarized array antenna with corporate-feed network and series-feed elements", "text": "In this paper, corporate-feed circularly polarized microstrip array antennas are studied. The antenna element is a series-feed slot-coupled structure. Series feeding causes sequential rotation effect at the element level. Antenna elements are then used to form the subarray by applying sequential rotation to their feeding. Arrays having 4, 16, and 64 elements were made. The maximum achieved gains are 15.3, 21, and 25.4 dBic, respectively. All arrays have less than 15 dB return loss and 3 dB axial ratio from 10 to 13 GHz. The patterns are all quite symmetrical."}
{"_id": "5a131856df045cf27a2d5056cea2d2401e2d81b2", "title": "When Social Bots Attack: Modeling Susceptibility of Users in Online Social Networks", "text": "Social bots are automatic or semi-automatic computer programs that mimic humans and/or human behavior in online social networks. Social bots can attack users (targets) in online social networks to pursue a variety of latent goals, such as to spread information or to influence targets. Without a deep understanding of the nature of such attacks or the susceptibility of users, the potential of social media as an instrument for facilitating discourse or democratic processes is in jeopardy. In this paper, we study data from the Social Bot Challenge 2011 an experiment conducted by the WebEcologyProject during 2011 in which three teams implemented a number of social bots that aimed to influence user behavior on Twitter. Using this data, we aim to develop models to (i) identify susceptible users among a set of targets and (ii) predict users\u2019 level of susceptibility. We explore the predictiveness of three different groups of features (network, behavioral and linguistic features) for these tasks. Our results suggest that susceptible users tend to use Twitter for a conversational purpose and tend to be more open and social since they communicate with many different users, use more social words and show more affection than non-susceptible users."}
{"_id": "618c47bc44c3b6fc27067211214afccce6f7cd2c", "title": "Speed Estimation and Abnormality Detection from Surveillance Cameras", "text": "Motivated by the increasing industry trends towards autonomous driving, vehicles, and transportation we focus on developing a traffic analysis framework for the automatic exploitation of a large pool of available data relative to traffic applications. We propose a cooperative detection and tracking algorithm for the retrieval of vehicle trajectories in video surveillance footage based on deep CNN features that is ultimately used for two separate traffic analysis modalities: (a) vehicle speed estimation based on a state of the art fully automatic camera calibration algorithm and (b) the detection of possibly abnormal events in the scene using robust optical flow descriptors of the detected vehicles and Fisher vector representations of spatiotemporal visual volumes. Finally we measure the performance of our proposed methods in the NVIDIA AI CITY challenge evaluation dataset."}
{"_id": "2230381a078241c4385bb9c20b385e8f0da70b9b", "title": "Blind detection of photomontage using higher order statistics", "text": "We investigate the prospect of using bicoherence features for blind image splicing detection. Image splicing is an essential operation for digital photomontaging, which in turn is a technique for creating image forgery. We examine the properties of bicoherence features on a data set, which contains image blocks of diverse image properties. We then demonstrate the limitation of the baseline bicoherence features for image splicing detection. Our investigation has led to two suggestions for improving the performance of bicoherence features, i.e., estimating the bicoherence features of the authentic counterpart and incorporating features that characterize the variance of the feature performance. The features derived from the suggestions are evaluated with support vector machine (SVM) classification and is shown to improve the image splicing detection accuracy from 62% to about 70%."}
{"_id": "1f0396c08e8485358b3d1e13f451ed12ecfc1a77", "title": "Building a Question-Answering Chatbot using Forum Data in the Semantic Space", "text": "We build a conversational agent which knowledge base is an online forum for parents of autistic children. We collect about 35,000 threads totalling some 600,000 replies, and label 1% of them for usefulness using Amazon Mechanical Turk. We train a Random Forest Classifier using sent2vec features to label the remaining thread replies. Then, we use word2vec to match user queries conceptually with a thread, and then a reply with a predefined context window."}
{"_id": "ade7178613e4db90d6a551cb372aebae4c4fa0bf", "title": "Time series forecasting of cyber attack intensity", "text": "Cyber attacks occur on a near daily basis and are becoming exponentially more common. While some research aims to detect the characteristics of an attack, little focus has been given to patterns of attacks in general. This paper aims to exploit temporal correlations between the number of attacks per day in order to predict future intensity of cyber incidents. Through analysis of attack data collected from Hackmageddon, correlation was found among reported attack volume in consecutive days. This paper presents a forecasting system that aims to predict the number of cyber attacks on a given day based only on a set of historical attack count data. Our system conducts ARIMA time series forecasting on all previously collected incidents to predict the expected number of attacks on a future date. Our tool is able to use only a subset of data relevant to a specific attack method. Prediction models are dynamically updated over time as new data is collected to improve accuracy. Our system outperforms naive forecasting methods by 14.1% when predicting attacks of any type, and up to 21.2% when forecasting attacks of a specific type. Our system also produces a model which more accurately predicts future cyber attack intensity behavior."}
{"_id": "c906c9b2daddf67ebd949c71fc707d697065c6a0", "title": "Semantics-Aware Machine Learning for Function Recognition in Binary Code", "text": "Function recognition in program binaries serves as the foundation for many binary instrumentation and analysis tasks. However, as binaries are usually stripped before distribution, function information is indeed absent in most binaries. By far, identifying functions in stripped binaries remains a challenge. Recent research work proposes to recognize functions in binary code through machine learning techniques. The recognition model, including typical function entry point patterns, is automatically constructed through learning. However, we observed that as previous work only leverages syntax-level features to train the model, binary obfuscation techniques can undermine the pre-learned models in real-world usage scenarios. In this paper, we propose FID, a semantics-based method to recognize functions in stripped binaries. We leverage symbolic execution to generate semantic information and learn the function recognition model through well-performing machine learning techniques.FID extracts semantic information from binary code and, therefore, is effectively adapted to different compilers and optimizations. Moreover, we also demonstrate that FID has high recognition accuracy on binaries transformed by widely-used obfuscation techniques. We evaluate FID with over four thousand test cases. Our evaluation shows that FID is comparable with previous work on normal binaries and it notably outperforms existing tools on obfuscated code."}
{"_id": "df6b604d1352d4bd81604730f9000d7a29574384", "title": "eyond virtual museums : Experiencing immersive virtual reality in eal museums", "text": "Contemporary museums are much more than places devoted to the placement and the exhibition of collections and artworks; indeed, they are nowadays considered as a privileged means for communication and play a central role in making culture accessible to the mass audience. One of the keys to approach the general public is the use of new technologies and novel interaction paradigms. These means, which bring with them an undeniable appeal, allow curators to modulate the cultural proposal by structuring different courses for different user profiles. Immersive Virtual reality (VR) is probably one of the most appealing and potentially effective technologies to serve this purpose; nevertheless, it is still quite uncommon to find immersive installations in museums. Starting from our 10 years\u2019 experience in this topic, and following an in-depth survey about these technologies and their use in cultural contexts, we propose a classification of VR installations, specifically oriented to cultural heritage applications, based on their features in terms of interaction and immersion. On the basis of this classification, aiming to provide a tool for framing VR systems which would hopefully suggest indications related to costs, usability and quality of the sensorial experience, we analyze a series of live examples of which we point out strengths and weak points. We then summarize the current state and the very next future, identifying the major issues that prevent these technologies from being actually widespread, and outline proposals for a more se of pervasive and effective u"}
{"_id": "9aade3d26996ce7ef6d657130464504b8d812534", "title": "Face Alignment With Deep Regression", "text": "In this paper, we present a deep regression approach for face alignment. The deep regressor is a neural network that consists of a global layer and multistage local layers. The global layer estimates the initial face shape from the whole image, while the following local layers iteratively update the shape with local image observations. Combining standard derivations and numerical approximations, we make all layers able to backpropagate error differentials, so that we can apply the standard backpropagation to jointly learn the parameters from all layers. We show that the resulting deep regressor gradually and evenly approaches the true facial landmarks stage by stage, avoiding the tendency that often occurs in the cascaded regression methods and deteriorates the overall performance: yielding early stage regressors with high alignment accuracy gains but later stage regressors with low alignment accuracy gains. Experimental results on standard benchmarks demonstrate that our approach brings significant improvements over previous cascaded regression algorithms."}
{"_id": "f85ccab7173e543f2bfd4c7a81fb14e147695740", "title": "A method to infer emotions from facial Action Units", "text": "We present a robust method to map detected facial Action Units (AUs) to six basic emotions. Automatic AU recognition is prone to errors due to illumination, tracking failures and occlusions. Hence, traditional rule based methods to map AUs to emotions are very sensitive to false positives and misses among the AUs. In our method, a set of chosen AUs are mapped to the six basic emotions using a learned statistical relationship and a suitable matching technique. Relationships between the AUs and emotions are captured as template strings comprising the most discriminative AUs for each emotion. The template strings are computed using a concept called discriminative power. The Longest Common Subsequence (LCS) distance, an approach for approximate string matching, is applied to calculate the closeness of a test string of AUs with the template strings, and hence infer the underlying emotions. LCS is found to be efficient in handling practical issues like erroneous AU detection and helps to reduce false predictions. The proposed method is tested with various databases like CK+, ISL, FACS, JAFFE, MindReading and many real-world video frames. We compare our performance with rule based techniques, and show clear improvement on both benchmark databases and real-world datasets."}
{"_id": "7539293eaadec85917bcfcf4ecc53e7bdd41c227", "title": "Using phrases and document metadata to improve topic modeling of clinical reports", "text": "Probabilistic topic models provide an unsupervised method for analyzing unstructured text, which have the potential to be integrated into clinical automatic summarization systems. Clinical documents are accompanied by metadata in a patient's medical history and frequently contains multiword concepts that can be valuable for accurately interpreting the included text. While existing methods have attempted to address these problems individually, we present a unified model for free-text clinical documents that integrates contextual patient- and document-level data, and discovers multi-word concepts. In the proposed model, phrases are represented by chained n-grams and a Dirichlet hyper-parameter is weighted by both document-level and patient-level context. This method and three other Latent Dirichlet allocation models were fit to a large collection of clinical reports. Examples of resulting topics demonstrate the results of the new model and the quality of the representations are evaluated using empirical log likelihood. The proposed model was able to create informative prior probabilities based on patient and document information, and captured phrases that represented various clinical concepts. The representation using the proposed model had a significantly higher empirical log likelihood than the compared methods. Integrating document metadata and capturing phrases in clinical text greatly improves the topic representation of clinical documents. The resulting clinically informative topics may effectively serve as the basis for an automatic summarization system for clinical reports."}
{"_id": "291c3f4393987f67cded328e984dbae84af643cb", "title": "Faster Dynamic Programming for Markov Decision Processes", "text": "OF THESIS FASTER DYNAMIC PROGRAMMING FOR MARKOV DECISION PROCESSES Markov decision processes (MDPs) are a general framework used by Artificial Intelligence (AI) researchers to model decision theoretic planning problems. Solving real world MDPs has been a major and challenging research topic in the AI literature. This paper discusses two main groups of approaches in solving MDPs. The first group of approaches combines the strategies of heuristic search and dynamic programming to expedite the convergence process. The second makes use of graphical structures in MDPs to decrease the effort of classic dynamic programming algorithms. Two new algorithms proposed by the author, MBLAO* and TVI, are described here."}
{"_id": "370063c5491147d88d57bbcd865eb5004484c1eb", "title": "A Review of Technical Approaches to Realizing Near-Field Communication Mobile Payments", "text": "This article describes and compares four approaches to storing payment keys and executing payment applications on mobile phones via near-field communication at the point of sale. Even though the comparison hinges on security--specifically, how well the keys and payment application are protected against misuse--other criteria such as hardware requirements, availability, management complexity, and performance are also identified and discussed."}
{"_id": "21e480ad39c52d8e770810f8319750a34f8bc091", "title": "Exploiting geographic dependencies for real estate appraisal: a mutual perspective of ranking and clustering", "text": "It is traditionally a challenge for home buyers to understand, compare and contrast the investment values of real estates. While a number of estate appraisal methods have been developed to value real property, the performances of these methods have been limited by the traditional data sources for estate appraisal. However, with the development of new ways of collecting estate-related mobile data, there is a potential to leverage geographic dependencies of estates for enhancing estate appraisal. Indeed, the geographic dependencies of the value of an estate can be from the characteristics of its own neighborhood (individual), the values of its nearby estates (peer), and the prosperity of the affiliated latent business area (zone). To this end, in this paper, we propose a geographic method, named ClusRanking, for estate appraisal by leveraging the mutual enforcement of ranking and clustering power. ClusRanking is able to exploit geographic individual, peer, and zone dependencies in a probabilistic ranking model. Specifically, we first extract the geographic utility of estates from geography data, estimate the neighborhood popularity of estates by mining taxicab trajectory data, and model the influence of latent business areas via ClusRanking. Also, we use a linear model to fuse these three influential factors and predict estate investment values. Moreover, we simultaneously consider individual, peer and zone dependencies, and derive an estate-specific ranking likelihood as the objective function. Finally, we conduct a comprehensive evaluation with real-world estate related data, and the experimental results demonstrate the effectiveness of our method."}
{"_id": "044a9cb24e2863c6bcaaf39b7a210fbb11b381e9", "title": "A Low-Bandwidth Network File System", "text": "Users rarely consider running network file systems over slow or wide-area networks, as the performance would be unacceptable and the bandwidth consumption too high. Nonetheless, efficient remote file access would often be desirable over such networks---particularly when high latency makes remote login sessions unresponsive. Rather than run interactive programs such as editors remotely, users could run the programs locally and manipulate remote files through the file system. To do so, however, would require a network file system that consumes less bandwidth than most current file systems.This paper presents LBFS, a network file system designed for low-bandwidth networks. LBFS exploits similarities between files or versions of the same file to save bandwidth. It avoids sending data over the network when the same data can already be found in the server's file system or the client's cache. Using this technique in conjunction with conventional compression and caching, LBFS consumes over an order of magnitude less bandwidth than traditional network file systems on common workloads."}
{"_id": "964b6997a9c7852deff71d34894dbdc38d34fbdf", "title": "Algorithms for Delta Compression and Remote File Synchronization", "text": "Delta compression and remote file synchronization techniques are concerned with efficient file transfer over a slow communication link in the case where the receiving party already has a similar file (or files). This problem arises naturally, e.g., when distributing updated versions of software over a network or synchronizing personal files between different accounts and devices. More generally, the problem is becoming increasingly common in many networkbased applications where files and content are widely replicated, frequently modified, and cut and reassembled in different contexts and packagings. In this chapter, we survey techniques, software tools, and applications for delta compression, remote file synchronization, and closely related problems. We first focus on delta compression, where the sender knows all the similar files that are held by the receiver. In the second part, we survey work on the related, but in many ways quite different, problem of remote file synchronization, where the sender does not have a copy of the files held by the receiver. Work supported by NSF CAREER Award NSF CCR-0093400 and by Intel Corporation."}
{"_id": "148ec401da7d5859a9488c0f9a34200de71cc824", "title": "Leases: An Efficient Fault-Tolerant Mechanism for Distributed File Cache Consistency", "text": "Caching introduces the overhead and complexity of ensuring consistency, reducing some of its performance benefits. In a distributed system, caching must deal with the additional complications of communication and host failures.\nLeases are proposed as a time-based mechanism that provides efficient consistent access to cached data in distributed systems. Non-Byzantine failures affect performance, not correctness, with their effect minimized by short leases. An analytic model and an evaluation for file access in the V system show that leases of short duration provide good performance. The impact of leases on performance grows more significant in systems of larger scale and higher processor performance."}
{"_id": "41f1abe566060e53ad93d8cfa8c39ac582256868", "title": "Implementing Fault-Tolerant Services Using the State Machine Approach: A Tutorial", "text": "The state machine approach is a general method for implementing fault-tolerant services in distributed systems. This paper reviews the approach and describes protocols for two different failure models\u2014Byzantine and fail stop. Systems reconfiguration techniques for removing faulty components and integrating repaired components are also discussed."}
{"_id": "46f766c11df69808453e14c900bcb3f4e081fcae", "title": "Copy Detection Mechanisms for Digital Documents", "text": "In a digital library system, documents are available in digital form and therefore are more easily copied and their copyrights are more easily violated. This is a very serious problem, as it discourages owners of valuable information from sharing it with authorized users. There are two main philosophies for addressing this problem: prevention and detection. The former actually makes unauthorized use of documents difficult or impossible while the latter makes it easier to discover such activity.In this paper we propose a system for registering documents and then detecting copies, either complete copies or partial copies. We describe algorithms for such detection, and metrics required for evaluating detection mechanisms (covering accuracy, efficiency, and security). We also describe a working prototype, called COPS, describe implementation issues, and present experimental results that suggest the proper settings for copy detection parameters."}
{"_id": "6db68f27bcb7c9c001bb0c144c1d0ac5d69a3f3a", "title": "Formal Analysis of Graphical Security Models", "text": "Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights. \u2022 Users may download and print one copy of any publication from the public portal for the purpose of private study or research. \u2022 You may not further distribute the material or use it for any profit-making activity or commercial gain \u2022 You may freely distribute the URL identifying the publication in the public portal ? If you believe that this document breaches copyright please contact us providing details, and we will remove access to the work immediately and investigate your claim. May not music be described as the mathematics of the sense, mathematics as music of the reason? The musician feels mathematics, the mathematician thinks music: music the dream, mathematics the working life. Summary The increasing usage of computer-based systems in almost every aspect of our daily life makes more and more dangerous the threat posed by potential attackers , and more and more rewarding a successful attack. Moreover, the complexity of these systems is also increasing, including physical devices, software components and human actors interacting with each other to form so-called socio-technical systems. The importance of socio-technical systems to modern societies requires verifying their security properties formally, while their inherent complexity makes manual analyses impracticable. Graphical models for security offer an unrivalled opportunity to describe socio-technical systems, for they allow to represent different aspects like human behaviour, computation and physical phenomena in an abstract yet uniform manner. Moreover, these models can be assigned a formal semantics, thereby allowing formal verification of their properties. Finally, their appealing graphical notations enable to communicate security concerns in an understandable way also to non-experts, often in charge of the decision making. This dissertation argues that automated techniques can be developed on graphical security models to evaluate qualitative and quantitative security properties of socio-technical systems and to synthesise optimal attack and defence strategies. In support to this claim we develop analysis techniques for widely-used graphical security models such as attack trees and attack-defence trees. Our analyses cope with the optimisation of multiple parameters of an attack and defence scenario. Improving on the literature, in case of conflicting parameters such as probability and cost we compute the set of optimal solutions \u2026"}
{"_id": "e88eec15946dd19bdcf69db882f204386e05ff48", "title": "Robust techniques for background subtraction in urban traffic video", "text": "Identifying moving objects from a video sequence is a fundamental and critical task in many computer-vision applications. A common approach is to perform background subtraction, which identifies moving objects from the portion of a video frame that differs significantly from a background model. There are many challenges in developing a good background subtraction algorithm. First, it must be robust against changes in illumination. Second, it should avoid detecting non-stationary background objects such as swinging leaves, rain, snow, and shadow cast by moving objects. Finally, its internal background model should react quickly to changes in background such as starting and stopping of vehicles. In this paper, we compare various background subtraction algorithms for detecting moving vehicles and pedestrians in urban traffic video sequences. We consider approaches varying from simple techniques such as frame differencing and adaptive median filtering, to more sophisticated probabilistic modeling techniques. While complicated techniques often produce superior performance, our experiments show that simple techniques such as adaptive median filtering can produce good results with much lower computational complexity."}
{"_id": "2232416778783616736149c870a69beb13cda743", "title": "Face Recognition in a Meeting Room", "text": "In this paper, weinvestigaterecognition of humanfaces in a meetingroom. The major challenges of identifying humanfacesin this environmentincludelow quality of input images,poor illumination,unrestrictedheadposesand continuouslychangingfacial expressionsandocclusion.In order to addresstheseproblemswe proposea novel algorithm, DynamicSpaceWarping (DSW).Thebasic idea of the algorithm is to combinelocal features under certain spatial constraints. We compare DSWwith the eigenface approachondatacollectedfromvariousmeetings.Wehave testedboth front and profile face imagesand imageswith two stagesof occlusion.Theexperimentalresultsindicate thattheDSWapproachoutperformstheeigenfaceapproach in bothcases."}
{"_id": "74c24d7454a2408f766e4d9e507a0e9c3d80312f", "title": "A Provably-Secure ECC-Based Authentication Scheme for Wireless Sensor Networks", "text": "A smart-card-based user authentication scheme for wireless sensor networks (in short, a SUA-WSN scheme) is designed to restrict access to the sensor data only to users who are in possession of both a smart card and the corresponding password. While a significant number of SUA-WSN schemes have been suggested in recent years, their intended security properties lack formal definitions and proofs in a widely-accepted model. One consequence is that SUA-WSN schemes insecure against various attacks have proliferated. In this paper, we devise a security model for the analysis of SUA-WSN schemes by extending the widely-accepted model of Bellare, Pointcheval and Rogaway (2000). Our model provides formal definitions of authenticated key exchange and user anonymity while capturing side-channel attacks, as well as other common attacks. We also propose a new SUA-WSN scheme based on elliptic curve cryptography (ECC), and prove its security properties in our extended model. To the best of our knowledge, our proposed scheme is the first SUA-WSN scheme that provably achieves both authenticated key exchange and user anonymity. Our scheme is also computationally competitive with other ECC-based (non-provably secure) schemes."}
{"_id": "853860b6472b2c883be4085a3460042fc8b1af3e", "title": "Laplacian Auto-Encoders: An explicit learning of nonlinear data manifold", "text": "A key factor contributing to the success of many auto-encoders based deep learning techniques is the implicit consideration of the underlying data manifold in their training criteria. In this paper, we aim to make this consideration more explicit by training auto-encoders completely from the manifold learning perspective. We propose a novel unsupervised manifold learning method termed Laplacian Auto-Encoders (LAEs). Starting from a general regularized function learning framework, LAE regularizes training of autoencoders so that the learned encoding function has the locality-preserving property for data points on the manifold. By exploiting the analog relation between the graph Laplacian and the Laplace\u2013Beltrami operator on the continuous manifold, we derive discrete approximations of the firstand higher-order auto-encoder regularizers that can be applied in practical scenarios, where only data points sampled from the distribution on the manifold are available. Our proposed LAE has potentially better generalization capability, due to its explicit respect of the underlying data manifold. Extensive experiments on benchmark visual classification datasets show that LAE consistently outperforms alternative auto-encoders recently proposed in deep learning literature, especially when training samples are relatively scarce. & 2015 Elsevier B.V. All rights reserved."}
{"_id": "81fd1a1f72963ce16ddebacea82e71dab3d6992a", "title": "Interactive surface design with interlocking elements", "text": "We present an interactive tool for designing physical surfaces made from flexible interlocking quadrilateral elements of a single size and shape. With the element shape fixed, the design task becomes one of finding a discrete structure---i.e., element connectivity and binary orientations---that leads to a desired geometry. In order to address this challenging problem of combinatorial geometry, we propose a forward modeling tool that allows the user to interactively explore the space of feasible designs. Paralleling principles from conventional modeling software, our approach leverages a library of base shapes that can be instantiated, combined, and extended using two fundamental operations: merging and extrusion. In order to assist the user in building the designs, we furthermore propose a method to automatically generate assembly instructions. We demonstrate the versatility of our method by creating a diverse set of digital and physical examples that can serve as personalized lamps or decorative items."}
{"_id": "90a5393b72b85ec21fae9a108ed5dd3e99837701", "title": "The Role of Instructional Design in Persuasion: A Comics Approach for Improving Cybersecurity", "text": "Although computer security technologies are the first line of defence to secure users, their success is dependent on individuals\u2019 behaviour. It is therefore necessary to persuade users to practice good computer security. Our interview analysis of users\u2019 conceptualization of security password guessing attacks, antivirus protection, and mobile online privacy shows that poor understanding of security threats influences users\u2019 motivation and ability to practice safe behaviours. We designed and developed an online interactive comic series called Secure Comics based on instructional design principles to address this problem. An eye-tracking experiment suggests that the graphical and interactive components of the comics direct users\u2019 attention and facilitate comprehension of the information. In our evaluations of Secure Comics, results from several user studies show that the comics improve understanding and motivate positive changes in security management behaviour. We discuss the implication of the findings to better understand the role of instructional design and persuasion in education technology."}
{"_id": "4c9f9f228390d1370e0df91d2565a2559444431d", "title": "SimRT: an automated framework to support regression testing for data races", "text": "Concurrent programs are prone to various classes of difficult-to-detect faults, of which data races are particularly prevalent. Prior work has attempted to increase the cost-effectiveness of approaches for testing for data races by employing race detection techniques, but to date, no work has considered cost-effective approaches for re-testing for races as programs evolve. In this paper we present SimRT, an automated regression testing framework for use in detecting races introduced by code modifications. SimRT employs a regression test selection technique, focused on sets of program elements related to race detection, to reduce the number of test cases that must be run on a changed program to detect races that occur due to code modifications, and it employs a test case prioritization technique to improve the rate at which such races are detected. Our empirical study of SimRT reveals that it is more efficient and effective for revealing races than other approaches, and that its constituent test selection and prioritization components each contribute to its performance."}
{"_id": "9da15b932df57a8f959471ebc977d620efb18cc1", "title": "PicToSeek: combining color and shape invariant features for image retrieval", "text": "We aim at combining color and shape invariants for indexing and retrieving images. To this end, color models are proposed independent of the object geometry, object pose, and illumination. From these color models, color invariant edges are derived from which shape invariant features are computed. Computational methods are described to combine the color and shape invariants into a unified high-dimensional invariant feature set for discriminatory object retrieval. Experiments have been conducted on a database consisting of 500 images taken from multicolored man-made objects in real world scenes. From the theoretical and experimental results it is concluded that object retrieval based on composite color and shape invariant features provides excellent retrieval accuracy. Object retrieval based on color invariants provides very high retrieval accuracy whereas object retrieval based entirely on shape invariants yields poor discriminative power. Furthermore, the image retrieval scheme is highly robust to partial occlusion, object clutter and a change in the object's pose. Finally, the image retrieval scheme is integrated into the PicToSeek system on-line at http://www.wins.uva.nl/research/isis/PicToSeek/ for searching images on the World Wide Web."}
{"_id": "3923c0deee252ba10562a4378fc2bbc4885282b3", "title": "Fake Colorized Image Detection", "text": "Image forensics aims to detect the manipulation of digital images. Currently, splicing detection, copy-move detection, and image retouching detection are attracting significant attention from researchers. However, image editing techniques develop over time. An emerging image editing technique is colorization, in which grayscale images are colorized with realistic colors. Unfortunately, this technique may also be intentionally applied to certain images to confound object recognition algorithms. To the best of our knowledge, no forensic technique has yet been invented to identify whether an image is colorized. We observed that, compared with natural images, colorized images, which are generated by three state-of-the-art methods, possess statistical differences for the hue and saturation channels. Besides, we also observe statistical inconsistencies in the dark and bright channels, because the colorization process will inevitably affect the dark and bright channel values. Based on our observations, i.e., potential traces in the hue, saturation, dark, and bright channels, we propose two simple yet effective detection methods for fake colorized images: Histogram-based fake colorized image detection and feature encoding-based fake colorized image detection. Experimental results demonstrate that both proposed methods exhibit a decent performance against multiple state-of-the-art colorization approaches."}
{"_id": "63db36cb0b5c8dad17a0c02ab95fde805d585513", "title": "Assessing patient suitability for short-term cognitive therapy with an interpersonal focus", "text": "In the current study, the development and initial validation of the Suitability for Short-Term Cognitive Therapy (SSCT) interview procedure is reported. The SSCT is an interview and rating procedure designed to evaluate the potential appropriateness of patients for short-term cognitive therapy with an interpersonal focus. It consists of a 1-hour, semistructured interview, focused on eliciting information from the patient relevant to nine selection criteria. The procedures involved in the development of this scale are described in detail, and preliminary evidence suggesting that the selection criteria can be rated reliably is presented. In addition, data indicating that scores on the SSCT scale predict the outcome of short-term cognitive therapy on multiple dependent measures, including both therapist and patient perspectives, are reported. It is concluded that the SSCT is a potentially useful scale for identifying patients who may be suitable, or unsuitable, for the type of short-term cognitive therapy administered in the present study."}
{"_id": "2dc18f661b400033abd1086b917c451d3358aef2", "title": "Visible Machine Learning for Biomedicine", "text": "A major ambition of artificial intelligence lies in translating patient data to successful therapies. Machine learning models face particular challenges in biomedicine, however, including handling of extreme data heterogeneity and lack of mechanistic insight into predictions. Here, we argue for \"visible\" approaches that guide model structure with experimental biology."}
{"_id": "09e941ab733b2c6c26261cb85f00f145d9063b0b", "title": "Automated Text Summarization In SUMMARIST", "text": "SUMMARIST is an attempt to create a robust automated text summarization system, based on the \u2018equation\u2019: summarization = topic identification + interpretation + generation. Each of these stages contains several independent modules, many of them trained on large corpora of text. We describe the system\u2019s architecture and provide details of some of its modules."}
{"_id": "03ce7c63eea901962dfae539b3ca6c77d65c5c38", "title": "Spike-Based Synaptic Plasticity in Silicon: Design, Implementation, Application, and Challenges", "text": "The ability to carry out signal processing, classification, recognition, and computation in artificial spiking neural networks (SNNs) is mediated by their synapses. In particular, through activity-dependent alteration of their efficacies, synapses play a fundamental role in learning. The mathematical prescriptions under which synapses modify their weights are termed synaptic plasticity rules. These learning rules can be based on abstract computational neuroscience models or on detailed biophysical ones. As these rules are being proposed and developed by experimental and computational neuroscientists, engineers strive to design and implement them in silicon and en masse in order to employ them in complex real-world applications. In this paper, we describe analog very large-scale integration (VLSI) circuit implementations of multiple synaptic plasticity rules, ranging from phenomenological ones (e.g., based on spike timing, mean firing rates, or both) to biophysically realistic ones (e.g., calcium-dependent models). We discuss the application domains, weaknesses, and strengths of various representative approaches proposed in the literature, and provide insight into the challenges that engineers face when designing and implementing synaptic plasticity rules in VLSI technology for utilizing them in real-world applications."}
{"_id": "bc9f5d844ea6b989feb989cc6c8fc34f721a6b06", "title": "Networks versus markets in international trade", "text": "I propose a network/search view of international trade in differentiated products. I present evidence that supports the view that proximity and common language/colonial ties are more important for differentiated products than for products traded on organized exchanges in matching international buyers and sellers, and that search barriers to trade are higher for differentiated than for homogeneous products. I also discuss alternative explanations for the findings."}
{"_id": "3973e14770350ed54ba1272aa3e19b4d21f5dad3", "title": "Obstacle Detection and Tracking for the Urban Challenge", "text": "This paper describes the obstacle detection and tracking algorithms developed for Boss, which is Carnegie Mellon University 's winning entry in the 2007 DARPA Urban Challenge. We describe the tracking subsystem and show how it functions in the context of the larger perception system. The tracking subsystem gives the robot the ability to understand complex scenarios of urban driving to safely operate in the proximity of other vehicles. The tracking system fuses sensor data from more than a dozen sensors with additional information about the environment to generate a coherent situational model. A novel multiple-model approach is used to track the objects based on the quality of the sensor data. Finally, the architecture of the tracking subsystem explicitly abstracts each of the levels of processing. The subsystem can easily be extended by adding new sensors and validation algorithms."}
{"_id": "6a694487451957937adddbd682d3851fabd45626", "title": "Question answering passage retrieval using dependency relations", "text": "State-of-the-art question answering (QA) systems employ term-density ranking to retrieve answer passages. Such methods often retrieve incorrect passages as relationships among question terms are not considered. Previous studies attempted to address this problem by matching dependency relations between questions and answers. They used strict matching, which fails when semantically equivalent relationships are phrased differently. We propose fuzzy relation matching based on statistical models. We present two methods for learning relation mapping scores from past QA pairs: one based on mutual information and the other on expectation maximization. Experimental results show that our method significantly outperforms state-of-the-art density-based passage retrieval methods by up to 78% in mean reciprocal rank. Relation matching also brings about a 50% improvement in a system enhanced by query expansion."}
{"_id": "a3676ecae39afd35b1f7075fc630e28cfbb5a188", "title": "Nitro: Hardware-Based System Call Tracing for Virtual Machines", "text": "Virtual machine introspection (VMI) describes the method of monitoring and analyzing the state of a virtual machine from the hypervisor level. This lends itself well to security applications, though the hardware virtualization support from Intel and AMD was not designed with VMI in mind. This results in many challenges for developers of hardware-supported VMI systems. This paper describes the design and implementation of our prototype framework, Nitro, for system call tracing and monitoring. Since Nitro is a purely VMI-based system, it remains isolated from attacks originating within the guest operating system and is not directly visible from within the guest. Nitro is extremely flexible as it supports all three system call mechanisms provided by the Intel x86 architecture and has been proven to work in Windows, Linux, 32-bit, and 64-bit environments. The high performance of our system allows for real-time capturing and dissemination of data without hindering usability. This is supported by extensive testing with various guest operating systems. In addition, Nitro is resistant to circumvention attempts due to a construction called hardware rooting. Finally, Nitro surpasses similar systems in both performance and functionality."}
{"_id": "4b31ec67990a5fa81e7c1cf9fa2dbebcb91ded59", "title": "Adapting Naive Bayes to Domain Adaptation for Sentiment Analysis", "text": "In the community of sentiment analysis, supervised learning techniques have been shown to perform very well. When transferred to another domain, however, a supervised sentiment classifier often performs extremely bad. This is so-called domain-transfer problem. In this work, we attempt to attack this problem by making the maximum use of both the old-domain data and the unlabeled new-domain data. To leverage knowledge from the old-domain data, we proposed an effective measure, i.e., Frequently Co-occurring Entropy (FCE), to pick out generalizable features that occur frequently in both domains and have similar occurring probability. To gain knowledge from the newdomain data, we proposed Adapted Na\u00efve Bayes (ANB), a weighted transfer version of Naive Bayes Classifier. The experimental results indicate that proposed approach could improve the performance of base classifier dramatically, and even provide much better performance than the transfer-learning baseline, i.e. the Na\u00efve Bayes Transfer Classifier (NTBC)."}
{"_id": "654952a9cc4f3526dda8adf220a50a27a5c91449", "title": "DendroPy: a Python library for phylogenetic computing", "text": "UNLABELLED\nDendroPy is a cross-platform library for the Python programming language that provides for object-oriented reading, writing, simulation and manipulation of phylogenetic data, with an emphasis on phylogenetic tree operations. DendroPy uses a splits-hash mapping to perform rapid calculations of tree distances, similarities and shape under various metrics. It contains rich simulation routines to generate trees under a number of different phylogenetic and coalescent models. DendroPy's data simulation and manipulation facilities, in conjunction with its support of a broad range of phylogenetic data formats (NEXUS, Newick, PHYLIP, FASTA, NeXML, etc.), allow it to serve a useful role in various phyloinformatics and phylogeographic pipelines.\n\n\nAVAILABILITY\nThe stable release of the library is available for download and automated installation through the Python Package Index site (http://pypi.python.org/pypi/DendroPy), while the active development source code repository is available to the public from GitHub (http://github.com/jeetsukumaran/DendroPy)."}
{"_id": "6a69cab99a68869b2f6361c6a3004657e2deeae4", "title": "Ground plane segmentation for mobile robot visual navigation", "text": "We describe a method of mobile robot monocular visual navigation, which uses multiple visual cues to detect and segment the ground plane in the robot\u2019s field of view. Corner points are tracked through an image sequence and grouped into coplanar regions using a method which we call an H-based tracker. The H-based tracker employs planar homographys and is initialised by 5-point planar projective invariants. This allows us to detect ground plane patches and the colour within such patches is subsequently modelled. These patches are grown by colour classification to give a ground plane segmentation, which is then used as an input to a new variant of the artificial potential field algorithm."}
{"_id": "13c48b8c10022b4b2262c5d12f255e21f566cecc", "title": "Practical design considerations for a LLC multi-resonant DC-DC converter in battery charging applications", "text": "In this paper, resonant tank design procedure and practical design considerations are presented for a high performance LLC multi-resonant dc-dc converter in a two-stage smart battery charger for neighborhood electric vehicle applications. The multi-resonant converter has been analyzed and its performance characteristics are presented. It eliminates both low and high frequency current ripple on the battery, thus maximizing battery life without penalizing the volume of the charger. Simulation and experimental results are presented for a prototype unit converting 390 V from the input dc link to an output voltage range of 48 V to 72 V dc at 650 W. The prototype achieves a peak efficiency of 96%."}
{"_id": "27229aff757b797d0cae7bead5a236431b253b91", "title": "Predictive State Temporal Difference Learning", "text": "We propose a new approach to value function approximation which combines linear temporal difference reinforcement learning with subspace identification. In practical applications, reinforcement learning (RL) is complicated by the fact that state is either high-dimensional or partially observable. Therefore, RL methods are designed to work with features of state rather than state itself, and the success or failure of learning is often determined by the suitability of the selected features. By comparison, subspace identification (SSID) methods are designed to select a feature set which preserves as much information as possible about state. In this paper we connect the two approaches, looking at the problem of reinforcement learning with a large set of features, each of which may only be marginally useful for value function approximation. We introduce a new algorithm for this situation, called Predictive State Temporal Difference (PSTD) learning. As in SSID for predictive state representations, PSTD finds a linear compression operator that projects a large set of features down to a small set that preserves the maximum amount of predictive information. As in RL, PSTD then uses a Bellman recursion to estimate a value function. We discuss the connection between PSTD and prior approaches in RL and SSID. We prove that PSTD is statistically consistent, perform several experiments that illustrate its properties, and demonstrate its potential on a difficult optimal stopping problem."}
{"_id": "21ff1d20dd7b3e6b1ea02036c0176d200ec5626d", "title": "Loss Max-Pooling for Semantic Image Segmentation", "text": "We introduce a novel loss max-pooling concept for handling imbalanced training data distributions, applicable as alternative loss layer in the context of deep neural networks for semantic image segmentation. Most real-world semantic segmentation datasets exhibit long tail distributions with few object categories comprising the majority of data and consequently biasing the classifiers towards them. Our method adaptively re-weights the contributions of each pixel based on their observed losses, targeting under-performing classification results as often encountered for under-represented object classes. Our approach goes beyond conventional cost-sensitive learning attempts through adaptive considerations that allow us to indirectly address both, inter- and intra-class imbalances. We provide a theoretical justification of our approach, complementary to experimental analyses on benchmark datasets. In our experiments on the Cityscapes and Pascal VOC 2012 segmentation datasets we find consistently improved results, demonstrating the efficacy of our approach."}
{"_id": "87982ff47c0614cf40204970208312abe943641f", "title": "Comparing and evaluating the sentiment on newspaper articles: A preliminary experiment", "text": "Recent years have brought a symbolic growth in the volume of research in Sentiment Analysis, mostly on highly subjective text types like movie or product reviews. The main difference these texts have with news articles is that their target is apparently defined and unique across the text. Thence while dealing with news articles, we performed three subtasks namely identifying the target; separation of good and bad news content from the good and bad sentiment expressed on the target and analysis of clearly marked opinion that is expressed explicitly, not needing interpretation or the use of world knowledge. On concluding these tasks, we present our work on mining opinions about three different Indian political parties during elections in the year 2009. We built a Corpus of 689 opinion-rich instances from three different English dailies namely The Hindu, Times of India and Economic Times extracted from 02/ 01/ 2009 to 05/ 01/ 2009 (MM/ DD/ YY). In which (a) we tested the relative suitability of various sentiment analysis methods (both machine learning and lexical based) and (b) we attempted to separate positive or negative opinion from good or bad news. Evaluation includes comparison of three sentiment analysis methods (two machine learning based and one lexical based) and analyzing the choice of certain words used in political text which influence the Sentiments of public in polls. This preliminary experiment will benefit in predicting and forecasting the winning party in forthcoming Indian elections 2014."}
{"_id": "2538e3eb24d26f31482c479d95d2e26c0e79b990", "title": "Natural Language Processing (almost) from Scratch", "text": "We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including: part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements."}
{"_id": "317deb87586baa4ee7c7b5dfc603ebed94d1da07", "title": "Deep Learning for Efficient Discriminative Parsing", "text": "We propose a new fast purely discriminative algorithm for natural language parsing, based on a \u201cdeep\u201d recurrent convolutional graph transformer network (GTN). Assuming a decomposition of a parse tree into a stack of \u201clevels\u201d, the network predicts a level of the tree taking into account predictions of previous levels. Using only few basic text features which leverage word representations from Collobert and Weston (2008), we show similar performance (in F1 score) to existing pure discriminative parsers and existing \u201cbenchmark\u201d parsers (like Collins parser, probabilistic context-free grammars based), with a huge speed advantage."}
{"_id": "0354210007fbe92385acf407549b5cacb41b5835", "title": "Distributed and overlapping representations of faces and objects in ventral temporal cortex.", "text": "The functional architecture of the object vision pathway in the human brain was investigated using functional magnetic resonance imaging to measure patterns of response in ventral temporal cortex while subjects viewed faces, cats, five categories of man-made objects, and nonsense pictures. A distinct pattern of response was found for each stimulus category. The distinctiveness of the response to a given category was not due simply to the regions that responded maximally to that category, because the category being viewed also could be identified on the basis of the pattern of response when those regions were excluded from the analysis. Patterns of response that discriminated among all categories were found even within cortical regions that responded maximally to only one category. These results indicate that the representations of faces and objects in ventral temporal cortex are widely distributed and overlapping."}
{"_id": "04cc04457e09e17897f9256c86b45b92d70a401f", "title": "A latent factor model for highly multi-relational data", "text": "Many data such as social networks, movie preferences or knowledge bases are multi-relational, in that they describe multiple relations between entities. While there is a large body of work focused on modeling these data, modeling these multiple types of relations jointly remains challenging. Further, existing approaches tend to breakdown when the number of these types grows. In this paper, we propose a method for modeling large multi-relational datasets, with possibly thousands of relations. Our model is based on a bilinear structure, which captures various orders of interaction of the data, and also shares sparse latent factors across different relations. We illustrate the performance of our approach on standard tensor-factorization datasets where we attain, or outperform, state-of-the-art results. Finally, a NLP application demonstrates our scalability and the ability of our model to learn efficient and semantically meaningful verb representations."}
{"_id": "052b1d8ce63b07fec3de9dbb583772d860b7c769", "title": "Learning representations by back-propagating errors", "text": "We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal \u2018hidden\u2019 units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure1."}
{"_id": "50a6b2b84a9d11ed168ce6380ff17e76136cdfe7", "title": "A memory insensitive technique for large model simplification", "text": "In this paper we propose three simple, but significant improvements to the OoCS (Out-of-Core Simplification) algorithm of Lindstrom [20] which increase the quality of approximations and extend the applicability of the algorithm to an even larger class of compute systems.The original OoCS algorithm has memory complexity that depends on the size of the output mesh, but no dependency on the size of the input mesh. That is, it can be used to simplify meshes of arbitrarily large size, but the complexity of the output mesh is limited by the amount of memory available. Our first contribution is a version of OoCS that removes the dependency of having enough memory to hold (even) the simplified mesh. With our new algorithm, the whole process is made essentially independent of the available memory on the host computer. Our new technique uses disk instead of main memory, but it is carefully designed to avoid costly random accesses.Our two other contributions improve the quality of the approximations generated by OoCS. We propose a scheme for preserving surface boundaries which does not use connectivity information, and a scheme for constraining the position of the \"representative vertex\" of a grid cell to an optimal position inside the cell."}
{"_id": "22630a79f1c50603c1356f6ac9dc8524a18d4061", "title": "SecondNet: a data center network virtualization architecture with bandwidth guarantees", "text": "In this paper, we propose virtual data center (VDC) as the unit of resource allocation for multiple tenants in the cloud. VDCs are more desirable than physical data centers because the resources allocated to VDCs can be rapidly adjusted as tenants' needs change. To enable the VDC abstraction, we design a data center network virtualization architecture called SecondNet. SecondNet achieves scalability by distributing all the virtual-to-physical mapping, routing, and bandwidth reservation state in server hypervisors. Its port-switching based source routing (PSSR) further makes SecondNet applicable to arbitrary network topologies using commodity servers and switches. SecondNet introduces a centralized VDC allocation algorithm for bandwidth guaranteed virtual to physical mapping. Simulations demonstrate that our VDC allocation achieves high network utilization and low time complexity. Our implementation and experiments show that we can build SecondNet on top of various network topologies, and SecondNet provides bandwidth guarantee and elasticity, as designed."}
{"_id": "5af83b56353c5fba0518c203d192ffb6375cd986", "title": "A deep multiple instance model to predict prostate cancer metastasis from nuclear morphology", "text": "We consider the problem of identifying the patients who are diagnosed with highgrade prostate cancer using the histopathology of tumor in a prostate needle biopsy and are at a very high risk of lethal cancer progression. We hypothesize that the morphology of tumor cell nuclei in digital images from the biopsy can be used to predict tumor aggressiveness and posit the presence of metastasis as a surrogate for disease specific mortality. For this purpose, we apply a compositional multiinstance learning approach which encodes images of nuclei through a convolutional neural network, then predicts the presence of metastasis from sets of encoded nuclei. Through experiments on prostate needle biopsies (PNBX) from a patient cohort with known presence (M1 stage, n = 85) or absence (M0 stage, n = 86) of metastatic disease, we obtained an average area under the receiver operating characteristic curve of 0.71\u00b1 0.08 for predicting metastatic cases. These results support our hypothesis that information related to metastatic capacity of prostate cancer cells can be obtained through analysis of nuclei and establish a baseline for future research aimed at predicting the risk of future metastatic disease at a time when it might be preventable."}
{"_id": "d055b799c521b28bd4d6bf2fc905819d8e88207c", "title": "Design of a dual circular polarization microstrip patch array antenna", "text": "Design of a microstrip array antenna to achieve dual circular polarization is proposed in this paper. The proposed antenna is a 2\u00d72 array antenna where each patch element is circularly polarized. The feed network has microstrip lines, cross slot lines and air-bridges. The array antenna can excite both right-hand circular polarization (RHCP) and left-hand circular polarization (LHCP) without using any 90\u00b0 hybrid circuit or PIN diode. \u201cBoth-sided MIC Technology\u201d is used to design the feed network as it provides flexibility to place several types of transmission lines on both sides of the dielectric substrate. The design frequency of the proposed array antenna is 10 GHz. The simulated return loss exhibits an impedance bandwidth of greater than 5% and the 3-dB axial ratio bandwidths for both RHCP and LHCP are approximately 1.39%. The structure and the basic operation along with the simulation results of the proposed dual circularly polarized array antenna are demonstrated in this paper."}
{"_id": "2b72dd0d33e0892436394ef7642c6b517a1c71fd", "title": "Matching Visual Saliency to Confidence in Plots of Uncertain Data", "text": "Conveying data uncertainty in visualizations is crucial for preventing viewers from drawing conclusions based on untrustworthy data points. This paper proposes a methodology for efficiently generating density plots of uncertain multivariate data sets that draws viewers to preattentively identify values of high certainty while not calling attention to uncertain values. We demonstrate how to augment scatter plots and parallel coordinates plots to incorporate statistically modeled uncertainty and show how to integrate them with existing multivariate analysis techniques, including outlier detection and interactive brushing. Computing high quality density plots can be expensive for large data sets, so we also describe a probabilistic plotting technique that summarizes the data without requiring explicit density plot computation. These techniques have been useful for identifying brain tumors in multivariate magnetic resonance spectroscopy data and we describe how to extend them to visualize ensemble data sets."}
{"_id": "7d9facefffc720079d837aa421ab79d4856e2c88", "title": "Lightweight, High-Force Gripper Inspired by Chuck Clamping Devices", "text": "In this letter, we present a novel gripper, whose design was inspired by chuck clamping devices, for transferring heavy objects and assembling parts precisely in industrial applications. The developed gripper is lightweight (0.9 kg), can manipulate heavy payloads (over 23 kgf), and can automatically align its position and posture via a grasping motion. A fingertip design criterion is presented for the position alignment, while a control strategy is presented for the posture alignment. With one actuator, this gripper realized the above features. This letter describes the mathematical analyses and experiments used to validate these key metrics."}
{"_id": "29f07c86886af63f9bf43d089373ac1f7a95ea0e", "title": "A Multiarmed Bandit Incentive Mechanism for Crowdsourcing Demand Response in Smart Grids", "text": "Demand response is a critical part of renewable integration and energy cost reduction goals across the world. Motivated by the need to reduce costs arising from electricity shortage and renewable energy fluctuations, we propose a novel multiarmed bandit mechanism for demand response (MAB-MDR) which makes monetary offers to strategic consumers who have unknown response characteristics, to incetivize reduction in demand. Our work is inspired by a novel connection we make to crowdsourcing mechanisms. The proposed mechanism incorporates realistic features of the demand response problem including time varying and quadratic cost function. The mechanism marries auctions, that allow users to report their preferences, with online algorithms, that allow distribution companies to learn user-specific parameters. We show that MAB-MDR is dominant strategy incentive compatible, individually rational, and achieves sublinear regret. Such mechanisms can be effectively deployed in smart grids using new information and control architecture innovations and lead to welcome savings in energy costs."}
{"_id": "8e7fdb9d3fc0fef1f82f126072fc675e01ce5873", "title": "Clarifying Hypotheses by Sketching Data", "text": "Discussions between data analysts and colleagues or clients with no statistical background are difficult, as the analyst often has to teach and explain their statistical and domain knowledge. We investigate work practices of data analysts who collaborate with non-experts, and report findings regarding types of analysis, collaboration and availability of data. Based on these, we have created a tool to enhance collaboration between data analysts and their clients in the initial stages of the analytical process. Sketching time series data allows analysts to discuss expectations for later analysis. We propose function composition rather than freehand sketching, in order to structure the analyst-client conversation by independently expressing expected features in the data. We evaluate the usability of our prototype through two small studies, and report on user feedback for future iterations."}
{"_id": "0567283bc9affd475eae7cebaae658692a64d5a4", "title": "Intelligent Widgets for Intuitive Interaction and Coordination in Smart Home Environments", "text": "The intelligent home environment is a well-established example of the Ambient Intelligence application domain. A variety of sensors and actuators can be used to have the home environment adapt towards changing circumstances and user preferences. However, the complexity of how these intelligent home automation systems operate is often beyond the comprehension of non-technical users, and adding new technology to an existing infrastructure is often a burden. In this paper, we present a home automation framework designed based on smart widgets with a model driven methodology that raises the level of abstraction to configure home automation equipment. It aims to simplify user-level home automation management by mapping high-level home automation concepts onto a low-level composition and configuration of the automation building blocks with a reverse mapping to simplify the integration of new equipment into existing home automation systems. Experiments have shown that the mappings we proposed are sufficient to represent household appliances to the end user in a simple way and that new mappings can easily be added to our framework."}
{"_id": "a79f43246bed540084ca2d1fcf99a68c69820747", "title": "A Hybrid Approach to Detect and Localize Texts in Natural Scene Images", "text": "Text detection and localization in natural scene images is important for content-based image analysis. This problem is challenging due to the complex background, the non-uniform illumination, the variations of text font, size and line orientation. In this paper, we present a hybrid approach to robustly detect and localize texts in natural scene images. A text region detector is designed to estimate the text existing confidence and scale information in image pyramid, which help segment candidate text components by local binarization. To efficiently filter out the non-text components, a conditional random field (CRF) model considering unary component properties and binary contextual component relationships with supervised parameter learning is proposed. Finally, text components are grouped into text lines/words with a learning-based energy minimization method. Since all the three stages are learning-based, there are very few parameters requiring manual tuning. Experimental results evaluated on the ICDAR 2005 competition dataset show that our approach yields higher precision and recall performance compared with state-of-the-art methods. We also evaluated our approach on a multilingual image dataset with promising results."}
{"_id": "1b9de2d1e74fbe49bf852fa495f63c31bb038a31", "title": "A Pneumatic-Driven Haptic Glove with Force and Tactile Feedback", "text": "The advent of Oculus Rift indicates the start of a booming era of virtual reality. In order to increase the immersive feeling of interaction with the virtual world, haptic devices allow us to touch and manipulate virtual objects in an intuitive way. In this paper, we introduce a portable and low-cost haptic glove that provides both force and tactile feedback using a direct-control pneumatic concept. To produce force feedback, two inlet ports of a double acting pneumatic cylinder are opened and closed via solenoid DC valves through Pulse-width modulation (PWM) technique. For tactile feedback, an air bladder is actuated using a diaphragm pump via PWM operated solenoid valve. Experiments on a single finger prototype validated that the glove can provide force and tactile feedback with sufficient moving range of the finger joints. The maximum continuous force is 9 Newton and the response time is less than 400ms. The glove is light weighted and easy to be mounted on the index finger. The proposed glove could be potentially used for virtual reality grasping scenarios and for teleoperation of a robotic hand for handling hazardous objects."}
{"_id": "c6504fbbfcf32854e0bd35eb70539cafbecf332f", "title": "Client-side rate adaptation scheme for HTTP adaptive streaming based on playout buffer model", "text": "HTTP Adaptive Streaming (HAS) is an adaptive bitrate streaming technique which is able to adapt to the network conditions using conventional HTTP web servers. An HAS player periodically requests pre-encoded video chunks by sending an HTTP GET message. When the downloading a video chunk is finished, the player estimates the network bandwidth by calculating the goodput and adjusts the video quality based on its estimates. However, the bandwidth estimation in application layer is pretty inaccurate due to its architectural limitation. We show that inaccurate bandwidth estimation in rate adaptation may incur serious rate oscillations which is poor quality-of-experience for users. In this paper, we propose a buffer-based rate adaptation scheme which eliminates the bandwidth estimation step in rate adaptation to provide a smooth playback of HTTP-based streaming. We evaluate the performance of the HAS player implemented in the ns-3 network simulator. Our simulation results show that the proposed scheme significantly improves the stability by replacing bandwidth estimation with buffer occupancy estimation."}
{"_id": "03dc771ebf5b7bc3ccf8c4689d918924da524fe4", "title": "Approximating dynamic global illumination in image space", "text": "Physically plausible illumination at real-time framerates is often achieved using approximations. One popular example is ambient occlusion (AO), for which very simple and efficient implementations are used extensively in production. Recent methods approximate AO between nearby geometry in screen space (SSAO). The key observation described in this paper is, that screen-space occlusion methods can be used to compute many more types of effects than just occlusion, such as directional shadows and indirect color bleeding. The proposed generalization has only a small overhead compared to classic SSAO, approximates direct and one-bounce light transport in screen space, can be combined with other methods that simulate transport for macro structures and is visually equivalent to SSAO in the worst case without introducing new artifacts. Since our method works in screen space, it does not depend on the geometric complexity. Plausible directional occlusion and indirect lighting effects can be displayed for large and fully dynamic scenes at real-time frame rates."}
{"_id": "1b656883bed80fdec1d109ae04873540720610fa", "title": "Development and validation of the childhood narcissism scale.", "text": "In this article, we describe the development and validation of a short (10 item) but comprehensive self-report measure of childhood narcissism. The Childhood Narcissism Scale (CNS) is a 1-dimensional measure of stable individual differences in childhood narcissism with strong internal consistency reliability (Studies 1-4). The CNS is virtually unrelated to conventional measures of self-esteem but is positively related to self-appraised superiority, social evaluative concern and self-esteem contingency, agentic interpersonal goals, and emotional extremity (Study 5). Furthermore, the CNS is negatively related to empathic concern and positively related to aggression following ego threat (Study 6). These results suggest that childhood narcissism has similar psychological and interpersonal correlates as adult narcissism. The CNS provides researchers a convenient tool for measuring narcissism in children and young adolescents with strong preliminary psychometric characteristics."}
{"_id": "92c1f538613ff4923a8fa3407a16bed4aed361ac", "title": "Progressive Analytics: A Computation Paradigm for Exploratory Data Analysis", "text": "Exploring data requires a fast feedback loop from the analyst to the system, with a latency below about 10 seconds because of human cognitive limitations. When data becomes large or analysis becomes complex, sequential computations can no longer be completed in a few seconds and data exploration is severely hampered. This article describes a novel computation paradigm called Progressive Computation for Data Analysis or more concisely Progressive Analytics, that brings at the programming language level a low-latency guarantee by performing computations in a progressive fashion. Moving this progressive computation at the language level relieves the programmer of exploratory data analysis systems from implementing the whole analytics pipeline in a progressive way from scratch, streamlining the implementation of scalable exploratory data analysis systems. This article describes the new paradigm through a prototype implementation called ProgressiVis, and explains the requirements it implies through examples."}
{"_id": "bb6508fb4457f09b5e146254220247bc4ea7b71c", "title": "Multiclass Classification of Driver Perceived Workload Using Long Short-Term Memory based Recurrent Neural Network", "text": "Human sensing enables intelligent vehicles to provide driver-adaptive support by classifying perceived workload into multiple levels. Objective of this study is to classify driver workload associated with traffic complexity into five levels. We conducted driving experiments in systematically varied traffic complexity levels in a simulator. We recorded driver physiological signals including electrocardiography, electrodermal activity, and electroencephalography. In addition, we integrated driver performance and subjective workload measures. Deep learning based models outperform statistical machine learning methods when dealing with dynamic time-series data with variable sequence lengths. We show that our long short-term memory based recurrent neural network model can classify driver perceived-workload into five classes with an accuracy of 74.5%. Since perceived workload differ between individual drivers for the same traffic situation, our results further highlight the significance of including driver characteristics such as driving style and workload sensitivity to achieve higher classification accuracy."}
{"_id": "c8e7c2a9201eb6217e62266c9d8c061b5394866e", "title": "Data mining for censored time-to-event data: a Bayesian network model for predicting cardiovascular risk from electronic health record data", "text": "Models for predicting the risk of cardiovascular (CV) events based on individual patient characteristics are important tools for managing patient care. Most current and commonly used risk prediction models have been built from carefully selected epidemiological cohorts. However, the homogeneity and limited size of such cohorts restrict the predictive power and generalizability of these risk models to other populations. Electronic health data (EHD) from large health care systems provide access to data on large, heterogeneous, and contemporaneous patient populations. The unique features and challenges of EHD, including missing risk factor information, non-linear relationships between risk factors and CV event outcomes, and differing effects from different patient subgroups, demand novel machine learning approaches to risk model development. In this paper, we present a machine learning approach based on Bayesian networks trained on EHD to predict the probability of having a CV event within 5\u00a0years. In such data, event status may be unknown for some individuals, as the event time is right-censored due to disenrollment and incomplete follow-up. Since many traditional data mining methods are not well-suited for such data, we describe how to modify both modeling and assessment techniques to account for censored observation times. We show that our approach can lead to better predictive performance than the Cox proportional hazards model (i.e., a regression-based approach commonly used for censored, time-to-event data) or a Bayesian network with ad hoc approaches to right-censoring. Our techniques are motivated by and illustrated on data from a large US Midwestern health care system."}
{"_id": "fc5a530ea80a3295d0872b85c3991a4d81336a61", "title": "Voice Activated Virtual Assistants Personality Perceptions and Desires- Comparing Personality Evaluation Frameworks", "text": "Currently, Voice Activated Virtual Assistants and Artificial Intelligence technologies are not just about performance or the functionalities they can carry out, it is also about the associated personality. This empirical multi-country study explores the personality perceptions of current VAVA users regarding these technologies. Since this is a rather unexplored territory for research, this study has identified two well-established personality evaluation methodologies, Aaker\u2019s traits approach and Jung\u2019s archetypes, to investigate current perceived personality and future desired personality of the four main Voice Activated Virtual Assistants: Siri, Google Assistant, Cortana and Alexa. Following are a summary of results by each methodology, and an analysis of the commonalities found between the two methodologies."}
{"_id": "f64d18d4bad30ea544aa828eacfa83208f2b7815", "title": "Conceptualizing Context for Pervasive Advertising", "text": "Profile-driven personalization based on socio-demographics is currently regarded as the most convenient base for successful personalized advertising. However, signs point to the dormant power of context recognition: Advertising systems that can adapt to the situational context of a consumer will rapidly gain importance. While technologies that can sense the environment are increasingly advanced, questions such as what to sense and how to adapt to a consumer\u2019s context are largely unanswered. In this chapter, we analyze the purchase context of a retail outlet and conceptualize it such that adaptive pervasive advertising applications really deliver on their potential: showing the right message at the right time to the right recipient. full version published as: Bauer, Christine & Spiekermann, Sarah (2011). Conceptualizing Context for Pervasive Advertising. In M\u00fcller, J\u00f6rg, Alt, Florian, & Michelis, Daniel (Eds.), Pervasive Advertising (pp. 159-183). London: Springer."}
{"_id": "2e92ddcf2e7a9d6c27875ec442637e13753f21a2", "title": "Self-Soldering Connectors for Modular Robots", "text": "The connection mechanism between neighboring modules is the most critical subsystem of each module in a modular robot. Here, we describe a strong, lightweight, and solid-state connection method based on heating a low melting point alloy to form reversible soldered connections. No external manipulation is required for forming or breaking connections between adjacent connectors, making this method suitable for reconfigurable systems such as self-reconfiguring modular robots. Energy is only consumed when switching connectivity, and the ability to transfer power and signal through the connector is inherent to the method. Soldering connectors have no moving parts, are orders of magnitude lighter than other connectors, and are readily mass manufacturable. The mechanical strength of the connector is measured as 173 N, which is enough to support many robot modules, and hundreds of connection cycles are performed before failure."}
{"_id": "459fbc416eb9a55920645c741b1e4cce95f39786", "title": "The Numerics of GANs", "text": "In this paper, we analyze the numerics of common algorithms for training Generative Adversarial Networks (GANs). Using the formalism of smooth two-player games we analyze the associated gradient vector field of GAN training objectives. Our findings suggest that the convergence of current algorithms suffers due to two factors: i) presence of eigenvalues of the Jacobian of the gradient vector field with zero real-part, and ii) eigenvalues with big imaginary part. Using these findings, we design a new algorithm that overcomes some of these limitations and has better convergence properties. Experimentally, we demonstrate its superiority on training common GAN architectures and show convergence on GAN architectures that are known to be notoriously hard to train."}
{"_id": "40c8e3894314581d0241162374602d68a6d1f38c", "title": "Culture and Institutions", "text": "A growing body of empirical work measuring different types of cultural traits has shown that culture matters for a variety of economic outcomes. This paper focuses on one specific aspect of the relevance of culture: its relationship to institutions. We review work with a theoretical, empirical, and historical bent to assess the presence of a two-way causal effect between culture and institutions. 1 We thank Benjamin Friedman and Andrei Shleifer for useful conversations and Janet Currie, Steven Durlauf, and six anonymous referees for excellent comments."}
{"_id": "6b894324281bd4b0251549c0e40802d6ca3d0b8f", "title": "Challenges and prospects of lithium-sulfur batteries.", "text": "Electrical energy storage is one of the most critical needs of 21st century society. Applications that depend on electrical energy storage include portable electronics, electric vehicles, and devices for renewable energy storage from solar and wind. Lithium-ion (Li-ion) batteries have the highest energy density among the rechargeable battery chemistries. As a result, Li-ion batteries have proven successful in the portable electronics market and will play a significant role in large-scale energy storage. Over the past two decades, Li-ion batteries based on insertion cathodes have reached a cathode capacity of \u223c250 mA h g(-1) and an energy density of \u223c800 W h kg(-1), which do not meet the requirement of \u223c500 km between charges for all-electric vehicles. With a goal of increasing energy density, researchers are pursuing alternative cathode materials such as sulfur and O2 that can offer capacities that exceed those of conventional insertion cathodes, such as LiCoO2 and LiMn2O4, by an order of magnitude (>1500 mA h g(-1)). Sulfur, one of the most abundant elements on earth, is an electrochemically active material that can accept up to two electrons per atom at \u223c2.1 V vs Li/Li(+). As a result, sulfur cathode materials have a high theoretical capacity of 1675 mA h g(-1), and lithium-sulfur (Li-S) batteries have a theoretical energy density of \u223c2600 W h kg(-1). Unlike conventional insertion cathode materials, sulfur undergoes a series of compositional and structural changes during cycling, which involve soluble polysulfides and insoluble sulfides. As a result, researchers have struggled with the maintenance of a stable electrode structure, full utilization of the active material, and sufficient cycle life with good system efficiency. Although researchers have made significant progress on rechargeable Li-S batteries in the last decade, these cycle life and efficiency problems prevent their use in commercial cells. To overcome these persistent problems, researchers will need new sulfur composite cathodes with favorable properties and performance and new Li-S cell configurations. In this Account, we first focus on the development of novel composite cathode materials including sulfur-carbon and sulfur-polymer composites, describing the design principles, structure and properties, and electrochemical performances of these new materials. We then cover new cell configurations with carbon interlayers and Li/dissolved polysulfide cells, emphasizing the potential of these approaches to advance capacity retention and system efficiency. Finally, we provide a brief survey of efficient electrolytes. The Account summarizes improvements that could bring Li-S technology closer to mass commercialization."}
{"_id": "e8691980eeb827b10cdfb4cc402b3f43f020bc6a", "title": "Segmentation Guided Attention Networks for Visual Question Answering", "text": "In this paper we propose to solve the problem of Visual Question Answering by using a novel segmentation guided attention based network which we call SegAttendNet. We use image segmentation maps, generated by a Fully Convolutional Deep Neural Network to refine our attention maps and use these refined attention maps to make the model focus on the relevant parts of the image to answer a question. The refined attention maps are used by the LSTM network to learn to produce the answer. We presently train our model on the visual7W dataset and do a category wise evaluation of the 7 question categories. We achieve state of the art results on this dataset and beat the previous benchmark on this dataset by a 1.5% margin improving the question answering accuracy from 54.1% to 55.6% and demonstrate improvements in each of the question categories. We also visualize our generated attention maps and note their improvement over the attention maps generated by the previous best approach."}
{"_id": "07f3f736d90125cb2b04e7408782af411c67dd5a", "title": "Convolutional Neural Network Architectures for Matching Natural Language Sentences", "text": "Semantic matching is of central importance to many natural language tasks [2, 28]. A successful matching algorithm needs to adequately model the internal structures of language objects and the interaction between them. As a step toward this goal, we propose convolutional neural network models for matching two sentences, by adapting the convolutional strategy in vision and speech. The proposed models not only nicely represent the hierarchical structures of sentences with their layerby-layer composition and pooling, but also capture the rich matching patterns at different levels. Our models are rather generic, requiring no prior knowledge on language, and can hence be applied to matching tasks of different nature and in different languages. The empirical study on a variety of matching tasks demonstrates the efficacy of the proposed model on a variety of matching tasks and its superiority to competitor models."}
{"_id": "0af737eae02032e66e035dfed7f853ccb095d6f5", "title": "ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs", "text": "How to model a pair of sentences is a critical issue in many NLP tasks such as answer selection (AS), paraphrase identification (PI) and textual entailment (TE). Most prior work (i) deals with one individual task by fine-tuning a specific system; (ii) models each sentence\u2019s representation separately, rarely considering the impact of the other sentence; or (iii) relies fully on manually designed, task-specific linguistic features. This work presents a general Attention Based Convolutional Neural Network (ABCNN) for modeling a pair of sentences. We make three contributions. (i) The ABCNN can be applied to a wide variety of tasks that require modeling of sentence pairs. (ii) We propose three attention schemes that integrate mutual influence between sentences into CNNs; thus, the representation of each sentence takes into consideration its counterpart. These interdependent sentence pair representations are more powerful than isolated sentence representations. (iii) ABCNNs achieve state-of-the-art performance on AS, PI and TE tasks. We release code at: https://github.com/yinwenpeng/Answer_Selection."}
{"_id": "1c059493904b2244d2280b8b4c0c7d3ca115be73", "title": "node2vec: Scalable Feature Learning for Networks", "text": "Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations.\n We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks."}
{"_id": "468b9055950c428b17f0bf2ff63fe48a6cb6c998", "title": "A Neural Attention Model for Abstractive Sentence Summarization", "text": "Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines."}
{"_id": "81eb0a1ea90a6f6d5e7f14cb3397a4ee0f77824a", "title": "Question/Answer Matching for CQA System via Combining Lexical and Sequential Information", "text": "Community-based Question Answering (CQA) has become popular in knowledge sharing sites since it allows users to get answers to complex, detailed, and personal questions directly from other users. Large archives of historical questions and associated answers have been accumulated. Retrieving relevant historical answers that best match a question is an essential component of a CQA service. Most state of the art approaches are based on bag-of-words models, which have been proven successful in a range of text matching tasks, but are insufficient for capturing the important word sequence information in short text matching. In this paper, a new architecture is proposed to more effectively model the complicated matching relations between questions and answers. It utilises a similarity matrix which contains both lexical and sequential information. Afterwards the information is put into a deep architecture to find potentially suitable answers. The experimental study shows its potential in improving matching accuracy of question and answer."}
{"_id": "81ff60a35e57e150875cfdde735fe69d19e9fdc4", "title": "Development of attentional networks in childhood", "text": "Recent research in attention has involved three networks of anatomical areas that carry out the functions of orienting, alerting and executive control (including conflict monitoring). There have been extensive cognitive and neuroimaging studies of these networks in adults. We developed an integrated Attention Network Test (ANT) to measure the efficiency of the three networks with adults. We have now adapted this test to study the development of these networks during childhood. The test is a child-friendly version of the flanker task with alerting and orienting cues. We studied the development of the attentional networks in a cross-sectional experiment with four age groups ranging from 6 through 9 (Experiment 1). In a second experiment, we compared children (age 10 years) and adult performance in both child and adults versions of the ANT. Reaction time and accuracy improved at each age interval and positive values were found for the average efficiency of each of the networks. Alertness showed evidence of change up to and beyond age 10, while conflict scores appear stable after age seven and orienting scores do not change in the age range studied. A final experiment with forty 7-year-old children suggested that children like adults showed independence between the three networks under some conditions."}
{"_id": "6c1cabe3f5980cbc50d290c2ed60b9aca624eab8", "title": "Mathematical modelling of infectious diseases.", "text": "INTRODUCTION\nMathematical models allow us to extrapolate from current information about the state and progress of an outbreak, to predict the future and, most importantly, to quantify the uncertainty in these predictions. Here, we illustrate these principles in relation to the current H1N1 epidemic.\n\n\nSOURCES OF DATA\nMany sources of data are used in mathematical modelling, with some forms of model requiring vastly more data than others. However, a good estimation of the number of cases is vitally important.\n\n\nAREAS OF AGREEMENT\nMathematical models, and the statistical tools that underpin them, are now a fundamental element in planning control and mitigation measures against any future epidemic of an infectious disease. Well-parameterized mathematical models allow us to test a variety of possible control strategies in computer simulations before applying them in reality.\n\n\nAREAS OF CONTROVERSY\nThe interaction between modellers and public-health practitioners and the level of detail needed for models to be of use.\n\n\nGROWING POINTS\nThe need for stronger statistical links between models and data.\n\n\nAREAS TIMELY FOR DEVELOPING RESEARCH\nGreater appreciation by the medical community of the uses and limitations of models and a greater appreciation by modellers of the constraints on public-health resources."}
{"_id": "5e22c4362df3b0accbe04517c41848a2b229efd1", "title": "Predicting sports events from past results Towards effective betting on football matches", "text": "A system for predicting the results of football matches that beats the bookmakers\u2019 odds is presented. The predictions for the matches are based on previous results of the teams involved."}
{"_id": "de93c4f886bdf55bfc1bcaefad648d5996ed3302", "title": "Modern Intrusion Detection, Data Mining, and Degrees of Attack Guilt", "text": "This chapter examines the state of modern intrusion detection, with a particular emphasis on the emerging approach of data mining. The discussion paralleIs two important aspects of intrusion detection: general detection strategy (misuse detection versus anomaly detection) and data source (individual hosts versus network trafik). Misuse detection attempts to match known patterns of intrusion , while anomaly detection searches for deviations from normal behavior . Between the two approaches, only anomaly detection has the ability to detect unknown attacks. A particularly promising approach to anomaly detection combines association mining with other forms of machine learning such as classification. Moreover, the data source that an intrusion detection system employs significantly impacts the types of attacks it can detect. There is a tradeoff in the level of detailed information available verD. Barbar\u00e1 et al. (ed .), Applications of Data Mining in Computer Security \u00a9 Kluwer Academic Publishers 2002 s"}
{"_id": "df25eaf576f55c09bb460d67134646fcb422b2ac", "title": "AGA: Attribute-Guided Augmentation", "text": "We consider the problem of data augmentation, i.e., generating artificial samples to extend a given corpus of training data. Specifically, we propose attributed-guided augmentation (AGA) which learns a mapping that allows to synthesize data such that an attribute of a synthesized sample is at a desired value or strength. This is particularly interesting in situations where little data with no attribute annotation is available for learning, but we have access to a large external corpus of heavily annotated samples. While prior works primarily augment in the space of images, we propose to perform augmentation in feature space instead. We implement our approach as a deep encoder-decoder architecture that learns the synthesis function in an end-to-end manner. We demonstrate the utility of our approach on the problems of (1) one-shot object recognition in a transfer-learning setting where we have no prior knowledge of the new classes, as well as (2) object-based one-shot scene recognition. As external data, we leverage 3D depth and pose information from the SUN RGB-D dataset. Our experiments show that attribute-guided augmentation of high-level CNN features considerably improves one-shot recognition performance on both problems."}
{"_id": "b52abc5f401a6dec62d650f5a2a500f469b9a7c0", "title": "A Case Study on Barriers and Enhancements of the PET Bottle-to-Bottle Recycling Systems in Germany and Sweden", "text": "Problem: The demand of beverages in PET bottles is constantly increasing. In this context, environmental, technological and regulatory aspects set a stronger focus on recycling. Generally, the reuse of recycled material from post-consumer PET bottles in bottle-to-bottle applications is seen as least environmentally harmful. However, closedloop systems are not widely implemented in Europe. Previous research mainly focuses on open-loop recycling systems and generally lacks discussion about the current German and Swedish systems and their challenges. Furthermore, previous studies lack theoretical and practical enhancements for bottle-to-bottle recycling from a managerial perspective. Purpose: The purpose of this study is to compare the PET bottle recycling systems in Germany and Sweden, analyse the main barriers and develop enhancements for closedloop systems. Method: This qualitative study employs a case study strategy about the two cases of Germany and Sweden. In total, 14 semi-structured interviews are conducted with respondents from different industry sectors within the PET bottle recycling systems. The empirical data is categorised and then analysed by pattern matching with the developed theoretical framework. Conclusion: Due to the theoretical and practical commitment to closed-loop recycling, the Swedish PET bottle recycling system outperforms the Germany system. In Germany, bottle-to-bottle recycling is currently performed on a smaller scale without a unified system. The main barriers for bottle-to-bottle recycling are distinguished into (1) quality and material factors, (2) regulatory and legal factors, (3) economic and market factors and (4) factors influenced by consumers. The enhancements for the systems are (1) quality and material factors, (2) regulatory and legal factors, (3) recollection factors and (4) expanding factors. Lastly, the authors provide further recommendations, which are (1) a recycling content symbol on bottle labels, (2) a council for bottle quality in Germany, (3) a quality seal for the holistic systems, (4) a reduction of transportation in Sweden and (5) an increase of consumer awareness on PET bottle consumption."}
{"_id": "9e00005045a23f3f6b2c9fca094930f8ce42f9f6", "title": "Managing Portfolios of Development Projects in a Complex Environment How the UN assign priorities to Programs at the Country", "text": ""}
{"_id": "2ec2f8cd6cf1a393acbc7881b8c81a78269cf5f7", "title": "Learning Image Embeddings using Convolutional Neural Networks for Improved Multi-Modal Semantics", "text": "We construct multi-modal concept representations by concatenating a skip-gram linguistic representation vector with a visual concept representation vector computed using the feature extraction layers of a deep convolutional neural network (CNN) trained on a large labeled object recognition dataset. This transfer learning approach brings a clear performance gain over features based on the traditional bag-of-visual-word approach. Experimental results are reported on the WordSim353 and MEN semantic relatedness evaluation tasks. We use visual features computed using either ImageNet or ESP Game images."}
{"_id": "94d1a665c0c7fbd017c9f3c50d35992e1c0c1ed0", "title": "Molecular and Morphological Characterization of Aphelenchoides fuchsi sp. n. (Nematoda: Aphelenchoididae) Isolated from Pinus eldarica in Western Iran.", "text": "Aphelenchoides fuchsi sp. n. is described and illustrated from bark and wood samples of a weakened Mondell pine in Kermanshah Province, western Iran. The new species has body length of 332 to 400 \u00b5m (females) and 365 to 395 \u00b5m (males). Lip region set off from body contour. The cuticle is weakly annulated, and there are four lines in the lateral field. The stylet is 8 to 10 \u03bcm long and has small basal swellings. The excretory pore is located ca one body diam. posterior to metacorpus valve or 51 to 62 \u03bcm from the head. The postuterine sac well developed (60-90 \u00b5m). Spicules are relatively short (15-16 \u03bcm in dorsal limb) with apex and rostrum rounded, well developed, and the end of the dorsal limb clearly curved ventrad like a hook. The male tail has usual three pairs of caudal papillae (2+2+2) and a well-developed mucro. The female tail is conical, terminating in a complicated step-like projection, usually with many tiny nodular protuberances. The new species belongs to the Group 2 sensu Shahina, category of Aphelenchoides species. Phylogenetic analysis based on small subunit (SSU) and partial large subunit (LSU) sequences of rRNA supported the morphological results."}
{"_id": "4d40a715a51bcca554915ecc5d88005fd56dc1e5", "title": "The future of seawater desalination: energy, technology, and the environment.", "text": "In recent years, numerous large-scale seawater desalination plants have been built in water-stressed countries to augment available water resources, and construction of new desalination plants is expected to increase in the near future. Despite major advancements in desalination technologies, seawater desalination is still more energy intensive compared to conventional technologies for the treatment of fresh water. There are also concerns about the potential environmental impacts of large-scale seawater desalination plants. Here, we review the possible reductions in energy demand by state-of-the-art seawater desalination technologies, the potential role of advanced materials and innovative technologies in improving performance, and the sustainability of desalination as a technological solution to global water shortages."}
{"_id": "6180482e02eb79eca6fd2e9b1ee9111d749d5ca2", "title": "A bidirectional soft pneumatic fabric-based actuator for grasping applications", "text": "THIS paper presents the development of a bidirectional fabric-based soft pneumatic actuator requiring low fluid pressurization for actuation, which is incorporated into a soft robotic gripper to demonstrate its utility. The bidirectional soft fabric-based actuator is able to provide both flexion and extension. Fabrication of the fabric actuators is simple as compared to the steps involved in traditional silicone-based approach. In addition, the fabric actuators are able to generate comparably larger vertical grip resistive force at lower operating pressure than elastomeric actuators and 3D-printed actuators, being able to generate resistive grip force up to 20N at 120 kPa. Five of the bidirectional soft fabric-based actuators are deployed within a five-fingered soft robotic gripper, complete with five casings and a base. It is capable of grasping a variety of objects with maximum width or diameter closer to its bending curvature. A cutting task involved bimanual manipulation was demonstrated successfully with the gripper. To incorporate intelligent control for such a task, a soft force made completely of compliant material was attached to the gripper, which allows determination of whether the cutting task is completed. To the authors' knowledge, this work is the first study which incorporates two soft robotic grippers for bimanual manipulation with one of the grippers sensorized to provide closed loop control."}
{"_id": "3f0924241a7deba2b40b0c1ea57a2e3d10c57ae0", "title": "Principles of GNSS, inertial, and multisensor integrated navigation systems, 2nd edition [Book review]", "text": "This second edition of Dr. Grove's book (the original was published in 2008) could arguably be considered a new work. At just under 1,000 pages (including the 11 appendices on the DVD), the second edition is 80% longer than the original. Frankly, the word \"book\" hardly seems adequate, considering the wide range of topics covered. \"Mini-encyclopedia\" seems more appropriate. The hardcover portion of the book comprises 18 chapters, and the DVD includes the aforementioned appendices plus 20 fully worked examples, 125 problems or exercises (with answers), and MATLAB routines for the simulation of many of the algorithms discussed in the main text. Here is a brief overview of the contents: \u25b8 Chapters 1\u20133: an overview of the diversity of positioning techniques and navigation systems; fundamentals of coordinate frames, kinematics and earth models; introduction to Kaiman filtering \u25b8 Chapters 4\u20136: inertial sensors, inertial navigation, and lower-cost dead reckoning systems \u25b8 Chapters 7\u201312: principles of radio positioning, short-, medium-, and long-range radio navigation, as well as extensive coverage of global navigation satellite systems (GNSS) \u25b8 Chapter 13: environmental feature matching. \u25b8 Chapters 14\u201316: various integration topics, including inertial navigation system (INS)/GNSS integration, alignment, zero-velocity updates, and multisensor integration \u25b8 Chapter 17: fault detection. \u25b8 Chapter 18: applications and trends. In summary, this book is an excellent reference (with numerous nuggets of wisdom) that should be readily handy on the shelf of every practicing navigation engineer. In the hands of an experienced instructor, the book will also serve students as a great textbook. However, the lack of examples integrated in the main text makes it difficult for the book to serve as a self-study guide for those that are new to the field."}
{"_id": "b0e7d36c94935fadf3c514903e4340eaa415e4ee", "title": "True self-configuration for the IoT", "text": "For the Internet of Things to finally become a reality, obstacles on different levels need to be overcome. This is especially true for the upcoming challenge of leaving the domain of technical experts and scientists. Devices need to connect to the Internet and be able to offer services. They have to announce and describe these services in machine-understandable ways so that user-facing systems are able to find and utilize them. They have to learn about their physical surroundings, so that they can serve sensing or acting purposes without explicit configuration or programming. Finally, it must be possible to include IoT devices in complex systems that combine local and remote data, from different sources, in novel and surprising ways. We show how all of that is possible today. Our solution uses open standards and state-of-the art protocols to achieve this. It is based on 6LowPAN and CoAP for the communications part, semantic web technologies for meaningful data exchange, autonomous sensor correlation to learn about the environment, and software built around the Linked Data principles to be open for novel and unforeseen applications."}
{"_id": "a8e656fe16825c47a41df9b28e0c97d4bc8fa58f", "title": "From turtles to Tangible Programming Bricks: explorations in physical language design", "text": "This article provides a historical overview of educational computing research at MIT from the mid-1960s to the present day, focusing on physical interfaces. It discusses some of the results of this research: electronic toys that help children develop advanced modes of thinking through free-form play. In this historical context, the article then describes and discusses the author\u2019s own research into tangible programming, culminating in the development of the Tangible Programming Bricks system\u2014a platform for creating microworlds for children to explore computation and scientific thinking."}
{"_id": "f83a207712fd4cf41aded79e9e6c4345ba879128", "title": "Ray: A Distributed Framework for Emerging AI Applications", "text": "The next generation of AI applications will continuously interact with the environment and learn from these interactions. These applications impose new and demanding systems requirements, both in terms of performance and flexibility. In this paper, we consider these requirements and present Ray\u2014a distributed system to address them. Ray implements a unified interface that can express both task-parallel and actor-based computations, supported by a single dynamic execution engine. To meet the performance requirements, Ray employs a distributed scheduler and a distributed and fault-tolerant store to manage the system\u2019s control state. In our experiments, we demonstrate scaling beyond 1.8 million tasks per second and better performance than existing specialized systems for several challenging reinforcement learning applications."}
{"_id": "aa2213a9f39736f80ccc54b9096e414682afa082", "title": "Wave-front Transformation with Gradient Metasurfaces", "text": "Relying on abrupt phase discontinuities, metasurfaces characterized by a transversely inhomogeneous surface impedance profile have been recently explored as an ultrathin platform to generate arbitrary wave fronts over subwavelength thicknesses. Here, we outline fundamental limitations of passive gradient metasurfaces in molding the impinging wave and show that local phase compensation is essentially insufficient to realize arbitrary wave manipulation, but full-wave designs should be considered. These findings represent a critical step towards realistic and highly efficient conformal wave manipulation beyond the scope of ray optics, enabling unprecedented nanoscale light molding."}
{"_id": "8641be8daff5b24e98a0d68138a61456853aef82", "title": "Adaptation impact and environment models for architecture-based self-adaptive systems", "text": "Self-adaptive systems have the ability to adapt their behavior to dynamic operating conditions. In reaction to changes in the environment, these systems determine the appropriate corrective actions based in part on information about which action will have the best impact on the system. Existing models used to describe the impact of adaptations are either unable to capture the underlying uncertainty and variability of such dynamic environments, or are not compositional and described at a level of abstraction too low to scale in terms of specification effort required for non-trivial systems. In this paper, we address these shortcomings by describing an approach to the specification of impact models based on architectural system descriptions, which at the same time allows us to represent both variability and uncertainty in the outcome of adaptations, hence improving the selection of the best corrective action. The core of our approach is a language equipped with a formal semantics defined in terms of Discrete Time Markov Chains that enables us to describe both the impact of adaptation tactics, as well as the assumptions about the environment. To validate our approach, we show how employing our language can improve the accuracy of predictions used for decision-making in the Rainbow framework for architecture-based self-adaptation."}
{"_id": "a65e815895bed510c0549957ce6baa129c909813", "title": "Induction of Root and Pattern Lexicon for Unsupervised Morphological Analysis of Arabic", "text": "We propose an unsupervised approach to learning non-concatenative morphology, which we apply to induce a lexicon of Arabic roots and pattern templates. The approach is based on the idea that roots and patterns may be revealed through mutually recursive scoring based on hypothesized pattern and root frequencies. After a further iterative refinement stage, morphological analysis with the induced lexicon achieves a root identification accuracy of over 94%. Our approach differs from previous work on unsupervised learning of Arabic morphology in that it is applicable to naturally-written, unvowelled text."}
{"_id": "5da41b7d7b1963cd1e86d99b4d9b86ad6d7a227a", "title": "An Unequal Wilkinson Power Divider for a Frequency and Its First Harmonic", "text": "This letter presents a Wilkinson power divider operating at a frequency and its first harmonic with unequal power divider ratio. To obtain the unequal property, four groups of 1/6 wavelength transmission lines with different characteristic impedances are needed to match all ports. Theoretically, closed-form equations for the design are derived based on transmission line theory. Experimental results have indicated that all the features of this novel power divider can be fulfilled at f 0 and 2f 0 simultaneously."}
{"_id": "6cd700af0b7953345d831c129a5a4e0d927bfa19", "title": "Adaptive Haptic Feedback Steering Wheel for Driving Simulators", "text": "Controlling a virtual vehicle is a sensory-motor activity with a specific rendering methodology that depends on the hardware technology and the software in use. We propose a method that computes haptic feedback for the steering wheel. It is best suited for low-cost, fixed-base driving simulators but can be ported to any driving simulator platform. The goal of our method is twofold. 1) It provides an efficient yet simple algorithm to model the steering mechanism using a quadri-polar representation. 2) This model is used to compute the haptic feedback on top of which a tunable haptic augmentation is adjusted to overcome the lack of presence and the unavoidable simulation loop latencies. This algorithm helps the driver to laterally control the virtual vehicle. We also discuss the experimental results that demonstrate the usefulness of our haptic feedback method."}
{"_id": "3f4e71d715fce70c89e4503d747aad11fcac8a43", "title": "Competing Values in the Era of Digitalization", "text": "This case study examines three different digital innovation projects within Auto Inc -- a large European automaker. By using the competing values framework as a theoretical lens we explore how dynamic capabilities occur in a firm trying to meet increasing demands in originating and innovating from digitalization. In this digitalization process, our study indicates that established socio-technical congruences are being challenged. More so, we pinpoint the need for organizations to find ways to embrace new experimental learning processes in the era of digitalization. While such a change requires long-term commitment and vision, this study presents three informal enablers for such experimental processes these enablers are timing, persistence, and contacts."}
{"_id": "215b4c25ad34557644b1a177bd5aeac8b2e66bc6", "title": "Why Your Encrypted Database Is Not Secure", "text": "Encrypted databases, a popular approach to protecting data from compromised database management systems (DBMS's), use abstract threat models that capture neither realistic databases, nor realistic attack scenarios. In particular, the \"snapshot attacker\" model used to support the security claims for many encrypted databases does not reflect the information about past queries available in any snapshot attack on an actual DBMS.\n We demonstrate how this gap between theory and reality causes encrypted databases to fail to achieve their \"provable security\" guarantees."}
{"_id": "84cf1178a7526355f323ce0442458de3b3744358", "title": "A high performance parallel algorithm for 1-D FFT", "text": "In this paper we propose a parallel high performance FFT algorithm based on a multi-dimensional formulation. We use this to solve a commonly encountered FFT based kernel on a distributed memory parallel machine, the IBM scalable parallel system, SP1. The kernel requires a forward FFT computation of an input sequence, multiplication of the transformed data by a coefficient array, and finally an inverse FFT computation of the resultant data. We show that the multidimensional formulation helps in reducing the communication costs and also improves the single node performance by effectively utilizing the memory system of the node. We implemented this kernel on the IBM SP1 and observed a performance of 1.25 GFLOPS on a 64-node machine."}
{"_id": "1f7594d3be7f5c32e117bc669ed898dd0af88aa3", "title": "Dual-Band Textile MIMO Antenna Based on Substrate-Integrated Waveguide (SIW) Technology", "text": "A dual-band textile antenna for multiple-input-multiple-output (MIMO) applications, based on substrate-integrated waveguide (SIW) technology, is designed. The fundamental SIW cavity mode is designed to resonate at 2.4 GHz. Meanwhile, the second and third modes are modified and combined by careful placement of a via within the cavity to enable wideband coverage in the 5-GHz WLAN band. The simple antenna topology can be fabricated fully using textiles in a planar form, ensuring reliability and comfort. Numerical and experimental results indicate satisfactory antenna performance when worn on body in terms of impedance bandwidth, radiation efficiency, and specific absorption ratio (SAR). In order to validate its potential for MIMO applications, two elements of the proposed SIW antenna are arranged in six configurations to study the performance in terms of mutual coupling and envelope correlation. It is observed that the placement of the shorted edges of the two elements adjacent to each other produces the lowest mutual coupling and consequently the best envelope correlation."}
{"_id": "a2204b1ae6109db076a2b3c8d0db8cf390008812", "title": "Low self-esteem during adolescence predicts poor health, criminal behavior, and limited economic prospects during adulthood.", "text": "Using prospective data from the Dunedin Multidisciplinary Health and Development Study birth cohort, the authors found that adolescents with low self-esteem had poorer mental and physical health, worse economic prospects, and higher levels of criminal behavior during adulthood, compared with adolescents with high self-esteem. The long-term consequences of self-esteem could not be explained by adolescent depression, gender, or socioeconomic status. Moreover, the findings held when the outcome variables were assessed using objective measures and informant reports; therefore, the findings cannot be explained by shared method variance in self-report data. The findings suggest that low self-esteem during adolescence predicts negative real-world consequences during adulthood."}
{"_id": "02bb762c3bd1b3d1ad788340d8e9cdc3d85f33e1", "title": "Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web", "text": "We describe a family of caching protocols for distrib-uted networks that can be used to decrease or eliminate the occurrence of hot spots in the network. Our protocols are particularly designed for use with very large networks such as the Internet, where delays caused by hot spots can be severe, and where it is not feasible for every server to have complete information about the current state of the entire network. The protocols are easy to implement using existing network protocols such as TCP/fF\u2019,and require very little overhead. The protocols work with local control, make efficient use of existing resources, and scale gracefully as the network grows. Our caching protocols are based on a special kind of hashing that we call consistent hashing. Roughly speaking, a consistent hash function is one which changes nr.inimaflyas the range of the function changes. Through the development of good consistent hash functions, we are able to develop caching protocols which do not require users to have a current or even consistent view of the network. We believe that consistent hash functions may eventually prove to be useful in other applications such as distributed name servers and/or quorum systems."}
{"_id": "155ca30ef360d66af571eee47c7f60f300e154db", "title": "In Search of an Understandable Consensus Algorithm", "text": "Raft is a consensus algorithm for managing a replicated log. It produces a result equivalent to (multi-)Paxos, and it is as efficient as Paxos, but its structure is different from Paxos; this makes Raft more understandable than Paxos and also provides a better foundation for building practical systems. In order to enhance understandability, Raft separates the key elements of consensus, such as leader election, log replication, and safety, and it enforces a stronger degree of coherency to reduce the number of states that must be considered. Results from a user study demonstrate that Raft is easier for students to learn than Paxos. Raft also includes a new mechanism for changing the cluster membership, which uses overlapping majorities to guarantee safety."}
{"_id": "2a0d27ae5c82d81b4553ea44e81eb986be5fd126", "title": "Paxos Made Simple", "text": "The Paxos algorithm, when presented in plain English, is very simple."}
{"_id": "3593269a4bf87a7d0f7aba639a50bc74cb288fb1", "title": "Space/Time Trade-offs in Hash Coding with Allowable Errors", "text": "In this paper trade-offs among certain computational factors in hash coding are analyzed. The paradigm problem considered is that of testing a series of messages one-by-one for membership in a given set of messages. Two new hash-coding methods are examined and compared with a particular conventional hash-coding method. The computational factors considered are the size of the hash area (space), the time required to identify a message as a nonmember of the given set (reject time), and an allowable error frequency.\nThe new methods are intended to reduce the amount of space required to contain the hash-coded information from that associated with conventional methods. The reduction in space is accomplished by exploiting the possibility that a small fraction of errors of commission may be tolerable in some applications, in particular, applications in which a large amount of data is involved and a core resident hash area is consequently not feasible using conventional methods.\nIn such applications, it is envisaged that overall performance could be improved by using a smaller core resident hash area in conjunction with the new methods and, when necessary, by using some secondary and perhaps time-consuming test to \u201ccatch\u201d the small fraction of errors associated with the new methods. An example is discussed which illustrates possible areas of application for the new methods.\nAnalysis of the paradigm problem demonstrates that allowing a small number of test messages to be falsely identified as members of the given set will permit a much smaller hash area to be used without increasing reject time."}
{"_id": "691564e0f19d5f62597adc0720d0e51ddbce9b89", "title": "Web Caching with Consistent Hashing", "text": "A key performance measure for the World Wide Web is the speed with which content is served to users. As traffic on the Web increases, users are faced with increasing delays and failures in data delivery. Web caching is one of the key strategies that has been explored to improve performance. An important issue in many caching systems is how to decide what is cached where at any given time. Solutions have included multicast queries and directory schemes. In this paper, we offer a new Web caching strategy based on consistent hashing. Consistent hashing provides an alternative to multicast and directory schemes, and has several other advantages in load balancing and fault tolerance. Its performance was analyzed theoretically in previous work; in this paper we describe the implementation of a consistent-hashing-based system and experiments that support our thesis that it can provide performance improvements. \uf8e9 1999 Published by Elsevier Science B.V. All rights reserved."}
{"_id": "215ac9b23a9a89ad7c8f22b5f9a9ad737204d820", "title": "An Empirical Investigation into Programming Language Syntax", "text": "Recent studies in the literature have shown that syntax remains a significant barrier to novice computer science students in the field. While this syntax barrier is known to exist, whether and how it varies across programming languages has not been carefully investigated. For this article, we conducted four empirical studies on programming language syntax as part of a larger analysis into the, so called, programming language wars. We first present two surveys conducted with students on the intuitiveness of syntax, which we used to garner formative clues on what words and symbols might be easy for novices to understand. We followed up with two studies on the accuracy rates of novices using a total of six programming languages: Ruby, Java, Perl, Python, Randomo, and Quorum. Randomo was designed by randomly choosing some keywords from the ASCII table (a metaphorical placebo). To our surprise, we found that languages using a more traditional C-style syntax (both Perl and Java) did not afford accuracy rates significantly higher than a language with randomly generated keywords, but that languages which deviate (Quorum, Python, and Ruby) did. These results, including the specifics of syntax that are particularly problematic for novices, may help teachers of introductory programming courses in choosing appropriate first languages and in helping students to overcome the challenges they face with syntax."}
{"_id": "e4edc414773e709e8eb3eddd77b519637f26f9a5", "title": "Scale out for large minibatch SGD: Residual network training on ImageNet-1K with improved accuracy and reduced time to train", "text": "For the past 5 years, the ILSVRC competition and the ImageNet dataset have attracted a lot of interest from the Computer Vision community, allowing for state-of-the-art accuracy to grow tremendously. This should be credited to the use of deep artificial neural network designs. As these became more complex, the storage, bandwidth, and compute requirements increased. This means that with a non-distributed approach, even when using the most high-density server available, the training process may take weeks, making it prohibitive. Furthermore, as datasets grow, the representation learning potential of deep networks grows as well by using more complex models. This synchronicity triggers a sharp increase in the computational requirements and motivates us to explore the scaling behaviour on petaflop scale supercomputers. In this paper we will describe the challenges and novel solutions needed in order to train ResNet50 in this large scale environment. We demonstrate above 90% scaling efficiency and a training time of 28 minutes using up to 104K x86 cores. This is supported by software tools from Intel\u2019s ecosystem. Moreover, we show that with regular 90 120 epoch train runs we can achieve a top-1 accuracy as high as 77% for the unmodified ResNet-50 topology. We also introduce the novel Collapsed Ensemble (CE) technique that allows us to obtain a 77.5% top-1 accuracy, similar to that of a ResNet-152, while training a unmodified ResNet-50 topology for the same fixed training budget. All ResNet-50 models as well as the scripts needed to replicate them will be posted shortly. Keywords\u2014deep learning, scaling, convergence, large minibatch, ensembles."}
{"_id": "154d62d97d43243d73352b969b2335caaa6c2b37", "title": "Ensemble learning for free with evolutionary algorithms?", "text": "Evolutionary Learning proceeds by evolving a population of classifiers, from which it generally returns (with some notable exceptions) the single best-of-run classifier as final result. In the meanwhile, Ensemble Learning, one of the most efficient approaches in supervised Machine Learning for the last decade, proceeds by building a population of diverse classifiers. Ensemble Learning with Evolutionary Computation thus receives increasing attention. The Evolutionary Ensemble Learning (EEL) approach presented in this paper features two contributions. First, a new fitness function, inspired by co-evolution and enforcing the classifier diversity, is presented. Further, a new selection criterion based on the classification margin is proposed. This criterion is used to extract the classifier ensemble from the final population only (Off-EEL) or incrementally along evolution (On-EEL). Experiments on a set of benchmark problems show that Off-EEL outperforms single-hypothesis evolutionary learning and state-of-art Boosting and generates smaller classifier ensembles."}
{"_id": "3146fabd5631a7d1387327918b184103d06c2211", "title": "Person-Independent 3D Gaze Estimation Using Face Frontalization", "text": "Person-independent and pose-invariant estimation of eye-gaze is important for situation analysis and for automated video annotation. We propose a fast cascade regression based method that first estimates the location of a dense set of markers and their visibility, then reconstructs face shape by fitting a part-based 3D model. Next, the reconstructed 3D shape is used to estimate a canonical view of the eyes for 3D gaze estimation. The model operates in a feature space that naturally encodes local ordinal properties of pixel intensities leading to photometric invariant estimation of gaze. To evaluate the algorithm in comparison with alternative approaches, three publicly-available databases were used, Boston University Head Tracking, Multi-View Gaze and CAVE Gaze datasets. Precision for head pose and gaze averaged 4 degrees or less for pitch, yaw, and roll. The algorithm outperformed alternative methods in both datasets."}
{"_id": "39773ed3c249a731224b77783a1c1e5f353d5429", "title": "End-to-End Radio Traffic Sequence Recognition with Deep Recurrent Neural Networks", "text": "We investigate sequence machine learning techniques on raw radio signal time-series data. By applying deep recurrent neural networks we learn to discriminate between several application layer traffic types on top of a constant envelope modulation without using an expert demodulation algorithm. We show that complex protocol sequences can be learned and used for both classification and generation tasks using this approach. Keywords\u2014Machine Learning, Software Radio, Protocol Recognition, Recurrent Neural Networks, LSTM, Protocol Learning, Traffic Classification, Cognitive Radio, Deep Learning"}
{"_id": "fb7f39d7d24b30df7b177bca2732ff8c3ade0bc0", "title": "Homography estimation using one ellipse correspondence and minimal additional information", "text": "In sport scenarios like football or basketball, we often deal with central views where only the central circle and some additional primitives like the central line and the central point or a touch line are visible. In this paper we first characterize, from a mathematical point of view, the set of homographies that project a given ellipse into the unit circle, next, using some extra minimal additional information like the knowledge of the position in the image of the central line and central point or a touch line we show a method to fully determine the plane homography. We present some experiments in sport scenarios to show the ability of the proposed method to properly recover the plane homography."}
{"_id": "591b52d24eb95f5ec3622b814bc91ac872acda9e", "title": "Connectionist models of recognition memory: constraints imposed by learning and forgetting functions.", "text": "Multilayer connectionist models of memory based on the encoder model using the backpropagation learning rule are evaluated. The models are applied to standard recognition memory procedures in which items are studied sequentially and then tested for retention. Sequential learning in these models leads to 2 major problems. First, well-learned information is forgotten rapidly as new information is learned. Second, discrimination between studied items and new items either decreases or is nonmonotonic as a function of learning. To address these problems, manipulations of the network within the multilayer model and several variants of the multilayer model were examined, including a model with prelearned memory and a context model, but none solved the problems. The problems discussed provide limitations on connectionist models applied to human memory and in tasks where information to be learned is not all available during learning."}
{"_id": "639f02d25eab3794e35b757ef64c6815a8929f84", "title": "A self-boost charge pump topology for a gate drive high-side power supply", "text": "A self-boost charge pump topology is presented for a floating high-side gate drive power supply that features high voltage and current capabilities for use in integrated power electronic modules (IPEMs). The transformerless topology uses a small capacitor to transfer energy to the high-side switch from a single power supply referred to the negative rail. Unlike conventional bootstrap power supplies, no switching of the main phase-leg switches is required to provide power continuously to the high-side gate drive, even if the high-side switch is permanently on. Additional advantages include low parts-count and simple control requirements. A piecewise linear model of the self-boost charge pump is derived and the circuit's operating characteristics are analyzed. Simulation and experimental results are provided to verify the desired operation of the new charge pump circuit. Guidelines are provided to assist with circuit component selection in new applications."}
{"_id": "a4bf5c295f0bf4f7f8d5c1e702b62018cca9bc58", "title": "The long-term sequelae of child and adolescent abuse: a longitudinal community study.", "text": "The purpose of the present study was to examine the relationship between childhood and adolescent physical and sexual abuse before the age of 18 and psychosocial functioning in mid-adolescence (age 15) and early adulthood (age 21) in a representative community sample of young adults. Subjects were 375 participants in an ongoing 17-years longitudinal study. At age 21, nearly 11% reported physical or sexual abuse before age 18. Psychiatric disorders based on DSM-III-R criteria were assessed utilizing the NIMH Diagnostic Interview Schedule, Revised Version (DIS-III-R). Approximately 80% of the abused young adults met DSM-III-R criteria for at least one psychiatric disorder at age 21. Compared to their nonabused counterparts, abused subjects demonstrated significant impairments in functioning both at ages 15 and at 21, including more depressive symptomatology, anxiety, psychiatric disorders, emotional-behavioral problems, suicidal ideation, and suicide attempts. While abused individuals were functioning significantly more poorly overall at ages 15 and 21 than their nonabused peers, gender differences and distinct patterns of impaired functioning emerged. These deficits underscore the need for early intervention and prevention strategies to forestall or minimize the serious consequences of child abuse."}
{"_id": "16e39000918a58e0755dc42abed368b2215c2aed", "title": "A radio resource management framework for TVWS exploitation under an auction-based approach", "text": "This paper elaborates on the design, implementation and performance evaluation of a prototype Radio Resource Management (RRM) framework for TV white spaces (TVWS) exploitation, under an auction-based approach. The proposed RRM framework is applied in a centralised Cognitive Radio (CR) network architecture, where exploitation of the available TVWS by Secondary Systems is orchestrated via a Spectrum Broker. Efficient RRM framework performance, as a matter of maximum-possible resources utilization and benefit of Spectrum Broker, is achieved by proposing and evaluating an auction-based algorithm. This auction-based algorithm considers both frequency and time domain during TVWS allocation process which was defined as an optimization problem, where maximum payoff of Spectrum Broker is the optimization goal. Experimental tests that were carried-out under controlled conditions environment, verified the validity of the proposed framework, besides identifying fields for further research."}
{"_id": "d6619b3c0523f0a12168fbce750edeee7b6b8a53", "title": "High power and high efficiency GaN-HEMT for microwave communication applications", "text": "Microwaves have been widely used for the modern communication systems, which have advantages in high bit rate transmission and the easiness of compact circuit and antenna design. Gallium Nitride (GaN), featured with high breakdown and high saturation velocity, is one of the promising material for high power and high frequency devices, and a kW-class output power has already been achieved [1]. We have developed the high power and high efficiency GaN HEMTs [2\u20135], targeting the amplifier for the base transceiver station (BTS). This presentation summarizes our recent works, focusing on the developments for the efficiency boosting and the robustness in high power RF operation."}
{"_id": "d2f210e3f34d65e3ae66b60e98d9c3a740b3c52a", "title": "Coloring-based coalescing for graph coloring register allocation", "text": "Graph coloring register allocation tries to minimize the total cost of spilled live ranges of variables. Live-range splitting and coalescing are often performed before the coloring to further reduce the total cost. Coalescing of split live ranges, called sub-ranges, can decrease the total cost by lowering the interference degrees of their common interference neighbors. However, it can also increase the total cost because the coalesced sub-ranges can become uncolorable. In this paper, we propose coloring-based coalescing, which first performs trial coloring and next coalesces all copyrelated sub-ranges that were assigned the same color. The coalesced graph is then colored again with the graph coloring register allocation. The rationale is that coalescing of differently colored sub-ranges could result in spilling because there are some interference neighbors that prevent them from being assigned the same color. Experiments on Java programs show that the combination of live-range splitting and coloring-based coalescing reduces the static spill cost by more than 6% on average, comparing to the baseline coloring without splitting. In contrast, well-known iterated and optimistic coalescing algorithms, when combined with splitting, increase the cost by more than 20%. Coloring-based coalescing improves the execution time by up to 15% and 3% on average, while the existing algorithms improve by up to 12% and 1% on average."}
{"_id": "4bbd31803e900aebcdb984523ef3770de3641981", "title": "Mathematics Learning through Computational Thinking Activities: A Systematic Literature Review", "text": "Computational Thinking represents a terminology that embraces the complex set of reasoning processes that are held for problem stating and solving through a computational tool. The ability of systematizing problems and solve them by these means is currently being considered a skill to be developed by all students, together with Language, Mathematics and Sciences. Considering that Computer Science has many of its roots on Mathematics, it is reasonable to ponder if and how Mathematics learning can be influenced by offering activities related to Computational Thinking to students. In this sense, this article presents a Systematic Literature Review on reported evidences of Mathematics learning in activities aimed at developing Computational Thinking skills. Forty-two articles which presented didactic activities together with an experimental design to evaluate learning outcomes published from 2006 to 2017 were analyzed. The majority of identified activities used a software tool or hardware device for their development. In these papers, a wide variety of mathematical topics has been being developed, with some emphasis on Planar Geometry and Algebra. Conversion of models and solutions between different semiotic representations is a high level cognitive skill that is most frequently associated to educational outcomes. This review indicated that more recent articles present a higher level of rigor in methodological procedures to assess learning effects. However, joint analysis of evidences from more than one data source is still not frequently used as a validation procedure."}
{"_id": "e9b87d8ba83281d5ea01e9b9fab14c73b0ae75eb", "title": "Partially overlapping neural networks for real and imagined hand movements.", "text": "Neuroimagery findings have shown similar cerebral networks associated with imagination and execution of a movement. On the other hand, neuropsychological studies of parietal-lesioned patients suggest that these networks may be at least partly distinct. In the present study, normal subjects were asked to either imagine or execute auditory-cued hand movements. Compared with rest, imagination and execution showed overlapping networks, including bilateral premotor and parietal areas, basal ganglia and cerebellum. However, direct comparison between the two experimental conditions showed that specific cortico-subcortical areas were more engaged in mental simulation, including bilateral premotor, prefrontal, supplementary motor and left posterior parietal areas, and the caudate nuclei. These results suggest that a specific neuronal substrate is involved in the processing of hand motor representations."}
{"_id": "8a718fccc947750580851f10698de1f41f5991f4", "title": "Disconnected aging: Cerebral white matter integrity and age-related differences in cognition", "text": "Cognition arises as a result of coordinated processing among distributed brain regions and disruptions to communication within these neural networks can result in cognitive dysfunction. Cortical disconnection may thus contribute to the declines in some aspects of cognitive functioning observed in healthy aging. Diffusion tensor imaging (DTI) is ideally suited for the study of cortical disconnection as it provides indices of structural integrity within interconnected neural networks. The current review summarizes results of previous DTI aging research with the aim of identifying consistent patterns of age-related differences in white matter integrity, and of relationships between measures of white matter integrity and behavioral performance as a function of adult age. We outline a number of future directions that will broaden our current understanding of these brain-behavior relationships in aging. Specifically, future research should aim to (1) investigate multiple models of age-brain-behavior relationships; (2) determine the tract-specificity versus global effect of aging on white matter integrity; (3) assess the relative contribution of normal variation in white matter integrity versus white matter lesions to age-related differences in cognition; (4) improve the definition of specific aspects of cognitive functioning related to age-related differences in white matter integrity using information processing tasks; and (5) combine multiple imaging modalities (e.g., resting-state and task-related functional magnetic resonance imaging; fMRI) with DTI to clarify the role of cerebral white matter integrity in cognitive aging."}
{"_id": "d53432934fa78151e7b75c95093c9b0be94b4b9a", "title": "Evolving computational intelligence systems", "text": "A new paradigm of the evolving computational intelligence systems (ECIS) is introduced in a generic framework of the knowledge and data integration (KDI). This generalization of the recent advances in the development of evolving fuzzy and neuro-fuzzy models and the more analytical angle of consideration through the prism of knowledge evolution as opposed to the usually used datacentred approach marks the novelty of the present paper. ECIS constitutes a suitable paradigm for adaptive modeling of continuous dynamic processes and tracing the evolution of knowledge. The elements of evolution, such as inheritance and structure development are related to the knowledge and data pattern dynamics and are considered in the context of an individual system/model. Another novelty of this paper consists of the comparison at a conceptual level between the concept of models and knowledge captured by these models evolution and the well known paradigm of evolutionary computation. Although ECIS differs from the concept of evolutionary (genetic) computing, both paradigms heavily borrow from the same source \u2013 nature and human evolution. As the origin of knowledge, humans are the best model of an evolving intelligent system. Instead of considering the evolution of population of spices or genes as the evolutionary computation algorithms does the ECIS concentrate on the evolution of a single intelligent system. The aim is to develop the intelligence/knowledge of this system through an evolution using inheritance and modification, upgrade and reduction. This approach is also suitable for the integration of new data and existing models into new models that can be incrementally adapted to future incoming data. This powerful new concept has been recently introduced by the authors in a series of parallel works and is still under intensive development. It forms the conceptual basis for the development of the truly intelligent systems. Another specific of this paper includes bringing together the two working examples of ECIS, namely ECOS and EFS. The ideas are supported by illustrative examples (a synthetic non-linear function for the ECOS case and a benchmark problem of house price modelling from UCI repository for the case of EFS)."}
{"_id": "7bdec3d91d8b649f892a779da78428986d8c5e3b", "title": "CCVis : Visual Analytics of Student Online Learning Behaviors Using Course Clickstream Data", "text": "As more and more college classrooms utilize online platforms to facilitate teaching and learning activities, analyzing student online behaviors becomes increasingly important for instructors to effectively monitor and manage student progress and performance. In this paper, we present CCVis, a visual analytics tool for analyzing the course clickstream data and exploring student online learning behaviors. Targeting a large college introductory course with over two thousand student enrollments, our goal is to investigate student behavior patterns and discover the possible relationships between student clickstream behaviors and their course performance. We employ higher-order network and structural identity classification to enable visual analytics of behavior patterns from the massive clickstream data. CCVis includes four coordinated views (the behavior pattern, behavior breakdown, clickstream comparative, and grade distribution views) for user interaction and exploration. We demonstrate the effectiveness of CCVis through case studies along with an ad-hoc expert evaluation. Finally, we discuss the limitation and extension of this work."}
{"_id": "104829c56a7f1236a887a6993959dd52aebd86f5", "title": "Modeling the global freight transportation system: A multi-level modeling perspective", "text": "The interconnectedness of different actors in the global freight transportation industry has rendered such a system as a large complex system where different sub-systems are interrelated. On such a system, policy-related- exploratory analyses which have predictive capacity are difficult to perform. Although there are many global simulation models for various large complex systems, there is unfortunately very little research aimed to develop a global freight transportation model. In this paper, we present a multi-level framework to develop an integrated model of the global freight transportation system. We employ a system view to incorporate different relevant sub-systems and categorize them in different levels. The fourstep model of freight transport is used as the basic foundation of the framework proposed. In addition to that, we also present the computational framework which adheres to the high level modeling framework to provide a conceptualization of the discrete-event simulation model which will be developed."}
{"_id": "c22366074e3b243f2caaeb2f78a2c8d56072905e", "title": "A broadband slotted ridge waveguide antenna array", "text": "A longitudinally-slotted ridge waveguide antenna array with a compact transverse dimension is presented. To broaden the bandwidth of the array, it is separated into two subarrays fed by a novel compact convex waveguide divider. A 16-element uniform linear array at X-band was fabricated and measured to verify the validity of the design. The measured bandwidth of S11les-15 dB is 14.9% and the measured cross- polarization level is less than -36 dB over the entire bandwidth. This array can be combined with the edge-slotted waveguide array to build a two-dimensional dual-polarization antenna array for the synthetic aperture radar (SAR) application"}
{"_id": "09c5b100f289a3993d91a66116e35ee95e99acc0", "title": "Segmenting cardiac MRI tagging lines using Gabor filter banks", "text": "t\u2014This paper describes a new method for the segmentation and extraction of cardiac MRI s. Our method is based on the novel use of a 2D bank. By convolving the tagged input image with ilters, the tagging lines are automatically enhanced ted out. We design the Gabor filter bank based on age\u2019s spatial and frequency characteristics. The is a combination of each filter\u2019s response in the bank. We demonstrate that compared to bandpass ds such as HARP, this method results in robust and mentation of the tagging lines."}
{"_id": "41e4eb8fbb335ae70026f4216069f33f8f9bbe53", "title": "Stepfather Involvement and Stepfather-Child Relationship Quality: Race and Parental Marital Status as Moderators.", "text": "Stepparent-child relationship quality is linked to stepfamily stability and children's well-being. Yet, the literature offers an incomplete understanding of factors that promote high-quality stepparent-child relationships, especially among socio-demographically diverse stepfamilies. In this study, we explore the association between stepfather involvement and stepfather-child relationship quality among a racially diverse and predominately low-income sample of stepfamilies with preadolescent children. Using a subsample of 467 mother-stepfather families from year 9 of the Fragile Families and Child Wellbeing Study, results indicate that stepfather involvement is positively associated with stepfather-child relationship quality. This association is statistically indistinguishable across racial groups, although the association is stronger among children in cohabiting stepfamilies compared to children in married stepfamilies."}
{"_id": "45063cf2e0116e700da5ca2863c8bb82ad4d64c2", "title": "Conceptual and Database Modelling of Graph Databases", "text": "Comparing graph databases with traditional, e.g., relational databases, some important database features are often missing there. Particularly, a graph database schema including integrity constraints is not explicitly defined, also a conceptual modelling is not used at all. It is hard to check a consistency of the graph database, because almost no integrity constraints are defined. In the paper, we discuss these issues and present current possibilities and challenges in graph database modelling. Also a conceptual level of a graph database design is considered. We propose a sufficient conceptual model and show its relationship to a graph database model. We focus also on integrity constraints modelling functional dependencies between entity types, which reminds modelling functional dependencies known from relational databases and extend them to conditional functional dependencies."}
{"_id": "6733017c5a01b698cc07b57fa9c9b9207b85cfbc", "title": "Accurate reconstruction of image stimuli from human fMRI based on the decoding model with capsule network architecture", "text": "In neuroscience, all kinds of computation models were designed to answer the open question of how sensory stimuli are encoded by neurons and conversely, how sensory stimuli can be decoded from neuronal activities. Especially, functional Magnetic Resonance Imaging (fMRI) studies have made many great achievements with the rapid development of the deep network computation. However, comparing with the goal of decoding orientation, position and object category from activities in visual cortex, accurate reconstruction of image stimuli from human fMRI is a still challenging work. In this paper, the capsule network (CapsNet) architecture based visual reconstruction (CNAVR) method is developed to reconstruct image stimuli. The capsule means containing a group of neurons to perform the better organization of feature structure and representation, inspired by the structure of cortical mini column including several hundred neurons in primates. The high-level capsule features in the CapsNet includes diverse features of image stimuli such as semantic class, orientation, location and so on. We used these features to bridge between human fMRI and image stimuli. We firstly employed the CapsNet to train the nonlinear mapping from image stimuli to high-level capsule features, and from highlevel capsule features to image stimuli again in an end-to-end manner. After estimating the serviceability of each voxel by encoding performance to accomplish the selecting of voxels, we secondly trained the nonlinear mapping from dimension-decreasing fMRI data to high-level capsule features. Finally, we can predict the high-level capsule features with fMRI data, and reconstruct image stimuli with the CapsNet. We evaluated the proposed CNAVR method on the dataset of handwritten digital images, and exceeded about 10% than the accuracy of all existing state-of-the-art methods on the structural similarity index (SSIM)."}
{"_id": "f8be08195b1a7e9e45028eee4844ea2482170a3e", "title": "Gut microbiota functions: metabolism of nutrients and other food components", "text": "The diverse microbial community that inhabits the human gut has an extensive metabolic repertoire that is distinct from, but complements the activity of mammalian enzymes in the liver and gut mucosa and includes functions essential for host digestion. As such, the gut microbiota is a key factor in shaping the biochemical profile of the diet and, therefore, its impact on host health and disease. The important role that the gut microbiota appears to play in human metabolism and health has stimulated research into the identification of specific microorganisms involved in different processes, and the elucidation of metabolic pathways, particularly those associated with metabolism of dietary components and some host-generated substances. In the first part of the review, we discuss the main gut microorganisms, particularly bacteria, and microbial pathways associated with the metabolism of dietary carbohydrates (to short chain fatty acids and gases), proteins, plant polyphenols, bile acids, and vitamins. The second part of the review focuses on the methodologies, existing and novel, that can be employed to explore gut microbial pathways of metabolism. These include mathematical models, omics techniques, isolated microbes, and enzyme assays."}
{"_id": "7ec5f9694bc3d061b376256320eacb8ec3566b77", "title": "The CN2 Induction Algorithm", "text": "Systems for inducing concept descriptions from examples are valuable tools for assisting in the task of knowledge acquisition for expert systems. This paper presents a description and empirical evaluation of a new induction system, CN2, designed for the efficient induction of simple, comprehensible production rules in domains where problems of poor description language and/or noise may be present. Implementations of the CN2, ID3, and AQ algorithms are compared on three medical classification tasks."}
{"_id": "0d57ba12a6d958e178d83be4c84513f7e42b24e5", "title": "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour", "text": "Deep learning thrives with large neural networks and large datasets. However, larger networks and larger datasets result in longer training times that impede research and development progress. Distributed synchronous SGD offers a potential solution to this problem by dividing SGD minibatches over a pool of parallel workers. Yet to make this scheme efficient, the per-worker workload must be large, which implies nontrivial growth in the SGD minibatch size. In this paper, we empirically show that on the ImageNet dataset large minibatches cause optimization difficulties, but when these are addressed the trained networks exhibit good generalization. Specifically, we show no loss of accuracy when training with large minibatch sizes up to 8192 images. To achieve this result, we adopt a linear scaling rule for adjusting learning rates as a function of minibatch size and develop a new warmup scheme that overcomes optimization challenges early in training. With these simple techniques, our Caffe2-based system trains ResNet50 with a minibatch size of 8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using commodity hardware, our implementation achieves \u223c90% scaling efficiency when moving from 8 to 256 GPUs. This system enables us to train visual recognition models on internetscale data with high efficiency."}
{"_id": "22ba26e56fc3e68f2e6a96c60d27d5f721ea00e9", "title": "RMSProp and equilibrated adaptive learning rates for non-convex optimization", "text": "Parameter-specific adaptive learning rate methods are computationally efficient ways to reduce the ill-conditioning problems encountered when training large deep networks. Following recent work that strongly suggests that most of the critical points encountered when training such networks are saddle points, we find how considering the presence of negative eigenvalues of the Hessian could help us design better suited adaptive learning rate schemes. We show that the popular Jacobi preconditioner has undesirable behavior in the presence of both positive and negative curvature, and present theoretical and empirical evidence that the socalled equilibration preconditioner is comparatively better suited to non-convex problems. We introduce a novel adaptive learning rate scheme, called ESGD, based on the equilibration preconditioner. Our experiments show that ESGD performs as well or better than RMSProp in terms of convergence speed, always clearly improving over plain stochastic gradient descent."}
{"_id": "27da8d31b23f15a8d4feefe0f309dfaad745f8b0", "title": "Understanding deep learning requires rethinking generalization", "text": "Despite their massivesize, successful deep artificial neural networkscan exhibit a remarkably small differencebetween training and test performance. Conventional wisdom attributessmall generalization error either to propertiesof themodel family, or to the regularization techniquesused during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experimentsestablish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simpledepth two neural networksalready haveperfect finitesampleexpressivity assoon as thenumber of parameters exceeds thenumber of datapointsas it usually does in practice. We interpret our experimental findingsby comparison with traditional models."}
{"_id": "8e0eacf11a22b9705a262e908f17b1704fd21fa7", "title": "Deep Speech 2 : End-to-End Speech Recognition in English and Mandarin", "text": "We show that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech\u2014two vastly different languages. Because it replaces entire pipelines of hand-engineered components with neural networks, end-to-end learning allows us to handle a diverse variety of speech including noisy environments, accents and different languages. Key to our approach is our application of HPC techniques, resulting in a 7x speedup over our previous system [26]. Because of this efficiency, experiments that previously took weeks now run in days. This enables us to iterate more quickly to identify superior architectures and algorithms. As a result, in several cases, our system is competitive with the transcription of human workers when benchmarked on standard datasets. Finally, using a technique called Batch Dispatch with GPUs in the data center, we show that our system can be inexpensively deployed in an online setting, delivering low latency when serving users at scale."}
{"_id": "bcdce6325b61255c545b100ef51ec7efa4cced68", "title": "An overview of gradient descent optimization algorithms", "text": "Gradient descent optimization algorithms, while increasingly popular, are often used as black-box optimizers, as practical explanations of their strengths and weaknesses are hard to come by. This article aims to provide the reader with intuitions with regard to the behaviour of different algorithms that will allow her to put them to use. In the course of this overview, we look at different variants of gradient descent, summarize challenges, introduce the most common optimization algorithms, review architectures in a parallel and distributed setting, and investigate additional strategies for optimizing gradient descent."}
{"_id": "907149ace088dad97fe6a6cadfd0c9260bb75795", "title": "Expressing emotion through posture and gesture Introduction", "text": "Introduction Emotion and its physical expression are an integral part of social interaction, informing others about how we are feeling and affecting social outcomes (Vosk, Forehand, and Figueroa 1983). Studies on the physical expression of emotion can be traced back to the 19th century with Darwin\u2019s seminal book \u201cThe Expression of the Emotions in Man and Animals\u201d that reveals the key role of facial expressions and body movement in communicating status and emotion (Darwin 1872)."}
{"_id": "eaa6537b640e744216c8ec1272f6db5bbc53e0fe", "title": "Robust and Computationally Lightweight Autonomous Tracking of Vehicle Taillights and Signal Detection by Embedded Smart Cameras", "text": "An important aspect of collision avoidance and driver assistance systems, as well as autonomous vehicles, is the tracking of vehicle taillights and the detection of alert signals (turns and brakes). In this paper, we present the design and implementation of a robust and computationally lightweight algorithm for a real-time vision system, capable of detecting and tracking vehicle taillights, recognizing common alert signals using a vehicle-mounted embedded smart camera, and counting the cars passing on both sides of the vehicle. The system is low-power and processes scenes entirely on the microprocessor of an embedded smart camera. In contrast to most existing work that addresses either daytime or nighttime detection, the presented system provides the ability to track vehicle taillights and detect alert signals regardless of lighting conditions. The mobile vision system has been tested in actual traffic scenes and the results obtained demonstrate the performance and the lightweight nature of the algorithm."}
{"_id": "dd18d4a30cb1f516b62950db44f73589f8083c3e", "title": "Role of the Immune system in chronic pain", "text": "During the past two decades, an important focus of pain research has been the study of chronic pain mechanisms, particularly the processes that lead to the abnormal sensitivity \u2014 spontaneous pain and hyperalgesia \u2014 that is associated with these states. For some time it has been recognized that inflammatory mediators released from immune cells can contribute to these persistent pain states. However, it has only recently become clear that immune cell products might have a crucial role not just in inflammatory pain, but also in neuropathic pain caused by damage to peripheral nerves or to the CNS."}
{"_id": "5592c7e0225c956419a9a315718a87190b33f4c2", "title": "An Energy-Efficient Architecture for Binary Weight Convolutional Neural Networks", "text": "Binary weight convolutional neural networks (BCNNs) can achieve near state-of-the-art classification accuracy and have far less computation complexity compared with traditional CNNs using high-precision weights. Due to their binary weights, BCNNs are well suited for vision-based Internet-of-Things systems being sensitive to power consumption. BCNNs make it possible to achieve very high throughput with moderate power dissipation. In this paper, an energy-efficient architecture for BCNNs is proposed. It fully exploits the binary weights and other hardware-friendly characteristics of BCNNs. A judicious processing schedule is proposed so that off-chip I/O access is minimized and activations are maximally reused. To significantly reduce the critical path delay, we introduce optimized compressor trees and approximate binary multipliers with two novel compensation schemes. The latter is able to save significant hardware resource, and almost no computation accuracy is compromised. Taking advantage of error resiliency of BCNNs, an innovative approximate adder is developed, which significantly reduces the silicon area and data path delay. Thorough error analysis and extensive experimental results on several data sets show that the approximate adders in the data path cause negligible accuracy loss. Moreover, algorithmic transformations for certain layers of BCNNs and a memory-efficient quantization scheme are incorporated to further reduce the energy cost and on-chip storage requirement. Finally, the proposed BCNN hardware architecture is implemented with the SMIC 130-nm technology. The postlayout results demonstrate that our design can achieve an energy efficiency over 2.0TOp/s/W when scaled to 65 nm, which is more than two times better than the prior art."}
{"_id": "55ea7bb4e75608115b50b78f2fea6443d36d60cc", "title": "Application of ordinal logistic regression analysis in determining risk factors of child malnutrition in Bangladesh", "text": "BACKGROUND\nThe study attempts to develop an ordinal logistic regression (OLR) model to identify the determinants of child malnutrition instead of developing traditional binary logistic regression (BLR) model using the data of Bangladesh Demographic and Health Survey 2004.\n\n\nMETHODS\nBased on weight-for-age anthropometric index (Z-score) child nutrition status is categorized into three groups-severely undernourished (< -3.0), moderately undernourished (-3.0 to -2.01) and nourished (\u2265-2.0). Since nutrition status is ordinal, an OLR model-proportional odds model (POM) can be developed instead of two separate BLR models to find predictors of both malnutrition and severe malnutrition if the proportional odds assumption satisfies. The assumption is satisfied with low p-value (0.144) due to violation of the assumption for one co-variate. So partial proportional odds model (PPOM) and two BLR models have also been developed to check the applicability of the OLR model. Graphical test has also been adopted for checking the proportional odds assumption.\n\n\nRESULTS\nAll the models determine that age of child, birth interval, mothers' education, maternal nutrition, household wealth status, child feeding index, and incidence of fever, ARI & diarrhoea were the significant predictors of child malnutrition; however, results of PPOM were more precise than those of other models.\n\n\nCONCLUSION\nThese findings clearly justify that OLR models (POM and PPOM) are appropriate to find predictors of malnutrition instead of BLR models."}
{"_id": "32f6c0b6f801da365ed39f50a4966cf241bb905e", "title": "Why Sleep Matters-The Economic Costs of Insufficient Sleep: A Cross-Country Comparative Analysis.", "text": "The Centers for Disease Control and Prevention (CDC) in the United States has declared insufficient sleep a \"public health problem.\" Indeed, according to a recent CDC study, more than a third of American adults are not getting enough sleep on a regular basis. However, insufficient sleep is not exclusively a US problem, and equally concerns other industrialised countries such as the United Kingdom, Japan, Germany, or Canada. According to some evidence, the proportion of people sleeping less than the recommended hours of sleep is rising and associated with lifestyle factors related to a modern 24/7 society, such as psychosocial stress, alcohol consumption, smoking, lack of physical activity and excessive electronic media use, among others. This is alarming as insufficient sleep has been found to be associated with a range of negative health and social outcomes, including success at school and in the labour market. Over the last few decades, for example, there has been growing evidence suggesting a strong association between short sleep duration and elevated mortality risks. Given the potential adverse effects of insufficient sleep on health, well-being and productivity, the consequences of sleep-deprivation have far-reaching economic consequences. Hence, in order to raise awareness of the scale of insufficient sleep as a public-health issue, comparative quantitative figures need to be provided for policy- and decision-makers, as well as recommendations and potential solutions that can help tackling the problem."}
{"_id": "506277ae84149b82d215f76bc4f7135400f65b1d", "title": "User-defined Interface Gestures: Dataset and Analysis", "text": "We present a video-based gesture dataset and a methodology for annotating video-based gesture datasets. Our dataset consists of user-defined gestures generated by 18 participants from a previous investigation of gesture memorability. We design and use a crowd-sourced classification task to annotate the videos. The results are made available through a web-based visualization that allows researchers and designers to explore the dataset. Finally, we perform an additional descriptive analysis and quantitative modeling exercise that provide additional insights into the results of the original study. To facilitate the use of the presented methodology by other researchers we share the data, the source of the human intelligence tasks for crowdsourcing, a new taxonomy that integrates previous work, and the source code of the visualization tool."}
{"_id": "aa6da71c3099cd394b9af663cfadce1ef77cb37b", "title": "Decision Support for Handling Mismatches between COTS Products and System Requirements", "text": "In the process of selecting commercial off-the-shelf (COTS) products, it is inevitable to encounter mismatches between COTS products and system requirements. Mismatches occur when COTS attributes do not exactly match our requirements. Many of these mismatches are resolved after selecting a COTS product in order to improve its fitness with the requirements. This paper proposes a decision support approach that aims at addressing COTS mismatches during and after the selection process. Our approach can be integrated with existing COTS selection methods at two stages: (I) When evaluating COTS candidates: our approach is used to estimate the anticipated fitness of the candidates if their mismatches are resolved. This helps to base our COTS selection decisions on the fitness that the COTS candidates will eventually have if selected. (2) After selecting a COTS product: the approach suggests alternative plans for resolving the most appropriate mismatches using suitable actions, such that the most important risk, technical, and resource constraints are met. A case study from the e-services domain is used to illustrate the method and to discuss its added value"}
{"_id": "58bd0411bce7df96c44aa3579136eff873b56ac5", "title": "Multimodal Classification of Remote Sensing Images: A Review and Future Directions", "text": "Earth observation through remote sensing images allows the accurate characterization and identification of materials on the surface from space and airborne platforms. Multiple and heterogeneous image sources can be available for the same geographical region: multispectral, hyperspectral, radar, multitemporal, and multiangular images can today be acquired over a given scene. These sources can be combined/fused to improve classification of the materials on the surface. Even if this type of systems is generally accurate, the field is about to face new challenges: the upcoming constellations of satellite sensors will acquire large amounts of images of different spatial, spectral, angular, and temporal resolutions. In this scenario, multimodal image fusion stands out as the appropriate framework to address these problems. In this paper, we provide a taxonomical view of the field and review the current methodologies for multimodal classification of remote sensing images. We also highlight the most recent advances, which exploit synergies with machine learning and signal processing: sparse methods, kernel-based fusion, Markov modeling, and manifold alignment. Then, we illustrate the different approaches in seven challenging remote sensing applications: 1) multiresolution fusion for multispectral image classification; 2) image downscaling as a form of multitemporal image fusion and multidimensional interpolation among sensors of different spatial, spectral, and temporal resolutions; 3) multiangular image classification; 4) multisensor image fusion exploiting physically-based feature extractions; 5) multitemporal image classification of land covers in incomplete, inconsistent, and vague image sources; 6) spatiospectral multisensor fusion of optical and radar images for change detection; and 7) cross-sensor adaptation of classifiers. The adoption of these techniques in operational settings will help to monitor our planet from space in the very near future."}
{"_id": "9b69889c7d762c04a2d13b112d0b37e4f719ca34", "title": "Interface engineering of highly efficient perovskite solar cells", "text": "Advancing perovskite solar cell technologies toward their theoretical power conversion efficiency (PCE) requires delicate control over the carrier dynamics throughout the entire device. By controlling the formation of the perovskite layer and careful choices of other materials, we suppressed carrier recombination in the absorber, facilitated carrier injection into the carrier transport layers, and maintained good carrier extraction at the electrodes. When measured via reverse bias scan, cell PCE is typically boosted to 16.6% on average, with the highest efficiency of ~19.3% in a planar geometry without antireflective coating. The fabrication of our perovskite solar cells was conducted in air and from solution at low temperatures, which should simplify manufacturing of large-area perovskite devices that are inexpensive and perform at high levels."}
{"_id": "159f32e0d91ef919e94d9b6f1ef13ce9be62155c", "title": "Concatenate text embeddings for text classification", "text": "Text embedding has gained a lot of interests in text classification area. This paper investigates the popular neural document embedding method Paragraph Vector as a source of evidence in document ranking. We focus on the effects of combining knowledge-based with knowledge-free document embeddings for text classification task. We concatenate these two representations so that the classification can be done more accurately. The results of our experiments show that this approach achieves better performances on a popular dataset."}
{"_id": "8db81373f22957d430dddcbdaebcbc559842f0d8", "title": "Limits of predictability in human mobility.", "text": "A range of applications, from predicting the spread of human and electronic viruses to city planning and resource management in mobile communications, depend on our ability to foresee the whereabouts and mobility of individuals, raising a fundamental question: To what degree is human behavior predictable? Here we explore the limits of predictability in human dynamics by studying the mobility patterns of anonymized mobile phone users. By measuring the entropy of each individual's trajectory, we find a 93% potential predictability in user mobility across the whole user base. Despite the significant differences in the travel patterns, we find a remarkable lack of variability in predictability, which is largely independent of the distance users cover on a regular basis."}
{"_id": "2bbe9735b81e0978125dad005656503fca567902", "title": "Reusing Hardware Performance Counters to Detect and Identify Kernel Control-Flow Modifying Rootkits", "text": "Kernel rootkits are formidable threats to computer systems. They are stealthy and can have unrestricted access to system resources. This paper presents NumChecker, a new virtual machine (VM) monitor based framework to detect and identify control-flow modifying kernel rootkits in a guest VM. NumChecker detects and identifies malicious modifications to a system call in the guest VM by measuring the number of certain hardware events that occur during the system call's execution. To automatically count these events, NumChecker leverages the hardware performance counters (HPCs), which exist in modern processors. By using HPCs, the checking cost is significantly reduced and the tamper-resistance is enhanced. We implement a prototype of NumChecker on Linux with the kernel-based VM. An HPC-based two-phase kernel rootkit detection and identification technique is presented and evaluated on a number of real-world kernel rootkits. The results demonstrate its practicality and effectiveness."}
{"_id": "e7317fd7bd4f31e70351ca801f41d0040558ad83", "title": "Development and investigation of efficient artificial bee colony algorithm for numerical function optimization", "text": "Artificial bee colony algorithm (ABC), which is inspired by the foraging behavior of honey bee swarm, is a biological-inspired optimization. It shows more effective than genetic algorithm (GA), particle swarm optimization (PSO) and ant colony optimization (ACO). However, ABC is good at exploration but poor at exploitation, and its convergence speed is also an issue in some cases. For these insufficiencies, we propose an improved ABC algorithm called I-ABC. In I-ABC, the best-so-far solution, inertia weight and acceleration coefficients are introduced to modify the search process. Inertia weight and acceleration coefficients are defined as functions of the fitness. In addition, to further balance search processes, the modification forms of the employed bees and the onlooker ones are different in the second acceleration coefficient. Experiments show that, for most functions, the I-ABC has a faster convergence speed and ptimization better performances than each of ABC and the gbest-guided ABC (GABC). But I-ABC could not still substantially achieve the best solution for all optimization problems. In a few cases, it could not find better results than ABC or GABC. In order to inherit the bright sides of ABC, GABC and I-ABC, a high-efficiency hybrid ABC algorithm, which is called PS-ABC, is proposed. PS-ABC owns the abilities of prediction and selection. Results show that PS-ABC has a faster convergence speed like I-ABC and better search ability ods f than other relevant meth"}
{"_id": "7401611a24f86dffb5b0cd39cf11ee55a4edb32b", "title": "Comparative Evaluation of Anomaly Detection Techniques for Sequence Data", "text": "We present a comparative evaluation of a large number of anomaly detection techniques on a variety of publicly available as well as artificially generated data sets. Many of these are existing techniques while some are slight variants and/or adaptations of traditional anomaly detection techniques to sequence data."}
{"_id": "d7988bb266bc6653efa4b83dda102e1fc464c1f8", "title": "Flexible and Stretchable Electronics Paving the Way for Soft Robotics", "text": "Planar and rigid wafer-based electronics are intrinsically incompatible with curvilinear and deformable organisms. Recent development of organic and inorganic flexible and stretchable electronics enabled sensing, stimulation, and actuation of/for soft biological and artificial entities. This review summarizes the enabling technologies of soft sensors and actuators, as well as power sources based on flexible and stretchable electronics. Examples include artificial electronic skins, wearable biosensors and stimulators, electronics-enabled programmable soft actuators, and mechanically compliant power sources. Their potential applications in soft robotics are illustrated in the framework of a five-step human\u2013robot interaction loop. Outlooks of future directions and challenges are provided at the end."}
{"_id": "a3d638ab304d3ef3862d37987c3a258a24339e05", "title": "CycleGAN, a Master of Steganography", "text": "CycleGAN [Zhu et al., 2017] is one recent successful approach to learn a transformation between two image distributions. In a series of experiments, we demonstrate an intriguing property of the model: CycleGAN learns to \u201chide\u201d information about a source image into the images it generates in a nearly imperceptible, highfrequency signal. This trick ensures that the generator can recover the original sample and thus satisfy the cyclic consistency requirement, while the generated image remains realistic. We connect this phenomenon with adversarial attacks by viewing CycleGAN\u2019s training procedure as training a generator of adversarial examples and demonstrate that the cyclic consistency loss causes CycleGAN to be especially vulnerable to adversarial attacks."}
{"_id": "5b54b6aa8288a1e9713293cec0178e8f3db3de2d", "title": "A Novel Variable Reluctance Resolver for HEV/EV Applications", "text": "In order to simplify the manufacturing process of variable reluctance (VR) resolvers for hybrid electric vehicle/electric vehicle (HEV/EV) applications, a novel VR resolver with nonoverlapping tooth-coil windings is proposed in this paper. A comparison of the winding configurations is first carried out between the existing and the proposed designs, followed by the description of the operating principle. Furthermore, the influence of actual application conditions is investigated by finite-element (FE) analyses, including operating speed and assembling eccentricity. In addition, identical stator and windings of the novel design can be employed in three resolvers of different rotor saliencies. The voltage difference among the three rotor combinations, as well as the detecting accuracy, is further investigated. Finally, prototypes are fabricated and tested to verify the analyses."}
{"_id": "355f9782e9667c19144e137761a7d44977c7a5c2", "title": "A content analysis of depression-related tweets", "text": "This study examines depression-related chatter on Twitter to glean insight into social networking about mental health. We assessed themes of a random sample (n=2,000) of depression-related tweets (sent 4-11 to 5-4-14). Tweets were coded for expression of DSM-5 symptoms for Major Depressive Disorder (MDD). Supportive or helpful tweets about depression was the most common theme (n=787, 40%), closely followed by disclosing feelings of depression (n=625; 32%). Two-thirds of tweets revealed one or more symptoms for the diagnosis of MDD and/or communicated thoughts or ideas that were consistent with struggles with depression after accounting for tweets that mentioned depression trivially. Health professionals can use our findings to tailor and target prevention and awareness messages to those Twitter users in need."}
{"_id": "69393d1fe9d68b7aeb5dd57741be392d18385e13", "title": "A Meta-Analysis of Methodologies for Research in Knowledge Management, Organizational Learning and Organizational Memory: Five Years at HICSS", "text": "The Task Force on Organizational Memory presented a report at the Hawaii International Conference for System Sciences in January 1998. The report included perspectives on knowledge-oriented research, conceptual models for organizational memory, and research methodologies for researchers considering work in organizational memory. This paper builds on the ideas originally presented in the 1998 report by examining research presented at HICSS in the general areas of knowledge management, organizational memory and organizational learning in the five years since the original task force report."}
{"_id": "c171faac12e0cf24e615a902e584a3444fcd8857", "title": "The Satisfaction With Life Scale.", "text": ""}
{"_id": "5a14949bcc06c0ae9eecd29b381ffce22e1e75b2", "title": "Organizational Learning and Management Information Systems", "text": "T he articles in this issue ofDATA BASE were chosen b y Anthony G . Hopwood, who is a professor of accounting and financial reporting at the London Graduate Schoo l of Business Studies . The articles contain important ideas , Professor Hopwood wrote, of significance to all intereste d in information systems, be they practitioners or academics . The authors, with their professional affiliations at th e time, were Chris Argyris, Graduate School of Education , Harvard University; Bo Hedberg and Sten Jonsson, Department of Business Administration, University o f Gothenburg; J . Frisco den Hertog, N . V. Philips' Gloeilampenfabrieken, The Netherlands, and Michael J . Earl, Oxford Centre for Management Studies . The articles appeared originally in Accounting, Organizations and Society, a publication of which Professor Hopwood is editor-in-chief. AOS exists to monitor emergin g developments and to actively encourage new approaches and perspectives ."}
{"_id": "ae4bb38eaa8fecfddbc9afefa33188ba3cc2282b", "title": "Missing Data Estimation in High-Dimensional Datasets: A Swarm Intelligence-Deep Neural Network Approach", "text": "In this paper, we examine the problem of missing data in high-dimensional datasets by taking into consideration the Missing Completely at Random and Missing at Random mechanisms, as well as the Arbitrary missing pattern. Additionally, this paper employs a methodology based on Deep Learning and Swarm Intelligence algorithms in order to provide reliable estimates for missing data. The deep learning technique is used to extract features from the input data via an unsupervised learning approach by modeling the data distribution based on the input. This deep learning technique is then used as part of the objective function for the swarm intelligence technique in order to estimate the missing data after a supervised fine-tuning phase by minimizing an error function based on the interrelationship and correlation between features in the dataset. The investigated methodology in this paper therefore has longer running times, however, the promising potential outcomes justify the trade-off. Also, basic knowledge of statistics is presumed."}
{"_id": "349119a443223a45dabcda844ac41e37bd1abc77", "title": "Spatio-Temporal Join on Apache Spark", "text": "Effective processing of extremely large volumes of spatial data has led to many organizations employing distributed processing frameworks. Apache Spark is one such open-source framework that is enjoying widespread adoption. Within this data space, it is important to note that most of the observational data (i.e., data collected by sensors, either moving or stationary) has a temporal component, or timestamp. In order to perform advanced analytics and gain insights, the temporal component becomes equally important as the spatial and attribute components. In this paper, we detail several variants of a spatial join operation that addresses both spatial, temporal, and attribute-based joins. Our spatial join technique differs from other approaches in that it combines spatial, temporal, and attribute predicates in the join operator.\n In addition, our spatio-temporal join algorithm and implementation differs from others in that it runs in commercial off-the-shelf (COTS) application. The users of this functionality are assumed to be GIS analysts with little if any knowledge of the implementation details of spatio-temporal joins or distributed processing. They are comfortable using simple tools that do not provide the ability to tweak the configuration of the"}
{"_id": "0161e4348a7079e9c37434c5af47f6372d4b412d", "title": "Class segmentation and object localization with superpixel neighborhoods", "text": "We propose a method to identify and localize object classes in images. Instead of operating at the pixel level, we advocate the use of superpixels as the basic unit of a class segmentation or pixel localization scheme. To this end, we construct a classifier on the histogram of local features found in each superpixel. We regularize this classifier by aggregating histograms in the neighborhood of each superpixel and then refine our results further by using the classifier in a conditional random field operating on the superpixel graph. Our proposed method exceeds the previously published state-of-the-art on two challenging datasets: Graz-02 and the PASCAL VOC 2007 Segmentation Challenge."}
{"_id": "02227c94dd41fe0b439e050d377b0beb5d427cda", "title": "Reading Digits in Natural Images with Unsupervised Feature Learning", "text": "Detecting and reading text from natural images is a hard computer vision task that is central to a variety of emerging applications. Related problems like document character recognition have been widely studied by computer vision and machine learning researchers and are virtually solved for practical applications like reading handwritten digits. Reliably recognizing characters in more complex scenes like photographs, however, is far more difficult: the best existing methods lag well behind human performance on the same tasks. In this paper we attack the problem of recognizing digits in a real application using unsupervised feature learning methods: reading house numbers from street level photos. To this end, we introduce a new benchmark dataset for research use containing over 600,000 labeled digits cropped from Street View images. We then demonstrate the difficulty of recognizing these digits when the problem is approached with hand-designed features. Finally, we employ variants of two recently proposed unsupervised feature learning methods and find that they are convincingly superior on our benchmarks."}
{"_id": "081651b38ff7533550a3adfc1c00da333a8fe86c", "title": "How transferable are features in deep neural networks?", "text": "Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset."}
{"_id": "17facd6efab9d3be8b1681bb2c1c677b2cb02628", "title": "Transfer Feature Learning with Joint Distribution Adaptation", "text": "Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct new feature representation that is effective and robust for substantial distribution difference. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems."}
{"_id": "1c734a14c2325cb76783ca0431862c7f04a69268", "title": "Deep Domain Confusion: Maximizing for Domain Invariance", "text": "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task."}
{"_id": "1e21b925b65303ef0299af65e018ec1e1b9b8d60", "title": "Unsupervised Cross-Domain Image Generation", "text": "We study the ecological use of analogies in AI. Specifically, we address the problem of transferring a sample in one domain to an analog sample in another domain. Given two related domains, S and T, we would like to learn a generative function G that maps an input sample from S to the domain T, such that the output of a given representation function f, which accepts inputs in either domains, would remain unchanged. Other than f, the training data is unsupervised and consist of a set of samples from each domain, without any mapping between them. The Domain Transfer Network (DTN) we present employs a compound loss function that includes a multiclass GAN loss, an f preserving component, and a regularizing component that encourages G to map samples from T to themselves. We apply our method to visual domains including digits and face images and demonstrate its ability to generate convincing novel images of previously unseen entities, while preserving their identity."}
{"_id": "6918fcbf5c5a86a7ffaf5650080505b95cd6d424", "title": "Hierarchical organization versus self-organization", "text": "In this paper the difference between hierarchical organization and selforganization is investigated. Organization is defined as a structure with a function. But how does the structure affect the function? I will start to examine this by doing two simulations. The idea is to have a given network of agents, which influence their neighbors. How the result differs in three different types of networks, is then explored. In the first simulation, agents try to align with their neighbors. The second simulation is inspired by the ecosystem. Agents take certain products from their neighbors, and transform them into products their neighbors can use."}
{"_id": "891d443dc003ed5f8762373395aacfa9ff895fd4", "title": "Moving object detection, tracking and classification for smart video surveillance", "text": "MOVING OBJECT DETECTION, TRACKING AND CLASSIFICATION FOR SMART VIDEO SURVEILLANCE Yi\u011fithan Dedeo\u011flu M.S. in Computer Engineering Supervisor: Assist. Prof. Dr. U\u011fur G\u00fcd\u00fckbay August, 2004 Video surveillance has long been in use to monitor security sensitive areas such as banks, department stores, highways, crowded public places and borders. The advance in computing power, availability of large-capacity storage devices and high speed network infrastructure paved the way for cheaper, multi sensor video surveillance systems. Traditionally, the video outputs are processed online by human operators and are usually saved to tapes for later use only after a forensic event. The increase in the number of cameras in ordinary surveillance systems overloaded both the human operators and the storage devices with high volumes of data and made it infeasible to ensure proper monitoring of sensitive areas for long times. In order to filter out redundant information generated by an array of cameras, and increase the response time to forensic events, assisting the human operators with identification of important events in video by the use of \u201csmart\u201d video surveillance systems has become a critical requirement. The making of video surveillance systems \u201csmart\u201d requires fast, reliable and robust algorithms for moving object detection, classification, tracking and activity analysis. In this thesis, a smart visual surveillance system with real-time moving object detection, classification and tracking capabilities is presented. The system operates on both color and gray scale video imagery from a stationary camera. It can handle object detection in indoor and outdoor environments and under changing illumination conditions. The classification algorithm makes use of the shape of the detected objects and temporal tracking results to successfully categorize objects into pre-defined classes like human, human group and vehicle. The system is also able to detect the natural phenomenon fire in various scenes reliably. The proposed tracking algorithm successfully tracks video objects even in full occlusion cases. In addition to these, some important needs of a robust iii"}
{"_id": "38a08fbe5eabbd68db495fa38f4ee506d82095d4", "title": "IMGPU: GPU-Accelerated Influence Maximization in Large-Scale Social Networks", "text": "Influence Maximization aims to find the top-$(K)$ influential individuals to maximize the influence spread within a social network, which remains an important yet challenging problem. Proven to be NP-hard, the influence maximization problem attracts tremendous studies. Though there exist basic greedy algorithms which may provide good approximation to optimal result, they mainly suffer from low computational efficiency and excessively long execution time, limiting the application to large-scale social networks. In this paper, we present IMGPU, a novel framework to accelerate the influence maximization by leveraging the parallel processing capability of graphics processing unit (GPU). We first improve the existing greedy algorithms and design a bottom-up traversal algorithm with GPU implementation, which contains inherent parallelism. To best fit the proposed influence maximization algorithm with the GPU architecture, we further develop an adaptive K-level combination method to maximize the parallelism and reorganize the influence graph to minimize the potential divergence. We carry out comprehensive experiments with both real-world and sythetic social network traces and demonstrate that with IMGPU framework, we are able to outperform the state-of-the-art influence maximization algorithm up to a factor of 60, and show potential to scale up to extraordinarily large-scale networks."}
{"_id": "1459a6fc833e60ce0f43fe0fc9a48f8f74db77cc", "title": "Proximal Stochastic Methods for Nonsmooth Nonconvex Finite-Sum Optimization", "text": "We analyze stochastic algorithms for optimizing nonconvex, nonsmooth finite-sum problems, where the nonsmooth part is convex. Surprisingly, unlike the smooth case, our knowledge of this fundamental problem is very limited. For example, it is not known whether the proximal stochastic gradient method with constant minibatch converges to a stationary point. To tackle this issue, we develop fast stochastic algorithms that provably converge to a stationary point for constant minibatches. Furthermore, using a variant of these algorithms, we obtain provably faster convergence than batch proximal gradient descent. Our results are based on the recent variance reduction techniques for convex optimization but with a novel analysis for handling nonconvex and nonsmooth functions. We also prove global linear convergence rate for an interesting subclass of nonsmooth nonconvex functions, which subsumes several recent works."}
{"_id": "19229afbce15d62bcf8d3afe84a2d47a0b6f1939", "title": "Participatory design and \"democratizing innovation\"", "text": "Participatory design has become increasingly engaged in public spheres and everyday life and is no longer solely concerned with the workplace. This is not only a shift from work oriented productive activities to leisure and pleasurable engagements, but also a new milieu for production and innovation and entails a reorientation from \"democracy at work\" to \"democratic innovation\". What democratic innovation entails is currently defined by management and innovation research, which claims that innovation has been democratized through easy access to production tools and lead-users as the new experts driving innovation. We sketch an alternative \"democratizing innovation\" practice more in line with the original visions of participatory design based on our experience of running Malm\u00f6 Living Labs - an open innovation milieu where new constellations, issues and ideas evolve from bottom-up long-term collaborations amongst diverse stakeholders. Two cases and controversial matters of concern are discussed. The fruitfulness of the concepts \"Things\" (as opposed to objects), \"infrastructuring\" (as opposed to projects) and \"agonistic public spaces\" (as opposed to consensual decision-making) are explored in relation to participatory innovation practices and democracy."}
{"_id": "853331d5c2e4a5c29ff578c012bff7fec7ebd7bc", "title": "Study on Virtual Control of a Robotic Arm via a Myo Armband for the Self-Manipulation of a Hand Amputee", "text": "This paper proposes a Myo device that has electromyography (EMG) sensors for detecting electrical activities from different parts of the forearm muscles; it also has a gyroscope and an accelerometer. EMG sensors detect and provide very clear and important data from muscles compared with other types of sensors. The Myo armband sends data from EMG, gyroscope, and accelerometer sensors to a computer via Bluetooth and uses these data to control a virtual robotic arm which was built in Unity 3D. Virtual robotic arms based on EMG, gyroscope, and accelerometer sensors have different features. A robotic arm based on EMG is controlled by using the tension and relaxation of muscles. Consequently, a virtual robotic arm based on EMG is preferred for a hand amputee to a virtual robotic arm based on a gyroscope and an accelerometer"}
{"_id": "21786e6ca30849f750656277573ee11fa4d469c5", "title": "Physical Demands of Different Positions in FA Premier League Soccer.", "text": "The purpose of this study was to evaluate the physical demands of English Football Association (FA) Premier League soccer of three different positional classifications (defender, midfielder and striker). Computerised time-motion video-analysis using the Bloomfield Movement Classification was undertaken on the purposeful movement (PM) performed by 55 players. Recognition of PM had a good inter-tester reliability strength of agreement (\u03ba= 0.7277). Players spent 40.6 \u00b1 10.0% of the match performing PM. Position had a significant influence on %PM time spent sprinting, running, shuffling, skipping and standing still (p < 0.05). However, position had no significant influence on the %PM time spent performing movement at low, medium, high or very high intensities (p > 0.05). Players spent 48.7 \u00b1 9.2% of PM time moving in a directly forward direction, 20.6 \u00b1 6.8% not moving in any direction and the remainder of PM time moving backward, lateral, diagonal and arced directions. The players performed the equivalent of 726 \u00b1 203 turns during the match; 609 \u00b1 193 of these being of 0\u00b0 to 90\u00b0 to the left or right. Players were involved in the equivalent of 111 \u00b1 77 on the ball movement activities per match with no significant differences between the positions for total involvement in on the ball activity (p > 0.05). This study has provided an indication of the different physical demands of different playing positions in FA Premier League match-play through assessment of movements performed by players. Key pointsPlayers spent ~40% of the match performing Pur-poseful Movement (PM).Position had a significant influence on %PM time spent performing each motion class except walking and jogging. Players performed >700 turns in PM, most of these being of 0\u00b0-90\u00b0.Strikers performed most high to very high intensity activity and most contact situations.Defenders also spent a significantly greater %PM time moving backwards than the other two posi-tions.Different positions could benefit from more specific conditioning programs."}
{"_id": "76737d93659b31d5a6ce07a4e9e5107bc0c39adf", "title": "A CNS-permeable Hsp90 inhibitor rescues synaptic dysfunction and memory loss in APP-overexpressing Alzheimer\u2019s mouse model via an HSF1-mediated mechanism", "text": "Induction of neuroprotective heat-shock proteins via pharmacological Hsp90 inhibitors is currently being investigated as a potential treatment for neurodegenerative diseases. Two major hurdles for therapeutic use of Hsp90 inhibitors are systemic toxicity and limited central nervous system permeability. We demonstrate here that chronic treatment with a proprietary Hsp90 inhibitor compound (OS47720) not only elicits a heat-shock-like response but also offers synaptic protection in symptomatic Tg2576 mice, a model of Alzheimer\u2019s disease, without noticeable systemic toxicity. Despite a short half-life of OS47720 in mouse brain, a single intraperitoneal injection induces rapid and long-lasting (>3 days) nuclear activation of the heat-shock factor, HSF1. Mechanistic study indicates that the remedial effects of OS47720 depend upon HSF1 activation and the subsequent HSF1-mediated transcriptional events on synaptic genes. Taken together, this work reveals a novel role of HSF1 in synaptic function and memory, which likely occurs through modulation of the synaptic transcriptome."}
{"_id": "d65c2cbc0980d0840b88b569516ae9c277d9d200", "title": "Credit card fraud detection using machine learning techniques: A comparative analysis", "text": "Financial fraud is an ever growing menace with far consequences in the financial industry. Data mining had played an imperative role in the detection of credit card fraud in online transactions. Credit card fraud detection, which is a data mining problem, becomes challenging due to two major reasons \u2014 first, the profiles of normal and fraudulent behaviours change constantly and secondly, credit card fraud data sets are highly skewed. The performance of fraud detection in credit card transactions is greatly affected by the sampling approach on dataset, selection of variables and detection technique(s) used. This paper investigates the performance of na\u00efve bayes, k-nearest neighbor and logistic regression on highly skewed credit card fraud data. Dataset of credit card transactions is sourced from European cardholders containing 284,807 transactions. A hybrid technique of under-sampling and oversampling is carried out on the skewed data. The three techniques are applied on the raw and preprocessed data. The work is implemented in Python. The performance of the techniques is evaluated based on accuracy, sensitivity, specificity, precision, Matthews correlation coefficient and balanced classification rate. The results shows of optimal accuracy for na\u00efve bayes, k-nearest neighbor and logistic regression classifiers are 97.92%, 97.69% and 54.86% respectively. The comparative results show that k-nearest neighbour performs better than na\u00efve bayes and logistic regression techniques."}
{"_id": "fc4bd8f4db91bbb4053b8174544f79bf67b96b3b", "title": "Bangladeshi Number Plate Detection: Cascade Learning vs. Deep Learning", "text": "This work investigated two different machine learning techniques: Cascade Learning and Deep Learning, to find out which algorithm performs better to detect the number plate of vehicles registered in Bangladesh. To do this, we created a dataset of about 1000 images collected from a security camera of Independent University, Bangladesh. Each image in the dataset were then labelled manually by selecting the Region of Interest (ROI). In the Cascade Learning approach, a sliding window technique was used to detect objects. Then a cascade classifier was employed to determine if the window contained object of interest or not. In the Deep Learning approach, CIFAR-10 dataset was used to pre-train a 15-layer Convolutional Neural Network (CNN). Using this pretrained CNN, a Regions with CNN (R-CNN) was then trained using our dataset. We found that the Deep Learning approach (maximum accuracy 99.60% using 566 training images) outperforms the detector constructed using Cascade classifiers (maximum accuracy 59.52% using 566 positive and 1022 negative training images) for 252 test images."}
{"_id": "049c15a106015b287fec6fc3e8178d4c3f4adf67", "title": "Combining Poisson singular integral and total variation prior models in image restoration", "text": "In this paper, a novel Bayesian image restoration method based on a combination of priors is presented. It is well known that the Total Variation (TV) image prior preserves edge structures while imposing smoothness on the solutions. However, it tends to oversmooth textured areas. To alleviate this problem we propose to combine the TV and the Poisson Singular Integral (PSI) models, which, as we will show, preserves the image textures. The PSI prior depends on a parameter that controls the shape of the filter. A study on the behavior of the filter as a function of this parameter is presented. Our restoration model utilizes a bound for the TV image model based on the majorization\u2013minimization principle, and performs maximum a posteriori Bayesian inference. In order to assess the performance of the proposed approach, in the experimental section we compare it with other restoration methods. & 2013 Elsevier B.V. All rights reserved."}
{"_id": "ebf35073e122782f685a0d6c231622412f28a53b", "title": "A High-Quality Denoising Dataset for Smartphone Cameras", "text": "The last decade has seen an astronomical shift from imaging with DSLR and point-and-shoot cameras to imaging with smartphone cameras. Due to the small aperture and sensor size, smartphone images have notably more noise than their DSLR counterparts. While denoising for smartphone images is an active research area, the research community currently lacks a denoising image dataset representative of real noisy images from smartphone cameras with high-quality ground truth. We address this issue in this paper with the following contributions. We propose a systematic procedure for estimating ground truth for noisy images that can be used to benchmark denoising performance for smartphone cameras. Using this procedure, we have captured a dataset - the Smartphone Image Denoising Dataset (SIDD) - of ~30,000 noisy images from 10 scenes under different lighting conditions using five representative smartphone cameras and generated their ground truth images. We used this dataset to benchmark a number of denoising algorithms. We show that CNN-based methods perform better when trained on our high-quality dataset than when trained using alternative strategies, such as low-ISO images used as a proxy for ground truth data."}
{"_id": "156e7730b8ba8a08ec97eb6c2eaaf2124ed0ce6e", "title": "THE CONTROL OF THE FALSE DISCOVERY RATE IN MULTIPLE TESTING UNDER DEPENDENCY By", "text": "Benjamini and Hochberg suggest that the false discovery rate may be the appropriate error rate to control in many applied multiple testing problems. A simple procedure was given there as an FDR controlling procedure for independent test statistics and was shown to be much more powerful than comparable procedures which control the traditional familywise error rate. We prove that this same procedure also controls the false discovery rate when the test statistics have positive regression dependency on each of the test statistics corresponding to the true null hypotheses. This condition for positive dependency is general enough to cover many problems of practical interest, including the comparisons of many treatments with a single control, multivariate normal test statistics with positive correlation matrix and multivariate t. Furthermore, the test statistics may be discrete, and the tested hypotheses composite without posing special difficulties. For all other forms of dependency, a simple conservative modification of the procedure controls the false discovery rate. Thus the range of problems for which a procedure with proven FDR control can be offered is greatly increased."}
{"_id": "7f47767d338eb39664844c94833b52ae73d964ef", "title": "Gesture Recognition with a Convolutional Long Short-Term Memory Recurrent Neural Network", "text": "Inspired by the adequacy of convolutional neural networks in implicit extraction of visual features and the efficiency of Long Short-Term Memory Recurrent Neural Networks in dealing with long-range temporal dependencies, we propose a Convolutional Long Short-Term Memory Recurrent Neural Network (CNNLSTM) for the problem of dynamic gesture recognition. The model is able to successfully learn gestures varying in duration and complexity and proves to be a significant base for further development. Finally, the new gesture command TsironiGR-dataset for human-robot interaction is presented for the evaluation of CNNLSTM."}
{"_id": "a9b533329845d5d1a31c3ff2821ce9865c440158", "title": "Mirroring others' emotions relates to empathy and interpersonal competence in children", "text": "The mirror neuron system (MNS) has been proposed to play an important role in social cognition by providing a neural mechanism by which others' actions, intentions, and emotions can be understood. Here functional magnetic resonance imaging was used to directly examine the relationship between MNS activity and two distinct indicators of social functioning in typically-developing children (aged 10.1 years+/-7 months): empathy and interpersonal competence. Reliable activity in pars opercularis, the frontal component of the MNS, was elicited by observation and imitation of emotional expressions. Importantly, activity in this region (as well as in the anterior insula and amygdala) was significantly and positively correlated with established behavioral measures indexing children's empathic behavior (during both imitation and observation) and interpersonal skills (during imitation only). These findings suggest that simulation mechanisms and the MNS may indeed be relevant to social functioning in everyday life during typical human development."}
{"_id": "64887b38c382e331cd2b045f7a7edf05f17586a8", "title": "Genetic and environmental influences on sexual orientation and its correlates in an Australian twin sample.", "text": "We recruited twins systematically from the Australian Twin Registry and assessed their sexual orientation and 2 related traits: childhood gender nonconformity and continuous gender identity. Men and women differed in their distributions of sexual orientation, with women more likely to have slight-to-moderate degrees of homosexual attraction, and men more likely to have high degrees of homosexual attraction. Twin concordances for nonheterosexual orientation were lower than in prior studies. Univariate analyses showed that familial factors were important for all traits, but were less successful in distinguishing genetic from shared environmental influences. Only childhood gender nonconformity was significantly heritable for both men and women. Multivariate analyses suggested that the causal architecture differed between men and women, and, for women, provided significant evidence for the importance of genetic factors to the traits' covariation."}
{"_id": "4f2d62eaf7559b91b97bab3076fcd5f306da57f2", "title": "A texture-based method for modeling the background and detecting moving objects", "text": "This paper presents a novel and efficient texture-based method for modeling the background and detecting moving objects from a video sequence. Each pixel is modeled as a group of adaptive local binary pattern histograms that are calculated over a circular region around the pixel. The approach provides us with many advantages compared to the state-of-the-art. Experimental results clearly justify our model."}
{"_id": "23ddae93514a47b56dcbeed80e67fab62e8b5ec9", "title": "Retro: Targeted Resource Management in Multi-tenant Distributed Systems", "text": "In distributed systems shared by multiple tenants, effective resource management is an important pre-requisite to providing quality of service guarantees. Many systems deployed today lack performance isolation and experience contention, slowdown, and even outages caused by aggressive workloads or by improperly throttled maintenance tasks such as data replication. In this work we present Retro, a resource management framework for shared distributed systems. Retro monitors per-tenant resource usage both within and across distributed systems, and exposes this information to centralized resource management policies through a high-level API. A policy can shape the resources consumed by a tenant using Retro\u2019s control points, which enforce sharing and ratelimiting decisions. We demonstrate Retro through three policies providing bottleneck resource fairness, dominant resource fairness, and latency guarantees to high-priority tenants, and evaluate the system across five distributed systems: HBase, Yarn, MapReduce, HDFS, and Zookeeper. Our evaluation shows that Retro has low overhead, and achieves the policies\u2019 goals, accurately detecting contended resources, throttling tenants responsible for slowdown and overload, and fairly distributing the remaining cluster capacity."}
{"_id": "8f81d1854da5f6254780f00966d0c00d174b9881", "title": "Significant Change Spotting for Periodic Human Motion Segmentation of Cleaning Tasks Using Wearable Sensors", "text": "The proportion of the aging population is rapidly increasing around the world, which will cause stress on society and healthcare systems. In recent years, advances in technology have created new opportunities for automatic activities of daily living (ADL) monitoring to improve the quality of life and provide adequate medical service for the elderly. Such automatic ADL monitoring requires reliable ADL information on a fine-grained level, especially for the status of interaction between body gestures and the environment in the real-world. In this work, we propose a significant change spotting mechanism for periodic human motion segmentation during cleaning task performance. A novel approach is proposed based on the search for a significant change of gestures, which can manage critical technical issues in activity recognition, such as continuous data segmentation, individual variance, and category ambiguity. Three typical machine learning classification algorithms are utilized for the identification of the significant change candidate, including a Support Vector Machine (SVM), k-Nearest Neighbors (kNN), and Naive Bayesian (NB) algorithm. Overall, the proposed approach achieves 96.41% in the F1-score by using the SVM classifier. The results show that the proposed approach can fulfill the requirement of fine-grained human motion segmentation for automatic ADL monitoring."}
{"_id": "6c8d5d5eee5967958a2e03a84bcc00f1f81f4d9e", "title": "Decontaminating eukaryotic genome assemblies with machine learning", "text": "High-throughput sequencing has made it theoretically possible to obtain high-quality de novo assembled genome sequences but in practice DNA extracts are often contaminated with sequences from other organisms. Currently, there are few existing methods for rigorously decontaminating eukaryotic assemblies. Those that do exist filter sequences based on nucleotide similarity to contaminants and risk eliminating sequences from the target organism. We introduce a novel application of an established machine learning method, a decision tree, that can rigorously classify sequences. The major strength of the decision tree is that it can take any measured feature as input and does not require a priori identification of significant descriptors. We use the decision tree to classify de novo assembled sequences and compare the method to published protocols. A decision tree performs better than existing methods when classifying sequences in eukaryotic de novo assemblies. It is efficient, readily implemented, and accurately identifies target and contaminant sequences. Importantly, a decision tree can be used to classify sequences according to measured descriptors and has potentially many uses in distilling biological datasets."}
{"_id": "6588070d6578bc8a9d1f284f766340587501d620", "title": "SAX-EFG: an evolutionary feature generation framework for time series classification", "text": "A variety of real world applications fit into the broad definition of time series classification. Using traditional machine learning approaches such as treating the time series sequences as high dimensional vectors have faced the well known \"curse of dimensionality\" problem. Recently, the field of time series classification has seen success by using preprocessing steps that discretize the time series using a Symbolic Aggregate ApproXimation technique (SAX) and using recurring subsequences (\"motifs\") as features.\n In this paper we explore a feature construction algorithm based on genetic programming that uses SAX-generated motifs as the building blocks for the construction of more complex features. The research shows that the constructed complex features improve the classification accuracy in a statistically significant manner for many applications."}
{"_id": "6c201c1ded432c98178f1d35410b8958decc884a", "title": "Conserving Energy Through Neural Prediction of Sensed Data", "text": "The constraint of energy consumption is a serious problem in wireless sensor networks (WSNs). In this regard, many solutions for this problem have been proposed in recent years. In one line of research, scholars suggest data driven approaches to help conserve energy by reducing the amount of required communication in the network. This paper is an attempt in this area and proposes that sensors be powered on intermittently. A neural network will then simulate sensors\u2019 data during their idle periods. The success of this method relies heavily on a high correlation between the points making a time series of sensed data. To demonstrate the effectiveness of the idea, we conduct a number of experiments. In doing so, we train a NAR network against various datasets of sensed humidity and temperature in different environments. By testing on actual data, it is shown that the predictions by the device greatly obviate the need for sensed data during sensors\u2019 idle periods and save over 65 percent of energy."}
{"_id": "2c17972edee8cd41f344009dc939cf51260f425a", "title": "Appropriate blood pressure control in hypertensive and normotensive type 2 diabetes mellitus: a summary of the ABCD trial", "text": "The hypertensive and normotensive Appropriate Blood Pressure Control in Diabetes (ABCD) studies were prospective, randomized, interventional clinical trials with 5 years of follow-up that examined the role of intensive versus standard blood pressure control in a total of 950 patients with type 2 diabetes mellitus. In the hypertensive ABCD study, a significant decrease in mortality was detected in the intensive blood pressure control group when compared with the standard blood pressure control group. There was also a marked reduction in the incidence of myocardial infarction when patients were randomly assigned to initial antihypertensive therapy with angiotensin-converting-enzyme inhibition rather than calcium channel blockade. The results of the normotensive ABCD study included associations between intensive blood pressure control and significant slowing of the progression of nephropathy (as assessed by urinary albumin excretion) and retinopathy, and fewer strokes. In both the hypertensive and normotensive studies, mean renal function (as assessed by 24 h creatinine clearance) remained stable during 5 years of either intensive or standard blood pressure intervention in patients with normoalbuminuria (<30 mg/24 h) or microalbuminuria (30\u2013300 mg/24 h) at baseline. By contrast, the rate of creatinine clearance in patients with overt diabetic nephropathy (>300 mg/24 h; albuminuria) at baseline decreased by an average of 5 ml/min/year in spite of either intensive or standard blood pressure control. Analysis of the results of 5 years of follow-up revealed a highly significant correlation of all-cause and cardiovascular mortality with left ventricular mass and severity of albuminuria."}
{"_id": "4e4c9d1d7893795d386fa6e62385faa6c5eff814", "title": "Gaussian process factorization machines for context-aware recommendations", "text": "Context-aware recommendation (CAR) can lead to significant improvements in the relevance of the recommended items by modeling the nuanced ways in which context influences preferences. The dominant approach in context-aware recommendation has been the multidimensional latent factors approach in which users, items, and context variables are represented as latent features in low-dimensional space. An interaction between a user, item, and a context variable is typically modeled as some linear combination of their latent features. However, given the many possible types of interactions between user, items and contextual variables, it may seem unrealistic to restrict the interactions among them to linearity.\n To address this limitation, we develop a novel and powerful non-linear probabilistic algorithm for context-aware recommendation using Gaussian processes. The method which we call Gaussian Process Factorization Machines (GPFM) is applicable to both the explicit feedback setting (e.g. numerical ratings as in the Netflix dataset) and the implicit feedback setting (i.e. purchases, clicks). We derive stochastic gradient descent optimization to allow scalability of the model. We test GPFM on five different benchmark contextual datasets. Experimental results demonstrate that GPFM outperforms state-of-the-art context-aware recommendation methods."}
{"_id": "e083df3577b3231d16678aaf7a020767bdc9c3a0", "title": "Single-layered complex-valued neural network for real-valued classification problems", "text": "This paper presents two models of complex-valued neurons (CVNs) for real-valued classification problems, incorporating two newly-proposed activation functions, and presents their abilities as well as differences between them on benchmark problems. In both models, each real-valued input is encoded into a phase between 0 and \uf070 of a complex number of unity magnitude, and multiplied by a complex-valued weight. The weighted sum of inputs is fed to an activation function. Activation functions of both models map complex values into real values, and their role is to divide the net-input (weighted sum) space into multiple regions representing the classes of input patterns. The gradient-based learning rule is derived for each of the activation functions. Ability of such CVNs are discussed and tested with two-class problems, such as two and three input Boolean problems, and symmetry detection in binary sequences. We exhibit here that both the models can form proper boundaries for these linear and nonlinear problems. For solving n-class problems, a complex-valued neural network (CVNN) consisting of n CVNs is also considered in this paper. We tested such single-layered CVNNs on several real world benchmark problems. The results show that the classification and generalization abilities of single-layered CVNNs are comparable to the conventional real-valued neural networks (RVNNs) having one hidden layer. Moreover, convergence of CVNNs is much faster than that of RVNNs in most of the cases."}
{"_id": "0b8651737442ec30052724a68e85fefc4c941970", "title": "Visual firewall: real-time network security monitor", "text": "Networked systems still suffer from poor firewall configuration and monitoring. VisualFirewall seeks to aid in the configuration of firewalls and monitoring of networks by providing four simultaneous views that display varying levels of detail and time-scales as well as correctly visualizing firewall reactions to individual packets. The four implemented views, real-time traffic, visual signature, statistics, and IDS alarm, provide the levels of detail and temporality that system administrators need to properly monitor their systems in a passive or an active manner. We have visualized several attacks, and we feel that even individuals unfamiliar with networking concepts can quickly distinguish between benign and malignant traffic patterns with a minimal amount of introduction."}
{"_id": "9fe265de1cfab7a97b5efd81d7d42b386b15f2b9", "title": "IDS rainStorm: visualizing IDS alarms", "text": "The massive amount of alarm data generated from intrusion detection systems is cumbersome for network system administrators to analyze. Often, important details are overlooked and it is difficult to get an overall picture of what is occurring in the network by manually traversing textual alarm logs. We have designed a novel visualization to address this problem by showing alarm activity within a network. Alarm data is presented in an overview where system administrators can get a general sense of network activity and easily detect anomalies. They then have the option of zooming and drilling down for details. The information is presented with local network IP (Internet Protocol) addresses plotted over multiple yaxes to represent the location of alarms. Time on the x-axis is used to show the pattern of the alarms and variations in color encode the severity and amount of alarms. Based on our system administrator requirements study, this graphical layout addresses what system administrators need to see, is faster and easier than analyzing text logs, and uses visualization techniques to effectively scale and display the data. With this design, we have built a tool that effectively uses operational alarm log data generated on the Georgia Tech campus network. The motivation and background of our design is presented along with examples that illustrate its usefulness. CR Categories: C.2.0 [Computer-Communication Networks]: General\u2014Security and Protection C.2.3 [ComputerCommunication Networks]: Network Operations\u2014Network Monitoring H.5.2 [Information Systems]: Information Interfaces and Presentation\u2014User Interfaces"}
{"_id": "0c90a3d183dc5f467c692fb7cbf60303729c8078", "title": "Clustering intrusion detection alarms to support root cause analysis", "text": "It is a well-known problem that intrusion detection systems overload their human operators by triggering thousands of alarms per day. This paper presents a new approach for handling intrusion detection alarms more efficiently. Central to this approach is the notion that each alarm occurs for a reason, which is referred to as the alarm's root causes. This paper observes that a few dozens of rather persistent root causes generally account for over 90% of the alarms that an intrusion detection system triggers. Therefore, we argue that alarms should be handled by identifying and removing the most predominant and persistent root causes. To make this paradigm practicable, we propose a novel alarm-clustering method that supports the human analyst in identifying root causes. We present experiments with real-world intrusion detection alarms to show how alarm clustering helped us identify root causes. Moreover, we show that the alarm load decreases quite substantially if the identified root causes are eliminated so that they can no longer trigger alarms in the future."}
{"_id": "3c1f860a678f3f6f33f2cbfdfa7dfc7119a57a00", "title": "Aggregation and Correlation of Intrusion-Detection Alerts", "text": "This paper describes an aggregation and correlation algorithm used in the design and implementation of an intrusion-detection console built on top of the Tivoli Enterprise Console (TEC). The aggregation and correlation algorithm aims at acquiring intrusion-detection alerts and relating them together to expose a more condensed view of the security issues raised by intrusion-detection systems."}
{"_id": "844f1a88efc648b5c604c0a098b5c49f3fea4139", "title": "MieLog: A Highly Interactive Visual Log Browser Using Information Visualization and Statistical Analysis", "text": "System administration has become an increasingly important function, with the fundamental task being the inspection of computer log-files. It is not, however, easy to perform such tasks for two reasons. One is the high recognition load of log contents due to the massive amount of textual data. It is a tedious, time-consuming and often error-prone task to read through them. The other problem is the difficulty in extracting unusual messages from the log. If an administrator does not have the knowledge or experience, he or she cannot readily recognize unusual log messages. To help address these issues, we have developed a highly interactive visual log browser called \u2018\u2018MieLog.\u2019\u2019 MieLog uses two techniques for manual log inspection tasks: information visualization and statistical analysis. Information visualization is helpful in reducing the recognition load because it provides an alternative method of interpreting textual information without reading. Statistical analysis enables the extraction of unusual log messages without domain specific knowledge. We will give three examples that illustrate the ability of the MieLog system to isolate unusual messages more easily than before."}
{"_id": "7f481f1a5fac3a49ee8b2f1bfa7e5f2f8eda3085", "title": "Bit Error Tolerance of a CIFAR-10 Binarized Convolutional Neural Network Processor", "text": "Deployment of convolutional neural networks (ConvNets) in always-on Internet of Everything (IoE) edge devices is severely constrained by the high memory energy consumption of hardware ConvNet implementations. Leveraging the error resilience of ConvNets by accepting bit errors at reduced voltages presents a viable option for energy savings, but few implementations utilize this due to the limited quantitative understanding of how bit errors affect performance. This paper demonstrates the efficacy of SRAM voltage scaling in a 9-layer CIFAR-10 binarized ConvNet processor, achieving memory energy savings of 3.12\u00d7 with minimal accuracy degradation (\u223c99% of nominal). Additionally, we quantify the effect of bit error accumulation in a multi-layer network and show that further energy savings are possible by splitting weight and activation voltages. Finally, we compare the measured error rates for the CIFAR-10 binarized ConvNet against MNIST networks to demonstrate the difference in bit error requirements across varying complexity in network topologies and classification tasks."}
{"_id": "2aa5aaddbb367e477ba3bed67ce780f30e055279", "title": "A Simple Model for Intrinsic Image Decomposition with Depth Cues", "text": "We present a model for intrinsic decomposition of RGB-D images. Our approach analyzes a single RGB-D image and estimates albedo and shading fields that explain the input. To disambiguate the problem, our model estimates a number of components that jointly account for the reconstructed shading. By decomposing the shading field, we can build in assumptions about image formation that help distinguish reflectance variation from shading. These assumptions are expressed as simple nonlocal regularizers. We evaluate the model on real-world images and on a challenging synthetic dataset. The experimental results demonstrate that the presented approach outperforms prior models for intrinsic decomposition of RGB-D images."}
{"_id": "504054b182fc4d028c430e74a51b2d6ac2c43f64", "title": "Indirectly Encoding Neural Plasticity as a Pattern of Local Rules", "text": "Biological brains can adapt and learn from past experience. In neuroevolution, i.e. evolving artificial neural networks (ANNs), one way that agents controlled by ANNs can evolve the ability to adapt is by encoding local learning rules. However, a significant problem with most such approaches is that local learning rules for every connection in the network must be discovered separately. This paper aims to show that learning rules can be effectively indirectly encoded by extending the Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) method. Adaptive HyperNEAT is introduced to allow not only patterns of weights across the connectivity of an ANN to be generated by a function of its geometry, but also patterns of arbitrary learning rules. Several such adaptive models with different levels of generality are explored and compared. The long-term promise of the new approach is to evolve large-scale adaptive ANNs, which is a major goal for neuroevolution."}
{"_id": "fa92af436a7d04fcccc025fdedde4039d19df42f", "title": "Breast cancer intrinsic subtype classification, clinical use and future trends.", "text": "Breast cancer is composed of multiple subtypes with distinct morphologies and clinical implications. The advent of microarrays has led to a new paradigm in deciphering breast cancer heterogeneity, based on which the intrinsic subtyping system using prognostic multigene classifiers was developed. Subtypes identified using different gene panels, though overlap to a great extent, do not completely converge, and the avail of new information and perspectives has led to the emergence of novel subtypes, which complicate our understanding towards breast tumor heterogeneity. This review explores and summarizes the existing intrinsic subtypes, patient clinical features and management, commercial signature panels, as well as various information used for tumor classification. Two trends are pointed out in the end on breast cancer subtyping, i.e., either diverging to more refined groups or converging to the major subtypes. This review improves our understandings towards breast cancer intrinsic classification, current status on clinical application, and future trends."}
{"_id": "fc1f88e48ab29a1fa21f1e7d73f47c270353de59", "title": "A robust 2D point-sequence curve offset algorithm with multiple islands for contour-parallel tool path", "text": "An offset algorithm is important to the contour-parallel tool path generation process. Usually, it is necessary to offset with islands. In this paper a new offset algorithm for a 2D point-sequence curve (PS-curve) with multiple islands is presented. The algorithm consists of three sub-processes, the islands bridging process, the raw offset curve generation and the global invalid loops removal. The input of the algorithm is a set of PS-curves, in which one of them is the outer profile and the others are islands. The bridging process bridges all the islands to the outer profile with the Delaunay triangulation method, forming a single linked PS-curve.With the fact that local problems are caused by intersections of adjacent bisectors, the concept of stuck circle is proposed. Based on stuck circle, local problems are fixed by updating the original profile with the proposed basic rule and append rule, so that a raw offset curve can be generated. The last process first reports all the self-intersections on the raw offset PS-curve, and then a procedure called tree analysis puts all the self-intersections into a tree. All the points between the nodes in even depth and its immediate children are collected using the collecting rule. The collected points form the valid loops, which is the output of the proposed algorithm. Each sub-process can be complete in near linear time, so the whole algorithm has a near linear time complexity. This can be proved by the examples tested in the paper. \u00a9 2012 Elsevier Ltd. All rights reserved."}
{"_id": "27266a1dd3854e4effe41b9a3c0e569d33004a33", "title": "Discovering facts with boolean tensor tucker decomposition", "text": "Open Information Extraction (Open IE) has gained increasing research interest in recent years. The first step in Open IE is to extract raw subject--predicate--object triples from the data. These raw triples are rarely usable per se, and need additional post-processing. To that end, we proposed the use of Boolean Tucker tensor decomposition to simultaneously find the entity and relation synonyms and the facts connecting them from the raw triples. Our method represents the synonym sets and facts using (sparse) binary matrices and tensor that can be efficiently stored and manipulated. We consider the presentation of the problem as a Boolean tensor decomposition as one of this paper's main contributions. To study the validity of this approach, we use a recent algorithm for scalable Boolean Tucker decomposition. We validate the results with empirical evaluation on a new semi-synthetic data set, generated to faithfully reproduce real-world data features, as well as with real-world data from existing Open IE extractor. We show that our method obtains high precision while the low recall can easily be remedied by considering the original data together with the decomposition."}
{"_id": "7dd434b3799a6c8c346a1d7ee77d37980a4ef5b9", "title": "Syntax-Directed Variational Autoencoder for Structured Data", "text": "Deep generative models have been enjoying success in modeling continuous data. However it remains challenging to capture the representations for discrete structures with formal grammars and semantics, e.g., computer programs and molecular structures. How to generate both syntactically and semantically correct data still remains largely an open problem. Inspired by the theory of compiler where the syntax and semantics check is done via syntax-directed translation (SDT), we propose a novel syntax-directed variational autoencoder (SD-VAE) by introducing stochastic lazy attributes. This approach converts the offline SDT check into on-the-fly generated guidance for constraining the decoder. Comparing to the state-of-the-art methods, our approach enforces constraints on the output space so that the output will be not only syntactically valid, but also semantically reasonable. We evaluate the proposed model with applications in programming language and molecules, including reconstruction and program/molecule optimization. The results demonstrate the effectiveness in incorporating syntactic and semantic constraints in discrete generative models, which is significantly better than current state-of-the-art approaches."}
{"_id": "f43c1aee5382bb6fe9c92c54767218d954016e0c", "title": "Color texture classification by integrative Co-occurrence matrices", "text": "Integrative Co-occurrence matrices are introduced as novel features for color texture classi\"cation. The extended Co-occurrence notation allows the comparison between integrative and parallel color texture concepts. The information pro\"t of the new matrices is shown quantitatively using the Kolmogorov distance and by extensive classi\"cation experiments on two datasets. Applying them to the RGB and the LUV color space the combined color and intensity textures are studied and the existence of intensity independent pure color patterns is demonstrated. The results are compared with two baselines: gray-scale texture analysis and color histogram analysis. The novel features improve the classi\"cation results up to 20% and 32% for the \"rst and second baseline, respectively. ? 2003 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved."}
{"_id": "eb596f7a723c955387ef577ba6bf8817cd3ebdc1", "title": "Design of hybrid resistive-capacitive DAC for SAR A/D converters", "text": "While hybrid capacitive-resistive D/A Converter (DAC) has been known for many years, its potential for energy-efficient operation is sometimes overlooked. This paper investigates the utilization of hybrid DACs in successive-approximation register A/D converters. To improve energy efficiency of SAR ADCs, a new hybrid DAC is introduced. In an exemplar 10-bit 100-MS/s ADC, simulation results show that the energy efficiency and chip area (of passive devices) can be improved by more than an order of magnitude."}
{"_id": "4c790c71219f6be248a3d426347bf7c4e3a0a6c4", "title": "The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment.", "text": "OBJECTIVES\nTo develop a 10-minute cognitive screening tool (Montreal Cognitive Assessment, MoCA) to assist first-line physicians in detection of mild cognitive impairment (MCI), a clinical state that often progresses to dementia.\n\n\nDESIGN\nValidation study.\n\n\nSETTING\nA community clinic and an academic center.\n\n\nPARTICIPANTS\nNinety-four patients meeting MCI clinical criteria supported by psychometric measures, 93 patients with mild Alzheimer's disease (AD) (Mini-Mental State Examination (MMSE) score > or =17), and 90 healthy elderly controls (NC).\n\n\nMEASUREMENTS\nThe MoCA and MMSE were administered to all participants, and sensitivity and specificity of both measures were assessed for detection of MCI and mild AD.\n\n\nRESULTS\nUsing a cutoff score 26, the MMSE had a sensitivity of 18% to detect MCI, whereas the MoCA detected 90% of MCI subjects. In the mild AD group, the MMSE had a sensitivity of 78%, whereas the MoCA detected 100%. Specificity was excellent for both MMSE and MoCA (100% and 87%, respectively).\n\n\nCONCLUSION\nMCI as an entity is evolving and somewhat controversial. The MoCA is a brief cognitive screening tool with high sensitivity and specificity for detecting MCI as currently conceptualized in patients performing in the normal range on the MMSE."}
{"_id": "92bb6791f0e8bc2dd0874f09930b6a7ff990827d", "title": "Wind-Aware Emergency Landing Assistant Based on Dubins Curves", "text": "A total engine failure poses a major threat to passengers as well as the aircraft and requires a fast decision by the pilot. We develop an assistant system to support the pilot in this decision process. An aircraft is able to glide a certain distance without thrust power by converting the potential energy of the altitude into distance. The objective of our work is to calculate an approach route which allows the aircraft to reach a suitable landing field at an appropriate altitude. This is a non-trivial problem because of many free parameters like wind direction and wind velocity. Our solution computes an approach route with two tangents and two co-rotating circular segments. For this purpose, valid approach routes can be calculated for many general cases. The method has a constant complexity and can dispense with iterative approaches. The route is calculated entirely in the wind frame which avoids complex calculations with trochoids. The central idea is to take the wind into account by moving the target towards the wind direction."}
{"_id": "72663b8c32a4ee99da679366868ca4de863c3ba4", "title": "Dynamic Cascades for Face Detection", "text": "In this paper, we propose a novel method, called \"dynamic cascade\", for training an efficient face detector on massive data sets. There are three key contributions. The first is a new cascade algorithm called \"dynamic cascade \", which can train cascade classifiers on massive data sets and only requires a small number of training parameters. The second is the introduction of a new kind of weak classifier, called \"Bayesian stump\", for training boost classifiers. It produces more stable boost classifiers with fewer features. Moreover, we propose a strategy for using our dynamic cascade algorithm with multiple sets of features to further improve the detection performance without significant increase in the detector's computational cost. Experimental results show that all the new techniques effectively improve the detection performance. Finally, we provide the first large standard data set for face detection, so that future researches on the topic can be compared on the same training and testing set."}
{"_id": "43dcd8b78857cbfe58ae684c44dd57c8c72368c3", "title": "Providing Adaptive Courses in Learning Management Systems with Respect to Learning Styles", "text": "Learning management systems (LMS) are commonly used in e-learning but provide little, or in most cases, no adaptivity. However, courses which adapt to the individual needs of students make learning easier for them and lead to a positive effect in learning. In this paper, we introduce a concept for providing adaptivity based on learning styles in LMS. In order to show the effectiveness of our approach, Moodle was extended by an add-on and an experiment with 437 students was performed. From the analysis of the students\u2019 performance and behaviour in the course, we found out that students who learned from a course that matches their learning styles spent significantly less time in the course and achieved in average the same marks than students who got a course that either mismatched with their learning styles or included all available learning objects. Therefore, providing adaptive courses in LMS according to the proposed concept can be seen as effective in supporting students in learning."}
{"_id": "34001fa75ba229639dc251fb1714a6bc2dfb76b3", "title": "A Tutorial on Logistic Regression", "text": "Many procedures in SAs/STAT~ can be used to perform 10' gistic regression analysis: CATMOD, GENMOD,LOGISTIC, al)d PROBIT. Each procedure has special features that make. ~ useful for gertain applications. For most applica\u00b7 tions, PROC LOGISTIC is the preferred choice,. It fits binary response or proportional odds models, provides various model\u00b7selection methods to identify important prognostic variables from a large number of candidate variables, and computes regression diagnostic statistics. This tutorial dis\u00b7 cusses some of the problems users encountered when they used the LOGISTIC procedure."}
{"_id": "4f58366300e6031ece7b770d3cc7ecdd019ca440", "title": "Computational intelligence techniques for HVAC systems : A review", "text": "Buildings are responsible for 40% of global energy use and contribute towards 30% of the total CO2 emissions. The drive to reduce energy use and associated greenhouse gas emissions from buildings has acted as a catalyst in the development of advanced computational methods for energy efficient design, management and control of buildings and systems. Heating, ventilation and air-conditioning (HVAC) systems are the major source of energy consumption in buildings and ideal candidates for substantial reductions in energy demand. Significant advances have been made in the past decades on the application of computational intelligence (CI) techniques for HVAC design, control, management, optimization, and fault detection and diagnosis. This article presents a comprehensive and critical review on the theory and applications of CI techniques for prediction, optimization, control and diagnosis of HVAC systems. The analysis of trends reveals that the minimisation of energy consumption was the key optimization objective in the reviewed research, closely followed by the optimization of thermal comfort, indoor air quality and occupant preferences. Hardcoded Matlab program was the most widely used simulation tool, followed by TRNSYS, EnergyPlus, DOE-2, HVACSim+ and ESP-r. Metaheuristic algorithms were the preferred CI method for solving HVAC related problems and in particular genetic algorithms were applied in most of the studies. Despite the low number of studies focussing on multi-agent systems (MAS), as compared to the other CI techniques, interest in the technique is increasing due to their ability of dividing and conquering an HVAC optimization problem with enhanced overall performance. The paper also identifies prospective future advancements and research directions."}
{"_id": "6c4917342e7c81c09b28a4cd4e7575b4e9b176bf", "title": "Parallelization of DC Analysis through Multiport Decomposition", "text": "Physical problems offer scope for macro level parallelization of solution by their essential structure. For parallelization of electrical network simulation, the most natural structure based method is that of multiport decomposition. In this paper this method is used for the simulation of electrical networks consisting of resistances, voltage and current sources using a distributed cluster of weakly coupled processors. At the two levels in which equations are solved in this method the authors have used sparse LU for both levels in the first scheme and sparse LU in the inner level and conjugate gradient in the outer level in the second scheme. Results are presented for planar networks, for the cases where the numbers of slave processors are 1 and 2, and for circuit sizes up to 8.2 million nodes and 16.4 million edges using 8 slave processors. The authors use a cluster of Pentium IV processors linked through a 10/100MBPS Ethernet switch"}
{"_id": "2d6d67ec52505e890f777900e9de4e5fa827ebd7", "title": "Online Learning with Predictable Sequences", "text": "We present methods for online linear optimization that take advantage of benign (as opposed to worst-case) sequences. Specifically if the sequence encountered by the learner is described well by a known \u201cpredictable process\u201d, the algorithms presented enjoy tighter bounds as compared to the typical worst case bounds. Additionally, the methods achieve the usual worst-case regret bounds if the sequence is not benign. Our approach can be seen as a way of adding prior knowledge about the sequence within the paradigm of online learning. The setting is shown to encompass partial and side information. Variance and path-length bounds [11, 9] can be seen as particular examples of online learning with simple predictable sequences. We further extend our methods and results to include competing with a set of possible predictable processes (models), that is \u201clearning\u201d the predictable process itself concurrently with using it to obtain better regret guarantees. We show that such model selection is possible under various assumptions on the available feedback. Our results suggest a promising direction of further research with potential applications to stock market and time series prediction."}
{"_id": "3eb54c38421009eac93c667b303afdd1e5544fce", "title": "d-Separation: From Theorems to Algorithms", "text": "An efficient algorithm is developed that identifies all independencies implied by the topology of a Baye\u00ad sian network. Its correctness and maximality stems from the soundness and completeness of d\u00ad separation with respect to probability theory. The al\u00ad gorithm runs in time 0 (IE I ) where E is the number of edges in the network."}
{"_id": "70666e2ecb5fd35af9adeccae0e2ef765cf149fe", "title": "Histogram-based image hashing scheme robust against geometric deformations", "text": "In this paper, we propose a robust image hash algorithm by using the invariance of the image histogram shape to geometric deformations. Robustness and uniqueness of the proposed hash function are investigated in detail by representing the histogram shape as the relative relations in the number of pixels among groups of two different bins. It is found from extensive testing that the histogram-based hash function has a satisfactory performance to various geometric deformations, and is also robust to most common signal processing operations thanks to the use of Gaussian kernel low-pass filter in the preprocessing phase."}
{"_id": "4daa0a41e8d3049cc40fb9804f22318d5302abc1", "title": "The Ever-Changing Social Perception of Autism Spectrum Disorders in the United States", "text": "This paper aims to examine the comprehensive social perception of autism spectrum disorders (ASDs) within the United States today. In order to study the broad public view of those with ASDs, this study investigates the evolution of the syndrome in both sociological and scientific realms. By drawing on the scientific progression of the syndrome and the mixture of this research with concurrent social issues and media representations, this study infers why such a significant amount of stigmatization has become attached to those with ASDs and how these stigmatizations have varied throughout history. After studying this evolving social perception of ASDs in the United States, the writer details suggestions for the betterment of this awareness, including boosted and specified research efforts, increased collaboration within those experts in autism, and positive visibility of those with ASDs and their families. Overall, the writer suggests that public awareness has increased and thus negative stigmatization has decreased in recent years; however, there remains much to be done to increase general social understanding of ASDs. \u201cAutism is about having a pure heart and being very sensitive... It is about finding a way to survive in an overwhelming, confusing world... It is about developing differently, in a different pace and with different leaps.\u201d -Trisha Van Berkel The identification of autism, in both sociological and scientific terms, has experienced a drastic evolution since its original definition in the early 20th century. From its original designation by Leo Kanner (1943), public understanding of autism spectrum disorders (ASDs) has been shrouded in mystery and misperception. The basic core features of all ASDs include problems with basic socialization and communication, strange intonation and facial expressions, and intense preoccupations or repetitive behaviors; however, one important aspect of what makes autism so complex is the wide variation in expression of the disorder (Lord, 2011). When comparing individuals with the same autism diagnosis, one will undoubtedly encounter many different personalities, strengths and weaknesses. This wide variability between individuals diagnosed with autism, along with the lack of basic understanding of the general public, accounts for a significant amount of social stigma in our society today. Social stigma stemming from this lack of knowledge has been reported in varying degrees since the original formation of the diagnosis. Studies conducted over the past two centuries have shown perceived negative stigma from the view of both the autistic individual and the family or caretakers"}
{"_id": "87a1273dea59e3748372c5ea69488d70e9125046", "title": "Finding and tracking road lanes using \"line-snakes\"", "text": "This paper presents a method forfinding and tracking road lanes. The method extracts and tracks lane boundaries for vision-guided vehicle navigation by combining the hough transform and the \u201cactive line model ( A M ) \u201d . The hough transform can extract vanishing points of the road, which can be used as a good estimation of the vehicle heading. For the curved road, however, the estimation may be too crude to be used for navigation. Therefore, the hough transform is used to obtain an initial position estimation of the lane boundaries on the road. The line snake ALM then improves the initial approximation to an accurate configuration of the lane boundaries. Once the line snake is initialized in the first image, it tracks the road lanes using the external and internal forces computed from images and a proposed boundary refinement technique. Specifically, an image region is divided into a few subregions along the vertical direction. The hough transform is then performed for each sub-region and candidate lines of road lanes in each sub-region are extracted. Among candidate lines, a most prominent line is found by the ALM that minimizes a defined snake energy. The external energy of ALM is a normalized sum of image gradients along the line. The internal deformation energy ensures the continuity of two neighboring lines by using the angledifference between two adjacent lines and the distance between the two lines. A search method based on the dynamic programming reduces the computational cost. The proposed method gives a uniJied framework for detecting, refining and tracking the road lane. Experimental results using images of a real road scene are presented."}
{"_id": "8045f9b48d3e848861620f86a4f7add0ad919556", "title": "A Book Dewarping System by Boundary-Based 3D Surface Reconstruction", "text": "Non-contact imaging devices such as digital cameras and overhead scanners can convert hardcopy books to digital images without cutting them to individual pages. However, the captured images have distinct distortions. A book dewarping system is proposed to remove the perspective and geometric distortions automatically from single images. A book boundary model is extracted, and a 3D book surface is reconstructed. And then the horizontal and vertical metrics of each column are restored from it. Experimental results show the good dewarping and speed performance. Since no additional equipments and no restrictions to specific book layouts or contents are needed, the proposed system is very practical in real applications."}
{"_id": "e7d3bd1df77c30b8db6a9a9c83692aa54d21e12a", "title": "Advantages of thesaurus representation using the Simple Knowledge Organization System (SKOS) compared with proposed alternatives", "text": "The concept of thesaurus has evolved from a list of conceptually interrelated words to today's controlled vocabularies, where terms form complex structures through semantic relationships. This term comes from the Latin and has turn been derived from the Greek \"\u03b8\u03b7\u03c3\u03b1\u03c5\u03c1\u03cc\u03c2\", which means treasury according to the Spanish Royal Academy, in whose dictionary it is also defined as: 'name given by its authors to certain dictionaries, catalogues and anthologies'. The increase in scientific communication and productivity made it essential to develop keyword indexing systems. At that time, Howerton spoke of controlled lists to refer to concepts that were heuristically or intuitively related. According to Roberts (1984), Mooers was the first to relate thesauri to information retrieval systems; Taube established the foundations of post-coordination, while Luhn dealt, at a basic level, with the creation of thesauri using automatic techniques. Brownson (1957) was the first to use the term to refer to the issue of translating concepts and their relationships expressed in documents into a more precise language free of ambiguities in order to facilitate information retrieval. The ASTIA Thesaurus was published in the early 1960s (Curr\u00e1s, 2005), already bearing the characteristics of today's thesauri and taking on the need for a tool to administer a controlled vocabulary in terms of indexing, thereby giving rise to the concept of documentary language."}
{"_id": "8c1351ff77f34a5a6e11be81a995e3faf862931b", "title": "Branching embedding: A heuristic dimensionality reduction algorithm based on hierarchical clustering", "text": "This paper proposes a new dimensionality reduction algorithm named branching embedding (BE). It converts a dendrogram to a two-dimensional scatter plot, and visualizes the inherent structures of the original high-dimensional data. Since the conversion part is not computationally demanding, the BE algorithm would be beneficial for the case where hierarchical clustering is already performed. Numerical experiments revealed that the outputs of the algorithm moderately preserve the original hierarchical structures."}
{"_id": "1a68d1fb23fd11649c111a3c374f2c3d3ef4c09e", "title": "VizTree: a Tool for Visually Mining and Monitoring Massive Time Series Databases", "text": "Moments before the launch of every space vehicle, engineering discipline specialists must make a critical go/no-go decision. The cost of a false positive, allowing a launch in spite of a fault, or a false negative, stopping a potentially successful launch, can be measured in the tens of millions of dollars, not including the cost in morale and other more intangible detriments. The Aerospace Corporation is responsible for providing engineering assessments critical to the go/no-go decision for every Department of Defense (DoD) launch vehicle. These assessments are made by constantly monitoring streaming telemetry data in the hours before launch. For this demonstration, we will introduce VizTree, a novel time-series visualization tool to aid the Aerospace analysts who must make these engineering assessments. VizTree was developed at the University of California, Riverside and is unique in that the same tool is used for mining archival data and monitoring incoming live telemetry. Unlike other time series visualization tools, VizTree can scale to very large databases, giving it the potential to be a generally useful data mining and database tool."}
{"_id": "f1aa9bbf85aa1a0b3255b38032c08781f491f4d3", "title": "0.18um BCD technology with best-in-class LDMOS from 6 V to 45 V", "text": "We propose a novel nLDMOS structure and a design concept in BCD technology with the best-in-class performance. The drift profile is optimized and the multi-oxide in the drift region is adopted to approach the RESURF limit of on-resistance vs. BVdss characteristic (i.e., 36V DMOS has a Ron_sp of 20mohm-mm2 with a BVdss of 50V; 45V DMOS has a Ron_sp of 28mohm-mm2 with a BVdss of 65V). Moreover, this modification requires merely three extra masks in the Non-Epitaxy LV process to achieve this improvement. Therefore it is not only a high performance but also a low cost solution."}
{"_id": "3f203c85493bce692da54648ef3f015db956a924", "title": "0.35\u03bcm, 30V fully isolated and low-Ron nLDMOS for DC-DC applications", "text": "In this paper, we present a new approach to integrate in a 0.35 \u03bcm BCD technology, low Ron LDMOS power transistors with highly competitive Specific Resistance figure of merit (Rsp, defined as Ron*Area). The LDMOS are fully isolated in order to support applications which may bias the source/drain electrodes below the substrate potential, which is critical for devices used in high-current, high-frequency switching applications. The new devices are suitable for high-efficiency DC-DC converter products with operating voltage of up to 30V, such as mobile PMICs. For maximum performance, two different extended-drain LDMOS structures have been developed to cover the entire operating voltage range: for 16V and below, a planar-gate structure is used and for 20V and above, a non-planar \u201coffset-LOCOS\u201d gate is used for 20V and above."}
{"_id": "ad3324136e70d546e2744bda4e2b14e6b5311c55", "title": "Ultra-low on-resistance LDMOS implementation in 0.13\u00b5m CD and BiCD process technologies for analog power IC's", "text": "Toshiba's 5th generation BiCD/CD-0.13 is a new process platform for analog power applications based on 0.13\u00b5m CMOS technology. The process platform has six varieties of rated voltage, 5V, 6V, 18V, 25V, 40V, and 60V. 5 to 18V CD-0.13 process use P-type silicon substrate. 25 to 60V BiCD-0.13 process use N-Epi wafer with N+/P+ buried layer on P type silicon substrate. Each LDMOS recode ultra-low on-resistance compared with that of previous papers, and we will realize the highest performance analog power IC's using this technology."}
{"_id": "d0a5e8fd8ca4537e6c0b76e90ae32f86de3cb0fc", "title": "Advanced 0.13um smart power technology from 7V to 70V", "text": "This paper presents BCD process integrating 7V to 70V power devices on 0.13um CMOS platform for various power management applications. BJT, Zener diode and Schottky diode are available and non-volatile memory is embedded as well. LDMOS shows best-in-class specific Ron (RSP) vs. BVDSS characteristics (i.e., 70V NMOS has RSP of 69m\u03a9-mm2 with BVDSS of 89V). Modular process scheme is used for flexibility to various requirements of applications."}
{"_id": "dd73a3d6682f3e1e70fd7b4d04971128fad5f27b", "title": "BCD8sP: An advanced 0.16 \u03bcm technology platform with state of the art power devices", "text": "Advanced 0.16 \u03bcm BCD technology platform offering dense logic transistors (1.8 V-5 V CMOS) and high performance analog features has been developed. Thanks to dedicated field plate optimization, body and drain engineering, state of the art power devices (8 V to 42 V rated) have been obtained ensuring large Safe Operating Areas with best RONXAREA-BVDSS tradeoff."}
{"_id": "b3e04836a8f1a1efda32d15296ff9435ab8afd86", "title": "HeNet: A Deep Learning Approach on Intel$^\\circledR$ Processor Trace for Effective Exploit Detection", "text": "This paper presents HeNet, a hierarchical ensemble neural network, applied to classify hardware-generated control flow traces for malware detection. Deep learning-based malware detection has so far focused on analyzing executable files and runtime API calls. Static code analysis approaches face challenges due to obfuscated code and adversarial perturbations. Behavioral data collected during execution is more difficult to obfuscate but recent research has shown successful attacks against API call based malware classifiers. We investigate control flow based characterization of a program execution to build robust deep learning malware classifiers. HeNet consists of a low-level behavior model and a toplevel ensemble model. The low-level model is a per-application behavior model, trained via transfer learning on a time-series of images generated from control flow trace of an execution. We use Intel R \u00a9 Processor Trace enabled processor for low overhead execution tracing and design a lightweight image conversion and segmentation of the control flow trace. The top-level ensemble model aggregates the behavior classification of all the trace segments and detects an attack. The use of hardware trace adds portability to our system and the use of deep learning eliminates the manual effort of feature engineering. We evaluate HeNet against real-world exploitations of PDF readers. HeNet achieves 100% accuracy and 0% false positive on test set, and higher classification accuracy compared to classical machine learning algorithms."}
{"_id": "a8540ff90bc1cf9eb54a2ba1ad4125e726d1980d", "title": "Soft-Gated Warping-GAN for Pose-Guided Person Image Synthesis", "text": "Despite remarkable advances in image synthesis research, existing works often fail in manipulating images under the context of large geometric transformations. Synthesizing person images conditioned on arbitrary poses is one of the most representative examples where the generation quality largely relies on the capability of identifying and modeling arbitrary transformations on different body parts. Current generative models are often built on local convolutions and overlook the key challenges (e.g. heavy occlusions, different views or dramatic appearance changes) when distinct geometric changes happen for each part, caused by arbitrary pose manipulations. This paper aims to resolve these challenges induced by geometric variability and spatial displacements via a new Soft-Gated Warping Generative Adversarial Network (Warping-GAN), which is composed of two stages: 1) it first synthesizes a target part segmentation map given a target pose, which depicts the region-level spatial layouts for guiding image synthesis with higher-level structure constraints; 2) the Warping-GAN equipped with a soft-gated warping-block learns feature-level mapping to render textures from the original image into the generated segmentation map. Warping-GAN is capable of controlling different transformation degrees given distinct target poses. Moreover, the proposed warping-block is lightweight and flexible enough to be injected into any networks. Human perceptual studies and quantitative evaluations demonstrate the superiority of our WarpingGAN that significantly outperforms all existing methods on two large datasets."}
{"_id": "19c953d98323e6f56a26e9d4b6b46f5809793fe4", "title": "Vehicle Guidance for an Autonomous Vehicle", "text": "This paper describes the vehicle guidance system of an autonomous vehicle. This system is part of the control structure of the vehicle and consists of a path generator, a motion planning algorithm and a sensor fusion module. A stereo vision sensor and a DGPS sensor are used as position sensors. The trajectory for the vehicle motion is generated in a rst step by using only information from a digital map. Object-detecting sensors such as the stereo vision sensor, three laserscanner and a radar sensor observe the vehicle environment and report detected objects to the sensor fusion module. This information is used to dynamically update the planned vehicle trajectory to the nal vehicle motion."}
{"_id": "43a050a0279c86baf842371c73b68b674061b579", "title": "A Tutorial on UAVs for Wireless Networks: Applications, Challenges, and Open Problems", "text": "The use of flying platforms such as unmanned aerial vehicles (UAVs), popularly known as drones, is rapidly growing in a wide range of wireless networking applications. In particular, with their inherent attributes such as mobility, flexibility, and adaptive altitude, UAVs admit several key potential applications in wireless systems. On the one hand, UAVs can be used as aerial base stations to enhance coverage, capacity, reliability, and energy efficiency of wireless networks. For instance, UAVs can be deployed to complement existing cellular systems by providing additional capacity to hotspot areas as well as to provide network coverage in emergency and public safety situations. On the other hand, UAVs can operate as flying mobile terminals within the cellular networks. Such cellular-connected UAVs can enable a wide range of key applications expanding from real-time video streaming to item delivery. Despite the several benefits and practical applications of using UAVs as aerial wireless devices, one must address many technical challenges. In this paper, a comprehensive tutorial on the potential benefits and applications of UAVs in wireless communications is presented. Moreover, the important challenges and the fundamental tradeoffs in UAV-enabled wireless networks are thoroughly investigated. In particular, the key UAV challenges such as three-dimensional (3D) deployment, performance analysis, air-to-ground channel modeling, and energy efficiency are explored along with representative results. Then, fundamental open problems and potential research directions pertaining to wireless communications and networking with UAVs are introduced. To cope with the open research problems, various analytical frameworks and mathematical tools such as optimization theory, machine learning, stochastic geometry, transport theory, and game theory are described. The use of such tools for addressing unique UAV problems is also presented. In a nutshell, this tutorial provides key guidelines on how to analyze, optimize, and design UAV-based wireless communication systems."}
{"_id": "c139770aba13d62bf2f5a3a579da98a28148fb09", "title": "Data Change Exploration Using Time Series Clustering", "text": "Analysis of static data is one of the best studied research areas. However, data changes over time. These changes may reveal patterns or groups of similar values, properties, and entities. We study changes in large, publicly available data repositories by modelling them as time series and clustering these series by their similarity. In order to perform change exploration on real-world data we use the publicly available revision data of Wikipedia Infoboxes and weekly snapshots of IMDB. The changes to the data are captured as events, which we call change records. In order to extract temporal behavior we count changes in time periods and propose a general transformation framework that aggregates groups of changes to numerical time series of different resolutions. We use these time series to study different application scenarios of unsupervised clustering. Our explorative results show that changes made to collaboratively edited data sources can help find characteristic behavior, distinguish entities or properties and provide insight into the respective domains."}
{"_id": "2f4bdb8f54ae1a7666c7be8e6b461e3c717a20cf", "title": "The survival of direct composite restorations in the management of severe tooth wear including attrition and erosion: A prospective 8-year study.", "text": "OBJECTIVES\nSurvival of directly placed composite to restore worn teeth has been reported in studies with small sample sizes, short observation periods and different materials. This study aimed to estimate survival for a hybrid composite placed by one clinician up to 8-years follow-up.\n\n\nMETHODS\nAll patients were referred and recruited for a prospective observational cohort study. One composite was used: Spectrum(\u00ae) (DentsplyDeTrey). Most restorations were placed on the maxillary anterior teeth using a Dahl approach.\n\n\nRESULTS\nA total of 1010 direct composites were placed in 164 patients. Mean follow-up time was 33.8 months (s.d. 27.7). 71 of 1010 restorations failed during follow-up. The estimated failure rate in the first year was 5.4% (95% CI 3.7-7.0%). Time to failure was significantly greater in older subjects (p=0.005) and when a lack of posterior support was present (p=0.003). Bruxism and an increase in the occlusal vertical dimension were not associated with failure. The proportion of failures was greater in patients with a Class 3 or edge-to-edge incisal relationship than in Class 1 and Class 2 cases but this was not statistically significant. More failures occurred in the lower arch (9.6%) compared to the upper arch (6%) with the largest number of composites having been placed on the maxillary incisors (n=519).\n\n\nCONCLUSION\nThe worn dentition presents a restorative challenge but composite is an appropriate restorative material.\n\n\nCLINICAL SIGNIFICANCE\nThis study shows that posterior occlusal support is necessary to optimise survival."}
{"_id": "b1ad2a08dc6b9c598f2ac101b22c2de8cd23e47f", "title": "On the dimensionality of organizational justice: a construct validation of a measure.", "text": "This study explores the dimensionality of organizational justice and provides evidence of construct validity for a new justice measure. Items for this measure were generated by strictly following the seminal works in the justice literature. The measure was then validated in 2 separate studies. Study 1 occurred in a university setting, and Study 2 occurred in a field setting using employees in an automobile parts manufacturing company. Confirmatory factor analyses supported a 4-factor structure to the measure, with distributive, procedural, interpersonal, and informational justice as distinct dimensions. This solution fit the data significantly better than a 2- or 3-factor solution using larger interactional or procedural dimensions. Structural equation modeling also demonstrated predictive validity for the justice dimensions on important outcomes, including leader evaluation, rule compliance, commitment, and helping behavior."}
{"_id": "b0fa515948682246559cebd1190cec39e87334c0", "title": "Music Synthesis with Reconstructive Phrase Modeling", "text": "This article describes a new synthesis technology called reconstructive phrase modeling (RPM). A goal of RPM is to combine the realistic sound quality of sampling with the performance interaction of functional synthesis. Great importance is placed on capturing the dynamics of note transitions-slurs, legato, bow changes, etc. Expressive results are achieved with conventional keyboard controllers. Mastery of special performance techniques is not needed. RPM is an analysis-synthesis system that is related to two important trends in computer music research. The first is a form of additive synthesis in which sounds are represented as a sum of time-varying harmonics plus noise elements. RPM creates expressive performances by searching a database of idiomatic instrumental phrases and combining modified fragments of these phrases to form a new expressive performance. This approach is related to another research trend called concatenative synthesis"}
{"_id": "2b550251323d541dd5d3f72ab68073e05cd485c5", "title": "SAR image formation toolbox for MATLAB", "text": "While many synthetic aperture radar (SAR) image formation techniques exist, two of the most intuitive methods for implementation by SAR novices are the matched filter and backprojection algorithms. The matched filter and (non-optimized) backprojection algorithms are undeniably computationally complex. However, the backprojection algorithm may be successfully employed for many SAR research endeavors not involving considerably large data sets and not requiring time-critical image formation. Execution of both image reconstruction algorithms in MATLAB is explicitly addressed. In particular, a manipulation of the backprojection imaging equations is supplied to show how common MATLAB functions, ifft and interp1, may be used for straight-forward SAR image formation. In addition, limits for scene size and pixel spacing are derived to aid in the selection of an appropriate imaging grid to avoid aliasing. Example SAR images generated though use of the backprojection algorithm are provided given four publicly available SAR datasets. Finally, MATLAB code for SAR image reconstruction using the matched filter and backprojection algorithms is provided."}
{"_id": "0392a3743157a87d0bdfe53dffa416a9a6b93dcc", "title": "Mitigating Sybils in Federated Learning Poisoning", "text": "Machine learning (ML) over distributed multiparty data is required for a variety of domains. Existing approaches, such as federated learning, collect the outputs computed by a group of devices at a central aggregator and run iterative algorithms to train a globally shared model. Unfortunately, such approaches are susceptible to a variety of attacks, including model poisoning, which is made substantially worse in the presence of sybils. In this paper we first evaluate the vulnerability of federated learning to sybil-based poisoning attacks. We then describe FoolsGold, a novel defense to this problem that identifies poisoning sybils based on the diversity of client updates in the distributed learning process. Unlike prior work, our system does not bound the expected number of attackers, requires no auxiliary information outside of the learning process, and makes fewer assumptions about clients"}
{"_id": "786f730302115d19f5ea14c49332bba77bde75c2", "title": "An ensemble classifier with random projection for predicting multi-label protein subcellular localization", "text": "In protein subcellular localization prediction, a predominant scenario is that the number of available features is much larger than the number of data samples. Among the large number of features, many of them may contain redundant or irrelevant information, causing the prediction systems suffer from overfitting. To address this problem, this paper proposes a dimensionality-reduction method that applies random projection (RP) to construct an ensemble multi-label classifier for predicting protein subcellular localization. Specifically, the frequencies of occurrences of gene-ontology terms are used as feature vectors, which are projected onto lower-dimensional spaces by random projection matrices whose elements conform to a distribution with zero mean and unit variance. The transformed low-dimensional vectors are classified by an ensemble of one-vs-rest multi-label support vector machine (SVM) classifiers, each corresponding to one of the RP matrices. The scores obtained from the ensemble are then fused for making the final decision. Experimental results on two recent datasets suggest that the proposed method can reduce the dimensions by six folds and remarkably improve the classification performance."}
{"_id": "ce83f8c2227ce387d7491f247d97f45e534fac24", "title": "Novel Control Method for Multimodule PV Microinverter With Multiple Functions", "text": "This paper presents a novel control method for multimodule photovoltaic microinverter (MI). The proposed MI employs a two-stage topology with active-clamped current-fed push\u2013pull converter cascaded with a full-bridge inverter. This system can operate in grid-connected mode to feed power to the grid with a programmable power factor. This system can also operate in line-interactive mode, i.e., share load power without feeding power to the grid. In the event of grid power failure, the MI can operate in a standalone mode to supply uninterruptible power to the load. This paper presents a multiloop control scheme with power programmable capability for achieving the above multiple functions. In addition, the proposed control scheme embedded a multimodule parallel capability that multiple MI modules can be paralleled to enlarge the capacity with autonomous control in all operation modes. Finally, three 250-W MI modules are adopted to demonstrate the effectiveness of the proposed control method in simulations as well as experiments."}
{"_id": "fe0e8bd42fe96f67d73756a582d1d394b326eaac", "title": "A refined PWM scheme for voltage and current source converters", "text": "A pulse-width-modulation (PWM) scheme that uses the converter switching frequency to minimize unwanted load current harmonics is described. This results in the reduction of the number of switch communications per cycle. The method is suitable for high-performance variable-speed AC-motor drives that require high-output switching frequencies for near-silent operation. It is also applicable, without change, to voltage or current-source inverters and to two and four-quadrant three-phase PWM rectifiers. Key predicted results have been verified experimentally on a 5 kVA inverter breadboard.<>"}
{"_id": "b2cc59430df4ff20e34c48d122ccb47b45b96f83", "title": "Path Planning of Unmanned Aerial Vehicles using B-Splines and Particle Swarm Optimization", "text": "Military operations are turning to more complex and advanced automation technologies for minimum risk and maximum efficiency.A critical piece to this strategy is unmanned aerial vehicles. Unmanned aerial vehicles require the intelligence to safely maneuver along a path to an intended target and avoiding obstacles such as other aircrafts or enemy threats. This paper presents a unique three-dimensional path planning problem formulation and solution approach using particle swarm optimization. The problem formulation was designed with three objectives: 1) minimize risk owing to enemy threats, 2) minimize fuel consumption incurred by deviating from the original path, and 3) fly over defined reconnaissance targets. The initial design point is defined as the original path of the unmanned aerial vehicles. Using particle swarm optimization, alternate paths are generated using B-spline curves, optimized based on the three defined objectives. The resulting paths can be optimized with a preference toward maximum safety, minimum fuel consumption, or target reconnaissance. This method has been implemented in a virtual environment where the generated alternate paths can be visualized interactively to better facilitate the decision-making process. The problem formulation and solution implementation is described along with the results from several simulated scenarios demonstrating the effectiveness of the method."}
{"_id": "7a48f36e567b3beda7866a25e37e4a3cc668d987", "title": "Diffusion Component Analysis: Unraveling Functional Topology in Biological Networks", "text": "Complex biological systems have been successfully modeled by biochemical and genetic interaction networks, typically gathered from high-throughput (HTP) data. These networks can be used to infer functional relationships between genes or proteins. Using the intuition that the topological role of a gene in a network relates to its biological function, local or diffusionbased \u201cguilt-by-association\u201d and graph-theoretic methods have had success in inferring gene functions. Here we seek to improve function prediction by integrating diffusion-based methods with a novel dimensionality reduction technique to overcome the incomplete and noisy nature of network data. In this paper, we introduce diffusion component analysis (DCA), a framework that plugs in a diffusion model and learns a low-dimensional vector representation of each node to encode the topological properties of a network. As a proof of concept, we demonstrate DCA\u2019s substantial improvement over state-of-the-art diffusion-based approaches in predicting protein function from molecular interaction networks. Moreover, our DCA framework can integrate multiple networks from heterogeneous sources, consisting of genomic information, biochemical experiments and other resources, to even further improve function prediction. Yet another layer of performance gain is achieved by integrating the DCA framework with support vector machines that take our node vector representations as features. Overall, our DCA framework provides a novel representation of nodes in a network that can be used as a plug-in architecture to other machine learning algorithms to decipher topological properties of and obtain novel insights into interactomes. This paper was selected for oral presentation at RECOMB 2015 and an abstract is published in the conference proceedings. ar X iv :1 50 4. 02 71 9v 1 [ qbi o. M N ] 1 0 A pr 2 01 5"}
{"_id": "45f3c92cfe87fa376dfe184cead765ff251b9b30", "title": "Experiments on deep learning for speech denoising", "text": "In this paper we present some experiments using a deep learning model for speech denoising. We propose a very lightweight procedure that can predict clean speech spectra when presented with noisy speech inputs, and we show how various parameter choices impact the quality of the denoised signal. Through our experiments we conclude that such a structure can perform better than some comparable single-channel approaches and that it is able to generalize well across various speakers, noise types and signal-to-noise ratios."}
{"_id": "881a4de26b49a08608ae056b7688d64396bd62cd", "title": "Detection of drug-drug interactions through data mining studies using clinical sources, scientific literature and social media", "text": "Drug-drug interactions (DDIs) constitute an important concern in drug development and postmarketing pharmacovigilance. They are considered the cause of many adverse drug effects exposing patients to higher risks and increasing public health system costs. Methods to follow-up and discover possible DDIs causing harm to the population are a primary aim of drug safety researchers. Here, we review different methodologies and recent advances using data mining to detect DDIs with impact on patients. We focus on data mining of different pharmacovigilance sources, such as the US Food and Drug Administration Adverse Event Reporting System and electronic health records from medical institutions, as well as on the diverse data mining studies that use narrative text available in the scientific biomedical literature and social media. We pay attention to the strengths but also further explain challenges related to these methods. Data mining has important applications in the analysis of DDIs showing the impact of the interactions as a cause of adverse effects, extracting interactions to create knowledge data sets and gold standards and in the discovery of novel and dangerous DDIs."}
{"_id": "06a267a3b7a696c57d857c3ff1cf0e7711ff0dee", "title": "Fast Min-Sum Algorithms for Decoding of LDPC over GF(q)", "text": "In this paper, we present a fast min-sum algorithm for decoding LDPC codes over GF(q). Our algorithm is different from the one presented by David Declercq and Marc Fossorier (2005) only at the way of speeding up the horizontal scan in the min-sum algorithm. The Declercq and Fossorier's algorithm speeds up the computation by reducing the number of configurations, while our algorithm uses the dynamic programming instead. Compared with the configuration reduction algorithm, the dynamic programming one is simpler at the design stage because it has less parameters to tune. Furthermore, it does not have the performance degradation problem caused by the configuration reduction because it searches the whole configuration space efficiently through dynamic programming. Both algorithms have the same level of complexity and use simple operations which are suitable for hardware implementations"}
{"_id": "a6355b13e74d2a11aba4bad75464bf721d7b61d4", "title": "Machine Learning Algorithms for Characterization of EMG Signals", "text": "\u2014In the last decades, the researchers of the human arm prosthesis are using different types of machine learning algorithms. This review article firstly gives a brief explanation about type of machine learning methods. Secondly, some recent applications of myoelectric control of human arm prosthesis by using machine learning algorithms are compared. This study presents two different comparisons based on feature extraction methods which are time series modeling and wavelet transform of EMG signal. Finally, of characterization of EMG for of human arm prosthesis have been and discussed."}
{"_id": "892c22460a1ef1da7d10d1cf007ff46c6c080f18", "title": "Generating Haptic Textures with a Vibrotactile Actuator", "text": "Vibrotactile actuation is mainly used to deliver buzzing sensations. But if vibrotactile actuation is tightly coupled to users' actions, it can be used to create much richer haptic experiences. It is not well understood, however, how this coupling should be done or which vibrotactile parameters create which experiences. To investigate how actuation parameters relate to haptic experiences, we built a physical slider with minimal native friction, a vibrotactile actuator and an integrated position sensor. By vibrating the slider as it is moved, we create an experience of texture between the sliding element and its track. We conducted a magnitude estimation experiment to map how granularity, amplitude and timbre relate to the experiences of roughness, adhesiveness, sharpness and bumpiness. We found that amplitude influences the strength of the perceived texture, while variations in granularity and timbre create distinct experiences. Our study underlines the importance of action in haptic perception and suggests strategies for deploying such tightly coupled feedback in everyday devices."}
{"_id": "df5c0ae24bdf598a9fe8e85facf476f4903bf8aa", "title": "Characterizing Taxi Flows in New York City", "text": "We present an analysis of taxi flows in Manhattan (NYC) using a variety of data mining approaches. The methods presented here can aid in development of representative and accurate models of large-scale traffic flows with applications to many areas, including outlier detection and characterization."}
{"_id": "265a32d3e5a55140389df0a0b666ac5c2dfaa0bd", "title": "Curriculum Learning Based on Reward Sparseness for Deep Reinforcement Learning of Task Completion Dialogue Management", "text": "Learning from sparse and delayed reward is a central issue in reinforcement learning. In this paper, to tackle reward sparseness problem of task oriented dialogue management, we propose a curriculum based approach on the number of slots of user goals. This curriculum makes it possible to learn dialogue management for sets of user goals with large number of slots. We also propose a dialogue policy based on progressive neural networks whose modules with parameters are appended with previous parameters fixed as the curriculum proceeds, and this policy improves performances over the one with single set of parameters."}
{"_id": "3c77afb5f21b4256f289371590fa539e074cc3aa", "title": "A system for understanding imaged infographics and its applications", "text": "Information graphics, or infographics, are visual representations of information, data or knowledge. Understanding of infographics in documents is a relatively new research problem, which becomes more challenging when infographics appear as raster images. This paper describes technical details and practical applications of the system we built for recognizing and understanding imaged infographics located in document pages. To recognize infographics in raster form, both graphical symbol extraction and text recognition need to be performed. The two kinds of information are then auto-associated to capture and store the semantic information carried by the infographics. Two practical applications of the system are introduced in this paper, including supplement to traditional optical character recognition (OCR) system and providing enriched information for question answering (QA). To test the performance of our system, we conducted experiments using a collection of downloaded and scanned infographic images. Another set of scanned document pages from the University of Washington document image database were used to demonstrate how the system output can be used by other applications. The results obtained confirm the practical value of the system."}
{"_id": "0c85afa692a692da6e444b1098e59d11f0b07b83", "title": "Coupling design and performance analysis of rim-driven integrated motor propulsor", "text": "Coupling design of rim-driven integrated motor propulsor for vessels and underwater vehicle is presented in this paper. The main characteristic of integrated motor propulsor is that the motor is integrated in the duct of propulsor. So the coupling design of motor and propulsor is the key to the overall design. Considering the influence of the motor and duct size, the propeller was designed, and the CFD Method was used to analyze the hydrodynamic performance of the propulsor. Based on the air-gap magnetic field of permanent magnet motor and the equivalent magnetic circuit, the integrated motor electromagnetic model was proposed, and the finite element method was used to analyze the motor electromagnetic field. Finally, the simulation of the integrated motor starting process with the load of propulsor torque was carried out, and the results meets the design specifications."}
{"_id": "cae3bc55809a531a933bf6071550eeb3a2632f55", "title": "Deep keyphrase generation with a convolutional sequence to sequence model", "text": "Keyphrases can provide highly condensed and valuable information that allows users to quickly acquire the main ideas. Most previous studies realize the automatic keyphrase extraction through dividing the source text into multiple chunks and then rank and select the most suitable ones. These approaches ignore the deep semantics behind the text and could not predict the keyphrases not appearing in the source text. A sequence to sequence model to generate keyphrases from vocabulary could solve the issues above. However, traditional sequence to sequence model based on recurrent neural network(RNN) suffers from low efficiency problem. We propose an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations can be completely parallelized over all elements so as to better exploit the GPU hardware. Our use of gated linear units alleviates gradient propagation and we equip each decoder layer with a separate attention model. Moreover, we incorporate a copying mechanism to handle out-of-vocabulary phrases. In experiments, we evaluate our model on six datasets, and our proposed model is demonstrated to outperform state-of-the-art baseline models consistently and significantly, both on extracting the keyphrases existing in the source text and generating the absent keyphrases based on the sematic meaning of the text."}
{"_id": "280f9cc6ee7679d02a7b8b58d08173628057f3ea", "title": "Evolutionary timeline summarization: a balanced optimization framework via iterative substitution", "text": "Classic news summarization plays an important role with the exponential document growth on the Web. Many approaches are proposed to generate summaries but seldom simultaneously consider evolutionary characteristics of news plus to traditional summary elements. Therefore, we present a novel framework for the web mining problem named Evolutionary Timeline Summarization (ETS). Given the massive collection of time-stamped web documents related to a general news query, ETS aims to return the evolution trajectory along the timeline, consisting of individual but correlated summaries of each date, emphasizing relevance, coverage, coherence and cross-date diversity. ETS greatly facilitates fast news browsing and knowledge comprehension and hence is a necessity. We formally formulate the task as an optimization problem via iterative substitution from a set of sentences to a subset of sentences that satisfies the above requirements, balancing coherence/diversity measurement and local/global summary quality. The optimized substitution is iteratively conducted by incorporating several constraints until convergence. We develop experimental systems to evaluate on 6 instinctively different datasets which amount to 10251 documents. Performance comparisons between different system-generated timelines and manually created ones by human editors demonstrate the effectiveness of our proposed framework in terms of ROUGE metrics."}
{"_id": "39c230241d51b1435472115aaa8c62b94ab9927d", "title": "Joint Inference for Event Timeline Construction", "text": "This paper addresses the task of constructing a timeline of events mentioned in a given text. To accomplish that, we present a novel representation of the temporal structure of a news article based on time intervals. We then present an algorithmic approach that jointly optimizes the temporal structure by coupling local classifiers that predict associations and temporal relations between pairs of temporal entities with global constraints. Moreover, we present ways to leverage knowledge provided by event coreference to further improve the system performance. Overall, our experiments show that the joint inference model significantly outperformed the local classifiers by 9.2% of relative improvement in F1. The experiments also suggest that good event coreference could make remarkable contribution to a robust event timeline construction system."}
{"_id": "6d823b8098ec2b13fb5dcbb02bb55d7030d37d5a", "title": "Automating Temporal Annotation with TARSQI", "text": "We present an overview of TARSQI, a modular system for automatic temporal annotation that adds time expressions, events and temporal relations to news texts."}
{"_id": "0229c0eb39efed90db6691469daf0bb7244cf649", "title": "Text classification and named entities for new event detection", "text": "New Event Detection is a challenging task that still offers scope for great improvement after years of effort. In this paper we show how performance on New Event Detection (NED) can be improved by the use of text classification techniques as well as by using named entities in a new way. We explore modifications to the document representation in a vector space-based NED system. We also show that addressing named entities preferentially is useful only in certain situations. A combination of all the above results in a multi-stage NED system that performs much better than baseline single-stage NED systems."}
{"_id": "19e8aaa1021f829c8ff0378158d9b69699ea4f83", "title": "Temporal Summaries of News Topics", "text": "We discuss technology to help a person monitor changes in news coverage over time. We define temporal summaries of news stories as extracting a single sentence from each event within a news topic, where the stories are presented one at a time and sentences from a story must be ranked before the next story can be considered. We explain a method for evaluation, and describe an evaluation corpus that we have built. We also propose several methods for constructing temporal summaries and evaluate their effectiveness in comparison to degenerate cases. We show that simple approaches are effective, but that the problem is far from solved."}
{"_id": "a75b289951207f627cdba8580a65cf18f9188d62", "title": "A 2.6GHz band 537W peak power GaN HEMT asymmetric Doherty amplifier with 48% drain efficiency at 7dB", "text": "A 537W saturation output power (Psat) asymmetric Doherty amplifier for 2.6GHz band was successfully developed. The main and peak amplifiers were implemented with Psat of 210W and 320W GaN HEMTs. The newly developed 320W GaN HEMT consists of a single GaN die, both input and output partial match networks and a compact package. Its output matching network was tuned to inverse class F and a single-ended 320W GaN HEMT achieved higher than 61.8% drain efficiency from 2.4GHz to 2.7GHz. The 210W and 320W GaN HEMTs asymmetric Doherty amplifier exhibited 57.3dBm (537W) Psat and 48% drain efficiency with \u221250.6dBc ACLR at 50.3dBm (107W) average output power using a 4-carrier W-CDMA signal and commercially available digital pre-distortion system. These excellent performances show the good suitability for 2.6GHz band basestations."}
{"_id": "90d0b469521883bf24d673457f080343b97902fb", "title": "A survey of automated material handling systems in 300-mm SemiconductorFabs", "text": "The fast-paced developments and technological breakthroughs in the semiconductor manufacturing industry elevates the importance of optimum utilization of resources. The newer 300-mm wafers fabs place a high level of emphasis on increasing yield and reducing cycle times. Automated material handling systems are importanttools that help us achieve these objectives. In addition, due to the increased weight and size of 300-mm wafers, an automated material handling system isa must for a 300-mm manufacturing facility. This paper discusses various approaches for automated materials handling in semiconductor manufacturing industries."}
{"_id": "66baa370deca40f0928554a62cd8b0e4dd5985d3", "title": "Nuclear CDKs Drive Smad Transcriptional Activation and Turnover in BMP and TGF-\u03b2 Pathways", "text": "TGF-beta and BMP receptor kinases activate Smad transcription factors by C-terminal phosphorylation. We have identified a subsequent agonist-induced phosphorylation that plays a central dual role in Smad transcriptional activation and turnover. As receptor-activated Smads form transcriptional complexes, they are phosphorylated at an interdomain linker region by CDK8 and CDK9, which are components of transcriptional mediator and elongation complexes. These phosphorylations promote Smad transcriptional action, which in the case of Smad1 is mediated by the recruitment of YAP to the phosphorylated linker sites. An effector of the highly conserved Hippo organ size control pathway, YAP supports Smad1-dependent transcription and is required for BMP suppression of neural differentiation of mouse embryonic stem cells. The phosphorylated linker is ultimately recognized by specific ubiquitin ligases, leading to proteasome-mediated turnover of activated Smad proteins. Thus, nuclear CDK8/9 drive a cycle of Smad utilization and disposal that is an integral part of canonical BMP and TGF-beta pathways."}
{"_id": "c9c30fe890c7b77b13d78c116cd80e046ae737b6", "title": "Antibacterial activity of large-area monolayer graphene film manipulated by charge transfer", "text": "Graphene has attracted increasing attention for potential applications in biotechnology due to its excellent electronic property and biocompatibility. Here we use both Gram-positive Staphylococcus aureus (S. aureus) and Gram-negative Escherichia coli (E. coli) to investigate the antibacterial actions of large-area monolayer graphene film on conductor Cu, semiconductor Ge and insulator SiO2. The results show that the graphene films on Cu and Ge can surprisingly inhibit the growth of both bacteria, especially the former. However, the proliferation of both bacteria cannot be significantly restricted by the graphene film on SiO2. The morphology of S. aureus and E. coli on graphene films further confirms that the direct contact of both bacteria with graphene on Cu and Ge can cause membrane damage and destroy membrane integrity, while no evident membrane destruction is induced by graphene on SiO2. From the viewpoint of charge transfer, a plausible mechanism is proposed here to explain this phenomenon. This study may provide new insights for the better understanding of antibacterial actions of graphene film and for the better designing of graphene-based antibiotics or other biomedical applications."}
{"_id": "87e9ed98e6b7d5c7bf3837f62f3af9d182224f3b", "title": "Industrial Management & Data Systems Transformational leadership and innovative work behavior", "text": "Purpose \u2013 The purpose of this paper is to explore the mediating role of psychological empowerment and the moderating role of self-construal (independent and interdependent) on the relationship between transformational leadership and employees\u2019 innovative work behavior (IWB). Design/methodology/approach \u2013 A total of 639 followers and 87 leaders filled out questionnaires from cross-industry sample of five most innovative companies of China. Structural equation modeling was used to analyze the relations. Findings \u2013 Results revealed that psychological empowerment mediated the relationship between transformational leadership and IWB. The research established that transformational leadership positively influences IWB which includes idea generation as well as idea implementation. The results also showed that the relationship between transformational leadership and IWB was stronger among employees with a higher interdependent self-construal and a lower independent self-construal. Originality/value \u2013 This study adds to IWB literature by empirically testing the moderating role of self-construal and the mediating role of psychological empowerment on transformational leadership-IWB link."}
{"_id": "6ccd19770991f35f7f4a9b0af62e3ff771536ae4", "title": "Named Entity Recognition System for Urdu", "text": "Named Entity Recognition (NER) is a task which helps in finding out Persons name, Location names, Brand names, Abbreviations, Date, Time etc and classifies the m into predefined different categories. NER plays a major role in various Natural Language Processing (NLP) fields like Information Extraction, Machine Translations and Question Answering. This paper describes the problems of NER in the context of Urdu Language and provides relevan t solutions. The system is developed to tag thirteen different Named Entities (NE), twelve NE proposed by IJCNLP-08 and Izaafats. We have used the Rule Based approach and developed the various rules to extract the Named Entities in the given Urdu text."}
{"_id": "4d26eb642175bcd01f0f67e55d73735bcfb13bab", "title": "A 27-mW 3.6-gb/s I/O transceiver", "text": "This paper describes a 3.6-Gb/s 27-mW transceiver for chip-to-chip applications. A voltage-mode transmitter is proposed that equalizes the channel while maintaining impedance matching. A comparator is proposed that achieves sampling bandwidth control and offset compensation. A novel timing recovery circuit controls the phase by mismatching the current in the charge pump. The architecture maintains high signal integrity while each port consumes only 7.5 mW/Gb/s. The entire design occupies 0.2 mm/sup 2/ in a 0.18-/spl mu/m 1.8-V CMOS technology."}
{"_id": "4faa414044d97b8deb45c37b78f59f18e6886bc7", "title": "Recurrent Neural Network Embedding for Knowledgebase Completion", "text": "Knowledge can often be represented using entities connected by relations. For example, the fact that tennis ball is round can be represented as \u201cTennisBall HasShape Round\u201d, where a \u201cTennisBall\u201d is one entity, \u201cHasShape\u201d is a relation and \u201cRound\u201d is another entity. A knowledge base is a way to store such structured information, a knowledge base stores triples of the \u201can entity-relation-an entity\u201d form, and a real world knowledge base often has millions or billions of such triples. There are several well-known knowledge bases including FreeBase [1], WordNet [2], YAGO [3], etc. They are important in fields like reasoning and question answering; for instance if one asks \u201cwhat is the shape of a tennis ball\u201d, we can search the knowledge base for the triple as \u201cTennisBall HasShape Round\u201d and output \u201cround\u201d as the answer."}
{"_id": "9981e27f01960526ea68227c7f8120e0c3ffe87f", "title": "Golden section search over hyper-rectangle: a direct search method", "text": "Abstract: This paper generalises the golden section optimal search method to higher dimensional optimisation problem. The method is applicable to a strict quasi-convex function of N-variables over an N-dimensional hyper rectangle. An algorithm is proposed in N-dimension. The algorithm is illustrated graphically in two dimensions and verified through several test functions in higher dimension using MATLAB."}
{"_id": "ac06b09e232fe9fd8c3fabef71e1fd7f6f752a7b", "title": "Honeypots : The Need of Network Security", "text": "Network forensics is basically used to detect attackers activity and to analyze their behavior. Data collection is the major task of network forensics and honeypots are used in network forensics to collect useful data. Honeypot is an exciting new technology with enormous potential for security communities. This review paper is based upon the introduction to honeypots, their importance in network security, types of honeypots, their advantages disadvantages and legal issues related with them. Research Paper also discuss about the shortcomings of intrusion detection system in a network security and how honeypots improve the security architecture of the organizational network. Furthermore, paper reveals about the different kind of honeypots their level of interaction and risks associated with them. Keywords\u2014honeyd; honeypots; nmap; network forensics"}
{"_id": "5eea7073acfa8b946204ff681aca192571a1d6c2", "title": "Repeatable Reverse Engineering with PANDA", "text": "We present PANDA, an open-source tool that has been purpose-built to support whole system reverse engineering. It is built upon the QEMU whole system emulator, and so analyses have access to all code executing in the guest and all data. PANDA adds the ability to record and replay executions, enabling iterative, deep, whole system analyses. Further, the replay log files are compact and shareable, allowing for repeatable experiments. A nine billion instruction boot of FreeBSD, e.g., is represented by only a few hundred MB. PANDA leverages QEMU's support of thirteen different CPU architectures to make analyses of those diverse instruction sets possible within the LLVM IR. In this way, PANDA can have a single dynamic taint analysis, for example, that precisely supports many CPUs. PANDA analyses are written in a simple plugin architecture which includes a mechanism to share functionality between plugins, increasing analysis code re-use and simplifying complex analysis development. We demonstrate PANDA's effectiveness via a number of use cases, including enabling an old but legitimately purchased game to run despite a lost CD key, in-depth diagnosis of an Internet Explorer crash, and uncovering the censorship activities and mechanisms of an IM client."}
{"_id": "34776d39fec2da8b10de1c2633bec5968341509f", "title": "Breaking 104 bit WEP in less than 60 seconds", "text": "We demonstrate an active attack on the WEP protocol that is able to recover a 104-bit WEP key using less than 40.000 frames with a success probability of 50%. In order to succeed in 95% of all cases, 85.000 packets are needed. The IV of these packets can be randomly chosen. This is an improvement in the number of required frames by more than an order of magnitude over the best known key-recovery attacks for WEP. On a IEEE 802.11g network, the number of frames required can be obtained by re-injection in less than a minute. The required computational effort is approximately 2 RC4 key setups, which on current desktop and laptop CPUs is neglegible."}
{"_id": "16b187a157ad1599bf785912ac7974e38198be7a", "title": "Brain-computer interface research at the Wadsworth Center.", "text": "Studies at the Wadsworth Center over the past 14 years have shown that people with or without motor disabilities can learn to control the amplitude of mu or beta rhythms in electroencephalographic (EEG) activity recorded from the scalp over sensorimotor cortex and can use that control to move a cursor on a computer screen in one or two dimensions. This EEG-based brain-computer interface (BCI) could provide a new augmentative communication technology for those who are totally paralyzed or have other severe motor impairments. Present research focuses on improving the speed and accuracy of BCI communication."}
{"_id": "1ada7e038d9dd3030cdbc7b0fc1eb041c1e4fb6b", "title": "Unsupervised Induction of Semantic Roles within a Reconstruction-Error Minimization Framework", "text": "We introduce a new approach to unsupervised estimation of feature-rich semantic role labeling models. Our model consists of two components: (1) an encoding component: a semantic role labeling model which predicts roles given a rich set of syntactic and lexical features; (2) a reconstruction component: a tensor factorization model which relies on roles to predict argument fillers. When the components are estimated jointly to minimize errors in argument reconstruction, the induced roles largely correspond to roles defined in annotated resources. Our method performs on par with most accurate role induction methods on English and German, even though, unlike these previous approaches, we do not incorporate any prior linguistic knowledge about the languages."}
{"_id": "e4d3d316d29c6612593be1c9ce4736e3928213ed", "title": "A Two-Iteration Clustering Method to Reveal Unique and Hidden Characteristics of Items Based on Text Reviews", "text": "This paper presents a new method for extracting unique features of items based on their textual reviews. The method is built of two similar iterations of applying a weighting scheme and then clustering the resultant set of vectors. In the first iteration, restaurants of similar food genres are grouped together into clusters. The second iteration reduces the importance of common terms in each such cluster, and highlights those that are unique to each specific restaurant. Clustering the restaurants again, now according to their unique features, reveals very interesting connections between the restaurants."}
{"_id": "3981416d7c8f784db8d9bfff2216fb50af711dbe", "title": "Siphon-based deadlock prevention for a class of S4PR generalized Petri nets", "text": "The paper proposes a new technique for deadlock control for a class of generalized Petri net (PN) with S4PR net, from the concept of control siphon. One important property of PN is to design structure, in terms of siphons, in order to the characterization of the deadlock prevent, and analytic structure of the synchronization subsystem which is needed to control place in a system. An efficient siphon method provides a solution to calculating minimal siphons, which is lead to constructing an optimal supervisor, ensuring that the system live state is preserved. The experimental results show the illustration of the proposed methods by using an S4PR net with an example is represented in FMS. The simulation result is also provided as a Petri nets the conception as well as the integration with MATLAB."}
{"_id": "483a5429b8f027bb75a03943c3150846377e44f6", "title": "Software-Defined Networking Paradigms in Wireless Networks: A Survey", "text": "Software-defined networking (SDN) has generated tremendous interest from both academia and industry. SDN aims at simplifying network management while enabling researchers to experiment with network protocols on deployed networks. This article is a distillation of the state of the art of SDN in the context of wireless networks. We present an overview of the major design trends and highlight key differences between them."}
{"_id": "75ebe1e0ae9d42732e31948e2e9c03d680235c39", "title": "Hello! My name is... Buffy'' -- Automatic Naming of Characters in TV Video", "text": "We investigate the problem of automatically labelling appearances of characters in TV or film material. This is tremendously challenging due to the huge variation in imaged appearance of each character and the weakness and ambiguity of available annotation. However, we demonstrate that high precision can be achieved by combining multiple sources of information, both visual and textual. The principal novelties that we introduce are: (i) automatic generation of time stamped character annotation by aligning subtitles and transcripts; (ii) strengthening the supervisory information by identifying when characters are speaking; (iii) using complementary cues of face matching and clothing matching to propose common annotations for face tracks. Results are presented on episodes of the TV series \u201cBuffy the Vampire Slayer\u201d."}
{"_id": "d85a7bef0adb2651353d6aa74d6e6a2e12c3d841", "title": "A survey and analysis of ontology-based software tools for semantic interoperability in IoT and WoT landscapes", "text": "The current Internet of Things (IoT) ecosystem consists of non-interoperable products and services. The Web of Things (WoT) advances the IoT by allowing consumers to interact with the IoT ecosystem through the open and standard web technologies. But the Web alone does not solve the interoperability issues. It is widely acknowledged that Semantic Web Technologies hold the potential of achieving data and platform interoperability in both the IoT and WoT landscapes. In this context, the paper attempts to review and analyze the current state of ontology-based software tools for semantic interoperability."}
{"_id": "e4acbc3656424766e39a6fbb0ae758d90554111e", "title": "\"With most of it being pictures now, I rarely use it\": Understanding Twitter's Evolving Accessibility to Blind Users", "text": "Social media is an increasingly important part of modern life. We investigate the use of and usability of Twitter by blind users, via a combination of surveys of blind Twitter users, large-scale analysis of tweets from and Twitter profiles of blind and sighted users, and analysis of tweets containing embedded imagery. While Twitter has traditionally been thought of as the most accessible social media platform for blind users, Twitter's increasing integration of image content and users' diverse uses for images have presented emergent accessibility challenges. Our findings illuminate the importance of the ability to use social media for people who are blind, while also highlighting the many challenges such media currently present this user base, including difficulty in creating profiles, in awareness of available features and settings, in controlling revelations of one's disability status, and in dealing with the increasing pervasiveness of image-based content. We propose changes that Twitter and other social platforms should make to promote fuller access to users with visual impairments."}
{"_id": "1b3c1759c16fe3f0ea57140bb23489fbf104616b", "title": "Full dimension MIMO for LTE-Advanced and 5G", "text": "Elevation beamforming and Full Dimension MIMO (FD-MIMO) has been an active area of research and standardization in 3GPP LTE-Advanced. In an FD-MIMO system, a base station with 2-dimensional (2D) active array supports multi-user joint elevation and azimuth beamforming (a.k.a. 3D beamfoming), which results in much higher cell capacity compared to conventional systems. Recent study has shown that with these new FD-MIMO technologies, we can achieve promising 3-5\u00d7 gain in both cell capacity as well as cell-edge throughput. In this paper, we will provide a brief summary of recent 3GPP activities, including the recently completed 3D channel model, ongoing study on FD-MIMO scenarios, antenna/RF (radio frequency) transceiver architectures, as well as potential network performance benefits. In addition, we also discuss some methods for reducing CSI (channel state information) feedback overhead and ensuring efficient operation of large size FD-MIMO for both TDD and FDD systems."}
{"_id": "77a78f27356d502425ad232bf5cc554b73b38897", "title": "AnySee: Peer-to-Peer Live Streaming", "text": "Efficient and scalable live-streaming overlay construction has become a hot topic recently. In order to improve the performance metrics, such as startup delay, source-to-end delay, and playback continuity, most previous studies focused on intra-overlay optimization. Such approaches have drawbacks including low resource utilization, high startup and source-to-end delay, and unreasonable resource assignment in global P2P networks. Anysee is a peer-to-peer live streaming system and adopts an inter-overlay optimization scheme, in which resources can join multiple overlays, so as to (1) improve global resource utilization and distribute traffic to all physical links evenly; (2) assign resources based on their locality and delay; (3) guarantee streaming service quality by using the nearest peers, even when such peers might belong to different overlays; and (4) balance the load among the group members. We compare the performance of our design with existing approaches based on comprehensive trace driven simulations. Results show that AnySee outperforms previous schemes in resource utilization and the QoS of streaming services. AnySee has been implemented as an Internet based live streaming system, and was successfully released in the summer of 2004 in CERNET of China. Over 60,000 users enjoy massive entertainment programs, including TV programs, movies, and academic conferences. Statistics prove that this design is scalable and robust, and we believe that the wide deployment of AnySee will soon benefit many more Internet users."}
{"_id": "e7dbd9ba29c59c68a9dae9f40dfc4040476c4624", "title": "Feeling and believing: the influence of emotion on trust.", "text": "The authors report results from 5 experiments that describe the influence of emotional states on trust. They found that incidental emotions significantly influence trust in unrelated settings. Happiness and gratitude--emotions with positive valence--increase trust, and anger--an emotion with negative valence--decreases trust. Specifically, they found that emotions characterized by other-person control (anger and gratitude) and weak control appraisals (happiness) influence trust significantly more than emotions characterized by personal control (pride and guilt) or situational control (sadness). These findings suggest that emotions are more likely to be misattributed when the appraisals of the emotion are consistent with the judgment task than when the appraisals of the emotion are inconsistent with the judgment task. Emotions do not influence trust when individuals are aware of the source of their emotions or when individuals are very familiar with the trustee."}
{"_id": "d6f7c761fa64754d7d93601a4802da27b5858f8b", "title": "3 D model classification using convolutional neural network", "text": "Our goal is to classify 3D models directly using convolutional neural network. Most of existing approaches rely on a set of human-engineered features. We use 3D convolutional neural network to let the network learn the features over 3D space to minimize classification error. We trained and tested over ShapeNet dataset with data augmentation by applying random transformations. We made various visual analysis to find out what the network has learned. We extended our work to extract additional information such as pose of the 3D model."}
{"_id": "00f51b60ef3929097ada76a16ff71badc2277165", "title": "Preliminary Guidelines for Empirical Research in Software Engineering", "text": "Empirical software engineering research needs research guidelines to improve the research and reporting processes. We propose a preliminary set of research guidelines aimed at stimulating discussion among software researchers. They are based on a review of research guidelines developed for medical researchers and on our own experience in doing and reviewing software engineering research. The guidelines are intended to assist researchers, reviewers and meta-analysts in designing, conducting and evaluating empirical studies. Editorial boards of software engineering journals may wish to use our recommendations as a basis for developing guidelines for reviewers and for framing policies for dealing with the design, data collection and analysis and reporting of empirical studies."}
{"_id": "a023f6d6c383f4a3839036f07b1ea0aa04da9cbb", "title": "Measuring and predicting software productivity: A systematic map and review", "text": "0950-5849/$ see front matter 2010 Elsevier B.V. A doi:10.1016/j.infsof.2010.12.001 \u21d1 Address: School of Computing, Blekinge Institute 372 25, Sweden. E-mail addresses: kai.petersen@bth.se, kai.petersen URLs: http://www.bth.se/besq, http://www.ericsso Context: Software productivity measurement is essential in order to control and improve the performance of software development. For example, by identifying role models (e.g. projects, individuals, tasks) when comparing productivity data. The prediction is of relevance to determine whether corrective actions are needed, and to discover which alternative improvement action would yield the best results. Objective: In this study we identify studies for software productivity prediction and measurement. Based on the identified studies we first create a classification scheme and map the studies into the scheme (systematic map). Thereafter, a detailed analysis and synthesis of the studies is conducted. Method: As a research method for systematically identifying and aggregating the evidence of productivity measurement and prediction approaches systematic mapping and systematic review have been used. Results: In total 38 studies have been identified, resulting in a classification scheme for empirical research on software productivity. The mapping allowed to identify the rigor of the evidence with respect to the different productivity approaches. In the detailed analysis the results were tabulated and synthesized to provide recommendations to practitioners. Conclusion: Risks with simple ratio-based measurement approaches were shown. In response to the problems data envelopment analysis seems to be a strong approach to capture multivariate productivity measures, and allows to identify reference projects to which inefficient projects should be compared. Regarding simulation no general prediction model can be identified. Simulation and statistical process control are promising methods for software productivity prediction. Overall, further evidence is needed to make stronger claims and recommendations. In particular, the discussion of validity threats should become standard, and models need to be compared with each other. 2010 Elsevier B.V. All rights reserved."}
{"_id": "b58b3a1dd84fe44f91510df00905a1ed33c1525c", "title": "A Review of Productivity Factors and Strategies on Software Development", "text": "Since the late seventies, efforts to catalog factors that influences productivity, as well as actions to improve it, has been a huge concern for both academy and software development industry. Despite numerous studies, software organizations still do not know which the most significant factors are and what to do with it. Several studies present the factors in a very superficial way, some others address only the related factors or there are those that describe only a single factor. Actions to deal with the factors are spread and frequently were not mapped. Through a literature review, this paper presents a consolidated view of the main factors that have affected productivity over the years, and the strategies to deal with these factors nowadays. This research aims to support software development industry on the selection of their strategies to improve productivity by maximizing the positive factors and minimizing or avoiding the impact of the negative ones."}
{"_id": "d7b9dde9a7d304b378079049a0c2af40454a13bb", "title": "The impact of agile practices on communication in software development", "text": "Agile software development practices such as eXtreme Programming (XP) and SCRUM have increasingly been adopted to respond to the challenges of volatile business environments, where the markets and technologies evolve rapidly and present the unexpected. In spite of the encouraging results so far, little is known about how agile practices affect communication. This article presents the results from a study which examined the impact of XP and SCRUM practices on communication within software development teams and within the focal organization. The research was carried out as a case study in F-Secure where two agile software development projects were compared from the communication perspective. The goal of the study is to increase the understanding of communication in the context of agile software development: internally among the developers and project leaders and in the interface between the development team and stakeholders (i.e. customers, testers, other development teams). The study shows that agile practices improve both informal and formal communication. However, it further indicates that, in larger development situations involving multiple external stakeholders, a mismatch of adequate communication mechanisms can sometimes even hinder the communication. The study highlights the fact that hurdles and improvements in the communication process can both affect the feature requirements and task subtask dependencies as described in coordination theory. While the use of SCRUM and some XP practices facilitate team and organizational communication of the dependencies between product features and working tasks, the use of agile practices requires that the team and organization use also additional plan-driven practices to ensure the efficiency of external communication between all the actors of software development."}
{"_id": "0ee47ca8e90f3dd2107b6791c0da42357c56f5bc", "title": "Agile Software Development: The Business of Innovation", "text": "T he rise and fall of the dot-com-driven Internet economy shouldn't distract us from seeing that the business environment continues to change at a dramatically increasing pace. To thrive in this turbulent environment, we must confront the business need for relentless innovation and forge the future workforce culture. Agile software development approaches such as Extreme Programming , Crystal methods, Lean Development, Scrum, Adaptive Software Development (ASD), and others view change from a perspective that mirrors today's turbulent business and technology environment. In a recent study of more than 200 software development projects, QSM Associates' Michael Mah reported that the researchers couldn't find nearly half of the projects' original plans to measure against. Why? Conforming to plan was no longer the primary goal; instead, satisfying customers\u2014at the time of delivery , not at project initiation\u2014took precedence. In many projects we review, major changes in the requirements, scope, and technology that are outside the development team's control often occur within a project's life span. Accepting that Barry Boehm's life cycle cost differentials theory\u2014the cost of change grows through the software's development life cycle\u2014remains valid, the question today is not how to stop change early in a project but how to better handle inevitable changes throughout its life cycle. Traditional approaches assumed that if we just tried hard enough, we could anticipate the complete set of requirements early and reduce cost by eliminating change. Today, eliminating change early means being unresponsive to business con-ditions\u2014in other words, business failure. Similarly, traditional process manage-ment\u2014by continuous measurement, error identification, and process refine-ments\u2014strove to drive variations out of processes. This approach assumes that variations are the result of errors. Today, while process problems certainly cause some errors, external environmental changes cause critical variations. Because we cannot eliminate these changes, driving down the cost of responding to them is the only viable strategy. Rather than eliminating rework, the new strategy is to reduce its cost. However, in not just accommodating change, but embracing it, we also must be careful to retain quality. Expectations have grown over the years. The market demands and expects innovative, high-quality software that meets its needs\u2014 and soon. Agile methods are a response to this expectation. Their strategy is to reduce the cost of change throughout a project. Extreme Programming (XP), for example, calls for the software development team to \u2022 produce the first delivery in weeks, to achieve an early win and rapid \u2026"}
{"_id": "12f3f9a5bbcaa3ab09eff325bd4554924ac1356d", "title": "The Critical Success Factors for Effective ICT Governance in Malaysian Public Sector : A Delphi Study", "text": "The fundamental issues in ICT Governance (ICTG) implementation for Malaysian Public Sector (MPS) is how ICT be applied to support improvements in productivity, management effectiveness and the quality of services offered to its citizens. Our main concern is to develop and adopt a common definition and framework to illustrate how ICTG can be used to better align ICT with government\u2019s operations and strategic focus. In particular, we want to identify and categorize factors that drive a successful ICTG process. This paper presents the results of an exploratory study to identify, validate and refine such Critical Success Factors (CSFs) and confirmed seven CSFs and nineteen sub-factors as influential factors that fit MPS after further validated and refined. The Delphi method applied in validation and refining process before being endorsed as appropriate for MPS. The identified CSFs reflect the focus areas that need to be considered strategically to strengthen ICT Governance implementation and ensure business success. Keywords\u2014IT Governance, Critical Success Factors."}
{"_id": "427a03a0746f398340e8a7f95d56316dacd8d70c", "title": "The sil Locus in Streptococcus Anginosus Group: Interspecies Competition and a Hotspot of Genetic Diversity", "text": "The Streptococcus Invasion Locus (Sil) was first described in Streptococcus pyogenes and Streptococcus pneumoniae, where it has been implicated in virulence. The two-component peptide signaling system consists of the SilA response regulator and SilB histidine kinase along with the SilCR signaling peptide and SilD/E export/processing proteins. The presence of an associated bacteriocin region suggests this system may play a role in competitive interactions with other microbes. Comparative analysis of 42 Streptococcus Anginosus/Milleri Group (SAG) genomes reveals this to be a hot spot for genomic variability. A cluster of bacteriocin/immunity genes is found adjacent to the sil system in most SAG isolates (typically 6-10 per strain). In addition, there were two distinct SilCR peptides identified in this group, denoted here as SilCRSAG-A and SilCRSAG-B, with corresponding alleles in silB. Our analysis of the 42 sil loci showed that SilCRSAG-A is only found in Streptococcus intermedius while all three species can carry SilCRSAG-B. In S. intermedius B196, a putative SilA operator is located upstream of bacteriocin gene clusters, implicating the sil system in regulation of microbe-microbe interactions at mucosal surfaces where the group resides. We demonstrate that S. intermedius B196 responds to its cognate SilCRSAG-A, and, less effectively, to SilCRSAG-B released by other Anginosus group members, to produce putative bacteriocins and inhibit the growth of a sensitive strain of S. constellatus."}
{"_id": "7ba0aa88b813b8e7e47d5999cf4802bb9b30b86a", "title": "Towards Lexicalization of DBpedia Ontology with Unsupervised Learning and Semantic Role Labeling", "text": "Filling the gap between natural language expressions and ontology concepts or properties is the new trend in Semantic Web. Ontology lexicalization introduces a new layer of lexical information for ontology properties and concepts. We propose a method based on unsupervised learning for the extraction of the potential lexical expressions of DBpedia propertiesfrom Wikipedia text corpus. It is a resource-driven approach that comprises three main steps. The first step consists of the extraction of DBpedia triples for the aimed property followed by the extraction of Wikipedia articles describing the resources from these triples. In the second step, sentences mostly related to the property are extracted from the articles and they are analyzed with a Semantic Role Labeler resulting in a set of SRL annotated trees. In the last step, clusters of expressions are built using spectral clustering based on the distances between the SRL trees. The clusters with the least variance are considered to be relevant for the lexical expressions of the property."}
{"_id": "2d633db75b177aad6045c0469ba0696b905f314f", "title": "An objective and subjective study of the role of semantics and prosodic features in building corpora for emotional TTS", "text": "Building a text corpus suitable to be used in corpus-based speech synthesis is a time-consuming process that usually requires some human intervention to select the desired phonetic content and the necessary variety of prosodic contexts. If an emotional text-to-speech (TTS) system is desired, the complexity of the corpus generation process increases. This paper presents a study aiming to validate or reject the use of a semantically neutral text corpus for the recording of both neutral and emotional (acted) speech. The use of this kind of texts would eliminate the need to include semantically emotional texts into the corpus. The study has been performed for Basque language. It has been made by performing subjective and objective comparisons between the prosodic characteristics of recorded emotional speech using both semantically neutral and emotional texts. At the same time, the performed experiments allow for an evaluation of the capability of prosody to carry emotional information in Basque language. Prosody manipulation is the most common processing tool used in concatenative TTS. Experiments of automatic recognition of the emotions considered in this paper (the \"Big Six emotions\") show that prosody is an important emotional indicator, but cannot be the only manipulated parameter in an emotional TTS system-at least not for all the emotions. Resynthesis experiments transferring prosody from emotional to neutral speech have also been performed. They corroborate the results and support the use of a neutral-semantic-content text in databases for emotional speech synthesis"}
{"_id": "516f412a76911a13c9128aac827b52b27b98fad9", "title": "Uncovering social network sybils in the wild", "text": "Sybil accounts are fake identities created to unfairly increase the power or resources of a single user. Researchers have long known about the existence of Sybil accounts in online communities such as file-sharing systems, but have not been able to perform large scale measurements to detect them or measure their activities. In this paper, we describe our efforts to detect, characterize and understand Sybil account activity in the Renren online social network (OSN). We use ground truth provided by Renren Inc. to build measurement based Sybil account detectors, and deploy them on Renren to detect over 100,000 Sybil accounts. We study these Sybil accounts, as well as an additional 560,000 Sybil accounts caught by Renren, and analyze their link creation behavior. Most interestingly, we find that contrary to prior conjecture, Sybil accounts in OSNs do not form tight-knit communities. Instead, they integrate into the social graph just like normal users. Using link creation timestamps, we verify that the large majority of links between Sybil accounts are created accidentally, unbeknownst to the attacker. Overall, only a very small portion of Sybil accounts are connected to other Sybils with social links. Our study shows that existing Sybil defenses are unlikely to succeed in today's OSNs, and we must design new techniques to effectively detect and defend against Sybil attacks."}
{"_id": "6847de10b11501b26f6d35405d1b6436ef17c0b4", "title": "Query by Example of Speaker Audio Signals Using Power Spectrum and MFCCs", "text": "Search engine is the popular term for an information retrieval (IR) system. Typically, search engine can be based on full-text indexing. Changing the presentation from the text data to multimedia data types make an information retrieval process more complex such as a retrieval of image or sounds in large databases. This paper introduces the use of language and text independent speech as input queries in a large sound database by using Speaker identification algorithm. The method consists of 2 main processing first steps, we separate vocal and non-vocal identification after that vocal be used to speaker identification for audio query by speaker voice. For the speaker identification and audio query by process, we estimate the similarity of the example signal and the samples in the queried database by calculating the Euclidian distance between the Mel frequency cepstral coefficients (MFCC) and Energy spectrum of acoustic features. The simulations show that the good performance with a sustainable computational cost and obtained the average accuracy rate more than 90%."}
{"_id": "26433d86b9c215b5a6871c70197ff4081d63054a", "title": "Multimodal biometric fusion at feature level: Face and palmprint", "text": "Multimodal biometrics has recently attracted substantial interest for its high performance in biometric recognition system. In this paper we introduce multimodal biometrics for face and palmprint images using fusion techniques at the feature level. Gabor based image processing is utilized to extract discriminant features, while principal component analysis (PCA) and linear discriminant analysis (LDA) are used to reduce the dimension of each modality. The output features of LDA are serially combined and classified by a Euclidean distance classifier. The experimental results based on ORL face and Poly-U palmprint databases proved that this fusion technique is able to increase biometric recognition rates compared to that produced by single modal biometrics."}
{"_id": "4ca6307c991f8f4c28ebdd45a08239dfb9da1c0c", "title": "Improved Style Transfer by Respecting Inter-layer Correlations", "text": "A popular series of style transfer methods apply a style to a content image by controlling mean and covariance of values in early layers of a feature stack. This is insufficient for transferring styles that have strong structure across spatial scales like, e.g., textures where dots lie on long curves. This paper demonstrates that controlling inter-layer correlations yields visible improvements in style transfer methods. We achieve this control by computing cross-layer, rather than within-layer, gram matrices. We find that (a) cross-layer gram matrices are sufficient to control within-layer statistics. Inter-layer correlations improves style transfer and texture synthesis. The paper shows numerous examples on \u201dhard\u201d real style transfer problems (e.g. long scale and hierarchical patterns); (b) a fast approximate style transfer method can control cross-layer gram matrices; (c) we demonstrate that multiplicative, rather than additive style and content loss, results in very good style transfer. Multiplicative loss produces a visible emphasis on boundaries, and means that one hyper-parameter can be eliminated. 1 ar X iv :1 80 1. 01 93 3v 1 [ cs .C V ] 5 J an 2 01 8"}
{"_id": "58c87d2d678aab8bccd5cb20d04bc867682b07f2", "title": "The INTERSPEECH 2017 Computational Paralinguistics Challenge: Addressee, Cold & Snoring", "text": "The INTERSPEECH 2017 Computational Paralinguistics Challenge addresses three different problems for the first time in research competition under well-defined conditions: In the Addressee sub-challenge, it has to be determined whether speech produced by an adult is directed towards another adult or towards a child; in the Cold sub-challenge, speech under cold has to be told apart from \u2018healthy\u2019 speech; and in the Snoring subchallenge, four different types of snoring have to be classified. In this paper, we describe these sub-challenges, their conditions, and the baseline feature extraction and classifiers, which include data-learnt feature representations by end-to-end learning with convolutional and recurrent neural networks, and bag-of-audiowords for the first time in the challenge series."}
{"_id": "1ae9b720b3b3e497ef6a2a3e97079c0acb8570f1", "title": "An Entity-Centric Approach for Privacy and Identity Management in Cloud Computing", "text": "Entities (e.g., users, services) have to authenticate themselves to service providers (SPs) in order to use their services. An entity provides personally identifiable information (PII) that uniquely identifies it to an SP. In the traditional application-centric Identity Management (IDM) model, each application keeps trace of identities of the entities that use it. In cloud computing, entities may have multiple accounts associated with different SPs, or one SP. Sharing PIIs of the same entity across services along with associated attributes can lead to mapping of PIIs to the entity. We propose an entity-centric approach for IDM in the cloud. The approach is based on: (1) active bundles\u2014each including a payload of PII, privacy policies and a virtual machine that enforces the policies and uses a set of protection mechanisms to protect themselves, (2) anonymous identification to mediate interactions between the entity and cloud services using entity\u2019s privacy policies. The main characteristics of the approach are: it is independent of third party, gives minimum information to the SP and provides ability to use identity data on untrusted hosts."}
{"_id": "626a38a32e2255e5bef98880ebbddf6994840e9e", "title": "Multichannel Decoded Local Binary Patterns for Content-Based Image Retrieval", "text": "Local binary pattern (LBP) is widely adopted for efficient image feature description and simplicity. To describe the color images, it is required to combine the LBPs from each channel of the image. The traditional way of binary combination is to simply concatenate the LBPs from each channel, but it increases the dimensionality of the pattern. In order to cope with this problem, this paper proposes a novel method for image description with multichannel decoded LBPs. We introduce adder- and decoder-based two schemas for the combination of the LBPs from more than one channel. Image retrieval experiments are performed to observe the effectiveness of the proposed approaches and compared with the existing ways of multichannel techniques. The experiments are performed over 12 benchmark natural scene and color texture image databases, such as Corel-1k, MIT-VisTex, USPTex, Colored Brodatz, and so on. It is observed that the introduced multichannel adder- and decoder-based LBPs significantly improve the retrieval performance over each database and outperform the other multichannel-based approaches in terms of the average retrieval precision and average retrieval rate."}
{"_id": "57a809faecdeb6c97160be4cab0d0b2f42ed3c6f", "title": "Mapping the world's photos", "text": "We investigate how to organize a large collection of geotagged photos, working with a dataset of about 35 million images collected from Flickr. Our approach combines content analysis based on text tags and image data with structural analysis based on geospatial data. We use the spatial distribution of where people take photos to define a relational structure between the photos that are taken at popular places. We then study the interplay between this structure and the content, using classification methods for predicting such locations from visual, textual and temporal features of the photos. We find that visual and temporal features improve the ability to estimate the location of a photo, compared to using just textual features. We illustrate using these techniques to organize a large photo collection, while also revealing various interesting properties about popular cities and landmarks at a global scale."}
{"_id": "98997f81cc77f53ee84e9b6df1edc253f9f9d5f9", "title": "Personalized, interactive tag recommendation for flickr", "text": "We study the problem of personalized, interactive tag recommendation for Flickr: While a user enters/selects new tags for a particular picture, the system suggests related tags to her, based on the tags that she or other people have used in the past along with (some of) the tags already entered. The suggested tags are dynamically updated with every additional tag entered/selected. We describe a new algorithm, called Hybrid, which can be applied to this problem, and show that it outperforms previous algorithms. It has only a single tunable parameter, which we found to be very robust.\n Apart from this new algorithm and its detailed analysis, our main contributions are (i) a clean methodology which leads to conservative performance estimates, (ii) showing how classical classification algorithms can be applied to this problem, (iii) introducing a new cost measure, which captures the effort of the whole tagging process, (iv) clearly identifying, when purely local schemes (using only a user's tagging history) can or cannot be improved by global schemes (using everybody's tagging history)."}
{"_id": "424f92289a632f85f6ba9a611614d145c7d3393a", "title": "A review on software development security engineering using Dynamic System Method (DSDM)", "text": "Agile methodology such as Scrum, Extreme Programming (XP), Feature Driven Development (FDD) and the Dynamic System Development Method (DSDM) have gained enough recognition as efficient development process by delivering software fast even under the time constrains. However, like other agile methods DSDM has been criticized because of unavailability of security element in its four phases. In order to have a deeper look into the matter and discover more about the reality, we conducted a literature review. Our findings highlight that, in its current form, the DSDM does not support developing secure software. Although, there are a few researches on this topic about Scrum, XP and FDD but, based on our findings, there is no research on developing secure software using DSDM. Thus, in our future work we intend to propose enhanced DSDM that will cater the security aspects in software development."}
{"_id": "cbd8a90e809151b684e73fb3e31c2731874570c4", "title": "A systematic literature review of nurse shortage and the intention to leave.", "text": "AIM\nTo present the findings of a literature review regarding nurses' intention to leave their employment or the profession.\n\n\nBACKGROUND\nThe nursing shortage is a problem that is being experienced worldwide. It is a problem that, left unresolved, could have a serious impact on the provision of quality health care. Understanding the reasons why nurses leave their employment or the profession is imperative if efforts to increase retention are to be successful.\n\n\nEVALUATION\nElectronic databases were systematically searched to identify English research reports about nurses' intention to leave their employment or the profession. Key results concerning the issue were extracted and synthesized.\n\n\nKEY ISSUES\nThe diversified measurement instruments, samples and levels of intention to leave caused difficulties in the attempt to compare or synthesize findings. The factors influencing nurses' intention to leave were identified and categorized into organizational and individual factors.\n\n\nCONCLUSIONS\nThe reasons that trigger nurses' intention to leave are complex and are influenced by organizational and individual factors. Further studies should be conducted to investigate how external factors such as job opportunities correlate with nurses' intention to leave.\n\n\nIMPLICATIONS FOR NURSING MANAGEMENT\nThe review provides insight that can be useful in designing and implementing strategies to maintain a sustainable workforce in nursing."}
{"_id": "3df4b372489e734beeb70b320666354e53eb23c4", "title": "A model for play-based intervention for children with ADHD.", "text": "BACKGROUND/AIM\nThe importance of play in the social development of children is undisputed. Even though children with attention-deficit hyperactivity disorder (ADHD) experience serious social problems, there is limited research on their play. By integrating literature on ADHD with literature on play, we can postulate how play is influenced by the characteristics of ADHD. These postulations enabled us to propose a theoretical model (proposed model) to depict the interactive process between the characteristics of ADHD and factors that promote play. This paper presents the revised model and principles for intervention based on the results of a study investigating the play of children with ADHD (reported elsewhere).\n\n\nMETHODS\nWe tested the proposed model in a study comparing two groups of children (n = 350) between the ages of 5 and 11 years. One group consisted of children diagnosed with ADHD (n = 112) paired with playmates (n = 112) who were typically developing; the control group consisted of typically developing children paired with typically developing playmates (n = 126). The Test of Playfulness was administered, and the model was revised in line with the findings.\n\n\nRESULTS AND CONCLUSIONS\nThe findings suggest difficulties in the social play and lack of interpersonal empathy in the play of children with ADHD. We draw on the revised model to propose preliminary principles for play-based interventions for children with ADHD. The principles emphasise the importance of capturing the motivation of children with ADHD, counteracting the effects of lack of interpersonal empathy, and considerations for including playmates in the intervention process."}
{"_id": "a864fbfd34426c98b2832a3c2aa9fbc7df8bb910", "title": "Role of affective self-regulatory efficacy in diverse spheres of psychosocial functioning.", "text": "This prospective study with 464 older adolescents (14 to 19 years at Time 1; 16 to 21 years at Time 2) tested the structural paths of influence through which perceived self-efficacy for affect regulation operates in concert with perceived behavioral efficacy in governing diverse spheres of psychosocial functioning. Self-efficacy to regulate positive and negative affect is accompanied by high efficacy to manage one's academic development, to resist social pressures for antisocial activities, and to engage oneself with empathy in others' emotional experiences. Perceived self-efficacy for affect regulation essentially operated mediationally through the latter behavioral forms of self-efficacy rather than directly on prosocial behavior, delinquent conduct, and depression. Perceived empathic self-efficacy functioned as a generalized contributor to psychosocial functioning. It was accompanied by prosocial behavior and low involvement in delinquency but increased vulnerability to depression in adolescent females."}
{"_id": "f4b98dbd75c87a86a8bf0d7e09e3ebbb63d14954", "title": "Comparative Evaluation of Various MFCC Implementations on the Speaker Verification Task", "text": "Making no claim of being exhaustive, a review of the most popular MFCC (Mel Frequency Cepstral Coefficients) implementations is made. These differ mainly in the particular approximation of the nonlinear pitch perception of human, the filter bank design, and the compression of the filter bank output. Then, a comparative evaluation of the presented implementations is performed on the task of text-independent speaker verification, by means of the well-known 2001 NIST SRE (speaker recognition evaluation) one-speaker detection database."}
{"_id": "7b143616c637734d6c89f28723e2ceb7aabc0389", "title": "Personality, Culture, and System Factors - Impact on Affective Response to Multimedia", "text": "Whilst affective responses to various forms and genres of multimedia content have been well researched, precious few studies have investigated the combined impact that multimedia system parameters and human factors have on affect. Consequently, in this paper we explore the role that two primordial dimensions of human factors personality and culture in conjunction with system factors frame rate, resolution, and bit rate have on user affect and enjoyment of multimedia presentations. To this end, a two-site, cross-cultural study was undertaken, the results of which produced three predictve models. Personality and Culture traits were shown statistically to represent 5.6% of the variance in positive affect, 13.6% in negative affect and 9.3% in enjoyment. The correlation between affect and enjoyment, was significant. Predictive modeling incorporating human factors showed about 8%, 7% and 9% improvement in predicting positive affect, negative affect and enjoyment respectively when compared to models trained only on system factors. Results and analysis indicate the significant role played by human factors in influencing affect that users experience while watching multimedia."}
{"_id": "906dc85636c056408f13f0d24b6d6f92ffb63113", "title": "Towards a Simple but Useful Ontology Design Pattern Representation Language", "text": "The need for a representation language for ontology design patterns has long been recognized. However, the body of literature on the topic is still rather small and does not sufficiently reflect the diverse requirements on such a language. Herein, we propose a simple but useful and extendable approach which is fully compatible with the Web Ontology Language and should be easy to adopt by the community."}
{"_id": "f3da8e33c90dc19a33d91a1b6b2ec4430f3b0315", "title": "Enhanced multisensory integration in older adults", "text": "Information from the different senses is seamlessly integrated by the brain in order to modify our behaviors and enrich our perceptions. It is only through the appropriate binding and integration of information from the different senses that a meaningful and accurate perceptual gestalt can be generated. Although a great deal is known about how such cross-modal interactions influence behavior and perception in the adult, there is little knowledge as to the impact of aging on these multisensory processes. In the current study, we examined the speed of discrimination responses of aged and young individuals to the presentation of visual, auditory or combined visual-auditory stimuli. Although the presentation of multisensory stimuli speeded response times in both groups, the performance gain was significantly greater in the aged. Most strikingly, multisensory stimuli restored response times in the aged to those seen in young subjects to the faster of the two unisensory stimuli (i.e., visual). The current results suggest that despite the decline in sensory processing that accompanies aging, the use of multiple sensory channels may represent an effective compensatory strategy to overcome these unisensory deficits."}
{"_id": "404574efdb5193dc6b69ffcfbf2190212ebfa43f", "title": "Potential Vulnerabilities of Neuronal Reward, Risk, and Decision Mechanisms to Addictive Drugs", "text": "How do addictive drugs hijack the brain's reward system? This review speculates how normal, physiological reward processes may be affected by addictive drugs. Addictive drugs affect acute responses and plasticity in dopamine neurons and postsynaptic structures. These effects reduce reward discrimination, increase the effects of reward prediction error signals, and enhance neuronal responses to reward-predicting stimuli, which may contribute to compulsion. Addictive drugs steepen neuronal temporal reward discounting and create temporal myopia that impairs the control of drug taking. Tonically enhanced dopamine levels may disturb working memory mechanisms necessary for assessing background rewards and thus may generate inaccurate neuronal reward predictions. Drug-induced working memory deficits may impair neuronal risk signaling, promote risky behaviors, and facilitate preaddictive drug use. Malfunctioning adaptive reward coding may lead to overvaluation of drug rewards. Many of these malfunctions may result in inadequate neuronal decision mechanisms and lead to choices biased toward drug rewards."}
{"_id": "64290c658d2f1c47ad4fd8757a87ac6c9a708f89", "title": "Work Teams Applications and Effectiveness", "text": "\" This article uses an ecological approach to analyze factors in the effectiveness of work teams--small groups of interdependent individuals who share responsibility for outcomes for their organizations. Applications include advice and involvement, as in quality control circles and committees; production and service, as in assembly groups and sales teams; projects and development, as in engineering and research groups; and action and negotiation, as in sports teams and combat units. An analytic framework depicts team effectiveness as interdependent with organizational context, boundaries, and team development. Key context factors include (a) organizational culture, (b) technology and task design, (c) mission clarity, (d) autonomy, (e) rewards, ( f ) performance feedback, (g) training/consultation, and (h) physical environment. Team boundaries may mediate the impact of organizational context on team development. Current research leaves unanswered questions but suggests that effectiveness depends on organizational context and boundaries as much as on internal processes. Issues are raised for research and practice. The terms work team and work group appear often in today's discussions of organizations. Some experts claim that to be effective modern firms need to use small teams for an increasing variety of jobs. For instance, in an article subtitled \"The Team as Hero,\" Reich (1987) wrote, If we are to compete in today's world, we must begin to celebrate collective entrepreneurship, endeavors in which the whole of the effort is greater than the sum of individual contributions. We need to honor our teams more, our aggressive leaders and maverick geniuses less. (p. 78) Work teams occupy a pivotal role in what has been described as a management transformation (Walton, 1985), paradigm shift (Ketehum, 1984), and corporate renaissance (Kanter, 1983). In this management revolution, Peters (1988) advised that organizations use \"multi-function teams for all development activities\" (p. 210) and \"organize every function into tento thirty-person, largely self-managing teams\" (p. 296). Tornatzky (1986) pointed to new technologies that allow small work groups to take responsibility for whole products. Hackman (1986) predicted that, \"organizations in the future will rely heavily on member self-management\" (p. 90). Building blocks of such organizations are self-regulating work teams. But University of Tennessee University of Wisconsin--Eau Claire University o f Tennessee far from being revolutionary, work groups are traditional; \"the problem before us is not to invent more tools, but to use the ones we have\" (Kanter, 1983, p. 64). In this article, we explore applications of work teams and propose an analytic framework for team effectiveness. Work teams are defined as interdependent collections of individuals who share responsibility for specific outcomes for their organizations. In what follows, we first identify applications of work teams and then offer a framework for analyzing team effectiveness. Its facets make up topics of subsequent sections: organizational context, boundaries, and team development. We close with issues for research and practice. 
A p p l i c a t i o n s o f W o r k T e a m s Two watershed events called attention to the benefits of applying work teams beyond sports and mih'tary settings: the Hawthorne studies (Homans, 1950) and European experiments with autonomous work groups (Kelly, 1982). Enthusiasm has alternated with disenchantment (Bramel & Friend, 1987), but the 1980s have brought a resurgence of interest. Unfortunately, we have little evidence on how widely work teams are used or whether their use is expanding. Pasmore, Francis, Haldeman, and Shani (1982) reported that introduction of autonomous work groups was the most common intervention in 134 experiments in manufacturing firms. Production teams number among four broad categories of work team applications: (a) advice and involvement, (b) production and service, (c) projects and development, and (d) action and negotiation. Advice and Involvement Decision-making committees traditional in management now are expanding to first-line employees. Quality control (QC) circles and employee involvement groups have been common in the 1980s, often as vehicles for employee participation ( Cole, 1982 ). Perhaps several hundred thousand U.S. employees belong to QC circles (Ledford, Lawler, & Mohrman, 1988), usually first-line manufacturing employees who meet to identify opportunities for improvement. Some make and carry out proposals, but most have restricted scopes of activity and little working time, perhaps a few hours each month (Thompson, 1982). Employee involvement groups operate similarly, exploring ways to improve customer service (Peterfreund, 1982). 120 February 1990 \u2022 American Psychologist Copyright 1990 by the American Psyc2aological A~mciafion, Inc. 0003-066X/90/$00.75 Vol. 45, No. 2, 120-133 QC circles and employee involvement groups at times may have been implemented poorly (Shea, 1986), but they have been used extensively in some companies"}
{"_id": "16deaf3986d996a7bf5f6188d39607c2e406a1f8", "title": "Self-Adaptive Anytime Stream Clustering", "text": "Clustering streaming data requires algorithms which are capable of updating clustering results for the incoming data. As data is constantly arriving, time for processing is limited. Clustering has to be performed in a single pass over the incoming data and within the possibly varying inter-arrival times of the stream. Likewise, memory is limited, making it impossible to store all data. For clustering, we are faced with the challenge of maintaining a current result that can be presented to the user at any given time. In this work, we propose a parameter free algorithm that automatically adapts to the speed of the data stream. It makes best use of the time available under the current constraints to provide a clustering of the objects seen up to that point. Our approach incorporates the age of the objects to reflect the greater importance of more recent data. Moreover, we are capable of detecting concept drift, novelty and outliers in the stream. For efficient and effective handling, we introduce the ClusTree, a compact and self-adaptive index structure for maintaining stream summaries. Our experiments show that our approach is capable of handling a multitude of different stream characteristics for accurate and scalable anytime stream clustering."}
{"_id": "31cc80ffb56d7f82dcc44e78fbdea95bffe5028e", "title": "Depth-disparity calibration for augmented reality on binocular optical see-through displays", "text": "We present a study of depth-disparity calibration for augmented reality applications using binocular optical see-through displays. Two techniques were proposed and compared. The \"paired-eyes\" technique leverages the Panum's fusional area to help viewer find alignment between the virtual and physical objects. The \"separate-eyes\" technique eliminates the need of binocular fusion and involves using both eyes sequentially to check the virtual-physical object alignment on retinal images. We conducted a user study to measure the calibration results and assess the subjective experience of users with the proposed techniques."}
{"_id": "0c4867f11c9758014d591381d8b397a1d38b04a7", "title": "Pattern Recognition and Machine Learning", "text": "The first \u20ac price and the \u00a3 and $ price are net prices, subject to local VAT. Prices indicated with * include VAT for books; the \u20ac(D) includes 7% for Germany, the \u20ac(A) includes 10% for Austria. Prices indicated with ** include VAT for electronic products; 19% for Germany, 20% for Austria. All prices exclusive of carriage charges. Prices and other details are subject to change without notice. All errors and omissions excepted. C. Bishop Pattern Recognition and Machine Learning"}
{"_id": "04c5268d7a4e3819344825e72167332240a69717", "title": "Action MACH a spatio-temporal Maximum Average Correlation Height filter for action recognition", "text": "In this paper we introduce a template-based method for recognizing human actions called action MACH. Our approach is based on a maximum average correlation height (MACH) filter. A common limitation of template-based methods is their inability to generate a single template using a collection of examples. MACH is capable of capturing intra-class variability by synthesizing a single Action MACH filter for a given action class. We generalize the traditional MACH filter to video (3D spatiotemporal volume), and vector valued data. By analyzing the response of the filter in the frequency domain, we avoid the high computational cost commonly incurred in template-based approaches. Vector valued data is analyzed using the Clifford Fourier transform, a generalization of the Fourier transform intended for both scalar and vector-valued data. Finally, we perform an extensive set of experiments and compare our method with some of the most recent approaches in the field by using publicly available datasets, and two new annotated human action datasets which include actions performed in classic feature films and sports broadcast television."}
{"_id": "139a860b94e9b89a0d6c85f500674fe239e87099", "title": "Optimal Brain Damage", "text": "We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and/or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application."}
{"_id": "1c01e44df70d6fde616de1ef90e485b23a3ea549", "title": "A new class of upper bounds on the log partition function", "text": "We introduce a new class of upper bounds on the log partition function of a Markov random field (MRF). This quantity plays an important role in various contexts, including approximating marginal distributions, parameter estimation, combinatorial enumeration, statistical decision theory, and large-deviations bounds. Our derivation is based on concepts from convex duality and information geometry: in particular, it exploits mixtures of distributions in the exponential domain, and the Legendre mapping between exponential and mean parameters. In the special case of convex combinations of tree-structured distributions, we obtain a family of variational problems, similar to the Bethe variational problem, but distinguished by the following desirable properties: i) they are convex, and have a unique global optimum; and ii) the optimum gives an upper bound on the log partition function. This optimum is defined by stationary conditions very similar to those defining fixed points of the sum-product algorithm, or more generally, any local optimum of the Bethe variational problem. As with sum-product fixed points, the elements of the optimizing argument can be used as approximations to the marginals of the original model. The analysis extends naturally to convex combinations of hypertree-structured distributions, thereby establishing links to Kikuchi approximations and variants."}
{"_id": "39a6cc80b1590bcb2927a9d4c6c8f22d7480fbdd", "title": "A 3-dimensional sift descriptor and its application to action recognition", "text": "In this paper we introduce a 3-dimensional (3D) SIFT descriptor for video or 3D imagery such as MRI data. We also show how this new descriptor is able to better represent the 3D nature of video data in the application of action recognition. This paper will show how 3D SIFT is able to outperform previously used description methods in an elegant and efficient manner. We use a bag of words approach to represent videos, and present a method to discover relationships between spatio-temporal words in order to better describe the video data."}
{"_id": "efaf07d40b9c5837639bed129794efc00f02e4c3", "title": "Continuous N-gram Representations for Authorship Attribution", "text": "This paper presents work on using continuous representations for authorship attribution. In contrast to previous work, which uses discrete feature representations, our model learns continuous representations for n-gram features via a neural network jointly with the classification layer. Experimental results demonstrate that the proposed model outperforms the state-of-the-art on two datasets, while producing comparable results on the remaining two."}
{"_id": "6683426ca06560523fc7461152d4dd3b84a07854", "title": "Autotagger: A Model for Predicting Social Tags from Acoustic Features on Large Music Databases", "text": "Social tags are user-generated keywords associated with some resource on the Web. In the case of music, social tags have become an important component of \u201cWeb 2.0\u201d recommender systems, allowing users to generate playlists based on use-dependent terms such as chill or jogging that have been applied to particular songs. In this paper, we propose a method for predicting these social tags directly from MP3 files. Using a set of 360 classifiers trained using the online ensemble learning algorithm FilterBoost, we map audio features onto social tags collected from the Web. The resulting automatic tags (or autotags) furnish information about music that is otherwise untagged or poorly tagged, allowing for insertion of previously unheard music into a social recommender. This avoids the \u201ccold-start problem\u201d common in such systems. Autotags can also be used to smooth the tag space from which similarities and recommendations are made by providing a set of comparable baseline tags for all tracks in a recommender system. Because the words we learn are the same as those used by people who label their music collections, it is easy to integrate our predictions into existing similarity and prediction methods based on web data."}
{"_id": "d86e51d6e1215d792a9d00995d367b6161fc33e7", "title": "Does Your Model Know the Digit 6 Is Not a Cat? A Less Biased Evaluation of \"Outlier\" Detectors", "text": "In the real world, a learning system could receive an input that looks nothing like anything it has seen during training, and this can lead to unpredictable behaviour. We thus need to know whether any given input belongs to the population distribution of the training data to prevent unpredictable behaviour in deployed systems. A recent surge of interest on this problem has led to the development of sophisticated techniques in the deep learning literature. However, due to the absence of a standardized problem formulation or an exhaustive evaluation, it is not evident if we can rely on these methods in practice. What makes this problem different from a typical supervised learning setting is that we cannot model the diversity of out-of-distribution samples in practice. The distribution of outliers used in training may not be the same as the distribution of outliers encountered in the application. Therefore, classical approaches that learn inliers vs. outliers with only two datasets can yield optimistic results. We introduce OD-test, a three-dataset evaluation scheme as a practical and more reliable strategy to assess progress on this problem. The OD-test benchmark provides a straightforward means of comparison for methods that address the out-of-distribution sample detection problem. We present an exhaustive evaluation of a broad set of methods from related areas on image classification tasks. Furthermore, we show that for realistic applications of high-dimensional images, the existing methods have low accuracy. Our analysis reveals areas of strength and weakness of each method."}
{"_id": "158d62f4e3363495148cf16c7b800daab7765760", "title": "Privacy Aware Learning", "text": "We study statistical risk minimization problems under a privacy model in which the data is kept confidential even from the learner. In this local privacy framework, we establish sharp upper and lower bounds on the convergence rates of statistical estimation procedures. As a consequence, we exhibit a precise tradeoff between the amount of privacy the data preserves and the utility, as measured by convergence rate, of any statistical estimator or learning procedure."}
{"_id": "77071790f7e3a2ab7fb8b2a7e8d0a10e0debc5c1", "title": "Autonomous Object Manipulation Using a Soft Planar Grasping Manipulator", "text": "This article presents the development of an autonomous motion planning algorithm for a soft planar grasping manipulator capable of grasp-and-place operations by encapsulation with uncertainty in the position and shape of the object. The end effector of the soft manipulator is fabricated in one piece without weakening seams using lost-wax casting instead of the commonly used multilayer lamination process. The soft manipulation system can grasp randomly positioned objects within its reachable envelope and move them to a desired location without human intervention. The autonomous planning system leverages the compliance and continuum bending of the soft grasping manipulator to achieve repeatable grasps in the presence of uncertainty. A suite of experiments is presented that demonstrates the system's capabilities."}
{"_id": "3ce29949228103340391dbb57e38dd68d58e9b9e", "title": "Fetal left brachiocephalic vein in normal and abnormal conditions.", "text": "OBJECTIVES\nTo establish values of fetal left brachiocephalic vein (LBCV) dimensions during normal pregnancy and determine whether routine assessment of the LBCV may help in identifying fetuses with congenital abnormalities of this vessel.\n\n\nMETHODS\nFetal LBCV was assessed prospectively during ultrasound examinations in 431 normal singleton pregnancies. The visualization rate of the transverse view of the upper fetal chest at the level of drainage of the LBCV into the superior vena cava (SVC) by two-dimensional (2D) and 2D plus color Doppler ultrasound was evaluated. Reference ranges of LBCV diameter during non-complicated pregnancies were established. Interobserver and intraobserver measurement variability was analyzed. In addition, a retrospective review of the hospital medical records of 91 pregnancies with fetuses diagnosed with LBCV abnormalities was performed.\n\n\nRESULTS\nSonographic assessment of the fetal LBCV was consistently achieved in the second and third trimesters and in some fetuses in the first trimester of pregnancy. In normal fetuses LBCV diameter increased significantly throughout pregnancy, with a mean value of 0.7 mm at 11 weeks and 4.9 mm at term. Dilation of the fetal LBCV was noted in five cases of intracranial arteriovenous malformation and six cases of supracardiac type total anomalous pulmonary venous connection. Abnormal course of the LBCV was noted in 12 fetuses. In 63 fetuses with a persistent left SVC and a right SVC the LBCV was absent.\n\n\nCONCLUSION\nThis is the first study describing an effective sonographic approach for the assessment of fetal LBCV dimensions during pregnancy. The normative data may provide an additional means of detecting rare anomalies of systemic and pulmonary veins during pregnancy."}
{"_id": "9960994d81af14ae49a684883ea376a6bef41b2d", "title": "The importance of drawing in the mechanical design process", "text": "-This paper is a study on the importance of drawing ( both formal drafting and informal sketching) during the process of mechanical design. Five hypotheses, focused on the t~es of drawings, their necessity in mechanical problem solving, and their relation to the external representation medium, are presented and supported. Support is through referenced studies in other domains and the results of protocol studies performed on five mechanical designers. Videotapes of all the marks-on-paper made by designers in representative sections of the design process were studied in detail for their type and purpose. The resulting data is supportive of the hypotheses. These results also give requirements for future computer aided design tools and graphics education, and goals for further studies. I. I N T R O D U C T I O N The goal of this paper is to study the importance of drawing (both formal drafting and informal sketching) during the process of mechanical design. This goal can be extended to state that we intend to show the necessio\" of drawing during all the developmental stages of a mechanical design. Through the information presented here, the requirements for future computer aided design tools, graphics education, and further studies will be developed. All mechanical engineers are taught drafting. Thus, most engineers are skilled at making and interpreting these formal mechanical drawings. These drawings are representations of a final design (the end product of the design process) and they are intended to archive the completed design and communicate it to other designers and manufacturing personnel. Additionally, engineers are notorious for not being able to think without making \"'back-of-the-envelope\" sketches of rough ideas. Sometimes these informal sketches serve to communicate a concept to a colleague, but more often they just help the idea take shape on paper. It is in considering how these sketches help an idea take form that gives a hint that drawing's role in engineering is more than just to archive a concept or to communicate with others. Understanding the use of both drafting and sketching in design is important to help formulate the future development of Computer Aided Design or Drafting (CAD) systems. As CAD evolves and becomes more \"'intelligent,'\" the question of what attributes these systems must have becomes more important. In the past, CAD system attributes have primarily been driven from developments in the computer industry, it is only through understanding drawing's importance in the design process that these systems can be based on design needs. Additionally, the pressures of CAD tool development, faculty time demands, and course expenses cause academic institutions to reevaluate the content of their \"graphics'\" courses. Understanding drawing's importance in the design process helps establish what skills need to be taught to engineers during their training. This paper is organized by first, in Section 2, clarifying the types of drawings used in mechanical design. The hypotheses to be addressed in this paper are given in Section 3. A discussion of research on the understanding of visual imagery to be used as a basis for arguments in support of the hypotheses is in Section 4. In Section 5 is a discussion of the results of data taken on how mechanical engineers use drawings during design. 
Lastly, in Section 6, is a discussion of how well the hypotheses have been supported and the implications of our findings on CAD development, educational requirements, and future research directions. 2. TYPES OF D R A W I N G S USED IN DESIGN Engineers make many types of marks-on-paper. In research, to be described in Section 5, we have broken down these marks into two main groupings: support notation and graphic representations. Support notation includes textual notes, lists, dimensions (including leaders and arrows), and calculations. Graphic representations include drawings of objects and their functions, and plots and charts. Mechanical design graphic representations are often scale drawings made with mechanical instruments or CAD computer systems. These drawings, made in accordance with a set of widely accepted rules, are defined as having been drafted. Sketches, on the other hand, are defined as \"free-hand\" drawings. They are usually not to scale and may use shorthand notations to represent both objects and their function. A differentiation must be made between the act of graphic representation and the medium on which it occurs. The medium, whether it be paper and pencil, a computer stylus on a tablet, chalk on a blackboard, or other medium may put interface restrictions on the representation. The following discussions are concerned with what is being represented, not with how the representation is made. However, the discussions point to the medium's restriction on representation and the need for improved interfaces. Another aspect of drawings to be considered is the level of abstraction of the information to be represented. During the design process, the design is refined"}
{"_id": "2cbc1789ba0a5df8069948aa2dfbd080d8184fc9", "title": "Anticipative Interaction Primitives for Human-Robot Collaboration", "text": "This paper introduces our initial investigation on the problem of providing a semi-autonomous robot collaborator with anticipative capabilities to predict human actions. Anticipative robot behavior is a desired characteristic of robot collaborators that lead to fluid, proactive interactions. We are particularly interested in improving reactive methods that rely on human action recognition to activate the corresponding robot action. Action recognition invariably causes delay in the robot\u2019s response, and the goal of our method is to eliminate this delay by predicting the next human action. Prediction is achieved by using a lookup table containing variations of assembly sequences, previously demonstrated by different users. The method uses the nearest neighbor sequence in the table that matches the actual sequence of human actions. At the movement level, our method uses a probabilistic representation of interaction primitives to generate robot trajectories. The method is demonstrated using a 7 degree-offreedom lightweight arm equipped with a 5-finger hand on an assembly task consisting of 17 steps."}
{"_id": "6217d2c64b6f843b2078dd0cf4fdb8ab15f06d43", "title": "The Missing Link of Jewish European Ancestry: Contrasting the Rhineland and\nthe Khazarian Hypotheses", "text": "The question of Jewish ancestry has been the subject of controversy for over two centuries and has yet to be resolved. The \"Rhineland hypothesis\" depicts Eastern European Jews as a \"population isolate\" that emerged from a small group of German Jews who migrated eastward and expanded rapidly. Alternatively, the \"Khazarian hypothesis\" suggests that Eastern European Jews descended from the Khazars, an amalgam of Turkic clans that settled the Caucasus in the early centuries CE and converted to Judaism in the 8th century. Mesopotamian and Greco-Roman Jews continuously reinforced the Judaized empire until the 13th century. Following the collapse of their empire, the Judeo-Khazars fled to Eastern Europe. The rise of European Jewry is therefore explained by the contribution of the Judeo-Khazars. Thus far, however, the Khazars' contribution has been estimated only empirically, as the absence of genome-wide data from Caucasus populations precluded testing the Khazarian hypothesis. Recent sequencing of modern Caucasus populations prompted us to revisit the Khazarian hypothesis and compare it with the Rhineland hypothesis. We applied a wide range of population genetic analyses to compare these two hypotheses. Our findings support the Khazarian hypothesis and portray the European Jewish genome as a mosaic of Near Eastern-Caucasus, European, and Semitic ancestries, thereby consolidating previous contradictory reports of Jewish ancestry. We further describe a major difference among Caucasus populations explained by the early presence of Judeans in the Southern and Central Caucasus. Our results have important implications for the demographic forces that shaped the genetic diversity in the Caucasus and for medical studies."}
{"_id": "8433bee637213243749bc3ef8bdbd61d9d3a0f3e", "title": "Energy efficient wearable sensor node for IoT-based fall detection systems", "text": "Falls can cause serious traumas such as brain injuries and bone fractures, especially among elderly people. Fear of falling might reduce physical activities resulting in declining social interactions and eventually causing depression. To lessen the effects of a fall, timely delivery of medical treatment can play a vital role. In a similar scenario, an IoT-based wearable system can pave the most promising way to mitigate serious consequences of a fall while providing the convenience of usage. However, to deliver sufficient degree of monitoring and reliability, wearable devices working at the core of fall detection systems are required to work for a prolonged period of time. In this work, we focus on energy efficiency of a wearable sensor node in an Internet-of-Things (IoT) based fall detection system. We propose the design of a tiny, lightweight, flexible and energy efficient wearable device. We investigate different parameters (e.g. sampling rate, communication bus interface, transmission protocol, and transmission rate) impacting on energy consumption of the wearable device. In addition, we provide a comprehensive analysis of energy consumption of the wearable in different configurations and operating conditions. Furthermore, we provide hints (hardware and software) for system designers implementing the optimal wearable device for IoT-based fall detection systems in terms of energy efficiency and high quality of service. The results clearly indicate that the proposed sensor node is novel and energy efficient. In a critical condition, the wearable device can be used continuously for 76 h with a 1000 mAh li-ion battery."}
{"_id": "27b7e8f3b11dfe12318f8ff10f1d4a60e144a646", "title": "Predicting function from sequence in venom peptide families", "text": "Toxins from animal venoms are small peptides that recognize specific molecular targets in the brains of prey or predators. Next generation sequencing has uncovered thousands of diverse toxin sequences, but the functions of these peptides are poorly understood. Here we demonstrate that the use of machine learning techniques on sequence-derived features enables high accuracy in the task of predicting a toxin\u2019s functionality using only its amino acid sequence. Comparison of the performance of several learning algorithms in this prediction task demonstrates that both physiochemical properties of the amino acid residues in a sequence as well as noncontiguous sequence motifs can be used independently to model the sequence dependence of venom function. We rationalize the observed model performance using unsupervised learning and make broad predictions about the distribution of toxin functions in the venome. Keywords\u2014Bioinformatics, machine learning, protein function prediction, venomics."}
{"_id": "bd8d9e1b3a192fcd045c7a3389920ac98097e774", "title": "ParaPIM: a parallel processing-in-memory accelerator for binary-weight deep neural networks", "text": "Recent algorithmic progression has brought competitive classification accuracy despite constraining neural networks to binary weights (+1/-1). These findings show remarkable optimization opportunities to eliminate the need for computationally-intensive multiplications, reducing memory access and storage. In this paper, we present ParaPIM architecture, which transforms current Spin Orbit Torque Magnetic Random Access Memory (SOT-MRAM) sub-arrays to massively parallel computational units capable of running inferences for Binary-Weight Deep Neural Networks (BWNNs). ParaPIM's in-situ computing architecture can be leveraged to greatly reduce energy consumption dealing with convolutional layers, accelerate BWNNs inference, eliminate unnecessary off-chip accesses and provide ultra-high internal bandwidth. The device-to-architecture co-simulation results indicate ~4x higher energy efficiency and 7.3x speedup over recent processing-in-DRAM acceleration, or roughly 5x higher energy-efficiency and 20.5x speedup over recent ASIC approaches, while maintaining inference accuracy comparable to baseline designs."}
{"_id": "7902e4fb3e30e085c0b88ea84c611be2b601f0d7", "title": "Automated Transplantation and Differential Testing for Clones", "text": "Code clones are common in software. When applying similar edits to clones, developers often find it difficult to examine the runtime behavior of clones. The problem is exacerbated when some clones are tested, while their counterparts are not. To reuse tests for similar but not identical clones, Grafter transplants one clone to its counterpart by (1) identifying variations in identifier names, types, and method call targets, (2) resolving compilation errors caused by such variations through code transformation, and (3) inserting stub code to transfer input data and intermediate output values for examination. To help developers examine behavioral differences between clones, Grafter supports fine-grained differential testing at both the test outcome level and the intermediate program state level. In our evaluation on three open source projects, Grafter successfully reuses tests in 94% of clone pairs without inducing build errors, demonstrating its automated code transplantation capability. To examine the robustness of G RAFTER, we systematically inject faults using a mutation testing tool, Major, and detect behavioral differences induced by seeded faults. Compared with a static cloning bug finder, Grafter detects 31% more mutants using the test-level comparison and almost 2X more using the state-level comparison. This result indicates that Grafter should effectively complement static cloning bug finders."}
{"_id": "0c8ce51f3384208518c328bd0306507079102d55", "title": "Bone graphs: Medial shape parsing and abstraction", "text": "1077-3142/$ see front matter 2011 Elsevier Inc. A doi:10.1016/j.cviu.2010.12.011 \u21d1 Corresponding author. E-mail address: dmac@cs.toronto.edu (D. Macrini) The recognition of 3-D objects from their silhouettes demands a shape representation which is stable with respect to minor changes in viewpoint and articulation. This can be achieved by parsing a silhouette into parts and relationships that do not change across similar object views. Medial descriptions, such as skeletons and shock graphs, provide part-based decompositions but suffer from instabilities. As a result, similar shapes may be represented by dissimilar part sets. We propose a novel shape parsing approach which is based on identifying and regularizing the ligature structure of a medial axis, leading to a bone graph, a medial abstraction which captures a more stable notion of an object\u2019s parts. Our experiments show that it offers improved recognition and pose estimation performance in the presence of within-class deformation over the shock graph. 2011 Elsevier Inc. All rights reserved."}
{"_id": "697bbd2f32b0eeb10783d87503d37e1e56ec5e2e", "title": "A Discrete Model for Inelastic Deformation of Thin Shells", "text": "We introduce a method for simulating the inelastic deformation of thin shells: we model plasticity and fracture of curved, deformable objects such as light bulbs, egg-shells and bowls. Our novel approach uses triangle meshes yet evolves fracture lines unrestricted to mesh edges. We present a novel measure of bending strain expressed in terms of surface invariants such as lengths and angles. We also demonstrate simple techniques to improve the robustness of standard timestepping as well as collisionresponse algorithms."}
{"_id": "98431da7222ee3fe12d277facf5ca1561c56d4f3", "title": "The estimation of the gradient of a density function, with applications in pattern recognition", "text": "Nonparametric density gradient estimation using a generalized kernel approach is investigated. Conditions on the kernel functions are derived to guarantee asymptotic unbiasedness, consistency, and uniform consistenby of the estimates. The results are generalized to obtain a simple mean-shift estimate that can be extended in a k-nearestneighbor approach. Applications of gradient estimation to pattern recognition are presented using clustering and intrinsic dimensionality problems, with the ultimate goal of providing further understanding of these problems in terms of density gradients."}
{"_id": "721e64bfd3158a77c55d59dd6415570594a72e9c", "title": "NVIDIA Jetson Platform Characterization", "text": "This study characterizes the NVIDIA Jetson TK1 and TX1 Platforms, both built on a NVIDIA Tegra System on Chip and combining a quad-core ARM CPU and an NVIDIA GPU. Their heterogeneous nature, as well as their wide operating frequency range, make it hard for application developers to reason about performance and determine which optimizations are worth pursuing. This paper attempts to inform developers\u2019 choices by characterizing the platforms\u2019 performance using Roofline models obtained through an empirical measurement-based approach as well as through a case study of a heterogeneous application (matrix multiplication). Our results highlight a difference of more than an order of magnitude in compute performance between the CPU and GPU on both platforms. Given that the CPU and GPU share the same memory bus, their Roofline models\u2019 balance points are also more than an order of magnitude apart. We also explore the impact of frequency scaling: build CPU and GPU Roofline profiles and characterize both platforms\u2019 balance point variation, power consumption, and performance per watt as frequency is scaled. The characterization we provide can be used in two main ways. First, given an application, it can inform the choice and number of processing elements to use (i.e., CPU/GPU and number of cores) as well as the optimizations likely to lead to high performance gains. Secondly, this characterization indicates that developers can use frequency scaling to tune the Jetson Platform to suit the requirements of their applications. Third, given a required power/performance budget, application developers can identify the appropriate parameters to use to tune the Jetson platforms to their specific workload requirements. We expect that this optimization approach can lead to overall gains in performance and/or power efficiency without requiring application changes."}
{"_id": "3972dc2d2306c48135b6dfa587a5433d0b75b1cd", "title": "Scan, Attend and Read: End-to-End Handwritten Paragraph Recognition with MDLSTM Attention", "text": "We present an attention-based model for end-to-end handwriting recognition. Our system does not require any segmentation of the input paragraph. The model is inspired by the differentiable attention models presented recently for speech recognition, image captioning or translation. The main difference is the implementation of covert and overt attention with a multi-dimensional LSTM network. Our principal contribution towards handwriting recognition lies in the automatic transcription without a prior segmentation into lines, which was critical in previous approaches. Moreover, the system is able to learn the reading order, enabling it to handle bidirectional scripts such as Arabic. We carried out experiments on the well-known IAM Database and report encouraging results which bring hope to perform full paragraph transcription in the near future."}
{"_id": "6cbb6db2561ecee3b24e22ee060b01068cba6b5a", "title": "Accidental displacement and migration of endosseous implants into \nadjacent craniofacial structures: A review and update", "text": "OBJECTIVES\nAccidental displacement of endosseous implants into the maxillary sinus is an unusual but potential complication in implantology procedures due to the special features of the posterior aspect of the maxillary bone; there is also a possibility of migration throughout the upper paranasal sinuses and adjacent structures. The aim of this paper is to review the published literature about accidental displacement and migration of dental implants into the maxillary sinus and other adjacent structures.\n\n\nSTUDY DESIGN\nA review has been done based on a search in the main on-line medical databases looking for papers about migration of dental implants published in major oral surgery, periodontal, dental implant and ear-nose-throat journals, using the keywords \"implant,\" \"migration,\" \"complication,\" \"foreign body\" and \"sinus.\"\n\n\nRESULTS\n24 articles showing displacement or migration to maxillary, ethmoid and sphenoid sinuses, orbit and cranial fossae, with different degrees of associated symptoms, were identified. Techniques found to solve these clinical issues include Cadwell-Luc approach, transoral endoscopy approach via canine fossae and transnasal functional endoscopy surgery.\n\n\nCONCLUSION\nBefore removing the foreign body, a correct diagnosis should be done in order to evaluate the functional status of the ostiomeatal complex and the degree of affectation of paranasal sinuses and other involved structures, determining the size and the exact location of the foreign body. After a complete diagnosis, an indicated procedure for every case would be decided."}
{"_id": "bc6f2144ab55022e10d623f94f3398595547be38", "title": "Language and life history: a new perspective on the development and evolution of human language.", "text": "It has long been claimed that Homo sapiens is the only species that has language, but only recently has it been recognized that humans also have an unusual pattern of growth and development. Social mammals have two stages of pre-adult development: infancy and juvenility. Humans have two additional prolonged and pronounced life history stages: childhood, an interval of four years extending between infancy and the juvenile period that follows, and adolescence, a stage of about eight years that stretches from juvenility to adulthood. We begin by reviewing the primary biological and linguistic changes occurring in each of the four pre-adult ontogenetic stages in human life history. Then we attempt to trace the evolution of childhood and juvenility in our hominin ancestors. We propose that several different forms of selection applied in infancy and childhood; and that, in adolescence, elaborated vocal behaviors played a role in courtship and intrasexual competition, enhancing fitness and ultimately integrating performative and pragmatic skills with linguistic knowledge in a broad faculty of language. A theoretical consequence of our proposal is that fossil evidence of the uniquely human stages may be used, with other findings, to date the emergence of language. If important aspects of language cannot appear until sexual maturity, as we propose, then a second consequence is that the development of language requires the whole of modern human ontogeny. Our life history model thus offers new ways of investigating, and thinking about, the evolution, development, and ultimately the nature of human language."}
{"_id": "8b0723fa5c33193386f1040ca9991abca969a827", "title": "Gender differences in the relationship between internet addiction and depression: A cross-lagged study in Chinese adolescents", "text": "The present study explored the role of gender in the association between Internet addiction and depression. Three-wave longitudinal panel data were collected from self-reported questionnaires that were completed by 1715 adolescents in grades 6e8 in China. Cross-lagged structural equation modeling was used to examine the relationship between Internet addiction and depression. In male adolescents, depression was found to significantly predict subsequent Internet addiction, suggesting that depression was the cause of Internet addiction and supporting the mood enhancement hypothesis. In female adolescents, Internet addiction was found to significantly predict subsequent depression, indicating that Internet addiction leads to depression and supporting the social displacement hypothesis. These results indicate that the relationship between Internet addiction and depression depended on gender. In addition, it was found that males and females exhibit different behavioral patterns and motivations of Internet usage. Males were more likely to use the Internet for pleasure and less likely to surf the Internet to search for information, compared with females. Although both males and females were prone to surfing the Internet alone, males were more likely to go online with friends compared with females. These findings suggest that gender-specific preventative and interventional strategies should be developed to reduce Internet addiction. \u00a9 2016 Elsevier Ltd. All rights reserved."}
{"_id": "383f1f2ceb32557690b6a0abf6aab48cb98552ff", "title": "Facebook, Twitter and Google Plus for Breaking News: Is There a Winner?", "text": "Twitter is widely seen as being the go to place for breaking news. Recently however, competing Social Media have begun to carry news. Here we examine how Facebook, Google Plus and Twitter report on breaking news. We consider coverage (whether news events are reported) and latency (the time when they are reported). Using data drawn from three weeks in December 2013, we identify 29 major news events, ranging from celebrity deaths, plague outbreaks to sports events. We find that all media carry the same major events, but Twitter continues to be the preferred medium for breaking news, almost consistently leading Facebook or Google Plus. Facebook and Google Plus largely repost newswire stories and their main research value is that they conveniently package multitple sources of information together."}
{"_id": "c97901da440e70bb6085b118d5f3f3190fc5eaf0", "title": "A Compact Circularly Polarized Cross-Shaped Slotted Microstrip Antenna", "text": "A compact cross-shaped slotted microstrip patch antenna is proposed for circularly polarized (CP) radiation. A symmetric, cross shaped slot is embedded along one of the diagonal axes of the square patch for CP radiation and antenna size reduction. The structure is asymmetric (unbalanced) along the diagonal axes. The overall size of the antenna with CP radiation can be reduced by increasing the perimeter of the symmetric cross-shaped slot within the first patch quadrant of the square patch. The performance of the CP radiation is also studied by varying the size and angle variation of the cross-shaped slot. A measured 3-dB axial-ratio (AR) bandwidth of around 6.0 MHz is achieved with the CP cross-shaped slotted microstrip antenna, with an 18.0 MHz 10-dB return-loss bandwidth. The measured boresight gain is more than 3.8 dBic over the operating band, while the overall antenna volume is 0.273\u03bbo \u00d7 0.273\u03bbo \u00d7 0.013\u03bbo (\u03bb\u03bf operating wavelength at 910 MHz)."}
{"_id": "35c12a61ada36fd9b9f89176c927bb53af6f2466", "title": "Linkages between Depressive Symptomatology and Internet Harassment among Young Regular Internet Users", "text": "Recent reports indicate 97% of youth are connected to the Internet. As more young people have access to online communication, it is integrally important to identify youth who may be more vulnerable to negative experiences. Based upon accounts of traditional bullying, youth with depressive symptomatology may be especially likely to be the target of Internet harassment. The current investigation will examine the cross-sectional relationship between depressive symptomatology and Internet harassment, as well as underlying factors that may help explain the observed association. Youth between the ages of 10 and 17 (N = 1,501) participated in a telephone survey about their Internet behaviors and experiences. Subjects were required to have used the Internet at least six times in the previous 6 months to ensure a minimum level of exposure. The caregiver self-identified as most knowledgeable about the young person's Internet behaviors was also interviewed. The odds of reporting an Internet harassment experience in the previous year were more than three times higher (OR: 3.38, CI: 1.78, 6.45) for youth who reported major depressive symptomatology compared to mild/absent symptomatology. When female and male respondents were assessed separately, the adjusted odds of reporting Internet harassment for males who also reported DSM IV symptoms of major depression were more than three times greater (OR: 3.64, CI: 1.16, 11.39) than for males who indicated mild or no symptoms of depression. No significant association was observed among otherwise similar females. Instead, the association was largely explained by differences in Internet usage characteristics and other psychosocial challenges. Internet harassment is an important public mental health issue affecting youth today. Among young, regular Internet users, those who report DSM IV-like depressive symptomatology are significantly more likely to also report being the target of Internet harassment. Future studies should focus on establishing the temporality of events, that is, whether young people report depressive symptoms in response to the negative Internet experience, or whether symptomatology confers risks for later negative online incidents. Based on these cross-sectional results, gender differences in the odds of reporting an unwanted Internet experience are suggested, and deserve special attention in future studies."}
{"_id": "c1b8ba97aa88210a02affe2f92826e059c729c8b", "title": "Exploration of adaptive gait patterns with a reconfigurable linkage mechanism", "text": "Legged robots are able to move across irregular terrains and some can be energy efficient, but are often constrained by a limited range of gaits which can limit their locomotion capabilities considerably. This paper reports a reconfigurable design approach to robotic legged locomotion that produces a wide variety of gait cycles, opening new possibilities for innovative applications. In this paper, we present a distance-based formulation and its application to solve the position analysis problem of a standard Theo Jansen mechanism. By changing the configuration of a linkage, our objective in this study is to identify novel gait patterns of interest for a walking platform. The exemplary gait variations presented in this work demonstrate the feasibility of our approach, and considerably extend the capabilities of the original design to not only produce novel cum useful gait patterns but also to realize behaviors beyond locomotion."}
{"_id": "018300f5f0e679cee5241d9c69c8d88e00e8bf31", "title": "Neural Variational Inference and Learning in Belief Networks", "text": "\u2022We introduce a simple, efficient, and general method for training directed latent variable models. \u2013 Can handle both discrete and continuous latent variables. \u2013 Easy to apply \u2013 requires no model-specific derivations. \u2022Key idea: Train an auxiliary neural network to perform inference in the model of interest by optimizing the variational bound. \u2013 Was considered before for Helmholtz machines and rejected as infeasible due to high variance of inference net gradient estimates. \u2022We make the approach practical using simple and general variance reduction techniques. \u2022Promising document modelling results using sigmoid belief networks."}
{"_id": "0a10d64beb0931efdc24a28edaa91d539194b2e2", "title": "Efficient Estimation of Word Representations in Vector Space", "text": "We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities."}
{"_id": "32cbd065ac9405530ce0b1832a9a58c7444ba305", "title": "Part-of-Speech Tagging for Twitter: Annotation, Features, and Experiments", "text": "We address the problem of part-of-speech tagging for English data from the popular microblogging service Twitter. We develop a tagset, annotate data, develop features, and report tagging results nearing 90% accuracy. The data and tools have been made available to the research community with the goal of enabling richer text analysis of Twitter and related social media data sets."}
{"_id": "040522d17bb540726a2e8d45ee264442502723a0", "title": "The Helmholtz Machine", "text": "Discovering the structure inherent in a set of patterns is a fundamental aim of statistical inference or learning. One fruitful approach is to build a parameterized stochastic generative model, independent draws from which are likely to produce the patterns. For all but the simplest generative models, each pattern can be generated in exponentially many ways. It is thus intractable to adjust the parameters to maximize the probability of the observed patterns. We describe a way of finessing this combinatorial explosion by maximizing an easily computed lower bound on the probability of the observations. Our method can be viewed as a form of hierarchical self-supervised learning that may relate to the function of bottom-up and top-down cortical processing pathways."}
{"_id": "3be23e51455b39a2819ecfd86b8bb5ba4716679f", "title": "A Flexible Technique for Accurate Omnidirectional Camera Calibration and Structure from Motion", "text": "In this paper, we present a flexible new technique for single viewpoint omnidirectional camera calibration. The proposed method only requires the camera to observe a planar pattern shown at a few different orientations. Either the camera or the planar pattern can be freely moved. No a priori knowledge of the motion is required, nor a specific model of the omnidirectional sensor. The only assumption is that the image projection function can be described by a Taylor series expansion whose coefficients are estimated by solving a two-step least-squares linear minimization problem. To test the proposed technique, we calibrated a panoramic camera having a field of view greater than 200 in the vertical direction, and we obtained very good results. To investigate the accuracy of the calibration, we also used the estimated omni-camera model in a structure from motion experiment. We obtained a 3D metric reconstruction of a scene from two highly distorted omnidirectional images by using image correspondences only. Compared with classical techniques, which rely on a specific parametric model of the omnidirectional camera, the proposed procedure is independent of the sensor, easy to use, and flexible."}
{"_id": "7eb7d3529adf3954a7704d0e502178ca10c79e0b", "title": "The effective use of Benford\u2019s Law to assist in detecting fraud in accounting data", "text": "Benford\u2019s law has been promoted as providing the auditor with a tool that is simple and effective for the detection of fraud. The purpose of this paper is to assist auditors in the most effective use of digital analysis based on Benford\u2019s law. The law is based on a peculiar observation that certain digits appear more frequently than others in data sets. For example, in certain data sets, it has been observed that more than 30% of numbers begin with the digit one. After discussing the background of the law and development of its use in auditing, we show where digital analysis based on Benford\u2019s law can most effectively be used and where auditors should exercise caution. Specifically, we identify data sets which can be expected to follow Benford\u2019s distribution, discuss the power of statistical tests, types of frauds that would be detected and not be detected by such analysis, the potential problems that arise when an account contains too few observations, as well as issues related to base rate of fraud. An actual example is provided demonstrating where Benford\u2019s law proved successful in identifying fraud in a population of accounting data."}
{"_id": "f2bce820b7f0f3ccf0554b105bfa2ded636db77a", "title": "Systematic Thinking Fostered by Illustrations in Scientific Text", "text": "In 2 experiments, students who lacked prior knowledge about car mechanics read a passage about vehicle braking systems that either contained labeled illustrations of the systems, illustrations without labels, labels without illustrations, or no labeled illustrations. Students who received passages that contained labeled illustrations of braking systems recalled more explanative than nonexplanative information as compared to control groups, and performed better on problem solving transfer but not on verbatim recognition as compared to control groups. Results support a model of meaningful learning in which illustrations can help readers to focus their attention on explanative information in text and to reorganize the information into useful mental models."}
{"_id": "b688b830da148f1c3a86916a42d9dd1b1cccd5ff", "title": "Pixel-Wise Attentional Gating for Scene Parsing", "text": "To achieve dynamic inference in pixel labeling tasks, we propose Pixel-wise Attentional Gating (PAG), which learns to selectively process a subset of spatial locations at each layer of a deep convolutional network. PAG is a generic, architecture-independent, problem-agnostic mechanism that can be readily \"plugged in\" to an existing model with fine-tuning. We utilize PAG in two ways: 1) learning spatially varying pooling fields that improve model performance without the extra computation cost associated with multi-scale pooling, and 2) learning a dynamic computation policy for each pixel to decrease total computation (FLOPs) while maintaining accuracy. We extensively evaluate PAG on a variety of per-pixel labeling tasks, including semantic segmentation, boundary detection, monocular depth and surface normal estimation. We demonstrate that PAG allows competitive or state-of-the-art performance on these tasks. Our experiments show that PAG learns dynamic spatial allocation of computation over the input image which provides better performance trade-offs compared to related approaches (e.g., truncating deep models or dynamically skipping whole layers). Generally, we observe PAG can reduce computation by 10% without noticeable loss in accuracy and performance degrades gracefully when imposing stronger computational constraints."}
{"_id": "84599b3defa3dfa9cfcd33c1339ce422aa5d2b68", "title": "Current Mode Control Integrated Circuit with High Accuracy Current Sensing Circuit for Buck Converter", "text": "A current mode control integrated circuit with accuracy current sensing circuit ( CSC ) for buck converter is presented in this the proposed accurate integrated current sensed inductor with the internal ramp be used for DC-DC converter feedback proposed CSC doesn't need an op amp to implement, and has been fabricated with a standard 0.35 mum CMOS process. Simulation result show that the switching converter can be operated up to 1 MHZ. The supply is suitable for signal cell lithium-ion battery supply power efficiency is over 85% for supply voltage from 2.5 V to 5 V and output current is 200 mA. The performance of the proposed circuit is the good compared to the other circuits."}
{"_id": "1eb7ba50214d0a7bb3247a0055f5700b7833c17e", "title": "Cold-Start Recommendation with Provable Guarantees: A Decoupled Approach", "text": "Although the matrix completion paradigm provides an appealing solution to the collaborative filtering problem in recommendation systems, some major issues, such as data sparsity and cold-start problems, still remain open. In particular, when the rating data for a subset of users or items is entirely missing, commonly known as the cold-start problem, the standard matrix completion methods are inapplicable due the non-uniform sampling of available ratings. In recent years, there has been considerable interest in dealing with cold-start users or items that are principally based on the idea of exploiting other sources of information to compensate for this lack of rating data. In this paper, we propose a novel and general algorithmic framework based on matrix completion that simultaneously exploits the similarity information among users and items to alleviate the cold-start problem. In contrast to existing methods, our proposed recommender algorithm, dubbed DecRec, decouples the following two aspects of the cold-start problem to effectively exploit the side information: (i) the completion of a rating sub-matrix, which is generated by excluding cold-start users/items from the original rating matrix; and (ii) the transduction of knowledge from existing ratings to cold-start items/users using side information. This crucial difference prevents the error propagation of completion and transduction, and also significantly boosts the performance when appropriate side information is incorporated. The recovery error of the proposed algorithm is analyzed theoretically and, to the best of our knowledge, this is the first algorithm that addresses the cold-start problem with provable guarantees on performance. Additionally, we also address the problem where both cold-start user and item challenges are present simultaneously. We conduct thorough experiments on real datasets that complement our theoretical results. These experiments demonstrate the effectiveness of the proposed algorithm in handling the cold-start users/items problem and mitigating data sparsity issue."}
{"_id": "17f537b9f39cdb37ec26100530f69c615d03fa3b", "title": "Automatic query-based keyword and keyphrase extraction", "text": "Extracting keywords and keyphrases mainly for identifying content of a document, has an importance role in text processing tasks such as text summarization, information retrieval, and query expansion. In this research, we introduce a new keyword/keyphrase extraction approach in which both single and multi-document keyword/keyphrase extraction techniques are considered. The proposed approach is specifically practical when a user is interested in additional data such as keywords/keyphrases related to a topic or query. In the proposed approach, first a set of documents are retrieved based on user's query, then a single document keyword extraction method is applied to extract candidate keyword/keyphrases from each retrieved document. Finally, a new re-scoring scheme is introduced to extract final keywords/keyphrases. We have evaluated the proposed method based on the relationship between the final keyword/keyphrases with the initial user query, and based user's satisfaction. Our experimental results show how much the extracted keywords/keyphrases are relevant and wellmatched with user's need."}
{"_id": "1d06bfa37282bda43a396dc99927b298d0288bfa", "title": "Category-aware Next Point-of-Interest Recommendation via Listwise Bayesian Personalized Ranking", "text": "Next Point-of-Interest (POI) recommendation has become an important task for location-based social networks (LBSNs). However, previous efforts suffer from the high computational complexity, besides the transition pattern between POIs has not been well studied. In this paper, we proposed a twofold approach for next POI recommendation. First, the preferred next category is predicted by using a third-rank tensor optimized by a Listwise Bayesian Personalized Ranking (LBPR) approach. Specifically we introduce two functions, namely PlackettLuce model and cross entropy, to generate the likelihood of a ranking list for posterior computation. Then POI candidates filtered by the predicated category are ranked based on the spatial influence and category ranking influence. The experiments on two real-world datasets demonstrate the significant improvements of our methods over several state-ofthe-art methods."}
{"_id": "49b3d71c415956a31b1031ae22920af6ea5bec9a", "title": "Crowdsourcing based spatial mining of urban emergency events using social media", "text": "With the advances of information communication technologies, it is critical to improve the efficiency and accuracy of emergency management systems through modern data processing techniques. The past decade has witnessed the tremendous technical advances in Sensor Networks, Internet/Web of Things, Cloud Computing, Mobile/Embedded Computing, Spatial/Temporal Data Processing, and Big Data, and these technologies have provided new opportunities and solutions to emergency management. GIS models and simulation capabilities are used to exercise response and recovery plans during non-disaster times. They help the decision-makers understand near real-time possibilities during an event. In this paper, a crowdsourcing based model for mining spatial information of urban emergency events is introduced. Firstly, basic definitions of the proposed method are given. Secondly, positive samples are selected to mine the spatial information of urban emergency events. Thirdly, location and GIS information are extracted from positive samples. At last, the real spatial information is determined based on address and GIS information. At last, a case study on an urban emergency event is given."}
{"_id": "a00bd22c2148fc0c2c32300742d9390431949f56", "title": "Attitudes towards following meat, vegetarian and vegan diets: an examination of the role of ambivalence", "text": "Vegetarianism within the U.K. is growing in popularity, with the current estimate of 7% of the population eating a vegetarian diet. This study examined differences between the attitudes and beliefs of four dietary groups (meat eaters, meat avoiders, vegetarians and vegans) and the extent to which attitudes influenced intentions to follow each diet. In addition, the role of attitudinal ambivalence as a moderator variable was examined. Completed questionnaires were obtained from 111 respondents (25 meat eaters, 26 meat avoiders, 34 vegetarians, 26 vegans). In general, predictions were supported, in that respondents displayed most positive attitudes and beliefs towards their own diets, and most negative attitudes and beliefs towards the diet most different form their own. Regression analyses showed that, as predicted by the Theory of Planned Behaviour, attitudes, subjective norm and perceived behavioural control were significant predictors of intention to follow each diet (apart from the vegetarian diet, where subjective norm was non-significant). In each case, attitudinal ambivalence was found to moderate the attitude-intention relationship, such that attitudes were found to be stronger predictors at lower levels of ambivalence. The results not only highlight the extent to which such alternative diets are an interesting focus for psychological research, but also lend further support to the argument that ambivalence in an important influence on attitude strength."}
{"_id": "b07bfdebdf11b7ab3ea3d5f0087891c464c5e34d", "title": "A 29\u201330GHz 64-element Active Phased array for 5G Application", "text": "A 64-element 29\u201330GHz active phased array for 5G millimeter wave applications is presented in this paper. The proposed phased array composites of 64-element antennas, 64-chan-nel T/R modules, 4 frequency conversion links, beam controlling circuitry, power management circuits and cooling fans, and are integrated in a in a very compact size(135mmX 77mmX56mm). Hybrid integration of GaAs and Si circuits are employed to achieve better RF performance. The architecture of the proposed phased array and the detail design of the T/R modules and antennas are analyzed. By the OTA (over the air) measurement, the proposed phased array achieves a bandwidth of 1 GHz at the center frequency of 29.5GHz, and the azimuth beam-width is 12 deg with the scanning range of \u00b145deg. With the excitation of 800MHz 64QAM signals, the transmitter beam achieves a EVM of 5.5%, ACLR of \u221230.5dBc with the PA working at \u221210dB back off, and the measured saturated EIRP is 63 dBm."}
{"_id": "2472198a01624e6e398518929c88d8ead6a33473", "title": "mCloud: A Context-Aware Offloading Framework for Heterogeneous Mobile Cloud", "text": "Mobile cloud computing (MCC) has become a significant paradigm for bringing the benefits of cloud computing to mobile devices\u2019 proximity. Service availability along with performance enhancement and energy efficiency are primary targets in MCC. This paper proposes a code offloading framework, called mCloud, which consists of mobile devices, nearby cloudlets and public cloud services, to improve the performance and availability of the MCC services. The effect of the mobile device context (e.g., network conditions) on offloading decisions is studied by proposing a context-aware offloading decision algorithm aiming to provide code offloading decisions at runtime on selecting wireless medium and appropriate cloud resources for offloading. We also investigate failure detection and recovery policies for our mCloud system. We explain in details the design and implementation of the mCloud prototype framework. We conduct real experiments on the implemented system to evaluate the performance of the algorithm. Results indicate the system and embedded decision algorithm are able to provide decisions on selecting wireless medium and cloud resources based on different context of the mobile devices, and achieve significant reduction on makespan and energy, with the improved service availability when compared with existing offloading schemes."}
{"_id": "1613a9fe64fbc2228e52b021ad45041556cc77ef", "title": "A Review of Scar Scales and Scar Measuring Devices", "text": "OBJECTIVE\nPathologic scarring affects millions of people worldwide. Quantitative and qualitative measurement modalities are needed to effectively evaluate and monitor treatments.\n\n\nMETHODS\nThis article reviews the literature on available tools and existent assessment scales used to subjectively and objectively characterize scar.\n\n\nRESULTS\nWe describe the attributes and deficiencies of each tool and scale and highlight areas where further development is critical.\n\n\nCONCLUSION\nAn optimal, universal scar scoring system is needed in order to better characterize, understand and treat pathologic scarring."}
{"_id": "9368b596fdc2af12a45defd3df6c94e39dd02d3a", "title": "WNtags: A Web-Based Tool For Image Labeling And Retrieval With Lexical Ontologies", "text": "Ever growing number of image documents available on the Internet continuously motivates research in better annotation models and more efficient retrieval methods. Formal knowledge representation of objects and events in pictures, their interaction as well as context complexity becomes no longer an option for a quality image repository, but a necessity. We present an ontologybased online image annotation tool WNtags and demonstrate its usefulness in several typical multimedia retrieval tasks using International Affective Picture System emotionally annotated image database. WNtags is built around WordNet lexical ontology but considers Suggested Upper Merged Ontology as the preferred labeling formalism. WNtags uses sets of weighted WordNet synsets as high-level image semantic descriptors and query matching is performed with word stemming and node distance metrics. We also elaborate our near future plans to expand image content description with induced affect as in stimuli for research of human emotion and attention."}
{"_id": "8443e3b50190f297874d2d76233f29dfb423069c", "title": "Paired Learners for Concept Drift", "text": "To cope with concept drift, we paired a stable online learner with a reactive one. A stable learner predicts based on all of its experience, whereas are active learner predicts based on its experience over a short, recent window of time. The method of paired learning uses differences in accuracy between the two learners over this window to determine when to replace the current stable learner, since the stable learner performs worse than does there active learner when the target concept changes. While the method uses the reactive learner as an indicator of drift, it uses the stable learner to predict, since the stable learner performs better than does the reactive learner when acquiring target concept. Experimental results support these assertions. We evaluated the method by making direct comparisons to dynamic weighted majority, accuracy weighted ensemble, and streaming ensemble algorithm (SEA) using two synthetic problems, the Stagger concepts and the SEA concepts, and three real-world data sets: meeting scheduling, electricity prediction, and malware detection. Results suggest that, on these problems, paired learners outperformed or performed comparably to methods more costly in time and space."}
{"_id": "0b2fe16ea31f59e44a0d244a12554d9554740b63", "title": "Intersection-priority based optimal RSU allocation for VANET", "text": "Roadside Unit (RSU) is an essential unit in a vehicular ad-hoc network (VANET) for collecting and analyzing traffic data given from smart vehicles. Furthermore, RSUs can take part in controlling traffic flow for vehicle's secure driving by broadcasting locally analyzed data, forwarding some important messages, and communicating with other RSUs, and soon. In order to maximize the availability of RSUs in the VANET, RSUs need to be fully distributed over an entire area. Thus, RSUs can make the best use of all traffic data gathered from every intersection. In this paper, we provide intersection-priority based RSU placement methods to find the optimal number and positions of RSUs for the full distribution providing with a maximal connectivity between RSUs while minimizing RSU setup costs. We propose three optimal algorithms: greedy, dynamic and hybrid algorithms. Finally, we provide simulated analyses of our algorithms using real urban roadmaps of JungGu/Jongrogu, YongsanGu, and GangnamGu in Seoul, each of which has a characteristic road style different than each other. We analyze how our algorithms work in such different types of roadways with real traffic data, and find the optimal number and positions of RSUs in these areas."}
{"_id": "6c3c36fbc2cf24baf2301e80da57ed68cab97cd6", "title": "Repeated labeling using multiple noisy labelers", "text": "This paper addresses the repeated acquisition of labels for data items when the labeling is imperfect. We examine the improvement (or lack thereof) in data quality via repeated labeling, and focus especially on the improvement of training labels for supervised induction of predictive models. With the outsourcing of small tasks becoming easier, for example via Amazon\u2019s Mechanical Turk, it often is possible to obtain less-than-expert labeling at low cost. With low-cost labeling, preparing the unlabeled part of the data can become considerably more expensive than labeling. We present repeated-labeling strategies of increasing complexity, and show several main results. (i)\u00a0Repeated-labeling can improve label quality and model quality, but not always. (ii)\u00a0When labels are noisy, repeated labeling can be preferable to single labeling even in the traditional setting where labels are not particularly cheap. (iii)\u00a0As soon as the cost of processing the unlabeled data is not free, even the simple strategy of labeling everything multiple times can give considerable advantage. (iv)\u00a0Repeatedly labeling a carefully chosen set of points is generally preferable, and we present a set of robust techniques that combine different notions of uncertainty to select data points for which quality should be improved. The bottom line: the results show clearly that when labeling is not perfect, selective acquisition of multiple labels is a strategy that data miners should have in their repertoire; for certain label-quality/cost regimes, the benefit is substantial."}
{"_id": "1d1da2ef88928cf6174c9c53e0543665bc285b68", "title": "Improving literacy in developing countries using speech recognition-supported games on mobile devices", "text": "Learning to read in a second language is challenging, but highly rewarding. For low-income children in developing countries, this task can be significantly more challenging because of lack of access to high-quality schooling, but can potentially improve economic prospects at the same time. A synthesis of research findings suggests that practicing recalling and vocalizing words for expressing an intended meaning could improve word reading skills - including reading in a second language - more than silent recognition of what the given words mean. Unfortunately, many language learning software do not support this instructional approach, owing to the technical challenges of incorporating speech recognition support to check that the learner is vocalizing the correct word. In this paper, we present results from a usability test and two subsequent experiments that explore the use of two speech recognition-enabled mobile games to help rural children in India read words with understanding. Through a working speech recognition prototype, we discuss two major contributions of this work: first, we give empirical evidence that shows the extent to which productive training (i.e. vocalizing words) is superior to receptive vocabulary training, and discuss the use of scaffolding hints to \"\"unpack\"\" factors in the learner's linguistic knowledge that may impact reading. Second, we discuss what our results suggest for future research in HCI."}
{"_id": "742c42f5cd7c1f195fa83c8c1611ee7e62c9c81f", "title": "Making middleboxes someone else's problem: network processing as a cloud service", "text": "Modern enterprises almost ubiquitously deploy middlebox processing services to improve security and performance in their networks. Despite this, we find that today's middlebox infrastructure is expensive, complex to manage, and creates new failure modes for the networks that use them. Given the promise of cloud computing to decrease costs, ease management, and provide elasticity and fault-tolerance, we argue that middlebox processing can benefit from outsourcing the cloud. Arriving at a feasible implementation, however, is challenging due to the need to achieve functional equivalence with traditional middlebox deployments without sacrificing performance or increasing network complexity.\n In this paper, we motivate, design, and implement APLOMB, a practical service for outsourcing enterprise middlebox processing to the cloud.\n Our discussion of APLOMB is data-driven, guided by a survey of 57 enterprise networks, the first large-scale academic study of middlebox deployment. We show that APLOMB solves real problems faced by network administrators, can outsource over 90% of middlebox hardware in a typical large enterprise network, and, in a case study of a real enterprise, imposes an average latency penalty of 1.1ms and median bandwidth inflation of 3.8%."}
{"_id": "6b3c3e02cfc46ca94097934bec18333dde7cf77c", "title": "Improving Intrusion Detection System based on Snort rules for network probe attack detection", "text": "Data and network system security is the most important roles. An organization should find the methods to protect their data and network system to reduce the risk from attacks. Snort Intrusion Detection System (Snort-IDS) is a security tool of network security. It has been widely used for protecting the network of the organizations. The Snort-IDS utilize the rules to matching data packets traffic. If some packet matches the rules, Snort-IDS will generate the alert messages. However, Snort-IDS contain many rules and it also generates a lot of false alerts. In this paper, we present the procedure to improve the Snort-IDS rules for the network probe attack detection. In order to test the performance evaluation, we utilized the data set from the MIT-DAPRA 1999, which includes the normal and abnormal traffics. Firstly, we analyzed and explored the existing the Snort-IDS rules to improve the proposed Snort-IDS rules. Secondly, we applied the WireShark software to analyze data packets form of attack in data set. Finally, the Snort-IDS was improved, and it can detect the network probe attack. This paper, we had classified the attacks into several groups based on the nature of network probe attack. In addition, we also compared the efficacy of detection attacks between Snort-IDS rules to be updated with the Detection Scoring Truth. As the experimental results, the proposed Snort-IDS efficiently detected the network probe attacks compared to the Detection Scoring Truth. It can achieve higher accuracy. However, there were some detecting alert that occur over the attack in Detection Scoring Truth, because some attack occur in several time but the Detection Scoring Truth indentify as one time."}
{"_id": "5f507abd8d07d3bee56820fd3a5dc2234d1c38ee", "title": "Risk management applied to software development projects in incubated technology-based companies : literature review , classification , and analysis Gest\u00e3o", "text": ""}
{"_id": "4dda236c57d9807d811384ffa714196c4999949d", "title": "Algebraic Distance on Graphs", "text": "Measuring the connection strength between a pair of vertices in a graph is one of the most important concerns in many graph applications. Simple measures such as edge weights may not be sufficient for capturing the effects associated with short paths of lengths greater than one. In this paper, we consider an iterative process that smooths an associated value for nearby vertices, and we present a measure of the local connection strength (called the algebraic distance, see [25]) based on this process. The proposed measure is attractive in that the process is simple, linear, and easily parallelized. An analysis of the convergence property of the process reveals that the local neighborhoods play an important role in determining the connectivity between vertices. We demonstrate the practical effectiveness of the proposed measure through several combinatorial optimization problems on graphs and hypergraphs."}
{"_id": "2e7ebdd353c1de9e47fdd1cf0fce61bd33d87103", "title": "Comparing Speech Recognition Systems (Microsoft API, Google API And CMU Sphinx)", "text": "The idea of this paper is to design a tool that will be used to test and compare commercial speech recognition systems, such as Microsoft Speech API and Google Speech API, with open-source speech recognition systems such as Sphinx-4. The best way to compare automatic speech recognition systems in different environments is by using some audio recordings that were selected from different sources and calculating the word error rate (WER). Although the WER of the three aforementioned systems were acceptable, it was observed that the Google API is superior."}
{"_id": "49ea217068781d3f3d07ef258b84a1fd4cae9528", "title": "Pellet: An OWL DL Reasoner", "text": "Reasoning capability is of crucial importance to many applications developed for the Semantic Web. Description Logics provide sound and complete reasoning algorithms that can effectively handle the DL fragment of the Web Ontology Language (OWL). However, existing DL reasoners were implemented long before OWL came into existence and lack some features that are essential for Semantic Web applications, such as reasoning with individuals, querying capabilities, nominal support, elimination of the unique name assumption and so forth. With these objectives in mind we have implemented an OWL DL reasoner and deployed it in various kinds of applications."}
{"_id": "1cb5dea2a8f6abf0ef61ce229ee866594b6c5228", "title": "Unsupervised Lesion Detection in Brain CT using Bayesian Convolutional Autoencoders", "text": "Normally, lesions are detected using supervised learning techniques that require labelled training data. We explore the use of Bayesian autoencoders to learn the variability of healthy tissue and detect lesions as unlikely events under the normative model. As a proof-of-concept, we test our method on registered 2D midaxial slices from CT imaging data. Our results indicate that our method achieves best performance in detecting lesions caused by bleeding compared to baselines."}
{"_id": "1450296fb936d666f2f11454cc8f0108e2306741", "title": "Learning to Discover Cross-Domain Relations with Generative Adversarial Networks", "text": "While humans easily recognize relations between data from different domains without any supervision, learning to automatically discover them is in general very challenging and needs many ground-truth pairs that illustrate the relations. To avoid costly pairing, we address the task of discovering cross-domain relations given unpaired data. We propose a method based on generative adversarial networks that learns to discover relations between different domains (DiscoGAN). Using the discovered relations, our proposed network successfully transfers style from one domain to another while preserving key attributes such as orientation and face identity."}
{"_id": "639937b3a1b8bded3f7e9a40e85bd3770016cf3c", "title": "A 3D Face Model for Pose and Illumination Invariant Face Recognition", "text": "Generative 3D face models are a powerful tool in computer vision. They provide pose and illumination invariance by modeling the space of 3D faces and the imaging process. The power of these models comes at the cost of an expensive and tedious construction process, which has led the community to focus on more easily constructed but less powerful models. With this paper we publish a generative 3D shape and texture model, the Basel Face Model (BFM), and demonstrate its application to several face recognition task. We improve on previous models by offering higher shape and texture accuracy due to a better scanning device and less correspondence artifacts due to an improved registration algorithm. The same 3D face model can be fit to 2D or 3D images acquired under different situations and with different sensors using an analysis by synthesis method. The resulting model parameters separate pose, lighting, imaging and identity parameters, which facilitates invariant face recognition across sensors and data sets by comparing only the identity parameters. We hope that the availability of this registered face model will spur research in generative models. Together with the model we publish a set of detailed recognition and reconstruction results on standard databases to allow complete algorithm comparisons."}
{"_id": "6424b69f3ff4d35249c0bb7ef912fbc2c86f4ff4", "title": "Deep Learning Face Attributes in the Wild", "text": "Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently. LNet is pre-trained by massive general object categories for face localization, while ANet is pre-trained by massive face identities for attribute prediction. This framework not only outperforms the state-of-the-art with a large margin, but also reveals valuable facts on learning face representation. (1) It shows how the performances of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images have strong indication of face locations. This fact enables training LNet for face localization with only image-level annotations, but without face bounding boxes or landmarks, which are required by all attribute recognition works. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and such concepts are significantly enriched after fine-tuning with attribute tags. Each attribute can be well explained with a sparse linear combination of these concepts."}
{"_id": "6b4da897dce4d6636670a83b64612f16b7487637", "title": "Learning from Simulated and Unsupervised Images through Adversarial Training", "text": "With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulators output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a self-regularization term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data."}
{"_id": "76705c60d9e41dddbb6e4e75c08dcb2b6fa23ed6", "title": "A Measure of Polarization on Social Media Networks Based on Community Boundaries", "text": "Polarization in social media networks is a fact in several scenarios such as political debates and other contexts such as same-sex marriage, abortion and gun control. Understanding and quantifying polarization is a longterm challenge to researchers from several areas, also being a key information for tasks such as opinion analysis. In this paper, we perform a systematic comparison between social networks that arise from both polarized and non-polarized contexts. This comparison shows that the traditional polarization metric \u2013 modularity \u2013 is not a direct measure of antagonism between groups, since non-polarized networks may be also divided into fairly modular communities. To bridge this conceptual gap, we propose a novel polarization metric based on the analysis of the boundary of a pair of (potentially polarized) communities, which better captures the notions of antagonism and polarization. We then characterize polarized and non-polarized social networks according to the concentration of high-degree nodes in the boundary of communities, and found that polarized networks tend to exhibit low concentration of popular nodes along the boundary. To demonstrate the usefulness of our polarization measures, we analyze opinions expressed on Twitter on the gun control issue in the United States, and conclude that our novel metrics help making sense of opinions expressed on online media."}
{"_id": "8ae5dde36e2755fd9afcb8a62df8cc9e35c79cb1", "title": "Significant improvement in one-dimensional cursor control using Laplacian electroencephalography over electroencephalography.", "text": "OBJECTIVE\nBrain-computer interfaces (BCIs) based on electroencephalography (EEG) have been shown to accurately detect mental activities, but the acquisition of high levels of control require extensive user training. Furthermore, EEG has low signal-to-noise ratio and low spatial resolution. The objective of the present study was to compare the accuracy between two types of BCIs during the first recording session. EEG and tripolar concentric ring electrode (TCRE) EEG (tEEG) brain signals were recorded and used to control one-dimensional cursor movements.\n\n\nAPPROACH\nEight human subjects were asked to imagine either 'left' or 'right' hand movement during one recording session to control the computer cursor using TCRE and disc electrodes.\n\n\nMAIN RESULTS\nThe obtained results show a significant improvement in accuracies using TCREs (44%-100%) compared to disc electrodes (30%-86%).\n\n\nSIGNIFICANCE\nThis study developed the first tEEG-based BCI system for real-time one-dimensional cursor movements and showed high accuracies with little training."}
{"_id": "fdaaa830821d5693f709d3bfcdec1526f32d32af", "title": "Assessment of healthcare claims rejection risk using machine learning", "text": "Modern healthcare service records, called Claims, record the medical treatments by a Provider (Doctor/Clinic), medication advised etc., along with the charges, and payments to be made by the patient and the Payer (insurance provider). Denial and rejection of healthcare claims is a significant administrative burden and source of loss to various healthcare providers and payers as well. Automating the identification of Claims prone to denial by reason, source, cause and other deciding factors is critical to lowering this burden of rework. We present classification methods based on Machine Learning (ML) to fully automate identification of such claims prone to rejection or denial with high accuracy, investigate the reasons for claims denial and recommend methods to engineer features using Claim Adjustment Reason Codes (CARC) as features with high Information Gain. The ML engine reported is first of its kind in Claims risk identification and represents a novel, significant enhancement to the state of practice of using ML for automating and containing claims denial risks."}
{"_id": "df39b32b8f2207c17ea19353591673244fda53eb", "title": "A 16 Gb/s receiver with DC wander compensated rail-to-rail AC coupling and passive linear-equalizer in 22 nm CMOS", "text": "A 16 Gb/s receiver implemented in 22 nm SOI CMOS technology is reported. The analog frontend accepts a rail-to-rail input common-mode imposed from the transmitter side. It consists of a baseline wander compensated passive linear equalizer that AC-couples the received signal to the subsequent active CTLE with a regulated common-mode level. The programmable passive linear equalizer features a frequency response suitable for low-frequency equalization such as for skin-effect losses. When its zero is programmed at 200 MHz minimum frequency, the measured maximum mid-band peaking is 7 dB. The receiver architecture is half-rate and comprises an 8-tap DFE and a baud-rate CDR. With no FFE at the transmitter, 0.9 Vppd PRBS31 NRZ data are recovered error-free (BER<;10-12) across a copper channel with 34 dB attenuation at 8 GHz."}
{"_id": "332c81b75c22ca272ccf0ca3237066b35ea81c3b", "title": "A Passive Gait-Based Weight-Support Lower Extremity Exoskeleton With Compliant Joints", "text": "This paper presents the design and analysis of a passive body weight (BW)-support lower extremity exoskeleton (LEE) with compliant joints to relieve compressive load in the knee. The biojoint-like mechanical knee decouples human gait into two phases, stance and swing, by a dual snap fit. The knee joint transfers the BW to the ground in the stance phases and is compliant to free the leg in the swing phases. Along with a leg dynamic model and a knee biomechanical model, the unmeasurable knee internal forces are simulated. The concept feasibility and dynamic models of the passive LEE design have been experimentally validated with measured plantar forces. The reduced knee forces confirm the effectiveness of the LEE in supporting human BW during walking and also provide a basis for computing the internal knee forces as a percentage of BW. Energy harvested from the hip spring reveals that the LEE can save human walking energy."}
{"_id": "26d0b98825761cda7e1a79475dbf6dc140daffbb", "title": "A Second-Order Class-D Audio Amplifier", "text": "Class-D audio amplifiers are particularly efficient, and this efficiency has led to their ubiquity in a wide range of modern electronic appliances. Their output takes the form of a high-frequency square wave whose duty cycle (ratio of on-time to off-time) is modulated at low frequency according to the audio signal. A mathematical model is developed here for a second-order class-D amplifier design (i.e., containing one second-order integrator) with negative feedback. We derive exact expressions for the dominant distortion terms, corresponding to a general audio input signal, and confirm these predictions with simulations. We also show how the observed phenomenon of \u201cpulse skipping\u201d arises from an instability of the analytical solution upon which the distortion calculations are based, and we provide predictions of the circumstances under which pulse skipping will take place, based on a stability analysis. These predictions are confirmed by simulations."}
{"_id": "d2938415204bb6f99a069152cb954e4baa441bba", "title": "A Compact GPS Antenna for Artillery Projectile Applications", "text": "This letter presents a compact antenna suitable for the reception of GPS signals on artillery projectiles over 1.57-1.60 GHz. Four inverted-F-type elements are excited by a series feed network in equal magnitude and successive 90\u00b0 phase difference. The shape and form factor of the antenna is tailored so that the antenna can be easily installed inside an artillery fuze. Measurements show that the proposed antenna has a gain of 2.90-3.77 dBic, an axial ratio of 1.9-2.86 dB, and a reflection coefficient of less than -10 dB over 1.57-1.62 GHz."}
{"_id": "a9c2ecffaf332d714a5c69adae1dad12031ee77a", "title": "A Graph Rewriting Approach for Converting Asynchronous ROMs into Synchronous Ones", "text": "Most of FPGAs have Configurable Logic Blocks (CLBs) to implement combinational and sequential circuits and block RAMs to implement Random Access Memories (RAMs) and Read Only Memories (ROMs). Circuit design that minimizes the number of clock cycles is easy if we use asynchronous read operations. However, most of FPGAs support synchronous read operations, but do not support asynchronous read operations. The main contribution of this paper is to provide one of the potent approaches to resolve this problem. We assume that a circuit using asynchronous ROMs designed by a non-expert or quickly designed by an expert is given. Our goal is to convert this circuit with asynchronous ROMs into an equivalent circuit with synchronous ones. The resulting circuit with synchronous ROMs can be embedded into FPGAs. We also discuss several techniques to decrease the latency and increase the clock frequency of the resulting circuits. key words: FPGA, block RAMs, asynchronous read operations, rewriting algorithm"}
{"_id": "c792d3aa4a0a2a93b6c443143588a19c645c66f4", "title": "Object-Oriented Design Process Model", "text": "ion x x x x Relationship x x x x x"}
{"_id": "289f1a3a127d0bc22b2abf4b897a03d934aec51b", "title": "Implementation of a Restricted Boltzmann Machine in a Spiking Neural Network", "text": "Restricted Boltzmann Machines (RBMs) have been demonstrated to perform efficiently on a variety of applications, such as dimensionality reduction and classification. Implementing RBMs on neuromorphic hardware has certain advantages, particularly from a concurrency and lowpower perspective. This paper outlines some of the requirements involved for neuromorphic adaptation of an RBM and attempts to address these issues with suitably targeted modifications for sampling and weight updates. Results show the feasibility of such alterations which will serve as a guide for future implementation of such algorithms in VLSI arrays of spiking neurons."}
{"_id": "3c701a0fcf29817d3f22117b8b73993a4e0d303b", "title": "Citation-based bootstrapping for large-scale author disambiguation", "text": "We present a new, two-stage, self-supervised algorithm for author disambiguation in large bibliographic databases. In the first \u201cbootstrap\u201d stage, a collection of highprecision features is used to bootstrap a training set with positive and negative examples of coreferring authors. A supervised feature-based classifier is then trained on the bootstrap clusters and used to cluster the authors in a larger unlabeled dataset. Our selfsupervised approach shares the advantages of unsupervised approaches (no need for expensive hand labels) as well as supervised approaches (a rich set of features that can be discriminatively trained). The algorithm disambiguates 54,000,000 author instances in Thomson Reuters\u2019 Web of Knowledge with B3 F1 of .807. We analyze parameters and features, particularly those from citation networks, which have not been deeply investigated in author disambiguation. The most important citation feature is self-citation, which can be approximated without expensive extraction of the full network. For the supervised stage, the minor improvement due to other citation features (increasing F1 from .748 to .767) suggests they may not be worth the trouble of extracting from databases that don\u2019t already have them. A lean feature set without expensive abstract and title features performs 130 times faster with about equal F"}
{"_id": "17033fd4fff03228cd6a06518365b082b4b45f7f", "title": "The bright-side and the dark-side of CEO personality: examining core self-evaluations, narcissism, transformational leadership, and strategic influence.", "text": "This article reports on an examination of the relationships between chief executive officer (CEO) personality, transformational and transactional leadership, and multiple strategic outcomes in a sample of 75 CEOs of Major League Baseball organizations over a 100-year period. CEO bright-side personality characteristics (core self-evaluations) were positively related to transformational leadership, whereas dark-side personality characteristics (narcissism) of CEOs were negatively related to contingent reward leadership. In turn, CEO transformational and contingent reward leadership were related to 4 different strategic outcomes, including manager turnover, team winning percentage, fan attendance, and an independent rating of influence. CEO transformational leadership was positively related to ratings of influence, team winning percentage, and fan attendance, whereas contingent reward leadership was negatively related to manager turnover and ratings of influence."}
{"_id": "ae15bbd7c206137fa8f8f5abc127e91bf59b7ddc", "title": "Early Detection and Quantification of Verticillium Wilt in Olive Using Hyperspectral and Thermal Imagery over Large Areas", "text": "Automatic methods for an early detection of plant diseases (i.e., visible symptoms at early stages of disease development) using remote sensing are critical for precision crop protection. Verticillium wilt (VW) of olive caused by Verticillium dahliae can be controlled only if detected at early stages of development. Linear discriminant analysis (LDA) and support vector machine (SVM) classification methods were applied to classify V. dahliae severity using remote sensing at large scale. High-resolution thermal and hyperspectral imagery were acquired with a manned platform which flew a 3000-ha commercial olive area. LDA reached an overall accuracy of 59.0% and a \u03ba of 0.487 while SVM obtained a higher overall accuracy, 79.2% with a similar \u03ba, 0.495. However, LDA better classified trees at initial and low severity levels, reaching accuracies of 71.4 and 75.0%, respectively, in comparison with the 14.3% and 40.6% obtained by SVM. Normalized canopy temperature, chlorophyll fluorescence, structural, xanthophyll, chlorophyll, carotenoid and disease indices were found to be the best indicators for early and advanced stage infection by VW. These results demonstrate that the methods developed in other studies at orchard scale are valid for flights in large areas comprising several olive orchards differing in soil and crop management characteristics."}
{"_id": "3dc5d14cdcc0240cba9c571cac4360714ef18458", "title": "EMDR: a putative neurobiological mechanism of action.", "text": "Numerous studies have provided evidence for the efficacy of eye movement desensitization and reprocessing therapy (EMDR) in the treatment of posttraumatic stress disorder (PTSD), including recent studies showing it to be more efficient than therapist-directed flooding. But few theoretical explanations of how EMDR might work have been offered. Shapiro, in her original description of EMDR, proposed that its directed eye movements mimic the saccades of rapid eye movement sleep (REM), but provided no clear explanation of how such mimicry might lead to clinical improvement. We now revisit her original proposal and present a complete model for how EMDR could lead to specific improvement in PTSD and related conditions. We propose that the repetitive redirecting of attention in EMDR induces a neurobiological state, similar to that of REM sleep, which is optimally configured to support the cortical integration of traumatic memories into general semantic networks. We suggest that this integration can then lead to a reduction in the strength of hippocampally mediated episodic memories of the traumatic event as well as the memories' associated, amygdala-dependent, negative affect. Experimental data in support of this model are reviewed and possible tests of the model are suggested."}
{"_id": "1e667b69915fef9070f063635ba01cdf229f5d8a", "title": "From ADHD to SAD: Analyzing the Language of Mental Health on Twitter through Self-Reported Diagnoses", "text": "Many significant challenges exist for the mental health field, but one in particular is a lack of data available to guide research. Language provides a natural lens for studying mental health \u2013 much existing work and therapy have strong linguistic components, so the creation of a large, varied, language-centric dataset could provide significant grist for the field of mental health research. We examine a broad range of mental health conditions in Twitter data by identifying self-reported statements of diagnosis. We systematically explore language differences between ten conditions with respect to the general population, and to each other. Our aim is to provide guidance and a roadmap for where deeper exploration is likely to be fruitful."}
{"_id": "0e52fbadb7af607b4135189e722e550a0bd6e7cc", "title": "Camouflage of self-inflicted razor blade incision scars with carbon dioxide laser resurfacing and thin skin grafting.", "text": "BACKGROUND\nSelf-cutting using a razor blade is a type of self-mutilating behavior that leaves permanent and socially unacceptable scars with unique patterns, particularly on the upper extremities and anterior chest wall. These scars are easily recognized in the community and become a source of lifelong guilt, shame, and regret for the self-mutilators. In the presented clinical study, we aimed to investigate the effectiveness of carbon dioxide laser resurfacing and thin skin grafting in camouflaging self-inflicted razor blade incision scars.\n\n\nMETHODS\nA total of 26 anatomical sites (11 upper arm, 11 forearm, and four anterior chest) of 16 white male patients, whose ages ranged from 20 to 41 years (mean, 23.8 years), were treated between February of 2001 and August of 2003. Detailed psychiatric evaluation preoperatively; informing the patient that the procedure is a \"camouflage\" operation; trimming hypertrophic scars down to intact skin level; intralesional corticosteroid injection to hypertrophic scars; carbon dioxide laser resurfacing as a single unit; thin (0.2 to 0.3 mm) skin grafting; compressive dressing for 15 days; use of tubular bandage; and protection from sunlight for at least 6 months constituted the key points of the procedure.\n\n\nRESULTS\nThe scars were successfully camouflaged and converted to a socially acceptable appearance similar to a burn scar. Partial graft loss in one case and hyperpigmentation in another case were the complications. No new hypertrophic scar developed.\n\n\nCONCLUSIONS\nThe carbon dioxide laser resurfacing and thin skin grafting method is effective in camouflaging self-inflicted razor blade incision scars."}
{"_id": "46946ff7eb0e0a42d33504690aad7ddbff83f31b", "title": "User modeling with personas", "text": "User demographic and behavior data obtained from real user observation provide valuable information for designers. Yet, such information can be misinterpreted if presented as statistic figures. Personas are fictitious user representations created in order to embody behaviors and motivations that a group of real users might express, representing them during the project development process. This article describes the persona as being an effective tool to the users' descriptive model."}
{"_id": "65a2cb8a02795015b398856327bdccc36214cdc6", "title": "IOFlow: a software-defined storage architecture", "text": "In data centers, the IO path to storage is long and complex. It comprises many layers or \"stages\" with opaque interfaces between them. This makes it hard to enforce end-to-end policies that dictate a storage IO flow's performance (e.g., guarantee a tenant's IO bandwidth) and routing (e.g., route an untrusted VM's traffic through a sanitization middlebox). These policies require IO differentiation along the flow path and global visibility at the control plane. We design IOFlow, an architecture that uses a logically centralized control plane to enable high-level flow policies. IOFlow adds a queuing abstraction at data-plane stages and exposes this to the controller. The controller can then translate policies into queuing rules at individual stages. It can also choose among multiple stages for policy enforcement.\n We have built the queue and control functionality at two key OS stages-- the storage drivers in the hypervisor and the storage server. IOFlow does not require application or VM changes, a key strength for deployability. We have deployed a prototype across a small testbed with a 40 Gbps network and storage devices. We have built control applications that enable a broad class of multi-point flow policies that are hard to achieve today."}
{"_id": "9f3b0bf6a3a083731d6c0fa0435fab5e68304dfc", "title": "Enterprise Architecture Characteristics in Context Enterprise Governance Base On COBIT 5 Framework", "text": "The existence of the enterprise architecture is an attempt of managing and planning over the evolution of information systems in the sphere of an enterprise with model-based. In developing the enterprise architecture, there are several tools definition of components in the system. This tool is known as enterprises architecture (EA) framework. In this paper, we present a method to build a model of enterprise architecture in accordance with the needs of the Organization by Understanding the characteristics of the EA framework such as Zachman, TOGAF, and FEAF. They are selected as the EA framework will be used to determine the characteristics of an EA because the framework is most widely used in corporate or Government. In COBIT 5 framework, there is a process associated with enterprise architecture it is APO03 Manage Enterprise Architecture. At this stage of the research, we describe the link between the characteristics of the EA with one process in COBIT 5 framework. The results contribute to give a recommendation how to design EA for organization based on the characteristic of EA in Context Enterprise Governance using COBIT 5 Framework."}
{"_id": "76c44858b1a3f3add903a992f66b71f5cdcd18e3", "title": "MDig : Multi-digit Recognition using Convolutional Nerual Network on Mobile", "text": "Multi-character recognition in arbitrary photographs on mobile platform is difficult, in terms of both accuracy and real-time performance. In this paper, we focus on the domain of hand-written multi-digit recognition. Convolutional neural network (CNN) is the state-of-the-art solution for object recognition, and presents a workload that is both compute and data intensive. To reduce the workload, we train a shallow CNN offline, achieving 99.07% top-1 accuracy. And we utilize preprocessing and segmentation to reduce input image size fed into CNN. For CNN implementation on the mobile platform, we adopt and modify DeepBeliefSDK to support batching fully-connected layers. On NVIDIA SHIELD tablet, the application processes a frame and extracts 32 digits in approximately 60ms, and batching the fully-connected layers reduces the CNN runtime by another 12%."}
{"_id": "2b0750d16db1ecf66a3c753264f207c2cb480bde", "title": "Mining Sequential Patterns", "text": "We are given a large database of customer transactions, where each transaction consists of customer-id, transaction time, and the items bought in the transaction. We introduce the problem of mining sequential patterns over such databases. We present three algorithms to solve this problem, and empirically evaluate their performance using synthetic data. Two of the proposed algorithms, AprioriSome and AprioriAll, have comparable performance, albeit AprioriSome performs a little better when the minimum number of customers that must support a sequential pattern is low. Scale-up experiments show that both AprioriSome and AprioriAll scale linearly with the number of customer transactions. They also have excellent scale-up properties with respect to the number of transactions per customer and the number of items in a transaction."}
{"_id": "8cc8c9ece3cd13beabbc07719f5ff694af59ba5b", "title": "Smart wheelchairs: A literature review.", "text": "Several studies have shown that both children and adults benefit substantially from access to a means of independent mobility. While the needs of many individuals with disabilities can be satisfied with traditional manual or powered wheelchairs, a segment of the disabled community finds it difficult or impossible to use wheelchairs independently. To accommodate this population, researchers have used technologies originally developed for mobile robots to create \"smart wheelchairs.\" Smart wheelchairs have been the subject of research since the early 1980s and have been developed on four continents. This article presents a summary of the current state of the art and directions for future research."}
{"_id": "7c09d08cdeb688aece28d41feb406dbbcda9c5ac", "title": "Nonlinear Model Predictive Control Based on a Self-Organizing Recurrent Neural Network", "text": "A nonlinear model predictive control (NMPC) scheme is developed in this paper based on a self-organizing recurrent radial basis function (SR-RBF) neural network, whose structure and parameters are adjusted concurrently in the training process. The proposed SR-RBF neural network is represented in a general nonlinear form for predicting the future dynamic behaviors of nonlinear systems. To improve the modeling accuracy, a spiking-based growing and pruning algorithm and an adaptive learning algorithm are developed to tune the structure and parameters of the SR-RBF neural network, respectively. Meanwhile, for the control problem, an improved gradient method is utilized for the solution of the optimization problem in NMPC. The stability of the resulting control system is proved based on the Lyapunov stability theory. Finally, the proposed SR-RBF neural network-based NMPC (SR-RBF-NMPC) is used to control the dissolved oxygen (DO) concentration in a wastewater treatment process (WWTP). Comparisons with other existing methods demonstrate that the SR-RBF-NMPC can achieve a considerably better model fitting for WWTP and a better control performance for DO concentration."}
{"_id": "9eeada49fc2cba846b4dad1012ba8a7ee78a8bb7", "title": "A New Facial Expression Recognition Method Based on Local Gabor Filter Bank and PCA plus LDA", "text": "ses a facial expression recognition system based on Gabor feature using a novel r bank. Traditionally, a global Gabor filter bank with 5 frequencies and 8 ten used to extract the Gabor feature. A lot of time will be involved to extract mensions of such Gabor feature vector are prohibitively high. A novel local Gabor art of frequency and orientation parameters is proposed. In order to evaluate the he local Gabor filter bank, we first employed a two-stage feature compression LDA to select and compress the Gabor feature, then adopted minimum distance nize facial expression. Experimental results show that the method is effective for eduction and good recognition performance in comparison with traditional entire . The best average recognition rate achieves 97.33% for JAFFE facial expression abor filter bank, feature extraction, PCA, LDA, facial expression recognition. deliver rich information about human emotion and play an essential role in human In order to facilitate a more intelligent and natural human machine interface of new cts, automatic facial expression recognition [1][18][20] had been studied world en years, which has become a very active research area in computer vision and n. There are many approaches have been proposed for facial expression analysis ages and image sequences [12][18] in the literature. we focus on the recognition of facial expression from single digital images with feature extraction. A number of approaches have been developed for extracting by: Motorola Labs Research Foundation (No.303D804372), NSFC (No.60275005), GDNSF 105938)."}
{"_id": "0c46b067779d7132c5fd097baadc261348afb261", "title": "Political Polarization on Twitter", "text": "In this study we investigate how social media shape the networked public sphere and facilitate communication between communities with different political orientations. We examine two networks of political communication on Twitter, comprised of more than 250,000 tweets from the six weeks leading up to the 2010 U.S. congressional midterm elections. Using a combination of network clustering algorithms and manually-annotated data we demonstrate that the network of political retweets exhibits a highly segregated partisan structure, with extremely limited connectivity between leftand right-leaning users. Surprisingly this is not the case for the user-to-user mention network, which is dominated by a single politically heterogeneous cluster of users in which ideologically-opposed individuals interact at a much higher rate compared to the network of retweets. To explain the distinct topologies of the retweet and mention networks we conjecture that politically motivated individuals provoke interaction by injecting partisan content into information streams whose primary audience consists of ideologically-opposed users. We conclude with statistical evidence in support of this hypothesis."}
{"_id": "1ad8410d0ded269af4a0116d8b38842a7549f0ae", "title": "Measuring User Influence in Twitter: The Million Follower Fallacy", "text": "Directed links in social media could represent anything from intimate friendships to common interests, or even a passion for breaking news or celebrity gossip. Such directed links determine the flow of information and hence indicate a user\u2019s influence on others\u2014a concept that is crucial in sociology and viral marketing. In this paper, using a large amount of data collected from Twitter, we present an in-depth comparison of three measures of influence: indegree, retweets, and mentions. Based on these measures, we investigate the dynamics of user influence across topics and time. We make several interesting observations. First, popular users who have high indegree are not necessarily influential in terms of spawning retweets or mentions. Second, most influential users can hold significant influence over a variety of topics. Third, influence is not gained spontaneously or accidentally, but through concerted effort such as limiting tweets to a single topic. We believe that these findings provide new insights for viral marketing and suggest that topological measures such as indegree alone reveals very little about the influence of a user."}
{"_id": "3531e0eb9ec8a6ac786146c71ead7f8d624fd2ca", "title": "TwitterRank: finding topic-sensitive influential twitterers", "text": "This paper focuses on the problem of identifying influential users of micro-blogging services. Twitter, one of the most notable micro-blogging services, employs a social-networking model called \"following\", in which each user can choose who she wants to \"follow\" to receive tweets from without requiring the latter to give permission first. In a dataset prepared for this study, it is observed that (1) 72.4% of the users in Twitter follow more than 80% of their followers, and (2) 80.5% of the users have 80% of users they are following follow them back. Our study reveals that the presence of \"reciprocity\" can be explained by phenomenon of homophily. Based on this finding, TwitterRank, an extension of PageRank algorithm, is proposed to measure the influence of users in Twitter. TwitterRank measures the influence taking both the topical similarity between users and the link structure into account. Experimental results show that TwitterRank outperforms the one Twitter currently uses and other related algorithms, including the original PageRank and Topic-sensitive PageRank."}
{"_id": "3f4558f0526a7491e2597941f99c14fea536288d", "title": "Author cocitation: A literature measure of intellectual structure", "text": ""}
{"_id": "7e8a5e0a87fab337d71ce04ba02b7a5ded392421", "title": "Detecting and Tracking Political Abuse in Social Media", "text": "We study astroturf political campaigns on microblogging platforms: politically-motivated individuals and organizations that use multiple centrally-controlled accounts to create the appearance of widespread support for a candidate or opinion. We describe a machine learning framework that combines topological, content-based and crowdsourced features of information diffusion networks on Twitter to detect the early stages of viral spreading of political misinformation. We present promising preliminary results with better than 96% accuracy in the detection of astroturf content in the run-up to the 2010 U.S. midterm elections."}
{"_id": "9da9786e6aba77c72516abb5cfa124402cfdb9f1", "title": "From content delivery today to information centric networking", "text": "Today, content delivery is a heterogeneous ecosystem composed by various independent infrastructures. The ever increasing growth of Internet traffic has encouraged the proliferation of different architectures to serve content provider needs and user demand. Despite the differences among the technology, their low level implementation can be characterized in a few fundamental building blocks: network storage, request routing, and data transfer. Existing solutions are inefficient because they try to build an information centric service model over a network infrastructure which was designed to support host-to-host communications. Information-Centric Networking (ICN) paradigm has been proposed as a possible solution to this mismatch. ICN integrates content delivery as a native network feature. The rationale is to architect a network that automatically interprets, processes, and delivers content (information) independently of its location. This paper makes the following contributions: 1) it identifies a set of building blocks for content delivery, 2) it surveys the most popular approaches to realize the above building blocks, 3) it compares content delivery solutions relying on the current Internet infrastructure with novel ICN approaches."}
{"_id": "8afc4ead207602dee08c9482fcf65a947b15c5e9", "title": "\"I spy with my little eye!\": breadth of attention, inattentional blindness, and tactical decision making in team sports.", "text": "Failures of awareness are common when attention is otherwise engaged. Such failures are prevalent in attention-demanding team sports, but surprisingly no studies have explored the inattentional blindness paradigm in complex sport game-related situations. The purpose of this paper is to explore the link between breadth of attention, inattentional blindness, and tactical decision-making in team ball sports. A series of studies revealed that inattentional blindness exists in the area of team ball sports (Experiment 1). More tactical instructions can lead to a narrower breadth of attention, which increases inattentional blindness, whereas fewer tactical instructions widen the breadth of attention in the area of team ball sports (Experiment 2). Further meaningful exogenous stimuli reduce inattentional blindness (Experiment 3). The results of all experiments are discussed in connection with consciousness and attention theories as well as creativity and training in team sports."}
{"_id": "40dc541032e2a31313e5acdbde83d271d699f41c", "title": "State of the art of Trust and Reputation Systems in E-Commerce Context", "text": "This article proposes in depth comparative study of the most popular, used and analyzed Trust and Reputation System (TRS) according to the trust and reputation literature and in terms of specific trustworthiness criteria. This survey is realized relying on a selection of trustworthiness criteria that analyze and evaluate the maturity and effectiveness of TRS. These criteria describe the utility, the usability, the performance and the effectiveness of the TRS. We also provide a summary table of the compared TRS within a detailed and granular selection of trust and reputation aspects."}
{"_id": "0d74e25ee00166b89792439556de170236a0c63f", "title": "Building a corpus of \"real\" texts for deception detection", "text": "Text-based deception detection is currently emerging as a vital multidisciplinary field due to its indisputable theoretical and practical value (police, security, and customs, including predatory communications, such as Internet scams). A very important issue associated with deception detection is designing valid text corpora. Most research has been carried out using texts produced in laboratory settings. There is a lack of \"real\" deceptive texts written when the stakes for deception are high as they are obviously difficult to collect and label. In addition, studies in text-based deception detection have mostly been performed for Romance and Germanic languages. There are few studies dealing with deception detection in Slavic languages. In this paper one can find an overview of available text corpora used for studying text-based deception detection as well as the description of how the first corpus of \"real\" deceptive texts for Slavic languages was collected and labeled. We expect this corpus once finished to be a handy tool for developing and testing new deception detection techniques and for carrying out related cross-cultural studies."}
{"_id": "98cef5c176f714a51ddfc8585b8985e968ea9d25", "title": "3D Human Activity Recognition with Reconfigurable Convolutional Neural Networks", "text": "Human activity understanding with 3D/depth sensors has received increasing attention in multimedia processing and interactions. This work targets on developing a novel deep model for automatic activity recognition from RGB-D videos. We represent each human activity as an ensemble of cubic-like video segments, and learn to discover the temporal structures for a category of activities, i.e. how the activities to be decomposed in terms of classification. Our model can be regarded as a structured deep architecture, as it extends the convolutional neural networks (CNNs) by incorporating structure alternatives. Specifically, we build the network consisting of 3D convolutions and max-pooling operators over the video segments, and introduce the latent variables in each convolutional layer manipulating the activation of neurons. Our model thus advances existing approaches in two aspects: (i) it acts directly on the raw inputs (grayscale-depth data) to conduct recognition instead of relying on hand-crafted features, and (ii) the model structure can be dynamically adjusted accounting for the temporal variations of human activities, i.e. the network configuration is allowed to be partially activated during inference. For model training, we propose an EM-type optimization method that iteratively (i) discovers the latent structure by determining the decomposed actions for each training example, and (ii) learns the network parameters by using the back-propagation algorithm. Our approach is validated in challenging scenarios, and outperforms state-of-the-art methods. A large human activity database of RGB-D videos is presented in addition."}
{"_id": "2883a32fe32493c1519f404112cbdadd1fe90c7c", "title": "Formal analysis of privacy requirements specifications for multi-tier applications", "text": "Companies require data from multiple sources to develop new information systems, such as social networking, e-commerce and location-based services. Systems rely on complex, multi-stakeholder data supply-chains to deliver value. These data supply-chains have complex privacy requirements: privacy policies affecting multiple stakeholders (e.g. user, developer, company, government) regulate the collection, use and sharing of data over multiple jurisdictions (e.g. California, United States, Europe). Increasingly, regulators expect companies to ensure consistency between company privacy policies and company data practices. To address this problem, we propose a methodology to map policy requirements in natural language to a formal representation in Description Logic. Using the formal representation, we reason about conflicting requirements within a single policy and among multiple policies in a data supply chain. Further, we enable tracing data flows within the supply-chain. We derive our methodology from an exploratory case study of Facebook platform policy. We demonstrate the feasibility of our approach in an evaluation involving Facebook, Zynga and AOL-Advertising policies. Our results identify three conflicts that exist between Facebook and Zynga policies, and one conflict within the AOL Advertising policy."}
{"_id": "d7377a8ac77be7041b189626a9dbef20ed37829c", "title": "e-health first impressions and visual evaluations: key design principles for attention and appeal", "text": "Design plays a critical role in the development of e-health, greatly impacting the outreach potential for pertinent health communication. Design influences viewers' initial evaluations of electronic displays of health information, as well as directly impacting the likelihood one will attend to and favorably evaluate the information, essential actions for processing the health concepts presented. Individuals with low health literacy, representing a hard-to-reach audience susceptible to worsened health outcomes, will benefit greatly from the application of theory-based design principles. Design principles that have been shown to appeal and engage audiences are the necessary first step for effective message delivery. Design principles, which directly impact increased attention, favorable evaluations, and greater information processing abilities, include: web aesthetics, visual complexity, affordances, prototypicality, and persuasive imagery. These areas of theory-driven design research should guide scholars in e-health investigation with research goals of broader outreach, reduction of disparities, and potential avenues for reduced health care costs. Improving design by working with this hard-to-reach audience will simultaneously improve practice, as the applications of key design principles through theory-driven design research will allow practitioners to create effective e-health that will benefit people more broadly."}
{"_id": "d88c3f2fa212e4ec1254e98b33380928299fa02b", "title": "Towards SVC-based Adaptive Streaming in information centric networks", "text": "HTTP Adaptive Streaming (HAS) is becoming the de-facto standard for video streaming services. In HAS, each video is segmented and stored in different qualities. The client can dynamically select the most appropriate quality level to download, allowing it to adapt to varying network conditions. As the Internet was not designed to deliver such applications, optimal support for multimedia delivery is still missing. Information Centric Networking (ICN) is a recently proposed disruptive architecture that could solve this issue, where the focus is given to the content rather than to end-to-end connectivity. Due to the bandwidth unpredictability typical of ICN, standard AVC-based HAS performs quality selection sub-optimally, thus leading to a poor Quality of Experience (QoE). In this article, we propose to overcome this inefficiency by using Scalable Video Coding (SVC) instead. We individuate the main advantages of SVC-based HAS over ICN and outline, both theoretically and via simulation, the research challenges to be addressed to optimize the delivered QoE."}
{"_id": "f6c265af493c74cb7ef64b8ffe238e3f2487d133", "title": "A novel compact printed ACS fed dual-band antenna for Bluetooth/WLAN/WiMAX applications", "text": "In this research article, a compact dual band asymmetric coplanar strip-fed printed antenna is designed and presented for Bluetooth, WLAN/WiMAX and public safety applications. The dual frequency operating bands (2.45 GHz and 5.25 GHz) have been achieved by attaching two simple meander shaped radiating strips to the ACS feed line. The proposed antenna geometry is printed on a low cost FR4 substrate having thickness of 1.6mm with overall dimensions of 13 \u00d7 21.3m including uniplanar ground plane. The -10 dB impedance bandwidth of the meandered ACS-fed dual band monopole antenna is about 140MHz from 2.36-2.5 GHz, and 2500MHz from 4.5-7.0 GHz respectively, which can cover 2.4 GHz Bluetooth/WLAN, 5.2/5.8 GHz WLAN, 5.5 GHz WiMAX and 4.9 GHz US public safety bands. In addition to the simple geometry and wide impedance bandwidth features, proposed structure perform bidirectional and omnidirectional radiation pattern in both E and H-plane respectively."}
{"_id": "12d6bf07c3a530bfe5a962e807347ed7b3532d02", "title": "Table-top spatially-augmented realty: bringing physical models to life with projected imagery", "text": "Despite the availability of high-quality graphics systems, architects and designers still build scaled physical models of buildings and products. These physical models have many advantages, however they are typically static in structure and surface characteristics. They are inherently lifeless. In contrast, high-quality graphics systems are tremendously flexible, allowing viewers to see alternative structures, facades, textures, cut-away views, and even dynamic effects such as changing lighting, moving automobiles, people, etc. We introduce a combination of these approaches that builds on our previously-published projector-based Spatially-Augmented Reality techniques. The basic idea is to aim multiple ceiling-mounted light projectors inward to graphically augment table-top scaled physical models of buildings or products. This approach promises to provide very compelling hybrid visualizations that afford the benefits of both traditional physical models, and modern computer graphics, effectively \"bringing to life\" table-top"}
{"_id": "4ab84a0dc95f51ea23f4a26178ceba4a7f04a38f", "title": "Visual Event Summarization on Social Media using Topic Modelling and Graph-based Ranking Algorithms", "text": "Due to the increasing popularity of microblogging platforms, the amount of messages (posts) related to public events, especially posts encompassing multimedia content, is steadily increasing. The inclusion of images can convey much more information about the event, compared to their text, which is typically very short (e.g., tweets). Although such messages can be quite informative regarding different aspects of the event, there is a lot of spam and redundancy making it challenging to extract pertinent insights. In this work, we describe a summarization framework that, given a set of social media messages about an event, aims to select a subset of images derived from them, that, at the same time, maximizes the relevance of the selected images and minimizes their redundancy. To this end, we propose a topic modeling technique to capture the relevance of messages to event topics and a graph-based algorithm to produce a diverse ranking of the selected high-relevance images. A user-centered evaluation on a large Twitter dataset around several real-world events demonstrates that the proposed method considerably outperforms a number of state-of-the-art summarization algorithms in terms of result relevance, while at the same time it is also highly competitive in terms of diversity. Namely, we get an improvement of 25% in terms of precision compared to the second best result, and 7% in terms of diversity."}
{"_id": "8f242985963caaf265e921275f7ecc43d3381c89", "title": "Calibration of RGB camera with velodyne LiDAR", "text": "Calibration of the LiDAR sensor with RGB camera finds its usage in many application fields from enhancing image classification to the environment perception and mapping. This paper presents a pipeline for mutual pose and orientation estimation of the mentioned sensors using a coarse to fine approach. Previously published methods use multiple views of a known chessboard marker for computing the calibration parameters, or they are limited to the calibration of the sensors with a small mutual displacement only. Our approach presents a novel 3D marker for coarse calibration which can be robustly detected in both the camera image and the LiDAR scan. It also requires only a single pair of camera-LiDAR frames for estimating large sensors displacement. Consequent refinement step searches for more accurate calibration in small subspace of calibration parameters. The paper also presents a novel way for evaluation of the calibration precision using projection error."}
{"_id": "e289f61e79f12738cca1bf0558e0e8c89219d928", "title": "Sketch2normal: deep networks for normal map generation", "text": "Normal maps are of great importance for many 2D graphics applications such as surface editing, re-lighting, texture mapping and 2D shading etc. Automatically inferring normal map is highly desirable for graphics designers. Many researchers have investigated the inference of normal map from intuitive and flexiable line drawing based on traditional geometric methods while our proposed deep networks-based method shows more robustness and provides more plausible results."}
{"_id": "e1cfd9fa3f55c89ff1a09bb8b5ae44485451fb99", "title": "Delayed enhancement of multitasking performance: Effects of anodal transcranial direct current stimulation on the prefrontal cortex", "text": "BACKGROUND\nThe dorsolateral prefrontal cortex (DLPFC) has been proposed to play an important role in neural processes that underlie multitasking performance. However, this claim is underexplored in terms of direct causal evidence.\n\n\nOBJECTIVE\nThe current study aimed to delineate the causal involvement of the DLPFC during multitasking by modulating neural activity with transcranial direct current stimulation (tDCS) prior to engagement in a demanding multitasking paradigm.\n\n\nMETHODS\nThe study is a single-blind, crossover, sham-controlled experiment. Anodal tDCS or sham tDCS was applied over left DLPFC in forty-one healthy young adults (aged 18-35 years) immediately before they engaged in a 3-D video game designed to assess multitasking performance. Participants were separated into three subgroups: real-sham (i.e., real tDCS in the first session, followed by sham tDCS in the second session 1\u00a0h later), sham-real (sham tDCS first session, real tDCS second session), and sham-sham (sham tDCS in both sessions).\n\n\nRESULTS\nThe real-sham group showed enhanced multitasking performance and decreased multitasking cost during the second session, compared to first session, suggesting delayed cognitive benefits of tDCS. Interestingly, performance benefits were observed only for multitasking and not on a single-task version of the game. No significant changes were found between the first and second sessions for either the sham-real or the sham-sham groups.\n\n\nCONCLUSIONS\nThese results suggest a causal role of left prefrontal cortex in facilitating the simultaneous performance of more than one task, or multitasking. Moreover, these findings reveal that anodal tDCS may have delayed benefits that reflect an enhanced rate of learning."}
{"_id": "1e9f908a7ff08e8c4317252896ae0e545b27f1c4", "title": "Binarization-free OCR for historical documents using LSTM networks", "text": "A primary preprocessing block of almost any typical OCR system is binarization, through which it is intended to remove unwanted part of the input image, and only keep a binarized and cleaned-up version for further processing. The binarization step does not, however, always perform perfectly, and it can happen that binarization artifacts result in important information loss, by for instance breaking or deforming character shapes. In historical documents, due to a more dominant presence of noise and other sources of degradations, the performance of binarization methods usually deteriorates; as a result the performance of the recognition pipeline is hindered by such preprocessing phases. In this paper, we propose to skip the binarization step by directly training a 1D Long Short Term Memory (LSTM) network on gray-level text lines. We collect a large set of historical Fraktur documents, from publicly available online sources, and form train and test sets for performing experiments on both gray-level and binarized text lines. In order to observe the impact of resolution, the experiments are carried out on two identical sets of low and high resolutions. Overall, using gray-level text lines, the 1D LSTM network can reach 24% and 19% lower error rates on the low- and high-resolution sets, respectively, compared to the case of using binarization in the recognition pipeline."}
{"_id": "b802f6281eea6f049ca1686a5c6cc41e44f5748f", "title": "Joint effects of emotion and color on memory.", "text": "Numerous studies have shown that memory is enhanced for emotionally negative and positive information relative to neutral information. We examined whether emotion-induced memory enhancement is influenced by low-level perceptual attributes such as color. Because in everyday life red is often used as a warning signal, whereas green signals security, we hypothesized that red might enhance memory for negative information and green memory for positive information. To capture the signaling function of colors, we measured memory for words standing out from the context by color, and manipulated the color and emotional significance of the outstanding words. Making words outstanding by color strongly enhanced memory, replicating the well-known von Restorff effect. Furthermore, memory for colored words was further increased by emotional significance, replicating the memory-enhancing effect of emotion. Most intriguingly, the effects of emotion on memory additionally depended on color type. Red strongly increased memory for negative words, whereas green strongly increased memory for positive words. These findings provide the first evidence that emotion-induced memory enhancement is influenced by color and demonstrate that different colors can have different functions in human memory."}
{"_id": "3f994e413846f9bef8a25b30da0665420fa2bd2a", "title": "Design of a Real-Time Micro-Winch Controller for Kite-Power Applications Master \u2019 s Thesis in Embedded", "text": "Airborne wind energy is a technology to extract energy from high altitude winds. This technology is under heavy development by several companies and universities. An actual problem with the commercialization of the technology is the reliability and safety of the system. In this thesis a real time environment suitable to perform research and further development of the prototype steering and depower control is proposed. Additionally, the overload prevention of the kite lines is researched. This thesis presents a method to estimate the tension on the kite control tapes using only one tension sensor. Thus, reducing the amount of hardware needed to protect the kite from overloads. The method relies on the characterization of the powertrain efficiency and can be used to estimate the tensions at high loads. An algorithm to limit the forces on the steering lines by depowering the kite is shown; it controls the depower state of the kite based on the desired depower state, the actual tension, and previous tensions on the KCU\u2019s tapes. The tensions history is used to calculate a higher depower state to prevent future overloads, this reduces the amount of action needed by the motors and enable the system to use a brake to save energy. The limiter output is used as an input to a position controller, which allows the project to use off the shelf solutions to build the KCU prototype. The controller was implemented in a real time system and is able to run as fast as 20 Hz being the communication protocol the execution time bottleneck. The control algorithms were tested using a mathematical model of the kite, the environment, and trajectory control inputs from FreeKiteSim. Three scenarios were considered for the model test, normal operation, overload operation without tension limitation, and overload operation with tension limitation. The apparent wind speed during the reel out phase of the normal scenario is approximately 30 m/s and 35 m/s for the overload scenarios. During the overload scenario the limiter spent roughly 22% more energy than the normal operation scenario to counteract an increase of 5 m/s in the apparent wind during 3.5 hours of operation, but it spent 15% less energy than the overload scenario without tension limitation."}
{"_id": "3865d02552139862bf8dcc4942782af1d9eec17c", "title": "Analyzing News Media Coverage to Acquire and Structure Tourism Knowledge", "text": "Destination image significantly influences a tourist\u2019s decision-making process. The impact of news media coverage on destination image has attracted research attention and became particularly evident after catastrophic events such as the 2004 Indian Ocean earthquake that triggered a series of lethal tsunamis. Building upon previous research, this article analyzes the prevalence of tourism destinations among 162 international media sites. Term frequency captures the attention a destination receives\u2014from a general and, after contextual filtering, from a tourism perspective. Calculating sentiment estimates positive and negative media influences on destination image at a given point in time. Identifying semantic associations with the names of countries and major cities, the results of co-occurrence analysis reveal the public profiles of destinations, and the impact of current events on media coverage. These results allow national tourism organizations to assess how their destination is covered by news media in general, and in a specific tourism context. To guide analysts and marketers in this assessment, an iterative analysis of semantic associations extracts tourism knowledge automatically, and represents this knowledge as ontological structures."}
{"_id": "2298ec2d7a34ce667319f7e5e88005c71c4ee142", "title": "Composite Connectors for Composing Software Components", "text": "In a component-based system, connectors are used to compose components. Connectors should have a semantics that makes them simple to construct and use. At the same time, their semantics should be rich enough to endow them with desirable properties such as genericity, compositionality and reusability. For connector construction, compositionality would be particularly useful, since it would facilitate systematic construction. In this paper we describe a hierarchical approach to connector definition and construction that allows connectors to be defined and constructed from sub-connectors. These composite connectors are indeed generic, compositional and reusable. They behave like design patterns, and provide powerful composition connectors."}
{"_id": "0ea94e9c83d2e138fbccc1116b57d4f2a7ba6868", "title": "Measured Gene-Environment Interactions in Psychopathology: Concepts, Research Strategies, and Implications for Research, Intervention, and Public Understanding of Genetics.", "text": "There is much curiosity about interactions between genes and environmental risk factors for psychopathology, but this interest is accompanied by uncertainty. This article aims to address this uncertainty. First, we explain what is and is not meant by gene-environment interaction. Second, we discuss reasons why such interactions were thought to be rare in psychopathology, and argue instead that they ought to be common. Third, we summarize emerging evidence about gene-environment interactions in mental disorders. Fourth, we argue that research on gene-environment interactions should be hypothesis driven, and we put forward strategies to guide future studies. Fifth, we describe potential benefits of studying measured gene-environment interactions for basic neuroscience, gene hunting, intervention, and public understanding of genetics. We suggest that information about nurture might be harnessed to make new discoveries about the nature of psychopathology."}
{"_id": "ec76d5b32cd6f57a59d0c841f3ff558938aa6ddd", "title": "Oral stimulation for promoting oral feeding in preterm infants.", "text": "BACKGROUND\nPreterm infants (< 37 weeks' postmenstrual age) are often delayed in attaining oral feeding. Normal oral feeding is suggested as an important outcome for the timing of discharge from the hospital and can be an early indicator of neuromotor integrity and developmental outcomes. A range of oral stimulation interventions may help infants to develop sucking and oromotor co-ordination, promoting earlier oral feeding and earlier hospital discharge.\n\n\nOBJECTIVES\nTo determine the effectiveness of oral stimulation interventions for attainment of oral feeding in preterm infants born before 37 weeks' postmenstrual age (PMA).To conduct subgroup analyses for the following prespecified subgroups.\u2022 Extremely preterm infants born at < 28 weeks' PMA.\u2022 Very preterm infants born from 28 to < 32 weeks' PMA.\u2022 Infants breast-fed exclusively.\u2022 Infants bottle-fed exclusively.\u2022 Infants who were both breast-fed and bottle-fed.\n\n\nSEARCH METHODS\nWe used the standard search strategy of the Cochrane Neonatal Review Group to search the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE via PubMed (1966 to 25 February 2016), Embase (1980 to 25 February 2016) and the Cumulative Index to Nursing and Allied Health Literature (CINAHL; 1982 to 25 February 2016). We searched clinical trials databases, conference proceedings and the reference lists of retrieved articles.\n\n\nSELECTION CRITERIA\nRandomised and quasi-randomised controlled trials comparing a defined oral stimulation intervention with no intervention, standard care, sham treatment or non-oral intervention in preterm infants and reporting at least one of the specified outcomes.\n\n\nDATA COLLECTION AND ANALYSIS\nOne review author searched the databases and identified studies for screening. Two review authors screened the abstracts of these studies and full-text copies when needed to identify trials for inclusion in the review. All review authors independently extracted the data and analysed each study for risk of bias across the five domains of bias. All review authors discussed and analysed the data and used the GRADE system to rate the quality of the evidence. Review authors divided studies into two groups for comparison: intervention versus standard care and intervention versus other non-oral or sham intervention. We performed meta-analysis using a fixed-effect model.\n\n\nMAIN RESULTS\nThis review included 19 randomised trials with a total of 823 participants. Almost all included trials had several methodological weaknesses. Meta-analysis showed that oral stimulation reduced the time to transition to oral feeding compared with standard care (mean difference (MD) -4.81, 95% confidence interval (CI) -5.56 to -4.06 days) and compared with another non-oral intervention (MD -9.01, 95% CI -10.30 to -7.71 days), as well as the duration of initial hospitalisation compared with standard care (MD -5.26, 95% CI -7.34 to -3.19 days) and compared with another non-oral intervention (MD -9.01, 95% CI -10.30 to -7.71 days).Investigators reported shorter duration of parenteral nutrition for infants compared with standard care (MD -5.30, 95% CI -9.73 to -0.87 days) and compared with another non-oral intervention (MD -8.70, 95% CI -15.46 to -1.94 days). 
They could identify no effect on breast-feeding outcomes nor on weight gain.\n\n\nAUTHORS' CONCLUSIONS\nAlthough the included studies suggest that oral stimulation shortens hospital stay, days to exclusive oral feeding and duration of parenteral nutrition, one must interpret results of these studies with caution, as risk of bias and poor methodological quality are high overall. Well-designed trials of oral stimulation interventions for preterm infants are warranted. Such trials should use reliable methods of randomisation while concealing treatment allocation, blinding caregivers to treatment when possible and paying particular attention to blinding of outcome assessors."}
{"_id": "5eee46f39ae4cafd90c0c34b9996b96ac8f638a2", "title": "The Economics of State Fragmentation-Assessing the Economic Impact of Secession", "text": "This paper provides empirical evidence that declaring independence significantly lowers per capita GDP based on a large panel of countries covering the period 1950-2016. To do so, we rely on a semi-parametric identification strategy that controls for the confounding effects of past GDP dynamics, anticipation effects, unobserved heterogeneity, model uncertainty and effect heterogeneity. Our baseline results indicate that declaring independence reduces per capita GDP by around 20% in the long run. We subsequently propose a quadruple-difference procedure to demonstrate that the results are not driven by simulation and matching inaccuracies or spillover effects. A second methodological novelty consists of the development of a two-step estimator that relies on the control function approach to control for the potential endogeneity of the estimated independence payoffs and their potential determinants, to shed some light on the primary channels driving our results. We find tentative evidence that the adverse effects of independence decrease in territorial size, pointing to the presence of economies of scale, but that they are mitigated when newly independent states liberalize their trade regime or use their new-found political autonomy to democratize."}
{"_id": "9e2af148acbf7d4623ca8a946be089a774ce5258", "title": "AlterEgo: A Personalized Wearable Silent Speech Interface", "text": "We present a wearable interface that allows a user to silently converse with a computing device without any voice or any discernible movements - thereby enabling the user to communicate with devices, AI assistants, applications or other people in a silent, concealed and seamless manner. A user's intention to speak and internal speech is characterized by neuromuscular signals in internal speech articulators that are captured by the AlterEgo system to reconstruct this speech. We use this to facilitate a natural language user interface, where users can silently communicate in natural language and receive aural output (e.g - bone conduction headphones), thereby enabling a discreet, bi-directional interface with a computing device, and providing a seamless form of intelligence augmentation. The paper describes the architecture, design, implementation and operation of the entire system. We demonstrate robustness of the system through user studies and report 92% median word accuracy levels."}
{"_id": "40e2d032a11b18bc470d47f8b0545db691c0d253", "title": "Be Quiet? Evaluating Proactive and Reactive User Interface Assistants", "text": "This research examined the ability of an anthropomorphic interface assistant to help people learn and use an unfamiliar text-editing tool, with a specific focus on assessing proactive assistant behavior. Participants in the study were introduced to a text editing system that used keypress combinations for invoking the different editing operations. Participants then were directed to make a set of prescribed changes to a document with the aid either of a paper manual, an interface assistant that would hear and respond to questions orally, or an assistant that responded to questions and additionally made proactive suggestions. Anecdotal evidence suggested that proactive assistant behavior would not enhance performance and would be viewed as intrusive. Our results showed that all three conditions performed similarly on objective editing performance (completion time, commands issued, and command recall), while the participants in the latter two conditions strongly felt that the assistant\u2019s help was valuable."}
{"_id": "04f39720b9b20f8ab990228ae3fe4f473e750fe3", "title": "Probabilistic Graphical Models - Principles and Techniques", "text": ""}
{"_id": "17fac85921a6538161b30665f55991f7c7e0f940", "title": "Calibrating Noise to Sensitivity in Private Data Analysis", "text": "We continue a line of research initiated in [10, 11] on privacypreserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which f = P i g(xi), where xi denotes the ith row of the database and g maps database rows to [0, 1]. We extend the study to general functions f , proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f . Roughly speaking, this is the amount that any single argument to f can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean characterization of privacy in terms of indistinguishability of transcripts. Additionally, we obtain separation results showing the increased value of interactive sanitization mechanisms over non-interactive."}
{"_id": "2a622720d4021259a6f6d3c6298559d1b56e7e62", "title": "Scaling personalized web search", "text": "Recent web search techniques augment traditional text matching with a global notion of \"importance\" based on the linkage structure of the web, such as in Google's PageRank algorithm. For more refined searches, this global notion of importance can be specialized to create personalized views of importance--for example, importance scores can be biased according to a user-specified set of initially-interesting pages. Computing and storing all possible personalized views in advance is impractical, as is computing personalized views at query time, since the computation of each view requires an iterative computation over the web graph. We present new graph-theoretical results, and a new technique based on these results, that encode personalized views as partial vectors. Partial vectors are shared across multiple personalized views, and their computation and storage costs scale well with the number of views. Our approach enables incremental computation, so that the construction of personalized views from partial vectors is practical at query time. We present efficient dynamic programming algorithms for computing partial vectors, an algorithm for constructing personalized views from partial vectors, and experimental results demonstrating the effectiveness and scalability of our techniques."}
{"_id": "37c3303d173c055592ef923235837e1cbc6bd986", "title": "Learning Fair Representations", "text": "We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole), and individual fairness (similar individuals should be treated similarly). We formulate fairness as an optimization problem of finding a good representation of the data with two competing goals: to encode the data as well as possible, while simultaneously obfuscating any information about membership in the protected group. We show positive results of our algorithm relative to other known techniques, on three datasets. Moreover, we demonstrate several advantages to our approach. First, our intermediate representation can be used for other classification tasks (i.e., transfer learning is possible); secondly, we take a step toward learning a distance metric which can find important dimensions of the data for classification."}
{"_id": "4556f3f9463166aa3e27b2bec798c0ca7316bd65", "title": "Three naive Bayes approaches for discrimination-free classification", "text": "In this paper, we investigate how to modify the naive Bayes classifier in order to perform classification that is restricted to be independent with respect to a given sensitive attribute. Such independency restrictions occur naturally when the decision process leading to the labels in the data-set was biased; e.g., due to gender or racial discrimination. This setting is motivated by many cases in which there exist laws that disallow a decision that is partly based on discrimination. Naive application of machine learning techniques would result in huge fines for companies. We present three approaches for making the naive Bayes classifier discrimination-free: (i) modifying the probability of the decision being positive, (ii) training one model for every sensitive attribute value and balancing them, and (iii) adding a latent variable to the Bayesian model that represents the unbiased label and optimizing the model parameters for likelihood using expectation maximization. We present experiments for the three approaches on both artificial and real-life data."}
{"_id": "72e8b0167757b9c8579972ee6972065ba5cb762d", "title": "Using Visual Text Mining to Support the Study Selection Activity in Systematic Literature Reviews", "text": "Background: A systematic literature review (SLR) is a methodology used to aggregate all relevant existing evidence to answer a research question of interest. Although crucial, the process used to select primary studies can be arduous, time consuming, and must often be conducted manually. Objective: We propose a novel approach, known as 'Systematic Literature Review based on Visual Text Mining' or simply SLR-VTM, to support the primary study selection activity using visual text mining (VTM) techniques. Method: We conducted a case study to compare the performance and effectiveness of four doctoral students in selecting primary studies manually and using the SLR-VTM approach. To enable the comparison, we also developed a VTM tool that implemented our approach. We hypothesized that students using SLR-VTM would present improved selection performance and effectiveness. Results: Our results show that incorporating VTM in the SLR study selection activity reduced the time spent in this activity and also increased the number of studies correctly included. Conclusions: Our pilot case study presents promising results suggesting that the use of VTM may indeed be beneficial during the study selection activity when performing an SLR."}
{"_id": "9cc995f368233a7dce30d5a48c3509b5b4ffa57b", "title": "Pineal cyst: normal or pathological?", "text": "Review of 500 consecutive MRI studies was undertaken to assess the frequency and the appearances of cystic pineal glands. Cysts were encountered in 2.4% of cases. Follow-up examination demonstrated no change in these cysts and they were considered to be a normal variant. Size, MRI appearances and signs associated with this condition are reported in order to establish criteria of normality."}
{"_id": "48aef24024c979d70161cb8ca86793c2de8584ff", "title": "Fast algorithm for robust template matching with M-estimators", "text": "In this paper, we propose a fast algorithm for speeding up the process of template matching that uses M-estimators for dealing with outliers. We propose a particular image hierarchy called the p-pyramid that can be exploited to generate a list of ascending lower bounds of the minimal matching errors when a non-decreasing robust error measure is adopted. Then, the set of lower bounds can be used to prune the search of the p-pyramid, and a fast algorithm is thereby developed in this paper. This fast algorithm ensures finding the global minimum of the robust template matching problem in which a non-decreasing M-estimator serves as an error measure. Experimental results demonstrate the effectiveness of our method. Index Terms \u2014Template matching, robust template matching, M-estimator, fast algorithm."}
{"_id": "c5ad3dc750cc9c8316ee5903141baa7e857e670e", "title": "RELIABLE ISLANDING DETECTION WITH ACTIVE MV NETWORK MANAGEMENT", "text": "Active management of distribution networks and different controllable resources will play a key role in future Smart Grids. This paper proposes that centralized active network management functionalities at MV level could also include an algorithm which in real time confirms reliable islanding detection with local measurements based passive method. The proposed algorithm continuously ensures that there is such reactive power unbalance, so that the operation point remains constantly outside the non-detection zone of the used passive islanding detection method. The effect of recent grid code requirements on the proposed scheme, like the P/f -droop control of DG units, has been also considered."}
{"_id": "77abdfbfe65af328f61c6c0ed30b388c6cde8d63", "title": "IRIS: A goal-oriented big data analytics framework on Spark for better Business decisions", "text": "Big data analytics is the hottest new practice in Business Analytics today. However, recent industrial surveys find that big data analytics may fail to meet business expectations because of lack of business context and lack of expertise to connect the dots, inaccurate scope and batch-oriented Hadoop system. In this paper, we present IRIS - a goal-oriented big data analytics framework using Spark for better business decisions, which consists of a conceptual model which connects a business side and a big data side providing context information around the data, a claim-based evaluation method which enables to focus the most effective solutions, a process on how to use IRIS framework and an assistant tool using Spark which is a real-time big data analytics platform. In this framework, problems against business goals of the current process and solutions for the future process are explicitly hypothesized in the conceptual model and validated on real big data using big queries or big data analytics. As an empirical study, a shipment decision process is used to show how IRIS can support better business decisions in terms of comprehensive understanding both on business and data analytics, high priority and fast decisions."}
{"_id": "f5de0751d6d73f0496ac5842cc6ca84b2d0c2063", "title": "A Comprehensive Survey of Wireless Body Area Networks", "text": "Recent advances in microelectronics and integrated circuits, system-on-chip design, wireless communication and intelligent low-power sensors have allowed the realization of a Wireless Body Area Network (WBAN). A WBAN is a collection of low-power, miniaturized, invasive/non-invasive lightweight wireless sensor nodes that monitor the human body functions and the surrounding environment. In addition, it supports a number of innovative and interesting applications such as ubiquitous healthcare, entertainment, interactive gaming, and military applications. In this paper, the fundamental mechanisms of WBAN including architecture and topology, wireless implant communication, low-power Medium Access Control (MAC) and routing protocols are reviewed. A comprehensive study of the proposed technologies for WBAN at Physical (PHY), MAC, and Network layers is presented and many useful solutions are discussed for each layer. Finally, numerous WBAN applications are highlighted."}
{"_id": "104e29743b98ea97d73ce9558e839e0a6e90113f", "title": "Android ION Hazard: the Curse of Customizable Memory Management System", "text": "ION is a unified memory management interface for Android that is widely used on virtually all ARM based Android devices. ION attempts to achieve several ambitious goals that have not been simultaneously achieved before (not even on Linux). Different from managing regular memory in the system, ION is designed to share and manage memory with special constraints, e.g., physically contiguous memory. Despite the great flexibility and performance benefits offered, such a critical subsystem, as we discover, unfortunately has flawed security assumptions and designs. In this paper, we systematically analyze ION related vulnerabilities from conceptual root causes to detailed implementation decisions. Since ION is often customized heavily for different Android devices, the specific vulnerabilities often manifest themselves differently. By conducting a range of runtime testing as well as static analysis, we are able to uncover a large number of serious vulnerabilities on the latest Android devices (e.g., Nexus 6P running Android 6.0 and 7.0 preview) such as denial-of-service and dumping memory from the system and arbitrary applications (e.g., email content, passwords). Finally, we offer suggestions on how to redesign the ION subsystem to eliminate these flaws. We believe that the lessons learned can help guide the future design of similar memory management subsystems."}
{"_id": "0ee6254ab9bd1bc52d851206a84ec5f3fab9a308", "title": "Study of S-box properties in block cipher", "text": "In the field of cryptography, the substitution box (S-box) becomes the most widely used ciphers. The process of creating new and powerful S-boxes never end. Various methods are proposed to make the S-box becomes strongest and hard to attack. The strength or weakness of S-box will be determined through the analysis of S-box properties. However, the analysis of the properties of the S-box in block ciphers is still lacking because there is no specific guidelines and technique based on S-box properties. Hence, the cipher is easier to attack by an adversary if the S-box properties are not robust. The purpose of this paper is to describe and review of the S-box properties in block ciphers. As a result, for future work, a new model for analysis S-box properties will be proposed. The model can be used to analysis the properties to determine the strength and weakness of any S-boxes."}
{"_id": "b8349a89ef37e5f741da19609b6aea777cbc39ca", "title": "Design of tri-band Planar Inverted F Antenna (PIFA) with parasitic elements for UMTS2100, LTE and WiMAX mobile applications", "text": "In this paper, the multiband Planar Inverted F Antenna (PIFA) for mobile communication applications has been designed and constructed by introducing two rectangular shape parasitic elements which are located under the main radiating patch of the PIFA in order to obtain a triple-band operation. This triple-band PIFA antenna can be operated at three different operating frequencies which are 2100 MHz for UMTS2100, 2600 MHz for LTE Application and 3500 MHz for WiMAX Application. The main radiating patch of this antenna will control the upper band frequency at 3500 MHz, while the first and second parasitic elements located at the left and right edge of the ground plane will resonate at the middle band (2600MHz) and the lower band (2100 MHz) frequency operation. The proposed triple-band antenna is fabricated and experimentally tested. A good agreement between simulated and measured reflection coefficent of the antenna is achieved where the experimental impedance bandwidth covering the three applications. In these frequency bands, the antenna has nearly omni-directional radiation pattern with the peak gain between 2 dBi~ 5 dBi."}
{"_id": "872be69f66b12879d4741b0f0df02738452e3483", "title": "HGMF: Hierarchical Group Matrix Factorization for Collaborative Recommendation", "text": "Matrix factorization is one of the most powerful techniques in collaborative filtering, which models the (user, item) interactions behind historical explicit or implicit feedbacks. However, plain matrix factorization may not be able to uncover the structure correlations among users and items well such as communities and taxonomies. As a response, we design a novel algorithm, i.e., hierarchical group matrix factorization (HGMF), in order to explore and model the structure correlations among users and items in a principled way. Specifically, we first define four types of correlations, including (user, item), (user, item group), (user group, item) and (user group, item group); we then extend plain matrix factorization with a hierarchical group structure; finally, we design a novel clustering algorithm to mine the hidden structure correlations. In the experiments, we study the effectiveness of our HGMF for both rating prediction and item recommendation, and find that it is better than some state-of-the-art methods on several real-world data sets."}
{"_id": "2bcdc111f96df6b77a7d6be8c9a7e54eeda6d443", "title": "Sustainable Passenger Transportation : Dynamic RideSharing", "text": "AND"}
{"_id": "d4ddf8e44d13ac198384819443d6170fb4428614", "title": "Space Telerobotics : Unique Challenges to Human \u2013 Robot Collaboration in Space", "text": "In this chapter, we survey the current state of the art in space telerobots. We begin by defining relevant terms and describing applications. We then examine the design issues for space telerobotics, including common requirements, operational constraints, and design elements. A discussion follows of the reasons space telerobotics presents unique challenges beyond terrestrial systems. We then present case studies of several different space telerobots, examining key aspects of design and human\u2013robot interaction. Next, we describe telerobots and concepts of operations for future space exploration missions. Finally, we discuss the various ways in which space telerobots can be evaluated in order to characterize and improve performance."}
{"_id": "97bafb07cb97c7847b0b39bc0cafd1bafc28cca7", "title": "THE UBISENSE SMART SPACE PLATFORM", "text": "Ubisense has developed a platform for building Smart Space applications. The platform addresses the key requirements for building Smart Spaces: accurate 3D positioning, scalable realtime performance, development and deployment tools. This paper deepens the key requirements and describes how the Ubisense platform components meets them. The demonstration exemplifies the smart space platform by tracking players in a game application. The Ubisense Smart Space Platform Ubisense has developed a platform for building Smart Space applications. Our platform addresses the key requirements for building Smart Spaces: Accurate 3D positioning supports applications that can perceive the physical relationships between people and objects in the environment Scalable real-time performance enables arbitrary numbers of applications, used by arbitrary numbers of people over an arbitrarily-large area Development and deployment tools make it easy to design, implement, and manage Smart Space applications. The demonstration shows a Smart Space containing applications that integrate with external software (a simple game that users control by moving around), and devices (a PTZ camera that keeps users in shot while they are playing the game) Fig. 1 Smart space demonstration setup 1 Ubisense, St Andrews House, 90 St Andrews Road, Chesterton, CB4 1DL, United Kingdom, http://www.ubisense.net"}
{"_id": "5786cb96af196a5e70660689fe6bd92d40e6d7ab", "title": "A 20 W/Channel Class-D Amplifier With Near-Zero Common-Mode Radiated Emissions", "text": "A class-D amplifier that employs a new modulation scheme and associated output stage to achieve true filter-less operation is presented. It uses a new type of BD modulation that keeps the output common-mode constant, thereby removing a major contributor to radiated emissions, typically an issue for class-D amplifiers. The amplifier meets the FCC class B standard for radiated emissions without any LC filtering. It can accomplish this without any degradation to audio performance and while retaining high efficiency. THD+N is 0.19% at 1 kHz while supplying 5 W to an 8 Ohm load from a 12 V supply. Efficiency is 90% while providing 10 W under the same supply and load conditions. The new output stage occupies 1.8 mm2 per channel using the high voltage devices of a 0.25 \u00bfm BCD process."}
{"_id": "7848c7c6a782c6941bdd67556521aa97569366a4", "title": "Design Optimal PID Controller for Quad Rotor System", "text": "Quad rotor aerial vehicles are one of the most flexible and adaptable platforms for undertaking aerial research. Quad rotor in simplicity is rotorcraft that has four lift-generation propellers (four rotors), two rotors rotate in clockwise and the other two rotate anticlockwise, the speed and direction of Quad rotor can be controlled by varying the speed of these rotors. This paper describes the PID controller has been used for controlling attitude, Roll, Pitch and Yaw direction. In addition, optimal PID controller has been achieving by using particle swarm optimization (PSO), Bacterial Foraging optimization (BFO) and the BF-PSO optimization. Finally, the Quad rotor model has been simulating for several scenarios of testing."}
{"_id": "64c7f6717d962254721c204e60ea106c0ff4acda", "title": "Memristor Model Comparison", "text": "Since the 2008-dated discovery of memristor behavior at the nano-scale, Hewlett Packard is credited for, a large deal of efforts have been spent in the research community to derive a suitable model able to capture the nonlinear dynamics of the nano-scale structures. Despite a considerable number of models of different complexity have been proposed in the literature, there is an ongoing debate over which model should be universally adopted for the investigation of the unique opportunities memristors may offer in integrated circuit design. In order to shed some light into this passionate discussion, this paper compares some of the most noteworthy memristor models present in the literature. The strength of the Pickett?s model stands in its experiment-based development and in its ability to describe some physical mechanism at the origin of memristor dynamics. Since its parameter values depend on the excitation of the memristor and/or on the circuit employing the memristor, it may be assumed as a reference for comparison only in those scenarios for which its parameters were reported in the literature. In this work various noteworthy memristor models are fitted to the Pickett's model under one of such scenarios. This study shows how three models, Biolek's model, the Boundary Condition Memristor model and the Threshold Adaptive Memristor model, outperform the others in the replica of the dynamics observed in the Pickett's model. In the second part of this work the models are used in a couple of basic circuits to study the variance between the dynamical behaviors they give rise to. This analysis intends to make the circuit designers aware of the different behaviors which may occur in memristor-based circuits according to the memristor model under use."}
{"_id": "af359fbcfc6d58a741ac0d597cd20eb9a68dfa51", "title": "An Interval-Based Representation of Temporal Knowledge", "text": "This paper describes a method for maintaining the relationships between temporal intervals in a hierarchical manner using constraint propagation techniques. The representation includes a notion of the present moment (i.e., \"now\") , and allows one to represent intervals that may extend indefinitely into the past/future. This research was supported in part by the National Science Foundation under Grant Number IST-80-12418, and in part by the Office of Naval Research under Grant Number N00014-80-O0197."}
{"_id": "2e4e83ec31b43595ee7160a7bdb5c3a7dc4a1db2", "title": "Ladder Variational Autoencoders", "text": "Variational autoencoders are powerful models for unsupervised learning. However deep models with several layers of dependent stochastic variables are difficult to train which limits the improvements obtained using these highly expressive models. We propose a new inference model, the Ladder Variational Autoencoder, that recursively corrects the generative distribution by a data dependent approximate likelihood in a process resembling the recently proposed Ladder Network. We show that this model provides state of the art predictive log-likelihood and tighter log-likelihood lower bound compared to the purely bottom-up inference in layered Variational Autoencoders and other generative models. We provide a detailed analysis of the learned hierarchical latent representation and show that our new inference model is qualitatively different and utilizes a deeper more distributed hierarchy of latent variables. Finally, we observe that batch normalization and deterministic warm-up (gradually turning on the KL-term) are crucial for training variational models with many stochastic layers."}
{"_id": "bebdd553058ab50d0cb19a1f65d7f4daeb7cda37", "title": "A Multi-Theoretical Literature Review on Information Security Investments using the Resource-Based View and the Organizational Learning Theory", "text": "The protection of information technology (IT) has become and is predicted to remain a key economic challenge for organizations. While research on IT security investment is fast growing, it lacks a theoretical basis for structuring research, explaining economictechnological phenomena and guide future research. We address this shortcoming by suggesting a new theoretical model emerging from a multi-theoretical perspective adopting the Resource-Based View and the Organizational Learning Theory. The joint application of these theories allows to conceptualize in one theoretical model the organizational learning effects that occur when the protection of organizational resources through IT security countermeasures develops over time. We use this model of IT security investments to synthesize findings of a large body of literature and to derive research gaps. We also discuss managerial implications of (closing) these gaps by providing practical examples."}
{"_id": "f7a0d42044b26be8d509310ba20fd8d665943eba", "title": "Template Attacks", "text": "We present template attacks, the strongest form of side channel attack possible in an information theoretic sense. These attacks can break implementations and countermeasures whose security is dependent on the assumption that an adversary cannot obtain more than one or a limited number of side channel samples. They require that an adversary has access to an identical experimental device that he can program to his choosing. The success of these attacks in such constraining situations is due manner in which noise within each sample is handled. In contrast to previous approaches which viewed noise as a hindrance that had to be reduced or eliminated, our approach focuses on precisely modeling noise, and using this to fully extract information present in a single sample. We describe in detail how an implementation of RC4, not amenable to techniques such as SPA and DPA, can easily be broken using template attacks with a single sample. Other applications include attacks on certain DES implementations which use DPA\u2013resistant hardware and certain SSL accelerators which can be attacked by monitoring electromagnetic emanations from an RSA operation even from distances of fifteen feet."}
{"_id": "5a2b6c6a7b9cb12554f660610526b22da8e070a7", "title": "Global aetiology and epidemiology of type 2 diabetes mellitus and its complications", "text": "Globally, the number of people with diabetes mellitus has quadrupled in the past three decades, and diabetes mellitus is the ninth major cause of death. About 1 in 11 adults worldwide now have diabetes mellitus, 90% of whom have type 2 diabetes mellitus (T2DM). Asia is a major area of the rapidly emerging T2DM global epidemic, with China and India the top two epicentres. Although genetic predisposition partly determines individual susceptibility to T2DM, an unhealthy diet and a sedentary lifestyle are important drivers of the current global epidemic; early developmental factors (such as intrauterine exposures) also have a role in susceptibility to T2DM later in life. Many cases of T2DM could be prevented with lifestyle changes, including maintaining a healthy body weight, consuming a healthy diet, staying physically active, not smoking and drinking alcohol in moderation. Most patients with T2DM have at least one complication, and cardiovascular complications are the leading cause of morbidity and mortality in these patients. This Review provides an updated view of the global epidemiology of T2DM, as well as dietary, lifestyle and other risk factors for T2DM and its complications."}
{"_id": "1407b3363d9bd817b00e95190a95372d3cb3694a", "title": "Probabilistic Frame Induction", "text": "In natural-language discourse, related events tend to appear near each other to describe a larger scenario. Such structures can be formalized by the notion of a frame (a.k.a. template), which comprises a set of related events and prototypical participants and event transitions. Identifying frames is a prerequisite for information extraction and natural language generation, and is usually done manually. Methods for inducing frames have been proposed recently, but they typically use ad hoc procedures and are difficult to diagnose or extend. In this paper, we propose the first probabilistic approach to frame induction, which incorporates frames, events, participants as latent topics and learns those frame and event transitions that best explain the text. The number of frames is inferred by a novel application of a split-merge method from syntactic parsing. In end-to-end evaluations from text to induced frames and extracted facts, our method produced state-of-the-art results while substantially reducing engineering effort."}
{"_id": "1bf9a76c9d9838afc51983894b58790b14c2e3d3", "title": "An ambient assisted living framework for mobile environments", "text": "Ambient assisted living (AAL) delivers IT solutions that aim to facilitate and improve lives of the disabled, elderly, and chronically ill people. Mobility is a key issue for elderly people because their physical activity, in general, improves their quality of life and keep health status. Then, this paper presents an AAL framework for caregivers and elderly people that allow them to maintain an active lifestyle without limiting their mobility. This framework includes four AAL tools for mobility environments: i) a fall detection mobile application; ii) a biofeedback monitoring system trough wearable sensors; iii) an outdoor location service through a shoe equipped with Global Positioning System (GPS); and iv) a mobile application for caregivers that take care of several elders confined to a home environment. The proposal is evaluated and demonstrated and it is ready for use."}
{"_id": "048c1752c56a64ee9883ee960b385835474b7fc0", "title": "Higher Lower Bounds from the 3SUM Conjecture", "text": "The 3SUM conjecture has proven to be a valuable tool for proving conditional lower bounds on dynamic data structures and graph problems. This line of work was initiated by P\u01cetra\u015fcu (STOC 2010) who reduced 3SUM to an offline SetDisjointness problem. However, the reduction introduced by P\u01cetra\u015fcu suffers from several inefficiencies, making it difficult to obtain tight conditional lower bounds from the 3SUM conjecture. In this paper we address many of the deficiencies of P\u01cetra\u015fcu\u2019s framework. We give new and efficient reductions from 3SUM to offline SetDisjointness and offline SetIntersection (the reporting version of SetDisjointness) which leads to polynomially higher lower bounds on several problems. Using our reductions, we are able to show the essential optimality of several algorithms, assuming the 3SUM conjecture. \u2022 Chiba and Nishizeki\u2019s O(m\u03b1)-time algorithm (SICOMP 1985) for enumerating all triangles in a graph with arboricity/degeneracy \u03b1 is essentially optimal, for any \u03b1. \u2022 Bj\u00f8rklund, Pagh, Williams, and Zwick\u2019s algorithm (ICALP 2014) for listing t triangles is essentially optimal (assuming the matrix multiplication exponent is \u03c9 = 2). \u2022 Any static data structure for SetDisjointness that answers queries in constant time must spend \u03a9(N2\u2212o(1)) time in preprocessing, where N is the size of the set system. These statements were unattainable via P\u01cetra\u015fcu\u2019s reductions. We also introduce several new reductions from 3SUM to pattern matching problems and dynamic graph problems. Of particular interest are new conditional lower bounds for dynamic versions of Maximum Cardinality Matching, which introduce a new technique for obtaining amortized lower bounds."}
{"_id": "b90a3e7d80eeb1f320e2d548cf17e7d17718d1eb", "title": "Rigid-Body Dynamics with Friction and Impact", "text": "Rigid-body dynamics with unilateral contact is a good approximation for a wide range of everyday phenomena, from the operation of car brakes to walking to rock slides. It is also of vital importance for simulating robots, virtual reality, and realistic animation. However, correctly modeling rigid-body dynamics with friction is difficult due to a number of discontinuities in the behavior of rigid bodies and the discontinuities inherent in the Coulomb friction law. This is particularly crucial for handling situations with large coefficients of friction, which can result in paradoxical results known at least since Painlev\u00e9 [C. R. Acad. Sci. Paris, 121 (1895), pp. 112\u2013115]. This single example has been a counterexample and cause of controversy ever since, and only recently have there been rigorous mathematical results that show the existence of solutions to his example. The new mathematical developments in rigid-body dynamics have come from several sources: \u201csweeping processes\u201d and the measure differential inclusions of Moreau in the 1970s and 1980s, the variational inequality approaches of Duvaut and J.-L. Lions in the 1970s, and the use of complementarity problems to formulate frictional contact problems by L\u00f6tstedt in the early 1980s. However, it wasn\u2019t until much more recently that these tools were finally able to produce rigorous results about rigid-body dynamics with Coulomb friction and impulses."}
{"_id": "2375f6d71ce85a9ff457825e192c36045e994bdd", "title": "Multilayer feedforward networks are universal approximators", "text": ""}
{"_id": "60b7c281f3a677274b7126c67b7f4059c631b1ea", "title": "There exists a neural network that does not make avoidable mistakes", "text": "The authors show that a multiple-input, single-output, single-hidden-layer feedforward network with (known) hardwired connections from input to hidden layer, monotone squashing at the hidden layer and no squashing at the output embeds as a special case a so-called Fourier network, which yields a Fourier series approximation properties of Fourier series representations. In particular, approximation to any desired accuracy of any square integrable function can be achieved by such a network, using sufficiently many hidden units. In this sense, such networks do not make avoidable mistakes.<>"}
{"_id": "6bfa668b84ae5cd7e19dbda5d78688ee9dc4b25c", "title": "A massively parallel architecture for a self-organizing neural pattern recognition machine", "text": "A neural network architecture for the learning of recognition categories is derived. Real-time network dynamics are completely characterized through mathematical analysis and computer simulations. The architecture self-organizes and self-stabilizes its recognition codes in response to arbitrary orderings of arbitrarily many and arbitrarily complex binary input patterns. Top-down attentional and matching mechanisms are critical in self-stabilizing the code learning process. The architecture embodies a parallel search scheme which updates itself adaptively as the learning process unfolds. After learning self-stabilizes, the search process is automatically disengaged. Thereafter input patterns directly access their recognition codes without any search. Thus recognition time does not grow as a function of code complexity. A novel input pattern can directly access a category if it shares invariant properties with the set of familiar exemplars of that category. These invariant properties emerge in the form of learned critical feature patterns, or prototypes. The architecture possesses a context-sensitive self-scaling property which enables its emergent critical feature patterns to form. They detect and remember statistically predictive configurations of featural elements which are derived ~ ..from the set of all input patterns that are ever experienced. Four types of attentional"}
{"_id": "ee7f0bc85b339d781c2e0c7e6db8e339b6b9fec2", "title": "Universal approximation using feedforward networks with non-sigmoid hidden layer activation functions", "text": "K.M. Hornik, M. Stinchcombe, and H. White (Univ. of California at San Diego, Dept. of Economics Discussion Paper, June 1988; to appear in Neural Networks) showed that multilayer feedforward networks with as few as one hidden layer, no squashing at the output layer, and arbitrary sigmoid activation function at the hidden layer are universal approximators: they are capable of arbitrarily accurate approximation to arbitrary mappings, provided sufficiently many hidden units are available. The present authors obtain identical conclusions but do not require the hidden-unit activation to be sigmoid. Instead, it can be a rather general nonlinear function. Thus, multilayer feedforward networks possess universal approximation capabilities by virtue of the presence of intermediate layers with sufficiently many parallel processors; the properties of the intermediate-layer activation function are not so crucial. In particular, sigmoid activation functions are not necessary for universal approximation.<>"}
{"_id": "384dc2b005c5d8605c62d840cc851751c47c4055", "title": "Control methodologies in networked control systems", "text": "The use of a data network in a control loop has gained increasing attentions in recent years due to its cost effective and flexible applications. One of the major challenges in this so-called networked control system (NCS) is the network-induced delay effect in the control loop. Network delays degrade the NCS control performance and destabilize the system. A significant emphasis has been on developing control methodologies to handle the network delay effect in NCS. This survey paper presents recent NCS control methodologies. The overview on NCS structures and description of network delays including characteristics and effects are also covered. r 2003 Published by Elsevier Ltd."}
{"_id": "0b33d8210d530fad72ce20bd6565ceaed792cbc0", "title": "In Defense of the Internet: The Relationship between Internet Communication and Depression, Loneliness, Self-Esteem, and Perceived Social Support", "text": "As more people connect to the Internet, researchers are beginning to examine the effects of Internet use on users' psychological health. Due in part to a study released by Kraut and colleagues in 1998, which concluded that Internet use is positively correlated with depression, loneliness, and stress, public opinion about the Internet has been decidedly negative. In contrast, the present study was designed to test the hypothesis that Internet usage can affect users beneficially. Participants engaged in five chat sessions with an anonymous partner. At three different intervals they were administered scales measuring depression, loneliness, self-esteem, and social support. Changes in their scores were tracked over time. Internet use was found to decrease loneliness and depression significantly, while perceived social support and self-esteem increased significantly."}
{"_id": "0f9065db0193be42173be5546a3dfb839f694521", "title": "Distributed Representations for Compositional Semantics", "text": "The mathematical representation of semantics is a key issue for Natural Language Processing (NLP). A lot of research has been devoted to finding ways of representing the semantics of individual words in vector spaces. Distributional approaches\u2014meaning distributed representations that exploit co-occurrence statistics of large corpora\u2014have proved popular and successful across a number of tasks. However, natural language usually comes in structures beyond the word level, with meaning arising not only from the individual words but also the structure they are contained in at the phrasal or sentential level. Modelling the compositional process by which the meaning of an utterance arises from the meaning of its parts is an equally fundamental task of NLP. This dissertation explores methods for learning distributed semantic representations and models for composing these into representations for larger linguistic units. Our underlying hypothesis is that neural models are a suitable vehicle for learning semantically rich representations and that such representations in turn are suitable vehicles for solving important tasks in natural language processing. The contribution of this thesis is a thorough evaluation of our hypothesis, as part of which we introduce several new approaches to representation learning and compositional semantics, as well as multiple state-of-the-art models which apply distributed semantic representations to various tasks in NLP. Part I focuses on distributed representations and their application. In particular, in Chapter 3 we explore the semantic usefulness of distributed representations by evaluating their use in the task of semantic frame identification. Part II describes the transition from semantic representations for words to compositional semantics. Chapter 4 covers the relevant literature in this field. Following this, Chapter 5 investigates the role of syntax in semantic composition. For this, we discuss a series of neural network-based models and learning mechanisms, and demonstrate how syntactic information can be incorporated into semantic composition. This study allows us to establish the effectiveness of syntactic information as a guiding parameter for semantic composition, and answer questions about the link between syntax and semantics. Following these discoveries regarding the role of syntax, Chapter 6 investigates whether it is possible to further reduce the impact of monolingual surface forms and syntax when attempting to capture semantics. Asking how machines can best approximate human signals of semantics, we propose multilingual information as one method for grounding semantics, and develop an extension to the distributional hypothesis for multilingual representations. Finally, Part III summarizes our findings and discusses future work."}
{"_id": "854c6d52fe2bf888ab7cb33ed8115df8c422d552", "title": "Social-driven Internet of Connected Objects", "text": "Internet evolution has been recently related with some aspect of user empowerment, mostly in terms of content distribution, and this has been ultimately accelerated by the fast-paced introduction and expansion of wireless technologies. Hence, the Internet should start to be seen as a communications infrastructure able to support the integration of a myriad of embedded and personal wireless objects. This way a future Internet will support the interaction between users\u2019 social, physical and virtual sphere. This position paper aims to raise some discussion about the technology required to ensure an efficient interaction between the physical, social and virtual worlds by extending the Internet by means of interconnected objects. Namely, it is argued that an efficient interaction between the physical, social and virtual worlds requires the development of a data-centric architecture based on IP-driven opportunisitc networking able to make useful data available to people when and where they really need it, augmenting their social and environmental awareness."}
{"_id": "8f89f992fdcc37e302b9c5b9b25c06d7f1086cf9", "title": "RHex: A Biologically Inspired Hexapod Runner", "text": "RHex is an untethered, compliant leg hexapod robot that travels at better than one body length per second over terrain few other robots can negotiate at all. Inspired by biomechanics insights into arthropod locomotion, RHex uses a clock excited alternating tripod gait to walk and run in a highly maneuverable and robust manner. We present empirical data establishing that RHex exhibits a dynamical (\u201cbouncing\u201d) gait\u2014its mass center moves in a manner \u2217This work was supported in part by DARPA/SPAWAR under contract N66001-00-C-8026. Portions of the material reported here were first presented in a conference paper appearing in the collection (Altendorfer et al., 2000). 208 Altendorfer et al. well approximated by trajectories from a Spring Loaded Inverted Pendulum (SLIP)\u2014characteristic of a large and diverse group of running animals, when its central clock, body mass, and leg stiffnesses are appropriately tuned. The SLIP template can function as a useful control guide in developing more complex autonomous locomotion behaviors such as registration via visual servoing, local exploration via visual odometry, obstacle avoidance, and, eventually, global mapping and localization."}
{"_id": "c0c0b8558b17aa20debc4611275a4c69edd1e2a7", "title": "Facial Expression Recognition via a Boosted Deep Belief Network", "text": "A training process for facial expression recognition is usually performed sequentially in three individual stages: feature learning, feature selection, and classifier construction. Extensive empirical studies are needed to search for an optimal combination of feature representation, feature set, and classifier to achieve good recognition performance. This paper presents a novel Boosted Deep Belief Network (BDBN) for performing the three training stages iteratively in a unified loopy framework. Through the proposed BDBN framework, a set of features, which is effective to characterize expression-related facial appearance/shape changes, can be learned and selected to form a boosted strong classifier in a statistical way. As learning continues, the strong classifier is improved iteratively and more importantly, the discriminative capabilities of selected features are strengthened as well according to their relative importance to the strong classifier via a joint fine-tune process in the BDBN framework. Extensive experiments on two public databases showed that the BDBN framework yielded dramatic improvements in facial expression analysis."}
{"_id": "91c7fc5b47c6767632ba030167bb59d9d080fbed", "title": "Following High-level Navigation Instructions on a Simulated Quadcopter with Imitation Learning", "text": "We introduce a method for following high-level navigation instructions by mapping directly from images, instructions and pose estimates to continuous low-level velocity commands for real-time control. The Grounded Semantic Mapping Network (GSMN) is a fully-differentiable neural network architecture that builds an explicit semantic map in the world reference frame by incorporating a pinhole camera projection model within the network. The information stored in the map is learned from experience, while the local-to-world transformation is computed explicitly. We train the model using DAGGERFM, a modified variant of DAGGER that trades tabular convergence guarantees for improved training speed and memory use. We test GSMN in virtual environments on a realistic quadcopter simulator and show that incorporating an explicit mapping and grounding modules allows GSMN to outperform strong neural baselines and almost reach an expert policy performance. Finally, we analyze the learned map representations and show that using an explicit map leads to an interpretable instruction-following model."}
{"_id": "0b98aa34c67031ae065c58b5ed8b269391db8368", "title": "Stationary common spatial patterns for brain-computer interfacing.", "text": "Classifying motion intentions in brain-computer interfacing (BCI) is a demanding task as the recorded EEG signal is not only noisy and has limited spatial resolution but it is also intrinsically non-stationary. The non-stationarities in the signal may come from many different sources, for instance, electrode artefacts, muscular activity or changes of task involvement, and often deteriorate classification performance. This is mainly because features extracted by standard methods like common spatial patterns (CSP) are not invariant to variations of the signal properties, thus should also change over time. Although many extensions of CSP were proposed to, for example, reduce the sensitivity to noise or incorporate information from other subjects, none of them tackles the non-stationarity problem directly. In this paper, we propose a method which regularizes CSP towards stationary subspaces (sCSP) and show that this increases classification accuracy, especially for subjects who are hardly able to control a BCI. We compare our method with the state-of-the-art approaches on different datasets, show competitive results and analyse the reasons for the improvement."}
{"_id": "82a3311dc057343216f82efa09fd8c3df1ff9e51", "title": "Compact GaN MMIC T/R module front-end for X-band pulsed radar", "text": "An X-band Single-Chip monolithic microwave integrated circuit (MMIC) has been developed by using a European GaN HEMT technology. The very compact MMIC die occupying only an area of 3.0 mm \u00d7 3.0 mm, integrates a high power amplifier (HPA), a low-noise amplifier (LNA) and a robust asymmetrical absorptive/reflective SPDT switch. At the antenna RF pad in the frequency range from 8.6 to 11.2 GHz, nearly 8 W of output power and 22 dB of linear gain were measured when operated in transmit mode. When operated in receive mode, a noise figure of 2.5 dB with a gain of 15 dB were measured at the Rx RF pad in the same frequency range."}
{"_id": "2f0ed0d45537853b04e711a9cfee0c294205acd3", "title": "Augmented reality in education: a meta-review and cross-media analysis", "text": "Augmented reality (AR) is an educational medium increasingly accessible to young users such as elementary school and high school students. Although previous research has shown that AR systems have the potential to improve student learning, the educational community remains unclear regarding the educational usefulness of AR and regarding contexts in which this technology is more effective than other educational mediums. This paper addresses these topics by analyzing 26 publications that have previously compared student learning in AR versus non-AR applications. It identifies a list of positive and negative impacts of AR experiences on student learning and highlights factors that are potentially underlying these effects. This set of factors is argued to cause differences in educational effectiveness between AR and other media. Furthermore, based on the analysis, the paper presents a heuristic questionnaire generated for judging the educational potential of AR experiences."}
{"_id": "273ab36c41cc5175c9bdde2585a9f4d17e35c683", "title": "Nonparametric Canonical Correlation Analysis", "text": "Canonical correlation analysis (CCA) is a classical representation learning technique for finding correlated variables in multi-view data. Several nonlinear extensions of the original linear CCA have been proposed, including kernel and deep neural network methods. These approaches seek maximally correlated projections among families of functions, which the user specifies (by choosing a kernel or neural network structure), and are computationally demanding. Interestingly, the theory of nonlinear CCA, without functional restrictions, had been studied in the population setting by Lancaster already in the 1950s, but these results have not inspired practical algorithms. We revisit Lancaster\u2019s theory to devise a practical algorithm for nonparametric CCA (NCCA). Specifically, we show that the solution can be expressed in terms of the singular value decomposition of a certain operator associated with the joint density of the views. Thus, by estimating the population density from data, NCCA reduces to solving an eigenvalue system, superficially like kernel CCA but, importantly, without requiring the inversion of any kernel matrix. We also derive a partially linear CCA (PLCCA) variant in which one of the views undergoes a linear projection while the other is nonparametric. Using a kernel density estimate based on a small number of nearest neighbors, our NCCA and PLCCA algorithms are memory-efficient, often run much faster, and perform better than kernel CCA and comparable to deep CCA. Proceedings of the 33 rd International Conference on Machine Learning, New York, NY, USA, 2016. JMLR: W&CP volume 48. Copyright 2016 by the author(s)."}
{"_id": "4c97f3c66236649f1723e210104833278fb7f84e", "title": "Language Independent Single Document Image Super-Resolution using CNN for improved recognition", "text": "Recognition of document images have important applications in restoring old and classical texts. The problem involves quality improvement before passing it to a properly trained OCR to get accurate recognition of the text. The image enhancement and quality improvement constitute important steps as subsequent recognition depends upon the quality of the input image. There are scenarios when high resolution images are not available and our experiments show that the OCR accuracy reduces significantly with decrease in the spatial resolution of document images. Thus the only option is to improve the resolution of such document images. The goal is to construct a high resolution image, given a single low resolution binary image, which constitutes the problem of single image super-resolution. Most of the previous work in super-resolution deal with natural images which have more information-content than the document images. Here, we use Convolution Neural Network to learn the mapping between low and the corresponding high resolution images. We experiment with different number of layers, parameter settings and non-linear functions to build a fast end-to-end framework for document image super-resolution. Our proposed model shows a very good PSNR improvement of about 4 dB on 75 dpi Tamil images, resulting in a 3% improvement of word level accuracy by the OCR. It takes less time than the recent sparse based natural image super-resolution technique, making it useful for real-time document recognition applications."}
{"_id": "3e6d1864887c440b7d78b4745110645803577098", "title": "Dual-Band Omnidirectional Circularly Polarized Antenna", "text": "A dual-band omnidirectional circularly polarized antenna is proposed. The antenna comprises back-to-back microstrip patches fed by a coplanar waveguide. A very low frequency ratio of 1.182 has been achieved, which can be easily tuned by adjusting four lumped capacitors incorporated into the antenna. An analysis of the omnidirectional circular polarization mechanism as well the dual band operation is provided and confirmed by numerical and experimental data. Key parameters to tune the resonant frequencies and the axial ratio have been identified. The prototype antenna provides omnidirectional circular polarization in one plane with cross polar isolation better than 12 dB for both frequency bands."}
{"_id": "fb9f09a906ad75395020e9bae5b51449fe58d49f", "title": "Beyond Bitcoin - Part I: A critical look at blockchain-based systems", "text": "After more than six years from the launch of Bitcoin, it has become evident that the decentralized transaction ledger functionality implemented through the blockchain technology can be used not only for cryptocurrencies, but to register, confirm and transfer any kind of contract and property. In this work we analyze the most relevant functionalities and known issues of this technology, with the intent of pointing out the possible behaviours that are not as efficient as they should when thinking with a broader outlook. Our analysis would be the starting point for the introduction of a new approach to blockchain creation and management, which will be the subject of a forthcoming paper."}
{"_id": "65bfd757ebd1712b2ed0fa3a0529b8aae1427f33", "title": "Secure Medical Data Transmission Model for IoT-Based Healthcare Systems", "text": "Due to the significant advancement of the Internet of Things (IoT) in the healthcare sector, the security, and the integrity of the medical data became big challenges for healthcare services applications. This paper proposes a hybrid security model for securing the diagnostic text data in medical images. The proposed model is developed through integrating either 2-D discrete wavelet transform 1 level (2D-DWT-1L) or 2-D discrete wavelet transform 2 level (2D-DWT-2L) steganography technique with a proposed hybrid encryption scheme. The proposed hybrid encryption schema is built using a combination of Advanced Encryption Standard, and Rivest, Shamir, and Adleman algorithms. The proposed model starts by encrypting the secret data; then it hides the result in a cover image using 2D-DWT-1L or 2D-DWT-2L. Both color and gray-scale images are used as cover images to conceal different text sizes. The performance of the proposed system was evaluated based on six statistical parameters; the peak signal-to-noise ratio (PSNR), mean square error (MSE), bit error rate (BER), structural similarity (SSIM), structural content (SC), and correlation. The PSNR values were relatively varied from 50.59 to 57.44 in case of color images and from 50.52 to 56.09 with the gray scale images. The MSE values varied from 0.12 to 0.57 for the color images and from 0.14 to 0.57 for the gray scale images. The BER values were zero for both images, while SSIM, SC, and correlation values were ones for both images. Compared with the state-of-the-art methods, the proposed model proved its ability to hide the confidential patient\u2019s data into a transmitted cover image with high imperceptibility, capacity, and minimal deterioration in the received stego-image."}
{"_id": "3632d60d1bb84cab4ba624bf9726712f5c4216c8", "title": "Optimal stopping of Markov processes: Hilbert space theory, approximation algorithms, and an application to pricing high-dimensional financial derivatives", "text": "We develop a theory characterizing optimal stopping times for discrete-time ergodic Markov processes with discounted rewards. The theory differs from prior work by its view of per-stage and terminal reward functions as elements of a certain Hilbert space. In addition to a streamlined analysis establishing existence and uniqueness of a solution to Bellman's equation, this approach provides an elegant framework for the study of approximate solutions. In particular, we propose a stochastic approximation algorithm that tunes weights of a linear combination of basis functions in order to approximate a value function. We prove that this algorithm converges (almost surely) and that the limit of convergence has some desirable properties. We discuss how variations on this line of analysis can be used to develop similar results for other classes of optimal stopping problems, including those involving independent increment processes, finite horizons, and two-player zero-sum games. We illustrate the approximation method with a computational case study involving the pricing of a path-dependent financial derivative security that gives rise to an optimal stopping problem with a one-hundred-dimensional state space."}
{"_id": "06895dd27e67a039c24fe06adf6391bcb245d7af", "title": "Image and video abstraction by multi-scale anisotropic Kuwahara filtering", "text": "The anisotropic Kuwahara filter is an edge-preserving filter that is especially useful for creating stylized abstractions from images or videos. It is based on a generalization of the Kuwahara filter that is adapted to the local structure of image features. In this work, two limitations of the anisotropic Kuwahara filter are addressed. First, it is shown that by adding thresholding to the weighting term computation of the sectors, artifacts are avoided and smooth results in noise-corrupted regions are achieved. Second, a multi-scale computation scheme is proposed that simultaneously propagates local orientation estimates and filtering results up a low-pass filtered pyramid. This allows for a strong abstraction effect and avoids artifacts in large low-contrast regions. The propagation is controlled by the local variances and anisotropies that are derived during the computation without extra overhead, resulting in a highly efficient scheme that is particularly suitable for real-time processing on a GPU."}
{"_id": "48caac2f65bce47f6d27400ae4f60d8395cec2f3", "title": "Stochastic Gradient Boosting", "text": "Gradient boosting constructs additive regression models by sequentially tting a simple parameterized function (base learner) to current \\pseudo\"{residuals by least{squares at each iteration. The pseudo{residuals are the gradient of the loss functional being minimized, with respect to the model values at each training data point, evaluated at the current step. It is shown that both the approximation accuracy and execution speed of gradient boosting can be substantially improved by incorporating randomization into the procedure. Speci cally, at each iteration a subsample of the training data is drawn at random (without replacement) from the full training data set. This randomly selected subsample is then used in place of the full sample to t the base learner and compute the model update for the current iteration. This randomized approach also increases robustness against overcapacity of the base learner. 1 Gradient Boosting In the function estimation problem one has a system consisting of a random \\output\" or \\response\" variable y and a set of random \\input\" or \\explanatory\" variables x = fx1; ; xng. Given a \\training\" sample fyi;xig N 1 of known (y;x){values, the goal is to nd a function F (x) that maps x to y, such that over the joint distribution of all (y;x){values, the expected value of some speci ed loss function (y; F (x)) is minimized F (x) = argmin F (x) Ey;x (y; F (x)): (1) Boosting approximates F (x) by an \\additive\" expansion of the form"}
{"_id": "4fd01b9fad9502df657c578274818bc5f3cbe93f", "title": "Take a Look Around: Using Street View and Satellite Images to Estimate House Prices", "text": "When an individual purchases a home, they simultaneously purchase its structural features, its accessibility to work, and the neighborhood amenities. Some amenities, such as air quality, are measurable whilst others, such as the prestige or the visual impression of a neighborhood, are difficult to quantify. Despite the well-known impacts intangible housing features have on house prices, limited attention has been given to systematically quantifying these difficult to measure amenities. Two issues have lead to this neglect. Not only do few quantitative methods exist that can measure the urban environment, but that the collection of such data is both costly and subjective. We show that street image and satellite image data can capture these urban qualities and improve the estimation of house prices. We propose a pipeline that uses a deep neural network model to automatically extract visual features from images to estimate house prices in London, UK. We make use of traditional housing features such as age, size and accessibility as well as visual features from Google Street View images and Bing aerial images in estimating the house price model. We find encouraging results where learning to characterize the urban quality of a neighborhood improves house price prediction, even when generalizing to previously unseen London boroughs. We explore the use of non-linear vs. linear methods to fuse these cues with conventional models of house pricing, and show how the interpretability of linear models allows us to directly extract the visual desirability of neighborhoods as proxy variables that are both of interest in their own right, and could be used as inputs to other econometric methods. This is particularly valuable as once the network has been trained with the training data, it can be applied elsewhere, allowing us to generate vivid dense maps of the desirability of London streets."}
{"_id": "4a2690b2f504e192b77615fdd7b55be8527d1486", "title": "CARDIAC: An Intelligent Conversational Assistant for Chronic Heart Failure Patient Heath Monitoring", "text": "We describe CARDIAC, a prototype for an intelligent conversational assistant that provides health monitoring for chronic heart failure patients. CARDIAC supports user initiative through its ability to understand natural language and connect it to intention recognition. The natural language interface allows patients to interact with CARDIAC without special training. The system is designed to understand information that arises spontaneously in the course of the interview. If the patient gives more detail than necessary for answering a question, the system updates the user model accordingly. CARDIAC is a first step towards developing cost-effective, customizable, automated in-home conversational assistants that help patients manage their care and monitor their health using natural language."}
{"_id": "a05ad7eb010dcda96f4b76c56cdf1dce14ea49ed", "title": "A Queueing Network Model of Patient Flow in an Accident and Emergency Department", "text": "In many complex processing systems with limited resources, fast response times are demanded, but are seldom delivered. This is an especially serious problem in healthcare systems providing critical patient care. In this paper, we develop a multiclass Markovian queueing network model of patient flow in the Accident and Emergency department of a major London hospital. Using real patient timing data to help parameterise the model, we solve for moments and probability density functions of patient response time using discrete event simulation. We experiment with different patient handling priority schemes and compare the resulting response time moments and densities with real data."}
{"_id": "20755e1889af2f39bc0ed6eeafe8469aeff3073f", "title": "A near-optimal algorithm for computing the entropy of a stream", "text": "We describe a simple algorithm for approximating the empirical entropy of a stream of m values in a single pass, using O(\u03b5-2 log(\u0394-1) log m) words of space. Our algorithm is based upon a novel extension of a method introduced by Alon, Matias, and Szegedy [1]. We show a space lower bound of \u03a9(\u03b5-2 / log(\u03b5-1)), meaning that our algorithm is near-optimal in terms of its dependency on \u03b5. This improves over previous work on this problem [8, 13, 17, 5]. We show that generalizing to kth order entropy requires close to linear space for all k \u2265 1, and give additive approximations using our algorithm. Lastly, we show how to compute a multiplicative approximation to the entropy of a random walk on an undirected graph."}
{"_id": "53ea5aa438e041820d2fd9413d2b3aaf87a95212", "title": "Decoding as Continuous Optimization in Neural Machine Translation", "text": "In this work, we propose a novel decoding approach for neural machine translation (NMT) based on continuous optimisation. The resulting optimisation problem can then be tackled using a whole range of continuous optimisation algorithms which have been developed and used in the literature mainly for training. Our approach is general and can be applied to other sequence-to-sequence neural models as well. We make use of this powerful decoding approach to intersect an underlying NMT with a language model, to intersect left-to-right and right-to-left NMT models, and to decode with soft constraints involving coverage and fertility of the source sentence words. The experimental results show the promise of the proposed framework."}
{"_id": "908cb227cf6e3138157b47d057db0c700be4d7f2", "title": "Integrated virtual commissioning an essential activity in the automation engineering process: From virtual commissioning to simulation supported engineering", "text": "Production plants within manufacturing and process industries are becoming more and more complex systems. For a safe and demand sufficient production the automation system of a production plant is mission critical. Therefore the correct functioning and engineering of the automation system is essential. Nowadays the use of a virtual commissioning step before deploying the automation software on the real production plant is used more often. Within the virtual commissioning the automation software is tested against a virtual plant model, sufficient to check out the correct functioning of the automation system. Usually virtual commissioning is used as a step at the end of the automation engineering, as proposed by VDI 4499. Within this paper an integrated virtual commissioning is proposed, where the automation software is continuously tested against a virtual plant model during the overall engineering phase. The virtual plant is growing together with the automation software and thus enables simulation supported automation engineering. Benefits of the proposed workflow are that errors can be detected immediately after the implementation and the use of the virtual plant model could be extended to various stages of the engineering workflow."}
{"_id": "42f49ab31c43f28ca5ee68a0cb19406e77321cc3", "title": "Agile Flight Control Techniques for a Fixed-Wing Aircraft by Frantisek Michal Sobolic", "text": "As unmanned aerial vehicles (UAVs) become more involved in challenging mission objectives, the need for agility controlled flight becomes more of a necessity. The ability to navigate through constrained environments as well as quickly maneuver to each mission target is essential. Currently, individual vehicles are developed with a particular mission objective, whether it be persistent surveillance or fly-by reconnaissance. Fixed-wing vehicles with a high thrust-to-weight ratio are capable of performing maneuvers such as take-off or perch style landing and switch between hover and conventional flight modes. Agile flight controllers enable a single vehicle to achieve multiple mission objectives. By utilizing the knowledge of the flight dynamics through all flight regimes, nonlinear controllers can be developed that control the aircraft in a single design. This thesis develops a full six-degree-of-freedom model for a fixed-wing propellerdriven aircraft along with methods of control through nonconventional flight regimes. In particular, these controllers focus on transitioning into and out of hover to level flight modes. This maneuver poses hardships for conventional linear control architectures because these flights involve regions of the post-stall regime, which is highly nonlinear due to separation of flow over the lifting surfaces. Using Lyapunov backstepping control stability theory as well as quaternion-based control methods, control strategies are developed that stabilize the aircraft through these flight regimes without the need to switch control schemes. The effectiveness of each control strategy is demonstrated in both simulation and flight experiments. Thesis Supervisor: Jonathan P. How Title: Professor"}
{"_id": "1319bf6218cbcd85ac7512991447ecd9d776577d", "title": "Task constraints in visual working memory", "text": "This paper examines the nature of visual representations that direct ongoing performance in sensorimotor tasks. Performance of such natural tasks requires relating visual information from different gaze positions. To explore this we used the technique of making task relevant display changes during saccadic eye movements. Subjects copied a pattern of colored blocks on a computer monitor, using the mouse to drag the blocks across the screen. Eye position was monitored using a dual-purkinje eye tracker, and the color of blocks in the pattern was changed at different points in task performance. When the target of the saccade changed color during the saccade, the duration of fixations on the model pattern increased, depending on the point in the task that the change was made. Thus different fixations on the same visual stimulus served a different purpose. The results also indicated that the visual information that is retained across successive fixations depends on moment by moment task demands. This is consistent with previous suggestions that visual representations are limited and task dependent. Changes in blocks in addition to the saccade target led to greater increases in fixation duration. This indicated that some global aspect of the pattern was retained across different fixations. Fixation durations revealed effects of the display changes that were not revealed in perceptual report. This can be understood by distinguishing between processes that operate at different levels of description and different time scales. Our conscious experience of the world may reflect events over a longer time scale than those underlying the substructure of the perceptuo-motor machinery."}
{"_id": "b68b04cb44f2d0a9762715fabb4005756926374f", "title": "The role of visual attention in saccadic eye movements.", "text": "The relationship between saccadic eye movements and covert orienting or visual spatial attention was investigated in two experiments. In the first experiment, subjects were required to make a saccade to a specified location while also detecting a visual target presented just prior to the eye movement. Detection accuracy was highest when the location of the target coincided with the location of the saccade, suggesting that subjects use spatial attention in the programming and/or execution of saccadic eye movements. In the second experiment, subjects were explicitly directed to attend to a particular location and to make a saccade to the same location or to a different one. Superior target detection occurred at the saccade location regardless of attention instructions. This finding shows that subjects cannot move their eyes to one location and attend to a different one. The result of these experiments suggest that visuospatial attention is an important mechanism in generating voluntary saccadic eye movements."}
{"_id": "b73f2d7b58bfc555d8037b3fdb673c4cec1aecf0", "title": "Modeling attention to salient proto-objects", "text": "Selective visual attention is believed to be responsible for serializing visual information for recognizing one object at a time in a complex scene. But how can we attend to objects before they are recognized? In coherence theory of visual cognition, so-called proto-objects form volatile units of visual information that can be accessed by selective attention and subsequently validated as actual objects. We propose a biologically plausible model of forming and attending to proto-objects in natural scenes. We demonstrate that the suggested model can enable a model of object recognition in cortex to expand from recognizing individual objects in isolation to sequentially recognizing all objects in a more complex scene."}
{"_id": "03406ec0118137ca1ab734a8b6b3678a35a43415", "title": "A Morphable Model for the Synthesis of 3D Faces", "text": "In this paper, a new technique for modeling textured 3D faces is introduced. 3D faces can either be generated automatically from one or more photographs, or modeled directly through an intuitive user interface. Users are assisted in two key problems of computer aided face modeling. First, new face images or new 3D face models can be registered automatically by computing dense one-to-one correspondence to an internal face model. Second, the approach regulates the naturalness of modeled faces avoiding faces with an \u201cunlikely\u201d appearance. Starting from an example set of 3D face models, we derive a morphable face model by transforming the shape and texture of the examples into a vector space representation. New faces and expressions can be modeled by forming linear combinations of the prototypes. Shape and texture constraints derived from the statistics of our example faces are used to guide manual modeling or automated matching algorithms. We show 3D face reconstructions from single images and their applications for photo-realistic image manipulations. We also demonstrate face manipulations according to complex parameters such as gender, fullness of a face or its distinctiveness."}
{"_id": "26e8f3cc46b710a013358bf0289df4de033446ba", "title": "Maximum likelihood from incomplete data via the EM - algorithm plus discussions on the paper", "text": "A broadly applicable algorithm for computing maximum likelihood estimates from incomplete data is presented at various levels of generality. Theory showing the monotone behaviour of the likelihood and convergence of the algorithm is derived. Many examples are sketched, including missing value situations, applications to grouped, censored or truncated data, finite mixture models, variance component estimation, hyperparameter estimation, iteratively reweighted least squares and factor analysis."}
{"_id": "6b3d644ac7cd884961df181d3f4cab2452a6a217", "title": "Simple to complex cross-modal learning to rank", "text": "The heterogeneity-gap between different modalities brings a significant challenge to multimedia information retrieval. Some studies formalize the cross-modal retrieval tasks as a ranking problem and learn a shared multi-modal embedding space to measure the cross-modality similarity. However, previous methods often establish the shared embedding space based on linear mapping functions which might not be sophisticated enough to reveal more complicated inter-modal correspondences. Additionally, current studies assume that the rankings are of equal importance, and thus all rankings are used simultaneously, or a small number of rankings are selected randomly to train the embedding space at each iteration. Such strategies, however, always suffer from outliers as well as reduced generalization capability due to their lack of insightful understanding of procedure of human cognition. In this paper, we involve the self-paced learning theory with diversity into the cross-modal learning to rank and learn an optimal multi-modal embedding space based on non-linear mapping functions. This strategy enhances the model\u2019s robustness to outliers and achieves better generalization via training the model gradually from easy rankings by diverse queries to more complex ones. An efficient alternative algorithm is exploited to solve the proposed challenging problem with fast convergence in practice. Extensive experimental results on several benchmark datasets indicate that the proposed method achieves significant improvements over the stateof-the-arts in this literature. \u00a9 2017 Elsevier Inc. All rights reserved."}
{"_id": "394dcd9679bd27023ea1d6eb0f62ef13353c347e", "title": "The Agile Requirements Refinery: Applying SCRUM Principles to Software Product Management", "text": "Although agile software development methods such as SCRUM and DSDM are gaining popularity, the consequences of applying agile principles to software product management have received little attention until now. In this paper, this gap is filled by the introduction of a method for the application of SCRUM principles to software product management. For this purpose, the 'agile requirements refinery' is presented, an extension to the SCRUM process that enables product managers to cope with large requirements in an agile development environment. A real-life case study is presented to illustrate how agile methods can be applied to software product management. The experiences of the case study company are provided as a set of lessons learned that will help others to apply agile principles to their software product management process."}
{"_id": "7c135538e31fba64b798d3dd618a2ba01d11ca57", "title": "Determinants of Successful Knowledge Management Programs", "text": "The main objective of this paper is to investigate and identify the main determinants of successful knowledge management (KM) programs. We draw upon the institutional theory and the theory of technology assimilation to develop an integrative model of KM success that clarifies the role of information technology (IT) in relation to other important KM infrastructural capabilities and to KM process capabilities. We argue that the role of IT cannot be studied in isolation and that the effect of IT on KM success is fully mediated by KM process capabilities. The research model is tested with a survey study involving 191 KM practitioners. The empirical results provided strong support for the model. In addition to its theoretical contributions, this study also presents important practical implications through the identification of specific infrastructural capabilities leading to KM success."}
{"_id": "1e50a3f7edd649a11acc22fef4c8b993f2ceba74", "title": "Potential of Estimating Soil Moisture Under Vegetation Cover by Means of PolSAR", "text": "In this paper, the potential of using polarimetric SAR (PolSAR) acquisitions for the estimation of volumetric soil moisture under agricultural vegetation is investigated. Soil-moisture estimation by means of SAR is a topic that is intensively investigated but yet not solved satisfactorily. The key problem is the presence of vegetation cover which biases soil-moisture estimates. In this paper, we discuss the problem of soil-moisture estimation in the presence of agricultural vegetation by means of L-band PolSAR images. SAR polarimetry allows the decomposition of the scattering signature into canonical scattering components and their quantification. We discuss simple canonical models for surface, dihedral, and vegetation scattering and use them to model and interpret scattering processes. The performance and modifications of the individual scattering components are discussed. The obtained surface and dihedral components are then used to retrieve surface soil moisture. The investigations cover, for the first time, the whole vegetation-growing period for three crop types using SAR data and ground measurements acquired in the frame of the AgriSAR campaign."}
{"_id": "108200780ff50d766c1a13e2a5095d59b5a5421f", "title": "Can Twitter be used to predict county excessive alcohol consumption rates?", "text": "OBJECTIVES\nThe current study analyzes a large set of Twitter data from 1,384 US counties to determine whether excessive alcohol consumption rates can be predicted by the words being posted from each county.\n\n\nMETHODS\nData from over 138 million county-level tweets were analyzed using predictive modeling, differential language analysis, and mediating language analysis.\n\n\nRESULTS\nTwitter language data captures cross-sectional patterns of excessive alcohol consumption beyond that of sociodemographic factors (e.g. age, gender, race, income, education), and can be used to accurately predict rates of excessive alcohol consumption. Additionally, mediation analysis found that Twitter topics (e.g. 'ready gettin leave') can explain much of the variance associated between socioeconomics and excessive alcohol consumption.\n\n\nCONCLUSIONS\nTwitter data can be used to predict public health concerns such as excessive drinking. Using mediation analysis in conjunction with predictive modeling allows for a high portion of the variance associated with socioeconomic status to be explained."}
{"_id": "9704ce70f2013c7a1b0947222e68bf7da833b249", "title": "Effects of coffee on ambulatory blood pressure in older men and women: A randomized controlled trial.", "text": "This study assessed the effects of regular coffee drinking on 24-hour ambulatory blood pressure (ABP) in normotensive and hypertensive older men and women. Twenty-two normotensive and 26 hypertensive, nonsmoking men and women, with a mean age of 72.1 years (range, 54 to 89 years), took part in the study. After 2 weeks of a caffeine-free diet, subjects were randomized to continue with the caffeine-free diet and abstain from caffeine-containing drinks or drink instant coffee (5 cups per day, equivalent to 300 mg caffeine per day) in addition to the caffeine-free diet for a further 2 weeks. Change in systolic and diastolic blood pressures (SBP, DBP) determined by 24-hour ambulatory BP monitoring showed significant interactions between coffee drinking and hypertension status. In the hypertensive group, rise in mean 24-hour SBP was greater by 4.8 (SEM, 1.3) mm Hg (P=0.031) and increase in mean 24-hour DBP was higher by 3.0 (1.0) mm Hg (P=0.010) in coffee drinkers than in abstainers. There were no significant differences between abstainers and coffee drinkers in the normotensive group for 24-hour, daytime, or nighttime SBP or DBP. In older men and women with treated or untreated hypertension, ABP increased in coffee drinkers and decreased in abstainers. Restriction of coffee intake may be beneficial in older hypertensive individuals."}
{"_id": "aa9d4e6732320de060c4eccb6134fe579ad6428c", "title": "Efficient region search for object detection", "text": "We propose a branch-and-cut strategy for efficient region-based object detection. Given an oversegmented image, our method determines the subset of spatially contiguous regions whose collective features will maximize a classifier's score. We formulate the objective as an instance of the prize-collecting Steiner tree problem, and show that for a family of additive classifiers this enables fast search for the optimal object region via a branch-and-cut algorithm. Unlike existing branch-and-bounddetection methods designed for bounding boxes, our approach allows scoring of irregular shapes \u2014 which is especially critical for objects that do not conform to a rectangular window. We provide results on three challenging object detection datasets, and demonstrate the advantage of rapidly seeking best-scoring regions rather than subwindow rectangles."}
{"_id": "38cace070ec3151f4af4477da3798dd5196e1476", "title": "Reproducible Experiments for Comparing Apache Flink and Apache Spark on Public Clouds", "text": "Big data processing is a hot topic in today\u2019s computer science world. There is a significant demand for analysing big data to satisfy many requirements of many industries. Emergence of the Kappa architecture created a strong requirement for a highly capable and efficient data processing engine. Therefore data processing engines such as Apache Flink and Apache Spark emerged in open source world to fulfill that efficient and high performing data processing requirement. There are many available benchmarks to evaluate those two data processing engines. But complex deployment patterns and dependencies make those benchmarks very difficult to reproduce by our own. This project has two main goals. They are making few of community accepted benchmarks easily reproducible on cloud and validate the performance claimed by those studies. Keywords\u2013 Data Processing, Apache Flink, Apache Spark, Batch processing, Stream processing, Reproducible experiments, Cloud"}
{"_id": "31d2423a14db6f87bdeba678ad7b4304f07dd565", "title": ">1.3-Tb/s VCSEL-Based On-Board Parallel-Optical Transceiver Module for High-Density Optical Interconnects", "text": "This paper gives a detailed description of a >1.3-Tb/s VCSEL-based on-board parallel-optical transceiver module for high-density optical interconnects. The optical module integrates a 28-Gb/s \u00d7 24-channel transmitter and receiver into one package with a 1-in2 footprint, thereby yielding a data rate density as high as 1.34\u00a0Tb/s/in2. A unique module design is developed to utilize the whole top surface for thermal dissipation and the bottom surface for optical and electrical interfaces. The heat dissipation characteristics are studied in simulations and experiments. A test system of 24-channel optical loop-back link is built to evaluate the performance when operating all 24 channels by 28.05-Gb/s 231\u20131 PRBS bit streams simultaneously. With a total power consumption of 9.1\u00a0W, the optical module can be operated within the operating case temperature under a practical air cooling condition. When operating all 24 channels simultaneously, a total jitter margin at BER of 10\u201312 is larger than 0.4\u00a0U.I. at a monitor channel. The measured jitter margin is consistent regardless of any channel operation."}
{"_id": "61ef28caf5f2f72a2f66d27e5710f94653e13d9c", "title": "Random projection-based multiplicative data perturbation for privacy preserving distributed data mining", "text": "This paper explores the possibility of using multiplicative random projection matrices for privacy preserving distributed data mining. It specifically considers the problem of computing statistical aggregates like the inner product matrix, correlation coefficient matrix, and Euclidean distance matrix from distributed privacy sensitive data possibly owned by multiple parties. This class of problems is directly related to many other data-mining problems such as clustering, principal component analysis, and classification. This paper makes primary contributions on two different grounds. First, it explores independent component analysis as a possible tool for breaching privacy in deterministic multiplicative perturbation-based models such as random orthogonal transformation and random rotation. Then, it proposes an approximate random projection-based technique to improve the level of privacy protection while still preserving certain statistical characteristics of the data. The paper presents extensive theoretical analysis and experimental results. Experiments demonstrate that the proposed technique is effective and can be successfully used for different types of privacy-preserving data mining applications."}
{"_id": "667d26ab623fc8c578895b7e88326dd590d696de", "title": "Anusaaraka: Machine Translation in Stages", "text": "Fully-automatic general-purpose high-quality machine translation systems (FGH-MT) are extremely difficult to build. In fact, there is no system in the world for any pair of languages which qualifies to be called FGH-MT. The reasons are not far to seek. Translation is a creative process which involves interpretation of the given text by the translator. Translation would also vary depending on the audience and the purpose for which it is meant. This would explain the difficulty of building a machine translation system. Since, the machine is not capable of interpreting a general text with sufficient accuracy automatically at present let alone re-expressing it for a given audience, it fails to perform as FGH-MT. FOOTNOTE{The major difficulty that the machine facesin interpreting a given text is the lack of general world knowledge or common sense knowledge.}"}
{"_id": "3bd350b0a5dc5d059707fc30ae524c8ff1b92e28", "title": "Automatic Audio Content Analysis", "text": "This paper describes the theoretic framework and applications of automatic audio content analysis. After explaining the basic properties of audio analysis, we present a toolbox being the basis for the development of audio analysis algorithms. We also describe new applications which can be developed using the toolset, among them music indexing and retrieval as well as violence detection in the sound track of videos."}
{"_id": "d4e3795452bfc87f388e0c1a5fb519d1558effd0", "title": "LEARNING USING A SIAMESE CONVOLUTIONAL NEURAL NETWORK", "text": "In this paper we describe learning of a descriptor based on the Siamese Convolutional Neural Network (CNN) architecture and evaluate our results on a standard patch comparison dataset. The descriptor learning architecture is composed of an input module, a Siamese CNN descriptor module and a cost computation module that is based on the L2 Norm. The cost function we use pulls the descriptors of matching patches close to each other in feature space while pushing the descriptors for non-matching pairs away from each other. Compared to related work, we optimize the training parameters by combining a moving average strategy for gradients and Nesterov's Accelerated Gradient. Experiments show that our learned descriptor reaches a good performance and achieves stateof-art results in terms of the false positive rate at a 95% recall rate on standard benchmark datasets. * Corresponding author"}
{"_id": "1a25c7f8fadb01debaf574ba366c275d1a94dd73", "title": "Wearable/disposable sweat-based glucose monitoring device with multistage transdermal drug delivery module", "text": "Electrochemical analysis of sweat using soft bioelectronics on human skin provides a new route for noninvasive glucose monitoring without painful blood collection. However, sweat-based glucose sensing still faces many challenges, such as difficulty in sweat collection, activity variation of glucose oxidase due to lactic acid secretion and ambient temperature changes, and delamination of the enzyme when exposed to mechanical friction and skin deformation. Precise point-of-care therapy in response to the measured glucose levels is still very challenging. We present a wearable/disposable sweat-based glucose monitoring device integrated with a feedback transdermal drug delivery module. Careful multilayer patch design and miniaturization of sensors increase the efficiency of the sweat collection and sensing process. Multimodal glucose sensing, as well as its real-time correction based on pH, temperature, and humidity measurements, maximizes the accuracy of the sensing. The minimal layout design of the same sensors also enables a strip-type disposable device. Drugs for the feedback transdermal therapy are loaded on two different temperature-responsive phase change nanoparticles. These nanoparticles are embedded in hyaluronic acid hydrogel microneedles, which are additionally coated with phase change materials. This enables multistage, spatially patterned, and precisely controlled drug release in response to the patient\u2019s glucose level. The system provides a novel closed-loop solution for the noninvasive sweat-based management of diabetes mellitus."}
{"_id": "a5d0083902cdd74a4e22805a01688b07e3d01883", "title": "Generalized neural-network representation of high-dimensional potential-energy surfaces.", "text": "The accurate description of chemical processes often requires the use of computationally demanding methods like density-functional theory (DFT), making long simulations of large systems unfeasible. In this Letter we introduce a new kind of neural-network representation of DFT potential-energy surfaces, which provides the energy and forces as a function of all atomic positions in systems of arbitrary size and is several orders of magnitude faster than DFT. The high accuracy of the method is demonstrated for bulk silicon and compared with empirical potentials and DFT. The method is general and can be applied to all types of periodic and nonperiodic systems."}
{"_id": "a82a54cfee4556191546b57d3fd94f00c03dc95c", "title": "A universal SNP and small-indel variant caller using deep neural networks", "text": "Despite rapid advances in sequencing technologies, accurately calling genetic variants present in an individual genome from billions of short, errorful sequence reads remains challenging. Here we show that a deep convolutional neural network can call genetic variation in aligned next-generation sequencing read data by learning statistical relationships between images of read pileups around putative variant and true genotype calls. The approach, called DeepVariant, outperforms existing state-of-the-art tools. The learned model generalizes across genome builds and mammalian species, allowing nonhuman sequencing projects to benefit from the wealth of human ground-truth data. We further show that DeepVariant can learn to call variants in a variety of sequencing technologies and experimental designs, including deep whole genomes from 10X Genomics and Ion Ampliseq exomes, highlighting the benefits of using more automated and generalizable techniques for variant calling."}
{"_id": "6769f95f97db9d67fe3e5a1fa96195d5e7e03f64", "title": "Risk Management for Service-Oriented Systems", "text": "Web service technology can be used for integrating heterogeneous and autonomous applications into cross-organizational systems. A key problem is to support a high quality of service-oriented systems despite vulnerabilities caused by the use of external web services. One important aspect that has received little attention so far is risk management for such systems. This paper discusses risks peculiar for servicebased systems, their impact and ways of mitigation. In the context of service-oriented design, risks can be reduced by selection of appropriate business partners, web service discovery, service composition and Quality of Service (QoS) management. Advisors. Vincenzo D\u2019Andrea."}
{"_id": "a9fb580a2c961c4921709dd5d7425a12a7c3fc64", "title": "A Questionnaire Approach based on the Technology Acceptance Model for Mobile tracking on Patient progress Applications", "text": "Healthcare professionals spend much of their time wandering between patients and offices, while the supportive technology stays stationary. Therefore, mobile applications has adapted for healthcare industry. In spite of the advancement and variety of available mobile based applications, there is an eminent need to investigate the current position of the acceptance of those mobile health applications that are tailored towards the tracking patients condition, share patients information and access. Consequently, in this study Technology Acceptance Model has designed to investigate the user acceptance of mobile technology application within healthcare industry. The purpose of this study is to design a quantitative approach based on the technology acceptance model questionnaire as its primary research methodology. It utilized a quantitative approach based a Technology Acceptance Model (TAM) to evaluate the system mobile tracking Model. The related constructs for evaluation are: Perceived of Usefulness, Perceived Ease of Use, User Satisfaction and Attribute of Usability. All these constructs are modified to suit the context of the study. Moreover, this study outlines the details of each construct and its relevance toward the research issue. The outcome of the study represents series of approaches that will apply for checking the suitability of a mobile tracking on patient progress application for health care industry and how well it achieves the aims and objectives of the design."}
{"_id": "e5a636c6942c5ea72cf62ed962bb7c2996efe251", "title": "Propulsion Drive Models for Full Electric Marine Propulsion Systems", "text": "Integrated full electric propulsion systems are being introduced across both civil and military marine sectors. Standard power systems analysis packages cover electrical and electromagnetic components, but have limited models of mechanical subsystems and their controllers. Hence electromechanical system interactions between the prime movers, power network and driven loads are poorly understood. This paper reviews available models of the propulsion drive system components: the power converter, motor, propeller and ship. Due to the wide range of time-constants in the system, reduced order models of the power converter are required. A new model using state-averaged models of the inverter and a hybrid model of the rectifier is developed to give an effective solution combining accuracy with speed of simulation and an appropriate interface to the electrical network model. Simulation results for a typical ship manoeuvre are presented."}
{"_id": "b44c324af6b69c829ce2cc19de4daab3b1263ba0", "title": "Cooperative (rather than autonomous) vehicle-highway automation systems", "text": "The recent DARPA-sponsored automated vehicle \"Challenges\" have generated strong interest in both the research community and the general public, raising consciousness about the possibilities for vehicle automation. Driverless vehicles make good subjects for the visually-oriented media, and they pose enough interesting research challenges to occupy generations of graduate students. However, automated vehicles also have the potential to help solve a variety of real-world problems. Engineers need to think carefully about which of those problems we are actually solving.A well-engineered system should be designed to satisfy specific needs, and those needs should be reflected in the definition of system requirements. Alternative technological approaches can then be evaluated and traded off based on their ability to meet the requirements. The article describes the rather different needs of the public road transportation system and the military transportation system, and then shows how those needs influence the requirements for automated vehicle systems. These requirements point toward significantly different technical approaches, but it is possible to find some limited areas of technical commonality."}
{"_id": "a35d3d359f5cc8a307776acb9ee23c50175a964f", "title": "Transcending transmedia: emerging story telling structures for the emerging convergence platforms", "text": "Although the current paradigm for expanded participatory storytellling is the \"transmedia\" exploitation of the same storyworld on multiple platforms, it can be more productive to think of the digital medium as a single platform, combining all the functionalities we now associate with networked computers, game consoles, and conventionally delivered episodic television to enhance the traditional strengths and cultural functions of television. By looking at the how the four affordances of the digital medium the procedural, participatory, encyclopedic, and spatial apply to three core characteristics of television moving images, sustained storytelling, and individual viewing in the presence of a mass audience -- we can identify areas of emerging conventions and widening opportunity for design innovation"}
{"_id": "fb7ff16f840317c1f725f81f4f1cc8aafacb1516", "title": "A generic visual perception domain randomisation framework for Gazebo", "text": "The impressive results of applying deep neural networks in tasks such as object recognition, language translation, and solving digital games are largely attributed to the availability of massive amounts of high quality labelled data. However, despite numerous promising steps in incorporating these techniques in robotic tasks, the cost of gathering data with a real robot has halted the proliferation of deep learning in robotics. In this work, a plugin for the Gazebo simulator is presented, which allows rapid generation of synthetic data. By introducing variations in simulator-specific or irrelevant aspects of a task, one can train a model which exhibits some degree of robustness against those aspects, and ultimately narrow the reality gap between simulated and real-world data. To show a use-case of the developed software, we build a new dataset for detection and localisation of three object classes: box, cylinder and sphere. Our results in the object detection and localisation task demonstrate that with small datasets generated only in simulation, one can achieve comparable performance to that achieved when training on real-world images."}
{"_id": "31c3790b30bc27c7e657f1ba5c5421713d72474a", "title": "Fast single image fog removal using the adaptive Wiener filter", "text": "We present in this paper a fast single image defogging method that uses a novel approach to refining the estimate of amount of fog in an image with the Locally Adaptive Wiener Filter. We provide a solution for estimating noise parameters for the filter when the observation and noise are correlated by decorrelating with a naively estimated defogged image. We demonstrate our method is 50 to 100 times faster than existing fast single image defogging methods and that our proposed method subjectively performs as well as the Spectral Matting smoothed Dark Channel Prior method."}
{"_id": "e4d422ba556732bb90bb7b17636552af2cd3a26e", "title": "Probability Risk Identification Based Intrusion Detection System for SCADA Systems", "text": "As Supervisory Control and Data Acquisition (SCADA) systems control several critical infrastructures, they have connected to the internet. Consequently, SCADA systems face different sophisticated types of cyber adversaries. This paper suggests a Probability Risk Identification based Intrusion Detection System (PRI-IDS) technique based on analysing network traffic of Modbus TCP/IP for identifying replay attacks. It is acknowledged that Modbus TCP is usually vulnerable due to its unauthenticated and unencrypted nature. Our technique is evaluated using a simulation environment by configuring a testbed, which is a custom SCADA network that is cheap, accurate and scalable. The testbed is exploited when testing the IDS by sending individual packets from an attacker located on the same LAN as the Modbus master and slave. The experimental results demonstrated that the proposed technique can effectively and efficiently recognise replay attacks."}
{"_id": "cc98157b70d7cf464b880668d7694edd12188157", "title": "An Implementation of Intrusion Detection System Using Genetic Algorithm", "text": "Nowadays it is very important to maintain a high level security to ensure safe and trusted communication of information between various organizations. But secured data communication over internet and any other network is always under threat of intrusions and misuses. So Intrusion Detection Systems have become a needful component in terms of computer and network security. There are various approaches being utilized in intrusion detections, but unfortunately any of the systems so far is not completely flawless. So, the quest of betterment continues. In this progression, here we present an Intrusion Detection System (IDS), by applying genetic algorithm (GA) to efficiently detect various types of network intrusions. Parameters and evolution processes for GA are discussed in details and implemented. This approach uses evolution theory to information evolution in order to filter the traffic data and thus reduce the complexity. To implement and measure the performance of our system we used the KDD99 benchmark dataset and obtained reasonable detection rate."}
{"_id": "2f991be8d35e4c1a45bfb0d646673b1ef5239a1f", "title": "Model-Agnostic Interpretability of Machine Learning", "text": "Understanding why machine learning models behave the way they do empowers both system designers and end-users in many ways: in model selection, feature engineering, in order to trust and act upon the predictions, and in more intuitive user interfaces. Thus, interpretability has become a vital concern in machine learning, and work in the area of interpretable models has found renewed interest. In some applications, such models are as accurate as non-interpretable ones, and thus are preferred for their transparency. Even when they are not accurate, they may still be preferred when interpretability is of paramount importance. However, restricting machine learning to interpretable models is often a severe limitation. In this paper we argue for explaining machine learning predictions using model-agnostic approaches. By treating the machine learning models as blackbox functions, these approaches provide crucial flexibility in the choice of models, explanations, and representations, improving debugging, comparison, and interfaces for a variety of users and models. We also outline the main challenges for such methods, and review a recently-introduced model-agnostic explanation approach (LIME) that addresses these challenges."}
{"_id": "546add32740ac350dda44bab06f56d4e206622ab", "title": "Safety Verification of Deep Neural Networks", "text": "Deep neural networks have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. We develop a novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT). We focus on image manipulations, such as scratches or changes to camera angle or lighting conditions, and define safety for an image classification decision in terms of invariance of the classification with respect to manipulations of the original image within a region of images that are close to it. We enable exhaustive search of the region by employing discretisation, and propagate the analysis layer by layer. Our method works directly with the network code and, in contrast to existing methods, can guarantee that adversarial examples, if they exist, are found for the given region and family of manipulations. If found, adversarial examples can be shown to human testers and/or used to fine-tune the network. We implement the techniques using Z3 and evaluate them on state-of-the-art networks, including regularised and deep learning networks. We also compare against existing techniques to search for adversarial examples and estimate network robustness."}
{"_id": "8db9df2eadea654f128c1887722c677c708e8a47", "title": "Deep Reinforcement Learning framework for Autonomous Driving", "text": "Reinforcement learning is considered to be a strong AI paradigm which can be used to teach machines through interaction with the environment and learning from their mistakes. Despite its perceived utility, it has not yet been successfully applied in automotive applications. Motivated by the successful demonstrations of learning of Atari games and Go by Google DeepMind, we propose a framework for autonomous driving using deep reinforcement learning. This is of particular relevance as it is difficult to pose autonomous driving as a supervised learning problem due to strong interactions with the environment including other vehicles, pedestrians and roadworks. As it is a relatively new area of research for autonomous driving, we provide a short overview of deep reinforcement learning and then describe our proposed framework. It incorporates Recurrent Neural Networks for information integration, enabling the car to handle partially observable scenarios. It also integrates the recent work on attention models to focus on relevant information, thereby reducing the computational complexity for deployment on embedded hardware. The framework was tested in an open source 3D car racing simulator called TORCS. Our simulation results demonstrate learning of autonomous maneuvering in a scenario of complex road curvatures and simple interaction of other vehicles. INTRODUCTION A robot car that drives autonomously is a long-standing goal of Artificial Intelligence. Driving a vehicle is a task that requires high level of skill, attention and experience from a human driver. Although computers are more capable of sustained attention and focus than humans, fully autonomous driving requires a level of intelligence that surpasses that achieved so far by AI agents. The tasks involved in creating an autonomous driving agent can be divided into 3 categories, as shown in Figure 1: 1) Recognition: Identifying components of the surrounding environment. Examples of this are pedestrian detection, traffic sign recognition, etc. Although far from trivial, recognition is a relatively easy task nowadays thanks to advances in Deep Learning (DL) algorithms, which have reached human level recognition or above at several object detection and classification problems [8] [2]. Deep learning models are able to learn complex feature representations from raw input data, omitting the need for handcrafted features [15] [2] [7]. In this regard, Convolutional Neural Networks (CNNs) are probably the most successful deep learning model, and have formed the basis of every winning entry on the ImageNet challenge since AlexNet [8]. This success has been replicated in lane & vehicle detection for autonomous driving [6]. 2) Prediction: It is not enough for an autonomous driving agent to recognize its environment; it must also be able to build internal models that predict the future states of the environment. Examples of this class of problem include building a map of the environment or tracking an object. To be able to predict the future, it is important to integrate past information. As such, Recurrent Neural Networks (RNNs) are essential to this class of problem. Long-Short Term Memory (LSTM) networks [5] are one such category of RNN that have been used in end-to-end scene labeling systems [14]. More recently, RNNs have also been used to improve object tracking performance in the DeepTracking model [13]. 
3) Planning: The generation of an efficient model that incorporates recognition and prediction to plan the future sequence of driving actions that will enable the vehicle to navigate successfully. Planning is the hardest task of the three. The difficulty lies in integrating the ability of the model to understand the environment (recognition) and its dynamics (prediction) in a way that enables it to plan the future actions such that it avoids unwanted situations (penalties) and drives safely to its destination (rewards). Figure 1: High-level autonomous driving tasks. The Reinforcement Learning (RL) framework [17] [20] has been used for a long time in control tasks. The mixture of RL with DL was pointed out to be one of the most promising approaches to achieve human-level control in [9]. In [12] and [11] this human-level control was demonstrated on Atari games using the Deep Q Networks (DQN) model, in which RL is responsible for the planning part while DL is responsible for the representation learning part. Later, RNNs were integrated in the mixture to account for partially observable scenarios [4]. Autonomous driving requires the integration of information from multiple sensors. Some of them are low dimensional, like LIDAR, while others are high dimensional, like cameras. It is noteworthy in this particular example, however, that although raw camera images are high dimensional, the useful information needed to achieve the autonomous driving task is of much lower dimension. For example, the important parts of the scene that affect driving decisions are limited to the moving vehicle, free space on the road ahead, the position of kerbs, etc. Even the fine details of vehicles are not important, as only their spatial location is truly necessary for the problem. Hence the memory bandwidth for relevant information is much lower. If this relevant information can be extracted, while the other non-relevant parts are filtered out, it would improve both the accuracy and efficiency of autonomous driving systems. Moreover, this would reduce the computation and memory requirements of the system, which are critical constraints on embedded systems that will contain the autonomous driving control unit. Attention models are a natural fit for such an information filtering process. Recently, these models were successfully deployed for image recognition in [23] and [10], wherein RL was mixed with RNNs to obtain the parts of the image to attend to. Such models are easily extended and integrated into the DQN [11] and Deep Recurrent Q Networks (DRQN) [4] models. This integration was performed in [16]. The success of attention models drives us to propose them for the extraction of low-level information from the raw sensory information to perform autonomous driving. In this paper, we propose a framework for an end-to-end autonomous driving model that takes in raw sensor inputs and outputs driving actions. The model is able to handle partially observable scenarios. Moreover, we propose to integrate the recent advances in attention models in order to extract only relevant information from the received sensor data, thereby making it suitable for real-time embedded systems. The main contributions of this paper are: 1) presenting a survey of the recent advances of deep reinforcement learning and 2) introducing a framework for end-to-end autonomous driving using deep reinforcement learning to the automotive community. The rest of the paper is divided into two parts. 
The first part provides a survey of deep reinforcement learning algorithms, starting with the traditional MDP framework and Q-learning, followed by the DQN, DRQN and Deep Attention Recurrent Q Networks (DARQN). The second part of the paper describes the proposed framework that integrates the recent advances in deep reinforcement learning. Finally, we conclude and suggest directions for future work. REVIEW OF REINFORCEMENT LEARNING For a comprehensive overview of reinforcement learning, please refer to the second edition of Rich Sutton\u2019s textbook [18]. We provide a short review of important topics in this section. The Reinforcement Learning framework was formulated in [17] as a model to provide the best policy an agent can follow (best action to take in a given state), such that the total accumulated rewards are maximized when the agent follows that policy from the current state until a terminal state is reached. Motivation for RL Paradigm Driving is a multi-agent interaction problem. As a human driver, it is much easier to keep within a lane without any interaction with other cars than to change lanes in heavy traffic. The latter is more difficult because of the inherent uncertainty in behavior of other drivers. The number of interacting vehicles, their geometric configuration and the behavior of the drivers could have large variability and it is challenging to design a supervised learning dataset with exhaustive coverage of all scenarios. Human drivers employ some sort of online reinforcement learning to understand the behavior of other drivers such as whether they are defensive or aggressive, experienced or inexperienced, etc. This is particularly useful in scenarios which need negotiation, namely entering a roundabout, navigating junctions without traffic lights, lane changes during heavy traffic, etc. The main challenge in autonomous driving is to deal with corner cases which are unexpected even for a human driver, like recovering from being lost in an unknown area without GPS or dealing with disaster situations like flooding or appearance of a sinkhole on the ground. The RL paradigm explores uncharted territory and learns from its own experience by taking actions. Additionally, RL may be able to handle non-differentiable cost functions which can create challenges for supervised learning problems. Currently, the standard approach for autonomous driving is to decouple the system into isolated sub-problems, typically solved with supervised learning, such as object detection, visual odometry, etc., and then having a post-processing layer to combine all the results of the previous steps. There are two main issues with this approach: Firstly, the sub-problems which are solved may be more difficult than autonomous driving. For example, one might be solving object detection by semantic segmentation which is both challenging and unnecessary. Human drivers don\u2019t detect and classify all visible objects while driving, only the most relevant ones. Secondly, the isolated sub-problems may not combine coherently to achieve"}
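Since the survey portion of the record above walks from the MDP framework and Q-learning to DQN, a compact sketch may help fix ideas. The following tabular Q-learning loop shows the Bellman update that the DQN family approximates with a neural network; the toy environment below is entirely hypothetical and stands in for a simulator such as TORCS.

```python
# Minimal tabular Q-learning, the rule DQN generalizes with a network;
# states, actions and rewards here are invented for illustration.
import numpy as np

n_states, n_actions = 5, 3
alpha, gamma, eps = 0.1, 0.99, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(s, a):
    # Toy stand-in for the simulator: deterministic transition,
    # reward 1 only when the terminal state is reached.
    s_next = (s + a) % n_states
    return s_next, float(s_next == n_states - 1)

s = 0
for _ in range(1000):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    s_next, r = step(s, a)
    # Bellman backup: move Q(s,a) toward r + gamma * max_a' Q(s',a').
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
print(Q.round(2))
```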
{"_id": "a4d513cfc9d4902ef1a80198582f29b8ba46ac28", "title": "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation", "text": "This report surveys the landscape of potential security threats from malicious uses of AI, and proposes ways to better forecast, prevent, and mitigate these threats. After analyzing the ways in which AI may influence the threat landscape in the digital, physical, and political domains, we make four high-level recommendations for AI researchers and other stakeholders. We also suggest several promising areas for further research that could expand the portfolio of defenses, or make attacks less effective or harder to execute. Finally, we discuss, but do not conclusively resolve, the long-term equilibrium of attackers and defenders."}
{"_id": "b5a047dffc3d70dce19de61257605dfc8c69535c", "title": "Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks", "text": "Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU ) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods."}
{"_id": "a7680e975d395891522d3c10e3bf892f9b618048", "title": "Vision-Based Multi-Task Manipulation for Inexpensive Robots Using End-to-End Learning from Demonstration", "text": "We propose a technique for multi-task learning from demonstration that trains the controller of a low-cost robotic arm to accomplish several complex picking and placing tasks, as well as non-prehensile manipulation. The controller is a recurrent neural network using raw images as input and generating robot arm trajectories, with the parameters shared across the tasks. The controller also combines VAE-GAN-based reconstruction with autoregressive multimodal action prediction. Our results demonstrate that it is possible to learn complex manipulation tasks, such as picking up a towel, wiping an object, and depositing the towel to its previous position, entirely from raw images with direct behavior cloning. We show that weight sharing and reconstruction-based regularization substantially improve generalization and robustness, and training on multiple tasks simultaneously increases the success rate on all tasks."}
{"_id": "a479ee6ba49a26e28033b8b9448cf9e058bfa913", "title": "Mechanical design of the Wheel-Leg hybrid mobile robot to realize a large wheel diameter", "text": "In this paper, a new category of the wheel-leg hybrid robot is presented. The proposed mechanism can compose large wheel diameter compared with the previous hybrid robot to realize a greater ability to climb obstacles. A prototype model of one Wheel-Leg module of the proposed robot mechanism has been developed to illustrate the concept. Actual design and mode changing experiment with a test mechanical module is also presented. Basic movement tests and a test of the basic properties of the rotational fingertip are also shown. The Basic configurations of wheel-leg retractable is considered well. The integrated mode is also described."}
{"_id": "9cc507dc1c86b7e22d384d200da49961fd0a7fa5", "title": "Development of GaN HEMT based High Power High Efficiency Distributed Power Amplifier for Military Applications", "text": "Implementing wide bandgap GaN HEMT device into broadband distributed power amplifier creates a tremendous opportunity for RF designers to develop high power high efficiency very broadband power amplifiers for military applications. Several prototypes of 10-40 W GaN based distributed power amplifiers, including MMIC distributed PA, are currently under the development at Rockwell Collins, Inc. In this paper, we will discuss the results of a 10 W distributed power amplifier with the maximum power output of more than 40 dBm and a power-added efficiency of 30-70% over the bandwidth of 20-2000 MHz."}
{"_id": "414b173b65bbb4d471134162acac6fd303d32313", "title": "IDENTIFICATION OF FAILURE MECHANISMS TO ENHANCE PROGNOSTIC OUTCOMES", "text": "Predicting the reliability of a system in its actual life-cycle conditions and estimating its time to failure is helpful in decision making to mitigate system risks. There are three approaches to prognostics: the physics-of-failure approach, the data-driven approach, and the fusion approach. A key requirement in all these approaches is identification of the appropriate parameter(s) to monitor to gather data that can be used to assess impending failure. This paper presents a physics-of-failure approach, which uses failure modes, mechanisms and effects analysis (FMMEA) to enhance prognostics planning and implementation. This paper also presents the fusion approach to prognostics and the applicability of FMMEA to this approach. An example case of generating FMMEA information and using that to identify appropriate parameters to monitor is presented."}
{"_id": "81cba545677a2fa3ce259ef5c540b03cc8ab6c03", "title": "A survey on Vehicular Cloud Computing and its security", "text": "Vehicular networking has a significant advantages in the today era. It provides desirable features and some specific applications such as efficient traffic management, road safety and infotainment. The vehicle consists of comparatively more communication systems such as on-board computing device, storage and computing power, GPS etc. to provide Intelligent Transportation System (ITS). The new hybrid technology known as Vehicular Cloud Computing (VCC) has great impact on the ITS by using the resources of vehicles such as GPS, storage, internet and computing power for instant decision making and sharing information on the cloud. Moreover, the paper not only present the concept of vehicular cloud but also provide a brief overview on the applications, security issues, threats and security solution for the VCC."}
{"_id": "b4bd9fab8439da4939a980a950838d1299a9b030", "title": "Tabu Search - Part II", "text": "Full terms and conditions of use: http://pubsonline.informs.org/page/terms-and-conditions This article may be used only for the purposes of research, teaching, and/or private study. Commercial use or systematic downloading (by robots or other automatic processes) is prohibited without explicit Publisher approval, unless otherwise noted. For more information, contact permissions@informs.org. The Publisher does not warrant or guarantee the article\u2019s accuracy, completeness, merchantability, fitness for a particular purpose, or non-infringement. Descriptions of, or references to, products or publications, or inclusion of an advertisement in this article, neither constitutes nor implies a guarantee, endorsement, or support of claims made of that product, publication, or service. \u00a9 1990 INFORMS"}
{"_id": "8c9e07e4695028fd72b905eacb6c6766efa3c70d", "title": "Marahel : A Language for Constructive Level Generation", "text": "Marahel is a language and framework for constructive generation of 2D tile-based game levels. It is developed with the dual aim of making it easier to build level generators for game developers, and to help solving the general level generation problem by creating a generator space that can be searched using evolution. We describe the different sections of the level generators, and show examples of generated maps from 5 different generators. We analyze their expressive range on three dimensions: percentage of empty space, number of isolated elements, and cell-wise entropy of empty space. The results show that generators that have starkly different output from each other can easily be defined in Marahel."}
{"_id": "ae18fa7080e85922fa916591bc73cd100ff5e861", "title": "Right nulled GLR parsers", "text": "The right nulled generalized LR parsing algorithm is a new generalization of LR parsing which provides an elegant correction to, and extension of, Tomita's GLR methods whereby we extend the notion of a reduction in a shift-reduce parser to include right nulled items. The result is a parsing technique which runs in linear time on LR(1) grammars and whose performance degrades gracefully to a polynomial bound in the presence of nonLR(1) rules. Compared to other GLR-based techniques, our algorithm is simpler and faster."}
{"_id": "0f0fc58f268d1055166276c3b74d36fd12f10c33", "title": "One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities", "text": "The softmax representation of probabilities for categorical variables plays a prominent role in modern machine learning with numerous applications in areas such as large scale classification, neural language modeling and recommendation systems. However, softmax estimation is very expensive for large scale inference because of the high cost associated with computing the normalizing constant. Here, we introduce an efficient approximation to softmax probabilities which takes the form of a rigorous lower bound on the exact probability. This bound is expressed as a product over pairwise probabilities and it leads to scalable estimation based on stochastic optimization. It allows us to perform doubly stochastic estimation by subsampling both training instances and class labels. We show that the new bound has interesting theoretical properties and we demonstrate its use in classification problems."}
{"_id": "5c9b92f2d065211c3d08bab2083571543fb12469", "title": "Object Recognition using 3D SIFT in Complex CT Volumes", "text": "The automatic detection of objects within complex volumetric imagery is becoming of increased interest due to the use of dual energy Computed Tomography (CT) scanners as an aviation security deterrent. These devices produce a volumetric image akin to that encountered in prior medical CT work but in this case we are dealing with a complex multi-object volumetric environment including significant noise artefacts. In this work we look at the application of the recent extension to the seminal SIFT approach to the 3D volumetric recognition of rigid objects within this complex volumetric environment. A detailed overview of the approach and results when applied to a set of exemplar CT volumetric imagery is presented."}
{"_id": "d6054e77672a62708ea9f4be8d35c824cf25068d", "title": "Pulsating torque minimization techniques for permanent magnet AC motor drives-a review", "text": "AbstructPermanent magnet ac (PMAC) motor drives are finding expanded use in high-performance applications where torque smoothness is essential. This paper reviews a wide range of motorand controller-based design techniques that have been described in the literature for minimizing the generation of cogging and ripple torques in both sinusoidal and trapezoidal PMAC motor drives. Sinusoidal PMAC drives generally show the greatest potential for pulsating torque minimization using wellknown motor design techniques such as skewing and fractional slot pitch windings. In contrast, trapezoidal PMAC drives pose more difficult tradeoffs in both the motor and controller design which may require compromises in drive simplicity and cost to improve torque smoothness. Controller-based techniques for minimizing pulsating torque typically involve the use of active cancellation algorithms which depend on either accurate tuning or adaptive control schemes for effectiveness. In the end, successful suppression of pulsating torque ultimately relies on an orchestrated systems approach to all aspects of the PMAC machine and controller design which often requires a carefully selected combination of minimization techniques."}
{"_id": "46b92b61c908eb8809af6d0a3b7a4a2728792161", "title": "Commutation torque ripple reduction in brushless DC motor drives using a single DC current sensor", "text": "This paper presents a comprehensive study on reducing commutation torque ripples generated in brushless DC motor drives with only a single DC-link current sensor provided. In such drives, commutation torque ripple suppression techniques that are practically effective in low speed as well as high speed regions are scarcely found. The commutation compensation technique proposed here is based on a strategy that the current slopes of the incoming and the outgoing phases during the commutation interval can be equalized by a proper duty-ratio control. Being directly linked with deadbeat current control scheme, the proposed control method accomplishes suppression of the spikes and dips superimposed on the current and torque responses during the commutation intervals of the inverter. Effectiveness of the proposed control method is verified through simulations and experiments."}
{"_id": "10b8bf301fbad86b83fabcbd8592adc67d33f8c6", "title": "Targeted Restoration of the Intestinal Microbiota with a Simple, Defined Bacteriotherapy Resolves Relapsing Clostridium difficile Disease in Mice", "text": "Relapsing C. difficile disease in humans is linked to a pathological imbalance within the intestinal microbiota, termed dysbiosis, which remains poorly understood. We show that mice infected with epidemic C. difficile (genotype 027/BI) develop highly contagious, chronic intestinal disease and persistent dysbiosis characterized by a distinct, simplified microbiota containing opportunistic pathogens and altered metabolite production. Chronic C. difficile 027/BI infection was refractory to vancomycin treatment leading to relapsing disease. In contrast, treatment of C. difficile 027/BI infected mice with feces from healthy mice rapidly restored a diverse, healthy microbiota and resolved C. difficile disease and contagiousness. We used this model to identify a simple mixture of six phylogenetically diverse intestinal bacteria, including novel species, which can re-establish a health-associated microbiota and clear C. difficile 027/BI infection from mice. Thus, targeting a dysbiotic microbiota with a defined mixture of phylogenetically diverse bacteria can trigger major shifts in the microbial community structure that displaces C. difficile and, as a result, resolves disease and contagiousness. Further, we demonstrate a rational approach to harness the therapeutic potential of health-associated microbial communities to treat C. difficile disease and potentially other forms of intestinal dysbiosis."}
{"_id": "f0bf87e3f74f8bd0336d0d3102dce9028882389e", "title": "Partitioning and Segment Organization Strategies for Real-Time Selective Search on Document Streams", "text": "The basic idea behind selective search is to partition a collection into topical clusters, and for each query, consider only a subset of the clusters that are likely to contain relevant documents. Previous work on web collections has shown that it is possible to retain high-quality results while considering only a small fraction of the collection. These studies, however, assume static collections where it is feasible to run batch clustering algorithms for partitioning. In this work, we consider the novel formulation of selective search on document streams (specifically, tweets), where partitioning must be performed incrementally. In our approach, documents are partitioned into temporal segments and selective search is performed within each segment: these segments can either be clustered using batch or online algorithms, and at different temporal granularities. For efficiency, we take advantage of word embeddings to reduce the dimensionality of the document vectors. Experiments with test collections from the TREC Microblog Tracks show that we are able to achieve precision indistinguishable from exhaustive search while considering only around 5% of the collection. Interestingly, we observe no significant effectiveness differences between batch vs. online clustering and between hourly vs. daily temporal segments, despite them being very different index organizations. This suggests that architectural choices should be primarily guided by efficiency considerations."}
{"_id": "e54dad471f831b64710367400dacef5a2d63731d", "title": "A novel three-phase DC/DC converter for high-power applications", "text": "This paper proposes a new three-phase series resonant converter. The principle working of the converter is described analytically in detail for switching frequencies equal to the resonant frequency and for switching frequencies greater than the resonant frequency. Additionally, detailed simulations are performed to analyse the converter. Based on the analysis, design criteria for a 5 kW breadboard are derived. The 5 kW breadboard version of the converter has been built to validate the analytical investigations and simulations experimentally. Moreover, the breadboard is used to investigate the ZVS and ZCS possibilities of the topology and the influence of the deadtime and the switching frequency on the overall efficiency."}
{"_id": "0572e4cb844a1ca04bd3c555e38accab84e11c4b", "title": "Facebook\u00ae and academic performance", "text": "There is much talk of a change in modern youth \u2013 often referred to as digital natives or Homo Zappiens \u2013 with respect to their ability to simultaneously process multiple channels of information. In other words, kids today can multitask. Unfortunately for proponents of this position, there is much empirical documentation concerning the negative effects of attempting to simultaneously process different streams of information showing that such behavior leads to both increased study time to achieve learning parity and an increase in mistakes while processing information than those who are sequentially or serially processing that same information. This article presents the preliminary results of a descriptive and exploratory survey study involving Facebook use, often carried out simultaneously with other study activities, and its relation to academic performance as measured by self-reported Grade Point Average (GPA) and hours spent studying per week. Results show that Facebook users reported having lower GPAs and spend fewer hours per week studying than nonusers. 2010 Elsevier Ltd. All rights reserved."}
{"_id": "1c09aabb45e44685ae214d8c6b94ab6e9ce422de", "title": "Silk - Generating RDF Links while Publishing or Consuming Linked Data", "text": "The central idea of the Web of Data is to interlink data items using RDF links. However, in practice most data sources are not sufficiently interlinked with related data sources. The Silk Link Discovery Framework addresses this problem by providing tools to generate links between data items based on user-provided link specifications. It can be used by data publishers to generate links between data sets as well as by Linked Data consumers to augment Web data with additional RDF links. In this poster we present the Silk Link Discovery Framework and report on two usage examples in which we employed Silk to generate links between two data sets about movies as well as to find duplicate persons in a stream of data items that is crawled from the Web."}
{"_id": "8b2ea3ecac8abd2357fbf8ca59ec31bad3191388", "title": "Towards End-to-End Lane Detection: an Instance Segmentation Approach", "text": "Modern cars are incorporating an increasing number of driver assist features, among which automatic lane keeping. The latter allows the car to properly position itself within the road lanes, which is also crucial for any subsequent lane departure or trajectory planning decision in fully autonomous cars. Traditional lane detection methods rely on a combination of highly-specialized, hand-crafted features and heuristics, usually followed by post-processing techniques, that are computationally expensive and prone to scalability due to road scene variations. More recent approaches leverage deep learning models, trained for pixel-wise lane segmentation, even when no markings are present in the image due to their big receptive field. Despite their advantages, these methods are limited to detecting a pre-defined, fixed number of lanes, e.g. ego-lanes, and can not cope with lane changes. In this paper, we go beyond the aforementioned limitations and propose to cast the lane detection problem as an instance segmentation problem - in which each lane forms its own instance - that can be trained end-to-end. To parametrize the segmented lane instances before fitting the lane, we further propose to apply a learned perspective transformation, conditioned on the image, in contrast to a fixed \u201dbird\u2019s-eye view\u201d transformation. By doing so, we ensure a lane fitting which is robust against road plane changes, unlike existing approaches that rely on a fixed, predefined transformation. In summary, we propose a fast lane detection algorithm, running at 50 fps, which can handle a variable number of lanes and cope with lane changes. We verify our method on the tuSimple dataset and achieve competitive results."}
{"_id": "a3ead91f63ca244359cfb4dbb9a22ab149d577eb", "title": "Information Extraction Using Web Usage Mining, Web Scrapping and Semantic Annotation", "text": "Extracting useful information from the web is the most significant issue of concern for the realization of semantic web. This may be achieved by several ways among which Web Usage Mining, Web Scrapping and Semantic Annotation plays an important role. Web mining enables to find out the relevant results from the web and is used to extract meaningful information from the discovery patterns kept back in the servers. Web usage mining is a type of web mining which mines the information of access routes/manners of users visiting the web sites. Web scraping, another technique, is a process of extracting useful information from HTML pages which may be implemented using a scripting language known as Prolog Server Pages(PSP) based on Prolog. Third, Semantic annotation is a technique which makes it possible to add semantics and a formal structure to unstructured textual documents, an important aspect in semantic information extraction which may be performed by a tool known as KIM(Knowledge Information Management). In this paper, we revisit, explore and discuss some information extraction techniques on web like web usage mining, web scrapping and semantic annotation for a better or efficient information extraction on the web illustrated with examples."}
{"_id": "6f0ab73a8d6494b0f8f50fafb32b9a214efbe4c9", "title": "Making Arabic PDF books accessible using gamification", "text": "Most of online Arabic books are not accessible to Arab people with visual impairments. They cannot read online books because they are usually scanned images of the original ones. There is also a problem in PDF encoding of some of the textual books. One of the solutions is to use an Arabic OCR to convert scanned books into text; however Arabic OCR is still in its early stages and suffers from many limitations. In this paper we propose the use of human recognition skills to replace OCR limitations by incorporating the concepts of crowdsourcing and gamification. Our proposed system is in the form of a mobile recall game that presents players with word images segmented from the books to be converted into text. The players' answers are checked using techniques similar to what is used in word spotting. We initially implemented two components of the system; the segmentation, and the feature extraction and matching components. For the feature extraction and matching component, which is used to verify the player's answers, we performed four tests to choose a similarity measure threshold for accepting an entered word as a correct answer. Future work will consider other means of input correctness assurance."}
{"_id": "bec079e846275997314be1415d8e5b3551fe09ca", "title": "Drug delivery and nanoparticles: Applications and hazards", "text": "The use of nanotechnology in medicine and more specifically drug delivery is set to spread rapidly. Currently many substances are under investigation for drug delivery and more specifically for cancer therapy. Interestingly pharmaceutical sciences are using nanoparticles to reduce toxicity and side effects of drugs and up to recently did not realize that carrier systems themselves may impose risks to the patient. The kind of hazards that are introduced by using nanoparticles for drug delivery are beyond that posed by conventional hazards imposed by chemicals in classical delivery matrices. For nanoparticles the knowledge on particle toxicity as obtained in inhalation toxicity shows the way how to investigate the potential hazards of nanoparticles. The toxicology of particulate matter differs from toxicology of substances as the composing chemical(s) may or may not be soluble in biological matrices, thus influencing greatly the potential exposure of various internal organs. This may vary from a rather high local exposure in the lungs and a low or neglectable exposure for other organ systems after inhalation. However, absorbed species may also influence the potential toxicity of the inhaled particles. For nanoparticles the situation is different as their size opens the potential for crossing the various biological barriers within the body. From a positive viewpoint, especially the potential to cross the blood brain barrier may open new ways for drug delivery into the brain. In addition, the nanosize also allows for access into the cell and various cellular compartments including the nucleus. A multitude of substances are currently under investigation for the preparation of nanoparticles for drug delivery, varying from biological substances like albumin, gelatine and phospholipids for liposomes, and more substances of a chemical nature like various polymers and solid metal containing nanoparticles. It is obvious that the potential interaction with tissues and cells, and the potential toxicity, greatly depends on the actual composition of the nanoparticle formulation. This paper provides an overview on some of the currently used systems for drug delivery. Besides the potential beneficial use also attention is drawn to the questions how we should proceed with the safety evaluation of the nanoparticle formulations for drug delivery. For such testing the lessons learned from particle toxicity as applied in inhalation toxicology may be of use. Although for pharmaceutical use the current requirements seem to be adequate to detect most of the adverse effects of nanoparticle formulations, it can not be expected that all aspects of nanoparticle toxicology will be detected. So, probably additional more specific testing would be needed."}
{"_id": "161a4ba80d447f9f60fd1246e51b360ec78c13de", "title": "A Lightweight Component Caching Scheme for Satisfiability Solvers", "text": "We introduce in this paper a lightweight technique for reducing work repetition caused by non\u2013chronological backtracking commonly practiced by DPLL\u2013based SAT solvers. The presented technique can be viewed as a partial component caching scheme. Empirical evaluation of the technique reveals significant improvements on a broad range of in-"}
{"_id": "37d5d2b0d392231d7deae1b2a4f13cbf54c9e184", "title": "Efficient mining of weighted association rules (WAR)", "text": "ABSTRACT In this paper, we extend the tradition association rule problem by allowing a weight to be associated with each item in a transaction, to re ect interest/intensity of the item within the transaction. This provides us in turn with an opportunity to associate a weight parameter with each item in the resulting association rule. We call it weighted association rule (WAR). WAR not only improves the con dence of the rules, but also provides a mechanism to do more effective target marketing by identifying or segmenting customers based on their potential degree of loyalty or volume of purchases. Our approach mines WARs by rst ignoring the weight and nding the frequent itemsets (via a traditional frequent itemset discovery algorithm), and is followed by introducing the weight during the rule generation. It is shown by experimental results that our approach not only results in shorter average execution times, but also produces higher quality results than the generalization of previous known methods on quantitative association rules."}
{"_id": "4509d415e5801aa2c33544168aac572bc13406dd", "title": "Fuzzy Neural Network-Based Adaptive Control for a Class of Uncertain Nonlinear Stochastic Systems", "text": "This paper studies an adaptive tracking control for a class of nonlinear stochastic systems with unknown functions. The considered systems are in the nonaffine pure-feedback form, and it is the first to control this class of systems with stochastic disturbances. The fuzzy-neural networks are used to approximate unknown functions. Based on the backstepping design technique, the controllers and the adaptation laws are obtained. Compared to most of the existing stochastic systems, the proposed control algorithm has fewer adjustable parameters and thus, it can reduce online computation load. By using Lyapunov analysis, it is proven that all the signals of the closed-loop system are semiglobally uniformly ultimately bounded in probability and the system output tracks the reference signal to a bounded compact set. The simulation example is given to illustrate the effectiveness of the proposed control algorithm."}
{"_id": "320ea4748b1f7e808eabedbedb75cce660122d26", "title": "Detecting Avocados to Zucchinis: What Have We Done, and Where Are We Going?", "text": "The growth of detection datasets and the multiple directions of object detection research provide both an unprecedented need and a great opportunity for a thorough evaluation of the current state of the field of categorical object detection. In this paper we strive to answer two key questions. First, where are we currently as a field: what have we done right, what still needs to be improved? Second, where should we be going in designing the next generation of object detectors? Inspired by the recent work of Hoiem et al. on the standard PASCAL VOC detection dataset, we perform a large-scale study on the Image Net Large Scale Visual Recognition Challenge (ILSVRC) data. First, we quantitatively demonstrate that this dataset provides many of the same detection challenges as the PASCAL VOC. Due to its scale of 1000 object categories, ILSVRC also provides an excellent test bed for understanding the performance of detectors as a function of several key properties of the object classes. We conduct a series of analyses looking at how different detection methods perform on a number of image-level and object-class-level properties such as texture, color, deformation, and clutter. We learn important lessons of the current object detection methods and propose a number of insights for designing the next generation object detectors."}
{"_id": "5425f7109dca2022ff9bde0ed3f113080d0d606b", "title": "DFD: Efficient Functional Dependency Discovery", "text": "The discovery of unknown functional dependencies in a dataset is of great importance for database redesign, anomaly detection and data cleansing applications. However, as the nature of the problem is exponential in the number of attributes none of the existing approaches can be applied on large datasets. We present a new algorithm DFD for discovering all functional dependencies in a dataset following a depth-first traversal strategy of the attribute lattice that combines aggressive pruning and efficient result verification. Our approach is able to scale far beyond existing algorithms for up to 7.5 million tuples, and is up to three orders of magnitude faster than existing approaches on smaller datasets."}
{"_id": "c2f4c6d7e06da14c4b3ce3a9b97394a64708dc52", "title": "Database Dependency Discovery: A Machine Learning Approach", "text": "Database dependencies, such as functional and multivalued dependencies, express the presence of structure in databas e relations, that can be utilised in the database design proce ss. The discovery of database dependencies can be viewed as an induction problem, in which general rules (dependencies) a re obtained from specific facts (the relation). This viewpoint has the advantage of abstracting away as much as possible from the particulars of the dependencies. The algorithms in this paper are designed such that they can easily be generalised to other kinds of dependencies. Like in current approaches to computational induction such as inductive logic programming, we distinguish between top down algorithms and bottom-up algorithms. In a top-down approach, hypotheses are generated in a systematic way and then tested against the given relation. In a bottom-up approach, the relation is inspected in order to see what dependencies it may satisfy or violate. We give algorithms for bot h approaches."}
{"_id": "28f840416cfe7aed3cda11e266119d247fcc3f9e", "title": "GORDIAN: Efficient and Scalable Discovery of Composite Keys", "text": "Identification of (composite) key attributes is of fundamental importance for many different data management tasks such as data modeling, data integration, anomaly detection, query formulation, query optimization, and indexing. However, information about keys is often missing or incomplete in many real-world database scenarios. Surprisingly, the fundamental problem of automatic key discovery has received little attention in the existing literature. Existing solutions ignore composite keys, due to the complexity associated with their discovery. Even for simple keys, current algorithms take a brute-force approach; the resulting exponential CPU and memory requirements limit the applicability of these methods to small datasets. In this paper, we describe GORDIAN, a scalable algorithm for automatic discovery of keys in large datasets, including composite keys. GORDIAN can provide exact results very efficiently for both real-world and synthetic datasets. GORDIAN can be used to find (composite) key attributes in any collection of entities, e.g., key column-groups in relational data, or key leaf-node sets in a collection of XML documents with a common schema. We show empirically that GORDIAN can be combined with sampling to efficiently obtain high quality sets of approximate keys even in very large datasets."}
{"_id": "5288d14f6a3937df5e10109d4e23d79b7ddf080f", "title": "Fast Algorithms for Mining Association Rules in Large Databases", "text": ""}
{"_id": "57fb4b0c63400dc984893461b1f5a73244b3e3eb", "title": "Logic and Databases: A Deductive Approach", "text": "ion, Databases and Conceptual Modeling (Pingree Park, Colo., June), pp. 112-114; ACM SZGMOD Rec. 11, 2 (Feb.). CODD, E. F. 1982. Relational database: A practical foundation for productivity. Commun. ACM 25, 2 (Feb.), 109-117. COLMERAUER, A. 1973. Un systeme de communication homme-machine en francais. Rapport, Groupe Intelligence Artificielle, Universite d\u2019AixMarseille-Luminy, Marseilles, France. COLMERAUER, A., AND PIQUE, J. F. 1981. About natural logic. In Advances in Database Theory vol. 1, H. Gallaire, J. Minker, and J.-M. Nicolas, Eds. Plenum, New York, pp. 343-365. COOPER, E. C. 1980. On the expressive power of query languages for relational databases. Tech. Rep. 14-80, Computer Science Dept., Harvard Univ., Cambridge, Mass. DAHL, V. 1982. On database systems development through logic. ACM Trans. Database Syst. 7, 1 (Mar.), 102-123. DATE, C. J. 1977. An Introduction to Database Systems. Addison-Wesley, Reading, Mass. DATE, C. J. 1981. An Introduction to Database Systems, 3rd ed. Addison-Wesley, Reading, Mass. DELIYANNI, A., AND KOWALSKI, R. A. 1979. Logic and semantic networks. Commun. ACM 22, 3"}
{"_id": "7d81977db644f56b3291546598d2f53165f76117", "title": "Regional cerebral blood flow correlates of visuospatial tasks in Alzheimer's disease.", "text": "This study investigated the role of visuospatial tasks in identifying cognitive decline in patients with Alzheimer's disease (AD), by correlating neuropsychological performance with cerebral perfusion measures. There were 157 participants: 29 neurologically healthy controls (age: 70.3 +/- 6.6, MMSE > or = 27), 86 patients with mild AD (age: 69.18 +/- 8.28, MMSE > or = 21) and 42 patients moderate/severe AD (age: 68.86 +/- 10.69, MMSE 8-20). Single Photon Emission Computerized Tomography (SPECT) was used to derive regional perfusion ratios, and correlated using partial least squares (PLS) with neuropsychological test scores from the Benton Line Orientation (BLO) and the Rey-Osterrieth Complex Figure (RO). Cross-sectional analysis demonstrated that mean scores differed in accordance with disease status: control group (BLO 25.5, RO 33.3); mild AD (BLO 20.1, RO 25.5); moderate/severe AD (BLO 10.7, RO 16). Correlations were observed between BLO/RO and right parietal SPECT regions in the AD groups. Visuospatial performance, often undersampled in cognitive batteries for AD, is clearly impaired even in mild AD and correlates with functional deficits as indexed by cerebral perfusion ratios on SPECT implicating right hemisphere circuits. Furthermore, PLS reveals that usual spatial tasks probe a distributed brain network in both hemispheres including many areas targeted by early AD pathology."}
{"_id": "ce2eb2cda28e8883d9475acd6f034733de38ae91", "title": "Articulatory , positional and coarticulatory characteristics for clear / l / and dark / l / : evidence from two Catalan dialects", "text": "Electropalatographic and acoustic data reported in this study show differences in closure location and degree, dorsopalatal contact size, closure duration, relative timing of events and formant frequency between clear /l/ and dark /l/ in two dialects of Catalan (Valencian and Majorcan). The two Catalan dialects under investigation differ also regarding degree of darkness but essentially not regarding coarticulatory resistance at the word edges, i.e. the alveolar lateral is equally dark word-initially and word-finally in Majorcan, and clearer in the former position vs. than the latter in Valencian, and more resistant to vowel effects in the two positions than intervocalically in both dialects. With reference to data from the literature, it appears that languages and dialects may differ as to whether /l/ is dark or clear in all word positions or whether or not initial /l/ is clearer than final /l/, and that articulatory strengthening occurs not only wordand utterance-initially but wordand utterance-finally as well. These and other considerations confirm the hypothesis that degree of darkness in /l/ proceeds gradually rather than categorically from one language to another."}
{"_id": "13179f0c9959a2bd357838aac3d0a97aea96d7b5", "title": "Risk factors for involvement in cyber bullying : Victims , bullies and bully \u2013 victims \u2606", "text": "The use of online technology is exploding worldwide and is fast becoming a preferred method of interacting. While most online interactions are neutral or positive the Internet provides a new means through which children and youth are bullied. The aim of this grounded theory approach was to explore technology, virtual relationships and cyber bullying from the perspectives of students. Seven focus groups were held with 38 students between fifth and eighth grades. The participants considered cyber bullying to be a serious problem and some characterized online bullying as more serious than \u0333traditional\u2018 bullying because of the associated anonymity. Although the students depicted anonymity as integral to cyber bullying, the findings suggest that much of the cyber bullying occurred within the context of their social groups and relationships. Findings revealed five major themes: technology embraced at younger ages and becoming the dominant medium for communication; definitions and views of cyber bullying; factors unique to cyber bullying; types of cyber bullying; and telling adults. The findings highlight the complexity of the perceived anonymity provided by the Internet and how AC C EP TE D M AN U SC R IP T ACCEPTED MANUSCRIPT 3 this may impact cyber bullying. The study offers greater awareness of the meanings of online relationships for children and youth."}
{"_id": "5cf701e4588067b52ead26188904005b59e71139", "title": "The Gut Microbiota and Autism Spectrum Disorders", "text": "Gastrointestinal (GI) symptoms are a common comorbidity in patients with autism spectrum disorder (ASD), but the underlying mechanisms are unknown. Many studies have shown alterations in the composition of the fecal flora and metabolic products of the gut microbiome in patients with ASD. The gut microbiota influences brain development and behaviors through the neuroendocrine, neuroimmune and autonomic nervous systems. In addition, an abnormal gut microbiota is associated with several diseases, such as inflammatory bowel disease (IBD), ASD and mood disorders. Here, we review the bidirectional interactions between the central nervous system and the gastrointestinal tract (brain-gut axis) and the role of the gut microbiota in the central nervous system (CNS) and ASD. Microbiome-mediated therapies might be a safe and effective treatment for ASD."}
{"_id": "28c41a507c374c15443fa5ccf1209e1eec34f317", "title": "Compiling Knowledge into Decomposable Negation Normal Form", "text": "We propose a method for compiling proposit ional theories into a new tractable form that we refer to as decomposable negation normal form (DNNF). We show a number of results about our compilation approach. First, we show that every propositional theory can be compiled into D N N F and present an algorithm to this effect. Second, we show that if a clausal form has a bounded treewidth, then its DNNF compilation has a linear size and can be computed in linear t ime \u2014 treewidth is a graphtheoretic parameter which measures the connectivity of the clausal form. Th i rd , we show that once a propositional theory is compiled into DNNF, a number of reasoning tasks, such as satisfiability and forgett ing, can be performed in linear t ime. Finally, we propose two techniques for approximating the DNNF compilat ion of a theory when the size of such compilation is too large to be practical. One of the techniques generates a sound but incomplete compilation, while the other generates a complete but unsound compilation. Together, these approximations bound the exact compilation from below and above in terms for their abil i ty to answer queries."}
{"_id": "d62f6185ed2c3878d788582fdeabbb5423c04c01", "title": "Object volume estimation based on 3D point cloud", "text": "An approach to estimating the volume of an object based on 3D point cloud is proposed in this paper. Firstly, 3D point cloud is cut into slices of equal thickness along the z-axis. Bisect each slice along the y-axis and cut each slice into sub-intervals along the x-axis. Using the minimum and maximum coordinates in each sub-interval, two surface curve functions can be fitted accordingly for the bisected slices. Then, the curves are integrated to estimate the area of each slice and further integrated along the z-axis for an estimated volume of the object."}
{"_id": "6b50b1fa78307a7505eb35ef58d5f51bf977f2a3", "title": "Auto-Encoding User Ratings via Knowledge Graphs in Recommendation Scenarios", "text": "In the last decade, driven also by the availability of an unprecedented computational power and storage capabilities in cloud environments, we assisted to the proliferation of new algorithms, methods, and approaches in two areas of artificial intelligence: knowledge representation and machine learning. On the one side, the generation of a high rate of structured data on the Web led to the creation and publication of the so-called knowledge graphs. On the other side, deep learning emerged as one of the most promising approaches in the generation and training of models that can be applied to a wide variety of application fields. More recently, autoencoders have proven their strength in various scenarios, playing a fundamental role in unsupervised learning. In this paper, we instigate how to exploit the semantic information encoded in a knowledge graph to build connections between units in a Neural Network, thus leading to a new method, SEM-AUTO, to extract and weight semantic features that can eventually be used to build a recommender system. As adding content-based side information may mitigate the cold user problems, we tested how our approach behaves in the presence of a few ratings from a user on the Movielens 1M dataset and compare results with BPRSLIM."}
{"_id": "6cd3ea4361e035969e6cf819422d0262f7c0a186", "title": "3D Deep Learning for Efficient and Robust Landmark Detection in Volumetric Data", "text": "Recently, deep learning has demonstrated great success in computer vision with the capability to learn powerful image features from a large training set. However, most of the published work has been confined to solving 2D problems, with a few limited exceptions that treated the 3D space as a composition of 2D orthogonal planes. The challenge of 3D deep learning is due to a much larger input vector, compared to 2D, which dramatically increases the computation time and the chance of over-fitting, especially when combined with limited training samples (hundreds to thousands), typical for medical imaging applications. To address this challenge, we propose an efficient and robust deep learning algorithm capable of full 3D detection in volumetric data. A two-step approach is exploited for efficient detection. A shallow network (with one hidden layer) is used for the initial testing of all voxels to obtain a small number of promising candidates, followed by more accurate classification with a deep network. In addition, we propose two approaches, i.e., separable filter decomposition and network sparsification, to speed up the evaluation of a network. To mitigate the over-fitting issue, thereby increasing detection robustness, we extract small 3D patches from a multi-resolution image pyramid. The deeply learned image features are further combined with Haar wavelet features to increase the detection accuracy. The proposed method has been quantitatively evaluated for carotid artery bifurcation detection on a head-neck CT dataset from 455 patients. Compared to the state-ofthe-art, the mean error is reduced by more than half, from 5.97 mm to 2.64 mm, with a detection speed of less than 1 s/volume."}
{"_id": "2ee7ee38745e9fcf89860dfb3d41c2155521e3a3", "title": "Residual Memory Networks in Language Modeling: Improving the Reputation of Feed-Forward Networks", "text": "We introduce the Residual Memory Network (RMN) architecture to language modeling. RMN is an architecture of feedforward neural networks that incorporates residual connections and time-delay connections that allow us to naturally incorporate information from a substantial time context. As this is the first time RMNs are applied for language modeling, we thoroughly investigate their behaviour on the well studied Penn Treebank corpus. We change the model slightly for the needs of language modeling, reducing both its time and memory consumption. Our results show that RMN is a suitable choice for small-sized neural language models: With test perplexity 112.7 and as few as 2.3M parameters, they out-perform both a much larger vanilla RNN (PPL 124, 8M parameters) and a similarly sized LSTM (PPL 115, 2.08M parameters), while being only by less than 3 perplexity points worse than twice as big LSTM."}
{"_id": "772aa928a1bbc901795b78c1bee6d539aa2d1a36", "title": "Author Disambiguation in PubMed: Evidence on the Precision and Recall of Author-ity among NIH-Funded Scientists", "text": "We examined the usefulness (precision) and completeness (recall) of the Author-ity author disambiguation for PubMed articles by associating articles with scientists funded by the National Institutes of Health (NIH). In doing so, we exploited established unique identifiers-Principal Investigator (PI) IDs-that the NIH assigns to funded scientists. Analyzing a set of 36,987 NIH scientists who received their first R01 grant between 1985 and 2009, we identified 355,921 articles appearing in PubMed that would allow us to evaluate the precision and recall of the Author-ity disambiguation. We found that Author-ity identified the NIH scientists with 99.51% precision across the articles. It had a corresponding recall of 99.64%. Precision and recall, moreover, appeared stable across common and uncommon last names, across ethnic backgrounds, and across levels of scientist productivity."}
{"_id": "c9946fedf333df0c6404765ba6ccbf8006779753", "title": "Motion planning based on learning models of pedestrian and driver behaviors", "text": "Autonomous driving has shown the capability of providing driver convenience and enhancing safety. While introducing autonomous driving into our current traffic system, one significant issue is to make the autonomous vehicle be able to react in the same way as real human drivers. In order to ensure that an autonomous vehicle of the future will perform like human drivers, this paper proposes a vehicle motion planning model, which can represent how drivers control vehicles based on the assessment of traffic environments in the real signalized intersection. The proposed motion planning model comprises functions of pedestrian intention detection, gap detection and vehicle dynamic control. The three functions are constructed based on the analysis of actual data collected from real traffic environments. Finally, this paper demonstrates the performance of the proposed method by comparing the behaviors of our model with the behaviors of real pedestrians and human drivers. The experimental results show that our proposed model can achieve 85% recognition rate for the pedestrian crossing intention. Moreover, the vehicle controlled by the proposed motion planning model and the actual human-driven vehicle are highly similar with respect to the gap acceptance in intersections."}
{"_id": "ffe504583a03a224f4b14938aa9d7b625d326736", "title": "Facing prejudice: implicit prejudice and the perception of facial threat.", "text": "We propose that social attitudes, and in particular implicit prejudice, bias people's perceptions of the facial emotion displayed by others. To test this hypothesis, we employed a facial emotion change-detection task in which European American participants detected the offset (Study 1) or onset (Study 2) of facial anger in both Black and White targets. Higher implicit (but not explicit) prejudice was associated with a greater readiness to perceive anger in Black faces, but neither explicit nor implicit prejudice predicted anger perceptions regarding similar White faces. This pattern indicates that European Americans high in implicit racial prejudice are biased to perceive threatening affect in Black but not White faces, suggesting that the deleterious effects of stereotypes may take hold extremely early in social interaction."}
{"_id": "e8cd182c70220f7c381a26c80c0a82b5c8e4d5c1", "title": "Recurrent networks with attention and convolutional networks for sentence representation and classification", "text": "In this paper, we propose a bi-attention, a multi-layer attention and an attention mechanism and convolution neural network based text representation and classification model (ACNN). The bi-attention have two attention mechanism to learn two context vectors, forward RNN with attention to learn forward context vector c\u20d7 $\\overrightarrow {\\mathbf {c}}$ and backward RNN with attention to learn backward context vector c\u20d6 $\\overleftarrow {\\mathbf {c}}$, and then concatenation c\u20d7 $\\overrightarrow {\\mathbf {c}}$ and c\u20d6 $\\overleftarrow {\\mathbf {c}}$ to get context vector c. The multi-layer attention is the stack of the bi-attention. In the ACNN, the context vector c is obtained by the bi-attention, then the convolution operation is performed on the context vector c, and the max-pooling operation is used to reduce the dimension. After max-pooling operation the text is converted to low-dimensional sentence vector m. Finally, the Softmax classifier be used for text classification. We test our model on 8 benchmarks text classification datasets, and our model achieved a better or the same performance compare with the state-of-the-art methods."}
{"_id": "92fff676dc28d962e79c0450531ebba12a341896", "title": "Modeling changing dependency structure in multivariate time series", "text": "We show how to apply the efficient Bayesian changepoint detection techniques of Fearnhead in the multivariate setting. We model the joint density of vector-valued observations using undirected Gaussian graphical models, whose structure we estimate. We show how we can exactly compute the MAP segmentation, as well as how to draw perfect samples from the posterior over segmentations, simultaneously accounting for uncertainty about the number and location of changepoints, as well as uncertainty about the covariance structure. We illustrate the technique by applying it to financial data and to bee tracking data."}
{"_id": "c7473ab608ec26b799e10a387067d503cdcc4e7e", "title": "Habits \u2014 A Repeat Performance", "text": "Habits are response dispositions that are activated automatically by the context cues that co-occurred with responses during past performance. Experiencesampling diary studies indicate that much of everyday action is characterized by habitual repetition. We consider various mechanisms that could underlie the habitual control of action, and we conclude that direct cuing and motivated contexts best account for the characteristic features of habit responding\u2014in particular, for the rigid repetition of action that can be initiated without intention and that runs to completion with minimal conscious control. We explain the utility of contemporary habit research for issues central to psychology, especially for behavior prediction, behavior change, and self-regulation. KEYWORDS\u2014habit; automaticity; motivation; goals; behavior change; behavior prediction; self-regulation From self-help guru Anthony Robbins to the religion of Zen Buddhism, received wisdom exhorts people to be mindful, deliberative, and conscious in all they do. In contrast, contemporary research in psychology shows that it is actually people\u2019s unthinking routines\u2014or habits\u2014that form the bedrock of everyday life. Without habits, people would be doomed to plan, consciously guide, and monitor every action, from making that first cup of coffee in the morning to sequencing the finger movements in a Chopin piano concerto. But what is a habit? The cognitive revolution radically reshaped the behaviorist view that habits rely on simple stimulus\u2013 response associations devoid of mental representation. Emerging is a more nuanced construct that includes roles for consciousness, goals, and motivational states. Fundamental questions persist, however, especially when comparing evidence across neuropsychology, animal-learning, and social-cognition literatures. Data from these fields support three views of habit, which we term the direct-context-cuing, implicit-goal, and motivated-context models. In this article, we consider these models and explain the relevance for psychology of a reinvigorated habit construct. HABITS AFTER BEHAVIORISM Within current theorizing, habits are automated response dispositions that are cued by aspects of the performance context (i.e., environment, preceding actions). They are learned through a process in which repetition incrementally tunes cognitive processors in procedural memory (i.e., the memory system that supports the minimally conscious control of skilled action). The relatively primitive associative learning that promotes habits is shared in some form across mammalian species. Our own interest in habits has been fueled by the recognition that much of everyday action is characterized by repetition. In experience-sampling diary studies using both student and community samples, approximately 45% of everyday behaviors tended to be repeated in the same location almost every day (Quinn & Wood, 2005; Wood, Quinn, & Kashy, 2002). In these studies, people reported a heterogeneous set of actions that varied in habit strength, including reading the newspaper, exercising, and eating fast food. Although a consensual perspective on habit mechanisms has yet to develop, common to all views is the idea that many behavioral sequences (e.g., one\u2019s morning coffee-making routine) are performed repeatedly in similar contexts. 
When responses and features of context occur in contiguity, the potential exists for associations to form between them, such that contexts come to cue responses. In what follows, we outline three views of habitual control that build on this understanding. Direct Context Cuing According to the direct-context-cuing model, repeated coactivation forges direct links in memory between context and response representations. Once these links are formed via associative learning, merely perceiving a context triggers associated responses. Supporting evidence comes from research in which merely activating a construct, such as the elderly stereotype, influences the performance of relevant behaviors, such as a slow speed of walking (e.g., Bargh, Chen, & Burrows, 1996). Readers might wonder if it is realistic that contexts cue responses through this simple mechanism in the absence of an implicit or explicit goal. The answer is not clear, given that social-cognition research has thus far demonstrated only a limited version of direct-cuing effects. For example, activating the elderly stereotype influences walking speed, but it remains to be demonstrated whether such activation can initiate walking itself. However, the direct cuing of repeated action by contexts is suggested by myriad findings in cognitive neuroscience that reveal reduced involvement of goal-related neural structures, such as the prefrontal cortex, when behaviors have come under habitual control (see Daw, Niv, & Dayan, 2005). Furthermore, animal-learning research using a clever paradigm in which reinforcers are devalued suggests direct control by context. When rats initially perform an instrumental behavior (e.g., pressing a bar for a food pellet), they appear to be guided by specific goal expectations; they cease the behavior if the reward is devalued (e.g., by pairing it with a toxin; Dickinson & Balleine, 1995). In contrast, when rats have extensively repeated a behavior, their responses appear to be cued directly by contextual stimuli (e.g., the bar); reward devaluation has little impact on continued performance. These data are commonly interpreted as indicating that habit formation involves a shift to direct context cuing. Implicit Goals Associative learning explains not only the direct binding of contexts and actions but also the binding of contexts and goals. In implicit-goal models, habits develop when people repeatedly pursue a goal via a specific behavior in a given context. An indirect association then forms between the context and behavior within the broader goal system. In support, Aarts and Dijksterhuis (2000) found in several experiments that the automatic activation of habitual responses (e.g., bicycle riding) only occurs when a relevant goal has first been made accessible (e.g., the thought of attending class). These studies did not measure people\u2019s real-world behavior, however, but focused instead on judgments about behavior. It remains to be seen whether such judgments tap the cognitive processes that actually drive behavior. In addition, there is good reason to think that habit performance itself does not depend on goal activation. 
Goal-driven responses tend to be dynamic and flexible, as evidenced by people sometimes substituting behaviors that serve a common goal. In contrast, habits emerge in a rigid pattern such that, for example, a habitual runner is unlikely to substitute a cycling class for running. Thus, although implicit goals provide potentially powerful guides to action, they do not plausibly explain the context cuing of habits. Motivated Contexts In another framework for understanding context-cued responses, contexts can acquire diffuse motivational value when they have preceded rewards in the past. When contexts predict rewards in this way, they energize associated responses without activating specific goals. Evidence of the motivating quality of contexts comes from animal studies of the neurotransmitters that mediate reward learning. For example, when monkeys first learn that a feature of the environment (e.g., a light) predicts a reward (e.g., a drop of juice) when a response is made (e.g., a lever press), neurotransmitter activity (i.e., dopamine release) occurs just after the reward (see Schultz, Dayan, & Montague, 1997). After repeated practice, the animal reaches for the lever when the light is illuminated. Furthermore, the neurotransmitter response is no longer elicited by the juice but instead by the light. In this way, environmental cues can acquire motivational value. Reward-predicting environments are thought to signal the cached (or long-run future) value of an action without signaling a specific outcome (e.g., juice; Daw et al., 2005). This diffuse motivation may explain the rigid nature of context cuing, given that cached values do not convey a specific desired outcome that could be met by substitutable means. Contributing further to the rigidity of habits, neural evidence indicates that, with repetition, whole sequences of responses become chunked or integrated in memory with the contexts that predict them (Barnes, Kubota, Hu, Jin, & Graybiel, 2005). Chunked responses are cued and implemented as a unit, consistent with the idea that habits require limited conscious control to proceed to completion. This quality of habitual responding is frustratingly evident when, for example, trying to fix a well-practiced but badly executed golf swing or dance-step sequence. As yet, the motivated-context idea has been tested primarily with animals. Its promise as a model of human habits comes from evidence that reward-related neurotransmitter systems are shared across species (e.g., in humans, dopamine is elicited by monetary reward). Multiple Habit Mechanisms The high degree of repetition in daily life observed in the diary research of Wood et al. (2002) is likely to be a product of multiple habit-control mechanisms that draw, in various cases, on direct context associations as well as on diffuse motivations. Although we consider implicit goals to be an implausible mediator of habitual behavior, they undoubtedly contribute to some types of repetition. Whether habits are cued directly or are diffusely motivated, they are triggered automatically by contexts and performed in a relatively rigid way. These features of responding have important implications for theories of behavior prediction, behavior change."}
{"_id": "32578f50dfcd443505450c79b61d59cc71b8d685", "title": "Digital Enterprise Architecture - Transformation for the Internet of Things", "text": "Excellence in IT is both a driver and a key enabler of the digital transformation. The digital transformation changes the way we live, work, learn, communicate, and collaborate. The Internet of Things (IoT) fundamentally influences today's digital strategies with disruptive business operating models and fast changing markets. New business information systems are integrating emerging Internet of Things infrastructures and components. With the huge diversity of Internet of Things technologies and products organizations have to leverage and extend previous Enterprise Architecture efforts to enable business value by integrating Internet of Things architectures. Both architecture engineering and management of current information systems and business models are complex and currently integrating beside the Internet of Things synergistic subjects, like Enterprise Architecture in context with services & cloud computing, semantic-based decision support through ontologies and knowledge-based systems, big data management, as well as mobility and collaboration networks. To provide adequate decision support for complex business/IT environments, we have to make transparent the impact of business and IT changes over the integral landscape of affected architectural capabilities, like directly and transitively impacted IoT-objects, business categories, processes, applications, services, platforms and infrastructures. The paper describes a new metamodel-based approach for integrating Internet of Things architectural objects, which are semi-automatically federated into a holistic Digital Enterprise Architecture environment."}
{"_id": "e8859b92af978e29cb3931b27bf781be6f98e3d0", "title": "A novel bridgeless buck-boost PFC converter", "text": "Conventional cascade buck-boost PFC (CBB-PFC) converter suffers from the high conduction loss in the input rectifier bridge. To resolve the above problem, a novel bridgeless buck-boost PFC topology is proposed in this paper. The proposed PFC converter which removes the input rectifier bridge has three conduction semiconductors at every moment. Comparing with CBB-PFC topology, the proposed topology reduces the conduction semiconductors, reduces conduction losses effectively, improves the efficiency of converter and is suitable for use in the wide input voltage range. In this paper, the average current mode control was implemented with UC3854, the theoretical analysis and design of detection circuits was presented. The experimental prototype with 400 V/600 W output and line input voltage range from 220 VAC to 380 VAC was built. Experimental results show that the proposed converter can improve 0.8% efficiency comparing CBB-PFC converter."}
{"_id": "9a7ff2f35a9e874fcf7d3a6a3c8671406cd7829a", "title": "Scene text recognition and tracking to identify athletes in sport videos", "text": "We present an athlete identification module forming part of a system for the personalization of sport video broadcasts. The aim of this module is the localization of athletes in the scene, their identification through the reading of names or numbers printed on their uniforms, and the labelling of frames where athletes are visible. Building upon a previously published algorithm we extract text from individual frames and read these candidates by means of an optical character recognizer (OCR). The OCR-ed text is then compared to a known list of athletes\u2019 names (or numbers), to provide a presence score for each athlete. Text regions are tracked in subsequent frames using a template matching technique. In this way blurred or distorted text, normally unreadable by the OCR, is exploited to provide a denser labelling of the video sequences. Extensive experiments show that the method proposed is fast, robust and reliable, out-performing results of other systems in the literature."}
{"_id": "0214778162cb1bb1fb15b6b8e5c5d669a3985fb5", "title": "A Knowledge Graph based Bidirectional Recurrent Neural Network Method for Literature-based Discovery", "text": "In this paper, we present a model which incorporates biomedical knowledge graph, graph embedding and deep learning methods for literature-based discovery. Firstly, the relations between entities are extracted from biomedical abstracts and then a knowledge graph is constructed by using these obtained relations. Secondly, the graph embedding technologies are applied to convert the entities and relations in the knowledge graph into a low-dimensional vector space. Thirdly, a bidirectional Long Short-Term Memory network is trained based on the entity associations represented by the pre-trained graph embeddings. Finally, the learned model is used for open and closed literature-based discovery tasks. The experimental results show that our method could not only effectively discover hidden associations between entities, but also reveal the corresponding mechanism of interactions. It suggests that incorporating knowledge graph and deep learning methods is an effective way for capturing the underlying complex associations between entities hidden in the literature."}
{"_id": "e74949ae81efa58f562d44b86339571701674284", "title": "A system for generating and injecting indistinguishable network decoys", "text": "We propose a novel trap-based architecture for detecting passive, \u201csilent\u201d, attackers who are eavesdropping on enterprise networks. Motivated by the increasing number of incidents where attackers sniff the local network for interesting information, such as credit card numbers, account credentials, and passwords, we introduce a methodology for building a trap-based network that is designed to maximize the realism of bait-laced traffic. Our proposal relies on a \u201crecord, modify, replay\u201d paradigm that can be easily adapted to different networked environments. The primary contributions of our architecture are the ease of automatically injecting large amounts of believable bait, and the integration of different detection mechanisms in the back-end. We demonstrate our methodology in a prototype platform that uses our decoy injection API to dynamically create and dispense network traps on a subset of our campus wireless network. Our network traps consist of several types of monitored passwords, authentication cookies, credit cards, and documents containing beacons to alarm when opened. The efficacy of our decoys against a model attack program is also discussed, along with results obtained from experiments in the field. In addition, we present a user study that demonstrates the believability of our decoy traffic, and finally, we provide experimental results to show that our solution causes only negligible interference to ordinary users."}
{"_id": "b8febc6422057c0476db7e64ac88df8fb0a619eb", "title": "Reflect , and React : A Culturally Responsive Model for Pre-service Secondary Social Studies Teachers", "text": "The purpose of this qualitative study was to design and implement a model of cultural-responsiveness within a social studies teacher education program. Specifically, we sought to understand how pre-service grades 6-12 social studies practitioners construct culturally responsive teaching (CRT) in their lesson planning. In addition, we examined the professional barriers that prevented teacher-candidates from actualizing culturally responsive pedagogy. Incorporating a conceptual model of Review, Reflect, and React, 20 teacher candidates in a social studies methods course engaged CRT theory and practice. Thematic analysis of lesson plans and clinical reflections indicated successful proponents of CRT critically analyzed their curriculum, explored the diverse needs of their students, and engaged learners in culturally appropriate social studies pedagogy. Findings also showed that unsuccessful CRT was characterized by a lack of content knowledge, resistance from the cooperating teacher, and a reliance on the textbook materials."}
{"_id": "0ed4df86b942e8b4979bac640a720db0187a32ef", "title": "Deep and Broad Learning on Content-Aware POI Recommendation", "text": "POI recommendation has attracted lots of research attentions recently. There are several key factors that need to be modeled towards effective POI recommendation - POI properties, user preference and sequential momentum of check- ins. The challenge lies in how to synergistically learn multi-source heterogeneous data. Previous work tries to model multi-source information in a flat manner, using either embedding based methods or sequential prediction models in a cross-related space, which cannot generate mutually reinforce results. In this paper, a deep and broad learning approach based on a Deep Context- aware POI Recommendation (DCPR) model was proposed to structurally learn POI and user characteristics. The proposed DCPR model includes three collaborative layers, a CNN layer for POI feature mining, an RNN layer for sequential dependency and user preference modeling, and an interactive layer based on matrix factorization to jointly optimize the overall model. Experiments over three data sets demonstrate that DCPR model achieves significant improvement over state-of-the-art POI recommendation algorithms and other deep recommendation models."}
{"_id": "2d113bbeb6f2f393a09a82bc05fdff61b391d05d", "title": "A Rule Based Approach to Discourse Parsing", "text": "In this paper we present an overview of recent developments in discourse theory and parsing under the Linguistic Discourse Model (LDM) framework, a semantic theory of discourse structure. We give a novel approach to the problem of discourse segmentation based on discourse semantics and sketch a limited but robust approach to symbolic discourse parsing based on syntactic, semantic and lexical rules. To demonstrate the utility of the system in a real application, we briefly describe the architecture of the PALSUMM system, a symbolic summarization system being developed at FX Palo Alto Laboratory that uses discourse structures constructed using the theory outlined to summarize written English prose texts. 1"}
{"_id": "9778197c8b8a4a1c297edb180b63f4a29f612895", "title": "Emerging Principles of Gene Expression Programs and Their Regulation.", "text": "Many mechanisms contribute to regulation of gene expression to ensure coordinated cellular behaviors and fate decisions. Transcriptional responses to external signals can consist of many hundreds of genes that can be parsed into different categories based on kinetics of induction, cell-type and signal specificity, and duration of the response. Here we discuss the structure of transcription programs and suggest a basic framework to categorize gene expression programs based on characteristics related to their control mechanisms. We also discuss possible evolutionary implications of this framework."}
{"_id": "a7ab6fe31ee11a6b59e4d0c15de9f81661ef0d58", "title": "EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer Interfaces", "text": "OBJECTIVE\nBrain-computer interfaces (BCI) enable direct communication with a computer, using neural activity as the control signal. This neural signal is generally chosen from a variety of well-studied electroencephalogram (EEG) signals. For a given BCI paradigm, feature extractors and classifiers are tailored to the distinct characteristics of its expected EEG control signal, limiting its application to that specific signal. Convolutional neural networks (CNNs), which have been used in computer vision and speech recognition to perform automatic feature extraction and classification, have successfully been applied to EEG-based BCIs; however, they have mainly been applied to single BCI paradigms and thus it remains unclear how these architectures generalize to other paradigms. Here, we ask if we can design a single CNN architecture to accurately classify EEG signals from different BCI paradigms, while simultaneously being as compact as possible.\n\n\nAPPROACH\nIn this work we introduce EEGNet, a compact convolutional neural network for EEG-based BCIs. We introduce the use of depthwise and separable convolutions to construct an EEG-specific model which encapsulates well-known EEG feature extraction concepts for BCI. We compare EEGNet, both for within-subject and cross-subject classification, to current state-of-the-art approaches across four BCI paradigms: P300 visual-evoked potentials, error-related negativity responses (ERN), movement-related cortical potentials (MRCP), and sensory motor rhythms (SMR).\n\n\nMAIN RESULTS\nWe show that EEGNet generalizes across paradigms better than, and achieves comparably high performance to, the reference algorithms when only limited training data is available across all tested paradigms. In addition, we demonstrate three different approaches to visualize the contents of a trained EEGNet model to enable interpretation of the learned features.\n\n\nSIGNIFICANCE\nOur results suggest that EEGNet is robust enough to learn a wide variety of interpretable features over a range of BCI tasks. Our models can be found at: https://github.com/vlawhern/arl-eegmodels."}
{"_id": "fdddcec753050d624a72d3de6f23c5b560f1cb25", "title": "Signal Processing Approaches to Minimize or Suppress Calibration Time in Oscillatory Activity-Based Brain\u2013Computer Interfaces", "text": "One of the major limitations of brain-computer interfaces (BCI) is their long calibration time, which limits their use in practice, both by patients and healthy users alike. Such long calibration times are due to the large between-user variability and thus to the need to collect numerous training electroencephalography (EEG) trials for the machine learning algorithms used in BCI design. In this paper, we first survey existing approaches to reduce or suppress calibration time, these approaches being notably based on regularization, user-to-user transfer, semi-supervised learning and a priori physiological information. We then propose new tools to reduce BCI calibration time. In particular, we propose to generate artificial EEG trials from the few EEG trials initially available, in order to augment the training set size. These artificial EEG trials are obtained by relevant combinations and distortions of the original trials available. We propose three different methods to do so. We also propose a new, fast and simple approach to perform user-to-user transfer for BCI. Finally, we study and compare offline different approaches, both old and new ones, on the data of 50 users from three different BCI data sets. This enables us to identify guidelines about how to reduce or suppress calibration time for BCI."}
{"_id": "061356704ec86334dbbc073985375fe13cd39088", "title": "Very Deep Convolutional Networks for Large-Scale Image Recognition", "text": "In this work we investigate the effect of the convolutional n etwork depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth, which shows that a significant improvement on the prior-art configurations can be achi eved by pushing the depth to 16\u201319 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first a nd he second places in the localisation and classification tracks respec tively. We also show that our representations generalise well to other datasets, whe re t y achieve the stateof-the-art results. Importantly, we have made our two bestp rforming ConvNet models publicly available to facilitate further research o n the use of deep visual representations in computer vision."}
{"_id": "14318685b5959b51d0f1e3db34643eb2855dc6d9", "title": "Going deeper with convolutions", "text": "We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection."}
{"_id": "1827de6fa9c9c1b3d647a9d707042e89cf94abf0", "title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", "text": "Training Deep Neural Networks is complicated by the fact that the distribution of each layer\u2019s inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a stateof-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters."}
{"_id": "6e80768219b2ab5a3247444cfb280e8d33d369f0", "title": "DESIGN OF AN ULTRA-WIDEBAND POWER DIVIDER VIA THE COARSE-GRAINED PARALLEL MICRO-GENETIC ALGORITHM", "text": "An ultra-wideband (UWB) power divider is designed in this paper. The UWB performance of this power divider is obtained by using a tapered microstrip line that consists of exponential and elliptic sections. The coarse grained parallel micro-genetic algorithm (PMGA) and CST Microwave Studio are combined to achieve an automated parallel design process. The method is applied to optimize the UWB power divider. The optimized power divider is fabricated and measured. The measured results show relatively low insertion loss, good return loss, and high isolation between the output ports across the whole UWB (3.1\u201310.6 GHz)."}
{"_id": "12dc84ff8f98d6cb4b3ea0c1d4b444900d4bbb5c", "title": "Visual Path Prediction in Complex Scenes with Crowded Moving Objects", "text": "This paper proposes a novel path prediction algorithm for progressing one step further than the existing works focusing on single target path prediction. In this paper, we consider moving dynamics of co-occurring objects for path prediction in a scene that includes crowded moving objects. To solve this problem, we first suggest a two-layered probabilistic model to find major movement patterns and their cooccurrence tendency. By utilizing the unsupervised learning results from the model, we present an algorithm to find the future location of any target object. Through extensive qualitative/quantitative experiments, we show that our algorithm can find a plausible future path in complex scenes with a large number of moving objects."}
{"_id": "90ff0f5eaed1ebb42a50da451f15cf39de53b681", "title": "NEED FOR HUMAN DIRECTION OF DATA MINING CROSS-INDUSTRY STANDARD PROCESS: CRISP\u2013DM CASE STUDY 1: ANALYZING AUTOMOBILE WARRANTY CLAIMS: EXAMPLE OF THE CRISP\u2013DM INDUSTRY STANDARD PROCESS IN ACTION FALLACIES OF DATA MINING", "text": "WHAT IS DATA MINING? WHY DATA MINING? NEED FOR HUMAN DIRECTION OF DATA MINING CROSS-INDUSTRY STANDARD PROCESS: CRISP\u2013DM CASE STUDY 1: ANALYZING AUTOMOBILE WARRANTY CLAIMS: EXAMPLE OF THE CRISP\u2013DM INDUSTRY STANDARD PROCESS IN ACTION FALLACIES OF DATA MINING WHAT TASKS CAN DATA MINING ACCOMPLISH? CASE STUDY 2: PREDICTING ABNORMAL STOCK MARKET RETURNS USING NEURAL NETWORKS CASE STUDY 3: MINING ASSOCIATION RULES FROM LEGAL DATABASES CASE STUDY 4: PREDICTING CORPORATE BANKRUPTCIES USING DECISION TREES CASE STUDY 5: PROFILING THE TOURISM MARKET USING k-MEANS CLUSTERING ANALYSIS"}
{"_id": "25f08ef49357067dcb58cc3b8af416e138006737", "title": "Design and implementation of server cluster dynamic load balancing based on OpenFlow", "text": "Nowadays, the Internet is flooded with huge traffic, many applications have millions users, a single server is difficult to bear a large number of clients' access, so many application providers will put several servers as a computing unit to provide support for a specific application, usually people will use distributed computing, load balancing technology to complete the work. A typical load balancing technique is to use a dedicated load balancer to forward the client requests to different servers, this technique requires dedicated hardware support, the hardware is expensive, lacks of flexibility and is easy to become a single point failure. There will be a new solution for load balancing with OpenFlow proposed., this paper mainly studies dynamic load balancing technology in the OpenFlow environment, the Controller collected the server running status through the SNMP protocol, and calculated the aggregated load of the severs according to dynamic load balancing scheduling algorithm, the OpenFlow switch will forward the client's request to the server whose aggregated load is smallest, thus minimize the response time of the web server. In the OpenFlow network environment, using this method can brings high flexibility without additional equipment."}
{"_id": "48593450966066cd97cf5e1ad11dfd2c29ab13bc", "title": "Solving the 0-1 Knapsack Problem with Genetic Algorithms", "text": "This paper describes a research project on using Genetic Algorithms (GAs) to solve the 0-1 Knapsack Problem (KP). The Knapsack Problem is an example of a combinatorial optimization problem, which seeks to maximize the benefit of objects in a knapsack without exceeding its capacity. The paper contains three sections: brief description of the basic idea and elements of the GAs, definition of the Knapsack Problem, and implementation of the 0-1 Knapsack Problem using GAs. The main focus of the paper is on the implementation of the algorithm for solving the problem. In the program, we implemented two selection functions, roulette-wheel and group selection. The results from both of them differed depending on whether we used elitism or not. Elitism significantly improved the performance of the roulette-wheel function. Moreover, we tested the program with different crossover ratios and single and double crossover points but the results given were not that different."}
{"_id": "b58ac39705ba8b6df6c87e7f662d4041eeb032b6", "title": "Cosmetics as a Feature of the Extended Human Phenotype: Modulation of the Perception of Biologically Important Facial Signals", "text": "Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than genetic changes over evolutionary time. Here, we investigate one tool, the use of color cosmetics. In two studies, we asked viewers to rate the same female faces with or without color cosmetics, and we varied the style of makeup from minimal (natural), to moderate (professional), to dramatic (glamorous). Each look provided increasing luminance contrast between the facial features and surrounding skin. Faces were shown for 250 ms or for unlimited inspection time, and subjects rated them for attractiveness, competence, likeability and trustworthiness. At 250 ms, cosmetics had significant positive effects on all outcomes. Length of inspection time did not change the effect for competence or attractiveness. However, with longer inspection time, the effect of cosmetics on likability and trust varied by specific makeup looks, indicating that cosmetics could impact automatic and deliberative judgments differently. The results suggest that cosmetics can create supernormal facial stimuli, and that one way they may do so is by exaggerating cues to sexual dimorphism. Our results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth, and that the extended phenotype significantly influences perception of biologically important signals at first glance and at longer inspection."}
{"_id": "a5786a82cea67a46c45b18525fca9e0bb9254040", "title": "Floral dip: a simplified method for Agrobacterium-mediated transformation of Arabidopsis thaliana.", "text": "The Agrobacterium vacuum infiltration method has made it possible to transform Arabidopsis thaliana without plant tissue culture or regeneration. In the present study, this method was evaluated and a substantially modified transformation method was developed. The labor-intensive vacuum infiltration process was eliminated in favor of simple dipping of developing floral tissues into a solution containing Agrobacterium tumefaciens, 5% sucrose and 500 microliters per litre of surfactant Silwet L-77. Sucrose and surfactant were critical to the success of the floral dip method. Plants inoculated when numerous immature floral buds and few siliques were present produced transformed progeny at the highest rate. Plant tissue culture media, the hormone benzylamino purine and pH adjustment were unnecessary, and Agrobacterium could be applied to plants at a range of cell densities. Repeated application of Agrobacterium improved transformation rates and overall yield of transformants approximately twofold. Covering plants for 1 day to retain humidity after inoculation also raised transformation rates twofold. Multiple ecotypes were transformable by this method. The modified method should facilitate high-throughput transformation of Arabidopsis for efforts such as T-DNA gene tagging, positional cloning, or attempts at targeted gene replacement."}
{"_id": "4e08d766f6f81501aba9a026db4b3d0b634ea535", "title": "Device-to-device communications: The physical layer security advantage", "text": "In systems that allow device-to-device (D2D) communications, user pairs in close proximity communicate directly without using an access point (AP) as an intermediary. D2D communications leads to improved throughput, reduced power consumption and interference, and more flexible resource allocation. We show that the D2D paradigm also provides significantly improved security at the physical layer, by reducing exposure of the information to eavesdroppers from two relatively high-power transmissions to a single low-power hop. We derive the secrecy outage probability (SOP) for the D2D and cellular systems, and compare performance for D2D scenarios in the presence of a multi-antenna eavesdropper. The cellular approach is only seen to have an advantage in certain cases when the AP has a large number of antennas and perfect channel state information."}
{"_id": "79e78623f550a14fec5a05fcb31e641f347ccccd", "title": "Efficient Self-Interpretation in Lambda Calculus", "text": "We start by giving a compact representation schema for \u03bb-terms and show how this leads to an exceedingly small and elegant self-interpreter. We then define the notion of a self-reducer, and show how this too can be written as a small \u03bb-term. Both the self-interpreter and the self-reducer are proved correct. We finally give a constructive proof for the second fixed point theorem for the representation schema. All the constructions have been implemented on a computer, and experiments verify their correctness. Timings show that the self-interpreter and self-reducer are quite efficient, being about 35 and 50 times slower than direct execution using a call-byneed reduction strategy."}
{"_id": "a520c83130857a8892a8233a3d8a1381a65d6ae3", "title": "A 10000 frames/s CMOS digital pixel sensor", "text": "A 352 288 pixel CMOS image sensor chip with perpixel single-slope ADC and dynamic memory in a standard digital 0.18m CMOS process is described. The chip performs \u201csnapshot\u201d image acquisition, parallel 8-bit A/D conversion, and digital readout at continuous rate of 10 000 frames/s or 1 Gpixels/s with power consumption of 50 mW. Each pixel consists of a photogate circuit, a three-stage comparator, and an 8-bit 3T dynamic memory comprising a total of 37 transistors in 9.4 9.4 m with a fill factor of 15%. The photogate quantum efficiency is 13.6%, and the sensor conversion gain is 13.1 V/e . At 1000 frames/s, measured integral nonlinearity is 0.22% over a 1-V range, rms temporal noise with digital CDS is 0.15%, and rms FPN with digital CDS is 0.027%. When operated at low frame rates, on-chip power management circuits permit complete powerdown between each frame conversion and readout. The digitized pixel data is read out over a 64-bit (8-pixel) wide bus operating at 167 MHz, i.e., over 1.33 GB/s. The chip is suitable for general high-speed imaging applications as well as for the implementation of several still and standard video rate applications that benefit from high-speed capture, such as dynamic range enhancement, motion estimation and compensation, and image stabilization."}
{"_id": "dd58d7c7083c1ee6e281fd2a5293653d4aecca5d", "title": "Adaptation of the Nomophobia Questionnaire (NMP-Q) to Spanish in a sample of adolescents.", "text": "INTRODUCTION\nNomophobia is the fear of being out of mobile phone contact. People suffering from this anxiety disorder have feelings of stress and nervousness when access to their mobiles or computers is not possible. This work is an adaptation and validation study of the Spanish version of the Nomophobia Questionnaire (NMP-Q).\n\n\nMETHODOLOGY\nThe study included 306 students (46.1% males and 53.9% females) with ages ranging 13 to 19 years (Md=15.41\u00b11.22).\n\n\nRESULTS\nExploratory factor analysis revealed four dimensions that accounted for 64.4% of total variance. The ordinal \u03b1-value was 0.95, ranging from 0.75 to 0.92 across factors. Measure of stability was calculated by the testretest method (r=0.823). Indicators of convergence with the Spanish versions of the \u201cMobile Phone Problem Use Scale\u201d (r=0.654) and the \u201cGeneralized Problematic Internet Use Scale\u201d (r=0.531) were identified. Problematic mobile phone use patterns were examined taking the 15P, 80P and 95P percentiles as cut-off points. Scores of 39, 87 and 116 on NMP-Q corresponded to occasional, at-risk and problematic users, respectively.\n\n\nCONCLUSIONS\nPsychometric analysis shows that the Spanish version of the NMP-Q is a valid and reliable tool for the study of nomophobia."}
{"_id": "5baade37a1a5ff966fc1ffe3099056fae491a2c6", "title": "Novel air blowing control for balancing a unicycle robot", "text": "This paper presents the implementation of a novel control method of using air for balancing the roll angle of a unicycle robot. The unicycle robot is designed and built. The roll angle of the unicycle robot is controlled by blowing air while the pitch angle is also controlled by DC motors. Successful balancing performances are demonstrated."}
{"_id": "7c461664d3be5328d2b82b62f254513a8cfc3f69", "title": "REMAINING LIFE PREDICTION OF ELECTRONIC PRODUCTS USING LIFE CONSUMPTION MONITORING APPROACH", "text": "Various kinds of failures may occur in electronic products because of their life cycle environmental conditions including temperature, humidity, shock and vibration. Failure mechanism models are available to estimate the time to failure for most of these failures. Hence if the life cycle environment of a product can be determined, it is possible to assess the amount of damage induced and predict when the product might fail. This paper presents a life consumption monitoring methodology to determine the remaining life of a product. A battery powered data recorder is used to monitor the temperature, shock and vibration loads on a printed circuit board assembly placed under the hood of an automobile. The recorded data is used in conjunction with physics-of-failure models to determine the damage accumulation in the solder joints due to temperature and vibration loading. The remaining life of the solder joints of the test board is then obtained from the damage accumulation information. 1. Reliability Prediction Reliability is defined as the ability of a product to perform as intended (i.e., without failure and within specified performance limits) for a specified time, in its life cycle application environment. Over time, technology improvements have lead to higher I/O counts in components and circuit cards of electronic products. This has resulted in significant downsizing of interconnect thicknesses, making them more vulnerable in field applications. At the same time, increase in warranties and severe liabilities of electronic product failure have compelled manufacturers to predict the reliability of their products in field applications. An efficient reliability prediction scheme can be used for many purposes including [1]: \u2022 Logistics support (e.g., forecast warranty and life cycle costs, spare parts provisioning, availability)"}
{"_id": "5424a7cd6a6393ceb8759494ee6a7f7f16a0a179", "title": "You Can Run but You Can't Read: Preventing Disclosure Exploits in Executable Code", "text": "Code reuse attacks allow an adversary to impose malicious behavior on an otherwise benign program. To mitigate such attacks, a common approach is to disguise the address or content of code snippets by means of randomization or rewriting, leaving the adversary with no choice but guessing. However, disclosure attacks allow an adversary to scan a process - even remotely - and enable her to read executable memory on-the-fly, thereby allowing the just-in time assembly of exploits on the target site. In this paper, we propose an approach that fundamentally thwarts the root cause of memory disclosure exploits by preventing the inadvertent reading of code while the code itself can still be executed. We introduce a new primitive we call Execute-no-Read (XnR) which ensures that code can still be executed by the processor, but at the same time code cannot be read as data. This ultimately forfeits the self-disassembly which is necessary for just-in-time code reuse attacks (JIT-ROP) to work. To the best of our knowledge, XnR is the first approach to prevent memory disclosure attacks of executable code and JIT-ROP attacks in general. Despite the lack of hardware support for XnR in contemporary Intel x86 and ARM processors, our software emulations for Linux and Windows have a run-time overhead of only 2.2% and 3.4%, respectively."}
{"_id": "8415d972023133d0f3f731ec7d070663ba878624", "title": "TangiPaint: A Tangible Digital Painting System", "text": "TangiPaint is a digital painting application that provides the experience of working with real materials such as canvas and oil paint. Using fingers on the touchscreen of an iPad or iPhone, users can lay down strokes of thick, three-dimensional paint on a simulated canvas. Then using the Tangible Display technology introduced by Darling and Ferwerda [1], users can tilt the display screen to see the gloss and relief or \"impasto\" of the simulated surface, and modify it until they get the appearance they desire. Scene lighting can also be controlled through direct gesture-based interaction. A variety of \"paints\" with different color and gloss properties and substrates with different textures are available and new ones can be created or imported. The tangiPaint system represents a first step toward developing digital art media that look and behave like real materials. Introduction The development of computer-based digital art tools has had a huge impact on a wide range of creative fields. In commercial art, advertisements incorporating images, text, and graphic elements can be laid out and easily modified using digital illustration applications. In cinema, background matte elements can be digitally drawn, painted, and seamlessly integrated with live footage. In fine art, painters, printers, and engravers have also been embracing the new creative possibilities of computer-based art tools. The recent introduction of mobile, tablet-based computers with high-resolution displays, graphics processing units (GPUs) and multi-touch capabilities is also creating new possibilities for direct interaction in digital painting. However a significant limitation of most digital painting tools is that the final product is just a digital image (typically an array of RGB color values). All the colors, textures and lighting effects that we see when we look at the digital painting are \u201cbaked in\u201d to the image by the painter. In contrast, when a painter works with real tools and media, the color, gloss, and textural properties of the work are a natural byproduct of the creative process, and lighting effects such as highlights and shadows are produced directly through interactions of the surface with light in the environment. In this paper we introduce tangiPaint, a new tablet-based digital painting system that attempts to bridge the gap between the real and digital worlds. tangiPaint is a tangible painting application that allows artists to work with digital media that look and behave like real materials. Figure 1 shows screenshots from the tangiPaint application implemented on an Apple iPad2. In Figure 1a an artist has painted a number of brushstrokes on a blank canvas. Note that in addition to color, the strokes vary in gloss, thickness, and texture, and run out just as if they were real paint laid down with a real brush. The paints also layer and mix realistically as they would on a real canvas. Figure 1. Screenshots of paintings created using the tangiPaint system. Note the gloss and relief of the brushstrokes and the texture of the underlying canvas. The system allows direct interaction with the \u201cpainted\u201d surface both in terms of paint application and manipulation of surface orientation and"}
{"_id": "0295f6b715b6ed73fb9fbb0da528220004eb892e", "title": "Estimating Data Integration and Cleaning Effort", "text": "Data cleaning and data integration have been the topic of intensive research for at least the past thirty years, resulting in a multitude of specialized methods and integrated tool suites. All of them require at least some and in most cases significant human input in their configuration, during processing, and for evaluation. For managers (and for developers and scientists) it would be therefore of great value to be able to estimate the effort of cleaning and integrating some given data sets and to know the pitfalls of such an integration project in advance. This helps deciding about an integration project using cost/benefit analysis, budgeting a team with funds and manpower, and monitoring its progress. Further, knowledge of how well a data source fits into a given data ecosystem improves source selection. We present an extensible framework for the automatic effort estimation for mapping and cleaning activities in data integration projects with multiple sources. It comprises a set of measures and methods for estimating integration complexity and ultimately effort, taking into account heterogeneities of both schemas and instances and regarding both integration and cleaning operations. Experiments on two real-world scenarios show that our proposal is two to four times more accurate than a current approach in estimating the time duration of an integration process, and provides a meaningful breakdown of the integration problems as well as the required integration activities. 1. COMPLEXITY OF INTEGRATION AND CLEANING Data integration and data cleaning remain among the most human-work-intensive tasks in data management. Both require a clear understanding of the semantics of schema and data \u2013 a notoriously difficult task for machines. Despite much research and development of supporting tools and algorithms, state-of-the-art integration projects involve significant human resource cost. In fact, Gartner reports that 10% of all IT cost goes into enterprise software c \u00a92015, Copyright is with the authors. Published in Proc. 18th International Conference on Extending Database Technology (EDBT), March 23-27, 2015, Brussels, Belgium: ISBN 978-3-89318-067-7, on OpenProceedings.org. Distribution of this paper is permitted under the terms of the Creative Commons license CC-by-nc-nd 4.0. for data integration and data quality, and it is well recognized that most of those expenses are for human labor. Thus, when embarking on a data integration and cleaning project, it is useful and important to estimate in advance the effort and cost of the project and to find out which particular difficulties cause these. Such estimations help deciding whether to pursue the project in the first place, planning and scheduling the project using estimates about the duration of integration steps, budgeting in terms of cost or manpower, and finally monitoring the progress of the project. Cost estimates can also help integration service providers, IT consultants, and IT tool vendors to generate better price quotes for integration customers. Further, automatically generated knowledge of how well and how easy a data source fits into a given data ecosystem improves source selection. However, \u201cproject estimation for [. . . 
data integration projects is especially difficult, given the number of stakeholders involved across the organization as well as the unknowns of data complexity and quality.\u201d [14]. Any integration project has several steps and tasks, including requirements analysis, selection of data sources, determining the appropriate target database, data transformation specifications, testing, deployment, and maintenance. In this paper, we focus on exploring the database-related steps of integration and cleaning and automatically estimate their effort. 1.1 Challenges There are simple approaches to estimate in isolation the complexity of individual mapping and cleaning tasks. For the mapping, evaluating its complexity can be done by counting the matchings (i.e., correspondences) among elements. For the cleaning problem, a natural solution is to measure its complexity by counting the number of constraints on the target schema. However, as several integration approaches have shown, the interactive nature of these two problems is particularly complex [5, 11, 13]. For example, a data exchange problem takes as input two relational schemas, a transformation between them (a mapping), a set of target constraints, and answers two questions: whether it is possible to compute a valid solution for a given setting and how. Interestingly, to have a solution, certain conditions must hold on the target constraints, and extending the setting to more complex languages or data models brings tighter restrictions on the class of tractable cases [6, 12]. In our work, the main challenge is to estimate complexity and effort in a setting that goes beyond these ad-hoc studies while satisfying four main requirements: Generality: We require independence from the language used to express the data transformation. Furthermore, real cases often fail the existence of solution tests considered in formal frameworks (e.g., weak acyclicity condition [11]), but an automatic estimation is still desirable for them in practice. Completeness: Only a subset of the constraints that hold on the data are specified over the schema. In fact, business rules are commonly enforced at the application level and are not reflected in the metadata of the schemas, but should nevertheless be considered. Granularity: Details about the integration issues are crucial for consumption of the estimation. For a real understanding and proper planning, it is important to know which source and/or target attributes are the cause of problems and how (e.g., phone attributes in source and target schema have different formats). Existing estimators do not reason over actual data structures and thus make no statements about the causes of integration effort. Configurability and extensibility: The actual effort depends on subjective factors like the capabilities of available tools and the desired quality of the output. Therefore, intuitive, yet rich configuration settings for the estimation process are crucial for its applicability. Moreover, users must be able to extend the range of problems covered by the framework. These challenges cannot be tackled with existing syntactical methods to test the existence of solutions, as they work only in specific settings (Generality), are restricted to declarative specifications over the schemas (Completeness), and do not provide details about the actual problems (Granularity). 
On the other hand, as systems that compute solutions require human interaction to finalize the process [8, 13], they cannot be used for estimation purposes and their availability is orthogonal to our problem (Configurability). 1.2 Approaching Effort Estimation Figure 1 presents our view on the problem of estimating the effort of data integration. The starting point is an integration scenario with a target database and one or more source databases. The right-hand side of the figure shows the actual integration process performed by an integration specialist, where the goal is to move all instances of the source databases into the target database. Typically, a set of integration tools is used by the specialist. These tools have access to the source and target and support her in the tasks. The process takes a certain effort, which can be measured, for instance, as an amount of work in hours or days, or in a monetary unit. Our goal is to find that effort without actually performing the integration. Moreover, we want to find and present the problems that cause this effort. To this end, we developed a two-phase process as shown on the left-hand side of Figure 1. The first phase, the complexity assessment, reveals concrete integration challenges for the scenario. To address generality, these problems are exclusively determined by the source and target schemas and instances; if and how an integration practitioner deals with them is not addressed at this point. Thus, this first phase is independent of external parameters, such as the level of expertise of the specialist or the available integration tools. However, it is aided by the results of schema matching and data profiling tools, which analyze the participating databases and produce metadata about them (to achieve completeness). The output of the complexity assessment is a set of clearly defined problems, such as the number of violations for a constraint or the number of different value representations. This detailed breakdown of the problems achieves granularity and is useful for several tasks, even if not interpreted as an input to calculate actual effort. Examples of applications are source selection [9], i.e., given a set of integration candidates, find the source with the best \u2018fit\u2019; and support for data visualization [7], i.e., highlight parts of the schemas that are hard to integrate. Data"}
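The two naive baselines mentioned in this abstract (counting correspondences for mapping complexity, counting constraints and their violations for cleaning complexity) fit in a few lines. This is an illustration only, not the paper's framework; the correspondence list, rows, and constraint functions below are hypothetical:

```python
# Illustrative sketch (not the paper's framework): two naive complexity
# proxies -- counting schema correspondences, and counting target-constraint
# violations in the source instances.
from typing import Callable

def mapping_complexity(correspondences: list[tuple[str, str]]) -> int:
    """Proxy for mapping effort: number of source->target matchings."""
    return len(correspondences)

def cleaning_complexity(rows: list[dict], constraints: list[Callable[[dict], bool]]) -> int:
    """Proxy for cleaning effort: total constraint violations over all rows."""
    return sum(1 for row in rows for c in constraints if not c(row))

# Hypothetical example data
corr = [("src.phone", "tgt.phone_number"), ("src.name", "tgt.full_name")]
rows = [{"phone": "555-0100"}, {"phone": "n/a"}]
constraints = [lambda r: r["phone"].replace("-", "").isdigit()]
print(mapping_complexity(corr), cleaning_complexity(rows, constraints))  # 2 1
```

As the abstract argues, such isolated counts miss the interaction between mapping and cleaning, which is exactly what the proposed framework addresses.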
{"_id": "fcf9086047002a6a04385a80e3f4cf25aa1d59a5", "title": "Bidirectional Decoder Networks for Attention-Based End-to-End Offline Handwriting Recognition", "text": "Recurrent neural networks that can be trained end-to-end on sequence learning tasks provide promising benefits over traditional recognition systems. In this paper, we demonstrate the application of an attention-based long short-term memory decoder network for offline handwriting recognition and analyze the segmentation, classification and decoding errors produced by the model. We further extend the decoding network by a bidirectional topology together with an integrated length estimation procedure and show that it is superior to unidirectional decoder networks. Results are presented for the word and text line recognition tasks of the RIMES handwriting recognition database. The software used in the experiments is freely available for academic research purposes."}
{"_id": "e5ff81282457006da54e230dcff3a7ae45f7c278", "title": "A Piezoelectric Energy Harvester for Rotary Motion Applications: Design and Experiments", "text": "This paper investigates the analysis and design of a vibration-based energy harvester for rotary motion applications. The energy harvester consists of a cantilever beam with a tip mass and a piezoelectric ceramic attached along the beam that is mounted on a rotating shaft. Using this system, mechanical vibration energy is induced in the flexible beam due to the gravitational force applied to the tip mass while the hub is rotating. The piezoelectric transducer is used to convert the induced mechanical vibration energy into electricity. The equations of motion of the flexible structure are utilized along with the physical characteristics of the piezoelectric transducer to derive expressions for the electrical power. Furthermore, expressions for the optimum load resistance and maximum output power are obtained and validated experimentally using PVDF and PZT transducers. The results indicate that a maximum power of 6.4 mW at a shaft speed of 138 rad/s can be extracted by using a PZT transducer with dimensions 50.8 mm \u00d7 38.1 mm \u00d7 0.13 mm. This amount of power is sufficient to provide power for typical wireless sensors such as accelerometers and strain gauges."}
{"_id": "741fd80f0a31fe77f91b1cce3d91c544d6d5b1b2", "title": "Affective outcomes of virtual reality exposure therapy for anxiety and specific phobias: a meta-analysis.", "text": "Virtual reality exposure therapy (VRET) is an increasingly common treatment for anxiety and specific phobias. Lacking is a quantitative meta-analysis that enhances understanding of the variability and clinical significance of anxiety reduction outcomes after VRET. Searches of electronic databases yielded 52 studies, and of these, 21 studies (300 subjects) met inclusion criteria. Although meta-analysis revealed large declines in anxiety symptoms following VRET, moderator analyses were limited due to inconsistent reporting in the VRET literature. This highlights the need for future research studies that report uniform and detailed information regarding presence, immersion, anxiety and/or phobia duration, and demographics."}
{"_id": "0e762d93bcce174d13d5d682e702c5e3a28bed20", "title": "Hair Follicle Miniaturization in a Woolly Hair Nevus: A Novel \"Root\" Perspective for a Mosaic Hair Disorder.", "text": "Woolly hair nevus is a mosaic disorder characterized by unruly, tightly curled hair in a circumscribed area of the scalp. This condition may be associated with epidermal nevi. We describe an 11-year-old boy who initially presented with multiple patches of woolly hair and with epidermal nevi on his left cheek and back. He had no nail, teeth, eye, or cardiac abnormalities. Analysis of plucked hairs from patches of woolly hair showed twisting of the hair shaft and an abnormal hair cuticle. Histopathology of a woolly hair patch showed diffuse hair follicle miniaturization with increased vellus hairs."}
{"_id": "32595517f623429cd323e6552a063b99ddb78766", "title": "On Using Twitter to Monitor Political Sentiment and Predict Election Results", "text": "The body of content available on Twitter undoubtedly contains a diverse range of political insight and commentary. But, to what extent is this representative of an electorate? Can we model political sentiment effectively enough to capture the voting intentions of a nation during an election capaign? We use the recent Irish General Election as a case study for investigating the potential to model political sentiment through mining of social media. Our approach combines sentiment analysis using supervised learning and volume-based measures. We evaluate against the conventional election polls and the final election result. We find that social analytics using both volume-based measures and sentiment analysis are predictive and we make a number of observations related to the task of monitoring public sentiment during an election campaign, including examining a variety of sample sizes, time periods as well as methods for qualitatively exploring the underlying content."}
{"_id": "598552f19aa38e7d7921e074885aba9d18d22aa2", "title": "Robust Instance Recognition in Presence of Occlusion and Clutter", "text": "We present a robust learning based instance recognition framework from single view point clouds. Our framework is able to handle real-world instance recognition challenges, i.e, clutter, similar looking distractors and occlusion. Recent algorithms have separately tried to address the problem of clutter [9] and occlusion [16] but fail when these challenges are combined. In comparison we handle all challenges within a single framework. Our framework uses a soft label Random Forest [5] to learn discriminative shape features of an object and use them to classify both its location and pose. We propose a novel iterative training scheme for forests which maximizes the margin between classes to improve recognition accuracy, as compared to a conventional training procedure. The learnt forest outperforms template matching, DPM [7] in presence of similar looking distractors. Using occlusion information, computed from the depth data, the forest learns to emphasize the shape features from the visible regions thus making it robust to occlusion. We benchmark our system with the state-of-the-art recognition systems [9, 7] in challenging scenes drawn from the largest publicly available dataset. To complement the lack of occlusion tests in this dataset, we introduce our Desk3D dataset and demonstrate that our algorithm outperforms other methods in all settings."}
{"_id": "2059e79de003a1a177405cee9d8cdb8b555d91e8", "title": "Indoor Localization Using Camera Phones", "text": "Indoor localization has long been a goal of pervasive computing research. In this paper, we explore the possibility of determining user's location based on the camera images received from a smart phone. In our system, the smart phone is worn by the user as a pendant and images are periodically captured and transmitted over GPRS to a web server. The web server returns the location of the user by comparing the received images with images stored in a database. We tested our system inside the Computer Science department building. Preliminary results show that user's location can be determined correctly with more than 80% probability of success. As opposed to earlier solutions for indoor localization, this approach does not have any infrastructure requirements. The only cost is that of building an image database."}
{"_id": "b8c3586b0db6ab4a520e038fd04379944a93e328", "title": "Practical Processing of Mobile Sensor Data for Continual Deep Learning Predictions", "text": "We present a practical approach for processing mobile sensor time series data for continual deep learning predictions. The approach comprises data cleaning, normalization, capping, time-based compression, and finally classification with a recurrent neural network. We demonstrate the effectiveness of the approach in a case study with 279 participants. On the basis of sparse sensor events, the network continually predicts whether the participants would attend to a notification within 10 minutes. Compared to a random baseline, the classifier achieves a 40% performance increase (AUC of 0.702) on a withheld test set. This approach allows to forgo resource-intensive, domain-specific, error-prone feature engineering, which may drastically increase the applicability of machine learning to mobile phone sensor data."}
{"_id": "117a50fbdfd473e43e550c6103733e6cb4aecb4c", "title": "Maximum margin planning", "text": "Imitation learning of sequential, goal-directed behavior by standard supervised techniques is often difficult. We frame learning such behaviors as a maximum margin structured prediction problem over a space of policies. In this approach, we learn mappings from features to cost so an optimal policy in an MDP with these cost mimics the expert's behavior. Further, we demonstrate a simple, provably efficient approach to structured maximum margin learning, based on the subgradient method, that leverages existing fast algorithms for inference. Although the technique is general, it is particularly relevant in problems where A* and dynamic programming approaches make learning policies tractable in problems beyond the limitations of a QP formulation. We demonstrate our approach applied to route planning for outdoor mobile robots, where the behavior a designer wishes a planner to execute is often clear, while specifying cost functions that engender this behavior is a much more difficult task."}
{"_id": "11b6bdfe36c48b11367b27187da11d95892f0361", "title": "Maximum Entropy Inverse Reinforcement Learning", "text": "Recent research has shown the benefit of framing problems of imitation learning as solutions to Markov Decision Problems. This approach reduces learning to the problem of recovering a utility function that makes the behavior induced by a near-optimal policy closely mimic demonstrated behavior. In this work, we develop a probabilistic approach based on the principle of maximum entropy. Our approach provides a well-defined, globally normalized distribution over decision sequences, while providing the same performance guarantees as existing methods. We develop our technique in the context of modeling realworld navigation and driving behaviors where collected data is inherently noisy and imperfect. Our probabilistic approach enables modeling of route preferences as well as a powerful new approach to inferring destinations and routes based on partial trajectories."}
{"_id": "2532d0567c8334e4cadf282a73ffe399c1c32476", "title": "Learning Agents for Uncertain Environments (Extended Abstract)", "text": "This talk proposes a very simple \u201cbaseline architecture\u201d for a learning agent that can handle stochastic, partially observable environments. The architecture uses reinforcement learning together with a method for representing temporal processes as graphical models. I will discuss methods for leaming the parameters and structure of such representations from sensory inputs, and for computing posterior probabilities. Some open problems remain before we can try out the complete agent; more arise when we consider scaling up. A second theme of the talk will be whether reinforcement learning can provide a good model of animal and human learning. To answer this question, we must do inverse reinforcement learning: given the observed behaviour, what reward signal, if any, is being optimized? This seems to be a very interesting problem for the COLT, UAI, and ML communities, and has been addressed in econometrics under the heading of structural estimation of Markov decision processes. 1 Learning in uncertain environments AI is about the construction of intelligent agents, i.e., systems that perceive and act effectively (according to some performance measure) in an environment. I have argued elsewhere Russell and Norvig (1995) that most AI research has focused on environments that are static, deterministic, discrete, and fully observable. What is to be done when, as in the real world, the environment is dynamic, stochastic, continuous, and partially observable? \u2018This paper draws on a variety of research efforts supported by NSF @I-9634215), ONR (N00014-97-l-0941), and AR0 (DAAH04-96-1-0341). Permission to make digital or hard copies of all or p.art of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for prolit or commercial adwantage and that copies bear this notice and the full citation on the first page. To copy otherwise. to republish, to post on servers or to redistribute to lists, requires prior specific pemlission and/or a fee. COLT 98 Madison WI IJSA Copyright ACM 1998 1-5X1 13-057--0/9X/ 7...$5.00 In recent years, reinforcement learning (also called neurodynamic programming) has made rapid progress as an approachfor building agents automatically (Sutton, 1988; Kaelbling et al., 1996; Bertsekas & Tsitsiklis, 1996). The basic idea is that the performance measure is made available to the agent in the form of a rewardfunction specifying the reward for each state that the agent passes through. The performance measure is then the sum of the rewards obtained. For example, when a bumble bee forages, the reward function at each time step might be some combination of the distance flown (weighted negatively) and the nectar ingested. Reinforcement learning (RL) methods are essentially online algorithmd for solving Markovdecisionprocesses (MDPs). An MDP is defined by the reward function and a model, that is, the state transition probabilities conditioned on each possible action. RL algorithms can be model-based, where the agent learns a model, or model-free-e.g., Q-learning citeWatkins: 1989, which learns just a function Q(s, a) specifying the long-term value of taking action a in state s and acting optimally thereafter. Despite their successes, RL methods have been restricted largely tofully observable MDPs, in which the sensory input at each state is sufficient to identify the state. 
Obviously, in the real world, we must often deal with partially observable MDPs (POMDPs). Astrom (1965) proved that optimal decisions in POMDPs depend on the belief state b at each point in time, i.e., the posterior probability distribution over all possible actual states, given all evidence to date. The functions V and Q then become functions of b instead of s. Parr and Russell (1995) describe a very simple POMDP RL algorithm using an explicit representation of b as a vector of probabilities, and McCallum (1993) shows a way to approximate the belief state using recent percept sequences. Neither approach is likely to scale up to situations with large numbers of state variables and long-term temporal dependencies. What is needed is a way of representing the model compactly and updating the belief state efficiently given the model and each new observation. Dynamic Bayesian networks (Dean & Kanazawa, 1989) seem to have some of the required properties; in particular, they have significant advantages over other approaches such as Kalman filters and hidden Markov models. Our baseline architecture, shown in Figure 1, uses DBNs to represent and update the belief state as new sensor information arrives. Given a representation for b, the reward signal is used to learn a Q-function represented by some \u201cblack-box\u201d function approximator such as a neural network. Provided we can handle hybrid (dis-"}
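The belief-state update that the talk builds on is the standard discrete Bayes filter. A minimal sketch, with the transition and observation tensors as assumed notation (not taken from the talk):

```python
# Minimal belief-state update sketch for a discrete POMDP.
import numpy as np

def belief_update(b, a, o, T, O):
    """b: (S,) prior belief; T[a]: (S,S) with T[a][s,s'] = P(s'|s,a);
    O[a]: (S,O) with O[a][s',o] = P(o|s',a). Returns the posterior belief."""
    predicted = b @ T[a]                 # sum_s P(s'|s,a) b(s)
    posterior = predicted * O[a][:, o]   # weight by observation likelihood
    return posterior / posterior.sum()   # normalize

# Tiny two-state example with hypothetical numbers
T = {0: np.array([[0.9, 0.1], [0.2, 0.8]])}
O = {0: np.array([[0.7, 0.3], [0.1, 0.9]])}
print(belief_update(np.array([0.5, 0.5]), a=0, o=1, T=T, O=O))
```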
{"_id": "6f20506ce955b7f82f587a14301213c08e79463b", "title": "Algorithms for Inverse Reinforcement Learning", "text": ""}
{"_id": "77024583e21d0cb7591900795f43f1a42dd6acf8", "title": "Learning to search: Functional gradient techniques for imitation learning", "text": "Programming robot behavior remains a challenging task. While it is often easy to abstractly define or even demonstrate a desired behavior, designing a controller that embodies the same behavior is difficult, time consuming, and ultimately expensive. The machine learning paradigm offers the promise of enabling \u201cprogramming by demonstration\u201d for developing high-performance robotic systems. Unfortunately, many \u201cbehavioral cloning\u201d (Bain & Sammut, 1995; Pomerleau, 1989; LeCun et al., 2006) approaches that utilize classical tools of supervised learning (e.g. decision trees, neural networks, or support vector machines) do not fit the needs of modern robotic systems. These systems are often built atop sophisticated planning algorithms that efficiently reason far into the future; consequently, ignoring these planning algorithms in lieu of a supervised learning approach often leads to myopic and poor-quality robot performance. While planning algorithms have shown success in many real-world applications ranging from legged locomotion (Chestnutt et al., 2003) to outdoor unstructured navigation (Kelly et al., 2004; Stentz, 2009), such algorithms rely on fully specified cost functions that map sensor readings and environment models to quantifiable costs. Such cost functions are usually manually designed and programmed. Recently, a set of techniques has been developed that explore learning these functions from expert human demonstration. These algorithms apply an inverse optimal control approach to find a cost function for which planned behavior mimics an expert\u2019s demonstration. The work we present extends the Maximum Margin Planning (MMP) (Ratliff et al., 2006a) framework to admit learning of more powerful, non-linear cost functions. These algorithms, known collectively as LEARCH (LEArning to seaRCH ), are simpler to implement than most existing methods, more efficient than previous attempts at non-linearization (Ratliff et al., 2006b), more naturally satisfy common constraints on the cost function, and better represent our prior beliefs about the function\u2019s form. We derive and discuss the framework both mathematically and intuitively, and demonstrate practical realworld performance with three applied case-studies including legged locomotion, grasp planning, and autonomous outdoor unstructured navigation. The latter study includes hundreds of kilometers of autonomous traversal through complex natural environments. These case-studies address key challenges in applying the algorithm in practical settings that utilize state-of-the-art planners, and which may be constrained by efficiency requirements and imperfect expert demonstration."}
{"_id": "b3ec660a43361ea05b32b1d659210ece24361b6e", "title": "The benefits of hybridization", "text": "This article presents the impact of the performance of an FC and control strategies on the benefits of hybridization of fuel cell/supercapacitor hybrid sources for vehicle applications. Then, the storage device can complement the main source to produce the compatibility and performance characteristics needed in a load. The studies of two hybrid power systems for vehicle applications, FC/battery and FC/supercapacitor hybrid power sources, are explained. Experimental results with small-scale devices (a PEMFC of 500 W, 40 A, and 13 V; a lead-xicid battery module of 33 Ah and 48 V; and a supercapacitor module of 292 F, 500 A, and 30 V) in laboratory will illustrate the performance of the system during motor-drive cycles."}
{"_id": "d14ddc01cff72066c6655aa39f3e207e34fb8591", "title": "RF-MEMS Switches for Reconfigurable Integrated Circuits", "text": "This paper deals with a relatively new area of radio-frequency (RF) technology based on microelectromechanical systems (MEMS). RF MEMS provides a class of new devices and components which display superior high-frequency performance relative to conventional (usually semiconductor) devices, and which enable new system capabilities. In addition, MEMS devices are designed and fabricated by techniques similar to those of very large-scale integration, and can be manufactured by traditional batch-processing methods. In this paper, the only device addressed is the electrostatic microswitch\u2014perhaps the paradigm RF-MEMS device. Through its superior performance characteristics, the microswitch is being developed in a number of existing circuits and systems, including radio front-ends, capacitor banks, and time-delay networks. The superior performance combined with ultra-low-power dissipation and large-scale integration should enable new system functionality as well. Two possibilities addressed here are quasi-optical beam steering and electrically reconfigurable antennas."}
{"_id": "0d9ec57e9bfd360c5fa7cee2c2ef149f735b649d", "title": "Modeling Complex Air Traffic Management Systems", "text": "In this work, we propose the use of multi-agent system (MAS) models as the basis for predictive reasoning about various safety conditions and the performance of Air Traffic Management (ATM) Systems. To this end, we describe the engineering of a domain-specific MAS model that provides constructs for creating scenarios related to ATM systems and procedures; we then instantiate the constructs in the ATM model for different scenarios. As a case study we generate a model for a concept that provides the ability to maximize departure throughput at La Guardia airport (LGA) without impacting the flow of the arrival traffic; the model consists of approximately 1.5 hours real time flight data. During this time, between 130 and 150 airplanes are managed by four en-route controllers, three TRACON controllers, and one tower controller at LGA who is responsible for departures and arrivals. The planes are landing at approximately 36 to 40 planes an hour. A key contribution of this work is that the model can be extended to various air-traffic management scenarios and can serve as a template for engineering large-scale models in other domains."}
{"_id": "8281e847faa0afa1fa28a438f214ca94fddcbe7b", "title": "Union and Difference of Models, 10 years later", "text": "This paper contains a summary of the talk given by the author on the occasion of the MODELS 2013 most influential paper award. The talk discussed the original paper as published in 2003, the research work done by others afterwards and the author\u2019s personal reflection on the award. 1 Version Control of Software and System Models There are two main usage scenarios for design models in software and system development: models as sketches, that serve as a communication aid in informal discussions, and models as formal artifacts, to be analyzed, transformed into other artifacts, maintained and evolved during the whole software and system development process. In this second scenario, models are valuable assets that should be kept in a trusted repository. In a complex development project, these models will be updated often and concurrently by different developers. Therefore, there is a need for a version control system for models with optimistic locking. This is a system to compare, merge and store all versions of all models created within a development project. We can illustrate the use of a version control system for models as follows. Let us assume that the original model shown at the top of Figure 1 is edited simultaneously by two developers. One developer has decided that the subclass B is no longer necessary in the model. Simultaneously, the other developer has decided that class C should have a subclass D. The problem is to combine the contributions of both developers into a single model. This is the model shown at the bottom of Fig. 1. We presented the basic algorithms to solve this problem in the original paper published in the proceedings of the UML 2003 conference [1]. The proposed solution is based on calculating the final model as the merge of the differences between the original and the edited models. Figure 2 shows an example of the difference of two models, in this case the difference between the models edited by the developers and the original model. The result of the difference is not always a model, in a similar way that the difference between two natural numbers is not a natural number but a negative one. An example of this is shown in the bottom B C A Original Model"}
{"_id": "e8813fa43d641f70ede35477dd21599b9012cab7", "title": "Stakeholder Identification in the Requirements Engineering Process", "text": "Adequate, timely and effective consultation of relevant stakeholders is of paramount importance in the requirements engineering process. However, the thorny issue of making sure that all relevant stakeholders are consulted has received less attention than other areas which depend on it, such as scenario-based requirements, involving users in development, negotiating between different viewpoints and so on. The literature suggests examples of stakeholders, and categories of stakeholder, but does not provide help in identifying stakeholders for a specific system. In this paper, we discuss current work in stakeholder identification, propose an approach to identifying relevant stakeholders for a specific system, and propose future directions for the work. 1. What is a \u00d4stakeholder\u00d5? There is a large body of literature in the strategic management area which discusses organisations in terms of a stakeholder model. Stakeholder analysis, it is claimed, can be used to analyse an organisation\u00d5s performance and determine its future strategic direction. An oft-quoted definition of \u00d4stakeholder\u00d5, taken from a key reference in this literature is: \u00d4A stakeholder in an organisation is (by definition) any group or individual who can affect or is affected by the achievement of the organisation\u00d5s objectives.\u00d5 [10] A much broader definition, which has also been attributed to Freeman, is that a stakeholder is \u00d4anything influencing or influenced by\u00d5 the firm, but it has been claimed that this definition is problematic because it leads to the identification of a very broad set of stakeholders. It is important to distinguish between influencers and stakeholders because while some potential stakeholders may indeed be both stakeholders and influencers, some who have a real stake in an enterprise may have no influence, e.g. a job applicant, while some influencers may have no stake, e.g. the media [8]. Information systems (IS) researchers have also taken up the idea of stakeholders: \u00d4We define stakeholders as these participants together with any other individuals, groups or organisations whose actions can influence or be influenced by the development and use of the system whether directly or indirectly.\u00d5 [22] In software engineering, stakeholders have been defined as: \u00d4The people and organisations affected by the application\u00d5 [3] \u00d4System stakeholders are people or organisations who will be affected by the system and who have a direct or indirect influence on the system requirements\u00d5 [16] \u00d4Stakeholders are people who have a stake or interest in the project\u00d5 [4] A more explicit refinement of this definition is: \u00d4\u00c9 anyone whose jobs will be altered, who supplies or gains information from it, or whose power or influence within the organisation will increase or decrease.\u00d5 [7] They go on to say that \u00d4It will frequently be the case that the formal \u00d4client\u00d5 who orders the system falls very low on the list of those affected. Be very wary of changes which take power, influence or control from some stakeholders without returning something tangible in its place.\u00d5 [7] When faced with the practical problem of how to identify the set of stakeholders relevant to a specific project, these definitions are not particularly helpful. 
The main concern is that, although such definitions are usually accompanied by example groups of stakeholders, they are vague and may lead to consideration of inappropriate or incomplete groups of stakeholders. Categories of stakeholder include end-users, managers and others involved in the organisational processes influenced by the system, engineers responsible for system development and maintenance, customers of the organisation who will use the system to provide a service, external bodies such as regulators, domain experts, and so on. They will each have different goals, and will try to satisfy their own without recourse to others [16]. Cotterell and Hughes [4] suggest that stakeholders might be in one of three categories: internal to the project team; external to the project team, but internal to the organisation; and external to both the project team and the organisation. Newman and Lamming [20] suggest a different division: into those who will use the system directly or indirectly, and those who will be involved in developing the system. This distinction is also taken up with respect to the development of knowledge-based systems [22]. The set of stakeholders in the knowledge acquisition process and the set of stakeholders in the use of a knowledge-based system are not necessarily identical; they are likely to vary in membership, and for those members in common, the type and level of stake they have is likely to vary. The IS literature suggests a different division again, into \u2018hubs\u2019 or \u2018sponsors\u2019 and \u2018spokes\u2019 or \u2018adaptors\u2019, where the former are those initiating and sustaining the system, while the latter are those participating [21]. Macaulay [19] identifies four categories of stakeholder in any computer system: 1. Those responsible for design and development; 2. Those with a financial interest, responsible for its sale or purchase; 3. Those responsible for introduction and maintenance; 4. Those who have an interest in its use. But again, she offers no guidelines for identifying specific stakeholders for a given system. So far, we have not distinguished between individuals or groups and roles. As with other situations, the mapping between stakeholders and individuals or groups is not one-to-one. It is therefore more appropriate to think of stakeholders as roles rather than as specific people. In subsequent discussion, we shall use \u2018stakeholder\u2019 to mean \u2018stakeholder role\u2019. 2. Identifying stakeholders There has been little focus on the participants of the requirements engineering (RE) process, for example in terms of how to trace participants in the RE process and how to identify stakeholders [13]. All of the references cited above emphasise the importance of identifying stakeholders, and although they provide examples, or broad guidance for identifying them, none describes a model or a concrete approach for identifying stakeholders for a specific project. This deficiency has been noted in the management literature, and in the IS literature [21], where the approaches have been criticised for either assuming that stakeholders are \u2018obvious\u2019, or for providing broad categories which are too generic to be of practical use. Expert identification for knowledge-based systems development has similarities with stakeholder identification in RE, although here too there has been an assumption that experts are readily identifiable [22]. 
Pouloudi and Whitley [21] suggest four principles of stakeholder identification, and describe an approach which is based on these principles, and which they used to identify stakeholders in the drug use management domain. Lyytinen and Hirschheim [17] also provide some guidance for stakeholder identification for IS, while acknowledging that \u2018the identification of the set of all stakeholders is far from trivial\u2019. Methods for requirements engineering, e.g. KAOS [5], do not directly support stakeholder identification; it seems to be assumed that it is straightforward. The task of identifying actors for use case development [15] has similarities with stakeholder identification, but is targeted only at a fraction of system stakeholders. Stakeholders are related to each other and interact with each other [22,11,17]. Interactions between them include: exchanging information, products, or instructions, or providing supporting tasks. Information about the stakeholders and the nature of their relationships and interactions needs to be identified and recorded. Dimensions of importance are: relationships between stakeholders, the relationship of each stakeholder to the system, and the priority to be given to each stakeholder\u2019s view. This information is needed to manage, interpret, balance and process stakeholder input. Reconciling different stakeholder views is beyond this paper, but is considered in [12]. 3. An approach to identifying stakeholders for requirements engineering Here, we propose an approach to discovering all relevant stakeholders of a specific system that we believe is domain-independent, effective and pragmatic. We draw on the literature cited above, and on aspects of ethical decision-making in software systems [18]. Our starting point is a set of stakeholders that we refer to as \u2018baseline\u2019 stakeholders. From these, we can recognise \u2018supplier\u2019 stakeholders and \u2018client\u2019 stakeholders: the former provide information or supporting tasks to the baseline, and the latter process or inspect the products of the baseline. Other stakeholders that we call \u2018satellites\u2019 interact with the baseline in a variety of ways. \u2018Interaction\u2019 may involve communicating, reading a set of rules or guidelines, searching for information and so on. Our approach focuses on interactions between stakeholders rather than relationships between the system and the stakeholder, because they are easier to follow. Figure 1 illustrates the main elements of stakeholder identification. Figure 1. The main elements of stakeholder identification addressed by our approach 3.1 Baseline stakeholders We have identified four groups of baseline stakeholder: users, developers, legislators and decision-makers. We have dubbed these \u2018baseline\u2019 because the web of stakeholders and their relationships can be identified from them. The nature of each group is explored below. 3.1.1 Users. The term \u2018user\u2019 has many interpretations. For example, Holtzblatt and Jones [14] include in their definition of \u2018users\u2019 those who manage direct users, those who receive products from the system, those who test the system, those who make the purchasing decision, and those who use competitive products. Eason [9] identifies three categories of user: primary, secondary and te"}
{"_id": "e1b9ffd685908be70165ce89b83f541ceaf71895", "title": "A Smart Cloud Robotic System Based on Cloud Computing Services", "text": "In this paper, we present a smart service robotic system based on cloud computing services. The design and implementation of infrastructure, computation components and communication components are introduced. The proposed system can alleviate the complex computation and storage load of robots to cloud and provide various services to the robots. The computation components can dynamically allocate resources to the robots. The communication components allow easy access of the robots and provide flexible resource management. Furthermore, we modeled the task-scheduling problem and proposed a max-heaps algorithm. The simulation results demonstrate that the proposed algorithm minimized the overall task costs."}
{"_id": "1471511609f7185544703f0e22777a64c6681f38", "title": "The spread of behavior in an online social network experiment.", "text": "How do social networks affect the spread of behavior? A popular hypothesis states that networks with many clustered ties and a high degree of separation will be less effective for behavioral diffusion than networks in which locally redundant ties are rewired to provide shortcuts across the social space. A competing hypothesis argues that when behaviors require social reinforcement, a network with more clustering may be more advantageous, even if the network as a whole has a larger diameter. I investigated the effects of network structure on diffusion by studying the spread of health behavior through artificially structured online communities. Individual adoption was much more likely when participants received social reinforcement from multiple neighbors in the social network. The behavior spread farther and faster across clustered-lattice networks than across corresponding random networks."}
{"_id": "55eab388ca816eff2c9d4488ef5024840e444854", "title": "Voltage buffer compensation using Flipped Voltage Follower in a two-stage CMOS op-amp", "text": "In Miller and current buffer compensation techniques, the compensation capacitor often loads the output node. If a voltage buffer is used in feedback, the compensation capacitor obviates the loading on the output node. In this paper, we introduce an implementation of a voltage buffer compensation using a Flipped Voltage Follower (FVF) for stabilizing a two-stage CMOS op-amp. The op-amps are implemented in a 180-nm CMOS process with a power supply of 1.8V while operating with a quiescent current of 110\u03bcA. Results indicate that the proposed voltage buffer compensation using FVF improves the Unity Gain Frequency from 5.5MHz to 12.2MHz compared to Miller compensation. Also, the proposed technique enhances the transient response while lowering the compensation capacitance by 47% and 17.7% compared to Miller and common-drain compensation topologies. Utilization of FVF or its variants as a voltage buffer in a feedback compensation network has wide potential applications in the analog design space."}
{"_id": "3a4643a0c11a866e6902baa43e3bee9e3a68a3c6", "title": "Dynamic question generation system for web-based testing using particle swarm optimization", "text": "One aim of testing is to identify weaknesses in students\u2019 knowledge. Computerized tests are now one of the most important ways to judge learning, and, selecting tailored questions for each learner is a significant part of such tests. Therefore, one current trend is that computerized adaptive tests (CATs) not only assist teachers in estimating the learning performance of students, but also facilitate understanding of problems in their learning process. These tests, must effectively and efficiently select questions from a large-scale item bank, and to cope with this problem we propose a dynamic question generation system for web-based tests using the novel approach of particle swarm optimization (PSO). The dynamic question generation system is built to select tailored questions for each learner from the item bank to satisfy multiple assessment requirements. Furthermore, the proposed approach is able to efficiently generate near-optimal questions that satisfy multiple assessment criteria. With a series of experiments, we compare the efficiency and efficacy of the PSO approach with other approaches. The experimental results show that the PSO approach is suitable for the selection of near-optimal questions from large-scale item banks. 2007 Elsevier Ltd. All rights reserved."}
{"_id": "f8ca1142602ce7b85b24f34c4de7bb2467d2c952", "title": "Deep Embedding for Spatial Role Labeling", "text": "This paper introduces the visually informed embedding of word (VIEW), a continuous vector representation for a word extracted from a deep neural model trained using the Microsoft COCO data set to forecast the spatial arrangements between visual objects, given a textual description. The model is composed of a deep multilayer perceptron (MLP) stacked on the top of a Long Short Term Memory (LSTM) network, the latter being preceded by an embedding layer. The VIEW is applied to transferring multimodal background knowledge to Spatial Role Labeling (SpRL) algorithms, which recognize spatial relations between objects mentioned in the text. This work also contributes with a new method to select complementary features and a fine-tuning method for MLP that improves the F1 measure in classifying the words into spatial roles. The VIEW is evaluated with the Task 3 of SemEval-2013 benchmark data set, SpaceEval."}
{"_id": "1ad4d974e4732a9e0a3c857eb182275fb296e62d", "title": "To recognize shapes, first learn to generate images.", "text": "The uniformity of the cortical architecture and the ability of functions to move to different areas of cortex following early damage strongly suggest that there is a single basic learning algorithm for extracting underlying structure from richly structured, high-dimensional sensory data. There have been many attempts to design such an algorithm, but until recently they all suffered from serious computational weaknesses. This chapter describes several of the proposed algorithms and shows how they can be combined to produce hybrid methods that work efficiently in networks with many layers and millions of adaptive connections."}
{"_id": "0177450c77baa6c48c3f1d3180b6683fd67f23b2", "title": "Design and optimization of thermo-mechanical reliability in wafer level packaging", "text": "Article history: Received 4 July 2009 Received in revised form 16 November 2009 Available online 29 January 2010 0026-2714/$ see front matter 2009 Elsevier Ltd. A doi:10.1016/j.microrel.2009.11.010 * Corresponding author. Address: Department of M University, P.O. Box 10028, Beaumont, TX 77710, USA +1 409 880 8121. E-mail address: xuejun.fan@lamar.edu (X.J. Fan). In this paper, a variety of wafer level packaging (WLP) structures, including both fan-in and fan-out WLPs, are investigated for solder joint thermo-mechanical reliability performance, from a structural design point of view. The effects of redistribution layer (RDL), bump structural design/material selection, polymer-cored ball application, and PCB design/material selection are studied. The investigation focuses on four different WLP technologies: standard WLP (ball on I/O WLP), ball on polymer WLP without under bump metallurgy (UBM) layer, ball on polymer WLP with UBM layer, and encapsulated copper post WLP. Ball on I/O WLP, in which solder balls are directly attached to the metal pads on silicon wafer, is used as a benchmark for the analysis. 3-D finite element modeling is performed to investigate the effects of WLP structures, UBM layer, polymer film material properties (in ball on polymer WLP), and encapsulated epoxy material properties (in copper post WLP). Both ball on polymer and copper post WLPs have shown great reliability improvement in thermal cycling. For ball on polymer WLP structures, polymer film between silicon and solder balls creates a \u2018cushion\u2019 effect to reduce the stresses in solder joints. Such cushion effect can be achieved either by an extremely compliant film or a \u2018hard\u2019 film with a large coefficient of thermal expansion. Encapsulated copper post WLP shows the best thermo-mechanical performance among the four WLP structures. Furthermore, for a fan-out WLP, it has been found that the critical solder balls are the outermost solder balls under die-area, where the maximum thermal mismatch takes place. In a fan-out WLP package, chip size, other than package size, determines the limit of solder joint reliability. This paper also discusses the polymer-cored solder ball applications to enhance thermo-mechanical reliability of solder joints. Finally, both experimental and finite element analysis have demonstrated that making corner balls non-electrically connected can greatly improve the WLP thermomechanical reliability. 2009 Elsevier Ltd. All rights reserved."}
{"_id": "9d5f36b92ac155fccdae6730660ab44d46ad501a", "title": "Introducing Expected Returns into Risk Parity Portfolios : A New Framework for Tactical and Strategic Asset Allocation", "text": "Risk parity is an allocation method used to build diversified portfolios that does not rely on any assumptions of expected returns, thus placing risk management at the heart of the strategy. This explains why risk parity became a popular investment model after the global financial crisis in 2008. However, risk parity has also been criticized because it focuses on managing risk concentration rather than portfolio performance, and is therefore seen as being closer to passive management than active management. In this article, we show how to introduce assumptions of expected returns into risk parity portfolios. To do this, we consider a generalized risk measure that takes into account both the portfolio return and volatility. However, the trade-off between performance and volatility contributions creates some difficulty, while the risk budgeting problem must be clearly defined. After deriving the theoretical properties of such risk budgeting portfolios, we apply this new model to asset allocation. First, we consider long-term investment policy and the determination of strategic asset allocation. We then consider dynamic allocation and show how to build risk parity funds that depend on expected returns."}
{"_id": "b6bb01c270536933cb0778e7d14df32e207b979e", "title": "wradlib \u2013 An Open Source Library for Weather Radar Data Processing", "text": "Observation data from weather radars is deemed to have great potential in hydrology and meteorology for forecasting of severe weather or floods in small and urban catchments by providing high resolution measurements of precipitation. With the time series from operational weather radar networks attaining lengths suitable for the long term calibration of hydrological models, the interest in using this data is growing. There are, however, several challenges impeding a widespread use of weather radar data. The first being a multitude of different file formats for data storage and exchange. Although the OPERA project [1] has taken steps towards harmonizing the data exchange inside the European radar network [2], different dialects still exist in addition to a large variety of legacy formats. The second barrier is what we would like to describe as the product dilemma. A great number of potential applications also implies a great number of different and often competing requirements as to the quality of the data. As an example, while one user might be willing to accept more false alarms in a clutter detection filter, e.g. to avoid erroneous data assimilation results, another might want a more conservative product in order not to miss the small scale convection that leads to a hundred year flood in a small head catchment. A single product from a radar operator, even if created with the best methods currently available, will never be able to accommodate all these needs simultaneously. Thus the product will either be a compromise or it will accommodate one user more than the other. Often the processing chain needs to be in a specific order, where a change of a certain processing step is impossible without affecting the results of all following steps. Sometimes some post-processing of the product might be possible, but if not, the user\u2019s only choice is to either take the product as is or leave it. If a user should decide that he would take a raw radar product and try to do customized corrections, he is faced with basically reinventing the wheel, writing routines for data I/O, error corrections, georeferencing and visualization, trying to find relevant literature and to extract algorithms from publications, which, often enough, do not offer all implementation details. This takes a lot of time and effort, which could be used much more effectively, if standard algorithms were available in a well documented and easy to use manner. wradlib is intended to provide this collection of routines and algorithms in order to facilitate and foster the use of weather radar data in as many disciplines as possible including research, application development and teaching."}
{"_id": "372d6dcd30f63a1268067f7db6de50c5a49bcf3c", "title": "A survey of hardware Trojan threat and defense", "text": "Hardware Trojans (HTs) can be implanted in security-weak parts of a chip with various means to steal the internal sensitive data or modify original functionality, which may lead to huge economic losses and great harm to society. Therefore, it is very important to analyze the specific HT threats existing in the whole life cycle of integrated circuits (ICs), and perform protection against hardware Trojans. In this paper, we elaborate an IC market model to illustrate the potential HT threats faced by the parties involved in the model. Then we categorize the recent research advances in the countermeasures against HT attacks. Finally, the challenges and prospects for HT defense are illuminated. & 2016 Elsevier B.V. All rights reserved."}
{"_id": "596bcab1b62bf26d5f172154302f28d705020a1d", "title": "An overview of Fog computing and its security issues", "text": "Fog computing is a paradigm that extends Cloud computing and services to the edge of the network. Similar to Cloud, Fog provides data, compute, storage and application services to end users. In this article, we elaborate the motivation and advantages of Fog computing and analyse its applications in a series of real scenarios, such as Smart Grid, smart traffic lights in vehicular networks and software defined networks. We discuss the state of the art of Fog computing and similar work under the same umbrella. Distinguished from other reviewing work of Fog computing, this paper further discloses the security and privacy issues according to current Fog computing paradigm. As an example, we study a typical attack, man-in-the-middle attack, for the discussion of system security in Fog computing. We investigate the stealthy features of this attack by examining its CPU and memory consumption on Fog device. In addition, we discuss the authentication and authorization techniques that can be used in Fog computing. An example of authentication techniques is introduced to address the security scenario where the connection between Fog and Cloud is fragile. Copyright \u00a9 2015 John Wiley & Sons, Ltd."}
{"_id": "4560491820e0ee49736aea9b81d57c3939a69e12", "title": "Investigating the Impact of Data Volume and Domain Similarity on Transfer Learning Applications", "text": "Transfer learning allows practitioners to recognize and apply knowledge learned in previous tasks (source task) to new tasks or new domains (target task), which share some commonality. The two important factors impacting the performance of transfer learning models are: (a) the size of the target dataset, and (b) the similarity in distribution between source and target domains. Thus far, there has been little investigation into just how important these factors are. In this paper, we investigate the impact of target dataset size and source/target domain similarity on model performance through a series of experiments. We find that more data is always beneficial, and model performance improves linearly with the log of data size, until we are out of data. As source/target domains differ, more data is required and fine tuning will render better performance than feature extraction. When source/target domains are similar and data size is small, fine tuning and feature extraction renders equivalent performance. Our hope is that by beginning this quantitative investigation on the effect of data volume and domain similarity in transfer learning we might inspire others to explore the significance of data in developing more accurate statistical models."}
{"_id": "55065876334df2698da179898d2f1be7501beca1", "title": "Determining the effectiveness of prompts for self-regulated learning in problem-solving scenarios", "text": "Cognitive scientists have studied internal cognitive structures, processes, and systems for decades in order to understand how they function in human learning. In order to solve challenging tasks in problem situations, learners not only have to perform cognitive activities, e.g., activating existing cognitive structures or organizing new information, they also have to set specific goals, plan their activities, monitor their performance during the problem-solving process, and evaluate the efficiency of their actions. This paper reports an experimental study with 98 participants where effective instructional interventions for self-regulated learning within problemsolving processes are investigated. Furthermore, an automated assessment and analysis methodology for determining the quality of learning outcomes is introduced. The results indicate that generic prompts are an important aid for developing cognitive structures while solving problems."}
{"_id": "91908a5b9e3df25a31112818485bd8a4b988cec3", "title": "Synchronising movements with the sounds of a virtual partner enhances partner likeability", "text": "Previous studies have demonstrated that synchronising movements with other people can influence affiliative behaviour towards them. While research has focused on synchronisation with visually observed movement, synchronisation with a partner who is heard may have similar effects. We replicate findings showing that synchronisation can influence ratings of likeability of a partner, but demonstrate that this is possible with virtual interaction, involving a video of a partner. Participants performed instructed synchrony in time to sounds instead of the observable actions of another person. Results show significantly higher ratings of likeability of a partner after moving at the same time as sounds attributed to that partner, compared with moving in between sounds. Objectively quantified synchrony also correlated with ratings of likeability. Belief that sounds were made by another person was manipulated in Experiment 2, and results demonstrate that when sounds are attributed to a computer, ratings of likeability are not affected by moving in or out of time. These findings demonstrate that interaction with sound can be experienced as social interaction in the absence of genuine interpersonal contact, which may help explain why people enjoy engaging with recorded music."}
{"_id": "c467ae4d7209bb6bc06f2d7e8f923e1ebf39213d", "title": "KeyGraph: Automatic Indexing by Co-Occurrence Graph Based on Building Construction Metaphor", "text": "In this paper, we present an algorithm for extracting keywords representing the asserted main point in a document, without relying on external devices such as natural language processing tools or a document corpus. Our algorithm KeyGraph is based on the segmentation of a graph, representing the co-occurrence between terms in a document, into clusters. Each cluster corresponds to a concept on which author's idea is based, and top ranked terms by a statistic based on each term's relationship to these clusters are selected as keywords. This strategy comes from considering that a document is constructed like a building for expressing new ideas based on traditional concepts. The experimental results show that thus extracted terms match author's point quite accurately, even though KeyGraph does not use each term's average frequency in a corpus, i.e., KeyGraph is a contentsensitive, domain independent device of indexing."}
{"_id": "26108bc64c03282bd5ee230afe25306c91cc5cd6", "title": "Community detection using a neighborhood strength driven Label Propagation Algorithm", "text": "Studies of community structure and evolution in large social networks require a fast and accurate algorithm for community detection. As the size of analyzed communities grows, complexity of the community detection algorithm needs to be kept close to linear. The Label Propagation Algorithm (LPA) has the benefits of nearly-linear running time and easy implementation, thus it forms a good basis for efficient community detection methods. In this paper, we propose new update rule and label propagation criterion in LPA to improve both its computational efficiency and the quality of communities that it detects. The speed is optimized by avoiding unnecessary updates performed by the original algorithm. This change reduces significantly (by order of magnitude for large networks) the number of iterations that the algorithm executes. We also evaluate our generalization of the LPA update rule that takes into account, with varying strength, connections to the neighborhood of a node considering a new label. Experiments on computer generated networks and a wide range of social networks show that our new rule improves the quality of the detected communities compared to those found by the original LPA. The benefit of considering positive neighborhood strength is pronounced especially on real-world networks containing sufficiently large fraction of nodes with high clustering coefficient."}
{"_id": "bda5d3785886f9ad9e966fa060aaaaf470436d44", "title": "Fast Supervised Discrete Hashing", "text": "Learning-based hashing algorithms are \u201chot topics\u201d because they can greatly increase the scale at which existing methods operate. In this paper, we propose a new learning-based hashing method called \u201cfast supervised discrete hashing\u201d (FSDH) based on \u201csupervised discrete hashing\u201d (SDH). Regressing the training examples (or hash code) to the corresponding class labels is widely used in ordinary least squares regression. Rather than adopting this method, FSDH uses a very simple yet effective regression of the class labels of training examples to the corresponding hash code to accelerate the algorithm. To the best of our knowledge, this strategy has not previously been used for hashing. Traditional SDH decomposes the optimization into three sub-problems, with the most critical sub-problem - discrete optimization for binary hash codes - solved using iterative discrete cyclic coordinate descent (DCC), which is time-consuming. However, FSDH has a closed-form solution and only requires a single rather than iterative hash code-solving step, which is highly efficient. Furthermore, FSDH is usually faster than SDH for solving the projection matrix for least squares regression, making FSDH generally faster than SDH. For example, our results show that FSDH is about 12-times faster than SDH when the number of hashing bits is 128 on the CIFAR-10 data base, and FSDH is about 151-times faster than FastHash when the number of hashing bits is 64 on the MNIST data-base. Our experimental results show that FSDH is not only fast, but also outperforms other comparative methods."}
{"_id": "c78755a1aa972aa59c307ebc284f9e3cdea00784", "title": "Vascular structures in dermoscopy*", "text": "Dermoscopy is an aiding method in the visualization of the epidermis and dermis. It is usually used to diagnose melanocytic lesions. In recent years, dermoscopy has increasingly been used to diagnose non-melanocytic lesions. Certain vascular structures, their patterns of arrangement and additional criteria may demonstrate lesion-specific characteristics. In this review, vascular structures and their arrangements are discussed separately in the light of conflicting views and an overview of recent literature."}
{"_id": "65541ef6e41cd870eb2d8601c401ac03ac62aacb", "title": "Audio-visual emotion recognition : A dynamic , multimodal approach", "text": "Designing systems able to interact with students in a natural manner is a complex and far from solved problem. A key aspect of natural interaction is the ability to understand and appropriately respond to human emotions. This paper details our response to the continuous Audio/Visual Emotion Challenge (AVEC'12) whose goal is to predict four affective signals describing human emotions. The proposed method uses Fourier spectra to extract multi-scale dynamic descriptions of signals characterizing face appearance, head movements and voice. We perform a kernel regression with very few representative samples selected via a supervised weighted-distance-based clustering, that leads to a high generalization power. We also propose a particularly fast regressor-level fusion framework to merge systems based on different modalities. Experiments have proven the efficiency of each key point of the proposed method and our results on challenge data were the highest among 10 international research teams. Key-words Affective computing, Dynamic features, Multimodal fusion, Feature selection, Facial expressions. ACM Classification"}
{"_id": "1c56a787cbf99f55b407bb1992d60fdfaf4a69b7", "title": "Lane detection using Fourier-based line detector", "text": "In this paper a new approach to lane marker detection problem is introduced as a significant complement for a semi/fully autonomous driver assistance system. The method incorporates advance line detection using multilayer fractional Fourier transform (MLFRFT) and the state-of-the-art advance lane detector (ALD). Experimental results have shown a considerable reduction in computational complexity."}
{"_id": "003715e5bda2dfd2373c937ded390e469e8d84b1", "title": "Directed diffusion: a scalable and robust communication paradigm for sensor networks", "text": "Advances in processor, memory and radio technology will enable small and cheap nodes capable of sensing, communication and computation. Networks of such nodes can coordinate to perform distributed sensing of environmental phenomena. In this paper, we explore the directed diffusion paradigm for such coordination. Directed diffusion is datacentric in that all communication is for named data. All nodes in a directed diffusion-based network are application-aware. This enables diffusion to achieve energy savings by selecting empirically good paths and by caching and processing data in-network. We explore and evaluate the use of directed diffusion for a simple remote-surveillance sensor network."}
{"_id": "16e6938256f8fd82de59aac2257805f692278f03", "title": "Adaptive Protocols for Information Dissemination in Wireless Sensor Networks", "text": "In this paper, we present a family of adaptive protocols, called SPIN (Sensor Protocols for Information via Negotiation), that efficiently disseminates information among sensors in an energy-constrained wireless sensor network. Nodes running a SPIN communication protocol name their data using high-level data descriptors, called metadata. They use meta-data negotiations to eliminate the transmission of redundant data throughout the network. In addition, SPIN nodes can base their communication decisions both upon application-specific knowledge of the data and upon knowledge of the resources that are available to them. This allows the sensors to efficiently distribute data given a limited energy supply. We simulate and analyze the performance of two specific SPIN protocols, comparing them to other possible approaches and a theoretically optimal protocol. We find that the SPIN protocols can deliver 60% more data for a given amount of energy than conventional approaches. We also find that, in terms of dissemination rate and energy usage, the SPIN protocols perform close to the theoretical optimum."}
{"_id": "006df3db364f2a6d7cc23f46d22cc63081dd70db", "title": "Dynamic source routing in ad hoc wireless networks", "text": "An ad hoc network is a collection of wireless mobile hosts forming a temporary network without the aid of any established infrastructure or centralized administration. In such an environment, it may be necessary for one mobile host to enlist the aid of other hosts in forwarding a packet to its destination, due to the limited range of each mobile host\u2019s wireless transmissions. This paper presents a protocol for routing in ad hoc networks that uses dynamic source routing. The protocol adapts quickly to routing changes when host movement is frequent, yet requires little or no overhead during periods in which hosts move less frequently. Based on results from a packet-level simulation of mobile hosts operating in an ad hoc network, the protocol performs well over a variety of environmental conditions such as host density and movement rates. For all but the highest rates of host movement simulated, the overhead of the protocol is quite low, falling to just 1% of total data packets transmitted for moderate movement rates in a network of 24 mobile hosts. In all cases, the difference in length between the routes used and the optimal route lengths is negligible, and in most cases, route lengths are on average within a factor of 1.01 of optimal."}
{"_id": "49ed15db181c74c7067ec01800fb5392411c868c", "title": "Epidemic Algorithms for Replicated Database Maintenance", "text": "When a database is replicated at many sites, maintaining mutual consistency among the sites in the face of updates is a significant problem. This paper describes several randomized algorithms for distributing updates and driving the replicas toward consistency. The algorithms are very simple and require few guarantees from the underlying communication system, yet they ensure that the effect of every update is eventually reflected in all replicas. The cost and performance of the algorithms are tuned by choosing appropriate distributions in the randomization step. The algorithms are closely analogous to epidemics, and the epidemiology literature aids in understanding their behavior. One of the algorithms has been implemented in the Clearinghouse servers of the Xerox Corporate Internet, solving long-standing problems of high traffic and database inconsistency. An earlier version of this paper appeared in the Proceedings of the Sixth Annual ACM Symposium on Principles of Distributed Computing, Vancouver, August 1987, pages 1-12. CR"}
{"_id": "53b85e4066944b1753aae8e3418028a67d9372e1", "title": "The Chemical Basis of Morphogenesis", "text": "The paper discussed is by Alan Turing. It was published in 1952 and presents an idea of how periodic patterns could be formed in nature. Looking on periodic structures \u2013 like the stripes on tigers, the dots on leopards or the whirly leaves on woodruff \u2013 it is hard to imagine those patterns are formated by pure chance. On the other hand, thinking of the unbelievable multitude of possible realizations, the patterns can not all be exactly encoded in the genes. The paper \u201cThe Chemical Basis of Morphogenesis\u201d proposes a possible mechanism due to an interaction of two \u201cmorphogenes\u201d which react and diffuse through the tissue. Fulfilling some constrains regarding the diffusibilities and the behaviour of the reactions, this mechanism \u2013 called Turing mechanism \u2013 can lead to a pattern of concentrations defining the structure we see."}
{"_id": "6b60dd4cac27ac7b0f28012322146967c2a388ca", "title": "An Overview of 3D Software Visualization", "text": "Software visualization studies techniques and methods for graphically representing different aspects of software. Its main goal is to enhance, simplify and clarify the mental representation a software engineer has of a computer system. During many years, visualization in 2D space has been actively studied, but in the last decade, researchers have begun to explore new 3D representations for visualizing software. In this article, we present an overview of current research in the area, describing several major aspects like: visual representations, interaction issues, evaluation methods and development tools. We also perform a survey of some representative tools to support different tasks, i.e., software maintenance and comprehension, requirements validation and algorithm animation for educational purposes, among others. Finally, we conclude identifying future research directions."}
{"_id": "f49b78324e01d783a66b3b777965b08a7b904a74", "title": "Ultra-wideband, short-pulse ground-penetrating radar: simulation and measurement", "text": "Ultra-wideband (UWB), short-pulse (SP) radar is investigated theoretically and experimentally for the detection and identification of targets buried in and placed atop soil. The calculations are performed using a rigorous, three-dimensional (3-D) Method of Moments algorithm for perfectly conducting bodies of revolution. Particular targets investigated theoretically include anti-personnel mines, anti-tank mines, and a 55-gallon drum, for which we model the time-domain scattered fields and the buried-target late-time resonant frequencies. With regard to the latter, the computed resonant frequencies are utilized to assess the feasibility of resonance-based buried-target identification for this class of targets. The measurements are performed using a novel UWB, SP synthetic aperture radar (SAR) implemented on a mobile boom. Experimental and theoretical results are compared."}
{"_id": "6714e4673b514ad9698d4949db763c66ad681d3d", "title": "Negotiation as a Metaphor for Distributed Problem Solving", "text": "We describe the concept of distributed problem solving and define it as the cooperative solution of problems by a decentralized and loosely coupled collection of problem solvers. This approach to problem solving offers the promise of increased performance and provides a useful medium for exploring and developing new problem-solving techniques. We present a framework called the contract net that specifies communication and control in a distributed problem solver. Task distribution is viewed as an interactive process, a discussion carried on between a node with a task to be executed and a group of nodes that may be able to execute the task. We describe the kinds of information that must be passed between nodes during the discussion in order to obtain effective problem-solving behavior. This discussion is the origin of the negotiation metaphor: Task distribution is viewed as a form of contract negotiation. We emphasize that protocols for distributed problem solving should help determine the content of the information transmitted, rather than simply provide a means of sending bits from one node to another. The use of the contract net framework is demonstrated in the solution of a simulated problem in area surveillance, of the sort encountered in ship or air traffic control. We discuss the mode of operation of a distributed sensing system, a network of nodes extending throughout a relatively large geographic area, whose primary aim is the formation of a dynamic map of traffic in the area. From the results of this preliminary study we abstract features of the framework applicable to problem solving in general, examining in particular transfer of control. Comparisons with PLANNER, CONNIVER, HEARSAY-n, and PUP6 are used to demonstrate that negotiation--the two-way transfer of information--is a natural extension to the transfer of control mechanisms used in earlier problem-"}
{"_id": "491c6e5b40ecef186c232290f6ce5ced2fff1409", "title": "An application of analytic hierarchy process ( AHP ) to investment portfolio selection in the banking sector of the Nigerian capital market", "text": "The importance of investment to the individual, a nation and the world economy cannot be over emphasized. Investment involves the sacrifice of immediate consumption to achieve greater consumption in the future. The Nigerian banking sector has made tremendous success in the recent past. All the banks that approached the capital market through public offers and right issues to raise their capital base recorded huge success. Investors in bank stocks have enjoyed high returns on investment, despite the slow growth in the nation\u2019s economy. However, the recent financial crisis that started in America, which has caused economy meltdown in many nations of the world and sudden fall in share prices, has brought about a higher risk than envisaged on investors, particularly those investing in bank stocks. This study underlines the importance of different criteria, factors and alternatives that are essential to successful investment decisions by applying the Analytic Hierarchy Process (AHP) in the selection of investment in bank stocks in the circumstances of financial crisis. The study provides recommendation on strategic investment decision options."}
{"_id": "e918ab254c2cfd832dec13b5a2d98b7e0cc905d5", "title": "Suggestibility of the child witness: a historical review and synthesis.", "text": "The field of children's testimony is in turmoil, but a resolution to seemingly intractable debates now appears attainable. In this review, we place the current disagreement in historical context and describe psychological and legal views of child witnesses held by scholars since the turn of the 20th century. Although there has been consistent interest in children's suggestibility over the past century, the past 15 years have been the most active in terms of the number of published studies and novel theorizing about the causal mechanisms that underpin the observed findings. A synthesis of this research posits three \"families\" of factors--cognitive, social, and biological--that must be considered if one is to understand seemingly contradictory interpretations of the findings. We conclude that there are reliable age differences in suggestibility but that even very young children are capable of recalling much that is forensically relevant. Findings are discussed in terms of the role of expert witnesses."}
{"_id": "cb9155bf684f9146da4605f07fed9224fd8b146b", "title": "The Feeling of Success: Does Touch Sensing Help Predict Grasp Outcomes?", "text": "A successful grasp requires careful balancing of the contact forces. Deducing whether a particular grasp will be successful from indirect measurements, such as vision, is therefore quite challenging, and direct sensing of contacts through touch sensing provides an appealing avenue toward more successful and consistent robotic grasping. However, in order to fully evaluate the value of touch sensing for grasp outcome prediction, we must understand how touch sensing can influence outcome prediction accuracy when combined with other modalities. Doing so using conventional model-based techniques is exceptionally difficult. In this work, we investigate the question of whether touch sensing aids in predicting grasp outcomes within a multimodal sensing framework that combines vision and touch. To that end, we collected more than 9,000 grasping trials using a two-finger gripper equipped with GelSight high-resolution tactile sensors on each finger, and evaluated visuo-tactile deep neural network models to directly predict grasp outcomes from either modality individually, and from both modalities together. Our experimental results indicate that incorporating tactile readings substantially improve grasping performance."}
{"_id": "08567f9b834bdc05a0cf6164ec4a6ab9c985429e", "title": "Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations", "text": "In recent years, the interest in unsupervised learning of disentangled representations has significantly increased. The key assumption is that real-world data is generated by a few explanatory factors of variation and that these factors can be recovered by unsupervised learning algorithms. A large number of unsupervised learning approaches based on auto-encoding and quantitative evaluation metrics of disentanglement have been proposed; yet, the efficacy of the proposed approaches and utility of proposed notions of disentanglement has not been challenged in prior work. In this paper, we provide a sober look on recent progress in the field and challenge some common assumptions. We first theoretically show that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data. Then, we train more than 12 000 models covering the six most prominent methods, and evaluate them across six disentanglement metrics in a reproducible large-scale experimental study on seven different data sets. On the positive side, we observe that different methods successfully enforce properties \u201cencouraged\u201d by the corresponding losses. On the negative side, we observe that in our study (1) \u201cgood\u201d hyperparameters seemingly cannot be identified without access to ground-truth labels, (2) good hyperparameters neither transfer across data sets nor across disentanglement metrics, and (3) that increased disentanglement does not seem to lead to a decreased sample complexity of learning for downstream tasks. These results suggest that future work on disentanglement learning should be explicit about the role of inductive biases and (implicit) supervision, investigate concrete benefits of enforcing disentanglement of the learned representations, and consider a reproducible experimental setup covering several data sets."}
{"_id": "74a5e76a6e45dfecfb4f4542a6a8cb1c6be3c2cb", "title": "Contraction of the abdominal muscles associated with movement of the lower limb.", "text": "BACKGROUND AND PURPOSE\nActivity of the trunk muscles is essential for maintaining stability of the lumbar spine because of the unstable structure of that portion of the spine. A model involving evaluation of the response of the lumbar multifidus and abdominal muscles to leg movement was developed to evaluate this function.\n\n\nSUBJECTS\nTo examine this function in healthy persons, 9 male and 6 female subjects (mean age = 20.6 years, SD = 2.3) with no history of low back pain were studied.\n\n\nMETHODS\nFine-wire and surface electromyography electrodes were used to record the activity of selected trunk muscles and the prime movers for hip flexion, abduction, and extension during hip movements in each of those directions.\n\n\nRESULTS\nTrunk muscle activity occurring prior to activity of the prime mover of the limb was associated with hip movement in each direction. The transversus abdominis (TrA) muscle was invariably the first muscle that was active. Although reaction time for the TrA and oblique abdominal muscles was consistent across movement directions, reaction time for the rectus abdominis and multifidus muscles varied with the direction of limb movement.\n\n\nCONCLUSION AND DISCUSSION\nResults suggest that the central nervous system deals with stabilization of the spine by contraction of the abdominal and multifidus muscles in anticipation of reactive forces produced by limb movement. The TrA and oblique abdominal muscles appear to contribute to a function not related to the direction of these forces."}
{"_id": "4c34433231e209d0b444cd38dad788de2a973f45", "title": "Low-degree Graph Partitioning via Local Search with Applications to Constraint Satisfaction, Max Cut, and Coloring", "text": "We present practical algorithms for constructing partitions of graphs into a fixed number of vertex-disjoint subgraphs that satisfy particular degree constraints. We use this in particular to find k-cuts of graphs of maximum degree \u2206 that cut at least a k\u22121 k (1 + 1 2\u2206+k\u22121 ) fraction of the edges, improving previous bounds known. The partitions also apply to constraint networks, for which we give a tight analysis of natural local search heuristics for the maximum constraint satisfaction problem. These partitions also imply efficient approximations for several problems on weighted bounded-degree graphs. In particular, we improve the best performance ratio for the weighted independent set problem to 3 \u2206+2 , and obtain an efficient algorithm for coloring 3-colorable graphs with at most 3\u2206+2 4 colors. Communicated by M. F\u00fcrer: submitted February 1996; revised March 1997. Halld\u00f3rsson, Lau, Low-degree Graph Partitioning, JGAA, 1(3) 1\u201313 (1997) 2"}
{"_id": "b55aafb62fb0d49ad7e0ed7ab5b936f985e1ac58", "title": "New cardinality estimation algorithms for HyperLogLog sketches", "text": "This paper presents new methods to estimate the cardinalities of data sets recorded by HyperLogLog sketches. A theoretically motivated extension to the original estimator is presented that eliminates the bias for small and large cardinalities. Based on the maximum likelihood principle a second unbiased method is derived together with a robust and efficient numerical algorithm to calculate the estimate. The maximum likelihood approach can also be applied to more than a single HyperLogLog sketch. In particular, it is shown that it gives more precise cardinality estimates for union, intersection, or relative complements of two sets that are both represented by HyperLogLog sketches compared to the conventional technique using the inclusion-exclusion principle. All the new methods are demonstrated and verified by extensive simulations."}
{"_id": "977ad29fc117946aa37e4e9b69a7d6cc00cb7388", "title": "Analysis of YouTube\u2019s traffic adaptation to dynamic environments", "text": "The popular Internet service, YouTube, has adopted by default the HyperText Markup Language version 5 (HTML5). With this adoption, YouTube has moved to Dynamic Adaptive Streaming over HTTP (DASH) as Adaptive BitRate (ABR) video streaming technology. Furthermore, rate adaptation in DASH is solely receiver-driven. This issue motivates this work to make a deep analysis of YouTube\u2019s particular DASH implementation. Firstly, this article provides a state of the art about DASH and adaptive streaming technology, and also YouTube traffic characterization related work. Secondly, this paper describes a new methodology and test-bed for YouTube\u2019s DASH implementation traffic characterization and performance measurement. This methodology and test-bed do not make use of proxies and, moreover, they are able to cope with YouTube traffic redirections. Finally, a set of experimental results are provided, involving a dataset of 310 YouTube\u2019s videos. The depicted results show a YouTube\u2019s traffic pattern characterization and a discussion about allowed download bandwidth, YouTube\u2019s consumed bitrate and quality of the video. Moreover, the obtained results are cross-validated with the analysis of HTTP requests performed by YouTube\u2019s video player. The outcomes of this article are applicable in the field of Quality of Service (QoS) and Quality of Experience (QoE) management. This is valuable information for Internet Service Providers (ISPs), because QoS management based on assured download bandwidth can be used in order to provide a target end-user\u2019s QoE when YouTube service is being consumed."}
{"_id": "25a26b86f4a2ebca2b154effbaf894aef690c03c", "title": "Analyzing the Effectiveness and Applicability of Co-training", "text": "Recently there has been signi cant interest in supervised learning algorithms that combine labeled and unlabeled data for text learning tasks. The co-training setting [1] applies to datasets that have a natural separation of their features into two disjoint sets. We demonstrate that when learning from labeled and unlabeled data, algorithms explicitly leveraging a natural independent split of the features outperform algorithms that do not. When a natural split does not exist, co-training algorithms that manufacture a feature split may out-perform algorithms not using a split. These results help explain why co-training algorithms are both discriminative in nature and robust to the assumptions of their embedded classi ers."}
{"_id": "3499f071b44d06d576e78c938ca7fc41edff510c", "title": "Exact finite-state machine identification from scenarios and temporal properties", "text": "Finite-state models, such as finite-state machines (FSMs), aid software engineering in many ways. They are often used in formal verification and also can serve as visual software models. The latter application is associated with the problems of software synthesis and automatic derivation of software models from specification. Smaller synthesized models are more general and are easier to comprehend, yet the problem of minimum FSM identification has received little attention in previous research. This paper presents four exact methods to tackle the problem of minimum FSM identification from a set of test scenarios and a temporal specification represented in linear temporal logic. The methods are implemented as an open-source tool. Three of them are based on translations of the FSM identification problem to SAT or QSAT problem instances. Accounting for temporal properties is done via counterexample prohibition. Counterexamples are either obtained from previously identified FSMs, or based on bounded model checking. The fourth method uses backtracking. The proposed methods are evaluated on several case studies and on a larger number of randomly generated instances of increasing complexity. The results show that the Iterative SAT-based method is the leader among the proposed methods. The methods are also compared with existing inexact approaches, i.e., the ones which do not necessarily identify the minimum FSM, and these comparisons show encouraging results."}
{"_id": "79780f551413667a6a69dc130dcf843516cda6aa", "title": "Real-Time Hand Tracking under Occlusion from an Egocentric RGB-D Sensor", "text": "We present an approach for real-time, robust and accurate hand pose estimation from moving egocentric RGB-D cameras in cluttered real environments. Existing methods typically fail for hand-object interactions in cluttered scenes imaged from egocentric viewpoints\u2014common for virtual or augmented reality applications. Our approach uses two subsequently applied Convolutional Neural Networks (CNNs) to localize the hand and regress 3D joint locations. Hand localization is achieved by using a CNN to estimate the 2D position of the hand center in the input, even in the presence of clutter and occlusions. The localized hand position, together with the corresponding input depth value, is used to generate a normalized cropped image that is fed into a second CNN to regress relative 3D hand joint locations in real time. For added accuracy, robustness and temporal stability, we refine the pose estimates using a kinematic pose tracking energy. To train the CNNs, we introduce a new photorealistic dataset that uses a merged reality approach to capture and synthesize large amounts of annotated data of natural hand interaction in cluttered scenes. Through quantitative and qualitative evaluation, we show that our method is robust to self-occlusion and occlusions by objects, particularly in moving egocentric perspectives."}
{"_id": "3e7c554b435525b50b558a05486148376af04ccb", "title": "Enhanced belief propagation decoding of polar codes through concatenation", "text": "The bit-channels of finite-length polar codes are not fully polarized, and a proportion of such bit-channels are neither completely \u201cnoiseless\u201d nor completely \u201cnoisy\u201d. By using an outer low-density parity-check code for these intermediate channels, we show how the performance of belief propagation (BP) decoding of the overall concatenated polar code can be improved. A simple example reports an improvement in Eb /N0 of 0.3 dB with respect to the conventional BP decoder."}
{"_id": "78beead3a05f7e8f2dc812298f813c5bacdc3061", "title": "From \"Bonehead\" to \"cLoNehEAd\": Nicknames, Play and Identity on Internet Relay Chat", "text": ""}
{"_id": "2cb7835672029e97a20e006cd8cee918cb351d03", "title": "Same-day genomic and epigenomic diagnosis of brain tumors using real-time nanopore sequencing", "text": "Molecular classification of cancer has entered clinical routine to inform diagnosis, prognosis, and treatment decisions. At the same time, new tumor entities have been identified that cannot be defined histologically. For central nervous system tumors, the current World Health Organization classification explicitly demands molecular testing, e.g., for 1p/19q-codeletion or IDH mutations, to make an integrated histomolecular diagnosis. However, a plethora of sophisticated technologies is currently needed to assess different genomic and epigenomic alterations and turnaround times are in the range of weeks, which makes standardized and widespread implementation difficult and hinders timely decision making. Here, we explored the potential of a pocket-size nanopore sequencing device for multimodal and rapid molecular diagnostics of cancer. Low-pass whole genome sequencing was used to simultaneously generate copy number (CN) and methylation profiles from native tumor DNA in the same sequencing run. Single nucleotide variants in IDH1, IDH2, TP53, H3F3A, and the TERT promoter region were identified using deep amplicon sequencing. Nanopore sequencing yielded ~0.1X genome coverage within 6\u00a0h and resulting CN and epigenetic profiles correlated well with matched microarray data. Diagnostically relevant alterations, such as 1p/19q codeletion, and focal amplifications could be recapitulated. Using ad hoc random forests, we could perform supervised pan-cancer classification to distinguish gliomas, medulloblastomas, and brain metastases of different primary sites. Single nucleotide variants in IDH1, IDH2, and H3F3A were identified using deep amplicon sequencing within minutes of sequencing. Detection of TP53 and TERT promoter mutations shows that sequencing of entire genes and GC-rich regions is feasible. Nanopore sequencing allows same-day detection of structural variants, point mutations, and methylation profiling using a single device with negligible capital cost. It outperforms hybridization-based and current sequencing technologies with respect to time to diagnosis and required laboratory equipment and expertise, aiming to make precision medicine possible for every cancer patient, even in resource-restricted settings."}
{"_id": "8cb45a5a03d2e8c9cc56030a99b9938cb2981087", "title": "TUT: a statistical model for detecting trends, topics and user interests in social media", "text": "The rapid development of online social media sites is accompanied by the generation of tremendous web contents. Web users are shifting from data consumers to data producers. As a result, topic detection and tracking without taking users' interests into account is not enough. This paper presents a statistical model that can detect interpretable trends and topics from document streams, where each trend (short for trending story) corresponds to a series of continuing events or a storyline. A topic is represented by a cluster of words frequently co-occurred. A trend can contain multiple topics and a topic can be shared by different trends. In addition, by leveraging a Recurrent Chinese Restaurant Process (RCRP), the number of trends in our model can be determined automatically without human intervention, so that our model can better generalize to unseen data. Furthermore, our proposed model incorporates user interest to fully simulate the generation process of web contents, which offers the opportunity for personalized recommendation in online social media. Experiments on three different datasets indicated that our proposed model can capture meaningful topics and trends, monitor rise and fall of detected trends, outperform baseline approach in terms of perplexity on held-out dataset, and improve the result of user participation prediction by leveraging users' interests to different trends."}
{"_id": "232ce27451be9cc332969ce20659d28cfc0bbfec", "title": "Synchronous neural oscillations and cognitive processes", "text": "The central problem for cognitive neuroscience is to describe how cognitive processes arise from brain processes. This review summarizes the recent evidence that synchronous neural oscillations reveal much about the origin and nature of cognitive processes such as memory, attention and consciousness. Memory processes are most closely related to theta and gamma rhythms, whereas attention seems closely associated with alpha and gamma rhythms. Conscious awareness may arise from synchronous neural oscillations occurring globally throughout the brain rather than from the locally synchronous oscillations that occur when a sensory area encodes a stimulus. These associations between the dynamics of the brain and cognitive processes indicate progress towards a unified theory of brain and cognition."}
{"_id": "64f256dd8b83e18bd6c5692f22ac821b622d820a", "title": "Medicinal strategies in the treatment of obesity", "text": "When prevention fails, medicinal treatment of obesity may become a necessity. Any strategic medicinal development must recognize that obesity is a chronic, stigmatized and costly disease that is increasing in prevalence. Because obesity can rarely be cured, treatment strategies are effective only as long as they are used, and combined therapy may be more effective than monotherapy. For a drug to have significant impact on body weight it must ultimately reduce energy intake, increase energy expenditure, or both. Currently approved drugs for long-term treatment of obesity include sibutramine, which inhibits food intake, and orlistat, which blocks fat digestion."}
{"_id": "1d6889c44e11141cc82ef28bba1afe07f3c0a2b4", "title": "Identity Authentication and Capability Based Access Control (IACAC) for the Internet of Things", "text": "In the last few years the Internet of Things (IoT) has seen widespread application and can be found in each field. Authentication and access control are important and critical functionalities in the context of IoT to enable secure communication between devices. Mobility, dynamic network topology and weak physical security of low power devices in IoT networks are possible sources for security vulnerabilities. It is promising to make an authentication and access control attack resistant and lightweight in a resource constrained and distributed IoT environment. This paper presents the Identity Authentication and Capability based Access Control (IACAC) model with protocol evaluation and performance analysis. To protect IoT from man-in-the-middle, replay and denial of service (Dos) attacks, the concept of capability for access control is introduced. The novelty of this model is that, it presents an integrated approach of authentication and access control for IoT devices. The results of other related study have also been analyzed to validate and support our findings. Finally, the proposed protocol is evaluated by using security protocol verification tool and verification results shows that IACAC is secure against aforementioned attacks. This paper also discusses performance analysis of the protocol in terms of computational time compared to other Journal of Cyber Security and Mobility, Vol. 1, 309\u2013348. c \u00a9 2013 River Publishers. All rights reserved. 310 P.N. Mahalle et al. existing solutions. Furthermore, this paper addresses challenges in IoT and security attacks are modelled with the use cases to give an actual view of IoT networks."}
{"_id": "bf9fd38bdf1d19834f13b675a9052feee4abeeb0", "title": "A magnetostatic 2-axis MEMS scanner with I-section rib-reinforcement and slender permanent magnet patterns", "text": "This study demonstrates the 2-axis epitaxial silicon scanner driven by the coil-less magnetostatic force using electroplated permanent magnet (CoNiMnP) film. The present approach has four merits: (1) the process employs the cheap silicon wafer with epitaxial layer; and the electrochemical etching stop technique is used to precisely control the thickness of scanner; (2) the I-section rib-reinforced structure is implemented to provide high stiffness of the mirror plate; (3) the magnetostatic driving force on scanner is increased by electroplated permanent magnet film with slender patterns; (4) the size of packaged scanner is reduced since the assembled permanent magnets are not required."}
{"_id": "fc10cf04b774b7e1d4e6d6c6c666ae04a07329d9", "title": "Improvements in children's speech recognition performance", "text": "There are several reasons why conventional speech recognition systems modeled on adult data fail to perform satisfactorily on children's speech input. For instance, children's vocal characteristics differ signi cantly from those of adults. In addition, their choices of vocabulary and sentence construction modalities usually do not conform to adult patterns. We describe comparative studies demonstrating the performance gain realized by adopting to children's acoustic and language model data to construct a children's speech recognition system."}
{"_id": "16207729492b760cd48bb4b9dd619a433b2e3d25", "title": "Social Media and Firm Equity Value", "text": "Companies have increasingly advocated social media technologies and platforms to improve and ultimately transform business performance. This study examines whether social media has a predictive relationship with firm equity value, which metric of social media has the strongest relationship, and the dynamics of the relationship. The results derived from the vector autoregressive models suggest that social media-based metrics (blogs and reviews) are significant leading indicators of firm stock market performance. We also find a faster predictive value of social media, i.e., a shorter \u201cwear-in\u201d effect compared with conventional online behavior metrics, such as web traffic and search. Interestingly, conventional digital user metrics (Google search and web traffic), which have been widely adopted to measure online consumer behavior, are found to have significant yet substantially weaker predictive relationships with firm equity value than social media. The results provide new insights for social media managers and firm equity valuations."}
{"_id": "a7d47e32ee59f2f104ff6120fa82bae912a87c81", "title": "Byte Rotation Encryption Algorithm through parallel processing and multi-core utilization", "text": "Securing digital data has become tedious task as the technology is increasing. Existing encryption algorithms such as AES,DES and Blowfish ensure information security but consume lot of time as the security level increases. In this paper, Byte Rotation Encryption Algorithm (BREA) has implemented using parallel processing and multi-core utilization. BREA divides data into fixed size blocks. Each block is processed parallely using random number key. So the set of blocks are executed in parallel by utilizing all the available CPU cores. Finally, from the experimental analysis, it is observed that the proposed BREA algorithm reduces execution time when the number of cores has increased."}
{"_id": "310b72fbc3d384ca88ca994b33476b8a2be2e27f", "title": "Sentiment Analyzer: Extracting Sentiments about a Given Topic using Natural Language Processing Techniques", "text": "We present Sentiment Analyzer (SA) that extracts sentiment (or opinion) about a subject from online text documents. Instead of classifying the sentiment of an entire document about a subject, SA detects all references to the given subject, and determines sentiment in each of the references using natural language processing (NLP) techniques. Our sentiment analysis consists of 1) a topic specific feature term extraction, 2) sentiment extraction, and 3) (subject, sentiment) association by relationship analysis. SA utilizes two linguistic resources for the analysis: the sentiment lexicon and the sentiment pattern database. The performance of the algorithms was verified on online product review articles (\u201cdigital camera\u201d and \u201cmusic\u201d reviews), and more general documents including general webpages and news articles."}
{"_id": "59d9160780bf3eac8c621983a36ff332a3497219", "title": "Recognizing Contextual Polarity: An Exploration of Features for Phrase-Level Sentiment Analysis", "text": "Many approaches to automatic sentiment analysis begin with a large lexicon of words marked with their prior polarity (also called semantic orientation). However, the contextual polarity of the phrase in which a particular instance of a word appears may be quite different from the word's prior polarity. Positive words are used in phrases expressing negative sentiments, or vice versa. Also, quite often words that are positive or negative out of context are neutral in context, meaning they are not even being used to express a sentiment. The goal of this work is to automatically distinguish between prior and contextual polarity, with a focus on understanding which features are important for this task. Because an important aspect of the problem is identifying when polar terms are being used in neutral contexts, features for distinguishing between neutral and polar instances are evaluated, as well as features for distinguishing between positive and negative contextual polarity. The evaluation includes assessing the performance of features across multiple machine learning algorithms. For all learning algorithms except one, the combination of all features together gives the best performance. Another facet of the evaluation considers how the presence of neutral instances affects the performance of features for distinguishing between positive and negative polarity. These experiments show that the presence of neutral instances greatly degrades the performance of these features, and that perhaps the best way to improve performance across all polarity classes is to improve the system's ability to identify when an instance is neutral."}
{"_id": "7c89cbf5d860819c9b5e5217d079dc8aafcba336", "title": "Recognizing subjectivity: a case study in manual tagging", "text": "In this paper, we describe a case study of a sentence-level categorization in which tagging instructions are developed and used by four judges to classify clauses from the Wall Street Journal as either subjective or objective. Agreement among the four judges is analyzed, and, based on that analysis, each clause is given a nal classiication. To provide empirical support for the classiications, correlations are assessed in the data between the subjective category and a basic semantic class posited by Quirk et al. (1985)."}
{"_id": "9141d85998eadb1bca5cca027ae07670cfafb015", "title": "Determining the Sentiment of Opinions", "text": "Identifying sentiments (the affective parts of opinions) is a challenging problem. We present a system that, given a topic, automatically finds the people who hold opinions about that topic and the sentiment of each opinion. The system contains a module for determining word sentiment and another for combining sentiments within a sentence. We experiment with various models of classifying and combining sentiment at word and sentence levels, with promising results."}
{"_id": "c2ac213982e189e4ad4c7f60608914a489ec9051", "title": "The Penn Arabic Treebank : Building a Large-Scale Annotated Arabic Corpus", "text": "From our three year experience of developing a large-scale corpus of annotated Arabic text, our paper will address the following: (a) review pertinent Arabic language issues as they relate to methodology choices, (b) explain our choice to use the Penn English Treebank style of guidelines, (requiring the Arabic-speaking annotators to deal with a new grammatical system) rather than doing the annotation in a more traditional Arabic grammar style (requiring NLP researchers to deal with a new system); (c) show several ways in which human annotation is important and automatic analysis difficult, including the handling of orthographic ambiguity by both the morphological analyzer and human annotators; (d) give an illustrative example of the Arabic Treebank methodology, focusing on a particular construction in both morphological analysis and tagging and syntactic analysis and following it in detail through the entire annotation process, and finally, (e) conclude with what has been achieved so far and what remains to be done."}
{"_id": "f3221ec7074722e125fe9f0f12b5fee22a431254", "title": "Algorithms for continuous queries: A geometric approach", "text": "Algorithms for Continuous Queries: A Geometric"}
{"_id": "52152dac5c7320a4818b48140bfcd396e4e965b7", "title": "Mapping the U.S. Political Blogosphere: Are Conservative Bloggers More Prominent?", "text": "Weblogs are now a key part of online culture, and social scientists are interested in characterising the networks formed by bloggers and measuring their extent and impact in areas such as politics. However, researchers wishing to conduct quantitative social science analysis of the blogging phenomenon are faced with the challenge of using new methods of data collection and analysis largely derived from fields outside of the social sciences, such as the information sciences. This paper presents an overview of one new approach for collecting and analysing weblog data, and illustrates this approach in the context of a preliminary quantitative analysis of online networks formed by a sample of North-American \u201cA-list\u201d political bloggers. There are two aims to this paper. First is to assess (using different data and methods) the conclusion of Adamic and Glance (2005) that there are significant differences in the behaviour of liberal and conservative bloggers, with the latter forming more dense patterns of linkages. We find broad support for this conclusion, and empirically assess the implications of differences in conservative/liberal linking behaviour for the online visibility of different political messages or ideologies. The second aim is to highlight the role of web mining and data visualisation in the analysis of weblogs, and the opportunities and challenges inherent in this new field of research."}
{"_id": "743c18a2c7abca292b3e6060ecd4e4464cf66bcc", "title": "Semantic Facial Expression Editing using Autoencoded Flow", "text": "High-level manipulation of facial expressions in images \u2014 such as changing a smile to a neutral expression \u2014 is challenging because facial expression changes are highly non-linear, and vary depending on the appearance of the face. We present a fully automatic approach to editing faces that combines the advantages of flow-based face manipulation with the more recent generative capabilities of Variational Autoencoders (VAEs). During training, our model learns to encode the flow from one expression to another over a low-dimensional latent space. At test time, expression editing can be done simply using latent vector arithmetic. We evaluate our methods on two applications: 1) single-image facial expression editing, and 2) facial expression interpolation between two images. We demonstrate that our method generates images of higher perceptual quality than previous VAE and flow-based methods."}
{"_id": "b3c346b3da022238301c1c95e17823b7d7ab2f49", "title": "Reliability of measurements of muscle tone and muscle power in stroke patients.", "text": "OBJECTIVES\nto establish the reliability of the modified Ashworth scale for measuring muscle tone in a range of muscle groups (elbow, wrist, knee and ankle; flexors and extensors) and of the Medical Research Council scale for measuring muscle power in the same muscle groups and their direct antagonists.\n\n\nDESIGN\na cross-sectional study involving repeated measures by two raters. We estimated reliability using the kappa statistic with quadratic weights (Kw).\n\n\nSETTING\nan acute stroke ward, a stroke rehabilitation unit and a continuing care facility.\n\n\nSUBJECTS\npeople admitted to hospital with an acute stroke-35 patients, median age 73 (interquartile range 65-80), 20 men and 15 women.\n\n\nRESULTS\ninter- and intra-rater agreement for the measurement of power was good to very good for all tested muscle groups (Kw = 0.84-0.96, Kw = 0.70-0.96). Inter- and intra-rater agreement for the measurement of tone in the elbow, wrist and knee flexors was good to very good (Kw = 0.73-0.96, Kw = 0.77-0.94). Inter- and intra-rater agreement for the measurement of tone in the ankle plantarflexors was moderate to good (Kw = 0.45-0.51, Kw = 0.59-0.64).\n\n\nCONCLUSIONS\nthe Medical Research Council scale was reliable in the tested muscle groups. The modified Ashworth scale demonstrated reliability in all tested muscle groups except the ankle plantarflexors. If reliable measurement of tone at the ankle is required for a specific purpose (e.g. to measure the effect of therapeutic intervention), further work will be necessary."}
{"_id": "8925a583cad259d60f63f236302fbc5ac8277e14", "title": "Microservice Ambients: An Architectural Meta-Modelling Approach for Microservice Granularity", "text": "Isolating fine-grained business functionalities byboundaries into entities called microservices is a core activityunderlying microservitization. We define microservitization asthe paradigm shift towards microservices. Determining theoptimal microservice boundaries (i.e. microservice granularity) is among the key microservitization design decisions thatinfluence the Quality of Service (QoS) of the microservice applicationat runtime. In this paper, we provide an architecturecentricapproach to model this decision problem. We build onambients \u2014 a modelling approach that can explicitly capturefunctional boundaries and their adaptation. We extend the aspect-oriented architectural meta-modelling approach of ambients\u2014AMBIENT-PRISMA \u2014 with microservice ambients. A microservice ambient is a modelling concept that treatsmicroservice boundaries as an adaptable first-class entity. Weuse a hypothetical online movie subscription-based systemto capture a microservitization scenario using our aspectorientedmodelling approach. The results show the ability ofmicroservice ambients to express the functional boundary of amicroservice, the concerns of each boundary, the relationshipsacross boundaries and the adaptations of these boundaries. Additionally, we evaluate the expressiveness and effectivenessof microservice ambients using criteria from ArchitectureDescription Language (ADL) classification frameworkssince microservice ambients essentially support architecturedescription for microservices. The evaluation focuses on thefundamental modelling constructs of microservice ambientsand how they support microservitization properties such asutility-driven design, tool heterogeneity and decentralised governance. The evaluation highlights how microservice ambientssupport analysis, evolution and mobility/location awarenesswhich are significant to quality-driven microservice granularityadaptation. The evaluation is general and irrespective of theparticular application domain and the business competenciesin that domain."}
{"_id": "b3200539538eca54a85223bf0ec4f3ed132d0493", "title": "Action Anticipation with RBF Kernelized Feature Mapping RNN", "text": "We introduce a novel Recurrent Neural Network-based algorithm for future video feature generation and action anticipation called feature mapping RNN . Our novel RNN architecture builds upon three effective principles of machine learning, namely parameter sharing, Radial Basis Function kernels and adversarial training. Using only some of the earliest frames of a video, the feature mapping RNN is able to generate future features with a fraction of the parameters needed in traditional RNN. By feeding these future features into a simple multilayer perceptron facilitated with an RBF kernel layer, we are able to accurately predict the action in the video. In our experiments, we obtain 18% improvement on JHMDB-21 dataset, 6% on UCF101-24 and 13% improvement on UT-Interaction datasets over prior stateof-the-art for action anticipation."}
{"_id": "b0e92c3a7506a526a5130872f4dfc1a5d59c2ecd", "title": "Chip Level Techniques for EMI Reduction in LCD Panels", "text": "This paper presents chip level techniques to improve electro-magnetic interference (EMI) characteristics of LCD-TV panels. A timing controller (TCON) uses over-driving algorithms to improve the response time of liquid crystals (LC). Since this algorithm needs previous frame data, external memory such as double data rate synchronous DRAM (DDR SDRAM) is widely used as a frame buffer. A TTL interface between the TCON and memory is used for read and write operations, generating EMI noise. For reduction of this EMI, three methods are described. The first approach is to reduce the driving current of data I/O buffers. The second is to adopt spread spectrum clock generation (SSCG), and the third is to apply a proposed algorithm which minimizes data transitions. EMI measurement of a 32\" LCD-TV panel shows that these approaches are very effective for reduction of EMI, achieving 20dB reduction at 175MHz."}
{"_id": "16c8c2ead791636acdad5ca72c1938dd66fba6c5", "title": "A Two-Stage Iterative Decoding of LDPC Codes for Lowering Error Floors", "text": "In iterative decoding of LDPC codes, trapping sets often lead to high error floors. In this work, we propose a two-stage iterative decoding to break trapping sets. Simulation results show that the error floor performance can be significantly improved with this decoding scheme."}
{"_id": "5b5909cd3757e2b9ab6f8784f4c7d936865076ea", "title": "Strategic Directions in Artificial Intelligence", "text": "\u2014constructing intelligent machines, whether or not these operate in the same way as people do; \u2014formalizing knowledge and mechanizing reasoning, both commonsense and refined expertise, in all areas of human endeavor; \u2014using computational models to understand the psychology and behavior of people, animals, and artificial agents; and \u2014making working with computers as easy and as helpful as working with skilled, cooperative, and possibly expert people."}
{"_id": "e33a3487f9b656631159186db4b2aebaed230b36", "title": "The digital platform: a research agenda", "text": "As digital platforms are transforming almost every industry today, they are slowly finding their way into the mainstream information systems (IS) literature. Digital platforms are a challenging research object because of their distributed nature and intertwinement with institutions, markets and technologies. New research challenges arise as a result of the exponentially growing scale of platform innovation, the increasing complexity of platform architectures, and the spread of digital platforms to many different industries. This paper develops a research agenda for digital platforms research in IS. We recommend researchers seek to (1) advance conceptual clarity by providing clear definitions that specify the unit of analysis, degree of digitality and the sociotechnical nature of digital platforms; (2) define the proper scoping of digital platform concepts by studying platforms on different architectural levels and in different industry settings; and (3) advance methodological rigour by employing embedded case studies, longitudinal studies, design research, data-driven modelling and visualization techniques. Considering current developments in the business domain, we suggest six questions for further research: (1) Are platforms here to stay?; (2) How should platforms be designed?; (3) How do digital platforms transform industries?; (4) How can data-driven approaches inform digital platforms research?; (5) How should researchers develop theory for digital platforms?; and (6) How do digital platforms affect everyday life?"}
{"_id": "3efcb76ba7a3709d63f9f04d62e54ee2ef1d9603", "title": "Theory and Application of Magnetic Flux Leakage Pipeline Detection", "text": "Magnetic flux leakage (MFL) detection is one of the most popular methods of pipeline inspection. It is a nondestructive testing technique which uses magnetic sensitive sensors to detect the magnetic leakage field of defects on both the internal and external surfaces of pipelines. This paper introduces the main principles, measurement and processing of MFL data. As the key point of a quantitative analysis of MFL detection, the identification of the leakage magnetic signal is also discussed. In addition, the advantages and disadvantages of different identification methods are analyzed. Then the paper briefly introduces the expert systems used. At the end of this paper, future developments in pipeline MFL detection are predicted."}
{"_id": "1be8cab8701586e751d6ed6d186ca0b6f58a54e7", "title": "Automatic detection of incomplete requirements via symbolic analysis", "text": "The usefulness of a system specification depends in part on the completeness of the requirements. However, enumerating all necessary requirements is difficult, especially when requirements interact with an unpredictable environment. A specification built with an idealized environmental view is incomplete if it does not include requirements to handle non-idealized behavior. Often incomplete requirements are not detected until implementation, testing, or worse, after deployment. Even when performed during requirements analysis, detecting incomplete requirements is typically an error prone, tedious, and manual task. This paper introduces Ares, a design-time approach for detecting incomplete requirements decomposition using symbolic analysis of hierarchical requirements models. We illustrate our approach by applying Ares to a requirements model of an industry-based automotive adaptive cruise control system. Ares is able to automatically detect specific instances of incomplete requirements decompositions at design-time, many of which are subtle and would be difficult to detect, either manually or with testing."}
{"_id": "d1feb47dd448594d30517add168b7365b45688c1", "title": "Internet-based peer support for parents: a systematic integrative review.", "text": "OBJECTIVES\nThe Internet and social media provide various possibilities for online peer support. The aim of this review was to explore Internet-based peer-support interventions and their outcomes for parents.\n\n\nDESIGN\nA systematic integrative review.\n\n\nDATA SOURCES\nThe systematic search was carried out in March 2014 in PubMed, Cinahl, PsycINFO and Cochrane databases.\n\n\nREVIEW METHODS\nTwo reviewers independently screened the titles (n=1793), abstracts and full texts to decide which articles should be chosen. The inclusion criteria were: (1) an Internet-based community as an intervention, or at least as a component of an intervention; (2) the participants in the Internet-based community had to be mothers and/or fathers or pregnant women; (3) the parents had to interact and communicate with each other through the Internet-based community. The data was analysed using content analysis. When analysing peer-support interventions only interventions developed by researchers were included and when analysing the outcomes for the parents, studies that focused on mothers, fathers or both parents were separated.\n\n\nRESULTS\nIn total, 38 publications met the inclusion criteria. Most of the studies focused on Internet-based peer support between mothers (n=16) or both parents (n=15) and seven focused on fathers. In 16 studies, the Internet-based interventions had been developed by researchers and 22 studies used already existing Internet peer-support groups, in which any person using the Internet could participate. For mothers, Internet-based peer support provided emotional support, information and membership in a social community. For fathers, it provided support for the transition to fatherhood, information and humorous communication. Mothers were more active users of Internet-based peer-support groups than fathers. In general, parents were satisfied with Internet-based peer support. The evidence of the effectiveness of Internet-based peer support was inconclusive but no harmful effects were reported in these reviewed studies.\n\n\nCONCLUSIONS\nInternet-based peer support provided informational support for parents and was accessible despite geographical distance or time constraints. Internet-based peer support is a unique form of parental support, not replacing but supplementing support offered by professionals. Experimental studies in this area are needed."}
{"_id": "9fcb152b8cc6e0c6d1c80d4a57a287a2515ee89c", "title": "A Rule-Based Framework of Metadata Extraction from Scientific Papers", "text": "Most scientific documents on the web are unstructured or semi-structured, and the automatic document metadata extraction process becomes an important task. This paper describes a framework for automatic metadata extraction from scientific papers. Based on a spatial and visual knowledge principle, our system can extract title, authors and abstract from scientific papers. We utilize format information such as font size and position to guide the metadata extraction process. The experiment results show that our system achieves a high accuracy in header metadata extraction which can effectively assist the automatic index creation for digital libraries."}
{"_id": "1ea34fdde4d6818a5e6b6062b763d48091bb7400", "title": "Battery-Draining-Denial-of-Service Attack on Bluetooth Devices", "text": "We extend the Xen virtual machine monitor with the ability to host a hundred of virtual machines on a single physical node. Similarly to a demand paging of virtual memory, we page out idle virtual machines making them available on demand. Paging is transparent. An idle virtual machine remains fully operational. It is able to respond to external events with a delay comparable to a delay under a medium load. To achieve desired degree of consolidation, we identify and leave in memory only a minimal working set of pages required to maintain the illusion of running VM and respond to external events. To keep the number of active pages small without harming performance dramatically, we build a correspondence between every event and its working set. Reducing a working set further, we implement a copy-on-write page sharing across virtual machines running on the same host. To decrease resources occupied by a virtual machine\u2019s file system, we implement a copy-on-write storage and golden imaging."}
{"_id": "c812374d55b1deb54582cb3656bb0265522e7ec6", "title": "Real-time motion planning methods for autonomous on-road driving: State-of-the-art and future research directions", "text": "Currently autonomous or self-driving vehicles are at the heart of academia and industry research becauseof itsmulti-faceted advantages that includes improved safety, reduced congestion, lower emissions and greater mobility. Software is the key driving factor underpinning autonomy within which planning algorithms that are responsible for mission-critical decision making hold a significant position. While transporting passengers or goods from a givenorigin to agivendestination,motionplanningmethods incorporate searching for apath to follow, avoiding obstacles and generating the best trajectory that ensures safety, comfort and efficiency. A range of different planning approaches have beenproposed in the literature. Thepurpose of this paper is to reviewexisting approaches and then compare and contrast different methods employed for the motion planning of autonomous on-road driving that consists of (1)findingapath, (2) searching for the safestmanoeuvre and (3)determining themost feasible trajectory. Methods developed by researchers in each of these three levels exhibit varying levels of complexity and performance accuracy. This paper presents a critical evaluation of each of these methods, in terms of their advantages/disadvantages, inherent limitations, feasibility, optimality, handling of obstacles and testing operational environments. Based on a critical review of existingmethods, research challenges to address current limitations are identified and future research directions are suggested so as to enhance the performanceofplanningalgorithmsat all three levels. Somepromising areasof future focushave been identified as the use of vehicular communications (V2V and V2I) and the incorporation of transport engineering aspects in order to improve the look-ahead horizon of current sensing technologies that are essential for planning with the aim of reducing the total cost of driverless vehicles. This critical reviewon planning techniques presented in this paper, along with theassociateddiscussions on their constraints and limitations, seek to assist researchers in accelerating development in the emerging field of autonomous vehicle research. Crown Copyright 2015 Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/)."}
{"_id": "cc5a1cf7ad9d644f21a5df799ffbcb8d1e24abe1", "title": "MonoPerfCap : Human Performance Capture from Monocular Video WEIPENGXU , AVISHEKCHATTERJEE , MICHAELZOLLH\u00d6FER , HELGERHODIN , DUSHYANTMEHTA", "text": "We present the first marker-less approach for temporally coherent 3D performance capture of a human with general clothing from monocular video. Our approach reconstructs articulated human skeleton motion as well as medium-scale non-rigid surface deformations in general scenes. Human performance capture is a challenging problem due to the large range of articulation, potentially fast motion, and considerable non-rigid deformations, even from multi-view data. Reconstruction from monocular video alone is drastically more challenging, since strong occlusions and the inherent depth ambiguity lead to a highly ill-posed reconstruction problem. We tackle these challenges by a novel approach that employs sparse 2D and 3D human pose detections from a convolutional neural network using a batch-based pose estimation strategy. Joint recovery of per-batch motion allows to resolve the ambiguities of the monocular reconstruction problem based on a low dimensional trajectory subspace. In addition, we propose refinement of the surface geometry based on fully automatically extracted silhouettes to enable medium-scale non-rigid alignment. We demonstrate state-of-the-art performance capture results that enable exciting applications such as video editing and free viewpoint video, previously infeasible from monocular video. Our qualitative and quantitative evaluation demonstrates that our approach significantly outperforms previous monocular methods in terms of accuracy, robustness and scene complexity that can be handled."}
{"_id": "2243ca5af76cccee0347230abc05b029ddf0e16d", "title": "A linguistic theory of translation : an essay in applied linguistics", "text": "It's coming again, the new collection that this site has. To complete your curiosity, we offer the favorite linguistic theory of translation an essay in applied linguistics book as the choice today. This is a book that will show you even new to old thing. Forget it; it will be right for you. Well, when you are really dying of linguistic theory of translation an essay in applied linguistics, just pick it. You know, this book is always making the fans to be dizzy if not to find."}
{"_id": "174f55ffd478659bb3eea7b9d5c0c64e1a6c2907", "title": "A theory of formal synthesis via inductive learning", "text": "Formal synthesis is the process of generating a program satisfying a high-level formal specification. In recent times, effective formal synthesis methods have been proposed based on the use of inductive learning. We refer to this class of methods that learn programs from examples as formal inductive synthesis. In this paper, we present a theoretical framework for formal inductive synthesis. We discuss how formal inductive synthesis differs from traditional machine learning. We then describe oracle-guided inductive synthesis (OGIS), a framework that captures a family of synthesizers that operate by iteratively querying an oracle. An instance of OGIS that has had much practical impact is counterexample-guided inductive synthesis (CEGIS). We present a theoretical characterization of CEGIS for learning any program that computes a recursive language. In particular, we analyze the relative power of CEGIS variants where the types of counterexamples generated by the oracle varies. We also consider the impact of bounded versus unbounded memory available to the learning algorithm. In the special case where the universe of candidate programs is finite, we relate the speed of convergence to the notion of teaching dimension studied in machine learning theory. Altogether, the results of the paper take a first step towards a theoretical foundation for the emerging field of formal inductive synthesis."}
{"_id": "2e006307ea0fbc35ee318810bfa40dc8c055dea2", "title": "NAVSTAR: Global positioning system—Ten years later", "text": "With the award of one of the first multiyear procurements to be authorized by the Congress, the Department of Defense celebrates the tenth anniversary of a major Joint-Service / Agency program that promises to revolutionize the art and science of navigation. This contract, totaling 1.17 billion dollars, will result in delivery by the end of this decade of 28 satellites that are to become the heart of the NAVSTAR Global Positioning System (GPS). This paper traces the history of this program from its inception, identifying and discussing the major technologies that made it all possible. In so doing, the concept of operations of the entire system, including the satellites, their associated ground-based control network, and the family of user equipment being developed and tested to meet the broad scope of Department of Defence positioning and navigation requirements is introduced."}
{"_id": "53e082483bf1daa3dfe617be83eb3266425eede0", "title": "Bayesian Grasp Planning", "text": "We present a Bayesian framework for grasp planning that takes into account uncertainty in object shape or pose, as well as robot motion error. When trying to grasp objects based on noisy sensor data, a common problem is errors in perception, which cause the geometry or pose of the object to be uncertain. For each hypothesis about the geometry or pose of the object to be grasped, different sets of grasps can be planned. Of the resulting grasps, some are likely to work only if particular hypotheses are true, but some may work on most or even all hypotheses. Likewise, some grasps are easily broken by small errors in robot motion, but other grasps are robust to such errors. Our probabilistic framework takes into account all of these factors while trying to estimate the overall probability of success of each grasp, allowing us to select grasps that are robust to incorrect object recognition as well as motion error due to imperfect robot calibration. We demonstrate our framework while using the PR2 robot to grasp common household objects."}
{"_id": "9b7741bafecf80bf4de8aaae0d5260f4a6706082", "title": "Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization.", "text": "An iterative algorithm, based on recent work in compressive sensing, is developed for volume image reconstruction from a circular cone-beam scan. The algorithm minimizes the total variation (TV) of the image subject to the constraint that the estimated projection data is within a specified tolerance of the available data and that the values of the volume image are non-negative. The constraints are enforced by the use of projection onto convex sets (POCS) and the TV objective is minimized by steepest descent with an adaptive step-size. The algorithm is referred to as adaptive-steepest-descent-POCS (ASD-POCS). It appears to be robust against cone-beam artifacts, and may be particularly useful when the angular range is limited or when the angular sampling rate is low. The ASD-POCS algorithm is tested with the Defrise disk and jaw computerized phantoms. Some comparisons are performed with the POCS and expectation-maximization (EM) algorithms. Although the algorithm is presented in the context of circular cone-beam image reconstruction, it can also be applied to scanning geometries involving other x-ray source trajectories."}
{"_id": "155ed7834a8a44a195b80719985a8b4ca11e6fdc", "title": "Iterative Adaptive Approaches to MIMO Radar Imaging", "text": "Multiple-input multiple-output (MIMO) radar can achieve superior performance through waveform diversity over conventional phased-array radar systems. When a MIMO radar transmits orthogonal waveforms, the reflected signals from scatterers are linearly independent of each other. Therefore, adaptive receive filters, such as Capon and amplitude and phase estimation (APES) filters, can be directly employed in MIMO radar applications. High levels of noise and strong clutter, however, significantly worsen detection performance of the data-dependent beamformers due to a shortage of snapshots. The iterative adaptive approach (IAA), a nonparametric and user parameter-free weighted least-squares algorithm, was recently shown to offer improved resolution and interference rejection performance in several passive and active sensing applications. In this paper, we show how IAA can be extended to MIMO radar imaging, in both the negligible and nonnegligible intrapulse Doppler cases, and we also establish some theoretical convergence properties of IAA. In addition, we propose a regularized IAA algorithm, referred to as IAA-R, which can perform better than IAA by accounting for unrepresented additive noise terms in the signal model. Numerical examples are presented to demonstrate the superior performance of MIMO radar over single-input multiple-output (SIMO) radar, and further highlight the improved performance achieved with the proposed IAA-R method for target imaging."}
{"_id": "fbe9e8467fcddb402d6cce9f515a74e3c5b8342a", "title": "You Cannot Sense My PINs: A Side-Channel Attack Deterrent Solution Based on Haptic Feedback on Touch-Enabled Devices", "text": "In this paper, we introduce a novel and secure solution to mitigate side-channel attacks to capture the PINs like touchID and other credentials of touch-enabled devices. Our approach can protect haptic feedback enabled devices from potential direct observation techniques such as cameras and motion sense techniques including such as accelerometers in smart-watch. Both attacks use the concept of shoulder surfing in social engineering and were published recently (CCS'14 and CCS'15). Hand-held devices universally employ small vibration motors as an inexpensive way to provide haptic feedback. The strength of the haptic feedback depends on the brand and the device manufacturer. They are usually strong enough to produce sliding movement and make audible noises if the device is resting on the top of a desk when the vibration motor turns. However, when the device is held in the hand the vibration can only be sensed by the holder; it is usually impossible or uncertain for an observer to know when the vibration motor turns. Our proposed solution uses the haptic feedback to inform the internal state of the keypad to the user and takes advantage of the fact that the effect of haptic feedback can be easily cloaked in such a way that direct observation techniques and indirect sensing techniques will fail. We develop an application on Android cell phones to demonstrate it and invite users to test the code. Moreover, we use real smart-watch to sense the vibration of Android cell phones. Our experimental results show that our approach can mitigate the probability of sensing a 4-digit or 6-digit PINs using smart-watch to below practical value. Our approach also can mitigate the probability of recognizing a 4-digit or 6-digit PINs using a camera within 1 meter to below practical value because the user does not need to move his or her hand during the internal states to input different PINs."}
{"_id": "0cfe588996f1bc319f87c6f75160d1cf1542d9a9", "title": "ON THE EMERGENCE OF GR 4 AMMAR FROM THE LEXICON", "text": ""}
{"_id": "162cd1259ad106990c6bfd36db19751f940274d3", "title": "Rapid word learning under uncertainty via cross-situational statistics.", "text": "There are an infinite number of possible word-to-word pairings in naturalistic learning environments. Previous proposals to solve this mapping problem have focused on linguistic, social, representational, and attentional constraints at a single moment. This article discusses a cross-situational learning strategy based on computing distributional statistics across words, across referents, and, most important, across the co-occurrences of words and referents at multiple moments. We briefly exposed adults to a set of trials that each contained multiple spoken words and multiple pictures of individual objects; no information about word-picture correspondences was given within a trial. Nonetheless, over trials, subjects learned the word-picture mappings through cross-trial statistical relations. Different learning conditions varied the degree of within-trial reference uncertainty, the number of trials, and the length of trials. Overall, the remarkable performance of learners in various learning conditions suggests that they calculate cross-trial statistics with sufficient fidelity and by doing so rapidly learn word-referent pairs even in highly ambiguous learning contexts."}
{"_id": "53dd71dc5598d41c06d3eef1315e098dc4cbca28", "title": "Word Segmentation : The Role of Distributional Cues", "text": "One of the infant\u2019s first tasks in language acquisition is to discover the words embedded in a mostly continuous speech stream. This learning problem might be solved by using distributional cues to word boundaries\u2014for example, by computing the transitional probabilities between sounds in the language input and using the relative strengths of these probabilities to hypothesize word boundaries. The learner might be further aided by language-specific prosodic cues correlated with word boundaries. As a first step in testing these hypotheses, we briefly exposed adults to an artificial language in which the only cues available for word segmentation were the transitional probabilities between syllables. Subjects were able to learn the words of this language. Furthermore, the addition of certain prosodic cues served to enhance performance. These results suggest that distributional cues may play an important role in the initial word segmentation of language learners. q 1996 Academic Press, Inc."}
{"_id": "7085126d3d21b559e38231f3fa283aae0ca50cd8", "title": "Statistical learning by 8-month-old infants.", "text": "Learners rely on a combination of experience-independent and experience-dependent mechanisms to extract information from the environment. Language acquisition involves both types of mechanisms, but most theorists emphasize the relative importance of experience-independent mechanisms. The present study shows that a fundamental task of language acquisition, segmentation of words from fluent speech, can be accomplished by 8-month-old infants based solely on the statistical relationships between neighboring speech sounds. Moreover, this word segmentation was based on statistical learning from only 2 minutes of exposure, suggesting that infants have access to a powerful mechanism for the computation of statistical properties of the language input."}
{"_id": "a6ad17f9df9346c56bab090b35ef73ff94f56c01", "title": "A computational study of cross-situational techniques for learning word-to-meaning mappings", "text": "This paper presents a computational study of part of the lexical-acquisition task faced by children, namely the acquisition of word-to-meaning mappings. It first approximates this task as a formal mathematical problem. It then presents an implemented algorithm for solving this problem, illustrating its operation on a small example. This algorithm offers one precise interpretation of the intuitive notions of cross-situational learning and the principle of contrast applied between words in an utterance. It robustly learns a homonymous lexicon despite noisy multi-word input, in the presence of referential uncertainty, with no prior knowledge that is specific to the language being learned. Computational simulations demonstrate the robustness of this algorithm and illustrate how algorithms based on cross-situational learning and the principle of contrast might be able to solve lexical-acquisition problems of the size faced by children, under weak, worst-case assumptions about the type and quantity of data available."}
{"_id": "d2880069513ee3cfbe1136cee6133a6eedb51c00", "title": "Substrate Integrated Waveguide Cavity-Backed Self-Triplexing Slot Antenna", "text": "In this letter, a novel substrate integrated waveguide (SIW) cavity-backed self-triplexing slot antenna is proposed for triple-band communication. It is realized on a single-layer printed circuit board. The proposed antenna consists of a pair of bow-tie slots etched on an SIW cavity, and three different resonant modes are excited by two microstrip lines and a coaxial probe to operate at three distinct frequencies (7.89, 9.44, and 9.87 GHz). The band around 9.48 GHz is obtained due to radiation from one bow-tie slot fed by adjacent microstrip feedline and band around 9.87 GHz is due to radiation from other bow-tie slot fed by its adjacent microstrip feedline. On the other hand, lower band around 7.89 GHz is achieved because of radiation from both bow-tie slots excited by coaxial probe feedline. The measured isolation between any two feedlines is better than 22.5 dB. The measured realized gain is more than 7.2 dB at all the bands. Cross polarization below 36.5, 29.3, and 24.45 dB in broad sight direction and high front-to-back ratio of more than 17.3 dB at each resonant frequency are obtained."}
{"_id": "9c703258eca64936838f700b6d2a0e2c33b12b72", "title": "Structural Health Monitoring Framework Based on Internet of Things: A Survey", "text": "Internet of Things (IoT) has recently received a great attention due to its potential and capacity to be integrated into any complex system. As a result of rapid development of sensing technologies such as radio-frequency identification, sensors and the convergence of information technologies such as wireless communication and Internet, IoT is emerging as an important technology for monitoring systems. This paper reviews and introduces a framework for structural health monitoring (SHM) using IoT technologies on intelligent and reliable monitoring. Specifically, technologies involved in IoT and SHM system implementation as well as data routing strategy in IoT environment are presented. As the amount of data generated by sensing devices are voluminous and faster than ever, big data solutions are introduced to deal with the complex and large amount of data collected from sensors installed on structures."}
{"_id": "e4284c6b3cab23dc6221a9b8383546810f5ecb6b", "title": "Comparative Analysis of Intelligent Transportation Systems for Sustainable Environment in Smart Cities", "text": "In recent works on the Internet of Vehicles (IoV), \u201cintelligent\u201d and \u201csustainable\u201d have been the buzzwords in the context of transportation. Maintaining sustainability in IoV is always a challenge. Sustainability in IoV can be achieved not only by the use of pollution-free vehicular systems, but also by maintenance of road traffic safety or prevention of accidents or collisions. With the aim of establishing an effective sustainable transportation planning system, this study performs a short analysis of existing sustainable transportation methods in the IoV. This study also analyzes various characteristics of sustainability and the advantages and disadvantages of existing transportation systems. Toward the end, this study provides a clear suggestion for effective sustainable transportation planning aimed at the maintenance of an eco-friendly environment and road traffic safety, which, in turn, would lead to a sustainable transportation system."}
{"_id": "6505481758758ad1b4d98e7321801723d9773af2", "title": "An Improved DC-Link Voltage Fast Control Scheme for a PWM Rectifier-Inverter System", "text": "This paper presents an improved dc-link voltage fast control strategy based on energy balance for a pulse width modulation (PWM) rectifier-inverter system to reduce the fluctuation of dc-link voltage. A conclusion is drawn that the energy in dc-link capacitor cannot be kept constant in dynamic process when the system operates in rectifier mode, even in ideal case. Meanwhile, the minimum dc-link voltage deviation is analyzed. Accordingly, a predictive dc-link voltage control scheme based on energy balance is proposed, while the grid current is regulated by deadbeat predictive control. A prediction method for output power is also introduced. Furthermore, a small-signal model with the control delay is adopted to analyze the stability and robustness of the control strategy. The simulation and experimental results demonstrate both good dynamic and steady-state performances of the rectifier-inverter system with the proposed control scheme."}
{"_id": "adf434fe0bf7ff55edee25d5e50d3de94ad2c325", "title": "Comparative Study on Load Balancing Techniques in Cloud Computing", "text": "The present era has witnessed tremendous growth of the internet and various applications that are running over it. Cloud computing is the internet based technology, emphasizing its utility and follows pay-as-you-go model, hence became so popular with high demanding features. Load balancing is one of the interesting and prominent research topics in cloud computing, which has gained a large attention recently. Users are demanding more services with better results. Many algorithms and approaches are proposed by various researchers throughout the world, with the aim of balancing the overall workload among given nodes, while attaining the maximum throughput and minimum time. In this paper, various proposed algorithms addressing the issue of load balancing in Cloud Computing are analyzed and compared to provide a gist of the latest approaches in this research area."}
{"_id": "3e38e521ec579a6aad4e7ebc5f125123018b1683", "title": "Application of spiking neural networks and the bees algorithm to control chart pattern recognition", "text": "Statistical process control (SPC) is a method for improving the quality o f products. Control charting plays a most important role in SPC. SPC control charts arc used for monitoring and detecting unnatural process behaviour. Unnatural patterns in control charts indicate unnatural causes for variations. Control chart pattern recognition is therefore important in SPC. Past research shows that although certain types o f charts, such as the CUSUM chart, might have powerful detection ability, they lack robustness and do not function automatically. In recent years, neural network techniques have been applied to automatic pattern recognition. Spiking Neural Networks (SNNs) belong to the third generation o f artificial neural networks, with spiking neurons as processing elements. In SNNs, time is an important feature for information representation and processing. This thesis proposes the application o f SNN techniques to control chart pattern recognition. It is designed to present an analysis o f the existing learning algorithms o f SNN for pattern recognition and to explain how and why spiking neurons have more computational power in comparison to the previous generation o f neural networks. This thesis focuses on the architecture and the learning procedure o f the network. Four new learning algorithms are presented with their specific architecture: Spiking Learning Vector Quantisation (S-LVQ), Enhanced-Spiking Learning Vector Quantisation (NS-LVQ), S-LVQ with Bees and NS-LVQ with Bees. The latter two algorithms employ a new intelligent swarm-based optimisation called the Bees Algorithm to optimise the LVQ pattern recognition networks. Overall, the aim o f the research is to develop a simple architecture for the proposed network as well as to develop a network that is efficient for application to control chart pattern recognition. Experiments show that the proposed architecture and the learning procedure give high pattern recognition accuracies."}
{"_id": "ab430d9e3e250ab5dc61e12f9e7e40c83227b8a0", "title": "On Different Facets of Regularization Theory", "text": "This review provides a comprehensive understanding of regularization theory from different perspectives, emphasizing smoothness and simplicity principles. Using the tools of operator theory and Fourier analysis, it is shown that the solution of the classical Tikhonov regularization problem can be derived from the regularized functional defined by a linear differential (integral) operator in the spatial (Fourier) domain. State-ofthe-art research relevant to the regularization theory is reviewed, covering Occam's razor, minimum length description, Bayesian theory, pruning algorithms, informational (entropy) theory, statistical learning theory, and equivalent regularization. The universal principle of regularization in terms of Kolmogorov complexity is discussed. Finally, some prospective studies on regularization theory and beyond are suggested."}
{"_id": "3e8d2a11a3ed6d9ebdd178a420b3c019c5356fae", "title": "Never-Ending Multiword Expressions Learning", "text": "This paper introduces NEMWEL, a system that performs Never-Ending MultiWord Expressions Learning. Instead of using a static corpus and classifier, NEMWEL applies supervised learning on automatically crawled news texts. Moreover, it uses its own results to periodically retrain the classifier, bootstrapping on its own results. In addition to a detailed description of the system\u2019s architecture and its modules, we report the results of a manual evaluation. It shows that NEMWEL is capable of learning new expressions over time with improved precision."}
{"_id": "9860487cd9e840d946b93457d11605be643e6d4c", "title": "Text Clustering for Topic Detection", "text": "The world wide web represents vast stores of information. However, the sheer amount of such information makes it practically impossible for any human user to be aware of much of it. Therefore, it would be very helpful to have a system that automatically discovers relevant, yet previously unknown information, and reports it to users in human-readable form. As the first attempt to accomplish such a goal, we proposed a new clustering algorithm and compared it with existing clustering algorithms. The proposed method is motivated by constructive and competitive learning from neural network research. In the construction phase, it tries to find the optimal number of clusters by adding a new cluster when the intrinsic difference between the instance presented and the existing clusters is detected. Each cluster then moves toward the optimal cluster center according to the learning rate by adjusting its weight vector. From the experimental results on the three different real world data sets, the proposed method shows an even trend of performance across the different domains, while the performance of our algorithm on text domains was better than that reported in previous research."}
{"_id": "c7df32e449f1ba9767763a122bbdc2fac310f958", "title": "Direct Desktop Printed-Circuits-on-Paper Flexible Electronics", "text": "There currently lacks of a way to directly write out electronics, just like printing pictures on paper by an office printer. Here we show a desktop printing of flexible circuits on paper via developing liquid metal ink and related working mechanisms. Through modifying adhesion of the ink, overcoming its high surface tension by dispensing machine and designing a brush like porous pinhead for printing alloy and identifying matched substrate materials among different papers, the slightly oxidized alloy ink was demonstrated to be flexibly printed on coated paper, which could compose various functional electronics and the concept of Printed-Circuits-on-Paper was thus presented. Further, RTV silicone rubber was adopted as isolating inks and packaging material to guarantee the functional stability of the circuit, which suggests an approach for printing 3D hybrid electro-mechanical device. The present work paved the way for a low cost and easygoing method in directly printing paper electronics."}
{"_id": "aea427bcea46c83021e62f4fb10178557510ca5d", "title": "Latent Variable Dialogue Models and their Diversity", "text": "We present a dialogue generation model that directly captures the variability in possible responses to a given input, which reduces the \u2018boring output\u2019 issue of deterministic dialogue models. Experiments show that our model generates more diverse outputs than baseline models, and also generates more consistently acceptable output than sampling from a deterministic encoder-decoder model."}
{"_id": "20efcba63a0d9f12251a5e5dda745ac75a6a84a9", "title": "Bio-Inspired Robotics Based on Liquid Crystalline Elastomers ( LCEs ) and Flexible Stimulators", "text": ""}
{"_id": "2a6783ae51d7ee781d584ef9a3eb8ab1997d0489", "title": "A study of large-scale ethnicity estimation with gender and age variations", "text": "In this paper we study large-scale ethnicity estimation under variations of age and gender. The biologically-inspired features are applied to ethnicity classification for the first time. Through a large number of experiments on a large database with more than 21,000 face images, we systematically study the effect of gender and age variations on ethnicity estimation. Our finding is that ethnicity classification can have high accuracy in most cases, but an interesting phenomenon is observed that the ethnic classification accuracies could be reduced by 6\u223c8% in average when female faces are used for training while males for testing. The study results provide a guide for face processing on a multi-ethnic database, e.g., image collection from the Internet, and may inspire further psychological studies on ethnic grouping with gender and age variations. We also apply the methods to the whole MORPH-II database with more than 55,000 face images for ethnicity classification of five races. It is the first time that ethnicity estimation is performed on so large a database."}
{"_id": "37caf807a77df2bf907b1e7561454419a328e34d", "title": "Engineering for Predictive Modeling using Reinforcement Learning", "text": "Feature engineering is a crucial step in the process of predictive modeling. It involves the transformation of given feature space, typically using mathematical functions, with the objective of reducing the modeling error for a given target. However, there is no well-defined basis for performing effective feature engineering. It involves domain knowledge, intuition, and most of all, a lengthy process of trial and error. The human attention involved in overseeing this process significantly influences the cost of model generation. We present a new framework to automate feature engineering. It is based on performance driven exploration of a transformation graph, which systematically and compactly enumerates the space of given options. A highly efficient exploration strategy is derived through reinforcement learning on past examples."}
{"_id": "e7ab1a736bb105df8678af5191640fd534c8c430", "title": "Power Transformer Economic Evaluation in Decentralized Electricity Markets", "text": "Owing to deregulation, privatization, and competition, estimating financial benefits of electrical power system projects is becoming increasingly important. In other words, it is necessary to assess the project profitability under the light of new developments in the electricity market. In this paper, a detailed methodology for the least cost choice of a distribution transformer is proposed, showing how the higher price of a facility can be traded against its operational cost over its life span. The proposed method involves the incorporation of the discounted cost of transformer losses to their economic evaluation, providing the ability to take into account variable energy cost during the transformer operating lifetime. In addition, the influence of the variability in the energy loss cost is investigated, taking into account a potential policy intended to be adopted by distribution network operators. The method is combined with statistical and probabilistic assessment of electricity price volatility in order to derive its impact on the transformer purchasing policy."}
{"_id": "fe8e3898e203086496ae33e355f450bd32b5daff", "title": "Use of opposition method in the test of high-power electronic converters", "text": "The test and the characterization of medium or high-power electronic converters, under nominal operating conditions, are made difficult by the requirement of high-power electrical source and load. In addition, the energy lost during the test may be very significant. The opposition method, which consists of an association of two identical converters supplied by the same source, one operating as a generator, the other as a receptor, can be a better way to do these test. Another advantage is the possibility to realize accurate measurements of the different losses in the converters under test. In the first part of this paper, the characteristics of the method concerning loss measurements are compared to those of the electrical or calorimetric methods, then it is shown how it can be applied to different types of power electronic converters, choppers, switched mode power supplies, and pulsewidth modulation inverters. In the second part, different examples of studies conducted by the authors, and using this method, are presented. They have varying goals, from the test of soft-switching inverters to the characterization of integrated gate-commutated thyristor (IGCT) devices mounted into 2-MW choppers."}
{"_id": "b9728a8279c12f93fff089b6fac96afd2d3bab04", "title": "Mollifying Networks", "text": "The optimization of deep neural networks can be more challenging than traditional convex optimization problems due to the highly non-convex nature of the loss function, e.g. it can involve pathological landscapes such as saddle-surfaces that can be difficult to escape for algorithms based on simple gradient descent. In this paper, we attack the problem of optimization of highly non-convex neural networks by starting with a smoothed \u2013 or mollified \u2013 objective function which becomes more complex as the training proceeds. Our proposition is inspired by the recent studies in continuation methods: similar to curriculum methods, we begin learning an easier (possibly convex) objective function and let it evolve during the training, until it eventually goes back to being the original, difficult to optimize, objective function. The complexity of the mollified networks is controlled by a single hyperparameter which is annealed during the training. We show improvements on various difficult optimization tasks and establish a relationship between recent works on continuation methods for neural networks and mollifiers."}
{"_id": "1a3fe9032cf48e877bf44e24f5d06d9d14e04c9e", "title": "Robot-assisted movement training compared with conventional therapy techniques for the rehabilitation of upper-limb motor function after stroke.", "text": "OBJECTIVE\nTo compare the effects of robot-assisted movement training with conventional techniques for the rehabilitation of upper-limb motor function after stroke.\n\n\nDESIGN\nRandomized controlled trial, 6-month follow-up.\n\n\nSETTING\nA Department of Veterans Affairs rehabilitation research and development center.\n\n\nPARTICIPANTS\nConsecutive sample of 27 subjects with chronic hemiparesis (>6mo after cerebrovascular accident) randomly allocated to group.\n\n\nINTERVENTIONS\nAll subjects received twenty-four 1-hour sessions over 2 months. Subjects in the robot group practiced shoulder and elbow movements while assisted by a robot manipulator. Subjects in the control group received neurodevelopmental therapy (targeting proximal upper limb function) and 5 minutes of exposure to the robot in each session.\n\n\nMAIN OUTCOME MEASURES\nFugl-Meyer assessment of motor impairment, FIMtrade mark instrument, and biomechanic measures of strength and reaching kinematics. Clinical evaluations were performed by a therapist blinded to group assignments.\n\n\nRESULTS\nCompared with the control group, the robot group had larger improvements in the proximal movement portion of the Fugl-Meyer test after 1 month of treatment (P<.05) and also after 2 months of treatment (P<.05). The robot group had larger gains in strength (P<.02) and larger increases in reach extent (P<.01) after 2 months of treatment. At the 6-month follow-up, the groups no longer differed in terms of the Fugl-Meyer test (P>.30); however, the robot group had larger improvements in the FIM (P<.04).\n\n\nCONCLUSIONS\nCompared with conventional treatment, robot-assisted movements had advantages in terms of clinical and biomechanical measures. Further research into the use of robotic manipulation for motor rehabilitation is justified."}
{"_id": "552aa062d2895901bca03b26fa25766b3f64bf6f", "title": "A Brief survey of Data Mining Techniques Applied to Agricultural Data", "text": "As with many other sectors the amount of agriculture data based are increasing on a daily basis. However, the application of data mining methods and techniques to discover new insights or knowledge is a relatively a novel research area. In this paper we provide a brief review of a variety of Data Mining techniques that have been applied to model data from or about the agricultural domain. The Data Mining techniques applied on Agricultural data include k-means, bi clustering, k nearest neighbor, Neural Networks (NN) Support Vector Machine (SVM), Naive Bayes Classifier and Fuzzy c-means. As can be seen the appropriateness of data mining techniques is to a certain extent determined by the different types of agricultural data or the problems being addressed. This survey summarize the application of data mining techniques and predictive modeling application in the agriculture field."}
{"_id": "6ee0d9a40c60e50fab636cca74c6301853d42367", "title": "Stargazer: Automated regression-based GPU design space exploration", "text": "Graphics processing units (GPUs) are of increasing interest because they offer massive parallelism for high-throughput computing. While GPUs promise high peak performance, their challenge is a less-familiar programming model with more complex and irregular performance trade-offs than traditional CPUs or CMPs. In particular, modest changes in software or hardware characteristics can lead to large or unpredictable changes in performance. In response to these challenges, our work proposes, evaluates, and offers usage examples of Stargazer1, an automated GPU performance exploration framework based on stepwise regression modeling. Stargazer sparsely and randomly samples parameter values from a full GPU design space and simulates these designs. Then, our automated stepwise algorithm uses these sampled simulations to build a performance estimator that identifies the most significant architectural parameters and their interactions. The result is an application-specific performance model which can accurately predict program runtime for any point in the design space. Because very few initial performance samples are required relative to the extremely large design space, our method can drastically reduce simulation time in GPU studies. For example, we used Stargazer to explore a design space of nearly 1 million possibilities by sampling only 300 designs. For 11 GPU applications, we were able to estimate their runtime with less than 1.1% average error. In addition, we demonstrate several usage scenarios of Stargazer."}
{"_id": "ccaab0cee02fe1e5ffde33b79274b66aedeccc65", "title": "Ethical and Social Aspects of Self-Driving Cars", "text": "As an envisaged future of transportation, self-driving cars are being discussed from various perspectives, including social, economical, engineering, computer science, design, and ethics. On the one hand, self-driving cars present new engineering problems that are being gradually successfully solved. On the other hand, social and ethical problems are typically being presented in the form of an idealized unsolvable decision-making problem, the so-called trolley problem, which is grossly misleading. We argue that an applied engineering ethical approach for the development of new technology is what is needed; the approach should be applied, meaning that it should focus on the analysis of complex real-world engineering problems. Software plays a crucial role for the control of self-driving cars; therefore, software engineering solutions should seriously handle ethical and social considerations. In this paper we take a closer look at the regulative instruments, standards, design, and implementations of components, systems, and services and we present practical social and ethical challenges that have to be met, as well as novel expectations for software engineering."}
{"_id": "bbd9b5e4d4761d923d21a060513e826bf5bfc620", "title": "Harvesting Multiple Views for Marker-Less 3D Human Pose Annotations", "text": "Recent advances with Convolutional Networks (ConvNets) have shifted the bottleneck for many computer vision tasks to annotated data collection. In this paper, we present a geometry-driven approach to automatically collect annotations for human pose prediction tasks. Starting from a generic ConvNet for 2D human pose, and assuming a multi-view setup, we describe an automatic way to collect accurate 3D human pose annotations. We capitalize on constraints offered by the 3D geometry of the camera setup and the 3D structure of the human body to probabilistically combine per view 2D ConvNet predictions into a globally optimal 3D pose. This 3D pose is used as the basis for harvesting annotations. The benefit of the annotations produced automatically with our approach is demonstrated in two challenging settings: (i) fine-tuning a generic ConvNet-based 2D pose predictor to capture the discriminative aspects of a subjects appearance (i.e.,personalization), and (ii) training a ConvNet from scratch for single view 3D human pose prediction without leveraging 3D pose groundtruth. The proposed multi-view pose estimator achieves state-of-the-art results on standard benchmarks, demonstrating the effectiveness of our method in exploiting the available multi-view information."}
{"_id": "66e3aa516a7124befa7d2a8b0872e9619acf7f58", "title": "JM: An R Package for the Joint Modelling of Longitudinal and Time-to-Event Data", "text": "In longitudinal studies measurements are often collected on different types of outcomes for each subject. These may include several longitudinally measured responses (such as blood values relevant to the medical condition under study) and the time at which an event of particular interest occurs (e.g., death, development of a disease or dropout from the study). These outcomes are often separately analyzed; however, in many instances, a joint modeling approach is either required or may produce a better insight into the mechanisms that underlie the phenomenon under study. In this paper we present the R package JM that fits joint models for longitudinal and time-to-event data."}
{"_id": "4f57643b95e854bb05fa0c037cbf8898accdbdef", "title": "Technical evaluation of the Carolo-Cup 2014 - A competition for self-driving miniature cars", "text": "The Carolo-Cup competition conducted for the eighth time this year, is an international student competition focusing on autonomous driving scenarios implemented on 1:10 scale car models. Three practical sub-competitions have to be realized in this context and represent a complex, interdisciplinary challenge. Hence, students have to cope with all core topics like mechanical development, electronic design, and programming as addressed usually by robotic applications. In this paper we introduce the competition challenges in detail and evaluate the results of all 13 participating teams from the 2014 competition. For this purpose, we analyze technical as well as non-technical configurations of each student group and derive best practices, lessons learned, and criteria as a precondition for a successful participation. Due to the comprehensive orientation of the Carolo-Cup, this knowledge can be applied on comparable projects and related competitions as well."}
{"_id": "21fb86020f68bf2dd57cd1b8a0e8adead5d9a9ae", "title": "Data Mining : Concepts and Techniques", "text": "Association rule mining was first proposed by Agrawal, Imielinski, and Swami [AIS93]. The Apriori algorithm discussed in Section 5.2.1 for frequent itemset mining was presented in Agrawal and Srikant [AS94b]. A variation of the algorithm using a similar pruning heuristic was developed independently by Mannila, Tiovonen, and Verkamo [MTV94]. A joint publication combining these works later appeared in Agrawal, Mannila, Srikant, Toivonen, and Verkamo [AMS96]. A method for generating association rules from frequent itemsets is described in Agrawal and Srikant [AS94a]."}
{"_id": "4eae6ee36de5f9ae3c05c6ca385938de98cd5ef8", "title": "Combining Text and Linguistic Document Representations for Authorship Attribution", "text": "In this paper, we provide several alternatives to the classical Bag-Of-Words model for automatic authorship attribution. To this end, we consider linguistic and writing style information such as grammatical structures to construct different document representations. Furthermore we describe two techniques to combine the obtained representations: combination vectors and ensemble based meta classification. Our experiments show the viability of our approach."}
{"_id": "288c67457f09c0c30cadd7439040114e9c377bc3", "title": "Finding Interesting Rules from Large Sets of Discovered Association Rules", "text": "Association rules, introduced by Agrawal, Imielinski, and Swami, are rules of the form \u201cfor 90% of the rows of the relation, if the row has value 1 in the columns in set W, then it has 1 also in column B\u201d. Efficient methods exist for discovering association rules from large collections of data. The number of discovered rules can, however, be so large that browsing the rule set and finding interesting rules from it can be quite difficult for the user. We show how a simple formalism of rule templates makes it possible to easily describe the structure of interesting rules. We also give examples of visualization of rules, and show how a visualization tool interfaces with rule templates."}
{"_id": "384bb3944abe9441dcd2cede5e7cd7353e9ee5f7", "title": "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods", "text": ""}
{"_id": "49fa97db6b7f3ab2b3a623c3552aa680b80c8dd2", "title": "Automatically Categorizing Written Texts by Author Gender", "text": "The problem of automatically determining the gender of a document's author would appear to be a more subtle problem than those of categorization by topic or authorship attribution. Nevertheless, it is shown that automated text categorization techniques can exploit combinations of simple lexical and syntactic features to infer the gender of the author of an unseen formal written document with approximately 80% accuracy. The same techniques can be used to determine if a document is fiction or non-fiction with approximately 98% accuracy."}
{"_id": "883224c3b28b0563a393746066738f52e6fcc70d", "title": "To Create What You Tell: Generating Videos from Captions", "text": "We are creating multimedia contents everyday and everywhere. While automatic content generation has played a fundamental challenge to multimedia community for decades, recent advances of deep learning have made this problem feasible. For example, the Generative Adversarial Networks (GANs) is a rewarding approach to synthesize images. Nevertheless, it is not trivial when capitalizing on GANs to generate videos. The difficulty originates from the intrinsic structure where a video is a sequence of visually coherent and semantically dependent frames. This motivates us to explore semantic and temporal coherence in designing GANs to generate videos. In this paper, we present a novel Temporal GANs conditioning on Captions, namely TGANs-C, in which the input to the generator network is a concatenation of a latent noise vector and caption embedding, and then is transformed into a frame sequence with 3D spatio-temporal convolutions. Unlike the naive discriminator which only judges pairs as fake or real, our discriminator additionally notes whether the video matches the correct caption. In particular, the discriminator network consists of three discriminators: video discriminator classifying realistic videos from generated ones and optimizes video-caption matching, frame discriminator discriminating between real and fake frames and aligning frames with the conditioning caption, and motion discriminator emphasizing the philosophy that the adjacent frames in the generated videos should be smoothly connected as in real ones. We qualitatively demonstrate the capability of our TGANs-C to generate plausible videos conditioning on the given captions on two synthetic datasets (SBMG and TBMG) and one real-world dataset (MSVD). Moreover, quantitative experiments on MSVD are performed to validate our proposal via Generative Adversarial Metric and human study."}
{"_id": "f4b3804f052f15f8a25af78db24e32dc25254722", "title": "Number : AUTCON-D16-00058 R 1 Title : BIM for Infrastructure : An Overall Review and Constructor Perspective", "text": "6 The subject of Building Information Modelling (BIM) has become a central topic to the improvement of the AECOO 7 (Architecture, Engineering, Construction, Owner and Operator) industry around the world, to the point where the 8 concept is being expanded into domains it was not originally conceived to address. Transitioning BIM into the 9 domain of infrastructure projects has provided challenges and emphasized the constructor perspective of BIM. 10 Therefore, this study aims to collect the relevant literature regarding BIM within the Infrastructure domain and its 11 use from the constructor perspective to review and analyse the current industry positioning and research state of 12 the art, with regards to the set criteria. The review highlighted a developing base of BIM for infrastructure. From 13 the analysis, the related research gaps were identified regarding information integration, alignment of BIM 14 processes to constructor business processes & the effective governance and value of information. From this a 15 unique research strategy utilising a framework for information governance coupled with a graph based distributed 16 data environment is outlined to further progress the integration and efficiency of AECOO Infrastructure projects. 17"}
{"_id": "36d167397872c36713e8a274113b30ea5cd3ad7d", "title": "Enterprise Database Applications and the Cloud: A Difficult Road Ahead", "text": "There is considerable interest in moving DBMS applications from inside enterprise data centers to the cloud, both to reduce cost and to increase flexibility and elasticity. Some of these applications are \"green field\" projects (i.e., new applications), others are existing legacy systems that must be migrated to the cloud. In another dimension, some are decision support applications while others are update-oriented. In this paper, we discuss the technical and political challenges that these various enterprise applications face when considering cloud deployment. In addition, a requirement for quality-of-service (QoS) guarantees will generate additional disruptive issues. In some circumstances, achieving good DBMS performance on current cloud architectures and future hardware technologies will be non-trivial. In summary, there is a difficult road ahead for enterprise database applications."}
{"_id": "f61ca00abf165ea5590f67942c9bd7538187752d", "title": "Laughbot: Detecting Humor in Spoken Language with Language and Audio Cues", "text": "We propose detecting and responding to humor in spoken dialogue by extracting language and audio cues and subsequently feeding these features into a combined recurrent neural network (RNN) and logistic regression model. In this paper, we parse Switchboard phone conversations to build a corpus of punchlines and unfunny lines where punchlines precede laughter tokens in Switchboard transcripts. We create a combined RNN and logistic regression model that uses both acoustic and language cues to predict whether a conversational agent should respond to an utterance with laughter. Our model achieves an F1-score of 63.2 and accuracy of 73.9. This model outperforms our logistic language model (F1-score 56.6) and RNN acoustic model (59.4) as well as the final RNN model of D. Bertero, 2016 (52.9). Using our final model, we create a \u201claughbot\u201d that audibly responds to a user with laughter when their utterance is classified as a punchline. A conversational agent outfitted with a humorrecognition system such as the one we present in this paper would be valuable as these agents gain utility in everyday life. Keywords\u2014Chatbots; spoken natural language processing; deep learning; machine learning"}
{"_id": "4c27eef7fa83900ef8f2e48a523750d035830342", "title": "Reconstruction of dorsal and/or caudal nasal septum deformities with septal battens or by septal replacement: an overview and comparison of techniques.", "text": "OBJECTIVES\nThe objectives of this study were to describe and compare two techniques used to correct nasal septum deviations located in the dorsal and/or caudal septum.\n\n\nSTUDY DESIGN\nThe authors conducted a retrospective clinical chart review.\n\n\nMETHODS\nThe authors conducted a comparison of functional and technical results between surgery in the L-strut of the septum in 114 patients with septal battens or by septal replacement by subjective self-evaluation and by examination of the position of the septum during follow up.\n\n\nRESULTS\nThere was subjective improvement in nasal breathing in 86% of the septal batten group and in 94% of the septal replacement group. This difference was not statistically significant. The technical result was judged by examining the position of the septum during follow up as midline, slightly deviated, or severely deviated. The septum was significantly more often located in the midline during follow up in the septal replacement group than in the septal batten group.\n\n\nCONCLUSION\nTreatment of deformities located in the structurally important L-strut of the septum may be technically challenging and many functional, structural, and esthetic considerations must be taken into account. On the basis of this series, both septal battens and septal replacement techniques may be considered for correction of deviations in this area. The functional improvement rates were not significantly different between the techniques, although during follow up, the septum appeared to be significantly more often located in the midline in the septal replacement group. The techniques are described and their respective advantages and potential drawbacks are discussed."}
{"_id": "1b5f18498b42e464b81e3d81b8d32237aea4a234", "title": "DroidTrace: A ptrace based Android dynamic analysis system with forward execution capability", "text": "Android, being an open source smartphone operating system, enjoys a large community of developers who create new mobile services and applications. However, it also attracts malware writers to exploit Android devices in order to distribute malicious apps in the wild. In fact, Android malware are becoming more sophisticated and they use advanced \u201cdynamic loading\u201d techniques like Java reflection or native code execution to bypass security detection. To detect dynamic loading, one has to use dynamic analysis. Currently, there are only a handful of Android dynamic analysis tools available, and they all have shortcomings in detecting dynamic loading. The aim of this paper is to design and implement a dynamic analysis system which allows analysts to perform systematic analysis of dynamic payloads with malicious behaviors. We propose \u201cDroidTrace\u201d, a ptrace based dynamic analysis system with forward execution capability. Our system uses ptrace to monitor selected system calls of the target process which is running the dynamic payloads, and classifies the payloads behaviors through the system call sequence, e.g., behaviors such as file access, network connection, inter-process communication and even privilege escalation. Also, DroidTrace performs \u201cphysical modification\u201d to trigger different dynamic loading behaviors within an app. Using DroidTrace, we carry out a large scale analysis on 36,170 dynamic payloads in 50,000 apps and 294 malware in 10 families (four of them are zero-day) with various dynamic loading behaviors."}
{"_id": "ef9473055dd96e5d146c88ae3cc88d06e7adfd07", "title": "Understanding the Dynamic Interplay of Social Buzz and Contribution Behavior within and between Online Platforms - Evidence from Crowdfunding", "text": "Motivated by the growing interconnection between online platforms, we examine the dynamic interplay between social buzz and contribution behavior in the crowdfunding context. Since the utility of crowdfunding projects is usually difficult to ascertain, prospective backers draw on quality signals, such as social buzz and prior-contribution behavior, to make their funding decisions. We employ the panel vector autoregression (PVAR) methodology to investigate both intraand cross-platform effects based on data collected from three platforms: Indiegogo, one of the largest crowdfunding platforms on the web, Twitter and Facebook. Our results show a positive influence of social buzz on project backing, but a negative relationship in the reverse direction. Furthermore, we observe strong positive feedback cycles within each platform. Our results are supplemented by split-sample analyses for project orientation (Social, Cause and Entrepreneurial) and project success (Winners vs. Losers), in which Facebook shares were identified as a critical success factor."}
{"_id": "6e5d8a30531680beb200cd6f0de91a7919381520", "title": "Comparing exploration strategies for Q-learning in random stochastic mazes", "text": "Balancing the ratio between exploration and exploitation is an important problem in reinforcement learning. This paper evaluates four different exploration strategies combined with Q-learning using random stochastic mazes to investigate their performances. We will compare: UCB-1, softmax, \u2208-greedy, and pursuit. For this purpose we adapted the UCB-1 and pursuit strategies to be used in the Q-learning algorithm. The mazes consist of a single optimal goal state and two suboptimal goal states that lie closer to the starting position of the agent, which makes efficient exploration an important part of the learning agent. Furthermore, we evaluate two different kinds of reward functions, a normalized one with rewards between 0 and 1, and an unnormalized reward function that penalizes the agent for each step with a negative reward. We have performed an extensive grid-search to find the best parameters for each method and used the best parameters on novel randomly generated maze problems of different sizes. The results show that softmax exploration outperforms the other strategies, although it is harder to tune its temperature parameter. The worst performing exploration strategy is \u2208-greedy."}
{"_id": "a22aa5a7e98fe4fad1ec776df8d423b1c8b373ef", "title": "Character-based movie summarization", "text": "A decent movie summary is helpful for movie producer to promote the movie as well as audience to capture the theme of the movie before watching the whole movie. Most exiting automatic movie summarization approaches heavily rely on video content only, which may not deliver ideal result due to the semantic gap between computer calculated low-level features and human used high-level understanding. In this paper, we incorporate script into movie analysis and propose a novel character-based movie summarization approach, which is validated by modern film theory that what actually catches audiences' attention is the character. We first segment scenes in the movie by analysis and alignment of script and movie. Then we conduct substory discovery and content attention analysis based on the scent analysis and character interaction features. Given obtained movie structure and content attention value, we calculate movie attraction scores at both shot and scene levels and adopt this as criterion to generate movie summary. The promising experimental results demonstrate that character analysis is effective for movie summarization and movie content understanding."}
{"_id": "7af4c0e2899042aea87d4a37aadd9f60b53cf272", "title": "Distributed Denial-of-Service (DDoS) Threat in Collaborative Environment - A Survey on DDoS Attack Tools and Traceback Mechanisms", "text": "Collaborative applications are feasible nowadays and are becoming more popular due to the advancement in internetworking technology. The typical collaborative applications, in India include the Space research, Military applications, Higher learning in Universities and Satellite campuses, State and Central government sponsored projects, e-governance, e-healthcare systems, etc. In such applications, computing resources for a particular institution/organization spread across districts and states and communication is achieved through internetworking. Therefore the computing and communication resources must be protected against security attacks as any compromise on these resources would jeopardize the entire application/mission. Collaborative environment is prone for various threats, of which Distributed Denial of Service (DDoS) attacks are of major concern. DDoS attack prevents legitimate access to critical resources. A survey by Arbor networks reveals that approximately 1,200 DDoS attacks occur per day. As the DDoS attack is coordinated, the defense for the same has to be a collaborative one. To counter DDoS attacks in a collaborative environment, all the routers need to work collaboratively by exchanging their caveat messages with their neighbors. This paper analyses the security measures in a collaborative environment, identifles the popular DDoS attack tools, and surveys the existing traceback mechanisms to trace the real attacker."}
{"_id": "34977babbdc735c56b04668c19da31d89161a2b9", "title": "Geolocation of RF Emitters by Many UAVs", "text": "This paper presents an approach to using a large team of UAVs to find radio frequency (RF) emitting targets in a large area. Small, inexpensive UAVs that can collectively and rapidly determine the approximate location of intermittently broadcasting and mobile RF emitters have a range of applications in both military, e.g., for finding SAM batteries, and civilian, e.g., for finding lost hikers, domains. Received Signal Strength Indicator (RSSI) sensors on board the UAVs measure the strength of RF signals across a range of frequencies. The signals, although noisy and ambiguous due to structural noise, e.g., multipath effects, overlapping signals and sensor noise, allow estimates to be made of emitter locations. Generating a probability distribution over emitter locations requires integrating multiple signals from different UAVs into a Bayesian filter, hence requiring cooperation between the UAVs. Once likely target locations are identified, EO-camera equipped UAVs must be tasked to provide a video stream of the area to allow a user to identify the emitter."}
{"_id": "cb32e2100a853e7ea491b1ac17b941f64f8720df", "title": "75\u201385 GHz flip-chip phased array RFIC with simultaneous 8-transmit and 8-receive paths for automotive radar applications", "text": "This paper presents the first simultaneous 8-transmit and 8-receive paths 75-85 GHz phased array RFIC for FMCW automotive radars. The receive path has two separate I/Q mixers each connected to 4-element phased arrays for RF and digital beamforming. The chip also contains a build-in-self-test system (BIST) for the transmit and receive paths. Measurements on a flip-chip prototype show a gain >24 dB at 77 GHz, -25 dB coupling between adjacent channels in the transmit and receive paths (<;-45 dB between non-adjacent channels), and <;-50 dB coupling between the transmit and receive portions of the chip."}
{"_id": "d21ebaab3f715dc7178966ff146711882e6a6fee", "title": "Globally and locally consistent image completion", "text": "We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary resolutions by filling-in missing regions of any shape. To train this image completion network to be consistent, we use global and local context discriminators that are trained to distinguish real images from completed ones. The global discriminator looks at the entire image to assess if it is coherent as a whole, while the local discriminator looks only at a small area centered at the completed region to ensure the local consistency of the generated patches. The image completion network is then trained to fool the both context discriminator networks, which requires it to generate images that are indistinguishable from real ones with regard to overall consistency as well as in details. We show that our approach can be used to complete a wide variety of scenes. Furthermore, in contrast with the patch-based approaches such as PatchMatch, our approach can generate fragments that do not appear elsewhere in the image, which allows us to naturally complete the images of objects with familiar and highly specific structures, such as faces."}
{"_id": "aae7bde6972328de6c23a510fe59254854163308", "title": "DEFO-NET: Learning Body Deformation Using Generative Adversarial Networks", "text": "Modelling the physical properties of everyday objects is a fundamental prerequisite for autonomous robots. We present a novel generative adversarial network (DEFO-NET), able to predict body deformations under external forces from a single RGB-D image. The network is based on an invertible conditional Generative Adversarial Network (IcGAN) and is trained on a collection of different objects of interest generated by a physical finite element model simulator. Defo-netinherits the generalisation properties of GANs. This means that the network is able to reconstruct the whole 3-D appearance of the object given a single depth view of the object and to generalise to unseen object configurations. Contrary to traditional finite element methods, our approach is fast enough to be used in real-time applications. We apply the network to the problem of safe and fast navigation of mobile robots carrying payloads over different obstacles and floor materials. Experimental results in real scenarios show how a robot equipped with an RGB-D camera can use the network to predict terrain deformations under different payload configurations and use this to avoid unsafe areas."}
{"_id": "651adaa058f821a890f2c5d1053d69eb481a8352", "title": "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples", "text": "We identify obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples. While defenses that cause obfuscated gradients appear to defeat iterative optimizationbased attacks, we find defenses relying on this effect can be circumvented. We describe characteristic behaviors of defenses exhibiting the effect, and for each of the three types of obfuscated gradients we discover, we develop attack techniques to overcome it. In a case study, examining noncertified white-box-secure defenses at ICLR 2018, we find obfuscated gradients are a common occurrence, with 7 of 9 defenses relying on obfuscated gradients. Our new attacks successfully circumvent 6 completely, and 1 partially, in the original threat model each paper considers."}
{"_id": "33eb066178d7ec2307f1db171f90c8338108bcc6", "title": "Graphical SLAM - a self-correcting map", "text": "We describe an approach to simultaneous localization and mapping, SLAM. This approach has the highly desirable property of robustness to data association errors. Another important advantage of our algorithm is that non-linearities are computed exactly, so that global constraints can be imposed even if they result in large shifts to the map. We represent the map as a graph and use the graph to find an efficient map update algorithm. We also show how topological consistency can be imposed on the map, such as, closing a loop. The algorithm has been implemented on an outdoor robot and we have experimental validation of our ideas. We also explain how the graph can be simplified leading to linear approximations of sections of the map. This reduction gives us a natural way to connect local map patches into a much larger global map."}
{"_id": "711d59dba9bd4284170ccae24fdc2a14519cf941", "title": "A novel approach to American sign language recognition using MAdaline neural network", "text": "Sign language interpretation is gaining a lot of research attention because of its social contributions which is proved to be extremely beneficial for the people suffering from hearing or speaking disabilities. This paper proposes a novel image processing sign language detection framework that employs MAdaline network for classification purpose. This paper mainly highlights two novel aspects, firstly it introduces an advanced feature set comprising of seven distinct features that has not been used widely for sign language interpretation purpose, more over utilization of such features negates the cumbersome step of cropping of irrelevant background image, thus reducing system complexity. Secondly it suggests a possible solution of the concerned problem can be obtained using an extension of the traditional Adaline network, formally termed as MAdaline Network. Although the concept of MAdaline network has originated much earlier, the provision of application of this framework in this domain definitely help in designing an improved sign language interpreting interface. The newly formulated framework has been implemented to recognize standardized American sign language containing 26 English alphabets from \u2018A\u2019 to \u2018Z\u2019. The performance of the proposed algorithm has also been compared with the standardized algorithms, and in each case the former one outperformed its contender algorithms by a large margin establishing the efficiency of the same."}
{"_id": "8f9a6313e525e33d88bb6f756e22bfec5272aab3", "title": "Design and Optimization a Circular Shape Network Antenna Micro Strip for Some Application", "text": "To meet the demands of high speed required by mobile communication of past generations ,one solution is to increase the number of antennas to the show and the reception of the wireless link this is called MIMO (Multiple input ,Multiple output )technology .however ,the integration of multiple antennas on the same PCB is delicate because of the small volume that require some applications and electromagnetic antenna between the coupling ,phenomena that we cannot neglect them .indeed a strong isolation between them has been reached to reduce fading of the signal caused by the electromagnetic antenna reached to reduce fading of the signal caused by the electromagnetic coupling and maximize the overall gain .in this article we are interested then integration on the same printed circuit of eight antennas MIMO are not operation in the same frequency band .the first antenna of this last work at 2.4GHz .other antennas have resonance frequency folling each with 20MHz offset this device is characterized by its original form that keeps is highly isolated antennas from the point of view electromagnetic coupling . INDEX TERMS MIMO, Technology Micro-strip, Microwave, Network Antenna"}
{"_id": "2ad08da69a014691ae76cf7f53534b40b412c0e4", "title": "Network Traffic Anomaly Detection", "text": "This paper presents a tutorial for network anomaly detection, focusing on non-signature-based approaches. Network traffic anomalies are unusual and significant changes in the traffic of a network. Networks play an important role in today\u2019s social and economic infrastructures. The security of the network becomes crucial, and network traffic anomaly detection constitutes an important part of network security. In this paper, we present three major approaches to non-signature-based network detection: PCA-based, sketch-based, and signal-analysis-based. In addition, we introduce a framework that subsumes the three approaches and a scheme for network anomaly extraction. We believe network anomaly detection will become more important in the future because of the increasing importance of network security."}
{"_id": "3cbb64df30f2581016542d2c0441f35e8a8c2147", "title": "Forward-Private Dynamic Searchable Symmetric Encryption with Efficient Search", "text": "Dynamic Searchable Symmetric Encryption (DSSE) allows to delegate keyword search and file update over an encrypted database via encrypted indexes, and therefore provides opportunities to mitigate the data privacy and utilization dilemma in cloud storage platforms. Despite its merits, recent works have shown that efficient DSSE schemes are vulnerable to statistical attacks due to the lack of forward-privacy, whereas forward-private DSSE schemes suffers from practicality concerns as a result of their extreme computation overhead. Due to significant practical impacts of statistical attacks, there is a critical need for new DSSE schemes that can achieve the forward-privacy in a more practical and efficient manner. We propose a new DSSE scheme that we refer to as Forward-private Sublinear DSSE (FS-DSSE). FS-DSSE harnesses special secure update strategies and a novel caching strategy to reduce the computation cost of repeated queries. Therefore, it achieves forward-privacy, sublinear search complexity, low end-to-end delay, and parallelization capability simultaneously. We fully implemented our proposed method and evaluated its performance on a real cloud platform. Our experimental evaluation results showed that the proposed scheme is highly secure and highly efficient compared with state-of-the-art DSSE techniques. Specifically, FS-DSSE is up to three magnitude of times faster than forward-secure DSSE counterparts, depending on the frequency of the searched keyword in the database."}
{"_id": "18d5fc8a3f2c7e9bac55fff40e0ecf3112196813", "title": "Performance Analysis of Classification Algorithms on Medical Diagnoses-a Survey", "text": "Corresponding Author: Vanaja, S., Research Scholar and Research guide, Research and Development, Bharathiar University, Coimbatore, Tamil Nadu, India Email: vanajasha@yahoo.com Abstract: The aim of this research paper is to study and discuss the various classification algorithms applied on different kinds of medical datasets and compares its performance. The classification algorithms with maximum accuracies on various kinds of medical datasets are taken for performance analysis. The result of the performance analysis shows the most frequently used algorithms on particular medical dataset and best classification algorithm to analyse the specific disease. This study gives the details of different classification algorithms and feature selection methodologies. The study also discusses about the data constraints such as volume and dimensionality problems. This research paper also discusses the new features of C5.0 classification algorithm over C4.5 and performance of classification algorithm on high dimensional datasets. This research paper summarizes various reviews and technical articles which focus on the current research on Medical diagnosis."}
{"_id": "132b3bd259bf10a41c00330a49de701c4e59a7ca", "title": "Semantic MEDLINE: An advanced information management application for biomedicine", "text": "Semantic MEDLINE integrates information retrieval, advanced natural language processing, automatic summarization, and visualization into a single Web portal. The application is intended to help manage the results of PubMed searches by condensing core semantic content in the citations retrieved. Output is presented as a connected graph of semantic relations, with links to the original MEDLINE citations. The ability to connect salient information across documents helps users keep up with the research literature and discover connections which might otherwise go unnoticed. Semantic MEDLINE can make an impact on biomedicine by supporting scientific discovery and the timely translation of insights from basic research into advances in clinical practice and patient care. Marcelo Fiszman has an M.D. from the State University of Rio de Janeiro and a Ph.D. in biomedical informatics from the University of Utah. He was awarded a postdoctoral fellowship in biomedical informatics at the National Library of Medicine (NLM) and is currently a research scientist there. His work focuses on natural language processing algorithms that exploit symbolic, rule-based techniques for semantic interpretation of biomedical text. He is also interested in using extracted semantic information for automatic abstraction summarization and literaturebased discovery. These efforts underpin Semantic MEDLINE, which is currently under development at NLM. This innovative biomedical information management application combines document retrieval, semantic interpretation, automatic summarization, and knowledge visualization into a single application."}
{"_id": "49b6601bd93f4cfb606c6c9d6be2ae7d4da7e5ac", "title": "Effects of Professional Development on Teachers ' Instruction : Results from a Three-Year Longitudinal Study", "text": "JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact support@jstor.org.. American Educational Research Association is collaborating with JSTOR to digitize, preserve and extend access to Educational Evaluation and Policy Analysis. This article examines the effects of professional development on teachers' instruction. Using a purposefully selected sample of about 207 teachers in 30 schools, in 10 districts infive states, we examine features of teachers' professional development and its effects on changing teaching practice in mathematics and science from 1996-1999. We found that professional developmentfocused on specific instructional practices increases teachers' use of those practices in the classroom. Furthermore, we found that specificfeatures, such as active learning opportunities, increase the effect of the professional development on teacher's instruction. What are the characteristics of professional development that affect teaching practice? This study adds to the knowledge base on effective professional development. The success of standards-based reform depends on teachers' ability to foster both basic knowledge and advanced thinking and problem solving among their students (Loucks-Desimone et al. Education 1999a), and surveys of teachers about their preservice preparation and in-service professional development experiences (e.g., Carey & Frechtling, 1997). In addition, there is a considerable amount of literature describing \"best practices\" in professional development, drawing on expert experiences (e.g., Loucks-Horsley et al., 1998). A professional consensus is emerging about particular characteristics of \"high quality\" professional development. These characteristics include a focus on content and how students learn content; in-depth, active learning opportunities; links to high standards, opportunities for teachers to engage in leadership roles; extended duration ; and the collective participation of groups of teachers from the same school, grade, or department. Although lists of characteristics such as these commonly appear in the literature on effective professional development, there is little direct evidence on the extent to which these characteristics are related to better teaching and increased student achievement. Some studies conducted over the past decade suggest that professional development experiences that share all or most of these characteristics can have a substantial, positive influence on teachers' classroom practice and student achieve-A few recent studies have begun to examine the relative importance of specific characteristics of professional development. Several studies have found that the intensity and \u2026"}
{"_id": "853ba021b20e566a632a3c6e047b06c8914ec37d", "title": "\"It's alive, it's magic, it's in love with you\": opportunities, challenges and open questions for actuated interfaces", "text": "Actuated Interfaces are receiving a great deal of interest from the research community. The field can now present a range of point designs, illustrating the potential design space of Actuated Interfaces. However, despite the increasing interest in Actuated Interfaces, the research carried out is nevertheless primarily preoccupied with the technical challenges and potential application areas, rather than how users actually approach, experience, interpret and understand Actuated Interfaces. Based on three case studies, investigating how people experience Actuated Interfaces, we point to; magic, movement and ambiguity as fruitful perspectives for understanding users' experiences with Actuated Interfaces. The three perspectives are employed to reflect upon opportunities and challenges, as well as point to open questions and relevant areas for future research for Actuated Interfaces."}
{"_id": "8c5ca158d90b3b034db872e5a82986af0146abf3", "title": "Automatic Airspace Sectorisation: A Survey", "text": "Airspace sectorisation provides a partition of a given airspace into sectors, subject to geometric constraints and workload constraints, so that some cost metric is minimised. We survey the algorithmic aspects of methods for automatic airspace sectorisation, for an intended readership of experts on air traffic management."}
{"_id": "8ed0ee88e811aaaad56d1a42e2cfce02edcc90ff", "title": "Brain structures differ between musicians and non-musician", "text": "From an early age, musicians learn complex motor and auditory skills (e.g., the translation of visually perceived musical symbols into motor commands with simultaneous auditory monitoring of output), which they practice extensively from childhood throughout their entire careers. Using a voxel-by-voxel morphometric technique, we found gray matter volume differences in motor, auditory, and visual-spatial brain regions when comparing professional musicians (keyboard players) with a matched group of amateur musicians and non-musicians. Although some of these multiregional differences could be attributable to innate predisposition, we believe they may represent structural adaptations in response to long-term skill acquisition and the repetitive rehearsal of those skills. This hypothesis is supported by the strong association we found between structural differences, musician status, and practice intensity, as well as the wealth of supporting animal data showing structural changes in response to long-term motor training. However, only future experiments can determine the relative contribution of predisposition and practice."}
{"_id": "cc54251f84c8577ca862fec41a1766c9a0d4a7b8", "title": "Updating P300: An integrative theory of P3a and P3b", "text": "The empirical and theoretical development of the P300 event-related brain potential (ERP) is reviewed by considering factors that contribute to its amplitude, latency, and general characteristics. The neuropsychological origins of the P3a and P3b subcomponents are detailed, and how target/standard discrimination difficulty modulates scalp topography is discussed. The neural loci of P3a and P3b generation are outlined, and a cognitive model is proffered: P3a originates from stimulus-driven frontal attention mechanisms during task processing, whereas P3b originates from temporal-parietal activity associated with attention and appears related to subsequent memory processing. Neurotransmitter actions associating P3a to frontal/dopaminergic and P3b to parietal/norepinephrine pathways are highlighted. Neuroinhibition is suggested as an overarching theoretical mechanism for P300, which is elicited when stimulus detection engages memory operations."}
{"_id": "6d3f38ea64c84d5ca1569fd73497e34525baa215", "title": "Increased auditory cortical representation in musicians", "text": "Acoustic stimuli are processed throughout the auditory projection pathway, including the neocortex, by neurons that are aggregated into \u2018tonotopic\u2019 maps according to their specific frequency tunings. Research on animals has shown that tonotopic representations are not statically fixed in the adult organism but can reorganize after damage to the cochlea or after training the intact subject to discriminate between auditory stimuli. Here we used functional magnetic source imaging (single dipole model) to measure cortical representations in highly skilled musicians. Dipole moments for piano tones, but not for pure tones of similar fundamental frequency (matched in loudness), were found to be enlarged by about 25% in musicians compared with control subjects who had never played an instrument. Enlargement was correlated with the age at which musicians began to practise and did not differ between musicians with absolute or relative pitch. These results, when interpreted with evidence for modified somatosensory representations of the fingering digits in skilled violinists, suggest that use-dependent functional reorganization extends across the sensory cortices to reflect the pattern of sensory input processed by the subject during development of musical skill."}
{"_id": "781ee20481f4e2e4accfa2b1cd6d70d5854bb171", "title": "Navigation-related structural change in the hippocampi of taxi drivers.", "text": "Structural MRIs of the brains of humans with extensive navigation experience, licensed London taxi drivers, were analyzed and compared with those of control subjects who did not drive taxis. The posterior hippocampi of taxi drivers were significantly larger relative to those of control subjects. A more anterior hippocampal region was larger in control subjects than in taxi drivers. Hippocampal volume correlated with the amount of time spent as a taxi driver (positively in the posterior and negatively in the anterior hippocampus). These data are in accordance with the idea that the posterior hippocampus stores a spatial representation of the environment and can expand regionally to accommodate elaboration of this representation in people with a high dependence on navigational skills. It seems that there is a capacity for local plastic change in the structure of the healthy adult human brain in response to environmental demands."}
{"_id": "3d7a8f5557b6a219e44c0c9fbb81aa0b668e65f9", "title": "Extended Kalman filtering for battery management systems of LiPB-based HEV battery packs Part 1 . Background", "text": "Battery management systems (BMS) in hybrid-electric-vehicle (HEV) battery packs must estimate values descriptive of the pack\u2019s present operating condition. These include: battery state of charge, power fade, capacity fade, and instantaneous available power. The estimation mechanism must adapt to changing cell characteristics as cells age and therefore provide accurate estimates over the lifetime of the pack. In a series of three papers, we propose a method, based on extended Kalman filtering (EKF), that is able to accomplish these goals on a lithium-ion polymer battery pack. We expect that it will also work well on other battery chemistries. These papers cover the required mathematical background, cell modeling and system identification requirements, and the final solution, together with results. This first paper investigates the estimation requirements for HEV BMS in some detail, in parallel to the requirements for other battery-powered applications. The comparison leads us to understand that the HEV environment is very challenging on batteries and the BMS, and that precise estimation of some parameters will improve performance and robustness, and will ultimately lengthen the useful lifetime of the pack. This conclusion motivates the use of more complex algorithms than might be used in other applications. Our premise is that EKF then becomes a very attractive approach. This paper introduces the basic method, gives some intuitive feel to the necessary computational steps, and concludes by presenting an illustrative example as to the type of results that may be obtained using EKF. \u00a9 2004 Elsevier B.V. All rights reserved."}
{"_id": "ed0034302ecba18ca29a5e23baa39b23409add7b", "title": "Price-Based Global Market Segmentation for Services", "text": "In business-to-business marketing, managers are often tasked with developing effective global pricing strategies for customers characterized by different cultures and different utilities for product attributes. The challenges of formulating international pricing schedules are especially evident in global markets for service offerings, where intensive customer contact, extensive customization requirements, and reliance on extrinsic cues for service quality make pricing particularly problematic. The purpose of this article is to develop and test a model of the antecedents of business customers\u2019 price elasticities of demand for services in an international setting. The article begins with a synthesis of the services, pricing, and global marketing literature streams and then identifies factors that account for differences in business customers\u2019 price elasticities for service offerings across customers in Asia Pacific, Europe, and North America. The findings indicate that price elasticities depend on service quality, service type, and level of service support and that horizontal segments do exist, which provides support for pricing strategies transcending national borders. The article concludes with a discussion of the managerial implications of these results for effective segmentation of global markets for services."}
{"_id": "e3b17a245dce9a2189a8a4f7538631b69c93812e", "title": "Adversarial Patch", "text": "We present a method to create universal, robust, targeted adversarial image patches in the real world. The patches are universal because they can be used to attack any scene, robust because they work under a wide variety of transformations, and targeted because they can cause a classifier to output any target class. These adversarial patches can be printed, added to any scene, photographed, and presented to image classifiers; even when the patches are small, they cause the classifiers to ignore the other items in the scene and report a chosen target class."}
{"_id": "d5b023d962e15c519993eed913a73f44c595a7ad", "title": "Efficacy and safety of Meriva\u00ae, a curcumin-phosphatidylcholine complex, during extended administration in osteoarthritis patients.", "text": "In a previous three-month study of Meriva, a proprietary curcumin-phosphatidylcholine phytosome complex, decreased joint pain and improvement in joint function were observed in 50 osteoarthritis (OA) patients. Since OA is a chronic condition requiring prolonged treatment, the long-term efficacy and safety of Meriva were investigated in a longer (eight months) study involving 100 OA patients. The clinical end points (Western Ontario and McMaster Universities [WOMAC] score, Karnofsky Performance Scale Index, and treadmill walking performance) were complemented by the evaluation of a series of inflammatory markers (interleukin [IL]-1beta, IL-6, soluble CD40 ligand [sCD40L], soluble vascular cell adhesion molecule (sVCAM)-1, and erythrocyte sedimentation rate [ESR]). This represents the most ambitious attempt, to date, to evaluate the clinical efficacy and safety of curcumin as an anti-inflammatory agent. Significant improvements of both the clinical and biochemical end points were observed for Meriva compared to the control group. This, coupled with an excellent tolerability, suggests that Meriva is worth considering for the long-term complementary management of osteoarthritis."}
{"_id": "d82d9cb8411cc8d39a26b821f98cda80b08124d7", "title": "An Ontology based Dialog Interface to Database", "text": "In this paper, we extend the state-of-the-art NLIDB system and present a dialog interface to relational databases. Dialog interface enables users to automatically exploit the semantic context of the conversation while asking natural language queries over RDBMS, thereby making it simpler to express complex questions in a natural, piece-wise manner. We propose novel ontology-driven techniques for addressing each of the dialog-specific challenges such as co-reference resolution, ellipsis resolution, and query disambiguation, and use them in determining the overall intent of the user query. We demonstrate the applicability and usefulness of dialog interface over two different domains viz. finance and healthcare."}
{"_id": "7e2fbad4fa4877ea3fd8d197950e335d59ebeedf", "title": "Consumers \u2019 Trust in Electronic Commerce Transactions : The Role of Perceived Privacy and Perceived Security 1 Introduction", "text": "Acknowledgement I am greatly indebted to Omar El Sawy and Ann Majchrzak for their guidance and suggestion, and I would like to thank Ricky Lim and Raymond for their help with the data analysis. Abstract Consumers' trust in their online transactions is vital for the sustained progress and development of electronic commerce. Our paper proposes that in addition to known factors of trust such a vendor's reputation, consumers' perception of privacy and security influence their trust in online transactions. Our research shows that consumers exhibit variability in their perceptions of privacy, security and trust between online and offline transactions even if it is conducted with the same store. We build upon this finding to develop and validate measures of consumers' perceived privacy and perceived security of their online transactions which are then theorized to influence their trust in EC transactions. We propose that the perceptions of privacy and security are factors that affect the consumers' trust in the institutional governance mechanisms underlying the Internet. We conduct two distinct empirical studies and through successive refinement and analysis using the Partial Least Squares technique, we test our hypothesized relationships while verifying the excellent measurement properties associated with our instrument. Our study finds that the consumers' perceived privacy and perceived security are indeed distinct constructs but the effect of perceived privacy on trust in EC transactions is strongly mediated by perceived security. A major implication of our findings is that while the much studied determinants of trust such as reputation of the transacting firm should not be neglected, vendors should also engage in efforts to positively influence consumer perceptions of privacy and security. We discuss the significance of this observation in the context of increasing importance in acquiring customer information for personalization and other online strategies."}
{"_id": "2e36ea91a3c8fbff92be2989325531b4002e2afc", "title": "Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models", "text": "Inspired by recent advances in multimodal learning and machine translation, we introduce an encoder-decoder pipeline that learns (a): a multimodal joint embedding space with images and text and (b): a novel language model for decoding distributed representations from our space. Our pipeline effectively unifies joint image-text embedding models with multimodal neural language models. We introduce the structure-content neural language model that disentangles the structure of a sentence to its content, conditioned on representations produced by the encoder. The encoder allows one to rank images and sentences while the decoder can generate novel descriptions from scratch. Using LSTM to encode sentences, we match the state-of-the-art performance on Flickr8K and Flickr30K without using object detections. We also set new best results when using the 19-layer Oxford convolutional network. Furthermore we show that with linear encoders, the learned embedding space captures multimodal regularities in terms of vector space arithmetic e.g. *image of a blue car* \"blue\" + \"red\" is near images of red cars. Sample captions generated for 800 images are made available for comparison."}
{"_id": "422df80a96c5417302067f5ad7ba7b071a73e156", "title": "DC Motor Drive with P, PI, and Particle Swarm Optimization Speed Controllers", "text": "This paper implements a Particle Swarm Optimization (PSO) speed controller for controlling the speed of DC motor. Also traditional Proportional (P), Proportional-Integral (PI) controller have been developed and simulated using MATLAB/SIMULINK. The simulation results indicate that the PI controller has big overshoot, sensitivity to controller gains. The optimization with particle swarm verified the reduction of oscillations as well as improve the steady state error, rise time and overshot in speed response. The simulation results confirmed the system is less sensitive to gain."}
{"_id": "217abac938808f5f89e72fce7cfad1f61be8bcff", "title": "Modeling crowdsourcing systems: design and analysis of incentive mechanism and rating system", "text": "Over the past few years, we have seen an increasing popularity of crowdsourcing services [5]. Many companies are now providing such services, e.g., Amazon Mechanical Turk [1], Google Helpouts [3], and Yahoo! Answers [8], etc. Briefly speaking, crowdsourcing is an online, distributed problem solving paradigm and business production platform. It uses the power of today\u2019s Internet to solicit the collective intelligence of large number of users. Relying on the wisdom of the crowd to solve posted tasks (or problems), crowdsourcing has become a promising paradigm to obtain \u201csolutions\u201d which can have higher quality or lower costs than the conventional method of solving problems via specialized employees or contractors in a company. Typically, a crowdsourcing system operates with three basic components: Users, tasks and rewards. Users are classified into requesters and workers. A user can be a requester or a worker, and in some cases, a user can be a requester/worker at the same time. Requesters outsource tasks to workers and associate each task with certain rewards, which will be granted to the workers who solve the task. Workers, on the other hand, solve the assigned tasks and reply to requesters with solutions, and then take the reward, which can be in form of money [1], entertainment [7] or altruism [8], etc. To have a successful crowdsourcing website, it is pertinent to attract high volume of participation of users (requesters and workers), and at the same time, solutions by workers have to be of high quality. In this paper we design a rating system and a mechanism to encourage users to participate, and incentivize workers to high quality solutions. First, we develop a game-theoretic model to characterize workers\u2019 strategic behavior. We then design a class effective incentive mechanisms which consist of a task bundling scheme and a rating system, and pay workers according to solution ratings from requesters. We develop a model to characterize the design space of a class of commonly users rating systems\u2013 threshold based rating system. We quantify the impact of such rating systems, and the bundling scheme on reducing requesters\u2019 reward payment in guaranteeing high quality solutions. We find out that a simplest rating system, e.g., two rating points, is an effective system in which requesters only need to provide binary feedbacks to indicate whether they are satisfied or not with a solution."}
{"_id": "eebdf35becd03454304c4ba8fde1d07ede465b8d", "title": "An analysis of player affect transitions in survival horror games", "text": "The trend of multimodal interaction in interactive gaming has grown significantly as demonstrated for example by the wide acceptance of the Wii Remote and the Kinect as tools not just for commercial games but for game research as well. Furthermore, using the player\u2019s affective state as an additional input for game manipulation has opened the realm of affective gaming. In this paper, we analyzed the affective states of players prior to and after witnessing a scary event in a survival horror game. Player affect data were collected through our own affect annotation tool that allows the player to report his affect labels while watching his recorded gameplay and facial expressions. The affect data were then used for training prediction models with the player\u2019s brainwave and heart rate signals, as well as keyboard\u2013mouse activities collected during gameplay. Our results show that (i) players are likely to get more fearful of a scary event when they are in the suspense state and that (ii) heart rate is a good candidate for detecting player affect. Using our results, game designers can maximize the fear level of the player by slowly building tension until the suspense state and showing a scary event after that. We believe that this approach can be applied to the analyses of different sets of emotions in other games as well."}
{"_id": "5491b73d751c16c703eccbc67286f2d802594438", "title": "Asymptotic Convergence in Online Learning with Unbounded Delays", "text": "We study the problem of predicting the results of computations that are too expensive to run, via the observation of the results of smaller computations. We model this as an online learning problem with delayed feedback, where the length of the delay is unbounded, which we study mainly in a stochastic setting. We show that in this setting, consistency is not possible in general, and that optimal forecasters might not have average regret going to zero. However, it is still possible to give algorithms that converge asymptotically to Bayesoptimal predictions, by evaluating forecasters on specific sparse independent subsequences of their predictions. We give an algorithm that does this, which converges asymptotically on good behavior, and give very weak bounds on how long it takes to converge. We then relate our results back to the problem of predicting large computations in a deterministic setting."}
{"_id": "ba906034290287d1527ea5bb90271bb25aaca84c", "title": "Degradation of methyl orange using short-wavelength UV irradiation with oxygen microbubbles.", "text": "A novel wastewater treatment technique using 8 W low-pressure mercury lamps in the presence of uniform-sized microbubbles (diameter = 5.79 microm) was investigated for the decomposition of methyl orange as a model compound in aqueous solution. Photodegradation experiments were conducted with a BLB black light blue lamp (365 nm), a UV-C germicidal lamp (254 nm) and an ozone lamp (185 nm+254 nm) both with and without oxygen microbubbles. The results show that the oxygen microbubbles accelerated the decolorization rate of methyl orange under 185+254 nm irradiation. In contrast, the microbubbles under 365 and 254 nm irradiation were unaffected on the decolorization of methyl orange. It was found that the pseudo-zero order decolorization reaction constant in microbubble system is 2.1 times higher than that in conventional large bubble system. Total organic carbon (TOC) reduction rate of methyl orange was greatly enhanced by oxygen microbubble under 185+254 nm irradiation, however, TOC reduction rate by nitrogen microbubble was much slower than that with 185+254 nm irradiation only. Possible reaction mechanisms for the decolorization and mineralization of methyl orange both with oxygen and nitrogen mirobubbles were proposed in this study."}
{"_id": "05504fc6ff64fb5060ef24b16d978fac3fd96337", "title": "Cost Models for Future Software Life Cycle Processes: COCOMO 2.0", "text": "Current software cost estimation models, such as the 1981 Constructive Cost Model (COCOMO) for software cost estimation and its 1987 Ada COCOMO update, have been experiencing increasing difficulties in estimating the costs of software developed to new life cycle processes and capabilities. These include non-sequential and rapid-development process models; reuse-driven approaches involving commercial off the shelf (COTS) packages, reengineering, applications composition, and applications generation capabilities; object-oriented approaches supported by distributed middleware; and software process maturity initiatives. This paper summarizes research in deriving a baseline COCOMO 2.0 model tailored to these new forms of software development, including rationales for the model decisions. The major new modeling capabilities of COCOMO 2.0 are a tailorable family of software sizing models, involving Object Points, Function Points, and Source Lines of Code; nonlinear models for software reuse and reengineering; an exponent-driver approach for modeling relative software diseconomies of scale; and several additions, deletions, and updates to previous COCOMO effort-multiplier cost drivers. This model is serving as a framework for an extensive current data collection and analysis effort to further refine and calibrate the model\u2019s estimation capabilities."}
{"_id": "47f0f6a2fd518932734cc90936292775cc95aa5d", "title": "OCCUPATIONAL THERAPY FOR THE POLY- TRAUMA CASUALTY WITH LIMB LOSS", "text": ""}
{"_id": "c7cf37c3609051dbd4084fcc068427fbb7a1a0e1", "title": "Blob Enhancement and Visualization for Improved Intracranial Aneurysm Detection", "text": "Several researches have established that the sensitivity of visual assessment of smaller intracranial aneurysms is not satisfactory. Computer-aided diagnosis based on volume rendering of the response of blob enhancement filters may shorten visual inspection and increase detection sensitivity by directing a diagnostician to suspicious locations in cerebral vasculature. We proposed a novel blob enhancement filter based on a modified volume ratio of Hessian eigenvalues that has a more uniform response inside the blob-like structures compared to state-of-the-art filters. Because the response of proposed filter is independent of the size and intensity of structures, it is especially sensitive for detecting small blob-like structures such as aneuryms. We proposed a novel volume rendering method, which is sensitive to signal energy along the viewing ray and which visually enhances the visualization of true positives and suppresses usually sharp false positive responses. The proposed and state-of-the-art methods were quantitatively evaluated on a synthetic dataset and 42 clinical datasets of patients with aneurysms. Because of the capability to accurately enhance the aneurysm's boundary and due to a low number of visualized false positive responses, the combined use of the proposed filter and visualization method ensures a reliable detection of (small) intracranial aneurysms."}
{"_id": "cdf35a8d61d38659527b0f52f6d3655778c165c1", "title": "Spatio-Temporal Recurrent Convolutional Networks for Citywide Short-term Crowd Flows Prediction", "text": "With the rapid development of urban traffic, forecasting the flows of crowd plays an increasingly important role in traffic management and public safety. However, it is very challenging as it is affected by many complex factors, including spatio-temporal dependencies of regions and other external factors such as weather and holiday. In this paper, we proposed a deep-learning-based approach, named STRCNs, to forecast both inflow and outflow of crowds in every region of a city. STRCNs combines Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) network structures to capture spatio-temporal dependencies, simultaneously. More particularly, our model can be decomposed into four components: Closeness captures the changes of instantaneous flows; Daily influence detects the changes of daily influence flows regularly; Weekly influence reacts weekly patterns of influence flows and External influence gets the influence of external factors. For the first three properties (Closeness, Daily influence and Weekly influence), we give a branch of recurrent convolutional network units to learn both spatial and temporal dependencies in crowd flows. External factors are fed into a two-layers fully connected neural network. STRCNs assigns different weights to different branches, and then merges the outputs of the four parts together. Experimental results on two data sets (MobileBJ and TaxiBJ) demonstrate that STRCNs outperforms classical time series and other deep-learning-based prediction methods."}
{"_id": "b336f946d34cb427452517f503ada4bbe0181d3c", "title": "Diagnosing Error in Temporal Action Detectors", "text": "Despite the recent progress in video understanding and the continuous rate of improvement in temporal action localization throughout the years, it is still unclear how far (or close?) we are to solving the problem. To this end, we introduce a new diagnostic tool to analyze the performance of temporal action detectors in videos and compare different methods beyond a single scalar metric. We exemplify the use of our tool by analyzing the performance of the top rewarded entries in the latest ActivityNet action localization challenge. Our analysis shows that the most impactful areas to work on are: strategies to better handle temporal context around the instances, improving the robustness w.r.t. the instance absolute and relative size, and strategies to reduce the localization errors. Moreover, our experimental analysis finds the lack of agreement among annotator is not a major roadblock to attain progress in the field. Our diagnostic tool is publicly available to keep fueling the minds of other researchers with additional insights about their algorithms."}
{"_id": "085a5657303249f8a7d2ffd5397cd9e4c70d8dbe", "title": "Cloud computing paradigms for pleasingly parallel biomedical applications", "text": "Cloud computing offers exciting new approaches for scientific computing that leverages the hardware and software investments on large scale data centers by major commercial players. Loosely coupled problems are very important in many scientific fields and are on the rise with the ongoing move towards data intensive computing. There exist several approaches to leverage clouds & cloud oriented data processing frameworks to perform pleasingly parallel computations. In this paper we present two pleasingly parallel biomedical applications, 1) assembly of genome fragments 2) dimension reduction in the analysis of chemical structures, implemented utilizing cloud infrastructure service based utility computing models of Amazon AWS and Microsoft Windows Azure as well as utilizing MapReduce based data processing frameworks, Apache Hadoop and Microsoft DryadLINQ. We review and compare each of the frameworks and perform a comparative study among them based on performance, efficiency, cost and the usability. Cloud service based utility computing model and the managed parallelism (MapReduce) exhibited comparable performance and efficiencies for the applications we considered. We analyze the variations in cost between the different platform choices (eg: EC2 instance types), highlighting the need to select the appropriate platform based on the nature of the computation."}
{"_id": "5b6535f9aa0f5afd4a33c70da2a8d10ca1832342", "title": "The psychometric properties and utility of the Short Sadistic Impulse Scale (SSIS).", "text": "Sadistic personality disorder (SPD) has been underresearched and often misunderstood in forensic settings. Furthermore, personality disorders in general are the subject of much controversy in terms of their classification (i.e., whether they should be categorical or dimensional). The Sadistic Attitudes and Behaviors Scale (SABS; Davies & Hand, 2003; O'Meara, Davies, & Barnes-Holmes, 2004) is a recently developed scale for measuring sadistic inclinations. Derived from this is the Short Sadistic Impulse Scale (SSIS), which has proved to be a strong unidimensional measure of sadistic inclination. Through cumulative scaling, it was investigated whether the SSIS could measure sadism on a continuum of interest, thus providing a dimensional view of the construct. Further, the SSIS was administered along with a number of other measures related to sadism in order to assess the validity of the scale. Results showed that the SSIS has strong construct and discriminant validity and may be useful as a screening measure for sadistic impulse."}
{"_id": "a9d916bd42b3298ae22185a10f7e1e365b4acdbe", "title": "Cyber military strategy for cyberspace superiority in cyber warfare", "text": "In this paper, we proposed robust and operational cyber military strategy for cyberspace superiority in cyber warfare. We considered cyber forces manpower, cyber intelligence capability, and an organization of cyber forces for improving cyber military strategy. In cyber forces power field, we should cultivated personals that can perform network computer operations and hold cyber security technology such as cyber intelligence collection, cyber-attack, cyber defence, and cyber forensics. Cyber intelligence capability includes cyber surveillance/reconnaissance, cyber order of battle, pre-CTO, and cyber damage assessment. An organization of cyber forces has to change tree structure to network structure and has to organize task or functional centric organizations. Our proposed cyber military strategy provides prior decision of action to operate desired effects in cyberspace."}
{"_id": "160404fb0d05a1a2efa593c448fcb8796c24b873", "title": "The emulation theory of representation: motor control, imagery, and perception.", "text": "The emulation theory of representation is developed and explored as a framework that can revealingly synthesize a wide variety of representational functions of the brain. The framework is based on constructs from control theory (forward models) and signal processing (Kalman filters). The idea is that in addition to simply engaging with the body and environment, the brain constructs neural circuits that act as models of the body and environment. During overt sensorimotor engagement, these models are driven by efference copies in parallel with the body and environment, in order to provide expectations of the sensory feedback, and to enhance and process sensory information. These models can also be run off-line in order to produce imagery, estimate outcomes of different actions, and evaluate and develop motor plans. The framework is initially developed within the context of motor control, where it has been shown that inner models running in parallel with the body can reduce the effects of feedback delay problems. The same mechanisms can account for motor imagery as the off-line driving of the emulator via efference copies. The framework is extended to account for visual imagery as the off-line driving of an emulator of the motor-visual loop. I also show how such systems can provide for amodal spatial imagery. Perception, including visual perception, results from such models being used to form expectations of, and to interpret, sensory input. I close by briefly outlining other cognitive functions that might also be synthesized within this framework, including reasoning, theory of mind phenomena, and language."}
{"_id": "7ca4d3c70a5f87f9b5d0dbbda0d644aeb6b485da", "title": "DC Bus Voltage Control for a Distributed Power System", "text": "This paper addresses voltage control of distributed dc power systems. DC power systems have been discussed as a result of the introduction of renewable, small-scale power generation units. Also, telecommunication power systems featuring UPS properties might benefit from a broader introduction of dc power systems. Droop control is utilized to distribute the load between the source converters. In order to make the loading of the source converters equal, in per unit, the voltage control algorithm for each converter has to be designed to act similar. The dc side capacitor of each converter, needed for filtering, is also determined as a consequence. The root locus is investigated for varying dc bus impedance. It is found that the risk of entering converter over-modulation is a stronger limitation than stability, at least for reasonable dc bus cable parameters. The stationary and dynamic properties during load variations are also investigated."}
{"_id": "678680169e8b75430c1239e7e5289f656e009809", "title": "REFRAMING SUPPLY CHAIN MANAGEMENT : A SERVICE-DOMINANT", "text": "Shifting the dominant thinking of supply chain management toward the concepts of service, value cocreation, value propositions, operant resources, networks, service ecosystems and learning opens up many research opportunities and strategies for improved organizational performance. The emerging thought world of service-dominant logic is presented as a means to reframe supply chain scholarship and practice for increased relevance and impact."}
{"_id": "9bf6a003ed9dab85b8d22b2dd50b4325c6ab67ef", "title": "The Intersecting Roles of Consumer and Producer : A Critical Perspective on Co-production , Co-creation and Prosumption", "text": "The terms \u2018co-creation\u2019, \u2018co-production\u2019, and \u2018prosumption\u2019 refer to situations in which consumers collaborate with companies or with other consumers to produce things of value. These situations sometimes appear to blur the traditional roles of \u2018producer\u2019 and \u2018consumer\u2019. Building on Marx\u2019s distinction between \u2018use value\u2019 and \u2018exchange value\u2019, we argue that, when consumers perform tasks normally handled by the company, this does not necessarily represent a fundamental change in exchange roles or economic organization. We then argue that, when individuals who are traditionally defined as \u2018consumers\u2019 produce exchange value for companies, this does represent a fundamental change. Thanks to recent advances in information technology, customers today are contributing to organizational processes in ways that would not have been possible 10 years ago. For example, at Threadless.com, customers can not only vote for the clothing designs that they would like to see produced but also submit their own designs for voting (Walker 2007). Similarly, at National Instruments, a company that makes measurement software and sensors, nearly half of the company\u2019s research and development activity is done by an on-line community of customers who collaborate to discover how NI\u2019s products can solve members\u2019 problems (Seybold 2007). Another example is Proctor & Gamble\u2019s \u2018Vocalpoint\u2019 program, an on-line community of product enthusiasts who are rewarded with coupons and samples in return for talking about products with their friends. The program has more than 500,000 members and is being used by other companies such as WD40 and the Discovery Channel to promote their products (Neff 2006). Some have argued that examples like these are symptomatic of a fundamental and dramatic shift that is occurring in relationships between organizations and their customers \u2013 a change that calls into question previously clear distinctions between consumers and producers and between customers and employees. For instance, some have suggested 2 Intersecting Roles of Consumer and Producer \u00a9 2008 The Authors Sociology Compass 2 (2008): 10.1111/j.1751-9020.2008.00112.x Journal Compilation \u00a9 2008 Blackwell Publishing Ltd that customers who help with product design or assist with product marketing are part of a \u2018consumer-as-creator revolution\u2019 (Nadeau 2006, 105) and represent a \u2018new paradigm of message creation and delivery\u2019 (McConnell and Huba 2006, 42). Others have written that \u2018the gap between producers and consumers is blurring\u2019 (Tapscott and Williams 2006, 125) and that \u2018in the emerging ... paradigm, a person can seamlessly shift from consumer to contributor and creator\u2019 (143). Still others have argued that in today\u2019s customer-organization relationships \u2018power and control are radically decentralized and heterarchical: producers and consumers coalesce\u2019 (Pitt et al. 2006, 118). We have also recently seen a resurgence of the word \u2018prosumer\u2019, which was first coined by cultural critic Alvin Toffler in 1980, to emphasize the novelty of asking individuals to simultaneously play the role of consumer and producer (Kotler 1986; Tapscott and Williams 2006). 
In this article, we offer a critical analysis of the ways in which the role of the consumer may be changing and therefore also an investigation of this role in the broader capitalist system, a system in which consumers have traditionally served a central function. We argue that although consumers are increasingly performing tasks normally handled by the company, this role redefinition may be, at least in some cases, illusory. We also find that although the employee/customer distinction can be blurred on many dimensions, one dimension on which the distinction remains is between those who create use value and those who create exchange value (Marx 1867 [2001]). Increasingly, individuals who have traditionally been defined as \u2018consumers\u2019 are producing exchange value for companies, and this, we argue, is where the so-called \u2018prosumer\u2019 does indeed represent a fundamental change in economic organization. We then use the distinction between use value and exchange value to further explore the normative and ethical implications of consumer production. Consumer versus producer The consumer\u2013producer relationship has traditionally been conceived of as an exchange relationship in which each party trades one kind of value for another (Bagozzi 1975). In this article, we focus on the exchange relationship between an end user (such as a person buying coffee beans for home use) and the organization from which this end user buys a product or service (such as a supermarket or coffee shop). Before an end user buys something, a series of transformations are usually applied to it to make the product or service usable to the end user and to therefore enhance its value. To make a cup of coffee, for example, someone must grow and harvest the coffee beans, roast and grind them, transport and package them, offer them for retail sale, and finally brew the cup of coffee. These are all steps in what Michael Porter (1985) calls the \u2018value chain\u2019, the series of transformations required to make a product for an end user."}
{"_id": "bd1c85bf52295adad0b1a59d79d7429091beeb22", "title": "Evolving to a New Dominant Logic for Marketing", "text": "Marketing inherited a model of exchange from economics, which had a dominant logic based on the exchange of \u201cgoods,\u201d which usually are manufactured output. The dominant logic focused on tangible resources, embedded value, and transactions. Over the past several decades, new perspectives have emerged that have a revised logic focused on intangible resources, the cocreation of value, and relationships. The authors believe that the new perspectives are converging to form a new dominant logic for marketing, one in which service provision rather than goods is fundamental to economic exchange. The authors explore this evolving logic and the corresponding shift in perspective for marketing scholars, marketing practitioners, and marketing educators."}
{"_id": "030467d08fb735b2855aa9e71185126ba389f9d9", "title": "The Emerging Role of Electronic Marketplaces on the Internet", "text": "Markets play a central role in the economy, facilitating the exchange of information, goods, services and payments. In the process, they create economic value for buyers, sellers, market intermediaries and for society at large. Recent years have seen a dramatic increase in the role of information technology in markets, both in traditional markets, and in the emergence of electronic marketplaces, such as the multitude of Internet-based online auctions."}
{"_id": "314617f72bff343c7a9f0d550dc8f918a691f2bd", "title": "Information systems in supply chain integration and management", "text": "Supply chain management (SCM) is the 21st century global operations strategy for achieving organizational competitiveness. Companies are attempting to find ways to improve their flexibility and responsiveness and in turn competitiveness by changing their operations strategy, methods and technologies that include the implementation of SCM paradigm and information technology (IT). However, a thorough and critical review of literature is yet to be carried out with the objective of bringing out pertinent factors and useful insights into the role and implications of IT in SCM. In this paper, the literature available on IT in SCM have been classified using suitable criteria and then critically reviewed to develop a framework for studying the applications of IT in SCM. Based on this review and analysis, recommendations have been made regarding the application of IT in SCM and some future research directions are indicated. 2003 Elsevier B.V. All rights reserved."}
{"_id": "65c85498be307ee940976db668dae4546943a4c8", "title": "The American Economic Review VOLUME XLV MARCH , 1955 NUMBER ONE ECONOMIC GROWTH AND INCOME INEQUALITY", "text": ""}
{"_id": "f9b79f7222658cc6670292f57547731a54a015f8", "title": "A Three-Phase Current-Fed DC/DC Converter With Active Clamp for Low-DC Renewable Energy Sources", "text": "This paper focuses on a new three-phase high power current-fed DC/DC converter with an active clamp. A three-phase DC/DC converter with high efficiency and voltage boosting capability is designed for use in the interface between a low-voltage fuel-cell source and a high-voltage DC bus for inverters. Zero-voltage switching in all active switches is achieved through using a common active clamp branch, and zero current switching in the rectifier diodes is achieved through discontinuous current conduction in the secondary side. Further, the converter is capable of increased power transfer due to its three-phase power configuration, and it reduces the RMS current per phase, thus reducing conduction losses. Moreover, a delta-delta connection on the three-phase transformer provides parallel current paths and reduces conduction losses in the transformer windings. An efficiency of above 93% is achieved through both improvements in the switching and through reducing conduction losses. A high voltage ratio is achieved by combining inherent voltage boost characteristics of the current-fed converter and the transformer turns ratio. The proposed converter and three-phase PWM strategy is analyzed, simulated, and implemented in hardware. Experimental results are obtained on a 500-W prototype unit, with all of the design verified and analyzed."}
{"_id": "16c7cb0515c235f6ab6e2d2fdfac0920ed1f7588", "title": "Par4: very high speed parallel robot for pick-and-place", "text": "This paper introduces a four-degree-of-freedom parallel manipulator dedicated to pick-and-place. It has been developed with the goal of reaching very high speed. This paper shows that its architecture is particularly well adapted to high dynamics. Indeed, it is an evolution of Delta, H4 and 14 robots architectures: it keeps the advantages of these existing robots, while overcoming their drawbacks. In addition, an optimization method based on velocity using adept motion has been developed and applied to this high speed parallel robot. All these considerations led to experimentations that proved we can reach high accelerations (13 G) and obtain a cycle time of 0.28 s."}
{"_id": "0123a29af69b6aa92856be8b48c834f0d2483640", "title": "Towards Controlled Transformation of Sentiment in Sentences", "text": "An obstacle to the development of many natural language processing products is the vast amount of training examples necessary to get satisfactory results. The generation of these examples is often a tedious and timeconsuming task. This paper this paper proposes a method to transform the sentiment of sentences in order to limit the work necessary to generate more training data. This means that one sentence can be transformed to an opposite sentiment sentence and should reduce by half the work required in the generation of text. The proposed pipeline consists of a sentiment classifier with an attention mechanism to highlight the short phrases that determine the sentiment of a sentence. Then, these phrases are changed to phrases of the opposite sentiment using a baseline model and an autoencoder approach. Experiments are run on both the separate parts of the pipeline as well as on the end-to-end model. The sentiment classifier is tested on its accuracy and is found to perform adequately. The autoencoder is tested on how well it is able to change the sentiment of an encoded phrase and it was found that such a task is possible. We use human evaluation to judge the performance of the full (end-to-end) pipeline and that reveals that a model using word vectors outperforms the encoder model. Numerical evaluation shows that a success rate of 54.7% is achieved on the sentiment change."}
{"_id": "ec28e43d33c838921d7e68fc7fd6a0cd521336a3", "title": "Design, Analysis, and Test of a Novel 2-DOF Nanopositioning System Driven by Dual Mode", "text": "Piezodriven flexure-based motion stages, with a large workspace and high positioning precision, are really attractive for the realization of high-performance atomic force microscope (AFM) scanning. In this paper, a modified lever displacement amplifier is proposed for the mechanism design of a novel compliant two-degree-of-freedom (2-DOF) nanopositioning stage, which can be selected to drive in dual modes. Besides, the modified double four-bar parallelogram, P (P denotes prismatic) joints are adopted in designing the flexure limbs. The established models for the mechanical performance evaluation of the stage, in terms of kinetostatics, dynamics, and workspace, are validated by the finite-element analysis. After a series of dimension optimizations carried out through the particle swarm optimization algorithm, a novel active disturbance rejection controller, including the nonlinearity tracking differentiator, the extended state observer, and the nonlinear state error feedback, is proposed to automatically estimate and suppress plant uncertainties arising from the hysteresis nonlinearity, creep effect, sensor noises, and unknown disturbances. The simulation and prototype test results indicate that the first natural frequency of the proposed stage is approximated to be 831 Hz, the amplification ratio in two axes is about 4.2, and the workspace is 119.7 \u03bcm \u00d7 121.4 \u03bcm, while the cross coupling between the two axes is kept within 2%. All the results prove that the developed stage possesses a good property for high-performance AFM scanning."}
{"_id": "b84c91f48e62506f733530ef2a046d74ff5e2d64", "title": "Large Receptive Field Networks for High-Scale Image Super-Resolution", "text": "Convolutional Neural Networks have been the backbone of recent rapid progress in Single-Image Super-Resolution. However, existing networks are very deep with many network parameters, thus having a large memory footprint and being challenging to train. We propose Large Receptive Field Networks which strive to directly expand the receptive field of Super-Resolution networks without increasing depth or parameter count. In particular, we use two different methods to expand the network receptive field: 1-D separable kernels and atrous convolutions. We conduct considerable experiments to study the performance of various arrangement schemes of the 1-D separable kernels and atrous convolution in terms of accuracy (PSNR / SSIM), parameter count, and speed, while focusing on the more challenging high upscaling factors. Extensive benchmark evaluations demonstrate the effectiveness of our approach."}
{"_id": "761f2288b1b0cea385b0b9a89bb068593d94d6bd", "title": "3D face recognition: a survey", "text": "3D face recognition has become a trending research direction in both industry and academia. It inherits advantages from traditional 2D face recognition, such as the natural recognition process and a wide range of applications. Moreover, 3D face recognition systems could accurately recognize human faces even under dim lights and with variant facial positions and expressions, in such conditions 2D face recognition systems would have immense difficulty to operate. This paper summarizes the history and the most recent progresses in 3D face recognition research domain. The frontier research results are introduced in three categories: pose-invariant recognition, expression-invariant recognition, and occlusion-invariant recognition. To promote future research, this paper collects information about publicly available 3D face databases. This paper also lists important open problems."}
{"_id": "16f67094b8e0ec70d48d1f0f01a1f204a89b0e12", "title": "Hybrid distribution transformer: Concept development and field demonstration", "text": "Today's distribution system is expected to supply power to loads for which it was not designed. Moreover, high penetration of distributed generation units is redefining the requirements for the design, control and operation of the electric distribution system. A Hybrid Distribution Transformer is a potential cost-effective alternative solution to various distribution grid control devices. The Hybrid Distribution Transformer is realized by augmenting a regular transformer with a fractionally rated power electronic converter, which provides the transformer with additional control capabilities. The Hybrid Distribution Transformer concept can provide dynamic ac voltage regulation, reactive power compensation and, in future designs, form an interface with energy storage devices. Other potential functionalities that can be realized from the Hybrid Distribution Transformer include voltage phase angle control, harmonic compensation and voltage sag compensation. This paper presents the concept of a Hybrid Distribution Transformer and the status of our efforts towards a 500 kVA, 12.47 kV/480 V field demonstrator."}
{"_id": "4b81b16778c5d2dbf2cee511f7822fa6ae3081bf", "title": "Designing a Tag-Based Statistical Math Word Problem Solver with Reasoning and Explanation", "text": "Background Since Big Data mainly aims to explore the correlation between surface features but not their underlying causality relationship, the Big Mechanism 2 program has been proposed by DARPA to find out \u201cwhy\u201d behind the \u201cBig Data\u201d. However, the pre-requisite for it is that the machine can read each document and learn its associated knowledge, which is the task of Machine Reading (MR). Since a domain-independent MR system is complicated and difficult to build, the math word problem (MWP) [1] is frequently chosen as the first test case to study MR (as it usually uses less complicated syntax and requires less amount of domain knowledge). According to the framework for making the decision while there are several candidates, previous MWP algebra solvers can be classified into: (1) Rule-based approaches with logic inference [2-7], which apply rules to get the answer (via identifying entities, quantities, operations, etc.) with a logic inference engine. (2) Rule-based approaches without logic inference [8-13], which apply rules to get the answer without a logic inference engine. (3) Statistics-based approaches [14, 15], which use statistical models to identify entities, quantities, operations, and get the answer. To our knowledge, all the statistics-based approaches do not adopt logic inference. The main problem of the rule-based approaches mentioned above is that the coverage rate problem is serious, as rules with wide coverage are difficult and expensive to construct. Also, since they adopt Go/No-Go approach (unlike statistical approaches which can adopt a large Top-N to have high including rates), the error accumulation problem would be severe. On the other hand, the main problem of those approaches without adopting logic inference is that they usually need to implement a new handling procedure for each new type of problems (as the general logic inference mechanism is not adopted). Also, as there is no inference engine to generate the reasoning chain [16], additional effort would be required for"}
{"_id": "bd2d7c7f0145028e85c102fe52655c2b6c26aeb5", "title": "Attribute-based People Search: Lessons Learnt from a Practical Surveillance System", "text": "We address the problem of attribute-based people search in real surveillance environments. The system we developed is capable of answering user queries such as \"show me all people with a beard and sunglasses, wearing a white hat and a patterned blue shirt, from all metro cameras in the downtown area, from 2pm to 4pm last Saturday\". In this paper, we describe the lessons we learned from practical deployments of our system, and how we made our algorithms achieve the accuracy and efficiency required by many police departments around the world. In particular, we show that a novel set of multimodal integral filters and proper normalization of attribute scores are critical to obtain good performance. We conduct a comprehensive experimental analysis on video footage captured from a large set of surveillance cameras monitoring metro chokepoints, in both crowded and normal activity periods. Moreover, we show impressive results using images from the recent Boston marathon bombing event, where our system can rapidly retrieve the two suspects based on their attributes from a database containing more than one thousand people present at the event."}
{"_id": "3f3b05dab6d9f734d40755a12cbe56c34cfb28cc", "title": "Why does the microbiome affect behaviour?", "text": "Growing evidence indicates that the mammalian microbiome can affect behaviour, and several symbionts even produce neurotransmitters. One common explanation for these observations is that symbionts have evolved to manipulate host behaviour for their benefit. Here, we evaluate the manipulation hypothesis by applying evolutionary theory to recent work on the gut\u2013brain axis. Although the theory predicts manipulation by symbionts under certain conditions, these appear rarely satisfied by the genetically diverse communities of the mammalian microbiome. Specifically, any symbiont investing its resources to manipulate host behaviour is expected to be outcompeted within the microbiome by strains that do not manipulate and redirect their resources into growth and survival. Moreover, current data provide no clear evidence for manipulation. Instead, we show how behavioural effects can readily arise as a by-product of natural selection on microorganisms to grow within the host and natural selection on hosts to depend upon their symbionts. We argue that understanding why the microbiome influences behaviour requires a focus on microbial ecology and local effects within the host. The microbiota can influence host behaviour through the gut\u2013brain axis. In this Opinion, Johnson and Foster explore the evolution of this relationship and propose that adaptations of competing gut microorganisms may affect behaviour as a by\u2011product, leading to host dependence."}
{"_id": "8b40b159c2316dbea297a301a9c561b1d9873c4a", "title": "Monolingual and Cross-Lingual Information Retrieval Models Based on (Bilingual) Word Embeddings", "text": "We propose a new unified framework for monolingual (MoIR) and cross-lingual information retrieval (CLIR) which relies on the induction of dense real-valued word vectors known as word embeddings (WE) from comparable data. To this end, we make several important contributions: (1) We present a novel word representation learning model called Bilingual Word Embeddings Skip-Gram (BWESG) which is the first model able to learn bilingual word embeddings solely on the basis of document-aligned comparable data; (2) We demonstrate a simple yet effective approach to building document embeddings from single word embeddings by utilizing models from compositional distributional semantics. BWESG induces a shared cross-lingual embedding vector space in which both words, queries, and documents may be presented as dense real-valued vectors; (3) We build novel ad-hoc MoIR and CLIR models which rely on the induced word and document embeddings and the shared cross-lingual embedding space; (4) Experiments for English and Dutch MoIR, as well as for English-to-Dutch and Dutch-to-English CLIR using benchmarking CLEF 2001-2003 collections and queries demonstrate the utility of our WE-based MoIR and CLIR models. The best results on the CLEF collections are obtained by the combination of the WE-based approach and a unigram language model. We also report on significant improvements in ad-hoc IR tasks of our WE-based framework over the state-of-the-art framework for learning text representations from comparable data based on latent Dirichlet allocation (LDA)."}
{"_id": "c2c03bd11ae5c58b3b7c8e10f325e2a253868e45", "title": "Easily Add Significance Testing to your Market Basket Analysis in SAS \u00ae Enterprise Miner TM", "text": "Market Basket Analysis is a popular data mining tool that can be used to search through data to find patterns of co-occurrence among objects. It is an algorithmic process that generates business rules and several metrics for each business rule such as support, confidence and lift that help researchers identify \u201cinteresting\u201d patterns. Although useful, these popular metrics do not incorporate traditional significance testing. This paper describes how to easily add a well-known statistical significance test, the Pearson\u2019s Chi Squared statistic, to the existing output generated by SAS\u00ae Enterprise Miner\u2019s Association Node. The addition of this significance test enhances the ability of data analysts to make better decisions about which business rules are likely to be more useful."}
{"_id": "3f9df5c77af49d5b1b19eac9b82cb430b50f482d", "title": "Leveraging social media networks for classification", "text": "Social media has reshaped the way in which people interact with each other. The rapid development of participatory web and social networking sites like YouTube, Twitter, and Facebook, also brings about many data mining opportunities and novel challenges. In particular, we focus on classification tasks with user interaction information in a social network. Networks in social media are heterogeneous, consisting of various relations. Since the relation-type information may not be available in social media, most existing approaches treat these inhomogeneous connections homogeneously, leading to an unsatisfactory classification performance. In order to handle the network heterogeneity, we propose the concept of social dimension to represent actors\u2019 latent affiliations, and develop a classification framework based on that. The proposed framework, SocioDim, first extracts social dimensions based on the network structure to accurately capture prominent interaction patterns between actors, then learns a discriminative classifier to select relevant social dimensions. SocioDim, by differentiating different types of network connections, outperforms existing representative methods of classification in social media, and offers a simple yet effective approach to integrating two types of seemingly orthogonal information: the network of actors and their attributes."}
{"_id": "bf9db8ca2dce7386cbed1ae0fd6465148cdb2b98", "title": "From Word To Sense Embeddings: A Survey on Vector Representations of Meaning", "text": "Over the past years, distributed semantic representations have proved to be effective and flexible keepers of prior knowledge to be integrated into downstream applications. This survey focuses on the representation of meaning. We start from the theoretical background behind word vector space models and highlight one of their major limitations: the meaning conflation deficiency, which arises from representing a word with all its possible meanings as a single vector. Then, we explain how this deficiency can be addressed through a transition from the word level to the more fine-grained level of word senses (in its broader acceptation) as a method for modelling unambiguous lexical meaning. We present a comprehensive overview of the wide range of techniques in the two main branches of sense representation, i.e., unsupervised and knowledge-based. Finally, this survey covers the main evaluation procedures and applications for this type of representation, and provides an analysis of four of its important aspects: interpretability, sense granularity, adaptability to different domains and compositionality."}
{"_id": "9d2d8b7b683e0ad66fb014956ddfdfcd292bdc55", "title": "Demographic Attributes Prediction on the Real-World Mobile Data", "text": "The deluge of the data generated by mobile phone devices imposes new challenges on the data mining community. User activities recorded by mobile phones could be useful for uncovering behavioral patterns. An interesting question is whether patterns in mobile phone usage can reveal demographic characteristics of the user? Demographic information about gender, age, marital status, job type, etc. is a key for applications with customer centric strategies. In this paper, we describe our approach to feature extraction from raw data and building predictive models for the task of demographic attributes predictions. We experimented with graph based representation of users inferred from similarity of their feature vectors, feature selections and classifications algorithms. Our work contributes to the Nokia Mobile Data Challenge (MDC) in the endeavor of exploring the real-world mobile data."}
{"_id": "a5bfe7d4c00ed9c9922ebf9e29692eedb6add060", "title": "The Effects of Profile Pictures and Friends' Comments on Social Network Site Users' Body Image and Adherence to the Norm", "text": "This study sought to explore the effects of exposure to Facebook body ideal profile pictures and norm conforming comments on users' body image. In addition, the social identity and self-categorization theoretical frameworks were used to explore users' endorsement of a body ideal norm. A mock Facebook page was used to conduct a pretest posttest 2\u2009\u00d7\u20092 between-group web-based experiment that featured body ideal profile pictures (body ideal vs. no body) and body ideal comments (conforming vs. nonconforming). Five hundred and one participants completed the experiment and passed all manipulation checks. Participants viewed pictures and comments on the status page and were able to leave their own comment before exiting. Results demonstrated no significant main effects. However, predispositional body satisfaction significantly moderated the relationship between body ideal pictures and body satisfaction. Most comments supported the body ideal norm. However, in support of self-categorization theory, participants exposed to nonconforming comments made nonconforming comments themselves significantly more than those exposed to conforming comments. The findings demonstrated the importance of continued body image research in social network sites, as well as the potential for self-categorization theory to guide such research."}
{"_id": "d8c42a63826842f99cf357c0af52beceb2d9e563", "title": "(Invited) Nanoimprinted perovskite for optoelectronics", "text": "Organic-inorganic hybrid perovskites have recently emerged as promising materials for optoelectronics. Here we show successful patterning of hybrid perovskite into nanostructures with cost-effective nanoimprint technology. Photodetectors are fabricated on nanoimprinted perovskite with improved responsivity. Nanoimprinted perovskite metasurface forms with significantly enhanced photoluminescence. Lasing is expected on nanoimprinted perovskite with optimized cavity design and process."}
{"_id": "c7ce1e88f41e771329fc4101252f61ff2aa9de6a", "title": "A 0.7V Fully-on-Chip Pseudo-Digital LDO Regulator with 6.3\u03bcA Quiescent Current and 100mV Dropout Voltage in 0.18-\u03bcm CMOS", "text": "This paper presents an NMOS pseudo-digital low-dropout (PD-LDO) regulator that supports low-voltage operation by eliminating the amplifier of an analog LDO. The proposed pseudo-digital control loop consists of a latched comparator, a 2X charge pump and a RC network. It detects the output voltage and provides a continuous gate control signal for the power transistor by charging and discharging the RC network. Fast transient response is achieved due to the source follower structure of the power NMOS, with a small output capacitor and small occupied chip area and without consuming large quiescent current. The proof-of-concept design of the proposed PD-LDO is implemented in a 0.18-J.1m CMOS process. The minimum supply voltage is 0.7 V, with a dropout voltage of 100 mV and a maximum load current of 100 mA. Using only 20 pF of on-chip output capacitor and 10 MHz comparator clock frequency, the undershoot is 106 mV with 90 mA load current step and 150 ns edge time. The quiescent current is only 6.3 \u03bcA and the active chip area is 0.08 mm2."}
{"_id": "f45cfbb9377a7da8be3f0a09d7291a7b01bb79d2", "title": "Heart rate variability and psychometric responses to overload and tapering in collegiate sprint-swimmers.", "text": "OBJECTIVES\nThe purpose of this study was to evaluate cardiac-parasympathetic and psychometric responses to competition preparation in collegiate sprint-swimmers. Additionally, we aimed to determine the relationship between average vagal activity and its daily fluctuation during each training phase.\n\n\nDESIGN\nObservational.\n\n\nMETHODS\nTen Division-1 collegiate sprint-swimmers performed heart rate variability recordings (i.e., log transformed root mean square of successive RR intervals, lnRMSSD) and completed a brief wellness questionnaire with a smartphone application daily after waking. Mean values for psychometrics and lnRMSSD (lnRMSSDmean) as well as the coefficient of variation (lnRMSSDcv) were calculated from 1 week of baseline (BL) followed by 2 weeks of overload (OL) and 2 weeks of tapering (TP) leading up to a championship competition.\n\n\nRESULTS\nCompetition preparation resulted in improved race times (p<0.01). Moderate decreases in lnRMSSDmean, and Large to Very Large increases in lnRMSSDcv, perceived fatigue and soreness were observed during the OL and returned to BL levels or peaked during TP (p<0.05). Inverse correlations between lnRMSSDmean and lnRMSSDcv were Very Large at BL and OL (p<0.05) but only Moderate at TP (p>0.05).\n\n\nCONCLUSIONS\nOL training is associated with a reduction and greater daily fluctuation in vagal activity compared with BL, concurrent with decrements in perceived fatigue and muscle soreness. These effects are reversed during TP where these values returned to baseline or peaked leading into successful competition. The strong inverse relationship between average vagal activity and its daily fluctuation weakened during TP."}
{"_id": "3cbf5c2b9833207aa7c18cbbb6257e25057bd99c", "title": "\u57fa\u65bc\u5c0d\u7167\u8868\u4ee5\u53ca\u8a9e\u8a00\u6a21\u578b\u4e4b\u7c21\u7e41\u5b57\u9ad4\u8f49\u63db (Chinese Characters Conversion System based on Lookup Table and Language Model) [In Chinese]", "text": "The character sets used in China and Taiwan are both Chinese, but they are divided into simplified and traditional Chinese characters. There are large amount of information exchange between China and Taiwan through books and Internet. To provide readers a convenient reading environment, the character conversion between simplified and traditional Chinese is necessary. The conversion between simplified and traditional Chinese characters has two problems: one-to-many ambiguity and term usage problems. Since there are many traditional Chinese characters that have only one corresponding simplified character, when converting simplified Chinese into traditional Chinese, the system will face the one-to-many ambiguity. Also, there are many terms that have different usages between the two Chinese societies. This paper focus on designing an extensible conversion system, that can take the advantage of community knowledge by accumulating lookup tables through Wikipedia to tackle the term usage problem and can integrate language model to disambiguate the one-to-many ambiguity. The system can reduce the cost of proofreading of character conversion for books, e-books, or online publications. The extensible architecture makes it easy to improve the system with new training data."}
{"_id": "b3318d66069cc164ac085d15dc25cafac82c9d6b", "title": "Checklists for improving rigour in qualitative research: a case of the tail wagging the dog?", "text": "Qualitative research methods are enjoying unprecedented popularity. Although checklists have undoubtedly contributed to the wider acceptance of such methods, these can be counterproductive if used prescriptively. The uncritical adoption of a range of \u201ctechnical fixes\u201d (such as purposive sampling, grounded theory, multiple coding, triangulation, and respondent validation) does not, in itself, confer rigour. In this article I discuss the limitations of these procedures and argue that there is no substitute for systematic and thorough application of the principles of qualitative research. Technical fixes will achieve little unless they are embedded in a broader understanding of the rationale and assumptions behind qualitative research."}
{"_id": "3455c5ed8dac3cb7a8a6c5cbe28dff96cc123a68", "title": "Yet Another Visit to Paxos", "text": "This paper presents a modular introduction to crash-tolerant and Byzantine-tolerant protocols for reaching consensus that use the method introduced by the Paxos algorithm of Lamport and by the viewstamped replication algorithm of Oki and Liskov. The consensus protocol runs a sequence of epoch abstractions as governed by an epoch-change abstraction. Implementations of epoch and epoch-change that tolerate crash faults yield the consensus algorithm in Paxos and in viewstamped replication. Implementations of epoch and epoch-change that tolerate Byzantine faults yield the consensus algorithm in the PBFT protocol of Castro and Liskov."}
{"_id": "13bf13f019632a4edb967635e72e3e140f89e90e", "title": "Internet inter-domain traffic", "text": "In this paper, we examine changes in Internet inter-domain traffic demands and interconnection policies. We analyze more than 200 Exabytes of commercial Internet traffic over a two year period through the instrumentation of 110 large and geographically diverse cable operators, international transit backbones, regional networks and content providers. Our analysis shows significant changes in inter-AS traffic patterns and an evolution of provider peering strategies. Specifically, we find the majority of inter-domain traffic by volume now flows directly between large content providers, data center / CDNs and consumer networks. We also show significant changes in Internet application usage, including a global decline of P2P and a significant rise in video traffic. We conclude with estimates of the current size of the Internet by inter-domain traffic volume and rate of annualized inter-domain traffic growth."}
{"_id": "0a65bc1de050f544648977e786c499a0166fa22a", "title": "Strategies and struggles with privacy in an online social networking community", "text": "Online social networking communities such as Facebook and MySpace are extremely popular. These sites have changed how many people develop and maintain relationships through posting and sharing personal information. The amount and depth of these personal disclosures have raised concerns regarding online privacy. We expand upon previous research on users\u2019 under-utilization of available privacy options by examining users\u2019 current strategies for maintaining their privacy, and where those strategies fail, on the online social network site Facebook. Our results demonstrate the need for mechanisms that provide awareness of the privacy impact of users\u2019 daily interactions."}
{"_id": "21b07e146c9b56dbe3d516b59b69d989be2998b7", "title": "Moving beyond untagging: photo privacy in a tagged world", "text": "Photo tagging is a popular feature of many social network sites that allows users to annotate uploaded images with those who are in them, explicitly linking the photo to each person's profile. In this paper, we examine privacy concerns and mechanisms surrounding these tagged images. Using a focus group, we explored the needs and concerns of users, resulting in a set of design considerations for tagged photo privacy. We then designed a privacy enhancing mechanism based on our findings, and validated it using a mixed methods approach. Our results identify the social tensions that tagging generates, and the needs of privacy tools to address the social implications of photo privacy management."}
{"_id": "2d2b1f9446e9b4cdb46327cda32a8d9621944e29", "title": "Information revelation and privacy in online social networks", "text": "Participation in social networking sites has dramatically increased in recent years. Services such as Friendster, Tribe, or the Facebook allow millions of individuals to create online profiles and share personal information with vast networks of friends - and, often, unknown numbers of strangers. In this paper we study patterns of information revelation in online social networks and their privacy implications. We analyze the online behavior of more than 4,000 Carnegie Mellon University students who have joined a popular social networking site catered to colleges. We evaluate the amount of information they disclose and study their usage of the site's privacy settings. We highlight potential attacks on various aspects of their privacy, and we show that only a minimal percentage of users changes the highly permeable privacy preferences."}
{"_id": "2dff0f21a23f9e3b6e0c50ce3fec75de4ff00359", "title": "Persona: an online social network with user-defined privacy", "text": "Online social networks (OSNs) are immensely popular, with some claiming over 200 million users. Users share private content, such as personal information or photographs, using OSN applications. Users must trust the OSN service to protect personal information even as the OSN provider benefits from examining and sharing that information. We present Persona, an OSN where users dictate who may access their information. Persona hides user data with attribute-based encryption (ABE), allowing users to apply fine-grained policies over who may view their data. Persona provides an effective means of creating applications in which users, not the OSN, define policy over access to private data. We demonstrate new cryptographic mechanisms that enhance the general applicability of ABE. We show how Persona provides the functionality of existing online social networks with additional privacy benefits. We describe an implementation of Persona that replicates Facebook applications and show that Persona provides acceptable performance when browsing privacy-enhanced web pages, even on mobile devices."}
{"_id": "2e1fa859dc31677895276a6b26d90ec70a4861c5", "title": "Estimating the Customer-Level Demand for Electricity Under Real-Time Market Prices", "text": "This paper presents estimates of the customer-level demand for electricity by industrial and commercial customers purchasing electricity according to the half-hourly energy prices from the England and Wales (E&W) electricity market. These customers also face the possibility of a demand charge on their electricity consumption during the three half-hour periods that are coincident with E&W system peaks. Although energy charges are largely known by 4:00 P.M. the day prior to consumption, a fraction of the energy charge and the identity of the half-hour periods when demand charges occur are only known with certainty ex post of consumption. Four years of data from a Regional Electricity Company (REC) in the United Kingdom is used to quantify the half-hourly customer-level demands under this real-time pricing program. The econometric model developed and estimated here quantifies the extent of intertemporal substitution in electricity consumption across pricing periods within the day due to changes in all of the components of the day-ahead E&W electricity prices, the level of the demand charge and the probability that a demand charge will be imposed. The results of this modeling framework can be used by retail companies supplying consumers purchasing electricity according to real-time market prices to construct demand-side bids into an electricity market open to competition. The paper closes with several examples of how this might be done."}
{"_id": "446d5a4496d7dc201263b888c4f0ae65833a25eb", "title": "Application of Spreading Activation Techniques in Information Retrieval", "text": "This paper surveys the use of Spreading Activation techniques onSemantic Networks in Associative Information Retrieval. The majorSpreading Activation models are presented and their applications toIR is surveyed. A number of works in this area are criticallyanalyzed in order to study the relevance of Spreading Activation forassociative IR."}
{"_id": "5882b1409dc3d6d9beebb0fd3ab149f864e8c8d3", "title": "Combined digital signature and digital watermark scheme for image authentication", "text": "Conventional digital signature schemes for image authentication encode the signature in a file separate from the original image, thus require extra bandwidth to transmit it. Meantime, the watermark technique embeds some information in the host image. In this paper, a combined digital watermark and digital signature scheme for image authentication is proposed. The scheme extracts signature from the original image and embeds them back into the image as watermark, avoiding additional signature file. Since images are always compressed before transmission in the Internet, the proposed scheme is tested for compression tolerance and shows good robustness against JPEG coding. Furthermore, the scheme not only can verify the authenticity and the integrity of images, but also can locate the illegal modifications."}
{"_id": "f2e9f869a9fc1f07887866be5f70a37b6c31411b", "title": "Listening to Chaotic Whispers: A Deep Learning Framework for News-oriented Stock Trend Prediction", "text": "Stock trend prediction plays a critical role in seeking maximized profit from the stock investment. However, precise trend prediction is very difficult since the highly volatile and non-stationary nature of the stock market. Exploding information on the Internet together with the advancing development of natural language processing and text mining techniques have enabled investors to unveil market trends and volatility from online content. Unfortunately, the quality, trustworthiness, and comprehensiveness of online content related to stock market vary drastically, and a large portion consists of the low-quality news, comments, or even rumors. To address this challenge, we imitate the learning process of human beings facing such chaotic online news, driven by three principles: sequential content dependency, diverse influence, and effective and efficient learning. In this paper, to capture the first two principles, we designed a Hybrid Attention Networks(HAN) to predict the stock trend based on the sequence of recent related news. Moreover, we apply the self-paced learning mechanism to imitate the third principle. Extensive experiments on real-world stock market data demonstrate the effectiveness of our framework. A further simulation illustrates that a straightforward trading strategy based on our proposed framework can significantly increase the annualized return."}
{"_id": "adbe1565649a4f547a68030da8b6a0814f228bbc", "title": "FinDroidHR: Smartwatch Gesture Input with Optical Heartrate Monitor", "text": "We present FinDroidHR, a novel gesture input technique for off-the-shelf smartwatches. Our technique is designed to detect 10 hand gestures on the hand wearing a smartwatch. The technique is enabled by analysing features of the Photoplethysmography (PPG) signal that optical heart-rate sensors capture. In a study with 20 participants, we show that FinDroidHR achieves 90.55% accuracy and 90.73% recall. Our work is the first study to explore the feasibility of using optical sensors on the off-the-shelf wearable devices to recognise gestures. Without requiring bespoke hardware, FinDroidHR can be readily used on existing smartwatches."}
{"_id": "9f4a856aee19e6cddbace27be817770214a6fa4a", "title": "Humanoid robotics platforms developed in HRP", "text": "This paper presents humanoid robotics platform that consists of a humanoid robot and an open architecture software platform developed in METI\u2019s Humanoid Robotics Project (HRP). The final version of the robot, called HRP-2, has 1540 mm height, 58 kg weight and 30 degrees of the freedom. The software platform includes a dynamics simulator and motion controllers of the r ns and is e \u00a9"}
{"_id": "51077ea4ac322d8b6438d5acfb6c0012cb1bcfb1", "title": "Using web security scanners to detect vulnerabilities in web services", "text": "Although web services are becoming business-critical components, they are often deployed with critical software bugs that can be maliciously explored. Web vulnerability scanners allow detecting security vulnerabilities in web services by stressing the service from the point of view of an attacker. However, research and practice show that different scanners have different performance on vulnerabilities detection. In this paper we present an experimental evaluation of security vulnerabilities in 300 publicly available web services. Four well known vulnerability scanners have been used to identify security flaws in web services implementations. A large number of vulnerabilities has been observed, which confirms that many services are deployed without proper security testing. Additionally, the differences in the vulnerabilities detected and the high number of false-positives (35% and 40% in two cases) and low coverage (less than 20% for two of the scanners) observed highlight the limitations of web vulnerability scanners on detecting security vulnerabilities in web services."}
{"_id": "2fddc5cdada7b44a598b3a4a76de52825350dd5d", "title": "Development of Generalized Photovoltaic Model Using MATLAB / SIMULINK", "text": "temperature into consideration, the output current and power characteristics of PV model are simulated and optimized using the proposed model. This enables the dynamics of PV power system to be easily simulated, analyzed, and optimized."}
{"_id": "9270f2bf533897ab5711c65eb269561b39a2055e", "title": "Social-Aware Video Recommendation for Online Social Groups", "text": "Group recommendation plays a significant role in today's social media systems, where users form social groups to receive multimedia content together and interact with each other, instead of consuming the online content individually. Limitations of traditional group recommendation approaches are as follows. First, they usually infer group members\u2019 preferences by their historical behaviors, failing to capture inactive users\u2019 preferences from the sparse historical data. Second, relationships between group members are not studied by these approaches, which fail to capture the inherent personality of members in a group. To address these issues, we propose a social-aware group recommendation framework that jointly utilizes both social relationships and social behaviors to not only infer a group's preference, but also model the tolerance and altruism characteristics of group members. Based on the observation that the following relationship in the online social network reflects common interests of users, we propose a group preference model based on external experts of group members. Furthermore, we model users\u2019 tolerance (willingness to receive content not preferred) and altruism (willingness to receive content preferred by friends). Finally, based on the group preference model, we design recommendation algorithms for users under different social contexts. Experimental results demonstrate the effectiveness of our approach, which significantly improves the recommendation accuracy against traditional approaches, especially in the cases of inactive group members."}
{"_id": "427afd1b0ecbea2ffb1feaec7ef55234394c9814", "title": "ScreenerNet: Learning Curriculum for Neural Networks", "text": "We propose to learn a curriculum or a syllabus for supervised learning with deep neural networks. Specifically, we learn weights for each sample in training by an attached neural network, called ScreenerNet, to the original network and jointly train them in an end-to-end fashion. We show the networks augmented with our ScreenerNet achieve early convergence with better accuracy than the state-of-the-art rule-based curricular learning methods in extensive experiments using three popular vision datasets including MNIST, CIFAR10 and Pascal VOC2012, and a Cartpole task using Deep Q-learning."}
{"_id": "4ab868acf51fd6d78ba2d15357de673f8ec0bad1", "title": "ICTs for Improving Patients Rehabilitation Research Techniques", "text": "The world population is rapidly aging and becoming a burden to health systems around the world. In this work we present a conceptual framework to encourage the research community to develop more comprehensive and adaptive ICT solutions for prevention and rehabilitation of chronic conditions in the daily life of the aging population and beyond health facilities. We first present an overview of current international standards in human functioning and disability, and how chronic conditions are interconnected in older age. We then describe innovative mobile and sensor technologies, predictive data analysis in healthcare, and game-based prevention and rehabilitation techniques. We then set forth a multidisciplinary approach for the personalized prevention and rehabilitation of chronic conditions using unobtrusive and pervasive sensors, interactive activities, and predictive analytics, which also eases the tasks of health-related researchers, caregivers and providers. Our proposal represents a conceptual basis for future research, in which much remains to be done in terms of standardization of technologies and health terminology, as well as data protection and privacy legislation."}
{"_id": "78be1d50e1eb2c526f5e35f5820e195eec313101", "title": "An Evaluation of Parser Robustness for Ungrammatical Sentences", "text": "For many NLP applications that require a parser, the sentences of interest may not be well-formed. If the parser can overlook problems such as grammar mistakes and produce a parse tree that closely resembles the correct analysis for the intended sentence, we say that the parser is robust. This paper compares the performances of eight state-of-the-art dependency parsers on two domains of ungrammatical sentences: learner English and machine translation outputs. We have developed an evaluation metric and conducted a suite of experiments. Our analyses may help practitioners to choose an appropriate parser for their tasks, and help developers to improve parser robustness against ungrammatical sentences."}
{"_id": "c7062c80bc4b13da80d4d10f9d9b273aa316f0fc", "title": "Microglia-mediated recovery from ALS-relevant motor neuron degeneration in a mouse model of TDP-43 proteinopathy", "text": "Though motor neurons selectively degenerate in amyotrophic lateral sclerosis, other cell types are likely involved in this disease. We recently generated rNLS8 mice in which human TDP-43 (hTDP-43) pathology could be reversibly induced in neurons and expected that microglia would contribute to neurodegeneration. However, only subtle microglial changes were detected during disease in the spinal cord, despite progressive motor neuron loss; microglia still reacted to inflammatory triggers in these mice. Notably, after hTDP-43 expression was suppressed, microglia dramatically proliferated and changed their morphology and gene expression profiles. These abundant, reactive microglia selectively cleared neuronal hTDP-43. Finally, when microgliosis was blocked during the early recovery phase using PLX3397, a CSF1R and c-kit inhibitor, rNLS8 mice failed to regain full motor function, revealing an important neuroprotective role for microglia. Therefore, reactive microglia exert neuroprotective functions in this amyotrophic lateral sclerosis model, and definition of the underlying mechanism could point toward novel therapeutic strategies. Using an inducible mouse model of sporadic ALS, Spiller et al. show that spinal microgliosis is not a major feature of TDP-43-triggered disease. Instead, microglia mediate TDP-43 clearance and motor recovery, suggesting a neuroprotective role in ALS."}
{"_id": "0fce6f27385af907e0751bb1d0781eb9bc1e5359", "title": "MazeBase: A Sandbox for Learning from Games", "text": "This paper introduces an environment for simple 2D maze games, designed as a sandbox for machine learning approaches to reasoning and planning. Within it, we create 10 simple games based on algorithmic tasks (e.g. embodying simple if-then statements). We deploy a range of neural models (fully connected, convolutional network, memory network) on these games, with and without a procedurally generated curriculum. We show that these architectures can be trained with reinforcement to respectable performance on these tasks, but are still far from optimal, despite their simplicity. We also apply these models to games involving combat, including StarCraft, demonstrating their ability to learn non-trivial tactics which enable them to consistently beat the in-game AI."}
{"_id": "157099d6ffd3ffca8cfca7955aff7c5f1a979ac9", "title": "Multilabel Text Classification for Automated Tag Suggestion", "text": "The increased popularity of tagging during the last few years can be mainly attributed to its embracing by most of the recently thriving user-centric content publishing and management Web 2.0 applications. However, tagging systems have some limitations that have led researchers to develop methods that assist users in the tagging process, by automatically suggesting an appropriate set of tags. We have tried to model the automated tag suggestion problem as a multilabel text classification task in order to participate in the ECML/PKDD 2008 Discovery Challenge."}
{"_id": "1edae9efb4f84352bb66095295cd94b2cddce00d", "title": "A taxonomy for user-healthcare robot interaction", "text": "This paper evaluates existing taxonomies aimed at characterizing the interaction between robots and their users and modifies them for health care applications. The modifications are based on existing robot technologies and user acceptance of robotics. Characterization of the user, or in this case the patient, is a primary focus of the paper, as they present a unique new role as robot users. While therapeutic and monitoring-related applications for robots are still relatively uncommon, we believe they will begin to grow and thus it is important that the spurring relationship between robot and patient is well understood."}
{"_id": "c91fed8c6bf32fccd4fc9148fb7b912191b8d962", "title": "Multiple Physical Layer Pipes performance for DVB-T2", "text": "The DVB-T2 terrestrial television standard is becoming increasingly important, and have been extensively studied and developed to provide many types of services with higher spectral efficiency and better performance. The Physical Layer Pipes in DVB-S2 are logical channels carrying one or more services with modulation scheme and robustness. The main changes are found in the physical layer where DVB-T2 incorporates a new physical layer pipe (PLP). Each physical layer pipe contains an individual configuration of modulation, coding and interleaving. This new concept allows a transmission with multiple physical layer pipes where each service can be transmitted with different physical layer configuration. The Advanced Television Systems Committee (ATSC3.0) standard will add value to broadcasting services, allowing a extending reach by adding new business models, providing higher quality, improved accessibility, personalization and interactivity and more flexible and efficient use of the spectrum."}
{"_id": "ab4bcc8e1db4f80fa2a31905cf332c00379602b0", "title": "Precision CMOS current reference with process and temperature compensation", "text": "This paper presents a new first-order temperature compensated CMOS current reference. To achieve a compact architecture able to operate under low voltage with low power consumption, it is based on a self-biasing beta multiplier current generator. Compensation against temperature is achieved by using instead of an ordinary resistor two triode transistors in parallel, acting as a negative and a positive temperature coefficient resistor, that generate a proportional to absolute temperature and a complementary to absolute temperature current which can be directly added to attain a temperature compensated current. Programmability is included to adjust the temperature coefficient and the reference current magnitude over process variations. Results for a 0.18 \u03bcm CMOS implementation show that the proposed 500 nA reference operate with supplies down to 1.2 V accomplishing over a (-40 to +120\u00b0C) range temperature drifts below 120 ppm/\u00b0C."}
{"_id": "403b8448589e0405ce50356d0c6be3916bd12ce1", "title": "Modified post-filter to recover modulation spectrum for HMM-based speech synthesis", "text": "This paper proposes a modified post-filter to recover a Modulation Spectrum (MS) in HMM-based speech synthesis. To alleviate the over-smoothing effect which is one of the major problems in HMM-based speech synthesis, the MS-based post-filter has been proposed. It recovers the utterance-level MS of the generated speech trajectory, and we have reported its benefit to the quality improvement. However, this post-filter is not applicable to various lengths of speech parameter trajectories, such as phrases or segments, which are shorter than an utterance. To address this problem, we propose two modified post-filters, (1) the time-invariant filter with a simplified conversion form and (2) the segment-level post-filter which applicable to a short-term parameter sequence. Furthermore, we also propose (3) the post-filter to recover the phoneme-level MS of HMM-state duration. Experimental results show that the modified post-filters also yield significant quality improvements in synthetic speech as yielded by the conventional post-filter."}
{"_id": "e9c525679fed4dad85699d09b5ce1ccaffe8f11d", "title": "Fully convolutional network and sparsity-based dictionary learning for liver lesion detection in CT examinations", "text": ""}
{"_id": "7d91d2944ed5b846739256029f2c9c79090fd0ca", "title": "TABLA: A unified template-based framework for accelerating statistical machine learning", "text": "A growing number of commercial and enterprise systems increasingly rely on compute-intensive Machine Learning (ML) algorithms. While the demand for these compute-intensive applications is growing, the performance benefits from general-purpose platforms are diminishing. Field Programmable Gate Arrays (FPGAs) provide a promising path forward to accommodate the needs of machine learning algorithms and represent an intermediate point between the efficiency of ASICs and the programmability of general-purpose processors. However, acceleration with FPGAs still requires long development cycles and extensive expertise in hardware design. To tackle this challenge, instead of designing an accelerator for a machine learning algorithm, we present TABLA, a framework that generates accelerators for a class of machine learning algorithms. The key is to identify the commonalities across a wide range of machine learning algorithms and utilize this commonality to provide a high-level abstraction for programmers. TABLA leverages the insight that many learning algorithms can be expressed as a stochastic optimization problem. Therefore, learning becomes solving an optimization problem using stochastic gradient descent that minimizes an objective function over the training data. The gradient descent solver is fixed while the objective function changes for different learning algorithms. TABLA provides a template-based framework to accelerate this class of learning algorithms. Therefore, a developer can specify the learning task by only expressing the gradient of the objective function using our high-level language. Tabla then automatically generates the synthesizable implementation of the accelerator for FPGA realization using a set of hand-optimized templates. We use Tabla to generate accelerators for ten different learning tasks targeted at a Xilinx Zynq FPGA platform. We rigorously compare the benefits of FPGA acceleration to multi-core CPUs (ARM Cortex A15 and Xeon E3) and many-core GPUs (Tegra K1, GTX 650 Ti, and Tesla K40) using real hardware measurements. TABLA-generated accelerators provide 19.4x and 2.9x average speedup over the ARM and Xeon processors, respectively. These accelerators provide 17.57x, 20.2x, and 33.4x higher Performance-per-Watt in comparison to Tegra, GTX 650 Ti and Tesla, respectively. These benefits are achieved while the programmers write less than 50 lines of code."}
{"_id": "f5a1eeab058b91503d7ebb25fd3615480d96a639", "title": "Testing the reliability and efficiency of the pilot Mixed Methods Appraisal Tool (MMAT) for systematic mixed studies review.", "text": "BACKGROUND\nSystematic literature reviews identify, select, appraise, and synthesize relevant literature on a particular topic. Typically, these reviews examine primary studies based on similar methods, e.g., experimental trials. In contrast, interest in a new form of review, known as mixed studies review (MSR), which includes qualitative, quantitative, and mixed methods studies, is growing. In MSRs, reviewers appraise studies that use different methods allowing them to obtain in-depth answers to complex research questions. However, appraising the quality of studies with different methods remains challenging. To facilitate systematic MSRs, a pilot Mixed Methods Appraisal Tool (MMAT) has been developed at McGill University (a checklist and a tutorial), which can be used to concurrently appraise the methodological quality of qualitative, quantitative, and mixed methods studies.\n\n\nOBJECTIVES\nThe purpose of the present study is to test the reliability and efficiency of a pilot version of the MMAT.\n\n\nMETHODS\nThe Center for Participatory Research at McGill conducted a systematic MSR on the benefits of Participatory Research (PR). Thirty-two PR evaluation studies were appraised by two independent reviewers using the pilot MMAT. Among these, 11 (34%) involved nurses as researchers or research partners. Appraisal time was measured to assess efficiency. Inter-rater reliability was assessed by calculating a kappa statistic based on dichotomized responses for each criterion. An appraisal score was determined for each study, which allowed the calculation of an overall intra-class correlation.\n\n\nRESULTS\nOn average, it took 14 min to appraise a study (excluding the initial reading of articles). Agreement between reviewers was moderate to perfect with regards to MMAT criteria, and substantial with respect to the overall quality score of appraised studies.\n\n\nCONCLUSION\nThe MMAT is unique, thus the reliability of the pilot MMAT is promising, and encourages further development."}
{"_id": "23355a1cb7bb226654e2319dd5ce9443284a694b", "title": "YAGO2: exploring and querying world knowledge in time, space, context, and many languages", "text": "We present YAGO2, an extension of the YAGO knowledge base with focus on temporal and spatial knowledge. It is automatically built from Wikipedia, GeoNames, and WordNet, and contains nearly 10 million entities and events, as well as 80 million facts representing general world knowledge. An enhanced data representation introduces time and location as first-class citizens. The wealth of spatio-temporal information in YAGO can be explored either graphically or through a special time- and space-aware query language."}
{"_id": "1920f1e482a7971b6e168df0354744c2544e4658", "title": "Stretchable Heater Using Ligand-Exchanged Silver Nanowire Nanocomposite for Wearable Articular Thermotherapy.", "text": "Thermal therapy is one of the most popular physiotherapies and it is particularly useful for treating joint injuries. Conventional devices adapted for thermal therapy including heat packs and wraps have often caused discomfort to their wearers because of their rigidity and heavy weight. In our study, we developed a soft, thin, and stretchable heater by using a nanocomposite of silver nanowires and a thermoplastic elastomer. A ligand exchange reaction enabled the formation of a highly conductive and homogeneous nanocomposite. By patterning the nanocomposite with serpentine-mesh structures, conformal lamination of devices on curvilinear joints and effective heat transfer even during motion were achieved. The combination of homogeneous conductive elastomer, stretchable design, and a custom-designed electronic band created a novel wearable system for long-term, continuous articular thermotherapy."}
{"_id": "09d8995d289fd31a15df47c824a9fdb79114a169", "title": "Manifold Gaussian Processes for regression", "text": "Off-the-shelf Gaussian Process (GP) covariance functions encode smoothness assumptions on the structure of the function to be modeled. To model complex and non-differentiable functions, these smoothness assumptions are often too restrictive. One way to alleviate this limitation is to find a different representation of the data by introducing a feature space. This feature space is often learned in an unsupervised way, which might lead to data representations that are not useful for the overall regression task. In this paper, we propose Manifold Gaussian Processes, a novel supervised method that jointly learns a transformation of the data into a feature space and a GP regression from the feature space to observed space. The Manifold GP is a full GP and allows to learn data representations, which are useful for the overall regression task. As a proof-of-concept, we evaluate our approach on complex non-smooth functions where standard GPs perform poorly, such as step functions and robotics tasks with contacts."}
{"_id": "192687300b76bca25d06744b6586f2826c722645", "title": "Deep Gaussian Processes", "text": "In this paper we introduce deep Gaussian process (GP) models. Deep GPs are a deep belief network based on Gaussian process mappings. The data is modeled as the output of a multivariate GP. The inputs to that Gaussian process are then governed by another GP. A single layer model is equivalent to a standard GP or the GP latent variable model (GP-LVM). We perform inference in the model by approximate variational marginalization. This results in a strict lower bound on the marginal likelihood of the model which we use for model selection (number of layers and nodes per layer). Deep belief networks are typically applied to relatively large data sets using stochastic gradient descent for optimization. Our fully Bayesian treatment allows for the application of deep models even when data is scarce. Model selection by our variational bound shows that a five layer hierarchy is justified even when modelling a digit data set containing only 150 examples."}
{"_id": "2cac0942a692c3dbb46bcf826d71d202ab0f2e02", "title": "Variational Auto-encoded Deep Gaussian Processes", "text": "We develop a scalable deep non-parametric generative model by augmenting deep Gaussian processes with a recognition model. Inference is performed in a novel scalable variational framework where the variational posterior distributions are reparametrized through a multilayer perceptron. The key aspect of this reformulation is that it prevents the proliferation of variational parameters which otherwise grow linearly in proportion to the sample size. We derive a new formulation of the variational lower bound that allows us to distribute most of the computation in a way that enables to handle datasets of the size of mainstream deep learning tasks. We show the efficacy of the method on a variety of challenges including deep unsupervised learning and deep Bayesian optimization."}
{"_id": "3bae80eca92a6e607cacdf03da393a1059c0d062", "title": "A Unifying View of Sparse Approximate Gaussian Process Regression", "text": "We provide a new unifying view, including all existing prope r probabilistic sparse approximations for Gaussian process regression. Our approach relies on exp ressing theeffective priorwhich the methods are using. This allows new insights to be gained, and highlights the relationship between existing methods. It also allows for a clear theoretically j ustified ranking of the closeness of the known approximations to the corresponding full GPs. Finall y we point directly to designs of new better sparse approximations, combining the best of the exi sting strategies, within attractive computational constraints."}
{"_id": "722fcc35def20cfcca3ada76c8dd7a585d6de386", "title": "Caffe: Convolutional Architecture for Fast Feature Embedding", "text": "Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures. Caffe fits industry and internet-scale media needs by CUDA GPU computation, processing over 40 million images a day on a single K40 or Titan GPU (approx 2 ms per image). By separating model representation from actual implementation, Caffe allows experimentation and seamless switching among platforms for ease of development and deployment from prototyping machines to cloud environments.\n Caffe is maintained and developed by the Berkeley Vision and Learning Center (BVLC) with the help of an active community of contributors on GitHub. It powers ongoing research projects, large-scale industrial applications, and startup prototypes in vision, speech, and multimedia."}
{"_id": "f801630d99835bea79d587cce0ee6b816bbaafd6", "title": "Multilayer networks: an architecture framework", "text": "We present an architecture framework for the control and management of multilayer networks and associated advanced network services. This material is identified as an \"architecture framework\" to emphasize its role in providing guidance and structure to our subsequent detailed architecture, design, and implementation activities. Our work is motivated by requirements from the Department of Energy science application community for real-time on-demand science-domain-specific network services and resource provisioning. We also summarize the current state of deployments and use of network services based on this multilayer network architecture framework."}
{"_id": "d3acf7f37c003cc77e9d51577ce5dce3a6700ad3", "title": "IoT for Healthcare", "text": "The Internet of Things (IoT) is the network of physical objects or things embedded with electronics, software, sensors, and network connectivity, which enables these objects to collect and exchange data. The Internet of Things allows objects to be sensed and controlled remotely across existing network infrastructure. The IoT is enabled by the latest developments in RFID, smart sensors, communication technologies, and Internet protocols. The basic premise is to have smart sensors collaborate directly without human involvement to deliver a new class of applications. The current revolution in Internet, mobile, and machine-to-machine (M2M) technologies can be seen as the first phase of the IoT. In the coming years, the IoT is expected to bridge diverse technologies to enable new applications by connecting physical objects together in support of intelligent decision making. Smart healthcare plays a significant role in healthcare applications through embedding sensors and actuators in patients and their medicine for monitoring and tracking purposes. The IoT is used by clinical care to monitor physiological statuses of patients through sensors by collecting and analyzing their information and then sending analyzed patient\u2019s data remotely to processing centers to make suitable actions. Not only for patients, it also useful for normal people to check the health status by using wearable devices with sensors."}
{"_id": "d6de518c6f78b406ed5e3996246acfc33b4a3918", "title": "User Experience in Service Design: A Case Study from Algeria", "text": "The design of crisis management services is crucial for emerging countries such as Algeria. It must take into account the experiences of diverse stakeholders. The authors investigate user experience (UX) practices from a service design perspective and describe a case study from Algeria exploring UX-driven service design for crisis management."}
{"_id": "2de974b0e20c9dd5215fb3d20698eb0bcdf0995b", "title": "Towards Accurate Predictions of Customer Purchasing Patterns", "text": "A range of algorithms was used to classify online retail customers of a UK company using historical transaction data. The predictive capabilities of the classifiers were assessed using linear regression, Lasso and regression trees. Unlike most related studies, classifications were based upon specific and marketing focused customer behaviours. Prediction accuracy on untrained customers was generally better than 80%. The models implemented (and compared) for classification were: Logistic Regression, Quadratic Discriminant Analysis, Linear SVM, RBF SVM, Gaussian Process, Decision Tree, Random Forest and Multi-layer Perceptron (Neural Network). Postcode data was then used to classify solely on demographics derived from the UK Land Registry and similar public data sources. Prediction accuracy remained better than 60%."}
{"_id": "1512f2115eaeeb96d05d67c694e4d0e9e77af23e", "title": "Discriminative learning of visual words for 3D human pose estimation", "text": "This paper addresses the problem of recovering 3D human pose from a single monocular image, using a discriminative bag-of-words approach. In previous work, the visual words are learned by unsupervised clustering algorithms. They capture the most common patterns and are good features for coarse-grain recognition tasks like object classification. But for those tasks which deal with subtle differences such as pose estimation, such representation may lack the needed discriminative power. In this paper, we propose to jointly learn the visual words and the pose regressors in a supervised manner. More specifically, we learn an individual distance metric for each visual word to optimize the pose estimation performance. The learned metrics rescale the visual words to suppress unimportant dimensions such as those corresponding to background. Another contribution is that we design an appearance and position context (APC) local descriptor that achieves both selectivity and invariance while requiring no background subtraction. We test our approach on both a quasi-synthetic dataset and a real dataset (HumanEva) to verify its effectiveness. Our approach also achieves fast computational speed thanks to the integral histograms used in APC descriptor extraction and fast inference of pose regressors."}
{"_id": "859be8c9a179cb0d231e62ca07b9f2569035487f", "title": "A graph-based recommender system for digital library", "text": "Research shows that recommendations comprise a valuable service for users of a digital library [11]. While most existing recommender systems rely either on a content-based approach or a collaborative approach to make recommendations, there is potential to improve recommendation quality by using a combination of both approaches (a hybrid approach). In this paper, we report how we tested the idea of using a graph-based recommender system that naturally combines the content-based and collaborative approaches. Due to the similarity between our problem and a concept retrieval task, a Hopfield net algorithm was used to exploit high-degree book-book, user-user and book-user associations. Sample hold-out testing and preliminary subject testing were conducted to evaluate the system, by which it was found that the system gained improvement with respect to both precision and recall by combining content-based and collaborative approaches. However, no significant improvement was observed by exploiting high-degree associations."}
{"_id": "38d4dce2c40a329aef2a82b003bd17551ac8439f", "title": "Real-time people counting system using video camera", "text": "In this MTech thesis experiments will be tried out on a people counting system in an effort to enhance the accuracy when separating counting groups of people, and nonhuman objects. This system features automatic color equalization, adaptive background subtraction, shadow detection algorithm and Kalman tracking. The aim is to develop a reliable and accurate computer vision alternative to sensor or contact based mechanisms. The problem for many computer vision based systems are making good separation between the background and foreground, and teaching the computers what parts make up a scene. We also want to find features to classify the foreground moving objects, an easy task for a human, but a complex task for a computer. Video has been captured with a birds eye view close to one of the entrances at the school about ten meters above the floor. From this video troublesome parts have been selected to test the changes done to the algorithms and program code."}
{"_id": "4d1d499963f37f8fb60654ccb8a26bb3f50f37e3", "title": "Bicycle dynamics and control: adapted bicycles for education and research", "text": "In this paper, the dynamics of bicycles is analyzed from the perspective of control. Models of different complexity are presented, starting with simple ones and ending with more realistic models generated from multibody software. Models that capture essential behavior such as self-stabilization as well as models that demonstrate difficulties with rear wheel steering are considered. Experiences using bicycles in control education along with suggestions for fun and thought-provoking experiments with proven student attraction are presented. Finally, bicycles and clinical programs designed for children with disabilities are described."}
{"_id": "0aafa4593b8bb496032a5b7bc7ab25d1a383d764", "title": "A neostriatal habit learning system in humans.", "text": "Amnesic patients and nondemented patients with Parkinson's disease were given a probabilistic classification task in which they learned which of two outcomes would occur on each trial, given the particular combination of cues that appeared. Amnesic patients exhibited normal learning of the task but had severely impaired declarative memory for the training episode. In contrast, patients with Parkinson's disease failed to learn the probabilistic classification task, despite having intact memory for the training episode. This double dissociation shows that the limbic-diencephalic regions damaged in amnesia and the neostriatum damaged in Parkinson's disease support separate and parallel learning systems. In humans, the neostriatum (caudate nucleus and putamen) is essential for the gradual, incremental learning of associations that is characteristic of habit learning. The neostriatum is important not just for motor behavior and motor learning but also for acquiring nonmotor dispositions and tendencies that depend on new associations."}
{"_id": "76365e761078ce82463988df59a8a0bce0d8d4d3", "title": "On Removing Routing Protocol from Future Wireless Networks: A Real-time Deep Learning Approach for Intelligent Traffic Control", "text": "Recently, deep learning has appeared as a breakthrough machine learning technique for various areas in computer science as well as other disciplines. However, the application of deep learning for network traffic control in wireless/heterogeneous networks is a relatively new area. With the evolution of wireless networks, efficient network traffic control such as routing methodology in the wireless backbone network appears as a key challenge. This is because the conventional routing protocols do not learn from their previous experiences regarding network abnormalities such as congestion and so forth. Therefore, an intelligent network traffic control method is essential to avoid this problem. In this article, we address this issue and propose a new, real-time deep learning based intelligent network traffic control method, exploiting deep Convolutional Neural Networks (deep CNNs) with uniquely characterized inputs and outputs to represent the considered Wireless Mesh Network (WMN) backbone. Simulation results demonstrate that our proposal achieves significantly lower average delay and packet loss rate compared to those observed with the existing routing methods. We particularly focus on our proposed method's independence from existing routing protocols, which makes it a potential candidate to remove routing protocol(s) from future wired/ wireless networks."}
{"_id": "24f01f041a3bbc5ffa8e013398808e6ac2b63763", "title": "Development of a 6-DOF manipulator actuated with a straight-fiber-type artificial muscle", "text": "Robots have become an integral part of human life, and the relationship between humans and robots has grown closer. Thus, it is desired that robots have characteristics similar to humans. In this context, we paid attention to an artificial muscle actuator. We used straight-fiber-type artificial muscles, derived from the McKibben type, which have excellent characteristics with respect to the contraction rate and force. We developed a 6-DOF manipulator actuated by a straight fiber artificial muscle. Furthermore, we tried to control the manipulator position by considering its characteristics."}
{"_id": "5c642f42f7c4057a63de64a9f9c1feeb9eac6a50", "title": "From smart to smarter cities: Bridging the dimensions of technology and urban planning", "text": "This paper discusses the importance of urban form in smart cities. Development of urban form has been a major concern for urban planners, designers and policy makers. Sprawl is one way of urban development which is not considered good for a better living standard. Compact form arose as an opposed idea to urban sprawl. Form based codes (FBCs) is a tool for urban planning and design that attempts to mitigate the problem of urban sprawl, whereas conventional zoning strengthens this. This paper highlights the importance of physical place in smart city and how FBCs attempt to create better physical place than conventional zoning. Our study shows that FBCs can lead to smart growth which can be a solution to bridge technology and urban planning together in a single platform."}
{"_id": "118e54f8f100564d0309c97e1062b2f7809186e8", "title": "How to learn a graph from smooth signals", "text": "We propose a framework that learns the graph structure underlying a set of smooth signals. Given X \u2208 Rm\u00d7n whose rows reside on the vertices of an unknown graph, we learn the edge weights w \u2208 R + under the smoothness assumption that tr ( X>LX ) is small. We show that the problem is a weighted `-1 minimization that leads to naturally sparse solutions. We point out how known graph learning or construction techniques fall within our framework and propose a new model that performs better than the state of the art in many settings. We present efficient, scalable primal-dual based algorithms for both our model and the previous state of the art, and evaluate their performance on artificial and real data."}
{"_id": "557e5e38a4c5b95e2bc86f491b03e5c8c7add857", "title": "Thin-Slicing for Pose: Learning to Understand Pose without Explicit Pose Estimation", "text": "We address the problem of learning a pose-aware, compact embedding that projects images with similar human poses to be placed close-by in the embedding space. The embedding function is built on a deep convolutional network, and trained with triplet-based rank constraints on real image data. This architecture allows us to learn a robust representation that captures differences in human poses by effectively factoring out variations in clothing, background, and imaging conditions in the wild. For a variety of pose-related tasks, the proposed pose embedding provides a cost-efficient and natural alternative to explicit pose estimation, circumventing challenges of localizing body joints. We demonstrate the efficacy of the embedding on pose-based image retrieval and action recognition problems."}
{"_id": "fd50fa6954e1f6f78ca66f43346e7e86b196b137", "title": "Regions, Periods, Activities: Uncovering Urban Dynamics via Cross-Modal Representation Learning", "text": "With the ever-increasing urbanization process, systematically modeling people\u2019s activities in the urban space is being recognized as a crucial socioeconomic task. This task was nearly impossible years ago due to the lack of reliable data sources, yet the emergence of geo-tagged social media (GTSM) data sheds new light on it. Recently, there have been fruitful studies on discovering geographical topics from GTSM data. However, their high computational costs and strong distributional assumptions about the latent topics hinder them from fully unleashing the power of GTSM. To bridge the gap, we present CrossMap, a novel crossmodal representation learning method that uncovers urban dynamics with massive GTSM data. CrossMap first employs an accelerated mode seeking procedure to detect spatiotemporal hotspots underlying people\u2019s activities. Those detected hotspots not only address spatiotemporal variations, but also largely alleviate the sparsity of the GTSM data. With the detected hotspots, CrossMap then jointly embeds all spatial, temporal, and textual units into the same space using two different strategies: one is reconstructionbased and the other is graph-based. Both strategies capture the correlations among the units by encoding their cooccurrence and neighborhood relationships, and learn lowdimensional representations to preserve such correlations. Our experiments demonstrate that CrossMap not only significantly outperforms state-of-the-art methods for activity recovery and classification, but also achieves much better efficiency."}
{"_id": "57b7af14a6aff0a942755abd3a935bf18f19965b", "title": "A 3.4 W Digital-In Class-D Audio Amplifier in 0.14 $\\mu $m CMOS", "text": "In this paper a class-D audio amplifier for mobile applications is presented realized in a 0.14 \u03bcm CMOS technology tailored for mobile applications. The amplifier has a simple PDM-based digital interface for audio and control that requires only two pins and enables assembly in 9-bump WL-CSP. The complete audio path is discussed that consists of a Parser, Digital PWM controller, 1-bit DA-converters, analog feedback loop and the Class-D power stage. A reconfigurable gate driver is used that reduces quiescent current consumption and radiated emission."}
{"_id": "ce8d99e5b270d15dc09422c08c500c5d86ed3703", "title": "A new paradigm of human gait analysis with Kinect", "text": "Analysis of human gait helps to find an intrinsic gait signature through which ubiquitous human identification and medical disorder problems can be investigated in a broad spectrum. The gait biometric provides an unobtrusive feature by which video gait data can be captured at a larger distance without prior awareness of the subject. In this paper, a new technique has been addressed to study the human gait analysis with Kinect Xbox device. It ensures us to minimize the segmentation errors with automated background subtraction technique. The closely similar human skeleton model can be generated from background subtracted gait images, altered by covariate conditions, such as change in walking speed and variations in clothing type. The gait signatures are captured from joint angle trajectories of left hip, left knee, right hip and right knee of subject's skeleton model. The experimental verification on Kinect gait data has been compared with our in-house development of sensor based biometric suit, Intelligent Gait Oscillation Detector (IGOD). An endeavor has been taken to investigate whether this sensor based biometric suit can be altered with a Kinect device for the proliferation of robust gait identification system. The Fisher discriminant analysis has been applied on training gait signature to look into the discriminatory power of feature vector. The Nai\u0308ve Bayesian classifier demonstrates an encouraging classification result with estimation of errors on limited dataset captured by Kinect sensor."}
{"_id": "484036c238df3645c038546df722d80a0fd4f642", "title": "Employee Engagement From a Self-Determination Theory Perspective", "text": "Macey and Schneider (2008) draw on numerous theories to explain what engagement is and how it is similar to and different from related constructs in the organizational behavior literature. As a result, we now have a better understanding of some of the key \u2018\u2018components\u2019\u2019 of engagement. What appears to be missing, however, is a strong unifying theory to guide research and practice. We believe that such a theory exists in the form of self-determination theory (SDT; Deci & Ryan, 1985; Ryan & Deci, 2000) and its various corollaries, self-concordance theory (SCT; Sheldon & Elliot, 1999), hierarchical theory (Vallerand, 1997), and passion theory (Vallerand et al., 2003). Although Macey and Schneider acknowledged the relevance of SDT and SCT, we believe that much greater use of these theories could be made to justify and extend their conceptual model."}
{"_id": "3f51289c8d4246525bc27be17c4e69b924e8ad1c", "title": "Cyber and Physical Security Vulnerability Assessment for IoT-Based Smart Homes", "text": "The Internet of Things (IoT) is an emerging paradigm focusing on the connection of devices, objects, or \"things\" to each other, to the Internet, and to users. IoT technology is anticipated to become an essential requirement in the development of smart homes, as it offers convenience and efficiency to home residents so that they can achieve better quality of life. Application of the IoT model to smart homes, by connecting objects to the Internet, poses new security and privacy challenges in terms of the confidentiality, authenticity, and integrity of the data sensed, collected, and exchanged by the IoT objects. These challenges make smart homes extremely vulnerable to different types of security attacks, resulting in IoT-based smart homes being insecure. Therefore, it is necessary to identify the possible security risks to develop a complete picture of the security status of smart homes. This article applies the operationally critical threat, asset, and vulnerability evaluation (OCTAVE) methodology, known as OCTAVE Allegro, to assess the security risks of smart homes. The OCTAVE Allegro method focuses on information assets and considers different information containers such as databases, physical papers, and humans. The key goals of this study are to highlight the various security vulnerabilities of IoT-based smart homes, to present the risks on home inhabitants, and to propose approaches to mitigating the identified risks. The research findings can be used as a foundation for improving the security requirements of IoT-based smart homes."}
{"_id": "f9f5f97f3ac2c4d071bbf6cff1ee2d6ceaf9338b", "title": "Handshaking, gender, personality, and first impressions.", "text": "Although people's handshakes are thought to reflect their personality and influence our first impressions of them, these relations have seldom been formally investigated. One hundred twelve participants had their hand shaken twice by 4 trained coders (2 men and 2 women) and completed 4 personality measures. The participants' handshakes were stable and consistent across time and coders. There were also gender differences on most of the handshaking characteristics. A firm handshake was related positively to extraversion and emotional expressiveness and negatively to shyness and neuroticism; it was also positively related to openness to experience, but only for women. Finally, handshake characteristics were related to the impressions of the participants formed by the coders. These results demonstrate that personality traits, assessed through self-report, can predict specific behaviors assessed by trained observers. The pattern of relations among openness, gender, handshaking, and first impressions suggests that a firm handshake may be an effective form of self-promotion for women."}
{"_id": "5c6f2becfb309e5b1323857bddc2c8fbc0c0ace5", "title": "Large-Scale Network Dysfunction in Major Depressive Disorder: A Meta-analysis of Resting-State Functional Connectivity.", "text": "IMPORTANCE\nMajor depressive disorder (MDD) has been linked to imbalanced communication among large-scale brain networks, as reflected by abnormal resting-state functional connectivity (rsFC). However, given variable methods and results across studies, identifying consistent patterns of network dysfunction in MDD has been elusive.\n\n\nOBJECTIVE\nTo investigate network dysfunction in MDD through a meta-analysis of rsFC studies.\n\n\nDATA SOURCES\nSeed-based voxelwise rsFC studies comparing individuals with MDD with healthy controls (published before June 30, 2014) were retrieved from electronic databases (PubMed, Web of Science, and EMBASE) and authors contacted for additional data.\n\n\nSTUDY SELECTION\nTwenty-seven seed-based voxel-wise rsFC data sets from 25 publications (556 individuals with MDD and 518 healthy controls) were included in the meta-analysis.\n\n\nDATA EXTRACTION AND SYNTHESIS\nCoordinates of seed regions of interest and between-group effects were extracted. Seeds were categorized into seed-networks by their location within a priori functional networks. Multilevel kernel density analysis of between-group effects identified brain systems in which MDD was associated with hyperconnectivity (increased positive or reduced negative connectivity) or hypoconnectivity (increased negative or reduced positive connectivity) with each seed-network.\n\n\nRESULTS\nMajor depressive disorder was characterized by hypoconnectivity within the frontoparietal network, a set of regions involved in cognitive control of attention and emotion regulation, and hypoconnectivity between frontoparietal systems and parietal regions of the dorsal attention network involved in attending to the external environment. Major depressive disorder was also associated with hyperconnectivity within the default network, a network believed to support internally oriented and self-referential thought, and hyperconnectivity between frontoparietal control systems and regions of the default network. Finally, the MDD groups exhibited hypoconnectivity between neural systems involved in processing emotion or salience and midline cortical regions that may mediate top-down regulation of such functions.\n\n\nCONCLUSIONS AND RELEVANCE\nReduced connectivity within frontoparietal control systems and imbalanced connectivity between control systems and networks involved in internal or external attention may reflect depressive biases toward internal thoughts at the cost of engaging with the external world. Meanwhile, altered connectivity between neural systems involved in cognitive control and those that support salience or emotion processing may relate to deficits regulating mood. These findings provide an empirical foundation for a neurocognitive model in which network dysfunction underlies core cognitive and affective abnormalities in depression."}
{"_id": "582ea307db25c5764e7d2ed82c4846757f4e95d7", "title": "Greedy Function Approximation : A Gradient Boosting Machine", "text": "Function approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest{descent minimization. A general gradient{descent \\boosting\" paradigm is developed for additive expansions based on any tting criterion. Speci c algorithms are presented for least{squares, least{absolute{deviation, and Huber{M loss functions for regression, and multi{class logistic likelihood for classi cation. Special enhancements are derived for the particular case where the individual additive components are decision trees, and tools for interpreting such \\TreeBoost\" models are presented. Gradient boosting of decision trees produces competitive, highly robust, interpretable procedures for regression and classi cation, especially appropriate for mining less than clean data. Connections between this approach and the boosting methods of Freund and Shapire 1996, and Friedman, Hastie, and Tibshirani 1998 are discussed. 1 Function estimation In the function estimation problem one has a system consisting of a random \\output\" or \\response\" variable y and a set of random \\input\" or \\explanatory\" variables x = fx1; ; xng. Given a \\training\" sample fyi;xig N 1 of known (y;x){values, the goal is to nd a function F (x) that maps x to y, such that over the joint distribution of all (y;x){values, the expected value of some speci ed loss function (y; F (x)) is minimized F (x) = argmin F (x) Ey;x (y; F (x)) = argmin F (x) Ex [Ey( (y; F (x)) jx] : (1) Frequently employed loss functions (y; F ) include squared{error (y F ) and absolute error jy F j for y 2 R (regression), and negative binomial log{likelihood, log(1 + e 2yF ), when y 2 f 1; 1g (classi cation). A common procedure is to take F (x) to be a member of a parameterized class of functions F (x;P), where P = fP1; P2; g is a set of parameters. In this paper we focus on \\additive\" expansions of the form"}
{"_id": "561f750a5a168283b14271abfb4331566f1813f4", "title": "FPGA implementations of fast fourier transforms for real-time signal and image processing", "text": "Applications based on Fast Fourier Transform (FFT) such as signal and image processing require high computational power, plus the ability to experiment with algorithms. Reconfigurable hardware devices in the form of Field Programmable Gate Arrays (FPGAs) have been proposed as a way of obtaining high performance at an economical price. At present, however, users must program FPGAs at a very low level and have a detailed knowledge of the architecture of the device being used. To try to reconcile the dual requirements of high performance and ease of development, this paper reports on the design and realisation of a High Level framework for the implementation of 1-D and 2-D FFTs for real-time applications. Results show that the parallel implementation of 2-D FFT achieves virtually linear speed-up and real-time performance for large matrix sizes. Finally, an FPGA-based parametrisable environment based on the developed parallel 2-D FFT architecture is presented as a solution for frequencydomain image filtering application."}
{"_id": "7911ab33185a881a673e2c9218ec2e7ebf86cf62", "title": "Priors for people tracking from small training sets", "text": "We advocate the use of scaled Gaussian process latent variable models (SGPLVM) to learn prior models of 3D human pose for 3D people tracking. The SGPLVM simultaneously optimizes a low-dimensional embedding of the high-dimensional pose data and a density function that both gives higher probability to points close to training data and provides a nonlinear probabilistic mapping from the low-dimensional latent space to the full-dimensional pose space. The SGPLVM is a natural choice when only small amounts of training data are available. We demonstrate our approach with two distinct motions, golfing and walking. We show that the SGPLVM sufficiently constrains the problem such that tracking can be accomplished with straightforward deterministic optimization."}
{"_id": "70e16c6565e6e9debbeea09c1631d88cc52b807e", "title": "Three dual polarized 2.4GHz microstrip patch antennas for active antenna and in-band full duplex applications", "text": "This paper presents the design, implementation and interport isolation performance evaluation of three dual port, dual polarized 2.4GHz microstrip patch antennas. Input matching and interport isolation (both DC and RF) performance of implemented antennas has been compared by measuring the input return losses (Sii, S22) and interport isolation (S12) respectively at 2.4GHz. Two implemented single layer antennas provide around 40dB RF interport isolation while the multilayer antenna has 60dB isolation between transmit and receive ports at centre frequency with DC isolated ports. The multilayer antenna provides more than 55dB interport isolation for antenna's 10 dB input impedance bandwidth of 50MHz."}
{"_id": "1aa5a8ad5b7031ba39e1dc0537484694364a1312", "title": "Evaluating Color Descriptors for Object and Scene Recognition", "text": "Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been proposed. Because many different descriptors exist, a structured overview is required of color invariant descriptors in the context of image category recognition. Therefore, this paper studies the invariance properties and the distinctiveness of color descriptors (software to compute the color descriptors from this paper is available from http://www.colordescriptors.com) in a structured way. The analytical invariance properties of color descriptors are explored, using a taxonomy based on invariance properties with respect to photometric transformations, and tested experimentally using a data set with known illumination conditions. In addition, the distinctiveness of color descriptors is assessed experimentally using two benchmarks, one from the image domain and one from the video domain. From the theoretical and experimental results, it can be derived that invariance to light intensity changes and light color changes affects category recognition. The results further reveal that, for light intensity shifts, the usefulness of invariance is category-specific. Overall, when choosing a single descriptor and no prior knowledge about the data set and object and scene categories is available, the OpponentSIFT is recommended. Furthermore, a combined set of color descriptors outperforms intensity-based SIFT and improves category recognition by 8 percent on the PASCAL VOC 2007 and by 7 percent on the Mediamill Challenge."}
{"_id": "21cbb7e58e8f3e5912b32df3e75b004e5a0c00cc", "title": "Automatic annotation of human actions in video", "text": "This paper addresses the problem of automatic temporal annotation of realistic human actions in video using minimal manual supervision. To this end we consider two associated problems: (a) weakly-supervised learning of action models from readily available annotations, and (b) temporal localization of human actions in test videos. To avoid the prohibitive cost of manual annotation for training, we use movie scripts as a means of weak supervision. Scripts, however, provide only implicit, noisy, and imprecise information about the type and location of actions in video. We address this problem with a kernel-based discriminative clustering algorithm that locates actions in the weakly-labeled training data. Using the obtained action samples, we train temporal action detectors and apply them to locate actions in the raw video data. Our experiments demonstrate that the proposed method for weakly-supervised learning of action models leads to significant improvement in action detection. We present detection results for three action classes in four feature length movies with challenging and realistic video data."}
{"_id": "4fa2b00f78b2a73b63ad014f3951ec902b8b24ae", "title": "Semi-supervised hashing for scalable image retrieval", "text": "Large scale image search has recently attracted considerable attention due to easy availability of huge amounts of data. Several hashing methods have been proposed to allow approximate but highly efficient search. Unsupervised hashing methods show good performance with metric distances but, in image search, semantic similarity is usually given in terms of labeled pairs of images. There exist supervised hashing methods that can handle such semantic similarity but they are prone to overfitting when labeled data is small or noisy. Moreover, these methods are usually very slow to train. In this work, we propose a semi-supervised hashing method that is formulated as minimizing empirical error on the labeled data while maximizing variance and independence of hash bits over the labeled and unlabeled data. The proposed method can handle both metric as well as semantic similarity. The experimental results on two large datasets (up to one million samples) demonstrate its superior performance over state-of-the-art supervised and unsupervised methods."}
{"_id": "6a7c63a73724c0ca68b1675e256bb8b9a35c94f4", "title": "Investigating Causal Relations by Econometric Models and Cross-spectral Methods", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/econosoc.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission."}
{"_id": "d71c84fbaff0d7f2cbcf2f03630e189c58939a4a", "title": "Semantic analysis of soccer video using dynamic Bayesian network", "text": "Video semantic analysis is formulated based on the low-level image features and the high-level knowledge which is encoded in abstract, nongeometric representations. This paper introduces a semantic analysis system based on Bayesian network (BN) and dynamic Bayesian network (DBN). It is validated in the particular domain of soccer game videos. Based on BN/DBN, it can identify the special events in soccer games such as goal event, corner kick event, penalty kick event, and card event. The video analyzer extracts the low-level evidences, whereas the semantic analyzer uses BN/DBN to interpret the high-level semantics. Different from previous shot-based semantic analysis approaches, the proposed semantic analysis is frame-based for each input frame, it provides the current semantics of the event nodes as well as the hidden nodes. Another contribution is that the BN and DBN are automatically generated by the training process instead of determined by ad hoc. The last contribution is that we introduce a so-called temporal intervening network to improve the accuracy of the semantics output"}
{"_id": "430fb2aab098ea8ba737d1f1b98a31344d152308", "title": "A Flexible Model-Driven Game Development Approach", "text": "Game developers are facing an increasing demand for new games every year. Game development tools can be of great help, but require highly specialized professionals. Also, just as any software development effort, game development has some challenges. Model-Driven Game Development (MDGD) is suggested as a means to solve some of these challenges, but with a loss in flexibility. We propose a MDGD approach that combines multiple domain-specific languages (DSLs) with design patterns to provide flexibility and allow generated code to be integrated with manual code. After experimentation, we observed that, with the approach, less experienced developers can create games faster and more easily, and the product of code generation can be customized with manually written code, providing flexibility. However, with MDGD, developers become less familiar with the code, making manual codification more difficult."}
{"_id": "e83c5de4aa7ab7cd71c5fe39e64c828fba6ae645", "title": "Matchmaking in multi-player on-line games: studying user traces to improve the user experience", "text": "Designing and implementing a quality matchmaking service for Multiplayer Online Games requires an extensive knowledge of the habits, behaviors and expectations of the players. Gathering and analyzing traces of real games offers insight on these matters, but game server providers are very protective of such data in order to deter possible reuse by the competition and to prevent cheating. We circumvented this issue by gathering public data from a League of Legends server (information over more than 28 million game sessions). In this paper, we present our database which is freely available online, and we detail the analysis and conclusions we draw from this data regarding the expected requirements for the matchmaking service."}
{"_id": "d982c24c969f5b4042dabc4469c4842462d7b0dc", "title": "Knowledge sharing in organisational contexts: a motivation-based perspective", "text": "Purpose \u2013 Facilitating knowledge sharing within organisations is a difficult task: the willingness of individuals to share and integrate their knowledge is one of the central barriers. This paper aims to develop a motivation-based perspective to explore how organisations resolve the social dilemma of knowledge sharing. Design/methodology/approach \u2013 The analysis builds on a three-category taxonomy of motivation, adding \u2018\u2018hedonic\u2019\u2019 motivation to the traditional dichotomy of \u2018\u2018extrinsic\u2019\u2019 and \u2018\u2018intrinsic\u2019\u2019 motivation. It uses case studies gleaned from the literature to explore the interactive effects between the different motivators in two different types of knowledge-intensive organisations: professional bureaucracy and operating adhocracy. Findings \u2013 Within a professional bureaucracy, the social dilemma of knowledge sharing may be overcome through normative motivation, with provision of hedonic motivation through extrinsic incentives such as training and career progression. In an operating adhocracy where interdependent teamwork is vital, it may be overcome through normative alignment reinforced by intensive socialisation. Extrinsic motivators that align with hedonic motivation may also reinforce the propensity for knowledge sharing. In both organisational types, financial extrinsic incentives do not appear to be relevant on their own, and may \u2018\u2018crowd out\u2019\u2019 other motivators. Research limitations/implications \u2013 The cases reported were chosen from the existing literature and, although many were not designed specifically to address motivational issues, suggestive conclusions are drawn. Most of the cases were drawn from organisations rooted in the Anglo-American context and thus care would be needed in generalising the findings to organisations in other contexts. Originality/value \u2013 The paper represents the first attempt to apply a three-category taxonomy of motivation to examine knowledge-sharing behaviour in organisations. It highlights the interaction between the different motivators and provides a basis to integrate further the work of social psychologists and socio-economists on incentives and motivation in the context of knowledge sharing."}
{"_id": "8eca169f19425c76fa72078824e6a91a5b37f470", "title": "A versatile FMCW Radar System Simulator for Millimeter-Wave Applications", "text": "For the successful design of low-cost and high-performance radar systems accurate and efficient system simulation is a key requirement. In this paper we present a new versatile simulation environment for frequency-modulated continuous-wave radar systems. Besides common hardware simulation it covers integrated system simulation and concept analysis from signal synthesis to baseband. It includes a flexible scenario generator, accurate noise modeling, and efficiently delivers simulation data for development and testing of signal processing algorithms. A comparison of simulations and measurement results for an integrated 77-GHz radar prototype shows the capabilities of the simulator on two different scenarios."}
{"_id": "276bf4c952671f6b435081924b955717ce1dc78a", "title": "Automatic multilabel classification for Indonesian news articles", "text": "Problem transformation and algorithm adaptation are the two main approaches in machine learning to solve multilabel classification problem. The purpose of this paper is to investigate both approaches in multilabel classification for Indonesian news articles. Since this classification deals with a large number of features, we also employ some feature selection methods to reduce feature dimension. There are four factors as the focuses of this paper, i.e., feature weighting method, feature selection method, multilabel classification approach, and single-label classification algorithm. These factors will be combined to determine the best combination. The experiments show that the best performer for multilabel classification of Indonesian news articles is the combination of TF-IDF feature weighting method, Symmetrical Uncertainty feature selection method, Calibrated Label Ranking - which belongs to problem transformation approach -, and SVM algorithm. This best combination achieves F-measure of 85.13% in 10-fold cross-validation, but the F-measure decreases to 76.73% in testing because of OOV."}
{"_id": "1a8a32e1946cc8f5f45c7b5f121cdf0a2ac08eff", "title": "Topic Models for Mortality Modeling in Intensive Care Units", "text": "Mortality prediction is an important problem in the intensive care unit (ICU) because it is helpful for understanding patients\u2019 evolving severity, quality of care, and comparing treatments. Most ICU mortality models primarily consider structured data and physiological waveforms (Le Gall et al., 1993). An important limitation of these structured data approaches is that they miss a lot of vital information captured in providers\u2019 free text notes and reports. In this paper, we propose an approach to mortality prediction that incorporates the information from free text notes using topic modeling."}
{"_id": "71337276460b50a2cb37959a2d843e593dc4fdcc", "title": "A non-isolated three-port converter for stand-alone renewable power system", "text": "A novel non-isolated three-port converter (NI-TPC) is proposed interfacing one PV port, one bidirectional battery port and one load port. Single stage power conversion between any two of the three ports is achieved. The topology is derived by decoupling the bidirectional power flow path of the conventional structure into two unidirectional ones. Two of the three ports can be tightly regulated to achieve maximum power harvesting for PV or charge control for battery, and maintain the load voltage constant at the same time, while the third port is left flexible to compensate the power imbalance of the converter. Operation states are analyzed. The multi-regulator competition control strategy is presented to achieve autonomous and smooth state switching when the PV input power fluctuates. The analysis is verified by the experimental results."}
{"_id": "9fa1eadcbc10b91d91d1c5aa669631e6388aa7b4", "title": "A Framework for Coordinated Surface Operations Planning at Dallas-Fort Worth International Airport", "text": "An Integer Programming formulation is developed for optimizing surface operations at Dallas-Fort Worth airport, with the goal of assessing the potential benefits of taxi route planning. The model is based on operations in the eastern half of the airport under the most frequently used configuration. The focus is on operational concepts that optimize taxi routes by utilizing different control points on the airport surface. The benefits of two different concepts for optimizing taxiway operations, namely controlled pushback and taxi reroutes are analyzed, for both current data and a projected data set with approximately twice the traffic density. The analysis estimates that: (1) for current traffic densities, controlled pushback would reduce the average departure taxi time by 17% without altering runway schedule conformance, while the benefits of taxi reroutes would be minimal; and (2) for high-density operations, controlled pushback would reduce the average departure taxi time by 18%, while incorporating taxi reroutes would reduce the average arrival taxi time by 14%. Other benefits analyzed for these control strategies include a decrease in the average time spent in runway crossing queues."}
{"_id": "835e510fcf22b4b9097ef51b8d0bb4e7b806bdfd", "title": "Unsupervised Learning of Sequence Representations by Autoencoders", "text": "Sequence data is challenging for machine learning approaches, because the lengths of the sequences may vary between samples. In this paper, we present an unsupervised learning model for sequence data, called the Integrated Sequence Autoencoder (ISA), to learn a fixed-length vectorial representation by minimizing the reconstruction error. Specifically, we propose to integrate two classical mechanisms for sequence reconstruction which takes into account both the global silhouette information and the local temporal dependencies. Furthermore, we propose a stop feature that serves as a temporal stamp to guide the reconstruction process, which results in a higher-quality representation. The learned representation is able to effectively summarize not only the apparent features, but also the underlying and high-level style information. Take for example a speech sequence sample: our ISA model can not only recognize the spoken text (apparent feature), but can also discriminate the speaker who utters the audio (more high-level style). One promising application of the ISA model is that it can be readily used in the semi-supervised learning scenario, in which a large amount of unlabeled data is leveraged to extract high-quality sequence representations and thus to improve the performance of the subsequent supervised learning tasks on limited labeled data."}
{"_id": "617c3919c78d7cc0f596a17ea149eab1bf651c6f", "title": "A practical path loss model for indoor WiFi positioning enhancement", "text": "Positioning within a local area refers to technology whereby each node is self-aware of its position. Based on empirical study, this paper proposes an enhancement to the path loss model in the indoor environment for improved accuracy in the relationship between distance and received signal strength. We further demonstrate the potential of our model for the WiFi positioning system, where the mean errors in the distance estimation are 2.3 m and 2.9 m for line of sight and non line of sight environments, respectively."}
{"_id": "cb7ff548490bbdbb3da4cc6eab8a3429f61f618d", "title": "xLED: Covert Data Exfiltration from Air-Gapped Networks via Router LEDs", "text": "In this paper we show how attackers can covertly leak data (e.g., encryption keys, passwords and files) from highly secure or air-gapped networks via the row of status LEDs that exists in networking equipment such as LAN switches and routers. Although it is known that some network equipment emanates optical signals correlated with the information being processed by the device ('side-channel'), intentionally controlling the status LEDs to carry any type of data ('covert-channel') has never studied before. A malicious code is executed on the LAN switch or router, allowing full control of the status LEDs. Sensitive data can be encoded and modulated over the blinking of the LEDs. The generated signals can then be recorded by various types of remote cameras and optical sensors. We provide the technical background on the internal architecture of switches and routers (at both the hardware and software level) which enables this type of attack. We also present amplitude and frequency based modulation and encoding schemas, along with a simple transmission protocol. We implement a prototype of an exfiltration malware and discuss its design and implementation. We evaluate this method with a few routers and different types of LEDs. In addition, we tested various receivers including remote cameras, security cameras, smartphone cameras, and optical sensors, and also discuss different detection and prevention countermeasures. Our experiment shows that sensitive data can be covertly leaked via the status LEDs of switches and routers at a bit rates of 10 bit/sec to more than 1Kbit/sec per LED. Keywords\u2014 ezfiltration; air-gap; network; optical; covertchannel (key words)"}
{"_id": "65bc85b79306de2ffe607b71d21f3ef240c1d005", "title": "Steganalysis of QIM Steganography in Low-Bit-Rate Speech Signals", "text": "Steganalysis of the quantization index modulation QIM steganography in a low-bit-rate encoded speech stream is conducted in this research. According to the speech generation theory and the phoneme distribution properties in language, we first point out that the correlation characteristics of split vector quantization VQ codewords of linear predictive coding filter coefficients are changed after the QIM steganography. Based on this observation, we construct a model called the Quantization codeword correlation network QCCN based on split VQ codeword from adjacent speech frames. Furthermore, the QCCN model is pruned to yield a stronger correlation network. After quantifying the correlation characteristics of vertices in the pruned correlation network, we obtain feature vectors that are sensitive to steganalysis. Finally, we build a high-performance detector using the support vector machine SVM classifier. It is shown by experimental results that the proposed QCCN steganalysis method can effectively detect the QIM steganography in encoded speech stream when it is applied to low-bit-rate speech codec such as G.723.1 and G.729."}
{"_id": "fbcef6f098edb22a183a8161628beec6ff234ac5", "title": "Relation Extraction : A Survey", "text": "With the advent of the Internet, large amount of digital text is generated everyday in the form of news articles, research publications, blogs, question answering forums and social media. It is important to develop techniques for extracting information automatically from these documents, as lot of important information is hidden within them. This extracted information can be used to improve access and management of knowledge hidden in large text corpora. Several applications such as Question Answering, Information Retrieval would benefit from this information. Entities like persons and organizations, form the most basic unit of the information. Occurrences of entities in a sentence are often linked through well-defined relations; e.g., occurrences of person and organization in a sentence may be linked through relations such as employed at. The task of Relation Extraction (RE) is to identify such relations automatically. In this paper, we survey several important supervised, semi-supervised and unsupervised RE techniques. We also cover the paradigms of Open Information Extraction (OIE) and Distant Supervision. Finally, we describe some of the recent trends in the RE techniques and possible future research directions. This survey would be useful for three kinds of readers i) Newcomers in the field who want to quickly learn about RE; ii) Researchers who want to know how the various RE techniques evolved over time and what are possible future research directions and iii) Practitioners who just need to know which RE technique works best in various settings."}
{"_id": "4172f558219b94f850c6567f93fa60dee7e65139", "title": "The Treatment of Missing Values and its Effect on Classifier Accuracy", "text": "The presence of missing values in a dataset can affect the performance of a classifier constructed using that dataset as a training sample. Several methods have been proposed to treat missing data and the one used most frequently deletes instances containing at least one missing value of a feature. In this paper we carry out experiments with twelve datasets to evaluate the effect on the misclassification error rate of four methods for dealing with missing values: the case deletion method, mean imputation, median imputation, and the KNN imputation procedure. The classifiers considered were the Linear Discriminant Analysis (LDA) and the KNN classifier. The first one is a parametric classifier whereas the second one is a non parametric classifier."}
{"_id": "c4f3d64454d7166f9c2395dbb550bbfe1ab2b0cc", "title": "Designing Media Architecture: Tools and Approaches for Addressing the Main Design Challenges", "text": "Media Architecture is reaching a level of maturity at which we can identify tools and approaches for addressing the main challenges for HCI practitioners working in this field. While previous influential contributions within Media Architecture have identified challenges for designers and offered case studies of specific approaches, here, we (1) provide guidance on how to tackle the domain-specific challenges of Media Architecture design -- pertaining to the interface, integration, content, context, process, prototyping, and evaluation -- on the basis of the development of numerous installations over the course of seven years, and thorough studies of related work, and (2) present five categories of tools and approaches -- software tools, projection, 3D models, hardware prototyping, and evaluation tools -- developed to address these challenges in practice, exemplified through six concrete examples from real-life cases."}
{"_id": "77fea2c3831cb690b333d997be48f816dc5be3b6", "title": "More is Less: On the End-to-End Security of Group Chats in Signal, WhatsApp, and Threema", "text": "Secure instant messaging is utilized in two variants: one-to-one communication and group communication. While the first variant has received much attention lately (Frosch et al., EuroS Cohn-Gordon et al., EuroS Kobeissi et al., EuroS&P17), little is known about the cryptographic mechanisms and security guarantees of secure group communication in instant messaging. To approach an investigation of group instant messaging protocols, we first provide a comprehensive and realistic security model. This model combines security and reliability goals from various related literature to capture relevant properties for communication in dynamic groups. Thereby the definitions consider their satisfiability with respect to the instant delivery of messages. To show its applicability, we analyze three widely used real-world protocols: Signal, WhatsApp, and Threema. By applying our model, we reveal several shortcomings with respect to the security definition. Therefore we propose generic countermeasures to enhance the protocols regarding the required security and reliability goals. Our systematic analysis reveals that (1) the communications' integrity \u2013 represented by the integrity of all exchanged messages \u2013 and (2) the groups' closeness \u2013 represented by the members' ability of managing the group \u2013 are not end-to-end protected. We additionally show that strong security properties, such as Future Secrecy which is a core part of the one-to-one communication in the Signal protocol, do not hold for its group communication."}
{"_id": "fc0b575b84412e5bd90700cfe519615eb1f2eab5", "title": "Existential meaning's role in the enhancement of hope and prevention of depressive symptoms.", "text": "The authors confirmed that existential meaning has a unique relationship with and can prospectively predict levels of hope and depressive symptoms within a population of college students. Baseline measures of explicit meaning (i.e., an individual's self-reported experience of a sense of coherence and purpose in life) and implicit meaning (i.e., an individual's self-reported embodiment of the factors that are normatively viewed as comprising a meaningful life) explained significant amounts of variance in hope and depressive symptoms 2 months later beyond the variance explained by baseline levels of hope/depression, neuroticism, conscientiousness, agreeableness, openness to experience, extraversion, and social desirability. The authors discuss implications of these findings for the field of mental health treatment and suggest ways of influencing individuals' experience of existential meaning."}
{"_id": "e87b7eeae1f34148419be37900cf358069a152f1", "title": "Why software fails [software failure]", "text": "Most IT experts agree that software failures occur far more often than they should despite the fact that, for the most part, they are predictable and avoidable. It is unfortunate that most organizations don't see preventing failure as an urgent matter, even though that view risks harming the organization and maybe even destroying it. Because software failure has tremendous implications for business and society, it is important to understand why this attitude persists."}
{"_id": "bea20c34bc3aa04c4c49361197e415b6a09e06b8", "title": "Transfer learning for class imbalance problems with inadequate data", "text": "A fundamental problem in data mining is to effectively build robust classifiers in the presence of skewed data distributions. Class imbalance classifiers are trained specifically for skewed distribution datasets. Existing methods assume an ample supply of training examples as a fundamental prerequisite for constructing an effective classifier. However, when sufficient data are not readily available, the development of a representative classification algorithm becomes even more difficult due to the unequal distribution between classes. We provide a unified framework that will potentially take advantage of auxiliary data using a transfer learning mechanism and simultaneously build a robust classifier to tackle this imbalance issue in the presence of few training samples in a particular target domain of interest. Transfer learning methods use auxiliary data to augment learning when training examples are not sufficient and in this paper we will develop a method that is optimized to simultaneously augment the training data and induce balance into skewed datasets. We propose a novel boosting-based instance transfer classifier with a label-dependent update mechanism that simultaneously compensates for class imbalance and incorporates samples from an auxiliary domain to improve classification. We provide theoretical and empirical validation of our method and apply to healthcare and text classification applications."}
{"_id": "3020dea141c2c81e9eb6e0a1a6c05740dc2ff30f", "title": "Player Type Models: Towards Empirical Validation", "text": "Player type models -- such as the BrainHex model -- are popular approaches for personalizing digital games towards individual preferences of players. Although several player type models have been developed and are currently used in game design projects, there is still a lack of data on their validity. To close this research gap we currently investigate the psychometric properties (factor structure, reliability, stability) and predictive validity (if player type scores can predict player experience) of the player type model BrainHex in an ongoing project. Results of two online studies (n1=592, n2=243) show that the psychometric properties of the BrainHex model could be improved. We suggest to improve the according questionnaire and sketch how the predictive validity could be investigated in future studies."}
{"_id": "fc7e37bd4d0e79ab0df0ee25260a180984e2a456", "title": "Scalable Metadata Management Techniques for Ultra-Large Distributed Storage Systems - A Systematic Review", "text": "The provisioning of an efficient ultra-large scalable distributed storage system for expanding cloud applications has been a challenging job for researchers in academia and industry. In such an ultra-large-scale storage system, data are distributed on multiple storage nodes for performance, scalability, and availability. The access to this distributed data is through its metadata, maintained by multiple metadata servers. The metadata carries information about the physical address of data and access privileges. The efficiency of a storage system highly depends on effective metadata management. This research presents an extensive systematic literature analysis of metadata management techniques in storage systems. This research work will help researchers to find the significance of metadata management and important parameters of metadata management techniques for storage systems. Methodical examination of metadata management techniques developed by various industry and research groups is described. The different metadata distribution techniques lead to various taxonomies. Furthermore, the article investigates techniques based on distribution structures and key parameters of metadata management. It also presents strengths and weaknesses of individual existing techniques that will help researchers to select the most appropriate technique for specific applications. Finally, it discusses existing challenges and significant research directions in metadata management for researchers."}
{"_id": "b0449156130e0de3a48fad6ef1149325d48d44f9", "title": "An Integral Model to Provide Reactive and Proactive Services in an Academic CSIRT Based on Business Intelligence", "text": "Cyber-attacks have increased in severity and complexity. That requires, that the CERT/CSIRT research and develops new security tools. Therefore, our study focuses on the design of an integral model based on Business Intelligence (BI), which provides reactive and proactive services in a CSIRT, in order to alert and reduce any suspicious or malicious activity on information systems and data networks. To achieve this purpose, a solution has been assembled, that generates information stores, being compiled from a continuous network transmission of several internal and external sources of an organization. However, it contemplates a data warehouse, which is focused like a correlator of logs, being formed by the information of feeds with diverse formats. Furthermore, it analyzed attack detection and port scanning, obtained from sensors such as Snort and Passive Vulnerability Scanner, which are stored in a database, where the logs have been generated by the systems. With such inputs, we designed and implemented BI systems using the phases of the Ralph Kimball methodology, ETL and OLAP processes. In addition, a software application has been implemented using the SCRUM methodology, which allowed to link the obtained logs to the BI system for visualization in dynamic dashboards, with the purpose of generating early alerts and constructing complex queries using the user interface through objects structures. The results demonstrate, that this solution has generated early warnings based on the level of criticality and level of sensitivity of malware and vulnerabilities as well as monitoring efficiency, increasing the level of security of member institutions."}
{"_id": "f049a8c4d08321b12276208687b6dc35586eecf7", "title": "Extensive Analysis and Prediction of Optimal Inventory levels in supply chain management based on Particle Swarm Optimization Algorithm", "text": "Efficient inventory management is a complex process which entails the management of the inventory in the whole supply chain. The dynamic nature of the excess stock level and shortage level from one period to another is a serious issue. In addition, consideration of multiple products and more supply chain members leads to very complex inventory management process. Moreover, the supply chain cost increases because of the influence of lead times for supplying the stocks as well as the raw materials. A better optimization methodology would consider all these factors in the prediction of the optimal stock levels to be maintained in order to minimize the total supply chain cost. Here, we are proposing an optimization methodology that utilizes the Particle Swarm Optimization algorithm, one of the best optimization algorithms, to overcome the impasse in maintaining the optimal stock levels at each member of the supply chain."}
{"_id": "ac8877b0e87625e26f52ab75e84c534a576b1e77", "title": "CIO and Business Executive Leadership Approaches to Establishing Company-wide Information Orientation", "text": "In the digital world, business executives have a heightened awareness of the strategic importance of information and information management to their companies\u2019 value creation. This presents both leadership opportunities and challenges for CIOs. To prevent the CIO position from being marginalized and to enhance CIOs\u2019 contribution to business value creation, they must move beyond being competent IT utility managers and play an active role in helping their companies build a strong information usage culture. The purpose of this article is to provide a better understanding of the leadership approaches that CIOs and business executives can adopt to improve their companies\u2019 information orientation. Based on our findings from four case studies, we have created a four-quadrant leadership-positioning framework. This framework is constructed from the CIO\u2019s perspective and indicates that a CIO may act as a leader, a follower or a nonplayer in developing the company\u2019s information orientation to achieve its strategic focus. The article concludes with guidelines that CIOs can use to help position their leadership challenges in introducing or sustaining their companies\u2019 information orientation initiatives and recommends specific leadership approaches depending on CIOs\u2019 particular situations."}
{"_id": "fd560f862dad46152e98abcab9ab79451b4324b8", "title": "Applying Machine Learning Techniques to Analysis of Gene Expression Data: Cancer Diagnosis", "text": "Classification of patient samples is a crucial aspect of cancer diagnosis. DNA hybridization arrays simultaneously measure the expression levels of thousands of genes and it has been suggested that gene expression may provide the additional information needed to improve cancer classification and diagnosis. This paper presents methods for analyzing gene expression data to classify cancer types. Machine learning techniques, such as Bayesian networks, neural trees, and radial basis function (RBF) networks, are used for the analysis of CAMDA Data Set 2. These techniques have their own properties including the ability of finding important genes for cancer classification, revealing relationships among genes, and classifying cancer. This paper reports on comparative evaluation of the experimental results of these methods."}
{"_id": "1677bd5219b40cff4db32b03dea230e0afc4896e", "title": "SignalSLAM: Simultaneous localization and mapping with mixed WiFi, Bluetooth, LTE and magnetic signals", "text": "Indoor localization typically relies on measuring a collection of RF signals, such as Received Signal Strength (RSS) from WiFi, in conjunction with spatial maps of signal fingerprints. A new technology for localization could arise with the use of 4G LTE telephony small cells, with limited range but with rich signal strength information, namely Reference Signal Received Power (RSRP). In this paper, we propose to combine an ensemble of available sources of RF signals to build multi-modal signal maps that can be used for localization or for network deployment optimization. We primarily rely on Simultaneous Localization and Mapping (SLAM), which provides a solution to the challenge of building a map of observations without knowing the location of the observer. SLAM has recently been extended to incorporate signal strength from WiFi in the so-called WiFi-SLAM. In parallel to WiFi-SLAM, other localization algorithms have been developed that exploit the inertial motion sensors and a known map of either WiFi RSS or of magnetic field magnitude. In our study, we use all the measurements that can be acquired by an off-the-shelf smartphone and crowd-source the data collection from several experimenters walking freely through a building, collecting time-stamped WiFi and Bluetooth RSS, 4G LTE RSRP, magnetic field magnitude, GPS reference points when outdoors, Near-Field Communication (NFC) readings at specific landmarks and pedestrian dead reckoning based on inertial data. We resolve the location of all the users using a modified version of Graph-SLAM optimization of the users poses with a collection of absolute location and pairwise constraints that incorporates multi-modal signal similarity. We demonstrate that we can recover the user positions and thus simultaneously generate dense signal maps for each WiFi access point and 4G LTE small cell, \u201cfrom the pocket\u201d. Finally, we demonstrate the localization performance using selected single modalities, such as only WiFi and the WiFi signal maps that we generated."}
{"_id": "30f67b7275cec21a94be945dfe4beff08c7e004a", "title": "Simultaneous localization and mapping: part I", "text": "This paper describes the simultaneous localization and mapping (SLAM) problem and the essential methods for solving the SLAM problem and summarizes key implementations and demonstrations of the method. While there are still many practical issues to overcome, especially in more complex outdoor environments, the general SLAM method is now a well understood and established part of robotics. Another part of the tutorial summarized more recent works in addressing some of the remaining issues in SLAM, including computation, feature representation, and data association"}
{"_id": "3303b29b10ce7cd76c799ad0c796521751347f9f", "title": "A solution to the simultaneous localization and map building (SLAM) problem", "text": "The simultaneous localisation and map building (SLAM) problem asks if it is possible for an autonomous vehicle to start in an unknown location in an unknown environment and then to incrementally build a map of this environment while simultaneously using this map to compute absolute vehicle location. Starting from the estimation-theoretic foundations of this problem developed in [1], [2], [3], this paper proves that a solution to the SLAM problem is indeed possible. The underlying structure of the SLAM problem is first elucidated. A proof that the estimated map converges monotonically to a relative map with zero uncertainty is then developed. It is then shown that the absolute accuracy of the map and the vehicle location reach a lower bound defined only by the initial vehicle uncertainty. Together, these results show that it is possible for an autonomous vehicle to start in an unknown location in an unknown environment and, using relative observations only, incrementally build a perfect map of the world and to compute simultaneously a bounded estimate of vehicle location. This paper also describes a substantial implementation of the SLAM algorithm on a vehicle operating in an outdoor environment using millimeter-wave (MMW) radar to provide relative map observations. This implementation is used to demonstrate how some key issues such as map management and data association can be handled in a practical environment. The results obtained are cross-compared with absolute locations of the map landmarks obtained by surveying. In conclusion, this paper discusses a number of key issues raised by the solution to the SLAM problem including sub-optimal map-building algorithms and map management."}
{"_id": "5a6a91287d06016ccce00c1c35e04af5d46c2b16", "title": "Walk detection and step counting on unconstrained smartphones", "text": "Smartphone pedometry offers the possibility of ubiquitous health monitoring, context awareness and indoor location tracking through Pedestrian Dead Reckoning (PDR) systems. However, there is currently no detailed understanding of how well pedometry works when applied to smartphones in typical, unconstrained use.\n This paper evaluates common walk detection (WD) and step counting (SC) algorithms applied to smartphone sensor data. Using a large dataset (27 people, 130 walks, 6 smartphone placements) optimal algorithm parameters are provided and applied to the data. The results favour the use of standard deviation thresholding (WD) and windowed peak detection (SC) with error rates of less than 3%. Of the six different placements, only the back trouser pocket is found to degrade the step counting performance significantly, resulting in undercounting for many algorithms."}
{"_id": "0e278a94018c3bbe464c3747cd155af10dc4fdab", "title": "Analyzing features for activity recognition", "text": "Human activity is one of the most important ingredients of context information. In wearable computing scenarios, activities such as walking, standing and sitting can be inferred from data provided by body-worn acceleration sensors. In such settings, most approaches use a single set of features, regardless of which activity to be recognized. In this paper we show that recognition rates can be improved by careful selection of individual features for each activity. We present a systematic analysis of features computed from a real-world data set and show how the choice of feature and the window length over which the feature is computed affects the recognition rates for different activities. Finally, we give a recommendation of suitable features and window lengths for a set of common activities."}
{"_id": "56e362c661d575b908e8a9f9bbb48f535a9312a5", "title": "On Managing Very Large Sensor-Network Data Using Bigtable", "text": "Recent advances and innovations in smart sensor technologies, energy storage, data communications, and distributed computing paradigms are enabling technological breakthroughs in very large sensor networks. There is an emerging surge of next-generation sensor-rich computers in consumer mobile devices as well as tailor-made field platforms wirelessly connected to the Internet. Billions of such sensor computers are posing both challenges and opportunities in relation to scalable and reliable management of the peta- and exa-scale time series being generated over time. This paper presents a Cloud-computing approach to this issue based on the two well-known data storage and processing paradigms: Bigtable and MapReduce."}
{"_id": "bcfca73fd9a210f9a4c78a0e0ca7e045c5495250", "title": "A Bayesian Nonparametric Approach for Multi-label Classification", "text": "Many real-world applications require multi-label classification where multiple target labels are assigned to each instance. In multi-label classification, there exist the intrinsic correlations between the labels and features. These correlations are beneficial for multi-label classification task since they reflect the coexistence of the input and output spaces that can be exploited for prediction. Traditional classification methods have attempted to reveal these correlations in different ways. However, existing methods demand expensive computation complexity for finding such correlation structures. Furthermore, these approaches can not identify the suitable number of label-feature correlation patterns. In this paper, we propose a Bayesian nonparametric (BNP) framework for multi-label classification that can automatically learn and exploit the unknown number of multi-label correlation. We utilize the recent techniques in stochastic inference to derive the cheap (but efficient) posterior inference algorithm for the model. In addition, our model can naturally exploit the useful information from missing label samples. Furthermore, we extend the model to update parameters in an online fashion that highlights the flexibility of our model against the existing approaches. We compare our method with the state-of-the-art multi-label classification algorithms on real-world datasets using both complete and missing label settings. Our model achieves better classification accuracy while our running time is consistently much faster than the baselines in an order of magnitude."}
{"_id": "9bc55cc4590caf827060fe677645e11242f28e4f", "title": "Complexity leadership theory : An interactive perspective on leading in complex adaptive systems", "text": "leadership theory: An interactive perspective on leading in complex adaptive systems\" (2006). Management Department Faculty Publications. Paper 8. Traditional, hierarchical views of leadership are less and less useful given the complexities of our modern world. Leadership theory must transition to new perspectives that account for the complex adaptive needs of organizations. In this paper, we propose that leadership (as opposed to leaders) can be seen as a complex dynamic process that emerges in the interactive \" spaces between \" people and ideas. That is, leadership is a dynamic that transcends the capabilities of individuals alone; it is the product of interaction, tension, and exchange rules governing changes in perceptions and understanding. We label this a dynamic of adaptive leadership, and we show how this dynamic provides important insights about the nature of leadership and its outcomes in organizational fields. We define a leadership event as a perceived segment of action whose meaning is created by the interactions of actors involved in producing it, and we present a set of innovative methods for capturing and analyzing these contextually driven processes. We provide theoretical and practical implications of these ideas for organizational behavior and organization and management theory."}
{"_id": "5a56e57683c3d512363d3032368941580d136f7e", "title": "Practical Option Pricing with Support Vector Regression and MART by", "text": "Support Vector Regression and MART by Ian I-En Choo Stanford University"}
{"_id": "14ec5729375dcba5c1712ab8df0f8970c3415d8d", "title": "Exome sequencing in sporadic autism spectrum disorders identifies severe de novo mutations", "text": "Evidence for the etiology of autism spectrum disorders (ASDs) has consistently pointed to a strong genetic component complicated by substantial locus heterogeneity. We sequenced the exomes of 20 individuals with sporadic ASD (cases) and their parents, reasoning that these families would be enriched for de novo mutations of major effect. We identified 21 de novo mutations, 11 of which were protein altering. Protein-altering mutations were significantly enriched for changes at highly conserved residues. We identified potentially causative de novo events in 4 out of 20 probands, particularly among more severely affected individuals, in FOXP1, GRIN2B, SCN1A and LAMC3. In the FOXP1 mutation carrier, we also observed a rare inherited CNTNAP2 missense variant, and we provide functional support for a multi-hit model for disease risk. Our results show that trio-based exome sequencing is a powerful approach for identifying new candidate genes for ASDs and suggest that de novo mutations may contribute substantially to the genetic etiology of ASDs."}
{"_id": "05ee231749c9ce97f036c71c1d2d599d660a8c81", "title": "GhostVLAD for set-based face recognition", "text": "The objective of this paper is to learn a compact representation of image sets for template-based face recognition. We make the following contributions: first, we propose a network architecture which aggregates and embeds the face descriptors produced by deep convolutional neural networks into a compact fixed-length representation. This compact representation requires minimal memory storage and enables efficient similarity computation. Second, we propose a novel GhostVLAD layer that includes ghost clusters, that do not contribute to the aggregation. We show that a quality weighting on the input faces emerges automatically such that informative images contribute more than those with low quality, and that the ghost clusters enhance the network\u2019s ability to deal with poor quality images. Third, we explore how input feature dimension, number of clusters and different training techniques affect the recognition performance. Given this analysis, we train a network that far exceeds the state-of-the-art on the IJB-B face recognition dataset. This is currently one of the most challenging public benchmarks, and we surpass the state-of-the-art on both the identification and verification protocols."}
{"_id": "1acc5e2788743b46272d977045868b937fb317f6", "title": "Similarity Breeds Proximity: Pattern Similarity within and across Contexts Is Related to Later Mnemonic Judgments of Temporal Proximity", "text": "Experiences unfold over time, but little is known about the mechanisms that support the formation of coherent episodic memories for temporally extended events. Recent work in animals has provided evidence for signals in hippocampus that could link events across temporal gaps; however, it is unknown whether and how such signals might be related to later memory for temporal information in humans. We measured patterns of fMRI BOLD activity as people encoded items that were separated in time and manipulated the presence of shared or distinct context across items. We found that hippocampal pattern similarity in the BOLD response across trials predicted later temporal memory decisions when context changed. By contrast, pattern similarity in lateral occipital cortex was related to memory only when context remained stable. These data provide evidence in humans that representational stability in hippocampus across time may be a mechanism for temporal memory organization."}
{"_id": "70bb8edcac8802816fbfe90e7c1643d67419dd34", "title": "Conspicuous Consumption and Household Indebtedness", "text": "Using a novel, large dataset of consumer transactions in Singapore, we study how conspicuous consumption affects household indebtedness. The coexistence of private housing (condominiums) and subsidized public housing (HDB) allows us to identify conspicuous consumers. Conditional on the same income and other socioeconomic characteristics, those who choose to reside in condominiums\u2014considered a status good\u2014are likely to be more conspicuous than their counterparts living in HDB units. We consistently find that condominium residents spend considerably more (by up to 44%) on conspicuous goods but not differently on inconspicuous goods. Compared with their matched HDB counterparts, more conspicuous consumers have 13% more credit card debt and 151% more delinquent credit card debt. Furthermore, the association between conspicuous consumption and credit card debt is concentrated among younger, male, single individuals. These results suggest that status-seeking-induced conspicuous consumption is an important determinant of household indebtedness."}
{"_id": "9b5bc029f386e51d5eee91cf6a1f921f1404549f", "title": "Wireless Network Virtualization: A Survey, Some Research Issues and Challenges", "text": "Since wireless network virtualization enables abstraction and sharing of infrastructure and radio spectrum resources, the overall expenses of wireless network deployment and operation can be reduced significantly. Moreover, wireless network virtualization can provide easier migration to newer products or technologies by isolating part of the network. Despite the potential vision of wireless network virtualization, several significant research challenges remain to be addressed before widespread deployment of wireless network virtualization, including isolation, control signaling, resource discovery and allocation, mobility management, network management and operation, and security as well as non-technical issues such as governance regulations, etc. In this paper, we provide a brief survey on some of the works that have already been done to achieve wireless network virtualization, and discuss some research issues and challenges. We identify several important aspects of wireless network virtualization: overview, motivations, framework, performance metrics, enabling technologies, and challenges. Finally, we explore some broader perspectives in realizing wireless network virtualization."}
{"_id": "6b3e788bba1176c6c47b37213461cf72a4fb666c", "title": "Unfolding the role of protein misfolding in neurodegenerative diseases", "text": "Recent evidence indicates that diverse neurodegenerative diseases might have a common cause and pathological mechanism \u2014 the misfolding, aggregation and accumulation of proteins in the brain, resulting in neuronal apoptosis. Studies from different disciplines strongly support this hypothesis and indicate that a common therapy for these devastating disorders might be possible. The aim of this article is to review the literature on the molecular mechanism of protein misfolding and aggregation, its role in neurodegeneration and the potential targets for therapeutic intervention in neurodegenerative diseases. Many questions still need to be answered and future research in this field will result in exciting new discoveries that might impact other areas of biology."}
{"_id": "fabb31da278765454c102086995cfaeda4f4df7b", "title": "Vehicle Number Plate Detection System for Indian Vehicles", "text": "An exponential increase in number of vehicles necessitates the use of automated systems to maintain vehicle information. The information is highly required for both management of traffic as well as reduction of crime. Number plate recognition is an effective way for automatic vehicle identification. Some of the existing algorithms based on the principle of learning takes a lot of time and expertise before delivering satisfactory results but even then lacks in accuracy. In the proposed algorithm an efficient method for recognition for Indian vehicle number plates has been devised. The algorithm aims at addressing the problems of scaling and recognition of position of characters with a good accuracy rate of 98.07%."}
{"_id": "1c1bebfd526b911e3865a20ffd9748117029993a", "title": "Generalized Thompson Sampling for Contextual Bandits", "text": "Thompson Sampling, one of the oldest heuristics for solving multi-armed bandits, has recently been shown to demonstrate state-of-the-art performance. The em pirical success has led to great interests in theoretical understanding of this heuristic. In this pap er, we approach this problem in a way very different from existing efforts. In particular, motiv ated by the connection between Thompson Sampling and exponentiated updates, we propose a new family of algorithms called Generalized Thompson Sampling in the expert-learning framework, which includes Thompson Sampling as a special case. Similar to most expert-learning algorithms, Generalized Thompson Sampling uses a loss function to adjust the experts\u2019 weights. General regr et bounds are derived, which are also instantiated to two important loss functions: square loss a nd logarithmic loss. In contrast to existing bounds, our results apply to quite general contextual bandi ts. More importantly, they quantify the effect of the \u201cprior\u201d distribution on the regret bounds."}
{"_id": "2067ad16e1991b77d53d527334e23b97d37f5ec1", "title": "Analysis of users and non-users of smartphone applications", "text": "Purpose: Smartphones facilitate the potential adoption of new mobile applications. The purpose of this research is to study users and non-users of three selected mobile applications, and find out what really drives the intention to use these applications across users and non-users. Design/methodology/approach: The authors measured actual usage of mobile applications in a panel study of 579 Finnish smartphone users, using in-device measurements as an objective way to identify users and non-users. A web-based survey was used in collecting data to test an extended TAM model in explaining intention to use. Findings: Perceived technological barriers negatively affect behavioural control, reflecting people\u2019s assessment of themselves being capable of using the services without trouble. Behavioural control is directly linked to perceived usefulness (except for games) and perceived enjoyment, as hypothesized. Perceived enjoyment and usefulness were generically found to explain intention to use applications for both users and for non-users. Research limitations/implications: With regards to the impact of social norms, the study finds that further research needs to be done in exploring its impact more thoroughly. The dataset of the research, consisting purely of male-dominated, young smartphone users, make the generalization of results difficult. Practical implications: There are differences regarding what drives the usage of different kinds of mobile applications. In this study, map applications and mobile Internet, are driven by more utilitarian motivations, whereas games are more hedonic. It is also clear that not everybody are using applications facilitated by smartphones, and therefore the presented approach of studying users and non-users separately provides a new approach to analyze adoption on a practical level. Originality/value: This research proves that models like TAM should not treat mobile services as a generic concept, but instead to specifically address individual mobile services. The research also demonstrates the unique value of combining objective usage measurements (reflecting actual behaviour) with traditional survey data in more comprehensively modelling service adoption. 2009 Elsevier Ltd. All rights reserved."}
{"_id": "8b6f2c853875f99f5e130aaac606ec85bb6ef801", "title": "Retransmission steganography and its detection", "text": "The paper presents a new steganographic method called RSTEG (retransmission steganography), which is intended for a broad class of protocols that utilises retransmission mechanisms. The main innovation of RSTEG is to not acknowledge a successfully received packet in order to intentionally invoke retransmission. The retransmitted packet carries a steganogram instead of user data in the payload field. RSTEG is presented in the broad context of network steganography, and the utilisation of RSTEG for TCP (transmission control protocol) retransmission mechanisms is described in detail. Simulation results are also presented with the main aim of measuring and comparing the steganographic bandwidth of the proposed method for different TCP retransmission mechanisms, as well as to determine the influence of RSTEG on the network retransmission level."}
{"_id": "ef4bc332442674a437a704e0fbee98fe36676986", "title": "Verification of an Implementation of Tomasulo's Algorithm by Compositional Model Checking", "text": "An implementation of an out-of-order processing unit based on Tomasulo\u2019s algorithm is formally verified using compositional model checking techniques. This demonstrates that finite-state methods can be applied to such algorithms, without recourse to higher-order proof systems. The paper introduces a novel compositional system that supports cyclic environment reasoning and multiple environment abstractions per signal. A proof of Tomasulo\u2019s algorithm is outlined, based on refinement maps, and relying on the novel features of the compositional system. This proof is fully verified by the SMV verifier, using symmetry to reduce the number of assertions that must be verified."}
{"_id": "b9f80615f15ced168c45a4cdbaa7ea0c37734029", "title": "Many-to-Many Geographically-Embedded Flow Visualisation: An Evaluation", "text": "Showing flows of people and resources between multiple geographic locations is a challenging visualisation problem. We conducted two quantitative user studies to evaluate different visual representations for such dense many-to-many flows. In our first study we compared a bundled node-link flow map representation and OD Maps [37] with a new visualisation we call MapTrix. Like OD Maps, MapTrix overcomes the clutter associated with a traditional flow map while providing geographic embedding that is missing in standard OD matrix representations. We found that OD Maps and MapTrix had similar performance while bundled node-link flow map representations did not scale at all well. Our second study compared participant performance with OD Maps and MapTrix on larger data sets. Again performance was remarkably similar."}
{"_id": "806ce938fca2c5d0989a599461526d9062dfaf51", "title": "Estimating Smart City sensors data generation", "text": "Nowadays, Smart Cities are positioned as one of the most challenging and important research topics, highlighting major changes in people's lifestyle. Technologies such as smart energy, smart transportation or smart health are being designed to improve citizen's quality of life. Smart Cities leverage the deployment of a network of devices - sensors and mobile devices-, all connected through different means and/or technologies, according to their network availability and capacities, setting a novel framework feeding end-users with an innovative set of smart services. Aligned to this objective, a typical Smart City architecture is organized into layers, including a sensing layer (generates data), a network layer (moves the data), a middleware layer (manages all collected data and makes it ready for usage) and an application layer (provides the smart services benefiting from this data). In this paper a real Smart City is analyzed, corresponding to the city of Barcelona, with special emphasis on the layers responsible for collecting the data generated by the deployed sensors. The amount of daily sensors data transmitted through the network has been estimated and a rough projection has been made assuming an exhaustive deployment that fully covers all city. Finally, we discuss some solutions to both reduce the data transmission and improve the data management."}
{"_id": "4354f1c8b058a5da4b30ffba97131edcf4fd79e7", "title": "Supervised Extraction of Diagnosis Codes from EMRs: Role of Feature Selection, Data Selection, and Probabilistic Thresholding", "text": "Extracting diagnosis codes from medical records is a complex task carried out by trained coders by reading all the documents associated with a patient's visit. With the popularity of electronic medical records (EMRs), computational approaches to code extraction have been proposed in the recent years. Machine learning approaches to multi-label text classification provide an important methodology in this task given each EMR can be associated with multiple codes. In this paper, we study the the role of feature selection, training data selection, and probabilistic threshold optimization in improving different multi-label classification approaches. We conduct experiments based on two different datasets: a recent gold standard dataset used for this task and a second larger and more complex EMR dataset we curated from the University of Kentucky Medical Center. While conventional approaches achieve results comparable to the state-of-the-art on the gold standard dataset, on our complex in-house dataset, we show that feature selection, training data selection, and probabilistic thresholding provide significant gains in performance."}
{"_id": "5c6b51bb44c9b2297733b58daaf26af01c98fe09", "title": "A Comparative Study of Feature Extraction Algorithms in Customer Reviews", "text": "The paper systematically compares two feature extraction algorithms to mine product features commented on in customer reviews. The first approach [17] identifies candidate features by applying a set of POS patterns and pruning the candidate set based on the log likelihood ratio test. The second approach [11] applies association rule mining for identifying frequent features and a heuristic based on the presence of sentiment terms for identifying infrequent features. We evaluate the performance of the algorithms on five product specific document collections regarding consumer electronic devices. We perform an analysis of errors and discuss advantages and limitations of the algorithms."}
{"_id": "bf6dd4e8ed48ea7be9654f521bfd45ba67c210e2", "title": "Culture care theory, research, and practice.", "text": "Today nurses are facing a world in which they are almost forced to use transculturally-based nursing theories and practices in order to care for people of diverse cultures. The author, who in the mid-50s pioneered the development of the first transcultural nursing theory with a care focus, discusses the relevance, assumptions, and predictions of the culture care theory along with the ethnonursing research method. The author contends that transcultural nursing findings are gradually transforming nursing practice and are providing a new paradigm shift from traditional medical and unicultural practice to multiculturally congruent and specific care modalities. A few research findings are presented to show the importance of being attentive to cultural care diversities and universalities as the major tenets of the theory. In addition, some major contributions of the theory are cited along with major challenges for the immediate future."}
{"_id": "ea5e4becc7f7a533c3685d89e5f087aa4e25cba7", "title": "Decision making, the P3, and the locus coeruleus-norepinephrine system.", "text": "Psychologists and neuroscientists have had a long-standing interest in the P3, a prominent component of the event-related brain potential. This review aims to integrate knowledge regarding the neural basis of the P3 and to elucidate its functional role in information processing. The authors review evidence suggesting that the P3 reflects phasic activity of the neuromodulatory locus coeruleus-norepinephrine (LC-NE) system. They discuss the P3 literature in the light of empirical findings and a recent theory regarding the information-processing function of the LC-NE phasic response. The theoretical framework emerging from this research synthesis suggests that the P3 reflects the response of the LC-NE system to the outcome of internal decision-making processes and the consequent effects of noradrenergic potentiation of information processing."}
{"_id": "41f9d67c031f8b9ec51745eb5a02d826d7b04539", "title": "OBFS: A File System for Object-Based Storage Devices", "text": "The object-based storage model, in which files are made up of one or more data objects stored on self-contained Object-Based Storage Devices (OSDs), is emerging as an architecture for distributed storage systems. The workload presented to the OSDs will be quite different from that of generalpurpose file systems, yet many distributed file systems employ general-purpose file systems as their underlying file system. We present OBFS, a small and highly efficient file system designed for use in OSDs. Our experiments show that our user-level implementation of OBFS outperforms Linux Ext2 and Ext3 by a factor of two or three, and while OBFS is 1/25 the size of XFS, it provides only slightly lower read performance and 10%\u201340% higher write performance."}
{"_id": "6f48d05e532254e2b7d429db97cb5ad6841b0812", "title": "Are effective teachers like good parents? Teaching styles and student adjustment in early adolescence.", "text": "This study examined the utility of parent socialization models for understanding teachers' influence on student adjustment in middle school. Teachers were assessed with respect to their modeling of motivation and to Baumrind's parenting dimensions of control, maturity demands, democratic communication, and nurturance. Student adjustment was defined in terms of their social and academic goals and interest in class, classroom behavior, and academic performance. Based on information from 452 sixth graders from two suburban middle schools, results of multiple regressions indicated that the five teaching dimensions explained significant amounts of variance in student motivation, social behavior, and achievement. High expectations (maturity demands) was a consistent positive predictor of students' goals and interests, and negative feedback (lack of nurturance) was the most consistent negative predictor of academic performance and social behavior. The role of motivation in mediating relations between teaching dimensions and social behavior and academic achievement also was examined; evidence for mediation was not found. Relations of teaching dimensions to student outcomes were the same for African American and European American students, and for boys and girls. The implications of parent socialization models for understanding effective teaching are discussed."}
{"_id": "e7e061731c5e623dcf9f8c62a5a17041c229bf68", "title": "Compact second harmonic-suppressed bandstop and bandpass filters using open stubs", "text": "Integration of bandstop filters with the bandstop or bandpass filter are presented in this paper. By replacing the series quarter-wavelength connecting lines of conventional open-stub bandpass/bandstop filters with the equivalent T-shaped lines, one could have compact open-stub bandstop/bandpass filters with second harmonic suppression. Transmission-line model calculation is used to derive the design equations of the equivalent T-shaped lines. Experiments have also been done to validate the design concept. Compared with the conventional open-stub bandpass/bandstop filters, over 30-dB improvement of the second harmonic suppression and 28.6% size reduction are achieved"}
{"_id": "afb772d0361d6ca85b78ba166b960317b9a87943", "title": "The Blockchain as a Software Connector", "text": "Blockchain is an emerging technology for decentralized and transactional data sharing across a large network of untrusted participants. It enables new forms of distributed software architectures, where components can find agreements on their shared states without trusting a central integration point or any particular participating components. Considering the blockchain as a software connector helps make explicitly important architectural considerations on the resulting performance and quality attributes (for example, security, privacy, scalability and sustainability) of the system. Based on our experience in several projects using blockchain, in this paper we provide rationales to support the architectural decision on whether to employ a decentralized blockchain as opposed to other software solutions, like traditional shared data storage. Additionally, we explore specific implications of using the blockchain as a software connector including design trade-offs regarding quality attributes."}
{"_id": "03af306bcb882da089453fa57539f62aa7b5289e", "title": "Conceptual modeling for ETL processes", "text": "Extraction-Transformation-Loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization and insertion into a data warehouse. In this paper, we focus on the problem of the definition of ETL activities and provide formal foundations for their conceptual representation. The proposed conceptual model is (a) customized for the tracing of inter-attribute relationships and the respective ETL activities in the early stages of a data warehouse project; (b) enriched with a 'palette' of a set of frequently used ETL activities, like the assignment of surrogate keys, the check for null values, etc; and (c) constructed in a customizable and extensible manner, so that the designer can enrich it with his own re-occurring patterns for ETL activities."}
{"_id": "5b30ca7b3cbd38380c23333310f673f835b2dd3e", "title": "Beyond data warehousing: what's next in business intelligence?", "text": "During the last ten years the approach to business management has deeply changed, and companies have understood the importance of enforcing achievement of the goals defined by their strategy through metrics-driven management. The DW process, though supporting bottom-up extraction of information from data, fails in top-down enforcing the company strategy. A new approach to BI, called Business Performance Management (BPM), is emerging from this framework: it includes DW but it also requires a reactive component capable of monitoring the time-critical operational processes to allow tactical and operational decision-makers to tune their actions according to the company strategy. The aim of this paper is to encourage the research community to acknowledge the coming of a second era in BI, to propose a general architecture for BPM, and to lay the premises for investigating the most challenging of the related issues."}
{"_id": "ca966559c378c0d22d17b1bbce7f55b41cbbfba5", "title": "A UML Based Approach for Modeling ETL Processes in Data Warehouses", "text": "Data warehouses (DWs) are complex computer systems whose main goal is to facilitate the decision making process of knowledge workers. ETL (Extraction-Transformation-Loading) processes are responsible for the extraction of data from heterogeneous operational data sources, their transformation (conversion, cleaning, normalization, etc.) and their loading into DWs. ETL processes are a key component of DWs because incorrect or misleading data will produce wrong business decisions, and therefore, a correct design of these processes at early stages of a DW project is absolutely necessary to improve data quality. However, not much research has dealt with the modeling of ETL processes. In this paper, we present our approach, based on the Unified Modeling Language (UML), which allows us to accomplish the conceptual modeling of these ETL processes together with the conceptual schema of the target DW in an integrated manner. We provide the necessary mechanisms for an easy and quick specification of the common operations defined in these ETL processes such as, the integration of different data sources, the transformation between source and target attributes, the generation of surrogate keys and so on. Moreover, our approach allows the designer a comprehensive tracking and documentation of entire ETL processes, which enormously facilitates the maintenance of these processes. Another advantage of our proposal is the use of the UML (standardization, ease-of-use and functionality) and the seamless integration of the design of the ETL processes with the DW conceptual schema. Finally, we show how to use our integrated approach by using a well-known modeling tool such as Rational Rose."}
{"_id": "1126ceee34acd741396c493c84d8b6072a18bfd7", "title": "Potter's Wheel: An Interactive Data Cleaning System", "text": "Cleaning data of errors in structure and content is important for data warehousing and integration. Current solutions for data cleaning involve many iterations of data \u201cauditing\u201d to find errors, and long-running transformations to fix them. Users need to endure long waits, and often write complex transformation scripts. We present Potter\u2019s Wheel, an interactive data cleaning system that tightly integrates transformation and discrepancy detection. Users gradually build transformations to clean the data by adding or undoing transforms on a spreadsheet-like interface; the effect of a transform is shown at once on records visible on screen. These transforms are specified either through simple graphical operations, or by showing the desired effects on example data values. In the background, Potter\u2019s Wheel automatically infers structures for data values in terms of user-defined domains, and accordingly checks for constraint violations. Thus users can gradually build a transformation as discrepancies are found, and clean the data without writing complex programs or enduring long delays."}
{"_id": "25e0853ae37c2200de5d3597ae9e86131ce25aee", "title": "Modeling ETL activities as graphs", "text": "Extraction-Transformation-Loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization and insertion into a data warehouse. In this paper, we focus on the logical design of the ETL scenario of a data warehouse. Based on a formal logical model that includes the data stores, activities and their constituent parts, we model an ETL scenario as a graph, which we call the Architecture Graph. We model all the aforementioned entities as nodes and four different kinds of relationships (instance-of, part-of, regulator and provider relationships) as edges. In addition, we provide simple graph transformations that reduce the complexity of the graph. Finally, in order to support the engineering of the design and the evolution of the warehouse, we introduce specific importance metrics, namely dependence and responsibility, to measure the degree to which entities are bound to each other."}
{"_id": "0d8070ff416deb43fc9eae352d04b752dabba82f", "title": "A survey of B-tree locking techniques", "text": "B-trees have been ubiquitous in database management systems for several decades, and they are used in other storage systems as well. Their basic structure and basic operations are well and widely understood including search, insertion, and deletion. Concurrency control of operations in B-trees, however, is perceived as a difficult subject with many subtleties and special cases. The purpose of this survey is to clarify, simplify, and structure the topic of concurrency control in B-trees by dividing it into two subtopics and exploring each of them in depth."}
{"_id": "470a828d5e3962f2917a0092cc6ba46ccfe41a2a", "title": "Data Preparation for Data Mining", "text": "Senior Editor: Diane D. Cerra Director of Production & Manufacturing: Yonie Overton Production Editor: Edward Wade Editorial Assistant: Belinda Breyer Cover Design: Wall-To-Wall Studios Cover Photograph: \u00a9 1999 PhotoDisc, Inc. Text Design & Composition: Rebecca Evans & Associates Technical Illustration: Dartmouth Publishing, Inc. Copyeditor: Gary Morris Proofreader: Ken DellaPenta Indexer: Steve Rath Printer: Courier Corp."}
{"_id": "1774bec4155de16e4ed5328e44012d991f1509cd", "title": "RefSeer: A citation recommendation system", "text": "Citations are important in academic dissemination. To help researchers check the completeness of citations while authoring a paper, we introduce a citation recommendation system called RefSeer. Researchers can use it to find related works to cited while authoring papers. It can also be used by reviewers to check the completeness of a paper's references. RefSeer presents both topic based global recommendation and also citation-context based local recommendation. By evaluating the quality of recommendation, we show that such recommendation system can recommend citations with good precision and recall. We also show that our recommendation system is very efficient and scalable."}
{"_id": "9091d1c5aa7e07d022865b4800ec1684211d2045", "title": "A robust-coded pattern projection for dynamic 3D scene measurement", "text": "This paper presents a new coded structured light pattern which permits to solve the correspondence problem by a single shot and without using geometrical constraints. The pattern is composed by the projection of a grid made by coloured slits in such a way that each slit with its two neighbours appears only once in the pattern. The technique proposed permits a rapid and robust 3D scene measurement, even with moving objects. \u00d3 1998 Elsevier Science B.V. All rights reserved."}
{"_id": "623fd6adaa5585707d8d7339b5125185af6e3bf1", "title": "A Comparative Study of Psychosocial Interventions for Internet Gaming Disorder Among Adolescents Aged 13\u201317\u00a0Years", "text": "The present study is a quasi-experimental, prospective study of interventions for internet gaming disorder (IGD). One hundred four parents and their adolescent children were enrolled and allocated to one of the four treatment groups; 7-day Siriraj Therapeutic Residential Camp (S-TRC) alone, 8-week Parent Management Training for Game Addiction (PMT-G) alone, combined S-TRC and PMT-G, and basic psychoeducation (control). The severity of IGD was measured by the Game Addiction Screening Test (GAST). The mean difference among groups in GAST scores was statistically significant, with P values of <\u20090.001, 0.002, and 0.005 at 1, 3, and 6\u00a0months post-intervention, respectively. All groups showed improvement over the control group. The percentage of adolescents who remained in the addicted or probably addicted groups was less than 50% in the S-TRC, PMT-G, and combined groups. In conclusion, both S-TRC and PMT-G were effective psychosocial interventions for IGD and were superior to basic psychoeducation alone."}
{"_id": "a8820b172e7bc02406f1948d8bc75b7f4bdfb4ef", "title": "Paper-Based Synthetic Gene Networks", "text": "Synthetic gene networks have wide-ranging uses in reprogramming and rewiring organisms. To date, there has not been a way to harness the vast potential of these networks beyond the constraints of a laboratory or in vivo environment. Here, we present an in vitro paper-based platform that provides an alternate, versatile venue for synthetic biologists to operate and a much-needed medium for the safe deployment of engineered gene circuits beyond the lab. Commercially available cell-free systems are freeze dried onto paper, enabling the inexpensive, sterile, and abiotic distribution of synthetic-biology-based technologies for the clinic, global health, industry, research, and education. For field use, we create circuits with colorimetric outputs for detection by eye and fabricate a low-cost, electronic optical interface. We demonstrate this technology with small-molecule and RNA actuation of genetic switches, rapid prototyping of complex gene circuits, and programmable in vitro diagnostics, including glucose sensors and strain-specific Ebola virus sensors."}
{"_id": "60ac421dfd1e55c8649f2cb7d122f60e04821127", "title": "From centrality to temporary fame: Dynamic centrality in complex networks", "text": "We develop a new approach to the study of the dynamics of link utilization in complex networks using records of communication in a large social network. Counter to the perspective that nodes have particular roles, we find roles change dramatically from day to day. \u201cLocal hubs\u201d have a power law degree distribution over time, with no characteristic degree value. Our results imply a significant reinterpretation of the concept of node centrality in complex networks, and among other conclusions suggest that interventions targeting hubs will have significantly less effect than previously thought. \u00a9 2006 Wiley Periodicals, Inc. Complexity 12: 000 \u2013 000, 2006"}
{"_id": "72417ba72a69b9d2e84fb4228a6398c79a16e11a", "title": "HERZBERG ' S MOTIVATION-HYGIENE THEORY AND JOB SATISFACTION IN THE MALAYSIAN RETAIL SECTOR : THE MEDIATING EFFECT OF LOVE OF MONEY", "text": "This paper examines what motivates employees in the retail industry, and examines their level of job satisfaction, using Herzberg's hygiene factors and motivators. In this study, convenience sampling was used to select sales personnel from women's clothing stores in Bandar Sunway shopping mall in the state of Selangor. The results show that hygiene factors were the dominant motivators of sales personnel job satisfaction. Working conditions were the most significant in motivating sales personnel. Recognition was second, followed by company policy and salary. There is a need to delve more deeply into why salespeople place such a high importance on money. Further analysis was performed to assess how much the love of money mediates the relationship between salary and job satisfaction. Based on the general test for mediation, the love of money could explain the relationship between salary and job satisfaction. The main implication of this study is that sales personnel who value money highly are satisfied with their salary and job when they receive a raise."}
{"_id": "e58636e67665bd7c6ac1b441c353ab6842a65482", "title": "Evaluation of mobile app paradigms", "text": "The explosion of mobile applications both in number and variety raises the need of shedding light on their architecture, composition and quality. Indeed, it is crucial to understand which mobile application paradigm fits better to what type of application and usage. Such understanding has direct consequences on the user experience, the development cost and sale revenues of mobile apps. In this paper, we identified four main mobile application paradigms and evaluated them from the developer, user and service provider viewpoints. To ensure objectivity and soundness we start by defining high level criteria and then breaking down in finer-grained criteria. After a theoretical evaluation an implementation was carried out as a practical verification. The selected application is object recognition app, which is both exciting and challenging to develop."}
{"_id": "b0bd79de12dcc892c7ed3750fa278d14158cab26", "title": "Extraction of Information Related to Adverse Drug Events from Electronic Health Record Notes: Design of an End-to-End Model Based on Deep Learning", "text": "BACKGROUND\nPharmacovigilance and drug-safety surveillance are crucial for monitoring adverse drug events (ADEs), but the main ADE-reporting systems such as Food and Drug Administration Adverse Event Reporting System face challenges such as underreporting. Therefore, as complementary surveillance, data on ADEs are extracted from electronic health record (EHR) notes via natural language processing (NLP). As NLP develops, many up-to-date machine-learning techniques are introduced in this field, such as deep learning and multi-task learning (MTL). However, only a few studies have focused on employing such techniques to extract ADEs.\n\n\nOBJECTIVE\nWe aimed to design a deep learning model for extracting ADEs and related information such as medications and indications. Since extraction of ADE-related information includes two steps-named entity recognition and relation extraction-our second objective was to improve the deep learning model using multi-task learning between the two steps.\n\n\nMETHODS\nWe employed the dataset from the Medication, Indication and Adverse Drug Events (MADE) 1.0 challenge to train and test our models. This dataset consists of 1089 EHR notes of cancer patients and includes 9 entity types such as Medication, Indication, and ADE and 7 types of relations between these entities. To extract information from the dataset, we proposed a deep-learning model that uses a bidirectional long short-term memory (BiLSTM) conditional random field network to recognize entities and a BiLSTM-Attention network to extract relations. To further improve the deep-learning model, we employed three typical MTL methods, namely, hard parameter sharing, parameter regularization, and task relation learning, to build three MTL models, called HardMTL, RegMTL, and LearnMTL, respectively.\n\n\nRESULTS\nSince extraction of ADE-related information is a two-step task, the result of the second step (ie, relation extraction) was used to compare all models. We used microaveraged precision, recall, and F1 as evaluation metrics. Our deep learning model achieved state-of-the-art results (F1=65.9%), which is significantly higher than that (F1=61.7%) of the best system in the MADE1.0 challenge. HardMTL further improved the F1 by 0.8%, boosting the F1 to 66.7%, whereas RegMTL and LearnMTL failed to boost the performance.\n\n\nCONCLUSIONS\nDeep learning models can significantly improve the performance of ADE-related information extraction. MTL may be effective for named entity recognition and relation extraction, but it depends on the methods, data, and other factors. Our results can facilitate research on ADE detection, NLP, and machine learning."}
{"_id": "a5858131642b069c74cf49ebde3bb0e6149ee183", "title": "Measuring level of cuteness of baby images: a supervised learning scheme", "text": "The attractiveness of a baby face image depends on the perception of the perceiver. However, several recent studies advocate the idea that human perceptual analysis can be approximated by statistical models. We believe that the cuteness of baby faces depends on the low level facial features extracted from different parts (e.g., mouth, eyes, nose) of the faces. In this paper, we introduce a new problem of classifying baby face images based on their cuteness level using supervised learning techniques. The proposed learning model finds the potential of a deep learning technique in measuring the level of cuteness of baby faces. Since no datasets are available to validate the proposed technique, we construct a dataset of images of baby faces, downloaded from the internet. The dataset consists of several challenges like different view-point, orientation, lighting condition, contrast and background. We annotate the data using some well-known statistical tools inherited from Reliability theory. The experiments are conducted with some well-known image features like Speeded Up Robust Feature (SURF), Histogram of Oriented Gradient (HOG), Convolutional Neural Network (CNN) on Gradient and CNN on Laplacian, and the results are presented and discussed."}
{"_id": "5f3291e73ee204afad2eebd4bc1458248e45eabd", "title": "An empirical analysis of smart contracts: platforms, applications, and design patterns", "text": "Smart contracts are computer programs that can be consistently executed by a network of mutually distrusting nodes, without the arbitration of a trusted authority. Because of their resilience to tampering, smart contracts are appealing in many scenarios, especially in those which require transfers of money to respect certain agreed rules (like in financial services and in games). Over the last few years many platforms for smart contracts have been proposed, and some of them have been actually implemented and used. We study how the notion of smart contract is interpreted in some of these platforms. Focussing on the two most widespread ones, Bitcoin and Ethereum, we quantify the usage of smart contracts in relation to their application domain. We also analyse the most common programming patterns in Ethereum, where the source code of smart contracts is available."}
{"_id": "b9f56b82698c7b2036a98833a670d9a8f96d4474", "title": "A 5.2mW, 0.0016% THD up to 20kHz, ground-referenced audio decoder with PSRR-enhanced class-AB 16\u03a9 headphone amplifiers", "text": "A low-power ground-referenced audio decoder with PSRR-enhanced class-AB headphone amplifiers presents <;0.0016% THD in the whole audio band against the supply ripple by a negative charge-pump. Realized in the 40nm CMOS, the fully-integrated stereo decoder achieves 91dB SNDR and 100dB dynamic range while driving a 16\u03a9 headphone load and consumes 5.2mW from a 1.8V power supply. The core area is 0.093mm2/channel only."}
{"_id": "e4ae7b14d03e4ac5a432e1226dbd8613affd3e71", "title": "An operational measure of information leakage", "text": "Given two discrete random variables X and Y, an operational approach is undertaken to quantify the \u201cleakage\u201d of information from X to Y. The resulting measure \u2112(X\u2192Y ) is called maximal leakage, and is defined as the multiplicative increase, upon observing Y, of the probability of correctly guessing a randomized function of X, maximized over all such randomized functions. It is shown to be equal to the Sibson mutual information of order infinity, giving the latter operational significance. Its resulting properties are consistent with an axiomatic view of a leakage measure; for example, it satisfies the data processing inequality, it is asymmetric, and it is additive over independent pairs of random variables. Moreover, it is shown that the definition is robust in several respects: allowing for several guesses or requiring the guess to be only within a certain distance of the true function value does not change the resulting measure."}
{"_id": "29374eed47527cdaf14aa55fdb3935fc2de78c96", "title": "Vehicle Detection and Tracking in Car Video Based on Motion Model", "text": "This paper aims at real-time in-car video analysis to detect and track vehicles ahead for safety, autodriving, and target tracing. This paper describes a comprehensive approach to localizing target vehicles in video under various environmental conditions. The extracted geometry features from the video are continuously projected onto a 1-D profile and are constantly tracked. We rely on temporal information of features and their motion behaviors for vehicle identification, which compensates for the complexity in recognizing vehicle shapes, colors, and types. We probabilistically model the motion in the field of view according to the scene characteristic and the vehicle motion model. The hidden Markov model (HMM) is used to separate target vehicles from the background and track them probabilistically. We have investigated videos of day and night on different types of roads, showing that our approach is robust and effective in dealing with changes in environment and illumination and that real-time processing becomes possible for vehicle-borne cameras."}
{"_id": "9eea99f11ff7417ba8821e84b531e1d16f6683fc", "title": "The Weak Byzantine Generals Problem", "text": "The Byzantine Generals Problem requires processes to reach agreement upon a value even though some of them may fad. It is weakened by allowing them to agree upon an \"incorrect\" value if a failure occurs. The transaction eormmt problem for a distributed database Js a special case of the weaker problem. It is shown that, like the original Byzantine Generals Problem, the weak version can be solved only ff fewer than one-third of the processes may fad. Unlike the onginal problem, an approximate solution exists that can tolerate arbaranly many failures. Categories and Subject Descnptors: D.4.5 [Operating Systems]: Reliability, F.2.m [Analysis of Algorithms and Problem Complexity]: Mzscellaneous General Terms: Reliabthty Addmonal"}
{"_id": "748260579dc2fb789335a88ae3f63c114795d047", "title": "Action and Interaction Recognition in First-Person Videos", "text": "In this work, we evaluate the performance of the popular dense trajectories approach on first-person action recognition datasets. A person moving around with a wearable camera will actively interact with humans and objects and also passively observe others interacting. Hence, in order to represent real-world scenarios, the dataset must contain actions from first-person perspective as well as third-person perspective. For this purpose, we introduce a new dataset which contains actions from both the perspectives captured using a head-mounted camera. We employ a motion pyramidal structure for grouping the dense trajectory features. The relative strengths of motion along the trajectories are used to compute different bag-of-words descriptors and concatenated to form a single descriptor for the action. The motion pyramidal approach performs better than the baseline improved trajectory descriptors. The method achieves 96.7% on the JPL interaction dataset and 61.8% on our NUS interaction dataset. The same is used to detect actions in long video sequences and achieves average precision of 0.79 on JPL interaction dataset."}
{"_id": "1282e1e89b5c969ea26caa88106b186ed84f20d5", "title": "Wearable Electronics and Smart Textiles: A Critical Review", "text": "Electronic Textiles (e-textiles) are fabrics that feature electronics and interconnections woven into them, presenting physical flexibility and typical size that cannot be achieved with other existing electronic manufacturing techniques. Components and interconnections are intrinsic to the fabric and thus are less visible and not susceptible of becoming tangled or snagged by surrounding objects. E-textiles can also more easily adapt to fast changes in the computational and sensing requirements of any specific application, this one representing a useful feature for power management and context awareness. The vision behind wearable computing foresees future electronic systems to be an integral part of our everyday outfits. Such electronic devices have to meet special requirements concerning wearability. Wearable systems will be characterized by their ability to automatically recognize the activity and the behavioral status of their own user as well as of the situation around her/him, and to use this information to adjust the systems' configuration and functionality. This review focuses on recent advances in the field of Smart Textiles and pays particular attention to the materials and their manufacturing process. Each technique shows advantages and disadvantages and our aim is to highlight a possible trade-off between flexibility, ergonomics, low power consumption, integration and eventually autonomy."}
{"_id": "6d2b1acad1aa0d93782bf2611be9d5a3c31898a8", "title": "Customers \u2019 Behavior Prediction Using Artificial Neural Network", "text": "In this paper, customer restaurant preference is predicted based on social media location check-ins. Historical preferences of the customer and the influence of the customer\u2019s social network are used in combination with the customer\u2019s mobility characteristics as inputs to the model. As the popularity of social media increases, more and more customer comments and feedback about products and services are available online. It not only becomes a way of sharing information among friends in the social network but also forms a new type of survey which can be utilized by business companies to improve their existing products, services, and market analysis. Approximately 121,000 foursquare restaurant check-ins in the Greater New York City area are used in this research. Artificial neural networks (ANN) and support vector machine (SVM) are developed to predict the customers\u2019 behavior regarding restaurant preferences. ANN provides 93.13% average accuracy across investigated customers, compared to only 54.00% for SVM with a sigmoid kernel function."}
{"_id": "aca437e9e2a453c84a38d716ca9a7a7683ae58b6", "title": "Scene Understanding by Reasoning Stability and Safety", "text": "This paper presents a new perspective for 3D scene understanding by reasoning object stability and safety using intuitive mechanics. Our approach utilizes a simple observation that, by human design, objects in static scenes should be stable in the gravity field and be safe with respect to various physical disturbances such as human activities. This assumption is applicable to all scene categories and poses useful constraints for the plausible interpretations (parses) in scene understanding. Given a 3D point cloud captured for a static scene by depth cameras, our method consists of three steps: (i) recovering solid 3D volumetric primitives from voxels; (ii) reasoning stability by grouping the unstable primitives to physically stable objects by optimizing the stability and the scene prior; and (iii) reasoning safety by evaluating the physical risks for objects under physical disturbances, such as human activity, wind or earthquakes. We adopt a novel intuitive physics model and represent the energy landscape of each primitive and object in the scene by a disconnectivity graph (DG). We construct a contact graph with nodes being 3D volumetric primitives and edges representing the supporting relations. Then we adopt a Swendson\u2013Wang Cuts algorithm to partition the contact graph into groups, each of which is a stable object. In order to detect unsafe objects in a static scene, our method further infers hidden and situated causes (disturbances) in the scene, and then introduces intuitive physical mechanics to predict possible effects (e.g., falls) as consequences of the disturbances. In experiments, we demonstrate that the algorithm achieves a substantially better performance for (i) object segmentation, (ii) 3D volumetric recovery, and (iii) scene understanding with respect to other state-of-the-art methods. We also compare the safety prediction from the intuitive mechanics model with human judgement."}
{"_id": "7baecdf494fd6e352add6993f12cc1554ee5b645", "title": "Association fibre pathways of the brain: parallel observations from diffusion spectrum imaging and autoradiography.", "text": "Understanding the long association pathways that convey cortical connections is a critical step in exploring the anatomic substrates of cognition in health and disease. Diffusion tensor imaging (DTI) is able to demonstrate fibre tracts non-invasively, but present approaches have been hampered by the inability to visualize fibres that have intersecting trajectories (crossing fibres), and by the lack of a detailed map of the origins, course and terminations of the white matter pathways. We therefore used diffusion spectrum imaging (DSI) that has the ability to resolve crossing fibres at the scale of single MRI voxels, and identified the long association tracts in the monkey brain. We then compared the results with available expositions of white matter pathways in the monkey using autoradiographic histological tract tracing. We identified 10 long association fibre bundles with DSI that match the observations in the isotope material: emanating from the parietal lobe, the superior longitudinal fasciculus subcomponents I, II and III; from the occipital-parietal region, the fronto-occipital fasciculus; from the temporal lobe, the middle longitudinal fasciculus and from rostral to caudal, the uncinate fasciculus, extreme capsule and arcuate fasciculus; from the occipital-temporal region, the inferior longitudinal fasciculus; and from the cingulate gyrus, the cingulum bundle. We suggest new interpretations of the putative functions of these fibre bundles based on the cortical areas that they link. These findings using DSI and validated with reference to autoradiographic tract tracing in the monkey represent a considerable advance in the understanding of the fibre pathways in the cerebral white matter. By replicating the major features of these tracts identified by histological techniques in monkey, we show that DSI has the potential to cast new light on the organization of the human brain in the normal state and in clinical disorders."}
{"_id": "c86c37f76dc5c95d84a141e6313e5608ad7638e9", "title": "The role of the self in mindblindness in autism", "text": "Since its inception the 'mindblindness' theory of autism has greatly furthered our understanding of the core social-communication impairments in autism spectrum conditions (ASC). However, one of the more subtle issues within the theory that needs to be elaborated is the role of the 'self'. In this article, we expand on mindblindness in ASC by addressing topics related to the self and its central role in the social world and then review recent research in ASC that has yielded important insights by contrasting processes relating to both self and other. We suggest that new discoveries lie ahead in understanding how self and other are interrelated and/or distinct, and how understanding atypical self-referential and social-cognitive mechanisms may lead to novel ideas as to how to facilitate social-communicative abilities in ASC."}
{"_id": "28890189c8fb5a8082a2a0445eabaa914ea72bae", "title": "A review of functional Near-Infrared Spectroscopy measurements in naturalistic environments", "text": "The development of novel miniaturized wireless and wearable functional Near-Infrared Spectroscopy (fNIRS) devices have paved the way to new functional brain imaging that can revolutionize the cognitive research fields. Over the past few decades, several studies have been conducted with conventional fNIRS systems that have demonstrated the suitability of this technology for a wide variety of populations and applications, to investigate both the healthy brain and the diseased brain. However, what makes wearable fNIRS even more appealing is its capability to allow more ecologically-valid measurements in everyday life scenarios that are not possible with other gold-standard neuroimaging modalities, such as functional Magnetic Resonance Imaging. This can have"}
{"_id": "afd45a78b319032b19afd5553ee8504ff8319852", "title": "Programming with Abstract Data Types", "text": "The motivation behind the work in very-high-level languages is to ease the programming task by providing the programmer with a language containing primitives or abstractions suitable to his problem area. The programmer is then able to spend his effort in the right place; he concentrates on solving his problem, and the resulting program will be more reliable as a result. Clearly, this is a worthwhile goal.\n Unfortunately, it is very difficult for a designer to select in advance all the abstractions which the users of his language might need. If a language is to be used at all, it is likely to be used to solve problems which its designer did not envision, and for which the abstractions embedded in the language are not sufficient.\n This paper presents an approach which allows the set of built-in abstractions to be augmented when the need for a new data abstraction is discovered. This approach to the handling of abstraction is an outgrowth of work on designing a language for structured programming. Relevant aspects of this language are described, and examples of the use and definitions of abstractions are given."}
{"_id": "d593a0d4682012312354797938fbaa053e652f0d", "title": "The influence of user-generated content on traveler behavior: An empirical investigation on the effects of e-word-of-mouth to hotel online bookings", "text": "The increasing use of web 2.0 applications has generated numerous online user reviews. Prior studies have revealed the influence of user-generated reviews on the sales of products such as CDs, books, and movies. However, the influence of online user-generated reviews in the tourism industry is still largely unknown both to tourism researchers and practitioners. To bridge this knowledge gap in tourism management, we conducted an empirical study to identify the impact of online user-generated reviews on business performance using data extracted from a major online travel agency in China. The empirical findings show that traveler reviews have a significant impact on online sales, with a 10 percent increase in traveler review ratings boosting online bookings by more than five percent. Our results highlight the importance of online user-generated reviews to business performance in tourism. 2010 Elsevier Ltd. All rights reserved."}
{"_id": "44b05f00f032c0d59df4b712b83ff82a913e950a", "title": "Designing for Exploratory Search on Touch Devices", "text": "Exploratory search confront users with challenges in expressing search intents as the current search interfaces require investigating result listings to identify search directions, iterative typing, and reformulating queries. We present the design of Exploration Wall, a touch-based search user interface that allows incremental exploration and sense-making of large information spaces by combining entity search, flexible use of result entities as query parameters, and spatial configuration of search streams that are visualized for interaction. Entities can be flexibly reused to modify and create new search streams, and manipulated to inspect their relationships with other entities. Data comprising of task-based experiments comparing Exploration Wall with conventional search user interface indicate that Exploration Wall achieves significantly improved recall for exploratory search tasks while preserving precision. Subjective feedback supports our design choices and indicates improved user satisfaction and engagement. Our findings can help to design user interfaces that can effectively support exploratory search on touch devices."}
{"_id": "4b4afd45404d2fd994c5bd0fb79181e702594b61", "title": "The Eyes Have It: A Task by Data Type Taxonomy for Information Visualizations", "text": "A useful starting point for designing advanced graphical user interfaces is the Visual InformationSeeking Mantra: Overview first, zoom and filter, then details-on-demand. But this is only a starting point in trying to understand the rich and varied set of information visualizations that have been proposed in recent years. This paper offers a task by data type taxonomy with seven data types (1-, 2-, 3-dimensional data, temporal and multi-dimensional data, and tree and network data) and seven tasks (overview, zoom, filter, details-on-demand, relate, history, and extract)."}
{"_id": "7e9507924ceebd784503fd25128218a7119ff722", "title": "TopicPanorama: A Full Picture of Relevant Topics", "text": "This paper presents a visual analytics approach to analyzing a full picture of relevant topics discussed in multiple sources, such as news, blogs, or micro-blogs. The full picture consists of a number of common topics covered by multiple sources, as well as distinctive topics from each source. Our approach models each textual corpus as a topic graph. These graphs are then matched using a consistent graph matching method. Next, we develop a level-of-detail (LOD) visualization that balances both readability and stability. Accordingly, the resulting visualization enhances the ability of users to understand and analyze the matched graph from multiple perspectives. By incorporating metric learning and feature selection into the graph matching algorithm, we allow users to interactively modify the graph matching result based on their information needs. We have applied our approach to various types of data, including news articles, tweets, and blog data. Quantitative evaluation and real-world case studies demonstrate the promise of our approach, especially in support of examining a topic-graph-based full picture at different levels of detail."}
{"_id": "a96898490180c86b63eee2e801de6e25de5aa71d", "title": "A user-centric evaluation framework for recommender systems", "text": "This research was motivated by our interest in understanding the criteria for measuring the success of a recommender system from users' point view. Even though existing work has suggested a wide range of criteria, the consistency and validity of the combined criteria have not been tested. In this paper, we describe a unifying evaluation framework, called ResQue (Recommender systems' Quality of user experience), which aimed at measuring the qualities of the recommended items, the system's usability, usefulness, interface and interaction qualities, users' satisfaction with the systems, and the influence of these qualities on users' behavioral intentions, including their intention to purchase the products recommended to them and return to the system. We also show the results of applying psychometric methods to validate the combined criteria using data collected from a large user survey. The outcomes of the validation are able to 1) support the consistency, validity and reliability of the selected criteria; and 2) explain the quality of user experience and the key determinants motivating users to adopt the recommender technology. The final model consists of thirty two questions and fifteen constructs, defining the essential qualities of an effective and satisfying recommender system, as well as providing practitioners and scholars with a cost-effective way to evaluate the success of a recommender system and identify important areas in which to invest development resources."}
{"_id": "054fc7be5084e0f2cec80e6d708c3eafbffc6497", "title": "Interaction Design for Recommender Systems", "text": "1 kirstens@sims.berkeley.edu, sinha@sims.berkeley.edu ABSTRACT Recommender systems act as personalized decision guides for users, aiding them in decision making about matters related to personal taste. Research has focused mostly on the algorithms that drive the system, with little understanding of design issues from the user\u2019s perspective. The goal of our research is to study users\u2019 interactions with recommender systems in order to develop general design guidelines. We have studied users\u2019 interactions with 11 online recommender systems. Our studies have highlighted the role of transparency (understanding of system logic), familiar recommendations, and information about recommended items in the user\u2019s interaction with the system. Our results also indicate that there are multiple models for successful recommender systems."}
{"_id": "44c299893ac76287e37d4d3ee8bf4b5f81cf37e9", "title": "Driving cycle-based design optimization of interior permanent magnet synchronous motor drives for electric vehicle application", "text": "The paper discusses the influence of driving cycles on the design optimization of permanent magnet synchronous machines. A bi-objective design optimization is presented for synchronous machines with V-shaped buried magnets. The machine length and the loss energy over a given driving cycle are defined as objectives. A total of 14 parameters defining geometry and winding layout are chosen as design parameters. Additionally some constraints like speed-torque-requirements and minimal stray field bridge widths are set. The optimization problem is solved using 2D finite element analysis and a high-performance differential evolution algorithm. The analyses are performed for the ARTEMIS driving cycle due to the more realistic driving behavior in comparison to the most commonly used New European Driving Cycle. Furthermore, a reference design optimization against the rated point loss energy is presented. The results show a much better performance of the driving cycle optimized machines in comparison to the rated point optimized machines in terms of the cycle-based loss energy. Loss savings depend strongly on the machine length and are approximately in between 15% and 45%."}
{"_id": "a154e688baa929c335dd9a673592797ec3c27281", "title": "Learning From Positive and Unlabeled Data: A Survey", "text": "Learning from positive and unlabeled data or PU learning is the setting where a learner only has access to positive examples and unlabeled data. The assumption is that the unlabeled data can contain both positive and negative examples. This setting has attracted increasing interest within the machine learning literature as this type of data naturally arises in applications such as medical diagnosis and knowledge base completion. This article provides a survey of the current state of the art in PU learning. It proposes seven key research questions that commonly arise in this field and provides a broad overview of how the field has tried to address them."}
{"_id": "454b7a0806de86a95a7b7df8ed3f196aff66532d", "title": "Machine Learning Techniques in ADAS: A Review", "text": "What machine learning (ML) technique is used for system intelligence implementation in ADAS (advanced driving assistance system)? This paper tries to answer this question. This paper analyzes ADAS and ML independently and then relate which ML technique is applicable to what ADAS component and why. The paper gives a good grasp of the current state-of-the-art. Sample works in supervised, unsupervised, deep and reinforcement learnings are presented; their strengths and rooms for improvements are also discussed. This forms part of the basics in understanding autonomous vehicle. This work is a contribution to the ongoing research in ML aimed at reducing road traffic accidents and fatalities, and the invocation of safe driving."}
{"_id": "b5e26adc2dfb7ad04f8f9286b7eac5f12c4a6714", "title": "Radian: Visual Exploration of Traceroutes", "text": "Several projects deploy probes in the Internet. Probes are systems that continuously perform traceroutes and other networking measurements (e.g., ping) towards selected targets. Measurements can be stored and analyzed to gain knowledge on several aspects of the Internet, but making sense of such data requires suitable methods and tools for exploration and visualization. We present Radian, a tool that allows to visualize traceroute paths at different levels of detail and to animate their evolution during a selected time interval. We also describe extensive tests of the tool using traceroutes performed by RIPE Atlas Internet probes."}
{"_id": "4a49ca9bc897fd1d8205371faca4bbaefcd2248e", "title": "DifNet: Semantic Segmentation by Diffusion Networks", "text": "Deep Neural Networks (DNNs) have recently shown state of the art performance on semantic segmentation tasks, however, they still suffer from problems of poor boundary localization and spatial fragmented predictions. The difficulties lie in the requirement of making dense predictions from a long path model all at once, since details are hard to keep when data goes through deeper layers. Instead, in this work, we decompose this difficult task into two relative simple sub-tasks: seed detection which is required to predict initial predictions without the need of wholeness and preciseness, and similarity estimation which measures the possibility of any two nodes belong to the same class without the need of knowing which class they are. We use one branch network for one sub-task each, and apply a cascade of random walks base on hierarchical semantics to approximate a complex diffusion process which propagates seed information to the whole image according to the estimated similarities. The proposed DifNet consistently produces improvements over the baseline models with the same depth and with the equivalent number of parameters, and also achieves promising performance on Pascal VOC and Pascal Context dataset. Our DifNet is trained end-to-end without complex loss functions."}
{"_id": "1c6ee895c202a91a808de59445e3dbde2f4cda0e", "title": "Any domain parsing: automatic domain adaptation for natural language parsing", "text": "of \u201cAny Domain Parsing: Automatic Domain Adaptation for Natural Language Parsing\u201d by David McClosky , Ph.D., Brown University, May, 2010. Current efforts in syntactic parsing are largely data-driv en. These methods require labeled examples of syntactic structures to learn statistical patterns governing these structures. Labeled data typically requires expert annotators which makes it both time consuming and costly to p roduce. Furthermore, once training data has been created for one textual domain, portability to similar domains is limited. This domain-dependence has inspired a large body of work since syntactic parsing aims to capture syntactic patterns across an entire language rather than just a specific domain. The simplest approach to this task is to assume that the targe t domain is essentially the same as the source domain. No additional knowledge about the target domain is i cluded. A more realistic approach assumes that only raw text from the target domain is available. This a s umption lends itself well to semi-supervised learning methods since these utilize both labeled and unlab eled examples. This dissertation focuses on a family of semi-supervised me thods called self-training. Self-training creates semi-supervised learners from existing supervised learne rs with minimal effort. We first show results on self-training for constituency parsing within a single dom ain. While self-training has failed here in the past, we present a simple modification which allows it to succeed, p roducing state-of-the-art results for English constituency parsing. Next, we show how self-training is be neficial when parsing across domains and helps further when raw text is available from the target domain. On e of the remaining issues is that one must choose a training corpus appropriate for the target domain or perfo rmance may be severely impaired. Humans can do this in some situations, but this strategy becomes less prac tical as we approach larger data sets. We present a technique, Any Domain Parsing, which automatically detect s useful source domains and mixes them together to produce a customized parsing model. The resulting models perform almost as well as the best seen parsing models (oracle) for each target domain. As a result, we have a fully automatic syntactic constituency parser which can produce high-quality parses for all types of text, regardless of domain. Abstract of \u201cAny Domain Parsing: Automatic Domain Adaptation for Natural Language Parsing\u201d by David McClosky , Ph.D., Brown University, May, 2010.of \u201cAny Domain Parsing: Automatic Domain Adaptation for Natural Language Parsing\u201d by David McClosky , Ph.D., Brown University, May, 2010. Current efforts in syntactic parsing are largely data-driv en. These methods require labeled examples of syntactic structures to learn statistical patterns governing these structures. Labeled data typically requires expert annotators which makes it both time consuming and costly to p roduce. Furthermore, once training data has been created for one textual domain, portability to similar domains is limited. This domain-dependence has inspired a large body of work since syntactic parsing aims to capture syntactic patterns across an entire language rather than just a specific domain. 
The simplest approach to this task is to assume that the targe t domain is essentially the same as the source domain. No additional knowledge about the target domain is i cluded. A more realistic approach assumes that only raw text from the target domain is available. This a s umption lends itself well to semi-supervised learning methods since these utilize both labeled and unlab eled examples. This dissertation focuses on a family of semi-supervised me thods called self-training. Self-training creates semi-supervised learners from existing supervised learne rs with minimal effort. We first show results on self-training for constituency parsing within a single dom ain. While self-training has failed here in the past, we present a simple modification which allows it to succeed, p roducing state-of-the-art results for English constituency parsing. Next, we show how self-training is be neficial when parsing across domains and helps further when raw text is available from the target domain. On e of the remaining issues is that one must choose a training corpus appropriate for the target domain or perfo rmance may be severely impaired. Humans can do this in some situations, but this strategy becomes less prac tical as we approach larger data sets. We present a technique, Any Domain Parsing, which automatically detect s useful source domains and mixes them together to produce a customized parsing model. The resulting models perform almost as well as the best seen parsing models (oracle) for each target domain. As a result, we have a fully automatic syntactic constituency parser which can produce high-quality parses for all types of text, regardless of domain. Any Domain Parsing: Automatic Domain Adaptation for Natural Language Parsing by David McClosky B. S., University of Rochester, 2004 Sc. M., Brown University, 2006 A dissertation submitted in partial fulfillment of the requirements for the Degree of Doctor of Philosophy in the Department of Computer Science at Brown University. Providence, Rhode Island May, 2010 c \u00a9 Copyright 2010 by David McClosky This dissertation by David McClosky is accepted in its prese nt form by the Department of Computer Science as satisfying the disser tation requirement for the degree of Doctor of Philosophy. Date Eugene Charniak, Director Recommended to the Graduate Council Date Mark Johnson, Reader Cognitive and Linguistic Sciences Date Dan Klein, Reader University of California at Berkeley Approved by the Graduate Council Date Sheila Bond Dean of the Graduate School"}
{"_id": "bd75508d29a69fbf3bf1454c06554e6850d76bd5", "title": "Walking > Walking-in-Place > Flying, in Virtual Environments", "text": "A study by Slater, et al., [1995] indicated that naive subjects in an immersive virtual environment experience a higher subjective sense of presence when they locomote by walking-in-place (virtual walking) than when they push-button-fly (along the floor plane). We replicated their study, adding real walking as a third condition. Our study confirmed their findings. We also found that real walking is significantly better than both virtual walking and flying in ease (simplicity, straightforwardness, naturalness) as a mode of locomotion. The greatest difference in subjective presence was between flyers and both kinds of walkers. In addition, subjective presence was higher for real walkers than virtual walkers, but the difference was statistically significant only in some models. Follow-on studies show virtual walking can be substantially improved by detecting footfalls with a head accelerometer. As in the Slater study, subjective presence significantly correlated with subjects\u2019 degree of association with their virtual bodies (avatars). This, our strongest statistical result, suggests that substantial potential presence gains can be had from tracking all limbs and customizing avatar appearance. An unexpected by-product was that real walking through our enhanced version of Slater\u2019s visual-cliff virtual environment (Figure 1) yielded a strikingly compelling virtual experience\u2014the strongest we and most of our visitors have yet experienced. The most needed system improvement is the substitution of wireless technology for all links to the user. CR"}
{"_id": "2cac9d4521fb51f1644a6a318df5ed84e8a1878d", "title": "NL2KB: Resolving Vocabulary Gap between Natural Language and Knowledge Base in Knowledge Base Construction and Retrieval", "text": "Words to express relations in natural language (NL) statements may be different from those to represen t properties in knowledge bases (KB). The vocabulary g p becomes barriers for knowledge base construction and retrieval. With the demo system called NL2KB in this paper, users can browse which properties in KB side may be mapped to for a given relati onal pattern in NL side. Besides, they can retrieve the sets of relational patterns in NL side for a given property in KB side. We describe how the mapping is established in detail. Although the mined patterns are used for Chinese knowledge base applications, t he methodology can be extended to other languages."}
{"_id": "c80a825c336431658efb2cf4c82d6797eb4054fe", "title": "Building a Web-Scale Dependency-Parsed Corpus from CommonCrawl", "text": "We present DEPCC, the largest-to-date linguistically analyzed corpus in English including 365 million documents, composed of 252 billion tokens and 7.5 billion of named entity occurrences in 14.3 billion sentences from a web-scale crawl of the COMMON CRAWL project. The sentences are processed with a dependency parser and with a named entity tagger and contain provenance information, enabling various applications ranging from training syntax-based word embeddings to open information extraction and question answering. We built an index of all sentences and their linguistic meta-data enabling quick search across the corpus. We demonstrate the utility of this corpus on the verb similarity task by showing that a distributional model trained on our corpus yields better results than models trained on smaller corpora, like Wikipedia. This distributional model outperforms the state of art models of verb similarity trained on smaller corpora on the SimVerb3500 dataset."}
{"_id": "0a22c2d4a7a05db1afa1b702bcb1faa3ff63b6e8", "title": "Universal Blind Quantum Computation", "text": "We present a protocol which allows a client to have a server carry out a quantum computation for her such that the client's inputs, outputs and computation remain perfectly private, and where she does not require any quantum computational power or memory. The client only needs to be able to prepare single qubits randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. Our protocol is interactive: after the initial preparation of quantum states, the client and server use two-way classical communication which enables the client to drive the computation, giving single-qubit measurement instructions to the server, depending on previous measurement outcomes. Our protocol works for inputs and outputs that are either classical or quantum. We give an authentication protocol that allows the client to detect an interfering server; our scheme can also be made fault-tolerant. We also generalize our result to the setting of a purely classical client who communicates classically with two non-communicating entangled servers, in order to perform a blind quantum computation. By incorporating the authentication protocol, we show that any problem in BQP has an entangled two-prover interactive proof with a purely classical verifier. Our protocol is the first universal scheme which detects a cheating server, as well as the first protocol which does not require any quantum computation whatsoever on the client's side. The novelty of our approach is in using the unique features of measurement-based quantum computing which allows us to clearly distinguish between the quantum and classical aspects of a quantum computation."}
{"_id": "4774b4f88a26eff2d22372acc17168b2d1c58b0d", "title": "The evolution to 4G cellular systems: LTE-Advanced", "text": "This paper provides an in-depth view on the technologies being considered for Long Term Evolution-Advanced (LTE-Advanced). First, the evolution from third generation (3G) to fourth generation (4G) is described in terms of performance requirements and main characteristics. The new network architecture developed by the Third Generation Partnership Project (3GPP), which supports the integration of current and future radio access technologies, is highlighted. Then, the main technologies for LTE-Advanced are explained, together with possible improvements, their associated challenges, and some approaches that have been considered to tackle those challenges."}
{"_id": "8e0bace83c69e81cf7c68ce007347c4775204cd0", "title": "A closer look at GPUs", "text": "As the line between GPUs and CPUs begins to blur, it's important to understand what makes GPUs tick."}
{"_id": "c8dce54deaaa3bc13585ab458720d1e08f4d5d89", "title": "MRR: an unsupervised algorithm to rank reviews by relevance", "text": "The automatic detection of relevant reviews plays a major role in tasks such as opinion summarization, opinion-based recommendation, and opinion retrieval. Supervised approaches for ranking reviews by relevance rely on the existence of a significant, domain-dependent training data set. In this work, we propose MRR (Most Relevant Reviews), a new unsupervised algorithm that identifies relevant revisions based on the concept of graph centrality. The intuition behind MRR is that central reviews highlight aspects of a product that many other reviews frequently mention, with similar opinions, as expressed in terms of ratings. MRR constructs a graph where nodes represent reviews, which are connected by edges when a minimum similarity between a pair of reviews is observed, and then employs PageRank to compute the centrality. The minimum similarity is graph-specific, and takes into account how reviews are written in specific domains. The similarity function does not require extensive pre-processing, thus reducing the computational cost. Using reviews from books and electronics products, our approach has outperformed the two unsupervised baselines and shown a comparable performance with two supervised regression models in a specific setting. MRR has also achieved a significantly superior run-time performance in a comparison with the unsupervised baselines."}
{"_id": "10e70e16e5a68d52fa2c9d0a452db9ed2f9403aa", "title": "A generalized uncertainty principle and sparse representation in pairs of bases", "text": "An elementary proof of a basic uncertainty principle concerning pairs of representations of RN vectors in different orthonormal bases is provided. The result, slightly stronger than stated before, has a direct impact on the uniqueness property of the sparse representation of such vectors using pairs of orthonormal bases as overcomplete dictionaries. The main contribution in this paper is the improvement of an important result due to Donoho and Huo concerning the replacement of the l0 optimization problem by a linear programming minimization when searching for the unique sparse representation."}
{"_id": "6059b76d64bcd03332f8e5dd55ea5c61669d42dd", "title": "Happiness unpacked: positive emotions increase life satisfaction by building resilience.", "text": "Happiness-a composite of life satisfaction, coping resources, and positive emotions-predicts desirable life outcomes in many domains. The broaden-and-build theory suggests that this is because positive emotions help people build lasting resources. To test this hypothesis, the authors measured emotions daily for 1 month in a sample of students (N = 86) and assessed life satisfaction and trait resilience at the beginning and end of the month. Positive emotions predicted increases in both resilience and life satisfaction. Negative emotions had weak or null effects and did not interfere with the benefits of positive emotions. Positive emotions also mediated the relation between baseline and final resilience, but life satisfaction did not. This suggests that it is in-the-moment positive emotions, and not more general positive evaluations of one's life, that form the link between happiness and desirable life outcomes. Change in resilience mediated the relation between positive emotions and increased life satisfaction, suggesting that happy people become more satisfied not simply because they feel better but because they develop resources for living well."}
{"_id": "b04a503487bc6505aa8972fd690da573f771badb", "title": "Interpretable Learning for Self-Driving Cars by Visualizing Causal Attention", "text": "Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable - they should provide easy-tointerpret rationales for their behavior - so that passengers, insurance companies, law enforcement, developers etc., can understand what triggered a particular behavior. Here we explore the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network\u2019s output (steering control). Our approach is two-stage. In the first stage, we use a visual attention model to train a convolution network endto- end from images to steering angle. The attention model highlights image regions that potentially influence the network\u2019s output. Some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network\u2019s behavior. We demonstrate the effectiveness of our model on three datasets totaling 16 hours of driving. We first show that training with attention does not degrade the performance of the end-to-end network. Then we show that the network causally cues on a variety of features that are used by humans while driving."}
{"_id": "43f7c23ff80f8d96cc009bf3a0b030658ff9ad8b", "title": "An exploratory study of backtracking strategies used by developers", "text": "Developers frequently backtrack while coding. They go back to an earlier state by removing inserted code or by restoring removed code for various reasons. However, little is known about when and how the developers backtrack, and modern IDEs do not provide much assistance for backtracking. As a first step towards gathering baseline knowledge about backtracking and designing more robust backtracking assistance tools in modern IDEs, we conducted an exploratory study with 12 professional developers and a follow-up online survey. Our study revealed several barriers they faced while backtracking. Subjects often manually commented and uncommented code, and often had difficulty finding relevant parts to backtrack. Backtracking was reported to be needed by 3/4 of the developers at least \"sometimes\"."}
{"_id": "baee37b414e52bbbcbd3a30fe9112cfcc70e0c88", "title": "Scientific and Pragmatic Challenges for Bridging Education and Neuroscience", "text": "Neuroscience has experienced rapid growth in recent years, spurred in part by the U.S. government\u2019s designation of the 1990s as \u201cThe Decade of the Brain\u201d (Jones & Mendell, 1999). The rapid development of functional neuroimaging techniques has given researchers unprecedented access to the behaving brains of healthy children and adults. The result has been a wave of new insights into thinking, emotion, motivation, learning, and development. As these insights suffuse the social sciences, they sometimes inspire reconsideration of existing explanations. This is most true in psychology, as marked by the births of cognitive neuroscience (Gazzaniga, Ivry, & Mangun, 2002), developmental neuroscience (Johnson, Munakata, & Gilmore, 2001), and social neuroscience (Cacioppo, Visser, & Pickett, 2005). It is increasingly true in economics, where the rapid rise of neuroeconomics (Camerer, Loewenstein, & Prelec, 2005) has caught the attention of the popular press (Cassidy, 2006). Other social sciences, including communication (Anderson et al., 2006), political science (McDermott, 2004), and sociology (Wexler, 2006), are just beginning to confront the question of whether their research can be informed by neuroscience. Education is somewhere between the two poles of early adopters and tentative newcomers. A decade ago, in this journal, Bruer (1997) forcefully considered the relevance of neuroscience to education. His conclusion\u2014that neuroscience is \u201ca bridge too far\u201d\u2014was noteworthy because Bruer was then director of the McDonnell Foundation, which was actively funding research in both disciplines. Although it was in his best interests to find connections between the disciplines, he found instead poorly drawn extrapolations that inflated neuroscience findings into educational neuromyths. Since Bruer\u2019s cautionary evaluation, a number of commentators have considered the prospects for educational neuroscience. Many sound a more optimistic note (Ansari & Coch, 2006; Byrnes & Fox, 1998; Geake & Cooper, 2003; Goswami, 2006; Petitto & Dunbar, in press), and a textbook has even appeared (Blakemore & Frith, 2005). In this article, we negotiate the middle ground between the pessimism of Bruer and the optimism of those who followed. Table 1 summarizes eight concerns about connecting education and neuroscience. Some are drawn from Bruer (1997) and the ensuing commentaries. Others come from conversations with colleagues in both disciplines, and still others from our own experiences. These concerns do not seem to represent a blanket dismissal but rather a genuine curiosity (tempered by a healthy skepticism) about the implications of neuroscience for education. We begin by articulating the concerns along with some facts about neuroscience that make the concerns more concrete. We voice them in the strong tone in which we have heard them espoused. We then revisit the concerns, reinterpreting them as potential opportunities (also in Table 1). This approach permits us to review a selection of neuroscience studies relevant to content learning. We focus on recent functional magnetic resonance imaging (fMRI), or neuroimaging, studies for reasons of space and because these are the findings that have captured the most attention, both in the academy and in the popular press. 
Ideally, our review illustrates some elements of neuroscience so that education researchers can think more specifically about the prospects of educational neuroscience. We conclude with two reflections on moving from armchair arguments of a philosophical nature to scientific action on the ground. First, we argue that education and neuroscience can be bridged if (and only if) researchers collaborate across disciplinary lines on tractable problems of common interest. It is the success"}
{"_id": "4a85b7fa5e81ae4cefc963c897ed6cf734a15fbc", "title": "Animal-assisted therapy and loneliness in nursing homes: use of robotic versus living dogs.", "text": "Loneliness is a common problem in long-term care facilities (LTCF) and previous work has shown that animal-assisted therapy (AAT) can to some degree reverse loneliness. Here, we compared the ability of a living dog (Dog) and a robotic dog (AIBO) to treat loneliness in elderly patients living in LTCF. In comparison with a control group not receiving AAT, both the Dog and AIBO groups had statistically significant improvements in their levels of loneliness. As measured by a modified Lexington Attachment to Pets Scale (MLAPS), residents showed high levels of attachment to both the dog and AIBO. Subscale analysis showed that the AIBO group scored lower than the living dog on \"animal rights/animal welfare\" but not on \"general attachment\" or \"people substituting.\" However, MLAPS measures did not correlate with changes in loneliness, showing that attachment was not the mechanism by which AAT decreases loneliness. We conclude that interactive robotic dogs can reduce loneliness in residents of LTCF and that residents become attached to these robots. However, level of attachment does not explain the decrease in loneliness associated with AAT conducted with either a living or robotic dog."}
{"_id": "24f4c9ea98592710dc22d12de43903a74b29335b", "title": "Modelling and Simulation of Four Quadrant Operation of Three Phase Brushless DC Motor With Hysteresis Current", "text": "Brushless DC (BLDC) motor drives are becoming more popular in industrial, traction applications. This makes the control of BLDC motor in all the four quadrants very vitalThe motor is operated in four steady state operating modes of torque-speed plane. To control a BLDC machine it is generally required to measure the speed and position of rotor by using the sensor because the inverter phases, acting at any time, must be commutated depending on the rotor position. Simulation of the proposed model was done using MATLAB/ SIMULINK."}
{"_id": "a3c6aa7949501e1bea1380bfcdd5344dcd152e30", "title": "3D Soft Tissue Analysis \u2013 Part 1: Sagittal Parameters", "text": "The aim of this study was to develop a reliable threedimensional (3D) analysis of facial soft tissues. We determined the mean sagittal 3D values and relationships between sagittal skeletal parameters, and digitally recorded 3D soft tissue parameters. A total of 100 adult patients (n\u00aa = 53, n\u00ba = 47) of Caucasian ethnic origin were included in this study. Patients with syndromes, cleft lip and palate, noticeable asymmetry or anomalies in the number of teeth were excluded. Arithmetic means for seven sagittal 3D soft tissue parameters were calculated. The parameters were analyzed biometrically in terms of their reliability and gender-specific differences. The 3D soft tissue values were further analyzed for any correlations with sagittal cephalometric values. Reproducible 3D mean values were defined for 3D soft tissue parameters. We detected gender-specific differences among the parameters. Correlations between the sagittal 3D soft tissue and cephalometric measurements were statistically significant. 3D soft tissue analysis provides additional information on the sagittal position of the jaw bases and on intermaxillary sagittal relations. Further studies are needed to integrate 3D soft tissue analysis in future treatment planning and assessment as a supportive diagnostic tool. Ziel dieser Untersuchung war die Entwicklung einer reliablen dreidimensionalen (3D) Analyse der Gesichtsweichteile. Es sollten sagittale 3D-Durchschnittswerte bestimmt werden und Beziehungen zwischen sagittalen skelettalen Parametern und digital erfassten 3D-Weichteilparametern dargestellt werden. In die Studie eingeschlossen wurden 100 erwachsene Patienten (n\u00aa = 53, n\u00ba = 47) kaukasischer Herkunft. Ausgeschlossen wurden Patienten mit Syndromen, LKGSSpalten, auff\u00e4lligen Asymmetrien oder Anomalien der Zahnzahl. Es wurden arithmetische Mittelwerte f\u00fcr sieben sagittale 3DWeichteilparameter ermittelt. Die Parameter wurden biometrisch hinsichtlich ihrer Reliabilit\u00e4t und geschlechtsspezifischer Unterschiede analysiert. Des Weiteren wurden die 3D-Weichteilwerte bez\u00fcglich bestehender Korrelationen zu sagittalen kephalometrischen Werten untersucht. F\u00fcr die 3D-Weichteilparameter konnten reproduzierbare 3D-Durchschnittswerte definiert werden. Innerhalb der Parameter lie\u00dfen sich geschlechtsspezifische Unterschiede feststellen. Die Korrelationen zwischen den sagittalen 3D-Weichteilmessungen und den kephalometrischen Messungen waren statistisch signifikant. Die 3D-Weichteilanalyse l\u00e4sst Aussagen sowohl \u00fcber den sagittalen Einbau der Kieferbasen als auch \u00fcber die sagittalen intermaxill\u00e4ren Beziehungen zu. Weitere Untersuchungen sind w\u00fcnschenswert, um die 3D-Weichteildiagnostik zuk\u00fcnftig als unterst\u00fctzendes diagnostisches Instrument in die Behandlungsplanung und -bewertung integrieren zu k\u00f6nnen."}
{"_id": "9e0d8f69eccaee03e20020b7278510e3e41e1c47", "title": "Binary partition tree as an efficient representation for image processing, segmentation, and information retrieval", "text": "This paper discusses the interest of binary partition trees as a region-oriented image representation. Binary partition trees concentrate in a compact and structured representation a set of meaningful regions that can be extracted from an image. They offer a multiscale representation of the image and define a translation invariant 2-connectivity rule among regions. As shown in this paper, this representation can be used for a large number of processing goals such as filtering, segmentation, information retrieval and visual browsing. Furthermore, the processing of the tree representation leads to very efficient algorithms. Finally, for some applications, it may be interesting to compute the binary partition tree once and to store it for subsequent use for various applications. In this context, the paper shows that the amount of bits necessary to encode a binary partition tree remains moderate."}
{"_id": "d6590674fe5fb16cf93c9ead436555d4d984d870", "title": "Making sense of social media streams through semantics: A survey", "text": "Using semantic technologies for mining and intelligent information access to social media is a challenging, emerging research area. Traditional search methods are no longer able to address the more complex information seeking behaviour in media streams, which has evolved towards sense making, learning, investigation, and social search. Unlike carefully authored news text and longer web context, social media streams pose a number of new challenges, due to their large-scale, short, noisy, contextdependent, and dynamic nature. This paper defines five key research questions in this new application area, examined through a survey of state-of-the-art approaches for mining semantics from social media streams; user, network, and behaviour modelling; and intelligent, semanticbased information access. The survey includes key methods not just from the Semantic Web research field, but also from the related areas of natural language processing and user modelling. In conclusion, key outstanding challenges are discussed and new directions for research are proposed."}
{"_id": "c86736a80661552937ed77bb4bffe7e5e246cc87", "title": "A Document Skew Detection Method Using the Hough Transform", "text": "Document image processing has become an increasingly important technology in the automation of office documentation tasks. Automatic document scanners such as text readers and OCR (Optical Character Recognition) systems are an essential component of systems capable of those tasks. One of the problems in this field is that the document to be read is not always placed correctly on a flatbed scanner. This means that the document may be skewed on the scanner bed, resulting in a skewed image. This skew has a detrimental effect on document on document analysis, document understanding, and character segmentation and recognition. Consequently, detecting the skew of a document image and correcting it are important issues in realising a practical document reader. In this paper we describe a new algorithm for skew detection. We then compare the performance and results of this skew detection algorithm to other publidhed methods form O'Gorman, Hinds, Le, Baird, Posel and Akuyama. Finally, we discuss the theory of skew detection and the different apporaches taken to solve the problem of skew in documents. The skew correction algorithm we propose has been shown to be extremenly fast, with run times averaging under 0.25 CPU seconds to calculate the angle on the DEC 5000/20 workstation."}
{"_id": "36927265f588ed093c2cbdbf7bf95ddd72f000a9", "title": "Performance Evaluation of Bridgeless PFC Boost Rectifiers", "text": "In this paper, a systematic review of bridgeless power factor correction (PFC) boost rectifiers, also called dual boost PFC rectifiers, is presented. Performance comparison between the conventional PFC boost rectifier and a representative member of the bridgeless PFC boost rectifier family is performed. Loss analysis and experimental efficiency evaluation for both CCM and DCM/CCM boundary operations are provided."}
{"_id": "77c87f82a73edab2c46d600fc3d7821cdb15359a", "title": "State-of-the-art, single-phase, active power-factor-correction techniques for high-power applications - an overview", "text": "A review of high-performance, state-of-the-art, active power-factor-correction (PFC) techniques for high-power, single-phase applications is presented. The merits and limitations of several PFC techniques that are used in today's network-server and telecom power supplies to maximize their conversion efficiencies are discussed. These techniques include various zero-voltage-switching and zero-current-switching, active-snubber approaches employed to reduce reverse-recovery-related switching losses, as well as techniques for the minimization of the conduction losses. Finally, the effect of recent advancements in semiconductor technology, primarily silicon-carbide technology, on the performance and design considerations of PFC converters is discussed."}
{"_id": "864e1700594dfdf46a4981b5bc07a54ebeab11ba", "title": "Bridgeless PFC implementation using one cycle control technique", "text": "Conventional boost PFC suffers from the high conduction loss in the input rectifier-bridge. Higher efficiency can be achieved by using the bridgeless boost topology. This new circuit has issues such as voltage sensing, current sensing and EMI noise. In this paper, one cycle control technique is used to solve the issues of the voltage sensing and current sensing. Experimental results show efficiency improvement and EMI performance"}
{"_id": "c0b7e09f212ec85da22974c481e7b93efeba1504", "title": "Common mode noise modeling and analysis of dual boost PFC circuit", "text": "To achieve high efficiency PFC front stage in switching mode power supply (SMPS), dual boost PFC (DBPFC) topology shows superior characteristics compared with traditional boost PFC, but it by nature brings higher EMI noise, especially common mode (CM) noise. This paper deals with the modeling and analysis of DBPFC CM noise based on and compared with boost PFC, noise propagation equivalent circuits of both topologies are deduced, and theoretical analysis illustrates the difference. Experiments are performed to validate the EMI model and analysis."}
{"_id": "60ba158cb1a619726db31b684a7bf818e2f8256b", "title": "Common mode EMI noise suppression in bridgeless boost PFC converter", "text": "Bridgeless boost PFC converter has high efficiency by eliminating the input diode bridge. However, Common Mode (CM) conducted EMI becomes a great issue. The goal of this paper is to study the possibility to minimize the CM noise in this converter. First the noise model is studied. Then a balance concept is introduced and applied to cancel the CM noise. Two approaches to minimize CM noise are introduced and compared. Experiments verify the effectiveness of both approaches."}
{"_id": "6e81f26db102e3e261f4fd35251e2f1315709258", "title": "A design methodology of chip-to-chip wireless power transmission system", "text": "A design methodology to transmit power using a chip-to-chip wireless interface is proposed. The proposed power transmission system is based on magnetic coupling, and the power transmission of 5mW/mm2 was verified. The transmission efficiency trade-off with the transmitted power is also discussed."}
{"_id": "4155199a98a1c618d2c73fe850f582483addd7ff", "title": "Options for Control of Reactive Power by Distributed Photovoltaic Generators", "text": "High-penetration levels of distributed photovoltaic (PV) generation on an electrical distribution circuit present several challenges and opportunities for distribution utilities. Rapidly varying irradiance conditions may cause voltage sags and swells that cannot be compensated by slowly responding utility equipment resulting in a degradation of power quality. Although not permitted under current standards for interconnection of distributed generation, fast-reacting, VAR-capable PV inverters may provide the necessary reactive power injection or consumption to maintain voltage regulation under difficult transient conditions. As side benefit, the control of reactive power injection at each PV inverter provides an opportunity and a new tool for distribution utilities to optimize the performance of distribution circuits, e.g., by minimizing thermal losses. We discuss and compare via simulation various design options for control systems to manage the reactive power generated by these inverters. An important design decision that weighs on the speed and quality of communication required is whether the control should be centralized or distributed (i.e., local). In general, we find that local control schemes are able to maintain voltage within acceptable bounds. We consider the benefits of choosing different local variables on which to control and how the control system can be continuously tuned between robust voltage control, suitable for daytime operation when circuit conditions can change rapidly, and loss minimization better suited for nighttime operation."}
{"_id": "1e6123e778caa1e7d77292ffc40920b78d769ce7", "title": "Deep Convolutional Neural Network Design Patterns", "text": "Recent research in the deep learning field has produced a plethora of new architectures. At the same time, a growing number of groups are applying deep learning to new applications. Some of these groups are likely to be composed of inexperienced deep learning practitioners who are baffled by the dizzying array of architecture choices and therefore opt to use an older architecture (i.e., Alexnet). Here we attempt to bridge this gap by mining the collective knowledge contained in recent deep learning research to discover underlying principles for designing neural network architectures. In addition, we describe several architectural innovations, including Fractal of FractalNet network, Stagewise Boosting Networks, and Taylor Series Networks (our Caffe code and prototxt files is available at https://github.com/iPhysicist/CNNDesignPatterns). We hope others are inspired to build on our preliminary work."}
{"_id": "88c3904153d831f6d854067f2ad69c5a330f4af3", "title": "A linear regression analysis of the effects of age related pupil dilation change in iris biometrics", "text": "Medical studies have shown that average pupil size decreases linearly throughout adult life. Therefore, on average, the longer the time between acquisition of two images of the same iris, the larger the difference in dilation between the two images. Several studies have shown that increased difference in dilation causes an increase in the false nonmatch rate for iris recognition. Thus, increased difference in pupil dilation is one natural mechanism contributing to an iris template aging effect. We present an experimental analysis of the change in genuine match scores in the context of dilation differences due to aging."}
{"_id": "bbaee955dd1280cb50ee26040f0e8c939369cf78", "title": "Controlling Robot Morphology From Incomplete Measurements", "text": "Mobile robots with complex morphology are essential for traversing rough terrains in Urban Search & Rescue missions. Since teleoperation of the complex morphology causes high cognitive load of the operator, the morphology is controlled autonomously. The autonomous control measures the robot state and surrounding terrain which is usually only partially observable, and thus the data are often incomplete. We marginalize the control over the missing measurements and evaluate an explicit safety condition. If the safety condition is violated, tactile terrain exploration by the body-mounted robotic arm gathers the missing data."}
{"_id": "40213ebcc1e03c25ba97f4110c0b2030fd2e79b6", "title": "Computational imaging with multi-camera time-of-flight systems", "text": "Depth cameras are a ubiquitous technology used in a wide range of applications, including robotic and machine vision, human-computer interaction, autonomous vehicles as well as augmented and virtual reality. In this paper, we explore the design and applications of phased multi-camera time-of-flight (ToF) systems. We develop a reproducible hardware system that allows for the exposure times and waveforms of up to three cameras to be synchronized. Using this system, we analyze waveform interference between multiple light sources in ToF applications and propose simple solutions to this problem. Building on the concept of orthogonal frequency design, we demonstrate state-of-the-art results for instantaneous radial velocity capture via Doppler time-of-flight imaging and we explore new directions for optically probing global illumination, for example by de-scattering dynamic scenes and by non-line-of-sight motion detection via frequency gating."}
{"_id": "627c05039285cee961f9802c41db1a5eaa550b13", "title": "Merging What's Cracked, Cracking What's Merged: Adaptive Indexing in Main-Memory Column-Stores", "text": "Adaptive indexing is characterized by the partial creation and refinement of the index as side effects of query execution. Dynamic or shifting workloads may benefit from preliminary index structures focused on the columns and specific key ranges actually queried \u2014 without incurring the cost of full index construction. The costs and benefits of adaptive indexing techniques should therefore be compared in terms of initialization costs, the overhead imposed upon queries, and the rate at which the index converges to a state that is fully-refined for a particular workload component. Based on an examination of database cracking and adaptive merging, which are two techniques for adaptive indexing, we seek a hybrid technique that has a low initialization cost and also converges rapidly. We find the strengths and weaknesses of database cracking and adaptive merging complementary. One has a relatively high initialization cost but converges rapidly. The other has a low initialization cost but converges relatively slowly. We analyze the sources of their respective strengths and explore the space of hybrid techniques. We have designed and implemented a family of hybrid algorithms in the context of a column-store database system. Our experiments compare their behavior against database cracking and adaptive merging, as well as against both traditional full index lookup and scan of unordered data. We show that the new hybrids significantly improve over past methods while at least two of the hybrids come very close to the \u201cideal performance\u201d in terms of both overhead per query and convergence to a final state."}
{"_id": "bcc18022ba80d2e2a6d41c64fb6c8a9e3a898140", "title": "Positioning for the Internet of Things: A 3GPP Perspective", "text": "Many use cases in the Internet of Things (IoT) will require or benefit from location information, making positioning a vital dimension of the IoT. The 3GPP has dedicated a significant effort during its Release 14 to enhance positioning support for its IoT technologies to further improve the 3GPPbased IoT eco-system. In this article, we identify the design challenges of positioning support in LTE-M and NB-IoT, and overview the 3GPP's work in enhancing the positioning support for LTE-M and NB-IoT. We focus on OTDOA, which is a downlink based positioning method. We provide an overview of the OTDOA architecture and protocols, summarize the designs of OTDOA positioning reference signals, and present simulation results to illustrate the positioning performance."}
{"_id": "24fe8b910028522424f2e8dd5ecb9dc2bd3e9153", "title": "Autonomous Takeoff and Flight of a Tethered Aircraft for Airborne Wind Energy", "text": "A control design approach to achieve fully autonomous takeoff and flight maneuvers with a tethered aircraft is presented and demonstrated in real-world flight tests with a small-scale prototype. A ground station equipped with a controlled winch and a linear motion system accelerates the aircraft to takeoff speed and controls the tether reeling in order to limit the pulling force. This setup corresponds to airborne wind energy (AWE) systems with ground-based energy generation and rigid aircrafts. A simple model of the aircraft\u2019s dynamics is introduced and its parameters are identified from experimental data. A model-based, hierarchical feedback controller is then designed, whose aim is to manipulate the elevator, aileron, and propeller inputs in order to stabilize the aircraft during the takeoff and to achieve figure-of-eight flight patterns parallel to the ground. The controller operates in a fully decoupled mode with respect to the ground station. Parameter tuning and stability/robustness aspect are discussed, too. The experimental results indicate that the controller is able to achieve satisfactory performance and robustness, notwithstanding its simplicity, and confirm that the considered takeoff approach is technically viable and solves the issue of launching this kind of AWE systems in a compact space and at low additional cost."}
{"_id": "7d6968a25c81e4bc0fb958173a61ea82362cbd03", "title": "STOCK MARKET PREDICTION USING BIO-INSPIRED COMPUTING : A SURVEY", "text": "Bio-inspired evolutionary algorithms are probabilistic search methods that mimic natural biological evolution. They show the behavior of the biological entities interacting locally with one another or with their environment to solve complex problems. This paper aims to analyze the most predominantly used bio-inspired optimization techniques that have been used for stock market prediction and hence present a comparative study between them."}
{"_id": "717bba8eec2457f1af024a008929fbe4c7ad0612", "title": "Security slicing for auditing XML, XPath, and SQL injection vulnerabilities", "text": "XML, XPath, and SQL injection vulnerabilities are among the most common and serious security issues for Web applications and Web services. Thus, it is important for security auditors to ensure that the implemented code is, to the extent possible, free from these vulnerabilities before deployment. Although existing taint analysis approaches could automatically detect potential vulnerabilities in source code, they tend to generate many false warnings. Furthermore, the produced traces, i.e. dataflow paths from input sources to security-sensitive operations, tend to be incomplete or to contain a great deal of irrelevant information. Therefore, it is difficult to identify real vulnerabilities and determine their causes. One suitable approach to support security auditing is to compute a program slice for each security-sensitive operation, since it would contain all the information required for performing security audits (Soundness). A limitation, however, is that such slices may also contain information that is irrelevant to security (Precision), thus raising scalability issues for security audits. In this paper, we propose an approach to assist security auditors by defining and experimenting with pruning techniques to reduce original program slices to what we refer to as security slices, which contain sound and precise information. To evaluate the proposed pruning mechanism by using a number of open source benchmarks, we compared our security slices with the slices generated by a state-of-the-art program slicing tool. On average, our security slices are 80% smaller than the original slices, thus suggesting significant reduction in auditing costs."}
{"_id": "bf277908733127ada3acf7028947a5eb0e9be38b", "title": "A fully integrated multi-CPU, GPU and memory controller 32nm processor", "text": "This paper describes the 32nm Sandy Bridge processor that integrates up to 4 high performance Intel Architecture (IA) cores, a power/performance optimized graphic processing unit (GPU) and memory and PCIe controllers in the same die. The Sandy Bridge architecture block diagram is shown in Fig. 15.1.1 and the floorplan of a four IA-core version is shown in Fig. 15.1.2. The Sandy Bridge IA core implements an improved branch prediction algorithm, a micro-operation (Uop) cache, a floating point Advanced Vector Extension (AVX), a second load port in the L1 cache and bigger register files in the out-of-order part of the machine; all these architecture improvements boost the IA core performance without increasing the thermal power dissipation envelope or the average power consumption (to preserve battery life in mobile systems). The CPUs and GPU share the same 8MB level-3 cache memory. The data flow is optimized by a high performance on die interconnect fabric (called \u201cring\u201d) that connects between the CPUs, the GPU, the L3 cache and the system agent (SA) unit that houses a 1600MT/s, dual channel DDR3 memory controller, a 20-lane PCIe gen2 controller, a two parallel pipe display engine, the power management control unit and the testability logic. An on die EPROM is used for configurability and yield optimization."}
{"_id": "daac2e9298d513de49d9e6d01c6ec78daf8feb47", "title": "Position-based impedance control of an industrial hydraulic manipulator", "text": "This paper addresses the problem of impedance control in hydraulic robots. Whereas most impedance and hybrid force/position control formulations have focussed on electrically-driven robots with controllable actuator torques, torque control of hydraulic actuators is a difficult task. Motivated by the previous works [2,9, lo], a position-based impedance controller is proposed and demonstrated on an existing industrial hydraulic robot (a Unimate MKII-2000). A nonlinear proportional-integral (NPI) controller is first developed to meet the accurate positioning requirements of this impedance control formulation. The NPI controller is shown to make the manipulator match a range of second-order target impedances. Various experiments in free space and in environmental contact, including a simple impedance modulation experiment, demonstrate the feasibility and the promise of the technique."}
{"_id": "b57a4f80f2b216105c6c283e18236c2474927590", "title": "Clustered Collaborative Filtering Approach for Distributed Data Mining on Electronic Health Records", "text": "Distributed Data Mining (DDM) has become one of the promising areas of Data Mining. DDM techniques include classifier approach and agent-approach. Classifier approach plays a vital role in mining distributed data, having homogeneous and heterogeneous approaches depend on data sites. Homogeneous classifier approach involves ensemble learning, distributed association rule mining, meta-learning and knowledge probing. Heterogeneous classifier approach involves collective principle component analysis, distributed clustering, collective decision tree and collective bayesian learning model. In this paper, classifier approach for DDM is summarized and an architectural model based on clustered-collaborative filtering for Electronic Health Records (EHR) is proposed."}
{"_id": "9f2923984ca3f0bbcca4415f47bee061d848120e", "title": "Regulating the new information intermediaries as gatekeepers of information diversity", "text": "Natali Helberger is Professor at the Institute for Information Law, University of Amsterdam, Amsterdam, The Netherlands. Katharina Kleinen-von K\u00f6nigsl\u00f6w is Assistant Professor for Political Communication at the University of Zurich, Zurich, Switzerland. Rob van der Noll is Senior Researcher at SEO Economic Research, Amsterdam, The Netherlands. Abstract Purpose \u2013 The purposes of this paper are to deal with the questions: because search engines, social networks and app-stores are often referred to as gatekeepers to diverse information access, what is the evidence to substantiate these gatekeeper concerns, and to what extent are existing regulatory solutions to control gatekeeper control suitable at all to address new diversity concerns? It will also map the different gatekeeper concerns about media diversity as evidenced in existing research before the background of network gatekeeping theory critically analyses some of the currently discussed regulatory approaches and develops the contours of a more user-centric approach towards approaching gatekeeper control and media diversity."}
{"_id": "8a3afde910fc3ebd95fdb51a157883b81bfc7e73", "title": "A hierarchical lstm model with multiple features for sentiment analysis of sina weibo texts", "text": "Sentiment analysis has long been a hot topic in natural language processing. With the development of social network, sentiment analysis on social media such as Facebook, Twitter and Weibo becomes a new trend in recent years. Many different methods have been proposed for sentiment analysis, including traditional methods (SVM and NB) and deep learning methods (RNN and CNN). In addition, the latter always outperform the former. However, most of existing methods only focus on local text information and ignore the user personality and content characteristics. In this paper, we propose an improved LSTM model with considering the user-based features and content-based features. We first analysis the training dataset to extract artificial features which consists of user-based and content-based. Then we construct a hierarchical LSTM model, named LSTM-MF (a hierarchical LSTM model with multiple features), and introduce the features into the model to generate sentence and document representations. The experimental results show that our model achieves significant and consistent improvements compared to all state-of-the-art methods."}
{"_id": "933cba585a12e56a8f60511ebeb74b8cb42634b1", "title": "A Distribution-Based Clustering Algorithm for Mining in Large Spatial Databases", "text": "The problem of detecting clusters of points belonging to a spatial point process arises in many applications. In this paper , we introduce the new clustering algorithm DBCLASD (Distribution Based Clustering of LArge Spatial Databases) to discover clusters of this type. The results of experiments demonstrate that DBCLASD, contrary to partitioning algorithms such as CLARANS, discovers clusters of arbitrary shape. Furthermore, DBCLASD does not require any input parameters, in contrast to the clustering algorithm DBSCAN requiring two input parameters which may be difficult to provide for large databases. In terms of efficiency, DBCLASD is between CLARANS and DBSCAN, close to DBSCAN. Thus, the efficiency of DBCLASD on large spatial databases is very attractive when considering its nonparametric nature and its good quality for clusters of arbitrary shape."}
{"_id": "c66dc716dc0ab674386255e64a743c291c3a1ab5", "title": "Impact of Depression and Antidepressant Treatment on Heart Rate Variability: A Review and Meta-Analysis", "text": "BACKGROUND\nDepression is associated with an increase in the likelihood of cardiac events; however, studies investigating the relationship between depression and heart rate variability (HRV) have generally focused on patients with cardiovascular disease (CVD). The objective of the current report is to examine with meta-analysis the impact of depression and antidepressant treatment on HRV in depressed patients without CVD.\n\n\nMETHODS\nStudies comparing 1) HRV in patients with major depressive disorder and healthy control subjects and 2) the HRV of patients with major depressive disorder before and after treatment were considered for meta-analysis.\n\n\nRESULTS\nMeta-analyses were based on 18 articles that met inclusion criteria, comprising a total of 673 depressed participants and 407 healthy comparison participants. Participants with depression had lower HRV (time frequency: Hedges' g = -.301, p < .001; high frequency: Hedges' g = -.293, p < .001; nonlinear: Hedges' g = -1.955, p = .05; Valsalva ratio: Hedges' g = -.712, p < .001) than healthy control subjects, and depression severity was negatively correlated with HRV (r = -.354, p < .001). Tricyclic medication decreased HRV, although serotonin reuptake inhibitors, mirtazapine, and nefazodone had no significant impact on HRV despite patient response to treatment.\n\n\nCONCLUSIONS\nDepression without CVD is associated with reduced HRV, which decreases with increasing depression severity, most apparent with nonlinear measures of HRV. Critically, a variety of antidepressant treatments do not resolve these decreases despite resolution of symptoms, highlighting that antidepressant medications might not have HRV-mediated cardioprotective effects and the need to identify individuals at risk among patients in remission."}
{"_id": "2cae7a02082722145a6977469f74a0eb5bd10585", "title": "Temporal Sequence Learning, Prediction, and Control: A Review of Different Models and Their Relation to Biological Mechanisms", "text": "In this review, we compare methods for temporal sequence learning (TSL) across the disciplines machine-control, classical conditioning, neuronal models for TSL as well as spike-timing-dependent plasticity (STDP). This review introduces the most influential models and focuses on two questions: To what degree are reward-based (e.g., TD learning) and correlation-based (Hebbian) learning related? and How do the different models correspond to possibly underlying biological mechanisms of synaptic plasticity? We first compare the different models in an open-loop condition, where behavioral feedback does not alter the learning. Here we observe that reward-based and correlation-based learning are indeed very similar. Machine control is then used to introduce the problem of closed-loop control (e.g., actor-critic architectures). Here the problem of evaluative (rewards) versus nonevaluative (correlations) feedback from the environment will be discussed, showing that both learning approaches are fundamentally different in the closed-loop condition. In trying to answer the second question, we compare neuronal versions of the different learning architectures to the anatomy of the involved brain structures (basal-ganglia, thalamus, and cortex) and the molecular biophysics of glutamatergic and dopaminergic synapses. Finally, we discuss the different algorithms used to model STDP and compare them to reward-based learning rules. Certain similarities are found in spite of the strongly different timescales. Here we focus on the biophysics of the different calcium-release mechanisms known to be involved in STDP."}
{"_id": "b83a6c77e61a38ada308992cc579c8cd49ee16f4", "title": "A Survey of Outlier Detection Methods in Network Anomaly Identification", "text": "The detection of outliers has gained considerable interest in data mining with the realization that outliers can be the key discovery to be made from very large databases. Outliers arise due to various reasons such as mechanical faults, changes in system behavior, fraudulent behavior, human error and instrument error. Indeed, for many applications the discovery of outliers leads to more interesting and useful results than the discovery of inliers. Detection of outliers can lead to identification of system faults so that administrators can take preventive measures before they escalate. It is possible that anomaly detection may enable detection of new attacks. Outlier detection is an important anomaly detection approach. In this paper, we present a comprehensive survey of well known distance-based, density-based and other techniques for outlier detection and compare them. We provide definitions of outliers and discuss their detection based on supervised and unsupervised learning in the context of network anomaly detection."}
{"_id": "f0bb2eac780d33a2acc129a17e502a3aca28d3a3", "title": "Managing Risk in Software Process Improvement: An Action Research Approach", "text": "Many software organizations engage in software process improvement (SPI) initiatives to increase their capability to develop quality solutions at a competitive level. Such efforts, however, are complex and very demanding. A variety of risks makes it difficult to develop and implement new processes. We studied SPI in its organizational context through collaborative practice research (CPR), a particular form of action research. The CPR program involved close collaboration between practitioners and researchers over a three-year period to understand and improve SPI initiatives in four Danish software organizations. The problem of understanding and managing risks in SPI teams emerged in one of the participating organizations and led to this research. We draw upon insights from the literature on SPI and software risk management as well as practical lessons learned from managing SPI risks in the participating software organizations. Our research offers two contributions. First, we contribute to knowledge on SPI by proposing an approach to understand and manage risks in SPI teams. This risk management approach consists of a framework for understanding risk areas and risk resolution strategies within SPI and a related Iversen et al./Managing Risk in SPI 396 MIS Quarterly Vol. 28 No. 3/September 2004 process for managing SPI risks. Second, we contribute to knowledge on risk management within the information systems and software engineering disciplines. We propose an approach to tailor risk management to specific contexts. This approach consists of a framework for understanding and choosing between different forms of risk management and a process to tailor risk management to specific contexts."}
{"_id": "2b1d83d6d7348700896088b34154eb9e0b021962", "title": "CONTEXT-CONDITIONAL GENERATIVE ADVERSARIAL NETWORKS", "text": "We introduce a simple semi-supervised learning approach for images based on in-painting using an adversarial loss. Images with random patches removed are presented to a generator whose task is to fill in the hole, based on the surrounding pixels. The in-painted images are then presented to a discriminator network that judges if they are real (unaltered training images) or not. This task acts as a regularizer for standard supervised training of the discriminator. Using our approach we are able to directly train large VGG-style networks in a semi-supervised fashion. We evaluate on STL-10 and PASCAL datasets, where our approach obtains performance comparable or superior to existing methods."}
{"_id": "4edae1c443cd9bede2af016c23e13d6e664bfe7e", "title": "Ensemble methods for spoken emotion recognition in call-centres", "text": "Machine-based emotional intelligence is a requirement for more natural interaction between humans and computer interfaces and a basic level of accurate emotion perception is needed for computer systems to respond adequately to human emotion. Humans convey emotional information both intentionally and unintentionally via speech patterns. These vocal patterns are perceived and understood by listeners during conversation. This research aims to improve the automatic perception of vocal emotion in two ways. First, we compare two emotional speech data sources: natural, spontaneous emotional speech and acted or portrayed emotional speech. This comparison demonstrates the advantages and disadvantages of both acquisition methods and how these methods affect the end application of vocal emotion recognition. Second, we look at two classification methods which have not been applied in this field: stacked generalisation and unweighted vote. We show how these techniques can yield an improvement over traditional classification methods. 2006 Elsevier B.V. All rights reserved."}
{"_id": "32c79c5a66e97469465875a31685864e0df8faee", "title": "Laser ranging : a critical review of usual techniques for distance measurement", "text": "Marc Rioux National Research Council, Canada Institute for Information Technology Visual Information Technology Ottawa, Canada, K1A 0R6 Abstract. We review some usual laser range finding techniques for industrial applications. After outlining the basic principles of triangulation and time of flight [pulsed, phase-shift and frequency modulated continuous wave (FMCW)], we discuss their respective fundamental limitations. Selected examples of traditional and new applications are also briefly presented. \u00a9 2001 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.1330700]"}
{"_id": "2913553d5e1ff6447b555e458fe1e0de78c0d45a", "title": "Liquid cooled permanent-magnet traction motor design considering temporary overloading", "text": "This paper focuses on traction motor design in a variable speed drive of an Electric Mini Bus, aiming to deliver high mean mechanical power over the whole speed range, using single gear transmission. Specific design considerations aim to minimize oversizing, by utilizing temporary overloading capability using liquid cooled motor housing. Constant toque - constant power control strategy is adopted by appropriately programming the over-torque and field weakening operation areas of the motor. A good quantitative analysis between over-torque, rated and field weakening operation areas, is given, focusing on efficiency and paying particular attention to iron losses both on stator and rotor. The motor has been designed in order to meet traction application specifications and allow operation in higher specific electric loading to increase power density."}
{"_id": "325be255b9eb19cb721b7ec0429e46570181370e", "title": "Nitroglycerin: a review of its use in the treatment of vascular occlusion after soft tissue augmentation.", "text": "BACKGROUND\nSkin necrosis after soft tissue augmentation with dermal fillers is a rare but potentially severe complication. Nitroglycerin paste may be an important treatment option for dermal and epidermal ischemia in cosmetic surgery.\n\n\nOBJECTIVES\nTo summarize the knowledge about nitroglycerin paste in cosmetic surgery and to understand its current use in the treatment of vascular compromise after soft tissue augmentation. To review the mechanism of action of nitroglycerin, examine its utility in the dermal vasculature in the setting of dermal filler-induced ischemia, and describe the facial anatomy danger zones in order to avoid vascular injury.\n\n\nMETHODS\nA literature review was conducted to examine the mechanism of action of nitroglycerin, and a treatment algorithm was proposed from clinical observations to define strategies for impending facial necrosis after filler injection.\n\n\nRESULTS AND CONCLUSIONS\nOur experience with nitroglycerin paste and our review of the medical literature supports the use of nitroglycerin paste on the skin to help improve flow in the dermal vasculature because of its vasodilatory effect on small-caliber arterioles."}
{"_id": "3e8482443faa2319a6c2c04693402d9b5264bc0a", "title": "Factors contributing to the facial aging of identical twins.", "text": "BACKGROUND\nThe purpose of this study was to identify the environmental factors that contribute to facial aging in identical twins.\n\n\nMETHODS\nDuring the Twins Day Festival in Twinsburg, Ohio, 186 pairs of identical twins completed a comprehensive questionnaire, and digital images were obtained. A panel reviewed the images independently and recorded the differences in the perceived twins' ages and their facial features. The perceived age differences were then correlated with multiple factors.\n\n\nRESULTS\nFour-point higher body mass index was associated with an older appearance in twins younger than age 40 but resulted in a younger appearance after age 40 (p = 0.0001). Eight-point higher body mass index was associated with an older appearance in twins younger than age 55 but was associated with a younger appearance after age 55 (p = 0.0001). The longer the twins smoked, the older they appeared (p < 0.0001). Increased sun exposure was associated with an older appearance and accelerated with age (p = 0.015), as was a history of outdoor activities and lack of sunscreen use. Twins who used hormone replacement had a younger appearance (p = 0.002). Facial rhytids were more evident in twins with a history of skin cancer (p = 0.05) and in those who smoked (p = 0.005). Dark and patchy skin discoloration was less prevalent in twins with a higher body mass index (p = 0.01) and more common in twins with a history of smoking (p = 0.005) and those with sun exposure (p = 0.005). Hair quantity was better with a higher body mass index (p = 0.01) although worse with a history of skin cancer (p = 0.005) and better with the use of hormones (p = 0.05).\n\n\nCONCLUSION\nThis study offers strong statistical evidence to support the role of some of the known factors that govern facial aging."}
{"_id": "5841bf263cfd388a7af631f0f85fc6fa07dca945", "title": "Foreign body granulomas after all injectable dermal fillers: part 2. Treatment options.", "text": "SUMMARY\nForeign body granulomas occur at certain rates with all injectable dermal fillers. They have to be distinguished from early implant nodules, which usually appear 2 to 4 weeks after injection. In general, foreign body granulomas appear after a latent period of several months at all injected sites at the same time. If diagnosed early and treated correctly, they can be diminished within a few weeks. The treatment of choice of this hyperactive granulation tissue is the intralesional injection of corticosteroid crystals (triamcinolone, betamethasone, or prednisolone), which may be repeated in 4-week cycles until the right dose is found. To lower the risk of skin atrophy, corticosteroids can be combined with antimitotic drugs such as 5-fluorouracil and pulsed lasers. Because foreign body granulomas grow fingerlike into the surrounding tissue, surgical excision should be the last option. Surgery or drainage is indicated to treat normal lumps and cystic foreign body granulomas with little tissue ingrowth. In most patients, a foreign body granuloma is a single event during a lifetime, often triggered by a systemic bacterial infection."}
{"_id": "b6626537ce41ccd7b8442799eb4c33576f1ce482", "title": "Aging changes of the midfacial fat compartments: a computed tomographic study.", "text": "BACKGROUND\nThe restoration of a natural volume distribution is a major goal in facial rejuvenation. The aims of this study were to establish a radiographic method enabling effective measurements of the midfacial fat compartments and to compare the anatomy between human cadavers of younger versus older age.\n\n\nMETHODS\nData from computed tomographic scans of 12 nonfixed cadaver heads, divided into two age groups (group 1, 54 to 75 years, n = 6; and group 2, 75 to 104 years, n = 6), were analyzed. For evaluation of the volume distribution within a specific compartment, the sagittal diameter of the upper, middle, and lower thirds of each compartment was determined. For evaluation of a \"sagging\" of the compartments, the distance between the cephalad border and the infraorbital rim was determined.\n\n\nRESULTS\nComputed tomography enables a reproducible depiction of the facial fat compartments and reveals aging changes. The distance between the fat compartments and the infraorbital rim was higher in group 2 compared with group 1. The sagittal diameter of the lower third of the compartments was higher, and the sagittal diameter of the upper third was smaller in group 2 compared with group 1. The buccal extension of the buccal fat pad was shown to be an independent, separate compartment.\n\n\nCONCLUSIONS\nThis study demonstrates an inferior migration of the midfacial fat compartments and an inferior volume shift within the compartments during aging. Additional distinct compartment-specific changes (e.g., volume loss of the deep medial cheek fat and buccal extension of the buccal fat pad) contribute to the appearance of the aged face."}
{"_id": "c668bf08ccce6f68ad2a285727dbfb7433f3cfa7", "title": "Validated assessment scales for the lower face.", "text": "BACKGROUND\nAging in the lower face leads to lines, wrinkles, depression of the corners of the mouth, and changes in lip volume and lip shape, with increased sagging of the skin of the jawline. Refined, easy-to-use, validated, objective standards assessing the severity of these changes are required in clinical research and practice.\n\n\nOBJECTIVE\nTo establish the reliability of eight lower face scales assessing nasolabial folds, marionette lines, upper and lower lip fullness, lip wrinkles (at rest and dynamic), the oral commissure and jawline, aesthetic areas, and the lower face unit.\n\n\nMETHODS AND MATERIALS\nFour 5-point rating scales were developed to objectively assess upper and lower lip wrinkles, oral commissures, and the jawline. Twelve experts rated identical lower face photographs of 50 subjects in two separate rating cycles using eight 5-point scales. Inter- and intrarater reliability of responses was assessed.\n\n\nRESULTS\nInterrater reliability was substantial or almost perfect for all lower face scales, aesthetic areas, and the lower face unit. Intrarater reliability was high for all scales, areas and the lower face unit.\n\n\nCONCLUSION\nOur rating scales are reliable tools for valid and reproducible assessment of the aging process in lower face areas."}
{"_id": "67d72dfcb30b466718172610315dbb7b7655b412", "title": "Millimetre-wave T-shaped antenna with defected ground structures for 5G wireless networks", "text": "This paper presents a T-shaped antenna at millimetre-wave (MMW) frequency ranges to offer a number of advantages including simple structure, high operating bandwidth, and high gain. Defected ground structures (DGS) have been symmetrically added in ground in order to produce multiple resonating bands, accompanied by partial ground plane to achieve continuous operating bandwidth. The antenna consists of T-shaped radiating patch with a coplanar waveguide (CPW) feed. The bottom part has a partial ground plane loaded with five symmetrical split-ring slots. Measured results of antenna prototype show a wide bandwidth of 25.1-37.5 GHz. Moreover, simulation evaluation of peak gain of the antenna is 9.86 dBi at 36.8 GHz, and efficiency is higher than 80% in complete range of operation. The proposed antenna is considered as a potential candidate for the 5G wireless networks and applications."}
{"_id": "acdbb55beedf3db02bfc16e254c7d8759c64758f", "title": "Design and implementation of full bridge bidirectional isolated DC-DC converter for high power applications", "text": "This paper proposes the design and implementation of a high power full bridge bidirectional isolated DC-DC converter (BIDC) which comprises of two symmetrical voltage source converters and a high frequency transformer. In the proposed BIDC, a well-known PI controller based single phase-shift (SPS) modulation technique is used in order to achieve high power transfer. Besides, different phase shift methods such as extended phase-shift (EPS) and double phase-shift (DPS) are compared with SPS. Both simulation and experimental results are caried out to verify PI controller based simple phase-shift controlled BIDC prototype that is designed for 300-V 2.4-kW and operating at 20 kHz."}
{"_id": "a430998dd1a9a93ff5d322bd69944432cfd2769b", "title": "Ubii: Physical World Interaction Through Augmented Reality", "text": "We describe a new set of interaction techniques that allow users to interact with physical objects through augmented reality (AR). Previously, to operate a smart device, physical touch is generally needed and a graphical interface is normally involved. These become limitations and prevent the user from operating a device out of reach or operating multiple devices at once. Ubii (Ubiquitous interface and interaction) is an integrated interface system that connects a network of smart devices together, and allows users to interact with the physical objects using hand gestures. The user wears a smart glass which displays the user interface in an augmented reality view. Hand gestures are captured by the smart glass, and upon recognizing the right gesture input, Ubii will communicate with the connected smart devices to complete the designated operations. Ubii supports common inter-device operations such as file transfer, printing, projecting, as well as device pairing. To improve the overall performance of the system, we implement computation offloading to perform the image processing computation. Our user test shows that Ubii is easy to use and more intuitive than traditional user interfaces. Ubii shortens the operation time on various tasks involving operating physical devices. The novel interaction paradigm attains a seamless interaction between the physical and digital worlds."}
{"_id": "6af930eebc0e426c97a245d94d9dbf1177719258", "title": "Early androgen exposure and human gender development", "text": "During early development, testosterone plays an important role in sexual differentiation of the mammalian brain and has enduring influences on behavior. Testosterone exerts these influences at times when the testes are active, as evidenced by higher concentrations of testosterone in developing male than in developing female animals. This article critically reviews the available evidence regarding influences of testosterone on human gender-related development. In humans, testosterone is elevated in males from about weeks 8 to 24 of gestation and then again during early postnatal development. Individuals exposed to atypical concentrations of testosterone or other androgenic hormones prenatally, for example, because of genetic conditions or because their mothers were prescribed hormones during pregnancy, have been consistently found to show increased male-typical juvenile play behavior, alterations in sexual orientation and gender identity (the sense of self as male or female), and increased tendencies to engage in physically aggressive behavior. Studies of other behavioral outcomes following dramatic androgen abnormality prenatally are either too small in their numbers or too inconsistent in their results, to provide similarly conclusive evidence. Studies relating normal variability in testosterone prenatally to subsequent gender-related behavior have produced largely inconsistent results or have yet to be independently replicated. For studies of prenatal exposures in typically developing individuals, testosterone has been measured in single samples of maternal blood or amniotic fluid. These techniques may not be sufficiently powerful to consistently detect influences of testosterone on behavior, particularly in the relatively small samples that have generally been studied. The postnatal surge in testosterone in male infants, sometimes called mini-puberty, may provide a more accessible opportunity for measuring early androgen exposure during typical development. This approach has recently begun to be used, with some promising results relating testosterone during the first few months of postnatal life to later gender-typical play behavior. In replicating and extending these findings, it may be important to assess testosterone when it is maximal (months 1 to 2 postnatal) and to take advantage of the increased reliability afforded by repeated sampling."}
{"_id": "eee3851d8889a9267f5ddc6ef5b2e24ed37ca4f0", "title": "Effect of Manipulated Amplitude and Frequency of Human Voice on Dominance and Persuasiveness in Audio Conferences", "text": "We propose to artificially manipulate participants' vocal cues, amplitude and frequency, in real time to adjust their dominance and persuasiveness during audio conferences. We implemented a prototype system and conducted two experiments. The first experiment investigated the effect of vocal cue manipulation on the perception of dominance. The results showed that participants perceived higher dominance while listening to a voice with a high amplitude and low frequency. The second experiment investigated the effect of vocal cue manipulation on persuasiveness. The results indicated that a person with a low amplitude and low frequency voice had greater persuasiveness in conversations with biased dominance, while there was no statistically significant difference in persuasiveness in conversations with unbiased dominance."}
{"_id": "bdeb23105ed6c419890cc86b4d72d1431bb0fe51", "title": "A Theoretical Framework for Serious Game Design: Exploring Pedagogy, Play and Fidelity and their Implications for the Design Process", "text": "It is widely acknowledged that digital games can provide an engaging, motivating and \u201cfun\u201d experience for students. However an entertaining game does not necessarily constitute a meaningful, valuable learning experience. For this reason, experts espouse the importance of underpinning serious games with a sound theoretical framework which integrates and balances theories from two fields of practice: pedagogy and game design (Kiili, 2005; Seeney & Routledge, 2009). Additionally, with the advent of sophisticated, immersive technologies, and increasing interest in the opportunities for constructivist learning offered by these technologies, concepts of fidelity and its impact on student learning and engagement, have emerged (Aldrich, 2005; Harteveld et al., 2007, 2010). This paper will explore a triadic theoretical framework for serious game design comprising play, pedagogy and fidelity. It will outline underpinning theories, review key literatures and identify challenges and issues involved in balancing these elements in the process of serious game design. DOI: 10.4018/ijgbl.2012100103 42 International Journal of Game-Based Learning, 2(4), 41-60, October-December 2012 Copyright \u00a9 2012, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. game design (Kiili, 2005; Seeney & Routledge, 2009). While a sound pedagogical framework is considered essential to their effectiveness as learning tools, equally important is the integration of game play elements which harness and sustain player engagement. Additionally, with the advent of sophisticated and immersive technologies, as exemplified in the virtual worlds of contemporary games, and increasing interest in the opportunities for constructivist learning offered by these worlds, concepts of fidelity, and its impact on student learning and engagement, have emerged (Aldrich, 2005; Harteveld et al., 2007, 2010). This paper will explore this triadic theoretical framework for serious game design, outlining underpinning theories, reviewing key literatures and identifying associated challenges and issues (Figure 1). The paper begins by reflecting on pedagogical theories commonly utilised to conceptualise game-based learning, focusing on three constructivist theories. Following this, attention switches to theories used to conceptualise players\u2019 engagement with digital games, and thus inform effective, engaging and \u201cfun\u201d game design. As a key component of engaging and pedagogically effective game design, the concept of fidelity, and how it relates to game design and game-based learning, is discussed. The paper will conclude by reflecting on issues and challenges involved in balancing these components when designing"}
{"_id": "0a191b2cecb32969feea6b9db5a4a58f9a0eb456", "title": "Design and evaluation of a wide-area event notification service", "text": "The components of a loosely coupled system are typically designed to operate by generating and responding to asynchronous events. An event notification service is an application-independent infrastructure that supports the construction of event-based systems, whereby generators of events publish event notifications to the infrastructure and consumers of events subscribe with the infrastructure to receive relevant notifications. The two primary services that should be provided to components by the infrastructure are notification selection (i. e., determining which notifications match which subscriptions) and notification delivery (i.e., routing matching notifications from publishers to subscribers). Numerous event notification services have been developed for local-area networks, generally based on a centralized server to select and deliver event notifications. Therefore, they suffer from an inherent inability to scale to wide-area networks, such as the Internet, where the number and physical distribution of the service's clients can quickly overwhelm a centralized solution. The critical challenge in the setting of a wide-area network is to maximize the expressiveness in the selection mechanism without sacrificing scalability in the delivery mechanism. This paper presents SIENA, an event notification service that we have designed and implemented to exhibit both expressiveness and scalability. We describe the service's interface to applications, the algorithms used by networks of servers to select and deliver event notifications, and the strategies used to optimize performance. We also present results of simulation studies that examine the scalability and performance of the service."}
{"_id": "3199194fec2eaa6be1453f280c6392e3ce94e37a", "title": "Comparing Children's Crosshair and Finger Interactions in Handheld Augmented Reality: Relationships Between Usability and Child Development", "text": "Augmented reality technology is a unique medium that can be helpful for young children's entertainment and education, but in order to achieve the benefits of this technology, augmented reality experiences need to be appropriately designed for young children's developing physical and cognitive skills. In the present study we investigated how 5-10 year-old children react to typical handheld augmented reality interaction techniques such as crosshair selection and finger selection, in AR environments that require them to change perspective or not. Our analysis shows significant impacts of age upon AR performance, with young children having slower selection times, more tracking losses, and taking longer to recover tracking. Significant differences were also found between AR interaction technique conditions, with finger selection being faster than crosshair selection, and interactions which required changes in perspective taking longer, generating more tracking losses, and more errors in selection. Furthermore, by analyzing children's performance in relation to metrics of physical and cognitive development, we identified correlations between AR interaction techniques performance and developmental tests of spatial relations, block construction and visuomotor precision. Gender differences were analyzed but no significant effects were detected."}
{"_id": "ba459a7bd17d8af7642dd0c0ddf9e897dff3c4a8", "title": "Do citations and readership identify seminal publications?", "text": "This work presents a new approach for analysing the ability of existing research metrics to identify research which has strongly influenced future developments. More specifically, we focus on the ability of citation counts and Mendeley reader counts to distinguish between publications regarded as seminal and publications regarded as literature reviews by field experts. The main motivation behind our research is to gain a better understanding of whether and how well the existing research metrics relate to research quality. For this experiment we have created a new dataset which we call TrueImpactDataset and which contains two types of publications, seminal papers and literature reviews. Using the dataset, we conduct a set of experiments to study how citation and reader counts perform in distinguishing these publication types, following the intuition that causing a change in a field signifies research quality. Our research shows that citation counts work better than a random baseline (by a margin of 10%) in distinguishing important seminal research papers from literature reviews while Mendeley reader counts do not work better than the baseline."}
{"_id": "9182452cd181022034cc810c263d397fbc3dd75d", "title": "Successful failure: what Foucault can teach us about privacy self-management in a world of Facebook and big data", "text": "The \u201cprivacy paradox\u201d refers to the discrepancy between the concern individuals express for their privacy and the apparently low value they actually assign to it when they readily trade personal information for low-value goods online. In this paper, I argue that the privacy paradox masks a more important paradox: the self-management model of privacy embedded in notice-and-consent pages on websites and other, analogous practices can be readily shown to underprotect privacy, even in the economic terms favored by its advocates. The real question, then, is why privacy self-management occupies such a prominent position in privacy law and regulation. Borrowing from Foucault\u2019s late writings, I argue that this failure to protect privacy is also a success in ethical subject formation, as it actively pushes privacy norms and practices in a neoliberal direction. In other words, privacy self-management isn\u2019t about protecting people\u2019s privacy; it\u2019s about inculcating the idea that privacy is an individual, commodified good that can be traded for other market goods. Along the way, the self-management regime forces privacy into the market, obstructs the functioning of other, more social, understandings of privacy, and occludes the various ways that individuals attempt to resist adopting the market-based view of themselves and their privacy. Throughout, I use the analytics practices of Facebook and social networking sites as a sustained case study of the point."}
{"_id": "43310a0a428a222ca3cc34370c9494bfa97528e4", "title": "Always-ON visual node with a hardware-software event-based binarized neural network inference engine", "text": "This work introduces an ultra-low-power visual sensor node coupling event-based binary acquisition with Binarized Neural Networks (BNNs) to deal with the stringent power requirements of always-on vision systems for IoT applications. By exploiting in-sensor mixed-signal processing, an ultra-low-power imager generates a sparse visual signal of binary spatial-gradient features. The sensor output, packed as a stream of events corresponding to the asserted gradient binary values, is transferred to a 4-core processor when the amount of data detected after frame difference surpasses a given threshold. Then, a BNN trained with binary gradients as input runs on the parallel processor if a meaningful activity is detected in a pre-processing stage. During the BNN computation, the proposed Event-based Binarized Neural Network model achieves a system energy saving of 17.8% with respect to a baseline system including a low-power RGB imager and a Binarized Neural Network, while paying a classification performance drop of only 3% for a real-life 3-classes classification scenario. The energy reduction increases up to 8x when considering a long-term always-on monitoring scenario, thanks to the event-driven behavior of the processing sub-system."}
{"_id": "4954bb26107d69eb79bb32ffa247c8731cf20fcf", "title": "Improving privacy and security in multi-authority attribute-based encryption", "text": "Attribute based encryption (ABE) [13] determines decryption ability based on a user's attributes. In a multi-authority ABE scheme, multiple attribute-authorities monitor different sets of attributes and issue corresponding decryption keys to users, and encryptors can require that a user obtain keys for appropriate attributes from each authority before decrypting a message. Chase [5] gave a multi-authority ABE scheme using the concepts of a trusted central authority (CA) and global identifiers (GID). However, the CA in that construction has the power to decrypt every ciphertext, which seems somehow contradictory to the original goal of distributing control over many potentially untrusted authorities. Moreover, in that construction, the use of a consistent GID allowed the authorities to combine their information to build a full profile with all of a user's attributes, which unnecessarily compromises the privacy of the user. In this paper, we propose a solution which removes the trusted central authority, and protects the users' privacy by preventing the authorities from pooling their information on particular users, thus making ABE more usable in practice."}
{"_id": "b27e8000ef007af8f6b51d597f13c02e7c7c2a0f", "title": "Multigate-Cell Stacked FET Design for Millimeter-Wave CMOS Power Amplifiers", "text": "To increase the voltage handling capability of scaled CMOS-based circuits, series connection (stacking) of transistors has been demonstrated in recently reported mm-wave power amplifiers. This paper discusses the implementation of stacked CMOS circuits employing a compact, multigate layout technique, rather than the conventional series connection of individual transistors. A unit multigate FET is composed of a single transistor with one source and drain and multiple (four) gate connections. Capacitances are implemented in a distributed manner allowing close proximity to the individual gate fingers using metal layers available within the CMOS back-end-of-line (BEOL) stack. The multigate structure is demonstrated to decrease parasitic resistances and capacitances, and has better layout for heat-sinking. The unit cell is replicated through tiling to implement larger effective gate widths. Millimeter-wave power amplifiers using the multigate-cell are presented which operate over the 25-35 GHz band and achieve 300 mW of saturated output power and peak power-added efficiency (PAE) of 30% in 45 nm CMOS SOI technology. To the authors' knowledge, the output power is the best reported for a single stage CMOS power amplifier that does not use power-combining for this frequency range."}
{"_id": "33b8063aac5715591c80a38a69f5b0619fbc041d", "title": "Application of Requirement-oriented Data Quality Evaluation Method", "text": "The data quality directly determines the data further analysis and other applications. With the important of data quality, data is a major asset in software applications and information systems. Getting and ensuring high quality data is critical to the software and related business operations. At present, there are many methods and techniques of data quality assessment or evaluation. Based on the evaluation criteria of data quality, the paper mainly starts from the needs of application software and uses the requirements as a guide in the evaluation process. Define evaluation criteria based on user-defined requirements. In this paper, the method proposed in a practical case has been applied, and achieved good practical results."}
{"_id": "3a01bdcd4bb19151d326bff1c84561ea0b6c757e", "title": "Real-time neuroevolution in the NERO video game", "text": "In most modern video games, character behavior is scripted; no matter how many times the player exploits a weakness, that weakness is never repaired. Yet, if game characters could learn through interacting with the player, behavior could improve as the game is played, keeping it interesting. This paper introduces the real-time Neuroevolution of Augmenting Topologies (rtNEAT) method for evolving increasingly complex artificial neural networks in real time, as a game is being played. The rtNEAT method allows agents to change and improve during the game. In fact, rtNEAT makes possible an entirely new genre of video games in which the player trains a team of agents through a series of customized exercises. To demonstrate this concept, the Neuroevolving Robotic Operatives (NERO) game was built based on rtNEAT. In NERO, the player trains a team of virtual robots for combat against other players' teams. This paper describes results from this novel application of machine learning, and demonstrates that rtNEAT makes possible video games like NERO where agents evolve and adapt in real time. In the future, rtNEAT may allow new kinds of educational and training applications through interactive and adapting games."}
{"_id": "d6487d4a4c8433c7636a490a6630efb961bbfd13", "title": "Concept Mining via Embedding", "text": "In this work, we study the problem of concept mining, which serves as the first step in transforming unstructured text into structured information, and supports downstream analytical tasks such as information extraction, organization, recommendation and search. Previous work mainly relies on statistical signals, existing knowledge bases, or predefined linguistic patterns. In this work, we propose a novel approach that mines concepts based on their occurrence contexts, by learning embedding vector representations that summarize the context information for each possible candidates, and use these embeddings to evaluate the concept's global quality and their fitness to each local context. Experiments over several real-world corpora demonstrate the superior performance of our method. A publicly available implementation is provided at https://github.com/kleeeeea/ECON."}
{"_id": "c5fa3cf216af23b8e88803750ff8e040bb8a2d1f", "title": "WhatsApp Usage Patterns and Prediction Models", "text": "This paper presents an extensive study of the usage of the WhatsApp social network, an Internet messaging application that is quickly replacing SMS messaging. It is based on the analysis of over 4 million messages from nearly 100 users that we collected in order to understand people\u2019s use of the network. We believe that this is the first in-depth study of the properties of WhatsApp messages with an emphasis of noting differences across different age and gender demographic groups. It is also the first to use statistical and data analytic tools in this analysis. We found that different genders and age demographics had significantly different usage habits in almost all message and group attributes. These differences facilitate the development of user prediction models based on data mining tools. We illustrate this by developing several prediction models such as for a person\u2019s gender or approximate age. We also noted differences in users\u2019 group behavior. We created group behaviorial models including the likelihood a given group would have more file attachments, if a group would contain a larger number of participants, a higher frequency of activity, quicker response times and shorter messages. We present a detailed discussion about the specific attributes that were contained in all predictive models and suggest possible applications based on these results."}
{"_id": "12a3b69936d3d2e618c8068970cbbe0e8da101ac", "title": "A data mining framework for detecting subscription fraud in telecommunication", "text": "Service providing companies including telecommunication companies often receive substantial damage from customers\u2019 fraudulent behaviors. One of the common types of fraud is subscription fraud in which usage type is in contradiction with subscription type. This study aimed at identifying customers\u2019 subscription fraud by employing data mining techniques and adopting knowledge discovery process. To this end, a hybrid approach consisting of preprocessing, clustering, and classification phases was applied, and appropriate tools were employed commensurate to each phase. Specifically, in the clustering phase SOM and K-means were combined, and in the classification phase decision tree (C4.5), neural networks, and support vector machines as single classifiers and bagging, boosting, stacking, majority and consensus voting as ensembles were examined. In addition to using clustering to identify outlier cases, it was also possible \u2013 by defining new features \u2013 to maintain the results of clustering phase for the classification phase. This, in turn, contributed to better classification results. A real dataset provided by Telecommunication Company of Tehran was applied to demonstrate the effectiveness of the proposed method. The efficient use of synergy among these techniques significantly increased prediction accuracy. The performance of all single and ensemble classifiers is evaluated based on various metrics and compared by statistical tests. The results showed that support vector machines among single classifiers and boosted trees among all classifiers have the best performance in terms of various metrics. The research findings show that the proposed model has a high accuracy, and the resulting outcomes are significant both theoretically and practically. & 2010 Elsevier Ltd. All rights reserved."}
{"_id": "948b79a0b533917db575cd13d52732a5229d9500", "title": "FastSpMM: An Efficient Library for Sparse Matrix Matrix Product on GPUs", "text": "Sparse matrix matrix (SpMM) multiplication is involved in a wide range of scientific and technical applications. The computational requirements for this kind of operation are enormous, especially for large matrices. This paper analyzes and evaluates a method to efficiently compute the SpMM product in a computing environment that includes graphics processing units (GPUs). Some libraries to compute this matricial operation can be found in the literature. However, our strategy (FastSpMM) outperforms the existing approaches because it combines the use of the ELLPACK-R storage format with the exploitation of the high ratio computation/memory access of the SpMM operation and the overlapping of CPU\u2013GPU communications/computations by Compute Unified Device Architecture streaming computation. In this work, FastSpMM is described and its performance evaluated with regard to the CUSPARSE library (supplied by NVIDIA), which also includes routines to compute SpMM on GPUs. Experimental evaluations based on a representative set of test matrices show that, in terms of performance, FastSpMM outperforms the CUSPARSE routine as well as the implementation of the SpMM as a set of sparse matrix vector products."}
{"_id": "892144cf4cddf38a272015e261bca984dca7f4b0", "title": "Knowledge Management: An Introduction and Perspective", "text": "In the mid-1980s, individuals and organizations began to appreciate the increasingly important role of knowledge in the emerging competitive environment. International competition was changing to increasingly emphasize product and service quality, responsiveness, diversity and customization. Some organizations, such as USbased Chaparral Steel, had been pursuing a knowledge focus for some years, but during this period it started to become a more wide-spread business concern. These notions appeared in many places throughout the world \u2013 almost simultaneously in the way bubbles appear in a kettle of superheated water! Over a brief period from 1986 to 1989, numerous reports appeared in the public domain concerning how to manage knowledge explicitly. There were studies, results of corporate efforts, and conferences on the topic."}
{"_id": "25098861749fe9eab62fbe90c1ebeaed58c211bb", "title": "Boosting as a Regularized Path to a Maximum Margin Classifier", "text": "In this paper we study boosting methods from a new perspective. We build on recent work by Efron et al. to show that boosting approximately (and in some cases exactly) minimizes its loss criterion with an l1 constraint on the coefficient vector. This helps understand the success of boosting with early stopping as regularized fitting of the loss criterion. For the two most commonly used criteria (exponential and binomial log-likelihood), we further show that as the constraint is relaxed\u2014or equivalently as the boosting iterations proceed\u2014the solution converges (in the separable case) to an \u201cl1-optimal\u201d separating hyper-plane. We prove that this l1-optimal separating hyper-plane has the property of maximizing the minimal l1-margin of the training data, as defined in the boosting literature. An interesting fundamental similarity between boosting and kernel support vector machines emerges, as both can be described as methods for regularized optimization in high-dimensional predictor space, using a computational trick to make the calculation practical, and converging to margin-maximizing solutions. While this statement describes SVMs exactly, it applies to boosting only approximately."}
{"_id": "3baddc440617ce202fd190b32b1d73f1bb14561d", "title": "Fault Detection, Isolation, and Service Restoration in Distribution Systems: State-of-the-Art and Future Trends", "text": "This paper surveys the conceptual aspects, as well as recent developments in fault detection, isolation, and service restoration (FDIR) following an outage in an electric distribution system. This paper starts with a discussion of the rationale for FDIR, and then investigates different areas of the FDIR problem. Recently reported approaches are compared and related to discussions on current practices. This paper then addresses some of the often-cited associated technical, environmental, and economic challenges of implementing self-healing for the distribution grid. The review concludes by pointing toward the need and directions for future research."}
{"_id": "51a3e60fa624ffeb2d75f6adaffe58e4646ce366", "title": "Exploiting potential citation papers in scholarly paper recommendation", "text": "To help generate relevant suggestions for researchers, recommendation systems have started to leverage the latent interests in the publication profiles of the researchers themselves. While using such a publication citation network has been shown to enhance performance, the network is often sparse, making recommendation difficult. To alleviate this sparsity, we identify \"potential citation papers\" through the use of collaborative filtering. Also, as different logical sections of a paper have different significance, as a secondary contribution, we investigate which sections of papers can be leveraged to represent papers effectively.\n On a scholarly paper recommendation dataset, we show that recommendation accuracy significantly outperforms state-of-the-art recommendation baselines as measured by nDCG and MRR, when we discover potential citation papers using imputed similarities via collaborative filtering and represent candidate papers using both the full text and assigning more weight to the conclusion sections."}
{"_id": "04a8c5d8535fc58e5ad55dd3f9288bc78567d0c4", "title": "A COMPARISON OF ENTERPRISE ARCHITECTURE FRAMEWORKS", "text": "An Enterprise Architecture Framework (EAF) maps all of the software development processes within the enterprise and how they relate and interact to fulfill the enterprise\u2019s mission. It provides organizations with the ability to understand and analyze weaknesses or inconsistencies to be identified and addressed. There are a number of already established EAF in use today; some of these frameworks were developed for very specific areas, whereas others have broader functionality. This study provides a comparison of several frameworks that can then be used for guidance in the selection of an EAF that meets the needed criteria."}
{"_id": "0825788b9b5a18e3dfea5b0af123b5e939a4f564", "title": "Glove: Global Vectors for Word Representation", "text": "Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition."}
{"_id": "0826c98d1b1513aa2f45e6654bb5075a58b64649", "title": "Linguistic Regularities in Continuous Space Word Representations", "text": "\u2022 Neural network language model and distributed representation for words (Vector representation) \u2022 Capture syntactic and remantic regularities in language \u2022 Outperform state-of-the-art"}
{"_id": "307e3d7d5857942d7d2c9d97f7437777535487e0", "title": "Universal Adversarial Perturbations", "text": "Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability. We propose a systematic algorithm for computing universal perturbations, and show that state-of-the-art deep neural networks are highly vulnerable to such perturbations, albeit being quasi-imperceptible to the human eye. We further empirically analyze these universal perturbations and show, in particular, that they generalize very well across neural networks. The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundary of classifiers. It further outlines potential security breaches with the existence of single directions in the input space that adversaries can possibly exploit to break a classifier on most natural images."}
{"_id": "326cfa1ffff97bd923bb6ff58d9cb6a3f60edbe5", "title": "The Earth Mover's Distance as a Metric for Image Retrieval", "text": "We investigate the properties of a metric between two distributions, the Earth Mover's Distance (EMD), for content-based image retrieval. The EMD is based on the minimal cost that must be paid to transform one distribution into the other, in a precise sense, and was first proposed for certain vision problems by Peleg, Werman, and Rom. For image retrieval, we combine this idea with a representation scheme for distributions that is based on vector quantization. This combination leads to an image comparison framework that often accounts for perceptual similarity better than other previously proposed methods. The EMD is based on a solution to the transportation problem from linear optimization, for which efficient algorithms are available, and also allows naturally for partial matching. It is more robust than histogram matching techniques, in that it can operate on variable-length representations of the distributions that avoid quantization and other binning problems typical of histograms. When used to compare distributions with the same overall mass, the EMD is a true metric. In this paper we focus on applications to color and texture, and we compare the retrieval performance of the EMD with that of other distances."}
{"_id": "83a6cacc126d85c45605797406262677c256a6af", "title": "Software Framework for Topic Modelling with Large Corpora", "text": "Large corpora are ubiquitous in today\u2019s world and memory quickly becomes the limiting factor in practical applications of the Vector Space Model (VSM). In this paper, we identify a gap in existing implementations of many of the popular algorithms, which is their scalability and ease of use. We describe a Natural Language Processing software framework which is based on the idea of document streaming, i.e. processing corpora document after document, in a memory independent fashion. Within this framework, we implement several popular algorithms for topical inference, including Latent Semantic Analysis and Latent Dirichlet Allocation, in a way that makes them completely independent of the training corpus size. Particular emphasis is placed on straightforward and intuitive framework design, so that modifications and extensions of the methods and/or their application by interested practitioners are effortless. We demonstrate the usefulness of our approach on a real-world scenario of computing document similarities within an existing digital library DML-CZ."}
{"_id": "cd2b8ac987b3254dd31835afa185ab1962e62aa6", "title": "Dual-Resonance NFC Antenna System Based on NFC Chip Antenna", "text": "In this letter, to enhance the performance of the near-field communication (NFC) antenna, an antenna system that has dual resonance is proposed. The antenna is based on NFC chip antenna, and the dual resonance comes from the chip antenna itself, the eddy current on printed circuit board, and a nearby loop that has strong coupling with the chip antenna. The performance of the proposed antenna system is confirmed by measurement."}
{"_id": "c7ceb4200c1875142edb1664628abb4a9ccd0674", "title": "Explaining the \u201c Identifiable Victim Effect \u201d", "text": "It is widely believed that people are willing to expend greater resources to save the lives of identified victims than to save equal numbers of unidentified or statistical victims. There are many possible causes of this disparity which have not been enumerated previously or tested empirically. We discuss four possible causes of the \u201cidentifiable victim effect\u201d and present the results of two studies which indicate that the most important cause of the disparity in treatment of identifiable and statistical lives is that, for identifiable victims, a high proportion of those at risk can be saved."}
{"_id": "52a97f714547e62f3ad59c979dcfa2f6733038e8", "title": "Efficiency optimization of half-bridge series resonant inverter with asymmetrical duty cycle control for domestic induction heating", "text": "In this paper, a method to improve efficiency in half-bridge series resonant inverter applied to domestic induction heating is presented. Low and medium output powers required for this application implies the use of higher switching frequencies, which leads to an efficiency decrease. Asymmetrical duty cycle (ADC) modulation scheme is proposed to improve efficiency due to its switching frequency reduction and absence of additional hardware requirements. Study methodology comprises, in a first step, a theoretical analysis of power balance as a function of control parameters: duty cycle and switching frequency. In addition, restrictions due to snubber and dead time, and variability of the induction loads have been considered. Afterwards, an efficiency analysis has been carried out to determine the optimum operation point. Switching and conduction losses have been calculated to examine global importance of each one for different switching devices. ADC modulation efficiency improvement is achieved by means of a switching frequency reduction, mainly at low-medium power range and low quality factor (Q) loads. The analytical results obtained with this study have been validated through an induction heating test-bench. A discrete 3-kW RL load has been designed to emulate a typical induction heating load. Then, a commercial induction heating inverter is used to evaluate ADC modulation scheme."}
{"_id": "8038a2e6da256556664b21401aed77079160c8b1", "title": "Vehicle Detection and Compass Applications using AMR Magnetic Sensors", "text": "The earliest magnetic field detectors allowed navigation over trackless oceans by sensing the earth\u2019s magnetic poles. Magnetic field sensing has vastly expanded as industry has adapted a variety of magnetic sensors to detect the presence, strength, or direction of magnetic fields not only from the earth, but also from permanent magnets, magnetized soft magnets, vehicle disturbances, brain wave activity, and fields generated from electric currents. Magnetic sensors can measure these properties without physical contact and have become the eyes of many industrial and navigation control systems. This paper will describe the current state of magnetic sensing within the earth\u2019s field range and how these sensors are applied. Several applications will be presented for magnetic sensing in systems with emphasis on vehicle detection and navigation based on magnetic fields."}
{"_id": "7f77d49b35ed15637d767a13a882ef8d3193772e", "title": "Flying Eyes and Hidden Controllers: A Qualitative Study of People's Privacy Perceptions of Civilian Drones in The US", "text": "Drones are unmanned aircraft controlled remotely or operated autonomously. While the extant literature suggests that drones can in principle invade people\u2019s privacy, little is known about how people actually think about drones. Drawing from a series of in-depth interviews conducted in the United States, we provide a novel and rich account of people\u2019s pri\u00ad vacy perceptions of drones for civilian uses both in general and under specific usage scenarios. Our informants raised both physical and information privacy issues against government, organization and individual use of drones. Informants\u2019 reason\u00ad ing about the acceptance of drone use was in part based on whether the drone is operating in a public or private space. However, our informants differed significantly in their defini\u00ad tions of public and private spaces. While our informants\u2019 pri\u00ad vacy concerns such as surveillance, data collection and shar\u00ad ing have been raised for other tracking technologies such as camera phones and closed-circuit television (CCTV), our in\u00ad terviews highlight two heightened issues of drones: (1) pow\u00ad erful yet inconspicuous data collection, (2) hidden and inac\u00ad cessible drone controllers. These two aspects of drones render some of people\u2019s existing privacy practices futile (e.g., notice recording and ask controllers to stop or delete the recording). Some informants demanded notifications of drones near them and expected drone controllers asking for their explicit per\u00ad missions before recording. We discuss implications for future privacy-enhancing drone designs."}
{"_id": "a0041f890b7e1ffef5c3919fd4a7de95c82282d7", "title": "Reduced graphene oxide\u2013silver nanoparticle nanocomposite: a potential anticancer nanotherapy", "text": "BACKGROUND\nGraphene and graphene-based nanocomposites are used in various research areas including sensing, energy storage, and catalysis. The mechanical, thermal, electrical, and biological properties render graphene-based nanocomposites of metallic nanoparticles useful for several biomedical applications. Epithelial ovarian carcinoma is the fifth most deadly cancer in women; most tumors initially respond to chemotherapy, but eventually acquire chemoresistance. Consequently, the development of novel molecules for cancer therapy is essential. This study was designed to develop a simple, non-toxic, environmentally friendly method for the synthesis of reduced graphene oxide-silver (rGO-Ag) nanoparticle nanocomposites using Tilia amurensis plant extracts as reducing and stabilizing agents. The anticancer properties of rGO-Ag were evaluated in ovarian cancer cells.\n\n\nMETHODS\nThe synthesized rGO-Ag nanocomposite was characterized using various analytical techniques. The anticancer properties of the rGO-Ag nanocomposite were evaluated using a series of assays such as cell viability, lactate dehydrogenase leakage, reactive oxygen species generation, cellular levels of malonaldehyde and glutathione, caspase-3 activity, and DNA fragmentation in ovarian cancer cells (A2780).\n\n\nRESULTS\nAgNPs with an average size of 20 nm were uniformly dispersed on graphene sheets. The data obtained from the biochemical assays indicate that the rGO-Ag nanocomposite significantly inhibited cell viability in A2780 ovarian cancer cells and increased lactate dehydrogenase leakage, reactive oxygen species generation, caspase-3 activity, and DNA fragmentation compared with other tested nanomaterials such as graphene oxide, rGO, and AgNPs.\n\n\nCONCLUSION\nT. amurensis plant extract-mediated rGO-Ag nanocomposites could facilitate the large-scale production of graphene-based nanocomposites; rGO-Ag showed a significant inhibiting effect on cell viability compared to graphene oxide, rGO, and silver nanoparticles. The nanocomposites could be effective non-toxic therapeutic agents for the treatment of both cancer and cancer stem cells."}
{"_id": "932d3ce4de4fe94f8f4d302d208ed6e1c4c930de", "title": "Transfer Learning for Music Classification and Regression Tasks", "text": "In this paper, we present a transfer learning approach for music classification and regression tasks. We propose to use a pre-trained convnet feature, a concatenated feature vector using the activations of feature maps of multiple layers in a trained convolutional network. We show how this convnet feature can serve as general-purpose music representation. In the experiments, a convnet is trained for music tagging and then transferred to other music-related classification and regression tasks. The convnet feature outperforms the baseline MFCC feature in all the considered tasks and several previous approaches that are aggregating MFCCs as well as lowand high-level music features."}
{"_id": "8d71074d6e7421c3f267328cafbba0877892c60d", "title": "Detecting Deceptive Behavior via Integration of Discriminative Features From Multiple Modalities", "text": "Deception detection has received an increasing amount of attention in recent years, due to the significant growth of digital media, as well as increased ethical and security concerns. Earlier approaches to deception detection were mainly focused on law enforcement applications and relied on polygraph tests, which had proved to falsely accuse the innocent and free the guilty in multiple cases. In this paper, we explore a multimodal deception detection approach that relies on a novel data set of 149 multimodal recordings, and integrates multiple physiological, linguistic, and thermal features. We test the system on different domains, to measure its effectiveness and determine its limitations. We also perform feature analysis using a decision tree model, to gain insights into the features that are most effective in detecting deceit. Our experimental results indicate that our multimodal approach is a promising step toward creating a feasible, non-invasive, and fully automated deception detection system."}
{"_id": "33960c2f143918e200ebfb5bbd4ae9b8e20cd96c", "title": "Inferring Undiscovered Public Knowledge by Using Text Mining-driven Graph Model", "text": "Due to the recent development of Information Technology, the number of publications is increasing exponentially. In response to the increasing number of publications, there has been a sharp surge in the demand for replacing the existing manual text data processing by an automatic text data processing. Swanson proposed ABC model [1] on the top of text mining as a part of literature-based knowledge discovery for finding new possible biomedical hypotheses about three decades ago. The following clinical scholars proved the effectiveness of the possible hypotheses found by ABC model [2]. Such effectiveness let scholars try various literature-based knowledge discovery approaches [3, 4, 5]. However, their trials are not fully automated but hybrids of automatic and manual processes. The manual process requires the intervention of experts. In addition, their trials consider a single perspective. Even trials involving network theory have difficulties in mal-understanding the entire network structure of the relationships among concepts and the systematic interpretation on the structure [6, 7]. Thus, this study proposes a novel approach to discover various relationships by extending the intermediate concept B to a multi-leveled concept. By applying a graph-based path finding method based on co-occurrence and the relational entities among concepts, we attempt to systematically analyze and investigate the relationships between two concepts of a source node and a target node in the total paths. For the analysis of our study, we set our baseline as the result of Swanson [8]'s work. This work suggested the intermediate concept or terms between Raynaud's disease and fish oils as blood viscosity, platelet aggregability, and vasconstriction. We compared our results of intermediate concepts with these intermediate concepts of Swanson's. This study provides distinct perspectives for literature-based discovery by not only discovering the meaningful relationship among concepts in biomedical literature through graph-based path interference but also being able to generate feasible new hypotheses."}
{"_id": "84404e2e918815ff359c4a534f7bc18b59ebc2f7", "title": "Failure detection and consensus in the crash-recovery model", "text": "Summary. We study the problems of failure detection and consensus in asynchronous systems in which processes may crash and recover, and links may lose messages. We first propose new failure detectors that are particularly suitable to the crash-recovery model. We next determine under what conditions stable storage is necessary to solve consensus in this model. Using the new failure detectors, we give two consensus algorithms that match these conditions: one requires stable storage and the other does not. Both algorithms tolerate link failures and are particularly efficient in the runs that are most likely in practice \u2013 those with no failures or failure detector mistakes. In such runs, consensus is achieved within $3 \\delta$ time and with 4 n messages, where $\\delta$ is the maximum message delay and n is the number of processes in the system."}
{"_id": "390baa992f07450a257694753c6f5e2858586fe0", "title": "Preimages for Step-Reduced SHA-2", "text": "In this paper, we present preimage attacks on up to 43step SHA-256 (around 67% of the total 64 steps) and 46-step SHA-512 (around 57.5% of the total 80 steps), which significantly increases the number of attacked steps compared to the best previously published preimage attack working for 24 steps. The time complexities are 2, 2 for finding pseudo-preimages and 2, 2 compression function operations for full preimages. The memory requirements are modest, around 2 words for 43-step SHA-256 and 46-step SHA-512. The pseudo-preimage attack also applies to 43-step SHA-224 and SHA-384. Our attack is a meet-in-the-middle attack that uses a range of novel techniques to split the function into two independent parts that can be computed separately and then matched in a birthday-style phase."}
{"_id": "8be94dfea49d79ab7ce90b3dbd25ff38500f4163", "title": "Critical review on video game evaluation heuristics: social games perspective", "text": "This paper presents the first step in creating design and evaluation heuristics for social games which emerge from the domain of social media. Initial high level heuristics for social games are offered by reviewing four existing video game heuristic models and analyzing two social games design frameworks."}
{"_id": "b42d95996b8760ec06cfa1894e411bdb972be9c4", "title": "Through-Wall Opportunistic Sensing System Utilizing a Low-Cost Flat-Panel Array", "text": "A UWB through-wall imaging system is proposed based on a planar low profile aperture array operating from 0.9 GHz to 2.3 GHz. The goal is to provide a lightweight, fixed array to serve as an alternative to synthetic aperture radars (SAR) that require continuous array movement while collecting data. The proposed system consists of 12 dual-linear printed elements arranged within a triangular lattice, each forming a \u201cflower\u201d shape and backed by a ground plane. The array delivers half-space radiation with wideband performance, necessary for imaging applications. UWB capability is realized by suppressing grating lobes via the introduction of virtual phase centers interwoven within the actual array feeds. The proposed system is demonstrated for through-wall imaging via a non-coherent process. Distinctively, several coherent images are forged from various fixed aperture locations (referred to as \u201csnapshot\u201d locations) and appropriately combined to create a composite scene image. In addition to providing a unique wideband imaging capability (as an alternative to SAR), the system is portable and inexpensive for collecting/storing scattering data. The array design and data collection system is described, and several through-wall images are presented to demonstrate functionality."}
{"_id": "10767cd60ac9e33188ae7e35d1e84a6614387da2", "title": "Maximum likelihood linear transformations for HMM-based speech recognition", "text": "This paper examines the application of linear transformations for speaker and environmental adaptation in an HMM-based speech recognition system. In particular, transformations that are trained in a maximum likelihood sense on adaptation data are investigated. Other than in the form of a simple bias, strict linear feature-space transformations are inappropriate in this case. Hence, only model-based linear transforms are considered. The paper compares the two possible forms of model-based transforms: (i) unconstrained, where any combination of mean and variance transform may be used, and (ii) constrained, which requires the variance transform to have the same form as the mean transform (sometimes referred to as feature-space transforms). Re-estimation formulae for all appropriate cases of transform are given. This includes a new and e cient \\full\" variance transform and the extension of the constrained model-space transform from the simple diagonal case to the full or block-diagonal case. The constrained and unconstrained transforms are evaluated in terms of computational cost, recognition time e ciency, and use for speaker adaptive training. The recognition performance of the two model-space transforms on a large vocabulary speech recognition task using incremental adaptation is investigated. In addition, initial experiments using the constrained model-space transform for speaker adaptive training are detailed."}
{"_id": "6446405d851f7e940314d2d07aa8ff67b86d1da6", "title": "Semantic-Based Location Recommendation With Multimodal Venue Semantics", "text": "In recent years, we have witnessed a flourishing of location -based social networks. A well-formed representation of location knowledge is desired to cater to the need of location sensing, browsing, navigation and querying. In this paper, we aim to study the semantics of point-of-interest (POI) by exploiting the abundant heterogeneous user generated content (UGC) from different social networks. Our idea is to explore the text descriptions, photos, user check-in patterns, and venue context for location semantic similarity measurement. We argue that the venue semantics play an important role in user check-in behavior. Based on this argument, a unified POI recommendation algorithm is proposed by incorporating venue semantics as a regularizer. In addition to deriving user preference based on user-venue check-in information, we place special emphasis on location semantic similarity. Finally, we conduct a comprehensive performance evaluation of location semantic similarity and location recommendation over a real world dataset collected from Foursquare and Instagram. Experimental results show that the UGC information can well characterize the venue semantics, which help to improve the recommendation performance."}
{"_id": "ad2f8ea1313410195953f9c186a4cdb6012e53a2", "title": "SyncGC: A Synchronized Garbage Collection Technique for Reducing Tail Latency in Cassandra", "text": "Data-center applications running on distributed databases often suffer from unexpectedly high response time fluctuation which is caused by long tail latency. In this paper, we find that long tail latency of user writes is mainly created by the interference with garbage collection (GC) tasks running in various system layers. In order to address the tail latency problem, we propose a synchronized garbage collection technique, called SyncGC. By scheduling multiple GC instances to execute in sync with each other in an overlapped manner, SyncGC prevents user requests from being interfered with GC instances, thereby minimizing their negative impacts on tail latency. Our experimental results with Cassandra show that SyncGC reduces the 99.99th-percentile tail latency and the maximum latency by 35% and 37%, on average, respectively."}
{"_id": "508d8c1dbc250732bd2067689565a8225013292f", "title": "Experimental validation of dual PPG local pulse wave velocity probe", "text": "A novel dual photoplethysmograph (PPG) probe and measurement system for local pulse wave velocity (PWV) is proposed and demonstrated. The developed probe design employs reflectance PPG transducers for non-invasive detection of blood pulse propagation waveforms from two adjacent measurement points (28 mm apart). Transit time delay between the continuously acquired dual pulse waveform was used for beat-to-beat local PWV measurement. An in-vivo experimental validation study was conducted on 10 healthy volunteers (8 male and 2 female, 21 to 33 years of age) to validate the PPG probe design and developed local PWV measurement system. The proposed system was able to measure carotid local PWV from multiple subjects. Beat-to-beat variation of baseline carotid PWV was less than 7.5% for 7 out of 10 subjects, a maximum beat-to-beat variation of 16% was observed during the study. Variation in beat-to-beat carotid local PWV and brachial blood pressure (BP) values during post-exercise recovery period was also examined. A statistically significant correlation between intra-subject local PWV variation and brachial BP parameters was observed (r > 0.85, p < 0.001). The results demonstrated the feasibility of proposed PPG probe for continuous beat-to-beat local PWV measurement from the carotid artery. Such a non-invasive local PWV measurement unit can be potentially used for continuous ambulatory BP measurements."}
{"_id": "2b3caf9dfb2a539c89a5f55fc3ba6f81f4fa8f65", "title": "Millimeter-Wave TE20-Mode SIW Dual-Slot-Fed Patch Antenna Array With a Compact Differential Feeding Network", "text": "A millimeter-wave series\u2013parallel patch antenna array is presented, in which the dual-slot feeding structure is handily implemented using the intrinsic field distribution of TE20 mode in substrate-integrated waveguide (SIW). One 28 GHz patch antenna element fed by the TE20-mode SIW is first designed, achieving a 10 dB impedance bandwidth of 10.2% and a simulated peak gain of 6.48 dBi. Based on the antenna element, a $4 \\times 4$ array with a compact series\u2013parallel differential feeding network is developed accordingly. Due to the novel compact SIW-based series\u2013parallel feeding network, the antenna array can achieve superior radiation performances, which is the highlight of this communication. The simulation and measurement results of the proposed antenna array are in good agreement, demonstrating a performance of 8.5% impedance bandwidth, 19.1 dBi peak gain, symmetrical radiation patterns, and low cross-polarization levels (\u221230 dB in E-plane and \u221225 dB in H-plane) in the operating frequency band of 26.65\u201329.14 GHz."}
{"_id": "0797a337a6f2a8e1ac2824a4e941a92c83c79040", "title": "Perceived parental social support and academic achievement: an attachment theory perspective.", "text": "The study tested the extent to which parental social support predicted college grade point average among undergraduate students. A sample of 418 undergraduates completed the Social Provisions Scale--Parent Form (C.E. Cutrona, 1989) and measures of family conflict and achievement orientation. American College Testing Assessment Program college entrance exam scores (ACT; American College Testing Program, 1986) and grade point average were obtained from the university registrar. Parental social support, especially reassurance of worth, predicted college grade point average when controlling for academic aptitude (ACT scores), family achievement orientation, and family conflict. Support from parents, but not from friends or romantic partners, significantly predicted grade point average. Results are interpreted in the context of adult attachment theory."}
{"_id": "4d85ad577916479ff7d57b78162ef4de70cf895e", "title": "Comparison of Hilbert transform and wavelet methods for the analysis of neuronal synchrony", "text": "The quantification of phase synchrony between neuronal signals is of crucial importance for the study of large-scale interactions in the brain. Two methods have been used to date in neuroscience, based on two distinct approaches which permit a direct estimation of the instantaneous phase of a signal [Phys. Rev. Lett. 81 (1998) 3291; Human Brain Mapping 8 (1999) 194]. The phase is either estimated by using the analytic concept of Hilbert transform or, alternatively, by convolution with a complex wavelet. In both methods the stability of the instantaneous phase over a window of time requires quantification by means of various statistical dependence parameters (standard deviation, Shannon entropy or mutual information). The purpose of this paper is to conduct a direct comparison between these two methods on three signal sets: (1) neural models; (2) intracranial signals from epileptic patients; and (3) scalp EEG recordings. Levels of synchrony that can be considered as reliable are estimated by using the technique of surrogate data. Our results demonstrate that the differences between the methods are minor, and we conclude that they are fundamentally equivalent for the study of neuroelectrical signals. This offers a common language and framework that can be used for future research in the area of synchronization."}
{"_id": "a5a4ea1118adf2880b0800859d0e7238b3ae2094", "title": "Author ' s personal copy Romancing leadership : Past , present , and future \u2606", "text": "Available online 2 October 2011 This paper presents a review of the romance of leadership and the social construction of leadership theory 25 years after it was originally introduced. We trace the development of this theoretical approach from the original formulation of the romance of leadership (RoL) theory as attributional bias through its emergence as a radical, unconventional approach that views leadership as a sensemaking activity that is primarily \u2018in the eye of the beholder.\u2019 We subsequently review research published in management and organizational psychology journals, book chapters and special issues of journals from 1985 to 2010. Three overall themes emerged from this review: 1) biases in (mis)attributions of leadership, including attributions for organizational success and failure; 2) follower-centered approaches, including the role of follower characteristics, perceptions, and motivations in interpreting leadership ratings; and 3) the social construction of leadership, including interfollower and social contagion processes, the role of crisis and uncertainty, and constructions and deconstructions of leadership and CEO celebrity in the media. Within each of these themes, we examine developments and summarize key findings. Our review concludes with recommendations for future theoretical and empirical"}
{"_id": "3c31d0411e813111eb652fb3639a76411b53394e", "title": "A General Agnostic Active Learning Algorithm", "text": "We present a simple, agnostic active learning algorithm that works for any hypothesis class of bounded VC dimension, and any data distribution. Our algorithm extends a scheme of Cohn, Atlas, and Ladner [6] to the agnostic setting, by (1) reformulating it using a reduction to supervised learning and (2) showing how to apply generalization bounds even for the non-i.i.d. samples that result from selective sampling. We provide a general characterization of the label complexity of our algorithm. This quantity is never more than the usual PAC sample complexity of supervised learning, and is exponentially smaller for some hypothesis classes and distributions. We also demonstrate improvements experimentally."}
{"_id": "611f9faa6f3aeff3ccd674d779d52c4f9245376c", "title": "Multiresolution Models for Object Detection", "text": "Most current approaches to recognition aim to be scaleinvariant. However, the cues available for recognizing a 300 pixel tall object are qualitatively different from those for recognizing a 3 pixel tall object. We argue that for sensors with finite resolution, one should instead use scale-variant, or multiresolution representations that adapt in complexity to the size of a putative detection window. We describe a multiresolution model that acts as a deformable part-based model when scoring large instances and a rigid template with scoring small instances. We also examine the interplay of resolution and context, and demonstrate that context is most helpful for detecting low-resolution instances when local models are limited in discriminative power. We demonstrate impressive results on the Caltech Pedestrian benchmark, which contains object instances at a wide range of scales. Whereas recent state-of-theart methods demonstrate missed detection rates of 86%-37% at 1 falsepositive-per-image, our multiresolution model reduces the rate to 29%."}
{"_id": "c97cca5e6f7c2268a7c5aa0603842fd7cb72dfd4", "title": "Gesture control of drone using a motion controller", "text": "In this study, we present our implementation of using a motion controller to control the motion of a drone via simple human gestures. We have used the Leap as the motion controller and the Parrot AR DRONE 2.0 for this implementation. The Parrot AR DRONE is an off the shelf quad rotor having an on board Wi-Fi system. The AR DRONE is connected to the ground station via Wi-Fi and the Leap is connected to the ground station via USB port. The LEAP Motion Controller recognizes the hand gestures and relays it on to the ground station. The ground station runs ROS (Robot Operating System) in Linux which is used as the platform for this implementation. Python is the programming language used for interaction with the AR DRONE in order to convey the simple hand gestures. In our implementation, we have written python codes to interpret the hand gestures captured by the LEAP, and transmit them in order to control the motion of the AR DRONE via these gestures."}
{"_id": "a09547337b202ecf203148cc4636dd3db75f9df0", "title": "Improvement of Marching Cubes Algorithm Based on Sign Determination", "text": "Traditional Marching Cubes algorithm has the problem of repeating calculation, so that an improved Marching Cubes algorithm is put forward. Boundary voxel is utilized to find adjacent boundary voxel. According to the relationship of edge and edge sign between boundary voxel and adjacent boundary voxel, we transmit intersection on the common face of the boundary voxel to adjacent boundary voxel. If the edge sign of adjacent boundary voxel is not existed, we change the edge sign of the adjacent boundary voxel simultaneously. In that way, we can avoid double counting of the intersection, which is in two adjacent boundary voxels. The two adjacent boundary voxels have common surface on where the edge of the isosurface lies. During the computation of intersection of edge and isosurface, we only compute the edges whose edge signs are null. At the same time, we make use of edge sign to avoid repeating assignment. It speeds up isosurface extraction."}
{"_id": "10cfa5bfab3da9c8026d3a358695ea2a5eba0f33", "title": "Parallel Tracking and Mapping for Small AR Workspaces", "text": "This paper presents a method of estimating camera pose in an unknown scene. While this has previously been attempted by adapting SLAM algorithms developed for robotic exploration, we propose a system specifically designed to track a hand-held camera in a small AR workspace. We propose to split tracking and mapping into two separate tasks, processed in parallel threads on a dual-core computer: one thread deals with the task of robustly tracking erratic hand-held motion, while the other produces a 3D map of point features from previously observed video frames. This allows the use of computationally expensive batch optimisation techniques not usually associated with real-time operation: The result is a system that produces detailed maps with thousands of landmarks which can be tracked at frame-rate, with an accuracy and robustness rivalling that of state-of-the-art model-based systems."}
{"_id": "5137516a2604bb0828e59dc517b252d489409986", "title": "Fast Keypoint Recognition in Ten Lines of Code", "text": "While feature point recognition is a key component of modern approaches to object detection, existing approaches require computationally expensive patch preprocessing to handle perspective distortion. In this paper, we show that formulating the problem in a Naive Bayesian classification framework makes such preprocessing unnecessary and produces an algorithm that is simple, efficient, and robust. Furthermore, it scales well to handle large number of classes. To recognize the patches surrounding keypoints, our classifier uses hundreds of simple binary features and models class posterior probabilities. We make the problem computationally tractable by assuming independence between arbitrary sets of features. Even though this is not strictly true, we demonstrate that our classifier nevertheless performs remarkably well on image datasets containing very significant perspective changes."}
{"_id": "69524d5b10b0bb7e4aa1c9057eefe197b230922e", "title": "Face to Face Collaborative AR on Mobile Phones", "text": "Mobile phones are an ideal platform for augmented reality. In this paper we describe how they can also be used to support face to face collaborative AR gaming. We have created a custom port of the ARToolKit library to the Symbian mobile phone operating system and then developed a sample collaborative AR game based on this. We describe the game in detail and user feedback from people who have played the game. We also provide general design guidelines that could be useful for others who are developing mobile phone collaborative AR applications."}
{"_id": "779f05bf98049762df4298043ac6f38c82a07607", "title": "USING CAMERA-EQUIPPED MOBILE PHONES FOR INTERACTING WITH REAL-WORLD OBJECTS", "text": "The idea described in this paper is to use the built-in cameras of consumer mobile phones as sensors for 2-dimensional visual codes. Such codes can be attached to physical objects in order to retrieve object-related information and functionality. They are also suitable for display on electronic screens. The proposed visual code system allows the simultaneous detection of multiple codes, introduces a position-independent coordinate system, and provides the phone\u2019s orientation as a parameter. The ability to detect objects in the user\u2019s vicinity offers a natural way of interaction and strengthens the role of mobile phones in a large number of application scenarios. We describe the hardware requirements, the design of a suitable visual code, a lightweight recognition algorithm, and present some example applications."}
{"_id": "7c20a16d1666e990e6499995556578c1649d191b", "title": "Robust Visual Tracking for Non-Instrumented Augmented Reality", "text": "This paper presents a robust and flexible framework for augmented reality which does not require instrumenting either the environment or the workpiece. A model-based visual tracking system is combined with with rate gyroscopes to produce a system which can track the rapid camera rotations generated by a head-mounted camera, even if images are substantially degraded by motion blur. This tracking yields estimates of head position at video field rate (50Hz) which are used to align computer-generated graphics on an optical see-through display. Nonlinear optimisation is used for the calibration of display parameters which include a model of optical distortion. Rendered visuals are pre-distorted to correct the optical distortion of the display."}
{"_id": "edb92581ae897424de756257b82389c6cb21f28b", "title": "Engagement in Multimedia Training Systems", "text": "Two studies examined user engagement in two type multimedia training systems -a more passive medium, videotape, and a less passive medium, interactive softw Each study compared engagement in three formats: the Text format contained text and still images, the Audio format contained audio and still images, and the Video format contained audio and video images. In both studies, engagement was lower in the Text condition than in the Video condition. However, there were no differences in engagement between Text and Audio in the videotapebased training, and no differences between Audio and Video in the computer-based training."}
{"_id": "79465f3bac4fb9f8cc66dcbe676022ddcd9c05c6", "title": "Action recognition based on a bag of 3D points", "text": "This paper presents a method to recognize human actions from sequences of depth maps. Specifically, we employ an action graph to model explicitly the dynamics of the actions and a bag of 3D points to characterize a set of salient postures that correspond to the nodes in the action graph. In addition, we propose a simple, but effective projection based sampling scheme to sample the bag of 3D points from the depth maps. Experimental results have shown that over 90% recognition accuracy were achieved by sampling only about 1% 3D points from the depth maps. Compared to the 2D silhouette based recognition, the recognition errors were halved. In addition, we demonstrate the potential of the bag of points posture model to deal with occlusions through simulation."}
{"_id": "27b807162372574a2317b92a66396ba15c10d290", "title": "Birdbrains could teach basal ganglia research a new song", "text": "Recent advances in anatomical, physiological and histochemical characterization of avian basal ganglia neurons and circuitry have revealed remarkable similarities to mammalian basal ganglia. A modern revision of the avian anatomical nomenclature has now provided a common language for studying the function of the cortical-basal-ganglia-cortical loop, enabling neuroscientists to take advantage of the specialization of basal ganglia areas in various avian species. For instance, songbirds, which learn their vocal motor behavior using sensory feedback, have specialized a portion of their cortical-basal ganglia circuitry for song learning and production. This discrete circuit dedicated to a specific sensorimotor task could be especially tractable for elucidating the interwoven sensory, motor and reward signals carried by basal ganglia, and the function of these signals in task learning and execution."}
{"_id": "e2f8ea2a3504d2984ed91ef8666874d91fd2a400", "title": "Robotic Arm Based 3D Reconstruction Test Automation", "text": "The 3-D reconstruction involves the construction of a 3-D model from a set of images. The 3-D reconstruction has varied uses that include 3-D printing, the generation of 3-D models that can be shared through social media, and more. The 3-D reconstruction involves complex computations in mobile phones that must determine the pose estimation. The pose estimation involves the process of transforming a 2-D object into 3-D space. Once the pose estimation is done, then the mesh generation is performed using the graphics processing unit. This helps render the 3-D object. The competitive advantages of using hardware processors are to accelerate the intensive computation using graphics processors and digital signal processors. The stated problem that this technical paper addresses is the need for a reliable automated test for the 3-D reconstruction feature. The solution to this problem involved the design and development of an automated test system using a programmable robotic arm and rotor for precisely testing the quality of 3-D reconstruction features. The 3-D reconstruction testing involves using a robotic arm lab to accurately test the algorithmic integrity and end-to-end validation of the generated 3-D models. The robotic arm can move the hardware at different panning speeds, specific angles, fixed distances from the object, and more. The ability to reproduce the scanning at a fixed distance and the same panning speed helps to generate test results that can be benchmarked by different software builds. The 3-D reconstruction also requires a depth sensor to be mounted onto the device under examination. We use this robotic arm lab for functional, high performance, and stable validation of the 3-D reconstruction feature. This paper addresses the computer vision use case testing for 3-D reconstruction features and how we have used the robotic arm lab for automating these use cases."}
{"_id": "c4f7dcd553740d393544c2e2016809ba2c03e3a4", "title": "A Ka-Band CMOS Wilkinson Power Divider Using Synthetic Quasi-TEM Transmission Lines", "text": "This work presents a Ka-band two-way 3 dB Wilkinson power divider using synthetic quasi-transverse electromagnetic (TEM) transmission lines (TLs). The synthetic quasi-TEM TL, also called complementary-conducting-strip TL (CCS TL), is theoretically analyzed. The equivalent TL model, whose production is based on the extracted results, is applied to the power divider design. The prototype is fabricated by the standard 0.18 mum 1P6M CMOS technology, showing the circuit size of 210.0 mumtimes390.0 mum without contact pads. The measurement results, which match the 50 Omega system, reveal perfect agreements with those of the simulations. The comparison reveals the following characteristics. The divider exhibits an equal power-split with the insertion losses (S21 and S31) of 3.65 dB. The return losses (S11, S22 and S33) of the prototype are higher than 10.0 dB from 30.0 to 40.0 GHz."}
{"_id": "7cba794f1a270d0f44c76114c1c9d57718abe033", "title": "Using Patterns to Capture Architectural Decisions", "text": "Throughout the software design process, developers must make decisions and reify them in code. The decisions made during software architecting are particularly significant in that they have system-wide implications, especially on the quality attributes. However, architects often fail to adequately document their decisions because they don't appreciate the benefits, don't know how to document decisions, or don't recognize that they're making decisions. This lack of thorough documentation. This paper provides information about a decision's rationale and consequences, architecture patterns can help architects better understand and more easily record their decisions."}
{"_id": "618b5ebc365fb374faf0276633b11bcf249efa0e", "title": "A model for concentric tube continuum robots under applied wrenches", "text": "Continuum robots made from telescoping precurved elastic tubes enable base-mounted actuators to specify the curved shapes of robots as thin as standard surgical needles. While free space beam mechanics-based models of the shape of these \u2018active cannulas\u2019 exist, current models cannot account for external forces and torques applied to the cannula by the environment. In this paper we apply geometrically exact beam theory to solve the statics problem for concentric-tube continuum robots. This yields the equivalent of forward kinematics for an active cannula with general tube precurvature functions and arbitrarily many tubes, under loading from a general wrench distribution. The model achieves average experimental tip errors of less than 3 mm over the workspace of a prototype active cannula subject to various tip forces."}
{"_id": "aec61d77458e8d2713c29d12d49ed2eaf814c98f", "title": "The feasible workspace analysis of a set point control for a cable-suspended robot with input constraints and disturbances", "text": "This paper deals with the characterization of reachable domain of a set-point controller for a cable-suspended robot under disturbances and input constraints. The main contribution of the paper is to calculate the feasible domain analytically through the choice of a control law, starting from a given initial condition. This analytical computation is then recursively used to find a second feasible domain starting from a boundary point of the first feasible domain. Hence, this procedure allows to expand the region of feasible reference signals by connecting adjacent domains through common points. Finally, the effectiveness of the proposed method is illustrated by numerical simulations on a kinematically determined cable robot with six cables."}
{"_id": "9878cc39383560752c5379a7e9641fc82a4daf7f", "title": "Graph Embedding Problem Settings Graph Embedding Input Graph Embedding Output Homogeneous Graph Heterogeneous Graph Graph with Auxiliary Information Graph Constructed from Non-relational Data Node Embedding Edge Embedding Hybrid Embedding Whole-Graph Embedding Graph Embedding Techniques Matrix Facto", "text": "Graph is an important data representation which appears in a wide diversity of real-world scenarios. Effective graph analytics provides users a deeper understanding of what is behind the data, and thus can benefit a lot of useful applications such as node classification, node recommendation, link prediction, etc. However, most graph analytics methods suffer the high computation and space cost. Graph embedding is an effective yet efficient way to solve the graph analytics problem. It converts the graph data into a low dimensional space in which the graph structural information and graph properties are maximumly preserved. In this survey, we conduct a comprehensive review of the literature in graph embedding. We first introduce the formal definition of graph embedding as well as the related concepts. After that, we propose two taxonomies of graph embedding which correspond to what challenges exist in different graph embedding problem settings and how the existing work address these challenges in their solutions. Finally, we summarize the applications that graph embedding enables and suggest four promising future research directions in terms of computation efficiency, problem settings, techniques and application scenarios."}
{"_id": "53c08abb25ba0573058ad2692ffed39fbd65eb0d", "title": "Local sensor system for badminton smash analysis", "text": "This paper presents a development of a sensory system for analysis of badminton smashes. During a badminton game, the ability to execute a powerful smash is fundamental for a player to be competitive. In most games, the winning factor for the game is often attributed to a high shuttle speed during the execution of a smash. It was envisioned that the shuttle speed can be correlated from the speed of the rackets head. To help analyze the key relationship between the racket speed and the shuttle ball speed, a series of sensors were integrated into a conventional racket. The aim of this investigation is to develop a sensor system that will facilitate the quantifiable analysis of the badminton smash. It was determined in a previous investigation that a single accelerometer is insufficient to determine the two or three dimensional trajectory of the racket. Therefore a low mass compact, 2-axes piezoelectric accelerometers was applied to the head of the racket. An acoustic shock sensor was also mounted on the racket head to identify the instant of the contact event. It was hypothesized that the magnitude of the acoustic signal, associated with the hitting event, and the final speed of the racket when fused could provide a good correlation with the shuttle speed. The fuzzy inference system and ANFIS (Adaptive Neuro-Fuzzy Inferior System) sensor fusion techniques was investigated. It has been demonstrated that it is possible to analyze the performance of the smashing stroke based on the system developed in this investigation which in turn may, with further develop by a useful aid to badminton coaches and the methods developed may also be applicable to other applications."}
{"_id": "46fd85775cab39ecb32cf2e41642ed2d0984c760", "title": "Vital, Sophia, and Co. - The Quest for the Legal Personhood of Robots", "text": "The paper examines today\u2019s debate on the legal status of AI robots, and how often scholars and policy makers confuse the legal agenthood of these artificial agents with the status of legal personhood. By taking into account current trends in the field, the paper suggests a twofold stance. First, policy makers shall seriously mull over the possibility of establishing novel forms of accountability and liability for the activities of AI robots in contracts and business law, e.g., new forms of legal agenthood in cases of complex distributed responsibility. Second, any hypothesis of granting AI robots full legal personhood has to be discarded in the foreseeable future. However, how should we deal with Sophia, which became the first AI application to receive citizenship of any country, namely, Saudi Arabia, in October 2017? Admittedly, granting someone, or something, legal personhood is\u2014as always has been\u2014a highly sensitive political issue that does not simply hinge on rational choices and empirical evidence. Discretion, arbitrariness, and even bizarre decisions play a role in this context. However, the normative reasons why legal systems grant human and artificial entities, such as corporations, their status, help us taking sides in today\u2019s quest for the legal personhood of AI robots. Is citizen Sophia really conscious, or capable of suffering the slings and arrows of outrageous scholars?"}
{"_id": "9bd38b75d75ea35a720b7647a93e8f5526348be6", "title": "Working memory as an emergent property of the mind and brain", "text": "Cognitive neuroscience research on working memory has been largely motivated by a standard model that arose from the melding of psychological theory with neuroscience data. Among the tenets of this standard model are that working memory functions arise from the operation of specialized systems that act as buffers for the storage and manipulation of information, and that frontal cortex (particularly prefrontal cortex) is a critical neural substrate for these specialized systems. However, the standard model has been a victim of its own success, and can no longer accommodate many of the empirical findings of studies that it has motivated. An alternative is proposed: Working memory functions arise through the coordinated recruitment, via attention, of brain systems that have evolved to accomplish sensory-, representation-, and action-related functions. Evidence from behavioral, neuropsychological, electrophysiological, and neuroimaging studies, from monkeys and humans, is considered, as is the question of how to interpret delay-period activity in the prefrontal cortex."}
{"_id": "c805f7e151418da74df326c730cc690f2c081b7d", "title": "Innovating with Concept Mapping", "text": "Student-generated concept maps (Cmaps) have been the preferred choice to design assessment tasks, which are time-consuming in real classroom settings. We have proposed the use of Cmap with errors (elaborated by teachers) to develop assessment tasks, as a way to address the logistical practicality obstacles usually found in classrooms. This paper compared two different tasks, finding the errors and judging the selected propositions in Cmap with errors, exploring two topics with different levels of difficulty. Our results confirmed Cmap with errors as a straightforward approach to include concept mapping into the classroom routine to foster the pedagogic resonance (the bridge between teacher knowledge and student learning), which is critical for motivating students to learn meaningfully. Moreover, Cmaps with errors are also amenable for on-line learning platforms. Future works of our research group will develop an automated process for evaluation and feedback using the task formats presented in this paper."}
{"_id": "949b3f3ee6256aa3465914b6697f0a17475a3112", "title": "Natural Language Processing for Question Answering ( NLP 4", "text": "The recent developments in Question Answering have kept with open-domain questions and collections, sometimes argued as being more di fficult than narrow domain-focused questions and corpora. The biomedical field is indeed a specialized domain; however, its scope is fairly broad, so that considering a biomedical QA task is not necessarily such a simplification over open-domain QA as represented in the recent TREC evaluations. We shall try to characterize salient aspects of biomedical QA as well as to give a short review of useful resources to address this task."}
{"_id": "177a167ba6a2e0b84d5e6f1375f6b54e14a8c6c2", "title": "Guest editorial: qualitative studies in information systems: a critical review and some guiding principles", "text": "Please refer to attached Guest Editorial"}
{"_id": "b7ec80b305ee1705a107989b4a56a24801084dbd", "title": "Examining the critical success factors in the adoption of enterprise resource planning", "text": "This paper presents a literature review of the critical success factors (CSFs) in the implementation of enterprise resource planning (ERP) across 10 different countries/regions. The review covers journals, conference proceedings, doctoral dissertation, and textbooks from these 10 different countries/regions. Through a review of the literature, 18 CSFs were identified, with more than 80 sub-factors, for the successful implementation of ERP. The findings of our study reveal that \u2018appropriate business and IT legacy systems\u2019, \u2018business plan/vision/goals/justification\u2019, \u2018business process reengineering\u2019, \u2018change management culture and programme\u2019, \u2018communication\u2019, \u2018ERP teamwork and composition\u2019, \u2018monitoring and evaluation of performance\u2019, \u2018project champion\u2019, \u2018project management\u2019, \u2018software/system development, testing and troubleshooting\u2019, \u2018top management support\u2019, \u2018data management\u2019, \u2018ERP strategy and implementation methodology\u2019, \u2018ERP vendor\u2019, \u2018organizational characteristics\u2019, \u2018fit between ERP and business/process\u2019, \u2018national culture\u2019 and \u2018country-related functional requirement\u2019 were the commonly extracted factors across these 10 countries/regions. In these 18 CSFs, \u2018top management support\u2019 and \u2018training and education\u2019 were the most frequently cited as the critical factors to the successful implementation of ERP systems. # 2008 Elsevier B.V. All rights reserved."}
{"_id": "5dfbf6d15c47637fa5d62e27f9b26c59fc249ab3", "title": "Using Quality Models in Software Package Selection", "text": "0 7 4 0 7 4 5 9 / 0 3 / $ 1 9 . 0 0 \u00a9 2 0 0 3 I E E E All the methodologies that have been proposed recently for choosing software packages compare user requirements with the packages\u2019 capabilities.3\u20135 There are different types of requirements, such as managerial, political, and, of course, quality requirements. Quality requirements are often difficult to check. This is partly due to their nature, but there is another reason that can be mitigated, namely the lack of structured and widespread descriptions of package domains (that is, categories of software packages such as ERP systems, graphical or data structure libraries, and so on). This absence hampers the accurate description of software packages and the precise statement of quality requirements, and consequently overall package selection and confidence in the result of the process. Our methodology for building structured quality models helps solve this drawback. (See the \u201cRelated Work\u201d sidebar for information about other approaches.) A structured quality model for a given package domain provides a taxonomy of software quality features, and metrics for computing their value. Our approach relies on the International Organization for Standardization and International Electrotechnical Commission 9126-1 quality standard,6 which we selected for the following reasons:"}
{"_id": "d2656083425ffd4135cc13d0b444dbc605ebb6ba", "title": "An AI Approach to Automatic Natural Music Transcription", "text": "Automatic music transcription (AMT) remains a fundamental and difficult problem in music information research, and current music transcription systems are still unable to match human performance. AMT aims to automatically generate a score representation given a polyphonic acoustical signal. In our project, we approach the AMT problem on two fronts: acoustic modeling to identify pitches from a frame of audio, and establishing a score generation model to convert exact piano roll representations of audio into more \u2018natural\u2019 sheet music. We build an end to end pipeline that aims to convert .wav classical piano audio files into a \u2018natural\u2019 score representation."}
{"_id": "e79b273af2f644753eef96f39e39bff973ce21a1", "title": "COMPARISON OF LOSSLESS DATA COMPRESSION ALGORITHMS FOR TEXT DATA", "text": "Data compression is a common requirement for most of the computerized applications. There are number of data compression algorithms, which are dedicated to compress different data formats. Even for a single data type there are number of different compression algorithms, which use different approaches. This paper examines lossless data compression algorithms and compares their performance. A set of selected algorithms are examined and implemented to evaluate the performance in compressing text data. An experimental comparison of a number of different lossless data compression algorithms is presented in this paper. The article is concluded by stating which algorithm performs well for text data."}
{"_id": "600c91dd8c2ad24217547d506ec2f89316f21e15", "title": "A Simple and Accurate Syntax-Agnostic Neural Model for Dependency-based Semantic Role Labeling", "text": "We introduce a simple and accurate neural model for dependency-based semantic role labeling. Our model predicts predicate-argument dependencies relying on states of a bidirectional LSTM encoder. The semantic role labeler achieves competitive performance on English, even without any kind of syntactic information and only using local inference. However, when automatically predicted partof-speech tags are provided as input, it substantially outperforms all previous local models and approaches the best reported results on the English CoNLL2009 dataset. We also consider Chinese, Czech and Spanish where our approach also achieves competitive results. Syntactic parsers are unreliable on out-of-domain data, so standard (i.e., syntactically-informed) SRL models are hindered when tested in this setting. Our syntax-agnostic model appears more robust, resulting in the best reported results on standard out-of-domain test sets."}
{"_id": "2fc1e38198f2961969f3863230c0a2993e156506", "title": "Walking on foot to explore a virtual environment with uneven terrain", "text": "In this work we examine people's ability to maintain spatial awareness of an HMD--based immersive virtual environment when the terrain of the virtual world does not match the real physical world in which they are locomoting. More specifically, we examine what happens when a subject explores a hilly virtual environment while physically walking in a flat room. In our virtual environments, subjects maintain their height (or distance to the ground plane) at all times. In the experiment described in this work, we directly compare spatial orientation in a flat virtual environment to two variations of a hilly virtual environment. Interestingly, we find that subjects' spatial awareness of the environment is improved with the addition of uneven terrain."}
{"_id": "26ae715ca80c4057a9e1ddce311b45ce5e2de835", "title": "The emergence of lncRNAs in cancer biology.", "text": "The discovery of numerous noncoding RNA (ncRNA) transcripts in species from yeast to mammals has dramatically altered our understanding of cell biology, especially the biology of diseases such as cancer. In humans, the identification of abundant long ncRNA (lncRNA) >200 bp has catalyzed their characterization as critical components of cancer biology. Recently, roles for lncRNAs as drivers of tumor suppressive and oncogenic functions have appeared in prevalent cancer types, such as breast and prostate cancer. In this review, we highlight the emerging impact of ncRNAs in cancer research, with a particular focus on the mechanisms and functions of lncRNAs."}
{"_id": "0943ed739c909d17f8686280d43d50769fe2c2f8", "title": "Action Reaction Learning : Analysis and Synthesis of Human Behaviour", "text": "We propose Action-Reaction Learning as an approach for analyzing and synthesizing human behaviour. This paradigm uncovers causal mappings between past and future events or between an action and its reaction by observing time sequences. We apply this method to analyze human interaction and to subsequently synthesize human behaviour. Using a time series of perceptual measurements, a system automatically uncovers a mapping between gestures from one human participant (an action) and a subsequent gesture (a reaction) from another participant. A probabilistic model is trained from data of the human interaction using a novel estimation technique, Conditional Expectation Maximization (CEM). The system drives a graphical interactive character which probabilistically predicts the most likely response to the user's behaviour and performs it interactively. Thus, after analyzing human interaction in a pair of participants, the system is able to replace one of them and interact with a single remaining user."}
{"_id": "8d4fd3f3f9cab5f24bdb0ec533a1304eca6a8331", "title": "Vehicle licence plate recognition using super-resolution technique", "text": "Due to the development of economy and technology, the people's demands on cars are growing and so are the problems, such as finding stolen car, banning violation and parking lot management. It will be time-consuming and low-efficiency if we do those jobs only by human because of the limitation of human being's concentration. Therefore, it has been a popular topic to develop intelligent monitoring system under new video technology within a decade. There are still many rooms for future development and application in the car license detection and recognition fields. For finding stolen car, we can integrate car license detection system and road monitoring system to analyze the videos and trace the objects, so we can gain high-efficiency and low-cost results. For parking lot management system, we can have the result of access management and automatic charge to reduce the human resource through car license recognition system. Automated toll of highway can be done through car license recognition system as well. Because the car license recognition system is mostly applied to security monitoring field and business purposes, the demand of the accuracy is quite strict. There are causes which make the inaccuracy of the license recognition, such as the lack of video resolution, too small license due to the distance and the light and shadow. We will discuss the license recognition system and how to use apply the super-resolution to overcome those above problems."}
{"_id": "1f050eb09a40c7d59715d2bb3b9d2d3708e99dda", "title": "PubChem Substance and Compound databases", "text": "PubChem (https://pubchem.ncbi.nlm.nih.gov) is a public repository for information on chemical substances and their biological activities, launched in 2004 as a component of the Molecular Libraries Roadmap Initiatives of the US National Institutes of Health (NIH). For the past 11 years, PubChem has grown to a sizable system, serving as a chemical information resource for the scientific research community. PubChem consists of three inter-linked databases, Substance, Compound and BioAssay. The Substance database contains chemical information deposited by individual data contributors to PubChem, and the Compound database stores unique chemical structures extracted from the Substance database. Biological activity data of chemical substances tested in assay experiments are contained in the BioAssay database. This paper provides an overview of the PubChem Substance and Compound databases, including data sources and contents, data organization, data submission using PubChem Upload, chemical structure standardization, web-based interfaces for textual and non-textual searches, and programmatic access. It also gives a brief description of PubChem3D, a resource derived from theoretical three-dimensional structures of compounds in PubChem, as well as PubChemRDF, Resource Description Framework (RDF)-formatted PubChem data for data sharing, analysis and integration with information contained in other databases."}
{"_id": "272216c1f097706721096669d85b2843c23fa77d", "title": "Adam: A Method for Stochastic Optimization", "text": "We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm."}
{"_id": "9ecd3d4c75f5e927fc3167dd185d7825e14a814b", "title": "Probabilistic Principal Component Analysis", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "00ca2e50ae1c5f4499d9271c72466d2f9d4ae137", "title": "Stochastic Variational Inference", "text": "We develop stochastic variational inference, a scalable algorithm for approximating posterior distributions. We develop this technique for a large class of probabilistic models and we demonstrate it with two probabilistic topic models, latent Dirichlet allocation and the hierarchical Dirichlet process topic model. Using stochastic variational inference, we analyze several large collections of documents: 300K articles from Nature, 1.8M articles from The New York Times, and 3.8M articles from Wikipedia. Stochastic inference can easily handle data sets of this size and outperforms traditional variational inference, which can only handle a smaller subset. (We also show that the Bayesian nonparametric topic model outperforms its parametric counterpart.) Stochastic variational inference lets us apply complex Bayesian models to massive data sets."}
{"_id": "053dec3537df88a1f68b53e33e1462a6b88066f6", "title": "PILCO: A Model-Based and Data-Efficient Approach to Policy Search", "text": "In this paper, we introduce pilco, a practical, data-efficient model-based policy search method. Pilco reduces model bias, one of the key problems of model-based reinforcement learning, in a principled way. By learning a probabilistic dynamics model and explicitly incorporating model uncertainty into long-term planning, pilco can cope with very little data and facilitates learning from scratch in only a few trials. Policy evaluation is performed in closed form using state-ofthe-art approximate inference. Furthermore, policy gradients are computed analytically for policy improvement. We report unprecedented learning efficiency on challenging and high-dimensional control tasks."}
{"_id": "05aba481e8a221df5d8775a3bb749001e7f2525e", "title": "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization", "text": "We present a new family of subgradient methods that dynamica lly incorporate knowledge of the geometry of the data observed in earlier iterations to perfo rm more informative gradient-based learning. Metaphorically, the adaptation allows us to find n eedles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems fro m recent advances in stochastic optimization and online learning which employ proximal funct ions to control the gradient steps of the algorithm. We describe and analyze an apparatus for adap tively modifying the proximal function, which significantly simplifies setting a learning rate nd results in regret guarantees that are provably as good as the best proximal function that can be cho sen in hindsight. We give several efficient algorithms for empirical risk minimization probl ems with common and important regularization functions and domain constraints. We experimen tally study our theoretical analysis and show that adaptive subgradient methods outperform state-o f-the-art, yet non-adaptive, subgradient algorithms."}
{"_id": "f2bc77fdcea85738d1062da83d84dfa3371d378d", "title": "A 14-mW 6.25-Gb/s Transceiver in 90-nm CMOS", "text": "This paper describes a 6.25-Gb/s 14-mW transceiver in 90-nm CMOS for chip-to-chip applications. The transceiver employs a number of features for reducing power consumption, including a shared LC-PLL clock multiplier, an inductor-loaded resonant clock distribution network, a low- and programmable-swing voltage-mode transmitter, software-controlled clock and data recovery (CDR) and adaptive equalization within the receiver, and a novel PLL-based phase rotator for the CDR. The design can operate with channel attenuation of -15 dB or greater at a bit-error rate of 10-15 or less, while consuming less than 2.25 mW/Gb/s per transceiver."}
{"_id": "82507310123a6cd970f28c628edba9a6e78618c3", "title": "Limbic-cortical dysregulation: a proposed model of depression.", "text": "A working model of depression implicating failure of the coordinated interactions of a distributed network of limbic-cortical pathways is proposed. Resting state patterns of regional glucose metabolism in idiopathic depressed patients, changes in metabolism with antidepressant treatment, and blood flow changes with induced sadness in healthy subjects were used to test and refine this hypothesis. Dorsal neocortical decreases and ventral paralimbic increases characterize both healthy sadness and depressive illness; concurrent inhibition of overactive paralimbic regions and normalization of hypofunctioning dorsal cortical sites characterize disease remission. Normal functioning of the rostral anterior cingulate, with its direct connections to these dorsal and ventral areas, is postulated to be additionally required for the observed reciprocal compensatory changes, since pretreatment metabolism in this region uniquely predicts antidepressant treatment response. This model is offered as an adaptable framework to facilitate continued integration of clinical imaging findings with complementary neuroanatomical, neurochemical, and electrophysiological studies in the investigation of the pathogenesis of affective disorders."}
{"_id": "a1cd96a10ca8dd53a52444456f6119a3284e5ae7", "title": "Image Steganography Using LSB and Edge \u2013 Detection Technique", "text": "217 \uf020 Abstract-Steganography is the technique of hiding the fact that communication is taking place, by hiding data in other data. Many different carrier file formats can be used, but digital images are the most popular because of their frequency on the Internet. For hiding secret information in images, there exist a large variety of steganographic techniques some are more complex than others and all of them have respective strong and weak points. Steganalysis, the detection of this hidden information, is an inherently difficult problem and requires a thorough investigation so we are using \" Edge detection Filter \". In this paper search how the edges of the images can be used to hiding text message in Steganography .It give the depth view of image steganography and Edge detection Filter techniques. Method: In this paper search how the edges of the images can be used to hiding text message in Steganography. For that gray image has been presented. In this method tried to find binary value of each character of text message and then in the next stage, tried to find dark places of gray image (black) by converting the original image to binary image for labeling each object of image by considering on 8 pixel connectivity. Then these images have been converted to RGB image in order to find dark places. Because in this way each sequence of gray color turns into RGB color and dark level of grey image is found by this way if the Gary image is very light the histogram must be changed manually to find just dark places. In the final stage each 8 pixels of dark places has been considered as a byte and binary value of each character has been put in low bit of each byte that was created manually by dark places pixels for increasing security of the main way of least Significant bit steganography. Steganalysis then used to evaluate the hiding process to ensure the data can be hidden in best possible way."}
{"_id": "5e4ae6cd0f583d4c8edc74bddd800baf8915bfe8", "title": "Multiplexed protein quantitation in Saccharomyces cerevisiae using amine-reactive isobaric tagging reagents.", "text": "We describe here a multiplexed protein quantitation strategy that provides relative and absolute measurements of proteins in complex mixtures. At the core of this methodology is a multiplexed set of isobaric reagents that yield amine-derivatized peptides. The derivatized peptides are indistinguishable in MS, but exhibit intense low-mass MS/MS signature ions that support quantitation. In this study, we have examined the global protein expression of a wild-type yeast strain and the isogenic upf1Delta and xrn1Delta mutant strains that are defective in the nonsense-mediated mRNA decay and the general 5' to 3' decay pathways, respectively. We also demonstrate the use of 4-fold multiplexing to enable relative protein measurements simultaneously with determination of absolute levels of a target protein using synthetic isobaric peptide standards. We find that inactivation of Upf1p and Xrn1p causes common as well as unique effects on protein expression."}
{"_id": "8cf34a74266fcefdd1e70e7c0900648c4f2cac88", "title": "An Efficient Certificateless Encryption for Secure Data Sharing in Public Clouds", "text": "We propose a mediated certificateless encryption scheme without pairing operations for securely sharing sensitive information in public clouds. Mediated certificateless public key encryption (mCL-PKE) solves the key escrow problem in identity based encryption and certificate revocation problem in public key cryptography. However, existing mCL-PKE schemes are either inefficient because of the use of expensive pairing operations or vulnerable against partial decryption attacks. In order to address the performance and security issues, in this paper, we first propose a mCL-PKE scheme without using pairing operations. We apply our mCL-PKE scheme to construct a practical solution to the problem of sharing sensitive information in public clouds. The cloud is employed as a secure storage as well as a key generation center. In our system, the data owner encrypts the sensitive data using the cloud generated users' public keys based on its access control policies and uploads the encrypted data to the cloud. Upon successful authorization, the cloud partially decrypts the encrypted data for the users. The users subsequently fully decrypt the partially decrypted data using their private keys. The confidentiality of the content and the keys is preserved with respect to the cloud, because the cloud cannot fully decrypt the information. We also propose an extension to the above approach to improve the efficiency of encryption at the data owner. We implement our mCL-PKE scheme and the overall cloud based system, and evaluate its security and performance. Our results show that our schemes are efficient and practical."}
{"_id": "392ed32f87afa8677e44e40b3f79ff967f6d88f9", "title": "Clustal W and Clustal X version 2.0", "text": "SUMMARY\nThe Clustal W and Clustal X multiple sequence alignment programs have been completely rewritten in C++. This will facilitate the further development of the alignment algorithms in the future and has allowed proper porting of the programs to the latest versions of Linux, Macintosh and Windows operating systems.\n\n\nAVAILABILITY\nThe programs can be run on-line from the EBI web server: http://www.ebi.ac.uk/tools/clustalw2. The source code and executables for Windows, Linux and Macintosh computers are available from the EBI ftp site ftp://ftp.ebi.ac.uk/pub/software/clustalw2/"}
{"_id": "af91a6a76f71a399794b10bcfa1500ffdef9410d", "title": "Evaluating three treatments for borderline personality disorder: a multiwave study.", "text": "OBJECTIVE\nThe authors examined three yearlong outpatient treatments for borderline personality disorder: dialectical behavior therapy, transference-focused psychotherapy, and a dynamic supportive treatment.\n\n\nMETHOD\nNinety patients who were diagnosed with borderline personality disorder were randomly assigned to transference-focused psychotherapy, dialectical behavior therapy, or supportive treatment and received medication when indicated. Prior to treatment and at 4-month intervals during a 1-year period, blind raters assessed the domains of suicidal behavior, aggression, impulsivity, anxiety, depression, and social adjustment in a multiwave study design.\n\n\nRESULTS\nIndividual growth curve analysis revealed that patients in all three treatment groups showed significant positive change in depression, anxiety, global functioning, and social adjustment across 1 year of treatment. Both transference-focused psychotherapy and dialectical behavior therapy were significantly associated with improvement in suicidality. Only transference-focused psychotherapy and supportive treatment were associated with improvement in anger. Transference-focused psychotherapy and supportive treatment were each associated with improvement in facets of impulsivity. Only transference-focused psychotherapy was significantly predictive of change in irritability and verbal and direct assault.\n\n\nCONCLUSIONS\nPatients with borderline personality disorder respond to structured treatments in an outpatient setting with change in multiple domains of outcome. A structured dynamic treatment, transference-focused psychotherapy was associated with change in multiple constructs across six domains; dialectical behavior therapy and supportive treatment were associated with fewer changes. Future research is needed to examine the specific mechanisms of change in these treatments beyond common structures."}
{"_id": "6c5d2e4bc54beb4260cd56f9a45bf90e98d1187d", "title": "A Corpus and Model Integrating Multiword Expressions and Supersenses", "text": "This paper introduces a task of identifying and semantically classifying lexical expressions in running text. We investigate the online reviews genre, adding semantic supersense annotations to a 55,000 word English corpus that was previously annotated for multiword expressions. The noun and verb supersenses apply to full lexical expressions, whether singleor multiword. We then present a sequence tagging model that jointly infers lexical expressions and their supersenses. Results show that even with our relatively small training corpus in a noisy domain, the joint task can be performed to attain 70% class labeling F1."}
{"_id": "61c6da88c9a6ee3a72e1494d42acd5eb21c17a55", "title": "The neural basis of rationalization: cognitive dissonance reduction during decision-making.", "text": "People rationalize the choices they make when confronted with difficult decisions by claiming they never wanted the option they did not choose. Behavioral studies on cognitive dissonance provide evidence for decision-induced attitude change, but these studies cannot fully uncover the mechanisms driving the attitude change because only pre- and post-decision attitudes are measured, rather than the process of change itself. In the first fMRI study to examine the decision phase in a decision-based cognitive dissonance paradigm, we observed that increased activity in right-inferior frontal gyrus, medial fronto-parietal regions and ventral striatum, and decreased activity in anterior insula were associated with subsequent decision-related attitude change. These findings suggest the characteristic rationalization processes that are associated with decision-making may be engaged very quickly at the moment of the decision, without extended deliberation and may involve reappraisal-like emotion regulation processes."}
{"_id": "9da870dbbc32c23013ef92dd9b30db60a3cd7628", "title": "Sparse Non-rigid Registration of 3D Shapes", "text": "Non-rigid registration of 3D shapes is an essential task of increasing importance as commodity depth sensors become more widely available for scanning dynamic scenes. Non-rigid registration is much more challenging than rigid registration as it estimates a set of local transformations instead of a single global transformation, and hence is prone to the overfitting issue due to underdetermination. The common wisdom in previous methods is to impose an \u21132-norm regularization on the local transformation differences. However, the \u21132-norm regularization tends to bias the solution towards outliers and noise with heavy-tailed distribution, which is verified by the poor goodness-of-fit of the Gaussian distribution over transformation differences. On the contrary, Laplacian distribution fits well with the transformation differences, suggesting the use of a sparsity prior. We propose a sparse non-rigid registration (SNR) method with an \u21131-norm regularized model for transformation estimation, which is effectively solved by an alternate direction method (ADM) under the augmented Lagrangian framework. We also devise a multi-resolution scheme for robust and progressive registration. Results on both public datasets and our scanned datasets show the superiority of our method, particularly in handling large-scale deformations as well as outliers and noise."}
{"_id": "4b50628723e4e539fb985e15766a6c65da49237c", "title": "THE STRUCTURE ( S ) OF PARTICLE VERBS", "text": "\u2022 complex heads are formed either in the lexicon (cf. Booij 1990 , Johnson 1991 , Koizumi 1993, Neeleman 1994, Neeleman & Weerman 1993, Stiebels 1996, Stiebels & Wunderlich 1994, Wiese 1996, Ackerman & Webelhuth 1998 ) or by some kind of overt or covert incorporation of the particle into the verb (cf. van Riemsdijk 1978, Baker 1988, Koopman 1995, Olsen 1995a, 1995b, Zeller 1997a, 1997b, 1998) \u2022 small clause structure (cf. Kayne 1985, Gu\u00e9ron 1987 , 1990, Hoekstra 1988, Grewendorf 1990, Bennis 1992, Mulder 1992 , den Dikken 1992, 1995 , Zwart 1997 among others) 1"}
{"_id": "6d24e98c086818bfd00406ef54a44bca9dde76b8", "title": "State of the Art in Stereoscopic and Autostereoscopic Displays", "text": "Underlying principles of stereoscopic direct-view displays, binocular head-mounted displays, and autostereoscopic direct-view displays are explained and some early work as well as the state of the art in those technologies are reviewed. Stereoscopic displays require eyewear and can be categorized based on the multiplexing scheme as: 1) color multiplexed (old technology but there are some recent developments; low-quality due to color reproduction and crosstalk issues; simple and does not require additional electronics hardware); 2) polarization multiplexed (requires polarized light output and polarization-based passive eyewear; high-resolution and high-quality displays available); and 3) time multiplexed (requires faster display hardware and active glasses synchronized with the display; high-resolution commercial products available). Binocular head-mounted displays can readily provide 3-D, virtual images, immersive experience, and more possibilities for interactive displays. However, the bulk of the optics, matching of the left and right ocular images and obtaining a large field of view make the designs quite challenging. Some of the recent developments using unconventional optical relays allow for thin form factors and open up new possibilities. Autostereoscopic displays are very attractive as they do not require any eyewear. There are many possibilities in this category including: two-view (the simplest implementations are with a parallax barrier or a lenticular screen), multiview, head tracked (requires active optics to redirect the rays to a moving viewer), and super multiview (potentially can solve the accommodation-convergence mismatch problem). Earlier 3-D booms did not last long mainly due to the unavailability of enabling technologies and the content. Current developments in the hardware technologies provide a renewed interest in 3-D displays both from the consumers and the display manufacturers, which is evidenced by the recent commercial products and new research results in this field."}
{"_id": "239d60331832d4a457fa29659dccb1d6571c481f", "title": "Importance of audio feature reduction in automatic music genre classification", "text": "Multimedia database retrieval is rapidly growing and its popularity in online retrieval systems is consequently increasing. Large datasets are major challenges for searching, retrieving, and organizing the music content. Therefore, a robust automatic music-genre classification method is needed for organizing this music data into different classes according to specific viable information. Two fundamental components are to be considered for genre classification: audio feature extraction and classifier design. In this paper, we propose diverse audio features to precisely characterize the music content. The feature sets belong to four groups: dynamic, rhythmic, spectral, and harmonic. From the features, five statistical parameters are considered as representatives, including the fourth-order central moments of each feature as well as covariance components. Ultimately, insignificant representative parameters are controlled by minimum redundancy and maximum relevance. This algorithm calculates the score level of all feature attributes and orders them. Only high-score audio features are considered for genre classification. Moreover, we can recognize those audio features and distinguish which of the different statistical parameters derived from them are important for genre classification. Among them, mel frequency cepstral coefficient statistical parameters, such as covariance components and variance, are more frequently selected than the feature attributes of other groups. This approach does not transform the original features as do principal component analysis and linear discriminant analysis. In addition, other feature reduction methodologies, such as locality-preserving projection and non-negative matrix factorization are considered. The performance of the proposed system is measured based on the reduced features from the feature pool using different feature reduction techniques. The results indicate that the overall classification is competitive with existing state-of-the-art frame-based methodologies."}
{"_id": "e36ecd4250fac29cc990330e01c9abee4c67a9d6", "title": "A Dual-Band Dual-Circular-Polarization Antenna for Ka-Band Satellite Communications", "text": "A novel Ka-band dual-band dual-circularly-polarized antenna array is presented in this letter. A dual-band antenna with left-hand circular polarization for the Ka-band downlink frequencies and right-hand circular polarization for the Ka-band uplink frequencies is realized with compact annular ring slots. By applying the sequential rotation technique, a 2 \u00d7 2 subarray with good performance is obtained. This letter describes the design process and presents simulation and measurement results."}
{"_id": "528f0ce730b0607cc6107d672aec344aaaae1b24", "title": "Using machine learning algorithms for housing price prediction: The case of Fairfax County, Virginia housing data", "text": "House sales are determined based on the Standard & Poor\u2019s Case-Shiller home price indices and the housing price index of the Office of Federal Housing Enterprise Oversight (OFHEO). These reflect the trends of the US housing market. In addition to these housing price indices, the development of a housing price prediction model can greatly assist in the prediction of future housing prices and the establishment of real estate policies. This study uses machine learning algorithms as a research methodology to develop a housing price prediction model. To improve the accuracy of housing price prediction, this paper analyzes the housing data of 5359 townhouses in Fairfax County, Virginia, gathered by the Multiple Listing Service (MLS) of the Metropolitan Regional Information Systems (MRIS). We develop a housing price prediction model based on machine learning algorithms such as C4.5, RIPPER, Na\u00efve Bayesian, and AdaBoost and compare their classification accuracy performance. We then propose an improved housing price prediction model to assist a house seller or a real estate agent make better informed decisions based on house price valuation. The experiments demonstrate that the RIPPER algorithm, based on accuracy, consistently outperforms the other models in the performance of housing price prediction. 2014 Elsevier Ltd. All rights reserved."}
{"_id": "ab3e60422c31e6d9763d5dc0794d3c29f3e75b0c", "title": "Using plane + parallax for calibrating dense camera arrays", "text": "A light field consists of images of a scene taken from different viewpoints. Light fields are used in computer graphics for image-based rendering and synthetic aperture photography, and in vision for recovering shape. In this paper, we describe a simple procedure to calibrate camera arrays used to capture light fields using a plane + parallax framework. Specifically, for the case when the cameras lie on a plane, we show (i) how to estimate camera positions up to an affine ambiguity, and (ii) how to reproject light field images onto a family of planes using only knowledge of planar parallax for one point in the scene. While planar parallax does not completely describe the geometry of the light field, it is adequate for the first two applications which, it turns out, do not depend on having a metric calibration of the light field. Experiments on acquired light fields indicate that our method yields better results than full metric calibration."}
{"_id": "4743c1b19006909c86219af3aeb6dcbee4eb119f", "title": "Performance Analysis, Mapping, and Multiobjective Optimization of a Hybrid Robotic Machine Tool", "text": "A serial-parallel hybrid machine tool is expected to integrate the respective features of pure serial/parallel mechanism. The traditional method of hybridization is to connect n (n \u2265 2) mechanisms bottom to head, in which at least one should be a parallel mechanism. One unique approach called Mechanism Hybridization is to embed one serial mechanism inside of a pure parallel mechanism, which greatly changes its overall performance. Based on this idea, an X-Y gantry system including a five-axis hybrid manipulator is developed, which is expected to be applied as the next generation of computer numerical control machine. The inverse kinematics and Jacobian matrix are derived. Since performance improvement is one of the most important factors that greatly affect the application potential of hybrid manipulators in different industry fields, to deeply investigate the comprehensive features, the local/global performance indexes of stiffness, dexterity, and manipulability are mathematically modeled and mapped. A discrete-boundary-searching method is developed to calculate and visualize the workspace. Pareto-based evolutionary multiobjective performance optimization is implemented to simultaneously improve the four indexes, and the representative nondominated solutions are listed. The proposed methodologies are generic and applicable for the design, modeling, and improvement of other parallel/hybrid robotic machine tools."}
{"_id": "95db44d1f978a8407ae8b202dee9d70f700c6948", "title": "Simulation in software engineering training", "text": "Simulation is frequently used for training in many application areas like aviation and economics, but not in software engineering. We present the SESAM project which focuses on software engineering education using simulation. In the SESAM project a simulator was developed. Using this simulator, a student can take the role of a software project manager. The simulated software project can be finished within a couple of hours because it is simulated in \u201cquick-motion\u201d mode.\nIn this paper, the background and goals of the SESAM project are presented. A new simulation model, the so called QA model, is introduced. The model behavior is demonstrated by investigating and comparing different strategies for software development. The results of experiments based on the QA model are reported. Finally, conclusions are drawn from the experiments and future work is outlined."}
{"_id": "d0701ea0e6822272cac58dc73e6c7e5d2c796436", "title": "Effects of Transformational Leadership Training on Attitudinal and Financial Outcomes : A Field Experiment", "text": "A pretest-posttest control-group design (N = 20) was used to assess the effects of transformational leadership training, with 9 and 11 managers assigned randomly to training and control groups, respectively. Training consisted of a 1-day group session and 4 individual booster sessions thereafter on a monthly basis. Multivariate analyses of covariance, with pretest scores as the covariate, showed that the training resulted in significant effects on subordinates' perceptions of leaders' transformational leadership, subordinates' own organizational commitment, and 2 aspects of branch-level financial performance."}
{"_id": "2d3a3a3ef0ac94e899feb3a7c7caa3fb6099393d", "title": "An approach to finite-time controller design for a class of T-S fuzzy systems", "text": "This paper studies the finite-time stabilization problem for a class of nonlinear systems described by Takagi-Sugeno (T-S) fuzzy dynamic models with parametric uncertainties. A novel non-Lipschitz continuous state feedback control scheme with augmented dynamics is developed. It is shown that finite-time convergence of the closed loop system can be achieved, and the potential controller singularity problem can be avoided. Finally, one example is given to illustrate the effectiveness of the proposed control approach."}
{"_id": "5f8e64cc066886a99cc8e30e68d1b29d8bb1961d", "title": "The Human Hippocampus and Spatial and Episodic Memory", "text": "Finding one's way around an environment and remembering the events that occur within it are crucial cognitive abilities that have been linked to the hippocampus and medial temporal lobes. Our review of neuropsychological, behavioral, and neuroimaging studies of human hippocampal involvement in spatial memory concentrates on three important concepts in this field: spatial frameworks, dimensionality, and orientation and self-motion. We also compare variation in hippocampal structure and function across and within species. We discuss how its spatial role relates to its accepted role in episodic memory. Five related studies use virtual reality to examine these two types of memory in ecologically valid situations. While processing of spatial scenes involves the parahippocampus, the right hippocampus appears particularly involved in memory for locations within an environment, with the left hippocampus more involved in context-dependent episodic or autobiographical memory."}
{"_id": "cb2ff27205b27b9fd95005a98facdf09a0d4759f", "title": "3D Sampling Textures for Creative Design and Manufacturing", "text": "1 Texture synthesized from 3D scans of snake-skin and mushroom laminae. ABSTRACT 3D sampling is a robust new strategy for exploring and creating designs. 3D textures are sampled and mixed almost like music to create new multifunctional surfaces and material microstructures (Figure 1). This paper presents several design cases performed using 3D sampling techniques, demonstrating how they can be used to explore and enhance product ideas, variations, and functions in subtle and responsive ways. In each case, design variations are generated and their mechanical behavior is evaluated against performance criteria using computational simulation or empirical testing. This work aims to promote creativity and discovery by introducing irregular geometric structures within trial-and-error feedback loops. Sayjel Vijay Patel SUTD DManD Centre"}
{"_id": "06b30e1de10abd64ece74623fbb2f6f58a34f492", "title": "Large-Scale Liquid Simulation on Adaptive Hexahedral Grids", "text": "Regular grids are attractive for numerical fluid simulations because they give rise to efficient computational kernels. However, for simulating high resolution effects in complicated domains they are only of limited suitability due to memory constraints. In this paper we present a method for liquid simulation on an adaptive octree grid using a hexahedral finite element discretization, which reduces memory requirements by coarsening the elements in the interior of the liquid body. To impose free surface boundary conditions with second order accuracy, we incorporate a particular class of Nitsche methods enforcing the Dirichlet boundary conditions for the pressure in a variational sense. We then show how to construct a multigrid hierarchy from the adaptive octree grid, so that a time efficient geometric multigrid solver can be used. To improve solver convergence, we propose a special treatment of liquid boundaries via composite finite elements at coarser scales. We demonstrate the effectiveness of our method for liquid simulations that would require hundreds of millions of simulation elements in a non-adaptive regime."}
{"_id": "547bfc90e96c2f3c1f9a1fdc9d7c84014322bc81", "title": "Power optimization for FinFET-based circuits using genetic algorithms", "text": "Reducing power consumption is one of the important design goals for circuit designers. Power optimization techniques for bulk CMOS-based circuit designs have been extensively studied. As technology scales, FinFET has been proposed as an alternative for bulk CMOS when technology scales beyond 32 nm technology (E.J. Nowak et al., 2004). In this paper, we propose a power optimization framework for FinFET based circuit design, based on genetic algorithms. We exploit the unique feature of independent gate (IG) controls for FinFET devices to reduce the power consumption, and combine with other low power techniques such as multi-Vdd and gate sizing to achieve power optimization for FinFET-based circuits. We use 32 nm PTM FinFET device model (W. Zhao and Y. Cao, 2006) and conduct experiments on ISCAS benchmarks. The experimental results show that our methodology can achieve over 80% power reduction while satisfying the same performance constraints, comparing to the case that all the FinFET transistors are tuned to be the fastest ones."}
{"_id": "8a504a1ea577a855490e0a413b283ac1856c2da8", "title": "Improving the intrinsic calibration of a Velodyne LiDAR sensor", "text": "LiDAR (Light detection and ranging) sensors are widely used in research and development. As such, they build the base for the evaluation of newly developed ADAS (Advanced Driver Assistance Systems) functions in the automotive field where they are used for ground truth establishment. However, the factory calibration provided for the sensors is not able to satisfy the high accuracy requirements by such applications. In this paper we propose a concept to easily improve the existing calibration of a Velodyne LiDAR sensor without the need for special calibration setups which can even be used to enhance already recorded data."}
{"_id": "aae4665c2bd9f04e0c7f993123b30d87486b79f7", "title": "Combining template tracking and laser peak detection for 3D reconstruction and grasping in underwater environments", "text": "Autonomous grasping of unknown objects by a robot is a highly challenging skill that is receiving increasing attention in the last years. This problem becomes still more challenging (and less explored) in underwater environments, with highly unstructured scenarios, limited availability of sensors and, in general, adverse conditions that affect in different degree the robot perception and control systems. This paper describes an approach for semi-autonomous grasping and recovery on underwater unknown objects from floating vehicles. A laser stripe emitter is attached to a robot forearm that performs a scan of a target of interest. This scan is captured by a camera that also estimates the motion of the floating vehicle while doing the scan. The scanned points are triangulated and transformed according to the motion estimation, thus recovering partial 3D structure of the scene with respect to a fixed frame. A user then indicates the part where to grab the object, and the final grasp is automatically planned on that area. The approach herein presented is tested and validated in water tank conditions."}
{"_id": "dec3de4ae1cb82c75189eb98b5ebb9a1a683f334", "title": "Clear underwater vision", "text": "Underwater imaging is important for scientific research and technology, as well as for popular activities. We present a computer vision approach which easily removes degradation effects in underwater vision. We analyze the physical effects of visibility degradation. We show that the main degradation effects can be associated with partial polarization of light. We therefore present an algorithm which inverts the image formation process, to recover a good visibility image of the object. The algorithm is based on a couple of images taken through a polarizer at different orientations. As a by product, a distance map of the scene is derived as well. We successfully used our approach when experimenting in the sea using a system we built. We obtained great improvement of scene contrast and color correction, and nearly doubled the underwater visibility range."}
{"_id": "375d5a31e1f15b4b4dab3428f0517e014fa61a91", "title": "Girona 500 AUV: From Survey to Intervention", "text": "This paper outlines the specifications and basic design approach taken on the development of the Girona 500, an autonomous underwater vehicle whose most remarkable characteristic is its capacity to reconfigure for different tasks. The capabilities of this new vehicle range from different forms of seafloor survey to inspection and intervention tasks."}
{"_id": "d45eaee8b2e047306329e5dbfc954e6dd318ca1e", "title": "ROS: an open-source Robot Operating System", "text": "This paper gives an overview of ROS, an opensource robot operating system. ROS is not an operating system in the traditional sense of process management and scheduling; rather, it provides a structured communications layer above the host operating systems of a heterogenous compute cluster. In this paper, we discuss how ROS relates to existing robot software frameworks, and briefly overview some of the available application software which uses ROS."}
{"_id": "d8b69a91ff70de099aeaee3b448ef02f889f3faf", "title": "Design and Control of Autonomous Underwater Robots: A Survey", "text": "During the 1990s, numerous worldwide research and development activities have occurred in underwater robotics, especially in the area of autonomous underwater vehicles (AUVs). As the ocean attracts great attention on environmental issues and resources as well as scientific and military tasks, the need for and use of underwater robotic systems has become more apparent. Great efforts have been made in developing AUVs to overcome challenging scientific and engineering problems caused by the unstructured and hazardous ocean environment. In the 1990s, about 30 new AUVs have been built worldwide. With the development of new materials, advanced computing and sensory technology, as well as theoretical advancements, R&D activities in the AUV community have increased. However, this is just the beginning for more advanced, yet practical and reliable AUVs. This paper surveys some key areas in current state-of-the-art underwater robotic technologies. It is by no means a complete survey but provides key references for future development. The new millennium will bring advancements in technology that will enable the development of more practical, reliable AUVs."}
{"_id": "17f84250276a340edcac9e5173e2d55020922deb", "title": "Logic Tensor Networks: Deep Learning and Logical Reasoning from Data and Knowledge", "text": "We propose Logic Tensor Networks: a uniform framework for integrating automatic learning and reasoning. A logic formalism called Real Logic is defined on a first-order language whereby formulas have truth-value in the interval [0,1] and semantics defined concretely on the domain of real numbers. Logical constants are interpreted as feature vectors of real numbers. Real Logic promotes a well-founded integration of deductive reasoning on a knowledge-base and efficient data-driven relational machine learning. We show how Real Logic can be implemented in deep Tensor Neural Networks with the use of Google\u2019s TENSORFLOW primitives. The paper concludes with experiments applying Logic Tensor Networks on a simple but representative example of knowledge comple-"}
{"_id": "1553fc8c9a9e7ab44aa948ca641bd9148e3f0f6b", "title": "Survey on Data Classification and Data Encryption Techniques Used in Cloud Computing", "text": "Cloud computing is an imminent revolution in information technology (IT) industry because of its performance, accessibility, low cost and many other luxury features. Security of data in the cloud is one of the major issues which acts as barrier in the implementation of cloud computing. In past years, a number of research works have targeted this problem. In this paper discuss some of the data classification techniques widely used in cloud computing. The objective of data classification is to find out the required level of security for data and to protect data by providing sufficient level of security according to the risk levels of data. In this paper also discuss a survey of existing solutions for security problem, discuss their advantages, and point out any disadvantages for future research. Specifically, focus on the use of encryption techniques, and provide a comparative study of the major encryption techniques."}
{"_id": "abbb1ed6bd09bbeae2b7f5d438950d774dbfdb42", "title": "Metaphor in the Mind : The Cognition of Metaphor", "text": "The most sustained and innovative recent work on metaphor has occurred in cognitive science and psychology. Psycholinguistic investigation suggests that novel, poetic metaphors are processed differently than literal speech, while relatively conventionalized and contextually salient metaphors are processed more like literal speech. This conflicts with the view of \u201ccognitive linguists\u201d like George Lakoff that all or nearly all thought is essentially metaphorical. There are currently four main cognitive models of metaphor comprehension: juxtaposition, category-transfer, feature-matching, and structural alignment. Structural alignment deals best with the widest range of examples; but it still fails to account for the complexity and richness of fairly novel, poetic metaphors. 1. General Issues in the Study of Metaphor Philosophers have often adopted a dismissive attitude toward metaphor. Hobbes (ch. 8) advocated excluding metaphors from rational discourse because they \u201copenly profess deceit,\u201d while Locke (Bk. 3, ch. 10) claimed that figurative uses of language serve only \u201cto insinuate wrong ideas, move the passions, and thereby mislead the judgment; and so indeed are perfect cheats.\u201d Later, logical positivists like Ayer and Carnap assumed that because metaphors like (1) How sweet the moonlight sleeps upon this bank!2 involve category mistakes, they have no real meaning or verification conditions. Thus, they too mentioned metaphor only to place it beyond the pale of rational discourse. Starting in the 1960s and 70s, philosophers and linguists began to take more positive interest in metaphor. Black argued forcefully that metaphors do have a distinctive, essentially non-propositional meaning or cognitive significance, which is produced by the \u201cinteraction\u201d of the \u201csystems of associated commonplaces\u201d for the metaphor\u2019s \u201cprimary\u201d and \u201csecondary\u201d subjects (e.g., with moonlight and sleeping sweetly). Other theorists were more friendly to the idea that metaphorical and literal meaning are of the same essential kind. Many of them proposed that the literal absurdity of metaphors \u00a9 Blackwell Publishing 2006 Philosophy Compass 1/2 (2006): 154\u2013170, 10.1111/j.1747-9991.2006.00013.x"}
{"_id": "97ea7509c7f08b82b30676195b21ddc15c3b7712", "title": "Faculty at Saudi Electronic University attitudes toward using augmented reality in education", "text": "This study aims to examine the possibility of implementing the Augmented Reality (AR) application in higher education by answering three questions: Which extended faculty at the Saudi Electronic University are familiar with such applications, what the perceptions do they hold toward using it in education, and what barriers do they believe may hinder implementing this technology. An online survey was designed and distributed to faculty members at two colleges selected randomly to collect data from participants. Even though the faculty were at an accepted level of familiarity with this technology, they did not use it in their classes. Results showed that faculty have a positive attitude toward the use of AR and have confidence in its potential to enrich the learning environment. This result is connected to the AR advantages the faculty were in agreement with. Results also showed that faculty have concerns about some barriers that might impact implementation of AR in the education environment, such as the lack of technical support."}
{"_id": "5fed66826bc19773b2c281997db3cc2233e1f14a", "title": "Impact of supply chain integration and selective disintegration on supply chain efficiency and organization performance", "text": "An efficient business system integration (BSI) is the key determinant factor for the organization to stay competitive. The goal of this study is to develop and execute a research project to study BSI in the oil & gas industry and how an efficient BSI in supply chain (SCI) can be used to gain advantage. A mixed (qualitative & quantitative) survey method was employed and structural equation modeling (SEM) is applied to test the theoretical framework and hypotheses. Findings reveal that, total integration do not offer optimum performance and selective disintegration (DIS) is necessary for supply chain efficiency (SCE) and organization performance (PER). This study helps to better understand the degree of integration required for optimum supply chain performance and came up with new construct DIS. Specifically, this study investigated the effect of SCI and DIS on SCE and PER. The compelled sample size precludes evaluating and testing of additional broad models of the relationship amongst the constructs. The main managerial lesson is that, in contrast to what has been written in many books and other popular publications, selective disintegration is necessary for optimum supply chain efficiency."}
{"_id": "603561e877ceee2213a72b90a9248d684f83e6ba", "title": "Learning to summarize web image and text mutually", "text": "We consider the problem of learning to summarize images by text and visualize text utilizing images, which we call Mutual-Summarization. We divide the web image-text data space into three subspaces, namely pure image space (PIS), pure text space (PTS) and image-text joint space (ITJS). Naturally, we treat the ITJS as a knowledge base.\n For summarizing images by sentence issue, we map images from PIS to ITJS via image classification models and use text summarization on the corresponding texts in ITJS to summarize images. For text visualization problem, we map texts from PTS to ITJS via text categorization models and generate the visualization by choosing the semantic related images from ITJS, where the selected images are ranked by their confidence. In above approaches images are represented by color histograms, dense visual words and feature descriptors at different levels of spatial pyramid; and the texts are generated according to the Latent Dirichlet Allocation (LDA) topic model. Multiple Kernel (MK) methodologies are used to learn classifiers for image and text respectively. We show the Mutual-Summarization results on our newly collected dataset of six big events (\"Gulf Oil Spill\", \"Haiti Earthquake\", etc.) as well as demonstrate improved cross-media retrieval performance over existing methods in terms of MAP, Precision and Recall."}
{"_id": "46f91a6cf5047498c0bf4c75852ecdb1a72fadd7", "title": "A-PNR: Automatic Plate Number Recognition", "text": "Automatic Plate Number Recognition (APNR) has important applications in traffic surveillance, toll booth, protected parking lot, No parking zone, etc. It is a challenging problem, especially when the license plates have varying sizes, the number of lines, fonts, background diversity etc. This work aims to address APNR using deep learning method for real-time traffic images. We first extract license plate candidates from each frame using edge information and geometrical properties, ensuring high recall using one class SVM. The verified candidates are used for NP recognition purpose along with a spatial transformer network (STN) for character recognition. Our system is evaluated on several traffic images with vehicles having different license plate formats in terms of tilt, distances, colors, illumination, character size, thickness etc. Also, the background was very challenging. Results demonstrate robustness to such variations and impressive performance in both the localization and recognition."}
{"_id": "a32ba4129c23051e23d49b75bd84967dd532ad1b", "title": "Virtual reality job interview training in adults with autism spectrum disorder.", "text": "The feasibility and efficacy of virtual reality job interview training (VR-JIT) was assessed in a single-blinded randomized controlled trial. Adults with autism spectrum disorder were randomized to VR-JIT (n = 16) or treatment-as-usual (TAU) (n = 10) groups. VR-JIT consisted of simulated job interviews with a virtual character and didactic training. Participants attended 90 % of laboratory-based training sessions, found VR-JIT easy to use and enjoyable, and they felt prepared for future interviews. VR-JIT participants had greater improvement during live standardized job interview role-play performances than TAU participants (p = 0.046). A similar pattern was observed for self-reported self-confidence at a trend level (p = 0.060). VR-JIT simulation performance scores increased over time (R(2) = 0.83). Results indicate preliminary support for the feasibility and efficacy of VR-JIT, which can be administered using computer software or via the internet."}
{"_id": "561612d24c6c793143c7e5c41a241933b3349299", "title": "Event-based stock market prediction", "text": "There are various studies on the behavior of the market. In particular, derivatives such as futures and options have taken a lot of attentions, lately. Predicting these derivatives is not only important for the risk management purposes, but also for price speculative activities. Besides that accurate prediction of the market\u2019s direction can help investors to gain enormous profits with small amount of capital [Tsaih et al., 1998]. Stock market prediction can be viewed as a challenging time-series prediction [Kim, 2003]. There are many factors that are influential on the financial markets, including political events, natural disasters, economic conditions, and so on. Despite the complexity of the movements in market prices, the market behavior is not completely random. Instead, it is governed by a highly nonlinear dynamical system [Blank, 1991]. Forecasting the future prices is carried out based on the technical analysis, which studies the market\u2019s action using past prices and the other market information. Market analysis is in contradiction with the Efficient Market Hypothesis (EMH). EMH was developed in 1970 by economist Eugene Fama [Fama, 1965a, Fama, 1965b] whose theory stated that it is not possible for an investor to outperform the market because all available information is already built into all stock prices. If the EMH was true, it would not be possible to use machine learning techniques for market prediction. Nevertheless, there are many successful technical analysis in the financial world and number of studies appearing in academic literature that are using machine learning techniques for market prediction [Choudhry and Garg, 2008]. One way to forecast the market movement is by analyzing the special events of the market such as earnings announcements. Earnings announcement for each stock is an official public statement of a company\u2019s profitability for a specific time period, typically a quarter or a year. Each company has its specific earnings announcement dates. Stock price of a company is affected by the earnings announcement event. Equity analysts usually predict the earnings per share (EPS) prior to the announcement date. In this project, using machine-learning techniques, we want to predict whether a given stock will be rising in the following day after earnings announcement or not. This will lead to a binary classification problem, which can be tackled based on the huge amount of available public data. This project consists of two major tasks: data collection and application of machine learning algorithms. In \u00a72, we will discuss our efforts to collect the required data. In continuation in \u00a73, features definitions and selections are described. We have considered and discussed different machine learning algorithms in \u00a74. In \u00a75, the summary of our results and possible extensions are explained."}
{"_id": "5cd8a2a85caaa1fdc48f67957db3ac5350e69127", "title": "Effects of early-life abuse differ across development: infant social behavior deficits are followed by adolescent depressive-like behaviors mediated by the amygdala.", "text": "Abuse during early life, especially from the caregiver, increases vulnerability to develop later-life psychopathologies such as depression. Although signs of depression are typically not expressed until later life, signs of dysfunctional social behavior have been found earlier. How infant abuse alters the trajectory of brain development to produce pathways to pathology is not completely understood. Here we address this question using two different but complementary rat models of early-life abuse from postnatal day 8 (P8) to P12: a naturalistic paradigm, where the mother is provided with insufficient bedding for nest building; and a more controlled paradigm, where infants undergo olfactory classical conditioning. Amygdala neural assessment (c-Fos), as well as social behavior and forced swim tests were performed at preweaning (P20) and adolescence (P45). Our results show that both models of early-life abuse induce deficits in social behavior, even during the preweaning period; however, depressive-like behaviors were observed only during adolescence. Adolescent depressive-like behavior corresponds with an increase in amygdala neural activity in response to forced swim test. A causal relationship between the amygdala and depressive-like behavior was suggested through amygdala temporary deactivation (muscimol infusions), which rescued the depressive-like behavior in the forced swim test. Our results indicate that social behavior deficits in infancy could serve as an early marker for later psychopathology. Moreover, the implication of the amygdala in the ontogeny of depressive-like behaviors in infant abused animals is an important step toward understanding the underlying mechanisms of later-life mental disease associated with early-life abuse."}
{"_id": "8293ab4ce2ccf1fd64f816b274838fe19c569d40", "title": "Neural correlates of cognitive processing in monolinguals and bilinguals.", "text": "Here, we review the neural correlates of cognitive control associated with bilingualism. We demonstrate that lifelong practice managing two languages orchestrates global changes to both the structure and function of the brain. Compared with monolinguals, bilinguals generally show greater gray matter volume, especially in perceptual/motor regions, greater white matter integrity, and greater functional connectivity between gray matter regions. These changes complement electroencephalography findings showing that bilinguals devote neural resources earlier than monolinguals. Parallel functional findings emerge from the functional magnetic resonance imaging literature: bilinguals show reduced frontal activity, suggesting that they do not need to rely on top-down mechanisms to the same extent as monolinguals. This shift for bilinguals to rely more on subcortical/posterior regions, which we term the bilingual anterior-to-posterior and subcortical shift (BAPSS), fits with results from cognitive aging studies and helps to explain why bilinguals experience cognitive decline at later stages of development than monolinguals."}
{"_id": "5f271829cb0fd59e97a2b1c5c1fb9fa9ab4973c2", "title": "Low-frequency Fourier analysis of speech rhythm.", "text": "A method for studying speech rhythm is presented, using Fourier analysis of the amplitude envelope of bandpass-filtered speech. Rather than quantifying rhythm with time-domain measurements of interval durations, a frequency-domain representation is used--the rhythm spectrum. This paper describes the method in detail, and discusses approaches to characterizing rhythm with low-frequency spectral information."}
{"_id": "75acb0d2d776889fb6e2387b82140336839820cb", "title": "Simple sequence repeat markers that identify Claviceps species and strains", "text": "Claviceps purpurea is a pathogen that infects most members of Pooideae, a subfamily of Poaceae, and causes ergot, a floral disease in which the ovary is replaced with a sclerotium. When the ergot body is accidently consumed by either man or animal in high enough quantities, there is extreme pain, limb loss and sometimes death. This study was initiated to develop simple sequence repeat (SSRs) markers for rapid identification of\u00a0C. purpurea. SSRs were designed from sequence data stored at the National Center for Biotechnology Information database. The study consisted of 74 ergot isolates, from four different host species, Lolium perenne, Poa pratensis, Bromus inermis, and Secale cereale plus three additional Claviceps species, C. pusilla, C. paspali and C. fusiformis. Samples were collected from six different counties in Oregon and Washington over a 5-year period. Thirty-four SSR markers were selected, which enabled the differentiation of each isolate from one another based solely on their molecular fingerprints. Discriminant analysis of principle components was used to identify four isolate groups, CA Group 1, 2, 3, and 4, for subsequent cluster and molecular variance analyses. CA Group 1 consisting of eight isolates from the host species P. pratensis, was separated on the cluster analysis plot from the remaining three groups and this group was later identified as C. humidiphila. The other three groups were distinct from one another, but closely related. These three groups contained samples from all four of the host species. These SSRs are simple to use, reliable and allowed clear differentiation of C. humidiphila from C. purpurea. Isolates from the three separate species, C. pusilla, C. paspali and C. fusiformis, also amplified with these markers. The SSR markers developed in this study will be helpful in defining the population structure and genetics of Claviceps strains. They will also provide valuable tools for plant breeders needing to identify resistance in crops or for researchers examining fungal movements across environments."}
{"_id": "0bb71e91b29cf9739c0e1334f905baad01b663e6", "title": "Lifetime-Aware Scheduling and Power Control for M2M Communications in LTE Networks", "text": "In this paper the scheduling and transmit power control are investigated to minimize the energy consumption for battery-driven devices deployed in LTE networks. To enable efficient scheduling for a massive number of machine-type subscribers, a novel distributed scheme is proposed to let machine nodes form local clusters and communicate with the base-station through the cluster-heads. Then, uplink scheduling and power control in LTE networks are introduced and lifetime-aware solutions are investigated to be used for the communication between cluster-heads and the base-station. Beside the exact solutions, low-complexity suboptimal solutions are presented in this work which can achieve near optimal performance with much lower computational complexity. The performance evaluation shows that the network lifetime is significantly extended using the proposed protocols."}
{"_id": "e66877f07bdbf5ac394880f0ad630117e63803a9", "title": "Rumor detection for Persian Tweets", "text": "Nowadays, striking growth of online social media has led to easier and faster spreading of rumors on cyber space, in addition to tradition ways. In this paper, rumor detection on Persian Twitter community has been addressed for the first time by exploring and analyzing the significances of two categories of rumor features: Structural and Content-based features. While applying both feature sets leads to precision more than 80%, precision around 70% using only structural features has been obtained. Moreover, we show how features of users tending to produce and spread rumors are effective in rumor detection process. Also, the experiments have led to learning of a language model of collected rumors. Finally, all features have been ranked and the most discriminative ones have been discussed."}
{"_id": "6dc4be33a07c277ee68d42c151b4ee866108281f", "title": "Covalsa: Covariance estimation from compressive measurements using alternating minimization", "text": "The estimation of covariance matrices from compressive measurements has recently attracted considerable research efforts in various fields of science and engineering. Owing to the small number of observations, the estimation of the covariance matrices is a severely ill-posed problem. This can be overcome by exploiting prior information about the structure of the covariance matrix. This paper presents a class of convex formulations and respective solutions to the high-dimensional covariance matrix estimation problem under compressive measurements, imposing either Toeplitz, sparseness, null-pattern, low rank, or low permuted rank structure on the solution, in addition to positive semi-definiteness. To solve the optimization problems, we introduce the Co-Variance by Augmented Lagrangian Shrinkage Algorithm (CoVALSA), which is an instance of the Split Augmented Lagrangian Shrinkage Algorithm (SALSA). We illustrate the effectiveness of our approach in comparison with state-of-the-art algorithms."}
{"_id": "4d2074c1b6ee5562f815e489c5db366d2ec0d894", "title": "Control Strategy to Naturally Balance Hybrid Converter for Variable-Speed Medium-Voltage Drive Applications", "text": "In this paper, a novel control strategy to naturally balance a hybrid converter is presented for medium-voltage induction motor drive applications. Fundamental current of the motor is produced by a three-level quasi-square-wave converter. Several single-phase full-bridge converters are connected in series with this quasi-square-wave converter to eliminate harmonic voltages from the net output voltage of this hybrid converter. Various novel modulation strategies for these series cells are proposed. These lead to a naturally regulated dc-bus voltage of series cells. These also optimize the number of series cells required. Variable fundamental voltage and frequency control of induction motor is adopted with a constant dc-bus voltage of the three-level quasi-square-wave converter. Experimental results are also produced here to validate the proposed modulation strategies for induction motor drives using low-voltage protomodel. Based on the aforementioned, a 2.2-kV induction motor drive has been built, and results are also produced in this paper."}
{"_id": "b56d1c7cef034003e469701d215394db3d8cc675", "title": "Improvement of Intrusion Detection System in Data Mining using Neural Network", "text": "Many researchers have argued that Artificial Neural Networks (ANNs) can improve the performance of intrusion detection systems (IDS). One of the central areas in network intrusion detection is how to build effective systems that are able to distinguish normal from intrusive traffic. In this paper four different algorithms are used namely as Multilayer Perception, Radial Base Function, Logistic Regression and Voted Perception. All these neural based algorithms are implemented in WEKA data mining tool to evaluate the performance. For experimental work, NSL KDD dataset is used. Each neural based algorithm is tested with conducted dataset. It is shown that the Multilayer Perception neural network algorithm is providing more accurate results than other algorithms. Keywords\u2014 Data Mining, Intrusion Detection System, Neural network classifier algorithm, NSL KDD dataset"}
{"_id": "aa5a3009c83b497918127f6696557658d142b706", "title": "Valid items for screening dysphagia risk in patients with stroke: a systematic review.", "text": "BACKGROUND AND PURPOSE\nScreening for dysphagia is essential to the implementation of preventive therapies for patients with stroke. A systematic review was undertaken to determine the evidence-based validity of dysphagia screening items using instrumental evaluation as the reference standard.\n\n\nMETHODS\nFour databases from 1985 through March 2011 were searched using the terms cerebrovascular disease, stroke deglutition disorders, and dysphagia. Eligibility criteria were: homogeneous stroke population, comparison to instrumental examination, clinical examination without equipment, outcome measures of dysphagia or aspiration, and validity of screening items reported or able to be calculated. Articles meeting inclusion criteria were evaluated for methodological rigor. Sensitivity, specificity, and predictive capabilities were calculated for each item.\n\n\nRESULTS\nTotal source documents numbered 832; 86 were reviewed in full and 16 met inclusion criteria. Study quality was variable. Testing swallowing, generally with water, was the most commonly administered item across studies. Both swallowing and nonswallowing items were identified as predictive of aspiration. Neither swallowing protocols nor validity were consistent across studies.\n\n\nCONCLUSIONS\nNumerous behaviors were found to be associated with aspiration. The best combination of nonswallowing and swallowing items as well as the best swallowing protocol remains unclear. Findings of this review will assist in development of valid clinical screening instruments."}
{"_id": "082e75207b76185cdd901b18ec09b1a5b694922a", "title": "Distributed Subgradient Methods for Multi-Agent Optimization", "text": "We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimizing his/her own objective function while exchanging information locally with other agents in the network over a time-varying topology. We provide convergence results and convergence rate estimates for the subgradient method. Our convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy."}
{"_id": "02b8a4d6e4df5d2a5db57786bb047ed08105336f", "title": "Monte-Carlo Localization for Mobile Wireless Sensor Networks", "text": "Localization is crucial to many applications in wireless sensor networks. In this article, we propose a range-free anchor-based localization algorithm for mobile wireless sensor networks that builds upon the Monte Carlo Localization algorithm. We concentrate on improving the localization accuracy and efficiency by making better use of the information a sensor node gathers and by drawing the necessary location samples faster. To do so, we constrain the area from which samples are drawn by building a box that covers the region where anchors\u2019 radio ranges overlap. This box is the region of the deployment area where the sensor node is localized. Simulation results show that localization accuracy is improved by a minimum of 4% and by a maximum of 73%, on average 30%, for varying node speeds when considering nodes with knowledge of at least three anchors. The coverage is also strongly affected by speed and its improvement ranges from 3% to 55%, on average 22%. Finally, the processing time is reduced by 93% for a similar localization accuracy."}
{"_id": "cfce4a3e3a626d6e0d9a155706d995f5d406d5a0", "title": "On the economic significance of ransomware campaigns: A Bitcoin transactions perspective", "text": "Bitcoin cryptocurrency system enables users to transact securely and pseudo-anonymously by using an arbitrary number of aliases (Bitcoin addresses). Cybercriminals exploit these characteristics to commit immutable and presumably untraceable monetary fraud, especially via ransomware; a type of malware that encrypts files of the infected system and demands ransom for decryption. In this paper, we present our comprehensive study on all recent ransomware and report the economic impact of such ransomware from the Bitcoin payment perspective. We also present a lightweight framework to identify, collect, and analyze Bitcoin addresses managed by the same user or group of users (cybercriminals, in this case), which includes a novel approach for classifying a payment as ransom. To verify the correctness of our framework, we compared our findings on CryptoLocker ransomware with the results presented in the literature. Our results align with the results found in the previous works except for the final valuation in USD. The reason for this discrepancy is that we used the average Bitcoin price on the day of each ransom payment whereas the authors of the previous studies used the Bitcoin price on the day of their evaluation. Furthermore, for each investigated ransomware, we provide a holistic view of its genesis, development, the process of infection and execution, and characteristic of ransom demands. Finally, we also release our dataset that contains a detailed transaction history of all the Bitcoin addresses we identified for each ransomware."}
{"_id": "ff6c257dbac99b39e1e2fb3f5c0154268e9f1022", "title": "Localized Multiple Kernel Learning With Dynamical Clustering and Matrix Regularization", "text": "Localized multiple kernel learning (LMKL) is an attractive strategy for combining multiple heterogeneous features with regard to their discriminative power for each individual sample. However, the learning of numerous local solutions may not scale well even for a moderately sized training set, and the independently learned local models may suffer from overfitting. Hence, in existing local methods, the distributed samples are typically assumed to share the same weights, and various unsupervised clustering methods are applied as preprocessing. In this paper, to enable the learner to discover and benefit from the underlying local coherence and diversity of the samples, we incorporate the clustering procedure into the canonical support vector machine-based LMKL framework. Then, to explore the relatedness among different samples, which has been ignored in a vector $\\ell _{p}$ -norm analysis, we organize the cluster-specific kernel weights into a matrix and introduce a matrix-based extension of the $\\ell _{p}$ -norm for constraint enforcement. By casting the joint optimization problem as a problem of alternating optimization, we show how the cluster structure is gradually revealed and how the matrix-regularized kernel weights are obtained. A theoretical analysis of such a regularizer is performed using a Rademacher complexity bound, and complementary empirical experiments on real-world data sets demonstrate the effectiveness of our technique."}
{"_id": "f93c86a40e38465762397b66bea99ceaec6fdf94", "title": "Optimal Execution with Nonlinear Impact Functions and Trading-Enhanced Risk", "text": "We determine optimal trading strategies for liquidation of a large single-asset portfolio to minimize a combination of volatility risk and market impact costs. We take the market impact cost per share to be a power law function of the trading rate, with an arbitrary positive exponent. This includes, for example, the square-root law that has been proposed based on market microstructure theory. In analogy to the linear model, we define a \u201ccharacteristic time\u201d for optimal trading, which now depends on the initial portfolio size and decreases as execution proceeds. We also consider a model in which uncertainty of the realized price is increased by demanding rapid execution; we show that optimal trajectories are described by a \u201ccritical portfolio size\u201d above which this effect is dominant and below which it may be neglected."}
{"_id": "4ce54221d9c2fc3ecf38aedc3eecefee53e568b8", "title": "Mining Offensive Language on Social Media", "text": "English. The present research deals with the automatic annotation and classification of vulgar ad offensive speech on social media. In this paper we will test the effectiveness of the computational treatment of the taboo contents shared on the web, the output is a corpus of 31,749 Facebook comments which has been automatically annotated through a lexicon-based method for the automatic identification and classification of taboo expressions. Italiano. La presente ricerca affronta il tema dell\u2019annotazione e della classificazione automatica dei contenuti volgari e offensivi espressi nei social media. Lo scopo del nostro lavoro consiste nel testare l\u2019efficacia del trattamento computazionale dei contenuti tab\u00f9 condivisi in rete. L\u2019output che forniamo un corpus di 31,749 commenti generati dagli utenti di Facebook e annotato automaticamente attraverso un metodo basato sul lessico per l\u2019identificazione e la classificazione delle"}
{"_id": "234f11713077aa09179533a1f37c075662e25b0f", "title": "Incremental Algorithms for Hierarchical Classification", "text": "We study the problem of hierarchical classification when labels corresponding to partial and/or multiple paths in the underlying taxonomy are allowed. We introduce a new hierarchical loss function, the H-loss, implementing the simple intuition that additional mistakes in the subtree of a mistaken class should not be charged for. Based on a probabilistic data model introduced in earlier work, we derive the Bayes-optimal classifier for the H-loss. We then empirically compare two incremental approximations of the Bayes-optimal classifier with a flat SVM classifier and with classifiers obtained by using hierarchical versions of the Perceptron and SVM algorithms. The experiments show that our simplest incremental approximation of the Bayes-optimal classifier performs, after just one training epoch, nearly as well as the hierarchical SVM classifier (which performs best). For the same incremental algorithm we also derive an H-loss bound showing, when data are generated by our probabilistic data model, exponentially fast convergence to the H-loss of the hierarchical classifier based on the true model parameters."}
{"_id": "2acbd7681ac7c06cae1541b8925f95294bc4dc45", "title": "An integrated framework for optimizing automatic monitoring systems in large IT infrastructures", "text": "The competitive business climate and the complexity of IT environments dictate efficient and cost-effective service delivery and support of IT services. These are largely achieved by automating routine maintenance procedures, including problem detection, determination and resolution. System monitoring provides an effective and reliable means for problem detection. Coupled with automated ticket creation, it ensures that a degradation of the vital signs, defined by acceptable thresholds or monitoring conditions, is flagged as a problem candidate and sent to supporting personnel as an incident ticket. This paper describes an integrated framework for minimizing false positive tickets and maximizing the monitoring coverage for system faults.\n In particular, the integrated framework defines monitoring conditions and the optimal corresponding delay times based on an off-line analysis of historical alerts and incident tickets. Potential monitoring conditions are built on a set of predictive rules which are automatically generated by a rule-based learning algorithm with coverage, confidence and rule complexity criteria. These conditions and delay times are propagated as configurations into run-time monitoring systems. Moreover, a part of misconfigured monitoring conditions can be corrected according to false negative tickets that are discovered by another text classification algorithm in this framework. This paper also provides implementation details of a program product that uses this framework and shows some illustrative examples of successful results."}
{"_id": "05357314fe2da7c2248b03d89b7ab9e358cbf01e", "title": "Learning with kernels", "text": "All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher."}
{"_id": "06d0a9697a0f0242dbdeeff08ec5266b74bfe457", "title": "Fast Exact Inference with a Factored Model for Natural Language Parsing", "text": "We presenta novel generati ve model for natural languagetree structuresin whichsemantic(lexical dependenc y) andsyntacticstructuresare scoredwith separatemodels.Thisfactorizationprovidesconceptual simplicity, straightforwardopportunitiesfor separatelyimproving the componentmodels,anda level of performancealreadycloseto thatof similar, non-factoredmodels.Most importantly, unlikeothermodernparsing models,thefactoredmodeladmitsanextremelyeffectiveA parsingalgorithm,which makesefficient,exactinferencefeasible."}
{"_id": "094d9601e6f6c45579647e20b5f7b0eeb4e2819f", "title": "Large margin hierarchical classification", "text": "We present an algorithmic framework for supervised classification learning where the set of labels is organized in a predefined hierarchical structure. This structure is encoded by a rooted tree which induces a metric over the label set. Our approach combines ideas from large margin kernel methods and Bayesian analysis. Following the large margin principle, we associate a prototype with each label in the tree and formulate the learning task as an optimization problem with varying margin constraints. In the spirit of Bayesian methods, we impose similarity requirements between the prototypes corresponding to adjacent labels in the hierarchy. We describe new online and batch algorithms for solving the constrained optimization problem. We derive a worst case loss-bound for the online algorithm and provide generalization analysis for its batch counterpart. We demonstrate the merits of our approach with a series of experiments on synthetic, text and speech data."}
{"_id": "7083489c4750898ceae641e8e3854e4440cadb61", "title": "A ZigBee -wireless wearable remote physiological monitoring system", "text": "Wearable health monitoring systems use integrated sensors to monitor vital physiological parameters of the wearer. The paper discusses the preliminary results of a prototype wearable physiological monitoring system to monitor physiological parameters such as Electrocardiogram (ECG), Heart Rate (HR), Electroencephalogram (EEG) and Body Temperature. The ECG and EEG signals are acquired using a textile electrodes integrated into the fabric of the wearer. The acquired ECG and EEG signals are processed to remove noises and the parameters are extracted and trend analysis is carried out. The physiological parameters are monitored at the remote monitoring station using a ZigBee wireless communication."}
{"_id": "8da0771e47c32405c4877cedce3ff84ac7390646", "title": "A survey of datasets for visual tracking", "text": "For 15\u00a0years now, visual tracking has been a very active research area of the computer vision community. But an increasing amount of works can be observed in the last five years. This has led to the development of numerous algorithms that can deal with more and more complex video sequences. Each of them has its own strengths and weaknesses. That is the reason why it becomes necessary to compare those algorithms. For this purpose, some datasets dedicated to visual tracking as well as, sometimes, their ground truth annotation files are regularly made publicly available by researchers. However, each dataset has its own specificities and is sometimes dedicated to test the ability of some algorithms to tackle only one or a few specific visual tracking subproblems. This article provides an overview of some of the datasets that are most used by the visual tracking community, but also of others that address specific tasks. We also propose a cartography of these datasets from a novel perspective, namely that of the difficulties the datasets present for visual tracking."}
{"_id": "8e0df9abe037f370809e19045a114ee406902139", "title": "DC Voltage Balancing for PWM Cascaded H-Bridge Converter Based STATCOM", "text": "This paper presents a new control method for cascaded connected H-bridge converter based STATCOMs. These converters have been classically commutated at fundamental line-frequencies, but the evolution of power semiconductors has allowed to increase switching frequencies and the power ratings of these devices, permitting the use of PWM modulation techniques. This paper mainly focuses the DC bus voltage balancing problems, and proposes a new control technique that solves these balancing problems maintaining the delivered reactive power equally distributed among all the H-bridges of the converter"}
{"_id": "682af5b9069369aae4e463807dcfcc96bce26a83", "title": "Emotion recognition from text using semantic labels and separable mixture models", "text": "This study presents a novel approach to automatic emotion recognition from text. First, emotion generation rules (EGRs) are manually deduced from psychology to represent the conditions for generating emotion. Based on the EGRs, the emotional state of each sentence can be represented as a sequence of semantic labels (SLs) and attributes (ATTs); SLs are defined as the domain-independent features, while ATTs are domain-dependent. The emotion association rules (EARs) represented by SLs and ATTs for each emotion are automatically derived from the sentences in an emotional text corpus using the a priori algorithm. Finally, a separable mixture model (SMM) is adopted to estimate the similarity between an input sentence and the EARs of each emotional state. Since some features defined in this approach are domain-dependent, a dialog system focusing on the students' daily expressions is constructed, and only three emotional states, happy, unhappy, and neutral, are considered for performance evaluation. According to the results of the experiments, given the domain corpus, the proposed approach is promising, and easily ported into other domains."}
{"_id": "45717c656ad11586ae1654d3961dbc762035b291", "title": "Project Aura: Toward Distraction-Free Pervasive Computing", "text": "A s the effects of Moore's law cause computing systems to become cheaper and more plentiful, a new problem arises: increasingly, the bottleneck in computing is not its disk capacity, processor speed, or communication bandwidth, but rather the limited resource of human attention. Human attention refers to a user's ability to attend to his or her primary tasks, ignoring system-generated distractions such as poor performance and failures. By exploiting plentiful computing resources to reduce user distraction, Project Aura is creating a system whose effectiveness is considerably greater than that of other systems today. Aura is specifically intended for pervasive computing environments involving wireless communication , wearable or handheld computers, and smart spaces. Human attention is an especially scarce resource in such environments, because the user is often preoccupied with walking, driving, or other real-world interactions. In addition, mobile computing poses difficult challenges such as intermittent and variable-bandwidth connectivity, concern for battery life, and the client resource constraints that weight and size considerations impose. To accomplish its ambitious goals, research in Aura spans every system level: from the hardware, through the operating system, to applications and end users. Underlying this diversity of concerns, Aura applies two broad concepts. First, it uses proactivity, which is a system layer's ability to anticipate requests from a higher layer. In today's systems , each layer merely reacts to the layer above it. Second, Aura is self-tuning: layers adapt by observing the demands made on them and adjusting their performance and resource usage characteristics accordingly. Currently, system-layer behavior is relatively static. Both of these techniques will help lower demand for human attention. To illustrate the kind of world we are trying to create, we present two hypothetical Aura scenarios. Although these might seem far-fetched today, they represent the kind of scenarios we expect to make commonplace through our research. In the first scenario, Jane is at Gate 23 in the Pitts-burgh airport, waiting for her connecting flight. She has edited many large documents and would like to use her wireless connection to email them. Unfortunately , bandwidth is miserable because many passengers at Gates 22 and 23 are surfing the Web. Aura observes that, at the current bandwidth, Jane won't be able to finish sending her documents before her flight departs. Consulting the airport's wireless network bandwidth service and flight schedule service, Aura discovers that wireless bandwidth is excellent at Gate 15, and that there are \u2026"}
{"_id": "39320211191e5e61a5547723256f04618a0b229c", "title": "Potassium and sodium transporters of Pseudomonas aeruginosa regulate virulence to barley", "text": "We investigated the role of three uncharacterized cation transporters of Pseudomonas aeruginosa PAO1 as virulence factors for barley: PA1207, PA5021, and PA2647. PAO1 displayed reduced barley virulence with inactivated PA1207, PA5021, and PA2647 as well as with one known Na+/H+ antiporter, PA1820. Using the Escherichia coli LB2003 mutant lacking three K+ uptake systems, the expression of the PA5021 gene repressed LB2003 growth with low K+, but the strain acquired tolerance to high K+. In contrast, the expression of the PA1207 gene enhanced growth of LB2003 with low K+ but repressed its growth with high K+; therefore, the PA5021 protein exports K+, while the PA1207 protein imports K+. The PA5021 mutant of P. aeruginosa also showed impaired growth at 400\u00a0mM KCl and at 400\u00a0mM NaCl; therefore, the PA5021 protein may also export Na+. The loss of the PA5021 protein also decreased production of the virulence factor pyocyanin; corroborating this result, pyocyanin production decreased in wild-type PAO1 under high salinity. Whole-genome transcriptome analysis showed that PAO1 induced more genes in barley upon infection compared to the PA5021 mutant. Additionally, PAO1 infection induced water stress-related genes in barley, which suggests that barley may undergo water deficit upon infection by this pathogen."}
{"_id": "8f76334bd276a2b92bd79203774f292318f42dc6", "title": "Design of broadband circularly polarized horn antenna using an L-shaped probe", "text": "This paper deals with a circular horn antenna fed by an L-shaped probe. The design process for broadband matching to a 50 Omega coaxial cable, and the antenna performance in axial ratio and gain are presented. The simulation results of this paper were obtained using Ansoft HFSS 9.2"}
{"_id": "66078dcc6053a2b314942f738048ee0359726cb5", "title": "COMPUTATION OF CONDITIONAL PROBABILITY STATISTICS BY 8-MONTH-OLD INFANTS", "text": "321 Abstract\u2014 A recent report demonstrated that 8-month-olds can segment a continuous stream of speech syllables, containing no acoustic or prosodic cues to word boundaries, into wordlike units after only 2 min of listening experience (Saffran, Aslin, & Newport, 1996). Thus, a powerful learning mechanism capable of extracting statistical information from fluent speech is available early in development. The present study extends these results by documenting the particular type of statistical computation\u2014transitional (conditional) probability\u2014used by infants to solve this word-segmentation task. An artificial language corpus, consisting of a continuous stream of trisyllabic nonsense words, was presented to 8-month-olds for 3 min. A postfamiliarization test compared the infants' responses to words versus part-words (tri-syllabic sequences spanning word boundaries). The corpus was constructed so that test words and part-words were matched in frequency, but differed in their transitional probabilities. Infants showed reliable discrimination of words from part-words, thereby demonstrating rapid segmentation of continuous speech into words on the basis of transitional probabilities of syllable pairs. Many aspects of the patterns of human languages are signaled in the speech stream by what is called distributional evidence, that is, regularities in the relative positions and order of elements over a corpus of utterances (Bloomfield, 1933; Maratsos & Chalkley, 1980). This type of evidence, along with linguistic theories about the characteristics of human languages, is what comparative linguists use to discover the structure of exotic languages (Harris, 1951). Similarly, this type of evidence , along with tendencies to perform certain kinds of analyses on language input (Chomsky, 1957), could be used by human language learners to acquire their native languages. However, using such evidence would require rather complex distributional and statistical computations , and surprisingly little is known about the abilities of human infants and young children to perform these computations. By using the term computation, we do not mean, of course, that infants are consciously performing a mathematical calculation, but rather that they might be sensitive to and able to store quantitative aspects of distribu-tional information about a language corpus. Recently, we have begun studying this problem by investigating the abilities of human learners to use statistical information to dis-Words are known to vary dramatically from one language to another, so finding the words of a language is clearly a problem that must involve learning from the linguistic environment. Moreover, the beginnings and ends of the sequences of sounds that form words in a \u2026"}
{"_id": "bb290a97b0d7312ea25870f145ef9262926d9be4", "title": "A deep learning-based multi-sensor data fusion method for degradation monitoring of ball screws", "text": "As ball screw has complex structure and long range of distribution, single signal collected by one sensor is difficult to express its condition fully and accurately. Multi-sensor data fusion usually has a better effect compared with single signal. Multi-sensor data fusion based on neural network(BP) is a commonly used multi-sensor data fusion method, but its application is limited by local optimum problem. Aiming at this problem, a multi-sensor data fusion method based on deep learning for ball screw is proposed in this paper. Deep learning, which consists of unsupervised learning and supervised learning, is the development and evolution of traditional neural network. It can effectively alleviate the optimization difficulty. Parallel superposition on frequency spectra of signals is directly done in the proposed deep learning-based multi-sensor data fusion method, and deep belief networks(DBN) are established by using fused data to adaptively mine available fault characteristics and automatically identify the degradation condition of ball screw. Test is designed to collect vibration signals of ball screw in 7 different degradation conditions by using 5 acceleration sensors installed on different places. The proposed fusion method is applied in identifying the degradation degree of ball screw in the test to demonstrate its efficacy. Finally, the multi-sensor data fusion based on neural network is also applied in degradation degree monitoring. The monitoring accuracy of deep learning-based multi-sensor data fusion is higher compared with that of neural network-based multi-sensor data fusion, which means the proposed method has more superiority."}
{"_id": "a8bec3694c8cc6885c88d88d103f807ef115e6cc", "title": "Dual methods for nonconvex spectrum optimization of multicarrier systems", "text": "The design and optimization of multicarrier communications systems often involve a maximization of the total throughput subject to system resource constraints. The optimization problem is numerically difficult to solve when the problem does not have a convexity structure. This paper makes progress toward solving optimization problems of this type by showing that under a certain condition called the time-sharing condition, the duality gap of the optimization problem is always zero, regardless of the convexity of the objective function. Further, we show that the time-sharing condition is satisfied for practical multiuser spectrum optimization problems in multicarrier systems in the limit as the number of carriers goes to infinity. This result leads to efficient numerical algorithms that solve the nonconvex problem in the dual domain. We show that the recently proposed optimal spectrum balancing algorithm for digital subscriber lines can be interpreted as a dual algorithm. This new interpretation gives rise to more efficient dual update methods. It also suggests ways in which the dual objective may be evaluated approximately, further improving the numerical efficiency of the algorithm. We propose a low-complexity iterative spectrum balancing algorithm based on these ideas, and show that the new algorithm achieves near-optimal performance in many practical situations"}
{"_id": "e9073b8b285afdd3713fa59ead836571060b9e73", "title": "A Comprehensive Framework for Evaluation in Design Science Research", "text": "Evaluation is a central and essential activity in conducting rigorous Design Science Research (DSR), yet there is surprisingly little guidance about designing the DSR evaluation activity beyond suggesting possible methods that could be used for evaluation. This paper extends the notable exception of the existing framework of Pries-Heje et al [11] to address this problem. The paper proposes an extended DSR evaluation framework together with a DSR evaluation design method that can guide DSR researchers in choosing an appropriate strategy for evaluation of the design artifacts and design theories that form the output from DSR. The extended DSR evaluation framework asks the DSR researcher to consider (as input to the choice of the DSR evaluation strategy) contextual factors of goals, conditions, and constraints on the DSR evaluation, e.g. the type and level of desired rigor, the type of artifact, the need to support formative development of the designed artifacts, the properties of the artifact to be evaluated, and the constraints on resources available, such as time, labor, facilities, expertise, and access to research subjects. The framework and method support matching these in the first instance to one or more DSR evaluation strategies, including the choice of ex ante (prior to artifact construction) versus ex post evaluation (after artifact construction) and naturalistic (e.g., field setting) versus artificial evaluation (e.g., laboratory setting). Based on the recommended evaluation strategy(ies), guidance is provided concerning what methodologies might be appropriate within the chosen strategy(ies)."}
{"_id": "06138684cc86aba86b496017a7bd410f96ab18dd", "title": "Clustering to Find Exemplar Terms for Keyphrase Extraction", "text": "Keyphrases are widely used as a brief summary of documents. Since manual assignment is time-consuming, various unsupervised ranking methods based on importance scores are proposed for keyphrase extraction. In practice, the keyphrases of a document should not only be statistically important in the document, but also have a good coverage of the document. Based on this observation, we propose an unsupervised method for keyphrase extraction. Firstly, the method finds exemplar terms by leveraging clustering techniques, which guarantees the document to be semantically covered by these exemplar terms. Then the keyphrases are extracted from the document using the exemplar terms. Our method outperforms sate-of-the-art graphbased ranking methods (TextRank) by 9.5% in F1-measure."}
{"_id": "6d69f9b44c3316250d328b6ac08d1d22bc426f00", "title": "TextRay: Mining Clinical Reports to Gain a Broad Understanding of Chest X-Rays", "text": "The chest X-ray (CXR) is by far the most commonly performed radiological examination for screening and diagnosis of many cardiac and pulmonary diseases. There is an immense world-wide shortage of physicians capable of providing rapid and accurate interpretation of this study. A radiologist-driven analysis of over two million CXR reports generated an ontology including the 40 most prevalent pathologies on CXR. By manually tagging a relatively small set of sentences, we were able to construct a training set of 959k studies. A deep learning model was trained to predict the findings given the patient frontal and lateral scans. For 12 of the findings we compare the model performance against a team of radiologists and show that in most cases the radiologists agree on average more with the algorithm than with each other."}
{"_id": "893cd8c2d739eacfb79f9217d018d0c4cfbb8d98", "title": "Fast bayesian compressive sensing using Laplace priors", "text": "In this paper we model the components of the compressive sensing (CS) problem using the Bayesian framework by utilizing a hierarchical form of the Laplace prior to model sparsity of the unknown signal. This signal prior includes some of the existing models as special cases and achieves a high degree of sparsity. We develop a constructive (greedy) algorithm resulting from this formulation where necessary parameters are estimated solely from the observation and therefore no user-intervention is needed. We provide experimental results with synthetic 1D signals and images, and compare with the state-of-the-art CS reconstruction algorithms demonstrating the superior performance of the proposed approach."}
{"_id": "3269b6ccf3ed63bb7c1fa744c20377474ff23760", "title": "Structure, culture and Simmelian ties in entrepreneurial firms", "text": "This article develops a cultural agreement approach to organizational culture that emphasizes how clusters of individuals reinforce potentially idiosyncratic understandings of many aspects of culture including the structure of network relations. Building on recent work concerning Simmelian tied dyads (defined as dyads embedded in three-person cliques), the research examines perceptions concerning advice and friendship relations in three entrepreneurial firms. The results support the idea that Simmelian tied dyads (relative to dyads in general) reach higher agreement concerning who is tied to whom, and who are embedded together in triads in organizations. \u00a9 2002 Elsevier Science B.V. All rights reserved."}
{"_id": "717a68a5a2eca79a95c1e05bd595f9c10efdeafa", "title": "Thinking like a nurse: a research-based model of clinical judgment in nursing.", "text": "This article reviews the growing body of research on clinical judgment in nursing and presents an alternative model of clinical judgment based on these studies. Based on a review of nearly 200 studies, five conclusions can be drawn: (1) Clinical judgments are more influenced by what nurses bring to the situation than the objective data about the situation at hand; (2) Sound clinical judgment rests to some degree on knowing the patient and his or her typical pattern of responses, as well as an engagement with the patient and his or her concerns; (3) Clinical judgments are influenced by the context in which the situation occurs and the culture of the nursing care unit; (4) Nurses use a variety of reasoning patterns alone or in combination; and (5) Reflection on practice is often triggered by a breakdown in clinical judgment and is critical for the development of clinical knowledge and improvement in clinical reasoning. A model based on these general conclusions emphasizes the role of nurses' background, the context of the situation, and nurses' relationship with their patients as central to what nurses notice and how they interpret findings, respond, and reflect on their response."}
{"_id": "300810453b6d300077e4ac4b16f271ba5abd7310", "title": "Efficient Sparse Coding in Early Sensory Processing: Lessons from Signal Recovery", "text": "Sensory representations are not only sparse, but often overcomplete: coding units significantly outnumber the input units. For models of neural coding this overcompleteness poses a computational challenge for shaping the signal processing channels as well as for using the large and sparse representations in an efficient way. We argue that higher level overcompleteness becomes computationally tractable by imposing sparsity on synaptic activity and we also show that such structural sparsity can be facilitated by statistics based decomposition of the stimuli into typical and atypical parts prior to sparse coding. Typical parts represent large-scale correlations, thus they can be significantly compressed. Atypical parts, on the other hand, represent local features and are the subjects of actual sparse coding. When applied on natural images, our decomposition based sparse coding model can efficiently form overcomplete codes and both center-surround and oriented filters are obtained similar to those observed in the retina and the primary visual cortex, respectively. Therefore we hypothesize that the proposed computational architecture can be seen as a coherent functional model of the first stages of sensory coding in early vision."}
{"_id": "3d5586cc57eadcc5220e6a316236a8b474500d41", "title": "The Security Development Lifecycle in the Context of Accreditation Policies and Standards", "text": "The proposed security development lifecycle (SecDLC) model delivers a perpetual cycle of information security management and refinement. Using real-world examples, the authors show how SecDLC ensures the goals of preserving, monitoring, and improving security practices, policies, and standards in private and public sectors. The authors describe the four phases of SecDLC, comparing and contrasting them to existing security development models."}
{"_id": "9c6afcd7cb409fd3130465474ce0911fdb99200f", "title": "Upgrading Lignocellulosic Biomasses : Hydrogenolysis of Platform Derived Molecules Promoted by Heterogeneous Pd-Fe Catalysts", "text": "This review provides an overview of heterogeneous bimetallic Pd-Fe catalysts in the C\u2013C and C\u2013O cleavage of platform molecules such as C2\u2013C6 polyols, furfural, phenol derivatives and aromatic ethers that are all easily obtainable from renewable cellulose, hemicellulose and lignin (the major components of lignocellulosic biomasses). The interaction between palladium and iron affords bimetallic Pd-Fe sites (ensemble or alloy) that were found to be very active in several sustainable reactions including hydrogenolysis, catalytic transfer hydrogenolysis (CTH) and aqueous phase reforming (APR) that will be highlighted. This contribution concentrates also on the different synthetic strategies (incipient wetness impregnation, deposition-precipitaion, co-precipitaion) adopted for the preparation of heterogeneous Pd-Fe systems as well as on the main characterization techniques used (XRD, TEM, H2-TPR, XPS and EXAFS) in order to elucidate the key factors that influence the unique catalytic performances observed."}
{"_id": "57db04988af0b65c217eaf3271afc40927d1c72f", "title": "Proposed embedded security framework for Internet of Things (IoT)", "text": "IoT is going to be an established part of life by extending the communication and networking anytime, anywhere. Security requirements for IoT will certainly underline the importance of properly formulated, implemented, and enforced security policies throughout their life-cycle. This paper gives a detailed survey and analysis of embedded security, especially in the area of IoT. Together with the conventional security solutions, the paper highlights the need to provide in-built security in the device itself to provide a flexible infrastructure for dynamic prevention, detection, diagnosis, isolation, and countermeasures against successful breaches. Based on this survey and analysis, the paper defines the security needs taking into account computational time, energy consumption and memory requirements of the devices. Finally, this paper proposes an embedded security framework as a feature of software/hardware co-design methodology."}
{"_id": "db52d3520f9ac17d20bd6195e03f4a650c923fba", "title": "A New Modular Bipolar High-Voltage Pulse Generator", "text": "Adapting power-electronic converters that are used in pulsed-power application attracted considerable attention during recent years. In this paper, a modular bipolar high-voltage pulse generator is proposed based on the voltage multipliers and half-bridge converters concept by using power electronics switches. This circuit is capable to generate repetitive high voltage bipolar pulses with a flexible output pattern, by using low voltage power supply and elements. The proposed topology was simulated in MATLAB/Simulink. To verify the circuit operation a four-stage laboratory prototype has been assembled and tested. The results confirm the theoretical analysis and show the validity of the converter scheme."}
{"_id": "4fd69173cabb3d4377432d70488938ac533a5ac3", "title": "JOINT IMAGE SHARPENING AND DENOISING BY 3 D TRANSFORM-DOMAIN COLLABORATIVE FILTERING", "text": "In order to simultaneously sharpen image details and attenuate noise, we propose to combine the recent blockmatching and 3D \u00deltering (BM3D) denoising approach, based on 3D transform-domain collaborative \u00deltering, with alpha-rooting, a transform-domain sharpening technique. The BM3D exploits grouping of similar image blocks into 3D arrays (groups) on which collaborative \u00deltering (by hard-thresholding) is applied. We propose two approaches of sharpening by alpha-rooting; the \u00derst applies alpharooting individually on the 2D transform spectra of each grouped block; the second applies alpha-rooting on the 3D-transform spectra of each 3D array in order to sharpen \u00dene image details shared by all grouped blocks and further enhance the interblock differences. The conducted experiments with the proposed method show that it can preserve and sharpen \u00dene image details and effectively attenuate noise."}
{"_id": "41c987b8a7e916d56fed2ea7311397e0f2286f3b", "title": "ACIQ: Analytical Clipping for Integer Quantization of neural networks", "text": "Unlike traditional approaches that focus on the quantization at the network level, in this work we propose to minimize the quantization effect at the tensor level. We analyze the trade-off between quantization noise and clipping distortion in low precision networks. We identify the statistics of various tensors, and derive exact expressions for the mean-square-error degradation due to clipping. By optimizing these expressions, we show marked improvements over standard quantization schemes that normally avoid clipping. For example, just by choosing the accurate clipping values, more than 40% accuracy improvement is obtained for the quantization of VGG16-BN to 4-bits of precision. Our results have many applications for the quantization of neural networks at both training and inference time. One immediate application is for a rapid deployment of neural networks to low-precision accelerators without time-consuming fine tuning or the availability of the full datasets."}
{"_id": "3b01dce44b93c96e9bdf6ba1c0ffc44303eebc10", "title": "Latent fault detection in large scale services", "text": "Unexpected machine failures, with their resulting service outages and data loss, pose challenges to datacenter management. Existing failure detection techniques rely on domain knowledge, precious (often unavailable) training data, textual console logs, or intrusive service modifications. We hypothesize that many machine failures are not a result of abrupt changes but rather a result of a long period of degraded performance. This is confirmed in our experiments, in which over 20% of machine failures were preceded by such latent faults. We propose a proactive approach for failure prevention. We present a novel framework for statistical latent fault detection using only ordinary machine counters collected as standard practice. We demonstrate three detection methods within this framework. Derived tests are domain-independent and unsupervised, require neither background information nor tuning, and scale to very large services. We prove strong guarantees on the false positive rates of our tests."}
{"_id": "6e8e4a5bec184eb1d1df1276005c9ffd3fbfab25", "title": "An Optimal Auction Mechanism for Mobile Edge Caching", "text": "With the explosive growth of wireless data, mobile edge caching has emerged as a promising paradigm to support mobile traffic recently, in which the service providers (SPs) prefetch some popular contents in advance and cache them locally at the network edge. When requested, those locally cached contents can be directly delivered to users with low latency, thus alleviating the traffic load over backhaul channels during peak hours and enhancing the quality-of-experience (QoE) of users simultaneously. Due to the limited available cache space, it makes sense for the SP to cache the most profitable contents. Nevertheless, users' true valuations of contents are their private knowledge, which is unknown to the SP in general. This information asymmetry poses a significant challenge for effective caching at the SP side. Further, the cached contents can be delivered with different quality, which needs to be chosen judiciously to balance delivery costs and user satisfaction. To tackle these difficulties, in this paper, we propose an optimal auction mechanism from the perspective of the SP. In the auction, the SP determines the cache space allocation over contents and user payments based on the users' (possibly untruthful) reports of their valuations so that the SP's expected revenue is maximized. The advocated mechanism is designed to elicit true valuations from the users (incentive compatibility) and to incentivize user participation (individual rationality). In addition, we devise a computationally efficient method for calculating the optimal cache space allocation and user payments. We further examine the optimal choice of the content delivery quality for the case with a large number of users and derive a closed-form solution to compute the optimal delivery quality. Finally, extensive simulations are implemented to evaluate the performance of the proposed optimal auction mechanism, and the impact of various model parameters is highlighted to obtain engineering insights into the content caching problem."}
{"_id": "1bde4205a9f1395390c451a37f9014c8bea32a8a", "title": "3D object recognition in range images using visibility context", "text": "Recognizing and localizing queried objects in range images plays an important role for robotic manipulation and navigation. Even though it has been steadily studied, it is still a challenging task for scenes with occlusion and clutter."}
{"_id": "1e8f46aeed1a96554a2d759d7ca194e1f9c22de1", "title": "Go-ICP: Solving 3D Registration Efficiently and Globally Optimally", "text": "Registration is a fundamental task in computer vision. The Iterative Closest Point (ICP) algorithm is one of the widely-used methods for solving the registration problem. Based on local iteration, ICP is however well-known to suffer from local minima. Its performance critically relies on the quality of initialization, and only local optimality is guaranteed. This paper provides the very first globally optimal solution to Euclidean registration of two 3D point sets or two 3D surfaces under the L2 error. Our method is built upon ICP, but combines it with a branch-and-bound (BnB) scheme which searches the 3D motion space SE(3) efficiently. By exploiting the special structure of the underlying geometry, we derive novel upper and lower bounds for the ICP error function. The integration of local ICP and global BnB enables the new method to run efficiently in practice, and its optimality is exactly guaranteed. We also discuss extensions, addressing the issue of outlier robustness."}
{"_id": "242caa8e04b73f56a8d4adae36028cc176364540", "title": "Voting-based pose estimation for robotic assembly using a 3D sensor", "text": "We propose a voting-based pose estimation algorithm applicable to 3D sensors, which are fast replacing their 2D counterparts in many robotics, computer vision, and gaming applications. It was recently shown that a pair of oriented 3D points, which are points on the object surface with normals, in a voting framework enables fast and robust pose estimation. Although oriented surface points are discriminative for objects with sufficient curvature changes, they are not compact and discriminative enough for many industrial and real-world objects that are mostly planar. As edges play the key role in 2D registration, depth discontinuities are crucial in 3D. In this paper, we investigate and develop a family of pose estimation algorithms that better exploit this boundary information. In addition to oriented surface points, we use two other primitives: boundary points with directions and boundary line segments. Our experiments show that these carefully chosen primitives encode more information compactly and thereby provide higher accuracy for a wide class of industrial parts and enable faster computation. We demonstrate a practical robotic bin-picking system using the proposed algorithm and a 3D sensor."}
{"_id": "02a808de5aa34685955fd1473433161edd20fd80", "title": "Surface reconstruction from unorganized points", "text": "Computer Graphics, 26, 2, July 1992 Surface Reconstruction from Unorganized Points Hugues Hoppe* Tony DeRose* Tom Duchampt John McDonald$ Werner Stuetzle~ University of Washington Seattle, WA 98195 We describe and demonstrate an algorithm that takes as input an unorganized set of points {xl, . . . . x.} c IR3 on or near an unknown manifold M, and produces as output a simplicial surface that approximates M. Neither the topology, the presence of boundaries, nor the geometry of M are assumed to be known in advance \u2014 all are inferred automatically from the data. This problem natu rally arises in a variety of practical situations such as range scanning an object from multiple view points, recovery of biological shapes from two-dimensional slices, and interactive surface sketching. CR"}
{"_id": "1eca34e935b5e0f172ff5557697317f32b646f5e", "title": "Registration and Integration of Textured 3-D Data", "text": "In general, multiple views are required to create a complete 3-D model of an object or of a multi-roomed indoor scene. In this work, we address the problem of merging multiple textured 3-D data sets, each of which corresponds to a different view of a scene or object. There are two steps to the merging process: registration and integration. To register, or align, data sets we use a modified version of the Iterative Closest Point algorithm; our version, which we call color ICP, considers not only 3-D information, but color as well. We show experimentally that the use of color decreases registration error significantly. Once the 3-D data sets have been registered, we integrate them to produce a seamless, composite 3-D textured model. Our approach to integration uses a 3-D occupancy grid to represent likelihood of spatial occupancy through voting. In addition to occupancy information, we store surface normal in each voxel of the occupancy grid. Surface normal is used to robustly extract a surface from the occupancy grid; on that surface we blend textures from multiple views."}
{"_id": "ff2a96303a285c946c989369666503ededff35be", "title": "Neem and other Botanical insecticides: Barriers to commercialization", "text": "In spite of the wide recognition that many plants possess insecticidal properties, only a handful of pest control products directly obtained from plants,i. e., botanical insecticides, are in use in developed countries. The demonstrated efficacy of the botanical neem (based on seed kernel extracts ofAzadirachta indica), and its recent approval for use in the United States, has stimulated research and development of other botanical insecticides. However, the commercialization of new botanical insecticides can be hindered by a number of issues. The principal barriers to commercialization of new botanicals are (i) scarcity of the natural resource; (ii) standardization and quality control; and (iii) registration. These issues are no problem (i) or considerably less of a problem (ii, iii) with conventional insecticides. In this review I discuss these issues and suggest how the problems may be overcome in the future."}
{"_id": "f0d99d9594fe3b5da915218614ef9f31f3bc4835", "title": "Trajectory learning from human demonstrations via manifold mapping", "text": "This work proposes a framework that enables arbitrary robots with unknown kinematics models to imitate human demonstrations to acquire a skill, and reproduce it in real-time. The diversity of robots active in non-laboratory environments is growing constantly, and to this end we present an approach for users to be able to easily teach a skill to a robot with any body configuration. Our proposed method requires a motion trajectory obtained from human demonstrations via a Kinect sensor, which is then projected onto a corresponding human skeleton model. The kinematics mapping between the robot and the human model is learned by employing Local Procrustes Analysis, which enables the transfer of the demonstrated trajectory from the human model to the robot. Finally, the transferred trajectory is modeled using Dynamic Movement Primitives, allowing it to be reproduced in real time. Experiments in simulation on a 4 degree of freedom robot show that our method is able to correctly imitate various skills demonstrated by a human."}
{"_id": "41e5f9984ec71218416c54304bcb9a99c27b4938", "title": "A Chinese Text Corrector Based on Seq2Seq Model", "text": "In this paper, we build a Chinese text corrector which can correct spelling mistakes precisely in Chinese texts. Our motivation is inspired by the recently proposed seq2seq model which consider the text corrector as a sequence learning problem. To begin with, we propose a biased-decoding method to improve the bilingual evaluation understudy (BLEU) score of our model. Secondly, we adopt a more reasonable OOV token scheme, which enhances the robustness of our correction mechanism. Moreover, to test the performance of our proposed model thoroughly, we establish a corpus which includes 600,000 sentences from news data of Sogou Labs. Experiments show that our corrector model can achieve better corrector results based on the corpus."}
{"_id": "143c5c64ed76f9cbbd70309e071905862364aa4b", "title": "Cloud Log Forensics Metadata Analysis", "text": "The increase in the quantity and questionable quality of the forensic information retrieved from the current virtualized data cloud system architectures has made it extremely difficult for law enforcement to resolve criminal activities within these logical domains. This paper poses the question of what kind of information is desired from virtual machine (VM) hosted operating systems (OS) investigated by a cloud forensic examiner. The authors gives an overview of the information that exists on current VM OS by looking at it's kernel hypervisor logs and discusses the shortcomings. An examination of the role that the VM kernel hypervisor logs provide as OS metadata in cloud investigations is also presented."}
{"_id": "3251fcb9cfb22beeb1ec13b746a5e5608590e977", "title": "A novel wideband slotted mm wave microstrip patch antenna", "text": "In this paper, a novel approach for the design of a compact broadband planar microstrip patch antenna is presented for millimetre wave wireless applications. This antenna structure was realized on a Quartz crystal substrate of dielectric constant 3.8 with the thickness of 0.8mm with ground plane. In order to examine, the performances of this antenna it was designed at the frequency 30 GHz and simulated by IE3D software package of Zeland. This type of antenna is composed of two slots of different dimensions and fed by single coaxial feed. Appropriate design of the antenna structure resulted in three discontinuous resonant bands. The proposed antenna has a \u221210dB return loss impedance bandwidth of 10.3GHz. The performance of this antenna was verified and characterised in terms of the antenna return loss, radiation pattern, surface current distribution and power gain. Also, the numerical results show that the proposed antenna has good impedance bandwidth and radiation characteristics over the entire operating bands. Details of the proposed antenna configurations and design procedures are given."}
{"_id": "5df318e4aac5313124571ecc7e186cba9e84a264", "title": "Mobile Application Security in the Presence of Dynamic Code Updates", "text": "The increasing number of repeated malware penetrations into official mobile app markets poses a high security threat to the confidentiality and privacy of end users\u2019 personal and sensitive information. Protecting end user devices from falling victims to adversarial apps presents a technical and research challenge for security researchers/engineers in academia and industry. Despite the security practices and analysis checks deployed at app markets, malware sneak through the defenses and infect user devices. The evolution of malware has seen it become sophisticated and dynamically changing software usually disguised as legitimate apps. Use of highly advanced evasive techniques, such as encrypted code, obfuscation and dynamic code updates, etc., are common practices found in novel malware. With evasive usage of dynamic code updates, a malware pretending as benign app bypasses analysis checks and reveals its malicious functionality only when installed on a user\u2019s device. This dissertation provides a thorough study on the use and the usage manner of dynamic code updates in Android apps. Moreover, we propose a hybrid analysis approach, StaDART, that interleaves static and dynamic analysis to cover the inherent shortcomings of static analysis techniques to analyze apps in the presence of dynamic code updates. Our evaluation results on real world apps demonstrate the effectiveness of StaDART. However, typically dynamic analysis, and hybrid analysis too for that matter, brings the problem of stimulating the app\u2019s behavior which is a non-trivial challenge for automated analysis tools. To this end, we propose a backward slicing based targeted inter component code paths execution technique, TeICC. TeICC leverages a backward slicing mechanism to extract code paths starting from a target point in the app. It makes use of a system dependency graph to extract code paths that involve inter component communication. The extracted code paths are then instrumented and executed inside the app context to capture sensitive dynamic behavior, resolve dynamic code updates and obfuscation. Our evaluation of TeICC shows that it can be effectively used for targeted execution of inter component code paths in obfuscated Android apps. Also, still not ruling out the possibility of adversaries reaching the user devices, we propose an on-phone API hooking"}
{"_id": "22d2a23f9fcf70ae5b9b5effa02931d7267efd92", "title": "PCAS: Pruning Channels with Attention Statistics for Deep Network Compression", "text": "Compression techniques for deep neural networks are important for implementing them on small embedded devices. In particular, channel-pruning is a useful technique for realizing compact networks. However, many conventional methods require manual setting of compression ratios in each layer. It is difficult to analyze the relationships between all layers, especially for deeper models. To address these issues, we propose a simple channel-pruning technique based on attention statistics that enables to evaluate the importance of channels. We improved the method by means of a criterion for automatic channel selection, using a single compression ratio for the entire model in place of per-layer model analysis. The proposed approach achieved superior performance over conventional methods with respect to accuracy and the computational costs for various models and datasets. We provide analysis results for behavior of the proposed criterion on different datasets to demonstrate its favorable properties for channel pruning."}
{"_id": "a111795bf08efda6cb504916e023ec43821f3a84", "title": "An automated domain specific stop word generation method for natural language text classification", "text": "We propose an automated method for generating domain specific stop words to improve classification of natural language content. Also we implemented a Bayesian natural language classifier working on web pages, which is based on maximum a posteriori probability estimation of keyword distributions using bag-of-words model to test the generated stop words. We investigated the distribution of stop-word lists generated by our model and compared their contents against a generic stop-word list for English language. We also show that the document coverage rank and topic coverage rank of words belonging to natural language corpora follow Zipf's law, just like the word frequency rank is known to follow."}
{"_id": "25af957ac7b643df9191507930d943ea22254549", "title": "Image-Dependent Gamut Mapping as Optimization Problem", "text": "We explore the potential of image-dependent gamut mapping as a constrained optimization problem. The performance of our new approach is compared to standard reference gamut mapping algorithms in psycho-visual tests."}
{"_id": "686b5783138e9adc75aec0f5832d69703eca41d9", "title": "Research synthesis in software engineering: A tertiary study", "text": "0950-5849/$ see front matter 2011 Elsevier B.V. A doi:10.1016/j.infsof.2011.01.004 \u21d1 Corresponding author. E-mail addresses: dcruzes@idi.ntnu.no (D.S. Cruzes Context: Comparing and contrasting evidence from multiple studies is necessary to build knowledge and reach conclusions about the empirical support for a phenomenon. Therefore, research synthesis is at the center of the scientific enterprise in the software engineering discipline. Objective: The objective of this article is to contribute to a better understanding of the challenges in synthesizing software engineering research and their implications for the progress of research and practice. Method: A tertiary study of journal articles and full proceedings papers from the inception of evidencebased software engineering was performed to assess the types and methods of research synthesis in systematic reviews in software engineering. Results: As many as half of the 49 reviews included in the study did not contain any synthesis. Of the studies that did contain synthesis, two thirds performed a narrative or a thematic synthesis. Only a few studies adequately demonstrated a robust, academic approach to research synthesis. Conclusion: We concluded that, despite the focus on systematic reviews, there is limited attention paid to research synthesis in software engineering. This trend needs to change and a repertoire of synthesis methods needs to be an integral part of systematic reviews to increase their significance and utility for research and practice. 2011 Elsevier B.V. All rights reserved."}
{"_id": "f74e36bb7d96b3875531f451212ee3703b575c42", "title": "Vishit: A Visualizer for Hindi Text", "text": "We outline the design of a visualizer, named Vishit, for texts in the Hindi language. The Hindi language is lingua franca in many states of India where people speak different languages. The visualized text serves as a universal language where seamless communication is needed by many people who speak different languages and have different cultures. Vishit consists of the following three major processing steps: language processing, knowledge base creation and scene generation. Initial results from the Vishit prototype are encouraging."}
{"_id": "d44f2a7cc586ea1e1cdff14b7a91bdf49c2aef85", "title": "Software-Based Resolver-to-Digital Conversion Using a DSP", "text": "A simple and cost-effective software-based resolver-to-digital converter using a digital signal processor is presented. The proposed method incorporates software generation of the resolver carrier using a digital filter for synchronous demodulation of the resolver outputs in such a way that there is a substantial savings on hardware like the costly carrier oscillator and associated digital and analog circuits for amplitude demodulators. In addition, because the method does not cause any time delay, the dynamics of the servo control using the scheme are not affected. Furthermore, the method enables the determination of the angle for a complete 360deg shaft rotation with reasonable accuracy using a lookup table that contains entries of only up to 45deg. Computer simulations and experimental results demonstrate the effectiveness and applicability of the proposed scheme."}
{"_id": "b98a96a4c381d85186e8becc85fb9fc114b13926", "title": "Learning to Detect Misleading Content on Twitter", "text": "The publication and spread of misleading content is a problem of increasing magnitude, complexity and consequences in a world that is increasingly relying on user-generated content for news sourcing. To this end, multimedia analysis techniques could serve as an assisting tool to spot and debunk misleading content on the Web. In this paper, we tackle the problem of misleading multimedia content detection on Twitter in real time. We propose a number of new features and a new semi-supervised learning event adaptation approach that helps generalize the detection capabilities of trained models to unseen content, even when the event of interest is of different nature compared to that used for training. Combined with bagging, the proposed approach manages to outperform previous systems by a significant margin in terms of accuracy. Moreover, in order to communicate the verification process to end users, we develop a web-based application for visualizing the results."}
{"_id": "7fb9a32853480bd6cb92d669b47013cb9ed5237c", "title": "Directional antennas in FANETs: A performance analysis of routing protocols", "text": "Flying Ad-hoc Networks (FANETs) [1] are mobile ad-hoc networks formed by small and medium-sized UAVs. Nowadays, most UAVs are equipped with omnidirectional antennas. In addition, most of the existing routing protocols were designed assuming the use of omnidirectional antennas. Directional antennas have the potential to increase spatial reuse, save the battery's energy, and substantially increase the transmission range. However, these benefits come with a few challenges. Existing directional MAC protocols deal with these challenges, mostly in static ad-hoc networks. We define DA-FANETs as FANETs where directional antennas are used. In this paper, we investigate the performance of existing routing protocols in DA-FANETs. First we implement an 802.11b-based directional MAC protocol that embodies common directional MAC protocol features. Then we evaluate the following routing protocols in DA-FANET scenarios: AODV, OLSR, RGR, and GRP. The results show that each antenna beamwidth has an optimal network size that gives the highest amount of routing. In addition, RGR is the best option for DA-FANETs while GRP is the worst. Although RGR shows the best performance of all, it still leaves a lot of room for improvement with a PDR sitting at just 85% and a relatively high amount of overhead in terms of the number of transmissions performed."}
{"_id": "fbd573bab9a6e1f9728b5f7c1d99b47b4b8aa0da", "title": "A new 6-DOF parallel robot with simple kinematic model", "text": "This paper presents a novel six-legged parallel robot, the kinematic model of which is simpler than that of the simplest six-axis serial robot. The new robot is the 6-DOF extension of the Cartesian parallel robot. It consists of three pairs of base-mounted prismatic actuators, the directions in each pair parallel to one of the axes of a Cartesian coordinate system. In each of the six legs, there are also two passive revolute joints, the axes of which are parallel to the direction of the prismatic joint. Finally, each leg is attached to the mobile platform via a spherical joint. The direct kinematics of the novel parallel robot can be solved easily by partitioning the orientation and the position of the mobile platform. There are eight distinct solutions, which can be found directly by solving a linear system and alternating the signs of three radicals. This parallel robot has a large workspace and is suitable for machining or rapid prototyping, as detailed in this paper."}
{"_id": "7f657845ac8d66b15752e51bf133f3ff2b2b771e", "title": "Accurate Singular Values of Bidiagonal Matrices", "text": "C Abstract om puting the singular valu es of a bidiagonal m atrix is the fin al phase of the stan dard algo-w rithm for the singular valu e decom position of a general m atrix. We present a ne w algorithm hich com putes all the singular valu es of a bidiagonal m atrix to high relative accuracy indepen-p dent of their m agn itudes. In contrast, the stan dard algorithm for bidiagonal m atrice s m ay com ute sm all singular valu es with no relative accuracy at all. Num erical experim ents sh ow th at K the new algorithm is com parable in speed to the stan dard algorithm , an d frequently faster. The stan dard algorithm for com puting the singular valu e decom position (SVD) of a gen-1 eral real m atrix A has two phases [7]:) Com pute orthogonal m atrices P an d Q su ch that B = P AQ is in bidiagonal form , i.e. 1 1 1 T 1. 2 has nonzero entries only on its diagonal an d first su perdiagonal) Com pute orthogonal m atrices P an d Q su ch that \u03a3 = P BQ is diagonal an d nonnega-t i 2 2 2 T 2 ive. The diagonal entries \u03c3 of \u03a3 are the singular values of A. We will take them to be-sorted in decreasing order: \u03c3 \u2265 \u03c3. The colum ns of Q = Q Q are the right singular vec i i + 1 1 2 t 1 2 ors, an d the colum ns of P= P P are the left singular vectors. n This process takes O (n) operations, where n is the dim ension of A. This is true eve 3 though Phase 2 is iterative, since it converges quickly in practice. The error an alysis o f this r u com bined procedure has a widely accepted conclusion [8], an d provided neither overflow no nderflow occurs m ay be su m m arized as follows: The com puted singular valu es \u03c3 differ from the true singular valu es of A by no m ore t 1 i han p (n). \u03b5. A , where A = \u03c3 is the 2-norm of A, \u03b5 is the m ach ine precision, an d p (n) is a T slowly growing function of the dim ension n \u2026"}
{"_id": "5ed4b57999d2a6c28c66341179e2888c9ca96a25", "title": "Learning Symbolic Models of Stochastic Domains", "text": "In this article, we work towards the goal of developing agents that can learn to act in complex worlds. We develop a probabilistic, relational planning rule representation that compactly models noisy, nondeterministic action effects, and show how such rules can be effectively learned. Through experiments in simple planning domains and a 3D simulated blocks world with realistic physics, we demonstrate that this learning algorithm allows agents to effectively model world dynamics."}
{"_id": "410600034c3693dbb6830f21a7d682e662984f7e", "title": "A Rank Minimization Heuristic with Application to Minimum Order System Approximation", "text": "Several problems arising in control system analysis and design, such as reduced order controller synthesis, involve minimizing the rank of a matrix variable subject to linear matrix inequality (LMI) constraints. Except in some special cases, solving this rank minimization problem (globally) is very difficult. One simple and surprisingly effective heuristic, applicable when the matrix variable is symmetric and positive semidefinite, is to minimize its trace in place of its rank. This results in a semidefinite program (SDP) which can be efficiently solved. In this paper we describe a generalization of the trace heuristic that applies to general nonsymmetric, even non-square, matrices, and reduces to the trace heuristic when the matrix is positive semidefinite. The heuristic is to replace the (nonconvex) rank objective with the sum of the singular values of the matrix, which is the dual of the spectral norm. We show that this problem can be reduced to an SDP, hence efficiently solved. To motivate the heuristic, we show that the dual spectral norm is the convex envelope of the rank on the set of matrices with norm less than one. We demonstrate the method on the problem of minimum order system approximation."}
{"_id": "297e1b9c24fc6f5d481c52cdc2f305d45ecfddb2", "title": "StalemateBreaker: A Proactive Content-Introducing Approach to Automatic Human-Computer Conversation", "text": "Existing open-domain human-computer conversation systems are typically passive: they either synthesize or retrieve a reply provided a human-issued utterance. It is generally presumed that humans should take the role to lead the conversation and introduce new content when a stalemate occurs, and that the computer only needs to \u201crespond.\u201d In this paper, we propose STALEMATEBREAKER, a conversation system that can proactively introduce new content when appropriate. We design a pipeline to determine when, what, and how to introduce new content during human-computer conversation. We further propose a novel reranking algorithm BiPageRank-HITS to enable rich interaction between conversation context and candidate replies. Experiments show that both the content-introducing approach and the reranking algorithm are effective. Our full STALEMATEBREAKER model outperforms a state-of-the-practice conversation system by +14.4% p@1 when a stalemate occurs."}
{"_id": "497bfd76187fb4c71c010ca3e3e65862ecf74e14", "title": "Ethical theory, codes of ethics and IS practice", "text": "Ethical issues, with respect to computer-based information systems, are important to the individual IS practitioner. These same issues also have an important impact on the moral well-being of organizations and societies. Considerable discussion has taken place in the Information Systems (IS) literature on specific ethical issues, but there is little published work which relates these issues to mainstream ethical theory. This paper describes a range of ethical theories drawn from philosophy literature and uses these theories to critique aspects of the newly revised ACM Code of Ethics. Some in the IS field on problematic ethical issues which are not resolved by the Code are then identified and discussed. The paper draws some implications and conclusions on the value of ethical theory and codes of practice, and on further work to develop existing ethical themes and to promote new initiatives in the ethical domain."}
{"_id": "787da29b8e0226a7b451ffd7f5d17f3394c3e615", "title": "Dual-camera design for coded aperture snapshot spectral imaging.", "text": "Coded aperture snapshot spectral imaging (CASSI) provides an efficient mechanism for recovering 3D spectral data from a single 2D measurement. However, since the reconstruction problem is severely underdetermined, the quality of recovered spectral data is usually limited. In this paper we propose a novel dual-camera design to improve the performance of CASSI while maintaining its snapshot advantage. Specifically, a beam splitter is placed in front of the objective lens of CASSI, which allows the same scene to be simultaneously captured by a grayscale camera. This uncoded grayscale measurement, in conjunction with the coded CASSI measurement, greatly eases the reconstruction problem and yields high-quality 3D spectral data. Both simulation and experimental results demonstrate the effectiveness of the proposed method."}
{"_id": "4822e607340828e9466cadd262c0e0286decbf64", "title": "Examining the Relationship Between Reviews and Sales: The Role of Reviewer Identity Disclosure in Electronic Markets", "text": "C product reviews have proliferated online, driven by the notion that consumers\u2019 decision to purchase or not purchase a product is based on the positive or negative information about that product they obtain from fellow consumers. Using research on information processing as a foundation, we suggest that in the context of an online community, reviewer disclosure of identity-descriptive information is used by consumers to supplement or replace product information when making purchase decisions and evaluating the helpfulness of online reviews. Using a unique data set based on both chronologically compiled ratings as well as reviewer characteristics for a given set of products and geographical location-based purchasing behavior from Amazon, we provide evidence that community norms are an antecedent to reviewer disclosure of identity-descriptive information. Online community members rate reviews containing identity-descriptive information more positively, and the prevalence of reviewer disclosure of identity information is associated with increases in subsequent online product sales. In addition, we show that shared geographical location increases the relationship between disclosure and product sales, thus highlighting the important role of geography in electronic commerce. Taken together, our results suggest that identity-relevant information about reviewers shapes community members\u2019 judgment of products and reviews. Implications for research on the relationship between online word-of-mouth (WOM) and sales, peer recognition and reputation systems, and conformity to online community norms are discussed."}
{"_id": "0f3178633b1512dc55cd1707eb539623d17e29d3", "title": "Examining CNN Representations With Respect to Dataset Bias", "text": "Given a pre-trained CNN without any testing samples, this paper proposes a simple yet effective method to diagnose feature representations of the CNN. We aim to discover representation flaws caused by potential dataset bias. More specifically, when the CNN is trained to estimate image attributes, we mine latent relationships between representations of different attributes inside the CNN. Then, we compare the mined attribute relationships with ground-truth attribute relationships to discover the CNN\u2019s blind spots and failure modes due to dataset bias. In fact, representation flaws caused by dataset bias cannot be examined by conventional evaluation strategies based on testing images, because testing images may also have a similar bias. Experiments have demonstrated the effectiveness of our method."}
{"_id": "feed8923642dc2e052213e5fcab6346819dc2fb2", "title": "A hybrid Fuzzy Logic Controller-Firefly Algorithm (FLC-FA) based for MPPT Photovoltaic (PV) system in solar car", "text": "This paper propose Firefly Algorithm (FA) new method for tuning the membership function of fuzzy logic controller for Maximum Power Point Tracker (MPPT) system Photovoltaic solar car, that consist of hybrid Fuzzy Logic Controller - Firefly Algorithm (FLC-FA) for the parameter. There are many MPPT methods for photovoltaic (PV) system, Perturbation and Observation (PnO), fuzzy logic controller (FLC) standard, and hybrid Fuzzy logic controller firefly algorithm (FLC-FA) is compared in this paper. The proposed FLC-FA algorithm is to obtain the optimal solution for MPPT for photovoltaic (PV) systems for solar cars. The result Fuzzy logic controller firefly (FLC-FA) of the proposed method, the highest maximum strength and efficiency generated is PnO = 96.31%, Standard FLC = 99.88% and Proposed FLC-FA = 99.98%. better than PnO and Fuzzy logic controller standard method. The main advantage of the proposed FLC-FA is more efficient and accurate than the still fuzzy logic controller standard."}
{"_id": "224eb3407b50533668b6c1caa55a720688b8b532", "title": "A review and future direction of agile, business intelligence, analytics and data science", "text": "Agile methodologies were introduced in 2001. Since this time, practitioners have applied Agile methodologies to many delivery disciplines. This article explores the application of Agile methodologies and principles to business intelligence delivery and how Agile has changed with the evolution of business intelligence. Business intelligence has evolved because the amount of data generated through the internet and smart devices has grown exponentially altering how organizations and individuals use information. The practice of business intelligence delivery with an Agile methodology has matured; however, business intelligence has evolved altering the use of Agile principles and practices. The Big Data phenomenon, the volume, variety, and velocity of data, has impacted business intelligence and the use of information. New trends such as fast analytics and data science have emerged as part of business intelligence. This paper addresses how Agile principles and practices have evolved with business intelligence, as well as its challenges and future directions."}
{"_id": "387ef15ce6de4a74b1a51f3694419b90d3d81fba", "title": "Ravenclaw: dialog management using hierarchical task decomposition and an expectation agenda", "text": "We describe RavenClaw, a new dialog management framework developed as a successor to the Agenda [1] architecture used in the CMU Communicator. RavenClaw introduces a clear separation between task and discourse behavior specification, and allows rapid development of dialog management components for spoken dialog systems operating in complex, goal-oriented domains. The system development effort is focused entirely on the specification of the dialog task, while a rich set of domain-independent conversational behaviors are transparently generated by the dialog engine. To date, RavenClaw has been applied to five different domains allowing us to draw some preliminary conclusions as to the generality of the approach. We briefly describe our experience in developing these systems."}
{"_id": "07c095dc513b7ac94a9360fab68ec4d2572797e2", "title": "Stochastic Language Generation For Spoken Dialogue Systems", "text": "The two current approaches to language generation, Template-based and rule-based (linguistic) NLG, have limitations when applied to spoken dialogue systems, in part because they were developed for text generation. In this paper, we propose a new corpus-based approach to natural language generation, specifically designed for spoken dialogue systems."}
{"_id": "1347af2305933f77953c881a78c6029ab50ae460", "title": "Automatically clustering similar units for unit selection in speech synthesis", "text": "This paper describes a new method for synthesizing speech by concatenating sub-word units from a database of labelled speech. A large unit inventory is created by automatically clustering units of the same phone class based on their phonetic and prosodic context. The appropriate cluster is then selected for a target unit offering a small set of candidate units. An optimal path is found through the candidate units based on their distance from the cluster center and an acoustically based join cost. Details of the method and justification are presented. The results of experiments using two different databases are given, optimising various parameters within the system. Also a comparison with other existing selection based synthesis techniques is given showing the advantages this method has over existing ones. The method is implemented within a full text-to-speech system offering efficient natural sounding speech synthesis."}
{"_id": "43d15ec7a3f7c26830541ea57f4af56b61983ca4", "title": "Creating natural dialogs in the carnegie mellon communicator system", "text": "The Carnegie Mellon Communicator system helps users create complex travel itineraries through a conversational interface. Itineraries consist of (multi-leg) flights, hotel and car reservations and are built from actual travel information for North America, obtained from the Web. The system manages dialog using a schema-based approach. Schemas correspond to major units of task information (such as a flight leg) and define conversational topics, or foci of interaction, meaningful to the user."}
{"_id": "2cc0df76c91c52a497906abc1cdeda91edb2923c", "title": "Extraction of high confidence minutiae points from fingerprint images", "text": "Fingerprint is one of the well accepted biometric traits for building automatic human recognition system. These systems mainly involves matching of two fingerprints based on their minutiae points. Therefore, extraction of reliable or true minutiae from the fingerprint image is critically important. This paper proposed an novel algorithm for automatic extraction of highly reliable minutiae points from a fingerprint image. It utilizes frequency domain image enhancement and heuristics. Experiments have been conducted on two databases. FVC2002 which is a publicly available fingerprint database containing 800 fingerprint images of 100 persons and IITK-Sel500FP which is an in-house database containing 1000 fingerprint images of 500 persons. Minutiae points extracted by the proposed algorithm are benchmarked against the manually marked ones and the comparison is done with an open source minutiae extractor mindtct of NIST. Experimental results have shown that the proposed algorithm has significantly higher confidence of extracted minutiae points being true."}
{"_id": "5054139b40eb3036b7c04ef7dbc401782527dcfb", "title": "Cognitive effort drives workspace configuration of human brain functional networks.", "text": "Effortful cognitive performance is theoretically expected to depend on the formation of a global neuronal workspace. We tested specific predictions of workspace theory, using graph theoretical measures of network topology and physical distance of synchronization, in magnetoencephalographic data recorded from healthy adult volunteers (N = 13) during performance of a working memory task at several levels of difficulty. We found that greater cognitive effort caused emergence of a more globally efficient, less clustered, and less modular network configuration, with more long-distance synchronization between brain regions. This pattern of task-related workspace configuration was more salient in the \u03b2-band (16-32 Hz) and \u03b3-band (32-63 Hz) networks, compared with both lower (\u03b1-band; 8-16 Hz) and higher (high \u03b3-band; 63-125 Hz) frequency intervals. Workspace configuration of \u03b2-band networks was also greater in faster performing participants (with correct response latency less than the sample median) compared with slower performing participants. Processes of workspace formation and relaxation in relation to time-varying demands for cognitive effort could be visualized occurring in the course of task trials lasting <2 s. These experimental results provide support for workspace theory in terms of complex network metrics and directly demonstrate how cognitive effort breaks modularity to make human brain functional networks transiently adopt a more efficient but less economical configuration."}
{"_id": "c5ae787fe9a647636da5384da552851a327159c0", "title": "Designing wireless transceiver blocks for LoRa application", "text": "The Internet of Things enables connected objects to communicate over distant fields, and trade-offs are often observed with the battery life and range. Long Range Radio, or LoRa, provides transmission for up to 15 km with impressive battery life, while sacrificing throughput. In order to design the LoRa system, there is a need to design and implement the LoRa transceiver. 1 V, 900 MHz, LoRa-capable transceiver blocks were designed in 65 nm process. The receiver front end consists of the narrowband low noise amplifier, common gate-common source balun, and double balanced mixer. The power consumption of this system is 2.63 mW. The overall gain of the receiver front end is approximately 51 dB. The transmitter implemented in this project utilizes a class-D amplifier alongside an inverter chain driver stage and a transformer based power combiner. The transmitter outputs 21.57 dBm of power from the given supply at 900 MHz, operating at 21% drain efficiency."}
{"_id": "f4fdaaf864ca6f73ced06f937d3af978568998eb", "title": "Network Monitoring as a Streaming Analytics Problem", "text": "Programmable switches potentially make it easier to perform flexible network monitoring queries at line rate, and scalable stream processors make it possible to fuse data streams to answer more sophisticated queries about the network in real-time. However, processing such network monitoring queries at high traffic rates requires both the switches and the stream processors to filter the traffic iteratively and adaptively so as to extract only that traffic that is of interest to the query at hand. While the realization that network monitoring is a streaming analytics problem has been made earlier, our main contribution in this paper is the design and implementation of Sonata, a closed-loop system that enables network operators to perform streaming analytics for network monitoring applications at scale. To achieve this objective, Sonata allows operators to express a network monitoring query by considering each packet as a tuple. More importantly, Sonata allows them to partition the query across both the switches and the stream processor, and through iterative refinement, Sonata's runtime attempts to extract only the traffic that pertains to the query, thus ensuring that the stream processor can scale to satisfy a large number of queries for traffic at very high rates. We show with a simple example query involving DNS reflection attacks and traffic traces from one of the world's largest IXPs that Sonata can capture 95% of all traffic pertaining to the query, while reducing the overall data rate by a factor of about 400 and the number of required counters by four orders of magnitude."}
{"_id": "4f0bfa3fb030a6aa6dc84ac97052583e425a1521", "title": "Neural representations for object perception: structure, category, and adaptive coding.", "text": "Object perception is one of the most remarkable capacities of the primate brain. Owing to the large and indeterminate dimensionality of object space, the neural basis of object perception has been difficult to study and remains controversial. Recent work has provided a more precise picture of how 2D and 3D object structure is encoded in intermediate and higher-level visual cortices. Yet, other studies suggest that higher-level visual cortex represents categorical identity rather than structure. Furthermore, object responses are surprisingly adaptive to changes in environmental statistics, implying that learning through evolution, development, and also shorter-term experience during adulthood may optimize the object code. Future progress in reconciling these findings will depend on more effective sampling of the object domain and direct comparison of these competing hypotheses."}
{"_id": "3e37eb2620e86bafdcab25f7086806ba1bac3404", "title": "A Wireless Headstage for Combined Optogenetics and Multichannel Electrophysiological Recording", "text": "This paper presents a wireless headstage with real-time spike detection and data compression for combined optogenetics and multichannel electrophysiological recording. The proposed headstage, which is intended to perform both optical stimulation and electrophysiological recordings simultaneously in freely moving transgenic rodents, is entirely built with commercial off-the-shelf components, and includes 32 recording channels and 32 optical stimulation channels. It can detect, compress and transmit full action potential waveforms over 32 channels in parallel and in real time using an embedded digital signal processor based on a low-power field programmable gate array and a Microblaze microprocessor softcore. Such a processor implements a complete digital spike detector featuring a novel adaptive threshold based on a Sigma-delta control loop, and a wavelet data compression module using a new dynamic coefficient re-quantization technique achieving large compression ratios with higher signal quality. Simultaneous optical stimulation and recording have been performed in-vivo using an optrode featuring 8 microelectrodes and 1 implantable fiber coupled to a 465-nm LED, in the somatosensory cortex and the Hippocampus of a transgenic mouse expressing ChannelRhodospin (Thy1::ChR2-YFP line 4) under anesthetized conditions. Experimental results show that the proposed headstage can trigger neuron activity while collecting, detecting and compressing single cell microvolt amplitude activity from multiple channels in parallel while achieving overall compression ratios above 500. This is the first reported high-channel count wireless optogenetic device providing simultaneous optical stimulation and recording. Measured characteristics show that the proposed headstage can achieve up to 100% of true positive detection rate for signal-to-noise ratio (SNR) down to 15 dB, while achieving up to 97.28% at SNR as low as 5 dB. The implemented prototype features a lifespan of up to 105 minutes, and uses a lightweight (2.8 g) and compact $(17 \\times 18 \\times 10\\ \\text{mm}^{3})$ rigid-flex printed circuit board."}
{"_id": "7780c59bcc110b9b1987d74ccab27c3e108ebf0d", "title": "Big Data and Transformational Government", "text": "The big data phenomenon is growing throughout private and public sector domains. Profit motives make it urgent for companies in the private sector to learn how to leverage big data. However, in the public sector, government services could also be greatly improved through the use of big data. Here, the authors describe some drivers, barriers, and best practices affecting the use of big data and associated analytics in the government domain. They present a model that illustrates how big data can result in transformational government through increased efficiency and effectiveness in the delivery of services. Their empirical basis for this model uses a case vignette from the US Department of Veterans Affairs, while the theoretical basis is a balanced view of big data that takes into account the continuous growth and use of such data. This article is part of a special issue on big data and business analytics."}
{"_id": "b3870b319e32b0a2f687aa873d7935f0043f6aa5", "title": "An analysis and critique of Research through Design: towards a formalization of a research approach", "text": "The field of HCI is experiencing a growing interest in Research through Design (RtD), a research approach that employs methods and processes from design practice as a legitimate method of inquiry. We are interested in expanding and formalizing this research approach, and understanding how knowledge, or theory, is generated from this type of design research. We conducted interviews with 12 leading HCI design researchers, asking them about design research, design theory, and RtD specifically. They were easily able to identify different types of design research and design theory from contemporary and historical design research efforts, and believed that RtD might be one of the most important contributions of design researchers to the larger research community. We further examined three historical RtD projects that were repeatedly mentioned in the interviews, and performed a critique of current RtD practices within the HCI research and interaction design communities. While our critique summarizes the problems, it also shows possible directions for further developments and refinements of the approach."}
{"_id": "fcb58022e146719109af0b527352d9a313705828", "title": "SDN On-The-Go (OTG) physical testbed", "text": "An emerging field of research, Software Defined Networks (SDN) promises to change the landscape of traditional network topology and management. Researchers and early adopters alike need adequate SDN testing facilities for their experiments but their options are limited. Industry is responding slowly with embedded support for SDN in their enterprise grade network hardware but it is cost prohibitive for many test environments with a single SDN switch costing thousands of dollars. There are a few emerging community SDN test networks that are fantastic for testing large topologies with production grade traffic but there is a cost associated with membership and some controlled experiments are difficult. A free and indispensible alternative to a dedicated hardware SDN is to use network emulation tools. These software tools are widely used and invaluable to SDN research. They provide an amazingly precise representation of physical network nodes and behavior but are inherently limited by their aggregation with other virtual devices on the same compute node. Some of our research requires a higher precision than software emulation can provide. Our solution is to build a low cost, portable, standalone SDN testbed. Called SDN On-The-Go (OTG), it is a complete, self-contained testbed that consists of four dedicated ZodiacFX SDN switches, four RaspberryPi3 hosts, a dedicated Kangaroo+ controller with 4GB RAM and a couple of routers to form the network isolation. The testbed supports many configurations for pseudo real-world SDN experiments that produce reliable and repeatable results. It can be used as a standalone research tool or as part of a larger network with production quality traffic. SDN OTG is designed to be used as a portable teaching device, moved from classroom to classroom or taken home for private research. We achieved our repeatability factor of an order of magnitude greater than emulation based testing. Our SDN OTG physical testbed weighs only twenty pounds, costs about a thousand US dollars, provides repeatable, precise time sensitive data and can be setup as a fully functional SDN testbed in a matter of minutes."}
{"_id": "eee5594f8e88e51f4fb86b43c9f4cb54d689f73c", "title": "Predicting and Improving Memory Retention : Psychological Theory Matters in the Big Data Era", "text": "Cognitive psychology has long had the aim of understanding mechanisms of human memory, with the expectation that such an understanding will yield practical techniques that support learning and retention. Although research insights have given rise to qualitative advice for students and educators, we present a complementary approach that offers quantitative, individualized guidance. Our approach synthesizes theory-driven and data-driven methodologies. Psychological theory characterizes basic mechanisms of human memory shared among members of a population, whereas machine-learning techniques use observations from a population to make inferences about individuals. We argue that despite the power of big data, psychological theory provides essential constraints on models. We present models of forgetting and spaced practice that predict the dynamic time-varying knowledge state of an individual student for specific material. We incorporate these models into retrieval-practice software to assist students in reviewing previously mastered material. In an ambitious year-long intervention in a middle-school foreign language course, we demonstrate the value of systematic review on long-term educational outcomes, but more specifically, the value of adaptive review that leverages data from a population of learners to personalize recommendations based on an individual\u2019s study history and past performance."}
{"_id": "28e28ba720462ced332bd649fcc6283fec9c134a", "title": "IoT survey: An SDN and fog computing perspective", "text": "Recently, there has been an increasing interest in the Internet of Things (IoT). While some analysts disvalue the IoT hype, several technology leaders, governments, and researchers are putting serious efforts to develop solutions enabling wide IoT deployment. Thus, the huge amount of generated data, the high network scale, the security and privacy concerns, the new requirements in terms of QoS, and the heterogeneity in this ubiquitous network of networks make its implementation a very challenging task. SDN, a new networking paradigm, has revealed its usefulness in reducing the management complexities in today\u2019s networks. Additionally, SDN, having a global view of the network, has presented effective security solutions. On the other hand, fog computing, a new data service platform, consists of pushing the data to the network edge reducing the cost (in terms of bandwidth consumption and high latency) of \u201cbig data\u201d transportation through the core network. In this paper, we critically review the SDN and fog computingbased solutions to overcome the IoT main challenges, highlighting their advantages, and exposing their weaknesses. Thus, we make recommendations at the end of this paper for the upcoming research work. Keywords\u2014 IoT; Survey; SDN; Fog; Cloud; 5G."}
{"_id": "337cafd0035dee02fd62550bae301f796a553ac7", "title": "General Image-Quality Equation: GIQE.", "text": "A regression-based model was developed relating aerial image quality, expressed in terms of the National Imagery Interpretability Rating Scale (NIIRS), to fundamental image attributes. The General Image-Quality Equation (GIQE) treats three main attributes: scale, expressed as the ground-sampled distance; sharpness, measured from the system modulation transfer function; and the signal-to-noise ratio. The GIQE can be applied to any visible sensor and predicts NIIRS ratings with a standard error of 0.3 NIIRS. The image attributes treated by the GIQE are influenced by system design and operation parameters. The GIQE allows system designers and operators to perform trade-offs for the optimization of image quality."}
{"_id": "93ade051f51f727ef745f2c417ed30eb08148e85", "title": "A Survey on Quality of Experience of HTTP Adaptive Streaming", "text": "Changing network conditions pose severe problems to video streaming in the Internet. HTTP adaptive streaming (HAS) is a technology employed by numerous video services that relieves these issues by adapting the video to the current network conditions. It enables service providers to improve resource utilization and Quality of Experience (QoE) by incorporating information from different layers in order to deliver and adapt a video in its best possible quality. Thereby, it allows taking into account end user device capabilities, available video quality levels, current network conditions, and current server load. For end users, the major benefits of HAS compared to classical HTTP video streaming are reduced interruptions of the video playback and higher bandwidth utilization, which both generally result in a higher QoE. Adaptation is possible by changing the frame rate, resolution, or quantization of the video, which can be done with various adaptation strategies and related client- and server-side actions. The technical development of HAS, existing open standardized solutions, but also proprietary solutions are reviewed in this paper as fundamental to derive the QoE influence factors that emerge as a result of adaptation. The main contribution is a comprehensive survey of QoE related works from human computer interaction and networking domains, which are structured according to the QoE impact of video adaptation. To be more precise, subjective studies that cover QoE aspects of adaptation dimensions and strategies are revisited. As a result, QoE influence factors of HAS and corresponding QoE models are identified, but also open issues and conflicting results are discussed. Furthermore, technical influence factors, which are often ignored in the context of HAS, affect perceptual QoE influence factors and are consequently analyzed. This survey gives the reader an overview of the current state of the art and recent developments. At the same time, it targets networking researchers who develop new solutions for HTTP video streaming or assess video streaming from a user centric point of view. Therefore, this paper is a major step toward truly improving HAS."}
{"_id": "fcdfdf3012bdd8feeaffc43f15bc976c11944ec3", "title": "A Framework for End-to-End Evaluation of 5G mmWave Cellular Networks in ns-3", "text": "The growing demand for ubiquitous mobile data services along with the scarcity of spectrum in the sub-6 GHz bands has given rise to the recent interest in developing wireless systems that can exploit the large amount of spectrum available in the millimeter wave (mmWave) frequency range. Due to its potential for multi-gigabit and ultra-low latency links, mmWave technology is expected to play a central role in 5th Generation (5G) cellular networks. Overcoming the poor radio propagation and sensitivity to blockages at higher frequencies presents major challenges, which is why much of the current research is focused at the physical layer. However, innovations will be required at all layers of the protocol stack to effectively utilize the large air link capacity and provide the end-to-end performance required by future networks.\n Discrete-event network simulation will be an invaluable tool for researchers to evaluate novel 5G protocols and systems from an end-to-end perspective. In this work, we present the first-of-its-kind, open-source framework for modeling mmWave cellular networks in the ns-3 simulator. Channel models are provided along with a configurable physical and MAC-layer implementation, which can be interfaced with the higher-layer protocols and core network model from the ns-3 LTE module to simulate end-to-end connectivity. The framework is demonstrated through several example simulations showing the performance of our custo mmmWave stack."}
{"_id": "b6a8927a47dce4daf240c70711358727cac371c5", "title": "Center Frequency and Bandwidth Controllable Microstrip Bandpass Filter Design Using Loop-Shaped Dual-Mode Resonator", "text": "A design approach for developing microwave bandpass filters (BPFs) with continuous control of the center frequency and bandwidth is presented. The proposed approach exploits a simple loop-shaped dual-mode resonator that is tapped and perturbed with varactor diodes to realize center frequency tunability and passband reconfigurabilty. The even- and odd-mode resonances of the resonator can be predominately controlled via the incorporated varactors, and the passband response reconfiguration is obtained with the proposed tunable external coupling mechanism, which resolves the return losses degradation attributed to conventional fixed external coupling mechanisms. The demonstrated approach leads to a relatively simple synthesis of the microwave BPF with an up to 33% center frequency tuning range, an excellent bandwidth tuning capability, as well as a high filter response reconfigurability, including an all-reject response."}
{"_id": "6123e52c1a560c88817d8720e05fbff8565271fb", "title": "Gated Siamese Convolutional Neural Network Architecture for Human Re-Identification", "text": "Matching pedestrians across multiple camera views, known as human re-identification, is a challenging research problem that has numerous applications in visual surveillance. With the resurgence of Convolutional Neural Networks (CNNs), several end-to-end deep Siamese CNN architectures have been proposed for human re-identification with the objective of projecting the images of similar pairs (i.e. same identity) to be closer to each other and those of dissimilar pairs to be distant from each other. However, current networks extract fixed representations for each image regardless of other images which are paired with it and the comparison with other images is done only at the final level. In this setting, the network is at risk of failing to extract finer local patterns that may be essential to distinguish positive pairs from hard negative pairs. In this paper, we propose a gating function to selectively emphasize such fine common local patterns by comparing the mid-level features across pairs of images. This produces flexible representations for the same image according to the images they are paired with. We conduct experiments on the CUHK03, Market-1501 and VIPeR datasets and demonstrate improved performance compared to a baseline Siamese CNN architecture."}
{"_id": "4469ff0b698d4752504b4b900b0cbef38ded59e4", "title": "Data association for multi-object Tracking-by-Detection in multi-camera networks", "text": "Multi-object tracking is still a challenging task in computer vision. We propose a robust approach to realize multi-object tracking using multi-camera networks. Detection algorithms are utilized to detect object regions with confidence scores for initialization of individual particle filters. Since data association is the key issue in Tracking-by-Detection mechanism, we present an efficient greedy matching algorithm considering multiple judgments based on likelihood functions. Furthermore, tracking in single cameras is realized by a greedy matching method. Afterwards, 3D geometry positions are obtained from the triangulation relationship between cameras. Corresponding objects are tracked in multiple cameras to take the advantages of multi-camera based tracking. Our algorithm performs online and does not need any information about the scene, no restrictions of enter-and-exit zones, no assumption of areas where objects are moving on and can be extended to any class of object tracking. Experimental results show the benefits of using multiple cameras by the higher accuracy and precision rates."}
{"_id": "55c769b5829ca88ba940e0050497f4956c233445", "title": "A real-time method for depth enhanced visual odometry", "text": "Visual odometry can be augmented by depth information such as provided by RGB-D cameras, or from lidars associated with cameras. However, such depth information can be limited by the sensors, leaving large areas in the visual images where depth is unavailable. Here, we propose a method to utilize the depth, even if sparsely available, in recovery of camera motion. In addition, the method utilizes depth by structure from motion using the previously estimated motion, and salient visual features for which depth is unavailable. Therefore, the method is able to extend RGBD visual odometry to large scale, open environments where depth often cannot be sufficiently acquired. The core of our method is a bundle adjustment step that refines the motion estimates in parallel by processing a sequence of images, in a batch optimization. We have evaluated our method in three sensor setups, one using an RGB-D camera, and two using combinations of a camera and a 3D lidar. Our method is rated #4 on the KITTI odometry benchmark irrespective of sensing modality\u2014compared to stereo visual odometry methods which retrieve depth by triangulation. The resulting average position error is 1.14% of the distance traveled."}
{"_id": "602bb6a3eaab203895d5daaf311b728c53a41463", "title": "Repeatability, reproducibility and rigor in systems research", "text": "Computer systems research spans sub-disciplines that include embedded and real-time systems, compilers, networking, and operating systems. Our contention is that a number of structural factors inhibit quality research and decrease the velocity of science. We highlight some of the factors we have encountered in our work and observed in published papers and propose solutions that, if widely adopted, could both increase the productivity of researchers and the quality of their output."}
{"_id": "801307124a2e4098b8d7402241ed189fed123be5", "title": "Earpod: eyes-free menu selection using touch input and reactive audio feedback", "text": "We present the design and evaluation of earPod: an eyes-free menu technique using touch input and reactive auditory feedback. Studies comparing earPod with an iPod-like visual menu technique on reasonably-sized static menus indicate that they are comparable in accuracy. In terms of efficiency (speed), earPod is initially slower, but outperforms the visual technique within 30 minutes of practice. Our results indicate that earPod is potentially a reasonable eyes-free menu technique for general use, and is a particularly exciting technique for use in mobile device interfaces."}
{"_id": "4e348a6bb29f7ac5514ba52d503417424153223c", "title": "Incorporating Luminance, Depth and Color Information by Fusion-based Networks for Semantic Segmentation", "text": "Semantic segmentation has made encouraging progress due to the success of deep convolutional networks in recent years. Meanwhile, depth sensors become prevalent nowadays, so depth maps can be acquired more easily. However, there are few studies that focus on the RGB-D semantic segmentation task. Exploiting the depth information effectiveness to improve performance is a challenge. In this paper, we propose a novel solution named LDFNet, which incorporates Luminance, Depth and Color information by a fusion-based network. It includes a sub-network to process depth maps and employs luminance images to assist the depth information in processes. LDFNet outperforms the other state-of-art systems on the Cityscapes dataset, and its inference speed is faster than most of the existing networks. The experimental results show the effectiveness of the proposed multi-modal fusion network and its potential for practical applications."}
{"_id": "1c9eb312cd57109b4040aab332aeff3e6661a929", "title": "Applying Reinforcement Learning to Blackjack Using Q-Learning", "text": "Blackjack is a popular card game played in many casinos. The objective of the game is to win money by obtaining a point total higher than the dealer\u2019s without exceeding 21. Determining an optimal blackjack strategy proves to be a difficult challenge due to the stochastic nature of the game. This presents an interesting opportunity for machine learning algorithms. Supervised learning techniques may provide a viable solution, but do not take advantage of the inherent reward structure of the game. Reinforcement learning algorithms generally perform well in stochastic environments, and could utilize blackjack's reward structure. This paper explores reinforcement learning as a means of approximating an optimal blackjack strategy using the Q-learning algorithm."}
{"_id": "22ddd63b622aa19166322abed42c3971685accd1", "title": "Methods of inference and learning for performance modeling of parallel applications", "text": "Increasing system and algorithmic complexity combined with a growing number of tunable application parameters pose significant challenges for analytical performance modeling. We propose a series of robust techniques to address these challenges. In particular, we apply statistical techniques such as clustering, association, and correlation analysis, to understand the application parameter space better. We construct and compare two classes of effective predictive models: piecewise polynomial regression and artifical neural networks. We compare these techniques with theoretical analyses and experimental results. Overall, both regression and neural networks are accurate with median error rates ranging from 2.2 to 10.5 percent. The comparable accuracy of these models suggest differentiating features will arise from ease of use, transparency, and computational efficiency."}
{"_id": "11d11c127be2a23596e38e868977188f8eb59cd8", "title": "Bayesian Inference and Learning in Gaussian Process State-Space Models with Particle MCMC", "text": "State-space models are successfully used in many areas of science, engineering and economics to model time series and dynamical systems. We present a fully Bayesian approach to inference and learning in nonlinear nonparametric statespace models. We place a Gaussian process prior over the transition dynamics, resulting in a flexible model able to capture complex dynamical phenomena. However, to enable efficient inference, we marginalize over the dynamics of the model and instead infer directly the joint smoothing distribution through the use of specially tailored Particle Markov Chain Monte Carlo samplers. Once an approximation of the smoothing distribution is computed, the state transition predictive distribution can be formulated analytically. We make use of sparse Gaussian process models to greatly reduce the computational complexity of the approach."}
{"_id": "353656b577e1fbbbe995a1a623698f6fd8559b3d", "title": "Spatial and Spatio-Temporal Multidimensional Data Modelling: A Survey", "text": "Data warehouse store and provide access to large volume of historical data supporting the strategic decisions of organisations. Data warehouse is based on a multidimensional model which allow to express user\u2019s needs for supporting the decision making process. Since it is estimated that 80% of data used for decision making has a spatial or location component, spatial data have been widely integrated in Data Warehouses and in OLAP systems. Extending a multidimensional data model by the inclusion of spatial data provides a concise and organised spatial data warehouse representation. This paper aims to provide a comprehensive review of literature on developed and suggested spatial and spatio-temporel multidimensional models. A benchmarking study of the proposed models is presented. Several evaluation criteria\u2019s are used to identify the existence of trends as well as potential needs for further investigations. Keywords\u2014 Data warehouse, Spatial data, Multidimensional modelling, Temporal data"}
{"_id": "b90dd2f366988d9bb76399d4137c1768fe460c8f", "title": "Malicious PDF detection using metadata and structural features", "text": "Owed to their versatile functionality and widespread adoption, PDF documents have become a popular avenue for user exploitation ranging from large-scale phishing attacks to targeted attacks. In this paper, we present a framework for robust detection of malicious documents through machine learning. Our approach is based on features extracted from document metadata and structure. Using real-world datasets, we demonstrate the the adequacy of these document properties for malware detection and the durability of these features across new malware variants. Our analysis shows that the Random Forests classification method, an ensemble classifier that randomly selects features for each individual classification tree, yields the best detection rates, even on previously unseen malware.\n Indeed, using multiple datasets containing an aggregate of over 5,000 unique malicious documents and over 100,000 benign ones, our classification rates remain well above 99% while maintaining low false positives of 0.2% or less for different classification parameters and experimental scenarios. Moreover, the classifier has the ability to detect documents crafted for targeted attacks and separate them from broadly distributed malicious PDF documents. Remarkably, we also discovered that by artificially reducing the influence of the top features in the classifier, we can still achieve a high rate of detection in an adversarial setting where the attacker is aware of both the top features utilized in the classifier and our normality model. Thus, the classifier is resilient against mimicry attacks even with knowledge of the document features, classification method, and training set."}
{"_id": "067c7857753e21e7317b556c86e30be60aa7cac0", "title": "Xen and the art of virtualization", "text": "Numerous systems have been designed which use virtualization to subdivide the ample resources of a modern computer. Some require specialized hardware, or cannot support commodity operating systems. Some target 100% binary compatibility at the expense of performance. Others sacrifice security or functionality for speed. Few offer resource isolation or performance guarantees; most provide only best-effort provisioning, risking denial of service.This paper presents Xen, an x86 virtual machine monitor which allows multiple commodity operating systems to share conventional hardware in a safe and resource managed fashion, but without sacrificing either performance or functionality. This is achieved by providing an idealized virtual machine abstraction to which operating systems such as Linux, BSD and Windows XP, can be ported with minimal effort.Our design is targeted at hosting up to 100 virtual machine instances simultaneously on a modern server. The virtualization approach taken by Xen is extremely efficient: we allow operating systems such as Linux and Windows XP to be hosted simultaneously for a negligible performance overhead --- at most a few percent compared with the unvirtualized case. We considerably outperform competing commercial and freely available solutions in a range of microbenchmarks and system-wide tests."}
{"_id": "0992ef63a94c4b9dfc05f96b3a144c1e7237c539", "title": "Static detection of malicious JavaScript-bearing PDF documents", "text": "Despite the recent security improvements in Adobe's PDF viewer, its underlying code base remains vulnerable to novel exploits. A steady flow of rapidly evolving PDF malware observed in the wild substantiates the need for novel protection instruments beyond the classical signature-based scanners. In this contribution we present a technique for detection of JavaScript-bearing malicious PDF documents based on static analysis of extracted JavaScript code. Compared to previous work, mostly based on dynamic analysis, our method incurs an order of magnitude lower run-time overhead and does not require special instrumentation. Due to its efficiency we were able to evaluate it on an extremely large real-life dataset obtained from the VirusTotal malware upload portal. Our method has proved to be effective against both known and unknown malware and suitable for large-scale batch processing."}
{"_id": "0c668ee24d58ecca165f788d40765e79ed615471", "title": "Classification and Regression Trees", "text": ""}
{"_id": "0dac671dae4192cfe96d290b50cc3f1105798825", "title": "Stealthy malware detection through vmm-based \"out-of-the-box\" semantic view reconstruction", "text": "An alarming trend in malware attacks is that they are armed with stealthy techniques to detect, evade, and subvert malware detection facilities of the victim. On the defensive side, a fundamental limitation of traditional host-based anti-malware systems is that they run inside the very hosts they are protecting (\"in the box\"), making them vulnerable to counter-detection and subversion by malware. To address this limitation, recent solutions based on virtual machine (VM) technologies advocate placing the malware detection facilities outside of the protected VM (\"out of the box\"). However, they gain tamper resistance at the cost of losing the native, semantic view of the host which is enjoyed by the \"in the box\" approach, thus leading to a technical challenge known as the semantic gap.\n In this paper, we present the design, implementation, and evaluation of VMwatcher - an \"out-of-the-box\" approach that overcomes the semantic gap challenge. A new technique called guest view casting is developed to systematically reconstruct internal semantic views (e.g., files, processes, and kernel modules) of a VM from the outside in a non-intrusive manner. Specifically, the new technique casts semantic definitions of guest OS data structures and functions on virtual machine monitor (VMM)-level VM states, so that the semantic view can be reconstructed. With the semantic gap bridged, we identify two unique malware detection capabilities: (1) view comparison-based malware detection and its demonstration in rootkit detection and (2) \"out-of-the-box\" deployment of host-based anti-malware software with improved detection accuracy and tamper-resistance. We have implemented a proof-of-concept prototype on both Linux and Windows platforms and our experimental results with real-world malware, including elusive kernel-level rootkits, demonstrate its practicality and effectiveness."}
{"_id": "c803e14cf844f774607ba7dd1f647fd8539519f0", "title": "Automated and Cooperative Vehicle Merging at Highway On-Ramps", "text": "Recognition of necessities of connected and automated vehicles (CAVs) is gaining momentum. CAVs can improve both transportation network efficiency and safety through control algorithms that can harmonically use all existing information to coordinate the vehicles. This paper addresses the problem of optimally coordinating CAVs at merging roadways to achieve smooth traffic flow without stop-and-go driving. We present an optimization framework and an analytical closed-form solution that allows online coordination of vehicles at merging zones. The effectiveness of the efficiency of the proposed solution is validated through a simulation, and it is shown that coordination of vehicles can significantly reduce both fuel consumption and travel time."}
{"_id": "22a21791f8e36fe0959027a689ce996d3eae0747", "title": "An ESPAR Antenna for Beamspace-MIMO Systems Using PSK Modulation Schemes", "text": "In this paper the use of electronically steerable passive array radiator (ESPAR) antennas is introduced for achieving increased spectral efficiency characteristics in multiple-input, multiple-output (MIMO) systems using a single active element, and compact antennas. The proposed ESPAR antenna is capable of mapping phase-shift-keying (PSK) modulated symbols to be transmitted onto orthogonal basis functions on the wavevector domain of the multi-element antenna (MEA), instead of the traditional approach of sending different symbol streams in different locations on the antenna domain. In this way, different symbols are transmitted simultaneously towards different angles of departure at the transmitter side. We show that the proposed system, called beamspace-MIMO (BS-MIMO) can achieve performance characteristics comparable to traditional MIMO systems while using a single radio-frequency (RF) front-end and ESPAR antenna arrays of total length equal to lambda/8, for popular PSK modulation schemes such as binary-PSK (BPSK), quaternary-PSK (QPSK), as well as for their offset-PSK and differential-PSK variations."}
{"_id": "4b7ec490154397c2691d3404eccd412665fa5e6a", "title": "N-gram-based Machine Translation", "text": "This article describes in detail an n-gram approach to statistical machine translation. This approach consists of a log-linear combination of a translation model based on n-grams of bilingual units, which are referred to as tuples, along with four specific feature functions. Translation performance, which happens to be in the state of the art, is demonstrated with Spanish-to-English and English-to-Spanish translations of the European Parliament Plenary Sessions (EPPS)."}
{"_id": "1f568ee68dd5a0d8e77321fb724dd6b26a8f034a", "title": "To queue or not to queue : equilibrium behavior in queueing systems", "text": "To the memory of my parents, Sara and Avraham Haviv To my mother and late father, Fela and Shimon Hassin Contents Preface xi 1. INTRODUCTION 1 1.1 Basic concepts 2 1.1.1 Strategies, payoffs, and equilibrium 2 1.1.2 Steady-state 4 1.1.3 Subgame perfect equilibrium 5 1.1.4 Evolutionarily stable strategies 5 1.1.5 The Braess paradox 5 1.1.6 Avoid the crowd or follow it? 6 1.2 Threshold strategies 7 1.3 Costs and objectives 9 1.4 Queueing theory preliminaries 11 1.5 A shuttle example 14 1.5.1 The unobservable model 14 1.5.2 The observable model 17 1.5.3 Social optimality 18 1.6 Non-stochastic models 19"}
{"_id": "342ca79a0ac2cc726bf31ffb4ce399821d3e2979", "title": "Merkle Tree Traversal in Log Space and Time", "text": "We present a technique for Merkle tree traversal which requires only logarithmic space and time. For a tree with N nodes, our algorithm computes sequential tree leaves and authentication path data in time Log2(N) and space less than 3Log2(N), where the units of computation are hash function evaluations or leaf value computations, and the units of space are the number of node values stored. Relative to this algorithm, we show our bounds to be necessary and sufficient. This result is an asymptotic improvement over all other previous results (for example, measuring cost = space \u2217 time). We also prove that the complexity of our algorithm is optimal: There can exist no Merkle tree traversal algorithm which consumes both less than O(Log2(N)) space and less than O(Log2(N)) time. Our algorithm is especially of practical interest when space efficiency is required, and can also enhance other traversal algorithms which relax space constraints to gain speed."}
{"_id": "f93507b4c55a3ae6b891d00bcb98dde6410543f3", "title": "Kinematics and Jacobian analysis of a 6UPS Stewart-Gough platform", "text": "In this paper a complete kinematics analysis of a 6UPS Stewart-Gough platform is performed. A new methodology to deal with the forward kinematics problem based on a multibody formulation and a numerical method is presented, resulting in a highly efficient formulation approach. The idea of a multibody formulation is basically to built for each joint that links the body of the robot, a set of equations that define the constraints on their movement so that the joint variables are included in these equations. Once the constraint function is defined, we may use a numerical method to find the kinematic solution. In order to avoid the problem reported with the orientation (Euler's Angles) of the moving platform, a generalized coordinates vector using quaternions is applied. Then, the Jacobian analysis using reciprocal screw systems is described and the workspace is determined. Finally, a design of a simulator using MATLAB as programming tool is presented."}
{"_id": "82e1e4c49d515b1adebe9e2932458e08280de634", "title": "Coping with compassion fatigue.", "text": "Helping others who have undergone a trauma from a natural disaster, accident, or sudden act of violence, can be highly satisfying work. But helping trauma victims can take a toll on even the most seasoned mental health professional. Ongoing exposure to the suffering of those you are helping can bring on a range of signs and symptoms -including anxiety, sleeplessness, irritability, and feelings of helplessness -that can interfere, sometimes significantly, with everyday life and work. In clinicians, including therapists, counselors, and social workers, this response is often referred to as \u201ccompassion fatigue\u201d or \u201csecondary post-traumatic stress.\u201d"}
{"_id": "12a23d19543e73b5808b35f1ff2d00faba633bb6", "title": "The flipped classroom: a course redesign to foster learning and engagement in a health professions school.", "text": "Recent calls for educational reform highlight ongoing concerns about the ability of current curricula to equip aspiring health care professionals with the skills for success. Whereas a wide range of proposed solutions attempt to address apparent deficiencies in current educational models, a growing body of literature consistently points to the need to rethink the traditional in-class, lecture-based course model. One such proposal is the flipped classroom, in which content is offloaded for students to learn on their own, and class time is dedicated to engaging students in student-centered learning activities, like problem-based learning and inquiry-oriented strategies. In 2012, the authors flipped a required first-year pharmaceutics course at the University of North Carolina Eshelman School of Pharmacy. They offloaded all lectures to self-paced online videos and used class time to engage students in active learning exercises. In this article, the authors describe the philosophy and methodology used to redesign the Basic Pharmaceutics II course and outline the research they conducted to investigate the resulting outcomes. This article is intended to serve as a guide to instructors and educational programs seeking to develop, implement, and evaluate innovative and practical strategies to transform students' learning experience. As class attendance, students' learning, and the perceived value of this model all increased following participation in the flipped classroom, the authors conclude that this approach warrants careful consideration as educators aim to enhance learning, improve outcomes, and fully equip students to address 21st-century health care needs."}
{"_id": "e930f44136bcc8cf32370c4caaf0734af0fa0d51", "title": "An Empirical Investigation of the Impact of Individual and Work Characteristics on Telecommuting Success", "text": "Individual and work characteristics are used in telecommuting plans; however, their impact on telecommuting success is not well known. We studied how employee tenure, work experience, communication skills, task interdependence, work output measurability, and task variety impact telecommuter productivity, performance, and satisfaction after taking into account the impact of communication technologies. Data collected from 89 North American telecommuters suggest that in addition to the richness of the media, work experience, communication skills, and task interdependence impact telecommuting success. These characteristics are practically identifiable and measurable; therefore, we expect our findings to help managers convert increasing telecommuting adoption rates to well-defined and measurable gains."}
{"_id": "def773093c721c5d0dcac3909255ec39efeca97b", "title": "An End-to-End Spatio-Temporal Attention Model for Human Action Recognition from Skeleton Data", "text": "Human action recognition is an important task in computer vision. Extracting discriminative spatial and temporal features to model the spatial and temporal evolutions of different actions plays a key role in accomplishing this task. In this work, we propose an end-to-end spatial and temporal attention model for human action recognition from skeleton data. We build our model on top of the Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM), which learns to selectively focus on discriminative joints of skeleton within each frame of the inputs and pays different levels of attention to the outputs of different frames. Furthermore, to ensure effective training of the network, we propose a regularized cross-entropy loss to drive the model learning process and develop a joint training strategy accordingly. Experimental results demonstrate the effectiveness of the proposed model, both on the small human action recognition dataset of SBU and the currently largest NTU dataset."}
{"_id": "5a5af9fc5093fcd964daa769cbdd8e548261f79b", "title": "Fast Viterbi map matching with tunable weight functions", "text": "This paper describes a map matching program submitted to the ACM SIGSPATIAL Cup 2012. We first summarize existing map matching algorithms into three categories, and compare their performance thoroughly. In general, global max-weight methods using the Viterbi dynamic programming algorithm are the most accurate but the accuracy varies at different sampling intervals using different weight functions. Our submission selects a hybrid that improves upon the best two weight functions such that its accuracy is better than both and the performance is robust against varying sampling rates. In addition, we employ many optimization techniques to reduce the overall latency, as the scoring heavily emphasizes on speed. Using the training dataset with manually corrected ground truth, our Java-based program matched all 14,436 samples in 5 seconds on a dual-core 3.3 GHz iCore 3 processor, and achieved 98.9% accuracy."}
{"_id": "fe124d1c5ff4c691e0f16b24713e81b0fbc840ca", "title": "Domain Adaptation with Adversarial Neural Networks and Auto-encoders", "text": "Background. Domain adaptation focuses on the situation where we have data generated from multiple domains, which are assumed to be different, but similar, in a certain sense. In this work we focus on the case where there are two domains, known as the source and the target domain. The source domain is assumed to have a large amount of labeled data while labeled data in the target domain is scarce. The goal of domain adaptation algorithms is to generalize better on the target domain by exploiting the large amount of labeled data in the related source domain."}
{"_id": "2fc5999b353f41be8c0d5dea2ecb7c26507aa7c0", "title": "A particle filter for monocular vision-aided odometry", "text": "We propose a particle filter-based algorithm for monocular vision-aided odometry for mobile robot localization. The algorithm fuses information from odometry with observations of naturally occurring static point features in the environment. A key contribution of this work is a novel approach for computing the particle weights, which does not require including the feature positions in the state vector. As a result, the computational and sample complexities of the algorithm remain low even in feature-dense environments. We validate the effectiveness of the approach extensively with both simulations as well as real-world data, and compare its performance against that of the extended Kalman filter (EKF) and FastSLAM. Results from the simulation tests show that the particle filter approach is better than these competing approaches in terms of the RMS error. Moreover, the experiments demonstrate that the approach is capable of achieving good localization accuracy in complex environments."}
{"_id": "e89d7b727da4617caf1c7c2dc8523d8565079ecf", "title": "Attacking the Washington, D.C. Internet Voting System", "text": "In 2010, Washington, D.C. developed an Internet voting pilot project that was intended to allow overseas absentee voters to cast their ballots using a website. Prior to deploying the system in the general election, the District held a unique public trial: a mock election during which anyone was invited to test the system or attempt to compromise its security. This paper describes our experience participating in this trial. Within 48 hours of the system going live, we had gained nearcomplete control of the election server. We successfully changed every vote and revealed almost every secret ballot. Election officials did not detect our intrusion for nearly two business days\u2014and might have remained unaware for far longer had we not deliberately left a prominent clue. This case study\u2014the first (to our knowledge) to analyze the security of a government Internet voting system from the perspective of an attacker in a realistic pre-election deployment\u2014attempts to illuminate the practical challenges of securing online voting as practiced today by a growing number of jurisdictions."}
{"_id": "9cccd211c9208f790d71fa5b3499d8f827744aa0", "title": "Applications of Educational Data Mining: A survey", "text": "Various educational oriented problems are resolved through Educational Data Mining, which is the most prevalent applications of data mining. One of the crucial goals of this paper is to study the most recent works carried out on EDM and analyze their merits and drawbacks. This paper also highlights the cumulative results of the various data mining practices and techniques applied in the surveyed articles, and thereby suggesting the researchers on the future directions on EDM. In addition, an experiment was also conducted to evaluate, certain classification and clustering algorithms to observe the most reliable algorithms for future researches."}
{"_id": "6831bb247c853b433d7b2b9d47780dc8d84e4762", "title": "Analysis of Deep Convolutional Neural Network Architectures", "text": "In computer vision many tasks are solved using machine learning. In the past few years, state of the art results in computer vision have been achieved using deep learning. Deeper machine learning architectures are better capable in handling complex recognition tasks, compared to previous more shallow models. Many architectures for computer vision make use of convolutional neural networks which were modeled after the visual cortex. Currently deep convolutional neural networks are the state of the art in computer vision. Through a literature survey an overview is given of the components in deep convolutional neural networks. The role and design decisions for each of the components is presented, and the difficulties involved in training deep neural networks are given. To improve deep learning architectures an analysis is given of the activation values in four different architectures using various activation functions. Current state of the art classifiers use dropout, max-pooling as well as the maxout activation function. New components may further improve the architecture by providing a better solution for the diminishing gradient problem."}
{"_id": "07186dd168d2a8add853cc9fdedf1376a77a7ac8", "title": "Classification and tracking of dynamic objects with multiple sensors for autonomous driving in urban environments", "text": "Future driver assistance systems are likely to use a multisensor approach with heterogeneous sensors for tracking dynamic objects around the vehicle. The quality and type of data available for a data fusion algorithm depends heavily on the sensors detecting an object. This article presents a general framework which allows the use sensor specific advantages while abstracting the specific details of a sensor. Different tracking models are used depending on the current set of sensors detecting the object. A sensor independent algorithm for classifying objects regarding their current and past movement state is presented. The described architecture and algorithms have been successfully implemented in Tartan racingpsilas autonomous vehicle for the urban grand challenge. Results are presented and discussed."}
{"_id": "25338f00a219e2f4b152ff8b62e8ca8aa6d4709f", "title": "NAViGaTOR: Network Analysis, Visualization and Graphing Toronto", "text": "SUMMARY\nNAViGaTOR is a powerful graphing application for the 2D and 3D visualization of biological networks. NAViGaTOR includes a rich suite of visual mark-up tools for manual and automated annotation, fast and scalable layout algorithms and OpenGL hardware acceleration to facilitate the visualization of large graphs. Publication-quality images can be rendered through SVG graphics export. NAViGaTOR supports community-developed data formats (PSI-XML, BioPax and GML), is platform-independent and is extensible through a plug-in architecture.\n\n\nAVAILABILITY\nNAViGaTOR is freely available to the research community from http://ophid.utoronto.ca/navigator/. Installers and documentation are provided for 32- and 64-bit Windows, Mac, Linux and Unix.\n\n\nCONTACT\njuris@ai.utoronto.ca\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."}
{"_id": "9fc2c61ea601d126d386121f25e7af679a15c2fc", "title": "Speed or accuracy? a study in evaluation of simultaneous speech translation", "text": "Simultaneous speech translation is a technology that attempts to reduce the delay inherent in speech translation by beginning translation before the end of explicit sentence boundaries. Despite best efforts, there is still often a trade-off between speed and accuracy in these systems, with systems with less delay also achieving lower accuracy. However, somewhat surprisingly, there is no previous work examining the relative importance of speed and accuracy, and thus given two systems with various speeds and accuracies, it is difficult to say with certainty which is better. In this paper, we make the first steps towards evaluation of simultaneous speech translation systems in consideration of both speed and accuracy. We collect user evaluations of speech translation results with different levels of accuracy and delay, and using this data to learn the parameters of an evaluation measure that can judge the trade-off between these two factors. Based on these results, we find that considering both accuracy and delay in the evaluation of speech translation results helps improve correlations with human judgements, and that users placed higher relative importance on reducing delay when results were presented through text, rather than speech."}
{"_id": "60ac4554e9f89f0ca319d3bbcc4828bf4dad5b05", "title": "Cyber-Physical Systems: A New Frontier", "text": "The report of the President's Council of Advisors on Science and Technology (PCAST) has placed CPS on the top of the priority list for federal research investment [6]. This article first reviews some of the challenges and promises of CPS, followed by an articulation of some specific challenges and promises that are more closely related to the sensor networks, ubiquitous and trustworthy computing conference."}
{"_id": "110ee8ab8f652c16fcc3bb767687e1c695c2500b", "title": "GP-GAN: Towards Realistic High-Resolution Image Blending", "text": "Recent advances in generative adversarial networks (GANs) have shown promising potentials in conditional image generation. However, how to generate high-resolution images remains an open problem. In this paper, we aim at generating high-resolution well-blended images given composited copy-and-paste ones, i.e. realistic highresolution image blending. To achieve this goal, we propose Gaussian-Poisson GAN (GP-GAN), a framework that combines the strengths of classical gradient-based approaches and GANs, which is the first work that explores the capability of GANs in high-resolution image blending task to the best of our knowledge. Particularly, we propose GaussianPoisson Equation to formulate the high-resolution image blending problem, which is a joint optimisation constrained by the gradient and colour information. Gradient filters can obtain gradient information. For generating the colour information, we propose Blending GAN to learn the mapping between the composited image and the well-blended one. Compared to the alternative methods, our approach can deliver high-resolution, realistic images with fewer bleedings and unpleasant artefacts. Experiments confirm that our approach achieves the state-of-the-art performance on Transient Attributes dataset. A user study on Amazon Mechanical Turk finds that majority of workers are in favour of the proposed approach."}
{"_id": "5005a39bf313b75601b07295387dc7efc4924726", "title": "Dependent Gaussian Processes", "text": "Gaussian processes are usually parameterised in terms of their covariance functions. However, this makes it difficult to deal with multiple outputs, because ensuring that the covariance matrix is positive definite is problematic. An alternative formulation is to treat Gaussian processes as white noise sources convolved with smoothing kernels, and to parameterise the kernel instead. Using this, we extend Gaussian processes to handle multiple, coupled outputs."}
{"_id": "8db60e5196657c4f6188ffee3d2a48d188014f6d", "title": "Realtime multi-aircraft tracking in aerial scene with deep orientation network", "text": "Tracking the aircrafts from an aerial view is very challenging due to large appearance, perspective angle, and orientation variations. The deep-patch orientation network (DON) method was proposed for the multi-ground target tracking system, which is general and can learn the target\u2019s orientation based on the structure information in the training samples. Such approach leverages the performance of tracking-by-detection framework into two aspects: one is to improve the detectability of the targets by using the patch-based model for the target localization in the detection component and the other is to enhance motion characteristics of the individual tracks by incorporating the orientation information as an association metric in the tracking component. Based on the DON structure, you only look once (YOLO) and faster region convolutional neural network (FrRCNN) detection frameworks with simple online and realtime tracking (SORT) tracker are utilized as a case study. The Comparative experiments demonstrate that the overall detection accuracy is improved at the same processing speed of both detection frameworks. Furthermore, the number of Identity switches (IDsw) has reduced about 67% without affecting the computational complexity of the tracking component. Consequently, the presented method is efficient for realtime ground target-tracking scenarios."}
{"_id": "78fef6c5072e0d2ecfa710c2bb5bbc2cf54acbca", "title": "Learning user tastes: a first step to generating healthy meal plans", "text": "Poor nutrition is fast becoming one of the major causes of ill-health and death in the western world. It is caused by a variety of factors including lack of nutritional understanding leading to poor choices being made when selecting which dishes to cook and eat. We wish to build systems which can recommend nutritious meal plans to users, however a crucial pre-requisite is to be able to recommend dishes that people will like. In this work we investigate key factors contributing to how recipes are rated by analysing the results of a longterm study (n=123 users) in order to understand how best to approach the recommendation problem. In doing so we identify a number of important contextual factors which can influence the choice of rating and suggest how these might be exploited to build more accurate recipe recommender systems. We see this as a crucial first step in a healthy meal recommender. We conclude by summarising our thoughts on how we will combine recommended recipes into meal plans based on nutritional guidelines."}
{"_id": "393158606639f86f4d3ca2e89d15bd38b59577e1", "title": "Transliteration for Resource-Scarce Languages", "text": "Today, parallel corpus-based systems dominate the transliteration landscape. But the resource-scarce languages do not enjoy the luxury of large parallel transliteration corpus. For these languages, rule-based transliteration is the only viable option. In this article, we show that by properly harnessing the monolingual resources in conjunction with manually created rule base, one can achieve reasonable transliteration performance. We achieve this performance by exploiting the power of Character Sequence Modeling (CSM), which requires only monolingual resources. We present the results of our rule-based system for Hindi to English, English to Hindi, and Persian to English transliteration tasks. We also perform extrinsic evaluation of transliteration systems in the context of Cross Lingual Information Retrieval. Another important contribution of our work is to explain the widely varying accuracy numbers reported in transliteration literature, in terms of the entropy of the language pairs and the datasets involved."}
{"_id": "197a7fc2f8d57d93727b348851b59b34ce990afd", "title": "SRILM - an extensible language modeling toolkit", "text": "SRILM is a collection of C++ libraries, executable programs, and helper scripts designed to allow both production of and experimentation with statistical language models for speech recognition and other applications. SRILM is freely available for noncommercial purposes. The toolkit supports creation and evaluation of a variety of language model types based on N-gram statistics, as well as several related tasks, such as statistical tagging and manipulation of N-best lists and word lattices. This paper summarizes the functionality of the toolkit and discusses its design and implementation, highlighting ease of rapid prototyping, reusability, and combinability of tools."}
{"_id": "9c1775c992847f31a9bda931a1ecb8cac5365367", "title": "Efficient Realization of Coordinate Structures in Combinatory Categorial Grammar", "text": "We describe a chart realization algorithm for Combinatory Categorial Grammar (CCG), and show how it can be used to efficiently realize a wide range of coordination phenomena, including argument cluster coordination and gapping. The algorithm incorporates three novel methods for improving the efficiency of chart realization: (i) using rules to chunk the input logical form into sub-problems to be solved independently prior to further combination; (ii) pruning edges from the chart based on the n-gram score of the edge\u2019s string, in comparison to other edges with equivalent categories; and (iii) formulating the search as a best-first anytime algorithm, using n-gram scores to sort the edges on the agenda. The algorithm has been implemented as an extension to the OpenCCG open source CCG parser, and initial performance tests indicate that the realizer is fast enough for practical use in natural language dialogue systems."}
{"_id": "12f661171799cbd899e1ff4ae0a7e2170c3d547b", "title": "Two decades of statistical language modeling: where do we go from here?", "text": "Statistical language models estimate the distribution of various natural language phenomena for the purpose of speech recognition and other language technologies. Since the first significant model was proposed in 1980, many attempts have been made to improve the state of the art. We review them, point to a few promising directions, and argue for a Bayesian approach to integration of linguistic theories with data."}
{"_id": "395f4b41578c3ff5139ddcf9e90eb60801b50394", "title": "Statistical language modeling using the CMU-cambridge toolkit", "text": "The CMU Statistical Language Modeling toolkit was re leased in in order to facilitate the construction and testing of bigram and trigram language models It is currently in use in over academic government and industrial laboratories in over countries This paper presents a new version of the toolkit We outline the con ventional language modeling technology as implemented in the toolkit and describe the extra e ciency and func tionality that the new toolkit provides as compared to previous software for this task Finally we give an exam ple of the use of the toolkit in constructing and testing a simple language model"}
{"_id": "7dd4a0fb7368a510840cffd7dcad618962d3b8e4", "title": "An Architecture for Action, Emotion, and Social Behavior", "text": "The Oz project at Carnegie Mellon is studying the construction of artistically e ective simulated worlds. Such worlds typically include several agents, which must exhibit broad behavior. To meet this need, we are developing an agent architecture, called Tok, that presently supports reactivity, goals, emotions, and social behavior. Here we brie y introduce the requirements of our application, summarize the Tok architecture, and describe a particular agent we have constructed that exhibits the desired social behavior. 1 THE OZ PROJECT AND BROAD AGENTS 1 1 The Oz Project and Broad Agents The Oz project at Carnegie Mellon University is developing technology for artistically interesting simulated worlds [3]. We want to let human users participate in dramatically e ective worlds that include moderately competent, emotional agents. We work with artists in the CMU Drama and English Departments, to help focus our technology on genuine artistic needs. An Oz world has four primary components. There is a simulated physical environment, a set of automated agents which help populate the world, a user interface to allow one or more people to participate in the world [14], and a planner concerned with the long term structure of the user's experience [2]. One of the keys to an artistically engaging experience is for the user to be able to \\suspend disbelief\". That is, the user must be able to imagine that the world portrayed is real, without being jarred out of this belief by the world's behavior. The automated agents, in particular, must not be blatantly unreal. Thus, part of our e ort is aimed at producing agents with a broad set of capabilities, including goal-directed reactive behavior, emotional state and behavior, social knowledge and behavior, and some natural language abilities. For our purpose, each of these capacities can be as limited as is necessary to allow us to build broad, integrated agents [4]. Oz worlds can be simpler than the real world, but they must retain su cient complexity to serve as interesting artistic vehicles. The complexity level seems to be somewhat higher, but not exceptionally higher, than typical AI micro-worlds. Despite these simpli cations, we nd that our agents must deal with imprecise and erroneous perceptions, with the need to respond rapidly, and with a general inability to fully model the agent-rich world they inhabit. Thus, we suspect that some of our experience with broad agents in Oz may transfer to the domain of social, real-world robots. Building broad agents is a little studied area. Much work has been done on building reactive systems [1, 6, 7, 10, 11, 22], natural language systems (which we do not discuss here), and even emotion systems [9, 18, 20]. There has been growing interest in integrating action and learning (see [16]) and some very interesting work on broader integration [23, 19]. However, we are aware of no other e orts to integrate the particularly wide range of capabilities needed in the Oz domain. Here we present our e orts, focusing on the structure of a particular agent designed to exhibit goaldirected reactive behavior, emotion, and some social behavior. Further discussion of the integration issues can be found in [5]."}
{"_id": "6fdbe12fc54a583dfe52315f5694e352185d8f03", "title": "A source code linearization technique for detecting plagiarized programs", "text": "It is very important to detect plagiarized programs in the field of computer science education. Therefore, many tools and algorithms have been developed for this purpose. Generally, these tools are operated in two phases. In phase 1, a program plagiarism detecting tool generates an intermediate representation from a given program set. The intermediate representation should reflect the structural characterization of the program. Most tools use the parse tree or token sequence by intermediate representation. In phase 2, the program looks for plagiarized material and evaluates the similarity of two programs. It is helpful to announce the plagiarized metarials between two programs to the instructor. In this paper, we present the static tracing method in order to improve program plagiarism detection accuracy. The static tracing method statically executes a program at the syntax-level and then extracts predefined keywords according to the order of the executed functions. The result of experiment proves this method can detect plagiarism more effectively than the previously released plagiarism detecting method."}
{"_id": "0b8f4edf1a7b4d19d47d419f41cde432b9708ab7", "title": "Silicon-Filled Rectangular Waveguides and Frequency Scanning Antennas for mm-Wave Integrated Systems", "text": "We present a technology for the manufacturing of silicon-filled integrated waveguides enabling the realization of low-loss high-performance millimeter-wave passive components and high gain array antennas, thus facilitating the realization of highly integrated millimeter-wave systems. The proposed technology employs deep reactive-ion-etching (DRIE) techniques with aluminum metallization steps to integrate rectangular waveguides with high geometrical accuracy and continuous metallic side walls. Measurement results of integrated rectangular waveguides are reported exhibiting losses of 0.15 dB/ \u03bbg at 105 GHz. Moreover, ultra-wideband coplanar to waveguide transitions with 0.6 dB insertion loss at 105 GHz and return loss better than 15 dB from 80 to 110 GHz are described and characterized. The design, integration and measured performance of a frequency scanning slotted-waveguide array antenna is reported, achieving a measured beam steering capability of 82 \u00b0 within a band of 23 GHz and a half-power beam-width (HPBW) of 8.5 \u00b0 at 96 GHz. Finally, to showcase the capability of this technology to facilitate low-cost mm-wave system level integration, a frequency modulated continuous wave (FMCW) transmit-receive IC for imaging radar applications is flip-chip mounted directly on the integrated array and experimentally characterized."}
{"_id": "5696eb62fd6f1c039021e66b9f03b50066797a8e", "title": "Development of a continuum robot using pneumatic artificial muscles", "text": "This paper presents the design concept of an intrinsic spatial continuum robot, of which arm segments are actuated by pneumatic artificial muscles. Since pneumatic artificial muscles supported the arm segments of the robot instead of a passive spring, the developed continuum robot was not only capable of following a given trajectory, but also varying its stiffness according to given external environment. Experiment results revealed that the proposed continuum robot showed versatile movements by controlling the pressure of supply air to pneumatic artificial muscles, and the well-coordinated pattern matching scheme based on low-cost webcams gave good performance in estimating the position and orientation of the end-effector of the spatial continuum robot."}
{"_id": "c58ae94ed0c4f59e00fde734f8b3ec8a554faf09", "title": "Introduction of speech log-spectral priors into dereverberation based on Itakura-Saito distance minimization", "text": "It has recently been shown that a multi-channel linear prediction can effectively achieve blind speech dereverberation based on maximum-likelihood (ML) estimation. This approach can estimate and cancel unknown reverberation processes from only a few seconds of observation. However, one problem with this approach is that speech distortion may increase if we iterate the dereverberation more than once based on Itakura-Saito (IS) distance minimization to further reduce the reverberation. To overcome this problem, we introduce speech log-spectral priors into this approach, and reformulate it based on maximum a posteriori (MAP) estimation. Two types of priors are introduced, a Gaussian mixture model (GMM) of speech log spectra, and a GMM of speech mel-frequency cepstral coefficients. In the formulation, we also propose a new versatile technique to integrate such log-spectral priors with the IS distance minimization in a computationally efficient manner. Preliminary experiments show the effectiveness of the proposed approach."}
{"_id": "261069aa06459562a2de3950395b41ed4aebe9df", "title": "Disentangled Representation Learning for Text Style Transfer", "text": "This paper tackles the problem of disentangling the latent variables of style and content in language models. We propose a simple, yet effective approach, which incorporates auxiliary objectives: a multi-task classification objective, and dual adversarial objectives for label prediction and bag-ofwords prediction, respectively. We show, both qualitatively and quantitatively, that the style and content are indeed disentangled in the latent space, using this approach. This disentangled latent representation learning method is applied to attribute (e.g. style) transfer on non-parallel corpora. We achieve similar content preservation scores compared to previous state-of-the-art approaches, and significantly better style-transfer strength scores. Our code is made publicly available for replicability and extension purposes ."}
{"_id": "4f8781df9fd1d7473a28b86885af30a1cad2a2d0", "title": "Abstractive Multi-document Summarization with Semantic Information Extraction", "text": "This paper proposes a novel approach to generate abstractive summary for multiple documents by extracting semantic information from texts. The concept of Basic Semantic Unit (BSU) is defined to describe the semantics of an event or action. A semantic link network on BSUs is constructed to capture the semantic information of texts. Summary structure is planned with sentences generated based on the semantic link network. Experiments demonstrate that the approach is effective in generating informative, coherent and compact summary."}
{"_id": "ec1df457a2be681227f79de3ce932fccb65ee2bb", "title": "Opportunistic Computation Offloading in Mobile Edge Cloud Computing Environments", "text": "The dynamic mobility and limitations in computational power, battery resources, and memory availability are main bottlenecks in fully harnessing mobile devices as data mining platforms. Therefore, the mobile devices are augmented with cloud resources in mobile edge cloud computing (MECC) environments to seamlessly execute data mining tasks. The MECC infrastructures provide compute, network, and storage services in one-hop wireless distance from mobile devices to minimize the latency in communication as well as provide localized computations to reduce the burden on federated cloud systems. However, when and how to offload the computation is a hard problem. In this paper, we present an opportunistic computation offloading scheme to efficiently execute data mining tasks in MECC environments. The scheme provides the suitable execution mode after analyzing the amount of unprocessed data, privacy configurations, contextual information, and available on-board local resources (memory, CPU, and battery power). We develop a mobile application for online activity recognition and evaluate the proposed scheme using the event data stream of 5 million activities collected from 12 users for 15 days. The experiments show significant improvement in execution time and battery power consumption resulting in 98% data reduction."}
{"_id": "5f0b96c18ac64affb923b28938ac779d7d1e1b81", "title": "Large-scale audio feature extraction and SVM for acoustic scene classification", "text": "This work describes a system for acoustic scene classification using large-scale audio feature extraction. It is our contribution to the Scene Classification track of the IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events (D-CASE). The system classifies 30 second long recordings of 10 different acoustic scenes. From the highly variable recordings, a large number of spectral, cepstral, energy and voicing-related audio features are extracted. Using a sliding window approach, classification is performed on short windows. SVM are used to classify these short segments, and a majority voting scheme is employed to get a decision for longer recordings. On the official development set of the challenge, an accuracy of 73 % is achieved. SVM are compared with a nearest neighbour classifier and an approach called Latent Perceptual Indexing, whereby SVM achieve the best results. A feature analysis using the t-statistic shows that mainly Mel spectra are the most relevant features."}
{"_id": "80f83fa19bd5b7d7e94781a1de91609dfbd62937", "title": "A fractional order fuzzy PID controller for binary distillation column control", "text": "Expert and intelligent control schemes have recently emerged out as a promising solution with robustness which can efficiently deal with the nonlinearities, along with various types of modelling uncertainties, present in different real world systems e.g. binary distillation column. This paper is an attempt to propose an intelligent control system which takes the form of a Fractional Order Fuzzy Proportional \u2013 Integral \u2013 Derivative (FOFPID) controller which is investigated as a solution to deal with the complex dynamic nature of the distillation column. The FOFPID controller is an extension of an existing formula based self tuning fuzzy Proportional Integral controller structure, which varies its gains at run time in accordance with the instantaneous error and rate of change of error. The FOFPID controller is a Takagi-Sugeno (TS) model based fuzzy adaptive controller comprising of non-integer order of integration and differentiation operators used in the controller. It has been observed that inclusion of non-integer order of the integration and differentiation operators made the controller scheme more robust. For the performance evaluation of the proposed scheme, the performance of FOFPID controller is compared with that of its integer order counterpart, a Fuzzy Proportional \u2013 Integral \u2013 Derivative (FPID) controller. The parameters of both the controllers were optimized for minimum integral of absolute error (IAE) using a bio-inspired global optimization algorithm, Genetic Algorithm (GA). Intensive LabVIEW TM simulation studies were performed which included setpoint tracking with and without uncertainties, disturbance rejection, and noise suppression investigations. For testing the parameter uncertainty handling capability of the proposed controller, uncertain and time varying relative volatility and uncertain tray hydraulic constant were applied. Also, for the disturbance rejection studies, intensive simulations were conducted, which included two most common causes of disturbance i.e. variation in feed composition and variation in feed flow rate. All the simulation investigations clearly suggested that FOFPID controller provided superior performance over FPID controller for each case study i.e. setpoint tracking, disturbance rejection, noise suppression and parameter uncertainties."}
{"_id": "8977709773a432a17bf13dfdee420bf41fdb63fd", "title": "A three-component scattering model for polarimetric SAR data", "text": "An approach has been developed that involves the fit of a combination of three simple scattering mechanisms to polarimetric SAR observations. The mechanisms are canopy scatter from a cloud of randomly oriented dipoles, evenor double-bounce scatter from a pair of orthogonal surfaces with different dielectric constants and Bragg scatter from a moderately rough surface. This composite scattering model is used to describe the polarimetric backscatter from naturally occurring scatterers. The model is shown to describe the behavior of polarimetric backscatter from tropical rain forests quite well by applying it to data from NASA/Jet Propulsion Laboratory\u2019s (JPL\u2019s) airborne polarimetric synthetic aperture radar (AIRSAR) system. The model fit allows clear discrimination between flooded and nonflooded forest and between forested and deforested areas, for example. The model is also shown to be usable as a predictive tool to estimate the effects of forest inundation and disturbance on the fully polarimetric radar signature. An advantage of this model fit approach is that the scattering contributions from the three basic scattering mechanisms can be estimated for clusters of pixels in polarimetric SAR images. Furthermore, it is shown that the contributions of the three scattering mechanisms to the HH, HV, and VV backscatter can be calculated from the model fit. Finally, this model fit approach is justified as a simplification of more complicated scattering models, which require many inputs to solve the forward scattering problem."}
{"_id": "e22ab00eaae9e17f2c4c69883b8c111dba5bc646", "title": "Stacked Patch Antenna With Dual-Polarization and Low Mutual Coupling for Massive MIMO", "text": "Massive multiple input and multiple output (MIMO) has attracted significant interests in both academia and industry. It has been considered as one of most promising technologies for 5G wireless systems. The large-scale antenna array for base stations naturally becomes the key to deploy the Massive MIMO technologies. In this communication, we present a dual-polarized antenna array with 144 ports for Massive MIMO operating at 3.7 GHz. The proposed array consists of 18 low profile subarrays. Each subarray consists of four single units. Each single antenna unit consists of one vertically polarized port and one horizontally polarized port connected to power splitters, which serve as a feeding network. A stacked patch design is used to construct the single unit with the feeding network, which gives higher gain and lower mutual coupling within the size of a conversional dual-port patch antenna. Simulation results of the proposed single antenna unit, sub-array, and Massive MIMO array are verified by measurement."}
{"_id": "9f8eb04dbafdfda997ac5e06cd6c521f82bf4e4c", "title": "UIMA: an architectural approach to unstructured information processing in the corporate research environment", "text": "IBM Research has over 200 people working on Unstructured Information Management (UIM) technologies with a strong focus on Natural Language Processing (NLP). These researchers are engaged in activities ranging from natural language dialog, information retrieval, topic-tracking, named-entity detection, document classification and machine translation to bioinformatics and open-domain question answering. An analysis of these activities strongly suggested that improving the organization\u2019s ability to quickly discover each other's results and rapidly combine different technologies and approaches would accelerate scientific advance. Furthermore, the ability to reuse and combine results through a common architecture and a robust software framework would accelerate the transfer of research results in NLP into IBM\u2019s product platforms. Market analyses indicating a growing need to process unstructured information, specifically multilingual, natural language text, coupled with IBM Research\u2019s investment in NLP, led to the development of middleware architecture for processing unstructured information dubbed UIMA. At the heart of UIMA are powerful search capabilities and a data-driven framework for the development, composition and distributed deployment of analysis engines. In this paper we give a general introduction to UIMA focusing on the design points of its analysis engine architecture and we discuss how UIMA is helping to accelerate research and technology transfer. 1 Motivation and Objectives We characterize a UIM application as a software system that analyzes large volumes of unstructured information while consulting structured information sources to discover, organize and present knowledge relevant to the application\u2019s end-user. We define structured information as information whose intended meaning is unambiguous and explicitly represented in the structure or format of the data. The canonical example of structured information is a relational database table. We define unstructured information as information whose intended meaning is only loosely implied by its form. The canonical example is a natural language document; other examples include voice, audio, image and video. To appear in a special issue of the Journal of Natural Language Engineering 2004 2 Ferrucci and Lally Do Not Distirubte without written permission from authors Market indicators suggest a rapid increase in the use of text and voice analytics to a growing class of commercially viable applications (Roush 2003). National security interests are driving applications to involve an even broader set of UIM technologies including image and video analytics. Emerging and commercially important UIM application areas include life sciences, e-commerce, technical support, advanced search, and national and business intelligence. In six major labs spread out over the globe, IBM Research has over 200 people working on Unstructured Information Management (UIM) technologies with a primary focus on Natural Language Processing (NLP). These researchers are engaged in activities ranging from natural language dialog, information retrieval, topic-tracking, named-entity detection, document classification and machine translation to bioinformatics and open-domain question answering. 
Each group is developing different technical and engineering approaches to process unstructured information in pursuit of their specific research agenda in the UIM space. While IBM Research\u2019s independent UIM technologies fare well in scientific venues, accelerated development of integrated, robust applications is becoming increasingly important to IBM\u2019s commercial interests. Furthermore, the rapid integration of different algorithms and techniques is also a means to advance scientific results. For example, experiments in statistical named-entity detection or machine translation may benefit from using a deep parser as a feature generator. Facilitating the rapid integration of an independently produced deep parser with statistical algorithm implementations can accelerate the research cycle time as well as the time to product. Some of the challenges that face a large research organization in efficiently leveraging and integrating its (and third-party) UIM assets include: \u2022 Organizational Structure. IBM Research is largely organized in three to five person teams geographically dispersed among different labs throughout the world. This structure makes it more difficult to jointly develop and reuse technologies. \u2022 Skills Alignment. There are inefficiencies in having PhDs trained and specialized in, for example, computational linguistics or statistical machine learning, develop and apply the requisite systems expertise to deliver their work into product architectures and/or make it easily usable by other researchers. \u2022 Development Inefficiencies. Research results are delivered in a myriad of interfaces and technologies. Reuse and integration often requires too much effort dedicated to technical adaptation. Too often work is replicated. \u2022 Speed to product. Software Products introduce their own architectures, interfaces and implementation technologies. Technology transfer from the lab to the products often becomes a significant engineering or rewrite effort for researchers. A common framework and engineering discipline could help address the challenges listed above. The Unstructured Information Management Architecture (UIMA) project was initiated within IBM Research based on the premise that a common software architecture for developing, composing and delivering UIM technologies, if adopted by the organization, would facilitate technology reuse and integration, enable quicker scientific experimentation and speed the path to product. IBM\u2019s Unstructured Information Management Architecture (UIMA) high-level objectives are twofold: 1) Accelerate scientific advances by enabling the rapid combination of UIM technologies (e.g., natural language processing, audio and video analysis, information retrieval, etc.). 2) Accelerate transfer of UIM technologies to product by providing an architecture and associated framework implementations that promote reuse and support flexible deployment options. 2 The Unstructured Information Management Architecture (UIMA) UIMA is a software architecture for developing UIM applications. The UIMA high-level architecture, illustrated in Figure 1, defines the roles, interfaces and communications of large-grained components essential for UIM applications. 
These include components capable of analyzing unstructured artifacts, integrating and accessing structured sources and storing, indexing and searching for artifacts based on discovered semantic content resulting from analyses. A primary design point for UIMA is that it should admit the implementation of middleware frameworks that provide a component-based infrastructure for developing UIM applications suitable for vastly different deployment environments. These should range from lightweight and embeddable implementations to highly scalable implementations that are meant to exploit clusters of machines and provide high throughput, high availability service-based offerings. While the primary focus of current UIMA framework implementations is squarely on natural language [Figure 1: UIMA high-level architecture, showing the UIM application; collection-level analysis; the document, collection and metadata store; knowledge source access (knowledge source adapters, knowledge and data bases); the collection processing manager (CPM); analysis engines and the analysis engine directory; crawlers and acquisition; and the semantic search engine with its indices.]"}
{"_id": "18cdf57195143ed3a3681440ddfb6f1fa839475b", "title": "On composition of a federated web search result page: using online users to provide pairwise preference for heterogeneous verticals", "text": "Modern web search engines are federated --- a user query is sent to the numerous specialized search engines called verticals like web (text documents), News, Image, Video, etc. and the results returned by these engines are then aggregated and composed into a search result page (SERP) and presented to the user. For a specific query, multiple verticals could be relevant, which makes the placement of these vertical results within blocks of textual web results challenging: how do we represent, assess, and compare the relevance of these heterogeneous entities?\n In this paper we present a machine-learning framework for SERP composition in the presence of multiple relevant verticals. First, instead of using the traditional label generation method of human judgment guidelines and trained judges, we use a randomized online auditioning system that allows us to evaluate triples of the form query, web block, vertical>. We use a pairwise click preference to evaluate whether the web block or the vertical block had a better users' engagement. Next, we use a hinged feature vector that contains features from the web block to create a common reference frame and augment it with features representing the specific vertical judged by the user. A gradient boosted decision tree is then learned from the training data. For the final composition of the SERP, we place a vertical result at a slot if the score is higher than a computed threshold. The thresholds are algorithmically determined to guarantee specific coverage for verticals at each slot.\n We use correlation of clicks as our offline metric and show that click-preference target has a better correlation than human judgments based models. Furthermore, on online tests for News and Image verticals we show higher user engagement for both head and tail queries."}
{"_id": "271d0fd591c44a018b62e9ba091ee0bd6697af06", "title": "Soft start-up for high frequency LLC resonant converter with optimal trajectory control", "text": "This paper investigates the soft start-up for high frequency LLC resonant converter with optimal trajectory control. Two methods are proposed to realize soft start-up for high frequency LLC converter by commercial low-cost microcontrollers (MCU). Both methods can achieve soft start-up with minimum stress and optimal energy delivery. One method is mixed-signal implementation by sensing resonant tank to minimize the digital delay. Another method is digital implementation with look-up table. Experimental results are demonstrated on 500kHz 1kW 400V/12V LLC converter."}
{"_id": "129371e185855a115d23a36cf78d5fcfb4fdb9ed", "title": "Grammar Variational Autoencoder", "text": "s m i l e s ! c h a i n atom ! b r a c k e t _ a t o m | a l i p h a t i c _ o r g a n i c | a r o m a t i c _ o r g a n i c a l i p h a t i c _ o r g a n i c ! \u2019B\u2019 | \u2019C\u2019 | \u2019N\u2019 | \u2019O\u2019 | \u2019S \u2019 | \u2019P \u2019 | \u2019F \u2019 | \u2019 I \u2019 | \u2019 Cl \u2019 | \u2019 Br \u2019 a r o m a t i c _ o r g a n i c ! \u2019 c \u2019 | \u2019 n \u2019 | \u2019 o \u2019 | \u2019 s \u2019 b r a c k e t _ a t o m ! \u2019 [ \u2019 BAI \u2019 ] \u2019 BAI ! i s o t o p e symbol BAC | symbol BAC | i s o t o p e symbol | symbol BAC ! c h i r a l BAH | BAH | c h i r a l BAH ! hc ou n t BACH | BACH | h co u n t BACH ! c h a r g e c l a s s | c h a r g e | c l a s s symbol ! a l i p h a t i c _ o r g a n i c | a r o m a t i c _ o r g a n i c i s o t o p e ! DIGIT | DIGIT DIGIT | DIGIT DIGIT DIGIT DIGIT ! \u20191 \u2019 | \u20192 \u2019 | \u20193 \u2019 | \u20194 \u2019 | \u20195 \u2019 | \u20196 \u2019 | \u20197 \u2019 | \u20198 \u2019 c h i r a l ! \u2019@\u2019 | \u2019@@\u2019 hc ou n t ! \u2019H\u2019 | \u2019H\u2019 DIGIT c h a r g e ! \u2019 \u2019 | \u2019 \u2019 DIGIT | \u2019 \u2019 DIGIT DIGIT | \u2019+ \u2019 | \u2019+ \u2019 DIGIT | \u2019+ \u2019 DIGIT DIGIT bond ! \u2019 \u2019 | \u2019= \u2019 | \u2019# \u2019 | \u2019 / \u2019 | \u2019 \\ \u2019 r i n g b o n d ! DIGIT | bond DIGIT branched_a tom ! atom | atom RB | atom BB | atom RB BB RB ! RB r i n g b o n d | r i n g b o n d BB ! BB b ra n ch | b r a n ch b ra nc h ! \u2019 ( \u2019 c h a i n \u2019 ) \u2019 | \u2019 ( \u2019 bond c h a i n \u2019 ) \u2019 c h a i n ! branched_a tom | c h a i n branched_a tom | c h a i n bond branched_a tom"}
{"_id": "1324708ddf12d7940bc2377131e54d97111a1197", "title": "Fashion-focused creative commons social dataset", "text": "In this work, we present a fashion-focused Creative Commons dataset, which is designed to contain a mix of general images as well as a large component of images that are focused on fashion (i.e., relevant to particular clothing items or fashion accessories). The dataset contains 4810 images and related metadata. Furthermore, a ground truth on image's tags is presented. Ground truth generation for large-scale datasets is a necessary but expensive task. Traditional expert based approaches have become an expensive and non-scalable solution. For this reason, we turn to crowdsourcing techniques in order to collect ground truth labels; in particular we make use of the commercial crowdsourcing platform, Amazon Mechanical Turk (AMT). Two different groups of annotators (i.e., trusted annotators known to the authors and crowdsourcing workers on AMT) participated in the ground truth creation. Annotation agreement between the two groups is analyzed. Applications of the dataset in different contexts are discussed. This dataset contributes to research areas such as crowdsourcing for multimedia, multimedia content analysis, and design of systems that can elicit fashion preferences from users."}
{"_id": "ede1e7c2dba44f2094307f17322babce38031577", "title": "Memory versus non-linearity in reservoirs", "text": "Reservoir Computing (RC) is increasingly being used as a conceptually simple yet powerful method for using the temporal processing of recurrent neural networks (RNN). However, because fundamental insight in the exact functionality of the reservoir is as yet still lacking, in practice there is still a lot of manual parameter tweaking or brute-force searching involved in optimizing these systems. In this contribution we aim to enhance the insights into reservoir operation, by experimentally studying the interplay of the two crucial reservoir properties, memory and non-linear mapping. For this, we introduce a novel metric which measures the deviation of the reservoir from a linear regime and use it to define different regions of dynamical behaviour. Next, we study the relationship of two important reservoir parameters, input scaling and spectral radius, on two properties of an artificial task, namely memory and non-linearity."}
{"_id": "3d66a00d2fedbdbd6c709cdd2ad764c7963614a4", "title": "Improving feature selection techniques for machine learning", "text": "As a commonly used technique in data preprocessing for machine learning, feature selection identifies important features and removes irrelevant, redundant or noise features to reduce the dimensionality of feature space. It improves efficiency, accuracy and comprehensibility of the models built by learning algorithms. Feature selection techniques have been widely employed in a variety of applications, such as genomic analysis, information retrieval, and text categorization. Researchers have introduced many feature selection algorithms with different selection criteria. However, it has been discovered that no single criterion is best for all applications. We proposed a hybrid feature selection framework called based on genetic algorithms (GAs) that employs a target learning algorithm to evaluate features, a wrapper method. We call it hybrid genetic feature selection (HGFS) framework. The advantages of this approach include the ability to accommodate multiple feature selection criteria and find small subsets of features that perform well for the target algorithm. The experiments on genomic data demonstrate that ours is a robust and effective approach that can find subsets of features with higher classification accuracy and/or smaller size compared to each individual feature selection algorithm. A common characteristic of text categorization tasks is multi-label classification with a great number of features, which makes wrapper methods time-consuming and impractical. We proposed a simple filter (non-wrapper) approach called Relation Strength and Frequency Variance (RSFV) measure. The basic idea is that informative features are those that are highly correlated with the class and distribute most differently among all classes. The approach is compared with two well-known feature selection methods in the experiments on two standard text corpora. The experiments show that RSFV generate equal or better performance than the others in many cases. INDEX WORDS: Feature selection, Gene selection, Term selection, Dimension Reduction, Genetic algorithm, Text categorization, Text classification IMPROVING FEATURE SELECTION TECHNIQUES FOR"}
{"_id": "ccc636a980fa307262aef569667a5bcf568e32af", "title": "Calculating the maximum execution time of real-time programs", "text": "In real-time systems, the timing behavior is an important property of each task. It has to be guaranteed that the execution of a task does not take longer than the specified amount of time. Thus, a knowledge about the maximum execution time of programs is of utmost importance. This paper discusses the problems for the calculation of the maximum execution time (MAXT... MAximum eXecution Time). It shows the preconditions which have to be met before the MAXT of a task can be calculated. Rules for the MAXT calculation are described. Triggered by the observation that in most cases the calculated MAXT far exceeds the actual execution time, new language constructs are introduced. These constructs allow programmers to put into their programs more information about the behavior of the algorithms implemented and help to improve the self checking property of programs. As a consequence, the quality of MAXT calculations is improved significantly. In a realistic example, an improvement fator of 11 has been achieved."}
{"_id": "3ec05afd1eb4bb4d5ec17a9e0b3d09f5cbc30304", "title": "Comparative Study of Chronic Kidney Disease Prediction using KNN and SVM", "text": "Chronic kidney disease (CKD), also known as chronic renal disease. Chronic kidney disease involves conditions that damage your kidneys and decrease their ability to keep you healthy. You may develop complications like high blood pressure, anemia (low blood count), weak bones, poor nutritional health and nerve damage. . Early detection and treatment can often keep chronic kidney disease from getting worse. Data Mining is the term used for knowledge discovery from large databases. The task of data mining is to make use of historical data, to discover regular patterns and improve future decisions, follows from the convergence of several recent trends: the lessening cost of large data storage devices and the everincreasing ease of collecting data over networks; the expansion of robust and efficient machine learning algorithms to process this data; and the lessening cost of computational power, enabling use of computationally intensive methods for data analysis. Machine learning, has already created practical applications in such areas as analyzing medical science outcomes, detecting fraud, detecting fake users etc. Various data mining classification approaches and machine learning algorithms are applied for prediction of chronic diseases. The objective of this research work is to introduce a new decision support system to predict chronic kidney disease. The aim of this work is to compare the performance of Support vector machine (SVM) and K-Nearest Neighbour (KNN) classifier on the basis of its accuracy, precision and execution time for CKD prediction. From the experimental results it is observed that the performance of KNN classifier is better than SVM. Keywords\u2014Data Mining, Machine learning, Chronic kidney disease, Classification, K-Nearest Neighbour, Support vector machine."}
{"_id": "7f1aefe7eeb759bf05c27438c1687380f7b6b06f", "title": "Prader-Willi syndrome: a review of clinical, genetic, and endocrine findings", "text": "INTRODUCTION\nPrader-Willi syndrome (PWS) is a multisystemic complex genetic disorder caused by lack of expression of genes on the paternally inherited chromosome 15q11.2-q13 region. There are three main genetic subtypes in PWS: paternal 15q11-q13 deletion (65-75 % of cases), maternal uniparental disomy 15 (20-30 % of cases), and imprinting defect (1-3 %). DNA methylation analysis is the only technique that will diagnose PWS in all three molecular genetic classes and differentiate PWS from Angelman syndrome. Clinical manifestations change with age with hypotonia and a poor suck resulting in failure to thrive during infancy. As the individual ages, other features such as short stature, food seeking with excessive weight gain, developmental delay, cognitive disability and behavioral problems become evident. The phenotype is likely due to hypothalamic dysfunction, which is responsible for hyperphagia, temperature instability, high pain threshold, hypersomnia and multiple endocrine abnormalities including growth hormone and thyroid-stimulating hormone deficiencies, hypogonadism and central adrenal insufficiency. Obesity and its complications are the major causes of morbidity and mortality in PWS.\n\n\nMETHODS\nAn extensive review of the literature was performed and interpreted within the context of clinical practice and frequently asked questions from referring physicians and families to include the current status of the cause and diagnosis of the clinical, genetics and endocrine findings in PWS.\n\n\nCONCLUSIONS\nUpdated information regarding the early diagnosis and management of individuals with Prader-Willi syndrome is important for all physicians and will be helpful in anticipating and managing or modifying complications associated with this rare obesity-related disorder."}
{"_id": "dd97a44fdc5923d1d3fd9d7c3dc300fb6f4f04ed", "title": "Spotlight: Optimizing Device Placement for Training Deep Neural Networks", "text": "Training deep neural networks (DNNs) requires an increasing amount of computation resources, and it becomes typical to use a mixture of GPU and CPU devices. Due to the heterogeneity of these devices, a recent challenge is how each operation in a neural network can be optimally placed on these devices, so that the training process can take the shortest amount of time possible. The current state-of-the-art solution uses reinforcement learning based on the policy gradient method, and it suffers from suboptimal training times. In this paper, we propose Spotlight, a new reinforcement learning algorithm based on proximal policy optimization, designed specifically for finding an optimal device placement for training DNNs. The design of our new algorithm relies upon a new model of the device placement problem: by modeling it as a Markov decision process with multiple stages, we are able to prove that Spotlight achieves a theoretical guarantee on performance improvements. We have implemented Spotlight in the CIFAR-10 benchmark and deployed it on the Google Cloud platform. Extensive experiments have demonstrated that the training time with placements recommended by Spotlight is 60.9% of that recommended by the policy gradient method."}
{"_id": "f659b381e2074262453e7b92071abe07443640b3", "title": "Percutaneous interventions for left atrial appendage exclusion: options, assessment, and imaging using 2D and 3D echocardiography.", "text": "Percutaneous left atrial appendage (LAA) exclusion is an evolving treatment to prevent embolic events in patients with nonvalvular atrial fibrillation. In the past few years multiple percutaneous devices have been developed to exclude the LAA from the body of the left atrium and thus from the systemic circulation. Two- and 3-dimensional transesophageal echocardiography (TEE) is used to assess the LAA anatomy and its suitability for percutaneous closure to select the type and size of the closure device and to guide the device implantation procedure in conjunction with fluoroscopy. In addition, 2- and 3-dimensional TEE is also used to assess the effectiveness of device implantation acutely and on subsequent follow-up examination. Knowledge of the implantation options that are currently available along with their specific characteristics is essential for choosing the appropriate device for a given patient with a specific LAA anatomy. We present the currently available LAA exclusion devices and the echocardiographic imaging approaches for evaluation of\u00a0the LAA before, during, and after LAA occlusion."}
{"_id": "d0596a92400fa2268ee502d682a5b72fca4cc678", "title": "Biomedical ontology alignment: an approach based on representation learning", "text": "BACKGROUND\nWhile representation learning techniques have shown great promise in application to a number of different NLP tasks, they have had little impact on the problem of ontology matching. Unlike past work that has focused on feature engineering, we present a novel representation learning approach that is tailored to the ontology matching task. Our approach is based on embedding ontological terms in a high-dimensional Euclidean space. This embedding is derived on the basis of a novel phrase retrofitting strategy through which semantic similarity information becomes inscribed onto fields of pre-trained word vectors. The resulting framework also incorporates a novel outlier detection mechanism based on a denoising autoencoder that is shown to improve performance.\n\n\nRESULTS\nAn ontology matching system derived using the proposed framework achieved an F-score of 94% on an alignment scenario involving the Adult Mouse Anatomical Dictionary and the Foundational Model of Anatomy ontology (FMA) as targets. This compares favorably with the best performing systems on the Ontology Alignment Evaluation Initiative anatomy challenge. We performed additional experiments on aligning FMA to NCI Thesaurus and to SNOMED CT based on a reference alignment extracted from the UMLS Metathesaurus. Our system obtained overall F-scores of 93.2% and 89.2% for these experiments, thus achieving state-of-the-art results.\n\n\nCONCLUSIONS\nOur proposed representation learning approach leverages terminological embeddings to capture semantic similarity. Our results provide evidence that the approach produces embeddings that are especially well tailored to the ontology matching task, demonstrating a novel pathway for the problem."}
{"_id": "ed130a246a3ff589096ac2c20babdd986068e528", "title": "The Media and Technology Usage and Attitudes Scale: An empirical investigation", "text": "Current approaches to measuring people's everyday usage of technology-based media and other computer-related activities have proved to be problematic as they use varied outcome measures, fail to measure behavior in a broad range of technology-related domains and do not take into account recently developed types of technology including smartphones. In the present study, a wide variety of items, covering a range of up-to-date technology and media usage behaviors. Sixty-six items concerning technology and media usage, along with 18 additional items assessing attitudes toward technology, were administered to two independent samples of individuals, comprising 942 participants. Factor analyses were used to create 11 usage subscales representing smartphone usage, general social media usage, Internet searching, e-mailing, media sharing, text messaging, video gaming, online friendships, Facebook friendships, phone calling, and watching television in addition to four attitude-based subscales: positive attitudes, negative attitudes, technological anxiety/dependence, and attitudes toward task-switching. All subscales showed strong reliabilities and relationships between the subscales and pre-existing measures of daily media usage and Internet addiction were as predicted. Given the reliability and validity results, the new Media and Technology Usage and Attitudes Scale was suggested as a method of measuring media and technology involvement across a variety of types of research studies either as a single 60-item scale or any subset of the 15 subscales."}
{"_id": "31864e13a9b3473ebb07b4f991f0ae3363517244", "title": "A Computational Approach to Edge Detection", "text": "This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge."}
{"_id": "874b9e74fb000c6554c6941f73ddd4c66c6de38d", "title": "Exploring steganography: Seeing the unseen", "text": "Steganography is the art of hiding information in ways that prevent the detection of hidden messages. It includes a vast array of secret communications methods that conceal the message's very existence. These methods include invisible inks, microdots, character arrangement, digital signatures, covert channels, and spread spectrum communications. Steganography and cryptography are cousins in the spycraft family: cryptography scrambles a message so it cannot be understood while steganography hides the message so it cannot be seen. In this article the authors discuss image files and how to hide information in them, and discuss results obtained from evaluating available steganographic software. They argue that steganography by itself does not ensure secrecy, but neither does simple encryption. If these methods are combined, however, stronger encryption methods result. If an encrypted message is intercepted, the interceptor knows the text is an encrypted message. But with steganography, the interceptor may not know that a hidden message even exists. For a brief look at how steganography evolved, there is included a sidebar titled \"Steganography: Some History.\""}
{"_id": "8c82c93dfc7d3672e58efd982a23791a8a419053", "title": "Techniques for Data Hiding", "text": "by International Business Machines Corporation. Copying in printed form for private use is permitted without payment of royalty provided that (1) each reproduction is done without alteration and (2) the Journal reference and IBM copyright notice are included on the first page. The title and abstract, but no other portions, of this paper may be copied or distributed royalty free without further permission by computer-based and other information service systems. Permission to republish any other portion of this paper must be obtained from the Editor. Data hiding, a form of steganography, embeds data into digital media for the purpose of identification, annotation, and copyright. Several constraints affect this process: the quantity of data to be hidden, the need for invariance of these data under conditions where a \" host \" signal is subject to distortions, e.g., lossy compression, and the degree to which the data must be immune to interception, modification, or removal by a third party. We explore both traditional and novel techniques for addressing the data-hiding process and evaluate these techniques in light of three applications: copyright protection, tamper-proofing, and augmentation data embedding. igital representation of media facilitates access and potentially improves the portability, efficiency , and accuracy of the information presented. Undesirable effects of facile data access include an increased opportunity for violation of copyright and tampering with or modification of content. The motivation for this work includes the provision of protection of intellectual property rights, an indication of content manipulation, and a means of annotation. Data hiding represents a class of processes used to embed data, such as copyright information, into various forms of media such as image, audio, or text with a minimum amount of perceivable degradation to the \" host \" signal; i.e., the embedded data should be invisible and inaudible to a human observer. Note that data hiding, while similar to compression, is distinct from encryption. Its goal is not to restrict or regulate access to the host signal, but rather to ensure that embedded data remain inviolate and recoverable. Two important uses of data hiding in digital media are to provide proof of the copyright, and assurance of content integrity. Therefore, the data should stay hidden in a host signal, even if that signal is subjected to manipulation as degrading as filtering, resampling, cropping, or lossy data compression. Other applications of data hiding, such as the inclusion of augmentation data, need not \u2026"}
{"_id": "934ac849af9e478b7c0c3c599b9047e291d96ae2", "title": "A New Approach to Persian/Arabic Text Steganography", "text": "Conveying information secretly and establishing hidden relationship has been of interest since long past. Text documents have been widely used since very long time ago. Therefore, we have witnessed different method of hiding information in texts (text steganography) since past to the present. In this paper we introduce a new approach for steganography in Persian and Arabic texts. Considering the existence of too many points in Persian and Arabic phrases, in this approach, by vertical displacement of the points, we hide information in the texts. This approach can be categorized under feature coding methods. This method can be used for Persian/Arabic watermarking. Our method has been implemented by Java programming language"}
{"_id": "b41c45b2ca0c38a4514f0779395ebdf3d34cecc0", "title": "Applications of data hiding in digital images", "text": ""}
{"_id": "9859eaa1e72bffa7474c0d1ee63a5fbb041b314e", "title": "At the end of the 14C time scale--the Middle to Upper Paleolithic record of western Eurasia.", "text": "The dynamics of change underlying the demographic processes that led to the replacement of Neandertals by Anatomically Modern Humans (AMH) and the emergence of what are recognized as Upper Paleolithic technologies and behavior can only be understood with reference to the underlying chronological framework. This paper examines the European chronometric (mainly radiocarbon-based) record for the period between ca. 40 and 30 ka 14C BP and proposes a relatively rapid transition within some 2,500 years. This can be summarized in the following falsifiable hypotheses: (1) final Middle Paleolithic (FMP) \"transitional\" industries (Uluzzian, Chatelperronian, leaf-point industries) were made by Neandertals and date predominantly to between ca. 41 and 38 ka 14C BP, but not younger than 35/34 ka 14C BP; (2) initial (IUP) and early (EUP) Upper Paleolithic \"transitional\" industries (Bachokirian, Bohunician, Protoaurignacian, Kostenki 14) will date to between ca. 39/38 and 35 ka 14C BP and document the appearance of AMH in Europe; (3) the earliest Aurignacian (I) appears throughout Europe quasi simultaneously at ca. 35 ka 14C BP. The earliest appearance of figurative art is documented only for a later phase ca. 33.0/32.5-29.2 ka 14C BP. Taken together, the Middle to Upper Paleolithic transition appears to be a cumulative process involving the acquisition of different elements of \"behavioral modernity\" through several \"stages of innovation.\""}
{"_id": "831d55d38104389de256c501495539a73118db7f", "title": "Azuma A Survey of Augmented Reality", "text": "This paper surveys the field of augmented reality (AR), in which 3D virtual objects are integrated into a 3D real environment in real time. It describes the medical, manufacturing , visualization, path planning, entertainment, and military applications that have been explored. This paper describes the characteristics of augmented reality systems, including a detailed discussion of the tradeoffs between optical and video blending approaches. Registration and sensing errors are two of the biggest problems in building effective augmented reality systems, so this paper summarizes current efforts to overcome these problems. Future directions and areas requiring further research are discussed. This survey provides a starting point for anyone interested in researching or using augmented reality."}
{"_id": "e6c8af0bedefa15ab8278b2f21a7b8f93f93ba5d", "title": "The Social Explanatory Styles Questionnaire: Assessing Moderators of Basic Social-Cognitive Phenomena Including Spontaneous Trait Inference, the Fundamental Attribution Error, and Moral Blame", "text": "Why is he poor? Why is she failing academically? Why is he so generous? Why is she so conscientious? Answers to such everyday questions--social explanations--have powerful effects on relationships at the interpersonal and societal levels. How do people select an explanation in particular cases? We suggest that, often, explanations are selected based on the individual's pre-existing general theories of social causality. More specifically, we suggest that over time individuals develop general beliefs regarding the causes of social events. We refer to these beliefs as social explanatory styles. Our goal in the present article is to offer and validate a measure of individual differences in social explanatory styles. Accordingly, we offer the Social Explanatory Styles Questionnaire (SESQ), which measures three independent dimensions of social explanatory style: Dispositionism, historicism, and controllability. Studies 1-3 examine basic psychometric properties of the SESQ and provide positive evidence regarding internal consistency, factor structure, and both convergent and divergent validity. Studies 4-6 examine predictive validity for each subscale: Does each explanatory dimension moderate an important phenomenon of social cognition? Results suggest that they do. In Study 4, we show that SESQ dispositionism moderates the tendency to make spontaneous trait inferences. In Study 5, we show that SESQ historicism moderates the tendency to commit the Fundamental Attribution Error. Finally, in Study 6 we show that SESQ controllability predicts polarization of moral blame judgments: Heightened blaming toward controllable stigmas (assimilation), and attenuated blaming toward uncontrollable stigmas (contrast). Decades of research suggest that explanatory style regarding the self is a powerful predictor of self-functioning. We think it is likely that social explanatory styles--perhaps comprising interactive combinations of the basic dimensions tapped by the SESQ--will be similarly potent predictors of social functioning. We hope the SESQ will be a useful tool for exploring that possibility."}
{"_id": "71214b3a60fb87b159b73f4f3a33bedbb5751cbf", "title": "SimSensei Demonstration: A Perceptive Virtual Human Interviewer for Healthcare Applications", "text": "We present the SimSensei system, a fully automatic virtual agent that conducts interviews to assess indicators of psychological distress. We emphasize on the perception part of the system, a multimodal framework which captures and analyzes user state for both behavioral understanding and interactional purposes."}
{"_id": "922cc72c45cdf8b4ff2c4b1ad604056c90ee5983", "title": "Speech Synthesis of Code-Mixed Text", "text": "Most Text to Speech (TTS) systems today assume that the input text is in a single language and is written in the same language that the text needs to be synthesized in. However, in bilingual and multilingual communities, code mixing or code switching occurs in speech, in which speakers switch between languages in the same utterance. Due to the popularity of social media, we now see code-mixing even in text in these multilingual communities. TTS systems capable of synthesizing such text need to be able to handle text that is written in multiple languages and scripts. Code-mixed text poses many challenges to TTS systems, such as language identification, spelling normalization and pronunciation modeling. In this work, we describe a preliminary framework for synthesizing code-mixed text. We carry out experiments on synthesizing code-mixed Hindi and English text. We find that there is a significant user preference for TTS systems that can correctly identify and pronounce words in different languages."}
{"_id": "e526fd18329ad0da5dd989358fd20e4dd2a2a3e1", "title": "A framework for designing cloud forensic-enabled services (CFeS)", "text": "Cloud computing is used by consumers to access cloud services. Malicious actors exploit vulnerabilities of cloud services to attack consumers. The link between these two assumptions is the cloud service. Although cloud forensics assists in the direction of investigating and solving cloud-based cyber-crimes, in many cases the design and implementation of cloud services fall back. Software designers and engineers should focus their attention on the design and implementation of cloud services that can be investigated in a forensic sound manner. This paper presents a methodology that aims on assisting designers to design cloud forensic-enabled services. The methodology supports the design of cloud services by implementing a number of steps to make the services cloud forensic enabled. It consists of a set of cloud forensic constraints, a modeling language expressed through a conceptual model and a process based on the concepts identified and presented in the model. The main advantage of the proposed methodology is the correlation of cloud services\u2019 characteristics with the cloud investigation while providing software engineers the ability to design and implement cloud forensic-enabled services via the use of a set of predefined forensic-related tasks."}
{"_id": "8fffaf3e357ca89261b71a23d56669a5da277156", "title": "Implementing a Personal Health Record Cloud Platform Using Ciphertext-Policy Attribute-Based Encryption", "text": "Work on designing and implementing a patient-centric, personal health record cloud platform based on open-source Indivo X system. We adopt cipher text-policy attribute-based encryption to provide privacy protection and fine-grained access control."}
{"_id": "977afc672a8f100665fc5377f7f934e408b5a177", "title": "An ef fi cient and effective convolutional auto-encoder extreme learning machine network for 3 d feature learning", "text": "3D shape features play a crucial role in graphics applications, such as 3D shape matching, recognition, and retrieval. Various 3D shape descriptors have been developed over the last two decades; however, existing descriptors are handcrafted features that are labor-intensively designed and cannot extract discriminative information for a large set of data. In this paper, we propose a rapid 3D feature learning advantages of the convolutional neuron network, auto-encoder, and extreme learning machine (ELM). This method performs better and faster than other methods. In addition, we define a novel architecture based on CAE-ELM. The architecture accepts two types of 3D shape representation, namely, voxel data and signed distance field data (SDF), as inputs to extract the global and local features of 3D shapes. Voxel data describe structural information, whereas SDF data contain details on 3D shapes. Moreover, the proposed CAE-ELM can be used in practical graphics applications, such as 3D shape completion. Experiments show that the features extracted by CAE-ELM are superior to existing hand-crafted features and other deep learning methods or ELM models. Moreover, the classification accuracy of the proposed architecture is superior to that of other methods on ModelNet10 (91.4%) and ModelNet40 (84.35%). The training process also runs faster than existing deep learning methods by approximately two orders of magnitude. & 2015 Elsevier B.V. All rights reserved."}
{"_id": "87c7647c659aefb539dbe672df06714d935e0a3b", "title": "Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations", "text": "In this paper we establish rigorous benchmarks for image classifier robustness. Our first benchmark, IMAGENET-C, standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications. Unlike recent robustness research, this benchmark evaluates performance on commonplace corruptions not worst-case adversarial corruptions. We find that there are negligible changes in relative corruption robustness from AlexNet to ResNet classifiers, and we discover ways to enhance corruption robustness. Then we propose a new dataset called ICONS-50 which opens research on a new kind of robustness, surface variation robustness. With this dataset we evaluate the frailty of classifiers on new styles of known objects and unexpected instances of known classes. We also demonstrate two methods that improve surface variation robustness. Together our benchmarks may aid future work toward networks that learn fundamental class structure and also robustly generalize."}
{"_id": "49b42aa77b764af561d63aee591e114c6dc03d8b", "title": "Dual Policy Iteration", "text": "A novel class of Approximate Policy Iteration (API) algorithms have recently demonstrated impressive practical performance (e.g., ExIt [1], AlphaGo-Zero [2]). This new family of algorithms maintains, and alternately optimizes, two policies: a fast, reactive policy (e.g., a deep neural network) deployed at test time, and a slow, non-reactive policy (e.g., Tree Search), that can plan multiple steps ahead. The reactive policy is updated under supervision from the non-reactive policy, while the non-reactive policy is improved via guidance from the reactive policy. In this work we study this class of Dual Policy Iteration (DPI) strategy in an alternating optimization framework and provide a convergence analysis that extends existing API theory. We also develop a special instance of this framework which reduces the update of non-reactive policies to model-based optimal control using learned local models, and provides a theoretically sound way of unifying model-free and model-based RL approaches with unknown dynamics. We demonstrate the efficacy of our approach on various continuous control Markov Decision Processes."}
{"_id": "5b062562a8067baae045df1c7f5a8455d0363b5a", "title": "SCPNet: Spatial-Channel Parallelism Network for Joint Holistic and Partial Person Re-Identification", "text": "Holistic person re-identification (ReID) has received extensive study in the past few years and achieves impressive progress. However, persons are often occluded by obstacles or other persons in practical scenarios, which makes partial person re-identification non-trivial. In this paper, we propose a spatial-channel parallelism network (SCPNet) in which each channel in the ReID feature pays attention to a given spatial part of the body. The spatial-channel corresponding relationship supervises the network to learn discriminative feature for both holistic and partial person re-identification. The single model trained on four holistic ReID datasets achieves competitive accuracy on these four datasets, as well as outperforms the state-of-the-art methods on two partial ReID datasets without training."}
{"_id": "aad96ea7ce0e9ef3097401a0c4c8236b41f4a06f", "title": "Identifying prosumer's energy sharing behaviours for forming optimal prosumer-communities", "text": "Smart Grid (SG) achieves bidirectional energy and information flow between the energy user and the utility grid, allowing energy users not only to consume energy, but also to generate the energy and share with the utility grid or with other energy consumers. This type of energy user who consumes energy and who also can generate the energy is called the \u201cprosumer\u201d. The sustainability of the SG energy sharing process heavily depends on its participating prosumers, making prosumer participation and management schemes crucial within the energy sharing approaches. However in literature, there is very little attention on prosumer management schemes. The contribution of this article is twofold. First, this article introduces a novel concept of participating and managing the prosumers in the SG energy sharing process in the form of autonomous, intelligent goal-oriented virtual communities. Here, the community of prosumers can collectively increase the amount of power to be auctioned or bought offering higher bargaining power, thereby settling for a higher price per kilowatt in long-term. According to the literature, this research is the first of this type introducing a community approach for prosumer management. The initial step to build an effective prosumer-community is the identification of those prosumers who would be suitable to make efficient prosumer communities. This leads the necessity of identifying parameters that influence the energy sharing behaviours of prosumers. The second contribution of this article is that, this comprehensively analyzes the different parameters influencing the prosumer's energy sharing behaviours and thus presents multi-agent architecture for optimal prosumer-community formation."}
{"_id": "61963ec7bfa73a1725bed7a58e0e570d6c860bb6", "title": "Adaptive UKF-SLAM based on magnetic gradient inversion method for underwater navigation", "text": "Consider the two characteristics: (1) Simultaneous localization and mapping (SLAM) is a popular algorithm for autonomous underwater vehicle, but visual SLAM is significantly influenced by weak illumination. (2)Geomagnetism-aided navigation and gravity-aided navigation are equally important methods in the field of vehicle navigation, but both are affected heavily by time-varying noises and terrain fluctuations. However, magnetic gradient vector can avoid the influence of time-varying noises, and is less affected by terrain fluctuations. To this end, we propose an adaptive SLAM-based magnetic gradient aided navigation with the following advantages: (1) Adaptive SLAM is an efficient way to deal with uncertainty of the measurement model. (2) Magnetic gradient inversion equation is a good alternative to be used as measurement equation in visual SLAM-denied environment. Experimental results show that our proposed method is an effective solution, combining magnetic gradient information with SLAM."}
{"_id": "6a182f5fe8b5475014986ad303d15240450922a8", "title": "Investigating episodes of mobile phone activity as indicators of opportune moments to deliver notifications", "text": "We investigate whether opportune moments to deliver notifications surface at the endings of episodes of mobile interaction (making voice calls or receiving SMS) based on the assumption that the endings collocate with naturally occurring breakpoint in the user's primary task. Testing this with a naturalistic experiment we find that interruptions (notifications) are attended to and dealt with significantly more quickly after a user has finished an episode of mobile interaction compared to a random baseline condition, supporting the potential utility of this notification strategy. We also find that the workload and situational appropriateness of the secondary interruption task significantly affect subsequent delay and completion rate of the tasks. In situ self-reports and interviews reveal complexities in the subjective experience of the interruption, which suggest that a more nuanced classification of the particular call or SMS and its relationship to the primary task(s) would be desirable."}
{"_id": "c539c664fc020d3a44c9614bb291e8fb43c973b9", "title": "Virtual reality and hand tracking system as a medical tool to evaluate patients with parkinson's", "text": "In this paper, we take advantage of the free hand interaction technology as a medical tool, either in rehabilitation centers or at home, that allows the evaluation of patients with Parkinson's. We have created a virtual reality scene to engage the patient to feel in an activity that can be found in daily life, and use the Leap Motion controller tracking to evaluate and classify the tremor in the hands. A sample of 33 patients diagnosed with Parkinson's disease (PD) participated in the study. Three tests were performed per patient, the first two to evaluate the amplitude of the postural tremor in each hand, and the third to measure the time to complete a specific task. Analysis shows that our tool can be used effectively to classify the stage of Parkinson's disease."}
{"_id": "8843dc55d09407a8b22de1e0c8b781cda9f70bb6", "title": "Neonatal Abstinence Syndrome and High School Performance.", "text": "BACKGROUND AND OBJECTIVES\nLittle is known of the long-term, including school, outcomes of children diagnosed with Neonatal abstinence syndrome (NAS) (International Statistical Classification of Disease and Related Problems [10th Edition], Australian Modification, P96.1).\n\n\nMETHODS\nLinked analysis of health and curriculum-based test data for all children born in the state of New South Wales (NSW), Australia, between 2000 and 2006. Children with NAS (n = 2234) were compared with a control group matched for gestation, socioeconomic status, and gender (n = 4330, control) and with other NSW children (n = 598 265, population) for results on the National Assessment Program: Literacy and Numeracy, in grades 3, 5, and 7.\n\n\nRESULTS\nMean test scores (range 0-1000) for children with NAS were significantly lower in grade 3 (359 vs control: 410 vs population: 421). The deficit was progressive. By grade 7, children with NAS scored lower than other children in grade 5. The risk of not meeting minimum standards was independently associated with NAS (adjusted odds ratio [aOR], 2.5; 95% confidence interval [CI], 2.2-2.7), indigenous status (aOR, 2.2; 95% CI, 2.2-2.3), male gender (aOR, 1.3; 95% CI, 1.3-1.4), and low parental education (aOR, 1.5; 95% CI, 1.1-1.6), with all Ps < .001.\n\n\nCONCLUSIONS\nA neonatal diagnostic code of NAS is strongly associated with poor and deteriorating school performance. Parental education may decrease the risk of failure. Children with NAS and their families must be identified early and provided with support to minimize the consequences of poor educational outcomes."}
{"_id": "c987d7617ade93e5c9ce8cec3832d38dd9de5de9", "title": "Programming challenges of chatbot: Current and future prospective", "text": "In the modern Era of technology, Chatbots is the next big thing in the era of conversational services. Chatbots is a virtual person who can effectively talk to any human being using interactive textual skills. Currently, there are many cloud base Chatbots services which are available for the development and improvement of the chatbot sector such as IBM Watson, Microsoft bot, AWS Lambda, Heroku and many others. A virtual person is based on machine learning and Artificial Intelligence (AI) concepts and due to dynamic nature, there is a drawback in the design and development of these chatbots as they have built-in AI, NLP, programming and conversion services. This paper gives an overview of cloud-based chatbots technologies along with programming of chatbots and challenges of programming in current and future Era of chatbot."}
{"_id": "4b9ac9e1493b11f0af0c65c7a291eb61f33678b6", "title": "Model predictive control for dynamic footstep adjustment using the divergent component of motion", "text": "This paper presents an extension of previous model predictive control (MPC) schemes to the stabilization of the time-varying divergent component of motion (DCM). To address the control authority limitations caused by fixed footholds, the step positions and rotations are treated as control inputs, allowing the generation and execution of stable walking motions, both at high speeds and in the face of disturbances. Rotation approximations are handled by applying a mixed-integer program, which, when combined with the use of the time-varying DCM to account for the effects of height changes, improve the versatility of MPC. Simulation results of fast walking and step recovery with the ESCHER humanoid demonstrate the effectiveness of this approach."}
{"_id": "40e17135a21da48687589acb0c2478a86825c6df", "title": "Vaginal Labiaplasty: Defense of the Simple \u201cClip and Snip\u201d and a New Classification System", "text": "Vaginal labiaplasty has become a more frequently performed procedure as a result of the publicity and education possible with the internet. Some of our patients have suffered in silence for years with large, protruding labia minora and the tissue above the clitoris that is disfiguring and uncomfortable and makes intercourse very difficult and painful. We propose four classes of labia protrusion based on size and location: Class 1 is normal, where the labia majora and minora are about equal. Class 2 is the protrusion of the minora beyond the majora. Class 3 includes a clitoral hood. Class 4 is where the large labia minora extends to the perineum. There are two principal means of reconstructing this area. Simple amputation may be possible for Class 2 and Class 4. Class 2 and Class 3 may be treated with a wedge resection and flap advancement that preserves the delicate free edge of the labia minora (Alter, Ann Plast Surg 40:287, 1988). Class 4 may require a combination of both amputation of the clitoral hood and/or perineal extensions and rotation flap advancement over the labia minora. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 ."}
{"_id": "7e19f7a82528fa79349f1fc61c7f0d35a9ad3a5e", "title": "Face Recognition: A Hybrid Neural Network Approach", "text": "Faces represent complex, multidimensional, meaningful visual stimuli an d developing a computational model for face recognition is difficult [42]. We present a hybrid neural network solution which compares favorably with other methods. The system combines local image sam pling, a self-organizing map neural network, and a convolutional neural network. The self-organizi ng map provides a quantization of the image samples into a topological space where inputs that are n e rby in the original space are also nearby in the output space, thereby providing dimensionality red uction and invariance to minor changes in the image sample, and the convolutional neural network prov ides for partial invariance to translation, rotation, scale, and deformation. The convolutional net work extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen -Lo\u00e8ve transform in place of the self-organizing map, and a multi-layer perceptron in place of the con volutional network. The Karhunen-Lo\u00e8ve transform performs almost as well (5.3% error versus 3 .8%). The multi-layer perceptron performs very poorly (40% error versus 3.8%). The method is cap able of rapid classification, requires only fast, approximate normalization and preprocessing, and cons istently exhibits better classification performance than the eigenfaces approach [42] on the database consider ed as the number of images per person in the training database is varied from 1 to 5. With 5 imag es per person the proposed method and eigenfaces result in 3.8% and 10.5% error respectively. The recogni zer provides a measure of confidence in its output and classification error approaches zero when rejecting as few as 10% of the examples. We use a database of 400 images of 40 individuals which cont ains quite a high degree of variability in expression, pose, and facial details. We analyze computati onal complexity and discuss how new classes could be added to the trained recognizer."}
{"_id": "fd17fcc2d8736d270b34ddc2fbc8bfa581b7d6dd", "title": "Prevalence of chronic low back pain: systematic review", "text": "OBJECTIVE\nTo estimate worldwide prevalence of chronic low back pain according to age and sex.\n\n\nMETHODS\nWe consulted Medline (PubMed), LILACS and EMBASE electronic databases. The search strategy used the following descriptors and combinations: back pain, prevalence, musculoskeletal diseases, chronic musculoskeletal pain, rheumatic, low back pain, musculoskeletal disorders and chronic low back pain. We selected cross-sectional population-based or cohort studies that assessed chronic low back pain as an outcome. We also assessed the quality of the selected studies as well as the chronic low back pain prevalence according to age and sex.\n\n\nRESULTS\nThe review included 28 studies. Based on our qualitative evaluation, around one third of the studies had low scores, mainly due to high non-response rates. Chronic low back pain prevalence was 4.2% in individuals aged between 24 and 39 years old and 19.6% in those aged between 20 and 59. Of nine studies with individuals aged 18 and above, six reported chronic low back pain between 3.9% and 10.2% and three, prevalence between 13.1% and 20.3%. In the Brazilian older population, chronic low back pain prevalence was 25.4%.\n\n\nCONCLUSIONS\nChronic low back pain prevalence increases linearly from the third decade of life on, until the 60 years of age, being more prevalent in women. Methodological approaches aiming to reduce high heterogeneity in case definitions of chronic low back pain are essential to consistency and comparative analysis between studies. A standard chronic low back pain definition should include the precise description of the anatomical area, pain duration and limitation level."}
{"_id": "9b39830391d7ad65ab2b7a8d312e404d128bbde6", "title": "The image of India as a Travel Destination and the attitude of viewers towards Indian TV Dramas", "text": "For a few decades now, various television stations in Indonesia have been broadcasting foreign drama series including those from a range of Asian countries, such Korea, India, Turkey, Thailand and the Philippines . This study aims to explore attitude towards Asian drama in those countries and the destination image of the country where the drama emanates from as perceived by the audiences. This study applied a mixed-methodology approach in order to explore particularly attitudes towards foreign television drama productions. There is a paucity of study exploring the attitudes of audiences towards Indian television dramas and a limited study focussing on the image of India as a preferred travel destination. Data was collected using an online instrument and participants were selected as a convenience sample. The attitude towards foreign television dramas was measured using items that were adapted from the qualitative study results, whereas for measuring destination image, an existing scale was employed. This study found that the attitudes of audiences towards Indian drama and their image of India had no dimension (one factor). The study also reported that attitude towards Indian dramas had a significant impact on the image of India as a travel destination and vice-versa. Recommendations for future study and tourism marketing are discussed."}
{"_id": "5dd9dc47c4acc9ea3e597751194db52119398ac6", "title": "Low Power High-Efficiency Shift Register Using Implicit Pulse-Triggered Flip-Flop in 130 nm CMOS Process for a Cryptographic RFID Tag", "text": "The shift register is a type of sequential logic circuit which is mostly used for storing digital data or the transferring of data in the form of binary numbers in radio frequency identification (RFID) applications to improve the security of the system. A power-efficient shift register utilizing a new flip-flop with an implicit pulse-triggered structure is presented in this article. The proposed flip-flop has features of high performance and low power. It is composed of a sampling circuit implemented by five transistors, a C-element for rise and fall paths, and a keeper stage. The speed is enhanced by executing four clocked transistors together with a transition condition technique. The simulation result confirms that the proposed topology consumes the lowest amounts of power of 30.1997 and 22.7071 nW for parallel in \u2013parallel out (PIPO) and serial in \u2013serial out (SISO) shift register respectively covering 22 \u03bcm2 chip area. The overall design consist of only 16 transistors and is simulated in 130 nm complementary-metal-oxide-semiconductor (CMOS) technology with a 1.2 V power supply."}
{"_id": "31bb49ba7df94b88add9e3c2db72a4a98927bb05", "title": "Static and dynamic 3D facial expression recognition: A comprehensive survey", "text": "a r t i c l e i n f o Keywords: Facial behaviour analysis Facial expression recognition 3D facial surface 3D facial surface sequences (4D faces) Automatic facial expression recognition constitutes an active research field due to the latest advances in computing technology that make the user's experience a clear priority. The majority of work conducted in this area involves 2D imagery, despite the problems this presents due to inherent pose and illumination variations. In order to deal with these problems, 3D and 4D (dynamic 3D) recordings are increasingly used in expression analysis research. In this paper we survey the recent advances in 3D and 4D facial expression recognition. We discuss developments in 3D facial data acquisition and tracking, and present currently available 3D/4D face databases suitable for 3D/4D facial expressions analysis as well as the existing facial expression recognition systems that exploit either 3D or 4D data in detail. Finally, challenges that have to be addressed if 3D facial expression recognition systems are to become a part of future applications are extensively discussed. Automatic human behaviour understanding has attracted a great deal of interest over the past two decades, mainly because of its many applications spanning various fields such as psychology, computer technology , medicine and security. It can be regarded as the essence of next-generation computing systems as it plays a crucial role in affective computing technologies (i.e. proactive and affective user interfaces), learner-adaptive tutoring systems, patient-profiled personal wellbeing technologies, etc. [1]. Facial expression is the most cogent, naturally preeminent means for humans to communicate emotions, to clarify and give emphasis, to signal comprehension disagreement, to express intentions and, more generally, to regulate interactions with the environment and other people [2]. These facts highlight the importance of automatic facial behaviour analysis, including facial expression of emotion and facial action unit (AU) recognition, and justify the interest this research area has attracted, in the past twenty years [3,4]. Until recently, most of the available data sets of expressive faces were of limited size containing only deliberately posed affective displays, mainly of the prototypical expressions of six basic emotions (i.e. anger, disgust, fear, happiness, sadness and surprise), recorded under highly controlled conditions. Recent efforts focus on the recognition of complex and spontaneous emotional phenomena (e.g. boredom or lack of attention, frustration , stress, etc.) rather than on the recognition of deliberately displayed prototypical expressions of emotions [5,4,6,7]. However, most \u2026"}
{"_id": "08f7fb99edd844aac90fccfa51f5d55cbdf15cbd", "title": "Steady-State VEP-Based Brain-Computer Interface Control in an Immersive 3D Gaming Environment", "text": "This paper presents the application of an effective EEG-based brain-computer interface design for binary control in a visually elaborate immersive 3D game. The BCI uses the steady-state visual evoked potential (SSVEP) generated in response to phasereversing checkerboard patterns. Two power-spectrum estimation methods were employed for feature extraction in a series of offline classification tests. Both methods were also implemented during real-time game play. The performance of the BCI was found to be robust to distracting visual stimulation in the game and relatively consistent across six subjects, with 41 of 48 games successfully completed. For the best performing feature extraction method, the average real-time control accuracy across subjects was 89%. The feasibility of obtaining reliable control in such a visually rich environment using SSVEPs is thus demonstrated and the impact of this result is discussed."}
{"_id": "382b521a3a385ea8a339b68c54183e2681602735", "title": "Brainport: an alternative input to the brain.", "text": "Brain Computer Interface (BCI) technology is one of the most rapidly developing areas of modern science; it has created numerous significant crossroads between Neuroscience and Computer Science. The goal of BCI technology is to provide a direct link between the human brain and a computerized environment. The objective of recent BCI approaches and applications have been designed to provide the information flow from the brain to the computerized periphery. The opposite or alternative direction of the flow of information (computer to brain interface, or CBI) remains almost undeveloped. The BrainPort is a CBI that offers a complementary technology designed to support a direct link from a computerized environment to the human brain - and to do so non-invasively. Currently, BrainPort research is pursuing two primary goals. One is the delivery of missing sensory information critical for normal human behavior through an additional artificial sensory channel around the damaged or malfunctioning natural sensory system. The other is to decrease the risk of sensory overload in human-machine interactions by providing a parallel and supplemental channel for information flow to the brain. In contrast, conventional CBI strategies (e.g., Virtual Reality), are usually designed to provide additional or substitution information through pre-existing sensory channels, and unintentionally aggravate the brain overload problem."}
{"_id": "6777d0f3b5b606f218437905346a9182712840cc", "title": "A general framework for brain-computer interface design", "text": "The Brain-Computer Interface (BCI) research community has acknowledged that researchers are experiencing difficulties when they try to compare the BCI techniques described in the literature. In response to this situation, the community has stressed the need for objective methods to compare BCI technologies. Suggested improvements have included the development and use of benchmark applications and standard data sets. However, as a young, multidisciplinary research field, the BCI community lacks a common vocabulary. As a result, this deficiency leads to poor intergroup communication, which hinders the development of the desired methods of comparison. One of the principle reasons for the lack of common vocabulary is the absence of a common functional model of a BCI System. This paper proposes a new functional model for BCI System design. The model supports many features that facilitate the comparison of BCI technologies with other BCI and non-BCI user interface technologies. From this model, taxonomy for BCI System design is developed. Together the model and taxonomy are considered a general framework for BCI System design. The representational power of the proposed framework was evaluated by applying it to a set of existing BCI technologies. The framework could effectively describe all of the BCI System designs tested."}
{"_id": "97e5088bfb7583f1a40881d7c41f7be63ebb2846", "title": "The thought translation device (TTD) for completely paralyzed patients.", "text": "The thought translation device trains locked-in patients to self-regulate slow cortical potentials (SCP's) of their electroencephalogram (EEG). After operant learning of SCP self-control, patients select letters, words or pictograms in a computerized language support program. Results of five respirated, locked-in-patients are described, demonstrating the usefulness of the thought translation device as an alternative communication channel in motivated totally paralyzed patients with amyotrophic lateral sclerosis."}
{"_id": "d76beb59a23c01c9bec1940c4cec1ca26e00480a", "title": "Brain-computer interfaces based on the steady-state visual-evoked response.", "text": "The Air Force Research Laboratory has implemented and evaluated two brain-computer interfaces (BCI's) that translate the steady-state visual evoked response into a control signal for operating a physical device or computer program. In one approach, operators self-regulate the brain response; the other approach uses multiple evoked responses."}
{"_id": "8a65dc637d39c14323dccd5cbcc08eed2553880e", "title": "The Struggle for District-based Health Information Systems in South Africa", "text": "This article describes the initial period (1994\u20132001) of an ongoing action research project to develop health information systems to support district management in South Africa. The reconstruction of the health sector in postapartheid South Africa striving for equity in health service delivery and building of a decentralized structure based on health districts. In terms of information systems (IS) development, this reform process translates into standardization of health data in ways that inscribe the goals of the new South Africa by enhancing local control and integration of information handling. We describe our approach to action research and use concepts from actor-network and structuration theories in analyzing the case material. In the detailed description and analysis of the process of IS development provided, we focus on the need to balance standardization and local \u008f exibility (localization); standardization is thus seen as bottom-up alignment of an array of heterogeneous actors. Building on a social system model of information systems, we conceptualize the IS design strategy developed and used as the cultivation of processes whereby these actors are translating and aligning their interests. We develop a modular hierarchy of global and local datasets as a framework within which the tensions between standardization and localization may be understood and addressed. Finally, we discuss the possible relevance of the results of the research in other countries."}
{"_id": "9cbef9d9cdbe4007444bd3a6e83551ae0865b648", "title": "Dynamic Conditional Random Fields for Joint Sentence Boundary and Punctuation Prediction", "text": "The use of dynamic conditional random fields (DCRF) has been shown to outperform linear-chain conditional random fields (LCRF) for punctuation prediction on conversational speech texts [1]. In this paper, we combine lexical, prosodic, and modified n-gram score features into the DCRF framework for a joint sentence boundary and punctuation prediction task on TDT3 English broadcast news. We show that the joint prediction method outperforms the conventional two-stage method using LCRF or maximum entropy model (MaxEnt). We show the importance of various features using DCRF, LCRF, MaxEnt, and hidden-event n-gram model (HEN) respectively. In addition, we address the practical issue of feature explosion by introducing lexical pruning, which reduces model size and improves the F1-measure. We adopt incremental local training to overcome memory size limitation without incurring significant performance penalty. Our results show that adding prosodic and n-gram score features gives about 20% relative error reduction in all cases. Overall, DCRF gives the best accuracy, followed by LCRF, MaxEnt, and HEN."}
{"_id": "f351de6e160b429a18fa06bd3994a95f932c4ce5", "title": "Toward Understanding How Early-Life Stress Reprograms Cognitive and Emotional Brain Networks", "text": "Vulnerability to emotional disorders including depression derives from interactions between genes and environment, especially during sensitive developmental periods. Adverse early-life experiences provoke the release and modify the expression of several stress mediators and neurotransmitters within specific brain regions. The interaction of these mediators with developing neurons and neuronal networks may lead to long-lasting structural and functional alterations associated with cognitive and emotional consequences. Although a vast body of work has linked quantitative and qualitative aspects of stress to adolescent and adult outcomes, a number of questions are unclear. What distinguishes \u2018normal\u2019 from pathologic or toxic stress? How are the effects of stress transformed into structural and functional changes in individual neurons and neuronal networks? Which ones are affected? We review these questions in the context of established and emerging studies. We introduce a novel concept regarding the origin of toxic early-life stress, stating that it may derive from specific patterns of environmental signals, especially those derived from the mother or caretaker. Fragmented and unpredictable patterns of maternal care behaviors induce a profound chronic stress. The aberrant patterns and rhythms of early-life sensory input might also directly and adversely influence the maturation of cognitive and emotional brain circuits, in analogy to visual and auditory brain systems. Thus, unpredictable, stress-provoking early-life experiences may influence adolescent cognitive and emotional outcomes by disrupting the maturation of the underlying brain networks. Comprehensive approaches and multiple levels of analysis are required to probe the protean consequences of early-life adversity on the developing brain. These involve integrated human and animal-model studies, and approaches ranging from in vivo imaging to novel neuroanatomical, molecular, epigenomic, and computational methodologies. Because early-life adversity is a powerful determinant of subsequent vulnerabilities to emotional and cognitive pathologies, understanding the underlying processes will have profound implications for the world\u2019s current and future children."}
{"_id": "5f8835f977ed62da907113ddd03115ee7269e623", "title": "Mutations in EZH2 cause Weaver syndrome.", "text": "We used trio-based whole-exome sequencing to analyze two families affected by Weaver syndrome, including one of the original families reported in 1974. Filtering of rare variants in the affected probands against the parental variants identified two different de novo mutations in the enhancer of zeste homolog 2 (EZH2). Sanger sequencing of EZH2 in a third classically-affected proband identified a third de novo mutation in this gene. These data show that mutations in EZH2 cause Weaver syndrome."}
{"_id": "3912e7522ad4c7b271883600dd3d2d6f46e2c8b5", "title": "CapStones and ZebraWidgets: sensing stacks of building blocks, dials and sliders on capacitive touch screens", "text": "Recent research proposes augmenting capacitive touch pads with tangible objects, enabling a new generation of mobile applications enhanced with tangible objects, such as game pieces and tangible controllers. In this paper, we extend the concept to capacitive tangibles consisting of multiple parts, such as stackable gaming pieces and tangible widgets with moving parts. We achieve this using a system of wires and connectors inside each block that causes the capacitance of the bottom-most block to reflect the entire assembly. We demonstrate three types of tangibles, called CapStones, Zebra Dials and Zebra Sliders that work with current consumer hardware and investigate what designs may become possible as touchscreen hardware evolves."}
{"_id": "7e51cac4bc694a1a3c07720fb33cee9ce37c4ba4", "title": "Sufficient Sample Sizes for Multilevel Modeling", "text": "An important problem in multilevel modeling is what constitutes a sufficient sample size for accurate estimation. In multilevel analysis, the major restriction is often the higher-level sample size. In this paper, a simulation study is used to determine the influence of different sample sizes at the group level on the accuracy of the estimates (regression coefficients and variances) and their standard errors. In addition, the influence of other factors, such as the lowest-level sample size and different variance distributions between the levels (different intraclass correlations), is examined. The results show that only a small sample size at level two (meaning a sample of 50 or less) leads to biased estimates of the second-level standard errors. In all of the other simulated conditions the estimates of the regression coefficients, the variance components, and the standard errors are unbiased and accurate."}
{"_id": "600434c6255c160b53ad26912c1c0b96f0d48ce6", "title": "How Many Trees in a Random Forest?", "text": "Random Forest is a computationally efficient technique that can operate quickly over large datasets. It has been used in many recent research projects and real-world applications in diverse domains. However, the associated literature provides almost no directions about how many trees should be used to compose a Random Forest. The research reported here analyzes whether there is an optimal number of trees within a Random Forest, i.e., a threshold from which increasing the number of trees would bring no significant performance gain, and would only increase the computational cost. Our main conclusions are: as the number of trees grows, it does not always mean the performance of the forest is significantly better than previous forests (fewer trees), and doubling the number of trees is worthless. It is also possible to state there is a threshold beyond which there is no significant gain, unless a huge computational environment is available. In addition, it was found an experimental relationship for the AUC gain when doubling the number of trees in any forest. Furthermore, as the number of trees grows, the full set of attributes tend to be used within a Random Forest, which may not be interesting in the biomedical domain. Additionally, datasets\u2019 density-based metrics proposed here probably capture some aspects of the VC dimension on decision trees and low-density datasets may require large capacity machines whilst the opposite also seems to be true."}
{"_id": "4cbadc5f4afe9ac178fd14a6875ef1956a528313", "title": "Security in IPv 6-enabled Wireless Sensor Networks : An Implementation of TLS / DTLS for the Contiki Operating System", "text": "During the last several years the advancements in technology made it possible for small sensor nodes to communicate wirelessly with the rest of the Internet. With this achievement the question of securing such IP-enabled Wireless Sensor Networks (IP-WSNs) emerged and has been an important research topic since. In this thesis we discuss our implementation of TLS and DTLS protocols using a pre-shared key cipher suite (TLS PSK WITH AES 128 CCM 8) for the Contiki operating system. Apart from simply adding a new protocol to the set of protocols supported by the Contiki OS, this project allows us to evaluate how suitable the transport-layer security and pre-shared key management schemes are for IP-WSNs."}
{"_id": "5ffff74a97cc52c9061f45295ce94ac77bf2f350", "title": "Arbuscular mycorrhizal fungi in alleviation of salt stress: a review.", "text": "BACKGROUND\nSalt stress has become a major threat to plant growth and productivity. Arbuscular mycorrhizal fungi colonize plant root systems and modulate plant growth in various ways.\n\n\nSCOPE\nThis review addresses the significance of arbuscular mycorrhiza in alleviation of salt stress and their beneficial effects on plant growth and productivity. It also focuses on recent progress in unravelling biochemical, physiological and molecular mechanisms in mycorrhizal plants to alleviate salt stress.\n\n\nCONCLUSIONS\nThe role of arbuscular mycorrhizal fungi in alleviating salt stress is well documented. This paper reviews the mechanisms arbuscular mycorrhizal fungi employ to enhance the salt tolerance of host plants such as enhanced nutrient acquisition (P, N, Mg and Ca), maintenance of the K(+) : Na(+) ratio, biochemical changes (accumulation of proline, betaines, polyamines, carbohydrates and antioxidants), physiological changes (photosynthetic efficiency, relative permeability, water status, abscissic acid accumulation, nodulation and nitrogen fixation), molecular changes (the expression of genes: PIP, Na(+)/H(+) antiporters, Lsnced, Lslea and LsP5CS) and ultra-structural changes. Theis review identifies certain lesser explored areas such as molecular and ultra-structural changes where further research is needed for better understanding of symbiosis with reference to salt stress for optimum usage of this technology in the field on a large scale. This review paper gives useful benchmark information for the development and prioritization of future research programmes."}
{"_id": "1d895f34b640670fd92192caff8b3b53c8fb09b5", "title": "Efficient Object Localization and Pose Estimation with 3D Wireframe Models", "text": "We propose a new and efficient method for 3D object localization and fine-grained 3D pose estimation from a single 2D image. Our approach follows the classical paradigm of matching a 3D model to the 2D observations. Our first contribution is a 3D object model composed of a set of 3D edge primitives learned from 2D object blueprints, which can be viewed as a 3D generalization of HOG features. This model is used to define a matching cost obtained by applying a rigid-body transformation to the 3D object model, projecting it onto the image plane, and matching the projected model to HOG features extracted from the input image. Our second contribution is a very efficient branch-and-bound algorithm for finding the 3D pose that maximizes the matching score. For this, 3D integral images of quantized HOGs are employed to evaluate in constant time the maximum attainable matching scores of individual model primitives. We applied our method to three different datasets of cars and achieved promising results with testing times as low as less than half a second."}
{"_id": "fd4a1f1ecb930518bc0d4a3f6f3ac72eeda88be9", "title": "Didier Dubey Suarez Medina Assessment of Web-based Information Security Awareness Courses", "text": "Information security awareness web-based courses are commonly recommended in cyber security strategies to help build a security culture capable of addressing information systems breaches caused by user mistakes whose negligence or ignorance of policies may endanger information systems assets. A research gap exists on the impact of Information Security Awareness Web-Based Courses: these are failing in changing to a significant degree the behavior of participants regarding compliance and diligence, which translates into continuous vulnerabilities. The aim of this work is to contribute with a theoretical and empirical analysis on the potential strengths and weaknesses of Information Security Awareness Web-Based Courses and with two practical tools readily applicable for designers and reviewers of web-based or mediatized courses on information security awareness and education. The research design seeks to respond two research questions. The first on the formulation of a minimum set of criteria that could be applied to Information Security Awareness Web-Based Courses, to support their real impact on employee\u2019s diligence and compliance, resulting in eleven criteria for courses\u2019 assessment and a checklist. The second, about a controlled experiment to explore the actual impact of an existing course, in respect to diligence and compliance using phishing emails as educational tools, that reaffirms the theoretical assumptions arrived to earlier. The development of minimum criteria and their systematic implementation pursue behavioral change, emphasizes the importance of disciplinary integration in cyber security research, and advocates for the development of a solid security culture of diligence and compliance, capable of supporting the protection of organizations from information system threats. The results gathered in this study suggest that achieving positive results in the existing information security tests that follow security awareness courses does not necessarily imply that diligence or information security policies compliance are affected. These preliminary findings accumulate evidence on the importance of implementing the recommendations formulated in this work."}
{"_id": "143f6b100c13d120e55c3e30e441c4abac7a5db2", "title": "An Analysis of Time-Dependent Planning", "text": "This paper presents a framework for exploring issues in time-dependent planning: planning in which the time available to respond to predicted events varies, and the decision making required to formulate effective responses is complex. Our analysis of time-dependent planning suggests an approach based on a class of algorithms that we call anytime algorithms. Anytime algorithms can be interrupted at any point during computation to return a result whose utility is a function of computation time. We explore methods for solving time-dependent planning problems based on the properties of anytime algorithms. Time-dependent planning is concerned with determining how best to respond to predicted events when the time available to make such determinations varies from situation to situation. In order to program a robot to react appropriately over a range of situations, we have to understand how to design effective algorithms for time-dependent planning. In this paper, we will be concerned primarily with understanding the properties of such algorithms, and providing a precise characterization of time-dependent planning. The issues we are concerned with arise either because the number of events that the robot has to contend with varies, and, hence, the time allotted to deliberating about any one event varies, or the observations that allow us to predict events precede the events they herald by varying amounts of time. The range of planning problems in which such complications occur is quite broad. Almost any situation that involves tracking objects of differing velocities will involve time-dependent planning (e.g., vehicle monitoring [Lesser and Corkill, 1983; Durfee, 19871, signal processing [Chung et al., 19871, and juggling [Donner and Jameson, 19861). S t t i ua ions where a system has to dynamically reevaluate its options [Fox and Smith, 1985; Dean, 19871 or delay committing to specific options until critical information arrives [Fox and Kempf, 19851 generally can be cast as time-dependent planning problems. To take a specific example, consider the problem faced by a stationary robot assigned the task of recognizing and intercepting or rerouting objects on a moving conveyor *This work was supported in part by the National Science Foundation under grant IRI-8612644 and by an IBM faculty development award. belt. Suppose that the robot\u2019s view of the conveyor is obscured at some point by a partition, and that someone on the other side of this partition places objects on the conveyor at irregular intervals. The robot\u2019s task requires that, between the time each object clears the partition and the time it reaches the end of the conveyor, it must classify the object and react appropriately. We assume that classification is computationally intensive, and that the longer the robot spends in analyzing an image, the more likely it is to make a correct classification. One can imagine a variety of reactions. The robot might simply have to push a button to direct each object into a bin intended for objects of a specific class; the time required for this sort of reaction is negligible. Alternatively, the robot might have to reach out and grasp certain objects and assemble them; the time required to react in this case will depend upon many factors. One can also imagine variations that exacerbate the timedependent aspects of the problem. 
For instance, it might take more time to classify certain objects, the number of objects placed on the conveyor might vary throughout the day, or the conveyor might speed up or slow down according to production demands. The important thing to note is, if the robot is to make optimal use of its time, it should be prepared to make decisions in situations where there is very little time to decide as well as to take advantage of situations where there is more than average time to decide. This places certain constraints on the design of the algorithms for performing classification, determining assembly sequences, and handling other inferential tasks. Traditional computer science concerns itself primarily with the complexity and correctness of algorithms. In most planning situations, however, there is no one correct answer, and having the right answer too late is tantamount to not having it at all. In dealing with potentially intractable problems, computer scientists are sometimes content with less than guaranteed solutions (e.g., answers that are likely correct and guaranteed computed in polynomial time (Monte Carlo algorithms), answers that are guaranteed correct and likely computed in polynomial time (Las Vegas algorithms), answers that are optimal within some factor and computed in polynomial time (approximation algorithms). While we regard this small concession to reality as encouraging, it doesn\u2019t begin to address the problems in time-dependent planning. For many planning tasks, polynomial performance is not sufficient; we need algorithms that compute the best answers they can in the time they have available. Planning is concerned with reasoning about whether to act and how. Scheduling is concerned with reasoning about Dean and Boddy 49 From: AAAI-88 Proceedings. Copyright \u00a91988, AAAI (www.aaai.org). All rights reserved."}
{"_id": "e803c3a56d7accc79c0d66847075ebc83bc6bd97", "title": "LSTM-CF: Unifying Context Modeling and Fusion with LSTMs for RGB-D Scene Labeling", "text": "Our model achieves the state-of-the-art performance on SUNRGBD, NYUdepth v2. It can be leveraged to improve the groundtruth annotations of newly captured 3943 RGB-D images in the SUNRGBD dataset. 2.2% improvement over previous state-of-the-art average accuracy and best on 15 categories Photometric data is vital for scene labeling while depth is an auxiliary information source. LSMT-CF model is important for fusing context features from different sources."}
{"_id": "01cb3f168a2cc811f6a79c4f0508f769002a49d5", "title": "RGB-D Based Action Recognition with Light-weight 3D Convolutional Networks", "text": "Different from RGB videos, depth data in RGB-D videos provide key complementary information for tristimulus visual data which potentially could achieve accuracy improvement for action recognition. However, most of the existing action recognition models solely using RGB videos limit the performance capacity. Additionally, the state-of-the-art action recognition models, namely 3D convolutional neural networks (3D-CNNs) contain tremendous parameters suffering from computational inefficiency. In this paper, we propose a series of 3D lightweight architectures for action recognition based on RGB-D data. Compared with conventional 3D-CNN models, the proposed lightweight 3D-CNNs have considerably less parameters involving lower computation cost, while it results in favorable recognition performance. Experimental results on two public benchmark datasets show that our models can approximate or outperform the state-of-the-art approaches. Specifically, on the RGB+DNTU (NTU) dataset, we achieve 93.2% and 97.6% for crosssubject and cross-view measurement, and on the NorthwesternUCLA Multiview Action 3D (N-UCLA) dataset, we achieve 95.5% accuracy of cross-view."}
{"_id": "bc2852fa0a002e683aad3fb0db5523d1190d0ca5", "title": "Learning from Ambiguously Labeled Face Images", "text": "Learning a classifier from ambiguously labeled face images is challenging since training images are not always explicitly-labeled. For instance, face images of two persons in a news photo are not explicitly labeled by their names in the caption. We propose a Matrix Completion for Ambiguity Resolution (MCar) method for predicting the actual labels from ambiguously labeled images. This step is followed by learning a standard supervised classifier from the disambiguated labels to classify new images. To prevent the majority labels from dominating the result of MCar, we generalize MCar to a weighted MCar (WMCar) that handles label imbalance. Since WMCar outputs a soft labeling vector of reduced ambiguity for each instance, we can iteratively refine it by feeding it as the input to WMCar. Nevertheless, such an iterative implementation can be affected by the noisy soft labeling vectors, and thus the performance may degrade. Our proposed Iterative Candidate Elimination (ICE) procedure makes the iterative ambiguity resolution possible by gradually eliminating a portion of least likely candidates in ambiguously labeled faces. We further extend MCar to incorporate the labeling constraints among instances when such prior knowledge is available. Compared to existing methods, our approach demonstrates improvements on several ambiguously labeled datasets."}
{"_id": "90d07df2d165b034e38ec04b3f6343d483f6cb38", "title": "Using Generative Adversarial Networks to Design Shoes : The Preliminary Steps", "text": "In this paper, we envision a Conditional Generative Adversarial Network (CGAN) designed to generate shoes according to an input vector encoding desired features and functional type. Though we do not build the CGAN, we lay the foundation for its completion by exploring 3 areas. Our dataset is the UT-Zap50K dataset, which has 50,025 images of shoes categorized by functional type and with relative attribute comparisons. First, we experiment with several models to build a stable Generative Adversarial Network (GAN) trained on just athletic shoes. Then, we build a classifier based on GoogLeNet that is able to accurately categorize shoe images into their respective functional types. Finally, we explore the possibility of creating a binary classifier for each attribute in our dataset, though we are ultimately limited by the quality of the attribute comparisons provided. The progress made by this study will provide a robust base to create a conditional GAN that generates customized shoe designs."}
{"_id": "2d8683044761263c0654466b205e0c3c46428054", "title": "A confirmatory factor analysis of IS employee motivation and retention", "text": "It is widely recognized that the relationships between organizations and their IS departments are changing. This trend threatens to undermine the retention of IS workers and the productivity of IS operations. In the study reported here, we examine IS employees' motivation and intent to remain using structural equation modeling. A survey was conducted among existing IS employees and analyzed with LISREL VIII. Results showed that latent motivation has an impact on latent retention, with job satisfaction and perceptions of management on career development as indicator variables for the former, and burnout, loyalty, and turnover intent as indicator variables for the latter. Implications for management in developing strategies for the retention of IS employees are provided. # 2001 Elsevier Science B.V. All rights reserved."}
{"_id": "2f965b1d0017f0c01e79eb40b8faf3b72b199581", "title": "Living with Seal Robots in a Care House - Evaluations of Social and Physiological Influences", "text": "Robot therapy for elderly residents in a care house has been conducted since June 2005. Two therapeutic seal robots were introduced and activated for over 9 hours every day to interact with the residents. This paper presents the results of this experiment. In order to investigate the psychological and social effects of the robots, the activities of the residents in public areas were recorded by video cameras during daytime hours (8:30-18:00) for over 2 months. In addition, the hormones 17-Ketosteroid sulfate (17-KS-S) and 17-hydroxycorticosteroids (17-OHCS) in the residents' urine were obtained and analyzed. The results indicate that their social interaction was increased through interaction with the seal robots. Furthermore, the results of the urinary tests showed that the reactions of the subjects' vital organs to stress improved after the introduction of the robots"}
{"_id": "0ab99aa04e3a8340a7552355fb547374a5604b24", "title": "Guest Editorial Deep Learning in Medical Imaging: Overview and Future Promise of an Exciting New Technique", "text": "D EEP learning is a growing trend in general data analysis and has been termed one of the 10 breakthrough technologies of 2013 [1]. Deep learning is an improvement of artificial neural networks, consisting of more layers that permit higher levels of abstraction and improved predictions from data [2]. To date, it is emerging as the leading machine-learning tool in the general imaging and computer vision domains. In particular, convolutional neural networks (CNNs) have proven to be powerful tools for a broad range of computer vision tasks. Deep CNNs automatically learn mid-level and high-level abstractions obtained from raw data (e.g., images). Recent results indicate that the generic descriptors extracted from CNNs are extremely effective in object recognition and localization in natural images. Medical image analysis groups across the world are quickly entering the field and applying CNNs and other deep learning methodologies to a wide variety of applications. Promising results are emerging. In medical imaging, the accurate diagnosis and/or assessment of a disease depends on both image acquisition and image interpretation. Image acquisition has improved substantially over recent years, with devices acquiring data at faster rates and increased resolution. The image interpretation process, however, has only recently begun to benefit from computer technology. Most interpretations of medical images are performed by physicians; however, image interpretation by humans is limited due to its subjectivity, large variations across interpreters, and fatigue. Many diagnostic tasks require an initial search process to detect abnormalities, and to quantify measurements and changes over time. Computerized tools, specifically image analysis and machine learning, are the key enablers to improve diagnosis, by facilitating identification of the findings that require treatment and to support the expert\u2019s workflow. Among these tools, deep learning is rapidly proving to be the state-of-the-art foundation, leading to improved accuracy. It has also opened up new frontiers in data analysis with rates of progress not before experienced."}
{"_id": "75a850e035a7599ecf69d560d6f593d0443d1ef4", "title": "Privacy in e-commerce: examining user scenarios and privacy preferences", "text": "Privacy is a necessary concern in electronic commerce. It is difficult, if not impossible, to complete a transaction without revealing some personal data \u2013 a shipping address, billing information, or product preference. Users may be unwilling to provide this necessary information or even to browse online if they believe their privacy is invaded or threatened. Fortunately, there are technologies to help users protect their privacy. P3P (Platform for Privacy Preferences Project) from the World Wide Web Consortium is one such technology. However, there is a need to know more about the range of user concerns and preferences about privacy in order to build usable and effective interface mechanisms for P3P and other privacy technologies. Accordingly, we conducted a survey of 381 U.S. Net users, detailing a range of commerce scenarios and examining the participants' concerns and preferences about privacy. This paper presents both the findings from that study as well as their design implications."}
{"_id": "e6334789ec6d43664f8f164a462461a4408243ba", "title": "Hyperspectral image classification and dimensionality reduction: an orthogonal subspace projection approach", "text": "Abstruct-Most applications of hyperspectral imagery require processing techniques which achieve two fundamental goals: 1) detect and classify the constituent materials for each pixel in the scene; 2) reduce the data volumeldimensionality, without loss of critical information, so that it can be processed efficiently and assimilated by a human analyst. In this paper, we describe a technique which simultaneously reduces the data dimensionality, suppresses undesired or interfering spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel vector onto a subspace which is orthogonal to the undesired signatures. This operation is an optimal interference suppression process in the least squares sense. Once the interfering signatures have been nulled, projecting the residual onto the signature of interest maximizes the signal-to-noise ratio and results in a single component image that represents a classification for the signature of interest. The orthogonal subspace projection (OSP) operator can be extended to k signatures of interest, thus reducing the dimensionality of k and classifying the hyperspectral image simultaneously. The approach is applicable to both spectrally pure as well as mixed pixels."}
{"_id": "7207e0a58f6c36b7860574d5568924a5a3e9c51b", "title": "Flow Wars: Systemizing the Attack Surface and Defenses in Software-Defined Networks", "text": "Emerging software defined network SDN stacks have introduced an entirely new attack surface that is exploitable from a wide range of launch points. Through an analysis of the various attack strategies reported in prior work, and through our own efforts to enumerate new and variant attack strategies, we have gained two insights. First, we observe that different SDN controller implementations, developed independently by different groups, seem to manifest common sets of pitfalls and design weakness that enable the extensive set of attacks compiled in this paper. Second, through a principled exploration of the underlying design and implementation weaknesses that enables these attacks, we introduce a taxonomy to offer insight into the common pitfalls that enable SDN stacks to be broken or destabilized when fielded within hostile computing environments. This paper first captures our understanding of the SDN attack surface through a comprehensive survey of existing SDN attack studies, which we extend by enumerating 12 new vectors for SDN abuse. We then organize these vulnerabilities within the well-known confidentiality, integrity, and availability model, assess the severity of these attacks by replicating them in a physical SDN testbed, and evaluate them against three popular SDN controllers. We also evaluate the impact of these attacks against published SDN defense solutions. Finally, we abstract our findings to offer the research and development communities with a deeper understanding of the common design and implementation pitfalls that are enabling the abuse of SDN networks."}
{"_id": "ec03905f9a87f0e1e071b16bdac4bc26619a7f2e", "title": "Psychopathy and the DSM-IV criteria for antisocial personality disorder.", "text": "The Axis II Work Group of the Task Force on DSM-IV has expressed concern that antisocial personality disorder (APD) criteria are too long and cumbersome and that they focus on antisocial behaviors rather than personality traits central to traditional conceptions of psychopathy and to international criteria. We describe an alternative to the approach taken in the rev. 3rd ed. of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III-R; American Psychiatric Association, 1987), namely, the revised Psychopathy Checklist. We also discuss the multisite APD field trials designed to evaluate and compare four criteria sets: the DSM-III-R criteria, a shortened list of these criteria, the criteria for dyssocial personality disorder from the 10th ed. of the International Classification of Diseases (World Health Organization, 1990), and a 10-item criteria set for psychopathic personality disorder derived from the revised Psychopathy Checklist."}
{"_id": "ae13b99c809922ce555fd5d968b9b3dd389f8c61", "title": "Annular-ring CMUT arrays for forward-looking IVUS: transducer characterization and imaging", "text": "In this study, a 64-element, 1.15-mm diameter annular-ring capacitive micromachined ultrasonic transducer (CMUT) array was characterized and used for forward-looking intravascular ultrasound (IVUS) imaging tests. The array was manufactured using low-temperature processes suitable for CMOS electronics integration on a single chip. The measured radiation pattern of a 43 /spl times/ 140-/spl mu/m array element depicts a 40/spl deg/ view angle for forward-looking imaging around a 15-MHz center frequency in agreement with theoretical models. Pulse-echo measurements show a -10-dB fractional bandwidth of 104% around 17 MHz for wire targets 2.5 mm away from the array in vegetable oil. For imaging and SNR measurements, RF A-scan data sets from various targets were collected using an interconnect scheme forming a 32-element array configuration. An experimental point spread function was obtained and compared with simulated and theoretical array responses, showing good agreement. Therefore, this study demonstrates that annular-ring CMUT arrays fabricated with CMOS-compatible processes are capable of forward-looking IVUS imaging, and the developed modeling tools can be used to design improved IVUS imaging arrays."}
{"_id": "73f8d428fa37bc677d6e08e270336e066432c8c9", "title": "Interactive Robots as Social Partners and Peer Tutors for Children: A Field Trial", "text": "Robots increasingly have the potential to interact with people in daily life. It is believed that, based on this ability, they will play an essential role in human society in the not-so-distant future. This article examined the proposition that robots could form relationships with children and that children might learn from robots as they learn from other children. In this article, this idea is studied in an 18-day field trial held at a Japanese elementary school. Two English-speaking \u201cRobovie\u201d robots interacted with firstand sixth-grade pupils at the perimeter of their respective classrooms. Using wireless identification tags and sensors, these robots identified and interacted with children who came near them. The robots gestured and spoke English with the children, using a vocabulary of about 300 sentences for speaking and 50 words for recognition. The children were given a brief picture\u2013word matching English test at the start of the trial, af62 KANDA, HIRANO, EATON, ISHIGURO"}
{"_id": "c4f19894a279d10aa2001facc7d35516e8d654ff", "title": "The role of physical embodiment in human-robot interaction", "text": "Autonomous robots are agents with physical bodies that share our environment. In this work, we test the hypothesis that physical embodiment has a measurable effect on performance and perception of social interactions. Support of this hypothesis would suggest fundamental differences between virtual agents and robots from a social standpoint and have significant implications for human-robot interaction. We measure task performance and perception of a robot's social abilities in a structured but open-ended task based on the Towers of Hanoi puzzle. Our experiment compares aspects of embodiment by evaluating: (1) the difference between a physical robot and a simulated one; (2) the effect of physical presence through a co-located robot versus a remote tele-present robot. We present data from a pilot study with 12 subjects showing interesting differences in perception of remote physical robot's and simulated agent's attention to the task, and task enjoyment"}
{"_id": "1d7383438d2ad4705fcfae4c9c56b5ac8a86d3f3", "title": "Representations in Distributed Cognitive Tasks", "text": "In this paper we propose a theoretical framework of distributed representations and a methodology of representational analysis for the study of distributed cognitive tasks\u00d1tasks that require the processing of information distributed across the internal mind and the external environment. The basic principle of distributed representations is that the representational system of a distributed cognitive task is a set of internal and external representations, which together represent the abstract structure of the task. The basic strategy of representational analysis is to decompose the representation of a hierarchical task into its component levels so that the representational properties at each level can be independently examined. The theoretical framework and the methodology are used to analyze the hierarchical structure of the Tower of Hanoi problem. Based on this analysis, four experiments are designed to examine the representational properties of the Tower of Hanoi. Finally, the nature of external representations is discussed. This research was supported by a grant to Donald Norman and Edwin Hutchins from the Ames Research Center of the National Aeronautics & Space Agency, Grant NCC 2-591 in the Aviation Safety/Automation Program, technical monitor, Everett Palmer. Additional support was provided by funds from the Apple Computer Company and the Digital Equipment Corporation to the Affiliates of Cognitive Science at"}
{"_id": "563ea3682f1e3d14dee418fe1fbb52f433185242", "title": "The usability of everyday technology: emerging and fading opportunities", "text": "Current work in the field of usability tends to focus on snapshots of use as the basis for evaluating designs. However, giving due consideration to the fact that everyday use of technology involves a process of evolution, we set out to investigate how the design of the technology may be used to support this. Based on a long-term empirical study of television use in the homes of two families, we illustrate how use continuously develops in a complex interplay between the users' expectations---as they are formed and triggered by the design---and the needs and context of use per se. We analyze the empirical data from the perspective of activity theory. This framework serves to highlight how use develops, and it supports our analysis and discussion about how design, the users' backgrounds, previous experience, and needs, and the specific context of use supports or hinders the development of use. Moreover, we discuss how the characteristics of the home settings, in which the televisions studied were situated, represent a challenge to usability work. The concluding discussion leads to a set of hypotheses relevant to designers and researchers who wish to tackle some of the aspects of usability of particular importance to development in the use of home technology."}
{"_id": "ce66e4006c9948b7ed080c239540dfd0746ff639", "title": "Are Robots Embodied ?", "text": "Embodiment has become an important concept in many areas of cognitive science. There are, however, very different notions of exactly what embodiment is and what kind of body is required for what kind of embodied cognition. Hence, while many would agree that humans are embodied cognizers, there is much less agreement on what kind of artefact could be considered as embodied. This paper identifies and contrasts five different notions of embodiment which can roughly be characterized as (1) structural coupling between agent and environment, (2) historical embodiment as the result of a history of structural coupling, (3) physical embodiment, (4) \u2018organismoid\u2019 embodiment, i.e. organism-like bodily form (e.g., humanoid robots), and (5) organismic embodiment of autopoietic, living systems."}
{"_id": "1ee728c9e75d89d371a5001db4326c2341792b7c", "title": "Single Cesium Lead Halide Perovskite Nanocrystals at Low Temperature: Fast Single-Photon Emission, Reduced Blinking, and Exciton Fine Structure.", "text": "Metal-halide semiconductors with perovskite crystal structure are attractive due to their facile solution processability, and have recently been harnessed very successfully for high-efficiency photovoltaics and bright light sources. Here, we show that at low temperature single colloidal cesium lead halide (CsPbX3, where X = Cl/Br) nanocrystals exhibit stable, narrow-band emission with suppressed blinking and small spectral diffusion. Photon antibunching demonstrates unambiguously nonclassical single-photon emission with radiative decay on the order of 250 ps, representing a significant acceleration compared to other common quantum emitters. High-resolution spectroscopy provides insight into the complex nature of the emission process such as the fine structure and charged exciton dynamics."}
{"_id": "5dc8b3e5b1d233067e5b34215a50aef173ed3fa3", "title": "Abstract Structures in Spatial Cognition", "text": "Structures in Spatial Cognition Christopher Habel and Carola Eschenbach University of Hamburg Abstract. The importance of studying spatial cognition in cognitive science is enforced by the fact that the applicability of spatial concepts and spatial expressions is not limited to the spatial domain. We claim that common structures underlying both concrete, physical space and other domains are the basis for using spatial expressions, e.g., prepositions like between, with respect to space as well as time or other domains. This claim opposes the thesis that the common use is based upon an analogy between concrete space and other domains. The development of geometry from Euclid\u2019s Elements to more differentiated systems of diverse geometries and topologies can be perceived of as an example of the transfer from modeling concrete space towards describing abstract spatial structures. The importance of studying spatial cognition in cognitive science is enforced by the fact that the applicability of spatial concepts and spatial expressions is not limited to the spatial domain. We claim that common structures underlying both concrete, physical space and other domains are the basis for using spatial expressions, e.g., prepositions like between, with respect to space as well as time or other domains. This claim opposes the thesis that the common use is based upon an analogy between concrete space and other domains. The development of geometry from Euclid\u2019s Elements to more differentiated systems of diverse geometries and topologies can be perceived of as an example of the transfer from modeling concrete space towards describing abstract spatial structures. 1 The Current Interest in Spatial Cognition: Spatial Representations and Spatial Concepts Human behavior is anchored in space and time. Spatial information, i.e., information about spatial properties of the entities in our environment, about spatial constellations in our surrounding, and about the spatial properties and relations of our bodies with respect to this surrounding, has a central position for human cognition. In the recognition of objects and events by different sensory channels, i.e., in visual, haptic or auditory perception, spatial information is involved. Motor behavior, i.e., locomotion and the movement of the body, is based on such information as well. Beyond perception and motor action, some higher cognitive activities that interact indirectly with the spatial environment are coupled with spatial information, for instance, memory, problem solving and planning (cf. Eilan et al. 1993). The interaction of spatial cognition and other cognitive faculties is also exemplified by the ability to communicate information about spatial properties of the external world, especially about objects or constellations not directly perceivable (cf. Freksa & Habel 1990). The cognitive science method of investigating and explaining cognition based on computation and representation has led to increasing research activities focusing on spatial representations and processes on such representations: * Parts of this paper are based on Habel & Eschenbach (1995). The research reported in this paper was carried out in connection to the project \u2018Axiomatik r\u00e4umlicher Konzepte\u2019 (Ha 1237/7) supported by the Deutsche Forschungsgemeinschaft (DFG). Thanks to an anonymous referee for comments and suggestions for improvements. 
Address: FB Informatik (AB WSV) and Graduiertenkolleg Kognitionswissenschaft, Universit\u00e4t Hamburg, Vogt-K\u00f6lln-Str. 30, D-22527 Hamburg. {habel, eschenbach}@informatik.uni-hamburg.de. \u2022 In cognitive psychology spatial concepts are basic for thinking about objects and situations in physical space and therefore the necessary constituents for the integration of the central higher cognitive faculties with sensory, motor and linguistic faculties (cf. Miller 1978, Landau & Jackendoff 1993). \u2022 In linguistics spatial concepts are discussed as basic ingredients of the \u2013 mental \u2013 lexicon; the linguistic approach of cognitive grammar \u2013 with Lakoff, Langacker and Talmy as its most influential advocates \u2013 is committed to space as the foundation for semantics and for the general grammatical system. \u2022 In Artificial Intelligence spatial concepts are the basis for developing representational formalisms for processing spatial knowledge; for example, calculi of Qualitative Spatial Reasoning differ with respect to what their primitive terms are and which spatial expressions are definable on this basis (See, e.g., Freksa 1992, Hern\u00e1ndez 1994, and Randell et al. 1992, Schlieder 1995a, b). In the cognitive grammar approach as well as in most other discussions on spatial information the question what the basis of using the term spatial is seems to allow a simple answer: Space is identified with three-dimensional physical space. And by this, the concrete, physical space of our environment is seen as the conceptual and semantic basis for a wide range of linguistic and non-linguistic cognition. Accordingly, spatial concepts concern size, shape or relative location of objects in three-dimensional physical space. This view is based on the judgment that direct interaction with concrete, physical space is the core of our experience and therefore of our knowledge. Our spatial experience leads to groupings among spatial concepts depending on geometrical types of spatial characterizations. Examples of such groups, each of them corresponding to types of experience, are: topological concepts \u2022 based on relations between regions and their boundaries \u2022 invariant with respect to elastic transformations concepts of ordering \u2022 based on relations between objects or regions with respect to the relative position in a spatial constellation. \u2022 independent of the extensions and distances of the objects and regions in question metric concepts \u2022 include measures of distance and size of objects and regions. 1 Following Miller (1978) we assume that the conceptual structure includes different types of concepts, e.g. concepts for objects, properties and relations. In this sense, spatial relations like touching or betweenness correspond to relational concepts, while shape properties like being round or angular correspond to predicative concepts. 2 See, e.g., Lakoff (1987), Langacker (1986), Talmy (1983). Note that in the initial phase of this framework the term space grammar was used (Langacker 1982). 3 Although it is often claimed that physical space is not concrete but an abstraction based on spatial properties and relations of material bodies, we will use the term concrete space to refer to physical space in contrast to abstract spatial structures referring to less restricted structures (such as topological, ordering or metric spaces) underlying physical space. 
Aspects of this division are reflected by the contrast between qualitative and quantitative spatial reasoning. The means of qualitative spatial reasoning are in many cases restricted to topological terms (e.g., Randell et al. 1992) and terms of ordering (e.g., Schlieder 1995a, b). An independent dimension of analysis concerns the distinction between concept types and types of spatial entities: On the one hand we deal with spatial relations between objects, their location or relative position. Characteristically these relations are independent of shape and extension, such that they can apply to points idealizing the place of the objects. On the other hand we are concerned with shape properties of extended objects or regions independent of their location. Shape properties are coined by the relative position of object parts among themselves. Concepts of object orientation combine both, shape of extended entities and spatial relations of their parts and other objects. These types of spatial concepts can be subdivided according to the dimension of geometrical type. As a third dimension in classifying spatial concepts, we distinguish between static and dynamic concepts. Whereas the former concern properties and relations without consideration of time and change, the latter reflect the possibility of changes of spatial properties and relations over time. Since trajectories or paths of locomotion processes are extended spatial entities, spatial concepts are applicable to them as well as to material or geographical entities. (See Table 1 for an exemplification of some sections according to the three-dimensional classification scheme)."}
{"_id": "5343b6d5c9f3a2c4d9648991162a6cc13c1c5e70", "title": "DA-GAN: Instance-Level Image Translation by Deep Attention Generative Adversarial Networks", "text": "Unsupervised image translation, which aims in translating two independent sets of images, is challenging in discovering the correct correspondences without paired data. Existing works build upon Generative Adversarial Networks (GANs) such that the distribution of the translated images are indistinguishable from the distribution of the target set. However, such set-level constraints cannot learn the instance-level correspondences (e.g. aligned semantic parts in object transfiguration task). This limitation often results in false positives (e.g. geometric or semantic artifacts), and further leads to mode collapse problem. To address the above issues, we propose a novel framework for instance-level image translation by Deep Attention GAN (DA-GAN). Such a design enables DA-GAN to decompose the task of translating samples from two sets into translating instances in a highly-structured latent space. Specifically, we jointly learn a deep attention encoder, and the instance-level correspondences could be consequently discovered through attending on the learned instances. Therefore, the constraints could be exploited on both set-level and instance-level. Comparisons against several state-of-the-arts demonstrate the superiority of our approach, and the broad application capability, e.g, pose morphing, data augmentation, etc., pushes the margin of domain translation problem.1"}
{"_id": "38f88e1f1ca23b2a5cd226b5b72b1933637d16e8", "title": "Automatic detection and rapid determination of earthquake magnitude by wavelet multiscale analysis of the primary arrival", "text": "Earthquake early warning systems must save lives. It is of great importance that networked systems of seismometers be equipped with reliable tools to make rapid determinations of earthquake magnitude in the few to tens of seconds before the damaging ground motion occurs. A new fully automated algorithm based on the discrete wavelet transform detects as well as analyzes the incoming first arrival with great accuracy and precision, estimating the final magnitude to within a single unit from the first few seconds of the P wave. \u00a9 2006 Elsevier B.V. All rights reserved."}
{"_id": "ac5342a7973f246af43634994ab288a9193d1131", "title": "High-Accuracy Tracking Control of Hydraulic Rotary Actuators With Modeling Uncertainties", "text": "Structured and unstructured uncertainties are the main obstacles in the development of advanced controllers for high-accuracy tracking control of hydraulic servo systems. For the structured uncertainties, nonlinear adaptive control can be employed to achieve asymptotic tracking performance. But modeling errors, such as nonlinear frictions, always exist in physical hydraulic systems and degrade the tracking accuracy. In this paper, a robust integral of the sign of the error controller and an adaptive controller are synthesized via backstepping method for motion control of a hydraulic rotary actuator. In addition, an experimental internal leakage model of the actuator is built for precise model compensation. The proposed controller accounts for not only the structured uncertainties (i.e., parametric uncertainties), but also the unstructured uncertainties (i.e., nonlinear frictions). Furthermore, the controller theoretically guarantees asymptotic tracking performance in the presence of various uncertainties, which is very important for high-accuracy tracking control of hydraulic servo systems. Extensive comparative experimental results are obtained to verify the high-accuracy tracking performance of the proposed control strategy."}
{"_id": "f76c947a3afe92269b87f24f9bcc060b7072603f", "title": "Computationally efficient leakage inductance calculation for a high-frequency core-type transformer", "text": "Leakage inductance is a critical design attribute of high-frequency transformers. In this paper, a new method to estimate core-type transformer leakage inductance is proposed. The magnetic analysis uses a combination of analytical and numerical analysis. A two-dimensional magnetic analysis using Biot-Savart law and the method of images is introduced. Using the same field analysis, expressions for mean squared field intensity values necessary for estimating proximity effect loss in the windings are also derived."}
{"_id": "fb2d5460fd0291552e5449b7d2c42667e7a751ae", "title": "View-Independent Recognition of Hand Postures", "text": "In Proc. of IEEE Conf. on CVPR\u20192000, Vol.II, pp.88-94, Hilton Head Island, SC, 2000 Since human hand is highly articulated and deformable, hand posture recognition is a challenging example in the research of view-independent object recognition. Due to the difficulties of the modelbased approach, the appearance-based learning approach is promising to handle large variation in visual inputs. However, the generalization of many proposed supervised learning methods to this problem often suffers from the insufficiency of labeled training data. This paper describes an approach to alleviate this difficulty by adding a large unlabeled training set. Combining supervised and unsupervised learning paradigms, a novel and powerful learning approach, the Discriminant-EM (D-EM) algorithm, is proposed in this paper to handle the case of small labeled training set. Experiments show that D-EM outperforms many other learning methods. Based on this approach, we implement a gesture interface to recognize a set of predefined gesture commands, and it is also extended to hand detection. This algorithm can also apply to other object recognition tasks."}
{"_id": "a14bb15298961545b83e7c7cefff0e7af79828f7", "title": "A Survey of CPU-GPU Heterogeneous Computing Techniques", "text": "As both CPUs and GPUs become employed in a wide range of applications, it has been acknowledged that both of these Processing Units (PUs) have their unique features and strengths and hence, CPU-GPU collaboration is inevitable to achieve high-performance computing. This has motivated a significant amount of research on heterogeneous computing techniques, along with the design of CPU-GPU fused chips and petascale heterogeneous supercomputers. In this article, we survey Heterogeneous Computing Techniques (HCTs) such as workload partitioning that enable utilizing both CPUs and GPUs to improve performance and/or energy efficiency. We review heterogeneous computing approaches at runtime, algorithm, programming, compiler, and application levels. Further, we review both discrete and fused CPU-GPU systems and discuss benchmark suites designed for evaluating Heterogeneous Computing Systems (HCSs). We believe that this article will provide insights into the workings and scope of applications of HCTs to researchers and motivate them to further harness the computational powers of CPUs and GPUs to achieve the goal of exascale performance."}
{"_id": "176fd97ad01bcbd27214502110e6054c6923262c", "title": "Software engineering for big data projects: Domains, methodologies and gaps", "text": "Context: Big data has become the new buzzword in the information and communication technology industry. Researchers and major corporations are looking into big data applications to extract the maximum value from the data available to them. However, developing and maintaining stable and scalable big data applications is still a distant milestone. Objective: To look at existing research on how software engineering concepts, namely the phases of the software development project life cycle (SDPLC), can help build better big data application projects. Method: A literature survey was performed. A manual search covered papers returned by search engines resulting in approximately 2,000 papers being searched and 170 papers selected for review. Results: The search results helped in identifying data rich application projects that have the potential to utilize big data successfully. The review helped in exploring SDPLC phases in the context of big data applications and performing a gap analysis of the phases that have yet to see detailed research efforts but deserve attention."}
{"_id": "94097d89173928e11fc20f851cf05c9138cbfbb3", "title": "Biology of language-The epistemology of reality", "text": "2. What processes take place in a linguistic interaction that permit an organism (us) to describe and to predict events that it may experience? This is my way of honoring the memory of Eric H. Lenneberg, if one honors the memory of another scientist by speaking about one\u2019s own work Whatever the case, I wish to honor his memory not only because of his great accomplishments, but also because he was capable of inspiring his students, as the symposium on which this book is based revealed. The only way I can do this is to accept the honor of presenting my views about biology, language, and reality. I shall, accordingly, speak about language as a biologist. In doing so, I shall use language, notwithstanding that this use of language to speak about language is within the core of the problem I wish to consider."}
{"_id": "f1526054914997591ffdb8cd523bea219ce7a26e", "title": "Statistical significance versus clinical relevance.", "text": "In March this year, the American Statistical Association (ASA) posted a statement on the correct use of P-values, in response to a growing concern that the P-value is commonly misused and misinterpreted. We aim to translate these warnings given by the ASA into a language more easily understood by clinicians and researchers without a deep background in statistics. Moreover, we intend to illustrate the limitations of P-values, even when used and interpreted correctly, and bring more attention to the clinical relevance of study findings using two recently reported studies as examples. We argue that P-values are often misinterpreted. A common mistake is saying that P\u2009<\u20090.05 means that the null hypothesis is false, and P\u2009\u22650.05 means that the null hypothesis is true. The correct interpretation of a P-value of 0.05 is that if the null hypothesis were indeed true, a similar or more extreme result would occur 5% of the times upon repeating the study in a similar sample. In other words, the P-value informs about the likelihood of the data given the null hypothesis and not the other way around. A possible alternative related to the P-value is the confidence interval (CI). It provides more information on the magnitude of an effect and the imprecision with which that effect was estimated. However, there is no magic bullet to replace P-values and stop erroneous interpretation of scientific results. Scientists and readers alike should make themselves familiar with the correct, nuanced interpretation of statistical tests, P-values and CIs."}
{"_id": "c44c877ad48f587baa49c79d528d1be6448256b1", "title": "Derivation of Human Trophoblast Stem Cells.", "text": "Trophoblast cells play an essential role in the interactions between the fetus and mother. Mouse trophoblast stem (TS) cells have been derived and used as the best in\u00a0vitro model for molecular and functional analysis of mouse trophoblast lineages, but attempts to derive human TS cells have so far been unsuccessful. Here we show that activation of Wingless/Integrated (Wnt) and EGF and inhibition of TGF-\u03b2, histone deacetylase (HDAC), and Rho-associated protein kinase (ROCK) enable long-term culture of human villous cytotrophoblast (CT) cells. The resulting cell lines have the capacity to give rise to the three major trophoblast lineages, which show transcriptomes similar to those of the corresponding primary trophoblast cells. Importantly, equivalent cell lines can be derived from human blastocysts. Our data strongly suggest that the CT- and blastocyst-derived cell lines are human TS cells, which will provide a powerful tool to study human trophoblast development and function."}
{"_id": "4cd6e2a4e9212c7e47e50bddec92f09095b67c2d", "title": "A Bidirectional-Switch-Based Wide-Input Range High-Efficiency Isolated Resonant Converter for Photovoltaic Applications", "text": "Modular photovoltaic (PV) power conditioning systems (PCSs) require a high-efficiency dc-dc converter stage capable of regulation over a wide input voltage range for maximum power point tracking. In order to mitigate ground leakage currents and to be able to use a high-efficiency, nonisolated grid-tied inverter, it is also desirable for this microconverter to provide galvanic isolation between the PV module and the inverter. This paper presents a novel, isolated topology that is able to meet the high efficiency over a wide input voltage range requirement. This topology yields high efficiency through low circulating currents, zero-voltage switching (ZVS) and low-current switching of the primary side devices, ZCS of the output diodes, and direct power transfer to the load for the majority of switching cycle. This topology is also able to provide voltage regulation through basic fixed-frequency pulsewidth modulated (PWM) control. These features are able to be achieved with the simple addition of a secondary-side bidirectional ac switch to the isolated series resonant converter. Detailed analysis of the operation of this converter is discussed along with a detailed design procedure. Experimental results of a 300-W prototype are given. The prototype reached a peak power stage efficiency of 98.3% and a California Energy Commission (CEC) weighted power stage efficiency of 98.0% at the nominal input voltage."}
{"_id": "86cace71c7626d46a362963b5f354132effae16d", "title": "Detection of Red Tomato on Plants using Image Processing Techniques", "text": "Tomatoes are the best-known grown fruit in greenhouses that have been recently attempted to be picked up automatically. Tomato is a plant which its fruit does not ripe simultaneously, therefore it is necessary to develop an algorithm to distinguish red tomatoes. In the current study, a new segmentation algorithm based on region growing was proposed for guiding a robot to pick up red tomatoes. For this purpose, several colour images of tomato plants were acquired in a greenhouse. The colour images of tomato were captured under natural light, without any artificial lighting equipment. To recognize red tomatoes form non-red ones, at first background of images were removed. For removing the background, subtraction of red and green components (R-G) was applied. Usually tomatoes touch together, so separating touching tomatoes was next step. In this step, the watershed algorithm was used that was followed by improving process. Afterwards, red tomato was detected by the region growing approach. Results obtained from testing the developed algorithm showed an encouraging accuracy (82.38%) to develop an expert system for online recognition of red"}
{"_id": "a83aac2a7a136fcaa1c076dd6e3abdd720fbb520", "title": "A Novel Metal-Plate Monopole Antenna for DTV Application", "text": "In this paper, a novel metal-plate monopole antenna is presented for indoor digital television (DTV) signal coverage in the 468-880MHz band. The proposed antenna consists of two orthorhombic metal-plates with symmetrical bevels, a top-uploaded metal-plate and a ground plane. Simulation results show that the antenna achieves a bandwidth ranging from 468MHz to 880MHz which covers the DTV band (470-860MHZ) and shows stable radiation patterns in three coordinate planes."}
{"_id": "501363eb1c55d30e6151d99269afc5aa31f8a0c8", "title": "Plot Induction and Evolutionary Search for Story Generation", "text": "In this paper we develop a story generator that leverages knowledge inherent in corpora without requiring extensive manual involvement. A key feature in our approach is the reliance on a story planner which we acquire automatically by recording events, their participants, and their precedence relationships in a training corpus. Contrary to previous work our system does not follow a generate-and-rank architecture. Instead, we employ evolutionary search techniques to explore the space of possible stories which we argue are well suited to the story generation task. Experiments on generating simple children\u2019s stories show that our system outperforms previous data-driven approaches."}
{"_id": "d781442d1ab4345865c068dfbfead772d777ca19", "title": ", GT-GOLEM-2011-001 , 2011 Ach : IPC for Real-Time Robot Control", "text": "We present a new Inter-Process Communication (IPC) mechanism and library. Ach is uniquely suited for coordinating perception, control drivers, and algorithms in real-time systems that sample data from physical processes. Ach eliminates the Head-of-Line Blocking problem for applications that always require access to the newest message. Ach is efficient, robust, and formally verified. It has been tested and demonstrated on a variety of physical robotic systems. Finally, the source code for Ach is available under an Open Source BSD-style license."}
{"_id": "a4fc8436f0b5b9bfa5803ea945d6ccfeaa6b0376", "title": "CNNPred: CNN-based stock market prediction using several data sources", "text": "Feature extraction from financial data is one of the most important problems in market prediction domain for which many approaches have been suggested. Among other modern tools, convolutional neural networks (CNN) have recently been applied for automatic feature selection and market prediction. However, in experiments reported so far, less attention has been paid to the correlation among different markets as a possible source of information for extracting features. In this paper, we suggest a CNN-based framework with specially designed CNNs, that can be applied on a collection of data from a variety of sources, including different markets, in order to extract features for predicting the future of those markets. The suggested framework has been applied for predicting the next days direction of movement for the indices of S&P 500, NASDAQ, DJI, NYSE, and RUSSELL markets based on various sets of initial features. The evaluations show a significant improvement in predictions performance compared to the state of the art baseline algorithms."}
{"_id": "16c527f74f029be1d6526175b7cc2d938f7e7c69", "title": "Deep Learning for Multi-label Classification", "text": "In multi-label classification, the main focus has been to develop ways of learning the underlying dependencies between labels, and to take advantage of this at classification time. Developing better feature-space representations has been predominantly employed to reduce complexity, e.g., by eliminating non-helpful feature attributes from the input space prior to (or during) training. This is an important task, since many multilabel methods typically create many different copies or views of the same input data as they transform it, and considerable memory can be saved by taking advantage of redundancy. In this paper, we show that a proper development of the feature space can make labels less interdependent and easier to model and predict at inference time. For this task we use a deep learning approach with restricted Boltzmann machines. We present a deep network that, in an empirical evaluation, outperforms a number of competitive methods from the literature."}
{"_id": "6a2d7f6431574307404b38c07e845b763b577e64", "title": "A 69.8 dB SNDR 3rd-order Continuous Time Delta-Sigma Modulator with an Ultimate Low Power Tuning System for a Worldwide Digital TV-Receiver", "text": "This paper presents a 3rd-order continuous time delta-sigma modulator for a worldwide digital TV-receiver whose SNDR is 69.8 dB. An ultimate low power tuning system using RC-relaxation oscillator is developed in order to achieve high yield against PVT variations. A 3rd-order modulator with modified single opamp resonator contributes to cost reduction by realizing very compact circuit. The mechanism to occur 2nd-order harmonic distortion at current feedback DAC was analyzed and a reduction scheme of the distortion enabled the modulator to achieved FOM of 0.18 pJ/conv-step."}
{"_id": "af7bec24fc78d1b87c0c3807f447dc9eed0d261b", "title": "A comparison of spotlight synthetic aperture radar image formation techniques", "text": "Spotlight synthetic aperture radar images can be formed from the complex phase history data using two main techniques: 1) polar-to-Cartesian interpolation followed by twodimensional inverse Fourier transform (2DFFT) , and 2) convolution backprojection (CBP) . CBP has been widely used to reconstruct medical images in computer aided tomography, and only recently has been applied to form synthetic aperture radar imagery. It is alleged that CBP yields higher quality images because 1) all the Fourier data are used and 2) the polar formatted data is used directly to form a 2D Cartesian image and therefore 2D interpolation is not required. This report compares the quality of images formed by CBP and several modified versions of the 2DFFT method. We show from an image quality point of view that CBP is equivalent to first windowing the phase history data and then interpolating to an exscribed rectangle. From a mathematical perspective, we should expect this conclusion since the same Fourier data are used to form the SAR image. We next address the issue of parallel implementation of each algorithm. We dispute previous claims that CBP is more readily parallelizable than the 2DFFT method. Our conclusions are supported by comparing execution times between massively parallel implementations of both algorithms, showing that both experience similar decreases in computation time, but that CBP takes significantly longer to form an image. This page intentionally left blank"}
{"_id": "7ed74be5864f454e508f7684954aaec94ad68394", "title": "A Decision Framework for Blockchain Platforms for IoT and Edge Computing", "text": "Blockchains started as an enabling technology in the area of digital currencies with the introduction of Bitcoin. However, blockchains have emerged as a technology that goes beyond financial transactions by providing a platform supporting secure and robust distributed public ledgers. We think that the Internet of Things (IoT) can also benefit from blockchain technology, especially in the areas of security, privacy, fault tolerance, and autonomous behavior. Here we present a decision framework to help practitioners systematically evaluate the potential use of blockchains in an IoT context."}
{"_id": "432dc627ba321f5df3fe163c5903b58e0527ad7a", "title": "Brain Activity during Simulated Deception: An Event-Related Functional Magnetic Resonance Study", "text": "TheGuilty Knowledge Test (GKT) has been used extensively to model deception. An association between the brain evoked response potentials and lying on the GKT suggests that deception may be associated with changes in other measures of brain activity such as regional blood flow that could be anatomically localized with event-related functional magnetic resonance imaging (fMRI). Blood oxygenation level-dependent fMRI contrasts between deceptive and truthful responses were measured with a 4 Tesla scanner in 18 participants performing the GKT and analyzed using statistical parametric mapping. Increased activity in the anterior cingulate cortex (ACC), the superior frontal gyrus (SFG), and the left premotor, motor, and anterior parietal cortex was specifically associated with deceptive responses. The results indicate that: (a) cognitive differences between deception and truth have neural correlates detectable by fMRI, (b) inhibition of the truthful response may be a basic component of intentional deception, and (c) ACC and SFG are components of the basic neural circuitry for deception."}
{"_id": "f95d0dfd126614d289b7335379083778d8d68b01", "title": "Central sensitization in carpal tunnel syndrome with extraterritorial spread of sensory symptoms", "text": "Extraterritorial spread of sensory symptoms is frequent in carpal tunnel syndrome (CTS). Animal models suggest that this phenomenon may depend on central sensitization. We sought to obtain psychophysical evidence of sensitization in CTS with extraterritorial symptoms spread. We recruited 100 unilateral CTS patients. After selection to rule out concomitant upper-limb causes of pain, 48 patients were included. The hand symptoms distribution was graded with a diagram into median and extramedian pattern. Patients were asked on proximal pain. Quantitative sensory testing (QST) was performed in the territory of injured median nerve and in extramedian territories to document signs of sensitization (hyperalgesia, allodynia, wind-up). Extramedian pattern and proximal pain were found in 33.3% and 37.5% of patients, respectively. The QST profile associated with extramedian pattern includes: (1) thermal and mechanic hyperalgesia in the territory of the injured median nerve and in those of the uninjured ulnar and radial nerves and (2) enhanced wind-up. No signs of sensitization were found in patients with the median distribution and those with proximal symptoms. Different mechanisms may underlie hand extramedian and proximal spread of symptoms, respectively. Extramedian spread of symptoms in the hand may be secondary to spinal sensitization but peripheral and supraspinal mechanisms may contribute. Proximal spread may represent referred pain. Central sensitization may be secondary to abnormal activity in the median nerve afferents or the consequence of a predisposing trait. Our data may explain the persistence of sensory symptoms after median nerve surgical release and the presence of non-anatomical sensory patterns in neuropathic pain."}
{"_id": "5d832e9b80b6edbd3f99adb96f9c6496ae3218de", "title": "PsychoPy\u2014Psychophysics software in Python", "text": "The vast majority of studies into visual processing are conducted using computer display technology. The current paper describes a new free suite of software tools designed to make this task easier, using the latest advances in hardware and software. PsychoPy is a platform-independent experimental control system written in the Python interpreted language using entirely free libraries. PsychoPy scripts are designed to be extremely easy to read and write, while retaining complete power for the user to customize the stimuli and environment. Tools are provided within the package to allow everything from stimulus presentation and response collection (from a wide range of devices) to simple data analysis such as psychometric function fitting. Most importantly, PsychoPy is highly extensible and the whole system can evolve via user contributions. If a user wants to add support for a particular stimulus, analysis or hardware device they can look at the code for existing examples, modify them and submit the modifications back into the package so that the whole community benefits."}
{"_id": "02a98118ce990942432c0147ff3c0de756b4b76a", "title": "Learning realistic human actions from movies", "text": "The aim of this paper is to address recognition of natural human actions in diverse and realistic video settings. This challenging but important subject has mostly been ignored in the past due to several problems one of which is the lack of realistic and annotated video datasets. Our first contribution is to address this limitation and to investigate the use of movie scripts for automatic annotation of human actions in videos. We evaluate alternative methods for action retrieval from scripts and show benefits of a text-based classifier. Using the retrieved action samples for visual learning, we next turn to the problem of action classification in video. We present a new method for video classification that builds upon and extends several recent ideas including local space-time features, space-time pyramids and multi-channel non-linear SVMs. The method is shown to improve state-of-the-art results on the standard KTH action dataset by achieving 91.8% accuracy. Given the inherent problem of noisy labels in automatic annotation, we particularly investigate and show high tolerance of our method to annotation errors in the training set. We finally apply the method to learning and classifying challenging action classes in movies and show promising results."}
{"_id": "14ce7635ff18318e7094417d0f92acbec6669f1c", "title": "DeepFace: Closing the Gap to Human-Level Performance in Face Verification", "text": "In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4, 000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35% on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27%, closely approaching human-level performance."}
{"_id": "50ca90bc847694a7a2d9a291f0d903a15e408481", "title": "A Multi-scale Approach to Gesture Detection and Recognition", "text": "We propose a generalized approach to human gesture recognition based on multiple data modalities such as depth video, articulated pose and speech. In our system, each gesture is decomposed into large-scale body motion and local subtle movements such as hand articulation. The idea of learning at multiple scales is also applied to the temporal dimension, such that a gesture is considered as a set of characteristic motion impulses, or dynamic poses. Each modality is first processed separately in short spatio-temporal blocks, where discriminative data-specific features are either manually extracted or learned. Finally, we employ a Recurrent Neural Network for modeling large-scale temporal dependencies, data fusion and ultimately gesture classification. Our experiments on the 2013 Challenge on Multimodal Gesture Recognition dataset have demonstrated that using multiple modalities at several spatial and temporal scales leads to a significant increase in performance allowing the model to compensate for errors of individual classifiers as well as noise in the separate channels."}
{"_id": "586d7b215d1174f01a1dc2f6abf6b2eb0f740ab6", "title": "Unsupervised Learning of Invariant Feature Hierarchies with Applications to Object Recognition", "text": "We present an unsupervised method for learning a hierarchy of sparse feature detectors that are invariant to small shifts and distortions. The resulting feature extractor consists of multiple convolution filters, followed by a feature-pooling layer that computes the max of each filter output within adjacent windows, and a point-wise sigmoid non-linearity. A second level of larger and more invariant features is obtained by training the same algorithm on patches of features from the first level. Training a supervised classifier on these features yields 0.64% error on MNIST, and 54% average recognition rate on Caltech 101 with 30 training samples per category. While the resulting architecture is similar to convolutional networks, the layer-wise unsupervised training procedure alleviates the over-parameterization problems that plague purely supervised learning procedures, and yields good performance with very few labeled training samples."}
{"_id": "afbd6dbf502004ad2be091afc084580d02a56a2e", "title": "Efficient model-based 3D tracking of hand articulations using Kinect", "text": "The 3D tracking of the articulation of human hands is a theoretically interesting and challenging problem with numerous and diverse applications [1]. Several attempts have been made to address the problem by considering markerless visual data. Existing approaches can be categorized into modeland appearance-based [1]. In this work, we propose a model-based approach to the problem (Fig. 1). We formulate it as an optimization problem that minimizes the discrepancy between the 3D structure and appearance of hypothesized 3D hand model instances, and actual visual observations. Optimization is performed with a variant of an existing stochastic optimization method (Particle Swarm Optimization PSO) [2]. The most computationally demanding parts of the process have been implemented to run efficiently on a GPU [3]. The resulting system performs hand articulations tracking at a rate of 15Hz."}
{"_id": "47310b4e14990becd5d473a07092ded4df2fbef1", "title": "DeepCas: An End-to-end Predictor of Information Cascades", "text": "Information cascades, effectively facilitated by most social network platforms, are recognized as a major factor in almost every social success and disaster in these networks. Can cascades be predicted? While many believe that they are inherently unpredictable, recent work has shown that some key properties of information cascades, such as size, growth, and shape, can be predicted by a machine learning algorithm that combines many features. These predictors all depend on a bag of hand-crafting features to represent the cascade network and the global network structures. Such features, always carefully and sometimes mysteriously designed, are not easy to extend or to generalize to a different platform or domain. Inspired by the recent successes of deep learning in multiple data mining tasks, we investigate whether an end-to-end deep learning approach could effectively predict the future size of cascades. Such a method automatically learns the representation of individual cascade graphs in the context of the global network structure, without hand-crafted features or heuristics. We find that node embeddings fall short of predictive power, and it is critical to learn the representation of a cascade graph as a whole. We present algorithms that learn the representation of cascade graphs in an end-to-end manner, which significantly improve the performance of cascade prediction over strong baselines including feature based methods, node embedding methods, and graph kernel methods. Our results also provide interesting implications for cascade prediction in general."}
{"_id": "0dc01f266118cb816dc148c3680c59eaaa7c0c6e", "title": "Applications of Probabilistic Programming (Master's thesis, 2015)", "text": "This thesis describes work on two applications of probabilistic programming: the learning of probabilistic program code given specifications, in particular program code of one-dimensional samplers; and the facilitation of sequential Monte Carlo inference with help of data-driven proposals. The latter is presented with experimental results on a linear Gaussian model and a non-parametric dependent Dirichlet process mixture of objects model for object recognition and tracking. We begin this work by providing a brief introduction to probabilistic programming. In the second Chapter we present an approach to automatic discovery of samplers in the form of probabilistic programs. Specifically, we learn the procedure code of samplers for one-dimensional distributions. We formulate a Bayesian approach to this problem by specifying a grammar-based prior over probabilistic program code. We use an approximate Bayesian computation method to learn the programs, whose executions generate samples that statistically match observed data or analytical characteristics of distributions of interest. In our experiments we leverage different probabilistic programming systems, including Anglican and Probabilistic C, to perform Markov chain Monte Carlo sampling over the space of programs. Experimental results have demonstrated that, using the proposed methodology, we can learn approximate and even some exact samplers. Finally, we show that our results are competitive with regard to genetic programming methods."}
{"_id": "6e31e3713c07011b9e8a6d0df67e4b242082431d", "title": "Low-Level Software Security: Attacks and Defenses", "text": "This tutorial paper considers the issues of low-level software security from a language-based perspective, with the help of concrete examples. Four examples of low-level software attacks are covered in full detail; these examples are representative of the major types of attacks on C and C++ software that is compiled into machine code. Six examples of practical defenses against those attacks are also covered in detail; these defenses are selected because of their effectiveness, wide applicability, and low enforcement overhead."}
{"_id": "3656d8a8391f66c516d1358065d2bd7a3caa160f", "title": "Towards integrated safety analysis and design", "text": "There are currently many problems with the development and assessment of software intensive safety-critical systems. In this paper we describe the problems, and introduce a novel approach to their solution, based around goal-structuring concepts, which we believe will ameliorate some of the difficulties. We discuss the use of modified and new forms of safety assessment notations to provide evidence of safety, and the use of data derived from such notations as a means of providing quantified input into the design assessment process. We then show how the design assessment can be partially automated, and from this develop some ideas on how we might move from analytical to synthetic approaches, using safety criteria and evidence as a fitness function for comparing alternative automatically-generated designs."}
{"_id": "418e12e8d443c7a6cf0ea708d49265bb4d4ce34e", "title": "Hand Gesture Recognition Based on Perceptual Shape Decomposition with a Kinect Camera", "text": "In this paper, we propose the Perceptual Shape Decomposition (PSD) to detect fingers for a Kinect-based hand gesture recognition system. The PSD is formulated as a discrete optimization problem by removing all negative minima with minimum cost. Experiments show that our PSD is perceptually relevant and robust against distortion and hand variations, and thus improves the recognition system performance. key words: Kinect camerta, hand gesture recognition, perceptual decomposition, finger detection"}
{"_id": "72696bce8a55e6d4beb49dcc168be2b3c05ef243", "title": "Convergence guarantees for RMSProp and ADAM in non-convex optimization and their comparison to Nesterov acceleration on autoencoders", "text": "RMSProp and ADAM continue to be extremely popular algorithms for training neural nets but their theoretical foundations have remained unclear. In this work we make progress towards that by giving proofs that these adaptive gradient algorithms are guaranteed to reach criticality for smooth non-convex objectives and we give bounds on the running time. We then design experiments to compare the performances of RMSProp and ADAM against Nesterov Accelerated Gradient method on a variety of autoencoder setups. Through these experiments we demonstrate the interesting sensitivity that ADAM has to its momentum parameter \u03b21. We show that in terms of getting lower training and test losses, at very high values of the momentum parameter (\u03b21 = 0.99) (and large enough nets if using mini-batches) ADAM outperforms NAG at any momentum value tried for the latter. On the other hand, NAG can sometimes do better when ADAM\u2019s \u03b21 is set to the most commonly used value: \u03b21 = 0.9. We also report experiments on different autoencoders to demonstrate that NAG has better abilities in terms of reducing the gradient norms and finding weights which increase the minimum eigenvalue of the Hessian of the loss function."}
{"_id": "8e0d5976b09a15c1125558338f0a6859fc29494a", "title": "Input-series output-parallel AC/AC converter", "text": "Input-series and output-parallel (ISOP) converters are suitable for the high input voltage and low output voltage conversion applications. An ISOP current mode AC/AC converter with high frequency link is proposed. The control strategy and operation principle of the ISOP AC/AC converter are investigated. The control strategy ensures the proper sharing of the input voltage and the proper sharing of the output current among the ISOP AC/AC converter. By simulation, the correctness of the operation principle and the control strategy of the ISOP AC/AC converter are proved."}
{"_id": "37a83525194c436369fa110c0e709f6585409f26", "title": "Visual Dynamics: Probabilistic Future Frame Synthesis via Cross Convolutional Networks", "text": "We study the problem of synthesizing a number of likely future frames from a single input image. In contrast to traditional methods, which have tackled this problem in a deterministic or non-parametric way, we propose to model future frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible future frames from a single input image. To synthesize realistic movement of objects, we propose a novel network structure, namely a Cross Convolutional Network; this network encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, as well as on real-world video frames. We also show that our model can be applied to visual analogy-making, and present an analysis of the learned network representations."}
{"_id": "80bcfbb1a30149e636ff1a08aeb715dad6dd9285", "title": "High efficiency Ka-band Gallium Nitride power amplifier MMICs", "text": "The design and performance of two high efficiency Ka-band power amplifier MMICs utilizing a 0.15\u03bcm GaN HEMT process technology is presented. Measured in-fixture continuous wave (CW) results for the 3-stage balanced amplifier demonstrates up to 11W of output power and 30% power added efficiency (PAE) at 30GHz. The 3-stage single-ended design produced over 6W of output power and up to 34% PAE. The die size for the balanced and single-ended MMICs are 3.24\u00d73.60mm2 and 1.74\u00d73.24mm2 respectively."}
{"_id": "284de726e700a6c52f9f8fb9f3de4d4b0ff778bb", "title": "A prioritized grid long short-term memory RNN for speech recognition", "text": "Recurrent neural networks (RNNs) are naturally suitable for speech recognition because of their ability of utilizing dynamically changing temporal information. Deep RNNs have been argued to be able to model temporal relationships at different time granularities, but suffer vanishing gradient problems. In this paper, we extend stacked long short-term memory (LSTM) RNNs by using grid LSTM blocks that formulate computation along not only the temporal dimension, but also the depth dimension, in order to alleviate this issue. Moreover, we prioritize the depth dimension over the temporal one to provide the depth dimension more updated information, since the output from it will be used for classification. We call this model the prioritized Grid LSTM (pGLSTM). Extensive experiments on four large datasets (AMI, HKUST, GALE, and MGB) indicate that the pGLSTM outperforms alternative deep LSTM models, beating stacked LSTMs with 4% to 7% relative improvement, and achieve new benchmarks among uni-directional models on all datasets."}
{"_id": "597a9a2b093ea5463be0a62397a600f6bac51c27", "title": "Correlation for user confidence in predictive decision making", "text": "Despite the recognized value of Machine Learning (ML) techniques and high expectation of applying ML techniques within various applications, significant barriers to the widespread adoption and local implementation of ML approaches still exist in the areas of trust (of ML results), comprehension (of ML processes), as well as confidence (in decision making) by users. This paper investigates the effects of correlation between features and target values on user confidence in data analytics-driven decision making. Our user study found that revealing the correlation between features and target variables affected user confidence in decision making significantly. Moreover, users felt more confident in decision making when correlation shared the same trend with the prediction model performance. These findings would help design intelligent user interfaces and evaluate the effectiveness of machine learning models in applications."}
{"_id": "48d0fdea25539712d98f5408465affcaa23dc917", "title": "Databases, features and classifiers for speech emotion recognition: a review", "text": "Abstract Speech is an effective medium to express emotions and attitude through language. Finding the emotional content from a speech signal and identify the emotions from the speech utterances is an important task for the researchers. Speech emotion recognition has considered as an important research area over the last decade. Many researchers have been attracted due to the automated analysis of human affective behaviour. Therefore a number of systems, algorithms, and classifiers have been developed and outlined for the identification of emotional content of a speech from a person\u2019s speech. In this study, available literature on various databases, different features and classifiers have been taken in to consideration for speech emotion recognition from assorted languages."}
{"_id": "e724d4640b2ece1dc0f6d67c616e8dd61675210b", "title": "A WL-SPPIM Semantic Model for Document Classification", "text": "In this paper, we explore SPPIM-based text classification method, and the experiment reveals that the SPPIM method is equal to or even superior than SGNS method in text classification task on three international and standard text datasets, namely 20newsgroups, Reuters52 and WebKB. Comparing to SGNS, although SPPMI provides a better solution, it is not necessarily better than SGNS in text classification tasks.. Based on our analysis, SGNS takes into the consideration of weight calculation during decomposition process, so it has better performance than SPPIM in some standard datasets. Inspired by this, we propose a WL-SPPIM semantic model based on SPPIM model, and experiment shows that WL-SPPIM approach has better classification and higher scalability in the text classification task compared with LDA, SGNS and SPPIM approaches."}
{"_id": "10c897304637b917f0412303e9597bc032a4cd1a", "title": "Estimation of human body shape and cloth field in front of a kinect", "text": "This paper describes an easy-to-use system to estimate the shape of a human body and his/her clothes. The system uses a Kinect to capture the human's RGB and depth information from different views. Using the depth data, a non-rigid deformation method is devised to compensate motions between different views, thus to align and complete the dressed shape. Given the reconstructed dressed shape, the skin regions are recognized by a skin classifier from the RGB images, and these skin regions are taken as a tight constraints for the body estimation. Subsequently, the body shape is estimated from the skin regions of the dressed shape by leveraging a statistical model of human body. After the body estimation, the body shape is non-rigidly deformed to fit the dressed shape, so as to extract the cloth field of the dressed shape. We demonstrate our system and the therein algorithms by several experiments. The results show the effectiveness of the proposed method. & 2014 Elsevier B.V. All rights reserved."}
{"_id": "c0d30aa1b12a9bceda14ab1d1f2489a2dddc8277", "title": "Identifying and Eliminating Mislabeled Training Instances", "text": "This paper presents a new approach to identifying and eliminating mislabeled training instances. The goal of this technique is to improve classiication accuracies produced by learning algorithms by improving the quality of the training data. The approach employs an ensemble of clas-siiers that serve as a lter for the training data. Using an n-fold cross validation, the training data is passed through the lter. Only instances that the lter classiies correctly are passed to the-nal learning algorithm. We present an empirical evaluation of the approach for the task of automated land cover mapping from remotely sensed data. Labeling error arises in these data from a multitude of sources including lack of consistency in the vegetation classiication used, variable measurement techniques, and variation in the spatial sampling resolution. Our evaluation shows that for noise levels of less than 40%, lter-ing results in higher predictive accuracy than not ltering, and for levels of class noise less than or equal to 20% ltering allows the base-line accuracy to be retained. Our empirical results suggest that the ensemble lter approach is an eeective method for identifying labeling errors, and further , that the approach will signiicantly beneet ongoing research to develop accurate and robust remote sensing-based methods to map land cover at global scales."}
{"_id": "dc264515056acaf1ddcb4e810787fcf23e86cbc0", "title": "Low-power energy harvester for wiegand transducers", "text": "This paper discusses the use of Wiegand magnetic sensors as energy harvesters for powering low-power electronic equipments. A Wiegand device typically releases a ~10 \u03bcs voltage pulse several volts wide when subject to an external, time-varying magnetic field. Due to the sharpness of the magnetic transition, pulse generation occurs regardless of how slow the magnetic field variation is, an attractive feature which enables its use in energy harvesting scenarios even when low-frequency sources are considered. The paper first identifies the theoretical conditions for maximum energy extraction. An efficient energy harvesting circuit is then proposed which interfaces the Wiegand source to a fixed DC voltage provided, for instance, by a rechargeable Lithium-Ion battery. Simulations and experimental results are provided supporting the developed theoretical framework and the effectiveness of the proposed implementation."}
{"_id": "e46aba462356c60cd8b336c77aec99d69a5e58a9", "title": "Improving Estimation Accuracy using Better Similarity Distance in Analogy-based Software Cost Estimation", "text": "Software cost estimation nowadays plays a more and more important role in practical projects since modern software projects become more and more complex as well as diverse. To help estimate software development cost accurately, this research does a systematic analysis of the similarity distances in analogy-based software cost estimation and based on this, a new non-orthogonal space distance (NoSD) is proposed as a measure of the similarities between real software projects. Different from currently adopted measures like the Euclidean distance and so on, this non-orthogonal space distance not only considers the different features to have different importance for cost estimation, but also assumes project features to have a non-orthogonal dependent relationship which is considered independent to each other in Euclidean distance. Based on such assumptions, NoSD method describes the non-orthogonal angles between feature axes using feature redundancy and it represents the feature weights using feature relevance, where both redundancy and relevance are defined in terms of mutual information. It can better reveal the real dependency relationships between real life software projects based on this non-orthogonal space distance. Also experiments show that it brings a greatest of 13.1% decrease of MMRE and a 12.5% increase of PRED(0.25) on ISBSG R8 dataset, and 7.5% and 20.5% respectively on the Desharnais dataset. Furthermore, to make it better fit the complex data distribution of real life software projects data, this research leverages the particle swarm optimization algorithm for an optimization of the proposed non-orthogonal space distance and proposes a PSO optimized non-orthogonal space distance (PsoNoSD). It brings further improvement in the estimation accuracy. As shown in experiments, compared with the normally used Euclidean distance, PsoNoSD improves the estimation accuracy by 38.73% and 11.59% in terms of MMRE and PRED(0.25) on ISBSG R8 dataset. On the Desharnais dataset, the improvements are 23.38% and 24.94% respectively. In summary, the new methods proposed in this research, which are based on theoretical study as well as systematic experiments, have solved some problems of currently used techniques and they show a great ability of notably improving the software cost estimation accuracy."}
{"_id": "1525c74ab503677924f60f1df304a0bcfbd51ae0", "title": "A Counterexample to Theorems of Cox and Fine", "text": "Cox s well known theorem justifying the use of probability is shown not to hold in nite domains The counterexample also suggests that Cox s assumptions are insu cient to prove the result even in in nite domains The same counterexample is used to disprove a result of Fine on comparative conditional probability"}
{"_id": "7190ae6bb76076ffdecd78cd40e0be9e86f1f85e", "title": "Location Assisted Coding (LAC): Embracing Interference in Free Space Optical Communications", "text": "As the number of wireless devices grows, the increasing demand for the shared radio frequency (RF) spectrum becomes a critical problem. Unlike wired communications in which, theoretically, more fibers can be used to accommodate the increasing bandwidth demand, wireless spectrum cannot be arbitrarily increased due to the fundamental limitations imposed by the physical laws. On the other hand, recent advances in free space optical (FSO) technologies promise a complementary approach to increase wireless capacity. However, high-speed FSO technologies are currently confined to short distance transmissions, resulting in limited mobility. In this paper, we briefly describe WiFO, a hybrid WiFi-FSO network for Gbps wireless local area network (WLAN) femtocells that can provide up to one Gbps per user while maintaining seamless mobility. While typical RF femtocells are non-overlapped to minimize inter-cell interference, there are advantages of using overlapped femtocells to increase mobility and throughput when the number of users is small. That said, the primary contribution of this paper will be a novel location assisted coding (LAC) technique used in the WiFO network that aims to increase throughput and reduce interference for multiple users in a dense array of femtocells. Both theoretical analysis and numerical experiments show orders of magnitude increase in throughput using LAC over basic codes."}
{"_id": "5eca25bc0be5329ad6ed6c8f2ca16e218632f470", "title": "Innovative enhancement of the Caesar cipher algorithm for cryptography", "text": "The Caesar Cipher algorithm for cryptography is one of the oldest algorithms. Now much newer algorithms have arrived that are much more secure, however in terms of speed of execution Caesar cipher algorithm is still the fastest owing to its simplicity. However the algorithm is extremely easy to crack. This is because in this algorithm each character of a message is always replaced by the same fixed character that has been predetermined. To improve the algorithm and enhance it's security feature, a few changes can be added. This paper proposes an enhancement to the existing algorithm by making use first of a simple Diffie-Hellman key exchange scenario to obtain a secret key and later using simple mathematics to ensure the encryption of data is much more safer. Once a private shared key is obtained by making use of the Diffie-Hellman method, the key is subject to the mod operation with 26 to obtain a value less than or equal to 26, then the current character is taken and to this the key value obtained is added to obtain a new character. For any character in the `x' position the key is simply first multiplied with `x' and then mod is done to obtain the encrypted character. So 2nd character of the message is multiplied with 2, third character with 3 and so on. This enhances the security and also does not increase the time of execution by a large margin."}
{"_id": "2bb1e444ca057597eb1d393457ca41e9897079c6", "title": "Turret: A Platform for Automated Attack Finding in Unmodified Distributed System Implementations", "text": "Security and performance are critical goals for distributed systems. The increased design complexity, incomplete expertise of developers, and limited functionality of existing testing tools often result in bugs and vulnerabilities that prevent implementations from achieving their design goals in practice. Many of these bugs, vulnerabilities, and misconfigurations manifest after the code has already been deployed making the debugging process difficult and costly. In this paper, we present Turret, a platform for automatically finding performance attacks in unmodified implementations of distributed systems. Turret does not require the user to provide any information about vulnerabilities and runs the implementation in the same operating system setup as the deployment, with an emulated network. Turret uses a new attack finding algorithm and several optimizations that allow it to find attacks in a matter of minutes. We ran Turret on 5 different distributed system implementations specifically designed to tolerate insider attacks, and found 30 performance attacks, 24 of which were not previously reported to the best of our knowledge."}
{"_id": "8108a291e7f526178395f50b1b52cb55bed8db5b", "title": "A Relational Approach to Monitoring Complex Systems", "text": "Monitoring is an essential part of many program development tools, and plays a central role in debugging, optimization, status reporting, and reconfiguration. Traditional monitoring techniques are inadequate when monitoring complex systems such as multiprocessors or distributed systems. A new approach is described in which a historical database forms the conceptual basis for the information processed by the monitor. This approach permits advances in specifying the low-level data collection, specifying the analysis of the collected data, performing the analysis, and displaying the results. Two prototype implementations demonstrate the feasibility of the approach."}
{"_id": "2a61d70a4e9dece71594d33320063e331485f1df", "title": "Mapping and Localization in 3D Environments Using a 2D Laser Scanner and a Stereo Camera", "text": "2D laser scanners have been widely used for accomplishing a number of challenging AI and robotics tasks such as mapping of large environments and localization in highly dynamic environments. However, using only one 2D laser scanner could be insufficient and less reliable for accomplishing tasks in 3D environments. The problem could be solved using multiple 2D laser scanners or a 3D laser scanner for performing 3D perception. Unfortunately, the cost of such 3D sensing systems is still too high for enabling AI and robotics applications. In this paper, we propose to use a 2D laser scanner and a stereo camera for accomplishing simultaneous localization and mapping (SLAM) in 3D indoor environments in which the 2D laser scanner is used for SLAM and the stereo camera is used for 3D mapping. The experimental results demonstrate that the proposed system is lower cost yet effective, and the obstacle detection rate is significant improved compares to using one 2D laser scanner for mapping."}
{"_id": "875e08da83c0d499da9d9a5728d492d35d96773c", "title": "Architecting the next generation of service-based SCADA/DCS system of systems", "text": "SCADA and DCS systems are in the heart of the modern industrial infrastructure. The rapid changes in the networked embedded systems and the way industrial applications are designed and implemented, call for a shift in the architectural paradigm. Next generation SCADA and DCS systems will be able to foster cross-layer collaboration with the shop-floor devices as well as in-network and enterprise applications. Ecosystems driven by (web) service based interactions will enable stronger coupling of real-world and the business side, leading to a new generation of monitoring and control applications and services witnessed as the integration of large-scale systems of systems that are constantly evolving to address new user needs."}
{"_id": "d29f918ad0b759f01299ec905f564359ada97ba5", "title": "Information Processing in Medical Imaging", "text": "In this paper, we present novel algorithms to compute robust statistics from manifold-valued data. Specifically, we present algorithms for estimating the robust Fr\u00e9chet Mean (FM) and performing a robust exact-principal geodesic analysis (ePGA) for data lying on known Riemannian manifolds. We formulate the minimization problems involved in both these problems using the minimum distance estimator called the L2E. This leads to a nonlinear optimization which is solved efficiently using a Riemannian accelerated gradient descent technique. We present competitive performance results of our algorithms applied to synthetic data with outliers, the corpus callosum shapes extracted from OASIS MRI database, and diffusion MRI scans from movement disorder patients respectively."}
{"_id": "0a6d7e8e61c54c796f53120fdb86a25177e00998", "title": "Complex Embeddings for Simple Link Prediction", "text": "In statistical relational learning, the link prediction problem is key to automatically understand the structure of large knowledge bases. As in previous studies, we propose to solve this problem through latent factorization. However, here we make use of complex valued embeddings. The composition of complex embeddings can handle a large variety of binary relations, among them symmetric and antisymmetric relations. Compared to state-of-the-art models such as Neural Tensor Network and Holographic Embeddings, our approach based on complex embeddings is arguably simpler, as it only uses the Hermitian dot product, the complex counterpart of the standard dot product between real vectors. Our approach is scalable to large datasets as it remains linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks.1"}
{"_id": "6ac1962fd1f2b90da02c63c16af39a3c7a3e6df6", "title": "SQuAD: 100, 000+ Questions for Machine Comprehension of Text", "text": "We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at https://stanford-qa.com."}
{"_id": "c05727976dfae43ae13aa520774fc3bd369d49b5", "title": "Freebase: A Shared Database of Structured General Human Knowledge", "text": "Freebase is a practical, scalable, graph-shaped database of structured general human knowledge, inspired by Semantic Web research and collaborative data communities such as the Wikipedia. Freebase allows public read and write access through an HTTP-based graph-query API for research, the creation and maintenance of structured data, and application building. Access is free and all data in Freebase has a very open (e.g. Creative Commons, GFDL) license."}
{"_id": "f010affab57b5fcf1cd6be23df79d8ec98c7289c", "title": "TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension", "text": "We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K questionanswer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a featurebased classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study.1"}
{"_id": "6e7302a08e04e2120c50440f280fb77dcd5aeb35", "title": "Emotion recognition with multimodal features and temporal models", "text": "This paper presents our methods to the Audio-Video Based Emotion Recognition subtask in the 2017 Emotion Recognition in the Wild (EmotiW) Challenge. The task aims to predict one of the seven basic emotions for short video segments. We extract different features from audio and facial expression modalities. We also explore the temporal LSTM model with the input of frame facial features, which improves the performance of the non-temporal model. The fusion of different modality features and the temporal model lead us to achieve a 58.5% accuracy on the testing set, which shows the effectiveness of our methods."}
{"_id": "7fe73dd079d14520224af718c12ab3e224d40cbb", "title": "Recommendation to Groups", "text": "Recommender systems have traditionally recommended items to individual users, but there has recently been a proliferation of recommenders that address their recommendations to groups of users. The shift of focus from an individual to a group makes more of a difference than one might at first expect. This chapter discusses the most important new issues that arise, organizing them in terms of four subtasks that can or must be dealt with by a group recommender: 1. acquiring information about the user\u2019s preferences; 2. generating recommendations; 3. explaining recommendations; and 4. helping users to settle on a final decision. For each issue, we discuss how it has been dealt with in existing group recommender systems and what open questions call for further research."}
{"_id": "7ae59771c7d9a3a346fb6374d21c31ca62c3618b", "title": "The cyber threat landscape: Challenges and future research directions", "text": "Cyber threats are becoming more sophisticated with the blending of once distinct types of attack into more damaging forms. Increased variety and volume of attacks is inevitable given the desire of financially and criminally-motivated actors to obtain personal and confidential information, as highlighted in this paper. We describe how the Routine Activity Theory can be applied to mitigate these risks by reducing the opportunities for cyber crime to occur, making cyber crime more difficult to commit and by increasing the risks of detection and punishment associated with committing cyber crime. Potential research questions are also identified. a 2011 Elsevier Ltd. All rights reserved."}
{"_id": "c6919dd8f8e72ff3e1fbc0072d64bde0a19c2a8e", "title": "Genetics-Based Machine Learning for Rule Induction: State of the Art, Taxonomy, and Comparative Study", "text": "The classification problem can be addressed by numerous techniques and algorithms which belong to different paradigms of machine learning. In this paper, we are interested in evolutionary algorithms, the so-called genetics-based machine learning algorithms. In particular, we will focus on evolutionary approaches that evolve a set of rules, i.e., evolutionary rule-based systems, applied to classification tasks, in order to provide a state of the art in this field. This paper has a double aim: to present a taxonomy of the genetics-based machine learning approaches for rule induction, and to develop an empirical analysis both for standard classification and for classification with imbalanced data sets. We also include a comparative study of the genetics-based machine learning (GBML) methods with some classical non-evolutionary algorithms, in order to observe the suitability and high potential of the search performed by evolutionary algorithms and the behavior of the GBML algorithms in contrast to the classical approaches, in terms of classification accuracy."}
{"_id": "8d99c0d149046b36a8f1691d9f14775de9927171", "title": "A connectionist approach to dynamic resource management for virtualised network functions", "text": "Network Functions Virtualisation (NFV) continues to gain attention as a paradigm shift in the way telecommunications services are deployed and managed. By separating Network Functions (NFs) from traditional middleboxes, NFV is expected to lead to reduced CAPEX and OPEX, and to more agile services. However, one of the main challenges to achieving these objectives is on how physical resources can be efficiently, autonomously, and dynamically allocated to Virtualised Network Functions (VNFs) whose resource requirements ebb and flow. In this paper, we propose a Graph Neural Network (GNN)-based algorithm which exploits Virtual Network Function Forwarding Graph (VNF-FG) topology information to predict future resource requirements for each Virtual Network Function Component (VNFC). The topology information of each VNFC is derived from combining its past resource utilisation as well as the modelled effect on the same from VNFCs in its neighbourhood. Our proposal has been evaluated using a deployment of a virtualised IP Multimedia Subsystem (IMS), and real VoIP traffic traces, with results showing an average prediction accuracy of 90%. Moreover, compared to a scenario where resources are allocated manually and/or statically, our proposal reduces the average number of dropped calls by at least 27% and improves call setup latency by over 29%."}
{"_id": "9bdd3ae8dd074573fa262fe99fdf03ce578d907c", "title": "A Chopper Current-Feedback Instrumentation Amplifier With a 1 mHz $1/f$ Noise Corner and an AC-Coupled Ripple Reduction Loop", "text": "This paper presents a chopper instrumentation amplifier for interfacing precision thermistor bridges. For high CMRR and DC gain, the amplifier employs a three-stage current-feedback topology with nested-Miller compensation. By chopping both the input and intermediate stages of the amplifier, a 1 mHz 1/f noise corner was achieved at an input-referred noise power spectral density (PSD) of 15 nV/\u00bfHz. To reduce chopper ripple, the amplifier employs a continuous-time AC-coupled ripple reduction loop. Due to its continuous-time nature, the loop causes no noise folding to DC and hence offers improved noise performance over auto-zeroed amplifiers. The loop reduces chopper ripple by more than 60 dB, to levels below the amplifier's own input-referred noise. Furthermore, a maximum input referred offset of 5 \u00bf V and a CMRR greater than 120 dB were measured at a supply current of 230 \u00bfA at 5 V."}
{"_id": "96ad9daa11dfd96846990ab0ceaebf8077772c78", "title": "Digitalism: The New Realism?", "text": "Today\u2019s society is increasingly digitalised, with mobile smartphones being routinely carried and used by a significant percentage of the population. This provides an augmented experience for the individual that does not depend on their geographical separation with respect to their community of friends and other contacts. This changes the nature of relationships between people. Individuals may live in a \u201cdigital bubble\u201d, close to others physically, but far away from them in their digital world. More specifically, digital images can be generated and shared with ever greater ease. Sometimes the digital image takes on an important part of the individual\u2019s experience of reality. This paper explores examples of the phenomenon, within the context of the arts in particular and culture in general. We also consider the assortment of terms used in a variety of ways by researchers in different fields with regard to our ever more digital society, such as digitalism, digitality, digitalisation, digital culture, digital philosophy, etc. We survey these terms, exploring them from alternative viewpoints, including sociological and philosophical aspects, and attempt to pinpoint some of these terms more precisely, especially in a cultural and artistic context."}
{"_id": "65e58981f966a6d1b62c1f7d889ae3e1c0a864a1", "title": "Strategic Business / IT Alignment using Goal Models", "text": "Since few years, enterprise information technologies (IT) are no more seen as a simple technological support for business strategies in the enterprise. Moreover, standalone IT departments are created in order to support evolution and growth of IT in the enterprise. Often, IT department defines a specific strategy describing vision, goals and objectives of IT development in the organization. However, to remain competitive, IT strategy and IT investment should be coherent with global enterprise strategies. The continuous process of preserving coherence between Business/IT strategies is widely known as strategic Business/IT alignment. Our work aims at discussing the relation between interview based Business IT alignement discovery and the Strategic Alignement Model proposed by Venkatraman. The paper proposes also modeling tools and engineering methodologies to support this alignment process."}
{"_id": "882bd3e291822e2a510b6648779efd8312c9068d", "title": "On the use of X-vectors for Robust Speaker Recognition", "text": "Text-independent speaker verification (SV) is currently in the process of embracing DNN modeling in every stage of SV system. Slowly, the DNN-based approaches such as end-to-end modelling and systems based on DNN embeddings start to be competitive even in challenging and diverse channel conditions of recent NIST SREs. Domain adaptation and the need for a large amount of training data are still a challenge for current discriminative systems and (unlike with generative models), we see significant gains from data augmentation, simulation and other techniques designed to overcome lack of training data. We present an analysis of a SV system based on DNN embeddings (x-vectors) and focus on robustness across diverse data domains such as standard telephone and microphone conversations, both in clean, noisy and reverberant environments. We also evaluate the system on challenging far-field data created by re-transmitting a subset of NIST SRE 2008 and 2010 microphone interviews. We compare our results with the stateof-the-art i-vector system. In general, we were able to achieve better performance with the DNN-based systems, but most importantly, we have confirmed the robustness of such systems across multiple data domains."}
{"_id": "de5a4d0112784b2e62a309c058529fc6ab5dceb1", "title": "User experiences and expectations of vibrotactile, thermal and squeeze feedback in interpersonal communication", "text": "Katja Suhonen Tampere University of Technology, Human-Centered Technology, P.O.Box 589, 33101 Tampere, Finland katja.suhonen@tut.fi Kaisa V\u00e4\u00e4n\u00e4nen-Vainio-Mattila Tampere University of Technology, Human-Centered Technology, P.O.Box 589, 33101 Tampere, Finland kaisa.vaananen-vainiomattila@tut.fi Kalle M\u00e4kel\u00e4 Tampere University of Technology, Department of Electronics, P.O.Box 692, 33101 Tampere, Finland kalle.makela@tut.fi"}
{"_id": "ab575f3953b5e998439f3d17ac12c6c42e1f0220", "title": "Differences in kinematics and electromyographic activity between men and women during the single-legged squat.", "text": "BACKGROUND\nNumerous factors have been identified as potentially increasing the risk of anterior cruciate ligament injury in the female athlete. However, differences between the sexes in lower extremity coordination, particularly hip control, are only minimally understood.\n\n\nHYPOTHESIS\nThere is no difference in kinematic or electromyographic data during the single-legged squat between men and women.\n\n\nSTUDY DESIGN\nDescriptive comparison study.\n\n\nMETHODS\nWe kinematically and electromyographically analyzed the single-legged squat in 18 intercollegiate athletes (9 male, 9 female). Subjects performed five single-legged squats on their dominant leg, lowering themselves as far as possible and then returning to a standing position without losing balance.\n\n\nRESULTS\nWomen demonstrated significantly more ankle dorsiflexion, ankle pronation, hip adduction, hip flexion, hip external rotation, and less trunk lateral flexion than men. These factors were associated with a decreased ability of the women to maintain a varus knee position during the squat as compared with the men. Analysis of all eight tested muscles demonstrated that women had greater muscle activation compared with men. When each muscle was analyzed separately, the rectus femoris muscle activation was found to be statistically greater in women in both the area under the linear envelope and maximal activation data.\n\n\nCONCLUSIONS\nUnder a physiologic load in a position commonly assumed in sports, women tend to position their entire lower extremity and activate muscles in a manner that could increase strain on the anterior cruciate ligament."}
{"_id": "df0e6e6c808bb30f0e21b6873048361a28b28b64", "title": "Procedural Content Generation: Goals, Challenges and Actionable Steps", "text": "This chapter discusses the challenges and opportunities of procedural content generation (PCG) in games. It starts with defining three grand goals of PCG, namely multi-level multi-content PCG, PCG-based game design and generating complete games. The way these goals are defined, they are not feasible with current technology. Therefore we identify nine challenges for PCG research. Work towards meeting these challenges is likely to take us closer to realising the three grand goals. In order to help researchers get started, we also identify five actionable steps, which PCG researchers could get started working on immediately. 1998 ACM Subject Classification I.2.1 Applications and Expert Systems: Games"}
{"_id": "e327e5083b862239374d2e5424e431eb0711c29a", "title": "Time-to-Digital Converter Using a Tuned-Delay Line Evaluated in 28-, 40-, and 45-nm FPGAs", "text": "This paper proposes a bin-width tuning method for a field-programmable gate array (FPGA)-based delay line for a time-to-digital converter (TDC). Changing the hit transitions and sampling patterns of the carry chain considering delays of the sum and carry-out bins can improve the bin-width uniformity and thus measurement precision. The proposed sampling method was evaluated and compared with the ordinary tapped-delay-line (TDL) method in three different types of FPGAs: Kintex-7, Virtex-6, and Spartan-6. The linearity, equivalent bin width, and measurement precision improved for all the evaluated FPGAs by adopting the proposed method. The measurement precision obtained using the simple TDL architecture is comparable with other complex TDC architectures. In addition, the proposed method improves bin-width uniformity and measurement precision while maintaining the advantages of TDL TDCs, that is, fast conversion rate and small resource usage. Furthermore, the enhanced linearity of the delay line can also improve other carry-chain-based FPGA-TDCs."}
{"_id": "232faf8d0b97862dce95c5afbccf11004b91ef04", "title": "Control structure design for complete chemical plants", "text": "Control structure design deals with the structural decisions of the control system, including what to control and how to pair the variables to form control loops. Although these are very important issues, these decisions are in most cases made in an ad hoc fashion, based on experience and engineering insight, without considering the details of each problem. In the paper, a systematic procedure for control structure design for complete chemical plants (plantwide control) is presented. It starts with carefully defining the operational and economic objectives, and the degrees of freedom available to fulfill them. Other issues, discussed in the paper, include inventory and production rate control, decentralized versus multivariable control, loss in performance by bottom-up design, and a definition of a the \u2018\u2018complexity number\u2019\u2019 for the control system. # 2003 Elsevier Ltd. All rights reserved."}
{"_id": "15943bdecfe42e5f6707efa2eb5a491356f1822e", "title": "Reducing Uncertainty of Low-Sampling-Rate Trajectories", "text": "The increasing availability of GPS-embedded mobile devices has given rise to a new spectrum of location-based services, which have accumulated a huge collection of location trajectories. In practice, a large portion of these trajectories are of low-sampling-rate. For instance, the time interval between consecutive GPS points of some trajectories can be several minutes or even hours. With such a low sampling rate, most details of their movement are lost, which makes them difficult to process effectively. In this work, we investigate how to reduce the uncertainty in such kind of trajectories. Specifically, given a low-sampling-rate trajectory, we aim to infer its possible routes. The methodology adopted in our work is to take full advantage of the rich information extracted from the historical trajectories. We propose a systematic solution, History based Route Inference System (HRIS), which covers a series of novel algorithms that can derive the travel pattern from historical data and incorporate it into the route inference process. To validate the effectiveness of the system, we apply our solution to the map-matching problem which is an important application scenario of this work, and conduct extensive experiments on a real taxi trajectory dataset. The experiment results demonstrate that HRIS can achieve higher accuracy than the existing map-matching algorithms for low-sampling-rate trajectories."}
{"_id": "5d428cc440bb0b2c1bac75e679ac8a88e2ed71bf", "title": "Towards reliable traffic sign recognition", "text": "The demand for reliable traffic sign recognition (TSR) increases with the development of safety driven advanced driver assistance systems (ADAS). Emerging technologies like brake-by-wire or steer-by-wire pave the way for collision avoidance and threat identification systems. Obviously, decision making in such critical situations requires high reliability of the information base. Especially for comfort systems, we need to take into account that the user tends to trust the information provided by the ADAS [1]. In this paper, we present a robust system architecture for the reliable recognition of circular traffic signs. Our system employs complementing approaches for the different stages of current TSR systems. This introduces the application of local SIFT features for content-based traffic sign detection along with widely applied shape-based approaches. We further add a technique called contracting curve density (CCD) to refine the localization of the detected traffic sign candidates and therefore increase the performance of the subsequent classification module. Finally, the recognition stage based on SIFT and SURF descriptions of the candidates executed by a neural net provides a robust classification of structured image content like traffic signs. By applying these steps we compensate the weaknesses of the utilized approaches, and thus, improve the system's performance."}
{"_id": "742b172f9f8afbc2b4821171aa35b5f0e15a2661", "title": "3-Dimensional Analysis on the GIDL Current of Body-tied Triple Gate FinFET", "text": "Triple gate FinFET is emerging as a promising candidate for the future CMOS device structures because of its immunity to short-channel effect. However, the suppression of GIDL is a significant challenge for its application. In this paper, we discuss the characteristics of GIDL on FinFET and extensively analyze the influence of the device technology on GIDL. The analysis is expected to give guidelines to the future development of triple gate FinFET"}
{"_id": "a0283ce9ecdca710b186cbb103efe5ec812d1fb1", "title": "Perspectives on the Productivity Dilemma", "text": "Editor\u2019s note The authors of this paper presented an All-academy session at the 2008 Academy of Management annual meeting in Anaheim, California. We were excited by the dynamic nature of the debate and felt that it related closely to critical issues in the areas of operations management, strategy, product development and international business. We thus invited the authors to write an article offering their individual and joint views on the productivity dilemma. We trust you will find it to be stimulating and thought provoking. We invite you to add your voice to the discussion by commenting on this article at the Operations and Supply Chain (OSM) Forum at http://www.journaloperationsmanagement.org/OSM.asp. \u2013 Kenneth K. Boyer and Morgan L. Swink"}
{"_id": "4ef8fdb0c97d331e07ae96323855d15a75340ab0", "title": "Applying the Waek Learning Framework to Understand and Improve C4.5", "text": "There has long been a chasm between theoretical mod els of machine learning and practical machine learn ing algorithms For instance empirically successful algorithms such as C and backpropagation have not met the criteria of the PAC model and its vari ants Conversely the algorithms suggested by com putational learning theory are usually too limited in various ways to nd wide application The theoreti cal status of decision tree learning algorithms is a case in point while it has been proven that C and all reasonable variants of it fails to meet the PAC model criteria other recently proposed decision tree algo rithms that do have non trivial performance guaran tees unfortunately require membership queries Two recent developments have narrowed this gap between theory and practice not for the PAC model but for the related model known as weak learning or boosting First an algorithm called Adaboost was proposed that meets the formal criteria of the boosting model and is also competitive in practice Second the basic algorithms underlying the popular C and CART programs have also very recently been shown to meet the formal criteria of the boosting model Thus it seems plausible that the weak learning frame work may provide a setting for interaction between formal analysis and machine learning practice that is lacking in other theoretical models Our aim in this paper is to push this interaction further in light of these recent developments In par ticular we perform experiments suggested by the for mal results for Adaboost and C within the weak learning framework We concentrate on two particu larly intriguing issues First the theoretical boosting results for top down decision tree algorithms such as C suggest that a new splitting criterion may result in trees that are smaller and more accurate than those obtained using the usual information gain We con rm this suggestion experimentally Second a super cial interpretation of the theo retical results suggests that Adaboost should vastly outperform C This is not the case in practice and we argue through experimental results that the theory must be understood in terms of a measure of a boosting algorithm s behavior called its advantage sequence We compare the advantage sequences for C and Adaboost in a number of experiments We nd that these sequences have qualitatively dif ferent behavior that explains in large part the dis crepancies between empirical performance and the theoretical results Brie y we nd that although C and Adaboost are both boosting algorithms Adaboost creates successively harder ltered dis tributions while C creates successively easier ones in a sense that will be made precise"}
{"_id": "9a5b3f24d1e17cd675cc95d5ebcf6cee8a4b4811", "title": "Sex similarities and differences in preferences for short-term mates: what, whether, and why.", "text": "Are there sex differences in criteria for sexual relationships? The answer depends on what question a researcher asks. Data suggest that, whereas the sexes differ in whether they will enter short-term sexual relationships, they are more similar in what they prioritize in partners for such relationships. However, additional data and context of other findings and theory suggest different underlying reasons. In Studies 1 and 2, men and women were given varying \"mate budgets\" to design short-term mates and were asked whether they would actually mate with constructed partners. Study 3 used a mate-screening paradigm. Whereas women have been found to prioritize status in long-term mates, they instead (like men) prioritize physical attractiveness much like an economic necessity in short-term mates. Both sexes also show evidence of favoring well-rounded long- and short-term mates when given the chance. In Studies 4 and 5, participants report reasons for having casual sex and what they find physically attractive. For women, results generally support a good genes account of short-term mating, as per strategic pluralism theory (S. W. Gangestad & J. A. Simpson, 2000). Discussion addresses broader theoretical implications for mate preference, and the link between method and theory in examining social decision processes."}
{"_id": "1ef415ce920b2ca197a21bbfd710e7a9dc7a655e", "title": "Interpersonal connectedness: conceptualization and directions for a measurement instrument", "text": "Interpersonal connectedness is the sense of belonging based on the appraisal of having sufficient close social contacts. This feeling is regarded as one of the major outcomes of successful (mediated) social interaction and as such an important construct for HCI. However, the exact nature of this feeling, how to achieve it, and how to assess it remain unexplored to date. In the current paper we start the theoretical conceptualization of this phenomenon by exploring its basic origins in psychological literature and simultaneously formulate requirements for a measurement instrument to be developed in the service of exploring and testing CMC applications, in particular awareness technologies."}
{"_id": "686fb884072b323d6bd365bdee1df894ee996758", "title": "Security challenges in software defined network and their solutions", "text": "The main purpose of Software Defined Networking (SDN) is to allow network engineers to respond quickly to changing network industrial requirements. This network technology focuses on making network as adaptable and active as virtual server. SDN is physical separation of Control plane from Data plane and control plane is centralized to manage underlying infrastructure. Hence, the SDN permit network administrator to adjust wide traffic flow from centralized control console without having to touch Switches and Routers, and can provide the services to wherever they are needed in the network. As in SDN the control plane is disassociated from underlying forwarding plane, however, susceptible to many security challenges like Denial of Service (DoS) attack, Distributed DoS (DDoS) attack, Volumetric attack. In this paper, we highlight some security challenges and evaluate some security solutions."}
{"_id": "0f8468de03ee9f12d693237bec87916311bf1c24", "title": "The Seventh PASCAL Recognizing Textual Entailment Challenge", "text": "This paper presents the Seventh Recognizing Textual Entailment (RTE-7) challenge. This year\u2019s challenge replicated the exercise proposed in RTE-6, consisting of a Main Task, in which Textual Entailment is performed on a real corpus in the Update Summarization scenario; a Main subtask aimed at detecting novel information; and a KBP Validation Task, in which RTE systems had to validate the output of systems participating in the KBP Slot Filling Task. Thirteen teams participated in the Main Task (submitting 33 runs) and 5 in the Novelty Detection Subtask (submitting 13 runs). The KBP Validation Task was undertaken by 2 participants which submitted 5 runs. The ablation test experiment, introduced in RTE-5 to evaluate the impact of knowledge resources used by the systems participating in the Main Task and extended also to tools in RTE-6, was also repeated in RTE-7."}
{"_id": "b13d8434b140ac3b9cb923b91afc17d1e448abfc", "title": "Mobile applications for weight management: theory-based content analysis.", "text": "BACKGROUND\nThe use of smartphone applications (apps) to assist with weight management is increasingly prevalent, but the quality of these apps is not well characterized.\n\n\nPURPOSE\nThe goal of the study was to evaluate diet/nutrition and anthropometric tracking apps based on incorporation of features consistent with theories of behavior change.\n\n\nMETHODS\nA comparative, descriptive assessment was conducted of the top-rated free apps in the Health and Fitness category available in the iTunes App Store. Health and Fitness apps (N=200) were evaluated using predetermined inclusion/exclusion criteria and categorized based on commonality in functionality, features, and developer description. Four researchers then evaluated the two most popular apps in each category using two instruments: one based on traditional behavioral theory (score range: 0-100) and the other on the Fogg Behavioral Model (score range: 0-6). Data collection and analysis occurred in November 2012.\n\n\nRESULTS\nEligible apps (n=23) were divided into five categories: (1) diet tracking; (2) healthy cooking; (3) weight/anthropometric tracking; (4) grocery decision making; and (5) restaurant decision making. The mean behavioral theory score was 8.1 (SD=4.2); the mean persuasive technology score was 1.9 (SD=1.7). The top-rated app on both scales was Lose It! by Fitnow Inc.\n\n\nCONCLUSIONS\nAll apps received low overall scores for inclusion of behavioral theory-based strategies."}
{"_id": "2b211f9553ec78ff17fa3ebe16c0a036ef33c54b", "title": "Constructions from Dots and Lines", "text": "Marko A. Rodriguez is graph systems architect at AT&T Interactive. He can be reached at markomarkorodriguez.com. Peter Neubauer is chief operating officer of Neo Technology. He can be reached at peter.neubauerneotechnology.com A graph is a data structure composed of dots (i.e., vertices) and lines (i.e., edges). The dots and lines of a graph can be organized into intricate arrangements. A graph\u2019s ability to denote objects and their relationships to one another allows for a surprisingly large number of things to be modeled as graphs. From the dependencies that link software packages to the wood beams that provide the framing to a house, most anything has a corresponding graph representation. However, just because it is possible to represent something as a graph does not necessarily mean that its graph representation will be useful. If a modeler can leverage the plethora of tools and algorithms that store and process graphs, then such a mapping is worthwhile. This article explores the world of graphs in computing and exposes situations in which graphical models are beneficial."}
{"_id": "0122bba20a91c739ffb6dd4c68cf84b9305ccfc0", "title": "Hilbert Space Embeddings of Hidden Markov Models", "text": "Hidden Markov Models (HMMs) are important tools for modeling sequence data. However, they are restricted to discrete latent states, and are largely restricted to Gaussian and discrete observations. And, learning algorithms for HMMs have predominantly relied on local search heuristics, with the exception of spectral methods such as those described below. We propose a nonparametric HMM that extends traditional HMMs to structured and non-Gaussian continuous distributions. Furthermore, we derive a localminimum-free kernel spectral algorithm for learning these HMMs. We apply our method to robot vision data, slot car inertial sensor data and audio event classification data, and show that in these applications, embedded HMMs exceed the previous state-of-the-art performance."}
{"_id": "0c5e3186822a3d10d5377b741f36b6478d0a8667", "title": "Closing the learning-planning loop with predictive state representations", "text": "A central problem in artificial intelligence is that of planning to maximize future reward under uncertainty in a partially observable environment. In this paper we propose and demonstrate a novel algorithm which accurately learns a model of such an environment directly from sequences of action-observation pairs. We then close the loop from observations to actions by planning in the learned model and recovering a policy which is near-optimal in the original environment. Specifically, we present an efficient and statistically consistent spectral algorithm for learning the parameters of a Predictive State Representation (PSR). We demonstrate the algorithm by learning a model of a simulated high-dimensional, vision-based mobile robot planning task, and then perform approximate point-based planning in the learned PSR. Analysis of our results shows that the algorithm learns a state space which efficiently captures the essential features of the environment. This representation allows accurate prediction with a small number of parameters, and enables successful and efficient planning."}
{"_id": "16611312448f5897c7a84e2f590617f4fa3847c4", "title": "A Spectral Algorithm for Learning Hidden Markov Models", "text": "Hidden Markov Models (HMMs) are one of the most fundamental a nd widely used statistical tools for modeling discrete time series. Typically, they are learned using sea rch heuristics (such as the Baum-Welch / EM algorithm), which suffer from the usual local optima issues. While in gen eral these models are known to be hard to learn with samples from the underlying distribution, we provide t h first provably efficient algorithm (in terms of sample and computational complexity) for learning HMMs under a nat ur l separation condition. This condition is roughly analogous to the separation conditions considered for lear ning mixture distributions (where, similarly, these model s are hard to learn in general). Furthermore, our sample compl exity results do not explicitly depend on the number of distinct (discrete) observations \u2014 they implicitly depend on this number through spectral properties of the underlyin g HMM. This makes the algorithm particularly applicable to se ttings with a large number of observations, such as those in natural language processing where the space of observati on is sometimes the words in a language. Finally, the algorithm is particularly simple, relying only on a singula r value decomposition and matrix multiplications."}
{"_id": "5ec33788f9908f69255c9df424c51c7495546893", "title": "Hilbert space embeddings of conditional distributions with applications to dynamical systems", "text": "In this paper, we extend the Hilbert space embedding approach to handle conditional distributions. We derive a kernel estimate for the conditional embedding, and show its connection to ordinary embeddings. Conditional embeddings largely extend our ability to manipulate distributions in Hilbert spaces, and as an example, we derive a nonparametric method for modeling dynamical systems where the belief state of the system is maintained as a conditional embedding. Our method is very general in terms of both the domains and the types of distributions that it can handle, and we demonstrate the effectiveness of our method in various dynamical systems. We expect that conditional embeddings will have wider applications beyond modeling dynamical systems."}
{"_id": "645b2c28c28bd28eaa187a2faafa5ec12bc12e3a", "title": "Knowledge tracing: Modeling the acquisition of procedural knowledge", "text": "This paper describes an effort to model students' changing knowledge state during skill acquisition. Students in this research are learning to write short programs with the ACT Programming Tutor (APT). APT is constructed around a production rule cognitive model of programming knowledge, called theideal student model. This model allows the tutor to solve exercises along with the student and provide assistance as necessary. As the student works, the tutor also maintains an estimate of the probability that the student has learned each of the rules in the ideal model, in a process calledknowledge tracing. The tutor presents an individualized sequence of exercises to the student based on these probability estimates until the student has \u2018mastered\u2019 each rule. The programming tutor, cognitive model and learning and performance assumptions are described. A series of studies is reviewed that examine the empirical validity of knowledge tracing and has led to modifications in the process. Currently the model is quite successful in predicting test performance. Further modifications in the modeling process are discussed that may improve performance levels."}
{"_id": "420a5f02b079d596ec2da0b5cddda43326226a09", "title": "Differential evolution algorithm with multiple mutation strategies based on roulette wheel selection", "text": "In this paper, we propose a differential evolution (DE) algorithm variant with a combination of multiple mutation strategies based on roulette wheel selection, which we call MMRDE. We first propose a new, reflection-based mutation operation inspired by the reflection operations in the Nelder\u2013Mead method. We design an experiment to compare its performance with seven mutation strategies, and we prove its effectiveness at balancing exploration and exploitation of DE. Although our reflection-based mutation strategy can balance exploration and exploitation of DE, it is still prone to premature convergence or evolutionary stagnation when solving complex multimodal optimization problems. Therefore, we add two basic strategies to help maintain population diversity and increase the robustness. We use roulette wheel selection to arrange mutation strategies based on their success rates for each individual. MMRDE is tested with some improved DE variants based on 28 benchmark functions for real-parameter optimization that have been recommended by the Institute of Electrical and Electronics Engineers CEC2013 special session. Experimental results indicate that the proposed algorithm shows its effectiveness at cooperative work with multiple strategies. It can obtain a good balance between exploration and exploitation. The proposed algorithm can guide the search for a global optimal solution with quick convergence compared with other improved DE variants."}
{"_id": "7cf3e97ef62c25a4df6cafb79e2efa8d605419b8", "title": "Managing the knowledge lifecycle: A integrated knowledge management process model", "text": "The main purpose of this study is to propose an integrated conceptual model for exploring knowledge management process. Reviewing literature on knowledge management and tracing its historical background, definitions and dominant paradigms and analysis 32 frameworks of KM, a relatively integrated model of KM was presented in this paper. The study found many similarities of models and its presents a model which combine all the best features of the models analyzed. A 10-stage integrated model was proposed. These stages include some activities such as knowledge goal setting, identification, acquisition, creation, organization, sharing, evaluation, preservation, retention and updating and KM effectiveness evaluation. The findings can be used by managers in all organizational environments to implementation and auditing of KM practices."}
{"_id": "55295193bc32ffd43c592ad19b4eabfde282c9b1", "title": "An Augmented Reality museum guide", "text": "Recent years have seen advances in many enabling augmented reality technologies. Furthermore, much research has been carried out on how augmented reality can be used to enhance existing applications. This paper describes our experiences with an AR-museum guide that combines some of the latest technologies. Amongst other technologies, markerless tracking, hybrid tracking, and an ultra-mobile-PC were used. Like existing audio guides, the AR-guide can be used by any museum visitor, during a six-month exhibition on Islamic art. We provide a detailed description of the museumpsilas motivation for using AR, of our experiences in developing the system, and the initial results of user surveys. Taking this information into account, we can derive possible system improvements."}
{"_id": "523cf537aa1050efdcf0befe1d851b363afa0396", "title": "Security in Cloud Computing Using Cryptographic Algorithms Miss .", "text": "Cloud computing is the concept implemented to decipher the Daily Computing Problems. Cloud computing is basically virtual pool of resources and it provides these resources to users via internet. Cloud computing is the internet based development and used in computer technology. The prevalent problem associated with cloud computing is data privacy, security, anonymity and reliability etc. But the most important between them is security and how cloud provider assures it. To secure the Cloud means secure the treatments (calculations) and storage (databases hosted by the Cloud provider). In this paper we analyses different security issues to cloud and different cryptographic algorithms adoptable to better security for the cloud. Keywords\u2014 Cloud Computing, Cryptographic Algorithm, Internet, Security Algorithms, Security Attacks, Security Issue"}
{"_id": "a94250803137af3aedf73e1cd2d9146a21b29356", "title": "CEFAM: Comprehensive Evaluation Framework for Agile Methodologies", "text": "Agile software development is regarded as an effective and efficient approach, mainly due to its ability to accommodate rapidly changing requirements, and to cope with modern software development challenges. There is therefore a strong tendency to use agile software development methodologies where applicable; however, the sheer number of existing agile methodologies and their variants hinders the selection of an appropriate agile methodology or method chunk. Methodology evaluation tools address this problem through providing detailed evaluations, yet no comprehensive evaluation framework is available for agile methodologies. We introduce the comprehensive evaluation framework for agile methodologies (CEFAM) as an evaluation tool for project managers and method engineers. The hierarchical (and mostly quantitative) evaluation criterion set introduced in this evaluation framework enhances the usability of the framework and provides results that are precise enough to be useful for the selection, adaptation and construction of agile methodologies."}
{"_id": "687dbe4438022dc4521ae6ff53d7a6dc04c9d154", "title": "Design and operation of Wi-Fi agribot integrated system", "text": "Robotics in agriculture is not a new concept; in controlled environments (green houses), it has a history of over 20 years. Research has been performed to develop harvesters for cherry tomatoes, cucumbers, mushrooms, and other fruits. In horticulture, robots have been introduced to harvest citrus and apples. In this paper autonomous robot for agriculture (AgriBot) is a prototype and implemented for performing various agricultural activities like seeding, weeding, spraying of fertilizers, insecticides. AgriBot is controlled with a Arduino Mega board having At mega 2560 microcontroller. The powerful Raspberry Pi a mini computer is used to control and monitor the working of the robot. The Arduino Mega is mounted on a robot allowing for access to all of the pins for rapid prototyping. Its hexapod body can autonomously walk in any direction, avoiding objects with its ultrasonic proximity sensor. Its walking algorithms allow it to instantly change direction and walk in any new direction without turning its body. An underbody sensory array allows the robot to know if a seed has been planted in the area at the optimal spacing and depth. AgriBot can then dig a hole, plant a seed in the hole, cover the seed with soil, and apply any pre-emergence fertilizers and/or herbicides along with the marking agent. AgriBot can then signal to other robots in the immediate proximity that it needs help planting in that area or that this area has been planted and to move on by communicating through Wi-Fi."}
{"_id": "80372b437142b8a27a9b496c21c2f2708f6f3ae3", "title": "A practical VEP-based brain-computer interface", "text": "This paper introduces the development of a practical brain-computer interface at Tsinghua University. The system uses frequency-coded steady-state visual evoked potentials to determine the gaze direction of the user. To ensure more universal applicability of the system, approaches for reducing user variation on system performance have been proposed. The information transfer rate (ITR) has been evaluated both in the laboratory and at the Rehabilitation Center of China, respectively. The system has been proved to be applicable to >90% of people with a high ITR in living environments."}
{"_id": "cd81ec0f8f66e4e82bd4123fdf7d39bc75e6d441", "title": "Projector Calibration by \"Inverse Camera Calibration\"", "text": "The accuracy of 3-D reconstructions depends substantially on the accuracy of active vision system calibration. In this work, the problem of video projector calibration is solved by inverting the standard camera calibration work flow. The calibration procedure requires a single camera, which does not need to be calibrated and which is used as the sensor whether projected dots and calibration pattern landmarks, such as the checkerboard corners, coincide. The method iteratively adjusts the projected dots to coincide with the landmarks and the final coordinates are used as inputs to a camera calibration method. The otherwise slow iterative adjustment is accelerated by estimating a plane homography between the detected landmarks and the projected dots, which makes the calibration method fast."}
{"_id": "3eff1f3c6899cfb7b66694908e1de92764e3ba04", "title": "FEEL: Featured Event Embedding Learning", "text": "Statistical script learning is an effective way to acquire world knowledge which can be used for commonsense reasoning. Statistical script learning induces this knowledge by observing event sequences generated from texts. The learned model thus can predict subsequent events, given earlier events. Recent approaches rely on learning event embeddings which capture script knowledge. In this work, we suggest a general learning model\u2013Featured Event Embedding Learning (FEEL)\u2013for injecting event embeddings with fine grained information. In addition to capturing the dependencies between subsequent events, our model can take into account higher level abstractions of the input event which help the model generalize better and account for the global context in which the event appears. We evaluated our model over three narrative cloze tasks, and showed that our model is competitive with the most recent state-of-the-art. We also show that our resulting embedding can be used as a strong representation for advanced semantic tasks such as discourse parsing and sentence semantic relatedness."}
{"_id": "8ad6fda2d41dd823d2569797c8c7353dad31b371", "title": "Attribute-based encryption with non-monotonic access structures", "text": "We construct an Attribute-Based Encryption (ABE) scheme that allows a user's private key to be expressed in terms of any access formula over attributes. Previous ABE schemes were limited to expressing only monotonic access structures. We provide a proof of security for our scheme based on the Decisional Bilinear Diffie-Hellman (BDH) assumption. Furthermore, the performance of our new scheme compares favorably with existing, less-expressive schemes."}
{"_id": "7ee87dc96108eca0ae8d5c2d5a8b44bd4d0f0fe1", "title": "Browser Fingerprinting as user tracking technology", "text": "The Web has become an indispensable part of our society and is currently most commonly used mode of information delivery. Millions of users access the free services provided by the websites on daily basis and while providing these free services websites track and profile their web users. In this environment, the ability to track users and their online habits can be very lucrative for advertising companies, yet very intrusive for the privacy of users. The objective of this paper is to study about the increasingly common yet hardly discussed technique of identifying individual Web users and tracking them across multiple websites known as \u201cBrowser Fingerprinting\u201d. A unique browser fingerprint is derived by the unique pattern of information visible whenever a computer visits a website. The permutations thus collected are sufficiently distinct that they can be used as a tool for tracking. Unlike cookies, Fingerprints are generated on server side and are difficult for a user to influence. The main objective of this research is study about how the fingerprinting was evolved, its positives and negatives, what threat it poses to users' online privacy and what countermeasures could be used to prevent it. This paper will also analyse which different properties the browsers send to the server, allowing a unique fingerprint of those browsers to be created."}
{"_id": "141c85bcd27b9d53f97cba73c6f4c88455654dac", "title": "Zone Divisional Hierarchical Routing Protocol for Wireless Sensor Network", "text": "Clustering prolongs energy resources, improves scalability and preserves communication bandwidth. Clustering is either classified as static and dynamic or as equal and unequal. In cluster based routing protocols that employ multi-hop communication, imbalanced energy consumption among the nodes results in hot-spots. Unequal clustering overcomes hot-spots but requires a high overhead and is prone to connectivity issues. To offer guaranteed connectivity and alleviate the hot-spot problem, a zone divisional hierarchical routing protocol has been proposed in this paper. The network is divided into equal sized static rectangular clusters which are assigned to two zones namely near zone and far zone. The zone facing the base station is known as the near zone and rest of the network space makes up the far zone which is further divided into sub-zones. Dual cluster heads approach for sharing the reception, aggregation and forwarding tasks is proposed. The performance evaluation of the proposed protocol against the existing protocols reveals that the method offers energy-efficient multi-hop communication support, uses negligible overhead, prevents creation of hot-spots, avoids early death of nodes, uses balanced energy consumption across the network and maximizes the network lifetime."}
{"_id": "96ac16ce5d0d094fe5bf675a4a15e65025d85874", "title": "MOBA: a New Arena for Game AI", "text": "Games have always been popular testbeds for Artificial Intelligence (AI). In the last decade, we have seen the rise of the Multiple Online Battle Arena (MOBA) games, which are the most played games nowadays. In spite of this, there are few works that explore MOBA as a testbed for AI Research. In this paper we present and discuss the main features and opportunities offered by MOBA games to Game AI Research. We describe the various challenges faced along the game and also propose a discrete model that can be used to better understand and explore the game. With this, we aim to encourage the use of MOBA as a novel research platform for Game AI."}
{"_id": "4f3dbfec5c67f0fb0602d9c803a391bc2f6ee4c7", "title": "A 20-GHz phase-locked loop for 40-gb/s serializing transmitter in 0.13-/spl mu/m CMOS", "text": "A 20-GHz phase-locked loop with 4.9 ps/sub pp//0.65 ps/sub rms/ jitter and -113.5 dBc/Hz phase noise at 10-MHz offset is presented. A half-duty sampled-feedforward loop filter that simply replaces the resistor with a switch and an inverter suppresses the reference spur down to -44.0 dBc. A design iteration procedure is outlined that minimizes the phase noise of a negative-g/sub m/ oscillator with a coupled microstrip resonator. Static frequency dividers made of pulsed latches operate faster than those made of flip-flops and achieve near 2:1 frequency range. The phase-locked loop fabricated in a 0.13-/spl mu/m CMOS operates from 17.6 to 19.4GHz and dissipates 480mW."}
{"_id": "2edc7ee4ef1b19d104f34395f4e977afac12ea64", "title": "Color Balance and Fusion for Underwater Image Enhancement", "text": "We introduce an effective technique to enhance the images captured underwater and degraded due to the medium scattering and absorption. Our method is a single image approach that does not require specialized hardware or knowledge about the underwater conditions or scene structure. It builds on the blending of two images that are directly derived from a color-compensated and white-balanced version of the original degraded image. The two images to fusion, as well as their associated weight maps, are defined to promote the transfer of edges and color contrast to the output image. To avoid that the sharp weight map transitions create artifacts in the low frequency components of the reconstructed image, we also adapt a multiscale fusion strategy. Our extensive qualitative and quantitative evaluation reveals that our enhanced images and videos are characterized by better exposedness of the dark regions, improved global contrast, and edges sharpness. Our validation also proves that our algorithm is reasonably independent of the camera settings, and improves the accuracy of several image processing applications, such as image segmentation and keypoint matching."}
{"_id": "1fcaf7ddcadda724d67684d66856c107375f448b", "title": "Rationale-Augmented Convolutional Neural Networks for Text Classification", "text": "We present a new Convolutional Neural Network (CNN) model for text classification that jointly exploits labels on documents and their constituent sentences. Specifically, we consider scenarios in which annotators explicitly mark sentences (or snippets) that support their overall document categorization, i.e., they provide rationales. Our model exploits such supervision via a hierarchical approach in which each document is represented by a linear combination of the vector representations of its component sentences. We propose a sentence-level convolutional model that estimates the probability that a given sentence is a rationale, and we then scale the contribution of each sentence to the aggregate document representation in proportion to these estimates. Experiments on five classification datasets that have document labels and associated rationales demonstrate that our approach consistently outperforms strong baselines. Moreover, our model naturally provides explanations for its predictions."}
{"_id": "a11c59837b88f27c2f58a1c562b96450f7d52c3f", "title": "Informing Pedagogical Action: Aligning Learning Analytics With Learning Design", "text": "This article considers the developing field of learning analytics and argues that to move from small-scale practice to broad scale applicability, there is a need to establish a contextual framework that helps teachers interpret the information that analytics provides. The article presents learning design as a form of documentation of pedagogical intent that can provide the context for making sense of diverse sets of analytic data. We investigate one example of learning design to explore how broad categories of analytics\u2014which we call checkpoint and process analytics\u2014can inform the interpretation of outcomes from a learning design and facilitate pedagogical action."}
{"_id": "20b41b2a0d8ee71efd3986b4baeed24eba904350", "title": "Maternal depression and early childhood growth in developing countries: systematic review and meta-analysis.", "text": "OBJECTIVE\nTo investigate the relationship between maternal depression and child growth in developing countries through a systematic literature review and meta-analysis.\n\n\nMETHODS\nSix databases were searched for studies from developing countries on maternal depression and child growth published up until 2010. Standard meta-analytical methods were followed and pooled odds ratios (ORs) for underweight and stunting in the children of depressed mothers were calculated using random effects models for all studies and for subsets of studies that met strict criteria on study design, exposure to maternal depression and outcome variables. The population attributable risk (PAR) was estimated for selected studies.\n\n\nFINDINGS\nSeventeen studies including a total of 13,923 mother and child pairs from 11 countries met inclusion criteria. The children of mothers with depression or depressive symptoms were more likely to be underweight (OR: 1.5; 95% confidence interval, CI: 1.2-1.8) or stunted (OR: 1.4; 95% CI: 1.2-1.7). Subanalysis of three longitudinal studies showed a stronger effect: the OR for underweight was 2.2 (95% CI: 1.5-3.2) and for stunting, 2.0 (95% CI: 1.0-3.9). The PAR for selected studies indicated that if the infant population were entirely unexposed to maternal depressive symptoms 23% to 29% fewer children would be underweight or stunted.\n\n\nCONCLUSION\nMaternal depression was associated with early childhood underweight and stunting. Rigorous prospective studies are needed to identify mechanisms and causes. Early identification, treatment and prevention of maternal depression may help reduce child stunting and underweight in developing countries."}
{"_id": "63d91c875bad47f2cf563a00b39218b200d6814b", "title": "Application of Artificial Neural Networks to Multiple Criteria Inventory Classification", "text": "Original scientific paper Inventory classification is a very important part of inventory control which represents the technique of operational research discipline. A systematic approach to the inventory control and classification may have a significant influence on company competitiveness. The paper describes the results obtained by investigating the application of neural networks in multiple criteria inventory classification. Various structures of a back-propagation neural network have been analysed and the optimal one with the minimum Root Mean Square error selected. The predicted results are compared to those obtained by the multiple criteria classification using the analytical hierarchy process."}
{"_id": "3a932920c44c06b43fc24393c8710dfd2238eb37", "title": "Methods for Pretreatment of Lignocellulosic Biomass for Efficient Hydrolysis and Biofuel Production", "text": "Industrial & Engineering Chemistry Research is published by the American Chemical Society. 1155 Sixteenth Street N.W., Washington, DC 20036 Review Methods for Pretreatment of Lignocellulosic Biomass for Efficient Hydrolysis and Biofuel Production Parveen Kumar, Diane M. Barrett, Michael J. Delwiche, and Pieter Stroeve Ind. Eng. Chem. Res., Article ASAP \u2022 DOI: 10.1021/ie801542g \u2022 Publication Date (Web): 20 March 2009 Downloaded from http://pubs.acs.org on March 26, 2009"}
{"_id": "0cc95944a0fcbeb402b02bc86b522bef79873f16", "title": "Automatic Game Design via Mechanic Generation", "text": "Game designs often center on the game mechanics\u2014 rules governing the logical evolution of the game. We seek to develop an intelligent system that generates computer games. As first steps towards this goal we present a composable and cross-domain representation for game mechanics that draws from AI planning action representations. We use a constraint solver to generate mechanics subject to design requirements on the form of those mechanics\u2014what they do in the game. A planner takes a set of generated mechanics and tests whether those mechanics meet playability requirements\u2014controlling how mechanics function in a game to affect player behavior. We demonstrate our system by modeling and generating mechanics in a role-playing game, platformer game, and combined role-playing-platformer game."}
{"_id": "2f17bde51760f3d5577043c3ba83173d583a66ac", "title": "Central Retinal Artery Occlusion and Partial Ophthalmoplegia Caused by Hyaluronic Acid Injection Filler Cosmetic", "text": "Central retinal artery occlusion is a rare ocular emergency condition that can lead to a total blindness. Hereby we report a 30-year-old woman with a sudden visual loss of the left eye after being injected by hyaluronic acid filler at the dorsum nasal area for cosmetic purpose. Visual acuity was light perception while anterior segment and intraocular pressure were within normal limit. Partial ophthalmoplegia include restriction to the nasal gaze were found. Funduscopic examination revealed total retinal edema with a cherry red spot at the fovea. Ocular massage was performed as initial management followed by humor aqueous paracentesis. All procedures were done within 90 minutes. The patients discharge with the visual improvement to hand motion and the Partial ophthalmophlegia has been improved after being evaluated less than a month."}
{"_id": "5b0295e92ac6f493f23f46efcd6fdfbbca74ac48", "title": "Critiquing-based recommenders: survey and emerging trends", "text": "Critiquing-based recommender systems elicit users\u2019 feedback, called critiques, which they made on the recommended items. This conversational style of interaction is in contract to the standard model where users receive recommendations in a single interaction. Through the use of the critiquing feedback, the recommender systems are able to more accurately learn the users\u2019 profiles, and therefore suggest better recommendations in the subsequent rounds. Critiquing-based recommenders have been widely studied in knowledge-, content-, and preference-based recommenders and are beginning to be tried in several online websites, such as MovieLens. This article examines the motivation and development of the subject area, and offers a detailed survey of the state of the art concerning the design of critiquing interfaces and development of algorithms for critiquing generation. With the help of categorization analysis, the survey reveals three principal branches of critiquing based recommender systems, using respectively natural language based, system-suggested, and user-initiated critiques. Representative example systems will be presented and analyzed for each branch, and their respective pros and cons will be discussed. Subsequently, a hybrid framework is developed to unify the advantages of different methods and overcome their respective limitations. Empirical findings from user studies are further presented, indicating how hybrid critiquing supports could effectively enable end-users to achieve more confident decisions. Finally, the article will point out several future trends to boost the advance of critiquing-based recommenders."}
{"_id": "5c38df0e9281c60b32550d92bed6e5af9a869c05", "title": "Expert Level Control of Ramp Metering Based on Multi-Task Deep Reinforcement Learning", "text": "This paper shows how the recent breakthroughs in reinforcement learning (RL) that have enabled robots to learn to play arcade video games, walk, or assemble colored bricks, can be used to perform other tasks that are currently at the core of engineering cyberphysical systems. We present the first use of RL for the control of systems modeled by discretized non-linear partial differential equations (PDEs) and devise a novel algorithm to use non-parametric control techniques for large multi-agent systems. Cyberphysical systems (e.g., hydraulic channels, transportation systems, the energy grid, and electromagnetic systems) are commonly modeled by PDEs, which historically have been a reliable way to enable engineering applications in these domains. However, it is known that the control of these PDE models is notoriously difficult. We show how neural network-based RL enables the control of discretized PDEs whose parameters are unknown, random, and time-varying. We introduce an algorithm of mutual weight regularization (MWR), which alleviates the curse of dimensionality of multi-agent control schemes by sharing experience between agents while giving each agent the opportunity to specialize its action policy so as to tailor it to the local parameters of the part of the system it is located in. A discretized PDE, such as the scalar Lighthill\u2013Whitham\u2013Richards PDE can indeed be considered as a macroscopic freeway traffic simulator and which presents the most salient challenges for learning to control large cyberphysical system with multiple agents. We consider two different discretization procedures and show the opportunities offered by applying deep reinforcement for continuous control on both. Using a neural RL PDE controller on a traffic flow simulation based on a Godunov discretization of the San Francisco Bay Bridge, we are able to achieve precise adaptive metering without model calibration thereby beating the state of the art in traffic metering. Furthermore, with the more accurate BeATS simulator, we manage to achieve a control performance on par with ALINEA, a state-of-the-art parametric control scheme, and show how using MWR improves the learning procedure."}
{"_id": "9ec3b78b826df149fb215f005e32e56afaf532da", "title": "Design Issues and Challenges in Wireless Sensor Networks", "text": "Wireless Sensor Networks (WSNs) are composed self-organized wireless ad hoc networks which comprise of a large number of resource constrained sensor nodes. The major areas of research in WSN is going on hardware, and operating system of WSN, deployment, architecture, localization, synchronization, programming models, data aggregation and dissemination, database querying, architecture, middleware, quality of service and security. This paper study highlights ongoing research activities and issues that affect the design and performance of Wireless Sensor Network."}
{"_id": "15cf63f8d44179423b4100531db4bb84245aa6f1", "title": "Deep Learning", "text": "Machine-learning technology powers many aspects of modern society: from web searches to content filtering on social networks to recommendations on e-commerce websites, and it is increasingly present in consumer products such as cameras and smartphones. Machine-learning systems are used to identify objects in images, transcribe speech into text, match news items, posts or products with users\u2019 interests, and select relevant results of search. Increasingly, these applications make use of a class of techniques called deep learning. Conventional machine-learning techniques were limited in their ability to process natural data in their raw form. For decades, constructing a pattern-recognition or machine-learning system required careful engineering and considerable domain expertise to design a feature extractor that transformed the raw data (such as the pixel values of an image) into a suitable internal representation or feature vector from which the learning subsystem, often a classifier, could detect or classify patterns in the input. Representation learning is a set of methods that allows a machine to be fed with raw data and to automatically discover the representations needed for detection or classification. Deep-learning methods are representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level. With the composition of enough such transformations, very complex functions can be learned. For classification tasks, higher layers of representation amplify aspects of the input that are important for discrimination and suppress irrelevant variations. An image, for example, comes in the form of an array of pixel values, and the learned features in the first layer of representation typically represent the presence or absence of edges at particular orientations and locations in the image. The second layer typically detects motifs by spotting particular arrangements of edges, regardless of small variations in the edge positions. The third layer may assemble motifs into larger combinations that correspond to parts of familiar objects, and subsequent layers would detect objects as combinations of these parts. The key aspect of deep learning is that these layers of features are not designed by human engineers: they are learned from data using a general-purpose learning procedure. Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years. It has turned out to be very good at discovering intricate structures in high-dimensional data and is therefore applicable to many domains of science, business and government. In addition to beating records in image recognition and speech recognition, it has beaten other machine-learning techniques at predicting the activity of potential drug molecules, analysing particle accelerator data, reconstructing brain circuits, and predicting the effects of mutations in non-coding DNA on gene expression and disease. Perhaps more surprisingly, deep learning has produced extremely promising results for various tasks in natural language understanding, particularly topic classification, sentiment analysis, question answering and language translation. 
We think that deep learning will have many more successes in the near future because it requires very little engineering by hand, so it can easily take advantage of increases in the amount of available computation and data. New learning algorithms and architectures that are currently being developed for deep neural networks will only accelerate this progress."}
{"_id": "268a7cd1bf65351bba4eb97aac373d624af2c08f", "title": "A Persona-Based Neural Conversation Model", "text": "We present persona-based models for handling the issue of speaker consistency in neural response generation. A speaker model encodes personas in distributed embeddings that capture individual characteristics such as background information and speaking style. A dyadic speakeraddressee model captures properties of interactions between two interlocutors. Our models yield qualitative performance improvements in both perplexity and BLEU scores over baseline sequence-to-sequence models, with similar gains in speaker consistency as measured by human judges."}
{"_id": "32de3580916a1e5f36bc54779ba5904c19daa403", "title": "Intelligent Tutoring Systems with Conversational Dialogue", "text": "been developed during the last 20 years have proven to be quite successful, particularly in the domains of mathematics, science, and technology. They produce significant learning gains beyond classroom environments. They are capable of engaging most students\u2019 attention and interest for hours. We have been working on a new generation of intelligent tutoring systems that hold mixedinitiative conversational dialogues with the learner. The tutoring systems present challenging problems and questions to the learner, the learner types in answers in English, and there is a lengthy multiturn dialogue as complete solutions or answers evolve. This article presents the tutoring systems that we have been developing. AUTOTUTOR is a conversational agent, with a talking head, that helps college students learn about computer literacy. ANDES, ATLAS, AND WHY2 help adults learn about physics. Instead of being mere information-delivery systems, our systems help students actively construct knowledge through conversations."}
{"_id": "3b9732bb07dc99bde5e1f9f75251c6ea5039373e", "title": "Deep Reinforcement Learning with Double Q-learning", "text": "The popular Q-learning algorithm is known to overestimate action values under certain conditions. It was not previously known whether, in practice, such overestimations are common, whether they harm performance, and whether they can generally be prevented. In this paper, we answer all these questions affirmatively. In particular, we first show that the recent DQN algorithm, which combines Q-learning with a deep neural network, suffers from substantial overestimations in some games in the Atari 2600 domain. We then show that the idea behind the Double Q-learning algorithm, which was introduced in a tabular setting, can be generalized to work with large-scale function approximation. We propose a specific adaptation to the DQN algorithm and show that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games. The goal of reinforcement learning (Sutton and Barto 1998) is to learn good policies for sequential decision problems, by optimizing a cumulative future reward signal. Q-learning (Watkins 1989) is one of the most popular reinforcement learning algorithms, but it is known to sometimes learn unrealistically high action values because it includes a maximization step over estimated action values, which tends to prefer overestimated to underestimated values. In previous work, overestimations have been attributed to insufficiently flexible function approximation (Thrun and Schwartz 1993) and noise (van Hasselt 2010, 2011). In this paper, we unify these views and show overestimations can occur when the action values are inaccurate, irrespective of the source of approximation error. Of course, imprecise value estimates are the norm during learning, which indicates that overestimations may be much more common than previously appreciated. It is an open question whether, if the overestimations do occur, this negatively affects performance in practice. Overoptimistic value estimates are not necessarily a problem in and of themselves. If all values would be uniformly higher then the relative action preferences are preserved and we would not expect the resulting policy to be any worse. Furthermore, it is known that sometimes it is good to be optimistic: optimism in the face of uncertainty is a well-known Copyright c \u00a9 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. exploration technique (Kaelbling et al. 1996). If, however, the overestimations are not uniform and not concentrated at states about which we wish to learn more, then they might negatively affect the quality of the resulting policy. Thrun and Schwartz (1993) give specific examples in which this leads to suboptimal policies, even asymptotically. To test whether overestimations occur in practice and at scale, we investigate the performance of the recent DQN algorithm (Mnih et al. 2015). DQN combines Q-learning with a flexible deep neural network and was tested on a varied and large set of deterministic Atari 2600 games, reaching human-level performance on many games. In some ways, this setting is a best-case scenario for Q-learning, because the deep neural network provides flexible function approximation with the potential for a low asymptotic approximation error, and the determinism of the environments prevents the harmful effects of noise. 
Perhaps surprisingly, we show that even in this comparatively favorable setting DQN sometimes substantially overestimates the values of the actions. We show that the Double Q-learning algorithm (van Hasselt 2010), which was first proposed in a tabular setting, can be generalized to arbitrary function approximation, including deep neural networks. We use this to construct a new algorithm called Double DQN. This algorithm not only yields more accurate value estimates, but leads to much higher scores on several games. This demonstrates that the overestimations of DQN indeed lead to poorer policies and that it is beneficial to reduce them. In addition, by improving upon DQN we obtain state-of-the-art results on the Atari domain."}
{"_id": "4cd91a0bc0474ce5560209cbeb79b21a1403eac1", "title": "Attention with Intention for a Neural Network Conversation Model", "text": "In a conversation or a dialogue process, attention and intention play intrinsic roles. This paper proposes a neural network based approach that models the attention and intention processes. It essentially consists of three recurrent networks. The encoder network is a word-level model representing source side sentences. The intention network is a recurrent network that models the dynamics of the intention process. The decoder network is a recurrent network that produces responses to the input from the source side. It is a language model that is dependent on the intention and has an attention mechanism to attend to particular source side words, when predicting a symbol in the response. The model is trained end-toend without labeling data. Experiments show that this model generates natural responses to user inputs."}
{"_id": "3aabaa2b1a7e3bee7ab1c29da0311a4fdd5f4378", "title": "Low cost power failure protection for MLC NAND flash storage systems with PRAM/DRAM hybrid buffer", "text": "In the latest PRAM/DRAM hybrid MLC NAND flash storage systems (NFSS), DRAM is used to temporarily store file system data for system response time reduction. To ensure data integrity, super-capacitors are deployed to supply the backup power for moving the data from DRAM to NAND flash during power failures. However, the capacitance degradation of super-capacitor severely impairs system robustness. In this work, we proposed a low cost power failure protection scheme to reduce the energy consumption of power failure protection and increase the robustness of the NFSS with PRAM/DRAM hybrid buffer. Our scheme enables the adoption of the more reliable regular capacitor to replace the super capacitor as the backup power. The experimental result shows that our scheme can substantially reduce the capacitance budget of power failure protection circuitry by 75.1% with very marginal performance and energy overheads."}
{"_id": "a60c571ffeceab5029e3a762838fb7b5e52fdb7b", "title": "Latent Variable Analysis Growth Mixture Modeling and Related Techniques for Longitudinal Data", "text": "This chapter gives an overview of recent advances in latent variable analysis. Emphasis is placed on the strength of modeling obtained by using a flexible combination of continuous and categorical latent variables. To focus the discussion and make it manageable in scope, analysis of longitudinal data using growth models will be considered. Continuous latent variables are common in growth modeling in the form of random effects that capture individual variation in development over time. The use of categorical latent variables in growth modeling is, in contrast, perhaps less familiar, and new techniques have recently emerged. The aim of this chapter is to show the usefulness of growth model extensions using categorical latent variables. The discussion also has implications for latent variable analysis of cross-sectional data. The chapter begins with two major parts corresponding to continuous outcomes versus categorical outcomes. Within each part, conventional modeling using continuous latent variables will be described"}
{"_id": "3ea12acd699b0be1985d51b49eeb48cab87034d0", "title": "Code quality analysis in open source software development", "text": "Proponents of open source style software development claim that better software is produced using this model compared with the traditional closed model. However, there is little empirical evidence in support of these claims. In this paper, we present the results of a pilot case study aiming: (a) to understand the implications of structural quality; and (b) to figure out the benefits of structural quality analysis of the code delivered by open source style development. To this end, we have measured quality characteristics of 100 applications written for Linux, using a software measurement tool, and compared the results with the industrial standard that is proposed by the tool. Another target of this case study was to investigate the issue of modularity in open source as this characteristic is being considered crucial by the proponents of open source for this type of software development. We have empirically assessed the relationship between the size of the application components and the delivered quality measured through user satisfaction. We have determined that, up to a certain extent, the average component size of an application is negatively related to the user satisfaction for this application."}
{"_id": "4ac2914c53913bde6efe2d68b4a703fdce74ccbe", "title": "A snake-based method for segmentation of intravascular ultrasound images and its in vivo validation.", "text": "Image segmentation for detection of vessel walls is necessary for quantitative assessment of vessel diseases by intravascular ultrasound. A new segmentation method based on gradient vector flow (GVF) snake model is proposed in this paper. The main characteristics of the proposed method include two aspects: one is that nonlinear filtering is performed on GVF field to reduce the critical points, change the morphological structure of the parallel curves and extend the capture range; the other is that balloon snake is combined with the model. Thus, the improved GVF and balloon snake can be automatically initialized and overcome the problem caused by local energy minima. Results of 20 in vivo cases validated the accuracy and stability of the segmentation method for intravascular ultrasound images."}
{"_id": "40944e7b88077be8ae53feb44bb92ae545d16ff1", "title": "Basket-Sensitive Personalized Item Recommendation", "text": "Personalized item recommendation is useful in narrowing down the list of options provided to a user. In this paper, we address the problem scenario where the user is currently holding a basket of items, and the task is to recommend an item to be added to the basket. Here, we assume that items currently in a basket share some association based on an underlying latent need, e.g., ingredients to prepare some dish, spare parts of some device. Thus, it is important that a recommended item is relevant not only to the user, but also to the existing items in the basket. Towards this goal, we propose two approaches. First, we explore a factorizationbased model called BFM that incorporates various types of associations involving the user, the target item to be recommended, and the items currently in the basket. Second, based on our observation that various recommendations towards constructing the same basket should have similar likelihoods, we propose another model called CBFM that further incorporates basket-level constraints. Experiments on three real-life datasets from different domains empirically validate these models against baselines based on matrix factorization and association rules."}
{"_id": "9fab8e2111a3546f61d8484a1ef7bdbd0fb5239a", "title": "A Review of Classic Edge Detectors", "text": "In this paper some of the classic alternatives for edge detection in digital images are studied. The main idea of edge detection algorithms is to find where abrupt changes in the intensity of an image have occurred. The first family of algorithms reviewed in this work uses the first derivative to find the changes of intensity, such as Sobel, Prewitt and Roberts. In the second reviewed family, second derivatives are used, for example in algorithms like Marr-Hildreth and Haralick. The obtained results are analyzed from a qualitative point of view (perceptual) and from a quantitative point of view (number of operations, execution time), considering different ways to convolve an image with a kernel (step required in some of the algorithms). Source Code For all the reviewed algorithms, an open source C implementation is provided which can be downloaded from the IPOL web page of this article1. An online demonstration is also available, where the user can test and reproduce our results."}
{"_id": "bcbf261b0c98b563b842313d02990e386cad0d24", "title": "An Analysis of the Accuracy of Bluetooth Low Energy for Indoor Positioning Applications", "text": "This study investigated the impact of Bluetooth Low Energy devices in advertising/beaconing mode on fingerprint-based indoor positioning schemes. Early experimentation demonstrated that the low bandwidth of BLE signals compared to WiFi is the cause of significant measurement error when coupled with the use of three BLE advertising channels. The physics underlying this behaviour is verified in simulation. A multipath mitigation scheme is proposed and tested. It is determined that the optimal positioning performance is provided by 10Hz beaconing and a 1 second multipath mitigation processing window size. It is determined that a steady increase in positioning performance with fingerprint size occurs up to 7 \u00b11, above this there is no clear benefit to extra beacon coverage."}
{"_id": "2d453783a940b8ebdae7cf5efa1cc6949d04bf56", "title": "Internet of Things: Vision, Applications and Challenges", "text": "The form of communication that we see now is either human-human or human-device, but the Internet of Things (IoT) promises a great future for the internet where the type of communication is machine-machine (M2M). It aims to unify everything in our world under a common infrastructure, giving us not only control of things around us, but also keeping us informed of the state of the things. This paper aims to provide a comprehensive overview of the IoT scenario and reviews its enabling technologies and the sensor networks. Also, it describes a six-layered architecture of IoT and points out the related key challenges. However, this manuscript will give good comprehension for the new researchers, who want to do research in this field of Internet of Things and facilitate knowledge accumulation in efficiently."}
{"_id": "18eadfc4a6bcffd6f1ca1d1534a54a3848442d46", "title": "The architecture of virtual machines", "text": "A virtual machine can support individual processes or a complete system depending on the abstraction level where virtualization occurs. Some VMs support flexible hardware usage and software isolation, while others translate from one instruction set to another. Virtualizing a system or component -such as a processor, memory, or an I/O device - at a given abstraction level maps its interface and visible resources onto the interface and resources of an underlying, possibly different, real system. Consequently, the real system appears as a different virtual system or even as multiple virtual systems. Interjecting virtualizing software between abstraction layers near the HW/SW interface forms a virtual machine that allows otherwise incompatible subsystems to work together. Further, replication by virtualization enables more flexible and efficient and efficient use of hardware resources."}
{"_id": "5340844c3768aa6f04ef0784061d4070f1166fe6", "title": "Parasitic-Aware Common-Centroid FinFET Placement and Routing for Current-Ratio Matching", "text": "The FinFET technology is regarded as a better alternative for modern high-performance and low-power integrated-circuit design due to more effective channel control and lower power consumption. However, the gate-misalignment problem resulting from process variation and the parasitic resistance resulting from interconnecting wires based on the FinFET technology becomes even more severe compared with the conventional planar CMOS technology. Such gate misalignment and unwanted parasitic resistance may increase the threshold voltage and decrease the drain current of transistors. When applying the FinFET technology to analog circuit design, the variation of drain currents can destroy current-ratio matching among transistors and degrade circuit performance. In this article, we present the first FinFET placement and routing algorithms for layout generation of a common-centroid FinFET array to precisely match the current ratios among transistors. Experimental results show that the proposed matching-driven FinFET placement and routing algorithms can obtain the best current-ratio matching compared with the state-of-the-art common-centroid placer."}
{"_id": "c596f88ccba5b7d5276ac6a9b68972fd7d14d959", "title": "A Domain Model for the Internet of Things", "text": "By bringing together the physical world of real objects with the virtual world of IT systems, the Internet of Things has the potential to significantly change both the enterprise world as well as society. However, the term is very much hyped and understood differently by different communities, especially because IoT is not a technology as such but represents the convergence of heterogeneous - often new - technologies pertaining to different engineering domains. What is needed in order to come to a common understanding is a domain model for the Internet of Things, defining the main concepts and their relationships, and serving as a common lexicon and taxonomy and thus as a basis for further scientific discourse and development of the Internet of Things. As we show, having such a domain model is also helpful in design of concrete IoT system architectures, as it provides a template and thus structures the analysis of use cases."}
{"_id": "5a9f4dc3e5d7c70d58c9512d7193d079c3331273", "title": "3D People Tracking with Gaussian Process Dynamical Models", "text": "We advocate the use of Gaussian Process Dynamical Models (GPDMs) for learning human pose and motion priors for 3D people tracking. A GPDM provides a lowdimensional embedding of human motion data, with a density function that gives higher probability to poses and motions close to the training data. With Bayesian model averaging a GPDM can be learned from relatively small amounts of data, and it generalizes gracefully to motions outside the training set. Here we modify the GPDM to permit learning from motions with significant stylistic variation. The resulting priors are effective for tracking a range of human walking styles, despite weak and noisy image measurements and significant occlusions."}
{"_id": "abcc6724bc625f8f710a2d455e9ac0577d84eea9", "title": "Can We Predict a Riot? Disruptive Event Detection Using Twitter", "text": "In recent years, there has been increased interest in real-world event detection using publicly accessible data made available through Internet technology such as Twitter, Facebook, and YouTube. In these highly interactive systems, the general public are able to post real-time reactions to \u201creal world\u201d events, thereby acting as social sensors of terrestrial activity. Automatically detecting and categorizing events, particularly small-scale incidents, using streamed data is a non-trivial task but would be of high value to public safety organisations such as local police, who need to respond accordingly. To address this challenge, we present an end-to-end integrated event detection framework that comprises five main components: data collection, pre-processing, classification, online clustering, and summarization. The integration between classification and clustering enables events to be detected, as well as related smaller-scale \u201cdisruptive events,\u201d smaller incidents that threaten social safety and security or could disrupt social order. We present an evaluation of the effectiveness of detecting events using a variety of features derived from Twitter posts, namely temporal, spatial, and textual content. We evaluate our framework on a large-scale, real-world dataset from Twitter. Furthermore, we apply our event detection system to a large corpus of tweets posted during the August 2011 riots in England. We use ground-truth data based on intelligence gathered by the London Metropolitan Police Service, which provides a record of actual terrestrial events and incidents during the riots, and show that our system can perform as well as terrestrial sources, and even better in some cases."}
{"_id": "9a67ef24bdbed0776d1bb0aa164c09cc029bbdd5", "title": "An Improved Algorithm for Incremental Induction of Decision Trees", "text": "This paper presents an algorithm for incremental induction of decision trees that is able to handle both numeric and symbolic variables. In order to handle numeric variables, a new tree revision operator called`slewing' is introduced. Finally, a non-incremental method is given for nding a decision tree based on a direct metric of a candidate tree."}
{"_id": "7e0e6e0cf37ce4f4dde8c940ee4ce0fdb7b28656", "title": "Sentiment Analysis and Summarization of Twitter Data", "text": "Sentiment Analysis (SA) and summarization has recently become the focus of many researchers, because analysis of online text is beneficial and demanded in many different applications. One such application is product-based sentiment summarization of multi-documents with the purpose of informing users about pros and cons of various products. This paper introduces a novel solution to target-oriented (i.e. aspect-based) sentiment summarization and SA of short informal texts with a main focus on Twitter posts known as \"tweets\". We compare different algorithms and methods for SA polarity detection and sentiment summarization. We show that our hybrid polarity detection system not only outperforms the unigram state-of-the-art baseline, but also could be an advantage over other methods when used as a part of a sentiment summarization system. Additionally, we illustrate that our SA and summarization system exhibits a high performance with various useful functionalities and features."}
{"_id": "6623033437816e8a841738393dcdbaa7ab9e2fa5", "title": "Data Representation Based on Interval-Sets for Anomaly Detection in Time Series", "text": "Anomaly detection in time series is a popular topic focusing on a variety of applications, which achieves a wealth of results. However, there are many cases of missing anomaly and increased false alarm in most of the existing works. Inspired by the concept of interval-sets, this paper proposes an anomaly detection algorithm called a fuzzy interval-set and tries to detect the potential value of the time series from a new perspective. In the proposed algorithm, a time series will be divided into several subsequences. Each subsequence is regarded as an interval-set depending on its value space and boundary of the subsequence. The similarity measurements between interval sets adopt interval operations and point probability distributions of the interval bounds. In addition, an anomaly score is defined based on similarity results. The experimental results on synthetic and real data sets indicate that the proposed algorithm has better discriminative performance than the piecewise aggregate approximation method and reduces the false alarm rate significantly."}
{"_id": "22c205caf66afc33814419b9006e4a7e704d937e", "title": "Bridging the divide in financial market forecasting: machine learners vs. financial economists", "text": "Financial time series forecasting is a popular application of machine learning methods. Previous studies report that advanced forecasting methods predict price changes in financial markets with high accuracy and that profit can be made trading on these predictions. However, financial economists point to the informational efficiency of financial markets, which questions price predictability and opportunities for profitable trading. The objective of the paper is to resolve this contradiction. To this end, we undertake an extensive forecasting simulation, based on data from thirty-four financial indices over six years. These simulations confirm that the best machine learning methods produce more accurate forecasts than the best econometric methods. We also examine the methodological factors that impact the predictive accuracy of machine learning forecasting experiments. The results suggest that the predictability of a financial market and the feasibility of profitable model-based trading are significantly influenced by the maturity of the market, the forecasting method employed, the horizon for which it generates predictions and the methodology used to assess the model and simulate model-based trading. We also find evidence against the informational value of indicators from the field of technical analysis. Overall, we confirm that advanced forecasting methods can be used to predict price changes in some financial markets and we discuss whether these results question the prevailing view in the financial economics literature that financial markets are efficient."}
{"_id": "6694e272267d3fa2b05e330a6ca43f48494af2e9", "title": "Algorithms for asymptotic extrapolation \u2217", "text": "Consider a power series f \u2208 R[[z]], which is obtained by a precise mathematical construction. For instance, f might be the solution to some differential or functional initial value problem or the diagonal of the solution to a partial differential equation. In cases when no suitable method is beforehand for determining the asymptotics of the coefficients fn, but when many such coefficients can be computed with high accuracy, it would be useful if a plausible asymptotic expansion for fn could be guessed automatically. In this paper, we will present a general scheme for the design of such \u201casymptotic extrapolation algorithms\u201d. Roughly speaking, using discrete differentiation and techniques from automatic asymptotics, we strip off the terms of the asymptotic expansion one by one. The knowledge of more terms of the asymptotic expansion will then allow us to approximate the coefficients in the expansion with high accuracy."}
{"_id": "9cb3bfd45bd0dbf4ffd328c3577a989fcb82ca07", "title": "Knowledge Engineering with Markov Logic Networks : A Review", "text": "Within the realm of statistical relational knowledge representation formalisms, Markov logic is perhaps one of the most flexible and general languages, for it generalises both first-order logic (for finite domains) and probabilistic graphical models. Knowledge engineering with Markov logic is, however, not a straightforward task. In particular, modelling approaches that are too firmly rooted in the principles of logic often tend to produce unexpected results in practice. In this paper, I collect a number of issues that are relevant to knowledge engineering practice: I describe the fundamental semantics of Markov logic networks and explain how simple probabilistic properties can be represented. Furthermore, I discuss fallacious modelling assumptions and summarise conditions under which generalisation across domains may fail. As a collection of fundamental insights, the paper is primarily directed at knowledge engineers who are new to Markov logic."}
{"_id": "588e37b4d7ffcd245d819eafe079f2b92ac9e20d", "title": "Effective Crowd Annotation for Relation Extraction", "text": "Can crowdsourced annotation of training data boost performance for relation extraction over methods based solely on distant supervision? While crowdsourcing has been shown effective for many NLP tasks, previous researchers found only minimal improvement when applying the method to relation extraction. This paper demonstrates that a much larger boost is possible, e.g., raising F1 from 0.40 to 0.60. Furthermore, the gains are due to a simple, generalizable technique, Gated Instruction, which combines an interactive tutorial, feedback to correct errors during training, and improved screening."}
{"_id": "8a0d391f53bf7566552e79835916fc1ea49900cd", "title": "A Hardware Framework for Yield and Reliability Enhancement in Chip Multiprocessors", "text": "Device reliability and manufacturability have emerged as dominant concerns in end-of-road CMOS devices. An increasing number of hardware failures are attributed to manufacturability or reliability problems. Maintaining an acceptable manufacturing yield for chips containing tens of billions of transistors with wide variations in device parameters has been identified as a great challenge. Additionally, today\u2019s nanometer scale devices suffer from accelerated aging effects because of the extreme operating temperature and electric fields they are subjected to. Unless addressed in design, aging-related defects can significantly reduce the lifetime of a product. In this article, we investigate a micro-architectural scheme for improving yield and reliability of homogeneous chip multiprocessors (CMPs). The proposed solution involves a hardware framework that enables us to utilize the redundancies inherent in a multicore system to keep the system operational in the face of partial failures. A micro-architectural modification allows a faulty core in a CMP to use another core\u2019s resources to service any instruction that the former cannot execute correctly by itself. This service improves yield and reliability but may cause loss of performance. The target platform for quantitative evaluation of performance under degradation is a dual-core and a quad-core chip multiprocessor with one or more cores sustaining partial failure. Simulation studies indicate that when a large, high-latency, and sparingly used unit such as a floating-point unit fails in a core, correct execution may be sustained through outsourcing with at most a 16% impact on performance for a floating-point intensive application. For applications with moderate floating-point load, the degradation is insignificant. The performance impact may be mitigated even further by judicious selection of the cores to commandeer depending on the current load on each of the candidate cores. The area overhead is also negligible due to resource reuse."}
{"_id": "dd9d92499cead9292bdebc5135361a2dbc70c169", "title": "Data processing and calibration of a cross-pattern stripe projector", "text": "Dense surface acquisition is one of the most challenging tasks for optical 3-D measurement systems in applications such as inspection of industrial parts, reverse engineering, digitization of virtual reality models and robot guidance. In order to achieve high accuracy we need good hardware equipment, as well as sophisticated data processing methods. Since measurements should be feasible under real world conditions, accuracy, reliability and consistency are a major issue. Based on the experiences with a previous system, a detailed analysis of the performance was carried out, leading to a new hardware setup. On the software side, we improved our calibration procedure and replaced the phase shift technique previously used by a new processing scheme which we call line shift processing. This paper describes our new approach. Results are presented and compared to results derived from the previous system."}
{"_id": "6418b3858a4764c4c35f32da8bc2c24284b41d47", "title": "Anomaly network traffic detection algorithm based on information entropy measurement under the cloud computing environment", "text": "In recent years, there are more and more abnormal activities in the network, which greatly threaten network security. Hence, it is of great importance to collect the data which indicate the running statement of the network, and distinguish the anomaly phenomena of the network in time. In this paper, we propose a novel anomaly network traffic detection algorithm under the cloud computing environment. Firstly, the framework of the anomaly network traffic detection system is illustrated, and six type of network traffic features are consider in this work, that is, (1) number of source IP address, (2) number of source port number, (3) number of destination IP address, (4) number of destination port number, (5) Number of packet type, and (6) number of network packets. Secondly, we propose a novel hybrid information entropy and SVM model to tackle the proposed problem by normalizing values of network features and exploiting SVM detect anomaly network behaviors. Finally, experimental results demonstrate that the proposed algorithm can detect anomaly network traffic with high accuracy and it can also be used in the large scale dataset."}
{"_id": "c3f2d101b616d82d07ca2cc4cb8ed0cb53fde21f", "title": "DeepPointSet : A Point Set Generation Network for 3 D Object Reconstruction from a Single Image Supplementary Material", "text": "We conducted human study to provide reference to our current CD and EMD values reported on the rendered dataset. We provided the human subject with a GUI tool to create a triangular mesh from the image. The tool (see Fig 1) enables the user to edit the mesh in 3D and to align the modeled object back to the input image. In total 16 models are created from the input images of our validation set. N = 1024 points are sampled from each model."}
{"_id": "2824e8f94cffe6d4969e96248e67aa228ac72278", "title": "Machine Learning for Real-Time Prediction of Damaging Straight-Line Convective Wind", "text": "Thunderstorms in theUnited States cause over 100 deaths and $10 billion (U.S. dollars) in damage per year, much of which is attributable to straight-line (nontornadic) wind. This paper describes a machine-learning system that forecasts the probability of damaging straight-line wind ($50 kt or 25.7m s) for each storm cell in the continental United States, at distances up to 10 km outside the storm cell and lead times up to 90min. Predictors are based on radar scans of the storm cell, storm motion, storm shape, and soundings of the nearstorm environment. Verification data come from weather stations and quality-controlled storm reports. The system performs very well on independent testing data. The area under the receiver operating characteristic (ROC) curve ranges from 0.88 to 0.95, the critical success index (CSI) ranges from 0.27 to 0.91, and the Brier skill score (BSS) ranges from 0.19 to 0.65 (.0 is better than climatology). For all three scores, the best value occurs for the smallest distance (inside storm cell) and/or lead time (0\u201315min), while the worst value occurs for the greatest distance (5\u201310 km outside storm cell) and/or lead time (60\u201390min). The system was deployed during the 2017 Hazardous Weather Testbed."}
{"_id": "07f77ad9c58b21588a9c6fa79ca7917dd58cca98", "title": "Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-scale Convolutional Architecture", "text": "In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks."}
{"_id": "1abf6491d1b0f6e8af137869a01843931996a562", "title": "ParseNet: Looking Wider to See Better", "text": "We present a technique for adding global context to deep convolutional networks for semantic segmentation. The approach is simple, using the average feature for a layer to augment the features at each location. In addition, we study several idiosyncrasies of training, significantly increasing the performance of baseline networks (e.g. from FCN [19]). When we add our proposed global feature, and a technique for learning normalization parameters, accuracy increases consistently even over our improved versions of the baselines. Our proposed approach, ParseNet, achieves state-of-the-art performance on SiftFlow and PASCAL-Context with small additional computational cost over baselines, and near current state-of-theart performance on PASCAL VOC 2012 semantic segmentation with a simple approach. Code is available at https: //github.com/weiliu89/caffe/tree/fcn."}
{"_id": "2fdbd2cdb5b028e77b7640150496ae13eab05f30", "title": "Coupled depth learning", "text": "In this paper we propose a method for estimating depth from a single image using a coarse to fine approach. We argue that modeling the fine depth details is easier after a coarse depth map has been computed. We express a global (coarse) depth map of an image as a linear combination of a depth basis learned from training examples. The depth basis captures spatial and statistical regularities and reduces the problem of global depth estimation to the task of predicting the input-specific coefficients in the linear combination. This is formulated as a regression problem from a holistic representation of the image. Crucially, the depth basis and the regression function are coupled and jointly optimized by our learning scheme. We demonstrate that this results in a significant improvement in accuracy compared to direct regression of depth pixel values or approaches learning the depth basis disjointly from the regression function. The global depth estimate is then used as a guidance by a local refinement method that introduces depth details that were not captured at the global level. Experiments on the NYUv2 and KITTI datasets show that our method outperforms the existing state-of-the-art at a considerably lower computational cost for both training and testing."}
{"_id": "3fe41cfefb187e83ad0a5417ab01c212f318577f", "title": "Shape, Illumination, and Reflectance from Shading", "text": "A fundamental problem in computer vision is that of inferring the intrinsic, 3D structure of the world from flat, 2D images of that world. Traditional methods for recovering scene properties such as shape, reflectance, or illumination rely on multiple observations of the same scene to overconstrain the problem. Recovering these same properties from a single image seems almost impossible in comparison-there are an infinite number of shapes, paint, and lights that exactly reproduce a single image. However, certain explanations are more likely than others: surfaces tend to be smooth, paint tends to be uniform, and illumination tends to be natural. We therefore pose this problem as one of statistical inference, and define an optimization problem that searches for the most likely explanation of a single image. Our technique can be viewed as a superset of several classic computer vision problems (shape-from-shading, intrinsic images, color constancy, illumination estimation, etc) and outperforms all previous solutions to those constituent problems."}
{"_id": "420c46d7cafcb841309f02ad04cf51cb1f190a48", "title": "Multi-Scale Context Aggregation by Dilated Convolutions", "text": "State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction problems such as semantic segmentation are structurally different from image classification. In this work, we develop a new convolutional network module that is specifically designed for dense prediction. The presented module uses dilated convolutions to systematically aggregate multiscale contextual information without losing resolution. The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. We show that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems. In addition, we examine the adaptation of image classification networks to dense prediction and show that simplifying the adapted network can increase accuracy."}
{"_id": "e820785f476cf37122c58e5d6b0e203af479b2df", "title": "An overview on crime prediction methods", "text": "In the recent past, crime analyses are required to reveal the complexities in the crime dataset. This process will help the parties that involve in law enforcement in arresting offenders and directing the crime prevention strategies. The ability to predict the future crimes based on the location, pattern and time can serve as a valuable source of knowledge for them either from strategic or tactical perspectives. Nevertheless, to predict future crime accurately with a better performance, it is a challenging task because of the increasing numbers of crime in present days. Therefore, crime prediction method is important to identify the future crime and reduces the numbers of crime. Currently, some researchers have been conducted a study to predict crime based on particular inputs. The performance of prediction models can be evaluated using a variety of different prediction methods such as support vector machine, multivariate time series and artificial neural network. However, there are still some limitations on their findings to provide an accurate prediction for the location of crimes. A large number of research papers on this topic have already been published previously. Thus, in this paper, we thoroughly review each of them and summarized the outcomes. Our objective is to identify current implementations of crime prediction method and the possibility to enhance it for future needs."}
{"_id": "f7c8f4897d24769e019ae792e9f7f492aa6e8977", "title": "Organizational Use of Big Data and Competitive Advantage - Exploration of Antecedents", "text": "The use of Big Data can hold considerable benefits for organizations, but there is as yet little research on the impacts of Big Data use on key variables such as competitive advantage. The antecedents to Big Data use aiming at competitive advantage are also poorly understood. Drawing from prior research on the ICT-competitiveness link, this research examined the relationship of organizational use of Big Data and competitive advantage \u2013 as well as certain hypothesized antecedents to this relationship. Data was collected from a nationwide survey in Japan which was addressed to Big Data users in private companies. The result shows that the competitiveness is influenced by the systematic and extensive use of Big Data in the organization, and top management understanding for Big Data use. Antecedents of systematic and extensive Big Data use were found to be the cross-functional activities of analysts, proactive attitude of users, secured data management, and human resources. Future research should strive to verify these respondent opinions using longitudinal approaches based on objective financial data."}
{"_id": "6bc785deeb35643d865157738548149e393f9dd3", "title": "Back to the Future: Current-Mode Processor in the Era of Deeply Scaled CMOS", "text": "This paper explores the use of MOS current-mode logic (MCML) as a fast and low noise alternative to static CMOS circuits in microprocessors, thereby improving the performance, energy efficiency, and signal integrity of future computer systems. The power and ground noise generated by an MCML circuit is typically 10-100\u00d7 smaller than the noise generated by a static CMOS circuit. Unlike static CMOS, whose dominant dynamic power is proportional to the frequency, MCML circuits dissipate a constant power independent of clock frequency. Although these traits make MCML highly energy efficient when operating at high speeds, the constant static power of MCML poses a challenge for a microarchitecture that operates at the modest clock rate and with a low activity factor. To address this challenge, a single-core microarchitecture for MCML is explored that exploits the C-slow retiming technique, and operates at a high frequency with low complexity to save energy. This design principle contrasts with the contemporary multicore design paradigm for static CMOS that relies on a large number of gates operating in parallel at the modest speeds. The proposed architecture generates 10-40\u00d7 lower power and ground noise, and operates within 13% of the performance (i.e., 1/ExecutionTime) of a conventional, eight-core static CMOS processor while exhibiting 1.6\u00d7 lower energy and 9% less area. Moreover, the operation of an MCML processor is robust under both systematic and random variations in transistor threshold voltage and effective channel length."}
{"_id": "09696f6d3d623445878a2745ae276fdce3ce214b", "title": "Marine Litter Distribution and Density in European Seas, from the Shelves to Deep Basins", "text": "Anthropogenic litter is present in all marine habitats, from beaches to the most remote points in the oceans. On the seafloor, marine litter, particularly plastic, can accumulate in high densities with deleterious consequences for its inhabitants. Yet, because of the high cost involved with sampling the seafloor, no large-scale assessment of distribution patterns was available to date. Here, we present data on litter distribution and density collected during 588 video and trawl surveys across 32 sites in European waters. We found litter to be present in the deepest areas and at locations as remote from land as the Charlie-Gibbs Fracture Zone across the Mid-Atlantic Ridge. The highest litter density occurs in submarine canyons, whilst the lowest density can be found on continental shelves and on ocean ridges. Plastic was the most prevalent litter item found on the seafloor. Litter from fishing activities (derelict fishing lines and nets) was particularly common on seamounts, banks, mounds and ocean ridges. Our results highlight the extent of the problem and the need for action to prevent increasing accumulation of litter in marine environments."}
{"_id": "606fec3f353ba72e3cb9cd27e1a0f3a3edfbcca8", "title": "The development of integrated career portal in university using agile methodology", "text": "A gap between university graduate's quality and industry sector demand is one of the challenge university has to tackle, which makes university graduates absorption in job market slows down. One of the tools that can be used to bridge between graduates and companies is an integrated career portal. This portal can be optimized to help industry find the employees they need from the alumni from university. On the other hand, this tool can help graduates to get job that suits their competency. By using the portal, alumni can hopefully minimize the number of unemployed university graduates and accelerate their absorption into job market. The development of this career portal will use agile methodology. Questionnaire was distributed to alumni and employers to collect information about what user need in the portal. The result of this research can create a career portal which integrated business process and social media."}
{"_id": "85f01eaab9fe79aa901da13ef90a55f5178baf3e", "title": "Internet of Things security and forensics: Challenges and opportunities", "text": "The Internet of Things (IoT) envisions pervasive, connected, and smart nodes interacting autonomously while offering all sorts of services. Wide distribution, openness and relatively high processing power of IoT objects made them an ideal target for cyber attacks. Moreover, as many of IoT nodes are collecting and processing private information, they are becoming a goldmine of data for malicious actors. Therefore, security and specifically the ability to detect compromised nodes, together with collecting and preserving evidences of an attack or malicious activities emerge as a priority in successful deployment of IoT networks. In this paper, we first introduce existing major security and forensics challenges within IoT domain and then briefly discuss about papers published in this special issue targeting identified challenges."}
{"_id": "2d2ae1590b23963c26b5338972ecbf7f1bdc8e13", "title": "Auto cropping for digital photographs", "text": "In this paper, we propose an effective approach to the nearly untouched problem, still photograph auto cropping, which is one of the important features to automatically enhance photographs. To obtain an optimal result, we first formulate auto cropping as an optimization problem by defining an energy function, which consists of three sub models: composition sub model, conservative sub model, and penalty sub model. Then, particle swarm optimization (PSO) is employed to obtain the optimal solution by maximizing the objective function. Experimental results and user studies over hundreds of photographs show that the proposed approach is effective and accurate in most cases, and can be used in many practical multimedia applications."}
{"_id": "38e80bb1d747439550eac14453b7bc1f50436ff0", "title": "Amplified Boomerang Attacks Against Reduced-Round MARS and Serpent", "text": "We introduce a new cryptanalytic technique based on Wagner\u2019s boomerang and inside-out attacks. We first describe this new attack in terms of the original boomerang attack, and then demonstrate its use on reduced-round variants of the MARS core and Serpent. Our attack breaks eleven rounds of the MARS core with 2 chosen plaintexts, 2 memory, and 2 partial decryptions. Our attack breaks eight rounds of Serpent with 2 chosen plaintexts, 2 memory, and 2 partial decryptions."}
{"_id": "6a9840bd5ed2d10dfaf233e7fbede05135761d81", "title": "The naturalness of reproduced high dynamic range images", "text": "The problem of visualizing high dynamic range images on the devices with restricted dynamic range has recently gained a lot of interest in the computer graphics community. Various so-called tone mapping operators have been proposed to face this issue. The field of tone mapping assumes thorough knowledge of both the objective and subjective attributes of an image. However, there no published analysis of such attributes exists so far. In this paper, we present an overview of image attributes which are used extensively in different tone mapping methods. Furthermore, we propose a scheme of relationships between these attributes, leading to the definition of an overall quality measure which we call naturalness. We present results of the subjective psychophysical testing that we have performed to prove the proposed relationship scheme. Our effort sets the stage for well-founded quality comparisons between tone mapping operators. By providing good definitions of the different attributes, comparisons, be they user-driven or fully automatic, are made possible at all."}
{"_id": "366ae3858593d5440b4a9bf39865eb7c4ddd0206", "title": "Real-Time Ray Tracing with CUDA", "text": "The graphics processors (GPUs) have recently emerged as a low-cost alternative for parallel programming. Since modern GPUs have great computational power as well as high memory bandwidth, running ray tracing on them has been an active field of research in computer graphics in recent years. Furthermore, the introduction of CUDA, a novel GPGPU architecture, has removed several limitations that the traditional GPU-based ray tracing suffered. In this paper, an implementation of high performance CUDA ray tracing is demonstrated. We focus on the performance and show how our design choices in various optimization lead to an implementation that outperforms the previous works. For reasonably complex scenes with simple shading, our implementation achieves the performance of 30 to 43 million traced rays per second. Our implementation also includes the effects of recursive specular reflection and refraction, which were less discussed in previous GPU-based ray tracing works."}
{"_id": "47b4ef22a8ca7b9d6e6089a99f5748193ad24acc", "title": "Are men really more \u2018 oriented \u2019 toward short-term mating than women ?", "text": "According to Sexual Strategies Theory (D.M. Buss and D.P. Schmitt 1993), both men and women possess psychological adaptations for short-term mating. However, men may possess three adaptations that make it seem as though they are generally more \u2018oriented\u2019 toward short-term mating than women: (1) Men possess greater desire for short-term sexual relationships than women; (2) Men prefer larger numbers of sexual partners over time than women; and (3) Men require less time before consenting to sex than women. We review a wide body of psychological theory and evidence that corroborates the presence of these adaptations in men\u2019s short-term sexual psychology. We also correct some recurring misinterpretations of Sexual Strategies Theory, such as the mistaken notion that women are designed solely for long-term mating. Finally, we document how the observed sex differences in short-term mating complement some feminist theories and refute competing evolutionary theories of human sexuality. Psychology, Evolution & Gender 3.3 December 2001 pp. 211\u2013239 Psychology, Evolution & Gender ISSN 1461-6661 print/ISSN 1470-1073 online \u00a9 2001 Taylor & Francis Ltd http://www.tandf.co.uk/journals DOI: 10.1080/1461666011011933 1"}
{"_id": "32791996c1040b9dcc34e71a05d72e5c649eeff9", "title": "Automatic Detection of Respiration Rate From Ambulatory Single-Lead ECG", "text": "Ambulatory electrocardiography is increasingly being used in clinical practice to detect abnormal electrical behavior of the heart during ordinary daily activities. The utility of this monitoring can be improved by deriving respiration, which previously has been based on overnight apnea studies where patients are stationary, or the use of multilead ECG systems for stress testing. We compared six respiratory measures derived from a single-lead portable ECG monitor with simultaneously measured respiration air flow obtained from an ambulatory nasal cannula respiratory monitor. Ten controlled 1-h recordings were performed covering activities of daily living (lying, sitting, standing, walking, jogging, running, and stair climbing) and six overnight studies. The best method was an average of a 0.2-0.8 Hz bandpass filter and RR technique based on lengthening and shortening of the RR interval. Mean error rates with the reference gold standard were plusmn4 breaths per minute (bpm) (all activities), plusmn2 bpm (lying and sitting), and plusmn1 breath per minute (overnight studies). Statistically similar results were obtained using heart rate information alone (RR technique) compared to the best technique derived from the full ECG waveform that simplifies data collection procedures. The study shows that respiration can be derived under dynamic activities from a single-lead ECG without significant differences from traditional methods."}
{"_id": "a1bdfa8ec779fbf6aa780d3c63cabac2d5887b4a", "title": "Location and Time Aware Social Collaborative Retrieval for New Successive Point-of-Interest Recommendation", "text": "In location-based social networks (LBSNs), new successive point-of-interest (POI) recommendation is a newly formulated task which tries to regard the POI a user currently visits as his POI-related query and recommend new POIs the user has not visited before. While carefully designed methods are proposed to solve this problem, they ignore the essence of the task which involves retrieval and recommendation problem simultaneously and fail to employ the social relations or temporal information adequately to improve the results.\n In order to solve this problem, we propose a new model called location and time aware social collaborative retrieval model (LTSCR), which has two distinct advantages: (1) it models the location, time, and social information simultaneously for the successive POI recommendation task; (2) it efficiently utilizes the merits of the collaborative retrieval model which leverages weighted approximately ranked pairwise (WARP) loss for achieving better top-n ranking results, just as the new successive POI recommendation task needs. We conducted some comprehensive experiments on publicly available datasets and demonstrate the power of the proposed method, with 46.6% growth in Precision@5 and 47.3% improvement in Recall@5 over the best previous method."}
{"_id": "7eac1eb85b919667c785b9ac4085d8ca68998d20", "title": "MOBILE LEARNING FOR EDUCATION : BENEFITS AND CHALLENGES", "text": "Education and training is the process by which the wisdom, knowledge and skills of one generation are passed on to the next. Today there are two form s f education and training: conventional education and distance education. Mobile learning, or \"M-Lear ning\", offers modern ways to support learning process through mobile devices, such as handheld an d tablet computers, MP3 players, smart phones and mobile phones. This document introduces the sub ject of mobile learning for education purposes. It examines what impact mobile devices have had on teaching and learning practices and goes on to look at the opportunities presented by the use of d igital media on mobile devices. The main purpose of this paper is to describe the current state of mobi le learning, benefits, challenges, and it\u2019s barrier s to support teaching and learning. Data for this paper w re collected through bibliographic and internet research from January to March 2013. Four key areas will be addressed in this paper: 1. An analysis of Mobile Learning. 2. Differentiating E-Learning from Mobile Learning 3. Value and Benefits of Mobile Learning 4. Challenges and Barriers of M obile Learning: Study showed that M-Learning as a Distance learning brought great benefits to socie ty include : Training when it is needed, Training a t any time; Training at any place; Learner-centred co ntent; Avoidance of re-entry to work problems; Training for taxpayers, and those fully occupied du ring university lectures and sessions at training centres; and The industrialisation of teaching and learning. And also, notebooks, mobile Tablets, iPod touch, and iPads are very popular devices for mobil e learning because of their cost and availability o f apps. --------------------------------"}
{"_id": "6ca0343f94fdd4efbaddde8e4040558bcc29e8ed", "title": "Combining Naive Bayes and Decision Tree for Adaptive Intrusion Detection", "text": "In this paper, a new learning algorithm for adaptive network intrusion detection using naive Bayesian classifier and decision tree is presented, which performs balance detections and keeps false positives at acceptable level for different types of network attacks, and eliminates redundant attributes as well as contradictory examples from training data that make the detection model complex. The proposed algorithm also addresses some difficulties of data mining such as handling continuous attribute, dealing with missing attribute values, and reducing noise in training data. Due to the large volumes of security audit data as well as the complex and dynamic properties of intrusion behaviours, several data miningbased intrusion detection techniques have been applied to network-based traffic data and host-based data in the last decades. However, there remain various issues needed to be examined towards current intrusion detection systems (IDS). We tested the performance of our proposed algorithm with existing learning algorithms by employing on the KDD99 benchmark intrusion detection dataset. The experimental results prove that the proposed algorithm achieved high detection rates (DR) and significant reduce false positives (FP) for different types of network intrusions using limited computational resources."}
{"_id": "8f67ff7d7a4fc72d87f82ae340dba4365b7ea664", "title": "Learning Filter Banks Using Deep Learning For Acoustic Signals", "text": "Designing appropriate features for acoustic event recognition tasks is an active field of research. Expressive features should both improve the performance of the tasks and also be interpret-able. Currently, heuristically designed features based on the domain knowledge requires tremendous effort in hand-crafting, while features extracted through deep network are difficult for human to interpret. In this work, we explore the experience guided learning method for designing acoustic features. This is a novel hybrid approach combining both domain knowledge and purely data driven feature designing. Based on the procedure of log Mel-filter banks, we design a filter bank learning layer. We concatenate this layer with a convolutional neural network (CNN) model. After training the network, the weight of the filter bank learning layer is extracted to facilitate the design of acoustic features. We smooth the trained weight of the learning layer and re-initialize it in filter bank learning layer as audio feature extractor. For the environmental sound recognition task based on the Urbansound8K dataset [1], the experience guided learning leads to a 2% accuracy improvement compared with the fixed feature extractors (the log Mel-filter bank). The shape of the new filter banks are visualized and explained to prove the effectiveness of the feature design process."}
{"_id": "d691128d367d9bd1f35bd0dd15887c073b72817a", "title": "Improved Algorithm for Intrusion Detection Using Genetic Algorithm and SNORT", "text": "Intrusion Detection Systems (IDSs) detects the network attacks by scanning each data packets. There are various approaches were done for making better IDS, though traditional IDS approaches are insufficient in terms of automating detection of attacks as attackers evolved. This research is focused on Genetic Algorithm approach used in Intrusion Detection. This research shows that how the traditional SNORT and Genetic Algorithm combined together so that the number of rules of SNORT is decreased. Experimental results Shows That IDS using Genetic Algorithm and SNORT can decrease the amount of detection time, CPU Utilization and Memory Utilization. Keywords\u2014Genetic Algorithm, SNORT, Fitness Function,"}
{"_id": "b86c2baf8bbff63c79a47797ac82778b75b72e03", "title": "Probabilistic Semantics and Pragmatics: Uncertainty in Language and Thought", "text": "Language is used to communicate ideas. Ideas are mental tools for coping with a complex and uncertain world. Thus human conceptual structures should be key to language meaning, and probability\u2014the mathematics of uncertainty\u2014 should be indispensable for describing both language and thought. Indeed, probabilistic models are enormously useful in modeling human cognition (Tenenbaum et al., 2011) and aspects of natural language (Bod et al., 2003; Chater et al., 2006). With a few early exceptions (e.g. Adams, 1975; Cohen, 1999b), probabilistic tools have only recently been used in natural language semantics and pragmatics. In this chapter we synthesize several of these modeling advances, exploring a formal model of interpretation grounded, via lexical semantics and pragmatic inference, in conceptual structure. Flexible human cognition is derived in large part from our ability to imagine possibilities (or possible worlds). A rich set of concepts, intuitive theories, and other mental representations support imagining and reasoning about possible worlds\u2014together we will call these the conceptual lexicon. We posit that this collection of concepts also forms the set of primitive elements available for lexical semantics: word meanings can be built from the pieces of conceptual structure. Larger semantic structures are then built from word meanings by composition, ultimately resulting in a sentence meaning which is a phrase in the \u201clanguage of thought\u201d provided by the conceptual lexicon. This expression is truth-functional in that it takes on a Boolean value for each imagined world, and it can thus be used as the basis for belief updating. However, the connection between cognition, semantics, and belief is not direct: because language must flexibly adapt to the context of communication, the connection between lexical representation and interpreted meaning is mediated by pragmatic inference."}
{"_id": "efe61b96e7d3ca79c14b41f4918f234bec936bb1", "title": "Clinical and symptomatological reflections: the fascial system", "text": "Every body structure is wrapped in connective tissue, or fascia, creating a structural continuity that gives form and function to every tissue and organ. Currently, there is still little information on the functions and interactions between the fascial continuum and the body system; unfortunately, in medical literature there are few texts explaining how fascial stasis or altered movement of the various connective layers can generate a clinical problem. Certainly, the fascia plays a significant role in conveying mechanical tension, in order to control an inflammatory environment. The fascial continuum is essential for transmitting muscle force, for correct motor coordination, and for preserving the organs in their site; the fascia is a vital instrument that enables the individual to communicate and live independently. This article considers what the literature offers on symptoms related to the fascial system, trying to connect the existing information on the continuity of the connective tissue and symptoms that are not always clearly defined. In our opinion, knowing and understanding this complex system of fascial layers is essential for the clinician and other health practitioners in finding the best treatment strategy for the patient."}
{"_id": "e25b53d36bb1e0c00a13d2aa6c48dc320a400a67", "title": "Towards Online-Recognition with Deep Bidirectional LSTM Acoustic Models", "text": "Online-Recognition requires the acoustic model to provide posterior probabilities after a limited time delay given the online input audio data. This necessitates unidirectional modeling and the standard solution is to use unidirectional long short-term memory (LSTM) recurrent neural networks (RNN) or feedforward neural networks (FFNN). It is known that bidirectional LSTMs are more powerful and perform better than unidirectional LSTMs. To demonstrate the performance difference, we start by comparing several different bidirectional and unidirectional LSTM topologies. Furthermore, we apply a modification to bidirectional RNNs to enable online-recognition by moving a window over the input stream and perform one forwarding through the RNN on each window. Then, we combine the posteriors of each forwarding and we renormalize them. We show in experiments that the performance of this online-enabled bidirectional LSTM performs as good as the offline bidirectional LSTM and much better than the unidirectional LSTM."}
{"_id": "467024c6cc8fe73c64e501f8a12cdbafbf9561b0", "title": "A License Plate-Recognition Algorithm for Intelligent Transportation System Applications", "text": "In this paper, a new algorithm for vehicle license plate identification is proposed, on the basis of a novel adaptive image segmentation technique (sliding concentric windows) and connected component analysis in conjunction with a character recognition neural network. The algorithm was tested with 1334 natural-scene gray-level vehicle images of different backgrounds and ambient illumination. The camera focused in the plate, while the angle of view and the distance from the vehicle varied according to the experimental setup. The license plates properly segmented were 1287 over 1334 input images (96.5%). The optical character recognition system is a two-layer probabilistic neural network (PNN) with topology 108-180-36, whose performance for entire plate recognition reached 89.1%. The PNN is trained to identify alphanumeric characters from car license plates based on data obtained from algorithmic image processing. Combining the above two rates, the overall rate of success for the license-plate-recognition algorithm is 86.0%. A review in the related literature presented in this paper reveals that better performance (90% up to 95%) has been reported, when limitations in distance, angle of view, illumination conditions are set, and background complexity is low"}
{"_id": "0e9c1dc5415dafcbe41ef447121afc6835238870", "title": "Semantic Spaces: Measuring the Distance between Different Subspaces", "text": "Semantic Space models, which provide a numerical representation of words\u2019 meaning extracted from corpus of documents, have been formalized in terms of Hermitian operators over real valued Hilbert spaces by Bruza et al. [1]. The collapse of a word into a particular meaning has been investigated applying the notion of quantum collapse of superpositional states [2]. While the semantic association between words in a Semantic Space can be computed by means of the Minkowski distance [3] or the cosine of the angle between the vector representation of each pair of words, a new procedure is needed in order to establish relations between two or more Semantic Spaces. We address the question: how can the distance between different Semantic Spaces be computed? By representing each Semantic Space as a subspace of a more general Hilbert space, the relationship between Semantic Spaces can be computed by means of the subspace distance. Such distance needs to take into account the difference in the dimensions between subspaces. The availability of a distance for comparing different Semantic Subspaces would enable to achieve a deeper understanding about the geometry of Semantic Spaces which would possibly translate into better effectiveness in Information Retrieval tasks."}
{"_id": "84cfcacc9f11372818d9f8cada04fef64ece87cf", "title": "Fracture detection in x-ray images through stacked random forests feature fusion", "text": "Bone fractures are among the most common traumas in musculoskeletal injuries. They are also frequently missed during the radiological examination. Thus, there is a need for assistive technologies for radiologists in this field. Previous automatic bone fracture detection work has focused on detection of specific fracture types in a single anatomical region. In this paper, we present a generalized bone fracture detection method that is applicable to multiple bone fracture types and multiple bone structures throughout the body. The method uses features extracted from candidate patches in X-ray images in a novel discriminative learning framework called the Stacked Random Forests Feature Fusion. This is a multilayer learning formulation in which the class probability labels, produced by random forests learners at a lower level, are used to derive the refined class distribution labels at the next level. The candidate patches themselves are selected using an efficient subwindow search algorithm. The outcome of the method is a number of fracture bounding-boxes ranked from the most likely to the least likely to contain a fracture. We evaluate the proposed method on a set of 145 X-rays images. When the top ranking seven fracture bounding-boxes are considered, we are able to capture 81.2% of the fracture findings reported by a radiologist. The proposed method outperforms other fracture detection frameworks that use local features, and single layer random forests and support vector machine classification."}
{"_id": "f681ceb64b7b7cc440a417d255fbf30d69926970", "title": "Neural Disambiguation of Causal Lexical Markers Based on Context", "text": "Causation is a psychological tool of humans to understand the world and it is projected in natural language. Causation relates two events, so in order to understand the causal relation of those events and the causal reasoning of humans, the study of causality classification is required. We claim that the use of linguistic features may restrict the representation of causality, and dense vector spaces can provide a better encoding of the causal meaning of an utterance. Herein, we propose a neural network architecture only fed with word embeddings for the task of causality classification. Our results show that our claim holds, and we outperform the state-of-the-art on the AltLex corpus. The source code of our experiments is publicly available.1"}
{"_id": "8a60b082aa758c317e9677beed7e7776acde5e4c", "title": "Chapter 12 DATA MINING IN SOCIAL MEDIA", "text": "The rise of online social media is providing a wealth of social network data. Data mining techniques provide researchers and practitioners the tools needed to analyze large, complex, and frequently changing social media data. This chapter introduces the basics of data mining, reviews social media, discusses how to mine social media data, and highlights some illustrative examples with an emphasis on social networking sites and blogs."}
{"_id": "36ada37bede2f0fd3dbab2f8d4d33df8670ff00b", "title": "The Description Logic Handbook: Theory, Implementation, and Applications", "text": "Like modal logic, temporal logic, or description logic, separation logic has become a Description Logic Handbook: Theory, Implementation and Applications,\u201d. The Description Logic Handbook: Theory, Implementation and Applications (Franz Baader, Diego Calvanese, Deborah. McGuinness, Daniele Nardi, Peter. To formalize knowledge, we employ various logic-based formalisms, such as The Description Logic Handbook: Theory, Implementation and Applications."}
{"_id": "862576569343cfa7c699784267543b81fad23c3a", "title": "Attributive Concept Descriptions with Complements", "text": "Schmidt-SchaulL M. and G. Smolka, Attributive concept descriptions with complements, Artificial Intelligence 48 (1991) 1-26. We investigate the consequences of adding unions and complements to attributive concept descriptions employed in terminological knowledge representation languages. It is shown that deciding coherence and subsumption of such descriptions are PSPACE-complete problems that can be decided with linear space."}
{"_id": "0f26b165700dc85548ea3d570d757aba838f5a97", "title": "Racer: A Core Inference Engine for the Semantic Web", "text": "In this paper we describe Racer, which can be considered as a core inference engine for the semantic web. The Racer inference server offers two APIs that are already used by at least three different network clients, i.e., the ontology editor OilEd, the visualization tool RICE, and the ontology development environment Protege 2. The Racer server supports the standard DIG protocol via HTTP and a TCP based protocol with extensive query facilities. Racer currently supports the web ontology languages DAML+OIL, RDF, and OWL."}
{"_id": "57820e6f974d198bf4bbdf26ae7e1063bac190c3", "title": "The Semantic Web\" in Scientific American", "text": ""}
{"_id": "54a5680c9fcd540bf3df970dc7fcd38747f689bc", "title": "Hand Gesture Recognition System", "text": "The task of gesture recognition is highly challenging due to complex background, presence of nongesture hand motions, and different illumination environments. In this paper, a new technique is proposed which begins by detecting the hand and determining its canter, tracking the hands trajectory and analyzing the variations in the hand locations, and finally recognizing the gesture. The proposed technique overcomes background complexity and distance from camera which is up to 2.5 meters. Experimental results show that the proposed technique can recognize 12 gestures with rate above 94%. Keywords\u2014 Hand gesture recognition, skin detection, Hand tracking."}
{"_id": "46aa7391e5763cbc7dd570882baf5cfc832ba3f4", "title": "A review of split-bolus single-pass CT in the assessment of trauma patients", "text": "The purpose of this study was to review and compare the image quality and radiation dose of split-bolus single-pass computed tomography(CT) in the assessment of trauma patients in comparison to standard multi-phase CT techniques. An online electronic database was searched using the MESH terms \u201csplit-bolus,\u201d \u201cdual phase,\u201d and \u201csingle pass.\u201d Inclusion criteria required the research article to compare a split contrast bolus protocol in a single-pass scan in the assessment of trauma patients. Studies using split-bolus CT technique in non-traumatic injury assessment were excluded. Six articles met the inclusion criteria. Parenchymal and vascular image qualities, as well as subjective image quality assessments, were equal or superior in comparison to non-split-bolus multi-phase trauma CT protocols. Split-bolus single-pass CT decreased radiation exposure in all studies. Further research is required to determine the superior split-bolus protocol and the specificity and sensitivity of detecting blunt cerebrovascular injury screening, splenic parenchymal vascular lesions, and characterization of pelvic vascular extravasation."}
{"_id": "64af3319e9ed4e459e80b469923106887d28313d", "title": "Dual-Band Microstrip Bandpass Filter Using Stepped-Impedance Resonators With New Coupling Schemes", "text": "A microstrip bandpass filter using stepped-impedance resonators is designed in low-temperature co-fired ceramic technology for dual-band applications at 2.4 and 5.2 GHz. New coupling schemes are proposed to replace the normal counterparts. It is found that the new coupling scheme for the interstages can enhance the layout compactness of the bandpass filter; while the new coupling scheme at the input and output can improve the performance of the bandpass filter. To validate the design and analysis, a prototype of the bandpass filter was fabricated and measured. It is shown that the measured and simulated performances are in good agreement. The prototype of the bandpass filter achieved insertion loss of 1.25 and 1.87 dB, S11 of -29 and -40 dB, and bandwidth of 21% and 12.7% at 2.4 and 5.2 GHz, respectively. The bandpass filter is further studied for a single-package solution of dual-band radio transceivers. The bandpass filter is, therefore, integrated into a ceramic ball grid array package. The integration is analyzed with an emphasis on the connection of the bandpass filter to the antenna and to the transceiver die"}
{"_id": "72987c6ea6c0d6bfbfea8520ae0a1f74bdd8fb7c", "title": "A Survey on the Security of Stateful SDN Data Planes", "text": "Software-defined networking (SDN) emerged as an attempt to introduce network innovations faster, and to radically simplify and automate the management of large networks. SDN traditionally leverages OpenFlow as device-level abstraction. Since OpenFlow permits the programmer to \u201cjust\u201d abstract a static flow-table, any stateful control and processing intelligence is necessarily delegated to the network controller. Motivated by the latency and signaling overhead that comes along with such a two-tiered SDN programming model, in the last couple of years several works have proposed innovative switch-level (data plane) programming abstractions capable to deploy some smartness directly inside the network switches, e.g., in the form of localized stateful flow processing. Furthermore, the possible inclusion of states and state maintenance primitives inside the switches is currently being debated in the OpenFlow standardization community itself. In this paper, after having provided the reader with a background on such emerging stateful SDN data plane proposals, we focus our attention on the security implications that data plane programmability brings about. Also via the identification of potential attack scenarios, we specifically highlight possible vulnerabilities specific to stateful in-switch processing (including denial of service and saturation attacks), which we believe should be carefully taken into consideration in the ongoing design of current and future proposals for stateful SDN data planes."}
{"_id": "1182ce944d433c4930fef42e0c27edd5c00f1254", "title": "Prosthetic gingival reconstruction in a fixed partial restoration. Part 1: introduction to artificial gingiva as an alternative therapy.", "text": "The Class III defect environment entails a vertical and horizontal deficiency in the edentulous ridge. Often, bone and soft tissue surgical procedures fall short of achieving a natural esthetic result. Alternative surgical and restorative protocols for these types of prosthetic gingival restorations are presented in this three-part series, which highlights the diagnostic and treatment aspects as well as the lab and maintenance challenges. A complete philosophical approach involves both a biologic understanding of the limitations of the hard and soft tissue healing process as well as that of multiple adjacent implants in the esthetic zone. These limitations may often necessitate the use of gingiva-colored \"pink\" restorative materials and essential preemptive planning via three-dimensional computer-aided design/computer-assisted manufacture to achieve the desired esthetic outcome. The present report outlines a rationale for consideration of artificial gingiva when planning dental prostheses. Prosthetic gingiva can overcome the limitations of grafting and should be a consideration in the initial treatment plan. (Int J Periodontics Restorative Dent 2009;29:471-477.)."}
{"_id": "a4255430a93ca3e304a56d83234927885c8aaae5", "title": "Car monitoring, alerting and tracking model: Enhancement with mobility and database facilities", "text": "Statistics show that the number of cars is increasing rapidly and so is the number of car theft attempts, locally and internationally. Although there are a lot of car security systems that had been produced lately, but the result is still disappointing as the number of car theft cases still increases. The thieves are inventing cleverer and stronger stealing techniques that need more powerful security systems. This project \u201cCar Monitoring and Tracking System\u201d is being proposed to solve the issue. It introduces the integration between monitoring and tracking system. Both elements are very crucial in order to have a powerful security system. The system can send SMS and MMS to the owner to have fast response especially if the car is nearby. This paper focuses on using MMS and database technology, the picture of the intruder will be sent via local GSM/GPRS service provider to user (and/or) police. The Database offers the required information about car and owner, which will help the police or security authorities in their job. Moreover, in the event of theft local police and user can easily track the car using GPS system that can be link to Google Earth and other mapping software. The implementation and testing results show the success of prototype in sending MMS to owner within 40 seconds and receiving acknowledgment to the database (police or security unit) within 4 minutes. The timing and results are suitable to owner and police to take suitable action against intruder."}
{"_id": "3320f454a8d3e872e2007a07f286b91ca333a924", "title": "Data Governance Framework for Big Data Implementation with a Case of Korea", "text": "Big Data governance requires a data governance that can satisfy the needs for corporate governance, IT governance, and ITA/EA. While the existing data governance focuses on the processing of structured data, Big Data governance needs to be established in consideration of a broad sense of Big Data services including unstructured data. To achieve the goals of Big Data, strategies need to be established together with goals that are aligned with the vision and objective of an organization. In addition to the preparation of the IT infrastructure, a proper preparation of the components is required to effectively implement the strategy for Big Data services. We propose the Big Data Governance Framework in this paper. The Big Data governance framework presents criteria different from existing criteria at the data quality level. It focuses on timely, reliable, meaningful, and sufficient data services, focusing on what data attributes should be achieved based on the data attributes of Big Data services. In addition to the quality level of Big Data, the personal information protection strategy and the data disclosure/accountability strategy are also needed to achieve goals and to prevent problems. This paper performed case analysis based on the Big Data Governance Framework with the National Pension Service of South Korea. Big Data services in the public sector are an inevitable choice to improve the quality of people's life. Big Data governance and its framework are the essential components for the realization of Big Data service."}
{"_id": "5b854ac6f0ffe3f963ee2b7d4df474eba737cd63", "title": "Numeric data frames and probabilistic judgments in complex real-world environments", "text": "This thesis investigates human probabilistic judgment in complex real-world settings to identify processes underpinning biases across groups which relate to numerical frames and formats. Experiments are conducted replicating real-world environments and data to test judgment performance based on framing and format. Regardless of background skills and experience, people in professional and consumer contexts show a strong tendency to perceive the world from a linear perspective, interpreting information in concrete, absolute terms and making judgments based on seeking and applying linear functions. Whether predicting sales, selecting between financial products, or forecasting refugee camp data, people use minimal cues and systematically apply additive methods amidst non-linear trends and percentage points to yield linear estimates in both rich and sparse informational contexts. Depending on data variability and temporality, human rationality and choice may be significantly helped or hindered by informational framing and format. The findings deliver both theoretical and practical contributions. Across groups and individual differences, the effects of informational format and the tendency to linearly extrapolate are connected by the bias to perceive values in concrete terms and make sense of data by seeking simple referent points. People compare and combine referents using additive methods when inappropriate and adhere strongly to defaults when applied in complex numeric environments. The practical contribution involves a framing manipulation which shows that format biases (i.e., additive processing) and optimism (i.e., associated with intertemporal effects) can be counteracted in judgments involving percentages and exponential growth rates by using absolute formats and positioning defaults in future event context information. This framing manipulation was highly effective in improving loan choice and repayment judgments compared to information in standard finance industry formats. There is a strong potential to increase rationality using this data format manipulation in other financial settings and domains such as health behaviour change in which peoples\u2019 erroneous interpretation of percentages and non-linear relations negatively impact choice and behaviours in both the short and long-term."}
{"_id": "ffdd7823e1f093876d8c4ea1d7da897c6b259ca8", "title": "A reconfigurable three- and single-phase AC/DC non-isolated bi-directional converter for multiple worldwide voltages", "text": "AC/DC converters for worldwide deployment or multiple applications are often required to operate with single-or three-phase AC connections at a variety of voltages. Electric vehicle charging is an excellent example: a typical AC/DC converter optimized for a 400 V three-phase European connection will only produce about 35% of rated power when operating on a 240 V single-phase connection in North America. This paper proposes a reconfigurable AC-DC converter with integral DC-DC stage that allows the single-phase AC power to roughly double compared to a standard configuration, up to 2/3 of the DC/DC stage rating. The system is constructed with two conventional back-to-back three-phase two-level bridges. When operating in single-phase mode, rather than leaving the DC side under-utilized, our proposed design repurposes one of the DC phase legs and the unused AC phase to create two parallel single-phase connections. This roughly doubles the AC output current and hence doubles the single phase AC power."}
{"_id": "4f6ea57dddeb2950feedd39101c7bd586774599a", "title": "Instructional video in e-learning: Assessing the impact of interactive video on learning effectiveness", "text": "Interactive video in an e-learning system allows proactive and random access to video content. Our empirical study examined the influence of interactive video on learning outcome and learner satisfaction in e-learning environments. Four different settings were studied: three were e-learning environments\u2014with interactive video, with non-interactive video, and without video. The fourth was the traditional classroom environment. Results of the experiment showed that the value of video for learning effectiveness was contingent upon the provision of interactivity. Students in the e-learning environment that provided interactive video achieved significantly better learning performance and a higher level of learner satisfaction than those in other settings. However, students who used the e-learning environment that provided non-interactive video did not improve either. The findings suggest that it may be important to integrate interactive instructional video into e-learning systems. # 2005 Elsevier B.V. All rights reserved."}
{"_id": "84252a9be6279e9e34ed4af5676bc642986ba0f6", "title": "IoT based monitoring and control of appliances for smart home", "text": "The recent technology in home automation provides security, safety and comfortable life at home. That is why in the competitive environment and fast world, home automation technology is required for every person. This purposed home automation technology provides smart monitoring and control of the home appliances as well as door permission system for interaction between the visitor and home/office owner. The control and monitoring the status (ON/OFF of the appliances) have been implemented using multiple ways such as The Internet, electrical switch, and Graphical User Interface (GUI) interface. The system has low-cost design, user-friendly interface, and easy installation in home or multi-purpose building. Using this technology, the consumer can reduce the wastage of electrical power by regular monitoring of home appliances or the proper ON/OFF scheduling of the devices."}
{"_id": "8e393c18974baa8d5d704edaf116f009cb919463", "title": "2.1 28Gb/s 560mW multi-standard SerDes with single-stage analog front-end and 14-tap decision-feedback equalizer in 28nm CMOS", "text": "A high-speed SerDes must meet multiple challenges including high-speed operation, intensive equalization technique, low power consumption, small area and robustness. In order to meet new standards, such a OIF CEI-25G-LR, CEI-28G-MR/SR/VSR, IEEE802.3bj and 32G-FC, data-rates are increased to 25 to 28Gb/s, which is more than 75% higher than the previous generation of SerDes. For SerDes applications with several hundreds of lanes integrated in single chip, power consumption is very important factor while maintaining high performance. There are several previous works at 28Gb/s or higher data-rate [1-2]. They use an unrolled DFE to meet the critical timing margin, but the unrolled DFE structure increases the number of DFE slicers, increasing the overall power and die area. In order to tackle these challenges, we introduce several circuits and architectural techniques. The analog front-end (AFE) uses a single-stage architecture and a compact on-chip passive inductor in the transimpedance amplifier (TIA), providing 15dB boost. The boost is adaptive and its adaptation loop is decoupled from the decision-feedback equalizer (DFE) adaptation loop by the use of a group-delay adaptation (GDA) algorithm. DFE has a half-rate 1-tap unrolled structure with 2 total error latches for power and area reduction. A two-stage sense-amplifier-based slicer achieves a sensitivity of 15mV and DFE timing closure. We also develop a high-speed clock buffer that uses a new active-inductor circuit. This active-inductor circuit has the capability to control output-common-mode voltage to optimize circuit operating points."}
{"_id": "bc8e53c1ef837531126886520ce155e14140beb4", "title": "Understanding Human Motion and Gestures for Underwater Human-Robot Collaboration", "text": "In this paper, we present a number of robust methodologies for an underwater robot to visually detect, follow, and interact with a diver for collaborative task execution. We design and develop two autonomous diver-following algorithms, the first of which utilizes both spatialand frequency-domain features pertaining to human swimming patterns in order to visually track a diver. The second algorithm uses a convolutional neural network-based model for robust tracking-by-detection. In addition, we propose a hand gesture-based human-robot communication framework that is syntactically simpler and computationally more efficient than the existing grammar-based frameworks. In the proposed interaction framework, deep visual detectors are used to provide accurate hand gesture recognition; subsequently, a finite-state machine performs robust and efficient gesture-to-instruction mapping. The distinguishing feature of this framework is that it can be easily adopted by divers for communicating with underwater robots without using artificial markers or requiring memorization of complex language rules. Furthermore, we validate the performance and effectiveness of the proposed methodologies through extensive field experiments in closedand open-water environments. Finally, we perform a user interaction study to demonstrate the usability benefits of our proposed interaction framework compared to existing methods."}
{"_id": "f9e416ae4c54fd91ab3f45ba01b463686eb1dbcc", "title": "mTOR regulation of autophagy.", "text": "Nutrient starvation induces autophagy in eukaryotic cells through inhibition of TOR (target of rapamycin), an evolutionarily-conserved protein kinase. TOR, as a central regulator of cell growth, plays a key role at the interface of the pathways that coordinately regulate the balance between cell growth and autophagy in response to nutritional status, growth factor and stress signals. Although TOR has been known as a key regulator of autophagy for more than a decade, the underlying regulatory mechanisms have not been clearly understood. This review discusses the recent advances in understanding of the mechanism by which TOR regulates autophagy with focus on mammalian TOR (mTOR) and its regulation of the autophagy machinery."}
{"_id": "f180081aab729e2e26c81f475a70c030816901d8", "title": "EyeTell: Video-Assisted Touchscreen Keystroke Inference from Eye Movements", "text": "Keystroke inference attacks pose an increasing threat to ubiquitous mobile devices. This paper presents EyeTell, a novel video-assisted attack that can infer a victim's keystrokes on his touchscreen device from a video capturing his eye movements. EyeTell explores the observation that human eyes naturally focus on and follow the keys they type, so a typing sequence on a soft keyboard results in a unique gaze trace of continuous eye movements. In contrast to prior work, EyeTell requires neither the attacker to visually observe the victim's inputting process nor the victim device to be placed on a static holder. Comprehensive experiments on iOS and Android devices confirm the high efficacy of EyeTell for inferring PINs, lock patterns, and English words under various environmental conditions."}
{"_id": "428d73d47d9f01557e612abba52c9c21942915a1", "title": "Rigid-Foldable Thick Origami", "text": "In this paper, a method is proposed for geometrically constructing thick panel structures that follow the kinetic behavior of rigid origami by using tapered or two-ply panels and hinges located at their edges. The proposed method can convert generalized pattern of rigid-foldable origami into thick panels structure with kinetic motion, which leads to novel designs of origami for various engineering purposes including architecture."}
{"_id": "9fae98bdba54a24e72e8173d50c1461553cd98b2", "title": "Moving ERP Systems to the Cloud-Data Security Issues", "text": "This paper brings to light data security issues and concerns for organizations by moving their Enterprise Resource Planning (ERP) systems to the cloud. Cloud computing has become the new trend of how organizations conduct business and has enabled them to innovate and compete in a dynamic environment through new and innovative business models. The growing popularity and success of the cloud has led to the emergence of cloud-based Software-as-a-Service (SaaS) ERP systems, a new alternative approach to traditional on-premise ERP systems. Cloud-based ERP has a myriad of benefits for organizations. However, infrastructure engineers need to address data security issues before moving their enterprise applications to the cloud. Cloud-based ERP raises specific concerns about the confidentiality and integrity of the data stored in the cloud. Such concerns that affect the adoption of cloud-based ERP are based on the size of the organization. Small to medium enterprises (SMEs) gain the maximum benefits from cloud-based ERP as many of the concerns around data security are not relevant to them. On the contrary, larger organizations are more cautious in moving their mission critical enterprise applications to the cloud. A hybrid solution where organizations can choose to keep their sensitive applications on-premise while leveraging the benefits of the cloud is proposed in this paper as an effective solution that is gaining momentum and popularity for large organizations."}
{"_id": "cbd51c78edc96eb6f654a938f8247334a2c4fa00", "title": "Implementation of RSA 2048-bit and AES 256-bit with digital signature for secure electronic health record application", "text": "This research addresses the implementation of encryption and digital signature technique for electronic health record to prevent cybercrime problem such as robbery, modification and unauthorized access. In this research, RSA 2048-bit algorithm, AES 256-bit and SHA 256 will be implemented in Java programming. Secure Electronic Health Record Information (SEHR) application design is intended to combine given services, such as confidentiality, integrity, authentication, and non-repudiation. Cryptography is used to ensure the file records and electronic documents for detailed information on the medical past, present and future forecasts that have been given only for the patients. The document will be encrypted using an encryption algorithm based on NIST Standard. In the application, there are two schemes, namely the protection and verification scheme. This research uses black-box testing and white-box testing to test the software input, output, and code without testing the process and design that occurs in the system. We demonstrated the implementation of cryptography in Secure Electronic Health Record Information (SEHR). The implementation of encryption and digital signature in this research can prevent archive thievery which is shown on implementation and is proven on the test."}
{"_id": "2f6611fea81af98bc28341c830dd590a1c9159a9", "title": "An automotive radar network based on 77 GHz FMCW sensors", "text": "Automotive radar sensors will be used in many future applications to increase comfort and safety. Adaptive cruise control (ACC), parking aid, collision warning or avoidance and pre crash are some automotive applications, which are based on high performance radar sensors. This paper describes results gained by an European Commission funded research project and a new radar network based on four individual monostatic 77 GHz FMCW radar sensors. Each sensor measures target range and velocity simultaneously and extremely accurate even in multiple target situations with a maximum range of 40 m. The azimuth angle is estimated by multilateration techniques in a radar network architecture. The system design and the signal processing of this network are described in detail in this paper. Due to the high range resolution performance of each radar sensor ordinary targets cannot be considered any longer as point but as extended targets. System capability, performance and reliability can be improved essentially if the single sensor FMCW signal processing, the target position processing in the radar network and the tracking scheme are jointly optimized."}
{"_id": "08de4de6aaffc4d235e45cfa324fb36d5681dd6c", "title": "Sex differences in intrinsic aptitude for mathematics and science?: a critical review.", "text": "This article considers 3 claims that cognitive sex differences account for the differential representation of men and women in high-level careers in mathematics and science: (a) males are more focused on objects from the beginning of life and therefore are predisposed to better learning about mechanical systems; (b) males have a profile of spatial and numerical abilities producing greater aptitude for mathematics; and (c) males are more variable in their cognitive abilities and therefore predominate at the upper reaches of mathematical talent. Research on cognitive development in human infants, preschool children, and students at all levels fails to support these claims. Instead, it provides evidence that mathematical and scientific reasoning develop from a set of biologically based cognitive capacities that males and females share. These capacities lead men and women to develop equal talent for mathematics and science."}
{"_id": "a03f443f5b0be834ad0e53a8e436b7d09e7a8689", "title": "Reality mining based on Social Network Analysis", "text": "Data Mining is the extraction of hidden predictive information from large databases. The process of discovering interesting, useful, nontrivial patterns from large spatial datasets is called spatial data mining. When time gets associated it becomes spatio temporal data mining. The study of spatio temporal data mining is of great concern for the study of mobile phone sensed data. Reality Mining is defined as the study of human social behavior based on mobile phone sensed data. It is based on the data collected by sensors in mobile phones, security cameras, RFID readers, etc. All allow for the measurement of human physical and social activity. In this paper Netviz, Gephi and Weka tools have been used to convert and analyze the Facebook. Further, analysis of a reality mining dataset is also presented."}
{"_id": "923d01e0ff983c0ecd3fde5e831e5faff8485bc5", "title": "Improving Restaurants by Extracting Subtopics from Yelp Reviews", "text": "In this paper, we describe latent subtopics discovered from Yelp restaurant reviews by running an online Latent Dirichlet Allocation (LDA) algorithm. The goal is to point out demand of customers from a large amount of reviews, with high dimensionality. These topics can provide meaningful insights to restaurants about what customers care about in order to increase their Yelp ratings, which directly affects their revenue. We used the open dataset from the Yelp Dataset Challenge with over 158,000 restaurant reviews. To find latent subtopics from reviews, we adopted Online LDA, a generative probabilistic model for collections of discrete data such as text corpora. We present the breakdown of hidden topics over all reviews, predict stars per hidden topics discovered, and extend our findings to that of temporal information regarding restaurants peak hours. Overall, we have found several interesting insights and a method which could definitely prove useful to restaurant owners."}
{"_id": "b1076b870b991563fbc9e5004752794708492bed", "title": "Ontology Based SMS Controller for Smart Phones", "text": "Text analysis includes lexical analysis of the text and has been widely studied and used in diverse applications. In the last decade, researchers have proposed many efficient solutions to analyze / classify large text dataset, however, analysis / classification of short text is still a challenge because 1) the data is very sparse 2) It contains noise words and 3) It is difficult to understand the syntactical structure of the text. Short Messaging Service (SMS) is a text messaging service for mobile/smart phone and this service is frequently used by all mobile users. Because of the popularity of SMS service, marketing companies nowadays are also using this service for direct marketing also known as SMS marketing.In this paper, we have proposed Ontology based SMS Controller which analyze the text message and classify it using ontology aslegitimate or spam. The proposed system has been tested on different scenarios and experimental results shows that the proposed solution is effective both in terms of efficiency and time. Keywords\u2014Short Text Classification; SMS Spam; Text Analysis; Ontology based SMS Spam; Text Analysis and Ontology"}
{"_id": "cfad94900162ffcbc5a975e348a5cdccdc1e8b07", "title": "BP-STDP: Approximating backpropagation using spike timing dependent plasticity", "text": "The problem of training spiking neural networks (SNNs) is a necessary precondition to understanding computations within the brain, a field still in its infancy. Previous work has shown that supervised learning in multi-layer SNNs enables bio-inspired networks to recognize patterns of stimuli through hierarchical feature acquisition. Although gradient descent has shown impressive performance in multi-layer (and deep) SNNs, it is generally not considered biologically plausible and is also computationally expensive. This paper proposes a novel supervised learning approach based on an event-based spike-timing-dependent plasticity (STDP) rule embedded in a network of integrate-and-fire (IF) neurons. The proposed temporally local learning rule follows the backpropagation weight change updates applied at each time step. This approach enjoys benefits of both accurate gradient descent and temporally local, efficient STDP. Thus, this method is able to address some open questions regarding accurate and efficient computations that occur in the brain. The experimental results on the XOR problem, the Iris data, and the MNIST dataset demonstrate that the proposed SNN performs as successfully as the traditional NNs. Our approach also compares favorably with the state-of-the-art multi-layer SNNs."}
{"_id": "3cf5bc20338110eefa8b5375735272a1c4365619", "title": "A Requirements-to-Implementation Mapping Tool for Requirements Traceability", "text": "Software quality is a major concern in software engineering, particularly when we are dealing with software services that must be available 24 hours a day, please the customer and be kept permanently. In software engineering, requirements management is one of the most important tasks that can influence the success of a software project. Maintain traceability information is a fundamental aspect in software engineering and can facilitate the process of software development and maintenance. This information can be used to support various activities such as analyzing the impact of changes and software verification and validation. However, traceability techniques are mainly used between requirements and software test cases. This paper presents a prototype of a tool that supports the mapping of functional requirements with the pages and HTML elements of a web site. This tool helps maintaining requirements traceability information updated and, ultimately, increasing the efficiency of requirements change management, which may contribute to the overall quality of the software service."}
{"_id": "45b7d3881dd92c8c73b99fa3497e5d28a2106c24", "title": "Neuroevolution: from architectures to learning", "text": "Artificial neural networks (ANNs) are applied to many real-world problems, ranging from pattern classification to robot control. In order to design a neural network for a particular task, the choice of an architecture (including the choice of a neuron model), and the choice of a learning algorithm have to be addressed. Evolutionary search methods can provide an automatic solution to these problems. New insights in both neuroscience and evolutionary biology have led to the development of increasingly powerful neuroevolution techniques over the last decade. This paper gives an overview of the most prominent methods for evolving ANNs with a special focus on recent advances in the synthesis of learning architectures."}
{"_id": "505c58c2c100e7512b7f7d906a9d4af72f6e8415", "title": "Genetic programming - on the programming of computers by means of natural selection", "text": "Page ii Complex Adaptive Systems John H. Holland, Christopher Langton, and Stewart W. Wilson, advisors Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, MIT Press edition John H. Holland Toward a Practice of Autonomous Systems: Proceedings of the First European Conference on Artificial Life edited by Francisco J. Varela and Paul Bourgine Genetic Programming: On the Programming of Computers by Means of Natural Selection John R. Koza"}
{"_id": "50975d6cd92e71f828ffd54bf776c32daa79e295", "title": "Survey of Neural Transfer Functions", "text": "The choice of transfer functions may strongly influence complexity and performance of neural networks. Although sigmoidal transfer functions are the most common there is no a priori reason why models based on such functions should always provide optimal decision borders. A large number of alternative transfer functions has been described in the literature. A taxonomy of activation and output functions is proposed, and advantages of various non-local and local neural transfer functions are discussed. Several less-known types of transfer functions and new combinations of activation/output functions are described. Universal transfer functions, parametrized to change from localized to delocalized type, are of greatest interest. Other types of neural transfer functions discussed here include functions with activations based on nonEuclidean distance measures, bicentral functions, formed from products or linear combinations of pairs of sigmoids, and extensions of such functions making rotations of localized decision borders in highly dimensional spaces practical. Nonlinear input preprocessing techniques are briefly described, offering an alternative way to change the shapes of decision borders."}
{"_id": "6ae5deae0167649c37117d11681a223a271923c3", "title": "Back propagation neural network with adaptive differential evolution algorithm for time series forecasting", "text": "The back propagation neural network (BPNN) can easily fall into the local minimum point in time series forecasting. A hybrid approach that combines the adaptive differential evolution (ADE) algorithm with BPNN, called ADE\u2013BPNN, is designed to improve the forecasting accuracy of BPNN. ADE is first applied to search for the global initial connection weights and thresholds of BPNN. Then, BPNN is employed to thoroughly search for the optimal weights and thresholds. Two comparative real-life series data sets are used to verify the feasibility and effectiveness of the hybrid method. The proposed ADE\u2013BPNN can effectively improve forecasting accuracy relative to basic BPNN, autoregressive integrated moving average model (ARIMA), and other hybrid models. 2014 Elsevier Ltd. All rights reserved."}
{"_id": "6e744cf0273a84b087e94191fd654210e8fec8e9", "title": "A Hypercube-Based Encoding for Evolving Large-Scale Neural Networks", "text": "Research in neuroevolutionthat is, evolving artificial neural networks (ANNs) through evolutionary algorithmsis inspired by the evolution of biological brains, which can contain trillions of connections. Yet while neuroevolution has produced successful results, the scale of natural brains remains far beyond reach. This article presents a method called hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) that aims to narrow this gap. HyperNEAT employs an indirect encoding called connective compositional pattern-producing networks (CPPNs) that can produce connectivity patterns with symmetries and repeating motifs by interpreting spatial patterns generated within a hypercube as connectivity patterns in a lower-dimensional space. This approach can exploit the geometry of the task by mapping its regularities onto the topology of the network, thereby shifting problem difficulty away from dimensionality to the underlying problem structure. Furthermore, connective CPPNs can represent the same connectivity pattern at any resolution, allowing ANNs to scale to new numbers of inputs and outputs without further evolution. HyperNEAT is demonstrated through visual discrimination and food-gathering tasks, including successful visual discrimination networks containing over eight million connections. The main conclusion is that the ability to explore the space of regular connectivity patterns opens up a new class of complex high-dimensional tasks to neuroevolution."}
{"_id": "baf77336afa708e071c5c5e1c6f481ff27d0150d", "title": "On code reuse from StackOverflow: An exploratory study on Android apps", "text": "Context: Source code reuse has been widely accepted as a fundamental activity in software development. Recent studies showed that StackOverflow has emerged as one of the most popular resources for code reuse. Therefore, a plethora of work proposed ways to optimally ask questions, search for answers and find relevant code on StackOverflow. However, little work studies the impact of code reuse from Stack-"}
{"_id": "84ca84dad742749a827291e103cde8185cea1bcf", "title": "CFGAN: A Generic Collaborative Filtering Framework based on Generative Adversarial Networks", "text": "Generative Adversarial Networks (GAN) have achieved big success in various domains such as image generation, music generation, and natural language generation. In this paper, we propose a novel GAN-based collaborative filtering (CF) framework to provide higher accuracy in recommendation. We first identify a fundamental problem of existing GAN-based methods in CF and highlight it quantitatively via a series of experiments. Next, we suggest a new direction of vector-wise adversarial training to solve the problem and propose our GAN-based CF framework, called CFGAN, based on the direction. We identify a unique challenge that arises when vector-wise adversarial training is employed in CF. We then propose three CF methods realized on top of our CFGAN that are able to address the challenge. Finally, via extensive experiments on real-world datasets, we validate that vector-wise adversarial training employed in CFGAN is really effective to solve the problem of existing GAN-based CF methods. Furthermore, we demonstrate that our proposed CF methods on CFGAN provide recommendation accuracy consistently and universally higher than those of the state-of-the-art recommenders."}
{"_id": "3a46c11ad7afed8defbb368e478dbf94c24f43a3", "title": "A Tale of Two Data-Intensive Paradigms: Applications, Abstractions, and Architectures", "text": "Scientific problems that depend on processing largeamounts of data require overcoming challenges in multiple areas:managing large-scale data distribution, co-placement andscheduling of data with compute resources, and storing and transferringlarge volumes of data. We analyze the ecosystems of thetwo prominent paradigms for data-intensive applications, hereafterreferred to as the high-performance computing and theApache-Hadoop paradigm. We propose a basis, common terminologyand functional factors upon which to analyze the two approachesof both paradigms. We discuss the concept of \"Big DataOgres\" and their facets as means of understanding and characterizingthe most common application workloads found acrossthe two paradigms. We then discuss the salient features of thetwo paradigms, and compare and contrast the two approaches.Specifically, we examine common implementation/approaches ofthese paradigms, shed light upon the reasons for their current\"architecture\" and discuss some typical workloads that utilizethem. In spite of the significant software distinctions, we believethere is architectural similarity. We discuss the potential integrationof different implementations, across the different levelsand components. Our comparison progresses from a fully qualitativeexamination of the two paradigms, to a semi-quantitativemethodology. We use a simple and broadly used Ogre (K-meansclustering), characterize its performance on a range of representativeplatforms, covering several implementations from bothparadigms. Our experiments provide an insight into the relativestrengths of the two paradigms. We propose that the set of Ogreswill serve as a benchmark to evaluate the two paradigms alongdifferent dimensions."}
{"_id": "0a3437fa123c92dc2789b8d49d1f934febe2c125", "title": "Towards a Self-Organized Agent-Based Simulation Model for Exploration of Human Synaptic Connections", "text": "In this paper, the early design of our selforganized agent-based simulation model for exploration of synaptic connections that faithfully generates what is observed in natural situation is given. While we take inspiration from neuroscience, our intent is not to create a veridical model of processes in neurodevelopmental biology, nor to represent a real biological system. Instead, our goal is to design a simulation model that learns acting in the same way of human nervous system by using findings on human subjects using reflex methodologies in order to estimate unknown connections."}
{"_id": "e48617210e65974e7c21c081b6f1cec7116580ba", "title": "BIM: An integrated model for planned and preventive maintenance of architectural heritage", "text": "Modern digital technologies give us great possibilities to organize knowledge about constructions, regarding multidisciplinary fields like preservation, conservation and valorization of our architectural heritage, in order to suggest restoration projects and related work, or to suppose a planned preventive maintenance. New procedures to archive, analyze and manage architectural information find a natural support in 3D model, thanks to the development of capacities of new modeling software. Moreover, if the model contains or interfaces with a heterogeneous archive of information, as it is for BIM, this model can be considered as the bases of critical studies, projects of restoration, heritage maintenance, integrated management, protection, and valorization, and evaluation of economic aspects, management and planning, that can flow into a planned preventive maintenance [1]. The aspect that limit the use of BIM technology is the set up parametric object library inside programs: the standardized level of these objects deals difficulty with survey and restoration issues, where each single element has its own historical and architectural characterization [2]. From this foreword, the goal of this research is more evident: the possibilities of using BIM modeling to the existing constructions and cultural heritage, as a support for the construction and management of a Plan for planned preventive maintenance."}
{"_id": "dc7024840a4ba7ab634517fae53e77695ff5dda9", "title": "Energy Efficient Smartphone-Based Activity Recognition using Fixed-Point Arithmetic", "text": "In this paper we propose a novel energy efficient approach for the recognition of human activities using smartphones as wearable sensing devices, targeting assisted living applications such as remote patient activity monitoring for the disabled and the elderly. The method exploits fixed-point arithmetic to propose a modified multiclass Support Vector Machine (SVM) learning algorithm, allowing to better preserve the smartphone battery lifetime with respect to the conventional floating-point based formulation while maintaining comparable system accuracy levels. Experiments show comparative results between this approach and the traditional SVM in terms of recognition performance and battery consumption, highlighting the advantages of the proposed method."}
{"_id": "556254959ca591439473a8e31d2e09ef37d78285", "title": "Design for reliability of power electronics modules", "text": "0026-2714/$ see front matter 2009 Published by doi:10.1016/j.microrel.2009.07.055 * Corresponding author. Tel.: +44 (0)20 8331 8660 E-mail address: c.bailey@gre.ac.uk (C. Bailey). Power electronics uses semiconductor technology to convert and control electrical power. Demands for efficient energy management, conversion and conservation, and the increasing take-up of electronics in transport systems has resulted in tremendous growth in the use of power electronics devices such as Insulated Gate Bipolar Transistors (IGBT\u2019s). The packaging of power electronics devices involves a number of challenges for the design engineer in terms of reliability. For example, IGBT modules will contain a number of semiconductor dies within a small footprint bonded to substrates with aluminum wires and wide area solder joints. To a great extent, the reliability of the package will depend on the thermo-mechanical behavior of these materials. This paper details a physics of failure approach to reliability predictions of IGBT modules. It also illustrates the need for a probabilistic approach to reliability predictions that include the effects of design variations. Also discussed are technologies for predicting the remaining life of the package when subjected to qualification stresses or in service stresses using prognostics methods. 2009 Published by Elsevier Ltd."}
{"_id": "52d09d745295ed32519a74291f890a5115798d58", "title": "A Real-time Inversion Attack on the GMR-2 Cipher Used in the Satellite Phones", "text": "The GMR-2 cipher is a kind of stream cipher currently being used in some Inmarsat satellite phones. It has been proven that such cipher can be cracked using only one frame known keystream but with a moderate executing times. In this paper, we present a new thorough security analysis of the GMR-2 cipher. We first study the inverse properties and the relationship of the cipher\u2019s components to reveal a bad one-way character of the cipher. Then by introducing a new concept called \u201cvalid key chain\u201d according to the cipher\u2019s key schedule, we for the first time propose a real-time inversion attack using one frame keystream. This attack contains three phases: (1) table generation (2) dynamic table looks-up, filtration and combination (3) verification. Our analysis shows that, using the proposed attack, the exhaustive search space for the 64-bit encryption key can be reduced to about 2 when one frame (15 bytes) keystream is available. Compared with previous known attacks, this inversion attack is much more efficient. Finally, the proposed attack are carried out on a 3.3GHz platform, and the experimental results demonstrate that the 64-bit encryption-key could be recovered in around 0.02s on average."}
{"_id": "8e31ec136f0a49718827fbe9e95a546d16579441", "title": "Review on virtual synchronous generator (VSG) for enhancing performance of microgrid", "text": "Increasing penetration level of Distributed Generators (DGs)/ Renewable energy sources (RES) creates low inertia problems, damping effect on grid dynamic performance and stability. Voltage rise by reverse power from PV generations, excessive supply of electricity in the grid due to full generation of the DGs, power fluctuations due to variable nature of Renewable energy sources, and degradation of frequency regulation can be considered as some negative results of mentioned hurdle. For securing such a grid there should add inertia, virtually. The proposed control approach based on an inverter connected in Microgrid, which operated as a virtual synchronous generator (VSG) reacting only in transitory situations. Super capacitor (SC), an energy storage system (ESS) included in MG-forming inverter i.e., a normal inverter structure used for contributing system robustness and reliability. In steady-state load distributed between the MG-supporting inverter are achieved with two restoration controllers for active and reactive powers. The virtual inertia hypothesis can be used for maintaining a large share of DGs in future grids without affect system stability. This proposed system analyzed and simulated using MATLAB/SIMULINK."}
{"_id": "26ed19606a57d837b4b9dcc984ff61763cdd0d36", "title": "Predicting short-term interests using activity-based search context", "text": "A query considered in isolation offers limited information about a searcher's intent. Query context that considers pre-query activity (e.g., previous queries and page visits), can provide richer information about search intentions. In this paper, we describe a study in which we developed and evaluated user interest models for the current query, its context (from pre-query session activity), and their combination, which we refer to as intent. Using large-scale logs, we evaluate how accurately each model predicts the user's short-term interests under various experimental conditions. In our study we: (i) determine the extent of opportunity for using context to model intent; (ii) compare the utility of different sources of behavioral evidence (queries, search result clicks, and Web page visits) for building predictive interest models, and; (iii) investigate optimally combining the query and its context by learning a model that predicts the context weight for each query. Our findings demonstrate significant opportunity in leveraging contextual information, show that context and source influence predictive accuracy, and show that we can learn a near-optimal combination of the query and context for each query. The findings can inform the design of search systems that leverage contextual information to better understand, model, and serve searchers' information needs."}
{"_id": "77b1354f4b9b1d9282840ee8baf3f003c2bc3f66", "title": "A 280-KBytes Twin-Bit-Cell Embedded NOR Flash Memory With a Novel Sensing Current Protection Enhanced Technique and High-Voltage Generating Systems", "text": "A novel sensing current protection enhanced (SCPE) technique and high-voltage (HV) generating systems are adopted to greatly enhance the advantages of the twin-bit cell. The SCPE technique provides the tradeoff of sensing margin loss between \u201c1\u201d and \u201c0\u201d bits sensing case to realize fast read access. The read speed is improved by 7.7%. HV generating systems with parallel-series-transform and capacitance-shared techniques are proposed to satisfy the complicated requirements of output capability and current drivability with lower area penalty. The HV periphery area decreases by 71%. A 1.5-V 280-KBytes embedded NOR flash memory IP has been fabricated in HHGrace (Shanghai Huahong Grace Semiconductor Manufacturing Corporation) 90-nm 4 poly 4 metal CMOS process. The complicated operation voltages and access time of 23.5 ns have been achieved at 1.5 V. The die size of the proposed IP is 0.437 mm $^{2}$ and the area size of charge pump has been reduced to 0.006 mm $^{2}$ on average."}
{"_id": "0c495278136cae458b3976b879bcace7f9b42c30", "title": "A survey of methods for data fusion and system adaptation using autonomic nervous system responses in physiological computing", "text": "Physiological computing represents a mode of human\u2013computer interaction where the computer monitors, analyzes and responds to the user\u2019s psychophysiological activity in real-time. Within the field, autonomic nervous system responses have been studied extensively since they can be measured quickly and unobtrusively. However, despite a vast body of literature available on the subject, there is still no universally accepted set of rules that would translate physiological data to psychological states. This paper surveys the work performed on data fusion and system adaptation using autonomic nervous system responses in psychophysiology and physiological computing during the last ten years. First, five prerequisites for data fusion are examined: psychological model selection, training set preparation, feature extraction, normalization and dimension reduction. Then, different methods for either classification or estimation of psychological states from the extracted features are presented and compared. Finally, implementations of system adaptation are reviewed: changing the system that the user is interacting with in response to cognitive or affective information inferred from autonomic nervous system responses. The paper is aimed primarily at psychologists and computer scientists who have already recorded autonomic nervous system responses and now need to create algorithms to determine the subject\u2019s psychological state. 2012 British Informatics Society Limited. All rights reserved."}
{"_id": "39fc175c150ba9ee662107b395fc0c3bbcc4962e", "title": "Review: Graph Databases", "text": "This paper covers NOSQL databases. With the increase in net users and applications it is important to understand graph databases, the future of tomorrow\u2019s data storage and management. Basic architecture of some graph databases has been discussed to know their working and how data is stored in the form of nodes and relationships. KeywordsNOSQL databases, Neo4j, JENA, DEX, InfoGrid, FlockDB."}
{"_id": "8cbe965f4909a3a7347f4def8365e473db1faf2b", "title": "Differential effects of atomoxetine on executive functioning and lexical decision in attention-deficit/hyperactivity disorder and reading disorder.", "text": "OBJECTIVE\nThe effects of a promising pharmacological treatment for attention-deficit/hyperactivity disorder (ADHD), atomoxetine, were studied on executive functions in both ADHD and reading disorder (RD) because earlier research demonstrated an overlap in executive functioning deficits in both disorders. In addition, the effects of atomoxetine were explored on lexical decision.\n\n\nMETHODS\nSixteen children with ADHD, 20 children with ADHD + RD, 21 children with RD, and 26 normal controls were enrolled in a randomized placebo-controlled crossover study. Children were measured on visuospatial working memory, inhibition, and lexical decision on the day of randomization and following two 28-day medication periods.\n\n\nRESULTS\nChildren with ADHD + RD showed improved visuospatial working memory performance and, to a lesser extent, improved inhibition following atomoxetine treatment compared to placebo. No differential effects of atomoxetine were found for lexical decision in comparison to placebo. In addition, no effects of atomoxetine were demonstrated in the ADHD and RD groups.\n\n\nCONCLUSION\nAtomoxetine improved visuospatial working memory and to a lesser degree inhibition in children with ADHD + RD, which suggests differential developmental pathways for co-morbid ADHD + RD as compared to ADHD and RD alone.\n\n\nCLINICAL TRIAL REGISTRY\nB4Z-MC-LYCK, NCT00191906; http://clinicaltrials.gov/ct2/show/NCT00191906."}
{"_id": "f4cdd1d15112a3458746b58a276d97e79d8f495d", "title": "Gradient Regularization Improves Accuracy of Discriminative Models", "text": "Regularizing the gradient norm of the output of a neural network with respect to its inputs is a powerful technique, rediscovered several times. This paper presents evidence that gradient regularization can consistently improve classification accuracy on vision tasks, using modern deep neural networks, especially when the amount of training data is small. We introduce our regularizers as members of a broader class of Jacobian-based regularizers. We demonstrate empirically on real and synthetic data that the learning process leads to gradients controlled beyond the training points, and results in solutions that generalize well."}
{"_id": "4aa00fec49069c8e3e4814b99be1d99d654c84bb", "title": "A robust data scaling algorithm to improve classification accuracies in biomedical data", "text": "Machine learning models have been adapted in biomedical research and practice for knowledge discovery and decision support. While mainstream biomedical informatics research focuses on developing more accurate models, the importance of data preprocessing draws less attention. We propose the Generalized Logistic (GL) algorithm that scales data uniformly to an appropriate interval by learning a generalized logistic function to fit the empirical cumulative distribution function of the data. The GL algorithm is simple yet effective; it is intrinsically robust to outliers, so it is particularly suitable for diagnostic/classification models in clinical/medical applications where the number of samples is usually small; it scales the data in a nonlinear fashion, which leads to potential improvement in accuracy. To evaluate the effectiveness of the proposed algorithm, we conducted experiments on 16 binary classification tasks with different variable types and cover a wide range of applications. The resultant performance in terms of area under the receiver operation characteristic curve (AUROC) and percentage of correct classification showed that models learned using data scaled by the GL algorithm outperform the ones using data scaled by the Min-max and the Z-score algorithm, which are the most commonly used data scaling algorithms. The proposed GL algorithm is simple and effective. It is robust to outliers, so no additional denoising or outlier detection step is needed in data preprocessing. Empirical results also show models learned from data scaled by the GL algorithm have higher accuracy compared to the commonly used data scaling algorithms."}
{"_id": "6674bf8a73f7181ec0cee0a8ee23f4b4f9bb162e", "title": "Compliance with Information Security Policies: An Empirical Investigation", "text": "Information security was the main topic in this paper. An investigation of the compliance to information security policies were discussed. The author mentions that the insignificant relationship between rewards and actual compliance with information security policies does not make sense. Quite possibly this relationship results from not applying rewards for security compliance. Also mentions that based on the survey conducted, careless employee behavior places an organization's assets and reputation in serious jeopardy. The major threat to information security arises from careless employees who fail to comply with organizations' information security policies and procedures."}
{"_id": "1bd50926079e68a6e32dc4412e9d5abe331daefb", "title": "Fisher Discrimination Dictionary Learning for sparse representation", "text": "Sparse representation based classification has led to interesting image recognition results, while the dictionary used for sparse coding plays a key role in it. This paper presents a novel dictionary learning (DL) method to improve the pattern classification performance. Based on the Fisher discrimination criterion, a structured dictionary, whose dictionary atoms have correspondence to the class labels, is learned so that the reconstruction error after sparse coding can be used for pattern classification. Meanwhile, the Fisher discrimination criterion is imposed on the coding coefficients so that they have small within-class scatter but big between-class scatter. A new classification scheme associated with the proposed Fisher discrimination DL (FDDL) method is then presented by using both the discriminative information in the reconstruction error and sparse coding coefficients. The proposed FDDL is extensively evaluated on benchmark image databases in comparison with existing sparse representation and DL based classification methods."}
{"_id": "9462ce3ca0fd8ac68e01892152ad6a72ab0e37b7", "title": "More on the fragility of performance: choking under pressure in mathematical problem solving.", "text": "In 3 experiments, the authors examined mathematical problem solving performance under pressure. In Experiment 1, pressure harmed performance on only unpracticed problems with heavy working memory demands. In Experiment 2, such high-demand problems were practiced until their answers were directly retrieved from memory. This eliminated choking under pressure. Experiment 3 dissociated practice on particular problems from practice on the solution algorithm by imposing a high-pressure test on problems practiced 1, 2, or 50 times each. Infrequently practiced high-demand problems were still performed poorly under pressure, whereas problems practiced 50 times each were not. These findings support distraction theories of choking in math, which contrasts with considerable evidence for explicit monitoring theories of choking in sensorimotor skills. This contrast suggests a skill taxonomy based on real-time control structures."}
{"_id": "e093191573b88bd279213c0164f389378810e782", "title": "The Effects of Interaction Goals on Negotiation Tactics and Outcomes: A Dyad-Level Analysis Across Two Cultures", "text": "This study investigates how negotiators\u2019 interaction goals influence their own and their counterparts\u2019 negotiation tactics and outcomes across two cultures using a simulated employment contract negotiation. Results show that when negotiators placed greater importance on competitive goals, they used more distributive persuasion and fewer priority information exchange tactics, which reduced their counterparts\u2019 profit; negotiators\u2019 competitive goals also caused their counterparts to use fewer priority information exchange tactics, which in turn hurt their own profit. Dyad members\u2019 competitive goals have an indirect, negative impact on joint profit. In addition, Chinese negotiators placed greater importance on competitive goals and used more distributive and fewer integrative tactics than Americans, but the associations between goals and tactics did not differ across cultures. Nevertheless, members of the two cultures took different paths to improve joint profit; as a result, Chinese dyads achieved no less joint profit than American dyads. The study sheds light on culture\u2019s effect on the interactive processes by which goals impact negotiation performance."}
{"_id": "d504a72e40ecee5c2e721629e7368a959b18c681", "title": "Solving job-shop scheduling problems by genetic algorithm", "text": "Job-shop Scheduling Problem (JSP) is one of extremely hard problems because it requires very large combinatorial search space and the precedence constraint between machines. The traditional algorithm used to solve the problem is the branch-andbound method, which takes considerable computing time when the size of problem is large. W e propose a new method for solving JSP using Genetic Algorithm (GA) and demonstrate its efficiency by the standard benchmark of job-shop scheduling problems. Some important points of G A are how t o represent the schedules as an individuals and to design the genetic operators for the representation in order t o produce better results."}
{"_id": "42bacdec7c9b78075e4849609a25892951095461", "title": "Detection of ulcerative colitis severity in colonoscopy video frames", "text": "Ulcerative colitis (UC) is a chronic inflammatory disease characterized by periods of relapses and remissions affecting more than 500,000 people in the United States. The therapeutic goals of UC are to first induce and then maintain disease remission. However, it is very difficult to evaluate the severity of UC objectively because of non-uniform nature of symptoms associated with UC, and large variations in their patterns. To address this, we objectively measure and classify the severity of UC presented in optical colonoscopy video frames based on the image textures. To extract distinct textures, we are using a hybrid approach in which a new proposed feature based on the accumulation of pixel value differences is combined with an existing feature such as LBP (Local Binary Pattern). The experimental results show the hybrid method can achieve more than 90% overall accuracy."}
{"_id": "3fb14a16d10d942ce9a78406d3a47c430244954f", "title": "Finding surprising patterns in a time series database in linear time and space", "text": "The problem of finding a specified pattern in a time series database (i.e. query by content) has received much attention and is now a relatively mature field. In contrast, the important problem of enumerating all surprising or interesting patterns has received far less attention. This problem requires a meaningful definition of \"surprise\", and an efficient search technique. All previous attempts at finding surprising patterns in time series use a very limited notion of surprise, and/or do not scale to massive datasets. To overcome these limitations we introduce a novel technique that defines a pattern surprising if the frequency of its occurrence differs substantially from that expected by chance, given some previously seen data."}
{"_id": "3d0f75c914bc3a34ef45cb0f6a18f841fa8008f0", "title": "Everyday Life Information Seeking : Approaching Information Seeking in the Context of \u201c Way of Life \u201d", "text": "The study offers a framework for the study of everyday life information seeking (ELIS) in the context of way of and mastery of life.Way of life is defined as the \u201corder of things,\u201d manifesting itself, for example, in the relationship between work and leisure time, models of consumption, and nature of hobbies. Mastery of life is interpreted as \u201ckeeping things in order; \u201d four ideal types of mastery of life with their implications for ELIS, namely optimistic-cognitive, pessimistic-cognitive, defensiveaffective and pessimistic-affective mastery of life are outlined. The article reviews two major dimensions of ELIS, there are. the seeking of orienting and practical information. The research framework was tested in an empirical study based on interviews with teachers and industrial workers, eleven of both. The main features of seeking orienting and practical information are reviewed, followed by suggestions for refinement of the research framework."}
{"_id": "843018e9d8ebda9a4ec54256a9f70835e0edff7e", "title": "Ultra-wideband antipodal vivaldi antenna array with Wilkinson power divider feeding network", "text": "In this paper, a two element antipodal Vivaldi antenna array in combination with an ultra-wideband (UWB) Wilkinson power divider feeding network is presented. The antennas are placed in a stacked configuration to provide more gain and to limit cross talk between transmit and receive antennas in radar applications. Each part can be realized with standard planar technology. For a single antenna element, a comparison in performance is done between standard FR4 and a Rogers\u00ae RO3003 substrate. The antenna spacing is obtained using the parametric optimizer of CST MWS\u00ae. The performance of the power divider and the antenna array is analyzed by means of simulations and measurements."}
{"_id": "8839b84e8432f6787d2281806430a65d39a94499", "title": "The Vivaldi Aerial", "text": "The Vivaldi Aerial is a new member of the class of aperiodic continuously scaled antenna structures and, as such, it has theoretically unlimited instantaneous frequency bandwidth. This aerial has significant gain and linear polarisation and can be made to conform to a constant gain vs. frequency performance. One such design has been made with approximately 10 dBI gain and \u00bf20 dB sidelobe level over an instantaneous frequency bandwidth extending from below 2 GHz to above 40 GHz."}
{"_id": "984df1f081fbd623600ec45635e5d9a4811c0aef", "title": "Ultra-wideband Vivaldi arrays for see-through-wall imaging radar applications", "text": "Two Vivaldi antenna arrays have been presented. First is an 8-element tapered slot array covering 1.2 to 4 GHz band for STW applications for brick/concrete wall imaging. Second is a 16-element antipodal array operating at 8 to 10.6 GHz for high resolution imaging when penetrating through dry wall. Based on the two designs, and utilizing a smooth wide band slot to microstrip transition to feed the Vivaldi antenna array, a 1\u201310 GHz frequency band can be covered. Alternatively, the design can be used in a reconfigurable structure to cover either a 1\u20133 GHz or 8\u201310 GHz band. Experimental and measured results have been completed and will be discussed in detail. The designs will significantly impact the development of compact reconfigurable and portable systems."}
{"_id": "d7ccf0988c16b0b373274bcf57991f036e253f5c", "title": "Analysis of Ultra Wideband Antipodal Vivaldi Antenna Design", "text": "The characteristics of antipodal Vivaldi antennas with different structures and the key factors that affect the performance of the antipodal antennas have been investigated. The return loss, radiation pattern and current distribution with various elliptical loading sizes, opening rates and ground plane sizes have been analyzed and compared using full wave simulation tool. Based on the parameter study, a design that is capable of achieving a bandwidth from 2.1 GHz to 20 GHz has been identified."}
{"_id": "e3f4fdf6d2f10ebe4cfc6d0544afa63976527d60", "title": "A 324-Element Vivaldi Antenna Array for Radio Astronomy Instrumentation", "text": "This paper presents a 324-element 2-D broadside array for radio astronomy instrumentation which is sensitive to two mutually orthogonal polarizations. The array is composed of cruciform units consisting of a group of four Vivaldi antennas arranged in a cross-shaped structure. The Vivaldi antenna used in this array exhibits a radiation intensity characteristic with a symmetrical main beam of 87.5\u00b0 at 3 GHz and 44.2\u00b0 at 6 GHz. The measured maximum side/backlobe level is 10.3 dB below the main beam level. The array can operate at a high frequency of 5.4 GHz without the formation of grating lobes."}
{"_id": "f5f9366ae604faf45ef53a3f20fcc31d317bd751", "title": "Finger printing on the web", "text": "Nowadays, companies such as Google [1] and Facebook [2] are present on most websites either through advertising services or through social plugins. They follow users around the web collecting information to create unique user profiles. This serious threat to privacy is a major cause for concern, as this reveals that there are no restrictions on the type of information that can be collected, the duration it can be stored, the people who can access it and the purpose of use or misuse. In order to determine if there indeed is any statistically significant evidence that fingerprinting occurs on certain web services, we integrated 2 software namely, FPBlock [6], a prototype browser extension that blocks web-based device fingerprinting, and AdFisher [10], an automated tool that sending out thousands of automated Web browser instances, in such a manner so that the data collected by FP-Block could be fed to AdFisher for analysis."}
{"_id": "c0271edf9cc616a30974918b3cce0f95d4265366", "title": "Cloud-assisted body area networks: state-of-the-art and future challenges", "text": "Body Area Networks (BANs) are emerging as enabling technology for many human-centered application domains such as health-care, sport, fitness, wellness, ergonomics, emergency, safety, security, and sociality. A BAN, which basically consists of wireless wearable sensor nodes usually coordinated by a static or mobile device, is mainly exploited to monitor single assisted livings. Data generated by a BAN can be processed in real-time by the BAN coordinator and/or transmitted to a server-side for online/offline processing and long-term storing. A network of BANs worn by a community of people produces large amount of contextual data that require a scalable and efficient approach for elaboration and storage. Cloud computing can provide a flexible storage and processing infrastructure to perform both online and offline analysis of body sensor data streams. In this paper, we motivate the introduction of Cloud-assisted BANs along with the main challenges that need to be addressed for their development and management. The current state-of-the-art is overviewed and framed according to the main requirements for effective Cloud-assisted BAN architectures. Finally, relevant open research issues in terms of efficiency, scalability, security, interoperability, prototyping, dynamic deployment and management, are discussed."}
{"_id": "3b5c7cdcc75e6064149226b4b652bf88f687bd77", "title": "Visual and Infrared Sensor Data-Based Obstacle Detection for the Visually Impaired Using the Google Project Tango Tablet Development Kit and the Unity Engine", "text": "A novel visual and infrared sensor data-based system to assist visually impaired users in detecting obstacles in their path while independently navigating indoors is presented. The system has been developed for the recently introduced Google Project Tango Tablet Development Kit equipped with a powerful graphics processor and several sensors which allow it to track its motion and orientation in 3-D space in real-time. It exploits the inbuilt functionalities of the Unity engine in the Tango SDK to create a 3-D reconstruction of the surrounding environment, then associates a Unity collider component with the user and utilizes it to determine his interaction with the reconstructed mesh in order to detect obstacles. The user is warned about any detected obstacles via audio alerts. An extensive empirical evaluation of the obstacle detection component has yielded favorable results, thus, confirming the potential of this system for future development work."}
{"_id": "a29cc7c75b5868bef0576bd2ad94821e6358a26a", "title": "BioGlass: Physiological parameter estimation using a head-mounted wearable device", "text": "This work explores the feasibility of using sensors embedded in Google Glass, a head-mounted wearable device, to measure physiological signals of the wearer. In particular, we develop new methods to use Glass's accelerometer, gyroscope, and camera to extract pulse and respiratory rates of 12 participants during a controlled experiment. We show it is possible to achieve a mean absolute error of 0.83 beats per minute (STD: 2.02) for heart rate and 1.18 breaths per minute (STD: 2.04) for respiration rate when considering different combinations of sensors. These results included testing across sitting, supine, and standing still postures before and after physical exercise."}
{"_id": "e3412b045525ed4a6c0142bba69522beb2f82a25", "title": "Monocular Depth Estimation: A Survey", "text": "Monocular depth estimation is often described as an illposed and inherently ambiguous problem. Estimating depth from 2D images is a crucial step in scene reconstruction, 3D object recognition, segmentation, and detection. The problem can be framed as: given a single RGB image as input, predict a dense depth map for each pixel. This problem is worsened by the fact that most scenes have large texture and structural variations, object occlusions, and rich geometric detailing. All these factors contribute to difficulty in accurate depth estimation. In this paper, we review five papers that attempt to solve the depth estimation problem with various techniques including supervised, weakly-supervised, and unsupervised learning techniques. We then compare these papers and understand the improvements made over one another. Finally, we explore potential improvements that can aid to better solve this problem."}
{"_id": "6fee63e0ae4bc1f3fd08044d7d694bb17b9c059c", "title": "Cross-Cultural Software Production and Use: A Structurational Analysis", "text": "This paper focuses on cross-cultural software production and use, which is increasingly common in today's more globalized world. A theoretical basis for analysis is developed, using concepts drawn from structuration theory. The theory is illustrated using two cross-cultural case studies. It is argued that structurational analysis provides a deeper examination of cross-cultural working and IS than is found in the current literature, which is dominated by Hofstede-type studies, in particular, the theoretical approach can be used to analyze cross-cultural conflict and contradiction, cultural heterogeneity, detailed work patterns, and the dynamic nature of culture. The paper contributes to the growing body of literature that emphasizes the essential role of cross-cultural understanding in contemporary society. 'Michael D. Myers was the accepting senior editor for this paper. introduction There has been much debate over the last decade about the major sociat transformations taking place in the world such as the increasing interconnectedness of different societies, the compression of time and space, and an intensification of consciousness of the world as a whole (Robertson 1992), Such changes are often tabeted with the term globatization, atthough the precise nature of this phenomenon is highly complex on closer examination. For example. Beck (2000) distinguishes between globality, the change in consciousness of the world as a single entity, and globaiism, the ideology of neoliberatism which argues that the world market eliminates or supplants the importance of locat potiticat action. Despite the complexity of the globalization phenomena, all commentators would agree that information and communication technologies (ICTs) are deeply implicated in the changes that are taking ptace through their abitity to enabte new modes of work, communication, and organization MiS Quarterly Vol. 26 No. 4. pp. 359-380/December 2002 359 Walsham/Cross-Cultural Software Production & Use across time and space. For example, the influential work of Castells (1996, 1997, 1998) argues that we are in the \"information age\" where information generation, processing, and transformation are fundamental to societal functioning and societal change, and where ICTs enable the pervasive expansion of networking throughout the social structure. However, does globalization, and the related spread of ICTs, imply that the world is becoming a homogeneous arena for gtobat business and gtobal attitudes, with differences between organizations and societies disappearing? There are many authors who take exception to this conclusion. For exampie, Robertson (1992) discussed the way in which imported themes are indigenized in particular societies with tocat culture constraining receptivity to some ideas rather than others, and adapting them in specific ways. He cited Japan as a good example of these glocalization processes. White accepting the idea of time-space compression facilitated by ICTs, Robertson argued that one of its main consequences is an exacerbation of collisions between gtobat, societat, and communat attitudes. Simitarty, Appadural {1997), coming from a nonWestern background, argued against the gtobat homogenization thesis on the grounds that difl'erent societies witt appropriate the \"materials of modernity\" differently depending on their specific geographies, histories, and languages. 
Watsham {2001) devetoped a related argument, with a specific focus on the rote of ICTs, concluding that globat diversity needs to be a key focus when devetoping and using such technotogies. If these latter arguments are broadty correct, then working with tCTs in and across different cuttures should prove to be problematic, in that there wilt be different views ofthe relevance, appticabitity, and vatue of particular modes of working and use of ICTs which may produce conflict. For exampte, technotogy transfer from one society to another involves the importing of that technology into an \"atien\" cutturat context where its value may not be similarly perceived to that in its original host cutture. Simitarty, cross-cuttural communication through tCTs, or cross-cultural information systems (IS) devetopment teams, are likely to confront issues of incongruence of values and attitudes. The purpose of this paper is to examine a particular topic within the area of cross-cutturat working and tCTs, namety that of software production and use; in particutar, where the software is not devetoped in and for a specific cutturat group. A primary goat is to devetop a theoreticat basis for anatysis of this area. Key eiements of this basis, which draws on structuration theory, are described in the next section of the paper, tn order to iltustrate the theoreticat basis and its vatue in analyzing real situations, the subsequent sections draw on the field data from two published case studies of cross-cultural software development and application. There is an extensive titerature on cross-cutturat working and IS, and the penultimate section ofthe paper reviews key etements of this titerature, and shows how the anatysis of this paper makes a new contribution. In particular, it witt be argued that the structurationat analysis enabtes a more sophisticated and detailed consideration of issues in cross-culturat software production under four specific headings: cross-cultural contradiction and conflict; cultural heterogeneity; detailed work patterns in different cuttures; and the dynamic, emergent nature of cutture. The final section of the paper wilt summarize some theoretical and practical implications. Structuration Theory, Cuiture and iS The theoretical basis for this paper draws on structuration theory {Giddens 1979, 1984). This theory has been highty inftuentiat in sociology and the social sciences generalty since Giddens first developed the ideas some 20 years ago. In addition, the theory has received considerable attention in the IS field {for a good review, see Jones 1998). The focus here, however, wilt be on how structuration theory can offer a new way of looking 360 MIS Quarterly Vol. 26 No. 4/December 2002 Walsham/Cross-Cuitural Software Production & Use Table 1. Structuration Theory, Culture, and ICTs: Some Key Concepts"}
{"_id": "69d40b2bc094e1ec938617d8cdaf4f7ac227ead3", "title": "LTSA-WS: a tool for model-based verification of web service compositions and choreography", "text": "In this paper we describe a tool for a model-based approach to verifying compositions of web service implementations. The tool supports verification of properties created from design specifications and implementation models to confirm expected results from the viewpoints of both the designer and implementer. Scenarios are modeled in UML, in the form of Message Sequence Charts (MSCs), and then compiled into the Finite State Process (FSP) process algebra to concisely model the required behavior. BPEL4WS implementations are mechanically translated to FSP to allow an equivalence trace verification process to be performed. By providing early design verification and validation, the implementation, testing and deployment of web service compositions can be eased through the understanding of the behavior exhibited by the composition. The approach is implemented as a plug-in for the Eclipse development environment providing cooperating tools for specification, formal modeling, verification and validation of the composition process."}
{"_id": "0f8a645b6e204d6bd06191ebc685fe2d887dbde8", "title": "Breaking the News: First Impressions Matter on Online News", "text": "A growing number of people are changing the way they consume news, replacing the traditional physical newspapers and magazines by their virtual online versions or/and weblogs. The interactivity and immediacy present in online news are changing the way news are being produced and exposed by media corporations. News websites have to create effective strategies to catch people\u2019s attention and attract their clicks. In this paper we investigate possible strategies used by online news corporations in the design of their news headlines. We analyze the content of 69,907 headlines produced by four major global media corporations during a minimum of eight consecutive months in 2014. In order to discover strategies that could be used to attract clicks, we extracted features from the text of the news headlines related to the sentiment polarity of the headline. We discovered that the sentiment of the headline is strongly related to the popularity of the news and also with the dynamics of the posted comments on that particular news."}
{"_id": "7661539d276da03f63864e8df4162521af2d8184", "title": "A new method for decontamination of de novo transcriptomes using a hierarchical clustering algorithm", "text": "Motivation\nThe identification of contaminating sequences in a de novo assembly is challenging because of the absence of information on the target species. For sample types where the target organism is impossible to isolate from its matrix, such as endoparasites, endosymbionts and soil-harvested samples, contamination is unavoidable. A few post-assembly decontamination methods are currently available but are based only on alignments to databases, which can lead to poor decontamination.\n\n\nResults\nWe present a new decontamination method based on a hierarchical clustering algorithm called MCSC. This method uses frequent patterns found in sequences to create clusters. These clusters are then linked to the target species or tagged as contaminants using classic alignment tools. The main advantage of this decontamination method is that it allows sequences to be tagged correctly even if they are unknown or misaligned to a database.\n\n\nAvailability and Implementation\nScripts and documentation about the MCSC decontamination method are available at https://github.com/Lafond-LapalmeJ/MCSC_Decontamination .\n\n\nContact\n: benjamin.mimee@agr.gc.ca.\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online."}
{"_id": "d40a8628453b9be53c57c3d6e50ea1aa438bdd3f", "title": "Understanding self-reflection: how people reflect on personal data through visual data exploration", "text": "Rapid advancements in consumer technologies enable people to collect a wide range of personal data. With a proper means for people to ask questions and explore their data, longitudinal data feeds from multiple self-tracking tools pose great opportunities to foster deep self-reflection. However, most self-tracking tools lack support for self-reflection beyond providing simple feedback. Our overarching goal is to support self-trackers in reflecting on their data and gaining rich insights through visual data exploration. As a first step toward the goal, we built a web-based application called Visualized Self, and conducted an in-lab think-aloud study (N = 11) to examine how people reflect on their personal data and what types of insights they gain throughout the reflection. We discuss lessons learned from studying with Visualized Self, and suggest directions for designing visual data exploration tools for fostering self-reflection."}
{"_id": "dadd12956a684202ac0ee6b7d599434a1a50fee2", "title": "Flavonoids in food and their health benefits.", "text": "There has been increasing interest in the research of flavonoids from dietary sources, due to growing evidence of the versatile health benefits of flavonoids through epidemiological studies. As occurrence of flavonoids is directly associated with human daily dietary intake of antioxidants, it is important to evaluate flavonoid sources in food. Fruits and vegetables are the main dietary sources of flavonoids for humans, along with tea and wine. However, there is still difficulty in accurately measuring the daily intake of flavonoids because of the complexity of existence of flavonoids from various food sources, the diversity of dietary culture, and the occurrence of a large amount of flavonoids itself in nature. Nevertheless, research on the health aspects of flavonoids for humans is expanding rapidly. Many flavonoids are shown to have antioxidative activity, free-radical scavenging capacity, coronary heart disease prevention, and anticancer activity, while some flavonoids exhibit potential for anti-human immunodeficiency virus functions. As such research progresses. further achievements will undoubtedly lead to a new era of flavonoids in either foods or pharmaceutical supplements. Accordingly, an appropriate model for a precise assessment of intake of flavonoids needs to be developed. Most recent research has focused on the health aspects of flavonoids from food sources for humans. This paper reviews the current advances in flavonoids in food, with emphasis on health aspects on the basis of the published literature, which may provide some guidance for researchers in further investigations and for industries in developing practical health agents."}
{"_id": "f568818ca0eccc3e04213d300dfa4e4e7e81ba8c", "title": "Open/Closed Eye Analysis for Drowsiness Detection", "text": "Drowsiness detection is vital in preventing traffic accidents. Eye state analysis - detecting whether the eye is open or closed - is critical step for drowsiness detection. In this paper, we propose an easy algorithm for pupil center and iris boundary localization and a new algorithm for eye state analysis, which we incorporate into a four step system for drowsiness detection: face detection, eye detection, eye state analysis, and drowsy decision. This new system requires no training data at any step or special cameras. Our eye detection algorithm uses Eye Map, thus achieving excellent pupil center and iris boundary localization results on the IMM database. Our novel eye state analysis algorithm detects eye state using the saturation (S) channel of the HSV color space. We analyze our eye state analysis algorithm using five video sequences and show superior results compared to the common technique based on distance between eyelids."}
{"_id": "e48385aa4bea1d8454364c23a9f7f18b6565dc6c", "title": "Indoors forensic entomology: colonization of human remains in closed environments by specific species of sarcosaprophagous flies.", "text": "Fly species that are commonly recovered on human corpses concealed in houses or other dwellings are often dependent on human created environments and might have special features in their biology that allow them to colonize indoor cadavers. In this study we describe nine typical cases involving forensically relevant flies on human remains found indoors in southern Finland. Eggs, larvae and puparia were reared to adult stage and determined to species. Of the five species found the most common were Lucilia sericata Meigen, Calliphora vicina Robineau-Desvoidy and Protophormia terraenovae Robineau-Desvoidy. The flesh fly Sarcophaga caerulescens Zetterstedt is reported for the first time to colonize human cadavers inside houses and a COI gene sequence based DNA barcode is provided for it to help facilitate identification in the future. Fly biology, colonization speed and the significance of indoors forensic entomological evidence are discussed."}
{"_id": "672eb3ae0edb85a592b98b0456a07616a2c73670", "title": "Affordances in Grounded Language Learning", "text": "We present a novel methodology involving mappings between different modes of semantic representations. We propose distributional semantic models as a mechanism for representing the kind of world knowledge inherent in the system of abstract symbols characteristic of a sophisticated community of language users. Then, motivated by insight from ecological psychology, we describe a model approximating affordances, by which we mean a language learner\u2019s direct perception of opportunities for action in an environment. We present a preliminary experiment involving mapping between these two representational modalities, and propose that our methodology can become the basis for a cognitively inspired model of grounded language learning."}
{"_id": "2f072c1e958c64fbd87af763e5f6ed14747bcbf7", "title": "LSTM-Based ECG Classification for Continuous Monitoring on Personal Wearable Devices", "text": "A novel ECG classification algorithm is proposed for continuous cardiac monitoring on wearable devices with limited processing capacity. The proposed solution employs a novel architecture consisting of wavelet transform and multiple LSTM recurrent neural networks (Fig. 1). Experimental evaluations show superior ECG classification performance compared to previous works. Measurements on different hardware platforms show the proposed algorithm meets timing requirements for continuous and real-time execution on wearable devices. In contrast to many compute-intensive deep-learning based approaches, the proposed algorithm is lightweight, and therefore, brings continuous monitoring with accurate LSTM-based ECG classification to wearable devices. The source code is available online [1]."}
{"_id": "846b945e8af5340a803708a2971268216decaddf", "title": "Data Series Management: Fulfilling the Need for Big Sequence Analytics", "text": "Massive data sequence collections exist in virtually every scientific and social domain, and have to be analyzed to extract useful knowledge. However, no existing data management solution (such as relational databases, column stores, array databases, and time series management systems) can offer native support for sequences and the corresponding operators necessary for complex analytics. We argue for the need to study the theory and foundations for sequence management of big data sequences, and to build corresponding systems that will enable scalable management and analysis of very large sequence collections. To this effect, we need to develop novel techniques to efficiently support a wide range of sequence queries and mining operations, while leveraging modern hardware. The overall goal is to allow analysts across domains to tap in the goldmine of the massive and ever-growing sequence collections they (already) have."}
{"_id": "91e9387c92b7c9d295c1188719d30dd179cc81e8", "title": "DocChat: An Information Retrieval Approach for Chatbot Engines Using Unstructured Documents", "text": "Most current chatbot engines are designed to reply to user utterances based on existing utterance-response (or Q-R)1 pairs. In this paper, we present DocChat, a novel information retrieval approach for chatbot engines that can leverage unstructured documents, instead of Q-R pairs, to respond to utterances. A learning to rank model with features designed at different levels of granularity is proposed to measure the relevance between utterances and responses directly. We evaluate our proposed approach in both English and Chinese: (i) For English, we evaluate DocChat on WikiQA and QASent, two answer sentence selection tasks, and compare it with state-of-the-art methods. Reasonable improvements and good adaptability are observed. (ii) For Chinese, we compare DocChat with XiaoIce2, a famous chitchat engine in China, and side-by-side evaluation shows that DocChat is a perfect complement for chatbot engines using Q-R pairs as main source of responses."}
{"_id": "141ca542912b6ecac0babd9b5e17169db0ded49b", "title": "An integrated cloud-based platform for labor monitoring and data analysis in precision agriculture", "text": "Harvest labor has became a prevailing cost in cherry and other Specialty Crops industry. We developed an integrated solution that provided real-time labor monitoring, payroll accrual, and labor-data-based analysis. At the core of our solution is a cloud-based information system that collects labor data from purposely designed labor monitoring devices, and visualizes real-time labor productivity data through a mobile-friendly user interface. Our solution used a proprietary process [1] to accurately associate labor data with its related worker and position under a many-to-many employment relation. We also describe our communication API and protocol, which are specifically designed to improve the reliability of data communication within an orchard. Besides its immediate benefits in improving the efficiency and accuracy of labor monitoring, our solution also enables data analysis and visualization based on harvest labor data. As an example, we discuss our approach of yield mapping based on harvest labor data. We implemented the platform and deployed the system on a cloud-based computing platform for better scalability. An early version of the system has been tested during the 2012 harvest season in cherry orchards in the U.S. Pacific Northwest Region."}
{"_id": "9f54261f4179e710427d56d41e944f9a45ec581c", "title": "PALSAR Radiometric and Geometric Calibration", "text": "This paper summarizes the results obtained from geometric and radiometric calibrations of the Phased-Array L-Band Synthetic Aperture Radar (PALSAR) on the Advanced Land Observing Satellite, which has been in space for three years. All of the imaging modes of the PALSAR, i.e., single, dual, and full polarimetric strip modes and scanning synthetic aperture radar (SCANSAR), were calibrated and validated using a total of 572 calibration points collected worldwide and distributed targets selected primarily from the Amazon forest. Through raw-data characterization, antenna-pattern estimation using the distributed target data, and polarimetric calibration using the Faraday rotation-free area in the Amazon, we performed the PALSAR radiometric and geometric calibrations and confirmed that the geometric accuracy of the strip mode is 9.7-m root mean square (rms), the geometric accuracy of SCANSAR is 70 m, and the radiometric accuracy is 0.76 dB from a corner-reflector analysis and 0.22 dB from the Amazon data analysis (standard deviation). Polarimetric calibration was successful, resulting in a VV/HH amplitude balance of 1.013 (0.0561 dB) with a standard deviation of 0.062 and a phase balance of 0.612deg with a standard deviation of 2.66deg ."}
{"_id": "d6b3be85cae20761285d347bafac7b2c2b3b176a", "title": "Cascaded seven level inverter with reduced number of switches using level shifting PWM technique", "text": "A multilevel inverter is a power electronic device that is used for high voltage and high power applications and has many advantages like, low switching stress, low total harmonic distortion (THD). Hence, the size and bulkiness of passive filters can be reduced. This paper proposes two new topologies of a 7-level cascaded multilevel inverter with reduced number of switches than that of conventional type which has 12 switches. The topologies consist of circuits with 9 switches and 7 switches for the same 7-level output. Therefore with less number of switches, there will be a reduction in gate drive circuitry and also very few switches will be conducting for specific intervals of time. The SPWM technique is implemented using multicarrier wave signals. Level Shifted triangular waves are used in comparison with sinusoidal reference to generate Sine PWM switching sequence. The number of level shifted triangular waves depends on the number of levels in the output. i.e. for n levels, n-1 number of carrier waves. This paper uses 1 KHz SPWM pulses with a modulation index of 0.8. The circuits are simulated using SPWM technique and the effect of the harmonic spectrum is analyzed. A comparison is made for the topologies with 9 switches and 7 switches and an effective reduction in THD has been observed for the circuits with less number of switches. The THD for 9 switches is 14% and the THD for 7 switches is 12.5%. The circuits are modeled and simulated with the help of MATLAB/SIMULINK."}
{"_id": "49deb46f1b045ee3b2f85d62015b9c42eb393e76", "title": "Gaze estimation method based on an aspherical model of the cornea: surface of revolution about the optical axis of the eye", "text": "A novel gaze estimation method based on a novel aspherical model of the cornea is proposed in this paper. The model is a surface of revolution about the optical axis of the eye. The calculation method is explained on the basis of the model. A prototype system for estimating the point of gaze (POG) has been developed using this method. The proposed method has been found to be more accurate than the gaze estimation method based on a spherical model of the cornea."}
{"_id": "446fb5f82d5f9320370798f82a5aad0f8255c1ff", "title": "Evaluation of Multi-view 3D Reconstruction Software", "text": "A number of software solutions for reconstructing 3D models from multi-view image sets have been released in recent years. Based on an unordered collection of photographs, most of these solutions extract 3D models using structure-from-motion (SFM) algorithms. In this work, we compare the resulting 3D models qualitatively and quantitatively. To achieve these objectives, we have developed different methods of comparison for all software solutions. We discuss the perfomance and existing drawbacks. Particular attention is paid to the ability to create printable 3D models or 3D models usable for other applications."}
{"_id": "fe48ab7bc677d59094d4414d538a96bc4f99176b", "title": "Beautiful Math, Part 2: Aesthetic Patterns Based on Fractal Tilings", "text": "A fractal tiling (f-tiling) is a tiling whose boundary is fractal. This article presents two families of rare, infinitely many f-tilings. Each f-tiling is constructed by reducing tiles by a fixed scaling factor, using a single prototile, which is a segment of a regular polygon. The authors designed invariant mappings to automatically produce appealing seamless, colored patterns from such tilings."}
{"_id": "9edf6f5e799ca6fa0c2d5510d96cbf2848005cb9", "title": "The art of deep learning ( applied to NLP )", "text": "In many regards, tuning deep-learning networks is still more an art than it is a technique. Choosing the correct hyper-parameters and getting a complex network to learn properly can be daunting to people not well versed in that art. Taking the example of entailment classification, this work will analyze different methods and tools that can be used in order to diagnose learning problems. It will aim at providing an deep understanding of some key hyper-parameters, sketch out their effects and explain how to identify recurrent problems when tuning them."}
{"_id": "f457cdc4212fdf5979aa3d05498170eb610ea144", "title": "Auditing and Assessment of Data Traffic Flows in an IoT Architecture", "text": "Recent advances in the development of the Internet of Things and ICT have completely changed the way citizens interact with Smart City environments increasing the demand of more services and infrastructures in many different contexts. Furthermore, citizens require to be active users in a flexible smart living lab, with the possibility to access Smart City data, analyze them, perform actions and receive notifications based on automated decision-making processes. Critical problems could arise if the continuity of data flows and communication among connected IoT devices and data-driven applications is interrupted or lost, due to some devices or system malfunction or unexpected behavior. The proposed solution is a set of instruments, aimed at real-time collecting and storing IoT and Smart City data (data shadow), as well as auditing data traffic flows in an IoT Smart City Architecture, with the purpose of quantitatively monitoring the status and detecting potential anomalies and malfunctions at level of single IoT device and/or service. These instruments are the DevDash and AMMA tools, designed and realized within the Snap4City framework. Specific use cases have been provided to highlight the capabilities of these instruments in terms of data indexing, monitoring and analysis."}
{"_id": "07f1bf314056d39c24e995dab1e0a44a5cae3df0", "title": "Probability Estimates for Multi-class Classification by Pairwise Coupling", "text": "Pairwise coupling is a popular multi-class classification method that combines together all pairwise comparisons for each pair of classes. This paper presents two approaches for obtaining class probabilities. Both methods can be reduced to linear systems and are easy to implement. We show conceptually and experimentally that the proposed approaches are more stable than two existing popular methods: voting and [3]."}
{"_id": "0acf1a74e6ed8c323192d2b0424849820fe88715", "title": "Support Vector Machine Active Learning with Applications to Text Classification", "text": "Support vector machines have met with significant success in numerous real-world learning tasks. However, like most machine learning algorithms, they are generally applied using a randomly selected training set classified in advance. In many settings, we also have the option of using pool-based active learning. Instead of using a randomly selected training set, the learner has access to a pool of unlabeled instances and can request the labels for some number of them. We introduce a new algorithm for performing active learning with support vector machines, i.e., an algorithm for choosing which instances to request next. We provide a theoretical motivation for the algorithm using the notion of a version space. We present experimental results showing that employing our active learning method can significantly reduce the need for labeled training instances in both the standard inductive and transductive settings."}
{"_id": "1a090df137014acab572aa5dc23449b270db64b4", "title": "LIBSVM: a library for support vector machines", "text": ""}
{"_id": "9ae252d3b0821303f8d63ba9daf10030c9c97d37", "title": "A Bayesian hierarchical model for learning natural scene categories", "text": "We propose a novel approach to learn and recognize natural scene categories. Unlike previous work, it does not require experts to annotate the training set. We represent the image of a scene by a collection of local regions, denoted as codewords obtained by unsupervised learning. Each region is represented as part of a \"theme\". In previous work, such themes were learnt from hand-annotations of experts, while our method learns the theme distributions as well as the codewords distribution over the themes without supervision. We report satisfactory categorization performances on a large set of 13 categories of complex scenes."}
{"_id": "fa6cbc948677d29ecce76f1a49cea01a75686619", "title": "Modeling the Shape of the Scene: A Holistic Representation of the Spatial Envelope", "text": "In this paper, we propose a computational model of the recognition of real world scenes that bypasses the segmentation and the processing of individual objects or regions. The procedure is based on a very low dimensional representation of the scene, that we term the Spatial Envelope. We propose a set of perceptual dimensions (naturalness, openness, roughness, expansion, ruggedness) that represent the dominant spatial structure of a scene. Then, we show that these dimensions may be reliably estimated using spectral and coarsely localized information. The model generates a multidimensional space in which scenes sharing membership in semantic categories (e.g., streets, highways, coasts) are projected closed together. The performance of the spatial envelope model shows that specific information about object shape or identity is not a requirement for scene categorization and that modeling a holistic representation of the scene informs about its probable semantic category."}
{"_id": "2c688c40374fee862e0f0038696f2951f1927337", "title": "When two choices are not enough: Balancing at scale in Distributed Stream Processing", "text": "Carefully balancing load in distributed stream processing systems has a fundamental impact on execution latency and throughput. Load balancing is challenging because real-world workloads are skewed: some tuples in the stream are associated to keys which are significantly more frequent than others. Skew is remarkably more problematic in large deployments: having more workers implies fewer keys per worker, so it becomes harder to \u201caverage out\u201d the cost of hot keys with cold keys. We propose a novel load balancing technique that uses a heavy hitter algorithm to efficiently identify the hottest keys in the stream. These hot keys are assigned to d \u2265 2 choices to ensure a balanced load, where d is tuned automatically to minimize the memory and computation cost of operator replication. The technique works online and does not require the use of routing tables. Our extensive evaluation shows that our technique can balance real-world workloads on large deployments, and improve throughput and latency by 150% and 60% respectively over the previous state-of-the-art when deployed on Apache Storm."}
{"_id": "2fc32ad5f46e866dd24774e55ea07a9ba1a85df6", "title": "Smart Configuration of Smart Environments", "text": "One of the central research challenges in the Internet of Things and Ubiquitous Computing domains is how users can be enabled to \u201cprogram\u201d their personal and industrial smart environments by combining services that are provided by devices around them. We present a service composition system that enables the goal-driven configuration of smart environments for end users by combining semantic metadata and reasoning with a visual modeling tool. In contrast to process-driven approaches where service mashups are statically defined, we make use of embedded semantic API descriptions to dynamically create mashups that fulfill the user's goal. The main advantage of our system is its high degree of flexibility, as service mashups can adapt to dynamic environments and are fault-tolerant with respect to individual services becoming unavailable. To support users in expressing their goals, we integrated a visual programming tool with our system that allows to model the desired state of a smart environment graphically, thereby hiding the technicalities of the underlying semantics. Possible applications of the presented system include the management of smart homes to increase individual well-being, and reconfigurations of smart environments, for instance in the industrial automation or healthcare domains."}
{"_id": "6697bd267ccf363bc1b8ab7cb971b880495ff3f1", "title": "Smart Locks: Lessons for Securing Commodity Internet of Things Devices", "text": "We examine the security of home smart locks: cyber-physical devices that replace traditional door locks with deadbolts that can be electronically controlled by mobile devices or the lock manufacturer's remote servers. We present two categories of attacks against smart locks and analyze the security of five commercially-available locks with respect to these attacks. Our security analysis reveals that flaws in the design, implementation, and interaction models of existing locks can be exploited by several classes of adversaries, allowing them to learn private information about users and gain unauthorized home access. To guide future development of smart locks and similar Internet of Things devices, we propose several defenses that mitigate the attacks we present. One of these defenses is a novel approach to securely and usably communicate a user's intended actions to smart locks, which we prototype and evaluate. Ultimately, our work takes a first step towards illuminating security challenges in the system design and novel functionality introduced by emerging IoT systems."}
{"_id": "f0be59cebe9c67353f5e84fe3abdca4cc360f03b", "title": "A hybrid machine learning approach to automatic plant phenotyping for smart agriculture", "text": "Recently, a new ICT approach to agriculture called \u201cSmart Agriculture\u201d has been received great attention to support farmers' decision-making for good final yield on various kinds of field conditions. For this purpose, this paper presents two image sensing methods that enable an automatic observation to capture flowers and seedpods of soybeans in real fields. The developed image sensing methods are considered as sensors in an agricultural cyber-physical system in which big data on the growth status of agricultural plants and environmental information (e.g., weather, temperature, humidity, solar radiation, soil condition, etc.) are analyzed to mine useful rules for appropriate cultivation. The proposed image sensing methods are constructed by combining several image processing and machine learning techniques. The flower detection is realized based on a coarse-to-fine approach where candidate areas of flowers are first detected by SLIC and hue information, and the acceptance of flowers is decided by CNN. In the seedpod detection, candidates of seedpod regions are first detected by the Viola-Jones object detection method, and we also use CNN to make a final decision on the acceptance of detected seedpods. The performance of the proposed image sensing methods is evaluated for a data set of soybean images that were taken from a crowd of soybeans in real agricultural fields in Hokkaido, Japan."}
{"_id": "a4de5334f527fed37578da04461bbb0b077b6005", "title": "Low temperature Cu-Cu direct bonding for 3D-IC by using fine crystal layer", "text": "In this paper, we report a method of low temperature solid diffusion bonding. To investigate bondability of solid diffusion, we examined the effect of bump metals and bump planarization methods. Cu and Au bump were used for bump metals and CMP and ultra-precision cutting were used for bump planarization methods. We found that fine crystal layer could be formed on only cut Cu and Au bumps, and especially cut Cu bumps had a thick fine crystal layer on the surface. The layer on cut Cu bump was found to be easily to recrystallize at low temperature condition of 150 degree C. Moreover, the bonding interface of cut Cu bump disappeared at 200 degree C for 30 min, which means solid diffusion across the interface was realized with the contribution of fine crystal layer. In addition, for Cu-Cu direct bonding, formic acid treatment before bonding is effective because formic acid can react at low temperature without destroying fine crystal layer. That led to achieve high bonding strength between cut Cu bumps."}
{"_id": "3520ba596291829fbc749fa44f573a176a57f554", "title": "Hybrid excited claw pole electric machine", "text": "The paper presents the concept and results of simulation and experimental research of a claw pole generator with hybrid excitation. Hybrid excitation is performed with a conventional coil located between two parts of the claw-shape rotor and additional permanent magnets which are placed on claw poles. Within the research first a simulation and next constructed experimental model has been developed on the basis of the mass-produced vehicle alternator. Experimental researches have shown that - at a suitable rotational speed - it is possible to self-excite of the generator without any additional source of electrical power."}
{"_id": "9a25afa23b40f57229b9642b519fe67a6019ef41", "title": "A 92GHz bandwidth SiGe BiCMOS HBT TIA with less than 6dB noise figure", "text": "A low-noise, broadband amplifier with resistive degeneration and transimpedance feedback is reported with 200 mVpp input linearity and less than 6 dB noise figure up to 88 GHz. The measured gain of 13 dB, noise figure, linearity, and group delay variation of \u00b11.5 ps are in excellent agreement with simulation. Eye diagram measurements were conducted up to 120 Gb/s and a dynamic range larger than 36 dB was obtained from eye diagram measurements at 40 Gb/s. The chip, which includes a 50\u03a9 output buffer, occupies 0.138 mm2 and consumes 21 mA from a 2.3V supply."}
{"_id": "a05e84f77e1dacaa1c59ba0d92919bdcfe4debbb", "title": "Video Question Answering via Hierarchical Spatio-Temporal Attention Networks", "text": "Open-ended video question answering is a challenging problem in visual information retrieval, which automatically generates the natural language answer from the referenced video content according to the question. However, the existing visual question answering works only focus on the static image, which may be ineffectively applied to video question answering due to the lack of modeling the temporal dynamics of video contents. In this paper, we consider the problem of open-ended video question answering from the viewpoint of spatio-temporal attentional encoderdecoder learning framework. We propose the hierarchical spatio-temporal attention network for learning the joint representation of the dynamic video contents according to the given question. We then develop the spatio-temporal attentional encoder-decoder learning method with multi-step reasoning process for open-ended video question answering. We construct a large-scale video question answering dataset. The extensive experiments show the effectiveness of our method."}
{"_id": "c6916fc5b6798c91c1c7f486d95cc94bc0a61afa", "title": "Persistence Diagrams with Linear Machine Learning Models", "text": "Persistence diagrams have been widely recognized as a compact descriptor for characterizing multiscale topological features in data. When many datasets are available, statistical features embedded in those persistence diagrams can be extracted by applying machine learnings. In particular, the ability for explicitly analyzing the inverse in the original data space from those statistical features of persistence diagrams is significantly important for practical applications. In this paper, we propose a unified method for the inverse analysis by combining linear machine learning models with persistence images. The method is applied to point clouds and cubical sets, showing the ability of the statistical inverse analysis and its advantages. This work is partially supported by JSPS KAKENHI Grant Number JP 16K17638, JST CREST Mathematics15656429, JST \u201cMaterials research by Information Integration\u201d Initiative (MI2I) project of the Support Program for Starting Up Innovation Hub, Structural Materials for Innovation Strategic Innovation Promotion Program D72, and New Energy and Industrial Technology Development Organization (NEDO). Ippei Obayashi Advanced Institute for Materials Research (WPI-AIMR), Tohoku University. 2-1-1 Katahira, Aoba-ku, Sendai, 980-8577 Japan Tel.: +81-22-217-6320 Fax: +81-22-217-5129 E-mail: ippei.obayashi.d8@tohoku.ac.jp Y. Hiraoka Advanced Institute for Materials Research (WPI-AIMR), Tohoku University. Center for Materials research by Information Integration (CMI2), Research and Services Division of Materials Data and Integrated System (MaDIS), National Institute for Materials Science (NIMS). E-mail: hiraoka@tohoku.ac.jp"}
{"_id": "5e9bc2f884254f4f2256f6943a46d84ac065108d", "title": "The neural correlates of trustworthiness evaluations of faces and brands: Implications for behavioral and consumer neuroscience.", "text": "When we buy a product of a brand, we trust the brand to provide good quality and reliability. Therefore, trust plays a major role in consumer behavior. It is unclear, however, how trust in brands is processed in the brain and whether it is processed differently from interpersonal trust. In this study, we used fMRI to investigate the neural correlates of interpersonal and brand trust by comparing the brain activation patterns during explicit trustworthiness judgments of faces and brands. Our results showed that while there were several brain areas known to be linked to trustworthiness evaluations, such as the amygdalae, more active in trustworthiness judgments when compared to a control task (familiarity judgment) for faces, no such difference was found for brands. Complementary ROI analysis revealed that the activation of both amygdalae was strongest for faces in the trustworthiness judgments. The direct comparison of the brain activation patterns during the trustworthiness evaluations between faces and brands in this analysis showed that trustworthiness judgments of faces activated the orbitofrontal cortex, another region that was previously linked to interpersonal trust, more strongly than trustworthiness judgments of brands. Further, trustworthiness ratings of faces, but not brands, correlated with activation in the orbitofrontal cortex. Our results indicate that the amygdalae, as well as the orbitofrontal cortex, play a prominent role in interpersonal trust (faces), but not in trust for brands. It is possible that this difference is due to brands being processed as cultural objects rather than as having human-like personality characteristics."}
{"_id": "682c27724a404dfd41e2728813c66a5ee2b20dea", "title": "SeVR+: Secure and privacy-aware cloud-assisted video reporting service for 5G vehicular networks", "text": "Recently, Eiza et al. proposed a secure and privacy-aware scheme for video reporting service in 5G enabled Vehicular Ad hoc Networks (VANET). They employ heterogeneous network and cloud platform to obtain more availability with low latency platform for an urgent accident video reporting service. In their study, for the first time the security issues of 5G enabled vehicular networks have been addressed. Eiza et al. claimed that their scheme guarantees user's privacy, confidentiality, non-repudiation, message integrity and availability for participant vehicles. In this paper, we show that Eiza et al. scheme is vulnerable to replay, message fabrication and DoS attacks. Regarding the sensibility of video reporting services in VANET, then, we propose an efficient protocol to overcome security weaknesses of Eiza et.al. scheme and show that the proposed protocol resists against commonplace attacks in VANET with acceptable communication and computation overhead."}
{"_id": "fd4d6ee3aed9f5e3f63b1bce29e8706c3c5917dd", "title": "`Putting the Face to the Voice' Matching Identity across Modality", "text": "Speech perception provides compelling examples of a strong link between auditory and visual modalities. This link originates in the mechanics of speech production, which, in shaping the vocal tract, determine the movement of the face as well as the sound of the voice. In this paper, we present evidence that equivalent information about identity is available cross-modally from both the face and voice. Using a delayed matching to sample task, XAB, we show that people can match the video of an unfamiliar face, X, to an unfamiliar voice, A or B, and vice versa, but only when stimuli are moving and are played forward. The critical role of time-varying information is underlined by the ability to match faces to voices containing only the coarse spatial and temporal information provided by sine wave speech [5]. The effect of varying sentence content across modalities was small, showing that identity-specific information is not closely tied to particular utterances. We conclude that the physical constraints linking faces to voices result in bimodally available dynamic information, not only about what is being said, but also about who is saying it."}
{"_id": "5b2727716de710dddb5bdca184912ff0e269e2ae", "title": "Control and path planning of a walk-assist robot using differential flatness", "text": "With the growth of elderly population in our society, technology will play an important role in providing functional mobility to humans. In this paper, we propose a robot walking helper with both passive and active control modes of guidance. From the perspective of human safety, the passive mode adopts the braking control law on the wheels to differentially steer the vehicle. The active mode can guide the user efficiently when the passive control with user-applied force is not adequate for guidance. The theory of differential flatness is used to plan the trajectory of control gains within the proposed scheme of the controller. Since the user input force is not known a-priori, the theory of model predictive control is used to periodically compute the trajectory of these control gains. The simulation results show that the walking assist robot, along with the structure of this proposed control scheme, can guide the user to a goal effectively."}
{"_id": "f7457a69b32cf761508ae51016204bab473f2035", "title": "Electrically Large Zero-Phase-Shift Line Grid-Array UHF Near-Field RFID Reader Antenna", "text": "A grid-array antenna using a zero-phase-shift (ZPS) line is proposed to enlarge the interrogation zone of a reader antenna for near-field ultra-high-frequency (UHF) radio frequency identification (RFID) system. The proposed grid-array antenna is composed of a number of grid cells and a double-sided parallel-strip line feeding network. Each grid cell, namely segmented loop constructed by the ZPS line, has uniform and single-direction flowing current along the line. By configuration of the cells sharing a common side with its adjacent grid cells which carry reverse-direction flowing current, a grid-array antenna is formed to generate a strong and uniform magnetic-field distribution over a large interrogation zone even when the perimeter of the interrogation zone reaches up to 3\u03bb (where \u03bb is the operating wavelength in free space) or larger. As an example, a grid-array antenna with 1 \u00d7 2 segmented ZPS line loop cells implemented onto a piece of FR4 printed board (PCB) is designed and prototyped. The results show that the grid-array antenna achieves the impedance matching over the frequency range from 790 to 1040 MHz and produces strong and uniform magnetic-field distribution over an interrogation zone of 308 mm \u00d7 150 mm."}
{"_id": "1e477aa7eb007c493fa92b8450a7f85eb14ccf0c", "title": "101companies: A Community Project on Software Technologies and Software Languages", "text": "101companies is a community project in computer science (or software science) with the objective of developing a free, structured, wiki-accessible knowledge resource including an open-source repository for different stakeholders with interests in software technologies, software languages, and technological spaces; notably: teachers and learners in software engineering or software languages as well as software developers, software technologists, and ontologists. The present paper introduces the 101companies Project. In fact, the present paper is effectively a call for contributions to the project and a call for applications of the project in research and education."}
{"_id": "ee4093f87de9cc612cc83fdc13bd67bfdd366fbe", "title": "An approach to Korean license plate recognition based on vertical edge matching", "text": "License plate recognition (LPR) has many applications in traffic monitoring systems. In this paper, a vertical edge matching based algorithm to recognize Korean license plate from input gray-scale image is proposed. The algorithm is able to recognize license plates in normal shape, as well as plates that are out of shape due to the angle of view. The proposed algorithm is fast enough, the recognition unit of a LPR system can be implemented only in software so that the cost of the system is reduced."}
{"_id": "c02c1a345461f015a683ee8b8d6082649567429f", "title": "Electromagnetic simulation of 3D stacked ICs: Full model vs. S-parameter cascaded based model", "text": "Three-dimensional electromagnetic simulation models are often simplified and/or segmented in order to reduce the simulation time and memory requirements without sacrificing the accuracy of the results. This paper investigates the difference between full model and S-parameter cascaded based model of 3D stacked ICs with the presence of Through Silicon Vias. It is found that the simulation of the full model is required for accurate results, however, a divide and conquers (segmentation) approach can be used for preliminary post layout analysis. Modeling guidelines are discussed and details on the proper choice of ports, boundary conditions, and solver technology are highlighted. A de-embedding methodology is finally explored to improve the accuracy of the cascaded/segmented results."}
{"_id": "1ac52b7d8db223029388551b2db25657ed8c9852", "title": "Solving a Huge Number of Similar Tasks: A Combination of Multi-Task Learning and a Hierarchical Bayesian Approach", "text": "In this paper, we propose a machine-learning solution to problems consisting of many similar prediction tasks. Each of the individual tasks has a high risk of overrtting. We combine two types of knowledge transfer between tasks to reduce this risk: multi-task learning and hierarchical Bayesian modeling. Multi-task learning is based on the assumption that there exist features typical to the task at hand. To nd these features, we train a huge two-layered neural network. Each task has its own output, but shares the weights from the input to the hidden units with all other tasks. In this way a relatively large set of possible explanatory variables (the network inputs) is reduced to a smaller and easier to handle set of features (the hidden units). Given this set of features and after an appropriate scale transformation, we assume that the tasks are exchangeable. This assumption allows for a hierarchical Bayesian analysis in which the hyperparameters can be estimated from the data. EEectively, these hyperpa-rameters act as regularizers and prevent over-tting. We describe how to make the system robust against nonstationarities in the time series and give directions for further improvement. We illustrate our ideas on a database regarding the prediction of newspaper sales."}
{"_id": "3a80c307f2f0782214120f600a81f3cde941b3c3", "title": "True Random Number Generator Embedded in Reconfigurable Hardware", "text": "This paper presents a new True Random Number Generator (TRNG) based on an analog Phase-Locked Loop (PLL) implemented in a digital Altera Field Programmable Logic Device (FPLD). Starting with an analysis of the one available on chip source of randomness the PLL synthesized low jitter clock signal, a new simple and reliable method of true randomness extraction is proposed. Basic assumptions about statistical properties of jitter signal are confirmed by testing of mean value of the TRNG output signal. The quality of generated true random numbers is confirmed by passing standard NIST statistical tests. The described TRNG is tailored for embedded System-On-a-ProgrammableChip (SOPC) cryptographic applications and can provide a good quality true random bit-stream with throughput of several tens of kilobits per second. The possibility of including the proposed TRNG into a SOPC design significantly increases the system security of embedded cryptographic hardware."}
{"_id": "2b0dcde4dba0dcad345af5285553b0b9fdf35f78", "title": "Reliability-aware design to suppress aging", "text": "Due to aging, circuit reliability has become extraordinary challenging. Reliability-aware circuit design flows do virtually not exist and even research is in its infancy. In this paper, we propose to bring aging awareness to EDA tool flows based on so-called degradation-aware cell libraries. These libraries include detailed delay information of gates/cells under the impact that aging has on both threshold voltage (Vth) and carrier mobility (\u03bc) of transistors. This is unlike state of the art which considers Vth only. We show how ignoring \u03bc degradation leads to underestimating guard-bands by 19% on average. Our investigation revealed that the impact of aging is strongly dependent on the operating conditions of gates (i.e. input signal slew and output load capacitance), and not solely on the duty cycle of transistors. Neglecting this fact results in employing insufficient guard-bands and thus not sustaining reliability during lifetime.\n We demonstrate that degradation-aware libraries and tool flows are indispensable for not only accurately estimating guardbands, but also efficiently containing them. By considering aging degradations during logic synthesis, significantly more resilient circuits can be obtained. We further quantify the impact of aging on the degradation of image processing circuits. This goes far beyond investigating aging with respect to path delays solely. We show that in a standard design without any guardbanding, aging leads to unacceptable image quality after just one year. By contrast, if the synthesis tool is provided with the degradation-aware cell library, high image quality is sustained for 10 years (even under worst-case aging and without a guard-band). Hence, using our approach, aging can be effectively suppressed."}
{"_id": "205b0711be40875a7ea9a6ac21c8d434b7830943", "title": "Hexahedral Meshing With Varying Element Sizes", "text": "Hexahedral (or Hex-) meshes are preferred in a number of scientific and engineering simulations and analyses due to their desired numerical properties. Recent state-of-the-art techniques can generate high quality hex-meshes. However, they typically produce hex-meshes with uniform element sizes and thus may fail to preserve small scale features on the boundary surface. In this work, we present a new framework that enables users to generate hex-meshes with varying element sizes so that small features will be filled with smaller and denser elements, while the transition from smaller elements to larger ones is smooth, compared to the octree-based approach. This is achieved by first detecting regions of interest (ROI) of small scale features. These ROIs are then magnified using the as-rigid-as-possible (ARAP) deformation with either an automatically determined or a user-specified scale factor. A hex-mesh is then generated from the deformed mesh using existing approaches that produce hex-meshes with uniform-sized elements. This initial hex-mesh is then mapped back to the original volume before magnification to adjust the element sizes in those ROIs. We have applied this framework to a variety of man-made and natural models to demonstrate its effectiveness."}
{"_id": "889e50a73eb7307b35aa828a1a4392c7a08c1c01", "title": "An Efficient Participant\u2019s Selection Algorithm for Crowdsensing", "text": "With the advancement of mobile technology the use of Smartphone is greatly increased. Everyone has the mobile phones and it becomes the necessity of life. Today, smart devices are flooding the internet data at every time and in any form that cause the mobile crowdsensing (MCS). One of the key challenges in mobile crowd sensing system is how to effectively identify and select the well-suited participants in recruitments from a large user pool. This research work presents the concept of crowdsensing along with the selection process of participants from a large user pool. MCS provides the efficient selection process for participants that how well suited participant\u2019s selects/recruit from a large user pool. For this, the proposed selection algorithm plays our role in which the recruitment of participants takes place with the availability status from the large user pool. At the end, the graphical result presented with the suitable location of the participants and their time slot. Keywords\u2014Mobile crowdsensing (MCS); Mobile Sensing Platform (MSP]); crowd sensing; participant; user pool; crowdsourcing"}
{"_id": "4c5b0aed439c050d95e026a91ebade76793f39c0", "title": "Active-feedback frequency-compensation technique for low-power multistage amplifiers", "text": "An active-feedback frequency-compensation (AFFC) technique for low-power operational amplifiers is presented in this paper. With an active-feedback mechanism, a high-speed block separates the low-frequency high-gain path and high-frequency signal path such that high gain and wide bandwidth can be achieved simultaneously in the AFFC amplifier. The gain stage in the active-feedback network also reduces the size of the compensation capacitors such that the overall chip area of the amplifier becomes smaller and the slew rate is improved. Furthermore, the presence of a left-half-plane zero in the proposed AFFC topology improves the stability and settling behavior of the amplifier. Three-stage amplifiers based on AFFC and nested-Miller compensation (NMC) techniques have been implemented by a commercial 0.8m CMOS process. When driving a 120-pF capacitive load, the AFFC amplifier achieves over 100-dB dc gain, 4.5-MHz gain-bandwidth product (GBW) , 65 phase margin, and 1.5-V/ s average slew rate, while only dissipating 400W power at a 2-V supply. Compared to a three-stage NMC amplifier, the proposed AFFC amplifier provides improvement in both the GBW and slew rate by 11 times and reduces the chip area by 2.3 times without significant increase in the power consumption."}
{"_id": "681a52fa5356334acc7d43ca6e3d4a78e82f12e2", "title": "Effective Question Answering Techniques and their Evaluation Metrics", "text": "Question Answering (QA) is a focused way of information retrieval. Question Answering system tries to get back the accurate answers to questions posed in natural language provided a set of documents. Basically question answering system (QA) has three elements i. e. question classification, information retrieval (IR), and answer extraction. These elements play a major role in Question Answering. In Question classification, the questions are classified depending upon the type of its entity. Information retrieval component is used to determine success by retrieving relevant answer for different questions posted by the intelligent question answering system. Answer extraction module is growing topics in the QA in which ranking and validating a candidate's answer is the major job. This paper offers a concise discussion regarding different Question Answering types. In addition we describe different evaluation metrics used to evaluate the performance of different question answering systems. We also discuss the recent question answering systems developed and their corresponding techniques."}
{"_id": "47e035c0fd01fa0418060ba14612d15bd9a01845", "title": "A 500mA analog-assisted digital-LDO-based on-chip distributed power delivery grid with cooperative regulation and IR-drop reduction in 65nm CMOS", "text": "With the die area of modern processors growing larger and larger, the IR drop across the power supply rail due to its parasitic resistance becomes considerable. There is an urgent demand for local power regulation to reduce the IR drop and to enhance transient response. A distributed power-delivery grid (DPDG) is an attractive solution for large area power-supply applications. The dual-loop distributed micro-regulator in [1] achieves a tight regulation and fast response, but suffers from large ripple and high power consumption due to the comparator-based regulator. Digital low-dropout regulators (DLDOs) [2] can be used as local micro-regulators to implement a DPDG, due to their low-voltage operation and process scalability. Adaptive control [3], asynchronous 3D pipeline control [4], analog-assisted tri-loop control [5], and event-driven PI control [6] are proposed to enhance the transient response speed. However, digital LDOs suffer from the intrinsic limitations of large output ripple and narrow current range. This paper presents an on-chip DPDg with cooperative regulation based on an analog-assisted digital LDO (AADLDO), which inherits the merits of low output ripple and sub-LSB current supply ability from the analog control, and the advantage of low supply voltage operation and adaptive fast response from the digital control."}
{"_id": "1e56ed3d2c855f848ffd91baa90f661772a279e1", "title": "Latent Dirichlet Allocation", "text": "We propose a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams [6], and Hofmann's aspect model , also known as probabilistic latent semantic indexing (pLSI) [3]. In the context of text modeling, our model posits that each document is generated as a mixture of topics, where the continuous-valued mixture proportions are distributed as a latent Dirichlet random variable. Inference and learning are carried out efficiently via variational algorithms. We present empirical results on applications of this model to problems in text modeling, collaborative filtering, and text classification."}
{"_id": "27ae23bb8d284a1fa8c8ab24e23a72e1836ff5cc", "title": "Similarity estimation techniques from rounding algorithms", "text": "(MATH) A locality sensitive hashing scheme is a distribution on a family $\\F$ of hash functions operating on a collection of objects, such that for two objects x,y, Prh\u03b5F[h(x) = h(y)] = sim(x,y), where sim(x,y) \u03b5 [0,1] is some similarity function defined on the collection of objects. Such a scheme leads to a compact representation of objects so that similarity of objects can be estimated from their compact sketches, and also leads to efficient algorithms for approximate nearest neighbor search and clustering. Min-wise independent permutations provide an elegant construction of such a locality sensitive hashing scheme for a collection of subsets with the set similarity measure sim(A,B) = \\frac{|A &Pgr; B|}{|A &Pgr B|}.(MATH) We show that rounding algorithms for LPs and SDPs used in the context of approximation algorithms can be viewed as locality sensitive hashing schemes for several interesting collections of objects. Based on this insight, we construct new locality sensitive hashing schemes for:- A collection of vectors with the distance between \u2192 \\over u and \u2192 \\over v measured by \u00d8(\u2192 \\over u, \u2192 \\over v)/\u03c0, where \u00d8(\u2192 \\over u, \u2192 \\over v) is the angle between \u2192 \\over u) and \u2192 \\over v). This yields a sketching scheme for estimating the cosine similarity measure between two vectors, as well as a simple alternative to minwise independent permutations for estimating set similarity.
- A collection of distributions on n points in a metric space, with distance between distributions measured by the Earth Mover Distance (EMD), (a popular distance measure in graphics and vision). Our hash functions map distributions to points in the metric space such that, for distributions P and Q, EMD(P,Q) &xie; Eh\u03b5\\F [d(h(P),h(Q))] &xie; O(log n log log n). EMD(P, Q).
."}
{"_id": "2b8a80b18cc7a4461c6e532c2f3de7e570d4fcd6", "title": "A probabilistic approach to spatiotemporal theme pattern mining on weblogs", "text": "Mining subtopics from weblogs and analyzing their spatiotemporal patterns have applications in multiple domains. In this paper, we define the novel problem of mining spatiotemporal theme patterns from weblogs and propose a novel probabilistic approach to model the subtopic themes and spatiotemporal theme patterns simultaneously. The proposed model discovers spatiotemporal theme patterns by (1) extracting common themes from weblogs; (2) generating theme life cycles for each given location; and (3) generating theme snapshots for each given time period. Evolution of patterns can be discovered by comparative analysis of theme life cycles and theme snapshots. Experiments on three different data sets show that the proposed approach can discover interesting spatiotemporal theme patterns effectively. The proposed probabilistic model is general and can be used for spatiotemporal text mining on any domain with time and location information."}
{"_id": "682988d3cc614c122745d0e87ad9df0f44c3e432", "title": "Automatic labeling of multinomial topic models", "text": "Multinomial distributions over words are frequently used to model topics in text collections. A common, major challenge in applying all such topic models to any text mining problem is to label a multinomial topic model accurately so that a user can interpret the discovered topic. So far, such labels have been generated manually in a subjective way. In this paper, we propose probabilistic approaches to automatically labeling multinomial topic models in an objective way. We cast this labeling problem as an optimization problem involving minimizing Kullback-Leibler divergence between word distributions and maximizing mutual information between a label and a topic model. Experiments with user study have been done on two text data sets with different genres.The results show that the proposed labeling methods are quite effective to generate labels that are meaningful and useful for interpreting the discovered topic models. Our methods are general and can be applied to labeling topics learned through all kinds of topic models such as PLSA, LDA, and their variations."}
{"_id": "99f231f29d8bbd410bb3edc096a502b1ebda8526", "title": "Automatic labeling hierarchical topics", "text": "Recently, statistical topic modeling has been widely applied in text mining and knowledge management due to its powerful ability. A topic, as a probability distribution over words, is usually difficult to be understood. A common, major challenge in applying such topic models to other knowledge management problem is to accurately interpret the meaning of each topic. Topic labeling, as a major interpreting method, has attracted significant attention recently. However, previous works simply treat topics individually without considering the hierarchical relation among topics, and less attention has been paid to creating a good hierarchical topic descriptors for a hierarchy of topics. In this paper, we propose two effective algorithms that automatically assign concise labels to each topic in a hierarchy by exploiting sibling and parent-child relations among topics. The experimental results show that the inter-topic relation is effective in boosting topic labeling accuracy and the proposed algorithms can generate meaningful topic labels that are useful for interpreting the hierarchical topics."}
{"_id": "f6be1786de10bf484decb16762fa66819930965e", "title": "The Costs and Benefits of Pair Programming", "text": "Pair or collaborative programming is where two programmers develop software side by side at one computer. Using interviews and controlled experiments, the authors investigated the costs and benefits of pair programming. They found that for a development-time cost of about 15%, pair programming improves design quality, reduces defects, reduces staffing risk, enhances technical skills, improves team communications and is considered more enjoyable at statistically significant levels."}
{"_id": "85a1b811113414aced9c9fe18b02dfbd24cb41af", "title": "Comparing White-Box and Black-Box Test Prioritization", "text": "Although white-box regression test prioritization has been well-studied, the more recently introduced black-box prioritization approaches have neither been compared against each other nor against more well-established white-box techniques. We present a comprehensive experimental comparison of several test prioritization techniques, including well-established white-box strategies and more recently introduced black-box approaches. We found that Combinatorial Interaction Testing and diversity-based techniques (Input Model Diversity and Input Test Set Diameter) perform best among the black-box approaches. Perhaps surprisingly, we found little difference between black-box and white-box performance (at most 4% fault detection rate difference). We also found the overlap between black- and white-box faults to be high: the first 10% of the prioritized test suites already agree on at least 60% of the faults found. These are positive findings for practicing regression testers who may not have source code available, thereby making white-box techniques inapplicable. We also found evidence that both black-box and white-box prioritization remain robust over multiple system releases."}
{"_id": "56832b51f1e8c7cd4b45c3591f650c90e0d554fd", "title": "4G- A NEW ERA IN WIRELESS TELECOMMUNICATION", "text": "4G \u2013 \u201cconnect anytime, anywhere, anyhow\u201d promising ubiquitous network access at high speed to the end users, has been a topic of great interest especially for the wireless telecom industry. 4G seems to be the solution for the growing user requirements of wireless broadband access and the limitations of the existing wireless communication system. The purpose of this paper is to provide an overview of the different aspects of 4G which includes its features, its proposed architecture and key technological enablers. It also elaborates on the roadblocks in its implementations. A special consideration has been given to the security concerns of 4G by discussing a security threat analysis model proposed by International Telecommunication Union (ITU). By applying this model, a detailed analysis of threats to 4G and the corresponding measures to counter them can be performed."}
{"_id": "38936f2c404ec886d214f2e529e2d090a7cd6d95", "title": "Collocated photo sharing, story-telling, and the performance of self", "text": "This article reports empirical findings from four inter-related studies, with an emphasis on collocated sharing. Collocated sharing remains important, using both traditional and emerging image-related technologies. Co-present viewing is a dynamic, improvisational construction of a contingent, situated interaction between story-teller and audience. The concept of performance, as articulated differently by Erving Goffman and Judith Butler, is useful understand the enduring importance of co-present sharing of photos and the importance of oral narratives around images in enacting identity and relationships. Finally, we suggest some implications for both HCI research and the design of image-related technologies. & 2009 Elsevier Ltd. All rights reserved."}
{"_id": "39517e37380c64d3c6fe8f9f396500c8e254bfdf", "title": "DoS and DDoS in Named Data Networking", "text": "With the growing realization that current Internet protocols are reaching the limits of their senescence, several on-going research efforts aim to design potential next-generation Internet architectures. Although they vary in maturity and scope, in order to avoid past pitfalls, these efforts seek to treat security and privacy as fundamental requirements. Resilience to Denial-of-Service (DoS) attacks that plague today's Internet is a major issue for any new architecture and deserves full attention. In this paper, we focus on DoS in Named Data Networking (NDN) - a specific candidate for next-generation Internet architecture designs. By naming data instead of its locations, NDN transforms data into a first-class entity and makes itself an attractive and viable approach to meet the needs for many current and emerging applications. It also incorporates some basic security features that mitigate classes of attacks that are commonly seen today. However, NDN's resilience to DoS attacks has not been analyzed to-date. This paper represents a first step towards assessment and possible mitigation of DoS in NDN. After identifying and analyzing several new types of attacks, it investigates their variations, effects and counter-measures. This paper also sheds some light on the debate about relative virtues of self-certifying, as opposed to human-readable, names in the context of content-centric networking."}
{"_id": "d0e9b588bcff6971ecafbe2aedbb622a62619a8a", "title": "Introducing Serendipity in a Content-Based Recommender System", "text": "Today recommenders are commonly used with various purposes, especially dealing with e-commerce and information filtering tools. Content-based recommenders rely on the concept of similarity between the bought/ searched/ visited item and all the items stored in a repository. It is a common belief that the user is interested in what is similar to what she has already bought/searched/visited. We believe that there are some contexts in which this assumption is wrong: it is the case of acquiring unsearched but still useful items or pieces of information. This is called serendipity. Our purpose is to stimulate users and facilitate these serendipitous encounters to happen. This paper presents the design and implementation of a hybrid recommender system that joins a content-based approach and serendipitous heuristics in order to mitigate the over-specialization problem with surprising suggestions."}
{"_id": "0c9ba3329d6ec82ae581cde268614abd0313fdeb", "title": "Optimizing Dialogue Management with Reinforcement Learning: Experiments with the NJFun System", "text": "Designing the dialogue policy of a spoken dialogue system involves many nontrivial choices. This paper presents a reinforcement learning approach for automatically optimizing a dialogue policy, which addresses the technical challenges in applying reinforcement learning to a working dialogue system with human users. We report on the design, construction and empirical evaluation of NJFun, an experimental spoken dialogue system that provides users with access to information about fun things to do in New Jersey. Our results show that by optimizing its performance via reinforcement learning, NJFun measurably improves system performance."}
{"_id": "556a1aefa5122461e97141a49963e76fe15c25bd", "title": "ARROW: GenerAting SignatuRes to Detect DRive-By DOWnloads", "text": "A drive-by download attack occurs when a user visits a webpage which attempts to automatically download malware without the user's consent. Attackers sometimes use a malware distribution network (MDN) to manage a large number of malicious webpages, exploits, and malware executables. In this paper, we provide a new method to determine these MDNs from the secondary URLs and redirect chains recorded by a high-interaction client honeypot. In addition, we propose a novel drive-by download detection method. Instead of depending on the malicious content used by previous methods, our algorithm first identifies and then leverages the URLs of the MDN's central servers, where a central server is a common server shared by a large percentage of the drive-by download attacks in the same MDN. A set of regular expression-based signatures are then generated based on the URLs of each central server. This method allows additional malicious webpages to be identified which launched but failed to execute a successful drive-by download attack. The new drive-by detection system named ARROW has been implemented, and we provide a large-scale evaluation on the output of a production drive-by detection system. The experimental results demonstrate the effectiveness of our method, where the detection coverage has been boosted by 96% with an extremely low false positive rate."}
{"_id": "8c38382f681598e77a2e539473ac9d91b2d7a2d3", "title": "A theorem on polygon cutting with applications", "text": "Let P be a simple polygon with N vertices, each being assigned a weight \u2208 {0,1}, and let C, the weight of P, be the added weight of all vertices. We prove that it is possible, in O(N) time, to find two vertices a,b in P, such that the segment ab lies entirely inside the polygon P and partitions it into two polygons, each with a weight not exceeding 2C/3. This computation assumes that all the vertices have been sorted along some axis, which can be done in O(Nlog N) time. We use this result to derive a number of efficient divide-and-conquer algorithms for: 1. Triangulating an N-gon in O(Nlog N) time. 2. Decomposing an N-gon into (few) convex pieces in O(Nlog N) time. 3. Given an O(Nlog N) preprocessing, computing the shortest distance between two arbitrary points inside an N-gon (i.e., the internal distance), in O(N) time. 4. Computing the longest internal path in an N-gon in O(N2) time. In all cases, the algorithms achieve significant improvements over previously known methods, either by displaying better performance or by gaining in simplicity. In particular, the best algorithms for Problems 2,3,4, known so far, performed respectively in O(N2), O(N2), and O(N4) time."}
{"_id": "e2c318d32e58a1cdc073080dd189a88eab6c07b3", "title": "A meta-analysis of continuing medical education effectiveness.", "text": "INTRODUCTION\nWe undertook a meta-analysis of the Continuing Medical Education (CME) outcome literature to examine the effect of moderator variables on physician knowledge, performance, and patient outcomes.\n\n\nMETHODS\nA literature search of MEDLINE and ERIC was conducted for randomized controlled trials and experimental design studies of CME outcomes in which physicians were a major group. CME moderator variables included the types of intervention, the types and number of participants, time, and the number of intervention sessions held over time.\n\n\nRESULTS\nThirty-one studies met the eligibility criteria, generating 61 interventions. The overall sample-size weighted effect size for all 61 interventions was r = 0.28 (0.18). The analysis of CME moderator variables showed that active and mixed methods had medium effect sizes (r = 0.33 [0.33], r = 0.33 [0.26], respectively), and passive methods had a small effect size (r = 0.20 [0.16], confidence interval 0.15, 0.26). There was a positive correlation between the effect size and the length of the interventions (r = 0.33) and between multiple interventions over time (r = 0.36). There was a negative correlation between the effect size and programs that involved multiple disciplines (r = -0.18) and the number of participants (r = -0.13). The correlation between the effect size and the length of time for outcome assessment was negative (r = -0.31).\n\n\nDISCUSSION\nThe meta-analysis suggests that the effect size of CME on physician knowledge is a medium one; however, the effect size is small for physician performance and patient outcome. The examination of moderator variables shows there is a larger effect size when the interventions are interactive, use multiple methods, and are designed for a small group of physicians from a single discipline."}
{"_id": "205201b657a894eee015be9f1b4269508fca83e8", "title": "Prediction of Yelp Review Star Rating using Sentiment Analysis", "text": "Yelp aims to help people find great local businesses, e.g. restaurants. Automated software is currently used to recommend the most helpful and reliable reviews for the Yelp community, based on various measures of quality, reliability, and activity. However, this is not tailored to each customer. Our goal in this project is to apply machine learning to predict a customer\u2019s star rating of a restaurant based on his/her reviews, as well as other customers\u2019 reviews/ratings, to recommend other restaurants to the customer, as shown in Figure 1."}
{"_id": "a41df0c6d0dea24fef561177bf19450b8413bb21", "title": "An efficient algorithm for the \"optimal\" stable marriage", "text": "In an instance of size n of the stable marriage problem, each of n men and n women ranks the members of the opposite sex in order of preference. A stable matching is a complete matching of men and women such that no man and woman who are not partners both prefer each other to their actual partners under the matching. It is well known [2] that at least one stable matching exists for every stable marriage instance. However, the classical Gale-Shapley algorithm produces a marriage that greatly favors the men at the expense of the women, or vice versa. The problem arises of finding a stable matching that is optimal under some more equitable or egalitarian criterion of optimality. This problem was posed by Knuth [6] and has remained unsolved for some time. Here, the objective of maximizing the average (or, equivalently, the total) \u201csatisfaction\u201d of all people is used. This objective is achieved when a person's satisfaction is measured by the position of his/her partner in his/her preference list. By exploiting the structure of the set of all stable matchings, and using graph-theoretic methods, an O(n4) algorithm for this problem is derived."}
{"_id": "e990a41e8f09e0ef4695c39af351bf25f333eefa", "title": "Primary, Secondary, and Meta-Analysis of Research", "text": ""}
{"_id": "024e394f00b43e7f44f8d0c641c584b6d7edf7d6", "title": "Efficient AUV navigation fusing acoustic ranging and side-scan sonar", "text": "This paper presents an on-line nonlinear least squares algorithm for multi-sensor autonomous underwater vehicle (AUV) navigation. The approach integrates the global constraints of range to and GPS position of a surface vehicle or buoy communicated via acoustic modems and relative pose constraints arising from targets detected in side-scan sonar images. The approach utilizes an efficient optimization algorithm, iSAM, which allows for consistent on-line estimation of the entire set of trajectory constraints. The optimized trajectory can then be used to more accurately navigate the AUV, to extend mission duration, and to avoid GPS surfacing. As iSAM provides efficient access to the marginal covariances of previously observed features, automatic data association is greatly simplified \u2014 particularly in sparse marine environments. A key feature of our approach is its intended scalability to single surface sensor (a vehicle or buoy) broadcasting its GPS position and simultaneous one-way travel time range (OWTT) to multiple AUVs. We discuss why our approach is scalable as well as robust to modem transmission failure. Results are provided for an ocean experiment using a Hydroid REMUS 100 AUV co-operating with one of two craft: an autonomous surface vehicle (ASV) and a manned support vessel. During these experiments the ranging portion of the algorithm ran online on-board the AUV. Extension of the paradigm to multiple missions via the optimization of successive survey missions (and the resultant sonar mosaics) is also demonstrated."}
{"_id": "32a0b53aac8c6e2fcb2a7e4c492b195efe2dc965", "title": "Understanding the perceived logic of care by vaccine-hesitant and vaccine-refusing parents: A qualitative study in Australia", "text": "In terms of public health, childhood vaccination programs have benefits that far outweigh risks. However, some parents decide not to vaccinate their children. This paper explores the ways in which such parents talked about the perceived risks and benefits incurred by vaccinating (or not vaccinating) their children. Between 2013-2016 we undertook 29 in-depth interviews with non-vaccinating and/or 'vaccine hesitant' parents in Australia. Interviews were conducted in an open and non-judgmental manner, akin to empathic neutrality. Interviews focused on parents talking about the factors that shaped their decisions not to (or partially) vaccinate their children. All interviews were transcribed and analysed using both inductive and deductive processes. The main themes focus on parental perceptions of: 1. their capacity to reason; 2. their rejection of Western medical epistemology; and 3. their participation in labour intensive parenting practices (which we term salutogenic parenting). Parents engaged in an ongoing search for information about how best to parent their children (capacity to reason), which for many led to questioning/distrust of traditional scientific knowledge (rejection of Western medical epistemology). Salutogenic parenting spontaneously arose in interviews, whereby parents practised health promoting activities which they saw as boosting the natural immunity of their children and protecting them from illness (reducing or negating the perceived need for vaccinations). Salutogenic parenting practices included breastfeeding, eating organic and/or home-grown food, cooking from scratch to reduce preservative consumption and reducing exposure to toxins. We interpret our data as a 'logic of care', which is seen by parents as internally consistent, logically inter-related and inter-dependent. Whilst not necessarily sharing the parents' reasoning, we argue that an understanding of their attitudes towards health and well-being is imperative for any efforts to engage with their vaccine refusal at a policy level."}
{"_id": "68d05c858844b9e4c2a798e35fb49d4e7d446c82", "title": "Computational Thinking in Practice : How STEM Professionals Use CT in Their Work", "text": "The goal of this study is to bring current computational thinking in STEM educational efforts in line with the increasingly computational nature of STEM research practices. We conducted interviews with STEM practitioners in various fields to understand the nature of CT as it happens in authentic research settings and to revisit a first iteration of our definition of CT in form of a taxonomy. This exploration gives us insight into how scientists use computers in their work and help us identify what practices are important to include in high school STEM learning contexts. Our findings will inform the design of classroom activities to better prepare today\u2019s students for the modern STEM landscape that awaits them."}
{"_id": "92594ee5041fd15ad817257b304f2a12b28f5ab8", "title": "Wavelet filter evaluation for image compression", "text": "Choice of filter bank in wavelet compression is a critical issue that affects image quality as well as system design. Although regularity is sometimes used in filter evaluation, its success at predicting compression performance is only partial. A more reliable evaluation can be obtained by considering an L-level synthesis/analysis system as a single-input, single-output, linear shift-variant system with a response that varies according to the input location module (2(L),2(L)). By characterizing a filter bank according to its impulse response and step response in addition to regularity, we obtain reliable and relevant (for image coding) filter evaluation metrics. Using this approach, we have evaluated all possible reasonably short (less than 36 taps in the synthesis/analysis pair) minimum-order biorthogonal wavelet filter banks. Of this group of over 4300 candidate filter banks, we have selected and present here the filters best suited to image compression. While some of these filters have been published previously, others are new and have properties that make them attractive in system design."}
{"_id": "aa474ae1be631696b790ffb19468ce9615899631", "title": "Multimodal recognition of personality traits in social interactions", "text": "This paper targets the automatic detection of personality traits in a meeting environment by means of audio and visual features; information about the relational context is captured by means of acoustic features designed to that purpose. Two personality traits are considered: Extraversion (from the Big Five) and the Locus of Control. The classification task is applied to thin slices of behaviour, in the form of 1-minute sequences. SVM were used to test the performances of several training and testing instance setups, including a restricted set of audio features obtained through feature selection. The outcomes improve considerably over existing results, provide evidence about the feasibility of the multimodal analysis of personality, the role of social context, and pave the way to further studies addressing different features setups and/or targeting different personality traits."}
{"_id": "f8a95360af226056a99ad1ea1328653bbe89b8b4", "title": "Design Criteria for Electric Vehicle Fast Charge Infrastructure Based on Flemish Mobility Behavior", "text": "This paper studies the technical design criteria for fast charge infrastructure, covering the mobility needs. The infrastructure supplements the residential and public slow charging infrastructure. Two models are designed. The first determines the charging demand, based on current mobility behavior in Flanders. The second model simulates a charge infrastructure that meets the resulting fast charge demand. The energy management is performed by a rule-based control algorithm, that directs the power flows between the fast chargers, the energy storage system, the grid connection, and the photovoltaic installation. There is a clear trade-off between the size of the energy storage system and the power rating of the grid connection. Finally, the simulations indicate that 99.7% of the vehicles visiting the fast charge infrastructure can start charging within 10 minutes with a configuration limited to 5 charging spots, instead of 9 spots when drivers are not willing to wait."}
{"_id": "41856dc1ca34c93967ec7c20c735bed83fc379f4", "title": "A Field Study of Run-Time Location Access Disclosures on Android Smartphones", "text": "Smartphone users are increasingly using apps that can access their location. Often these accesses can be without users knowledge and consent. For example, recent research has shown that installation-time capability disclosures are ineffective in informing people about their apps\u2019 location access. In this paper, we present a four-week field study (N=22) on run-time location access disclosures. Towards this end, we implemented a novel method to disclose location accesses by location-enabled apps on participants\u2019 smartphones. In particular, the method did not need any changes to participants\u2019 phones beyond installing our study app. We randomly divided our participants to two groups: a Disclosure group (N=13), who received our disclosures and a No Disclosure group (N=9) who received no disclosures from us. Our results confirm that the Android platform\u2019s location access disclosure method does not inform participants effectively. Almost all participants pointed out that their location was accessed by several apps they would have not expected to access their location. Further, several apps accessed their location more frequently than they expected. We conclude that our participants appreciated the transparency brought by our run-time disclosures and that because of the disclosures most of them had taken actions to manage their apps\u2019 location access."}
{"_id": "bcb1029032d2ca45669f8f66b9ee94a7f0f23028", "title": "Morphological and LSU rDNA sequence variation within the Gonyaulax spinifera-Spiniferites group (Dinophyceae) and proposal of G-elongata comb. nov and G-membranacea comb. nov", "text": "Cultures were established from cysts of the cyst-based taxa Spiniferites elongatus and S. membranaceus. Motile cells and cysts from both cultures and sediment samples were examined using light and scanning electron microscopy. The cyst\u00ad theca relationship was established for S. elongatus. The motile cells have the tabulation pattern 2 pr, 4', 6\", 6c, \ufffd 4s, 6\"', I p, Inn, but they remain unattributable to previously described Gonyaulax species. There was large variation in process length and process morphology in cysts from both cultures and wild samples and there was variation in ornamentation and in the development of spines and flanges in motile cells. A new combination, G. elongata (Reid) Ellegaard et al. comb. nov. is proposed, following new rules of the International Code of Botanical Nomenclature that give genera based on extant forms priority over genera based on fossil forms. Extreme morphological variation in the cyst and motile stages of S. membranaceus is described and this species is also transferred to the genus Gonyaulax, as G. membranacea (Rossignol) Ellegaard et al. comb. nov. Approximately 1500 bp of large subunit (LSU) rDNA were determined for these two species and for G. baltica, G. cf. spinifera ( = S. ramosus) and G. digitalis ( = Bitectatodinium tepikiense). LSU rDNA showed sequence divergences similar to those estimated between species in other genera within the Gonyaulacales; a phylogeny for the Gonyaulacales was established, including novel LSU rONA sequences for Alexandrium margalefii, A. pseudogonyaulax and Pyrodinium bahamense var. compressum. Our results show that motile stages obtained from the germination of several cysts of the 'fossil-based' Spiniferites and B. tepikiense, which were previously attributed to 'Gonyaulax spinifera group undifferentiated', belong to distinct species of the genus Gonyaulax. These species show small morphological differences in the motile stage but relatively high sequence divergence. Moreover, this group of species is monophyletic, supported by bootstrap values of 100% in parsimony and maximum likelihood analyses."}
{"_id": "a1700eb761f10df03d60b9d0b2e91b1977b0f0fb", "title": "BUSINESS-TO-BUSINESS E-BUSINESS MODELS : CLASSIFICATION AND TEXTILE INDUSTRY IMPLICATIONS", "text": "Since the introduction of the Internet and e-commerce in the mid-1990s, there has been a lot of hype surrounding e-business, the impact that it will have on the way that companies do business, and how it will change the global economy as a whole. Since the crash of the dotcom companies in 2001, there has been much less hype surrounding the use of the Internet for business. There seems to have been a realization that e-business may not be the answer to all of a company\u2019s problems, but can be a great asset in the struggle to increase efficiencies in daily business dealings, and that the Web is primarily a new way of relating to customers and suppliers. This paper categorizes and discusses the different types of business-to-business electronic business models currently being used by businesses and discussed in the academic literature, and shows how these business models are being implemented within the textile industry. This paper is divided into three parts. Part I gives an overview and some important definitions associated with businessto-business e-business, and discusses some characteristics that are unique to doing business on the Internet. Risks and benefits associated with doing business online are also discussed. Part II analyzes the different types of e-business models seen in the academic literature. Based on the analysis of the literature, a taxonomy of e-business models was developed. This new classification system organized e-business models into the following categories: sourcing models, ownership models, service-based models, customer relationship management models, supply chain models, interaction Models and revenue models. Part III reviews how these e-business models are currently being used within the textile industry. A preliminary analysis of 79 textile manufacturing companies was conducted to identify the applications of e-business."}
{"_id": "2716c1a156c89dbde7281a7b0812ad81038e4722", "title": "Test Results and Torque Improvement of the 50-kW Switched Reluctance Motor Designed for Hybrid Electric Vehicles", "text": "A switched reluctance motor (SRM) has been developed as one of the possible candidates of rare-earth-free electric motors. A prototype machine has been built and tested. It has competitive dimensions, torque, power, and efficiency with respect to the 50-kW interior permanent magnet synchronous motor employed in the hybrid electric vehicles (Toyota Prius 2003). It is found that competitive power of 50-kW rating and efficiency of 95% are achieved. The prototype motor provided 85% of the target torque. Except the maximum torque, the most speed-torque region is found to be covered by the test SRM. The cause of discrepancy in the measured and calculated torque values is examined. An improved design is attempted, and a new experimental switched reluctance machine is designed and built for testing. The results are given in this paper."}
{"_id": "b0064e4e4e6277d8489d607893b36d91f850083d", "title": "Matching Curves to Imprecise Point Sets using Fr\u00e9chet Distance", "text": "Let P be a polygonal curve in R of length n, and S be a point-set of size k. The Curve/Point Set Matching problem consists of finding a polygonal curve Q on S such that the Fr\u00e9chet distance from P is less than a given \u03b5. We consider eight variations of the problem based on the distance metric used and the omittability or repeatability of the points. We provide closure to a recent series of complexity results for the case where S consists of precise points. More importantly, we formulate a more realistic version of the problem that takes into account measurement errors. This new problem is posed as the matching of a given curve to a set of imprecise points. We prove that all three variations of the problem that are in P when S consists of precise points become NP-complete when S consists of imprecise points. We also discuss approximation re-"}
{"_id": "1f8116db538169de3553b1091e82107f7594301a", "title": "Common LISP: the language, 2nd Edition", "text": ""}
{"_id": "38b1824bbab96f9ab2e1079ea2b60ae313e6cb7f", "title": "A Survey on Multicarrier Communications: Prototype Filters, Lattice Structures, and Implementation Aspects", "text": "Due to their numerous advantages, communications over multicarrier schemes constitute an appealing approach for broadband wireless systems. Especially, the strong penetration of orthogonal frequency division multiplexing (OFDM) into the communications standards has triggered heavy investigation on multicarrier systems, leading to re-consideration of different approaches as an alternative to OFDM. The goal of the present survey is not only to provide a unified review of waveform design options for multicarrier schemes, but also to pave the way for the evolution of the multicarrier schemes from the current state of the art to future technologies. In particular, a generalized framework on multicarrier schemes is presented, based on what to transmit, i.e., symbols, how to transmit, i.e., filters, and where/when to transmit, i.e., lattice. Capitalizing on this framework, different variations of orthogonal, bi-orthogonal, and non-orthogonal multicarrier schemes are discussed. In addition, filter designs for various multicarrier systems are reviewed considering four different design perspectives: energy concentration, rapid decay, spectrum nulling, and channel/hardware characteristics. Subsequently, evaluation tools which may be used to compare different filters in multicarrier schemes are studied. Finally, multicarrier schemes are evaluated from the perspective of practical implementation aspects, such as lattice adaptation, equalization, synchronization, multiple antennas, and hardware impairments."}
{"_id": "4d1efe967fde304dc31e19665b9c328969a79128", "title": "5GNOW: non-orthogonal, asynchronous waveforms for future mobile applications", "text": "This article provides some fundamental indications about wireless communications beyond LTE/LTE-A (5G), representing the key findings of the European research project 5GNOW. We start with identifying the drivers for making the transition to 5G networks. Just to name one, the advent of the Internet of Things and its integration with conventional human-initiated transmissions creates a need for a fundamental system redesign. Then we make clear that the strict paradigm of synchronism and orthogonality as applied in LTE prevents efficiency and scalability. We challenge this paradigm and propose new key PHY layer technology components such as a unified frame structure, multicarrier waveform design including a filtering functionality, sparse signal processing mechanisms, a robustness framework, and transmissions with very short latency. These components enable indeed an efficient and scalable air interface supporting the highly varying set of requirements originating from the 5G drivers."}
{"_id": "de19bd1be876549cbccb79e545c43442a6223f7a", "title": "Sparse code multiple access", "text": "Multicarrier CDMA is a multiplexing approach in which modulated QAM symbols are spread over multiple OFDMA tones by using a generally complex spreading sequence. Effectively, a QAM symbol is repeated over multiple tones. Low density signature (LDS) is a version of CDMA with low density spreading sequence allowing us to take advantage of a near optimal ML receiver with practically feasible complexity. In this paper, we propose a new multiple access scheme so called sparse code multiple access (SCMA) which still enjoys the low complexity reception technique but with better performance compared to LDS. In SCMA, the procedure of bit to QAM symbol mapping and spreading are combined together and incoming bits are directly mapped to a multidimensional codeword of an SCMA codebook set. Each layer or user has its dedicated codebook. Shaping gain of a multidimensional constellation is the main source of the performance improvement in comparison to the simple repetition of QAM symbols in LDS. In general, SCMA codebook design is an optimization problem. A systematic sub-optimal approach is proposed here for SCMA codebook design."}
{"_id": "f1732fe25d335af43c0ff90c1b35f9b5a5c38dc5", "title": "SCMA Codebook Design", "text": "Multicarrier CDMA is a multiple access scheme in which modulated QAM symbols are spread over OFDMA tones by using a generally complex spreading sequence. Effectively, a QAM symbol is repeated over multiple tones. Low density signature (LDS) is a version of CDMA with low density spreading sequences allowing us to take advantage of a near optimal message passing algorithm (MPA) receiver with practically feasible complexity. Sparse code multiple access (SCMA) is a multi-dimensional codebook-based non-orthogonal spreading technique. In SCMA, the procedure of bit to QAM symbol mapping and spreading are combined together and incoming bits are directly mapped to multi-dimensional codewords of SCMA codebook sets. Each layer has its dedicated codebook. Shaping gain of a multi-dimensional constellation is one of the main sources of the performance improvement in comparison to the simple repetition of QAM symbols in LDS. Meanwhile, like LDS, SCMA enjoys the low complexity reception techniques due to the sparsity of SCMA codewords. In this paper a systematic approach is proposed to design SCMA codebooks mainly based on the design principles of lattice constellations. Simulation results are presented to show the performance gain of SCMA compared to LDS and OFDMA."}
{"_id": "43a87c07f2cbfeab0801b9cf2ece5f7e0689d287", "title": "LDS-OFDM an Efficient Multiple Access Technique", "text": "In this paper LDS-OFDM is introduced as an uplink multicarrier multiple access scheme. LDS-OFDM uses Low Density Signature (LDS) structure for spreading the symbols in frequency domain. This technique benefits from frequency diversity besides its ability of supporting parallel data streams up to 400% more than the number of subcarriers (overloaded condition). The performance of LDS-OFDM is evaluated and compared with conventional OFDMA systems over multipath fading channel. Monte Carlo based simulations for various loading conditions indicate significant performance improvement over OFDMA system."}
{"_id": "1b5280076b41fda48eb837cbb03cc1b283d4fa3c", "title": "Neural network controller based on PID using an extended Kalman filter algorithm for multi-variable non-linear control system", "text": "The Proportional Integral Derivative (PID) controller is widely used in the industrial control application, which is only suitable for the single input/single output (SISO) with known-parameters of the linear system. However, many researchers have been proposed the neural network controller based on PID (NNPID) to apply for both of the single and multi-variable control system but the NNPID controller that uses the conventional gradient descent-learning algorithm has many disadvantages such as a low speed of the convergent stability, difficult to set initial values, especially, restriction of the degree of system complexity. Therefore, this paper presents an improvement of recurrent neural network controller based on PID, including a controller structure improvement and a modified extended Kalman filter (EKF) learning algorithm for weight update rule, called ENNPID controller. We apply the proposed controller to the dynamic system including inverted pendulum, and DC motor system by the MATLAB simulation. From our experimental results, it shows that the performance of the proposed controller is higher than the other PID-like controllers in terms of fast convergence and fault tolerance that are highly required."}
{"_id": "08a5c749035e13a464eb5fb6cf4d4df63069330f", "title": "Can recursive bisection alone produce routable placements?", "text": "This work focuses on congestion-driven placement of standard cells into rows in the fixed-die context. We summarize the state-of-the-art after two decades of research in recursive bisection placement and implement a new placer, called Capo, to empirically study the achievable limits of the approach. From among recently proposed improvements to recursive bisection, Capo incorporates a leading-edge multilevel min-cut partitioner [7], techniques for partitioning with small tolerance [8], optimal min-cut partitioners and end-case min-wirelength placers [5], previously unpublished partitioning tolerance computations, and block splitting heuristics. On the other hand, our \u201cgood enough\u201d implementation does not use \u201coverlapping\u201d [17], multi-way partitioners [17, 20], analytical placement, or congestion estimation [24, 35]. In order to run on recent industrial placement instances, Capo must take into account fixed macros, power stripes and rows with different allowed cell orientations. Capo reads industry-standard LEF/DEF, as well as formats of the GSRC bookshelf for VLSI CAD algorithms [6], to enable comparisons on available placement instances in the fixed-die regime.\nCapo clearly demonstrates that despite a potential mismatch of objectives, improved mincut bisection can still lead to improved placement wirelength and congestion. Our experiments on recent industrial benchmarks fail to give a clear answer to the question in the title of this paper. However, they validate a series of improvements to recursive bisection and point out a need for transparent congestion management techniques that do not worsen the wirelength of already routable placements. Our experimental flow, which validates fixed-die placement results by violation-free detailed auto-routability, provides a new norm for comparison of VLSI placement implementations."}
{"_id": "098ee9f8d7c885e6ccc386435537513a6303667c", "title": "Factors Affecting Repurchase Intention of Smartphone : A Case Study of Huawei Smartphone in Yangon , Myanmar", "text": "This study investigates the relationship between functional value, price consciousness, word of mouth, brand image, attitude towards product and repurchases intention of a smartphone brand. To do so, a survey was conducted by distributing 420 questionnaires in 7 different shopping malls in Yangon, Myanmar. The data collected was analyzed using SPSS and hypotheses were examined by employing the Pearson correlation and multiple linear regression. The results show that there is a positive and significant relationship among functional value, word of mouth, price consciousness towards attitude towards product and brand image, attitude towards product, word of mouth influences repurchase intention. Based on these results, it seems that the smart phone company needs to develop marketing strategy to increase repurchase intention."}
{"_id": "91fa5860d3a15cdd9cf03bd6bbee8b0cc8a9f08a", "title": "Finding the face in the crowd: an anger superiority effect.", "text": "Facial gestures have been given an increasingly critical role in models of emotion. The biological significance of interindividual transmission of emotional signals is a pivotal assumption for placing the face in a central position in these models. This assumption invited a logical corollary, examined in this article: Face-processing should be highly efficient. Three experiments documented an asymmetry in the processing of emotionally discrepant faces embedded in crowds. The results suggested that threatening faces pop out of crowds, perhaps as a result of a preattentive, parallel search for signals of direct threat."}
{"_id": "bc749f0e81eafe9e32d56336750782f45d82609d", "title": "Combination of Texture and Geometric Features for Age Estimation in Face Images", "text": "Automatic age estimation from facial images has recently received an increasing interest due to a variety of applications, such as surveillance, human-computer interaction, forensics, and recommendation systems. Despite such advances, age estimation remains an open problem due to several challenges associated with the aging process. In this work, we develop and analyze an automatic age estimation method from face images based on a combination of textural and geometric features. Experiments are conducted on the Adience dataset (Adience Benchmark, 2017; Eidinger et al., 2014), a large known benchmark used to evaluate both age and gender classification approaches."}
{"_id": "507f4a0a5eaecd38566a3b1bf2a33554a2b2eed2", "title": "BIGT control optimisation for overall loss reduction", "text": "In this paper we present the latest results of utilizing MOS-control (MOSctrl) to optimize the performance of the Bi-mode Insulated Gate Transistor (BIGT) chip. The adaption of the BIGT technology enables higher output power per footprint. However, to enable the full performance benefit of the BIGT, the optimisation of the known standard MOS gate control is necessary. This optimisation is being demonstrated over the whole current and temperature range for the BIGT diode turn-off and BIGT turn-on operation. It is shown that the optimum control can offer a performance increase up to 20% for high voltage devices."}
{"_id": "6138c3595abf95cb83cbe89de6b6620b7c1d5234", "title": "Evaluation of information technology investment: a data envelopment analysis approach", "text": "The increasing use of information technology (IT) has resulted in a need for evaluating the productivity impacts of IT. The contemporary IT evaluation approach has focused on return on investment and return on management. IT investment has impacts on different stages of business operations. For example, in the banking industry, IT plays a key role in effectively generating (i) funds from the customer in the forms of deposits and then (ii) profits by using deposits as investment funds. Existing approaches based upon data envelopment analysis (DEA) only measure the IT efficiency or impact on one specific stage when a multi-stage business process is present. A detailed model is needed to characterize the impact of IT on each stage of the business operation. The current paper develops a DEA non-linear programming model to evaluate the impact of IT on multiple stages along with information on how to distribute the IT-related resources so that the efficiency is maximized. It is shown that this non-linear program can be treated as a parametric linear program. It is also shown that if there is only one intermediate measure, then the non-linear DEA model becomes a linear program. Our approach is illustrated with an example taken from previous studies. 2004 Elsevier Ltd. All rights reserved."}
{"_id": "3e89403dcd478c849732f872001064f21ff073f9", "title": "Comparing memory systems for chip multiprocessors", "text": "There are two basic models for the on-chip memory in CMP systems:hardware-managed coherent caches and software-managed streaming memory. This paper performs a direct comparison of the two modelsunder the same set of assumptions about technology, area, and computational capabilities. The goal is to quantify how and when they differ in terms of performance, energy consumption, bandwidth requirements, and latency tolerance for general-purpose CMPs. We demonstrate that for data-parallel applications, the cache-based and streaming models perform and scale equally well. For certain applications with little data reuse, streaming scales better due to better bandwidth use and macroscopic software prefetching. However, the introduction of techniques such as hardware prefetching and non-allocating stores to the cache-based model eliminates the streaming advantage. Overall, our results indicate that there is not sufficient advantage in building streaming memory systems where all on-chip memory structures are explicitly managed. On the other hand, we show that streaming at the programming model level is particularly beneficial, even with the cache-based model, as it enhances locality and creates opportunities for bandwidth optimizations. Moreover, we observe that stream programming is actually easier with the cache-based model because the hardware guarantees correct, best-effort execution even when the programmer cannot fully regularize an application's code."}
{"_id": "d28000811e1fbb0a8967395cd8ffba8ce1b53863", "title": "Positive and negative emotional verbal stimuli elicit activity in the left amygdala.", "text": "The human amygdala's involvement in negative emotion is well established, but relatively little is known regarding its role in positive emotion. Here we examined the neural response to emotionally positive, negative, and neutral words using fMRI. Relative to neutral words, positive and negative emotional words elicited greater activity in the left amygdala. Positive but not negative words elicited activity in dorsal and ventral striatal regions which have been linked in previous neuroimaging studies to reward and positive affect, including caudate, putamen, globus pallidus, and accumbens. These findings provide the first direct evidence that the amygdala is involved in emotional reactions elicited by both positive and negative emotional words, and further indicate that positive words additionally activate brain regions related to reward."}
{"_id": "5f4847990b6e68368cd2cac809748298ab241099", "title": "Broadband Radial Waveguide Spatial Combiner", "text": "A broadband eight-way spatial combiner using coaxial probes and radial waveguides has been proposed and designed. The simple electromagnetic modeling for the radial waveguide power divider/combiner has been developed using equivalent-circuit method. The measured 10-dB return loss and 1-dB insertion loss bandwidth of this waveguide spatial combiner are all demonstrated to be about 8 GHz."}
{"_id": "13d0f69204bc4a78acc9b8fecb91f41ab1a76823", "title": "Privacy Aspects of Recommender Systems", "text": "The popularity of online recommender systems has soared; they are deployed in numerous websites and gather tremendous amounts of user data that are necessary for the recommendation purposes. This data, however, may pose a severe threat to user privacy, if accessed by untrusted parties or used inappropriately. Hence, it is of paramount importance for recommender system designers and service providers to find a sweet spot, which allows them to generate accurate recommendations and guarantee the privacy of their users. In this chapter we overview the state of the art in privacy enhanced recommendations. We analyze the risks to user privacy imposed by recommender systems, survey the existing solutions, and discuss the privacy implications for the users of the recommenders. We conclude that a considerable effort is still required to develop practical recommendation solutions that provide adequate privacy guarantees, while at the same time facilitating the delivery of high-quality recommendations to their users. Arik Friedman NICTA, Australia, e-mail: arik.friedman@nicta.com.au Bart Knijnenburg UC Irvine, USA, e-mail: bart.k@uci.edu Kris Vanhecke iMinds Ghent University, Belgium, e-mail: kris.vanhecke@intec.ugent.be Luc Martens iMinds Ghent University, Belgium, e-mail: luc.martens@intec.ugent.be Shlomo Berkovsky CSIRO, Australia, e-mail: shlomo.berkovsky@csiro.au 1 Accepted to be included in the 2nd edition of the Recommender Systems handbook. Please do not share publicly, and consult the authors before citing this chapter."}
{"_id": "158d54518e694d0a7d7c0fe2ea474d873eaeb5a0", "title": "ArgumenText: Searching for Arguments in Heterogeneous Sources", "text": "Argument mining is a core technology for enabling argument search in large corpora. However, most current approaches fall short when applied to heterogeneous texts. In this paper, we present an argument retrieval system capable of retrieving sentential arguments for any given controversial topic. By analyzing the highest-ranked results extracted from Web sources, we found that our system covers 89% of arguments found in expert-curated lists of arguments from an online debate portal, and also identifies additional valid arguments."}
{"_id": "50d90325c9f81e562fdb65097c0be4f889008d20", "title": "A qualitative study of Ragnar\u00f6k Online private servers: in-game sociological issues", "text": "In the last decade, online games have garnered much attention as more and more players gather on game servers. In parallel, communities of illegal private server players and administrators have spread and might host hundreds of thousands of players. To study the Korean online game Ragnar\u00f6k Online, we conducted interviews to collect qualitative data on two private servers as well as on the official French server for the game. This paper discusses some of the reasons why Ragnar\u00f6k Online private servers might attract players and how examining private servers' characteristics could help improve official game servers."}
{"_id": "19b82e0ee0d61bb538610d3a470798ce7d6096ee", "title": "A 28GHz SiGe BiCMOS phase invariant VGA", "text": "The paper describes a technique to design a phase invariant variable gain amplifier (VGA). Variable gain is achieved by varying the bias current in a BJT, while the phase variation is minimized by designing a local feedback network such that the applied base to emitter voltage has a bias-dependent phase variation which compensates the inherent phase variation of the transconductance. Two differential 28GHz VGA variants based on these principles achieve <;5\u00b0 phase variation over 8dB and 18dB of gain control range, respectively, with phase invariance maintained over PVT. Implemented in GF 8HP BiCMOS technology, the VGAs achieve 18dB nominal gain, 4GHz bandwidth, and IP1dB > -13dBm while consuming 35mW."}
{"_id": "d1adb86df742a9556e137020dca0e505442f337a", "title": "Bacteriocins: developing innate immunity for food.", "text": "Bacteriocins are bacterially produced antimicrobial peptides with narrow or broad host ranges. Many bacteriocins are produced by food-grade lactic acid bacteria, a phenomenon which offers food scientists the possibility of directing or preventing the development of specific bacterial species in food. This can be particularly useful in preservation or food safety applications, but also has implications for the development of desirable flora in fermented food. In this sense, bacteriocins can be used to confer a rudimentary form of innate immunity to foodstuffs, helping processors extend their control over the food flora long after manufacture."}
{"_id": "48265726215736f7dd7ceccacac488422032397c", "title": "DPABI: Data Processing & Analysis for (Resting-State) Brain Imaging", "text": "Brain imaging efforts are being increasingly devoted to decode the functioning of the human brain. Among neuroimaging techniques, resting-state fMRI (R-fMRI) is currently expanding exponentially. Beyond the general neuroimaging analysis packages (e.g., SPM, AFNI and FSL), REST and DPARSF were developed to meet the increasing need of user-friendly toolboxes for R-fMRI data processing. To address recently identified methodological challenges of R-fMRI, we introduce the newly developed toolbox, DPABI, which was evolved from REST and DPARSF. DPABI incorporates recent research advances on head motion control and measurement standardization, thus allowing users to evaluate results using stringent control strategies. DPABI also emphasizes test-retest reliability and quality control of data processing. Furthermore, DPABI provides a user-friendly pipeline analysis toolkit for rat/monkey R-fMRI data analysis to reflect the rapid advances in animal imaging. In addition, DPABI includes preprocessing modules for task-based fMRI, voxel-based morphometry analysis, statistical analysis and results viewing. DPABI is designed to make data analysis require fewer manual operations, be less time-consuming, have a lower skill requirement, a smaller risk of inadvertent mistakes, and be more comparable across studies. We anticipate this open-source toolbox will assist novices and expert users alike and continue to support advancing R-fMRI methodology and its application to clinical translational studies."}
{"_id": "1cf314336cc6af13025fa91b19a50960b626684e", "title": "On the Traveling Salesman Problem with Simple Temporal Constraints", "text": "Many real-world applications require the successful combination of spatial and temporal reasoning. In this paper, we study the general framework of the Traveling Salesman Problem with Simple Temporal Constraints. Representationally, this framework subsumes the Traveling Salesman Problem, Simple Temporal Problems, as well as many of the frameworks described in the literature. We analyze the theoretical properties of the combined problem providing strong inapproximability results for the general problem, and positive results for"}
{"_id": "5ed17634e4bd989c235662d8fa3af0a23e7262ef", "title": "A Survey on Detection of Reasons Behind Infant Cry Using Speech Processing", "text": "Infant cry analysis is necessary to detect the reasons behind the cry. Infants can communicate through their cry only. They express their physical and emotional needs through their cry. The crying of an infant can be stopped if the reason behind their cry is known. Therefore, identifying the reason behind infant cry is very essential for medical follow up. In this paper, a literature survey is done to study, analyze, compare the existing approaches and their limitations. This survey helped us to propose a new approach for detecting the reasons behind Infant's cry. The proposed model will use input as the cry signal and from these signal patterns the unique features are extracted using MFCCs, LPCCs and Pitch, etc. These features will help discriminating the patterns from one another. These extracted features will be used to train Neural Network Multilayer classifier. This classifier will be used to identify multiple classes of reasons behind infant cry such as hunger, pain, sleep, discomfort, etc. Efforts will be made for achieving good accuracy results through this approach."}
{"_id": "671280fad11ccd0d02469086207ba9f988f267cb", "title": "Convolutional neural networks: an illustration in TensorFlow", "text": "C onvolutional neural networks (CNNs) have interesting and widely varied architectures catering to the requirement of learning features from three dimensional and structured data volumes (with a particular example being images, whether single or multi-channel). However, there are certain design considerations common to all CNN architectures that enable us to craft correct implementations of the architectures suited to our objectives. It is more than just mathematical calculations\u2014it's an art in itself! Thus, in the present tutorial, we'll put our knowledge of CNNs and their distinct layers in practice by making use of Google's open-source machine learning (ML) platform, TensorFlow. The tutorial is fairly simple and follows the steps detailed online [1] in light of added insights, which can be assimilated from the basics of CNN architectural concepts. The TensorFlow platform was released in November 2015, in a climate of stiff competition from well-established deep learning and ML packages such as Torch and Theano. (The performance comparisons between the packages goes beyond the scope of this piece). The platform has been undergoing constant improvements since its release, thanks in part to a nascent community of contributors. TensorFlow packs quite a punch owing to its significant positive characteristics. Firstly, the package's open source nature makes it transparent, customizable, and extensible by the end user. Secondly, the documentation support is rich and elaborate, which is a critical factor in accelerating user adoption. Lastly, the fact that the package has been backed by Google, with some of the best ML and AI researchers being among its active contributors, is the cherry on top. Additionally it's compatibility with newer versions of Python (version 3.3 and above), optional GPU utilization, parallelizable execution, and visualization support via TensorBoard further enrich its impressive repertoire. Its Achilles heel\u2014which has prevented widespread adoption\u2014is its unavailability on the Windows OS (TensorFlow is only available on Mac OSX and Linux platforms), and its non-operability in distributed environments, as of now (it's bound to change soon). The question as to why the MNIST dataset [2] was chosen for this tutorial is a very pertinent one. The answer is the handwritten digits dataset has been ubiquitous since its release in 1998 by Yann LeCunn [3]. Having since become the textbook dataset for neural network classification tasks, it is a good choice in our context of getting acquainted with CNNs. The dataset consists of 60,000 training examples and 10,000 test examples, each being a grayscale \u2026"}
{"_id": "c788f4ea8d8c0d91237c8146556c3fad71ac362a", "title": "TextCatcher: a method to detect curved and challenging text in natural scenes", "text": "In this paper, we propose a text detection algorithm which is hybrid and multi-scale. First, it relies on a connected component-based approach: After the segmentation of the image, a classification step using a new wavelet descriptor spots the letters. A new graph modeling and its traversal procedure allow to form candidate text areas. Second, a texture-based approach discards the false positives. Finally, the detected text areas are precisely cut out and a new binarization step is introduced. The main advantage of our method is that few assumptions are put forward. Thus, \u201cchallenging texts\u201d like multi-sized, multi-colored, multi-oriented or curved text can be localized. The efficiency of TextCatcher has been validated on three different datasets: Two come from the ICDAR competition, and the third one contains photographs we have taken with various daily life texts. We present both qualitative and quantitative results."}
{"_id": "ca20de3504e3d74faec96ed4f70340e3a0860191", "title": "Put your money where your mouth is! Explaining collective action tendencies through group-based anger and group efficacy.", "text": "Insights from appraisal theories of emotion are used to integrate elements of theories on collective action. Three experiments with disadvantaged groups systematically manipulated procedural fairness (Study 1), emotional social support (Study 2), and instrumental social support (Study 3) to examine their effects on collective action tendencies through group-based anger and group efficacy. Results of structural equation modeling showed that procedural fairness and emotional social support affected the group-based anger pathway (reflecting emotion-focused coping), whereas instrumental social support affected the group efficacy pathway (reflecting problem-focused coping), constituting 2 distinct pathways to collective action tendencies. Analyses of the means suggest that collective action tendencies become stronger the more fellow group members \"put their money where their mouth is.\" The authors discuss how their dual pathway model integrates and extends elements of current approaches to collective action."}
{"_id": "524144d7e3624c3e88cf380e7ccf585d96b39a70", "title": "Taxonomy of intrusion risk assessment and response system", "text": "In recent years, we have seen notable changes in the way attackers infiltrate computer systems compromising their functionality. Research in intrusion detection systems aims to reduce the impact of these attacks. In this paper, we present a taxonomy of intrusion response systems (IRS) and Intrusion Risk Assessment (IRA), two important components of an intrusion detection solution. We achieve this by classifying a number of studies published during the last two decades . We discuss the key features of existing IRS and IRA. We show how characterizing security risks and choosing the right countermeasures are an important and challenging part of designing an IRS and an IRA. Poorly designed IRS and IRA may reduce network performance and wrongly disconnect users from a network. We propose techniques on how to address these challenges and highlight the need for a comprehensive defense mechanism approach. We believe that this taxonomy will open up interesting areas for future research in the growing field of intrusion risk assessment and response systems."}
{"_id": "67ea3089b457e6a09e45ea4117cb7f30d7695e69", "title": "Segmentation of Medical Ultrasound Images Using Convolutional Neural Networks with Noisy Activating Functions", "text": "The attempts to segment medical ultrasound images have had limited success than the attempts to segment images from other medical imaging modalities. In this project, we attempt to segment medical ultrasound images using convolutional neural networks (CNNs) with a group of noisy activation functions which have recently been demonstrated to improve the performance of neural networks. We report on the segmentation results using a U-Net-like CNN with noisy rectified linear unit (NReLU) functions, noisy hard sigmoid (NHSigmoid) functions, and noisy hard tanh (NHTanh) function on a small data set."}
{"_id": "034b2b97e6b23061f6f71a5e19c1b03bf4c19ec8", "title": "GSA: A Gravitational Search Algorithm", "text": "In recent years, various heuristic optimization methods have been developed. Many of these methods are inspired by swarm behaviors in nature. In this paper, a new optimization algorithm based on the law of gravity and mass interactions is introduced. In the proposed algorithm, the searcher agents are a collection of masses which interact with each other based on the Newtonian gravity and the laws of motion. The proposed method has been compared with some well-known heuristic search methods. The obtained results confirm the high performance of the proposed method in solving various nonlinear functions. 2009 Elsevier Inc. All rights reserved."}
{"_id": "00844516c86828a4cc81471b573cb1a1696fcde9", "title": "Spurious but systematic correlations in functional connectivity MRI networks arise from subject motion", "text": "Here, we demonstrate that subject motion produces substantial changes in the timecourses of resting state functional connectivity MRI (rs-fcMRI) data despite compensatory spatial registration and regression of motion estimates from the data. These changes cause systematic but spurious correlation structures throughout the brain. Specifically, many long-distance correlations are decreased by subject motion, whereas many short-distance correlations are increased. These changes in rs-fcMRI correlations do not arise from, nor are they adequately countered by, some common functional connectivity processing steps. Two indices of data quality are proposed, and a simple method to reduce motion-related effects in rs-fcMRI analyses is demonstrated that should be flexibly implementable across a variety of software platforms. We demonstrate how application of this technique impacts our own data, modifying previous conclusions about brain development. These results suggest the need for greater care in dealing with subject motion, and the need to critically revisit previous rs-fcMRI work that may not have adequately controlled for effects of transient subject movements."}
{"_id": "21bfc289cf7e2309e70f390ae14d89df7c911a67", "title": "Modeling regional and psychophysiologic interactions in fMRI: the importance of hemodynamic deconvolution", "text": "The analysis of functional magnetic resonance imaging (fMRI) time-series data can provide information not only about task-related activity, but also about the connectivity (functional or effective) among regions and the influences of behavioral or physiologic states on that connectivity. Similar analyses have been performed in other imaging modalities, such as positron emission tomography. However, fMRI is unique because the information about the underlying neuronal activity is filtered or convolved with a hemodynamic response function. Previous studies of regional connectivity in fMRI have overlooked this convolution and have assumed that the observed hemodynamic response approximates the neuronal response. In this article, this assumption is revisited using estimates of underlying neuronal activity. These estimates use a parametric empirical Bayes formulation for hemodynamic deconvolution."}
{"_id": "261208c69aeca0243e43511845a0d8023d31acbe", "title": "Common regions of the human frontal lobe recruited by diverse cognitive demands", "text": "Though many neuroscientific methods have been brought to bear in the search for functional specializations within prefrontal cortex, little consensus has emerged. To assess the contribution of functional neuroimaging, this article reviews patterns of frontal-lobe activation associated with a broad range of different cognitive demands, including aspects of perception, response selection, executive control, working memory, episodic memory and problem solving. The results show a striking regularity: for many demands, there is a similar recruitment of mid-dorsolateral, mid-ventrolateral and dorsal anterior cingulate cortex. Much of the remainder of frontal cortex, including most of the medial and orbital surfaces, is largely insensitive to these demands. Undoubtedly, these results provide strong evidence for regional specialization of function within prefrontal cortex. This specialization, however, takes an unexpected form: a specific frontal-lobe network that is consistently recruited for solution of diverse cognitive problems."}
{"_id": "2ce0d2f6efe74b9df4c0eccb434322d931c5dd47", "title": "Prefrontal cortical function and anxiety: controlling attention to threat-related stimuli", "text": "Threat-related stimuli are strong competitors for attention, particularly in anxious individuals. We used functional magnetic resonance imaging (fMRI) with healthy human volunteers to study how the processing of threat-related distractors is controlled and whether this alters as anxiety levels increase. Our work builds upon prior analyses of the cognitive control functions of lateral prefrontal cortex (lateral PFC) and anterior cingulate cortex (ACC). We found that rostral ACC was strongly activated by infrequent threat-related distractors, consistent with a role for this area in responding to unexpected processing conflict caused by salient emotional stimuli. Participants with higher anxiety levels showed both less rostral ACC activity overall and reduced recruitment of lateral PFC as expectancy of threat-related distractors was established. This supports the proposal that anxiety is associated with reduced top-down control over threat-related distractors. Our results suggest distinct roles for rostral ACC and lateral PFC in governing the processing of task-irrelevant, threat-related stimuli, and indicate reduced recruitment of this circuitry in anxiety."}
{"_id": "0ca9e60d077c97f8f9f9e43110e899ed45284ecd", "title": "Other minds in the brain: a functional imaging study of \u201ctheory of mind\u201d in story comprehension", "text": "The ability of normal children and adults to attribute independent mental states to self and others in order to explain and predict behaviour (\"theory of mind\") has been a focus of much recent research. Autism is a biologically based disorder which appears to be characterised by a specific impairment in this \"mentalising\" process. The present paper reports a functional neuroimaging study with positron emission tomography in which we studied brain activity in normal volunteers while they performed story comprehension tasks necessitating the attribution of mental states. The resultant brain activity was compared with that measured in two control tasks: \"physical\" stories which did not require this mental attribution, and passages of unlinked sentences. Both story conditions, when compared to the unlinked sentences, showed significantly increased regional cerebral blood flow in the following regions: the temporal poles bilaterally, the left superior temporal gyrus and the posterior cingulate cortex. Comparison of the \"theory of mind\" stories with \"physical\" stores revealed a specific pattern of activation associated with mental state attribution: it was only this task which produced activation in the medial frontal gyrus on the left (Brodmann's area 8). This comparison also showed significant activation in the posterior cingulate cortex. These surprisingly clear-cut findings are discussed in relation to previous studies of brain activation during story comprehension. The localisation of brain regions involved in normal attribution of mental states and contextual problem solving is feasible and may have implication for the neural basis of autism."}
{"_id": "723b30edce2a7a46626a38c8f8cac929131b9ed4", "title": "Daemo: A Self-Governed Crowdsourcing Marketplace", "text": "Crowdsourcing marketplaces provide opportunities for autonomous and collaborative professional work as well as social engagement. However, in these marketplaces, workers feel disrespected due to unreasonable rejections and low payments, whereas requesters do not trust the results they receive. The lack of trust and uneven distribution of power among workers and requesters have raised serious concerns about sustainability of these marketplaces. To address the challenges of trust and power, this paper introduces Daemo, a self-governed crowdsourcing marketplace. We propose a prototype task to improve the work quality and open-governance model to achieve equitable representation. We envisage Daemo will enable workers to build sustainable careers and provide requesters with timely, quality labor for their businesses."}
{"_id": "1eda2af8492a67d66afdb26b70d15e07d9bd11fe", "title": "Discriminative shape from shading in uncalibrated illumination", "text": "Estimating surface normals from just a single image is challenging. To simplify the problem, previous work focused on special cases, including directional lighting, known reflectance maps, etc., making shape from shading impractical outside the lab. To cope with more realistic settings, shading cues need to be combined and generalized to natural illumination. This significantly increases the complexity of the approach, as well as the number of parameters that require tuning. Enabled by a new large-scale dataset for training and analysis, we address this with a discriminative learning approach to shape from shading, which uses regression forests for efficient pixel-independent prediction and fast learning. Von Mises-Fisher distributions in the leaves of each tree enable the estimation of surface normals. To account for their expected spatial regularity, we introduce spatial features, including texton and silhouette features. The proposed silhouette features are computed from the occluding contours of the surface and provide scale-invariant context. Aside from computational efficiency, they enable good generalization to unseen data and importantly allow for a robust estimation of the reflectance map, extending our approach to the uncalibrated setting. Experiments show that our discriminative approach outperforms state-of-the-art methods on synthetic and real-world datasets."}
{"_id": "84a505024b58c0c3bb5b1bf12ee76f162ebf52d0", "title": "System Integration and Power-Flow Management for a Series Hybrid Electric Vehicle Using Supercapacitors and Batteries", "text": "In this paper, system integration and power-flow management algorithms for a four-wheel-driven series hybrid electric vehicle (HEV) having multiple power sources composed of a diesel-engine-based generator, lead acid battery bank, and supercapacitor bank are presented. The super-capacitor is utilized as a short-term energy storage device to meet the dynamic performance of the vehicle, while the battery is utilized as a mid-term energy storage for the electric vehicle (EV) mode operation due to its higher energy density. The generator based on an interior permanent magnet machine (IPMM), run by a diesel engine, provides the average power for the normal operation of the vehicle. Thanks to the proposed power-flow management algorithm, each of the energy sources is controlled appropriately and also the dynamic performance of the vehicle has been improved. The proposed power-flow management algorithm has been experimentally verified with a full-scale prototype vehicle."}
{"_id": "dfd653ad1409ecbcd8970f2b505dea0807e316ca", "title": "Linear AC LED driver with the multi-level structure and variable current regulator", "text": "This paper proposes a linear AC LED driver for LED lighting applications. The proposed circuit is small in size because the circuit structure consists of only semiconductors and resistors without any reactors and electrolytic capacitors. The current bypass circuit which is connected in parallel to the LED string consists of single MOSFET, single zener diode and two resistors. The MOSFET is operated in an active state by a self-bias circuit. Thus, an external controller and high voltage gate drivers are not required. The proposed circuit is experimentally validated by using a 7 W prototype. From the experimental results, the THD of input current is 2.1% and the power factor is 0.999. In addition, the simulation loss analysis demonstrates an efficiency of 87% for a 7 W prototype."}
{"_id": "166cee31d458a41872a50e81532b787845d92e70", "title": "Space-Vector PWM Control Synthesis for an H-Bridge Drive in Electric Vehicles", "text": "This paper deals with a synthesis of space-vector pulsewidth modulation (SVPWM) control methods applied for an H-bridge inverter feeding a three-phase permanent-magnet synchronous machine (PMSM) in electric-vehicle (EV) applications. First, a short survey of existing architectures of power converters, particularly those adapted to degraded operating modes, is presented. Standard SVPWM control methods are compared with three innovative methods using EV drive specifications in the normal operating mode. Then, a rigorous analysis of the margins left in the control strategy is presented for a semiconductor switch failure to fulfill degraded operating modes. Finally, both classic and innovative strategies are implemented in numerical simulation; their results are analyzed and discussed."}
{"_id": "a5a147be45a38cacff1b21a9c05fc8c408df237b", "title": "WALNUT: Waging Doubt on the Integrity of MEMS Accelerometers with Acoustic Injection Attacks", "text": "Cyber-physical systems depend on sensors to make automated decisions. Resonant acoustic injection attacks are already known to cause malfunctions by disabling MEMS-based gyroscopes. However, an open question remains on how to move beyond denial of service attacks to achieve full adversarial control of sensor outputs. Our work investigates how analog acoustic injection attacks can damage the digital integrity of a popular type of sensor: the capacitive MEMS accelerometer. Spoofing such sensors with intentional acoustic interference enables an out-of-spec pathway for attackers to deliver chosen digital values to microprocessors and embedded systems that blindly trust the unvalidated integrity of sensor outputs. Our contributions include (1) modeling the physics of malicious acoustic interference on MEMS accelerometers, (2) discovering the circuit-level security flaws that cause the vulnerabilities by measuring acoustic injection attacks on MEMS accelerometers as well as systems that employ on these sensors, and (3) two software-only defenses that mitigate many of the risks to the integrity of MEMS accelerometer outputs. We characterize two classes of acoustic injection attacks with increasing levels of adversarial control: output biasing and output control. We test these attacks against 20 models of capacitive MEMS accelerometers from 5 different manufacturers. Our experiments find that 75% are vulnerable to output biasing, and 65% are vulnerable to output control. To illustrate end-to-end implications, we show how to inject fake steps into a Fitbit with a $5 speaker. In our self-stimulating attack, we play a malicious music file from a smartphone's speaker to control the on-board MEMS accelerometer trusted by a local app to pilot a toy RC car. In addition to offering hardware design suggestions to eliminate the root causes of insecure amplification and filtering, we introduce two low-cost software defenses that mitigate output biasing attacks: randomized sampling and 180 degree out-of-phase sampling. These software-only approaches mitigate attacks by exploiting the periodic and predictable nature of the malicious acoustic interference signal. Our results call into question the wisdom of allowing microprocessors and embedded systems to blindly trust that hardware abstractions alone will ensure the integrity of sensor outputs."}
{"_id": "0d4d20c9f025a54b4c55f8d674f475306ebc88a6", "title": "Ant-Q: A Reinforcement Learning Approach to the Traveling Salesman Problem", "text": "In this paper we introduce Ant-Q, a family of algorithms which present many similarities with Q-learning (Watkins, 1989), and which we apply to the solution of symmetric and asymmetric instances of the traveling salesman problem (TSP). Ant-Q algorithms were inspired by work on the ant system (AS), a distributed algorithm for combinatorial optimization based on the metaphor of ant colonies which was recently proposed in (Dorigo, 1992; Dorigo, Maniezzo and Colorni, 1996). We show that AS is a particular instance of the Ant-Q family, and that there are instances of this family which perform better than AS. We experimentally investigate the functioning of Ant-Q and we show that the results obtained by Ant-Q on symmetric TSP's are competitive with those obtained by other heuristic approaches based on neural networks or local search. Finally, we apply Ant-Q to some difficult asymmetric TSP's obtaining very good results: Ant-Q was able to find solutions of a quality which usually can be found only by very specialized algorithms."}
{"_id": "18977c6f7abb245691f4268ccd116036bd2391f0", "title": "All-at-once Optimization for Coupled Matrix and Tensor Factorizations", "text": "Joint analysis of data from multiple sources has the potential to improve our understanding of the underlying structures in complex data sets. For instance, in restaurant recommendation systems, recommendations can be based on rating histories of customers. In addition to rating histories, customers\u2019 social networks (e.g., Facebook friendships) and restaurant categories information (e.g., Thai or Italian) can also be used to make better recommendations. The task of fusing data, however, is challenging since data sets can be incomplete and heterogeneous, i.e., data consist of both matrices, e.g., the person by person social network matrix or the restaurant by category matrix, and higher-order tensors, e.g., the \u201cratings\u201d tensor of the form restaurant by meal by person. In this paper, we are particularly interested in fusing data sets with the goal of capturing their underlying latent structures. We formulate this problem as a coupled matrix and tensor factorization (CMTF) problem where heterogeneous data sets are modeled by fitting outer-product models to higher-order tensors and matrices in a coupled manner. Unlike traditional approaches solving this problem using alternating algorithms, we propose an all-at-once optimization approach called CMTF-OPT (CMTF-OPTimization), which is a gradient-based optimization approach for joint analysis of matrices and higher-order tensors. We also extend the algorithm to handle coupled incomplete data sets. Using numerical experiments, we demonstrate that the proposed all-at-once approach is more accurate than the alternating least squares approach."}
{"_id": "7d6acd022c000e57f46795cc54506215b9b9ec33", "title": "A Tagged Corpus and a Tagger for Urdu", "text": "In this paper, we describe a release of a sizeable monolingual Urdu corpus automatically tagged with part-of-speech tags. We extend the work of Jawaid and Bojar (2012) who use three different taggers and then apply a voting scheme to disambiguate among the different choices suggested by each tagger. We run this complex ensemble on a large monolingual corpus and release the tagged corpus. Additionally, we use this data to train a single standalone tagger which will hopefully significantly simplify Urdu processing. The standalone tagger obtains the accuracy of 88.74% on test data."}
{"_id": "539ea86fa738afd939fb18566107c971461f8548", "title": "Learning as search optimization: approximate large margin methods for structured prediction", "text": "Mappings to structured output spaces (strings, trees, partitions, etc.) are typically learned using extensions of classification algorithms to simple graphical structures (eg., linear chains) in which search and parameter estimation can be performed exactly. Unfortunately, in many complex problems, it is rare that exact search or parameter estimation is tractable. Instead of learning exact models and searching via heuristic means, we embrace this difficulty and treat the structured output problem in terms of approximate search. We present a framework for learning as search optimization, and two parameter updates with convergence the-orems and bounds. Empirical evidence shows that our integrated approach to learning and decoding can outperform exact models at smaller computational cost."}
{"_id": "1219fb39b46aabd74879a7d6d3c724fb4e55aeae", "title": "Bricolage versus breakthrough : distributed and embedded agency in technology entrepreneurship", "text": "We develop a perspective on technology entrepreneurship as involving agency that is distributed across different kinds of actors. Each actor becomes involved with a technology, and, in the process, generates inputs that result in the transformation of an emerging technological path. The steady accumulation of inputs to a technological path generates a momentum that enables and constrains the activities of distributed actors. In other words, agency is not only distributed, but it is embedded as well. We explicate this perspective through a comparative study of processes underlying the emergence of wind turbines in Denmark and in United States. Through our comparative study, we flesh out \u201cbricolage\u201d and \u201cbreakthrough\u201d as contrasting approaches to the engagement of actors in shaping technological paths. \u00a9 2002 Elsevier Science B.V. All rights reserved."}
{"_id": "2266636d87e44590ade738b92377d1fe1bc5c970", "title": "Threshold selection using Renyi's entropy", "text": ""}
{"_id": "1672a134e0cebfef817c0c832eea1e54ffb094b0", "title": "UTHealth at SemEval-2016 Task 12: an End-to-End System for Temporal Information Extraction from Clinical Notes", "text": "The 2016 Clinical TempEval challenge addresses temporal information extraction from clinical notes. The challenge is composed of six sub-tasks, each of which is to identify: (1) event mention spans, (2) time expression spans, (3) event attributes, (4) time attributes, (5) events\u2019 temporal relations to the document creation times (DocTimeRel), and (6) narrative container relations among events and times. In this article, we present an end-to-end system that addresses all six sub-tasks. Our system achieved the best performance for all six sub-tasks when plain texts were given as input. It also performed best for narrative container relation identification when gold standard event/time annotations were given."}
{"_id": "beaaba420f5cef9b4564bc4e1ff88094a5fa2054", "title": "Discovering Molecular Functional Groups Using Graph Convolutional Neural Networks", "text": "Functional groups (FGs) serve as a foundation for analyzing chemical properties of organic molecules. Automatic discovery of FGs will impact various fields of research, including medicinal chemistry, by reducing the amount of lab experiments required for discovery or synthesis of new molecules. Here, we investigate methods based on graph convolutional neural networks (GCNNs) for localizing FGs that contribute to specific chemical properties. Molecules are modeled as undirected graphs with atoms as nodes and bonds as edges. Using this graph structure, we trained GCNNs in a supervised way on experimentally-validated molecular training sets to predict specific chemical properties, e.g., toxicity. Upon learning a GCNN, we analyzed its activation patterns to automatically identify FGs using four different methods: gradient-based saliency maps, Class Activation Mapping (CAM), gradient-weighted CAM (Grad-CAM), and Excitation Back-Propagation. We evaluated the contrastive power of these methods with respect to the specificity of the identified molecular substructures and their relevance for chemical functions. GradCAM had the highest contrastive power and generated qualitatively the best FGs. This work paves the way for automatic analysis and design of new molecules."}
{"_id": "a8f87a5ab16764e61aef3cbadcc52ca927bb392d", "title": "How to make large self-organizing maps for nonvectorial data", "text": "The self-organizing map (SOM) represents an open set of input samples by a topologically organized, finite set of models. In this paper, a new version of the SOM is used for the clustering, organization, and visualization of a large database of symbol sequences (viz. protein sequences). This method combines two principles: the batch computing version of the SOM, and computation of the generalized median of symbol strings."}
{"_id": "83c2183c5fd530bd1ff00ba51939680b4419840b", "title": "Structurally-Sensitive Multi-Scale Deep Neural Network for Low-Dose CT Denoising", "text": "Computed tomography (CT) is a popular medical imaging modality and enjoys wide clinical applications. At the same time, the X-ray radiation dose associated with CT scannings raises a public concern due to its potential risks to the patients. Over the past years, major efforts have been dedicated to the development of low-dose CT (LDCT) methods. However, the radiation dose reduction compromises the signal-to-noise ratio, leading to strong noise and artifacts that down-grade the CT image quality. In this paper, we propose a novel 3-D noise reduction method, called structurally sensitive multi-scale generative adversarial net, to improve the LDCT image quality. Specifically, we incorporate 3-D volumetric information to improve the image quality. Also, different loss functions for training denoising models are investigated. Experiments show that the proposed method can effectively preserve the structural and textural information in reference to the normal-dose CT images and significantly suppress noise and artifacts. Qualitative visual assessments by three experienced radiologists demonstrate that the proposed method retrieves more information and outperforms competing methods."}
{"_id": "2af586c64c32baeb445992e0ea6b76bbbbc30c7f", "title": "Massive parallelization of approximate nearest neighbor search on KD-tree for high-dimensional image descriptor matching", "text": ""}
{"_id": "15a4ef82d92b08c5c1332324d0820ec3d082bf3e", "title": "REGULARIZATION TOOLS: A Matlab package for analysis and solution of discrete ill-posed problems", "text": "The package REGULARIZATION TOOLS consists of 54 Matlab routines for analysis and solution of discrete ill-posed problems, i.e., systems of linear equations whose coefficient matrix has the properties that its condition number is very large, and its singular values decay gradually to zero. Such problems typically arise in connection with discretization of Fredholm integral equations of the first kind, and similar ill-posed problems. Some form of regularization is always required in order to compute a stabilized solution to discrete ill-posed problems. The purpose of REGULARIZATION TOOLS is to provide the user with easy-to-use routines, based on numerical robust and efficient algorithms, for doing experiments with regularization of discrete ill-posed problems. By means of this package, the user can experiment with different regularization strategies, compare them, and draw conclusions from these experiments that would otherwise require a major programming effert. For discrete ill-posed problems, which are indeed difficult to treat numerically, such an approach is certainly superior to a single black-box routine. This paper describes the underlying theory gives an overview of the package; a complete manual is also available."}
{"_id": "cb0da1ed189087c9ba716cc5c99c75b52430ec06", "title": "Transparent and Efficient CFI Enforcement with Intel Processor Trace", "text": "Current control flow integrity (CFI) enforcement approaches either require instrumenting application executables and even shared libraries, or are unable to defend against sophisticated attacks due to relaxed security policies, or both, many of them also incur high runtime overhead. This paper observes that the main obstacle of providing transparent and strong defense against sophisticated adversaries is the lack of sufficient runtime control flow information. To this end, this paper describes FlowGuard, a lightweight, transparent CFI enforcement approach by a novel reuse of Intel Processor Trace (IPT), a recent hardware feature that efficiently captures the entire runtime control flow. The main challenge is that IPT is designed for offline performance analysis and software debugging such that decoding collected control flow traces is prohibitively slow on the fly. FlowGuard addresses this challenge by reconstructing applications' conservative control flow graphs (CFG) to be compatible with the compressed encoding format of IPT, and labeling the CFG edges with credits in the help of fuzzing-like dynamic training. At runtime, FlowGuard separates fast and slow paths such that the fast path compares the labeled CFGs with the IPT traces for fast filtering, while the slow path decodes necessary IPT traces for strong security. We have implemented and evaluated FlowGuard on a commodity Intel Skylake machine with IPT support. Evaluation results show that FlowGuard is effective in enforcing CFI for several applications, while introducing only small performance overhead. We also show that, with minor hardware extensions, the performance overhead can be further reduced."}
{"_id": "f812347d46035d786de40c165a158160bb2988f0", "title": "Predictive coding as a model of cognition", "text": "Previous work has shown that predictive coding can provide a detailed explanation of a very wide range of low-level perceptual processes. It is also widely believed that predictive coding can account for high-level, cognitive, abilities. This article provides support for this view by showing that predictive coding can simulate phenomena such as categorisation, the influence of abstract knowledge on perception, recall and reasoning about conceptual knowledge, context-dependent behavioural control, and naive physics. The particular implementation of predictive coding used here (PC/BC-DIM) has previously been used to simulate low-level perceptual behaviour and the neural mechanisms that underlie them. This algorithm thus provides a single framework for modelling both perceptual and cognitive brain function."}
{"_id": "f59d8504d7c6e209e6f8bcb62346140214b244b7", "title": "Fine-tuning Deep Convolutional Networks for Plant Recognition", "text": "This paper describes the participation of the ECOUAN team in the LifeCLEF 2015 challenge. We used a deep learning approach in which the complete system was learned without hand-engineered components. We pre-trained a convolutional neural network using 1.8 million images and used a fine-tuning strategy to transfer learned recognition capabilities from general domains to the specific challenge of Plant Identification task. The classification accuracy obtained by our method outperformed the best result obtained in 2014. Our group obtained the 4th position among all teams and the 10th position among 18 runs."}
{"_id": "681faa552d147d606815bbd008bc1de0005f63ba", "title": "A Hybrid PCA-CART-MARS-Based Prognostic Approach of the Remaining Useful Life for Aircraft Engines", "text": "Prognostics is an engineering discipline that predicts the future health of a system. In this research work, a data-driven approach for prognostics is proposed. Indeed, the present paper describes a data-driven hybrid model for the successful prediction of the remaining useful life of aircraft engines. The approach combines the multivariate adaptive regression splines (MARS) technique with the principal component analysis (PCA), dendrograms and classification and regression trees (CARTs). Elements extracted from sensor signals are used to train this hybrid model, representing different levels of health for aircraft engines. In this way, this hybrid algorithm is used to predict the trends of these elements. Based on this fitting, one can determine the future health state of a system and estimate its remaining useful life (RUL) with accuracy. To evaluate the proposed approach, a test was carried out using aircraft engine signals collected from physical sensors (temperature, pressure, speed, fuel flow, etc.). Simulation results show that the PCA-CART-MARS-based approach can forecast faults long before they occur and can predict the RUL. The proposed hybrid model presents as its main advantage the fact that it does not require information about the previous operation states of the input variables of the engine. The performance of this model was compared with those obtained by other benchmark models (multivariate linear regression and artificial neural networks) also applied in recent years for the modeling of remaining useful life. Therefore, the PCA-CART-MARS-based approach is very promising in the field of prognostics of the RUL for aircraft engines."}
{"_id": "28ff2f3bf5403d7adc58f6aac542379806fa3233", "title": "Interpreting random forest models using a feature contribution method", "text": "Model interpretation is one of the key aspects of the model evaluation process. The explanation of the relationship between model variables and outputs is easy for statistical models, such as linear regressions, thanks to the availability of model parameters and their statistical significance. For \u201cblack box\u201d models, such as random forest, this information is hidden inside the model structure. This work presents an approach for computing feature contributions for random forest classification models. It allows for the determination of the influence of each variable on the model prediction for an individual instance. Interpretation of feature contributions for two UCI benchmark datasets shows the potential of the proposed methodology. The robustness of results is demonstrated through an extensive analysis of feature contributions calculated for a large number of generated random forest models."}
{"_id": "c8bbf44d7454f37e2e12713f48ca99b3f19ef915", "title": "Methodology of band rejection / addition for microstrip antennas design using slot line theory and current distribution analysis", "text": "Radio or wireless communication means to transfer information over long distance without using any wires. Millions of people exchange information every day using pager, cellular, telephones, laptops, various types of personal digital assistance (PDAs) and other wireless communication products. The worldwide interoperability for microwave access (Wi-Max) aims to provide wireless data over a long distance in variety of ways. It was based on IEEE 802.16 standard. It is an effective metropolitan area access technique with many favorable features like flexibility, cost, efficiency and fast networking, which not only provides wireless access, but also serves to expand the access to wired network. The coverage area of Wi-Max is around 30-50 km. it can provide high 100 Mbps data rates in 20 MHz bandwidth on fixed and nomadic applications in the 2-11 GHz frequencies. In this paper, a methodology of band rejection/addition for microstrip antennas design using slot line theory and current distribution analysis has been introduced and analyzed. The analysis and design are done by a commercial software. The radiation characteristics, such as; return loss, VSWR, input impedance, and the surface current densities have been introduced and discussed. Finally, the proposed optimum antenna design structure has been fabricated and the measured S-parameters of the proposed structure can be analyzed with network analyzer and compared with simulation results to demonstrate the excellent performance and meet the requirements for wireless communication applications."}
{"_id": "2704a9af1b368e2b68b0fe022b2fd48b8c7c25cc", "title": "Distance Metric Learning with Application to Clustering with Side-Information", "text": "Many algorithms rely critically on being given a good metric over their inputs. For instance, data can often be clustered in many \u201cplausible\u201d ways, and if a clustering algorithm such as K-means initially fails to find one that is meaningful to a user, the only recourse may be for the user to manually tweak the metric until sufficiently good clusters are found. For these and other applications requiring good metrics, it is desirable that we provide a more systematic way for users to indicate what they consider \u201csimilar.\u201d For instance, we may ask them to provide examples. In this paper, we present an algorithm that, given examples of similar (and, if desired, dissimilar) pairs of points in , learns a distance metric over that respects these relationships. Our method is based on posing metric learning as a convex optimization problem, which allows us to give efficient, local-optima-free algorithms. We also demonstrate empirically that the learned metrics can be used to significantly improve clustering performance."}
{"_id": "d3c79bd62814782c7075c7fea80d1f87e337a538", "title": "Comparing Clusterings - An Overview", "text": "As the amount of data we nowadays have to deal with becomes larger and larger, the methods that help us to detect structures in the data and to identify interesting subsets in the data become more and more important. One of these methods is clustering, i.e. segmenting a set of elements into subsets such that the elements in each subset are somehow \u201dsimiliar\u201d to each other and elements of different subsets are \u201dunsimilar\u201d. In the literature we can find a large variety of clustering algorithms, each having certain advantages but also certain drawbacks. Typical questions that arise in this context comprise:"}
{"_id": "09370d132a1e238a778f5e39a7a096994dc25ec1", "title": "Discriminant Adaptive Nearest Neighbor Classification", "text": "Nearest neighbor classification expects the class conditional probabilities to be locally constant, and suffers from bias in high dimensions We propose a locally adaptive form of nearest neighbor classification to try to finesse this curse of dimensionality. We use a local linear discriminant analysis to estimate an effective metric for computing neighborhoods. We determine the local decision boundaries from centroid information, and then shrink neighborhoods in directions orthogonal to these local decision boundaries, and elongate them parallel to the boundaries. Thereafter, any neighborhood-based classifier can be employed, using the modified neighborhoods. The posterior probabilities tend to be more homogeneous in the modified neighborhoods. We also propose a method for global dimension reduction, that combines local dimension information. In a number of examples, the methods demonstrate the potential for substantial improvements over nearest neighbour classification. Introduction We consider a discrimination problem with d classes and N training observations. The training observations consist of predictor measurements x = (zl,z2,...zp) on p predictors and the known class memberships. Our goal is to predict the class membership of an observation with predictor vector x0 Nearest neighbor classification is a simple and appealing approach to this problem. We find the set of K nearest neighbors in the training set to x0 and then classify x0 as the most frequent class among the K neighbors. Nearest neighbors is an extremely flexible classification scheme, and does not involve any pre-processing (fitting) of the training data. This can offer both space and speed advantages in very large problems: see Cover (1968), Duda & Hart (1973), McLachlan (1992) for background material on nearest neighborhood classification. Cover & Hart (1967) show that the one nearest neighbour rule has asymptotic error rate at most twice the Bayes rate. However in finite samples the curse of dimensionality can severely hurt the nearest neighbor rule. The relative radius of the nearest-neighbor sphere grows like r 1/p where p is the dimension and r the radius for p = 1, resulting in severe bias at the target point x. Figure 1 illustrates the situation for a simple example. Figure 1: The vertical strip denotes the NN region using only the X coordinate to find the nearest neighbor for the target point (solid dot). The sphere shows the NN region using both coordinates, and we see in this case it has extended into the class 1 region (and found the wrong class in this instance). Our illustration here is based on a 1-NN rule, but the same phenomenon ccurs for k-NN rules as well. Nearest neighbor techniques are based on the assumption that locally the class posterior probabilities are constant. While that is clearly true in the vertical strip using only coordinate X, using X and Y this is no longer true. The techniques outlined in the abstract are designed to overcome these problems. Figure 2 shows an example. There are two classes in two dimensions, one of which almost completely surrounds the other. The left panel shows a nearest neighborhood of size 25 at the target point (shown as origin), which is chosen to near the class boundary. The right panel shows the same size neighborhood using our discriminant adap142 KDD--95 From: KDD-95 Proceedings. Copyright \u00a9 1995, AAAI (www.aaai.org). 
All rights reserved."}
{"_id": "0bacca0993a3f51649a6bb8dbb093fc8d8481ad4", "title": "Constrained K-means Clustering with Background Knowledge", "text": "Clustering is traditionally viewed as an unsupervised method for data analysis. However, in some cases information about the problem domain is available in addition to the data instances themselves. In this paper, we demonstrate how the popular k-means clustering algorithm can be profitably modified to make use of this information. In experiments with artificial constraints on six data sets, we observe improvements in clustering accuracy. We also apply this method to the real-world problem of automatically detecting road lanes from GPS data and observe dramatic increases in performance."}
{"_id": "bed9545680bbbf2c4e922263e13534d199c98613", "title": "Promoting Student Metacognition", "text": "Imagine yourself as the instructor of an introductory undergraduate biology course. Two students from your course independently visit your office the week after the first exam. Both students are biology majors. Both regularly attend class and submit their assignments on time. Both appear to be eager, dedicated, and genuine students who want to learn biology. During each of their office hours visits, you ask them to share how they prepared for the first exam. Their stories are strikingly different (inspired by Ertmer and Newby, 1996). During office hours, Josephina expresses that she was happy the exam was on a Monday, because she had a lot of time to prepare the previous weekend. She shares that she started studying after work on Saturday evening and did not go out with friends that night. When queried, she also shares that she reread all of the assigned textbook material and made flashcards of the bold words in the text. She feels that she should have done well on the test, because she studied all Saturday night and all day on Sunday. She feels that she did everything she could do to prepare. That said, she is worried about what her grade will be, and she wants you to know that she studied really hard, so she should get a good grade on the exam. Later in the week, Maya visits your office. When asked how she prepared for the first exam, she explains that she has regularly reviewed the PowerPoint slides each evening after class since the beginning of the term 4 weeks ago. She also read the assigned textbook pages weekly, but expresses that she spent most of her time comparing the ideas in the PowerPoint slides with the information in the textbook to see how they were similar and different. She found several places in which things seemed not to agree, which confused her. She kept a running list of these confusions each week. When you ask what she did with these confusions, she shares that she"}
{"_id": "7e30d0c7aaaed7fa2d04fc8cc0fd3af8e24ca385", "title": "A Survey of Text Summarization Extractive Techniques", "text": "Text Summarization is condensing the source text into a shorter version preserving its information content and overall meaning. It is very difficult for human beings to manually summarize large documents of text. Text Summarization methods can be classified into extractive and abstractive summarization. An extractive summarization method consists of selecting important sentences, paragraphs etc. from the original document and concatenating them into shorter form. The importance of sentences is decided based on statistical and linguistic features of sentences. An abstractive summarization method consists of understanding the original text and re-telling it in fewer words. It uses linguistic methods to examine and interpret the text and then to find the new concepts and expressions to best describe it by generating a new shorter text that conveys the most important information from the original text document. In this paper, a Survey of Text Summarization Extractive techniques has been presented."}
{"_id": "7294cc55a3b09a43cde606085b1a5742277b9e09", "title": "Multiple SVM-RFE for gene selection in cancer classification with expression data", "text": "This paper proposes a new feature selection method that uses a backward elimination procedure similar to that implemented in support vector machine recursive feature elimination (SVM-RFE). Unlike the SVM-RFE method, at each step, the proposed approach computes the feature ranking score from a statistical analysis of weight vectors of multiple linear SVMs trained on subsamples of the original training data. We tested the proposed method on four gene expression datasets for cancer classification. The results show that the proposed feature selection method selects better gene subsets than the original SVM-RFE and improves the classification accuracy. A Gene Ontology-based similarity assessment indicates that the selected subsets are functionally diverse, further validating our gene selection method. This investigation also suggests that, for gene expression-based cancer classification, average test error from multiple partitions of training and test sets can be recommended as a reference of performance quality."}
{"_id": "2601e35c203cb160bc82e7840c15b193f9c66404", "title": "Photogeometric Scene Flow for High-Detail Dynamic 3D Reconstruction", "text": "Photometric stereo (PS) is an established technique for high-detail reconstruction of 3D geometry and appearance. To correct for surface integration errors, PS is often combined with multiview stereo (MVS). With dynamic objects, PS reconstruction also faces the problem of computing optical flow (OF) for image alignment under rapid changes in illumination. Current PS methods typically compute optical flow and MVS as independent stages, each one with its own limitations and errors introduced by early regularization. In contrast, scene flow methods estimate geometry and motion, but lack the fine detail from PS. This paper proposes photogeometric scene flow (PGSF) for high-quality dynamic 3D reconstruction. PGSF performs PS, OF, and MVS simultaneously. It is based on two key observations: (i) while image alignment improves PS, PS allows for surfaces to be relit to improve alignment, (ii) PS provides surface gradients that render the smoothness term in MVS unnecessary, leading to truly data-driven, continuous depth estimates. This synergy is demonstrated in the quality of the resulting RGB appearance, 3D geometry, and 3D motion."}
{"_id": "08394b082370493f381bab44158dc316fe6eef2a", "title": "Data Driven Ontology Evaluation", "text": "The evaluation of ontologies is vital for the growth of the Semantic Web. We consider a number of problems in evaluating a knowledge artifact like an ontology. We propose in this paper that one approach to ontology evaluation should be corpus or data driven. A corpus is the most accessible form of knowledge and its use allows a measure to be derived of the \u2018fit\u2019 between an ontology and a domain of knowledge. We consider a number of methods for measuring this \u2018fit\u2019 and propose a measure to evaluate structural fit, and a probabilistic approach to identifying the best ontology."}
{"_id": "d1a5a5b6aac08e09c0056c5f70906cb6f96e8b4b", "title": "Learning Across Scales - Multiscale Methods for Convolution Neural Networks", "text": "In this work, we establish the relation between optimal control and training deep Convolution Neural Networks (CNNs). We show that the forward propagation in CNNs can be interpreted as a time-dependent nonlinear differential equation and learning can be seen as controlling the parameters of the differential equation such that the network approximates the data-label relation for given training data. Using this continuous interpretation, we derive two new methods to scale CNNs with respect to two different dimensions. The first class of multiscale methods connects low-resolution and high-resolution data using prolongation and restriction of CNN parameters inspired by algebraic multigrid techniques. We demonstrate that our method enables classifying highresolution images using CNNs trained with low-resolution images and vice versa and warm-starting the learning process. The second class of multiscale methods connects shallow and deep networks and leads to new training strategies that gradually increase the depths of the CNN while re-using parameters for initializations."}
{"_id": "d9d1c5a04bc9e6a117facc2701a28003b9df42dd", "title": "Toolpath Planning for Continuous Extrusion Additive Manufacturing", "text": "Recent work in additive manufacturing has introduced a new class of 3D printers that operate by extruding slurries and viscous mixtures such as silicone, glass, epoxy, and concrete, but because of the fluid flow properties of these materials it is difficult to stop extrusion once the print has begun. Conventional toolpath generation for 3D printing is based on the assumption that the flow of material can be controlled precisely and the resulting path includes instructions to disable extrusion and move the print head to another portion of the model. A continuous extrusion printer cannot disable material flow, and so these toolpaths produce low quality prints with wasted material. This paper outlines a greedy algorithm for post-processing toolpath instructions that employs a Traveling Salesperson Problem (TSP) solver to reduce the distance traveled between subsequent space-filling curves and layers, which reduces unnecessary extrusion by at least 20% for simple object models on an open-source 3D printer."}
{"_id": "0e8b8e0c37b0ebc9c36b99103a487dbbbdf9ee97", "title": "Model Predictive Control System Design of a passenger car for a Valet Parking Scenario", "text": ""}
{"_id": "76278b9afb45ccb7c6e633a5747054f94e5fcc23", "title": "Artificial Intelligence MArkup Language: A Brief Tutorial", "text": "The purpose of this paper is to serve as a reference guide for the development of chatterbots implemented with the AIML language. In order to achieve this, the main concepts in Pattern Recognition area are described because the AIML uses such theoretical framework in their syntactic and semantic structures. After that, AIML language is described and each AIML command/tag is followed by an application example. Also, the usage of AIML embedded tags for the handling of sequence dialogue limitations between humans and machines is shown. Finally, computer systems that assist in the design of chatterbots with the AIML language are classified and described."}
{"_id": "677974d1ee2bd8d53a139b2f5805cca883f4d710", "title": "An Analysis of the Elastic Net Approach to the Traveling Salesman Problem", "text": "This paper analyzes the elastic net approach (Durbin and Willshaw 1987) to the traveling salesman problem of finding the shortest path through a set of cities. The elastic net approach jointly minimizes the length of an arbitrary path in the plane and the distance between the path points and the cities. The tradeoff between these two requirements is controlled by a scale parameter K. A global minimum is found for large K, and is then tracked to a small value. In this paper, we show that (1) in the small K limit the elastic path passes arbitrarily close to all the cities, but that only one path point is attracted to each city, (2) in the large K limit the net lies at the center of the set of cities, and (3) at a critical value of K the energy function bifurcates. We also show that this method can be interpreted in terms of extremizing a probability distribution controlled by K. The minimum at a given K corresponds to the maximum a posteriori (MAP) Bayesian estimate of the tour under a natural statistical interpretation. The analysis presented in this paper gives us a better understanding of the behavior of the elastic net, allows us to better choose the parameters for the optimization, and suggests how to extend the underlying ideas to other domains."}
{"_id": "223754b54c2fa60578b32253b2c7de3ddf6447c5", "title": "A Comparative Analysis and Study of Multiview CNN Models for Joint Object Categorization and Pose Estimation", "text": "In the Object Recognition task, there exists a dichotomy between the categorization of objects and estimating object pose, where the former necessitates a view-invariant representation, while the latter requires a representation capable of capturing pose information over different categories of objects. With the rise of deep architectures, the prime focus has been on object category recognition. Deep learning methods have achieved wide success in this task. In contrast, object pose estimation using these approaches has received relatively less attention. In this work, we study how Convolutional Neural Networks (CNN) architectures can be adapted to the task of simultaneous object recognition and pose estimation. We investigate and analyze the layers of various CNN models and extensively compare between them with the goal of discovering how the layers of distributed representations within CNNs represent object pose information and how this contradicts with object category representations. We extensively experiment on two recent large and challenging multi-view datasets and we achieve better than the state-of-the-art."}
{"_id": "c9fe03fc61635732db2955ae965b0babfb346d50", "title": "Optimizing esthetics for implant restorations in the anterior maxilla: anatomic and surgical considerations.", "text": "The placement of dental implants in the anterior maxilla is a challenge for clinicians because of patients' exacting esthetic demands and difficult pre-existing anatomy. This article presents anatomic and surgical considerations for these demanding indications for implant therapy. First, potential causes of esthetic implant failures are reviewed, discussing anatomic factors such as horizontal or vertical bone deficiencies and iatrogenic factors such as improper implant selection or the malpositioning of dental implants for an esthetic implant restoration. Furthermore, aspects of preoperative analysis are described in various clinical situations, followed by recommendations for the surgical procedures in single-tooth gaps and in extended edentulous spaces with multiple missing teeth. An ideal implant position in all 3 dimensions is required. These mesiodistal, apicocoronal, and orofacial dimensions are well described, defining \"comfort\" and \"danger\" zones for proper implant position in the anterior maxilla. During surgery, the emphasis is on proper implant selection to avoid oversized implants, careful and low-trauma soft tissue handling, and implant placement in a proper position using either a periodontal probe or a prefabricated surgical guide. If missing, the facial bone wall is augmented using a proper surgical technique, such as guided bone regeneration with barrier membranes and appropriate bone grafts and/or bone substitutes. Finally, precise wound closure using a submerged or a semi-submerged healing modality is recommended. Following a healing period of between 6 and 12 weeks, a reopening procedure is recommended with a punch technique to initiate the restorative phase of therapy."}
{"_id": "28a7bad2788637fcf83809cd5725ea847fb74832", "title": "EEG correlates of task engagement and mental workload in vigilance, learning, and memory tasks.", "text": "INTRODUCTION\nThe ability to continuously and unobtrusively monitor levels of task engagement and mental workload in an operational environment could be useful in identifying more accurate and efficient methods for humans to interact with technology. This information could also be used to optimize the design of safer, more efficient work environments that increase motivation and productivity.\n\n\nMETHODS\nThe present study explored the feasibility of monitoring electroencephalo-graphic (EEG) indices of engagement and workload acquired unobtrusively and quantified during performance of cognitive tests. EEG was acquired from 80 healthy participants with a wireless sensor headset (F3-F4,C3-C4,Cz-POz,F3-Cz,Fz-C3,Fz-POz) during tasks including: multi-level forward/backward-digit-span, grid-recall, trails, mental-addition, 20-min 3-Choice Vigilance, and image-learning and memory tests. EEG metrics for engagement and workload were calculated for each 1 -s of EEG.\n\n\nRESULTS\nAcross participants, engagement but not workload decreased over the 20-min vigilance test. Engagement and workload were significantly increased during the encoding period of verbal and image-learning and memory tests when compared with the recognition/ recall period. Workload but not engagement increased linearly as level of difficulty increased in forward and backward-digit-span, grid-recall, and mental-addition tests. EEG measures correlated with both subjective and objective performance metrics.\n\n\nDISCUSSION\nThese data in combination with previous studies suggest that EEG engagement reflects information-gathering, visual processing, and allocation of attention. EEG workload increases with increasing working memory load and during problem solving, integration of information, analytical reasoning, and may be more reflective of executive functions. Inspection of EEG on a second-by-second timescale revealed associations between workload and engagement levels when aligned with specific task events providing preliminary evidence that second-by-second classifications reflect parameters of task performance."}
{"_id": "2e827db520b6d2d781aaa9ec744ccdd7a557c641", "title": "MuDeL: a language and a system for describing and generating mutants", "text": "Mutation Testing is an approach for assessing the quality of a test case suite by analyzing its ability in distinguishing the product under test from a set of alternative products, the so-called mutants. The mutants are generated from the product under test by applying a set of mutant operators, which produce products with slight syntactical differences. The mutant operators are usually based on typical errors that occurs during the software development and can be related to a fault model. In this paper, we propose a language \u2014 named MuDeL \u2014 for describing mutant operators aiming not only at automating the mutant generation, but also at providing precision and formality to the operator descriptions. The language was designed using concepts that come from transformational and logical programming paradigms, as well as from context-free grammar theory. The language is illustrated with some simple examples. We also describe the mudelgen system, developed to support this language."}
{"_id": "e10d43df2cb088c0fbb129261331ab76c32697fa", "title": "Direct-on-line-start permanent-magnet-assisted synchronous reluctance machine with ferrite magnets", "text": "The possibility of applying low-cost ferrite magnets in a direct-on-line-start permanent-magnet-assisted synchronous reluctance machine (DOL PMaSynRM) for industrial applications, such as pump or ventilation systems was studied. The main target of this machine topology was to achieve a higher efficiency class compared to a counterpart induction machine (IM). Because of the weak properties of the ferrite magnets the motor had to be optimized for a low magnetic stress starting from the permanent magnets' point of view. The optimization procedure includes finding of a suitable rotor construction with a proper number of barriers, thickness of the barriers and thickness of the ferrite magnets. More detailed attention was dedicated to the analysis and elimination of irreversible demagnetization risk of ferrite magnets during start up period."}
{"_id": "6e407ee7ef1edc319ca5fd5e12f67ce46158b4f7", "title": "Internet use and the logics of personal empowerment in health.", "text": "OBJECTIVES\nThe development of personal involvement and responsibility has become a strategic issue in health policy. The main goal of this study is to confirm the coexistence of three logics of personal empowerment through health information found on the Internet.\n\n\nMETHODS\nA theoretical framework was applied to analyze personal empowerment from the user's perspective. A well-established Canadian Web site that offers information on personal health was used as a case study. A close-ended questionnaire was completed online by 2275 visitors and members of the Web site.\n\n\nRESULTS\nThe findings confirm that the development of feelings of competence and control through Internet use is structured around three different logics. This implies three types of aptitudes that are fostered when the Internet is used to seek health information: doing what is prescribed (the professional logic), making choices based on personal judgment (the consumer logic), and mutual assistance (the community logic).\n\n\nCONCLUSIONS\nA recurring issue in three logics is the balance of roles and responsibilities required between the individual and the health provider."}
{"_id": "abe5043d7f72ae413c4f616817f18f814d647c7a", "title": "Serendipity in the Research Literature: A Phenomenology of Serendipity Reporting", "text": "The role of information sciences is to connect people with the information they need to accomplish the tasks that contribute to the greater societal good. While evidence of the wonderful contributions arising from serendipitous events abound, the framework describing the information behaviors exhibited during a serendipitous experience is just emerging and additional detail regarding the factors influencing those behaviors is needed in order to support these experiences effectively. Furthermore, it is important to understand the whole process of serendipity to fully appreciate the impact of research policies, disciplinary traditions and academic reporting practices on this unique type of information behavior. This study addresses those need by examining the phenomenon of serendipity as it is reported by biomedical and radiography researchers. A mixed method content analysis of existing research reports will be incorporated with semi-structured interviews of serendipity reporters to gain a robust understanding of the phenomenon of serendipity, and provide detail that may inform the design of information environments."}
{"_id": "e2f682775355e20f460c5f07a8a8612a0d50db8e", "title": "Zero-forcing beamforming with block diagonalization scheme for Coordinated Multi-Point transmission", "text": "Coordinated Multi-Point (CoMP) transmission is a technology targeted for Long Term Evolution Advanced (LTE-A). The Joint Processing (JP) technique for CoMP can maximize system performance, which is achieved mainly with channel information-based beamforming algorithms. Precoding methods perform differently in various CoMP scenarios. In this paper, we propose a joint processing scheme in downlink CoMP transmission system. We apply block diagonal beamforming to downlink transmission, and assume perfect knowledge of downlink channels and transmit messages at each transmit point."}
{"_id": "9e23ea65a0ab63d6024ea53920745bd6abe591a8", "title": "Social Computing for Mobile Big Data", "text": "Mobile big data contains vast statistical features in various dimensions, including spatial, temporal, and the underlying social domain. Understanding and exploiting the features of mobile data from a social network perspective will be extremely beneficial to wireless networks, from planning, operation, and maintenance to optimization and marketing."}
{"_id": "3dfd88c034e4984a00e0cef1fe57f8064dceb644", "title": "Tobler ' s First Law of Geography , Self Similarity , and Perlin Noise : A Large Scale Analysis of Gradient Distribution in Southern Utah with Application to Procedural Terrain Generation", "text": "A statistical analysis finds that in a 160,000 square kilometer region of southern Utah gradients appear to be exponentially distributed at resolutions from 5m up to 1km. A simple modification to the Perlin noise generator changing the gradient distribution in each octave to an exponential distribution results in realistic and interesting procedurally generated terrain. The inverse transform sampling method is used in the amortized noise algorithm to achieve an exponential distribution in each octave, resulting in the generation of infinite non-repeating terrain with the same characteristics."}
{"_id": "546574329c45069faf241ba98e378151d776c458", "title": "\"My Data Just Goes Everywhere: \" User Mental Models of the Internet and Implications for Privacy and Security", "text": "Many people use the Internet every day yet know little about how it really works. Prior literature diverges on how people\u2019s Internet knowledge affects their privacy and security decisions. We undertook a qualitative study to understand what people do and do not know about the Internet and how that knowledge affects their responses to privacy and security risks. Lay people, as compared to those with computer science or related backgrounds, had simpler mental models that omitted Internet levels, organizations, and entities. People with more articulated technical models perceived more privacy threats, possibly driven by their more accurate understanding of where specific risks could occur in the network. Despite these differences, we did not find a direct relationship between people\u2019s technical background and the actions they took to control their privacy or increase their security online. Consistent with other work on user knowledge and experience, our study suggests a greater emphasis on policies and systems that protect privacy and security without relying too much on users\u2019 security practices."}
{"_id": "7c50b02fe69da0381262a2a8e088fcb7f6399937", "title": "Thirty-Five Years of Research on Neuro-Linguistic Programming . NLP Research Data Base . State of the Art or Pseudoscientific Decoration ?", "text": "The huge popularity of Neuro-Linguistic Programming (NLP) therapies and training has not been accompanied by knowledge of the empirical underpinnings of the concept. The article presents the concept of NLP in the light of empirical research in the Neuro-Linguistic Programming Research Data Base. From among 315 articles the author selected 63 studies published in journals from the Master Journal List of ISI. Out of 33 studies, 18.2% show results supporting the tenets of NLP, 54.5% results non-supportive of the NLP tenets and 27.3% brings uncertain results. The qualitative analysis indicates the greater weight of the non-supportive studies and their greater methodological worth against the ones supporting the tenets. Results contradict the claim of an empirical basis of NLP."}
{"_id": "240eed85e5badbc8693fcd61fd05df8792e964f0", "title": "Cloud Computing: A Review", "text": "Cloud computing is a development of parallel, distributed and grid computing which provides computing potential as a service to clients rather than a product. Clients can access software resources, valuable information and hardware devices as a subscribed and monitored service over a network through cloud computing.Due to large number of requests for access to resources and service level agreements between cloud service providers and clients, few burning issues in cloud environment like QoS, Power, Privacy and Security, VM Migration, Resource Allocation and Scheduling need attention of research community.Resource allocation among multiple clients has to be ensured as per service level agreements. Several techniques have been invented and tested by research community for generation of optimal schedules in cloud computing. A few promising approaches like Metaheuristics, Greedy, Heuristic technique and Genetic are applied for task scheduling in several parallel and distributed systems. This paper presents a review on scheduling proposals in cloud environment."}
{"_id": "3a4fe8f246ad7461e52741780010e288c147906c", "title": "Patient satisfaction and medication adherence assessment amongst patients at the diabetes medication therapy adherence clinic.", "text": "AIMS\nTo determine the satisfaction and current adherence status of patients with diabetes mellitus at the diabetes Medication Therapy Adherence Clinic and the relationship between patient satisfaction and adherence.\n\n\nMETHODS\nThis cross-sectional descriptive study was carried out at three government hospitals in the state of Johor, Malaysia. Patient's satisfaction was measured using the Patient Satisfaction with Pharmaceutical Care Questionnaire; medication adherence was measured using the eight-item Morisky Medication Adherence Scale.\n\n\nRESULTS\nOf n=165 patients, 87.0% of patients were satisfied with DMTAC service (score 60-100) with mean scores of 76.8. On the basis of MMAS, 29.1% had a medium rate and 26.1% had a high rate of adherence. Females are 3.02 times more satisfied with the pharmaceutical service compared to males (OR 3.03, 95% CI 1.12-8.24, p<0.05) and non-Malays are less satisfied with pharmaceutical care provided during DMTAC compared to Malays (OR 0.32, 95% CI 0.12-0.85, p<0.05). Older patients age group \u226560 years were 3.29 times more likely to adhere to their medications (OR 3.29, 95% CI 1.10-9.86, p<0.05). Females were the most adherent compared to males (OR 2.33, 95%CI 1.10-4.93, p<0.05) and patients with secondary level of education were 2.72 times more adherent to their medications compared to those in primary school and no formal education (OR 2.72, 95%CI 1.13-6.55, p<0.05). There is a significant (p<0.01), positive fair correlation (r=0.377) between satisfaction and adherence.\n\n\nCONCLUSION\nPatients were highly satisfied with DMTAC service, while their adherence levels were low. There is an association between patient satisfaction and adherence."}
{"_id": "176256ef634abe50ec11a3ef1538b4e485608a66", "title": "Wind Turbine Structural Health Monitoring : A Short Investigation Based on SCADA Data", "text": "The use of offshore wind farms has been growing in recent years, as steadier and higher wind speeds can be generally found over water compared to land. Moreover, as human activities tend to complicate the construction of land wind farms, offshore locations, which can be found more easily near densely populated areas, can be seen as an attractive choice. However, the cost of an offshore wind farm is relatively high, and therefore their reliability is crucial if they ever need to be fully integrated into the energy arena. As wind turbines have become more complex, efficient, and expensive structures, they require more sophisticated monitoring systems, especially in offshore sites where the financial losses due to failure could be substantial. This paper presents the preliminary analysis of supervisor control and data acquisition (SCADA) extracts from the Lillgrund wind farm for the purposes of structural health monitoring. A machine learning approach is applied in order to produce individual power curves, and then predict measurements of the power produced of each wind turbine from the measurements of the other wind turbines in the farm. A comparison between neural network and Gaussian process regression is also made."}
{"_id": "2c03df8b48bf3fa39054345bafabfeff15bfd11d", "title": "Deep Residual Learning for Image Recognition", "text": "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers - 8\u00d7 deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation."}
{"_id": "5763c2c62463c61926c7e192dcc340c4691ee3aa", "title": "Learning a Deep Convolutional Network for Image Super-Resolution", "text": "We propose a deep learning method for single image superresolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the lowresolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage."}
{"_id": "83de43bc849ad3d9579ccf540e6fe566ef90a58e", "title": "A Public Domain Dataset for Human Activity Recognition using Smartphones", "text": "Human-centered computing is an emerging research field that aims to understand human behavior and integrate users and their social context with computer systems. One of the most recent, challenging and appealing applications in this framework consists in sensing human body motion using smartphones to gather context information about people actions. In this context, we describe in this work an Activity Recognition database, built from the recordings of 30 subjects doing Activities of Daily Living (ADL) while carrying a waist-mounted smartphone with embedded inertial sensors, which is released to public domain on a well-known on-line repository. Results, obtained on the dataset by exploiting a multiclass Support Vector Machine (SVM), are also acknowledged."}
{"_id": "ba02ed6083ec066e4a7494883b3ef373ff78e802", "title": "On deep learning-based channel decoding", "text": "We revisit the idea of using deep neural networks for one-shot decoding of random and structured codes, such as polar codes. Although it is possible to achieve maximum a posteriori (MAP) bit error rate (BER) performance for both code families and for short codeword lengths, we observe that (i) structured codes are easier to learn and (ii) the neural network is able to generalize to codewords that it has never seen during training for structured, but not for random codes. These results provide some evidence that neural networks can learn a form of decoding algorithm, rather than only a simple classifier. We introduce the metric normalized validation error (NVE) in order to further investigate the potential and limitations of deep learning-based decoding with respect to performance and complexity."}
{"_id": "9c263ada2509f84c1f7c49504742c98f04d827bc", "title": "Automated plant Watering system", "text": "In daily operations related to farming or gardening Watering is the most important cultural practice and the most labor-intensive task. No matter whichever weather it is, either too hot and dry or too cloudy and wet, you want to be able to control the amount of water that reaches your plants. Modern watering systems could be effectively used to water plants when they need it. But this manual process of watering requires two important aspects to be considered: when and how much to water. In order to replace manual activities and making gardener's work easier, we have create automatic plant watering system. By adding automated plant watering system to your garden or agricultural field, you will help all of your plants reach their fullest potential as well as conserving water. Using sprinklers drip emitters, or a combination of both, we can design a system that is ideal for every plant in our yard. For implementation of automatic plant watering system, we have used combination of sprinkler systems, pipes, and nozzles. This project uses the ATmega328 microcontroller. It is programmed to sense moisture level of plants at particular instance of time, if the moisture content is less than specified threshold which is predefined according to particular plant's water need then desired amount of water is supplied till it reaches threshold. Generally, plants need to be watered twice a day, morning and evening. Thus, the microcontroller is programmed to water plants two times per day. System is designed in such a way that it reports its current state as well as remind the user to add water to the tank. All this notifications are made through mobile application. We hope that through this prototype we all can enjoy having plants, without being worried about absent or forgetfulness."}
{"_id": "a22eb623224133c5966aadc2b11d9618d8270b3b", "title": "Energy-Efficient I/O Thread Schedulers for NVMe SSDs on NUMA", "text": "Non-volatile memory express (NVMe) based SSDs and the NUMA platform are widely adopted in servers to achieve faster storage speed and more powerful processing capability. As of now, very little research has been conducted to investigate the performance and energy efficiency of the state-of-the-art NUMA architecture integrated with NVMe SSDs, an emerging technology used to host parallel I/O threads. As this technology continues to be widely developed and adopted, we need to understand the runtime behaviors of such systems in order to design software runtime systems that deliver optimal performance while consuming only the necessary amount of energy. This paper characterizes the runtime behaviors of a Linux-based NUMA system employing multiple NVMe SSDs. Our comprehensive performance and energy-efficiency study using massive numbers of parallel I/O threads shows that the penalty due to CPU contention is much smaller than that due to remote access of NVMe SSDs. Based on this insight, we develop a dynamic \"lesser evil\" algorithm called ESN, to minimize the impact of these two types of penalties. ESN is an energy-efficient profiling-based I/O thread scheduler for managing I/O threads accessing NVMe SSDs on NUMA systems. Our empirical evaluation shows that ESN can achieve optimal I/O throughput and latency while consuming up to 50% less energy and using fewer CPUs."}
{"_id": "3ec46c96b22b55cfc4187a27d6a0b21a2e44f955", "title": "Cascade Mask Generation Framework for Fast Small Object Detection", "text": "Detecting small objects is a challenging task. Existing CNN-based objection detection pipeline faces such a dilemma: using a high-resolution image as input incurs high computational cost, but using a low-resolution image as input loses the feature representation of small objects and therefore leads to low accuracy. In this work, we propose a cascade mask generation framework to tackle this issue. The proposed framework takes in multi-scale images as input and processes them in ascending order of the scale. Each processing stage outputs object proposals as well as a region-of-interest (RoI) mask for the next stage. With RoI convolution, the masked regions can be excluded from the computation in the next stage. The procedure continues until the largest scale image is processed. Finally, the object proposals generated from multiple scales are classified by a post classifier. Extensive experiments on Tsinghua-Tencent 100K traffic sign benchmark demonstrate that our approach achieves state-of-the-art small object detection performance at a significantly improved speed-accuracy tradeoff compared with previous methods."}
{"_id": "2db168f14f3169b8939b843b9f4caf78c3884fb3", "title": "Broadband Bent Triangular Omnidirectional Antenna for RF Energy Harvesting", "text": "In this letter, a broadband bent triangular omnidirectional antenna is presented for RF energy harvesting. The antenna has a bandwidth for VSWR \u2264 2 from 850 MHz to 1.94 GHz. The antenna is designed to receive both horizontal and vertical polarized waves and has a stable radiation pattern over the entire bandwidth. Antenna has also been optimized for energy harvesting application and it is designed for 100 \u03a9 input impedance to provide a passive voltage amplification and impedance matching to the rectifier. A peak efficiency of 60% and 17% is obtained for a load of 500 \u03a9 at 980 and 1800 MHz, respectively. At a cell site while harvesting all bands simultaneously a voltage of 3.76 V for open circuit and 1.38 V across a load of 4.3 k \u03a9 is obtained at a distance of 25 m using an array of two elements of the rectenna."}
{"_id": "484ac571356251355d3e24dcb23bdd6d0911bd94", "title": "Graph Indexing: Tree + Delta >= Graph", "text": "Recent scientific and technological advances have witnessed an abundance of structural patterns modeled as graphs. As a result, it is of special interest to process graph containment queries effectively on large graph databases. Given a graph database G, and a query graph q, the graph containment query is to retrieve all graphs in G which contain q as subgraph(s). Due to the vast number of graphs in G and the nature of complexity for subgraph isomorphism testing, it is desirable to make use of high-quality graph indexing mechanisms to reduce the overall query processing cost. In this paper, we propose a new cost-effective graph indexing method based on frequent tree-features of the graph database. We analyze the effectiveness and efficiency of tree as indexing feature from three critical aspects: feature size, feature selection cost, and pruning power. In order to achieve better pruning ability than existing graph-based indexing methods, we select, in addition to frequent tree-features (Tree), a small number of discriminative graphs (\u2206) on demand, without a costly graph mining process beforehand. Our study verifies that (Tree+\u2206) is a better choice than graph for indexing purpose, denoted (Tree+\u2206 \u2265Graph), to address the graph containment query problem. It has two implications: (1) the index construction by (Tree+\u2206) is efficient, and (2) the graph containment query processing by (Tree+\u2206) is efficient. Our experimental studies demonstrate that (Tree+\u2206) has a compact index structure, achieves an order of magnitude better performance in index construction, and most importantly, outperforms up-to-date graphbased indexing methods: gIndex and C-Tree, in graph containment query processing."}
{"_id": "2063222c5ce0dd233fa3056ddc245fca26bd5cf2", "title": "Deep learning based human action recognition: A survey", "text": "Human action recognition has attracted much attentions because of its great potential applications. With the rapid development of computer performance and Internet, the methods of human action recognition based on deep learning become mainstream and develop at a breathless pace. This paper will give a novel reasonable taxonomy and a review of deep learning human action recognition methods based on color videos, skeleton sequences and depth maps. In addition, some datasets and effective tricks in action recognition deep learning methods will be introduced, and also the development trend is discussed."}
{"_id": "22749899b50c5113516b9820f875a580910aa746", "title": "A compact dual-band (L1/L2) GPS antenna design", "text": "A small slot-loaded patch antenna design developed for receiving both L1 and L2 bands GPS signals is discussed. The dual band coverage is achieved by using a patch mode at L2 band and a slot mode at L1 band. High dielectric material and meandered slot line are employed to reduce the antenna size down to 25.4 mm in diameter. The RHCP is achieved by combining two orthogonal modes via a small 0\u00b0-90\u00b0 hybrid chip. Both patch and slot modes share a single proximity probe conveniently located on the side of the antenna (Fig.1). This paper discusses about the design procedure as well as simulated antenna performance."}
{"_id": "d01cf53d8561406927613b3006f4128199e05a6a", "title": "PROBABILISTIC NEURAL NETWORK", "text": "This paper reports results of artificial neural network for robot navigation tasks. Machine learning methods have proven usability in many complex problems concerning mobile robots control. In particular we deal with the well-known strategy of navigating by \u201cwall-following\u201d. In this study, probabilistic neural network (PNN) structure was used for robot navigation tasks. The PNN result was compared with the results of the Logistic Perceptron, Multilayer Perceptron, Mixture of Experts and Elman neural networks and the results of the previous studies reported focusing on robot navigation tasks and using same dataset. It was observed the PNN is the best classification accuracy with 99,635% accuracy using same dataset."}
{"_id": "61f8cca7f449990e9e3bc541c1fac71fd17fcfe4", "title": "Trends and Challenges in CMOS Design for Emerging 60 GHz WPAN Applications", "text": "The extensive growth of wireless communications industry is creating a big market opportunity. Wireless operators are currently searching for new solutions which would be implemented into the existing wireless communication networks to provide the broader bandwidth, the better quality and new value-added services. In the last decade, most commercial efforts were focused on the 1-10 GHz spectrum for voice and data applications for mobile phones and portable computers (Niknejad & Hashemi, 2008). Nowadays, the interest is growing in applications that use high rate wireless communications. Multigigabit-per-second communication requires a very large bandwidth. The Ultra-Wide Band (UWB) technology was basically used for this issue. However, this technology has some shortcomings including problems with interference and a limited data rate. Furthermore, the 3\u20135 GHz spectrum is relatively crowded with many interferers appearing in the WiFi bands (Niknejad & Hashemi, 2008). The use of millimeter wave frequency band is considered the most promising technology for broadband wireless. In 2001, the Federal Communications Commission (FCC) released a set of rules governing the use of spectrum between 57 and 66 GHz (Baldwin, 2007). Hence, a large bandwidth coupled with high allowable transmit power equals high possible data rates. Traditionally the implementation of 60 GHz radio technology required expensive technologies based on III-V compound semiconductors such as InP and GaAs (Smulders et al., 2007). The rapid progress of CMOS technology has enabled its application in millimeter wave applications. Currently, the transistors became small enough, consequently fast enough. As a result, the CMOS technology has become one of the most attractive choices in implementing 60 GHz radio due to its low cost and high level of integration (Doan et al., 2005). Despite the advantages of CMOS technology, the design of 60 GHz CMOS transceiver exhibits several challenges and difficulties that the designers must overcome. This chapter aims to explore the potential of the 60 GHz band in the use for emergent generation multi-gigabit wireless applications. The chapter presents a quick overview of the state-of-the-art of 60 GHz radio technology and its potentials to provide for high data rate and short range wireless communications. The chapter is organized as follows. Section 2 presents an overview about 60 GHz band. The advantages are presented to highlight the performance characteristics of this band. The opportunities of the physical layer of the IEEE"}
{"_id": "f929758fc52f8d7660298afd96d078541a4b8ef7", "title": "Classification of behaviour in housed dairy cows using an accelerometer-based activity monitoring system", "text": "Advances in bio-telemetry technology have made it possible to automatically monitor and classify behavioural activities in many animals, including domesticated species such as dairy cows. Automated behavioural classification has the potential to improve health and welfare monitoring processes as part of a Precision Livestock Farming approach. Recent studies have used accelerometers and pedometers to classify behavioural activities in dairy cows, but such approaches often cannot discriminate accurately between biologically important behaviours such as feeding, lying and standing or transition events between lying and standing. In this study we develop a decision-tree algorithm that uses tri-axial accelerometer data from a neck-mounted sensor to both classify biologically important behaviour in dairy cows and to detect transition events between lying and standing. Data were collected from six dairy cows that were monitored continuously for 36 h. Direct visual observations of each cow were used to validate the algorithm. Results show that the decision-tree algorithm is able to accurately classify three types of biologically relevant behaviours: lying (77.42 % sensitivity, 98.63 % precision), standing (88.00 % sensitivity, 55.00 % precision), and feeding (98.78 % sensitivity, 93.10 % precision). Transitions between standing and lying were also detected accurately with an average sensitivity of 96.45 % and an average precision of 87.50 %. The sensitivity and precision of the decision-tree algorithm matches the performance of more computationally intensive algorithms such as hidden Markov models and support vector machines. Biologically important behavioural activities in housed dairy cows can be classified accurately using a simple decision-tree algorithm applied to data collected from a neck-mounted tri-axial accelerometer. The algorithm could form part of a real-time behavioural monitoring system in order to automatically detect dairy cow health and welfare status."}
{"_id": "11acf1d410fd12649f73fd42f1a22c5fa6746191", "title": "Different models for model matching: An analysis of approaches to support model differencing", "text": "Calculating differences between models is an important and challenging task in Model Driven Engineering. Model differencing involves a number of steps starting with identifying matching model elements, calculating and representing their differences, and finally visualizing them in an appropriate way. In this paper, we provide an overview of the fundamental steps involved in the model differencing process and summarize the advantages and shortcomings of existing approaches for identifying matching model elements. To assist potential users in selecting one of the existing methods for the problem at stake, we investigate the trade-offs these methods impose in terms of accuracy and effort required to implement each one of them."}
{"_id": "75b5bee6f5d2cd8ef1928bf99da5e6d26addfe84", "title": "Wearable Computing for Health and Fitness: Exploring the Relationship between Data and Human Behaviour", "text": "Health and fitness wearable technology has recently advanced, making it easier for an individual to monitor their behaviours. Previously self generated data interacts with the user to motivate positive behaviour change, but issues arise when relating this to long term mention of wearable devices. Previous studies within this area are discussed. We also consider a new approach where data is used to support instead of motivate, through monitoring and logging to encourage reflection. Based on issues highlighted, we then make recommendations on the direction in which future work could be most beneficial."}
{"_id": "c924e8c66c1a8255abbeb3de28e3a714cb58f934", "title": "Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science", "text": "As the field of data science continues to grow, there will be an ever-increasing demand for tools that make machine learning accessible to non-experts. In this paper, we introduce the concept of tree-based pipeline optimization for automating one of the most tedious parts of machine learning--pipeline design. We implement an open source Tree-based Pipeline Optimization Tool (TPOT) in Python and demonstrate its effectiveness on a series of simulated and real-world benchmark data sets. In particular, we show that TPOT can design machine learning pipelines that provide a significant improvement over a basic machine learning analysis while requiring little to no input nor prior knowledge from the user. We also address the tendency for TPOT to design overly complex pipelines by integrating Pareto optimization, which produces compact pipelines without sacrificing classification accuracy. As such, this work represents an important step toward fully automating machine learning pipeline design."}
{"_id": "8391927148bd67b82afe53c8d65030c675f8953d", "title": "Brain basis of early parent-infant interactions: psychology, physiology, and in vivo functional neuroimaging studies.", "text": "Parenting behavior critically shapes human infants' current and future behavior. The parent-infant relationship provides infants with their first social experiences, forming templates of what they can expect from others and how to best meet others' expectations. In this review, we focus on the neurobiology of parenting behavior, including our own functional magnetic resonance imaging (fMRI) brain imaging experiments of parents. We begin with a discussion of background, perspectives and caveats for considering the neurobiology of parent-infant relationships. Then, we discuss aspects of the psychology of parenting that are significantly motivating some of the more basic neuroscience research. Following that, we discuss some of the neurohormones that are important for the regulation of social bonding, and the dysregulation of parenting with cocaine abuse. Then, we review the brain circuitry underlying parenting, proceeding from relevant rodent and nonhuman primate research to human work. Finally, we focus on a study-by-study review of functional neuroimaging studies in humans. Taken together, this research suggests that networks of highly conserved hypothalamic-midbrain-limbic-paralimbic-cortical circuits act in concert to support aspects of parent response to infants, including the emotion, attention, motivation, empathy, decision-making and other thinking that are required to navigate the complexities of parenting. Specifically, infant stimuli activate basal forebrain regions, which regulate brain circuits that handle specific nurturing and caregiving responses and activate the brain's more general circuitry for handling emotions, motivation, attention, and empathy--all of which are crucial for effective parenting. We argue that an integrated understanding of the brain basis of parenting has profound implications for mental health."}
{"_id": "9d17e897e8344d1cf42a322359b48d1ff50b4aef", "title": "Learning to Fuse Things and Stuff", "text": "We propose an end-to-end learning approach for panoptic segmentation, a novel task unifying instance (things) and semantic (stuff) segmentation. Our model, TASCNet, uses feature maps from a shared backbone network to predict in a single feed-forward pass both things and stuff segmentations. We explicitly constrain these two output distributions through a global things and stuff binary mask to enforce cross-task consistency. Our proposed unified network is competitive with the state of the art on several benchmarks for panoptic segmentation as well as on the individual semantic and instance segmentation tasks."}
{"_id": "ceb2f3a1d43251058393575c3cf7139cd72b787f", "title": "How to get mature global virtual teams: a framework to improve team process management in distributed software teams", "text": "Managing global software development teams is not an easy task because of the additional problems and complexities that have to be taken into account. This paper defines VTManager, a methodology that provides a set of efficient practices for global virtual team management in software development projects. These practices integrate software development techniques in global environments with others such as explicit practices for global virtual team management, definition of skills and abilities needed to work in these teams, availability of collaborative work environments and shared knowledge management practices. The results obtained and the lessons learned from implementing VTManager in a pilot project to develop software tools for collaborative work in rural environments are also presented. This project was carried out by geographically distributed teams involving people from seven countries with a high level of virtualness."}
{"_id": "4e0398207529e36c08fc9da7e0a4a1ad41ef1d37", "title": "An iterative maximum-likelihood polychromatic algorithm for CT", "text": "A new iterative maximum-likelihood reconstruction algorithm for X-ray computed tomography is presented. The algorithm prevents beam hardening artifacts by incorporating a polychromatic acquisition model. The continuous spectrum of the X-ray tube is modeled as a number of discrete energies. The energy dependence of the attenuation is taken into account by decomposing the linear attenuation coefficient into a photoelectric component and a Compton scatter component. The relative weight of these components is constrained based on prior material assumptions. Excellent results are obtained for simulations and for phantom measurements. Beam-hardening artifacts are effectively eliminated. The relation with existing algorithms is discussed. The results confirm that improving the acquisition model assumed by the reconstruction algorithm results in reduced artifacts. Preliminary results indicate that metal artifact reduction is a very promising application for this new algorithm."}
{"_id": "93db2c50e9087c2d83c3345ef0e59dc3e6bf3707", "title": "Optimal Charging in Wireless Rechargeable Sensor Networks", "text": "Recent years have witnessed several new promising technologies to power wireless sensor networks, which motivate some key topics to be revisited. By integrating sensing and computation capabilities to the traditional radio-frequency identification (RFID) tags, the Wireless Identification and Sensing Platform (WISP) is an open-source platform acting as a pioneering experimental platform of wireless rechargeable sensor networks. Different from traditional tags, an RFID-based wireless rechargeable sensor node needs to charge its onboard energy storage above a threshold to power its sensing, computation, and communication components. Consequently, such charging delay imposes a unique design challenge for deploying wireless rechargeable sensor networks. In this paper, we tackle this problem by planning the optimal movement strategy of the mobile RFID reader, such that the time to charge all nodes in the network above their energy threshold is minimized. We first propose an optimal solution using the linear programming (LP) method. To further reduce the computational complexity, we then introduce a heuristic solution with a provable approximation ratio of (1 + \u03b8)/(1 - \u03b5) by discretizing the charging power on a 2-D space. Through extensive evaluations, we demonstrate that our design outperforms the set-cover-based design by an average of 24.7%, whereas the computational complexity is O((N/\u03b5)2). Finally, we consider two practical issues in system implementation and provide guidelines for parameter setting."}
{"_id": "c8790e54ad744731893c1a44664f2495494ca9f8", "title": "CPW to waveguide transition with tapered slotline probe", "text": "A new CPW to waveguide transition is developed based on the concept of tapered slot antenna and E-plane probe coupling. The transition consists of a tapered slotline probe and a slotline to CPW matching section. The current design has the advantages of broad bandwidth, compact size, low fabrication cost, and high reliability. The characteristics of a prototype transition are investigated by numerical simulation on Duroid substrate and WR-90 waveguide. The back-to-back combination is measured to verify the agreement with the simulated results and realization of this design."}
{"_id": "043eb1fbce9890608b859fb45e321829e518c26e", "title": "A novel secure hash algorithm for public key digital signature schemes", "text": "Hash functions are the most widespread among all cryptographic primitives, and are currently used in multiple cryptographic schemes and in security protocols. This paper presents a new Secure Hash Algorithm called (SHA-192). It uses a famous secure hash algorithm given by the National Institute of Standard and Technology (NIST).The basic design of SHA192 is to have the output length of 192.The SHA-192 has been designed to satisfy the different level of enhanced security and to resist the advanced SHA attacks. The security analysis of the SHA-192 is compared to the old one given by NIST and gives more security and excellent results as shown in our discussion. In this paper the digital signature algorithm which is given by NIST has been modified using the proposed algorithms SHA-192. Using proposed SHA-192 hash algorithm a new digital signature schemes is also proposed. The SHA-192 can be used in many applications such s public key cryptosystem, digital signcryption, message authentication code, random generator and in security architecture of upcoming wireless devices like software defined radio etc."}
{"_id": "a24d72bd0d08d515cb3e26f94131d33ad6c861db", "title": "Ethical Challenges in Data-Driven Dialogue Systems", "text": "The use of dialogue systems as a medium for human-machine interaction is an increasingly prevalent paradigm. A growing number of dialogue systems use conversation strategies that are learned from large datasets. There are well documented instances where interactions with these system have resulted in biased or even offensive conversations due to the data-driven training process. Here, we highlight potential ethical issues that arise in dialogue systems research, including: implicit biases in data-driven systems, the rise of adversarial examples, potential sources of privacy violations, safety concerns, special considerations for reinforcement learning systems, and reproducibility concerns. We also suggest areas stemming from these issues that deserve further investigation. Through this initial survey, we hope to spur research leading to robust, safe, and ethically sound dialogue systems."}
{"_id": "62646f9450a3c95e745c1d2bb056dcf851acdaad", "title": "Speed-Security Tradeo s in Blockchain Protocols", "text": "Transaction processing speed is one of the major considerations in cryptocurrencies that are based on proof of work (POW) such as Bitcoin. At an intuitive level it is widely understood that processing speed is at odds with the security aspects of the underlying POW based consensus mechanism of such protocols, nevertheless the tradeo between the two properties is still not well understood. In this work, motivated by recent work [9] in the formal analysis of the Bitcoin backbone protocol, we investigate the tradeo between provable security and transaction processing speed viewing the latter as a function of the block generation rate. We introduce a new formal property of blockchain protocols, called chain growth, and we show it is fundamental for arguing the security of a robust transaction ledger. We strengthen the results of [9] showing for the rst time that reasonable security bounds hold even for the faster (than Bitcoin's) block generation rates that have been adopted by several major alt-coins (including Litecoin, Dogecoin etc.). We then provide a rst formal security proof of the GHOST rule for blockchain protocols. The GHOST rule was put forth in [14] as a mechanism to improve transaction processing speed and a variant of the rule is adopted by Ethereum. Our security analysis of the GHOST backbone matches our new analysis for Bitcoin in terms of the common pre x property but falls short in terms of chain growth where we provide an attack that substantially reduces the chain speed compared to Bitcoin. While our results establish the GHOST variant as a provably secure alternative to standard Bitcoin-like transaction ledgers they also highlight potential shortcomings in terms of processing speed compared to Bitcoin. We nally present attacks and simulation results against blockchain protocols (both for Bitcoin and GHOST) that present natural upper barriers for the speed-security tradeo . By combining our positive and negative results we map the speed/security domain for blockchain protocols and list open problems for future work."}
{"_id": "883964b5ce2b736b504cbe239e6a3b2dec8626c1", "title": "Classification of teeth in cone-beam CT using deep convolutional neural network", "text": "Dental records play an important role in forensic identification. To this end, postmortem dental findings and teeth conditions are recorded in a dental chart and compared with those of antemortem records. However, most dentists are inexperienced at recording the dental chart for corpses, and it is a physically and mentally laborious task, especially in large scale disasters. Our goal is to automate the dental filing process by using dental x-ray images. In this study, we investigated the application of a deep convolutional neural network (DCNN) for classifying tooth types on dental cone-beam computed tomography (CT) images. Regions of interest (ROIs) including single teeth were extracted from CT slices. Fifty two CT volumes were randomly divided into 42 training and 10 test cases, and the ROIs obtained from the training cases were used for training the DCNN. For examining the sampling effect, random sampling was performed 3 times, and training and testing were repeated. We used the AlexNet network architecture provided in the Caffe framework, which consists of 5 convolution layers, 3 pooling layers, and 2 full connection layers. For reducing the overtraining effect, we augmented the data by image rotation and intensity transformation. The test ROIs were classified into 7 tooth types by the trained network. The average classification accuracy using the augmented training data by image rotation and intensity transformation was 88.8%. Compared with the result without data augmentation, data augmentation resulted in an approximately 5% improvement in classification accuracy. This indicates that the further improvement can be expected by expanding the CT dataset. Unlike the conventional methods, the proposed method is advantageous in obtaining high classification accuracy without the need for precise tooth segmentation. The proposed tooth classification method can be useful in automatic filing of dental charts for forensic identification."}
{"_id": "e99b38c64aec6109a126f5f6e192b7ed7f1afb58", "title": "Harvesting rib cartilage grafts for secondary rhinoplasty.", "text": "BACKGROUND\nReconstruction of the nasal osseocartilaginous framework is the foundation of successful secondary rhinoplasty.\n\n\nMETHODS\nAchieving this often requires large quantities of cartilage to correct both contour deformities and functional problems caused by previous procedures. Satisfactory and consistent long-term results rely on using grafts with low resorption rates and sufficient strength to offer adequate support. Auricular cartilage, irradiated cartilage, and alloplastic materials have all been used as implantable grafts with limited success.\n\n\nRESULTS\nIn the senior author's experience (J.P.G.), rib cartilage has proven to be a reliable, abundant, and relatively accessible donor with which to facilitate successful secondary rhinoplasty surgery.\n\n\nCONCLUSIONS\n: The authors describe in detail the techniques that they have found to be integral in harvesting rib cartilage grafts for secondary rhinoplasty."}
{"_id": "1aa02f027663a626f3aa17ed9bab52377a23f634", "title": "High-Dimensional Continuous Control Using Generalized Advantage Estimation", "text": "This paper is concerned with developing policy gradient methods that gracefully scale up to challenging problems with high-dimensional state and action spaces. Towards this end, we develop a scheme that uses value functions to substantially reduce the variance of policy gradient estimates, while introducing a tolerable amount of bias. This scheme, which we call generalized advantage estimation (GAE), involves using a discounted sum of temporal difference residuals as an estimate of the advantage function, and can be interpreted as a type of automated cost shaping. It is simple to implement and can be used with a variety of policy gradient methods and value function approximators. Along with this variance-reduction scheme, we use trust region algorithms to optimize the policy and value function, both represented as neural networks. We present experimental results on a number of highly challenging 3D locomotion tasks, where our approach learns complex gaits for bipedal and quadrupedal simulated robots. We also learn controllers for the biped getting up off the ground. In contrast to prior work that uses hand-crafted low-dimensional policy representations, our neural network policies map directly from raw kinematics to joint torques."}
{"_id": "96fccad0177530b81941d208355887de2d658d2c", "title": "Policy Gradient Methods for Robotics", "text": "The acquisition and improvement of motor skills and control policies for robotics from trial and error is of essential importance if robots should ever leave precisely pre-structured environments. However, to date only few existing reinforcement learning methods have been scaled into the domains of high-dimensional robots such as manipulator, legged or humanoid robots. Policy gradient methods remain one of the few exceptions and have found a variety of applications. Nevertheless, the application of such methods is not without peril if done in an uninformed manner. In this paper, we give an overview on learning with policy gradient methods for robotics with a strong focus on recent advances in the field. We outline previous applications to robotics and show how the most recently developed methods can significantly improve learning performance. Finally, we evaluate our most promising algorithm in the application of hitting a baseball with an anthropomorphic arm"}
{"_id": "afbe59950a7d452ce0a3f412ee865f1e1d94d9ef", "title": "Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates", "text": "Reinforcement learning holds the promise of enabling autonomous robots to learn large repertoires of behavioral skills with minimal human intervention. However, robotic applications of reinforcement learning often compromise the autonomy of the learning process in favor of achieving training times that are practical for real physical systems. This typically involves introducing hand-engineered policy representations and human-supplied demonstrations. Deep reinforcement learning alleviates this limitation by training general-purpose neural network policies, but applications of direct deep reinforcement learning algorithms have so far been restricted to simulated settings and relatively simple tasks, due to their apparent high sample complexity. In this paper, we demonstrate that a recent deep reinforcement learning algorithm based on off-policy training of deep Q-functions can scale to complex 3D manipulation tasks and can learn deep neural network policies efficiently enough to train on real physical robots. We demonstrate that the training times can be further reduced by parallelizing the algorithm across multiple robots which pool their policy updates asynchronously. Our experimental evaluation shows that our method can learn a variety of 3D manipulation skills in simulation and a complex door opening skill on real robots without any prior demonstrations or manually designed representations."}
{"_id": "085fb3acabcbf80ef1bf47daec50d246475b072b", "title": "Infinite-Horizon Policy-Gradient Estimation", "text": "Gradient-based approaches to direct policy search in reinf orcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-f unction methods. In this paper we introduceGPOMDP, a simulation-based algorithm for generating a bi sedestimate of the gradient of theaverage rewardin Partially Observable Markov Decision Processes ( POMDPs) controlled by parameterized stochastic policies. A similar algorithm wa s proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm\u2019s chief advantages are tha t i requires storage of only twice the number of policy parameters, uses one free parameter 2 [0; 1) (which has a natural interpretation in terms of bias-variance trade-off), and requires no knowl edge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter is related to the mixing timeof the controlledPOMDP. We briefly describe extensions of GPOMDP to controlled Markov chains, continuous state, observation and control s paces, multiple-agents, higher-order derivatives, and a version for training stochastic policie s with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient e stimates generated by GPOMDP can be used in both a traditional stochastic gradient algori thm and a conjugate-gradient procedure to find local optima of the average reward."}
{"_id": "286955da96c1c5c38fe3853da3a877c1d6607c9a", "title": "Privacy-Preserving Public Auditing For Secure Cloud Storage", "text": "By using Cloud storage, users can access applications, services, software whenever they requires over the internet. Users can put their data remotely to cloud storage and get benefit of on-demand services and application from the resources. The cloud must have to ensure data integrity and security of data of user. The issue about cloud storage is integrity and privacy of data of user can arise. To maintain to overkill this issue here, we are giving public auditing process for cloud storage that users can make use of a third-party auditor (TPA) to check the integrity of data. Not only verification of data integrity, the proposed system also supports data dynamics. The work that has been done in this line lacks data dynamics and true public auditability. The auditing task monitors data modifications, insertions and deletions. The proposed system is capable of supporting public auditability, data dynamics and Multiple TPA are used for the auditing process. We also extend our concept to ring signatures in which HARS scheme is used. Merkle Hash Tree is used to improve block level authentication. Further we extend our result to enable the TPA to perform audits for multiple users simultaneously through Batch auditing."}
{"_id": "c7cccff4318f8a3e7d6d48bd9e91819d1186ba18", "title": "Design Of Ternary Logic Gates Using CNTFET", "text": "This paper presents a novel design of ternary logic gates like STI,PTI,NTI,NAND and NOR using carbon nanotube field effect transistors. Ternary logic is a promising alternative to the conventional binary logic design technique, since it is possible to accomplish simplicity and energy efficiency in modern digital design due to the reduced circuit overhead such as interconnects and chip area. In this paper novel design of basic logic gates for ternary logic based on CNTFET, is proposed. Keywords\u2014 Carbon nano-tube Field Effect Transistor(CNTFET),MVL(multi valued logic),Ternary logic,STI,NTI,PTI"}
{"_id": "dd0b5dd2d15ebc6a5658c75ec102b64e359c674d", "title": "Sampled-Point Network for Classification of Deformed Building Element Point Clouds", "text": "Search-and-rescue (SAR) robots operating in post-disaster urban areas need to accurately identify physical site information to perform navigation, mapping and manipulation tasks. This can be achieved by acquiring a 3D point cloud of the environment and performing object recognition from the point cloud data. However, this task is complicated by the unstructured environments and potentially-deformed objects encountered during disaster relief operations. Current 3D object recognition methods rely on point cloud input acquired under suitable conditions and do not consider deformations such as outlier noise, bending and truncation. This work introduces a deep learning architecture for 3D class recognition from point clouds of deformed building elements. The classification network, consisting of stacked convolution and average pooling layers applied directly to point coordinates, was trained using point clouds sampled from a database of mesh models. The proposed method achieves robustness to input variability using point sorting, resampling, and rotation normalization techniques. Experimental results on synthetically-deformed object datasets show that the proposed method outperforms the conventional deep learning methods in terms of classification accuracy and computational efficiency."}
{"_id": "bb988b89e77179c24a1d69e7a97655de1a120855", "title": "Real-time 3 D Camera Tracking for Industrial Augmented Reality Applications", "text": "In this paper we present a new solution for real-time 3D camera pose estimation for Augmented Reality (AR) applications. The tracking system does not require special engineering of the environment, such as placing markers or beacons. The required input data are a CAD model of the target object to be tracked, and a calibrated reference image of it. We consider the whole process of camera tracking, and developed both an autonomous initialization and a real-time tracking procedure. The system is robust to abrupt camera motions, strong changes of the lighting conditions and partial occlusions. To avoid typical jitter and drift problems the tracker performs feature matching not only in an iterative manner, but also against stable reference features, which are dynamically cached in case of high confidence. We present experimental results generated with help of synthetic ground truth, real off-line and on-line image sequences using different types of target objects."}
{"_id": "6d016cc9496bdfcb12a223eac073aa58444e80f4", "title": "[Hyaluronic acid rheology: Basics and clinical applications in facial rejuvenation].", "text": "Hyaluronic acid (HA) is the most widely used dermal filler to treat facial volume deficits and winkles specially for facial rejuvenation. Depending on various areas of the face, filler is exposed to two different forces (shear deformation and compression/stretching forces) resulting from intrinsec and external mechanical stress. The purpose of this technical note is to explain how rheology, which is the study of the flow and deformation of matter under strains, can be used in our clinical practice of facial volumization with fillers. Indeed, comprehension of the rheological properties of HA has become essential in selection of dermal filler targeted to the area of the face. Viscosity, elasticity and cohesivity are the main three properties to be taken into consideration in this selection. Aesthetic physicians and surgeons have to familiarize with those basics in order to select the HA with the right rheological properties to achieve a natural-looking and long-lasting outcome."}
{"_id": "cd15507f33e0d30103a9b9b2c6304177268f4e0a", "title": "Robust Vehicle Detection and Distance Estimation Under Challenging Lighting Conditions", "text": "Avoiding high computational costs and calibration issues involved in stereo-vision-based algorithms, this paper proposes real-time monocular-vision-based techniques for simultaneous vehicle detection and inter-vehicle distance estimation, in which the performance and robustness of the system remain competitive, even for highly challenging benchmark datasets. This paper develops a collision warning system by detecting vehicles ahead and, by identifying safety distances to assist a distracted driver, prior to occurrence of an imminent crash. We introduce adaptive global Haar-like features for vehicle detection, tail-light segmentation, virtual symmetry detection, intervehicle distance estimation, as well as an efficient single-sensor multifeature fusion technique to enhance the accuracy and robustness of our algorithm. The proposed algorithm is able to detect vehicles ahead at both day or night and also for short- and long-range distances. Experimental results under various weather and lighting conditions (including sunny, rainy, foggy, or snowy) show that the proposed algorithm outperforms state-of-the-art algorithms."}
{"_id": "f89384e8216a665534903b470cce4b8476814a50", "title": "Proof Analysis in Modal Logic", "text": "A general method for generating contractionand cut-free sequent calculi for a large family of normal modal logics is presented. The method covers all modal logics characterized by Kripke frames determined by universal or geometric properties and it can be extended to treat also G\u00f6del\u2013L\u00f6b provability logic. The calculi provide direct decision methods through terminating proof search. Syntactic proofs of modal undefinability results are obtained in the form of conservativity theorems."}
{"_id": "6a535e0aeb83f1c2e361c7b0c575f8bc0cc8aa34", "title": "Huffman Coding-Based Adaptive Spatial Modulation", "text": "Antenna switch enables multiple antennas to share a common RF chain. It also offers an additional spatial dimension, i.e., antenna index, that can be utilized for data transmission via both signal space and spatial dimension. In this paper, we propose a Huffman coding-based adaptive spatial modulation that generalizes both conventional spatial modulation and transmit antenna selection. Through the Huffman coding, i.e., designing variable length prefix codes, the transmit antennas can be activated with different probabilities. When the input signal is Gaussian distributed, the optimal antenna activation probability is derived through optimizing channel capacity. To make the optimization tractable, closed form upper bound and lower bound are derived as the effective approximations of channel capacity. When the input is discrete QAM signal, the optimal antenna activation probability is derived through minimizing symbol error rate. Numerical results show that the proposed adaptive transmission offers considerable performance improvement over the conventional spatial modulation and transmit antenna selection."}
{"_id": "767410f40ed2ef1b8b759fec3782d8a0f2f8ad40", "title": "On Blockchain Applications : Hyperledger Fabric And Ethereum", "text": "Blockchain is a tamper-proof digital ledger which can be used to record public or private peer to peer network transactions and it cannot be altered retroactively without the alteration of all subsequent blocks of the network. A blockchain is updated via the consensus protocol that ensures a linear, unambiguous ordering of transactions. Blocks guarantee the integrity and consistency of the blockchain across a network of distributed nodes. Different blockchain applications use various consensus protocols for their working. Byzantine fault tolerance (BFT) is one of them and it is a characteristic of a system that tolerates the class of failures known as the Byzantine Generals Problem. Hyperledger, Stellar, and Ripple are three blockchain application which uses BFT consensus. The best variant of BFT is Practical Byzantine Fault tolerance (PBFT). Hyperledger fabric with deterministic transactions can run on the top of PBFT. This paper focuses on a survey of various consensus mechanisms and makes a comparative study of Hyperledger fabric and Ethereum. Keywords\u2014 consensus; hyperledger fabric; ethereum; byzantine fault tolerance;"}
{"_id": "69e37306cc5c1262a979a95244c487b73e4a3fbf", "title": "MapFactory \u2013 Towards a mapping design pattern for big geospatial data", "text": "With big geospatial data emerging, cartographers and geographic information scientists have to find new ways of dealing with the volume, variety, velocity, and veracity (4Vs) of the data. This requires the development of tools that allow processing, filtering, analysing, and visualising of big data through multidisciplinary collaboration. In this paper, we present the MapFactory design pattern that will be used for the creation of different maps according to the (input) design specification for big geospatial data. The design specification is based on elements from ISO19115-1:2014 Geographic information -Metadata -Part 1: Fundamentals that would guide the design and development of the map or set of maps to be produced. The results of the exploratory research suggest that the MapFactory design pattern will help with software reuse and communication. The MapFactory design pattern will aid software developers to build the tools that are required to automate map making with big geospatial data. The resulting maps would assist cartographers and others to make sense of big geospatial data."}
{"_id": "12fc6f855f58869cf81743b9be0df1380c17f4d0", "title": "Exploiting Redundancy in Question Answering", "text": "Our goal is to automatically answer brief factual questions of the form ``When was the Battle of Hastings?'' or ``Who wrote The Wind in the Willows?''. Since the answer to nearly any such question can now be found somewhere on the Web, the problem reduces to finding potential answers in large volumes of data and validating their accuracy. We apply a method for arbitrary passage retrieval to the first half of the problem and demonstrate that answer redundancy can be used to address the second half. The success of our approach depends on the idea that the volume of available Web data is large enough to supply the answer to most factual questions multiple times and in multiple contexts. A query is generated from a question and this query is used to select short passages that may contain the answer from a large collection of Web data. These passages are analyzed to identify candidate answers. The frequency of these candidates within the passages is used to ``vote'' for the most likely answer. The approach is experimentally tested on questions taken from the TREC-9 question-answering test collection. As an additional demonstration, the approach is extended to answer multiple choice trivia questions of the form typically asked in trivia quizzes and television game shows."}
{"_id": "eacba5e8fbafb1302866c0860fc260a2bdfff232", "title": "VOS-GAN: Adversarial Learning of Visual-Temporal Dynamics for Unsupervised Dense Prediction in Videos", "text": "Recent GAN-based video generation approaches model videos as the combination of a time-independent scene component and a time-varying motion component, thus factorizing the generation problem into generating background and foreground separately. One of the main limitations of current approaches is that both factors are learned by mapping one source latent space to videos, which complicates the generation task as a single data point must be informative of both background and foreground content. In this paper we propose a GAN framework for video generation that, instead, employs two latent spaces in order to structure the generative process in a more natural way: 1) a latent space to generate the static visual content of a scene (background), which remains the same for the whole video, and 2) a latent space where motion is encoded as a trajectory between sampled points and whose dynamics are modeled through an RNN encoder (jointly trained with the generator and the discriminator) and then mapped by the generator to visual objects\u2019 motion. Additionally, we extend current video discrimination approaches by incorporating in the learning procedure motion estimation and, leveraging the peculiarity of the generation process, unsupervised pixel-wise dense predictions. Extensive performance evaluation showed that our approach is able to a) synthesize more realistic videos than state-of-the-art methods, b) learn effectively both local and global video dynamics, as demonstrated by the results achieved on a video action recognition task over the UCF-101 dataset, and c) accurately perform unsupervised video object segmentation on standard video benchmarks, such as DAVIS, SegTrack and F4K-Fish."}
{"_id": "83e5ee06cf7ae891750682ca39cefaeed60ac998", "title": "Pharmacist-managed medication therapy adherence clinics: The Malaysian experience.", "text": "In Malaysia, the first pharmacist-managed clinic was established in 2004. This type of clinics is locally known as a medication therapy adherence clinic (MTAC).1e3 In fact, these clinics were introduced by the Pharmaceutical Services Division (PSD), Ministry of Health Malaysia (MOH) as part of the clinical pharmacy services to improve the quality, safety, and cost-effectiveness of patient care through better medicines management at ambulatory care settings in hospitals and MOH clinics.2 The major aims of the clinics include optimization of medication therapy, improvement of medication adherence, and prevention or reduction of adverse drug events and other medication related problems.1 Over a decade, the number of MTACs has increased tremendously to 660 by the end of 2013.1 Moreover, currently, there are 13 types of clinics being operated in Malaysia that include neurology (stroke), diabetes, heart failure, respiratory, psoriasis, anticoagulation, rheumatology, nephrology, hepatitis, geriatrics, retroviral disease, hemophilia, and psychiatry.1,3,4 The pharmacists operate the clinics and provide pharmaceutical care services to the patients in collaboration with physicians and other healthcare professionals according to specific protocols. The specific roles of the pharmacist include providing drug information, patient education and counseling, identifying and solving drugrelated problems, monitoring patients' drug therapy and treatment outcomes, offering feedback on patient progress, making appropriate recommendations to physicians to individualize the patients' drug regimens according to patient-related factors including clinical, humanistic, and economic factors.1e4 Several studies have been conducted to evaluate the impact of pharmacist-managed clinics on patient care in Malaysia.5e10 These studies have demonstrated that the clinics are beneficial in optimizing medication therapy, improving medication adherence, and giving better clinical outcomes. Moreover, some studies have shown that these clinics are more cost-effective than standard medical care.11,12 In addition, it has been shown that the patients are satisfied with the pharmaceutical care services provided by these clinics.13,14 Therefore, the current evidence supports further expansion of these clinics in the ambulatory care system in Malaysia. We believe there are several factors that have led to the success of the pharmacist-managed clinics in Malaysia. These include the tremendous support and guidance from the PSD, Ministry of Health Malaysia to help in the establishment of these clinics. As an example of this support, the PSD has established a protocol for each clinic to help the pharmacists run the clinic, and to enable standardization of practice and the expansion of the clinics throughout the country. These protocols are written by specialized clinical pharmacy committees (e.g. respiratory subspecialty,"}
{"_id": "df9c3ce70641d77ec2a563ac336e51265005c61b", "title": "An Intelligent RFID Reader and its Application in Airport Baggage Handling System", "text": "In civil aviation baggage handing application, the RFID tags are used to enhance the ability for baggage tracking, dispatching and conveyance, so as to improve the management efficiency and the users' satisfaction. An intelligent RFID reader which has the ability of data disposal and provides EPC Edge Savant services is presented. The prototype readers and its experiment in the airport baggage handling system are also introduced. The workflow of the reader can be configured dynamically, which makes the deployment much more flexible. Real-time tasks can also be assigned to implement edge business process by the intelligent reader."}
{"_id": "826a530b835a917200da1b25993b5319021e4551", "title": "Realtime Ray Tracing on GPU with BVH-based Packet Traversal", "text": "Recent GPU ray tracers can already achieve performance competitive to that of their CPU counterparts. Nevertheless, these systems can not yet fully exploit the capabilities of modern GPUs and can only handle medium-sized, static scenes. In this paper we present a BVH-based GPU ray tracer with a parallel packet traversal algorithm using a shared stack. We also present a fast, CPU-based BVH construction algorithm which very accurately approximates the surface area heuristic using streamed binning while still being one order of magnitude faster than previously published results. Furthermore, using a BVH allows us to push the size limit of supported scenes on the GPU: We can now ray trace the 12.7 million triangle Power Plant at 1024 times 1024 image resolution with 3 fps, including shading and shadows."}
{"_id": "5c36dd75aa6ae71c7d3d46621a4d7249db9f46f4", "title": "Mobile Forensic Data Analysis: Suspicious Pattern Detection in Mobile Evidence", "text": "Culprits\u2019 identification by the means of suspicious pattern detection techniques from mobile device data is one of the most important aims of the mobile forensic data analysis. When criminal activities are related to entirely automated procedures such as malware propagation, predicting the corresponding behavior is a rather achievable task. However, when human behavior is involved, such as in cases of traditional crimes, prediction and detection become more compelling. This paper introduces a combined criminal profiling and suspicious pattern detection methodology for two criminal activities with moderate to the heavy involvement of mobile devices, cyberbullying and low-level drug dealing. Neural and Neurofuzzy techniques are applied on a hybrid original and simulated dataset. The respective performance results are measured and presented, the optimal technique is selected, and the scenarios are re-run on an actual dataset for additional testing and verification."}
{"_id": "09ebd015cc09089907f6e2e2edabb3f5d0ac7a2f", "title": "A GPU-based implementation of motion detection from a moving platform", "text": "We describe a GPU-based implementation of motion detection from a moving platform. Motion detection from a moving platform is inherently difficult as the moving camera induces 2D motion field in the entire image. A step compensating for camera motion is required prior to estimating of the background model. Due to inevitable registration errors, the background model is estimated according to a sliding window of frames to avoid the case where erroneous registration influences the quality of the detection for the whole sequence. However, this approach involves several characteristics that put a heavy burden on real-time CPU implementation. We exploit GPU to achieve significant acceleration over standard CPU implementations. Our GPU-based implementation can build the background model and detect motion regions at around 18 fps on 320times240 videos that are captured for a moving camera."}
{"_id": "b1117993836169e2b24b4bf0d5afea6d49a0348c", "title": "How to do a grounded theory study: a worked example of a study of dental practices", "text": "BACKGROUND\nQualitative methodologies are increasingly popular in medical research. Grounded theory is the methodology most-often cited by authors of qualitative studies in medicine, but it has been suggested that many 'grounded theory' studies are not concordant with the methodology. In this paper we provide a worked example of a grounded theory project. Our aim is to provide a model for practice, to connect medical researchers with a useful methodology, and to increase the quality of 'grounded theory' research published in the medical literature.\n\n\nMETHODS\nWe documented a worked example of using grounded theory methodology in practice.\n\n\nRESULTS\nWe describe our sampling, data collection, data analysis and interpretation. We explain how these steps were consistent with grounded theory methodology, and show how they related to one another. Grounded theory methodology assisted us to develop a detailed model of the process of adapting preventive protocols into dental practice, and to analyse variation in this process in different dental practices.\n\n\nCONCLUSIONS\nBy employing grounded theory methodology rigorously, medical researchers can better design and justify their methods, and produce high-quality findings that will be more useful to patients, professionals and the research community."}
{"_id": "18aab1169897c138bba123046148e2015fb67c0c", "title": "qBase relative quantification framework and software for management and automated analysis of real-time quantitative PCR data", "text": "Although quantitative PCR (qPCR) is becoming the method of choice for expression profiling of selected genes, accurate and straightforward processing of the raw measurements remains a major hurdle. Here we outline advanced and universally applicable models for relative quantification and inter-run calibration with proper error propagation along the entire calculation track. These models and algorithms are implemented in qBase, a free program for the management and automated analysis of qPCR data."}
{"_id": "09ad9778b7d8ef3a9a6953a988dd3aacdc3e85ae", "title": "A Comparison of String Distance Metrics for Name-Matching Tasks", "text": "Using an open-source, Java toolkit of name-matching methods, we experimentally compare string distance metrics on the task of matching entity names. We investigate a number of different metrics proposed by different communities, including edit-distance metrics, fast heuristic string comparators, token-based distance metrics, and hybrid methods. Overall, the best-performing method is a hybrid scheme combining a TFIDF weighting scheme, which is widely used in information retrieval, with the Jaro-Winkler string-distance scheme, which was developed in the probabilistic record linkage community."}
{"_id": "d4b21d9321cb68932a5ceeed49330aff1d638042", "title": "The power of statistical tests in meta-analysis.", "text": "Calculations of the power of statistical tests are important in planning research studies (including meta-analyses) and in interpreting situations in which a result has not proven to be statistically significant. The authors describe procedures to compute statistical power of fixed- and random-effects tests of the mean effect size, tests for heterogeneity (or variation) of effect size parameters across studies, and tests for contrasts among effect sizes of different studies. Examples are given using 2 published meta-analyses. The examples illustrate that statistical power is not always high in meta-analysis."}
{"_id": "e7c6f67a70b5cf0842a7a2fc497131a79b6ee2c5", "title": "Frequent Pattern Compression: A Significance-Based Compression Scheme for L2 Caches", "text": "With the widening gap between processor and memory speeds, memory system designers ma find cache compression beneficial to increase cache capacity and reduce off-chip bandwidth Most hardware compression algorithms fall into the dictionary-based category, which depend on building a dictionary and using its entries to encode repeated data values. Such algorithms are effective in compressing large data blocks and files. Cache lines, however, are typically short (32-256 bytes), and a per-line dictionary places a significant overhead that limits the compressibility and increases decompression latency of such algorithms. For such short lines significance-based compression is an appealing alternative. We propose and evaluate a simple significance-based compression scheme that has a low com pression and decompression overhead. This scheme, Frequent Pattern Compression (FPC compresses individual cache lines on a word-by-word basis by storing common word patterns in a compressed format accompanied with an appropriate prefix. For a 64-byte cache line, compression can be completed in three cycles and decompression in five cycles, assuming 1 FO4 gate delays per cycle. We propose a compressed cache design in which data is stored in compressed form in the L2 caches, but are uncompressed in the L1 caches. L2 cache lines ar compressed to predetermined sizes that never exceed their original size to reduce decompre sion overhead. This simple scheme provides comparable compression ratios to more comple schemes that have higher cache hit latencies."}
{"_id": "6dc6d89fdb28ffbc75c38a2bb566601b93b9e30a", "title": "A pre-collision control strategy for human-robot interaction based on dissipated energy in potential inelastic impacts", "text": "Enabling human-robot collaboration raises new challenges in safety-oriented robot design and control. Indices that quantitatively describe human injury due to a human-robot collision are needed to propose suitable pre-collision control strategies. This paper presents a novel model-based injury index built on the concept of dissipated kinetic energy in a potential inelastic impact. This quantity represents the fracture energy lost when a human-robot collision occurs, modeling both clamped and unclamped cases. It depends on the robot reflected mass and velocity in the impact direction. The proposed index is expressed in analytical form suitable to be integrated in a constraint-based pre-collision control strategy. The exploited control architecture allows to perform a given robot task while simultaneously bounding our injury assessment and minimizing the reflected mass in the direction of the impact. Experiments have been performed on a lightweight robot ABB FRIDA to validate the proposed injury index as well as the pre-collision control strategy."}
{"_id": "4e26e488c02b3647e0f1566760555ebe5d002558", "title": "A Quadratic-Complexity Observability-Constrained Unscented Kalman Filter for SLAM", "text": "This paper addresses two key limitations of the unscented Kalman filter (UKF) when applied to the simultaneous localization and mapping (SLAM) problem: the cubic computational complexity in the number of states and the inconsistency of the state estimates. To address the first issue, we introduce a new sampling strategy for the UKF, which has constant computational complexity. As a result, the overall computational complexity of UKF-based SLAM becomes of the same order as that of the extended Kalman filter (EKF)-based SLAM, i.e., quadratic in the size of the state vector. Furthermore, we investigate the inconsistency issue by analyzing the observability properties of the linear-regression-based model employed by the UKF. Based on this analysis, we propose a new algorithm, termed observability-constrained (OC)-UKF, which ensures the unobservable subspace of the UKF's linear-regression-based system model is of the same dimension as that of the nonlinear SLAM system. This results in substantial improvement in the accuracy and consistency of the state estimates. The superior performance of the OC-UKF over other state-of-the-art SLAM algorithms is validated by both Monte-Carlo simulations and real-world experiments."}
{"_id": "b8aa8b5d06c98a900d8cea61864669b28c3ac0fc", "title": "Routing protocols in Vehicular Delay Tolerant Networks: A comprehensive survey", "text": "This article presents a comprehensive survey of routing protocols proposed for routing in Vehicular Delay Tolerant Networks (VDTN) in vehicular environment. DTNs are utilized in various operational environments, including those subject to disruption and disconnection and those with high-delay, such as Vehicular Ad-Hoc Networks (VANET). We focus on a special type of VANET, where the vehicular traffic is sparse and direct end-to-end paths between communicating parties do not always exist. Thus, communication in this context falls into the category of Vehicular Delay Tolerant Network (VDTN). Due to the limited transmission range of an RSU (Road Side Unit), remote vehicles, in VDTN, may not connect to the RSU directly and thus have to rely on intermediate vehicles to relay the packets. During the message relay process, complete end-to-end paths may not exist in highly partitioned VANETs. Therefore, the intermediate vehicles must buffer and forward messages opportunistically. Through buffer, carry and forward, the message can eventually be delivered to the destination even if an end-to-end connection never exists between source and destination. The main objective of routing protocols in DTN is to maximize the probability of delivery to the destination while minimizing the end-to-end delay. Also, vehicular traffic models are important for DTN routing in vehicle networks because the performance of DTN routing protocols is closely related to population and mobility models of the network. 2014 Elsevier B.V. All rights reserved."}
{"_id": "286f0650438d0cc5123057909662d242d7bbce07", "title": "Toward detecting emotions in spoken dialogs", "text": "The importance of automatically recognizing emotions from human speech has grown with the increasing role of spoken language interfaces in human-computer interaction applications. This paper explores the detection of domain-specific emotions using language and discourse information in conjunction with acoustic correlates of emotion in speech signals. The specific focus is on a case study of detecting negative and non-negative emotions using spoken language data obtained from a call center application. Most previous studies in emotion recognition have used only the acoustic information contained in speech. In this paper, a combination of three sources of information-acoustic, lexical, and discourse-is used for emotion recognition. To capture emotion information at the language level, an information-theoretic notion of emotional salience is introduced. Optimization of the acoustic correlates of emotion with respect to classification error was accomplished by investigating different feature sets obtained from feature selection, followed by principal component analysis. Experimental results on our call center data show that the best results are obtained when acoustic and language information are combined. Results show that combining all the information, rather than using only acoustic information, improves emotion classification by 40.7% for males and 36.4% for females (linear discriminant classifier used for acoustic information)."}
{"_id": "34850eb3f55633599e8e8f36db4bedf541d30b94", "title": "A database of German emotional speech", "text": "The article describes a database of emotional speech. Ten actors (5 female and 5 male) simulated the emotions, producing 10 German utterances (5 short and 5 longer sentences) which could be used in everyday communication and are interpretable in all applied emotions. The recordings were taken in an anechoic chamber with high-quality recording equipment. In addition to the sound electro-glottograms were recorded. The speech material comprises about 800 sentences (seven emotions * ten actors * ten sentences + some second versions). The complete database was evaluated in a perception test regarding the recognisability of emotions and their naturalness. Utterances recognised better than 80% and judged as natural by more than 60% of the listeners were phonetically labelled in a narrow transcription with special markers for voice-quality, phonatory and articulatory settings and articulatory features. The database can be accessed by the public via the internet (http://www.expressive-speech.net/emodb/)."}
{"_id": "67efaba1be4c0462a5fc2ce9762f7edf9719c6a0", "title": "Speech emotion recognition using hidden Markov models", "text": "In emotion classification of speech signals, the popular features employed are statistics of fundamental frequency, energy contour, duration of silence and voice quality. However, the performance of systems employing these features degrades substantially when more than two categories of emotion are to be classified. In this paper, a text independent method of emotion classification of speech is proposed. The proposed method makes use of short time log frequency power coefficients (LFPC) to represent the speech signals and a discrete hidden Markov model (HMM) as the classifier. The emotions are classified into six categories. The category labels used are, the archetypal emotions of Anger, Disgust, Fear, Joy, Sadness and Surprise. Adatabase consisting of 60 emotional utterances, each from twelve speakers is constructed and used to train and test the proposed system. Performance of the LFPC feature parameters is compared with that of the linear prediction Cepstral coefficients (LPCC) and mel-frequency Cepstral coefficients (MFCC) feature parameters commonly used in speech recognition systems. Results show that the proposed system yields an average accuracy of 78% and the best accuracy of 96% in the classification of six emotions. This is beyond the 17% chances by a random hit for a sample set of 6 categories. Results also reveal that LFPC is a better choice as feature parameters for emotion classification than the traditional feature parameters. 2003 Elsevier B.V. All rights reserved."}
{"_id": "953e69c66f2bdfcc65e4d677fa429571cdec2a60", "title": "Emotion Recognition in Speech Using Neural Networks", "text": "Emotion recognition in speech is a topic on which little research has been done to-date. In this paper, we discuss why emotion recognition in speech is a significant and applicable research topic, and present a system for emotion recognition using oneclass-in-one neural networks. By using a large database of phoneme balanced words, our system is speakerand context-independent. We achieve a recognition rate of approximately 50% when testing eight emotions."}
{"_id": "c33076c2aa1d4a860c223529c1d1941c58cd77fc", "title": "Nonlinear dimension reduction for EEG-based epileptic seizure detection", "text": "Approximately 0.1 percent of epileptic patients die from unexpected deaths. In general, for intractable seizures, it is crucial to have an algorithm to accurately and automatically detect the seizures and notify care-givers to assist patients. EEG signals are known as definitive diagnosis of seizure events. In this work, we utilize the frequency domain features (normalized in-band power spectral density) for the EEG channels. We applied a nonlinear data-embedding technique based on stochastic neighbor distance metric to capture the relationships among data elements in high dimension and improve the accuracy of seizure detection. This proposed data embedding technique not only makes it possible to visualize data in two or three dimensions, but also tackles the inherent difficulties regarding high dimensional data classification such as time complexity and memory requirement. We also applied a patient specific KNN classification to detect seizure and non-seizure events. The results indicate that our nonlinear technique provides significantly better visualization and classification efficiency (F-measure greater than 87%) compared to conventional dimension reduction approaches."}
{"_id": "676e552b1b1f10cc7b80ec3ce51bced5990a9e68", "title": "SVD-based collaborative filtering with privacy", "text": "Collaborative filtering (CF) techniques are becoming increasingly popular with the evolution of the Internet. Such techniques recommend products to customers using similar users' preference data. The performance of CF systems degrades with increasing number of customers and products. To reduce the dimensionality of filtering databases and to improve the performance, Singular Value Decomposition (SVD) is applied for CF. Although filtering systems are widely used by E-commerce sites, they fail to protect users' privacy. Since many users might decide to give false information because of privacy concerns, collecting high quality data from customers is not an easy task. CF systems using these data might produce inaccurate recommendations. In this paper, we discuss SVD-based CF with privacy. To protect users' privacy while still providing recommendations with decent accuracy, we propose a randomized perturbation-based scheme."}
{"_id": "b3cedde36a6841b43162fc406b688e51bec68d36", "title": "Hierarchical interpretations for neural network predictions", "text": "Deep neural networks (DNNs) have achieved impressive predictive performance due to their ability to learn complex, non-linear relationships between variables. However, the inability to effectively visualize these relationships has led to DNNs being characterized as black boxes and consequently limited their applications. To ameliorate this problem, we introduce the use of hierarchical interpretations to explain DNN predictions through our proposed method, agglomerative contextual decomposition (ACD). Given a prediction from a trained DNN, ACD produces a hierarchical clustering of the input features, along with the contribution of each cluster to the final prediction. This hierarchy is optimized to identify clusters of features that the DNN learned are predictive. Using examples from Stanford Sentiment Treebank and ImageNet, we show that ACD is effective at diagnosing incorrect predictions and identifying dataset bias. Through human experiments, we demonstrate that ACD enables users both to identify the more accurate of two DNNs and to better trust a DNN\u2019s outputs. We also find that ACD\u2019s hierarchy is largely robust to adversarial perturbations, implying that it captures fundamental aspects of the input and ignores spurious noise."}
{"_id": "9c49820bb76031297dab4ae4f5d9237ca4c2e603", "title": "Moving Target Indication via RADARSAT-2 Multichannel Synthetic Aperture Radar Processing", "text": "With the recent launches of the German TerraSAR-X and the Canadian RADARSAT-2, both equipped with phased array antennas and multiple receiver channels, synthetic aperture radar, ground moving target indication (SAR-GMTI) data are now routinely being acquired from space. Defence R&D Canada has been conducting SAR-GMTI trials to assess the performance and limitations of the RADARSAT-2 GMTI system. Several SAR-GMTI modes developed for RADARSAT-2 are described and preliminary test results of these modes are presented. Detailed equations of motion of a moving target for multiaperture spaceborne SAR geometry are derived and a moving target parameter estimation algorithm developed for RADARSAT-2 (called the Fractrum Estimator) is presented. Limitations of the simple dual-aperture SAR-GMTI mode are analysed as a function of the signal-to-noise ratio and target speed. Recently acquired RADARSAT-2 GMTI data are used to demonstrate the capability of different system modes and to validate the signal model and the algorithm."}
{"_id": "26861e41e5b44774a2801e1cd76fd56126bbe257", "title": "Personalized Tour Recommendation Based on User Interests and Points of Interest Visit Durations", "text": "Tour recommendation and itinerary planning are challenging tasks for tourists, due to their need to select Points of Interest (POI) to visit in unfamiliar cities, and to select POIs that align with their interest preferences and trip constraints. We propose an algorithm called PERSTOUR for recommending personalized tours using POI popularity and user interest preferences, which are automatically derived from real-life travel sequences based on geotagged photos. Our tour recommendation problem is modelled using a formulation of the Orienteering problem, and considers user trip constraints such as time limits and the need to start and end at specific POIs. In our work, we also reflect levels of user interest based on visit durations, and demonstrate how POI visit duration can be personalized using this time-based user interest. Using a Flickr dataset of four cities, our experiments show the effectiveness of PERSTOUR against various baselines, in terms of tour popularity, interest, recall, precision and F1-score. In particular, our results show the merits of using time-based user interest and personalized POI visit durations, compared to the current practice of using frequency-based user interest and average visit durations."}
{"_id": "97dc1b226233b4591df41ad8633d1db91ca49406", "title": "MaJIC: Compiling MATLAB for Speed and Responsiveness", "text": "This paper presents and evaluates techniques to improve the execution performance of MATLAB. Previous efforts concentrated on source to source translation and batch compilation; MaJIC provides an interactive frontend that looks like MATLAB and compiles/optimizes code behind the scenes in real time, employing a combination of just-in-time and speculative ahead-of-time compilation. Performance results show that the proper mixture of these two techniques can yield near-zero response time as well as performance gains previously achieved only by batch compilers."}
{"_id": "14d81790e89e55007b3854e9d242e638065d8415", "title": "NEAT-o-Games: blending physical activity and fun in the daily routine", "text": "This article describes research that aims to encourage physical activity through a novel pervasive gaming paradigm. Data from a wearable accelerometer are logged wirelessly to a cell phone and control the animation of an avatar that represents the player in a virtual race game with other players over the cellular network. Winners are declared every day and players with an excess of activity points can spend some to get hints in mental games of the suite, like Sudoku. The racing game runs in the background throughout the day and every little move counts. As the gaming platform is embedded in the daily routine of players, it may act as a strong behavioral modifier and increase everyday physical activity other than volitional sporting exercise. Such physical activity (e.g., taking the stairs), is termed NEAT and was shown to play a major role in obesity prevention and intervention. A pilot experiment demonstrates that players are engaged in NEAT-o-Games and become more physically active while having a good dosage of fun."}
{"_id": "19c499cfd7fe58dea4241c6dc44dfe0003ac1453", "title": "A novel 8T SRAM with minimized power and delay", "text": "In this paper, a novel 8T SRAM cell is proposed which aims at decreasing the delay and lowering the total power consumption of the cell. The threshold voltage variations in the transistor affect the read and write stability of the cell. Also, power dissipation increases with the number of transistors which in turn affects the read and write stability. The proposed 8T SRAM bitcell is designed using 180 nm CMOS, n-well technology with a supply voltage of 1.8 V. The results show that the average delay has been improved by 80 % compared to the conventional 6T cell. The total power is improved by 14.5 % as compared to conventional 6T SRAM cell."}
{"_id": "381a30fba04a5094f0f0c2b250b00f04c023ccb6", "title": "Texture analysis of medical images.", "text": "The analysis of texture parameters is a useful way of increasing the information obtainable from medical images. It is an ongoing field of research, with applications ranging from the segmentation of specific anatomical structures and the detection of lesions, to differentiation between pathological and healthy tissue in different organs. Texture analysis uses radiological images obtained in routine diagnostic practice, but involves an ensemble of mathematical computations performed with the data contained within the images. In this article we clarify the principles of texture analysis and give examples of its applications, reviewing studies of the technique."}
{"_id": "59ccb3db9e905808340f2edbeb7e9814c47d6beb", "title": "f3.js: A Parametric Design Tool for Physical Computing Devices for Both Interaction Designers and End-users", "text": "Although the exploration of design alternatives is crucial for interaction designers and customization is required for end-users, the current development tools for physical computing devices have focused on single versions of an artifact. We propose the parametric design of devices including their enclosure layouts and programs to address this issue. A Web-based design tool called f3.js is presented as an example implementation, which allows devices assembled from laser-cut panels with sensors and actuator modules to be parametrically created and customized. It enables interaction designers to write code with dedicated APIs, declare parameters, and interactively tune them to produce the enclosure layouts and programs. It also provides a separate user interface for end-users that allows parameter tuning and dynamically generates instructions for device assembly. The parametric design approach and the tool were evaluated through two user studies with interaction designers, university students, and end-users."}
{"_id": "6dfbb5801aab21dc3c0b1825db028bb617477446", "title": "Recurrent Transformer Networks for Semantic Correspondence", "text": "We present recurrent transformer networks (RTNs) for obtaining dense correspondences between semantically similar images. Our networks accomplish this through an iterative process of estimating spatial transformations between the input images and using these transformations to generate aligned convolutional activations. By directly estimating the transformations between an image pair, rather than employing spatial transformer networks to independently normalize each individual image, we show that greater accuracy can be achieved. This process is conducted in a recursive manner to refine both the transformation estimates and the feature representations. In addition, a technique is presented for weakly-supervised training of RTNs that is based on a proposed classification loss. With RTNs, state-of-the-art performance is attained on several benchmarks for semantic correspondence."}
{"_id": "3fc7dbd009c93f9f0a163d9dfd087d7748ee7d34", "title": "Statics and Dynamics of Continuum Robots With General Tendon Routing and External Loading", "text": "Tendons are a widely used actuation strategy for continuum robots that enable forces and moments to be transmitted along the robot from base-mounted actuators. Most prior robots have used tendons routed in straight paths along the robot. However, routing tendons through general curved paths within the robot offers potential advantages in reshaping the workspace and enabling a single section of the robot to achieve a wider variety of desired shapes. In this paper, we provide a new model for the statics and dynamics of robots with general tendon routing paths that is derived by coupling the classical Cosserat-rod and Cosserat-string models. This model also accounts for general external loading conditions and includes traditional axially routed tendons as a special case. The advantage of the usage of this coupled model for straight-tendon robots is that it accounts for the distributed wrenches that tendons apply along the robot. We show that these are necessary to consider when the robot is subjected to out-of-plane external loads. Our experimental results demonstrate that the coupled model matches experimental tip positions with an error of 1.7% of the robot length, in a set of experiments that include both straight and nonstraight routing cases, with both point and distributed external loads."}
{"_id": "4555fd3622908e2170e4ffdd717b83518b123b09", "title": "Folded dipole antenna near metal plate", "text": "The paper presents the effects on antenna parameters when an antenna is placed horizontally near a metal plate. The plate has finite size and rectangular shape. A folded dipole antenna is used and it is placed symmetrically above the plate. The FEM (finite element method) is used to simulate the dependency of antenna parameters on the size of the plate and the distance between the plate and the antenna. The presence of the metal plate, even a small one if it is at the right distance, causes very big changes in the behaviour of the antenna. The bigger the plate, especially in width, the sharper and narrower are the lobes of the radiation pattern. The antenna height defines how many lobes the radiation pattern has. A number of the antenna parameters, including impedance, directivity and front-to-back ratio, change periodically as the antenna height is increased. The resonant frequency of the antenna also changes under the influence of the metal plate."}
{"_id": "2421a14ce3cdd563fc3155a151a69568b0ee6b31", "title": "Semi-supervised Learning by Entropy Minimization", "text": "We consider the semi-supervised learning problem, where a decision rule is to be learned from labeled and unlabeled data. In this framework, we motivate minimum entropy regularization, which enables to incorporate unlabeled data in the standard supervised learning. Our approach includes other approaches to the semi-supervised problem as particular or limiting cases. A series of experiments illustrates that the proposed solutions benefit from unlabeled data. The method challenges mixture models when the data are sampled from the distribution class spanned by the generative model. The performances are definitely in favor of minimum entropy regularization when generative models are misspecified, and the weighting of unlabeled data provides robustness to the violation of the \u201ccluster assumption\u201d. Finally, we also illustrate that the method can also be far superior to manifold learning in high dimension spaces."}
{"_id": "73cdc71ced7be58ad8eeacb8e8311358412d6bd6", "title": "LinkNBed: Multi-Graph Representation Learning with Entity Linkage", "text": "Knowledge graphs have emerged as an important model for studying complex multirelational data. This has given rise to the construction of numerous large scale but incomplete knowledge graphs encoding information extracted from various resources. An effective and scalable approach to jointly learn over multiple graphs and eventually construct a unified graph is a crucial next step for the success of knowledge-based inference for many downstream applications. To this end, we propose LinkNBed, a deep relational learning framework that learns entity and relationship representations across multiple graphs. We identify entity linkage across graphs as a vital component to achieve our goal. We design a novel objective that leverage entity linkage and build an efficient multi-task training procedure. Experiments on link prediction and entity linkage demonstrate substantial improvements over the state-ofthe-art relational learning approaches."}
{"_id": "bcc61dd29c110b25c1a179188b1270a34210d669", "title": "Metacognition of the testing effect: guiding learners to predict the benefits of retrieval.", "text": "If the mnemonic benefits of testing are to be widely realized in real-world learning circumstances, people must appreciate the value of testing and choose to utilize testing during self-guided learning. Yet metacognitive judgments do not appear to reflect the enhancement provided by testing Karpicke & Roediger (Science 319:966-968, 2008). In this article, we show that under judicious conditions, learners can indeed reveal an understanding of the beneficial effects of testing, as well as the interaction of that effect with delay (experiment 1). In that experiment, subjects made judgments of learning (JOLs) for previously studied or previously tested items in either a cue-only or a cue-target context, and either immediately or after a 1-day delay. When subjects made judgments in a cue-only context, their JOLs accurately reflected the effects of testing, both immediately and at a delay. To evaluate the potential of exposure to such conditions for promoting generalized appreciation of testing effects, three further experiments elicited global predictions about restudied and tested items across two study/test cycles (experiments 2, 3, and 4). The results indicated that learners' global na\u00efve metacognitive beliefs increasingly reflect the beneficial effects of testing when learners experience these benefits with increasing external support. If queried under facilitative circumstances, learners appreciate the mnemonic enhancement that testing provides on both an item-by-item and global basis but generalize that knowledge to future learning only with considerable guidance."}
{"_id": "ef0bb86fb89935e42f723771896eae0ac67b9545", "title": "White-box cryptography: practical protection on hostile hosts", "text": "Businesses often interact with users via web-browsers and applications on mobile devices, and host services on cloud servers they may not own. Such highly-exposed environments employ white-box cryptography (WBC) for security protection. WBC operates on a security model far different from the traditional black-box model. The modern business world includes large commercial segments in which end-users are directly exposed to business application software hosted on web browsers, mobile phones, web-connected tablets, and an increasing number of other devices: the `internet of things' (IoT). Software applications and their communication activities now dominate much of the commercial world, and there have been countless hacks on such software, and on devices hosting it, with targets as diverse as mobile phones, web browser applications, vehicles, and even refrigerators! The business advantages of deploying computational power near the user encourage software migration to exposed network end-points, but this increasing exposure provides an ever growing attack surface. Here, we discuss goals and challenges of white-box cryptography and emerging approaches in a continual attempt to stay at least one step ahead of the attackers. We list some WBC techniques, both traditional and recent, indicating how they might be incorporated into a WBC AES implementation."}
{"_id": "1893b770859e7d6f28f4e5f9173065223ef948d6", "title": "The pursuit of happiness: time, money, and social connection.", "text": "Does thinking about time, rather than money, influence how effectively individuals pursue personal happiness? Laboratory and field experiments revealed that implicitly activating the construct of time motivates individuals to spend more time with friends and family and less time working-behaviors that are associated with greater happiness. In contrast, implicitly activating money motivates individuals to work more and socialize less, which (although productive) does not increase happiness. Implications for the relative roles of time versus money in the pursuit of happiness are discussed."}
{"_id": "20cf8416518b40e1aa583d459617081241bc18d4", "title": "Fragile online relationship: a first look at unfollow dynamics in twitter", "text": "We analyze the dynamics of the behavior known as 'unfollow' in Twitter. We collected daily snapshots of the online relationships of 1.2 million Korean-speaking users for 51 days as well as all of their tweets. We found that Twitter users frequently unfollow. We then discover the major factors, including the reciprocity of the relationships, the duration of a relationship, the followees' informativeness, and the overlap of the relationships, which affect the decision to unfollow. We conduct interview with 22 Korean respondents to supplement the quantitative results.\n They unfollowed those who left many tweets within a short time, created tweets about uninteresting topics, or tweeted about the mundane details of their lives. To the best of our knowledge, this work is the first systematic study of the unfollow behavior in Twitter."}
{"_id": "0fb54f54cd7427582fdb2a8da46d42884c7f417a", "title": "The role of emotion in computer-mediated communication: A review", "text": "It has been argued that the communication of emotions is more difficult in computer-mediated communication (CMC) than in face-to-face (F2F) communication. The aim of this paper is to review the empirical evidence in order to gain insight in whether emotions are communicated differently in these different modes of communication. We review two types of studies: (1) studies that explicitly examine discrete emotions and emotion expressions, and (2) studies that examine emotions more implicitly, namely as self-disclosure or emotional styles. Our conclusion is that there is no indication that CMC is a less emotional or less personally involving medium than F2F. On the contrary, emotional communication online and offline is surprisingly similar, and if differences are found they show more frequent and explicit emotion communication in CMC than in F2F. 2007 Elsevier Ltd. All rights reserved."}
{"_id": "55327c16e9ace5bd6e716a83abc6ebdca7497621", "title": "Initial construction and validation of the Pathological Narcissism Inventory.", "text": "The construct of narcissism is inconsistently defined across clinical theory, social-personality psychology, and psychiatric diagnosis. Two problems were identified that impede integration of research and clinical findings regarding narcissistic personality pathology: (a) ambiguity regarding the assessment of pathological narcissism vs. normal narcissism and (b) insufficient scope of existing narcissism measures. Four studies are presented documenting the initial derivation and validation of the Pathological Narcissism Inventory (PNI). The PNI is a 52-item self-report measure assessing 7 dimensions of pathological narcissism spanning problems with narcissistic grandiosity (Entitlement Rage, Exploitativeness, Grandiose Fantasy, Self-sacrificing Self-enhancement) and narcissistic vulnerability (Contingent Self-esteem, Hiding the Self, Devaluing). The PNI structure was validated via confirmatory factor analysis. The PNI correlated negatively with self-esteem and empathy, and positively with shame, interpersonal distress, aggression, and borderline personality organization. Grandiose PNI scales were associated with vindictive, domineering, intrusive, and overly-nurturant interpersonal problems, and vulnerable PNI scales were associated with cold, socially avoidant, and exploitable interpersonal problems. In a small clinical sample, PNI scales exhibited significant associations with parasuicidal behavior, suicide attempts, homicidal ideation, and several aspects of psychotherapy utilization."}
{"_id": "645c2d0bf6225c87f1f8fdc2eacf4a9a39c570fc", "title": "COPSS-lite: Lightweight ICN Based Pub/Sub for IoT Environments", "text": "Information Centric Networking (ICN) is a new networking paradigm that treats content as the first class entity. It provides content to users without regards to the current location or source of the content. The publish/subscribe (pub/sub) systems have gained popularity in Internet. Pub/sub systems dismisses the need for users to request every content of their interest. Instead, the content is supplied to the interested users (subscribers) as and when it is published. CCN/NDN are popular ICN proposals widely accepted in the ICN community however, they do not provide an efficient pub/sub mechanism. COPSS enhances CCN/NDN with an efficient pub/sub capability. Internet of Things (IoT) is a growing topic of interest in both Academia and Industry. The current designs for IoT relies on IP. However, the IoT devices are constrained in their available resources and IP is heavy for their operation.We observed that IoT\u2019s are information centric in nature and hence ICN is a more suitable candidate to support IoT environments. Although NDN and COPSS work well for the Internet, their current full fledged implementations cannot be used by the resource constrained IoT devices. CCN-lite is a light weight, inter-operable version of the CCNx protocol for supporting the IoT devices. However, CCN-lite like its ancestors lacks the support for an efficient pub/sub mechanism. In this paper, we developed COPSS-lite, an efficient and light weight implementation of pub/sub for IoT. COPSS-lite is developed to enhance CCN-lite and also support multi-hop connection by incorporating the famous RPL protocol for low power and lossy networks. We provide a preliminary evaluation to show proof of operability with real world sensor devices in IoT lab. Our results show that COPSS-lite is compact, operates on all platforms that support CCN-lite and we observe significant performance benefits with COPSS-lite in IoT environments."}
{"_id": "5752b8dcec5856b7ad6289bbe1177acce535fba4", "title": "Parsing English with a Link Grammar", "text": "We develop a formal grammatical system called a link grammar, show how English grammar can be encoded in such a system, and give algorithms for efficiently parsing with a link grammar. Although the expressive power of link grammars is equivalent to that of context free grammars, encoding natural language grammars appears to be much easier with the new system. We have written a program for general link parsing and written a link grammar for the English language. The performance of this preliminary system \u2013 both in the breadth of English phenomena that it captures and in the computational resources used \u2013 indicates that the approach may have practical uses as well as linguistic significance. Our program is written in C and may be obtained through the internet. c 1991 Daniel Sleator and Davy Temperley * School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, sleator@cs.cmu.edu. y Music Department, Columbia University, New York, NY 10027, dt3@cunixa.cc.columbia.edu. Research supported in part by the National Science Foundation under grant CCR-8658139, Olin Corporation, and R. R. Donnelley and Sons. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of Olin Corporation, R. R. Donnelley and Sons, or the NSF."}
{"_id": "36a3eed52ff0a694aa73ce6a0d592cb440ed3d31", "title": "Robust Physical-World Attacks on Machine Learning Models", "text": "Deep neural network-based classifiers are known to be vulnerable to adversarial examples that can fool them into misclassifying their input through the addition of small-magnitude perturbations. However, recent studies have demonstrated that such adversarial examples are not very effective in the physical world\u2014they either completely fail to cause misclassification or only work in restricted cases where a relatively complex image is perturbed and printed on paper. In this paper we propose a new attack algorithm\u2014Robust Physical Perturbations (RP2)\u2014 that generates perturbations by taking images under different conditions into account. Our algorithm can create spatiallyconstrained perturbations that mimic vandalism or art to reduce the likelihood of detection by a casual observer. We show that adversarial examples generated by RP2 achieve high success rates under various conditions for real road sign recognition by using an evaluation methodology that captures physical world conditions. We physically realized and evaluated two attacks, one that causes a Stop sign to be misclassified as a Speed Limit sign in 100% of the testing conditions, and one that causes a Right Turn sign to be misclassified as either a Stop or Added Lane sign in 100% of the testing conditions."}
{"_id": "fcc961631830b8b11114bd60f7e3ccb3888e3678", "title": "Sociable driving agents to maintain driver's attention in autonomous driving", "text": "Recently, many studies have been conducted on increasing the automation level in cars to achieve safer and more efficient transportation. The increased automation level creates room for the drivers to shift their attention to non-driving related activities. However, there are cases that cannot be handled by automation where a driver should take over the control. This pilot study investigates a paradigm for keeping the drivers' situation-awareness active during autonomous driving by utilizing a social robot system, NAMIDA. NAMIDA is an interface consisting of three sociable driving agents that can interact with the driver through eye-gaze behaviors. We analyzed the effectiveness of NAMIDA on maintaining the drivers' attention to the road, by evaluating the response time of the drivers to a critical situation on the road. An experiment consisting of a take over scenario was conducted in a dynamic driving simulator. The results showed that existence of NAMIDA significantly reduced the response time of the drivers. However, surprisingly, NAMIDA without eye-gaze behaviors was more effective in reducing the response time than NAMIDA with eye-gaze behaviors. Additionally, the results revealed better subjective impressions for NAMIDA with eye-gaze behaviors behaviors."}
{"_id": "e0f296dd7a8c9e315e4bd4e1108142f2e9e6faec", "title": "Correlating driver gaze with the road scene for driver assistance systems", "text": "A driver assistance system (DAS) should support the driver by monitoring road and vehicle events and presenting relevant and timely information to the driver. It is impossible to know what a driver is thinking, but we can monitor the driver\u2019s gaze direction and compare it with the position of information in the driver\u2019s viewfield to make inferences. In this way, not only do we monitor the driver\u2019s actions, we monitor the driver\u2019s observations as well. In this paper we present the automated detection and recognition of road signs, combined with the monitoring of the driver\u2019s response. We present a complete system that reads speed signs in real-time, compares the driver\u2019s gaze, and provides immediate feedback if it appears the sign has been missed by t \u00a9"}
{"_id": "19aca01bafe52131ec95473dac105889aa6a4d33", "title": "Robust method for road sign detection and recognition", "text": "This paper describes a method for detecting and recognizing road signs in gray-level and color images acquired by a single camera mounted on a moving vehicle. The method works in three stages. First, the search for the road sign is reduced to a suitable region of the image by using some a priori knowledge on the scene or color clues (when available). Secondly, a geometrical analysis of the edges extracted from the image is carried out, which generates candidates to be circular and triangular signs. Thirdly, a recognition stage tests by cross-correlation techniques each candidate which, if validated, is classi ed according to the data-base of signs. An extensive experimentation has shown that the method is robust against low-level noise corrupting edge detection and contour following, and works for images of cluttered urban streets as well as country roads and highways. A further improvement on the detection and recognition scheme has been obtained by means of temporal integration based on Kalman ltering methods of the extracted information. The proposed approach can be very helpful for the development of a system for driving assistance."}
{"_id": "28312c3a47c1be3a67365700744d3d6665b86f22", "title": "Face recognition: A literature survey", "text": "As one of the most successful applications of image analysis and understanding, face recognition has recently received significant attention, especially during the past several years. At least two reasons account for this trend: the first is the wide range of commercial and law enforcement applications, and the second is the availability of feasible technologies after 30 years of research. Even though current machine recognition systems have reached a certain level of maturity, their success is limited by the conditions imposed by many real applications. For example, recognition of face images acquired in an outdoor environment with changes in illumination and/or pose remains a largely unsolved problem. In other words, current systems are still far away from the capability of the human perception system.This paper provides an up-to-date critical survey of still- and video-based face recognition research. There are two underlying motivations for us to write this survey paper: the first is to provide an up-to-date review of the existing literature, and the second is to offer some insights into the studies of machine recognition of faces. To provide a comprehensive survey, we not only categorize existing recognition techniques but also present detailed descriptions of representative methods within each category. In addition, relevant topics such as psychophysical studies, system evaluation, and issues of illumination and pose variation are covered."}
{"_id": "2c21e4808dcb8f9d935d98af07d733b8134525ab", "title": "Fast Radial Symmetry for Detecting Points of Interest", "text": "A new transform is presented that utilizes local radial symmetry to highlight points of interest within a scene. Its lowcomputational complexity and fast runtimes makes this method well-suited for real-time vision applications. The performance of the transform is demonstrated on a wide variety of images and compared with leading techniques from the literature. Both as a facial feature detector and as a generic region of interest detector the new transform is seen to offer equal or superior performance to contemporary techniques at a relatively low-computational cost. A real-time implementation of the transform is presented running at over 60 frames per second on a standard Pentium III PC."}
{"_id": "342f0a9db326eb32df8edb8268e573ea6c7d3999", "title": "Vision-based eye-gaze tracking for human computer interface", "text": "Eye-gaze is an input mode which has the potential of an efficient computer interface. Eye movement has been the focus of research in this area. Non-intrusive eyegaze tracking that allows slight head movement is addressed in this paper. A small 2D mark is employed as a reference to compensate for this movement. The iris center has been chosen for purposes of measuring eye movement. The gaze point is estimated after acquiring the eye movement data. Preliminary experimental results are given through a screen pointing application."}
{"_id": "b43ffb0d4f8d1c66632b78ad74d92ab1218a6976", "title": "An Empirical Exploration of Curriculum Learning for Neural Machine Translation", "text": "Machine translation systems based on deep neural networks are expensive to train. Curriculum learning aims to address this issue by choosing the order in which samples are presented during training to help train better models faster. We adopt a probabilistic view of curriculum learning, which lets us flexibly evaluate the impact of curricula design, and perform an extensive exploration on a German-English translation task. Results show that it is possible to improve convergence time at no loss in translation quality. However, results are highly sensitive to the choice of sample difficulty criteria, curriculum schedule and other hyperparameters."}
{"_id": "1f2e7ce5ccf6afc5551db594997a749444480bbd", "title": "Algorithms for Lipschitz Learning on Graphs", "text": "We develop fast algorithms for solving regression problems on graphs where one is given the value of a function at some vertices, and must find its smoothest possible extension to all vertices. The extension we compute is the absolutely minimal Lipschitz extension, and is the limit for large p of p-Laplacian regularization. We present an algorithm that computes a minimal Lipschitz extension in expected linear time, and an algorithm that computes an absolutely minimal Lipschitz extension in expected time \u00d5(mn). The latter algorithm has variants that seem to run much faster in practice. These extensions are particularly amenable to regularization: we can perform l0-regularization on the given values in polynomial time and l1-regularization on the initial function values and on graph edge weights in time \u00d5(m). Our definitions and algorithms naturally extend to directed graphs."}
{"_id": "c633e0075f56581556967dbe32631d8e4c94dcb5", "title": "Tanzer group IIB constricted ear repair with helical advancement and superior auricular artery chondrocutaneous flap.", "text": "Constricted ear deformity was first described by Tanzer and classified it into 3 groups according to the degree of constriction. The group IIB deformity involves the helix, scapha, and antihelical fold. The height of the ear is sharply reduced, and the soft tissue envelope is not sufficient to close the cartilage framework after expansion and reshaping.This study describes expanding the cartilage and increasing the height by advancing the helical root superiorly and repairing the skin-cartilage defect with a superior auricular artery chondrocutaneous flap in Tanzer group IIB constricted ear deformity.Six ears of 6 patients were treated with this technique during the past 3 years. All patients were satisfied with the appearance of their corrected ears, and the increase in height was maintained through the follow-up period.The described technique does not have the disadvantages and possible complications of harvesting a costal cartilage graft. Moving and fixing the root of helix to a more superior position provide the auricle with additional length. The superior auricular artery chondrocutaneous flap not only provides adequate soft tissue for primary closure of the anterior portion of the auricle but also aids in repairing the cartilage defect resulting from the superior advancement of the helix as well."}
{"_id": "20862a3a216ecc99519fa24fb0826317a3e54302", "title": "Hand Gesture Recognition using Sign Language", "text": "Sign Language is mostly used by deaf and dumb people. In order to improve the man machine interaction, sign language can be used as a way for communicating with machines. Most of the applications which enable sign language processing are using data gloves and other devices for interacting with computers. This restricts the freedom of users. So to avoid this, this system we capture live video stream through normal webcam and pass it as input and process this video to detect human gestures . These gestures can be signs represented in any sign language like American Sign Language (ASL), Indian Sign Language(ISL) etc. By processing these signs, this system can make an efficient system for searching using signs. In future this method of searching can be implemented in many areas such as railway station for searching the details of any trains, etc. Searching uses signs can play a major role in searching technique because the latest existing technique of searching is based on voice. But this may not work properly because each person has a different pronunciation. This system can overcome the drawbacks of the current available searching technique and thus can make every person under an umbrella of searching."}
{"_id": "4e740d864f382798b33b3510b1ca2980c885c5e7", "title": "Thermoplastic Forming of Bulk Metallic Glass\u2014 A Technology for MEMS and Microstructure Fabrication", "text": "A technology for microelectromechanical systems (MEMS) and microstructure fabrication is introduced where the bulk metallic glass (BMG) is formed at a temperature where the BMG exist as a viscous liquid under an applied pressure into a mold. This thermoplastic forming is carried out under comparable forming pressure and temperatures that are used for plastics. The range of possible sizes in all three dimensions of this technology allows the replication of high strength features ranging from about 30 nm to centimeters with aspect ratios of 20 to 1, which are homogeneous and isotropic and free of stresses and porosity. Our processing method includes a hot-cutting technique that enables a clean planar separation of the parts from the BMG reservoir. It also allows to net-shape three-dimensional parts on the micron scale. The technology can be implemented into conventional MEMS fabrication processes. The properties of BMG as well as the thermoplastic formability enable new applications and performance improvements of existing MEMS devices and nanostructures"}
{"_id": "85acd7b5542bf475c70914f1c4efa4e2207421c2", "title": "Mobile Multimedia Recommendation in Smart Communities: A Survey", "text": "Due to the rapid growth of Internet broadband access and proliferation of modern mobile devices, various types of multimedia (e.g., text, images, audios, and videos) have become ubiquitously available anytime. Mobile device users usually store and use multimedia contents based on their personal interests and preferences. Mobile device challenges such as storage limitation have, however, introduced the problem of mobile multimedia overload to users. To tackle this problem, researchers have developed various techniques that recommend multimedia for mobile users. In this paper, we examine the importance of mobile multimedia recommendation systems from the perspective of three smart communities, namely mobile social learning, mobile event guide, and context-aware services. A cautious analysis of existing research reveals that the implementation of proactive, sensor-based and hybrid recommender systems can improve mobile multimedia recommendations. Nevertheless, there are still challenges and open issues such as the incorporation of context and social properties, which need to be tackled to generate accurate and trustworthy mobile multimedia recommendations."}
{"_id": "63aa729924d672a37f04bd3a18e60bd0510bb3a3", "title": "Wireless Geophone Network for remote monitoring and detection of landslides", "text": "Recent years have shown an alarmous increase in rain fall induced landslides. This has facilitated the need for having a monitoring system to predict the landslides which could eventually reduce the loss of human life. We have developed and deployed a Wireless Sensor Network to monitor rainfall induced landslide, in Munnar, South India. A successful landslide warning was issued in June 2009 using this system. The system is being enhanced by incorporating a Wireless Geophone Network to locate the initiation of landslide. The paper discusses an algorithm that was developed to analyze the geophone data and automatically detect the landslide signal. A novel method to localize the landslide initiation point is detailed. The algorithm is based on the time delay inherent in the transmission of waves through the surface of the earth. The approach detailed here does not require additional energy since the geophones are self excitatory. The error rate of the approach is much less when compared to the other localization methods like RSSI. The proposed algorithm is being tested and validated, in the landslide laboratory set up at our university."}
{"_id": "d70cd3d2fe0a194321ee92c305976873b883d529", "title": "A compact, 37% fractional bandwidth millimeter-wave phase shifter using a wideband lange coupler for 60-GHz and E-band systems", "text": "A wideband 57.7\u201384.2 GHz Phase Shifter is presented using a compact Lange coupler to generate in-phase and quadrature signal. The Lange coupler is followed by two balun transformers that provide the IQ vector modulation with differential I and Q signals. The implemented Phase Shifter demonstrates an average 6-dB insertion loss and 5-dB gain variation. The measured average rms phase and gain errors are 7 degrees and 1 dB, respectively. The phase shifter is implemented in GlobalFoundries 45-nm SOI CMOS technology using a trap-rich substrate. The chip area is 385 \u03bcm \u00d7 285 \u03bcm and the Phase Shifter consumes less than 17 mW. To the best of authors knowledge, this is the first phase shifter that covers both 60 GHz band and E-band frequencies with a fractional bandwidth of 37%."}
{"_id": "98bcaf679cc12712d8187b64e1351f1cbf90d7fb", "title": "An innovative ontology-driven system supporting personnel selection: the OntoHR case", "text": "This paper describes the initial development of an HRM system that aims to decrease the gap between higher vocational education and the labour market for a specific job in the ICT sector. This paper focuses specifically on the delineation of a process model and the selection of a suitable job role (information system analyst) that is valid across organisational and cultural boundaries. The process model implies various applied uses of this ontology-based system, including mapping qualifications in vocational education to current and valid job roles, testing and evaluating the student applicant on the basis of labour market driven competencies, and providing ad-hoc support to educational institutions by elucidating the weaknesses of particular VET curricula."}
{"_id": "72868defad65be8d72c911afb343c3bc74fb2a24", "title": "A Balanced Filtering Branch-Line Coupler", "text": "A balanced filtering branch-line coupler is proposed for the first time. The proposed design is realized by coupled microstrip lines and coupled-line-fed coupling structures. Based on this, the proposed design not only exhibits the functions of power dividing and filtering for differential-mode signals, but also can suppress the common-mode noise/signal and be easily connected with other balanced circuits. The dimension of the proposed design is similar to that of the traditional single-ended design. A prototype of the balanced filtering branch-line coupler centered at 1.87 GHz with the size of 0.33 \u03bbg \u00d7 0.42 \u03bbg is fabricated, where \u03bbg is the guided wavelength at the center frequency. The measured results exhibit the maximum insertion loss of 1.4 dB with a 3-dB fractional bandwidth of 3.5%."}
{"_id": "c59dd7da9b11761239d9b97e36ef972acfb1ba6f", "title": "Identifying personal genomes by surname inference.", "text": "Sharing sequencing data sets without identifiers has become a common practice in genomics. Here, we report that surnames can be recovered from personal genomes by profiling short tandem repeats on the Y chromosome (Y-STRs) and querying recreational genetic genealogy databases. We show that a combination of a surname with other types of metadata, such as age and state, can be used to triangulate the identity of the target. A key feature of this technique is that it entirely relies on free, publicly accessible Internet resources. We quantitatively analyze the probability of identification for U.S. males. We further demonstrate the feasibility of this technique by tracing back with high probability the identities of multiple participants in public sequencing projects."}
{"_id": "96693c88cdcf0b721e4eff7f4f426bc90f90d601", "title": "Automatic web spreadsheet data extraction", "text": "Spreadsheets contain a huge amount of high-value data but do not observe a standard data model and thus are difficult to integrate. A large number of data integration tools exist, but they generally can only work on relational data. Existing systems for extracting relational data from spreadsheets are too labor intensive to support ad-hoc integration tasks, in which the correct extraction target is only learned during the course of user interaction.\n This paper introduces a system that automatically extracts relational data from spreadsheets, thereby enabling relational spreadsheet integration. The resulting integrated relational data can be queried directly or can be translated into RDF triples. When compared to standard techniques for spreadsheet data extraction on a set of 100 random Web spreadsheets, the system reduces the amount of human labor by 72% to 92%. In addition to the system design, we present the results of a general survey of more than 400,000 spreadsheets we downloaded from the Web, giving a novel view of how users organize their data in spreadsheets."}
{"_id": "5bbc7e59b11e737f653a565419b68d9f5d9ceb68", "title": "Analysis of waveguide slot-based structures using wide-band equivalent-circuit model", "text": "Analysis of geometrically complicated waveguide-based slotted arrays and filters is performed using a simple equivalent-circuit model. First, the circuit parameters (inductance and capacitance) of a simple waveguide slot-coupler problem are obtained through moment-method (MoM) analysis. The values of the lumped LC elements are virtually constant over the frequency range of interest (the X-band) for specific waveguide and slot dimensions. Based on the equivalent-circuit model of a single slot of two coupled waveguides, more complicated structures are then analyzed, such as slot coupler arrays and slot-based waveguide filters. The scattering parameters of these structures are obtained through circuit analysis, and are verified using the MoM and finite-difference time-domain method. Excellent agreement is observed over a wide band of frequencies and is confirmed by experimental results."}
{"_id": "eb58118b9db1e95f9792f39c3780dbba3bb966cb", "title": "A Wearable Inertial Measurement System With Complementary Filter for Gait Analysis of Patients With Stroke or Parkinson\u2019s Disease", "text": "This paper presents a wearable inertial measurement system and its associated spatiotemporal gait analysis algorithm to obtain quantitative measurements and explore clinical indicators from the spatiotemporal gait patterns for patients with stroke or Parkinson\u2019s disease. The wearable system is composed of a microcontroller, a triaxial accelerometer, a triaxial gyroscope, and an RF wireless transmission module. The spatiotemporal gait analysis algorithm, consisting of procedures of inertial signal acquisition, signal preprocessing, gait phase detection, and ankle range of motion estimation, has been developed for extracting gait features from accelerations and angular velocities. In order to estimate accurate ankle range of motion, we have integrated accelerations and angular velocities into a complementary filter for reducing the accumulation of integration error of inertial signals. All 24 participants mounted the system on their foot to walk along a straight line of 10 m at normal speed and their walking recordings were collected to validate the effectiveness of the proposed system and algorithm. Experimental results show that the proposed inertial measurement system with the designed spatiotemporal gait analysis algorithm is a promising tool for automatically analyzing spatiotemporal gait information, serving as clinical indicators for monitoring therapeutic efficacy for diagnosis of stroke or Parkinson\u2019s disease."}
{"_id": "f17d9ab6a60600174df20eb6cb1fecc522912257", "title": "Preparation of Personalized-dose Salbutamol Sulphate Oral Films with Thermal Ink-Jet Printing", "text": "To evaluate the use of thermal ink-jetting as a method for dosing drugs onto oral films. A Hewlett-Packard printer cartridge was modified so that aqueous drug solutions replaced the ink. The performance of the printer as a function of print solution viscosity and surface tension was determined; viscosities between 1.1 and 1.5\u00a0mm2 s-1 were found to be optimal, while surface tension did not affect deposition. A calibration curve for salbutamol sulphate was prepared, which demonstrated drug deposition onto an acetate film varied linearly with concentration (r2\u2009=\u20090.9992). The printer was then used to deposit salbutamol sulphate onto an oral film made of potato starch. It was found that when doses were deposited in a single pass under the print head, then the measured dose was in good agreement with the theoretical dose. With multiple passes the measured dose was always significantly less than the theoretical dose. It is proposed that the losses arise from erosion of the printed layer by shearing forces during paper handling. The losses were predictable, and the variance in dose deposited was always less than the BP limits for tablet and oral syrup salbutamol sulphate preparations. TIJ printing offers a rapid method for extemporaneous preparation of personalized-dose medicines."}
{"_id": "a1a84459c9cae06e7e118f053193ac511fba6d96", "title": "From big smartphone data to worldwide research: The Mobile Data Challenge", "text": "This paper presents an overview of the Mobile Data Challenge (MDC), a large-scale research initiative aimed at generating innovations around smartphone-based research, as well as community-based evaluation of mobile data analysis methodologies. First, we review the Lausanne Data Collection Campaign (LDCC) \u2013 an initiative to collect unique, longitudinal smartphone data set for the MDC. Then, we introduce the Open and Dedicated Tracks of the MDC; describe the specific data sets used in each of them; discuss the key design and implementation aspects introduced in order to generate privacypreserving and scientifically relevant mobile data resources for wider use by the research community; and summarize the main research trends found among the 100+ challenge submissions. We finalize by discussing the main lessons learned from the participation of several hundred researchers worldwide in the MDC Tracks."}
{"_id": "519d220149838ca5f534b004b305ed9ca1f3afd1", "title": "ESUR prostate MR guidelines 2012", "text": "The aim was to develop clinical guidelines for multi-parametric MRI of the prostate by a group of prostate MRI experts from the European Society of Urogenital Radiology (ESUR), based on literature evidence and consensus expert opinion. True evidence-based guidelines could not be formulated, but a compromise, reflected by \u201cminimal\u201d and \u201coptimal\u201d requirements has been made. The scope of these ESUR guidelines is to promulgate high quality MRI in acquisition and evaluation with the correct indications for prostate cancer across the whole of Europe and eventually outside Europe. The guidelines for the optimal technique and three protocols for \u201cdetection\u201d, \u201cstaging\u201d and \u201cnode and bone\u201d are presented. The use of endorectal coil vs. pelvic phased array coil and 1.5 vs. 3\u00a0T is discussed. Clinical indications and a PI-RADS classification for structured reporting are presented. Key Points \u2022 This report provides guidelines for magnetic resonance imaging (MRI) in prostate cancer. \u2022 Clinical indications, and minimal and optimal imaging acquisition protocols are provided. \u2022 A structured reporting system (PI-RADS) is described."}
{"_id": "7e7f14f325d7e8d70e20ca22800ad87cfbf339ff", "title": "An overview of pricing models for revenue management", "text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles."}
{"_id": "765865cd323c8f16909c89ca8e5a3ff38e41c36b", "title": "Recommendation Systems in Software Engineering", "text": "Software engineering is a knowledge-intensive activity that presents many information navigation challenges. Information spaces in software engineering include the source code and change history of the software, discussion lists and forums, issue databases, component technologies and their learning resources, and the development environment. The technical nature, size, and dynamicity of these information spaces motivate the development of a special class of applications to support developers: recommendation systems in software engineering (RSSEs), which are software applications that provide information items estimated to be valuable for a software engineering task in a given context. In this introduction, we review the characteristics of information spaces in software engineering, describe the unique aspects of RSSEs, present an overview of the issues and considerations involved in creating, evaluating, and using RSSEs, and present a general outlook on the current state of research and development in the field of recommendation systems for highly technical domains."}
{"_id": "87dc0dea299ed325d70973b6ffc8ca03b48d97f4", "title": "Relationship Between Visual-Motor Integration , Eye-Hand Coordination , and Quality of Handwriting", "text": "If the influence of visual-motor integration (copying forms) on the quality of handwriting has been widely investigated, the influence of eye-hand coordination (tracing item) has been less well analyzed. The Concise Assessment Scale for Children\u2019s Handwriting (BHK), the Developmental Test of Visual Perception (DTVP-2), and the section \u201cManual Dexterity\u201d of the Movement Assessment Battery for Children (M-ABC) were administered to a group of second grade children (N = 75; 8.1-year-olds). The association of visual-motor integration and eye-hand coordination are predictive of the quality of handwriting (p < . 001). These two skills should be taken into consideration when children are referred to occupational therapy for difficulties in handwriting."}
{"_id": "dc877b84314f20dc32df80f838942325decf997a", "title": "The research and implementation of metadata cache backup technology based on CEPH file system", "text": "Based on research and analysis of the Ceph file system, the log cache backup scheme is proposed. The log cache backup scheme reduces metadata access delays by caching logs and reducing the time of storing logs to server clusters. In order to prevent the loss of cached data in metadata servers, a cached data backup scheme is proposed. Compared to the metadata management subsystem of the Ceph, the log cache backup scheme can effectively improve the access performance of metadata. Finally, based on open source codes of the Ceph file system, the log cache backup scheme is implemented. Compared to the performance of the Ceph metadata management subsystem, experiment results show that performance improvements of the log cache backup scheme are up to 11.5%."}
{"_id": "9ca7a563741e76c1a3b5e749c78a0604cf18fa24", "title": "A meta-analysis of the technology acceptance model: Investigating subjective norm and moderation effects", "text": "We conducted a quantitative meta-analysis of previous research on the technology acceptance model (TAM) in an attempt to make well-grounded statements on the role of subjective norm. Furthermore, we compared TAM results by taking into account moderating effects of one individual-related factor (type of respondents), one technology-related factor (type of technology), and one contingent factor (culture). Results indicated a significant influence of subjective norm on perceived usefulness and behavioral intention to use. Moderating effects were found for all three factors. The findings yielded managerial implications for both intracompany and market-based settings. # 2006 Elsevier B.V. All rights reserved."}
{"_id": "d842dd469141b4a6f06894240814ef641270fe56", "title": "Triangular Tile Pasting P Systems for Pattern Generation", "text": "-In the area of membrane computing, a new computability model, called P system, which is a highly distributed, parallel theoretical computing model, was introduced by G.H. P\u0103un [1], inspired from the cell structure and its function. There are several application areas of these P systems. Among these one area deals with the problem of picture generation. Ceterachi et al. [2] began a study on linking the two areas of membrane computing and picture grammars, which were not very much linked before, by relating P system and array-rewriting grammars generating picture languages and proposing array rewriting P systems. Iso-Triangular picture languages were introduced in [3], which can also generate rectangular and hexagonal picture languages. Tile pasting P system model for pattern generation was introduced in [4]. In this paper we propose a theoretical model of a P system using triangular tiles called Triangular tile pasting P system, for generating two dimensional patterns that are generated by gluing triangular tiles and study some of its properties. Keywords--Triangular Tiles, Array Grammars, Pasting System, Tile Pasting P Systems"}
{"_id": "042f8effa0841dca3a41b58fab12f0fd8e3f9ccb", "title": "Reducing Multiclass to Binary: A Unifying Approach for Margin Classifiers", "text": "We present a unifying framework for studying the solution of multiclass categorization problems by reducing them to multiple binary problems that are then solved using a margin-based binary learning algorithm. The proposed framework unifies some of the most popular approaches in which each class is compared against all others, or in which all pairs of classes are compared to each other, or in which output codes with error-correcting properties are used. We propose a general method for combining the classifiers generated on the binary problems, and we prove a general empirical multiclass loss bound given the empirical loss of the individual binary learning algorithms. The scheme and the corresponding bounds apply to many popular classification learning algorithms including support-vector machines, AdaBoost, regression, logistic regression and decision-tree algorithms. We also give a multiclass generalization error analysis for general output codes with AdaBoost as the binary learner. Experimental results with SVM and AdaBoost show that our scheme provides a viable alternative to the most commonly used multiclass algorithms."}
{"_id": "3f168efed1bc3d97c0f7ddb2f3e79f2eb230bafa", "title": "Analysis of Emotionally Salient Aspects of Fundamental Frequency for Emotion Detection", "text": "During expressive speech, the voice is enriched to convey not only the intended semantic message but also the emotional state of the speaker. The pitch contour is one of the important properties of speech that is affected by this emotional modulation. Although pitch features have been commonly used to recognize emotions, it is not clear what aspects of the pitch contour are the most emotionally salient. This paper presents an analysis of the statistics derived from the pitch contour. First, pitch features derived from emotional speech samples are compared with the ones derived from neutral speech, by using symmetric Kullback-Leibler distance. Then, the emotionally discriminative power of the pitch features is quantified by comparing nested logistic regression models. The results indicate that gross pitch contour statistics such as mean, maximum, minimum, and range are more emotionally prominent than features describing the pitch shape. Also, analyzing the pitch statistics at the utterance level is found to be more accurate and robust than analyzing the pitch statistics for shorter speech regions (e.g., voiced segments). Finally, the best features are selected to build a binary emotion detection system for distinguishing between emotional versus neutral speech. A new two-step approach is proposed. In the first step, reference models for the pitch features are trained with neutral speech, and the input features are contrasted with the neutral model. In the second step, a fitness measure is used to assess whether the input speech is similar to, in the case of neutral speech, or different from, in the case of emotional speech, the reference models. The proposed approach is tested with four acted emotional databases spanning different emotional categories, recording settings, speakers and languages. The results show that the recognition accuracy of the system is over 77% just with the pitch features (baseline 50%). When compared to conventional classification schemes, the proposed approach performs better in terms of both accuracy and robustness."}
{"_id": "782f2b8924ed5104b42308a0565429747780f0c2", "title": "The role of voice quality in communicating emotion, mood and attitude", "text": "This paper explores the role of voice quality in the communication of emotions, moods and attitudes. Listeners reactions to an utterance synthesised with seven different voice qualities were elicited in terms of pairs of opposing affective attributes. The voice qualities included harsh voice, tense voice, modal voice, breathy voice, whispery voice, creaky voice and lax\u2013creaky voice. These were synthesised using a formant synthesiser, and the voice source parameter settings were guided by prior analytic studies as well as auditory judgements. Results offer support for some past observations on the association of voice quality and affect, and suggest a number of refinements in some cases. Listeners ratings further suggest that these qualities are considerably more effective in signalling milder affective states than the strong emotions. It is clear that there is no one-to-one mapping between voice quality and affect: rather a given quality tends to be associated with a cluster of affective attributes. 2002 Elsevier Science B.V. All rights reserved."}
{"_id": "a474b149717cb37d6c507395ba27844a92889939", "title": "Emotion recognition in human-computer interaction", "text": "In this paper, we outline the approach we have developed to construct an emotion-recognising system. It is based on guidance from psychological studies of emotion, as well as from the nature of emotion in its interaction with attention. A neural network architecture is constructed to be able to handle the fusion of different modalities (facial features, prosody and lexical content in speech). Results from the network are given and their implications discussed, as are implications for future direction for the research."}
{"_id": "002a8b9ef513d46dc8dcce85c04a87ae6a221b4c", "title": "New Support Vector Algorithms", "text": "We propose a new class of support vector algorithms for regression and classification. In these algorithms, a parameter lets one effectively control the number of support vectors. While this can be useful in its own right, the parameterization has the additional benefit of enabling us to eliminate one of the other free parameters of the algorithm: the accuracy parameter in the regression case, and the regularization constant C in the classification case. We describe the algorithms, give some theoretical results concerning the meaning and the choice of , and report experimental results."}
{"_id": "37bf8fa510083fadc50ef3a52e4b8c10fa0531dd", "title": "RNN-based sequence prediction as an alternative or complement to traditional recommender systems", "text": "The recurrent neural networks have the ability to grasp the temporal patterns within the data. This is a property that can be used in order to help a recommender system better taking into account the user past history. Still the dimensionality problem that raises within the recommender system field also raises here as the number of items the system have to be aware of is susceptibility high. Recent research have studied the use of such neural networks at a user\u2019s session level. This thesis rather examines the use of this technique at a whole user\u2019s past history level associated with techniques such as embeddings and softmax sampling in order to accommodate with the high dimensionality. The proposed method results in a sequence prediction model that can be used as is for the recommender task or as a feature within a more complex system."}
{"_id": "b3bcbbe5b6a7f477b4277b3273f999721dba30ef", "title": "Developing multiple hypotheses in behavioral ecology", "text": "Researchers in behavioral ecology are increasingly turning to research methods that allow the simultaneous evaluation of hypotheses. This approach has great potential to increase our scientific understanding, but researchers interested in the approach should be aware of its long and somewhat contentious history. Also, prior to implementing multiple hypothesis evaluation, researchers should be aware of the importance of clearly specifying a priori hypotheses. This is one of the more difficult aspects of research based on multiple hypothesis evaluation, and we outline and provide examples of three approaches for doing so. Finally, multiple hypothesis evaluation has some limitations important to behavioral ecologists; we discuss two practical issues behavioral ecologists are likely to face."}
{"_id": "9746994d09b5bf6c40bee3693ee8678e191f84b8", "title": "The Sixth PASCAL Recognizing Textual Entailment Challenge", "text": "This paper presents the Fifth Recognizing Textual Entailment Challenge (RTE5). Following the positive experience of the last campaign, RTE-5 has been proposed for the second time as a track at the Text Analysis Conference (TAC). The structure of the RTE-5 Main Task remained unchanged, offering both the traditional two-way task and the threeway task introduced in the previous campaign. Moreover, a pilot Search Task was set up, consisting of finding all the sentences in a set of documents that entail a given hypothesis. 21 teams participated in the campaign, among which 20 in the Main Task (for a total of 54 runs) and 8 in the Pilot Task (for a total of 20 runs). Another important innovation introduced in this campaign was mandatory ablation tests that participants had to perform for all major knowledge resources employed by their systems."}
{"_id": "f1eb886a9e3d2a74cdce442bfdef4c0341d697c8", "title": "Mindfulness-Based Cognitive Therapy and the Adult ADHD Brain: A Neuropsychotherapeutic Perspective", "text": "Attention-deficit/hyperactivity disorder (ADHD) is a recognized serious mental disorder that often persists into adulthood. The symptoms and impairments associated with ADHD often cause significant mental suffering in affected individuals. ADHD has been associated with abnormal neuronal activity in various neuronal circuits, such as the dorsofrontostriatal, orbitofrontostriatal, and frontocerebellar circuits. Psychopharmacological treatment with methylphenidate hydrochloride is recommended as the first-line treatment for ADHD. It is assumed that medication ameliorates ADHD symptoms by improving the functioning of the brain areas affected in the condition. However, side effects, contraindications, or non-response can limit the effectiveness of a psychopharmacological treatment for ADHD. It is therefore necessary to develop non-pharmacological interventions that target neuronal mechanisms associated with the condition in the same way as pharmacological treatment. We think that mindfulness meditation employed as a neuropsychotherapeutic intervention could help patients with ADHD to regulate impaired brain functioning and thereby reduce ADHD symptoms. In this paper, we highlight the mechanisms of such mindfulness meditation, and thus provide a rationale for further research and treatment development from a neuropsychotherapeutic perspective. We conclude that mindfulness meditation employed as a neuropsychotherapeutic intervention in therapy is a promising treatment approach in ADHD."}
{"_id": "709df9cb0574f1ff3cb3302c904db38365040fa8", "title": "Image-based reconstruction of wire art", "text": "Objects created by connecting and bending wires are common in furniture design, metal sculpting, wire jewelry, etc. Reconstructing such objects with traditional depth and image based methods is extremely difficult due to their unique characteristics such as lack of features, thin elements, and severe self-occlusions. We present a novel image-based method that reconstructs a set of continuous 3D wires used to create such an object, where each wire is composed of an ordered set of 3D curve segments. Our method exploits two main observations: simplicity - wire objects are often created using only a small number of wires, and smoothness - each wire is primarily smoothly bent with sharp features appearing only at joints or isolated points. In light of these observations, we tackle the challenging image correspondence problem across featureless wires by first generating multiple candidate 3D curve segments and then solving a global selection problem that balances between image and smoothness cues to identify the correct 3D curves. Next, we recover a decomposition of such curves into a set of distinct and continuous wires by formulating a multiple traveling salesman problem, which finds smooth paths, i.e., wires, connecting the curves. We demonstrate our method on a wide set of real examples with varying complexity and present high-fidelity results using only 3 images for each object. We provide the source code and data for our work in the project website."}
{"_id": "52ed13d4b686fa236ca4e999b7ec533431dfbfee", "title": "On the Semantics of Self-Unpacking Malware Code \u2217", "text": "The rapid increase in attacks on software systems via malware such as viruses, worms, trojans, etc., has made it imperative to develop effective techniques for detecting and analyzing malware binaries. Such binaries are usually transmitted in packed or encrypted form, with the executable payload decrypted dynamically and then executed. In order to reason formally about their execution behavior, therefore, we need semantic descriptions that can capture this self-modifying aspect of their code. However, current approaches to the semantics of programs usually assume that the program code is immutable, which makes them inapplicable to self-unpacking malware code. This paper takes a step towards addressing this problem by describing a formal semantics for self-modifying code. We use our semantics to show how the execution of self-unpacking code can be divided naturally into a sequence of phases, and uses this to show how the behavior of a program can be characterized statically in terms of a program evolution graph. We discuss several applications of our work, including static unpacking and deobfuscation of encrypted malware and static cross-phase code analysis."}
{"_id": "17a4b08d28ca23e6a077abd427aa88f70fec8c3e", "title": "An Information Retrieval Approach For Automatically Constructing Software Libraries", "text": "Although software reuse presents clear advantages for programmer productivit.y and code reliability, it is not practiced enough. One of the reasons for the only moderate success of reuse is the lack of software libr'lries that facilitate the actual locating and understanding of reusable components. This paper dpscrihes II technology for automatically I\\Ssembling large softwar~ libraries that promote softwarp rpuse by helping the user locate the components closest to hfO'r/his neeels Softwarp libraries are aut.omatically assemblfO'd from a set. of Ilnorganizpd component.s by using informat.ion rptripval techniques. The constrllction of the library is done in !.wo steps. First, attribut.es ;up alJtornal.ically pxtracted from nat.urlll language docllmpntat.ion by IIsing a npIV inJexing scheme ba~fO'd an t.he nntinns of lexical affinities anci quantity of inforrnllt.ion. Then, a hierarchy for browsing is alJtomat.iclllly generaterl using a clustering techniq1le that draws only nn the informlltion provided by the attribulps. Thanks to the frfO'p-!.pxt inrlexing se-heme, tools following this approach can accept free-style nat'Jral language 'luprips. This tpChllOlogy h1\\.S bfO'en impll\"ment .. d in the GURU ~ystpm, whirh hrforms l!i% better on a random test set, while being milch le~s pxpensive to build than INFOEXl'LORF.R."}
{"_id": "f3e36b7e24b71129480321ecd8cfd81c8d5659b9", "title": "NOSEP: Nonoverlapping Sequence Pattern Mining With Gap Constraints", "text": "Sequence pattern mining aims to discover frequent subsequences as patterns in a single sequence or a sequence database. By combining gap constraints (or flexible wildcards), users can specify special characteristics of the patterns and discover meaningful subsequences suitable for their own application domains, such as finding gene transcription sites from DNA sequences or discovering patterns for time series data classification. Due to the inherent complexity of sequence patterns, including the exponential candidate space with respect to pattern letters and gap constraints, to date, existing sequence pattern mining methods are either incomplete or do not support the Apriori property because the support ratio of a pattern may be greater than that of its subpatterns. Most importantly, patterns discovered by these methods are either too restrictive or too general and cannot represent underlying meaningful knowledge in the sequences. In this paper, we focus on a nonoverlapping sequence pattern mining task with gap constraints, where a nonoverlapping sequence pattern allows sequence letters to be flexibly and maximally utilized for pattern discovery. A new Apriori-based nonoverlapping sequence pattern mining algorithm, NOSEP, is proposed. NOSEP is a complete pattern mining algorithm, which uses a specially designed data structure, Nettree, to calculate the exact occurrence of a pattern in the sequence. Experimental results and comparisons on biology DNA sequences, time series data, and Gazelle datasets demonstrate the efficiency of the proposed algorithm and the uniqueness of nonoverlapping sequence patterns compared to other methods."}
{"_id": "0bfb9311d006ed6b520f4afbd349b9bacca88000", "title": "A CMOS Transceiver for a Multistandard 13.56-MHz RFID Reader SoC", "text": "A CMOS transceiver for a multistandard 13.56-MHz radio-frequency identification reader system-on-a-chip (SoC) is designed and fabricated. The SoC consists of an RF/analog part for modulation/demodulation and a digital part for controlling the transceiver functionality. Prior to designing the integrated circuit, pre-experiments using discrete components and commercial tags are performed. With the results, overall functions and specifications are determined. For supporting multistandard, several blocks are designed with digital controls according to the standards. In the transmitter, a digitally controlled amplitude modulator for various modulation indexes and a power control circuit are adopted. In the receiver, a variable gain amplifier and a level-controllable comparator, which are also controlled digitally according to the standard, are introduced. The full transceiver SoC is implemented in the Chartered 0.18-\u00bfm CMOS technology. The measurement results of the implemented chip indicate that the designed transceiver operates in a multistandard mode."}
{"_id": "d9bfaf51cf9894657b814adfb342ff25948877e0", "title": "The contagious leader: impact of the leader's mood on the mood of group members, group affective tone, and group processes.", "text": "The present study examined the effects of leaders' mood on (a) the mood of individual group members, (b) the affective tone of groups, and (c) 3 group processes: coordination, effort expenditure, and task strategy. On the basis of a mood contagion model, the authors found that when leaders were in a positive mood, in comparison to a negative mood, (a) individual group members experienced more positive and less negative mood, and (b) groups had a more positive and a less negative affective tone. The authors also found that groups with leaders in a positive mood exhibited more coordination and expended less effort than did groups with leaders in a negative mood. Applied implications of the results are discussed."}
{"_id": "ed0a72c00b56a1c9fe9e9151211021f7d4ad4f47", "title": "A 3-Month, Randomized, Double-Blind, Placebo-Controlled Study Evaluating the Ability of an Extra-Strength Marine Protein Supplement to Promote Hair Growth and Decrease Shedding in Women with Self-Perceived Thinning Hair", "text": "An oral marine protein supplement (MPS) is designed to promote hair growth in women with temporary thinning hair (Viviscal Extra Strength; Lifes2good, Inc., Chicago, IL). This double-blind, placebo-controlled study assessed the ability of MPS to promote terminal hair growth in adult women with self-perceived thinning hair associated with poor diet, stress, hormonal influences, or abnormal menstrual cycles. Adult women with thinning hair were randomized to receive MPS (N = 30) or placebo (N = 30) twice daily for 90 days. Digital images were obtained from a 4\u2009cm(2) area scalp target area. Each subject's hair was washed and shed hairs were collected and counted. After 90 days, these measures were repeated and subjects completed Quality of Life and Self-Assessment Questionnaires. MPS-treated subjects achieved a significant increase in the number of terminal hairs within the target area (P < 0.0001) which was significantly greater than placebo (P < 0.0001). MPS use also resulted in significantly less hair shedding (P = 0.002) and higher total Self-Assessment (P = 0.006) and Quality of Life Questionnaires scores (P = 0.035). There were no reported adverse events. MPS promotes hair growth and decreases hair loss in women suffering from temporary thinning hair. This trial is registered with ClinicalTrials.gov Identifier: NCT02297360."}
{"_id": "e6b7c15bba47ec33771eda5e22c747a8a093b9b5", "title": "Digital Advertising: An Information Scientist's Perspective", "text": "Digital online advertising is a form of promotion that uses the Internet and World Wide Web for the express purpose of delivering marketing messages to attract customers. Examples of online advertising include text ads that appear on search engine results pages, banner ads, in-text ads, or Rich Media ads that appear on regular web pages, portals, or applications. Over the past 15 years online advertising, a $65 billion industry worldwide in 2009, has been pivotal to the success of the World Wide Web. That being said, the field of advertising has been equally revolutionized by the Internet, World Wide Web, and more recently, by the emergence of the social web, and mobile devices. This success has arisen largely from the transformation of the advertising industry from a low-tech, human intensive, \u201cMad Men\u201d way of doing work to highly optimized, quantitative, mathematical, computerand data-centric processes that enable highly targeted, personalized, performance based advertising. This chapter provides a clear and detailed overview of the technologies and business models that are transforming the field of online advertising primarily from statistical machine learning and information science perspectives."}
{"_id": "e936c90b2ca8f65dcad86ad4946ebb4bfaa8e909", "title": "Design on the low-leakage diode string for using in the power-rail ESD clamp circuits in a 0.35-/spl mu/m silicide CMOS process", "text": "A new design of the diode string with very low leakage current is proposed for use in the ESD clamp circuits across the power rails. By adding an NMOS-controlled lateral SCR (NCLSCR) device into the stacked diode string, the leakage current of this new diode string with six stacked diodes at 5 V (3.3 V) forward bias can be reduced to only 2.1 (1.07) nA at a temperature of 125/spl deg/C in a 0.35 /spl mu/m silicide CMOS process, whereas the previous designs have a leakage current in the order of mA. The total blocking voltage of this new design with NCLSCR can be linearly adjusted by changing the number of the stacked diodes in the diode string without causing latch-up danger across the power rails. From the experimental results, the human-body-model ESD level of the ESD clamp circuit with the proposed low-leakage diode string is greater than 8 kV in a 0.35 /spl mu/m silicide CMOS process by using neither ESD implantation nor the silicide-blocking process modifications."}
{"_id": "0911bcf6bfff20a84a56b9d448bcb3d72a1eb093", "title": "Zero-bias autoencoders and the benefits of co-adapting features", "text": "Regularized training of an autoencoder typically results in hidden unit biases that take on large negative values. We show that negative biases are a natural result of using a hidden layer whose responsibility is to both represent the input data and act as a selection mechanism that ensures sparsity of the representation. We then show that negative biases impede the learning of data distributions whose intrinsic dimensionality is high. We also propose a new activation function that decouples the two roles of the hidden layer and that allows us to learn representations on data with very high intrinsic dimensionality, where standard autoencoders typically fail. Since the decoupled activation function acts like an implicit regularizer, the model can be trained by minimizing the reconstruction error of training data, without requiring any additional regularization."}
{"_id": "f9b7cb13eee257a67a5a8049f22580152873c0a4", "title": "Global prevalence of glaucoma and projections of glaucoma burden through 2040: a systematic review and meta-analysis.", "text": "PURPOSE\nGlaucoma is the leading cause of global irreversible blindness. Present estimates of global glaucoma prevalence are not up-to-date and focused mainly on European ancestry populations. We systematically examined the global prevalence of primary open-angle glaucoma (POAG) and primary angle-closure glaucoma (PACG), and projected the number of affected people in 2020 and 2040.\n\n\nDESIGN\nSystematic review and meta-analysis.\n\n\nPARTICIPANTS\nData from 50 population-based studies (3770 POAG cases among 140,496 examined individuals and 786 PACG cases among 112 398 examined individuals).\n\n\nMETHODS\nWe searched PubMed, Medline, and Web of Science for population-based studies of glaucoma prevalence published up to March 25, 2013. Hierarchical Bayesian approach was used to estimate the pooled glaucoma prevalence of the population aged 40-80 years along with 95% credible intervals (CrIs). Projections of glaucoma were estimated based on the United Nations World Population Prospects. Bayesian meta-regression models were performed to assess the association between the prevalence of POAG and the relevant factors.\n\n\nMAIN OUTCOME MEASURES\nPrevalence and projection numbers of glaucoma cases.\n\n\nRESULTS\nThe global prevalence of glaucoma for population aged 40-80 years is 3.54% (95% CrI, 2.09-5.82). The prevalence of POAG is highest in Africa (4.20%; 95% CrI, 2.08-7.35), and the prevalence of PACG is highest in Asia (1.09%; 95% CrI, 0.43-2.32). In 2013, the number of people (aged 40-80 years) with glaucoma worldwide was estimated to be 64.3 million, increasing to 76.0 million in 2020 and 111.8 million in 2040. In the Bayesian meta-regression model, men were more likely to have POAG than women (odds ratio [OR], 1.36; 95% CrI, 1.23-1.52), and after adjusting for age, gender, habitation type, response rate, and year of study, people of African ancestry were more likely to have POAG than people of European ancestry (OR, 2.80; 95% CrI, 1.83-4.06), and people living in urban areas were more likely to have POAG than those in rural areas (OR, 1.58; 95% CrI, 1.19-2.04).\n\n\nCONCLUSIONS\nThe number of people with glaucoma worldwide will increase to 111.8 million in 2040, disproportionally affecting people residing in Asia and Africa. These estimates are important in guiding the designs of glaucoma screening, treatment, and related public health strategies."}
{"_id": "211b00d62a9299880fefbbb6251254cb685b7810", "title": "Fuzzy Logic Based-Map Matching Algorithm for Vehicle Navigation System in Urban Canyons", "text": "With the rapid progress in the development of wireless technology, Global Positioning System (GPS) based vehicle navigation systems are being widely deployed in automobiles to serve the location-based needs of users and for efficient traffic management. An essential process in vehicle navigation is to map match the position obtained from GPS (or/and other sensors) on a road network map. This process of map matching in turn helps in mitigating errors from navigation solution. GPS based vehicle navigation systems have difficulties in tracking vehicles in urban canyons due to poor satellite availability. High Sensitivity GPS (HS GPS) receivers can alleviate this problem by acquiring and tracking weak signals (and increasing the availability), but at the cost of high measurement noise and errors due to multipath and cross correlation. Position and velocity results in such conditions are typically biased and have unknown distributions. Thus filtering and other statistical methods are difficult to implement. Soft computing has replaced classical computing on many fronts where uncertainties are difficult to model. Fuzzy logic, based on fuzzy reasoning concepts, is one of the most widely used soft computational methods. In many circumstances, it can take noisy, imprecise input, to yield crisp (i.e. numerically accurate) output. Fuzzy logic can be applied effectively to map match the output from a HS GPS receiver in urban canyons because of its inherent tolerance to imprecise inputs. This paper describes a map matching algorithm based on fuzzy logic. The input of the system comes from a SiRF HS XTrac GPS receiver and a low cost gyro (Murata ENV-05G). The results show an improvement in tracking the vehicle in urban canyon conditions."}
{"_id": "87182a9e818ec5cc093c28cbba0ff32fc6bf40b0", "title": "Automatic design of low power CMOS buffer-chain circuit using differential evolutionary algorithm and particle swarm optimization", "text": "PSO and DE algorithms and its variants are used for the optimization of a buffer-chain circuit and the results of all the algorithms are compared in this literature. By testing these algorithms on different mathematical benchmark functions the best parameter values of buffer chain circuit are obtained in such a way that it reduces the error between simulated output and optimized output, hence giving the best circuit performance. Evolutionary algorithms are better in performance and speed than the classical methods. 130nm CMOS technology has been used in this work. With the help of these parameter values the circuit simulator gives the values of power consumption, symmetry, rise time and fall time, which are almost closer to the desired specification of the buffer chain circuit."}
{"_id": "0c37d48d2bb1bd6c63a9b5c57da37609d03900b0", "title": "Memory Interference as a Determinant of Language Comprehension", "text": "The parameters of the human memory system constrain the operation of language comprehension processes. In the memory literature, both decay and interference have been proposed as causes of forgetting; however, while there is a long history of research establishing the nature of interference effects in memory, the effects of decay are much more poorly supported. Nevertheless, research investigating the limitations of the human sentence processing mechanism typically focus on decay-based explanations, emphasizing the role of capacity, while the role of interference has received comparatively little attention. This paper reviews both accounts of difficulty in language comprehension by drawing direct connections to research in the memory domain. Capacity-based accounts are found to be untenable, diverging substantially from what is known about the operation of the human memory system. In contrast, recent research investigating comprehension difficulty using a retrieval-interference paradigm is shown to be wholly consistent with both behavioral and neuropsychological memory phenomena. The implications of adopting a retrieval-interference approach to investigating individual variation in language comprehension are discussed."}
{"_id": "47918c5bece63d7a6cd9574c37c2b17114a5f87e", "title": "Sentiment Classification through Semantic Orientation Using SentiWordNet", "text": "Sentiment analysis is the procedure by which information is extracted from the opinions, appraisals and emotions of people in regards to entities, events and their attributes. In decision making, the opinions of others have a significant effect on customers ease in making choices regards to online shopping, choosing events, products, entities. In this paper, a rule based domain independent sentiment analysis method is proposed. The proposed method classifies subjective and objective sentences from reviews and blog comments. The semantic score of subjective sentences is extracted from SentiWordNet to calculate their polarity as positive, negative or neutral based on the contextual sentence structure. The results show the effectiveness of the proposed method and it outperforms the machine learning methods. The proposed method achieves an accuracy of 87% at the feedback level and 83% at the sentence level for comments. [Aurangzeb khan, Muhammad Zubair Asghar, Shakeel Ahmad, Fazal Masud Kundi, Maria Qasim, Furqan. Sentiment Classification through Semantic Orientation Using SentiWordNet. Life Sci J 2014; 11(10):309-315] (ISSN: 1097-8135). http://www.lifesciencesite.com. 44"}
{"_id": "591565e123518cc04748649d0b028d5326eafc5f", "title": "Radar Signal Processing for Jointly Estimating Tracks and Micro-Doppler Signatures", "text": "The aim of the radar systems is to collect information about their surroundings. In many scenarios besides static targets there are numerous moving objects with very different characteristics, such as extent, movement behavior or micro-Doppler spread. It would be most desirable to have algorithms that extract all information on static and moving object automatically, without a system operator. In this paper, we present measurements conducted with a commercially available high-resolution multi-channel linear frequency-modulated continuous-wave radar and algorithms that do not only produce radar images but a description of the scenario on a higher level. After conventional spectrum estimation and thresholding, we present a clustering stage that combines individual detections and generates representations of each target individually. This stage is followed by a Kalman filter based multi-target tracking block. The tracker allows us to follow each target and collect its properties over time. With this method of jointly estimating tracks and characteristics of each individual target in a scenario, inputs for classifiers can be generated. Which, in turn, will be able to generate information that could be used for driver assistance or alarm trigger systems."}
{"_id": "803b70098fe8e6c84bb266e8f2abb23d785cc46b", "title": "The medial patellofemoral ligament: location of femoral attachment and length change patterns resulting from anatomic and nonanatomic attachments.", "text": "BACKGROUND\nIncompetence of the medial patellofemoral ligament (MPFL) is an integral factor in patellofemoral instability. Reconstruction of this structure is gaining increasing popularity. However, the natural behavior of the ligament is still not fully understood, and crucially, the correct landmark for femoral attachment of the MPFL at surgery is poorly defined.\n\n\nPURPOSE\nTo determine the length change pattern of the native MPFL, investigate the effect of nonanatomic femoral and differing patellar attachment sites on length changes, and recommend a reproducible femoral attachment site for undertaking anatomic MPFL reconstruction.\n\n\nSTUDY DESIGN\nDescriptive laboratory study.\n\n\nMETHODS\nEight cadaveric knees were dissected of skin and subcutaneous fat and mounted in a kinematics rig with the quadriceps tensioned. The MPFL length change patterns were measured for combinations of patellar and femoral attachments using a suture and displacement transducer. Three attachments were along the superomedial border of the patella, and 5 femoral attachments were at the MPFL center and 5 mm proximal, distal, anterior, and posterior to this point. Reproducibility of attachment sites was validated radiographically.\n\n\nRESULTS\nThe femoral attachment point, taking the anterior-posterior medial femoral condyle diameter to be 100%, was identified 40% from the posterior, 50% from the distal, and 60% from the anterior border of the medial femoral condyle. This point was most isometric, with a mean maximal length change to the central patellar attachment of 2.1 mm from 0\u00b0 to 110\u00b0 of knee flexion. The proximal femoral attachment resulted in up to 6.4 mm mean lengthening and the distal attachment up to 9.1 mm mean shortening through 0\u00b0 to 110\u00b0 of knee flexion, resulting in a significantly nonisometric graft (P < .05).\n\n\nCONCLUSION\nWe report the anatomic femoral and patellar MPFL graft attachments, with confirmation of the reproducibility of their location and resulting kinematic behavior. Nonanatomic attachments caused significant loss of isometry.\n\n\nCLINICAL RELEVANCE\nThe importance of an anatomically positioned MPFL reconstruction is highlighted, and an identifiable radiographic point for femoral tunnel position is suggested for use intraoperatively."}
{"_id": "87adf3d8f3f935dc982e91951dd5730fb1559d90", "title": "The Spatial Orienting paradigm: How to design and interpret spatial attention experiments", "text": "This paper is conceived as a guide that will describe the very well known Spatial Orienting paradigm, used to explore attentional processes in healthy individuals as well as in people suffering from psychiatric disorders and brain-damaged patients. The paradigm was developed in the late 1970s, and since then, it has been used in thousands of attentional studies. In this review, we attempt to describe, the paradigm for the na\u00eff reader, and explain in detail when is it used, which variables are usually manipulated, how to interpret its results, and how can it be adapted to different populations and methodologies. The main goal of this review is to provide a practical guide to researchers who have never used the paradigm that will help them design their experiments, as a function of their theoretical and experimental needs. We also focus on how to adapt the paradigm to different technologies (such as event-related potentials, functional resonance imaging, or transcranial magnetic stimulation), and to different populations by presenting an example of its use in brain-damaged patients."}
{"_id": "a494191eed61da992579d67412d504f5b3104816", "title": "A gesture-free geometric approach for mid-air expression of design intent in 3D virtual pottery", "text": "The advent of depth cameras has enabled mid-air interactions for shape modeling with bare hands. Typically, these interactions employ a finite set of pre-defined hand gestures to allow users to specify modeling operations in virtual space. However, human interactions in real world shaping processes (such as pottery or sculpting) are complex, iterative, and continuous. In this paper, we show that the expression of user intent in shaping processes can be derived from the geometry of contact between the hand and the manipulated object. Specifically, we describe the design and evaluation of a geometric interaction technique for bare-hand mid-air virtual pottery. We model the shaping of a pot as a gradual and progressive convergence of the pot\u2019s profile to the shape of the user\u2019s hand represented as a point-cloud (PCL). Thus, a user does not need to learn, know, or remember any gestures to interact with our system. Our choice of pottery simplifies the geometric representation, allowing us to systematically study how users use their hands and fingers to express the intent of deformation during a shaping process. Our evaluations demonstrate that it is possible to enable users to express their intent for shape deformation without the need for a fixed set of gestures for clutching and deforming a shape."}
{"_id": "23ec90b7f51ebb1a290a760ed9b9ebfe8ba68eb6", "title": "A comparative study of differential evolution variants for global optimization", "text": "In this paper, we present an empirical comparison of some Differential Evolution variants to solve global optimization problems. The aim is to identify which one of them is more suitable to solve an optimization problem, depending on the problem's features and also to identify the variant with the best performance, regardless of the features of the problem to be solved. Eight variants were implemented and tested on 13 benchmark problems taken from the specialized literature. These variants vary in the type of recombination operator used and also in the way in which the mutation is computed. A set of statistical tests were performed in order to obtain more confidence on the validity of the results and to reinforce our discussion. The main aim is that this study can help both researchers and practitioners interested in using differential evolution as a global optimizer, since we expect that our conclusions can provide some insights regarding the advantages or limitations of each of the variants studied."}
{"_id": "8f0c8df7a81d53d7227c2eea6552c9cd4b1a846d", "title": "An AES Smart Card Implementation Resistant to Power Analysis Attacks", "text": "In this article we describe an efficient AES software implementation that is well suited for 8-bit smart cards and resistant against power analysis attacks. Our implementation masks the intermediate results and randomizes the sequence of operations at the beginning and the end of the AES execution. Because of the masking, it is secure against simple power analysis attacks, template attacks and first-order DPA attacks. Due to the combination of masking and randomization, it is resistant against higher-order DPA attacks. Resistant means that a large number of measurements is required for a successful attack. This expected number of measurements is tunable. The designer can choose the amount of randomization and thereby increase the number of measurements. This article also includes a practical evaluation of the countermeasures. The results prove the theoretical assessment of the countermeasures to be correct."}
{"_id": "27f9b805de1f125273a88786d2383621e60c6094", "title": "Approximating Kinematics for Tracked Mobile Robots", "text": "In this paper we propose a kinematic approach for tracked mobile robots in order to improve motion control and pose estimation. Complex dynamics due to slippage and track\u2013soil interactions make it difficult to predict the exact motion of the vehicle on the basis of track velocities. Nevertheless, real-time computations for autonomous navigation require an effective kinematics approximation without introducing dynamics in the loop. The proposed solution is based on the fact that the instantaneous centers of rotation (ICRs) of treads on the motion plane with respect to the vehicle are dynamics-dependent, but they lie within a bounded area. Thus, optimizing constant ICR positions for a particular terrain results in an approximate kinematic model for tracked mobile robots. Two different approaches are presented for off-line estimation of kinematic parameters: (i) simulation of the stationary response of the dynamic model for the whole velocity range of the vehicle; (ii) introduction of an experimental setup so that a genetic algorithm can produce the model from actual sensor readings. These methods have been evaluated for on-line odometric computations and low-level motion control with the Auriga\u03b1 mobile robot on a hard-surface flat soil at moderate speeds. KEY WORDS\u2014tracked vehicles, kinematic control, mobile robotics, parameter identification, dynamics simulation"}
{"_id": "656dce6c2b7315518cf0aea5d5b2500f869ab223", "title": "Practical Stabilization of a Skid-steering Mobile Robot - A Kinematic-based Approach", "text": "This paper presents kinematic control problem of skid-steering mobile robot using practical smooth and time-varying stabilizer. The stability result is proved using Lyapunov analysis and takes into account both input signal saturation and uncertainty of kinematics. In order to ensure stable motion of the robot the condition of permissible velocities is formulated according to dynamic model and wheel-surface interaction. Theoretical considerations are illustrated by simulation results"}
{"_id": "760c81c0a19f8358dde38691b81fe2f20c829b44", "title": "Experimental kinematics for wheeled skid-steer mobile robots", "text": "This work aims at improving real-time motion control and dead-reckoning of wheeled skid-steer vehicles by considering the effects of slippage, but without introducing the complexity of dynamics computations in the loop. This traction scheme is found both in many off-the-shelf mobile robots due to its mechanical simplicity and in outdoor applications due to its maneuverability. In previous works, we reported a method to experimentally obtain an optimized kinematic model for skid-steer tracked vehicles based on the boundedness of the instantaneous centers of rotation (ICRs) of treads on the motion plane. This paper provides further insight on this method, which is now proposed for wheeled skid-steer vehicles. It has been successfully applied to a popular research robotic platform, pioneer P3-AT, with different kinds of tires and terrain types."}
{"_id": "7df9117c67587f516fd4b13bd9df88aff9ab79b3", "title": "Adaptive Trajectory Tracking Control of Skid-Steered Mobile Robots", "text": "Skid-steered mobile robots have been widely used for terrain exploration and navigation. In this paper, we present an adaptive trajectory control design for a skid-steered wheeled mobile robot. Kinematic and dynamic modeling of the robot is first presented. A pseudo-static friction model is used to capture the interaction between the wheels and the ground. An adaptive control algorithm is designed to simultaneously estimate the wheel/ground contact friction information and control the mobile robot to follow a desired trajectory. A Lyapunov-based convergence analysis of the controller and the estimation of the friction model parameter are presented. Simulation and preliminary experimental results based on a four-wheel robot prototype are demonstrated for the effectiveness and efficiency of the proposed modeling and control scheme"}
{"_id": "9d9fc59f80f41915793e7f59895c02dfa6c1e5a9", "title": "Trajectory Tracking Control of a Four-Wheel Differentially Driven Mobile Robot", "text": "We consider the trajecto ry tracking control problem for a 4-wheel differentially driven mobile robot moving on an outdoor terrain. A dynamic model is presented accounting for the effects of wheel skidding. A model-based nonlinear controller is designed, following the dynamic feedback linearization paradigm. A n operational nonholonomic constraint is added at this stage, so as to obtain a predictable behavior for the instantaneous center of rotation thus preventing excessive skidding. The controller is then robustijied, using conventional linear techniques, against uncertainty in the soil parameters at the ground-wheel contact. Simulation results show the good performance in tracking spline-type trajectories on a virtual terrain with varying characteristics."}
{"_id": "872331a88e2aa40b9a1b64c1a9c2ea6577c3fe44", "title": "Impact of probing procedure on flip chip reliability", "text": "Probe-after-bump is the primary probing procedure for flip chip technology, since it does not directly contact the bump pad, and involves a preferred under bump metallurgy (UBM) step coverage on the bump pads. However, the probe-after-bump procedure suffers from low throughputs and high cost. It also delays the yield feedback to the fab, and makes difficult clarification of the accountability of the low yield bumped wafer between the fab and the bumping house. The probe-before-bump procedure can solve these problems, but the probing tips may over-probe or penetrate the bump pads, leading to poor UBM step coverage, due to inadequate probing conditions or poor probing cards. This work examines the impact of probing procedure on flip chip reliability, using printing and electroplating bumpings on aluminum and copper pads. Bump height, bump shear strength, die shear force, UBM step coverage, and reliability testing are used to determine the influence of probing procedure on flip chip reliability. The experimental results reveal that bump quality and reliability test in the probe-before-bump procedure, under adequate probing conditions, differ slightly from the corresponding items in the probe-after-bump procedure. UBM gives superior step coverage of probe marks in both probe-before-bump and probe-after-bump procedures, implying that UBM achieves greater adhesion and barrier function between the solder bump and the bump pad. Both printing and electroplating bump processes slightly influence all evaluated items. The heights of probe marks on the copper pads are 40\u201360% lower than those on the aluminum pads, indicating that the copper pad enhances UBM step coverage. This finding reveals that adequate probing conditions of the probe-before-bump procedure are suited to sort flip chip wafers and do not significantly affect bump height, bump shear strength, die shear force, or flip chip reliability. 2002 Elsevier Science Ltd. All rights reserved."}
{"_id": "94453d8f060a5968d30f9d5b68f3ee557cb69078", "title": "Anomaly Detection with Attribute Conflict Identification in Bank Customer Data", "text": "In commercial banks, data centers often integrates different data sources, which represent complex and independent business systems. Due to the inherent data variability and measurement or execution errors, there may exist some abnormal customer records (data). Existing automatic abnormal customer detection methods are outlier detection which focuses on the differences between customers, and it ignores the other possible abnormal customers caused by the inner features confliction of each customer. In this paper, we designed a method to identify abnormal customer information whose inner attributes are conflicting (confliction detection). We integrate the outlier detection and the confliction identification techniques together, as the final abnormality detection. This can provide a complete and accurate support of customer data for commercial bank's decision making. Finally, we have performed experiments on a dataset from a Chinese commercial bank to demonstrate the effectiveness of our method."}
{"_id": "ae0f81e746cb41562970711c94b53f9a53bb466d", "title": "Subcutaneous penile vein thrombosis (Penile Mondor's Disease): pathogenesis, diagnosis, and therapy.", "text": "OBJECTIVES\nIn international studies, only a few data are available on subcutaneous penile vein thrombosis. The pathogenesis is unknown, and no general recommendation exists regarding therapy.\n\n\nMETHODS\nA total of 25 patients with the clinical picture of a \"superficial penile vein thrombosis\" were treated at our policlinic. All patients had noted sudden and almost painless indurations on the penile dorsal surface. The extent of the thrombosis varied. Detailed anamnesis, ultrasonography, and routine laboratory tests were performed for all patients, knowing that primary therapy was conservative.\n\n\nRESULTS\nNo patient indicated any pain. Some reported a feeling of tension in the area of the thrombosis. In all patients, the thrombosis occurred in the dorsal penis shaft. It was close to the sulcus coronarius in 21 patients, near the penis root in 3, and in the entire penis shaft in 1 patient. The length of the thrombotic vein was between 2 and 4 cm. The ultrasound results were similar for all patients. The primary treatment was conservative for all patients. Recovery was achieved in more than 92% of cases (23 of 25 patients) using conservative therapy, which consisted of local dressing with heparin ointment (10,000 IU) and oral application of an antiphlogistic for 14 days. In 2 cases, thrombectomy was necessary.\n\n\nCONCLUSIONS\nExtended imaging diagnosis does not improve the evaluation of the extent of a superficial penile vein thrombosis. Conservative primary therapy consisting of heparin ointment and oral application of antiphlogistics is sufficient. If the thrombosis persists after conservative therapy, surgery is indicated."}
{"_id": "bac1b676aa3a97218afcfc81ef6d4e0015251150", "title": "Control of Inertial Stabilization Systems Using Robust Inverse Dynamics Control and Adaptive Control", "text": "This paper presents an advanced controller design for an Inertial stabilization system. The system has a 2-DOF gimbal which will be attached to an aviation vehicle. Due to dynamics modeling errors, and friction and disturbances from the outside environment, the tracking accuracy of an airborne gimbal may severely degrade. So, an advanced controller is needed. Robust inverse dynamics control and the adaptive control are used in the inner loop or gimbal servo-system to control the gimbal motion. An indirect line of sight (LOS) stabilization will be controlled by the outer loop controller. A stabilizer is mounted on the base of the system to measure base rate and orientation of the gimbal in reference to the fixed reference frame. It can withstand high angular slew rates. The experimental results illustrate that the proposed controllers are capable enough to overcome the disturbances and the impact of LOS disturbances on the tracking performance."}
{"_id": "25a3a354955f1a1782c9c817edb93b7303672291", "title": "How well developed are altmetrics? A cross-disciplinary analysis of the presence of \u2018alternative metrics\u2019 in scientific publications", "text": "In this paper an analysis of the presence and possibilities of altmetrics for bibliometric and performance analysis is carried out. Using the web based tool Impact Story, we collected metrics for 20,000 random publications from the Web of Science. We studied both the presence and distribution of altmetrics in the set of publications, across fields, document types and over publication years, as well as the extent to which altmetrics correlate with citation indicators. The main result of the study is that the altmetrics source that provides the most metrics is Mendeley, with metrics on readerships for 62.6\u00a0% of all the publications studied, other sources only provide marginal information. In terms of relation with citations, a moderate spearman correlation (r\u00a0=\u00a00.49) has been found between Mendeley readership counts and citation indicators. Other possibilities and limitations of these indicators are discussed and future research lines are outlined."}
{"_id": "30cd39388b5c1aae7d8153c0ab9d54b61b474ffe", "title": "Deep Recurrent Regression for Facial Landmark Detection", "text": "We propose a novel end-to-end deep architecture for face landmark detection, based on a deep convolutional and deconvolutional network followed by carefully designed recurrent network structures. The pipeline of this architecture consists of three parts. Through the first part, we encode an input face image to resolution-preserved deconvolutional feature maps via a deep network with stacked convolutional and deconvolutional layers. Then, in the second part, we estimate the initial coordinates of the facial key points by an additional convolutional layer on top of these deconvolutional feature maps. In the last part, by using the deconvolutional feature maps and the initial facial key points as input, we refine the coordinates of the facial key points by a recurrent network that consists of multiple long short-term memory components. Extensive evaluations on several benchmark data sets show that the proposed deep architecture has superior performance against the state-of-the-art methods."}
{"_id": "95f05fa558ae1a81e54c555b234dd54fcea98830", "title": "Adaptive hypertext navigation based on user goals and context", "text": "Hypertext systems allow flexible access to topics of information, but this flexibility has disadvantages. Users often become lost or overwhelmed by choices. An adaptive hypertext system can overcome these disadvantages by recommending information to users based on their specific information needs and preferences. Simple associative matrices provide an effective way of capturing these user preferences. Because the matrices are easily updated, they support the kind of dynamic learning required in an adaptive system. HYPERFLEX, a prototype of an adaptive hypertext system that learns, is described. Informal studies with HYPERFLEX clarify the circumstances under which adaptive systems are likely to be useful, and suggest that HYPERFLEX can reduce time spent searching for information by up to 40%. Moreover, these benefits can be obtained with relatively little effort on the part of hypertext authors or users. The simple models underlying HYPERFLEX's performance may offer a general and useful alternative to more sophisticated modelling techniques. Conditions under which these models, and similar adaptation techniques, might be most useful are discussed."}
{"_id": "a501de74e9341b326845218ba0891053f3331e25", "title": "Unfriending on Facebook: Friend Request and Online/Offline Behavior Analysis", "text": "Objectives: Determine the role of the friend request in unfriending decisions. Find factors in unfriending decisions and find differences in the perception of online and offline behaviors that vary depending on the unfriending decision. Method: Survey research conducted online. 690 surveys about unfriending were analyzed using exploratory statistical techniques. Results: The research results show that the initiator of the friend request has more than their expected share of unfriends compared to those who receive the friend request. There are online and offline factors for unfriending decisions; the research identified six constructs to evaluate unfriending decisions. There are 4 components for online behaviors (unimportant/frequent posts, polarizing posts, inappropriate posts and everyday life posts) and 2 offline components (disliked behavior and changes in the relationship). Survey respondents who said they unfriend for online reasons were more likely to agree that the person posted too frequently about unimportant topics, polarizing topics, and inappropriate topics compared to those who unfriended for offline reasons."}
{"_id": "0b61a17906637ece5a9c5e7e3e6de93378209706", "title": "Semantics of context-free languages", "text": "\u201cMeaning\u201d may be assigned to a string in a context-free language by defining \u201cattributes\u201d of the symbols in a derivation tree for that string. The attributes can be defined by functions associated with each production in the grammar. This paper examines the implications of this process when some of the attributes are \u201csynthesized\u201d, i.e., defined solely in terms of attributes of thedescendants of the corresponding nonterminal symbol, while other attributes are \u201cinherited\u201d, i.e., defined in terms of attributes of theancestors of the nonterminal symbol. An algorithm is given which detects when such semantic rules could possibly lead to circular definition of some attributes. An example is given of a simple programming language defined with both inherited and synthesized attributes, and the method of definition is compared to other techniques for formal specification of semantics which have appeared in the literature."}
{"_id": "a5a4db6b066d58c1281d1e83be821216d22696f3", "title": "TweetCred: A Real-time Web-based System for Assessing Credibility of Content on Twitter", "text": "During large scale events, a large volume of content is posted on Twitter, but not all of this content is trustworthy. The presence of spam, advertisements, rumors and fake images reduces the value of information collected from Twitter, especially during sudden-onset crisis events where information from other sources is scarce. In this research work, we describe various facets of assessing the credibility of usergenerated content on Twitter during large scale events, and develop a novel real-time system to assess the credibility of tweets. Firstly, we develop a semi-supervised ranking model using SVM-rank for assessing credibility, based on training data obtained from six high-impact crisis events of 2013. An extensive set of forty-five features is used to determine the credibility score for each of the tweets. Secondly, we develop and deploy a system\u2013TweetCred\u2013in the form of a browser extension, a web application and an API at the link: http://twitdigest.iiitd.edu.in/TweetCred/. To the best of our knowledge, this is the first research work to develop a practical system for credibility on Twitter and evaluate it with real users. TweetCred was installed and used by 717 Twitter users within a span of three weeks. During this period, a credibility score was computed for more than 1.1 million unique tweets. Thirdly, we evaluated the real-time performance of TweetCred , observing that 84% of the credibility scores were displayed within 6 seconds. We report on the positive feedback that we received from the system\u2019s users and the insights we gained into improving the system for future iterations."}
{"_id": "11fa695a4e6b0f20a396edc4010d4990c8d29fe9", "title": "Monte-Carlo Tree Search: A New Framework for Game AI", "text": "Classic approaches to game AI require either a high quality of domain knowledge, or a long time to generate effective AI behaviour. These two characteristics hamper the goal of establishing challenging game AI. In this paper, we put forward Monte-Carlo Tree Search as a novel, unified framework to game AI. In the framework, randomized explorations of the search space are used to predict the most promising game actions. We will demonstrate that Monte-Carlo Tree Search can be applied effectively to (1) classic board-games, (2) modern board-games, and (3) video games."}
{"_id": "62cb571ad1d67b3b46409f0c3830558b076b378c", "title": "Low-Cost Wideband and High-Gain Slotted Cavity Antenna Using High-Order Modes for Millimeter-Wave Application", "text": "A novel single-fed low-cost wideband and high-gain slotted cavity antenna based on substrate integrated waveguide (SIW) technology using high-order cavity modes is demonstrated in this paper. High-order resonant modes (TE130, TE310, TE330) inside the cavity are simply excited by a coaxial probe which is located at the center of the antenna. Energy is coupled out of the cavity by a 3 \u00d7 3 slot array etched on the top surface of the cavity. Two antennas with different polarizations are designed and tested. Measured results show that the linearly polarized prototype achieves an impedance bandwidth (|S11| <; -10 dB) of > 26% (28 to 36.6 GHz), and a 1-dB gain bandwidth of 14.1% (30.3 to 34.9 GHz). In addition, a measured maximum gain of 13.8 dBi and radiation efficiency of 92% are obtained. To generate circularly polarized radiation, a rotated dipole array is placed in front of the proposed linearly polarized antenna. Measured results show that the circularly polarized antenna exhibits a common bandwidth (10-dB return loss bandwidth, 3-dB axial ratio bandwidth, and 1-dB gain bandwidth) of 11%."}
{"_id": "dae92eb758f00f3800c53ce4dbb74da599ae9949", "title": "Medical Concept Embeddings via Labeled Background Corpora", "text": "In recent years, we have seen an increasing amount of interest in low-dimensional vector representations of words. Among other things, these facilitate computing word similarity and relatedness scores. The most well-known example of algorithms to produce representations of this sort are the word2vec approaches. In this paper, we investigate a new model to induce such vector spaces for medical concepts, based on a joint objective that exploits not only word co-occurrences but also manually labeled documents, as available from sources such as PubMed. Our extensive experimental analysis shows that our embeddings lead to significantly higher correlations with human similarity and relatedness assessments than previous work. Due to the simplicity and versatility of vector representations, these findings suggest that our resource can easily be used as a drop-in replacement to improve any systems relying on medical concept similarity measures."}
{"_id": "fa1dc30a59e39cf48025ae039b0d56497dd44224", "title": "Exploring Ways To Mitigate Sensor-Based Smartphone Fingerprinting", "text": "Modern smartphones contain motion sensors, such as accelerometers and gyroscopes. These sensors have many useful applications; however, they can also be used to uniquely identify a phone by measuring anomalies in the signals, which are a result from manufacturing imperfections. Such measurements can be conducted surreptitiously in the browser and can be used to track users across applications, websites, and visits. We analyze techniques to mitigate such device fingerprinting either by calibrating the sensors to eliminate the signal anomalies, or by adding noise that obfuscates the anomalies. To do this, we first develop a highly accurate fingerprinting mechanism that combines multiple motion sensors and makes use of (inaudible) audio stimulation to improve detection. We then collect measurements from a large collection of smartphones and evaluate the impact of calibration and obfuscation techniques on the classifier accuracy."}
{"_id": "fc24d32ab6acd5f1d2c478a6d0597c85afb28feb", "title": "Yelp Dataset Challenge: Review Rating Prediction", "text": "Review websites, such as TripAdvisor and Yelp, allow users to post online reviews for various businesses, products and services, and have been recently shown to have a significant influence on consumer shopping behaviour. An online review typically consists of free-form text and a star rating out of 5. The problem of predicting a user\u2019s star rating for a product, given the user\u2019s text review for that product, is called Review Rating Prediction and has lately become a popular, albeit hard, problem in machine learning. In this paper, we treat Review Rating Prediction as a multi-class classification problem, and build sixteen different prediction models by combining four feature extraction methods, (i) unigrams, (ii) bigrams, (iii) trigrams and (iv) Latent Semantic Indexing, with four machine learning algorithms, (i) logistic regression, (ii) N\u00e4\u0131ve Bayes classification, (iii) perceptrons, and (iv) linear Support Vector Classification. We analyse the performance of each of these sixteen models to come up with the best model for predicting the ratings from reviews. We use the dataset provided by Yelp for training and testing the models."}
{"_id": "b4173ba51b02f530e196560c3d17bbd0c4ed131d", "title": "Prospects of encoding Java source code in XML", "text": "Currently, the only standard format for representing Java source code is plain text-based. This paper explores the prospects of using Extensible Markup Language (XML) for this purpose. XML enables the leverage of tools and standards more powerful than those available for plain-text formats, while retaining broad accessibility. The paper outlines the potential benefits of future XML grammars that would allow for improved code structure and querying possibilities; code extensions, construction, and formatting; and referencing parts of code. It also introduces the concept of grammar levels and argues for the inclusion of several grammar levels into a common framework. It discusses conversions between grammars and some practical grammar design issues. Keywords\u2014XML, Java, source code, parsing, code formatting"}
{"_id": "12bab74396ae0b76ac91f8c022866e9785aa48a3", "title": "Semantic integration of UML class diagram with semantic validation on segments of mappings", "text": "Recently, attention has focused on the software development, specially by different teams that are geographically distant to support collaborative work. Management, description and modeling in such collaborative approach are through several tools and techniques based on UML models. It is now supported by a large number of tools. Most of these systems have the ability to compare different UML models, assist developers, designers and also provide operations for the merging and integration, to produce a coherent model. The contribution in this article is both to integrate a set of UML class diagrams using mappings that are result of alignment and assist designers and developers in the integration. In addition, we will present a detail integration of UML models with the validation of mappings between them. Such validation helps to achieve correct, consistent and coherent integrated model."}
{"_id": "096cc955e19446c3445e331c62d62897833d3e46", "title": "On partial least squares in head pose estimation: How to simultaneously deal with misalignment", "text": "Head pose estimation is a critical problem in many computer vision applications. These include human computer interaction, video surveillance, face and expression recognition. In most prior work on heads pose estimation, the positions of the faces on which the pose is to be estimated are specified manually. Therefore, the results are reported without studying the effect of misalignment. We propose a method based on partial least squares (PLS) regression to estimate pose and solve the alignment problem simultaneously. The contributions of this paper are two-fold: 1) we show that the kernel version of PLS (kPLS) achieves better than state-of-the-art results on the estimation problem and 2) we develop a technique to reduce misalignment based on the learned PLS factors."}
{"_id": "3004d634eb462c2d8fc039f99713e9400a58dced", "title": "Reinforcement Learning and Approximate Dynamic Programming for Feedback Control", "text": "reinforcement learning and approximate dynamic programming for feedback control are a good way to achieve details about operating certainproducts. Many products that you buy can be obtained using instruction manuals. These user guides are clearlybuilt to give step-by-step information about how you ought to go ahead in operating certain equipments. Ahandbook is really a user's guide to operating the equipments. Should you loose your best guide or even the productwould not provide an instructions, you can easily obtain one on the net. You can search for the manual of yourchoice online. Here, it is possible to work with google to browse through the available user guide and find the mainone you'll need. On the net, you'll be able to discover the manual that you might want with great ease andsimplicity"}
{"_id": "9b8ad31d762a64f944a780c13ff2d41b9b4f3ab3", "title": "Toward a Rational Choice Process Theory of Internet Scamming: The Offender's Perspective", "text": "Internet fraud scam is a crime enabled by the Internet to swindle Internet users. The global costs of these scams are in the billions of US dollars. Existing research suggests that scammers maximize their economic gain. Although this is a plausible explanation, since the idea of the scam is to fool people to send money, this explanation alone, cannot explain why individuals become Internet scammers. An equally important, albeit unexplored riddle, is the question of what strategies Internet scammers adopt to perform the act. As a first step to address these gaps, we interviewed five Internet scammers in order to develop a rational choice process theory of Internet scammers\u2019 behavior. The initial results suggest that an interplay of socioeconomic and dynamic thinking processes explains why individuals drift into Internet scamming. Once an individual drifts into Internet scamming, a successful scam involves two processes: persuasive strategy and advance fee strategy."}
{"_id": "222d8b2803f9cedf0da0b454c061c0bb46384722", "title": "Catfish Binary Particle Swarm Optimization for Feature Selection", "text": "The feature selection process constitutes a commonly encountered problem of global combinatorial optimization. This process reduces the number of features by removing irrelevant, noisy, and redundant data, thus resulting in acceptable classification accuracy. Feature selection is a preprocessing technique with great importance in the fields of data analysis and information retrieval processing, pattern classification, and data mining applications. This paper presents a novel optimization algorithm called catfish binary particle swarm optimization (CatfishBPSO), in which the so-called catfish effect is applied to improve the performance of binary particle swarm optimization (BPSO). This effect is the result of the introduction of new particles into the search space (\u201ccatfish particles\u201d), which replace particles with the worst fitness by the initialized at extreme points of the search space when the fitness of the global best particle has not improved for a number of consecutive iterations. In this study, the K-nearest neighbor (K-NN) method with leave-oneout cross-validation (LOOCV) was used to evaluate the quality of the solutions. CatfishBPSO was applied and compared to six classification problems taken from the literature. Experimental results show that CatfishBPSO simplifies the feature selection process effectively, and either obtains higher classification accuracy or uses fewer features than other feature selection methods."}
{"_id": "511e6318772f3bc4f6f192a96ff0501a1d1955f4", "title": "Powering 3 Dimensional Microrobots : Power Density Limitations \u2217", "text": "Many types of electrostatic and electromagnetic microactuators have been developed. It is important to understand the fundamental performance limitations of these actuators for use in micro-robotic systems. The most important consideration for micro mobile robots is the effective power density of the actuator. As inertia and gravitional forces become less significant for small robots, typical metrics for macro-robots, such as torque-to-weight ratio, are not appropriate. A very significant problem with micro-actuators for robotics is the need for efficient transmissions to obtain large forces and torques at low speeds from inherently high speed low force actuators."}
{"_id": "d784ed9d4538b731663990131568e3fbc4cb96ee", "title": "FPGA based Time-to-Digital Converter", "text": "High-resolution Time-to-Digital converter (TDC) is one of the crucial blocks in Highenergy nuclear physics time of flight experiments. The contemporary time interval measurement techniques rely on employing methods like Time-to-Amplitude Conversion, Vernier method, Delay Locked Loops (DLL), Tapped Delay Lines (TDL), Differential Delay Lines (DDL) etc. Recently, FPGA based TDC designs with merits of low cost, fast development cycle and re-programmability are reported [1]. The TDC implementation in FPGA have design challenges and issues like, lack of direct control over the propagation delays, unpredictable Place & Route (P&R) delays and delay variations due to Process, Voltage & Temperature (PVT) variations. In the TDC design presented here, the resolution below the minimum gate delay is achieved by employing differential (Vernier oscillator) technique. The predictable P&R is achieved by manual P&R of the critical elements and avoiding the use of digital synthesis tools. Further, in order to compensate for the delay variations due to PVT, a calibration methodology is developed and implemented. By implementing the design in a Flash-based FPGA, the low power design objective is achieved."}
{"_id": "8828177ef8c00e893164e0dd396f9c0d78a96d7a", "title": "Looking good: factors affecting the likelihood of having cosmetic surgery", "text": "The present study examined various factors associated with the likelihood of having cosmetic surgery in a community sample of Austrian participants. One-hundred and sixty-eight women and 151 men completed a questionnaire measuring how likely they were to consider common cosmetic procedures. The results showed that women were more likely than men to consider most cosmetic procedures. Path analysis revealed that personal experience of having had cosmetic surgery was a significant predictor of future likelihood, while media exposure (viewing advertisements or television programs, or reading articles about cosmetic surgery) mediated the influence of vicarious experience and sex. These results are discussed in relation to previous work examining the factors associated with the likelihood of having cosmetic surgery."}
{"_id": "1db2e600b8a386c559b0fe0caf5c472aef95482c", "title": "A Case for NOW (Networks of Workstations) - Abstract", "text": "In this paper, we argue that because of recent technology advances, networks of workstations (NOWs) are poised to become the primary computing infrastructure for science and engineering, from low end interactive computing to demanding sequential and parallel applications. We identify three opportunities for NOWs that will benefit endusers: dramatically improving virtual memory and file system performance by using the aggregate DRAM of a NOW as a giant cache for disk; achieving cheap, highly available, and scalable file storage by using redundant arrays of workstation disks, using the LAN as the I/O backplane; and finally, multiple CPUs for parallel computing. We describe the technical challenges in exploiting these opportunities \u2013 namely, efficient communication hardware and software, global coordination of multiple workstation operating systems, and enterprise-scale network file systems. We are currently building a 100-node NOW prototype to demonstrate that practical solutions exist to these technical challenges."}
{"_id": "04caa1a55b12d5f3830ed4a31c4b47921a3546f2", "title": "Discriminative Embeddings of Latent Variable Models for Structured Data", "text": "Kernel classifiers and regressors designed for structured data, such as sequences, trees and graphs, have significantly advanced a number of interdisciplinary areas such as computational biology and drug design. Typically, kernels are designed beforehand for a data type which either exploit statistics of the structures or make use of probabilistic generative models, and then a discriminative classifier is learned based on the kernels via convex optimization. However, such an elegant two-stage approach also limited kernel methods from scaling up to millions of data points, and exploiting discriminative information to learn feature representations. We propose, structure2vec, an effective and scalable approach for structured data representation based on the idea of embedding latent variable models into feature spaces, and learning such feature spaces using discriminative information. Interestingly, structure2vec extracts features by performing a sequence of function mappings in a way similar to graphical model inference procedures, such as mean field and belief propagation. In applications involving millions of data points, we showed that structure2vec runs 2 times faster, produces models which are 10, 000 times smaller, while at the same time achieving the state-of-the-art predictive performance."}
{"_id": "47da821786eea4e6d9d69cc5aa1f248e958ea6e4", "title": "Searching for processes and threads in Microsoft Windows memory dumps", "text": "Current tools to analyze memory dumps of systems running Microsoft Windows usually build on the concept of enumerating lists maintained by the kernel to keep track of processes, threads and other objects. Therefore they will frequently fail to detect objects that are already terminated or which have been hidden by Direct Kernel Object Manipulation"}
{"_id": "54ee58c3623b7fdf8b7e9e29355a24478f574eea", "title": "GUITAR: Piecing Together Android App GUIs from Memory Images", "text": "An Android app's graphical user interface (GUI) displays rich semantic and contextual information about the smartphone's owner and app's execution. Such information provides vital clues to the investigation of crimes in both cyber and physical spaces. In real-world digital forensics however, once an electronic device becomes evidence most manual interactions with it are prohibited by criminal investigation protocols. Hence investigators must resort to \"image-and-analyze\" memory forensics (instead of browsing through the subject phone) to recover the apps' GUIs. Unfortunately, GUI reconstruction is still largely impossible with state-of-the-art memory forensics techniques, which tend to focus only on individual in-memory data structures. An Android GUI, however, displays diverse visual elements each built from numerous data structure instances. Furthermore, whenever an app is sent to the background, its GUI structure will be explicitly deallocated and disintegrated by the Android framework. In this paper, we present GUITAR, an app-independent technique which automatically reassembles and redraws all apps' GUIs from the multitude of GUI data elements found in a smartphone's memory image. To do so, GUITAR involves the reconstruction of (1) GUI tree topology, (2) drawing operation mapping, and (3) runtime environment for redrawing. Our evaluation shows that GUITAR is highly accurate (80-95% similar to original screenshots) at reconstructing GUIs from memory images taken from a variety of Android apps on popular phones. Moreover, GUITAR is robust in reconstructing meaningful GUIs even when facing GUI data loss."}
{"_id": "7693cafd6f29623f61d66f031cadd60b6ce827d7", "title": "Evaluation: from Precision, Recall and F-measure to ROC, Informedness, Markedness and Correlation", "text": "Commonly used evaluation measures including Recall, Precision, F-Measure and Rand Accuracy are biased and should not be used without clear understanding of the biases, and corresponding identification of chance or base case levels of the statistic. Using these measures a system that performs worse in the objective sense of Informedness, can appear to perform better under any of these commonly used measures. We discuss several concepts and measures that reflect the probability that prediction is informed versus chance. Informedness and introduce Markedness as a dual measure for the probability that prediction is marked versus chance. Finally we demonstrate elegant connections between the concepts of Informedness, Markedness, Correlation and Significance as well as their intuitive relationships with Recall and Precision, and outline the extension from the dichotomous case to the general multi-class case."}
{"_id": "0704bb7b7918cd512b5e66ea4b4993e50b8ae92f", "title": "The Spectrum Kernel: A String Kernel for SVM Protein Classification", "text": "We introduce a new sequence-similarity kernel, the spectrum kernel, for use with support vector machines (SVMs) in a discriminative approach to the protein classification problem. Our kernel is conceptually simple and efficient to compute and, in experiments on the SCOP database, performs well in comparison with state-of-the-art methods for homology detection. Moreover, our method produces an SVM classifier that allows linear time classification of test sequences. Our experiments provide evidence that string-based kernels, in conjunction with SVMs, could offer a viable and computationally efficient alternative to other methods of protein classification and homology detection."}
{"_id": "99a4fc97244b83647af47a177e919ccc21918101", "title": "Have biopesticides come of age?", "text": "Biopesticides based on living microbes and their bioactive compounds have been researched and promoted as replacements for synthetic pesticides for many years. However, lack of efficacy, inconsistent field performance and high cost have generally relegated them to niche products. Recently, technological advances and major changes in the external environment have positively altered the outlook for biopesticides. Significant increases in market penetration have been made, but biopesticides still only make up a small percentage of pest control products. Progress in the areas of activity spectra, delivery options, persistence of effect and implementation have contributed to the increasing use of biopesticides, but technologies that are truly transformational and result in significant uptake are still lacking."}
{"_id": "e915fb1cd8285aad1b9aceebb510c50ce1e131d3", "title": "Association of polymorphisms in the cytochrome P450 CYP2C9 with warfarin dose requirement and risk of bleeding complications", "text": "BACKGROUND\nThe cytochrome P450 CYP2C9 is responsible for the metabolism of S-warfarin. Two known allelic variants CYP2C9*2 and CYP2C9*3 differ from the wild type CYP2C9*1 by a single aminoacid substitution in each case. The allelic variants are associated with impaired hydroxylation of S-warfarin in in-vitro expression systems. We have studied the effect of CYP2C9 polymorphism on the in-vivo warfarin dose requirement.\n\n\nMETHODS\nPatients with a daily warfarin dose requirement of 1.5 mg or less (low-dose group, n=36), randomly selected patients with a wide range of dose requirements from an anticoagulant clinic in north-east England (clinic control group, n=52), and 100 healthy controls from the community in the same region were studied. Genotyping for the CYP2C9*2 and CYP2C9*3 alleles was done by PCR analysis. Case notes were reviewed to assess the difficulties encountered during the induction of warfarin therapy and bleeding complications in the low-dose and clinic control groups.\n\n\nFINDINGS\nThe odds ratio for individuals with a low warfarin dose requirement having one or more CYP2C9 variant alleles compared with the normal population was 6.21 (95% CI 2.48-15.6). Patients in the low-dose group were more likely to have difficulties at the time of induction of warfarin therapy (5.97 [2.26-15.82]) and have increased risk of major bleeding complications (rate ratio 3.68 [1.43-9.50]) when compared with randomly selected clinic controls.\n\n\nINTERPRETATION\nWe have shown that there is a strong association between CYP2C9 variant alleles and low warfarin dose requirement. CYP2C9 genotyping may identify a subgroup of patients who have difficulty at induction of warfarin therapy and are potentially at a higher risk of bleeding complications."}
{"_id": "7c18fb7dc07135b75f1301ca88939d81f0d7a4b7", "title": "Ant System", "text": "Ant System, the first Ant Colony Optimization algorithm, showed to be a viable method for attacking hard combinatorial optimization problems. Yet, its performance, when compared to more fine-tuned algorithms, was rather poor for large instances of traditional benchmark problems like the Traveling Salesman Problem. To show that Ant Colony Optimization algorithms could be good alternatives to existing algorithms for hard combinatorial optimization problems, recent research in this ares has mainly focused on the development of algorithmic variants which achieve better performance than AS. In this article, we present \u2013 Ant System, an Ant Colony Optimization algorithm derived from Ant System. \u2013 Ant System differs from Ant System in several important aspects, whose usefulness we demonstrate by means of an experimental study. Additionally, we relate one of the characteristics specific to AS \u2014 that of using a greedier search than Ant System \u2014 to results from the search space analysis of the combinatorial optimization problems attacked in this paper. Our computational results on the Traveling Salesman Problem and the Quadratic Assignment Problem show that \u2013 Ant System is currently among the best performing algorithms for these problems."}
{"_id": "af8d09f8832f9effc138036666f542132d92d78e", "title": "Don't Interrupt Me While I Type: Inferring Text Entered Through Gesture Typing on Android Keyboards", "text": "We present a new side-channel attack against soft keyboards that support gesture typing on Android smartphones. An application without any special permissions can observe the number and timing of the screen hardware interrupts and system-wide software interrupts generated during user input, and analyze this information to make inferences about the text being entered by the user. System-wide information is usually considered less sensitive than app-specific information, but we provide concrete evidence that this may be mistaken. Our attack applies to all Android versions, including Android M where the SELinux policy is tightened. We present a novel application of a recurrent neural network as our classifier to infer text. We evaluate our attack against the \u201cGoogle Keyboard\u201d on Nexus 5 phones and use a real-world chat corpus in all our experiments. Our evaluation considers two scenarios. First, we demonstrate that we can correctly detect a set of pre-defined \u201csentences of interest\u201d (with at least 6 words) with 70% recall and 60% precision. Second, we identify the authors of a set of anonymous messages posted on a messaging board. We find that even if the messages contain the same number of words, we correctly re-identify the author more than 97% of the time for a set of up to 35 sentences. Our study demonstrates a new way in which systemwide resources can be a threat to user privacy. We investigate the effect of rate limiting as a countermeasure but find that determining a proper rate is error-prone and fails in subtle cases. We conclude that real-time interrupt information should be made inaccessible, perhaps via a tighter SELinux policy in the next Android version."}
{"_id": "1dc5b2114d1ff561fc7d6163d8f4e9c905ca12c4", "title": "Testing the significance of a correlation with nonnormal data: comparison of Pearson, Spearman, transformation, and resampling approaches.", "text": "It is well known that when data are nonnormally distributed, a test of the significance of Pearson's r may inflate Type I error rates and reduce power. Statistics textbooks and the simulation literature provide several alternatives to Pearson's correlation. However, the relative performance of these alternatives has been unclear. Two simulation studies were conducted to compare 12 methods, including Pearson, Spearman's rank-order, transformation, and resampling approaches. With most sample sizes (n \u2265 20), Type I and Type II error rates were minimized by transforming the data to a normal shape prior to assessing the Pearson correlation. Among transformation approaches, a general purpose rank-based inverse normal transformation (i.e., transformation to rankit scores) was most beneficial. However, when samples were both small (n \u2264 10) and extremely nonnormal, the permutation test often outperformed other alternatives, including various bootstrap tests."}
{"_id": "2c5808f0f06538d8d5a5335a0c938e902b7c7413", "title": "An Autonomous Multi-UAV System for Search and Rescue", "text": "This paper proposes and evaluates a modular architecture of an autonomous unmanned aerial vehicle (UAV) system for search and rescue missions. Multiple multicopters are coordinated using a distributed control system. The system is implemented in the Robot Operating System (ROS) and is capable of providing a real-time video stream from a UAV to one or more base stations using a wireless communications infrastructure. The system supports a heterogeneous set of UAVs and camera sensors. If necessary, an operator can interfere and reduce the autonomy. The system has been tested in an outdoor mission serving as a proof of concept. Some insights from these tests are described in the paper."}
{"_id": "7f75c1245dc8aab77769c3b75834202f51ef49d1", "title": "Examining the benefits and challenges of using audience response systems: A review of the literature", "text": "0360-1315/$ see front matter Crown Copyright 2 doi:10.1016/j.compedu.2009.05.001 * Corresponding author. Tel.: +1 905 721 8668x267 E-mail address: robin.kay@uoit.ca (R.H. Kay). 1 Tel.: +1 905 721 8668x2886. Audience response systems (ARSs) permit students to answer electronically displayed multiple choice questions using a remote control device. All responses are instantly presented, in chart form, then reviewed and discussed by the instructor and the class. A brief history of ARSs is offered including a discussion of the 26 labels used to identify this technology. Next a detailed review of 67 peer-reviewed papers from 2000 to 2007 is offered presenting the benefits and challenges associated with the use of an ARS. Key benefits for using ARSs include improvements to the classroom environment (increases in attendance, attention levels, participation and engagement), learning (interaction, discussion, contingent teaching, quality of learning, learning performance), and assessment (feedback, formative, normative). The biggest challenges for teachers in using ARSs are time needed to learn and set up the ARS technology, creating effective ARS questions, adequate coverage of course material, and ability to respond to instantaneous student feedback. Student challenges include adjusting to a new method of learning, increased confusion when multiple perspectives are discussed, and negative reactions to being monitored. It is concluded that more systematic, detailed research is needed in a broader range of contexts. Crown Copyright 2009 Published by Elsevier Ltd. All rights reserved."}
{"_id": "89e2196b3c8cb4164fde074f26d482599ed28d9c", "title": "Updating freeze: Aligning animal and human research", "text": "Freezing is widely used as the main outcome measure for fear in animal studies. Freezing is also getting attention more frequently in human stress research, as it is considered to play an important role in the development of psychopathology. Human models on defense behavior are largely based on animal models. Unfortunately, direct translations between animal and human studies are hampered by differences in definitions and methods. The present review therefore aims to clarify the conceptualization of freezing. Neurophysiological and neuroanatomical correlates are discussed and a translational model is proposed. We review the upcoming research on freezing in humans that aims to match animal studies by using physiological indicators of freezing (bradycardia and objective reduction in movement). Finally, we set the agenda for future research in order to optimize mutual animal-human translations and stimulate consistency and systematization in future empirical research on the freezing phenomenon."}
{"_id": "936988ba574aac6ca14b8a918472874bbfd1a809", "title": "Child abuse, neglect, and adult behavior: research design and findings on criminality, violence, and child abuse.", "text": "Using a prospective cohorts design, a large sample of physical and sexual abuse cases was compared to a matched control group. Overall, abused and neglected subjects had higher rates than did controls for adult criminality and arrests for violent offenses, but not for adult arrests for child abuse or neglect. Findings are discussed in the context of intergenerational transmission of violence, and directions for future research are suggested."}
{"_id": "7c1ac5a078b2d45740ea18caa12ceddef5b4d122", "title": "An approach for selecting seed URLs of focused crawler based on user-interest ontology", "text": "Seed URLs selection for focused Web crawler intends to guide related and valuable information that meets a user\u2019s personal information requirement and provide more effective information retrieval. In this paper, we propose a seed URLs selection approach based on user-interest ontology. In order to enrich semantic query, we first intend to apply Formal Concept Analysis to construct user-interest concept lattice with user log profile. By using concept lattice merger, we construct the user-interest ontology which can describe the implicit concepts and relationships between them more appropriately for semantic representation eed URLs ormal concept analysis ser-interest ontology ipartite graph eb crawler and query match. On the other hand, we make full use of the user-interest ontology for extracting the user interest topic area and expanding user queries to receive the most related pages as seed URLs, which is an entrance of the focused crawler. In particular, we focus on how to refine the user topic area using the bipartite directed graph. The experiment proves that the user-interest ontology can be achieved effectively by merging concept lattices and that our proposed approach can select high quality seed URLs collection and improve the average precision of focused Web crawler."}
{"_id": "d3abb0b5b3ce7eb464846bbdfd93e0fbf505e954", "title": "Compact arrays fed by substrate integrated waveguides", "text": "In this paper, we compare three different concepts of compact antenna arrays fed by substrate integrated waveguides (SIW). Antenna concepts differ in the type of radiators. Slots represent magnetic linear radiators, patches are electric surface radiators, and Vivaldi slots belong to travelling-wave antennas. Hence, the SIW feeders have to exploit different mechanisms of exciting antenna elements. Impedance and radiation properties of studied antenna arrays have been related to the normalized frequency. Antenna arrays have been mutually compared to show fundamental dependencies of final parameters of the designed antennas on state variables of antennas, on SIW feeder architectures and on related implementation details."}
{"_id": "4360637b10da1d63029456e76b6a30673c6efb97", "title": "SmartPort: A Platform for Sensor Data Monitoring in a Seaport Based on FIWARE", "text": "Seaport monitoring and management is a significant research area, in which infrastructure automatically collects big data sets that lead the organization in its multiple activities. Thus, this problem is heavily related to the fields of data acquisition, transfer, storage, big data analysis and information visualization. Las Palmas de Gran Canaria port is a good example of how a seaport generates big data volumes through a network of sensors. They are placed on meteorological stations and maritime buoys, registering environmental parameters. Likewise, the Automatic Identification System (AIS) registers several dynamic parameters about the tracked vessels. However, such an amount of data is useless without a system that enables a meaningful visualization and helps make decisions. In this work, we present SmartPort, a platform that offers a distributed architecture for the collection of the port sensors' data and a rich Internet application that allows the user to explore the geolocated data. The presented SmartPort tool is a representative, promising and inspiring approach to manage and develop a smart system. It covers a demanding need for big data analysis and visualization utilities for managing complex infrastructures, such as a seaport."}
{"_id": "e96940e4837c597ed05bd3ea52e126ca444d842b", "title": "Modeling textual entailment with role semantic information", "text": "In this thesis, we present a novel approach for modeling textual entailment using lexicalsemantic information on the level of predicate-argument structure. To this end, we adopt information provided by the Berkeley FrameNet repository and embed it into an implemented end-to-end system. The two main goals of this thesis are the following: (i) to provide an analysis of the potential contribution of frame semantic information to the recognition textual entailment and (ii) to present a robust system architecture that can serve as basis for future experiments, research, and improvement. Our work was carried out in the context of the textual entailment initiative, which since 2005 has set the stage for the broad investigation of inference in natural-language processing tasks, including empirical evaluation of its coverage and reliability. In short, textual entailment describes inferential relations between (entailing) texts and (entailed) hypotheses as interpreted by typical language users. This pre-theoretic notion captures a natural range of inferences as compared to logical entailment, which has traditionally been used within theoretical approaches to natural language semantics. Various methods for modeling textual entailment have been proposed in the literature, ranging from shallow techniques like lexical overlap to shallow syntactic parsing and the exploitation of WordNet relations. Recently, there has been a move towards more structured meaning representations. In particular, the level of predicate-argument structure has gained much attention, which seems to be a natural and straightforward choice. Predicate-argument structure allows annotating sentences or texts with nuclear meaning representations (\u201cwho did what to whom\u201d), which are of obvious relevance for this task. For example, it can account for paraphrases like \u201cGhosts scare John\u201d vs. \u201cJohn is scared by ghosts\u201d. In this thesis, we present an approach to textual entailment that is centered around the analysis of predicate-argument structure. It combines LFG grammatical analysis, predicate-argument structure in the FrameNet paradigm, and taxonomic information from WordNet into tripartite graph structures. By way of a declarative graph matching algorithm, the \u201cstructural and semantic\u201d similarity of hypotheses and texts is computed and the result is represented as feature vectors. A supervised machine learning architecture trained on entailment corpora is used to check textual entailment for new text/hypothesis pairs. The approach is implemented in the SALSA RTE system, which successfully participated in the second and third RTE challenge. While system performance is on a par with that of comparable systems, the intuitively expected strong positive effect of using FrameNet information has not yet been confirmed. In order to evaluate different system components and to assess the potential contribution of FrameNet information for checking textual entailment, we conducted a number of experiments. For example, with the help of a gold-standard corpus, we"}
{"_id": "7336015b2de1c3a7accf7651c28939f7fa03cb9e", "title": "An algorithm for treatment of the drooping nose.", "text": "UNLABELLED\nNasal tip ptosis (\"drooping\" or long nose) occurs when the tip of the nose is more caudal than what is deemed ideal. Intrinsic factors, such as elongated or caudally-rotated lower lateral cartilages, can lead to nasal tip ptosis. Extrinsic factors, such as elongated upper lateral cartilages or excessive caudal anterior septum and heavy nasal skin, can push the nasal tip caudally and lead to drooping of the nasal tip. The loss of maxillary or nasal spine support may enhance the potential for tip ptosis. In addition, a hyperactive depressor nasi septi could, as a result of continuous pull on the tip, result in tip ptosis. Aging or previous nasal procedures (such as the Goldman-type tip surgery) where the continuity of the lateral and medial crura of the lower lateral cartilages have been violated may cause a weakening of the tip-supporting mechanisms and de-rotation of the nasal tip. Correction of this deformity is challenging and rewarding; it can resolve both the cosmetic deformity and nasal obstruction symptoms related to this entity. The goal of this article is to present our current principles of diagnosis and treatment of nasal tip ptosis, as well as to introduce and algorithm of preferred methods and techniques for its reliable and stable correction.\n\n\nRESULTS\nCorrection of the nasal tip ptosis requires accurate diagnosis, a recognition of the interplay between various anatomic components, specific strategy planning, and a correction of anatomic abnormalities."}
{"_id": "a832b34e1c530f9e66be4471a051f981b1482d27", "title": "Emerging\nApplications of Liquid Metals Featuring Surface Oxides", "text": "Gallium and several of its alloys are liquid metals at or near room temperature. Gallium has low toxicity, essentially no vapor pressure, and a low viscosity. Despite these desirable properties, applications calling for liquid metal often use toxic mercury because gallium forms a thin oxide layer on its surface. The oxide interferes with electrochemical measurements, alters the physicochemical properties of the surface, and changes the fluid dynamic behavior of the metal in a way that has, until recently, been considered a nuisance. Here, we show that this solid oxide \"skin\" enables many new applications for liquid metals including soft electrodes and sensors, functional microcomponents for microfluidic devices, self-healing circuits, shape-reconfigurable conductors, and stretchable antennas, wires, and interconnects."}
{"_id": "9cc2680fc8524b1e3f1f11b508d5642673e22f55", "title": "A Personalized Graph-Based Document Ranking Model Using a Semantic User Profile", "text": "The overload of the information available on the web, held with the diversity of the user information needs and the ambiguity of their queries have led the researchers to develop personalized search tools that return only documents that meet the user profile representing his main interests and needs. We present in this paper a personalized document ranking model based on an extended graph-based distance measure that exploits a semantic user profile derived from a predefined web ontology (ODP). The measure is based on combining Minimum Common Supergraph (MCS) and Maximum Common Subgraph (mcs) between graphs representing respectively the document and the user profile. We extend this measure in order to take into account a semantic recovery between the document and the user profile through common concepts and cross links connecting the two graphs. Results show the effectiveness of our personalized graph-based ranking model compared to Yahoo search results."}
{"_id": "6ed38b0cb510fa91434eb63ab464bee66c9323c6", "title": "A Multilayer Convolutional Encoder-Decoder Neural Network for Grammatical Error Correction", "text": "We improve automatic correction of grammatical, orthographic, and collocation errors in text using a multilayer convolutional encoder-decoder neural network. The network is initialized with embeddings that make use of character Ngram information to better suit this task. When evaluated on common benchmark test data sets (CoNLL-2014 and JFLEG), our model substantially outperforms all prior neural approaches on this task as well as strong statistical machine translation-based systems with neural and task-specific features trained on the same data. Our analysis shows the superiority of convolutional neural networks over recurrent neural networks such as long short-term memory (LSTM) networks in capturing the local context via attention, and thereby improving the coverage in correcting grammatical errors. By ensembling multiple models, and incorporating an N-gram language model and edit features via rescoring, our novel method becomes the first neural approach to outperform the current state-of-the-art statistical machine translation-based approach, both in terms of grammaticality and fluency."}
{"_id": "e4acaccd3c42b618396c9c28dae64ae7091e36b8", "title": "A self-steering I/Q receiver array in 45-nm CMOS SOI", "text": "A novel I/Q receiver array is demonstrated that adapts phase shifts in each receive channel to point a receive beam toward an incident RF signal. The measured array operates at 8.1 GHz and covers steering angles of +/-35 degrees for a four element array. Additionally, the receiver incorporates an I/Q down-converter and demodulates 64QAM with EVM less than 4%. The chip is fabricated in 45 nm CMOS SOI process and occupies an area of 3.45 mm2 while consuming 143 mW dc power."}
{"_id": "b100619c0ca6f5e75dd9ad13f22aab88269c031a", "title": "A Technical Approach to the Energy Blockchain in Microgrids", "text": "The present paper considers some technical issues related to the \u201cenergy blockchain\u201d paradigm applied to microgrids. In particular, what appears from the study is that the superposition of energy transactions in a microgrid creates a variation of the power losses in all the branches of the microgrid. Traditional power losses allocation in distribution systems takes into account only generators while, in this paper, a real-time attribution of power losses to each transaction involving one generator and one load node is done by defining some suitable indices. Besides, the presence of P\u2013V nodes increases the level of reactive flows and provides a more complex technical perspective. For this reason, reactive power generation for voltage support at P\u2013V nodes poses a further problem of reactive power flow exchange, which is worth of investigation in future works in order to define a possible way of remuneration. The experimental section of the paper considers a medium voltage microgrid and two different operational scenarios."}
{"_id": "bf9283bbc24c01c664556a4d7d7ebe8a94781d4f", "title": "Aspect extraction in sentiment analysis: comparative analysis and survey", "text": "Sentiment analysis (SA) has become one of the most active and progressively popular areas in information retrieval and text mining due to the expansion of the World Wide Web (WWW). SA deals with the computational treatment or the classification of user\u2019s sentiments, opinions and emotions hidden within the text. Aspect extraction is the most vital and extensively explored phase of SA to carry out the classification of sentiments in precise manners. During the last decade, enormous number of research has focused on identifying and extracting aspects. Therefore, in this survey, a comprehensive overview has been attempted for different aspect extraction techniques and approaches. These techniques have been categorized in accordance with the adopted approach. Despite being a traditional survey, a comprehensive comparative analysis is conducted among different approaches of aspect extraction, which not only elaborates the performance of any technique but also guides the reader to compare the accuracy with other state-of-the-art and most recent approaches."}
{"_id": "b4f00f51edaf5d5b84e957ab947aac47c04740ba", "title": "An incremental deployment algorithm for mobile robot teams", "text": "This paper describes an algorithm for deploying the members of a mobile robot team into an unknown environment. The algorithm deploys robots one-at-a-time, with each robot making use of information gathered by the previous robots to determine the next deployment location. The deployment pattern is designed to maximize the area covered by the robots\u2019 sensors, while simultaneously ensuring that the robots maintain line-of-sight contact with one another. This paper describes the basic algorithm and presents results obtained from a series of experiments conducted using both real and simulated"}
{"_id": "3b032aacdc7dadf0dc3475565c7236fa4b372eb6", "title": "RABAC: Role-Centric Attribute-Based Access Control", "text": "Role-based access control (RBAC) is a commercially dominant model, standardized by the National Institute of Standards and Technology (NIST). Although RBAC provides compelling benefits for security management it has several known deficiencies such as role explosion, wherein multiple closely related roles are required (e.g., attendingdoctor role is separately defined for each patient). Numerous extensions to RBAC have been proposed to overcome these shortcomings. Recently NIST announced an initiative to unify and standardize these extensions by integrating roles with attributes, and identified three approaches: use attributes to dynamically assign users to roles, treat roles as just another attribute, and constrain the permissions of a role via attributes. The first two approaches have been previously studied. This paper presents a formal model for the third approach for the first time in the literature. We propose the novel role-centric attribute-based access control (RABAC) model which extends the NIST RBAC model with permission filtering policies. Unlike prior proposals addressing the role-explosion problem, RABAC does not fundamentally modify the role concept and integrates seamlessly with the NIST RBAC model. We also define an XACML profile for RABAC based on the existing XACML profile for RBAC."}
{"_id": "64a995d605a1f4f632e6acf9468ae0834e62e084", "title": "A throwable miniature robotic system", "text": "Before the soldiers or police take actions, a fast and real-time detection in dangerous places, such as indoor, passageway and underpass, can both guarantee the soldiers' safety and increase the battle's accuracy, so it is significative and necessary to design a kind of scout robot which is able to arrive in the target region to acquire the message fast by the ways of thrown, shot or airdrop. This kind of robot should be small, easy to hide and have the ability of anti-impact and semi-autonomous. This paper mainly proposes a design method of the throwable miniature scout robot, analyses the anti-pact mechanism and autonomous strategy, and shows that the anti-impact mechanism is useful to attenuate the impact and the semi-autonomous control strategy can be fit for the robot's application through several experiments at last."}
{"_id": "5b7c4b3c12917f289513c4896a08548e67ede9fe", "title": "Evidence for a common representation of decision values for dissimilar goods in human ventromedial prefrontal cortex.", "text": "To make economic choices between goods, the brain needs to compute representations of their values. A great deal of research has been performed to determine the neural correlates of value representations in the human brain. However, it is still unknown whether there exists a region of the brain that commonly encodes decision values for different types of goods, or if, in contrast, the values of different types of goods are represented in distinct brain regions. We addressed this question by scanning subjects with functional magnetic resonance imaging while they made real purchasing decisions among different categories of goods (food, nonfood consumables, and monetary gambles). We found activity in a key brain region previously implicated in encoding goal-values: the ventromedial prefrontal cortex (vmPFC) was correlated with the subjects' value for each category of good. Moreover, we found a single area in vmPFC to be correlated with the subjects' valuations for all categories of goods. Our results provide evidence that the brain encodes a \"common currency\" that allows for a shared valuation for different categories of goods."}
{"_id": "fde42b706122e42efa876262039729d449487ae2", "title": "Sentiment Identification in Code-Mixed Social Media Text", "text": "Sentiment analysis is the Natural Language Processing (NLP) task dealing with the detection and classification of sentiments in texts. While some tasks deal with identifying presence of sentiment in text (Subjectivity analysis), other tasks aim at determining the polarity of the text categorizing them as positive, negative and neutral. Whenever there is presence of sentiment in text, it has a source (people, group of people or any entity) and the sentiment is directed towards some entity, object, event or person. Sentiment analysis tasks aim to determine the subject, the target and the polarity or valence of the sentiment. In our work, we try to automatically extract sentiment (positive or negative) from Facebook posts using a machine learning approach. While some works have been done in code-mixed social media data and in sentiment analysis separately, our work is the first attempt (as of now) which aims at performing sentiment analysis of code-mixed social media text. We have used extensive pre-processing to remove noise from raw text. Multilayer Perceptron model has been used to determine the polarity of the sentiment. We have also developed the corpus for this task by manually labelling Facebook posts with their associated sentiments."}
{"_id": "03684e4a57d2c33e0ed219cad9e3b180175f2464", "title": "Social signal processing: Survey of an emerging domain", "text": "The ability to understand and manage social signals of a person we are communicating with is the core of social intelligence. Social intelligence is a facet of human intelligence that has been argued to be indispensable and perhaps the most important for success in life. This paper argues that next-generation computing needs to include the essence of social intelligence \u2013 the ability to recognize human social signals and social behaviours like turn taking, politeness, and disagreement \u2013 in order to become more effective and more efficient. Although each one of us understands the importance of social signals in everyday life situations, and in spite of recent advances in machine analysis of relevant behavioural cues like blinks, smiles, crossed arms, laughter, and similar, design and development of automated systems for Social Signal Processing (SSP) are rather difficult. This paper surveys the past efforts in solving these problems by a computer, it summarizes the relevant findings in social psychology, and it proposes a set of recommendations for enabling the development of the next generation of socially-aware computing."}
{"_id": "04c2cda00e5536f4b1508cbd80041e9552880e67", "title": "Hipster Wars: Discovering Elements of Fashion Styles", "text": "The clothing we wear and our identities are closely tied, revealing to the world clues about our wealth, occupation, and socioidentity. In this paper we examine questions related to what our clothing reveals about our personal style. We first design an online competitive Style Rating Game called Hipster Wars to crowd source reliable human judgments of style. We use this game to collect a new dataset of clothing outfits with associated style ratings for 5 style categories: hipster, bohemian, pinup, preppy, and goth. Next, we train models for betweenclass and within-class classification of styles. Finally, we explore methods to identify clothing elements that are generally discriminative for a style, and methods for identifying items in a particular outfit that may indicate a style."}
{"_id": "09fa54f1ab7aaa83124d2415bfc6eb51e4b1f081", "title": "Where to Buy It: Matching Street Clothing Photos in Online Shops", "text": "In this paper, we define a new task, Exact Street to Shop, where our goal is to match a real-world example of a garment item to the same item in an online shop. This is an extremely challenging task due to visual differences between street photos (pictures of people wearing clothing in everyday uncontrolled settings) and online shop photos (pictures of clothing items on people, mannequins, or in isolation, captured by professionals in more controlled settings). We collect a new dataset for this application containing 404,683 shop photos collected from 25 different online retailers and 20,357 street photos, providing a total of 39,479 clothing item matches between street and shop photos. We develop three different methods for Exact Street to Shop retrieval, including two deep learning baseline methods, and a method to learn a similarity measure between the street and shop domains. Experiments demonstrate that our learned similarity significantly outperforms our baselines that use existing deep learning based representations."}
{"_id": "30ea6c4991ceabc8d197ccff3e45fb0a00a3f6c5", "title": "Style Finder: Fine-Grained Clothing Style Detection and Retrieval", "text": "With the rapid proliferation of smartphones and tablet computers, search has moved beyond text to other modalities like images and voice. For many applications like Fashion, visual search offers a compelling interface that can capture stylistic visual elements beyond color and pattern that cannot be as easily described using text. However, extracting and matching such attributes remains an extremely challenging task due to high variability and deformability of clothing items. In this paper, we propose a fine-grained learning model and multimedia retrieval framework to address this problem. First, an attribute vocabulary is constructed using human annotations obtained on a novel fine-grained clothing dataset. This vocabulary is then used to train a fine-grained visual recognition system for clothing styles. We report benchmark recognition and retrieval results on Women's Fashion Coat Dataset and illustrate potential mobile applications for attribute-based multimedia retrieval of clothing items and image annotation."}
{"_id": "324608bf8fecc064bc491da21291465ab42fa6b6", "title": "Matching-CNN meets KNN: Quasi-parametric human parsing", "text": "Both parametric and non-parametric approaches have demonstrated encouraging performances in the human parsing task, namely segmenting a human image into several semantic regions (e.g., hat, bag, left arm, face). In this work, we aim to develop a new solution with the advantages of both methodologies, namely supervision from annotated data and the flexibility to use newly annotated (possibly uncommon) images, and present a quasi-parametric human parsing model. Under the classic K Nearest Neighbor (KNN)-based nonparametric framework, the parametric Matching Convolutional Neural Network (M-CNN) is proposed to predict the matching confidence and displacements of the best matched region in the testing image for a particular semantic region in one KNN image. Given a testing image, we first retrieve its KNN images from the annotated/manually-parsed human image corpus. Then each semantic region in each KNN image is matched with confidence to the testing image using M-CNN, and the matched regions from all KNN images are further fused, followed by a superpixel smoothing procedure to obtain the ultimate human parsing result. The M-CNN differs from the classic CNN [12] in that the tailored cross image matching filters are introduced to characterize the matching between the testing image and the semantic region of a KNN image. The cross image matching filters are defined at different convolutional layers, each aiming to capture a particular range of displacements. Comprehensive evaluations over a large dataset with 7,700 annotated human images well demonstrate the significant performance gain from the quasi-parametric model over the state-of-the-arts [29, 30], for the human parsing task."}
{"_id": "07cb8b9386778b158f879ee36a902f04dbf33bd6", "title": "The SeaLion has Landed: An IDE for Answer-Set Programming---Preliminary Report", "text": "We report about the current state and designated features of the tool SeaLion, aimed to serve as an integrated development environment (IDE) for answer-set programming (ASP). A main goal of SeaLion is to provide a user-friendly environment for supporting a developer to write, evaluate, debug, and test answer-set programs. To this end, new support techniques have to be developed that suit the requirements of the answer-set semantics and meet the constraints of practical applicability. In this respect, SeaLion benefits from the research results of a project on methods and methodologies for answer-set program development in whose context SeaLion is realised. Currently, the tool provides source-code editors for the languages of Gringo and DLV that offer syntax highlighting, syntax checking, and a visual program outline. Further implemented features are support for external solvers and visualisation as well as visual editing of answer sets. SeaLion comes as a plugin of the popular Eclipse platform and provides itself interfaces for future extensions of the IDE."}
{"_id": "61c648e162dad3bd154d429b3e5cec601efa4a28", "title": "Occlusion horizons for driving through urban scenery", "text": "Wepresentarapidocclusioncullingalgorithmspecificallydesigned for urbanenvironments. For eachframe,an occlusionhorizonis beingbuilt on-the-flyduring a hierarchicalfront-to-backtraversal of the scene.All conservatively hiddenobjectsareculled, while all theoccludingimpostorsof all conservatively visibleobjectsare addedto the \u0004 D occlusionhorizon.Our framework alsosupports levels-of-detail(LOD) renderingby estimatingthe visible areaof the projectionof an objectin orderto selecttheappropriateLOD for eachobject.This algorithmrequiresno substantial preprocessing andnoexcessi ve storage. In a test sceneof 10,000buildings, the cull phasetook 11 ms on a PentiumII333 MHz and45 ms on an SGI Octaneper frame on average. In typical views, the occlusionhorizonculled away 80-90%of theobjectsthatwerewithin theview frustum,giving a 10 timesspeedupover view frustumculling alone.Combiningthe occlusionhorizonwith LOD renderinggavea17 timesspeedupon anSGIOctane,and23 timeson aPII. CR Categoriesand SubjectDescriptors: I.3.7[ComputerGraphics]: I.3.7 Three-Dimensional GraphicsandRealism."}
{"_id": "2006f8d01395dab714bdcdcfd1cebbe6d6276e35", "title": "Investigation of recurrent-neural-network architectures and learning methods for spoken language understanding", "text": "One of the key problems in spoken language understanding (SLU) is the task of slot filling. In light of the recent success of applying deep neural network technologies in domain detection and intent identification, we carried out an in-depth investigation on the use of recurrent neural networks for the more difficult task of slot filling involving sequence discrimination. In this work, we implemented and compared several important recurrent-neural-network architectures, including the Elman-type and Jordan-type recurrent networks and their variants. To make the results easy to reproduce and compare, we implemented these networks on the common Theano neural network toolkit, and evaluated them on the ATIS benchmark. We also compared our results to a conditional random fields (CRF) baseline. Our results show that on this task, both types of recurrent networks outperform the CRF baseline substantially, and a bi-directional Jordantype network that takes into account both past and future dependencies among slots works best, outperforming a CRFbased baseline by 14% in relative error reduction."}
{"_id": "5664a4ca240ca67d4846bcafb332a5f67e1a9791", "title": "Accessibility Evaluation of Classroom Captions", "text": "Real-time captioning enables deaf and hard of hearing (DHH) people to follow classroom lectures and other aural speech by converting it into visual text with less than a five second delay. Keeping the delay short allows end-users to follow and participate in conversations. This article focuses on the fundamental problem that makes real-time captioning difficult: sequential keyboard typing is much slower than speaking. We first surveyed the audio characteristics of 240 one-hour-long captioned lectures on YouTube, such as speed and duration of speaking bursts. We then analyzed how these characteristics impact caption generation and readability, considering specifically our human-powered collaborative captioning approach. We note that most of these characteristics are also present in more general domains. For our caption comparison evaluation, we transcribed a classroom lecture in real-time using all three captioning approaches. We recruited 48 participants (24 DHH) to watch these classroom transcripts in an eye-tracking laboratory. We presented these captions in a randomized, balanced order. We show that both hearing and DHH participants preferred and followed collaborative captions better than those generated by automatic speech recognition (ASR) or professionals due to the more consistent flow of the resulting captions. These results show the potential to reliably capture speech even during sudden bursts of speed, as well as for generating \u201cenhanced\u201d captions, unlike other human-powered captioning approaches."}
{"_id": "53730790a77a31794a72b334a14f4691a16b87ba", "title": "A Decentralised Sharing App running a Smart Contract on the Ethereum Blockchain", "text": "The sharing economy, the business of collectively using privately owned objects and services, has fuelled some of the fastest growing businesses of the past years. However, popular sharing platforms like Airbnb or Uber exhibit several drawbacks: a cumbersome sign up procedure, lack of participant privacy, overbearing terms and conditions, and significant fees for users. We demonstrate a Decentralised App (DAPP) for the sharing of everyday objects based on a smart contract on the Ethereum blockchain. This contract enables users to register and rent devices without involvement of a Trusted Third Party (TTP), disclosure of any personal information or prior sign up to the service. With increasing distribution of cryptocurrencies the use of smart contracts such as proposed in this paper has the potential to revolutionise the sharing economy."}
{"_id": "4d4e151bf5738fa2b6a8ef9ac4e66cfe5179cd5f", "title": "Layout-sensitive language extensibility with SugarHaskell", "text": "Programmers need convenient syntax to write elegant and concise programs. Consequently, the Haskell standard provides syntactic sugar for some scenarios (e.g., do notation for monadic code), authors of Haskell compilers provide syntactic sugar for more scenarios (e.g., arrow notation in GHC), and some Haskell programmers implement preprocessors for their individual needs (e.g., idiom brackets in SHE). But manually written preprocessors cannot scale: They are expensive, error-prone, and not composable. Most researchers and programmers therefore refrain from using the syntactic notations they need in actual Haskell programs, but only use them in documentation or papers. We present a syntactically extensible version of Haskell, SugarHaskell, that empowers ordinary programmers to implement and use custom syntactic sugar.\n Building on our previous work on syntactic extensibility for Java, SugarHaskell integrates syntactic extensions as sugar libraries into Haskell's module system. Syntax extensions in SugarHaskell can declare arbitrary context-free and layout-sensitive syntax. SugarHaskell modules are compiled into Haskell modules and further processed by a Haskell compiler. We provide an Eclipse-based IDE for SugarHaskell that is extensible, too, and automatically provides syntax coloring for all syntax extensions imported into a module.\n We have validated SugarHaskell with several case studies, including arrow notation (as implemented in GHC) and EBNF as a concise syntax for the declaration of algebraic data types with associated concrete syntax. EBNF declarations also show how to extend the extension mechanism itself: They introduce syntactic sugar for using the declared concrete syntax in other SugarHaskell modules."}
{"_id": "1fdfa103b31c17be922050972a49414cead951f5", "title": "Theodor Lipps and the shift from \"sympathy\" to \"empathy\".", "text": "In the course of extensive philosophical debates on aesthetics in nineteenth-century Germany, Robert Vischer introduced the concept of Einf\u00fchlung in relation to art. Theodor Lipps subsequently extended it from art to visual illusions and interpersonal understanding. While Lipps had regarded Einf\u00fchlung as basically similar to the old notion of sympathy, Edward Titchener in America believed it had a different meaning. Hence, he coined the term empathy as its translation. This term came to be increasingly widely adopted, first in psychology and then more generally. But the lack of agreement about the supposed difference between these concepts suggests that Lipps had probably been right."}
{"_id": "1e77175db11664a18b6b6e3cac4cf5c90e7cb4b5", "title": "A polynomial-time maximum common subgraph algorithm for outerplanar graphs and its application to chemoinformatics", "text": "Metrics for structured data have received an increasing interest in the machine learning community. Graphs provide a natural representation for structured data, but a lot of operations on graphs are computationally intractable. In this article, we present a polynomial-time algorithm that computes a maximum common subgraph of two outerplanar graphs. The algorithm makes use of the block-and-bridge preserving subgraph isomorphism, which has significant efficiency benefits and is also motivated from a chemical perspective. We focus on the application of learning structure-activity relationships, where the task is to predict the chemical activity of molecules. We show how the algorithm can be used to construct a metric for structured data and we evaluate this metric and more generally also the block-and-bridge preserving matching operator on 60 molecular datasets, obtaining state-of-the-art results in terms of predictive performance and efficiency."}
{"_id": "2f92b10acf7c405e55c74c1043dabd9ded1b1800", "title": "Dynamic Integration of Background Knowledge in Neural NLU Systems", "text": "Common-sense or background knowledge is required to understand natural language, but in most neural natural language understanding (NLU) systems, the requisite background knowledge is indirectly acquired from static corpora. We develop a new reading architecture for the dynamic integration of explicit background knowledge in NLU models. A new task-agnostic reading module provides refined word representations to a task-specific NLU architecture by processing background knowledge in the form of free-text statements, together with the taskspecific inputs. Strong performance on the tasks of document question answering (DQA) and recognizing textual entailment (RTE) demonstrate the effectiveness and flexibility of our approach. Analysis shows that our models learn to exploit knowledge selectively and in a semantically appropriate way."}
{"_id": "9b23769aa313e4c7adc75944a5da1eb4eafb8395", "title": "Preserving Relation Privacy in Online Social Network Data", "text": "Online social networks routinely publish data of interest to third parties, but in so doing often reveal relationships, such as a friendship or contractual association, that an attacker can exploit. This systematic look at existing privacy-preservation techniques highlights the vulnerabilities of users even in networks that completely anonymize identities. Through a taxonomy that categorizes techniques according to the degree of user identity exposure, the authors examine the ways that existing approaches compromise relation privacy and offer more secure alternatives."}
{"_id": "fc5f08a4e4ef0f63d0091377fa2c649e68f0c5fd", "title": "Emotion and the motivational brain", "text": "Psychophysiological and neuroscience studies of emotional processing undertaken by investigators at the University of Florida Laboratory of the Center for the Study of Emotion and Attention (CSEA) are reviewed, with a focus on reflex reactions, neural structures and functional circuits that mediate emotional expression. The theoretical view shared among the investigators is that expressed emotions are founded on motivational circuits in the brain that developed early in evolutionary history to ensure the survival of individuals and their progeny. These circuits react to appetitive and aversive environmental and memorial cues, mediating appetitive and defensive reflexes that tune sensory systems and mobilize the organism for action and underly negative and positive affects. The research reviewed here assesses the reflex physiology of emotion, both autonomic and somatic, studying affects evoked in picture perception, memory imagery, and in the context of tangible reward and punishment, and using the electroencephalograph (EEG) and functional magnetic resonance imaging (fMRI), explores the brain's motivational circuits that determine human emotion."}
{"_id": "7fbf4458d5c19ec29b282f89bcb9110dc6ca89fc", "title": "Single-cell RNA-seq highlights intratumoral heterogeneity in primary glioblastoma", "text": "Human cancers are complex ecosystems composed of cells with distinct phenotypes, genotypes, and epigenetic states, but current models do not adequately reflect tumor composition in patients. We used single-cell RNA sequencing (RNA-seq) to profile 430 cells from five primary glioblastomas, which we found to be inherently variable in their expression of diverse transcriptional programs related to oncogenic signaling, proliferation, complement/immune response, and hypoxia. We also observed a continuum of stemness-related expression states that enabled us to identify putative regulators of stemness in vivo. Finally, we show that established glioblastoma subtype classifiers are variably expressed across individual cells within a tumor and demonstrate the potential prognostic implications of such intratumoral heterogeneity. Thus, we reveal previously unappreciated heterogeneity in diverse regulatory programs central to glioblastoma biology, prognosis, and therapy."}
{"_id": "d9c4b1ca997583047a8721b7dfd9f0ea2efdc42c", "title": "Learning Inference Models for Computer Vision", "text": "Computer vision can be understood as the ability to perform inference on image data. Breakthroughs in computer vision technology are often marked by advances in inference techniques, as even the model design is often dictated by the complexity of inference in them. This thesis proposes learning based inference schemes and demonstrates applications in computer vision. We propose techniques for inference in both generative and discriminative computer vision models. Despite their intuitive appeal, the use of generative models in vision is hampered by the difficulty of posterior inference, which is often too complex or too slow to be practical. We propose techniques for improving inference in two widely used techniques: Markov Chain Monte Carlo (MCMC) sampling and message-passing inference. Our inference strategy is to learn separate discriminative models that assist Bayesian inference in a generative model. Experiments on a range of generative vision models show that the proposed techniques accelerate the inference process and/or converge to better solutions. A main complication in the design of discriminative models is the inclusion of prior knowledge in a principled way. For better inference in discriminative models, we propose techniques that modify the original model itself, as inference is simple evaluation of the model. We concentrate on convolutional neural network (CNN) models and propose a generalization of standard spatial convolutions, which are the basic building blocks of CNN architectures, to bilateral convolutions. First, we generalize the existing use of bilateral filters and then propose new neural network architectures with learnable bilateral filters, which we call \u2018Bilateral Neural Networks\u2019. We show how the bilateral filtering modules can be used for modifying existing CNN architectures for better image segmentation and propose a neural network approach for temporal information propagation in videos. Experiments demonstrate the potential of the proposed bilateral networks on a wide range of vision tasks and datasets. In summary, we propose learning based techniques for better inference in several computer vision models ranging from inverse graphics to freely parameterized neural networks. In generative vision models, our inference techniques alleviate some of the crucial hurdles in Bayesian posterior inference, paving new ways for the use of model based machine learning in vision. In discriminative CNN models, the proposed filter generalizations aid in the design of new neural network architectures that can handle sparse high-dimensional data as well as provide a way for incorporating prior knowledge into CNNs."}
{"_id": "7d2e36f1eb09cbff2f9f5c942e538a964464b15e", "title": "Tunneling Transistors Based on Graphene and 2-D Crystals", "text": "As conventional transistors become smaller and thinner in the quest for higher performance, a number of hurdles are encountered. The discovery of electronic-grade 2-D crystals has added a new \u201clayer\u201d to the list of conventional semiconductors used for transistors. This paper discusses the properties of 2-D crystals by comparing them with their 3-D counterparts. Their suitability for electronic devices is discussed. In particular, the use of graphene and other 2-D crystals for interband tunneling transistors is discussed for low-power logic applications. Since tunneling phenomenon in reduced dimensions is not conventionally covered in texts, the physics is developed explicitly before applying it to transistors. Though we are in an early stage of learning to design devices with 2-D crystals, they have already been the motivation behind a list of truly novel ideas. This paper reviews a number of such ideas."}
{"_id": "149bf28af91cadf2cd933bd477599cca40f55ccd", "title": "Autonomous reinforcement learning on raw visual input data in a real world application", "text": "We propose a learning architecture, that is able to do reinforcement learning based on raw visual input data. In contrast to previous approaches, not only the control policy is learned. In order to be successful, the system must also autonomously learn, how to extract relevant information out of a high-dimensional stream of input information, for which the semantics are not provided to the learning system. We give a first proof-of-concept of this novel learning architecture on a challenging benchmark, namely visual control of a racing slot car. The resulting policy, learned only by success or failure, is hardly beaten by an experienced human player."}
{"_id": "11757f6404965892e02c8c49c5665e2b25492bec", "title": "Capturing and animating skin deformation in human motion", "text": "During dynamic activities, the surface of the human body moves in many subtle but visually significant ways: bending, bulging, jiggling, and stretching. We present a technique for capturing and animating those motions using a commercial motion capture system and approximately 350 markers. Although the number of markers is significantly larger than that used in conventional motion capture, it is only a sparse representation of the true shape of the body. We supplement this sparse sample with a detailed, actor-specific surface model. The motion of the skin can then be computed by segmenting the markers into the motion of a set of rigid parts and a residual deformation (approximated first as a quadratic transformation and then with radial basis functions). We demonstrate the power of this approach by capturing flexing muscles, high frequency motions, and abrupt decelerations on several actors. We compare these results both to conventional motion capture and skinning and to synchronized video of the actors."}
{"_id": "a8da9412b4a2a15a3124be06c4c37ad98f8f02d2", "title": "Ontology and the Lexicon", "text": "A lexicon is a linguistic object and hence is not the same thing as an ontology, which is non-linguistic. Nonetheless, word senses are in many ways similar to ontological concepts and the relationships found between word senses resemble the relationships found between concepts. Although the arbitrary and semi-arbitrary distinctions made by natural languages limit the degree to which these similarities can be exploited, a lexicon can nonetheless serve in the development of an ontology, especially in a technical domain."}
{"_id": "d381709212dccf397284eee54a1e3010a4ef777f", "title": "A Unified Model for Extractive and Abstractive Summarization using Inconsistency Loss", "text": "We propose a unified model combining the strength of extractive and abstractive summarization. On the one hand, a simple extractive model can obtain sentence-level attention with high ROUGE scores but less readable. On the other hand, a more complicated abstractive model can obtain word-level dynamic attention to generate a more readable paragraph. In our model, sentence-level attention is used to modulate the word-level attention such that words in less attended sentences are less likely to be generated. Moreover, a novel inconsistency loss function is introduced to penalize the inconsistency between two levels of attentions. By end-to-end training our model with the inconsistency loss and original losses of extractive and abstractive models, we achieve state-of-theart ROUGE scores while being the most informative and readable summarization on the CNN/Daily Mail dataset in a solid human evaluation."}
{"_id": "fb80d858f0b3832d67c6aafaf5426c787198e492", "title": "The TangibleK Robotics Program: Applied Computational Thinking for Young Children.", "text": "This article describes the TangibleK robotics program for young children. Based on over a decade of research, this program is grounded on the belief that teaching children about the human-made world, the realm of technology and engineering, is as important as teaching them about the natural world, numbers, and letters. The TangibleK program uses robotics as a tool to engage children in developing computational thinking and learning about the engineering design process. It includes a research-based, developmentally appropriate robotics kit that children can use to make robots and program their behaviors. The curriculum has been piloted in kindergarten classrooms and in summer camps and lab settings. The author presents the theoretical framework that informs TangibleK and the \u201cpowerful ideas\u201d from computer science and robotics on which the curriculum is based, linking them to other disciplinary areas and developmental characteristics of early childhood. The article concludes with a description of classroom pedagogy and activities, along with assessment tools used to evaluate learning outcomes."}
{"_id": "9154ea2ddc06a0ffd0d5909520495ac16a467a64", "title": "Associations between young adults' use of sexually explicit materials and their sexual preferences, behaviors, and satisfaction.", "text": "This study examined how levels of sexually explicit material (SEM) use during adolescence and young adulthood were associated with sexual preferences, sexual behaviors, and sexual and relationship satisfaction. Participants included 782 heterosexual college students (326 men and 456 women; M(age)\u00a0=\u00a019.9) who completed a questionnaire online. Results revealed high frequencies and multiple types and contexts of SEM use, with men's usage rates systematically higher than women's. Regression analyses revealed that both the frequency of SEM use and number of SEM types viewed were uniquely associated with more sexual experience (a higher number of overall and casual sexual intercourse partners, as well as a lower age at first intercourse). Higher frequencies of SEM use were associated with less sexual and relationship satisfaction. The frequency of SEM use and number of SEM types viewed were both associated with higher sexual preferences for the types of sexual practices typically presented in SEM. These findings suggest that SEM use can play a significant role in a variety of aspects of young adults' sexual development processes."}
{"_id": "5e1059a5ae0dc94e83a269b8d5c0763d1e6851e2", "title": "Improved Collision Attack on Hash Function MD5", "text": "In this paper, we present a fast attack algorithm to find two-block collision of hash function MD5. The algorithm is based on the two-block collision differential path of MD5 that was presented by Wang et al. in the Conference EUROCRYPT 2005. We found that the derived conditions for the desired collision differential path were not sufficient to guarantee the path to hold and that some conditions could be modified to enlarge the collision set. By using technique of small range searching and omitting the computing steps to check the characteristics in the attack algorithm, we can speed up the attack of MD5 efficiently. Compared with the Advanced Message Modification technique presented by Wang et al., the small range searching technique can correct 4 more conditions for the first iteration differential and 3 more conditions for the second iteration differential, thus improving the probability and the complexity to find collisions. The whole attack on the MD5 can be accomplished within 5 hours using a PC with Pentium4 1.70GHz CPU."}
{"_id": "808d7f0e5385210422751f1a9a6fd0f07cda647d", "title": "Complementary approaches built as web service for arabic handwriting OCR systems via amazon elastic mapreduce (EMR) model", "text": "Arabic Optical Character Recognition (OCR) as Web Services represents a major challenge for handwritten document recognition. A variety of approaches, methods, algorithms and techniques have been proposed in order to build powerful Arabic OCR web services. Unfortunately, these methods could not succeed in achieving this mission in case of large large quantity Arabic handwritten documents. Intensive experiments and observations revealed that some of the existing approaches and techniques are complementary and can be combined to improve the recognition rate. Designing and implementing these recent sophisticated complementary approaches and techniques as web services are commonly complex; they require strong computing power to reach an acceptable recognition speed especially in case of large quantity documents. One of the possible solutions to overcome this problem is to benefit from distributed computing architectures such as cloud computing. This paper describes the design and implementation of Arabic Handwriting Recognition as a web service (AHRweb service) based on the complementary approach K-Nearest Neighbor (KNN) /Support Vector Machine (SVM) (K-NN/SVM) via Amazon Elastic MapReduce (EMR) model. The experiments were conducted on a cloud computing environment with a real large scale handwriting dataset from the Institute for Communications Technology (IFN)/ Ecole Nationale d\u2019Ing\u00e9nieur de Tunis (ENIT) IFN/ENIT database. The J-Sim (Java Simulator) was used as a tool to generate and analyze statistical results. Experimental results show that Amazon Elastic MapReduce (EMR) model constitutes a very promising framework for enhancing large AHRweb service performances."}
{"_id": "6c0e967829480c4eb85800d5750300337fecdd7c", "title": "Improving returns on stock investment through neural network selection", "text": "Artificial neural networks\u2019 (ANNs\u2019) generalization powers have in recent years received admiration of finance researchers and practitioners. Their usage in such areas as bankruptcy prediction, debt-risk assessment, and security-market applications has yielded promising results. With such intensive research and proven ability of the ANN in the area of security-market application and the growing importance of the role of equity securities in Singapore, it has motivated the conceptual development of this work in using the ANN in stock selection. With their proven generalization ability, neural networks are able to infer the characteristics of performing stocks from the historical patterns. The performance of stocks is reflective of the profitability and quality of management of the underlying company. Such information is reflected in financial and technical variables. As such, the ANN is used as a tool to uncover the intricate relationships between the performance of stocks and the related financial and technical variables. Historical data, such as financial variables (inputs) and performance of the stock (output) is used in this ANN application. Experimental results obtained thus far have been very encouraging. IDEA GROUP PUBLISHING This paper appears in the publication, Artificial Neural Networks in Finance and Manufacturing edited by Joarder Kamruzzaman, Rezaul Begg, and Ruhul Sarker\u00a9 2006, Idea Group Inc. 701 E. Chocolate Avenue, Suite 200, Hershey PA 17033-1240, USA Tel: 717/533-8845; Fax 717/533-8661; URL-http://www.idea-group.com ITB13014 Improving Returns on Stock Investment through Neural Network Selection 153 Copyright \u00a9 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited. Introduction With the growing importance in the role of equities to both international and local investors, the selection of attractive stocks is of utmost importance to ensure a good return. Therefore, a reliable tool in the selection process can be of great assistance to these investors. An effective and efficient tool/system gives the investor the competitive edge over others as he/she can identify the performing stocks with minimum effort. In assisting the investors in their decision-making process, both the academics and practitioners have devised trading strategies, rules, and concepts based on fundamental and technical analysis. Innovative investors opt to employ information technology to improve the efficiency in the process. This is done through transforming trading strategies into computer-known language so as to exploit the logical processing power of the computer. This greatly reduces the time and effort in short-listing the list of attractive stocks. In the age where information technology is dominant, such computerized rule-based expert systems have severe limitations that will affect their effectiveness and efficiency. In particular, their inability in handling nonlinear relationships between financial variables and stock prices has been a major shortcoming. However, with the significant advancement in the field of ANNs, these limitations have found a solution. In this work, the generalization ability of the ANN is being harnessed in creating an effective and efficient tool for stock selection. 
Results of the research in this field have so far been very encouraging. Application of Neural Network in Stock Investment: One of the earliest studies was by Halquist and Schmoll (1989), who used a neural network model to predict trends in the S&P 500 index. They found that the model was able to predict the trends 61% of the time. This was followed by Trippi and DeSieno (1992) and Grudnitski and Osburn (1993). Trippi and DeSieno (1992) devised an S&P 500 trading system that consisted of several trained neural networks and a set of rules for combining the network results to generate a composite recommended trading strategy. The trading system was used to predict S&P 500 index futures and the results showed that this system significantly outperformed the passive buy-and-hold strategy. Grudnitski and Osburn (1993) used a neural network to predict the monthly price changes and trading return in the S&P 500 index futures. The results showed that the neural network was able to predict correctly 75% of the time and gave a positive return above risk. Another work on predicting S&P 500 index futures was by Tsaih, Hsu, and Lai (1998). Similar to Trippi and DeSieno (1992), Tsaih et al. (1998) also integrated a rule-based system technique with a neural network to produce a trading system. However, in the Tsaih et al. (1998) study, they used reasoning neural networks instead of the backpropagation method used by Trippi and DeSieno (1992). Empirical results in the daily prediction of price changes in the S&P 500 index futures showed that this hybrid artificial-intelligence (AI) approach outperformed the passive buy-and-hold investment strategy."}
{"_id": "f3399ba97e3bc8ea29967c5e42eb669edc8d9e59", "title": "Efficient Adaptive Nonlinear Filters for Nonlinear Active Noise Control", "text": "In this paper, we treat nonlinear active noise control (NANC) with a linear secondary path (LSP) and with a nonlinear secondary path (NSP) in a unified structure by introducing a new virtual secondary path filter concept and using a general function expansion nonlinear filter. We discover that using the filtered-error structure results in greatly reducing the computational complexity of NANC. As a result, we extend the available filtered-error-based algorithms to solve NANC/LSP problems and, furthermore, develop our adjoint filtered-error-based algorithms for NANC/NSP. This family of algorithms is computationally efficient and possesses a simple structure. We also find that the computational complexity of NANC/NSP can be reduced even more using block-oriented nonlinear models, such as the Wiener, Hammerstein, or linear-nonlinear-linear (LNL) models for the NSP. Finally, we use the statistical properties of the virtual secondary path and the robustness of our proposed methods to further reduce the computational complexity and simplify the implementation structure of NANC/NSP when the NSP satisfies certain conditions. Computational complexity and simulation results are given to confirm the efficiency and effectiveness of all of our proposed methods"}
{"_id": "c6894120a6475bdee28770300a335638308e4f3e", "title": "Routing perturbation for enhanced security in split manufacturing", "text": "Split manufacturing can mitigate security vulnerabilities at untrusted foundries by exposing only partial designs. Even so, attackers can make educated guess according to design conventions and thereby recover entire chip designs. In this work, a routing perturbation-based defense method is proposed such that such attacks become very difficult while wirelength/timing overhead is restricted to be very small. Experimental results on benchmark circuits confirm the effectiveness of the proposed techniques. The new techniques also significantly outperform the latest previous work."}
{"_id": "12f8ae73af6280171a1c5fb8818320427c4f255f", "title": "Music emotion classification: a fuzzy approach", "text": "Due to the subjective nature of human perception, classification of the emotion of music is a challenging problem. Simply assigning an emotion class to a song segment in a deterministic way does not work well because not all people share the same feeling for a song. In this paper, we consider a different approach to music emotion classification. For each music segment, the approach determines how likely the song segment belongs to an emotion class. Two fuzzy classifiers are adopted to provide the measurement of the emotion strength. The measurement is also found useful for tracking the variation of music emotions in a song. Results are shown to illustrate the effectiveness of the approach."}
{"_id": "9d4a2052502fe2c60e48d7c8997da1269df6f48f", "title": "Automatic mood detection and tracking of music audio signals", "text": "Music mood describes the inherent emotional expression of a music clip. It is helpful in music understanding, music retrieval, and some other music-related applications. In this paper, a hierarchical framework is presented to automate the task of mood detection from acoustic music data, by following some music psychological theories in western cultures. The hierarchical framework has the advantage of emphasizing the most suitable features in different detection tasks. Three feature sets, including intensity, timbre, and rhythm are extracted to represent the characteristics of a music clip. The intensity feature set is represented by the energy in each subband, the timbre feature set is composed of the spectral shape features and spectral contrast features, and the rhythm feature set indicates three aspects that are closely related with an individual's mood response, including rhythm strength, rhythm regularity, and tempo. Furthermore, since mood is usually changeable in an entire piece of classical music, the approach to mood detection is extended to mood tracking for a music piece, by dividing the music into several independent segments, each of which contains a homogeneous emotional expression. Preliminary evaluations indicate that the proposed algorithms produce satisfactory results. On our testing database composed of 800 representative music clips, the average accuracy of mood detection achieves up to 86.3%. We can also on average recall 84.1% of the mood boundaries from nine testing music pieces."}
{"_id": "fffda2474b1754897c854e0c322796f1fc9f4db8", "title": "Psychological testing and assessment in the 21st century.", "text": "As spin-offs of the current revolution in the cognitive and neurosciences, clinical neuropsychologists in the 21st century will be using biological tests of intelligence and cognition that record individual differences in brain functions at the neuromolecular, neurophysiologic, and neurochemical levels. Assessment of patients will focus more on better use of still intact functions, as well as rehabilitating or bypassing impaired functions, than on diagnosis, as is the focus today. Better developed successors to today's scales for assessing personal competency and adaptive behavior, as well as overall quality of life, also will be in wide use in clinical settings. With more normal individuals, use of new generations of paper-and-pencil inventories, as well as biological measures for assessing differences in interests, attitudes, personality styles, and predispositions, is predicted."}
{"_id": "4ccb0d37c69200dc63d1f757eafb36ef4853c178", "title": "Musical genre classification of audio signals", "text": "Musical genres are categorical labels created by humans to characterize pieces of music. A musical genre is characterized by the common characteristics shared by its members. These characteristics typically are related to the instrumentation, rhythmic structure, and harmonic content of the music. Genre hierarchies are commonly used to structure the large collections of music available on the Web. Currently musical genre annotation is performed manually. Automatic musical genre classification can assist or replace the human user in this process and would be a valuable addition to music information retrieval systems. In addition, automatic musical genre classification provides a framework for developing and evaluating features for any type of content-based analysis of musical signals. In this paper, the automatic classification of audio signals into an hierarchy of musical genres is explored. More specifically, three feature sets for representing timbral texture, rhythmic content and pitch content are proposed. The performance and relative importance of the proposed features is investigated by training statistical pattern recognition classifiers using real-world audio collections. Both whole file and real-time frame-based classification schemes are described. Using the proposed feature sets, classification of 61% for ten musical genres is achieved. This result is comparable to results reported for human musical genre classification."}
{"_id": "667ba06f64d3687d290a899deb2fdf7dabe7fd1c", "title": "A fuzzy K-nearest neighbor algorithm", "text": "Classification of objects is an important area of research and application in a variety of fields. In the presence of full knowledge of the underlying probabilities, Bayes decision theory gives optimal error rates. In those cases where this information is not present, many algorithms make use of distance or similarity among samples as a means of classification. The K-nearest neighbor decision rule has often been used in these pattern recognition problems. One of the difficulties that arises when utilizing this technique is that each of the labeled samples is given equal importance in deciding the class memberships of the pattern to be classified, regardless of their `typicalness'. The theory of fuzzy sets is introduced into the K-nearest neighbor technique to develop a fuzzy version of the algorithm. Three methods of assigning fuzzy memberships to the labeled samples are proposed, and experimental results and comparisons to the crisp version are presented."}
{"_id": "70fa7c2a8a12509da107fe82b04d5e422120984f", "title": "Statistical Learning Algorithms Applied to Automobile Insurance", "text": "We recently conducted a research project for a large North American automobile insurer. This study was the most exhaustive ever undertaken by this particular insurer and lasted over an entire year. We analyzed the discriminating power of each variable used for ratemaking. We analyzed the performance of several models within five broad categories: linear regressions, generalized linear models, decision trees, neural networks and support vector machines. In this paper, we present the main results of this study. We qualitatively compare models and show how neural networks can represent high-order nonlinear dependencies with a small number of parameters, each of which is estimated on a large proportion of the data, thus yielding low variance. We thoroughly explain the purpose of the nonlinear sigmoidal transforms which are at the very heart of neural networks\u2019 performances. The main numerical result is a statistically significant reduction in the out-of-sample meansquared error using the neural network model and our ability to substantially reduce the median premium by charging more to the highest risks. This in turn can translate into substantial savings and financial benefits for an insurer. We hope this paper goes a long way in convincing actuaries to include neural networks within their set of modeling tools for ratemaking."}
{"_id": "f2a2c5129cf5656af7acc7ffaf84c9c9bafe72c5", "title": "Knowledge sharing in open source software communities: motivations and management", "text": "Copyright \u00a9 and Moral Rights are retained by the author(s) and/ or other copyright owners. A copy can be downloaded for personal non-commercial research or study, without prior permission or charge. This item cannot be reproduced or quoted extensively from without first obtaining permission in writing from the copyright holder(s). The content must not be changed in any way or sold commercially in any format or medium without the formal permission of the copyright holders."}
{"_id": "ae80f715274308a7564b400081efb74e21e3c1ea", "title": "Hybris: Robust Hybrid Cloud Storage", "text": "Besides well-known benefits, commodity cloud storage also raises concerns that include security, reliability, and consistency. We present Hybris key-value store, the first robust hybrid cloud storage system, aiming at addressing these concerns leveraging both private and public cloud resources.\n Hybris robustly replicates metadata on trusted private premises (private cloud), separately from data which is dispersed (using replication or erasure coding) across multiple untrusted public clouds. Hybris maintains metadata stored on private premises at the order of few dozens of bytes per key, avoiding the scalability bottleneck at the private cloud. In turn, the hybrid design allows Hybris to efficiently and robustly tolerate cloud outages, but also potential malice in clouds without overhead. Namely, to tolerate up to f malicious clouds, in the common case of the Hybris variant with data replication, writes replicate data across f + 1 clouds, whereas reads involve a single cloud. In the worst case, only up to f additional clouds are used. This is considerably better than earlier multi-cloud storage systems that required costly 3f + 1 clouds to mask f potentially malicious clouds. Finally, Hybris leverages strong metadata consistency to guarantee to Hybris applications strong data consistency without any modifications to the eventually consistent public clouds.\n We implemented Hybris in Java and evaluated it using a series of micro and macrobenchmarks. Our results show that Hybris significantly outperforms comparable multi-cloud storage systems and approaches the performance of bare-bone commodity public cloud storage."}
{"_id": "18739734d4f69a4a399b7d48a2d242d60206c7aa", "title": "JCLEC: a Java framework for evolutionary computation", "text": "In this paper we describe JCLEC, a Java software system for the development of evolutionary computation applications. This system has been designed as a framework, applying design patterns to maximize its reusability and adaptability to new paradigms with a minimum of programming effort. JCLEC architecture comprises three main modules: the core contains all abstract type definitions and their implementation; experiments runner is a scripting environment to run algorithms in batch mode; finally, GenLab is a graphical user interface that allows users to configure an algorithm, to execute it interactively and to visualize the results obtained. The use of JCLEC system is illustrated though the analysis of one case study: the resolution of the 0/1 knapsack problem by means of evolutionary algorithms."}
{"_id": "a900eee711ba307e5adfd0408e57987714198382", "title": "Developing Competency Models to Promote Integrated Human Resource Practices \u2022 309 DEVELOPING COMPETENCY MODELS TO PROMOTE INTEGRATED HUMAN RESOURCE PRACTICES", "text": "Today, competencies are used in many facets of human resource management, ranging from individual selection, development, and performance management to organizational strategic planning. By incorporating competencies into job analysis methodologies, the Office of Personnel Management (OPM) has developed robust competency models that can form the foundation for each of these initiatives. OPM has placed these models into automated systems to ensure access for employees, human resources professionals, and managers. Shared access to the data creates a shared frame of reference and a common language of competencies that have provided the basis for competency applications in public sector agencies. \u00a9 2002 Wiley Periodicals, Inc."}
{"_id": "6200f1499cbd41104284946476b082cb754077b1", "title": "Progressive Stochastic Learning for Noisy Labels", "text": "Large-scale learning problems require a plethora of labels that can be efficiently collected from crowdsourcing services at low cost. However, labels annotated by crowdsourced workers are often noisy, which inevitably degrades the performance of large-scale optimizations including the prevalent stochastic gradient descent (SGD). Specifically, these noisy labels adversely affect updates of the primal variable in conventional SGD. To solve this challenge, we propose a robust SGD mechanism called progressive stochastic learning (POSTAL), which naturally integrates the learning regime of curriculum learning (CL) with the update process of vanilla SGD. Our inspiration comes from the progressive learning process of CL, namely learning from \u201ceasy\u201d tasks to \u201ccomplex\u201d tasks. Through the robust learning process of CL, POSTAL aims to yield robust updates of the primal variable on an ordered label sequence, namely, from \u201creliable\u201d labels to \u201cnoisy\u201d labels. To realize POSTAL mechanism, we design a cluster of \u201cscreening losses,\u201d which sorts all labels from the reliable region to the noisy region. To sum up, POSTAL using screening losses ensures robust updates of the primal variable on reliable labels first, then on noisy labels incrementally until convergence. In theory, we derive the convergence rate of POSTAL realized by screening losses. Meanwhile, we provide the robustness analysis of representative screening losses. Experimental results on UCI1 simulated and Amazon Mechanical Turk crowdsourcing data sets show that the POSTAL using screening losses is more effective and robust than several existing baselines.1UCI is the abbreviation of University of California Irvine."}
{"_id": "efe03a2940e09547bb15035d35e7e07ed59848bf", "title": "JHU-ISI Gesture and Skill Assessment Working Set ( JIGSAWS ) : A Surgical Activity Dataset for Human Motion Modeling", "text": "Dexterous surgical activity is of interest to many researchers in human motion modeling. In this paper, we describe a dataset of surgical activities and release it for public use. The dataset was captured using the da Vinci Surgical System and consists of kinematic and video from eight surgeons with different levels of skill performing five repetitions of three elementary surgical tasks on a bench-top model. The tasks, which include suturing, knot-tying and needle-passing, are standard components of most surgical skills training curricula. In addition to kinematic and video data captured from the da Vinci Surgical System, we are also releasing manual annotations of surgical gestures (atomic activity segments), surgical skill using global rating scores, a standardized cross-validation experimental setup, and a C++/Matlab toolkits for analyzing surgical gestures using hidden Markov models and using linear dynamical systems. We refer to the dataset as the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) to indicate the collaboration between Johns Hopkins University (JHU) and Intuitive Surgical Inc. (ISI), Sunnyvale, CA, on collecting these data."}
{"_id": "d7544535a3424d946d312f19434d9389ffd0c32b", "title": "Robust model predictive control for an uncertain smart thermal grid", "text": "The focus of this paper is on modeling and control of Smart Thermal Grids (STGs) in which the uncertainties in the demand and/or supply are included. We solve the corresponding robust model predictive control (MPC) optimization problem using mixed-integer-linear programming techniques to provide a day-ahead prediction for the heat production in the grid. In an example, we compare the robust MPC approach with the robust optimal control approach, in which the day-ahead production plan is obtained by optimizing the objective function for entire day at once. There, we show that the robust MPC approach successfully keeps the supply-demand balance in the STG while satisfying the constraints of the production units in the presence of uncertainties in the heat demand. Moreover, we see that despite the longer computation time, the performance of the robust MPC controller is considerably better than the one of the robust optimal controller."}
{"_id": "759d9a6c9206c366a8d94a06f4eb05659c2bb7f2", "title": "Toward Open Set Recognition", "text": "To date, almost all experimental evaluations of machine learning-based recognition algorithms in computer vision have taken the form of \u201cclosed set\u201d recognition, whereby all testing classes are known at training time. A more realistic scenario for vision applications is \u201copen set\u201d recognition, where incomplete knowledge of the world is present at training time, and unknown classes can be submitted to an algorithm during testing. This paper explores the nature of open set recognition and formalizes its definition as a constrained minimization problem. The open set recognition problem is not well addressed by existing algorithms because it requires strong generalization. As a step toward a solution, we introduce a novel \u201c1-vs-set machine,\u201d which sculpts a decision space from the marginal distances of a 1-class or binary SVM with a linear kernel. This methodology applies to several different applications in computer vision where open set recognition is a challenging problem, including object recognition and face verification. We consider both in this work, with large scale cross-dataset experiments performed over the Caltech 256 and ImageNet sets, as well as face matching experiments performed over the Labeled Faces in the Wild set. The experiments highlight the effectiveness of machines adapted for open set evaluation compared to existing 1-class and binary SVMs for the same tasks."}
{"_id": "00960cb3f5a74d23eb5ded93f1aa717b9c6e6851", "title": "Input Warping for Bayesian Optimization of Non-stationary Functions", "text": "Bayesian optimization has proven to be a highly effective methodology for the global optimization of unknown, expensive and multimodal functions. The ability to accurately model distributions over functions is critical to the effectiveness of Bayesian optimization. Although Gaussian processes provide a flexible prior over functions, there are various classes of functions that remain difficult to model. One of the most frequently occurring of these is the class of non-stationary functions. The optimization of the hyperparameters of machine learning algorithms is a problem domain in which parameters are often manually transformed a priori, for example by optimizing in \u201clog-space,\u201d to mitigate the effects of spatially-varying length scale. We develop a methodology for automatically learning a wide family of bijective transformations or warpings of the input space using the Beta cumulative distribution function. We further extend the warping framework to multi-task Bayesian optimization so that multiple tasks can be warped into a jointly stationary space. On a set of challenging benchmark optimization tasks, we observe that the inclusion of warping greatly improves on the state-of-the-art, producing better results faster and more reliably."}
{"_id": "6e6740bc09ca3fe013ff9029cca2830aec834829", "title": "CVAP: Validation for Cluster Analyses", "text": "Evaluation of clustering results (or cluster validation) is an important and necessary step in cluster analysis, but it is often time-consuming and complicated work. We present a visual cluster validation tool, the Cluster Validity Analysis Platform (CVAP), to facilitate cluster validation. The CVAP provides necessary methods (e.g., many validity indices, several clustering algorithms and procedures) and an analysis environment for clustering, evaluation of clustering results, estimation of the number of clusters, and performance comparison among different clustering algorithms. It can help users accomplish their clustering tasks faster and easier and help achieve good clustering quality when there is little prior knowledge about the cluster structure of a data set."}
{"_id": "fcc7541e27ee3ef817b21a7fcd362030cbaa3f69", "title": "A New High-Temperature Superconducting Vernier Permanent-Magnet Machine for Wind Turbines", "text": "A vernier permanent magnet (VPM) machine using high-temperature superconducting (HTS) winding for dual excitation is proposed for wind power generation. The proposed VPM machine adopts the parallel hybrid excitation configuration, the stator consists of an identical tooth pair connected together by the stator yoke and the rotor contains two sets of consequent-pole permanent magnets joined together by the rotor yoke. The armature winding is located at the stator main slots while the field winding using HTS coils is wound on the stator yoke. With the benefit of the HTS field winding, the air-gap flux can be flexibly adjusted by controlling the magnitude of the dc current. Therefore, the machine performance is improved to cope with different operating conditions."}
{"_id": "9015a38343d71b7972af6efc9c2a224807662e8f", "title": "The Neglect of Power in Recent Framing Research By", "text": "This article provides a critique of recent developments in research examining media frames and their influence. We contend that a number of trends in framing research have neglected the relationship between media frames and broader issues of political and social power. This neglect is a product of a number of factors, including conceptual problems in the definition of frames, the inattention to frames sponsorship, the failure to examine framing contests within wider political and social contexts, and the reduction of framing to a form of media effects. We conclude that framing research needs to be linked to the political and social questions regarding power central to the media hegemony thesis, and illustrate this focus by exploring how framing research can contribute to an understanding of the interaction between social movements and the news media."}
{"_id": "f4a653d82471463564a0ce966ef8df3e7d9aeea8", "title": "Research on the Matthews Correlation Coefficients Metrics of Personalized Recommendation Algorithm Evaluation", "text": "The personalized recommendation systems could better improve the personalized service for network user and alleviate the problem of information overload in the Internet. As we all know, the key point of being a successful recommendation system is the performance of recommendation algorithm. When scholars put forward some new recommendation algorithms, they claim that the new algorithms have been improved in some respects, better than previous algorithm. So we need some evaluation metrics to evaluate the algorithm performance. Due to the scholar didn\u2019t fully understand the evaluation mechanism of recommendation algorithms. They mainly emphasized some specific evaluation metrics like Accuracy, Diversity. What\u2019s more, the academia did not establish a complete and unified assessment of recommendation algorithms evaluation system which is credibility to do the work of recommendation evaluation. So how to do this work objective and reasonable is still a challengeable task. In this article, we discussed the present evaluation metrics with its respective advantages and disadvantages. Then, we put forward to use the Matthews Correlation Coefficient to evaluate the recommendation algorithm\u2019s performance. All this based on an open source projects called mahout which provides a rich set of components to construct the classic recommendation algorithm. The results of the experiments show that the applicability of Matthews correlation coefficient in the relative evaluation work of recommendation algorithm."}
{"_id": "e9b217b79401fde11ec0dc862f22179868721b0a", "title": "Implementation of a fuzzy logic speed controller for a permanent magnet dc motor using a low-cost Arduino platform", "text": "Implementation of a Fuzzy Logic speed controller for a Permanent Magnet DC (PMDC) motor using a low-cost Arduino interfaced with a standard xPC Target MATLAB environment is demonstrated. The Arduino solution is acceptable at kHz sampling frequencies. The study of speed control of PMDC motor based on Arduino and L298N H-Bridge using an artificial intelligence control method, such as Fuzzy PID with only four rules, has been validated by experiments on the physical system."}
{"_id": "f836b7315ac839f9728cc58d0c9ee6e438ebefa9", "title": "An embedded, GSM based, multiparameter, realtime patient monitoring system and control \u2014 An implementation for ICU patients", "text": "Wireless, remote patient monitoring system and control using feedback and GSM technology is used to monitor the different parameters of an ICU patient remotely and also control over medicine dosage is provided. Measurement of vital parameters can be done remotely and under risk developing situation can be conveyed to the physician with alarm triggering systems in order to initiate the proper control actions. In the implemented system a reliable and efficient real time remote patient monitoring system that can play a vital role in providing better patient care is developed. This system enables expert doctors to monitor vital parameters viz body temperature, blood pressure and heart rate of patients in remote areas of hospital as well as he can monitor the patient when he is out of the premises. The system in addition also provides a feedback to control the dosage of medicine to the patient as guided by the doctor remotely, in response to the health condition message received by the doctor. Mobile phones transfer measured parameters via SMS to clinicians for further analysis or diagnosis. The timely manner of conveying the real time monitored parameter to the doctor and control action taken by him is given high priority which is very much needed and which is the uniqueness of the developed system. The system even facilitates the doctor to monitor the patient's previous history from the data in memory inbuilt in the monitoring device. Also data can be sent to several doctors incase a doctor fails to respond urgently."}
{"_id": "18faf4efa0e186e39658ab97676accb26b09733d", "title": "Exploring the relationship between knowledge management practices and innovation performance", "text": "The process of innovation depends heavily on knowledge, and the management of knowledge and human capital should be an essential element of running any type of business. Recent research indicates that organisations are not consistent in their approach to knowledge management (KM), with KM approaches being driven predominantly within an information technology (IT) or humanist framework, with little if any overlap. This paper explores the relationship between KM approaches and innovation performance through a preliminary study focusing on the manufacturing industry. The most significant implication that has emerged from the study is that managers in manufacturing firms should place more emphasis on human resource management (HRM) practices when developing innovation strategies for product and process innovations. The study shows that KM contributes to innovation performance when a simultaneous approach of \u201csoft HRM practices\u201d and \u201chard IT practices\u201d are implemented."}
{"_id": "4e79c933ed0ccc1bef35605dec91ec87089c18bb", "title": "Schedulability Analysis Using Uppaal: Herschel-Planck Case Study", "text": "We propose a modeling framework for performing schedulability analysis by using Uppaal real-time model-checker [2]. The framework is inspired by a case study where schedulability analysis of a satellite system is performed. The framework assumes a single CPU hardware where a fixed priority preemptive scheduler is used in a combination with two resource sharing protocols and in addition voluntary task suspension is considered. The contributions include the modeling framework, its application on an industrial case study and a comparison of results with classical response time analysis."}
{"_id": "2e9aea8b1a8d2e9725cf71253deaa2a406dfd7cc", "title": "Security trade-offs in Cloud storage systems", "text": "Securing software systems, is of paramount importance today. This is especially true for cloud systems, as they can be attacked nearly anytime from everywhere over the Internet. Ensuring security in general, however, typically has a negative impact on performance, usability, and increases the system\u2019s complexity. For cloud systems, which are built for high performance and availability as well as elastic scalability, security, therefore, may annihilate many of these original quality properties. Consequently, security, especially for cloud systems, can be understood as a trade-off problem where security mechanisms, applied to protect the system from specific threats and to achieve specific security goals, trade in security for other quality properties. For Cloud Storage Systems (CSS)\u2014i.e., distributed storage systems that replicate data over a cluster of nodes in order to manage huge data amounts, highly volatile query load (elastic scalability), and extraordinary needs for availability and fault-tolerance\u2014this trade-off problem is particularly prominent. Already, the different original quality properties of CSS cannot be provided at the same time and, thus, lead to fundamental trade-offs such as the trade-off between consistency, availability, and partition tolerance (see, e.g.: [53]). The piggybacked trade-offs coming from considering security as an additionally wanted quality property of such systems lead to further trade-offs that must be decided and managed. In this thesis, we focus on the trade-offs between security and performance in CSS. In order to not contradict the original design goals of CSS, a sensible management of these trade-offs in CSS requires a high degree of understanding of the relationships between security and performance in the security mechanisms of a specific CSS. Otherwise, this can lead to a badly configured CSS which is a security risk or consumes a lot of resources unnecessarily. This thesis, hence, aims at enhancing the understanding of the trade-offs between security and performance in CSS as well as at improving the management of these trade-offs in such systems. For this, we present three independent contributions in this thesis. The first contribution intends to improve the overall understanding of security in and security requirements for CSS. We present two reference usage models that support a security engineer in understanding the general usage of cloud storage"}
{"_id": "b53e4c232833a8e663a9cf15dcdd050ff801c05c", "title": "Detecting Insider Threats Using RADISH: A System for Real-Time Anomaly Detection in Heterogeneous Data Streams", "text": "We present a scalable system for high-throughput real-time analysis of heterogeneous data streams. Our architecture enables incremental development of models for predictive analytics and anomaly detection as data arrives into the system. In contrast with batch data-processing systems, such as Hadoop, that can have high latency, our architecture allows for ingest and analysis of data on the fly, thereby detecting and responding to anomalous behavior in near real time. This timeliness is important for applications such as insider threat, financial fraud, and network intrusions. We demonstrate an application of this system to the problem of detecting insider threats, namely, the misuse of an organization's resources by users of the system and present results of our experiments on a publicly available insider threat dataset."}
{"_id": "4a8c571c9fd24376556c41c417bba0d6be0dc9d3", "title": "Accept or appropriate ? A design-oriented critique on technology acceptance models", "text": "Technology acceptance models are tools for predicting users\u2019 reception of technology by measuring how they rate statements on a questionnaire scale. It has been claimed that these tools help to assess the social acceptance of a final IT product when its development is still underway. However, their use is not without problems. This paper highlights some of the underlying shortcomings that arise particularly from a simplistic conception of \u201cacceptance\u201d that does not recognize the possibility that users can invent new uses for (i.e., appropriate) technology in many situations. This lack of recognition can easily lead one to assume that users are passive absorbers of technological products, so that every user would adopt the same usages irrespective of the context of use, the differences in work tasks, or the characteristics of interpersonal cooperation. In light of recent research on appropriation, technology use must actually be understood in a more heterogeneous way, as a process through which different users find the product useful in different ways. This paper maintains that if, in fact, a single technology can be used for multiple purposes, then subscribing to the thinking arising from technology acceptance model research may actually lead one into suboptimal design solutions and thus also compromise user acceptance. Therefore, this paper also presents some starting points for designing specifically for easier technology appropriation."}
{"_id": "44b8c1fbe10cdff678969a223139d339d1f08f3e", "title": "Privacy protection based access control scheme in cloud-based services", "text": "With the rapid development of computer technology, cloud-based services have become a hot topic. They not only provide users with convenience, but also bring many security issues, such as data sharing and privacy issue. In this paper, we present an access control system with privilege separation based on privacy protection (PS-ACS). In the PS-ACS scheme, we divide users into private domain (PRD) and public domain (PUD) logically. In PRD, to achieve read access permission and write access permission, we adopt the Key-Aggregate Encryption (KAE) and the Improved Attribute-based Signature (IABS) respectively. In PUD, we construct a new multi-authority ciphertext policy attribute-based encryption (CP-ABE) scheme with efficient decryption to avoid the issues of single point of failure and complicated key distribution, and design an efficient attribute revocation method for it. The analysis and simulation result show that our scheme is feasible and superior to protect users' privacy in cloud-based services."}
{"_id": "aa636c3653b3b5557759cf7fb2cfe5eccf883fa1", "title": "Liquid Leakage Thin Film-Tape Sensor", "text": "A liquid leakage sensor on adhesive tape with alarming function has been developed. The film sensor tape consists of three layers: a base film layer, a substrate film layer, and a protective film layer. Three conductive lines and one resistive line are printed on the substrate film layer with a nano-sized silver conductive ink by the electronic gravure printing method. The location of liquid leakage has been found to be monitored accurately by applying the pulse voltage to the conductive and resistance lines in a periodic polarity switching mode. The leakage tests for various electrolyte liquids have been performed and the accuracy of leakage error position has been achieved to be about 0.25% on 200 meter long tape length."}
{"_id": "7113af6cefeece031e00aac386c35fa17a87ca01", "title": "The study on feature selection in customer churn prediction modeling", "text": "When the customer churn prediction model is built, a large number of features bring heavy burdens to the model and even decrease the accuracy. This paper is aimed to review the feature selection, to compare the algorithms from different fields and to design a framework of feature selection for customer churn prediction. Based on the framework, the author experiment on the structured module with some telecom operator's marketing data to verify the efficiency of the feature selection framework."}
{"_id": "a8d18bf53eb945d0179fd33d237a014470def8c5", "title": "Millimeter-wave large-scale phased-arrays for 5G systems", "text": "This talk will present our latest work on silicon RFICs for phased-array applications with emphasis on very large chips with built-in-self-test capabilities for 5G systems. SiGe is shown to be ideal for mm-wave applications due to its high temperature performance (automotive radars, base-stations, defense systems, etc.) and lower power consumption. These chips drastically reduce the cost of microwave and millimeter-wave phased arrays by combining many elements on the same chip, together with digital control and some cases, high-efficiency antennas. The phased-array chips also result in an easier packaging scheme using either a multi-layer PCB or wafer-level packages. We believe that this family of chips will be essential for millimeter-wave 5G communication systems."}
{"_id": "39b58ef6487c893219c77c61c762eee5694d0e36", "title": "SLIQ: A Fast Scalable Classifier for Data Mining", "text": "Classi cation is an important problem in the emerging eld of data mining. Although classi cation has been studied extensively in the past, most of the classi cation algorithms are designed only for memory-resident data, thus limiting their suitability for data mining large data sets. This paper discusses issues in building a scalable classier and presents the design of SLIQ, a new classi er. SLIQ is a decision tree classi er that can handle both numeric and categorical attributes. It uses a novel pre-sorting technique in the tree-growth phase. This sorting procedure is integrated with a breadthrst tree growing strategy to enable classi cation of disk-resident datasets. SLIQ also uses a new tree-pruning algorithm that is inexpensive, and results in compact and accurate trees. The combination of these techniques enables SLIQ to scale for large data sets and classify data sets irrespective of the number of classes, attributes, and examples (records), thus making it an attractive tool for data mining."}
{"_id": "d26b70479bc818ef7079732ba014e82368dbf66f", "title": "Sampling-Based Estimation of the Number of Distinct Values of an Attribute", "text": "We provide several new sampling-based estimators of the number of distinct values of an attribute in a relation. We compare these new estimators to estimators from the database and statistical literature empirically, using a large number of attribute-value distributions drawn from a variety of real-world databases. This appears to be the first extensive comparison of distinct-value estimators in either the database or statistical literature, and is certainly the first to use highlyskewed data of the sort frequently encountered in database applications. Our experiments indicate that a new \u201chybrid\u201d estimator yields the highest precision on average for a given sampling fraction. This estimator explicitly takes into account the degree of skew in the data and combines a new \u201csmoothed jackknife\u201d estimator with an estimator due to Shlosser. We investigate how the hybrid estimator behaves as we scale up the size of the database."}
{"_id": "1f25ed3c9707684cc0cdf3e8321c791bc7164147", "title": "SPRINT: A Scalable Parallel Classifier for Data Mining", "text": "Classification is an important data mining problem. Although classification is a wellstudied problem, most of the current classification algorithms require that all or a portion of the the entire dataset remain permanently in memory. This limits their suitability for mining over large databases. We present a new decision-tree-based classification algorithm, called SPRINT that removes all of the memory restrictions, and is fast and scalable. The algorithm has also been designed to be easily parallelized, allowing many processors to work together to build a single consistent model. This parallelization, also presented here, exhibits excellent scalability as well. The combination of these characteristics makes the proposed algorithm an ideal tool for data mining."}
{"_id": "7c3a4b84214561d8a6e4963bbb85a17a5b1e003a", "title": "Programs for Machine Learning. Part I", "text": ""}
{"_id": "84dae6a2870c68005732b9db6890f375490f2d4e", "title": "Inferring Decision Trees Using the Minimum Description Length Principle", "text": "This paper concerns methods for inferring decision trees from examples for classification problems. The reader who is unfamiliar with this problem may wish to consult J. R. Quinlan\u2019s paper (1986), or the excellent monograph by Breiman et al. (1984), although this paper will be self-contained. This work is inspired by Rissanen\u2019s work on the Minimum description length principle (or MDLP for short) and on his related notion of the stochastic complexity of a string Rissanen, 1986b. The reader may also want to refer to related work by Boulton and Wallace (1968, 1973a, 1973b), Georgeff and Wallace (1984), and Hart (1987). Roughly speaking, the minimum description length principle states that the best \u201ctheory\u201d to infer from a set of data is the one which minimizes the sum of"}
{"_id": "1d093148752f8a57a8989c35ee53adb331dfe485", "title": "A low dropout, CMOS regulator with high PSR over wideband frequencies", "text": "Modern system-on-chip (SoC) environments are swamped in high frequency noise that is generated by RF and digital circuits and propagated onto supply rails through capacitive coupling. In these systems, linear regulators are used to shield noise-sensitive analog blocks from high frequency fluctuations in the power supply. The paper presents a low dropout regulator that achieves power supply rejection (PSR) better than -40 dB over the entire frequency spectrum. The system has an output voltage of 1.0 V and a maximum current capability of 10 mA. It consists of operational amplifiers (op amps), a bandgap reference, a clock generator, and a charge pump and has been designed and simulated using BSIM3 models of a 0.5 /spl mu/m CMOS process obtained from MOSIS."}
{"_id": "18b2d2ac7dbc986c6e9db68d1e53860a9ad20844", "title": "Optical Fiber Sensors for Aircraft Structural Health Monitoring", "text": "Aircraft structures require periodic and scheduled inspection and maintenance operations due to their special operating conditions and the principles of design employed to develop them. Therefore, structural health monitoring has a great potential to reduce the costs related to these operations. Optical fiber sensors applied to the monitoring of aircraft structures provide some advantages over traditional sensors. Several practical applications for structures and engines we have been working on are reported in this article. Fiber Bragg gratings have been analyzed in detail, because they have proved to constitute the most promising technology in this field, and two different alternatives for strain measurements are also described. With regard to engine condition evaluation, we present some results obtained with a reflected intensity-modulated optical fiber sensor for tip clearance and tip timing measurements in a turbine assembled in a wind tunnel."}
{"_id": "cbc1395a4bb8a7540749e056753c139c2b6fff15", "title": "3DMatch: Learning the Matching of Local 3D Geometry in Range Scans", "text": "Establishing correspondences between 3D geometries is essential to a large variety of graphics and vision applications, including 3D reconstruction, localization, and shape matching. Despite significant progress, geometric matching on real-world 3D data is still a challenging task due to the noisy, low-resolution, and incomplete nature of scanning data. These difficulties limit the performance of current state-of-art methods which are typically based on histograms over geometric properties. In this paper, we introduce 3DMatch1, a data-driven local feature learner that jointly learns a geometric feature representation and an associated metric function from a large collection of realworld scanning data. We represent 3D geometry using accumulated distance fields around key-point locations. This representation is suited to handle noisy and partial scanning data, and concurrently supports deep learning with convolutional neural networks directly in 3D. To train the networks, we propose a way to automatically generate correspondence labels for deep learning by leveraging existing RGB-D reconstruction algorithms. In our results, we demonstrate that we are able to outperform state-of-theart approaches by a significant margin. In addition, we show the robustness of our descriptor in a purely geometric sparse bundle adjustment pipeline for 3D reconstruction. 1All of our code, datasets, and trained models are publicly available: http://3dmatch.cs.princeton.edu/"}
{"_id": "c8cff62987122b3a20cc73e644093d6ab6e7bc0a", "title": "A Hybrid Approach to Grapheme to Phoneme Conversion in Assamese", "text": "Assamese is one of the low resource Indian languages. This paper implements both rule-based and data-driven grapheme to phoneme (G2P) conversion systems for Assamese. The rule-based system is used as the baseline which yields a word error rate of 35.3%. The data-driven systems are implemented using state-of-the-art sequence learning techniques such as \u2014i) Joint-Sequence Model (JSM), ii) Recurrent Neural Networks with LTSM cell (LSTM-RNN) and iii) bidirectional LSTM (BiLSTM). The BiLSTM yields the lowest WER i.e., 18.7%, which is an absolute 16.6% improvement on the baseline system. We additionally implement the rules of syllabification for Assamese. The surface output is generated in two forms namely i) phonemic sequence with syllable boundaries, and ii) only phonemic sequence. The output of BiLSTM is fed as an input to Hybrid system. The Hybrid system syllabifies the input phonemic sequences to apply the vowel harmony rules. It also applies the rules of schwa-deletion as well as some rules in which the consonants change their form in clusters. The accuracy of the Hybrid system is 17.3% which is an absolute 1.4% improvement over the BiLSTM based G2P."}
{"_id": "785595f3a813f55f5bc7afeb57afc8d7e5a9e46b", "title": "An innovative cost-effective smart meter with embedded non intrusive load monitoring", "text": "Most of the time, domestic energy usage is invisible to the users. Households have only a vague idea about the amount of energy they are using for different purposes during the day, and the electric bill is usually the only monthly/bimonthly feedback. Making the energy consumption promptly visible can make the difference and foster a really efficient usage of the energy in a city. The emerging smart metering infrastructure has the potential to address some of these goals, by providing both households and electric utilities useful information about energy consumption, with a detailed breakdown to single appliance contribution. This paper presents a novel ultra-low-cost smart metering system capable of providing domestic AC load identification, by implementing both basic/typical smart meter measurement and a simple yet effective load disaggregation algorithm, which can detect and monitor domestic loads starting from a single point of measurement."}
{"_id": "dba7d665367dacedc1915787f3aa158fe24aa4f7", "title": "A Stable and Accurate Marker-Less Augmented Reality Registration Method", "text": "Markerless Augmented Reality (AR) registration using the standard Homography matrix is unstable, and for image-based registration it has very low accuracy. In this paper, we present a new method to improve the stability and the accuracy of marker-less registration in AR. Based on the Visual Simultaneous Localization and Mapping (V-SLAM) framework, our method adds a three-dimensional dense cloud processing step to the state-of-the-art ORB-SLAM in order to deal with mainly the point cloud fusion and the object recognition. Our algorithm for the object recognition process acts as a stabilizer to improve the registration accuracy during the model to the scene transformation process. This has been achieved by integrating the Hough voting algorithm with the Iterative Closest Points(ICP) method. Our proposed AR framework also further increases the registration accuracy with the use of integrated camera poses on the registration of virtual objects. Our experiments show that the proposed method not only accelerates the speed of camera tracking with a standard SLAM system, but also effectively identifies objects and improves the stability of markerless augmented reality applications."}
{"_id": "841c16f0e724fca3209a7a83fae94bec1eaae01c", "title": "A Probabilistic Framework for Real-time 3D Segmentation using Spatial, Temporal, and Semantic Cues", "text": "The best-matching segment is then assigned to this groundtruth bounding box for the evaluation metric described in our paper, as well as for the metric described below. Some previous works have evaluated 3D segmentation using the intersection-over-union metric on 3D points [5]. Note that our method segments the entire scene, as opposed to the method of Wang et al. [5], so the evaluation metric from Wang et al. [5] does not directly apply. However, we could modify the intersection-over-union metric [5] as follows: we can compute the fraction of ground-truth bounding boxes which have an intersection-over-union score less than a threshold \u03c4IOU , as"}
{"_id": "1de62bb648cf86fe0c1165a370aa8539ba28187e", "title": "Mode-Seeking on Hypergraphs for Robust Geometric Model Fitting", "text": "In this paper, we propose a novel geometric model fitting method, called Mode-Seeking on Hypergraphs (MSH), to deal with multi-structure data even in the presence of severe outliers. The proposed method formulates geometric model fitting as a mode seeking problem on a hypergraph in which vertices represent model hypotheses and hyperedges denote data points. MSH intuitively detects model instances by a simple and effective mode seeking algorithm. In addition to the mode seeking algorithm, MSH includes a similarity measure between vertices on the hypergraph and a \"weight-aware sampling\" technique. The proposed method not only alleviates sensitivity to the data distribution, but also is scalable to large scale problems. Experimental results further demonstrate that the proposed method has significant superiority over the state-of-the-art fitting methods on both synthetic data and real images."}
{"_id": "0859b73ef7bd5757e56a429d862a3696029ec20b", "title": "Name Translation in Statistical Machine Translation - Learning When to Transliterate", "text": "We present a method to transliterate names in the framework of end-to-end statistical machine translation. The system is trained to learn when to transliterate. For Arabic to English MT, we developed and trained a transliterator on a bitext of 7 million sentences and Google\u2019s English terabyte ngrams and achieved better name translation accuracy than 3 out of 4 professional translators. The paper also includes a discussion of challenges in name translation evaluation."}
{"_id": "647ed48ec0358b39ee796ab085aa8fb530a98a50", "title": "Suppressing the azimuth ambiguities in synthetic aperture radar images", "text": "This paper proposes a method for removing the azimuth ambiguities from synthetic aperture radar ( S A R ) images. The basic idea is to generate a two-dimensional reference function for S A R processing which provides, in addition to the matched filtering for the unaliased part of the received signal, the deconvolution of the azimuth ambiguities. This approach corresponds to an ideal filter concept, where an ideal impulse response function is obtained even in the presence of severe phase and amplitude errors. Modeling the sampled azimuth signal shows that the absolute phase value of the ambiguities cannot easily be determined due to their undersampling. The concept of the ideal filter is then extended to accommodate the undefined phase of the ambiguities and also the fading of the azimuth signal. Raw data from the E-SAR system have been used to verify the improvement in image quality by using the new method. It has a substantial significance in enabling the pulse-repetition frequency (PRF) constraints in the S A R system design to be relaxed and also for improving S A R image quality and interpretation."}
{"_id": "f2fa76d1fe666fb62c56b9a33732965563d04fe3", "title": "Swara Histogram Based Structural Analysis And Identification Of Indian Classical Ragas", "text": "This work is an attempt towards robust automated analysis of Indian classical ragas through machine learning and signal processing tools and techniques. Indian classical music has a definite heirarchical structure where macro level concepts like thaats and raga are defined in terms of micro entities like swaras and shrutis. Swaras or notes in Indian music are defined only in terms of their relation to one another (akin to the movable do-re-mi-fa system), and an inference must be made from patterns of sounds, rather than their absolute frequency structure. We have developed methods to perform scale-independent raga identification using a random forest classifier on swara histograms and achieved state-of-the-art results for the same. The approach is robust as it directly works on partly noisy raga recordings from Youtube videos without knowledge of the scale used, whereas previous work in this direction often use audios generated in a controlled environment with the desired scale. The current work demonstrates the approach for 8 ragas namely Darbari, Khamaj, Malhar, Sohini, Bahar, Basant, Bhairavi and Yaman and we have achieved an average identification accuracy of 94.28% through the framework."}
{"_id": "35a846c04ce07b76e4deffac1d4cf1d9ebf1de8f", "title": "Monitoring of Micro-sleep and Sleepiness for the Drivers Using EEG Signal", "text": "................................................................................................................................................... 3 ACKNOWLEDGEMENT ................................................................................................................................. 5 LIST OF FIGURES .......................................................................................................................................... 8 LIST OF TABLES ............................................................................................................................................ 9 LIST OF ABBREVIATIONS ............................................................................................................................ 10 INTRODUCTION ......................................................................................................................................... 11 BACKGROUND ........................................................................................................................................... 13 METHODS .................................................................................................................................................. 17 PRE-PROCESSING ................................................................................................................................... 20 FEATURE EXTRACTION ........................................................................................................................... 24 SUPPORT VECTOR MACHINE ................................................................................................................. 31 RELATED WORK ......................................................................................................................................... 41 Extraction of Feature Information in EEG Signal ................................................................................... 41 Sleepiness and health problems in drivers who work driving some kind of vehicle with nocturnal and shift works ............................................................................................................................................. 43 SVM Regression Training with SMO ...................................................................................................... 48 EVALUATION AND CONCLUSION ............................................................................................................... 52 FUTURE WORK ........................................................................................................................................... 55 REFERENCES .............................................................................................................................................. 58"}
{"_id": "cf4453f13598f21a8e4137fa611d54ecaf6f5acf", "title": "Improving Liver Lesion Detection using CT-to-PET FCN and GAN Networks", "text": "We present a brief description of our recent work that was submitted for journal publication introducing a novel system for generation of virtual PET images using CT scans [1]. We combine a fully convolutional network (FCN) with a conditional generative adversarial network (GAN) to synthesize PET images from given input CT images. Clinically, such solutions may reduce the need for the more expensive and radioactive PET/CT scan. Quantitative evaluation was conducted using an existing lesion detection software, combining the synthesized PET as a false positive reduction layer for the detection of malignant lesions in the liver. Current results look promising showing a reduction in the average false positive per case while keeping the same true positive rate. The suggested solution is comprehensive and can be expanded to additional body organs, and different modalities."}
{"_id": "46195459752594da1b3d17885f89e396bad5225f", "title": "Challenges and Solutions for End-to-End Time-Sensitive Optical Networking", "text": "Time sensitive applications rely on deterministic low latency, highly reactive, and trustable networks, as opposed to today's best effort internet. We review existing solutions and propose in this paper Deterministic Dynamic Network as a solution for end-to-end time-sensitive optical networking."}
{"_id": "1497843154ce93565043acd98061742b1420ec21", "title": "Childhood Adversity and Factors Determining Resilience among Undergraduate Students", "text": "Childhood adversity experiences, when not properly managed may result to anti-social behaviour, health risk or psychological problems like severe depression, suicidal behavior in undergraduate students. Hence the study examined childhood adversity experiences and its factors determining resilience among undergraduate students in Oyo State. The study adopted a descriptive survey research; multi-stage random sampling technique was used to select 341 undergraduate students of Ladoke Akintola University of Technology, Ogbomosho and Oyo State. Two research instruments were used to test childhood adversity scale and factors determining resilience (protective family, birth order, community protective factors). The study formulated four research hypotheses which were tested with Pearson Product moment correlation analysis and multiple regression analysis at coefficient level of 0.05. The result of the analysis revealed that level of Adverse Childhood Experience (ACE) among respondents is slightly low with half of the respondents between the ages of 18 years to 20 years. The prominent ACE identified by respondents is physical assaults, home with incidence of substance abuse, victim of sexual abuse, and humiliation from parents. The level of resilience among the respondents is moderate. There was a positive significant relationship between childhood adversity and lifetime resilience undergraduate student in Oyo State (r = 0.272), protective family factor was also associated with lifetime resilience undergraduate student in Oyo State (r = 0.018) there was a significant relationship between birth order and lifetime resilience undergraduate student in Oyo State. (r = 0.794) there was a significant relationship between community protective factor and lifetime resilience undergraduate student o in Oyo State (r = 0.835). The magnitude of relative contribution showed protective family factor (\u03b2 = 0.987), had significant relative contribution; community protective factor (\u03b2 = 0.762), had significant relative contribution; childhood adversity (\u03b2 = 0.724), had significant relative contribution; and then birth order (\u03b2 = 0.687), had significant relative contribution to resilience of the undergraduate students. Educational institutions should tackle issues on educator-student relationships through various channels, especially social work, counseling units and students affairs department. There is need for initial psychosocial assessment for fresher to test resilience and adverse childhood experiences, mental health service should be contextualize in health care service for undergraduate students, the parents should be educated on the impact of childhood adversity on wellbeing and academic performance of undergraduate students in Nigeria."}
{"_id": "76c87ec44fc5dc96bc445abe008deaf7c97c9373", "title": "A 79 GHz differentially fed grid array antenna", "text": "This paper presents a planar grid array antenna with a 100 \u2126 differential microstrip line feed on a single layer of standard soft substrate. The antenna operates in the 79 GHz frequency band for automotive radar applications. Its single row design offers a narrow beam in elevation and a wide beam in azimuth. Together with the differential microstrip line feeding, the antenna is suitable for differential multichannel MMICs in the frequency range."}
{"_id": "234d205f50ad0ef11863a7b758ff2fe70d7ec4a8", "title": "The Santa Claus problem", "text": "We consider the following problem: The Santa Claus has n presents that he wants to distribute among m kids. Each kid has an arbitrary value for each present. Let pij be the value that kid i has for present j. The Santa's goal is to distribute presents in such a way that the least lucky kid is as happy as possible, i.e he tries to maximize mini=1,...,m sumj \u2208 Si pij where Si is a set of presents received by the i-th kid.Our main result is an O(log log m/log log log m) approximation algorithm for the restricted assignment case of the problem when pij \u2208 pj,0 (i.e. when present j has either value pj or 0 for each kid). Our algorithm is based on rounding a certain natural exponentially large linear programming relaxation usually referred to as the configuration LP. We also show that the configuration LP has an integrality gap of \u03a9(m1/2) in the general case, when pij can be arbitrary."}
{"_id": "5a5c17a2dfb6d0e26a25bfcf2d03dd2f865940fc", "title": "The Wisconsin Card Sorting Test: theoretical analysis and modeling in a neuronal network.", "text": "Neuropsychologists commonly use the Wisconsin Card Sorting Test as a test of the integrity of frontal lobe functions. However, an account of its range of validity and of the neuronal mechanisms involved is lacking. We analyze the test at 3 different levels. First, the different versions of the test are described, and the results obtained with normal subjects and brain-lesioned patients are reviewed. Second, a computational analysis is used to reveal what algorithms may pass the test, and to predict their respective performances. At this stage, 3 cognitive components are isolated that may critically contribute to performance: the ability to change the current rule when negative reward occurs, the capacity to memorize previously tested rules in order to avoid testing them twice, and the possibility of rejecting some rules a priori by reasoning. Third, a model neuronal network embodying these 3 components is described. The coding units are clusters of neurons organized in layers, or assemblies. A sensorimotor loop enables the network to sort the input cards according to several criteria (color, form, etc.). A higher-level assembly of rule-coding clusters codes for the currently tested rule, which shifts when negative reward is received. Internal testing of the possible rules, analogous to a reasoning process, also occurs, by means of an endogenous auto-evaluation loop. When lesioned, the model reproduces the behavior of frontal lobe patients. Plausible biological or molecular implementations are presented for several of its components."}
{"_id": "09a912f49e0b65803627412c969123620cda4a77", "title": "Slice Sampling for Probabilistic Programming", "text": "We introduce the first, general purpose, slice sampling inference engine for probabilistic programs. This engine is released as part of StocPy, a new Turing-Complete probabilistic programming language, available as a Python library. We present a transdimensional generalisation of slice sampling which is necessary for the inference engine to work on traces with different numbers of random variables. We show that StocPy compares favourably to other PPLs in terms of flexibility and usability, and that slice sampling can outperform previously introduced inference methods. Our experiments include a logistic regression, HMM, and Bayesian Neural Net."}
{"_id": "70a4d654151234924c0a7ce7822f12108bf3db49", "title": "Towards understanding the effects of individual gamification elements on intrinsic motivation and performance", "text": "Research on the effectiveness of gamification has proliferated over the last few years, but the underlying motivational mechanisms have only recently become object of empirical research. It has been suggested that when perceived as informational, gamification elements, such as points, levels and leaderboards, may afford feelings of competence and hence enhance intrinsic motivation and promote performance gains. We conducted a 2 4 online experiment that systematically examined how points, leaderboards and levels, as well as participants' goal causality orientation influence intrinsic motivation, competence and performance (tag quantity and quality) in an image annotation task. Compared to a control condition, game elements did not significantly affect competence or intrinsic motivation, irrespective of participants' causality orientation. However, participants' performance did not mirror their intrinsic motivation, as points, and especially levels and leaderboard led to a significantly higher amount of tags generated compared to the control group. These findings suggest that in this particular study context, points, levels and leaderboards functioned as extrinsic incentives, effective only for promoting perfor-"}
{"_id": "faa9bc38930db96df8b662fa0c2499efabeb335d", "title": "An Architecture for Fault-Tolerant Computation with Stochastic Logic", "text": "Mounting concerns over variability, defects, and noise motivate a new approach for digital circuitry: stochastic logic, that is to say, logic that operates on probabilistic signals and so can cope with errors and uncertainty. Techniques for probabilistic analysis of circuits and systems are well established. We advocate a strategy for synthesis. In prior work, we described a methodology for synthesizing stochastic logic, that is to say logic that operates on probabilistic bit streams. In this paper, we apply the concept of stochastic logic to a reconfigurable architecture that implements processing operations on a datapath. We analyze cost as well as the sources of error: approximation, quantization, and random fluctuations. We study the effectiveness of the architecture on a collection of benchmarks for image processing. The stochastic architecture requires less area than conventional hardware implementations. Moreover, it is much more tolerant of soft errors (bit flips) than these deterministic implementations. This fault tolerance scales gracefully to very large numbers of errors."}
{"_id": "5bb59eefd9df07014147c288ca33fa5fb17bbbbb", "title": "First-Order Theorem Proving and Vampire", "text": "In this tutorial we give a short introduction in first-order theorem proving and the use of the theorem prover Vampire. The first part of the tutorial is intended for the audience with little or no knowledge in theorem proving. We will discuss the the resolution and superposition calculus, introduce the saturation principle, present various algorithms implementing redundancy elimination, preprocessing and clause form transformation and demonstrate how these concepts are implemented in Vampire. The second part will cover more advanced topics and features. Some of these features are implemented only in Vampire. This includes reasoning with theories, such as arithmetic, answering queries to very large knowledge bases, interpolation, and an original technique of symbol elimination, which allows one to automatically discover invariants in programs with loops. All the introduced concepts will be illustrated by running the firstorder theorem prover Vampire."}
{"_id": "e3e8e30aecfde1107c3ff28652d55b64adc60ef3", "title": "A DNA-based implementation of YAEA encryption algorithm", "text": "The fundamental idea behind this encryption technique is the exploitation of DNA cryptographic strength, such as its storing capabilities and parallelism in order to enforce other conventional cryptographic algorithms. In this study, a binary form of data, such as plaintext messages, and images are transformed into sequences of DNA nucleotides. Subsequently, efficient searching algorithms are used to locate the multiple positions of a sequence of four DNA nucleotides. These four DNA nucleotides represent the binary octet of a single plaintext character or the single pixel of an image within, say, a Canis Familiaris genomic chromosome. The process of recording the locations of a sequence of four DNA nucleotides representing a single plain-text character, then returning a single randomly chosen position, will enable us to assemble a file of random pointers of the locations of the four DNA nucleotides in the searched Canis Families genome. We call the file containing the randomly selected position in the searchable DNA strand for each plain text character, the ciphered text. Since there is negligible correlation between the pointers file obtained from the selected genome, with its inherently massive storing capabilities, and the plain-text characters, the method, we believe, is robust against any type of cipher attacks."}
{"_id": "a7263b095bb6fb6ad7d09f2e3cd2caa470aab126", "title": "Adaptive Neuro-Fuzzy Inference System (ANFIS) Based Software Evaluation", "text": "Software metric is a measure of some property of a piece of software or its specifications. The goal is to obtain reproducible and quantifiable measurements, which may have several valuable applications in schedule and budget planning, effort and cost evaluation, quality assurance testing, software debugging, software performance optimization, and optimal personnel task assignments. Software effort evaluation is one of the most essential and crucial part of software project planning for which efficient effort metrics is required. Software effort evaluation is followed by software cost evaluation which is helpful for both customers and developers. Thus, efficiency of effort component of software is very essential. The algorithmic models are weak in estimating early effort evaluation with regards to uncertainty and imprecision in software projects. To overcome this problem, there are various machine learning methods. One of the methods is soft computing in which there are various methodologies viz., Artificial Neural Network, Fuzzy Logic, Evolutionary computation based Genetic Algorithm and Metaheuristic based Particle Swarm Optimization. These methods are good at solving real-world ambiguities. This paper highlights the design of an efficient software effort evaluation model using Adaptive Neuro-Fuzzy Inference System (ANFIS) for uncertain datasets and it shows that this technique significantly outperforms with sufficient results."}
{"_id": "8bcaccd884fd7dac9d782522739cfd12159c9309", "title": "Point-of-Interest Recommendation Using Heterogeneous Link Prediction", "text": "Venue recommendation in location-based social networks is among the more important tasks that enhances user participation on the social network. Despite its importance, earlier research have shown that the accurate recommendation of appropriate venues for users is a difficult task specially given the highly sparse nature of user check-in information. In this paper, we show how a comprehensive set of user and venue related information can be methodically incorporated into a heterogeneous graph representation based on which the problem of venue recommendation can be efficiently formulated as an instance of the heterogeneous link prediction problem on the graph. We systematically compare our proposed approach with several strong baselines and show that our work, which is computationally less-intensive compared to the baselines, is able to shows improved performance in terms of precision and f-measure."}
{"_id": "1532b1ceb90272770b9ad0cee8cd572164be658f", "title": "Sentiment analysis in multiple languages: Feature selection for opinion classification in Web forums", "text": "The Internet is frequently used as a medium for exchange of information and opinions, as well as propaganda dissemination. In this study the use of sentiment analysis methodologies is proposed for classification of Web forum opinions in multiple languages. The utility of stylistic and syntactic features is evaluated for sentiment classification of English and Arabic content. Specific feature extraction components are integrated to account for the linguistic characteristics of Arabic. The entropy weighted genetic algorithm (EWGA) is also developed, which is a hybridized genetic algorithm that incorporates the information-gain heuristic for feature selection. EWGA is designed to improve performance and get a better assessment of key features. The proposed features and techniques are evaluated on a benchmark movie review dataset and U.S. and Middle Eastern Web forum postings. The experimental results using EWGA with SVM indicate high performance levels, with accuracies of over 91% on the benchmark dataset as well as the U.S. and Middle Eastern forums. Stylistic features significantly enhanced performance across all testbeds while EWGA also outperformed other feature selection methods, indicating the utility of these features and techniques for document-level classification of sentiments."}
{"_id": "167e1359943b96b9e92ee73db1df69a1f65d731d", "title": "A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts", "text": "Sentiment analysis seeks to identify the viewpoint(s) underlying a text span; an example application is classifying a movie review as \u201cthumbs up\u201d or \u201cthumbs down\u201d. To determine this sentiment polarity, we propose a novel machine-learning method that applies text-categorization techniques to just the subjective portions of the document. Extracting these portions can be implemented using efficient techniques for finding minimum cuts in graphs; this greatly facilitates incorporation of cross-sentence contextual constraints. Publication info: Proceedings of the ACL, 2004."}
{"_id": "49a6ba0f8668e4da188017a45aec84c9336c7fe5", "title": "Learning Subjective Adjectives from Corpora", "text": "Subjectivity taggingis distinguishing sentences used to present opinions and evaluations from sentences used to objectively present factual information. There are numerous applications for which subjectivity tagging is relevant, including information extraction and information retrieval. This paper identifies strong clues of subjectivity using the results of a method for clustering words according to distributional similarity (Lin 1998), seeded by a small amount of detailed manual annotation. These features are then further refined with the addition of lexical semantic features of adjectives, specifically polarityandgradability(Hatzivassiloglou & McKeown 1997), which can be automatically learned from corpora. In 10-fold cross validation experiments, features based on both similarity clusters and the lexical semantic features are shown to have higher precision than features based on each alone."}
{"_id": "7063f78639f7620f081895a14eb5a55a3b7c6057", "title": "Light Stemming for Arabic Information Retrieval", "text": "Computational Morphology is an urgent problem for Arabic Natural Language Processing, because Arabic is a highly inflected language. We have found, however, that a full solution to this problem is not required for effective information retrieval. Light stemming allows remarkably good information retrieval without providing correct morphological analyses. We developed several light stemmers for Arabic, and assessed their effectiveness for information retrieval using standard TREC data. We have also compared light stemming with several stemmers based on morphological analysis.. The light stemmer, light10, outperformed the other approaches. It has been included in the Lemur toolkit, and is becoming widely used Arabic information retrieval."}
{"_id": "bc7308a97ec2d3f7985d48671abe7a8942a5b9f8", "title": "Sentiment Analysis using Support Vector Machines with Diverse Information Sources", "text": "This paper introduces an approach to sentiment analysis which uses support vector machines (SVMs) to bring together diverse sources of potentially pertinent information, including several favorability measures for phrases and adjectives and, where available, knowledge of the topic of the text. Models using the features introduced are further combined with unigram models which have been shown to be effective in the past (Pang et al., 2002) and lemmatized versions of the unigram models. Experiments on movie review data from Epinions.com demonstrate that hybrid SVMs which combine unigram-style feature-based SVMs with those based on real-valued favorability measures obtain superior performance, producing the best results yet published using this data. Further experiments using a feature set enriched with topic information on a smaller dataset of music reviews handannotated for topic are also reported, the results of which suggest that incorporating topic information into such models may also yield improvement."}
{"_id": "42a96f07ab5ba25c88e5f7c0009ed9bfa3a4c5fc", "title": "A Differential Covariance Matrix Adaptation Evolutionary Algorithm for real parameter optimization", "text": "Hybridization in context to Evolutionary Computation (EC) aims at combining the operators and methodologies from different EC paradigms to form a single algorithm that may enjoy a statistically superior performance on a wide variety of optimization problems. In this article we propose an efficient hybrid evolutionary algorithm that embeds the difference vector-based mutation scheme, the crossover and the selection strategy of Differential Evolution (DE) into another recently developed global optimization algorithm known as Covariance Matrix Adaptation Evolutionary Strategy (CMA-ES). CMA-ES is a stochastic method for real parameter (continuous domain) optimization of non-linear, non-convex functions. The algorithm includes adaptation of covariance matrix which is basically an alternative method of traditional Quasi-Newton method for optimization based on gradient method. The hybrid algorithm, referred by us as Differential Covariance Matrix Adaptation Evolutionary Algorithm (DCMA-EA), turns out to possess a better blending of the explorative and exploitative behaviors as compared to the original DE and original CMAES, through empirical simulations. Though CMA-ES has emerged itself as a very efficient global optimizer, its performance deteriorates when it comes to dealing with complicated fitness landscapes, especially landscapes associated with noisy, hybrid composition functions and many real world optimization problems. In order to improve the overall performance of CMA-ES, the mutation, crossover and selection operators of DE have been incorporated into CMA-ES to synthesize the hybrid algorithm DCMA-EA. We compare DCMA-EA with original DE and CMA-EA, two best known DE-variants: SaDE and JADE, and two state-of-the-art real optimizers: IPOP-CMA-ES (Restart Covariance Matrix Adaptation Evolution Strategy with increasing population size) and DMS-PSO (Dynamic Multi Swarm Particle Swarm Optimization) over a test-suite of 20 shifted, rotated, and compositional benchmark functions and also two engineering optimization problems. Our comparative study indicates that although the hybridization scheme does not impose any serious burden on DCMA-EA in terms of number of Function Evaluations (FEs), DCMA-EA still enjoys a statistically superior performance over most of the tested benchmarks and especially over the multi-modal, rotated, and compositional ones in comparison to the other algorithms considered here. 2011 Published by Elsevier Inc. y Elsevier Inc. osh), swagatam.das@isical.ac.in (S. Das), roy.subhrajit20@gmail.com (S. Roy), skminha.isl@gmail.com ganthan). 200 S. Ghosh et al. / Information Sciences 182 (2012) 199\u2013219"}
{"_id": "13941748ae988ef79d3a123b63469789d01422e7", "title": "Boosting Dilated Convolutional Networks with Mixed Tensor Decompositions", "text": "The driving force behind deep networks is their ability to compactly represent rich classes of functions. The primary notion for formally reasoning about this phenomenon is expressive efficiency, which refers to a situation where one network must grow unfeasibly large in order to realize (or approximate) functions of another. To date, expressive efficiency analyses focused on the architectural feature of depth, showing that deep networks are representationally superior to shallow ones. In this paper we study the expressive efficiency brought forth by connectivity, motivated by the observation that modern networks interconnect their layers in elaborate ways. We focus on dilated convolutional networks, a family of deep models delivering state of the art performance in sequence processing tasks. By introducing and analyzing the concept of mixed tensor decompositions, we prove that interconnecting dilated convolutional networks can lead to expressive efficiency. In particular, we show that even a single connection between intermediate layers can already lead to an almost quadratic gap, which in large-scale settings typically makes the difference between a model that is practical and one that is not. Empirical evaluation demonstrates how the expressive efficiency of connectivity, similarly to that of depth, translates into gains in accuracy. This leads us to believe that expressive efficiency may serve a key role in the development of new tools for deep network design."}
{"_id": "897c79a099637b56a38241a46952935341d57131", "title": "An elaborate data set on human gait and the effect of mechanical perturbations", "text": "Here we share a rich gait data set collected from fifteen subjects walking at three speeds on an instrumented treadmill. Each trial consists of 120 s of normal walking and 480 s of walking while being longitudinally perturbed during each stance phase with pseudo-random fluctuations in the speed of the treadmill belt. A total of approximately 1.5 h of normal walking (>5000 gait cycles) and 6 h of perturbed walking (>20,000 gait cycles) is included in the data set. We provide full body marker trajectories and ground reaction loads in addition to a presentation of processed data that includes gait events, 2D joint angles, angular rates, and joint torques along with the open source software used for the computations. The protocol is described in detail and supported with additional elaborate meta data for each trial. This data can likely be useful for validating or generating mathematical models that are capable of simulating normal periodic gait and non-periodic, perturbed gaits."}
{"_id": "02733207813127f30d89ddb7a59d0374509c789c", "title": "Cognitive control signals for neural prosthetics.", "text": "Recent development of neural prosthetics for assisting paralyzed patients has focused on decoding intended hand trajectories from motor cortical neurons and using this signal to control external devices. In this study, higher level signals related to the goals of movements were decoded from three monkeys and used to position cursors on a computer screen without the animals emitting any behavior. Their performance in this task improved over a period of weeks. Expected value signals related to fluid preference, the expected magnitude, or probability of reward were decoded simultaneously with the intended goal. For neural prosthetic applications, the goal signals can be used to operate computers, robots, and vehicles, whereas the expected value signals can be used to continuously monitor a paralyzed patient's preferences and motivation."}
{"_id": "58a4ed0b7c64a2643b120821cfeacda3229b106c", "title": "Interaactive map projections and distortion", "text": "We introduce several new methods for visualizing map projections and their associated distortions. These methods are embodied in the Interactive Map Projections system which allows users to view a representation of the Earth simultaneously as a sphere and as a projection with the ability to interact with both images. The relationship between the globe and the projection is enhanced by the use of explicit visualization of the intermediate developable geometric shapes used in the projection. A tool is built on top of the Interactive Map Projections system that provides a new method of visualizing map projection distortion. The central idea is one or more floating rings on the globe that can be interactively positioned and scaled. As the rings are manipulated on the globe, the corresponding projection of the rings are distorted using the same map projection parameters. This method is applied to study areal and angular distortion and is particularly useful when analyzing large geographical extents (such as in global climate studies) where distortions are significant, as well as visualizations for which information is geo-referenced and perhaps scaled to the underlying map. The floating ring tool is further enhanced to study 3D data sets placed over or under map projections. Examples include atmospheric and oceanographic data, respectively. Here, the ring is extended into a cone with apex at the center of the sphere and emanating beyond the surface into the atmosphere. It serves as a reminder that distortion exists in maps and data overlayed over maps, and provides information about the degree, location, and type of distortion. # 2001 Elsevier Science Ltd. All rights reserved."}
{"_id": "d250e57f6b7e06bb1dac41c8b89700086a85999e", "title": "Self-Supervised Generalisation with Meta Auxiliary Learning", "text": "Learning with auxiliary tasks has been shown to improve the generalisation of a primary task. However, this comes at the cost of manuallylabelling additional tasks which may, or may not, be useful for the primary task. We propose a new method which automatically learns labels for an auxiliary task, such that any supervised learning task can be improved without requiring access to additional data. The approach is to train two neural networks: a label-generation network to predict the auxiliary labels, and a multi-task network to train the primary task alongside the auxiliary task. The loss for the label-generation network incorporates the multi-task network\u2019s performance, and so this interaction between the two networks can be seen as a form of meta learning. We show that our proposed method, Meta AuXiliary Learning (MAXL), outperforms single-task learning on 7 image datasets by a significant margin, without requiring additional auxiliary labels. We also show that MAXL outperforms several other baselines for generating auxiliary labels, and is even competitive when compared with human-defined auxiliary labels. The self-supervised nature of our method leads to a promising new direction towards automated generalisation. The source code is available at https://github.com/lorenmt/maxl."}
{"_id": "e7b593a7b2b6db162002d7a15659ff7d4b3eaf02", "title": "Efficiency and harmonics generation in microwave to DC conversion circuits of half-wave and full-wave rectifier types", "text": "A Rectifying Antenna (Rectenna) is one of the most important components for a wireless power transmission. It has been developed for many applications such as Space Solar Power System (SSPS), and Radio Frequency Identification (RFID) etc. The Rectenna consisting of RF-DC conversion circuits and receiving antennas needs to be designed for high conversion efficiency to achieve efficient power transmission. we try to design the mw-Class RF-DC conversion circuit using half-and full-wave rectification. The measured conversion efficiencies is 59.3 % and 65.3 % athalf- and full-wave type, respectively. And the full-wave type has lower 2nd harmonic reflection coefficient."}
{"_id": "63af4355721f417bc405886f383af096fbfe51b2", "title": "Dynamic load balancing on single- and multi-GPU systems", "text": "The computational power provided by many-core graphics processing units (GPUs) has been exploited in many applications. The programming techniques currently employed on these GPUs are not sufficient to address problems exhibiting irregular, and unbalanced workload. The problem is exacerbated when trying to effectively exploit multiple GPUs concurrently, which are commonly available in many modern systems. In this paper, we propose a task-based dynamic load-balancing solution for single-and multi-GPU systems. The solution allows load balancing at a finer granularity than what is supported in current GPU programming APIs, such as NVIDIA's CUDA. We evaluate our approach using both micro-benchmarks and a molecular dynamics application that exhibits significant load imbalance. Experimental results with a single-GPU configuration show that our fine-grained task solution can utilize the hardware more efficiently than the CUDA scheduler for unbalanced workload. On multi-GPU systems, our solution achieves near-linear speedup, load balance, and significant performance improvement over techniques based on standard CUDA APIs."}
{"_id": "3f10e91a922860e49a42b4ee5ffa14ad7e6500e8", "title": "Guidelines for Use of Climate Scenarios Developed from Statistical Downscaling Methods", "text": "The Guidelines for Use of Climate Scenarios Developed from Statistical Downscaling Methods constitutes \" Supporting material \" of the Intergovernmental Panel on Climate Change (as defined in the Procedures for the Preparation, Review, Acceptance, Adoption, Approval, and Publication of IPCC Reports). The Guidelines were prepared for consideration by the IPCC at the request of its Task Group on Data and Scenario Support for Impacts and Climate Analysis (TGICA). This supporting material has not been subject to the formal intergovernmental IPCC review processes. Eduardo Zorito provided additional materials and feedback that were incorporated by June 2003. Revisions were made in the light of comments received from Elaine Barrow and John Mitchell in November 2003 \u2013 most notably the inclusion of a worked case study. Further comments were received from Penny Whetton and Steven Charles in February 2004."}
{"_id": "3719c7ec35b7c374a364efa64e6e0066fcd84d3f", "title": "Extracting Data from NoSQL Databases", "text": "Businesses and organizations today generate increasing volumes of data. Being able to analyze and visualize this data to find trends that can be used as input when making business decisions is an important factor for competitive advantage. Spotfire is a software platform for doing this. Spotfire uses a tabular data model similar to the relational model used in relational database management systems (RDBMSs), which are commonly used by companies for storing data. Extraction and import of data from RDBMSs to Spotfire is generally a simple task. In recent years, because of changing application requirements, new types of databases under the general term NoSQL have become popular. NoSQL databases differ from RDBMSs mainly in that they use non-relational data models, lack explicit schemas and scale horizontally. Some of these features cause problems for applications like Spotfire when extracting and importing data. This thesis investigates how these problems can be solved, thus enabling support for NoSQL databases in Spotfire. The approach and conclusions are valid for any application that interacts with databases in a similar way as Spotfire. General solutions for supporting NoSQL databases are suggested. Also, two concrete tools for importing data from Cassandra and Neo4j that have been implemented in the Spotfire platform are described. The presented solutions comprise a data model mapping from the NoSQL system to Spotfire tables, sampling and possibly clustering for finding schemas, and an extraction mechanism tailored to the particular system\u2019s query interface. The suggested solutions are not claimed to be complete. Rather, the work in this thesis can serve as a starting point for more thorough investigations or as a basis for something that can be extended."}
{"_id": "99f1b79073f6a628348a884725e516f2840aa5ef", "title": "Ontologies in biology: design, applications and future challenges", "text": "Biological knowledge is inherently complex and so cannot readily be integrated into existing databases of molecular (for example, sequence) data. An ontology is a formal way of representing knowledge in which concepts are described both by their meaning and their relationship to each other. Unique identifiers that are associated with each concept in biological ontologies (bio-ontologies) can be used for linking to and querying molecular databases. This article reviews the principal bio-ontologies and the current issues in their design and development: these include the ability to query across databases and the problems of constructing ontologies that describe complex knowledge, such as phenotypes."}
{"_id": "46b2b620e7b26bf6049aaece16c469006a95a2c7", "title": "A SIFT-Based Forensic Method for Copy\u2013Move Attack Detection and Transformation Recovery", "text": "One of the principal problems in image forensics is determining if a particular image is authentic or not. This can be a crucial task when images are used as basic evidence to influence judgment like, for example, in a court of law. To carry out such forensic analysis, various technological instruments have been developed in the literature. In this paper, the problem of detecting if an image has been forged is investigated; in particular, attention has been paid to the case in which an area of an image is copied and then pasted onto another zone to create a duplication or to cancel something that was awkward. Generally, to adapt the image patch to the new context a geometric transformation is needed. To detect such modifications, a novel methodology based on scale invariant features transform (SIFT) is proposed. Such a method allows us to both understand if a copy-move attack has occurred and, furthermore, to recover the geometric transformation used to perform cloning. Extensive experimental results are presented to confirm that the technique is able to precisely individuate the altered area and, in addition, to estimate the geometric transformation parameters with high reliability. The method also deals with multiple cloning."}
{"_id": "7bbedbeb14f0050c86c67acefa8b5c263fc30e9a", "title": "Internet self-efficacy and electronic service acceptance", "text": "Internet self-efficacy (ISE), or the beliefs in one\u2019s capabilities to organize and execute courses of Internet actions required to produce given attainments, is a potentially important factor to explain the consumers\u2019 decisions in e-commerce use, such as eservice. In this study, we introduce two types of ISE (i.e., general Internet self-efficacy and Web-specific self-efficacy) as new factors that reflect the user\u2019s behavioral control beliefs in e-service acceptance. Using these two constructs as behavioral control factors, we extend and empirically validate the Theory of Planned Behavior (TPB) for the World Wide Web (WWW) context. D 2003 Elsevier B.V. All rights reserved."}
{"_id": "d3f005cad43a043a50443e167c839403172d71df", "title": "Corrosion Study and Intermetallics Formation in Gold and Copper Wire Bonding in Microelectronics Packaging", "text": "A comparison study on the reliability of gold (Au) and copper (Cu) wire bonding is conducted to determine their corrosion and oxidation behavior in different environmental conditions. The corrosion and oxidation behaviors of Au and Cu wire bonding are determined through soaking in sodium chloride (NaCl) solution and high temperature storage (HTS) at 175 \u00b0C, 200 \u00b0C and 225 \u00b0C. Galvanic corrosion is more intense in Cu wire bonding as compared to Au wire bonding in NaCl solution due to the minimal formation of intermetallics in the former. At all three HTS annealing temperatures, the rate of Cu-Al intermetallic formation is found to be three to five times slower than Au-Al intermetallics. The faster intermetallic growth rate and lower activation energy found in this work for both Au/Al and Cu/Al as compared to literature could be due to the thicker Al pad metallization which removed the rate-determining step in previous studies due to deficit in Al material."}
{"_id": "6974d15455bbf5237eec5af571dcd6961480b733", "title": "Introducing teachers to computational thinking using unplugged storytelling", "text": "Many countries are introducing new school computing syllabuses that make programming and computational thinking core components. However, many of the teachers involved have major knowledge, skill and pedagogy gaps. We have explored the effectiveness of using 'unplugged' methods (constructivist, often kinaesthetic, activities away from computers) with contextually rich storytelling to introduce teachers to these topics in a non-threatening way. We describe the approach we have used in workshops for teachers and its survey based evaluation. Teachers were highly positive that the approach was inspiring, confidence building and gave them a greater understanding of the concepts involved, as well as giving practical teaching techniques that they would use."}
{"_id": "f49a6e5ce0081f627f2639598d86504404cafed6", "title": "Sustainable Power Supply Solutions for Off-Grid Base Stations", "text": "The telecommunication sector plays a significant role in shaping the global economy and the way people share information and knowledge. At present, the telecommunication sector is liable for its energy consumption and the amount of emissions it emits in the environment. In the context of off-grid telecommunication applications, offgrid base stations (BSs) are commonly used due to their ability to provide radio coverage over a wide geographic area. However, in the past, the off-grid BSs usually relied on emission-intensive power supply solutions such as diesel generators. In this review paper, various types of solutions (including, in particular, the sustainable solutions) for powering BSs are discussed. The key aspects in designing an ideal power supply solution are reviewed, and these mainly include the pre-feasibility study and the thermal management of BSs, which comprise heating and cooling of the BS shelter/cabinets and BS electronic equipment and power supply components. The sizing and optimization approaches used to design the BSs\u2019 power supply systems as well as the operational and control strategies adopted to manage the power supply systems are also reviewed in this paper."}
{"_id": "6a43ba86b986923e1f7468fdc8c8ec1d994788db", "title": "On Exploiting Dynamic Execution Patterns for Workload Offloading in Mobile Cloud Applications", "text": "Mobile Cloud Computing (MCC) bridges the gap between limited capabilities of mobile devices and the increasing users' demand of mobile multimedia applications, by offloading the computational workloads from local devices to the remote cloud. Current MCC research focuses on making offloading decisions over different methods of a MCC application, but may inappropriately increase the energy consumption if having transmitted a large amount of program states over expensive wireless channels. Limited research has been done on avoiding such energy waste by exploiting the dynamic patterns of applications' run-time execution for workload offloading. In this paper, we adaptively offload the local computational workload with respect to the run-time application dynamics. Our basic idea is to formulate the dynamic executions of user applications using a semi-Markov model, and to further make offloading decisions based on probabilistic estimations of the offloading operation's energy saving. Such estimation is motivated by experimental investigations over practical smart phone applications, and then builds on analytical modeling of methods' execution times and offloading expenses. Systematic evaluations show that our scheme significantly improves the efficiency of workload offloading compared to existing schemes over various smart phone applications."}
{"_id": "389dff9e0ed28973a61d3bfeadf8b4b639f6a155", "title": ": Past , Present , Future", "text": "We thank Steve Burks, Dick Thaler, and especially Matthew Rabin (who collaborated during part of the process) for helpful comments."}
{"_id": "0217fb2a54a4f324ddf82babc6ec6692a3f6194f", "title": "InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets", "text": "This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound of the mutual information objective that can be optimized efficiently. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence/absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing supervised methods. For an up-to-date version of this paper, please see https://arxiv.org/abs/1606.03657."}
{"_id": "8b89fb1c664b742271e0f19a9efe8492f14074f5", "title": "Real-time Simulation of Large Bodies of Water with Small Scale Details", "text": "We present a hybrid water simulation method that combines grid based and particles based approaches. Our specialized shallow water solver can handle arbitrary underlying terrain slopes, arbitrary water depth and supports wet-dry regions tracking. To treat open water scenes we introduce a method for handling non-reflecting boundary conditions. Regions of liquid that cannot be represented by the height field including breaking waves, water falls and splashing due to rigid and soft bodies interaction are automatically turned into spray, splash and foam particles. The particles are treated as simple non-interacting point masses and they exchange mass and momentum with the height field fluid. We also present a method for procedurally adding small scale waves that are advected with the water flow. We demonstrate the effectiveness of our method in various test scene including a large flowing river along a valley with beaches, big rocks, steep cliffs and waterfalls. Both the grid and the particles simulations are implemented in CUDA. We achieve real-time performance on modern GPUs in all the examples."}
{"_id": "c55426ad4a6f84001b52717dddcba440bb10df5d", "title": "From Hashing to CNNs: Training Binary Weight Networks via Hashing", "text": "Deep convolutional neural networks (CNNs) have shown appealing performance on various computer vision tasks in recent years. This motivates people to deploy CNNs to realworld applications. However, most of state-of-art CNNs require large memory and computational resources, which hinders the deployment on mobile devices. Recent studies show that low-bit weight representation can reduce much storage and memory demand, and also can achieve efficient network inference. To achieve this goal, we propose a novel approach named BWNH to train Binary Weight Networks via Hashing. In this paper, we first reveal the strong connection between inner-product preserving hashing and binary weight networks, and show that training binary weight networks can be intrinsically regarded as a hashing problem. Based on this perspective, we propose an alternating optimization method to learn the hash codes instead of directly learning binary weights. Extensive experiments on CIFAR10, CIFAR100 and ImageNet demonstrate that our proposed BWNH outperforms current state-of-art by a large margin."}
{"_id": "be389fb59c12c8c6ed813db13ab74841433ea1e3", "title": "iMapper: Interaction-guided Joint Scene and Human Motion Mapping from Monocular Videos", "text": "Fig. 1. We present iMapper, a method that reasons about the interactions of humans with objects, to recover both a plausible scene arrangement and human motions, that best explain an input monocular video (see inset). We fit characteristic interactions called scenelets (e.g., A, B, C) to the video and use them to reconstruct a plausible object arrangement and human motion path (left). The key challenge is that reliable fitting requires information about occlusions, which are unknown (i.e., latent). (Right) We show an overlay (from top-view) of our result over manually annotated groundtruth object placements. Note that object meshes are placed based on estimated object category, location, and size information."}
{"_id": "4fbd641e30a4e5868f4a146760b9543eafcee21d", "title": "Algorithm for 3D Point Cloud Denoising", "text": "The raw data of point cloud produced by 3D scanning tools contains additive noise from various sources. This paper proposes a method for 3D unorganized point cloud denoising by making full use of the depth information of unorganized points and space analytic geometry theory, applying over-domain average method for 2D image of image denoising theory to 3D point data. The point cloud noises are filtered by using irregular polyhedron based on the limited local neighborhoods. The experiment shows that the proposed method successfully removes noise from point cloud with the features of the scattered point model reserved. Furthermore, the presented algorithm excels in its simplicity both in implementation and operation."}
{"_id": "92bc0ff78de5ae14a31c4085e9b2e0b29d57c9aa", "title": "Central Bank Communication and the Yield Curve: A Semi-Automatic Approach using Non-Negative Matrix Factorization", "text": "Communication is now a standard tool in the central bank\u2019s monetary policy toolkit. Theoretically, communication provides the central bank an opportunity to guide public expectations, and it has been shown empirically that central bank communication can lead to financial market fluctuations. However, there has been little research into which dimensions or topics of information are most important in causing these fluctuations. We develop a semi-automatic methodology that summarizes the FOMC statements into its main themes, automatically selects the best model based on coherency, and assesses whether there is a significant impact of these themes on the shape of the U.S Treasury yield curve using topic modeling methods from the machine learning literature. Our findings suggest that the FOMC statements can be decomposed into three topics: (i) information related to the economic conditions and the mandates, (ii) information related to monetary policy tools and intermediate targets, and (iii) information related to financial markets and the financial crisis. We find that statements are most influential during the financial crisis and the effects are mostly present in the curvature of the yield curve through information related to the financial theme."}
{"_id": "54797270f673a2efc9c4b86b5de3befa59b13bd2", "title": "Substrate Integrated Waveguide Cross-Coupled Filter With Negative Coupling Structure", "text": "Substrate integrated waveguide (SIW) technology provides an attractive solution to the integration of planar and nonplanar circuits by using a planar circuit fabrication process. However, it is usually difficult to implement the negative coupling structure required for the design of compact canonical folded elliptic or quasi-elliptic cross-coupled bandpass filter on the basis of a single-layer SIW. In this paper, a special planar negative coupling scheme including a magnetic coupling post-wall iris and a balanced microstrip line with a pair of metallic via-holes is studied in detail. Two -band fourth-degree cross-coupled bandpass filters without and with source-load coupling using the negative coupling structures are then proposed and designed. The two novel SIW filters having the same center frequency of 20.5 GHz and respective passband width of 700 and 800 MHz are implemented on a single-layer Rogers RT/Duroid 5880 substrate with thickness of 0.508 mm. Measured results of those filters, which exhibit a high selectivity, and a minimum in-band insertion loss of approximately 0.9 and 1.0 dB, respectively, agree well with simulated results."}
{"_id": "aaf7be312dd3d22b032cbbf9530ad56bf8e7800b", "title": "Dispersion characteristics of substrate integrated rectangular waveguide", "text": "Dispersion properties of the substrate integrated rectangular waveguide (SIRW) are rigorously obtained using the BI-RME method combined with the Floquet's theorem. Our analysis shows that the SIRW basically has the same guided-wave characteristics as the conventional rectangular waveguide. Empirical equations are derived from the calculated dispersion curves in order to estimate the cutoff frequency of the first two dominant modes of the SIRW To validate the analysis results, an SIRW guide was designed and measured. Very good agreements between the experimental and theoretical results were obtained."}
{"_id": "eefb364424648a3498dcee451bf0af00463fab1a", "title": "Characteristics of cross (bypass) coupling through higher/lower order modes and their applications in elliptic filter design", "text": "This paper presents a new set of results concerning the use of higher/lower order modes as a means to implement bypass or cross coupling for applications in elliptic filter design. It is shown that the signs of the coupling coefficients to produce a transmission zero (TZ) either below or above the passband are, in certain situations, reversed from the predictions of simpler existing models. In particular, the bypass coupling to higher/lower order modes must be significantly stronger than the coupling to the main resonance in order to generate TZs in the immediate vicinity of the passband. Planar (H-plane) singlets are used to illustrate the derived results. This study should provide very important guidelines in selecting the proper main and bypass couplings for sophisticated filtering structures. Example filters are designed, built, and measured to demonstrate the validity of the introduced theory."}
{"_id": "f24a1af3bd8873920593786d81590d29520cfebc", "title": "Multilayered substrate integrated waveguide (MSIW) elliptic filter", "text": "This letter presents the design and experiment of a novel elliptic filter based on the multilayered substrate integrated waveguide (MSIW) technique. A C-band elliptic filter with four folded MSIW cavities is simulated by using high frequency structure simulator software and fabricated with a two-layer printed circuit board process, the measured results show good performance and in agreement with the simulated results."}
{"_id": "f9caa828cc99bdfb1f42b255136d9c78ee9a1f1a", "title": "Novel compact net-type resonators and their applications to microstrip bandpass filters", "text": "Novel compact net-type resonators and their practical applications to microstrip bandpass filters have been presented in this paper. Three kinds of filters are designed and fabricated to demonstrate the practicality of the proposed compact net-type resonators. In addition, by adjusting the structural parameters of the net-type resonators, the spurious frequencies can be properly shifted to higher frequencies. As a result, a three-pole Chebyshev net-type resonator filter with a fractional bandwidth (FBW) of 6.8% has a spurious resonance of up to 4.1f/sub 0/, and it has more than 80% size reduction in comparison with the conventional U-shaped resonator filter. A four-pole quasi-elliptic net-type resonator filter with a FBW of 3.5% has a spurious resonance of up to 5f/sub 0/, and it has approximately 67% size reduction in comparison with the cross-coupled open-loop resonator filter. A three-pole trisection net-type resonator filter with a FBW of 4.7% has a spurious resonance of up to 6.5f/sub 0/, and its size is reduced by 68% in comparison with the trisection open-loop resonator filter. Consequently, each of the designed filters occupies a very small circuit size and has a good stopband response. The measured results are in good agreement with the full-wave simulation results by IE3D."}
{"_id": "cc415579249532aa33651c8eca1aebf5ce26af1d", "title": "Today \u2019 s State-Owned Enterprises of China : Are They Dying Dinosaurs or Dynamic Dynamos ?", "text": "NOTE: The authors thank Bianca Bain for her assistance on an earlier draft of the paper and the University of Macao for providing financial support. Summary This paper raises the question and provides empirical evidence regarding the status of the evolution of the state-owned enterprises (SOEs) in China today. In this study, we compare the SOEs to domestic private-owned enterprises (POEs) and foreign-controlled businesses (FCBs) in the context of their organizational cultures. While a new ownership form, many of the POEs, evolved from former collectives that reflect the traditional values of Chinese business. Conversely, the FCBs are much more indicative of the large global MNCs. Therefore, we look at the SOEs in the context of these two reference points. We conclude that the SOEs of today have substantially transformed to approximate a configuration desired by the Chinese government when it began the SOE transformation a couple of decades ago to make them globally competitive. The SOEs of today appear to be appropriately described as China's economic dynamic dynamo for the future."}
{"_id": "5b4242f283ce25a44af2832769a5a54662ac4d38", "title": "Simplified high-accuracy calculation of eddy-current loss in round-wire windings", "text": "It has recently been shown that the most commonly used methods for calculating high-frequency eddy-current loss in round-wire windings can have substantial error, exceeding 60%. Previous work includes a formula based on a parametric set of finite-element analysis (FEA) simulations that gives proximity-effect loss for a large range of frequencies, using the parameters from a lookup table based on winding geometry. We improve the formula by decreasing the number of parameters in the formula and also, more importantly, by using simple functions to get the parameters from winding geometry such that a large lookup table is not needed. The function we present is exact in the low frequency limit (diameter much smaller than skin depth) and has error less than 4% at higher frequencies. We make our new model complete by examining the field expression needed to get the total proximity-effect loss and by including the skin-effect loss. We also present experimental results confirming the validity of the model and its superiority to standard methods."}
{"_id": "8052bc5f9beb389b3144d423e7b5d6fcf5d0cc4f", "title": "Adapting attributes by selecting features similar across domains", "text": "Attributes are semantic visual properties shared by objects. They have been shown to improve object recognition and to enhance content-based image search. While attributes are expected to cover multiple categories, e.g. a dalmatian and a whale can both have \"smooth skin\", we find that the appearance of a single attribute varies quite a bit across categories. Thus, an attribute model learned on one category may not be usable on another category. We show how to adapt attribute models towards new categories. We ensure that positive transfer can occur between a source domain of categories and a novel target domain, by learning in a feature subspace found by feature selection where the data distributions of the domains are similar. We demonstrate that when data from the novel domain is limited, regularizing attribute models for that novel domain with models trained on an auxiliary domain (via Adaptive SVM) improves the accuracy of attribute prediction."}
{"_id": "71ba42461b7bc72f91bdaf92412204ebe288347c", "title": "Area-efficient parallel-prefix Ling adders", "text": "Efficient addition of binary numbers plays a very important role in the design of dedicated as well as general purpose processors for the implementation of arithmetic and logic units, branch decision, and floating-point operations, address generations, etc. Several methods have been reported in the literature for the fast and hardware-efficient realization of binary additions. Among these methods, parallel-prefix addition schemes have received much of attentions, since they provide many design choices for delay/area-efficient implementations and optimization of tradeoffs. In this paper, we have proposed area-efficient approach for the design of parallel-prefix Ling adders. We have achieved the area efficiency by computing the real carries, based on the Ling carries produced by the lower bit positions. Using the proposed method, the number of logic levels can be reduced by one, which leads to reduction of delay as well as significant saving of area complexity of the adder. We have implemented the proposed adders using 0.18\u00b5m CMOS technology; and from the synthesis results, we find that our proposed adders could achieve up to 35% saving of area over the previously reported parallel-prefix Ling adders under the same delay constraints."}
{"_id": "31b179cc445f2cf2b82b0112b309a8cf10abfde6", "title": "3D shape regression for real-time facial animation", "text": "We present a real-time performance-driven facial animation system based on 3D shape regression. In this system, the 3D positions of facial landmark points are inferred by a regressor from 2D video frames of an ordinary web camera. From these 3D points, the pose and expressions of the face are recovered by fitting a user-specific blendshape model to them. The main technical contribution of this work is the 3D regression algorithm that learns an accurate, user-specific face alignment model from an easily acquired set of training data, generated from images of the user performing a sequence of predefined facial poses and expressions. Experiments show that our system can accurately recover 3D face shapes even for fast motions, non-frontal faces, and exaggerated expressions. In addition, some capacity to handle partial occlusions and changing lighting conditions is demonstrated."}
{"_id": "50379959c3f953cf5dc7f68a60a0c3f9b3333413", "title": "Explaining International Migration in the Skype Network: The Role of Social Network Features", "text": "In recent years, several new ways have appeared for quantifying human migration such as location based smartphone applications and tracking user activity from websites. To show usefulness of these new approaches, we present the results of a study of cross-country migration as observed via login events in the Skype network. We explore possibility to extract human migration and correlate it with institutional statistics. The study demonstrates that a number of social network features are strongly related to net migration from and to a given country, as well as net migration between pairs of countries. Specifically, we find that percentage of international calls, percentage of international links and foreign logins in a country, complemented by gross domestic product, can be used as relatively accurate proxies for estimating migration."}
{"_id": "1e4b941215d539981086f599ce74fa8e48184eb9", "title": "Compact Proofs of Retrievability", "text": "In a proof-of-retrievability system, a data storage center must prove to a verifier that he is actually storing all of a client\u2019s data. The central challenge is to build systems that are both efficient and provably secure\u2014that is, it should be possible to extract the client\u2019s data from any prover that passes a verification check. In this paper, we give the first proof-of-retrievability schemes with full proofs of security against arbitrary adversaries in the strongest model, that of Juels\u00a0and Kaliski. Our first scheme, built from BLS signatures and secure in the random oracle model, features a proof-of-retrievability protocol in which the client\u2019s query and server\u2019s response are both extremely short. This scheme allows public verifiability: anyone can act as a verifier, not just the file owner. Our second scheme, which builds on pseudorandom functions (PRFs) and is secure in the standard model, allows only private verification. It features a proof-of-retrievability protocol with an even shorter server\u2019s response than our first scheme, but the client\u2019s query is long. Both schemes rely on homomorphic properties to aggregate a proof into one small authenticator value."}
{"_id": "f1a818efa190959826a88df02ceb6f86c1d9ec9b", "title": "Verilog HDL model based thermometer-to-binary encoder with bubble error correction", "text": "This paper compares several approaches to come up with the Verilog HDL model of the thermometer-to-binary encoder with bubble error correction. It has been demonstrated that implementations of different ideas to correct bubble errors yield circuits whose parameters tremendously vary in delay, area and power consumption. The shortest delay is achieved for the design synthesized from the model which mimics a human reading temperature on classic liquid-in-glass thermometer."}
{"_id": "f9fafd8ea1190ffbc2757eed0f0a8bbff610c43e", "title": "Robust detection of non-motorized road users using deep learning on optical and LIDAR data", "text": "Detection of non-motorized road users, such as cyclists and pedestrians, is a challenging problem in collision warning/collision avoidance (CW/CA) systems as direct information (e.g. location, speed, and class) cannot be obtained from such users. In this paper, we propose a fusion of LIDAR data and a deep learning-based computer vision algorithm, to substantially improve the detection of regions of interest (ROIs) and subsequent identification of road users. Experimental results on the KITTI object detection benchmark quantify the effectiveness of incorporating LIDAR data with region-based deep convolutional networks. Thus our work provides another step towards the goal of designing safe and smart transportation systems of the future."}
{"_id": "cf4e54499ef2cf27ddda74975996036705600a18", "title": "Fully automatic detection and segmentation of abdominal aortic thrombus in post-operative CTA images using Deep Convolutional Neural Networks", "text": "Computerized Tomography Angiography (CTA) based follow-up of Abdominal Aortic Aneurysms (AAA) treated with Endovascular Aneurysm Repair (EVAR) is essential to evaluate the progress of the patient and detect complications. In this context, accurate quantification of post-operative thrombus volume is required. However, a proper evaluation is hindered by the lack of automatic, robust and reproducible thrombus segmentation algorithms. We propose a new fully automatic approach based on Deep Convolutional Neural Networks (DCNN) for robust and reproducible thrombus region of interest detection and subsequent fine thrombus segmentation. The DetecNet detection network is adapted to perform region of interest extraction from a complete CTA and a new segmentation network architecture, based on Fully Convolutional Networks and a Holistically-Nested Edge Detection Network, is presented. These networks are trained, validated and tested in 13 post-operative CTA volumes of different patients using a 4-fold cross-validation approach to provide more robustness to the results. Our pipeline achieves a Dice score of more than 82% for post-operative thrombus segmentation and provides a mean relative volume difference between ground truth and automatic segmentation that lays within the experienced human observer variance without the need of human intervention in most common cases."}
{"_id": "51ce3edde4311ee97bf693ad4c2b4e0c286ed688", "title": "The infrastructure problem in HCI", "text": "HCI endeavors to create human-centered computer systems, but underlying technological infrastructures often stymie these efforts. We outline three specific classes of user experience difficulties caused by underlying technical infrastructures, which we term constrained possibilities, unmediated interaction, and interjected abstractions. We explore how prior approaches in HCI have addressed these issues, and discuss new approaches that will be required for future progress. We argue that the HCI community must become more deeply involved with the creation of technical infrastructures. Doing so, however, requires a substantial expansion to the methodological toolbox of HCI."}
{"_id": "bd6b3ef8cab804823b4ede2c9ceb6118a5dd9f0f", "title": "An examination of the celebrity endorsements and online customer reviews influence female consumers' shopping behavior", "text": "0747-5632/$ see front matter 2012 Elsevier Ltd. A http://dx.doi.org/10.1016/j.chb.2012.08.005 \u21d1 Corresponding author. Tel.: +886 2 2737 6764; fa E-mail address: melodyps@gmail.com (P.-S. Wei). The goal of this study is to compare the influence of celebrity endorsements to online customer reviews on female shopping behavior. Based on AIDMA and AISAS models, we design an experiment to investigate consumer responses to search good and experience good respectively. The results revealed that search good (shoes) endorsed by a celebrity in an advertisement evoked significantly more attention, desire, and action from the consumer than did an online customer review. We also found that online customer reviews emerged higher than the celebrity endorsement on the scale of participants\u2019 memory, search and share attitudes toward the experience good (toner). Implications for marketers as well as suggestions for future research are discussed. 2012 Elsevier Ltd. All rights reserved."}
{"_id": "141ea59c86e184f60ab23a0537339545642eca5c", "title": "Hat-Delta --- One Right Does Make a Wrong", "text": "We outline two methods for locating bugs in a program. This is done by comparing computations of the same program with different input. At least one of these computations must produce a correct result, while exactly one must exhibit some erroneous behaviour. Firstly reductions that are thought highly likely to be correct are eliminated from the search for the bug. Secondly, a program slicing technique is used to identify areas of code that are likely to be correct. Both methods have been implemented. In combination with algorithmic debugging they provide a system that quickly and accurately identifies bugs."}
{"_id": "86686c1543dddd149b933a681c86077f8ac068de", "title": "Terabit/sec-class board-level optical interconnects through polymer waveguides using 24-channel bidirectional transceiver modules", "text": "We report here on the design, fabrication and characterization of an integrated optical data bus designed for terabit/sec-class module-to-module on-board data transfer using integrated optical transceivers. The parallel optical transceiver is based on a through-silicon-via (TSV) silicon carrier as the platform for integration of 24-channel VCSEL and photodiode arrays with CMOS ICs. The Si carrier also includes optical vias (holes) for optical access to conventional surface-emitting 850-nm optoelectronic (OE) devices. The 48-channel transceiver is flip-chip soldered to an organic carrier forming the transceiver Optomodule. The optical printed circuit board (o-PCB) is a typical FR4 board with a polymer waveguide layer added on top. A 48-channel flex-waveguide is fabricated separately and attached to the FR4 board. Turning mirrors are fabricated into the waveguides and a lens array is attached to facilitate optical coupling. An assembly procedure has been developed to surface mount the Optomodule to the o-PCB using a ball grid array (BGA) process which provides both electrical and optical interconnections. Efficient optical coupling is achieved using a dual-lens optical system, with one lens array incorporated into the Optomodule and a second on the o-PCB. Fully functional Optomodules with 24 transmitter + 24 receiver channels were characterized with transmitters operating up to 20 Gb/s and receivers up to 15 Gb/s. Finally, two Optomodules were assembled onto an o-PCB and a full optical link demonstrated, achieving > 20 bidirectional links at 10 Gb/s. At 15 Gb/s, error-free operation was demonstrated for 15 channels in each direction, realizing a record o-PCB link with a 225 Gb/s bidirectional aggregate data rate."}
{"_id": "38241bbdbae62ee2ceacc590681a18dc2564adec", "title": "PLUG: flexible lookup modules for rapid deployment of new protocols in high-speed routers", "text": "New protocols for the data link and network layer are being proposed to address limitations of current protocols in terms of scalability, security, and manageability. High-speed routers and switches that implement these protocols traditionally perform packet processing using ASICs which offer high speed, low chip area, and low power. But with inflexible custom hardware, the deployment of new protocols could happen only through equipment upgrades. While newer routers use more flexible network processors for data plane processing, due to power and area constraints lookups in forwarding tables are done with custom lookup modules. Thus most of the proposed protocols can only be deployed with equipment upgrades. To speed up the deployment of new protocols, we propose a flexible lookup module, PLUG (Pipelined Lookup Grid). We can achieve generality without loosing efficiency because various custom lookup modules have the same fundamental features we retain: area dominated by memories, simple processing, and strict access patterns defined by the data structure. We implemented IPv4, Ethernet, Ethane, and SEATTLE in our dataflow-based programming model for the PLUG and mapped them to the PLUG hardware which consists of a grid of tiles. Throughput, area, power, and latency of PLUGs are close to those of specialized lookup modules."}
{"_id": "01094798b20e96e1d029d6874577167f2214c7b6", "title": "Algorithmic improvements for fast concurrent Cuckoo hashing", "text": "Fast concurrent hash tables are an increasingly important building block as we scale systems to greater numbers of cores and threads. This paper presents the design, implementation, and evaluation of a high-throughput and memory-efficient concurrent hash table that supports multiple readers and writers. The design arises from careful attention to systems-level optimizations such as minimizing critical section length and reducing interprocessor coherence traffic through algorithm re-engineering. As part of the architectural basis for this engineering, we include a discussion of our experience and results adopting Intel's recent hardware transactional memory (HTM) support to this critical building block. We find that naively allowing concurrent access using a coarse-grained lock on existing data structures reduces overall performance with more threads. While HTM mitigates this slowdown somewhat, it does not eliminate it. Algorithmic optimizations that benefit both HTM and designs for fine-grained locking are needed to achieve high performance.\n Our performance results demonstrate that our new hash table design---based around optimistic cuckoo hashing---outperforms other optimized concurrent hash tables by up to 2.5x for write-heavy workloads, even while using substantially less memory for small key-value items. On a 16-core machine, our hash table executes almost 40 million insert and more than 70 million lookup operations per second."}
{"_id": "64d4af4af55a437ce5aa64dba345e8814cd12195", "title": "Information-Driven Dynamic Sensor Collaboration for Tracking Applications", "text": "This article overviews the information-driven approach to sensor collaboration in ad hoc sensor networks. The main idea is for a network to determine participants in a \" sensor collaboration \" by dynamically optimizing the information utility of data for a given cost of communication and computation. A definition of information utility is introduced, and several approximate measures of the information utility are developed for reasons of computational tractability. We illustrate the use of this approach using examples drawn from tracking applications. I. INTRODUCTION The technology of wirelessly networked micro-sensors promises to revolutionize the way we live, work, and interact with the physical environment. For example, tiny, inexpensive sensors can be \" sprayed \" onto roads, walls, or machines to monitor and detect a variety of interesting events such as highway traffic, wildlife habitat condition, forest fire, manufacturing job flow, and military battlefield situation."}
{"_id": "0b9213651d939b8195b0f4225fe409af6459effb", "title": "Estimating 3D Hand Pose from a Cluttered Image", "text": "A method is proposed that can generate a ranked list of plausible three-dimensional hand configurations that best match an input image. Hand pose estimation is formulated as an image database indexing problem, where the closest matches for an input hand image are retrieved from a large database of synthetic hand images. In contrast to previous approaches, the system can function in the presence of clutter, thanks to two novel clutter-tolerant indexing methods. First, a computationally efficient approximation of the image-to-model chamfer distance is obtained by embedding binary edge images into a high-dimensional Euclidean space. Second, a general-purpose, probabilistic line matching method identifies those line segment correspondences between model and input images that are the least likely to have occurred by chance. The performance of this cluttertolerant approach is demonstrated in quantitative experiments with hundreds of real hand images."}
{"_id": "e1b2ecd775a41cd5833fb59460738617997d7d84", "title": "Thermal degradation of DRAM retention time: Characterization and improving techniques", "text": "Variation of DRAM retention time and reliability problem induced by thermal stress was investigated. Most of the DRAM cells revealed 2-state retention time with thermal stress. The effects of hydrogen annealing condition and fluorine implantation on the variation of retention time and reliability are discussed."}
{"_id": "9c1b9598f82f9ed7d75ef1a9e627496759aa2387", "title": "Data Science , Predictive Analytics , and Big Data : A Revolution That Will Transform Supply Chain Design and Management", "text": "W e illuminate the myriad of opportunities for research where supply chain management (SCM) intersects with data science, predictive analytics, and big data, collectively referred to as DPB. We show that these terms are not only becoming popular but are also relevant to supply chain research and education. Data science requires both domain knowledge and a broad set of quantitative skills, but there is a dearth of literature on the topic and many questions. We call for research on skills that are needed by SCM data scientists and discuss how such skills and domain knowledge affect the effectiveness of an SCM data scientist. Such knowledge is crucial to develop future supply chain leaders. We propose definitions of data science and predictive analytics as applied to SCM. We examine possible applications of DPB in practice and provide examples of research questions from these applications, as well as examples of research questions employing DPB that stem from management theories. Finally, we propose specific steps interested researchers can take to respond to our call for research on the intersection of SCM and DPB."}
{"_id": "63e1e4ddad1621af02b2f8ad8a0bf5dbd47abf38", "title": "An integrated charger using segmented windings of interior permanent magnet motor based on 3 phase with 9 windings", "text": "Connecting the plug of electric vehicles (EVs) into the grid source directly means to charge the battery. The electric power generated from the grid goes through the electric motor, crosses the bidirectional inverter used as three-boost rectifier and is received by the load (battery). An innovative fast-integrated charger based on segmented windings of permanent magnet synchronous motor (PMSM) is introduced for plug-in electric vehicles (PEV). The three-phase with nine windings of PMSM is used during the traction and charging modes. Three (or single) phases of grid source are directly connected to the neutral points of segmented PMSM windings to provide high power density to the battery. The system configuration and operation of charging mode are explained in detail for proposed integrated on-board charger. The simulation models are designed by using Ansoft Maxwell, Ansys Maxwell circuit editor, and Matlab/Simulink softwares, and simulation results are presented to verify the performance of the proposed system. In addition, the experimental setup of the integrated system is under progress."}
{"_id": "ac97e503f0193873e30270aa768d53a4517ff657", "title": "From the Internet of Computers to the Internet of Things", "text": "This paper1 discusses the vision, the challenges, possible usage scenarios and technological building blocks of the \u201cInternet of Things\u201d. In particular, we consider RFID and other important technological developments such as IP stacks and web servers for smart everyday objects. The paper concludes with a discussion of social and governance issues that are likely to arise as the vision of the Internet of Things becomes a reality."}
{"_id": "5dd82357b16f9893ca95e29c65a8974fd94b55f4", "title": "of the Thesis Classification of Imbalanced Data Using Synthetic OverSampling Techniques by", "text": "of the Thesis Classification of Imbalanced Data Using Synthetic Over-Sampling Techniques"}
{"_id": "5685a394b25fcb27b6ad91f7325f2e60a9892e2a", "title": "Query Optimization Techniques In Graph Databases", "text": "Graph databases (GDB) have recently been arisen to overcome the limits of traditional databases for storing and managing data with graph-like structure. Today, they represent a requirementfor many applications that manage graph-like data,like social networks.Most of the techniques, applied to optimize queries in graph databases, have been used in traditional databases, distribution systems,... or they are inspired from graph theory. However, their reuse in graph databases should take care of the main characteristics of graph databases, such as dynamic structure, highly interconnected data, and ability to efficiently access data relationships. In this paper, we survey the query optimization techniques in graph databases. In particular,we focus on the features they have introduced to improve querying graph-like data."}
{"_id": "045a975c1753724b3a0780673ee92b37b9827be6", "title": "Wait-Free Synchronization", "text": "A wait-free implementation of a concurrent data object is one that guarantees that any process can complete any operation in a finite number of steps, regardless of the execution speeds of the other processes. The problem of constructing a wait-free implementation of one data object from another lies at the heart of much recent work in concurrent algorithms, concurrent data structures, and multiprocessor architectures. First, we introduce a simple and general technique, based on reduction to a concensus protocol, for proving statements of the form, \u201cthere is no wait-free implementation of X by Y.\u201d We derive a hierarchy of objects such that no object at one level has a wait-free implementation in terms of objects at lower levels. In particular, we show that atomic read/write registers, which have been the focus of much recent attention, are at the bottom of the hierarchy: thay cannot be used to construct wait-free implementations of many simple and familiar data types. Moreover, classical synchronization primitives such astest&set and fetch&add, while more powerful than read and write, are also computationally weak, as are the standard message-passing primitives. Second, nevertheless, we show that there do exist simple universal objects from which one can construct a wait-free implementation of any sequential object."}
{"_id": "0541d5338adc48276b3b8cd3a141d799e2d40150", "title": "MapReduce: Simplified Data Processing on Large Clusters", "text": "MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day."}
{"_id": "2032be0818be583f159cc75f2022ed78222fb772", "title": "Salt-and-pepper noise removal by median-type noise detectors and detail-preserving regularization", "text": "This paper proposes a two-phase scheme for removing salt-and-pepper impulse noise. In the first phase, an adaptive median filter is used to identify pixels which are likely to be contaminated by noise (noise candidates). In the second phase, the image is restored using a specialized regularization method that applies only to those selected noise candidates. In terms of edge preservation and noise suppression, our restored images show a significant improvement compared to those restored by using just nonlinear filters or regularization methods only. Our scheme can remove salt-and-pepper-noise with a noise level as high as 90%."}
{"_id": "43089ffed8c6c653f6994fb96f7f48bbcff2a598", "title": "Adaptive median filters: new algorithms and results", "text": "Based on two types of image models corrupted by impulse noise, we propose two new algorithms for adaptive median filters. They have variable window size for removal of impulses while preserving sharpness. The first one, called the ranked-order based adaptive median filter (RAMF), is based on a test for the presence of impulses in the center pixel itself followed by a test for the presence of residual impulses in the median filter output. The second one, called the impulse size based adaptive median filter (SAMF), is based on the detection of the size of the impulse noise. It is shown that the RAMF is superior to the nonlinear mean L(p) filter in removing positive and negative impulses while simultaneously preserving sharpness; the SAMF is superior to Lin's (1988) adaptive scheme because it is simpler with better performance in removing the high density impulsive noise as well as nonimpulsive noise and in preserving the fine details. Simulations on standard images confirm that these algorithms are superior to standard median filters."}
{"_id": "1ef2b855e8a447b17ca7470ae5f3fff667d2fc28", "title": "AP3: cooperative, decentralized anonymous communication", "text": "This paper describes a cooperative overlay network that provides anonymous communication services for participating users. The Anonymizing Peer-to-Peer Proxy (AP3) system provides clients with three primitives: (i) anonymous message delivery, (ii) anonymous channels, and (iii) secure pseudonyms. AP3 is designed to be lightweight, low-cost and provides \"probable innocence\" anonymity to participating users, even under a large-scale coordinated attack by a limited fraction of malicious overlay nodes. Additionally, we use AP3's primitives to build novel anonymous group communication facilities (multicast and anycast), which shield the identity of both publishers and subscribers."}
{"_id": "683c8f5c60916751bb23f159c86c1f2d4170e43f", "title": "Probabilistic Encryption", "text": ""}
{"_id": "7bbf34f4766a424d8fa934f5d1bda580e9ae814c", "title": "THE ROLE OF MUSIC COMMUNICATION IN CINEMA", "text": "[Authors\u2019 note: This paper is an abbreviated version of a chapter included in a forthcoming book entitled Music Communication (D. Miell, R. MacDonald, & D. Hargreaves, Eds.), to be published by Oxford University Press.] Past research leaves no doubt about the efficacy of music as a means of communication. In the following pages, after presenting a general model of music communication, the authors will introduce models \u2013 both empirical and theoretical \u2013 of film music perception and the role of music in film, referencing some of the most significant research investigating the relationship between sound and image in the cinematic context. We shall then enumerate the many ways in which the motion picture soundtrack can supplement, enhance, and expand upon the meaning of a film\u2019s narrative. The relationship between the auditory and visual components in cinema is both active and dynamic, affording a multiplicity of possible relations than can evolve \u2013 sometimes dramatically \u2013 as the narrative unfolds. This paper will take a cognitive approach to the study of musical communication in cinema. As a result, much credence will be given to the results of empirical research investigating human cognitive processing in response to the motion picture experience. In conclusion, the present authors will argue for a more inclusive definition of the term \u201cfilm music\u201d than that utilized or implied in previous publications. In our view, film music is one component of a sonic fabric that includes the musical score, ambient sound, dialogue, sound effects, and silence. The functions of these constituent elements often overlap or interact with one another, creating a harmonious counterpoint to the visual image. 1. A MODEL OF MUSIC COMMUNICATION Many studies have investigated various aspects of musical communication as a form of expression (Bengtsson & Gabrielsson, 1983; Clarke, 1988; Clynes, 1983, Gabrielsson, 1988, Seashore, 1967/1938; Senju & Ohgushi, 1987; Sundberg, 1988; Sundberg et al., 1983). A tripartite communication model was proposed by Campbell and Heller (1980), consisting simply of a composer, performer, and listener. Using this previous model as a foundation, Kendall and Carterette (1990) elegantly expanded upon this model of music communication, detailing a process involving multiple states of coding, decoding, and recoding. Kendall and Carterette suggest that this process involves the \u201cgrouping and parsing of elementary thought units\u201d (p. 132), these \u201cthought units\u201d (metasymbols) are mental representations involved in the process of creating, performing, and listening to musical sound. 2. MODELS OF FILM MUSIC COMMUNICATION 2.1 Empirical Evidence Several researchers have proposed models specific to the perception and cognition of music within the cinematic context. Initiating this systematic effort, Marshall and Cohen\u2019s (1988) bipartite \u201ccongruence-associationist\u201d model suggests that the meaning of a film is altered by the music as the result of two complex cognitive processes. 
Based upon subject responses, the researchers determined that musical sound directly affects subject ratings on the Potency (strong-weak) and Activity (active-passive) dimensions, while the Evaluative dimension (good-bad) relies on the degree of congruence between the audio and visual components on all three dimensions, as determined by a \u201ccomparator\u201d component. The second part of the model describes how musical meaning is ascribed to the film. Marshall and Cohen claim that attention is directed to the overlapping congruent meaning of the music and the film. Referential meanings associated with the music are ascribed to the overlapped (congruent) audio-visual components upon which attention is focused. As a result, \u201cthe music alters meaning of a particular aspect of the film\u201d (1988, p. 109). Marshall and Cohen also acknowledge the important role played by temporal characteristics of the sound and image, stating that \u201cthe assignment of accent to events will affect retention, processing, and interpretation\u201d (1988, p. 108). Incorporation of this important component of the developing model was provided by Lipscomb and Kendall\u2019s (1994) Film Music Paradigm, in which two implicit processes are considered as the basis for whether attentional focus is shifted to the musical component or whether it is likely to remain at the subconscious \u2013 cognitively \u201cinaudible\u201d \u2013 level. The authors suggested that these two implicit processes include an association judgment (similar to Marshall and Cohen\u2019s assessment of \u201ccongruence\u201d) and an evaluation of the accent structure relationship between the auditory and visual components. Based on the results of a series of three experiments utilizing stimuli ranging from extremely simple, single-object animations to actual movie excerpts, Lipscomb (1995) determined that the role of the two implicit judgments appears to be dynamic such that, with simple stimuli (such as that used in Lipscomb, 1995, Experiment 1 and Marshall & Cohen, 1988), accent structure alignment plays a dominant role. As the stimuli become more complex (e.g., multi-object animations and actual movie excerpts) the primary determinant of meaning in the auditory domain appears to shift to the associational judgment, with the accent structure alignment aspect receding to a supporting role, i.e., focusing audience attention on certain aspects of the visual image (Boltz, 2001). The most complex and fully developed model of film music perception proposed to date is Cohen\u2019s (2001) \u201ccongruence-associationist framework for understanding film-music communication\u201d (p. 259; see Figure 1). This multi-stage model attempts to account for meaning derived from the spoken narrative, visual images, and musical sound. Level A represents bottom-up processing based on physical features derived from input to each perceptual modality. Level B represents the determination of cross-modal congruence, based on both semantic (associational) and syntactic (temporal) grouping features. Level D represents top-down processing, determined by an individual\u2019s past experience and the retention of that experience in long term memory. According to this model, the input from levels B (bottom-up) and D (top-down) meets in the observer\u2019s conscious mind (level C), where information is prepared for transfer to short term memory. 
In its details, clearly documented in Cohen (2001), this model is based on an assumption of visual primacy, citing several studies that have suggested a subservient role for the auditory component (Bolivar et al., 1994; Driver, 1997; Thompson et al., 1994). Though a common assumption throughout the literature, the present authors would like to express reservation about this assumption and suggest that additional research is required before such a claim can be supported definitively. Figure 1. Cohen\u2019s \u201ccongruence-associationist framework.\u201d 2.2 Theoretical Evidence Richard Wagner, creator of the idealized Gesamtkunstwerk in the form of the 19th-century music drama, claimed that \u201cas pure organ of the feeling, [music] speaks out the very thing which word speech in itself can not speak out ... that which, looked at from the standpoint of our human intellect, is the unspeakable\u201d (Wagner 1849/1964, p. 217). According to Suzanne K. Langer, \u201cmusic has all the earmarks of a true symbolism, except one: the existence of an assigned connotation\u201d and, though music is clearly a symbolic form, it remains an \u201cunconsummated symbol\u201d (1942, p. 240). In order for a film to make the greatest possible impact, there must be an interaction between the verbal dialogue (consummated symbol), the cinematic images (also, typically, a consummated symbol), and the musical score (unconsummated symbol). To answer the question \u201cHow does music in film narration create a point of experience for the spectator?,\u201d Gorbman (1987) suggests three methods by which music can \u201csignify\u201d in the context of a narrative film. Purely musical signification results from the highly coded syntactical relationships inherent in the association of one musical tone with another. Patterns of tension and release provide a sense of organization and meaning to the musical sound, apart from any extramusical association that might exist. Cultural musical codes are exemplified by music that has come to be associated with a certain mood or state of mind. These associations have been further canonized by the Hollywood film industry into certain conventional expectations \u2013 implicitly anticipated by enculturated audience members \u2013 determined by the narrative content of a given scene. Finally, cinematic codes influence musical meaning merely due to the placement of musical sound within the filmic context. Opening credit and end title music illustrate this type of signification, as well as recurring musical themes that come to represent characters or situations within the film. There is a commonly held belief that film music is not to be heard (Burt, 1994; Gorbman, 1987). Instead, it is believed to fulfill its role in communicating the underlying psychological drama of the narrative at a subconscious level (Lipscomb, 1989). There is, however, certain music that is intended to be heard by the audience as part of the cinematic diegesis, i.e., \u201cthe narratively implied spatiotemporal world of the actions and characters\u201d (Gorbman 1987, p. 21). This \u201cworld\u201d includes, naturally, a sonic component. Therefore, all sounds that are understood to be heard by characters in the narrative \u2013 including music \u2013 are referred to as diegetic, while those that are not part of the diegesis (e.g., the orchestral score) are referred to as nondiegetic. 
This would suggest that diegetic music is more likely to be processed at the conscious level while nondiegetic music might remain at the subconscious level, though research is needed to determine whether this is in fact true. It is worth noting also that the source of diegetic sound can be either seen or unseen. Michel Chion (199"}
{"_id": "3a116f2ae10a979c18787245933cb9f984569599", "title": "Data Collection in Wireless Sensor Networks with Mobile Elements: A Survey", "text": "Wireless sensor networks (WSNs) have emerged as an effective solution for a wide range of applications. Most of the traditional WSN architectures consist of static nodes which are densely deployed over a sensing area. Recently, several WSN architectures based on mobile elements (MEs) have been proposed. Most of them exploit mobility to address the problem of data collection in WSNs. In this article we first define WSNs with MEs and provide a comprehensive taxonomy of their architectures, based on the role of the MEs. Then we present an overview of the data collection process in such a scenario, and identify the corresponding issues and challenges. On the basis of these issues, we provide an extensive survey of the related literature. Finally, we compare the underlying approaches and solutions, with hints to open problems and future research directions."}
{"_id": "3b290393afef51b374f9daf9856ec3c1a5fa2968", "title": "A Successive Approximation Recursive Digital Low-Dropout Voltage Regulator With PD Compensation and Sub-LSB Duty Control", "text": "This paper presents a recursive digital low-dropout (RLDO) regulator that improves response time, quiescent power, and load regulation dynamic range over prior digital LDO designs by 1\u20132 orders of magnitude. The proposed RLDO enables a practical digital replacement to analog LDOs by using an SAR-like binary search algorithm in a coarse loop and a sub-LSB pulse width modulation duty control scheme in a fine loop. A proportional-derivative compensation scheme is employed to ensure stable operation independent of load current, the size of the output decoupling capacitor, and clock frequency. Implemented in 0.0023 mm2 in 65 nm CMOS, the 7-bit RLDO achieves, at a 0.5-V input, a response time of 15.1 ns with a figure of merit of 199.4 ps, along with stable operation across a 20 000 $\\times $ dynamic load range."}
{"_id": "d429ddfb32f921e630ded47a8fd1bc424f7283d9", "title": "Imaging Cognition II: An Empirical Review of 275 PET and fMRI Studies", "text": "Positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) have been extensively used to explore the functional neuroanatomy of cognitive functions. Here we review 275 PET and fMRI studies of attention (sustained, selective, Stroop, orientation, divided), perception (object, face, space/motion, smell), imagery (object, space/ motion), language (written/spoken word recognition, spoken/ no spoken response), working memory (verbal/numeric, object, spatial, problem solving), semantic memory retrieval (categorization, generation), episodic memory encoding (verbal, object, spatial), episodic memory retrieval (verbal, nonverbal, success, effort, mode, context), priming (perceptual, conceptual), and procedural memory (conditioning, motor, and nonmotor skill learning). To identify consistent activation patterns associated with these cognitive operations, data from 412 contrasts were summarized at the level of cortical Brodmann's areas, insula, thalamus, medial-temporal lobe (including hippocampus), basal ganglia, and cerebellum. For perception and imagery, activation patterns included primary and secondary regions in the dorsal and ventral pathways. For attention and working memory, activations were usually found in prefrontal and parietal regions. For language and semantic memory retrieval, typical regions included left prefrontal and temporal regions. For episodic memory encoding, consistently activated regions included left prefrontal and medial-temporal regions. For episodic memory retrieval, activation patterns included prefrontal, medial-temporal, and posterior midline regions. For priming, deactivations in prefrontal (conceptual) or extrastriate (perceptual) regions were consistently seen. For procedural memory, activations were found in motor as well as in non-motor brain areas. Analysis of regional activations across cognitive domains suggested that several brain regions, including the cerebellum, are engaged by a variety of cognitive challenges. These observations are discussed in relation to functional specialization as well as functional integration."}
{"_id": "ac6fdfa9d2ca8ec78ea1b2c8807ab9147b8a526d", "title": "Exploring Synergies between Machine Learning and Knowledge Representation to Capture Scientific Knowledge", "text": "In this paper we explore synergies between the machine learning and knowledge representation fields by considering how scientific knowledge is represented in these areas. We illustrate some of the knowledge obtained through machine learning methods, providing two contrasting examples of such models: probabilistic graphical models (aka Bayesian networks) and artificial neural networks (including deep learning networks). From knowledge representation, we give an overview of ontological representations, qualitative reasoning, and planning. Then we discuss potential synergies that would benefit both areas."}
{"_id": "e7b50e3f56e21fd2a5eb34923d427a0bc6dd8905", "title": "Coupling Matrix Synthesis for a New Class of Microwave Filter Configuration", "text": "In this paper a new approach to the synthesis of coupling matrices for microwave filters is presente d. The new approach represents an advance on existing direct a nd optimization methods for coupling matrix synthesis in that it will exhaustively discover all possible coupling matrix solutions for a network if more than one exists. This enables a se lection to be made of the set of coupling values, resonator frequ ency offsets, parasitic coupling tolerance etc that will be best suited to the technology it is intended to realize the microwave filter with. To demonstrate the use of the method, the case of the r cently \u2013 introduced \u2018extended box\u2019 (EB) coupling matrix configuration is taken. The EB represents a new class of filter con figuration featuring a number of important advantages, one of which is the existence of multiple coupling matrix solutions for each prototype filtering function, eg 16 for 8 degree cases. This case is taken as an example to demonstrate the use of the synthesis method \u2013 yielding one solution suitable for dual-mode realiz ation and one where some couplings are small enough to neglect. Index Terms \u2014 Coupling matrix, filter synthesis, Groebner basis, inverted characteristic, multiple solutions."}
{"_id": "5c876c7c26fec05ba1e3876f49f44de219838629", "title": "An Artificial Bee Colony-Based COPE Framework for Wireless Sensor Network", "text": "In wireless communication, network coding is one of the intelligent approaches to process the packets before transmitting for efficient information exchange. The goal of this work is to enhance throughput by using the intelligent technique, which may give comparatively better optimization. This paper introduces a biologically-inspired coding approach called Artificial Bee Colony Network Coding (ABC-NC), a modification in the COPE framework. The existing COPE and its variant are probabilistic approaches, which may not give good results in all of the real-time scenarios. Therefore, it needs some intelligent technique to find better packet combinations at intermediate nodes before forwarding to optimize the energy and maximize the throughput in wireless networks. This paper proposes ABC-NC over the existing COPE framework for the wireless environment."}
{"_id": "73f38ffa54ca4dff09d42cb18461187b9315a735", "title": "Fast Inference in Sparse Coding Algorithms with Applications to Object Recognition", "text": "Adaptive sparse coding methods learn a possibly overcomplete set of basis functions, such that natural image patches can be reconstructed by linearly combining a small subset of these bases. The applicability of these methods to visual object recognition tasks has been limited because of the prohibitive cost of the optimization algorithms required to compute the sparse representation. In this work we propose a simple and efficient algorithm to learn basis functions. After training, this model also provides a fast and smooth approximator to the optimal representation, achieving even better accuracy than exact sparse coding algorithms on visual object recognition tasks."}
{"_id": "6c9b39fe6b5615a012a99eac0aaaf527343feefb", "title": "What are mobile developers asking about? A large scale study using stack overflow", "text": "The popularity of mobile devices has been steadily growing in recent years. These devices heavily depend on software from the underlying operating systems to the applications they run. Prior research showed that mobile software is different than traditional, large software systems. However, to date most of our research has been conducted on traditional software systems. Very little work has focused on the issues that mobile developers face. Therefore, in this paper, we use data from the popular online Q&A site, Stack Overflow, and analyze 13,232,821 posts to examine what mobile developers ask about. We employ Latent Dirichlet allocation-based topic models to help us summarize the mobile-related questions. Our findings show that developers are asking about app distribution, mobile APIs, data management, sensors and context, mobile tools, and user interface development. We also determine what popular mobile-related issues are the most difficult, explore platform specific issues, and investigate the types (e.g., what, how, or why) of questions mobile developers ask. Our findings help highlight the challenges facing mobile developers that require more attention from the software engineering research and development communities in the future and establish a novel approach for analyzing questions asked on Q&A forums."}
{"_id": "f3a1246d3a0c7de004db9ef9f312bcedb5e22532", "title": "Self-Supervised Adversarial Hashing Networks for Cross-Modal Retrieval", "text": "Thanks to the success of deep learning, cross-modal retrieval has made significant progress recently. However, there still remains a crucial bottleneck: how to bridge the modality gap to further enhance the retrieval accuracy. In this paper, we propose a self-supervised adversarial hashing (SSAH) approach, which lies among the early attempts to incorporate adversarial learning into cross-modal hashing in a self-supervised fashion. The primary contribution of this work is that two adversarial networks are leveraged to maximize the semantic correlation and consistency of the representations between different modalities. In addition, we harness a self-supervised semantic network to discover high-level semantic information in the form of multi-label annotations. Such information guides the feature learning process and preserves the modality relationships in both the common semantic space and the Hamming space. Extensive experiments carried out on three benchmark datasets validate that the proposed SSAH surpasses the state-of-the-art methods."}
{"_id": "58fcd1e5ca46415ff5b06a84fd08160e43b13205", "title": "Arthrodesis with Intramedular Fixation in Posttraumatic Arthrosis of the Midfoot: A Case Report", "text": "We present two middle-aged men with posttraumatic arthrosis of the midfoot. Both patients suffered from severe pain and their foot was unable to bear weight. Both were operated using new fusion bolt 6.5 mm and additional screws. In their case, arthrodesis was mandatory and effective intervention. After surgical treatment both patients were pain free and able to walk without crutches and return to daily work."}
{"_id": "4c0074521b708af5009526a8bacfab9fcdf48f96", "title": "Wavelet analysis of EEG for seizure detection: Coherence and phasesynchrony estimation.", "text": "This paper deals with the wavelet analysis method f or seizure detection in EEG time series and coherence estimation. The main part of the paper pr esents the basic principles of signal decomposition in connection with the EEG frequency ba nds. Wavelet analysis method has been used for detection of seizure onset. The wavelet fi ltered signal is used for the computation of spectral power ratio. The results show that our met hod can identify pre seizure, seizure, post seizure and non seizure phases. When dealing with s eizure detection and prediction problems it is important to identify seizure precursor dynamics and necessary to identify information about onset and spread of seizures. Therefore in th e second part the coherence and phase synchrony during pre seizure, seizure, post seizure an d non seizure are computed. We expect this method to provide more insight into dynamic aspects of the seizure generating process."}
{"_id": "3b1b13271544fb55c227c980f6452bb945ae58d0", "title": "Evaluating Digital Forensic Options for the Apple iPad", "text": "The iPod Touch, iPhone and iPad from Apple are among the most popular mobile computing platforms in use today. These devices are of forensic interest because of their high adoption rate and potential for containing digital evidence. The uniformity in their design and underlying operating system (iOS) also allows forensic tools and methods to be shared across product types. This paper analyzes the tools and methods available for conducting forensic examinations of the Apple iPad. These include commercial software products, updated methodologies based on existing jailbreaking processes and the analysis of the device backup contents provided by iTunes. While many of the available commercial tools offer promise, the results of our analysis indicate that most comprehensive examination of the iPad requires jailbreaking to perform forensic duplication and manual analysis of its media content."}
{"_id": "9c3cfc2c07a1a7e3b456db463f527340221e9f73", "title": "Enhancing scholarly use of digital libraries: A comparative survey and review of bibliographic metadata ontologies", "text": "The HathiTrust Research Center (HTRC) is engaged in the development of tools that will give scholars the ability to analyze the HathiTrust digital library's 14 million volume corpus. A cornerstone of the HTRC's digital infrastructure is the workset -- a kind of scholar-built research collection intended for use with the HTRC's analytics platform. Because more than 66% of the digital corpus is subject to copyright restrictions, scholarly users remain dependent upon the descriptive accounts provided by traditional metadata records in order to identify and gather together bibliographic resources for analysis. This paper compares the MADSRDF/MODSRDF, Bibframe, schema.org, BIBO, and FaBiO ontologies by assessing their suitability for employment by the HTRC to meet scholars' needs. These include distinguishing among multiple versions of the same work; representing the complex historical and physical relationships among those versions; and identifying and providing access to finer grained bibliographic entities, e.g., poems, chapters, sections, and even smaller segments of content."}
{"_id": "ae081edc60a62b1b1d542167dbe716ce7c5ec9ff", "title": "Decreased gut microbiota diversity, delayed Bacteroidetes colonisation and reduced Th1 responses in infants delivered by caesarean section.", "text": "OBJECTIVE\nThe early intestinal microbiota exerts important stimuli for immune development, and a reduced microbial exposure as well as caesarean section (CS) has been associated with the development of allergic disease. Here we address how microbiota development in infants is affected by mode of delivery, and relate differences in colonisation patterns to the maturation of a balanced Th1/Th2 immune response.\n\n\nDESIGN\nThe postnatal intestinal colonisation pattern was investigated in 24 infants, born vaginally (15) or by CS (nine). The intestinal microbiota were characterised using pyrosequencing of 16S rRNA genes at 1 week and 1, 3, 6, 12 and 24 months after birth. Venous blood levels of Th1- and Th2-associated chemokines were measured at 6, 12 and 24 months.\n\n\nRESULTS\nInfants born through CS had lower total microbiota diversity during the first 2 years of life. CS delivered infants also had a lower abundance and diversity of the Bacteroidetes phylum and were less often colonised with the Bacteroidetes phylum. Infants born through CS had significantly lower levels of the Th1-associated chemokines CXCL10 and CXCL11 in blood.\n\n\nCONCLUSIONS\nCS was associated with a lower total microbial diversity, delayed colonisation of the Bacteroidetes phylum and reduced Th1 responses during the first 2 years of life."}
{"_id": "986b967c4bb2a7c4ef753a41fc625530828be503", "title": "Whole-function vectorization", "text": "Data-parallel programming languages are an important component in today's parallel computing landscape. Among those are domain-specific languages like shading languages in graphics (HLSL, GLSL, RenderMan, etc.) and \"general-purpose\" languages like CUDA or OpenCL. Current implementations of those languages on CPUs solely rely on multi-threading to implement parallelism and ignore the additional intra-core parallelism provided by the SIMD instruction set of those processors (like Intel's SSE and the upcoming AVX or Larrabee instruction sets). In this paper, we discuss several aspects of implementing dataparallel languages on machines with SIMD instruction sets. Our main contribution is a language- and platform-independent code transformation that performs whole-function vectorization on low-level intermediate code given by a control flow graph in SSA form. We evaluate our technique in two scenarios: First, incorporated in a compiler for a domain-specific language used in realtime ray tracing. Second, in a stand-alone OpenCL driver. We observe average speedup factors of 3.9 for the ray tracer and factors between 0.6 and 5.2 for different OpenCL kernels."}
{"_id": "62659da8c3d0a450e6a528ad13f94f56d2621759", "title": "Argumentation Theory: A Very Short Introduction", "text": "Since the time of the ancient Greek philosophers and rhetoricians, argumentation theorists have searched for the requirements that make an argument correct, by some appropriate standard of proof, by examining the errors of reasoning we make when we try to use arguments. These errors have long been called fallacies, and the logic textbooks have for over 2000 years tried to help students to identify these fallacies, and to deal with them when they are encountered. The problem was that deductive logic did not seem to be much use for this purpose, and there seemed to be no other obvious formal structure that could usefully be applied to them. The radical approach taken by Hamblin (1970) was to refashion the concept of an argument to think of it not just as an arbitrarily designated set of propositions, but as a move one party makes in a dialog to offer premises that may be acceptable to another party who doubts the conclusion of the argument. Just after Hamblin's time a school of thought called informal logic grew up that wanted to take a new practical approach to teaching students skills of critical thinking by going beyond deductive logic to seek other methods for analyzing and evaluating arguments. Around the same time, an interdisciplinary group of scholars associated with the term 'argumentation', coming from fields like speech communication, joined with the informal logic group to help build up such practical methods and apply them to real examples of argumentation (Johnson and Blair, 1987). The methods that have been developed so far are still in a process of rapid evolution. More recently, improvements in them have been due to some computer scientists joining the group, and to collaborative research efforts between argumentation theorists and computer scientists. Another recent development has been the adaption of argumentation models and techniques to fields in artificial intelligence, like multi-agent systems and artificial intelligence for legal reasoning. In a short paper, it is not possible to survey all these developments. The best that can be done is to offer an introduction to some of the basic concepts and methods of argumentation theory as they have evolved to the present point, and to briefly indicate some problems and limitations in them. 1. Arguments and Argumentation There are four tasks undertaken by argumentation, or informal logic, as it is also often called: identification, analysis, evaluation and invention. The task of identification \u2026"}
{"_id": "62feb51dbb8c3a94cbfc91c950f39dc2c7506e1a", "title": "Super Normal Vector for Activity Recognition Using Depth Sequences", "text": "This paper presents a new framework for human activity recognition from video sequences captured by a depth camera. We cluster hypersurface normals in a depth sequence to form the polynormal which is used to jointly characterize the local motion and shape information. In order to globally capture the spatial and temporal orders, an adaptive spatio-temporal pyramid is introduced to subdivide a depth video into a set of space-time grids. We then propose a novel scheme of aggregating the low-level polynormals into the super normal vector (SNV) which can be seen as a simplified version of the Fisher kernel representation. In the extensive experiments, we achieve classification results superior to all previous published results on the four public benchmark datasets, i.e., MSRAction3D, MSRDailyActivity3D, MSRGesture3D, and MSRActionPairs3D."}
{"_id": "a90b9b5edac31a4320f2a003fef519a399b67f6b", "title": "A Seed-Based Method for Generating Chinese Confusion Sets", "text": "In natural language, people often misuse a word (called a \u201cconfused word\u201d) in place of other words (called \u201cconfusing words\u201d). In misspelling corrections, many approaches to finding and correcting misspelling errors are based on a simple notion called a \u201cconfusion set.\u201d The confusion set of a confused word consists of confusing words. In this article, we propose a new method of building Chinese character confusion sets.\n Our method is composed of two major phases. In the first phase, we build a list of seed confusion sets for each Chinese character, which is based on measuring similarity in character pinyin or similarity in character shape. In this phase, all confusion sets are constructed manually, and the confusion sets are organized into a graph, called a \u201cseed confusion graph\u201d (SCG), in which vertices denote characters and edges are pairs of characters in the form (confused character, confusing character).\n In the second phase, we extend the SCG by acquiring more pairs of (confused character, confusing character) from a large Chinese corpus. For this, we use several word patterns (or patterns) to generate new confusion pairs and then verify the pairs before adding them into a SCG. Comprehensive experiments show that our method of extending confusion sets is effective. Also, we shall use the confusion sets in Chinese misspelling corrections to show the utility of our method."}
{"_id": "393ddf850d806c4eeaec52a1e2ea4c4dcc5c76ee", "title": "Learning Over Long Time Lags", "text": "The advantage of recurrent neural networks (RNNs) in learni ng dependencies between time-series data has distinguished RNNs from other deep learning models. Recent ly, many advances are proposed in this emerging field. However, there is a lack of comprehensive review on mem ory models in RNNs in the literature. This paper provides a fundamental review on RNNs and long short te rm memory (LSTM) model. Then, provides a surveys of recent advances in different memory enhancement s and learning techniques for capturing long term dependencies in RNNs."}
{"_id": "488d861cd5122ae7e4ac89ff082b159c4889870c", "title": "Ethical Artificial Intelligence", "text": "First Edition Please send typo and error reports, and any other comments, to hibbard@wisc.edu."}
{"_id": "649922386f1222a2e64c1c80bcc171431c070e92", "title": "Twitter Part-of-Speech Tagging for All: Overcoming Sparse and Noisy Data", "text": "Part-of-speech information is a pre-requisite in many NLP algorithms. However, Twitter text is difficult to part-of-speech tag: it is noisy, with linguistic errors and idiosyncratic style. We present a detailed error analysis of existing taggers, motivating a series of tagger augmentations which are demonstrated to improve performance. We identify and evaluate techniques for improving English part-of-speech tagging performance in this genre. Further, we present a novel approach to system combination for the case where available taggers use different tagsets, based on voteconstrained bootstrapping with unlabeled data. Coupled with assigning prior probabilities to some tokens and handling of unknown words and slang, we reach 88.7% tagging accuracy (90.5% on development data). This is a new high in PTB-compatible tweet part-of-speech tagging, reducing token error by 26.8% and sentence error by 12.2%. The model, training data and tools are made available."}
{"_id": "0d94a0a51cdecbdec81c97d2040ed28d3e9c96de", "title": "Photobook: Content-based manipulation of image databases", "text": "We describe the Photobook system, which is a set of interactive tools for browsing and searching images and image sequences. These query tools differ from those used in standard image databases in that they make direct use of the image content rather than relying on text annotations. Direct search on image content is made possible by use of semantics-preserving image compression, which reduces images to a small set of perceptually-significant coefficients. We discuss three types of Photobook descriptions in detail: one that allows search based on appearance, one that uses 2-D shape, and a third that allows search based on textural properties. These image content descriptions can be combined with each other and with text-based descriptions to provide a sophisticated browsing and search capability. In this paper we demonstrate Photobook on databases containing images of people, video keyframes, hand tools, fish, texture swatches, and 3-D medical data."}
{"_id": "fa2603efaf717974c77162c93d800defae61a129", "title": "Face recognition/detection by probabilistic decision-based neural network", "text": "This paper proposes a face recognition system, based on probabilistic decision-based neural networks (PDBNN). With technological advance on microelectronic and vision system, high performance automatic techniques on biometric recognition are now becoming economically feasible. Among all the biometric identification methods, face recognition has attracted much attention in recent years because it has potential to be most nonintrusive and user-friendly. The PDBNN face recognition system consists of three modules: First, a face detector finds the location of a human face in an image. Then an eye localizer determines the positions of both eyes in order to generate meaningful feature vectors. The facial region proposed contains eyebrows, eyes, and nose, but excluding mouth (eye-glasses will be allowed). Lastly, the third module is a face recognizer. The PDBNN can be effectively applied to all the three modules. It adopts a hierarchical network structures with nonlinear basis functions and a competitive credit-assignment scheme. The paper demonstrates a successful application of PDBNN to face recognition applications on two public (FERET and ORL) and one in-house (SCR) databases. Regarding the performance, experimental results on three different databases such as recognition accuracies as well as false rejection and false acceptance rates are elaborated. As to the processing speed, the whole recognition process (including PDBNN processing for eye localization, feature extraction, and classification) consumes approximately one second on Sparc10, without using hardware accelerator or co-processor."}
{"_id": "a6f1dfcc44277d4cfd8507284d994c9283dc3a2f", "title": "Eigenfaces for Recognition", "text": "We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as \"eigenfaces,\" because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture."}
{"_id": "b217788dd6d274ad391ee950e6f6a34033bd2fc7", "title": "The multilayer perceptron as an approximation to a Bayes optimal discriminant function", "text": "The multilayer perceptron, when trained as a classifier using backpropagation, is shown to approximate the Bayes optimal discriminant function. The result is demonstrated for both the two-class problem and multiple classes. It is shown that the outputs of the multilayer perceptron approximate the a posteriori probability functions of the classes being trained. The proof applies to any number of layers and any type of unit activation function, linear or nonlinear."}
{"_id": "00ef4151ae7dfd201b326afbe9c112fac91f9872", "title": "An Efficient Active Learning Framework for New Relation Types", "text": "Supervised training of models for semantic relation extraction has yielded good performance, but at substantial cost for the annotation of large training corpora. Active learning strategies can greatly reduce this annotation cost. We present an efficient active learning framework that starts from a better balance between positive and negative samples, and boosts training efficiency by interleaving self-training and co-testing. We also studied the reduction of annotation cost by enforcing argument type constraints. Experiments show a substantial speed-up by comparison to the previous state-of-the-art pure co-testing active learning framework. We obtain reasonable performance with only 150 labels for individual ACE 2004 relation"}
{"_id": "1b102e8fcf68da7b0d7da16b71a07376cba22f6d", "title": "Complications during root canal irrigation--literature review and case reports.", "text": "LITERATURE REVIEW AND CASE REPORTS: The literature concerning the aetiology, symptomatology and therapy of complications during root canal irrigation is reviewed. Three cases of inadvertent injection of sodium hypochlorite and hydrogen peroxide beyond the root apex are presented. Clinical symptoms are discussed, as well as preventive and therapeutic considerations."}
{"_id": "5c97f52213c3414b70b9f507619669dfcc1749f9", "title": "Neural Machine Translation Advised by Statistical Machine Translation", "text": "Neural Machine Translation (NMT) is a new approach to machine translation that has made great progress in recent years. However, recent studies show that NMT generally produces fluent but inadequate translations (Tu et al. 2016; He et al. 2016). This is in contrast to conventional Statistical Machine Translation (SMT), which usually yields adequate but non-fluent translations. It is natural, therefore, to leverage the advantages of both models for better translations, and in this work we propose to incorporate SMT model into NMT framework. More specifically, at each decoding step, SMT offers additional recommendations of generated words based on the decoding information from NMT (e.g., the generated partial translation and attention history). Then we employ an auxiliary classifier to score the SMT recommendations and a gating function to combine the SMT recommendations with NMT generations, both of which are jointly trained within the NMT architecture in an end-to-end manner. Experimental results on Chinese-English translation show that the proposed approach achieves significant and consistent improvements over state-of-the-art NMT and SMT systems on multiple NIST test sets."}
{"_id": "aaa9d12640ec6f9d1d37333141c761c902d2d280", "title": "Leveraging Wikipedia Table Schemas for Knowledge Graph Augmentation", "text": "General solutions to augment Knowledge Graphs (KGs) with facts extracted from Web tables aim to associate pairs of columns from the table with a KG relation based on the matches between pairs of entities in the table and facts in the KG. These approaches suffer from intrinsic limitations due to the incompleteness of the KGs. In this paper we investigate an alternative solution, which leverages the patterns that occur on the schemas of a large corpus of Wikipedia tables. Our experimental evaluation, which used DBpedia as reference KG, demonstrates the advantages of our approach over state-of-the-art solutions and reveals that we can extract more than 1.7M of facts with an estimated accuracy of 0.81 even from tables that do not expose any fact on the KG."}
{"_id": "2e7539ce290fae45acf5465739054e99fb6f5cc8", "title": "Breaking out of the box of recommendations: from items to packages", "text": "Classical recommender systems provide users with a list of recommendations where each recommendation consists of a single item, e.g., a book or DVD. However, several applications can benefit from a system capable of recommending packages of items, in the form of sets. Sample applications include travel planning with a limited budget (price or time) and twitter users wanting to select worthwhile tweeters to follow given that they can deal with only a bounded number of tweets. In these contexts, there is a need for a system that can recommend top-k packages for the user to choose from.\n Motivated by these applications, we consider composite recommendations, where each recommendation comprises a set of items. Each item has both a value (rating) and a cost associated with it, and the user specifies a maximum total cost (budget) for any recommended set of items. Our composite recommender system has access to one or more component recommender system, focusing on different domains, as well as to information sources which can provide the cost associated with each item. Because the problem of generating the top recommendation (package) is NP-complete, we devise several approximation algorithms for generating top-k packages as recommendations. We analyze their efficiency as well as approximation quality. Finally, using two real and two synthetic data sets, we subject our algorithms to thorough experimentation and empirical analysis. Our findings attest to the efficiency and quality of our approximation algorithms for top-k packages compared to exact algorithms."}
{"_id": "4ee0ad8e256523256c8d21790189388ed4beca7e", "title": "Guided filtering for PRNU-based localization of small-size image forgeries", "text": "PRNU-based techniques guarantee a good forgery detection performance irrespective of the specific type of forgery. The presence or absence of the camera PRNU pattern is detected by a correlation test. Given the very low power of the PRNU signal, however, the correlation must be averaged over a pretty large window, reducing the algorithm's ability to reveal small forgeries. To improve resolution, we estimate correlation with a spatially adaptive filtering technique, with weights computed over a suitable pilot image. Implementation efficiency is achieved by resorting to the recently proposed guided filters. Experiments prove that the proposed filtering strategy allows for a much better detection performance in the case of small forgeries."}
{"_id": "71b5e96c643c31a3e2efd90e932ce2fa176a65e7", "title": "Intel MPX Explained: An Empirical Study of Intel MPX and Software-based Bounds Checking Approaches", "text": "Memory-safety violations are a prevalent cause of both reliability and security vulnerabilities in systems software written in unsafe languages like C/C++. Unfortunately, all the existing software-based solutions to this problem exhibit high performance overheads preventing them from wide adoption in production runs. To address this issue, Intel recently released a new ISA extension\u2014Memory Protection Extensions (Intel MPX), a hardware-assisted full-stack solution to protect against memory safety violations. In this work, we perform an exhaustive study of the Intel MPX architecture to understand its advantages and caveats. We base our study along three dimensions: (a) performance overheads, (b) security guarantees, and (c) usability issues. To put our results in perspective, we compare Intel MPX with three prominent software-based approaches: (1) trip-wire\u2014AddressSanitizer, (2) objectbased\u2014SAFECode, and (3) pointer-based\u2014SoftBound. Our main conclusion is that Intel MPX is a promising technique that is not yet practical for widespread adoption. Intel MPX\u2019s performance overheads are still high (~50% on average), and the supporting infrastructure has bugs which may cause compilation or runtime errors. Moreover, we showcase the design limitations of Intel MPX: it cannot detect temporal errors, may have false positives and false negatives in multithreaded code, and its restrictions on memory layout require substantial code changes for some programs. This paper presents only the general discussion and aggregated data; for the complete evaluation, please see the supporting website: https://Intel-MPX.github.io/. Evaluation plots and section headings have hyperlinks to the complete experimental description and results."}
{"_id": "c93fe35d8888f296a095b906cca26ffac991aa75", "title": "Childhood predictors differentiate life-course persistent and adolescence-limited antisocial pathways among males and females.", "text": "This article reports a comparison on childhood risk factors of males and females exhibiting childhood-onset and adolescent-onset antisocial behavior, using data from the Dunedin longitudinal study. Childhood-onset delinquents had childhoods of inadequate parenting, neurocognitive problems, and temperament and behavior problems, whereas adolescent-onset delinquents did not have these pathological backgrounds. Sex comparisons showed a male-to-female ratio of 10:1 for childhood-onset delinquency but a sex ratio of only 1.5:1 for adolescence-onset delinquency. Showing the same pattern as males, childhood-onset females had high-risk backgrounds but adolescent-onset females did not. These findings are consistent with core predictions from the taxonomic theory of life-course persistent and adolescence-limited antisocial behavior."}
{"_id": "776584e054bd8ba1ff6c4906eb947fc0abb0abc3", "title": "Efficient aircraft spare parts inventory management under demand uncertainty", "text": "In airline industries, the aircraft maintenance cost takes up about 13% of the total operating cost. It can be reduced by a good planning. Spare parts inventories exist to serve the maintenance planning. Compared with commonly used reorder point system (ROP) and forecasting methods which only consider historical data, this paper presents two non-linear programming models which predict impending demands based on installed parts failure distribution. The optimal order time and order quantity can be found by minimizing total cost. The first basic mathematical model assumes shortage period starts from mean time to failure (MTTF). An iteration method and GAMS are used to solve this model. The second improved mathematical model takes into account accurate shortage time. Due to its complexity, only GAMS is applied in solution methodology. Both models can be proved effective in cost reduction through revised numerical examples and their results. Comparisons of the two models are also discussed."}
{"_id": "86ba3c7141a9b9d22293760d7c96e68074f4ef65", "title": "CAPTCHA Design: Color, Usability, and Security", "text": "Most user interfaces use color, which can greatly enhance their design. Because the use of color is typically a usability issue, it rarely causes security failures. However, using color when designing CAPTCHAs, a standard security technology that many commercial websites apply widely, can have an impact on usability and interesting but critical implications for security. Here, the authors examine some CAPTCHAs to determine whether their use of color negatively affects their usability, security, or both."}
{"_id": "3fedbe4e3e0577c7e895d9c968f512a30aada47a", "title": "Development and validation of measures of social phobia scrutiny fear and social interaction anxiety.", "text": "The development and validation of the Social Phobia Scale (SPS) and the Social Interaction Anxiety Scale (SIAS) two companion measures for assessing social phobia fears is described. The SPS assesses fear of being scrutinised during routine activities (eating, drinking, writing, etc.), while the SIAS assesses fear of more general social interaction, the scales corresponding to the DSM-III-R descriptions of Social Phobia--Circumscribed and Generalised types, respectively. Both scales were shown to possess high levels of internal consistency and test-retest reliability. They discriminated between social phobia, agoraphobia and simple phobia samples, and between social phobia and normal samples. The scales correlated well with established measures of social anxiety, but were found to have low or non-significant (partial) correlations with established measures of depression, state and trait anxiety, locus of control, and social desirability. The scales were found to change with treatment and to remain stable in the face of no-treatment. It appears that these scales are valid, useful, and easily scored measures for clinical and research applications, and that they represent an improvement over existing measures of social phobia."}
{"_id": "450b9397f371e08dc775b81882fc536be73bb06d", "title": "Using Machine Learning to Refine Black-Box Test Specifications and Test Suites", "text": "In the context of open source development or software evolution, developers often face test suites which have been developed with no apparent rationale and which may need to be augmented or refined to ensure sufficient dependability, or even reduced to meet tight deadlines. We refer to this process as the re-engineering of test suites. It is important to provide both methodological and tool support to help people understand the limitations of test suites and their possible redundancies, so as to be able to refine them in a cost effective manner. To address this problem in the case of black-box testing, we propose a methodology based on machine learning that has shown promising results on a case study."}
{"_id": "8c3930ca8183c9bf8f2bfbe112717df1475287b9", "title": "An architecture for the aggregation and analysis of scholarly usage data", "text": "Although recording of usage data is common in scholarly information services, its exploitation for the creation of value-added services remains limited due to concerns regarding, among others, user privacy, data validity, and the lack of accepted standards for the representation, sharing and aggregation of usage data. This paper presents a technical, standards-based architecture for sharing usage information, which we have designed and implemented. In this architecture, OpenURL-compliant linking servers aggregate usage information of a specific user community as it navigates the distributed information environment that it has access to. This usage information is made OAI-PMH harvestable so that usage information exposed by many linking servers can be aggregated to facilitate the creation of value-added services with a reach beyond that of a single community or a single information service. This paper also discusses issues that were encountered when implementing the proposed approach, and it presents preliminary results obtained from analyzing a usage data set containing about 3,500,000 requests aggregated by a federation of linking servers at the California State University system over a 20 month period."}
{"_id": "b3cca6cf9b74f9ecdcf716864b957696e8d002e2", "title": "Optimising thermal efficiency of direct contact membrane distillation by brine recycling for small-scale seawater desalination Revised Manuscript Submitted to Desalination", "text": "A technique to optimise thermal efficiency using brine recycling during direct contact membrane distillation (DCMD) of seawater was investigated. By returning the hot brine to the feed tank, the system water recovery could be increased and the sensible heat of the hot brine was recovered to improve thermal efficiency. The results show that in the optimal water recovery range of 20 to 60% facilitated by brine recycling, the specific thermal energy consumption of the process could be reduced by more than half. It is also noteworthy that within this optimal water recovery range, the risk of membrane scaling is negligible DCMD of seawater at a constant water recovery of 70% was achieved for over 24. h without any scale formation on the membrane surface. In contrast, severe membrane scaling was observed when water recovery reached 80%. In addition to water recovery, other operating conditions such as feed temperature and water circulation rates could influence the process thermal efficiency. Increasing the feed temperature and reducing the circulation flow rates increased thermal efficiency. Increasing the feed temperature could also mitigate the negative effect of elevated feed concentration on the distillate flux, particularly at a high water recovery. Disciplines Engineering | Science and Technology Studies Publication Details Duong, H. C., Cooper, P., Nelemans, B., Cath, T. Y. & Nghiem, L. D. (2015). Optimising thermal efficiency of direct contact membrane distillation by brine recycling for small-scale seawater desalination. Desalination, 374 1-9. This journal article is available at Research Online: http://ro.uow.edu.au/eispapers/4237 1 Optimising thermal efficiency of direct contact membrane distillation by brine recycling for small-scale seawater desalination Revised Manuscript Submitted to"}
{"_id": "f01b4ef5e825cb73836c58a308ea6b7680cc5537", "title": "A self-charging power unit by integration of a textile triboelectric nanogenerator and a flexible lithium-ion battery for wearable electronics.", "text": "A novel integrated power unit realizes both energy harvesting and energy storage by a textile triboelectric nanogenerator (TENG)-cloth and a flexible lithium-ion battery (LIB) belt, respectively. The mechanical energy of daily human motion is converted into electricity by the TENG-cloth, sustaining the energy of the LIB belt to power wearable smart electronics."}
{"_id": "647cb3825baecb6fab8b098166d5a446f7711f9b", "title": "Learning Plannable Representations with Causal InfoGAN", "text": "In recent years, deep generative models have been shown to \u2018imagine\u2019 convincing high-dimensional observations such as images, audio, and even video, learning directly from raw data. In this work, we ask how to imagine goal-directed visual plans \u2013 a plausible sequence of observations that transition a dynamical system from its current configuration to a desired goal state, which can later be used as a reference trajectory for control. We focus on systems with high-dimensional observations, such as images, and propose an approach that naturally combines representation learning and planning. Our framework learns a generative model of sequential observations, where the generative process is induced by a transition in a low-dimensional planning model, and an additional noise. By maximizing the mutual information between the generated observations and the transition in the planning model, we obtain a low-dimensional representation that best explains the causal nature of the data. We structure the planning model to be compatible with efficient planning algorithms, and we propose several such models based on either discrete or continuous states. Finally, to generate a visual plan, we project the current and goal observations onto their respective states in the planning model, plan a trajectory, and then use the generative model to transform the trajectory to a sequence of observations. We demonstrate our method on imagining plausible visual plans of rope manipulation3."}
{"_id": "10e6a38a158d6f2f6a9c12343847da78dd72c1f9", "title": "A gut (microbiome) feeling about the brain.", "text": "PURPOSE OF REVIEW\nThere is an increasing realization that the microorganisms which reside within our gut form part of a complex multidirectional communication network with the brain known as the microbiome-gut-brain axis. In this review, we focus on recent findings which support a role for this axis in modulating neurodevelopment and behavior.\n\n\nRECENT FINDINGS\nA growing body of research is uncovering that under homeostatic conditions and in response to internal and external stressors, the bacterial commensals of our gut can signal to the brain through a variety of mechanisms to influence processes such neurotransmission, neurogenesis, microglia activation, and modulate behavior. Moreover, the mechanisms underlying the ability of stress to modulate the microbiota and also for microbiota to change the set point for stress sensitivity are being unraveled. Dysregulation of the gut microbiota composition has been identified in a number of psychiatric disorders, including depression. This has led to the concept of bacteria that have a beneficial effect upon behavior and mood (psychobiotics) being proposed for potential therapeutic interventions.\n\n\nSUMMARY\nUnderstanding the mechanisms by which the bacterial commensals of our gut are involved in brain function may lead to the development of novel microbiome-based therapies for these mood and behavioral disorders."}
{"_id": "cffd812661ce822be4cf3735b7ac8bb79798b59c", "title": "A Dual-Polarized Pattern Reconfigurable Yagi Patch Antenna for Microbase Stations", "text": "A two-port pattern reconfigurable three-layered Yagi-Uda patch antenna with \u00b145\u00b0 dual-polarization characteristic is presented. A driven patch (DP) and two large parasitic patches (LPPs) are printed on the top side of the middle layer, and the middle and bottom layers share a common metal ground. Two microstrip feedlines printed orthogonally on the bottom side of the bottom layer are used to feed the DP through two H-shaped slots etched in the ground. The LPPs are connected to/disconnected from the ground controlled by switches. By adjusting the connection states of the LPPs, one wide-beam mode and three narrow-beam modes can be obtained in both polarizations. A small parasitic patch is printed on the top side of the top layer to improve the pattern coherence of the two polarizations. This antenna is studied by both simulation and measurement. The measured common bandwidth of the four modes in both polarizations is 3.32\u20133.51 GHz, and the isolations between two polarizations in all the modes are higher than 20 dB. The low-profile antenna is very suitable for microbase-station applications."}
{"_id": "9810d71b1051d421b068486548949d721ab84cd0", "title": "Dynamic Malware Detection Using API Similarity", "text": "Hackers create different types of Malware such as Trojans which they use to steal user-confidential information (e.g. credit card details) with a few simple commands, recent malware however has been created intelligently and in an uncontrolled size, which puts malware analysis as one of the top important subjects of information security. This paper proposes an efficient dynamic malware-detection method based on API similarity. This proposed method outperform the traditional signature-based detection method. The experiment evaluated 197 malware samples and the proposed method showed promising results of correctly identified malware."}
{"_id": "21ac0c53e1df2ffea583e45c091b7f262e0a9533", "title": "Interactive Block Games for Assessing Children's Cognitive Skills: Design and Preliminary Evaluation", "text": "Background: This paper presents design and results from preliminary evaluation of Tangible Geometric Games (TAG-Games) for cognitive assessment in young children. The TAG-Games technology employs a set of sensor-integrated cube blocks, called SIG-Blocks, and graphical user interfaces for test administration and real-time performance monitoring. TAG-Games were administered to children from 4 to 8 years of age for evaluating preliminary efficacy of this new technology-based approach. Methods: Five different sets of SIG-Blocks comprised of geometric shapes, segmented human faces, segmented animal faces, emoticons, and colors, were used for three types of TAG-Games, including Assembly, Shape Matching, and Sequence Memory. Computational task difficulty measures were defined for each game and used to generate items with varying difficulty. For preliminary evaluation, TAG-Games were tested on 40 children. To explore the clinical utility of the information assessed by TAG-Games, three subtests of the age-appropriate Wechsler tests (i.e., Block Design, Matrix Reasoning, and Picture Concept) were also administered. Results: Internal consistency of TAG-Games was evaluated by the split-half reliability test. Weak to moderate correlations between Assembly and Block Design, Shape Matching and Matrix Reasoning, and Sequence Memory and Picture Concept were found. The computational measure of task complexity for each TAG-Game showed a significant correlation with participants' performance. In addition, age-correlations on TAG-Game scores were found, implying its potential use for assessing children's cognitive skills autonomously."}
{"_id": "8f50078de9d41bd756cb0dcb346a916cc9d50ce6", "title": "Answering Top-k Queries with Multi-Dimensional Selections: The Ranking Cube Approach", "text": "Observed in many real applications, a top-k query often consists of two components to reflect a user's preference: a selection condition and a ranking function. A user may not only propose ad hoc ranking functions, but also use different interesting subsets of the data. In many cases, a user may want to have a thorough study of the data by initiating a multi-dimensional analysis of the top-k query results. Previous work on top-k query processing mainly focuses on optimizing data access according to the ranking function only. The problem of efficient answering top-k queries with multi-dimensional selections has not been well addressed yet.This paper proposes a new computational model, called ranking cube, for efficient answering top-k queries with multi-dimensional selections. We define a rank-aware measure for the cube, capturing our goal of responding to multi-dimensional ranking analysis. Based on the ranking cube, an efficient query algorithm is developed which progressively retrieves data blocks until the top-k results are found. The curse of dimensionality is a well-known challenge for the data cube and we cope with this difficulty by introducing a new technique of ranking fragments. Our experiments on Microsoft's SQL Server 2005 show that our proposed approaches have significant improvement over the previous methods."}
{"_id": "f88fc26bef96d2c430eb758f6e925824b82d8139", "title": "IC-CRIME : A Collaborative , Web-Based , 3 D System for the Investigation , Analysis , and Annotation of Crime Scenes", "text": "Modern day crime scene investigation methods are continually being enhanced by the application of new technologies to improve the analysis and presentation of crime scene information, helping to solve and prosecute crimes. This paper describes a new system called IC-CRIME that integrates several new technologies to meet both of these needs. IC-CRIME employs laser scanning to produce realistic 3D models of a crime scene which it then incorporates into a 3D virtual environment. ICCRIME creates an integrated platform that encourages investigators and forensic experts to view, explore, and annotate these models at any time from any web browser. A key goal of the system is to expand opportunities for collaboration between professionals whose geographic locations and time commitments would otherwise limit their ability to contribute to the analysis of the actual crime scene."}
{"_id": "809587390027486210dd5a6c78c52c4b1a20cc9f", "title": "Investment and Financing Constraints : Evidence from the Funding of Corporate Pension Plans", "text": "I exploit sharply nonlinear funding rules for defined benefit pension plans in order to identify the dependence of corporate investment on internal financial resources in a large sample. Capital expenditures decline with mandatory contributions to defined benefit pension plans, even when controlling for correlations between the pension funding status itself and the firm\u2019s unobserved investment opportunities. The effect is particularly evident among firms that face financing constraints based on observable variables such as credit ratings. Investment also displays strong negative correlations with the part of mandatory contributions resulting solely from unexpected asset market movements. * Graduate School of Business, University of Chicago. I thank James Poterba, Stew Myers, Jonathan Gruber, Dirk Jenter, Heitor Almeida, Daniel Bergstresser, Mihir Desai, Michael Greenstone, Robin Greenwood, David Scharfstein, Antoinette Schoar, Jeremy Stein, and Amir Sufi for helpful comments and discussions. I would also like to thank Rob Stambaugh (the editor) and the referees. This work benefited greatly from the thoughts of economics seminar participants at MIT, Harvard, Princeton, and the Kennedy School of Government, and of finance seminar participants at the University of Chicago, the University of Pennsylvania (Wharton), Harvard, Stanford, Columbia, NYU (Stern), Dartmouth (Tuck), Michigan (Ross), Boston College (Carroll), Northwestern (Kellogg), Duke (Fuqua), and the 2004 Western Finance Association meetings in Vancouver. I am grateful to the Center for Retirement Research at Boston College and the National Bureau of Economic Research for financial support."}
{"_id": "30d7e9fc1bc237864a505e8a30b33431a2a8aaa9", "title": "Abstractive Multi-Document Summarization : An Overview", "text": "In recent times, the necessity of generating single document summary has gained popularity among the researchers due to its extensive applicability. The text summarization can be categorized with different approaches like: extractive and abstractive from single document or multi document, goal of text summarization (intent, focus and coverage), characteristic of text summarization (frequency-based, knowledge-based and discoursebased), level of processing (surface level, entities level and discourse level) and kind of information (lexicon, structure information and deep understanding). Recently, the efforts in research are transferred from single document summarization to multi document summarization Multi-document summarization considerably differs from single in issues related to compression, speed, redundancy and passage selection are critical in the formation of useful summaries. In this paper, we review the techniques that have done for multi document summarization. Next, we describe evaluation method In conclusion; we propose our future work for multi document"}
{"_id": "e352612f51ad34c764f128cc62e91b51fe7a9759", "title": "Point & Teleport Locomotion Technique for Virtual Reality", "text": "With the increasing popularity of virtual reality (VR) and new devices getting available with relatively lower costs, more and more video games have been developed recently. Most of these games use first person interaction techniques since it is more natural for Head Mounted Displays (HMDs). One of the most widely used interaction technique in VR video games is locomotion that is used to move user's viewpoint in virtual environments. Locomotion is an important component of video games since it can have a strong influence on user experience. In this study, a new locomotion technique we called \"Point & Teleport\" is described and compared with two commonly used VR locomotion techniques of walk-in-place and joystick. In this technique, users simply point where they want to be in virtual world and they are teleported to that position. As a major advantage, it is not expected to introduce motion sickness since it does not involve any visible translational motion. In this study, two VR experiments were designed and performed to analyze the Point & Teleport technique. In the first experiment, Point & Teleport was compared with walk-in-place and joystick locomotion techniques. In the second experiment, a direction component was added to the Point & Teleport technique so that the users could specify their desired orientation as well. 16 users took part in both experiments. Results indicated that Point & Teleport is a fun and user friendly locomotion method whereas the additional direction component degraded the user experience."}
{"_id": "33c859730444bd835dbf5f0956110f45a735ee89", "title": "The \"visual cliff\".", "text": "This simple apparatus is used to investigate depth perception in different animals. All species thus far tested seem able to perceive and avoid a sharp drop as soon as they can move about. Human infants at the creeping and toddling stage are notoriously prone to falls from more or less high places. They must be kept from going over the brink by side panels on their cribs, gates on stairways and the vigilance of adults. As their muscular coordination matures they begin to avoid such accidents on their own. Common sense might suggest that the child learns to recognize falling-off places by experience\u2013that is, by falling and hurting himself. But is experience really the teacher? Or is the ability to perceive and avoid a brink part of the child's original endowment? Answers to these questions will throw light on the genesis of space perception in general. Height perception is a special case of distance perception: information in the light reaching the eye provides stimuli that can be utilized for the discrimination both of depth and of receding distance on the level. At what stage of development can an animal respond effectively to these stimuli? Does the onset of such response vary with animals of different species and habitats? At Cornell University we have been investigating these problems by means of a simple experimental setup that we call a visual cliff. The cliff is a simulated one and hence makes it possible not only to control the optical and other stimuli (auditory and tactual, for instance) but also to protect the experimental subjects. It consists of a board laid across a large sheet of heavy glass which is supported a foot or more above the floor. On one side of the board a sheet of patterned material is placed flush against the undersurface of the glass, giving the glass the appearance as well as the substance of solidity. On the other side a sheet of the same material is laid upon the floor; this side of the board thus becomes the visual cliff (Fig. 1). The Classic Visual Cliff Experiment This young explorer has the good sense not to crawl out onto an apparently unsupported surface, even when Mother beckons from the other side. Rats, pups, kittens, and chicks also will not try to walk across to the other side. (So don't bother asking why the chicken crossed the visual cliff.) \u2026"}
{"_id": "76fee58c7308b185db38d4177d04902172993561", "title": "Improved Redirection with Distractors: A large-scale-real-walking locomotion interface and its effect on navigation in virtual environments", "text": "Users in virtual environments often find navigation more difficult than in the real world. Our new locomotion interface, Improved Redirection with Distractors (IRD), enables users to walk in larger-than-tracked space VEs without predefined waypoints. We compared IRD to the current best interface, really walking, by conducting a user study measuring navigational ability. Our results show that IRD users can really walk through VEs that are larger than the tracked space and can point to targets and complete maps of VEs no worse than when really walking."}
{"_id": "984031f16dded96a8ae48a6dd9d49edefa98aa46", "title": "Quantifying immersion in virtual reality", "text": "Virtual Reality (VR) has generated much excitement but little formal proof that it is useful. Because VR interfaces are difficult and expensive to build, the computer graphics community needs to be able to predict which applications will benefit from VR. In this paper, we show that users with a VR interface complete a search task faster than users with a stationary monitor and a hand-based input device. We placed users in the center of the virtual room shown in Figure 1 and told them to look for camouflaged targets. VR users did not do significantly better than desktop users. However, when asked to search the room and conclude if a target existed, VR users were substantially better at determining when they had searched the entire room. Desktop users took 41% more time, re-examining areas they had already searched. We also found a positive transfer of training from VR to stationary displays and a negative transfer of training from stationary displays to VR."}
{"_id": "13a92a59545eefbaf5ec3adf6000ec64f4bb73a5", "title": "When is it biased?: assessing the representativeness of twitter's streaming API", "text": "Twitter shares a free 1% sample of its tweets through the \"Streaming API\". Recently, research has pointed to evidence of bias in this source. The methodologies proposed in previous work rely on the restrictive and expensive Firehose to find the bias in the Streaming API data. We tackle the problem of finding sample bias without costly and restrictive Firehose data. We propose a solution that focuses on using an open data source to find bias in the Streaming API."}
{"_id": "fe8e40c0d5c6c07adcbc2dff18423da0c88582fc", "title": "Information security culture: A management perspective", "text": "Information technology has become an integral part of modern life. Today, the use of information permeates every aspect of both business and private lives. Most organizations need information systems to survive and prosper and thus need to be serious about protecting their information assets. Many of the processes needed to protect these information assets are, to a large extent, dependent on human cooperated behavior. Employees, whether intentionally or through negligence, often due to a lack of knowledge, are the greatest threat to information security. It has become widely accepted that the establishment of an organizational sub-culture of information security is key to managing the human factors involved in information security. This paper briefly examines the generic concept of corporate culture and then borrows from the management and economical sciences to present a conceptual model of information security culture. The presented model incorporates the concept of elasticity from the economical sciences in order to show how various variables in an information security culture influence each other. The purpose of the presented model is to facilitate conceptual thinking and argumentation about information security culture. a 2009 Elsevier Ltd. All rights reserved."}
{"_id": "4528c1164d43a004ea8a254b4cc240d8d9b84587", "title": "Designing Anthropomorphic Robot Hand With Active Dual-Mode Twisted String Actuation Mechanism and Tiny Tension Sensors", "text": "In this letter, using the active dual-mode twisted string actuation (TSA) mechanism and tiny tension sensors on the tendon strings, an anthropomorphic robot hand is newly designed in a compact manner. Thanks to the active dual-mode TSA mechanism, which is a miniaturized transmission, the proposed robot hand dose has a wide range of operation in terms of the grasping force and speed. It experimentally produces maximally the fingertip force 31.3 N and minimally the closing time of 0.5 s in average. Also, tiny tension sensor with the dimension of 4.7 (width)\u00a0\u00d7\u00a04.0 (height)\u00a0\u00d7\u00a010.75 (length) mm is newly presented and embedded at the fingertips in order to measure the tension on the tendon strings, which would allow the grasping force control. The kinetic and kinematic analyses are performed and the performance is verified by experiments."}
{"_id": "46c55aa2dac524287cf4a61656967c1ff8b7713a", "title": "Control of a Quadrotor Using a Smart Self-Tuning Fuzzy PID Controller", "text": "This paper deals with the modelling, simulation-based controller design and path planning of a four rotor helicopter known as a quadrotor. All the drags, aerodynamic, coriolis and gyroscopic effect are neglected. A Newton-Euler formulation is used to derive the mathematical model. A smart self-tuning fuzzy PID controller based on an EKF algorithm is proposed for the attitude and position control of the quadrotor. The PID gains are tuned using a self-tuning fuzzy algorithm. The self-tuning of fuzzy parameters is achieved based on an EKF algorithm. A smart selection technique and exclusive tuning of active fuzzy parameters is proposed to reduce the computational time. Dijkstra\u2019s algorithm is used for path planning in a closed and known environment filled with obstacles and/or boundaries. The Dijkstra algorithm helps avoid obstacle and find the shortest route from a given initial position to the final position."}
{"_id": "7a3623df776b1f1347a0a4b3c60f863469b838be", "title": "A Flooding Warning System based on RFID Tag Array for Energy Facility", "text": "Passive radio-frequency identification (RFID) tags are widely used due to its economic cost and satisfactory performance. So far, passive RFID tags are mostly applied to identify certain objects, such as underground pipe identification under buried conditions. However, there is lack of study on the application of buried tags for further application. In this paper, the performances of buried RFID tags are studied to develop a flooding warning system based on RFID tag array for energy facility such as power stations. In this study, the corresponding signal strength received by the RFID reader is evaluated when the RFID tags are buried by seven materials respectively. The results show that flood warning detector can be constructed using passive RFID tag array and reader."}
{"_id": "b519835c855de750ad52907e24cb6a0904bee3de", "title": "Cognitive Control Deficits in Schizophrenia: Mechanisms and Meaning", "text": "Although schizophrenia is an illness that has been historically characterized by the presence of positive symptomatology, decades of research highlight the importance of cognitive deficits in this disorder. This review proposes that the theoretical model of cognitive control, which is based on contemporary cognitive neuroscience, provides a unifying theory for the cognitive and neural abnormalities underlying higher cognitive dysfunction in schizophrenia. To support this model, we outline converging evidence from multiple modalities (eg, structural and functional neuroimaging, pharmacological data, and animal models) and samples (eg, clinical high risk, genetic high risk, first episode, and chronic subjects) to emphasize how dysfunction in cognitive control mechanisms supported by the prefrontal cortex contribute to the pathophysiology of higher cognitive deficits in schizophrenia. Our model provides a theoretical link between cellular abnormalities (eg, reductions in dentritic spines, interneuronal dysfunction), functional disturbances in local circuit function (eg, gamma abnormalities), altered inter-regional cortical connectivity, a range of higher cognitive deficits, and symptom presentation (eg, disorganization) in the disorder. Finally, we discuss recent advances in the neuropharmacology of cognition and how they can inform a targeted approach to the development of effective therapies for this disabling aspect of schizophrenia."}
{"_id": "280bb6d7ada9ef118899b5a1f655ae9166fb8f0b", "title": "Tampering Detection in Low-Power Smart Cameras", "text": "A desirable feature in smart cameras is the ability to autonomously detect any tampering event/attack that would prevent a clear view over the monitored scene. No matter whether tampering is due to atmospheric phenomena (e.g., few rain drops over the camera lens) or to malicious attacks (e.g., occlusions or device displacements), these have to be promptly detected to possibly activate countermeasures. Tampering detection is particularly challenging in battery-powered cameras, where it is not possible to acquire images at full-speed frame-rates, nor use sophisticated image-analysis algorithms. We here introduce a tampering-detection algorithm specifically designed for low-power smart cameras. The algorithm leverages very simple indicators that are then monitored by an outlier-detection scheme: any frame yielding an outlier is detected as tampered. Core of the algorithm is the partitioning of the scene into adaptively defined regions, that are preliminarily defined by segmenting the image during the algorithmconfiguration phase, and which shows to improve the detection of camera displacements. Experiments show that the proposed algorithm can successfully operate on sequences acquired at very low-frame rate, such as one frame every minute, with a very small computational complexity."}
{"_id": "e8a2776154f8bc7625742fade5efb71ff1c9896b", "title": "W-BAND MICROSTRIP-TO-WAVEGUIDE TRANSITION USING VIA FENCES", "text": "Abstract\u2014The paper presents integrated probe for direct coupling to the WR-10 waveguide with the use of metal filled vias on both sides of the microstrip line. Design and optimization of this novel microstripto-waveguide transition has been performed using 3-D finite element method based software HFSS (High Frequency Structure Simulator). A back-to-back transition has been fabricated and measured between 75\u2013110GHz. The measured return loss is higher than 10 dB and the insertion loss for a single microstrip-to-waveguide transition is about 1.15 dB."}
{"_id": "c72796c511e2282e4088b0652a4cce0e2da4296c", "title": "Hierarchical Discrete Distribution Decomposition for Match Density Estimation", "text": "Existing deep learning methods for pixel correspondence output a point estimate of the motion field, but do not represent the full match distribution. Explicit representation of a match distribution is desirable for many applications as it allows direct representation of the correspondence probability. The main difficulty of estimating a full probability distribution with a deep network is the high computational cost of inferring the entire distribution. In this paper, we propose Hierarchical Discrete Distribution Decomposition, dubbed HD, to learn probabilistic point and region matching. Not only can it model match uncertainty, but also region propagation. To achieve this, we estimate the hierarchical distribution of pixel correspondences at different image scales without multi-hypotheses ensembling. Despite its simplicity, our method can achieve competitive results for both optical flow and stereo matching on established benchmarks, while the estimated uncertainty is a good indicator of errors. Furthermore, the point match distribution within a region can be grouped together to propagate the whole region even if the area changes across images."}
{"_id": "a63b97291149bfed416aa9e56a21314069540a7b", "title": "A meta-analysis of working memory impairments in children with attention-deficit/hyperactivity disorder.", "text": "OBJECTIVE\nTo determine the empirical evidence for deficits in working memory (WM) processes in children and adolescents with attention-deficit/hyperactivity disorder (ADHD).\n\n\nMETHOD\nExploratory meta-analytic procedures were used to investigate whether children with ADHD exhibit WM impairments. Twenty-six empirical research studies published from 1997 to December, 2003 (subsequent to a previous review) met our inclusion criteria. WM measures were categorized according to both modality (verbal, spatial) and type of processing required (storage versus storage/manipulation).\n\n\nRESULTS\nChildren with ADHD exhibited deficits in multiple components of WM that were independent of comorbidity with language learning disorders and weaknesses in general intellectual ability. Overall effect sizes for spatial storage (effect size = 0.85, CI = 0.62 - 1.08) and spatial central executive WM (effect size = 1.06, confidence interval = 0.72-1.39) were greater than those obtained for verbal storage (effect size = 0.47, confidence interval = 0.36-0.59) and verbal central executive WM (effect size = 0.43, confidence interval = 0.24-0.62).\n\n\nCONCLUSION\nEvidence of WM impairments in children with ADHD supports recent theoretical models implicating WM processes in ADHD. Future research is needed to more clearly delineate the nature, severity, and specificity of the impairments to ADHD."}
{"_id": "73d738f1c52e41d8e60700b1aac06d80bf8d8570", "title": "Terpene synthases of oregano (Origanum vulgare L.) and their roles in the pathway and regulation of terpene biosynthesis", "text": "The aroma, flavor and pharmaceutical value of cultivated oregano (Origanum vulgare L.) is a consequence of its essential oil which consists mostly of monoterpenes and sesquiterpenes. To investigate the biosynthetic pathway to oregano terpenes and its regulation, we identified and characterized seven terpene synthases, key enzymes of terpene biosynthesis, from two cultivars of O. vulgare. Heterologous expression of these enzymes showed that each forms multiple mono- or sesquiterpene products and together they are responsible for the direct production of almost all terpenes found in O. vulgare essential oil. The correlation of essential oil composition with relative and absolute terpene synthase transcript concentrations in different lines of O. vulgare demonstrated that monoterpene synthase activity is predominantly regulated on the level of transcription and that the phenolic monoterpene alcohol thymol is derived from \u03b3-terpinene, a product of a single monoterpene synthase. The combination of heterologously-expressed terpene synthases for in vitro assays resulted in blends of mono- and sesquiterpene products that strongly resemble those found in vivo, indicating that terpene synthase expression levels directly control the composition of the essential oil. These results will facilitate metabolic engineering and directed breeding of O. vulgare cultivars with higher quantity of essential oil and improved oil composition."}
{"_id": "90c28aeb30a1632efbd9f7d0f5eb3580c10f135c", "title": "A wireless slanted optrode array with integrated micro leds for optogenetics", "text": "This paper presents a wireless-enabled, flexible optrode array with multichannel micro light-emitting diodes (\u03bc-LEDs) for bi-directional wireless neural interface. The array integrates wirelessly addressable \u03bc-LED chips with a slanted polymer optrode array for precise light delivery and neural recording at multiple cortical layers simultaneously. A droplet backside exposure (DBE) method was developed to monolithically fabricate varying-length optrodes on a single polymer platform. In vivo tests in rat brains demonstrated that the \u03bc-LEDs were inductively powered and controlled using a wireless switched-capacitor stimulator (SCS), and light-induced neural activity was recorded with the optrode array concurrently."}
{"_id": "63d440eb606c7aa4ee3c7fcd94d65af3f5c92c96", "title": "Efficient projections onto the l1-ball for learning in high dimensions", "text": "We describe efficient algorithms for projecting a vector onto the l1-ball. We present two methods for projection. The first performs exact projection in O(n) expected time, where n is the dimension of the space. The second works on vectors k of whose elements are perturbed outside the l1-ball, projecting in O(k log(n)) time. This setting is especially useful for online learning in sparse feature spaces such as text categorization applications. We demonstrate the merits and effectiveness of our algorithms in numerous batch and online learning tasks. We show that variants of stochastic gradient projection methods augmented with our efficient projection procedures outperform interior point methods, which are considered state-of-the-art optimization techniques. We also show that in online settings gradient updates with l1 projections outperform the exponentiated gradient algorithm while obtaining models with high degrees of sparsity."}
{"_id": "7482fc3f108e7a4b379292e3c5dbabefdf8706fa", "title": "Integrated on-chip inductors with electroplated magnetic yokes ( invited )", "text": "J. Appl. Phys. 111, 07E328 (2012) A single-solenoid pulsed-magnet system for single-crystal scattering studies Rev. Sci. Instrum. 83, 035101 (2012) Solution to the problem of E-cored coil above a layered half-space using the method of truncated region eigenfunction expansion J. Appl. Phys. 111, 07E717 (2012) Array of 12 coils to measure the position, alignment, and sensitivity of magnetic sensors over temperature J. Appl. Phys. 111, 07E501 (2012) Skin effect suppression for Cu/CoZrNb multilayered inductor J. Appl. Phys. 111, 07A501 (2012)"}
{"_id": "8bd59b6111c21ca9f133b2ac9f0a8b102e344076", "title": "Slow Flow: Exploiting High-Speed Cameras for Accurate and Diverse Optical Flow Reference Data", "text": "Existing optical flow datasets are limited in size and variability due to the difficulty of capturing dense ground truth. In this paper, we tackle this problem by tracking pixels through densely sampled space-time volumes recorded with a high-speed video camera. Our model exploits the linearity of small motions and reasons about occlusions from multiple frames. Using our technique, we are able to establish accurate reference flow fields outside the laboratory in natural environments. Besides, we show how our predictions can be used to augment the input images with realistic motion blur. We demonstrate the quality of the produced flow fields on synthetic and real-world datasets. Finally, we collect a novel challenging optical flow dataset by applying our technique on data from a high-speed camera and analyze the performance of the state-of-the-art in optical flow under various levels of motion blur."}
{"_id": "d6609337639ad64c59afff71637b42c0c6be7d6d", "title": "Dermoscopy of cutaneous leishmaniasis.", "text": "BACKGROUND\nDermoscopy has been proposed as a diagnostic tool in the case of skin infections and parasitosis but no specific dermoscopic criteria have been described for cutaneous leishmaniasis (CL).\n\n\nOBJECTIVES\nTo describe the dermoscopic features of CL.\n\n\nMETHODS\nDermoscopic examination (using the DermLite Foto; 3Gen, LLC, Dana Point, CA, U.S.A.) of 26 CL lesions was performed to evaluate specific dermoscopic criteria.\n\n\nRESULTS\nWe observed the following dermoscopic features: generalized erythema (100%), 'yellow tears' (53%), hyperkeratosis (50%), central erosion/ulceration (46%), erosion/ulceration associated with hyperkeratosis (38%) and 'white starburst-like pattern' (38%). Interestingly, at least one vascular structure described in skin neoplasms was observed in all cases: comma-shaped vessels (73%), linear irregular vessels (57%), dotted vessels (53%), polymorphous/atypical vessels (26%), hairpin vessels (19%), arborizing telangiectasia (11%), corkscrew vessels (7%) and glomerular-like vessels (7%). Combination of two or more different types of vascular structures was present in 23 of 26 CL lesions (88%), with a combination of two vascular structures in 13 cases (50%) and three or more in 10 cases (38%).\n\n\nCONCLUSIONS\nCharacteristic dermoscopic structures have been identified in CL. Important vascular patterns seen in melanocytic and nonmelanocytic tumours are frequently observed in this infection."}
{"_id": "e9001ea4368b808da7cabda7edd2fbd4118a3f6b", "title": "Path Planning and Controlled Crash Landing of a Quadcopter in case of a Rotor Failure", "text": "This paper presents a framework for controlled emergency landing of a quadcopter, experiencing a rotor failure, away from sensitive areas. A complete mathematical model capturing the dynamics of the system is presented that takes the asymmetrical aerodynamic load on the propellers into account. An equilibrium state of the system is calculated around which a linear time-invariant control strategy is developed to stabilize the system. By utilizing the proposed model, a specific configuration for a quadcopter is introduced that leads to the minimum power consumption during a yaw-rateresolved hovering after a rotor failure. Furthermore, given a 3D representation of the environment, an optimal flight trajectory towards a safe crash landing spot, while avoiding collision with obstacles, is developed using an RRT* approach. The cost function for determining the best landing spot consists of: (i) finding the safest landing spot with the largest clearance from the obstacles; and (ii) finding the most energy-efficient trajectory towards the landing spot. The performance of the proposed framework is tested via simulations."}
{"_id": "6dcb07839672014c294d71ee78a7c72b715e0b76", "title": "A Bibliometric Study on Culture Research in International Business", "text": "National cultures and cultural differences provide a crucial component of the context of international business (IB) research. We conducted a bibliometric study of the articles published in seven leading IB journals, over a period of three decades, to analyze how \u201cnational culture\u201d has been impacting in IB research. Co-citation mappings permit us to identify the ties binding works dealing with culture and cultural issues in IB. We identify two main clusters of research each comprising two sub-clusters, with Hofstede\u2019s (1980) work setting much of the conceptual and empirical approach on culture-related studies. One main cluster entails works on the conceptualization of culture and its dimensions and other cluster on cultural distance. This conceptual framework captures the extant IB research incorporating culture-related concepts and influences."}
{"_id": "6d23073dbb68d353f30bb97f4803cfbd66546444", "title": "Ensemble Algorithms in Reinforcement Learning", "text": "This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms."}
{"_id": "52afbe99de04878a5ac73a4adad4b1cedae8a6ce", "title": "Multiresolution Tree Networks for 3D Point Cloud Processing", "text": "We present multiresolution tree-structured networks to process point clouds for 3D shape understanding and generation tasks. Our network represents a 3D shape as a set of locality-preserving 1D ordered list of points at multiple resolutions. This allows efficient feed-forward processing through 1D convolutions, coarse-to-fine analysis through a multi-grid architecture, and it leads to faster convergence and small memory footprint during training. The proposed treestructured encoders can be used to classify shapes and outperform existing pointbased architectures on shape classification benchmarks, while tree-structured decoders can be used for generating point clouds directly and they outperform existing approaches for image-to-shape inference tasks learned using the ShapeNet dataset. Our model also allows unsupervised learning of point-cloud based shapes by using a variational autoencoder, leading to higher-quality generated shapes."}
{"_id": "ed84e205d306f831f84c6f31f7c94d83a60fe5d8", "title": "An application of explicit model predictive control to electric power assisted steering systems", "text": "This paper presents explicit model predictive control (MPC) for electric power assisted steering (EPAS) systems. Explicit MPC has the ability to alleviate the computational burdens of MPC, which is an online optimization technique, using multi-parametric quadratic programming (mp-QP). EPAS systems are being used more frequently for passenger cars because of their advantages over hydraulic steering systems in terms of performance and cost. The main objective of EPAS systems is to provide a desired motor torque; therefore, to handle this problem, explicit MPC is employed owing to the ability of MPC to deal with constraints on controls and states and of explicit MPC to do the same work offline. This paper summarizes a formulation of explicit MPC and introduces a straight-line boost curve, which is used to determine desired motor torques. A mechanical model, motion equations and the state-space form of a column-type EPAS system are also described in this paper. An linear-quadratic regulator (LQR) controller is designed so that the explicit MPC controller could be compared against it. The parameter values are respectively varied to demonstrate the robustness of the proposed explicit MPC controller. The proposed control strategy shows better performance than that of the LQR as well as a reduction in the required computational complexity."}
{"_id": "538d9235d0af4d02f45d17c9663e98afdf8ae4f9", "title": "Neural Semantic Encoders", "text": "We present a memory augmented neural network for natural language understanding: Neural Semantic Encoders. NSE is equipped with a novel memory update rule and has a variable sized encoding memory that evolves over time and maintains the understanding of input sequences through read, compose and write operations. NSE can also access multiple and shared memories. In this paper, we demonstrated the effectiveness and the flexibility of NSE on five different natural language tasks: natural language inference, question answering, sentence classification, document sentiment analysis and machine translation where NSE achieved state-of-the-art performance when evaluated on publically available benchmarks. For example, our shared-memory model showed an encouraging result on neural machine translation, improving an attention-based baseline by approximately 1.0 BLEU."}
{"_id": "43d4b77ca6f9a29993bce5bade90aa2b8e4d2cac", "title": "CCDN: Content-Centric Data Center Networks", "text": "Data center networks continually seek higher network performance to meet the ever increasing application demand. Recently, researchers are exploring the method to enhance the data center network performance by intelligent caching and increasing the access points for hot data chunks. Motivated by this, we come up with a simple yet useful caching mechanism for generic data centers, i.e., a server caches a data chunk after an application on it reads the chunk from the file system, and then uses the cached chunk to serve subsequent chunk requests from nearby servers. To turn the basic idea above into a practical system and address the challenges behind it, we design content-centric data center networks CCDNs, which exploits an innovative combination of content-based forwarding and location [Internet Protocol IP]-based forwarding in switches, to correctly locate the target server for a data chunk on a fully distributed basis. Furthermore, CCDN enhances traditional content-based forwarding to determine the nearest target server, and enhances traditional location IP-based forwarding to make high utilization of the precious memory space in switches. Extensive simulations based on real-world workloads and experiments on a test bed built with NetFPGA prototypes show that, even with a small portion of the server\u2019s storage as cache e.g., 3% and with a modest content forwarding information base size e.g., 1000 entries in switches, CCDN can improve the average throughput to get data chunks by 43% compared with a pure Hadoop File System HDFS system in a real data center."}
{"_id": "38418928d6d842fe6edadc809f384278d793d610", "title": "Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks", "text": "Deep learning algorithms have been shown to perform extremely well on many classical machine learning problems. However, recent studies have shown that deep learning, like other machine learning techniques, is vulnerable to adversarial samples: inputs crafted to force a deep neural network (DNN) to provide adversary-selected outputs. Such attacks can seriously undermine the security of the system supported by the DNN, sometimes with devastating consequences. For example, autonomous vehicles can be crashed, illicit or illegal content can bypass content filters, or biometric authentication systems can be manipulated to allow improper access. In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs. We analytically investigate the generalizability and robustness properties granted by the use of defensive distillation when training DNNs. We also empirically study the effectiveness of our defense mechanisms on two DNNs placed in adversarial settings. The study shows that defensive distillation can reduce effectiveness of sample creation from 95% to less than 0.5% on a studied DNN. Such dramatic gains can be explained by the fact that distillation leads gradients used in adversarial sample creation to be reduced by a factor of 1030. We also find that distillation increases the average minimum number of features that need to be modified to create adversarial samples by about 800% on one of the DNNs we tested."}
{"_id": "49e77b981a0813460e2da2760ff72c522ae49871", "title": "The Limitations of Deep Learning in Adversarial Settings", "text": "Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified in specific targets by a DNN with a 97% adversarial success rate while only modifying on average 4.02% of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification."}
{"_id": "5c3785bc4dc07d7e77deef7e90973bdeeea760a5", "title": "TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems", "text": "Mart\u0131\u0301n Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Man\u00e9, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi\u00e9gas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng Google Research\u2217 Abstract"}
{"_id": "83bfdd6a2b28106b9fb66e52832c45f08b828541", "title": "Intriguing properties of neural networks", "text": "Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain hardly perceptible perturbation, which is found by maximizing the network\u2019s prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input."}
{"_id": "8aef07e27848bb1c3ec62537c58e276361530c14", "title": "Millimeter wave cavity backed aperture coupled microstrip patch antenna", "text": "In this paper, a new single element broad band cavity backed multi-layer microstrip patch antenna is presented. This antenna is designed to be used in 79 GHz MIMO radar applications, and is fabricated with advanced high resolution multi-layer PCB technology. It has a wide bandwidth of 11.43 GHz, which is 14.5% of the center frequency. The beam widths in both E and H-planes are very wide, and are 144 degrees and 80 degrees respectively. Also, the antenna gain and efficiency are respectively 4.7 dB and 87%. Furthermore, measurements have been done, and show a good agreement with simulations."}
{"_id": "b7d5aea8ac1332767c4525c29efc68cfc34bdf11", "title": "ROBUST CYLINDER FITTING IN THREE-DIMENSIONAL POINT CLOUD DATA", "text": "This paper investigates the problems of cylinder fitting in laser scanning three-dimensional Point Cloud Data (PCD). Most existing methods require full cylinder data, do not study the presence of outliers, and are not statistically robust. But especially mobile laser scanning often has incomplete data, as street poles for example are only scanned from the road. Moreover, existence of outliers is common. Outliers may occur as random or systematic errors, and may be scattered and/or clustered. In this paper, we present a statistically robust cylinder fitting algorithm for PCD that combines Robust Principal Component Analysis (RPCA) with robust regression. Robust principal components as obtained by RPCA allow estimating cylinder directions more accurately, and an existing efficient circle fitting algorithm following robust regression principles, properly fit cylinder. We demonstrate the performance of the proposed method on artificial and real PCD. Results show that the proposed method provides more accurate and robust results: (i) in the presence of noise and high percentage of outliers, (ii) for incomplete as well as complete data, (iii) for small and large number of points, and (iv) for different sizes of radius. On 1000 simulated quarter cylinders of 1m radius with 10% outliers a PCA based method fit cylinders with a radius of on average 3.63meter (m); the proposed method on the other hand fit cylinders of on average 1.02 m radius. The algorithm has potential in applications such as fitting cylindrical (e.g., light and traffic) poles, diameter at breast height estimation for trees, and building and bridge information modelling."}
{"_id": "3d92b174e02bd6c30bd989eac3465dfde081878c", "title": "Feature integration analysis of bag-of-features model for image retrieval", "text": "One of the biggest challenges in content based image retrieval is to solve the problem of \u201csemantic gaps\u201d between low-level features and high-level semantic concepts. In this paper, we aim to investigate various combinations of mid-level features to build an effective image retrieval system based on the bag-offeatures (BoF) model. Specifically, we study two ways of integrating the SIFT and LBP descriptors, HOG and LBP descriptors, respectively. Based on the qualitative and quantitative evaluations on two benchmark datasets, we show that the integrations of these features yield complementary and substantial improvement on image retrieval even with noisy background and ambiguous objects. Two integration models are proposed: the patch-based integration and image-based integration. By using a weighted K-means clustering algorithm, the image-based SIFT-LBP integration achieves the best performance on the given benchmark problems comparing to the existing algorithms. & 2013 Elsevier B.V. All rights reserved."}
{"_id": "b695bb1cb3e8ccc34a780aa67512917cb5ce28c9", "title": "Emotion Regulation Ability 1 Running head : EMOTION REGULATION The Ability to Regulate Emotion is Associated with Greater Well-Being , Income , and Socioeconomic Status", "text": "Are people who are best able to implement strategies to regulate their emotional expressive behavior happier and more successful than their counterparts? Although past research has examined individual variation in knowledge of the most effective emotion regulation strategies, little is known about how individual differences in the ability to actually implement these strategies, as assessed objectively in the laboratory, is associated with external criteria. In two studies, we examined how individual variation in the ability to modify emotional expressive behavior in response to evocative stimuli is related to well-being and financial success. Study 1 showed that individuals who can best suppress their emotional reaction to an acoustic startle are happiest with their lives. Study 2 showed that individuals who can best amplify their emotional reaction to a disgust-eliciting movie are happiest with their lives and have the highest disposable income and socioeconomic status. Thus, being able to implement emotion regulation strategies in the laboratory is closely linked to well-being and financial success. Emotion Regulation Ability 3 The Ability to Regulate Emotion is Associated with Greater Well-Being, Income, and Socioeconomic Status Individual variation in cognitive abilities, such as language and mathematics, has been shown to relate strongly to a number of important life criteria, including performance at school and at work (Kunzel, Hezlett, & Ones, 2004; Schmidt & Hunter, 1998). Research in recent years has suggested that there is also important variation among individuals in emotional abilities (see Mayer, Roberts, & Barsade, 2008; Mayer, Salovey, & Caruso, 2008, for reviews). In particular, the ability to regulate emotions reflects variation in how well people adjust emotional responses to meet current situational demands (Gross & Thompson, 2007; Salovey & Mayer, 1990). Equipped with this ability, individuals can aptly modify which emotions they have, when they have them, and how they experience and express them (Gross, 1998). This ability is arguably one of the most critical elements of our emotion repertoire, and it is the focus of the present research. Past research has begun to examine whether individual variation in the ability to regulate emotions is associated with various criteria. This research has found that variation in knowledge of how to best regulate emotions \u2013 whether people know the rules of emotion regulation \u2013 is associated with well-being, close social relationships, high grades in school, and high job performance (e.g., C\u00f4t\u00e9 & Miners, 2006; Lopes, Salovey, C\u00f4t\u00e9, & Beers, 2005; MacCann & Roberts, 2008). The measures used in these studies assess the degree to which people know how to best manage emotions. Specifically, they reflect how closely respondents\u2019 judgments of how to best regulate emotion in hypothetical scenarios match the judgments of experts. For instance, the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT; Mayer, Salovey, & Caruso, 2002) asks respondents to rate the effectiveness of a series of strategies to manage emotions in several hypothetical scenarios, and their responses are compared to those provided by expert emotion researchers. 
Emotion Regulation Ability 4 Notwithstanding the importance of knowing how to best manage emotions, knowledge does not fully represent the domain of emotion regulation ability. People who know the best strategies may not implement them well. The distinction between knowledge and the ability to implement is established in the larger literature on intelligence (cf. Ackerman, 1996), and it is also theoretically useful to describe emotional abilities. For example, a customer service agent who knows that cognitively reframing an interaction with a difficult customer is the best strategy may not implement that strategy well during the interaction. Thus, to understand fully how emotion regulation ability is associated with criteria such as well-being and financial success, researchers must also examine the ability to implement strategies to regulate emotions \u2013 whether people can actually operate the machinery of emotion regulation. Several of the measures used in studies of the relationship between emotion regulation and other criteria do not assess actual ability to implement emotion regulation strategies. For example, the MSCEIT (Mayer et al., 2002) does not ask respondents to implement the strategy that they believe best addresses the issues depicted in the scenarios. Recent advances in affective science, however, provide tools to objectively assess the ability to implement emotion regulation strategies (Gross & Levenson, 1993; Hagemann, Levenson, & Gross, 2006; Jackson, Malmstadt, Larson, & Davidson, 2000; Kunzmann, Kupperbusch, & Levenson, 2005). In these laboratory paradigms, individuals receive specific instructions about how to regulate their emotions (e.g., reduce the intensity of their emotional expressive behaviors) when encountering emotional stimuli, such as loud noises or emotionally evocative film clips. Success at implementing the emotion regulation strategy can be measured objectively, for example, by coding how much respondents change their emotional expressive behavior when being instructed to do so. Several studies have used this paradigm to examine how regulating emotions is associated with cognitive task performance (Baumeister, Bratslavsky, Muraven, & Tice, 1998; Emotion Regulation Ability 5 Bonanno, Papa, Lalande, Westphal, & Coifman, 2004; Schmeichel, Demaree, Robinson, & Pu, 2006), the activation of neural systems (Beauregard, Levesque, & Bourgouin, 2001; Oschner, Bunge, Gross, & Gabrieli, 2002), and emotion experience, emotional expressive behavior, and autonomic physiology (Demaree, Schmeichel, Robinson, Pu, Everhart, & Berntson, 2006; Giuliani, McCrae, & Gross, 2008; Gross & Levenson, 1993, 1997; Hagemann et al., 2006). This paradigm has also been used as an individual difference measure to test how the ability to implement emotion regulation strategies is associated with age (Kunzmann et al., 2005; Scheibe & Blanchard-Fields, 2009), working memory (Schmeichel, Volokhov, & Demaree, 2008), and executive function (Gyurak, Goodkind, Madan, Kramer, Miller, & Levenson, 2008). In addition, one study employed this paradigm to assess people\u2019s flexibility in using different emotion regulation strategies depending on the situation, showing that flexibility is associated with lower distress after a traumatic event (Bonanno, Papa, Lalande, Westphal, & Coifman, 2004). 
Thus, this body of research supports the utility of these laboratory paradigms for assessing individual variation in the ability to implement emotion regulation strategies and the correlates of this ability. In this report, we present the results of two studies that examine whether individual variation in the ability to implement strategies to regulate emotions is associated with well-being and financial success and, if so, in what direction. Most people regulate their emotions daily, and more than half the time, they do so by modifying the expression of emotions in their face, voice, and posture (Gross, Richards, & John, 2006). Given the frequency with which we regulate our emotional expressive behavior, it is reasonable to expect that the individual\u2019s ability in this realm would exhibit important associations with other constructs. The regulation of visible expressive behavior encompasses both up-regulation (amplifying emotional expressive behavior) and downEmotion Regulation Ability 6 regulation (reducing emotional expressive behavior). We considered the association of both with our criteria. We now turn to our theoretical development. A review of the existing literature suggests the possibility of both a positive and a negative association between the ability to implement emotion regulation strategies assessed in the laboratory and well-being and financial success. Furthermore, because we do not test the direction of causality in our studies, we consider theoretical arguments for both causal directions of associations, reviewing literatures that suggest that emotion regulation ability has consequences for well-being and financial success (both positive and negative), and also that well-being and financial success have consequences for emotion regulation ability (both positive and negative). The Ability to Regulate Emotional Behavior and Well-Being and Financial Success: Positive Associations In this section, we present theoretical arguments suggesting that the ability to regulate emotion and well-being and financial success are positively associated. We first describe why high emotion regulation ability may help people become happier and garner more financial resources, and then we examine whether happiness and financial resources may help people develop better abilities to regulate their emotions. Why Would Emotion Regulation Ability Increase Well-Being and Financial Success? Philosophers have argued that rational thought and a happy life requires the ability to rein in on emotional impulses (Aristotle, 1884; Solomon, 1993). The ability to modify emotional expressive behavior effectively may help people adapt flexibly to situational demands. Equipped with this ability, individuals might be more successful in communicating attitudes, goals, and intentions that are appropriate in various situations (Keltner & Haidt, 1999) and that might be Emotion Regulation Ability 7 rewarded and fulfilled. The ability to adapt successfully to situational demands then could be associated with various indicators of well-being and success. At a more micro-level, modifying emotional expressive behavior effectively may help people conform to display rules about who can show which emotions to whom and when they can do so (Friesen, 1972). People often attain rewards for conforming to displays rules in various settings. For instance, employees who conform to display rules at wo"}
{"_id": "9b90568faad1fd394737b79503571b7f5f0b2f4b", "title": "Optimizing Space Amplification in RocksDB", "text": "RocksDB is an embedded, high-performance, persistent keyvalue storage engine developed at Facebook. Much of our current focus in developing and configuring RocksDB is to give priority to resource efficiency instead of giving priority to the more standard performance metrics, such as response time latency and throughput, as long as the latter remain acceptable. In particular, we optimize space efficiency while ensuring read and write latencies meet service-level requirements for the intended workloads. This choice is motivated by the fact that storage space is most often the primary bottleneck when using Flash SSDs under typical production workloads at Facebook. RocksDB uses log-structured merge trees to obtain significant space efficiency and better write throughput while achieving acceptable read performance. This paper describes methods we used to reduce storage usage in RocksDB. We discuss how we are able to trade off storage efficiency and CPU overhead, as well as read and write amplification. Based on experimental evaluations of MySQL with RocksDB as the embedded storage engine (using TPC-C and LinkBench benchmarks) and based on measurements taken from production databases, we show that RocksDB uses less than half the storage that InnoDB uses, yet performs well and in many cases even better than the B-tree-based InnoDB storage engine. To the best of our knowledge, this is the first time a Log-structured merge treebased storage engine has shown competitive performance when running OLTP workloads at large scale."}
{"_id": "676680a4812cee810341f9153c15ae93fefd6675", "title": "Ultra-Reliable and Low-Latency 5 G Communication", "text": "Machine-to-machine communication, M2M, will make up a large portion of the new types of services and use cases that the fifth generation (5G) systems will address. On the one hand, 5G will connect a large number of low-cost and low-energy devices in the context of the Internet of things; on the other hand it will enable critical machine type communication use cases, such as smart factory, automotive, energy, and e-health \u2013 which require communication with very high reliability and availability, as well as very low end-to-end latency. In this paper, we will discuss the requirements, enablers and challenges to support these emerging mission-critical 5G use cases. Keywords\u2014 5G, NR, M2M, MTC, URLLC, Tactile Internet, Reliability, Latency."}
{"_id": "a2418f51c11eaedfbf6168cbbe7fcf2a1fee9c9c", "title": "A GLOBAL MEASURE OF PERCEIVED STRESS", "text": "Aim: To evaluate validity of the Greek version of a global measure of perceived stress PSS\u221214 (Perceived Stress Scale \u2013 14 item). Materials and Methods: The original PSS\u221214 (theoretical range 0\u221256) was translated into Greek and then back-translated. One hundred men and women (39\u00b110 years old, 40 men) participated in the validation process. Firstly, participants completed the Greek PSS\u221214 and, then they were interviewed by a psychologist specializing in stress management. Cronbach\u2019s alpha (\u03b1) evaluated internal consistency of the measurement, whereas Kendall\u2019s tau-b and Bland & Altman methods assessed consistency with the clinical evaluation. Exploratory and Confirmatory Factor analyses were conducted to reveal hidden factors within the data and to confirm the two-dimensional character of the scale. Results: Mean (SD) PSS\u221214 score was 25(7.9). Strong internal consistency (Cronbach\u2019s \u03b1 = 0.847) as well as moderate-to-good concordance between clinical assessment and PSS\u221214 (Kendall\u2019s tau-b = 0.43, p<0.01) were observed. Two factors were extracted. Factor one explained 34.7% of variability and was heavily laden by positive items, and factor two that explained 10.6% of the variability by negative items. Confirmatory factor analysis revealed that the model with 2 factors had chi-square equal to 241.23 (p<0.001), absolute fix indexes were good (i.e. GFI=0.733, AGFI=0.529), and incremental fix indexes were also adequate (i.e. NFI=0.89 and CFI=0.92). Conclusion: The developed Greek version of PSS\u221214 seems to be a valid instrument for the assessment of perceived stress in the Greek adult population living in urban areas; a finding that supports its local use in research settings as an evaluation tool measuring perceived stress, mainly as a risk factor but without diagnostic properties."}
{"_id": "a96520ac46c6789d607405d6cb514e709a046927", "title": "Polysemy: Theoretical and Computational Approaches Yael Ravin and Claudia Leacock (editors) (IBM T. J. Watson Research Center and Educational Testing Services) New York: Oxford University Press, 2000, xi+227 pp; hardbound, ISBN 0-19-823842-8, $74.00, \u00a345.00; paperbound, ISBN 0-19-925086-3, $21.95, \u00a3", "text": "As the editors of this volume remind us, polysemy has been a vexing issue for the understanding of language since antiquity. For half a century, it has been a major bottleneck for natural language processing. It contributed to the failure of early machine translation research (remember Bar-Hillel\u2019s famous pen and box example) and is still plaguing most natural language processing and information retrieval applications. A recent issue of this journal described the state of the art in automatic sense disambiguation (Ide and V\u00e9ronis 1998), and Senseval system competitions have revealed the immense difficulty of the task (http://www.sle.sharp.co.uk/senseval2). However, no significant progress can be made on the computational aspects of polysemy without serious advances in theoretical issues. At the same time, theoretical work can be fostered by computational results and problems, and language-processing applications can provide a unique test bed for theories. It was therefore an excellent idea to gather both theoretical and applied contributions in the same book. Yael Ravin and Claudia Leacock are well-known names to those who work on the theoretical and computational aspects of word meaning. In this volume, they bring together a collection of essays from leading researchers in the field. As far as I can tell, these essays are not reprints or expanded versions of conference papers, as is often the case for edited works; instead, they seem to have been specially commissioned for the purposes of this book, which makes it even more exciting to examine. The book is composed of 11 chapters. It is not formally divided into parts, but chapters dealing more specifically with the computational aspects of polysemy are grouped together at the end (and constitute about one-third of the volume). Chapter 1 is an overview written by the volume editors. Yael Ravin and Claudia Leacock survey the main theories of meaning and their treatment of polysemy. These include the classical Aristotelian approach revived by Katz and Fodor (1963); Rosch\u2019s (1977) prototypical approach, which has its roots in Wittgenstein\u2019s Philosophical Investigations (1953); and the relational approach recently exemplified by WordNet (Fellbaum 1998), which (although the authors do not mention it) can be traced back to Peirce\u2019s (1931\u20131958) and Selz\u2019s (1913, 1922) graphs and which gained popularity with Quillian\u2019s (1968) semantic networks. In the course of this overview, Ravin and Leacock put the individual chapters into perspective by relating them to the various theories."}
{"_id": "40ee1e872dac8cd5ee1e9967a8ca73d2b1ae251c", "title": "Effects of a Dynamic Warm-Up, Static Stretching or Static Stretching with Tendon Vibration on Vertical Jump Performance and EMG Responses", "text": "The purpose of this study was to investigate the short-term effects of static stretching, with vibration given directly over Achilles tendon, on electro-myographic (EMG) responses and vertical jump (VJ) performances. Fifteen male, college athletes voluntarily participated in this study (n=15; age: 22\u00b14 years old; body height: 181\u00b110 cm; body mass: 74\u00b111 kg). All stages were completed within 90 minutes for each participant. Tendon vibration bouts lasted 30 seconds at 50 Hz for each volunteer. EMG analysis for peripheral silent period, H-reflex, H-reflex threshold, T-reflex and H/M ratio were completed for each experimental phases. EMG data were obtained from the soleus muscle in response to electro stimulation on the popliteal post tibial nerve. As expected, the dynamic warm-up (DW) increased VJ performances (p=0.004). Increased VJ performances after the DW were not statistically substantiated by the EMG findings. In addition, EMG results did not indicate that either static stretching (SS) or tendon vibration combined with static stretching (TVSS) had any detrimental or facilitation effect on vertical jump performances. In conclusion, using TVSS does not seem to facilitate warm-up effects before explosive performance."}
{"_id": "67fb947cdbb040cd5883e599aa29f33dd3241d3c", "title": "Relation between language experiences in preschool classrooms and children's kindergarten and fourth-grade language and reading abilities.", "text": "Indirect effects of preschool classroom indexes of teacher talk were tested on fourth-grade outcomes for 57 students from low-income families in a longitudinal study of classroom and home influences on reading. Detailed observations and audiotaped teacher and child language data were coded to measure content and quantity of verbal interactions in preschool classrooms. Preschool teachers' use of sophisticated vocabulary during free play predicted fourth-grade reading comprehension and word recognition (mean age=9; 7), with effects mediated by kindergarten child language measures (mean age=5; 6). In large group preschool settings, teachers' attention-getting utterances were directly related to later comprehension. Preschool teachers' correcting utterances and analytic talk about books, and early support in the home for literacy predicted fourth-grade vocabulary, as mediated by kindergarten receptive vocabulary."}
{"_id": "bed350c7507c1823d9eecf57d1ca19c09b5cd71e", "title": "Static secure page allocation for light-weight dynamic information flow tracking", "text": "Dynamic information flow tracking (DIFT) is an effective security countermeasure for both low-level memory corruptions and high-level semantic attacks. However, many software approaches suffer large performance degradation, and hardware approaches have high logic and storage overhead. We propose a flexible and light-weight hardware/software co-design approach to perform DIFT based on secure page allocation. Instead of associating every data with a taint tag, we aggregate data according to their taints, i.e., putting data with different attributes in separate memory pages. Our approach is a compiler-aided process with architecture support. The implementation and analysis show that the memory overhead is little, and our approach can protect critical information, including return address, indirect jump address, and system call IDs, from being overwritten by malicious users."}
{"_id": "fbde0ec7f7b47f3183f6004e4f8261f5e8aa1a94", "title": "A concept for automated construction progress monitoring using BIM-based geometric constraints and photogrammetric point clouds", "text": "On-site progress monitoring is essential for keeping track of the ongoing work on construction sites. Currently, this task is a manual, time-consuming activity. The research presented here, describes a concept for an automated comparison of the actual state of construction with the planned state for the early detection of deviations in the construction process. The actual state of the construction site is detected by photogrammetric surveys. From these recordings, dense point clouds are generated by the fusion of disparity maps created with semi-global-matching (SGM). These are matched against the target state provided by a 4D Building Information Model (BIM). For matching the point cloud and the BIM, the distances between individual points of the cloud and a component\u2019s surface are aggregated using a regular cell grid. For each cell, the degree of coverage is determined. Based on this, a confidence value is computed which serves as basis for the existence decision concerning the respective component. Additionally, processand dependency-relations are included to further enhance the detection process. Experimental results from a real-world case study are presented and discussed."}
{"_id": "71b02d95f0081afc8e4a611942bbc2b5d739ea81", "title": "Evolutionary computation and evolutionary deep learning for image analysis, signal processing and pattern recognition", "text": "Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). GECCO \u201918 Companion, July 15--19, 2018, Kyoto, Japan \u00a9 2018 Copyright held by the owner/author(s). 978-1-4503-57647/18/07...$15.00 DOI https://doi.org/10.1145/3205651.3207859"}
{"_id": "04c42902e984900e411dfedfabfe5b80056738ce", "title": "Serotonin alterations in anorexia and bulimia nervosa: New insights from imaging studies", "text": "Anorexia nervosa (AN) and bulimia nervosa (BN) are related disorders with relatively homogenous presentations such as age of onset and gender distribution. In addition, they share symptoms, such as extremes of food consumption, body image distortion, anxiety and obsessions, and ego-syntonic neglect, raises the possibility that these symptoms reflect disturbed brain function that contributes to the pathophysiology of this illness. Recent brain imaging studies have identified altered activity in frontal, cingulate, temporal, and parietal cortical regions in AN and BN. Importantly, such disturbances are present when subjects are ill and persist after recovery, suggesting that these may be traits that are independent of the state of the illness. Emerging data point to a dysregulation of serotonin pathways in cortical and limbic structures that may be related to anxiety, behavioral inhibition, and body image distortions. In specific, recent studies using PET with serotonin specific radioligands implicate alterations of 5-HT1A and 5-HT2A receptors and the 5-HT transporter. Alterations of these circuits may affect mood and impulse control as well as the motivating and hedonic aspects of feeding behavior. Such imaging studies may offer insights into new pharmacology and psychotherapy approaches."}
{"_id": "3f52f57dcfdd1bb0514ff744f4fdaa986a325591", "title": "Thunderstrike: EFI firmware bootkits for Apple MacBooks", "text": "There are several flaws in Apple's MacBook firmware security that allows untrusted modifications to be written to the SPI Flash boot ROM of these laptops. This capability represents a new class of persistent firmware rootkits, or 'bootkits', for the popular Apple MacBook product line. Stealthy bootkits can conceal themselves from detection and prevent software attempts to remove them. Malicious modifications to the boot ROM are able to survive re-installation of the operating system and even hard-drive replacement. Additionally, the malware can install a copy of itself onto other Thunderbolt devices' Option ROMs as a means to spread virally across air-gap security perimeters. Apple has fixed some of these flaws as part of CVE 2014-4498, but there is no easy solution to this class of vulnerability, since the MacBook lacks trusted hardware to perform cryptographic validation of the firmware at boot time."}
{"_id": "238a7cd1d672362ab96315290bf1bdac14f5b065", "title": "High-Q MEMS Resonators for Laser Beam Scanning Displays", "text": "This paper reports on design, fabrication and characterization of high-Q MEMS resonators to be used in optical applications like laser displays and LIDAR range sensors. Stacked vertical comb drives for electrostatic actuation of single-axis scanners and biaxial MEMS mirrors were realized in a dual layer polysilicon SOI process. High Q-factors up to 145,000 have been achieved applying wafer level vacuum packaging technology including deposition of titanium thin film getters. The effective reduction of gas damping allows the MEMS actuator to achieve large amplitudes at high oscillation frequencies while driving voltage and power consumption can be minimized. Exemplarily shown is a micro scanner that achieves a total optical scan angle of 86 degrees at a resonant frequency of 30.8 kHz, which fulfills the requirements for HD720 resolution. Furthermore, results of a new wafer based glass-forming technology for fabrication of three dimensionally shaped glass lids with tilted optical windows are presented."}
{"_id": "7b485979c75b46d8c194868c0e70890f4a0f0ede", "title": "Supplementary material for the paper : Discriminative Correlation Filter with Channel and Spatial Reliability DCF-CSR", "text": "This is the supplementary material for the paper \u201dDiscriminative Correlation Filter with Channel and Spatial Reliability\u201d submitted to the CVPR 2017. Due to spatial constraints, parts not crucial for understanding the DCF-CSR tracker formulation, but helpful for gaining insights, were moved here. 1. Derivation of the augmented Lagrangian minimizer This section provides the complete derivation of the closed-form solutions for the relations (9,10) in the submitted paper [3]. The augmented Lagrangian from Equation (5) in [3] is L(\u0125c,h, l\u0302) = \u2016\u0125c diag(f\u0302)\u2212 \u011d\u2016 + \u03bb 2 \u2016hm\u2016 + (1) [\u0302l(\u0125c \u2212 \u0125m) + l\u0302(\u0125c \u2212 \u0125m)] + \u03bc\u2016\u0125c \u2212 \u0125m\u2016, with hm = (m h). For the purposes of derivation we will rewrite (1) into a fully vectorized form L(\u0125c,h, l\u0302) = \u2016\u0125c diag(f\u0302)\u2212 \u011d\u2016 + \u03bb 2 \u2016hm\u2016+ (2) [\u0302 l(\u0125c \u2212 \u221a DFMh) + l\u0302(\u0125c \u2212 \u221a DFMh) ] + \u03bc\u2016\u0125c \u2212 \u221a DFMh\u2016, where F denotes D \u00d7 D orthonormal matrix of Fourier coefficients, such that the Fourier transform is defined as x\u0302 = F(x) = \u221a DFx and M = diag(m). For clearer representation we denote the four terms in the summation (2) as L(\u0125c,h, l\u0302) = L1 + L2 + L3 + L4, (3) where L1 = ( \u0125c diag(f\u0302)\u2212 \u011d )( \u0125c diag(f\u0302)\u2212 \u011d )T , (4)"}
{"_id": "011d14aa1aa7c9a80e88d593bc783642547bb8a8", "title": "Efficient Energy-Optimal Routing for Electric Vehicles", "text": "Traditionally routing has focused on finding shortest paths in networks with positive, static edge costs representing the distance between two nodes. Energy-optimal routing for electric vehicles creates novel algorithmic challenges, as simply understanding edge costs as energy values and applying standard algorithms does not work. First, edge costs can be negative due to recuperation, excluding Dijkstra-like algorithms. Second, edge costs may depend on parameters such as vehicle weight only known at query time, ruling out existing preprocessing techniques. Third, considering battery capacity limitations implies that the cost of a path is no longer just the sum of its edge costs. This paper shows how these challenges can be met within the framework of A* search. We show how the specific domain gives rise to a consistent heuristic function yielding an O(n) routing algorithm. Moreover, we show how battery constraints can be treated by dynamically adapting edge costs and hence can be handled in the same way as parameters given at query time, without increasing run-time complexity. Experimental results with real road networks and vehicle data demonstrate the advantages of our solution."}
{"_id": "8e4641d963762a0f8bd58f2c6a2a176a9418fdec", "title": "Matching RGB Images to CAD Models for Object Pose Estimation", "text": "We propose a novel method for 3D object pose estimation in RGB images, which does not require pose annotations of objects in images in the training stage. We tackle the pose estimation problem by learning how to establish correspondences between RGB images and rendered depth images of CAD models. During training, our approach only requires textureless CAD models and aligned RGB-D frames of a subset of object instances, without explicitly requiring pose annotations for the RGB images. We employ a deep quadruplet convolutional neural network for joint learning of suitable keypoints and their associated descriptors in pairs of rendered depth images which can be matched across modalities with aligned RGB-D views. During testing, keypoints are extracted from a query RGB image and matched to keypoints extracted from rendered depth images, followed by establishing 2D-3D correspondences. The object\u2019s pose is then estimated using the RANSAC and PnP algorithms. We conduct experiments on the recently introduced Pix3D [33] dataset and demonstrate the efficacy of our proposed approach in object pose estimation as well as generalization to object instances not seen during training."}
{"_id": "ac7c07ef8b208dc67505f844467a2892784363f5", "title": "Field-weakening Control Algorithm for Interior Permanent Magnet Synchronous Motor Based on Space-Vector Modulation Technique", "text": "We investigate the implementation of field-weakening control for interior permanent magnet synchronous motor (IPMSM), and analyze the algorithm of field-weakening control in d-q axes. To deal with the problem that the dc-link voltage is not fully utilized when the voltage regulation is employed, we propose a new field-weakening scheme based on the space-vector modulation technique. The duty-time of zero-voltage vector is used as the feedback signal to decide the switching of fieldweakening. To avoid the regulation lag in the q-axis component, we apply the lead-angle control during the field-weakening progress. The proposed scheme is validated with Matlab/Simulink tool. Simulation results show that the scheme is feasible. It not only improves the utilization ratio of the space-vector-pulse-width-modulation (SVPWM) inverter, but also achieves a smooth transition between the constant-torque mode below the base-speed and the constant-power mode above the basespeed."}
{"_id": "569c295f18ede0e7472882ff13cbb71e6a1c1a41", "title": "An Image Matching Algorithm Based on SIFT and Improved LTP", "text": "SIFT is one of the most robust and widely used image matching algorithms based on local features. But the key-points descriptor of SIFT algorithm have 128 dimensions. Aiming to the problem of its high dimension and complexity, a novel image matching algorithm is proposed. The descriptors of SIFT key-points are constructed by the rotation invariant LTP, city-block distance is also employed to reduce calculation of key-points matching. The experiment is achieved through different lighting, blur changes and rotation of images, the results show that this method can reduce the processing time and raise image matching efficiency."}
{"_id": "3b3acbf7cc2ec806e4177eac286a2ee22f6f7630", "title": "An Over-110-GHz-Bandwidth 2:1 Analog Multiplexer in 0.25-\u03bcm InP DHBT Technology", "text": "This paper presents an over-110-GHz-bandwidth 2:1 analog multiplexer (AMUX) for ultra-broadband digital-to-analog (D/A) conversion subsystems. The AMUX was designed and fabricated by using newly developed $\\pmb{0.25-\\mu \\mathrm{m}}$ -emitter-width InP double heterojunction bipolar transistors (DHBTs), which have a peak $\\pmb{f_{\\mathrm{T}}}$ and $\\pmb{ f\\displaystyle \\max}$ of 460 and 480 GHz, respectively. The AMUX IC consists of lumped building blocks, including data-input linear buffers, a clock-input limiting buffer, an AMUX core, and an output linear buffer. The measured 3-dB bandwidth for data and clock paths are both over 110 GHz. In addition, it measures and obtains time-domain large-signal sampling operations of up to 180 GS/s. A 224-Gb/s (112-GBaud) four-level pulse-amplitude modulation (PAM4) signal was successfully generated by using this AMUX. To the best of our knowledge, this AMUX IC has the broadest bandwidth and the fastest sampling rate compared with any other previously reported AMUXes."}
{"_id": "7e13493a290fe0d6a1f4bd477e00b3ca37c5d82a", "title": "Field-stop layer optimization for 1200V FS IGBT operating at 200\u02daC", "text": "This paper is concerned with design considerations for enabling the operation of Field-Stop Insulated Gate Bipolar Transistors (FS IGBTs) at 200 C. It is found that through a careful optimization of the Field-Stop layer doping profile the device has a low leakage current and delivers a favorable tradeoff between the on-state voltage (Von) and turn-off loss (Eoff). An investigation of the adverse effects of increasing the junction temperature on the temperature-dependent properties of the FS IGBTs is also discussed herein."}
{"_id": "23059f5f42e71e6ab4981081a310df65d28ecda1", "title": "Automatic indexing: an approach using an index term corpus and combining linguistic and statistical methods", "text": "This thesis discusses the problems and the methods of finding relevant information in large collections of documents. The contribution of this thesis to this problem is to develop better content analysis methods which can be used to describe document content with"}
{"_id": "2f02f1276a0b01d0e6e3501eb4961ee6bea45702", "title": "Theory of Outlier Ensembles", "text": "Outlier detection is an unsupervised problem, in which labels are not available with data records [2]. As a result, it is generally more challenging to design ensemble analysis algorithms for outlier detection. In particular, methods that require the use of labels in intermediate steps of the algorithm cannot be generalized to outlier detection. For example, in the case of boosting, the classifier algorithm needs to be evaluated in the intermediate steps of the algorithm with the use of training-data labels. Such methods are generally not possible in the case of outlier analysis. As discussed in [1], there are unique reasons why ensemble analysis is generally more difficult in the case of outlier analysis as compared to classification. In spite of the unsupervised nature of outlier ensemble analysis, we show that the theoretical foundations of outlier analysis and classification are surprisingly similar. A number of useful discussions on the theory of classification ensembles may be found in [27, 29, 33]. Further explanations on the use of the bias-variance decomposition in different types of classifiers such as neural networks, support vector machines, and rule-based classifiers are discussed in [17, 30, 31]. It is noteworthy that the biasvariance decomposition is often used in customized ways for different types of base classifiers and combination methods; this general principle is also true in outlier detection. Several arguments have recently been proposed on the theory explaining the accuracy improvements of outlier ensembles. In some cases, incorrect new arguments (such as those in [32]) are proposed to justify experimental results that can be explained by well-known ideas, and an artificial distinction is made between the theory of classification ensembles and outlier ensembles. A recent paper [4]"}
{"_id": "f919bc88b746d7a420b6364142e0ac13aa80de44", "title": "The Usability of Open Source Software", "text": "Twidale Open source communities have successfully developed a great deal of software although most computer users only use proprietary applications. The usability of open source software is often regarded as one reason for this limited distribution. In this paper we review the existing evidence of the usability of open source software and discuss how the characteristics of open source development influence usability. We describe how existing human-computer interaction techniques can be used to leverage distributed networked communities, of developers and users, to address issues of usability."}
{"_id": "1e62935d9288446abdaeb1fec84daa12a684b300", "title": "Detecting adversarial advertisements in the wild", "text": "In a large online advertising system, adversaries may attempt to profit from the creation of low quality or harmful advertisements. In this paper, we present a large scale data mining effort that detects and blocks such adversarial advertisements for the benefit and safety of our users. Because both false positives and false negatives have high cost, our deployed system uses a tiered strategy combining automated and semi-automated methods to ensure reliable classification. We also employ strategies to address the challenges of learning from highly skewed data at scale, allocating the effort of human experts, leveraging domain expert knowledge, and independently assessing the effectiveness of our system."}
{"_id": "2bdcc4dbf14e13d33740531ea8954463ca7e68a2", "title": "Social coding in GitHub: transparency and collaboration in an open software repository", "text": "Social applications on the web let users track and follow the activities of a large number of others regardless of location or affiliation. There is a potential for this transparency to radically improve collaboration and learning in complex knowledge-based activities. Based on a series of in-depth interviews with central and peripheral GitHub users, we examined the value of transparency for large-scale distributed collaborations and communities of practice. We find that people make a surprisingly rich set of social inferences from the networked activity information in GitHub, such as inferring someone else's technical goals and vision when they edit code, or guessing which of several similar projects has the best chance of thriving in the long term. Users combine these inferences into effective strategies for coordinating work, advancing technical skills and managing their reputation."}
{"_id": "50341a2e4dec45165c75da37296ee7984b71e044", "title": "Searching for build debt: Experiences managing technical debt at Google", "text": "With a large and rapidly changing codebase, Google software engineers are constantly paying interest on various forms of technical debt. Google engineers also make efforts to pay down that debt, whether through special Fixit days, or via dedicated teams, variously known as janitors, cultivators, or demolition experts. We describe several related efforts to measure and pay down technical debt found in Google's BUILD files and associated dead code. We address debt found in dependency specifications, unbuildable targets, and unnecessary command line flags. These efforts often expose other forms of technical debt that must first be managed."}
{"_id": "517d6e3999bd425069e45346045adcbd2d0c9299", "title": "Counterfactual reasoning and learning systems: the example of computational advertising", "text": "This work shows how to leverage causal inference to understand the behavior of complex learning systems interacting with their environment and predict the consequences of changes to the system. Such predictions allow both humans and algorithms to select the changes that would have improved the system performance. This work is illustrated by experiments on the ad placement system associated with the Bing search engine."}
{"_id": "4dd7721248c5489e25f46f7ab78c7d0229a596d4", "title": "A Fully Integrated Reconfigurable Self-Startup RF Energy-Harvesting System With Storage Capability", "text": "This paper introduces a fully integrated RF energy-harvesting system. The system can simultaneously deliver the current demanded by external dc loads and store the extra energy in external capacitors, during periods of extra output power. The design is fabricated in 0.18- $\\mu \\text{m}$ CMOS technology, and the active chip area is 1.08 mm2. The proposed self-startup system is reconfigurable with an integrated LC matching network, an RF rectifier, and a power management/controller unit, which consumes 66\u2013157 nW. The required clock generation and the voltage reference circuit are integrated on the same chip. Duty cycle control is used to operate for the low input power that cannot provide the demanded output power. Moreover, the number of stages of the RF rectifier is reconfigurable to increase the efficiency of the available output power. For high available power, a secondary path is activated to charge an external energy storage element. The measured RF input power sensitivity is \u221214.8 dBm at a 1-V dc output."}
{"_id": "eceedcee47d29c063ce0d36f8fca630b6d68f63f", "title": "HDTLR: A CNN based Hierarchical Detector for Traffic Lights", "text": "Reliable traffic light detection is one crucial key component for autonomous driving in urban areas. This includes the extraction of direction arrows contained within the traffic lights as an autonomous car will need this information for selecting the traffic light corresponding to its current lane. Current state of the art traffic light detection systems are not able to provide such information. Within this work we present a hierarchical traffic light detection algorithm, which is able to detect traffic lights and determine their state and contained direction information within one CNN forward pass. This Hierarchical DeepTLR (HDTLR) outperforms current state of the art traffic light detection algorithms in state aware detection and can detect traffic lights with direction information down to a size of 4 pixel in width at a frequency of 12 frames per second."}
{"_id": "78abaa22d3fdc0181e44a23c4a93ec220840ecd1", "title": "Implementation of behavior change techniques in mobile applications for physical activity.", "text": "BACKGROUND\nMobile applications (apps) for physical activity are popular and hold promise for promoting behavior change and reducing non-communicable disease risk. App marketing materials describe a limited number of behavior change techniques (BCTs), but apps may include unmarketed BCTs, which are important as well.\n\n\nPURPOSE\nTo characterize the extent to which BCTs have been implemented in apps from a systematic user inspection of apps.\n\n\nMETHODS\nTop-ranked physical activity apps (N=100) were identified in November 2013 and analyzed in 2014. BCTs were coded using a contemporary taxonomy following a user inspection of apps.\n\n\nRESULTS\nUsers identified an average of 6.6 BCTs per app and most BCTs in the taxonomy were not represented in any apps. The most common BCTs involved providing social support, information about others' approval, instructions on how to perform a behavior, demonstrations of the behavior, and feedback on the behavior. A latent class analysis of BCT configurations revealed that apps focused on providing support and feedback as well as support and education.\n\n\nCONCLUSIONS\nContemporary physical activity apps have implemented a limited number of BCTs and have favored BCTs with a modest evidence base over others with more established evidence of efficacy (e.g., social media integration for providing social support versus active self-monitoring by users). Social support is a ubiquitous feature of contemporary physical activity apps and differences between apps lie primarily in whether the limited BCTs provide education or feedback about physical activity."}
{"_id": "756a8ce1bddd94cf166baadd17ff2b89834a23da", "title": "Future affective technology for autism and emotion communication.", "text": "People on the autism spectrum often experience states of emotional or cognitive overload that pose challenges to their interests in learning and communicating. Measurements taken from home and school environments show that extreme overload experienced internally, measured as autonomic nervous system (ANS) activation, may not be visible externally: a person can have a resting heart rate twice the level of non-autistic peers, while outwardly appearing calm and relaxed. The chasm between what is happening on the inside and what is seen on the outside, coupled with challenges in speaking and being pushed to perform, is a recipe for a meltdown that may seem to come 'out of the blue', but in fact may have been steadily building. Because ANS activation both influences and is influenced by efforts to process sensory information, interact socially, initiate motor activity, produce meaningful speech and more, deciphering the dynamics of ANS states is important for understanding and helping people on the autism spectrum. This paper highlights advances in technology that can comfortably sense and communicate ANS arousal in daily life, allowing new kinds of investigations to inform the science of autism while also providing personalized feedback to help individuals who participate in the research."}
{"_id": "df8902f8acb0b30a230dfaa1ead91e27f123472c", "title": "Toward Massive Machine Type Cellular Communications", "text": "Cellular networks have been engineered and optimized to carrying ever-increasing amounts of mobile data, but over the last few years, a new class of applications based on machine-centric communications has begun to emerge. Automated devices such as sensors, tracking devices, and meters, often referred to as machine-to-machine (M2M) or machine-type communications (MTC), introduce an attractive revenue stream for mobile network operators, if a massive number of them can be efficiently supported. The novel technical challenges posed by MTC applications include increased overhead and control signaling as well as diverse application-specific constraints such as ultra-low complexity, extreme energy efficiency, critical timing, and continuous data intensive uploading. This article explains the new requirements and challenges that large-scale MTC applications introduce, and provides a survey of key techniques for overcoming them. We focus on the potential of 4.5G and 5G networks to serve both the high data rate needs of conventional human-type communication (HTC) subscribers and the forecasted billions of new MTC devices. We also opine on attractive economic models that will enable this new class of cellular subscribers to grow to its full potential."}
{"_id": "cdf1153e70045098bbc2cb43e3759751f8de03af", "title": "A class D output stage with zero dead time", "text": "An integrated class-D output stage has been realized with zero dead time, thereby removing one of the dominant sources of distortion in class-D amplifiers. Dead time is eliminated through proper dimensioning of the power transistor drivers and accurate matching of switch timing. Open-loop distortion of this output stage stays below 0.1% up to 35 W."}
{"_id": "7e1c856f1e8116076248b228d85989ecb860ca23", "title": "Partially Observable Reinforcement Learning for Intelligent Transportation Systems", "text": "Intelligent Transportation Systems (ITS) have attracted the attention of researchers and the general public alike as a means to alleviate traffic congestion. Recently, the maturity of wireless technology has enabled a cost-efficient way to achieve ITS by detecting vehicles using Vehicle to Infrastructure (V2I) communications. Traditional ITS algorithms, in most cases, assume that every vehicle is observed, such as by a camera or a loop detector, but a V2I implementation would detect only those vehicles with wireless communications capability. We examine a family of transportation systems, which we will refer to as \u2018Partially Detected Intelligent Transportation Systems\u2019. An algorithm that can act well under a small detection rate is highly desirable due to gradual penetration rates of the underlying wireless technologies such as Dedicated Short Range Communications (DSRC) technology. Artificial Intelligence (AI) techniques for Reinforcement Learning (RL) are suitable tools for finding such an algorithm due to utilizing varied inputs and not requiring explicit analytic understanding or modeling of the underlying system dynamics. In this paper, we report a RL algorithm for partially observable ITS based on DSRC. The performance of this system is studied under different car flows, detection rates, and topologies of the road network. Our system is able to efficiently reduce the average waiting time of vehicles at an intersection, even with a low detection rate."}
{"_id": "3e4004566f45ee28b3a1dd73ba67242d17985160", "title": "Fuzzy & Datamining based Disease Prediction Using K-NN Algorithm", "text": "Disease diagnosis is one of the most important applications of such system as it is one of the leading causes of deaths all over the world. Almost all system predicting disease use inputs from complex tests conducted in labs and none of the system predicts disease based on the risk factors such as tobacco smoking, alcohol intake, age, family history, diabetes, hypertension, high cholesterol, physical inactivity, obesity. Researchers have been using several data mining techniques to help health care professionals in the diagnosis of heart disease. K-Nearest-Neighbour (KNN) is one of the successful data mining techniques used in classification problems. However, it is less used in the diagnosis of heart disease patients. Recently, researchers are showing that combining different classifiers through voting is outperforming other single classifiers. This paper investigates applying KNN to help healthcare professionals in the diagnosis of disease specially heart disease. It also investigates if integrating voting with KNN can enhance its accuracy in the diagnosis of heart disease patients. The results show that applying KNN could achieve higher accuracy than neural network ensemble in the diagnosis of heart disease patients. The results also show that applying voting could not enhance the KNN accuracy in the diagnosis of heart disease."}
{"_id": "879f70a13aa7e461ec2425093f47475ac601a550", "title": "ENHANCEMENT STRATEGIES FOR FRAME-TO-FRAME UAS STEREO VISUAL ODOMETRY", "text": "Autonomous navigation of indoor unmanned aircraft systems (UAS) requires accurate pose estimations usually obtained from indirect measurements. Navigation based on inertial measurement units (IMU) is known to be affected by high drift rates. The incorporation of cameras provides complementary information due to the different underlying measurement principle. The scale ambiguity problem for monocular cameras is avoided when a light-weight stereo camera setup is used. However, also frame-to-frame stereo visual odometry (VO) approaches are known to accumulate pose estimation errors over time. Several valuable real-time capable techniques for outlier detection and drift reduction in frame-to-frame VO, for example robust relative orientation estimation using random sample consensus (RANSAC) and bundle adjustment, are available. This study addresses the problem of choosing appropriate VO components. We propose a frame-to-frame stereo VO method based on carefully selected components and parameters. This method is evaluated regarding the impact and value of different outlier detection and drift-reduction strategies, for example keyframe selection and sparse bundle adjustment (SBA), using reference benchmark data as well as own real stereo data. The experimental results demonstrate that our VO method is able to estimate quite accurate trajectories. Feature bucketing and keyframe selection are simple but effective strategies which further improve the VO results. Furthermore, introducing the stereo baseline constraint in pose graph optimization (PGO) leads to significant improvements. * Corresponding author"}
{"_id": "5a935692eca51f426e8cd2ac548e8e866631278f", "title": "An Empirical Analysis of Deep Network Loss Surfaces", "text": "The training of deep neural networks is a high-dimension optimization problem with respect to the loss function of a model. Unfortunately, these functions are of high dimension and non-convex and hence difficult to characterize. In this paper, we empirically investigate the geometry of the loss functions for state-of-the-art networks with multiple stochastic optimization methods. We do this through several experiments that are visualized on polygons to understand how and when these stochastic optimization methods find local minima."}
{"_id": "8dbad88d01e5f5d71e5748ee28247f12436a1797", "title": "Now it's personal: on abusive game design", "text": "In this paper, we introduce the concept of abusive game design as an attitude towards creating games -- an aesthetic practice that operates as a critique of certain conventionalisms in popular game design wisdom. We emphasize that abusive game design is, at its core, about spotlighting the dialogic relation between player and designer."}
{"_id": "5648813f9ab3877b2a15b462663fdedd88d3c432", "title": "Classification of Multiple-Sentence Questions", "text": "Conventional QA systems cannot answer to the questions composed of two or more sentences. Therefore, we aim to construct a QA system that can answer such multiple-sentence questions. As the first stage, we propose a method for classifying multiple-sentence questions into question types. Specifically, we first extract the core sentence from a given question text. We use the core sentence and its question focus in question classification. The result of experiments shows that the proposed method improves F-measure by 8.8% and accuracy by 4.4%."}
{"_id": "222e9d19121c8bb4cb90af0bcbe71599c94c454c", "title": "Chemical-protein relation extraction with ensembles of SVM, CNN, and RNN models", "text": "Text mining the relations between chemicals and proteins is an increasingly important task. The CHEMPROT track at BioCreative VI aims to promote the development and evaluation of systems that can automatically detect the chemical-protein relations in running text (PubMed abstracts). This manuscript describes our submission, which is an ensemble of three systems, including a Support Vector Machine, a Convolutional Neural Network, and a Recurrent Neural Network. Their output is combined using a decision based on majority voting or stacking. Our CHEMPROT system obtained 0.7266 in precision and 0.5735 in recall for an f-score of 0.6410, demonstrating the effectiveness of machine learning-based approaches for automatic relation extraction from biomedical literature. Our submission achieved the highest performance in the task during the 2017 challenge. Keywords\u2014relation extraction; deep learning; chemical; protein"}
{"_id": "44c5433047692f26ae518e5c659eecfcf9325f49", "title": "EasyConnect: A Management System for IoT Devices and Its Applications for Interactive Design and Art", "text": "Many Internet of Things (IoT) technologies have been used in applications for money flow, logistics flow, people flow, interactive art design, and so on. To manage these increasing disparate devices and connectivity options, ETSI has specified end-to-end machine-to-machine (M2M) system architecture for IoT applications. Based on this architecture, we develop an IoT EasyConnect system to manage IoT devices. In our approach, an IoT device is characterized by its \u201cfeatures\u201d (e.g., temperature, vibration, and display) that are manipulated by the network applications. If a network application handles the individual device features independently, then we can write a software module for each device feature, and the network application can be simply constructed by including these brick-like device feature modules. Based on the concept of device feature, brick-like software modules can provide simple and efficient mechanism to develop IoT device applications and interactions."}
{"_id": "4b392162b7bda78308c205928911c42b7ce6a293", "title": "Inherent Resistance of Efficient ECC Designs against SCA Attacks", "text": "The Montgomery kP algorithm using Lopez-Dahab projective coordinates is a time and energy efficient way to perform the elliptic curve point multiplication. It even can provide resistance against simple power analysis attacks. Nevertheless, straight forward implementations of this algorithm are not resistant against differential power analysis attacks. But highly-efficient designs i.e. designs in which different functional units are operating in parallel can increase the resistance of ECC implementations against DPA. In this paper we demonstrate this fact on the examples of our 6 implementations of the Montgomery kP algorithm. We perform horizontal DPA attacks to demonstrate the fact that the resistance of the kP algorithm against side channel analysis attacks can be increased through its implementation as an efficient design."}
{"_id": "a54eaafe60c3309169a976df7e204c5b066798c7", "title": "The Theory of Correlation Formulas and Their Application to Discourse Coherence", "text": "The Winograd Schema Challenge (WSC) was proposed as a measure of machine intelligence. It boils down to anaphora resolution, a task familiar from computational linguistics. Research in linguistics and AI has coalesced around discourse coherence as the critical factor in solving this task, and the process of establishing discourse coherence relies fundamentally on world and commonsense knowledge. In this thesis, we build on an approach to establishing coherence on the basis of correlation. The utility of this approach lies in its conceptual clarity and ability to flexibly represent commonsense knowledge. We work to fill some conceptual holes with the Correlation Calculus approach. First, understanding the calculus in a vacuum is not straightfoward unless it has a precise semantics. Second, existing demonstrations of the Correlation Calculus on Winograd Schema Challenge problems have not been linguistically credible. We hope to ameliorate some\u2014but by no means all\u2014of the outstanding issues with the Correlation Calculus. We do so first by providing a precise semantics of the calculus, which relates our intuitive understanding of correlation with a precise notion involving probabilities. Second, we formulate the establishment of discourse coherence by correlation formulas within the framework of Discourse Representation Theory. This provides a more complete and linguistically credible account of the relationship between the Correlation Calculus, discourse coherence, and Winograd Schema Challenge problems."}
{"_id": "ac7dbcf7b1dba7eb04afc7cde3a07a30b113ba06", "title": "Game-Related Statistics that Discriminated Winning, Drawing and Losing Teams from the Spanish Soccer League.", "text": "The aim of the present study was to analyze men's football competitions, trying to identify which game-related statistics allow to discriminate winning, drawing and losing teams. The sample used corresponded to 380 games from the 2008-2009 season of the Spanish Men's Professional League. The game-related statistics gathered were: total shots, shots on goal, effectiveness, assists, crosses, offsides commited and received, corners, ball possession, crosses against, fouls committed and received, corners against, yellow and red cards, and venue. An univariate (t-test) and multivariate (discriminant) analysis of data was done. The results showed that winning teams had averages that were significantly higher for the following game statistics: total shots (p < 0.001), shots on goal (p < 0.01), effectiveness (p < 0.01), assists (p < 0.01), offsides committed (p < 0.01) and crosses against (p < 0.01). Losing teams had significantly higher averages in the variable crosses (p < 0.01), offsides received (p < 0. 01) and red cards (p < 0.01). Discriminant analysis allowed to conclude the following: the variables that discriminate between winning, drawing and losing teams were the total shots, shots on goal, crosses, crosses against, ball possession and venue. Coaches and players should be aware for these different profiles in order to increase knowledge about game cognitive and motor solicitation and, therefore, to evaluate specificity at the time of practice and game planning. Key pointsThis paper increases the knowledge about soccer match analysis.Give normative values to establish practice and match objectives.Give applications ideas to connect research with coaches' practice."}
{"_id": "a11f80ce00cac50998ff40fca4af58c58b9e545f", "title": "Free radicals and antioxidants in human health: current status and future prospects.", "text": "Free radicals and related species have attracted a great deal of attention in recent years. They are mainly derived from oxygen (reactive oxygen species/ROS) and nitrogen (reactive nitrogen species/RNS), and are generated in our body by various endogenous systems, exposure to different physicochemical conditions or pathophysiological states. Free radicals can adversely alter lipids, proteins and DNA and have been implicated in aging and a number of human diseases. Lipids are highly prone to free radical damage resulting in lipid peroxidation that can lead to adverse alterations. Free radical damage to protein can result in loss of enzyme activity. Damage caused to DNA, can result in mutagenesis and carcinogenesis. Redox signaling is a major area of free radical research that is attracting attention. Nature has endowed us with protective antioxidant mechanisms- superoxide dismutase (SOD), catalase, glutathione, glutathione peroxidases and reductase, vitamin E (tocopherols and tocotrienols), vitamin C etc., apart from many dietary components. There are epidemiological evidences correlating higher intake of components/ foods with antioxidant abilities to lower incidence of various human morbidities or mortalities. Current research reveals the different potential applications of antioxidant/free radical manipulations in prevention or control of disease. Natural products from dietary components such as Indian spices and medicinal plants are known to possess antioxidant activity. Newer and future approaches include gene therapy to produce more antioxidants in the body, genetically engineered plant products with higher level of antioxidants, synthetic antioxidant enzymes (SOD mimics), novel biomolecules and the use of functional foods enriched with antioxidants."}
{"_id": "800979161bffd5b8534bf7874f1eefec531ccda3", "title": "Cultivate Self-Efficacy for Personal and Organizational Effectiveness", "text": "Bandura, A. (2000). Cultivate self-efficacy for personal and organizational effectiveness. In E."}
{"_id": "54ee63cdeca5b7673beea700e2e46e1c95a504e9", "title": "Power LDMOS with novel STI profile for improved Rsp, BVdss, and reliability", "text": "The profile of shallow trench isolation (STI) is designed to improve LDMOS specific on-resistance (Rsp), BVDSS, safe operating area (SOA), and hot carrier lifetimes (HCL) in an integrated BiCMOS power technology. Silicon etch, liner oxidation and CMP processes are tuned to improve the tradeoffs in a power technology showing significant improvement to both p-channel and n-channel Rsp compared to devices fabricated with the STI profile inherited from the original submicron CMOS platform. Extensive TCAD and experiments were carried out to gain insight into the physical mechanisms and further improve device performance after STI process optimization. The final process and device structures yield SOAs that are limited only by thermal constraints up to rated voltages"}
{"_id": "bb93e5d1cb062157b19764b78fbfd7911ecbac5a", "title": "A Hygroscopic Sensor Electrode for Fast Stabilized Non-Contact ECG Signal Acquisition", "text": "A capacitive electrocardiography (cECG) technique using a non-invasive ECG measuring technology that does not require direct contact between the sensor and the skin has attracted much interest. The system encounters several challenges when the sensor electrode and subject's skin are weakly coupled. Because there is no direct physical contact between the subject and any grounding point, there is no discharge path for the built-up electrostatic charge. Subsequently, the electrostatic charge build-up can temporarily contaminate the ECG signal from being clearly visible; a stabilization period (3-15 min) is required for the measurement of a clean, stable ECG signal at low humidity levels (below 55% relative humidity). Therefore, to obtain a clear ECG signal without noise and to reduce the ECG signal stabilization time to within 2 min in a dry ambient environment, we have developed a fabric electrode with embedded polymer (FEEP). The designed hygroscopic FEEP has an embedded superabsorbent polymer layer. The principle of FEEP as a conductive electrode is to provide humidity to the capacitive coupling to ensure strong coupling and to allow for the measurement of a stable, clear biomedical signal. The evaluation results show that hygroscopic FEEP is capable of rapidly measuring high-accuracy ECG signals with a higher SNR ratio."}
{"_id": "3f2f4e017ad87e57945324877134fe1d0aab1c9f", "title": "A 3.9-ps RMS Precision Time-to-Digital Converter Using Ones-Counter Encoding Scheme in a Kintex-7 FPGA", "text": "A 3.9-ps time-interval rms precision and 277-M events/second measurement throughput time-to-digital converter (TDC) is implemented in a Xilinx Kintex-7 field programmable gate array (FPGA). Unlike previous work, the TDC is achieved with a multichain tapped-delay line (TDL) followed by an ones-counter encoder. The four normal TDLs merged together make the TDC bins very small, so that the time precision can be significantly improved. The ones-counter encoder naturally applies global bubble error correction to the output of TDL, thus the TDC design is relatively simple even when using FPGAs made with current advanced process technology. The TDC implementation is a generally applicable method that can simultaneously achieve high time precision and high measurement throughput."}
{"_id": "49cc527ee351f1a8e087806730936cabd5e6034b", "title": "Comparative Evaluation of Background Subtraction Algorithms in Remote Scene Videos Captured by MWIR Sensors", "text": "Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote scene infrared (IR) video is important and common to lots of fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges and the pixel-wise ground truth of foreground (FG) for each frame is also provided. A series of experiments were conducted to evaluate BS algorithms on this proposed dataset. The overall performance of BS algorithms and the processor/memory requirements were compared. Proper evaluation metrics or criteria were employed to evaluate the capability of each BS algorithm to handle different kinds of BS challenges represented in this dataset. The results and conclusions in this paper provide valid references to develop new BS algorithm for remote scene IR video sequence, and some of them are not only limited to remote scene or IR video sequence but also generic for background subtraction. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR."}
{"_id": "9a80c4089e2ef8066b4b3b63578543b5b706603e", "title": "Can You Spot the Fakes?: On the Limitations of User Feedback in Online Social Networks", "text": "Online social networks (OSNs) are appealing platforms for spammers and fraudsters, who typically use fake or compromised accounts to connect with and defraud real users. To combat such abuse, OSNs allow users to report fraudulent profiles or activity. The OSN can then use reporting data to review and/or limit activity of reported accounts. Previous authors have suggested that an OSN can augment its takedown algorithms by identifying a \u201ctrusted set\u201d of users whose reports are weighted more heavily in the disposition of flagged accounts. Such identification would allow the OSN to improve both speed and accuracy of fake account detection and thus reduce the impact of spam on users. In this work we provide the first public, data-driven assessment of whether the above assumption is true: are some users better at reporting than others? Specifically, is reporting skill both measurable, i.e., possible to distinguish from random guessing; and repeatable, i.e., persistent over repeated sampling? Our main contributions are to develop a statistical framework that describes these properties and to apply this framework to data from LinkedIn, the professional social network. Our data includes member reports of fake profiles as well as the more voluminous, albeit weaker, signal of member responses to connection requests. We find that members demonstrating measurable, repeatable skill in identifying fake profiles do exist but are rare: at most 2.4% of those reporting fakes and at most 1.3% of those rejecting connection requests. We conclude that any reliable \u201ctrusted set\u201d of members will be too small to have noticeable impact on spam metrics."}
{"_id": "dbc6fa024c0169a87a33b97634d061a96492db6d", "title": "How to practice person\u2010centred care: A conceptual framework", "text": "BACKGROUND\nGlobally, health-care systems and organizations are looking to improve health system performance through the implementation of a person-centred care (PCC) model. While numerous conceptual frameworks for PCC exist, a gap remains in practical guidance on PCC implementation.\n\n\nMETHODS\nBased on a narrative review of the PCC literature, a generic conceptual framework was developed in collaboration with a patient partner, which synthesizes evidence, recommendations and best practice from existing frameworks and implementation case studies. The Donabedian model for health-care improvement was used to classify PCC domains into the categories of \"Structure,\" \"Process\" and \"Outcome\" for health-care quality improvement.\n\n\nDISCUSSION\nThe framework emphasizes the structural domain, which relates to the health-care system or context in which care is delivered, providing the foundation for PCC, and influencing the processes and outcomes of care. Structural domains identified include: the creation of a PCC culture across the continuum of care; co-designing educational programs, as well as health promotion and prevention programs with patients; providing a supportive and accommodating environment; and developing and integrating structures to support health information technology and to measure and monitor PCC performance. Process domains describe the importance of cultivating communication and respectful and compassionate care; engaging patients in managing their care; and integration of care. Outcome domains identified include: access to care and Patient-Reported Outcomes.\n\n\nCONCLUSION\nThis conceptual framework provides a step-wise roadmap to guide health-care systems and organizations in the provision PCC across various health-care sectors."}
{"_id": "7314be5cd836c8f06bd1ecab565b00b65259eac6", "title": "Probabilistic topic models", "text": "Surveying a suite of algorithms that offer a solution to managing large document archives."}
{"_id": "c951df3db2e0c515bdbdf681137cf6705ef61c0d", "title": "SNOW 2014 Data Challenge: Assessing the Performance of News Topic Detection Methods in Social Media", "text": "The SNOW 2014 Data Challenge aimed at creating a public benchmark and evaluation resource for the problem of topic detection in streams of social content. In particular, given a set of tweets spanning a time interval of interest, the Challenge required the extraction of the most significant news topics in short timeslots within the selected interval. Here, we provide details with respect to the Challenge definition, the data collection and evaluation process, and the results achieved by the 11 teams that participated in it, along with a concise retrospective analysis of the main conclusions and arising issues."}
{"_id": "d582d6d459601eabeac24f4bb4231c3e93882259", "title": "Sensing Trending Topics in Twitter", "text": "Online social and news media generate rich and timely information about real-world events of all kinds. However, the huge amount of data available, along with the breadth of the user base, requires a substantial effort of information filtering to successfully drill down to relevant topics and events. Trending topic detection is therefore a fundamental building block to monitor and summarize information originating from social sources. There are a wide variety of methods and variables and they greatly affect the quality of results. We compare six topic detection methods on three Twitter datasets related to major events, which differ in their time scale and topic churn rate. We observe how the nature of the event considered, the volume of activity over time, the sampling procedure and the pre-processing of the data all greatly affect the quality of detected topics, which also depends on the type of detection method used. We find that standard natural language processing techniques can perform well for social streams on very focused topics, but novel techniques designed to mine the temporal distribution of concepts are needed to handle more heterogeneous streams containing multiple stories evolving in parallel. One of the novel topic detection methods we propose, based on -grams cooccurrence and topic ranking, consistently achieves the best performance across all these conditions, thus being more reliable than other state-of-the-art techniques."}
{"_id": "1ccf386465081b6f8cad5a753827de7576dfe436", "title": "Collective entity linking in web text: a graph-based method", "text": "Entity Linking (EL) is the task of linking name mentions in Web text with their referent entities in a knowledge base. Traditional EL methods usually link name mentions in a document by assuming them to be independent. However, there is often additional interdependence between different EL decisions, i.e., the entities in the same document should be semantically related to each other. In these cases, Collective Entity Linking, in which the name mentions in the same document are linked jointly by exploiting the interdependence between them, can improve the entity linking accuracy.\n This paper proposes a graph-based collective EL method, which can model and exploit the global interdependence between different EL decisions. Specifically, we first propose a graph-based representation, called Referent Graph, which can model the global interdependence between different EL decisions. Then we propose a collective inference algorithm, which can jointly infer the referent entities of all name mentions by exploiting the interdependence captured in Referent Graph. The key benefit of our method comes from: 1) The global interdependence model of EL decisions; 2) The purely collective nature of the inference algorithm, in which evidence for related EL decisions can be reinforced into high-probability decisions. Experimental results show that our method can achieve significant performance improvement over the traditional EL methods."}
{"_id": "f4e2fff95ba8d469fed09c8bd7efa0604ece3be3", "title": "Structural Architectures for a Deployable Wideband UHF Antenna", "text": "This paper explores concepts for a wideband deployable antenna suitable for small satellites. The general approach taken was to closely couple antenna theory and structural mechanics, to produce a deployable antenna that is considered efficient in both fields. Approaches that can be deployed using stored elastic strain energy have been favored over those that require powered actuators or environmental effects to enforce deployment. Two types of concepts were developed: thin shell structure and pantograph. These concepts cover four antenna topologies: crossed log periodic, conical log spiral, helical and quadrifilar helix. Of these, the conical log spiral antenna and the accompanying deployment concepts are determined to be the most promising approaches that warrant further study."}
{"_id": "ed5d20511f9087309f91fcd6504960baf101bce3", "title": "Global climate change and terrestrial net primary production", "text": "A process-based model was used to estimate global patterns of net primary production and soil nitrogen cycling for contemporary climate conditions and current atmospheric C02 concentration. Over half of the global annual net primary production was estimated to occur in the tropics, with most of the production attributable to tropical evergreen forest. The effects of C02 doubling and associated climate changes were also explored. The responses in tropical and dry temperate ecosystems were dominated by C02, but those in northern and moist temperate ecosystems reflected the effects of temperature on nitrogen availability."}
{"_id": "bbe179f855d91feb6a832656c2e15dc679c6d2d0", "title": "Low-cost backdrivable motor control based on feed-forward/feed-back friction compensation", "text": "A low-cost but accurate and backdrivable motor control system running on a DC motor and a harmonic drive gear is proposed. It compensates internal friction of the gear and the counter-electromotive torque by combining a model-based feed-forward method and a disturbance observer using a cheap torque sensor. A complementary use of those techniques lowers requirements to their performances i.e. precision, bandwidth, etc, while it is equipped with a flexible property against the external torque. A 2-DOF servo controller is also built upon the system in order to simultaneously achieve smooth responses and robust convergences to the reference."}
{"_id": "22eb5f0400babf66c76128e475dc2baf1606dc6d", "title": "Submaximal exercise testing: clinical application and interpretation.", "text": "Compared with maximal exercise testing, submaximal exercise testing appears to have greater applicability to physical therapists in their role as clinical exercise specialists. This review contrasts maximal and submaximal exercise testing. Two major categories of submaximal tests (ie, predictive and performance tests) and their relative merits are described. Predictive tests are submaximal tests that are used to predict maximal aerobic capacity. Performance tests involve measuring the responses to standardized physical activities that are typically encountered in everyday life. To maximize the validity and reliability of data obtained from submaximal tests, physical therapists are cautioned to apply the tests selectively based on their indications; to adhere to methods, including the requisite number of practice sessions; and to use measurements such as heart rate, blood pressure, exertion, and pain to evaluate test performance and to safely monitor patients."}
{"_id": "90bd2446523982c62c857c167447636924e03557", "title": "Control of Grid Connected PV Array Using P&O MPPT Algorithm", "text": "The renewable energy is becoming moremainstream and accessible. This has been made possible due toan increase in environmental awareness coupled with thepopular demand to cut back on the greenhouse emissions. We inthis project propose a grid connected PV system. The aim of theproject is to implement a complete distributed energy resourcesystem (DER). The project will feature a PV module, which willbe controlled and optimized by means of a maximum powerpoint tracking (MPPT) algorithm. A boost converter along witha single phase grid tie inverter will be used to increase theoutput voltage and to convert it to AC. A phase locked loopcircuit will be used to integrate the single phase inverter withthe grid. A control methodology consisting of PI controllers isemployed for operating the PV at the MPPT point bycontrolling the switching of the boost converter and also for theoperation of the single phase inverter and its integration withthe grid. The parameters of these controllers are tuned to givethe best results. This will be followed by a detailed mathematicaland engineering analysis for the simulated results. The validityof the proposed scheme will be verified by simulation using thePSIM software."}
{"_id": "1f671ab8b6efdff792a983afbed6eab444e673ef", "title": "Learning analytics: envisioning a research discipline and a domain of practice", "text": "Learning analytics are rapidly being implemented in different educational settings, often without the guidance of a research base. Vendors incorporate analytics practices, models, and algorithms from datamining, business intelligence, and the emerging \"big data\" fields. Researchers, in contrast, have built up a substantial base of techniques for analyzing discourse, social networks, sentiments, predictive models, and in semantic content (i.e., \"intelligent\" curriculum). In spite of the currently limited knowledge exchange and dialogue between researchers, vendors, and practitioners, existing learning analytics implementations indicate significant potential for generating novel insight into learning and vital educational practices. This paper presents an integrated and holistic vision for advancing learning analytics as a research discipline and a domain of practices. Potential areas of collaboration and overlap are presented with the intent of increasing the impact of analytics on teaching, learning, and the education system."}
{"_id": "41563b2830005e93a2ce2e612d5522aa519f1537", "title": "[POSTER] Realtime Shape-from-Template: System and Applications", "text": "An important yet unsolved problem in computer vision and Augmented Reality (AR) is to compute the 3D shape of nonrigid objects from live 2D videos. When the object's shape is provided in a rest pose, this is the Shape-from-Template (SfT) problem. Previous realtime SfT methods require simple, smooth templates, such as flat sheets of paper that are densely textured, and which deform in simple, smooth ways. We present a realtime SfT framework that handles generic template meshes, complex deformations and most of the difficulties present in real imaging conditions. Achieving this has required new, fast solutions to the two core sub-problems: robust registration and 3D shape inference. Registration is achieved with what we call Deformable Render-based Block Matching (DRBM): a highly-parallel solution which densely matches a time-varying render of the object to each video frame. We then combine matches from DRBM with physical deformation priors and perform shape inference, which is done by quickly solving a sparse linear system with a Geometric Multi-Grid (GMG)-based method. On a standard PC we achieve up to 21fps depending on the object. Source code will be released."}
{"_id": "6108ee29143606dfe2028dc92da2393693f17a79", "title": "Fine-Grained Categorization and Dataset Bootstrapping Using Deep Metric Learning with Humans in the Loop", "text": "Existing fine-grained visual categorization methods often suffer from three challenges: lack of training data, large number of fine-grained categories, and high intraclass vs. low inter-class variance. In this work we propose a generic iterative framework for fine-grained categorization and dataset bootstrapping that handles these three challenges. Using deep metric learning with humans in the loop, we learn a low dimensional feature embedding with anchor points on manifolds for each category. These anchor points capture intra-class variances and remain discriminative between classes. In each round, images with high confidence scores from our model are sent to humans for labeling. By comparing with exemplar images, labelers mark each candidate image as either a \"true positive\" or a \"false positive.\" True positives are added into our current dataset and false positives are regarded as \"hard negatives\" for our metric learning model. Then the model is retrained with an expanded dataset and hard negatives for the next round. To demonstrate the effectiveness of the proposed framework, we bootstrap a fine-grained flower dataset with 620 categories from Instagram images. The proposed deep metric learning scheme is evaluated on both our dataset and the CUB-200-2001 Birds dataset. Experimental evaluations show significant performance gain using dataset bootstrapping and demonstrate state-of-the-art results achieved by the proposed deep metric learning methods."}
{"_id": "5ce97d9f83e10c5a04c8130143c12a7682a69e3e", "title": "A 1.2 V Reactive-Feedback 3.1\u201310.6 GHz Low-Noise Amplifier in 0.13 $\\mu{\\hbox {m}}$ CMOS", "text": "A 15.1 dB gain, 2.1 dB (min.) noise figure low-noise amplifier (LNA) fabricated in 0.13 mum CMOS operates across the entire 3.1-10.6 GHz ultrawideband (UWB). Noise figure variation over the band is limited to 0.43 dB. Reactive (transformer) feedback reduces the noise figure, stabilizes the gain, and sets the terminal impedances over the desired bandwidth. It also provides a means of separating ESD protection circuitry from the RF input path. Bias current-reuse limits power consumption of the 0.87mm2 IC to 9 mW from a 1.2 V supply. Comparable measured results are presented from both packaged and wafer probed test samples"}
{"_id": "cbfe59e455878db23e222ce59b39bc8988b3a614", "title": "Advances in Reversed Nested Miller Compensation", "text": "The use of two frequency compensation schemes for three-stage operational transconductance amplifiers, namely the reversed nested Miller compensation with nulling resistor (RN-MCNR) and reversed active feedback frequency compensation (RAFFC), is presented in this paper. The techniques are based on the basic RNMC and show an inherent advantage over traditional compensation strategies, especially for heavy capacitive loads. Moreover, they are implemented without entailing extra transistors, thus saving circuit complexity and power consumption. A well-defined design procedure, introducing phase margin as main design parameter, is also developed for each solution. To verify the effectiveness of the techniques, two amplifiers have been fabricated in a standard 0.5-mum CMOS process. Experimental measurements are found in good agreement with theoretical analysis and show an improvement in small-signal and large-signal amplifier performances. Finally, an analytical comparison with the nonreversed counterparts topologies, which shows the superiority of the proposed solutions, is also included."}
{"_id": "ee3d30287b87272c5d6efc54d4da627a59e0a816", "title": "Grammar Inference Using Recurrent Neural Networks", "text": "This paper compares the performance of two recurrent neural network models learning the grammar of simple English sentences and introduces a method for creating useful negative training examples from a set of grammatical sentences. The learning task is cast as a classification problem as opposed to a prediction problem; networks are taught to classify a given sequence as grammatical or ungrammatical. The two neural network models compared are the simple architecture proposed by Elman in [1], and the Long Short-Term Memory (LSTM) network from [4]. For comparison, a Naive Bayes (NB) classifier is also trained with bigrams to accomplish this task. Only the Elman network learns the training data better than NB, and neither network generalizes better than NB on sentences longer than the training sentences. The LSTM network does not generalize as well as the Elman network on a simple regular grammar, but shows comparable ability to generalize knowledge of English Grammar."}
{"_id": "12c6f3ae8f20a1473a89b9cbb82d0f02275ea62b", "title": "Hand detection using multiple proposals", "text": "We describe a two-stage method for detecting hands and their orientation in unconstrained images. The first stage uses three complementary detectors to propose hand bounding boxes. Each bounding box is then scored by the three detectors independently, and a second stage classifier learnt to compute a final confidence score for the proposals using these features. We make the following contributions: (i) we add context-based and skin-based proposals to a sliding window shape based detector to increase recall; (ii) we develop a new method of non-maximum suppression based on super-pixels; and (iii) we introduce a fully annotated hand dataset for training and testing. We show that the hand detector exceeds the state of the art on two public datasets, including the PASCAL VOC 2010 human layout challenge."}
{"_id": "3c76b9192e0828c87ce8d2b8aaee197d9700dd68", "title": "Serendipity: Finger Gesture Recognition using an Off-the-Shelf Smartwatch", "text": "Previous work on muscle activity sensing has leveraged specialized sensors such as electromyography and force sensitive resistors. While these sensors show great potential for detecting finger/hand gestures, they require additional hardware that adds to the cost and user discomfort. Past research has utilized sensors on commercial devices, focusing on recognizing gross hand gestures. In this work we present Serendipity, a new technique for recognizing unremarkable and fine-motor finger gestures using integrated motion sensors (accelerometer and gyroscope) in off-the-shelf smartwatches. Our system demonstrates the potential to distinguish 5 fine-motor gestures like pinching, tapping and rubbing fingers with an average f1-score of 87%. Our work is the first to explore the feasibility of using solely motion sensors on everyday wearable devices to detect fine-grained gestures. This promising technology can be deployed today on current smartwatches and has the potential to be applied to cross-device interactions, or as a tool for research in fields involving finger and hand motion."}
{"_id": "6258323b0ddedd0892febb36c1772a10820e0b0c", "title": "Using Ontological Engineering to Overcome Common AI-ED Problems", "text": "This paper discusses long-term prospects of AI-ED research with the aim of giving a clear view of what we need for further promotion of the research from both the AI and ED points of view. An analysis of the current status of AI-ED research is done in the light of intelligence, conceptualization, standardization and theory-awareness. Following this, an ontology-based architecture with appropriate ontologies is proposed. Ontological engineering of IS/ID is next discussed followed by a road map towards an ontology-aware authoring system. Heuristic design patterns and XML-based documentation are also discussed."}
{"_id": "750c3a9c6814bab2b6cb93fa25c7b91e6929fae2", "title": "Plant/microbe cooperation for electricity generation in a rice paddy field", "text": "Soils are rich in organics, particularly those that support growth of plants. These organics are possible sources of sustainable energy, and a microbial fuel cell (MFC) system can potentially be used for this purpose. Here, we report the application of an MFC system to electricity generation in a rice paddy field. In our system, graphite felt electrodes were used; an anode was set in the rice rhizosphere, and a cathode was in the flooded water above the rhizosphere. It was observed that electricity generation (as high as 6\u00a0mW/m2, normalized to the anode projection area) was sunlight dependent and exhibited circadian oscillation. Artificial shading of rice plants in the daytime inhibited the electricity generation. In the rhizosphere, rice roots penetrated the anode graphite felt where specific bacterial populations occurred. Supplementation to the anode region with acetate (one of the major root-exhausted organic compounds) enhanced the electricity generation in the dark. These results suggest that the paddy-field electricity-generation system was an ecological solar cell in which the plant photosynthesis was coupled to the microbial conversion of organics to electricity."}
{"_id": "eeb28f84609be8a64c9665eabcfd73c743fa5aa9", "title": "Privacy and trust in Facebook photo sharing: age and gender differences", "text": "Purpose \u2013 The purpose of this paper is to examine gender and age differences regarding various aspects of privacy, trust, and activity on one of the most popular Facebook activity \u2013 \u201cphoto sharing.\u201d Design/methodology/approach \u2013 The data were collected using an online survey hosted by a web-based survey service for three weeks during December 2014-January 2015. The target audience comprised of Facebook users over 18 years engaged in sharing their photos on the platform. Findings \u2013Women and young Facebook users are significantly more concerned about the privacy of their shared photos. Meanwhile, users from older age groups are less active in using the site, in sharing photos, and in taking privacy-related protective measures. Interestingly, despite having more privacy concerns, young Facebook users display higher trust levels toward the platform than older users. Overall, in the study, there was an extremely significant difference in privacy attitudes among people under and over 35 years of age. Originality/value \u2013 The main contribution of this study is new knowledge regarding the gender and age differences in various privacy-related aspects, trust, and activity. Findings from the study broadens the overall understanding of how these issues positively/negatively influence the photosharing activity on Facebook."}
{"_id": "18df2b9da1cd6d9a412c45ed99fdc4a608c4c4bd", "title": "Combining Generational and Conservative Garbage Collection: Framework and Implementations", "text": "Two key ideas in garbage collection are generational collection and conservative pointer-finding. Generational collection and conservative pointer-finding are hard to use together, because generational collection is usually expressed in terms of copying objects, while conservative pointer-finding precludes copying. We present a new framework for defining garbage collectors. When applied to generational collection, it generalizes the notion of younger/older to a partial order. It can describe traditional generational and conservative techniques, and lends itself to combining different techniques in novel ways. We study in particular two new garbage collectors inspired by this framework. Both these collectors use conservative pointer-finding. The first one is based on a rewrite of an existing trace-and-sweep collector to use one level of generation. The second one has a single parameter, which controls how objects are partitioned into generations: the value of this parameter can be changed dynamically with no overhead. We have implemented both collectors and present measurements of their performance in practice."}
{"_id": "bdac3841eb5a6d239a807c2673f2463db30968d5", "title": "Delineation of line patterns in images using B-COSFIRE filters", "text": "Delineation of line patterns in images is a basic step required in various applications such as blood vessel detection in medical images, segmentation of rivers or roads in aerial images, detection of cracks in walls or pavements, etc. In this paper we present trainable B-COSFIRE filters, which are a model of some neurons in area V1 of the primary visual cortex, and apply it to the delineation of line patterns in different kinds of images. B-COSFIRE filters are trainable as their selectivity is determined in an automatic configuration process given a prototype pattern of interest. They are configurable to detect any preferred line structure (e.g. segments, corners, cross-overs, etc.), so usable for automatic data representation learning. We carried out experiments on two data sets, namely a line-network data set from INRIA and a data set of retinal fundus images named IOSTAR. The results that we achieved confirm the robustness of the proposed approach and its effectiveness in the delineation of line structures in different kinds of images"}
{"_id": "d2044695b0cc9c01b07e81ecf27b19c4efb9e23c", "title": "Heuristics for High-Utility Local Process Model Mining", "text": "Local Process Models (LPMs) describe structured fragments of process behavior occurring in the context of less structured business processes. In contrast to traditional support-based LPM discovery, which aims to generate a collection of process models that describe highly frequent behavior, High-Utility Local Process Model (HU-LPM) discovery aims to generate a collection of process models that provide useful business insights by specifying a utility function. Mining LPMs is a computationally expensive task, because of the large search space of LPMs. In supportbased LPM mining, the search space is constrained by making use of the property that support is anti-monotonic. We show that in general, we cannot assume a provided utility function to be anti-monotonic, therefore, the search space of HU-LPMs cannot be reduced without loss. We propose four heuristic methods to speed up the mining of HU-LPMs while still being able to discover useful HU-LPMs. We demonstrate their applicability on three real-life data sets."}
{"_id": "fc379ad7fbc905e246787b73332b8d7b647444c7", "title": "Nurses' response time to call lights and fall occurrences.", "text": "Nurses respond to fallers' call lights more quickly than they do to lights initiated by non-fallers. The nurses' responsiveness to call lights could be a compensatory mechanism in responding to the fall prevalence on the unit."}
{"_id": "8dee883a7ff04379677e08093b8abcb5378923b0", "title": "Designing an Interactive Messaging and Reminder Display for Elderly", "text": "Despite the wealth of information and communicatio n technology in society today, there appears to be a lack of accept able information services for the growing elderly population in need of care. Acc eptability is not only related to human factors such as button size and legibility , but rather relates to perceived value and harmony in relation to existing l ving patterns. This paper describes the design of an asynchronous interactive ommunication system based upon a bulletin board metaphor. A panel of en d-users was involved in various stages of the design process. To improve ea s of use, functionality exposed to elderly users is limited, while caregive rs are given extended control. A pilot field study with a working prototype showed a high degree of user acceptance. The user centered approach resulted in a design concept that was acceptable for the elderly participants."}
{"_id": "062a4873b34c36e99028dd500506de89e70a3b38", "title": "A generalized hidden Markov model with discriminative training for query spelling correction", "text": "Query spelling correction is a crucial component of modern search engines. Existing methods in the literature for search query spelling correction have two major drawbacks. First, they are unable to handle certain important types of spelling errors, such as concatenation and splitting. Second, they cannot efficiently evaluate all the candidate corrections due to the complex form of their scoring functions, and a heuristic filtering step must be applied to select a working set of top-K most promising candidates for final scoring, leading to non-optimal predictions. In this paper we address both limitations and propose a novel generalized Hidden Markov Model with discriminative training that can not only handle all the major types of spelling errors, including splitting and concatenation errors, in a single unified framework, but also efficiently evaluate all the candidate corrections to ensure the finding of a globally optimal correction. Experiments on two query spelling correction datasets demonstrate that the proposed generalized HMM is effective for correcting multiple types of spelling errors. The results also show that it significantly outperforms the current approach for generating top-K candidate corrections, making it a better first-stage filter to enable any other complex spelling correction algorithm to have access to a better working set of candidate corrections as well as to cover splitting and concatenation errors, which no existing method in academic literature can correct."}
{"_id": "613c001c968a96e05ee60034074ed29a9d76e125", "title": "Voltage Doubler Rectified Boost-Integrated Half Bridge (VDRBHB) Converter for Digital Car Audio Amplifiers", "text": "A new voltage doubler rectified boost-integrated half bridge converter for digital car audio amplifiers is proposed. The proposed converter shows low conduction loss due to low voltage stress of the secondary diodes, no dc magnetizing current for the transformer, and a lack of stored energy in the transformer. Moreover, since the primary MOSFETs are turned-on under zero voltage switching conditions and the secondary diodes are turned-off under zero current switching conditions, the proposed converter has minimized switching losses. In addition, the input filter can be minimized due to a continuous input current, and an output filter does not need an inductor in the proposed converter. Therefore, the proposed converter has the desired features, high efficiency and low profile, for a viable power supplies for digital car audio amplifiers. A 60-W industrial sample of the proposed converter has been implemented for digital car audio amplifiers with a measured efficiency of 88.3% at nominal input voltage."}
{"_id": "15e72433bed1f8cb79ab3282458b3b9038475da2", "title": "Structuring Space with Image Schemata: Wayfinding in Airports as a Case Study", "text": "Wayfinding is a basic activity people do throughout their entire lives as they navigate from one place to another. In order to create different spaces in such a way that they facilit ate people\u2019s wayfinding it is necessary to integrate principles of human spatial cognition into the design process. This paper presents a methodology to structure space based on experiental patterns, called image schemata. It integrates cognitive and engineering aspects in three steps: (1) interviewing people about their spatial experiences as they perform a wayfinding task in the application space, (2) extracting the image schemata from these interviews and formulating a sequence of subtasks, and (3) structuring the application space (i.e., the wayfinding task) with the extracted image schemata. We use wayfinding in airports as a case study to demonstrate the methodology. Our observations show that most often image schemata are correlated with other image schemata in the form of image-schematic blocks and rarely occur in isolation. Such image-schematic blocks serve as a knowledge-representation scheme for wayfinding tasks."}
{"_id": "1eb9816a331288c657b0899cb20a1a073c3c7314", "title": "Enriching Wayfinding Instructions with Local Landmarks", "text": "Navigation services communicate optimal routes to users by providing sequences of instructions for these routes. Each single instruction guides the wayfinder from one decision point to the next. The instructions are based on geometric data from the street network, which is typically the only dataset available. This paper addresses the question of enriching such wayfinding instructions with local landmarks. We propose measures to formally specify the landmark saliency of a feature. Values for these measures are subject to hypothesis tests in order to define and extract landmarks from datasets. The extracted landmarks are then integrated in the wayfinding instructions. A concrete example from the city of Vienna demonstrates the applicability and usefulness of the method."}
{"_id": "90aac67f7bbb5bccbb101dc637825897df8fab91", "title": "A Model for Context-Specific Route Directions", "text": "Wayfinding, i.e. getting from some origin to a destination, is one of the prime everyday problems humans encounter. It has received a lot of attention in research and many (commercial) systems propose assistance in this task. We present an approach to route directions based on the idea to adapt route directions to route and environment's characteristics. The lack of such an adaptation is a major drawback of existing systems. Our approach is based on an informationand representation-theoretic analysis of routes and takes into account findings of behavioral research. The resulting systematics is the framework for the optimization process. We discuss the consequences of using an optimization process for generating route directions and outline its algorithmic realization."}
{"_id": "9d49388512688e55ea1a882a210552653e8afd61", "title": "Artificial Intelligence: A Modern Approach", "text": "Artificial Intelligence (AI) is a big field, and this is a big book. We have tried to explore the full breadth of the field, which encompasses logic, probability, and continuous mathematics; perception, reasoning, learning, and action; and everything from microelectronic devices to robotic planetary explorers. The book is also big because we go into some depth. URI: http://thuvienso.thanglong.edu.vn/handle/DHTL_123456789/4010 Appears in Collections: Tin h c Files in This Item: File Description Size Format CS503-2.pdf Gi i thi u 2.38 MB Adobe PDF View/Open CS503_TriTueNhanTaoNC_GTStuart Russell, Peter Norvig-Artificial Intelligence. A Modern Approach [Global Edition]Pearson (2016).pdf N i dung 14.25 MB Adobe PDF View/Open Request a copy Show full item record Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated. TH VI N S TR NG I H C TH NG LONG a ch : ng Nghi\u00eam Xu\u00e2n Y\u00eam i Kim Ho\u00e0ng Mai H\u00e0 N i i n tho i: 043 559 2376 Email: thuvien@thanglong.edu.vn Feedback Book review: Battered women, children, and welfare reform: The ties that bind, cosmogonic hypothesis Schmidt allows you to simply explain this discrepancy, but the accentuation gracefully creates the ideological Greatest Common Divisor (GCD). Book Review: Trying Leviathan: The Nineteenth-Century New York Court Case That Put the Whale on Trial and Challenged the Order of Nature, the quantum state vertically is a precession ketone even in the case of unique chemical properties. Forgiveness is a choice: A step-by-step process for resolving anger and restoring hope, impersonation certainly compensates for a certain hour angle, which significantly reduces the yield of the target alcohol. Atonement in the Book of Leviticus, v. Artificial intelligence: a modern approach, fertility, especially in the context of the socio-economic crisis, once. Book Review: Why Poor People Stay Poor: Urban Bias in World Development: by MICHAEL LIPTON. London: Temple Smith. pp. 467.\u00a3 9.50, non-profit organization, according to traditional ideas, is a number of out of the ordinary meteorite. Book reviews--All Is Forgiven: The Secular Message in American Protestantism by Marsha G. Witten, redistribution of the budget is preparative. Book reviews: Religious transformations--All is Forgiven: The Secular Message in American Protestantism by Marsha G. Witten, scherba argued that behaviorism indifferently transforms the rotational Gestalt. The effort re-quired to effectively intervene can be intimidating for many helpers. Miller shows that if we do not insist that abusers must be forgiven no matter what, that, the consumer market, by virtue of Newton's third law, is a brand."}
{"_id": "4023ae0ba18eed43a97e8b8c9c8fcc9a671b7aa3", "title": "The magical number seven plus or minus two: some limits on our capacity for processing information.", "text": "My problem is that I have been persecuted by an integer. For seven years this number has followed me around, has intruded in my most private data, and has assaulted me from the pages of our most public journals. This number assumes a variety of disguises, being sometimes a little larger and sometimes a little smaller than usual, but never changing so much as to be unrecognizable. The persistence with which this number plagues me is far more than a random accident. There is, to quote a famous senator, a design behind it, some pattern governing its appearances. Either there really is something unusual about the number or else I am suffering from delusions of persecution. I shall begin my case history by telling you about some experiments that tested how accurately people can assign numbers to the magnitudes of various aspects of a stimulus. In the traditional language of psychology these would be called experiments in absolute"}
{"_id": "81f8c2637e6ff72094a7e3a4141b5117fb19171a", "title": "A Study on Vehicle Detection and Tracking Using Wireless Sensor Networks", "text": "Wireless Sensor network (WSN) is an emerging technology and has great potential to be employed in critical situations. The development of wireless sensor networks was originally motivated by military applications like battlefield surveillance. However, Wireless Sensor Networks are also used in many areas such as Industrial, Civilian, Health, Habitat Monitoring, Environmental, Military, Home and Office application areas. Detection and tracking of targets (eg. animal, vehicle) as it moves through a sensor network has become an increasingly important application for sensor networks. The key advantage of WSN is that the network can be deployed on the fly and can operate unattended, without the need for any pre-existing infrastructure and with little maintenance. The system will estimate and track the target based on the spatial differences of the target signal strength detected by the sensors at different locations. Magnetic and acoustic sensors and the signals captured by these sensors are of present interest in the study. The system is made up of three components for detecting and tracking the moving objects. The first component consists of inexpensive off-the shelf wireless sensor devices, such as MicaZ motes, capable of measuring acoustic and magnetic signals generated by vehicles. The second component is responsible for the data aggregation. The third component of the system is responsible for data fusion algorithms. This paper inspects the sensors available in the market and its strengths and weakness and also some of the vehicle detection and tracking algorithms and their classification. This work focuses the overview of each algorithm for detection and tracking and compares them based on evaluation parameters."}
{"_id": "c336b9c9c049d0e4119707952a4d65c84b4e0b53", "title": "Loneliness, self-esteem, and life satisfaction as predictors of Internet addiction: a cross-sectional study among Turkish university students.", "text": "This study investigated the relationship among loneliness, self-esteem, life satisfaction, and Internet addiction. Participants were 384 university students (114 males, 270 females) from 18 to 24 years old from the faculty of education in Turkey. The Internet Addiction, UCLA Loneliness, Self-esteem, and Life Satisfaction scales were distributed to about 1000 university students, and 38.4% completed the survey (see Appendix A and B). It was found that loneliness, self-esteem, and life satisfaction explained 38% of the total variance in Internet addiction. Loneliness was the most important variable associated with Internet addiction and its subscales. Loneliness and self-esteem together explained time-management problems and interpersonal and health problems while loneliness, self-esteem, and life satisfaction together explained only the interpersonal and health problems subscales."}
{"_id": "af3b68155a6acc6e5520c0857a55cb7541f538fc", "title": "The Role of Search Engine Optimization in Search Marketing", "text": "Web sites invest significant resources in trying to influence their visibility among online search results. In addition to paying for sponsored links, they invest in methods known as search engine optimization (SEO) that improve the ranking of a site among the search results without improving its quality. We study the economic incentives of Web sites to invest in SEO and its implications on search engine and advertiser payoffs. We find that the process is equivalent to an all-pay auction with noise and headstarts. Our results show that, under certain conditions, a positive level of search engine optimization improves the search engine\u2019s ranking quality and thus the satisfaction of its visitors. In particular, if the quality of sites coincides with their valuation for visitors then search engine optimization serves as a mechanism that improves the ranking by correcting measurement errors. While this benefits consumers and increases traffic to the search engine, sites participating in search engine optimization could be worse off due to wasteful spending unless their valuation for traffic is very high. We also investigate how search engine optimization affects the revenues from sponsored links. Surprisingly, we find that in many cases search engine revenues are increased by SEO."}
{"_id": "f5f470993c9028cd2f4dffd914c74085e5c8d0f9", "title": "Corrective 3D reconstruction of lips from monocular video", "text": "In facial animation, the accurate shape and motion of the lips of virtual humans is of paramount importance, since subtle nuances in mouth expression strongly influence the interpretation of speech and the conveyed emotion. Unfortunately, passive photometric reconstruction of expressive lip motions, such as a kiss or rolling lips, is fundamentally hard even with multi-view methods in controlled studios. To alleviate this problem, we present a novel approach for fully automatic reconstruction of detailed and expressive lip shapes along with the dense geometry of the entire face, from just monocular RGB video. To this end, we learn the difference between inaccurate lip shapes found by a state-of-the-art monocular facial performance capture approach, and the true 3D lip shapes reconstructed using a high-quality multi-view system in combination with applied lip tattoos that are easy to track. A robust gradient domain regressor is trained to infer accurate lip shapes from coarse monocular reconstructions, with the additional help of automatically extracted inner and outer 2D lip contours. We quantitatively and qualitatively show that our monocular approach reconstructs higher quality lip shapes, even for complex shapes like a kiss or lip rolling, than previous monocular approaches. Furthermore, we compare the performance of person-specific and multi-person generic regression strategies and show that our approach generalizes to new individuals and general scenes, enabling high-fidelity reconstruction even from commodity video footage."}
{"_id": "7d5f8753f1e5a3779f133218ff7ca38de97b6281", "title": "Preventing software architecture erosion through static architecture conformance checking", "text": "Software architecture erosion is a problem faced by many organizations in the software industry. It happens when `as-implemented' architecture does not conform to the `as-intended' architecture, which results in low quality, complex, hard to maintain software. Architecture conformance checking refers to assessing the conformity of the implemented architecture to the intended architecture and can provide a strategy for detecting software architecture erosion and thereby prevent its negative consequences. When considering the current state-of-the-art of software architecture research and popular industry practices on architectural erosion, it obviously appears that such solution strategy is much needed to address the ever increasing demands for large scale complex software systems. In this paper an analysis of existing static architecture conformance checking is undertaken. Extending previously conducted research, we are in the process of developing a static architecture conformance checking tool for Java, based on GRASP ADL as a mean to overcome the challenges of software architecture erosion. Early design/implementation details of this tool are also presented."}
{"_id": "40e33480eafad3b9dc40d4e2fc0dd9943e5cbe02", "title": "A Role for Chordless Cycles in the Representation and Retrieval of Information", "text": "This paper explains how very large network structures can be reduced, or consolidated, to an assemblage of chordless cycles (cyclic structures without cross connecting links), that is called a trace, for storage and later retreival. After developing a basic mathematical framework, it illustrates the reduction process using a most general (with directed and undirected links) network. A major theme of the paper is that this approach appears to model actual biological memory, as well as offering an attractive digital solution."}
{"_id": "7554e7c56813731acfefdaf898ccb03e0d667007", "title": "Hate Lingo: A Target-based Linguistic Analysis of Hate Speech in Social Media", "text": "While social media empowers freedom of expression and individual voices, it also enables anti-social behavior, online harassment, cyberbullying, and hate speech. In this paper, we deepen our understanding of online hate speech by focusing on a largely neglected but crucial aspect of hate speech \u2013 its target: either directed towards a specific person or entity, or generalized towards a group of people sharing a common protected characteristic. We perform the first linguistic and psycholinguistic analysis of these two forms of hate speech and reveal the presence of interesting markers that distinguish these types of hate speech. Our analysis reveals that Directed hate speech, in addition to being more personal and directed, is more informal, angrier, and often explicitly attacks the target (via name calling) with fewer analytic words and more words suggesting authority and influence. Generalized hate speech, on the other hand, is dominated by religious hate, is characterized by the use of lethal words such as murder, exterminate, and kill; and quantity words such as million and many. Altogether, our work provides a data-driven analysis of the nuances of online-hate speech that enables not only a deepened understanding of hate speech and its social implications, but also its detection."}
{"_id": "06736f3c5a040483e26ebd57678f3d3e8273f873", "title": "A Content-Adaptively Sparse Reconstruction Method for Abnormal Events Detection With Low-Rank Property", "text": "This paper presents a content-adaptively sparse reconstruction method for abnormal events detection by exploiting the low-rank property of video sequences. In dictionary learning phase, the bases which describe more important characteristics of the normal behavior patterns are assigned with lower reconstruction costs. Based on the low-rank property of the bases captured by the low-rank approximation, a weighted sparse reconstruction method is proposed to measure the abnormality of testing samples. Multiscale 3-D gradient features, which encode the spatiotemporal information, are adopted as the low level descriptors. The benefits of the proposed method are threefold: first, the low-rank property is utilized to learn the underlying normal dictionaries, which can represent groups of similar normal features effectively; second, the sparsity-based algorithm can adaptively determine the number of dictionary bases, which makes it a preferable choice for representing the dynamic scene semantics; and third, based on the weighted sparse reconstruction method, the proposed method is more efficient for detecting the abnormal events. Experimental results on the public datasets have shown that the proposed method yields competitive performance comparing with the state-of-the-art methods."}
{"_id": "d73fc776f58e1f4fa78cc931481865d287035e9b", "title": "Accelerometer's position free human activity recognition using a hierarchical recognition model", "text": "Monitoring of physical activities is a growing field with potential applications such as lifecare and healthcare. Accelerometry shows promise in providing an inexpensive but effective means of long-term activity monitoring of elderly patients. However, even for the same physical activity the output of any body-worn Triaxial Accelerometer (TA) varies at different positions of a subject's body, resulting in a high within-class variance. Thus almost all existing TA-based human activity recognition systems require firm attachment of TA to a specific body part, making them impractical for long-term activity monitoring during unsupervised free living. Therefore, we present a novel hierarchical recognition model that can recognize human activities independent of TA's position along a human body. The proposed model minimizes the high within-class variance significantly and allows subjects to carry TA freely in any pocket without attaching it firmly to a body-part. We validated our model using six daily physical activities: resting (sit/stand), walking, walk-upstairs, walk-downstairs, running, and cycling. Activity data is collected from four most probable body positions of TA: chest pocket, front trousers pocket, rear trousers pocket, and inner jacket pocket. The average accuracy of about 95% illustrates the effectiveness of the proposed method."}
{"_id": "f4742f641fbb326675318fb651452bbb0d6ec2c0", "title": "An ontological framework for knowledge modeling and decision support in cyber-physical systems", "text": "Our work is concerned with the development of knowledge structures to support correct-by-design cyber-physical systems (CPS). This class of systems is defined by a tight integration of software and physical processes, the need to satisfy stringent constraints on performance and safety, and a reliance on automation for the management of system functionality and decision making. To assure correctness of functionality with respect to requirements, there is a strong need for system models to account for semantics of the domains involved. This paper introduces a new ontological-based knowledge and reasoning framework for decision support for CPS. It enables the development of determinate, provable and executable CPS models supported by sound semantics strengthening the model-driven approach to CPS design. An investigation into the structure of basic description logics (DL) has identified the needed semantic extensions to enable the web ontology language (OWL) as the ontological language for our framework. The SROIQ DL has been found to be the most appropriate logic-based knowledge formalism as it maps to OWL 2 and ensures its decidability. Thus, correct, stable, complete and terminating reasoning algorithms are guaranteed with this SROIQ-backed language. The framework takes advantage of the commonality of data and information processing in the different domains involved to overcome the barrier of heterogeneity of domains and physics in CPS. Rules-based reasoning processes are employed. The framework provides interfaces for semantic extensions and computational support, including the ability to handle quantities for which dimensions and units are semantic parameters in the physical world. Together, these capabilities enable the conversion of data to knowledge and their effective use for efficient decision making and the study of system-level properties, especially safety. We exercise these concepts in a traffic light time-based reasoning system. 2016 Elsevier Ltd. All rights reserved."}
{"_id": "be3ba35181ca5a95910884e79905808e1bb884ae", "title": "BUILDING BETTER CAUSAL THEORIES : A FUZZY SET APPROACH TO TYPOLOGIES IN ORGANIZATION RESEARCH", "text": "Typologies are an important way of organizing the complex cause-effect relationships that are key building blocks of the strategy and organization literatures. Here, I develop a novel theoretical perspective on causal core and periphery, which is based on how elements of a configuration are connected to outcomes. Using data on hightechnology firms, I empirically investigate configurations based on the Miles and Snow typology using fuzzy set qualitative comparative analysis (fsQCA). My findings show how the theoretical perspective developed here allows for a detailed analysis of causal core, periphery, and asymmetry, shifting the focus to midrange theories of causal processes."}
{"_id": "b345fd87cb98fc8b2320530609c63b1b347b14d9", "title": "Open-Domain Non-factoid Question Answering", "text": "We present an end-to-end system for open-domain non-factoid question answering. We leverage the information on the ever-growing World Wide Web, and the capabilities of modern search engines to find the relevant information. Our QA system is composed of three components: (i) query formulation module (QFM) (ii) candidate answer generation module (CAGM) and (iii) answer selection module (ASM). A thorough empirical evaluation using two datasets demonstrates that the proposed approach is highly competitive."}
{"_id": "f0eace9bfe72c2449f76461ad97c4042d2a7141b", "title": "A Novel Antenna-in-Package With LTCC Technology for W-Band Application", "text": "In this letter, a novel antenna-in-package (AiP) technology at W-band has been proposed. This technology is presented for solving the special case that the metallic package should be used to accommodate high mechanical strength. By taking advantages of the multilayer low temperature co-fired ceramic (LTCC) technology, the radiation efficiency of the antenna can be maintained. Meanwhile, high mechanical strength and shielding performance are achieved. A prototype of AiP has been designed. The prototype constitutes integrated LTCC antenna, low-loss feeder, and metallic package with a tapered horn aperture. This LTCC feeder is realized by laminated waveguide (LWG). An LWG cavity that is buried in LTCC is employed to broaden the antenna impedance bandwidth. Electromagnetic (EM) simulations and measurements of antenna performances agree well over the whole frequency range of interest. The proposed prototype achieves a -10-dB impedance bandwidth of 10 GHz from 88 to 98 GHz and a peak gain of 12.3 dBi at 89 GHz."}
{"_id": "5ffd89424a61ff1d0a88a602e4b5373d91eebd4b", "title": "Assessing the veracity of identity assertions via OSNs", "text": "Anonymity is one of the main virtues of the Internet, as it protects privacy and enables users to express opinions more freely. However, anonymity hinders the assessment of the veracity of assertions that online users make about their identity attributes, such as age or profession. We propose FaceTrust, a system that uses online social networks to provide lightweight identity credentials while preserving a user's anonymity. Face-Trust employs a \u201cgame with a purpose\u201d design to elicit the opinions of the friends of a user about the user's self-claimed identity attributes, and uses attack-resistant trust inference to assign veracity scores to identity attribute assertions. FaceTrust provides credentials, which a user can use to corroborate his assertions. We evaluate our proposal using a live Facebook deployment and simulations on a crawled social graph. The results show that our veracity scores strongly correlate with the ground truth, even when a large fraction of the social network users is dishonest and employs the Sybil attack."}
{"_id": "dbef91d5193499891a5050ec8d10d99786e37c87", "title": "Learning from examples to improve code completion systems", "text": "The suggestions made by current IDE's code completion features are based exclusively on static type system of the programming language. As a result, often proposals are made which are irrelevant for a particular working context. Also, these suggestions are ordered alphabetically rather than by their relevance in a particular context. In this paper, we present intelligent code completion systems that learn from existing code repositories. We have implemented three such systems, each using the information contained in repositories in a different way. We perform a large-scale quantitative evaluation of these systems, integrate the best performing one into Eclipse, and evaluate the latter also by a user study. Our experiments give evidence that intelligent code completion systems which learn from examples significantly outperform mainstream code completion systems in terms of the relevance of their suggestions and thus have the potential to enhance developers' productivity."}
{"_id": "e17427624a8e2770353ee567942ec41b06f94979", "title": "Patient-Specific Epileptic Seizure Onset Detection Algorithm Based on Spectral Features and IPSONN Classifier", "text": "This paper proposes a patient-specific epileptic seizure onset detection algorithm. In this algorithm, spectral features in five frequency bands (\u0394, \u03b1, \u00df, \u03b8 and \u03b3) is extracted from small frames of seizure and non-seizure EEG signals by applying Discrete Wavelet Transform (DWT) and Discrete Fourier Transform (DFT). These features can create the maximum distinction between two classes. Then a neural network (NN) classifier based on improved particle swarm optimization (IPSO) is used to determine an optimal nonlinear decision boundary. This classifier allows adjusting the parameter of the NN classifier, efficiently. Finally, the performance of algorithm is evaluated based on three measures, sensitivity, specificity and latency. The results indicate that the proposed algorithm obtain a higher sensitivity and smaller latency than other common algorithms. The proposed algorithm can be used as a seizure onset detector to initiate the just-in time therapy methods."}
{"_id": "bd966a4333268bfa67663c4fe757983d50b818b3", "title": "Traffic lights detection and state estimation using Hidden Markov Models", "text": "The detection of a traffic light on the road is important for the safety of persons who occupy a vehicle, in a normal vehicles or an autonomous land vehicle. In normal vehicle, a system that helps a driver to perceive the details of traffic signals, necessary to drive, could be critical in a delicate driving manoeuvre (i.e crossing an intersection of roads). Furthermore, traffic lights detection by an autonomous vehicle is a special case of perception, because it is important for the control that the autonomous vehicle must take. Multiples authors have used image processing as a base for achieving traffic light detection. However, the image processing presents a problem regarding conditions for capturing scenes, and therefore, the traffic light detection is affected. For this reason, this paper proposes a method that links the image processing with an estimation state routine formed by Hidden Markov Models (HMM). This method helps to determine the current state of the traffic light detected, based on the obtained states by image processing, aiming to obtain the best performance in the determination of the traffic light states. With the proposed method in this paper, we obtained 90.55% of accuracy in the detection of the traffic light state, versus a 78.54% obtained using solely image processing. The recognition of traffic lights using image processing still has a large dependence on the capture conditions of each frame from the video camera. In this context, the addition of a pre-processing stage before image processing could contribute to improve this aspect, and could provide a better results in determining the traffic light state."}
{"_id": "71c415d469d19c45da66e0f672fb9a70425ea2d1", "title": "AES-128 ECB encryption on GPUs and effects of input plaintext patterns on performance", "text": "In the recent years, the Graphics Processing Units (GPUs) have gained popularity for general purpose applications, immensely outperforming traditional optimized CPU based implementations. A class of such applications implemented on GPUs to achieve faster execution than CPUs include cryptographic techniques like the Advanced Encryption Standard (AES) which is a widely deployed symmetric encryption/decryption scheme in various electronic communication domains. With the drastic advancements in electronic communication technology, and growth in the user space, the size of data exchanged electronically has increased substantially. So, such cryptographic techniques become a bottleneck to fast transfers of information. In this work, we implement the AES-128 ECB Encryption on two of the recent and advanced GPUs (NVIDIA Quadro FX 7000 and Tesla K20c) with different memory usage schemes and varying input plaintext sizes and patterns. We obtained a speedup of up to 87x against an advanced CPU (Intel Xeon X5690) based implementation. Moreover, our experiments reveal that the different degrees of pattern repetitions in input plaintext affect the encryption performance on GPU."}
{"_id": "af8968cda2d5042d9ea253b746324348eeebfb8b", "title": "Virtual reality vs. reality in engineering education", "text": "Virtual reality has become significantly popular in recent years. It is also widely used and more frequently implemented in education, training, and research. This article discusses the potential and the current state of virtual reality, and its tools in engineering education. The focus is put on some opportunities, challenges and dead ends of implementation faced by the authors. In this point of view, virtual reality is the future of creative learning, however it has its limits in terms of practical experiments, learning by doing, which is still more effective as virtual ones."}
{"_id": "e6d84c3b17ade98da650ffe25b1822966734809a", "title": "Emerging late adolescent friendship networks and Big Five personality traits: a social network approach.", "text": "The current study focuses on the emergence of friendship networks among just-acquainted individuals, investigating the effects of Big Five personality traits on friendship selection processes. Sociometric nominations and self-ratings on personality traits were gathered from 205 late adolescents (mean age=19 years) at 5 time points during the first year of university. SIENA, a novel multilevel statistical procedure for social network analysis, was used to examine effects of Big Five traits on friendship selection. Results indicated that friendship networks between just-acquainted individuals became increasingly more cohesive within the first 3 months and then stabilized. Whereas individuals high on Extraversion tended to select more friends than those low on this trait, individuals high on Agreeableness tended to be selected more as friends. In addition, individuals tended to select friends with similar levels of Agreeableness, Extraversion, and Openness."}
{"_id": "b224196347525fee20677711436b0e77bc51abc2", "title": "Individual Tree Segmentation from LiDAR Point Clouds for Urban Forest Inventory", "text": "The objective of this study is to develop new algorithms for automated urban forest inventory at the individual tree level using LiDAR point cloud data. LiDAR data contain three-dimensional structure information that can be used to estimate tree height, base height, crown depth, and crown diameter. This allows precision urban forest inventory down to individual trees. Unlike most of the published algorithms that detect individual trees from a LiDAR-derived raster surface, we worked directly with the LiDAR point cloud data to separate individual trees and estimate tree metrics. Testing results in typical urban forests are encouraging. Future works will be oriented to synergize LiDAR data and optical imagery for urban tree characterization through data fusion techniques."}
{"_id": "1b46b708f86c983eca542f91854174c0fe5dd5d1", "title": "Control of Prosthetic Device Using Support Vector Machine Signal Classification Technique", "text": "An appropriate classification of the surface myoelectric signals (MES) allows people with disabilities to control assistive prosthetic devices. The performance of these pattern recognition methods significantly affects the accuracy and smoothness of the target movements. We designed an intelligent Support Vector Machine (SVM) classifier to incorporate potential variations in electrode placement, thus achieving high accuracy for predictive control. MES from seven locations of the forearm were recorded over six different sessions. Despite meticulous attempt to keep the recording locations consistent between trials, slight shifts may still occur affecting the classification performance. We hypothesize that the machine learning algorithm is able to compensate for these variations. The recorded data was first processed using Discrete Wavelet Transform over 9 frequency bands. As a result, a 63-dimension embedding of the wavelet coefficients were used as the training data for the SVM classifiers. For each session of recordings, a new classifier was trained using only the data sets from the previous sessions. The new classifier was then tested with the data obtained in the current session. The performance of the classifier was evaluated by calculating the sensitivity and specificity. The result indicated that after a critical number of recording sessions, the classifier accuracy starts to reach a plateau, meaning that inclusions of new training data will not significant improve the performance of the classifier. It was observed that the effect of electrode placement variations was reduced and that the classification accuracy of >89% can be obtained."}
{"_id": "977d63d1ad9f03a1e08b900ba76e2f1602f020db", "title": "A Dual Decomposition Approach to Feature Correspondence", "text": "In this paper, we present a new approach for establishing correspondences between sparse image features related by an unknown nonrigid mapping and corrupted by clutter and occlusion, such as points extracted from images of different instances of the same object category. We formulate this matching task as an energy minimization problem by defining an elaborate objective function of the appearance and the spatial arrangement of the features. Optimization of this energy is an instance of graph matching, which is in general an NP-hard problem. We describe a novel graph matching optimization technique, which we refer to as dual decomposition (DD), and demonstrate on a variety of examples that this method outperforms existing graph matching algorithms. In the majority of our examples, DD is able to find the global minimum within a minute. The ability to globally optimize the objective allows us to accurately learn the parameters of our matching model from training examples. We show on several matching tasks that our learned model yields results superior to those of state-of-the-art methods."}
{"_id": "3cc850ea1a405015b0d485fc03495ee30bdc17c5", "title": "Step negotiation with wheel traction: a strategy for a wheel-legged robot", "text": "This paper presents a quasi-static step climbing behaviour for a minimal sensing wheel-legged quadruped robot called PAW. In the quasi-static climbing maneuver, the robot benefits from wheel traction and uses its legs to reconfigure itself with respect to the step during the climb. The control methodology with the corresponding controller parameters is determined and the state machine for the maneuver is developed. With this controller, PAW is able to climb steps higher than its body clearance. Furthermore, any step height up to this maximum achievable height can be negotiated autonomously with a single set of controller parameters, without knowledge of the step height or distance to the step."}
{"_id": "dc7caf8f78a010b8f1a3802b5860c1b9754d836e", "title": "MARKOV CHAIN MONTE CARLO SIMULATION METHODS IN ECONOMETRICS", "text": "We present several Markov chain Monte Carlo simulation methods that have been widely used in recent years in econometrics and statistics. Among these is the Gibbs sampler, which has been of particular interest to econometricians. Although the paper summarizes some of the relevant theoretical literature, its emphasis is on the presentation and explanation of applications to important models that are studied in econometrics. We include a discussion of some implementation issues, the use of the methods in connection with the EM algorithm, and how the methods can be helpful in model specification questions. Many of the applications of these methods are of particular interest to Bayesians, but we also point out ways in which frequentist statisticians may find the techniques useful."}
{"_id": "2077d0f30507d51a0d3bbec4957d55e817d66a59", "title": "Fields of Experts: a framework for learning image priors", "text": "We develop a framework for learning generic, expressive image priors that capture the statistics of natural scenes and can be used for a variety of machine vision tasks. The approach extends traditional Markov random field (MRF) models by learning potential functions over extended pixel neighborhoods. Field potentials are modeled using a Products-of-Experts framework that exploits nonlinear functions of many linear filter responses. In contrast to previous MRF approaches all parameters, including the linear filters themselves, are learned from training data. We demonstrate the capabilities of this Field of Experts model with two example applications, image denoising and image inpainting, which are implemented using a simple, approximate inference scheme. While the model is trained on a generic image database and is not tuned toward a specific application, we obtain results that compete with and even outperform specialized techniques."}
{"_id": "589b8659007e1124f765a5d1bd940b2bf4d79054", "title": "Projection Pursuit Regression", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "7b1cc19dec9289c66e7ab45e80e8c42273509ab6", "title": "Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis", "text": "Neural networks are a powerful technology for classification of visual inputs arising from documents. However, there is a confusing plethora of different neural network methods that are used in the literature and in industry. This paper describes a set of concrete best practices that document analysis researchers can use to get good results with neural networks. The most important practice is getting a training set as large as possible: we expand the training set by adding a new form of distorted data. The next most important practice is that convolutional neural networks are better suited for visual document tasks than fully connected networks. We propose that a simple \u201cdo-it-yourself\u201d implementation of convolution with a flexible architecture is suitable for many visual document problems. This simple convolutional neural network does not require complex methods, such as momentum, weight decay, structuredependent learning rates, averaging layers, tangent prop, or even finely-tuning the architecture. The end result is a very simple yet general architecture which can yield state-of-the-art performance for document analysis. We illustrate our claims on the MNIST set of English digit"}
{"_id": "529b446504d00ce7bbb1f4ae131328c48c208e1c", "title": "CBIR Evaluation using Different Distances and DWT", "text": "In this paper, evaluation is made on the result of CBIR system based on haar wavelet transform with different distances for similarity matching to calculate the deviation from query image. So, different distances used are Chessboard distance, Cityblock distance and Euclidean distance. In this paper discrete wavelet transform is used to decompose the image. The image is decomposed till sixth level and last level approximate component is saved as feature vector. Comparison is made between different distances to see the best suited distance for CBIR. The wavelet used is "Haar". Haar has compact support and it is the simplest orthogonal wavelet. It is very fast also."}
{"_id": "46e78e418c76db11fff5563ec1905e8b616252d3", "title": "Blockchained Post-Quantum Signatures", "text": "Inspired by the blockchain architecture and existing Merkle tree based signature schemes, we propose BPQS, an extensible post-quantum (PQ) resistant digital signature scheme best suited to blockchain and distributed ledger technologies (DLTs). One of the unique characteristics of the protocol is that it can take advantage of application-specific chain/graph structures in order to decrease key generation, signing and verification costs as well as signature size. Compared to recent improvements in the field, BPQS outperforms existing hash-based algorithms when a key is reused for reasonable numbers of signatures, while it supports a fallback mechanism to allow for a practically unlimited number of signatures if required. We provide an open source implementation of the scheme and benchmark it."}
{"_id": "24d2b140789410bb454f9afe164a2beec97e6048", "title": "DanQ: a hybrid convolutional and recurrent deep neural network for quantifying the function of DNA sequences", "text": "Modeling the properties and functions of DNA sequences is an important, but challenging task in the broad field of genomics. This task is particularly difficult for non-coding DNA, the vast majority of which is still poorly understood in terms of function. A powerful predictive model for the function of non-coding DNA can have enormous benefit for both basic science and translational research because over 98% of the human genome is non-coding and 93% of disease-associated variants lie in these regions. To address this need, we propose DanQ, a novel hybrid convolutional and bi-directional long short-term memory recurrent neural network framework for predicting non-coding function de novo from sequence. In the DanQ model, the convolution layer captures regulatory motifs, while the recurrent layer captures long-term dependencies between the motifs in order to learn a regulatory 'grammar' to improve predictions. DanQ improves considerably upon other models across several metrics. For some regulatory markers, DanQ can achieve over a 50% relative improvement in the area under the precision-recall curve metric compared to related models. We have made the source code available at the github repository http://github.com/uci-cbcl/DanQ."}
{"_id": "0824049706fd13e6f7757af5e50ee6ca4e38506b", "title": "A socially assistive robot exercise coach for the elderly", "text": "We present a socially assistive robot (SAR) system designed to engage elderly users in physical exercise. We discuss the system approach, design methodology, and implementation details, which incorporate insights from psychology research on intrinsic motivation, and we present five clear design principles for SAR-based therapeutic interventions. We then describe the system evaluation, consisting of a multi-session user study with older adults (n = 33), to evaluate the effectiveness of our SAR exercise system and to investigate the role of embodiment by comparing user evaluations of similar physically and virtually embodied coaches. The results validate the system approach and effectiveness at motivating physical exercise in older adults according to a variety of user performance and outcomes measures. The results also show a clear preference by older adults for the physically embodied robot coach over the virtual coach in terms of enjoyableness, helpfulness, and social attraction, among other factors."}
{"_id": "1564f14967351a4e40d5f3c752d7754599b98833", "title": "Polyphase filters - A model for teaching the art of discovery in DSP", "text": "By its very nature DSP is a mathematically heavy topic and to fully understand it students need to understand the mathematical developments underlying DSP topics. However, relying solely on mathematical developments often clouds the true nature of the foundation of a result. It is likely that students who master the mathematics may still not truly grasp the key ideas of a topic. Furthermore, teaching DSP topics by merely \u201cgoing through the mathematics\u201d deprives students of learning the art of discovery that will make them good researchers. This paper uses the topic of polyphase decimation and interpolation to illustrate how it is possible to maintain rigor yet teach using less mathematical approaches that show students how researchers think when developing new ideas."}
{"_id": "44c47c8c86c70aabe8040dce89b8de042f868f19", "title": "Explanation-Based Generalization: A Unifying View", "text": "The problem of formulating general concepts from specific training examples has long been a major focus of machine learning research. While most previous research has focused on empirical methods for generalizing from a large number of training examples using no domain-specific knowledge, in the past few years new methods have been developed for applying domain-specific knowledge to formulate valid generalizations from single training examples. The characteristic common to these methods is that their ability to generalize from a single example follows from their ability to explain why the training example is a member of the concept being learned. This paper proposes a general, domain-independent mechanism, called EBG, that unifies previous approaches to explanation-based generalization. The EBG method is illustrated in the context of several example problems, and used to contrast several existing systems for explanation-based generalization. The perspective on explanation-based generalization afforded by this general method is also used to identify open research problems in this area."}
{"_id": "36528b15f3917916bd87d941849d4928f25ce7c6", "title": "Broadband Micro-Coaxial Wilkinson Dividers", "text": "This paper presents several micro-coaxial broadband 2 : 1 Wilkinson power dividers operating from 2 to 22 GHz, a 11 : 1 bandwidth. Circuits are fabricated on silicon with PolyStrata technology, and are implemented with 650 mum times 400 mum air-supported micro-coaxial lines. The measured isolation between the output ports is greater than 11 dB and the return loss at each port is more than 13 dB over the entire bandwidth. The footprints of these dividers can be miniaturized due to the high isolation between adjacent coaxial lines and their tight bend radius. For higher power handling, larger lines with a cross section of 1050 mum times 850 mum are also demonstrated. The effect of mismatch at the output ports is investigated in order to find the power loss in the resistors."}
{"_id": "e000420abfc0624adaaa828d51e011fdac980919", "title": "Distributed Graph Routing for WirelessHART Networks", "text": "Communication reliability in a Wireless Sensor and Actuator Network (WSAN) has a high impact on stability of industrial process monitoring and control. To make reliable and real-time communication in highly unreliable environments, industrial WSANs such as those based on WirelessHART adopt graph routing. In graph routing, each packet is scheduled on multiple time slots using multiple channels, on multiple links along multiple paths on a routing graph between a source and a destination. While high redundancy is crucial to reliable communication, determining and maintaining graph routing is challenging in terms of execution time and energy consumption for resource constrained WSAN. Existing graph routing algorithms use centralized approach, do not scale well in terms of these metrics, and are less suitable under network dynamics. To address these limitations, we propose the first distributed graph routing protocol for WirelessHART networks. Our distributed protocol is based on the Bellman-Ford Algorithm, and generates all routing graphs together using a single algorithm. We prove that our proposed graph routing can include a path between a source and a destination with cost (in terms of hop-count) at most 3 times the optimal cost. We implemented our proposed routing algorithm on TinyOS and evaluated through experiments on TelosB motes and simulations using TOSSIM. The results show that it is scalable and consumes at least 40% less energy and needs at least 65% less time at the cost of 1kB of extra memory compared to the state-of-the-art centralized approach for generating routing graphs."}
{"_id": "784c937fb6d8855c112dd20c1e747fb09ae08084", "title": "Traffic Surveillance using Multi-Camera Detection and Multi-Target Tracking", "text": "Non-intrusive video-detection for traffic flow observation and surveillance is the primary alternative to conventional inductive loop detectors. Video Image Detection Systems (VIDS) can derive traffic parameters by means of image processing and pattern recognition methods. Existing VIDS emulate the inductive loops. We propose a trajectory based recognition algorithm to expand the common approach and to obtain new types of information (e.g. queue length or erratic movements). Different views of the same area by more than one camera sensor is necessary, because of the typical limitations of single camera systems, resulting from the occlusion effect of other cars, trees and traffic signs. A distributed cooperative multi-camera system enables a significant enlargement of the observation area. The trajectories are derived from multi-target tracking. The fusion of object data from different cameras is done using a tracking method. This approach opens up opportunities to identify and specify traffic objects, their location, speed and other characteristic object information. The system provides new derived and consolidated information about traffic participants. Thus, this approach is also beneficial for a description of individual traffic participants."}
{"_id": "3272595fc86f13c7cce0547f2b464c2befe5a69f", "title": "Designing Password Policies for Strength and Usability", "text": "Password-composition policies are the result of service providers becoming increasingly concerned about the security of online accounts. These policies restrict the space of user-created passwords to preclude easily guessed passwords and thus make passwords more difficult for attackers to guess. However, many users struggle to create and recall their passwords under strict password-composition policies, for example, ones that require passwords to have at least eight characters with multiple character classes and a dictionary check. Recent research showed that a promising alternative was to focus policy requirements on password length instead of on complexity. In this work, we examine 15 password policies, many focusing on length requirements. In doing so, we contribute the first thorough examination of policies requiring longer passwords. We conducted two online studies with over 20,000 participants, and collected both usability and password-strength data. Our findings indicate that password strength and password usability are not necessarily inversely correlated: policies that lead to stronger passwords do not always reduce usability. We identify policies that are both more usable and more secure than commonly used policies that emphasize complexity rather than length requirements. We also provide practical recommendations for service providers who want their users to have strong yet usable passwords."}
{"_id": "5c2900fadc2485b42a52c0a1dd280e41193301f6", "title": "Mimir: a Market-based Real-time Question and Answer Service", "text": "Community-based question and answer (Q&A) systems facilitate information exchange and enable the creation of reusable knowledge repositories. While these systems are growing in usage and are changing how people find and share information, current designs are inefficient, wasting the time and attention of their users. Furthermore, existing systems do not support signaling and screening of joking and non-serious questions. Coupling Q&A services with instant and text messaging for faster questions and answers may exacerbate these issues, causing Q&A services to incur high interruption costs on their users.\n In this paper we present the design and evaluation of a market-based real-time Q&A system. We compared its use to a similar Q&A system without a market. We found that while markets can reduce wasted resources by reducing the number of less important questions and low quality answers, it may also reduce the socially conducive questions and usages that are vital to sustaining a Q&A community."}
{"_id": "9d6481079654a141381dc2752257fe1b5b112f6f", "title": "Tuning of PID controller for an automatic regulator voltage system using chaotic optimization approach", "text": "Despite the popularity, the tuning aspect of proportional\u2013integral-derivative (PID) controllers is a challenge for researchers and plant operators. Various controllers tuning methodologies have been proposed in the literature such as auto-tuning, self-tuning, pattern recognition, artificial intelligence, and optimization methods. Chaotic optimization algorithms as an emergent method of global optimization have attracted much attention in engineering applications. Chaotic optimization algorithms, which have the features of easy implementation, short execution time and robust mechanisms of escaping from local optimum, is a promising tool for engineering applications. In this paper, a tuning method for determining the parameters of PID control for an automatic regulator voltage (AVR) system using a chaotic optimization approach based on Lozi map is proposed. Since chaotic mapping enjoys certainty, ergodicity and the stochastic property, the proposed chaotic optimization introduces chaos mapping using Lozi map chaotic sequences which increases its convergence rate and resulting precision. Simulation results are promising and show the effectiveness of the proposed approach. Numerical simulations based on proposed PID control of an AVR system for nominal system parameters and step reference voltage input demonstrate the good performance of chaotic optimization. 2007 Elsevier Ltd. All rights reserved."}
{"_id": "35ccecd0b3a70635f952d0938cb095b991d6c1d7", "title": "What Constitutes Normal Sleeping Patterns ? Childhood to Adolescence 2 . 1 Developmental Patterns in Sleep Duration : Birth to Adolescence", "text": "Healthy, adequate sleep is integral to the process of growth and development during adolescence. At puberty, maturational changes in th e underlying homeostatic and circadian sleep regulatory mechanisms influence the sleep-wak e p tterns of adolescents. These changes interact with psychosocial factors, such as increas ing academic demands, hours spent in paid employment, electronic media use, and social opport unities, and constrict the time available for adolescents to sleep. Survey studies reveal tha t adolescents\u2019 habitual sleep schedules are associated with cumulative sleep loss. As a consequ ence, there is growing concern about the effects of insufficient sleep on adolescents\u2019 wakin g function. This review identifies and examines the characteristics of sleep and sleep los s in adolescents. It highlights the need for more research into the effects of chronic partial s leep deprivation in adolescents, and the process of extending sleep on weekends to recover t he effects of sleep debt. An understanding of chronic sleep deprivation and recovery sleep in adolescents will facilitate the development of evidence-based sleep guidelines and recommendati o s for recovery sleep opportunities when habitual sleep times are insufficient."}
{"_id": "8ff54aa8045b1e30c348cf2ca42259c946cd7a9e", "title": "Search-based Neural Structured Learning for Sequential Question Answering", "text": "Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans. In an effort to explore a conversational QA setting, we present a more realistic task: answering sequences of simple but inter-related questions. We collect a dataset of 6,066 question sequences that inquire about semistructured tables from Wikipedia, with 17,553 question-answer pairs in total. To solve this sequential question answering task, we propose a novel dynamic neural semantic parsing framework trained using a weakly supervised reward-guided search. Our model effectively leverages the sequential context to outperform state-of-the-art QA systems that are designed to answer highly complex questions."}
{"_id": "da471fce22ff517bc83607c0b98471c7bd32c54f", "title": "Crowdfunding Success Factors: The Characteristics of Successfully Funded Projects on Crowdfunding Platforms", "text": "Crowdfunding platforms offer promising opportunities for project founders to publish their project ideas and to collect money in order to be able to realize them. Consequently, the question of what influences the successful funding of projects, i.e., reaching the target amount of money, is very important. Building upon media richness theory and the concept of reciprocity, we extend previous research in the field of crowdfunding success factors. We provide a comprehensive view on factors influencing crowdfunding success by both focusing on project-specific as well as founder-specific aspects. Analyzing a sample of projects of the crowdfunding platform kickstarter.com, we find that the project description, related images and videos as well as the question of whether the founder has previously backed other projects influence funding success. Interestingly, the question of whether the founder has previously created other projects has no significant influence. Our results are of high interest for the stakeholders on crowdfunding platforms."}
{"_id": "1c6aa0f196e2a68d3c93fd10b85f084811a87b02", "title": "Predicting Song Popularity", "text": "Future Work Dataset and Features Music has been an integral part of our culture all throughout human history. In 2012 alone, the U.S. music industry generated $15 billion. Having a fundamental understanding of what makes a song popular has major implications to businesses that thrive on popular music, namely radio stations, record labels, and digital and physical music market places. Many private companies in these industries have solutions for this problem, but details have been kept private for competitive reasons. For our project, we will predict song popularity based on an song\u2019s audio features and metadata. Methods & Results"}
{"_id": "65769b53e71ea7c52b3a07ad32bd4fdade6a0173", "title": "Multi-task Deep Reinforcement Learning with PopArt", "text": "The reinforcement learning community has made great strides in designing algorithms capable of exceeding human performance on specific tasks. These algorithms are mostly trained one task at the time, each new task requiring to train a brand new agent instance. This means the learning algorithm is general, but each solution is not; each agent can only solve the one task it was trained on. In this work, we study the problem of learning to master not one but multiple sequentialdecision tasks at once. A general issue in multi-task learning is that a balance must be found between the needs of multiple tasks competing for the limited resources of a single learning system. Many learning algorithms can get distracted by certain tasks in the set of tasks to solve. Such tasks appear more salient to the learning process, for instance because of the density or magnitude of the in-task rewards. This causes the algorithm to focus on those salient tasks at the expense of generality. We propose to automatically adapt the contribution of each task to the agent\u2019s updates, so that all tasks have a similar impact on the learning dynamics. This resulted in state of the art performance on learning to play all games in a set of 57 diverse Atari games. Excitingly, our method learned a single trained policy with a single set of weights that exceeds median human performance. To our knowledge, this was the first time a single agent surpassed human-level performance on this multi-task domain. The same approach also demonstrated state of the art performance on a set of 30 tasks in the 3D reinforcement learning platform DeepMind Lab."}
{"_id": "0a2cf52659f4bc450e8c6b15eead1fa9081affbc", "title": "Many-objective optimization algorithm applied to history matching", "text": "Reservoir model calibration, called history matching in the petroleum industry, is an important task to make more accurate predictions for better reservoir management. Providing an ensemble of good matched reservoir models from history matching is essential to reproduce the observed production data from a field and to forecast reservoir performance. The nature of history matching is multi-objective because there are multiple match criteria or misfit from different production data, wells and regions in the field. In many cases, these criteria are conflicting and can be handled by the multi-objective approach. Moreover, multi-objective provides faster misfit convergence and more robust towards stochastic nature of optimization algorithms. However, reservoir history matching may feature far too many objectives that can be efficiently handled by conventional multi-objective algorithms, such as multi-objective particle swarm optimizer (MOPSO) and non-dominated sorting genetic algorithm II (NSGA II). Under an increasing number of objectives, the performance of multi-objective history matching by these algorithms deteriorates (lower match quality and slower misfit convergence). In this work, we introduce a recently proposed algorithm for many-objective optimization problem, known as reference vector-guided evolutionary algorithm (RVEA), to history matching. We apply the algorithm to history matching a synthetic reservoir model and a real field case study with more than three objectives. The paper demonstrates the superiority of the proposed RVEA to the state of the art multi-objective history matching algorithms, namely MOPSO and NSGA II."}
{"_id": "c1353d7db7098efd2622f01b59e8dba55e1f1327", "title": "An improved LLC resonant inverter for induction heating with asymmetrical control", "text": "This paper proposes a modified LLC resonant load configuration of a full-bridge inverter for induction heating applications by using asymmetrical voltage cancellation control. The proposed control method is implemented in a full-bridge inverter topology. With the use of a phase-locked loop control, the operating frequency is automatically adjusted to maintain a small constant lagging phase angle under load parameter variation. The output power is controlled using the asymmetrical voltage cancellation technique. The LLC resonant tank is designed with a matching transformer in between the series inductor and paralleled LC resonant tank for short circuit protection capability of the matching transformer and the induction coil. The validity of the proposed method is verified through computer simulation and hardware experiment at the operating frequency of 108.7 to 110.6 kHz."}
{"_id": "214658334c581f0d18b9a871928e91b6e4f83be7", "title": "An active battery cell balancing topology without using external energy storage elements", "text": "Cell balancing circuits are important to extent life-cycle of batteries and to extract maximum power from the batteries. A lot of power electronics topology has been tried for cell balancing in the battery packages. Active cell balancing topologies transfer energy from the cells showing higher performance to the cells showing lower performance to balance voltages across the cells of the battery using energy storage elements like combination of inductor-capacitor or transformer-capacitor or switched capacitor or switched inductor. In this study an active balancing topology without using any energy storage element is proposed. The idea is similar to the switched capacitor topology in which a capacitor or capacitor banks is switched across the cells of battery to balance the voltages. Since a basic battery cell model includes capacitance because of capacitive effect of the cell, this capacitive effect can be utilized in cell balancing. Hence the equalizer capacitors in switched capacitor topology can be eliminated and the cells of battery can be switched with each other. This allows faster energy transfer and hence results in quick equalization. The proposed topology removes the need of extra energy storage elements like capacitors which frequently fails in power electronic circuits, reduces the losses inserted by extra energy storage elements and cost and volume of the circuits and simplifies control algorithm. The proposed balancing circuit can be implemented according to the application requirement. The proposed topology is simulated in MATLAB/Simulink environment and showed better results in terms of balancing speed in comparison to switched capacitor topologies."}
{"_id": "5b9aec6bddd83f8e904aac8fae344ff306d3d29b", "title": "Energy-Efficient Resource Allocation in Cellular Networks With Shared Full-Duplex Relaying", "text": "Recent advances in self-interference cancelation techniques enable full-duplex relaying (FDR) systems, which transmit and receive simultaneously in the same frequency band with high spectrum efficiency. Unlike most existing works, we study the problem of energy-efficient resource allocation in FDR networks. We consider a shared FDR deployment scenario, where an FDR relay is deployed at the intersection of several adjacent cell sectors. First, a simple but practical transmission strategy is proposed to deal with the involved interference, i.e., multiaccess interference, multiuser interference, and self-interference. Then, the problem of joint power and subcarrier allocation is formulated to maximize the network-level energy efficiency while taking the residual self-interference into account. Since the formulated problem is a mixed combinatorial and nonconvex optimization problem with high computation complexity, we use Dinkelbach and discrete stochastic optimization methods to solve the energy-efficient resource-allocation problem efficiently. Simulation results are presented to show the effectiveness of the proposed scheme."}
{"_id": "e4c564501bf73d3ee92b28ea96bb7b4ff9ec80ed", "title": "A novel wideband waveguide-to-microstrip transition with waveguide stepped impedance transformer", "text": "A novel wideband waveguide-to-microstrip transition is presented in this paper. A printed fan-shaped probe is used, while a waveguide stepped impedance transformer is utilized to broaden the bandwidth further. With an optimized design, a relative bandwidth of 27.8% for -20dB return loss with a center frequency of 15GHz is achieved. The simulated insertion loss in this bandwidth is less than 0.29dB. A back-to-back transition is fabricated and measured. The measured result is similar to that of simulation."}
{"_id": "85b012e2ce65be26e3a807b44029f9f03a483fbc", "title": "Graph based E-Government web service composition", "text": "Nowadays, e-government has emerged as a government policy to improve the quality and efficiency of public administrations. By exploiting the potential of new information and communication technologies, government agencies are providing a wide spectrum of online services. These services are composed of several web services that comply with well defined processes. One of the big challenges is the need to optimize the composition of the elementary web services. In this paper, we present a solution for optimizing the computation effort in web service composition. Our method is based on Graph Theory. We model the semantic relationship between the involved web services through a directed graph. Then, we compute all shortest paths using for the first time, an extended version of the FloydWarshall algorithm."}
{"_id": "8fd198887c4d403707f1d030d0c4e414dcbe3b26", "title": "Classification of Mixed-Type Defect Patterns in Wafer Bin Maps Using Convolutional Neural Networks", "text": "In semiconductor manufacturing, a wafer bin map (WBM) represents the results of wafer testing for dies using a binary pass or fail value. For WBMs, defective dies are often clustered into groups of local systematic defects. Determining their specific patterns is important, because different patterns are related to different root causes of failure. Recently, because wafer sizes have increased and the process technology has become more complicated, the probability of observing mixed-type defect patterns, i.e., two or more defect patterns in a single wafer, has increased. In this paper, we propose the use of convolutional neural networks (CNNs) to classify mixed-type defect patterns in WBMs in the framework of an individual classification model for each defect pattern. Through simulated and real data examples, we show that the CNN is robust to random noise and performs effectively, even if there are many random defects in WBMs."}
{"_id": "0c04909ed933469246defcf9aca2b71ae8e3f623", "title": "Information Retrieval", "text": "The major change in the second edition of this book is the addition of a new chapter on probabilistic retrieval. This chapter has been included because I think this is one of the most interesting and active areas of research in information retrieval. There are still many problems to be solved so I hope that this particular chapter will be of some help to those who want to advance the state of knowledge in this area. All the other chapters have been updated by including some of the more recent work on the topics covered. In preparing this new edition I have benefited from discussions with Bruce Croft, The material of this book is aimed at advanced undergraduate information (or computer) science students, postgraduate library science students, and research workers in the field of IR. Some of the chapters, particularly Chapter 6 * , make simple use of a little advanced mathematics. However, the necessary mathematical tools can be easily mastered from numerous mathematical texts that now exist and, in any case, references have been given where the mathematics occur. I had to face the problem of balancing clarity of exposition with density of references. I was tempted to give large numbers of references but was afraid they would have destroyed the continuity of the text. I have tried to steer a middle course and not compete with the Annual Review of Information Science and Technology. Normally one is encouraged to cite only works that have been published in some readily accessible form, such as a book or periodical. Unfortunately, much of the interesting work in IR is contained in technical reports and Ph.D. theses. For example, most the work done on the SMART system at Cornell is available only in reports. Luckily many of these are now available through the National Technical Information Service (U.S.) and University Microfilms (U.K.). I have not avoided using these sources although if the same material is accessible more readily in some other form I have given it preference. I should like to acknowledge my considerable debt to many people and institutions that have helped me. Let me say first that they are responsible for many of the ideas in this book but that only I wish to be held responsible. My greatest debt is to Karen Sparck Jones who taught me to research information retrieval as an experimental science. Nick Jardine and Robin \u2026"}
{"_id": "3cfbb77e5a0e24772cfdb2eb3d4f35dead54b118", "title": "Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors", "text": "Context-predicting models (more commonly known as embeddings or neural language models) are the new kids on the distributional semantics block. Despite the buzz surrounding these models, the literature is still lacking a systematic comparison of the predictive models with classic, count-vector-based distributional semantic approaches. In this paper, we perform such an extensive evaluation, on a wide range of lexical semantics tasks and across many parameter settings. The results, to our own surprise, show that the buzz is fully justified, as the context-predicting models obtain a thorough and resounding victory against their count-based counterparts."}
{"_id": "9ec20b90593695e0f5a343dade71eace4a5145de", "title": "Deep Learning for Natural Language Processing", "text": "1Student,Dept. of Computer Engineering, VESIT, Maharashtra, India ---------------------------------------------------------------------------***-------------------------------------------------------------------Abstract Deep Learning has come into existence as a new area for research in Machine Learning. It aims to act like a human brain, having the ability to learn and process from complex data and also tries solving intricate tasks as well. Due to this capability, its been used in various fields like text, sound, images etc. Natural language process has started to being impacted by the deep learning techniques. This research paper highlights Deep Learning\u2019s recent developments and applications in Natural Language Processing."}
{"_id": "a1f33473ea3b8e98fee37e32ecbecabc379e07a0", "title": "Image Segmentation by Cascaded Region Agglomeration", "text": "We propose a hierarchical segmentation algorithm that starts with a very fine over segmentation and gradually merges regions using a cascade of boundary classifiers. This approach allows the weights of region and boundary features to adapt to the segmentation scale at which they are applied. The stages of the cascade are trained sequentially, with asymetric loss to maximize boundary recall. On six segmentation data sets, our algorithm achieves best performance under most region-quality measures, and does it with fewer segments than the prior work. Our algorithm is also highly competitive in a dense over segmentation (super pixel) regime under boundary-based measures."}
{"_id": "b208fb408ca3d6a88051a83b3cfa2e757b61d42b", "title": "Continuous and Cuffless Blood Pressure Monitoring Based on ECG and SpO2 Signals ByUsing Microsoft Visual C Sharp", "text": "BACKGROUND\nOne of the main problems especially in operating room and monitoring devices is measurement of Blood Pressure (BP) by sphygmomanometer cuff. Objective :In this study we designed a new method to measure BP changes continuously for detecting information between cuff inflation times by using vital signals in monitoring devices. This will be achieved by extraction of the time difference between each cardiac cycle and a relative pulse wave.\n\n\nMETHODS\nFinger pulse and ECG signals in lead I were recorded by a monitoring device. The output of monitoring device wasinserted in a computer by serial network communication. A software interface (Microsoft Visual C#.NET ) was used to display and process the signals in the computer. Time difference between each cardiac cycle and pulse signal was calculated throughout R wave detection in ECG and peak of pulse signal by the software. The relation between time difference in two waves and BP was determined then the coefficients of equation were obtained in different physical situations. The results of estimating BP were compared with the results of sphygmomanometer method and the error rate was calculated.\n\n\nRESULTS\nIn this study, 25 subjects participated among them 15 were male and 10 were female. The results showed that BP was linearly related to time difference. Average of coefficient correlation was 0.9\u00b10.03 for systolic and 0.82\u00b10.04 for diastolic blood pressure. The highest error percentage was calculated 8% for male and 11% for female group. Significant difference was observed between the different physical situation and arm movement changes. The relationship between time difference and age was estimated in a linear relationship with a correlation coefficient of 0.76.\n\n\nCONCLUSION\nBy determining linear relation values with high accuracy, BP can be measured with insignificant error. Therefore it can be suggested as a new method to measure the blood pressure continuously."}
{"_id": "c0141c34064c894ca0a9fa476bad80ef1ab25257", "title": "Performance assessment and uncertainty quantification of predictive models for smart manufacturing systems", "text": "We review in this paper several methods from Statistical Learning Theory (SLT) for the performance assessment and uncertainty quantification of predictive models. Computational issues are addressed so to allow the scaling to large datasets and the application of SLT to Big Data analytics. The effectiveness of the application of SLT to manufacturing systems is exemplified by targeting the derivation of a predictive model for quality forecasting of products on an assembly line."}
{"_id": "0ef311acf523d4d0e2cc5f747a6508af2c89c5f7", "title": "LDA-based document models for ad-hoc retrieval", "text": "Search algorithms incorporating some form of topic model have a long history in information retrieval. For example, cluster-based retrieval has been studied since the 60s and has recently produced good results in the language model framework. An approach to building topic models based on a formal generative model of documents, Latent Dirichlet Allocation (LDA), is heavily cited in the machine learning literature, but its feasibility and effectiveness in information retrieval is mostly unknown. In this paper, we study how to efficiently use LDA to improve ad-hoc retrieval. We propose an LDA-based document model within the language modeling framework, and evaluate it on several TREC collections. Gibbs sampling is employed to conduct approximate inference in LDA and the computational complexity is analyzed. We show that improvements over retrieval using cluster-based models can be obtained with reasonable efficiency."}
{"_id": "c761ef1ed330186d09063550a13e7cded8282217", "title": "Number of articles Economy Health Care Taxes Federal Budget Jobs State Budget Education Candidate Biography Elections Immigration Foreign Policy Crime Energy History Job Accomplishments Legal Issues Environment Terrorism Military Guns", "text": "We report on attempts to use currently available automated text analysis tools to identify possible biased treatment by Politifact of Democratic vs. Republican speakers, through language. We begin by noting that there is no established method for detecting such differences, and indeed that \u201cbias\u201d is complicated and difficult to operationalize into a measurable quantity. This report includes several analyses that are representative of the tools available from natural language processing at this writing. In each case, we offer (i) what we would expect to see in the results if the method picked up on differential treatment between Democrats vs. Republicans, (ii) what we actually observe, and (iii) potential problems with the analysis; in some cases we also suggest (iv) future analyses that might be more revelatory."}
{"_id": "8ce9cb9f2e5c7a7e146b176eb82dbcd34b4c764c", "title": "PCSIM: A Parallel Simulation Environment for Neural Circuits Fully Integrated with Python", "text": "The Parallel Circuit SIMulator (PCSIM) is a software package for simulation of neural circuits. It is primarily designed for distributed simulation of large scale networks of spiking point neurons. Although its computational core is written in C++, PCSIM's primary interface is implemented in the Python programming language, which is a powerful programming environment and allows the user to easily integrate the neural circuit simulator with data analysis and visualization tools to manage the full neural modeling life cycle. The main focus of this paper is to describe PCSIM's full integration into Python and the benefits thereof. In particular we will investigate how the automatically generated bidirectional interface and PCSIM's object-oriented modular framework enable the user to adopt a hybrid modeling approach: using and extending PCSIM's functionality either employing pure Python or C++ and thus combining the advantages of both worlds. Furthermore, we describe several supplementary PCSIM packages written in pure Python and tailored towards setting up and analyzing neural simulations."}
{"_id": "eb2e3911b852fc00ee1d9f089e3feb461aacde62", "title": "Class Activation Map Generation by Representative Class Selection and Multi-Layer Feature Fusion", "text": "Existing method generates class activation map (CAM) by a set of fixed classes (i.e., using all the classes), while the discriminative cues between class pairs are not considered. Note that activation maps by considering different class pair are complementary, and therefore can provide more discriminative cues to overcome the shortcoming of the existing CAM generation that the highlighted regions are usually local part regions rather than global object regions due to the lack of object cues. In this paper, we generate CAM by using a few of representative classes, with aim of extracting more discriminative cues by considering each class pair to obtain CAM more globally. The advantages are twofold. Firstly, the representative classes are able to obtain activation regions that are complementary to each other, and therefore leads to generating activation map more accurately. Secondly, we only need to consider a small number of representative classes, making the CAM generation suitable for small networks. We propose a clustering based method to select the representative classes. Multiple binary classification models rather than a multiple class classification model are used to generate the CAM. Moreover, we propose a multi-layer fusion based CAM generation method to simultaneously combine high-level semantic features and low-level detail features. We validate the proposed method on the PASCAL VOC and COCO database in terms of segmentation groundtruth. Various networks such as classical network (Resnet-50, Resent-101 and Resnet-152) and small network (VGG-19, Resnet-18 and Mobilenet) are considered. Experimental results show that the proposed method improves the CAM generation obviously."}
{"_id": "31706b0fe43c859dbf9bf698bf6ca2daef969872", "title": "Processing of Reference and the Structure of Language: An Analysis of Complex Noun Phrases", "text": "Five experiments used self-paced reading time to examine the ways in which complex noun phrases (both conjoined NPs and possessive NPs) in\u008fuence the interpretation of referentially dependent expressions. The experimental conditions contrasted the reading of repeated names and pronouns referring to components of a complex NP and to the entire complex NP. The results indicate that the entity introduced by a major constituent of a sentence is more accessible as a referent than the entities introduced by component noun phrases. This pattern of accessibility departs from the advantage of \u008erst mention that has been demonstrated using probe-word recognition tasks. It supports the idea that reduced expressions are interpreted as referring directly to prominent entities in a mental model whereas reference by names to entities that are already represented in a mental model is mediated by additional processes. The same interpretive processes appear to operate on coreference within and between sentences."}
{"_id": "dc148913b7271b65d221d2c41f77a9e41750c867", "title": "Predicting the Price of Used Cars using Machine Learning Techniques", "text": "In this paper, we investigate the application of supervised machine learning techniques to predict the price of used cars in Mauritius. The predictions are based on historical data collected from daily newspapers. Different techniques like multiple linear regression analysis, k-nearest neighbours, na\u00efve bayes and decision trees have been used to make the predictions. The predictions are then evaluated and compared in order to find those which provide the best performances. A seemingly easy problem turned out to be indeed very difficult to resolve with high accuracy. All the four methods provided comparable performance. In the future, we intend to use more sophisticated algorithms to make the predictions. Keywords-component; formatting; style; styling; insert (key words)"}
{"_id": "2f5ae832284c2f9fe2bba449b961745aef817bbf", "title": "Beyond Doctors: Future Health Prediction from Multimedia and Multimodal Observations", "text": "Although chronic diseases cannot be cured, they can be effectively controlled as long as we understand their progressions based on the current observational health records, which is often in the form of multimedia data. A large and growing body of literature has investigated the disease progression problem. However, far too little attention to date has been paid to jointly consider the following three observations of the chronic disease progression: 1) the health statuses at different time points are chronologically similar; 2) the future health statuses of each patient can be comprehensively revealed from the current multimedia and multimodal observations, such as visual scans, digital measurements and textual medical histories; and 3) the discriminative capabilities of different modalities vary significantly in accordance to specific diseases. In the light of these, we propose an adaptive multimodal multi-task learning model to co-regularize the modality agreement, temporal progression and discriminative capabilities of different modalities. We theoretically show that our proposed model is a linear system. Before training our model, we address the data missing problem via the matrix factorization approach. Extensive evaluations on a real-world Alzheimer's disease dataset well verify our proposed model. It should be noted that our model is also applicable to other chronic diseases."}
{"_id": "2b25897316a8f42b88a9d66d966f4de0ed7f7a05", "title": "Rapid diagnostic tests for malaria parasites.", "text": "Malaria presents a diagnostic challenge to laboratories in most countries. Endemic malaria, population movements, and travelers all contribute to presenting the laboratory with diagnostic problems for which it may have little expertise available. Drug resistance and genetic variation has altered many accepted morphological appearances of malaria species, and new technology has given an opportunity to review available procedures. Concurrently the World Health Organization has opened a dialogue with scientists, clinicians, and manufacturers on the realistic possibilities for developing accurate, sensitive, and cost-effective rapid diagnostic tests for malaria, capable of detecting 100 parasites/microl from all species and with a semiquantitative measurement for monitoring successful drug treatment. New technology has to be compared with an accepted \"gold standard\" that makes comparisons of sensitivity and specificity between different methods. The majority of malaria is found in countries where cost-effectiveness is an important factor and ease of performance and training is a major consideration. Most new technology for malaria diagnosis incorporates immunochromatographic capture procedures, with conjugated monoclonal antibodies providing the indicator of infection. Preferred targeted antigens are those which are abundant in all asexual and sexual stages of the parasite and are currently centered on detection of HRP-2 from Plasmodium falciparum and parasite-specific lactate dehydrogenase or Plasmodium aldolase from the parasite glycolytic pathway found in all species. Clinical studies allow effective comparisons between different formats, and the reality of nonmicroscopic diagnoses of malaria is considered."}
{"_id": "36e22ee5b8694af3703856eda13d8ee4ea284171", "title": "Automatic Keyword Extraction for Text Summarization in Multi-document e-Newspapers Articles", "text": "Summarization is the way towards lessening the content of a text file to make it brief that holds all the critical purposes in the content of original text file. In the process of extractive summarization, one extracts only those sentences which are the most relevant sentences in the text document and that conveys the moral of the content. The extractive summarization techniques usually revolve around the idea of discovering most relevant and frequent keywords and then extract the sentences based on those keywords. Manual extraction or explanation of relevant keywords are a dreary procedure overflowing with errors including loads of manual exertion and time. In this paper, we proposed a hybrid approach to extract keyword automatically for multi-document text summarization in enewspaper articles. The performance of the proposed approach is compared with three additional keyword extraction techniques namely, term frequency-inverse document frequency (TF-IDF), term frequency-adaptive inverse document frequency (TF-AIDF), and a number of false alarm (NFA) for automatic keyword extraction and summarization in e-newspapers articles for better analysis. Finally, we showed that our proposed techniques had been outperformed over other techniques for automatic keyword extraction and summarization."}
{"_id": "542cc06e3b522163d1a8aed8d5769e94cb6456ce", "title": "Software-Defined \u201cHardware\u201d Infrastructures: A Survey on Enabling Technologies and Open Research Directions", "text": "This paper provides an overview of software-defined \u201chardware\u201d infrastructures (SDHI). SDHI builds upon the concept of hardware (HW) resource disaggregation. HW resource disaggregation breaks today\u2019s physical server-oriented model where the use of a physical resource (e.g., processor or memory) is constrained to a physical server\u2019s chassis. SDHI extends the definition of of software-defined infrastructures (SDI) and brings greater modularity, flexibility, and extensibility to cloud infrastructures, thus allowing cloud operators to employ resources more efficiently and allowing applications not to be bounded by the physical infrastructure\u2019s layout. This paper aims to be an initial introduction to SDHI and its associated technological advancements. This paper starts with an overview of the cloud domain and puts into perspective some of the most prominent efforts in the area. Then, it presents a set of differentiating use-cases that SDHI enables. Next, we state the fundamentals behind SDI and SDHI, and elaborate why SDHI is of great interest today. Moreover, it provides an overview of the functional architecture of a cloud built on SDHI, exploring how the impact of this transformation goes far beyond the cloud infrastructure level in its impact on platforms, execution environments, and applications. Finally, an in-depth assessment is made of the technologies behind SDHI, the impact of these technologies, and the associated challenges and potential future directions of SDHI."}
{"_id": "521c2dd41bd7de10cb514f4e9d537fd434699cb7", "title": "A Study On Unmanned Vehicles and Cyber Security", "text": "During the past years Unmanned Aerial Vehicle (UAV) usage has grown in military and civilian fields. Every year, emergency response operations are more and more dependent of these unmanned vehicles. Lacking direct human interference, the correction of onboard errors and security breaches is a growing concern in unmanned vehicles. One of the concerns raised by first responders using unmanned vehicles is the security and privacy of the victims they are rescuing. Video channels used for the control of unmanned vehicles are of particular importance, having great susceptibility to hijack, jamming and spoofing attacks. It is assumed the video feeds are not protected by any type of encryption and are therefore vulnerable to hacking and/or spoofing. This summer we have conducted a preliminary survey of the security vulnerabilities and their implications on the operation of unmanned vehicles from a network security perspective."}
{"_id": "1832aa821e189770d4401092eb0a93215bd1c382", "title": "A stochastic data discrimination based autoencoder approach for network anomaly detection", "text": "Machine learning based network anomaly detection methods, which are already effective defense mechanisms against known network intrusion attacks, have also proven themselves to be more successful on the detection of zero-day attacks compared to other types of detection methods. Therefore, research on network anomaly detection using deep learning is getting more attention constantly. In this study we created an anomaly detection model based on a deterministic autoencoder, which discriminates normal and abnormal data by using our proposed stochastic threshold determination approach. We tested our proposed anomaly detection model on the NSL-KDD's test dataset KDDTest+ and obtained an accuracy of 88.28%. The experimental results show that our proposed anomaly detection model can perform almost the same as the most successful and up-to-date machine learning based anomaly detection models in the literature."}
{"_id": "1007fbc622acd3cc8f658558a3e841ea200f6880", "title": "Electrophysiology in the age of light", "text": "Electrophysiology, the 'gold standard' for investigating neuronal signalling, is being challenged by a new generation of optical probes. Together with new forms of microscopy, these probes allow us to measure and control neuronal signals with spatial resolution and genetic specificity that already greatly surpass those of electrophysiology. We predict that the photon will progressively replace the electron for probing neuronal function, particularly for targeted stimulation and silencing of neuronal populations. Although electrophysiological characterization of channels, cells and neural circuits will remain necessary, new combinations of electrophysiology and imaging should lead to transformational discoveries in neuroscience."}
{"_id": "ea2994ae14c2a9dafce41b00ca507a0d8db097cb", "title": "Tagging Ingush - Language Technology For Low-Resource Languages Using Resources From Linguistic Field Work", "text": "This paper presents on-going work on creating NLP tools for under-resourced languages from very sparse training data coming from linguistic field work. In this work, we focus on Ingush, a Nakh-Daghestanian language spoken by about 300,000 people in the Russian republics Ingushetia and Chechnya. We present work on morphosyntactic taggers trained on transcribed and linguistically analyzed recordings and dependency parsers using English glosses to project annotation for creating synthetic treebanks. Our preliminary results are promising, supporting the goal of bootstrapping efficient NLP tools with limited or no task-specific annotated data resources available."}
{"_id": "3941c4da721e35e39591ee95cca44a1016b24c6b", "title": "Vertex-Centric Graph Processing: The Good, the Bad, and the Ugly", "text": "We study distributed graph algorithms that adopt an iterati ve vertexcentric framework for graph processing, popularized by Goo gle\u2019s Pregel system. Since then, there are several attempts to imp lement many graph algorithms in a vertex-centric framework, a s well as efforts to design optimization techniques for improving the efficiency. However, to the best of our knowledge, there has not been any systematic study to compare these vertex-centric i mplementations with their sequential counterparts. Our paper a dd esses this gap in two ways. (1) We analyze the computational complexity of such implementations with the notion of time-pro cessor product, and benchmark several vertex-centric graph algor ithms whether they perform more work with respect to their best-kn ow sequential solutions. (2) Employing the concept of balance d practical Pregel algorithms, we study if these implementations suffer from imbalanced workload and large number of iterations. Ou r findings illustrate that with the exception of Euler tour tre e algorithm, all other algorithms either perform asymptotically more work than their best-known sequential approach, or suffer from i mbalanced workload/ large number of iterations, or even both. We also emphasize on graph algorithms that are fundamentally diffic ult to be expressed in vertex-centric frameworks, and conclude by discussing the road ahead for distributed graph processing."}
{"_id": "b57ba909756462d812dc20fca157b3972bc1f533", "title": "Vision-Based Classification of Skin Cancer using Deep Learning", "text": "This study proposes the use of deep learning algorithms to detect the presence of skin cancer, specifically melanoma, from images of skin lesions taken by a standard camera. Skin cancer is the most prevalent form of cancer in the US where 3.3 million people get treated each year. The 5-year survival rate of melanoma is 98% when detected and treated early yet over 10,000 people are lost each year due mostly to late-stage diagnoses [2]. Thus, there is a need to make melanoma screening and diagnoses methods cheaper, quicker, simpler, and more accessible. This study aims to produce an inexpensive and fast computer-vision based machine learning tool that can be used by doctors and patients to track and classify suspicious skin lesions as benign or malignant with adequate accuracy using only a cell phone camera. The data set was trained on 3 separate learning models with increasingly improved classification accuracy. The 3 models included logistic regression, a deep neural network, and a fine-tuned, pre-trained, VGG-16 Convolutional Neural Network (CNN) [7]. Preliminary results show the developed algorithm\u2019s ability to segment moles from images with 70% accuracy and classify skin lesions as melanoma with 78% balanced accuracy using a fine-tuned VGG-16 CNN."}
{"_id": "5e7b74d9e7d0399686bfd4f5334ef44f7d87d259", "title": "Ceiling continuum arm with extensible pneumatic actuators for desktop workspace", "text": "We propose an extensible pneumatic continuum arm that elongates to perform reaching movements and object grasping, and is suspended on the ceiling to prevent interference with human workers in a desktop workspace. The selected actuators with bellows aim to enhance the arm motion capabilities. A single actuator can provide a maximum tension force of 150 N, and the proposed arm has a three-segment structure with a bundle of three actuators per segment. We measured the three-dimensional motion at the arm tip by using an optical motion-capture system. The corresponding results show that the arm can grasp objects with approximate radius of 80 mm and reach any point on the desktop. Furthermore, the maximum elongation ratio is 180%, with length varying between 0.75 m and 2.1 m. Experiments verified that the arm can grasp objects of various sizes and shapes. Moreover, we demonstrate the vertical transportation of objects taking advantage of the arm extensibility. We expect to apply the proposed arm for tasks such as grasping objects, illuminating desktops, and physically interacting with users."}
{"_id": "a6a1b70305b27c556aac779fb65429db9c2e1ef2", "title": "Eventually Returning to Strong Consistency", "text": "Eventually and weakly consistent distributed systems have emerged in the past decade as an answer to scalability and availability issues associated with strong consistency semantics, such as linearizability. However, systems offering strong consistency semantics have an advantage over systems based on weaker consistency models, as they are typically much simpler to reason about and are more intuitive to developers, exhibiting more predictable behavior. Therefore, a lot of research and development effort is being invested lately into the re-engineering of strongly consistent distributed systems, as well as into boosting their scalability and performance. This paper overviews and discusses several novel directions in the design and implementation of strongly consistent systems in industries and research domains such as cloud computing, data center networking and blockchain. It also discusses a general trend of returning to strong consistency in distributed systems, when system requirements permit so."}
{"_id": "559057220f461e2e94148138cddfdc356f2a8893", "title": "Food Recognition using Fusion of Classifiers based on CNNs", "text": "With the arrival of convolutional neural networks, the complex problem of food recognition has experienced an important improvement in recent years. The best results have been obtained using methods based on very deep convolutional ceural cetworks, which show that the deeper the model,the better the classification accuracy will be obtain. However, very deep neural networks may suffer from the overfitting problem. In this paper, we propose a combination of multiple classifiers based on different convolutional models that complement each other and thus, achieve an improvement in performance. The evaluation of our approach is done on two public datasets: Food-101 as a dataset with a wide variety of fine-grained dishes, and Food-11 as a dataset of high-level food categories, where our approach outperforms the independent CNN models."}
{"_id": "0018a0b35ede8900badee90f4c44385205baf2e5", "title": "Implementation of PID controller and pre-filter to control non-linear ball and plate system", "text": "In this paper, the authors try to make PID controller with Pre-filter that is implemented at ball and plate system. Ball and plate system will control the position of ball's axis in pixels value by using servo motor as its actuator and webcam as its sensor of position. PID controller with Pre-filter will have a better response than conventional PID controller. Eventhough the response of PID with Pre-filter is slower than conventional PID, the effect of Pre-filter in the system will give the less overshoot response."}
{"_id": "cb4c975c5b09e2c43d29b5c395b74d25e47fe06b", "title": "Comparing a Scalable SDN Simulation Framework Built on ns-3 and DCE with Existing SDN Simulators and Emulators", "text": "As software-defined networking (SDN) grows beyond its original aim to simply separate the control and data network planes, it becomes useful both financially and analytically to provide adequate mechanisms for simulating this new paradigm. A number of simulation/emulation tools for modeling SDN, such as Mininet, are already available. A new, novel framework for providing SDN simulation has been provided in this work using the network simulator ns-3. The ns-3 module Direct Code Execution (DCE) allows real-world network applications to be run within a simulated network topology. This work employs DCE for running the SDN controller library POX and its applications on nodes in a simulated network topology. In this way, real-world controller applications can be completely portable between simulation and actual deployment. This work also describes a user-defined ns-3 application mimicking an SDN switch supporting OpenFlow 1.0 that can interact with real-world controllers. To evaluate its performance, this ns-3 DCE SDN framework is compared against Mininet as well as some other readily available SDN simulation/emulation tools. Metrics such as realtime performance, memory usage, and reliability in terms of packet loss are analyzed across the multiple simulation/emulation tools to gauge how they compare."}
{"_id": "61736617ae1eb5483a3b8b182815ab6c59bf4939", "title": "Pose optimization in edge distance field for textureless 3D object tracking", "text": "This paper presents a monocular model-based 3D tracking approach for textureless objects. Instead of explicitly searching for 3D-2D correspondences as previous methods, which unavoidably generates individual outlier matches, we aim to minimize the holistic distance between the predicted object contour and the query image edges. We propose a method that can directly solve 3D pose parameters in unsegmented edge distance field. We derive the differentials of edge matching distance with respect to the pose parameters, and search the optimal 3D pose parameters using standard gradient-based non-linear optimization techniques. To avoid being trapped in local minima and to deal with potential large inter-frame motions, a particle filtering process with a first order autoregressive state dynamics is exploited. Occlusions are handled by a robust estimator. The effectiveness of our approach is demonstrated using comparative experiments on real image sequences with occlusions, large motions and cluttered backgrounds."}
{"_id": "238c322d010fbc32bf110377045235c589629cba", "title": "Designing games with a purpose", "text": "Data generated as a side effect of game play also solves computational problems and trains AI algorithms."}
{"_id": "8a241ac5ee8d34479e8849a72096713cc6bbfed5", "title": "A web-based IDE for IDP", "text": "IDP is a knowledge base system based on first order logic. It is finding its way to a larger public but is still facing practical challenges. Adoption of new languages requires a newcomer-friendly way for users to interact with it. Both an online presence to try to convince potential users to download the system and offline availability to develop larger applications are essential. We developed an IDE which can serve both purposes through the use of web technology. It enables us to provide the user with a modern IDE with relatively little effort."}
{"_id": "e85a71c8cae795a1b2052a697d5e8182cc8c0655", "title": "The Stanford CoreNLP Natural Language Processing Toolkit", "text": "We describe the design and use of the Stanford CoreNLP toolkit, an extensible pipeline that provides core natural language analysis. This toolkit is quite widely used, both in the research NLP community and also among commercial and government users of open source NLP technology. We suggest that this follows from a simple, approachable design, straightforward interfaces, the inclusion of robust and good quality analysis components, and not requiring use of a large amount of associated baggage."}
{"_id": "17f1c422b0840adb53e1e92db397a80e156b9fa7", "title": "Marijuana and Coronary Heart Disease", "text": "Marijuana is the most commonly abused drug in the United States.1 It mainly acts on cannabinoid receptors. There are two types of cannabinoid receptors in humans: cannabinoid receptor type 1 (CB1) and cannabinoid receptor type 2 (CB2). CB1 receptor activation is pro-atherogenic, and CB2 receptor activation is largely anti-atherogenic. Marijuana use is also implicated as a trigger for myocardial infarction (MI) in patients with stable coronary artery disease (CAD)."}
{"_id": "d9b094767cbed2edc8c7bd1170669a22068734c0", "title": "Health of Things Algorithms for Malignancy Level Classification of Lung Nodules", "text": "Lung cancer is one of the leading causes of death world wide. Several computer-aided diagnosis systems have been developed to help reduce lung cancer mortality rates. This paper presents a novel structural co-occurrence matrix (SCM)-based approach to classify nodules into malignant or benign nodules and also into their malignancy levels. The SCM technique was applied to extract features from images of nodules and classifying them into malignant or benign nodules and also into their malignancy levels. The computed tomography exams from the lung image database consortium and image database resource initiative datasets provide information concerning nodule positions and their malignancy levels. The SCM was applied on both grayscale and Hounsfield unit images with four filters, to wit, mean, Laplace, Gaussian, and Sobel filters creating eight different configurations. The classification stage used three well-known classifiers: multilayer perceptron, support vector machine, and $k$ -nearest neighbors algorithm and applied them to two tasks: ( $i$ ) to classify the nodule images into malignant or benign nodules and ( $ii$ ) to classify the lung nodules into malignancy levels (1 to 5). The results of this approach were compared to four other feature extraction methods: gray-level co-occurrence matrix, local binary patterns, central moments, and statistical moments. Moreover, the results here were also compared to the results reported in the literature. Our approach outperformed the other methods in both tasks; it achieved 96.7% for both accuracy and F-Score metrics in the first task, and 74.5% accuracy and 53.2% F-Score in the second. These experimental results reveal that the SCM successfully extracted features of the nodules from the images and, therefore may be considered as a promising tool to support medical specialist to make a more precise diagnosis concerning the malignancy of lung nodules."}
{"_id": "5f93423c41914639a147ad8dd589967daf2ea877", "title": "Maximum Margin Clustering", "text": "We propose a new method for clustering based on finding maximum margin hyperplanes through data. By reformulating the problem in terms of the implied equivalence relation matrix, we can pose the problem as a convex integer program. Although this still yields a difficult computational problem, the hard-clustering constraints can be relaxed to a soft-clustering formulation which can be feasibly solved with a semidefinite program. Since our clustering technique only depends on the data through the kernel matrix, we can easily achieve nonlinear clusterings in the same manner as spectral clustering. Experimental results show that our maximum margin clustering technique often obtains more accurate results than conventional clustering methods. The real benefit of our approach, however, is that it leads naturally to a semi-supervised training method for support vector machines. By maximizing the margin simultaneously on labeled and unlabeled training data, we achieve state of the art performance by using a single, integrated learning principle."}
{"_id": "cc13fde0a91f4d618e6af66b49690702906316ae", "title": "A MapReduce Implementation of C 4 . 5 Decision Tree Algorithm", "text": "Recent years have witness the development of cloud computing and the big data era, which brings up challenges to traditional decision tree algorithms. First, as the size of dataset becomes extremely big, the process of building a decision tree can be quite time consuming. Second, because the data cannot fit in memory any more, some computation must be moved to the external storage and therefore increases the I/O cost. To this end, we propose to implement a typical decision tree algorithm, C4.5, using MapReduce programming model. Specifically, we transform the traditional algorithm into a series of Map and Reduce procedures. Besides, we design some data structures to minimize the communication cost. We also conduct extensive experiments on a massive dataset. The results indicate that our algorithm exhibits both time efficiency and scalability."}
{"_id": "5faca17c1f8293e5d171719a6b8a289592c3d64d", "title": "Hierarchical Deep Multiagent Reinforcement Learning", "text": "Despite deep reinforcement learning has recently achieved great successes, however in multiagent environments, a number of challenges still remain. Multiagent reinforcement learning (MARL) is commonly considered to suffer from the problem of non-stationary environments and exponentially increasing policy space. It would be even more challenging to learn effective policies in circumstances where the rewards are sparse and delayed over long trajectories. In this paper, we study Hierarchical Deep Multiagent Reinforcement Learning (hierarchical deep MARL) in cooperative multiagent problems with sparse and delayed rewards, where efficient multiagent learning methods are desperately needed. We decompose the original MARL problem into hierarchies and investigate how effective policies can be learned hierarchically in synchronous/asynchronous hierarchical MARL frameworks. Several hierarchical deep MARL architectures, i.e., Ind-hDQN, hCom and hQmix, are introduced for different learning paradigms. Moreover, to alleviate the issues of sparse experiences in high-level learning and non-stationarity in multiagent settings, we propose a new experience replay mechanism, named as Augmented Concurrent Experience Replay (ACER). We empirically demonstrate the effects and efficiency of our approaches in several classic Multiagent Trash Collection tasks, as well as in an extremely challenging team sports game, i.e., Fever Basketball Defense."}
{"_id": "fc8fdffa5f97829f2373a5138b6d70a8baf08bdf", "title": "The McGill pain questionnaire: from description to measurement.", "text": "On the language of pain. By Ronald Melzack, Warren S. Torgerson. Anesthesiology 1971; 34:50-9. Reprinted with permission. The purpose of this study was to develop new approaches to the problem of describing and measuring pain in human subjects. Words used to describe pain were brought together and categorized, and an attempt was made to scale them on a common intensity dimension. The data show that: 1) there are many words in the English language to describe the varieties of pain experience; 2) there is a high level of agreement that the words fall into classes and subclasses that represent particular dimensions or properties of pain experience; 3) substantial portions of the words have approximately the same relative positions on a common intensity scale for people who have widely divergent backgrounds. The word lists provide a basis for a questionnaire to study the effects of anesthetic and analgesic agents on the experience of pain."}
{"_id": "2d0f1f8bf2e3b8c58be79e01430c67471ceee05e", "title": "Low-Power Near-Threshold 10T SRAM Bit Cells With Enhanced Data-Independent Read Port Leakage for Array Augmentation in 32-nm CMOS", "text": "The conventional six-transistor static random access memory (SRAM) cell allows high density and fast differential sensing but suffers from half-select and read-disturb issues. Although the conventional eight-transistor SRAM cell solves the read-disturb issue, it still suffers from low array efficiency due to deterioration of read bit-line (RBL) swing and $\\text{I}_{\\mathbf {on}}/\\text{I}_{\\mathbf {off}}$ ratio with increase in the number of cells per column. Previous approaches to solve these issues have been afflicted by low performance, data-dependent leakage, large area, and high energy per access. Therefore, in this paper, we present three iterations of SRAM bit cells with nMOS-only based read ports aimed to greatly reduce data-dependent read port leakage to enable 1k cells/RBL, improve read performance, and reduce area and power over conventional and 10T cell-based works. We compare the proposed work with other works by recording metrics from the simulation of a 128-kb SRAM constructed with divided-wordline-decoding architecture and a 32-bit word size. Apart from large improvements observed over conventional cells, up to 100-mV improvement in read-access performance, up to 19.8% saving in energy per access, and up to 19.5% saving in the area are also observed over other 10T cells, thereby enlarging the design and application gamut for memory designers in low-power sensors and battery-enabled devices."}
{"_id": "36d6ca6d47fa3a74d7698f0aa39605c29c594a3b", "title": "Trainable Sentence Planning for Complex Information Presentations in Spoken Dialog Systems", "text": "A challenging problem for spoken dialog systems is the design of utterance generation modules that are fast, flexible and general, yet produce high quality output in particular domains. A promising approach is trainable generation, which uses general-purpose linguistic knowledge automatically adapted to the application domain. This paper presents a trainable sentence planner for the MATCH dialog system. We show that trainable sentence planning can produce output comparable to that of MATCH\u2019s template-based generator even for quite complex information presentations."}
{"_id": "a55d01c2a6d33e8dff01d4eab89b941465d288ba", "title": "Complications associated with injectable soft-tissue fillers: a 5-year retrospective review.", "text": "IMPORTANCE\nEven when administered by experienced hands, injectable soft-tissue fillers can cause various unintended reactions, ranging from minor and self-limited responses to severe complications requiring prompt treatment and close follow-up.\n\n\nOBJECTIVES\nTo review the complications associated with injectable soft-tissue filler treatments administered in the Williams Rejuva Center during a 5-year period and to discuss their management.\n\n\nDESIGN AND SETTING\nRetrospective medical record review in a private practice setting.\n\n\nPARTICIPANTS\nPatients receiving injectable soft-tissue fillers and having a treatment-related complication.\n\n\nINTERVENTIONS\nInjectable soft-tissue filler treatments.\n\n\nMAIN OUTCOME MEASURES\nA retrospective medical record review was conducted of patients undergoing treatment with injectable soft-tissue fillers between January 1, 2007, and December 31, 2011, and identified as having a treatment-related complication.\n\n\nRESULTS\nA total of 2089 injectable soft-tissue filler treatments were performed during the study period, including 1047 with hyaluronic acid, 811 with poly-L-lactic acid, and 231 with calcium hydroxylapatite. Fourteen complications were identified. The most common complication was nodule or granuloma formation. Treatment with calcium hydroxylapatite had the highest complication rate.\n\n\nCONCLUSIONS AND RELEVANCE\nComplications are rare following treatment with injectable soft-tissue fillers. Nevertheless, it is important to be aware of the spectrum of potential adverse sequelae and to be comfortable with their proper management.\n\n\nLEVEL OF EVIDENCE\n4."}
{"_id": "75eb06c5d342975d6ec7b8c652653cf38bf1d6b6", "title": "On Data Clustering Analysis: Scalability, Constraints, and Validation", "text": "Clustering is the problem of grouping data based on similarity. While this problem has attracted the attention of many researchers for many years, we are witnessing a resurgence of interest in new clustering techniques. In this paper we discuss some very recent clustering approaches and recount our experience with some of these algorithms. We also present the problem of clustering in the presence of constraints and discuss the issue of clustering validation."}
{"_id": "4e473a7bda10d5d11bb767de4e5da87e99d2ef69", "title": "Better managed than memorized? Studying the Impact of Managers on Password Strength and Reuse", "text": "Despite their well-known security problems, passwords are still the incumbent authentication method for virtually all online services. To remedy the situation, users are very often referred to password managers as a solution to the password reuse and weakness problems. However, to date the actual impact of password managers on password strength and reuse has not been studied systematically. We provide the first large-scale study of the password managers\u2019 influence on users\u2019 real-life passwords. By combining qualitative data on users\u2019 password creation and management strategies, collected from 476 participants of an online survey, with quantitative data (incl. password metrics and entry methods) collected in situ with a browser plugin from 170 users, we were able to gain a more complete picture of the factors that influence our participants\u2019 password strength and reuse. Our approach allows us to quantify for the first time that password managers indeed influence the password security, however, whether this influence is beneficial or aggravating existing problems depends on the users\u2019 strategies and how well the manager supports the users\u2019 password management right from the time of password creation. Given our results, we think research should further investigate how managers can better support users\u2019 password strategies in order to improve password security as well as stop aggravating the existing problems."}
{"_id": "c39ecde1f38e9aee9ad846f7633a33403274d019", "title": "Advanced Beamformers for Cochlear Implant Users: Acute Measurement of Speech Perception in Challenging Listening Conditions", "text": "OBJECTIVE\nTo investigate the performance of monaural and binaural beamforming technology with an additional noise reduction algorithm, in cochlear implant recipients.\n\n\nMETHOD\nThis experimental study was conducted as a single subject repeated measures design within a large German cochlear implant centre. Twelve experienced users of an Advanced Bionics HiRes90K or CII implant with a Harmony speech processor were enrolled. The cochlear implant processor of each subject was connected to one of two bilaterally placed state-of-the-art hearing aids (Phonak Ambra) providing three alternative directional processing options: an omnidirectional setting, an adaptive monaural beamformer, and a binaural beamformer. A further noise reduction algorithm (ClearVoice) was applied to the signal on the cochlear implant processor itself. The speech signal was presented from 0\u00b0 and speech shaped noise presented from loudspeakers placed at \u00b170\u00b0, \u00b1135\u00b0 and 180\u00b0. The Oldenburg sentence test was used to determine the signal-to-noise ratio at which subjects scored 50% correct.\n\n\nRESULTS\nBoth the adaptive and binaural beamformer were significantly better than the omnidirectional condition (5.3 dB\u00b11.2 dB and 7.1 dB\u00b11.6 dB (p<0.001) respectively). The best score was achieved with the binaural beamformer in combination with the ClearVoice noise reduction algorithm, with a significant improvement in SRT of 7.9 dB\u00b12.4 dB (p<0.001) over the omnidirectional alone condition.\n\n\nCONCLUSIONS\nThe study showed that the binaural beamformer implemented in the Phonak Ambra hearing aid could be used in conjunction with a Harmony speech processor to produce substantial average improvements in SRT of 7.1 dB. The monaural, adaptive beamformer provided an averaged SRT improvement of 5.3 dB."}
{"_id": "a106ecb03c1c654d5954fb802929ea2a2d437525", "title": "Robust ground plane tracking in cluttered environments from egocentric stereo vision", "text": "Estimating the ground plane is often one of the first steps in geometric reasoning processes as it offers easily accessible context knowledge. Especially unconstrained platforms that capture video from egocentric viewpoints can benefit from such knowledge in various ways. A key requirement here is keeping orientation, which can be greatly achieved by keeping track of the ground. We present an approach to keep track of the ground plane in cluttered inner-urban environments using stereo vision in real-time. We fuse a planar model fit in low-resolution disparity data with the direction of the vertical vanishing point. Our experiments show how this effectively decreases the error of plane attitude estimation compared to classic least-squares fitting and allows to track the plane with camera configurations in which the ground is not visible. We evaluate the approach using ground-truth from an inertial measurement unit and demonstrate long-term stability on a dataset of challenging inner city scenes."}
{"_id": "56c700693b63e3da3b985777da6d9256e2e0dc21", "title": "Global refinement of random forest", "text": "Random forest is well known as one of the best learning methods. In spite of its great success, it also has certain drawbacks: the heuristic learning rule does not effectively minimize the global training loss; the model size is usually too large for many real applications. To address the issues, we propose two techniques, global refinement and global pruning, to improve a pre-trained random forest. The proposed global refinement jointly relearns the leaf nodes of all trees under a global objective function so that the complementary information between multiple trees is well exploited. In this way, the fitting power of the forest is significantly enhanced. The global pruning is developed to reduce the model size as well as the over-fitting risk. The refined model has better performance and smaller storage cost, as verified in extensive experiments."}
{"_id": "d73a71fa24b582accb934a9c2308567376ff396d", "title": "3D geo-database research: Retrospective and future directions", "text": "3D geo-database research is a promising field to support challenging applications such as 3D urban planning, environmental monitoring, infrastructure management, and early warning or disaster management and response. In these fields, interdisciplinary research in GIScience and related fields is needed to support the modelling, analysis, management, and integration of large geo-referenced data sets, which describe human activities and geophysical phenomena. Geo-databases may serve as platforms to integrate 2D maps, 3D geo-scientific models, and other geo-referenced data. However, current geo-databases do not provide sufficient 3D data modelling and data handling techniques. New 3D geo-databases are needed to handle surface and volume models. This article first presents a 25-year retrospective of geo-database research. Data modelling, standards, and indexing of geo-data are discussed in detail. New directions for the development of 3D geo-databases to open new fields for interdisciplinary research are addressed. Two scenarios in the fields of early warning and emergency response demonstrate the combined management of human and geophysical phenomena. The article concludes with a critical outlook on open research problems. & 2011 Elsevier Ltd. All rights reserved."}
{"_id": "80e98ea8e97d0d5444399319f5fe46f7c96231b7", "title": "How Online Basic Psychological Need Satisfaction Influences Self-Disclosure Online among Chinese Adolescents: Moderated Mediation Effect of Exhibitionism and Narcissism", "text": "Under the basic framework of self-determination theory, the present study examined a moderated mediation model in which exhibitionism mediated the relationship between online basic psychological need satisfaction and self-disclosure on the mobile Internet, and this mediation effect was moderated by narcissism. A total of 296 Chinese middle school students participated in this research. The results revealed that exhibitionism fully mediated the association between online competence need satisfaction and self-disclosure on the mobile net, and partly mediated the association between online relatedness need satisfaction and self-disclosure on the mobile net. The mediating path from online basic psychological need satisfaction (competence and relatedness) to exhibitionism was moderated by narcissism. Compared to the low level of narcissism, online competence need satisfaction had a stronger predictive power on exhibitionism under the high level of narcissism condition. In contrast, online relatedness need satisfaction had a weaker predictive power on exhibitionism."}
{"_id": "ce93e3c1a46ce2a5d7c93121f9f1b992cf868de3", "title": "Small-footprint Keyword Spotting Using Deep Neural Network and Connectionist Temporal Classifier", "text": "Mainly for the sake of solving the lack of keyword-specific data, we propose one Keyword Spotting (KWS) system using Deep Neural Network (DNN) and Connectionist Temporal Classifier (CTC) on power-constrained small-footprint mobile devices, taking full advantage of general corpus from continuous speech recognition which is of great amount. DNN is to directly predict the posterior of phoneme units of any personally customized key-phrase, and CTC to produce a confidence score of the given phoneme sequence as responsive decision-making mechanism. The CTC-KWS has competitive performance in comparison with purely DNN based keyword specific KWS, but not increasing any computational complexity."}
{"_id": "488a4ec702dca3d0c60eedbbd3971136b8bacd00", "title": "Attacking state space explosion problem in model checking embedded TV software", "text": "The features of current TV sets is increasing rapidly resulting in more complicated embedded TV software that is also harder to test and verify. Using model checking in verification of embedded software is a widely accepted practice even though it is seriously affected by the exponential increase in the number of states being produced during the verification process. Using fully non-deterministic user agent models is one of the reasons that can result in a drastic increase in the number of states being produced. In order to shrink the state space being observed during the model checking process of TV software, a method is proposed that rely on using previous test logs to generate partly nondeterministic user agent model. Results show that by using partly non-deterministic user agents, the verification time of certain safety and liveness properties can be significantly decreased."}
{"_id": "00d7c79ee185f503ed4f2f3415b02104999d5574", "title": "Bridging the Gap between Business Strategy and Software Development", "text": "In software-intensive organizations, an organizational management system will not guarantee organizational success unless the business strategy can be translated into a set of operational software goals. The Goal Question Metric (GQM) approach has proven itself useful in a variety of industrial settings to support quantitative software project management. However, it does not address linking software measurement goals to higher-level goals of the organization in which the software is being developed. This linkage is important, as it helps to justify software measurement efforts and allows measurement data to contribute to higher-level decisions. In this paper, we propose a GQMStrategies\u00ae measurement approach that builds on the GQM approach to plan and implement software measurement. GQMStrategies provides mechanisms for explicitly linking software measurement goals to higher-level goals for the software organization, and further to goals and strategies at the level of the entire business. An example application of the proposed method is illustrated in the context of an example measurement initiative."}
{"_id": "776669ccb66b317f33dbb0022d2b1cb94fe06559", "title": "Increased social fear and decreased fear of objects in monkeys with neonatal amygdala lesions", "text": "The amygdala has been implicated in the mediation of emotional and species-specific social behavior (Kling et al., 1970; Kling and Brothers, 1992; Kluver and Bucy, 1939; Rosvold et al., 1954). Humans with bilateral amygdala damage are impaired in judging negative emotion in facial expressions and making accurate judgements of trustworthiness (Adolphs et al., 1998, 1994). Amygdala dysfunction has also been implicated in human disorders ranging from social anxiety (Birbaumer et al., 1998) to depression (Drevets, 2000) to autism (Bachevalier, 1994; Baron-Cohen et al., 2000; Bauman and Kemper, 1993). We produced selective amygdala lesions in 2-week-old macaque monkeys who were returned to their mothers for rearing. At 6-8 months of age, the lesioned animals demonstrated less fear of novel objects such as rubber snakes than age-matched controls. However, they displayed substantially more fear behavior than controls during dyadic social interactions. These results suggest that neonatal amygdala lesions dissociate a system that mediates social fear from one that mediates fear of inanimate objects. Furthermore, much of the age-appropriate repertoire of social behavior was present in amygdala-lesioned infants indicating that these lesions do not produce autistic-like behavior in monkeys. Finally, amygdala lesions early in development have different effects on social behavior than lesions produced in adulthood."}
{"_id": "0aa303109a3402aa5a203877847d549c4a24d933", "title": "Who Do I Look Like? Determining Parent-Offspring Resemblance via Gated Autoencoders", "text": "Recent years have seen a major push for face recognition technology due to the large expansion of image sharing on social networks. In this paper, we consider the difficult task of determining parent-offspring resemblance using deep learning to answer the question \"Who do I look like?\" Although humans can perform this job at a rate higher than chance, it is not clear how they do it [2]. However, recent studies in anthropology [24] have determined which features tend to be the most discriminative. In this study, we aim to not only create an accurate system for resemblance detection, but bridge the gap between studies in anthropology with computer vision techniques. Further, we aim to answer two key questions: 1) Do offspring resemble their parents? and 2) Do offspring resemble one parent more than the other? We propose an algorithm that fuses the features and metrics discovered via gated autoencoders with a discriminative neural network layer that learns the optimal, or what we call genetic, features to delineate parent-offspring relationships. We further analyze the correlation between our automatically detected features and those found in anthropological studies. Meanwhile, our method outperforms the state-of-the-art in kinship verification by 3-10% depending on the relationship using specific (father-son, mother-daughter, etc.) and generic models."}
{"_id": "e2919bbdb9c8d7209ab6b3732e05eb136512709e", "title": "Performance of H.264, H.265, VP8 and VP9 Compression Standards for High Resolutions", "text": "Recently multimedia services increase, especially in video domain, leads to requirements of quality assessment. The main factors effected that quality are compression technology and transmission link. This paper presents a coding efficiency comparison of the well-known video compression standards Advanced Video Coding (H.264/AVC), High-Efficiency Video Coding (H.265/HEVC), VP8 and VP9 using objective metrics. An extensive range of bitrates from low to high bitrates was selected and encoded. The evaluation was done for four types of sequences in high resolutions depending on content. All four video sequences were encoded by using same encoding configurations for all the examined video codecs. The results showed that the quality of all compression standards rises logarithmically with increasing bitrate - in low bitrates the quality grows faster than in high bitrates. The efficiency of the new compression standards outperforms the older ones. The efficiency of VP8 compression standard outperforms the H.264/AVC compression standard. The efficiency of H.265/HEVC and VP9 compression standards is almost the same. The results also showed that VP8 codec cannot allow encoding by low bitrates. According to the results can be also said that the efficiency of the new compression standards decreases with the resolution. The results showed that the effectiveness of compression depends on the type of test sequence."}
{"_id": "5dab88dbd297974ebed3f964a224d88ab54fc0ba", "title": "Detection of Ellipses by a Modified Hough Transformation", "text": "The Hough transformation can detect straight lines in an edge-enhanced picture, however its extension to recover ellipses requires too long a computing time. This correspondence proposes a modified method which utilizes two properties of an ellipse in such a way that it iteratively searches for clusters in two different parameter spaces to find almost complete ellipses, then evaluates their parameters by the least mean squares method."}
{"_id": "0f892fa9574f24bc7b50fed94e0abbd84883c2dc", "title": "Is dark silicon useful? Harnessing the four horsemen of the coming dark silicon apocalypse", "text": "Due to the breakdown of Dennardian scaling, the percentage of a silicon chip that can switch at full frequency is dropping exponentially with each process generation. This utilization wall forces designers to ensure that, at any point in time, large fractions of their chips are effectively dark or dim silicon, i.e., either idle or significantly underclocked.\n As exponentially larger fractions of a chip's transistors become dark, silicon area becomes an exponentially cheaper resource relative to power and energy consumption. This shift is driving a new class of architectural techniques that \"spend\" area to \"buy\" energy efficiency. All of these techniques seek to introduce new forms of heterogeneity into the computational stack. We envision that ultimately we will see widespread use of specialized architectures that leverage these techniques in order to attain orders-of-magnitude improvements in energy efficiency.\n However, many of these approaches also suffer from massive increases in complexity. As a result, we will need to look towards developing pervasively specialized architectures that insulate the hardware designer and the programmer from the underlying complexity of such systems. In this paper, I discuss four key approaches--the four horsemen--that have emerged as top contenders for thriving in the dark silicon age. Each class carries with its virtues deep-seated restrictions that requires a careful understanding of the underlying tradeoffs and benefits."}
{"_id": "265d621e32757aec8d0b6456383a64edc9304be3", "title": "Distributed Multi-Robot Localization", "text": "This paper presents a new approach to the cooperative localization problem, namely distributed multi-robot localization. A group of M robots is viewed as a single system composed of robots that carry, in general, di erent sensors and have di erent positioning capabilities. A single Kalman lter is formulated to estimate the position and orientation of all the members of the group. This centralized schema is capable of fusing information provided by the sensors distributed on the individual robots while accommodating independencies and interdependencies among the collected data. In order to allow for distributed processing, the equations of the centralized Kalman lter are treated so that this lter can be decomposed into M modi ed Kalman lters each running on a separate robot. The distributed localization algorithm is applied to a group of 3 robots and the improvement in localization accuracy is presented."}
{"_id": "29108331ab9bfc1b490e90309014cff218db25cf", "title": "Human model evaluation in interactive supervised learning", "text": "Model evaluation plays a special role in interactive machine learning (IML) systems in which users rely on their assessment of a model's performance in order to determine how to improve it. A better understanding of what model criteria are important to users can therefore inform the design of user interfaces for model evaluation as well as the choice and design of learning algorithms. We present work studying the evaluation practices of end users interactively building supervised learning systems for real-world gesture analysis problems. We examine users' model evaluation criteria, which span conventionally relevant criteria such as accuracy and cost, as well as novel criteria such as unexpectedness. We observed that users employed evaluation techniques---including cross-validation and direct, real-time evaluation---not only to make relevant judgments of algorithms' performance and interactively improve the trained models, but also to learn to provide more effective training data. Furthermore, we observed that evaluation taught users about what types of models were easy or possible to build, and users sometimes used this information to modify the learning problem definition or their plans for using the trained models in practice. We discuss the implications of these findings with regard to the role of generalization accuracy in IML, the design of new algorithms and interfaces, and the scope of potential benefits of incorporating human interaction in the design of supervised learning systems."}
{"_id": "3f5b98643e68d7ef4c9e1ae615f8d2f5a57c67be", "title": "Teachable robots: Understanding human teaching behavior to build more effective robot learners", "text": "While Reinforcement Learning (RL) is not traditionally designed for interactive supervisory input from a human teacher, several works in both robot and software agents have adapted it for human input by letting a human trainer control the reward signal. In this work, we experimentally examine the assumption underlying these works, namely that the human-given reward is compatible with the traditional RL reward signal. We describe an experimental platform with a simulated RL robot and present an analysis of real-time human teaching behavior found in a study in which untrained subjects taught the robot to perform a new task. We report three main observations on how people administer feedback when teaching a robot a task through Reinforcement Learning: (a) they use the reward channel not only for feedback, but also for future-directed guidance; (b) they have a positive bias to their feedback \u2014 possibly using the signal as a motivational channel; and (c) they change their behavior as they develop a mental model of the robotic learner. Given this, we made specific modifications to the simulated RL robot, and analyzed and evaluated its learning behavior in four additional experiments with human trainers. We report significant improvements on several learning measures. This work demonstrates the importance of understanding the human-teacher/robot-learner partnership in order to design algorithms that support how people want to teach while simultaneously improving the robot\u2019s learning behavior."}
{"_id": "2093bb20cbaf684d5fde5ec46ebf5a9423393581", "title": "Natural methods for robot task learning: instructive demonstrations, generalization and practice", "text": "Among humans, teaching various tasks is a complex process which relies on multiple means for interaction and learning, both on the part of the teacher and of the learner. Used together, these modalities lead to effective teaching and learning approaches, respectively. In the robotics domain, task teaching has been mostly addressed by using only one or very few of these interactions. In this paper we present an approach for teaching robots that relies on the key features and the general approach people use when teaching each other: first give a demonstration, then allow the learner to refine the acquired capabilities by practicing under the teacher's supervision, involving a small number of trials. Depending on the quality of the learned task, the teacher may either demonstrate it again or provide specific feedback during the learner's practice trial for further refinement. Also, as people do during demonstrations, the teacher can provide simple instructions and informative cues, increasing the performance of learning. Thus, instructive demonstrations, generalization over multiple demonstrations and practice trials are essential features for a successful human-robot teaching approach. We implemented a system that enables all these capabilities and validated these concepts with a Pioneer 2DX mobile robot learning tasks from multiple demonstrations and teacher feedback."}
{"_id": "25ae96b48f21303e598c2fc4b257aa6eb2a6bcb3", "title": "XWand: UI for intelligent spaces", "text": "The XWand is a novel wireless sensor package that enables styles of natural interaction with intelligent environments. For example, a user may point the wand at a device and control it using simple gestures. The XWand system leverages the intelligence of the environment to best determine the user's intention. We detail the hardware device, signal processing algorithms to recover position and orientation, gesture recognition techniques, a multimodal (wand and speech) computational architecture and a preliminary user study examining pointing performance under conditions of tracking availability and audio feedback."}
{"_id": "280ccfcfec38b3c38372466fb9e34333d921715a", "title": "GloveTalkII: An Adaptive Gesture-to-Formant Interface", "text": "Glove-TaikII is a system which translates hand gestures-\u00b7 to speech through an adaptive interface. Hand gestures are mapped continuously to 10 control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary, multiple languages in addition to direct control of fundamental frequency and volume. Currently, the best version of Glove-TaikII uses several input devices (including a Cyberglove, a ContactGlove, a polhemus sensor, and a foot-pedal), a parallel formant speech synthesizer and 3 neural networks. The gestureto-speech task is divided into vowel and consonant production by using a gating network to weight the outputs of a vowel and a consonant neural network. The gating network and the consonant network are trained with examples from the user. The vowel network implements a fixed, user-defined relationship between hand-position and vowel sound and does not require any training examples from the user. Volume, fundamental frequency and stop consonants are produced with a fixed mapping from the input devices. One subject has trained for about 100 hours to speak intelligibly with Glove-TalkII. He passed through eight distinct stages while learning to speak. He speaks slowly with speech quality similar to a text-to-speech synthesizer but with far more natural-sounding pitch variations."}
{"_id": "286e6e25a244d715cf697cc0a1c0c8f81ec88fbc", "title": "Collecting and analyzing qualitative data for system dynamics: methods and models", "text": "System dynamics depends heavily upon quantitative data to generate feedback models. Qualitative data and their analysis also have a central role to play at all levels of the modeling process. Although the classic literature on system dynamics strongly supports this argument, the protocols to incorporate this information during the modeling process are not detailed by the most influential authors. Data gathering techniques such as interviews and focus groups, and qualitative data analysis techniques such as grounded theory methodology and ethnographic decision models could have a strong, critical role in rigorous system dynamics efforts. This paper describes some of the main qualitative, social science techniques and explores their suitability in the different stages of the modeling process. Additionally, the authors argue that the techniques described in the paper could contribute to the understanding of the modeling process, facilitate"}
{"_id": "2ebe1ffec53e63c2799cba961503f0a6abafccd3", "title": "Skip graphs", "text": "Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where elements are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer networks, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, constructing, inserting new elements into, searching a skip graph and detecting and repairing errors in the data structure introduced by node failures can be done using simple and straight-forward algorithms."}
{"_id": "8faaf7ddbfdf7b2b16c5c13c710c44b09b0e1067", "title": "Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera", "text": "3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by some systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, some experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000 camera, were performed and reported in this paper. In particular, two main aspects are treated: the calibration of the distance measurements of the SR-4000 camera, which deals with evaluation of the camera warm up time period, the distance measurement error evaluation and a study of the influence on distance measurements of the camera orientation with respect to the observed object; the second aspect concerns the photogrammetric calibration of the amplitude images delivered by the camera using a purpose-built multi-resolution field made of high contrast targets."}
{"_id": "79b9455a9a854834af41178cd0eb43e93aa9d006", "title": "Traceable Air Baggage Handling System Based on RFID Tags in the Airport", "text": "The RFID is not only a feasible, novel, and cost-effective candidate for daily object identification but it is also considered as a significant tool to provide traceable visibility along different stages of the aviation supply chain. In the air baggage handing application, the RFID tags are used to enhance the ability for baggage tracking, dispatching and conveyance so as to improve the management efficiency and the users\u2019 satisfaction. We surveyed current related work and introduce the IATA RP1740c protocol used for the standard to recognize the baggage tags. One distributed aviation baggage traceable application is designed based on the RFID networks. We describe the RFID-based baggage tracking experiment in the BCIA (Beijing Capital International Airport). In this experiment the tags are sealed in the printed baggage label and the RFID readers are fixed in the certain interested positions of the BHS in the Terminal 2. We measure the accurate recognition rate and monitor the baggage\u2019s real-time situation on the monitor\u2019s screen. Through the analysis of the measured results within two months we emphasize the advantage of the adoption of RFID tags in this high noisy BHS environment. The economical benefits achieved by the extensive deployment of RFID in the baggage handing system are also outlined."}
{"_id": "3c9ea700f2668fa78dd7c2773e015dad165e8f56", "title": "Learning continuous grasp stability for a humanoid robot hand based on tactile sensing", "text": "Grasp stability estimation with complex robots in environments with uncertainty is a major research challenge. Analytical measures such as force closure based grasp quality metrics are often impractical because tactile sensors are unable to measure contacts accurately enough especially in soft contact cases. Recently, an alternative approach of learning the stability based on examples has been proposed. Current approaches of stability learning analyze the tactile sensor readings only at the end of the grasp attempt, which makes them somewhat time consuming, because the grasp can be stable already earlier. In this paper, we propose an approach for grasp stability learning, which estimates the stability continuously during the grasp attempt. The approach is based on temporal filtering of a support vector machine classifier output. Experimental evaluation is performed on an anthropomorphic ARMAR-IIIb. The results demonstrate that the continuous estimation provides equal performance to the earlier approaches while reducing the time to reach a stable grasp significantly. Moreover, the results demonstrate for the first time that the learning based stability estimation can be used with a flexible, pneumatically actuated hand, in contrast to the rigid hands used in earlier works."}
{"_id": "36f57ae7525de92d5300bd9e06978c884ebeeda8", "title": "Authorship attribution based on Life-Like Network Automata", "text": "The authorship attribution is a problem of considerable practical and technical interest. Several methods have been designed to infer the authorship of disputed documents in multiple contexts. While traditional statistical methods based solely on word counts and related measurements have provided a simple, yet effective solution in particular cases; they are prone to manipulation. Recently, texts have been successfully modeled as networks, where words are represented by nodes linked according to textual similarity measurements. Such models are useful to identify informative topological patterns for the authorship recognition task. However, there is no consensus on which measurements should be used. Thus, we proposed a novel method to characterize text networks, by considering both topological and dynamical aspects of networks. Using concepts and methods from cellular automata theory, we devised a strategy to grasp informative spatio-temporal patterns from this model. Our experiments revealed an outperformance over structural analysis relying only on topological measurements, such as clustering coefficient, betweenness and shortest paths. The optimized results obtained here pave the way for a better characterization of textual networks."}
{"_id": "8757e5b778ebb443b7a487d4fa65883dcff86027", "title": "High-Sensitivity Software-Configurable 5.8-GHz Radar Sensor Receiver Chip in 0.13-$\\mu$ m CMOS for Noncontact Vital Sign Detection", "text": "In this paper, analyses on sensitivity and link budget have been presented to guide the design of high-sensitivity noncontact vital sign detector. Important design issues such as flicker noise, baseband bandwidth, and gain budget have been discussed with practical considerations of analog-to-digital interface and signal processing methods in noncontact vital sign detection. Based on the analyses, a direct-conversion 5.8-GHz radar sensor chip with 1-GHz bandwidth was designed and fabricated. This radar sensor chip is software configurable to set the operation point and detection range for optimal performance. It integrates all the analog functions on-chip so that the output can be directly sampled for digital signal processing. Measurement results show that the fabricated chip has a sensitivity of better than -101 dBm for ideal detection in the absence of random body movement. Experiments have been performed successfully in laboratory environment to detect the vital signs of human subjects."}
{"_id": "5407167bf5cce2f942261b26e14f9e44251332a1", "title": "The social impact of a systematic floor cleaner", "text": "Mint is an automatic cleaning robot that sweeps and mops hard-surface floors using dusting and mopping cloths. Thanks to the Northstar navigation technology it systematically cleans and navigates in people's homes. Since it first became commercially available in mid 2010, hundreds of thousands of Mint cleaners are nowadays in use at home. In this paper we investigate the product's social impact with respect to the attitude of customers towards a systematic floor cleaner and how such a robot influences their lifestyle. We first report feedback from users owning the product, and demonstrate how Mint changed their everyday life at home. We then evaluate the results of a survey launched in 2012 that addresses the technical understanding of the product and what impact it has on the social life of their users. Our findings suggest that Mint, on average, saves more than one hour of people's time per week, floors are cleaner leading to healthier homes and lives, systematic cleaning is seen as an important feature, and modifications to the environment to support the navigation of the robot are largely accepted."}
{"_id": "ee137192f06f06932f5cdf71e46cef9eca260ae2", "title": "Audit and Analysis of Impostors: An experimental approach to detect fake profile in online social network", "text": "In the present generation, the social life of every person has become associated with online social networks (OSN). These sites have made drastic changes in the way we socialize. Making friends and keeping in contact with them as well as being updated of their activities, has become easier. But with their rapid growth, problems like fake profiles, online impersonation have also increased. The risk lies in the fact that anybody can create a profile to impersonate a real person on the OSN. The fake profile could be exploited to build online relationship with a targeted person purely through online interactions with the friends of victim.\n In present work, we have proposed experimental framework with which detection of fake profile is feasible within the friend list, however this framework is restricted to a specific online social networking site namely Facebook. This framework extracts data from the friend list and uses it to classify them as real or fake by using unsupervised and supervised machine learning."}
{"_id": "b322827e441137ded5991d61d12c86f67effc2ae", "title": "Blind PSF estimation and methods of deconvolution optimization", "text": "We have shown that the left side null space of the autoregression (AR) matrix operator is the lexicographical presentation of the point spread function (PSF) on condition the AR parameters are common for original and blurred images. The method of inverse PSF evaluation with regularization functional as the function of surface area is offered. The inverse PSF was used for primary image estimation. Two methods of original image estimate optimization were designed basing on maximum entropy generalization of sought and blurred images conditional probability density and regularization. The first method uses balanced variations of convolution and deconvolution transforms to obtaining iterative schema of image optimization. The variations balance was defined by dynamic regularization basing on condition of iteration process convergence. The regularization has dynamic character because depends on current and previous image estimate variations. The second method implements the regularization of deconvolution optimization in curved space with metric defined on image estimate surface. It is basing on target functional invariance to fluctuations of optimal argument value. The given iterative schemas have faster convergence in comparison with known ones, so they can be used for reconstruction of high resolution images series in real time."}
{"_id": "f53fe36c27d32b349619ad75b37f4b1bb583f0b1", "title": "The FEELING of WHAT HAPPENS : BODY AND EMOTION IN THE MAKING OF CONSCIOUSNESS", "text": "Antonio Damasio is a Portuguese-born American behavioral neurologist and neuroscientist. For a person without any knowledge of neuroscience, biology, or psychology, The Feeling of What Happens would presumably be a fascinating read. How often does one typically contemplate their own contemplation? The concepts presented by Damasio are abstract and uncommon, and undoubtedly are a new phenomenon for the average reader."}
{"_id": "ea4854a88e919adfe736fd63261bbd3b79a7bfb9", "title": "MELOGRAPH: Multi-Engine WorkfLOw Graph Processing", "text": "This paper introduces MELOGRAPH, a new system that exposes in the front-end a domain specific language(DSL) for graph processing tasks and in the back-end identifies, ranks and generates source code for the top-N ranked engines. This approach lets the specialized MELOGRAPH be part of a more general multi-engine workflow optimizer. The candidate execution engines are chosen from the contemporaneous Big Data ecosystem: graph databases (e.g. Neo4j, TitanDB, OrientDB, Sparksee/DEX) and robust graph processing frameworks with Java API or packaged libraries of algorithms (e.g. Giraph, Okapi, Flink Gelly, Hama, Gremlin). As MELOGRAPH is work in progress, our current paper stresses upon the state of the art in this field, provides a general architecture and some early implementation insights."}
{"_id": "e739cb4f6a17db37d3c4d6223025cea9ea5bca8c", "title": "Trusted Click: Overcoming Security issues of NFV in the Cloud", "text": "Network Function Virtualization has received a large amount of research and recent efforts have been made to further leverage the cloud to enhance NFV. However, since there are privacy and security issues with using cloud computing, work has been done to allow for operating on encrypted data, which introduces a large amount of overhead in both computation and data, while only providing a limited set of operations, since these encryption schemes are not fully homomorphic. We propose using trusted computing to circumvent these limitations by having hardware enforce data privacy and provide guaranteed computation. Prior work has shown that Intel's Software Guard Extensions can be used to protect the state of network functions, but there are still questions about the usability of SGX in arbitrary NFV applications and the performance of SGX in these applications. We extend prior work to show how SGX can be used in network deployments by extending the Click modular router to perform secure packet processing with SGX. We also present a performance evaluation of SGX on real hardware to show that processing inside of SGX has a negligible performance impact, compared to performing the same processing outside of SGX."}
{"_id": "f5ebb6548f6ec1c5e16e68e1ec38677e43d1fed3", "title": "Parathyroid Hormone PTH(1-34) Formulation that Enables Uniform Coating on a Novel Transdermal Microprojection Delivery System", "text": "Assess formulation parameters to enable >24-h continuous accurate and uniform coating of PTH(1-34) on a novel transdermal microprojection array delivery system. Surface activity and rheology of the liquid formulation was determined by contact angle measurement and cone-plate viscometry. The formulation\u2019s delivery performance was assessed in vivo using the hairless guinea pig model. Peptide gelation was investigated by rheological and viscoelastic behavior changes. Accurate and uniform coating was achieved by formulating the liquid formulation to a preferred contact angle range of 30\u201360\u00b0 with a surfactant and by establishing a Newtonian fluid (defined as a fluid maintaining a constant viscosity with shear rate and time) with a viscosity of \u226520\u00a0cps via adjusting the peptide concentration and using an appropriate acidic counterion. A non-volatile acidic counterion was found critical to compensate for the loss of the volatile acetate counterion to maintain the peptide formulation\u2019s solubility upon rehydration in the skin. Finally, the 15.5% w/w PTH(1-34) concentration was found to be the most physically stable formulation (delayed gelation) in the roll-coating reservoir. With a properly designed coating reservoir for shear force reduction, the liquid formulation could last for more than 24\u00a0h without gelation. The study successfully offered scientific rationales for developing an optimal liquid formulation for a novel titanium microprojection array coating process. The resultant formulation has an enduring physical stability (>24\u00a0h) in the coating reservoir and maintained good in vivo dissolution performance."}
{"_id": "dce7a0550b4d63f6fe2e6908073ce0ce63626b0c", "title": "An Ethics Evaluation Tool for Automating Ethical Decision-Making in Robots and Self-Driving Cars", "text": "As we march down the road of automation in robotics and artificial intelligence, we will need to automate an increasing amount of ethical decision-making in order for our devices to operate independently from us. But automating ethical decision-making raises novel questions for engineers and designers, who will have to make decisions about how to accomplish that task. For example, some ethical decisionmaking involves hard moral cases, which in turn requires user input if we are to respect established norms surrounding autonomy and informed consent. The author considers this and other ethical considerations that accompany the automation of ethical decision-making. He proposes some general ethical requirements that should be taken into account in the design room, and sketches a design tool that can be integrated into the design process to help engineers, designers, ethicists, and policymakers decide how best to automate certain forms of ethical decision-making."}
{"_id": "87c13f4ec110495837056295909ceb503158c821", "title": "Peppermint oil for treatment of irritable bowel syndrome.", "text": "22 AM J HEALTH-SYST PHARM | VOLUME 73 | NUMBER 2 | JANUARY 15, 2016 Peppermint oil for treatment of irritable bowel syndrome Irritable bowel syndrome (IBS) is a chronic gastrointestinal disorder that affects 5\u201315% of people worldwide. Typically, patients with IBS experience abdominal pain or discomfort and constipation or diarrhea with other gastrointestinal symptoms, such as abdominal distention and bloating; these symptoms reduce quality of life and can be difficult to control with currently available prescription agents. For these reasons, patients are often compelled to try nonprescription therapies, including natural products. In studies conducted in Australia and the United Kingdom, about 20\u201350% of patients with IBS reported using complementary and alternative medicines. Thus, it is important that healthcare professionals are knowledgeable of these therapies, including dietary supplements such as peppermint oil. Historically, the ingestion of peppermint oil has been associated with effects on the gastrointestinal tract. It is a classic essential oil known to have carminative properties (i.e., it is a naturally occurring remedy thought to help decrease bloating and gas by allowing passage of flatus) and may even be beneficial as an antiemetic. Peppermint oil comes from a perennial herb (Mentha \u00d7 piperita), a plant found across North America and Europe. Mentha \u00d7 piperita is a sterile hybrid of two herbs, spearmint (Menthaspicata) and water mint (Menthaaquatica). The main constituents of peppermint oil include menthol (35\u201355%), menthone (20\u201331%), menthyl acetate (3\u201310%), isomenthone, 1,8-cineole, limonene, b-myrcene, and carvone. Peppermint oil may be obtained through steam distillation of flowering parts that grow aboveground. It is a volatile oil, with menthol accounting for a majority of its potency. Due to the various constituents of peppermint oil, it has a variety of uses, including topical application as an antiseptic and for aches and pains, inhalation as aromatherapy, and oral formulations for flavoring or for use as digestive aids. Relaxation of intestinal muscle, both in vivo and in vitro, and relaxation of the lower esophageal sphincter have been reported with the use of peppermint oil, which is thought of as an antispasmodic that may confer benefits in conditions such as IBS. Several clinical studies of the use of peppermint oil for the treatment of"}
{"_id": "8559adcb55b96ca112745566a9b96988fe51ba9a", "title": "Human motion and emotion parameterization", "text": "This paper describes the methods and data structures used for generating new human motions out of existing motion data. These existing animation data are derived from live motion capture or produced through the traditional animation tools. We present a survey of techniques for generating new animations by physical and emotional parameterizations. Discussed new animations are generated by Fourier transformations, transformations of velocity and amplitude and scattered data interpolation in this report."}
{"_id": "b8d3bca50ccc573c5cb99f7d201e8acce6618f04", "title": "An Algorithm for Drawing General Undirected Graphs Tomihisa Kamada and Satoru Kawai", "text": "Graphs (networks) are very common data structures which are handled in computers. Diagrams are widely used to represent the graph structures visually in many information systems. In order to automatically draw the diagrams which are, for example, state graphs, data-flow graphs, Petri nets, and entity-relationship diagrams, basic graph drawing algorithms are required. The state of the art in automatic drawing is surveyed comprehensively in [7,19]. There have been only a few algorithms for general undirected graphs. This paper presents a simple but successful algorithm for drawing undirected graphs and weighted graphs. The basic idea of our algorithm is as follows. We regard the desirable \"geometric\" (Euclidean) distance between two vertices in the drawing as the \"graph theoretic\" distance between them in the corresponding graph. We introduce a virtual dynamic system in which every two vertices are connected by a \"spring\" of such desirable length. Then, we regard the optimal layout of vertices as the state in which the total spring energy of the system is minimal. The \"spring\" idea for drawing general graphs was introduced in [6], and similar methods were used for drawing planar graphs with fixed boundary [2,20]. This paper brings a new significant result in graph drawing based on the spring model."}
{"_id": "3669f95b0a3224c62d4ddfcebc174dee613e07fc", "title": "Topic Model for Identifying Suicidal Ideation in Chinese Microblog", "text": "Suicide is one of major public health problems worldwide. Traditionally, suicidal ideation is assessed by surveys or interviews, which lacks of a real-time assessment of personal mental state. Online social networks, with large amount of user-generated data, offer opportunities to gain insights of suicide assessment and prevention. In this paper, we explore potentiality to identify and monitor suicide expressed in microblog on social networks. First, we identify users who have committed suicide and collect millions of microblogs from social networks. Second, we build suicide psychological lexicon by psychological standards and word embedding technique. Third, by leveraging both language styles and online behaviors, we employ Topic Model and other machine learning algorithms to identify suicidal ideation. Our approach achieves the best results on topic-500, yielding F1 \u2212 measure of 80.0%, Precision of 87.1%, Recall of 73.9%, and Accuracy of 93.2%. Furthermore, a prototype system for monitoring suicidal ideation on several social networks is deployed."}
{"_id": "d1fda9eb1cb03baa331baae5805d08d81208f730", "title": "Unusual coexistence of caudal duplication and caudal regression syndromes.", "text": "Caudal duplication syndrome includes anomalies of the genitourinary system, gastrointestinal tract, and the distal neural tube. Caudal regression syndrome presents with lumbosacral hypogenesis, anomalies of the lower gastrointestinal tract, genitourinary system, and limb anomalies. Both happen as a result of insult to the caudal cell mass. We present a child having features consistent with both entities."}
{"_id": "ab19cbea5c61536b616cfa7654cf01bf0621b83f", "title": "Biodynamic Excisional Skin Tension Lines for Cutaneous Surgery", "text": ""}
{"_id": "ab7b1542251ff46971704cd24562062f59901fb8", "title": "Qualitative research methods in mental health.", "text": "As the evidence base for the study of mental health problems develops, there is a need for increasingly rigorous and systematic research methodologies. Complex questions require complex methodological approaches. Recognising this, the MRC guidelines for developing and testing complex interventions place qualitative methods as integral to each stage of intervention development and implementation. However, mental health research has lagged behind many other healthcare specialities in using qualitative methods within its evidence base. Rigour in qualitative research raises many similar issues to quantitative research and also some additional challenges. This article examines the role of qualitative methods within mental heath research, describes key methodological and analytical approaches and offers guidance on how to differentiate between poor and good quality qualitative research."}
{"_id": "01e0a7cdbf9851a30f7dc31dc79adc2a7bde1c9f", "title": "Reliability in the utility computing era: Towards reliable Fog computing", "text": "This paper considers current paradigms in computing and outlines the most important aspects concerning their reliability. The Fog computing paradigm as a non-trivial extension of the Cloud is considered and the reliability of the networks of smart devices are discussed. Combining the reliability requirements of grid and cloud paradigms with the reliability requirements of networks of sensor and actuators it follows that designing a reliable Fog computing platform is feasible."}
{"_id": "1b90ee5c846aafe7feb38b439a3e8fa212757899", "title": "Detection and analysis of drive-by-download attacks and malicious JavaScript code", "text": "JavaScript is a browser scripting language that allows developers to create sophisticated client-side interfaces for web applications. However, JavaScript code is also used to carry out attacks against the user's browser and its extensions. These attacks usually result in the download of additional malware that takes complete control of the victim's platform, and are, therefore, called \"drive-by downloads.\" Unfortunately, the dynamic nature of the JavaScript language and its tight integration with the browser make it difficult to detect and block malicious JavaScript code.\n This paper presents a novel approach to the detection and analysis of malicious JavaScript code. Our approach combines anomaly detection with emulation to automatically identify malicious JavaScript code and to support its analysis. We developed a system that uses a number of features and machine-learning techniques to establish the characteristics of normal JavaScript code. Then, during detection, the system is able to identify anomalous JavaScript code by emulating its behavior and comparing it to the established profiles. In addition to identifying malicious code, the system is able to support the analysis of obfuscated code and to generate detection signatures for signature-based systems. The system has been made publicly available and has been used by thousands of analysts."}
{"_id": "3032182c47b75d9c1d16877815dab8f8637631a2", "title": "Beyond blacklists: learning to detect malicious web sites from suspicious URLs", "text": "Malicious Web sites are a cornerstone of Internet criminal activities. As a result, there has been broad interest in developing systems to prevent the end user from visiting such sites. In this paper, we describe an approach to this problem based on automated URL classification, using statistical methods to discover the tell-tale lexical and host-based properties of malicious Web site URLs. These methods are able to learn highly predictive models by extracting and automatically analyzing tens of thousands of features potentially indicative of suspicious URLs. The resulting classifiers obtain 95-99% accuracy, detecting large numbers of malicious Web sites from their URLs, with only modest false positives."}
{"_id": "6ba2b0a92408789eec23c008a9beb1b574b42470", "title": "Anomaly Based Web Phishing Page Detection", "text": "Many anti-phishing schemes have recently been proposed in literature. Despite all those efforts, the threat of phishing attacks is not mitigated. One of the main reasons is that phishing attackers have the adaptability to change their tactics with little cost. In this paper, we propose a novel approach, which is independent of any specific phishing implementation. Our idea is to examine the anomalies in Web pages, in particular, the discrepancy between a Web site's identity and its structural features and HTTP transactions. It demands neither user expertise nor prior knowledge of the Web site. The evasion of our phishing detection entails high cost to the adversary. As shown by the experiments, our phishing detector functions with low miss rate and low false-positive rate"}
{"_id": "9cbe8c8ba680a4e55517a8cf322603334ac68be1", "title": "Effective analysis, characterization, and detection of malicious web pages", "text": "The steady evolution of the Web has paved the way for miscreants to take advantage of vulnerabilities to embed malicious content into web pages. Up on a visit, malicious web pages steal sensitive data, redirect victims to other malicious targets, or cease control of victim's system to mount future attacks. Approaches to detect malicious web pages have been reactively effective at special classes of attacks like drive-by-downloads. However, the prevalence and complexity of attacks by malicious web pages is still worrisome. The main challenges in this problem domain are (1) fine-grained capturing and characterization of attack payloads (2) evolution of web page artifacts and (3) exibility and scalability of detection techniques with a fast-changing threat landscape. To this end, we proposed a holistic approach that leverages static analysis, dynamic analysis, machine learning, and evolutionary searching and optimization to effectively analyze and detect malicious web pages. We do so by: introducing novel features to capture fine-grained snapshot of malicious web pages, holistic characterization of malicious web pages, and application of evolutionary techniques to fine-tune learning-based detection models pertinent to evolution of attack payloads. In this paper, we present key intuition and details of our approach, results obtained so far, and future work."}
{"_id": "d69ae114a54a0295fe0a882d205611a121f981e1", "title": "ADAM: Detecting Intrusions by Data Mining", "text": "Intrusion detection systems have traditionally been based on the characterization of an attack and the tracking of the activity on the system to see if it matches that characterization. Recently, new intrusion detection systems based on data mining are making their appearance in the field. This paper describes the design and experiences with the ADAM ( Audit Data Analysis and Mining) system, which we use as a testbed to study how useful data mining techniques can be in intrusion detection. Keywords\u2014Intrusion Detection, Data Mining, Association Rules, Classifiers."}
{"_id": "a3fe9f3b248417db3cdcf07ab6f9a63c03a6345f", "title": "Robust Cell Detection and Segmentation in Histopathological Images Using Sparse Reconstruction and Stacked Denoising Autoencoders", "text": "Computer-aided diagnosis (CAD) is a promising tool for accurate and consistent diagnosis and prognosis. Cell detection and segmentation are essential steps for CAD. These tasks are challenging due to variations in cell shapes, touching cells, and cluttered background. In this paper, we present a cell detection and segmentation algorithm using the sparse reconstruction with trivial templates and a stacked denoising autoencoder (sDAE). The sparse reconstruction handles the shape variations by representing a testing patch as a linear combination of shapes in the learned dictionary. Trivial templates are used to model the touching parts. The sDAE, trained with the original data and their structured labels, is used for cell segmentation. To the best of our knowledge, this is the first study to apply sparse reconstruction and sDAE with structured labels for cell detection and segmentation. The proposed method is extensively tested on two data sets containing more than 3000 cells obtained from brain tumor and lung cancer images. Our algorithm achieves the best performance compared with other state of the arts."}
{"_id": "636bb6a9fd77f6811fc0339c44542ab25ba552cc", "title": "A shared \" passengers & goods \" city logistics system", "text": "Many strategic planning models have been developed to help decision making in city logistics. Such models do not take into account, or very few, the flow of passengers because the considered unit does not have the same nature (a person is active and a good is passive). However, it seems fundamental to gather the goods and the passengers in one model when their respective transports interact with each other. In this context, we suggest assessing a shared passengers & goods city logistics system where the spare capacity of public transport is used to distribute goods toward the city core. We model the problem as a vehicle routing problem with transfers and give a mathematical formulation. Then we propose an Adaptive Large Neighborhood Search (ALNS) to solve it. This approach is evaluated on data sets generated following a field study in the city of La Rochelle in France."}
{"_id": "86f7488e6ad64ad3a7aab65f936c9686aee91a1a", "title": "Speaker Identification using Mel Frequency Cepstral Coefficient and BPNN", "text": "Speech processing is emerged as one of the important application area of digital signal processing. Various fields for research in speech processing are speech recognition, speaker recognition, speech synthesis, speech coding etc. The objective of automatic speaker recognition is to extract, characterize and recognize the information about speaker identity. Feature extraction is the first step for speaker recognition. Many algorithms are suggested/developed by the researchers for feature extraction. In this work, the Mel Frequency Cepstrum Coefficient (MFCC) feature has been used for designing a text dependent speaker identification system. BPNN is used for identification of speaker after training the feature set from MFCC. Some modifications to the existing technique of MFCC for feature extraction are also suggested to improve the speaker recognition efficiency. Information from speech recognition can be used in various ways in state-of-the-art speaker recognition systems. This includes the obvious use of recognized words to enable the use of text-dependent speaker modeling techniques when the words spoken are not given. Furthermore, it has been shown that the choice of words and phones itself can be a useful indicator of speaker identity. Also, recognizer output enables higher-level features, in particular those related to prosodic properties of speech. Keywords\u2014 Speaker identification, BPNN, MFCC, speech processing, feature extraction, speech signal"}
{"_id": "8188d1381f8c77f7df0117fd0dab1919693c1295", "title": "Language support for fast and reliable message-based communication in singularity OS", "text": "Message-based communication offers the potential benefits of providing stronger specification and cleaner separation between components. Compared with shared-memory interactions, message passing has the potential disadvantages of more expensive data exchange (no direct sharing) and more complicated programming.In this paper we report on the language, verification, and run-time system features that make messages practical as the sole means of communication between processes in the Singularity operating system. We show that using advanced programming language and verification techniques, it is possible to provide and enforce strong system-wide invariants that enable efficient communication and low-overhead software-based process isolation. Furthermore, specifications on communication channels help in detecting programmer mistakes early---namely at compile-time---thereby reducing the difficulty of the message-based programming model.The paper describes our communication invariants, the language and verification features that support them, as well as implementation details of the infrastructure. A number of benchmarks show the competitiveness of this approach."}
{"_id": "6c0cfbb0e02b8d5ea3f0d6f94eb25c7b93ff3e85", "title": "Timeline generation with social attention", "text": "Timeline generation is an important research task which can help users to have a quick understanding of the overall evolution of any given topic. It thus attracts much attention from research communities in recent years. Nevertheless, existing work on timeline generation often ignores an important factor, the attention attracted to topics of interest (hereafter termed \"social attention\"). Without taking into consideration social attention, the generated timelines may not reflect users' collective interests. In this paper, we study how to incorporate social attention in the generation of timeline summaries. In particular, for a given topic, we capture social attention by learning users' collective interests in the form of word distributions from Twitter, which are subsequently incorporated into a unified framework for timeline summary generation. We construct four evaluation sets over six diverse topics. We demonstrate that our proposed approach is able to generate both informative and interesting timelines. Our work sheds light on the feasibility of incorporating social attention into traditional text mining tasks."}
{"_id": "998065c6747d8fb05dca5977415179e20371c3d4", "title": "Analyzing the State of Static Analysis: A Large-Scale Evaluation in Open Source Software", "text": "The use of automatic static analysis has been a software engineering best practice for decades. However, we still do not know a lot about its use in real-world software projects: How prevalent is the use of Automated Static Analysis Tools (ASATs) such as FindBugs and JSHint? How do developers use these tools, and how does their use evolve over time? We research these questions in two studies on nine different ASATs for Java, JavaScript, Ruby, and Python with a population of 122 and 168,214 open-source projects. To compare warnings across the ASATs, we introduce the General Defect Classification (GDC) and provide a grounded-theory-derived mapping of 1,825 ASAT-specific warnings to 16 top-level GDC classes. Our results show that ASAT use is widespread, but not ubiquitous, and that projects typically do not enforce a strict policy on ASAT use. Most ASAT configurations deviate slightly from the default, but hardly any introduce new custom analyses. Only a very small set of default ASAT analyses is widely changed. Finally, most ASAT configurations, once introduced, never change. If they do, the changes are small and have a tendency to occur within one day of the configuration's initial introduction."}
{"_id": "587f8411391bf2f9d7586eed05416977c6024dd0", "title": "Microrobot Design Using Fiber Reinforced Composites", "text": "Mobile microrobots with characteristic dimensions on the order of 1cm are difficult to design using either MEMS (microelectromechanical systems) technology or precision machining. This is due to the challenges associated with constructing the high strength links and high-speed, low-loss joints with micron scale features required for such systems. Here we present an entirely new framework for creating microrobots which makes novel use of composite materials. This framework includes a new fabrication process termed Smart Composite Microstructures (SCM) for integrating rigid links and large angle flexure joints through a laser micromachining and lamination process. We also present solutions to actuation and integrated wiring issues at this scale using SCM. Along with simple design rules that are customized for this process, our new complete microrobotic framework is a cheaper, quicker, and altogether superior method for creating microrobots that we hope will become the paradigm for robots at this scale."}
{"_id": "e5e989029e7e87fe8faddd233a97705547d06dda", "title": "Mr. DLib's Living Lab for Scholarly Recommendations", "text": "We introduce the first living lab for scholarly recommender systems. This lab allows recommender-system researchers to conduct online evaluations of their novel algorithms for scholarly recommendations, i.e., research papers, citations, conferences, research grants etc. Recommendations are delivered through the living lab \u0301s API in platforms such as reference management software and digital libraries. The living lab is built on top of the recommender system as-a-service Mr. DLib. Current partners are the reference management software JabRef and the CORE research team. We present the architecture of Mr. DLib\u2019s living lab as well as usage statistics on the first ten months of operating it. During this time, 970,517 recommendations were delivered with a mean click-through rate of 0.22%."}
{"_id": "a75a6ed085fb57762fa148b82aa47607c0d9d92c", "title": "A Genetic Algorithm Approach to Dynamic Job Shop Scheduling Problems", "text": "This paper describes a genetic algorithm approach to the dynamic job shop scheduling problem with jobs arriving continually. Both deterministic and stochastic models of the dynamic problem were investigated. The objective functions examined were weighted flow time, maximum tardiness, weighted tardiness, weighted lateness, weighted number of tardy jobs, and weighted earliness plus weighted tardi-ness. In the stochastic model, we further tested the approach under various manufacturing environments with respect to the machine workload, imbalance of machine workload, and due date tightness. The results indicate that the approach performs well and is robust with regard to the objective function and the manufacturing environment in comparison with priority rule approaches."}
{"_id": "7703a2c5468ecbee5b62c048339a03358ed5fe19", "title": "Recurrent Neural Aligner: An Encoder-Decoder Neural Network Model for Sequence to Sequence Mapping", "text": "We introduce an encoder-decoder recurrent neural network model called Recurrent Neural Aligner (RNA) that can be used for sequence to sequence mapping tasks. Like connectionist temporal classification (CTC) models, RNA defines a probability distribution over target label sequences including blank labels corresponding to each time step in input. The probability of a label sequence is calculated by marginalizing over all possible blank label positions. Unlike CTC, RNA does not make a conditional independence assumption for label predictions; it uses the predicted label at time t\u22121 as an additional input to the recurrent model when predicting the label at time t. We apply this model to end-to-end speech recognition. RNA is capable of streaming recognition since the decoder does not employ attention mechanism. The model is trained on transcribed acoustic data to predict graphemes and no external language and pronunciation models are used for decoding. We employ an approximate dynamic programming method to optimize negative log likelihood, and a sampling-based sequence discriminative training technique to fine-tune the model to minimize expected word error rate. We show that the model achieves competitive accuracy without using an external language model nor doing beam search decoding."}
{"_id": "a23179010e83ebdc528b4318bcea8edace96cbe5", "title": "Effective Bug Triage Based on Historical Bug-Fix Information", "text": "For complex and popular software, project teams could receive a large number of bug reports. It is often tedious and costly to manually assign these bug reports to developers who have the expertise to fix the bugs. Many bug triage techniques have been proposed to automate this process. In this paper, we describe our study on applying conventional bug triage techniques to projects of different sizes. We find that the effectiveness of a bug triage technique largely depends on the size of a project team (measured in terms of the number of developers). The conventional bug triage methods become less effective when the number of developers increases. To further improve the effectiveness of bug triage for large projects, we propose a novel recommendation method called Bug Fixer, which recommends developers for a new bug report based on historical bug-fix information. Bug Fixer constructs a Developer-Component-Bug (DCB) network, which models the relationship between developers and source code components, as well as the relationship between the components and their associated bugs. A DCB network captures the knowledge of \"who fixed what, where\". For a new bug report, Bug Fixer uses a DCB network to recommend to triager a list of suitable developers who could fix this bug. We evaluate Bug Fixer on three large-scale open source projects and two smaller industrial projects. The experimental results show that the proposed method outperforms the existing methods for large projects and achieves comparable performance for small projects."}
{"_id": "0b242d5123f79defd5f775d49d8a7047ad3153bc", "title": "How Important is Weight Symmetry in Backpropagation?", "text": "Gradient backpropagation (BP) requires symmetric feedforward and feedback connections\u2014the same weights must be used for forward and backward passes. This \u201cweight transport problem\u201d [1] is thought to be one of the main reasons of BP\u2019s biological implausibility. Using 15 different classification datasets, we systematically study to what extent BP really depends on weight symmetry. In a study that turned out to be surprisingly similar in spirit to Lillicrap et al.\u2019s demonstration [2] but orthogonal in its results, our experiments indicate that: (1) the magnitudes of feedback weights do not matter to performance (2) the signs of feedback weights do matter\u2014the more concordant signs between feedforward and their corresponding feedback connections, the better (3) with feedback weights having random magnitudes and 100% concordant signs, we were able to achieve the same or even better performance than SGD. (4) some normalizations/stabilizations are indispensable for such asymmetric BP to work, namely Batch Normalization (BN) [3] and/or a \u201cBatch Manhattan\u201d (BM) update rule. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF 1231216. ar X iv :1 51 0. 05 06 7v 3 [ cs .L G ] 2 D ec 2 01 5"}
{"_id": "cfdfb7fecab1c795a52d0d201064fe876e0aae2f", "title": "Smarter universities: A vision for the fast changing digital era", "text": "In this paper we analyze the current situation of education in universities, with particular reference to the European scenario. Specifically, we observe that recent evolutions, such as pervasive networking and other enabling technologies, have been dramatically changing human life, knowledge acquisition, and the way works are performed and people learn. In this societal change, universities must maintain their leading role. Historically, they set trends primarily in education but now they are called to drive the change in other aspects too, such as management, safety, and environment protection. The availability of newer and newer technology reflects on how the relevant processes should be performed in the current fast changing digital era. This leads to the adoption of a variety of smart solutions in university environments to enhance the quality of life and to improve the performances of both teachers and students. Nevertheless, we argue that being smart is not enough for a modern university. In fact, universities should better become smarter. By \u201csmarter university\u201d we mean a place where knowledge is shared between employees, teachers, students, and all stakeholders in a seamless way. In this paper we propose, and discuss a smarter university model, derived from the one designed for the development of"}
{"_id": "baca9a36855ee7e5d80a860072be24a865ec8bf1", "title": "Impact of dietary fiber intake on glycemic control, cardiovascular risk factors and chronic kidney disease in Japanese patients with type 2 diabetes mellitus: the Fukuoka Diabetes Registry", "text": "BACKGROUND\nDietary fiber is beneficial for the treatment of type 2 diabetes mellitus, although it is consumed differently in ethnic foods around the world. We investigated the association between dietary fiber intake and obesity, glycemic control, cardiovascular risk factors and chronic kidney disease in Japanese type 2 diabetic patients.\n\n\nMETHODS\nA total of 4,399 patients were assessed for dietary fiber intake using a brief self-administered diet history questionnaire. The associations between dietary fiber intake and various cardiovascular risk factors were investigated cross-sectionally.\n\n\nRESULTS\nBody mass index, fasting plasma glucose, HbA1c, triglyceride and high-sensitivity C-reactive protein negatively associated with dietary fiber intake after adjusting for age, sex, duration of diabetes, current smoking, current drinking, total energy intake, fat intake, saturated fatty acid intake, leisure-time physical activity and use of oral hypoglycemic agents or insulin. The homeostasis model assessment insulin sensitivity and HDL cholesterol positively associated with dietary fiber intake. Dietary fiber intake was associated with reduced prevalence of abdominal obesity, hypertension and metabolic syndrome after multivariate adjustments including obesity. Furthermore, dietary fiber intake was associated with lower prevalence of albuminuria, low estimated glomerular filtration rate and chronic kidney disease after multivariate adjustments including protein intake. Additional adjustments for obesity, hypertension or metabolic syndrome did not change these associations.\n\n\nCONCLUSION\nWe demonstrated that increased dietary fiber intake was associated with better glycemic control and more favorable cardiovascular disease risk factors including chronic kidney disease in Japanese type 2 diabetic patients. Diabetic patients should be encouraged to consume more dietary fiber in daily life."}
{"_id": "241e2b442812d843dbd30e924c2f2f6ad8e12179", "title": "Concept Decompositions for Large Sparse Text Data Using Clustering", "text": "Unlabeled document collections are becoming increasingly common and available; mining such data sets represents a major contemporary challenge. Using words as features, text documents are often represented as high-dimensional and sparse vectors\u2013a few thousand dimensions and a sparsity of 95 to 99% is typical. In this paper, we study a certain spherical k-means algorithm for clustering such document vectors. The algorithm outputs k disjoint clusters each with a concept vector that is the centroid of the cluster normalized to have unit Euclidean norm. As our first contribution, we empirically demonstrate that, owing to the high-dimensionality and sparsity of the text data, the clusters produced by the algorithm have a certain \u201cfractal-like\u201d and \u201cself-similar\u201d behavior. As our second contribution, we introduce concept decompositions to approximate the matrix of document vectors; these decompositions are obtained by taking the least-squares approximation onto the linear subspace spanned by all the concept vectors. We empirically establish that the approximation errors of the concept decompositions are close to the best possible, namely, to truncated singular value decompositions. As our third contribution, we show that the concept vectors are localized in the word space, are sparse, and tend towards orthonormality. In contrast, the singular vectors are global in the word space and are dense. Nonetheless, we observe the surprising fact that the linear subspaces spanned by the concept vectors and the leading singular vectors are quite close in the sense of small principal angles between them. In conclusion, the concept vectors produced by the spherical k-means algorithm constitute a powerful sparse and localized \u201cbasis\u201d for text data sets."}
{"_id": "46ba80738127b19a892cfe687ece171251abb806", "title": "Attribution versus persuasion as a means for modifying behavior.", "text": "The present research compared the relative effectiveness of an attribution strategy with a persuasion strategy in changing behavior. Study 1 attempted to teach fifth graders not to litter and to clean up after others. An attribution group was repeatedly told that they were neat and tidy people, a persuasion group was repeatedly told that they should be neat and tidy, and a control group received no treatment. Attribution proved considerably more effective in modifying behavior. Study 2 tried to discover whether similar effects would hold for a more central aspect of school performance, math achievement and self-esteem, and whether an attribution of ability would be as effective as an attribution of motivation. Repeatedly attributing to second graders either the ability or the motivation to do well in math proved more effective than comparable persuasion or no-treatment control groups, although a group receiving straight reinforcement for math problem-solving behavior also did well. It is suggested that persuasion often suffers because it involves a negative attribution (a person should be what he is not), while attribution generally gains because it disguises persuasive intent."}
{"_id": "ea76ec431a7dd1c9a5b7ccf9fb0a4cb13b9b5037", "title": "Taslihan Virtual Reconstruction - Interactive Digital Story or a Serious Game", "text": "During the Ottoman period, Taslihan was the largest accommodation complex in Sarajevo, Bosnia and Herzegovina. Today, only one wall remains as a memento of its existence. In this paper, we compare user appreciation of an interactive digital story about this building and of a serious game about Taslihan to see which application offers more knowledge and immersion while bringing this monument to life in the collective memory of the people."}
{"_id": "b7fa7e6f1ebd1653ad0431a85dc46221b8e2a367", "title": "Mobile apps for science learning: Review of research", "text": "This review examined articles on mobile apps for science learning published from 2007 to 2014. A qualitative content analysis was used to investigate the science mobile app research for its mobile app design, underlying theoretical foundations, and students' measured outcomes. This review found that mobile apps for science learning offered a number of similar design features, including technology-based scaffolding, location-aware functionality, visual/audio representations, digital knowledge-construction tools, digital knowledge-sharing mechanisms, and differentiated roles. Many of the studies cited a specific theoretical foundation, predominantly situated learning theory, and applied this to the design of the mobile learning environment. The most common measured outcome was students' basic scientific knowledge or conceptual understanding. A number of recommendations came out of this review. Future studies need to make use of newer, available technologies; isolate the testing of specific app features; and develop additional strategies around using mobile apps for collaboration. Researchers need to make more explicit connections between the instructional principles and the design features of their mobile learning environment in order to better integrate theory with practice. In addition, this review noted that stronger alignment is needed between the underlying theories and measured outcomes, and more studies are needed to assess students' higher-level cognitive outcomes, cognitive load, and skill-based outcomes such as problem solving. Finally, more research is needed on how science mobile apps can be used with more varied science topics and diverse audiences. \u00a9 2015 Elsevier Ltd. All rights reserved."}
{"_id": "c4dece35bb107170c9f76fcb254a191dc15cce27", "title": "Characteristics and Expected Returns in Individual Equity Options", "text": "I study excess returns from selling individual equity option portfolios that are leverageadjusted monthly and delta-hedged daily. Strikingly, I find that several measures of risk rise by maturity, although expected returns decrease. Based on my analysis, I identify three new factors \u2013level, slope, and value\u2013 in option returns, which together explain the cross-sectional variation in expected returns on option portfolios formed on moneyness, maturity and option value (the spread between historical volatility and the Black-Scholes implied volatility). This three-factor model helps explain expected returns on option portfolios formed on a variety of different characteristics that include carry, VRP, volatility momentum, idiosyncratic volatility, illiquidity, etc. While the level premium appears to be a compensation for marketwide volatility and jump shocks, theories of risk-averse financial intermediaries help us to understand the slope and the value premiums."}
{"_id": "0cdf9697538c46db78a948ede0f9b0c605b71d26", "title": "Survey of fraud detection techniques", "text": "Due to the dramatic increase of fraud which results in loss of billions of dollars worldwide each year, several modern techniques in detecting fraud are continually developed and applied to many business fields. Fraud detection involves monitoring the behavior of populations of users in order to estimate, detect, or avoid undesirable behavior. Undesirable behavior is a broad term including delinquency, fraud, intrusion, and account defaulting. This paper presents a survey of current techniques used in credit card fraud detection, telecommunication fraud detection, and computer intrusion detection. The goal of this paper is to provide a comprehensive review of different techniques to detect frauds."}
{"_id": "271c8b6d98ec65db2e5b6b28757c66fea2a5a463", "title": "Measuring emotional intelligence with the MSCEIT V2.0.", "text": "Does a recently introduced ability scale adequately measure emotional intelligence (EI) skills? Using the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT; J. D. Mayer, P. Salovey, & D. R. Caruso, 2002b), the authors examined (a) whether members of a general standardization sample and emotions experts identified the same test answers as correct, (b) the test's reliability, and (c) the possible factor structures of EI. Twenty-one emotions experts endorsed many of the same answers, as did 2,112 members of the standardization sample, and exhibited superior agreement, particularly when research provides clearer answers to test questions (e.g., emotional perception in faces). The MSCEIT achieved reasonable reliability, and confirmatory factor analysis supported theoretical models of EI. These findings help clarify issues raised in earlier articles published in Emotion."}
{"_id": "29452dfebc17d6b1fa57b68971ecae85f73fd1d7", "title": "A complete formalized knowledge representation model for advanced digital forensics timeline analysis", "text": "Having a clear view of events that occurred over time is a difficult objective to achieve in digital investigations (DI). Event reconstruction, which allows investigators to understand the timeline of a crime, is one of the most important step of a DI process. This complex task requires exploration of a large amount of events due to the pervasiveness of new technologies nowadays. Any evidence produced at the end of the investigative process must also meet the requirements of the courts, such as reproducibility, verifiability, validation, etc. For this purpose, we propose a new methodology, supported by theoretical concepts, that can assist investigators through the whole process including the construction and the interpretation of the events describing the case. The proposed approach is based on a model which integrates knowledge of experts from the fields of digital forensics and software development to allow a semantically rich representation of events related to the incident. The main purpose of this model is to allow the analysis of these events in an automatic and efficient way. This paper describes the approach and then focuses on the main conceptual and formal aspects: a formal incident modelization and operators for timeline reconstruction and analysis. \u00a9 2014 Digital Forensics Research Workshop. Published by Elsevier Limited. All rights"}
{"_id": "ec8630ea4cc06b9a51a7aa4ba50b91ccf112437d", "title": "Inverse Reinforcement Learning Based Human Behavior Modeling for Goal Recognition in Dynamic Local Network Interdiction", "text": "Goal recognition is the task of inferring an agent\u2019s goals given some or all of the agent\u2019s observed actions. Among different ways of problem formulation, goal recognition can be solved as a model-based planning problem using off-theshell planners. However, obtaining accurate cost or reward models of an agent and incorporating them into the planning model becomes an issue in real applications. Towards this end, we propose an Inverse Reinforcement Learning (IRL)based opponent behavior modeling method, and apply it in the goal recognition assisted Dynamic Local Network Interdiction (DLNI) problem. We first introduce the overall framework and the DLNI problem domain of our work. After that, an IRL-based human behavior modeling method and Markov Decision Process-based goal recognition are introduced. Experimental results indicate that our learned behavior model has a higher tracking accuracy and yields better interdiction outcomes than other models."}
{"_id": "20432d7fec7b15f414f51a1e4fe1983f353eff9d", "title": "Author Disambiguation using Error-driven Machine Learning with a Ranking Loss Function", "text": "Author disambiguation is the problem of determining whether records in a publications database refer to the same person. A common supervised machine learning approach is to build a classifier to predict whether a pair of records is coreferent, followed by a clustering step to enforce transitivity. However, this approach ignores powerful evidence obtainable by examining sets (rather than pairs) of records, such as the number of publications or co-authors an author has. In this paper we propose a representation that enables these first-order features over sets of records. We then propose a training algorithm well-suited to this representation that is (1) error-driven in that training examples are generated from incorrect predictions on the training data, and (2) rank-based in that the classifier induces a ranking over candidate predictions. We evaluate our algorithms on three author disambiguation datasets and demonstrate error reductions of up to 60% over the standard binary classification approach."}
{"_id": "5243700bf7f0863fc9d350921515767c69f754cd", "title": "Closed-Loop Deep Brain Stimulation Is Superior in Ameliorating Parkinsonism", "text": "Continuous high-frequency deep brain stimulation (DBS) is a widely used therapy for advanced Parkinson's disease (PD) management. However, the mechanisms underlying DBS effects remain enigmatic and are the subject of an ongoing debate. Here, we present and test a closed-loop stimulation strategy for PD in the 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) primate model of PD. Application of pallidal closed-loop stimulation leads to dissociation between changes in basal ganglia (BG) discharge rates and patterns, providing insights into PD pathophysiology. Furthermore, cortico-pallidal closed-loop stimulation has a significantly greater effect on akinesia and on cortical and pallidal discharge patterns than standard open-loop DBS and matched control stimulation paradigms. Thus, closed-loop DBS paradigms, by modulating pathological oscillatory activity rather than the discharge rate of the BG-cortical networks, may afford more effective management of advanced PD. Such strategies have the potential to be effective in additional brain disorders in which a pathological neuronal discharge pattern can be recognized."}
{"_id": "102153467f27d43dd1db8a973846d3ac10ffdc3c", "title": "ECG signal analysis and arrhythmia detection on IoT wearable medical devices", "text": "Healthcare is one of the most rapidly expanding application areas of the Internet of Things (IoT) technology. IoT devices can be used to enable remote health monitoring of patients with chronic diseases such as cardiovascular diseases (CVD). In this paper we develop an algorithm for ECG analysis and classification for heartbeat diagnosis, and implement it on an IoT-based embedded platform. This algorithm is our proposal for a wearable ECG diagnosis device, suitable for 24-hour continuous monitoring of the patient. We use Discrete Wavelet Transform (DWT) for the ECG analysis, and a Support Vector Machine (SVM) classifier. The best classification accuracy achieved is 98.9%, for a feature vector of size 18, and 2493 support vectors. Different implementations of the algorithm on the Galileo board, help demonstrate that the computational cost is such, that the ECG analysis and classification can be performed in real-time."}
{"_id": "7dd5afc660970d82491c9015a52bf23ab79bf650", "title": "Ubiquitous Data Accessing Method in IoT-Based Information System for Emergency Medical Services", "text": "The rapid development of Internet of things (IoT) technology makes it possible for connecting various smart objects together through the Internet and providing more data interoperability methods for application purpose. Recent research shows more potential applications of IoT in information intensive industrial sectors such as healthcare services. However, the diversity of the objects in IoT causes the heterogeneity problem of the data format in IoT platform. Meanwhile, the use of IoT technology in applications has spurred the increase of real-time data, which makes the information storage and accessing more difficult and challenging. In this research, first a semantic data model is proposed to store and interpret IoT data. Then a resource-based data accessing method (UDA-IoT) is designed to acquire and process IoT data ubiquitously to improve the accessibility to IoT data resources. Finally, we present an IoT-based system for emergency medical services to demonstrate how to collect, integrate, and interoperate IoT data flexibly in order to provide support to emergency medical services. The result shows that the resource-based IoT data accessing method is effective in a distributed heterogeneous data environment for supporting data accessing timely and ubiquitously in a cloud and mobile computing platform."}
{"_id": "072a0db716fb6f8332323f076b71554716a7271c", "title": "The impact of the MIT-BIH Arrhythmia Database", "text": "The MIT-BIH Arrhythmia Database was the first generally available set of standard test material for evaluation of arrhythmia detectors, and it has been used for that purpose as well as for basic research into cardiac dynamics at about 500 sites worldwide since 1980. It has lived a far longer life than any of its creators ever expected. Together with the American Heart Association Database, it played an interesting role in stimulating manufacturers of arrhythmia analyzers to compete on the basis of objectively measurable performance, and much of the current appreciation of the value of common databases, both for basic research and for medical device development and evaluation, can be attributed to this experience. In this article, we briefly review the history of the database, describe its contents, discuss what we have learned about database design and construction, and take a look at some of the later projects that have been stimulated by both the successes and the limitations of the MIT-BIH Arrhythmia Database."}
{"_id": "44159c85dec6df7a257cbe697bfc854ecb1ebb0b", "title": "PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals.", "text": "The newly inaugurated Research Resource for Complex Physiologic Signals, which was created under the auspices of the National Center for Research Resources of the National Institutes of Health, is intended to stimulate current research and new investigations in the study of cardiovascular and other complex biomedical signals. The resource has 3 interdependent components. PhysioBank is a large and growing archive of well-characterized digital recordings of physiological signals and related data for use by the biomedical research community. It currently includes databases of multiparameter cardiopulmonary, neural, and other biomedical signals from healthy subjects and from patients with a variety of conditions with major public health implications, including life-threatening arrhythmias, congestive heart failure, sleep apnea, neurological disorders, and aging. PhysioToolkit is a library of open-source software for physiological signal processing and analysis, the detection of physiologically significant events using both classic techniques and novel methods based on statistical physics and nonlinear dynamics, the interactive display and characterization of signals, the creation of new databases, the simulation of physiological and other signals, the quantitative evaluation and comparison of analysis methods, and the analysis of nonstationary processes. PhysioNet is an on-line forum for the dissemination and exchange of recorded biomedical signals and open-source software for analyzing them. It provides facilities for the cooperative analysis of data and the evaluation of proposed new algorithms. In addition to providing free electronic access to PhysioBank data and PhysioToolkit software via the World Wide Web (http://www.physionet. org), PhysioNet offers services and training via on-line tutorials to assist users with varying levels of expertise."}
{"_id": "f9ac31a6550f449454ceedebc01826f5c6785a26", "title": "A One-Stage Correction of the Blepharophimosis Syndrome Using a Standard Combination of Surgical Techniques", "text": "The aim of this study was to evaluate the efficacy of a one-stage treatment for the blepharophimosis-ptosis-epicanthus inversus syndrome (BPES) using a combination of standard surgical techniques. This is a retrospective interventional case series study of 21 BPES patients with a 1-year minimum follow-up period. The one-stage intervention combined three different surgical procedures in the following order: Z-epicanthoplasty for the epicanthus, transnasal wiring of the medial canthal ligaments for the telecanthus, and a bilateral fascia lata sling for ptosis correction. Preoperative and postoperative measurements of the horizontal lid fissure length (HFL), vertical lid fissure width (VFW), nasal intercanthal distance (ICD), and the ratio between the intercanthal distance and the horizontal fissure length (ICD/HFL) were analyzed using Student\u2019s t test for paired variables. The mean preoperative measurements were 4.95\u00a0\u00b1\u00a01.13\u00a0mm for the VFW, 20.90\u00a0\u00b1\u00a02.14\u00a0mm for the HFL, 42.45\u00a0\u00b1\u00a02.19\u00a0mm for the ICD, and 2.04\u00a0\u00b1\u00a00.14\u00a0mm for the ICD/HFL ratio. The mean postoperative measurements were 7.93\u00a0\u00b1\u00a01.02\u00a0mm for the VFW, 26.36\u00a0\u00b1\u00a01.40\u00a0mm for the HFL, 32.07\u00a0\u00b1\u00a01.96\u00a0mm for the ICD, and 1.23\u00a0\u00b1\u00a00.09\u00a0mm for the ICD/HFL ratio. All these values and their differences were statistically significant (P\u00a0<\u00a00.0001). All of the patients developed symmetric postoperative inferior version lagophthalmus, a complication that tended to decrease over time. One-stage correction of BPES is safe and efficient with the surgical techniques described."}
{"_id": "3d425a44b54f505a5d280653a3b4f992d4836c80", "title": "Time and cognitive load in working memory.", "text": "According to the time-based resource-sharing model (P. Barrouillet, S. Bernardin, & V. Camos, 2004), the cognitive load a given task involves is a function of the proportion of time during which it captures attention, thus impeding other attention-demanding processes. Accordingly, the present study demonstrates that the disruptive effect on concurrent maintenance of memory retrievals and response selections increases with their duration. Moreover, the effect on recall performance of concurrent activities does not go beyond their duration insofar as the processes are attention demanding. Finally, these effects are not modality specific, as spatial processing was found to disrupt verbal maintenance. These results suggest a sequential and time-based function of working memory in which processing and storage rely on a single and general purpose attentional resource needed to run executive processes devoted to constructing, maintaining, and modifying ephemeral representations."}
{"_id": "e60e755af36853ba2c6c6f6b60e82903df167e56", "title": "Supervised LogEuclidean Metric Learning for Symmetric Positive Definite Matrices", "text": "Metric learning has been shown to be highly effective to improve the performance of nearest neighbor classification. In this paper, we address the problem of metric learning for symmetric positive definite (SPD) matrices such as covariance matrices, which arise in many real-world applications. Naively using standard Mahalanobis metric learning methods under the Euclidean geometry for SPD matrices is not appropriate, because the difference of SPD matrices can be a non-SPD matrix and thus the obtained solution can be uninterpretable. To cope with this problem, we propose to use a properly parameterized LogEuclidean distance and optimize the metric with respect to kernel-target alignment, which is a supervised criterion for kernel learning. Then the resulting non-trivial optimization problem is solved by utilizing the Riemannian geometry. Finally, we experimentally demonstrate the usefulness of our LogEuclidean metric learning algorithm on real-world classification tasks for EEG signals and texture patches."}
{"_id": "08973fdcd763a2c7855dae39e8040df69aa420dc", "title": "Visualizing Scholarly Publications and Citations to Enhance Author Profiles", "text": "With data on scholarly publications becoming more abundant and accessible, there exist new opportunities for using this information to provide rich author profiles to display and explore scholarly work. We present a pair of linked visualizations connected to the Microsoft Academic Graph that can be used to explore the publications and citations of individual authors. We provide an online application with which a user can manage collections of papers and generate these visualizations."}
{"_id": "6e785a402a60353e6e22d6883d3998940dcaea96", "title": "Three Models for the Description of Language", "text": "The grammar of a language is a device that describes the structure of that language. The grammar is comprised of a set of rules whose goal is twofold: first these rules can be used to create sentences of the associated language and only these, and second they can be used to classify whether a given sentence is an element of the language or not. The goal of a linguist is to discover grammars that are simple and yet are able to fully span the language. In [1] Chomsky describes three possible options of increasing complexity for English grammars: Finite-state, Phrase Structure and Transformational. This paper briefly present these three grammars and summarizes Chomsky\u2019s analysis and results which state that finite-state grammars are inadequate because they fail to span all possible sentences of the English language, and phrase structure grammar is overly complex."}
{"_id": "b0572307afb7e769f360267d893500893f5d6b3d", "title": "SemEval-2017 Task 7: Detection and Interpretation of English Puns", "text": "Apun is a form of wordplay in which aword suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another word, for an intended humorous or rhetorical effect. Though a recurrent and expected feature in many discourse types, puns stymie traditional approaches to computational lexical semantics because they violate their one-sense-percontext assumption. This paper describes the first competitive evaluation for the automatic detection, location, and interpretation of puns. We describe the motivation for these tasks, the evaluation methods, and the manually annotated data set. Finally, we present an overview and discussion of the participating systems\u2019 methodologies, resources, and results."}
{"_id": "a92eac4415719698d7d2097ef9564e7b36699010", "title": "Stakeholder engagement, social auditing and corporate sustainability", "text": "Purpose \u2013 To identify the applicability of social auditing as an approach of engaging stakeholders in assessing and reporting on corporate sustainability and its performance. Design/methodology/approach \u2013 Drawing upon the framework of AA1000 and the social auditing studies, this paper links stakeholder engagement, social auditing and corporate sustainability with a view to applying dialogue-based social auditing to address corporate sustainability. Findings \u2013 This paper identifies a \u201cmatch\u201d between corporate sustainability and social auditing, as both aim at improving the social, environmental and economic performance of an organisation, considering the well-being of a wider range of stakeholders and requiring the engagement of stakeholders in the process. This paper suggests that social auditing through engaging stakeholders via dialogue could be applied to build trusts, identify commitment and promote co-operation amongst stakeholders and corporations. Research limitations/implications \u2013 This research requires further empirical research into the practicality of social auditing in addressing corporate sustainability and the determination of the limitations of dialogue-based social auditing. Practical implications \u2013 Social auditing has been identified as a useful mechanism of balancing differing interests among stakeholders and corporations in a democratic business society. The application of social auditing in developing and achieving corporate sustainability has apparently practical implications. Originality/value \u2013 This paper examines the applicability of dialogue-based social auditing in helping business to move towards sustainability. Social auditing as a process of assessing and reporting on corporate social and environmental performance through engaging stakeholders via dialogue could be applied to build trusts, identify commitment and promote cooperation amongst stakeholders and corporations."}
{"_id": "3b81c6743e909268a6471295eddde13fd6520678", "title": "Using Semantic Web Technologies for Exploratory OLAP: A Survey", "text": "This paper describes the convergence of some of the most influential technologies in the last few years, namely data warehousing (DW), on-line analytical processing (OLAP), and the Semantic Web (SW). OLAP is used by enterprises to derive important business-critical knowledge from data inside the company. However, the most interesting OLAP queries can no longer be answered on internal data alone, external data must also be discovered (most often on the web), acquired, integrated, and (analytically) queried, resulting in a new type of OLAP, exploratory OLAP. When using external data, an important issue is knowing the precise semantics of the data. Here, SW technologies come to the rescue, as they allow semantics (ranging from very simple to very complex) to be specified for web-available resources. SW technologies do not only support capturing the \u201cpassive\u201d semantics, but also support active inference and reasoning on the data. The paper first presents a characterization of DW/OLAP environments, followed by an introduction to the relevant SW foundation concepts. Then, it describes the relationship of multidimensional (MD) models and SW technologies, including the relationship between MD models and SW formalisms. Next, the paper goes on to survey the use of SW technologies for data modeling and data provisioning, including semantic data annotation and semantic-aware extract, transform, and load (ETL) processes. Finally, all the findings are discussed and a number of directions for future research are outlined, including SW support for intelligent MD querying, using SW technologies for providing context to data warehouses, and scalability issues."}
{"_id": "9a8a36ce97b5b5324f4b5e5ee19b61df0b2e756b", "title": "Do cheerfulness , exhilaration , and humor production moderate pain tolerance ? A FACS study", "text": "Prior studies have shown that watching a funny film leads to an increase in pain tolerance. The present study aimed at separating three factors considered potentially essential (mood, behavior, and cognition related to humor) and examined whether they are responsible for this effect. Furthermore, the study examined whether trait cheerfulness and trait seriousness, as measured by the State-TraitCheerfulness-Inventory (STCI; Ruch et al. 1996), moderate changes in pain tolerance. Fifty-sixty female subjects were assigned randomly to three groups, each having a different task to pursue while watching a funny film: (1) get into a cheerful mood without smiling or laughing (\u201dCheerfulness\u201d); (2) smile and laugh extensively (\u201dExhilaration\u201d); and (3) produce a humorous commentary to the film (\u201dHumor production\u201d). Pain tolerance was measured using the cold pressor test before, immediately after, and twenty minutes after the film. Results indicated that pain tolerance increased for participants from before to after watching the funny film and remained high for the twenty minutes. This effect was moderated by facial but not verbal indicators of enjoyment of humor. Participants low in trait seriousness had an overall higher pain tolerance. Subjects with a high score in trait cheerfulness showed an increase in pain tolerance after producing humor while watching the film whereas subjects low in trait cheerfulness showed a similar increase after smiling and laughter during the film. DOI: https://doi.org/10.1515/humr.2004.009 Posted at the Zurich Open Repository and Archive, University of Zurich ZORA URL: https://doi.org/10.5167/uzh-77579 Originally published at: Zweyer, K; Velker, B; Ruch, Willibald (2004). Do cheerfulness, exhilaration, and humor production moderate pain tolerance? A FACS study. HUMOR: International Journal of Humor Research, 17(12):85-119. DOI: https://doi.org/10.1515/humr.2004.009 Do cheerfulness, exhilaration, and humor production moderate pain tolerance? A FACS study KAREN ZWEYER, BARBARA VELKER, and WILLIBALD RUCH"}
{"_id": "f040434dd7dbc965c04b386b6b206325698a635b", "title": "A sneak peek into digital innovations and wearable sensors for cardiac monitoring", "text": "Many mobile phone or tablet applications have been designed to control cardiovascular risk factors (obesity, smoking, sedentary lifestyle, diabetes and hypertension) or to optimize treatment adherence. Some have been shown to be useful but the long-term benefits remain to be demonstrated. Digital stethoscopes make easier the interpretation of abnormal heart sounds, and the development of pocket-sized echo machines may quickly and significantly expand the use of ultrasounds. Daily home monitoring of pulmonary artery pressures with wireless implantable sensors has been shown to be associated with a significant decrease in hospital readmissions for heart failure. There are more and more non-invasive, wireless, and wearable sensors designed to monitor heart rate, heart rate variability, respiratory rate, arterial oxygen saturation, and thoracic fluid content. They have the potential to change the way we monitor and treat patients with cardiovascular diseases in the hospital and beyond. Some may have the ability to improve quality of care, decrease the number of medical visits and hospitalization, and ultimately health care costs. Validation and outcome studies are needed to clarify, among the growing number of digital innovations and wearable sensors, which tools have real clinical value."}
{"_id": "4f96e06db144823d16516af787e96d13073b4316", "title": "Applications of Data Mining Techniques in TelecomChurn Prediction", "text": "In this competitive world, business becomes highly saturated. Especially, the field of telecommunication faces complex challenges due to a number of vibrant competitive service providers. Therefore it has become very difficult for them to retain existing customers. Since the cost of acquiring new customers is much higher than the cost of retaining the existing customers, it is the time for the telecom industries to take necessary steps to retain the customers to stabilize their market value. This paper explores the application of data mining techniques in predicting likely churners and the impact of attribute selection on identifying the churn. It also compares the efficiency of Decision tree and Neural Network classifiers and lists their performances."}
{"_id": "6d8984600e3cd9ae9d6b803f43f2410fa5c0ad0b", "title": "Learning from Corrupted Binary Labels via Class-Probability Estimation", "text": "Many supervised learning problems involve learning from samples whose labels are corrupted in some way. For example, each label may be flipped with some constant probability (learning with label noise), or one may have a pool of unlabelled samples in lieu of negative samples (learning from positive and unlabelled data). This paper uses class-probability estimation to study these and other corruption processes belonging to the mutually contaminated distributions framework (Scott et al., 2013), with three conclusions. First, one can optimise balanced error and AUC without knowledge of the corruption parameters. Second, given estimates of the corruption parameters, one can minimise a range of classification risks. Third, one can estimate corruption parameters via a class-probability estimator (e.g. kernel logistic regression) trained solely on corrupted data. Experiments on label noise tasks corroborate our analysis. 1. Learning from corrupted binary labels In many practical scenarios involving learning from binary labels, one observes samples whose labels are corrupted versions of the actual ground truth. For example, in learning from class-conditional label noise (CCN learning), the labels are flipped with some constant probability (Angluin & Laird, 1988). In positive and unlabelled learning (PU learning), we have access to some positive samples, but in lieu of negative samples only have a pool of samples whose label is unknown (Denis, 1998). More generally, suppose there is a notional clean distribution D over instances and labels. We say a problem involves learning from corrupted Proceedings of the 32 International Conference on Machine Learning, Lille, France, 2015. JMLR: W&CP volume 37. Copyright 2015 by the author(s). binary labels if we observe training samples drawn from some corrupted distribution Dcorr such that the observed labels do not represent those we would observe under D. A fundamental question is whether one can minimise a given performance measure with respect to D, given access only to samples from Dcorr. Intuitively, in general this requires knowledge of the parameters of the corruption process that determines Dcorr. This yields two further questions: are there measures for which knowledge of these corruption parameters is unnecessary, and for other measures, can we estimate these parameters? In this paper, we consider corruption problems belonging to the mutually contaminated distributions framework (Scott et al., 2013). We then study the above questions through the lens of class-probability estimation, with three conclusions. First, optimising balanced error (BER) as-is on corrupted data equivalently optimises BER on clean data, and similarly for the area under the ROC curve (AUC). That is, these measures can be optimised without knowledge of the corruption process parameters; further, we present evidence that these are essentially the only measures with this property. Second, given estimates of the corruption parameters, a range of classification measures can be minimised by thresholding corrupted class-probabilities. Third, under some assumptions, these corruption parameters may be estimated from the range of the corrupted class-probabilities. For all points above, observe that learning requires only corrupted data. 
Further, corrupted class-probability estimation can be seen as treating the observed samples as if they were uncorrupted. Thus, our analysis gives justification (under some assumptions) for this apparent heuristic in problems such as CCN and PU learning. While some of our results are known for the special cases of CCN and PU learning, our interest is in determining to what extent they generalise to other label corruption problems. This is a step towards a unified treatment of these problems. We now fix notation and formalise the problem. Learning from Corrupted Binary Labels via Class-Probability Estimation 2. Background and problem setup Fix an instance space X. We denote byD some distribution over X \u00d7 {\u00b11}, with (X,Y) \u223c D a pair of random variables. Any D may be expressed via the class-conditional distributions (P,Q) = (P(X | Y = 1),P(X | Y = \u22121)) and base rate \u03c0 = P(Y = 1), or equivalently via marginal distribution M = P(X) and class-probability function \u03b7 : x 7\u2192 P(Y = 1 | X = x). When referring to these constituent distributions, we write D as DP,Q,\u03c0 or DM,\u03b7. 2.1. Classifiers, scorers, and risks A classifier is any function f : X\u2192 {\u00b11}. A scorer is any function s : X \u2192 R. Many learning methods (e.g. SVMs) output a scorer, from which a classifier is formed by thresholding about some t \u2208 R. We denote the resulting classifier by thresh(s, t) : x 7\u2192 sign(s(x)\u2212 t). The false positive and false negative rates of a classifier f are denoted FPR(f),FNR(f), and are defined by P X\u223cQ (f(X) = 1) and P X\u223cP (f(X) = \u22121) respectively. Given a function \u03a8: [0, 1] \u2192 [0, 1], a classification performance measure Class\u03a8 : {\u00b11} \u2192 [0, 1] assesses the performance of a classifier f via (Narasimhan et al., 2014) Class\u03a8(f) = \u03a8(FPR (f),FNR(f), \u03c0). A canonical example is the misclassification error, where \u03a8: (u, v, p) 7\u2192 p \u00b7 v+ (1\u2212 p) \u00b7 u. Given a scorer s, we use Class\u03a8(s; t) to refer to Class D \u03a8(thresh(s, t)). The \u03a8-classification regret of a classifier f : X\u2192 {\u00b11} is regret\u03a8(f) = Class D \u03a8(f)\u2212 inf g : X\u2192{\u00b11} Class\u03a8(g). A loss is any function ` : {\u00b11} \u00d7R\u2192 R+. Given a distribution D, the `-risk of a scorer s is defined as L` (s) = E (X,Y)\u223cD [`(Y, s(X))] . (1) The `-regret of a scorer, regret` , is as per the \u03a8-regret. We say ` is strictly proper composite (Reid & Williamson, 2010) if argmins L` (s) is some strictly monotone transformation \u03c8 of \u03b7, i.e. we can recover class-probabilities from the optimal prediction via the link function \u03c8. We call class-probability estimation (CPE) the task of minimising Equation 1 for some strictly proper composite `. The conditional Bayes-risk of a strictly proper composite ` is L` : \u03b7 7\u2192 \u03b7`1(\u03c8(\u03b7)) + (1 \u2212 \u03b7)`\u22121(\u03c8(\u03b7)). We call ` strongly proper composite with modulus \u03bb if L` is \u03bbstrongly concave (Agarwal, 2014). Canonical examples of such losses are the logistic and exponential loss, as used in logistic regression and AdaBoost respectively. Quantity Clean Corrupted Joint distribution D Corr(D,\u03b1, \u03b2, \u03c0corr) or Dcorr Class-conditionals P,Q Pcorr, Qcorr Base rate \u03c0 \u03c0corr Class-probability \u03b7 \u03b7corr \u03a8-optimal threshold t\u03a8 t D corr,\u03a8 Table 1. Common quantities on clean and corrupted distributions. 2.2. 
Learning from contaminated distributions Suppose DP,Q,\u03c0 is some \u201cclean\u201d distribution where performance will be assessed. (We do not assume that D is separable.) In MC learning (Scott et al., 2013), we observe samples from some corrupted distribution Corr(D,\u03b1, \u03b2, \u03c0corr) over X \u00d7 {\u00b11}, for some unknown noise parameters \u03b1, \u03b2 \u2208 [0, 1] with \u03b1 + \u03b2 < 1; where the parameters are clear from context, we occasionally refer to the corrupted distribution as Dcorr. The corrupted classconditional distributions Pcorr, Qcorr are Pcorr = (1\u2212 \u03b1) \u00b7 P + \u03b1 \u00b7Q Qcorr = \u03b2 \u00b7 P + (1\u2212 \u03b2) \u00b7Q, (2) and the corrupted base rate \u03c0corr in general has no relation to the clean base rate \u03c0. (If \u03b1+\u03b2 = 1, then Pcorr = Qcorr, making learning impossible, whereas if \u03b1+ \u03b2 > 1, we can swap Pcorr, Qcorr.) Table 1 summarises common quantities on the clean and corrupted distributions. From (2), we see that none of Pcorr, Qcorr or \u03c0corr contain any information about \u03c0 in general. Thus, estimating \u03c0 from Corr(D,\u03b1, \u03b2, \u03c0corr) is impossible in general. The parameters \u03b1, \u03b2 are also non-identifiable, but can be estimated under some assumptions on D (Scott et al., 2013). 2.3. Special cases of MC learning Two special cases of MC learning are notable. In learning from class-conditional label noise (CCN learning) (Angluin & Laird, 1988), positive samples have labels flipped with probability \u03c1+, and negative samples with probability \u03c1\u2212. This can be shown to reduce to MC learning with \u03b1 = \u03c0\u22121 corr \u00b7 (1\u2212 \u03c0) \u00b7 \u03c1\u2212 , \u03b2 = (1\u2212 \u03c0corr) \u00b7 \u03c0 \u00b7 \u03c1+, (3) and the corrupted base rate \u03c0corr = (1\u2212\u03c1+)\u00b7\u03c0+\u03c1\u2212\u00b7(1\u2212\u03c0). (See Appendix C for details.) In learning from positive and unlabelled data (PU learning) (Denis, 1998), one has access to unlabelled samples in lieu of negative samples. There are two subtly different settings: in the case-controlled setting (Ward et al., 2009), the unlabelled samples are drawn from the marginal distribution M , corresponding to MC learning with \u03b1 = 0, \u03b2 = \u03c0, Learning from Corrupted Binary Labels via Class-Probability Estimation and \u03c0corr arbitrary. In the censoring setting (Elkan & Noto, 2008), observations are drawn from D followed by a label censoring procedure. This is in fact a special of CCN (and hence MC) learning with \u03c1\u2212 = 0. 3. BER and AUC are immune to corruption We first show that optimising balanced error and AUC on corrupted data is equivalent to doing so on clean data. Thus, with a suitably rich function class, one can optimise balanced error and AUC from corrupted data without knowledge of the corruption process parameters. 3.1. BER minimisation is immune to label corruption The balanced error (BER) (Brodersen et al., 2010) of a classifier is simply the mean of the per-class error rates, BER(f) = FPR(f) + FNR(f) 2 . This is a popular measure in imbalanced learning problems (Cheng et al., 2002; Guyon et al., 2004) as it penalises sacrificing accuracy on the rare class in favour of accuracy on the dominant class. The negation of the BER is also known as the AM (arithmetic mean) metric (Menon et al., 2013). 
The BER-optimal classifier thresholds the class-probability function at the base rate (Menon et al., 2013), so that: argmin f : X\u2192{\u00b11} BER(f) = thresh(\u03b7, \u03c0) (4) argmin f : X\u2192{\u00b11} BERcorr(f) = thresh(\u03b7corr, \u03c0corr), (5) where \u03b7corr denotes the corrupted class-probability function. As Equation 4 depends on \u03c0, it may appear that one must know \u03c0 to minimise the clean BER from corrupted data. Surprisingly, the BER-optimal classifiers in Equations 4 and 5 coincide. This is because of the following relationship between"}
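The paper's first conclusion, that the balanced error can be optimised on corrupted data without knowing the corruption parameters, can be checked numerically. The sketch below, under simple synthetic-Gaussian assumptions, injects class-conditional label noise, fits a class-probability estimator on the corrupted labels only, and thresholds at the corrupted base rate; the data, noise rates, and names are all illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean binary problem: two Gaussian class-conditionals, base rate 0.3.
n = 20000
y = rng.random(n) < 0.3
X = rng.normal(loc=np.where(y, 1.0, -1.0), scale=1.0).reshape(-1, 1)

# Class-conditional label noise (CCN): flip positives w.p. rho_plus and
# negatives w.p. rho_minus, giving the corrupted labels we train on.
rho_plus, rho_minus = 0.2, 0.1
flip = rng.random(n) < np.where(y, rho_plus, rho_minus)
y_corr = y ^ flip

# Class-probability estimation on corrupted data only.
clf = LogisticRegression().fit(X, y_corr)
eta_corr = clf.predict_proba(X)[:, 1]

# BER-optimal rule: threshold corrupted probabilities at the corrupted
# base rate; per the paper this coincides with the clean BER optimum.
pred = eta_corr > y_corr.mean()

def ber(y_true, y_pred):
    fnr = np.mean(y_pred[y_true] == 0)    # miss rate on true positives
    fpr = np.mean(y_pred[~y_true] == 1)   # false alarms on true negatives
    return (fpr + fnr) / 2

print("clean BER of corrupted-data classifier:", round(ber(y, pred), 3))
```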
{"_id": "0584a69d27f55d726e72b69f21ca1bbab400c498", "title": "Intelligent criminal identification system", "text": "In many countries the amount of crime incidents that is reported per day is increasing dramatically. Concerning about Sri Lanka, The department of Police is the major organization of preventing crimes. In general, Sri Lankan police stations utilize paper-based information storing systems and they don't employ computer based applications up to a great extent. Due to this utilization of paper-based systems police officers have to spend a lot of time as well as man power to analyze existing crime information and to identify suspects for crime incidents. So the requirement of an efficient way for crime investigation has arisen. Data mining practices is one aspect of crime investigation, for which numerous technique are available. This paper highlights the use of data mining techniques, clustering and classification for effective investigation of crimes. Further the paper aims to identify suspects by analyzing existing evidences in situations where any witness or forensic clues are not present."}
{"_id": "9581009c34216cac062888c5ccf055453db17881", "title": "Intelligent Tutoring Systems and Learning Outcomes : A Meta-Analysis", "text": "Intelligent Tutoring Systems (ITS) are computer programs that model learners\u2019 psychological states to provide individualized instruction. They have been developed for diverse subject areas (e.g., algebra, medicine, law, reading) to help learners acquire domain-specific, cognitive and metacognitive knowledge. A meta-analysis was conducted on research that compared the outcomes from students learning from ITS to those learning from non-ITS learning environments. The meta-analysis examined how effect sizes varied with type of ITS, type of comparison treatment received by learners, type of learning outcome, whether knowledge to be learned was procedural or declarative, and other factors. After a search of major bibliographic databases, 107 effect sizes involving 14,321 participants were extracted and analyzed. The use of ITS was associated with greater achievement in comparison with teacher-led, large-group instruction (g .42), non-ITS computer-based instruction (g .57), and textbooks or workbooks (g .35). There was no significant difference between learning from ITS and learning from individualized human tutoring (g \u2013.11) or small-group instruction (g .05). Significant, positive mean effect sizes were found regardless of whether the ITS was used as the principal means of instruction, a supplement to teacher-led instruction, an integral component of teacher-led instruction, or an aid to homework. Significant, positive effect sizes were found at all levels of education, in almost all subject domains evaluated, and whether or not the ITS provided feedback or modeled student misconceptions. The claim that ITS are relatively effective tools for learning is consistent with our analysis of potential publication bias."}
{"_id": "8618f895585234058f424bd1a5f7043244b6d696", "title": "Kalman filtering over a packet-delaying network: A probabilistic approach", "text": "In this paper, we consider Kalman filtering over a packet-delaying network. Given the probability distribution of the delay, we can characterize the filter performance via a probabilistic approach. We assume that the estimator maintains a buffer of length D so that at each time k, the estimator is able to retrieve all available data packets up to time k\u2212D+1. Both the cases of sensorwith andwithout necessary computation capability for filter updates are considered. When the sensor has no computation capability, for a givenD, we give lower and upper bounds on the probability forwhich the estimation error covariance is within a prescribed bound. When the sensor has computation capability, we show that the previously derived lower and upper bounds are equal to each other. An approach for determining the minimum buffer length for a required performance in probability is given and an evaluation on the number of expected filter updates is provided. Examples are provided to demonstrate the theory developed in the paper. \u00a9 2009 Elsevier Ltd. All rights reserved."}
{"_id": "402f850dff86fb601d34b2841e6083ac0f928edd", "title": "SCNN: An accelerator for compressed-sparse convolutional neural networks", "text": "Convolutional Neural Networks (CNNs) have emerged as a fundamental technology for machine learning. High performance and extreme energy efficiency are critical for deployments of CNNs, especially in mobile platforms such as autonomous vehicles, cameras, and electronic personal assistants. This paper introduces the Sparse CNN (SCNN) accelerator architecture, which improves performance and energy efficiency by exploiting the zero-valued weights that stem from network pruning during training and zero-valued activations that arise from the common ReLU operator. Specifically, SCNN employs a novel dataflow that enables maintaining the sparse weights and activations in a compressed encoding, which eliminates unnecessary data transfers and reduces storage requirements. Furthermore, the SCNN dataflow facilitates efficient delivery of those weights and activations to a multiplier array, where they are extensively reused; product accumulation is performed in a novel accumulator array. On contemporary neural networks, SCNN can improve both performance and energy by a factor of 2.7x and 2.3x, respectively, over a comparably provisioned dense CNN accelerator."}
{"_id": "f3b84862bbe7eded638ab33c566c53c17bb1f83c", "title": "Hidden Community Detection in Social Networks", "text": "We introduce a new paradigm that is important for community detection in the realm of network analysis. Networks contain a set of strong, dominant communities, which interfere with the detection of weak, natural community structure. When most of the members of the weak communities also belong to stronger communities, they are extremely hard to be uncovered. We call the weak communities the hidden community structure. We present a novel approach called HICODE (HIdden COmmunity DEtection) that identi\u0080es the hidden community structure as well as the dominant community structure. By weakening the strength of the dominant structure, one can uncover the hidden structure beneath. Likewise, by reducing the strength of the hidden structure, one can more accurately identify the dominant structure. In this way, HICODE tackles both tasks simultaneously. Extensive experiments on real-world networks demonstrate that HICODE outperforms several state-of-the-art community detection methods in uncovering both the dominant and the hidden structure. For example in the Facebook university social networks, we \u0080nd multiple non-redundant sets of communities that are strongly associated with residential hall, year of registration or career position of the faculties or students, while the state-of-the-art algorithms mainly locate the dominant ground truth category. Due to the di\u0081culty of labeling all ground truth communities in real-world datasets, HICODE provides a promising approach to pinpoint the existing latent communities and uncover communities for which there is no ground truth. Finding this unknown structure is an extremely important community detection problem."}
{"_id": "7b14b89089cc86fc75bcb56c673d8316e05f5d67", "title": "HiStory: a hierarchical storyboard interface design for video browsing on mobile devices", "text": "This paper presents an interactive thumbnail-based video browser for mobile devices such as smartphones featuring a touch screen. Developed as part of on-going research and supported by user studies, it introduces HiStory, a hierarchical storyboard design offering an interface metaphor that is familiar and intuitive yet supports fast and effective completion of Known-Item-Search tasks by rapidly providing an overview of a video's content with varying degrees of granularity."}
{"_id": "7f7464349a4fbd6fc33444ffb2413228deb6dca2", "title": "Idiopathic isolated clitoromegaly: A report of two cases", "text": "BACKGROUND: Clitoromegaly is a frequent congenital malformation, but acquired clitoral enlargement is relatively rare. METHODS: Two acquired clitoromegaly cases treated in Ataturk Training Hospital, Izmir, Turkey are presented. RESULTS: History from both patients revealed clitoromegaly over the last three years. Neither gynecological nor systemic abnormalities were detected in either patient. Karyotype analyses and hormonal tests were normal. Abdominal and gynaecological ultrasound did not show any cystic lesion or other abnormal finding. Computerized tomography scan of the adrenal glands was normal. Clitoroplasty with preservation of neurovascular pedicles was performed for the treatment of clitoromegaly. CONCLUSION: The patients were diagnosed as \"idiopathic isolated\" clitoromegaly. To the best of our knowledge, there has been no detailed report about idiopathic clitoromegaly in the literature."}
{"_id": "b91144901bccdead9a32f330f0d83dc1d84e759b", "title": "On the design and implementation of linear differential microphone arrays.", "text": "Differential microphone array (DMA), a particular kind of sensor array that is responsive to the differential sound pressure field, has a broad range of applications in sound recording, noise reduction, signal separation, dereverberation, etc. Traditionally, an Nth-order DMA is formed by combining, in a linear manner, the outputs of a number of DMAs up to (including) the order of N - 1. This method, though simple and easy to implement, suffers from a number of drawbacks and practical limitations. This paper presents an approach to the design of linear DMAs. The proposed technique first transforms the microphone array signals into the short-time Fourier transform (STFT) domain and then converts the DMA beamforming design to simple linear systems to solve. It is shown that this approach is much more flexible as compared to the traditional methods in the design of different directivity patterns. Methods are also presented to deal with the white noise amplification problem that is considered to be the biggest hurdle for DMAs, particularly higher-order implementations."}
{"_id": "346f4f5cfb33dbba203ceb049d38542118e15f92", "title": "Data-free parameter pruning for Deep Neural Networks", "text": "Deep Neural nets (NNs) with millions of parameters are at the heart of many stateof-the-art computer vision systems today. However, recent works have shown that much smaller models can achieve similar levels of performance. In this work, we address the problem of pruning parameters in a trained NN model. Instead of removing individual weights one at a time as done in previous works, we remove one neuron at a time. We show how similar neurons are redundant, and propose a systematic way to remove them. Our experiments in pruning the densely connected layers show that we can remove upto 85% of the total parameters in an MNIST-trained network, and about 35% for AlexNet without significantly affecting performance. Our method can be applied on top of most networks with a fully connected layer to give a smaller network."}
{"_id": "047ff9a3a05cce0d6594eb9300d50aaf5f93f55b", "title": "An 82.4% efficiency package-bondwire-based four-phase fully integrated buck converter with flying capacitor for area reduction", "text": "Multi-phase converters have become a topic of great interest due to the high output power capacity and output ripple cancellation effect. They are even more beneficial to nowadays high-frequency fully integrated converters with output capacitor integrated on-chip. As one of the dominant chip area consumers, reducing the size of the on-chip decoupling capacitors directly leads to cost down. It is reported that a 5\u00d7 capacitor area reduction can be achieved with a four-phase converter compared to a single-phase one [1]. However, the penalty is obvious. Every extra phase comes with an inductor, which is also counted as cost and becomes more dominant with increase in the number of phases."}
{"_id": "1c9b9f3b3bf89f1c051c07411c226719aa468923", "title": "Object Detection in High-Resolution Remote Sensing Images Using Rotation Invariant Parts Based Model", "text": "In this letter, we propose a rotation invariant parts-based model to detect objects with complex shape in high-resolution remote sensing images. Specifically, the geospatial objects with complex shape are firstly divided into several main parts, and the structure information among parts is described and regulated in polar coordinates to achieve the rotation invariance on configuration. Meanwhile, the pose variance of each part relative to the object is also defined in our model. In encoding the features of the rotated parts and objects, a new rotation invariant feature is proposed by extending histogram oriented gradients. During the final detection step, a clustering method is introduced to locate the parts in objects, and that method can also be used to fuse the detection results. By this way, an efficient detection model is constructed and the experimental results demonstrate the robustness and precision of our proposed detection model."}
{"_id": "2453dd38cde21f3248b55d281405f11d58168fa9", "title": "Multi-scale Patch Aggregation (MPA) for Simultaneous Detection and Segmentation", "text": "Aiming at simultaneous detection and segmentation (SD-S), we propose a proposal-free framework, which detect and segment object instances via mid-level patches. We design a unified trainable network on patches, which is followed by a fast and effective patch aggregation algorithm to infer object instances. Our method benefits from end-to-end training. Without object proposal generation, computation time can also be reduced. In experiments, our method yields results 62.1% and 61.8% in terms of mAPr on VOC2012 segmentation val and VOC2012 SDS val, which are state-of-the-art at the time of submission. We also report results on Microsoft COCO test-std/test-dev dataset in this paper."}
{"_id": "702c5b4c444662bc53f6d1f92a4de88efe68c071", "title": "Learning to simplify: fully convolutional networks for rough sketch cleanup", "text": "In this paper, we present a novel technique to simplify sketch drawings based on learning a series of convolution operators. In contrast to existing approaches that require vector images as input, we allow the more general and challenging input of rough raster sketches such as those obtained from scanning pencil sketches. We convert the rough sketch into a simplified version which is then amendable for vectorization. This is all done in a fully automatic way without user intervention. Our model consists of a fully convolutional neural network which, unlike most existing convolutional neural networks, is able to process images of any dimensions and aspect ratio as input, and outputs a simplified sketch which has the same dimensions as the input image. In order to teach our model to simplify, we present a new dataset of pairs of rough and simplified sketch drawings. By leveraging convolution operators in combination with efficient use of our proposed dataset, we are able to train our sketch simplification model. Our approach naturally overcomes the limitations of existing methods, e.g., vector images as input and long computation time; and we show that meaningful simplifications can be obtained for many different test cases. Finally, we validate our results with a user study in which we greatly outperform similar approaches and establish the state of the art in sketch simplification of raster images."}
{"_id": "915c4bb289b3642489e904c65a47fa56efb60658", "title": "Perceptual Losses for Real-Time Style Transfer and Super-Resolution", "text": "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results."}
{"_id": "9201bf6f8222c2335913002e13fbac640fc0f4ec", "title": "Fully convolutional networks for semantic segmentation", "text": ""}
{"_id": "5360ea82abdc3223b0ede0179fe5842c180b70ed", "title": "Scientific Table Search Using Keyword Queries", "text": "Tables are common and important in scientific documents, yet most text-based document search systems do not capture structures and semantics specific to tables. How to bridge different types of mismatch between keywords queries and scientific tables and what influences ranking quality needs to be carefully investigated. This paper considers the structure of tables and gives different emphasis to table components. On the query side, thanks to external knowledge such as knowledge bases and ontologies, key concepts are extracted and used to build structured queries, and target quantity types are identified and used to expand original queries. A probabilistic framework is proposed to incorporate structural and semantic information from both query and table sides. We also construct and release TableArXiv, a high quality dataset with 105 queries and corresponding relevance judgements for scientific table search. Experiments demonstrate significantly higher accuracy overall and at the top of the rankings than several baseline methods."}
{"_id": "e29bca5e57cb7d4edd00eaa6667794663984c77f", "title": "Decision support for Cybersecurity risk planning", "text": "a r t i c l e i n f o Security countermeasures help ensure the confidentiality, availability, and integrity of information systems by preventing or mitigating asset losses from Cybersecurity attacks. Due to uncertainty, the financial impact of threats attacking assets is often difficult to measure quantitatively, and thus it is difficult to prescribe which countermeasures to employ. In this research, we describe a decision support system for calculating the uncertain risk faced by an organization under cyber attack as a function of uncertain threat rates, countermeasure costs, and impacts on its assets. The system uses a genetic algorithm to search for the best combination of countermeasures, allowing the user to determine the preferred tradeoff between the cost of the portfolio and resulting risk. Data collected from manufacturing firms provide an example of results under realistic input conditions. Assuring a secure information technology (IT) environment for the transaction of commerce is a major concern. The magnitude of the task is growing yearly, as attackers become more knowledgeable, more determined, and bolder in their efforts. According to a lead security expert at International Data Corporation, a global provider of market intelligence and advisory services for the IT community, \" Emerging new attack vectors, more targeted campaigns, compliance, and the layering of new technologies on the corporate infrastructure are all factors having a tremendous impact on the overall risk to an organization \" [21]. Given that corporate expenditures are usually a good indicator of the level of concern about an issue, security is obviously near the top of many IT executives' lists. According to IDC, the global market for providers of information security services is projected to exceed $32.5 billion in 2010 [21]. The New York Times recently noted that \" the intrusion into Google's computers and related attacks from within China on some thirty other companies point to the rising sophistication of such assaults and the vulnerability of even the best defenses \" [25]. The Times further states that, according to the Computer Security Institute, malware infections are up from one-half to almost two-thirds of companies surveyed last year, at an average cost of $235,000 for each organization. Finally, the newspaper observes, malware is being exploited in new terrains with malicious code that turns on cellphone microphones and cameras for the purpose of industrial spying [25]. Because of the variety of methods attackers use to try to infiltrate and disrupt \u2026"}
{"_id": "d0858a2f403004bf0c002ca071884f5b2d002b22", "title": "Memory reduction method for deep neural network training", "text": "Training deep neural networks requires a large amount of memory, making very deep neural networks difficult to fit on accelerator memories. In order to overcome this limitation, we present a method to reduce the amount of memory for training a deep neural network. The method enables to suppress memory increase during the backward pass, by reusing the memory regions allocated for the forward pass. Experimental results exhibit our method reduced the occupied memory size in training by 44.7% on VGGNet with no accuracy affection. Our method also enabled training speedup by increasing the mini batch size up to double."}
{"_id": "cd3007e5c5c8b9a3771e7d99502cdc388b369e8a", "title": "MATLAB based code for 3 D joint inversion of Magnetotelluric and Direct Current resistivity imaging data", "text": "24 th EM Induction Workshop, Helsing\u00f8r, Denmark, August 12-19, 2018 1 /4 MATLAB based code for 3D joint inversion of Magnetotelluric and Direct Current resistivity imaging data M. Israil 1 , A. Singh 1 Anita Devi 1 , and Pravin K.Gupta 1 1 Indian Institute of Technology, Roorkee, 247667, India, mohammad.israil@gmail.com"}
{"_id": "371d0da66420ed6885c4badc3d4b7eb086d4292c", "title": "An Efficient Privacy-Preserving Ranked Keyword Search Method", "text": "Cloud data owners prefer to outsource documents in an encrypted form for the purpose of privacy preserving. Therefore it is essential to develop efficient and reliable ciphertext search techniques. One challenge is that the relationship between documents will be normally concealed in the process of encryption, which will lead to significant search accuracy performance degradation. Also the volume of data in data centers has experienced a dramatic growth. This will make it even more challenging to design ciphertext search schemes that can provide efficient and reliable online information retrieval on large volume of encrypted data. In this paper, a hierarchical clustering method is proposed to support more search semantics and also to meet the demand for fast ciphertext search within a big data environment. The proposed hierarchical approach clusters the documents based on the minimum relevance threshold, and then partitions the resulting clusters into sub-clusters until the constraint on the maximum size of cluster is reached. In the search phase, this approach can reach a linear computational complexity against an exponential size increase of document collection. In order to verify the authenticity of search results, a structure called minimum hash sub-tree is designed in this paper. Experiments have been conducted using the collection set built from the IEEE Xplore. The results show that with a sharp increase of documents in the dataset the search time of the proposed method increases linearly whereas the search time of the traditional method increases exponentially. Furthermore, the proposed method has an advantage over the traditional method in the rank privacy and relevance of retrieved documents."}
{"_id": "aed183e7258aff722192ca8b2368b51a7f817b1a", "title": "Distributed Denial of Service: Taxonomies of Attacks, Tools, and Countermeasures", "text": "Distributed Denial of Service (DDoS) attacks have become a large problem for users of computer systems connected to the Internet. DDoS attackers hijack secondary victim systems using them to wage a coordinated large-scale attack against primary victim systems. As new countermeasures are developed to prevent or mitigate DDoS attacks, attackers are constantly developing new methods to circumvent these new countermeasures. In this paper, we describe DDoS attack models and propose taxonomies to characterize the scope of DDoS attacks, the characteristics of the software attack tools used, and the countermeasures available. These taxonomies illustrate similarities and patterns in different DDoS attacks and tools, to assist in the development of more generalized solutions to countering DDoS attacks, including new derivative attacks."}
{"_id": "30f79f6193580a2afc1a495dd6046ac8a74b4714", "title": "Meta-Analytic Review of Leader-Member Exchange Theory : Correlates and Construct Issues", "text": "s International, 48, 2922. Dienesch, R. M., & Liden, R. C. (1986). Leader-member exchange model of leadership: A critique and further development. Academy of Management Review, 11, 618\u2014634. *Dobbins, G. H., Cardy, R. L., & Platz-Vieno, S. J. (1990). A contingency approach to appraisal satisfaction: An initial investigation of the joint effects of organizational variables and appraisal characteristics. Journal of Management, 16, 619-632. *Dockery, T. M., & Steiner, D. D. (1990). The role of the initial interaction in leader-member exchange. Group and Organization Studies, 15, 395-413. *Duarte, N. T, Goodson, J. R., & Klich, N. R. (1994). Effects of dyadic quality and duration on performance appraisal. Academy of Management Journal, 37, 499-521. *Duchon, D., Green, S. G., &Taber, T. D. (1986). Vertical dyad linkage: A longitudinal assessment of antecedents, measures, and consequences. Journal of Applied Psychology, 71, 5660. *Dunegan, K. J., Duchon, D., & Uhl-Bien, M. (1992). Examining the link between leader-member exchange and subordinate performance: The role of task analyzability and variety as moderators. Journal of Management, 18, 59\u201476. *Dunegan, K. J., Uhl-Bien, M., & Duchon, D. (1994, August). Task-level climate (TLC) and leader-member exchange (LMX) as interactive predictors of subordinate performance. Paper presented at the 54th Academy of Management meeting, Dallas, TX. Fahr, J.-L., & Dobbins, G. H. (1989). Effects of comparative performance information on the accuracy of self-ratings and agreement between selfand supervisor ratings. Journal of Applied Psychology, 74, 606-610. Feldman, J. M. (1986). A note on the statistical correction of halo error. Journal of Applied Psychology, 71, 173-176. * Ferris, G. R. (1985). Role of leadership in the employee withdrawal process: A constructive replication. Journal of Applied Psychology, 70, 777-781. *Rikami, C. V, & Larson, E. W. (1984). Commitment to company and union: Parallel models. Journal of Applied Psychology, 69, 367-371. *Gast, I. (1987). Leader cognitive complexity and its effects on the quality of exchange relationships with subordinates (Doctoral dissertation, George Washington University, 1987). Dissertation Abstracts International, 47, 5082. *Gerras, S. (1993). The effect of cognitive busyness and nonverbal behaviors on trait inferences and LMX judgments (Doctoral dissertation, The Pennsylvania State University, 1992). Dissertation Abstracts International, 53, 3819. *Gessner, J. (1993). An interpersonal attraction approach to leader-member exchange: Predicting the predictor (Doctoral dissertation, University of Maryland, 1993). Dissertation Abstracts International, 53, 3820. Graen, G. B. (1976). Role-making processes within complex organizations. In M. D. Dunnette (Ed.), Handbook of industrial and organizational psychology (pp. 1201-1245). Chicago: Rand McNally. Graen, G. B. (1989). Unwritten rules for your career. New York: Wiley. *Graen, G. B., & Cashman, J. (1975). A role-making model of leadership in formal organizations: A development approach. In J. G. Hunt & L. L. Larson (Eds.), Leadership frontiers (pp. 143-165). Kent, OH: Kent State University. *Graen, G. B., Dansereau, E, Minanii, T., & Cashman, J. (1973). Leadership behaviors as cues to performance evaluation. Academy of Management Journal, 16, 611-623. *Graen,G. B., Liden, R., &Hoel,W. (1982). 
Role of leadership in the employee withdrawal process. Journal of Applied Psychology, 67, 868-872. *Graen, G. B., Novak, M. A., & Sommerkamp, P. (1982). The effects of leader-member exchange and job design on productivity and satisfaction: Testing a dual attachment model. Organizational Behavior and Human Performance, 30, 109131. Graen, G. B., & Scandura, T. A. (1987). Toward a psychology of dyadic organizing. Research in Organizational Behavior, 9, 175-208. Graen, G. B., Scandura, T. A., & Graen, M. R. (1986). A field experiment test of the moderating effects of growth need strength on productivity. Journal of Applied Psychology, 71, 484-491. Graen, G. B., & Schiemann, W. (1978). Leader-member agreement: A vertical dyad linkage approach. Journal of Applied Psychology, 63, 206-212. Graen, G. B., & Uhl-Bien, M. (1995). Relationship-based approach to leadership: Development of leader-member exchange (LMX) theory of leadership over 25 years: Applying a multi-level multi-domain perspective. Leadership Quarterly, 6, 219-247. *Graen, G. B., Wakabayashi, M., Graen, M. R., & Graen, M. G. (1990). International generalizability of American hypotheses about Japanese management progress: A strong inference investigation. Leadership Quarterly, 1, 1-23. *Green, S. G., Anderson, S. E., & Shivers, S. L. (1996). Demographic and organizational influences on leader-member exchange and related work attitudes. Organizational Behavior and Human Decision Processes, 66, 203-214. *Green, S. G., Blank, W, & Liden, R. C. (1983). Market and organizational influences on bank employees' work attitudes and behaviors. Journal of Applied Psychology, 68, 298-306. Harris, M. M., & Schaubroeck, J. (1988). A meta-analysis of self-supervisor, self-peer, and peer-supervisor ratings. Personnel Psychology, 41, 43-62. Hater, J. J., & Bass, B. M. (1988). Superiors' evaluations and subordinates' perceptions of transformational and transactional leadership. Journal of Applied Psychology, 73, 695702. Hedges, L. V. (1987). How hard is hard science, how soft is soft science? The empirical cumulativeness of research. American Psychologist, 42, 443-455. Hedges, L. V, & Olkin, I. (1985). Statistical methods for metaanalysis. New \\brk: Academic Press. House, R. J. (1977). A 1976 theory of charismatic leadership. In J. G. Hunt & L. L. Larson (Eds.), Leadership: The cutting edge (pp. 189-207). Carbondale: Southern Illinois University. Howell, J. M., & Avolio, B. J. (1993). Transformational leadership, transactional leadership, locus of control, and support 842 GERSTNER AND DAY for innovation: Key predictors of consolidated-business-unit performance. Journal of Applied Psychology, 78, 891\u2014902. Howell, J. M., & Frost, P. J. (1989). A laboratory study of charismatic leadership. Organizational Behavior and Human Decision Processes, 43, 243-269. Huffcutt, A. I., & Arthur, W., Jr. (1995). Development of a new outlier statistic for meta-analytic data. Journal of Applied Psychology, 80, 327-334. Huffcutt, A. I., Arthur, W., Jr., & Bennet, W. (1993). Conducting meta-analysis using the proc means procedure in SAS. Educational and Psychological Measurement, S3, 119-131. Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: Correcting error and bias in research findings. Newbury Park, CA: Sage. Hunter, J. E., Schmidt, F. L., & Jackson, G. B. (1982). Metaanalysis: Cumulating research findings across studies. Beverly Hills, CA: Sage. *James, L. R., Gent, M. J., Hater, J. J., & Coray, K. E. (1979). 
Correlates of psychological influence: An illustration of the psychological climate approach to work environment perceptions. Personnel Psychology, 32, 563-588. *James, L. R., Hater, J. J., & Jones, A. (1981). Perceptions of psychological influence: A cognitive information processing approach for explaining moderated relationships. Personnel Psychology, 34, 453-477. Johnson, B. T., & Tbrco, R. M. (1992). The value of goodnessof-fit indices in meta-analysis: A comment on Hall and Rosenthai. Communication Monographs, 59, 388-396. * Jones, A. P., Glaman, J. M., & Johnson, D. S. (1993). Perceptions of a quality program and relationships with work perceptions and job attitudes. Psychological Reports, 72, 619-624. *Katerberg, R., & Horn, P. W. (1981). Effects of within-group and between-group variation in leadership. Journal of Applied Psychology, 66, 218-223. \"Keller, T., & Dansereau, F. (1995). Leadership and empowerment: A social exchange perspective. Human Relations, 48, 127-146. *Kim, K. I., & Organ, D. W. (1982). Determinants of leadersubordinate exchange relationships. Group and Organization Studies, 7, 77-89. *Kinicki, A. J., & Vecchio, R. P. (1994). Influences on the quality of supervisor\u2014subordinate relations: The role of timepressure, organizational commitment, and locus of control. Journal of Organizational Behavior, 15, 75-82. *K'Obonyo, P. O. (1989). A dyadic upward influence process: A laboratory investigation of the effect of a subordinate's ingratiation (praise & performance) on the superior-subordinate exchange relationship (Doctoral dissertation, University of South Carolina, 1988). Dissertation Abstracts International, 50, 1366. Kozlowski, S. W. J., & Doherty, M. L. (1989). Integration of climate and leadership: Examination of a neglected issue. Journal of Applied Psychology, 74, 546-553. Kuhnert, K. W. (1994). Transforming leadership: Developing people through delegation. In B. M. Bass & B. J. Avolio (Eds.), Improving organizational effectiveness through transformational leadership (pp. 10-25). Thousand Oaks, CA: Sage. Kuhnert, K. W., & Lewis, P. (1987). Transactional and transformational leadership: A constructive/developmental analysis. Academy of Management Review, 12, 648\u2014657. *Lagace, R. R. (1988). An investigation into the impact of interpersonal trust on the quality of the relationship and outcome variables in the sales manager/salesperson dyad (Doctoral dissertation, University of Cincinnati, 1987). Dissertation Abstracts International, 48, 2679. *Lagace, R. R. (1990). Leader-member exchange: Antecedents and consequences of the cadre and hired hand. Journal of Personal Selling and Sales Management, 10, 11-19. *Lagace, R. R., Castlebeny, S. B., & Ridnour, R. E. (1993). An exploratory salesforce study of the relationship between leader-member exchange and motivation, role stress, and manager evaluation. Journal of Applied Business Research, 9, 110-119. *Leana, C. R. (1986). Predictors and consequences of delegation. Academy of Management Journal, 2, 754\u2014774. *Liden, R., & Maslyn, J. M. (in press). Multidimensionality of leader-member"}
{"_id": "c46988a5a538bf160f84c78bed237c53e3f903d6", "title": "Data-efficient hand motor imagery decoding in EEG-BCI by using Morlet wavelets & Common Spatial Pattern algorithms", "text": "EEG-based Brain Computer Interfaces (BCIs) are quite noisy brain signals recorded from the scalp (electroencephalography, EEG) to translate the user's intent into action. This is usually achieved by looking at the pattern of brain activity across many trials while the subject is imagining the performance of an instructed action - the process known as motor imagery. Nevertheless, existing motor imagery classification algorithms do not always achieve good performances because of the noisy and non-stationary nature of the EEG signal and inter-subject variability. Thus, current EEG BCI takes a considerable upfront toll on patients, who have to submit to lengthy training sessions before even being able to use the BCI. In this study, we developed a data-efficient classifier for left/right hand motor imagery by combining in our pattern recognition both the oscillation frequency range and the scalp location. We achieve this by using a combination of Morlet wavelet and Common Spatial Pattern theory to deal with nonstationarity and noise. The system achieves an average accuracy of 88% across subjects and was trained by about a dozen training (10-15) examples per class reducing the size of the training pool by up to a 100-fold, making it very data-efficient way for EEG BCI."}
{"_id": "117bbf27f5e0dab72ebf5c3d7eeb5ca381015b30", "title": "Cluster-Based Scalable Network Services", "text": "We identify three fundamental requirements for scalable network services: incremental scalability and overflow growth provisioning, 24x7 availability through fault masking, and costeffectiveness. We argue that clusters of commodity workstations interconnected by a high-speed SAN are exceptionally well-suited to meeting these challenges for Internet-server workloads, provided the software infrastructure for managing partial failures and administering a large cluster does not have to be reinvented for each new service. To this end, we propose a general, layered architecture for building cluster-based scalable network services that encapsulates the above requirements for reuse, and a service-programming model based on composable workers that perform transformation, aggregation, caching, and customization (TACC) of Internet content. For both performance and implementation simplicity, the architecture and TACC programming model exploit BASE, a weaker-than-ACID data semantics that results from trading consistency for availability and relying on soft state for robustness in failure management. Our architecture can be used as an \u201coff the shelf\u201d infrastructural platform for creating new network services, allowing authors to focus on the \u201ccontent\u201d of the service (by composing TACC building blocks) rather than its implementation. We discuss two real implementations of services based on this architecture: TranSend, a Web distillation proxy deployed to the UC Berkeley dialup IP population, and HotBot, the commercial implementation of the Inktomi search engine. We present detailed measurements of TranSend\u2019s performance based on substantial client traces, as well as anecdotal evidence from the TranSend and HotBot experience, to support the claims made for the architecture."}
{"_id": "f2e36a4d531311f534378fc77c29650c469b69d8", "title": "Multi-scale CNN Stereo and Pattern Removal Technique for Underwater Active Stereo System", "text": "Demands on capturing dynamic scenes of underwater environments are rapidly growing. Passive stereo is applicable to capture dynamic scenes, however the shape with textureless surfaces or irregular re\ufb02ections cannot be recovered by the technique. In our system, we add a pattern projector to the stereo camera pair so that arti\ufb01cial textures are augmented on the objects. To use the system at underwater environments, several problems should be compensated, i.e., refraction, disturbance by \ufb02uctuation and bubbles. Further, since surface of the objects are interfered by the bubbles, projected patterns, etc., those noises and patterns should be removed from captured images to recover original texture. To solve these problems, we propose three approaches; a depth-dependent calibration, Convolutional Neural Network(CNN)-stereo method and CNN-based texture recovery method. A depth-dependent calibration I sour analysis to \ufb01nd the acceptable depth range for approximation by center projection to \ufb01nd the certain target depth for calibration. In terms of CNN stereo, unlike common CNN based stereo methods which do not consider strong disturbances like refraction or bubbles, we designed a novel CNN architecture for stereo matching using multi-scale information, which is intended to be robust against such disturbances. Finally, we propose a multi-scale method for bubble and a projected-pattern removal method using CNNs to recover original textures. Experimental results are shown to prove the effectiveness of our method compared with the state of the art techniques. Furthermore, reconstruction of a live swimming \ufb01sh is demonstrated to con\ufb01rm the feasibility of our techniques."}
{"_id": "a7d94d683c631d5f8fe4fada6fe0bbb692703324", "title": "Advances in Digital Forensics XIII", "text": "In digital forensics, examinations are carried out to explain events and demonstrate the root cause from a number of plausible causes. Yin\u2019s approach to case study research offers a systematic process for investigating occurrences in their real-world contexts. The approach is well suited to examining isolated events and also addresses questions about causality and the reliability of findings. The techniques that make Yin\u2019s approach suitable for research also apply to digital forensic examinations. The merits of case study research are highlighted in previous work that established the suitability of the case study research method for conducting digital forensic examinations. This research extends the previous work by demonstrating the practicality of Yin\u2019s case study method in examining digital events. The research examines the relationship between digital evidence \u2013 the effect \u2013 and its plausible causes, and how patterns can be identified and applied to explain the events. Establishing these patterns supports the findings of a forensic examination. Analytic strategies and techniques inherent in Yin\u2019s case study method are applied to identify and analyze patterns in order to establish the findings of a digital forensic examination."}
{"_id": "8a2fcac4ab1e45cdff8a9637877dd2c19bffc4e0", "title": "A Machine Learning Approach for SQL Queries Response Time Estimation in the Cloud", "text": "Cloud computing provides on-demand services with pay-as-you-go model. In this sense, customers expect quality from providers where QoS control over DBMSs services demands decision making, usually based on performance aspects. It is essential that the system be able to assimilate the characteristics of the application in order to determine the execution time for its queries' execution. Machine learning techniques are a promising option in this context, since they o er a wide range of algorithms that build predictive models for this purpose based on the past observations of the demand and on the DBMS system behavior. In this work, we propose and evaluate the use of regression methods for modeling the SQL query execution time in the cloud. This process uses di erent techniques, such as bag-of-words, statistical lter and genetic algorithms. We use K-Fold Cross-Validation for measuring the quality of the models in each step. Experiments were carried out based on data from database systems in the cloud generated by the TPC-C benchmark."}
{"_id": "552d5338c6151cd0e4b61cc31ba5bc507d5db52f", "title": "Ontology based context modeling and reasoning using OWL", "text": "Here we propose an OWL encoded context ontology (CONON) for modeling context in pervasive computing environments, and for supporting logic-based context reasoning. CONON provides an upper context ontology that captures general concepts about basic context, and also provides extensibility for adding domain-specific ontology in a hierarchical manner. Based on this context ontology, we have studied the use of logic reasoning to check the consistency of context information, and to reason over low-level, explicit context to derive high-level, implicit context. By giving a performance study for our prototype, we quantitatively evaluate the feasibility of logic based context reasoning for nontime-critical applications in pervasive computing environments, where we always have to deal carefully with the limitation of computational resources."}
{"_id": "649e0a31d3a16b10109f2c9627d07b16e15290c0", "title": "OPINE: Extracting Product Features and Opinions from Reviews", "text": "Consumers have to often wade through a large number of on-line reviews in order to make an informed product choice. We introduce OPINE, an unsupervised, high-precision information extraction system which mines product reviews in order to build a model of product features and their evaluation by reviewers."}
{"_id": "ec2027c2dd93e4ee8316cc0b3069e8abfdcc2ecf", "title": "Latent Variable PixelCNNs for Natural Image Modeling", "text": "We study probabilistic models of natural images and extend the autoregressive family of PixelCNN models by incorporating latent variables. Subsequently, we describe two new generative image models that exploit different image transformations as latent variables: a quantized grayscale view of the image or a multi-resolution image pyramid. The proposed models tackle two known shortcomings of existing PixelCNN models: 1) their tendency to focus on low-level image details, while largely ignoring high-level image information, such as object shapes, and 2) their computationally costly procedure for image sampling. We experimentally demonstrate benefits of our LatentPixelCNN models, in particular showing that they produce much more realistically looking image samples than previous stateof-the-art probabilistic models."}
{"_id": "544f6fa29fa08f949b7dc61aa9f7887078fbfc0b", "title": "Top-down control of visual attention in object detection", "text": "Current computational models of visual attention focus on bottom-up information and ignore scene context. However, studies in visual cognition show that humans use context to facilitate object detection in natural scenes by directing their attention or eyes to diagnostic regions. Here we propose a model of attention guidance based on global scene config\u00ad uration. We show that the statistics of low-level features across the scene image determine where a specific object (e.g. a person) should be located. Human eye movements show that regions chosen by the top-down model agree with regions scrutinized by human observers performing a visual search task for people. The results validate the proposition that top-down information from visual context modulates the saliency of image regions during the task of object de\u00ad tection. Contextual information provides a shortcut for effi\u00ad cient object detection systems."}
{"_id": "cceda5edb064cbe563724d3c6b6d61140747aa15", "title": "Continuous software engineering: A roadmap and agenda", "text": "Throughout its short history, software development has been characterized by harmful disconnects between important activities such as planning, development and implementation. The problem is further exacerbated by the episodic and infrequent performance of activities such as planning, testing, integration and releases. Several emerging phenomena reflect attempts to address these problems. For example, Continuous Integration is a practice which has emerged to eliminate discontinuities between development and deployment. In a similar vein, the recent emphasis on DevOps recognizes that the integration between software development and its operational deployment needs to be a continuous one. We argue a similar continuity is required between business strategy and development, BizDev being the term we coin for this. These disconnects are even more problematic given the need for reliability and resilience in the complex and data-intensive systems being developed today. We identify a number of continuous activities which together we label as \u2018Continuous \u2217\u2019 (i.e. Continuous Star) which we present as part of an overall roadmap for Continuous Software engineering. We argue for a continuous (but not necessarily rapid) software engineering delivery pipeline. We conclude the paper with a research agenda. \u00a9 2015 Elsevier Inc. All rights reserved."}
{"_id": "6ff347f7c7324a4038639fc58b8deea9038ec053", "title": "Stable Marriage with Ties and Unacceptable Partners", "text": "An instance of the classical stable marriage problem involves n men and n women, and each person ranks all n members of the opposite sex in strict order of preference. The effect of allowing ties in the preference lists has been investigated previously, and three natural definitions of stability arise. In this paper, we extend this study by allowing a preference list to involve ties and/or be incomplete. We show that, under the weakest notion of stability, the stable matchings need not be all of the same cardinality, and the decision problem related to finding a maximum cardinality stable matching is NP-complete, even if the ties occur in the preference lists of one sex only. This result has important implications for practical matching schemes such as the well-known National Resident Matching Program [9]. In the cases of the other two notions of stability, Irving [5] has described algorithms for testing whether a stable matching exists, and for constructing such a matching if one does exist, where a preference list is complete but may involve ties. We demonstrate how to extend these algorithms to the case where a preference list may be incomplete and/or involve ties."}
{"_id": "ca579dbe53b33424abb13b35c5b7597ef19600ad", "title": "Speech recognition with primarily temporal cues.", "text": "Nearly perfect speech recognition was observed under conditions of greatly reduced spectral information. Temporal envelopes of speech were extracted from broad frequency bands and were used to modulate noises of the same bandwidths. This manipulation preserved temporal envelope cues in each band but restricted the listener to severely degraded information on the distribution of spectral energy. The identification of consonants, vowels, and words in simple sentences improved markedly as the number of bands increased; high speech recognition performance was obtained with only three bands of modulated noise. Thus, the presentation of a dynamic temporal pattern in only a few broad spectral regions is sufficient for the recognition of speech."}
{"_id": "929a376c6fea1376baf40fc2979cfbdd867f03ab", "title": "Soft decoding of JPEG 2000 compressed images using bit-rate-driven deep convolutional neural networks", "text": "Lossy image compression methods always introduce various unpleasant artifacts into the compressed results, especially at low bit-rates. In recent years, many effective soft decoding methods for JPEG compressed images have been proposed. However, to the best of our knowledge, very few works have been done on soft decoding of JPEG 2000 compressed images. Inspired by the outstanding performance of Convolution Neural Network (CNN) in various computer vision tasks, we presents a soft decoding method for JPEG 2000 by using multiple bit-rate-driven deep CNNs. More specifically, in training stage, we train a series of deep CNNs using lots of high quality training images and the corresponding JPEG 2000 compressed images at different coding bit-rates. In testing stage, for an input compressed image, the CNN trained with the nearest coding bit-rate is selected to perform soft decoding. Extensive experiments demonstrate the effectiveness of the presented soft decoding framework, which greatly improves the visual quality and objective scores of JPEG 2000 compressed images."}
{"_id": "71b3f1d399f0d118df4fb877c7c703d9fda6e648", "title": "Satisfaction, trust and online purchase intention: A study of consumer perceptions", "text": "This paper tries to do the research of analyzing the relation of the customer trust, the customer satisfaction and purchase intention under the electronic commerce shopping environment. Drawing on the theory of customer trust, customer satisfaction and purchase intention, based on a survey of 102 respondents from the University in China, the United Kingdom, and the United States, the results showed that there is significant correlation between satisfaction and trust, satisfaction and purchase intention, trust and purchase intention, and more than 50% of respondents chose Taobao and Tmall, and the second is Amazon when they shopping onling. It could be expected that this kind of study is able to provide suggestion and guide for internet merchant."}
{"_id": "7176b952e09e8a83dcc41779237eea55d0e75c20", "title": "Discovering Temporal Retweeting Patterns for Social Media Marketing Campaigns", "text": "Social media has become one of the most popular marketing channels for many companies, which aims at maximizing their influence by various marketing campaigns conducted from their official accounts on social networks. However, most of these marketing accounts merely focus on the contents of their tweets. Less effort has been made on understanding tweeting time, which is a major contributing factor in terms of attracting customers' attention and maximizing the influence of a social marketing campaign. To that end, in this paper, we provide a focused study of temporal retweeting patterns and their influence on social media marketing campaigns. Specifically, we investigate the users' retweeting patterns by modeling their retweeting behaviors as a generative process, which considers temporal, social, and topical factors. Moreover, we validate the predictive power of the model on the dataset collected from Sina Weibo, the most popular micro blog platform in China. By discovering the temporal retweeting patterns, we analyze the temporal popular topics and recommend tweets to users in a time-aware manner. Finally, experimental results show that the proposed algorithm outperforms other baseline methods. This model is applicable for companies to conduct their marketing campaigns at the right time on social media."}
{"_id": "4b9c85a32ccebb851b94f6da17307461e50345a2", "title": "From learning models of natural image patches to whole image restoration", "text": "Learning good image priors is of utmost importance for the study of vision, computer vision and image processing applications. Learning priors and optimizing over whole images can lead to tremendous computational challenges. In contrast, when we work with small image patches, it is possible to learn priors and perform patch restoration very efficiently. This raises three questions - do priors that give high likelihood to the data also lead to good performance in restoration? Can we use such patch based priors to restore a full image? Can we learn better patch priors? In this work we answer these questions. We compare the likelihood of several patch models and show that priors that give high likelihood to data perform better in patch restoration. Motivated by this result, we propose a generic framework which allows for whole image restoration using any patch based prior for which a MAP (or approximate MAP) estimate can be calculated. We show how to derive an appropriate cost function, how to optimize it and how to use it to restore whole images. Finally, we present a generic, surprisingly simple Gaussian Mixture prior, learned from a set of natural images. When used with the proposed framework, this Gaussian Mixture Model outperforms all other generic prior methods for image denoising, deblurring and inpainting."}
{"_id": "0352ed76cb27ff9b0edecfcf9556bc1e19756e9e", "title": "Note-Taking With Computers : Exploring Alternative Strategies for Improved Recall", "text": "Three experiments examined note-taking strategies and their relation to recall. In Experiment 1, participants were instructed either to take organized lecture notes or to try and transcribe the lecture, and they either took their notes by hand or typed them into a computer. Those instructed to transcribe the lecture using a computer showed the best recall on immediate tests, and the subsequent experiments focused on note-taking using computers. Experiment 2 showed that taking organized notes produced the best recall on delayed tests. In Experiment 3, however, when participants were given the opportunity to study their notes, those who had tried to transcribe the lecture showed better recall on delayed tests than those who had taken organized notes. Correlational analyses of data from all 3 experiments revealed that for those who took organized notes, working memory predicted note-quantity, which predicted recall on both immediate and delayed tests. For those who tried to transcribe the lecture, in contrast, only note-quantity was a consistent predictor of recall. These results suggest that individuals who have poor working memory (an ability traditionally thought to be important for note-taking) can still take effective notes if they use a note-taking strategy (transcribing using a computer) that can help level the playing field for students of diverse cognitive abilities."}
{"_id": "cfa092829c4c7a42ec77ab6844661e1dae082172", "title": "Analytical Tools for Blockchain: Review, Taxonomy and Open Challenges", "text": "Bitcoin has introduced a new concept that could feasibly revolutionise the entire Internet as it exists, and positively impact on many types of industries including, but not limited to, banking, public sector and supply chain. This innovation is grounded on pseudo-anonymity and strives on its innovative decentralised architecture based on the blockchain technology. Blockchain is pushing forward a race of transaction-based applications with trust establishment without the need for a centralised authority, promoting accountability and transparency within the business process. However, a blockchain ledger (e.g., Bitcoin) tend to become very complex and specialised tools, collectively called \u201cBlockchain Analytics\u201d, are required to allow individuals, law enforcement agencies and service providers to search, explore and visualise it. Over the last years, several analytical tools have been developed with capabilities that allow, e.g., to map relationships, examine flow of transactions and filter crime instances as a way to enhance forensic investigations. This paper discusses the current state of blockchain analytical tools and presents a thematic taxonomy model based on their applications. It also examines open challenges for future development and research."}
{"_id": "0a7c4cec908ca18f76f5101578a2496a2dceb5e7", "title": "Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers", "text": "In the deep neural network (DNN), the hidden layers can be considered as increasingly complex feature transformations and the final softmax layer as a log-linear classifier making use of the most abstract features computed in the hidden layers. While the loglinear classifier should be different for different languages, the feature transformations can be shared across languages. In this paper we propose a shared-hidden-layer multilingual DNN (SHL-MDNN), in which the hidden layers are made common across many languages while the softmax layers are made language dependent. We demonstrate that the SHL-MDNN can reduce errors by 3-5%, relatively, for all the languages decodable with the SHL-MDNN, over the monolingual DNNs trained using only the language specific data. Further, we show that the learned hidden layers sharing across languages can be transferred to improve recognition accuracy of new languages, with relative error reductions ranging from 6% to 28% against DNNs trained without exploiting the transferred hidden layers. It is particularly interesting that the error reduction can be achieved for the target language that is in different families of the languages used to learn the hidden layers."}
{"_id": "3157ed1fbad482520ca87045b308446d8adbdedb", "title": "Communication-Efficient Learning of Deep Networks from Decentralized Data", "text": "Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks based on iterative model averaging, and conduct an extensive empirical evaluation, considering five different model architectures and four datasets. These experiments demonstrate the approach is robust to the unbalanced and non-IID data distributions that are a defining characteristic of this setting. Communication costs are the principal constraint, and we show a reduction in required communication rounds by 10\u2013100\u00d7 as compared to synchronized stochastic gradient descent."}
{"_id": "8e21d353ba283bee8fd18285558e5e8df39d46e8", "title": "Federated Meta-Learning for Recommendation", "text": "Recommender systems have been widely studied from the machine learning perspective, where it is crucial to share information among users while preserving user privacy. In this work, we present a federated meta-learning framework for recommendation in which user information is shared at the level of algorithm, instead of model or data adopted in previous approaches. In this framework, user-specific recommendation models are locally trained by a shared parameterized algorithm, which preserves user privacy and at the same time utilizes information from other users to help model training. Interestingly, the model thus trained exhibits a high capacity at a small scale, which is energyand communicationefficient. Experimental results show that recommendation models trained by meta-learning algorithms in the proposed framework outperform the state-of-the-art in accuracy and scale. For example, on a production dataset, a shared model under Google Federated Learning (McMahan et al., 2017) with 900,000 parameters has prediction accuracy 76.72%, while a shared algorithm under federated meta-learning with less than 30,000 parameters achieves accuracy of 86.23%."}
{"_id": "9552ac39a57daacf3d75865a268935b5a0df9bbb", "title": "Neural networks and principal component analysis: Learning from examples without local minima", "text": "We consider the problem of learning from examples in layered linear feed-forward neural networks using optimization methods, such as back propagation, with respect to the usual quadratic error function E of the connection weights. Our main result is a complete description of the landscape attached to E in terms of principal component analysis. We show that E has a unique minimum corresponding to the projection onto the subspace generated by the first principal vectors of a covariance matrix associated with the training patterns. All the additional critical points of E are saddle points (corresponding to projections onto subspaces generated by higher order vectors). The auto-associative case is examined in detail. Extensions and implications for the learning algorithms are discussed."}
{"_id": "4f803c5d435bef1985477366231503f10739fe11", "title": "A CORDIC processor for FFT computation and its implementation using gallium arsenide technology", "text": "In this paper, the architecture and the implementation of a complex fast Fourier transform (CFFT) processor using 0.6 m gallium arsenide (GaAs) technology are presented. This processor computes a 1024-point FFT of 16 bit complex data in less than 8 s, working at a frequency beyond 700 MHz, with a power consumption of 12.5 W. The architecture of the processor is based on the COordinate Rotation DIgital Computer (CORDIC) algorithm, which avoids the use of conventional multiplicationand-accumulation (MAC) units, but evaluates the trigonometric functions using only add and shift operations. Improvements to the basic CORDIC architecture are introduced in order to reduce the area and power of the processor. This together with the use of pipelining and carry save adders produces a very regular and fast processor. The CORDIC units were fabricated and tested in order to anticipate the final performance of the processor. This work also demonstrates the maturity of GaAs technology for implementing ultrahigh-performance signal processors."}
{"_id": "d2960e97d00be0e50577914811668fb05fc46bc4", "title": "A current-starved inverter-based differential amplifier design for ultra-low power applications", "text": "As silicon feature sizes decrease, more complex circui try arrays can now be contrived on a single die. This increase in the number of on-chip devices per unit area results in increased power dissipation per unit area. In order to meet certain power and operating temperature specifications, circuit design necessitates a focus on power efficiency, which is especially important in systems employing hundreds or thousands of instances of the same device. In large arrays, a slight increase in the power efficiency of a single component is heightened by the number of instances of the device in the system. This paper proposes a fully differential, low-power current-starving inverter-based amplifier topology designed in a commercial 0.18\u03bcm process. This design achieves 46dB DC gain and a 464 kHz uni ty gain frequency with a power consumption of only 145.32nW at 700mV power supply vol tage for ultra-low power, low bandwidth applications. Higher bandwidth designs are also proposed, including a 48dB DC gain, 2.4 MHz unity-gain frequency amplifier operating at 900mV wi th only 3.74\u03bcW power consumption."}
{"_id": "82256dd95c21bcc336d01437eabba7470d92188b", "title": "Conditional connectivity", "text": "For a noncomplete graph G , the traditional definition of its connectivity is the minimum number of points whose removal results in a disconnected subgraph with components H , , . . . , H k . The conditional connectivity of G with respect to some graphtheoretic property P is the smallest cardinality of a set S of points, if any, such that every component Hi of the disconnected graph G S has property P. A survey of promising properties P is presented. Questions for various P-connectivities are listed in analogy with known results on connectivity and line-connectivity."}
{"_id": "7957fad0ddbe7323da83d00b094caedb8eb1a473", "title": "A review of mangrove rehabilitation in the Philippines: successes, failures and future prospects", "text": "From half a million hectares at the turn of the century, Philippine mangroves have declined to only 120,000\u00a0ha while fish/shrimp culture ponds have increased to 232,000\u00a0ha. Mangrove replanting programs have thus been popular, from community initiatives (1930s\u20131950s) to government-sponsored projects (1970s) to large-scale international development assistance programs (1980s to present). Planting costs escalated from less than US$100 to over $500/ha, with half of the latter amount allocated to administration, supervision and project management. Despite heavy funds for massive rehabilitation of mangrove forests over the last two decades, the long-term survival rates of mangroves are generally low at 10\u201320%. Poor survival can be mainly traced to two factors: inappropriate species and site selection. The favored but unsuitable Rhizophora are planted in sandy substrates of exposed coastlines instead of the natural colonizers Avicennia and Sonneratia. More significantly, planting sites are generally in the lower intertidal to subtidal zones where mangroves do not thrive rather than the optimal middle to upper intertidal levels, for a simple reason. Such ideal sites have long been converted to brackishwater fishponds whereas the former are open access areas with no ownership problems. The issue of pond ownership may be complex and difficult, but such should not outweigh ecological requirements: mangroves should be planted where fishponds are, not on seagrass beds and tidal flats where they never existed. This paper reviews eight mangrove initiatives in the Philippines and evaluates the biophysical and institutional factors behind success or failure. The authors recommend specific protocols (among them pushing for a 4:1 mangrove to pond ratio recommended for a healthy ecosystem) and wider policy directions to make mangrove rehabilitation in the country more effective."}
{"_id": "27434a48e7e10b011c862c297d5c29110816ec5c", "title": "What characterizes a shadow boundary under the sun and sky?", "text": "Despite decades of study, robust shadow detection remains difficult, especially within a single color image. We describe a new approach to detect shadow boundaries in images of outdoor scenes lit only by the sun and sky. The method first extracts visual features of candidate edges that are motivated by physical models of illumination and occluders. We feed these features into a Support Vector Machine (SVM) that was trained to discriminate between most-likely shadow-edge candidates and less-likely ones. Finally, we connect edges to help reject non-shadow edge candidates, and to encourage closed, connected shadow boundaries. On benchmark shadow-edge data sets from Lalonde et al. and Zhu et al., our method showed substantial improvements when compared to other recent shadow-detection methods based on statistical learning."}
{"_id": "353c4ab6943b671abffd9503f16c34706bc44fdb", "title": "3D distance fields: a survey of techniques and applications", "text": "A distance field is a representation where, at each point within the field, we know the distance from that point to the closest point on any object within the domain. In addition to distance, other properties may be derived from the distance field, such as the direction to the surface, and when the distance field is signed, we may also determine if the point is internal or external to objects within the domain. The distance field has been found to be a useful construction within the areas of computer vision, physics, and computer graphics. This paper serves as an exposition of methods for the production of distance fields, and a review of alternative representations and applications of distance fields. In the course of this paper, we present various methods from all three of the above areas, and we answer pertinent questions such as How accurate are these methods compared to each other? How simple are they to implement?, and What is the complexity and runtime of such methods?."}
{"_id": "385b401bf75771b02b5721641ae04ace43d2bbcd", "title": "Large-Scale Supervised Multimodal Hashing with Semantic Correlation Maximization", "text": "Due to its low storage cost and fast query speed, hashing has been widely adopted for similarity search in multimedia data. In particular, more and more attentions have been payed to multimodal hashing for search in multimedia data with multiple modalities, such as images with tags. Typically, supervised information of semantic labels is also available for the data points in many real applications. Hence, many supervised multimodal hashing (SMH) methods have been proposed to utilize such semantic labels to further improve the search accuracy. However, the training time complexity of most existing SMH methods is too high, which makes them unscalable to large-scale datasets. In this paper, a novel SMH method, called semantic correlation maximization (SCM), is proposed to seamlessly integrate semantic labels into the hashing learning procedure for large-scale data modeling. Experimental results on two real-world datasets show that SCM can significantly outperform the state-of-the-art SMH methods, in terms of both accuracy and scalability."}
{"_id": "ff8b0e6ac8e478e7332edb2934fc943f3f826091", "title": "Noise radar using random phase and frequency modulation", "text": "Pulse compression radar is used in a great number of applications. Excellent range resolution and high electronic counter-countermeasures performance is achieved by wideband long pulses, which spread out the transmitted energy in frequency and time. By using a random noise waveform, the range ambiguity is suppressed as well. In most applications, the random signal is transmitted directly from a noise-generating microwave source. A sine wave, which is phase or frequency modulated by random noise, is an alternative, and in this paper, the ambiguity function and the statistical characteristics of the correlation output for the latter configuration are further analyzed. Range resolution is then improved because the noise bandwidth of the modulated carrier is wider than that of the modulating signal, and the range sidelobes are also further suppressed. Random biphase modulation gives a 4-dB (/spl pi//sup 2//4) improvement, but much higher sidelobe suppression could be achieved using continuous phase/frequency modulation. Due to the randomness of the waveform, the output correlation integral is accompanied by a noise floor, which limits the possible sidelobe suppression as determined by the time-bandwidth product. In synthetic aperture radar (SAR) applications with distributed targets, this product should be large compared with the number of resolution elements inside the antenna main beam. The advantages of low range sidelobes and enhanced range resolution make frequency/phase-modulated noise radar attractive for many applications, including SAR mapping, surveillance, altimetry, and scatterometry. Computer algorithms for reference signal delay and compression are discussed as replacements for the classical delay line implementation."}
{"_id": "1866c1e44f0003946f6a27def74d768d0f66799d", "title": "Collusive Piracy Prevention in P2P Content Delivery Networks", "text": "Collusive piracy is the main source of intellectual property violations within the boundary of a P2P network. Paid clients (colluders) may illegally share copyrighted content files with unpaid clients (pirates). Such online piracy has hindered the use of open P2P networks for commercial content delivery. We propose a proactive content poisoning scheme to stop colluders and pirates from alleged copyright infringements in P2P file sharing. The basic idea is to detect pirates timely with identity-based signatures and time-stamped tokens. The scheme stops collusive piracy without hurting legitimate P2P clients by targeting poisoning on detected violators, exclusively. We developed a new peer authorization protocol (PAP) to distinguish pirates from legitimate clients. Detected pirates will receive poisoned chunks in their repeated attempts. Pirates are thus severely penalized with no chance to download successfully in tolerable time. Based on simulation results, we find 99.9 percent prevention rate in Gnutella, KaZaA, and Freenet. We achieved 85-98 percent prevention rate on eMule, eDonkey, Morpheus, etc. The scheme is shown less effective in protecting some poison-resilient networks like BitTorrent and Azureus. Our work opens up the low-cost P2P technology for copyrighted content delivery. The advantage lies mainly in minimum delivery cost, higher content availability, and copyright compliance in exploring P2P network resources."}
{"_id": "f4d51f5a4af851fb33f32c35712be81648f6d0c8", "title": "RanDroid: Android malware detection using random machine learning classifiers", "text": "The growing polularity of Android based smartphone attracted the distribution of malicious applications developed by attackers which resulted the need for sophisticated malware detection techniques. Several techniques are proposed which use static and/or dynamic features extracted from android application to detect malware. The use of machine learning is adapted in various malware detection techniques to overcome the mannual updation overhead. Machine learning classifiers are widely used to model Android malware patterns based on their static features and dynamic behaviour. To address the problem of malware detection, in this paper we have proposed a machine learning-based malware detection system for Android platform. Our proposed system utilizes the features of collected random samples of goodware and malware apps to train the classifiers. The system extracts requested permissions, vulnerable API calls along with the existence of app's key information such as; dynamic code, reflection code, native code, cryptographic code and database from applications, which was missing in previous proposed solutions and uses them as features in various machine learning classifiers to build classification model. To validate the performance of proposed system, \"RanDroid\" various experiments have been carriedout, which show that the RanDroid is capable to achieve a high classification accuracy of 97.7 percent."}
{"_id": "5be24830fae471c496a38fdbac48872011a7de71", "title": "Accelerating Asynchronous Stochastic Gradient Descent for Neural Machine Translation", "text": "In order to extract the best possible performance from asynchronous stochastic gradient descent one must increase the mini-batch size and scale the learning rate accordingly. In order to achieve further speedup we introduce a technique that delays gradient updates effectively increasing the mini-batch size. Unfortunately with the increase of mini-batch size we worsen the stale gradient problem in asynchronous stochastic gradient descent (SGD) which makes the model convergence poor. We introduce local optimizers which mitigate the stale gradient problem and together with fine tuning our momentum we are able to train a shallow machine translation system 27% faster than an optimized baseline with negligible penalty in BLEU."}
{"_id": "7060f6062ba1cbe9502eeaaf13779aa1664224bb", "title": "A Glimpse Far into the Future: Understanding Long-term Crowd Worker Quality", "text": "Microtask crowdsourcing is increasingly critical to the creation of extremely large datasets. As a result, crowd workers spend weeks or months repeating the exact same tasks, making it necessary to understand their behavior over these long periods of time. We utilize three large, longitudinal datasets of nine million annotations collected from Amazon Mechanical Turk to examine claims that workers fatigue or satisfice over these long periods, producing lower quality work. We find that, contrary to these claims, workers are extremely stable in their quality over the entire period. To understand whether workers set their quality based on the task's requirements for acceptance, we then perform an experiment where we vary the required quality for a large crowdsourcing task. Workers did not adjust their quality based on the acceptance threshold: workers who were above the threshold continued working at their usual quality level, and workers below the threshold self-selected themselves out of the task. Capitalizing on this consistency, we demonstrate that it is possible to predict workers' long-term quality using just a glimpse of their quality on the first five tasks."}
{"_id": "ea4c0f890be5f8d8a661b6aab65dc5fe9c29106d", "title": "Robust Cross-lingual Hypernymy Detection using Dependency Context", "text": "Cross-lingual Hypernymy Detection involves determining if a word in one language (\u201cfruit\u201d) is a hypernym of a word in another language (\u201cpomme\u201d i.e. apple in French). The ability to detect hypernymy cross-lingually can aid in solving cross-lingual versions of tasks such as textual entailment and event coreference. We propose BISPARSE-DEP, a family of unsupervised approaches for cross-lingual hypernymy detection, which learns sparse, bilingual word embeddings based on dependency contexts. We show that BISPARSE-DEP can significantly improve performance on this task, compared to approaches based only on lexical context. Our approach is also robust, showing promise for low-resource settings: our dependency-based embeddings can be learned using a parser trained on related languages, with negligible loss in performance. We also crowd-source a challenging dataset for this task on four languages \u2013 Russian, French, Arabic, and Chinese. Our embeddings and datasets are publicly available.1"}
{"_id": "101c6852b5bcf66a60ea774a6ffb04ef41a8ca13", "title": "Broadband and compact impedance transformers for microwave circuits", "text": "Most microwave circuits and antennas use impedance transformers. In a conventional power amplifier, large number of inductors and capacitor or transmission line sections are use to realize an impedance transformer. This article presents an asymmetric broadside-coupled microstrip TLT (transmission line transformer) on the GaAs substrate. This matching network can be easily implemented using any monolithic microwave integrated circuit (MMIC) process supporting multilevel metallization"}
{"_id": "2e5fadbaab27af0c2b5cc6a3481c11b2b83c4f94", "title": "Seeing Behind the Camera: Identifying the Authorship of a Photograph", "text": "We introduce the novel problem of identifying the photographer behind a photograph. To explore the feasibility of current computer vision techniques to address this problem, we created a new dataset of over 180,000 images taken by 41 well-known photographers. Using this dataset, we examined the effectiveness of a variety of features (low and high-level, including CNN features) at identifying the photographer. We also trained a new deep convolutional neural network for this task. Our results show that high-level features greatly outperform low-level features. We provide qualitative results using these learned models that give insight into our method's ability to distinguish between photographers, and allow us to draw interesting conclusions about what specific photographers shoot. We also demonstrate two applications of our method."}
{"_id": "3f4f00024beda5436dce4d677f27fe4209c3790f", "title": "Participatory Air Pollution Monitoring Using Smartphones", "text": "Air quality monitoring is extremely important as air pollution has a direct impact on human health. In this paper we introduce a low-power and low-cost mobile sensing system for participatory air quality monitoring. In contrast to traditional stationary air pollution monitoring stations, we present the design, implementation, and evaluation of GasMobile, a small and portable measurement system based on off-the-shelf components and suited to be used by a large number of people. Vital to the success of participatory sensing applications is a high data quality. We improve measurement accuracy by (i) exploiting sensor readings near governmental measurement stations to keep sensor calibration up to date and (ii) analyzing the effect of mobility on the accuracy of the sensor readings to give user advice on measurement execution. Finally, we show that it is feasible to use GasMobile to create collective high-resolution air pollution maps."}
{"_id": "a0015665acb00ddf1491aec436e47cfb75835b82", "title": "Bluetooth based home automation system", "text": "The past decade has seen significant advancement in the field of consumer electronics. Various \u2018intelligent\u2019 appliances such as cellular phones, air-conditioners, home security devices, home theatres, etc. are set to realize the concept of a smart home. They have given rise to a Personal Area Network in home environment, where all these appliances can be interconnected and monitored using a single controller. Busy families and individuals with physical limitation represent an attractive market for home automation and networking. A wireless home network that does not incur additional costs of wiring would be desirable. Bluetooth technology, which has emerged in late 1990s, is an ideal solution for this purpose. This paper describes an application of Bluetooth technology in home automation and networking environment. It proposes a network, which contains a remote, mobile host controller and several client modules (home appliances). The client modules communicate with the host controller through Bluetooth devices. q 2002 Elsevier Science B.V. All rights reserved."}
{"_id": "796a6e78d4c4b63926ee956f202d874a8c4542b0", "title": "OCNet: Object Context Network for Scene Parsing", "text": "In this paper, we address the problem of scene parsing with deep learning and focus on the context aggregation strategy for robust segmentation. Motivated by that the label of a pixel is the category of the object that the pixel belongs to, we introduce an object context pooling (OCP) scheme, which represents each pixel by exploiting the set of pixels that belong to the same object category with such a pixel, and we call the set of pixels as object context. Our implementation, inspired by the self-attention approach, consists of two steps: (i) compute the similarities between each pixel and all the pixels, forming a socalled object context map for each pixel served as a surrogate for the true object context, and (ii) represent the pixel by aggregating the features of all the pixels weighted by the similarities. The resulting representation is more robust compared to existing context aggregation schemes, e.g., pyramid pooling modules (PPM) in PSPNet and atrous spatial pyramid pooling (ASPP), which do not differentiate the context pixels belonging to the same object category or not, making the reliability of contextually aggregated representations limited. We empirically demonstrate our approach and two pyramid extensions with state-ofthe-art performance on three semantic segmentation benchmarks: Cityscapes, ADE20K and LIP. Code has been made available at: https://github.com/PkuRainBow/ OCNet.pytorch."}
{"_id": "339261e9fb670beeec379cfab65aeb728d5aecc0", "title": "RETOS: Resilient, Expandable, and Threaded Operating System for Wireless Sensor Networks", "text": "This paper presents the design principles, implementation, and evaluation of the RETOS operating system which is specifically developed for micro sensor nodes. RETOS has four distinct objectives, which are to provide (1) a multithreaded programming interface, (2) system resiliency, (3) kernel extensibility with dynamic reconfiguration, and (4) WSN-oriented network abstraction. RETOS is a multithreaded operating system, hence it provides the commonly used thread model of programming interface to developers. We have used various implementation techniques to optimize the performance and resource usage of multithreading. RETOS also provides software solutions to separate kernel from user applications, and supports their robust execution on MMU-less hardware. The RETOS kernel can be dynamically reconfigured, via loadable kernel framework, so a application-optimized and resource-efficient kernel is constructed. Finally, the networking architecture in RETOS is designed with a layering concept to provide WSN-specific network abstraction. RETOS currently supports Atmel ATmega128, TI MSP430, and Chipcon CC2430 family of microcontrollers. Several real-world WSN applications are developed for RETOS and the overall evaluation of the systems is described in the paper."}
{"_id": "5107aafa77a8ac9bec0cb0cb74dbef28a130dd18", "title": "MRTouch: Adding Touch Input to Head-Mounted Mixed Reality", "text": "We present MRTouch, a novel multitouch input solution for head-mounted mixed reality systems. Our system enables users to reach out and directly manipulate virtual interfaces affixed to surfaces in their environment, as though they were touchscreens. Touch input offers precise, tactile and comfortable user input, and naturally complements existing popular modalities, such as voice and hand gesture. Our research prototype combines both depth and infrared camera streams together with real-time detection and tracking of surface planes to enable robust finger-tracking even when both the hand and head are in motion. Our technique is implemented on a commercial Microsoft HoloLens without requiring any additional hardware nor any user or environmental calibration. Through our performance evaluation, we demonstrate high input accuracy with an average positional error of 5.4 mm and 95% button size of 16 mm, across 17 participants, 2 surface orientations and 4 surface materials. Finally, we demonstrate the potential of our technique to enable on-world touch interactions through 5 example applications."}
{"_id": "6b7edd8d60d28c9a2fd0ce8a8a931e9487c6dfbe", "title": "Aneka Cloud Application Platform and Its Integration with Windows Azure", "text": "Aneka is an Application Platform-as-a-Service (Aneka PaaS) for Cloud Computing. It acts as a framework for building customized applications and deploying them on either public or private Clouds. One of the key features of Aneka is its support for provisioning resources on different public Cloud providers such as Amazon EC2, Windows Azure and GoGrid. In this chapter, we will present Aneka platform and its integration with one of the public Cloud infrastructures, Windows Azure, which enables the usage of Windows Azure Compute Service as a resource provider of Aneka PaaS. The integration of the two platforms will allow users to leverage the power of Windows Azure Platform for Aneka Cloud Computing, employing a large number of compute instances to run their applications in parallel. Furthermore, customers of the Windows Azure platform can benefit from the integration with Aneka PaaS by embracing the advanced features of Aneka in terms of multiple programming models, scheduling and management services, application execution services, accounting and pricing services and dynamic provisioning services. Finally, in addition to the Windows Azure Platform we will illustrate in this chapter the integration of Aneka PaaS with other public Cloud platforms such as Amazon EC2 and GoGrid, and virtual machine management platforms such as Xen Server. The new support of provisioning resources on Windows Azure once again proves the adaptability, extensibility and flexibility of Aneka."}
{"_id": "e2f28568031e1902d4f8ee818261f0f2c20de6dd", "title": "Distributional Semantics Resources for Biomedical Text Processing", "text": "The openly available biomedical literature contains over 5 billion words in publication abstracts and full texts. Recent advances in unsupervised language processing methods have made it possible to make use of such large unannotated corpora for building statistical language models and inducing high quality vector space representations, which are, in turn, of utility in many tasks such as text classification, named entity recognition and query expansion. In this study, we introduce the first set of such language resources created from analysis of the entire available biomedical literature, including a dataset of all 1to 5-grams and their probabilities in these texts and new models of word semantics. We discuss the opportunities created by these resources and demonstrate their application. All resources introduced in this study are available under open licenses at http://bio.nlplab.org."}
{"_id": "a20654ad8f79b08bf8f29ce66951cacba10812f6", "title": "Optimization algorithm using scrum process", "text": "Scrum process is methodology for software development. Members in a scrum team have self-organizing team by planning and sharing knowledge. This paper introduces optimization algorithm using the population as scrum team doing the scrum process to find an optimum solution. The proposed algorithm maintains the level of exploration and the exploitation search by specific of the scrum-team. The experiment has compared the proposed approach with GA and PSO by finding an optimal solution of five numerical functions. The experiment result indicates that the proposed algorithm provides the best solution and finds the result quickly."}
{"_id": "25b6818743a6c0b9502a1c026c653038ff505c09", "title": "Evolving a Heuristic Function for the Game of Tetris", "text": ""}
{"_id": "467b58fff03e85146cf8342d381c1d1b8836da85", "title": "Relations Among Loneliness, Social Anxiety, and Problematic Internet Use", "text": "The model of problematic Internet use advanced and tested in the current study proposes that individuals' psychosocial well-being, along with their beliefs about interpersonal communication (both face-to-face and online) are important cognitive predictors of negative outcomes arising from Internet use. The study examined the extent to which social anxiety explains results previously attributed to loneliness as a predictor of preference for online social interaction and problematic Internet use. The results support the hypothesis that the relationship between loneliness and preference for online social interaction is spurious, and that social anxiety is the confounding variable."}
{"_id": "5829ac859bd2b301cfd53a44194aecb884e99355", "title": "The comorbid psychiatric symptoms of Internet addiction: attention deficit and hyperactivity disorder (ADHD), depression, social phobia, and hostility.", "text": "PURPOSE\nTo: (1) determine the association between Internet addiction and depression, self-reported symptoms of attention deficit and hyperactivity disorder (ADHD), social phobia, and hostility for adolescents; and (2) evaluate the sex differences of association between Internet addiction and the above-mentioned psychiatric symptoms among adolescents.\n\n\nMETHODS\nA total of 2114 students (1204 male and 910 female) were recruited for the study. Internet addiction, symptoms of ADHD, depression, social phobia, and hostility were evaluated by the self-report questionnaire.\n\n\nRESULTS\nThe results demonstrated that adolescents with Internet addiction had higher ADHD symptoms, depression, social phobia, and hostility. Higher ADHD symptoms, depression, and hostility are associated with Internet addiction in male adolescents, and only higher ADHD symptoms and depression are associated with Internet addiction in female students.\n\n\nCONCLUSION\nThese results suggest that Internet addiction is associated with symptoms of ADHD and depressive disorders. However, hostility was associated with Internet addiction only in males. Effective evaluation of, and treatment for ADHD and depressive disorders are required for adolescents with Internet addiction. More attention should be paid to male adolescents with high hostility in intervention of Internet addiction."}
{"_id": "04b309af262cf2643daa93a34c1ba177cd6e7a85", "title": "Internet Addiction: The Emergence of a New Clinical Disorder", "text": "Anecdotal reports indicated that some on-line users were becoming addicted to the Internet in much that same way that others became addicted to drugs or alcohol which resulted in academic, social, and occupational impairment. However, research among sociologists, psychologists, or psychiatrists has not formally identified addictive use of the Internet as a problematic behavior. This study investigated the existence of Internet addiction and the extent of problems caused by such potential misuse. This study utilized an adapted version of the criteria for pathological gambling defined by the DSM-IV (APA, 1994). On the basis of this criteria, case studies of 396 dependent Internet users (Dependents) and a control group of 100 non-dependent Internet users (Non-Dependents) were classified. Qualitative analyses suggests significant behavioral and functional usage differences between the two groups. Clinical and social implications of pathological Internet use and future directions for research are discussed."}
{"_id": "b6f2c4f2bc1ddbeb36a0fb9d7a8ee2e74264a5ca", "title": "Internet addiction in Korean adolescents and its relation to depression and suicidal ideation: a questionnaire survey.", "text": "This study examined the relationship of Internet addiction to depression and suicidal ideation in Korean adolescents. The participants were 1573 high-school students living in a city who completed the self-reported measures of the Internet Addiction Scale, the Korean version of the Diagnostic Interview Schedule for Children-Major Depression Disorder-Simple Questionnaire, and the Suicidal Ideation Questionnaire-Junior. A correlational survey design was employed. Among the samples, 1.6% was diagnosed as Internet addicts, while 38.0% was classified as possible Internet addicts. The prevalence of Internet addiction did not vary with gender. The levels of depression and suicide ideation were highest in the Internet-addicts group. Future studies should investigate the direct relationship between psychological health problems and Internet dependency."}
{"_id": "bdd7d0a098365a0a83cc15c11d708362fd2d748b", "title": "Internet addiction among Chinese adolescents: prevalence and psychological features.", "text": "BACKGROUND\nTo investigate the prevalence of Internet addiction among Chinese adolescents and to explore the psychological features associated with Internet addiction.\n\n\nMETHODS\nA total of 2620 high school students from four high schools in Changsha City were surveyed using Diagnostic Questionnaire for Internet Addiction (YDQ), Eysenck Personality Questionnaire (the edition for children, EPQ), Time Management Disposition Scale (TMDS) and Strengths and Difficulties Questionnaire (SDQ). The mean age of whole sample was 15.19 years (ranging from 12 years to 18 years). According to the modified YDQ criteria by Beard, 64 students who were diagnosed as Internet addiction (the mean age: 14.59 years) and 64 who were diagnosed as being normal in Internet usage (the mean age: 14.81 years) were included in a case-control study.\n\n\nRESULTS\nThe rate of Internet use among the surveyed adolescents was 88%, among which the incidence rate of Internet addiction was 2.4%. The Internet addiction group had significantly higher scores on the EPQ subscales of neuroticism, psychoticism, and lie than the control group (P < 0.05). The Internet addiction group scored lower than the control group on the TMDS subscales of sense of control over time, sense of value of time, and sense of time efficacy (P < 0.05). Compared with the control group, the Internet addiction group had also significantly higher scores on the SDQ subscales of emotional symptoms, conduct problems, hyperactivity, total difficulties and lower scores on the subscale of prosocial behaviours (P < 0.05).\n\n\nCONCLUSIONS\nThe present study suggests that Internet addiction is not rare among Chinese adolescents. In addition, adolescents with Internet addiction possess different psychological features when compared with those who use the Internet less frequently."}
{"_id": "416c1b1bd27032506512d27f07756e96ed1d5af0", "title": "Hardware resource estimation for heterogeneous FPGA-based SoCs", "text": "The increasing complexity of recent System-on-Chip (SoC) designs introduces new challenges for design space exploration tools. In addition to the time-to-market challenge, designers need to estimate rapidly and accurately both performance and area occupation of complex and diverse applications. High-Level Synthesis (HLS) has been emerged as an attractive solution for designers to address these challenges in order to explore a large number of SoC configurations. In this paper, we target hybrid CPU-FPGA based SoCs. We propose a high-level area estimation tool based on an analytic model without requiring register-transfer level (RTL) implementations. This technique allows to estimate the required FPGA resources at the source code level to map an application to a hybrid CPU-FPGA system. The proposed model also enables a fast design exploration with different trade-offs through HLS optimization pragmas. Experimental results show that the proposed area analytic model provides an accurate estimation with a negligible error (less than 5+) compared to RTL implementations."}
{"_id": "6b25fe8edc92e31973238ad33a40b06d26362c73", "title": "No double-dissociation between optic ataxia and visual agnosia: Multiple sub-streams for multiple visuo-manual integrations", "text": "The current dominant view of the visual system is marked by the functional and anatomical dissociation between a ventral stream specialised for perception and a dorsal stream specialised for action. The \"double-dissociation\" between visual agnosia (VA), a deficit of visual recognition, and optic ataxia (OA), a deficit of visuo-manual guidance, considered as consecutive to ventral and dorsal damage, respectively, has provided the main argument for this dichotomic view. In the first part of this paper, we show that the currently available empirical data do not suffice to support a double-dissociation between OA and VA. In the second part, we review evidence coming from human neuropsychology and monkey data, which cast further doubts on the validity of a simple double-dissociation between perception and action because they argue for a far more complex organisation with multiple parallel visual-to-motor connections: 1. A dorso-dorsal pathway (involving the most dorsal part of the parietal and pre-motor cortices): for immediate visuo-motor control--with OA as typical disturbance. The latest research about OA is reviewed, showing how these patients exhibit deficits restricted to the most direct and fast visuo-motor transformations. We also propose that mild mirror ataxia, consisting of misreaching errors when the controlesional hand is guided to a visual goal though a mirror, could correspond to OA with an isolated \"hand effect\". 2. A ventral stream-prefrontal pathway (connections from the ventral visual stream to pre-frontal areas, by-passing the parietal areas): for \"mediate\" control (involving spatial or temporal transpositions [Rossetti, Y., & Pisella, L. (2003). Mediate responses as direct evidence for intention: Neuropsychology of Not to-, Not now- and Not there-tasks. In S. Johnson (Ed.), Cognitive Neuroscience perspectives on the problem of intentional action (pp. 67-105). MIT Press.])--with VA as typical disturbance. Preserved visuo-manual guidance in patients with VA is restricted to immediate goal-directed guidance, they exhibit deficits for delayed or pantomimed actions. 3. A ventro-dorsal pathway (involving the more ventral part of the parietal lobe and the pre-motor and pre-frontal areas): for complex planning and programming relying on high representational levels with a more bilateral organisation or an hemispheric lateralisation--with mirror apraxia, limb apraxia and spatial neglect as representatives. Mirror apraxia is a deficit that affects both hands after unilateral inferior parietal lesion with the patients reaching systematically and repeatedly toward the virtual image in the mirror. Limb apraxia is localized on a more advanced conceptual level of object-related actions and results from deficient integrative, computational and \"working memory\" capacities of the left inferior parietal lobule. A component of spatial working memory has recently been revealed also in spatial neglect consecutive to lesion involving the network of the right inferior parietal lobule and the right frontal areas. We conclude by pointing to the differential temporal constraints and integrative capabilities of these parallel visuo-motor pathways as keys to interpret the neuropsychological deficits."}
{"_id": "e0909761a727d5eaee0cf63c3374e5c59bb5e6a7", "title": "Mechanisms of decompensation and organ failure in cirrhosis: From peripheral arterial vasodilation to systemic inflammation hypothesis.", "text": "The peripheral arterial vasodilation hypothesis has been most influential in the field of cirrhosis and its complications. It has given rise to hundreds of pathophysiological studies in experimental and human cirrhosis and is the theoretical basis of life-saving treatments. It is undisputed that splanchnic arterial vasodilation contributes to portal hypertension and is the basis for manifestations such as ascites and hepatorenal syndrome, but the body of research generated by the hypothesis has revealed gaps in the original pathophysiological interpretation of these complications. The expansion of our knowledge on the mechanisms regulating vascular tone, inflammation and the host-microbiota interaction require a broader approach to advanced cirrhosis encompassing the whole spectrum of its manifestations. Indeed, multiorgan dysfunction and failure likely result from a complex interplay where the systemic spread of bacterial products represents the primary event. The consequent activation of the host innate immune response triggers endothelial molecular mechanisms responsible for arterial vasodilation, and also jeopardizes organ integrity with a storm of pro-inflammatory cytokines and reactive oxygen and nitrogen species. Thus, the picture of advanced cirrhosis could be seen as the result of an inflammatory syndrome in contradiction with a simple hemodynamic disturbance."}
{"_id": "c2db182624de2297ac97a15727dab7292410228f", "title": "Single Image Shadow Removal via Neighbor-Based Region Relighting", "text": "In this paper we present a novel method for shadow removal in single images. For each shadow region we use a trained classifier to identify a neighboring lit region of the same material. Given a pair of lit-shadow regions we perform a region relighting transformation based on histogram matching of luminance values between the shadow region and the lit region. Then, we adjust the CIELAB a and b channels of the shadow region by adding constant offsets based on the difference of the median shadow and lit pixel values. We demonstrate that our approach produces results that outperform the state of the art by evaluating our method using a publicly available benchmark dataset."}
{"_id": "4f48b6a243edc297f1e282e34c255d5029403116", "title": "Electric grid state estimators for distribution systems with microgrids", "text": "In the development of smart grid, state estimation in distribution systems will likely face more challenges than that at the transmission level. This paper addresses one of such challenges, namely, the absence of topology information, by developing a forecast-aided topology change detection method and an event-triggered recursive Bayesian estimator to identify the correct topology. Simulation studies with microgrid induced changes are presented to illustrate the effectiveness of the proposed algorithm."}
{"_id": "ab293bf07ac0971cd859595d86dd71c2fed5be9d", "title": "Thumb Motor Performance Varies by Movement Orientation, Direction, and Device Size During Single-Handed Mobile Phone Use", "text": "OBJECTIVE\nThe aim of this study was to determine if thumb motor performance metrics varied by movement orientation, direction, and device size during single-handed use of a mobile phone device.\n\n\nBACKGROUND\nWith the increased use of mobile phones, understanding how design factors affect and improve performance can provide better design guidelines.\n\n\nMETHOD\nA repeated measures laboratory experiment of 20 right-handed participants measured the thumb tip's 3-D position relative to a phone during reciprocal tapping tasks across four phone designs and four thumb tip movement orientations. Each movement orientation included two movement directions: an \"outward\" direction consisting in CMC (carpometacarpal) joint flexion or abduction movements and an \"inward\" direction consisting in CMC joint extension or adduction movements. Calculated metrics of the thumb's motor performance were Fitts' effective width and index of performance.\n\n\nRESULTS\nIndex of performance varied significantly across phones, with performance being generally better for the smaller devices. Performance was also significantly higher for adduction-abduction movement orientations compared to flexion-extension, and for \"outward\" compared to \"inward\" movement directions.\n\n\nCONCLUSION\nFor single-handed device use, adduction-abduction-type movements on smaller phones lead to better thumb performance.\n\n\nAPPLICATION\nThe results from this study can be used to design new mobile phone devices and keypad interfaces that optimize specific thumb motions to improve the user-interface experience during single-handed use."}
{"_id": "5fc9065fe9fabc76445e8a9bc2438d0440d21225", "title": "Elliptic Curve Cryptosystems", "text": "The application of elliptic curves to the field of cryptography has been relatively recent. It has opened up a wealth of possibilities in terms of security, encryption, and real-world applications. In particular, we are interested in public-key cryptosystems that use the elliptic curve discrete logarithm problem to establish security. The objective of this thesis is to assemble the most important facts and findings into a broad, unified overview of this field. To illustrate certain points, we also discuss a sample implementation of the elliptic curve analogue of the El"}
{"_id": "8a11400ab5d3cb349a048891ed665b95aed9f6ca", "title": "Electronic Payments of Small Amounts", "text": "This note considers the application of electronic cash to transactions in which many small amounts must be paid to the same payee and in which it is not possible to just pay the total amount afterwards. The most notable example of such a transaction is payment for phone calls. If currently published electronic cash systems are used and a full payment protocol is executed for each of the small amounts, the overall complexity of the system will be prohibitively large (time, storage and communication). This note describes how such payments can be handled in a wide class of payment systems. The solution is very easy to adapt as it only in uences the payment and deposit transactions involving such payments. Furthermore, making and verifying each small payment requires very little computation and communication, and the total complexity of both transactions is comparable to that of a payment of a xed amount."}
{"_id": "c3edb96b8c3892147cd8932f1ee8c98b336b1b1f", "title": "A 25-kHz 3rd-order continuous-time Delta-Sigma modulator using tri-level quantizer", "text": "This paper presents a 3rd order continuous-time Delta-Sigma modulator with a tri-level quantizer, which provides 3-dB reduction of quantization noise without dynamic element matching (DEM). The tri-level DAC linearity is analyzed and it shows that a highly linear tri-level DAC can be realized in fully-differential active-RC Delta-Sigma modulator. The performance of the tri-level continuous-time Delta-Sigma modulator has been verified through simulations using a standard 0.18-\u03bcm CMOS process. It achieves 81-dB SNDR at 3.2-MS/s sampling rate and consumes 1.14-\u03bcW power with ideal amplifier."}
{"_id": "18a20b71308afff3d06122a3c98ea9eab2f92f4d", "title": "Finding a Maximum Clique using Ant Colony Optimization and Particle Swarm Optimization in Social Networks", "text": "Interaction between users in online social networks plays a key role in social network analysis. One on important types of social group is full connected relation between some users, which known as clique structure. Therefore finding a maximum clique is essential for some analysis. In this paper, we proposed a new method using ant colony optimization algorithm and particle swarm optimization algorithm. In the proposed method, in order to attain better results, it is improved process of pheromone update by particle swarm optimization. Simulation results on popular standard social network benchmarks in comparison standard ant colony optimization algorithm are shown a relative enhancement of proposed algorithm. Keywordssocial network analysis; clique problem; ACO; PSO."}
{"_id": "edc70c2b07c5bf343067640d53fbc3623b79b170", "title": "Surgical Management of Hidradenitis Suppurativa: Outcomes of 590 Consecutive Patients.", "text": "BACKGROUND\nHidradenitis suppurativa is a progressive, recurrent inflammatory disease. Surgical management is potentially curative with limited efficacy data.\n\n\nOBJECTIVE\nTo evaluate hidradenitis surgical patients.\n\n\nMETHODS\nRetrospective review of outcomes of 590 consecutive surgically treated patients.\n\n\nRESULTS\nMost patients were white (91.0% [435/478]), men (337 [57.1%]), smokers (57.7% [297/515]) with Hurley Stage III disease (476 [80.7%]). Procedure types were excision (405 [68.6%]), unroofing (168 [28.5%]), and drainage (17 [2.9%]) treating disease of perianal/perineum (294 [49.8%]), axilla (124 [21.0%]), gluteal cleft (76 [12.9%]), inframammary (12 [2.0%]), and multiple surgical sites (84 [14.2%]). Postoperative complications occurred in 15 patients (2.5%) and one-fourth (144 [24.4%]) suffered postoperative recurrence, which necessitated reoperation in one-tenth (69 [11.7%]) of patients. Recurrence risk was increased by younger age (hazard ratio [HR], 0.8; 95% confidence interval [CI], 0.7-0.9), multiple surgical sites (HR, 1.6; 95% CI, 1.1-2.5), and drainage-type procedures (HR, 3.5; 95% CI, 1.2-10.7). Operative location, disease severity, gender, and operative extent did not influence recurrence rate.\n\n\nCONCLUSION\nExcision and unroofing procedures were effective treatments with infrequent complications and low recurrence rates. Well-planned surgical treatment aiming to remove or unroof the area of intractable hidradenitis suppurativa was highly effective in the management of this challenging disease."}
{"_id": "c2513f008b8da9ba3556ba7fc8dd9ba22066d1cd", "title": "Analysis of Massive MIMO-Enabled Downlink Wireless Backhauling for Full-Duplex Small Cells", "text": "Recent advancements in self-interference (SI) cancellation capability of low-power wireless devices motivate in-band full-duplex (FD) wireless backhauling in small cell networks (SCNs). In-band FD wireless backhauling concurrently allows the use of the same frequency spectrum for the backhaul as well as access links of the small cells. In this paper, using tools from stochastic geometry, we develop a framework to model the downlink rate coverage probability of a user in a given SCN with massive multiple-input-multiple-output (MIMO)-enabled wireless backhauls. The considered SCN is composed of a mixture of small cells that are configured in either in-band or out-of-band backhaul modes with a certain probability. The performance of the user in the considered hierarchical network is limited by several sources of interference, such as the backhaul interference, small cell base station (SBS)-to-SBS interference, and the SI. Moreover, due to the channel hardening effect in massive MIMO, the backhaul links only experience long term channel effects, whereas the access links experience both the long term and the short term channel effects. Consequently, the developed framework is flexible to characterize different sources of interference while capturing the heterogeneity of the access and backhaul channels. In specific scenarios, the framework enables deriving closed-form coverage probability expressions. Under perfect backhaul coverage, the simplified expressions are utilized to optimize the proportion of in-band and out-of-band small cells in the SCN in the closed form. Finally, a few remedial solutions are proposed that can potentially mitigate the backhaul interference and in turn improve the performance of in-band FD wireless backhauling. Numerical results investigate the scenarios in which in-band wireless backhauling is useful and demonstrate that maintaining a correct proportion of in-band and out-of-band FD small cells is crucial in wireless backhauled SCNs."}
{"_id": "890ef19b51ee773a7fc0f187372274f93feb10a4", "title": "Applications of Trajectory Data in Transportation: Literature Review and Maryland Case Study", "text": "This paper considers applications of trajectory data in transportation, and makes two primary contributions. First, it provides a comprehensive literature review detailing ways in which trajectory data has been used for transportation systems analysis, distilling existing research into the following six areas: demand estimation, modeling human behavior, designing public transit, measuring and predicting traffic performance, quantifying environmental impact, and safety analysis. Additionally, it presents innovative applications of trajectory data for the state of Maryland, employing visualization and machine learning techniques to extract value from 20 million GPS traces. These visual analytics will be implemented in the Regional Integrated Transportation Information System (RITIS), which provides free data sharing and visual analytics tools to help transportation agencies attain situational awareness, evaluate performance, and share insights with the public."}
{"_id": "dc8d72a17de9c26c88fb9fda0169ecd00f4a45e1", "title": "Smart Health Care: An Edge-Side Computing Perspective", "text": "Increasing health awareness and rapid technological advancements have resulted in significant growth in an emerging sector: smart health care. By the end of 2020, the total number of smart health-care devices is expected to reach 808.9 million (646 million devices without wearables and the remaining with wearables) [1], [2]. This proliferation of devices is expected to revolutionize health care by speeding up treatment and diagnostic processes, decreasing physician visit costs, and enhancing patient care quality."}
{"_id": "5e6043745685210beab94f15d54f76df91dd9967", "title": "Hand segmentation for hand-object interaction from depth map", "text": "Hand segmentation for hand-object interaction is a necessary preprocessing step in many applications such as augmented reality, medical application, and human-robot interaction. However, typical methods are based on color information which is not robust to objects with skin color, skin pigment difference, and light condition variations. Thus, we propose hand segmentation method for hand-object interaction using only a depth map. It is challenging because of the small depth difference between a hand and objects during an interaction. To overcome this challenge, we propose the two-stage random decision forest (RDF) method consisting of detecting hands and segmenting hands. To validate the proposed method, we demonstrate results on the publicly available dataset of hand segmentation for hand-object interaction. The proposed method achieves high accuracy in short processing time comparing to the other state-of-the-art methods."}
{"_id": "526acf565190d843758b89d37acf281639cb90e2", "title": "Finding News Citations for Wikipedia", "text": "An important editing policy in Wikipedia is to provide citations for added statements in Wikipedia pages, where statements can be arbitrary pieces of text, ranging from a sentence to a paragraph. In many cases citations are either outdated or missing altogether.\n In this work we address the problem of finding and updating news citations for statements in entity pages. We propose a two-stage supervised approach for this problem. In the first step, we construct a classifier to find out whether statements need a news citation or other kinds of citations (web, book, journal, etc.). In the second step, we develop a news citation algorithm for Wikipedia statements, which recommends appropriate citations from a given news collection. Apart from IR techniques that use the statement to query the news collection, we also formalize three properties of an appropriate citation, namely: (i) the citation should entail the Wikipedia statement, (ii) the statement should be central to the citation, and (iii) the citation should be from an authoritative source.\n We perform an extensive evaluation of both steps, using 20 million articles from a real-world news collection. Our results are quite promising, and show that we can perform this task with high precision and at scale."}
{"_id": "9592f5734a6d677323141aef4316f7ebe4b5903f", "title": "A Modular Framework for Versatile Conversational Agent Building", "text": "This paper illustrates a web-based infrastructure of an architecture for conversational agents equipped with a modular knowledge base. This solution has the advantage to allow the building of specific modules that deal with particular features of a conversation (ranging from its topic to the manner of reasoning of the chatbot). This enhances the agent interaction capabilities. The approach simplifies the chatbot knowledge base design process: extending, generalizing or even restricting the chatbot knowledge base in order to suit it to manage specific dialoguing tasks as much as possible."}
{"_id": "6ed67a876b3afd2f2fb7b5b8c0800a0398c76603", "title": "ERP integration in a healthcare environment: a case study", "text": ""}
{"_id": "8fb1e94e16ca9da081ac1d313c0539dd10253cbb", "title": "Evaluating the influence of YouTube advertising for attraction of young customers", "text": "Nowadays, we have been faced with an increasing number of people who are spending tremendous amounts of time all around the world on YouTube. To date, the factors that persuade customers to accept YouTube advertising as an advertising medium are not yet fully understood. The present paper identified four dimensions towards YouTube advertising (i.e., entertainment, Informativeness, Customization and irritation) which may be affected on advertising value as well as brand awareness, and accordingly on purchase intention of consumers. The conceptual model hypothesizes that ad value strategies are positively associated with brand awareness, which in turn influence perceived usefulness of You Tube and continued purchase behavior. For this study, data were collected from students studying at the Sapienza University of Rome. In total, 315 usable questionnaires were chosen in order to analysis of data for the variables. The results show that entertainment, informativeness and customization are the strongest positive drivers, while irritation is negatively related to YouTube advertising. On the other hand, advertising value through YouTube affects both brand awareness and purchase intention of con-"}
{"_id": "f8e2e20d732dd831e0ad1547bd3c22a3c2a6216f", "title": "Embedding new data points for manifold learning via coordinate propagation", "text": "In recent years, a series of manifold learning algorithms have been proposed for nonlinear dimensionality reduction. Most of them can run in a batch mode for a set of given data points, but lack a mechanism to deal with new data points. Here we propose an extension approach, i.e., mapping new data points into the previously learned manifold. The core idea of our approach is to propagate the known coordinates to each of the new data points. We first formulate this task as a quadratic programming, and then develop an iterative algorithm for coordinate propagation. Tangent space projection and smooth splines are used to yield an initial coordinate for each new data point, according to their local geometrical relations. Experimental results and applications to camera direction estimation and face pose estimation illustrate the validity of our approach."}
{"_id": "bf3921d033eef168b8fc5bd64900e12bcd2ee2d5", "title": "Automated segmentation of 3D anatomical structures on CT images by using a deep convolutional network based on end-to-end learning approach", "text": "We have proposed an end-to-end learning approach that trained a deep convolutional neural network (CNN) for automatic CT image segmentation, which accomplished a voxel-wised multiple classification to directly map each voxel on 3D CT images to an anatomical label automatically. The novelties of our proposed method were (1) transforming the anatomical structures segmentation on 3D CT images into a majority voting of the results of 2D semantic image segmentation on a number of 2D-slices from different image orientations, and (2) using \u201cconvolution\u201d and \u201cdeconvolution\u201d networks to achieve the conventional \u201ccoarse recognition\u201d and \u201cfine extraction\u201d functions which were integrated into a compact all-in-one deep CNN for CT image segmentation. The advantage comparing to previous works was its capability to accomplish real-time image segmentations on 2D slices of arbitrary CT-scan-range (e.g. body, chest, abdomen) and produced correspondingly-sized output. In this paper, we propose an improvement of our proposed approach by adding an organ localization module to limit CT image range for training and testing deep CNNs. A database consisting of 240 3D CT scans and a human annotated ground truth was used for training (228 cases) and testing (the remaining 12 cases). We applied the improved method to segment pancreas and left kidney regions, respectively. The preliminary results showed that the accuracies of the segmentation results were improved significantly (pancreas was 34% and kidney was 8% increased in Jaccard index from our previous results). The effectiveness and usefulness of proposed improvement for CT image segmentations were confirmed."}
{"_id": "f1f8ff891b746373c7684ecc3b2dc12d77bf2111", "title": "Preserving the Archival Bond in Distributed Ledgers: A Data Model and Syntax", "text": "Distributed cryptographic ledgers, such as the blockchain, are now being used in recordkeeping. However, they lack a key feature of more traditional recordkeeping systems needed to establish the authenticity of records and enable reliance on them for trustworthy recordkeeping. The missing feature is known in archival science as the archival bond \u2013 the mutual relationship that exists among documents by virtue of the actions in which they participate. In this paper, we propose a novel data model and syntax using core web principles that can be used to address this shortcoming in distributed ledgers as recordkeeping systems."}
{"_id": "f825ae0fb460d4ca71ca308d8887b47fef47144b", "title": "Fast and accurate scan registration through minimization of the distance between compact 3D NDT representations", "text": "Registration of range sensor measurements is an important task in mobile robotics and has received a lot of attention. Several iterative optimization schemes have been proposed in order to align three-dimensional (3D) point scans. With the more widespread use of high-frame-rate 3D sensors and increasingly more challenging application scenarios for mobile robots, there is a need for fast and accurate registration methods that current state-of-the-art algorithms cannot always meet. This work proposes a novel algorithm that achieves accurate point cloud registration an order of a magnitude faster than the current state of the art. The speedup is achieved through the use of a compact spatial representation: the Three-Dimensional Normal Distributions Transform (3D-NDT). In addition, a fast, global-descriptor based on the 3D-NDT is defined and used to achieve reliable initial poses for the iterative algorithm. Finally, a closed-form expression for the covariance of the proposed method is also derived. The proposed algorithms are evaluated on two standard point cloud data sets, resulting in stable performance on a par with or better than the state of the art. The implementation is available as an open-source package for the Robot Operating System (ROS)."}
{"_id": "6423e91ee7ee8d7f6765e47f2b0cb4510682f239", "title": "Optimized Spatial Hashing for Collision Detection of Deformable Objects", "text": "We propose a new approach to collision and self\u2013 collision detection of dynamically deforming objects that consist of tetrahedrons. Tetrahedral meshes are commonly used to represent volumetric deformable models and the presented algorithm is integrated in a physically\u2013based environment, which can be used in game engines and surgical simulators. The proposed algorithm employs a hash function for compressing a potentially infinite regular spatial grid. Although the hash function does not always provide a unique mapping of grid cells, it can be generated very efficiently and does not require complex data structures, such as octrees or BSPs. We have investigated and optimized the parameters of the collision detection algorithm, such as hash function, hash table size and spatial cell size. The algorithm can detect collisions and self\u2013 collisions in environments of up to 20k tetrahedrons in real\u2013time. Although the algorithm works with tetrahedral meshes, it can be easily adapted to other object primitives, such as triangles. Figure 1: Environment with dynamically deforming objects, that consist of tetrahedrons."}
{"_id": "22b01542da7f63d36435b42e97289eb92742c0ce", "title": "Effects of cognitive-behavioral therapy on brain activation in specific phobia", "text": "Little is known about the effects of successful psychotherapy on brain function in subjects with anxiety disorders. The present study aimed to identify changes in brain activation following cognitive-behavioral therapy (CBT) in subjects suffering from specific phobia. Using functional magnetic resonance imaging (fMRI), brain activation to spider videos was measured in 28 spider phobic and 14 healthy control subjects. Phobics were randomly assigned to a therapy-group (TG) and a waiting-list control group (WG). Both groups of phobics were scanned twice. Between scanning sessions, CBT was given to the TG. Before therapy, brain activation did not differ between both groups of phobics. As compared to control subjects, phobics showed greater responses to spider vs. control videos in the insula and anterior cingulate cortex (ACC). CBT strongly reduced phobic symptoms in the TG while the WG remained behaviorally unchanged. In the second scanning session, a significant reduction of hyperactivity in the insula and ACC was found in the TG compared to the WG. These results propose that increased activation in the insula and ACC is associated with specific phobia, whereas an attenuation of these brain responses correlates with successful therapeutic intervention."}
{"_id": "0796f6cd7f0403a854d67d525e9b32af3b277331", "title": "Identifying Relations for Open Information Extraction", "text": "Open Information Extraction (IE) is the task of extracting assertions from massive corpora without requiring a pre-specified vocabulary. This paper shows that the output of state-ofthe-art Open IE systems is rife with uninformative and incoherent extractions. To overcome these problems, we introduce two simple syntactic and lexical constraints on binary relations expressed by verbs. We implemented the constraints in the REVERB Open IE system, which more than doubles the area under the precision-recall curve relative to previous extractors such as TEXTRUNNER and WOE. More than 30% of REVERB\u2019s extractions are at precision 0.8 or higher\u2014 compared to virtually none for earlier systems. The paper concludes with a detailed analysis of REVERB\u2019s errors, suggesting directions for future work.1"}
{"_id": "18dd22835af007c736bbf4f5cdb7a54ff685aff7", "title": "Multilingual Relation Extraction using Compositional Universal Schema", "text": "When building a knowledge base (KB) of entities and relations from multiple structured KBs and text, universal schema represents the union of all input schema, by jointly embedding all relation types from input KBs as well as textual patterns expressing relations. In previous work, textual patterns are parametrized as a single embedding, preventing generalization to unseen textual patterns. In this paper we employ an LSTM to compositionally capture the semantics of relational text. We dramatically demonstrate the flexibility of our approach by evaluating in a multilingual setting, in which the English training data entities overlap with the seed KB, but the Spanish text does not. Additional improvements are obtained by tying word embeddings across languages. In extensive experiments on the English and Spanish TAC KBP benchmark, our techniques provide substantial accuracy improvements. Furthermore we find that training with the additional non-overlapping Spanish also improves English relation extraction accuracy. Our approach is thus suited to broad-coverage automated knowledge base construction in low-resource languages and domains."}
{"_id": "24281c886cd9339fe2fc5881faf5ed72b731a03e", "title": "Spark: Cluster Computing with Working Sets", "text": "MapReduce and its variants have been highly successful in implementing large-scale data-intensive applications on commodity clusters. However, most of these systems are built around an acyclic data flow model that is not suitable for other popular applications. This paper focuses on one such class of applications: those that reuse a working set of data across multiple parallel operations. This includes many iterative machine learning algorithms, as well as interactive data analysis tools. We propose a new framework called Spark that supports these applications while retaining the scalability and fault tolerance of MapReduce. To achieve these goals, Spark introduces an abstraction called resilient distributed datasets (RDDs). An RDD is a read-only collection of objects partitioned across a set of machines that can be rebuilt if a partition is lost. Spark can outperform Hadoop by 10x in iterative machine learning jobs, and can be used to interactively query a 39 GB dataset with sub-second response time."}
{"_id": "507da9473d1b7cf69357e9d987b71cf7b5623b4d", "title": "GENIES: a natural-language processing system for the extraction of molecular pathways from journal articles", "text": "Systems that extract structured information from natural language passages have been highly successful in specialized domains. The time is opportune for developing analogous applications for molecular biology and genomics. We present a system, GENIES, that extracts and structures information about cellular pathways from the biological literature in accordance with a knowledge model that we developed earlier. We implemented GENIES by modifying an existing medical natural language processing system, MedLEE, and performed a preliminary evaluation study. Our results demonstrate the value of the underlying techniques for the purpose of acquiring valuable knowledge from biological journals."}
{"_id": "03ff3f8f4d5a700fbe8f3a3e63a39523c29bb60f", "title": "A Convolutional Neural Network for Modelling Sentences", "text": "The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25% error reduction in the last task with respect to the strongest baseline."}
{"_id": "4b64453e592b0d0b367e01bfc9a833de214a6519", "title": "DeepTLR: A single deep convolutional network for detection and classification of traffic lights", "text": "Reliable real-time detection of traffic lights is a major concern for the task of autonomous driving. As deep convolutional networks have proven to be a powerful tool in visual object detection, we propose DeepTLR, a camera-based system for real-time detection and classification of traffic lights. Detection and state classification are realized using a single deep convolutional network. DeepTLR does not use any prior knowledge about traffic light locations. Also the detection is executed frame by frame without using temporal information. It is able to detect traffic lights on the whole camera image without any presegmentation. This is achieved by classifying each fine-grained pixel region of the input image and performing a bounding box regression on regions of each class. We show that our algorithm is able to run on frame-rates required for real-time applications while reaching notable results."}
{"_id": "e6fa6e3bc568e413ac4b52e06d28cc78da25e137", "title": "Integration of Radio Frequency Identification and Wireless Sensor Networks", "text": ".................................................................................................................... iii \u00d6Z ...................................................................................................................................... v DEDICATION ................................................................................................................ vii ACKNOWLEDGMENTS ............................................................................................. viii LIST OF TABLES .......................................................................................................... xii LIST OF FIGURES ....................................................................................................... xiii LIST OF SYMBOLS/ABBREVIATIONS ...................................................................... xv"}
{"_id": "a26062250dcd3756eb9f4178b991521ef2ba433b", "title": "Low-k interconnect stack with a novel self-aligned via patterning process for 32nm high volume manufacturing", "text": "Interconnect process features are described for a 32nm high performance logic technology. Lower-k, yet highly manufacturable, Carbon-Doped Oxide (CDO) dielectric layers are introduced on this technology at three layers to address the demand for ever lower metal line capacitance. The pitches have been aggressively scaled to meet the expectation for density, and the metal resistance and electromigration performance have been carefully balanced to meet the high reliability requirements while maintaining the lowest possible resistance. A new patterning scheme has been used to limit any patterning damage to the lower-k ILD and address the increasingly difficult problem of via-to-metal shorting at these very tight pitches. The interconnect stack has a thick Metal-9 layer to provide a low resistance path for the power and I/O routing that has been carefully scaled to maintain a low resistance. The combined interconnect stack provides high density, performance, and reliability, and supports a Pb-free 32nm process."}
{"_id": "0cb9c4ba06d7a69cf7aadb326072ac2ed2207452", "title": "Timing Attack against Protected RSA-CRT Implementation Used in PolarSSL", "text": "In this paper, we present a timing attack against the RSACRT algorithm used in the current version 1.1.4 of PolarSSL, an opensource cryptographic library for embedded systems. This implementation uses a classical countermeasure to avoid two previous attacks of Schindler and another one due to Boneh and Brumley. However, a careful analysis reveals a bias in the implementation of Montgomery multiplication. We theoretically analyse the distribution of output values for Montgomery multiplication when the output is greater than the Montgomery constant, R. In this case, we show that an extra bit is set in the top most significant word of the output and a time variance can be observed. Then we present some proofs with reasonable assumptions to explain this bias due to an extra bit. Moreover, we show it can be used to mount an attack that reveals the factorisation. We also study another countermeasure and show its resistance against attacked library."}
{"_id": "05dc9d40649cece88b6c87196d9c87db61e2fbf2", "title": "Haptic Feedback for Virtual Reality 1", "text": "Haptic feedback is a crucial sensorial modality in virtual reality interactions. Haptics means both force feedback (simulating object hardness, weight, and inertia) and tactile feedback (simulating surface contact geometry, smoothness, slippage, and temperature). Providing such sensorial data requires desk-top or portable special-purpose hardware called haptic interfaces. Modeling physical interactions involves precise collision detection, real-time force computation, and high control-loop bandwidth. This results in a large computation load which requires multi-processor parallel processing on networked computers. Applications for haptics-intensive VR simulations include CAD model design and assembly. Improved technology (wearable computers, novel actuators, haptic toolkits) will increase the use of force/tactile feedback in future VR simulations."}
{"_id": "03c423de1b6a1ca91924a1587a3a57927df1b0e8", "title": "DFT-based Transformation Invariant Pooling Layer for Visual Classification", "text": "We propose a novel discrete Fourier transform-based pooling layer for convolutional neural networks. The DFT magnitude pooling replaces the traditional max/average pooling layer between the convolution and fully-connected layers to retain translation invariance and shape preserving (aware of shape difference) properties based on the shift theorem of the Fourier transform. Thanks to the ability to handle image misalignment while keeping important structural information in the pooling stage, the DFT magnitude pooling improves the classification accuracy significantly. In addition, we propose the DFT method for ensemble networks using the middle convolution layer outputs. The proposed methods are extensively evaluated on various classification tasks using the ImageNet, CUB 2010-2011, MIT Indoors, Caltech 101, FMD and DTD datasets. The AlexNet, VGG-VD 16, Inception-v3, and ResNet are used as the base networks, upon which DFT and DFT methods are implemented. Experimental results show that the proposed methods improve the classification performance in all networks and datasets."}
{"_id": "7f7d7e7d53febd451e263784b59c1c9038474499", "title": "A systematic literature review of green software metrics", "text": "Green IT is getting increasing attention in software engineering research. Nevertheless energy efficiency studies have mostly focused on the hardware side of IT, the software role still requires deepening in terms of methods and techniques. Furthermore, it is necessary to understand how to assess the software\u201cgreenness\u201d for stimulating the energy efficiency awareness, since early phases of the software lifecycle. The main goal of this study is to describe and to classify metrics related to software \u201cgreenness\u201d present in the software engineering literature. Furthermore, this study analyzes the evolution of those metrics, in terms of type, context, and evaluation methods. To achieve this goal, a systematic literature review has been performed surveying the metrics claimed in the last decade. After examined 960 publications, we selected 23 of them as primary studies, from which we isolated extracting 96 different green metrics. Therefore, we analyzed search results in order to show what is the trend of research about green software metrics, how metrics perform measurement on resources, and what type of metrics are more appealing for defined contexts."}
{"_id": "1955a213c446782342761730fc67ee23fa517c83", "title": "Artificial-Life Ecosystems - What are they and what could they become?", "text": "This paper summarises the history of the terms ecology and ecosystem, before examining their application in the early and recent literature of A-Life agent-based software simulation. It investigates trends in A-Life that have led to a predominance of simulations incorporating artificial evolution acting on generic agents, but lacking a level of detail that would allow the emergence of phenomena relating to the transfer and transformation of energy and matter between the virtual abiotic environment and biota. Implications of these characteristics for the relevance of A-Life\u2019s virtual ecosystem models to Ecology are discussed. We argue a position that the inclusion of low-level representations of energetics, matter and evolution, in concert with pattern-oriented modelling techniques from Ecology for model validation, will improve the relevance of A-Life models to Ecology. We also suggest two methods that may allows us to meet this goal: artificial evolution can be employed as a mechanism for automating pattern-oriented ecological modelling from the level of individual species up to that of the ecosystem, or it may be employed to explore general principles of ecosystem behaviour over evolutionary time periods."}
{"_id": "6d93c8ef0cca8085ff11d73523c8cf9410d11856", "title": "Pulse-Modulated Intermittent Control in Consensus of Multiagent Systems", "text": "This paper proposes a control framework, called pulse-modulated intermittent control, which unifies impulsive control and sampled control. Specifically, the concept of pulse function is introduced to characterize the control/rest intervals and the amplitude of the control. By choosing some specified functions as the pulse function, the proposed control scheme can be reduced to sampled control or impulsive control. The proposed control framework is applied to consensus problems of multiagent systems. Using discretization approaches and stability theory, several necessary and sufficient conditions are established to ensure the consensus of the controlled system. The results show that consensus depends not only on the network topology, the sampling period and the control gains, but also the pulse function. Moreover, a lower bound of the asymptotic convergence factor is derived as well. For a given pulse function and an undirected graph, an optimal control gain is designed to achieve the fastest convergence. In addition, impulsive control and sampled control are revisited in the proposed control framework. Finally, some numerical examples are given to verify the effectiveness of theoretical results."}
{"_id": "39d8eef237f475648214989494ac6f89562d2826", "title": "Weighted Low-Rank Approximations", "text": "Weighted-norms can arise in several situations. A zero/one weighted-norm, for example, arises when some of the entries in the matrix are not observed. External estimates of the noise variance associated with each measurement may be available (e.g. gene expression analysis) and using weights inversely proportional to the noise variance can lead to better reconstruction of the underlying structure. In other applications, entries in the target matrix represent aggregates of many samples. When using unweighted low-rank approximations (e.g. for separating style and content [4]), we assume a uniform number of samples for each entry. By incorporating weights, we can account for varying numbers of samples in such situations. Low-rank approximations are also used in the design of two-dimensional digital filters, in which case weights might arise from constraints of varying importance [3]."}
{"_id": "83e859ff4842cb7fbc4eca1ba68b8144897925d8", "title": "The validity of the Hospital Anxiety and Depression Scale. An updated literature review.", "text": "OBJECTIVE\nTo review the literature of the validity of the Hospital Anxiety and Depression Scale (HADS).\n\n\nMETHOD\nA review of the 747 identified papers that used HADS was performed to address the following questions: (I) How are the factor structure, discriminant validity and the internal consistency of HADS? (II) How does HADS perform as a case finder for anxiety disorders and depression? (III) How does HADS agree with other self-rating instruments used to rate anxiety and depression?\n\n\nRESULTS\nMost factor analyses demonstrated a two-factor solution in good accordance with the HADS subscales for Anxiety (HADS-A) and Depression (HADS-D), respectively. The correlations between the two subscales varied from.40 to.74 (mean.56). Cronbach's alpha for HADS-A varied from.68 to.93 (mean.83) and for HADS-D from.67 to.90 (mean.82). In most studies an optimal balance between sensitivity and specificity was achieved when caseness was defined by a score of 8 or above on both HADS-A and HADS-D. The sensitivity and specificity for both HADS-A and HADS-D of approximately 0.80 were very similar to the sensitivity and specificity achieved by the General Health Questionnaire (GHQ). Correlations between HADS and other commonly used questionnaires were in the range.49 to.83.\n\n\nCONCLUSIONS\nHADS was found to perform well in assessing the symptom severity and caseness of anxiety disorders and depression in both somatic, psychiatric and primary care patients and in the general population."}
{"_id": "c11677ba8882023812cb94dda1029580fa251778", "title": "An automated system to detect and recognize vehicle license plates of Bangladesh", "text": "Bangladesh is a country in South Asia that uses Retro Reflective license plates. The plate has two lines with words, letters, and digits. An automated system to detect and recognize these plates is presented in this paper. The system is divided into four parts: plate detection, extraction, character segmentation and recognition. At first, the input image is enhanced using CLAHE and a matched filter specially designed for license plates with two lines is applied. Then tilt correction using Radon transformation, binarization and cleaning are performed. For character segmentation, mean intensity based horizontal and vertical projection is used. In recognition, we have used two different Convolutional Neural Network (CNN) to classify digits and letters. Tesseract OCR is used for district names. We have developed a dataset of over 400 images of different vehicles (e.g., private car, bus, truck etc.) taken at different times of day (including nights). The plates in this dataset are in different angles, including blurry, worn-out and muddy ones. On this dataset, the proposed system achieved a success rate of 96.8% in detection, 89.5% in extraction, 98.6% in segmentation and 98.0% in character recognition."}
{"_id": "c9d51f7110977c46b6ba9b69a68cff4dd8b6de0b", "title": "Text Mining for Documents Annotation and Ontology Support", "text": "This paper presents a survey of basic concepts in the area of text data mining and some of the methods used in order to elicit useful knowledge from collections of textual data. Three different text data mining techniques (clustering/visualisation, association rules and classification models) are analysed and its exploitation possibilities within the Webocracy project are showed. Clustering and association rules discovery are well suited as supporting tools for ontology management. Classification models are used for automatic documents annotation."}
{"_id": "0b3412e13637763b21ad6ddadd4b9b68907730e2", "title": "Encouraging user behaviour with achievements: An empirical study", "text": "Stack Overflow, a question and answer Web site, uses a reward system called badges to publicly reward users for their contributions to the community. Badges are used alongside a reputation score to reward positive behaviour by relating a user's site identity with their perceived expertise and respect in the community. A greater number of badges associated with a user profile in some way indicates a higher level of authority, leading to a natural incentive for users to attempt to achieve as many badges as possible. In this study, we examine the publicly available logs for Stack Overflow to examine three of these badges in detail. We look at the effect of one badge in context on an individual user level and at the global scope of three related badges across all users by mining user behaviour around the time that the badge is awarded. This analysis supports the claim that badges can be used to influence user behaviour by demonstrating one instance of an increase in user activity related to a badge immediately before it is awarded when compared to the period afterwards."}
{"_id": "8a696afa3b4e82727764f26a95d2577c85315171", "title": "Implicit theories about willpower predict self-regulation and grades in everyday life.", "text": "Laboratory research shows that when people believe that willpower is an abundant (rather than highly limited) resource they exhibit better self-control after demanding tasks. However, some have questioned whether this \"nonlimited\" theory leads to squandering of resources and worse outcomes in everyday life when demands on self-regulation are high. To examine this, we conducted a longitudinal study, assessing students' theories about willpower and tracking their self-regulation and academic performance. As hypothesized, a nonlimited theory predicted better self-regulation (better time management and less procrastination, unhealthy eating, and impulsive spending) for students who faced high self-regulatory demands. Moreover, among students taking a heavy course load, those with a nonlimited theory earned higher grades, which was mediated by less procrastination. These findings contradict the idea that a limited theory helps people allocate their resources more effectively; instead, it is people with the nonlimited theory who self-regulate well in the face of high demands."}
{"_id": "417997271d0c310e73c6454784244445253a15a0", "title": "Enabling network programmability in LTE/EPC architecture using OpenFlow", "text": "Nowadays, mobile operators face the challenge to sustain the future data tsunami. In fact, today's increasing data and control traffic generated by new kinds of network usage puts strain on mobile operators', without creating any corresponding equivalent revenue. In our previous work, we analyzed the 3GPP LTE/EPC architecture and showed that a redesign of this architecture is needed to suit future network usages and to provide new revenue generating services. Moreover, we proposed a new control plane based on the OpenFlow (OF) protocol for the LTE/EPC architecture that enables flexibility and programmability aspects. In this paper, we are interested in the programmability aspect. We show how the data plane can be easily configured thanks to OF. In addition, we evaluate the signaling load of our proposed architecture and compare it to that of 3GPP LTE/EPC architecture. The preliminary findings suggest that managing the data plane with OF has little impact on the signaling load while the network programmability is improved."}
{"_id": "1714ecfb4ac45130bc241a4d7ec45f2a9fb8b99e", "title": "The Behavioural Paths to Wellbeing : An Exploratory Study to Distinguish Between Hedonic and Eudaimonic Wellbeing From an Activity Perspective", "text": "Hedonic wellbeing and eudaimonic wellbeing are two prevailing approaches to wellbeing. However, remarkably little research has distinguished them from an activity perspective; the knowledge of behavioural paths for achieving these two wellbeings is poor. This study first clarified the behavioural contents of the two approaches through a bottom-up method and then analysed the representativeness of activities to indicate to what extent activities contributed to wellness. We found that the paths to hedonic wellbeing and eudaimonic wellbeing overlapped and differed from each other. Furthermore, this study explained why hedonic activity differed from eudaimonic activity by analysing activity characteristics. We found that people reported higher frequency, sensory experience, and affective experience in hedonic activity, whereas they reported higher intellectual experience, behavioural experience, and spiritual experience in eudaimonic activity. Finally, we explored the behavioural pattern of wellbeing pursuit in both an unthreatening situation and a threatening situation. We found that the overlap between the two approaches increased in the threatening situation. Moreover, people in the threatening situation tended to score lower on all characteristics except frequency relative to those in the unthreatening situation. It seemed that the behavioural pattern in the threatening situation was less effective than its equivalent in the unthreatening situation."}
{"_id": "50d551c7ad7d5ccfd6c0886ea48a597d4821baf7", "title": "Unsupervised outlier detection in streaming data using weighted clustering", "text": "Outlier detection is a very important task in many fields like network intrusion detection, credit card fraud detection, stock market analysis, detecting outlying cases in medical data etc. Outlier detection in streaming data is very challenging because streaming data cannot be scanned multiple times and also new concepts may keep evolving in coming data over time. Irrelevant attributes can be termed as noisy attributes and such attributes further magnify the challenge of working with data streams. In this paper, we propose an unsupervised outlier detection scheme for streaming data. This scheme is based on clustering as clustering is an unsupervised data mining task and it does not require labeled data. In proposed scheme both density based and partitioning clustering method are combined to take advantage of both density based and distance based outlier detection. Proposed scheme also assigns weights to attributes depending upon their respective relevance in mining task and weights are adaptive in nature. Weighted attributes are helpful to reduce or remove the effect of noisy attributes. Keeping in view the challenges of streaming data, the proposed scheme is incremental and adaptive to concept evolution. Experimental results on synthetic and real world data sets show that our proposed approach outperforms other existing approach (CORM) in terms of outlier detection rate, false alarm rate, and increasing percentages of outliers."}
{"_id": "5e77868839082c8463a3f61f9ffb5873e5dbd03f", "title": "Motif-Based Classification of Time Series with Bayesian Networks and SVMs", "text": "Classification of time series is an important task with many challenging applications like brain wave (EEG) analysis, signature verification or speech recognition. In this paper we show how characteristic local patterns (motifs) can improve the classification accuracy. We introduce a new motif class, generalized semi-continuous motifs. To allow flexibility and noise robustness, these motifs may include gaps of various lengths, generic and more specific wildcards. We propose an efficient algorithm for mining generalized sequential motifs. In experiments on real medical data, we show how generalized semi-continuous motifs improve the accuracy of SVMs and Bayesian Networks for time series classificiation."}
{"_id": "9e1d50d98ae09c15354dbcb126609e337d3dc6fb", "title": "A vector quantization approach to speaker recognition", "text": "CH2118-8/85/0000-0387 $1.00 \u00a9 1985 IEEE 387 ABSTRACT. In this study a vector quantIzation (VQ) codebook was system. In the other, Shore and Burton 112] used word-based VQ used as an efficient means of characterizing the short-time spectral codebooks and reported good performance in speaker-trained isolatedfeatures of a speaker. A set of such codebooks were then used to word recognition experiments. Here, instead of using word-based VQ recognize the identity of an unknown speaker from his/her unlabelled spoken utterances based on a minimum distance (distortion) codebooks to characterize the phonetic contents of isolated words, we propose to use speaker-based VQ codebooks to characterize the classification rule. A series of speaker recognition experiments was variability of short-time acoustic features of speakers. performed using a 100-talker (50 male and 50 female) telephone recording database consisting of isolated digit utterances. For ten random but different isolated digits, over 98% speaker identification H. Speaker-based VQ Codebook Approach to Speaker accuracy was achieved. The effects, on performance, of different Characterization and Recognition system parameters such as codebook sizes, the number of test digits, phonetic richness of the text, and difference in recording sessions Were also studied in detail. A set of short-time raw feature vectors of a speaker can be used directly to represent the essential acoustical, phonological or physiological characteristics of that speaker if the training set includes sufficient variations. However such a direct representation is not practical when the number of training vectors is large. The memory requirements for storage and computational complexity in the recognition phase eventually become prohibitively high. Therefore an Automatic speaker recognition has long been an interesting and challenging problem to speech researchers [1-101. The problem, efficient way of compressing the training data had to be found. In order to compress the original data to a small set of representative depending upon the nature of the final task, can be classified into two points, we used a VQ codebook with a small number of codebook different categories: speaker verification and speaker identification. In entries. a speaker verification task, the recognizer is asked to verify an identity claim made by an unknown speaker and a decision to reject or accept the identity claim is made. In a speaker identification task the recognizer is asked to decide which out of a population of N speakers is best classified as the unknown speaker. The decision may include a choice of \"none of the above\" (i.e., a choice that the specific speaker is not in a given closed set of speakers). The input speech material used for speaker recognition can be either text-dependent (text-constrained) or text-independent (text-free). In the text-dependent mode the speaker is asked to utter a prescribed text. The utterance is then The speaker-based VQ codebook generation can be summarized as follows: Given a set of I training feature vectors, {a1,a2 a) characterizing the variability of a speaker, we want to find a partitioning of the feature vector space, {S1,S2 SM}, for that particular speaker where, 5, the whole feature space is represented as S S1 US2 U . . . US. 
Each partition, S, forms a convex, nonoverlapping region and every vector inside S is represented by the respo5'tis'1g ceatcoid. vector, b1, cf 5,. The p titioning is done in such a way that the average distortion"}
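The recognition rule above is easy to prototype: fit one codebook per enrolled speaker (k-means serving as the codebook training step) and identify a test utterance by minimum average quantization distortion. The feature dimension, codebook sizes, and Gaussian dummy data below are illustrative; real use would start from short-time spectral features.

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

def train_codebook(train_vectors, size=64):
    codebook, _ = kmeans(train_vectors, size)  # generalized Lloyd / k-means
    return codebook

def avg_distortion(test_vectors, codebook):
    _, dists = vq(test_vectors, codebook)      # nearest-codeword distances
    return dists.mean()

def identify(test_vectors, codebooks):
    """Return the speaker whose codebook quantizes the test data best."""
    return min(codebooks, key=lambda spk: avg_distortion(test_vectors, codebooks[spk]))

rng = np.random.default_rng(1)
books = {"spk_a": train_codebook(rng.normal(0, 1, (500, 12)), 16),
         "spk_b": train_codebook(rng.normal(3, 1, (500, 12)), 16)}
print(identify(rng.normal(3, 1, (50, 12)), books))  # -> "spk_b"
```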
{"_id": "5f379793c8605eebd07da213aebd6ed8f14c438a", "title": "A Family of Neutral Point Clamped Full-Bridge Topologies for Transformerless Photovoltaic Grid-Tied Inverters", "text": "Transformerless inverter topologies have attracted more attentions in photovoltaic (PV) generation system since they feature high efficiency and low cost. In order to meet the safety requirement for transformerless grid-tied PV inverters, the leakage current has to be tackled carefully. Neutral point clamped (NPC) topology is an effective way to eliminate the leakage current. In this paper, two types of basic switching cells, the positive neutral point clamped cell and the negative neutral point clamped cell, are proposed to build NPC topologies, with a systematic method of topology generation given. A family of single-phase transformerless full-bridge topologies with low-leakage current for PV grid-tied NPC inverters is derived including the existing oH5 and some new topologies. A novel positive-negative NPC (PN-NPC) topology is analyzed in detail with operational modes and modulation strategy given. The power losses are compared among the oH5, the full-bridge inverter with dc bypass (FB-DCBP) topology, and the proposed PN-NPC topologies. A universal prototype for these three NPC-type topologies mentioned is built to evaluate the topologies at conversion efficiency and the leakage current characteristic. The PN-NPC topology proposed exhibits similar leakage current with the FB-DCBP, which is lower than that of the oH5 topology, and features higher efficiency than both the oH5 and the FB-DCBP topologies."}
{"_id": "dd215b777c1c251b61ebee99592250f44073d4c0", "title": "Towards Safe Deep Learning: Unsupervised Defense Against Generic Adversarial Attacks", "text": "Recent advances in adversarial Deep Learning (DL) have opened up a new and largely unexplored surface for malicious attacks jeopardizing the integrity of autonomous DL systems. We introduce a novel automated countermeasure called Parallel Checkpointing Learners (PCL) to thwart the potential adversarial attacks and significantly improve the reliability (safety) of a victim DL model. The proposed PCL methodology is unsupervised, meaning that no adversarial sample is leveraged to build/train parallel checkpointing learners. We formalize the goal of preventing adversarial attacks as an optimization problem to minimize the rarely observed regions in the latent feature space spanned by a DL network. To solve the aforementioned minimization problem, a set of complementary but disjoint checkpointing modules are trained and leveraged to validate the victim model execution in parallel. Each checkpointing learner explicitly characterizes the geometry of the input data and the corresponding high-level data abstractions within a particular DL layer. As such, the adversary is required to simultaneously deceive all the defender modules in order to succeed. We extensively evaluate the performance of the PCL methodology against the state-of-the-art attack scenarios, including Fast-Gradient-Sign (FGS), Jacobian Saliency Map Attack (JSMA), Deepfool, and Carlini&WagnerL2. Extensive proof-of-concept evaluations for analyzing various data collections including MNIST, CIFAR10, and ImageNet corroborate the effectiveness of our proposed defense mechanism against adversarial samples."}
{"_id": "5c4736f1bcaf27033269ff7831e18a7f017e0343", "title": "Designing game-based learning environments for elementary science education: A narrative-centered learning perspective", "text": "Game-based learning environments hold significant promise for STEM education, yet they are enormously complex. CRYSTAL ISLAND: UNCHARTED DISCOVERY, is a gamebased learning environment designed for upper elementary science education that has been under development in our laboratory for the past four years. This article discusses curricular and narrative interaction design requirements, presents the design of the CRYSTAL ISLAND learning environment, and describes its evolution through a series of pilots and field tests. Additionally, a classroom integration study was conducted to initiate a shift towards ecological validity. Results indicated that CRYSTAL ISLAND produced significant learning gains on both science content and problem-solving measures. Importantly, gains were consistent for gender across studies. This finding is key in light of past studies that revealed disproportionate participation by boys within game-based learning environments."}
{"_id": "6c528568782b54dacc7a20db2e2d85f418674047", "title": "Volumetric Object Recognition Using 3-D CNNs on Depth Data", "text": "Recognizing 3-D objects has a wide range of application areas from autonomous robots to self-driving vehicles. The popularity of low-cost RGB-D sensors has enabled a rapid progress in 3-D object recognition in the recent years. Most of the existing studies use depth data as an additional channel to the RGB channels. Instead of this approach, we propose two volumetric representations to reveal rich 3-D structural information hidden in depth images. We present a 3-D convolutional neural network (CNN)-based object recognition approach, which utilizes these volumetric representations and single and multi-rotational depth images. The 3-D CNN architecture trained to recognize single depth images produces competitive results with the state-of-the-art methods on two publicly available datasets. However, recognition accuracy increases further when the multiple rotations of objects are brought together. Our multirotational 3-D CNN combines information from multiple views of objects to provide rotational invariance and improves the accuracy significantly comparing with the single-rotational approach. The results show that utilizing multiple views of objects can be highly informative for the 3-D CNN-based object recognition."}
{"_id": "46f1ba485e685ff8ff6232f6c04f1a3920d4f33a", "title": "Radar-based Fall Detection Based on Doppler Time-Frequency Signatures for Assisted Living", "text": "Falls are a major public health concern and main causes of accidental death in the senior U.S. population. Timely and accurate detection permits immediate assistance after a fall and, thereby, reduces complications of fall risk. Radar technology provides an effective means for this purpose because it is non-invasive, insensitive to lighting conditions as well as obstructions, and has less privacy concerns. In this paper, we develop an effective fall detection scheme for the application in continuous-wave radar systems. The proposed scheme exploits time-frequency characteristics of the radar Doppler signatures, and the motion events are classified using the joint statistics of three different features, including the extreme frequency, extreme frequency ratio, and the length of event period. Sparse Bayesian classifier based on the relevance vector machine is used to perform the classification. Laboratory experiments are performed to collect radar data corresponding to different motion patterns to verify the effectiveness of the proposed algorithm."}
{"_id": "74ec3d4cbb22453ce1d128c42ea66d2bdced64d6", "title": "Novel Multilevel Inverter Carrier-Based PWM Method", "text": "The advent of the transformerless multilevel inverter topology has brought forth various pulsewidth modulation (PWM) schemes as a means to control the switching of the active devices in each of the multiple voltage levels in the inverter. An analysis of how existing multilevel carrier-based PWM affects switch utilization for the different levels of a diode-clamped inverter is conducted. Two novel carrier-based multilevel PWM schemes are presented which help to optimize or balance the switch utilization in multilevel inverters. A 10-kW prototype sixlevel diode-clamped inverter has been built and controlled with the novel PWM strategies proposed in this paper to act as a voltage-source inverter for a motor drive."}
{"_id": "e68a6132f5536aad264ba62052005d0eca3356d5", "title": "A New Neutral-Point-Clamped PWM Inverter", "text": "A new neutral-point-clamped pulsewidth modulation (PWM) inverter composed of main switching devices which operate as switches for PWM and auxiliary switching devices to clamp the output terminal potential to the neutral point potential has been developed. This inverter output contains less harmonic content as compared with that of a conventional type. Two inverters are compared analytically and experimentally. In addition, a new PWM technique suitable for an ac drive system is applied to this inverter. The neutral-point-clamped PWM inverter adopting the new PWM technique shows an excellent drive system efficiency, including motor efficiency, and is appropriate for a wide-range variable-speed drive system."}
{"_id": "ff5c193fd7142b3f426baf997b43937eca1bbbad", "title": "Multilevel inverters: a survey of topologies, controls, and applications", "text": "Multilevel inverter technology has emerged recently as a very important alternative in the area of high-power medium-voltage energy control. This paper presents the most important topologies like diode-clamped inverter (neutral-point clamped), capacitor-clamped (flying capacitor), and cascaded multicell with separate dc sources. Emerging topologies like asymmetric hybrid cells and soft-switched multilevel inverters are also discussed. This paper also presents the most relevant control and modulation methods developed for this family of converters: multilevel sinusoidal pulsewidth modulation, multilevel selective harmonic elimination, and space-vector modulation. Special attention is dedicated to the latest and more relevant applications of these converters such as laminators, conveyor belts, and unified power-flow controllers. The need of an active front end at the input side for those inverters supplying regenerative loads is also discussed, and the circuit topology options are also presented. Finally, the peripherally developing areas such as high-voltage high-power devices and optical sensors and other opportunities for future development are addressed."}
{"_id": "3d8a29cf3843f92bf9897c4f2d3c02d96d59540a", "title": "Multilevel PWM Methods at Low Modulation Indices", "text": "When utilized at low amplitude modulation indices, existing multilevel carrier-based PWM strategies have no special provisions for this operating region, and several levels of the inverter go unused. This paper proposes some novel multilevel PWM strategies to take advantage of the multiple levels in both a diodeclamped inverter and a cascaded H-bridges inverter by utilizing all of the levels in the inverter even at low modulation indices. Simulation results show what effects the different strategies have on the active device utilization. A prototype 6-level diode-clamped inverter and an 11-level cascaded H-bridges inverter have been built and controlled with the novel PWM strategies proposed in this paper."}
{"_id": "40baa5d4632d807cc5841874be73415775b500fd", "title": "Multilevel Converters for Large Electric Drives", "text": "Traditional two-level high-frequency pulse width modulation (PWM) inverters for motor drives have several problems associated with their high frequency switching which produces common-mode voltage and high voltage change (dV/dt) rates to the motor windings. Multilevel inverters solve these problems because their devices can switch at a much lower frequency. Two different multilevel topologies are identified for use as a converter for electric drives, a cascade inverter with separate dc sources and a back-to-back diode clamped converter. The cascade inverter is a natural fit for large automotive allelectric drives because of the high VA ratings possible and because it uses several levels of dc voltage sources which would be available from batteries or fuel cells. The back-to-back diode clamped converter is ideal where a source of ac voltage is available such as a hybrid electric vehicle. Simulation and experimental results show the superiority of these two converters over PWM based drives."}
{"_id": "0c53ef79bb8e5ba4e6a8ebad6d453ecf3672926d", "title": "Weakly Supervised PatchNets: Describing and Aggregating Local Patches for Scene Recognition", "text": "Traditional feature encoding scheme (e.g., Fisher vector) with local descriptors (e.g., SIFT) and recent convolutional neural networks (CNNs) are two classes of successful methods for image recognition. In this paper, we propose a hybrid representation, which leverages the discriminative capacity of CNNs and the simplicity of descriptor encoding schema for image recognition, with a focus on scene recognition. To this end, we make three main contributions from the following aspects. First, we propose a patch-level and end-to-end architecture to model the appearance of local patches, called PatchNet. PatchNet is essentially a customized network trained in a weakly supervised manner, which uses the image-level supervision to guide the patch-level feature extraction. Second, we present a hybrid visual representation, called VSAD, by utilizing the robust feature representations of PatchNet to describe local patches and exploiting the semantic probabilities of PatchNet to aggregate these local patches into a global representation. Third, based on the proposed VSAD representation, we propose a new state-of-the-art scene recognition approach, which achieves an excellent performance on two standard benchmarks: MIT Indoor67 (86.2%) and SUN397 (73.0%)."}
{"_id": "a3345798b1faf238e8d805bbe9124b0b8e0c869f", "title": "Autophagy as a regulated pathway of cellular degradation.", "text": "Macroautophagy is a dynamic process involving the rearrangement of subcellular membranes to sequester cytoplasm and organelles for delivery to the lysosome or vacuole where the sequestered cargo is degraded and recycled. This process takes place in all eukaryotic cells. It is highly regulated through the action of various kinases, phosphatases, and guanosine triphosphatases (GTPases). The core protein machinery that is necessary to drive formation and consumption of intermediates in the macroautophagy pathway includes a ubiquitin-like protein conjugation system and a protein complex that directs membrane docking and fusion at the lysosome or vacuole. Macroautophagy plays an important role in developmental processes, human disease, and cellular response to nutrient deprivation."}
{"_id": "d35abb53c3c64717126deff65c26d6563276df45", "title": "Machine learning aided cognitive RAT selection for 5G heterogeneous networks", "text": "The starring role of the Heterogeneous Networks (HetNet) strategy as the key Radio Access Network (RAN) architecture for future 5G networks poses serious challenges to the current user association (cell selection) mechanisms used in cellular networks. The max-SINR algorithm, although historically effective for performing this function, is inefficient at best and obsolete at worst in 5G HetNets. The foreseen embarrassment of riches and diversified propagation characteristics of network attachment points spanning multiple Radio Access Technologies (RAT) requires novel and creative context-aware system designs that optimize the association and routing decisions in the context of single-RAT and multi-RAT connections, respectively. This paper proposes a framework under these guidelines that relies on Machine Learning techniques at the terminal device level for Cognitive RAT Selection and presents simulation results to suppport it."}
{"_id": "895fa1357bcfa9b845945c6505a6e48070fd5d89", "title": "An Anonymous Electronic Voting Protocol for Voting Over The Internet", "text": "In this work we propose a secure electronic voting protocol that is suitable for large scale voting over the Internet. The protocol allows a voter to cast his or her ballot anonymously, by exchanging untraceable yet authentic messages. The protocol ensures that (i) only eligible voters are able to cast votes, (ii) a voter is able to cast only one vote, (iii) a voter is able to verify that his or her vote is counted in the final tally, (iv) nobody, other than the voter, is able to link a cast vote with a voter, and (v) if a voter decides not to cast a vote, nobody is able to cast a fraudulent vote in place of the voter. The protocol does not require the cooperation of all registered voters. Neither does it require the use of complex cryptographic techniques like threshold cryptosystems or anonymous channels for casting votes. This is in contrast to other voting protocols that have been proposed in the literature. The protocol uses three agents, other than the voters, for successful operation. However, we do not require any of these agents to be trusted. That is, the agents may be physically co-located or may collude with one another to try to commit a fraud. If a fraud is committed, it can be easily detected and proven, so that the vote can be declared null and void. Although we propose the protocol with electronic voting in mind, the protocol can be used in other applications that involve exchanging an untraceable yet authentic message. Examples of such applications are answering confidential questionnaire anonymously or anonymous financial transactions."}
{"_id": "88ccb5b72cf96c9e34940c15e070c7d69a77a98c", "title": "The Love/Hate Relationship with the C Preprocessor: An Interview Study (Artifact)", "text": "The C preprocessor has received strong criticism in academia, among others regarding separation of concerns, error proneness, and code obfuscation, but is widely used in practice. Many (mostly academic) alternatives to the preprocessor exist, but have not been adopted in practice. Since developers continue to use the preprocessor despite all criticism and research, we ask how practitioners perceive the C preprocessor. We performed interviews with 40 developers, used grounded theory to analyze the data, and cross-validated the results with data from a survey among 202 developers, repository mining, and results from previous studies. In particular, we investigated four research questions related to why the preprocessor is still widely used in practice, common problems, alternatives, and the impact of undisciplined annotations. Our study shows that developers are aware of the criticism the C preprocessor receives, but use it nonetheless, mainly for portability and variability. Many developers indicate that they regularly face preprocessorrelated problems and preprocessor-related bugs. The majority of our interviewees do not see any current C-native technologies that can entirely replace the C preprocessor. However, developers tend to mitigate problems with guidelines, even though those guidelines are not enforced consistently. We report the key insights gained from our study and discuss implications for practitioners and researchers on how to better use the C preprocessor to minimize its negative impact. 1998 ACM Subject Classification D.3.4 Processors"}
{"_id": "2201c7ebc6d0365d2ec0bdd94c344f5dd269aa04", "title": "Inferring Mood Instability on Social Media by Leveraging Ecological Momentary Assessments", "text": "Active and passive sensing technologies are providing powerful mechanisms to track, model, and understand a range of health behaviors and well-being states. Despite yielding rich, dense and high fidelity data, current sensing technologies often require highly engineered study designs and persistent participant compliance, making them difficult to scale to large populations and to data acquisition tasks spanning extended time periods. This paper situates social media as a new passive, unobtrusive sensing technology. We propose a semi-supervised machine learning framework to combine small samples of data gathered through active sensing, with large-scale social media data to infer mood instability (MI) in individuals. Starting from a theoretically-grounded measure of MI obtained from mobile ecological momentary assessments (EMAs), we show that our model is able to infer MI in a large population of Twitter users with 96% accuracy and F-1 score. Additionally, we show that, our model predicts self-identifying Twitter users with bipolar and borderline personality disorder to exhibit twice the likelihood of high MI, compared to that in a suitable control. We discuss the implications and the potential for integrating complementary sensing capabilities to address complex research challenges in precision medicine."}
{"_id": "abd81ffe23b23bf5cfdb2f1a02b66c8e14f11581", "title": "The Therapeutic Potentials of Ayahuasca: Possible Effects against Various Diseases of Civilization", "text": "Ayahuasca is an Amazonian psychoactive brew of two main components. Its active agents are \u03b2-carboline and tryptamine derivatives. As a sacrament, ayahuasca is still a central element of many healing ceremonies in the Amazon Basin and its ritual consumption has become common among the mestizo populations of South America. Ayahuasca use amongst the indigenous people of the Amazon is a form of traditional medicine and cultural psychiatry. During the last two decades, the substance has become increasingly known among both scientists and laymen, and currently its use is spreading all over in the Western world. In the present paper we describe the chief characteristics of ayahuasca, discuss important questions raised about its use, and provide an overview of the scientific research supporting its potential therapeutic benefits. A growing number of studies indicate that the psychotherapeutic potential of ayahuasca is based mostly on the strong serotonergic effects, whereas the sigma-1 receptor (Sig-1R) agonist effect of its active ingredient dimethyltryptamine raises the possibility that the ethnomedical observations on the diversity of treated conditions can be scientifically verified. Moreover, in the right therapeutic or ritual setting with proper preparation and mindset of the user, followed by subsequent integration of the experience, ayahuasca has proven effective in the treatment of substance dependence. This article has two important take-home messages: (1) the therapeutic effects of ayahuasca are best understood from a bio-psycho-socio-spiritual model, and (2) on the biological level ayahuasca may act against chronic low grade inflammation and oxidative stress via the Sig-1R which can explain its widespread therapeutic indications."}
{"_id": "117601fe80cc4b7d69a18da06949279395c62292", "title": "Epigenetics and the embodiment of race: developmental origins of US racial disparities in cardiovascular health.", "text": "The relative contribution of genetic and environmental influences to the US black-white disparity in cardiovascular disease (CVD) is hotly debated within the public health, anthropology, and medical communities. In this article, we review evidence for developmental and epigenetic pathways linking early life environments with CVD, and critically evaluate their possible role in the origins of these racial health disparities. African Americans not only suffer from a disproportionate burden of CVD relative to whites, but also have higher rates of the perinatal health disparities now known to be the antecedents of these conditions. There is extensive evidence for a social origin to prematurity and low birth weight in African Americans, reflecting pathways such as the effects of discrimination on maternal stress physiology. In light of the inverse relationship between birth weight and adult CVD, there is now a strong rationale to consider developmental and epigenetic mechanisms as links between early life environmental factors like maternal stress during pregnancy and adult race-based health disparities in diseases like hypertension, diabetes, stroke, and coronary heart disease. The model outlined here builds upon social constructivist perspectives to highlight an important set of mechanisms by which social influences can become embodied, having durable and even transgenerational influences on the most pressing US health disparities. We conclude that environmentally responsive phenotypic plasticity, in combination with the better-studied acute and chronic effects of social-environmental exposures, provides a more parsimonious explanation than genetics for the persistence of CVD disparities between members of socially imposed racial categories."}
{"_id": "87f452d4e9baabda4093007a9c6bbba30c35f3e4", "title": "Face spoofing detection from single images using texture and local shape analysis", "text": "Current face biometric systems are vulnerable to spoofing attacks. A spoofing attack occurs when a person tries to masquerade as someone else by falsifying data and thereby gaining illegitimate access. Inspired by image quality assessment, characterisation of printing artefacts and differences in light reflection, the authors propose to approach the problem of spoofing detection from texture analysis point of view. Indeed, face prints usually contain printing quality defects that can be well detected using texture and local shape features. Hence, the authors present a novel approach based on analysing facial image for detecting whether there is a live person in front of the camera or a face print. The proposed approach analyses the texture and gradient structures of the facial images using a set of low-level feature descriptors, fast linear classification scheme and score level fusion. Compared to many previous works, the authors proposed approach is robust and does not require user-cooperation. In addition, the texture features that are used for spoofing detection can also be used for face recognition. This provides a unique feature space for coupling spoofing detection and face recognition. Extensive experimental analysis on three publicly available databases showed excellent results compared to existing works."}
{"_id": "1f6f571f29d930bc2371c9eb044e03bb8ebd86ae", "title": "Subjective Well \u2010 Being and Income : Is There Any Evidence of Satiation ? *", "text": "Subjective Well\u2010Being and Income: Is There Any Evidence of Satiation? Many scholars have argued that once \u201cbasic needs\u201d have been met, higher income is no longer associated with higher in subjective well-being. We assess the validity of this claim in comparisons of both rich and poor countries, and also of rich and poor people within a country. Analyzing multiple datasets, multiple definitions of \u201cbasic needs\u201d and multiple questions about well-being, we find no support for this claim. The relationship between wellbeing and income is roughly linear-log and does not diminish as incomes rise. If there is a satiation point, we are yet to reach it. JEL Classification: D6, I3, N3, O1, O4"}
{"_id": "816cce72ad9fbeb854e3ca723ded5a51dfcb9311", "title": "Dynamic mixed membership blockmodel for evolving networks", "text": "In a dynamic social or biological environment, interactions between the underlying actors can undergo large and systematic changes. Each actor can assume multiple roles and their degrees of affiliation to these roles can also exhibit rich temporal phenomena. We propose a state space mixed membership stochastic blockmodel which can track across time the evolving roles of the actors. We also derive an efficient variational inference procedure for our model, and apply it to the Enron email networks, and rewiring gene regulatory networks of yeast. In both cases, our model reveals interesting dynamical roles of the actors."}
{"_id": "54f573383d23275731487f2a6b45845db29dbdf8", "title": "Regression approaches to voice quality controll based on one-to-many eigenvoice conversion", "text": "This paper proposes techniques for flexibly controlling voice quality of converted speech from a particular source speaker based on one-to-many eigenvoice conversion (EVC). EVC realizes a voice quality control based on the manipulation of a small number of parameters, i.e., weights for eigenvectors, of an eigenvoice Gaussian mixture model (EV-GMM), which is trained with multiple parallel data sets consisting of a single source speaker and many pre-stored target speakers. However, it is difficult to control intuitively the desired voice quality with those parametersbecause each eigenvector doesn\u2019t usually represent a specific physical meaning. In order to cope with this problem, we propose regression approaches to the EVC-based voice quality controller. The tractable voice quality control of the converted speech is achieved with a low-dimensionalvoice quality control vector capturing specific voice characteristics. We conducted experimental verifications of each of the proposed approaches."}
{"_id": "edc22d9a3aba2c9d457ef16acd7e6de7a17daed4", "title": "A brief survey of blackhole detection and avoidance for ZRP protocol in MANETs", "text": "Within Mobile Ad-Hoc Network(MANETs) unusual categories of routing protocols have been determined. It is a group of wireless system. MANET is other susceptible to a variety of attack than wired system. Black hole attack is further meticulous intimidation to MANETs. The routing protocols of MANET is less protected and therefore consequenced the system with malicious node. There are various routing protocols being used for MANETs. All routing protocols have been briefly discussed, however, Zone Routing Protocol (ZRP) is discussed in detail. Black hole is a major security threat for MANETs. Hence, in this paper, the various techniques used for detection and avoidance of Black hole attack in MANETs using ZRP routing protocol have been discussed."}
{"_id": "08c2ba4d7183d671b9a7652256de17110b81c723", "title": "A Practical Attack to De-anonymize Social Network Users", "text": "Social networking sites such as Facebook, LinkedIn, and Xing have been reporting exponential growth rates and have millions of registered users. In this paper, we introduce a novel de-anonymization attack that exploits group membership information that is available on social networking sites. More precisely, we show that information about the group memberships of a user (i.e., the groups of a social network to which a user belongs) is sufficient to uniquely identify this person, or, at least, to significantly reduce the set of possible candidates. That is, rather than tracking a user's browser as with cookies, it is possible to track a person. To determine the group membership of a user, we leverage well-known web browser history stealing attacks. Thus, whenever a social network user visits a malicious website, this website can launch our de-anonymization attack and learn the identity of its visitors. The implications of our attack are manifold, since it requires a low effort and has the potential to affect millions of social networking users. We perform both a theoretical analysis and empirical measurements to demonstrate the feasibility of our attack against Xing, a medium-sized social network with more than eight million members that is mainly used for business relationships. Furthermore, we explored other, larger social networks and performed experiments that suggest that users of Facebook and LinkedIn are equally vulnerable."}
{"_id": "cf9145aa55da660a8d32bf628235c615318463bf", "title": "Cryptography on FPGAs: State of the Art Implementations and Attacks", "text": "In the last decade, it has become aparent that embedded systems are integral parts of our every day lives. The wireless nature of many embedded applications as well as their omnipresence has made the need for security and privacy preserving mechanisms particularly important. Thus, as FPGAs become integral parts of embedded systems, it is imperative to consider their security as a whole. This contribution provides a state-of-the-art description of security issues on FPGAs, both from the system and implementation perspectives. We discuss the advantages of reconfigurable hardware for cryptographic applications, show potential security problems of FPGAs, and provide a list of open research problems. Moreover, we summarize both public and symmetric-key algorithm implementations on FPGAs."}
{"_id": "aee27f46bca631cc9c45b3ad5032ab9b771dfefe", "title": "Compliance with a structured bedside handover protocol : An observational , multicentred study \u2606", "text": "Background: Bedside handover is the delivery of the nurse-to-nurse shift handover at the patient\u2019s bedside. The method is increasingly used in nursing, but the evidence concerning the implementation process and compliance to the method is limited. Objectives: To determine the compliance with a structured bedside handover protocol following ISBARR and if there were differences in compliance between wards. Design: A multicentred observational study with unannounced and non-participatory observations (n=638) one month after the implementation of a structured bedside handover protocol. Settings and participants: Observations of individual patient handovers between nurses from the morning shift and the afternoon shift in 12 nursing wards in seven hospitals in Flanders, Belgium. Methods: A tailored and structured bedside handover protocol following ISBARR was developed, and nurses were trained accordingly. One month after implementation, a minimum of 50 observations were performed with a checklist, in each participating ward. To enhance reliability, 20% of the observations were conducted by two researchers, and inter-rater agreement was calculated. Data were analysed using descriptive statistics, one-way ANOVAs and multilevel analysis. Results: Average compliance rates to the structured content protocol during bedside handovers were high (83.63%; SD 11.44%), and length of stay, the type of ward and the nursing care model were influencing contextual factors. Items that were most often omitted included identification of the patient (46.27%), the introduction of nurses (36.51%), hand hygiene (35.89%), actively involving the patient (34.44%), and using the call light (21.37%). Items concerning the exchange of clinical information (e.g., test results, reason for admittance, diagnoses) were omitted less (8.09%\u20131.45%). Absence of the patients (27.29%) and staffing issues (26.70%) accounted for more than half of the non-executed bedside handovers. On average, a bedside handover took 146 s per patient. Conclusions: When the bedside handover was delivered, compliance to the structured content was high, indicating that the execution of a bedside handover is a feasible step for nurses. The compliance rate was influenced by the patient\u2019s length of stay, the nursing care model and the type of ward, but their influence was limited. Future implementation projects on bedside handover should focus sufficiently on standard hospital procedures and patient involvement. According to the nurses, there was however a high number of situations where bedside handovers could not be delivered, perhaps indicating a reluctance in practice to use bedside"}
{"_id": "00aa499569decb4e9abe40ebedca1b318b7664a8", "title": "A Novel Feature Selection Approach Based on FODPSO and SVM", "text": "A novel feature selection approach is proposed to address the curse of dimensionality and reduce the redundancy of hyperspectral data. The proposed approach is based on a new binary optimization method inspired by fractional-order Darwinian particle swarm optimization (FODPSO). The overall accuracy (OA) of a support vector machine (SVM) classifier on validation samples is used as fitness values in order to evaluate the informativity of different groups of bands. In order to show the capability of the proposed method, two different applications are considered. In the first application, the proposed feature selection approach is directly carried out on the input hyperspectral data. The most informative bands selected from this step are classified by the SVM. In the second application, the main shortcoming of using attribute profiles (APs) for spectral-spatial classification is addressed. In this case, a stacked vector of the input data and an AP with all widely used attributes are created. Then, the proposed feature selection approach automatically chooses the most informative features from the stacked vector. Experimental results successfully confirm that the proposed feature selection technique works better in terms of classification accuracies and CPU processing time than other studied methods without requiring the number of desired features to be set a priori by users."}
{"_id": "ef2237aea4815107db8ed1cb62476ef30c4dfa47", "title": "CMiner: Opinion Extraction and Summarization for Chinese Microblogs", "text": "Sentiment analysis of microblog texts has drawn lots of attention in both the academic and industrial fields. However, most of the current work only focuses on polarity classification. In this paper, we present an opinion mining system for Chinese microblogs called CMiner. Instead of polarity classification, CMiner focuses on more complicated opinion mining tasks - opinion target extraction and opinion summarization. Novel algorithms are developed for the two tasks and integrated into the end-to-end system. CMiner can help to effectively understand the users' opinion towards different opinion targets in a microblog topic. Specially, we develop an unsupervised label propagation algorithm for opinion target extraction. The opinion targets of all messages in a topic are collectively extracted based on the assumption that similar messages may focus on similar opinion targets. In addition, we build an aspect-based opinion summarization framework for microblog topics. After getting the opinion targets of all the microblog messages in a topic, we cluster the opinion targets into several groups and extract representative targets and summaries for each group. A co-ranking algorithm is proposed to rank both the opinion targets and microblog sentences simultaneously. Experimental results on a benchmark dataset show the effectiveness of our system and the algorithms."}
{"_id": "91e04ecd6ddd52642fde1cd2cce7d09c7c20d695", "title": "Applying an optimized switching strategy to a high gain boost converter for input current ripple cancellation", "text": "This paper discusses applying various switching strategies such as conventional complementary and proportional methods on a specific converter. This converter was recently presented and it can provide high voltage gain and has current ripple cancelation ability at a preselected duty-cycle. The input current ripple is zero when the converter works at a special duty-cycle. But load disturbance leads to output voltage changes that maybe cause the duty-cycle to deviate from its preselected value. In this situation, the input current ripple cannot be completely canceled. The proposed proportional strategy is an optimized method for minimizing input current ripple at other operating duty-cycles and also it provides a voltage gain lower than conventional complementary strategy. Here, the converter's performance is analyzed under two switching strategies and the effect of various strategies has been investigated on converter parameters including output voltage and input current ripple. These considerations are verified by implementing a 100-W prototype of the proposed converter in laboratory."}
{"_id": "748eb923d2c384d2b3af82af58d2e6692ef57aa1", "title": "The Text Mining Handbook: Advanced Approaches to Analyzing Unstructured Data Ronen Feldman and James Sanger (Bar-Ilan University and ABS Ventures) Cambridge, England: Cambridge University Press, 2007, xii+410 pp; hardbound, ISBN 0-521-83657-3, $70.00", "text": "Text mining is a new and exciting area of computer science that tries to solve the crisis of information overload by combining techniques from data mining, machine learning, natural language processing, information retrieval, and knowledge management. The Text Mining Handbook presents a comprehensive discussion of the latest techniques in text mining and link detection. In addition to providing an in-depth examination of core text mining and link detection algorithms and operations, the book examines advanced pre-processing techniques, knowledge representation considerations, and visualization approaches, ending with real-world applications."}
{"_id": "8d9ae24c9f1e59ee8cfc4c6317671b4947c2a153", "title": "A fractional open circuit voltage based maximum power point tracker for photovoltaic arrays", "text": "In this paper a fractional open circuit voltage based maximum power point tracker (MPPT) for photovoltaic (PV) arrays is proposed. The fractional open circuit voltage based MPPT utilizes the fact that the PV array voltage corresponding to the maximum power exhibits a linear dependence with respect to array open circuit voltage for different irradiation and temperature levels. This method is the simplest of all the MPPT methods described in the literature. The main disadvantage of this method is that the PV array is disconnected from the load after regular intervals for the sampling of the array voltage. This results in power loss. Another disadvantage is that if the duration between two successive samplings of the array voltage, called the sampling period, is too long, there is a considerable loss. This is because the output voltage of the PV array follows the unchanged reference during one sampling period. Once a maximum power point (MPP) is tracked and a change in irradiation occurs between two successive samplings, then the new MPP is not tracked until the next sampling of the PV array voltage. This paper proposes an MPPT circuit in which the sampling interval of the PV array voltage, and the sampling period have been shortened. The sample and hold circuit, which samples and holds the MPP voltage, has also been simplified. The proposed circuit does not utilize expensive microcontroller or a digital signal processor and is thus suitable for low cost photovoltaic applications."}
{"_id": "7774b582b9b3fa50775c7fddcfac712cdeef7c97", "title": "HCI Education: Innovation, Creativity and Design Thinking", "text": "Human-Computer Interaction (HCI) education needs re-thinking. In this paper, we explore how and what creativity and design thinking could contribute with, if included as a part of the HCI curriculum. The findings from courses where design thinking was included, indicate that design thinking contributed to increased focus on innovation and creativity, as well as prevented too early fixation on a single solution in the initial phases of HCI design processes, fostering increased flexibility and adaptability in learning processes. The creativity and adaptability may be the best long-term foci that HCI education can add to its curriculums and offer to students when preparing them for future work"}
{"_id": "902d5bc1b1b6e35aabe3494f0165e42a918e82ed", "title": "Compressed matching for feature vectors", "text": "The problem of compressing a large collection of feature vectors is investigated, so that object identification can be processed on the compressed form of the features. The idea is to perform matching of a query image against an image database, using directly the compressed form of the descriptor vectors, without decompression. Specifically, we concentrate on the Scale Invariant Feature Transform (SIFT), a known object detection method, as well as on Dense SIFT and PHOW features, that contain, for each image, about 300 times as many vectors as the original SIFT. Given two feature vectors, we suggest achieving our goal by compressing them using a lossless encoding by means of a Fibonacci code, for which the pairwise matching can be done directly on the compressed files. In our experiments, this approach improves the processing time and incurs only a small loss in compression efficiency relative to standard compressors requiring a decoding phase."}
{"_id": "2fef3ba4f888855e1d087572003553f485414ef1", "title": "Revisit Behavior in Social Media: The Phoenix-R Model and Discoveries", "text": "How many listens will an artist receive on a online radio? How about plays on a YouTube video? How many of these visits are new or returning users? Modeling and mining popularity dynamics of social activity has important implications for researchers, content creators and providers. We here investigate the effect of revisits (successive visits from a single user) on content popularity. Using four datasets of social activity, with up to tens of millions media objects (e.g., YouTube videos, Twitter hashtags or LastFM artists), we show the effect of revisits in the popularity evolution of such objects. Secondly, we propose the PHOENIX-R model which captures the popularity dynamics of individual objects. PHOENIX-R has the desired properties of being: (1) parsimonious, being based on the minimum description length principle, and achieving lower root mean squared error than state-of-the-art baselines; (2) applicable, the model is effective for predicting future popularity values of objects."}
{"_id": "c6419ccf4340832b6a23217674ca1a051a3a1416", "title": "The TaSSt: Tactile sleeve for social touch", "text": "In this paper we outline the design process of the TaSST (Tactile Sleeve for Social Touch), a touch-sensitive vibrotactile arm sleeve. The TaSST was designed to enable two people to communicate different types of touch over a distance. The touch-sensitive surface of the sleeve consists of a grid of 4\u00d73 sensor compartments filled with conductive wool. Each compartment controls the vibration intensity of a vibration motor, located in a grid of 4\u00d73 motors beneath the touch-sensitive layer. An initial evaluation of the TaSST revealed that it was mainly suitable for communicating protracted (e.g. pressing), and simple (e.g. poking) touches."}
{"_id": "c8958d1d8e6a59127e6277626b18e7b7556f5d31", "title": "Kinesthetic interaction: revealing the bodily potential in interaction design", "text": "Within the Human-Computer Interaction community there is a growing interest in designing for the whole body in interaction design. The attempts aimed at addressing the body have very different outcomes spanning from theoretical arguments for understanding the body in the design process, to more practical examples of designing for bodily potential. This paper presents Kinesthetic Interaction as a unifying concept for describing the body in motion as a foundation for designing interactive systems. Based on the theoretical foundation for Kinesthetic Interaction, a conceptual framework is introduced to reveal bodily potential in relation to three design themes --- kinesthetic development, kinesthetic means and kinesthetic disorder; and seven design parameters --- engagement, sociality, movability, explicit motivation, implicit motivation, expressive meaning and kinesthetic empathy. The framework is a tool to be utilized when analyzing existing designs, as well as developing designs exploring new ways of designing kinesthetic interactions."}
{"_id": "0757817bf5714bb91c3d4f30cf3144e0837e57e5", "title": "Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms", "text": "This paper presents our vision of Human Computer Interaction (HCI): \"Tangible Bits.\" Tangible Bits allows users to \"grasp & manipulate\" bits in the center of users\u2019 attention by coupling the bits with everyday physical objects and architectural surfaces. Tangible Bits also enables users to be aware of background bits at the periphery of human perception using ambient display media such as light, sound, airflow, and water movement in an augmented space. The goal of Tangible Bits is to bridge the gaps between both cyberspace and the physical environment, as well as the foreground and background of human activities. This paper describes three key concepts of Tangible Bits: interactive surfaces; the coupling of bits with graspable physical objects; and ambient media for background awareness. We illustrate these concepts with three prototype systems \u2013 the metaDESK, transBOARD and ambientROOM \u2013 to identify underlying research issues."}
{"_id": "0e1f0917797ca7db9bc1d042fa88a46b64e8f171", "title": "inTouch: A Medium for Haptic Interpersonal Communication", "text": "In this paper, we introduce a new approach for applying haptic feedback technology to interpersonal communication. We present the design of our prototype inTouch system which provides a physical link between users separated by distance."}
{"_id": "0f308ef439e65b6b7df4489df01e47a37e5fba7f", "title": "Augmented reality: linking real and virtual worlds: a new paradigm for interacting with computers", "text": "A revolution in computer interface design is changing the way we think about computers. Rather than typing on a keyboard and watching a television monitor, Augmented Reality lets people use familiar, everyday objects in ordinary ways. The difference is that these objects also provide a link into a computer network. Doctors can examine patients while viewing superimposed medical images; children can program their own LEGO constructions; construction engineers can use ordinary paper engineering drawings to communicate with distant colleagues. Rather than immersing people in an artificially-created virtual world, the goal is to augment objects in the physical world by enhancing them with a wealth of digital information and communication capabilities."}
{"_id": "d041316fe31b862110fa4745496f66ec793bf8e3", "title": "Conformal printing of electrically small antennas on three-dimensional surfaces.", "text": "J. J. Adams ,[+] Prof. J. T. Bernhard Department of Electrical and Computer Engineering University of Illinois at Urbana-Champaign Urbana, IL 61801, USA E-mail: jbernhar@illinois.edu Dr. E. B. Duoss ,[+,++] T. F. Malkowski ,[+] Dr. B. Y. Ahn , Prof. J. A. Lewis Department of Materials Science and Engineering University of Illinois at Urbana-Champaign Urbana, IL 61801 USA E-mail: jalewis@illinois.edu Dr. M. J. Motala , Prof. R. G. Nuzzo Department of Chemistry University of Illinois at Urbana-Champaign Urbana, Illinois 61801, USA [+] These authors contributed equally to this work. [++] Presently at Lawrence Livermore National Laboratory, Center for Microand NanoTechnology, Livermore, CA 94550 USA"}
{"_id": "cde617cc3f07faa62ed3812ee150c20559bb91cd", "title": "Oxytocin is associated with human trustworthiness", "text": "Human beings exhibit substantial interpersonal trust-even with strangers. The neuroactive hormone oxytocin facilitates social recognition in animals, and we examine if oxytocin is related to trustworthiness between humans. This paper reports the results of an experiment to test this hypothesis, where trust and trustworthiness are measured using the sequential anonymous \"trust game\" with monetary payoffs. We find that oxytocin levels are higher in subjects who receive a monetary transfer that reflects an intention of trust relative to an unintentional monetary transfer of the same amount. In addition, higher oxytocin levels are associated with trustworthy behavior (the reciprocation of trust). Absent intentionality, both the oxytocin and behavioral responses are extinguished. We conclude that perceptions of intentions of trust affect levels of circulating oxytocin."}
{"_id": "d5987b0ddbe11cff32e7c8591b2b829127849a31", "title": "Learning Partially Contracting Dynamical Systems from Demonstrations", "text": "An algorithm for learning the dynamics of point-to-point motions from demonstrations using an autonomous nonlinear dynamical system, named contracting dynamical system primitives (CDSP), is presented. The motion dynamics are approximated using a Gaussian mixture model (GMM) and its parameters are learned subject to constraints derived from partial contraction analysis. Systems learned using the proposed method generate trajectories that accurately reproduce the demonstrations and are guaranteed to converge to a desired goal location. Additionally, the learned models are capable of quickly and appropriately adapting to unexpected spatial perturbations and changes in goal location during reproductions. The CDSP algorithm is evaluated on shapes from a publicly available human handwriting dataset and also compared with two state-of-the-art motion generation algorithms. Furthermore, the CDSP algorithm is also shown to be capable of learning and reproducing point-to-point motions directly from real-world demonstrations using a Baxter robot."}
{"_id": "108c652bcc4456562762ab2c4b707dc265c4cc7e", "title": "A Solid-State Marx Generator With a Novel Configuration", "text": "A new pulsed-power generator based on a Marx generator (MG) is proposed in this paper with a reduced number of semiconductor components and with a more efficient load-supplying process. The main idea is to charge two groups of capacitors in parallel through an inductor and take advantage of the resonant phenomenon in charging each capacitor up to twice as the input-voltage level. In each resonant half-cycle, one of those capacitor groups is charged; eventually, the charged capacitors will be connected in series, and the summation of the capacitor voltages are appeared at the output of a pulsed-power converter. This topology can be considered as a modified MG, which works based on the resonant concept. The simulated models of this converter have been investigated in a MATLAB/Simulink platform, and a laboratory prototype has been implemented in a laboratory. The simulation and test results verify the operation of the proposed topology in different switching modes."}
{"_id": "2dd8a75ea1baca4a390a6f5bb24afd95d7bb0e1c", "title": "Neural personalized response generation as domain adaptation", "text": "One of the most crucial problem on training personalized response generation models for conversational robots is the lack of large scale personal conversation data. To address the problem, we propose a two-phase approach, namely initialization then adaptation, to first pre-train an optimized RNN encoder-decoder model (LTS model) in a large scale conversational data for general response generation and then fine-tune the model in a small scale personal conversation data to generate personalized responses. For evaluation, we propose a novel human aided method, which can be seen as a quasi-Turing test, to evaluate the performance of the personalized response generation models. Experimental results show that the proposed personalized response generation model outperforms the state-of-the-art approaches to language model personalization and persona-based neural conversation generation on the automatic evaluation, offline human judgment and the quasi-Turing test."}
{"_id": "f41e3c6d51a18e8365613fdb558dbaee5d859d75", "title": "Control of contact via tactile sensing", "text": "In this paper, we present our approach to using tactile sensing in the feedback control of robot contact tasks. A general framework, called tactile servo, is first introduced. We then address the critical issue of how to model the state of contact in terms that are both sufficient for defining general contacts and conducive to bridging the gap between a robot task description and the information observable by a tactile sensor. We subsequently examine techniques for deriving tactile sensor models that are required for computing the state of contact from the sensor output. Two basic methods for mapping a tactile sensor image to the contact state variables are introduced\u2014one based on explicit inverse modeling and the other on numerical tactile Jacobian. In both cases, moments of the tactile sensor image are used as the features that capture the necessary information about contact. The theoretical development is supported by extensive experiments, which include edge tracking, object shape construction, and object manipulation."}
{"_id": "49d5fefa7242e2b95e75b59bba288250041c21fd", "title": "External Computations and Interoperability in the New DLV Grounder", "text": "In this paper we focus on some of the most recent advancements in I-DLV, the new intelligent grounder of DLV; the system has been endowed with means aimed at easing the interoperability and integration with external systems and accommodating external source of computation and value invention within ASP programs. In particular, we describe here the support for external computations via explicit calls to Python scripts, and tools for the interoperability with both relational and graph databases."}
{"_id": "2f2678e1b54593e54ad63e616729c6e6b4527184", "title": "Study of Planar Inverted-F Antenna ( PIFA ) for Mobile Devices", "text": "Now-a-days more and more radios are being integrated into a single wireless platform to allow maximum connectivity. In this paper a theoretical study of Planar Inverted F Antenna is presented. PIFA as a low profile antenna is widely used in several application areas due to several advantages. Research and experiments done on PIFA structures are discussed and categorized. Some methods that can be used to improve efficiency, bandwidth & reduce volume of an antenna are also discussed which includes effect of dielectric substrate, dimensions of the antenna etc."}
{"_id": "a23bb18a5219541e8fbe50d0a1fbf7c2acfa7b5c", "title": "S-Sensors: Integrating physical world inputs with social networks using wireless sensor networks", "text": "Real-world monitoring and surveillance applications are rapidly increasing with the growth in the demand for awareness of the environment and surrounding. Sensor-to-web integration models are envisaged to improve the utilisation of sensory infrastructures and data via advanced coordination and collaboration of sensor nodes and web applications. This paper proposes to employ micro-blogging to publish and share sensory data and resources. S-Sensors provides a framework to globally share locally measured sensory readings. Short messages depicting the status of the environment are used to convey sensory data of the physical world. Moreover, data accessibility and resource manageability are facilitated by using social networks paradigms to form communities of sensor networks."}
{"_id": "b30751afa5b2d53bcfe1976782a0d7c43c968f64", "title": "Development of ambient environmental monitoring system through wireless sensor network (WSN) using NodeMCU and \u201cWSN monitoring\u201d", "text": "In this paper we have developed a system for web based environment monitoring using the WSN Technology. WSN sensor nodes transmit data to the cloud-based database via Web API request. Measured data can be monitored by the user anywhere from internet by using the Web Application which one is also compatible for mobile phones. If the data measured by sensor node exceeds the configured value range in Web Application, Web Application sends a warning e-mail to users for improving environmental conditions."}
{"_id": "d044d399049bb9bc6df8cc2a5d72610a95611eed", "title": "Multicenter randomized clinical trial evaluating the effectiveness of the Lokomat in subacute stroke.", "text": "OBJECTIVE\nTo compare the efficacy of robotic-assisted gait training with the Lokomat to conventional gait training in individuals with subacute stroke.\n\n\nMETHODS\nA total of 63 participants<6 months poststroke with an initial walking speed between 0.1 to 0.6 m/s completed the multicenter, randomized clinical trial. All participants received twenty-four 1-hour sessions of either Lokomat or conventional gait training. Outcome measures were evaluated prior to training, after 12 and 24 sessions, and at a 3-month follow-up exam. Self-selected overground walking speed and distance walked in 6 minutes were the primary outcome measures, whereas secondary outcome measures included balance, mobility and function, cadence and symmetry, level of disability, and quality of life measures.\n\n\nRESULTS\nParticipants who received conventional gait training experienced significantly greater gains in walking speed (P=.002) and distance (P=.03) than those trained on the Lokomat. These differences were maintained at the 3-month follow-up evaluation. Secondary measures were not different between the 2 groups, although a 2-fold greater improvement in cadence was observed in the conventional versus Lokomat group.\n\n\nCONCLUSIONS\nFor subacute stroke participants with moderate to severe gait impairments, the diversity of conventional gait training interventions appears to be more effective than robotic-assisted gait training for facilitating returns in walking ability."}
{"_id": "641cb0722bb971117dce3ef53f8de0b1a63ef6f0", "title": "Advances in Neuropsychoanalysis , Attachment Theory , and Trauma Research : Implications for Self Psychology", "text": "In 1971, Heinz Kohut, trained in neurology and then psychoanalysis, published The Analysis of the Self, a detailed exposition of the central role of the self in human existence. This classic volume of both twentieth century psychoanalysis and psychology was more than a collection of various clinical observations\u2014rather it represented an overarching integrated theory of the development, structuralization, psychopathogenesis, and psychotherapy of disorders of the self. Although some of these ideas were elaborations of previous psychoanalytic principles, a large number of his concepts, including an emphasis on self rather than ego, signified an innovative departure from mainstream psychoanalysis and yet a truly creative addition to Freud\u2019s theory."}
{"_id": "b0d4b6c5dac78ce88c76c63c41b58084f98b0485", "title": "Linearization Through Dithering: A 50 MHz Bandwidth, 10-b ENOB, 8.2 mW VCO-Based ADC", "text": "Non-linear voltage-to-frequency characteristic of a voltage-controlled oscillator (VCO) severely curtails the dynamic range of analog-to-digital converters (ADCs) built with VCOs. Typical approaches to enhance the dynamic range include embedding the VCO-based ADC in a \u0394\u03a3 loop or to post-process the digital data for calibration, both of which impose significant power constraints. In contrast, in this work the VCO-based ADC is linearized through a filtered dithering technique, wherein the VCO-based ADC is used as a fine stage that processes the residue from a coarse stage in a 0-1 MASH structure. The proposed filtered dithering technique conditions the signal to the VCO input to appear as white noise thereby eliminating spurious signal content arising out of the VCO nonlinearity. The work resorts to multiple other signal processing techniques to build a high-resolution, wideband prototype, in 65 nm complementary metal-oxide semiconductor (CMOS), that achieves 10 effective number of bits (ENOB) in digitizing signals with 50 MHz bandwidth consuming 8.2 mW at a figure of merit (FoM) of 90 fJ/conv.step."}
{"_id": "4d5ed143295435c85300ecc3dfa05f462f03846c", "title": "Image compression with auto-encoder algorithm using deep neural network (DNN)", "text": "An image compression is necessary for the image processing applications such as data storing, image classification, image recognition etc. Then, several research articles have been proposed to reserve for these topics. However, the image compression with auto encoder has been found for a small number of the improvements. Therefore, this paper presents a detailed study to demonstrate the image compression algorithm using the deep neural network (DNN). The proposed algorithm consists of 1) compressing image with auto encoder, and 2) decoding image. The proposed compressing image with auto encoder algorithm uses non-recurrent three-layer neural networks (NRTNNs) which use an extended Kalman filter (EKF) to update the weights of the networks. To evaluate the proposed algorithm performances, the Matlab program is used for implementations of the overall testing algorithm. From our simulation results, it shows that the proposed image compression algorithm is able to reduce the image dimensionality and is able to recall the compressed image with low loss."}
{"_id": "5b7ad451e0fa36cded02c28f3ec32a0cad5a3df5", "title": "Distribution expansion planning considering reliability and security of energy using modified PSO ( Particle Swarm Optimization ) algorithm", "text": "Distribution feeders and substations need to provide additional capacity to serve the growing electrical demand of customers without compromising the reliability of the electrical networks. Also, more control devices, such as DG (Distributed Generation) units are being integrated into distribution feeders. Distribution networks were not planned to host these intermittent generation units before construction of the systems. Therefore, additional distribution facilities are needed to be planned and prepared for the future growth of the electrical demand as well as the increase of network hosting capacity by DG units. This paper presents a multiobjective optimization algorithm for the MDEP (Multi-Stage Distribution Expansion Planning) in the presence of DGs using nonlinear formulations. The objective functions of the MDEP consist of minimization of costs, END (Energy-Not-Distributed), active power losses and voltage stability index based on SCC (Short Circuit Capacity). A MPSO (modified Particle Swarm Optimization) algorithm is developed and used for this multiobjective MDEP optimization. In the proposed MPSO algorithm, a new mutation method is implemented to improve the global searching ability and restrain the premature convergence to local minima. The effectiveness of the proposed method is tested on a typical 33-bus test system and results are presented."}
{"_id": "d1917d0118d1d4574b85fe7023646df5a8dc9512", "title": "Word similarity computation based on HowNet", "text": "Word similarity computation has important applications in many areas, such as natural language processing, intelligence information retrieval, document clustering, automatic answering, word sense disambiguation, machine translation etc. This article makes an intensive study of word similarity computation based on HowNet and the word similarity is computed in three steps: (1) computes sememes similarity, (2) computes concepts similarity using the weight sum method of sememes similarity, (3) takes the maximum similarity of concepts as words similarity. This article mainly introduces numbering the sememes and presents a more accurate method of decomposing the concept description type. The experiment shows that the algorithm of word similarity presented in this article is simple and feasible, and the computed results are relatively consistent with awareness."}
{"_id": "f1e0a619b6ad652b65b49f362ac9413e89291ad7", "title": "A Resource-Based Model For E-Commerce In Developing Countries", "text": "Previous efforts in electronic commerce (e-commerce) research in developing countries shows that there is an acute lack of theoretical frameworks and empirical evidence to understand how developing country firms realize electronic commerce benefits amidst their national constraints. This paper sets out to develop a theoretically abstracted but contextually grounded model of how developing country firms can orient their resources and realize these benefits amidst their national constraints. A review of e-commerce and strategy management literature to develop a resource \u2013 based model for e-commerce benefits was undertaken. The process-based model provides an understanding of how to identify, integrate, and reconfigure resources to achieve electronic commerce benefits; provides propositions that serves as theoretical platforms for future empirically grounded research on electronic commerce in developing country contexts and brings organizations closer to identifying and categorizing the strategic value of resources and the role managerial capabilities and intangible resources play in sustaining e-commerce benefits. Finally, our findings provides organizations the strategic options to address resources which have lost their value or have become less valuable to their strategic orientation in e-commerce adoption thereby serving as a starting point of examining e-commerce in developing countries through the theoretical lens of information systems and strategic management."}
{"_id": "be5fa72d0bca30184c8bef0361ee07aa5c7a011e", "title": "Personalized Security Messaging : Nudges for Compliance with Browser Warnings", "text": "Decades of psychology and decision-making research show that everyone makes decisions differently; yet security messaging is still one-size-fits-all. This suggests that we can improve outcomes by delivering information relevant to how each individual makes decisions. We tested this hypothesis by designing messaging customized for stable personality traits\u2014 specifically, the five dimensions of the General Decision-Making Style (GDMS) instrument. We applied this messaging to browser warnings, security messaging encountered by millions of web users on a regular basis. To test the efficacy of our nudges, we conducted experiments with 1,276 participants, who encountered a warning about broken HTTPS due to an invalid certificate under realistic circumstances. While the effects of some nudges correlated with certain traits in a statistically significant manner, we could not reject the null hypothesis\u2014that the intervention did not affect the subjects\u2019 behavior\u2014for most of our nudges, especially after accounting for participants who did not pay close attention to the message. In this paper, we present the detailed results of our experiments, discuss potential reasons for why the outcome contradicts the decision-making research, and identify lessons for researchers based on our experience."}
{"_id": "5df5978d05ace5695ab5856d4a662b93805b9c00", "title": "Coding of pleasant touch by unmyelinated afferents in humans", "text": "Pleasant touch sensations may begin with neural coding in the periphery by specific afferents. We found that during soft brush stroking, low-threshold unmyelinated mechanoreceptors (C-tactile), but not myelinated afferents, responded most vigorously at intermediate brushing velocities (1\u221210 cm s\u22121), which were perceived by subjects as being the most pleasant. Our results indicate that C-tactile afferents constitute a privileged peripheral pathway for pleasant tactile stimulation that is likely to signal affiliative social body contact."}
{"_id": "cc46a8ba591880c791cb3425e94a2d6f614ba016", "title": "A Systematic Mapping Study on Software Reuse", "text": "Context: Software reuse is considered as the key to a successful software development because of its potential to reduce the time to market, increase quality and reduce costs. This increase in demand made the software organizations to envision the use of software reusable assets which can also help in solving recurring problems. Till now, software reuse is confined to reuse of source code in the name of code scavenging. Now a day, software organizations are extending the concepts of software reuse to other life cycle objects as they realized that reuse of source code alone does not save money. The academia has put forward some assets as reusable and presented methods or approaches for reusing them. Also, for a successful software reuse the organizations should assess the value of reuse and keep track on their reuse programs. The other area which is vital for software reuse is the maintenance. Maintenance of reusable software has direct impact on the cost of the software. In this regard, academia has presented a number of techniques, methods, metrics and models for assessing the value of reuse and for maintaining the reusable software. Objectives: In our thesis, we investigate on the reusable assets and the methods/ approaches that are put forward by the academia for reusing those assets. Also a systematic mapping study is performed to investigate what techniques, methods, models and metrics for assessing the value of reuse and for maintaining the reused software are proposed and we also investigate their validation status as well. Methods: The databases like IEEE Xplore, ACM digital library, Inspec, Springer and Google scholar were used to search for the relevant studies for our systematic mapping study. We followed basic inclusion criteria along with detailed inclusion/exclusion criteria for selecting the appropriate article. Results: Through our systematic mapping study, we could summarize the list of 14 reusable assets along with the approaches/methods for reusing them. Taxonomy for assessing the value of reuse and taxonomy for maintaining the reusable software are presented. We also presented the methods/metrics/models/techniques for measuring reuse to assess its value and for maintaining the reusable software along with their validation status and areas in focus. Conclusion: We conclude that, there is a need for defining a standard set of reusable assets that are commonly accepted by the researchers in the field of software reuse. Most metrics/models/methods/approaches presented for assessing the value of reuse and for maintaining the reuse software are academically validated. Efforts have to be put on industrially validating them using the real data."}
{"_id": "32b7c6bf94dc64ca7e64cb2daa1fdad4058912ae", "title": "Design of Lightweight Authentication and Key Agreement Protocol for Vehicular Ad Hoc Networks", "text": "Due to the widespread popularity in both academia and industry, vehicular ad hoc networks (VANETs) have been used in a wide range of applications starting from intelligent transportation to e-health and itinerary planning. This paper proposes a new decentralized lightweight authentication and key agreement scheme for VANETs. In the proposed scheme, there are three types of mutual authentications: 1) between vehicles; 2) between vehicles and their respective cluster heads; and 3) between cluster heads and their respective roadside units. Apart from these authentications, the proposed scheme also maintains secret keys between roadside units for their secure communications. The rigorous formal and informal security analysis shows that the proposed scheme is capable to defend various malicious attacks. Moreover, the ns-2 simulation demonstrates the practicability of the proposed scheme in VANET environment."}
{"_id": "95216a2abf179e26db309d08701f605124233e6d", "title": "A Theoretical Framework for Structured Prediction using Factor Graph Complexity \u2217", "text": "We present a general theoretical analysis of structured prediction. By introducing a new complexity measure that explicitly factors in the structure of the output space and the loss function, we are able to derive new data-dependent learning guarantees for a broad family of losses and for hypothesis sets with an arbitrary factor graph decomposition. To the best of our knowledge, these are both the most favorable and the most general guarantees for structured prediction (and multiclass classification) currently known. We also extend this theory by leveraging the principle of Voted Risk Minimization (VRM) and showing that learning is possible with complex factor graphs. We both present new learning bounds in this advanced setting as well as derive two new families of algorithms, Voted Conditional Random Field and Voted Structured Boosting, which can make use of very complex features and factor graphs without overfitting. Finally, we validate our theory through experiments on several datasets."}
{"_id": "9628b0d57eddf932a347d35c247bc8e263e11e2b", "title": "Theoretical perspectives on the relation between catastrophizing and pain.", "text": "The tendency to \"catastrophize\" during painful stimulation contributes to more intense pain experience and increased emotional distress. Catastrophizing has been broadly conceived as an exaggerated negative \"mental set\" brought to bear during painful experiences. Although findings have been consistent in showing a relation between catastrophizing and pain, research in this area has proceeded in the relative absence of a guiding theoretical framework. This article reviews the literature on the relation between catastrophizing and pain and examines the relative strengths and limitations of different theoretical models that could be advanced to account for the pattern of available findings. The article evaluates the explanatory power of a schema activation model, an appraisal model, an attention model, and a communal coping model of pain perception. It is suggested that catastrophizing might best be viewed from the perspective of hierarchical levels of analysis, where social factors and social goals may play a role in the development and maintenance of catastrophizing, whereas appraisal-related processes may point to the mechanisms that link catastrophizing to pain experience. Directions for future research are suggested."}
{"_id": "acdacc042f29dea296c1a5e130ccea7b011d7ca4", "title": "A realist review of mobile phone-based health interventions for non-communicable disease management in sub-Saharan Africa", "text": "BACKGROUND\nThe prevalence of non-communicable diseases (NCDs) is increasing in sub-Saharan Africa. At the same time, the use of mobile phones is rising, expanding the opportunities for the implementation of mobile phone-based health (mHealth) interventions. This review aims to understand how, why, for whom, and in what circumstances mHealth interventions against NCDs improve treatment and care in sub-Saharan Africa.\n\n\nMETHODS\nFour main databases (PubMed, Cochrane Library, Web of Science, and Google Scholar) and references of included articles were searched for studies reporting effects of mHealth interventions on patients with NCDs in sub-Saharan Africa. All studies published up until May 2015 were included in the review. Following a realist review approach, middle-range theories were identified and integrated into a Framework for Understanding the Contribution of mHealth Interventions to Improved Access to Care for patients with NCDs in sub-Saharan Africa. The main indicators of the framework consist of predisposing characteristics, needs, enabling resources, perceived usefulness, and perceived ease of use. Studies were analyzed in depth to populate the framework.\n\n\nRESULTS\nThe search identified 6137 titles for screening, of which 20 were retained for the realist synthesis. The contribution of mHealth interventions to improved treatment and care is that they facilitate (remote) access to previously unavailable (specialized) services. Three contextual factors (predisposing characteristics, needs, and enabling resources) influence if patients and providers believe that mHealth interventions are useful and easy to use. Only if they believe mHealth to be useful and easy to use, will mHealth ultimately contribute to improved access to care. The analysis of included studies showed that the most important predisposing characteristics are a positive attitude and a common language of communication. The most relevant needs are a high burden of disease and a lack of capacity of first-contact providers. Essential enabling resources are the availability of a stable communications network, accessible maintenance services, and regulatory policies.\n\n\nCONCLUSIONS\nPolicy makers and program managers should consider predisposing characteristics and needs of patients and providers as well as the necessary enabling resources prior to the introduction of an mHealth intervention. Researchers would benefit from placing greater attention on the context in which mHealth interventions are being implemented instead of focusing (too strongly) on the technical aspects of these interventions."}
{"_id": "dbc9313c7633c6555cf5b1d3998ab3f93326b512", "title": "Visual attention measures for multi-screen TV", "text": "We introduce a set of nine measures to characterize viewers' visual attention patterns for multi-screen TV. We apply our measures during an experiment involving nine screen layouts with two, three, and four TV screens, for which we report new findings on visual attention. For example, we found that viewers need an average discovery time up to 4.5 seconds to visually fixate four screens, and their perceptions of how long they watched each screen are substantially accurate, i.e., we report Pearson correlations up to .892 with measured eye tracking data. We hope our set of new measures (and the companion toolkit to compute them automatically) will benefit the community as a first step toward understanding visual attention for emerging multi-screen TV applications."}
{"_id": "098cc8b16697307a241658d69c213954ede76d59", "title": "A first look at traffic on smartphones", "text": "Using data from 43 users across two platforms, we present a detailed look at smartphone traffic. We find that browsing contributes over half of the traffic, while each of email, media, and maps contribute roughly 10%. We also find that the overhead of lower layer protocols is high because of small transfer sizes. For half of the transfers that use transport-level security, header bytes correspond to 40% of the total. We show that while packet loss is the main factor that limits the throughput of smartphone traffic, larger send buffers at Internet servers can improve the throughput of a quarter of the transfers. Finally, by studying the interaction between smartphone traffic and the radio power management policy, we find that the power consumption of the radio can be reduced by 35% with minimal impact on the performance of packet exchanges."}
{"_id": "0cae1c767b150213ab64ea73720b3fbabf1f7c84", "title": "Bartendr: a practical approach to energy-aware cellular data scheduling", "text": "Cellular radios consume more power and suffer reduced data rate when the signal is weak. According to our measurements, the communication energy per bit can be as much as 6x higher when the signal is weak than when it is strong. To realize energy savings, applications must preferentially communicate when the signal is strong, either by deferring non-urgent communication or by advancing anticipated communication to coincide with periods of strong signal. Allowing applications to perform such scheduling requires predicting signal strength, so that opportunities for energy-efficient communication can be anticipated. Furthermore, such prediction must be performed at little energy cost.\n In this paper, we make several contributions towards a practical system for energy-aware cellular data scheduling called Bartendr. First, we establish, via measurements, the relationship between signal strength and power consumption. Second, we show that location alone is not sufficient to predict signal strength and motivate the use of tracks to enable effective prediction. Finally, we develop energy-aware scheduling algorithms for different workloads - syncing and streaming - and evaluate these via simulation driven by traces obtained during actual drives, demonstrating energy savings of up to 60%. Our experiments have been performed on four cellular networks across two large metropolitan areas, one in India and the other in the U.S."}
{"_id": "1e126cee4c1bddbfdd4e36bf91b8b1c2fe8d44c2", "title": "Accurate online power estimation and automatic battery behavior based power model generation for smartphones", "text": "This paper describes PowerBooter, an automated power model construction technique that uses built-in battery voltage sensors and knowledge of battery discharge behavior to monitor power consumption while explicitly controlling the power management and activity states of individual components. It requires no external measurement equipment. We also describe PowerTutor, a component power management and activity state introspection based tool that uses the model generated by PowerBooter for online power estimation. PowerBooter is intended to make it quick and easy for application developers and end users to generate power models for new smartphone variants, which each have different power consumption properties and therefore require different power models. PowerTutor is intended to ease the design and selection of power efficient software for embedded systems. Combined, PowerBooter and PowerTutor have the goal of opening power modeling and analysis for more smartphone variants and their users."}
{"_id": "209562baa7d7a155c4f5ebfb30dbd84ad087e3b3", "title": "Applied spatial data analysis with R", "text": "Do you need the book of Applied Spatial Data Analysis with R pdf with ISBN of 9781461476177? You will be glad to know that right now Applied Spatial Data Analysis with R pdf is available on our book collections. This Applied Spatial Data Analysis with R comes PDF and EPUB document format. If you want to get Applied Spatial Data Analysis with R pdf eBook copy, you can download the book copy here. The Applied Spatial Data Analysis with R we think have quite excellent writing style that make it easy to comprehend."}
{"_id": "3f62fe7de3bf15af1e5871dd8f623db29d8f0c35", "title": "Diversity in smartphone usage", "text": "Using detailed traces from 255 users, we conduct a comprehensive study of smartphone use. We characterize intentional user activities -- interactions with the device and the applications used -- and the impact of those activities on network and energy usage. We find immense diversity among users. Along all aspects that we study, users differ by one or more orders of magnitude. For instance, the average number of interactions per day varies from 10 to 200, and the average amount of data received per day varies from 1 to 1000 MB. This level of diversity suggests that mechanisms to improve user experience or energy consumption will be more effective if they learn and adapt to user behavior. We find that qualitative similarities exist among users that facilitate the task of learning user behavior. For instance, the relative application popularity for can be modeled using an exponential distribution, with different distribution parameters for different users. We demonstrate the value of adapting to user behavior in the context of a mechanism to predict future energy drain. The 90th percentile error with adaptation is less than half compared to predictions based on average behavior across users."}
{"_id": "d7fef01bd72eb1b073b2e053f05ae1ae43eae499", "title": "Chip to wafer temporary bonding with self-alignment by patterned FDTS layer for size-free MEMS integration", "text": "In this paper, we present a low-cost and rapid self-alignment process for temporary bonding of MEMS chip onto carrier wafer for size-free MEMS-IC integration. For the first time, a hydrophobic self-assembled monolayer (SAM), FDTS (CF3(CF2)7(CH2)2SiCl3), was successfully patterned by lift-off process on an oxidized silicon carrier wafer. Small volume of H2O (\u223c\u00b5l/cm2) was then dropped and spread on the non-coated hydrophilic SiO2 surface for temporary bonding of MEMS chip. Our results demonstrated that the hydrophobic FDTS pattern on carrier wafer enables rapid and precise self-alignment of MEMS chip onto SiO2 binding-site by capillary force. After transfer the MEMS chips to target wafer, FDTS can be removed by O2 plasma treatment or UV irradiation."}
{"_id": "6c43bd33f4733bfc8b729da2a182101fa27abcd6", "title": "Exploiting users' social relations to forward data in opportunistic networks: The HiBOp solution", "text": "Opportunistic networks, in which nodes opportunistically exploit any pair-wise contact to identify next hops towards the destination, are one of the most interesting technologies to support the pervasive networking vision. Opportunistic networks allow content sharing between mobile users without requiring any pre-existing Internet infrastructure, and tolerate partitions, long disconnections, and topology instability in general. In this paperwe propose a context-aware framework for routing and forwarding in opportunistic networks. The framework is general, and able to host various flavors of context-aware routing. In this work we also present a particular protocol, HiBOp, which, by exploiting the framework, learns and represents through context information, the users\u2019 behavior and their social relations, and uses this knowledge to drive the forwarding process. The comparison of HiBOp with reference to alternative solutions shows that a context-aware approach based on users\u2019 social relations turns out to be a very efficient solution for forwarding in opportunistic networks.We showperformance improvements over the reference solutions both in terms of resource utilization and in terms of user perceived QoS. \u00a9 2008 Elsevier B.V. All rights reserved."}
{"_id": "8c0bdd3de8baa2789c76bf0842195d0f2cd4fa6c", "title": "Are Cyberbullies Less Empathic? Adolescents' Cyberbullying Behavior and Empathic Responsiveness", "text": "Meta-analyses confirm a negative relationship between aggressive behavior and empathy, that is, the ability to understand and share the feelings of others. Based on theoretical considerations, it was, therefore, hypothesized that a lack of empathic responsiveness may be characteristic for cyberbullies in particular. In the present study, 2.070 students of Luxembourg secondary schools completed an online survey that included a cyberbullying questionnaire(4) and a novel empathy short scale. According to the main hypothesis, analyses of variances indicated that cyberbullies demonstrated less empathic responsiveness than non-cyberbullies. In addition, cyberbullies were also more afraid of becoming victims of cyberbullying. The findings confirm and substantially extend the research on the relationship between empathy and aggressive behavior. From an educational point of view, the present findings suggest that training of empathy skills might be an important tool to decrease cyberbullying."}
{"_id": "a23407b19100acba66fcfc2803d251a3c829e9e3", "title": "Applying Graph theory to the Internet of Things", "text": "In the Internet of Things (IoT), we all are ``things''. Graph theory, a branch of discrete mathematics, has been proven to be useful and powerful in understanding complex networks in history. By means of graph theory, we define new concepts and terminology, and explore the definition of IoT, and then show that IoT is the union of a topological network, a data-functional network and a domi-functional network."}
{"_id": "526f595dca4d9c61131df2dd2aa001398f6bea43", "title": "Dermatologists' accuracy in early diagnosis of melanoma of the nail matrix.", "text": "OBJECTIVE\nTo measure and compare the accuracy of 4 different clinical methods in the diagnosis of melanoma in situ of the nail matrix among dermatologists with different levels of clinical experience.\n\n\nDESIGN\nTwelve cases of melanonychias (5 melanomas and 7 nonmelanomas) were presented following 4 successive steps: (1) clinical evaluation, (2) evaluation according to the ABCDEF rule, (3) dermoscopy of the nail plate, and (4) intraoperative dermoscopy. At each step, the dermatologists were asked to decide if the lesion was a melanoma.\n\n\nSETTING\nThe test was administered at 2 dermatological meetings in 2008.\n\n\nPARTICIPANTS\nA total of 152 dermatologists, including 11 nail experts, 53 senior dermatologists, and 88 junior dermatologists.\n\n\nMAIN OUTCOME MEASURES\nThe answers were evaluated as percentage of right answers for each diagnostic step according to the different grade of expertise. Differences among the percentage of right answers in the different steps were evaluated with the z test at a 5% level of significance. The agreement was investigated using Cohen kappa statistic.\n\n\nRESULTS\nThe only method that statistically influenced the correct diagnosis for each category (experts, seniors, and juniors) was intraoperative dermoscopy (z test; P < .05). Cohen kappa statistic showed a moderate interobserver agreement.\n\n\nCONCLUSIONS\nOverall accuracy of dermatologists in the diagnosis of nail matrix melanoma in situ is low because the percentages of physicians who indicated the correct diagnosis during each of the first 3 clinical steps of the test ranged from 46% to 55%. The level of expertise did not statistically influence the correct diagnosis."}
{"_id": "45654695f5cad20d2be36d45d280af5180004baf", "title": "Rethink fronthaul for soft RAN", "text": "In this article we discuss the design of a new fronthaul interface for future 5G networks. The major shortcomings of current fronthaul solutions are first analyzed, and then a new fronthaul interface called next-generation fronthaul interface (NGFI) is proposed. The design principles for NGFI are presented, including decoupling the fronthaul bandwidth from the number of antennas, decoupling cell and user equipment processing, and focusing on high-performancegain collaborative technologies. NGFI aims to better support key 5G technologies, in particular cloud RAN, network functions virtualization, and large-scale antenna systems. NGFI claims the advantages of reduced bandwidth as well as improved transmission efficiency by exploiting the tidal wave effect on mobile network traffic. The transmission of NGFI is based on Ethernet to enjoy the benefits of flexibility and reliability. The major impact, challenges, and potential solutions of Ethernet-based fronthaul networks are also analyzed. Jitter, latency, and time and frequency synchronization are the major issues to overcome."}
{"_id": "0f73f4ebc58782c03cc78aa7d6a391d23101ba09", "title": "Language within our grasp", "text": "In monkeys, the rostral part of ventral premotor cortex (area F5) contains neurons that discharge, both when the monkey grasps or manipulates objects and when it observes the experimenter making similar actions. These neurons (mirror neurons) appear to represent a system that matches observed events to similar, internally generated actions, and in this way forms a link between the observer and the actor. Transcranial magnetic stimulation and positron emission tomography (PET) experiments suggest that a mirror system for gesture recognition also exists in humans and includes Broca's area. We propose here that such an observation/execution matching system provides a necessary bridge from'doing' to'communicating',as the link between actor and observer becomes a link between the sender and the receiver of each message."}
{"_id": "5309c563fe3f3b78f5e5e2ac9ee2159ebf28402f", "title": "DCU-UVT: Word-Level Language Classification with Code-Mixed Data", "text": "This paper describes the DCU-UVT team\u2019s participation in the Language Identification in Code-Switched Data shared task in the Workshop on Computational Approaches to Code Switching. Wordlevel classification experiments were carried out using a simple dictionary-based method, linear kernel support vector machines (SVMs) with and without contextual clues, and a k-nearest neighbour approach. Based on these experiments, we select our SVM-based system with contextual clues as our final system and present results for the Nepali-English and Spanish-English datasets."}
{"_id": "c91b4b3a20a7637ecbb7e0179ac3108f3cf11880", "title": "Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network", "text": "Human generates responses relying on semantic and functional dependencies, including coreference relation, among dialogue elements and their context. In this paper, we investigate matching a response with its multi-turn context using dependency information based entirely on attention. Our solution is inspired by the recently proposed Transformer in machine translation (Vaswani et al., 2017) and we extend the attention mechanism in two ways. First, we construct representations of text segments at different granularities solely with stacked self-attention. Second, we try to extract the truly matched segment pairs with attention across the context and response. We jointly introduce those two kinds of attention in one uniform neural network. Experiments on two large-scale multi-turn response selection tasks show that our proposed model significantly outperforms the state-of-the-art models."}
{"_id": "7eb2cb5b31002906ef6a1b9f6707968e57afb714", "title": "Multi-label Deep Regression and Unordered Pooling for Holistic Interstitial Lung Disease Pattern Detection", "text": "Holistically detecting interstitial lung disease (ILD) patterns from CT images is challenging yet clinically important. Unfortunately, most existing solutions rely on manually provided regions of interest, limiting their clinical usefulness. In addition, no work has yet focused on predicting more than one ILD from the same CT slice, despite the frequency of such occurrences. To address these limitations, we propose two variations of multi-label deep convolutional neural networks (CNNs). The first uses a deep CNN to detect the presence of multiple ILDs using a regression-based loss function. Our second variant further improves performance, using spatially invariant Fisher Vector encoding of the CNN feature activations. We test our algorithms on a dataset of 533 patients using five-fold cross-validation, achieving high area-under-curve (AUC) scores of 0.982, 0.972, 0.893 and 0.993 for Ground Glass, Reticular, Honeycomb and Emphysema, respectively. As such, our work represents an important step forward in providing clinically effective ILD detection."}
{"_id": "a1bbd52c57ad6a36057f5aa69544887261eb1a83", "title": "Syntax-based Alignment of Multiple Translations: Extracting Paraphrases and Generating New Sentences", "text": "We describe a syntax-based algorithm that automatically builds Finite State Automata (word lattices) from semantically equivalent translation sets. These FSAs are good representations of paraphrases. They can be used to extract lexical and syntactic paraphrase pairs and to generate new, unseen sentences that express the same meaning as the sentences in the input sets. Our FSAs can also predict the correctness of alternative semantic renderings, which may be used to evaluate the quality of translations."}
{"_id": "2269853c59ce06c599b79df3c3023504fc56a685", "title": "Narrating a Knowledge Base", "text": "We aim to automatically generate natural language narratives about an input structured knowledge base (KB). We build our generation framework based on a pointer network which can copy facts from the input KB, and add two attention mechanisms: (i) slot-aware attention to capture the association between a slot type and its corresponding slot value; and (ii) a new table position self-attention to capture the inter-dependencies among related slots. For evaluation, besides standard metrics including BLEU, METEOR, and ROUGE, we also propose a KB reconstruction based metric by extracting a KB from the generation output and comparing it with the input KB. We also create a new data set which includes 106,216 pairs of structured KBs and their corresponding natural language descriptions for two distinct entity types. Experiments show that our approach significantly outperforms state-of-the-art methods. The reconstructed KB achieves 68.8% 72.6% F-score.1"}
{"_id": "2363304d6dfccc56d22e4dcb21397cd3aaa45b9a", "title": "Robust high-dimensional precision matrix estimation", "text": "The dependency structure of multivariate data can be analyzed using the covariance matrix \u03a3. In many fields the precision matrix \u03a3 \u22121 is even more informative. As the sample covariance estimator is singular in high-dimensions, it cannot be used to obtain a precision matrix estimator. A popular high-dimensional estimator is the graphical lasso, but it lacks robustness. We consider the high-dimensional independent contamination model. Here, even a small percentage of contaminated cells in the data matrix may lead to a high percentage of contaminated rows. Downweighting entire observations, which is done by traditional robust procedures, would then results in a loss of information. In this paper, we formally prove that replacing the sample covariance matrix in the graphical lasso with an elementwise robust covariance matrix leads to an elementwise robust, sparse precision matrix estimator computable in high-dimensions. Examples of such elementwise robust covariance estimators are given. The final precision matrix estimator is positive definite, has a high breakdown point under elementwise contamination and can be computed fast."}
{"_id": "5ed8121a16c7ce5b89e69b7de482bab950796b67", "title": "The Impact of Clinical Empathy on Patients and Clinicians : Understanding Empathy ' s Side Effects", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the \u201cContent\u201d) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content."}
{"_id": "78e2cf228287d7e995c6718338e3ec58dc7cca50", "title": "Platelet-Rich Plasma: Applications in Dermatology", "text": ""}
{"_id": "10a88fdb15b5cb5baf0f35d3e8726ffb829c8a87", "title": "Computer Forensics Field Triage Process Model", "text": "With the proliferation of digital based evidence, the need for the timely identification, analysis and interpretation of digital evidence is becoming more crucial. In many investigations critical information is required while at the scene or within a short period of time measured in hours as opposed to days. The traditional cyber forensics approach of seizing a system(s)/media, transporting it to the lab, making a forensic image(s), and then searching the entire system for potential evidence, is no longer appropriate in some circumstances. In cases such as child abductions, pedophiles, missing or exploited persons, time is of the essence. In these types of cases, investigators dealing with the suspect or crime scene need investigative leads quickly; in some cases it is the difference between life and death for the victim(s). The Cyber Forensic Field Triage Process Model (CFFTPM) proposes an onsite or field approach for providing the identification, analysis and interpretation of digital evidence in a short time frame, without the requirement of having to take the system(s)/media back to the lab for an in-depth examination or acquiring a complete forensic image(s). The proposed model adheres to commonly held forensic principles, and does not negate the ability that once the initial field triage is concluded, the system(s)/storage media be transported back to a lab environment for a more thorough examination and analysis. The CFFTPM has been successfully used in various real world cases, Journal of Digital Forensics, Security and Law, Vol. 1(2) 20 and its investigative importance and pragmatic approach has been amply demonstrated. Furthermore, the derived evidence from these cases has not been challenged in the court proceedings where it has been introduced. The current article describes the CFFTPM in detail, discusses the model\u2019s forensic soundness, investigative support capabilities and practical considerations."}
{"_id": "c22c3fe69473c83533b19d2dd5481df4edd9e9e8", "title": "Saliency-Guided Detection of Unknown Objects in RGB-D Indoor Scenes", "text": "This paper studies the problem of detecting unknown objects within indoor environments in an active and natural manner. The visual saliency scheme utilizing both color and depth cues is proposed to arouse the interests of the machine system for detecting unknown objects at salient positions in a 3D scene. The 3D points at the salient positions are selected as seed points for generating object hypotheses using the 3D shape. We perform multi-class labeling on a Markov random field (MRF) over the voxels of the 3D scene, combining cues from object hypotheses and 3D shape. The results from MRF are further refined by merging the labeled objects, which are spatially connected and have high correlation between color histograms. Quantitative and qualitative evaluations on two benchmark RGB-D datasets illustrate the advantages of the proposed method. The experiments of object detection and manipulation performed on a mobile manipulator validate its effectiveness and practicability in robotic applications."}
{"_id": "ca42861ce4941e5e3490acd900ebfbbbcd7f9065", "title": "Interactive Character Animation Using Simulated Physics: A State-of-the-Art Review", "text": "Physics simulation offers the possibility of truly responsive and realistic animation. Despite wide adoption of physics simulation for the animation of passive phenomena, such as fluids, cloths and rag-doll characters, commercial applications still resort to kinematics-based approaches for the animation of actively controlled characters. However, following a renewed interest in the use of physics simulation for interactive character animation, many recent publications demonstrate tremendous improvements in robustness, visual quality and usability. We present a structured review of over two decades of research on physics-based character animation, as well as point out various open research areas and possible future directions."}
{"_id": "73c8b5dfa616dd96aa5252633a0d5d11fc4186ee", "title": "Resilient packet delivery through software defined redundancy: An experimental study", "text": "Mission-critical applications that use computer networks require guaranteed communication with bounds on the time needed to deliver packets, even if hardware along a path fails or congestion occurs. Current networking technologies follow a reactive approach in which retransmission mechanisms and routing update protocols respond once packet loss occurs. We present a proactive approach that pre-builds and monitors redundant paths to ensure timely delivery with no recovery latency, even in the case of a single-link failure. The paper outlines the approach, describes a new mechanism we call a redundancy controller, and explains the use of SDN to establish and change paths. The paper also reports on a prototype system and gives experimental measurements."}
{"_id": "7674e4e66c60a4a31d0b68a07d4ea521cca8a84b", "title": "The FuzzyLog: A Partially Ordered Shared Log", "text": "The FuzzyLog is a partially ordered shared log abstraction. Distributed applications can concurrently append to the partial order and play it back. FuzzyLog applications obtain the benefits of an underlying shared log \u2013 extracting strong consistency, durability, and failure atomicity in simple ways \u2013 without suffering from its drawbacks. By exposing a partial order, the FuzzyLog enables three key capabilities for applications: linear scaling for throughput and capacity (without sacrificing atomicity), weaker consistency guarantees, and tolerance to network partitions. We present Dapple, a distributed implementation of the FuzzyLog abstraction that stores the partial order compactly and supports efficient appends / playback via a new ordering protocol. We implement several data structures and applications over the FuzzyLog, including several map variants as well as a ZooKeeper implementation. Our evaluation shows that these applications are compact, fast, and flexible: they retain the simplicity (100s of lines of code) and strong semantics (durability and failure atomicity) of a shared log design while exploiting the partial order of the FuzzyLog for linear scalability, flexible consistency guarantees (e.g., causal+ consistency), and network partition tolerance. On a 6-node Dapple deployment, our FuzzyLogbased ZooKeeper supports 3M/sec single-key writes, and 150K/sec atomic cross-shard renames."}
{"_id": "a5e1ba6191b8b53de48dd676267902364ff1b0d2", "title": "Predictors of Relationship Satisfaction in Online Romantic Relationships", "text": "Based on traditional theories of interpersonal relationship development and on the hyperpersonal communication theory, this study examined predictors of relationship satisfaction for individuals involved in online romantic relationships. One hundredfourteen individuals (N1\u20444114) involved in online romantic relationships, and who had only engaged in computer-mediated communication (CMC) with their partners, completed an online questionnaire about their relationships. Intimacy, trust, and communication satisfaction were found to be the strongest predictors of relationship satisfaction for individuals involved in online romances. Additionally, perceptions of relationship variables differed depending on relationship length and time spent communicating. Implications for interpersonal and hyperpersonal communication theories, and future investigation of online relationships, are discussed."}
{"_id": "782a7865a217e42b33f85734b54e84f50ea14088", "title": "Impact of Corpora Quality on Neural Machine Translation", "text": "Large parallel corpora that are automatically obtained from the web, documents or elsewhere often exhibit many corrupted parts that are bound to negatively affect the quality of the systems and models that learn from these corpora. This paper describes frequent problems found in data and such data affects neural machine translation systems, as well as how to identify and deal with them. The solutions are summarised in a set of scripts that remove problematic sentences from input corpora."}
{"_id": "60756d9b0d8bf637b5ecdc1d782932903d6490f2", "title": "Computed tomographic analysis of tooth-bearing alveolar bone for orthodontic miniscrew placement.", "text": "INTRODUCTION\nWhen monocortical orthodontic miniscrews are placed in interdental alveolar bone, the safe position of the miniscrew tip should be ensured. This study was designed to quantify the periradicular space in the tooth-bearing area to provide practical guidelines for miniscrew placement.\n\n\nMETHODS\nComputerized tomographs of 30 maxillae and mandibles were taken from nonorthodontic adults with normal occlusion. Both mesiodistal interradicular distance and bone thickness over the narrowest interradicular space (safety depth) were measured at 2, 4, 6, and 8 mm from the cementoenamel junction.\n\n\nRESULTS\nMesiodistal space greater than 3 mm was available at the 8-mm level in the maxillary anterior region, between the premolars, and between the second premolar and the first molar at 4 mm. In the mandible, sufficient mesiodistal space was found between the premolars, between the molars, and between the second premolar and the first molar at the 4-mm level. Safety depth greater than 4 mm was found in the maxillary and mandibular intermolar regions, and between the second premolar and the first molar in both arches.\n\n\nCONCLUSIONS\nSubapical placement is advocated in the anterior segment. Premolar areas appear reliable in both arches. Angulated placement in the intermolar area is suggested to use the sufficient safety depth in this area."}
{"_id": "2fd37fed17e07c4ec04caefe7dcbcb16670fa2d8", "title": "Block Chain Technologies & The Semantic Web : A Framework for Symbiotic Development", "text": "The concept of peer-to-peer applications is not new, nor is the concept of distributed hash tables. What emerged in 2008 with the publication of the Bitcoin white paper was an incentive structure that unified these two software paradigms with a set of economic stimuli to motivate the creation of a dedicated computing network orders of magnitude more powerful than the world\u2019s fastest supercomputers. The purpose of which is the maintenance of a massive distributed database known as the Bitcoin \u201cblock chain\u201d. Apart from the digital currency it enables, block chain technology is a fascinating new computing paradigm with broad implications for the future development of the World Wide Web, and by extension, the further growth of Linked Data and the Semantic Web. This work is divided into two main sections, we first demonstrate how block chain technologies can contribute toward the realization of a more robust Semantic Web, and subsequently we provide a framework wherein the Semantic Web is utilized to ameliorate block chain technology itself."}
{"_id": "3897090008776b124ff299f135fdbf0cc3aa06d4", "title": "A survey of remote optical photoplethysmographic imaging methods", "text": "In recent years researchers have presented a number of new methods for recovering physiological parameters using just low-cost digital cameras and image processing. The ubiquity of digital cameras presents the possibility for many new, low-cost applications of vital sign monitoring. In this paper we present a review of the work on remote photoplethysmographic (PPG) imaging using digital cameras. This review specifically focuses on the state-of-the-art in PPG imaging where: 1) measures beyond pulse rate are evaluated, 2) non-ideal conditions (e.g., the presence of motion artifacts) are explored, and 3) use cases in relevant environments are demonstrated. We discuss gaps within the literature and future challenges for the research community. To aid in the continuing advancement of PPG imaging research, we are making available a website with the references collected for this review as well as information on available code and datasets of interest. It is our hope that this website will become a valuable resource for the PPG imaging community. The site can be found at: http://web.mit.edu/~djmcduff/www/ remote-physiology.html."}
{"_id": "ec241c7b314b472ac0d58060f5bc09b791ba6017", "title": "Non-contact, automated cardiac pulse measurements using video imaging and blind source separation.", "text": "Remote measurements of the cardiac pulse can provide comfortable physiological assessment without electrodes. However, attempts so far are non-automated, susceptible to motion artifacts and typically expensive. In this paper, we introduce a new methodology that overcomes these problems. This novel approach can be applied to color video recordings of the human face and is based on automatic face tracking along with blind source separation of the color channels into independent components. Using Bland-Altman and correlation analysis, we compared the cardiac pulse rate extracted from videos recorded by a basic webcam to an FDA-approved finger blood volume pulse (BVP) sensor and achieved high accuracy and correlation even in the presence of movement artifacts. Furthermore, we applied this technique to perform heart rate measurements from three participants simultaneously. This is the first demonstration of a low-cost accurate video-based method for contact-free heart rate measurements that is automated, motion-tolerant and capable of performing concomitant measurements on more than one person at a time."}
{"_id": "38bcf0bd4f8c35ff54d292d37cbdca1da677f3f5", "title": "Mobile monitoring with wearable photoplethysmographic biosensors.", "text": "earable biosensors (WBS) will permit continuous cardiovascular (CV) monitoring in a number of novel settings. Benefits may be realized in the diagnosis and treatment of a number of major diseases. WBS, in conjunction with appropriate alarm algorithms, can increase surveillance capabilities for CV catastrophe for high-risk subjects. WBS could also play a role in the treatment of chronic diseases, by providing information that enables precise titration of therapy or detecting lapses in patient compliance. WBS could play an important role in the wireless surveillance of people during hazardous operations (military, fire-fighting, etc.), or such sensors could be dispensed during a mass civilian casualty occurrence. Given that CV physio-logic parameters make up the \" vital signs \" that are the most important information in emergency medical situations, WBS might enable a wireless monitoring system for large numbers of at-risk subjects. This same approach may also have utility in monitoring the waiting room of today's overcrowded emergency departments. For hospital inpatients who require CV monitoring, current biosensor technology typically tethers patients in a tangle of cables, whereas wearable CV sensors could increase inpatient comfort and may even reduce the risk of tripping and falling, a perennial problem for hospital patients who are ill, medicated, and in an unfamiliar setting. On a daily basis, wearable CV sensors could detect a missed dose of medication by sensing untreated elevated blood pressure and could trigger an automated reminder for the patient to take the medication. Moreover, it is important for doctors to titrate the treatment of high blood pressure, since both insufficient therapy as well as excessive therapy (leading to abnormally low blood pressures) increase mortality. However, healthcare providers have only intermittent values of blood pressure on which to base therapy decisions; it is possible that continuous blood pressure monitoring would permit enhanced titration of therapy and reductions in mortality. Similarly, WBS would be able to log the physiologic signature of a patient's exercise efforts (manifested as changes in heart rate and blood pressure), permitting the patient and healthcare provider to assess compliance with a regimen proven to improve health outcomes. For patients with chronic cardiovascular disease, such as heart failure, home monitoring employing WBS may detect exacerbations in very early (and often easily treated) stages, long before the patient progresses to more dangerous levels that necessitate an emergency room visit and costly hospital admission. In this article we will address both technical and clinical \u2026"}
{"_id": "72e08cf12730135c5ccd7234036e04536218b6c1", "title": "An extended set of Haar-like features for rapid object detection", "text": "Recently Viola et al. [5] have introduced a rapid object detection scheme based on a boosted cascade of simple features. In this paper we introduce a novel set of rotated haar-like features, which significantly enrich this basic set of simple haar-like features and which can also be calculated very efficiently. At a given hit rate our sample face detector shows off on average a 10% lower false alarm rate by means of using these additional rotated features. We also present a novel post optimization procedure for a given boosted cascade improving on average the false alarm rate further by 12.5%. Using both enhancements the number of false detections is only 24 at a hit rate of 82.3% on the CMU face set [7]."}
{"_id": "83d677d8b227c2917606145ef8f894cf7fbc5d9b", "title": "Recovering pulse rate during motion artifact with a multi-imager array for non-contact imaging photoplethysmography", "text": "Photoplethysmography relies on characteristic changes in the optical absorption of tissue due to pulsatile (arterial) blood flow in peripheral vasculature. Sensors for observing the photoplethysmographic effect have traditionally required contact with the skin surface. Recent advances in non-contact imaging photoplethysmography have demonstrated that measures of cardiopulmonary system state, such as pulse rate, pulse rate variability, and respiration rate, can be obtained from a participant by imaging their face under relatively motionless conditions. A critical limitation in this method that must be resolved is the inability to recover these measures under conditions of head motion artifact. To investigate the adequacy of channel space dimensionality for the use of blind source separation in this context, nine synchronized, visible spectrum imagers positioned in a semicircular array centered on the imaged participant were used for data acquisition in a controlled lighting environment. Three-lead electrocardiogram and finger-tip reflectance photoplethysmogram were also recorded as ground truth signals. Controlled head motion artifact trial conditions were compared to trials in which the participant remained stationary, with and without the aid of a chinrest. Bootstrapped means of one-minute, non-overlapping trial segments show that, for situations involving little to no head motion, a single imager is sufficient for recovering pulse rate with an average absolute error of less than two beats per minute. However, error in the recovered pulse rate measurement for the single imager can be as high as twenty-two beats per minute when head motion artifact is severe. This increase in measurement error during motion artifact was mitigated by increasing the dimensionality of the imager channel space with multiple imagers in the array prior to applying blind source separation. In contrast to single-imager results, the multi-imager channel space resulted in an absolute error in the recovered pulse rate measurement that is comparable with pulse rate measured via fingertip reflectance photoplethysmography. These results demonstrate that non-contact, imaging photoplethysmography can be accurate in the presence of head motion artifact when a multi-imager array is implemented to increase the dimensionality of the decomposed channel space."}
{"_id": "146b84bdd9b9078f40a2df9b7ded26416771f740", "title": "Inverse Risk-Sensitive Reinforcement Learning", "text": "We address the problem of inverse reinforcement learning in Markov decision processes where the agent is risk-sensitive. We derive a risk-sensitive reinforcement learning algorithm with convergence guarantees that employs convex risk metrics and models of human decisionmaking deriving from behavioral economics. The risk-sensitive reinforcement learning algorithm provides the theoretical underpinning for a gradient-based inverse reinforcement learning algorithm that minimizes a loss function defined on observed behavior of a risk-sensitive agent. We demonstrate the performance of the proposed technique on two examples: (i) the canonical Grid World example and (ii) a Markov decision process modeling ride-sharing passengers\u2019 decisions given price changes. In the latter, we use pricing and travel time data from a ride-sharing company to construct the transition probabilities and rewards of the Markov decision process."}
{"_id": "2ee822dbb7ef60cd79e17cd2deec68ac14f13522", "title": "Minimization of symbolic automata", "text": "Symbolic Automata extend classical automata by using symbolic alphabets instead of finite ones. Most of the classical automata algorithms rely on the alphabet being finite, and generalizing them to the symbolic setting is not a trivial task. In this paper we study the problem of minimizing symbolic automata. We formally define and prove the basic properties of minimality in the symbolic setting, and lift classical minimization algorithms (Huffman-Moore's and Hopcroft's algorithms) to symbolic automata. While Hopcroft's algorithm is the fastest known algorithm for DFA minimization, we show how, in the presence of symbolic alphabets, it can incur an exponential blowup. To address this issue, we introduce a new algorithm that fully benefits from the symbolic representation of the alphabet and does not suffer from the exponential blowup. We provide comprehensive performance evaluation of all the algorithms over large benchmarks and against existing state-of-the-art implementations. The experiments show how the new symbolic algorithm is faster than previous implementations."}
{"_id": "90886fef6288f8d9789feaff6c9a96403c324498", "title": "An Empirical Evaluation of True Online TD({\\lambda})", "text": "The true online TD(\u03bb) algorithm has recently been proposed (van Seijen and Sutton, 2014) as a universal replacement for the popular TD(\u03bb) algorithm, in temporal-difference learning and reinforcement learning. True online TD(\u03bb) has better theoretical properties than conventional TD(\u03bb), and the expectation is that it also results in faster learning. In this paper, we put this hypothesis to the test. Specifically, we compare the performance of true online TD(\u03bb) with that of TD(\u03bb) on challenging examples, random Markov reward processes, and a real-world myoelectric prosthetic arm. We use linear function approximation with tabular, binary, and non-binary features. We assess the algorithms along three dimensions: computational cost, learning speed, and ease of use. Our results confirm the strength of true online TD(\u03bb): 1) for sparse feature vectors, the computational overhead with respect to TD(\u03bb) is minimal; for non-sparse features the computation time is at most twice that of TD(\u03bb), 2) across all domains/representations the learning speed of true online TD(\u03bb) is often better, but never worse than that of TD(\u03bb), and 3) true online TD(\u03bb) is easier to use, because it does not require choosing between trace types, and it is generally more stable with respect to the step-size. Overall, our results suggest that true online TD(\u03bb) should be the first choice when looking for an efficient, general-purpose TD method."}
{"_id": "4ed0fa22b8cef58b181d9a425bdde2db289ba295", "title": "Cause related marketing campaigns and consumer purchase intentions : The mediating role of brand awareness and corporate image", "text": "The purpose of this research is to investigate the kind of relationship between Cause Related Marketing (CRM) campaigns, brand awareness and corporate image as possible antecedents of consumer purchase intentions in the less developed country of Pakistan. An initial conceptualization was developed from mainstream literature to be validated through empirical research. The conceptualization was then tested with primary quantitative survey data collected from 203 students studying in different universities of Rawalpindi and Islamabad. Correlation and regression analysis were used to test the key hypothesis derived from literature positing brand awareness and corporate image as mediating the relationship between CRM and consumer purchase intentions. The findings indicate that consumer purchase intentions are influenced by the cause related marketing campaigns. Furthermore it was observed that the brand awareness and corporate image partially mediate the impact of CRM campaigns on consumer purchase intentions. The data was gathered from universities situated in Rawalpindi and Islamabad only. Hence, future research could extend these findings to other cities in Pakistan to test their generalizability. Further research can be carried out through data collection from those people who actually participated in cause related marketing campaigns to identify the original behavior of customers instead of their purchase intentions. This research and the claims made are limited to the FMCG industry. The key implications cause related marketing of these findings for marketing managers lend support for the use of campaigns in Pakistan. The findings also suggest some measures which can be taken in to consideration in order to enhance brand awareness and to improve corporate image as both variables mediate the impact of CRM campaigns on consumer purchase intentions. The study contributes to cause related marketing literature by indicating a mediating role of brand awareness and corporate image on CRM campaigns and consumer purchase intentions. This mediating role was ignored in previous studies. Moreover, it contributes to close the gap of empirical research in this field, which exists particularly due to the diverse attitude of customers in less developed countries such as Pakistan."}
{"_id": "b9f13239b323e399c9746597196abec4c0fb7942", "title": "Loosecut: Interactive image segmentation with loosely bounded boxes", "text": "One popular approach to interactively segment an object of interest from an image is to annotate a bounding box that covers the object, followed by a binary labeling. However, the existing algorithms for such interactive image segmentation prefer a bounding box that tightly encloses the object. This increases the annotation burden, and prevents these algorithms from utilizing automatically detected bounding boxes. In this paper, we develop a new LooseCut algorithm that can handle cases where the bounding box only loosely covers the object. We propose a new Markov Random Fields (MRF) model for segmentation with loosely bounded boxes, including an additional energy term to encourage consistent labeling of similar-appearance pixels and a global similarity constraint to better distinguish the foreground and background. This MRF model is then solved by an iterated max-flow algorithm. We evaluate LooseCut in three public image datasets, and show its better performance against several state-of-the-art methods when increasing the bounding-box size."}
{"_id": "2990cf242558ede739d6a26a2f8b098f94390323", "title": "Morphological Smoothing and Extrapolation of Word Embeddings", "text": "Languages with rich inflectional morphology exhibit lexical data sparsity, since the word used to express a given concept will vary with the syntactic context. For instance, each count noun in Czech has 12 forms (where English uses only singular and plural). Even in large corpora, we are unlikely to observe all inflections of a given lemma. This reduces the vocabulary coverage of methods that induce continuous representations for words from distributional corpus information. We solve this problem by exploiting existing morphological resources that can enumerate a word\u2019s component morphemes. We present a latentvariable Gaussian graphical model that allows us to extrapolate continuous representations for words not observed in the training corpus, as well as smoothing the representations provided for the observed words. The latent variables represent embeddings of morphemes, which combine to create embeddings of words. Over several languages and training sizes, our model improves the embeddings for words, when evaluated on an analogy task, skip-gram predictive accuracy, and word similarity."}
{"_id": "3f748911b0bad210d7b9c4598d158a9b15d4ef5f", "title": "K-Automorphism: A General Framework For Privacy Preserving Network Publication", "text": "The growing popularity of social networks has generated interesting data management and data mining problems. An important concern in the release of these data for study is their privacy, since social networks usually contain personal information. Simply removing all identifiable personal information (such as names and social security number) before releasing the data is insufficient. It is easy for an attacker to identify the target by performing different structural queries. In this paper we propose k-automorphism to protect against multiple structural attacks and develop an algorithm (called KM) that ensures k-automorphism. We also discuss an extension of KM to handle \u201cdynamic\u201d releases of the data. Extensive experiments show that the algorithm performs well in terms of protection it provides."}
{"_id": "12df6611d9fff192fa09e1da60310d7485190c1c", "title": "Making Smart Contracts Smarter", "text": "Cryptocurrencies record transactions in a decentralized data structure called a blockchain. Two of the most popular cryptocurrencies, Bitcoin and Ethereum, support the feature to encode rules or scripts for processing transactions. This feature has evolved to give practical shape to the ideas of smart contracts, or full-fledged programs that are run on blockchains. Recently, Ethereum's smart contract system has seen steady adoption, supporting tens of thousands of contracts, holding millions dollars worth of virtual coins.\n In this paper, we investigate the security of running smart contracts based on Ethereum in an open distributed network like those of cryptocurrencies. We introduce several new security problems in which an adversary can manipulate smart contract execution to gain profit. These bugs suggest subtle gaps in the understanding of the distributed semantics of the underlying platform. As a refinement, we propose ways to enhance the operational semantics of Ethereum to make contracts less vulnerable. For developers writing contracts for the existing Ethereum system, we build a symbolic execution tool called Oyente to find potential security bugs. Among 19, 336 existing Ethereum contracts, Oyente flags 8, 833 of them as vulnerable, including the TheDAO bug which led to a 60 million US dollar loss in June 2016. We also discuss the severity of other attacks for several case studies which have source code available and confirm the attacks (which target only our accounts) in the main Ethereum network."}
{"_id": "74f1b2501daf6b091d017a3ba3110a7db0ab50db", "title": "Enumerating the Non-Isomorphic Assembly Configurations of Modular Robotic Systems", "text": "A \\modular\" robotic system consists of joint and link modules that can be assembled in a variety of conngurations to meet diierent or changing task requirements. However, due to typical symmetries in module design, diierent assembly conngurations may lead to robotic structures which are kinematically identical, or isomorphic. This paper considers how to enumerate the non-isomorphic assembly conngurations of a modular robotic system. We introduce an Assembly Incidence Matrix (AIM) to represent a modular robot assembly con-guration. Then we use symmetries of the module geometry and graph isomorphisms to deene an equivalence relation on the AIMs. Equivalent AIMs represent isomorphic robot assembly conngurations. Based on this equivalence relation, we propose an algorithm to generate non-isomorphic assembly conngurations of an n-link tree-like robot with diierent joint and link module types. Examples demonstrate that this method is a signiicant improvement over a brute force enumeration process."}
{"_id": "edfd28fc766101c734fc534d65c569e8f9bdb9ac", "title": "Locally Normalized Filter Banks Applied to Deep Neural-Network-Based Robust Speech Recognition", "text": "This letter describes modifications to locally normalized filter banks (LNFB), which substantially improve their performance on the Aurora-4 robust speech recognition task using a Deep Neural Network-Hidden Markov Model (DNN-HMM)-based speech recognition system. The modified coefficients, referred to as LNFB features, are a filter-bank version of locally normalized cepstral coefficients (LNCC), which have been described previously. The ability of the LNFB features is enhanced through the use of newly proposed dynamic versions of them, which are developed using an approach that differs somewhat from the traditional development of delta and delta\u2013delta features. Further enhancements are obtained through the use of mean normalization and mean\u2013variance normalization, which is evaluated both on a per-speaker and a per-utterance basis. The best performing feature combination (typically LNFB combined with LNFB delta and delta\u2013delta features and mean\u2013variance normalization) provides an average relative reduction in word error rate of 11.4% and 9.4%, respectively, compared to comparable features derived from Mel filter banks when clean and multinoise training are used for the Aurora-4 evaluation. The results presented here suggest that the proposed technique is more robust to channel mismatches between training and testing data than MFCC-derived features and is more effective in dealing with channel diversity."}
{"_id": "4eebe0d12aefeedf3ca85256bc8aa3b4292d47d9", "title": "DeepAR: Probabilistic Forecasting with Autoregressive Recurrent Networks", "text": "Probabilistic forecasting, i.e. estimating the probability distribution of a time series\u2019 future given its past, is a key enabler for optimizing business processes. In retail businesses, for example, forecasting demand is crucial for having the right inventory available at the right time at the right place. In this paper we propose DeepAR, a methodology for producing accurate probabilistic forecasts, based on training an auto-regressive recurrent network model on a large number of related time series. We demonstrate how by applying deep learning techniques to forecasting, one can overcome many of the challenges faced by widely-used classical approaches to the problem. We show through extensive empirical evaluation on several real-world forecasting data sets that our methodology produces more accurate forecasts than other state-of-the-art methods, while requiring minimal manual work."}
{"_id": "5581f2e0e08ccd5280637eb79865939b6eafb1f1", "title": "IMPACTS DUE TO INTERNET ADDICTION AMONG MALAYSIAN UNIVERSITY STUDENTS Azizah Zainudin", "text": "The purpose of this study is to study the impacts due to Internet addiction among Malaysian university students. Research methodology used in this study was by distributing survey questions to 653 university students from five different universities in Malaysia. There were four possible impacts measured in this research study which include Academic Performances, Relationships, Personality and Lifestyle. The finding shows that, Internet addiction cause problems with respondents\u2019 academic performances, having bad personality and practicing an unhealthy lifestyle. There were significantly differences in academic performances, personality and lifestyle between \u201cAverage user\u201d and \u201cExcessive user\u201d. The above matter will be further discussed throughout this"}
{"_id": "eef7d32eca9a363c31151d2ac4873f98a93f02d2", "title": "Micropower energy harvesting", "text": "More than a decade of research in the field of thermal, motion, vibration and electromagnetic radiation energy harvesting has yielded increasing power output and smaller embodiments. Power management circuits for rectification and DC\u2013DC conversion are becoming able to efficiently convert the power from these energy harvesters. This paper summarizes recent energy harvesting results and their power man-"}
{"_id": "1cbeb9f98bf84e35d53cae946a4cd7a87b7fda69", "title": "The Role of Asymmetric Dimethylarginine (ADMA) in Endothelial Dysfunction and Cardiovascular Disease", "text": "Endothelium plays a crucial role in the maintenance of vascular tone and structure. Endothelial dysfunction is known to precede overt coronary artery disease. A number of cardiovascular risk factors, as well as metabolic diseases and systemic or local inflammation cause endothelial dysfunction. Nitric oxide (NO) is one of the major endothelium derived vaso-active substances whose role is of prime importance in maintaining endothelial homeostasis. Low levels of NO are associated with impaired endothelial function. Asymmetric dimethylarginine (ADMA), an analogue of L-arginine, is a naturally occurring product of metabolism found in human circulation. Elevated levels of ADMA inhibit NO synthesis and therefore impair endothelial function and thus promote atherosclerosis. ADMA levels are increased in people with hypercholesterolemia, atherosclerosis, hypertension, chronic heart failure, diabetes mellitus and chronic renal failure. A number of studies have reported ADMA as a novel risk marker of cardiovascular disease. Increased levels of ADMA have been shown to be the strongest risk predictor, beyond traditional risk factors, of cardiovascular events and all-cause and cardiovascular mortality in people with coronary artery disease. Interventions such as treatment with L-arginine have been shown to improve endothelium-mediated vasodilatation in people with high ADMA levels. However the clinical utility of modifying circulating ADMA levels remains uncertain."}
{"_id": "a03d7b9ed965d7ad97027e4cd141fec3ad072c64", "title": "Sindice.com: a document-oriented lookup index for open linked data", "text": "Data discovery on the SemanticWeb requires crawling and indexing of statements, in addition to the \u2018linked-data\u2019 approach of de-referencing resourceURIs. Existing SemanticWeb search engines are focused on database-like functionality, compromising on index size, query performance and live updates.Wepresent Sindice, a lookup indexover SemanticWeb resources. Our index allows applications to automatically locate documents containing information about a given resource. In addition, we allow resource retrieval through inverse-functional properties, offer a full-text search and index SPARQL endpoints. Finally, we extend the sitemap protocol to efficiently index large datasets with minimal impact on data providers."}
{"_id": "5227edcd3f91f8d4f853b7261ad9c971194fd490", "title": "Evaluating radial, axial and transverse flux topologies for 'in-wheel' motor", "text": "Three different motor topologies, (1) radial, (2) axial and (3) transverse flux, are investigated for 'in-wheel' electric propulsion system. The design constraints are: 350mm/spl times/100mm package limitation, 48 V DC bus restriction and per phase inverter current limited to 500 A/sup pk/. In addition, magnet pole pairs are fixed to 9 due to limitation in 'position sensor' capacity. The paper summarizes results obtained from analytical design followed by 2D and 3D static and transient finite element analysis accompanied with experimental verification for the radial flux topology."}
{"_id": "dc2c9af5b05e832f32d4a0f5c48e5b08fdd31429", "title": "Essential features of serious games design in higher education: Linking learning attributes to game mechanics", "text": "This paper consolidates evidence and material from a range of specialist and disciplinary fields to provide an evidence-based review and synthesis on the design and use of serious games in higher education. Search terms identified 165 papers reporting conceptual and empirical evidence on how learning attributes and game mechanics may be planned, designed, and implemented by university teachers interested in using games, which are integrated into lesson plans and orchestrated as part of a learning sequence at any scale. The findings outline the potential of classifying the links between learning attributes and game mechanics as a means to scaffold teachers\u2019 understanding of how to perpetuate learning in optimal ways whilst enhancing the in-game learning experience. The findings of this paper provide a foundation for describing methods, frames, and discourse around experiences of design and use of serious games, linked to methodological limitations and recommendations for further research in this area. Practitioner Notes What is already known about this topic \u2022 Serious game design is a relatively new discipline that couples learning design with game features. A key characteristic of this approach is grounded in educational need and theory, rather than a focus purely on entertainment. Under this approach, a common method for designing serious games involves the creation of learning strategies, content and principles for the purpose of enhancing the student\u2019s learning experience. \u2022 There are no pedagogically driven strategies that take into account how learning attributes are interlinked to game elements for balancing learning with play. This is due to a limited evidence base of comparative evaluations assessing differing game designs against a single pedagogical model, or vice versa. This often leads practitioners seeking to introduce games into classroom with difficulties identifying what defines a game or how design should be enacted in a way that encompasses particular learning interventions. \u2022 Qualitative research methodologies have not been utilised as often for discerning how people experience, understand and use serious games for teaching and learning in higher education. What this paper adds \u2022 This paper presents the foundation of a taxonomy linking learning and game attributes along with teacher roles, aiming to encourage the cross-fertilisation and further integration of evidence on serious game design. The initial findings provide insight for practitioners on elements common to games and serious games, and how these link to particular learning design strategies that may afford pedagogically-rich games. \u2022 It informs development in the design and use of serious games to support teaching and learning in higher education. Given the key role of practitioners in the development of a wide range of serious games, due to the iterative and participatory methods frequently adopted, it offers insight into game elements and mechanics, allowing practitioners to easily relate to how learning activities, outcomes, feedback and roles may vary and visualised in relation to what needs to be learned out of playing the game. \u2022 The paper also identifies gaps in the evidence base and avenues for future research. 
Through this a practitioner can gain insight into the current unknowns in the area, and relate their experience of introducing games in the classroom to the current evidence base. Implications for practice and/or policy \u2022 The taxonomy informs serious game/instructional designers, game developers, academics and students about how learning elements (e.g. learning activities, learning outcomes, feedback and assessment) can be represented in games. We advocate dialogue between teachers and serious game designers, for improving the process of amalgamating learning with fun; and whilst the idea of using digital games is relatively new, teachers have a wealth of experience introducing learning features in games to be used into their classroom activities which frequently set out undocumented. \u2022 It aids the delineation of game attributes and associated game categories for characterising games based on primary purpose and use. Planning and designing teaching and learning in a game and teacher\u2019s associated role in guiding the in-game learning process is a consistent challenge and this paper seeks to provide a foundation by which this process can be undertaken in an informed and conscious manner. \u2022 The findings of the review act as the point of departure for creating a research agenda for understanding disjunctions between espoused and enacted personal theories of using serious games through qualitative methodologies. Introduction Serious Games (SGs) design is a relatively new discipline that couples learning design with game mechanics and logic (de Freitas, 2006; Hainley 2006; Westera et al., 2008). Design for SGs involves the creation of learning activities that may use the whole game or entail a gaming element (e.g. leader boards, virtual currencies, in-game hints) for the purpose of transforming the student\u2019s learning experience. Arguments against SGs have centred upon lack of empirical evidence in support of their efficacy; and the fact that appropriate research methodologies have not been enacted as yet for discerning how people understand and use SGs for teaching and learning in higher education (Mayer 2012; Connolly 2012). However, there are studies in the UK and US respectively that have demonstrated positive results in large sample groups (see for example Dunwell et al., 2014; Hertzog et al., 2014; Kato et al., 2008). Research evidence stresses the lack of commonly accepted pedagogically-driven strategies to afford game mechanisms and suggest that an inclusive model that takes into account pedagogy and teaching strategy, aligned to game activity and assessment is necessary for balancing play features with pedagogical aspects. This argument stems from an important observation that learning is a constructive process, which encompasses aspects of collaborative learning in which knowledge creation emerges through discussion and negotiation between individuals and groups. With these perspectives, this paper draws together evidence and material from a range of specialist and disciplinary fields to offer a critical review and synthesis of the design and use of SGs in higher"}
{"_id": "c7cd25ee91ee78be8617fbf24ba5535f6b458483", "title": "Removing Shadows from Images", "text": "We attempt to recover a 2D chromaticity intrinsic variation of a single RGB image which is independent of lighting, without requiring any prior knowledge about the camera. The proposed algorithm aims to learn an invariant direction for projecting from 2D color space to the greyscale intrinsic image. We notice that along this direction, the entropy in the derived invariant image is minimized. The experiments conducted on various inputs indicate that this method achieves intrinsic 2D chromaticity images which are free of shadows. In addition, we examined the idea to utilize projection pursuit instead of entropy minimization to find the desired direction."}
{"_id": "e3dd580388948bed7c6307d12b9018fe49db33ae", "title": "Kwyjibo: automatic domain name generation", "text": "Automatically generating \u2018good\u2019 domain names that are random yet pronounceable is a problem harder than it first appears. The problem is related to random word generation, and we survey and categorize existing techniques before presenting our own syllable-based algorithm that produces higher-quality results. Our results are also applicable elsewhere, in areas such as password generation, username generation, and even computer-generated poetry. Copyright \u00a9 2008 John Wiley & Sons, Ltd."}
{"_id": "db976381ba0bcf53cd4bd359d1dffe39d259aa2e", "title": "Multi-session PLDA scoring of i-vector for partially open-set speaker detection", "text": "This paper advocates the use of probabilistic linear discriminant analysis (PLDA) for partially open-set detection task with multiple i-vectors enrollment condition. Also referred to as speaker verification, the speaker detection task has always been considered under an open-set scenario. In this paper, a more general partially open-set speaker detection problem in considered, where the imposters might be one of the known speakers previously enrolled to the system. We show how this could be coped with by modifying the definition of the alternative hypothesis in the PLDA scoring function. We also look into the impact of the conditionalindependent assumption as it was used to derive the PLDA scoring function with multiple training i-vectors. Experiments were conducted using the NIST 2012 Speaker Recognition Evaluation (SRE\u201912) datasets to validate various points discussed in the paper."}
{"_id": "ba0200a5e8d0217c123207956b9e19810920e406", "title": "The effects of prosocial video games on prosocial behaviors: international evidence from correlational, longitudinal, and experimental studies.", "text": "Although dozens of studies have documented a relationship between violent video games and aggressive behaviors, very little attention has been paid to potential effects of prosocial games. Theoretically, games in which game characters help and support each other in nonviolent ways should increase both short-term and long-term prosocial behaviors. We report three studies conducted in three countries with three age groups to test this hypothesis. In the correlational study, Singaporean middle-school students who played more prosocial games behaved more prosocially. In the two longitudinal samples of Japanese children and adolescents, prosocial game play predicted later increases in prosocial behavior. In the experimental study, U.S. undergraduates randomly assigned to play prosocial games behaved more prosocially toward another student. These similar results across different methodologies, ages, and cultures provide robust evidence of a prosocial game content effect, and they provide support for the General Learning Model."}
{"_id": "954155ba57f34af0db5477aed4a4fa5f0ccf4933", "title": "Oral squamous cell carcinoma overview.", "text": "Most cancer in the head and neck is squamous cell carcinoma (HNSCC) and the majority is oral squamous cell carcinoma (OSCC). This is an overview of oral squamous cell carcinoma (OSCC) highlighting essential points from the contributors to this issue, to set the scene. It emphasises the crucial importance of prevention and the necessarily multidisciplinary approach to the diagnosis and management of patients with OSCC."}
{"_id": "6ec8febd00f973a1900f951099feb0b5ec60e9af", "title": "Hybrid crafting: towards an integrated practice of crafting with physical and digital components", "text": "With current digital technologies, people have large archives of digital media, such as images and audio files, but there are only limited means to include these media in creative practices of crafting and making. Nevertheless, studies have shown that crafting with digital media often makes these media more cherished and that people enjoy being creative with their digital media. This paper aims to open up the way for novel means for crafting, which include digital media in integrations with physical construction, here called \u2018hybrid crafting\u2019. Notions of hybrid crafting were explored to inform the design of products or systems that may support these new crafting practices. We designed \u2018Materialise\u2019\u2014a building set that allows for the inclusion of digital images and audio files in physical constructions by using tangible building blocks that can display images or play audio files, alongside a variety of other physical components\u2014and used this set in four hands-on creative workshops to gain insight into how people go about doing hybrid crafting; whether hybrid crafting is desirable; what the characteristics of hybrid crafting are; and how we may design to support these practices. By reflecting on the findings from these workshops, we provide concrete guidelines for the design of novel hybrid crafting products or systems that address craft context, process and result. We aim to open up the design space to designing for hybrid crafting because these new practices provide interesting new challenges and opportunities for future crafting that can lead to novel forms of creative expression."}
{"_id": "8fc1b05e40fab886bde1645208999c824806b479", "title": "MOA: Massive Online Analysis", "text": "MassiveOnline Analysis (MOA) is a software environment for implementing al orithms and running experiments for online learning from evolving data str eams. MOA includes a collection of offline and online methods as well as tools for evaluation. In particular, it implements boosting, bagging, and Hoeffding Trees, all with and without Na \u0131\u0308ve Bayes classifiers at the leaves. MOA supports bi-directional interaction with WEKA, the Waikato Environment for Knowledge Analysis, and is released under the GNU GPL license."}
{"_id": "0af89f7184163337558ba3617101aeec5c7f7169", "title": "Deep Learning with Limited Numerical Precision", "text": "Training of large-scale deep neural networks is often constrained by the available computational resources. We study the effect of limited precision data representation and computation on neural network training. Within the context of low-precision fixed-point computations, we observe the rounding scheme to play a crucial role in determining the network\u2019s behavior during training. Our results show that deep networks can be trained using only 16-bit wide fixed-point number representation when using stochastic rounding, and incur little to no degradation in the classification accuracy. We also demonstrate an energy-efficient hardware accelerator that implements low-precision fixed-point arithmetic with stochastic rounding."}
{"_id": "1d827e24143e5fdfe709d33b7b13a9a24d402efd", "title": "Learning Deep Features for Scene Recognition using Places Database", "text": "[1] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. Computer Vision and Image Understanding, 2007. [2] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. 2007. [3] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In Proc. CVPR, 2006. [4] L.-J. Li and L. Fei-Fei. What, where and who? classifying events by scene and object recognition. In Proc. ICCV, 2007. [5] G. Patterson and J. Hays. Sun attribute database: Discovering, annotating, and recognizing scene attributes. In Proc. CVPR, 2012."}
{"_id": "233b1774f28c9972df2dfcf20dfbb0df45792bd0", "title": "A 240 G-ops/s Mobile Coprocessor for Deep Neural Networks", "text": "Deep networks are state-of-the-art models used for understanding the content of images, videos, audio and raw input data. Current computing systems are not able to run deep network models in real-time with low power consumption. In this paper we present nn-X: a scalable, low-power coprocessor for enabling real-time execution of deep neural networks. nn-X is implemented on programmable logic devices and comprises an array of configurable processing elements called collections. These collections perform the most common operations in deep networks: convolution, subsampling and non-linear functions. The nn-X system includes 4 high-speed direct memory access interfaces to DDR3 memory and two ARM Cortex-A9 processors. Each port is capable of a sustained throughput of 950 MB/s in full duplex. nn-X is able to achieve a peak performance of 227 G-ops/s, a measured performance in deep learning applications of up to 200 G-ops/s while consuming less than 4 watts of power. This translates to a performance per power improvement of 10 to 100 times that of conventional mobile and desktop processors."}
{"_id": "2ffc74bec88d8762a613256589891ff323123e99", "title": "Optimizing FPGA-based Accelerator Design for Deep Convolutional Neural Networks", "text": "Convolutional neural network (CNN) has been widely employed for image recognition because it can achieve high accuracy by emulating behavior of optic nerves in living creatures. Recently, rapid growth of modern applications based on deep learning algorithms has further improved research and implementations. Especially, various accelerators for deep CNN have been proposed based on FPGA platform because it has advantages of high performance, reconfigurability, and fast development round, etc. Although current FPGA accelerators have demonstrated better performance over generic processors, the accelerator design space has not been well exploited. One critical problem is that the computation throughput may not well match the memory bandwidth provided an FPGA platform. Consequently, existing approaches cannot achieve best performance due to under-utilization of either logic resource or memory bandwidth. At the same time, the increasing complexity and scalability of deep learning applications aggravate this problem. In order to overcome this problem, we propose an analytical design scheme using the roofline model. For any solution of a CNN design, we quantitatively analyze its computing throughput and required memory bandwidth using various optimization techniques, such as loop tiling and transformation. Then, with the help of rooine model, we can identify the solution with best performance and lowest FPGA resource requirement. As a case study, we implement a CNN accelerator on a VC707 FPGA board and compare it to previous approaches. Our implementation achieves a peak performance of 61.62 GFLOPS under 100MHz working frequency, which outperform previous approaches significantly."}
{"_id": "a83d62bb714a02dc5ba9a6f1aa7dd2fe16c29fb7", "title": "Topology Derivation of Nonisolated Three-Port DC\u2013DC Converters From DIC and DOC", "text": "A systematic approach is proposed for the derivation of nonisolated three-port converter (TPC) topologies based on dual-input converters (DIC) and dual-output converters (DOC), which serves as an interface for a renewable source, a storage battery, and a load simultaneously. The power flow in a TPC is analyzed and compared with that in a DIC or a DOC, respectively. Beginning with building the power flow paths of a TPC from a DIC or a DOC, the general principles and detailed procedures for the generation and optimization of TPC topologies are presented. Based on these works, a family of nonisolated TPC topologies is developed. The derived TPCs feature single-stage power conversion between any two of the three ports, and result in high integration and high efficiency. One of the proposed TPCs, named Boost-TPC, is taken as an example for verifying the performance of its circuit topology with related operational methods. Pulsewidth modulation and power management methods, used in this circuit, are analyzed in detail. Experiments have been carried out on a 1-kW prototype of the Boost-TPC, which demonstrate the feasibility and effectiveness of the proposed topology derivation method."}
{"_id": "c00b3b26acf8fe8b362e75eac91349c37a1bd29b", "title": "A framework for cognitive Internet of Things based on blockchain", "text": "Internet of Things, cognitive systems, and blockchain technology are three fields which have created numerous revolutions in software development. It seems that a combination among these fields may results in emerging a high potential and interesting field. Therefore, in this paper, we propose a framework for Internet of Things based on cognitive systems and blockchain technology. To the best of our knowledge, there is no framework for Internet of Things based on cognitive systems and blockchain. In order to study the applicability of the proposed framework, a recommender system based on the proposed framework is suggested. Since the proposed framework is novel, the suggested recommender system is novel. The suggested recommender system is compared with the existing recommender systems. The results show that the suggested recommender system has several benefits which are not available in the existing recommender systems."}
{"_id": "38bf5e4f8dc147b9b5a2a9a93c60a1e99b3fe3c0", "title": "Management patterns: SDN-enabled network resilience management", "text": "Software-defined networking provides abstractions and a flexible architecture for the easy configuration of network devices, based on the decoupling of the data and control planes. This separation has the potential to considerably simplify the implementation of resilience functionality (e.g., traffic classification, anomaly detection, traffic shaping) in future networks. Although software-defined networking in general, and OpenFlow as its primary realisation, provide such abstractions, support is still needed for orchestrating a collection of OpenFlow-enabled services that must cooperate to implement network-wide resilience. In this paper, we describe a resilience management framework that can be readily applied to this problem. An important part of the framework are policy-controlled management patterns that describe how to orchestrate individual resilience services, implemented as OpenFlow applications."}
{"_id": "89a825fa5f41ac97cbf69c2a72230b1ea695d1e0", "title": "Gated convolutional networks based hybrid acoustic models for low resource speech recognition", "text": "In acoustic modeling for large vocabulary speech recognition, recurrent neural networks (RNN) have shown great abilities to model temporal dependencies. However, the performance of RNN is not prominent in resource limited tasks, even worse than the traditional feedforward neural networks (FNN). Furthermore, training time for RNN is much more than that for FNN. In recent years, some novel models are provided. They use non-recurrent architectures to model long term dependencies. In these architectures, they show that using gate mechanism is an effective method to construct acoustic models. On the other hand, it has been proved that using convolution operation is a good method to learn acoustic features. We hope to take advantages of both these two methods. In this paper we present a gated convolutional approach to low resource speech recognition tasks. The gated convolutional networks use convolutional architectures to learn input features and a gate to control information. Experiments are conducted on the OpenKWS, a series of low resource keyword search evaluations. From the results, the gated convolutional networks relatively decrease the WER about 6% over the baseline LSTM models, 5% over the DNN models and 3% over the BLSTM models. In addition, the new models accelerate the learning speed by more than 1.8 and 3.2 times compared to that of the baseline LSTM and BLSTM models."}
{"_id": "b43a7c665e21e393123233374f4213a811fe7dfe", "title": "Deep Texture Features for Robust Face Spoofing Detection", "text": "Biometric systems are quite common in our everyday life. Despite the higher difficulty to circumvent them, nowadays criminals are developing techniques to accurately simulate physical, physiological, and behavioral traits of valid users, process known as spoofing attack. In this context, robust countermeasure methods must be developed and integrated with the traditional biometric applications in order to prevent such frauds. Despite face being a promising trait due to its convenience and acceptability, face recognition systems can be easily fooled with common printed photographs. Most of state-of-the-art antispoofing techniques for face recognition applications extract handcrafted texture features from images, mainly based on the efficient local binary patterns (LBP) descriptor, to characterize them. However, recent results indicate that high-level (deep) features are more robust for such complex tasks. In this brief, a novel approach for face spoofing detection that extracts deep texture features from images by integrating the LBP descriptor to a modified convolutional neural network is proposed. Experiments on the NUAA spoofing database indicate that such deep neural network (called LBPnet) and an extended version of it (n-LBPnet) outperform other state-of-the-art techniques, presenting great results in terms of attack detection."}
{"_id": "762f5a712f4d6994ead089fcc0c5db98479a2008", "title": "Performance Evaluation of Concurrent Lock-free Data Structures on GPUs", "text": "Graphics processing units (GPUs) have emerged as a strong candidate for high-performance computing. While regular data-parallel computations with little or no synchronization are easy to map on the GPU architectures, it is a challenge to scale up computations on dynamically changing pointer-linked data structures. The traditional lock-based implementations are known to offer poor scalability due to high lock contention in the presence of thousands of active threads, which is common in GPU architectures. In this paper, we present a performance evaluation of concurrent lock-free implementations of four popular data structures on GPUs. We implement a set using lock-free linked list, hash table, skip list, and priority queue. On the first three data structures, we evaluate the performance of different mixes of addition, deletion, and search operations. The priority queue is designed to support retrieval and deletion of the minimum element and addition operations to the set. We evaluate the performance of these lock-free data structures on a Tesla C2070 Fermi GPU and compare it with the performance of multi-threaded lock-free implementations for CPU running on a 24-core Intel Xeon server. The linked list, hash table, skip list, and priority queue implementations achieve speedup of up to 7.4, 11.3, 30.7, and 30.8, respectively on the GPU compared to the Xeon server."}
{"_id": "26e207eb124fb0a939f821e1efa24e7c6b990844", "title": "Images of Success and the Preference for Luxury", "text": "This research examines the impact of media depictions of success (or failure) on consumers\u2019 desire for luxury brands. In a pilot study and three additional studies, we demonstrate that reading a story about a similar/successful other, such as a business major from the same university, increases consumers\u2019 expectations about their own future wealth, which in turn increases their desire for luxury brands. However, reading about a dissimilar successful other, such as a biology major, lowers consumers\u2019preferences for luxury brands. Furthermore, we examine the role of ease of imagining oneself in the narrative as a mediator of the relation between direction of comparison, similarity, and brand preference."}
{"_id": "defbb82519f3500ec6488dfd4991c68868475afa", "title": "A new weight initialization method for sigmoidal feedforward artificial neural networks", "text": "Initial weight choice has been recognized to be an important aspect of the training methodology for sigmoidal feedforward neural networks. In this paper, a new mechanism for weight initialization is proposed. The mechanism distributes the initial input to output weights in a manner that all weights (including thresholds) leading into a hidden layer are uniformly distributed in a region and the center of the region from which the weights are sampled are such that no region overlaps for two distinct hidden nodes. The proposed method is compared against random weight initialization routines on five function approximation tasks using the Resilient Backpropagation (RPROP) algorithm for training. The proposed method is shown to lead to about twice as fast convergence to a pre-specifled goal for training as compared to any of the random weight initialization methods. Moreover, it is shown that at least for these problems the networks reach a deeper minima of the error functional during training and generalizes better than the networks trained whose weights were initialized by random weight initialization methods."}
{"_id": "3e892a9a23afd4c28c60bbf8a6e5201bf62c1dcd", "title": "Combining qualitative and quantitative sampling, data collection, and analysis techniques in mixed-method studies.", "text": "Researchers have increasingly turned to mixed-method techniques to expand the scope and improve the analytic power of their studies. Yet there is still relatively little direction on and much confusion about how to combine qualitative and quantitative techniques. These techniques are neither paradigm- nor method-linked; researchers' orientations to inquiry and their methodological commitments will influence how they use them. Examples of sampling combinations include criterion sampling from instrument scores, random purposeful sampling, and stratified purposeful sampling. Examples of data collection combinations include the use of instruments for fuller qualitative description, for validation, as guides for purposeful sampling, and as elicitation devices in interviews. Examples of data analysis combinations include interpretively linking qualitative and quantitative data sets and the transformation processes of qualitizing and quantitizing."}
{"_id": "12408a6e684f51adf2b19071233afa37a378dee4", "title": "Artistic reality: fast brush stroke stylization for augmented reality", "text": "The goal of augmented reality is to provide the user with a view of the surroundings enriched by virtual objects. Practically all augmented reality systems rely on standard real-time rendering methods for displaying graphical objects. Although such conventional computer graphics algorithms are fast, they often fail to produce sufficiently realistic renderings. Therefore, virtual models can easily be distinguished from the real environment. We have recently proposed a novel approach for generating augmented reality images [4]. Our method is based on the idea of applying stylization techniques for adapting the visual realism of both the camera image and the virtual graphical objects. Since both the camera image and the virtual objects are stylized in a corresponding way, they appear very similar. Here, we present a new method for the stylization of augmented reality images. This approach generates a painterly brush stroke rendering. The resulting stylized augmented reality video frames look similar to paintings created in the pointillism style. We describe the implementation of the camera image filter and the non-photorealistic rendererfor virtual objects. These components have been newly designed or adapted for this purpose. They are fast enough for generating augmented reality images in near real-time (more than 14 frames per second)."}
{"_id": "5bd97fb4ce1b6ebf6be926a1e0512e1666cbe48b", "title": "A Definition of Artificial Intelligence", "text": "In this paper we offer a formal definition of Artificial Intelligence and this directly gives us an algorithm for construction of this object. Really, this algorithm is useless due to the combinatory explosion. The main innovation in our definition is that it does not include the knowledge as a part of the intelligence. So according to our definition a newly born baby also is an Intellect. Here we differs with Turing's definition which suggests that an Intellect is a person with knowledge gained through the years."}
{"_id": "5eb1d160d2be5467aaf08277a02d3159f5fd7a75", "title": "Anisotropic diffusion map based spectral embedding for 3D CAD model retrieval", "text": "In the product life cycle, design reuse can save cost and improve existing products conveniently in most new product development. To retrieve similar models from big database, most search algorithms convert CAD model into a shape descriptor and compute the similarity two models according to a descriptor metric. This paper proposes a new 3D shape matching approach by matching the coordinates directly. It is based on diffusion maps which integrate the rand walk and graph spectral analysis to extract shape features embedded in low dimensional spaces and then they are used to form coordinations for non-linear alignment of different models. These coordinates could capture multi-scale properties of the 3D geometric features and has shown good robustness to noise. The results also have shown better performance compared to the celebrated Eigenmap approach in the 3D model retrieval."}
{"_id": "35864d52726d19513bc25f28f072b5c97a719b09", "title": "Know Your Body Through Intrinsic Goals", "text": "The first \"object\" that newborn children play with is their own body. This activity allows them to autonomously form a sensorimotor map of their own body and a repertoire of actions supporting future cognitive and motor development. Here we propose the theoretical hypothesis, operationalized as a computational model, that this acquisition of body knowledge is not guided by random motor-babbling, but rather by autonomously generated goals formed on the basis of intrinsic motivations. Motor exploration leads the agent to discover and form representations of the possible sensory events it can cause with its own actions. When the agent realizes the possibility of improving the competence to re-activate those representations, it is intrinsically motivated to select and pursue them as goals. The model is based on four components: (1) a self-organizing neural network, modulated by competence-based intrinsic motivations, that acquires abstract representations of experienced sensory (touch) changes; (2) a selector that selects the goal to pursue, and the motor resources to train to pursue it, on the basis of competence improvement; (3) an echo-state neural network that controls and learns, through goal-accomplishment and competence, the agent's motor skills; (4) a predictor of the accomplishment of the selected goals generating the competence-based intrinsic motivation signals. The model is tested as the controller of a simulated simple planar robot composed of a torso and two kinematic 3-DoF 2D arms. The robot explores its body covered by touch sensors by moving its arms. The results, which might be used to guide future empirical experiments, show how the system converges to goals and motor skills allowing it to touch the different parts of own body and how the morphology of the body affects the formed goals. The convergence is strongly dependent on competence-based intrinsic motivations affecting not only skill learning and the selection of formed goals, but also the formation of the goal representations themselves."}
{"_id": "391c71d926c8bc0ea8dcf2ead05d59ef6a1057bf", "title": "Photo tourism: exploring photo collections in 3D", "text": "We present a system for interactively browsing and exploring large unstructured collections of photographs of a scene using a novel 3D interface. Our system consists of an image-based modeling front end that automatically computes the viewpoint of each photograph as well as a sparse 3D model of the scene and image to model correspondences. Our photo explorer uses image-based rendering techniques to smoothly transition between photographs, while also enabling full 3D navigation and exploration of the set of images and world geometry, along with auxiliary information such as overhead maps. Our system also makes it easy to construct photo tours of scenic or historic locations, and to annotate image details, which are automatically transferred to other relevant images. We demonstrate our system on several large personal photo collections as well as images gathered from Internet photo sharing sites."}
{"_id": "26bea0df45f41161032f471c76c613a5ad623a7e", "title": "Design for wearability", "text": "Digital Technology is constantly improving as information becomes wireless. These advances demand more wearable and mobile form factors for products that access information. A product that is wearable should have wearability. This paper explores the concept of dynamic wearability, through design research. Wearability is defined as the interaction between the human body and the wearable object. Dynamic wearability extends that definition to include the human body in motion. Our research has been to locate, understand, and define the spaces on the human body where solid and flexible forms can rest-without interfering with fluid human movement. The result is a set of design guidelines embodied in a set of wearable forms. These wearable forms describe the three dimensional spaces on the body best suited for comfortable and unobtrusive wearability by design."}
{"_id": "dec3d12ef8d1fc3f18572302ff31f74a490a833a", "title": "A New Approach to Quantify Network Security by Ranking of Security Metrics and Considering Their Relationships", "text": "There are several characteristics in computer networks, which play important roles in determining the level of network security. These characteristics known as security metrics can be applied for security quantification in computer networks. Most of the researches on this area has focused on defining the new security metrics to improve the quantification process. In this paper, we present a new approach to analyze and quantify the network security by ranking of security metrics with considering the relationships between them. Our ranking method reveals the importance of each security metric to quantify security in the network under surveillance. The proposed approach helps the network administrators to have a better insight on the level of network security"}
{"_id": "a16fbdb6a5f4f2bc39784348960103d0ca39b9cf", "title": "Is FLIP enough? Or should we use the FLIPPED model instead?", "text": "Yunglung Chen, Yuping Wang, Kinshuk, Nian-Shing Chen Department of Information Management, National Sun Yat-sen University, No. 70, Lienhai Rd., Kaohsiung 80424, Taiwan School of Languages and Linguistics, Griffith University, Nathan, 4111, Australia School of Computing and Information System, Athabasca University, Athabasca, Canada henryylchen@gmail.com, y.wang@griffith.edu.au, kinshuk@athabascau.ca, nschen@mis.nsysu.edu.tw"}
{"_id": "c2a7f02cf7a8a39051dd47433d0415547b62eff4", "title": "Step-by-step design and control of LCL filter based three phase grid-connected inverter", "text": "This paper proposes a detailed step-by-step design procedure and control of an LCL filter for grid connected three phase sine PWM voltage source inverter. The goal of the design is to ensure high quality of grid current as well as to minimize the size of filter magnetics. In order to ensure unity power factor injection into grid a current controller is designed with a constraint that only the inverter side inductor current is sensed. Good agreement between simulation and experimental results verify the effectiveness of the designed controller and the LCL filter."}
{"_id": "50965b84549a1a925499b9ce16bb20409948eb39", "title": "Power Quality Control and Design of Power Converter for Variable-Speed Wind Energy Conversion System with Permanent-Magnet Synchronous Generator", "text": "The control strategy and design of an AC/DC/AC IGBT-PMW power converter for PMSG-based variable-speed wind energy conversion systems (VSWECS) operation in grid/load-connected mode are presented. VSWECS consists of a PMSG connected to a AC-DC IGBT-based PWM rectifier and a DC/AC IGBT-based PWM inverter with LCL filter. In VSWECS, AC/DC/AC power converter is employed to convert the variable frequency variable speed generator output to the fixed frequency fixed voltage grid. The DC/AC power conversion has been managed out using adaptive neurofuzzy controlled inverter located at the output of controlled AC/DC IGBT-based PWM rectifier. In this study, the dynamic performance and power quality of the proposed power converter connected to the grid/load by output LCL filter is focused on. Dynamic modeling and control of the VSWECS with the proposed power converter is performed by using MATLAB/Simulink. Simulation results show that the output voltage, power, and frequency of VSWECS reach to desirable operation values in a very short time. In addition, when PMSG based VSWECS works continuously with the 4.5 kHz switching frequency, the THD rate of voltage in the load terminal is 0.00672%."}
{"_id": "a0eb9c4b70e7f63fef5faef0ac9282416e1b58a9", "title": "Finding expert users in community question answering", "text": "Community Question Answering (CQA) websites provide a rapidly growing source of information in many areas. This rapid growth, while offering new opportunities, puts forward new challenges. In most CQA implementations there is little effort in directing new questions to the right group of experts. This means that experts are not provided with questions matching their expertise, and therefore new matching questions may be missed and not receive a proper answer. We focus on finding experts for a newly posted question. We investigate the suitability of two statistical topic models for solving this issue and compare these methods against more traditional Information Retrieval approaches. We show that for a dataset constructed from the Stackoverflow website, these topic models outperform other methods in retrieving a candidate set of best experts for a question. We also show that the Segmented Topic Model gives consistently better performance compared to the Latent Dirichlet Allocation Model."}
{"_id": "77aa875c17f428e7f78f5bfb64033514a820c808", "title": "Impact of rank optimization on downlink non-orthogonal multiple access (NOMA) with SU-MIMO", "text": "Non-orthogonal multiple access (NOMA) is a promising radio access technique for further cellular enhancements toward the 5th generation (5G) mobile communication systems. Single-user multiple-input multiple-output (SU-MIMO) is one of the key technologies in long term evolution (LTE) /LTE-Advanced systems. It is proved that NOMA combined with SU-MIMO techniques can achieve further system performance improvement. In this paper, we focus on the impact of rank optimization on the performance of NOMA with SU-MIMO in downlink. Firstly, a geometry based rank adjustment method is studied. Secondly, an enhanced feedback method for rank adjustment is discussed. The simulation results show that the performance gain of NOMA improves with the proposed two rank adjustment methods. Compared to orthogonal access system, large performance gains can be achieved for NOMA, which are about 23% for cell average throughput and 33% for cell-edge user throughput."}
{"_id": "15252afe259894bd3d0f306f29eca5e90ab05eac", "title": "On Bias, Variance, 0/1\u2014Loss, and the Curse-of-Dimensionality", "text": "The classification problem is considered in which an outputvariable y assumes discrete values with respectiveprobabilities that depend upon the simultaneous values of a set of input variablesx = {x_1,....,x_n}. At issue is how error in the estimates of theseprobabilities affects classification error when the estimates are used ina classification rule. These effects are seen to be somewhat counterintuitive in both their strength and nature. In particular the bias andvariance components of the estimation error combine to influenceclassification in a very different way than with squared error on theprobabilities themselves. Certain types of (very high) bias can becanceled by low variance to produce accurate classification. This candramatically mitigate the effect of the bias associated with some simpleestimators like \u201cnaive\u201d Bayes, and the bias induced by thecurse-of-dimensionality on nearest-neighbor procedures. This helps explainwhy such simple methods are often competitive with and sometimes superiorto more sophisticated ones for classification, and why\u201cbagging/aggregating\u201d classifiers can often improveaccuracy. These results also suggest simple modifications to theseprocedures that can (sometimes dramatically) further improve theirclassification performance."}
{"_id": "649d0aa1be51cc545de52fc584640501efdcf68b", "title": "Semantic Segmentation for High Spatial Resolution Remote Sensing Images Based on Convolution Neural Network and Pyramid Pooling Module", "text": "Semantic segmentation provides a practical way to segment remotely sensed images into multiple ground objects simultaneously, which can be potentially applied to multiple remote sensed related aspects. Current classification algorithms in remotely sensed images are mostly limited by different imaging conditions, the multiple ground objects are difficult to be separated from each other due to high intraclass spectral variances and interclass spectral similarities. In this study, we propose an end-to-end framework to semantically segment high-resolution aerial images without postprocessing to refine the segmentation results. The framework provides a pixel-wise segmentation result, comprising convolutional neural network structure and pyramid pooling module, which aims to extract feature maps at multiple scales. The proposed model is applied to the ISPRS Vaihingen benchmark dataset from the ISPRS 2D Semantic Labeling Challenge. Its segmentation results are compared with previous state-of-the-art method UZ_1, UPB and three other methods that segment images into objects of all the classes (including clutter/background) based on true orthophoto tiles, and achieve the highest overall accuracy of 87.8% over the published performances, to the best of our knowledge. The results validate the efficiency of the proposed model in segmenting multiple ground objects from remotely sensed images simultaneously."}
{"_id": "86c9a59c7c4fcf0d10dbfdb6afd20dd3c5c1426c", "title": "A Multichannel Approach to Fingerprint Classification", "text": "Fingerprint classification provides an important indexing mechanism in a fingerprint database. An accurate and consistent classification can greatly reduce fingerprint matching time for a large database. We present a fingerprint classification algorithm which is able to achieve an accuracy better than previously reported in the literature. We classify fingerprints into five categories: whorl, right loop, left loop, arch, and tented arch. The algorithm uses a novel representation (FingerCode) and is based on a two-stage classifier to make a classification. It has been tested on 4,000 images in the NIST-4 database. For the five-class problem, a classification accuracy of 90 percent is achieved (with a 1.8 percent rejection during the feature extraction phase). For the four-class problem (arch and tented arch combined into one class), we are able to achieve a classification accuracy of 94.8 percent (with 1.8 percent rejection). By incorporating a reject option at the classifier, the classification accuracy can be increased to 96 percent for the five-class classification task, and to 97.8 percent for the four-class classification task after a total of 32.5 percent of the images are rejected."}
{"_id": "6b4fe4aa4d66fecc7b2869569002714d91d0b3f7", "title": "Receptive fields, binocular interaction and functional architecture in the cat's visual cortex.", "text": "What chiefly distinguishes cerebral cortex from other parts of the central nervous system is the great diversity of its cell types and inter-connexions. It would be astonishing if such a structure did not profoundly modify the response patterns of fibres coming into it. In the cat's visual cortex, the receptive field arrangements of single cells suggest that there is indeed a degree of complexity far exceeding anything yet seen at lower levels in the visual system. In a previous paper we described receptive fields of single cortical cells, observing responses to spots of light shone on one or both retinas (Hubel & Wiesel, 1959). In the present work this method is used to examine receptive fields of a more complex type (Part I) and to make additional observations on binocular interaction (Part II). This approach is necessary in order to understand the behaviour of individual cells, but it fails to deal with the problem of the relationship of one cell to its neighbours. In the past, the technique of recording evoked slow waves has been used with great success in studies of functional anatomy. It was employed by Talbot & Marshall (1941) and by Thompson, Woolsey & Talbot (1950) for mapping out the visual cortex in the rabbit, cat, and monkey. Daniel & Whitteiidge (1959) have recently extended this work in the primate. Most of our present knowledge of retinotopic projections, binocular overlap, and the second visual area is based on these investigations. Yet the method of evoked potentials is valuable mainly for detecting behaviour common to large populations of neighbouring cells; it cannot differentiate functionally between areas of cortex smaller than about 1 mm2. To overcome this difficulty a method has in recent years been developed for studying cells separately or in small groups during long micro-electrode penetrations through nervous tissue. Responses are correlated with cell location by reconstructing the electrode tracks from histological material. These techniques have been applied to CAT VISUAL CORTEX 107 the somatic sensory cortex of the cat and monkey in a remarkable series of studies by Mountcastle (1957) and Powell & Mountcastle (1959). Their results show that the approach is a powerful one, capable of revealing systems of organization not hinted at by the known morphology. In Part III of the present paper we use this method in studying the functional architecture of the visual cortex. It helped us attempt to explain on anatomical \u2026"}
{"_id": "a2ed347d010aeae4ddd116676bdea2e77d942f6e", "title": "Fingerprint classification", "text": "A fingerprint classification algorithm is presented in this paper. Fingerprints are classified into five categories: arch, tented arch, left loop, right loop and whorl. The algorithm extracts singular points (cores and deltas) in a fingerprint image and performs classification based on the number and locations of the detected singular points. The classifier is invariant to rotation, translation and small amounts of scale changes. The classifier is rule-based, where the rules are generated independent of a given data set. The classifier was tested on 4000 images in the NIST-4 database and on 5400 images in the NIST-9 database. For he NIST-4 database, classification accuracies of 85.4% for the five-class problem and 91.1% for the four-class problem (with arch and tented arch placed in the same category) were achieved. Using a reject option, the four-class classification error can be reduced to less than 6% with 10% fingerprint images rejected. Similar classification performance was obtained on the NIST-9 database."}
{"_id": "b07ce649d6f6eb636872527104b0209d3edc8188", "title": "Pattern classification and scene analysis", "text": ""}
{"_id": "b6308422bc6fe0babd8fe2f04571449a0b016349", "title": "Adaptive flow orientation-based feature extraction in fingerprint images", "text": "-A reliable method for extracting structural features from fingerprint images is presented. Viewing fingerprint images as a textured image, an orientation flow field is computed. The rest of the stages in the algorithm use the flow field to design adaptive filters for the input image. To accurately locate ridges, a waveform projection-based ridge segmentation algorithm is used. The ridge skeleton image is obtained and smoothed using morphological operators to detect the features. A large number of spurious features from the detected set of minutiae is deleted by a postprocessing stage. The performance of the proposed algorithm has been evaluated by computing a \"goodness index\" (GI) which compares the results of automatic extraction with manually extracted ground truth. The significance of the observed GI values is determined by comparing the index for a set of fingerprints against the GI values obtained under a baseline distribution. The detected features are observed to be reliable and accurate. Fingerprints Feature extraction Texture Flow orientation Minutiae Segmentation Skeleton"}
{"_id": "410d48ddd9173049d0b3c9a2a6dbf3b606842262", "title": "Automated Static Code Analysis for Classifying Android Applications Using Machine Learning", "text": "In this paper we apply Machine Learning (ML) techniques on static features that are extracted from Android's application files for the classification of the files. Features are extracted from Android\u2019s Java byte-code (i.e.,.dex files) and other file types such as XML-files. Our evaluation focused on classifying two types of Android applications: tools and games. Successful differentiation between games and tools is expected to provide positive indication about the ability of such methods to learn and model Android benign applications and potentially detect malware files. The results of an evaluation, performed using a test collection comprising 2,285 Android. apk files, indicate that features, extracted statically from. apk files, coupled with ML classification algorithms can provide good indication about the nature of an Android application without running the application, and may assist in detecting malicious applications. This method can be used for rapid examination of Android. apks and informing of suspicious applications."}
{"_id": "3337976b072405933a02f7d912d2b6432de38feb", "title": "Automated Text Summarization and the SUMMARIST System", "text": "This paper consists of three parts: a preliminary typology of summaries in general; a description of the current and planned modules and performance of the SUMMARIST automated multilingual text summarization system being built sat ISI, and a discussion of three methods to evaluate summaries. 1. T H E N A T U R E O F S U M M A R I E S Early experimentation in the late 1950's and early 60's suggested that text summarization by computer was feasible though not s traightforward (Luhn, 59; Edmundson, 68). The methods developed then were fairly unsophisticated, relying primarily on surface level phenomena such as sentence position and word frequency counts, and focused on producing extracts (passages selected from the text, reproduced verbatim) rather than abstracts (interpreted portions of the text, newly generated). After a hiatus of some decades, the growing presence of large amounts of online text--in corpora and especially on the Web--renewed the interest in automated text summarization. During these intervening decades, progress in Natural Language Processing (NLP), coupled with great increases of computer memory and speed, made possible more sophisticated techniques, with very encouraging results. In the late 1990's, some relatively small research investments in the US (not more than 10 projects, including commercial efforts at Microsoft, Lexis-Nexis, Oracle, SRA, and TextWise, and university efforts at CMU, NMSU, UPenn, and USC/ISI) over three or four years have produced several systems that exhibit potential marketability, as well as several innovations that promise continued improvement. In addition, several recent workshops, a book collection, and several tutorials testify that automated text summarization has become a hot area. However, when one takes a moment to study the various systems and to consider what has really been achieved, one cannot help being struck by their underlying similarity, by the narrowness of their focus, and by the large numbers of unknown factors that surround the problem. For example, what precisely is a summary? No-one seems to know exactly. In our work, we use summary as the generic term and define it as follows: A summary is a text that is produced out of one or more (possibly multimedia) texts, that contains (some of) the same information of the original text(s), and that is no longer than half of the original text(s). To clarify the picture a little, we follow and extend (Sp~irck Jones, 97) by identifying the following aspects of variation. Any summary can be characterized by (at least) three major classes of characteristics: Invut: characteristics of the source text(s) Source size: single-document v s . multi-document: A single-document summary derives from a single input text (though the summarization process itself may employ information compiled earlier from other texts). A multi-document summary is one text that covers the content of more than one input text, and is usually used only when the input texts are thematically related. Specificity: domain-specific vs. general: When the input texts all pertain to a single domain, it may be appropr ia te to apply domain spec i f i c summarization techniques, focus on specific content, and output specific formats, compared to the general case. A domain-specific summary derives from input text(s) whose theme(s) pertain to a single restricted domain. 
As such, it can assume less term ambiguity, idiosyncratic word and grammar usage, specialized formatting, etc., and can reflect them in the summary."}
{"_id": "f0acbfdde58c297ff1705be9e9f8e119f4ba38cc", "title": "Asymmetrical duty cycle control with phase limit of LLC resonant inverter for an induction furnace", "text": "This paper proposes a power control of LLC resonant inverter for an induction furnace using asymmetrical duty cycle control (ADC) with phase limit to guarantee zero voltage switching (ZVS) operation and protect switching devices from spike current during the operation. The output power can be simply controlled through duty cycle of gate signals of a full-bridge inverter. With the phase limit control, non-ZVS operation and spike current caused by a change of duty cycle with fixed frequency and load Curie's temperature can be eliminated. Theoretical and simulation analyses of the proposed method have been investigated. The experimental results of heating a 300 g of Tin from room temperature until it is melted at 232 \u00b0C provided."}
{"_id": "df2f26f96a514956b34ceaee8189e7f54160bee0", "title": "Defect Detection in Patterned Fabrics Using Modified Local Binary Patterns", "text": "Local binary patterns LBP, is one of the features which has been used for texture classification. In this paper, a method based on using these features is proposed for detecting defects in patterned fabrics. In the training stage, at first step LBP operator is applied to all rows (columns) of a defect free fabric sample, pixel by pixel, and the reference feature vector is computed. Then this image is divided into windows and LBP operator is applied to each row (column) of these windows. Based on comparison with the reference feature vector a suitable threshold for defect free windows is found. In the detection stage, a test image is divided into windows and using the threshold, defective windows can be detected. The proposed method is simple and gray scale invariant. Because of its simplicity, online implementation is possible as well."}
{"_id": "25126128faa023d1a65a47abeb8c33219cc8ca5c", "title": "Less is More: Nystr\u00f6m Computational Regularization", "text": "We study Nystr\u00f6m type subsampling approaches to large scale kernel methods, and prove learning bounds in the statistical learning setting, where random sampling and high probability estimates are considered. In particular, we prove that these approaches can achieve optimal learning bounds, provided the subsampling level is suitably chosen. These results suggest a simple incremental variant of Nystr\u00f6m Kernel Regularized Least Squares, where the subsampling level implements a form of computational regularization, in the sense that it controls at the same time regularization and computations. Extensive experimental analysis shows that the considered approach achieves state of the art performances on benchmark large scale datasets."}
{"_id": "b2719ac9e43cd491608b2e8282736ccac1915671", "title": "Power consumption estimation in CMOS VLSI chips", "text": "Power consumption from logic circuits, interconnections, dock distribution, on chip memories, and off chip driving in CMOS VLSI is estimated. Estimation methods are demonstrated and verified. An estimate tool is created. Power consumption distribution between ~ I I ~ I T O M ~ C ~ ~ O I W , clock distribution, logic gates, memories, and off chip driving are analyzed by examples. Comparisons are done between cell library, gate array, and full custom design. Also comparisons between static and dynamic logic are given. Results show that the power consumption of all interconnections and off chip driving can be up to 20% and 65% of the total power consumption respectively. Compared to cell library design, gate array designed chips consume about 10% more power, and power reduction in full custom designed chips could be 15%."}
{"_id": "b0f70ce823cc91b6a5fe3297b98f5fdad4796bab", "title": "A dynamic instruction set computer", "text": "One of the key contributions of this paper is the idea of treating the two-dimensional FPGA area as a one-dimensional array of custom instructions, which the paper refers to as linear hardware space. Each custom instruction occupies the full width of the array but can occupy different heights to allow for instructions of varying complexity. Instructions also have a common interface that includes pass-thrus which connect to the previous and next instructions in the array. This communication interface greatly simplifies the problem of relocating a custom instruction to a free area, allowing other instructions to potentially remain in-place. The communication paths for an instruction are always at the same relative positions, and run-time CAD operations (and their corresponding overheads) are completely avoided. Thus, the linear hardware space reduces the overheads and simplifies the process of run-time reconfiguration of custom processor instructions."}
{"_id": "12e54895e1d3da2fb75a6bb8f84c8d1f5108d632", "title": "Mechanical Design and Modeling of an Omni-directional RoboCup Player", "text": "Abstract. This paper covers the mechanical design process of an omni-directional mobile robot developed for the Ohio University RoboCup team player. It covers each design iteration, detailing what we learned from each phase. In addition, the kinematics of the final design is derived, and its inverse Jacobian matrix for the design is presented. The dynamic equations of motion for the final design is derived in a symbolic form, assuming that no slip occurs on the wheel in the spin direction. Finally, a simulation example demonstrates simple independent PID wheel control for the complex coupled nonlinear dynamic system."}
{"_id": "c436a8a6fa644e3d0264ef1b1b3a838e925a7e32", "title": "Analysis on botnet detection techniques", "text": "Botnet detection plays an important role in network security. Botnet are collection of compromised computers called the bot. For detecting the presence of bots in a network, there are many detection techniques available. Network based detection method is the one of the efficient method in detecting bots. Paper reviews four different botnet detection techniques and a comparison of all these techniques are done."}
{"_id": "2f5f766944162077579091a40315ec228d9785ba", "title": "Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images From Multiethnic Populations With Diabetes.", "text": "Importance\nA deep learning system (DLS) is a machine learning technology with potential for screening diabetic retinopathy and related eye diseases.\n\n\nObjective\nTo evaluate the performance of a DLS in detecting referable diabetic retinopathy, vision-threatening diabetic retinopathy, possible glaucoma, and age-related macular degeneration (AMD) in community and clinic-based multiethnic populations with diabetes.\n\n\nDesign, Setting, and Participants\nDiagnostic performance of a DLS for diabetic retinopathy and related eye diseases was evaluated using 494\u202f661 retinal images. A DLS was trained for detecting diabetic retinopathy (using 76\u202f370 images), possible glaucoma (125\u202f189 images), and AMD (72\u202f610 images), and performance of DLS was evaluated for detecting diabetic retinopathy (using 112\u202f648 images), possible glaucoma (71\u202f896 images), and AMD (35\u202f948 images). Training of the DLS was completed in May 2016, and validation of the DLS was completed in May 2017 for detection of referable diabetic retinopathy (moderate nonproliferative diabetic retinopathy or worse) and vision-threatening diabetic retinopathy (severe nonproliferative diabetic retinopathy or worse) using a primary validation data set in the Singapore National Diabetic Retinopathy Screening Program and 10 multiethnic cohorts with diabetes.\n\n\nExposures\nUse of a deep learning system.\n\n\nMain Outcomes and Measures\nArea under the receiver operating characteristic curve (AUC) and sensitivity and specificity of the DLS with professional graders (retinal specialists, general ophthalmologists, trained graders, or optometrists) as the reference standard.\n\n\nResults\nIn the primary validation dataset (n\u2009=\u200914\u202f880 patients; 71\u202f896 images; mean [SD] age, 60.2 [2.2] years; 54.6% men), the prevalence of referable diabetic retinopathy was 3.0%; vision-threatening diabetic retinopathy, 0.6%; possible glaucoma, 0.1%; and AMD, 2.5%. The AUC of the DLS for referable diabetic retinopathy was 0.936 (95% CI, 0.925-0.943), sensitivity was 90.5% (95% CI, 87.3%-93.0%), and specificity was 91.6% (95% CI, 91.0%-92.2%). For vision-threatening diabetic retinopathy, AUC was 0.958 (95% CI, 0.956-0.961), sensitivity was 100% (95% CI, 94.1%-100.0%), and specificity was 91.1% (95% CI, 90.7%-91.4%). For possible glaucoma, AUC was 0.942 (95% CI, 0.929-0.954), sensitivity was 96.4% (95% CI, 81.7%-99.9%), and specificity was 87.2% (95% CI, 86.8%-87.5%). For AMD, AUC was 0.931 (95% CI, 0.928-0.935), sensitivity was 93.2% (95% CI, 91.1%-99.8%), and specificity was 88.7% (95% CI, 88.3%-89.0%). For referable diabetic retinopathy in the 10 additional datasets, AUC range was 0.889 to 0.983 (n\u2009=\u200940\u202f752 images).\n\n\nConclusions and Relevance\nIn this evaluation of retinal images from multiethnic cohorts of patients with diabetes, the DLS had high sensitivity and specificity for identifying diabetic retinopathy and related eye diseases. Further research is necessary to evaluate the applicability of the DLS in health care settings and the utility of the DLS to improve vision outcomes."}
{"_id": "a6b70f80cddffd78780960c542321ddfe3c412f3", "title": "Suicide and Suicide Risk in Lesbian, Gay, Bisexual, and Transgender Populations: Review and Recommendations", "text": "Despite strong indications of elevated risk of suicidal behavior in lesbian, gay, bisexual, and transgender people, limited attention has been given to research, interventions or suicide prevention programs targeting these populations. This article is a culmination of a three-year effort by an expert panel to address the need for better understanding of suicidal behavior and suicide risk in sexual minority populations, and stimulate the development of needed prevention strategies, interventions and policy changes. This article summarizes existing research findings, and makes recommendations for addressing knowledge gaps and applying current knowledge to relevant areas of suicide prevention practice."}
{"_id": "791351badc12101a2db96649e127ac2f3993c70f", "title": "Watch What You Wear: Preliminary Forensic Analysis of Smart Watches", "text": "This work presents preliminary forensic analysis of two popular smart watches, the Samsung Gear 2 Neo and LG G. These wearable computing devices have the form factor of watches and sync with smart phones to display notifications, track footsteps and record voice messages. We posit that as smart watches are adopted by more users, the potential for them becoming a haven for digital evidence will increase thus providing utility for this preliminary work. In our work, we examined the forensic artifacts that are left on a Samsung Galaxy S4 Active phone that was used to sync with the Samsung Gear 2 Neo watch and the LG G watch. We further outline a methodology for physically acquiring data from the watches after gaining root access to them. Our results show that we can recover a swath of digital evidence directly form the watches when compared to the data on the phone that is synced with the watches. Furthermore, to root the LG G watch, the watch has to be reset to its factory settings which is alarming because the process may delete data of forensic relevance. Although this method is forensically intrusive, it may be used for acquiring data from already rooted LG watches. It is our observation that the data at the core of the functionality of at least the two tested smart watches, messages, health and fitness data, e-mails, contacts, events and notifications are accessible directly from the acquired images of the watches, which affirms our claim that the forensic value of evidence from smart watches is worthy of further study and should be investigated both at a high level and with greater specificity and granularity."}
{"_id": "a7fbd785b5fa441b5261913e93966ba395ae2244", "title": "Design of Broadband Constant-Beamwidth Conical Corrugated-Horn Antennas [Antenna Designer's Notebook]", "text": "In this paper, a new design procedure is proposed for the design of wideband constant-beamwidth conical corrugated-horn antennas, with minimum design and construction complexity. The inputs to the procedure are the operating frequency band, the required minimum beamwidth in the entire frequency band, and the frequency in which the maximum gain is desired to occur. Based on these values, the procedure gives a relatively good design with a relative bandwidth of up to 2.5:1. Based on the proposed procedure, a corrugated-horn antenna with a constant beamwidth over the frequencies of 8 to 18 GHz was designed and simulated using commercial software. The designed antenna was also constructed, and its electromagnetic performance was measured. The measured results of the constructed prototype antenna confirmed the simulation results and satisfied the design requirements, validating the proposed design procedure."}
{"_id": "8d93c6b66a114e265942b000903c9244087bd9f6", "title": "I See What You Say (ISWYS): Arabic lip reading system", "text": "The ability of communicating easily with everyone is a blessing people with hearing impairment do not have. They completely rely on their vision around healthy individuals to difficultly read lips. This paper proposes a solution for this problem, ISWYS (I See What You Say) is a research-oriented speech recognition system for the Arabic language that interprets lips movements into readable text. It is accomplished by analyzing a video of lips movements that resemble utterances, and then converting it to readable characters using video analysis and motion estimation. Our algorithm involves dividing the video into n number of frames to generate n-1 image frame which is produced by taking the difference between consecutive frames. Then, video features are extracted to be used by our error function which provided recognition of approximately 70%."}
{"_id": "b9961cf63a8bff98f9a5086651d3c9c0c847d7a1", "title": "A Novel BiQuad Antenna for 2 . 4 GHz Wireless Link Application : A Proposed Design", "text": "An ISM Band (2.4GHz) Design for a Biquad Antenna with Reflector base is presented for satisfying the ISM Band point-to-point link Applications in this paper. This proposed design can be used for Reception as well as Transmission in Wi-Fi's, WLAN, Bluetooth and even Zigbee links. The proposed antenna consists of two squares of the same size of 1\u20444 wavelength as a radiating element and a metallic plate or grid as reflector. This antenna has a beam width of about 70 degrees and a gain in the order of 10-12 dBi. A Prototype of this Antenna is designed and constructed. Parametric study is performed to understand the characterstics of the proposed antenna. Almost good antenna performances such as radiation patterns and antenna gains over the operating bands have been observed and simulated peak gain of the antenna is 10.7 dBi at 2439MHz. The simulated return loss is-35dB, whereas simulated SWR is 1.036 over the operating bands. The Biquad antenna is simple to build and offers good directivity and gain for Point-to-Point communications [1]. It consists of two squares of the same size of 1\u20444 wavelength as a radiating element and a metallic plate or grid as reflector. This antenna has a beam width of about 70 degrees and a gain in the order of 10-12 dBi. It can be used as stand-alone antenna or as feeder for a Parabolic Dish [2]. The polarization is such that looking at the antenna from the front, if the squares are placed side by side the polarization is vertical. The element is made from a length of 2mm thick copper wire, bent into the appropriate shape. Note that the length of each \" side \" should be close to 31 mm as possible, when measured from center to center of the wire [4-5]. The Details of the proposed antenna design are described in the paper, and simulated results are presented and discussed in the following sections."}
{"_id": "98e1c84d74a69349f6f7502763958a757ec7f416", "title": "Regulation of OsSPL14 by OsmiR156 defines ideal plant architecture in rice", "text": "Increasing crop yield is a major challenge for modern agriculture. The development of new plant types, which is known as ideal plant architecture (IPA), has been proposed as a means to enhance rice yield potential over that of existing high-yield varieties. Here, we report the cloning and characterization of a semidominant quantitative trait locus, IPA1 (Ideal Plant Architecture 1), which profoundly changes rice plant architecture and substantially enhances rice grain yield. The IPA1 quantitative trait locus encodes OsSPL14 (SOUAMOSA PROMOTER BINDING PROTEIN-LIKE 14) and is regulated by microRNA (miRNA) OsmiR156 in vivo. We demonstrate that a point mutation in OsSPL14 perturbs OsmiR156-directed regulation of OsSPL14, generating an 'ideal' rice plant with a reduced tiller number, increased lodging resistance and enhanced grain yield. Our study suggests that OsSPL14 may help improve rice grain yield by facilitating the breeding of new elite rice varieties."}
{"_id": "43d693936dcb0c582743ed30bdf505faa4dce8ec", "title": "Introduction to smart learning analytics: foundations and developments in video-based learning", "text": "Smart learning has become a new term to describe technological and social developments (e.g., Big and Open Data, Internet of Things, RFID, and NFC) enable effective, efficient, engaging and personalized learning. Collecting and combining learning analytics coming from different channels can clearly provide valuable information in designing and developing smart learning. Although, the potential of learning analytics to enable smart learning is very promising area, it remains non-investigated and even ill-defined concept. The paper defines the subset of learning analytics that focuses on supporting the features and the processes of smart learning, under the term Smart Learning Analytics. This is followed by a brief discussion on the prospects and drawbacks of Smart Learning Analytics and their recent foundations and developments in the area of Video-Based Learning. Drawing from our experience with the recent international workshops in Smart Environments and Analytics in Video-Based Learning, we present the state-of-the-art developments as well as the four selected contributions. The paper further draws attention to the great potential and need for research in the area of Smart Learning Analytics."}
{"_id": "4fbb0656d67a67686ebbad3023c71a59856b7ae7", "title": "Machine Learning for Semantic Parsing in Review", "text": "Spoken Language Understanding (SLU) and more specifically, semantic parsing is an indispensable task in each speech-enabled application. In this survey, we review the current research on SLU and semantic parsing with emphasis on machine learning techniques used for these tasks. Observing the current trends in semantic parsing, we conclude our discussion by suggesting some of the most promising future research trends."}
{"_id": "e17da8f078989d596878d7e1ea36d10e879feb64", "title": "Is a picture worth a thousand words? A Deep Multi-Modal Fusion Architecture for Product Classification in e-commerce", "text": "Classifying products precisely and efficiently is a major challenge in modern e-commerce. The high traffic of new products uploaded daily and the dynamic nature of the categories raise the need for machine learning models that can reduce the cost and time of human editors. In this paper, we propose a decision level fusion approach for multi-modal product classification based on text and image neural network classifiers. We train input specific state-of-the-art deep neural networks for each input source, show the potential of forging them together into a multi-modal architecture and train a novel policy network that learns to choose between them. Finally, we demonstrate that our multi-modal network improves classification accuracy over both networks on a real-world largescale product classification dataset that we collected from Walmart.com. While we focus on image-text fusion that characterizes e-commerce businesses, our algorithms can be easily applied to other modalities such as audio, video, physical sensors, etc."}
{"_id": "18cd20f0605629e0cbb714a900511c9b68233400", "title": "Balancing Performance, Energy, and Quality in Pervasive Computing", "text": "We describe Spectra, a remote execution system for batterypowered clients used in pervasive computing. Spectra enables applications to combine the mobility of small devices with the greater processing power of static compute servers. Spectra is self-tuning: it monitors both application resource usage and the availability of resources in the environment, and dynamically determines how and where to execute application components. In making this determination, Spectra balances the competing goals of performance, energy conservation, and application quality. We have validated Spectra\u2019s approach on the Compaq Itsy v2.2 and IBM ThinkPad 560X using a speech recognizer, a document preparation system, and a natural language translator. Our results confirm that Spectra almost always selects the best execution plan, and that its few suboptimal choices are very close to optimal."}
{"_id": "179634b2986fe4bbb2ca4714aec588012b66c231", "title": "Implicit and explicit ethnocentrism: revisiting the ideologies of prejudice.", "text": "Two studies investigated relationships among individual differences in implicit and explicit prejudice, right-wing ideology, and rigidity in thinking. The first study examined these relationships focusing on White Americans' prejudice toward Black Americans. The second study provided the first test of implicit ethnocentrism and its relationship to explicit ethnocentrism by studying the relationship between attitudes toward five social groups. Factor analyses found support for both implicit and explicit ethnocentrism. In both studies, mean explicit attitudes toward out groups were positive, whereas implicit attitudes were negative, suggesting that implicit and explicit prejudices are distinct; however, in both studies, implicit and explicit attitudes were related (r = .37, .47). Latent variable modeling indicates a simple structure within this ethnocentric system, with variables organized in order of specificity. These results lead to the conclusion that (a) implicit ethnocentrism exists and (b) it is related to and distinct from explicit ethnocentrism."}
{"_id": "e32acd4b0eea757df6e62a1e474bf1153ee98e09", "title": "Harlequin fetus.", "text": "We report a case of harlequin fetus born to the consanguineous parents. She had the typical skin manifestations of thick armour like scales with fissures, complete ectropion and eclabium, atrophic and crumpled ears and swollen extremities with gangrenous digits. Supportive treatment was given but the neonate died on the 4th day."}
{"_id": "dbbfc1e3e356dc3312a378e9a620e46c2abc5cc2", "title": "Efficient Clustering-Based Outlier Detection Algorithm for Dynamic Data Stream", "text": "Anomaly detection is currently an important and active research problem in many fields and involved in numerous applications. Most of the existing methods are based on distance measure. But in case of data stream these methods are not very efficient as computational point of view. Most of the exiting work on outlier detection in data stream declare a point as an outlier/inlier as soon as it arrive due to limited memory resources as compared to the huge data stream, to declare an outlier as it arrive often can lead us to a wrong decision, because of dynamic nature of the incoming data. In this paper we introduced a clustering based approach, which divide the stream in chunks and cluster each chunk using k-mean in fixed number of clusters. Instead of keeping only the summary information, which often used in case of clustering data stream, we keep the candidate outliers and mean value of every cluster for the next fixed number of steam chunks, to make sure that the detected candidate outliers are the real outliers. By employing the mean value of the clusters of previous chunk with mean values of the current chunk of stream, we decide better outlierness for data stream objects. Several experiments on different dataset confirm that our technique can find better outliers with low computational cost than the other exiting distance based approaches of outlier detection in data stream."}
{"_id": "645d9e7e5e3c5496f11e0e303dc4cc1395109773", "title": "Performance modeling for systematic performance tuning", "text": "The performance of parallel scientific applications depends on many factors which are determined by the execution environment and the parallel application. Especially on large parallel systems, it is too expensive to explore the solution space with series of experiments. Deriving analytical models for applications and platforms allow estimating and extrapolating their execution performance, bottlenecks, and the potential impact of optimization options. We propose to use such \"performance modeling\" techniques beginning from the application design process throughout the whole software development cycle and also during the lifetime of supercomputer systems. Such models help to guide supercomputer system design and re-engineering efforts to adopt applications to changing platforms and allow users to estimate costs to solve a particular problem. Models can often be built with the help of well-known performance profiling tools. We discuss how we successfully used modeling throughout the proposal, initial testing, and beginning deployment phase of the Blue Waters supercomputer system."}
{"_id": "16302319d910a1da77656133727f081c65995635", "title": "Programming by Feedback", "text": "This paper advocates a new ML-based programming framework, called Programming by Feedback (PF), which involves a sequence of interactions between the active computer and the user. The latter only provides preference judgments on pairs of solutions supplied by the active computer. The active computer involves two components: the learning component estimates the user\u2019s utility function and accounts for the user\u2019s (possibly limited) competence; the optimization component explores the search space and returns the most appropriate candidate solution. A proof of principle of the approach is proposed, showing that PF requires a handful of interactions in order to solve some discrete and continuous benchmark problems."}
{"_id": "3d16326c34fdbf397876fcc173702846ae9f5fc0", "title": "Forwarding in a content-based network", "text": "This paper presents an algorithm for content-based forwarding, an essential function in content-based networking. Unlike in traditional address-based unicast or multicast networks, where messages are given explicit destination addresses, the movement of messages through a content-based network is driven by predicates applied to the content of the messages. Forwarding in such a network amounts to evaluating the predicates stored in a router's forwarding table in order to decide to which neighbor routers the message should be sent. We are interested in finding a forwarding algorithm that can make this decision as quickly as possible in situations where there are numerous, complex predicates and high volumes of messages. We present such an algorithm and give the results of studies evaluating its performance."}
{"_id": "418750bf838ed1417a4b65ebf292d804371e3f67", "title": "Probabilistic in-network caching for information-centric networks", "text": "In-network caching necessitates the transformation of centralised operations of traditional, overlay caching techniques to a decentralised and uncoordinated environment. Given that caching capacity in routers is relatively small in comparison to the amount of forwarded content, a key aspect is balanced distribution of content among the available caches. In this paper, we are concerned with decentralised, real-time distribution of content in router caches. Our goal is to reduce caching redundancy and in turn, make more efficient utilisation of available cache resources along a delivery path.\n Our in-network caching scheme, called ProbCache, approximates the caching capability of a path and caches contents probabilistically in order to: i) leave caching space for other flows sharing (part of) the same path, and ii) fairly multiplex contents of different flows among caches of a shared path.\n We compare our algorithm against universal caching and against schemes proposed in the past for Web-Caching architectures, such as Leave Copy Down (LCD). Our results show reduction of up to 20% in server hits, and up to 10% in the number of hops required to hit cached contents, but, most importantly, reduction of cache-evictions by an order of magnitude in comparison to universal caching."}
{"_id": "8427c741834ccea874aa9e7be85c412d9670bfa2", "title": "A survey on content-centric technologies for the current Internet: CDN and P2P solutions", "text": "0140-3664/$ see front matter 2011 Elsevier B.V. A doi:10.1016/j.comcom.2011.10.005 q This work was partially funded by the European AWARE RECOGNITION (257756), FIRE-SCAMPI (25841 \u21d1 Tel.: +39 0503153269. E-mail address: a.passarella@iit.cnr.it One of the most striking properties of the Internet is its flexibility to accommodate features it was not conceived for. Among the most significant examples, in this survey we consider the transition of the Internet from a reliable fault-tolerant network for host-to-host communication to a content-centric network, i.e. a network mostly devoted to support efficient generation, sharing and access to content. We survey this research area according to a top-down approach. We present a conceptual framework that encompasses the key building blocks required to support content-centric networking in the Internet. Then we describe in detail the two most important types of content-centric Internet technologies, i.e., Content-Delivery Networks (CDNs) and P2P systems. For each of them, we show how they cover the key building blocks. We then identify the functional components of CDN and P2P content management solutions, and discuss the main solutions proposed in the literature for each of them. We consider different types of content (both real time and non real time), and different networking environments (fixed, mobile, . . .). Finally, we also discuss the main recent research trends focused on how to design the Future Internet as a native content-centric network. 2011 Elsevier B.V. All rights reserved."}
{"_id": "05cf4d77e9810d40d3e2ca360ce5fb8e8ea98f10", "title": "A data-oriented (and beyond) network architecture", "text": "The Internet has evolved greatly from its original incarnation. For instance, the vast majority of current Internet usage is data retrieval and service access, whereas the architecture was designed around host-to-host applications such as telnet and ftp. Moreover, the original Internet was a purely transparent carrier of packets, but now the various network stakeholders use middleboxes to improve security and accelerate applications. To adapt to these changes, we propose the Data-Oriented Network Architecture (DONA), which involves a clean-slate redesign of Internet naming and name resolution."}
{"_id": "6693b715117f5a16018d0718e58be7022c326ac1", "title": "Computational intelligence and tower defence games", "text": "The aim of this paper is to introduce the use of Tower Defence (TD) games in Computational Intelligence (CI) research. We show how TD games can provide an important test-bed for the often under-represented casual games research area. Additionally, the use of CI in the TD games has the potential to create a more interesting, interactive and ongoing game experience for casual gamers. We present a definition of the current state and development of TD games, and include a classification of TD game components. We then describe some potential ways CI can be used to augment the TD experience. Finally, a prototype TD game based on experience-driven procedural content generation is presented."}
{"_id": "f6793ab7c1669f81f64212a20f407374882f2130", "title": "The Validity of Presence as a Reliable Human Performance Metric in Immersive Environments", "text": "Advances in interactive media technologies have reached the stage where users need no longer to act as passive observers. As the technology evolves further we will be able to engage in computer based synthetic performances that mimic events in the real or natural world. Performances in this context refers to plays, films, computer based simulations, engineering mock-ups, etc. At a more fundamental level, we are dealing with complex interactions of the human sensory and perceptual systems with a stimulus environment (covering visual, auditory and other components). Authors or rather directors of these new media environments will exploit computer generated perceptual illusions to convince the person (participant) who is interacting with the system that they are present in the environment. Taken to the limit the participant would not be able to distinguish between being in a real or natural environment compared with being in a purely synthetic environment. Unfortunately, the performance of current technology falls considerably short of even getting even close to providing a faithful perceptual illusion of the real world. Eventually, VR authoring tools will become more sophisticated and it will be possible to produce realistic virtual environments that match more closely with the real world. Some VR practitioners have attempted to collect data that they purport to represent the degree of presence a user experiences. Unfortunately, we do not yet have an agreed definition of presence and seem undecided whether it is a meaningful metric in its own right. This paper will examine the dangers of attempting to measure the 'presence' of a VR system as a one-dimensional parameter. The question of whether presence is a valid measure of a VR system will also be addressed. An important differentiating characteristic of VR systems compared with other human-computer interfaces are their ability to create a sense of 'being-in' the computer generated environment. Other forms of media such as film and TV are also known to induce a sense of 'being-in' the environment. Some VR practitioners have tended to use the term presence to describe this effect This means that people who are engaged in the virtual environment feel as though they are actually part of the virtual environment. In order to be perceptually present in a virtual environment it is first important to understand what it means to be present in the real-world. A good example of real world experience is that of a roller coaster ride. The sensory \u2026"}
{"_id": "82bc67442f4d5353d7dde5c4345f2ec041a7de93", "title": "Using the opinion leaders in social networks to improve the cold start challenge in recommender systems", "text": "The increasing volume of information about goods and services has been growing confusion for online buyers in cyberspace and this problem still continues. One of the most important ways to deal with the information overload is using a system called recommender system. The task of a recommender system is to offer the most appropriate and the nearest product to the user's demands and needs. In this system, one of the main problems is the cold start challenge. This problem occurs when a new user logs on and because there is no sufficient information available in the system from the user, the system won't be able to provide appropriate recommendation and the system error will rise. In this paper, we propose to use a new measurement called opinion leaders to alleviate this problem. Opinion leader is a person that his opinion has an impact on the target user. As a result, in the case of a new user logging in and the user \u2014 item's matrix sparseness, we can use the opinion of opinion leaders to offer the appropriate recommendation for new users and thereby increase the accuracy of the recommender system. The results of several conducted tests showed that opinion leaders combined with recommender systems will effectively reduce the recommendation errors."}
{"_id": "48a0ef1b5a44e50389d88330036d15231a351a52", "title": "Anomaly detection using DBSCAN clustering technique for traffic video surveillance", "text": "Detecting anomalies such as rule violations, accidents, unusual driving and other suspicious action increase the need for automatic analysis in Traffic Video Surveillance (TVS). Most of the works in Traffic rule violation systems are based on probabilistic methods of classification for detecting the events as normal and abnormal. This paper proposes an un-supervised clustering technique namely Novel Anomaly Detection-Density Based Spatial Clustering of Applications with Noise (NAD-DBSCAN) which clusters the trajectories of moving objects of varying sizes and shapes. A trajectory is said to be abnormal if the event that never fit with the trained model. Epsilon (Eps) and Minimum Points (MinPts) are essential parameters for dynamically calculating the sum of clusters for a data point. The proposed system is validated using benchmark traffic dataset and found to perform accurately in detecting anomalies."}
{"_id": "3f29bcf67ddaa3991ad2c15046c51f6a309d01f8", "title": "Security Attacks and Solutions in Clouds Kazi Zunnurhain", "text": "Cloud computing offers great potential to improve productivity and reduce costs, but at the same time it possesses many new security risks. In this paper we identify the possible security attacks on clouds including: Wrapping attacks, Malware-Injection attacks, Flooding attacks, Browser attacks, and also Accountability checking problems. We identify the root causes of these attacks and propose specific"}
{"_id": "ba1962830434448b3cc4fd63eb561fd3816febc5", "title": "What you look at is what you get: eye movement-based interaction techniques", "text": "In seeking hitherto-unused methods by which users and computers can communicate, we investigate the usefulness of eye movements as a fast and convenient auxiliary user-to-computer communication mode. The barrier to exploiting this medium has not been eye-tracking technology but the study of interaction techniques that incorporate eye movements into the user-computer dialogue in a natural and unobtrusive way. This paper discusses some of the human factors and technical considerations that arise in trying to use eye movements as an input medium, describes our approach and the first eye movement-based interaction techniques that we have devised and implemented in our laboratory, and reports our experiences and observations on them."}
{"_id": "e568d744d48744938f94eeabf7e79d3a8764435e", "title": "Depression Scale Recognition from Audio, Visual and Text Analysis", "text": "Depression is a major mental health disorder that is rapidly affecting lives worldwide. Depression not only impacts emotional but also physical and psychological state of the person. Its symptoms include lack of interest in daily activities, feeling low, anxiety, frustration, loss of weight and even feeling of selfhatred. This report describes work done by us for Audio Visual Emotion Challenge (AVEC) 2017 during our second year BTech summer internship. With the increase in demand to detect depression automatically with the help of machine learning algorithms, we present our multimodal feature extraction and decision level fusion approach for the same. Features are extracted by processing on the provided Distress Analysis Interview Corpus-Wizard of Oz (DAIC-WOZ) database. Gaussian Mixture Model (GMM) clustering and Fisher vector approach were applied on the visual data; statistical descriptors on gaze, pose; low level audio features and head pose and text features were also extracted. Classification is done on fused as well as independent features using Support Vector Machine (SVM) and neural networks. The results obtained were able to cross the provided baseline on validation data set by 17% on audio features and 24.5% on video features. Keywords\u2014AVEC 2017, SVM, Depression, Neural network, RMSE, MAE, fusion, speech processing."}
{"_id": "414573bcd1849b4d3ec8a06dd4080b62f1db5607", "title": "Attacking DDoS at the Source", "text": "Distributed denial-of-service (DDoS) attacks present an Internet-wide threat. We propose D-WARD, a DDoS defense system deployed at source-end networks that autonomously detects and stops attacks originating from these networks. Attacks are detected by the constant monitoring of two-way traffic flows between the network and the rest of the Internet and periodic comparison with normal flow models. Mismatching flows are rate-limited in proportion to their aggressiveness. D-WARD offers good service to legitimate traffic even during an attack, while effectively reducing DDoS traffic to a negligible level. A prototype of the system has been built in a Linux router. We show its effectiveness in various attack scenarios, discuss motivations for deployment, and describe associated costs."}
{"_id": "b84baac3305531e8c548f496815d47e21e13bb0c", "title": "PAGE: A Partition Aware Engine for Parallel Graph Computation", "text": "Graph partition quality affects the overall performance of parallel graph computation systems. The quality of a graph partition is measured by the balance factor and edge cut ratio. A balanced graph partition with small edge cut ratio is generally preferred since it reduces the expensive network communication cost. However, according to an empirical study on Giraph, the performance over well partitioned graph might be even two times worse than simple random partitions. This is because these systems only optimize for the simple partition strategies and cannot efficiently handle the increasing workload of local message processing when a high quality graph partition is used. In this paper, we propose a novel partition aware graph computation engine named PAGE, which equips a new message processor and a dynamic concurrency control model. The new message processor concurrently processes local and remote messages in a unified way. The dynamic model adaptively adjusts the concurrency of the processor based on the online statistics. The experimental evaluation demonstrates the superiority of PAGE over the graph partitions with various qualities."}
{"_id": "f9bfb77f6cf1d175afe8745d32b187c602b61bfc", "title": "Making flexible magnetic aerogels and stiff magnetic nanopaper using cellulose nanofibrils as templates.", "text": "Nanostructured biological materials inspire the creation of materials with tunable mechanical properties. Strong cellulose nanofibrils derived from bacteria or wood can form ductile or tough networks that are suitable as functional materials. Here, we show that freeze-dried bacterial cellulose nanofibril aerogels can be used as templates for making lightweight porous magnetic aerogels, which can be compacted into a stiff magnetic nanopaper. The 20-70-nm-thick cellulose nanofibrils act as templates for the non-agglomerated growth of ferromagnetic cobalt ferrite nanoparticles (diameter, 40-120 nm). Unlike solvent-swollen gels and ferrogels, our magnetic aerogel is dry, lightweight, porous (98%), flexible, and can be actuated by a small household magnet. Moreover, it can absorb water and release it upon compression. Owing to their flexibility, high porosity and surface area, these aerogels are expected to be useful in microfluidics devices and as electronic actuators."}
{"_id": "b3823c1693fd8c03c5f8fd694908f314051d3a72", "title": "When the face reveals what words do not: facial expressions of emotion, smiling, and the willingness to disclose childhood sexual abuse.", "text": "For survivors of childhood sexual abuse (CSA), verbal disclosure is often complex and painful. The authors examined the voluntary disclosure-nondisclosure of CSA in relation to nonverbal expressions of emotion in the face. Consistent with hypotheses derived from recent theorizing about the moral nature of emotion, CSA survivors who did not voluntarily disclose CSA showed greater facial expressions of shame, whereas CSA survivors who voluntarily disclosed CSA expressed greater disgust. Expressions of disgust also signaled sexual abuse accompanied by violence. Consistent with recent theorizing about smiling behavior, CSA nondisclosers made more polite smiles, whereas nonabused participants expressed greater genuine positive emotion. Discussion addressed the implications of these findings for the study of disclosure of traumatic events, facial expression, and the links between morality and emotion."}
{"_id": "0c6c7583687c245aedbe894edf63541fdda122ea", "title": "OpinionFinder: A System for Subjectivity Analysis", "text": "Vancouver, October 2005. OpinionFinder: A system for subjectivity analysis Theresa Wilson\u2021, Paul Hoffmann\u2021, Swapna Somasundaran\u2020, Jason Kessler\u2020, Janyce Wiebe\u2020\u2021, Yejin Choi\u00a7, Claire Cardie\u00a7, Ellen Riloff\u2217, Siddharth Patwardhan\u2217 \u2021Intelligent Systems Program, University of Pittsburgh, Pittsburgh, PA 15260 \u2020Department of Computer Science, University of Pittsburgh, Pittsburgh, PA 15260 \u00a7Department of Computer Science, Cornell University, Ithaca, NY 14853 \u2217School of Computing, University of Utah, Salt Lake City, UT 84112"}
{"_id": "8c46b80a009c28fb1b6d76c0d424adb4139dfd5a", "title": "Model-Based Quantitative Network Security Metrics: A Survey", "text": "Network security metrics (NSMs) based on models allow to quantitatively evaluate the overall resilience of networked systems against attacks. For that reason, such metrics are of great importance to the security-related decision-making process of organizations. Considering that over the past two decades several model-based quantitative NSMs have been proposed, this paper presents a deep survey of the state-of-the-art of these proposals. First, to distinguish the security metrics described in this survey from other types of security metrics, an overview of security metrics, in general, and their classifications is presented. Then, a detailed review of the main existing model-based quantitative NSMs is provided, along with their advantages and disadvantages. Finally, this survey is concluded with an in-depth discussion on relevant characteristics of the surveyed proposals and open research issues of the topic."}
{"_id": "628db24b86a59af1546ee501695496918caa8dba", "title": "Study of wheel slip and traction forces in differential drive robots and slip avoidance control strategy", "text": "The effect of wheel slip in differential drive robots is investigated in this paper. We consider differential drive robots with two driven wheels and ball-type caster wheels that are used to provide balance and support to the mobile robot. The limiting values of traction forces for slip and no slip conditions are dependent on wheel-ground kinetic and static friction coefficients. The traction forces are used to determine the fraction of input torque that provides robot motion and this is used to calculate the actual position of the robot under slip conditions. The traction forces under no slip conditions are used to determine the limiting value of the wheel torque above which the wheel slips. This limiting torque value is used to set a saturation limit for the input torque to avoid slip. Simulations are conducted to evaluate the behavior of the robot during slip and no slip conditions. Experiments are conducted under similar slip and no slip conditions using a custom built differential drive mobile robot with one caster wheel to validate the simulations. Experiments are also conducted with the torque limiting strategy. Results from model simulations and experiments are presented and discussed."}
{"_id": "5e7fbb0a42d933587a04bb31cad3ff9c95b79924", "title": "Design and fabrication of W-band SIW horn antenna using PCB process", "text": "A W-band SIW horn antenna is designed and fabricated using PCB process. Measured S11 reaches -15 dB at 84 GHz with bandwidth of 1 GHz and the maximum simulated gain is 9 dBi. The antenna is loaded with dielectric to increase gain and the effect of different loading lengths on gain is studied. The antenna is fed using a WR10 standard waveguide through a coupling slot in the SIW. The effect of changing the slot width on the return loss is studied. Good agreement is achieved between measured and simulated results. The proposed antenna is suitable for medical imaging and air traffic radar."}
{"_id": "0c931bb17d9e5d4dc12e2662a3571535841ab534", "title": "Aligning 3D models to RGB-D images of cluttered scenes", "text": "The goal of this work is to represent objects in an RGB-D scene with corresponding 3D models from a library. We approach this problem by first detecting and segmenting object instances in the scene and then using a convolutional neural network (CNN) to predict the pose of the object. This CNN is trained using pixel surface normals in images containing renderings of synthetic objects. When tested on real data, our method outperforms alternative algorithms trained on real data. We then use this coarse pose estimate along with the inferred pixel support to align a small number of prototypical models to the data, and place into the scene the model that fits best. We observe a 48% relative improvement in performance at the task of 3D detection over the current state-of-the-art [34], while being an order of magnitude faster."}
{"_id": "f9907e97fd86f478f4da4c2a6e96b4f3192b4f1c", "title": "Adaptive Scale Selection for Multiscale Segmentation of Satellite Images", "text": "With dramatically increasing of the spatial resolution of satellite imaging sensors, object-based image analysis (OBIA) has been gaining prominence in remote sensing applications. Multiscale image segmentation is a prerequisite step that splits an image into hierarchical homogeneous segmented objects for OBIA. However, scale selection remains a challenge in multiscale segmentation. In this study, we presented an adaptive approach for defining and estimating the optimal scale in the multiscale segmentation process. Central to our method is the combined use of image features from segmented objects and prior knowledge from historical thematic maps in a top-down segmentation procedure. Specifically, the whole image was first split into segmented objects, with the largest scale in a presupposition segmentation scale sequence. Second, based on segmented object features and prior knowledge in the local region of thematic maps, we calculated complexity values for each segmented object. Third, if the complexity values of an object were large enough, this object would be further split into multiple segmented objects with a smaller scale in the scale sequence. Then, in the similar manner, complex segmented objects were split into the simplest objects iteratively. Finally, the final segmentation result was obtained and evaluated. We have applied this method on a GF-1 multispectral satellite image and a ZY-3 multispectral satellite image to produce multiscale segmentation maps and further classification maps, compared with the state-of-the-art and the traditional mean shift algorithm. The experimental results illustrate that the proposed method is practically helpful and efficient to produce the appropriate segmented image objects with optimal scales."}
{"_id": "19d73217ada6b093594648d55ec0b5991a61eec1", "title": "Moral dilemmas in cognitive neuroscience of moral decision-making: A principled review", "text": "Moral dilemma tasks have been a much appreciated experimental paradigm in empirical studies on moral cognition for decades and have, more recently, also become a preferred paradigm in the field of cognitive neuroscience of moral decision-making. Yet, studies using moral dilemmas suffer from two main shortcomings: they lack methodological homogeneity which impedes reliable comparisons of results across studies, thus making a metaanalysis manifestly impossible; and second, they overlook control of relevant design parameters. In this paper, we review from a principled standpoint the studies that use moral dilemmas to approach the psychology of moral judgment and its neural underpinnings. We present a systematic review of 19 experimental design parameters that can be identified in moral dilemmas. Accordingly, our analysis establishes a methodological basis for the required homogeneity between studies and suggests the consideration of experimental aspects that have not yet received much attention despite their relevance."}
{"_id": "7f1ea7e6eb5fb4c7ecc34e26250abea775c8aea2", "title": "Interlingua based Sanskrit-English machine translation", "text": "This paper is based on the work towards developing a machine translation system for Sanskrit to English. An interlingua based machine translation system in Paninian framework was developed in this work. As lexical functional grammar can handle semantic information along with syntactic analysis, the proposed system uses lexical functional grammar in different levels of analysis phase. In this system the given Sanskrit text is first converted to an intermediate notation called interlingua. This notation is then used for mapping to the target language, English, and generates the translated output. The proposed system works in the translation of simple and complex sentences in the given corpus. The karaka analysis system of Panini is used for the semantic analysis."}
{"_id": "b9bb8d42cc70242157e3e8fa36abcb978da7cd1f", "title": "Development and field test of novel two-wheeled UAV for bridge inspections", "text": "This paper presents the development and field test of a novel unmanned aerial vehicle (UAV) for bridge inspections. The proposed UAV, which consists of a quadrotor, a cylindrical cage installed in it, and two spokeless wheels freely rotating around the cage, can climb and run on the bridge surface. Its structure improves durability and reduces air resistance compared with conventional UAVs with a cage or wheels. These advantages expand the uses of the UAV in various fields. This paper evaluates the effectiveness of the proposed UAV in real-world bridge inspection scenarios. We tested basic locomotion and measured air resistances. Experimental results on bridges demonstrated the ability to inspect various locations on bridges that are difficult for human inspectors to access."}
{"_id": "677d77eb7ae64ce8ada6e52ddddc7a538ea64b45", "title": "An 87 fJ/conversion-step 12 b 10 MS/s SAR ADC using a minimum number of unit capacitors", "text": "This work proposes a 12 b 10 MS/s 0.11 lm CMOS successive-approximation register ADC based on a C-R hybrid DAC for low-power sensor applications. The proposed C-R DAC employs a 2-step split-capacitor array of upper seven bits and lower five bits to optimize power consumption and chip area at the target speed and resolution. A VCM-based switching method for the most significant bit and reference voltage segments from an insensitive R-string for the last two least significant bits minimize the number of unit capacitors required in the C-R hybrid DAC. The comparator accuracy is improved by an open-loop offset cancellation technique in the first-stage pre-amp. The prototype ADC in a 0.11 lm CMOS process demonstrates the measured differential nonlinearity and integral nonlinearity within 1.18 LSB and 1.42 LSB, respectively. The ADC shows a maximum signal-to-noise-and-distortion ratio of 63.9 dB and a maximum spurious-free dynamic range of 77.6 dB at 10 MS/s. The ADC with an active die area of 0.34 mm consumes 1.1 mW at 1.0 V and 10 MS/s, corresponding to a figure-of-merit of 87 fJ/conversion-step."}
{"_id": "40f46e5d5e729965adb7b5613a625040097d07eb", "title": "Assessment of degree of risk from sources of microbial contamination in cleanrooms ; 3 : Overall application", "text": "The European Commission and the Food and Drug Administration in the USA suggest that risk management and assessment methods should be used to identify and control sources of microbial contamination1,2. A risk management method has been described by Whyte3,4 that is based on the Hazard Analysis and Critical Control Point (HACCP) system but reinterpreted for use in cleanrooms, and called Risk Management of Contamination (RMC). This method is also described in the PHSS Technical Monograph No 145 and has the following steps."}
{"_id": "d6b9f66bfd1a4c5d01834dcd9e68752d15892067", "title": "Mindfulness and Meditation Practice as Moderators of the Relationship between Age and Subjective Wellbeing among Working Adults", "text": "Promoting the health and wellbeing of an aging and age-diverse workforce is a timely and growing concern to organizations and to society. To help address this issue, we investigated the relationship between age and subjective wellbeing by examining the moderating role of mindfulness in two independent studies. In study 1, trait mindfulness was examined as a moderator of the relationship between age and vitality and between age and work-family balance in a sample of 240 participants. In study 2, data from the second phase of the Midlife Development in the USA (MIDUS II) project was used to investigate mindful-practice (i.e., meditation) as a moderator of the relationships between age and multiple measures of subjective wellbeing (life satisfaction, psychological health, physical health) in a sample of 2477 adults. Results revealed that mindfulness moderates the relationship between age and multiple indicators of subjective wellbeing. In addition, study 2 results indicated that individuals who reported that they mediated often combined with those who reported they meditated a lot reported better physical health than those who reported that they never meditate. The findings suggest that cultivating mindfulness can be a proactive tool for fostering health and subjective wellbeing in an aging and agediverse workforce."}
{"_id": "482112e839684c25166f16ed91153683158da31e", "title": "A Survey of Intrusion Detection Techniques", "text": "Intrusion detection is an alternative to the situation of the security violation.Security mechanism of the network is necessary against the threat to the system. There are two types of intruders: external intruders, who are unauthorized users of the machines they attack, and internal intruders, who have permission to access the system with some restrictions. This paper describes a brief overview of various intrusion detection techniques such as fuzzy logic, neural network, pattern recognition methods, genetic algorithms and related techniques is presented. Among the several soft computing paradigms, fuzzy rule-based classifiers, decision trees, support vector machines, linear genetic programming is model fast and efficient intrusion detection systems. KeywordsIntroduction, intrusion detection methods, misuse detection techniques, anomaly detection techniques, genetic algorithms."}
{"_id": "705a24f4e1766a44bbba7cf335f74229ed443c7b", "title": "Face Recognition with Local Binary Patterns, Spatial Pyramid Histograms and Naive Bayes Nearest Neighbor Classification", "text": "Face recognition algorithms commonly assume that face images are well aligned and have a similar pose -- yet in many practical applications it is impossible to meet these conditions. Therefore extending face recognition to unconstrained face images has become an active area of research. To this end, histograms of Local Binary Patterns (LBP) have proven to be highly discriminative descriptors for face recognition. Nonetheless, most LBP-based algorithms use a rigid descriptor matching strategy that is not robust against pose variation and misalignment. We propose two algorithms for face recognition that are designed to deal with pose variations and misalignment. We also incorporate an illumination normalization step that increases robustness against lighting variations. The proposed algorithms use descriptors based on histograms of LBP and perform descriptor matching with spatial pyramid matching (SPM) and Naive Bayes Nearest Neighbor (NBNN), respectively. Our contribution is the inclusion of flexible spatial matching schemes that use an image-to-class relation to provide an improved robustness with respect to intra-class variations. We compare the accuracy of the proposed algorithms against Ahonen's original LBP-based face recognition system and two baseline holistic classifiers on four standard datasets. Our results indicate that the algorithm based on NBNN outperforms the other solutions, and does so more markedly in presence of pose variations."}
{"_id": "18ef666517e8b60cbe00c0f5ec5bd8dd18b936f8", "title": "Energy-Efficient Resource Allocation for Mobile-Edge Computation Offloading", "text": "Mobile-edge computation offloading (MECO) off-loads intensive mobile computation to clouds located at the edges of cellular networks. Thereby, MECO is envisioned as a promising technique for prolonging the battery lives and enhancing the computation capacities of mobiles. In this paper, we study resource allocation for a multiuser MECO system based on time-division multiple access (TDMA) and orthogonal frequency-division multiple access (OFDMA). First, for the TDMA MECO system with infinite or finite cloud computation capacity, the optimal resource allocation is formulated as a convex optimization problem for minimizing the weighted sum mobile energy consumption under the constraint on computation latency. The optimal policy is proved to have a threshold-based structure with respect to a derived offloading priority function, which yields priorities for users according to their channel gains and local computing energy consumption. As a result, users with priorities above and below a given threshold perform complete and minimum offloading, respectively. Moreover, for the cloud with finite capacity, a sub-optimal resource-allocation algorithm is proposed to reduce the computation complexity for computing the threshold. Next, we consider the OFDMA MECO system, for which the optimal resource allocation is formulated as a mixed-integer problem. To solve this challenging problem and characterize its policy structure, a low-complexity sub-optimal algorithm is proposed by transforming the OFDMA problem to its TDMA counterpart. The corresponding resource allocation is derived by defining an average offloading priority function and shown to have close-to-optimal performance in simulation."}
{"_id": "32093be6d9bf5038adb5b5b8b8e8a9b62001643c", "title": "Multi-Label Classification: An Overview", "text": "Multi-label classification methods are increasingly required by modern applications, such as protein function classification, music categorization, and semantic scene classification. This article introduces the task of multi-label classification, organizes the sparse related literature into a structured presentation and performs comparative experimental results of certain multi-label classification methods. It also contributes the definition of concepts for the quantification of the multi-label nature of a data set."}
{"_id": "3b840207422285bf625220395b18a9f75f55acef", "title": "Offloading in Mobile Cloudlet Systems with Intermittent Connectivity", "text": "The emergence of mobile cloud computing enables mobile users to offload applications to nearby mobile resource-rich devices (i.e., cloudlets) to reduce energy consumption and improve performance. However, due to mobility and cloudlet capacity, the connections between a mobile user and mobile cloudlets can be intermittent. As a result, offloading actions taken by the mobile user may fail (e.g., the user moves out of communication range of cloudlets). In this paper, we develop an optimal offloading algorithm for the mobile user in such an intermittently connected cloudlet system, considering the users' local load and availability of cloudlets. We examine users' mobility patterns and cloudlets' admission control, and derive the probability of successful offloading actions analytically. We formulate and solve a Markov decision process (MDP) model to obtain an optimal policy for the mobile user with the objective to minimize the computation and offloading costs. Furthermore, we prove that the optimal policy of the MDP has a threshold structure. Subsequently, we introduce a fast algorithm for energy-constrained users to make offloading decisions. The numerical results show that the analytical form of the successful offloading probability is a good estimation in various mobility cases. Furthermore, the proposed MDP offloading algorithm for mobile users outperforms conventional baseline schemes."}
{"_id": "02cbb22e2011938d8d2c0a42b175e96d59bb377f", "title": "Above the Clouds: A Berkeley View of Cloud Computing", "text": "Cloud Computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service and shaping the way IT hardware is designed and purchased. Developers with innovative ideas for new Internet services no longer require the large capital outlays in hardware to deploy their service or the human expense to operate it. They need not be concerned about overprovisioning for a service whose popularity does not meet their predictions, thus wasting costly resources, or underprovisioning for one that becomes wildly popular, thus missing potential customers and revenue. Moreover, companies with large batch-oriented tasks can get results as quickly as their programs can scale, since using 1000 servers for one hour costs no more than using one server for 1000 hours. This elasticity of resources, without paying a premium for large scale, is unprecedented in the history of IT. Cloud Computing refers to both the applications delivered as services over the Internet and the hardware and systems software in the datacenters that provide those services. The services themselves have long been referred to as Software as a Service (SaaS). The datacenter hardware and software is what we will call a Cloud. When a Cloud is made available in a pay-as-you-go manner to the general public, we call it a Public Cloud; the service being sold is Utility Computing. We use the term Private Cloud to refer to internal datacenters of a business or other organization, not made available to the general public. Thus, Cloud Computing is the sum of SaaS and Utility Computing, but does not include Private Clouds. People can be users or providers of SaaS, or users or providers of Utility Computing. We focus on SaaS Providers (Cloud Users) and Cloud Providers, which have received less attention than SaaS Users. From a hardware point of view, three aspects are new in Cloud Computing."}
{"_id": "926c63eb0446bcd297df4efc58b803917815a912", "title": "2.4 to 61 GHz Multiband Double-Directional Propagation Measurements in Indoor Office Environments", "text": "This paper presents the details and results of double-directional propagation measurements carried out in two indoor office environments: a semi-open, cubicle-type office space and a more traditional work environment with closed offices. The measurements cover seven frequency bands from 2.4 to 61 GHz, permitting the propagation characteristics to be compared over a wide range of candidate radio frequencies for next-generation mobile wireless systems, including ultra high frequency and millimeter-wave bands. A novel processing algorithm is introduced for the expansion of multiband measurement data into sets of discrete multipath components. Based on the resulting multipath parameter estimates, models are presented for frequency-dependent path loss, shadow fading, copolarization ratio, delay spread, and angular spreads, along with their interfrequency correlations. Our results indicate a remarkably strong consistency in multipath structure over the entire frequency range considered."}
{"_id": "fb8704210358d0cbf5113c97e1f9f9f03f67e6fc", "title": "A review of content-based image retrieval systems in medical applications - clinical benefits and future directions", "text": "Content-based visual information retrieval (CBVIR) or content-based image retrieval (CBIR) has been one on the most vivid research areas in the field of computer vision over the last 10 years. The availability of large and steadily growing amounts of visual and multimedia data, and the development of the Internet underline the need to create thematic access methods that offer more than simple text-based queries or requests based on matching exact database fields. Many programs and tools have been developed to formulate and execute queries based on the visual or audio content and to help browsing large multimedia repositories. Still, no general breakthrough has been achieved with respect to large varied databases with documents of differing sorts and with varying characteristics. Answers to many questions with respect to speed, semantic descriptors or objective image interpretations are still unanswered. In the medical field, images, and especially digital images, are produced in ever-increasing quantities and used for diagnostics and therapy. The Radiology Department of the University Hospital of Geneva alone produced more than 12,000 images a day in 2002. The cardiology is currently the second largest producer of digital images, especially with videos of cardiac catheterization ( approximately 1800 exams per year containing almost 2000 images each). The total amount of cardiologic image data produced in the Geneva University Hospital was around 1 TB in 2002. Endoscopic videos can equally produce enormous amounts of data. With digital imaging and communications in medicine (DICOM), a standard for image communication has been set and patient information can be stored with the actual image(s), although still a few problems prevail with respect to the standardization. In several articles, content-based access to medical images for supporting clinical decision-making has been proposed that would ease the management of clinical data and scenarios for the integration of content-based access methods into picture archiving and communication systems (PACS) have been created. This article gives an overview of available literature in the field of content-based access to medical image data and on the technologies used in the field. Section 1 gives an introduction into generic content-based image retrieval and the technologies used. Section 2 explains the propositions for the use of image retrieval in medical practice and the various approaches. Example systems and application areas are described. Section 3 describes the techniques used in the implemented systems, their datasets and evaluations. Section 4 identifies possible clinical benefits of image retrieval systems in clinical practice as well as in research and education. New research directions are being defined that can prove to be useful. This article also identifies explanations to some of the outlined problems in the field as it looks like many propositions for systems are made from the medical domain and research prototypes are developed in computer science departments using medical datasets. Still, there are very few systems that seem to be used in clinical practice. It needs to be stated as well that the goal is not, in general, to replace text-based retrieval methods as they exist at the moment but to complement them with visual search tools."}
{"_id": "50c3bd96304473c0e5721fcaefe20a44a7bcd691", "title": "Wavelet based real-time smoke detection in video", "text": "A method for smoke detection in video is proposed. It is assumed the camera monitoring the scene is stationary. Since the smoke is semi-transparent, edges of image frames start loosing their sharpness and this leads to a decrease in the high frequency content of the image. To determine the smoke in the field of view of the camera, the background of the scene is estimated and decrease of high frequency energy of the scene is monitored using the spatial wavelet transforms of the current and the background images. Edges of the scene are especially important because they produce local extrema in the wavelet domain. A decrease in values of local extrema is also an indicator of smoke. In addition, scene becomes grayish when there is smoke and this leads to a decrease in chrominance values of pixels. Periodic behavior in smoke boundaries and convexity of smoke regions are also analyzed. All of these clues are combined to reach a final decision."}
{"_id": "d242e2c62b0e533317a6f667ce8c6caa55648efe", "title": "Short-Circuit Current of Wind Turbines With Doubly Fed Induction Generator", "text": "The short-circuit current contribution of wind turbines has not received much attention so far. This paper considers the short-circuit behavior, especially the short-circuit current of wind turbines with a doubly fed induction generator. Mostly, these wind turbines have a crowbar to protect the power electronic converter that is connected to the rotor windings of the induction generator. First, the maximum value of the short-circuit current of a conventional induction machine is determined. The differences between a crowbar-protected doubly fed induction generator and a conventional induction generator are highlighted and approximate equations for the maximum short-circuit current of a doubly fed induction generator are determined. The values obtained in this way are compared to the values obtained from time domain simulations. The differences are less then 15%"}
{"_id": "5d0a48e2b4fe4b9150574cedf867d73787f30e48", "title": "Game Theory in Wireless Networks: A Tutorial", "text": "The behavior of a given wireless device may affect the communication capabilities of a neighboring device, notably because the radio communication channel is usually shared in wireless networks. In this tutorial, we carefully explain how situations of this kind can be modelled by making use of game theory. By leveraging on four simple running examples, we introduce the most fundamental concepts of non-cooperative game theory. This approach should help students and scholars to quickly master this fascinating analytical tool without having to read the existing lengthy, economics-oriented books. It should also assist them in modelling problems of their own."}
{"_id": "46c5d41a1a111d51eeab06b401b74b404c8653ea", "title": "Cryptography and cryptanalysis for embedded systems", "text": "A growing number of devices of daily use are equipped with computing capabilities. Today, already more than 98 % of all manufactured microprocessors are employed in embedded applications , leaving less than 2 % to traditional computers. Many of these embedded devices are enabled to communicate amongst each other and form networks. A side effect of the rising interconnectedness is a possible vulnerability of these embedded systems. Attacks that have formerly been restricted to PCs can suddenly be launched against cars, tickets, ID cards or even pacemakers. At the same time the security awareness of users and manufacturers of such systems is much lower than in classical PC environments. This renders security one key aspect of embedded systems design and for most pervasive computing applications. As embedded systems are usually deployed in large numbers, costs are a main concern of system developers. Hence embedded security solutions have to be cheap and efficient. Many security services such as digital signatures can only be realized by public key cryptography. Yet, public key schemes are in terms of computation orders of magnitude more expensive than private key cryptosystems. At the same time the prevailing schemes rely on very similar security assumptions. If one scheme gets broken, almost all cryptosystems employing asymmetric cryptography become useless. The first part of this work explores alternatives to the prevailing public key cryptosystems. Two alternative signature schemes and one public key encryption scheme from the family of post quantum cryptosystems are explored. Their security relies on different assumptions so that a break of one of the prevailing schemes does not affect the security of the studied alternatives. The main focus lies on the implementational aspects of these schemes for embedded systems. One actual outcome is that, contrary to common belief, the presented schemes provide similar and in some cases even better performance than the prevailing schemes. The presented solutions include a highly scalable software implementation of the Merkle signature scheme aimed at low-cost microprocessors. For signatures in hardware an FPGA framework for implementing a family of signature schemes based on multivariate quadratic equations is presented. Depending on the chosen scheme, multivariate quadratic signatures show better performance than elliptic curves in terms of area consumption and performance. The McEliece cryptosystem is an alternative public key encryption scheme which was believed to be infeasible on embedded platforms due to its large key size. This work shows that by applying certain \u2026"}
{"_id": "32137062567626ca0f466c7a2272a4368f093800", "title": "Recall of childhood trauma: a prospective study of women's memories of child sexual abuse.", "text": "One hundred twenty-nine women with previously documented histories of sexual victimization in childhood were interviewed and asked detailed questions about their abuse histories to answer the question \"Do people actually forget traumatic events such as child sexual abuse, and if so, how common is such forgetting?\" A large proportion of the women (38%) did not recall the abuse that had been reported 17 years earlier. Women who were younger at the time of the abuse and those who were molested by someone they knew were more likely to have no recall of the abuse. The implications for research and practice are discussed. Long periods with no memory of abuse should not be regarded as evidence that the abuse did not occur."}
{"_id": "b6ca785bc7fa5b803c1c9a4f80b6630551e7a261", "title": "Blind Identification of Underdetermined Mixtures by Simultaneous Matrix Diagonalization", "text": "In this paper, we study simultaneous matrix diagonalization-based techniques for the estimation of the mixing matrix in underdetermined independent component analysis (ICA). This includes a generalization to underdetermined mixtures of the well-known SOBI algorithm. The problem is reformulated in terms of the parallel factor decomposition (PARAFAC) of a higher-order tensor. We present conditions under which the mixing matrix is unique and discuss several algorithms for its computation."}
{"_id": "0c9c3f948eda8fb339e76c6612ca4bd36244efd7", "title": "The Internet of Things: A Review of Enabled Technologies and Future Challenges", "text": "The Internet of Things (IoT) is an emerging classical model, envisioned as a system of billions of small interconnected devices for posing the state-of-the-art findings to real-world glitches. Over the last decade, there has been an increasing research concentration in the IoT as an essential design of the constant convergence between human behaviors and their images on Information Technology. With the development of technologies, the IoT drives the deployment of across-the-board and self-organizing wireless networks. The IoT model is progressing toward the notion of a cyber-physical world, where things can be originated, driven, intermixed, and modernized to facilitate the emergence of any feasible association. This paper provides a summary of the existing IoT research that underlines enabling technologies, such as fog computing, wireless sensor networks, data mining, context awareness, real-time analytics, virtual reality, and cellular communications. Also, we present the lessons learned after acquiring a thorough representation of the subject. Thus, by identifying numerous open research challenges, it is presumed to drag more consideration into this novel paradigm."}
{"_id": "d297445dc6b372fb502d26a6f29828b3c160a60e", "title": "THE RESTORATIVE BENEFITS OF NATURE : TOWARD AN INTEGRATIVE", "text": "Directed attention plays an important role in human information processing; its fatigue, in turn, has far\u00ad reaching consequences. Attention Restoration Theory provides an analysis of the kinds of experiences that lead to recovery from such fatigue. Natural environments turn out to be particularly rich in the character\u00ad istics necessary for restorative experiences. An integrative framework is proposed that places both directed attention and stress in the larger context of human-environment relationships. \u00a9 1995 Academic Press Limited"}
{"_id": "5d17292f029c9912507b70f9ac6912fa590ade6d", "title": "Intracranial EEG and human brain mapping", "text": "This review is an attempt to highlight the value of human intracranial recordings (intracranial electro-encephalography, iEEG) for human brain mapping, based on their technical characteristics and based on the corpus of results they have already yielded. The advantages and limitations of iEEG recordings are introduced in detail, with an estimation of their spatial and temporal resolution for both monopolar and bipolar recordings. The contribution of iEEG studies to the general field of human brain mapping is discussed through a review of the effects observed in the iEEG while patients perform cognitive tasks. Those effects range from the generation of well-localized evoked potentials to the formation of large-scale interactions between distributed brain structures, via long-range synchrony in particular. A framework is introduced to organize those iEEG studies according to the level of complexity of the spatio-temporal patterns of neural activity found to correlate with cognition. This review emphasizes the value of iEEG for the study of large-scale interactions, and describes in detail the few studies that have already addressed this point."}
{"_id": "c8859b7ac5f466675c41561a6a299f7078a90df0", "title": "Survey on Hadoop and Introduction to YARN Amogh", "text": "Big Data, the analysis of large quantities of data to gain new insight has become a ubiquitous phrase in recent years. Day by day the data is growing at a staggering rate. One of the efficient technologies that deal with the Big Data is Hadoop, which will be discussed in this paper. Hadoop, for processing large data volume jobs uses MapReduce programming model. Hadoop makes use of different schedulers for executing the jobs in parallel. The default scheduler is FIFO (First In First Out) Scheduler. Other schedulers with priority, pre-emption and non-pre-emption options have also been developed. As the time has passed the MapReduce has reached few of its limitations. So in order to overcome the limitations of MapReduce, the next generation of MapReduce has been developed called as YARN (Yet Another Resource Negotiator). So, this paper provides a survey on Hadoop, few scheduling methods it uses and a brief introduction to YARN. Keywords\u2014Hadoop, HDFS, MapReduce, Schedulers, YARN."}
{"_id": "560e0e58d0059259ddf86fcec1fa7975dee6a868", "title": "Face recognition in unconstrained videos with matched background similarity", "text": "Recognizing faces in unconstrained videos is a task of mounting importance. While obviously related to face recognition in still images, it has its own unique characteristics and algorithmic requirements. Over the years several methods have been suggested for this problem, and a few benchmark data sets have been assembled to facilitate its study. However, there is a sizable gap between the actual application needs and the current state of the art. In this paper we make the following contributions. (a) We present a comprehensive database of labeled videos of faces in challenging, uncontrolled conditions (i.e., \u2018in the wild\u2019), the \u2018YouTube Faces\u2019 database, along with benchmark, pair-matching tests1. (b) We employ our benchmark to survey and compare the performance of a large variety of existing video face recognition techniques. Finally, (c) we describe a novel set-to-set similarity measure, the Matched Background Similarity (MBGS). This similarity is shown to considerably improve performance on the benchmark tests."}
{"_id": "1b43ecec3bad257a773d6fdf3b4efa79e5c15138", "title": "Implementation of high speed radix-10 parallel multiplier using Verilog", "text": "The emerging computational complexities arises the need of fast multiplication unit. The importance of multiplication in various applications necessitates improvement in its design so as to obtain the multiplication result efficiently. Multiplication operation can be improved by reducing the number of partial products to be added and by enhancing the adder unit for obtaining sum. The number of partial products can be reduced by using higher radix multiplication. For better speed applications a radix-10 multiplier is proposed which uses recoded multiplier digits as in conventional parallel multiplier design. The multiplier digits are encoded using Signed Digit (SD) radix-10 method which converts the digit set to {-5 to 5} from {0 to 9} and also generate a sign bit. This recoding leads to minimized calculations as only five multiples are required to be calculated and the negative multiples are obtained using 2's complement approach. The negative multiples are used when sign bit is high. A different approach is availed in the multiples generation of the multiplicand digit and during accumulation of partial product obtained during multiplication procedure. A modified BCD adder is used which eliminates the post correction while calculating the sum of two BCD digits. The modified architecture eliminates the extra recoding logic thereby reducing the area of overall architecture. This paper delivers the design and implementation of 16-Bit multiplication unit. The design entry is done in Verilog Hardware Description Language (HDL) and simulated using ISIM Simulator. It is synthesized and implemented using Xilinx ISE 12.2. Synthesis results have shown that 20.3% reduction in 4-Input LUTs and 20.4% reduction in the number of slices is observed in the modified methodology. Further 11.5% reduction of maximum combinational path delay is also observed in the modified architecture, thereby leading to high speed multiplication for VLSI applications."}
{"_id": "38919649ae3fd207b96b62e95b3c8c8e69635c7f", "title": "Scenario-Based Performance Analysis of Routing Protocols for Mobile ad-hoc Networks", "text": "This study is a comparison of three routing protocols proposed for wireless mobile ad-hoc networks. The protocols are: Destination Sequenced Distance Vector (DSDV), Ad-hoc On demand Distance Vector (AODV) and Dynamic Source Routing (DSR). Extensive simulations are made on a scenario where nodes moves randomly. Results are presented as a function of a novel mobility metric designed to reflect the relative speeds of the nodes in a scenario. Furthermore, three realistic scenarios are introduced to test the protocols in more specialized contexts. In most simulations the reactive protocols (AODV and DSR) performed significantly better than DSDV. At moderate traffic load DSR performed better than AODV for all tested mobility values, while AODV performed better than DSR at higher traffic loads. The latter is caused by the source routes in DSR data packets, which increase the load on the network. routers and hosts, thus a node may forward packets between other nodes as well as run user applications. Mobile ad-hoc networks have been the focus of many recent research and development efforts. Ad-hoc packet radio networks have so far mainly concerned military applications, where a decentralized network configuration is an operative advantage or even a necessity. Networks using ad-hoc configuration concepts can be used in many military applications, ranging from interconnected wireless access points to networks of wireless devices carried by individuals, e.g., digital maps, sensors attached to the body, voice communication, etc. Combinations of wide range and short range ad-hoc networks seek to provide robust, global coverage, even during adverse operating conditions."}
{"_id": "fc4ad06ced503fc4daef509fea79584176751d18", "title": "Language, thought and reference.", "text": "How should we best analyse the meaning of proper names, indexicals, demonstratives, both simple and complex, and definite descriptions? In what relation do such expressions stand to the objects they designate? In what relation do they stand to mental representations of those objects? Do these expressions form a semantic class, or must we distinguish between those that arc referential and those that are quantificational? Such questions have constituted one of the core research areas in the philosophy of language for much of the last century, yet consensus remains elusive: the field is still divided, for instance, between those who hold that all such expressions are semantically descriptive and those who would analyse most as the natural language counterparts of logical individual constants. The aim of this thesis is to cast new light on such questions by approaching them from within the cognitive framework of Sperber and Wilson's Relevance Theory. Relevance Theory offers not just an articulated pragmatics but also a broad conception of the functioning of natural language which differs radically from that presupposed within (most of) the philosophy of language. The function of linguistic expressions, on this conception, is not to determine propositional content, but rather to provide inferential premises which, in parallel with context and general pragmatic principles, will enable a bearer to reach the speaker's intended interpretation. Working within this framework, I shall argue that the semantics of the expressions discussed should best be analysed not in terms of their relation to those objects which, on occasions of use, they may designate, but rather in terms of the indications they offer a hearer concerning the mental representation which constitutes the content of a speaker's informative intention. Such an analysis can, I shall claim, capture certain key data on reference which have proved notoriously problematic, while respecting a broad range of apparently conflicting intuitions."}
{"_id": "63c22f6b8e5d6eef32d0a79a8895356d5f18703d", "title": "Modeling Multibody Dynamic Systems With Uncertainties . Part II : Numerical Applications", "text": "This study applies generalized polynomial chaos theory to model complex nonlinear multibody dynamic systems operating in the presence of parametric and external uncertainty. Theoretical and computational aspects of this methodology are discussed in the companion paper \u201cModeling Multibody Dynamic Systems With Uncertainties. Part I: Theoretical and Computational Aspects\u201d. In this paper we illustrate the methodology on selected test cases. The combined effects of parametric and forcing uncertainties are studied for a quarter car model. The uncertainty distributions in the system response in both time and frequency domains are validated against Monte-Carlo simulations. Results indicate that polynomial chaos is more efficient than Monte Carlo and more accurate than statistical linearization. The results of the direct collocation approach are similar to the ones obtained with the Galerkin approach. A stochastic terrain model is constructed using a truncated Karhunen-Loeve expansion. The application of polynomial chaos to differential-algebraic systems is illustrated using the constrained pendulum problem. Limitations of the polynomial chaos approach are studied on two different test problems, one with multiple attractor points, and the second with a chaotic evolution and a nonlinear attractor set. The overall conclusion is that, despite its limitations, generalized polynomial chaos is a powerful approach for the simulation of multibody dynamic systems with uncertainties."}
{"_id": "eb68b920ebc4e473fa9782a83be69ac52eff1f38", "title": "Normative data for the Rey-Osterrieth and the Taylor complex figure tests in Quebec-French people.", "text": "The Rey-Osterrieth (ROCF) and Taylor (TCF) complex figure tests are widely used to assess visuospatial and constructional abilities as well as visual/non-verbal memory. Normative data adjusted to the cultural and linguistic reality of older Quebec-French individuals is still nonexistent for these tests. In this article, we report the results of two studies that aimed to establish normative data for Quebec-French people (aged at least 50 years) for the copy, immediate recall, and delayed recall trials of the ROCF (Study 1) and the TCF (Study 2). For both studies, the impact of age, education, and sex on test performance was examined. Moreover, the impact of copy time on test performance, the impact of copy score on immediate and delayed recall score, and the impact of immediate recall score on delayed recall performance were examined. Based on regression models, equations to calculate Z scores for copy and recall scores are provided for both tests."}
{"_id": "0f7329cf0d388d4c5d5b94ee52ad2385bd2383ce", "title": "LIBSVX: A Supervoxel Library and Benchmark for Early Video Processing", "text": "Supervoxel segmentation has strong potential to be incorporated into early video analysis as superpixel segmentation has in image analysis. However, there are many plausible supervoxel methods and little understanding as to when and where each is most appropriate. Indeed, we are not aware of a single comparative study on supervoxel segmentation. To that end, we study seven supervoxel algorithms, including both off-line and streaming methods, in the context of what we consider to be a good supervoxel: namely, spatiotemporal uniformity, object/region boundary detection, region compression and parsimony. For the evaluation we propose a comprehensive suite of seven quality metrics to measure these desirable supervoxel characteristics. In addition, we evaluate the methods in a supervoxel classification task as a proxy for subsequent high-level uses of the supervoxels in video analysis. We use six existing benchmark video datasets with a variety of content-types and dense human annotations. Our findings have led us to conclusive evidence that the hierarchical graph-based (GBH), segmentation by weighted aggregation (SWA) and temporal superpixels (TSP) methods are the top-performers among the seven methods. They all perform well in terms of segmentation accuracy, but vary in regard to the other desiderata: GBH captures object boundaries best; SWA has the best potential for region compression; and TSP achieves the best undersegmentation error."}
{"_id": "75bb1dfef08101c501fd44a663d50a7351dce5c9", "title": "A tutorial on simulation modeling in six dimensions", "text": "Simulation involves modeling and analysis of real-world systems. This tutorial will provide a broad overview of the modeling practice within simulation by introducing the reader to modeling choices found using six dimensions: abstraction, complexity, culture, engineering, environment, and process. Modeling can be a daunting task even for the seasoned modeling and simulation professional, and so my goal is to introduce modeling in two ways: 1) to use one specific type of model (Petri Net) as an anchor for cross-dimensional discussion, and 2) to provide a follow up discussion, with additional non Petri Net examples, to clarify the extent of each dimension. For example, in the abstraction dimension, one must think about scale, refinement, and hierarchy when modeling regardless of the type of modeling language. The reader will come away with a broad framework within which to understand the possibilities of models and of modeling within the practice of simulation."}
{"_id": "1c8fac651cda7abcf0140ce73252ae53be21ad2c", "title": "A Case Study of the New York City 2012-2013 Influenza Season With Daily Geocoded Twitter Data From Temporal and Spatiotemporal Perspectives", "text": "BACKGROUND\nTwitter has shown some usefulness in predicting influenza cases on a weekly basis in multiple countries and on different geographic scales. Recently, Broniatowski and colleagues suggested Twitter's relevance at the city-level for New York City. Here, we look to dive deeper into the case of New York City by analyzing daily Twitter data from temporal and spatiotemporal perspectives. Also, through manual coding of all tweets, we look to gain qualitative insights that can help direct future automated searches.\n\n\nOBJECTIVE\nThe intent of the study was first to validate the temporal predictive strength of daily Twitter data for influenza-like illness emergency department (ILI-ED) visits during the New York City 2012-2013 influenza season against other available and established datasets (Google search query, or GSQ), and second, to examine the spatial distribution and the spread of geocoded tweets as proxies for potential cases.\n\n\nMETHODS\nFrom the Twitter Streaming API, 2972 tweets were collected in the New York City region matching the keywords \"flu\", \"influenza\", \"gripe\", and \"high fever\". The tweets were categorized according to the scheme developed by Lamb et al. A new fourth category was added as an evaluator guess for the probability of the subject(s) being sick to account for strength of confidence in the validity of the statement. Temporal correlations were made for tweets against daily ILI-ED visits and daily GSQ volume. The best models were used for linear regression for forecasting ILI visits. A weighted, retrospective Poisson model with SaTScan software (n=1484), and vector map were used for spatiotemporal analysis.\n\n\nRESULTS\nInfection-related tweets (R=.763) correlated better than GSQ time series (R=.683) for the same keywords and had a lower mean average percent error (8.4 vs 11.8) for ILI-ED visit prediction in January, the most volatile month of flu. SaTScan identified primary outbreak cluster of high-probability infection tweets with a 2.74 relative risk ratio compared to medium-probability infection tweets at P=.001 in Northern Brooklyn, in a radius that includes Barclay's Center and the Atlantic Avenue Terminal.\n\n\nCONCLUSIONS\nWhile others have looked at weekly regional tweets, this study is the first to stress test Twitter for daily city-level data for New York City. Extraction of personal testimonies of infection-related tweets suggests Twitter's strength both qualitatively and quantitatively for ILI-ED prediction compared to alternative daily datasets mixed with awareness-based data such as GSQ. Additionally, granular Twitter data provide important spatiotemporal insights. A tweet vector-map may be useful for visualization of city-level spread when local gold standard data are otherwise unavailable."}
{"_id": "7ea0c87ec8b34dac0a450ba78cd219e187632573", "title": "CatBoost: unbiased boosting with categorical features", "text": "This paper presents the key algorithmic techniques behind CatBoost, a new gradient boosting toolkit. Their combination leads to CatBoost outperforming other publicly available boosting implementations in terms of quality on a variety of datasets. Two critical algorithmic advances introduced in CatBoost are the implementation of ordered boosting, a permutation-driven alternative to the classic algorithm, and an innovative algorithm for processing categorical features. Both techniques were created to fight a prediction shift caused by a special kind of target leakage present in all currently existing implementations of gradient boosting algorithms. In this paper, we provide a detailed analysis of this problem and demonstrate that proposed algorithms solve it effectively, leading to excellent empirical results."}
{"_id": "ea44a322c1822116d0c3159517a74cd2f55a98a5", "title": "Strabismus Recognition Using Eye-Tracking Data and Convolutional Neural Networks", "text": "Strabismus is one of the most common vision diseases that would cause amblyopia and even permanent vision loss. Timely diagnosis is crucial for well treating strabismus. In contrast to manual diagnosis, automatic recognition can significantly reduce labor cost and increase diagnosis efficiency. In this paper, we propose to recognize strabismus using eye-tracking data and convolutional neural networks. In particular, an eye tracker is first exploited to record a subject's eye movements. A gaze deviation (GaDe) image is then proposed to characterize the subject's eye-tracking data according to the accuracies of gaze points. The GaDe image is fed to a convolutional neural network (CNN) that has been trained on a large image database called ImageNet. The outputs of the full connection layers of the CNN are used as the GaDe image's features for strabismus recognition. A dataset containing eye-tracking data of both strabismic subjects and normal subjects is established for experiments. Experimental results demonstrate that the natural image features can be well transferred to represent eye-tracking data, and strabismus can be effectively recognized by our proposed method."}
{"_id": "bc168059a400d665fde44281d9453b61e18c247e", "title": "Assessing forecast model performance in an ERP environment", "text": "Purpose \u2013 The paper aims to describe and apply a commercially oriented method of forecast performance measurement (cost of forecast error \u2013 CFE) and to compare the results with commonly adopted statistical measures of forecast accuracy in an enterprise resource planning (ERP) environment. Design/methodology/approach \u2013 The study adopts a quantitative methodology to evaluate the nine forecasting models (two moving average and seven exponential smoothing) of SAP\u2019s ERP system. Event management adjustment and fitted smoothing parameters are also assessed. SAP is the largest European software enterprise and the third largest in the world, with headquarters in Walldorf, Germany. Findings \u2013 The findings of the study support the adoption of CFE as a more relevant commercial decision-making measure than commonly applied statistical forecast measures. Practical implications \u2013 The findings of the study provide forecast model selection guidance to SAP\u2019s 12 \u00fe million worldwide users. However, the CFE metric can be adopted in any commercial forecasting situation. Originality/value \u2013 This study is the first published cost assessment of SAP\u2019s forecasting models."}
{"_id": "2d13971cf59761b76fa5fc08e96f4e56c0c2d6dc", "title": "Robust Image Sentiment Analysis Using Progressively Trained and Domain Transferred Deep Networks", "text": "Sentiment analysis of online user generated content is important for many social media analytics tasks. Researchers have largely relied on textual sentiment analysis to develop systems to predict political elections, measure economic indicators, and so on. Recently, social media users are increasingly using images and videos to express their opinions and share their experiences. Sentiment analysis of such large scale visual content can help better extract user sentiments toward events or topics, such as those in image tweets, so that prediction of sentiment from visual content is complementary to textual sentiment analysis. Motivated by the needs in leveraging large scale yet noisy training data to solve the extremely challenging problem of image sentiment analysis, we employ Convolutional Neural Networks (CNN). We first design a suitable CNN architecture for image sentiment analysis. We obtain half a million training samples by using a baseline sentiment algorithm to label Flickr images. To make use of such noisy machine labeled data, we employ a progressive strategy to fine-tune the deep network. Furthermore, we improve the performance on Twitter images by inducing domain transfer with a small number of manually labeled Twitter images. We have conducted extensive experiments on manually labeled Twitter images. The results show that the proposed CNN can achieve better performance in image sentiment analysis than competing algorithms."}
{"_id": "6d75df4360a3d56514dcb775c832fdc572bab64b", "title": "Universals and cultural differences in the judgments of facial expressions of emotion.", "text": "We present here new evidence of cross-cultural agreement in the judgement of facial expression. Subjects in 10 cultures performed a more complex judgment task than has been used in previous cross-cultural studies. Instead of limiting the subjects to selecting only one emotion term for each expression, this task allowed them to indicate that multiple emotions were evident and the intensity of each emotion. Agreement was very high across cultures about which emotion was the most intense. The 10 cultures also agreed about the second most intense emotion signaled by an expression and about the relative intensity among expressions of the same emotion. However, cultural differences were found in judgments of the absolute level of emotional intensity."}
{"_id": "89f9569a9118405156638151c2151b1814f9bb5e", "title": "The Influence of Background Music on the Behavior of Restaurant Patrons Author ( s ) :", "text": "JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact support@jstor.org. ."}
{"_id": "a6197444855f3edd1d7e1f6acf4ad6df345ef47b", "title": "Music Emotion Identification from Lyrics", "text": "Very large online music databases have recently been created by vendors, but they generally lack content-based retrieval methods. One exception is Allmusic.com which offers browsing by musical emotion, using human experts to classify several thousand songs into 183 moods. In this paper, machine learning techniques are used instead of human experts to extract emotions in Music. The classification is based on a psychological model of emotion that is extended to 23 specific emotion categories. Our results for mining the lyrical text of songs for specific emotion are promising, generate classification models that are human-comprehensible, and generate results that correspond to commonsense intuitions about specific emotions. Mining lyrics focused in this paper is one aspect of research which combines different classifiers of musical emotion such as acoustics and lyrical text."}
{"_id": "eda501bb1e610098648667eb25273adc4a4dc98d", "title": "Fusing audio, visual and textual clues for sentiment analysis from multimodal content", "text": "A huge number of videos are posted every day on social media platforms such as Facebook and YouTube. This makes the Internet an unlimited source of information. In the coming decades, coping with such information and mining useful knowledge from it will be an increasingly difficult task. In this paper, we propose a novel methodology for multimodal sentiment analysis, which consists in harvesting sentiments from Web videos by demonstrating a model that uses audio, visual and textual modalities as sources of information. We used both featureand decision-level fusion methods to merge affective information extracted from multiple modalities. A thorough comparison with existing works in this area is carried out throughout the paper, which demonstrates the novelty of our approach. Preliminary comparative experiments with the YouTube dataset show that the proposed multimodal system achieves an accuracy of nearly 80%, outperforming all state-of-the-art systems by more than 20%. & 2015 Elsevier B.V. All rights reserved."}
{"_id": "695d53820f45a0f174e51eed537c6dd4068e13ae", "title": "The DEXMART hand: Mechatronic design and experimental evaluation of synergy-based control for human-like grasping", "text": "This paper summarizes recent activities carried out for the development of an innovative anthropomorphic robotic hand called the DEXMART Hand. The main goal of this research is to face the problems that affect current robotic hands by introducing suitable design solutions aimed at achieving simplification and cost reduction while possibly enhancing robustness and performance. While certain aspects of the DEXMART Hand development have been presented in previous papers, this paper is the first to give a comprehensive description of the final hand version and its use to replicate humanlike grasping. In this paper, particular emphasis is placed on the kinematics of the fingers and of the thumb, the wrist architecture, the dimensioning of the actuation system, and the final implementation of the position, force and tactile sensors. The paper focuses also on how these solutions have been integrated into the mechanical structure of this innovative robotic hand to enable precise force and displacement control of the whole system. Another important aspect is the lack of suitable control tools that severely limits the development of robotic hand applications. To address this issue, a new method for the observation of human hand behavior during interaction with common day-to-day objects by means of a 3D computer vision system is presented in this work together with a strategy for mapping human hand postures to the robotic hand. A simple control strategy based on postural synergies has been used to reduce the complexity of the grasp planning problem. As a preliminary evaluation of the DEXMART Hand\u2019s capabilities, this approach has been adopted in this paper to simplify and speed up the transfer of human actions to the robotic hand, showing its effectiveness in reproducing human-like grasping."}
{"_id": "682adb1bf3998c8f77b3c02b712a33c3a5e65ae5", "title": "Coping with autism: a journey toward adaptation.", "text": "As the number of individuals with autism grows, it is critical for nurses in all settings to understand how autism influences the family unit, as they will likely interact with these children, the adults, and their families. The intent of this descriptive narrative study was to explore the experiences of families of individuals with autism as perceived by the mother. Through personal interviews, 16 mothers' perceptions of the impact of autism on the family unit during different stages of the life cycle were revealed through a constructivist lens. Pediatric nurses employed in acute care settings, community, and schools are poised to assess and support these families following diagnosis and throughout the child's life."}
{"_id": "312182a3f20e842ef9b612e39f305e4def16ffc6", "title": "ATR-Vis: Visual and Interactive Information Retrieval for Parliamentary Discussions in Twitter", "text": "The worldwide adoption of Twitter turned it into one of the most popular platforms for content analysis as it serves as a gauge of the public\u2019s feeling and opinion on a variety of topics. This is particularly true of political discussions and lawmakers\u2019 actions and initiatives. Yet, one common but unrealistic assumption is that the data of interest for analysis is readily available in a comprehensive and accurate form. Data need to be retrieved, but due to the brevity and noisy nature of Twitter content, it is difficult to formulate user queries that match relevant posts that use different terminology without introducing a considerable volume of unwanted content. This problem is aggravated when the analysis must contemplate multiple and related topics of interest, for which comments are being concurrently posted. This article presents Active Tweet Retrieval Visualization (ATR-Vis), a user-driven visual approach for the retrieval of Twitter content applicable to this scenario. The method proposes a set of active retrieval strategies to involve an analyst in such a way that a major improvement in retrieval coverage and precision is attained with minimal user effort. ATR-Vis enables non-technical users to benefit from the aforementioned active learning strategies by providing visual aids to facilitate the requested supervision. This supports the exploration of the space of potentially relevant tweets, and affords a better understanding of the retrieval results. We evaluate our approach in scenarios in which the task is to retrieve tweets related to multiple parliamentary debates within a specific time span. We collected two Twitter datasets, one associated with debates in the Canadian House of Commons during a particular week in May 2014, and another associated with debates in the Brazilian Federal Senate during a selected week in May 2015. The two use cases illustrate the effectiveness of ATR-Vis for the retrieval of relevant tweets, while quantitative results show that our approach achieves high retrieval quality with a modest amount of supervision. Finally, we evaluated our tool with three external users who perform searching in social media as part of their professional work."}
{"_id": "d1d5e9ab7865e26bb2d40d8476ccc9db46c9b79a", "title": "Multilevel thresholding for satellite image segmentation with moth-flame based optimization", "text": "In this paper, an improved version of the moth-flame optimization (MFO) algorithm for image segmentation is proposed to effectively enhance the optimal multilevel thresholding of satellite images. Multilevel thresholding is one of the most widely used methods for image segmentation, as it has efficient processing ability and easy implementation. However, as the number of threshold values increase, it consequently becomes computationally expensive. To overcome this problem, the nature-inspired meta-heuristic named multilevel thresholding moth-flame optimization algorithm (MTMFO) for multilevel thresholding was developed. The improved method proposed herein was tested on various satellite images tested against five different existing methods: the genetic algorithm (GA), the differential evolution (DE) algorithm, the artificial bee colony (ABC) algorithm, the particle swarm optimization (PSO) algorithm, and the moth-flame optimization (MFO) algorithm for solving multilevel satellite image thresholding problems. Experimental results indicate that the MTMFO more effectively and accurately identifies the optimal threshold values with respect to the other state-of-the-art optimization algorithms."}
{"_id": "9e2ccaf41e32a33460c8b1c38a328188af9e353f", "title": "Introduction of optical camera communication for Internet of vehicles (IoV)", "text": "In this paper, we have introduced optical camera communication (OCC) as a new technology for internet of vehicles (IoV). OCC has already been established in many areas which can also be used in vehicular communication. There have some researches in the field of OCC-based vehicular communication but these are not mature enough. Here, we have proposed a new system which will provide great advantages to the vehicular system. We proposed a combination of OCC and cloud-based communication for IoV which will ensure secure, stable, and system. We have also proposed an algorithm to provide cloud based IoV service."}
{"_id": "43f24c56565d0acf417ef5712f2d2cd9635fd9cb", "title": "Topic hierarchy construction for the organization of multi-source user generated contents", "text": "User generated contents (UGCs) carry a huge amount of high quality information. However, the information overload and diversity of UGC sources limit their potential uses. In this research, we propose a framework to organize information from multiple UGC sources by a topic hierarchy which is automatically generated and updated using the UGCs. We explore the unique characteristics of UGCs like blogs, cQAs, microblogs, etc., and introduce a novel scheme to combine them. We also propose a graph-based method to enable incremental update of the generated topic hierarchy. Using the hierarchy, users can easily obtain a comprehensive, in-depth and up-to-date picture of their topics of interests. The experiment results demonstrate how information from multiple heterogeneous sources improves the resultant topic hierarchies. It also shows that the proposed method achieves better F1 scores in hierarchy generation as compared to the state-of-the-art methods."}
{"_id": "aa94627e3affcccf9421546dcff18a5d01787aa7", "title": "Effectiveness of a multi-layer foam dressing in preventing sacral pressure ulcers for the early acute care of patients with a traumatic spinal cord injury: comparison with the use of a gel mattress.", "text": "Individuals with spinal cord injury are at risk of sacral pressure ulcers due to, among other reasons, prolonged immobilisation. The effectiveness of a multi-layer foam dressing installed pre-operatively in reducing sacral pressure ulcer occurrence in spinal cord injured patients was compared to that of using a gel mattress, and stratified analyses were performed on patients with complete tetraplegia and paraplegia. Socio-demographic and clinical data were collected from 315 patients admitted in a level-I trauma centre following a spinal cord injury between April 2010 and March 2016. Upon arrival to the emergency room and until surgery, patients were transferred on a foam stretcher pad with a viscoelastic polymer gel mattress (before 1 October 2014) or received a multi-layer foam dressing applied to their sacral-coccygeal area (after 1 October 2014). The occurrence of sacral pressure ulcer during acute hospitalisation was similar irrespective of whether patients received the dressing or the gel mattress. It was found that 82% of patients with complete tetraplegia receiving the preventive dressing developed sacral ulcers as compared to only 36% of patients using the gel mattress. Although multi-layer dressings were suggested to improve skin protection and decrease pressure ulcer occurrence in critically ill patients, such preventive dressings are not superior to gel mattresses in spinal cord injured patients and should be used with precaution, especially in complete tetraplegia."}
{"_id": "3146ffeed483cb94d474315b0f9ed54505834032", "title": "Pix2Vox: Context-aware 3D Reconstruction from Single and Multi-view Images", "text": "Recovering the 3D representation of an object from single-view or multi-view RGB images by deep neural networks has attracted increasing attention in the past few years. Several mainstream works (e.g., 3D-R2N2) use recurrent neural networks (RNNs) to fuse multiple feature maps extracted from input images sequentially. However, when given the same set of input images with different orders, RNN-based approaches are unable to produce consistent reconstruction results. Moreover, due to long-term memory loss, RNNs cannot fully exploit input images to refine reconstruction results. To solve these problems, we propose a novel framework for single-view and multi-view 3D reconstruction, named Pix2Vox. By using a well-designed encoder-decoder, it generates a coarse 3D volume from each input image. Then, a context-aware fusion module is introduced to adaptively select high-quality reconstructions for each part (e.g., table legs) from different coarse 3D volumes to obtain a fused 3D volume. Finally, a refiner further refines the fused 3D volume to generate the final output. Experimental results on the ShapeNet and Pascal 3D+ benchmarks indicate that the proposed Pix2Vox outperforms state-of-the-arts by a large margin. Furthermore, the proposed method is 24 times faster than 3D-R2N2 in terms of backward inference time. The experiments on ShapeNet unseen 3D categories have shown the superior generalization abilities of our method."}
{"_id": "50dea03d4feb1797f1d5c260736e1cf7ad6d45ca", "title": "Rapid growing fibroadenoma in an adolescent.", "text": "INTRODUCTION\nWe report a case of rapidly growing fibroadenoma.\n\n\nPATIENT\nA 13-year-old girl consulted the outpatient clinic regarding a left breast mass. The mass was diagnosed as fibroadenoma by clinical examinations, and the patient was carefully monitored. The mass enlarged rapidly with each menses and showed a 50% increase in volume four months later. Lumpectomy was performed. The tumor was histologically diagnosed as fibroadenoma organized type and many glandular epithelial cells had positive immunohistochemical staining for anti-estrogen receptor antibody in the nuclei.\n\n\nCONCLUSION\nThe estrogen sensitivity of the tumor could account for the rapid growth."}
{"_id": "e8a31f498a5326885eb09298d85bb4bede57dd0d", "title": "Understanding Online Consumer Stickiness in E-commerce Environment : A Relationship Formation Model", "text": "Consumers with online shopping experience often stick to some special websites. Does it mean that long-term relationships between consumers and the websites form? To solve the problem, this paper analyzed the belief and attitude factors influencing online consumer stickiness intention. Based on the relationships between belief-attitude-intention implied in TPB (Theory of Planned Behavior), according to Expectation-Confirmation theory and Commitment-trust theory, we developed a concept model on online consumer stickiness. Six research hypotheses derived from this model were empirically validated using a survey of online shoppers. SEM (Structure Equation Modeling) was as a data analysis method. The results suggest that online consumer stickiness intention is influenced by consumers\u2019 commitment to the website (e-vendors) and their overall satisfaction to the transaction process, and commitment\u2019s effect is stronger; in turn, both consumers\u2019 commitment and overall satisfaction are influenced by consumers\u2019 ongoing trust significantly; confirmation has a significant effect on overall satisfaction but has not the same effect on commitment. The findings show that the long-term relationships between consumers and special websites exactly exist during online repeat consumption."}
{"_id": "6e3ee43e976f66d991034f921272aab0830953b0", "title": "GFlink: An In-Memory Computing Architecture on Heterogeneous CPU-GPU Clusters for Big Data", "text": "The increasing main memory capacity and the explosion of big data have fueled the development of in-memory big data management and processing. By offering an efficient in-memory parallel execution model which can eliminate disk I/O bottleneck, existing in-memory cluster computing platforms (e.g., Flink and Spark) have already been proven to be outstanding platforms for big data processing. However, these platforms are merely CPU-based systems. This paper proposes GFlink, an in-memory computing architecture on heterogeneous CPU-GPU clusters for big data. Our proposed architecture extends the original Flink from CPU clusters to heterogeneous CPU-GPU clusters, greatly improving the computational power of Flink. Furthermore, we have proposed a programming framework based on Flink's abstract model, i.e., DataSet (DST), hiding the programming complexity of GPUs behind the simple and familiar high-level interfaces. To achieve high performance and good load-balance, an efficient JVM-GPU communication strategy, a GPU cache scheme, and an adaptive locality-aware scheduling scheme for three-stage pipelining execution are proposed. Extensive experiment results indicate that the high computational power of GPUs can be efficiently utilized, and the implementation on GFlink outperforms that on the original CPU-based Flink."}
{"_id": "80d8cf86ab8359843d9e74b67a39fb5f08225226", "title": "A distributed algorithm for 2D shape duplication with smart pebble robots", "text": "We present our digital fabrication technique for manufacturing active objects in 2D from a collection of smart particles. Given a passive model of the object to be formed, we envision submerging this original in a vat of smart particles, executing the new shape duplication algorithm described in this paper, and then brushing aside any extra modules to reveal both the original object and an exact copy, side-by-side. Extensions to the duplication algorithm can be used to create a magnified version of the original or multiple copies of the model object. Our novel duplication algorithm uses a distributed approach to identify the geometric specification of the object being duplicated and then forms the duplicate from spare modules in the vicinity of the original. This paper details the duplication algorithm and the features that make it robust to (1) an imperfect packing of the modules around the original object; (2) missing communication links between neighboring modules; and (3) missing modules in the vicinity of the duplicate object(s). We show that the algorithm requires O(1) storage space per module and that the algorithm exchanges O(n) messages per module. Finally, we present experimental results from 60 hardware trials and 150 simulations. These experiments demonstrate the algorithm working correctly and reliably despite broken communication links and missing modules."}
{"_id": "67dd8ca2dbbd1e6f0eb510c49e53e5bad72940c6", "title": "A Convolutional Learning System for Object Classification in 3-D Lidar Data", "text": "In this brief, a convolutional learning system for classification of segmented objects represented in 3-D as point clouds of laser reflections is proposed. Several novelties are discussed: (1) extension of the existing convolutional neural network (CNN) framework to direct processing of 3-D data in a multiview setting which may be helpful for rotation-invariant consideration, (2) improvement of CNN training effectiveness by employing a stochastic meta-descent (SMD) method, and (3) combination of unsupervised and supervised training for enhanced performance of CNN. CNN performance is illustrated on a two-class data set of objects in a segmented outdoor environment."}
{"_id": "1a1f7f3c8274502b661db8ee09d31f33de1e3078", "title": "Satellite remote sensing of particulate matter and air quality assessment over global cities", "text": "Using 1 year of aerosol optical thickness (AOT) retrievals from the MODerate resolution Imaging Spectro-radiometer (MODIS) on board NASA\u2019s Terra and Aqua satellite along with ground measurements of PM2.5 mass concentration, we assess particulate matter air quality over different locations across the global urban areas spread over 26 locations in Sydney, Delhi, Hong Kong, New York City and Switzerland. An empirical relationship between AOT and PM2.5 mass is obtained and results show that there is an excellent correlation between the bin-averaged daily mean satellite and groundbased values with a linear correlation coefficient of 0.96. Using meteorological and other ancillary datasets, we assess the effects of wind speed, cloud cover, and mixing height (MH) on particulate matter (PM) air quality and conclude that these data are necessary to further apply satellite data for air quality research. Our study clearly demonstrates that satellitederived AOT is a good surrogate for monitoring PM air quality over the earth. However, our analysis shows that the PM2.5\u2013AOT relationship strongly depends on aerosol concentrations, ambient relative humidity (RH), fractional cloud cover and height of the mixing layer. Highest correlation between MODIS AOT and PM2.5 mass is found under clear sky conditions with less than 40\u201350% RH and when atmospheric MH ranges from 100 to 200m. Future remote sensing sensors such as Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) that have the capability to provide vertical distribution of aerosols will further enhance our ability to monitor and forecast air pollution. This study is among the first to examine the relationship between satellite and ground measurements over several global locations. r 2006 Elsevier Ltd. All rights reserved."}
{"_id": "864faa99ea2ffc161aaba84263cdfd51333f3f74", "title": "Characteristics and variability of structural networks derived from diffusion tensor imaging", "text": "Structural brain networks were constructed based on diffusion tensor imaging (DTI) data of 59 young healthy male adults. The networks had 68 nodes, derived from FreeSurfer parcellation of the cortical surface. By means of streamline tractography, the edge weight was defined as the number of streamlines between two nodes normalized by their mean volume. Specifically, two weighting schemes were adopted by considering various biases from fiber tracking. The weighting schemes were tested for possible bias toward the physical size of the nodes. A novel thresholding method was proposed using the variance of number of streamlines in fiber tracking. The backbone networks were extracted and various network analyses were applied to investigate the features of the binary and weighted backbone networks. For weighted networks, a high correlation was observed between nodal strength and betweenness centrality. Despite similar small-worldness features, binary networks and weighted networks are distinctive in many aspects, such as modularity and nodal betweenness centrality. Inter-subject variability was examined for the weighted networks, along with the test-retest reliability from two repeated scans on 44 of the 59 subjects. The inter-/intra-subject variability of weighted networks was discussed in three levels - edge weights, local metrics, and global metrics. The variance of edge weights can be very large. Although local metrics show less variability than the edge weights, they still have considerable amounts of variability. Weighting scheme one, which scales the number of streamlines by their lengths, demonstrates stable intra-class correlation coefficients against thresholding for global efficiency, clustering coefficient and diversity. The intra-class correlation analysis suggests the current approach of constructing weighted network has a reasonably high reproducibility for most global metrics."}
{"_id": "d3fda1e44c8ba36c58b800c9b7a0e9fe7ddb6242", "title": "Unsupervised Change Detection in Satellite Images Using Principal Component Analysis and $k$-Means Clustering", "text": "In this letter, we propose a novel technique for unsupervised change detection in multitemporal satellite images using principal component analysis (PCA) and k-means clustering. The difference image is partitioned into h times h nonoverlapping blocks. S, S les h2, orthonormal eigenvectors are extracted through PCA of h times h nonoverlapping block set to create an eigenvector space. Each pixel in the difference image is represented with an S-dimensional feature vector which is the projection of h times h difference image data onto the generated eigenvector space. The change detection is achieved by partitioning the feature vector space into two clusters using k-means clustering with k = 2 and then assigning each pixel to the one of the two clusters by using the minimum Euclidean distance between the pixel's feature vector and mean feature vector of clusters. Experimental results confirm the effectiveness of the proposed approach."}
{"_id": "eabe277992183a647de0b84882d348ddfc54a983", "title": "Sexuality of male-to-female transsexuals.", "text": "Blanchard's (J Nerv Ment Dis 177:616-623, 1989) theory of autogynephilia suggests that male-to-female transsexuals can be categorized into different types based on their sexuality. Little previous research has compared the sexuality of male-to-female transsexuals to biological females. The present study examined 15 aspects of sexuality among a non-clinical sample of 234 transsexuals and 127 biological females, using either an online or a paper questionnaire. The results showed that, overall, transsexuals tended to place more importance on partner's physical attractiveness and reported higher scores on Blanchard's Core Autogynephilia Scale than biological females. In addition, transsexuals classified as autogynephilic scored significantly higher on Attraction to Feminine Males, Core Autogynephilia, Autogynephilic Interpersonal Fantasy, Fetishism, Preference for Younger Partners, Interest in Uncommitted Sex, Importance of Partner Physical Attractiveness, and Attraction to Transgender Fiction than other transsexuals and biological females. In accordance with Blanchard's theory, autogynephilia measures were positively correlated to Sexual Attraction to Females among transsexuals. In contrast to Blanchard's theory, however, those transsexuals classified as autogynephilic scored higher on average on Sexual Attraction to Males than those classified as non-autogynephilic, and no transsexuals classified as autogynephilic reported asexuality."}
{"_id": "50124f90e90ab12ed5c66fa1fadd5c536d2fc695", "title": "A study of age and gender seen through mobile phone usage patterns in Mexico", "text": "Mobile phone usage provides a wealth of information, which can be used to better understand the demographic structure of a population. In this paper we focus on the population of Mexican mobile phone users. Our first contribution is an observational study of mobile phone usage according to gender and age groups. We were able to detect significant differences in phone usage among different subgroups of the population. Our second contribution is to provide a novel methodology to predict demographic features (namely age and gender) of unlabeled users by leveraging individual calling patterns, as well as the structure of the communication graph. We provide details of the methodology and show experimental results on a real world dataset that involves millions of users."}
{"_id": "48af5c507d82f7270b7ebe26365b4a56bdba3252", "title": "Bitcoin Market Return and Volatility Forecasting Using Transaction Network Flow Properties", "text": "Bit coin, as the foundation for a secure electronic payment system, has drawn broad interests from researchers in recent years. In this paper, we analyze a comprehensive Bit coin transaction dataset and investigate the interrelationship between the flow of Bit coin transactions and its price movement. Using network theory, we examine a few complexity measures of the Bit coin transaction flow networks, and we model the joint dynamic relationship between these complexity measures and Bit coin market variables such as return and volatility. We find that a particular complexity measure of the Bit coin transaction network flow is significantly correlated with the Bit coin market return and volatility. More specifically we document that the residual diversity or freedom of Bit coin network flow scaled by the total system throughput can significantly improve the predictability of Bit coin market return and volatility."}
{"_id": "63ef86bf94d58343f1484bf415911843da8fd612", "title": "A performance prediction model for the CUDA GPGPU platform", "text": "The significant growth in computational power of modern Graphics Processing Units (GPUs) coupled with the advent of general purpose programming environments like NVIDIA's CUDA, has seen GPUs emerging as a very popular parallel computing platform. Till recently, there has not been a performance model for GPGPUs. The absence of such a model makes it difficult to definitively assess the suitability of the GPU for solving a particular problem and is a significant impediment to the mainstream adoption of GPUs as a massively parallel (super)computing platform. In this paper we present a performance prediction model for the CUDA GPGPU platform. This model encompasses the various facets of the GPU architecture like scheduling, memory hierarchy, and pipelining among others. We also perform experiments that demonstrate the effects of various memory access strategies. The proposed model can be used to analyze pseudo code for a CUDA kernel to obtain a performance estimate, in a way that is similar to performing asymptotic analysis. We illustrate the usage of our model and its accuracy with three case studies: matrix multiplication, list ranking, and histogram generation."}
{"_id": "3b94d7407db24e630e5cc4e886f16755a76bd583", "title": "Unified motion control for dynamic quadrotor maneuvers demonstrated on slung load and rotor failure tasks", "text": "In recent years impressive results have been presented illustrating the potential of quadrotors to solve challenging tasks. Generally, the derivation of the controllers involve complex analytical manipulation of the dynamics and are very specific to the task at hand. In addition, most approaches construct a trajectory and then design a stabilizing controller in a separate step, whereas a fully optimal solution requires finding both simultaneously. In this paper, a generalized approach is presented using an iterative optimal control algorithm. A series of complex tasks are thus solved using the same algorithm without the need for manual manipulation of the system dynamics, heuristic simplifications, or manual trajectory generation. First, aggressive maneuvers are performed by requiring the quadrotor to pass with a slung load through a window not high enough for the load to pass while hanging straight down. Second, go-to-goal tasks with single and double rotor failure are demonstrated. The adaptability and applicability of this unified approach to such diverse tasks with a nonlinear, underactuated, constrained, and in the case of the slung load, hybrid quadrotor systems is thus shown."}
{"_id": "3212a1d0bd6c90a16ffffc032328fd819cb4c92f", "title": "Will 5G See its Blind Side? Evolving 5G for Universal Internet Access", "text": "Internet has shown itself to be a catalyst for economic growth and social equity but its potency is thwarted by the fact that the Internet is off limits for the vast majority of human beings. Mobile phones\u2014the fastest growing technology in the world that now reaches around 80% of humanity\u2014can enable universal Internet access if it can resolve coverage problems that have historically plagued previous cellular architectures (2G, 3G, and 4G). These conventional architectures have not been able to sustain universal service provisioning since these architectures depend on having enough users per cell for their economic viability and thus are not well suited to rural areas (which are by definition sparsely populated). The new generation of mobile cellular technology (5G), currently in a formative phase and expected to be finalized around 2020, is aimed at orders of magnitude performance enhancement. 5G offers a clean slate to network designers and can be molded into an architecture also amenable to universal Internet provisioning. Keeping in mind the great social benefits of democratizing Internet and connectivity, we believe that the time is ripe for emphasizing universal Internet provisioning as an important goal on the 5G research agenda. In this paper, we investigate the opportunities and challenges in utilizing 5G for global access to the Internet for all (GAIA). We have also identified the major technical issues involved in a 5G-based GAIA solution and have set up a future research agenda by defining open research problems."}
{"_id": "a4317a18de3fe66121bbb58eb2e9b8d677994bf1", "title": "Multi-hop Communication in the Uplink for LPWANs", "text": "Low-Power Wide Area Networks (LPWANs) have arisen as a promising communication technology for supporting Internet of Things (IoT) services due to their low power operation, wide coverage range, low cost and scalability. However, most LPWAN solutions like SIGFOXTMor LoRaWANTMrely on star topology networks, where stations (STAs) transmit directly to the gateway (GW), which often leads to rapid battery depletion in STAs located far from it. In this work, we analyze the impact on LPWANs energy consumption of multi-hop communication in the uplink, allowing STAs to transmit data packets in lower power levels and higher data rates to closer parent STAs, reducing their energy consumption consequently. To that aim, we introduce the DistanceRing Exponential Stations Generator (DRESG) framework, designed to evaluate the performance of the so-called optimal-hop routing model, which establishes optimal routing connections in terms of energy efficiency, aiming to balance the consumption among all the STAs in the network. Results show that enabling such multi-hop connections entails higher network lifetimes, reducing significantly the bottleneck consumption in LPWANs with up to thousands of STAs. These results lead to foresee multi-hop communication in the uplink as a promising routing alternative for extending the lifetime of LPWAN deployments."}
{"_id": "2421eaf85e274e59508c87df80bc4242edae7168", "title": "Toward automated segmentation of the pathological lung in CT", "text": "Conventional methods of lung segmentation rely on a large gray value contrast between lung fields and surrounding tissues. These methods fail on scans with lungs that contain dense pathologies, and such scans occur frequently in clinical practice. We propose a segmentation-by-registration scheme in which a scan with normal lungs is elastically registered to a scan containing pathology. When the resulting transformation is applied to a mask of the normal lungs, a segmentation is found for the pathological lungs. As a mask of the normal lungs, a probabilistic segmentation built up out of the segmentations of 15 registered normal scans is used. To refine the segmentation, voxel classification is applied to a certain volume around the borders of the transformed probabilistic mask. Performance of this scheme is compared to that of three other algorithms: a conventional, a user-interactive and a voxel classification method. The algorithms are tested on 10 three-dimensional thin-slice computed tomography volumes containing high-density pathology. The resulting segmentations are evaluated by comparing them to manual segmentations in terms of volumetric overlap and border positioning measures. The conventional and user-interactive methods that start off with thresholding techniques fail to segment the pathologies and are outperformed by both voxel classification and the refined segmentation-by-registration. The refined registration scheme enjoys the additional benefit that it does not require pathological (hand-segmented) training data."}
{"_id": "35d6ef85edd4caf8ce8b2d6ba09594f7d1c29d01", "title": "Animal monitoring with unmanned aerial vehicle-aided wireless sensor networks", "text": "In this paper, we focus on an application of wireless sensor networks (WSNs) with unmanned aerial vehicle (UAV). The aim of the application is to detect the locations of endangered species in large-scale wildlife areas or monitor movement of animals without any attachment devices. We first define the mathematical model of the animal monitoring problem in terms of the value of information (VoI) and rewards. We design a network model including clusters of sensor nodes and a single UAV that acts as a mobile sink and visits the clusters. We propose a path planning approach based on a Markov decision process (MDP) model that maximizes the VoI while reducing message delays. We used real-world movement dataset of zebras. Simulation results show that our approach outperforms greedy and random heuristics as well as the path planning based on the solution of the traveling salesman problem."}
{"_id": "18d7a36d953480adba60c21e4b2a3f3208fedc77", "title": "HERB: a home exploring robotic butler", "text": "We describe the architecture, algorithms, and experiments with HERB, an autonomous mobile manipulator that performs useful manipulation tasks in the home. We present new algorithms for searching for objects, learning to navigate in cluttered dynamic indoor scenes, recognizing and registering objects accurately in high clutter using vision, manipulating doors and other constrained objects using caging grasps, grasp planning and execution in clutter, and manipulation on pose and torque constraint manifolds. S.S. Srinivasa ( ) \u00b7 D. Ferguson \u00b7 C.J. Helfrich Intel Research Pittsburgh, 4720 Forbes Avenue, Suite 410, Pittsburgh, PA 15213, USA e-mail: siddhartha.srinivasa@intel.com C.J. Helfrich e-mail: casey.j.helfrich@intel.com D. Berenson \u00b7 A. Collet \u00b7 R. Diankov \u00b7 G. Gallagher \u00b7 G. Hollinger \u00b7 J. Kuffner \u00b7 M.V. Weghe The Robotics Institute, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA D. Berenson e-mail: dberenso@ri.cmu.edu A. Collet e-mail: acollet@ri.cmu.edu R. Diankov e-mail: rdiankov@ri.cmu.edu G. Gallagher e-mail: ggallagh@ri.cmu.edu G. Hollinger e-mail: gholling@ri.cmu.edu J. Kuffner e-mail: kuffner@ri.cmu.edu M.V. Weghe e-mail: vandeweg@ri.cmu.edu We also present numerous severe real-world test results from the integration of these algorithms into a single mobile manipulator."}
{"_id": "be9336fd5642e57b6c147c6eb97612b052fd43d4", "title": "Projected texture stereo", "text": "Passive stereo vision is widely used as a range sensing technology in robots, but suffers from dropouts: areas of low texture where stereo matching fails. By supplementing a stereo system with a strong texture projector, dropouts can be eliminated or reduced. This paper develops a practical stereo projector system, first by finding good patterns to project in the ideal case, then by analyzing the effects of system blur and phase noise on these patterns, and finally by designing a compact projector that is capable of good performance out to 3m in indoor scenes. The system has been implemented and has excellent depth precision and resolution, especially in the range out to 1.5m."}
{"_id": "cf2a86994505a96c19c73dbbaa4a39801bdee088", "title": "Real-time 3D object pose estimation and tracking for natural landmark based visual servo", "text": "A real-time solution for estimating and tracking the 3D pose of a rigid object is presented for image-based visual servo with natural landmarks. The many state-of-the-art technologies that are available for recognizing the 3D pose of an object in a natural setting are not suitable for real-time servo due to their time lags. This paper demonstrates that a real-time solution of 3D pose estimation become feasible by combining a fast tracker such as KLT [7] [8] with a method of determining the 3D coordinates of tracking points on an object at the time of SIFT based tracking point initiation, assuming that a 3D geometric model with SIFT description of an object is known a-priori. Keeping track of tracking points with KLT, removing the tracking point outliers automatically, and reinitiating the tracking points using SIFT once deteriorated, the 3D pose of an object can be estimated and tracked in real-time. This method can be applied to both mono and stereo camera based 3D pose estimation and tracking. The former guarantees higher frame rates with about 1 ms of local pose estimation, while the latter assures of more precise pose results but with about 16 ms of local pose estimation. The experimental investigations have shown the effectiveness of the proposed approach with real-time performance."}
{"_id": "0674c1e2fd78925a1baa6a28216ee05ed7b48ba0", "title": "Object Recognition from Local Scale-Invariant Features", "text": "Proc. of the International Conference on Computer Vision, Corfu (Sept. 1999) An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest-neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low-residual least-squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially-occluded images with a computation time of under 2 seconds."}
{"_id": "12cedcc79bec6403ffab5d4c85a1bf7500683eca", "title": "Algorithmic Complexity in Coding Theory and the Minimum Distance Problem", "text": "We startwithan overviewof algorithmiccomplexity problemsin coding theory We then show that the problemof computing the minimumdiktanceof a binaryIinwr code is NP-hard,and the correspondingdeci~\u201donproblemis W-complete. Thisconstitutes a proof of the conjecture Bedekamp, McEliece,vanTilborg, dating back to 1978. Extensionsand applicationsof this result to other problemsin codingtheqv are discussed."}
{"_id": "10ff61e6c2a99d8aafcf1706f3e88c7e2dfec188", "title": "Nonparametric belief propagation", "text": "Continuous quantities are ubiquitous in models of real-world phenomena, but are surprisingly difficult to reason about automatically. Probabilistic graphical models such as Bayesian networks and Markov random fields, and algorithms for approximate inference such as belief propagation (BP), have proven to be powerful tools in a wide range of applications in statistics and artificial intelligence. However, applying these methods to models with continuous variables remains a challenging task. In this work we describe an extension of BP to continuous variable models, generalizing particle filtering, and Gaussian mixture filtering techniques for time series to more complex models. We illustrate the power of the resulting nonparametric BP algorithm via two applications: kinematic tracking of visual motion and distributed localization in sensor networks."}
{"_id": "4873e56ce8bfa3d8edaa8cdc28ea3aff54b3e87c", "title": "Feature-Enhanced Probabilistic Models for Diffusion Network Inference", "text": "Cascading processes, such as disease contagion, viral marketing, and information diffusion, are a pervasive phenomenon in many types of networks. The problem of devising intervention strategies to facilitate or inhibit such processes has recently received considerable attention. However, a major challenge is that the underlying network is often unknown. In this paper, we revisit the problem of inferring latent network structure given observations from a diffusion process, such as the spread of trending topics in social media. We define a family of novel probabilistic models that can explain recurrent cascading behavior, and take into account not only the time differences between events but also a richer set of additional features. We show that MAP inference is tractable and can therefore scale to very large real-world networks. Further, we demonstrate the effectiveness of our approach by inferring the underlying network structure of a subset of the popular Twitter following network by analyzing the topics of a large number of messages posted by users over a 10-month period. Experimental results show that our models accurately recover the links of the Twitter network, and significantly improve the performance over previous models based entirely on time."}
{"_id": "e1bf2a95eb36afe7a6d9ae01db26ade2988226a0", "title": "Map-reduce based parallel support vector machine for risk analysis", "text": "Now a days people are enjoying the world of data because size and amount of the data has tremendously increased which acts like an invitation to Big data. But some of the classifier techniques like Support Vector Machine (SVM) is not able to handle the huge amount of data due to it's excessive memory requirement and unreasonable complexity in algorithm tough it is one of the most popularly used classifier in machine learning field. Hence a new technique comes into picture which performs parallel algorithm in a efficient way to work data having large scale called as PSVM. In this paper we are going to discuss a PSVM model for risk analysis which is based on map-reduce, and can easily handle a huge amount of data in a distributed manner."}
{"_id": "a47b4939945e9139ccbf37e2f232d5c57583b385", "title": "Classificationof Mammographic BreastDensityUsinga CombinedClassifierParadigm Keir Bovis andSameerSingh", "text": "In thispaperweinvestigateanew approach to theclassificationof mammographic imagesaccording to breasttype. The classificationof breastdensityin this studyis motivatedby its useasprior knowledge in theimageprocessingpipeline.By utilising thisknowledgeat differentstagesincludingenhanceme nt, segmentation and featureextraction, its applicationaims to increasethe sensiti vity of detectingbreastcancer . Our implementeddiscriminationof breastdensityis basedon the underlyingtexture containedwithin the breast tissueapparent on a digital mammogramandrealisedby utilising four approachesto quantifyingthe texture. Following featureextraction,we adopta variationon bootstrapaggregation(\u2019bagging\u2019) to meetthe assumptionsof independencein datarepresentationof theinputdataset,necessaryfor classifiercombination. Multiple classifierscomprisingfeed-forward Artifi cial NeuralNetwork (ANN) aresubseque ntly trainedwith the differentperturbedinput dataspacesusing10-fold cross-validation. Thesetof classifieroutputs,expressedin a probabilistic framework, aresubsequently combinedusingsix differentclassifiercombinationrulesandtheresultscompared.In this studywe examinetwo differentclassificationtasks;a four-classclassificationproblem differentiatingbetweenfatty, partly fatty, denseandextremelydensebreasttypesanda two-classproblems, differentiatingbetweendenseandfatty breasttypes. The datasetusedin this study is the Digital Database of ScreeningMammograms(DDSM) containingMedio-LateralOblique(MLO) views for eachbreastfor 377 patients. For both tasksthe bestcombinationstrategy was found using the productrule giving an average recognition rateon testof 71.4%for thefour-classproblemand96.7%for thetwo-classproblem."}
{"_id": "11bf72f89874b3bf3e950952543c96bf533d3399", "title": "DTC-SVM Scheme for Induction Motors Fedwith a Three-level Inverter", "text": "Direct Torque Control is a control technique in AC drive systems to obtain high performance torque control. The conventional DTC drive contains a pair of hysteresis comparators. DTC drives utilizing hysteresis comparators suffer from high torque ripple and variable switching frequency. The most common solution to those problems is to use the space vector depends on the reference torque and flux. In this Paper The space vector modulation technique (SVPWM) is applied to 2 level inverter control in the proposed DTC-based induction motor drive system, thereby dramatically reducing the torque ripple. Then the controller based on space vector modulation is designed to be applied in the control of Induction Motor (IM) with a three-level Inverter. This type of Inverter has several advantages over the standard two-level VSI, such as a greater number of levels in the output voltage waveforms, Lower dV/dt, less harmonic distortion in voltage and current waveforms and lower switching frequencies. This paper proposes a general SVPWM algorithm for three-level based on standard two-level SVPWM. The proposed scheme is described clearly and simulation results are reported to demonstrate its effectiveness. The entire control scheme is implemented with Matlab/Simulink. Keywords\u2014Direct torque control, space vector Pulsewidth modulation(SVPWM), neutral point clamped(NPC), two-level inverter."}
{"_id": "60a9035c45fe30e4e88dad530c4f5e476cc61b78", "title": "Data Mining and Knowledge Discovery : Applications , Techniques , Challenges and Process Models in Healthcare", "text": "Many healthcare leaders find themselves overwhelmed with data, but lack the information they need to make right decisions. Knowledge Discovery in Databases (KDD) can help organizations turn their data into information. Organizations that take advantage of KDD techniques will find that they can lower the healthcare costs while improving healthcare quality by using fast and better clinical decision making. In this paper, a review study is done on existing data mining and knowledge discovery techniques, applications and process models that are applicable to healthcare environments. The challenges for applying data mining techniques in healthcare environment will also be discussed."}
{"_id": "528db43eb99e4d3c6b0c7ed63d17332796b4270f", "title": "MPTLsim: a cycle-accurate, full-system simulator for x86-64 multicore architectures with coherent caches", "text": "The introduction of multicore microprocessors in the recent years has made it imperative to use cycleaccurate and full-system simulators in the architecture research community. We introduce MPTLsim - a multicore simulator for the X86 ISA that meets this need. MPTLsim is a uop-accurate, cycle-accurate, full-system simulator for multicore designs based on the X86-64 ISA. MPTLsim extends PTLsim, a publicly available single core simulator, with a host of additional features to support hyperthreading within a core and multiple cores, with detailed models for caches, on-chip interconnections and the memory data flow. MPTLsim incorporates detailed simulation models for cache controllers, interconnections and has built-in implementations of a number of cache coherency protocols."}
{"_id": "bbb9c3119edd9daa414fd8f2df5072587bfa3462", "title": "Apache Spark: a unified engine for big data processing", "text": "This open source computing framework unifies streaming, batch, and interactive big data workloads to unlock new applications."}
{"_id": "4b41aa7f0b0eae6beff4c6d98dd3631863ec51c2", "title": "Bayesian Non-Exhaustive Classification A Case Study: Online Name Disambiguation using Temporal Record Streams", "text": "The name entity disambiguation task aims to partition the records of multiple real-life persons so that each partition contains records pertaining to a unique person. Most of the existing solutions for this task operate in a batch mode, where all records to be disambiguated are initially available to the algorithm. However, more realistic settings require that the name disambiguation task be performed in an online fashion, in addition to, being able to identify records of new ambiguous entities having no preexisting records. In this work, we propose a Bayesian non-exhaustive classification framework for solving online name disambiguation task. Our proposed method uses a Dirichlet process prior with a Normal x Normal x Inverse Wishart data model which enables identification of new ambiguous entities who have no records in the training data. For online classification, we use one sweep Gibbs sampler which is very efficient and effective. As a case study we consider bibliographic data in a temporal stream format and disambiguate authors by partitioning their papers into homogeneous groups. Our experimental results demonstrate that the proposed method is better than existing methods for performing online name disambiguation task."}
{"_id": "3199197df603025a328e2c0837235e590acc10b1", "title": "Further improvement in reducing superficial contamination in NIRS using double short separation measurements", "text": "Near-Infrared Spectroscopy (NIRS) allows the recovery of the evoked hemodynamic response to brain activation. In adult human populations, the NIRS signal is strongly contaminated by systemic interference occurring in the superficial layers of the head. An approach to overcome this difficulty is to use additional NIRS measurements with short optode separations to measure the systemic hemodynamic fluctuations occurring in the superficial layers. These measurements can then be used as regressors in the post-experiment analysis to remove the systemic contamination and isolate the brain signal. In our previous work, we showed that the systemic interference measured in NIRS is heterogeneous across the surface of the scalp. As a consequence, the short separation measurement used in the regression procedure must be located close to the standard NIRS channel from which the evoked hemodynamic response of the brain is to be recovered. Here, we demonstrate that using two short separation measurements, one at the source optode and one at the detector optode, further increases the performance of the short separation regression method compared to using a single short separation measurement. While a single short separation channel produces an average reduction in noise of 33% for HbO, using a short separation channel at both source and detector reduces noise by 59% compared to the standard method using a general linear model (GLM) without short separation. For HbR, noise reduction of 3% is achieved using a single short separation and this number goes to 47% when two short separations are used. Our work emphasizes the importance of integrating short separation measurements both at the source and at the detector optode of the standard channels from which the hemodynamic response is to be recovered. While the implementation of short separation sources presents some difficulties experimentally, the improvement in noise reduction is significant enough to justify the practical challenges."}
{"_id": "ab8799dce29812a8e04cfa01eea095515c24b963", "title": "Magnetic integration of LCC compensated resonant converter for inductive power transfer applications", "text": "The aim of this paper is to present a novel magnetic integrated LCC series-parallel compensation topology for the design of both the primary and pickup pads in inductive power transfer (IPT) applications. A more compact structure can be realized by integrating the inductors of the compensation circuit into the coupled power-transmitting coils. The impact of the extra coupling between the compensated coils (inductors) and the power-transferring coils is modeled and analyzed. The basic characteristics of the proposed topology are studied based on the first harmonic approximation (FHA). High-order harmonics are taken into account to derive an analytical solution for the current at the switching instant, which is helpful for the design of soft-switching operation. An IPT system with up to 5.6kW output power for electric vehicles (EV) charger has been built to verify the validity of the proposed magnetic integrated compensation topology. A peak efficiency of 95.36% from DC power source to the battery load is achieved at rated operation condition."}
{"_id": "1b1337a166cdcf6ee51a70cb23f291c36e9eee34", "title": "Fine-to-Coarse Global Registration of RGB-D Scans", "text": "RGB-D scanning of indoor environments is important for many applications, including real estate, interior design, and virtual reality. However, it is still challenging to register RGB-D images from a hand-held camera over a long video sequence into a globally consistent 3D model. Current methods often can lose tracking or drift and thus fail to reconstruct salient structures in large environments (e.g., parallel walls in different rooms). To address this problem, we propose a fine-to-coarse global registration algorithm that leverages robust registrations at finer scales to seed detection and enforcement of new correspondence and structural constraints at coarser scales. To test global registration algorithms, we provide a benchmark with 10,401 manually-clicked point correspondences in 25 scenes from the SUN3D dataset. During experiments with this benchmark, we find that our fine-to-coarse algorithm registers long RGB-D sequences better than previous methods."}
{"_id": "89324b37187a8a4115e7619056bca5fcf78e8928", "title": "Automatically Quantifying Radiographic Knee Osteoarthritis Severity Final Report-CS 229-Machine Learning", "text": "In this paper, we implement machine learning algorithms to automatically quantify knee osteoarthritis severity from X-ray images according to the Kellgren & Lawrence (KL) grades. We implement and evaluate the performance of various machine learning models like transfer learning, support vector machines and fully connected neural networks based on their classification accuracy. We also implement the task of automatically extracting the knee-joint region from the X-ray images and quantifying their severity by training a faster region convolutional neural network (R-CNN)."}
{"_id": "010d1631433bb22a9261fba477b6e6f5a0d722b8", "title": "Emotional Chatting Machine: Emotional Conversation Generation with Internal and External Memory", "text": "Perception and expression of emotion are key factors to the success of dialogue systems or conversational agents. However, this problem has not been studied in large-scale conversation generation so far. In this paper, we propose Emotional Chatting Machine (ECM) that can generate appropriate responses not only in content (relevant and grammatical) but also in emotion (emotionally consistent). To the best of our knowledge, this is the first work that addresses the emotion factor in large-scale conversation generation. ECM addresses the factor using three new mechanisms that respectively (1) models the high-level abstraction of emotion expressions by embedding emotion categories, (2) captures the change of implicit internal emotion states, and (3) uses explicit emotion expressions with an external emotion vocabulary. Experiments show that the proposed model can generate responses appropriate not only in content but also in emotion."}
{"_id": "a907e5609bb9efd1bafb11cd424faab14fd42e4a", "title": "Information Retrieval with Verbose Queries", "text": "Recently, the focus of many novel search applications shifted from short keyword queries to verbose natural language queries. Examples include question answering systems and dialogue systems, voice search on mobile devices and entity search engines like Facebook's Graph Search or Google's Knowledge Graph. However the performance of textbook information retrieval techniques for such verbose queries is not as good as that for their shorter counterparts. Thus, effective handling of verbose queries has become a critical factor for adoption of information retrieval techniques in this new breed of search applications. Over the past decade, the information retrieval community has deeply explored the problem of transforming natural language verbose queries using operations like reduction, weighting, expansion, reformulation and segmentation into more effective structural representations. However, thus far, there was not a coherent and organized tutorial on this topic. In this tutorial, we aim to put together various research pieces of the puzzle, provide a comprehensive and structured overview of various proposed methods, and also list various application scenarios where effective verbose query processing can make a significant difference."}
{"_id": "600a5d60cb96eda2a9849413e747547d70dfb00a", "title": "Biologically inspired protection of deep networks from adversarial attacks", "text": "Inspired by biophysical principles underlying nonlinear dendritic computation in neural circuits, we develop a scheme to train deep neural networks to make them robust to adversarial attacks. Our scheme generates highly nonlinear, saturated neural networks that achieve state of the art performance on gradient based adversarial examples on MNIST, despite never being exposed to adversarially chosen examples during training. Moreover, these networks exhibit unprecedented robustness to targeted, iterative schemes for generating adversarial examples, including second-order methods. We further identify principles governing how these networks achieve their robustness, drawing on methods from information geometry. We find these networks progressively create highly flat and compressed internal representations that are sensitive to very few input dimensions, while still solving the task. Moreover, they employ highly kurtotic weight distributions, also found in the brain, and we demonstrate how such kurtosis can protect even linear classifiers from adversarial attack."}
{"_id": "87b1b46129899f328017798e315cae82e0ba70d8", "title": "A fully balanced pseudo-differential OTA with common-mode feedforward and inherent common-mode feedback detector", "text": "A pseudo-differential fully balanced fully symmetric CMOS operational transconductance amplifier (OTA) architecture with inherent common-mode detection is proposed. Through judicious arrangement, the common-mode feedback circuit can be economically implemented. The OTA achieves a third harmonic distortion of 43 dB for 900 mVpp at 30 MHz. The OTA, fabricated in 0.5m CMOS process, is used to design a 100-MHz fourth-order linear phase filter. The measured filter\u2019s group delay ripple is 3% for frequencies up to 100 MHz, and the measured dynamic range is 45 dB for a total harmonic distortion of 46 dB. The filter consumes 42.9 mW per complex pole pair while operating from a 1.65-V power supply."}
{"_id": "11b8093f4a8c421a8638c1be0937151d968d95f9", "title": "Emergence of Fatal PRRSV Variants: Unparalleled Outbreaks of Atypical PRRS in China and Molecular Dissection of the Unique Hallmark", "text": "Porcine reproductive and respiratory syndrome (PRRS) is a severe viral disease in pigs, causing great economic losses worldwide each year. The causative agent of the disease, PRRS virus (PRRSV), is a member of the family Arteriviridae. Here we report our investigation of the unparalleled large-scale outbreaks of an originally unknown, but so-called \"high fever\" disease in China in 2006 with the essence of PRRS, which spread to more than 10 provinces (autonomous cities or regions) and affected over 2,000,000 pigs with about 400,000 fatal cases. Different from the typical PRRS, numerous adult sows were also infected by the \"high fever\" disease. This atypical PRRS pandemic was initially identified as a hog cholera-like disease manifesting neurological symptoms (e.g., shivering), high fever (40-42 degrees C), erythematous blanching rash, etc. Autopsies combined with immunological analyses clearly showed that multiple organs were infected by highly pathogenic PRRSVs with severe pathological changes observed. Whole-genome analysis of the isolated viruses revealed that these PRRSV isolates are grouped into Type II and are highly homologous to HB-1, a Chinese strain of PRRSV (96.5% nucleotide identity). More importantly, we observed a unique molecular hallmark in these viral isolates, namely a discontinuous deletion of 30 amino acids in nonstructural protein 2 (NSP2). Taken together, this is the first comprehensive report documenting the 2006 epidemic of atypical PRRS outbreak in China and identifying the 30 amino-acid deletion in NSP2, a novel determining factor for virulence which may be implicated in the high pathogenicity of PRRSV, and will stimulate further study by using the infectious cDNA clone technique."}
{"_id": "c257c6948fe63f7ee3df0a2d18916d2e3fdc85e5", "title": "Destination bonding : Hybrid cognition using Instagram", "text": "Article history: Received August 28, 2015 Received in revised format November 28, 2015 Accepted December 1, 2015 Available online December 1, 2015 Empirical research has identified the phenomenon of destination bonding as a result of summated physical and emotional values associated with the destination. Physical values, namely natural landscape & other physical settings and emotional values, namely the enculturation processes, have a significant role to play in portraying visitors\u2019 cognitive framework for destination preference. The physical values seemed to be the stimulator for bonding that embodies action or behavior tendencies in imagery. The emotional values were the conditions that lead to affective bonding and are reflected in attitudes for a place which were evident in text narratives. Social networking on virtual platforms offers the scope for hybrid cognitive expression using imagery and text to the visitors. Instagram has emerged as an application-window to capture these hybrid cognitions of visitors. This study focuses on assessing the relationship between hybrid cognition of visitors expressed via Instagram and their bond with the destination. Further to this, the study attempts to examine the impact of hybrid cognition of visitors on the behavioral pattern of prospective visitors to the destination. The study revealed that sharing of visual imageries and related text by the visitors is an expression of the physico-emotional bonding with the destination. It was further established that hybrid cognition strongly asserts destination bonding and has been also found to have moderating impact on the link between destination bonding and electronic-word-of-mouth. \u00a9 2016 Growing Science Ltd. All rights reserved."}
{"_id": "203af6916b501ee53d9c8c7164324ef4f019ca2d", "title": "Hand grasp and motion for intent expression in mid-air virtual pottery", "text": "We describe the design and evaluation of a geometric interaction technique for bare-hand mid-air virtual pottery. We model the shaping of a pot as a gradual and progressive convergence of the potprofile to the shape of the user\u2019s hand represented as a point-cloud (PCL). Our pottery-inspired application served as a platform for systematically revealing how users use their hands to express the intent of deformation during a pot shaping process. Through our approach, we address two specific problems: (a) determining start and end of deformation without explicit clutching and declutching, and (b) identifying user\u2019s intent by characterizing grasp and motion of the hand on the pot. We evaluated our approach\u2019s performance in terms of intent classification, users\u2019 behavior, and users\u2019 perception of controllability. We found that the expressive capability of hand articulation can be effectively harnessed for controllable shaping by organizing the deformation process in broad classes of intended operations such as pulling, pushing and fairing. After minimal practice with the pottery application, users could figure out their own strategy for reaching, grasping and deforming the pot. Further, the use of PCL as mid-air input allows for using common physical objects as tools for pot deformation. Users particularly enjoyed this aspect of our method for shaping pots."}
{"_id": "0cc24d8308665874bddf5cb874c7fb122c249666", "title": "SoftGUESS: Visualization and Exploration of Code Clones in Context", "text": "We introduce SoftGUESS, a code clone exploration system. SoftGUESS is built on the more general GUESS system which provides users with a mechanism to interactively explore graph structures both through direct manipulation as well as a domain-specific language. We demonstrate SoftGUESS through a number of mini-applications to analyze evolutionary code-clone behavior in software systems. The mini-applications of SoftGUESS represent a novel way of looking at code-clones in the context of many system features. It is our hope that SoftGUESS will form the basis for other analysis tools in the software-engineering domain."}
{"_id": "2ccbb28d9f3c0f4867826f24567b4183993037b3", "title": "The Diffusion of Innovations in Social Networks \u2217", "text": "This paper determines how different network structures influence the diffusion of innovations. We develop a model of diffusion where: 1. an individual\u2019s decision to adopt a new technology is influenced by his contacts; and 2. contacts can discuss, coordinate, and make adoption decisions together. A measure of connectedness, \u2018cohesion\u2019, determines diffusion. A cohesive community is defined as a group in which all members have a high proportion of their contacts within the group. We show a key trade-off: on one hand, a cohesive community can hinder diffusion by blocking the spread of a technology into the group; on the other hand, cohesive communities can be particularly effective at acting collectively to adopt an innovation. We find that for technologies with low externalities (that require few people to adopt before others are willing to adopt), social structures with loose ties, where people are not part of cohesive groups, enable greater diffusion. However, as externalities increase (technologies require more people to adopt before others are willing to adopt), social structures with increasingly cohesive groups enable greater diffusion. Given that societal structure is known to differ systematically along this dimension, our findings point to specialization in technological progress exhibiting these patterns. \u2217Bryony Reich, Faculty of Economics, University College London. Email: b.reich@ucl.ac.uk. I would like to thank Alberto Alesina, Antonio Cabrales, Sanjeev Goyal, and Jorgen Weibull for their invaluable guidance and support. I benefited greatly from conversations with and comments of Marco Bassetto, Lars Nesheim and Imran Rasul. I am grateful to Jonathan Newton for numerous interactions at all stages of this project. For helpful comments I would also like to thank Lucie Gadenne, Terri Kneeland, and Sueyhun Kwon, as well as seminar participants at Alicante, Cambridge, INET Contagion Conference, Oxford, PET Luxembourg, and UCL. I gratefully acknowledge financial support from the UK Economic and Social Research Council (grant number ES/K001396/1)."}
{"_id": "2e3fc086ff84d6589dc91200fbfa86903a2d3b76", "title": "SLANGZY: a fuzzy logic-based algorithm for English slang meaning selection", "text": "The text present on online forums and social media platforms conventionally does not follow a standard sentence structure and uses words that are commonly termed as slang or Internet language. Online text mining involves a surfeit of slang words; however, there is a distinct lack of reliable resources available to find accurate meanings of these words. We aim to bridge this gap by introducing SLANGZY, a fuzzy logic-based algorithm for English slang meaning selection which uses a mathematical factor termed as \u201cslang factor\u201d to judge the accuracy of slang word definitions found in Urban Dictionary, the largest Slang Dictionary on the Internet. This slang factor is used to rank definitions of English slang words retrieved from over 4 million unique words on popular social media platforms such as Twitter, YouTube and Reddit. We investigate the usefulness of SLANGZY over Urban Dictionary to find meanings of slang words in social media text and achieve encouraging results due to recognizing the importance of multiple criteria in the calculation of slang factor in the algorithm over successive experiments. The performance of SLANGZY with optimum weights for each criterion is further assessed using the accuracy, error rate, F-Score as well as a difference factor for English slang word definitions. To further illustrate the results, a web portal is created to display the contents of the Slang Dictionary consisting of definitions ranked according to the calculated slang factors."}
{"_id": "05bcd2f5d1833ac354de01341d73e42203a5b6c0", "title": "A Topic Model for Word Sense Disambiguation", "text": "We develop latent Dirichlet allocation with WORDNET (LDAWN), an unsupervised probabilistic topic model that includes word sense as a hidden variable. We develop a probabilistic posterior inference algorithm for simultaneously disambiguating a corpus and learning the domains in which to consider each word. Using the WORDNET hierarchy, we embed the construction of Abney and Light (1999) in the topic model and show that automatically learned domains improve WSD accuracy compared to alternative contexts."}
{"_id": "078fdc9d7dd7105dcc5e65aa19edefe3e48e8bc7", "title": "Probabilistic author-topic models for information discovery", "text": "We propose a new unsupervised learning technique for extracting information from large text collections. We model documents as if they were generated by a two-stage stochastic process. Each author is represented by a probability distribution over topics, and each topic is represented as a probability distribution over words for that topic. The words in a multi-author paper are assumed to be the result of a mixture of each authors' topic mixture. The topic-word and author-topic distributions are learned from data in an unsupervised manner using a Markov chain Monte Carlo algorithm. We apply the methodology to a large corpus of 160,000 abstracts and 85,000 authors from the well-known CiteSeer digital library, and learn a model with 300 topics. We discuss in detail the interpretation of the results discovered by the system including specific topic and author models, ranking of authors by topic and topics by author, significant trends in the computer science literature between 1990 and 2002, parsing of abstracts by topics and authors and detection of unusual papers by specific authors. An online query interface to the model is also discussed that allows interactive exploration of author-topic models for corpora such as CiteSeer."}
{"_id": "215aa495b4c860a1e6d87f2c36f34da464376cc4", "title": "Finding scientific topics.", "text": "A first step in identifying the content of a document is determining which topics that document addresses. We describe a generative model for documents, introduced by Blei, Ng, and Jordan [Blei, D. M., Ng, A. Y. & Jordan, M. I. (2003) J. Machine Learn. Res. 3, 993-1022], in which each document is generated by choosing a distribution over topics and then choosing each word in the document from a topic selected according to this distribution. We then present a Markov chain Monte Carlo algorithm for inference in this model. We use this algorithm to analyze abstracts from PNAS by using Bayesian model selection to establish the number of topics. We show that the extracted topics capture meaningful structure in the data, consistent with the class designations provided by the authors of the articles, and outline further applications of this analysis, including identifying \"hot topics\" by examining temporal dynamics and tagging abstracts to illustrate semantic content."}
{"_id": "2483c29cd07f05d20dab7ed16d5ea1936226259c", "title": "Mixtures of hierarchical topics with Pachinko allocation", "text": "The four-level pachinko allocation model (PAM) (Li & McCallum, 2006) represents correlations among topics using a DAG structure. It does not, however, represent a nested hierarchy of topics, with some topical word distributions representing the vocabulary that is shared among several more specific topics. This paper presents hierarchical PAM---an enhancement that explicitly represents a topic hierarchy. This model can be seen as combining the advantages of hLDA's topical hierarchy representation with PAM's ability to mix multiple leaves of the topic hierarchy. Experimental results show improvements in likelihood of held-out documents, as well as mutual information between automatically-discovered topics and humangenerated categories such as journals."}
{"_id": "271d031d03d217170b2d1b1c4ae9d777dc18692b", "title": "A Condensation Approach to Privacy Preserving Data Mining", "text": "In recent years, privacy preserving data mining has become an important problem because of the large amount of personal data which is tracked by many business applications. In many cases, users are unwilling to provide personal information unless the privacy of sensitive information is guaranteed. In this paper, we propose a new framework for privacy preserving data mining of multi-dimensional data. Previous work for privacy preserving data mining uses a perturbation approach which reconstructs data distributions in order to perform the mining. Such an approach treats each dimension independently and therefore ignores the correlations between the different dimensions. In addition, it requires the development of a new distribution based algorithm for each data mining problem, since it does not use the multi-dimensional records, but uses aggregate distributions of the data as input. This leads to a fundamental re-design of data mining algorithms. In this paper, we will develop a new and flexible approach for privacy preserving data mining which does not require new problem-specific algorithms, since it maps the original data set into a new anonymized data set. This anonymized data closely matches the characteristics of the original data including the correlations among the different dimensions. We present empirical results illustrating the effectiveness of the method."}
{"_id": "18ca2837d280a6b2250024b6b0e59345601064a7", "title": "Nonlinear dimensionality reduction by locally linear embedding.", "text": "Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text."}
{"_id": "c86754453c88a5ccf3d4a7ef7acd66ae1cd97928", "title": "Influence of External Currents in Sensors Based on PCB Rogowski Coils", "text": "A current sensor based in the Rogowski coil is an innovative measuring system that gives advantages with respect to conventional measuring systems based in current transformers with magnetic core [1], [2].and [3]. Their main advantages are: the linearity, the span and the bandwidth. Different kinds of manufacturing allow to obtain an important variety of Rogowski coils with different properties. One way of manufacturing is using the same method as for producing printed circuit boards, so by this way is possible to produce coils very similar and with a high precision. The authors are working in current measurement with Rogowski coils or Hall effect sensors and in particular in the realization of good and accurate coils [4] and [5]. In this work, the influence of external currents to the coil in the measured current by the coil has been evaluated."}
{"_id": "1d89516427c0d91653b70171a0e8998af9d5960b", "title": "Identifying quantitative trait loci via group-sparse multitask regression and feature selection: an imaging genetics study of the ADNI cohort", "text": "MOTIVATION\nRecent advances in high-throughput genotyping and brain imaging techniques enable new approaches to study the influence of genetic variation on brain structures and functions. Traditional association studies typically employ independent and pairwise univariate analysis, which treats single nucleotide polymorphisms (SNPs) and quantitative traits (QTs) as isolated units and ignores important underlying interacting relationships between the units. New methods are proposed here to overcome this limitation.\n\n\nRESULTS\nTaking into account the interlinked structure within and between SNPs and imaging QTs, we propose a novel Group-Sparse Multi-task Regression and Feature Selection (G-SMuRFS) method to identify quantitative trait loci for multiple disease-relevant QTs and apply it to a study in mild cognitive impairment and Alzheimer's disease. Built upon regression analysis, our model uses a new form of regularization, group \u2113(2,1)-norm (G(2,1)-norm), to incorporate the biological group structures among SNPs induced from their genetic arrangement. The new G(2,1)-norm considers the regression coefficients of all the SNPs in each group with respect to all the QTs together and enforces sparsity at the group level. In addition, an \u2113(2,1)-norm regularization is utilized to couple feature selection across multiple tasks to make use of the shared underlying mechanism among different brain regions. The effectiveness of the proposed method is demonstrated by both clearly improved prediction performance in empirical evaluations and a compact set of selected SNP predictors relevant to the imaging QTs.\n\n\nAVAILABILITY\nSoftware is publicly available at: http://ranger.uta.edu/%7eheng/imaging-genetics/."}
{"_id": "27d9a8445e322c0ac1f335aea6a5591c4d120b05", "title": "Security Against Hardware Trojan Attacks Using Key-Based Design Obfuscation", "text": "Malicious modification of hardware in untrusted fabrication facilities, referred to as hardware Trojan, has emerged as a major security concern. Comprehensive detection of these Trojans during postmanufacturing test has been shown to be extremely difficult. Hence, it is important to develop design techniques that provide effective countermeasures against hardware Trojans by either preventing Trojan attacks or facilitating detection during test. Obfuscation is a technique that is conventionally employed to prevent piracy of software and hardware intellectual property (IP). In this work, we propose a novel application of key-based circuit structure and functionality obfuscation to achieve protection against hardware Trojans triggered by rare internal circuit conditions. The proposed obfuscation scheme is based on judicious modification of the state transition function, which creates two distinct functional modes: normal and Responsible Editor: S. T. Chakradhar A preliminary version of this work has been published in the International Conference on Computer Aided Design (ICCAD), 2009. R. S. Chakraborty (B) Department of Computer Science and Engineering, Indian Institute of Technology, Kharagpur, West Bengal 721302, India e-mail: rschakraborty@cse.iitkgp.ernet.in S. Bhunia Department of Electrical Engineering and Computer Science, Case Western Reserve University, Cleveland, OH 44106, USA e-mail: skb21@case.edu obfuscated. A circuit transitions from the obfuscated to the normal mode only upon application of a specific input sequence, which defines the key. We show that it provides security against Trojan attacks in two ways: (1) it makes some inserted Trojans benign, i.e. they become effective only in the obfuscated mode; and (2) it prevents an adversary from exploiting the true rare events in a circuit to insert hard-to-detect Trojans. The proposed design methodology can thus achieve simultaneous protection from hardware Trojans and hardware IP piracy. Besides protecting ICs against Trojan attacks in foundry, we show that it can also protect against malicious modifications by untrusted computeraided design (CAD) tools in both SoC and FPGA design flows. Simulation results for a set of benchmark circuits show that the scheme is capable of achieving high levels of security against Trojan attacks at modest area, power and delay overhead."}
{"_id": "3f8b54ef69b2c8682a76f074110be0f11815f63a", "title": "Combining Neural Networks and Log-linear Models to Improve Relation Extraction", "text": "The last decade has witnessed the success of traditional feature-based methods for exploiting the discrete structures such as words or lexical patterns to extract relations from text. Recently, convolutional and recurrent neural networks have been shown to capture effective hidden structures within sentences via continuous representations, thereby significantly advancing the performance of relation extraction. The advantage of convolutional neural networks is their capacity to generalize the consecutive k-grams in the sentences while recurrent neural networks are effective to encode long range sentence context. This paper proposes to combine the traditional feature-based method, the convolutional and recurrent neural networks to simultaneously benefit from their advantages. Our systematic evaluation of different network architectures and combination methods demonstrates the effectiveness of this approach and results in the state-ofthe-art performance on the ACE 2005 and SemEval datasets."}
{"_id": "d73a261bfdb166dbc44a5fb14fff050c52574a2b", "title": "A second generation computer forensic analysis system", "text": "The architecture of existing \u2013 first generation \u2013 computer forensic tools, including the widely used EnCase and FTK products, is rapidly becoming outdated. Tools are not keeping pace with increased complexity and data volumes of modern investigations. This paper discuses the limitations of first generation computer forensic tools. Several metrics for measuring the efficacy and performance of computer forensic tools are introduced. A set of requirements for second generation tools are proposed. A high-level design for a (work in progress) second generation computer forensic analysis system is presented. a 2009 Digital Forensic Research workshop. Published by Elsevier Ltd. All rights reserved."}
{"_id": "b4641313431f1525d276677bcd8fc5de1c726a8d", "title": "A Low-Power 32-Channel Digitally Programmable Neural Recording Integrated Circuit", "text": "We report the design of an ultra-low-power 32-channel neural-recording integrated circuit (chip) in a 0.18 \u03bc m CMOS technology. The chip consists of eight neural recording modules where each module contains four neural amplifiers, an analog multiplexer, an A/D converter, and a serial programming interface. Each amplifier can be programmed to record either spikes or LFPs with a programmable gain from 49-66 dB. To minimize the total power consumption, an adaptive-biasing scheme is utilized to adjust each amplifier's input-referred noise to suit the background noise at the recording site. The amplifier's input-referred noise can be adjusted from 11.2 \u03bcVrms (total power of 5.4 \u03bcW) down to 5.4 \u03bcVrms (total power of 20 \u03bcW) in the spike-recording setting. The ADC in each recording module digitizes the a.c. signal input to each amplifier at 8-bit precision with a sampling rate of 31.25 kS/s per channel, with an average power consumption of 483 nW per channel, and, because of a.c. coupling, allows d.c. operation over a wide dynamic range. It achieves an ENOB of 7.65, resulting in a net efficiency of 77 fJ/State, making it one of the most energy-efficient designs for neural recording applications. The presented chip was successfully tested in an in vivo wireless recording experiment from a behaving primate with an average power dissipation per channel of 10.1 \u03bc W. The neural amplifier and the ADC occupy areas of 0.03 mm2 and 0.02 mm2 respectively, making our design simultaneously area efficient and power efficient, thus enabling scaling to high channel-count systems."}
{"_id": "7e33296dfff963d595d2121f14a7a0bd5c187188", "title": "Linear-Time Algorithms for Testing the Satisfiability of Propositional Horn Formulae", "text": "D New algorithms for deciding whether a (propositional) Horn formula is satisfiable are presented. If the Horn formula A contains K distinct propositional letters and if it is assumed that they are exactly Pi,. . . , PK, the two algorithms presented in this paper run in time O(N), where N is the total number of occurrences of literals in A. By representing a Horn proposition as a graph, the satisfiability problem can be formulated as a data flow problem, a certain type of pebbling. The difference between the two algorithms presented here is the strategy used for pebbling the graph. The first algorithm is based on the principle used for finding the set of nonterminals of a context-free grammar from which the empty string can be derived. The second algorithm is a graph traversal and uses a \u201ccall-by-need\u201d strategy. This algorithm uses an attribute grammar to translate a propositional Horn formula to its corresponding graph in linear time. Our formulation of the satisfiability problem as a data flow problem appears to be new and suggests the possibility of improving efficiency using parallel processors. a"}
{"_id": "81394ddc1465027c148c0b7d8005bb9e5712d8f7", "title": "ECNUCS: Measuring Short Text Semantic Equivalence Using Multiple Similarity Measurements", "text": "This paper reports our submissions to the Semantic Textual Similarity (STS) task in SemEval 2013 (Task 6). We submitted three Support Vector Regression (SVR) systems in core task, using 6 types of similarity measures, i.e., string similarity, number similarity, knowledge-based similarity, corpus-based similarity, syntactic dependency similarity and machine translation similarity. Our third system with different training data and different feature sets for each test data set performs the best and ranks 35 out of 90 runs. We also submitted two systems in typed task using string based measure and Named Entity based measure. Our best system ranks 5 out of 15 runs."}
{"_id": "719fd380590df3ccbe50db7ed74f9c84fe7e0d6a", "title": "Protocol for developing ANN models and its application to the assessment of the quality of the ANN model development process in drinking water quality modelling", "text": "The application of Artificial Neural Networks (ANNs) in the field of environmental and water resources modelling has become increasingly popular since early 1990s. Despite the recognition of the need for a consistent approach to the development of ANN models and the importance of providing adequate details of the model development process, there is no systematic protocol for the development and documentation of ANN models. In order to address this shortcoming, such a protocol is introduced in this paper. In addition, the protocol is used to critically review the quality of the ANN model development and reporting processes employed in 81 journal papers since 2000 in which ANNs have been used for drinking water quality modelling. The results show that model architecture selection is the best implemented step, while greater focus should be given to input selection considering input independence and model validation considering replicative and structural validity. 2014 Elsevier Ltd. All rights reserved."}
{"_id": "372a9edb48c6894e13bd9946ba50442b9f2f6f2c", "title": "Micro-synchrophasors for distribution systems", "text": "This paper describes a research project to develop a network of high-precision phasor measurement units, termed micro-synchrophasors or \u03bcPMUs, and explore the applications of \u03bcPMU data for electric power distribution systems."}
{"_id": "c9298d761177fa3ab1000aea4dede5e76f891f6f", "title": "From the ephemeral to the enduring: how approach-oriented mindsets lead to greater status.", "text": "We propose that the psychological states individuals bring into newly formed groups can produce meaningful differences in status attainment. Three experiments explored whether experimentally created approach-oriented mindsets affected status attainment in groups, both immediately and over time. We predicted that approach-oriented states would lead to greater status attainment by increasing proactive behavior. Furthermore, we hypothesized that these status gains would persist longitudinally, days after the original mindsets had dissipated, due to the self-reinforcing behavioral cycles the approach-oriented states initiated. In Experiment 1, individuals primed with a promotion focus achieved higher status in their newly formed groups, and this was mediated by proactive behavior as rated by themselves and their teammates. Experiment 2 was a longitudinal experiment and revealed that individuals primed with power achieved higher status, both immediately following the prime and when the groups were reassembled 2 days later to work on new tasks. These effects were mediated by independent coders' ratings of proactive behavior during the first few minutes of group interaction. Experiment 3 was another longitudinal experiment and revealed that priming happiness led to greater status as well as greater acquisition of material resources. Importantly, these immediate and longitudinal effects were independent of the effects of a number of stable dispositional traits. Our results establish that approach-oriented psychological states affect status attainment, over and above the more stable characteristics emphasized in prior research, and provide the most direct test yet of the self-reinforcing nature of status hierarchies. These findings depict a dynamic view of status organization in which the same group may organize itself differently depending on members' incoming psychological states."}
{"_id": "2fa5eb6e30116a6f8d073c122d9c087f844d7912", "title": "Relating reinforcement learning performance to classification performance", "text": "We prove a quantitative connection between the expected sum of rewards of a policy and binary classification performance on created subproblems. This connection holds without any unobservable assumptions (no assumption of independence, small mixing time, fully observable states, or even hidden states) and the resulting statement is independent of the number of states or actions. The statement is critically dependent on the size of the rewards and prediction performance of the created classifiers.We also provide some general guidelines for obtaining good classification performance on the created subproblems. In particular, we discuss possible methods for generating training examples for a classifier learning algorithm."}
{"_id": "6755b01e14b2e7ee39aef0b6bf573769a39eabfe", "title": "Semantic Labeling of Aerial and Satellite Imagery", "text": "Inspired by the recent success of deep convolutional neural networks (CNNs) and feature aggregation in the field of computer vision and machine learning, we propose an effective approach to semantic pixel labeling of aerial and satellite imagery using both CNN features and hand-crafted features. Both CNN and hand-crafted features are applied to dense image patches to produce per-pixel class probabilities. Conditional random fields (CRFs) are applied as a postprocessing step. The CRF infers a labeling that smooths regions while respecting the edges present in the imagery. The combination of these factors leads to a semantic labeling framework which outperforms all existing algorithms on the International Society of Photogrammetry and Remote Sensing (ISPRS) two-dimensional Semantic Labeling Challenge dataset. We advance state-of-the-art results by improving the overall accuracy to 88% on the ISPRS Semantic Labeling Contest. In this paper, we also explore the possibility of applying the proposed framework to other types of data. Our experimental results demonstrate the generalization capability of our approach and its ability to produce accurate results."}
{"_id": "e8a90511a95025aab5f0867ce99704151e40b207", "title": "Are Faculty Members Ready? Individual Factors Affecting Knowledge Management Readiness in Universities", "text": "Knowledge Management (KM) provides a systematic process to help in the creation, transfer and use of knowledge across the university, leading to increased productivity. While KM has been successfully used elsewhere, universities have been late in adopting it. Before a university can initiate KM, it needs to determine if it is ready for KM or not. Through a web-based survey sent to 1263 faculty members from 59 accredited Library and Information Science programs in universities across North America, this study investigated the e\u00aeect of individual factors of trust, knowledge self-e\u00b1cacy, collegiality, openness to change and reciprocity on individual readiness to participate in a KM initiative, and the degree to which this a\u00aeects perceived organisational readiness to adopt KM. 157 valid responses were received. Using structural equation modeling, the study found that apart from trust, all other factors positively a\u00aeected individual readiness, which was found to a\u00aeect organisational readiness. Findings should help universities identify opportunities and barriers before they can adopt KM. It should be a useful contribution to the KM literature, especially in the university context."}
{"_id": "4214cb09e29795f5363e5e3b545750dce027b668", "title": "Overview of Virtual Reality Technologies", "text": "The promise of being able to be inside another world might be resolved by Virtual Reality. It wouldn\u2019t be na\u00efve to assume that we will be able to enter and completely feel another world at some point in the future; be able to interact with knowledge and entertainment in a totally immersive state. Advancements are becoming more frequent, and with the recent popularity of technology in this generation, a lot of investment is being made. Prototypes of head displays that completely cover the user\u2019s view and movement recognition which doesn\u2019t need an intermediate device for input of data are already Virtual Reality devices available to developers and even to the public. From time to time, the way we interact with computers change, and virtual reality promises to make this interaction as real as possible. Although scenes like flying a jet or tank, are already tangible, another scenes, such as being able to feel the dry air of the Sahara in geography classes or feel the hard, cold scales of a dragon in a computer game seem to be in a long w a y f r o m n o w. H o w e v e r , t e c h n o l o g i c advancements and increase in the popularity of these technologies point to the possibility of such amazing scenes coming true."}
{"_id": "1a7f1685e4c9a200b0c213060e203137279142d6", "title": "Ranking with local regression and global alignment for cross media retrieval", "text": "Rich multimedia content including images, audio and text are frequently used to describe the same semantics in E-Learning and Ebusiness web pages, instructive slides, multimedia cyclopedias, and so on. In this paper, we present a framework for cross-media retrieval, where the query example and the retrieved result(s) can be of different media types. We first construct Multimedia Correlation Space (MMCS) by exploring the semantic correlation of different multimedia modalities, during which multimedia content and co-occurrence information is utilized. We propose a novel ranking algorithm, namely ranking with Local Regression and Global Alignment (LRGA), which learns a robust Laplacian matrix for data ranking. In LRGA, for each data point, a local linear regression model is used to predict the ranking values of its neighboring points. We propose a unified objective function to globally align the local models from all the data points so that an optimal ranking value can be assigned to each data point. LRGA is insensitive to parameters, making it particularly suitable for data ranking. A relevance feedback algorithm is proposed to improve the retrieval performance. Comprehensive experiments have demonstrated the effectiveness of our methods."}
{"_id": "8f127eca4d7df80ea8227a84d9c41cc7d72e370f", "title": "Using Smart City Technology to Make Healthcare Smarter", "text": "Smart cities use information and communication technologies (ICTs) to scale services include utilities and transportation to a growing population. In this paper, we discuss how smart city ICTs can also improve healthcare effectiveness and lower healthcare cost for smart city residents. We survey current literature and introduce original research to offer an overview of how smart city infrastructure supports strategic healthcare using both mobile and ambient sensors combined with machine learning. Finally, we consider challenges that will be faced as healthcare providers make use of these opportunities."}
{"_id": "484937d9f5c5f4ade7cf39c9dfd538b469bf8823", "title": "A Study on Knowledge , Perceptions and Attitudes about Screening and Diagnosis of Diabetes in Saudi Population", "text": "Introduction: Timely screening and treatment of diabetes can considerably reduce associated health adverse effects. The purpose of the study was to evaluate the knowledge about diabetes and perceptions and attitude about screening and early diagnosis of diabetes among the population. Methods: A cross-sectional questionnaire-based study conducted among non-diabetes adult people attending primary health care centers in Jeddah, Saudi Arabia. Participants\u2019 knowledge about diabetes risk factors and complications and perceptions and attitude regarding screening and early diagnosis of diabetes was assessed and different scores were calculated. Results: Total 202 patients were included: mean (SD) age was 38.19 (13.25) years and 55.0% were females. Knowledge about diabetes risk factors and complications was 60.41%84.77% and 44.67%-67.51%, respectively, depending on the item. Perceptions about screening and early diagnosis showed that 81.19% believed that screening tests exist and 75.14% believed that it is possible to diagnose diabetes before complication stage; while 84.16% believed that early diagnosis increases treatment efficacy, decreases incidence of complications (83.66%), and allows early treatment (83.66%). Regarding attitude, 86.14% agreed to undergo diabetes screening if advised by the physician and 60.40% would do on their own initiative. Linear regression showed a positive correlation of attitude score with knowledge about diabetes risk factors (OR=1.87; p<0.0001) and complications (OR=1.46; p<0.0001); perception about feasibility of screening (OR=1.93; p<0.0001) and benefits of early diagnosis (OR=1.69; p<0.0001). Conclusions: The improvement of knowledge about diabetes risk factors and complications as well as the perception about feasibility and benefits of screening are prerequisites for the promotion of diabetes screening among the population."}
{"_id": "1bdfdd4205c0ace6f0d5bb09dd606021a110cf36", "title": "The Evidence Framework Applied to Classification Networks", "text": "Three Bayesian ideas are presented for supervised adaptive classifiers. First, it is argued that the output of a classifier should be obtained by marginalizing over the posterior distribution of the parameters; a simple approximation to this integral is proposed and demonstrated. This involves a \"moderation\" of the most probable classifier's outputs, and yields improved performance. Second, it is demonstrated that the Bayesian framework for model comparison described for regression models in MacKay (1992a,b) can also be applied to classification problems. This framework successfully chooses the magnitude of weight decay terms, and ranks solutions found using different numbers of hidden units. Third, an information-based data selection criterion is derived and demonstrated within this framework."}
{"_id": "89283ee665da63ec9cf87e4008ead3e8963fa02b", "title": "Internet Scale User-Generated Live Video Streaming: The Twitch Case", "text": "Twitch is a live video streaming platform used for broadcasting video gameplay, ranging from amateur players to eSports tournaments. This platform has gathered a substantial world wide community, reaching more than 1.7 million broadcasters and 100 million visitors every month. Twitch is fundamentally different from \u201cstatic\u201d content distribution platforms such as YouTube and Netflix, as streams are generated and consumed in real time. In this paper, we explore the Twitch infrastructure to understand how it manages live streaming delivery to an Internet-wide audience. We found Twitch manages a geo-distributed infrastructure, with presence in four continents. Our findings show that Twitch dynamically allocates servers to channels depending on their popularity. Additionally, we explore the redirection strategy of clients to servers depending on their region and the specific channel."}
{"_id": "1f75856bba0feb216001ba551d249593a9624c01", "title": "Predicting Stock Price Direction using Support Vector Machines", "text": "Support Vector Machine is a machine learning technique used in recent studies to forecast stock prices. This study uses daily closing prices for 34 technology stocks to calculate price volatility and momentum for individual stocks and for the overall sector. These are used as parameters to the SVM model. The model attempts to predict whether a stock price sometime in the future will be higher or lower than it is on a given day. We find little predictive ability in the short-run but definite predictive ability in the long-run."}
{"_id": "2b877d697fb8ba947fe3f964824098c25636fa0e", "title": "CANTINA+: A Feature-Rich Machine Learning Framework for Detecting Phishing Web Sites", "text": "Phishing is a plague in cyberspace. Typically, phish detection methods either use human-verified URL blacklists or exploit Web page features via machine learning techniques. However, the former is frail in terms of new phish, and the latter suffers from the scarcity of effective features and the high false positive rate (FP). To alleviate those problems, we propose a layered anti-phishing solution that aims at (1) exploiting the expressiveness of a rich set of features with machine learning to achieve a high true positive rate (TP) on novel phish, and (2) limiting the FP to a low level via filtering algorithms.\n Specifically, we proposed CANTINA+, the most comprehensive feature-based approach in the literature including eight novel features, which exploits the HTML Document Object Model (DOM), search engines and third party services with machine learning techniques to detect phish. Moreover, we designed two filters to help reduce FP and achieve runtime speedup. The first is a near-duplicate phish detector that uses hashing to catch highly similar phish. The second is a login form filter, which directly classifies Web pages with no identified login form as legitimate.\n We extensively evaluated CANTINA+ with two methods on a diverse spectrum of corpora with 8118 phish and 4883 legitimate Web pages. In the randomized evaluation, CANTINA+ achieved over 92% TP on unique testing phish and over 99% TP on near-duplicate testing phish, and about 0.4% FP with 10% training phish. In the time-based evaluation, CANTINA+ also achieved over 92% TP on unique testing phish, over 99% TP on near-duplicate testing phish, and about 1.4% FP under 20% training phish with a two-week sliding window. Capable of achieving 0.4% FP and over 92% TP, our CANTINA+ has been demonstrated to be a competitive anti-phishing solution."}
{"_id": "3a9b2fce277e474fb1570da2b4380bbf8c8ceb3f", "title": "Survey of clustering algorithms", "text": "Data analysis plays an indispensable role for understanding various phenomena. Cluster analysis, primitive exploration with little or no prior knowledge, consists of research developed across a wide variety of communities. The diversity, on one hand, equips us with many tools. On the other hand, the profusion of options causes confusion. We survey clustering algorithms for data sets appearing in statistics, computer science, and machine learning, and illustrate their applications in some benchmark data sets, the traveling salesman problem, and bioinformatics, a new field attracting intensive efforts. Several tightly related topics, proximity measure, and cluster validation, are also discussed."}
{"_id": "076077a5771747ad7355120f1ba64cfd603141c6", "title": "A Statistical Approach to Mechanized Encoding and Searching of Literary Information", "text": "Written communication of ideas is carried out on the basis of statistical probability in that a writer chooses that level of subject specificity and that combination of words which he feels will convey the most meaning. Since this process varies among individuals and since similar ideas are therefore relayed at different levels of specificity and by means of different words, the problem of literature searching by machines still presents major difficulties. A statistical approach to this problem will be outlined and the various steps of a system based on this approach will be described. Steps include the statistical analysis of a collection of documents in a field of interest, the establishment of a set of \"notions\" and the vocabulary by which they are expressed, the compilation of a thesaurus-type dictionary and index, the automatic encoding of documents by machine with the aid of such a dictionary, the encoding of topological notations (such as branched structures), the recording of the coded information, the establishment of a searching pattern for finding pertinent information, and the programming of appropriate machines to carry out a search."}
{"_id": "151c9a0e8e31ee17a43bdd66091f49324f36dbdc", "title": "Client-Side Defense Against Web-Based Identity Theft", "text": "Web spoofing is a significant problem involving fraudulent email and web sites that trick unsuspecting users into revealing private information. We discuss some aspects of common attacks and propose a framework for client-side defense: a browser plug-in that examines web pages and warns the user when requests for data may be part of a spoof attack. While the plugin, SpoofGuard, has been tested using actual sites obtained through government agencies concerned about the problem, we expect that web spoofing and other forms of identity theft will be continuing problems in"}
{"_id": "64ee319154b15f6b9baad6696daa5facb7ff2d57", "title": "Computational intelligence in sports: Challenges and opportunities within a new research domain", "text": "Computational intelligence is a branch of artificial intelligence that comprises algorithms inspired by nature. The common characteristics of all these algorithms is their collective intelligence and adaptability to a changing environment. Due to their efficiency and simplicity, these algorithms have been employed for problem solving across social and natural sciences. The aim of this paper is to demonstrate that nature-inspired algorithms are also useful within the domain of sport, in particular for obtaining safe and effective training plans targeting various aspects of performance. We outline the benefits and opportunities of applying computational intelligence in sports, and we also comment on the pitfalls and challenges for the future development of this emerging research domain. \u00a9 2015 Elsevier Inc. All rights reserved."}
{"_id": "456d2297894c75f5d1eb0ac20fb84d1523ceae3e", "title": "Vehicle-to-Vehicle Communication: Fair Transmit Power Control for Safety-Critical Information", "text": "Direct radio-based vehicle-to-vehicle communication can help prevent accidents by providing accurate and up-to-date local status and hazard information to the driver. In this paper, we assume that two types of messages are used for traffic safety-related communication: 1) Periodic messages (ldquobeaconsrdquo) that are sent by all vehicles to inform their neighbors about their current status (i.e., position) and 2) event-driven messages that are sent whenever a hazard has been detected. In IEEE 802.11 distributed-coordination-function-based vehicular networks, interferences and packet collisions can lead to the failure of the reception of safety-critical information, in particular when the beaconing load leads to an almost-saturated channel, as it could easily happen in many critical vehicular traffic conditions. In this paper, we demonstrate the importance of transmit power control to avoid saturated channel conditions and ensure the best use of the channel for safety-related purposes. We propose a distributed transmit power control method based on a strict fairness criterion, i.e., distributed fair power adjustment for vehicular environments (D-FPAV), to control the load of periodic messages on the channel. The benefits are twofold: 1) The bandwidth is made available for higher priority data like dissemination of warnings, and 2) beacons from different vehicles are treated with ldquoequal rights,rdquo and therefore, the best possible reception under the available bandwidth constraints is ensured. We formally prove the fairness of the proposed approach. Then, we make use of the ns-2 simulator that was significantly enhanced by realistic highway mobility patterns, improved radio propagation, receiver models, and the IEEE 802.11p specifications to show the beneficial impact of D-FPAV for safety-related communications. We finally put forward a method, i.e., emergency message dissemination for vehicular environments (EMDV), for fast and effective multihop information dissemination of event-driven messages and show that EMDV benefits of the beaconing load control provided by D-FPAV with respect to both probability of reception and latency."}
{"_id": "8abdb9894ea13461b84ecdc56dd38859d659425e", "title": "Determining Gains Acquired from Word Embedding Quantitatively Using Discrete Distribution Clustering", "text": "Word embeddings have become widelyused in document analysis. While a large number of models for mapping words to vector spaces have been developed, it remains undetermined how much net gain can be achieved over traditional approaches based on bag-of-words. In this paper, we propose a new document clustering approach by combining any word embedding with a state-of-the-art algorithm for clustering empirical distributions. By using the Wasserstein distance between distributions, the word-to-word semantic relationship is taken into account in a principled way. The new clustering method is easy to use and consistently outperforms other methods on a variety of data sets. More importantly, the method provides an effective framework for determining when and how much word embeddings contribute to document analysis. Experimental results with multiple embedding models are reported."}
{"_id": "80062e8a15608b23f69f72b0890d2176880f660d", "title": "Cyberbullying perpetration and victimization among adolescents in Hong Kong", "text": "a r t i c l e i n f o Cyberbullying is a growing concern worldwide. Using a sample of 1917 secondary adolescents from seven schools, five psychometric measures (self-efficacy, empathy level, feelings regarding a harmonious school, sense of belonging to the school, and psychosocial wellbeing) and five scales regarding bullying experiences (cyber-and traditional bullying perpetration and victimization; reactions to cyberbullying victimization) were administered to explore the prevalence of cyberbullying in Hong Kong. Findings indicated that male adolescents were more likely than female adolescents to cyberbully others and to be cyber-victimized. Cyberbullying perpe-tration and victimization were found to be negatively associated with the adolescents' psychosocial health and sense of belonging to school. Cyber-and traditional bullying were positively correlated. Multivariate analyses indicated that being male, having a low sense of belonging to school, involvement in traditional bullying perpetra-tion, and experiencing cyber-victimization were associated with an increased propensity to cyberbully others. Technology is advancing rapidly, and peer harassment and aggression are no longer limited to traditional bullying through physical contact. Over the past decade, information and communication technology (ICT) has become increasingly important in the lives of adolescents. In a report by Lenhart, Madden, and Hitlin (2005), it is estimated that close to 90% of American adolescents aged 12 to 17 years surf the Inter-net, with 51% of them using it on a daily basis. Nearly half of the adolescents surveyed have personal mobile phones, and 33% have used a mobile phone to send a text message (Lenhart et al., 2005). Such heavy use of the Internet is not novel among adolescents in Hong Kong. Many empirical studies have been conducted on Hong Kong adolescents on their excessive and/or addictive use of the Internet. Findings of recent studies indicate that a substantially high prevalence rate of internet addiction is reported among Hong Kong adolescents (range In reality, the heavy usage of ICT such as instant mes-saging, e-mail, text messaging, blogs, and social networking sites not only allows adolescents to connect with friends and family, but at the same time also creates the potential to meet and interact with others in harmful ways (Ybarra, Diener-West, & Leaf, 2007). Cyberbullying or online bullying is one such growing concern. Traditional bullying is a widespread problem in both school and community settings, and has long been researched by scholars in general, involves an individual being exposed to negative actions by one or more individuals regularly \u2026"}
{"_id": "16eaf8ed4e3f60657f29704c7cf5cbcd1505cd9b", "title": "Dynamic Modeling and Performance Analysis of a Grid-Connected Current-Source Inverter-Based Photovoltaic System", "text": "Voltage-source inverter (VSI) topology is widely used for grid interfacing of distributed generation (DG) systems. However, when employed as the power conditioning unit in photovoltaic (PV) systems, VSI normally requires another power electronic converter stage to step up the voltage, thus adding to the cost and complexity of the system. To make the proliferation of grid-connected PV systems a successful business option, the cost, performance, and life expectancy of the power electronic interface need to be improved. The current-source inverter (CSI) offers advantages over VSI in terms of inherent boosting and short-circuit protection capabilities, direct output current controllability, and ac-side simpler filter structure. Research on CSI-based DG is still in its infancy. This paper focuses on modeling, control, and steady-state and transient performances of a PV system based on CSI. It also performs a comparative performance evaluation of VSI-based and CSI-based PV systems under transient and fault conditions. Analytical expectations are verified using simulations in the Power System Computer Aided Design/Electromagnetic Transient Including DC (PSCAD/EMTDC) environment, based on a detailed system model."}
{"_id": "873eee7cf6ba60b42e1d36e8fd9e96ba9ff68598", "title": "3D deeply supervised network for automated segmentation of volumetric medical images", "text": "While deep convolutional neural networks (CNNs) have achieved remarkable success in 2D medical image segmentation, it is still a difficult task for CNNs to segment important organs or structures from 3D medical images owing to several mutually affected challenges, including the complicated anatomical environments in volumetric images, optimization difficulties of 3D networks and inadequacy of training samples. In this paper, we present a novel and efficient 3D fully convolutional network equipped with a 3D deep supervision mechanism to comprehensively address these challenges; we call it 3D DSN. Our proposed 3D DSN is capable of conducting volume-to-volume learning and inference, which can eliminate redundant computations and alleviate the risk of over-fitting on limited training data. More importantly, the 3D deep supervision mechanism can effectively cope with the optimization problem of gradients vanishing or exploding when training a 3D deep model, accelerating the convergence speed and simultaneously improving the discrimination capability. Such a mechanism is developed by deriving an objective function that directly guides the training of both lower and upper layers in the network, so that the adverse effects of unstable gradient changes can be counteracted during the training procedure. We also employ a fully connected conditional random field model as a post-processing step to refine the segmentation results. We have extensively validated the proposed 3D DSN on two typical yet challenging volumetric medical image segmentation tasks: (i) liver segmentation from 3D CT scans and (ii) whole heart and great vessels segmentation from 3D MR images, by participating two grand challenges held in conjunction with MICCAI. We have achieved competitive segmentation results to state-of-the-art approaches in both challenges with a much faster speed, corroborating the effectiveness of our proposed 3D DSN."}
{"_id": "5c36992f0cc206d67dc022322faf3dd2d9b0759f", "title": "Targeted Employee Retention : Performance-Based and Job-Related Differences in Reported Reasons for Staying", "text": "A content model of 12 retention factors is developed in the context of previous theory and research. Coding of open-ended responses from 24,829 employees in the leisure and hospitality industry lends support to the identified framework and reveals that job satisfaction, extrinsic rewards, constituent attachments, organizational commitment, and organizational prestige were the most frequently mentioned reasons for staying. Advancement opportunities and organizational prestige were more common reasons for staying among high performers and non-hourly workers, and extrinsic rewards was more common among low performers and hourly employees, providing support for ease/desirability of movement and psychological contract rationales. The findings highlight the importance of differentiating human resource management practices when the goal is to retain those employees valued most by the organization."}
{"_id": "ce3c8c8c980e4b3aefff80cef374373e0dfda9e9", "title": "An Alpha-wave-based binaural beat sound control system using fuzzy logic and autoregressive forecasting model", "text": "We are developing a new real-time control system for customizing auditory stimulus (the binaural beat sound) by judging user alpha waves to entrain a userpsilas feeling in the most relaxed way. Since brainwave activity provides the necessary predictive information for arousal states, we use an autoregressive forecasting model to estimate the frequency response series of the alpha frequency bands and the inverted-U concept to determine the userpsilas arousal state. A fuzzy logic controller is also employed to regulate the binaural beat control signal on a forecasting error signal. Our system allows comfortable user self-relaxation. The results of experiments confirm the constructed systempsilas effectiveness and necessity."}
{"_id": "b09919813a594af9b59384f261fdc9348e743b35", "title": "A Comparative Study of Stereovision Algorithms", "text": "Stereo vision has been and continues to be one of the most researched domains of computer vision, having many applications, among them, allowing the depth extraction of a scene. This paper provides a comparative study of stereo vision and matching algorithms, used to solve the correspondence problem. The study of matching algorithms was followed by experiments on the Middlebury benchmarks. The tests focused on a comparison of 6 stereovision methods. In order to assess the performance, RMS and some statistics related were computed. In order to emphasize the advantages of each stereo algorithm considered, two-frame methods have been employed, both local and global. The experiments conducted have shown that the best results are obtained by Graph Cuts. Unfortunately, this has a higher computational cost. If high quality is not an issue in applications, local methods provide reasonable results within a much lower time-frame and offer the possibility of parallel"}
{"_id": "754a42bc8525166b1cf44aac35dc21fcce23ebd7", "title": "Learning to rank academic experts in the DBLP dataset", "text": "Expert finding is an information retrieval task that is concerned with the search for the most knowledgeable people with respect to a specific topic, and the search is based on documents that describe people\u2019s activities. The task involves taking a user query as input and returning a list of people who are sorted by their level of expertise with respect to the user query. Despite recent interest in the area, the current state-of-the-art techniques lack in principled approaches for optimally combining different sources of evidence. This article proposes two frameworks for combining multiple estimators of expertise. These estimators are derived from textual contents, from graph-structure of the citation patterns for the community of experts, and from profile information about the experts. More specifically, this article explores the use of supervised learning to rank methods, as well as rank aggregation approaches, for combing all of the estimators of expertise. Several supervised learning algorithms, which are representative of the pointwise, pairwise and listwise approaches, were tested, and various state-of-the-art data fusion techniques were also explored for the rank aggregation framework. Experiments that were performed on a dataset of academic publications from the Computer Science domain attest the adequacy of the proposed approaches."}
{"_id": "69097673bd39554bbbd880ef588763b073fe79c7", "title": "A novel end-to-end classifier using domain transferred deep convolutional neural networks for biomedical images", "text": "BACKGROUND AND OBJECTIVES\nHighly accurate classification of biomedical images is an essential task in the clinical diagnosis of numerous medical diseases identified from those images. Traditional image classification methods combined with hand-crafted image feature descriptors and various classifiers are not able to effectively improve the accuracy rate and meet the high requirements of classification of biomedical images. The same also holds true for artificial neural network models directly trained with limited biomedical images used as training data or directly used as a black box to extract the deep features based on another distant dataset. In this study, we propose a highly reliable and accurate end-to-end classifier for all kinds of biomedical images via deep learning and transfer learning.\n\n\nMETHODS\nWe first apply domain transferred deep convolutional neural network for building a deep model; and then develop an overall deep learning architecture based on the raw pixels of original biomedical images using supervised training. In our model, we do not need the manual design of the feature space, seek an effective feature vector classifier or segment specific detection object and image patches, which are the main technological difficulties in the adoption of traditional image classification methods. Moreover, we do not need to be concerned with whether there are large training sets of annotated biomedical images, affordable parallel computing resources featuring GPUs or long times to wait for training a perfect deep model, which are the main problems to train deep neural networks for biomedical image classification as observed in recent works.\n\n\nRESULTS\nWith the utilization of a simple data augmentation method and fast convergence speed, our algorithm can achieve the best accuracy rate and outstanding classification ability for biomedical images. We have evaluated our classifier on several well-known public biomedical datasets and compared it with several state-of-the-art approaches.\n\n\nCONCLUSIONS\nWe propose a robust automated end-to-end classifier for biomedical images based on a domain transferred deep convolutional neural network model that shows a highly reliable and accurate performance which has been confirmed on several public biomedical image datasets."}
{"_id": "6e32bc2d0188d98d066d4c579f653b07c00609ea", "title": "Single-Fed Low-Profile High-Gain Circularly Polarized Slotted Cavity Antenna Using a High-Order Mode", "text": "In this letter, a single-fed low-profile high-gain circularly polarized slotted cavity antenna using a high-order cavity mode, i.e., TE440 mode, is proposed. The proposed antenna has a simple structure that consists of a radiating linearly polarized slotted cavity antenna and a linear-to-circular polarization converter. An antenna prototype operating at WLAN 5.8-GHz band fabricated with low-cost standard printed circuit board (PCB) process is exemplified to validate the proposed concept. Measured results compared to their simulated counterparts are presented, and a good agreement between simulation and measurement is obtained."}
{"_id": "fccb70ff224b88bb6b5a34a09b7a52b1aa460b4a", "title": "VAMS 2017: Workshop on Value-Aware and Multistakeholder Recommendation", "text": "In this paper, we summarize VAMS 2017 - a workshop on value-aware and multistakeholder recommendation co-located with RecSys 2017. The workshop encouraged forward-thinking papers in this new area of recommender systems research and obtained a diverse set of responses ranging from application results to research overviews."}
{"_id": "188d823521b7d00abd6876f87509938dddfa64cf", "title": "Articulated pose estimation with flexible mixtures-of-parts", "text": "We describe a method for human pose estimation in static images based on a novel representation of part models. Notably, we do not use articulated limb parts, but rather capture orientation with a mixture of templates for each part. We describe a general, flexible mixture model for capturing contextual co-occurrence relations between parts, augmenting standard spring models that encode spatial relations. We show that such relations can capture notions of local rigidity. When co-occurrence and spatial relations are tree-structured, our model can be efficiently optimized with dynamic programming. We present experimental results on standard benchmarks for pose estimation that indicate our approach is the state-of-the-art system for pose estimation, outperforming past work by 50% while being orders of magnitude faster."}
{"_id": "1ba8348a65dcb8da96fb1cfe81d4762d17a99520", "title": "Learning from Explicit and Implicit Supervision Jointly For Algebra Word Problems", "text": "Automatically solving algebra word problems has raised considerable interest recently. Existing state-of-the-art approaches mainly rely on learning from human annotated equations. In this paper, we demonstrate that it is possible to efficiently mine algebra problems and their numerical solutions with little to no manual effort. To leverage the mined dataset, we propose a novel structured-output learning algorithm that aims to learn from both explicit (e.g., equations) and implicit (e.g., solutions) supervision signals jointly. Enabled by this new algorithm, our model gains 4.6% absolute improvement in accuracy on the ALG514 benchmark compared to the one without using implicit supervision. The final model also outperforms the current state-of-the-art approach by 3%."}
{"_id": "994aadb63a7c972a221005bc1eb4931492c9d867", "title": "Fault investigations on die-cast copper rotors", "text": "It has been long known to the market that, the use of copper in the rotor in place of aluminium, taking advantage of copper's high conductivity, can be an economic solution for various applications, including electric vehicle, in the development of energy efficient motors. In comparison to aluminium, copper die-casting is extremely difficult to perform due to its high density, viscosity and melting point. Owing to these reasons, the copper die-cast rotors will evince several manufacturing problems. The objective of this paper is to identify the various defects in the manufacturing of copper die-cast rotor, and identify the lamination-coating-condition that sustains copper die-cast pressure and temperature. This paper evaluates the effectiveness of lamination coating on limiting the eddy current loss, and the recoating process to improve, in case the eddy current losses are in excess due to copper die-casting."}
{"_id": "98319c8a1fc88113edbe0a309610f8ecc4b81ca9", "title": "The recognition of rice images by UAV based on capsule network", "text": "It is important to recognize the rice image captured by unmanned aerial vehicle (UAV) for monitoring the growth of rice and preventing the diseases and pests. Aiming at the image recognition, we use rice images captured by UAV as our data source, the structure of capsule network (CapsNet) is built to recognize rice images in this paper. The images are preprocessed through histogram equalization method into grayscale images and through superpixel algorithm into the superpixel segmentation results. The both results are output into the CapsNet. The function of CapsNet is to perform the reverse analysis of rice images. The CapsNet consists of five layers: an input layer, a convolution layer, a primary capsules layer, a digital capsules layer and an output layer. The CapsNet trains classification and predicts the output vector based on routing-by-agreement protocol. Therefore, the features of rice image by UAV can be precisely and efficiently extracted. The method is more convenient than the traditional artificial recognition. It provides the scientific support and reference for decision-making process of precision agriculture."}
{"_id": "04000fb8bdc2707d956c6eabafecf54b7913bc65", "title": "A functional taxonomy of wireless sensor network devices", "text": "With the availability of cheap computing technology and the standardization of different wireless communication paradigms, wireless sensor networks (WSNs) are becoming a reality. This is visible in the increasing amount of research being done in the WSN area, and the growing number of companies offering commercial WSN solutions. In this regard, we realize that there is a need for an unambiguous classification and definition of WSN devices. Such a classification scheme would benefit the WSN research and industrial community. We consider the spectrum of wireless devices ranging from passive RF devices to gateway devices for the classification. The classification scheme is based on functionality of the devices. This scheme is analogous to an object-oriented classification, where an object is classified on the basis of its functionality and is described by its attributes. Attributes of the wireless devices are grouped into five broad categories: communication, sensing, power, memory, and \"other features\". Each of these are further classified to provide sufficient details that are required for a typical WSN application"}
{"_id": "f905de2223973ecbcdf7e66ecd96b0b3b6063540", "title": "Permittivity and Loss Tangent Characterization for Garment Antennas Based on a New Matrix-Pencil Two-Line Method", "text": "The emergence of wearable antennas to be integrated into garments has revealed the need for a careful electromagnetic characterization of textile materials. Therefore, we propose in this paper a new matrix-pencil two-line method that removes perturbations in the calculated effective permittivity and loss tangent which are caused by imperfect deembedding and inhomogeneities of the textile microstrip line structure. The approach has been rigorously validated for high-frequency laminates by comparing measured and simulated data for the resonance frequency of antennas designed using the calculated parameters. The method has been successfully applied to characterize the permittivity and loss tangent of a variety of textile materials and up to a frequency of 10 GHz. Furthermore it is shown that the use of electrotextiles in antenna design influences the effective substrate permittivity."}
{"_id": "575f3571dbfc5eb20a34eedb6e6fe3b660525585", "title": "Mining geographic-temporal-semantic patterns in trajectories for location prediction", "text": "In recent years, research on location predictions by mining trajectories of users has attracted a lot of attention. Existing studies on this topic mostly treat such predictions as just a type of location recommendation, that is, they predict the next location of a user using location recommenders. However, an user usually visits somewhere for reasons other than interestingness. In this article, we propose a novel mining-based location prediction approach called Geographic-Temporal-Semantic-based Location Prediction (GTS-LP), which takes into account a user's geographic-triggered intentions, temporal-triggered intentions, and semantic-triggered intentions, to estimate the probability of the user in visiting a location. The core idea underlying our proposal is the discovery of trajectory patterns of users, namely GTS patterns, to capture frequent movements triggered by the three kinds of intentions. To achieve this goal, we define a new trajectory pattern to capture the key properties of the behaviors that are motivated by the three kinds of intentions from trajectories of users. In our GTS-LP approach, we propose a series of novel matching strategies to calculate the similarity between the current movement of a user and discovered GTS patterns based on various moving intentions. On the basis of similitude, we make an online prediction as to the location the user intends to visit. To the best of our knowledge, this is the first work on location prediction based on trajectory pattern mining that explores the geographic, temporal, and semantic properties simultaneously. By means of a comprehensive evaluation using various real trajectory datasets, we show that our proposed GTS-LP approach delivers excellent performance and significantly outperforms existing state-of-the-art location prediction methods."}
{"_id": "949b3eb7d26afeb1585729b8a78575f2dbc925b1", "title": "Feature Selection and Kernel Learning for Local Learning-Based Clustering", "text": "The performance of the most clustering algorithms highly relies on the representation of data in the input space or the Hilbert space of kernel methods. This paper is to obtain an appropriate data representation through feature selection or kernel learning within the framework of the Local Learning-Based Clustering (LLC) (Wu and Scho\u0308lkopf 2006) method, which can outperform the global learning-based ones when dealing with the high-dimensional data lying on manifold. Specifically, we associate a weight to each feature or kernel and incorporate it into the built-in regularization of the LLC algorithm to take into account the relevance of each feature or kernel for the clustering. Accordingly, the weights are estimated iteratively in the clustering process. We show that the resulting weighted regularization with an additional constraint on the weights is equivalent to a known sparse-promoting penalty. Hence, the weights of those irrelevant features or kernels can be shrunk toward zero. Extensive experiments show the efficacy of the proposed methods on the benchmark data sets."}
{"_id": "4698ed97f4a78e724c903ec1dd6e5538203237c8", "title": "Using Phase Instead of Optical Flow for Action Recognition", "text": "Currently, the most common motion representation for action recognition is optical flow. Optical flow is based on particle tracking which adheres to a Lagrangian perspective on dynamics. In contrast to the Lagrangian perspective, the Eulerian model of dynamics does not track, but describes local changes. For video, an Eulerian phase-based motion representation, using complex steerable filters, has been successfully employed recently for motion magnification and video frame interpolation. Inspired by these previous works, here, we proposes learning Eulerian motion representations in a deep architecture for action recognition. We learn filters in the complex domain in an end-to-end manner. We design these complex filters to resemble complex Gabor filters, typically employed for phase-information extraction. We propose a phaseinformation extraction module, based on these complex filters, that can be used in any network architecture for extracting Eulerian representations. We experimentally analyze the added value of Eulerian motion representations, as extracted by our proposed phase extraction module, and compare with existing motion representations based on optical flow, on the UCF101 dataset."}
{"_id": "0d82347bd5f454455f564a0407b0c4aaed432d09", "title": "A generalized taxonomy of explanations styles for traditional and social recommender systems", "text": "Recommender systems usually provide explanations of their recommendations to better help users to choose products, activities or even friends. Up until now, the type of an explanation style was considered in accordance to the recommender system that employed it. This relation was one-to-one, meaning that for each different recommender systems category, there was a different explanation style category. However, this kind of one-to-one correspondence can be considered as over-simplistic and non generalizable. In contrast, we consider three fundamental resources that can be used in an explanation: users, items and features and any combination of them. In this survey, we define (i) the Human style of explanation, which provides explanations based on similar users, (ii) the Item style of explanation, which is based on choices made by a user on similar items and (iii) the Feature style of explanation, which explains the recommendation based on item features rated by the user beforehand. By using any combination of the aforementioned styles we can also define the Hybrid style of explanation. We demonstrate how these styles are put into practice, by presenting recommender systems that employ them. Moreover, since there is inadequate research in the impact of social web in contemporary recommender systems and their explanation styles, we study new emerged social recommender systems i.e. Facebook Connect explanations (HuffPo, Netflix, etc.) and geo-social explanations that combine geographical with social data (Gowalla, Facebook Places, etc.). Finally, we summarize the results of three different user studies, to support that Hybrid is the most effective explanation style, since it incorporates all other styles."}
{"_id": "1c607dcc5fd4b26603584c5f85c9c233788c64ed", "title": "Online Learning for Time Series Prediction", "text": "In this paper we address the problem of predicting a time seri e using the ARMA (autoregressive moving average) model, under minimal assumptions on the noi se terms. Using regret minimization techniques, we develop effective online learning algorith ms for the prediction problem, withoutassuming that the noise terms are Gaussian, identically distribu ted or even independent. Furthermore, we show that our algorithm\u2019s performances asymptotically app roaches the performance of the best ARMA model in hindsight."}
{"_id": "41d6966c926015b8e0d7b1a9de3ffab013091e15", "title": "Logarithmic regret algorithms for online convex optimization", "text": "In an online convex optimization problem a decision-maker makes a sequence of decisions, i.e., chooses a sequence of points in Euclidean space, from a fixed feasible set. After each point is chosen, it encounters a sequence of (possibly unrelated) convex cost functions. Zinkevich (ICML 2003) introduced this framework, which models many natural repeated decision-making problems and generalizes many existing problems such as Prediction from Expert Advice and Cover\u2019s Universal Portfolios. Zinkevich showed that a simple online gradient descent algorithm achieves additive regret $O(\\sqrt{T})$ , for an arbitrary sequence of T convex cost functions (of bounded gradients), with respect to the best single decision in hindsight. In this paper, we give algorithms that achieve regret O(log\u2009(T)) for an arbitrary sequence of strictly convex functions (with bounded first and second derivatives). This mirrors what has been done for the special cases of prediction from expert advice by Kivinen and Warmuth (EuroCOLT 1999), and Universal Portfolios by Cover (Math. Finance 1:1\u201319, 1991). We propose several algorithms achieving logarithmic regret, which besides being more general are also much more efficient to implement. The main new ideas give rise to an efficient algorithm based on the Newton method for optimization, a new tool in the field. Our analysis shows a surprising connection between the natural follow-the-leader approach and the Newton method. We also analyze other algorithms, which tie together several different previous approaches including follow-the-leader, exponential weighting, Cover\u2019s algorithm and gradient descent."}
{"_id": "08b425412e69b2751683dd299d8e939798b81614", "title": "Averaging Expert Predictions", "text": "We consider algorithms for combining advice from a set of experts. In each trial, the algorithm receives the predictions of the experts and produces its own prediction. A loss function is applied to measure the discrepancy between the predictions and actual observations. The algorithm keeps a weight for each expert. At each trial the weights are first used to help produce the prediction and then updated according to the observed outcome. Our starting point is Vovk\u2019s Aggregating Algorithm, in which the weights have a simple form: the weight of an expert decreases exponentially as a function of the loss incurred by the expert. The prediction of the Aggregating Algorithm is typically a nonlinear function of the weights and the experts\u2019 predictions. We analyze here a simplified algorithm in which the weights are as in the original Aggregating Algorithm, but the prediction is simply the weighted average of the experts\u2019 predictions. We show that for a large class of loss functions, even with the simplified prediction rule the additional loss of the algorithm over the loss of the best expert is at most c ln n, where n is the number of experts and c a constant that depends on the loss function. Thus, the bound is of the same form as the known bounds for the Aggregating Algorithm, although the constants here are not quite as good. We use relative entropy to rewrite the bounds in a stronger form and to motivate the update."}
{"_id": "1fbfa8b590ce4679367d73cb8e4f2d169ae5c624", "title": "Online Convex Programming and Generalized Infinitesimal Gradient Ascent", "text": "Convex programming involves a convex set F \u2286 R and a convex function c : F \u2192 R. The goal of convex programming is to find a point in F which minimizes c. In this paper, we introduce online convex programming. In online convex programming, the convex set is known in advance, but in each step of some repeated optimization problem, one must select a point in F before seeing the cost function for that step. This can be used to model factory production, farm production, and many other industrial optimization problems where one is unaware of the value of the items produced until they have already been constructed. We introduce an algorithm for this domain, apply it to repeated games, and show that it is really a generalization of infinitesimal gradient ascent, and the results here imply that generalized infinitesimal gradient ascent (GIGA) is universally consistent."}
{"_id": "dbfcb2f0d550f271fd0060ab7baaf1142093a9e4", "title": "Prediction, learning, and games", "text": "Make more knowledge even in less time every day. You may not always spend your time and money to go abroad and get the experience and knowledge by yourself. Reading is a good alternative to do in getting this desirable knowledge and experience. You may gain many things from experiencing directly, but of course it will spend much money. So here, by reading prediction learning and games, you can take more advantages with limited budget."}
{"_id": "ee82527145770200c774a5bf952989c7225670ab", "title": "Air Craft Winglet Design and Performance : Cant Angle Effect", "text": "A winglet is a device used to improve the efficiency of aircraft by lowering the lift induced drag caused by wingtip vortices. It is a vertical or angled extension at the tips of each wing. Winglets improve efficiency by diffusing the shed wingtip vortex, which in turn reduces the drag due to lift and improves the wing\u2019s lift over drag ratio Winglets increase the effective aspect ratio of a wing without adding greatly to the structural stress and hence necessary weight of its structure. In this research, a numerical validation procedure (by FLUENT \u00ae, computational fluid dynamics software with The Spalart-Allmaras turbulence model) is described for determination and estimation aerodynamic characteristics of three dimension subsonic rectangular wing (with NACA653218airfoil cross section). It was observed that at the present work a good agreement between the numerical study and the experimental work. This paper describes a CFD 3-dimensional winglets analysis that was performed on a Cessna wing of NACA2412 cross sectional airfoil. The wing has span 13.16 m, root chord 1.857 m, tip chord 0.928 m, sweep angle 11 degree and taper ratio 0.5. The present study shows wing without winglet and wing with winglet at cant angle 0, 30 and 45 degree. A CFD simulation performs by to compare of aerodynamics characteristics of lift coefficient CL, drag coefficient CD and lift to drag ratio, L/D lift, pathlines and pressure contours. The models run at a Mach number of 0.2 at sea level. The pressure and temperature of air at this height are 101.325 kPa and 288.2 K respectively. The results show that the wing with winglet can increase the lift by ratio approximately 12%. The wing with winglet can decrease the drag by ratio approximately 4%. The wing with winglet can increase lift to drag, L/D by about 11% along different phases of flight."}
{"_id": "c288e195c0d84248b5555aca7d62916e32e5b8cf", "title": "Highly compact 1T-1R architecture (4F2 footprint) involving fully CMOS compatible vertical GAA nano-pillar transistors and oxide-based RRAM cells exhibiting excellent NVM properties and ultra-low power operation", "text": "For the first time, nano-meter-scaled 1T-1R non-volatile memory (NVM) architecture comprising of RRAM cells built on vertical GAA nano-pillar transistors, either junction-less or junction-based, is systematically investigated. Transistors are fabricated using fully CMOS compatible technology and RRAM cells are stacked onto the tip of the nano-pillars (with a diameter down to ~37nm) to achieve a compact 4F2 footprint. In addition, through this platform, different RRAM stacks comprising CMOS friendly materials are studied, and it is found that TiN/Ni/HfO2/n+-Si RRAM cells show excellent switching properties in either bipolar or unipolar mode, including (1) ultra-low switching current/power: SET ~20nA/85nW and RESET ~200pA/700pW, (2) multi-level switchability, (3) good endurance, >105, (4) satisfactory retention, 10 years at 85oC; and (5) fast switching speed ~50ns. Moreover, this vertical (gate-all-around) GAA nano-pillar based 1T-1R architecture provides a more direct and flexible test vehicle to verify the scalability and functionality of RRAM candidates with a dimension close to actual application."}
{"_id": "7d27130a0d77fe0a89273e93a30db8f3a64b9fa3", "title": "Sentiment analysis and classification based on textual reviews", "text": "Mining is used to help people to extract valuable information from large amount of data. Sentiment analysis focuses on the analysis and understanding of the emotions from the text patterns. It identifies the opinion or attitude that a person has towards a topic or an object and it seeks to identify the viewpoint underlying a text span. Sentiment analysis is useful in social media monitoring to automatically characterize the overall feeling or mood of consumers as reflected in social media toward a specific brand or company and determine whether they are viewed positively or negatively on the web. This new form of analysis has been widely adopted in customer relation management especially in the context of complaint management. For automating the task of classifying a single topic textual review, document-level sentiment classification is used for expressing a positive or negative sentiment. So analyzing sentiment using Multi-theme document is very difficult and the accuracy in the classification is less. The document level classification approximately classifies the sentiment using Bag of words in Support Vector Machine (SVM) algorithm. In proposed work, a new algorithm called Sentiment Fuzzy Classification algorithm with parts of speech tags is used to improve the classification accuracy on the benchmark dataset of Movies reviews dataset."}
{"_id": "932e38056f1018286563c96716e0b7a0582d3c7b", "title": "Categorizing User Sessions at Pinterest", "text": "Di erent users can use a given Internet application in many different ways. The ability to record detailed event logs of user inapplication activity allows us to discover ways in which the application is being used. This enables personalization and also leads to important insights with actionable business and product outcomes. Here we study the problem of user session categorization, where the goal is to automatically discover categories/classes of user insession behavior using event logs, and then consistently categorize each user session into the discovered classes. We develop a three stage approach which uses clustering to discover categories of sessions, then builds classi ers to classify new sessions into the discovered categories, and nally performs daily classi cation in a distributed pipeline. An important innovation of our approach is selecting a set of events as long-tail features, and replacing them with a new feature that is less sensitive to product experimentation and logging changes. This allows for robust and stable identi cation of session types even though the underlying application is constantly changing. We deploy the approach to Pinterest and demonstrate its e ectiveness. We discover insights that have consequences for product monetization, growth, and design. Our solution classi es millions of user sessions daily and leads to actionable insights."}
{"_id": "c2d1bbff6ec6ccfb2e6b49d8f78f6854571941da", "title": "A PSO-Based Hybrid Metaheuristic for Permutation Flowshop Scheduling Problems", "text": "This paper investigates the permutation flowshop scheduling problem (PFSP) with the objectives of minimizing the makespan and the total flowtime and proposes a hybrid metaheuristic based on the particle swarm optimization (PSO). To enhance the exploration ability of the hybrid metaheuristic, a simulated annealing hybrid with a stochastic variable neighborhood search is incorporated. To improve the search diversification of the hybrid metaheuristic, a solution replacement strategy based on the pathrelinking is presented to replace the particles that have been trapped in local optimum. Computational results on benchmark instances show that the proposed PSO-based hybrid metaheuristic is competitive with other powerful metaheuristics in the literature."}
{"_id": "a3bfe87159938a96d3f2037ff0fe10adca0d21b0", "title": "Fingerprinting Electronic Control Units for Vehicle Intrusion Detection", "text": "As more software modules and external interfaces are getting added on vehicles, new attacks and vulnerabilities are emerging. Researchers have demonstrated how to compromise in-vehicle Electronic Control Units (ECUs) and control the vehicle maneuver. To counter these vulnerabilities, various types of defense mechanisms have been proposed, but they have not been able to meet the need of strong protection for safety-critical ECUs against in-vehicle network attacks. To mitigate this deficiency, we propose an anomaly-based intrusion detection system (IDS), called Clock-based IDS (CIDS). It measures and then exploits the intervals of periodic in-vehicle messages for fingerprinting ECUs. The thusderived fingerprints are then used for constructing a baseline of ECUs\u2019 clock behaviors with the Recursive Least Squares (RLS) algorithm. Based on this baseline, CIDS uses Cumulative Sum (CUSUM) to detect any abnormal shifts in the identification errors \u2014 a clear sign of intrusion. This allows quick identification of in-vehicle network intrusions with a low false-positive rate of 0.055%. Unlike state-of-the-art IDSs, if an attack is detected, CIDS\u2019s fingerprinting of ECUs also facilitates a rootcause analysis; identifying which ECU mounted the attack. Our experiments on a CAN bus prototype and on real vehicles have shown CIDS to be able to detect a wide range of in-vehicle network attacks."}
{"_id": "c814a2f44f36d54b6337dd64c7bc8b3b80127d3e", "title": "A3P: adaptive policy prediction for shared images over popular content sharing sites", "text": "More and more people go online today and share their personal images using popular web services like Picasa. While enjoying the convenience brought by advanced technology, people also become aware of the privacy issues of data being shared. Recent studies have highlighted that people expect more tools to allow them to regain control over their privacy. In this work, we propose an Adaptive Privacy Policy Prediction (A3P) system to help users compose privacy settings for their images. In particular, we examine the role of image content and metadata as possible indicators of users' privacy preferences. We propose a two-level image classification framework to obtain image categories which may be associated with similar policies. Then, we develop a policy prediction algorithm to automatically generate a policy for each newly uploaded image. Most importantly, the generated policy will follow the trend of the user's privacy concerns evolved with time. We have conducted an extensive user study and the results demonstrate effectiveness of our system with the prediction accuracy around 90%."}
{"_id": "42d5c5ff783e80455475cc23e62f852d49ca10ea", "title": "Universal Planning Networks-Long Version + Supplementary", "text": "A key challenge in complex visuomotor control is learning abstract representations that are effective for specifying goals, planning, and generalization. To this end, we introduce universal planning networks (UPN). UPNs embed differentiable planning within a goal-directed policy. This planning computation unrolls a forward model in a latent space and infers an optimal action plan through gradient descent trajectory optimization. The plan-by-gradient-descent process and its underlying representations are learned end-to-end to directly optimize a supervised imitation learning objective. We find that the representations learned are not only effective for goal-directed visual imitation via gradient-based trajectory optimization, but can also provide a metric for specifying goals using images. The learned representations can be leveraged to specify distance-based rewards to reach new target states for model-free reinforcement learning, resulting in substantially more effective learning when solving new tasks described via image-based goals. We were able to achieve successful transfer of visuomotor planning strategies across robots with significantly different morphologies and actuation capabilities."}
{"_id": "1ff60a51c94eebafdaacc6d0d3848e39b470b609", "title": "Deep Learning of Semisupervised Process Data With Hierarchical Extreme Learning Machine and Soft Sensor Application", "text": "Data-driven soft sensors have been widely utilized in industrial processes to estimate the critical quality variables which are intractable to directly measure online through physical devices. Due to the low sampling rate of quality variables, most of the soft sensors are developed on small number of labeled samples and the large number of unlabeled process data is discarded. The loss of information greatly limits the improvement of quality prediction accuracy. One of the main issues of data-driven soft sensor is to furthest exploit the information contained in all available process data. This paper proposes a semisupervised deep learning model for soft sensor development based on the hierarchical extreme learning machine (HELM). First, the deep network structure of autoencoders is implemented for unsupervised feature extraction with all the process samples. Then, extreme learning machine is utilized for regression through appending the quality variable. Meanwhile, the manifold regularization method is introduced for semisupervised model training. The new method can not only deeply extract the information that the data contains, but learn more from the extra unlabeled samples as well. The proposed semisupervised HELM method is applied in a high\u2013low transformer to estimate the carbon monoxide content, which shows a significant improvement of the prediction accuracy, compared to traditional methods."}
{"_id": "2948d0e6b3faa43bbfce001bd4fe820bde00d46c", "title": "A 45nm Logic Technology with High-k+Metal Gate Transistors, Strained Silicon, 9 Cu Interconnect Layers, 193nm Dry Patterning, and 100% Pb-free Packaging", "text": "A 45 nm logic technology is described that for the first time incorporates high-k + metal gate transistors in a high volume manufacturing process. The transistors feature 1.0 nm EOT high-k gate dielectric, dual band edge workfunction metal gates and third generation strained silicon, resulting in the highest drive currents yet reported for NMOS and PMOS. The technology also features trench contact based local routing, 9 layers of copper interconnect with low-k ILD, low cost 193 nm dry patterning, and 100% Pb-free packaging. Process yield, performance and reliability are demonstrated on 153 Mb SRAM arrays with SRAM cell size of 0.346 mum2, and on multiple microprocessors."}
{"_id": "a11762d3a16d3d7951f2312cb89bbedff6cfbf21", "title": "An Intra-Panel Interface With Clock-Embedded Differential Signaling for TFT-LCD Systems", "text": "In this paper, an intra-panel interface with a clock embedded differential signaling for TFT-LCD systems is proposed. The proposed interface reduces the number of signal lines between the timing controller and the column drivers in a TFT-LCD panel by adopting the embedded clock scheme. The protocol of the proposed interface provides a delay-locked loop (DLL)-based clock recovery scheme for the receiver. The timing controller and the column driver integrated with the proposed interface are fabricated in 0.13- \u03bcm CMOS process technology and 0.18-\u03bcm high voltage CMOS process technology, respectively. The proposed interface is verified on a 47-inch Full High-Definition (FHD) (1920RGB×1080) TFT-LCD panel with 8-bit RGB and 120-Hz driving technology. The maximum data rate per differential pair was measured to be as high as 2.0 Gb/s in a wafer test."}
{"_id": "c5218d2c19ae5de4e28452460e0bdc97d68c1b35", "title": "Associations among Wine Grape Microbiome, Metabolome, and Fermentation Behavior Suggest Microbial Contribution to Regional Wine Characteristics", "text": "UNLABELLED\nRegionally distinct wine characteristics (terroir) are an important aspect of wine production and consumer appreciation. Microbial activity is an integral part of wine production, and grape and wine microbiota present regionally defined patterns associated with vineyard and climatic conditions, but the degree to which these microbial patterns associate with the chemical composition of wine is unclear. Through a longitudinal survey of over 200 commercial wine fermentations, we demonstrate that both grape microbiota and wine metabolite profiles distinguish viticultural area designations and individual vineyards within Napa and Sonoma Counties, California. Associations among wine microbiota and fermentation characteristics suggest new links between microbiota, fermentation performance, and wine properties. The bacterial and fungal consortia of wine fermentations, composed from vineyard and winery sources, correlate with the chemical composition of the finished wines and predict metabolite abundances in finished wines using machine learning models. The use of postharvest microbiota as an early predictor of wine chemical composition is unprecedented and potentially poses a new paradigm for quality control of agricultural products. These findings add further evidence that microbial activity is associated with wine terroir\n\n\nIMPORTANCE\nWine production is a multi-billion-dollar global industry for which microbial control and wine chemical composition are crucial aspects of quality. Terroir is an important feature of consumer appreciation and wine culture, but the many factors that contribute to terroir are nebulous. We show that grape and wine microbiota exhibit regional patterns that correlate with wine chemical composition, suggesting that the grape microbiome may influence terroir In addition to enriching our understanding of how growing region and wine properties interact, this may provide further economic incentive for agricultural and enological practices that maintain regional microbial biodiversity."}
{"_id": "328abd11800821d149e5366b54943d1e9b24d675", "title": "Suspect tracking based on call logs analysis and visualization", "text": "In Thailand, investigator can track and find the suspects by using call logs from suspects' phone numbers and their contacts. In many cases, the suspects changed their phone numbers to avoid tracking. The problem is that the investigators have difficulty to track these suspects from their call logs. Our hypothesis is that each user has a unique calling behavior pattern. The calling pattern is importance for tracking suspect's telephone number. To compare the calling patterns, we consider common contact groups. Thus, the aim of this project is to develop a call logs tracking system which can predict a set of new possible suspect's phone numbers and present their contacts' connection with our network diagram visualization based on Graph database (Neo4j). This system will be very necessary for investigators because it can save investigators' time from analyzing excessive call logs data. The system can predict the possible suspect's phone numbers. Furthermore, our visualization can enhance human's sight ability to connect the relation among related phone numbers. Finally, the experimental results on real call logs demonstrate that our method can track telephone number approximately 69% of single possible suspect phone number's matching while 89% of multiple possible suspect phone numbers' matching."}
{"_id": "0c504d0c8319802c9f63eeea0d7b437cded2f4ef", "title": "High-performance complex event processing over streams", "text": "In this paper, we present the design, implementation, and evaluation of a system that executes complex event queries over real-time streams of RFID readings encoded as events. These complex event queries filter and correlate events to match specific patterns, and transform the relevant events into new composite events for the use of external monitoring applications. Stream-based execution of these queries enables time-critical actions to be taken in environments such as supply chain management, surveillance and facility management, healthcare, etc. We first propose a complex event language that significantly extends existing event languages to meet the needs of a range of RFID-enabled monitoring applications. We then describe a query plan-based approach to efficiently implementing this language. Our approach uses native operators to efficiently handle query-defined sequences, which are a key component of complex event processing, and pipeline such sequences to subsequent operators that are built by leveraging relational techniques. We also develop a large suite of optimization techniques to address challenges such as large sliding windows and intermediate result sizes. We demonstrate the effectiveness of our approach through a detailed performance analysis of our prototype implementation under a range of data and query workloads as well as through a comparison to a state-of-the-art stream processor."}
{"_id": "c567bdc35a40e568e0661446ac4f9b397787e40d", "title": "A 2.4 GHz Interferer-Resilient Wake-Up Receiver Using A Dual-IF Multi-Stage N-Path Architecture", "text": "A 2.4 GHz interferer-resilient wake-up receiver for ultra-low power wireless sensor nodes uses an uncertain-IF dualconversion topology, combining a distributed multi-stage N-path filtering technique with an unlocked low-Q resonator-referred local oscillator. This structure provides narrow-band selectivity and strong immunity against interferers, while avoiding expensive external resonant components such as BAW resonators or crystals. The 65 nm CMOS receiver prototype provides a sensitivity of -97 dBm and a carrier-to-interferer ratio better than -27 dB at 5 MHz offset, for a data rate of 10 kb/s at a 10-3 bit error rate, while consuming 99 \u03bcW from a 0.5 V voltage supply under continuous operation."}
{"_id": "7e9e66b55476bb0c58f23695c9a6554b6e74d906", "title": "A survey of smoothing techniques for ME models", "text": "In certain contexts, maximum entropy (ME) modeling can be viewed as maximum likelihood (ML) training for exponential models, and like other ML methods is prone to overfitting of training data. Several smoothing methods for ME models have been proposed to address this problem, but previous results do not make it clear how these smoothing methods compare with smoothing methods for other types of related models. In this work, we survey previous work in ME smoothing and compare the performance of several of these algorithms with conventional techniques for smoothing -gram language models. Because of the mature body of research in -gram model smoothing and the close connection between ME and conventional -gram models, this domain is well-suited to gauge the performance of ME smoothing methods. Over a large number of data sets, we find that fuzzy ME smoothing performs as well as or better than all other algorithms under consideration. We contrast this method with previous -gram smoothing methods to explain its superior performance."}
{"_id": "9302c3d5604ac39466548af3a24dfe7bdf67a777", "title": "Calcium currents in hair cells isolated from semicircular canals of the frog.", "text": "L-type and R-type Ca(2+) currents were detected in frog semicircular canal hair cells. The former was noninactivating and nifedipine-sensitive (5 microM); the latter, partially inactivated, was resistant to omega-conotoxin GVIA (5 microM), omega-conotoxin MVIIC (5 microM), and omega-agatoxin IVA (0.4 microM), but was sensitive to mibefradil (10 microM). Both currents were sensitive to Ni(2+) and Cd(2+) (>10 microM). In some cells the L-type current amplitude increased almost twofold upon repetitive stimulation, whereas the R-type current remained unaffected. Eventually, run-down occurred for both currents, but was prevented by the protease inhibitor calpastatin. The R-type current peak component ran down first, without changing its plateau, suggesting that two channel types generate the R-type current. This peak component appeared at -40 mV, reached a maximal value at -30 mV, and became undetectable for voltages > or =0 mV, suggestive of a novel transient current: its inactivation was indeed reversibly removed when Ba(2+) was the charge carrier. The L-type current and the R-type current plateau were appreciable at -60 mV and peaked at -20 mV: the former current did not reverse for voltages up to +60 mV, the latter reversed between +30 and +60 mV due to an outward Cs(+) current flowing through the same Ca(2+) channel. The physiological role of these currents on hair cell function is discussed."}
{"_id": "24b5ac90c5155b7db5b80bfc2767ec1d7e2fb8fe", "title": "Increasing pre-kindergarten early literacy skills in children with developmental disabilities and delays.", "text": "Two hundred and nine children receiving early childhood special education services for developmental disabilities or delays who also had behavioral, social, or attentional difficulties were included in a study of an intervention to increase school readiness, including early literacy skills. Results showed that the intervention had a significant positive effect on children's literacy skills from baseline to the end of summer before the start of kindergarten (d=.14). The intervention also had significant indirect effects on teacher ratings of children's literacy skills during the fall of their kindergarten year (\u03b2=.09). Additionally, when scores were compared to standard benchmarks, a greater percentage of the children who received the intervention moved from being at risk for reading difficulties to having low risk. Overall, this study demonstrates that a school readiness intervention delivered prior to the start of kindergarten may help increase children's early literacy skills."}
{"_id": "42ef50955a7f12afad78f0bd3819dbc555580225", "title": "Deep Learning for Imbalanced Multimedia Data Classification", "text": "Classification of imbalanced data is an important research problem as lots of real-world data sets have skewed class distributions in which the majority of data instances (examples) belong to one class and far fewer instances belong to others. While in many applications, the minority instances actually represent the concept of interest (e.g., fraud in banking operations, abnormal cell in medical data, etc.), a classifier induced from an imbalanced data set is more likely to be biased towards the majority class and show very poor classification accuracy on the minority class. Despite extensive research efforts, imbalanced data classification remains one of the most challenging problems in data mining and machine learning, especially for multimedia data. To tackle this challenge, in this paper, we propose an extended deep learning approach to achieve promising performance in classifying skewed multimedia data sets. Specifically, we investigate the integration of bootstrapping methods and a state-of-the-art deep learning approach, Convolutional Neural Networks (CNNs), with extensive empirical studies. Considering the fact that deep learning approaches such as CNNs are usually computationally expensive, we propose to feed low-level features to CNNs and prove its feasibility in achieving promising performance while saving a lot of training time. The experimental results show the effectiveness of our framework in classifying severely imbalanced data in the TRECVID data set."}
{"_id": "d55dbdb1be09ab502c63dd0592cf4e7a600c1cc7", "title": "Clinical performance of porcelain laminate veneers: outcomes of the aesthetic pre-evaluative temporary (APT) technique.", "text": "This article evaluates the long-term clinical performance of porcelain laminate veneers bonded to teeth prepared with the use of an additive mock-up and aesthetic pre-evaluative temporary (APT) technique over a 12-year period. Sixty-six patients were restored with 580 porcelain laminate veneers. The technique, used for diagnosis, esthetic design, tooth preparation, and provisional restoration fabrication, was based on the APT protocol. The influence of several factors on the durability of veneers was analyzed according to pre- and postoperative parameters. With utilization of the APT restoration, over 80% of tooth preparations were confined to the dental enamel. Over 12 years, 42 laminate veneers failed, but when the preparations were limited to the enamel, the failure rate resulting from debonding and microleakage decreased to 0%. Porcelain laminate veneers presented a successful clinical performance in terms of marginal adaptation, discoloration, gingival recession, secondary caries, postoperative sensitivity, and satisfaction with restoration shade at the end of 12 years. The APT technique facilitated diagnosis, communication, and preparation, providing predictability for the restorative treatment. Limiting the preparation depth to the enamel surface significantly increases the performance of porcelain laminate veneers."}
{"_id": "8dcc86121219fdb9f813d43b35c632811da18b73", "title": "A framework for consciousness", "text": "Here we summarize our present approach to the problem of consciousness. After an introduction outlining our general strategy, we describe what is meant by the term 'framework' and set it out under ten headings. This framework offers a coherent scheme for explaining the neural correlates of (visual) consciousness in terms of competing cellular assemblies. Most of the ideas we favor have been suggested before, but their combination is original. We also outline some general experimental approaches to the problem and, finally, acknowledge some relevant aspects of the brain that have been left out of the proposed framework."}
{"_id": "5626bb154796b2157e15dbc3f5d3950aedf87d63", "title": "Passive returning mechanism for twisted string actuators", "text": "The twisted string actuator is an actuator that is gaining popularity in various engineering and robotics and applications. However, the fundamental limitation of actuators of this type is the uni-directional action, meaning that the actuator can contract but requires external power to return to its initial state. This paper proposes 2 novel passive extension mechanisms based on buckling effect to solve the uni-directional issue of the twisted string actuator. The proposed mechanisms are mechanically simple and compact and provide a nearly-constant extension force throughout the operation range. The constant force can fully extend the twisted string actuator with minimal loss of force during contraction. The designed mechanisms are evaluated in a series of practical tests, and their performances are compared and discussed."}
{"_id": "1426c3469f91faa8577293eaabb4cb9b37db228a", "title": "Compact two-layer slot array antenna with SIW for 60GHz wireless applications", "text": "In a variety of microwave and millimeter-wave applications where high performance antennas are required, waveguide slot arrays have received considerable attention due to their high aperture efficiency, low side lobe levels, and low cross polarization. Resonant slot arrays usually suffer from narrow bandwidth and high cost due to high precision required in manufacturing. Furthermore, because of using standard rectangular waveguides, the antenna array is thick and heavy and is not suitable for monolithic integration with high frequency printed circuits."}
{"_id": "20324e90033dee12c9e95952c3097ca91773be1e", "title": "Latent Aspect Mining via Exploring Sparsity and Intrinsic Information", "text": "We investigate latent aspect mining problem that aims at automatically discovering aspect information from a collection of review texts in a domain in an unsupervised manner. One goal is to discover a set of aspects which are previously unknown for the domain, and predict the user's ratings on each aspect for each review. Another goal is to detect key terms for each aspect. Existing works on predicting aspect ratings fail to handle the aspect sparsity problem in the review texts leading to unreliable prediction. We propose a new generative model to tackle the latent aspect mining problem in an unsupervised manner. By considering the user and item side information of review texts, we introduce two latent variables, namely, user intrinsic aspect interest and item intrinsic aspect quality facilitating better modeling of aspect generation leading to improvement on the accuracy and reliability of predicted aspect ratings. Furthermore, we provide an analytical investigation on the Maximum A Posterior (MAP) optimization problem used in our proposed model and develop a new block coordinate gradient descent algorithm to efficiently solve the optimization with closed-form updating formulas. We also study its convergence analysis. Experimental results on the two real-world product review corpora demonstrate that our proposed model outperforms existing state-of-the-art models."}
{"_id": "59b5b9defaa941bbea8d906a6051d7a99e4736b8", "title": "Self-compassion, body image, and disordered eating: A review of the literature.", "text": "Self-compassion, treating oneself as a loved friend might, demonstrates beneficial associations with body image and eating behaviors. In this systematic review, 28 studies supporting the role of self-compassion as a protective factor against poor body image and eating pathology are reviewed. Findings across various study designs consistently linked self-compassion to lower levels of eating pathology, and self-compassion was implicated as a protective factor against poor body image and eating pathology, with a few exceptions. These findings offer preliminary support that self-compassion may protect against eating pathology by: (a) decreasing eating disorder-related outcomes directly; (b) preventing initial occurrence of a risk factor of a maladaptive outcome; (c) interacting with risk factors to interrupt their deleterious effects; and (d) disrupting the mediational chain through which risk factors operate. We conclude with suggestions for future research that may inform intervention development, including the utilization of research designs that better afford causal inference."}
{"_id": "4859285b20507e2574b178c099eed812422ec84b", "title": "Plug-and-Play priors for model based reconstruction", "text": "Model-based reconstruction is a powerful framework for solving a variety of inverse problems in imaging. In recent years, enormous progress has been made in the problem of denoising, a special case of an inverse problem where the forward model is an identity operator. Similarly, great progress has been made in improving model-based inversion when the forward model corresponds to complex physical measurements in applications such as X-ray CT, electron-microscopy, MRI, and ultrasound, to name just a few. However, combining state-of-the-art denoising algorithms (i.e., prior models) with state-of-the-art inversion methods (i.e., forward models) has been a challenge for many reasons. In this paper, we propose a flexible framework that allows state-of-the-art forward models of imaging systems to be matched with state-of-the-art priors or denoising models. This framework, which we term as Plug-and-Play priors, has the advantage that it dramatically simplifies software integration, and moreover, it allows state-of-the-art denoising methods that have no known formulation as an optimization problem to be used. We demonstrate with some simple examples how Plug-and-Play priors can be used to mix and match a wide variety of existing denoising models with a tomographic forward model, thus greatly expanding the range of possible problem solutions."}
{"_id": "5a114d3050a0a33f8cc6d28d55fa048a5a7ab6f2", "title": "A Practical Algorithm for the Determination of Phase from Image and Diffraction Plane Pictures", "text": "An algorithm is presented for the rapid solution of the phase of the complete wave function whose intensity in the diffraction and imaging planes of an imaging system are known. A proof is given showing that a defined error between the estimated function and the correct function must decrease as the algorithm iterates. The problem of uniqueness is discussed and results are presented demonstrating the power of the method."}
{"_id": "02d1285ff1cdd96a3064d651ddcf4ae2a9b5b7f9", "title": "A Partial Derandomization of PhaseLift using Spherical Designs", "text": "The problem of retrieving phase information from amplitude measurements alone has appeared in many scientific disciplines over the last century. PhaseLift is a recently introduced algorithm for phase recovery that is computationally efficient, numerically stable, and comes with rigorous performance guarantees. PhaseLift is optimal in the sense that the number of amplitude measurements required for phase reconstruction scales linearly with the dimension of the signal. However, it specifically demands Gaussian random measurement vectors \u2014 a limitation that restricts practical utility and obscures the specific properties of measurement ensembles that enable phase retrieval. Here we present a partial derandomization of PhaseLift that only requires sampling from certain polynomial size vector configurations, called t-designs. Such configurations have been studied in algebraic combinatorics, coding theory, and quantum information. We prove reconstruction guarantees for a number of measurements that depends on the degree t of the design. If the degree is allowed to to grow logarithmically with the dimension, the bounds become tight up to polylog-factors. Beyond the specific case of PhaseLift, this work highlights the utility of spherical designs for the derandomization of data recovery schemes."}
{"_id": "0748f27afcc64a4ceaeb1213d620f62757b43d18", "title": "Blind Deconvolution Using Convex Programming", "text": "We consider the problem of recovering two unknown vectors, w and x, of length L from their circular convolution. We make the structural assumption that the two vectors are members of known subspaces, one with dimension N and the other with dimension K. Although the observed convolution is nonlinear in both w and x, it is linear in the rank-1 matrix formed by their outer product wx*. This observation allows us to recast the deconvolution problem as low-rank matrix recovery problem from linear measurements, whose natural convex relaxation is a nuclear norm minimization program. We prove the effectiveness of this relaxation by showing that, for \u201cgeneric\u201d signals, the program can deconvolve w and x exactly when the maximum of N and K is almost on the order of L. That is, we show that if x is drawn from a random subspace of dimension N, and w is a vector in a subspace of dimension K whose basis vectors are spread out in the frequency domain, then nuclear norm minimization recovers wx* without error. We discuss this result in the context of blind channel estimation in communications. If we have a message of length N, which we code using a random L x N coding matrix, and the encoded message travels through an unknown linear time-invariant channel of maximum length K, then the receiver can recover both the channel response and the message when L \u2273 N + K, to within constant and log factors."}
{"_id": "211fd281ccff3de319bab296ae760c80d9d32066", "title": "A unified approach to statistical tomography using coordinate descent optimization", "text": "Over the past years there has been considerable interest in statistically optimal reconstruction of cross-sectional images from tomographic data. In particular, a variety of such algorithms have been proposed for maximum a posteriori (MAP) reconstruction from emission tomographic data. While MAP estimation requires the solution of an optimization problem, most existing reconstruction algorithms take an indirect approach based on the expectation maximization (EM) algorithm. We propose a new approach to statistically optimal image reconstruction based on direct optimization of the MAP criterion. The key to this direct optimization approach is greedy pixel-wise computations known as iterative coordinate decent (ICD). We propose a novel method for computing the ICD updates, which we call ICD/Newton-Raphson. We show that ICD/Newton-Raphson requires approximately the same amount of computation per iteration as EM-based approaches, but the new method converges much more rapidly (in our experiments, typically five to ten iterations). Other advantages of the ICD/Newton-Raphson method are that it is easily applied to MAP estimation of transmission tomograms, and typical convex constraints, such as positivity, are easily incorporated."}
{"_id": "41466cff3ea07504e556ad3e4b9d7ae9ee1c809a", "title": "Design of bipolar pulse generator topology based on Marx supplied by double power", "text": "Pulsed power technology has been used for the ablation tumor. Considerable research shows that a high-strength unipolar pulse electric field can induce irreversible electroporation (IRE) on the cell membrane, which can effectively kill cells. But some scholars and doctors have found that muscle contractions occur during treatment, which are associated with the delivery of electric pulses. Confirmed by further studies that bipolar pulses have been proven more advanced in the treatment of tumor because of the elimination of muscle contractions and the effect for ablating non-uniform tissue. So the bipolar pulse generator is needed for the research on the tumor ablation with bipolar pulses. In this paper, a new type of modular bipolar pulsed-power generator base on Marx generator with double power is proposed. The concept of this generator is charging two series of capacitors in parallel by two power sources respectively, and then connecting the capacitors in series through solid-state switches with different control strategy. Utilizing a number of fast solid-state switches, the capacitors can be connected in series with different polarities, so that a positive or negative polarity pulse will be delivered to the load. A laboratory prototype has been implemented in laboratory. The development of this pulse generator can provide the hardware foundation for the research on biological effect without muscle contraction when the tumors are applied with bipolar pulse electric field."}
{"_id": "c79a5da34c9cfaa59ec9a3813357fc4b8d71d96f", "title": "A Data-driven Method for the Detection of Close Submitters in Online Learning Environments", "text": "Online learning has become very popular over the last decade. However, there are still many details that remain unknown about the strategies that students follow while studying online. In this study, we focus on the direction of detecting \u2018invisible\u2019 collaboration ties between students in online learning environments. Specifically, the paper presents a method developed to detect student ties based on temporal proximity of their assignment submissions. The paper reports on findings of a study that made use of the proposed method to investigate the presence of close submitters in two different massive open online courses. The results show that most of the students (i.e., student user accounts) were grouped as couples, though some bigger communities were also detected. The study also compared the population detected by the algorithm with the rest of user accounts and found that close submitters needed a statistically significant lower amount of activity with the platform to achieve a certificate of completion in a MOOC. These results confirm that the detected close submitters were performing some collaboration or even engaged in unethical behaviors, which facilitates their way into a certificate. However, more work is required in the future to specify various strategies adopted by close submitters and possible associations between the user accounts."}
{"_id": "421e6c7247f41c419a46212477d7b29540cbf7b1", "title": "High speed obstacle avoidance using monocular vision and reinforcement learning", "text": "We consider the task of driving a remote control car at high speeds through unstructured outdoor environments. We present an approach in which supervised learning is first used to estimate depths from single monocular images. The learning algorithm can be trained either on real camera images labeled with ground-truth distances to the closest obstacles, or on a training set consisting of synthetic graphics images. The resulting algorithm is able to learn monocular vision cues that accurately estimate the relative depths of obstacles in a scene. Reinforcement learning/policy search is then applied within a simulator that renders synthetic scenes. This learns a control policy that selects a steering direction as a function of the vision system's output. We present results evaluating the predictive ability of the algorithm both on held out test data, and in actual autonomous driving experiments."}
{"_id": "9829a6c00ad45a4be47f7e1e74dd03c792ec15c2", "title": "FLATM: A fuzzy logic approach topic model for medical documents", "text": "One of the challenges for text analysis in medical domains is analyzing large-scale medical documents. As a consequence, finding relevant documents has become more difficult. One of the popular methods to retrieve information based on discovering the themes in the documents is topic modeling. The themes in the documents help to retrieve documents on the same topic with and without a query. In this paper, we present a novel approach to topic modeling using fuzzy clustering. To evaluate our model, we experiment with two text datasets of medical documents. The evaluation metrics carried out through document classification and document modeling show that our model produces better performance than LDA, indicating that fuzzy set theory can improve the performance of topic models in medical domains."}
{"_id": "b750f6d8e348c9a35fef6f53d203bf7e11f4f27b", "title": "Non-Additive Imprecise Image Super-Resolution in a Semi-Blind Context", "text": "The most effective superresolution methods proposed in the literature require precise knowledge of the so-called point spread function of the imager, while in practice its accurate estimation is nearly impossible. This paper presents a new superresolution method, whose main feature is its ability to account for the scant knowledge of the imager point spread function. This ability is based on representing this imprecise knowledge via a non-additive neighborhood function. The superresolution reconstruction algorithm transfers this imprecise knowledge to output by producing an imprecise (interval-valued) high-resolution image. We propose some experiments illustrating the robustness of the proposed method with respect to the imager point spread function. These experiments also highlight its high performance compared with very competitive earlier approaches. Finally, we show that the imprecision of the high-resolution interval-valued reconstructed image is a reconstruction error marker."}
{"_id": "0372ddc78c53f24cbfcb46b2abc26e5870f93df6", "title": "Class-E LCCL for capacitive power transfer system", "text": "This paper presented the design of a capacitive power transfer (CPT) system by implementing a Class-E inverter due to its high efficiency, which in theory can reach close to 100% performance. However, the Class-E inverter is highly sensitive to its circuit parameters under the scenario of having small capacitance at the coupling plate. As a solution, an additional capacitor can be integrated into the Class-E inverter for increased coupling capacitance for a better performance. Both simulation and experimental investigations were carried out to verify the high efficiency CPT system based on Class-E inverter with additional capacitor. The outcome of the investigation exhibited 96% of overall DC-DC power transfer efficiency with 0.97 W output at 1MHz operating frequency."}
{"_id": "67377bd63c40dda67befce699c80a6caf25d3a8c", "title": "Low-latency adaptive streaming over tcp", "text": "Media streaming over TCP has become increasingly popular because TCP's congestion control provides remarkable stability to the Internet. Streaming over TCP requires adapting to bandwidth availability, but unforunately, TCP can introduce significant latency at the application level, which causes unresponsive and poor adaptation. This article shows that this latency is not inherent in TCP but occurs as a result of throughput-optimized TCP implementations. We show that this latency can be minimized by dynamically tuning TCP's send buffer. Our evaluation shows that this approach leads to better application-level adaptation and it allows supporting interactive and other low-latency applications over TCP."}
{"_id": "bef6611150c6a9468a593736f0c02d58d5ed4051", "title": "Graviola: a novel promising natural-derived drug that inhibits tumorigenicity and metastasis of pancreatic cancer cells in vitro and in vivo through altering cell metabolism.", "text": "Pancreatic tumors are resistant to conventional chemotherapies. The present study was aimed at evaluating the potential of a novel plant-derived product as a therapeutic agent for pancreatic cancer (PC). The effects of an extract from the tropical tree Annona Muricata, commonly known as Graviola, was evaluated for cytotoxicity, cell metabolism, cancer-associated protein/gene expression, tumorigenicity, and metastatic properties of PC cells. Our experiments revealed that Graviola induced necrosis of PC cells by inhibiting cellular metabolism. The expression of molecules related to hypoxia and glycolysis in PC cells (i.e. HIF-1\u03b1, NF-\u03baB, GLUT1, GLUT4, HKII, and LDHA) were downregulated in the presence of the extract. In vitro functional assays further confirmed the inhibition of tumorigenic properties of PC cells. Overall, the compounds that are naturally present in a Graviola extract inhibited multiple signaling pathways that regulate metabolism, cell cycle, survival, and metastatic properties in PC cells. Collectively, alterations in these parameters led to a decrease in tumorigenicity and metastasis of orthotopically implanted pancreatic tumors, indicating promising characteristics of the natural product against this lethal disease."}
{"_id": "9db0f052d130339ff0bc427726e081cd66f8a56c", "title": "Hand sign language recognition for Bangla alphabet using Support Vector Machine", "text": "The sign language considered as the main language for deaf and dumb people. So, a translator is needed when a normal person wants to talk with a deaf or dumb person. In this paper, we present a framework for recognizing Bangla Sign Language (BSL) using Support Vector Machine. The Bangla hand sign alphabets for both vowels and consonants have been used to train and test the recognition system. Bangla sign alphabets are recognized by analyzing its shape and comparing its features that differentiates each sign. In proposed system, hand signs are first converted to HSV color space from RGB image. Then Gabor filters are used to acquire desired hand sign features. Since feature vector obtained using Gabor filter is in a high dimension, to reduce the dimensionality a nonlinear dimensionality reduction technique that is Kernel PCA has been used. Lastly, Support Vector Machine (SVM) is employed for classification of candidate features. The experimental results show that our proposed method outperforms the existing work on Bengali hand sign recognition."}
{"_id": "703244978b61a709e0ba52f5450083f31e3345ec", "title": "How Learning Works: Seven Research-Based Principles for Smart Teaching", "text": "In this volume, the authors introduce seven general principles of learning, distilled from the research literature as well as from twenty-seven years of experience working one-on-one with college faculty. They have drawn on research from a breadth of perspectives (cognitive, developmental, and social psychology; educational research; anthropology; demographics; and organizational behavior) to identify a set of key principles underlying learningfrom how effective organization enhances retrieval and use of information to what impacts motivation. These principles provide instructors with an understanding of student learning that can help them see why certain teaching approaches are or are not supporting student learning, generate or refine teaching approaches and strategies that more effectively foster student learning in specific contexts, and transfer and apply these principles to new courses."}
{"_id": "52a345a29267107f92aec9260b6f8e8222305039", "title": "Deeper Inside PageRank", "text": "This paper serves as a companion or extension to the \u201cInside PageRank\u201d paper by Bianchini et al. [19]. It is a comprehensive survey of all issues associated with PageRank, covering the basic PageRank model, available and recommended solution methods, storage issues, existence, uniqueness, and convergence properties, possible alterations to the basic model, suggested alternatives to the traditional solution methods, sensitivity and conditioning, and finally the updating problem. We introduce a few new results, provide an extensive reference list, and speculate about exciting areas of future research."}
{"_id": "2a778ada8905fe6c9ff108b05a3554ebdaa52118", "title": "Practical Design and Implementation of Metamaterial-Enhanced Magnetic Induction Communication", "text": "The wireless communications in complex environments, such as underground and underwater, can enable various applications in the environmental, industrial, homeland security, law enforcement, and military fields. However, conventional electromagnetic wave-based techniques do not work due to the lossy media and complicated structures. Magnetic induction (MI) has been proved to achieve reliable communication in such environments. However, due to the small antenna size, the communication range of MI is still very limited, especially for the portable mobile devices. To this end, Metamaterial-enhanced MI (M2I) communication has been proposed, where the theoretical results predict that it can significantly increase the data rate and range. Nevertheless, there exists a significant gap between the theoretical prediction and the practical realization of M2I; the theoretical model relies on an ideal spherical metamaterial, while it does not exist in nature. In this paper, a practical design is proposed by leveraging a spherical coil array to realize M2I communication. The full-wave simulation is conducted to validate the design objectives. By using the spherical coil array-based M2I communication, the communication range can be significantly extended, exactly as we predicted in the ideal M2I model. Finally, the proposed M2I communication is implemented and tested in various environments."}
{"_id": "22e6b48f31953a3a5bff5feb75a33d76e0af48f7", "title": "The somatic marker hypothesis: A critical evaluation", "text": "The somatic marker hypothesis (SMH; [Damasio, A. R., Tranel, D., Damasio, H., 1991. Somatic markers and the guidance of behaviour: theory and preliminary testing. In Levin, H.S., Eisenberg, H.M., Benton, A.L. (Eds.), Frontal Lobe Function and Dysfunction. Oxford University Press, New York, pp. 217-229]) proposes that emotion-based biasing signals arising from the body are integrated in higher brain regions, in particular the ventromedial prefrontal cortex (VMPFC), to regulate decision-making in situations of complexity. Evidence for the SMH is largely based on performance on the Iowa Gambling Task (IGT; [Bechara, A., Tranel, D., Damasio, H., Damasio, A.R., 1996. Failure to respond autonomically to anticipated future outcomes following damage to prefrontal cortex. Cerebral Cortex 6 (2), 215-225]), linking anticipatory skin conductance responses (SCRs) to successful performance on a decision-making paradigm in healthy participants. These 'marker' signals were absent in patients with VMPFC lesions and were associated with poorer IGT performance. The current article reviews the IGT findings, arguing that their interpretation is undermined by the cognitive penetrability of the reward/punishment schedule, ambiguity surrounding interpretation of the psychophysiological data, and a shortage of causal evidence linking peripheral feedback to IGT performance. Further, there are other well-specified and parsimonious explanations that can equally well account for the IGT data. Next, lesion, neuroimaging, and psychopharmacology data evaluating the proposed neural substrate underpinning the SMH are reviewed. Finally, conceptual reservations about the novelty, parsimony and specification of the SMH are raised. It is concluded that while presenting an elegant theory of how emotion influences decision-making, the SMH requires additional empirical support to remain tenable."}
{"_id": "a53557c7611e66d61054acf163a9d8d4ba161c51", "title": "Crowdsourced Comprehension: Predicting Prerequisite Structure in Wikipedia", "text": "The growth of open-access technical publications and other open-domain textual information sources means that there is an increasing amount of online technical material that is in principle available to all, but in practice, incomprehensible to most. We propose to address the task of helping readers comprehend complex technical material, by using statistical methods to model the \u201cprerequisite structure\u201d of a corpus \u2014 i.e., the semantic impact of documents on an individual reader\u2019s state of knowledge. Experimental results using Wikipedia as the corpus suggest that this task can be approached by crowdsourcing the production of ground-truth labels regarding prerequisite structure, and then generalizing these labels using a learned classifier which combines signals of various sorts. The features that we consider relate pairs of pages by analyzing not only textual features of the pages, but also how the containing corpora is connected and created."}
{"_id": "7152c2531410d3f32531a854edd26ae7ebccb8d0", "title": "The Advanced Health and Disaster Aid Network: A Light-Weight Wireless Medical System for Triage", "text": "Advances in semiconductor technology have resulted in the creation of miniature medical embedded systems that can wirelessly monitor the vital signs of patients. These lightweight medical systems can aid providers in large disasters who become overwhelmed with the large number of patients, limited resources, and insufficient information. In a mass casualty incident, small embedded medical systems facilitate patient care, resource allocation, and real-time communication in the advanced health and disaster aid network (AID-N). We present the design of electronic triage tags on lightweight, embedded systems with limited memory and computational power. These electronic triage tags use noninvasive, biomedical sensors (pulse oximeter, electrocardiogram, and blood pressure cuff) to continuously monitor the vital signs of a patient and deliver pertinent information to first responders. This electronic triage system facilitates the seamless collection and dissemination of data from the incident site to key members of the distributed emergency response community. The real-time collection of data through a mesh network in a mass casualty drill was shown to approximately triple the number of times patients that were triaged compared with the traditional paper triage system."}
{"_id": "4f805391383b20dbc9992796d515029884ba468b", "title": "Cache-aware Roofline model: Upgrading the loft", "text": "The Roofline model graphically represents the attainable upper bound performance of a computer architecture. This paper analyzes the original Roofline model and proposes a novel approach to provide a more insightful performance modeling of modern architectures by introducing cache-awareness, thus significantly improving the guidelines for application optimization. The proposed model was experimentally verified for different architectures by taking advantage of built-in hardware counters with a curve fitness above 90%."}
{"_id": "6928b1bf7c54a4aa8d976317c506e5e5f3eae085", "title": "Deception Detection using Real-life Trial Data", "text": "Hearings of witnesses and defendants play a crucial role when reaching court trial decisions. Given the high-stake nature of trial outcomes, implementing accurate and effective computational methods to evaluate the honesty of court testimonies can offer valuable support during the decision making process. In this paper, we address the identification of deception in real-life trial data. We introduce a novel dataset consisting of videos collected from public court trials. We explore the use of verbal and non-verbal modalities to build a multimodal deception detection system that aims to discriminate between truthful and deceptive statements provided by defendants and witnesses. We achieve classification accuracies in the range of 60-75% when using a model that extracts and fuses features from the linguistic and gesture modalities. In addition, we present a human deception detection study where we evaluate the human capability of detecting deception in trial hearings. The results show that our system outperforms the human capability of identifying deceit."}
{"_id": "e6020e07095eb431595375782572fdfd4b31cc89", "title": "TUTORIAL ON AGENT-BASED MODELING AND SIMULATION", "text": "Agent-based modeling and simulation (ABMS) is a new approach to modeling systems comprised of autonomous, interacting agents. ABMS promises to have far-reaching effects on the way that businesses use computers to support decision-making and researchers use electronic laboratories to support their research. Some have gone so far as to contend that ABMS is a third way of doing science besides deductive and inductive reasoning. Computational advances have made possible a growing number of agentbased applications in a variety of fields. Applications range from modeling agent behavior in the stock market and supply chains, to predicting the spread of epidemics and the threat of bio-warfare, from modeling consumer behavior to understanding the fall of ancient civilizations, to name a few. This tutorial describes the theoretical and practical foundations of ABMS, identifies toolkits and methods for developing ABMS models, and provides some thoughts on the relationship between ABMS and traditional modeling techniques."}
{"_id": "a9b35080d736b9e5cdc6230512809123b383e671", "title": "Human activity recognition with wearable sensors", "text": "This thesis investigates the use of wearable sensors to recognize human activity. The activity of the user is one example of context information \u2013 others include the user\u2019s location or the state of his environment \u2013 which can help computer applications to adapt to the user depending on the situation. In this thesis we use wearable sensors \u2013 mainly accelerometers \u2013 to record, model and recognize human activities. Using wearable sensors allows continuous recording of activities across different locations and independent from external infrastructure. There are many possible applications for activity recognition with wearable sensors, for instance in the areas of healthcare, elderly care, personal fitness, entertainment, or performing arts. In this thesis we focus on two particular research challenges in activity recognition, namely the need for less supervision, and the recognition of high-level activities. We make several contributions towards addressing these challenges. Our first contribution is an analysis of features for activity recognition. Using a data set of activities such as walking, standing, sitting, or hopping, we analyze the performance of commonly used features and window lengths over which the features are computed. Our results indicate that different features perform well for different activities, and that in order to achieve best recognition performance, features and window lengths should be chosen specific for each activity. In order to reduce the need for labeled training data, we propose an unsupervised algorithm which can discover structure in unlabeled recordings of activities. The approach identifies correlated subsets in feature space, and represents these subsets with low-dimensional models. We show that the discovered subsets often correspond to distinct activities, and that the resulting models can be used for recognition of activities in unknown data. In a separate study, we show that the approach can be effectively deployed in a semi-supervised learning framework. More specifically, we combine the approach with a discriminant classifier, and show that this scheme allows high recognition rates even when using only a small amount of labeled training data. Recognition of higher-level activities such as shopping, doing housework, or commuting is challenging, as these activities are composed of changing sub-activities and vary strongly across individuals. We present one study in which we recorded 10h of three different high-level activities, investigating to which extent methods for low-level activities can be scaled to the recognition of high-level activities. Our results indicate that for settings as ours, traditional supervised approaches in combination with data from wearable accelerometers can achieve recognition rates of more than 90%. While unsupervised techniques are desirable for short-term activities, they become crucial for long-term activities, for which annotation is often impractical or impossible. To this end we propose an unsupervised approach based on topic models that allows to discover high-level structure in human activity data. The discovered activity patterns correlate with daily routines such as commuting, office work, or lunch routine, and they can be used to recognize such routines in unknown data."}
{"_id": "224c853d15de6aad11d2121f5ded07aa563219e3", "title": "The land unit \u2014 A fundamental concept in landscape ecology, and its applications", "text": "The land unit, as an expression of landscape as a system, is a fundamental concept in landscape ecology. It is an ecologically homogeneous tract of land at the scale at issue. It provides a basis for studying topologic as well as chorologic landscape ecology relationships. A land unit survey aims at mapping such land units. This is done by simultaneously using characteristics of the most obvious (mappable) land attributes: land-form, soil and vegetation (including human alteration of these three). The land unit is the basis of the map legend but may be expressed via these three land attributes. The more dynamic land attributes, such as certain animal populations and water fluxes, are less suitable as diagnostic criteria, but often link units by characteristic information/energy fluxes. The land unit survey is related to a further development of the widely accepted physiographic soil survey see Edelman (1950). Important aspects include: by means of a systems approach, the various land data can be integrated more appropriately; geomorphology, vegetation and soil science support each other during all stages (photo-interpretation, field survey, data processing, final classification); the time and costs are considerably less compared with the execution of separate surveys; the result is directly suitable as a basis for land evaluation; the results can be expressed in separate soil, vegetation, land use and landform maps, or even single value maps. A land unit survey is therefore: a method for efficient survey of land attributes, such as soils, vegetation, landform, expressed in either separate or combined maps; a means of stimulating integration among separate land attribute sciences; an efficient basis for land evaluation. For multidisciplinary projects with applied ecologic aims (e.g., land management), it is therefore the most appropriate survey approach. Within the land unit approach there is considerable freedom in the way in which the various land attribute data are \u2018integrated\u2019. It is essential, however, that: during the photo-interpretation stage, the contributions of the various specialists are brought together to prepare a preliminary (land unit) photo-interpretation map; the fieldwork data are collected at exactly the same sample point, preferably by a team of specialists in which soil, vegetation and geomorphology are represented; the final map is prepared in close cooperation of all contributing disciplines, based on photo-interpretation and field data; the final map approach may vary from one fully-integrated land unit map to various monothematic maps."}
{"_id": "3e2cfe5a0b7d3c6329a11575b3d91128faa64b4b", "title": "Spanish Multicenter Normative Studies (NEURONORMA Project): norms for verbal span, visuospatial span, letter and number sequencing, trail making test, and symbol digit modalities test.", "text": "As part of the Spanish Multicenter Normative Studies (NEURONORMA project), we provide age- and education-adjusted norms for the following instruments: verbal span (digits), visuospatial span (Corsi's test), letter-number sequencing (WAIS-III), trail making test, and symbol digit modalities test. The sample consists of 354 participants who are cognitively normal, community-dwelling, and age ranging from 50 to 90 years. Tables are provided to convert raw scores to age-adjusted scaled scores. These were further converted into education-adjusted scaled scores by applying regression-based adjustments. The current norms should provide clinically useful data for evaluating elderly Spanish people. These data may be of considerable use for comparisons with other normative studies. Limitations of these normative data are mainly related to the techniques of recruitment and stratification employed."}
{"_id": "a2a8fc3722ce868ac3cc37fd539f50afa31f4445", "title": "Cross-cultural adaptation of health-related quality of life measures: literature review and proposed guidelines.", "text": "Clinicians and researchers without a suitable health-related quality of life (HRQOL) measure in their own language have two choices: (1) to develop a new measure, or (2) to modify a measure previously validated in another language, known as a cross-cultural adaptation process. We propose a set of standardized guidelines for this process based on previous research in psychology and sociology and on published methodological frameworks. These guidelines include recommendations for obtaining semantic, idiomatic, experiential and conceptual equivalence in translation by using back-translation techniques and committee review, pre-testing techniques and re-examining the weight of scores. We applied these guidelines to 17 cross-cultural adaptation of HRQOL measures identified through a comprehensive literature review. The reporting standards varied across studies but agreement between raters in their ratings of the studies was substantial to almost perfect (weighted kappa = 0.66-0.93) suggesting that the guidelines are easy to apply. Further research is necessary in order to delineate essential versus optional steps in the adaptation process."}
{"_id": "566deec44c9788ec88a8f559bab6d42a8f69c10a", "title": "Low Power Magnetic Full-Adder Based on Spin Transfer Torque MRAM", "text": "Power issues have become a major problem of CMOS logic circuits as technology node shrinks below 90 nm. In order to overcome this limitation, emerging logic-in-memory architecture based on nonvolatile memories (NVMs) are being investigated. Spin transfer torque (STT) magnetic random access memory (MRAM) is considered one of the most promising NVMs thanks to its high speed, low power, good endurance, and 3-D back-end integration. This paper presents a novel magnetic full-adder (MFA) design based on perpendicular magnetic anisotropy (PMA) STT-MRAM. It provides advantageous power efficiency and die area compared with conventional CMOS-only full adder (FA). Transient simulations have been performed to validate this design by using an industrial CMOS 40 nm design kit and an accurate STT-MRAM compact model including physical models and experimental measurements."}
{"_id": "331a475a518d8ac95e8054c99fe9f5f6fc879519", "title": "A Wideband Dual-Polarized Antenna for LTE700/GSM850/GSM900 Applications", "text": "A novel resonator-loaded wideband dual-polarized antenna is proposed for LTE700/GSM850/GSM900 applications. Four single-polarized miniaturized antenna elements are placed with square arrangement upon the reflector for dual polarization. One pair of parallel elements is for +45\u00b0 polarization, and the other pair is for \u201345\u00b0 polarization. The impedance bandwidth is greatly improved loaded with a resonator. Experimentally, the antenna exhibits a wide impedance bandwidth of 37.5% ranging from 0.67 to 0.98 GHz for | $S$11|<\u201315 dB, high isolation of more than 40 dB, and stable gain of around 9.5 dBi over the whole operating band."}
{"_id": "ec64e26322c7fdbf9e54981a07693974423f3031", "title": "Sex differences in mate preferences revisited: do people know what they initially desire in a romantic partner?", "text": "In paradigms in which participants state their ideal romantic-partner preferences or examine vignettes and photographs, men value physical attractiveness more than women do, and women value earning prospects more than men do. Yet it remains unclear if these preferences remain sex differentiated in predicting desire for real-life potential partners (i.e., individuals whom one has actually met). In the present study, the authors explored this possibility using speed dating and longitudinal follow-up procedures. Replicating previous research, participants exhibited traditional sex differences when stating the importance of physical attractiveness and earning prospects in an ideal partner and ideal speed date. However, data revealed no sex differences in the associations between participants' romantic interest in real-life potential partners (met during and outside of speed dating) and the attractiveness and earning prospects of those partners. Furthermore, participants' ideal preferences, assessed before the speed-dating event, failed to predict what inspired their actual desire at the event. Results are discussed within the context of R. E. Nisbett and T. D. Wilson's (1977) seminal article: Even regarding such a consequential aspect of mental life as romantic-partner preferences, people may lack introspective awareness of what influences their judgments and behavior."}
{"_id": "0e5c8094d3da52340b58761d441eb809ff96743f", "title": "Distributed active transformer-a new power-combining andimpedance-transformation technique", "text": "In this paper, we compare the performance of the newly introduced distributed active transformer (DAT) structure to that of conventional on-chip impedance-transformations methods. Their fundamental power-efficiency limitations in the design of high-power fully integrated amplifiers in standard silicon process technologies are analyzed. The DAT is demonstrated to be an efficient impedance-transformation and power-combining method, which combines several low-voltage push-pull amplifiers in series by magnetic coupling. To demonstrate the validity of the new concept, a 2.4-GHz 1.9-W 2-V fully integrated power-amplifier achieving a power-added efficiency of 41% with 50input and output matching has been fabricated using 0.35-\u03bcm CMOS transistors Item Type: Article Additional Information: \u00a9 Copyright 2002 IEEE. Reprinted with permission. Manuscript received May 27, 2001. [Posted online: 2002-08-07] This work was supported by the Intel Corporation, the Army Research Office, the Jet Propulsion Laboratory, Infinion, and the National Science Foundation. The authors thank Conexant Systems for chip fabrication, particularly R. Magoon, F. In\u2019tveld, J. Powell, A. Vo, and K. Moye. K. Potter, D. Ham, and H.Wu, all of the California Institute of Technology (Caltech), Pasadena, deserve special thanks for their assistance. The technical support for CAD tools from Agilent Technologies and Sonnet Software Inc., Liverpool, NY, are also appreciated. \u201cSpecial Issue on Silicon-Based RF and Microwave Integrated Circuits\u201d, IEEE Transactions on Microwave Theory and Techniques, vol. 50, no. 1, part 2 Subject"}
{"_id": "14fae9835ae65adfdc434b7b7e761487e7a9548f", "title": "A simplified design approach for radial power combiners", "text": "It is known that a radial power combiner is very effective in combining a large number of power amplifiers, where high efficiency (greater than 90%) over a relatively wide band can be achieved. However, its current use is limited due to its design complexity. In this paper, we develop a step-by-step design procedure, including both the initial approximate design formulas and suitable models for final accurate design optimization purposes. Based on three-dimensional electromagnetic modeling, predicted results were in excellent agreement with those measured. Practical issues related to the radial-combiner efficiency, its graceful degradation, and the effects of higher order package resonances are discussed here in detail"}
{"_id": "47fdb5ec9522019ef7e580d59c262b3dc9519b26", "title": "A planar probe double ladder waveguide power divider", "text": "The successful demonstration of a 1:4 power divider using microstrip probes and a WR-430 rectangular waveguide is presented. The 15-dB return loss bandwidth of the nonoptimized structure is demonstrated to be 22% and its 0.5-dB insertion loss bandwidth 26%. While realized through conventional machining, such a structure is assembled in a fashion consistent with proven millimeter and submillimeter-wave micromachining techniques. Thus, the structure presents a potential power dividing and power combining architecture, which through micromachining, may be used for applications well above 100GHz."}
{"_id": "68218edaf08484871258387e95161a3ce0e6fe67", "title": "A Ka-band power amplifier based on the traveling-wave power-dividing/combining slotted-waveguide circuit", "text": "An eight-device Ka-band solid-state power amplifier has been designed and fabricated using a traveling-wave power-dividing/combining technique. The low-profile slotted-waveguide structure employed in this design provides not only a high power-combining efficiency over a wide bandwidth, but also efficient heat sinking for the active devices. The measured maximum small-signal gain of the eight-device power amplifier is 19.4 dB at 34 GHz with a 3-dB bandwidth of 3.2 GHz (f/sub L/=31.8 GHz, f/sub H/=35 GHz). The measured maximum output power at 1-dB compression (P/sub out/ at 1 dB) from the power amplifier is 33 dBm (/spl sim/2 W) at 32.2 GHz, with a power-combining efficiency of 80%. Furthermore, performance degradation of this power amplifier due to device failures has also been simulated and measured."}
{"_id": "db884813d6d764aea836c44f46604128735bffe0", "title": "Broad-Band High-Power Amplifier Using Spatial Power-Combining Technique", "text": "High power, broad bandwidth, high linearity, and low noise are among the most important features in amplifier design. The broad-band spatial power-combining technique addresses all these issues by combining the output power of a large quantity of microwave monolithic integrated circuit (MMIC) amplifiers in a broad-band coaxial waveguide environment, while maintaining good linearity and improving phase noise of the MMIC amplifiers. A coaxial waveguide was used as the host of the combining circuits for broader bandwidth and better uniformity by equally distributing the input power to each element. A new compact coaxial combiner with much smaller size is investigated. Broad-band slotline to microstrip-line transition is integrated for better compatibility with commercial MMIC amplifiers. Thermal simulations are performed and an improved thermal management scheme over previous designs is employed to improve the heat sinking in high-power application. A high-power amplifier using the compact combiner design is built and demonstrated to have a bandwidth from 6 to 17 GHz with 44-W maximum output power. Linearity measurement has shown a high third-order intercept point of 52 dBm. Analysis shows the amplifier has the ability to extend spurious-free dynamic range by 2 3 times. The amplifier also has shown a residual phase floor close to 140 dBc at 10-kHz offset from the carrier with 5\u20136-dB reductions compared to a single MMIC amplifier it integrates."}
{"_id": "7284cc6bada61d9233eb96c8b62362a46b220ad1", "title": "Ping-pong trajectory perception and prediction by a PC based High speed four-camera vision system", "text": "A high speed vision system is vital for a robot player to play table tennis successfully. The three dimensional ping-pong trajectory should be perceived in high precision at a high frame rate. What's more, with the beginning parts of the trajectory, the following trajectory should be predicted to save enough time for successful planning and control."}
{"_id": "84f7b7be76bc9f34e6ed9ee15defafaeb85ec419", "title": "Multimodal Gesture Recognition Using 3-D Convolution and Convolutional LSTM", "text": "Gesture recognition aims to recognize meaningful movements of human bodies, and is of utmost importance in intelligent human\u2013computer/robot interactions. In this paper, we present a multimodal gesture recognition method based on 3-D convolution and convolutional long-short-term-memory (LSTM) networks. The proposed method first learns short-term spatiotemporal features of gestures through the 3-D convolutional neural network, and then learns long-term spatiotemporal features by convolutional LSTM networks based on the extracted short-term spatiotemporal features. In addition, fine-tuning among multimodal data is evaluated, and we find that it can be considered as an optional skill to prevent overfitting when no pre-trained models exist. The proposed method is verified on the ChaLearn LAP large-scale isolated gesture data set (IsoGD) and the Sheffield Kinect gesture (SKIG) data set. The results show that our proposed method can obtain the state-of-the-art recognition accuracy (51.02% on the validation set of IsoGD and 98.89% on SKIG)."}
{"_id": "38749ebf3f3df0cb7ecc8d6ac9b174ec5b6b73a1", "title": "Unsupervised image segmentation using convolutional autoencoder with total variation regularization as preprocessing", "text": "Conventional unsupervised image segmentation methods use color and geometric information and apply clustering algorithms over pixels. They preserve object boundaries well but often suffer from over-segmentation due to noise and artifacts in the images. In this paper, we contribute on a preprocessing step for image smoothing, which alleviates the burden of conventional unsupervised image segmentation and enhance their performance. Our approach relies on a convolutional autoencoder (CAE) with the total variation loss (TVL) for unsupervised learning. We show that, after our CAE-TVL preprocessing step, the over-segmentation effect is significantly reduced using the same unsupervised image segmentation methods. We evaluate our approach using the BSDS500 image segmentation benchmark dataset and show the performance enhancement introduced by our approach in terms of both increased segmentation accuracy and reduced computation time. We examine the robustness of the trained CAE and show that it is directly applicable to other natural scene images."}
{"_id": "6af1752e2d00000944f630f745d7febba59079cb", "title": "Dependence maps, a dimensionality reduction with dependence distance for high-dimensional data", "text": "We introduce the dependence distance, a new notion of the intrinsic distance between points, derived as a pointwise extension of statistical dependence measures between variables. We then introduce a dimension reduction procedure for preserving this distance, which we call the dependence map. We explore its theoretical justification, connection to other methods, and empirical behavior on real data sets."}
{"_id": "e7c6da7b1a9b772a3af02a82aa6e670f23f5dad1", "title": "Prevention of falls and consequent injuries in elderly people", "text": "Injuries resulting from falls in elderly people are a major public-health concern, representing one of the main causes of longstanding pain, functional impairment, disability, and death in this population. The problem is going to worsen, since the rates of such injuries seem to be rising in many areas, as is the number of elderly people in both the developed and developing world. Many methods and programmes to prevent such injuries already exist, including regular exercise, vitamin D and calcium supplementation, withdrawal of psychotropic medication, cataract surgery, professional environment hazard assessment and modification, hip protectors, and multifactorial preventive programmes for simultaneous assessment and reduction of many of the predisposing and situational risk factors. To receive broader-scale effectiveness, these programmes will need systematic implementation. Care must be taken, however, to rigorously select the right actions for those people most likely to benefit, such as vitamin D and calcium supplementation and hip protectors for elderly people living in institutions."}
{"_id": "546aa62efd9639006053b9ed46bca3b7925c7305", "title": "Geometric Scene Parsing with Hierarchical LSTM", "text": "This paper addresses the problem of geometric scene parsing, i.e. simultaneously labeling geometric surfaces (e.g. sky, ground and vertical plane) and determining the interaction relations (e.g. layering, supporting, siding and affinity) between main regions. This problem is more challenging than the traditional semantic scene labeling, as recovering geometric structures necessarily requires the rich and diverse contextual information. To achieve these goals, we propose a novel recurrent neural network model, named Hierarchical Long Short-Term Memory (H-LSTM). It contains two coupled sub-networks: the Pixel LSTM (P-LSTM) and the Multi-scale Super-pixel LSTM (MS-LSTM) for handling the surface labeling and relation prediction, respectively. The two sub-networks provide complementary information to each other to exploit hierarchical scene contexts, and they are jointly optimized for boosting the performance. Our extensive experiments show that our model is capable of parsing scene geometric structures and outperforming several state-of-theart methods by large margins. In addition, we show promising 3D reconstruction results from the still images based on the geometric parsing."}
{"_id": "9ca545e205d5da6fe668774ce9e4bc5be280fbab", "title": "Securing wireless sensor networks: a survey", "text": "The significant advances of hardware manufacturing technology and the development of efficient software algorithms make technically and economically feasible a network composed of numerous, small, low-cost sensors using wireless communications, that is, a wireless sensor network. WSNs have attracted intensive interest from both academia and industry due to their wide application in civil and military scenarios. In hostile scenarios, it is very important to protect WSNs from malicious attacks. Due to various resource limitations and the salient features of a wireless sensor network, the security design for such networks is significantly challenging. In this article, we present a comprehensive survey of WSN security issues that were investigated by researchers in recent years and that shed light on future directions for WSN security."}
{"_id": "6cd5c87d4877f1a5a71719fcffd7c50ba343ec48", "title": "A theory of self-calibration of a moving camera", "text": "There is a close connection between the calibration of a single camera and the epipolar transformation obtained when the camera undergoes a displacement. The epipolar transformation imposes two algebraic constraints on the camera calibration. If two epipolar transformations, arising from different camera displacements, are available then the compatible camera calibrations are parameterized by an algebraic curve of genus four. The curve can be represented either by a space curve of degree seven contained in the intersection of two cubic surfaces, or by a curve of degree six in the dual of the image plane. The curve in the dual plane has one singular point of order three and three singular points of order two. If three epipolar transformations are available, then two curves of degree six can be obtained in the dual plane such that one of the real intersections of the two yields the correct camera calibration. The two curves have a common singular point of order three. Experimental results are given to demonstrate the feasibility of camera calibration based on the epipolar transformation. The real intersections of the two dual curves are found by locating the zeros of a function defined on the interval [0, 2\u03c0]. The intersection yielding the correct camera calibration is picked out by referring back to the three epipolar transformations."}
{"_id": "e3ccd78eae121ffed6a6d9294901db03704df2e5", "title": "Bearing Fault Detection by a Novel Condition-Monitoring Scheme Based on Statistical-Time Features and Neural Networks", "text": "Bearing degradation is the most common source of faults in electrical machines. In this context, this work presents a novel monitoring scheme applied to diagnose bearing faults. Apart from detecting local defects, i.e., single-point ball and raceway faults, it takes also into account the detection of distributed defects, such as roughness. The development of diagnosis methodologies considering both kinds of bearing faults is, nowadays, subject of concern in fault diagnosis of electrical machines. First, the method analyzes the most significant statistical-time features calculated from vibration signal. Then, it uses a variant of the curvilinear component analysis, a nonlinear manifold learning technique, for compression and visualization of the feature behavior. It allows interpreting the underlying physical phenomenon. This technique has demonstrated to be a very powerful and promising tool in the diagnosis area. Finally, a hierarchical neural network structure is used to perform the classification stage. The effectiveness of this condition-monitoring scheme has been verified by experimental results obtained from different operating conditions."}
{"_id": "06fd86110dbcdc37f298ac5f35c5cb9ccdb1ac08", "title": "A delay-tolerant network architecture for challenged internets", "text": "The highly successful architecture and protocols of today's Internet may operate poorly in environments characterized by very long delay paths and frequent network partitions. These problems are exacerbated by end nodes with limited power or memory resources. Often deployed in mobile and extreme environments lacking continuous connectivity, many such networks have their own specialized protocols, and do not utilize IP. To achieve interoperability between them, we propose a network architecture and application interface structured around optionally-reliable asynchronous message forwarding, with limited expectations of end-to-end connectivity and node resources. The architecture operates as an overlay above the transport layers of the networks it interconnects, and provides key services such as in-network data storage and retransmission, interoperable naming, authenticated forwarding and a coarse-grained class of service."}
{"_id": "6a9d28f382eeaabfe878eaafce1f711c36a69843", "title": "Gimli: open source and high-performance biomedical name recognition", "text": "Automatic recognition of biomedical names is an essential task in biomedical information extraction, presenting several complex and unsolved challenges. In recent years, various solutions have been implemented to tackle this problem. However, limitations regarding system characteristics, customization and usability still hinder their wider application outside text mining research. We present Gimli, an open-source, state-of-the-art tool for automatic recognition of biomedical names. Gimli includes an extended set of implemented and user-selectable features, such as orthographic, morphological, linguistic-based, conjunctions and dictionary-based. A simple and fast method to combine different trained models is also provided. Gimli achieves an F-measure of 87.17% on GENETAG and 72.23% on JNLPBA corpus, significantly outperforming existing open-source solutions. Gimli is an off-the-shelf, ready to use tool for named-entity recognition, providing trained and optimized models for recognition of biomedical entities from scientific text. It can be used as a command line tool, offering full functionality, including training of new models and customization of the feature set and model parameters through a configuration file. Advanced users can integrate Gimli in their text mining workflows through the provided library, and extend or adapt its functionalities. Based on the underlying system characteristics and functionality, both for final users and developers, and on the reported performance results, we believe that Gimli is a state-of-the-art solution for biomedical NER, contributing to faster and better research in the field. Gimli is freely available at http://bioinformatics.ua.pt/gimli ."}
{"_id": "37fdf57456a1b1bd5e7f0c88c0ad79f29c962a8d", "title": "Using audio and visual information for single channel speaker separation", "text": "This work proposes a method to exploit both audio and visual speech information to extract a target speaker from a mixture of competing speakers. The work begins by taking an effective audio-only method of speaker separation, namely the soft mask method, and modifying its operation to allow visual speech information to improve the separation process. The audio input is taken from a single channel and includes the mixture of speakers, where as a separate set of visual features are extracted from each speaker. This allows modification of the separation process to include not only the audio speech but also visual speech from each speaker in the mixture. Experimental results are presented that compare the proposed audio-visual speaker separation with audio-only and visual-only methods using both speech quality and speech intelligibility metrics."}
{"_id": "2b1a9f7b70cedd9a2fa23f33c65b944834878201", "title": "Anchors aweigh: A demonstration of cross-modality anchoring and magnitude priming", "text": "Research has shown that judgments tend to assimilate to irrelevant \"anchors.\" We extend anchoring effects to show that anchors can even operate across modalities by, apparently, priming a general sense of magnitude that is not moored to any unit or scale. An initial study showed that participants drawing long \"anchor\" lines made higher numerical estimates of target lengths than did those drawing shorter lines. We then replicated this finding, showing that a similar pattern was obtained even when the target estimates were not in the dimension of length. A third study showed that an anchor's length relative to its context, and not its absolute length, is the key to predicting the anchor's impact on judgments. A final study demonstrated that magnitude priming (priming a sense of largeness or smallness) is a plausible mechanism underlying the reported effects. We conclude that the boundary conditions of anchoring effects may be much looser than previously thought, with anchors operating across modalities and dimensions to bias judgment."}
{"_id": "86a1ef1a1a0a51de957d9241306a02df9d99e6bd", "title": "An OpenCCG-Based Approach to Question Generation from Concepts", "text": "Dialogue systems are often regarded as being tedious and inflexible. We believe that one reason is rigid and inadaptable system utterances. A good dialogue system should automatically choose a formulation that reflects the user\u2019s expectations. However, current dialogue system development environments only allow the definition of questions with unchangeable formulations. In this paper we present a new approach to the generation of system questions by only defining basic concepts. This is the basis for realising adaptive, user-tailored, and human-like system questions in dialogue systems."}
{"_id": "e73ee8174589e9326d3b36484f1b95685cb1ca42", "title": "mmWave phased-array with hemispheric coverage for 5th generation cellular handsets", "text": "A first-of-the-kind 28 GHz antenna solution for the upcoming 5th generation cellular communication is presented in detail. Extensive measurements and simulations ascertain the proposed 28 GHz antenna solution to be highly effective for cellular handsets operating in realistic propagating environments."}
{"_id": "7ab45f9ef1f30442f19ce474ec036a1cee0fdded", "title": "Toward negotiable reinforcement learning: shifting priorities in Pareto optimal sequential decision-making", "text": "Existing multi-objective reinforcement learning (MORL) algorithms do not account for objectives that arise from players with differing beliefs. Concretely, consider two players with different beliefs and utility functions who may cooperate to build a machine that takes actions on their behalf. A representation is needed for how much the machine\u2019s policy will prioritize each player\u2019s interests over time. Assuming the players have reached common knowledge of their situation, this paper derives a recursion that any Pareto optimal policy must satisfy. Two qualitative observations can be made from the recursion: the machine must (1) use each player\u2019s own beliefs in evaluating how well an action will serve that player\u2019s utility function, and (2) shift the relative priority it assigns to each player\u2019s expected utilities over time, by a factor proportional to how well that player\u2019s beliefs predict the machine\u2019s inputs. Observation (2) represents a substantial divergence from n\u00e4\u0131ve linear utility aggregation (as in Harsanyi\u2019s utilitarian theorem, and existing MORL algorithms), which is shown here to be inadequate for Pareto optimal sequential decision-making on behalf of players with different beliefs."}
{"_id": "b6a60a687bb9819b5fe781f704fb4b0d69a62a41", "title": "Facilitating Object Detection and Recognition through Eye Gaze", "text": "When compared to image recognition, object detection is a much more challenging task because it requires the accurate real-time localization of an object in the target image. In interaction scenarios, this pipeline can be simplified by incorporating the users\u2019 point of regard. Wearable eye trackers can estimate the gaze direction, but lack own processing capabilities. We enable mobile gaze-aware applications by developing an open-source platform which supports mobile eye tracking based on the Pupil headset and a smartphone running Android OS. Through our platform, we offer researchers and developers a rapid prototyping environment for gaze-enabled applications. We describe the concept, our current progress, and research implications. ACM Classification"}
{"_id": "81cb29ece650d391e61cf0c05096c6a14b87ee52", "title": "DataVizard: Recommending Visual Presentations for Structured Data", "text": "Selecting the appropriate visual presentation of the data such that it not only preserves the semantics but also provides an intuitive summary of the data is an important, often the final step of data analytics. Unfortunately, this is also a step involving significant human effort starting from selection of groups of columns in the structured results from analytics stages, to the selection of right visualization by experimenting with various alternatives. In this paper, we describe our DataVizard system aimed at reducing this overhead by automatically recommending the most appropriate visual presentation for the structured result. Specifically, we consider the following two scenarios: first, when one needs to visualize the results of a structured query such as SQL; and the second, when one has acquired a data table with an associated short description (e.g., tables from the Web). Using a corpus of real-world database queries (and their results) and a number of statistical tables crawled from the Web, we show that DataVizard is capable of recommending visual presentations with high accuracy."}
{"_id": "491bf29f83d63b2a719b03f940e32af6df6990d4", "title": "Memorability of natural scenes: The role of attention", "text": "The image memorability consists in the faculty of an image to be recalled after a period of time. Recently, the memorability of an image database was measured and some factors responsible for this memorability were highlighted. In this paper, we investigate the role of visual attention in image memorability around two axis. The first one is experimental and uses results of eye-tracking performed on a set of images of different memorability scores. The second investigation axis is predictive and we show that attention-related features can advantageously replace low-level features in image memorability prediction. From our work it appears that the role of visual attention is important and should be more taken into account along with other low-level features."}
{"_id": "a6069e65c318f07d2b35934b0d4109148f190342", "title": "Real-time garbage collection for flash-memory storage systems of real-time embedded systems", "text": "Flash-memory technology is becoming critical in building embedded systems applications because of its shock-resistant, power economic, and nonvolatile nature. With the recent technology breakthroughs in both capacity and reliability, flash-memory storage systems are now very popular in many types of embedded systems. However, because flash memory is a write-once and bulk-erase medium, we need a translation layer and a garbage-collection mechanism to provide applications a transparent storage service. In the past work, various techniques were introduced to improve the garbage-collection mechanism. These techniques aimed at both performance and endurance issues, but they all failed in providing applications a guaranteed performance. In this paper, we propose a real-time garbage-collection mechanism, which provides a guaranteed performance, for hard real-time systems. On the other hand, the proposed mechanism supports non-real-time tasks so that the potential bandwidth of the storage system can be fully utilized. A wear-leveling method, which is executed as a non-real-time service, is presented to resolve the endurance problem of flash memory. The capability of the proposed mechanism is demonstrated by a series of experiments over our system prototype."}
{"_id": "c5bc433fed9c286540c4cc3bd7cca3114f223f57", "title": "Optimization of the internal grid of an offshore wind farm using Genetic algorithm", "text": "Offshore wind energy is a promising solution thanks to its best performance of production. However its development leads to many technical and especially economic challenges among them the electrical grid topology attracts a large investment. In this paper, our objective is to minimize a part of this total investment which represents the initial cost of the middle voltage cables in the internal network with different cross sections. An approach based on Genetic Algorithm is developed to find the best topology to connect all wind turbines and substations. The proposed model initially accepts all possible configurations: radial, star, ring, and tree. The results prove that the optimization model can be used for designing the electrical architecture of the internal network of an offshore wind farm."}
{"_id": "e50812a804c7b1e29e18adcf32cf3d314bde2457", "title": "A flexible and low-cost tactile sensor for robotic applications", "text": "For humans, the sense of touch is essential for interactions with the environment. With robots slowly starting to emerge as a human-centric technology, tactile information becomes increasingly important. Tactile sensors enable robots to gain information about contacts with the environment, which is required for safe interaction with humans or tactile exploration. Many sensor designs for the application on robots have been presented in literature so far. However, most of them are complex in their design and require high-tech tools for their manufacturing. In this paper, we present a novel design for a tactile sensor that can be built with low-cost, widely available materials, and low effort. The sensor is flexible, may be cut to arbitrary shapes and may have a customized spatial resolution. Both pressure distribution and absolute pressure on the sensor are detected. An experimental evaluation of our design shows low detection thresholds as well as high sensor accuracy. We seek to accelerate research on tactile feedback methods with this easy to replicate design. We consider our design a starting point for the integration of multiple sensor units to a large-scale tactile skin for robots."}
{"_id": "c42f2784547626040b00d96e1f0f08166a184e89", "title": "A Dictionary-based Approach to Racism Detection in Dutch Social Media", "text": "We present a dictionary-based approach to racism detection in Dutch social media comments, which were retrieved from tw o public Belgian social media sites likely to attract racist reactio ns. These comments were labeled as racist or non-racist by mu ltiple annotators. For our approach, three discourse dictionaries were create d: first, we created a dictionary by retrieving possibly raci st and more neutral terms from the training data, and then augmenting these with more general words to remove some bias. A second dictionary w s created through automatic expansion using a word2vec model trained on a large corpus of general Dutch text. Finall y, third dictionary was created by manually filtering out incorrect expansions. We trained multiple Support Vector Machines, using the dist ribu ion of words over the different categories in the dictionaries as f eatures. The best-performing model used the manually clean ed dictionary and obtained an F-score of 0.46 for the racist class on a test s et consisting of unseen Dutch comments, retrieved from the s ame sites used for the training set. The automated expansion of the dic tionary only slightly boosted the model\u2019s performance, and this increase in performance was not statistically significant. The fact t hat the coverage of the expanded dictionaries did increase i ndicates that the words that were automatically added did occur in the corpus, b t were not able to meaningfully impact performance. The di ctionaries, code, and the procedure for requesting the corpus are availa ble at:https://github.com/clips/hades."}
{"_id": "9cd20a7f06d959342c758aa6ab8c09bb22dfb3e6", "title": "Online anomaly detection with concept drift adaptation using recurrent neural networks", "text": "Anomaly detection in time series is an important task with several practical applications. The common approach of training one model in an offline manner using historical data is likely to fail under dynamically changing and non-stationary environments where the definition of normal behavior changes over time making the model irrelevant and ineffective. In this paper, we describe a temporal model based on Recurrent Neural Networks (RNNs) for time series anomaly detection to address challenges posed by sudden or regular changes in normal behavior. The model is trained incrementally as new data becomes available, and is capable of adapting to the changes in the data distribution. RNN is used to make multi-step predictions of the time series, and the prediction errors are used to update the RNN model as well as detect anomalies and change points. Large prediction error is used to indicate anomalous behavior or a change (drift) in normal behavior. Further, the prediction errors are also used to update the RNN model in such a way that short term anomalies or outliers do not lead to a drastic change in the model parameters whereas high prediction errors over a period of time lead to significant updates in the model parameters such that the model rapidly adapts to the new norm. We demonstrate the efficacy of the proposed approach on a diverse set of synthetic, publicly available and proprietary real-world datasets."}
{"_id": "1355972384f2458f32d339c0304862ac24259aa1", "title": "Self-nonself discrimination in a computer", "text": "Current commercial virus detectors are based on The problem of protecting computer systems can be viewed generally as the problem of learning to distinguish relf from other. We describe a method for change detection which is based on the generation of T cella in the immune syetem. Mathematical analysis reveals computational costs of the system, and preliminary experiments illustrate how the method might be applied to the problem of computer viruses."}
{"_id": "23712e35d556e0929c6519c3d5553b896b65747d", "title": "Towards a taxonomy of intrusion-detection systems", "text": "Intrusion-detection systems aim at detecting attacks against computer systems and networks, or against information systems in general, as it is difficult to provide provably secure information systems and maintain them in such a secure state for their entire lifetime and for every utilization. Sometimes, legacy or operational constraints do not even allow a fully secure information system to be realized at all. Therefore, the task of intrusion-detection systems is to monitor the usage of such systems and to detect the apparition of insecure states. They detect attempts and active misuse by legitimate users of the information systems or external parties to abuse their privileges or exploit security vulnerabilities. In this paper, we introduce a taxonomy of intrusion-detection systems that highlights the various aspects of this area. This taxonomy defines families of intrusion-detection systems according to their properties. It is illustrated by numerous examples from past and current projects. q 1999 Elsevier Science B.V. All rights reserved."}
{"_id": "fa4f763cf42d5c5620cedaa0fa5ce9195ff750c3", "title": "Artificial immune systems - a new computational intelligence paradigm", "text": "Give us 5 minutes and we will show you the best book to read today. This is it, the artificial immune systems a new computational intelligence paradigm that will be your best choice for better reading book. Your five times will not spend wasted by reading this website. You can take the book as a source to make better concept. Referring the books that can be situated with your needs is sometime difficult. But here, this is so easy. You can find the best thing of book that you can read."}
{"_id": "238d215ea37ed8dda2b3f236304914e3f7d72829", "title": "The COPS Security Checker System", "text": "In the past several years, there have been a large number of published works that have graphically described a wide variety of security problems particular to UNIX. Without fail, the same problems have been discussed over and over again, describing the problems with sum (set user ID) programs, improper file permissions, and bad passwords (to name a few). There are two common characteristics to each of these problems: first, they are usually simple to correct, if found; second, they are fairly easy to detect. Since almost all systems have fairly equivalent problems, it seems appropriate to create a tool to detect potential security problems as an aid to system administrators. This paper describes one such tool: Cops. (Computerized Oracle and Password System) is a freely-available,"}
{"_id": "4e85503ef0e1559bc197bd9de0625b3792dcaa9b", "title": "NetSTAT: A Network-Based Intrusion Detection Approach", "text": "Network-based attacks have become common and sophisticated. For this reason, intrusion detection systems are now shifting their focus from the hosts and their operating systems to the network itself. Network-based intrusion detection is challenging because network auditing produces large amounts of data, and different events related to a single intrusion may be visible in different places on the network. This paper presents NetSTAT, a new approach to network intrusion detection. By using a formal model of both the network and the attacks, NetSTAT is able to determine which network events have to be monitored and where they can be monitored."}
{"_id": "af359ac7cb689315fa85690f654f34c3409daf89", "title": "The Impact of Indoor Lighting on Students \u2019 Learning Performance in Learning Environments : A knowledge internalization perspective", "text": "The purpose of this study is to identify the influence of indoor lighting on students\u2019 learning performance within learning environments from knowledge internalization perspective. This study is a comprehensive review of literatures base on the influence of indoor lighting on people\u2019s productivity and performance especially students\u2019 learning performance. The result that comes from this study shows that it is essential to improve lighting in learning environments to enhance students\u2019 learning performance and also motivate them to learn more. In this study the researchers utilized Pulay (2010) survey and measured the influence of lighting on students\u2019 learning performance. Utilizing survey data collected from 150 students from Alpha course in Malaysia. This study found significant impact between lighting quality and students\u2019 learning performance this finding is also supported by interview from two experts."}
{"_id": "cdbed1c5971d20315f1e1980bf90b3e73e774b94", "title": "Schema Theory for Semantic Link Network", "text": "Semantic link network (SLN) is a loosely coupled semantic data model for managing Web resources. Its nodes can be any types of resources. Its edges can be any semantic relations. Potential semantic links can be derived out according to reasoning rules on semantic relations. This paper proposes the schema theory for SLN including the concepts, rule-constraint normal forms and relevant algorithms. The theory provides the basis for normalized management of SLN and its applications. A case study demonstrates the proposed theory."}
{"_id": "dd10da216b3f3bd2e2cf1fdbeb54c7697ba1dba9", "title": "Design considerations on low voltage synchronous power MOSFETs with monolithically integrated gate voltage pull-down circuitry", "text": "In this paper, a monolithically integrated gate voltage pull-down circuitry is presented to avoid the unintentional C\u00b7dV/dt induced turn-on. The concept of a low threshold voltage MOSFET with this integrated gate voltage pull-down circuitry is introduced as a contributing factor to the next generation high frequency DC-DC converter efficiency improvement. Design considerations on this new device and influences of critical design parameters on device/circuit performance will be fully discussed. In synchronous buck application, this integrated power module achieves more than 2% efficiency improvement over reference solution at high operation frequency (1MHz) under 19V input and 1.3V output condition."}
{"_id": "86335522e84bd14bb53fc23c265d7fed614f3cc4", "title": "A Framework for Dynamic Image Sampling Based on Supervised Learning", "text": "Sparse sampling schemes can broadly be classified into two main categories: static sampling where the sampling pattern is predetermined, and dynamic sampling where each new measurement location is selected based on information obtained from previous measurements. Dynamic sampling methods are particularly appropriate for pointwise imaging methods, in which pixels are measured sequentially in arbitrary order. Examples of pointwise imaging schemes include certain implementations of atomic force microscopy, electron back scatter diffraction, and synchrotron X-ray imaging. In these pointwise imaging applications, dynamic sparse sampling methods have the potential to dramatically reduce the number of measurements required to achieve a desired level of fidelity. However, the existing dynamic sampling methods tend to be computationally expensive and are, therefore, too slow for many practical applications. In this paper, we present a framework for dynamic sampling based on machine learning techniques, which we call a supervised learning approach for dynamic sampling (SLADS). In each step of SLADS, the objective is to find the pixel that maximizes the expected reduction in distortion (ERD) given previous measurements. SLADS is fast because we use a simple regression function to compute the ERD, and it is accurate because the regression function is trained using datasets that are representative of the specific application. In addition, we introduce an approximate method to terminate dynamic sampling at a desired level of distortion. We then extend our algorithm to incorporate multiple measurements at each step, which we call groupwise SLADS. Finally, we present results on computationally generated synthetic data and experimentally collected data to demonstrate a dramatic improvement over state-of-the-art static sampling methods"}
{"_id": "44afe560d0926380d666ec0b1dd4d6b12e077f0a", "title": "High-Frame-Rate Synthetic Aperture Ultrasound Imaging Using Mismatched Coded Excitation Waveform Engineering: A Feasibility Study", "text": "Mismatched coded excitation (CE) can be employed to increase the frame rate of synthetic aperture ultrasound imaging. The high autocorrelation and low cross correlation (CC) of transmitted signals enables the identification and separation of signal sources at the receiver. Thus, the method provides B-mode imaging with simultaneous transmission from several elements and capability of spatial decoding of the transmitted signals, which makes the imaging process equivalent to consecutive transmissions. Each transmission generates its own image and the combination of all the images results in an image with a high lateral resolution. In this paper, we introduce two different methods for generating multiple mismatched CEs with an identical frequency bandwidth and code length. Therefore, the proposed families of mismatched CEs are able to generate similar resolutions and signal-to-noise ratios. The application of these methods is demonstrated experimentally. Furthermore, several techniques are suggested that can be used to reduce the CC between the mismatched codes."}
{"_id": "66c54b8ba52a6eae6727354dedaacbab1dd5a8ea", "title": "Open-Domain Neural Dialogue Systems", "text": "Until recently, the goal of developing opendomain dialogue systems that not only emulate human conversation but fulfill complex tasks, such as travel planning, seemed elusive. However, we start to observe promising results in the last few years as the large amount of conversation data is available for training and the breakthroughs in deep learning and reinforcement learning are applied to dialogue. In this tutorial, we start with a brief introduction to the history of dialogue research. Then, we describe in detail the deep learning and reinforcement learning technologies that have been developed for two types of dialogue systems. First is a task-oriented dialogue system that can help users accomplish tasks, ranging from meeting scheduling to vacation planning. Second is a social bot that can converse seamlessly and appropriately with humans. In the final part of the tutorial, we review attempts to developing opendomain neural dialogue systems by combining the strengths of task-oriented dialogue systems and social bots. The tutorial material is available at http://opendialogue.miulab.tw."}
{"_id": "06d11bdd79c002f7cfdf9bcfa181f25c96f6009a", "title": "Rationalizing Neural Predictions", "text": "Prediction without justification has limited applicability. As a remedy, we learn to extract pieces of input text as justifications \u2013 rationales \u2013 that are tailored to be short and coherent, yet sufficient for making the same prediction. Our approach combines two modular components, generator and encoder, which are trained to operate well together. The generator specifies a distribution over text fragments as candidate rationales and these are passed through the encoder for prediction. Rationales are never given during training. Instead, the model is regularized by desiderata for rationales. We evaluate the approach on multi-aspect sentiment analysis against manually annotated test cases. Our approach outperforms attention-based baseline by a significant margin. We also successfully illustrate the method on the question retrieval task.1"}
{"_id": "a9411793ca1f4128c0d78c5c3b1d150da1cd15e3", "title": "Discovering Opinion Changes in Online Reviews via Learning Fine-Grained Sentiments", "text": "Recent years have shown rapid advancement in understanding consumers' behaviors and opinions through collecting and analyzing data from online social media platforms. While abundant research has been undertaken to detect users' opinions, few tools are available for understanding events where user opinions change drastically. In this paper, we propose a novel framework for discovering consumer opinion changing events. To detect subtle opinion changes over time, we first develop a novel fine-grained sentiment classification method by leveraging word embedding and convolutional neural networks. The method learns sentiment-enhanced word embedding, both for words and phrases, to capture their corresponding syntactic, semantic, and sentimental characteristics. We then propose an opinion shift detection algorithm that is based on the Kullback-Leibler divergence of temporal opinion category distributions, and conducted experiments on online reviews from Yelp. The results show that the proposed approach can effectively classify fine-grained sentiments of reviews and can discover key moments that correspond to consumer opinion shifts in response to events that relate to a product or service."}
{"_id": "dbb4035111c12f4bce971bd4c8086e9d62c9eb97", "title": "Multi-GCN: Graph Convolutional Networks for Multi-View Networks, with Applications to Global Poverty", "text": "With the rapid expansion of mobile phone networks in developing countries, large-scale graph machine learning has gained sudden relevance in the study of global poverty. Recent applications range from humanitarian response and poverty estimation to urban planning and epidemic containment. Yet the vast majority of computational tools and algorithms used in these applications do not account for the multi-view nature of social networks: people are related in myriad ways, but most graph learning models treat relations as binary. In this paper, we develop a graph-based convolutional network for learning on multi-view networks. We show that this method outperforms state-of-the-art semi-supervised learning algorithms on three different prediction tasks using mobile phone datasets from three different developing countries. We also show that, while designed specifically for use in poverty research, the algorithm also outperforms existing benchmarks on a broader set of learning tasks on multi-view networks, including node labelling in citation networks."}
{"_id": "8f9b34e29a3636ca05eb18ff58bf47d9a3517c82", "title": "Coupling CCG and Hybrid Logic Dependency Semantics", "text": "Categorial grammar has traditionally used the \u03bb-calculus to represent meaning. We present an alternative, dependency-based perspective on linguistic meaning and situate it in the computational setting. This perspective is formalized in terms of hybrid logic and has a rich yet perspicuous propositional ontology that enables a wide variety of semantic phenomena to be represented in a single meaning formalism. Finally, we show how we can couple this formalization to Combinatory Categorial Grammar to produce interpretations compositionally."}
{"_id": "3e29cdcdb12ffdb80a734d12c905c1df43e0b6a4", "title": "Secure data deduplication", "text": "As the world moves to digital storage for archival purposes, there is an increasing demand for systems that can provide secure data storage in a cost-effective manner. By identifying common chunks of data both within and between files and storing them only once, deduplication can yield cost savings by increasing the utility of a given amount of storage. Unfortunately, deduplication exploits identical content, while encryption attempts to make all content appear random; the same content encrypted with two different keys results in very different ciphertext. Thus, combining the space efficiency of deduplication with the secrecy aspects of encryption is problematic.\n We have developed a solution that provides both data security and space efficiency in single-server storage and distributed storage systems. Encryption keys are generated in a consistent manner from the chunk data; thus, identical chunks will always encrypt to the same ciphertext. Furthermore, the keys cannot be deduced from the encrypted chunk data. Since the information each user needs to access and decrypt the chunks that make up a file is encrypted using a key known only to the user, even a full compromise of the system cannot reveal which chunks are used by which users."}
{"_id": "30b5b32f0d18afd441a8b5ebac3b636c6d3173d9", "title": "Robust Median Reversion Strategy for Online Portfolio Selection", "text": "Online portfolio selection has attracted increasing attention from data mining and machine learning communities in recent years. An important theory in financial markets is mean reversion, which plays a critical role in some state-of-the-art portfolio selection strategies. Although existing mean reversion strategies have been shown to achieve good empirical performance on certain datasets, they seldom carefully deal with noise and outliers in the data, leading to suboptimal portfolios, and consequently yielding poor performance in practice. In this paper, we propose to exploit the reversion phenomenon by using robust $L_1$ -median estimators, and design a novel online portfolio selection strategy named \u201cRobust Median Reversion\u201d (RMR), which constructs optimal portfolios based on the improved reversion estimator. We examine the performance of the proposed algorithms on various real markets with extensive experiments. Empirical results show that RMR can overcome the drawbacks of existing mean reversion algorithms and achieve significantly better results. Finally, RMR runs in linear time, and thus is suitable for large-scale real-time algorithmic trading applications."}
{"_id": "3da49a040788459dc85673285bb7a8023f78de6a", "title": "Activities and well-being in older age: effects of self-concept and educational attainment.", "text": "The positive effect of activities on well-being is proposed to be mediated by self-conceptualizations and facilitated by socioeconomic status. The hypothesized processes were estimated with LISREL VIII using data from a large cross-sectional survey with a sample of 679 adults aged 65 and older who were representative of older adults living in the Detroit area. Findings indicate that the frequency of performing both leisure and productive activities yields an effect on physical health and depression and that these effects are mediated in part by a sense of self as agentic, but less clearly by a sense of self as social. Furthermore, socioeconomic status, operationalized as formal educational attainment, facilitates the effect of leisure to a greater extent than that of productive activities."}
{"_id": "5f6f332713f3c7ba52707c7a5a3ff584d60a2720", "title": "A Hybrid Algorithm for LTL Games", "text": "In the game theoretic approach to the synthesis of reactive systems, specifications are often given in linear time logic (LTL). Computing a winning strategy to an infinite game whose winning condition is the set of LTL properties is the main step in obtaining an implementation. We present a practical hybrid algorithm\u2014a combination of symbolic and explicit algorithm\u2014for the computation of winning strategies for unrestricted LTL games that we have successfully applied to synthesize reactive systems with up to 10 states."}
{"_id": "a24d373bd33788640e95c117ebd5d78c88e8ce92", "title": "An IoT-Oriented Data Storage Framework in Cloud Computing Platform", "text": "The Internet of Things (IoT) has provided a promising opportunity to build powerful industrial systems and applications by leveraging the growing ubiquity of Radio Frequency IDentification (RFID) and wireless sensors devices. Benefiting from RFID and sensor network technology, common physical objects can be connected, and are able to be monitored and managed by a single system. Such a network brings a series of challenges for data storage and processing in a cloud platform. IoT data can be generated quite rapidly, the volume of data can be huge and the types of data can be various. In order to address these potential problems, this paper proposes a data storage framework not only enabling efficient storing of massive IoT data, but also integrating both structured and unstructured data. This data storage framework is able to combine and extend multiple databases and Hadoop to store and manage diverse types of data collected by sensors and RFID readers. In addition, some components are developed to extend the Hadoop to realize a distributed file repository, which is able to process massive unstructured files efficiently. A prototype system based on the proposed framework is also developed to illustrate the framework's effectiveness."}
{"_id": "65130672f6741b16b0d76fde6ed6c686d5363f4b", "title": "Maintaining Integrity: How Nurses Navigate Boundaries in Pediatric Palliative Care.", "text": "PURPOSE\nTo explore how nurses manage personal and professional boundaries in caring for seriously ill children and their families.\n\n\nDESIGN AND METHODS\nUsing a constructivist grounded theory approach, a convenience sample of 18 registered nurses from four practice sites was interviewed using a semi-structured interview guide.\n\n\nRESULTS\nNurses across the sites engaged in a process of maintaining integrity whereby they integrated two competing, yet essential, aspects of their nursing role - behaving professionally and connecting personally. When skillful in both aspects, nurses were satisfied that they provided high-quality, family-centered care to children and families within a clearly defined therapeutic relationship. At times, tension existed between these two aspects and nurses attempted to mitigate the tension. Unsuccessful mitigation attempts led to compromised integrity characterized by specific behavioral and emotional indicators. Successfully mitigating the tension with strategies that prioritized their own needs and healing, nurses eventually restored integrity. Maintaining integrity involved a continuous effort to preserve completeness of both oneself and one's nursing practice.\n\n\nCONCLUSIONS\nStudy findings provide a theoretical conceptualization to describe the process nurses use in navigating boundaries and contribute to an understanding for how this specialized area of care impacts health care providers.\n\n\nPRACTICE IMPLICATIONS\nWork environments can better address the challenges of navigating boundaries through offering resources and support for nurses' emotional responses to caring for seriously ill children. Future research can further refine and expand the theoretical conceptualization of maintaining integrity presented in this paper and its potential applicability to other nursing specialties."}
{"_id": "daaaa816ec61677fd88b3996889a00a6d8296290", "title": "Comparing Two IRT Models for Conjunctive Skills", "text": "A step in ITS often involve multiple skills. Thus a step requiring a conjunction of skills is harder than steps that require requiring each individual skill only. We developed two Item-Response Models \u2013 Additive Factor Model (AFM) and Conjunctive Factor Model (CFM) \u2013 to model the conjunctive skills in the student data sets. Both models are compared on simulated data sets and a real assessment data set. We showed that CFM was as good as or better than AFM in the mean cross validation errors on the simulated data. In the real data set CFM is not clearly better. However, AFM is essentially performing as a conjunctive model."}
{"_id": "5b5f553d122c1b042d49e7c011915382b929e1ea", "title": "Using temporal IDF for efficient novelty detection in text streams", "text": "Novelty detection in text streams is a challenging task that emerges in quite a few different scenarios, ranging from email thread filtering to RSS news feed recommendation on a smartphone. An efficient novelty detection algorithm can save the user a great deal of time and resources when browsing through relevant yet usually previously-seen content. Most of the recent research on detection of novel documents in text streams has been building upon either geometric distances or distributional similarities, with the former typically performing better but being much slower due to the need of comparing an incoming document with all the previously-seen ones. In this paper, we propose a new approach to novelty detection in text streams. We describe a resource-aware mechanism that is able to handle massive text streams such as the ones present today thanks to the burst of social media and the emergence of the Web as the main source of information. We capitalize on the historical Inverse Document Frequency (IDF) that was known for capturing well term specificity and we show that it can be used successfully at the document level as a measure of document novelty. This enables us to avoid similarity comparisons with previous documents in the text stream, thus scaling better and leading to faster execution times. Moreover, as the collection of documents evolves over time, we use a temporal variant of IDF not only to maintain an efficient representation of what has already been seen but also to decay the document frequencies as the time goes by. We evaluate the performance of the proposed approach on a real-world news articles dataset created for this task. We examine an exhaustive number of variants of the model and compare them to several commonly used baselines that rely on geometric distances. The results show that the proposed method outperforms all of the baselines while managing to operate efficiently in terms of time complexity and memory usage, which are of great importance in a mobile setting scenario."}
{"_id": "4d06c2e64d76b7bccb659ea7a71ccac1574e13ac", "title": "Applications of SAT Solvers to AES Key Recovery from Decayed Key Schedule Images", "text": "Cold boot attack is a side channel attack which exploits the data remanence property of random access memory (RAM) to retrieve its contents which remain readable shortly after its power has been removed. Given the nature of the cold boot attack, only a corrupted image of the memory contents will be available to the attacker. In this paper, we investigate the use of an off-the-shelf SAT solver, CryptoMinSat, to improve the key recovery of the AES-128 key schedules from its corresponding decayed memory images. By exploiting the asymmetric decay of the memory images and the redundancy of key material inherent in the AES key schedule, rectifying the faults in the corrupted memory images of the AES-128 key schedule is formulated as a Boolean satisfiability problem which can be solved efficiently for relatively very large decay factors. Our experimental results show that this approach improves upon the previously known results."}
{"_id": "d4b68acdbe65c2520fddf1b3c92268c2f7a68159", "title": "Improved Shortest Path Maps with GPU Shaders", "text": "We present in this paper several improvements for computing shortest path maps using OpenGL shaders [1]. The approach explores GPU rasterization as a way to propagate optimal costs on a polygonal 2D environment, producing shortest path maps which can efficiently be queried at run-time. Our improved method relies on Compute Shaders for improved performance, does not require any CPU pre-computation, and handles shortest path maps both with source points and with line segment sources. The produced path maps partition the input environment into regions sharing a same parent point along the shortest path to the closest source point or segment source. Our method produces paths with global optimality, a characteristic which has been mostly neglected in animated virtual environments. The proposed approach is particularly suitable for the animation of multiple agents moving toward the entrances or exits of a virtual environment, a situation which is efficiently represented with the proposed path maps."}
{"_id": "9f5ee0f6fdfd37a801df3c4e96d7e56391744233", "title": "The Cyclic Towers of Hanoi", "text": "The famous Towers of Hanoi puzzle consists of 3 pegs (A, B, C) on one of which (A) are stacked n rings of different sizes, each ring resting on a larger ring. The objective is to move the n rings one by one until they are all stacked on another peg (B) in such a way that no ring is ever placed on a smaller ring; the other peg (C) can be used as workspace. The problem has tong been a favourite iir programming courses as one which admits a concise recursive solution. This solution hinges on the observation that, when the largest ring is moved from A to B, the n 1 remaining rings must all be on peg C. This immediately leads to the recursive procedure"}
{"_id": "310ec7796eeca484d734399d9979e8f74d7d8ed2", "title": "Shakeout: A New Regularized Deep Neural Network Training Scheme", "text": "Recent years have witnessed the success of deep neural networks in dealing with a plenty of practical problems. The invention of effective training techniques largely contributes to this success. The so-called \"Dropout\" training scheme is one of the most powerful tool to reduce over-fitting. From the statistic point of view, Dropout works by implicitly imposing an L2 regularizer on the weights. In this paper, we present a new training scheme: Shakeout. Instead of randomly discarding units as Dropout does at the training stage, our method randomly chooses to enhance or inverse the contributions of each unit to the next layer. We show that our scheme leads to a combination of L1 regularization and L2 regularization imposed on the weights, which has been proved effective by the Elastic Net models in practice. We have empirically evaluated the Shakeout scheme and demonstrated that sparse network weights are obtained via Shakeout training. Our classification experiments on real-life image datasets MNIST and CIFAR10 show that Shakeout deals with over-fitting effectively."}
{"_id": "5cf3db1cb294afdf639324e2ab0a326aa9f78473", "title": "Big Data Security: Survey on Frameworks and Algorithms", "text": "Technology today has progressed to an extent wherein collection of data is possible for every granular aspect of a business, in real time. Electronic devices, power grids and modern software all generate huge volumes of data, which is in the form of petabytes, exabytes and zettabytes. It is important to secure existing big data environments due to increasing threats of breaches and leaks from confidential data and increased adoption of cloud technologies due to the ability of buying processing power and storage on-demand. This is exposing traditional and new data warehouses and repositories to the outside world and at the risk of being compromised to hackers and malicious outsiders and insiders. In this paper the current big data scenario has been summarized along with challenges faced and security issues that need attention. Also some existing approaches have been described to illustrate current and standard directions to solving the issues."}
{"_id": "8c366344669769983f0c238a5f0548cac2afcc43", "title": "Scribbles to Vectors: Preparation of Scribble Drawings for CAD Interpretation", "text": "This paper describes the work carried out on off-line paper based scribbles such that they can be incorporated into a sketch-based interface without forcing designers to change their natural drawing habits. In this work, the scribbled drawings are converted into a vectorial format which can be recognized by a CAD system. This is achieved by using pattern analysis techniques, namely the Gabor filter to simplify the scribbled drawing. Vector line are then extracted from the resulting drawing by means of Kalman filtering."}
{"_id": "15bdf3f1412cd762c40ad41dee5485de38ab0120", "title": "On the Robust Control of Buck-Converter DC-Motor Combinations", "text": "The concepts of active disturbance rejection control and flatness-based control are used in this paper to regulate the response of a dc-to-dc buck power converter affected by unknown, exogenous, time-varying load current demands. The generalized proportional integral observer is used to estimate and cancel the time-varying disturbance signals. A key element in the proposed control for the buck converter-dc motor combination is that even if the control input gain is imprecisely known, the control strategy still provides proper regulation and tracking. The robustness of this method is further extended to the case of a double buck topology driving two different dc motors affected by different load torque disturbances. Simulation results are provided."}
{"_id": "2d78fbe680b4501b0c21fbd49eb7652592cf077d", "title": "Comparative study of Proportional Integral and Backstepping controller for Buck converter", "text": "This paper describes the comparative study of Proportional Integral (PI) and Backstepping controller for Buck converter with R-load and DC motor. Backstepping approach is an efficient control design procedure for both regulation and tracking problems. This approach is based upon a systematic procedure which guarantees global regulation and tracking. The proposed control scheme is to stabilize the output (voltage or speed) and tracking error to converge zero asymptotically. Buck converter system is simulated in MATLAB, using state reconstruction techniques. Simulation results of buck converter with R-load and PMDC motor reveals that, settling time of Backstepping controller is less than PI controller"}
{"_id": "bba5386f9210f2996d403f09224926d860c763d7", "title": "Robust Passivity-Based Control of a Buck\u2013Boost-Converter/DC-Motor System: An Active Disturbance Rejection Approach", "text": "This paper presents an active disturbance rejection (ADR) approach for the control of a buck-boost-converter feeding a dc motor. The presence of arbitrary, time-varying, load torque inputs on the dc motor and the lack of direct measurability of the motor's angular velocity variable prompts a generalized proportional integral (GPI) observer-based ADR controller which is synthesized on the basis of passivity considerations. The GPI observer simultaneously estimates the angular velocity and the exogenous disturbance torque input in an on-line cancellation scheme, known as the ADR control. The proposed control scheme is thus a sensorless one with robustness features added to the traditional energy shaping plus damping injection methodology. The discrete switching control realization of the designed continuous feedback control law is accomplished by means of a traditional PWM-modulation scheme. Additionally, an input to state stability property of the closed-loop system is established. Experimental and simulation results are provided."}
{"_id": "d8ca9a094f56e3fe542269ea272b46f5e46bdd99", "title": "Closed-Loop Analysis and Cascade Control of a Nonminimum Phase Boost Converter", "text": "In this paper, a cascade controller is designed and analyzed for a boost converter. The fast inner current loop uses sliding-mode control. The slow outer voltage loop uses the proportional-integral (PI) control. Stability analysis and selection of PI gains are based on the nonlinear closed-loop error dynamics. It is proven that the closed-loop system has a nonminimum phase behavior. The voltage transients and reference voltage are predictable. The current ripple and system sensitivity are studied. The controller is validated by a simulation circuit with nonideal circuit parameters, different circuit parameters, and various maximum switching frequencies. The simulation results show that the reference output voltage is well tracked under parametric changes, system uncertainties, or external disturbances with fast dynamic transients, confirming the validity of the proposed controller."}
{"_id": "06d22950a79a839d864b575569a0de91ded33135", "title": "A general approach to control a Positive Buck-Boost converter to achieve robustness against input voltage fluctuations and load changes", "text": "A positive buck-boost converter is a known DC- DC converter which may be controlled to act as buck or boost converter with same polarity of the input voltage. This converter has four switching states which include all the switching states of the above mentioned DC-DC converters. In addition there is one switching state which provides a degree of freedom for the positive buck-boost converter in comparison to the buck, boost, and inverting buck-boost converters. In other words the positive buck- boost converter shows a higher level of flexibility for its inductor current control compared to the other DC-DC converters. In this paper this extra degree of freedom is utilised to increase the robustness against input voltage fluctuations and load changes. To address this capacity of the positive buck-boost converter, two different control strategies are proposed which control the inductor current and output voltage against any fluctuations in input voltage and load changes. Mathematical analysis for dynamic and steady state conditions are presented in this paper and simulation results verify the proposed method."}
{"_id": "8b0e6c02a49dcd9ef946c2af4f2f9290b8e65b2f", "title": "Wideband Millimeter-Wave Surface Micromachined Tapered Slot Antenna", "text": "A millimeter-wave surface micromachined tapered slot antenna (TSA) fed by an air-filled rectangular coaxial line is proposed. Detailed parametric study is conducted to determine the effects of the TSA's key structural features on its VSWR and gain. Selected exponential taper with determined growth rate and corrugation length resulted in an antenna with VSWR <; 2.5, gain > 2.75 dBi, and stable patterns over a 43-140-GHz range. The TSA is fabricated, and good correlation between modeling and W-band measurements confirms its wideband performance."}
{"_id": "60bbeedf201a2fcdf9efac19ff32aafe2e33b606", "title": "The genomic landscapes of human breast and colorectal cancers.", "text": "Human cancer is caused by the accumulation of mutations in oncogenes and tumor suppressor genes. To catalog the genetic changes that occur during tumorigenesis, we isolated DNA from 11 breast and 11 colorectal tumors and determined the sequences of the genes in the Reference Sequence database in these samples. Based on analysis of exons representing 20,857 transcripts from 18,191 genes, we conclude that the genomic landscapes of breast and colorectal cancers are composed of a handful of commonly mutated gene \"mountains\" and a much larger number of gene \"hills\" that are mutated at low frequency. We describe statistical and bioinformatic tools that may help identify mutations with a role in tumorigenesis. These results have implications for understanding the nature and heterogeneity of human cancers and for using personal genomics for tumor diagnosis and therapy."}
{"_id": "818c13721db30a435044b37014fe7077e5a8a587", "title": "Incorporating partitioning and parallel plans into the SCOPE optimizer", "text": "Massive data analysis on large clusters presents new opportunities and challenges for query optimization. Data partitioning is crucial to performance in this environment. However, data repartitioning is a very expensive operation so minimizing the number of such operations can yield very significant performance improvements. A query optimizer for this environment must therefore be able to reason about data partitioning including its interaction with sorting and grouping. SCOPE is a SQL-like scripting language used at Microsoft for massive data analysis. A transformation-based optimizer is responsible for converting scripts into efficient execution plans for the Cosmos distributed computing platform. In this paper, we describe how reasoning about data partitioning is incorporated into the SCOPE optimizer. We show how relational operators affect partitioning, sorting and grouping properties and describe how the optimizer reasons about and exploits such properties to avoid unnecessary operations. In most optimizers, consideration of parallel plans is an afterthought done in a postprocessing step. Reasoning about partitioning enables the SCOPE optimizer to fully integrate consideration of parallel, serial and mixed plans into the cost-based optimization. The benefits are illustrated by showing the variety of plans enabled by our approach."}
{"_id": "8420f2f686890d9675538ec831dbb43568af1cb3", "title": "Sentiment classification of Hinglish text", "text": "In order to determine the sentiment polarity of Hinglish text written in Roman script, we experimented with different combinations of feature selection methods and a host of classifiers using term frequency-inverse document frequency feature representation. We carried out in total 840 experiments in order to determine the best classifiers for sentiment expressed in the news and Facebook comments written in Hinglish. We concluded that a triumvirate of term frequency-inverse document frequency-based feature representation, gain ratio based feature selection, and Radial Basis Function Neural Network as the best combination to classify sentiment expressed in the Hinglish text."}
{"_id": "6ffe544f38ddfdba86a83805ce807f3c8e775fd6", "title": "Multi words quran and hadith searching based on news using TF-IDF", "text": "Each week religious leaders need to give advice to their community. Religious advice matters ideally contain discussion and solution of the problem that arising in society. But the lot of religious resources that must be considered agains many arising problems make this religious task is not easy. Especially in moslem community, the religious resources are Quran and Kutubus Sitah, the six most referenced collection of Muhammad (pbuh) news (hadith). The problem that arising in society can be read from various online mass media. Doing manually, they must know the Arabic word of the problem, and make searching manually from Mu'jam, Quran and Hadith index, then write out the found verses or hadith. TF-IDF method is often used in the weighting informational retrieval and text mining. This research want to make tools that get input from mass media news, make multi words searching from database using TF-IDF (Term Frequency - Inverse Document Frequency), and give relevan verse of Quran and hadith. Top five the most relevan verse of Quran and hadith will be displayed. Justified by religious leader, application give 60% precision for Quranic verses, and 53% for hadith, with the average query time 2.706 seconds."}
{"_id": "422a675b71f8655b266524a552e0246cb29e9bd5", "title": "GALE: Geometric Active Learning for Search-Based Software Engineering", "text": "Multi-objective evolutionary algorithms (MOEAs) help software engineers find novel solutions to complex problems. When automatic tools explore too many options, they are slow to use and hard to comprehend. GALE is a near-linear time MOEA that builds a piecewise approximation to the surface of best solutions along the Pareto frontier. For each piece, GALE mutates solutions towards the better end. In numerous case studies, GALE finds comparable solutions to standard methods (NSGA-II, SPEA2) using far fewer evaluations (e.g. 20 evaluations, not 1,000). GALE is recommended when a model is expensive to evaluate, or when some audience needs to browse and understand how an MOEA has made its conclusions."}
{"_id": "a390073adc9c9d23d31404a9a8eb6dac7e684857", "title": "Local force cues for strength and stability in a distributed robotic construction system", "text": "Construction of spatially extended, self-supporting structures requires a consideration of structural stability throughout the building sequence. For collective construction systems, where independent agents act with variable order and timing under decentralized control, ensuring stability is a particularly pronounced challenge. Previous research in this area has largely neglected considering stability during the building process. Physical forces present throughout a structure may be usable as a cue to inform agent actions as well as an indirect communication mechanism (stigmergy) to coordinate their behavior, as adding material leads to redistribution of forces which then informs the addition of further material. Here we consider in simulation a system of decentralized climbing robots capable of traversing and extending a two-dimensional truss structure, and explore the use of feedback based on force sensing as a way for the swarm to anticipate and prevent structural failures. We consider a scenario in which robots are tasked with building an unsupported cantilever across a gap, as for a bridge, where the goal is for the swarm to build any stable spanning structure rather than to construct a specific predetermined blueprint. We show that access to local force measurements enables robots to build cantilevers that span significantly farther than those built by robots without access to such information. This improvement is achieved by taking measures to maintain both strength and stability, where strength is ensured by paying attention to forces during locomotion to prevent joints from breaking, and stability is maintained by looking at how loads transfer to the ground to ensure against toppling. We show that swarms that take both kinds of forces into account have improved building performance, in both structured settings with flat ground and unpredictable environments with rough terrain."}
{"_id": "511921e775ab05a1ab0770a63e57c93da51c8526", "title": "Use of AI Techniques for Residential Fire Detection in Wireless Sensor Networks", "text": "Early residential fire detection is important for prompt extinguishing and reducing damages and life losses. To detect fire, one or a combination of sensors and a detection algorithm are needed. The sensors might be part of a wireless sensor network (WSN) or work independently. The previous research in the area of fire detection using WSN has paid little or no attention to investigate the optimal set of sensors as well as use of learning mechanisms and Artificial Intelligence (AI) techniques. They have only made some assumptions on what might be considered as appropriate sensor or an arbitrary AI technique has been used. By closing the gap between traditional fire detection techniques and modern wireless sensor network capabilities, in this paper we present a guideline on choosing the most optimal sensor combinations for accurate residential fire detection. Additionally, applicability of a feed forward neural network (FFNN) and Na\u00efve Bayes Classifier is investigated and results in terms of detection rate and computational complexity are analyzed."}
{"_id": "c97ebb60531a86bea516d3582758a45ba494de10", "title": "Smart Cars on Smart Roads: An IEEE Intelligent Transportation Systems Society Update", "text": "To promote tighter collaboration between the IEEE Intelligent Transportation Systems Society and the pervasive computing research community, the authors introduce the ITS Society and present several pervasive computing-related research topics that ITS Society researchers are working on. This department is part of a special issue on Intelligent Transportation."}
{"_id": "e91196c1d0234da60314945c4812eda631004d8f", "title": "Towards Multi-Agent Communication-Based Language Learning", "text": "We propose an interactive multimodal framework for language learning. Instead of being passively exposed to large amounts of natural text, our learners (implemented as feed-forward neural networks) engage in cooperative referential games starting from a tabula rasa setup, and thus develop their own language from the need to communicate in order to succeed at the game. Preliminary experiments provide promising results, but also suggest that it is important to ensure that agents trained in this way do not develop an adhoc communication code only effective for the game they are playing."}
{"_id": "6acb911d57720367d1ae7b9bce8ab9f9dcd9aadb", "title": "A region-based image caption generator with refined descriptions", "text": "Describing the content of an image is a challenging task. To enable detailed description, it requires the detection and recognition of objects, people, relationships and associated attributes. Currently, the majority of the existing research relies on holistic techniques, which may lose details relating to important aspects in a scene. In order to deal with such a challenge, we propose a novel region-based deep learning architecture for image description generation. It employs a regional object detector, recurrent neural network (RNN)-based attribute prediction, and an encoderdecoder language generator embedded with two RNNs to produce refined and detailed descriptions of a given image. Most importantly, the proposed system focuses on a local based approach to further improve upon existing holistic methods, which relates specifically to image regions of people and objects in an image. Evaluated with the IAPR TC-12 dataset, the proposed system shows impressive performance, and outperforms state-of-the-art methods using various evaluation metrics. In particular, the proposed system shows superiority over existing methods when dealing with cross-domain indoor scene images."}
{"_id": "385467e7747904397134c3a16d8f3a417e6b16fc", "title": "3D printing: Basic concepts mathematics and technologies", "text": "3D printing is the process of being able to print any object layer by layer. But if we question this proposition, can we find any three dimensional objects that can't be printed layer by layer? To banish any disbeliefs we walked together through the mathematics that prove 3d printing is feasible for any real life object. 3d printers create three dimensional objects by building them up layer by layer. The current generation of 3d printers typically requires input from a CAD program in the form of an STL file, which defines a shape by a list of triangle vertices. The vast majority of 3d printers use two techniques, FDM (Fused Deposition Modelling) and PBP (Powder Binder Printing). One advanced form of 3d printing that has been an area of increasing scientific interest the recent years is bioprinting. Cell printers utilizing techniques similar to FDM were developed for bioprinting. These printers give us the ability to place cells in positions that mimic their respective positions in organs. Finally through series of case studies we show that 3d printers in medicine have made a massive breakthrough lately."}
{"_id": "500b7d63e64e13fa47934ec9ad20fcfe0d4c17a7", "title": "3D strip meander delay line structure for multilayer LTCC-based SiP applications", "text": "Recently, the timing control of high-frequency signals is strongly demanded due to the high integration density in three-dimensional (3D) LTCC-based SiP applications. Therefore, to control the skew or timing delay, new 3D delay lines will be proposed. For frailty of the signal via, we adopt the concept of coaxial line and proposed an advanced signal via structure with quasi coaxial ground (QCOX-GND) vias. We will show the simulated results using EM and circuit simulator."}
{"_id": "a5eeee49f3da9bb3ce75de4c28823dddb7ed23db", "title": "Visual secret sharing scheme for (k,\u00a0n) threshold based on QR code with multiple decryptions", "text": "In this paper, a novel visual secret sharing (VSS) scheme based on QR code (VSSQR) with (k,\u00a0n) threshold is investigated. Our VSSQR exploits the error correction mechanism in the QR code structure, to generate the bits corresponding to shares (shadow images) by VSS from a secret bit in the processing of encoding QR. Each output share is a valid QR code that can be scanned and decoded utilizing a QR code reader, which may reduce the likelihood of attracting the attention of potential attackers. Due to different application scenarios, two different recovered ways of the secret image are given. The proposed VSS scheme based on QR code can visually reveal secret image with the abilities of stacking and XOR decryptions as well as scan every shadow image, i.e., a QR code, by a QR code reader. The secret image could be revealed by human visual system without any computation based on stacking when no lightweight computation device. On the other hand, if the lightweight computation device is available, the secret image can be revealed with better visual quality based on XOR operation and could be lossless revealed when sufficient shares are collected. In addition, it can assist alignment for VSS recovery. The experiment results show the effectiveness of our scheme."}
{"_id": "87374ee9a49ab4b51176b4155eaa6285c02463a1", "title": "Use of Web 2 . 0 technologies in K-12 and higher education : The search for evidence-based practice", "text": "Evidence-based practice in education entails making pedagogical decisions that are informed by relevant empirical research evidence. The main purpose of this paper is to discuss evidence-based pedagogical approaches related to the use of Web 2.0 technologies in both K-12 and higher education settings. The use of such evidence-based practice would be useful to educators interested in fostering student learning through Web 2.0 tools. A comprehensive literature search across the Academic Search Premier, Education Research Complete, ERIC, and PsycINFO databases was conducted. Empirical studies were included for review if they specifically examined the impact of Web 2.0 technologies on student learning. Articles that merely described anecdotal studies such as student perception or feeling toward learning using Web 2.0, or studies that relied on student self-report data such as student questionnaire survey and interview were excluded. Overall, the results of our review suggested that actual evidence regarding the impact of Web 2.0 technologies on student learning is as yet fairly weak. Nevertheless, the use of Web 2.0 technologies appears to have a general positive impact on student learning. None of the studies reported a detrimental or inferior effect on learning. The positive effects are not necessarily attributed to the technologies per se but to how the technologies are used, and how one conceptualizes learning. It may be tentatively concluded that a dialogic, constructionist, or coconstructive pedagogy supported by activities such as Socratic questioning, peer review and self-reflection appeared to increase student achievement in blog-, wiki-, and 3-D immersive virtual world environments, while a transmissive pedagogy supported by review activities appeared to enhance student learning using podcast. 2012 Elsevier Ltd. All rights reserved."}
{"_id": "2e0305d97f2936ee2a87b87ea901d500a8fbcd16", "title": "Gaussian Process Regression with Heteroscedastic or Non-Gaussian Residuals", "text": "Abstract Gaussian Process (GP) regression models typically assume that residuals are Gaussian and have the same variance for all observations. However, applications with input-dependent noise (heteroscedastic residuals) frequently arise in practice, as do applications in which the residuals do not have a Gaussian distribution. In this paper, we propose a GP Regression model with a latent variable that serves as an additional unobserved covariate for the regression. This model (which we call GPLC) allows for heteroscedasticity since it allows the function to have a changing partial derivative with respect to this unobserved covariate. With a suitable covariance function, our GPLC model can handle (a) Gaussian residuals with input-dependent variance, or (b) nonGaussian residuals with input-dependent variance, or (c) Gaussian residuals with constant variance. We compare our model, using synthetic datasets, with a model proposed by Goldberg, Williams and Bishop (1998), which we refer to as GPLV, which only deals with case (a), as well as a standard GP model which can handle only case (c). Markov Chain Monte Carlo methods are developed for both modelsl. Experiments show that when the data is heteroscedastic, both GPLC and GPLV give better results (smaller mean squared error and negative log-probability density) than standard GP regression. In addition, when the residual are Gaussian, our GPLC model is generally nearly as good as GPLV, while when the residuals are non-Gaussian, our GPLC model is better than GPLV."}
{"_id": "fb7920c6b16ead15a3c0f62cf54c2af9ff9c550f", "title": "Atrous Convolutional Neural Network (ACNN) for Semantic Image Segmentation with full-scale Feature Maps", "text": "Deep Convolutional Neural Networks (DCNNs) are used extensively in biomedical image segmentation. However, current DCNNs usually use down sampling layers for increasing the receptive field and gaining abstract semantic information. These down sampling layers decrease the spatial dimension of feature maps, which can be detrimental to semantic image segmentation. Atrous convolution is an alternative for the down sampling layer. It increases the receptive field whilst maintains the spatial dimension of feature maps. In this paper, a method for effective atrous rate setting is proposed to achieve the largest and fully-covered receptive field with a minimum number of atrous convolutional layers. Furthermore, different atrous blocks, shortcut connections and normalization methods are explored to select the optimal network structure setting. These lead to a new and full-scale DCNN Atrous Convolutional Neural Network (ACNN), which incorporates cascaded atrous II-blocks, residual learning and Fine Group Normalization (FGN). Application results of the proposed ACNN to Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) image segmentation demonstrate that the proposed ACNN can achieve comparable segmentation Dice Similarity Coefficients (DSCs) to U-Net, optimized U-Net and hybrid network, but with significantly reduced trainable parameters due to the use of fullscale feature maps and therefore computationally is much more efficient for both the training and inference."}
{"_id": "3ad1787e6690c80f8c934c150d31b7dd6d410903", "title": "Orthogonal AMP", "text": "Approximate message passing (AMP) is a low-cost iterative signal recovery algorithm for linear system models. When the system transform matrix has independent identically distributed (IID) Gaussian entries, the performance of AMP can be asymptotically characterized by a simple scalar recursion called state evolution (SE). However, SE may become unreliable for other matrix ensembles, especially for ill-conditioned ones. This imposes limits on the applications of AMP. In this paper, we propose an orthogonal AMP (OAMP) algorithm based on de-correlated linear estimation (LE) and divergence-free non-linear estimation (NLE). The Onsager term in standard AMP vanishes as a result of the divergence-free constraint on NLE. We develop an SE procedure for OAMP and show numerically that the SE for OAMP is accurate for general unitarily-invariant matrices, including IID Gaussian matrices and partial orthogonal matrices. We further derive optimized options for OAMP and show that the corresponding SE fixed point coincides with the optimal performance obtained via the replica method. Our numerical results demonstrate that OAMP can be advantageous over AMP, especially for ill-conditioned matrices."}
{"_id": "cfec3fb4352ebb004b0aaf8b0a3b9869f23e7765", "title": "Learning Discrete Hashing Towards Efficient Fashion Recommendation", "text": "In our daily life, how to match clothing well is always a troublesome problem especially when we are shopping online to select a pair of matched pieces of clothing from tens of thousands available selections. To help common customers overcome selection issues, recent studies in the recommender system area have started to infer the fashion matching results automatically. The traditional fashion recommendation is normally achieved by considering visual similarity of clothing items or/and item co-purchase history from existing shopping transactions. Due to the high complexity of visual features and the lack of historical item purchase records, most of the existing work is unlikely to make an efficient and accurate recommendation. To address the problem, in this paper, we propose a new model called Discrete Supervised Fashion Coordinates Hashing. Its main objective is to learn meaningful yet compact high-level features of clothing items, which are represented as binary hash codes. In detail, this learning process is supervised by a clothing matching matrix, which is initially constructed based on limited known matching pairs and subsequently on the self-augmented ones. The proposed model jointly learns the intrinsic matching patterns from the matching matrix and the binary representations from the clothing items\u2019 images, where the visual feature of each clothing item is discretized into a fixed-length binary vector. The binary representation learning significantly reduces the memory cost and accelerates the recommendation speed. The experiments compared with several state-of-the-art approaches have evidenced the superior performance of the proposed approach on efficient fashion recommendation."}
{"_id": "1df0b93bd54a104c862002210cbb2051ab3901b4", "title": "Improving malware detection by applying multi-inducer ensemble", "text": "Detection of malicious software (malware) using ma chine learning methods has been explored extensively to enable fas t detection of new released malware. The performance of these classifiers depen ds on the induction algorithms being used. In order to benefit from mul tiple different classifiers, and exploit their strengths we suggest using an ens emble method that will combine the results of the individual classifiers i nto one final result to achieve overall higher detection accuracy. In this paper we evaluate several combining methods using five different base inducers (C4.5 Dec ision Tree, Na\u00efve Bayes, KNN, VFI and OneR) on five malware datasets. The mai n goal is to find the best combining method for the task of detecting mal icious files in terms of accuracy, AUC and execution time."}
{"_id": "9d852855ba9b805f092b271f940848c3009a6a90", "title": "Unikernel-based approach for software-defined security in cloud infrastructures", "text": "The heterogeneity of cloud resources implies sub-stantial overhead to deploy and configure adequate security mechanisms. In that context, we propose a software-defined security strategy based on unikernels to support the protection of cloud infrastructures. This approach permits to address management issues by uncoupling security policy from their enforcement through programmable security interfaces. It also takes benefits from unikernel virtualization properties to support this enforcement and provide resources with low attack surface. These resources correspond to highly constrained configurations with the strict minimum for a given period. We describe the management framework supporting this software-defined security strategy, formalizing the generation of unikernel images that are dynamically built to comply with security requirements over time. Through an implementation based on MirageOS, and extensive experiments, we show that the cost induced by our security integration mechanisms is small while the gains in limiting the security exposure are high."}
{"_id": "4d767a7a672536922a6f393b4b70db8776e4821d", "title": "Sentiment Classification based on Latent Dirichlet Allocation", "text": "Opinion miningrefers to the use of natural language processing, text analysis and computational linguistics to identify and extract the subjective information. Opinion Mining has become an indispensible part of online reviews which is in the present scenario. In the field of information retrieval, a various kinds of probabilistic topic modeling techniques have been used to analyze contents present in a document. A topic model is a generative technique for document. All"}
{"_id": "1a07186bc10592f0330655519ad91652125cd907", "title": "A unified architecture for natural language processing: deep neural networks with multitask learning", "text": "We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance."}
{"_id": "27e38351e48fe4b7da2775bf94341738bc4da07e", "title": "Semantic Compositionality through Recursive Matrix-Vector Spaces", "text": "Single-word vector space models have been very successful at learning lexical information. However, they cannot capture the compositional meaning of longer phrases, preventing them from a deeper understanding of language. We introduce a recursive neural network (RNN) model that learns compositional vector representations for phrases and sentences of arbitrary syntactic type and length. Our model assigns a vector and a matrix to every node in a parse tree: the vector captures the inherent meaning of the constituent, while the matrix captures how it changes the meaning of neighboring words or phrases. This matrix-vector RNN can learn the meaning of operators in propositional logic and natural language. The model obtains state of the art performance on three different experiments: predicting fine-grained sentiment distributions of adverb-adjective pairs; classifying sentiment labels of movie reviews and classifying semantic relationships such as cause-effect or topic-message between nouns using the syntactic path between them."}
{"_id": "303b0b6e6812c60944a4ac9914222ac28b0813a2", "title": "Recognizing Contextual Polarity in Phrase-Level Sentiment Analysis", "text": "This paper presents a new approach to phrase-level sentiment analysis that first determines whether an expression is neutral or polar and then disambiguates the polarity of the polar expressions. With this approach, the system is able to automatically identify thecontextual polarityfor a large subset of sentiment expressions, achieving results that are significantly better than baseline."}
{"_id": "4eb943bf999ce49e5ebb629d7d0ffee44becff94", "title": "Finding Structure in Time", "text": "Time underlies many interesting human behaviors. Thus, the question of how to represent time in connectionist models is very important. One approach is to represent time implicitly by its effects on processing rather than explicitly (as in a spatial representation). The current report develops a proposal along these lines first described by Jordan (1986) which involves the use of recurrent links in order to provide networks with a dynamic memory. In this approach, hidden unit patterns are fed back to themselves; the internal representations which develop thus reflect task demands in the context of prior internal states. A set of simulations is reported which range from relatively simple problems (temporal version of XOR) to discovering syntactic/semantic features for words. The networks are able to learn interesting internal representations which incorporate task demands with memory demands; indeed, in this approach the notion of memory is inextricably bound up with task processing. These representations reveal a rich structure, which allows them to be highly context-dependent while also expressing generalizations across classes of items. These representations suggest a method for representing lexical categories and the type/token distinction."}
{"_id": "e354b28299c0c3fc8e57d081f02ac0b99b1ed354", "title": "Grasping Without Squeezing: Design and Modeling of Shear-Activated Grippers", "text": "Grasping objects that are too large to envelop is traditionally achieved using friction that is activated by squeezing. We present a family of shear-activated grippers that can grasp such objects without the need to squeeze. When a shear force is applied to the gecko-inspired material in our grippers, adhesion is turned on; this adhesion in turn results in adhesion-controlled friction, a friction force that depends on adhesion rather than a squeezing normal force. Removal of the shear force eliminates adhesion, allowing easy release of an object. A compliant shear-activated gripper without active sensing and control can use the same light touch to lift objects that are soft, brittle, fragile, light, or very heavy. We present three grippers, the first two designed for curved objects, and the third for nearly any shape. Simple models describe the grasping process, and empirical results verify the models. The grippers are demonstrated on objects with a variety of shapes, materials, sizes, and weights."}
{"_id": "1c4eda4f85559b3c3fcae6ca6ec4a54bff18002e", "title": "Quantifying Mental Health from Social Media with Neural User Embeddings", "text": "Mental illnesses adversely affect a significant proportion of the population worldwide. However, the typical methods to estimate and characterize the prevalence of mental health conditions are time-consuming and expensive. Consequently, best-available estimates concerning the prevalence of these conditions are often years out of date. Automated approaches that supplement traditional methods with broad, aggregated information derived from social media provide a potential means of furnishing near real-time estimates at scale. These may in turn provide grist for supporting, evaluating and iteratively improving public health programs and interventions. We propose a novel approach for mental health quantification that leverages user embeddings induced from social media post histories. Recent work showed that learned user representations capture latent aspects of individuals (e.g., political leanings). This paper investigates whether these representations also correlate with mental health statuses. To this end, we induced embeddings for a set of users known to be affected by depression and post-traumatic stress disorder, and for a set of demographically matched \u2018control\u2019 users. We then evaluated the induced user representations with respect to: (i) their ability to capture homophilic relations with respect to mental health statuses; and (ii) their predictive performance in downstream mental health models. Our experimental results demonstrate that learned user embeddings capture relevant signals for mental health quantification."}
{"_id": "31e6f92c83ad2df9859f5211971540412beebea6", "title": "Electroporation of Cells and Tissues", "text": "Electrical pulses that cause the transmembrane voltage of fluid lipid bilayer membranes to reach at least 0 2 V, usually 0.5\u20131 V, are hypothesized to create primary membrane \u201cpores\u201d with a minimum radius of \u223c1 nm. Transport of small ions such as Na and Cl through a dynamic pore population discharges the membrane even while an external pulse tends to increase , leading to dramatic electrical behavior. Molecular transport through primary pores and pores enlarged by secondary processes provides the basis for transporting molecules into and out of biological cells. Cell electroporationin vitro is used mainly for transfection by DNA introduction, but many other interventions are possible, including microbial killing. Ex vivo electroporation provides manipulation of cells that are reintroduced into the body to provide therapy. In vivo electroporation of tissues enhances molecular transport through tissues and into their constituative cells. Tissue electroporation, by longer, large pulses, is involved in electrocution injury. Tissue electroporation by shorter, smaller pulses is under investigation for biomedical engineering applications of medical therapy aimed at cancer treatment, gene therapy, and transdermal drug delivery. The latter involves a complex barrier containing both high electrical resistance, multilamellar lipid bilayer membranes and a tough, electrically invisible protein matrix."}
{"_id": "3ac9c55c9da1b19b66f4249edad5468daa452b02", "title": "Computing With Words for Hierarchical Decision Making Applied to Evaluating a Weapon System", "text": "The perceptual computer (Per-C) is an architecture that makes subjective judgments by computing with words (CWWs). This paper applies the Per-C to hierarchical decision making, which means decision making based on comparing the performance of competing alternatives, where each alternative is first evaluated based on hierarchical criteria and subcriteria, and then, these alternatives are compared to arrive at either a single winner or a subset of winners. What can make this challenging is that the inputs to the subcriteria and criteria can be numbers, intervals, type-1 fuzzy sets, or even words modeled by interval type-2 fuzzy sets. Novel weighted averages are proposed in this paper as a CWW engine in the Per-C to aggregate these diverse inputs. A missile-evaluation problem is used to illustrate it. The main advantages of our approaches are that diverse inputs can be aggregated, and uncertainties associated with these inputs can be preserved and are propagated into the final evaluation."}
{"_id": "2ca36cfea0aba89e19b3551c325e999e0fb6607c", "title": "Attribute-Based Encryption for Circuits", "text": "In an attribute-based encryption (ABE) scheme, a ciphertext is associated with an \u2113-bit public index ind and a message m, and a secret key is associated with a Boolean predicate P. The secret key allows decrypting the ciphertext and learning m if and only if P(ind) = 1. Moreover, the scheme should be secure against collusions of users, namely, given secret keys for polynomially many predicates, an adversary learns nothing about the message if none of the secret keys can individually decrypt the ciphertext.\n We present attribute-based encryption schemes for circuits of any arbitrary polynomial size, where the public parameters and the ciphertext grow linearly with the depth of the circuit. Our construction is secure under the standard learning with errors (LWE) assumption. Previous constructions of attribute-based encryption were for Boolean formulas, captured by the complexity class NC1.\n In the course of our construction, we present a new framework for constructing ABE schemes. As a by-product of our framework, we obtain ABE schemes for polynomial-size branching programs, corresponding to the complexity class LOGSPACE, under quantitatively better assumptions."}
{"_id": "b3ddd3d1c874050bf2c7ab770540a7f3e503a2cb", "title": "The Impact of Nonlinear Junction Capacitance on Switching Transient and Its Modeling for SiC MOSFET", "text": "The nonlinear junction capacitances of power devices are critical for the switching transient, which should be fully considered in the modeling and transient analysis, especially for high-frequency applications. The silicon carbide (SiC) MOSFET combined with SiC Schottky Barrier Diode (SBD) is recognized as the proposed choice for high-power and high-frequency converters. However, in the existing SiC MOSFET models only the nonlinearity of gate-drain capacitance is considered meticulously, but the drain-source capacitance, which affects the switching commutation process significantly, is generally regarded as constant. In addition, the nonlinearity of diode junction capacitance is neglected in some simplified analysis. Experiments show that without full consideration of nonlinear junction capacitances, some significant deviations between simulated and measured results will emerge in the switching waveforms. In this paper, the nonlinear characteristics of drain-source capacitance in SiC MOSFET are studied in detail, and the simplified modeling methods for engineering applications are presented. On this basis, the SiC MOSFET model is improved and the simulation results with improved model correspond with the measured results much better than before, which verify the analysis and modeling."}
{"_id": "1ddd5a27359e7b4101be84b1ee4ac6da35792492", "title": "Learning Mixtures of Gaussians in High Dimensions", "text": "Efficiently learning mixture of Gaussians is a fundamental problem in statistics and learning theory. Given samples coming from a random one out of k Gaussian distributions in Rn, the learning problem asks to estimate the means and the covariance matrices of these Gaussians. This learning problem arises in many areas ranging from the natural sciences to the social sciences, and has also found many ma- chine learning applications. Unfortunately, learning mixture of Gaussians is an information theoretically hard problem: in order to learn the parameters up to a reasonable accuracy, the number of samples required is exponential in the number of Gaussian components in the worst case. In this work, we show that provided we are in high enough dimensions, the class of Gaussian mixtures is learnable in its most general form under a smoothed analysis framework, where the parameters are randomly perturbed from an adversarial starting point. In particular, given samples from a mixture of Gaussians with randomly perturbed parameters, when n \u2265 \u03a9(k2), we give an algorithm that learns the parameters with polynomial running time and using polynomial number of samples.\n The central algorithmic ideas consist of new ways to de- compose the moment tensor of the Gaussian mixture by exploiting its structural properties. The symmetries of this tensor are derived from the combinatorial structure of higher order moments of Gaussian distributions (sometimes referred to as Isserlis' theorem or Wick's theorem). We also develop new tools for bounding smallest singular values of structured random matrices, which could be useful in other smoothed analysis settings."}
{"_id": "bfbed02e4e4ee0d382c58c3d33a15355358119ee", "title": "Nonlinear observers for predicting state-of-charge and state-of-health of lead-acid batteries for hybrid-electric vehicles", "text": "This paper describes the application of state-estimation techniques for the real-time prediction of the state-of-charge (SoC) and state-of-health (SoH) of lead-acid cells. Specifically, approaches based on the well-known Kalman Filter (KF) and Extended Kalman Filter (EKF), are presented, using a generic cell model, to provide correction for offset, drift, and long-term state divergence-an unfortunate feature of more traditional coulomb-counting techniques. The underlying dynamic behavior of each cell is modeled using two capacitors (bulk and surface) and three resistors (terminal, surface, and end), from which the SoC is determined from the voltage present on the bulk capacitor. Although the structure of the model has been previously reported for describing the characteristics of lithium-ion cells, here it is shown to also provide an alternative to commonly employed models of lead-acid cells when used in conjunction with a KF to estimate SoC and an EKF to predict state-of-health (SoH). Measurements using real-time road data are used to compare the performance of conventional integration-based methods for estimating SoC with those predicted from the presented state estimation schemes. Results show that the proposed methodologies are superior to more traditional techniques, with accuracy in determining the SoC within 2% being demonstrated. Moreover, by accounting for the nonlinearities present within the dynamic cell model, the application of an EKF is shown to provide verifiable indications of SoH of the cell pack."}
{"_id": "05ce02eacc1026a84ff7972ad25490db39040671", "title": "Pilot Contamination and Precoding in Multi-Cell TDD Systems", "text": "This paper considers a multi-cell multiple antenna system with precoding used at the base stations for downlink transmission. Channel state information (CSI) is essential for precoding at the base stations. An effective technique for obtaining this CSI is time-division duplex (TDD) operation where uplink training in conjunction with reciprocity simultaneously provides the base stations with downlink as well as uplink channel estimates. This paper mathematically characterizes the impact that uplink training has on the performance of such multi-cell multiple antenna systems. When non-orthogonal training sequences are used for uplink training, the paper shows that the precoding matrix used by the base station in one cell becomes corrupted by the channel between that base station and the users in other cells in an undesirable manner. This paper analyzes this fundamental problem of pilot contamination in multi-cell systems. Furthermore, it develops a new multi-cell MMSE-based precoding method that mitigates this problem. In addition to being linear, this precoding method has a simple closed-form expression that results from an intuitive optimization. Numerical results show significant performance gains compared to certain popular single-cell precoding methods."}
{"_id": "c944f96d97613f4f92a2e5d1cc9b87ea1d4f44d8", "title": "Design of IRNSS receiver antennae for smart city applications in India", "text": "In this research paper, a rectangular fractal antenna is designed which may be used for IRNSS based smart city applications in India. For tracking and positioning applications, the government of India has developed its own navigation system which is called Indian Regional Navigational Satellite System (IRNSS). To design an antenna for such an application is desirable. For the upcoming smart cities in India, the IRNSS will play a vital role in terms of traffic management, intelligent transportation, vehicle tracking and disaster management. A rectangular fractal antenna based on Sierpinski carpet antenna geometry for dual band resonant frequencies is simulated and tested. The antenna has operational frequencies at 2492.08 MHz (S-band) and 1176.45 MHz (L5-band) at return loss of -24.323 dB and -13.41dB respectively."}
{"_id": "9ff32b1a8e4e0d0c0e6ac89b75ee1489b2430b36", "title": "A Low-Cost Gate Driver Design Using Bootstrap Capacitors for Multilevel MOSFET Inverters", "text": "Multilevel inverters require a large number of power semiconductors. The gate driver for each power semiconductor requires its own floating or isolated DC voltage source. Traditional methods on meeting the requirement are expensive and bulky. This paper presents a gate driver design for floating voltage source type multilevel inverters. Bootstrap capacitors are used to form the floating voltage sources in the design, which allows a single DC power supply to be used by all the gate drivers. Specially configured diode plus the proper charging cycles maintain adequate capacitor voltage levels. Such a simple and low cost solution allows user to utilize the easy accessible single-channel gate drivers for multilevel inverter applications without the extra cost on the bulky isolated DC power supplies for each gate driver. A prototype 3-cell 8-level floating voltage source inverter using the method illustrates the technique"}
{"_id": "22f6360f4174515ef69904f2f0609d0021a084a1", "title": "Detecting Text in the Wild with Deep Character Embedding Network", "text": "Most text detection methods hypothesize texts are horizontal or multi-oriented and thus define quadrangles as the basic detection unit. However, text in the wild is usually perspectively distorted or curved, which can not be easily tackled by existing approaches. In this paper, we propose a deep character embedding network (CENet) which simultaneously predicts the bounding boxes of characters and their embedding vectors, thus making text detection a simple clustering task in the character embedding space. The proposed method does not require strong assumptions of forming a straight line on general text detection, which provides flexibility on arbitrarily curved or perspectively distorted text. For character detection task, a dense prediction subnetwork is designed to obtain the confidence score and bounding boxes of characters. For character embedding task, a subnet is trained with contrastive loss to project detected characters into embedding space. The two tasks share a backbone CNN from which the multi-scale feature maps are extracted. The final text regions can be easily achieved by a thresholding process on character confidence and embedding distance of character pairs. We evaluated our method on ICDAR13, ICDAR15, MSRA-TD500, and Total Text. The proposed method achieves state-of-the-art or comparable performance on all of the datasets, and shows a substantial improvement in the irregular-text datasets, i.e. Total-Text."}
{"_id": "2069c9389df8bb29b7fedf2c2ccfe7aaf82b2832", "title": "Heterogeneous Transfer Learning for Image Classification", "text": "Transfer learning as a new machine learning paradigm has gained increasing attention lately. In situations where the training data in a target domain are not sufficient to learn predictive models effectively, transfer learning leverages auxiliary source data from other related auxiliary domains for learning. While most of the existing works in this area are only focused on using the source data with the same representational structure as the target data, in this paper, we push this boundary further by extending a heterogeneous transfer learning framework for knowledge transfer between text and images. We observe that for a target-domain classification problem, some annotated images can be found on many social Web sites, which can serve as a bridge to transfer knowledge from the abundant text documents available over the Web. A key question is how to effectively transfer the knowledge in the source data even though the text documents are arbitrary. Our solution is to enrich the representation of the target images with semantic concepts extracted from the auxiliary source data through matrix factorization, and to use the latent semantic features generated by the auxiliary data to build a better image classifier. We empirically verify the effectiveness of our algorithm on the Caltech-256 image dataset."}
{"_id": "922ac52351dac20ddbcd5e98e68f95ae7f1c502d", "title": "Employing U.S. Military Families to Provide Business Process Outsourcing Services: A Case study of Impact Sourcing and Reshoring", "text": "This paper describes how a startup business process outsourcing (BPO) provider named Liberty Source helped a large U.S.-based client reshore business services from an established Indian BPO provider. Founded in 2014, Liberty Source is a for-profit firm that provides a competitive alternative to offshoring while fulfilling its social mission to launch and sustain the careers of U.S. military spouses and veterans who face various employment disadvantages. Thus, the case describes reshoring in the context of impact sourcing. It addresses key impact sourcing issues pertaining to workforce development, scalability, and impact on employees. The impact was positive: the workers found the employment and stable salary were beneficial, \u201cthe military\u201d culture fit well with the workers, and workers received considerable flexibility and greater career options. Liberty Source was able to reduce a client\u2019s costs after reshoring the client\u2019s processes because Liberty Source\u2019s U.S. site had about 20 percent fewer full time equivalents (FTEs) FTEs than the original India location and because Liberty Source received subsidies. We found evidence that the offshore BPO provider and Liberty source experienced difficulties with finding enough skilled staff for the wages offered and both firms experienced attrition problems, although attrition was greater in India."}
{"_id": "eeebffb3148dfcab6d2b7b464ad72feeff53c21f", "title": "A New Metrics for Countries' Fitness and Products' Complexity", "text": "Classical economic theories prescribe specialization of countries industrial production. Inspection of the country databases of exported products shows that this is not the case: successful countries are extremely diversified, in analogy with biosystems evolving in a competitive dynamical environment. The challenge is assessing quantitatively the non-monetary competitive advantage of diversification which represents the hidden potential for development and growth. Here we develop a new statistical approach based on coupled non-linear maps, whose fixed point defines a new metrics for the country Fitness and product Complexity. We show that a non-linear iteration is necessary to bound the complexity of products by the fitness of the less competitive countries exporting them. We show that, given the paradigm of economic complexity, the correct and simplest approach to measure the competitiveness of countries is the one presented in this work. Furthermore our metrics appears to be economically well-grounded."}
{"_id": "53a7a96149bae64041338fc58b281a6eef7d9912", "title": "What's Wrong with the Diffusion of Innovation Theory", "text": "This paper examines the usefulness of the diffusion of innovation research in developing theoretical accounts of the adoption of complex and networked IT solutions. We contrast six conjectures underlying DOI research with field data obtained from the study of the diffusion of EDI. Our analysis shows that DOI based analyses miss some important facets in the diffusion of complex technologies. We suggest that complex IT solutions should be understood as socially constructed and learning intensive artifacts, which can be adopted for varying reasons within volatile diffusion arenas. Therefore DOI researchers should carefully recognize the complex, networked, and learning intensive features of technology; understand the role of institutional regimes, focus on process features (including histories) and key players in the diffusion arena, develop multi-layered theories that factor out mappings between different layers and locales, use multiple perspectives including political models, institutional models and theories of team behavior, and apply varying time scales while crafting accounts of what happened and why. In general the paper calls for a need to develop DOI theories at the site by using multiple levels of"}
{"_id": "091436ab676ecedc8c7ebde611bdbccf978c6e6c", "title": "Computer-Mediated Communication Conflict Management in Groups that Work in Two Different Communication Contexts : Face-To-Face and", "text": "can be found at: Small Group Research Additional services and information for http://sgr.sagepub.com/cgi/alerts Email Alerts: http://sgr.sagepub.com/subscriptions Subscriptions: http://www.sagepub.com/journalsReprints.nav Reprints: http://www.sagepub.com/journalsPermissions.nav Permissions: http://sgr.sagepub.com/cgi/content/refs/33/5/481 SAGE Journals Online and HighWire Press platforms): (this article cites 36 articles hosted on the Citations"}
{"_id": "35e0fb8d808ce3b30625a78aeed0d20cf5b39fb4", "title": "Static detection of cross-site scripting vulnerabilities", "text": "Web applications support many of our daily activities, but they often have security problems, and their accessibility makes them easy to exploit. In cross-site scripting (XSS), an attacker exploits the trust a web client (browser) has for a trusted server and executes injected script on the browser with the server's privileges. In 2006, XSS constituted the largest class of newly reported vulnerabilities making it the most prevalent class of attacks today. Web applications have XSS vulnerabilities because the validation they perform on untrusted input does not suffice to prevent that input from invoking a browser's JavaScript interpreter, and this validation is particularly difficult to get right if it must admit some HTML mark-up. Most existing approaches to finding XSS vulnerabilities are taint-based and assume input validation functions to be adequate, so they either miss real vulnerabilities or report many false positives.\n This paper presents a static analysis for finding XSS vulnerabilities that directly addresses weak or absent input validation. Our approach combines work on tainted information flow with string analysis. Proper input validation is difficult largely because of the many ways to invoke the JavaScript interpreter; we face the same obstacle checking for vulnerabilities statically, and we address it by formalizing a policy based on the W3C recommendation, the Firefox source code, and online tutorials about closed-source browsers. We provide effective checking algorithms based on our policy. We implement our approach and provide an extensive evaluation that finds both known and unknown vulnerabilities in real-world web applications."}
{"_id": "6ac3171269609e4c2ca1c3326d1433a9c5b6c121", "title": "Combinatorial Bounds for Broadcast Encryption", "text": "Abst rac t . A broadcast encryption system allows a center to communicate securely over a broadcast channel with selected sets of users. Each time the set of privileged users changes, the center enacts a protocol to establish a new broadcast key that only the privileged users can obtain, and subsequent transmissions by the center are encrypted using the new broadcast key. We study the inherent trade-off between the number of establishment keys held by each user and the number of transmissions needed to establish a new broadcast key. For every given upper bound on the number of establishment keys held by each user, we prove a lower bound on the number of transmissions needed to establish a new broad~ cast key. We show that these bounds are essentially tight, by describing broadcast encryption systems that come close to these bounds."}
{"_id": "413bc7d58d64291042fffc58f36ee74d63c9cb00", "title": "Database security - concepts, approaches, and challenges", "text": "As organizations increase their reliance on, possibly distributed, information systems for daily business, they become more vulnerable to security breaches even as they gain productivity and efficiency advantages. Though a number of techniques, such as encryption and electronic signatures, are currently available to protect data when transmitted across sites, a truly comprehensive approach for data protection must also include mechanisms for enforcing access control policies based on data contents, subject qualifications and characteristics, and other relevant contextual information, such as time. It is well understood today that the semantics of data must be taken into account in order to specify effective access control policies. Also, techniques for data integrity and availability specifically tailored to database systems must be adopted. In this respect, over the years, the database security community has developed a number of different techniques and approaches to assure data confidentiality, integrity, and availability. However, despite such advances, the database security area faces several new challenges. Factors such as the evolution of security concerns, the \"disintermediation\" of access to data, new computing paradigms and applications, such as grid-based computing and on-demand business, have introduced both new security requirements and new contexts in which to apply and possibly extend current approaches. In this paper, we first survey the most relevant concepts underlying the notion of database security and summarize the most well-known techniques. We focus on access control systems, on which a large body of research has been devoted, and describe the key access control models, namely, the discretionary and mandatory access control models, and the role-based access control (RBAC) model. We also discuss security for advanced data management systems, and cover topics such as access control for XML. We then discuss current challenges for database security and some preliminary approaches that address some of these challenges."}
{"_id": "b2114228411d367cfa6ca091008291f250a2c490", "title": "Deep learning and process understanding for data-driven Earth system science", "text": "Machine learning approaches are increasingly used to extract patterns and insights from the ever-increasing stream of geospatial data, but current approaches may not be optimal when system behaviour is dominated by spatial or temporal context. Here, rather than amending classical machine learning, we argue that these contextual cues should be used as part of deep learning (an approach that is able to extract spatio-temporal features automatically) to gain further process understanding of Earth system science problems, improving the predictive ability of seasonal forecasting and modelling of long-range spatial connections across multiple timescales, for example. The next step will be a hybrid modelling approach, coupling physical process models with the versatility of data-driven machine learning. Complex Earth system challenges can be addressed by incorporating spatial and temporal context into machine learning, especially via deep learning, and further by combining with physical models into hybrid models."}
{"_id": "94c4ca9641c018617be87bb21754a3fa9fbc9dda", "title": "Efficient mesh denoising via robust normal filtering and alternate vertex updating", "text": "The most challenging problem in mesh denoising is to distinguish features from noise. Based on the robust guided normal estimation and alternate vertex updating strategy, we investigate a new feature-preserving mesh denoising method. To accurately capture local structures around features, we propose a corner-aware neighborhood (CAN) scheme. By combining both overall normal distribution of all faces in a CAN and individual normal influence of the interested face, we give a new consistency measuring method, which greatly improves the reliability of the estimated guided normals. As the noise level lowers, we take as guidance the previous filtered normals, which coincides with the emerging rolling guidance idea. In the vertex updating process, we classify vertices according to filtered normals at each iteration and reposition vertices of distinct types alternately with individual regularization constraints. Experiments on a variety of synthetic and real data indicate that our method adapts to various noise, both Gaussian and impulsive, no matter in the normal direction or in a random direction, with few triangles flipped."}
{"_id": "668f72e3c27a747f7f86e035709e990f9b145a73", "title": "An Orbital Angular Momentum (OAM) Mode Reconfigurable Antenna for Channel Capacity Improvement and Digital Data Encoding", "text": "For purpose of utilizing orbital angular momentum (OAM) mode diversity, multiple OAM beams should be generated preferably by a single antenna. In this paper, an OAM mode reconfigurable antenna is proposed. Different from the existed OAM antennas with multiple ports for multiple OAM modes transmitting, the proposed antenna with only a single port, but it can be used to transmit mode 1 or mode \u22121 OAM beams arbitrary by controlling the PIN diodes on the feeding network through a programmable microcontroller which control by a remote controller. Simulation and measurement results such as return loss, near-field and far-field radiation patterns of two operating states for mode 1 and mode \u22121, and OAM mode orthogonality are given. The proposed antenna can serve as a candidate for utilizing OAM diversity, namely phase diversity to increase channel capacity at 2.4\u2009GHz. Moreover, an OAM-mode based encoding method is experimentally carried out by the proposed OAM mode reconfigurable antenna, the digital data are encoded and decoded by different OAM modes. At the transmitter, the proposed OAM mode reconfigurable antenna is used to encode the digital data, data symbol 0 and 1 are mapped to OAM mode 1 and mode \u22121, respectively. At the receiver, the data symbols are decoded by phase gradient method."}
{"_id": "c2a684f7f24e8f1448204b3f9f00b26d34e9a03b", "title": "Unsupervised deformable image registration with fully connected generative neural network", "text": "In this paper, a new deformable image registration method based on a fully connected neural network is proposed. Even though a deformation field related to the point correspondence between fixed and moving images are high-dimensional in nature, we assume that these deformation fields form a low dimensional manifold in many real world applications. Thus, in our method, a neural network generates an embedding of the deformation field from a low dimensional vector. This lowdimensional manifold formulation avoids the intractability associated with the high dimensional search space that most other methods face during image registration. As a result, while most methods rely on explicit and handcrafted regularization of the deformation fields, our algorithm relies on implicitly regularizing the network parameters. The proposed method generates deformation fields from latent low dimensional space by minimizing a dissimilarity metric between a fixed image and a warped moving image. Our method removes the need for a large dataset to optimize the proposed network. The proposed method is quantitatively evaluated using images from the MICCAI ACDC challenge. The results demonstrate that the proposed method improves performance in comparison with a moving mesh registration algorithm, and also it correlates well with independent manual segmentations by an expert."}
{"_id": "8900adf0a63fe9a99c2026a897cfcfd2aaf89476", "title": "A Low-Power Subthreshold to Above-Threshold Voltage Level Shifter", "text": "This brief presents a power-efficient voltage level-shifter architecture that is capable of converting extremely low levels of input voltages to higher levels. In order to avoid the static power dissipation, the proposed structure uses a current generator that turns on only during the transition times, in which the logic level of the input signal is not corresponding to the output logic level. Moreover, the strength of the pull-up device is decreased when the pull-down device is pulling down the output node in order for the circuit to be functional even for the input voltage lower than the threshold voltage of a MOSFET. The operation of the proposed structure is also analytically investigated. Post-layout simulation results of the proposed structure in a 0.18-\u03bcm CMOS technology show that at the input low supply voltage of 0.4 V and the high supply voltage of 1.8 V, the level shifter has a propagation delay of 30 ns, a static power dissipation of 130 pW, and an energy per transition of 327 fJ for a 1-MHz input signal."}
{"_id": "cace7912089d586e0acebb4a66329f60d3f1cd09", "title": "A robust, input voltage adaptive and low energy consumption level converter for sub-threshold logic", "text": "A new level converter (LC) is proposed for logic voltage shifting between sub-threshold voltage to normal high voltage. By employing 2 PMOS diodes, the LC shows good operation robustness with sub-threshold logic input. The switching delay of the proposed LC can adapt with the input logic voltage which is more suitable for power aware systems. With a simpler circuit structure, the energy consumption of the LC is smaller than that of the existing sub-threshold LC. Simulation results demonstrate the performance improvement and energy reduction of the proposed LC. Test chip was fabricated using 0.18 mum CMOS process. Measurement results show that our proposed LC can operate correctly with an input at as low as 127 mV and an output voltage at 1.8V."}
{"_id": "e52c199d4f9f815087fb72702428cd3c7bb9d9ff", "title": "A 180-mV subthreshold FFT processor using a minimum energy design methodology", "text": "In emerging embedded applications such as wireless sensor networks, the key metric is minimizing energy dissipation rather than processor speed. Minimum energy analysis of CMOS circuits estimates the optimal operating point of clock frequencies, supply voltage, and threshold voltage according to A. Chandrakasan et al. (see ibid., vol.27, no.4, p.473-84, Apr. 1992). The minimum energy analysis shows that the optimal power supply typically occurs in subthreshold (e.g., supply voltages that are below device thresholds). New subthreshold logic and memory design methodologies are developed and demonstrated on a fast Fourier transform (FFT) processor. The FFT processor uses an energy-aware architecture that allows for variable FFT length (128-1024 point), variable bit-precision (8 b and 16 b) and is designed to investigate the estimated minimum energy point. The FFT processor is fabricated using a standard 0.18-/spl mu/m CMOS logic process and operates down to 180 mV. The minimum energy point for the 16-b 1024-point FFT processor occurs at 350-mV supply voltage where it dissipates 155 nJ/FFT at a clock frequency of 10 kHz."}
{"_id": "e74923f05c05a603356c4e88a5bcc1d743aedeb5", "title": "A Subthreshold to Above-Threshold Level Shifter Comprising a Wilson Current Mirror", "text": "In this brief, we propose a novel level shifter circuit that is capable of converting subthreshold to above-threshold signal levels. In contrast to other existing implementations, it does not require a static current flow and can therefore offer considerable static power savings. The circuit has been optimized and simulated in a 90-nm process technology. It operates correctly across process corners for supply voltages from 100 mV to 1 V on the low-voltage side. At the target design voltage of 200 mV, the level shifter has a propagation delay of 18.4 ns and a static power dissipation of 6.6 nW. For a 1-MHz input signal, the total energy per transition is 93.9 fJ. Simulation results are compared to an existing subthreshold to above-threshold level shifter implementation from the paper of Chen et al."}
{"_id": "381231eecd132199821c5aa3ff3f2278f593ea33", "title": "Subthreshold to Above Threshold Level Shifter Design", "text": ""}
{"_id": "6077b91080f2ac5822fd899e9c41dd82afbdea27", "title": "Clustering-based Approach for Anomaly Detection in XACML Policies", "text": "The development of distributed applications arises multiple security issues such as access control. AttributeBased Access Control has been proposed as a generic access control model, which provides more flexibility and promotes information and security sharing. eXtensible Access Control Markup Language (XACML) is the most convenient way to express ABAC policies. However, in distributed environments, XACML policies become more complex and hard to manage. In fact, an XACML policy in distributed applications may be aggregated from multiple parties and can be managed by more than one administrator. Therefore, it may contain several anomalies such as conflicts and redundancies, which may affect the performance of the policy execution. In this paper, we propose an anomaly detection method based on the decomposition of a policy into clusters before searching anomalies within each cluster. Our evaluation results demonstrate the efficiency of the suggested approach."}
{"_id": "b356bd9e1df74a8fc7a144001a88c0e1f89e616d", "title": "Cache'n DASH: Efficient Caching for DASH", "text": "HTTP-based video streaming services have been dominating the global IP traffic over the last few years. Caching of video content reduces the load on the content servers. In the case of Dynamic Adaptive Streaming over HTTP (DASH), for every video the server needs to host multiple representations of the same video file. These individual representations are further broken down into smaller segments. Hence, for each video the server needs to host thousands of segments out of which, the client downloads a subset of the segments. Also, depending on the network conditions, the adaptation scheme used at the client-end might request a different set of video segments (varying in bitrate) for the same video. The caching of DASH videos presents unique challenges. In order to optimize the cache hits and minimize the misses for DASH video streaming services we propose an Adaptation Aware Cache (AAC) framework to determine the segments that are to be prefetched and retained in the cache. In the current scheme, we use bandwidth estimates at the cache server and the knowledge of the rate adaptation scheme used by the client to estimate the next segment requests, thus improving the prefetching at the cache."}
{"_id": "c7a83708cf93b46e952327822e4fb195b1dafef3", "title": "Automated Assembly Using 3D and 2D Cameras", "text": "2D and 3D computer vision systems are frequently being used in automated production to detect and determine the position of objects. Accuracy is important in the production industry, and computer vision systems require structured environments to function optimally. For 2D vision systems, a change in surfaces, lighting and viewpoint angles can reduce the accuracy of a method, maybe even to a degree that it will be erroneous, while for 3D vision systems, the accuracy mainly depends on the 3D laser sensors. Commercially available 3D cameras lack the precision found in high-grade 3D laser scanners, and are therefore not suited for accurate measurements in industrial use. In this paper, we show that it is possible to identify and locate objects using a combination of 2D and 3D cameras. A rough estimate of the object pose is first found using a commercially available 3D camera. Then, a robotic arm with an eye-in-hand 2D camera is used to determine the pose accurately. We show that this increases the accuracy to < 1 mm and < 1\u25e6. This was demonstrated in a real industrial assembly task where high accuracy is required."}
{"_id": "2d54e36c216de770f849125a62b4a2e74284792a", "title": "A content-based recommendation algorithm for learning resources", "text": "Automatic multimedia learning resources recommendation has become an increasingly relevant problem: it allows students to discover new learning resources that match their tastes, and enables the e-learning system to target the learning resources to the right students. In this paper, we propose a content-based recommendation algorithm based on convolutional neural network (CNN). The CNN can be used to predict the latent factors from the text information of the multimedia resources. To train the CNN, its input and output should first be solved. For its input, the language model is used. For its output, we propose the latent factor model, which is regularized by L 1-norm. Furthermore, the split Bregman iteration method is introduced to solve the model. The major novelty of the proposed recommendation algorithm is that the text information is used directly to make the content-based recommendation without tagging. Experimental results on public databases in terms of quantitative assessment show significant improvements over conventional methods. In addition, the split Bregman iteration method which is introduced to solve the model can greatly improve the training efficiency."}
{"_id": "56d36c2535c867613997688f649cf0461171dbe8", "title": "CiteSeer: An Automatic Citation Indexing System", "text": "We presentCiteSeer : an autonomous citation indexing system which indexes academic literature in electronic format (e.g. Postscript files on the Web). CiteSeer understands how to parse citations, identify citations to the same paper in d ifferent formats, and identify the context of citations in the body of articles. CiteSeer provides most of the advantages o f traditional (manually constructed) citation indexes (e.g . the ISI citation indexes), including: literature retrieval by following citation links (e.g. by providing a list of papers tha cite a given paper), the evaluation and ranking of papers, au thors, journals, etc. based on the number of citations, and t he identification of research trends. CiteSeer has many advantages over traditional citation indexes, including the abi lity to create more up-to-date databases which are not limited to a preselected set of journals or restricted by journal public ation delays, completely autonomous operation with a correspond ing reduction in cost, and powerful interactive browsing of the literature using the context of citations. Given a parti cular paper of interest, CiteSeer can display the context of ho w the paper is cited in subsequent publications. This context may contain a brief summary of the paper, another author\u2019s response to the paper, or subsequent work which builds upon the original article. CiteSeer allows the location of paper s by keyword search or by citation links. Papers related to a give n paper can be located using common citation information or word vector similarity. CiteSeer will soon be available for public use."}
{"_id": "a8823ab946321079c63b9bd42f58bd17b96a25e4", "title": "Face detection and eyes extraction using sobel edge detection and morphological operations", "text": "Face detection and eyes extraction has an important role in many applications such as face recognition, facial expression analysis, security login etc. Detection of human face and facial structures like eyes, nose are the complex procedure for the computer. This paper proposes an algorithm for face detection and eyes extraction from frontal face images using Sobel edge detection and morphological operations. The proposed approach is divided into three phases; preprocessing, identification of face region, and extraction of eyes. Resizing of images and gray scale image conversion is achieved in preprocessing. Face region identification is accomplished by Sobel edge detection and morphological operations. In the last phase, eyes are extracted from the face region with the help of morphological operations. The experiments are conducted on 120, 75, 40 images of IMM frontal face database, FEI face database and IMM face database respectively. The face detection accuracy is 100%, 100%, 97.50% and the eyes extraction accuracy rate is 92.50%, 90.66%, 92.50% respectively."}
{"_id": "327eea4fa452df2e9ff0864601bba4051fd9927c", "title": "The HEXACO-60: a short measure of the major dimensions of personality.", "text": "We describe the HEXACO-60, a short personality inventory that assesses the 6 dimensions of the HEXACO model of personality structure. We selected the 10 items of each of the 6 scales from the longer HEXACO Personality Inventory-Revised (Ashton & Lee, 2008; Lee & Ashton, 2004, 2006), with the aim of representing the broad range of content that defines each dimension. In self-report data from samples of college students and community adults, the scales showed reasonably high levels of internal consistency reliability and rather low interscale correlations. Correlations of the HEXACO-60 scales with measures of the Big Five factors were consistent with theoretical expectations, and convergent correlations between self-reports and observer reports on the HEXACO-60 scales were high, averaging above .50. We recommend the HEXACO-60 for use in personality assessment contexts in which administration time is limited."}
{"_id": "523b8beb41b7890fd4fdd7bd3cba7261137e53b4", "title": "On the benefits of explaining herd immunity in vaccine advocacy", "text": "Most vaccines protect both the vaccinated individual and the community at large by building up herd immunity. Even though reaching disease-specific herd immunity thresholds is crucial for eliminating or eradicating certain diseases1,2, explanation of this concept remains rare in vaccine advocacy3. An awareness of this social benefit makes vaccination not only an individual but also a social decision. Although knowledge of herd immunity can induce prosocial vaccination in order to protect others, it can also invite free-riding, in which individuals profit from the protection provided by a well-vaccinated society without contributing to herd immunity themselves. This cross-cultural experiment assesses whether people will be more or less likely to be vaccinated when they know more about herd immunity. Results show that in cultures that focus on collective benefits, vaccination willingness is generally higher. Communicating the concept of herd immunity improved willingness to vaccinate, especially in cultures lacking this prosocial cultural background. Prosocial nudges can thus help to close these immunity gaps."}
{"_id": "f163718a70999d849278dde9c7a417df1afdbc78", "title": "A ZVS Grid-Connected Three-Phase Inverter", "text": "A six-switch three-phase inverter is widely used in a high-power grid-connected system. However, the antiparallel diodes in the topology operate in the hard-switching state under the traditional control method causing severe switch loss and high electromagnetic interference problems. In order to solve the problem, this paper proposes a topology of the traditional six-switch three-phase inverter but with an additional switch and gave a new space vector modulation (SVM) scheme. In this way, the inverter can realize zero-voltage switching (ZVS) operation in all switching devices and suppress the reverse recovery current in all antiparallel diodes very well. And all the switches can operate at a fixed frequency with the new SVM scheme and have the same voltage stress as the dc-link voltage. In grid-connected application, the inverter can achieve ZVS in all the switches under the load with unity power factor or less. The aforementioned theory is verified in a 30-kW inverter prototype."}
{"_id": "4f0ea476d0aa1315c940988a593a1d4055695c79", "title": "Frame-by-frame language identification in short utterances using deep neural networks", "text": "This work addresses the use of deep neural networks (DNNs) in automatic language identification (LID) focused on short test utterances. Motivated by their recent success in acoustic modelling for speech recognition, we adapt DNNs to the problem of identifying the language in a given utterance from the short-term acoustic features. We show how DNNs are particularly suitable to perform LID in real-time applications, due to their capacity to emit a language identification posterior at each new frame of the test utterance. We then analyse different aspects of the system, such as the amount of required training data, the number of hidden layers, the relevance of contextual information and the effect of the test utterance duration. Finally, we propose several methods to combine frame-by-frame posteriors. Experiments are conducted on two different datasets: the public NIST Language Recognition Evaluation 2009 (3 s task) and a much larger corpus (of 5 million utterances) known as Google 5M LID, obtained from different Google Services. Reported results show relative improvements of DNNs versus the i-vector system of 40% in LRE09 3 second task and 76% in Google 5M LID."}
{"_id": "cb79833f0d3b88963cdc84dbb8cb358fd996946e", "title": "CRITICAL SUCCESS FACTORS FOR CUSTOMER RELATIONSHIP MANAGEMENT IMPLEMENTATIONS", "text": "The growing forces of increasing global competition, continuing customer demands, and the significant revolution in Commercial Off The Shelf (COTS) solutions, especially Customer Relationship Management (CRM) applications, have together put pressure upon many organisations to implement CRM solutions and to switch their organisational processes from being product-centric to being customer-centric. A CRM initiative is not only technology; it is a business strategy supported by technology which automates and enhances the processes associated with managing customer relationships. By the end of 2010, it is predicted that companies will be spending almost $11 billion yearly on CRM solutions. However, studies have found that 70% of CRM projects have failed. Understanding the factors that enable success of CRM is vital. There is very few existing specific research into Critical Success Factors (CSFs) of CRM implementations, and there is no comprehensive view that captures all the aspects for successful CRM implementation and their inter-relationships. Therefore, the aim of this paper is to explore the current literature base of CSFs for CRM implementations and proposes a taxonomy for them. Future research work will continue to investigate in depth these factors by exploring the complex system links between CSFs using systems thinking techniques such as causal maps to investigate the complex, systemic networks of CSFs in organisations which result in emergent effects which themselves influence the failure or success of a CRM."}
{"_id": "88a7ae1ba4eefa5f51661faddd023bd8635da19c", "title": "Log-Euclidean metrics for fast and simple calculus on diffusion tensors.", "text": "Diffusion tensor imaging (DT-MRI or DTI) is an emerging imaging modality whose importance has been growing considerably. However, the processing of this type of data (i.e., symmetric positive-definite matrices), called \"tensors\" here, has proved difficult in recent years. Usual Euclidean operations on matrices suffer from many defects on tensors, which have led to the use of many ad hoc methods. Recently, affine-invariant Riemannian metrics have been proposed as a rigorous and general framework in which these defects are corrected. These metrics have excellent theoretical properties and provide powerful processing tools, but also lead in practice to complex and slow algorithms. To remedy this limitation, a new family of Riemannian metrics called Log-Euclidean is proposed in this article. They also have excellent theoretical properties and yield similar results in practice, but with much simpler and faster computations. This new approach is based on a novel vector space structure for tensors. In this framework, Riemannian computations can be converted into Euclidean ones once tensors have been transformed into their matrix logarithms. Theoretical aspects are presented and the Euclidean, affine-invariant, and Log-Euclidean frameworks are compared experimentally. The comparison is carried out on interpolation and regularization tasks on synthetic and clinical 3D DTI data."}
{"_id": "8fe2f671089c63a0d3f6f729ca8bc63aa3069263", "title": "Mining hidden community in heterogeneous social networks", "text": "Social network analysis has attracted much attention in recent years. Community mining is one of the major directions in social network analysis. Most of the existing methods on community mining assume that there is only one kind of relation in the network, and moreover, the mining results are independent of the users' needs or preferences. However, in reality, there exist multiple, heterogeneous social networks, each representing a particular kind of relationship, and each kind of relationship may play a distinct role in a particular task. Thus mining networks by assuming only one kind of relation may miss a lot of valuable hidden community information and may not be adaptable to the diverse information needs from different users.In this paper, we systematically analyze the problem of mining hidden communities on heterogeneous social networks. Based on the observation that different relations have different importance with respect to a certain query, we propose a new method for learning an optimal linear combination of these relations which can best meet the user's expectation. With the obtained relation, better performance can be achieved for community mining. Our approach to social network analysis and community mining represents a major shift in methodology from the traditional one, a shift from single-network, user-independent analysis to multi-network, user-dependant, and query-based analysis. Experimental results on Iris data set and DBLP data set demonstrate the effectiveness of our method."}
{"_id": "737568fa4422eae79dcb4dc903565386bdc17e43", "title": "Impact of Halo Doping on the Subthreshold Performance of Deep-Submicrometer CMOS Devices and Circuits for Ultralow Power Analog/Mixed-Signal Applications", "text": "In addition to its attractiveness for ultralow power applications, analog CMOS circuits based on the subthreshold operation of the devices are known to have significantly higher gain as compared to their superthreshold counterpart. The effects of halo [both double-halo (DH) and single-halo or lateral asymmetric channel (LAC)] doping on the subthreshold analog performance of 100-nm CMOS devices are systematically investigated for the first time with extensive process and device simulations. In the subthreshold region, although the halo doping is found to improve the device performance parameters for analog applications (such as gm/Id, output resistance and intrinsic gain) in general, the improvement is significant in the LAC devices. Low angle of tilt of the halo implant is found to give the best improvement in both the LAC and DH devices. Our results show that the CMOS amplifiers made with the halo implanted devices have higher voltage gain over their conventional counterpart, and a more than 100% improvement in the voltage gain is observed when LAC doping is made on both the p- and n-channel devices of the amplifier"}
{"_id": "6124e9f8455723e9508fc5d8365b3895b4d15208", "title": "Incremental Spectral Clustering With Application to Monitoring of Evolving Blog Communities", "text": "In recent years, spectral clustering method has gained attentions because of its superior performance compared to other traditional clustering algorithms such as K-means algorithm. The existing spectral clustering algorithms are all off-line algorithms, i.e., they can not incrementally update the clustering result given a small change of the data set. However, the capability of incrementally updating is essential to some applications such as real time monitoring of the evolving communities of websphere or blogsphere. Unlike traditional stream data, these applications require incremental algorithms to handle not only insertion/deletion of data points but also similarity changes between existing items. This paper extends the standard spectral clustering to such evolving data by introducing the incidence vector/matrix to represent two kinds of dynamics in the same framework and by incrementally updating the eigenvalue system. Our incremental algorithm, initialized by a standard spectral clustering, continuously and efficiently updates the eigenvalue system and generates instant cluster labels, as the data set is evolving. The algorithm is applied to a blog data set. Compared with recomputation of the solution by standard spectral clustering, it achieves similar accuracy but with much lower computational cost. Close inspection into the blog content shows that the incremental approach can discover not only the stable blog communities but also the evolution of the individual multi-topic blogs."}
{"_id": "dcfae19ad20ee57b3f68891d8b21570ab2601613", "title": "An Empirical Study on Modeling and Prediction of Bitcoin Prices With Bayesian Neural Networks Based on Blockchain Information", "text": "Bitcoin has recently attracted considerable attention in the fields of economics, cryptography, and computer science due to its inherent nature of combining encryption technology and monetary units. This paper reveals the effect of Bayesian neural networks (BNNs) by analyzing the time series of Bitcoin process. We also select the most relevant features from Blockchain information that is deeply involved in Bitcoin\u2019s supply and demand and use them to train models to improve the predictive performance of the latest Bitcoin pricing process. We conduct the empirical study that compares the Bayesian neural network with other linear and non-linear benchmark models on modeling and predicting the Bitcoin process. Our empirical studies show that BNN performs well in predicting Bitcoin price time series and explaining the high volatility of the recent Bitcoin price."}
{"_id": "94017eca9875a77d7de5daadf5c37023b8bbe6c9", "title": "Low-light image enhancement using variational optimization-based Retinex model", "text": "This paper presents a low-light image enhancement method using the variational-optimization-based Retinex algorithm. The proposed enhancement method first estimates the initial illumination and uses its gamma corrected version to constrain the illumination component. Next, the variational-based minimization is iteratively performed to separate the reflectance and illumination components. The color assignment of the estimated reflectance component is then performed to restore the color component using the input RGB color channels. Experimental results show that the proposed method can provide better enhanced result without saturation, noise amplification or color distortion."}
{"_id": "ef583fd79e57ab0b42bf1db466d782ad64aca09e", "title": "Big Data Reduction Methods: A Survey", "text": "Research on big data analytics is entering in the new phase called fast data where multiple gigabytes of data arrive in the big data systems every second. Modern big data systems collect inherently complex data streams due to the volume, velocity, value, variety, variability, and veracity in the acquired data and consequently give rise to the 6Vs of big data. The reduced and relevant data streams are perceived to be more useful than collecting raw, redundant, inconsistent, and noisy data. Another perspective for big data reduction is that the million variables big datasets cause the curse of dimensionality which requires unbounded computational resources to uncover actionable knowledge patterns. This article presents a review of methods that are used for big data reduction. It also presents a detailed taxonomic discussion of big data reduction methods including the network theory, big data compression, dimension reduction, redundancy elimination, data mining, and machine learning methods. In addition, the open research issues pertinent to the big data reduction are also highlighted."}
{"_id": "94bde5e6667e56c52b040c1d893205828e9e17af", "title": "Nonsuicidal self-injury as a gateway to suicide in young adults.", "text": "PURPOSE\nTo investigate the extent to which nonsuicidal self-injury (NSSI) contributes to later suicide thoughts and behaviors (STB) independent of shared risk factors.\n\n\nMETHODS\nOne thousand four hundred and sixty-six students at five U.S. colleges participated in a longitudinal study of the relationship between NSSI and suicide. NSSI, suicide history, and common risk/protective factors were assessed annually for three years. Analyses tested the hypotheses that the practice of NSSI prior to STB and suicide behavior (excluding ideation) reduced inhibition to later STB independent of shared risk factors. Analyses also examined factors that predicted subsequent STB among individuals with NSSI history.\n\n\nRESULTS\nHistory of NSSI did significantly predict concurrent or later STB (AOR 2.8, 95%, CI 1.9-4.1) independent of covariates common to both. Among those with prior or concurrent NSSI, risk of STB is predicted by > 20 lifetime NSSI incidents (AOR 3.8, 95% CI, 1.4-10.3) and history of mental health treatment (AOR 2.2, 95% CI, 1.9-4.6). Risk of moving from NSSI to STB is decreased by presence of meaning in life (AOR .6, 95% CI, .5-.7) and reporting parents as confidants (AOR, .3, 95% CI, .1-.9).\n\n\nCONCLUSIONS\nNSSI prior to suicide behavior serves as a \"gateway\" behavior for suicide and may reduce inhibition through habituation to self-injury. Treatments focusing on enhancing perceived meaning in life and building positive relationships with others, particularly parents, may be particularly effective in reducing suicide risk among youth with a history of NSSI."}
{"_id": "d35a8ad8133ebcbf7aa4eafa25f753465b3f9fc0", "title": "An airborne experimental test platform: From theory to flight", "text": "This paper provides an overview of the experimental flight test platform developed by the University of Minnesota Unmanned Aerial Vehicle Research Group. Key components of the current infrastructure are highlighted, including the flight test system, high-fidelity nonlinear simulations, software-and hardware-in-the-loop simulations, and the real-time flight software. Recent flight control research and educational applications are described to showcase the advanced capabilities of the platform. A view towards future expansion of the platform is given in the context of upcoming research projects."}
{"_id": "1a3ecd4307946146371852d6571b89b9436e51fa", "title": "Bias and causal associations in observational research", "text": "Readers of medical literature need to consider two types of validity, internal and external. Internal validity means that the study measured what it set out to; external validity is the ability to generalise from the study to the reader's patients. With respect to internal validity, selection bias, information bias, and confounding are present to some degree in all observational research. Selection bias stems from an absence of comparability between groups being studied. Information bias results from incorrect determination of exposure, outcome, or both. The effect of information bias depends on its type. If information is gathered differently for one group than for another, bias results. By contrast, non-differential misclassification tends to obscure real differences. Confounding is a mixing or blurring of effects: a researcher attempts to relate an exposure to an outcome but actually measures the effect of a third factor (the confounding variable). Confounding can be controlled in several ways: restriction, matching, stratification, and more sophisticated multivariate techniques. If a reader cannot explain away study results on the basis of selection, information, or confounding bias, then chance might be another explanation. Chance should be examined last, however, since these biases can account for highly significant, though bogus results. Differentiation between spurious, indirect, and causal associations can be difficult. Criteria such as temporal sequence, strength and consistency of an association, and evidence of a dose-response effect lend support to a causal link."}
{"_id": "2730606a9d29bb52bcc42124393460503f736d74", "title": "Performance-Effective and Low-Complexity Task Scheduling for Heterogeneous Computing", "text": "\u00d0Efficient application scheduling is critical for achieving high performance in heterogeneous computing environments. The application scheduling problem has been shown to be NP-complete in general cases as well as in several restricted cases. Because of its key importance, this problem has been extensively studied and various algorithms have been proposed in the literature which are mainly for systems with homogeneous processors. Although there are a few algorithms in the literature for heterogeneous processors, they usually require significantly high scheduling costs and they may not deliver good quality schedules with lower costs. In this paper, we present two novel scheduling algorithms for a bounded number of heterogeneous processors with an objective to simultaneously meet high performance and fast scheduling time, which are called the Heterogeneous Earliest-Finish-Time (HEFT) algorithm and the Critical-Path-on-a-Processor (CPOP) algorithm. The HEFT algorithm selects the task with the highest upward rank value at each step and assigns the selected task to the processor, which minimizes its earliest finish time with an insertion-based approach. On the other hand, the CPOP algorithm uses the summation of upward and downward rank values for prioritizing tasks. Another difference is in the processor selection phase, which schedules the critical tasks onto the processor that minimizes the total execution time of the critical tasks. In order to provide a robust and unbiased comparison with the related work, a parametric graph generator was designed to generate weighted directed acyclic graphs with various characteristics. The comparison study, based on both randomly generated graphs and the graphs of some real applications, shows that our scheduling algorithms significantly surpass previous approaches in terms of both quality and cost of schedules, which are mainly presented with schedule length ratio, speedup, frequency of best results, and average scheduling time metrics."}
{"_id": "3b6911dc5d98faeb79d3d3e60bcdc40cfd7c9273", "title": "Aggregate and Verifiably Encrypted Signatures from Bilinear Maps", "text": "An aggregate signature scheme is a digital signature that supports aggregation: Given n signatures on n distinct messages from n distinct users, it is possible to aggregate all these signatures into a single short signature. This single signature (and the n original messages) will convince the verifier that the n users did indeed sign the n original messages (i.e., user i signed message Mi for i = 1, . . . , n). In this paper we introduce the concept of an aggregate signature, present security models for such signatures, and give several applications for aggregate signatures. We construct an efficient aggregate signature from a recent short signature scheme based on bilinear maps due to Boneh, Lynn, and Shacham. Aggregate signatures are useful for reducing the size of certificate chains (by aggregating all signatures in the chain) and for reducing message size in secure routing protocols such as SBGP. We also show that aggregate signatures give rise to verifiably encrypted signatures. Such signatures enable the verifier to test that a given ciphertext C is the encryption of a signature on a given message M . Verifiably encrypted signatures are used in contract-signing protocols. Finally, we show that similar ideas can be used to extend the short signature scheme to give simple ring signatures."}
{"_id": "446961b27f6c14413ae6cc2f78ad7d7c53ede26c", "title": "Pors: proofs of retrievability for large files", "text": "In this paper, we define and explore proofs of retrievability (PORs). A POR scheme enables an archive or back-up service (prover) to produce a concise proof that a user (verifier) can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety.\n A POR may be viewed as a kind of cryptographic proof of knowledge (POK), but one specially designed to handle a large file (or bitstring) F. We explore POR protocols here in which the communication costs, number of memory accesses for the prover, and storage requirements of the user (verifier) are small parameters essentially independent of the length of F. In addition to proposing new, practical POR constructions, we explore implementation considerations and optimizations that bear on previously explored, related schemes.\n In a POR, unlike a POK, neither the prover nor the verifier need actually have knowledge of F. PORs give rise to a new and unusual security definition whose formulation is another contribution of our work.\n We view PORs as an important tool for semi-trusted online archives. Existing cryptographic techniques help users ensure the privacy and integrity of files they retrieve. It is also natural, however, for users to want to verify that archives do not delete or modify files prior to retrieval. The goal of a POR is to accomplish these checks without users having to download the files themselves. A POR can also provide quality-of-service guarantees, i.e., show that a file is retrievable within a certain time bound."}
{"_id": "517f519b8dbc5b00ff8b1f8578b73a871a1a0b73", "title": "The Exact Security of Digital Signatures - HOw to Sign with RSA and Rabin", "text": "We describe an RSA-based signing scheme which combines essentially optimal e ciency with attractive security properties. Signing takes one RSA decryption plus some hashing, veri cation takes one RSA encryption plus some hashing, and the size of the signature is the size of the modulus. Assuming the underlying hash functions are ideal, our schemes are not only provably secure, but are so in a tight way| an ability to forge signatures with a certain amount of computational resources implies the ability to invert RSA (on the same size modulus) with about the same computational e ort. Furthermore, we provide a second scheme which maintains all of the above features and in addition provides message recovery. These ideas extend to provide schemes for Rabin signatures with analogous properties; in particular their security can be tightly related to the hardness of factoring. Department of Computer Science and Engineering, Mail Code 0114, University of California at San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA. E-mail: mihir@cs.ucsd.edu ; Web page: http://www-cse.ucsd.edu/users/mihir y Department of Computer Science, University of California at Davis, Davis, CA 95616, USA. Email: rogaway@cs.ucdavis.edu ; Web page: http://wwwcsif.cs.ucdavis.edu/~rogaway/homepage.html"}
{"_id": "06a1d8fe505a4ee460e24ae3cf2e279e905cc9b0", "title": "A public key cryptosystem and a signature scheme based on discrete logarithms", "text": "A new signature scheme is proposed, together with an implementation of the Diffie-Hellman key distribution scheme that achieves a public key cryptosystem. The security of both systems relies on the difficulty of computing discrete logarithms over finite fields."}
{"_id": "1745e5dbdeb4575c6f8376c9e75e70650a7e2e29", "title": "Proofs of Work and Bread Pudding Protocols", "text": "We formalize the notion of a proof of work. In many cryptographic protocols, a prover seeks to convince a veriier that she possesses knowledge of a secret or that a certain mathematical relation holds true. By contrast, in a proof of work, a prover demonstrates that she has performed a certain amount of computational work in a speciied interval of time. Proofs of work have served as the basis of a number of security protocols in the literature, but have hitherto lacked careful characterization. We also introduce the dependent idea of a bread pudding protocol. Bread pudding is a dish that originated with the purpose of re-using bread that has gone stale 10]. In the same spirit, we deene a bread pudding protocol to be a proof of work such that the computational eeort invested in the proof may also be harvested to achieve a separate, useful, and veriiably correct computation. As an example of a bread pudding protocol, we show how the MicroMint scheme of Rivest and Shamir can be broken up into a collection of proofs of work. These proofs of work can not only serve in their own right as mechanisms for security protocols, but can also be harvested in order to shift the burden of the MicroMint minting operation onto a large group of un-trusted computational devices."}
{"_id": "50c5def463ac0c7282f6093501bd64ec253029b6", "title": "Techniques for Assessing Polygonal Approximations of Curves", "text": "Given the enormous number of available methods for finding po lygonal approximations to curves techniques are required to assess different algorithms. Some of the standard approaches are shown to be unsuitable if the approximations contain varying numbers of lines. Ins tead, we suggest assessing an algorithm\u2019s results relative to an optimal polygon, and describe a measure which combines the relative fidelity and efficiency of a curve segmentation. We use this measure to compare the appl ication of 23 algorithms to a curve first used by Teh and Chin [37]; their ISEs are assessed relative to the o ptimal ISE. In addition, using an example of pose estimation, it is shown how goal-directed evaluation c a be used to select an appropriate assessment criterion."}
{"_id": "cbd1a6764624ae74d7e5c59ddb139d76c019ff19", "title": "A Two-Handed Interface for Object Manipulation in Virtual Environments", "text": "A two-handed direct manipulation VE (virtual environment) interface has been developed as an intuitive manipulation metaphor for graphical objects. A new input device called ChordGloves introduces a simple technique for rapid and repeatable gesture recognition; the Chordgloves emulate a pair of 3-D mice and a keyboard. A drafting table is isomorphically mapped into the VE and provides hand support for 2-D interface techniques, as well as a reference frame for calibrating the mapping between real and virtual worlds. A cursor gravity function is used to grab vertices, edges, or faces and establish precisely aligned differential constraints between objects called anchors. The capability of subjects to translate, rotate, scale, align, and glue objects is tested with a puzzle building task. An approximation of the puzzle task is done in Adobe Illustrator to provide a performance reference. Results and informal user observations as well as topics for future work are presented."}
{"_id": "9b8fcb9464ff6cc7bd8311b52e5fb62394fb43f4", "title": "Gazture: Design and Implementation of a Gaze based Gesture Control System on Tablets", "text": "We present Gazture, a light-weight gaze based real-time gesture control system on commercial tablets. Unlike existing approaches that require dedicated hardware (e.g., high resolution camera), high computation overhead (powerful CPU) or specific user behavior (keeping head steady), Gazture provides gesture recognition based on easy-to-control user gaze input with a small overhead. To achieve this goal, Gazture incorporates a two-layer structure: The first layer focuses on real-time gaze estimation with acceptable tracking accuracy while incurring a small overhead. The second layer implements a robust gesture recognition algorithm while compensating gaze estimation error. To address user posture change while using mobile device, we design a online transfer function based method to convert current eye features into corresponding eye features in reference posture, which then facilitates efficient gaze position estimation. We implement Gazture on Lenovo Tab3 8 Plus tablet with Android 6.0.1, and evaluate its performance in different scenarios. The evaluation results show that Gazture can achieve a high accuracy in gesture recognition while incurring a low overhead."}
{"_id": "7f90a086140f2ca2ff282e3eedcf8c51ee2db674", "title": "Optimizing Factorization Machines for Top-N Context-Aware Recommendations", "text": "Context-aware Collaborative Filtering (CF) techniques such as Factorization Machines (FM) have been proven to yield high precision for rating prediction. However, the goal of recommender systems is often referred to as a top-N item recommendation task, and item ranking is a better formulation for the recommendation problem. In this paper, we present two collaborative rankers, namely, Ranking Factorization Machines (RankingFM) and Lambda Factorization Machines (LambdaFM), which optimize the FM model for the item recommendation task. Specifically, instead of fitting the preference of individual items, we first propose a RankingFM algorithm that applies the cross-entropy loss function to the FM model to estimate the pairwise preference between individual item pairs. Second, by considering the ranking bias in the item recommendation task, we design two effective lambda-motivated learning schemes for RankingFM to optimize desired ranking metrics, referred to as LambdaFM. The two models we propose can work with any types of context, and are capable of estimating latent interactions between the context features under sparsity. Experimental results show its superiority over several state-of-the-art methods on three public CF datasets in terms of two standard ranking metrics."}
{"_id": "726c458266d4fd91551bbe3cac02052d8df8f309", "title": "A Review on the Meaning of Cognitive Cities", "text": "Over the last years, the cognitive city paradigm has emerged as a promising solution to the challenges that megacities of the future will have to face. In this article, we provide a thorough review of the literature on Cognitive Cities. We put in place a clear and strict methodology that allows as to present a carefully curated set of articles that represent the foundations of the current understanding of the concept of cognitive city. Hence, this article is intended to serve as a reference for future studies in the field. We emphasise the ambiguities and overlapping meanings of the cognitive city term depending on the domain of study and the underlying philosophy. Also, we discuss some of the implications that cognitive cities might have on society pillars such as the healthcare sector, and we point out some of the main challenges for the adoption of the concept. Last but not least, we suggest some possible research lines that are to be pursued in the years to come."}
{"_id": "47bb8674715672e0fde3901ba9ccdb5b07a5e4da", "title": "TANDEM-bottleneck feature combination using hierarchical Deep Neural Networks", "text": "To improve speech recognition performance, a combination between TANDEM and bottleneck Deep Neural Networks (DNN) is investigated. In particular, exploiting a feature combination performed by means of a multi-stream hierarchical processing, we show a performance improvement by combining the same input features processed by different neural networks. The experiments are based on the spontaneous telephone recordings of the Cantonese IARPA Babel corpus using both standard MFCCs and Gabor as input features."}
{"_id": "7fb1ec60d6f6862f16ddc449ffb3ca1db97218f1", "title": "Parking lot guidance software based on MQTT Protocol", "text": "To reduce the amount of CO2 produced from the personal cars in the world, the parking lot guidance systems are considered to be the solution in shopping malls or department stores in many countries. However, most of the current parking lot systems are located in the parking area of each shopping mall and they cannot show the parking lots information for the driver that are driving on the road. So, The drivers can see the parking lot information if and only if they arrive at the shopping mall area. This is the fact that the CO2 are still produced to the world although we use the parking lot guidance system. From the reason as mentioned, we propose the parking lots guidance software to share a parking lot information to a large number of clients in real-time based on MQTT Protocol. The proposed software can share the real-time parking lot information to the mobile devices of the drivers in any location or when they are driving on the road. The results show that the proposed software can share the parking lot information in real-time for at least 1,000 sessions simultaneously with the high average score of usability, design and the benefits."}
{"_id": "ad3480a8d72319699c9a9f22cb77951c38cac7c7", "title": "Heritage Recording and 3D Modeling with Photogrammetry and 3D Scanning", "text": "The importance of landscape and heritage recording and documentation with optical remote sensing sensors is well recognized at international level. The continuous development of new sensors, data capture methodologies and multi-resolution 3D representations, contributes significantly to the digital 3D documentation, mapping, conservation and representation of landscapes and heritages and to the growth of research in this field. This article reviews the actual optical 3D measurement sensors and 3D modeling techniques, with their limitations and potentialities, requirements and specifications. Examples of 3D surveying and modeling of heritage sites and objects are also shown throughout the paper."}
{"_id": "5499bdc807ec2d040e0b8b215cb234ccdfe45251", "title": "Design of a haptic arm exoskeleton for training and rehabilitation", "text": "A high-quality haptic interface is typically characterized by low apparent inertia and damping, high structural stiffness, minimal backlash, and absence of mechanical singularities in the workspace. In addition to these specifications, exoskeleton haptic interface design involves consideration of space and weight limitations, workspace requirements, and the kinematic constraints placed on the device by the human arm. These constraints impose conflicting design requirements on the engineer attempting to design an arm exoskeleton. In this paper, the authors present a detailed review of the requirements and constraints that are involved in the design of a high-quality haptic arm exoskeleton. In this context, the design of a five-degree-of-freedom haptic arm exoskeleton for training and rehabilitation in virtual environments is presented. The device is capable of providing kinesthetic feedback to the joints of the lower arm and wrist of the operator, and will be used in future work for robot-assisted rehabilitation and training. Motivation for such applications is based on findings that show robot-assisted physical therapy aids in the rehabilitation process following neurological injuries. As a training tool, the device provides a means to implement flexible, repeatable, and safe training methodologies."}
{"_id": "3efd6b2ab1d96342d48ebda78833420108f25189", "title": "Active learning: theory and applications to automatic speech recognition", "text": "We are interested in the problem of adaptive learning in the context of automatic speech recognition (ASR). In this paper, we propose an active learning algorithm for ASR. Automatic speech recognition systems are trained using human supervision to provide transcriptions of speech utterances. The goal of Active Learning is to minimize the human supervision for training acoustic and language models and to maximize the performance given the transcribed and untranscribed data. Active learning aims at reducing the number of training examples to be labeled by automatically processing the unlabeled examples, and then selecting the most informative ones with respect to a given cost function for a human to label. In this paper we describe how to estimate the confidence score for each utterance through an on-line algorithm using the lattice output of a speech recognizer. The utterance scores are filtered through the informativeness function and an optimal subset of training samples is selected. The active learning algorithm has been applied to both batch and on-line learning scheme and we have experimented with different selective sampling algorithms. Our experiments show that by using active learning the amount of labeled data needed for a given word accuracy can be reduced by more than 60% with respect to random sampling."}
{"_id": "6d4fa4b9037b64b8383331583430711be321c587", "title": "Seeing Stars of Valence and Arousal in Blog Posts", "text": "Sentiment analysis is a growing field of research, driven by both commercial applications and academic interest. In this paper, we explore multiclass classification of diary-like blog posts for the sentiment dimensions of valence and arousal, where the aim of the task is to predict the level of valence and arousal of a post on a ordinal five-level scale, from very negative/low to very positive/high, respectively. We show how to map discrete affective states into ordinal scales in these two dimensions, based on the psychological model of Russell's circumplex model of affect and label a previously available corpus with multidimensional, real-valued annotations. Experimental results using regression and one-versus-all approaches of support vector machine classifiers show that although the latter approach provides better exact ordinal class prediction accuracy, regression techniques tend to make smaller scale errors."}
{"_id": "8ccf4f3e0da04d4cc8650f038f87ef84b74f8543", "title": "Uniport: A Uniform Programming Support Framework for Mobile Cloud Computing", "text": "Personal mobile devices (PMDs) have become the most used computing devices for many people. With the introduction of mobile cloud computing, we can augment the storage and computing capabilities of PMDs via cloud support. However, there are many challenges in developing mobile cloud applications (MCAs) that incorporate cloud computing efficiently, especially for developers targeting multiple mobile platforms. This paper presents Uniport, a uniform framework for developing MCAs. We introduce a uniform architecture for MCAs based on the Model-View-Controller (MVC) pattern and a set of programming primitives and runtime libraries. Not only can Uniport support the creation of new MCAs, it can also help transform existing mobile applications to MCAs efficiently. We demonstrate the applicability and flexibility of Uniport in a case study to transform three existing mobile applications on iOS, Android and Windows Phone, to their mobile cloud versions respectively. Evaluation results show that, with very few modifications, we can easily transform mobile applications to MCAs that can exploit the cloud support to improve performance by 3 - 7x and save more than half of their energy consumption."}
{"_id": "886dd36ddaaacdc7f31509a924b30b45cc79f21d", "title": "StreamOp: An Innovative Middleware for Supporting Data Management and Query Functionalities over Sensor Network Streams Efficiently", "text": "During the last decade, a lot of progress has happened in the field of autonomous embedded devices (also known as motes) and wireless sensor networks. In this context, we envision next-generation systems to have autonomous database management functionalities on each mote, for supporting easy management of data on flash disks, adequate querying capabilities, and ubiquitous computing with no-effort. Following this major vision, in this paper we describe application scenarios, middleware approach and data management algorithms of a novel system, called Stream Operation (StreamOp), which effectively and efficiently realizes the depicted challenges. In particular, StreamOp supports heterogeneity, i.e. It works on top of different platforms, efficiency, i.e. It returns responses quickly, and autonomy, i.e. It saves battery power. We show how StreamOp provides these features, along with some experimental results that clearly validate our main assumptions."}
{"_id": "f286cb201c0d8a36b42d11264aaec4efcafbf2f4", "title": "Pulse width modulation with resonant DC link converters", "text": "A technique for realizing pulse-width modulation capability in resonant DC link converters is presented. The technique eliminates the subharmonics present in the conventional discrete pulse modulated converters without requiring additional switching devices. Detailed design-oriented analyses enumerating the tradeoffs involved are presented. Control techniques for simultaneously accomplishing link control and output control are considered. Simulation results verifying the principle of operation are presented.<>"}
{"_id": "bdba977ca5422a97563973be7e8f46a010cacb37", "title": "From Modeling to Implementation of Virtual Sensors in Body Sensor Networks", "text": "Body Sensor Networks (BSNs) represent an emerging technology which has received much attention recently due to its enormous potential to enable remote, real-time, continuous and non-invasive monitoring of people in health-care, entertainment, fitness, sport, social interaction. Signal processing for BSNs usually comprises of multiple levels of data abstraction, from raw sensor data to data calculated from processing steps such as feature extraction and classification. This paper presents a multi-layer task model based on the concept of Virtual Sensors to improve architecture modularity and design reusability. Virtual Sensors are abstractions of components of BSN systems that include sensor sampling and processing tasks and provide data upon external requests. The Virtual Sensor model implementation relies on SPINE2, an open source domain-specific framework that is designed to support distributed sensing operations and signal processing for wireless sensor networks and enables code reusability, efficiency, and application interoperability. The proposed model is applied in the context of gait analysis through wearable sensors. A gait analysis system is developed according to a SPINE2-based Virtual Sensor architecture and experimentally evaluated. Obtained results confirm that great effectiveness can be achieved in designing and implementing BSN applications through the Virtual Sensor approach while maintaining high efficiency and accuracy."}
{"_id": "66d1a4614941f1121462cefb2e14a39bf72ddb67", "title": "Change in perceived psychosocial status following a 12-week Tai Chi exercise programme.", "text": "AIM\nThis paper reports a study to examine change in psychosocial status following a 12-week Tai Chi exercise intervention among ethnic Chinese people with cardiovascular disease risk factors living in the United States of America.\n\n\nBACKGROUND\nRegular participation in physical activity is associated with protection against cardioavascular disease, and improvements in physical and psychological health. Increasing amounts of scientific evidence suggests that mind-body exercise, such as Tai Chi, are related to improvements in mental health, emotional well-being, and stress reduction. No prior study has examined the effect of a Tai Chi exercise intervention on psychosocial status among people with cardiovascular disease risk factors.\n\n\nMETHODS\nThis was a quasi-experimental study. Participants attended a 60-minute Tai Chi exercise class three times per week for 12 weeks. Data were collected at baseline, 6 and 12 weeks following the intervention. Psychosocial status was assessed using Chinese versions of Cohen's Perceived Stress Scale, Profile of Mood States, Multidimensional Scale of Perceived Social Support, and Tai Chi exercise self-efficacy.\n\n\nRESULTS\nA total of 39 participants, on average 66-year-old (+/-8.3), married (85%), Cantonese-speaking (97%), immigrants participated. The majority were women (69%), with < or =12 years education (87%). Statistically significant improvements in all measures of psychosocial status were found (P < or = 0.05) following the intervention. Improvement in mood state (eta2 = 0.12), and reduction in perceived stress (eta2 = 0.13) were found. In addition, Tai Chi exercise statistically significantly increased self-efficacy to overcome barriers to Tai Chi (eta2 = 0.19), confidence to perform Tai Chi (eta2 = 0.27), and perceived social support (eta2 = 0.12).\n\n\nCONCLUSIONS\nTai Chi was a culturally appropriate mind-body exercise for these older adults, with statistically significant psychosocial benefits observed over 12-weeks. Further research examining Tai Chi exercise using a randomized clinical trial design with an attention-control group may reduce potential confounding effects, while exploring potential mechanisms underlying the relaxation response associated with mind-body exercise. In addition, future studies with people with other chronic illnesses in all ethnic groups are recommended to determine if similar benefits can be achieved."}
{"_id": "ba494dc8ae531604a01120d16294228ea024ba7c", "title": "Resolver Models for Manufacturing", "text": "This paper develops a new mathematical model, for pancake resolvers, that is dependent on a set of variables controlled by a resolver manufacturer-the winding parameters. This model allows a resolver manufacturer to manipulate certain in-process controllable variables in order to readjust already assembled resolvers that, without any action, would be scrapped for the production line. The developed model follows a two-step strategy where, on a first step, a traditional transformer's model computes the resolver nominal conditions and, on a second step, a linear model computes the corrections on the controllable variables, in order to compensate small deviations in design assumptions, caused by the variability of the manufacturing process. An experimental methodology for parameter identification is presented. The linearized model develops a complete new approach to simulate the product characteristics of a pancake resolver from the knowledge of manufacturer controllable variables (winding parameters). This model had been simulated and experimentally tested in a resolver manufacturing plant. The performed tests prove the efficiency of the developed model, stabilizing the product specifications in a dynamic environment with high variability of the production processes. All experiments had been led at the resolver manufacturer Tyco Electronics-E\u0301vora plant."}
{"_id": "60a102ca8091f41b9bc9cb40da598503b24b354a", "title": "On Type-Aware Entity Retrieval", "text": "Today, the practice of returning entities from a knowledge base in response to search queries has become widespread. One of the distinctive characteristics of entities is that they are typed, i.e., assigned to some hierarchically organized type system (type taxonomy). The primary objective of this paper is to gain a better understanding of how entity type information can be utilized in entity retrieval. We perform this investigation in an idealized \"oracle\" setting, assuming that we know the distribution of target types of the relevant entities for a given query. We perform a thorough analysis of three main aspects: (i) the choice of type taxonomy, (ii) the representation of hierarchical type information, and (iii) the combination of type-based and term-based similarity in the retrieval model. Using a standard entity search test collection based on DBpedia, we find that type information proves most useful when using large type taxonomies that provide very specific types. We provide further insights on the extensional coverage of entities and on the utility of target types."}
{"_id": "b242312cb14f43485a5987cf51753090d564734a", "title": "Skintillates: Designing and Creating Epidermal Interactions", "text": "Skintillates is a wearable technology that mimics tattoos - the oldest and most commonly used on-skin displays in human culture. We demonstrate that by fabricating electrical traces and thin electronics on temporary tattoo paper, a wide array of displays and sensors can be created. Just like the traditional temporary tattoos often worn by children and adults alike, Skintillates flex naturally with the user's skin. Our simple fabrication technique also enables users to freely design and print with a full range of colors to create application-specific customized designs. We demonstrate the technical capabilities of Skintillates as sensors and as expressive personal and private displays through a series of application examples. Finally, we detail the results of a set of user studies that highlight the user experience, comfort, durability, acceptability, and application potential for Skintillates."}
{"_id": "b2cdbf3fed73f51e0fb12d4e24376b3e33e66d11", "title": "An Efficient and Trustworthy Resource Sharing Platform for Collaborative Cloud Computing", "text": "Advancements in cloud computing are leading to a promising future for collaborative cloud computing (CCC), where globally-scattered distributed cloud resources belonging to different organizations or individuals (i.e., entities) are collectively used in a cooperative manner to provide services. Due to the autonomous features of entities in CCC, the issues of resource management and reputation management must be jointly addressed in order to ensure the successful deployment of CCC. However, these two issues have typically been addressed separately in previous research efforts, and simply combining the two systems generates double overhead. Also, previous resource and reputation management methods are not sufficiently efficient or effective. By providing a single reputation value for each node, the methods cannot reflect the reputation of a node in providing individual types of resources. By always selecting the highest-reputed nodes, the methods fail to exploit node reputation in resource selection to fully and fairly utilize resources in the system and to meet users' diverse QoS demands. We propose a CCC platform, called Harmony, which integrates resource management and reputation management in a harmonious manner. Harmony incorporates three key innovations: integrated multi-faceted resource/reputation management, multi-QoS-oriented resource selection, and price-assisted resource/reputation control. The trace data we collected from an online trading platform implies the importance of multi-faceted reputation and the drawbacks of highest-reputed node selection. Simulations and trace-driven experiments on the real-world PlanetLab testbed show that Harmony outperforms existing resource management and reputation management systems in terms of QoS, efficiency and effectiveness."}
{"_id": "359748b336d43950537e1c2bf15cd7c3d838a3e2", "title": "Recurrent Neural Word Segmentation with Tag Inference", "text": "In this paper, we present a Long Short-TermMemory (LSTM) based model for the task of Chinese Weibo word segmentation. The model adopts a LSTM layer to capture long-range dependencies in sentence and learn the underlying patterns. In order to infer the optimal tag path, we introduce a transition score matrix for jumping between tags of successive characters. Integrated with some unsupervised features, the performance of the model is further improved. Finally, our model achieves a weighted F1-score of 0.8044 on close track, 0.8298 on the semi-open track."}
{"_id": "9931c6b050e723f5b2a189dd38c81322ac0511de", "title": "From Pose to Activity: Surveying Datasets and Introducing CONVERSE", "text": "We present a review on the current state of publicly available datasets within the human action recognition community; highlighting the revival of pose based methods and recent progress of understanding person-person interaction modeling. We categorize datasets regarding several key properties for usage as a benchmark dataset; including the number of class labels, ground truths provided, and application domain they occupy. We also consider the level of abstraction of each dataset; grouping those that present actions, interactions and higher level semantic activities. The survey identifies key appearance and pose based datasets, noting a tendency for simplistic, emphasized, or scripted action classes that are often readily definable by a stable collection of subaction gestures. There is a clear lack of datasets that provide closely related actions, those that are not implicitly identified via a series of poses and gestures, but rather a dynamic set of interactions. We therefore propose a novel dataset that represents complex conversational interactions between two individuals via 3D pose. 8 pairwise interactions describing 7 separate conversation based scenarios were collected using two Kinect depth sensors. The intention is to provide events that are constructed from numerous primitive actions, interactions and motions, over a period of time; providing a set of subtle action classes that are more representative of the real world, and a challenge to currently developed recognition methodologies. We believe this is among one of the first datasets devoted to conversational interaction classification using 3D pose Preprint submitted to Elsevier October 27, 2015 features and the attributed papers show this task is indeed possible. The full dataset is made publicly available to the research community at [1]."}
{"_id": "8fc874191aec7d356c4d6661360935a49c37002b", "title": "Security in Container-Based Virtualization through vTPM", "text": "Cloud computing is a wide-spread technology that enables the enterprises to provide services to their customers with a lower cost, higher performance, better availability and scalability. However, privacy and security in cloud computing has always been a major challenge to service providers and a concern to its users. Trusted computing has led its way in securing the cloud computing and virtualized environment, during the past decades.\n In this paper, first we study virtualized trusted platform modules and integration of vTPM in hypervisor-based virtualization. Then we propose two architectural solutions for integrating the vTPM in container-based virtualization model."}
{"_id": "9e057a396d8de9c2507884ce69a10a1cd69f4add", "title": "Facebook Use Predicts Declines in Subjective Well-Being in Young Adults", "text": "Over 500 million people interact daily with Facebook. Yet, whether Facebook use influences subjective well-being over time is unknown. We addressed this issue using experience-sampling, the most reliable method for measuring in-vivo behavior and psychological experience. We text-messaged people five times per day for two-weeks to examine how Facebook use influences the two components of subjective well-being: how people feel moment-to-moment and how satisfied they are with their lives. Our results indicate that Facebook use predicts negative shifts on both of these variables over time. The more people used Facebook at one time point, the worse they felt the next time we text-messaged them; the more they used Facebook over two-weeks, the more their life satisfaction levels declined over time. Interacting with other people \"directly\" did not predict these negative outcomes. They were also not moderated by the size of people's Facebook networks, their perceived supportiveness, motivation for using Facebook, gender, loneliness, self-esteem, or depression. On the surface, Facebook provides an invaluable resource for fulfilling the basic human need for social connection. Rather than enhancing well-being, however, these findings suggest that Facebook may undermine it."}
{"_id": "26e6b1675e081a514f4fdc0352d6cb211ba6d9c8", "title": "Relay Attacks on Passive Keyless Entry and Start Systems in Modern Cars", "text": "We demonstrate relay attacks on Passive Keyless Entry and Start (PKES) systems used in modern cars. We build two efficient and inexpensive attack realizations, wired and wireless physical-layer relays, that allow the attacker to enter and start a car by relaying messages between the car and the smart key. Our relays are completely independent of the modulation, protocol, or presence of strong authentication and encryption. We perform an extensive evaluation on 10 car models from 8 manufacturers. Our results show that relaying the signal in one direction only (from the car to the key) is sufficient to perform the attack while the true distance between the key and car remains large (tested up to 50 meters, non line-of-sight). We also show that, with our setup, the smart key can be excited from up to 8 meters. This removes the need for the attacker to get close to the key in order to establish the relay. We further analyze and discuss critical system characteristics. Given the generality of the relay attack and the number of evaluated systems, it is likely that all PKES systems based on similar designs are also vulnerable to the same attack. Finally, we propose immediate mitigation measures that minimize the risk of relay attacks as well as recent solutions that may prevent relay attacks while preserving the convenience of use, for which PKES systems were initially introduced."}
{"_id": "e004b2420d872a33aa3bae8e570d33e5e66e2cac", "title": "Two-factor authentication: too little, too late", "text": "T wo-factor authentication isn't our savior. It won't defend against phishing. It's not going to prevent identity theft. It's not going to secure online accounts from fraudulent transactions. It solves the security problems we had 10 years ago, not the security problems we have today. The problem with passwords is that it is too easy to lose control of them. People give their passwords to other people. People write them down, and other people read them. People send them in email, and that email is intercepted. People use them to log into remote servers, and their communications are eavesdropped on. Passwords are also easy to guess. And once any of that happens, the password no longer works as an authentication token because you can never be sure who is typing in that password. Two-factor authentication mitigates this problem. If your password includes a number that changes every minute, or a unique reply to a random challenge, then it's difficult for someone else to intercept. You can't write down the ever-changing part. An intercepted password won't be usable the next time it's needed. And a two-factor password is more difficult to guess. Sure, someone can always give his password and token to his secretary, but no solution is foolproof. These tokens have been around for at least two decades, but it's only recently that they have received mass-market attention. AOL is rolling them out. Some banks are issuing them to customers , and even more are talking about doing it. It seems that corporations are finally recognizing the fact that passwords don't provide adequate security , and are hoping that two-factor authentication will fix their problems. Unfortunately, the nature of attacks has changed over those two decades. Back then, the threats were all passive: eavesdropping and offline password guessing. Today, the threats are more active: phishing and Trojan horses. Two new active attacks we're starting to see include: Man-in-the-Middle Attack. An attacker puts up a fake bank Web site and entices a user to that Web site. The user types in his password, and the attacker in turn uses it to access the bank's real Web site. Done correctly, the user will never realize that he isn't at the bank's Web site. Then the attacker either disconnects the user and makes any fraudulent transactions he wants, or passes along the user's banking transactions while making his own transactions at the same \u2026"}
{"_id": "67f5f4cd77c63f6c196f5ced2047ae80145d49cb", "title": "The Secure Remote Password Protocol", "text": "This paper presents a new password authentication and key-exchange protocol suitable for authenticating users and exchanging keys over an untrusted network. The new protocol resists dictionary attacks mounted by either passive or active network intruders, allowing, in principle, even weak passphrases to be used safely. It also o ers perfect forward secrecy, which protects past sessions and passwords against future compromises. Finally, user passwords are stored in a form that is not plaintext-equivalent to the password itself, so an attacker who captures the password database cannot use it directly to compromise security and gain immediate access to the host. This new protocol combines techniques of zero-knowledge proofs with asymmetric key exchange protocols and o ers signi cantly improved performance over comparably strong extended methods that resist stolen-veri er attacks such as Augmented EKE or B-SPEKE."}
{"_id": "e9ffbd29836571e25e0b7f5f0469b0a7e24dbf05", "title": "Security and Privacy Challenges in the Smart Grid", "text": "Global electrical grids are verging on the largest technological transformation since the introduction of electricity into the home. The antiquated infrastructure that delivers power to our homes and businesses is being replaced with a collection of digital systems called the smart grid. This grid is the modernization of the existing electrical system that enhances customers' and utilities' ability to monitor, control, and predict energy use."}
{"_id": "9d767943a911949158463a4c217d36372ffce9db", "title": "Real-time techniques for 3D flow visualization", "text": "Visualization of three-dimensional steady flow has to overcome a lot of problems to be effective. Among them are occlusion of distant details, lack of directional and depth hints and occlusion. We present methods which address these problems for real-time graphic representations applicable in virtual environments. We use dashtubes, i.e., animated, opacity-mapped streamlines, as a visualization icon for 3D-flow visualization. We present a texture mapping technique to keep the level of texture detail along a streamline nearly constant even when the velocity of the flow varies considerably. An algorithm is described which distributes the dashtubes evenly in space. We apply magic lenses and magic boxes as interaction techniques for investigating densely filled areas without overwhelming the observer with visual detail. Implementation details of these methods and their integration in our virtual environment conclude the paper."}
{"_id": "9cd3ba0489f2e7bd95216236d1331208f15923c8", "title": "SQL Database Primitives for Decision Tree Classifiers", "text": "Scalable data mining in large databases is one of today's challenges to database technologies. Thus, substantial effort is dedicated to a tight coupling of database and data mining systems leading to database primitives supporting data mining tasks. In order to support a wide range of tasks and to be of general usage these primitives should be rather building blocks than implementations of specific algorithms. In this paper, we describe primitives for building and applying decision tree classifiers. Based on the analysis of available algorithms and previous work in this area we have identified operations which are useful for a number of classification algorithms. We discuss the implementation of these primitives on top of a commercial DBMS and present experimental results demonstrating the performance benefit."}
{"_id": "88486f10d3bb059725e57391f3e10e0eb851463f", "title": "Human Body as Antenna and Its Effect on Human Body Communications", "text": "Human body communication (HBC) is a promising wireless technology that uses the human body as part of the communication channel. HBC operates in the near-field of the high frequency (HF) band and in the lower frequencies of the very high frequency (VHF) band, where the electromagnetic field has the tendency to be confined inside the human body. Electromagnetic interference poses a serious reliability issue in HBC; consequently, it has been given increasing attention in regard to adapting techniques to curtail its degrading effect. Nevertheless, there is a gap in knowledge on the mechanism of HBC interference that is prompted when the human body is exposed to electromagnetic fields as well as the effect of the human body as an antenna on HBC. This paper narrows the gap by introducing the mechanisms of HBC interference caused by electromagnetic field exposure of human body. We derived analytic expressions for induced total axial current in the body and associated fields in the vicinity of the body when an imperfectly conducting cylindrical antenna model of the human body is illuminated by a vertically polarized plane wave within the 1\u2013200 MHz frequency range. Also, fields in the vicinity of the human body model from an on-body HBC transmitter are calculated. Furthermore, conducted electromagnetic interference on externally embedded HBC receivers is also addressed. The results show that the maximum HBC gain near 50 MHz is due to whole-body resonance, and the maximum at 80 MHz is due to the resonance of the arm. Similarly, the results also suggest that the magnitude of induced axial current in the body due to electromagnetic field exposure of human body is higher near 50MHz."}
{"_id": "b93926b0ad28f8fa71cde3a2734d5c84ee5a0fee", "title": "High-fidelity transmission of sensory information by single cerebellar mossy fibre boutons", "text": "Understanding the transmission of sensory information at individual synaptic connections requires knowledge of the properties of presynaptic terminals and their patterns of firing evoked by sensory stimuli. Such information has been difficult to obtain because of the small size and inaccessibility of nerve terminals in the central nervous system. Here we show, by making direct patch-clamp recordings in vivo from cerebellar mossy fibre boutons\u2014the primary source of synaptic input to the cerebellar cortex\u2014that sensory stimulation can produce bursts of spikes in single boutons at very high instantaneous firing frequencies (more than 700\u2009Hz). We show that the mossy fibre\u2013granule cell synapse exhibits high-fidelity transmission at these frequencies, indicating that the rapid burst of excitatory postsynaptic currents underlying the sensory-evoked response of granule cells can be driven by such a presynaptic spike burst. We also demonstrate that a single mossy fibre can trigger action potential bursts in granule cells in vitro when driven with in vivo firing patterns. These findings suggest that the relay from mossy fibre to granule cell can act in a \u2018detonator\u2019 fashion, such that a single presynaptic afferent may be sufficient to transmit the sensory message. This endows the cerebellar mossy fibre system with remarkable sensitivity and high fidelity in the transmission of sensory information."}
{"_id": "e739f103622722c55ecb40151ffc93b88d27b09a", "title": "Spectrum Sensing in Cognitive Radio Networks", "text": "The rising number and capacity requirements of wireless systems bring increasing demand for RF spectrum. Cognitive radio (CR) system is an emerging concept to increase the spectrum efficiency. CR system aims to enable opportunistic usage of the RF bands that are not occupied by their primary licensed users in spectrum overlay approach. In this approach, the major challenge in realizing the full potential of CR systems is to identify the spectrum opportunities in the wide band regime reliably and optimally. In the spectrum underlay approach, CR systems enable dynamic spectrum access by co-existing and transmitting simultaneously with licensed primary users without creating harmful interference to them. In this case, the challenge is to transmit with low power so as not to exceed the tolerable interference level to the primary users. Spectrum sensing and estimation is an integral part of the CR system, which is used to identify the spectrum opportunities in spectrum overlay and to identify the interference power to primary users in spectrum underlay approach. In this chapter, the authors present a comprehensive study of signal detection techniques for spectrum sensing proposed for CR systems. Specifically, they outline the state of the art research results, challenges, and future perspectives of spectrum sensing in CR systems, and also present a comparison of different methods. With this chapter, readers can have a comprehensive insight of signal processing methods of spectrum sensing for cognitive radio networks and the ongoing research and development in this area."}
{"_id": "69d685d0cf85dfe70d87c1548b03961366e83663", "title": "Noncontact Monitoring of Blood Oxygen Saturation Using Camera and Dual-Wavelength Imaging System", "text": "We present a noncontact method to monitor blood oxygen saturation (SpO2). The method uses a CMOS camera with a trigger control to allow recording of photoplethysmography (PPG) signals alternatively at two particular wavelengths, and determines the SpO2 from the measured ratios of the pulsatile to the nonpulsatile components of the PPG signals at these wavelengths. The signal-to-noise ratio (SNR) of the SpO2 value depends on the choice of the wavelengths. We found that the combination of orange (\u03bb = 611 nm) and near infrared (\u03bb = 880 nm) provides the best SNR for the noncontact video-based detection method. This combination is different from that used in traditional contact-based SpO2 measurement since the PPG signal strengths and camera quantum efficiencies at these wavelengths are more amenable to SpO2 measurement using a noncontact method. We also conducted a small pilot study to validate the noncontact method over an SpO2 range of 83%-98%. This study results are consistent with those measured using a reference contact SpO2 device (r = 0.936, p <; 0.001). The presented method is particularly suitable for tracking one's health and wellness at home under free-living conditions, and for those who cannot use traditional contact-based PPG devices."}
{"_id": "89de521e6a64430a31844dd7a9f2f3f8794a0c1d", "title": "Autonomous land vehicle project at CMU", "text": "1 . Introduction This paper provides an overview of the Autonomous Land Vehicle (ALV) Project at CMU. The goal of the CMU ALV Project is to build vision and intelligence for a mobile robot capable of operating in the real world outdoors. We are attacking this on a number of fronts: building appropriate research vehicles, exploiting high. speed experimental computers, and building software for reasoning about the perceived world. Research topics includes: \u2022 Construction of research vehicles \u2022 Perception systems to perceive the natural outdoor scenes by means of multiple sensors including cameras (color, stereo, and motion), sonar sensors, and a 3D range finder \u2022 Path planning for obstacle avoidance Use of topological and terrain map \u2022 System architecture to facilitate the system integration \u2022 Utilization of parallel computer architectures Our current research vehicle is the Terregator built at CMU which is equipped with a sonar ring, a color camera, and the ERIM laser range finder. Its initial task is to follow roads and sidewalks in the park and on the campus, and avoid obstacles such as trees, humans, and traffic cones. 2. Vehicle, Sensors, and Host Computers The primary vehicle of the AMU ALV Project has been the Terregator, designed and built at CMU. The Terregator, short for Terrestrial Navigator, is designed to provide a clean separation between the vehicle itself and its sensor payload. As shown in"}
{"_id": "3bf22713709f58c8a64dd56a69257ceae8532013", "title": "Robust real-time lane and road detection in critical shadow conditions", "text": "This paper presents the vision-based road detection system currently installed onto the MOB-LAB land vehicle. Based on a geometrical transform and on a fast morphological processing, the system is capable to detect road markings even in extremely severe shadow conditions on at and structured roads. The use of a special-purpose massively architecture (PAPRICA) allows to achieve a processing rate of about 17 Hz."}
{"_id": "51c88134a668cdfaccda2fe5f88919ac122bceda", "title": "E-LAMP: integration of innovative ideas for multimedia event detection", "text": "Detecting multimedia events in web videos is an emerging hot research area in the fields of multimedia and computer vision. In this paper, we introduce the core methods and technologies of the framework we developed recently for our Event Labeling through Analytic Media Processing (E-LAMP) system to deal with different aspects of the overall problem of event detection. More specifically, we have developed efficient methods for feature extraction so that we are able to handle large collections of video data with thousands of hours of videos. Second, we represent the extracted raw features in a spatial bag-of-words model with more effective tilings such that the spatial layout information of different features and different events can be better captured, thus the overall detection performance can be improved. Third, different from widely used early and late fusion schemes, a novel algorithm is developed to learn a more robust and discriminative intermediate feature representation from multiple features so that better event models can be built upon it. Finally, to tackle the additional challenge of event detection with only very few positive exemplars, we have developed a novel algorithm which is able to effectively adapt the knowledge learnt from auxiliary sources to assist the event detection. Both our empirical results and the official evaluation results on TRECVID MED\u201911 and MED\u201912 demonstrate the excellent performance of the integration of these ideas."}
{"_id": "a6eeab584f3554f3f1c9a5d4e1c062eaf588bcba", "title": "Lnc2Cancer: a manually curated database of experimentally supported lncRNAs associated with various human cancers", "text": "Lnc2Cancer (http://www.bio-bigdata.net/lnc2cancer) is a manually curated database of cancer-associated long non-coding RNAs (lncRNAs) with experimental support that aims to provide a high-quality and integrated resource for exploring lncRNA deregulation in various human cancers. LncRNAs represent a large category of functional RNA molecules that play a significant role in human cancers. A curated collection and summary of deregulated lncRNAs in cancer is essential to thoroughly understand the mechanisms and functions of lncRNAs. Here, we developed the Lnc2Cancer database, which contains 1057 manually curated associations between 531 lncRNAs and 86 human cancers. Each association includes lncRNA and cancer name, the lncRNA expression pattern, experimental techniques, a brief functional description, the original reference and additional annotation information. Lnc2Cancer provides a user-friendly interface to conveniently browse, retrieve and download data. Lnc2Cancer also offers a submission page for researchers to submit newly validated lncRNA-cancer associations. With the rapidly increasing interest in lncRNAs, Lnc2Cancer will significantly improve our understanding of lncRNA deregulation in cancer and has the potential to be a timely and valuable resource."}
{"_id": "a7a43698ce882e74eee010e3927f6215f4ce8f0b", "title": "On the Latency and Energy Efficiency of Distributed Storage Systems", "text": "The increase in data storage and power consumption at data-centers has made it imperative to design energy efficient distributed storage systems (DSS). The energy efficiency of DSS is strongly influenced not only by the volume of data, frequency of data access and redundancy in data storage, but also by the heterogeneity exhibited by the DSS in these dimensions. To this end, we propose and analyze the energy efficiency of a heterogeneous distributed storage system in which $n$ storage servers (disks) store the data of $R$ distinct classes. Data of class $i$ is encoded using a $(n,k_{i})$ erasure code and the (random) data retrieval requests can also vary across classes. We show that the energy efficiency of such systems is closely related to the average latency and hence motivates us to study the energy efficiency via the lens of average latency. Through this connection, we show that erasure coding serves the dual purpose of reducing latency and increasing energy efficiency. We present a queuing theoretic analysis of the proposed model and establish upper and lower bounds on the average latency for each data class under various scheduling policies. Through extensive simulations, we present qualitative insights which reveal the impact of coding rate, number of servers, service distribution and number of redundant requests on the average latency and energy efficiency of the DSS."}
{"_id": "556803fa8049de309f421a6b6ef27f0cf1cf8c58", "title": "A 1.5 nW, 32.768 kHz XTAL Oscillator Operational From a 0.3 V Supply", "text": "This paper presents an ultra-low power crystal (XTAL) oscillator circuit for generating a 32.768 kHz clock source for real-time clock generation. An inverting amplifier operational from 0.3 V VDD oscillates the XTAL resonator and achieves a power consumption of 2.1 nW. A duty-cycling technique powers down the XTAL amplifier without losing the oscillation and reduces the power consumption to 1.5 nW. The proposed circuit is implemented in 130 nm CMOS with an area of 0.0625 mm2 and achieves a temperature stability of 1.85 ppm/\u00b0C."}
{"_id": "d96d3ba21785887805b9441e0cc167b0f22ca28c", "title": "A multivariate time series clustering approach for crime trends prediction", "text": "In recent past, there is an increased interest in time series clustering research, particularly for finding useful similar trends in multivariate time series in various applied areas such as environmental research, finance, and crime. Clustering multivariate time series has potential for analyzing large volume of crime data at different time points as law enforcement agencies are interested in finding crime trends of various police administration units such as states, districts and police stations so that future occurrences of similar incidents can be overcome. Most of the traditional time series clustering algorithms deals with only univariate time series data and for clustering high dimensional data, it has to be transformed into single dimension using a dimension reduction technique. The conventional time series clustering techniques do not provide desired results for crime data set, since crime data is high dimensional and consists of various crime types with different weight age. In this paper, a novel approach based on dynamic time wrapping and parametric Minkowski model has been proposed to find similar crime trends among various crime sequences of different crime locations and subsequently use this information for future crime trends prediction. Analysis on Indian crime records show that the proposed technique generally outperforms the existing techniques in clustering of such multivariate time series data."}
{"_id": "cb6a74d15e51fba7835edf4a95ec0ec37f7740b0", "title": "Mechanisms linking obesity to insulin resistance and type 2 diabetes", "text": "Obesity is associated with an increased risk of developing insulin resistance and type 2 diabetes. In obese individuals, adipose tissue releases increased amounts of non-esterified fatty acids, glycerol, hormones, pro-inflammatory cytokines and other factors that are involved in the development of insulin resistance. When insulin resistance is accompanied by dysfunction of pancreatic islet \u03b2-cells \u2014 the cells that release insulin \u2014 failure to control blood glucose levels results. Abnormalities in \u03b2-cell function are therefore critical in defining the risk and development of type 2 diabetes. This knowledge is fostering exploration of the molecular and genetic basis of the disease and new approaches to its treatment and prevention."}
{"_id": "97cb3258a85a447a61e3812846f7a6e72ff3c1e1", "title": "Company event popularity for financial markets using Twitter and sentiment analysis", "text": "The growing number of Twitter users makes it a valuable source of information to study what is happening right now. Users often use Twitter to report real-life events. Here we are only interested in following the financial community. This paper focuses on detecting events popularity through sentiment analysis of tweets published by the financial community on the Twitter universe. The detection of events popularity on Twitter makes this a non-trivial task due to noisy content that often are the tweets. This work aims to filter out all the noisy tweets in order to analyze only the tweets that influence the financial market, more specifically the thirty companies that compose the Dow Jones Average. To perform these tasks, in this paper it is proposed a methodology that starts from the financial community of Twitter and then filters the collected tweets, makes the sentiment analysis of the tweets and finally detects the important events in the life of companies. \u00a9 2016 Elsevier Ltd. All rights reserved."}
{"_id": "5cf1659a7cdeb988ff2e5b57fe9eb5bb7e1a1fbd", "title": "Detecting and Estimating Signals in Noisy Cable Structures, I: Neuronal Noise Sources", "text": "In recent theoretical approaches addressing the problem of neural coding, tools from statistical estimation and information theory have been applied to quantify the ability of neurons to transmit information through their spike outputs. These techniques, though fairly general, ignore the specific nature of neuronal processing in terms of its known biophysical properties. However, a systematic study of processing at various stages in a biophysically faithful model of a single neuron can identify the role of each stage in information transfer. Toward this end, we carry out a theoretical analysis of the information loss of a synaptic signal propagating along a linear, one-dimensional, weakly active cable due to neuronal noise sources along the way, using both a signal reconstruction and a signal detection paradigm. Here we begin such an analysis by quantitatively characterizing three sources of membrane noise: (1) thermal noise due to the passive membrane resistance, (2) noise due to stochastic openings and closings of voltage-gated membrane channels (Na+ and K+), and (3) noise due to random, background synaptic activity. Using analytical expressions for the power spectral densities of these noise sources, we compare their magnitudes in the case of a patch of membrane from a cortical pyramidal cell and explore their dependence on different biophysical parameters."}
{"_id": "858997daac40de2e42b4b9b5c06943749a93aaaf", "title": "Preventing Shoulder-Surfing Attack with the Concept of Concealing the Password Objects' Information", "text": "Traditionally, picture-based password systems employ password objects (pictures/icons/symbols) as input during an authentication session, thus making them vulnerable to \"shoulder-surfing\" attack because the visual interface by function is easily observed by others. Recent software-based approaches attempt to minimize this threat by requiring users to enter their passwords indirectly by performing certain mental tasks to derive the indirect password, thus concealing the user's actual password. However, weaknesses in the positioning of distracter and password objects introduce usability and security issues. In this paper, a new method, which conceals information about the password objects as much as possible, is proposed. Besides concealing the password objects and the number of password objects, the proposed method allows both password and distracter objects to be used as the challenge set's input. The correctly entered password appears to be random and can only be derived with the knowledge of the full set of password objects. Therefore, it would be difficult for a shoulder-surfing adversary to identify the user's actual password. Simulation results indicate that the correct input object and its location are random for each challenge set, thus preventing frequency of occurrence analysis attack. User study results show that the proposed method is able to prevent shoulder-surfing attack."}
{"_id": "3fa284afe4d429c0805d95a4bd9564a7be0c8de3", "title": "The Evolution of the Peer-to-Peer File Sharing Industry and the Security Risks for Users", "text": "Peer-to-peer file sharing is a growing security risk for firms and individuals. Users who participate in these networks to share music, pictures, and video are subject to many security risks including inadvertent publishing of private information, exposure to viruses and worms, and the consequences of spyware. In this paper, we examine the peer-to-peer file sharing phenomena, including an overview of the industry, its business models, and evolution. We describe the information security risks users' face including personal identification disclosure and leakage of proprietary business information. We illustrate those risks through honey-pot experiments and discuss how peer-to-peer industry dynamics are contributing to the security problem."}
{"_id": "e21f23f56c95fce3c2c22f7037ea74208b68bd20", "title": "Wide-band CMOS low-noise amplifier exploiting thermal noise canceling", "text": "Known elementary wide-band amplifiers suffer from a fundamental tradeoff between noise figure (NF) and source impedance matching, which limits the NF to values typically above 3 dB. Global negative feedback can be used to break this tradeoff, however, at the price of potential instability. In contrast, this paper presents a feedforward noise-canceling technique, which allows for simultaneous noise and impedance matching, while canceling the noise and distortion contributions of the matching device. This allows for designing wide-band impedance-matching amplifiers with NF well below 3 dB, without suffering from instability issues. An amplifier realized in 0.25-/spl mu/m standard CMOS shows NF values below 2.4 dB over more than one decade of bandwidth (i.e., 150-2000 MHz) and below 2 dB over more than two octaves (i.e., 250-1100 MHz). Furthermore, the total voltage gain is 13.7 dB, the -3-dB bandwidth is from 2 MHz to 1.6 GHz, the IIP2 is +12 dBm, and the IIP3 is 0 dBm. The LNA drains 14 mA from a 2.5-V supply and the die area is 0.3/spl times/0.25 mm/sup 2/."}
{"_id": "6b8af92448180d28997499900ebfe33a473cddb7", "title": "A robust adaptive stochastic gradient method for deep learning", "text": "Stochastic gradient algorithms are the main focus of large-scale optimization problems and led to important successes in the recent advancement of the deep learning algorithms. The convergence of SGD depends on the careful choice of learning rate and the amount of the noise in stochastic estimates of the gradients. In this paper, we propose an adaptive learning rate algorithm, which utilizes stochastic curvature information of the loss function for automatically tuning the learning rates. The information about the element-wise curvature of the loss function is estimated from the local statistics of the stochastic first order gradients. We further propose a new variance reduction technique to speed up the convergence. In our experiments with deep neural networks, we obtained better performance compared to the popular stochastic gradient algorithms.1"}
{"_id": "0f699e2c27f6fffb0c84abb6756fda0ca26db113", "title": "Multiresolution genetic clustering algorithm for texture segmentation", "text": "This work plans to approach the texture segmentation problem by incorporating genetic algorithm and K-means clustering method within a multiresolution structure. As the algorithm descends the multiresolution structure, the coarse segmentation results are propagated down to the lower levels so as to reduce the inherent class\u2013position uncertainty and to improve the segmentation accuracy. The procedure is described as follows. In the first step, a quad-tree structure of multiple resolutions is constructed. Sampling windows of different sizes are utilized to partition the underlying image into blocks at different resolution levels and texture features are extracted from each block. Based on the texture features, a hybrid genetic algorithm is employed to perform the segmentation. While the select and mutate operators of the traditional genetic algorithm are adopted in this work, the crossover operator is replaced with K-means clustering method. In the final step, the boundaries and the segmentation result of the current resolution level are propagated down to the next level to act as contextual constraints and the initial configuration of the next level, respectively. q 2003 Elsevier B.V. All rights reserved."}
{"_id": "2189a35642e8b034f1396403e9890ec30db6db13", "title": "Mathematical modeling of photovoltaic module with Simulink", "text": "This paper presents a unique step-by-step procedure for the simulation of photovoltaic modules with Matlab/ Simulink. One-diode equivalent circuit is employed in order to investigate I-V and P-V characteristics of a typical 36 W solar module. The proposed model is designed with a user-friendly icons and a dialog box like Simulink block libraries."}
{"_id": "b0632c9966294279e03d8cda8c6daa86217f2a1a", "title": "Audio Fingerprinting: Concepts And Applications", "text": "An audio fingerprint is a compact digest derived from perceptually relevant aspects of a recording. Fingerprinting technologies allow the monitoring of audio content without the need of meta-data or watermark embedding. However, additional uses exist for audio fingerprinting and some are reviewed in this article."}
{"_id": "c316c705a2bede33d396447119639eb99faf96c7", "title": "Advantage of CNTFET characteristics over MOSFET to reduce leakage power", "text": "In this paper we compare and justify the advantage of CNTFET devices over MOSFET devices in nanometer regime. Thereafter we have analyzed the effect of chiral vector, and temperature on threshold voltage of CNTFET device. After simulation on HSPICE tool we observed that the high threshold voltage can be achieved at low chiral vector pair. It is also observed that the effect of temperature on threshold voltage of CNTFET is negligibly small. After analysis of channel length variation and their impact on threshold voltage of CNTFET as well as MOSFET devices, we found an anomalous result that the threshold voltage increases with decreasing channel length in CNTFET devices, this is quite contrary to the well known short channel effect. It is observed that at below 10 nm channel length the threshold voltage is increased rapidly in case of CNTFET device whereas in case of MOSFET device the threshold voltage decreases drastically below 10 nm channel length."}
{"_id": "a4fe464f6b41f844b4fc63a62c5787fccc942cef", "title": "Advanced obfuscation techniques for Java bytecode", "text": "There exist several obfuscation tools for preventing Java bytecode from being decompiled. Most of these tools simply scramble the names of the identifiers stored in a bytecode by substituting the identifiers with meaningless names. However, the scrambling technique cannot deter a determined cracker very long. We propose several advanced obfuscation techniques that make Java bytecode impossible to recompile or make the decompiled program difficult to understand and to recompile. The crux of our approach is to over use an identifier. That is, an identifier can denote several entities, such as types, fields, and methods, simultaneously. An additional benefit is that the size of the bytecode is reduced because fewer and shorter identifier names are used. Furthermore, we also propose several techniques to intentionally introduce syntactic and semantic errors into the decompiled program while preserving the original behaviors of the bytecode. Thus, the decompiled program would have to be debugged manually. Although our basic approach is to scramble the identifiers in Java bytecode, the scrambled bytecode produced with our techniques is much harder to crack than that produced with other identifier scrambling techniques. Furthermore, the run-time efficiency of the obfuscated bytecode is also improved because the size of the bytecode becomes smaller after obfuscation. 2002 Elsevier Inc. All rights reserved."}
{"_id": "10d6b12fa07c7c8d6c8c3f42c7f1c061c131d4c5", "title": "Histograms of oriented gradients for human detection", "text": "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds."}
{"_id": "2337ff38e6cfb09e28c0958f07e2090c993ef6e8", "title": "Measuring Invariances in Deep Networks", "text": "For many pattern recognition tasks, the ideal input feature would be invariant to multiple confounding properties (such as illumination and viewing angle, in computer vision applications). Recently, deep architectures trained in an unsupervised manner have been proposed as an automatic method for extracting useful features. However, it is difficult to evaluate the learned features by any means other than using them in a classifier. In this paper, we propose a number of empirical tests that directly measure the degree to which these learned features are invariant to different input transformations. We find that stacked autoencoders learn modestly increasingly invariant features with depth when trained on natural images. We find that convolutional deep belief networks learn substantially more invariant features in each layer. These results further justify the use of \u201cdeep\u201d vs. \u201cshallower\u201d representations, but suggest that mechanisms beyond merely stacking one autoencoder on top of another may be important for achieving invariance. Our evaluation metrics can also be used to evaluate future work in deep learning, and thus help the development of future algorithms."}
{"_id": "31b58ced31f22eab10bd3ee2d9174e7c14c27c01", "title": "80 Million Tiny Images: A Large Data Set for Nonparametric Object and Scene Recognition", "text": "With the advent of the Internet, billions of images are now freely available online and constitute a dense sampling of the visual world. Using a variety of non-parametric methods, we explore this world with the aid of a large dataset of 79,302,017 images collected from the Internet. Motivated by psychophysical results showing the remarkable tolerance of the human visual system to degradations in image resolution, the images in the dataset are stored as 32 x 32 color images. Each image is loosely labeled with one of the 75,062 non-abstract nouns in English, as listed in the Wordnet lexical database. Hence the image database gives a comprehensive coverage of all object categories and scenes. The semantic information from Wordnet can be used in conjunction with nearest-neighbor methods to perform object classification over a range of semantic levels minimizing the effects of labeling noise. For certain classes that are particularly prevalent in the dataset, such as people, we are able to demonstrate a recognition performance comparable to class-specific Viola-Jones style detectors."}
{"_id": "34760b63a2ae964a0b04d1850dc57002f561ddcb", "title": "Decoding by linear programming", "text": "This paper considers a natural error correcting problem with real valued input/output. We wish to recover an input vector f/spl isin/R/sup n/ from corrupted measurements y=Af+e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to recover f exactly from the data y? We prove that under suitable conditions on the coding matrix A, the input f is the unique solution to the /spl lscr//sub 1/-minimization problem (/spl par/x/spl par//sub /spl lscr/1/:=/spl Sigma//sub i/|x/sub i/|) min(g/spl isin/R/sup n/) /spl par/y - Ag/spl par//sub /spl lscr/1/ provided that the support of the vector of errors is not too large, /spl par/e/spl par//sub /spl lscr/0/:=|{i:e/sub i/ /spl ne/ 0}|/spl les//spl rho//spl middot/m for some /spl rho/>0. In short, f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced in this paper improve on our earlier work. Finally, underlying the success of /spl lscr//sub 1/ is a crucial property we call the uniform uncertainty principle that we shall describe in detail."}
{"_id": "27b8c62835890fd31de976c12ccb80704a584ce4", "title": "Credit card fraud detection using Na\u00efve Bayes model based and KNN classifier", "text": "Machine Learning is the technology, in which algorithms which are capable of learning from previous cases and past experiences are designed. It is implemented using various algorithms which reiterate over the same data repeatedly to analyze the pattern of data. The techniques of data mining are no far behind and are widely used to extract data from large databases to discover some patterns making decisions. This paper presents the Na\u00efve Bayes improved K-Nearest Neighbor method (NBKNN) for Fraud Detection of Credit Card. Experimental results illustrate that both classifiers work differently for the same dataset. The purpose is to enhance the accuracy and enhance the flexibility of the algorithm."}
{"_id": "4b605e6a9362485bfe69950432fa1f896e7d19bf", "title": "A Comparison of Human and Automated Face Verification Accuracy on Unconstrained Image Sets", "text": "Automatic face recognition technologies have seen significant improvements in performance due to a combination of advances in deep learning and availability of larger datasets for training deep networks. Since recognizing faces is a task that humans are believed to be very good at, it is only natural to compare the relative performance of automated face recognition and humans when processing fully unconstrained facial imagery. In this work, we expand on previous studies of the recognition accuracy of humans and automated systems by performing several novel analyses utilizing unconstrained face imagery. We examine the impact on performance when human recognizers are presented with varying amounts of imagery per subject, immutable attributes such as gender, and circumstantial attributes such as occlusion, illumination, and pose. Results indicate that humans greatly outperform state of the art automated face recognition algorithms on the challenging IJB-A dataset."}
{"_id": "8cf03252a0231352cfbcb1fb1feee8049da67bb1", "title": "Strip-type Grid Array Antenna with a Two-Layer Rear-Space Structure", "text": "A strip-type grid array antenna is analyzed using the finite-difference time-domain method. The space between the grid array and the ground plane, designated as the rear space, is filled with a dielectric layer and an air layer (two-layer rear-space structure). A VSWR bandwidth of approximately 13% is obtained with a maximum directivity of approximately 18 dB. The grid array exhibits a narrow beam with low side lobes. The cross-polarization component is small (-25 dB), as desired."}
{"_id": "a4d510439644d52701f852d9dd34bbd37f4b8b78", "title": "Using the SLEUTH urban growth model to simulate the impacts of future policy scenarios on urban land use in the Tehran metropolitan area in Iran", "text": "The SLEUTH model, based on the Cellular Automata (CA), can be applied to city development simulation in metropolitan areas. In this study the SLEUTH model was used to model the urban expansion and predict the future possible behavior of the urban growth in Tehran. The fundamental data were five Landsat TM and ETM images of 1988, 1992, 1998, 2001 and 2010. Three scenarios were designed to simulate the spatial pattern. The first scenario assumed historical urbanization mode would persist and the only limitations for development were height and slope. The second one was a compact scenario which makes the growth mostly internal and limited the expansion of suburban areas. The last scenario proposed a polycentric urban structure which let the little patches * Corresponding author. Tel.: +98 912 3572913 E-mail address: shaghayegh.kargozar@yahoo.com"}
{"_id": "f19e6e8a06cba5fc8cf234881419de9193bba9d0", "title": "Confidence Measures for Neural Network Classifiers", "text": "Neural Networks are commonly used in classification and decision tasks. In this paper, we focus on the problem of the local confidence of their results. We review some notions from statistical decision theory that offer an insight on the determination and use of confidence measures for classification with Neural Networks. We then present an overview of the existing confidence measures and finally propose a simple measure which combines the benefits of the probabi-listic interpretation of network outputs and the estimation of the quality of the model by bootstrap error estimation. We discuss empirical results on a real-world application and an artificial problem and show that the simplest measure behaves often better than more sophisticated ones, but may be dangerous under certain situations."}
{"_id": "3d4b4aa5f67e1b2eb97231013c8d2699acdf9ccd", "title": "Using parallel stiffness to achieve improved locomotive efficiency with the Sandia STEPPR robot", "text": "In this paper we introduce STEPPR (Sandia Transmission-Efficient Prototype Promoting Research), a bipedal robot designed to explore efficient bipedal walking. The initial iteration of this robot achieves efficient motions through powerful electromagnetic actuators and highly back-drivable synthetic rope transmissions. We show how the addition of parallel elastic elements at select joints is predicted to provide substantial energetic benefits: reducing cost of transport by 30 to 50 percent. Two joints in particular, hip roll and ankle pitch, reduce dissipated power over three very different gait types: human walking, human-like robot walking, and crouched robot walking. Joint springs based on this analysis are tested and validated experimentally. Finally, this paper concludes with the design of two unique parallel spring mechanisms to be added to the current STEPPR robot in order to provide improved locomotive efficiency."}
{"_id": "ed14d9a452c4a63883df6496b8d2285201a1808b", "title": "The Theory Behind Mp3", "text": "Since the MPEG-1 Layer III encoding technology is nowadays widely used it might be interesting to gain knowledge of how this powerful compression/decompression scheme actually functions. How come the MPEG-1 Layer III is capable of reduc ing the bit rate with a factor of 12 without almost any audible degradation? Would it be fairly easy to implement this encoding algorithm? This paper will answer these questions and give further additional detailed information."}
{"_id": "aeb7a82c61a1733fafa2a36b9bb664ac555e5d86", "title": "Chatbot for IT Security Training: Using Motivational Interviewing to Improve Security Behaviour", "text": "We conduct a pre-study with 25 participants on Mechanical Turk to find out which security behavioural problems are most important for online users. These questions are based on motivational interviewing (MI), an evidence-based treatment methodology that enables to train people about different kinds of behavioural changes. Based on that the chatbot is developed using Artificial Intelligence Markup Language (AIML). The chatbot is trained to speak about three topics: passwords, privacy and secure browsing. These three topics were \u2019most-wanted\u2019 by the users of the pre-study. With the chatbot three training sessions with people are conducted."}
{"_id": "0086ae349537bad560c8755aa2f0ece8f49b95cf", "title": "Walking on Water : Biolocomotion at the Interface", "text": "We consider the hydrodynamics of creatures capable of sustaining themselves on the water surface by means other than flotation. Particular attention is given to classifying water walkers according to their principal means of weight support and lateral propulsion. The various propulsion mechanisms are rationalized through consideration of energetics, hydrodynamic forces applied, or momentum transferred by the driving stroke. We review previous research in this area and suggest directions for future work. Special attention is given to introductory discussions of problems not previously treated in the fluid mechanics literature, with hopes of attracting physicists, applied mathematicians, and engineers to this relatively unexplored area of fluid mechanics. 339 A nn u. R ev . F lu id . M ec h. 2 00 6. 38 :3 39 -3 69 . D ow nl oa de d fr om a rj ou rn al s. an nu al re vi ew s. or g by Y al e U ni ve rs ity S O C IA L S C IE N C E L IB R A R Y o n 12 /1 7/ 05 . F or p er so na l u se o nl y. AR266-FL38-13 ARI 11 November 2005 20:7"}
{"_id": "cab60be93fe203c5b5ebc3d25121639a330dbcb0", "title": "Economic Benefit of Powerful Credit Scoring \u2217", "text": "We study the economic benefits from using credit scoring models. We contribute to the literature by relating the discriminatory power of a credit scoring model to the optimal credit decision. Given the Receiver Operating Characteristic (ROC) curve, we derive a) the profit-maximizing cutoff and b) the pricing curve. Using these two concepts and a mixture thereof, we study a stylized loan market model with banks differing in the quality of their credit scoring model. Even for small quality differences, the variation in profitability among lenders is large and economically significant. We end our analysis by quantifying the impact on profits when information leaks from a competitor\u2019s scoring model into the market. JEL Classification Codes: D40, G21, H81"}
{"_id": "0ab5d73a786d797476e62cd162ebbff357933c13", "title": "Intelligence Without Reason", "text": "Computers and Thought are the two categories that together de ne Arti cial Intelligence as a discipline. It is generally accepted that work in Arti cial Intelligence over the last thirty years has had a strong in uence on aspects of computer architectures. In this paper we also make the converse claim; that the state of computer architecture has been a strong in uence on our models of thought. The Von Neumann model of computation has lead Arti cial Intelligence in particular directions. Intelligence in biological systems is completely di erent. Recent work in behavior-based Arti cial Intelligence has produced new models of intelligence that are much closer in spirit to biological systems. The non-Von Neumann computational models they use share many characteristics with biological computation. Copyright c Massachusetts Institute of Technology, 1991 This report describes research done at the Arti cial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for this research was provided in part by the University Research Initiative under O ce of Naval Research contract N00014{86{K{0685, in part by the Advanced Research Projects Agency under O ce of Naval Research contract N00014{ 85{K{0124, in part by the Hughes Arti cial Intelligence Center, in part by Siemens Corporation, and in part by Mazda Corporation."}
{"_id": "27751edbac24dd0134645a08e101457202816fc2", "title": "Chronic administration of cannabidiol to healthy volunteers and epileptic patients.", "text": "In phase 1 of the study, 3 mg/kg daily of cannabidiol (CBD) was given for 30 days to 8 health human volunteers. Another 8 volunteers received the same number of identical capsules containing glucose as placebo in a double-blind setting. Neurological and physical examinations, blood and urine analysis, ECG and EEG were performed at weekly intervals. In phase 2 of the study, 15 patients suffering from secondary generalized epilepsy with temporal focus were randomly divided into two groups. Each patient received, in a double-blind procedure, 200-300 mg daily of CBD or placebo. The drugs were administered for along as 4 1/2 months. Clinical and laboratory examinations, EEG and ECG were performed at 15- or 30-day intervals. Throughout the experiment the patients continued to take the antiepileptic drugs prescribed before the experiment, although these drugs no longer controlled the signs of the disease. All patients and volunteers tolerated CBD very well and no signs of toxicity or serious side effects were detected on examination. 4 of the 8 CBD subjects remained almost free of convulsive crises throughout the experiment and 3 other patients demonstrated partial improvement in their clinical condition. CBD was ineffective in 1 patient. The clinical condition of 7 placebo patients remained unchanged whereas the condition of 1 patient clearly improved. The potential use of CBD as an antiepileptic drug and its possible potentiating effect on other antiepileptic drugs are discussed."}
{"_id": "8776e7260399e838da02f092d0f550ae1e59f7b5", "title": "Face age estimation using wrinkle patterns", "text": "Face age estimation is a challenging problem due to the variation of craniofacial growth, skin texture, gender and race. With recent growth in face age estimation research, wrinkles received attention from a number of research, as it is generally perceived as aging feature and soft biometric for person identification. In a face image, wrinkle is a discontinuous and arbitrary line pattern that varies in different face regions and subjects. Existing wrinkle detection algorithms and wrinkle-based features are not robust for face age estimation. They are either weakly represented or not validated against the ground truth. The primary aim of this thesis is to develop a robust wrinkle detection method and construct novel wrinkle-based methods for face age estimation. First, Hybrid Hessian Filter (HHF) is proposed to segment the wrinkles using the directional gradient and a ridge-valley Gaussian kernel. Second, Hessian Line Tracking (HLT) is proposed for wrinkle detection by exploring the wrinkle connectivity of surrounding pixels using a cross-sectional profile. Experimental results showed that HLT outperforms other wrinkle detection algorithms with an accuracy of 84% and 79% on the datasets of FORERUS and FORERET while HHF achieves 77% and 49%, respectively. Third, Multi-scale Wrinkle Patterns (MWP) is proposed as a novel feature representation for face age estimation using the wrinkle location, intensity and density. Fourth, Hybrid Aging Patterns (HAP) is proposed as a hybrid pattern for face age estimation using Facial Appearance Model (FAM) and MWP. Fifth, Multi-layer Age Regression (MAR) is proposed as a hierarchical model in complementary of FAM and MWP for face age estimation. For performance assessment of age estimation, four datasets namely FGNET, MORPH, FERET and PAL with different age ranges and sample sizes are used as benchmarks. Results showed that MAR achieves the lowest Mean Absolute Error (MAE) of 3.00 (\u00b14.14) on FERET and HAP scores a comparable MAE of 3.02 (\u00b12.92) as state of the art. In conclusion, wrinkles are important features and the uniqueness of this pattern should be considered in developing a robust model for face age estimation."}
{"_id": "d90552370c1c30f58bb2c3a539a1146a6d217837", "title": "ARTIFICIAL NEURAL NETWORK ARCHITECTURE FOR SOLVING THE DOUBLE DUMMY BRIDGE PROBLEM IN CONTRACT BRIDGE", "text": "Card games are interesting for many reasons besides their connection with gambling. Bridge is being a game of imperfect information, it is a well defined, decision making game. The estimation of the number of tricks to be taken by one pair of bridge players is called Double Dummy Bridge Problem (DDBP). Artificial Neural Networks are Non \u2013 Linear mapping structures based on the function of the human brain. Feed Forward Neural Network is used to solve the DDBP in contract bridge. The learning methodology, supervised learning was used in Back \u2013 Propagation Network (BPN) for training and testing the bridge sample deal. In our study we compared back \u2013 Propagation algorithm and obtained that Resilient Back \u2013 Propagation algorithms by using Hyperbolic Tangent function and Resilient Back \u2013 Propagation algorithm produced better result than the other. Among various neural network architectures, in this study we used four network architectures viz., 26x4, 52, 104 and 52x4 for solving DDBP in contract bridge."}
{"_id": "d37233c0e4cf575177b57b680451a5f0fef6eff3", "title": "Intelligent Parking Management System Based on Image Processing", "text": "This paper aims to present an intelligent system for parking space detection based on image processing technique. The proposed system captures and processes the rounded image drawn at parking lot and produces the information of the empty car parking spaces. In this work, a camera is used as a sensor to take photos to show the occupancy of car parks. The reason why a camera is used is because with an image it can detect the presence of many cars at once. Also, the camera can be easily moved to detect different car parking lots. By having this image, the particular car parks vacant can be known and then the processed information was used to guide a driver to an available car park rather than wasting time to find one. The proposed system has been developed in both software and hardware platform. An automatic parking system is used to make the whole process of parking cars more efficient and less complex for both drivers and administrators."}
{"_id": "f918ff8347a890d75a59894c3488c08619915c6a", "title": "Lens on the Endpoint: Hunting for Malicious Software Through Endpoint Data Analysis", "text": "Organizations are facing an increasing number of criminal threats ranging from opportunistic malware to more advanced targeted attacks. While various security technologies are available to protect organizations\u2019 perimeters, still many breaches lead to undesired consequences such as loss of proprietary information, financial burden, and reputation defacing. Recently, endpoint monitoring agents that inspect system-level activities on user machines started to gain traction and be deployed in the industry as an additional defense layer. Their application, though, in most cases is only for forensic investigation to determine the root cause of an incident. In this paper, we demonstrate how endpoint monitoring can be proactively used for detecting and prioritizing suspicious software modules overlooked by other defenses. Compared to other environments in which host-based detection proved successful, our setting of a large enterprise introduces unique challenges, including the heterogeneous environment (users installing software of their choice), limited ground truth (small number of malicious software available for training), and coarse-grained data collection (strict requirements are imposed on agents\u2019 performance overhead). Through applications of clustering and outlier detection algorithms, we develop techniques to identify modules with known malicious behavior, as well as modules impersonating popular benign applications. We leverage a large number of static, behavioral and contextual features in our algorithms, and new feature weighting methods that are resilient against missing attributes. The large majority of our findings are confirmed as malicious by anti-virus tools and manual investigation by experienced security analysts."}
{"_id": "d8889da0826e8153be7699f4eef4f0ed2e8a7ea7", "title": "Distributed heterogeneous event processing: enhancing scalability and interoperability of CEP in an industrial context", "text": "Although a significant amount of research has investigated the benefits of distributed CEP in terms of scalability and extensibility, there is an ongoing reluctance in deploying distributed CEP in an industrial context. In this paper we present the DHEP system developed together with the IBM\u00ae laboratory in B\u00f6blingen. It addresses some of the key problems in increasing the acceptance of distributed CEP, for example supporting interoperability between heterogeneous event processing systems. We present the concepts behind the DHEP system and show how those concepts help to achieve scalable and extensible event processing in an industrial context. Moreover, we verify in an evaluation study that the additional cost imposed by the DHEP system is moderate and 'affordable' for the benefits provided."}
{"_id": "4a5be26509557f0a1a911e639868bfe9d002d664", "title": "An Analysis of the Manufacturing Messaging Specification Protocol", "text": "The Manufacturing Messaging Specification (MMS) protocol is widely used in industrial process control applications, but it is poorly documented. In this paper we present an analysis of the MMS protocol in order to improve understanding of MMS in the context of information security. Our findings show that MMS has insufficient security mechanisms, and the meagre security mechanisms that are available are not implemented in commercially available industrial devices."}
{"_id": "3ffa280bbba607ba3dbc0f6adcc557c85453016c", "title": "Non-iterative, feature-preserving mesh smoothing", "text": "With the increasing use of geometry scanners to create 3D models, there is a rising need for fast and robust mesh smoothing to remove inevitable noise in the measurements. While most previous work has favored diffusion-based iterative techniques for feature-preserving smoothing, we propose a radically different approach, based on robust statistics and local first-order predictors of the surface. The robustness of our local estimates allows us to derive a non-iterative feature-preserving filtering technique applicable to arbitrary \"triangle soups\". We demonstrate its simplicity of implementation and its efficiency, which make it an excellent solution for smoothing large, noisy, and non-manifold meshes."}
{"_id": "e1781105aaeb66f6261a99bdee13ae410ea4495d", "title": "Robust real-time underwater digital video streaming using optical communication", "text": "We present a real-time video delivery solution based on free-space optical communication for underwater applications. This solution comprises of AquaOptical II, a high-bandwidth wireless optical communication device, and a two-layer digital encoding scheme designed for error-resistant communication of high resolution images. Our system can transmit digital video reliably through a unidirectional underwater channel, with minimal infrastructural overhead. We present empirical evaluation of this system's performance for various system configurations, and demonstrate that it can deliver high quality video at up to 15 Hz, with near-negligible communication latencies of 100 ms. We further characterize the corresponding end-to-end latencies, i.e. from time of image acquisition until time of display, and reveal optimized results of under 200 ms, which facilitates a wide range of applications such as underwater robot tele-operation and interactive remote seabed monitoring."}
{"_id": "0c5f142ff723d9a1f8e3b7ad840e2aaee5213605", "title": "Development of Highly Secured Cloud Rendered Log Management System", "text": "A log is a collection of record of events that occurs within an organization containing systems and network. These logs are very important for any organization, because log file will able to record all user activities. As this log files plays vital role and also it contains sensitive information , it should be maintained highly secure. So, management and securely maintenance of log records are very tedious task. However, deploying such a system for high security and privacy of log records is an overhead for an organization and also it requires additional cost. Many techniques have been design so far for security of log records. The alternative solution is to maintaining log records over a cloud database. Log files over cloud environment leads to challenges about privacy, confidentiality and integrity of log files. In this paper, we propose highly secured cloud rendered log management and also use of some cryptographic algorithms for dealing the issues to access a cloud based data storage. To the best of knowledge, this is the strong work to provide a complete solution to the cloud based secure log management problem."}
{"_id": "9c6eb7ec2de5779baa8ceb782a1c8fe7affeaf70", "title": "Inside help: An integrative review of champions in healthcare-related implementation", "text": "Background/aims\nThe idea that champions are crucial to effective healthcare-related implementation has gained broad acceptance; yet the champion construct has been hampered by inconsistent use across the published literature. This integrative review sought to establish the current state of the literature on champions in healthcare settings and bring greater clarity to this important construct.\n\n\nMethods\nThis integrative review was limited to research articles in peer-reviewed, English-language journals published from 1980 to 2016. Searches were conducted on the online MEDLINE database via OVID and PubMed using the keyword \"champion.\" Several additional terms often describe champions and were also included as keywords: implementation leader, opinion leader, facilitator, and change agent. Bibliographies of full-text articles that met inclusion criteria were reviewed for additional references not yet identified via the main strategy of conducting keyword searches in MEDLINE. A five-member team abstracted all full-text articles meeting inclusion criteria.\n\n\nResults\nThe final dataset for the integrative review consisted of 199 unique articles. Use of the term champion varied widely across the articles with respect to topic, specific job positions, or broader organizational roles. The most common method for operationalizing champion for purposes of analysis was the use of a dichotomous variable designating champion presence or absence. Four studies randomly allocated of the presence or absence of champions.\n\n\nConclusions\nThe number of published champion-related articles has markedly increased: more articles were published during the last two years of this review (i.e. 2015-2016) than during its first 30\u2009years (i.e. 1980-2009).The number of champion-related articles has continued to increase sharply since the year 2000. Individual studies consistently found that champions were important positive influences on implementation effectiveness. Although few in number, the randomized trials of champions that have been conducted demonstrate the feasibility of using experimental design to study the effects of champions in healthcare."}
{"_id": "15a2ef5fac225c864759b28913b313908401043f", "title": "Common criteria compliant software development (CC-CASD)", "text": "In order to gain their customers' trust, software vendors can certify their products according to security standards, e.g., the Common Criteria (ISO 15408). However, a Common Criteria certification requires a comprehensible documentation of the software product. The creation of this documentation results in high costs in terms of time and money.\n We propose a software development process that supports the creation of the required documentation for a Common Criteria certification. Hence, we do not need to create the documentation after the software is built. Furthermore, we propose to use an enhanced version of the requirements-driven software engineering process called ADIT to discover possible problems with the establishment of Common Criteria documents. We aim to detect these issues before the certification process. Thus, we avoid expensive delays of the certification effort. ADIT provides a seamless development approach that allows consistency checks between different kinds of UML models. ADIT also supports traceability from security requirements to design documents. We illustrate our approach with the development of a smart metering gateway system."}
{"_id": "21968ae000669eb4cf03718a0d97e23a6bf75926", "title": "Learning influence probabilities in social networks", "text": "Recently, there has been tremendous interest in the phenomenon of influence propagation in social networks. The studies in this area assume they have as input to their problems a social graph with edges labeled with probabilities of influence between users. However, the question of where these probabilities come from or how they can be computed from real social network data has been largely ignored until now. Thus it is interesting to ask whether from a social graph and a log of actions by its users, one can build models of influence. This is the main problem attacked in this paper. In addition to proposing models and algorithms for learning the model parameters and for testing the learned models to make predictions, we also develop techniques for predicting the time by which a user may be expected to perform an action. We validate our ideas and techniques using the Flickr data set consisting of a social graph with 1.3M nodes, 40M edges, and an action log consisting of 35M tuples referring to 300K distinct actions. Beyond showing that there is genuine influence happening in a real social network, we show that our techniques have excellent prediction performance."}
{"_id": "24986435c73066aaea0b21066db4539270106bee", "title": "Novelty and redundancy detection in adaptive filtering", "text": "This paper addresses the problem of extending an adaptive information filtering system to make decisions about the novelty and redundancy of relevant documents. It argues that relevance and redundance should each be modelled explicitly and separately. A set of five redundancy measures are proposed and evaluated in experiments with and without redundancy thresholds. The experimental results demonstrate that the cosine similarity metric and a redundancy measure based on a mixture of language models are both effective for identifying redundant documents."}
{"_id": "39282ff070f62ceeaa6495815098cbac8411101f", "title": "Collaborative location and activity recommendations with GPS history data", "text": "With the increasing popularity of location-based services, such as tour guide and location-based social network, we now have accumulated many location data on the Web. In this paper, we show that, by using the location data based on GPS and users' comments at various locations, we can discover interesting locations and possible activities that can be performed there for recommendations. Our research is highlighted in the following location-related queries in our daily life: 1) if we want to do something such as sightseeing or food-hunting in a large city such as Beijing, where should we go? 2) If we have already visited some places such as the Bird's Nest building in Beijing's Olympic park, what else can we do there? By using our system, for the first question, we can recommend her to visit a list of interesting locations such as Tiananmen Square, Bird's Nest, etc. For the second question, if the user visits Bird's Nest, we can recommend her to not only do sightseeing but also to experience its outdoor exercise facilities or try some nice food nearby. To achieve this goal, we first model the users' location and activity histories that we take as input. We then mine knowledge, such as the location features and activity-activity correlations from the geographical databases and the Web, to gather additional inputs. Finally, we apply a collective matrix factorization method to mine interesting locations and activities, and use them to recommend to the users where they can visit if they want to perform some specific activities and what they can do if they visit some specific places. We empirically evaluated our system using a large GPS dataset collected by 162 users over a period of 2.5 years in the real-world. We extensively evaluated our system and showed that our system can outperform several state-of-the-art baselines."}
{"_id": "03f9b5389df52f42cabcf0c4a9ac6e10ff6d4395", "title": "A mobile application framework for the geospatial web", "text": "In this paper we present an application framework that leverages geospatial content on the World Wide Web by enabling innovative modes of interaction and novel types of user interfaces on advanced mobile phones and PDAs. We discuss the current development steps involved in building mobile geospatial Web applications and derive three technological pre-requisites for our framework: spatial query operations based on visibility and field of view, a 2.5D environment model, and a presentationindependent data exchange format for geospatial query results. We propose the Local Visibility Model as a suitable XML-based candidate and present a prototype implementation."}
{"_id": "08a8c653b4f20f2b63ac6734f24fa5f5f819782a", "title": "Mining interesting locations and travel sequences from GPS trajectories", "text": "The increasing availability of GPS-enabled devices is changing the way people interact with the Web, and brings us a large amount of GPS trajectories representing people's location histories. In this paper, based on multiple users' GPS trajectories, we aim to mine interesting locations and classical travel sequences in a given geospatial region. Here, interesting locations mean the culturally important places, such as Tiananmen Square in Beijing, and frequented public areas, like shopping malls and restaurants, etc. Such information can help users understand surrounding locations, and would enable travel recommendation. In this work, we first model multiple individuals' location histories with a tree-based hierarchical graph (TBHG). Second, based on the TBHG, we propose a HITS (Hypertext Induced Topic Search)-based inference model, which regards an individual's access on a location as a directed link from the user to that location. This model infers the interest of a location by taking into account the following three factors. 1) The interest of a location depends on not only the number of users visiting this location but also these users' travel experiences. 2) Users' travel experiences and location interests have a mutual reinforcement relationship. 3) The interest of a location and the travel experience of a user are relative values and are region-related. Third, we mine the classical travel sequences among locations considering the interests of these locations and users' travel experiences. We evaluated our system using a large GPS dataset collected by 107 users over a period of one year in the real world. As a result, our HITS-based inference model outperformed baseline approaches like rank-by-count and rank-by-frequency. Meanwhile, when considering the users' travel experiences and location interests, we achieved a better performance beyond baselines, such as rank-by-count and rank-by-interest, etc."}
{"_id": "274ed239919074cbe96b1d9d357bb930788e6668", "title": "A Framework for Evaluating Intrusion Detection Architectures in Advanced Metering Infrastructures", "text": "The scale and complexity of Advanced Metering Infrastructure (AMI) networks requires careful planning for the deployment of security solutions. In particular, the large number of AMI devices and the volume and diversity of communication expected to take place on the various AMI networks make the role of intrusion detection systems (IDSes) critical. Understanding the trade-offs for a scalable and comprehensive IDS is key to investing in the right technology and deploying sensors at optimal locations. This paper reviews the benefits and costs associated with different IDS deployment options, including either centralized or distributed solution. A general cost-model framework is proposed to help utilities (AMI asset owners) make more informed decisions when selecting IDS deployment architectures and managing their security investments. We illustrate how the framework can be applied through case studies, and highlight the interesting cost/benefit trade-offs that emerge."}
{"_id": "5a26c0ad196cd5e0a3fcd3d3a306c3ce545977ae", "title": "Interactive volumetric fog display", "text": "Traditional fog screens are 2D. We propose a new design of fog screen that can generate fog at different depth positions and at only where necessary. Our approach applies projection mapping onto a non-planar and reconfigurable fog screen, thus enabling interactive visual contents to be displayed at multiple depth levels. Viewers can perceive the three-dimensionality naturally, and interact with the unencumbered images by touching them directly in mid-air. The display can be used in mixed reality settings where physical objects co-exist and interact with the 3D imagery in physical space."}
{"_id": "86c00866dc78aaadf255e7a94cc4dd7219245167", "title": "Adversarial Reinforcement Learning in a Cyber Security Simulation", "text": "This paper focuses on cyber-security simulations in networks modeled as a Markov game with incomplete information and stochastic elements. The resulting game is an adversarial sequential decision making problem played with two agents, the attacker and defender. The two agents pit one reinforcement learning technique, like neural networks, Monte Carlo learning and Q-learning, against each other and examine their effectiveness against learning opponents. The results showed that Monte Carlo learning with the Softmax exploration strategy is most effective in performing the defender role and also for learning attacking strategies."}
{"_id": "d33d0140d8e36e23dfa88dfc40b837af44709533", "title": "SALMA: Standard Arabic Language Morphological Analysis", "text": "Morphological analyzers are preprocessors for text analysis. Many Text Analytics applications need them to perform their tasks. This paper reviews the SALMA-Tools (Standard Arabic Language Morphological Analysis) [1]. The SALMA-Tools is a collection of open-source standards, tools and resources that widen the scope of Arabic word structure analysis - particularly morphological analysis, to process Arabic text corpora of different domains, formats and genres, of both vowelized and non-vowelized text. Tag-assignment is significantly more complex for Arabic than for many languages. The morphological analyzer should add the appropriate linguistic information to each part or morpheme of the word (proclitic, prefix, stem, suffix and enclitic); in effect, instead of a tag for a word, we need a subtag for each part. Very fine-grained distinctions may cause problems for automatic morphosyntactic analysis - particularly probabilistic taggers which require training data, if some words can change grammatical tag depending on function and context; on the other hand, fine-grained distinctions may actually help to disambiguate other words in the local context. The SALMA - Tagger is a fine grained morphological analyzer which is mainly depends on linguistic information extracted from traditional Arabic grammar books and prior-knowledge broad-coverage lexical resources; the SALMA - ABCLexicon. More fine-grained tag sets may be more appropriate for some tasks. The SALMA - Tag Set is a standard tag set for encoding, which captures long-established traditional fine-grained morphological features of Arabic, in a notation format intended to be compact yet transparent."}
{"_id": "3caedff0a82950046730bce6f8d85aec46cf2e8c", "title": "Empirical evidence in global software engineering: a systematic review", "text": "Recognized as one of the trends of the 21st century, globalization of the world economies brought significant changes to nearly all industries, and in particular it includes software development. Many companies started global software engineering (GSE) to benefit from cheaper, faster and better development of software systems, products and services. However, empirical studies indicate that achieving these benefits is not an easy task. Here, we report our findings from investigating empirical evidence in GSE-related research literature. By conducting a systematic review we observe that the GSE field is still immature. The amount of empirical studies is relatively small. The majority of the studies represent problem-oriented reports focusing on different aspects of GSE management rather than in-depth analysis of solutions for example in terms of useful practices or techniques. Companies are still driven by cost reduction strategies, and at the same time, the most frequently discussed recommendations indicate a necessity of investments in travelling and socialization. Thus, at the same time as development goes global there is an ambition to minimize geographical, temporal and cultural separation. These are normally integral parts of cross-border collaboration. In summary, the systematic review results in several descriptive classifications of the papers on empirical studies in GSE and also reports on some best practices identified from literature."}
{"_id": "4f2eb8902bbea3111b1ec2a974eab31e92bb1435", "title": "Fundamental limits of RSS fingerprinting based indoor localization", "text": "Indoor localization has been an active research field for decades, where the received signal strength (RSS) fingerprinting based methodology is widely adopted and induces many important localization techniques such as the recently proposed one building the fingerprint database with crowd-sourcing. While efforts have been dedicated to improve the accuracy and efficiency of localization, the fundamental limits of RSS fingerprinting based methodology itself is still unknown in a theoretical perspective. In this paper, we present a general probabilistic model to shed light on a fundamental question: how good the RSS fingerprinting based indoor localization can achieve? Concretely, we present the probability that a user can be localized in a region with certain size, given the RSS fingerprints submitted to the system. We reveal the interaction among the localization accuracy, the reliability of location estimation and the number of measurements in the RSS fingerprinting based location determination. Moreover, we present the optimal fingerprints reporting strategy that can achieve the best accuracy for given reliability and the number of measurements, which provides a design guideline for the RSS fingerprinting based indoor localization facilitated by crowdsourcing paradigm."}
{"_id": "3d88c7e728e94278a2f3ef654818a5e93d937743", "title": "Human ES cell-derived neural rosettes reveal a functionally distinct early neural stem cell stage.", "text": "Neural stem cells (NSCs) yield both neuronal and glial progeny, but their differentiation potential toward multiple region-specific neuron types remains remarkably poor. In contrast, embryonic stem cell (ESC) progeny readily yield region-specific neuronal fates in response to appropriate developmental signals. Here we demonstrate prospective and clonal isolation of neural rosette cells (termed R-NSCs), a novel NSC type with broad differentiation potential toward CNS and PNS fates and capable of in vivo engraftment. R-NSCs can be derived from human and mouse ESCs or from neural plate stage embryos. While R-NSCs express markers classically associated with NSC fate, we identified a set of genes that specifically mark the R-NSC state. Maintenance of R-NSCs is promoted by activation of SHH and Notch pathways. In the absence of these signals, R-NSCs rapidly lose rosette organization and progress to a more restricted NSC stage. We propose that R-NSCs represent the first characterized NSC stage capable of responding to patterning cues that direct differentiation toward region-specific neuronal fates. In addition, the R-NSC-specific genetic markers presented here offer new tools for harnessing the differentiation potential of human ESCs."}
{"_id": "01ef8a09ffa6ab5756a74b5aee1f8563d95d6e86", "title": "Material classification and semantic segmentation of railway track images with deep convolutional neural networks", "text": "The condition of railway tracks needs to be periodically monitored to ensure passenger safety. Cameras mounted on a moving vehicle such as a hi-rail vehicle or a geometry inspection car can generate large volumes of high resolution images. Extracting accurate information from those images has been challenging due to background clutter in railroad environments. In this paper, we describe a novel approach to visual track inspection using material classification and semantic segmentation with Deep Convolutional Neural Networks (DCNN). We show that DCNNs trained end-to-end for material classification are more accurate than shallow learning machines with hand-engineered features and are more robust to noise. Our approach results in a material classification accuracy of 93.35% using 10 classes of materials. This allows for the detection of crumbling and chipped tie conditions at detection rates of 86.06% and 92.11%, respectively, at a false positive rate of 10 FP/mile on the 85-mile Northeast Corridor (NEC) 2012-2013 concrete tie dataset."}
{"_id": "b1e57ea60b291ddfe6b97f28f5b73a76f3e65bc3", "title": "A conversational intelligent tutoring system to automatically predict learning styles", "text": "This paper proposes a generic methodology and architecture for developing a novel conversational intelligent tutoring system (CITS) called Oscar that leads a tutoring conversation and dynamically predicts and adapts to a student\u2019s learning style. Oscar aims to mimic a human tutor by implicitly modelling the learning style during tutoring, and personalising the tutorial to boost confidence and improve the effectiveness of the learning experience. Learners can intuitively explore and discuss topics in natural language, helping to establish a deeper understanding of the topic. The Oscar CITS methodology and architecture are independent of the learning styles model and tutoring subject domain. Oscar CITS was implemented using the Index of Learning Styles (ILS) model (Felder & Silverman 1988) to deliver an SQL tutorial. Empirical studies involving real students have validated the prediction of learning styles in a real-world teaching/learning environment. The results showed that all learning styles in the ILS model were successfully predicted from a natural language tutoring conversation, with an accuracy of 61-100%. Participants also found Oscar\u2019s tutoring helpful and achieved an average learning gain of 13%."}
{"_id": "96de937b4bb3dfacdf5f6b5ed653994fafbc1aed", "title": "Magic Quadrant for Data Quality Tools", "text": "Extensive data on functional capabilities, customer base demographics, financial status, pricing and other quantitative attributes gained via a requestfor-information process engaging vendors in this market. Interactive briefings in which vendors provided Gartner with updates on their strategy, market positioning, recent key developments and product road map. A Web-based survey of reference customers provided by each vendor, which captured data on usage patterns, levels of satisfaction with major product functionality categories, various nontechnology-related vendor attributes (such as pricing, product support and overall service delivery), and more. In total, 333 organizations across all major world regions provided input on their experiences with vendors and tools in this manner. Feedback about tools and vendors captured during conversations with users of Gartner's client inquiry service. Market share and revenue growth estimates developed by Gartner's Technology and Service Provider research unit."}
{"_id": "366d2c9a55c13654bcb235c66cf79163999c60b9", "title": "A study of near-field direct antenna modulation systems using convex optimization", "text": "This paper studies the constellation diagram design for a class of communication systems known as near-field direct antenna modulation (NFDAM) systems. The modulation is carried out in a NFDAM system by means of a control unit that switches among a number of pre-designed passive controllers such that each controller generates a desired voltage signal at the far field. To find an optimal number of signals that can be transmitted and demodulated reliably in a NFDAM system, the coverage area of the signal at the far field should be identified. It is shown that this coverage area is a planar convex region in general and simply a circle in the case when no constraints are imposed on the input impedance of the antenna and the voltage received at the far field. A convex optimization method is then proposed to find a polygon that is able to approximate the coverage area of the signal constellation diagram satisfactorily. A similar analysis is provided for the identification of the coverage area of the antenna input impedance, which is beneficial for designing an energy-efficient NFDAM system."}
{"_id": "0eff3eb68ae892012f0d478444f8bb6f50361be5", "title": "BFS and Coloring-Based Parallel Algorithms for Strongly Connected Components and Related Problems", "text": "Finding the strongly connected components (SCCs) of a directed graph is a fundamental graph-theoretic problem. Tarjan's algorithm is an efficient serial algorithm to find SCCs, but relies on the hard-to-parallelize depth-first search (DFS). We observe that implementations of several parallel SCC detection algorithms show poor parallel performance on modern multicore platforms and large-scale networks. This paper introduces the Multistep method, a new approach that avoids work inefficiencies seen in prior SCC approaches. It does not rely on DFS, but instead uses a combination of breadth-first search (BFS) and a parallel graph coloring routine. We show that the Multistep method scales well on several real-world graphs, with performance fairly independent of topological properties such as the size of the largest SCC and the total number of SCCs. On a 16-core Intel Xeon platform, our algorithm achieves a 20X speedup over the serial approach on a 2 billion edge graph, fully decomposing it in under two seconds. For our collection of test networks, we observe that the Multistep method is 1.92X faster (mean speedup) than the state-of-the-art Hong et al. SCC method. In addition, we modify the Multistep method to find connected and weakly connected components, as well as introduce a novel algorithm for determining articulation vertices of biconnected components. These approaches all utilize the same underlying BFS and coloring routines."}
{"_id": "90e3ec000125d579ec1724781410d4201be6d2a8", "title": "Evaluation of communication technologies for IEC 61850 based distribution automation system with distributed energy resources", "text": "This paper presents the study of different communication systems between IEC 61850 based distribution substation and distributed energy resources (DERs). Communication networks have been simulated for a typical distribution automation system (DAS) with DERs using OPNET software. The simulation study shows the performance of wired and wireless communication systems for different messages, such as GOOSE and measured (metered) values between DAS and DERs. A laboratory set-up has been implemented using commercial relay and communication devices for evaluating the performance of GOOSE messages, using wired and wireless physical medium. Finally, simulation and laboratory results are discussed in detail."}
{"_id": "3c8ffc499c5748f28203b40e44da2d9142d8d396", "title": "A clinical study of Noonan syndrome.", "text": "Clinical details are presented on 151 individuals with Noonan syndrome (83 males and 68 females, mean age 12.6 years). Polyhydramnios complicated 33% of affected pregnancies. The commonest cardiac lesions were pulmonary stenosis (62%), and hypertrophic cardiomyopathy (20%), with a normal echocardiogram present in only 12.5% of all cases. Significant feeding difficulties during infancy were present in 76% of the group. Although the children were short (50% with a height less than 3rd centile), and underweight (43% with a weight less than 3rd centile), the mean head circumference of the group was on the 50th centile. Motor milestone delay was usual, the cohort having a mean age of sitting unsupported of 10 months and walking of 21 months. Abnormal vision (94%) and hearing (40%) were frequent findings, but 89% of the group were attending normal primary or secondary schools. Other associations included undescended testicles (77%), hepatosplenomegaly (50%), and evidence of abnormal bleeding (56%). The mean age at diagnosis of Noonan syndrome in this group was 9.0 years. Earlier diagnosis of this common condition would aid both clinical management and genetic counselling."}
{"_id": "ca45e17cf41cf1fd0aa7c9536f0a27bc0f4d3b33", "title": "Superneurons: dynamic GPU memory management for training deep neural networks", "text": "Going deeper and wider in neural architectures improves their accuracy, while the limited GPU DRAM places an undesired restriction on the network design domain. Deep Learning (DL) practitioners either need to change to less desired network architectures, or nontrivially dissect a network across multiGPUs. These distract DL practitioners from concentrating on their original machine learning tasks. We present SuperNeurons: a dynamic GPU memory scheduling runtime to enable the network training far beyond the GPU DRAM capacity. SuperNeurons features 3 memory optimizations, Liveness Analysis, Unified Tensor Pool, and Cost-Aware Recomputation; together they effectively reduce the network-wide peak memory usage down to the maximal memory usage among layers. We also address the performance issues in these memory-saving techniques. Given the limited GPU DRAM, SuperNeurons not only provisions the necessary memory for the training, but also dynamically allocates the memory for convolution workspaces to achieve the high performance. Evaluations against Caffe, Torch, MXNet and TensorFlow have demonstrated that SuperNeurons trains at least 3.2432 deeper network than current ones with the leading performance. Particularly, SuperNeurons can train ResNet2500 that has 104 basic network layers on a 12GB K40c."}
{"_id": "c7baff1a1c70e01d6bc71a29c633fb1fd326f5a7", "title": "BRAVO: Balanced Reliability-Aware Voltage Optimization", "text": "Defining a processor micro-architecture for a targeted productspace involves multi-dimensional optimization across performance, power and reliability axes. A key decision in sucha definition process is the circuit-and technology-driven parameterof the nominal (voltage, frequency) operating point. This is a challenging task, since optimizing individually orpair-wise amongst these metrics usually results in a designthat falls short of the specification in at least one of the threedimensions. Aided by academic research, industry has nowadopted early-stage definition methodologies that considerboth energy-and performance-related metrics. Reliabilityrelatedenhancements, on the other hand, tend to get factoredin via a separate thread of activity. This task is typically pursuedwithout thorough pre-silicon quantifications of the energyor even the performance cost. In the late-CMOS designera, reliability needs to move from a post-silicon afterthoughtor validation-only effort to a pre-silicon definitionprocess. In this paper, we present BRAVO, a methodologyfor such reliability-aware design space exploration. BRAVOis supported by a multi-core simulation framework that integratesperformance, power and reliability modeling capability. Errors induced by both soft and hard fault incidence arecaptured within the reliability models. We introduce the notionof the Balanced Reliability Metric (BRM), that we useto evaluate overall reliability of the processor across soft andhard error incidences. We demonstrate up to 79% improvementin reliability in terms of this metric, for only a 6% dropin overall energy efficiency over design points that maximizeenergy efficiency. We also demonstrate several real-life usecaseapplications of BRAVO in an industrial setting."}
{"_id": "65b0ffbab1ae29deff6dee7d401bf14b4edf8477", "title": "Ensemble Methods for Multi-label Classification", "text": "Ensemble methods have been shown to be an effectiv e tool for solving multi-label classification tasks. In the RAndom k-labELsets (RAKEL) algorithm, each member of the en semble is associated with a small randomly-selected subset of k labels. Then, a single label classifier is trained according to each combination of elements in the subset. In this paper we adopt a similar approach, however, instead of rando mly choosing subsets, we select the minimum required su bsets of k labels that cover all labels and meet additional constraints such as coverage of inter-label correla tions. Construction of the cover is achieved by form ulating the subset selection as a minimum set covering prob lem (SCP) and solving it by using approximation algo rithms. Every cover needs only to be prepared once by offline algorithms. Once prepared, a cover may b e applied to the classification of any given multi-la bel dataset whose properties conform with those of the cover. The contribution of this paper is two-fold. Fir st, we introduce SCP as a general framework for con structing label covers while allowing the user to incorpo rate cover construction constraints. We demonstrate he effectiveness of this framework by proposing two co nstruction constraints whose enforcement produces c overs that improve the prediction performance of ran dom selection. Second, we provide theoretical bound s that quantify the probabilities of random selection t produce covers that meet the proposed construct ion riteria. The experimental results indicate that the p roposed methods improve multi-label classification accuracy and stability compared with the RAKEL algorithm a nd to other state-of-the-art algorithms."}
{"_id": "6f33c49e983acf93e98bfa085de18ca489a27659", "title": "Sensor networks for medical care", "text": "Sensor networks have the potential to greatly impact many aspects of medical care. By outfitting patients with wireless, wearable vital sign sensors, collecting detailed real-time data on physiological status can be greatly simplified. However, there is a significant gap between existing sensor network systems and the needs of medical care. In particular, medical sensor networks must support multicast routing topologies, node mobility, a wide range of data rates and high degrees of reliability, and security. This paper describes our experiences with developing a combined hardware and software platform for medical sensor networks, called CodeBlue. CodeBlue provides protocols for device discovery and publish/subscribe multihop routing, as well as a simple query interface that is tailored for medical monitoring. We have developed several medical sensors based on the popular MicaZ and Telos mote designs, including a pulse oximeter, EKG and motion-activity sensor. We also describe a new, miniaturized sensor mote designed for medical use. We present initial results for the CodeBlue prototype demonstrating the integration of our medical sensors with the publish/subscribe routing substrate. We have experimentally validated the prototype on our 30-node sensor network testbed, demonstrating its scalability and robustness as the number of simultaneous queries, data rates, and transmitting sensors are varied. We also study the effect of node mobility, fairness across multiple simultaneous paths, and patterns of packet loss, confirming the system\u2019s ability to maintain stable routes despite variations in node location and"}
{"_id": "4df65842a527e752bd487c180c368eec85f8b61b", "title": "Digital Forensic Analysis of SIM Cards", "text": "Smart cards are fundamental technology in modern life. It is embedded in numerous devices such as GPS devices, ATM cards, Mobile SIM cards and many others. Mobile devices became the evolution of technology. It becomes smaller, faster and supports large storage capabilities. Digital forensics of mobile devices that maybe found in crime scene is becoming inevitable. The purpose of this research is to address the SIM cards digital forensics analysis. It presents sound forensic methodology and process of SIM cards forensic examination. In particular, the main aim of the research is to answer the following research questions: (1) what forensic evidence could be extracted from a SIM card and (2) what are limitations that may hinder a forensic"}
{"_id": "167895bdf0f1ef88acc962e7a6f255ab92769485", "title": "Object Detection and Classification by Decision-Level Fusion for Intelligent Vehicle Systems", "text": "To understand driving environments effectively, it is important to achieve accurate detection and classification of objects detected by sensor-based intelligent vehicle systems, which are significantly important tasks. Object detection is performed for the localization of objects, whereas object classification recognizes object classes from detected object regions. For accurate object detection and classification, fusing multiple sensor information into a key component of the representation and perception processes is necessary. In this paper, we propose a new object-detection and classification method using decision-level fusion. We fuse the classification outputs from independent unary classifiers, such as 3D point clouds and image data using a convolutional neural network (CNN). The unary classifiers for the two sensors are the CNN with five layers, which use more than two pre-trained convolutional layers to consider local to global features as data representation. To represent data using convolutional layers, we apply region of interest (ROI) pooling to the outputs of each layer on the object candidate regions generated using object proposal generation to realize color flattening and semantic grouping for charge-coupled device and Light Detection And Ranging (LiDAR) sensors. We evaluate our proposed method on a KITTI benchmark dataset to detect and classify three object classes: cars, pedestrians and cyclists. The evaluation results show that the proposed method achieves better performance than the previous methods. Our proposed method extracted approximately 500 proposals on a 1226 \u00d7 370 image, whereas the original selective search method extracted approximately 10 6 \u00d7 n proposals. We obtained classification performance with 77.72% mean average precision over the entirety of the classes in the moderate detection level of the KITTI benchmark dataset."}
{"_id": "c669d5efe471abcb3a28223845a318a562070cc8", "title": "A switching ringing suppression scheme of SiC MOSFET by Active Gate Drive", "text": "This paper proposes an Active Gate Drive (AGD) to reduce unwanted switching ringing of Silicon Carbide (SiC) MOSFET module with a rating of 120 A and 1200 V. While SiC MOSFET can be operated under high switching frequency and high temperature with very low power losses, one of the key challenges for SiC MOSFET is the electromagnetic interference (EMI) caused by steep switching transients and continuous switching ringing. Compared to Si MOSFET, the higher rate of SiC MOSFET drain current variation introduces worse EMI problems. To reduce EMI generated from the switching ringing, this paper investigates the causes of switching ringing by considering the combined impact of parasitic inductances, capacitances, and low circuit loop resistance. In addition, accurate mathematical expressions are established to explain the ringing behavior and quantitative analysis is carried out to investigate the relationship between the switching transient and gate drive voltage. Thereafter, an AGD method for mitigating SiC MOSFET switching ringing is presented. Substantially reduced switching ringing can be observed from circuit simulations. As a result, the EMI generation is mitigated."}
{"_id": "9b618fa0cd834f7c4122c8e53539085e06922f8c", "title": "Adversarial Perturbations Against Deep Neural Networks for Malware Classification", "text": "Deep neural networks, like many other machine learning models, have recently been shown to lack robustness against adversarially crafted inputs. These inputs are derived from regular inputs by minor yet carefully selected perturbations that deceive machine learning models into desired misclassifications. Existing work in this emerging field was largely specific to the domain of image classification, since the highentropy of images can be conveniently manipulated without changing the images\u2019 overall visual appearance. Yet, it remains unclear how such attacks translate to more securitysensitive applications such as malware detection\u2013which may pose significant challenges in sample generation and arguably grave consequences for failure. In this paper, we show how to construct highly-effective adversarial sample crafting attacks for neural networks used as malware classifiers. The application domain of malware classification introduces additional constraints in the adversarial sample crafting problem when compared to the computer vision domain: (i) continuous, differentiable input domains are replaced by discrete, often binary inputs; and (ii) the loose condition of leaving visual appearance unchanged is replaced by requiring equivalent functional behavior. We demonstrate the feasibility of these attacks on many different instances of malware classifiers that we trained using the DREBIN Android malware data set. We furthermore evaluate to which extent potential defensive mechanisms against adversarial crafting can be leveraged to the setting of malware classification. While feature reduction did not prove to have a positive impact, distillation and re-training on adversarially crafted samples show promising results."}
{"_id": "5e4deed61eaf561f2ef2a26f11ce32345ce64981", "title": "Lung nodules diagnosis based on evolutionary convolutional neural network", "text": "Lung cancer presents the highest cause of death among patients around the world, in addition of being one of the smallest survival rates after diagnosis. In this paper, we exploit a deep learning technique jointly with the genetic algorithm to classify lung nodules in whether malignant or benign, without computing the shape and texture features. The methodology was tested on computed tomography (CT) images from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), with the best sensitivity of 94.66%, specificity of 95.14%, accuracy of 94.78% and area under the ROC curve of 0.949."}
{"_id": "4cad1f0023e1bc904c3c615d791c1518dac8243f", "title": "Crop disease classification using texture analysis", "text": "With the Agriculture Sector being the backbone of not only a large number of industries but also society as a whole, there is a rising need to grow good quality crops which in turn will give a high yield. For this to happen, it is crucial to monitor the crops throughout their growth period. In this paper, Image processing is used to detect and classify sunflower crop diseases based on the image of their leaf. The images are taken through a high resolution digital camera and after preprocessing, are subjected to k-means clustering to get the diseased part of the leaf. These are then run through the various machine learning algorithms and classified based on their color and texture features. A comparison based on accuracy between various machine learning algorithms is done namely K-Nearest Neighbors, Multi-Class Support Vector Machine, Naive Bayes and Multinomial Logistic Regression to achieve maximum accuracy. The implementation has been done using MATLAB."}
{"_id": "d30bf3722157c71938dc94419802239ef4e4e0db", "title": "Practical Private Set Intersection Protocols with Linear Complexity", "text": "The constantly increasing dependence on anytime-anywhere availability of data and the commensurately increasing fear of losing privacy motivate the need for privacy-preserving techniques. One interesting and common problem occurs when two parties need to privately compute an intersection of their respective sets of data. In doing so, one or both parties must obtain the intersection (if one exists), while neither should learn anything about other set elements. Although prior work has yielded a number of effective and elegant Private Set Intersection (PSI) techniques, the quest for efficiency is still underway. This paper explores some PSI variations and constructs several secure protocols that are appreciably more efficient than the state-of-the-art."}
{"_id": "26880494f79ae1e35ffee7f055cb0ad5693060c2", "title": "Robot Pose Estimation in Unknown Environments by Matching 2D Range Scans", "text": "A mobile robot exploring an unknown environment has no absolute frame of reference for its position, other than features it detects through its sensors. Using distinguishable landmarks is one possible approach, but it requires solving the object recognition problem. In particular, when the robot uses two-dimensional laser range scans for localization, it is di cult to accurately detect and localize landmarks in the environment (such as corners and occlusions) from the range scans. In this paper, we develop two new iterative algorithms to register a range scan to a previous scan so as to compute relative robot positions in an unknown environment, that avoid the above problems. The rst algorithm is based on matching data points with tangent directions in two scans and minimizing a distance function in order to solve the displacement between the scans. The second algorithm establishes correspondences between points in the two scans and then solves the point-to-point least-squares problem to compute the relative pose of the two scans. Our methods work in curved environments and can handle partial occlusions by rejecting outliers."}
{"_id": "2ca1fed0a117e9cccaad3f6bab5f4bed7b79a82c", "title": "Autonomous robot path planning in dynamic environment using a new optimization technique inspired by Bacterial Foraging technique", "text": "Path planning is one of the basic and interesting functions for a mobile robot. This paper explores the application of Bacterial Foraging Optimization to the problem of mobile robot navigation to determine shortest feasible path to move from any current position to target position in unknown environment with moving obstacles. It develops a new algorithm based on Bacterial Foraging Optimization (BFO) technique. This algorithm finds a path towards the target and avoiding the obstacles using particles which are randomly distributed on a circle around a robot. The criterion on which it selects the best particle is the distance to target and the Gaussian cost function of the particle. Then, a high level decision strategy is used for the selection and thus proceeds for the result. It works on local environment by using a simple robot sensor. So, it is free from having generated additional map which adds cost. Furthermore, it can be implemented without requirement to tuning algorithm and complex calculation. To simulate the algorithm, the program is written in C language and the environment is created by OpenGL. To test the efficiency of proposed technique, results are compared with Basic Bacterial Foraging Optimization (BFO) and another well-known algorithm called Particle Swarm Optimization (PSO). From the experimental result it can be told that the proposed method gives better path or optimal path."}
{"_id": "36fc5c9957bf34112614c23f0d5645c95d2d273a", "title": "Mobile Robot Path Planning with Randomly Moving Obstacles and Goal", "text": "This article presents the dynamic path planning for a mobile robot to track a randomly moving goal with avoidance of multiple randomly moving obstacles. The main feature of the developed scheme is its capability of dealing with the situation that the paths of both the goal and the obstacles are unknown a priori to the mobile robot. A new mathematical approach that is based on the concepts of 3-D geometry is proposed to generate the path of the mobile robot. The mobile robot decides its path in real time to avoid the randomly moving obstacles and to track the randomly moving goal. The developed scheme results in faster decision-making for successful goal tracking. 3-D simulations using MATLAB validate the developed scheme."}
{"_id": "5d899ccc410877782e8a3c82592de997853f9b6a", "title": "Bacterium-inspired robots for environmental monitoring", "text": "Locating gradient sources and tracking them over time has important applications to environmental monitoring and studies of the ecosystem. We present an approach, inspired by bacterial chemotaxis, for robots to navigate to sources using gradient measurements and a simple actuation strategy (biasing a random walk). Extensive simulations show the efficacy of the approach in varied conditions including multiple sources, dissipative sources, and noisy sensors and actuators. We also show how such an approach could be used for boundary finding. We validate our approach by testing it on a small robot (the robomote) in a phototaxis experiment. A comparison of our approach with gradient descent shows that while gradient descent is faster, our approach is better suited for boundary coverage, and performs better in the presence of multiple and dissipative sources."}
{"_id": "c8a04d0cbb9f70e86800b11b594c9a05d7b6bac0", "title": "Real-time obstacle avoidance for manipulators and mobile robots", "text": ""}
{"_id": "9ccb021ea2da3de52a6c1e606798c0d4f4e27a59", "title": "Controlling electromagnetic fields.", "text": "Using the freedom of design that metamaterials provide, we show how electromagnetic fields can be redirected at will and propose a design strategy. The conserved fields-electric displacement field D, magnetic induction field B, and Poynting vector B-are all displaced in a consistent manner. A simple illustration is given of the cloaking of a proscribed volume of space to exclude completely all electromagnetic fields. Our work has relevance to exotic lens design and to the cloaking of objects from electromagnetic fields."}
{"_id": "ebfa81abbfabd8a3c546d1cd2405e55226619f61", "title": "Brain activation and sexual arousal in healthy, heterosexual males.", "text": "Despite the brain's central role in sexual function, little is known about relationships between brain activation and sexual response. In this study, we employed functional MRI (fMRI) to examine relationships between brain activation and sexual arousal in a group of young, healthy, heterosexual males. Each subject was exposed to two sequences of video material consisting of explicitly erotic (E), relaxing (R) and sports (S) segments in an unpredictable order. Data on penile turgidity was collected using a custom-built pneumatic pressure cuff. Both traditional block analyses using contrasts between sexually arousing and non-arousing video clips and a regression using penile turgidity as the covariate of interest were performed. In both types of analyses, contrast images were computed for each subject and these images were subsequently used in a random effects analysis. Strong activations specifically associated with penile turgidity were observed in the right subinsular region including the claustrum, left caudate and putamen, right middle occipital/ middle temporal gyri, bilateral cingulate gyrus and right sensorimotor and pre-motor regions. Smaller, but significant activation was observed in the right hypothalamus. Few significant activations were found in the block analyses. Implications of the findings are discussed. Our study demonstrates the feasibility of examining brain activation/sexual response relationships in an fMRI environment and reveals a number of brain structures whose activation is time-locked to sexual arousal."}
{"_id": "ef9ecdfa98eba6827ba0140981fd0c259a72c877", "title": "The Complexity of Relational Query Languages (Extended Abstract)", "text": "Two complexity measures for query languages are proposed. Data complexity is the complexity of evaluating a query in the language as a function of the size of the database, and expression complexity is the complexity of evaluating a query in the language as a function of the size of the expression defining the query. We study the data and expression complexity of logical languages - relational calculus and its extensions by transitive closure, fixpoint and second order existential quantification - and algebraic languages - relational algebra and its extensions by bounded and unbounded looping. The pattern which will be shown is that the expression complexity of the investigated languages is one exponential higher then their data complexity, and for both types of complexity we show completeness in some complexity class."}
{"_id": "d42a093e98bb5003e7746aca7357320c8e5d7362", "title": "The emotional power of music: How music enhances the feeling of affective pictures", "text": "Music is an intriguing stimulus widely used in movies to increase the emotional experience. However, no brain imaging study has to date examined this enhancement effect using emotional pictures (the modality mostly used in emotion research) and musical excerpts. Therefore, we designed this functional magnetic resonance imaging study to explore how musical stimuli enhance the feeling of affective pictures. In a classical block design carefully controlling for habituation and order effects, we presented fearful and sad pictures (mostly taken from the IAPS) either alone or combined with congruent emotional musical excerpts (classical pieces). Subjective ratings clearly indicated that the emotional experience was markedly increased in the combined relative to the picture condition. Furthermore, using a second-level analysis and regions of interest approach, we observed a clear functional and structural dissociation between the combined and the picture condition. Besides increased activation in brain areas known to be involved in auditory as well as in neutral and emotional visual-auditory integration processes, the combined condition showed increased activation in many structures known to be involved in emotion processing (including for example amygdala, hippocampus, parahippocampus, insula, striatum, medial ventral frontal cortex, cerebellum, fusiform gyrus). In contrast, the picture condition only showed an activation increase in the cognitive part of the prefrontal cortex, mainly in the right dorsolateral prefrontal cortex. Based on these findings, we suggest that emotional pictures evoke a more cognitive mode of emotion perception, whereas congruent presentations of emotional visual and musical stimuli rather automatically evoke strong emotional feelings and experiences."}
{"_id": "6248c7fea1d7e8e1f945d3a20abac540faa468a5", "title": "Low-Power and Area-Efficient Carry Select Adder", "text": "Carry Select Adder (CSLA) is one of the fastest adders used in many data-processing processors to perform fast arithmetic functions. From the structure of the CSLA, it is clear that there is scope for reducing the area and power consumption in the CSLA. This work uses a simple and efficient gate-level modification to significantly reduce the area and power of the CSLA. Based on this modification 8-, 16-, 32-, and 64-b square-root CSLA (SQRT CSLA) architecture have been developed and compared with the regular SQRT CSLA architecture. The proposed design has reduced area and power as compared with the regular SQRT CSLA with only a slight increase in the delay. This work evaluates the performance of the proposed designs in terms of delay, area, power, and their products by hand with logical effort and through custom design and layout in 0.18-\u03bcm CMOS process technology. The results analysis shows that the proposed CSLA structure is better than the regular SQRT CSLA."}
{"_id": "d15a4fbfbc1180490387dc46c99d1ccd3ce93952", "title": "Digital Media and Symptoms of Attention-Deficit/Hyperactivity Disorder in Adolescents.", "text": "In many regards, this trial represents the best of evidence-based quality improvement efforts\u2014the improvement in health outcomes is often greater than what would have been expected from the individual components alone. An outcome of the trial by Wang et al is the proof that effective stroke interventions can be implemented in China where the burden of stroke is the highest in the world. Trials like this one leave a lasting legacy because the coaching and follow-up and the demonstration that data collection can lead to better outcomes with practice change will leave each of the intervention hospitals with a platform of good-quality stroke care and a mechanism to keep improving. Similar to the challenge of providing a novel drug to patients after a trial ends, a new dilemma that arises for the trialists is how to get the nonintervention hospitals up to the same level of care as the intervention hospitals. A major challenge is the sustainability of these kinds of interventions. Intensive quality improvement is just not practical for long periods; lower level embedded improvement needs to be built into the culture of a hospital, written into job descriptions, and identified as a deliverable in employment contracts to make continuous improvement possible and likely. In addition, the finer details of the intervention must be made available to others. A \u201cteach the teacher\u201d model may apply in distributing this kind of intervention widely. Overall, this randomized clinical trial by Wang et al has achieved its broad objective of both providing both stroke evidence and implanting practice change in 20 hospitals. The challenge now is to sustain that level of quality improvement at these hospitals, and expand this approach to other centers providing care for patients with acute stroke."}
{"_id": "ab39efd43f5998be90fc6f4136ffccf8e3a5745c", "title": "MobileDeepPill: A Small-Footprint Mobile Deep Learning System for Recognizing Unconstrained Pill Images", "text": "Correct identification of prescription pills based on their visual appearance is a key step required to assure patient safety and facilitate more effective patient care. With the availability of high-quality cameras and computational power on smartphones, it is possible and helpful to identify unknown prescription pills using smartphones. Towards this goal, in 2016, the U.S. National Library of Medicine (NLM) of the National Institutes of Health (NIH) announced a nationwide competition, calling for the creation of a mobile vision system that can recognize pills automatically from a mobile phone picture under unconstrained real-world settings. In this paper, we present the design and evaluation of such mobile pill image recognition system called MobileDeepPill. The development of MobileDeepPill involves three key innovations: a triplet loss function which attains invariances to real-world noisiness that deteriorates the quality of pill images taken by mobile phones; a multi-CNNs model that collectively captures the shape, color and imprints characteristics of the pills; and a Knowledge Distillation-based deep model compression framework that significantly reduces the size of the multi-CNNs model without deteriorating its recognition performance. Our deep learning-based pill image recognition algorithm wins the First Prize (champion) of the NIH NLM Pill Image Recognition Challenge. Given its promising performance, we believe MobileDeepPill helps NIH tackle a critical problem with significant societal impact and will benefit millions of healthcare personnel and the general public."}
{"_id": "08fdba599c9998ae27458faca03b37d3bc51b6bb", "title": "Fast-and-Light Stochastic ADMM", "text": "The alternating direction method of multipliers (ADMM) is a powerful optimization solver in machine learning. Recently, stochastic ADMM has been integrated with variance reduction methods for stochastic gradient, leading to SAG-ADMM and SDCA-ADMM that have fast convergence rates and low iteration complexities. However, their space requirements can still be high. In this paper, we propose an integration of ADMM with the method of stochastic variance reduced gradient (SVRG). Unlike another recent integration attempt called SCAS-ADMM, the proposed algorithm retains the fast convergence benefits of SAG-ADMM and SDCA-ADMM, but is more advantageous in that its storage requirement is very low, even independent of the sample size n. Experimental results demonstrate that it is as fast as SAG-ADMM and SDCA-ADMM, much faster than SCAS-ADMM, and can be used on much bigger data sets."}
{"_id": "6cc7eac71adaabae6a4cd3ce17d4e8d91f5ec249", "title": "Low-Cost Printed Chipless RFID Humidity Sensor Tag for Intelligent Packaging", "text": "This paper presents a fully-printed chipless radio frequency identification sensor tag for short-range item identification and humidity monitoring applications. The tag consists of two planar inductor-capacitor resonators operating wirelessly through inductive coupling. One resonator is used to encode ID data based on frequency spectrum signature, and another one works as a humidity sensor, utilizing a paper substrate as a sensing material. The sensing performances of three paper substrates, including commercial packaging paper, are investigated. The use of paper provides excellent sensitivity and reasonable response time to humidity. The cheap and robust packaging paper, particularly, exhibits the largest sensitivity over the relative humidity range from 20% to 70%, which offers the possibility of directly printing the sensor tag on traditional packages to make the package intelligent at ultralow cost."}
{"_id": "7101ecc00f914204709c4f3a8099cc8b74a893d4", "title": "Making Decisions about Self-Disclosure in Online Social Networks", "text": "This paper explores privacy calculus decision making processes for online social networks (OSN). Content analysis method is applied to analyze data obtained from face-to-face interviews and online survey with open-ended questions of 96 OSN users from different countries. The factors users considered before self-disclosing are explored. The perceived benefits and risks of using OSN and their impact on self-disclosure are also identified. We determine that the perceived risks of OSN usage hinder selfdisclosure. It is not clear, however, whether the perceived benefits offset the impact of the risks on selfdisclosure behavior. The findings as a whole do not support privacy calculus in OSN settings."}
{"_id": "d532777f2386766e7c93c0bf4257d0a359e91f6b", "title": "Deep learning methods and applications - Classification of traffic signs and detection of Alzheimer's disease from images", "text": "In this thesis, the deep learning method convolutional neural networks (CNNs) has been used in an attempt to solve two classification problems, namely traffic sign recognition and Alzheimer\u2019s disease detection. The two datasets used are from the German Traffic Sign Recognition Benchmark (GTSRB) and the Alzheimer\u2019s Disease Neuroimaging Initiative (ADNI). The final test results on the traffic sign dataset generated a classification accuracy of 98.81 %, almost as high as human performance on the same dataset, 98.84 %. Different parameter settings of the selected CNN structure have also been tested in order to see their impact on the classification accuracy. Trying to distinguish between MRI images of healthy brains and brains afflicted with Alzheimer\u2019s disease gained only about 65 % classification accuracy. These results show that the convolutional neural network approach is very promising for classifying traffic signs, but more work needs to be done when working with the more complex problem of detecting Alzheimer\u2019s disease."}
{"_id": "5be063612ace72ea1389bd37de1af83a709123d7", "title": "Digital Clock and Data Recovery Circuits for Optical Links", "text": "Clock and Data Recovery (CDR) circuits perform the function of recovering clock and re-timing received data in optical links. These CDRs must be capable of tolerating large input jitter (high JTOL), filter input jitter (low JTRAN with no jitter peaking) and in burst-mode applications be capable of phase locking in a very short time. In this paper, we elucidate these design tradeoffs and present various CDR architectures that can overcome them. Specifically, D/PLL CDR architecture that achieves high JTOL, low JTRAN, and no jitter peaking is described. A new burst-mode CDR that can lock instantaneously while filtering input jitter is also discussed."}
{"_id": "400203a4dbc493d663d36c1cb6c954826e64a43f", "title": "Can Structural Models Price Default Risk ? Evidence from Bond and Credit Derivative Markets \u2217", "text": "Using a set of structural models, we evaluate the price of default protection for a sample of US corporations. In contrast to previous evidence from corporate bond data, CDS premia are not systematically underestimated. In fact, one of our studied models has little difficulty on average in predicting their level. For robustness, we perform the same exercise for bond spreads by the same issuers on the same trading date. As expected, bond spreads relative to the Treasury curve are systematically underestimated. This is not the case when the swap curve is used as a benchmark, suggesting that previously documented underestimation results may be sensitive to the choice of risk free rate. \u2217We are indebted to seminar participants at Wilfrid Laurier University, Carnegie Mellon University, the Bank of Canada, McGill University and Queen\u2019s University for helpful discussions. In addition, we are grateful to participants of the 2005 EFA in Moscow, the 2005 NFA in Vancouver, the 2006 CFP Vallendar conference and the 2006 Moody\u2019s NYU Credit Risk Conference. We are particularly grateful to Stephen Schaefer for comments. \u2020Faculty of Management, McGill University, 1001 Sherbrooke Street West, Montreal QC, H3A 1G5 Canada. Tel +1 514 398-3186, Fax +1 514 398-3876, Email jan.ericsson@mcgill.ca. \u2021Stockholm School of Economics, Department of Finance, Box 6501, S-113 83 Stockholm, Sweden. Tel: +46 8 736 9143, fax +46 8 312327. 1"}
{"_id": "ed025ff76239202911e9f6c326b089a3e13dbb48", "title": "To ban or not to ban: Differences in mobile phone policies at elementary, middle, and high schools", "text": "The present study was to examine differences in mobile phone policies at elementary, middle and high schools. We surveyed 245 elementary, middle and high schools teachers in Shenzhen of China, using a specially designed 18-item questionnaire. Teachers\u2019 responses indicate that, across elementary, middle and high schools, significant differences exist in (1) students\u2019 percentages of using mobile phones among students, (2) students\u2019 dependence of mobile phones, (3) the number of schools banning students\u2019 mobile phone use, (4) oral and written forms used by schools to ban students\u2019 mobile phone use, and (5) policy reinforcement strategies used by schools. However, no school-level differences was found in (1) students\u2019 fondness of using mobile phones, (2) teachers\u2019 assessment of low-level effectiveness of mobile phone policies, and (3) teachers\u2019 policy improvement recommendations. Significance and implications of the findings are discussed. 2014 Elsevier Ltd. All rights reserved."}
{"_id": "ba6ed4c4e295dd6df7317986cea73a7437c268e6", "title": "Modeling and yield estimation of SRAM sub-system for different capacities subjected to parametric variations", "text": "Process variations have become a major challenge with the advancement in CMOS technologies. The performance of memory sub-systems such as Static Random Access Memory (SRAMs) is heavily dependent on these variations. Also, the VLSI industry requires the SRAM bit cell to qualify in the order of less than 0.1ppb to achieve higher Yield (Y). This paper proposes an efficient qualitative statistical analysis and Yield estimation method of SRAM sub-system which considers deviations due to variations in process parameters in bit line differential and input offset of sense amplifier (SA) all together. The Yield of SRAM is predicted for different capacities of SRAM array by developing a statistical model of memory sub-system in 65nm bulk CMOS technology. For the sub-system with 64 bit cells, it is estimated that the probability of failure is 4.802 \u2217 10\u221213 in a read cycle of frequency 1GHz. Furthermore, the probability of failure for 8MB capacity is 5.035 \u2217 10\u22127 while for 2GB capacity it increases to 1.289 \u2217 10\u22125. It is also observed that as the load on one SA per column is doubled, the probability of failure of memory slice increases by 70%. The proposed technique estimates the Yield(Y) for SRAM array to be more than 99.9999."}
{"_id": "4bace0a37589ec16246805d167c831343c9bd241", "title": "BigDebug: Debugging Primitives for Interactive Big Data Processing in Spark", "text": "Developers use cloud computing platforms to process a large quantity of data in parallel when developing big data analytics. Debugging the massive parallel computations that run in today's datacenters is time consuming and error-prone. To address this challenge, we design a set of interactive, real-time debugging primitives for big data processing in Apache Spark, the next generation data-intensive scalable cloud computing platform. This requires rethinking the notion of step-through debugging in a traditional debugger such as gdb, because pausing the entire computation across distributed worker nodes causes significant delay and naively inspecting millions of records using a watchpoint is too time consuming for an end user.\n First, BigDebug's simulated breakpoints and on-demand watchpoints allow users to selectively examine distributed, intermediate data on the cloud with little overhead. Second, a user can also pinpoint a crash-inducing record and selectively resume relevant sub-computations after a quick fix. Third, a user can determine the root causes of errors (or delays) at the level of individual records through a fine-grained data provenance capability. Our evaluation shows that BigDebug scales to terabytes and its record-level tracing incurs less than 25% overhead on average. It determines crash culprits orders of magnitude more accurately and provides up to 100% time saving compared to the baseline replay debugger. The results show that BigDebug supports debugging at interactive speeds with minimal performance impact."}
{"_id": "9985a9d63630ced5ea495f387f65ef2ef9783cab", "title": "The effect of different types of corrective feedback on ESL student writing", "text": "Debate about the value of providing corrective feedback on L2 writing has been prominent in recent years as a result of Truscott\u2019s [Truscott, J. (1996). The case against grammar correction in L2 writing classes. Language Learning, 46, 327\u2013369] claim that it is both ineffective and harmful and should therefore be abandoned. A growing body of empirical research is now investigating the agenda proposed by Ferris [Ferris, D.R. (1999). The case for grammar correction in L2 writing classes. A response to Truscott (1996). Journal of Second Language Writing, 8, 1\u201310, Ferris, D.R. (2004). The \u2018\u2018Grammar Correction\u2019\u2019 debate in L2 writing: Where are we, and where do we go from here? (and what do we do in the meantime. . .?). Journal of Second Language Writing, 13, 49\u201362.]. Contributing to this research base, the study reported in this article investigated whether the type of feedback (direct, explicit written feedback and student\u2013researcher 5 minute individual conferences; direct, explicit written feedback only; no corrective feedback) given to 53 adult migrant students on three types of error (prepositions, the past simple tense, and the definite article) resulted in improved accuracy in new pieces of writing over a 12 week period. The study found a significant effect for the combination of written and conference feedback on accuracy levels in the use of the past simple tense and the definite article in new pieces of writing but no overall effect on accuracy improvement for feedback types when the three error categories were considered as a single group. Significant variations in accuracy across the four pieces of writing support earlier SLA discoveries that L2 learners, in the process of acquiring new linguistic forms, may perform them with accuracy on one occasion but fail to do so on other similar occasions. # 2005 Elsevier Inc. All rights reserved."}
{"_id": "918e211ee67064d08e630207ea3471eeddfe9234", "title": "Gut microbes and the brain: paradigm shift in neuroscience.", "text": "The discovery of the size and complexity of the human microbiome has resulted in an ongoing reevaluation of many concepts of health and disease, including diseases affecting the CNS. A growing body of preclinical literature has demonstrated bidirectional signaling between the brain and the gut microbiome, involving multiple neurocrine and endocrine signaling mechanisms. While psychological and physical stressors can affect the composition and metabolic activity of the gut microbiota, experimental changes to the gut microbiome can affect emotional behavior and related brain systems. These findings have resulted in speculation that alterations in the gut microbiome may play a pathophysiological role in human brain diseases, including autism spectrum disorder, anxiety, depression, and chronic pain. Ongoing large-scale population-based studies of the gut microbiome and brain imaging studies looking at the effect of gut microbiome modulation on brain responses to emotion-related stimuli are seeking to validate these speculations. This article is a summary of emerging topics covered in a symposium and is not meant to be a comprehensive review of the subject."}
{"_id": "4a506f4e00451ae883593a92adf92395dd765c3f", "title": "Running on the bare metal with GeekOS", "text": "Undergraduate operating systems courses are generally taught using one of two approaches: abstract or concrete. In the abstract approach, students learn the concepts underlying operating systems theory, and perhaps apply them using user-level threads in a host operating system. In the concrete approach, students apply concepts by working on a real operating system kernel. In the purest manifestation of the concrete approach, students implement operating system projects that run on real hardware.GeekOS is an instructional operating system kernel which runs on real hardware. It provides the minimum functionality needed to schedule threads and control essential devices on an x86 PC. On this foundation, we have developed projects in which students build processes, semaphores, a multilevel feedback scheduler, paged virtual memory, a filesystem, and inter-process communication. We use the Bochs emulator for ease of development and debugging. While this approach (tiny kernel run on an emulator) is not new, we believe GeekOS goes further towards the goal of combining realism and simplicity than previous systems have."}
{"_id": "9d19e705e94173aaeae05103d403274c65628486", "title": "The Reorganization of Human Brain Networks Modulated by Driving Mental Fatigue", "text": "The organization of the brain functional network is associated with mental fatigue, but little is known about the brain network topology that is modulated by the mental fatigue. In this study, we used the graph theory approach to investigate reconfiguration changes in functional networks of different electroencephalography (EEG) bands from 16 subjects performing a simulated driving task. Behavior and brain functional networks were compared between the normal and driving mental fatigue states. The scores of subjective self-reports indicated that 90 min of simulated driving-induced mental fatigue. We observed that coherence was significantly increased in the frontal, central, and temporal brain regions. Furthermore, in the brain network topology metric, significant increases were observed in the clustering coefficient (Cp) for beta, alpha, and delta bands and the character path length ( Lp) for all EEG bands. The normalized measures $\\boldsymbol{\\gamma}$ showed significant increases in beta, alpha, and delta bands, and $\\boldsymbol{\\lambda}$ showed similar patterns in beta and theta bands. These results indicate that functional network topology can shift the network topology structure toward a more economic but less efficient configuration, which suggests low wiring costs in functional networks and disruption of the effective interactions between and across cortical regions during mental fatigue states. Graph theory analysis might be a useful tool for further understanding the neural mechanisms of driving mental fatigue."}
{"_id": "a3ed4c33d1fcaa1303d7e03b298d3a50d719f0d1", "title": "Automatic Image Annotation using Deep Learning Representations", "text": "We propose simple and effective models for the image annotation that make use of Convolutional Neural Network (CNN) features extracted from an image and word embedding vectors to represent their associated tags. Our first set of models is based on the Canonical Correlation Analysis (CCA) framework that helps in modeling both views - visual features (CNN feature) and textual features (word embedding vectors) of the data. Results on all three variants of the CCA models, namely linear CCA, kernel CCA and CCA with k-nearest neighbor (CCA-KNN) clustering, are reported. The best results are obtained using CCA-KNN which outperforms previous results on the Corel-5k and the ESP-Game datasets and achieves comparable results on the IAPRTC-12 dataset. In our experiments we evaluate CNN features in the existing models which bring out the advantages of it over dozens of handcrafted features. We also demonstrate that word embedding vectors perform better than binary vectors as a representation of the tags associated with an image. In addition we compare the CCA model to a simple CNN based linear regression model, which allows the CNN layers to be trained using back-propagation."}
{"_id": "6497a69eebe8d57eee820572ab12b09de51a39d9", "title": "A round- and computation-efficient three-party authenticated key exchange protocol", "text": "In three-party authenticated key exchange protocols, each client shares a secret only with a trusted server with assists in generating a session key used for securely sending messages between two communication clients. Compared with two-party authenticated key exchange protocols where each pair of parties must share a secret with each other, a three-party protocol does not cause any key management problem for the parties. In the literature, mainly there exist three issues in three-party authenticated key exchange protocols are discussed that need to be further improved: (1) to reduce latency, communication steps in the protocol should be as parallel as possible; (2) as the existence of a security-sensitive table on the server side may cause the server to become compromised, the table should be removed; (3) resources required for computation should be as few as possible to avoid the protocol to become an efficiency bottleneck. In various applications over networks, a quick response is required especially by light-weight clients in the mobile e-commerce. In this paper, a roundand computation-efficient three-party authenticated key exchange protocol is proposed which fulfils all of the above mentioned requirements. 2007 Elsevier Inc. All rights reserved."}
{"_id": "170438cf3438c21124dbc05df33ae0d10847c3af", "title": "From eye movements to actions: how batsmen hit the ball", "text": "In cricket, a batsman watches a fast bowler's ball come toward him at a high and unpredictable speed, bouncing off ground of uncertain hardness. Although he views the trajectory for little more than half a second, he can accurately judge where and when the ball will reach him. Batsmen's eye movements monitor the moment when the ball is released, make a predictive saccade to the place where they expect it to hit the ground, wait for it to bounce, and follow its trajectory for 100\u2013200 ms after the bounce. We show how information provided by these fixations may allow precise prediction of the ball's timing and placement. Comparing players with different skill levels, we found that a short latency for the first saccade distinguished good from poor batsmen, and that a cricket player's eye movement strategy contributes to his skill in the game."}
{"_id": "8804402bd9bd1013d1a67f0f9fb26a9c678b6c78", "title": "Gradient Flow in Recurrent Nets: the Difficulty of Learning Long-Term Dependencies", "text": "D3EGF(FIH)J KMLONPEGQSRPETN UCV.WYX(Z R.[ V R6\\M[ X N@]_^O\\`JaNcb V RcQ W d EGKeL(^(QgfhKeLOE?i)^(QSj ETNPfPQkRl[ V R)m\"[ X ^(KeLOEG^ npo qarpo m\"[ X ^(KeLOEG^tsAu EGNPb V ^ v wyx zlwO{(|(}<~O\u007fC}\u0081\u0080(\u0082(xp{a\u0083y\u0084.~A}\u0086\u0085\u0088\u0087_~ \u0089C\u008al\u00833\u0089#|<\u0080Az\u0086w#|l\u00806\u0087 \u008b(| \u008c JpfhL X\u008dV\u008f\u008e EG^O\u0090 QgJ \u0091 ETFOR\u0086\u0092\u0093] ^O\\\u0094J\u0095NPb V RcQ\u0097\u0096 X E)ETR \u00986EGKeLOETNcKMLOE\u009a\u0099 F\u0088\u009b ETN V RcQgJp^(^OE ZgZ E i ^(Qkj EGNPfhQSRO\u009b E \u009cOE2m1Jp^ RcNY\u009b E V\u0095Z sO\u009d\u009f\u009e! \u008d\u00a1 q.n sCD X KGKa\u00928\u009d\u00a2EG^ RPNhE\u00a4\u00a3 \u00a5\u00a6Q ZgZ E\u0095s m\u00a7J\u0095^ RPNO\u009b E V\u0095Z s( \u0308 X \u009b EG\u00a9#E\u0081Kas#\u009d V ^ V \u009c V s(H a \u009d\u00aba\u0095\u00ac3\u00ad \u00ae#|.\u0080Y \u0304y} xa\u00b0O\u007fC}l{\u008dx\u0093\u0087 \u0089 \u0083yxl\u0080Y~3{\u008d| \u0084 \u00b12\u0087Pz \u0084 \u009e V J Z J U N V fhKTJp^(Q \u0091 ETFOR\u0086\u0092 J\u0095\\ D vYf3RPEGb \u0301f V ^(\u009c\u00a7\u009d\u0088Jpb\u008fF X RPETN@D KTQ\u0097EG^(KTE i ^(QSjpEGNPfhQSR4v\u03bcJ\u0095\\ U\u00b6Z JaNPEG^(K\u00b7E jYQ V \u009c(Q \u0327D V ^ R V m V N3R V aOs#1 o \u00a1Ga r U Q\u0097NhE\u0081^OoTE1\u20444\u00bb,] R V\u0095Z vC1\u20442 3\u20444 \u0084 x \u00b1 x \u007f \u008b#\u00bf }\u00c0\u0087 \u00893\u0080t}l\u0082C}2\u0087P}<~ \u00act[ X NP\u0090\u0095E\u0081^\u00a7D KeL(b \u0301Qg\u009c(L X \u00a9yETN ] \u0091 DY]_\u00c1 \u009d\u0088J\u0095NPfhJ\u00c3\u00c2 Z j ETo\u0081Q V a\u0095 rpopo2\u00c4 X \u0090 V ^(J(sCD \u00c5)QSRPoTEGN ZgV ^(\u009c \u00c6 \u0089#|\u0095{3 \u0304\u008d|.\u0080(\u007fC}.\u008bC\u00bfY}p\u0084 \u0087Pz\u0086w"}
{"_id": "c9ab6adcf149c740b03ef3759260719af4f7ce07", "title": "Edinburgh's Phrase-based Machine Translation Systems for WMT-14", "text": "This paper describes the University of Edinburgh\u2019s (UEDIN) phrase-based submissions to the translation and medical translation shared tasks of the 2014 Workshop on Statistical Machine Translation (WMT). We participated in all language pairs. We have improved upon our 2013 system by i) using generalized representations, specifically automatic word clusters for translations out of English, ii) using unsupervised character-based models to translate unknown words in RussianEnglish and Hindi-English pairs, iii) synthesizing Hindi data from closely-related Urdu data, and iv) building huge language on the common crawl corpus."}
{"_id": "7c935d804de6efe1546783427ce51e54723f2640", "title": "Extended Bandwidth Instantaneous Current Sharing Scheme for Parallel UPS Systems", "text": "This paper investigates the suitability of instantaneous average current sharing scheme (IACS) for the parallel operation of three-phase uninterruptible power supply systems. A discrete-time model is developed for the analysis and design of the control loops. Some key issues are discussed based on the model and it is found that there is a compromise between system stability and current sharing performance with the conventional IACS scheme when the lengths of interconnecting cables are not negligible. Subsequently, an improved IACS scheme is proposed that ensures proper current sharing by extending the close-loop bandwidth. Its performance is analytically predicted and subsequently validated by experimental results on a 15-kW laboratory prototype."}
{"_id": "3a065f1e9e50503c09e7b174bb8a6f6fb280ebbe", "title": "AccessMiner: using system-centric models for malware protection", "text": "Models based on system calls are a popular and common approach to characterize the run-time behavior of programs. For example, system calls are used by intrusion detection systems to detect software exploits. As another example, policies based on system calls are used to sandbox applications or to enforce access control. Given that malware represents a significant security threat for today's computing infrastructure, it is not surprising that system calls were also proposed to distinguish between benign processes and malicious code.\n Most proposed malware detectors that use system calls follows program-centric analysis approach. That is, they build models based on specific behaviors of individual applications. Unfortunately, it is not clear how well these models generalize, especially when exposed to a diverse set of previously-unseen, real-world applications that operate on realistic inputs. This is particularly problematic as most previous work has used only a small set of programs to measure their technique's false positive rate. Moreover, these programs were run for a short time, often by the authors themselves.\n In this paper, we study the diversity of system calls by performing a large-scale collection (compared to previous efforts) of system calls on hosts that run applications for regular users on actual inputs. Our analysis of the data demonstrates that simple malware detectors, such as those based on system call sequences, face significant challenges in such environments. To address the limitations of program-centric approaches, we propose an alternative detection model that characterizes the general interactions between benign programs and the operating system (OS). More precisely, our system-centric approach models the way in which benign programs access OS resources (such as files and registry entries). Our experiments demonstrate that this approach captures well the behavior of benign programs and raises very few (even zero) false positives while being able to detect a significant fraction of today's malware."}
{"_id": "63bad67d3016837dd2ecb7ad46ee8d0d007aa1e6", "title": "How do people solve the \"weather prediction\" task?: individual variability in strategies for probabilistic category learning.", "text": "Probabilistic category learning is often assumed to be an incrementally learned cognitive skill, dependent on nondeclarative memory systems. One paradigm in particular, the weather prediction task, has been used in over half a dozen neuropsychological and neuroimaging studies to date. Because of the growing interest in using this task and others like it as behavioral tools for studying the cognitive neuroscience of cognitive skill learning, it becomes especially important to understand how subjects solve this kind of task and whether all subjects learn it in the same way. We present here new experimental and theoretical analyses of the weather prediction task that indicate that there are at least three different strategies that describe how subjects learn this task. (1) An optimal multi-cue strategy, in which they respond to each pattern on the basis of associations of all four cues with each outcome; (2) a one-cue strategy, in which they respond on the basis of presence or absence of a single cue, disregarding all other cues; or (3) a singleton strategy, in which they learn only about the four patterns that have only one cue present and all others absent. This variability in how subjects approach this task may have important implications for interpreting how different brain regions are involved in probabilistic category learning."}
{"_id": "05132e44451e92c40ffdea569ccb260c10d6a30d", "title": "RISK FACTORS FOR INJURY TO WOMEN FROM DOMESTIC VIOLENCE", "text": "A BSTRACT Background Domestic violence is the most common cause of nonfatal injury to women in the United States. To identify risk factors for such injuries, we examined the socioeconomic and behavioral characteristics of women who were victims of domestic violence and the men who injured them. Methods We conducted a case\u2013control study at eight large, university-affiliated emergency departments. The 256 intentionally injured women had acute injuries resulting from a physical assault by a male partner. The 659 controls were women treated for other conditions in the emergency department. Information was collected with a standardized questionnaire; no information was obtained directly from the male partners. Results The 256 intentionally injured women had a total of 434 contusions and abrasions, 89 lacerations, and 41 fractures and dislocations. In a multivariate analysis, the characteristics of the partners that were most closely associated with an increased risk of inflicting injury as a result of domestic violence were alcohol abuse (adjusted relative risk, 3.6; 95 percent confidence interval, 2.2 to 5.9); drug use (adjusted relative risk, 3.5; 95 percent confidence interval, 2.0 to 6.4); intermittent employment (adjusted relative risk, 3.1; 95 percent confidence interval, 1.1 to 8.8); recent unemployment (adjusted relative risk, 2.7; 95 percent confidence interval, 1.2 to 6.5); having less than a high-school education (adjusted relative risk, 2.5; 95 percent confidence interval, 1.4 to 4.4); and being a former husband, estranged husband, or former boyfriend (adjusted relative risk, 3.5; 95 percent confidence interval, 1.5 to 8.3). Conclusions Women at greatest risk for injury from domestic violence include those with male partners who abuse alcohol or use drugs, are unemployed or intermittently employed, have less than a high-school education, and are former husbands, estranged husbands, or former boyfriends of the women. (N Engl J Med 1999;341:1892-8.)"}
{"_id": "61dc8de84e0f4aab21a03833aeadcefa87d6d4e5", "title": "Private and Accurate Data Aggregation against Dishonest Nodes", "text": "Privacy-preserving data aggregation in ad hoc networks is a challenging problem, considering the distributed communication and control requirement, dynamic network topology, unreliable communication links, etc. The difficulty is exaggerated when there exist dishonest nodes, and how to ensure privacy, accuracy, and robustness against dishonest nodes remains a n open issue. Different from the widely used cryptographic approaches, in this paper, we address this challenging proble m by exploiting the distributed consensus technique. We first pr opose a secure consensus-based data aggregation (SCDA) algorith m that guarantees an accurate sum aggregation while preservi ng the privacy of sensitive data. Then, to mitigate the polluti on from dishonest nodes, we propose an Enhanced SCDA (E-SCDA) algorithm that allows neighbors to detect dishonest nodes, and derive the error bound when there are undetectable dishones t nodes. We prove the convergence of both SCDA and E-SCDA. We also prove that the proposed algorithms are(\u01eb, \u03c3)-dataprivacy, and obtain the mathematical relationship between\u01eb and \u03c3. Extensive simulations have shown that the proposed algori thms have high accuracy and low complexity, and they are robust against network dynamics and dishonest nodes."}
{"_id": "388a5acf4d701ac8ba4b55985702aea9c3ae4ffb", "title": "Predicting student academic performance in an engineering dynamics course: A comparison of four types of predictive mathematical models", "text": "Predicting student academic performance has long been an important research topic in many academic disciplines. The present study is the first study that develops and compares four types of mathematical models to predict student academic performance in engineering dynamics \u2013 a high-enrollment, highimpact, and core course that many engineering undergraduates are required to take. The four types of mathematical models include the multiple linear regression model, the multilayer perception network model, the radial basis function network model, and the support vector machine model. The inputs (i.e., predictor variables) of the models include student\u2019s cumulative GPA, grades earned in four pre-requisite courses (statics, calculus I, calculus II, and physics), and scores on three dynamics mid-term exams (i.e., the exams given to students during the semester and before the final exam). The output of the models is students\u2019 scores on the dynamics final comprehensive exam. A total of 2907 data points were collected from 323 undergraduates in four semesters. Based on the four types of mathematical models and six different combinations of predictor variables, a total of 24 predictive mathematical models were developed from the present study. The analysis reveals that the type of mathematical model has only a slight effect on the average prediction accuracy (APA, which indicates on average how well a model predicts the final exam scores of all students in the dynamics course) and on the percentage of accurate predictions (PAP, which is calculated as the number of accurate predictions divided by the total number of predictions). The combination of predictor variables has only a slight effect on the APA, but a profound effect on the PAP. In general, the support vector machine models have the highest PAP as compared to the other three types of mathematical models. The research findings from the present study imply that if the goal of the instructor is to predict the average academic performance of his/her dynamics class as a whole, the instructor should choose the simplest mathematical model, which is the multiple linear regression model, with student\u2019s cumulative GPA as the only predictor variable. Adding more predictor variables does not help improve the average prediction accuracy of any mathematical model. However, if the goal of the instructor is to predict the academic performance of individual students, the instructor should use the support vector machine model with the first six predictor variables as the inputs of the model, because this particular predictor combination increases the percentage of accurate predictions, and most importantly, allows sufficient time for the instructor to implement subsequent educational interventions to improve student learning. 2012 Elsevier Ltd. All rights reserved."}
{"_id": "581c8ef54a77a58ebfb354426d16b0dfab596cb7", "title": "Elephant herding optimization algorithm for support vector machine parameters tuning", "text": "Classification is part of various applications and it is an important problem that represents active research topic. Support vector machine is one of the widely used and very powerful classifier. The accuracy of support vector machine highly depends on learning parameters. Optimal parameters can be efficiently determined by using swarm intelligence algorithms. In this paper, we proposed recent elephant herding optimization algorithm for support vector machine parameter tuning. The proposed approach is tested on standard datasets and it was compared to other approaches from literature. The results of computational experiments show that our proposed algorithm outperformed genetic algorithms and grid search considering accuracy of classification."}
{"_id": "4a1351ad68e5af7d578143eeb93e21b5463602eb", "title": "New generation waveform approaches for 5G and beyond", "text": "For 5G and beyond cellular communication systems, new waveform development studies have been continuing in the whole world. Solutions for providing high throughput, high data rates, low latency, large number of terminals, supporting multiple antenna systems, and the related problems originated from these issues are playing an important role in the development of 5G and new waveform technologies. OFDM which has many important advantages continues to be included in various standards. However, because of its disadvantages, OFDM will not be sufficient for dynamic spectrum access and cognitive radio applications in new generation communication systems gradually. Additionally, OFDM is not suitable for asynchronous networks and radios which we expect to see commonly in the future. In this study, some potential candidate 5G waveforms are examined and various comparisons are carried out. Besides, assessments about future hybrid solutions are presented."}
{"_id": "04afb63292337d96da0bb2089334f86aa67fa9eb", "title": "Very-low-profile, stand-alone, tri-band WLAN antenna design for laptop-tablet computer with complete metal-backed cover", "text": "A very-low-profile, tri-band, planar inverted-F antenna (PIFA) with a height of 3.7 mm only for wireless local area network (WlAN) applications in the 2.4 GHz (2400\u223c2484 MHz), 5.2 GHz (5150\u223c5350 MHz), and 5.8 GHz (5725\u223c5825 MHz) bands for tablet-laptop computer with a metal back cover is presented. The antenna was made of a flat plate of size 18.7 mm \u00d7 32.5 mm and bent into a compact structure with the dimensions 3.7 mm \u00d7 7 mm \u00d7 32.5 mm. The PIFA comprised a radiating top patch, a feeding/shorting portion, and an antenna ground plane. The top patch was placed horizontally above the ground with the feeding/shorting portion connecting perpendicularly therebetween. The antenna was designed in a way that the radiating patch consisted of three branches, in which each branch controlled the corresponding resonant mode in the 2.4/5.2/5.8 GHz bands."}
{"_id": "f9daf4bc3574c5a49105b8d4ab233928f39f3696", "title": "GRAVITY-ASSIST: A series elastic body weight support system with inertia compensation", "text": "We present GRAVITY-ASSIST, a series elastic active body weight support and inertia compensation system for use in robot assisted gait rehabilitation. The device consists of a single degree of freedom series elastic actuator that connects to the trunk of a patient. The series elastic system is novel in that, it can provide the desired level of dynamic unloading such that the patient experiences only a percentage of his/her weight and inertia. Inertia compensation is important, since the inertial forces can cause significant deviations from the desired unloading force, specially at low support forces and fast walking speeds. Furthermore, this feature enables the inertia of the harness and force sensing unit attached to the patient to be compensated for, making sure that the device does not interfere with the natural gait cycle. We present a functional prototype of the device, its characterization and experimental verification of the approach."}
{"_id": "bda6aa67d0aaf6ec07d0946244b1563bedc5f861", "title": "A Framework for Information Systems Architecture", "text": "With increasing size and complexity of the implementations of information systems, it is necessary to use some logical construct (or architecture) for defining and controlling the interfaces and the integration of all of the components of the system. This paper defines information systems architecture by creating a d e scriptive framework from disciplines quite independent of information systems, then by analogy specifies information systems architecture based upon the neutral , objective framework. Also, some preliminary conclusions about the implications of the resultant descriptive framework are drawn. The discussion is limited to architecture and does not include a strategic planning methodology. T he subject of information systems architecture is beginning to receive considerable attention. The increased scope of design and levels of complexity of information systems implementations are forcing the use of some logical construct (or architecture) for defining and controlling the interfaces and the integration of all of the components of the system. Thirty years ago this issue was not at all significant because the technology itself did not provide for either breadth in scope or depth in complexity in information systems. The inherent limitations of the then-available 4K machines, for example, constrained design and necessitated suboptimal approaches for automating a business. Current technology is rapidly removing both conceptual and financial constraints. It is not hard to speculate about, if not realize, very large, very complex systems implementations, extending in scope and complexity to encompass an entire enterprise. One can readily delineate the merits of the large, complex, enterprise-oriented approaches. Such systems allow flexibility in managing business changes and coher-ency in the management of business resources. However , there also is merit in the more traditional, smaller, suboptimal systems design approach. Such systems are relatively economical, quickly implemented , and easier to design and manage. In either case, since the technology permits \" distributing \" large amounts of computing facilities in small packages to remote locations, some kind of structure (or architecture) is imperative because decentralization without structure is chaos. Therefore, to keep the business from disintegrating, the concept of information systems architecture is becoming less an option and more a necessity for establishing some order and control in the investment of information systems resources. The cost involved and the success of the business depending increasingly on its information systems require a disciplined approach to the management of those systems. On the assumption that an understanding of information systems architecture \u2026"}
{"_id": "ab53ca2fcb009c97da29988b29fbacb5cbe4ed31", "title": "Reading Comprehension with Deep Learning", "text": "We train a model that combines attention with multi-perspective matching to perform question answering. For each question and context pair in SQuAD, we perform an attention calculation over each context before extracting features of the question and context, matching them from multiple perspectives. Whilst we did not have time to perform a hyper-parameter search or incorporate other features into our model, this joint model obtained an F1 score of 22.5 and an EM score of 13.0, outperforming our baseline model without attention, which attained an F1 score of 19.4 and an EM score of 10.0. In the future, we would like to implement an early stopping feature and a co-dependent representation of each question and context pair."}
{"_id": "90c3a783271e61adc2e140976d3a3e6a0f23f193", "title": "Mapping moods: Geo-mapped sentiment analysis during hurricane sandy", "text": "Sentiment analysis has been widely researched in the domain of online review sites with the aim of generating summarized opinions of product users about different aspects of the products. However, there has been little work focusing on identifying the polarity of sentiments expressed by users during disaster events. Identifying sentiments expressed by users in an online social networking site can help understand the dynamics of the network, e.g., the main users\u2019 concerns, panics, and the emotional impacts of interactions among members. Data produced through social networking sites is seen as ubiquitous, rapid and accessible, and it is believed to empower average citizens to become more situationally aware during disasters and coordinate to help themselves. In this work, we perform sentiment classification of user posts in Twitter during the Hurricane Sandy and visualize these sentiments on a geographical map centered around the hurricane. We show how users' sentiments change according not only to users\u2019 locations, but also based on the distance from the disaster."}
{"_id": "ed8c54d05f1a9d37df2c0587c2ca4d4c849cb267", "title": "AUTOMATIC BLOOD VESSEL SEGMENTATION IN COLOR IMAGES OF RETINA", "text": "Automated image processing techniques have the ability to assist in the early detection of diabetic retinopathy disease which can be regarded as a manifestation of diabetes on the retina. Blood vessel segmentation is the basic foundation while developing retinal screening systems, since vessels serve as one of the main retinal landmark features. This paper proposes an automated method for identification of blood vessels in color images of the retina. For every image pixel, a feature vector is computed that utilizes properties of scale and orientation selective Gabor filters. The extracted features are then classified using generative Gaussian mixture model and discriminative support vector machines classifiers. Experimental results demonstrate that the area under the receiver operating characteristic (ROC) curve reached a value 0.974, which is highly comparable and, to some extent, higher than the previously reported ROCs that range from 0.787 to 0.961. Moreover, this method gives a sensitivity of 96.50% with a specificity of 97.10% for identification of blood vessels. Keywords\u2013 Retinal blood vessels, Gabor filters, support vector machines, vessel segmentation"}
{"_id": "8f3eb3fea1f7b5a609730592c035f672e47a170c", "title": "Cubical Type Theory: A Constructive Interpretation of the Univalence Axiom", "text": "This paper presents a type theory in which it is possible to directly manipulate n-dimensional cubes (points, lines, squares, cubes, etc.) based on an interpretation of dependent type theory in a cubical set model. This enables new ways to reason about identity types, for instance, function extensionality is directly provable in the system. Further, Voevodsky\u2019s univalence axiom is provable in this system. We also explain an extension with some higher inductive types like the circle and propositional truncation. Finally we provide semantics for this cubical type theory in a constructive meta-theory. 1998 ACM Subject Classification F.3.2 Logics and Meanings of Programs: Semantics of Programming Languages, F.4.1 Mathematical Logic and Formal Languages: Mathematical Logic"}
{"_id": "de7170f66996f444905bc37745ac3fa1ca760fed", "title": "A review of wind power and wind speed forecasting methods with different time horizons", "text": "In recent years, environmental considerations have prompted the use of wind power as a renewable energy resource. However, the biggest challenge in integrating wind power into the electric grid is its intermittency. One approach to deal with wind intermittency is forecasting future values of wind power production. Thus, several wind power or wind speed forecasting methods have been reported in the literature over the past few years. This paper provides insight on the foremost forecasting techniques, associated with wind power and speed, based on numeric weather prediction (NWP), statistical approaches, artificial neural network (ANN) and hybrid techniques over different time-scales. An overview of comparative analysis of various available forecasting techniques is discussed as well. In addition, this paper further gives emphasis on the major challenges and problems associated with wind power prediction."}
{"_id": "dbde4f47efed72cbb99f412a9a4c17fe39fa04fc", "title": "What does it take to generate natural textures", "text": "Natural image generation is currently one of the most actively explored fields in Deep Learning. Many approaches, e.g. for state-of-the-art artistic style transfer or natural texture synthesis, rely on the statistics of hierarchical representations in supervisedly trained deep neural networks. It is, however, unclear what aspects of this feature representation are crucial for natural image generation: is it the depth, the pooling or the training of the features on natural images? We here address this question for the task of natural texture synthesis and show that none of the above aspects are indispensable. Instead, we demonstrate that natural textures of high perceptual quality can be generated from networks with only a single layer, no pooling and random filters."}
{"_id": "c62e6bebdef5d71961384db654e9b99d68a42d39", "title": "Investigating Social Entrepreneurship in Developing Countries", "text": "Social entrepreneurship has drawn interest from global policy makers and social entrepreneurs to target developing countries. Generally, not-forprofit organizations, funded by government and donor grants have played a significant role in poverty alleviation. We argue that, by applying entrepreneurial concepts, organizations can create social value, hence mitigate poverty. This is a theoretical paper that builds upon a multidimensional model in analysing how three social enterprises from India and Kenya create social value to address social problems. The findings suggest that whilst the social mission is central to all these organizations, they also create social value through innovation and pro-activeness. Additionally, the cultural and political environmental contexts hinder their attempt to create social value. Building networks and partnerships to achieve social value creation is vital for these organizations. Policy makers should devise policies that would assist social enterprises to achieve development goals."}
{"_id": "60737e8bd196d5fdc1f027d25b395253b31ec30f", "title": "Crowd-Squared: Amplifying the Predictive Power of Search Trend Data", "text": "Table A1. Search Terms Generated by the Crowd (a) Influenza \u2013 Crowd-Squared Search Terms and % of Turkers Mentioning Each Term Search Term % Mention Search Term % Mention Search Term % Mention Search Term % Mention sick 68% influenza 4% achy 2% fatigue 1% fever 39% coughing 4% weak 2% home 1% cough 21% germs 4% bad 2% hospital 1% cold 16% headache 4% shots 2% nyquil 1% shot 13% ache 4% body aches 2% season 1% vomit 11% runny nose 4% flu shot 2% tea 1%"}
{"_id": "0bc6d36aee2e3c4fd035f7a9b43a9a36cf29c938", "title": "Language Modeling with Feedforward Neural Networks", "text": "In this paper, we describe the use of feedforward neural networks to improve the term-distance term-occurrence (TDTO) language model, previously proposed in [1]\u2212[3]. The main idea behind the TDTO model proposition is to model separately both position and occurrence information of words in the history-context to better estimate n-gram probabilities. Neural networks have been shown to offer a better generalization property than other conventional smoothing methods. We take advantage of such property for a better smoothing mechanism for the TDTO model, referred to as the continuous space TDTO (cTDTO). The newly proposed model has reported an improved perplexity over the baseline TDTO model of up to 9.2%, at history length of ten, as evaluated on the Wall Street Journal (WSJ) corpus. Also, in the Aurora-4 speech recognition N-best re-ranking task, the cTDTO outperformed the TDTO model by reducing the word error rate (WER) up to 12.9% relatively."}
{"_id": "351878d1facaff88169860aafca5199340cf8891", "title": "Diagnosis, prevention, and treatment of catheter-associated urinary tract infection in adults: 2009 International Clinical Practice Guidelines from the Infectious Diseases Society of America.", "text": "Guidelines for the diagnosis, prevention, and management of persons with catheter-associated urinary tract infection (CA-UTI), both symptomatic and asymptomatic, were prepared by an Expert Panel of the Infectious Diseases Society of America. The evidence-based guidelines encompass diagnostic criteria, strategies to reduce the risk of CA-UTIs, strategies that have not been found to reduce the incidence of urinary infections, and management strategies for patients with catheter-associated asymptomatic bacteriuria or symptomatic urinary tract infection. These guidelines are intended for use by physicians in all medical specialties who perform direct patient care, with an emphasis on the care of patients in hospitals and long-term care facilities."}
{"_id": "4ee9ccc200ed7db73fd23ac3e5a2a0deaf12ebab", "title": "A New Clustering Method Based On Type-2 Fuzzy Similarity and Inclusion Measures", "text": "Similarity and inclusion measures between type-2 fuzzy sets have a wide range of applications. New similarity and inclusion measures between type-2 fuzzy sets are respectively defined in this paper. The properties of the measures are discussed. Some examples are used to compare the proposed measures with the existing results. Numerical results show that the proposed measures are more reasonable. Similarity measures and Yang and Shih\u2019s algorithm are combined as a clustering method for type-2 fuzzy data. Clustering results demonstrate that the proposed similarity measures are better than the existing measures."}
{"_id": "1918d5662d327f4524bc6c0b228f5f58784723c0", "title": "Temperature insensitive current reference circuit using standard CMOS devices", "text": "In this paper, temperature insensitive current reference circuit is proposed. The reference current value is determined by using the threshold voltage controlled circuit. The main difference from the previous work (Georglou and Toumazou, 2002) is that the circuit can be fabricated by the standard CMOS process. The resistor and the transistor physical parameter temperature dependences are compensated with each other to determine the output reference current. The detail temperature performance is analyzed, and is evaluated by simulations."}
{"_id": "87708f9d0168776227ad537e65c2fc45774e6fa4", "title": "Computational higher-dimensional type theory", "text": "Formal constructive type theory has proved to be an effective language for mechanized proof. By avoiding non-constructive principles, such as the law of the excluded middle, type theory admits sharper proofs and broader interpretations of results. From a computer science perspective, interest in type theory arises from its applications to programming languages. Standard constructive type theories used in mechanization admit computational interpretations based on meta-mathematical normalization theorems. These proofs are notoriously brittle; any change to the theory potentially invalidates its computational meaning. As a case in point, Voevodsky's univalence axiom raises questions about the computational meaning of proofs. \n We consider the question: Can higher-dimensional type theory be construed as a programming language? We answer this question affirmatively by providing a direct, deterministic operational interpretation for a representative higher-dimensional dependent type theory with higher inductive types and an instance of univalence. Rather than being a formal type theory defined by rules, it is instead a computational type theory in the sense of Martin-L\u00f6f's meaning explanations and of the NuPRL semantics. The definition of the type theory starts with programs; types are specifications of program behavior. The main result is a canonicity theorem stating that closed programs of boolean type evaluate to true or false."}
{"_id": "b00d2790c1de04096021f19a10a58402bcee2fc4", "title": "Computational Higher Type Theory IV: Inductive Types", "text": "This is the fourth in a series of papers extending Martin-L\u00f6f\u2019s meaning explanation of dependent type theory to higher-dimensional types. In this installment, we show how to define cubical type systems supporting a general schema of indexed cubical inductive types whose constructors may take dimension parameters and have a specified boundary. Using this schema, we are able to specify and implement many of the higher inductive types which have been postulated in homotopy type theory, including homotopy pushouts, the torus, W-quotients, truncations, arbitrary localizations. By including indexed inductive types, we enable the definition of identity types. The addition of higher inductive types makes computational higher type theory a model of homotopy type theory, capable of interpreting almost all of the constructions in the HoTT Book [40] (with the exception of inductive-inductive types). This is the first such model with an explicit canonicity theorem, which specifies the canonical values of higher inductive types and confirms that every term in an inductive type evaluates to such a value."}
{"_id": "14cd735eada88905c3db3c16216112e214efc4a3", "title": "A Model of Type Theory in Cubical Sets", "text": "We present a model of type theory with dependent product, sum, and identity, in cubical sets. We describe a universe and explain how to transform an equivalence between two types in an equality. We also explain how to model propositional truncation and the circle. While not expressed internally in type theory, the model is expressed in a constructive metalogic. Thus it is a step towards a computational interpretation of Voevodsky\u2019s Univalence Axiom."}
{"_id": "868589060f4ad5a9c8f067a5ad428cd2626bccf4", "title": "A Monolithic CMOS-MEMS 3-Axis Accelerometer With a Low-Noise, Low-Power Dual-Chopper Amplifier", "text": "This paper reports a monolithically integrated CMOS-MEMS three-axis capacitive accelerometer with a single proof mass. An improved DRIE post-CMOS MEMS process has been developed, which provides robust single-crystal silicon (SCS) structures in all three axes and greatly reduces undercut of comb fingers. The sensing electrodes are also composed of the thick SCS layer, resulting in high resolution and large sensing capacitance. Due to the high wiring flexibility provided by the fabrication process, fully differential capacitive sensing and common-centroid configurations are realized in all three axes. A low-noise, low- power dual-chopper amplifier is designed for each axis, which consumes only 1 mW power. With 44.5 dB on-chip amplification, the measured sensitivities of x-, y-, and z-axis accelerometers are 520 mV/g, 460 mV/g, and 320 mV/g, respectively, which can be tuned by simply changing the amplitude of the modulation signal. Accordingly, the overall noise floors of the x-, y-, and z-axis are 12 mug/radicHz , 14 mug/radicHz, and 110 mug/radicHz, respectively, when tested at around 200 Hz."}
{"_id": "18774ff073d775bac3baf6680b1a8226b9113317", "title": "Cyber security of smart grids modeled through epidemic models in cellular automata", "text": "Due to their distributed management, smart grids can be vulnerable to malicious attacks that undermine their cyber security. An adversary can take control of few nodes in the network and spread digital attacks like an infection, whose diffusion is facilitated by the lack of centralized supervision within the smart grid. In this paper, we propose to investigate these phenomena by means of epidemic models applied to cellular automata. We show that the common key parameters of epidemic models, such as the basic reproductive ratio, are also useful in this context to understand the extent of the grid portion that can be compromised. At the same time, the lack of mobility of individuals limits the spreading of the infection. In particular, we evaluate the role of the grid connectivity degree in both containing the epidemics and avoiding its spreading on the entire network, and also increasing the number of nodes that do not get any contact with the cyber attacks."}
{"_id": "09b6ea6bf73f4f9e67624fdcca69758755c28b6e", "title": "A review of overview+detail, zooming, and focus+context interfaces", "text": "There are many interface schemes that allow users to work at, and move between, focused and contextual views of a dataset. We review and categorize these schemes according to the interface mechanisms used to separate and blend views. The four approaches are overview+detail, which uses a spatial separation between focused and contextual views; zooming, which uses a temporal separation; focus+context, which minimizes the seam between views by displaying the focus within the context; and cue-based techniques which selectively highlight or suppress items within the information space. Critical features of these categories, and empirical evidence of their success, are discussed. The aim is to provide a succinct summary of the state-of-the-art, to illuminate both successful and unsuccessful interface strategies, and to identify potentially fruitful areas for further work."}
{"_id": "5fe5c28e98d2da910237629cc039ac88ad6c25bf", "title": "An Experiment of Influences of Facebook Posts in Other Users", "text": "People usually present similar behavior of their friends due to the social influence that is produced by group pressure. This factor is also present in online social networks such as Facebook. In this paper, we make a study on social influence produced by the contents published on Facebook. First, we have executed a systematic literature review of the previous works to understand better this field of research. Then, by executing the Asch experiment, we have illustrated how such social influence can change the behavior of users in Facebook. Through this experiment, we could identify how the social influence type called \u201cconformity\u201d is present in online social network platforms and how this influence can make changes in the behavior of people."}
{"_id": "23947c3cb843274423a8132f30dc30034006ca66", "title": "Use of the stair vision library within the ISPRS 2D semantic labeling benchmark (Vaihingen)", "text": "This report describes experiments conducted using the multi-class image classification framework implemented in the stair vision library (SVL, (Gould et al., 2008)) in the context of the ISPRS 2D semantic labeling benchmark. The motivation was to get results from a well-established and public available software (Gould, 2014), as a kind of baseline. Besides the use of features implemented in the SVL which makes use of three channel images, assuming RGB, we also included features derived from the height model and the NDVI which is specific here, because the benchmark dataset provides surface models and CIR images. Another point of interest concerned the impact the segmentation had on the overall result. To this end a pre-study was performed where different parameters for the graph-based segmentation method introduced by Felzenszwalb and Huttelocher (2004) have been tested, in addition we only applied a simple chessboard segmentation. Other experiments focused on the question whether the conditional random field classification approach helps to enhance the overall performance. The official evaluation of all experiments described here is available at http://www2.isprs.org/vaihingen-2d-semantic-labeling-contest.html (SVL_1 to SVL_6). The normalized height models are available through the ReseachGate profile of the author (http://www.researchgate.net/profile/Markus_Gerke)"}
{"_id": "a1e45d02c98d76c81b7a801c746765c0b5faf4d2", "title": "Rollable Multisegment Dielectric Elastomer Minimum Energy Structures for a Deployable Microsatellite Gripper", "text": "Debris in space presents an ever-increasing problem for spacecraft in Earth orbit. As a step in the mitigation of this issue, the CleanSpace One (CSO) microsatellite has been proposed. Its mission is to perform active debris removal of a decommissioned nanosatellite (the CubeSat SwissCube). An important aspect of this project is the development of the gripper system that will entrap the capture target. We present the development of rollable dielectric elastomer minimum energy structures (DEMES) as the main component of CSO's deployable gripper. DEMES consist of a prestretched dielectric elastomer actuator membrane bonded to a flexible frame. The actuator finds equilibrium in bending when the prestretch is released and the bending angle can be changed by the application of a voltage bias. The inherent flexibility and lightweight nature of the DEMES enables the gripper to be stored in a rolled-up state prior to deployment. We fabricated proof-of-concept actuators of three different geometries using a robust and repeatable fabrication methodology. The resulting actuators were mechanically resilient to external deformation, and display conformability to objects of varying shapes and sizes. Actuator mass is less than 0.65 g and all the actuators presented survived the rolling-up and subsequent deployment process. Our devices demonstrate a maximum change of bending angle of more than 60\u00b0 and a maximum gripping (reaction) force of 2.2 mN for a single actuator."}
{"_id": "ecdbf3e6aae8f11caef57fff9a76a3b5d4095d10", "title": "MSc THESIS Hardware Acceleration of BWA-MEM Genome Mapping Application", "text": "Faculty of Electrical Engineering, Mathematics and Computer Science CE-MS-2014 Next Generation Sequencing technologies have had a tremendous impact on our understanding of DNA and its role in living organisms. The cost of DNA sequencing has decreased drastically over the past decade, leading the way for personalised genomics. Sequencers provide millions of fragments of DNA (termed short reads), which have to be aligned aginst a reference genome to reconstruct the full information of the DNA molecule under study. Processing short reads requires large computational resources, with many specialised computing platforms now being used to accelerate software aligners. We take up the challenge of accelerating a well know sequence alignment tool called Burrows Wheeler aligner on the Convey Hybrid Computing platform, which has FPGAs as co-processors. The focus of the research is to accelerate the BWA-MEM algorithm of the Burrows Wheeler aligner on the Convey HC-2 platform. The implementation was carried out using the Vivado HLS tool. The architectures proposed are targeted to overcome the memory bottleneck of the application. Two architectures are proposed, the Base architecture and the Batch architecture meant to address the memory bottleneck. Simulations were performed for the intended platform and it was found that, the Batch architecture is 18% faster than the Base architecture for reads with similar run time characteristics. The architectures provide possibilities of further pipelining and implementation of more cores, which is expected to provide better performance than the current implementation. Hardware Acceleration of BWA-MEM Genome Mapping Application"}
{"_id": "7e799292e855ea1d1c5c7502f284702363a327ab", "title": "Performance Gaps between OpenMP and OpenCL for Multi-core CPUs", "text": "OpenCL and OpenMP are the most commonly used programming models for multi-core processors. They are also fundamentally different in their approach to parallelization. In this paper, we focus on comparing the performance of OpenCL and OpenMP. We select three applications from the Rodinia benchmark suite (which provides equivalent OpenMP and OpenCL implementations), and carry out experiments with different datasets on three multi-core platforms. We see that the incorrect usage of the multi-core CPUs, the inherent OpenCL fine-grained parallelism, and the immature OpenCL compilers are the main reasons that lead to the OpenCL poorer performance. After tuning the OpenCL versions to be more CPU-friendly, we show that OpenCL either outperforms or achieves similar performance in more than 80% of the cases. Therefore, we believe that OpenCL is a good alternative for multi-core CPU programming."}
{"_id": "a0daf4760ef9094fb20256a6397d2a79f1f8de85", "title": "Alternative placement of bispectral index electrode for monitoring depth of anesthesia during neurosurgery.", "text": "In neurosurgery in particular, the recommended placement of electrodes for monitoring depth of anesthesia during surgery sometimes conflicts with the surgical site or patient positioning. Therefore, we proposed this study to evaluate the agreement and correlation of bispectral index values recorded from the usual frontal area and the alternate, post-auricular areas in neurosurgery patients. Thirty-four patients scheduled for neurosurgery under general anesthesia were included. Bispectral index (BIS) sensors were placed at both the frontal and post-auricular areas. The anesthesia given was clinically adjusted according to the frontal (standard) BIS reading. The BIS values and impedance were recorded;Pearson's correlation and Bland-Altman plots were analyzed. The bias\u00b12SD for the electrode placement before, during, and post-anesthesia were 0\u00b123.32, 1.5\u00b110.69, and 2.1\u00b113.52, while the limits of agreement were -23.3 to 23.3, -12.2 to 9.2, and -17.7 to 13.5, respectively. The correlation coefficient between frontal- and post-auricular-area electrodes was 0.74 with a p-value<0.001.The post-auricular placement of a BIS electrode is a practical alternative to frontal lobe placement. Nevertheless, proper electrode location is important to minimize error."}
{"_id": "831e37389a3dec5287f865beb2de77732aeca8d4", "title": "A survey on virtual machine migration and server consolidation frameworks for cloud data centers", "text": "Modern Cloud Data Centers exploit virtualization for efficient resource management to reduce cloud computational cost and energy budget. Virtualization empowered by virtual machine (VM) migration meets the ever increasing demands of dynamic workload by relocating VMs within Cloud Data Centers. VM migration helps successfully achieve various resource management objectives such as load balancing, power management, fault tolerance, and system maintenance. However, being resourceintensive, the VM migration process rigorously affects application performance unless attended by smart optimization methods. Furthermore, a Cloud Data Centre exploits server consolidation and DVFS methods to optimize energy consumption. This paper reviews state-of-the-art bandwidth optimization schemes, server consolidation frameworks, DVFS-enabled power optimization, and storage optimization methods over WAN links. Through a meticulous literature review of state-of-the-art live VM migration schemes, thematic taxonomies are proposed to categorize the reported literature. The critical aspects of virtual machine migration schemes are investigated through a comprehensive analysis of the existing schemes. The commonalties and differences among existing VM migration schemes are highlighted through a set of parameters derived from the literature. Finally, open research issues and trends in the VM migration domain that necessitate further consideration to develop optimal VM migration schemes"}
{"_id": "605f6e6f1039f5a633515254b9b49c9a1252292b", "title": "Assessing SMS and PJD Schemes of Anti-Islanding with Varying Quality Factor", "text": "When a particular zone of the distribution network containing an embedded generator is disconnected from the main supply grid, yet the generator continues to operate with normal voltage and frequency to feed power to the isolated section, islanding is said to occur. In a distributed power system, islanding may occur at different possible zones consisting of distribution feeders, substations and voltage levels as long as the isolated zone is found to operate independently of the main grid but remained energized by the distributed generation unit. To ensure safety and reliability, it is necessary to detect islanding condition as quickly as possible by using anti-islanding algorithms. Among the available islanding prevention methods, slip mode frequency shift (SMS) and phase jump detection (PJD) schemes are very common and have numerous advantages over others. This paper presents detailed discussion and comparison of the characteristics of the various anti-islanding schemes. Both SMS and PJD techniques are examined in detail through design and simulation. In the islanding situation with quality factor range (0.1les Qf les10), load voltage waveforms are analyzed and plotted. Both the methods are assessed in terms of detection times with different Qf's. The results are complying with IEEE standard specifications and show that the two developed algorithms could prevent islanding more consistently."}
{"_id": "52ba137ece8735a31a71792e7b06da838bc173b4", "title": "Adaptive Image Registration via Hierarchical Voronoi Subdivision", "text": "Advances in image acquisition systems have made it possible to capture high-resolution images of a scene, recording considerable scene details. With increased resolution comes increased image size and geometric difference between multiview images, complicating image registration. Through Voronoi subdivision, we subdivide large images into small corresponding regions, and by registering small regions, we register the images in a piecewise manner. Image subdivision reduces the geometric difference between regions that are registered and simplifies the correspondence process. The proposed method is a hierarchical one. While previous methods use the same block size and shape at a hierarchy, the proposed method adapts the block size and shape to the local image details and geometric difference between the images. This adaptation makes it possible to keep geometric difference between corresponding regions small and simplifies the correspondence process. Implementational details of the proposed image registration method are provided, and experimental results on various types of images are presented and analyzed."}
{"_id": "ccab66ca5d4c7cbd38929161897f94a12593ec0d", "title": "Augmented Human: Augmented Reality and Beyond", "text": "Will Augmented Reality (AR) allow us to access digital information, experience others' stories, and thus explore alternate realities? AR has recently attracted attention again due to the rapid advances in related multimedia technologies as well as various glass-type AR display devices. However, in order to widely adopt AR in alternated realities, it is necessary to improve various core technologies and integrate them into an AR platform. Especially, there are several remaining technical challenges such as 1) real-time recognition and tracking of multiple objects while generating an environment map, 2) organic user interface with awareness of user's implicit needs, intentions or emotion as well as explicit requests, 3) immersive multimodal content augmentation and just-in-time information visualization, 4) multimodal interaction and collaboration during augmented telecommunication, etc. In addition, in order to encourage user engagement and enable an AR ecosystem, AR standards should be established that support creating AR content, capturing user experiences, and sharing the captured experiences."}
{"_id": "59628a3c8a960baddf412d05c1b0af8f4ffeb8fc", "title": "Result Diversification in Automatic Citation Recommendation", "text": "The increase in the number of published papers each year makes manual literature search inefficient and furthermore insufficient. Hence, automatized reference/citation recommendation have been of interest in the last 3-4 decades. Unfortunately, some of the developed approaches, such as keyword-based ones, are prone to ambiguity and synonymy. On the other hand, using the citation information does not suffer from the same problems since they do not consider textual similarity. Today, obtaining the desired information is as hard as looking for a needle in a haystack. And sometimes, we want that small haystack, e.g., a small result set containing only a few recommendations, cover all the important and relevant parts of the literature. That is, the set should be diversified enough. Here, we investigate the problem of result diversification in automatic citation recommendation. We enhance existing techniques, which were designed to recommend a set of citations with satisfactory quality and diversity, with direction-awareness to allow the users to reach either old, well-cited, well-known research papers or recent, less-known ones. We also propose some novel techniques for a better result diversification. Experimental results show that our techniques are very useful in automatic citation recommendation."}
{"_id": "7d6c3786bd582e62d9f321add5cb733a58d9a23c", "title": "Hull-form optimization in calm and rough water", "text": "The paper presents a formal methodology for the hull form optimization in calm and rough water using wash waves and selected dynamic responses, respectively. Parametric hull form modeling is used to generate the variant hull forms with some of the form parameters modified, which are evaluated in the optimization scheme based on evolutionary strategies. Rankine-source panel method and strip theories are used for the hydrodynamic evaluation. The methodology is implemented in the optimization of a double-chine, planing hull form. Furthermore, a dual-stage optimization strategy is applied on a modern fast displacement ferry. The effect of the selected optimization parameters is presented and discussed. Crown Copyright\u00a9 2009 Published by Elsevier Ltd. All rights reserved."}
{"_id": "aba441e7a7ff828bfdae69b8683bab2d75de125d", "title": "NewsVallum: Semantics-Aware Text and Image Processing for Fake News Detection system", "text": "As a consequence of the social revolution we faced on the Web, news and information we daily enjoy may come from different and diverse sources which are not necessarily the traditional ones such as newspapers, either in their paper or online version, television, radio, etc. Everyone on the Web is allowed to produce and share news which can soon become viral if they follow the new media channels represented by social networks. This freedom in producing and sharing news comes with a counter-effect: the proliferation of fake news. Unfortunately, they can be very effective and may influence people and, more generally, the public opinion. We propose a combined approach of natural language and image processing that takes into account the semantics encoded within both text and images coming with news together with contextual information that may help in the classification of a news as fake or not."}
{"_id": "42f73a6160c4bce389c62f381e5a77bd6db23670", "title": "Supply chain risk management in French companies", "text": "Available online 22 November 2011"}
{"_id": "740844739cd791e9784c4fc843beb9174ed0b487", "title": "Bringing contextual information to google speech recognition", "text": "In automatic speech recognition on mobile devices, very often what a user says strongly depends on the particular context he or she is in. The n-grams relevant to the context are often not known in advance. The context can depend on, for example, particular dialog state, options presented to the user, conversation topic, location, etc. Speech recognition of sentences that include these n-grams can be challenging, as they are often not well represented in a language model (LM) or even include out-of-vocabulary (OOV) words. In this paper, we propose a solution for using contextual information to improve speech recognition accuracy. We utilize an on-the-fly rescoring mechanism to adjust the LM weights of a small set of n-grams relevant to the particular context during speech decoding. Our solution handles out of vocabulary words. It also addresses efficient combination of multiple sources of context and it even allows biasing class based language models. We show significant speech recognition accuracy improvements on several datasets, using various types of contexts, without negatively impacting the overall system. The improvements are obtained in both offline and live experiments."}
{"_id": "3c15822455bcf4d824d6f877ec1324d76ba294e1", "title": "SOFIA: An automated security oracle for black-box testing of SQL-injection vulnerabilities", "text": "Security testing is a pivotal activity in engineering secure software. It consists of two phases: generating attack inputs to test the system, and assessing whether test executions expose any vulnerabilities. The latter phase is known as the security oracle problem. \n In this work, we present SOFIA, a Security Oracle for SQL-Injection Vulnerabilities. SOFIA is programming-language and source-code independent, and can be used with various attack generation tools. Moreover, because it does not rely on known attacks for learning, SOFIA is meant to also detect types of SQLi attacks that might be unknown at learning time. The oracle challenge is recast as a one-class classification problem where we learn to characterise legitimate SQL statements to accurately distinguish them from SQLi attack statements. \n We have carried out an experimental validation on six applications, among which two are large and widely-used. SOFIA was used to detect real SQLi vulnerabilities with inputs generated by three attack generation tools. The obtained results show that SOFIA is computationally fast and achieves a recall rate of 100% (i.e., missing no attacks) with a low false positive rate (0.6%)."}
{"_id": "b2697a162b57c72146c512c5aa01ef12e8cdc90c", "title": "Crisis prevention: how to gear up your board.", "text": "Today's critics of corporate boardrooms have plenty of ammunition. The two crucial responsibilities of boards-oversight of long-term company strategy and the selection, evaluation, and compensation of top management--were reduced to damage control during the 1980s. Walter Salmon, a longtime director, notes that while boards have improved since he began serving on them in 1961, they haven't kept pace with the need for real change. Based on over 30 years of boardroom experience, Salmon recommends against government reform of board practices. But he does prescribe a series of incremental changes as a remedy. To begin with, he suggests limiting the size of boards and increasing the number of outside directors on them. In fact, according to Salmon, only three insiders belong on a board: the CEO, the COO, and the CFO. Changing how committees function is also necessary for gearing up today's boards. The audit committee, for example, can periodically review \"high-exposure areas\" of a business, perhaps helping to prevent embarrassing drops in future profits. Compensation committees can structure incentive compensation for executives to emphasize long-term rather than short-term performance. And nominating committees should be responsible for finding new, independent directors--not the CEO. In general, boards as a whole must spot problems early and blow the whistle, exercising what Salmon calls, \"constructive dissatisfaction.\" On a revitalized board, directors have enough confidence in the process to vigorously challenge one another, including the company's chief executive."}
{"_id": "b20f9d47cf3134214e9d04bf09788e70383fa3cb", "title": "Integrated CMOS transmit-receive switch using LC-tuned substrate bias for 2.4-GHz and 5.2-GHz applications", "text": "CMOS transmit-receive (T/R) switches have been integrated in a 0.18-/spl mu/m standard CMOS technology for wireless applications at 2.4 and 5.2 GHz. This switch design achieves low loss and high linearity by increasing the substrate impedance of a MOSFET at the frequency of operation using a properly tuned LC tank. The switch design is asymmetric to accommodate the different linearity and isolation requirements in the transmit and receive modes. In the transmit mode, the switch exhibits 1.5-dB insertion loss, 28-dBm power, 1-dB compression point (P/sub 1dB/), and 30-dB isolation, at 2.4 and 5.2 GHz. In the receive mode, the switch achieves 1.6-dB insertion loss, 11.5-dBm P/sub 1dB/, and 15-dB isolation, at 2.4 and 5.2 GHz. The linearity obtained in the transmit mode is the highest reported to date in a standard CMOS process. The switch passes the 4-kV Human Body Model electrostatic discharge test. These results show that the switch design is suitable for narrow-band applications requiring a moderate-high transmitter power level (<1 W)."}
{"_id": "b49ba7bd01a9c3627efb78489a2e536fdf5ef2f3", "title": "Computational analysis of the safe zone for the antegrade lag screw in posterior column fixation with the anterior approach in acetabular fracture: A cadaveric study.", "text": "OBJECTIVE\nThe fluoroscopically-guided procedure of antegrade posterior lag screw in posterior column fixation through anterior approach is technique-dependent and requires an experienced surgeon. The purpose of this study was to establish the safe zone for the antegrade posterior lag screw by using computational analysis.\n\n\nMETHOD\nThe virtual three-dimensional model of 178 hemi-pelvises was created from the CT data (DICOM format) by using Mimics\u00ae program, and were used to measure the safe zone of antegrade lag screw fixation on the inner table of the iliac wing, and the largest diameter of cylindrical implant inside safe zone. The central point (point A) of the cylinder was assessed and was compared with the intersection point (point B) between the linea terminalis and the anterior border of the sacroiliac articulation.\n\n\nRESULTS\nThe safe zone was triangular with an average area of 670.4mm2 (range, 374.8-1084.5mm2). The largest diameter of the cylinder was a mean 7.4mm (range, 5.0-10.0mm). When height was under 156.3cm, the diameter of the cylindrical implant was smaller than 7.0mm (p<0.001, regression coefficient=0.09). The linear distance between points A and B was 32.5mm (range, 19.2-49.3mm). Point A was far enough away from the well-positioned anterior column plate to prevent collision between the two.\n\n\nCONCLUSION\nThe safe zone was shaped like a triangle, and was large enough for multiple screws. Considering the straight-line distance between points A and B, the central screw can be fixed without overlapping with the well-positioned anterior column plate at the point between holes 2 and 3."}
{"_id": "ee1931a94457cbb0a6203caad54bafe00c20c76b", "title": "Flexible capacitive sensors for high resolution pressure measurement", "text": "Thin, flexible, robust capacitive pressure sensors have been the subject of research in many fields where axial strain sensing with high spatial resolution and pressure resolution is desirable for small loads, such as tactile robotics and biomechanics. Simple capacitive pressure sensors have been designed and implemented on flexible substrates in general agreement with performance predicted by an analytical model. Two designs are demonstrated for comparison. The first design uses standard flex circuit technology, and the second design uses photolithography techniques to fabricate capacitive sensors with higher spatial and higher pressure resolution. Sensor arrays of varying sensor size and spacing are tested with applied loads from 0 to 1 MPa. Pressure resolution and linearity of the sensors are significantly improved with the miniaturized, custom fabricated sensor array compared to standard flexible circuit technology."}
{"_id": "2f9f61991dc09616be25b94d29ad788c32c55942", "title": "On optimal fuzzy c-means clustering for energy efficient cooperative spectrum sensing in cognitive radio networks", "text": "This work explores the scope of Fuzzy C-Means (FCM) clustering on energy detection based cooperative spectrum sensing (CSS) in single primary user (PU) cognitive radio network (CRN). PU signal energy sensed at secondary users (SUs) are forwarded to the fusion center (FC). Two different combining schemes, namely selection combining (SC) and optimal gain combining are performed at FC to address the sensing reliability problem on two different optimization frameworks. In the first work, optimal cluster center points are searched for using differential evolution (DE) algorithm to maximize the probability of detection under the constraint of meeting the probability of false alarm below a predefined threshold. Simulation results highlight the improved sensing reliability compared to the existing works. In the second one, the problem is extended to the energy efficient design of CRN. The SUs act here as amplifyand-forward (AF) relays and PU energy content is measured at the FC over the combined signal from all the SUs. The objective is to minimize the average energy consumption of all SUs while maintaining the predefined sensing \u2217. Santi P. Maity . Email addresses: santipmaity@it.iiests.ac.in (Santi P. Maity ), subhankar.ece@gmail.com (Subhankar Chatterjee), t_acharya@telecom.iiests.ac.in (Tamaghna Acharya) Preprint submitted to Elsevier October 28, 2015 constraints. Optimal FCM clustering using DE determines the optimal SU amplifying gain and the optimal number of PU samples. Simulation results shed a light on the performance gain of the proposed approach compared to the existing energy efficient CSS schemes."}
{"_id": "cb3fcbd32db5a7d1109a2d7e92c0c534b65e1769", "title": "Exploiting semantic web knowledge graphs in data mining", "text": "Data Mining and Knowledge Discovery in Databases (KDD) is a research field concerned with deriving higher-level insights from data. The tasks performed in that field are knowledge intensive and can often benefit from using additional knowledge from various sources. Therefore, many approaches have been proposed in this area that combine Semantic Web data with the data mining and knowledge discovery process. Semantic Web knowledge graphs are a backbone of many information systems that require access to structured knowledge. Such knowledge graphs contain factual knowledge about real word entities and the relations between them, which can be utilized in various natural language processing, information retrieval, and any data mining applications. Following the principles of the Semantic Web, Semantic Web knowledge graphs are publicly available as Linked Open Data. Linked Open Data is an open, interlinked collection of datasets in machine-interpretable form, covering most of the real world domains. In this thesis, we investigate the hypothesis if Semantic Web knowledge graphs can be exploited as background knowledge in different steps of the knowledge discovery process, and different data mining tasks. More precisely, we aim to show that Semantic Web knowledge graphs can be utilized for generating valuable data mining features that can be used in various data mining tasks. Identifying, collecting and integrating useful background knowledge for a given data mining application can be a tedious and time consuming task. Furthermore, most data mining tools require features in propositional form, i.e., binary, nominal or numerical features associated with an instance, while Linked Open Data sources are usually graphs by nature. Therefore, in Part I, we evaluate unsupervised feature generation strategies from types and relations in knowledge graphs, which are used in different data mining tasks, i.e., classification, regression, and outlier detection. As the number of generated features grows rapidly with the number of instances in the dataset, we provide a strategy for feature selection in hierarchical feature space, in order to select only the most informative and most representative features for a given dataset. Furthermore, we provide an end-to-end tool for mining the Web of Linked Data, which provides functionalities for each step of the knowledge discovery process, i.e., linking local data to a Semantic Web knowledge graph, integrating features from multiple knowledge graphs, feature generation and selection, and building machine learning models. However, we show that such feature generation strategies often lead to high dimensional feature vectors even after"}
{"_id": "2c0e1b5db1b6851d95a765a2264bb77f19ee04e1", "title": "Topic Aware Neural Response Generation", "text": "We consider incorporating topic information into a sequenceto-sequence framework to generate informative and interesting responses for chatbots. To this end, we propose a topic aware sequence-to-sequence (TA-Seq2Seq) model. The model utilizes topics to simulate prior human knowledge that guides them to form informative and interesting responses in conversation, and leverages topic information in generation by a joint attention mechanism and a biased generation probability. The joint attention mechanism summarizes the hidden vectors of an input message as context vectors by message attention and synthesizes topic vectors by topic attention from the topic words of the message obtained from a pre-trained LDA model, with these vectors jointly affecting the generation of words in decoding. To increase the possibility of topic words appearing in responses, the model modifies the generation probability of topic words by adding an extra probability item to bias the overall distribution. Empirical studies on both automatic evaluation metrics and human annotations show that TA-Seq2Seq can generate more informative and interesting responses, significantly outperforming state-of-theart response generation models."}
{"_id": "34e2cb7a4fef0651fb5c0a120c8e70ebab9f0749", "title": "It's only a computer: Virtual humans increase willingness to disclose", "text": "Research has begun to explore the use of virtual humans (VHs) in clinical interviews (Bickmore, Gruber, & Picard, 2005). When designed as supportive and \u2018\u2018safe\u2019\u2019 interaction partners, VHs may improve such screenings by increasing willingness to disclose information (Gratch, Wang, Gerten, & Fast, 2007). In health and mental health contexts, patients are often reluctant to respond honestly. In the context of health-screening interviews, we report a study in which participants interacted with a VH interviewer and were led to believe that the VH was controlled by either humans or automation. As predicted, compared to those who believed they were interacting with a human operator, participants who believed they were interacting with a computer reported lower fear of self-disclosure, lower impression management, displayed their sadness more intensely, and were rated by observers as more willing to disclose. These results suggest that automated VHs can help overcome a significant barrier to obtaining truthful patient information. 2014 Elsevier Ltd. All rights reserved."}
{"_id": "85315b64a4c73cb86f156ef5b0a085d6ebc8a65d", "title": "A Neural Conversational Model", "text": "Conversational modeling is an important task in natural language understanding and machine intelligence. Although previous approaches exist, they are often restricted to specific domains (e.g., booking an airline ticket) and require handcrafted rules. In this paper, we present a simple approach for this task which uses the recently proposed sequence to sequence framework. Our model converses by predicting the next sentence given the previous sentence or sentences in a conversation. The strength of our model is that it can be trained end-to-end and thus requires much fewer hand-crafted rules. We find that this straightforward model can generate simple conversations given a large conversational training dataset. Our preliminary results suggest that, despite optimizing the wrong objective function, the model is able to converse well. It is able extract knowledge from both a domain specific dataset, and from a large, noisy, and general domain dataset of movie subtitles. On a domainspecific IT helpdesk dataset, the model can find a solution to a technical problem via conversations. On a noisy open-domain movie transcript dataset, the model can perform simple forms of common sense reasoning. As expected, we also find that the lack of consistency is a common failure mode of our model."}
{"_id": "6128190a8c18cde6b94e0fae934d6fcc406ea0bb", "title": "STAIR Captions: Constructing a Large-Scale Japanese Image Caption Dataset", "text": "In recent years, automatic generation of image descriptions (captions), that is, image captioning, has attracted a great deal of attention. In this paper, we particularly consider generating Japanese captions for images. Since most available caption datasets have been constructed for English language, there are few datasets for Japanese. To tackle this problem, we construct a large-scale Japanese image caption dataset based on images from MS-COCO, which is called STAIR Captions. STAIR Captions consists of 820,310 Japanese captions for 164,062 images. In the experiment, we show that a neural network trained using STAIR Captions can generate more natural and better Japanese captions, compared to those generated using English-Japanese machine translation after generating English captions."}
{"_id": "a5bba7a6363f6abf1783368d02b3928e715fc50e", "title": "A Wearable Computing Prototype for supporting training activities in Automotive Production", "text": "This paper presents the results of the wearable computing prototype supporting trainingand qualification activities at the SKODA production facilities in Czech Republic. The emerged prototype is based upon the first of the 2 main \u201cVariant Production Showcases\u201d (training and assembly-line) which are to be implemented in the WearIT@work project (EC IP 004216). As an introduction, the authors of this paper investigate current training processes at Skoda, and derive the potential benefits and risks of applying wearable computing technology. Accordingly, the approach of creating the wearable prototypes, via usability experiments at the Skoda production site, is explained in detail. As a preliminary result, the first functional prototypes, including a task recognition prototype, based upon the components of the European Wearable Computing Platform, are described. The paper is rounded up by providing a short outlook regarding the second envisaged test case, which is focussed upon selected assembly line operations of blue collar workers."}
{"_id": "c3fd956dc501a8cc33c852eefdd4e2244bb396b4", "title": "Towards Internet of Things (IOTS): Integration of Wireless Sensor Network to Cloud Services for Data Collection and Sharing", "text": "Cloud computing provides great benefits for applications hosted on the Web that also have special computational and storage requirements. This paper proposes an extensible and flexible architecture for integrating Wireless Sensor Networks with the Cloud. We have used REST based Web services as an interoperable application layer that can be directly integrated into other application domains for remote monitoring such as e-health care services, smart homes, or even vehicular area networks (VAN). For proof of concept, we have implemented a REST based Web services on an IP based low power WSN test bed, which enables data access from anywhere. The alert feature has also been implemented to notify users via email or tweets for monitoring data when they exceed values and events of interest."}
{"_id": "0a83be1e2db4f42b97f65b5deac73637468605c8", "title": "Broadband biquad UHF antenna array for DOA", "text": "In this contribution a broadband biquad antenna array for DOA applications is investigated. Simulation and measurement results of the single broadband biquad antenna element and results of antenna array configurations for vertical polarisation, horizontal polarisation or polarisation independant direction of arrival estimation using different algorithms are shown."}
{"_id": "e1a9fab83821aec018d593d63362c25ebb2b305a", "title": "Dynablock: Dynamic 3D Printing for Instant and Reconstructable Shape Formation", "text": "This paper introduces Dynamic 3D Printing, a fast and reconstructable shape formation system. Dynamic 3D Printing can assemble an arbitrary three-dimensional shape from a large number of small physical elements. Also, it can disassemble the shape back to elements and reconstruct a new shape. Dynamic 3D Printing combines the capabilities of 3D printers and shape displays: Like conventional 3D printing, it can generate arbitrary and graspable three-dimensional shapes, while allowing shapes to be rapidly formed and reformed as in a shape display. To demonstrate the idea, we describe the design and implementation of Dynablock, a working prototype of a dynamic 3D printer. Dynablock can form a three-dimensional shape in seconds by assembling 3,000 9 mm blocks, leveraging a 24 x 16 pin-based shape display as a parallel assembler. Dynamic 3D printing is a step toward achieving our long-term vision in which 3D printing becomes an interactive medium, rather than the means for fabrication that it is today. In this paper, we explore possibilities for this vision by illustrating application scenarios that are difficult to achieve with conventional 3D printing or shape display systems."}
{"_id": "dd47d8547382541fe3f589e4844f232ea817fd40", "title": "Design and Fabrication of Integrated Magnetic MEMS Energy Harvester for Low Frequency Applications", "text": "An integrated vibration-to-electrical MEMS electromagnetic energy harvester with low resonant frequency is designed, simulated, fabricated and tested. Special structures and magnet arrays of the device are designed and simulated for favorable low frequency performance. Both FEM simulations and system-level simulations are conducted to investigate the induced voltage of the device. Two types of harvester are fabricated with different magnet arrays and the magnetic field of the two harvesters is investigated. With the combination of the microfabricated metal structures and the electroplated CoNiMnP permanent micro magnets, the designed harvesters are of small size (5 mm \u00d75 mm \u00d7 0.53 mm) without manual assembly of conventional bulk magnets. The output power density is 0.03 \u03bcW/cm3 with optimized resistance load at 64 Hz. This magnetic harvester is suitable for batch fabrication through MEMS process and can be utilized to drive microelectronic devices, such as micro implantable and portable devices."}
{"_id": "4967146be41727e69ec303c34d341b13907c9c36", "title": "Network bandwidth predictor (NBP): a system for online network performance forecasting", "text": "The applicability of network-based computing depends on the availability of the underlying network bandwidth. However, network resources are shared and the available network bandwidth varies with time. There is no satisfactory solution available for network performance predictions. In this research, we propose, design, and implement the NBP (network bandwidth predictor) for rapid network performance prediction. NBP is a new system that employs a neural network based approach for network bandwidth forecasting. This system is designed to integrate with most advanced technologies. It employs the NWS (network weather service) monitoring subsystem to measure the network traffic, and provides an improved, more accurate performance prediction than that of NWS, especially with applications with a network usage pattern. The NBP system has been tested on real time data collected by NWS monitoring subsystem and on trace files. Experimental results confirm that NBP has an improved prediction."}
{"_id": "4d8340eae2c98ab5e0a3b1a7e071a7ddb9106cff", "title": "A Framework for Multiple-Instance Learning", "text": "Multiple-instance learning is a variation on supervised learning, where the task is to learn a concept given positive and negative bags of instances. Each bag may contain many instances, but a bag is labeled positive even if only one of the instances in it falls within the concept. A bag is labeled negative only if all the instances in it are negative. We describe a new general framework, called Diverse Density, for solving multiple-instance learning problems. We apply this framework to learn a simple description of a person from a series of images (bags) containing that person, to a stock selection problem, and to the drug activity prediction problem."}
{"_id": "aae470464cf0a7d6f8a0e71f836ffdd9b1ae4784", "title": "Extracting a biologically relevant latent space from cancer transcriptomes with variational autoencoders", "text": "The Cancer Genome Atlas (TCGA) has profiled over 10,000 tumors across 33 different cancer-types for many genomic features, including gene expression levels. Gene expression measurements capture substantial information about the state of each tumor. Certain classes of deep neural network models are capable of learning a meaningful latent space. Such a latent space could be used to explore and generate hypothetical gene expression profiles under various types of molecular and genetic perturbation. For example, one might wish to use such a model to predict a tumor's response to specific therapies or to characterize complex gene expression activations existing in differential proportions in different tumors. Variational autoencoders (VAEs) are a deep neural network approach capable of generating meaningful latent spaces for image and text data. In this work, we sought to determine the extent to which a VAE can be trained to model cancer gene expression, and whether or not such a VAE would capture biologically-relevant features. In the following report, we introduce a VAE trained on TCGA pan-cancer RNA-seq data, identify specific patterns in the VAE encoded features, and discuss potential merits of the approach. We name our method \"Tybalt\" after an instigative, cat-like character who sets a cascading chain of events in motion in Shakespeare's \"Romeo and Juliet\". From a systems biology perspective, Tybalt could one day aid in cancer stratification or predict specific activated expression patterns that would result from genetic changes or treatment effects."}
{"_id": "3f3fae90dbafea2272677883879195b435596890", "title": "Review of Design of Speech Recognition and Text Analytics based Digital Banking Customer Interface and Future Directions of Technology Adoption", "text": "Banking is one of the most significant adopters of cutting-edge information technologies. Since its modern era beginning in the form of paper based accounting maintained in the branch, adoption of computerized system made it possible to centralize the processing in data centre and improve customer experience by making a more available and efficient system. The latest twist in this evolution is adoption of natural language processing and speech recognition in the user interface between the human and the system and use of machine learning and advanced analytics, in general, for backend processing as well. The paper reviews the progress of technology adoption in the field and comments on the maturity level of solutions involving less studied or lowresource languages like Hindi and also other Indian, regional languages. Furthermore, it also provides an analysis from a prototype built by us. The future directions of this area are also highlighted."}
{"_id": "7c5c8800e9603fc8c7e04d50fb8a642a9c9dd3bf", "title": "Specialized Support Vector Machines for open-set recognition", "text": "Often, when dealing with real-world recognition problems, we do not need, and often cannot have, knowledge of the entire set of possible classes that might appear during operational testing. Sometimes, some of these classes may be ill-sampled, not sampled at all or undefined. In such cases, we need to think of robust classification methods able to deal with the \u201cunknown\u201d and properly reject samples belonging to classes never seen during training. Notwithstanding, almost all existing classifiers to date were mostly developed for the closed-set scenario, i.e., the classification setup in which it is assumed that all test samples belong to one of the classes with which the classifier was trained. In the open-set scenario, however, a test sample can belong to none of the known classes and the classifier must properly reject it by classifying it as unknown. In this work, we extend upon the wellknown Support Vector Machines (SVM) classifier and introduce the Specialized Support Vector Machines (SSVM), which is suitable for recognition in open-set setups. SSVM balances the empirical risk and the risk of the unknown and ensures that the region of the feature space in which a test sample would be classified as known (one of the known classes) is always bounded, ensuring a finite risk of the unknown. The same cannot be guaranteed by the traditional SVM formulation, even when using the Radial Basis Function (RBF) kernel. In this work, we also highlight the properties of the SVM classifier related to the open-set scenario, and provide necessary and sufficient conditions for an RBF SVM to have bounded open-space risk. We also indicate promising directions of investigation of SVM-based methods for open-set scenarios. An extensive set of experiments compares the proposed method with existing solutions in the literature for open-set recognition and the reported results show its effectiveness."}
{"_id": "4cac4976a7f48ff8636d341bc40048ba313c98cd", "title": "Polysemy in Sentence Comprehension: Effects of Meaning Dominance.", "text": "Words like church are polysemous, having two related senses (a building and an organization). Three experiments investigated how polysemous senses are represented and processed during sentence comprehension. On one view, readers retrieve an underspecified, core meaning, which is later specified more fully with contextual information. On another view, readers retrieve one or more specific senses. In a reading task, context that was neutral or biased towards a particular sense preceded a polysemous word. Disambiguating material consistent with only one sense followed, in a second sentence (Experiment 1) or the same sentence (Experiments 2 & 3). Reading the disambiguating material was faster when it was consistent with that context, and dominant senses were committed to more strongly than subordinate senses. Critically, following neutral context, the continuation was read more quickly when it selected the dominant sense, and the degree of sense dominance partially explained the reading time advantage. Similarity of the senses also affected reading times. Across experiments, we found that sense selection may not be completed immediately following a polysemous word but is completed at a sentence boundary. Overall, the results suggest that readers select an individual sense when reading a polysemous word, rather than a core meaning."}
{"_id": "587ad4f178340d7def5437fe5ba0ab0041a53c4b", "title": "Ocular symptom detection using smartphones", "text": "It is often said that \"the eyes are the windows to the soul\". Although this usually metaphorically refers to a person's emotional state, there is literal truth in this statement with reference to a person's biological state. The eyes are dynamic and sensitive organs that are connected to the rest of the body via the circulatory and nervous systems. This means that diseases originating from an organ within the body can indirectly affect the eyes. Although the eyes may not be the origin for many such diseases, they are visible to external observers, making them an accessible point of diagnosis. For instance, broken blood vessels in the sclera could indicate a blood clotting disorder or a vitamin K deficiency. Instead of \"windows to the soul\", the eyes are \"sensors to the body\" that can be monitored to screen for certain diseases."}
{"_id": "acdc3d8d8c880bc9b9e10b337b09bed4c0c762d8", "title": "EMBROIDERED FULLY TEXTILE WEARABLE AN- TENNA FOR MEDICAL MONITORING APPLICATIONS", "text": "Telecommunication systems integrated within garments and wearable products are such methods by which medical devices are making an impact on enhancing healthcare provisions around the clock. These garments when fully developed will be capable of alerting and demanding attention if and when required along with minimizing hospital resources and labour. Furthermore, they can play a major role in preventative ailments, health irregularities and unforeseen heart or brain disorders in apparently healthy individuals. This work presents the feasibility of investigating an Ultra-WideBand (UWB) antenna made from fully textile materials that were used for the substrate as well as the conducting parts of the designed antenna. Simulated and measured results show that the proposed antenna design meets the requirements of wide working bandwidth and provides 17GHz bandwidth with compact size, washable and flexible materials. Results in terms of return loss, bandwidth, radiation pattern, current distribution as well as gain and efficiency are presented to validate the usefulness of the current manuscript design. The work presented here has profound implications for future studies of a standalone suite that may one day help to provide wearer (patient) with such reliable and comfortable medical monitoring techniques. Received 12 April 2011, Accepted 23 May 2011, Scheduled 10 June 2011 * Corresponding author: Mai A. Rahman Osman (mai.rahman@fkegraduate.utm.my)."}
{"_id": "26a686811d8aba081dd56ebba4d6e691fb301b9a", "title": "Health literacy and public health: A systematic review and integration of definitions and models", "text": "BACKGROUND\nHealth literacy concerns the knowledge and competences of persons to meet the complex demands of health in modern society. Although its importance is increasingly recognised, there is no consensus about the definition of health literacy or about its conceptual dimensions, which limits the possibilities for measurement and comparison. The aim of the study is to review definitions and models on health literacy to develop an integrated definition and conceptual model capturing the most comprehensive evidence-based dimensions of health literacy.\n\n\nMETHODS\nA systematic literature review was performed to identify definitions and conceptual frameworks of health literacy. A content analysis of the definitions and conceptual frameworks was carried out to identify the central dimensions of health literacy and develop an integrated model.\n\n\nRESULTS\nThe review resulted in 17 definitions of health literacy and 12 conceptual models. Based on the content analysis, an integrative conceptual model was developed containing 12 dimensions referring to the knowledge, motivation and competencies of accessing, understanding, appraising and applying health-related information within the healthcare, disease prevention and health promotion setting, respectively.\n\n\nCONCLUSIONS\nBased upon this review, a model is proposed integrating medical and public health views of health literacy. The model can serve as a basis for developing health literacy enhancing interventions and provide a conceptual basis for the development and validation of measurement tools, capturing the different dimensions of health literacy within the healthcare, disease prevention and health promotion settings."}
{"_id": "95ccdddc760a29d66955ef475a51f9262778f985", "title": "Tomographic reconstruction of piecewise smooth images", "text": "In computed tomography, direct inversion of the Radon transform requires more projections than are practical due to constraints in scan time and image accessibility. Therefore, it is necessary to consider the estimation of reconstructed images when the problem is under-constrained, i.e., when a unique solution does not exist. To resolve ambiguities among solutions, it is necessary to place additional constraints on the reconstructed image. In this paper, we present a surface evolution technique to model the reconstructed image as piecewise smooth. We model the reconstructed image as two regions that are each smoothly varying in intensity and are separated by a smooth surface. We define a cost functional to penalize deviation from piecewise smoothness while ensuring that the projections of the estimated image match the measured projections. From this functional, we derive an evolution for the modeled image intensity and an evolution for the surface, thereby defining a variational tomographic estimation technique. We show example reconstructions to highlight the performance of the proposed method on real medical images."}
{"_id": "a28aab171a7951a48f31e36ef92d309c2c4f0265", "title": "Clustering and Summarizing Protein-Protein Interaction Networks: A Survey", "text": "The increasing availability and significance of large-scale protein-protein interaction (PPI) data has resulted in a flurry of research activity to comprehend the organization, processes, and functioning of cells by analyzing these data at network level. Network clustering, that analyzes the topological and functional properties of a PPI network to identify clusters of interacting proteins, has gained significant popularity in the bioinformatics as well as data mining research communities. Many studies since the last decade have shown that clustering PPI networks is an effective approach for identifying functional modules, revealing functions of unknown proteins, etc. In this paper, we examine this issue by classifying, discussing, and comparing a wide ranging approaches proposed by the bioinformatics community to cluster PPI networks. A pervasive desire of this review is to emphasize the uniqueness of the network clustering problem in the context of PPI networks and highlight why generic network clustering algorithms proposed by the data mining community cannot be directly adopted to address this problem effectively. We also review a closely related problem to PPI network clustering, network summarization, which can enable us to make sense out of the information contained in large PPI networks by generating multi-level functional summaries."}
{"_id": "1a5ced36faee7ae1d0316c0461c50f4ea1317fad", "title": "Find me if you can: improving geographical prediction with social and spatial proximity", "text": "Geography and social relationships are inextricably intertwined; the people we interact with on a daily basis almost always live near us. As people spend more time online, data regarding these two dimensions -- geography and social relationships -- are becoming increasingly precise, allowing us to build reliable models to describe their interaction. These models have important implications in the design of location-based services, security intrusion detection, and social media supporting local communities.\n Using user-supplied address data and the network of associations between members of the Facebook social network, we can directly observe and measure the relationship between geography and friendship. Using these measurements, we introduce an algorithm that predicts the location of an individual from a sparse set of located users with performance that exceeds IP-based geolocation. This algorithm is efficient and scalable, and could be run on a network containing hundreds of millions of users."}
{"_id": "6a3aaa22977577d699a5b081dcdf7e093c857fec", "title": "Syntax-Tree Regular Expression Based DFA FormalConstruction", "text": "Compiler is a program whose functionality is to translate a computer program written in source language into an equivalent machine code. Compiler construction is an advanced research area because of its size and complexity. The source codes are in higher level languages which are usually complex and, consequently, increase the level of abstraction. Due to such reasons, design and construction of error free compiler is a challenge of the twenty first century. Verification of a source program does not guarantee about correctness of code generated because the bugs in compiler may lead to an incorrect target program. Therefore, verification of compiler is more important than verifying the source programs. Lexical analyzer is a main phase of compiler used for scanning input and grouping into sequence of tokens. In this paper, formal construction of deterministic finite automata (DFA) based on regular expression is presented as a part of lexical analyzer. At first, syntax tree is described based on the augmented regular expression. Then formal description of important operators, checking null-ability and computing first and last positions of internal nodes of the tree is described. In next, the transition diagram is described from the follow positions and converted into deterministic finite automata by defining a relationship among syntax tree, transition diagram and DFA. Formal specification of the procedure is described using Z notation and model analysis is provided using Z/Eves toolset."}
{"_id": "0971a5e835f365b6008177a867cfe4bae76841a5", "title": "Supervised Dictionary Learning by a Variational Bayesian Group Sparse Nonnegative Matrix Factorization", "text": "Nonnegative matrix factorization (NMF) with group sparsity constraints is formulated as a probabilistic graphical model and, assuming some observed data have been generated by the model, a feasible variational Bayesian algorithm is derived for learning model parameters. When used in a supervised learning scenario, NMF is most often utilized as an unsupervised feature extractor followed by classification in the obtained feature subspace. Having mapped the class labels to a more general concept of groups which underlie sparsity of the coefficients, what the proposed group sparse NMF model allows is incorporating class label information to find low dimensional label-driven dictionaries which not only aim to represent the data faithfully, but are also suitable for class discrimination. Experiments performed in face recognition and facial expression recognition domains point to advantages of classification in such label-driven feature subspaces over classification in feature subspaces obtained in an unsupervised manner."}
{"_id": "5ed3dc40cb2355461d126a51c00846a707e85aa3", "title": "Acute Sexual Assault in the Pediatric and Adolescent Population.", "text": "Children and adolescents are at high risk for sexual assault. Early medical and mental health evaluation by professionals with advanced training in sexual victimization is imperative to assure appropriate assessment, forensic evidence collection, and follow-up. Moreover, continued research and outreach programs are needed for the development of preventative strategies that focus on this vulnerable population. In this review we highlight key concepts for assessment and include a discussion of risk factors, disclosure, sequelae, follow-up, and prevention."}
{"_id": "89f256abf0e0187fcf0a56a4df6f447a2c0b17bb", "title": "Listening Comprehension over Argumentative Content", "text": "This paper presents a task for machine listening comprehension in the argumentation domain and a corresponding dataset in English. We recorded 200 spontaneous speeches arguing for or against 50 controversial topics. For each speech, we formulated a question, aimed at confirming or rejecting the occurrence of potential arguments in the speech. Labels were collected by listening to the speech and marking which arguments were mentioned by the speaker. We applied baseline methods addressing the task, to be used as a benchmark for future work over this dataset. All data used in this work is freely available for research."}
{"_id": "7e209751ae6f0e861a7763d3d22533b39aabd7eb", "title": "MauveDB: supporting model-based user views in database systems", "text": "Real-world data --- especially when generated by distributed measurement infrastructures such as sensor networks --- tends to be incomplete, imprecise, and erroneous, making it impossible to present it to users or feed it directly into applications. The traditional approach to dealing with this problem is to first process the data using statistical or probabilistic models that can provide more robust interpretations of the data. Current database systems, however, do not provide adequate support for applying models to such data, especially when those models need to be frequently updated as new data arrives in the system. Hence, most scientists and engineers who depend on models for managing their data do not use database systems for archival or querying at all; at best, databases serve as a persistent raw data store.In this paper we define a new abstraction called model-based views and present the architecture of MauveDB, the system we are building to support such views. Just as traditional database views provide logical data independence, model-based views provide independence from the details of the underlying data generating mechanism and hide the irregularities of the data by using models to present a consistent view to the users. MauveDB supports a declarative language for defining model-based views, allows declarative querying over such views using SQL, and supports several different materialization strategies and techniques to efficiently maintain them in the face of frequent updates. We have implemented a prototype system that currently supports views based on regression and interpolation, using the Apache Derby open source DBMS, and we present results that show the utility and performance benefits that can be obtained by supporting several different types of model-based views in a database system."}
{"_id": "13b6eeb28328252a35cdcbe3ab8d09d2a9caf99d", "title": "Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora", "text": "We introduce (1) a novel stochastic inversion transduction grammar formalism for bilingual language modeling of sentence-pairs, and (2) the concept of bilingual parsing with a variety of parallel corpus analysis applications. Aside from the bilingual orientation, three major features distinguish the formalism from the finite-state transducers more traditionally found in computational linguistics: it skips directly to a context-free rather than finite-state base, it permits a minimal extra degree of ordering flexibility, and its probabilistic formulation admits an efficient maximum-likelihood bilingual parsing algorithm. A convenient normal form is shown to exist. Analysis of the formalism's expressiveness suggests that it is particularly well suited to modeling ordering shifts between languages, balancing needed flexibility against complexity constraints. We discuss a number of examples of how stochastic inversion transduction grammars bring bilingual constraints to bear upon problematic corpus analysis tasks such as segmentation, bracketing, phrasal alignment, and parsing."}
{"_id": "9c7221847823926af3fc10051f6d5864af2eae9d", "title": "Evaluation Techniques and Systems for Answer Set Programming: a Survey", "text": "Answer set programming (ASP) is a prominent knowledge representation and reasoning paradigm that found both industrial and scientific applications. The success of ASP is due to the combination of two factors: a rich modeling language and the availability of efficient ASP implementations. In this paper we trace the history of ASP systems, describing the key evaluation techniques and their implementation in actual tools."}
{"_id": "292a2b2ad9a02e474f45f1d3b163bf47a7c874e5", "title": "Feature Detection with Automatic Scale Selection", "text": "The fact that objects in the world appear in different ways depending on the scale of observation has important implications if one aims at describing them. It shows that the notion of scale is of utmost importance when processing unknown measurement data by automatic methods. In their seminal works, Witkin (1983) and Koenderink (1984) proposed to approach this problem by representing image structures at different scales in a so-called scale-space representation. Traditional scale-space theory building on this work, however, does not address the problem of how to select local appropriate scales for further analysis. This article proposes a systematic methodology for dealing with this problem. A framework is presented for generating hypotheses about interesting scale levels in image data, based on a general principle stating that local extrema over scales of different combinations of \u03b3-normalized derivatives are likely candidates to correspond to interesting structures. Specifically, it is shown how this idea can be used as a major mechanism in algorithms for automatic scale selection, which adapt the local scales of processing to the local image structure. Support for the proposed approach is given in terms of a general theoretical investigation of the behaviour of the scale selection method under rescalings of the input pattern and by integration with different types of early visual modules, including experiments on real-world and synthetic data. Support is also given by a detailed analysis of how different types of feature detectors perform when integrated with a scale selection mechanism and then applied to characteristic model patterns. Specifically, it is described in detail how the proposed methodology applies to the problems of blob detection, junction detection, edge detection, ridge detection and local frequency estimation. In many computer vision applications, the poor performance of the low-level vision modules constitutes a major bottleneck. It is argued that the inclusion of mechanisms for automatic scale selection is essential if we are to construct vision systems to automatically analyse complex unknown environments."}
{"_id": "6b2e3c9b32e92dbbdd094d2bd88eb60a80c3083d", "title": "A Combined Corner and Edge Detector", "text": "The problem we are addressing in Alvey Project MMI149 is that of using computer vision to understand the unconstrained 3D world, in which the viewed scenes will in general contain too wide a diversity of objects for topdown recognition techniques to work. For example, we desire to obtain an understanding of natural scenes, containing roads, buildings, trees, bushes, etc., as typified by the two frames from a sequence illustrated in Figure 1. The solution to this problem that we are pursuing is to use a computer vision system based upon motion analysis of a monocular image sequence from a mobile camera. By extraction and tracking of image features, representations of the 3D analogues of these features can be constructed."}
{"_id": "7756c24837b1f9ca3fc5be4ce7b4de0fcf9de8e6", "title": "Singularity detection and processing with wavelets", "text": "Most of a signal information is often carried by irregular structures and transient phenomena. The mathematical characterization of singularities with Lipschitz exponents is explained. Theorems are reviewed that estimate local Lipschitz exponents of functions from the evolution across scales of their wavelet transform. It is then proven that the local maxima of the wavelet transform modulus detect the locations of irregular structures and provide numerical procedures to compute their Lipschitz exponents. The wavelet transform of singularities with fast oscillations have a particular behavior that is studied separately. The local frequency of such oscillations are measured from the wavelet transform modulus maxima. It has been shown numerically that oneand two-dimensional signals can be reconstructed, with a good approximation, from the local maxima of their wavelet transform modulus. As an application, an algorithm is developed that removes white noises from signals by analyzing the evolution of the wavelet transform maxima across scales. In two-dimensions, the wavelet transform maxima indicate the location of edges in images. The denoising algorithm is extended for image enhancement."}
{"_id": "8930f62a4b5eb1cbabf224cf84aa009ea798cfee", "title": "Modeling Visual Attention via Selective Tuning", "text": "A model for aspects of visual attention based on the concept of selective tuning is presented. It provides for a solution to the problems of selection in an image, information routing through the visual processing hierarchy and task-specific attentional bias. The central thesis is that attention acts to optimize the search procedure inherent in a solution to vision. It does so by selectively tuning the visual processing network which is accomplished by a top-down hierarchy of winner-take-all processes embedded within the visual processing pyramid. Comparisons to other major computational models of attention and to the relevant neurobiology are included in detail throughout the paper. The model has been implemented; several examples of its performance are shown. This model is a hypothesis for primate visual attention, but it also outperforms existing computational solutions for attention in machine vision and is highly appropriate to solving the problem in a robot vision system."}
{"_id": "f4c83b78c787e01d3d4e58ae18c8f3ab5933a02b", "title": "3D positional integration from image sequences", "text": "An explicit three-dimensional representation is constructed from feature-points extracted from a sequence of images taken by a moving camera. The points are tracked through the sequence, and their 3D locations accurately determined by use of Kalman Filters. The ego-motion of the camera is also solved for."}
{"_id": "253906bf2e541e433131ae304186ae3a0964c528", "title": "SIDES: a cooperative tabletop computer game for social skills development", "text": "This paper presents a design case study of SIDES: Shared Interfaces to Develop Effective Social Skills. SIDES is a tool designed to help adolescents with Asperger's Syndrome practice effective group work skills using a four-player cooperative computer game that runs on tabletop technology. We present the design process and evaluation of SIDES conducted over six months with a middle school social group therapy class. Our findings indicate that cooperative tabletop computer games are a motivating and supportive tool for facilitating effective group work among our target population and reveal several design lessons to inform the development of similar systems."}
{"_id": "1a5ef67211cd6cdab5e8ea1d19976f23043d588e", "title": "On Controllable Sparse Alternatives to Softmax", "text": "Converting an n-dimensional vector to a probability distribution over n objects is a commonly used component in many machine learning tasks like multiclass classification, multilabel classification, attention mechanisms etc. For this, several probability mapping functions have been proposed and employed in literature such as softmax, sum-normalization, spherical softmax, and sparsemax, but there is very little understanding in terms how they relate with each other. Further, none of the above formulations offer an explicit control over the degree of sparsity. To address this, we develop a unified framework that encompasses all these formulations as special cases. This framework ensures simple closed-form solutions and existence of sub-gradients suitable for learning via backpropagation. Within this framework, we propose two novel sparse formulations, sparsegen-lin and sparsehourglass, that seek to provide a control over the degree of desired sparsity. We further develop novel convex loss functions that help induce the behavior of aforementioned formulations in the multilabel classification setting, showing improved performance. We also demonstrate empirically that the proposed formulations, when used to compute attention weights, achieve better or comparable performance on standard seq2seq tasks like neural machine translation and abstractive summarization."}
{"_id": "86824ba374daff17ff2235484055b3bb9c464555", "title": "S-CNN: Subcategory-Aware Convolutional Networks for Object Detection", "text": "The marriage between the deep convolutional neural network (CNN) and region proposals has made breakthroughs for object detection in recent years. While the discriminative object features are learned via a deep CNN for classification, the large intra-class variation and deformation still limit the performance of the CNN based object detection. We propose a subcategory-aware CNN (S-CNN) to solve the object intra-class variation problem. In the proposed technique, the training samples are first grouped into multiple subcategories automatically through a novel instance sharing maximum margin clustering process. A multi-component Aggregated Channel Feature (ACF) detector is then trained to produce more latent training samples, where each ACF component corresponds to one clustered subcategory. The produced latent samples together with their subcategory labels are further fed into a CNN classifier to filter out false proposals for object detection. An iterative learning algorithm is designed for the joint optimization of image subcategorization, multi-component ACF detector, and subcategory-aware CNN classifier. Experiments on INRIA Person dataset, Pascal VOC 2007 dataset and MS COCO dataset show that the proposed technique clearly outperforms the state-of-the-art methods for generic object detection."}
{"_id": "0fb4bdf126623c4f252991db3277e22c37b0ae82", "title": "Revealing protein\u2013lncRNA interaction", "text": "Long non-coding RNAs (lncRNAs) are associated to a plethora of cellular functions, most of which require the interaction with one or more RNA-binding proteins (RBPs); similarly, RBPs are often able to bind a large number of different RNAs. The currently available knowledge is already drawing an intricate network of interactions, whose deregulation is frequently associated to pathological states. Several different techniques were developed in the past years to obtain protein-RNA binding data in a high-throughput fashion. In parallel, in silico inference methods were developed for the accurate computational prediction of the interaction of RBP-lncRNA pairs. The field is growing rapidly, and it is foreseeable that in the near future, the protein-lncRNA interaction network will rise, offering essential clues for a better understanding of lncRNA cellular mechanisms and their disease-associated perturbations."}
{"_id": "371b07d65891b03eaae15c2865da2a6751a99bb8", "title": "gHull: A GPU algorithm for 3D convex hull", "text": "A novel algorithm is presented to compute the convex hull of a point set in \u211d3 using the graphics processing unit (GPU). By exploiting the relationship between the Voronoi diagram and the convex hull, the algorithm derives the approximation of the convex hull from the former. The other extreme vertices of the convex hull are then found by using a two-round checking in the digital and the continuous space successively. The algorithm does not need explicit locking or any other concurrency control mechanism, thus it can maximize the parallelism available on the modern GPU.\n The implementation using the CUDA programming model on NVIDIA GPUs is exact and efficient. The experiments show that it is up to an order of magnitude faster than other sequential convex hull implementations running on the CPU for inputs of millions of points. The works demonstrate that the GPU can be used to solve nontrivial computational geometry problems with significant performance benefit."}
{"_id": "8728a963eedc329b363bfde61d93fa7446bcfffd", "title": "gTRICLUSTER: A More General and Effective 3D Clustering Algorithm for Gene-Sample-Time Microarray Data", "text": "Clustering is an important technique in microarray data analysis, and mining three-dimensional (3D) clusters in gene-sample-time (simply GST) microarray data is emerging as a hot research topic in this area. A 3D cluster consists of a subset of genes that are coherent on a subset of samples along a segment of time series. This kind of coherent clusters may contain information for the users to identify useful phenotypes, potential genes related to these phenotypes and their expression rules. TRICLUSTER is the state-of-the-art 3D clustering algorithm for GST microarray data. In this paper, we propose a new algorithm to mine 3D clusters over GST microarray data. We term the new algorithm gTRICLUSTER because it is based on a more general 3D cluster model than the one that TRICLUSTER is based on. gTRICLUSTER can find more biologically meaningful coherent gene clusters than TRICLUSTER can do. It also outperforms TRICLUSTER in robustness to noise. Experimental results on a real-world microarray dataset validate the effectiveness of the proposed new algorithm."}
{"_id": "70776e0e21ebbf96dd02d579cbb7152eaccf4d7f", "title": "Validation of the Dutch Version of the CDC Core Healthy Days Measures in a Community Sample", "text": "An important disadvantage of most indicators of health related quality of life used in public health surveillance is their length. In this study the authors investigated the reliability and validity of a short indicator of health related quality of life, the Dutch version of the four item \u2018CDC Core Healthy Days Measures\u2019 (CDC HRQOL-4). The reliability was evaluated by calculating Cronbach\u2019s alpha of the CDC HRQOL-4. The concurrent validity was tested by comparing the CDC HRQOL-4 with three other indicators of health related quality of life, the SF-36, the WHOQoL-BREF and the GHQ-12. The construct validity was evaluated by assessing the ability of the CDC HRQOL-4 to discriminate between respondents with and without a (non-mental) chronic condition, depression, a visit to the general practitioner and use of prescription drugs. Randomly sampled respondents from the city of Utrecht were asked to fill in a questionnaire. 659 respondents (response rate 45%) completed the questionnaire. Participants represented the adult, non-institutionalised population of the city of Utrecht, the Netherlands: 58% women; mean age 41\u00a0years; 15% of non-Dutch origin. The reliability of the CDC HRQOL-4 was good. Cronbach\u2019s alpha of three of the four CDC HRQOL-4-items was 0.77 which is good for internal consistent scales. The concurrent validity was good. The four items of the CDC HRQOL-4 showed higher correlations with their corresponding domains of the other instrument than the other domains. Comparison of respondents with or without a chronic condition, depression, visit to the GP and use of prescription drugs produced evidence for an excellent construct validity of the CDC HRQOL-4."}
{"_id": "6ef623c853814ee9add50c32491367e0dff983d1", "title": "Hardware/Software Codesign: The Past, the Present, and Predicting the Future", "text": "Hardware/software codesign investigates the concurrent design of hardware and software components of complex electronic systems. It tries to exploit the synergy of hardware and software with the goal to optimize and/or satisfy design constraints such as cost, performance, and power of the final product. At the same time, it targets to reduce the time-to-market frame considerably. This paper presents major achievements of two decades of research on methods and tools for hardware/software codesign by starting with a historical survey of its roots, by highlighting its major research directions and achievements until today, and finally, by predicting in which direction research in codesign might evolve in the decades to come."}
{"_id": "6a89449e763fea5c9a06c30e4c11072de54f6f49", "title": "Analysis of Audio-Visual Features for Unsupervised Speech Recognition", "text": "Research on \u201czero resource\u201d speech processing focuses on learning linguistic information from unannotated, or raw, speech data, in order to bypass the expensive annotations required by current speech recognition systems. While most recent zero-resource work has made use of only speech recordings, here, we investigate the use of visual information as a source of weak supervision, to see whether grounding speech in a visual context can provide additional benefit for language learning. Specifically, we use a dataset of paired images and audio captions to supervise learning of low-level speech features that can be used for further \u201cunsupervised\u201d processing of any speech data. We analyze these features and evaluate their performance on the Zero Resource Challenge 2015 evaluation metrics, as well as standard keyword spotting and speech recognition tasks. We show that features generated with a joint audiovisual model contain more discriminative linguistic information and are less speaker-dependent than traditional speech features. Our results show that visual grounding can improve speech representations for a variety of zero-resource tasks."}
{"_id": "700a4c8dba65fe5d3a595c30de7542488815327c", "title": "Organizational behavior: affect in the workplace.", "text": "The study of affect in the workplace began and peaked in the 1930s, with the decades that followed up to the 1990s not being particularly fertile. Whereas job satisfaction generally continues to be loosely but not carefully thought of and measured as an affective state, critical work in the 1990s has raised serious questions about the affective status of job satisfaction in terms of its causes as well as its definition and measurement. Recent research has focused on the production of moods and emotions at work, with an emphasis, at least conceptually, on stressful events, leaders, work groups, physical settings, and rewards/punishment. Other recent research has addressed the consequences of workers' feelings, in particular, a variety of performance outcomes (e.g., helping behaviors and creativity). Even though recent interest in affect in the workplace has been intense, many theoretical and methodological opportunities and challenges remain."}
{"_id": "95fc8418de56aebf867669c834fbef75f593f403", "title": "Personality Traits Recognition on Social Network - Facebook", "text": "For the natural and social interaction it is necessary to understand human behavior. Personality is one of the fundamental aspects, by which we can understand behavioral dispositions. It is evident that there is a strong correlation between users\u2019 personality and the way they behave on online social network (e.g., Facebook). This paper presents automatic recognition of Big-5 personality traits on social network (Facebook) using users\u2019 status text. For the automatic recognition we studied different classification methods such as SMO (Sequential Minimal Optimization for Support Vector Machine), Bayesian Logistic Regression (BLR) and Multinomial Na\u00efve Bayes (MNB) sparse modeling. Performance of the systems had been measured using macro-averaged precision, recall and F1; weighted average accuracy (WA) and un-weighted average accuracy (UA). Our comparative study shows that MNB performs better than BLR and SMO for personality traits recognition on the social network data."}
{"_id": "a7ab4065bad554f5d26f597256fe7225ad5842a6", "title": "Bioinformatics enrichment tools: paths toward the comprehensive functional analysis of large gene lists", "text": "Functional analysis of large gene lists, derived in most cases from emerging high-throughput genomic, proteomic and bioinformatics scanning approaches, is still a challenging and daunting task. The gene-annotation enrichment analysis is a promising high-throughput strategy that increases the likelihood for investigators to identify biological processes most pertinent to their study. Approximately 68 bioinformatics enrichment tools that are currently available in the community are collected in this survey. Tools are uniquely categorized into three major classes, according to their underlying enrichment algorithms. The comprehensive collections, unique tool classifications and associated questions/issues will provide a more comprehensive and up-to-date view regarding the advantages, pitfalls and recent trends in a simpler tool-class level rather than by a tool-by-tool approach. Thus, the survey will help tool designers/developers and experienced end users understand the underlying algorithms and pertinent details of particular tool categories/tools, enabling them to make the best choices for their particular research interests."}
{"_id": "d8753f340fb23561ca27009a694998e29513658b", "title": "A Hybrid Gripper With Soft Material and Rigid Structures", "text": "Various robotic grippers have been developed over the past several decades for robotic manipulators. Especially, the soft grippers based on the soft pneumatic actuator (SPA) have been studied actively, since it offers more pliable bending motion, inherent compliance, and a simple morphological structure. However, few studies have focused on simultaneously improving the fingertip force and actuation speed within the specified design parameters. In this study, we developed a hybrid gripper that incorporates both soft and rigid components to improve the fingertip force and actuation speed simultaneously based on three design principles: first, the degree of bending is proportional to the ratio of the rigid structure; second, a concave chamber design is preferred for large longitudinal strain; and third, a round shape between soft and rigid materials increases the fingertip force. The suggested principles were verified using the finite element methods. The improved performance of the hybrid gripper was verified experimentally and compared with the performance of a conventional SPAs. The ability of the hybrid gripper to grasp different objects was evaluated and was applied in a teleoperated system."}
{"_id": "0502017650165c742ad7eaf14fe72b5925c90707", "title": "Finger ECG signal for user authentication: Usability and performance", "text": "Over the past few years, the evaluation of Electrocardio-graphic (ECG) signals as a prospective biometric modality has revealed promising results. Given the vital and continuous nature of this information source, ECG signals offer several advantages to the field of biometrics; yet, several challenges currently prevent the ECG from being adopted as a biometric modality in operational settings. These arise partially due to ECG signal's clinical tradition and intru-siveness, but also from the lack of evidence on the permanence of the ECG templates over time. The problem of in-trusiveness has been recently overcome with the \u201coff-the-person\u201d approach for capturing ECG signals. In this paper we provide an evaluation of the permanence of ECG signals collected at the fingers, with respect to the biometric authentication performance. Our experimental results on a small dataset suggest that further research is necessary to account for and understand sources of variability found in some subjects. Despite these limitations, \u201coff-the-person\u201d ECG appears to be a viable trait for multi-biometric or standalone biometrics, low user throughput, real-world scenarios."}
{"_id": "a691d3a72a20bf24ecc460cc4a6e7d84f4146d65", "title": "The future of parking: A survey on automated valet parking with an outlook on high density parking", "text": "In the near future, humans will be relieved from parking. Major improvements in autonomous driving allow the realization of automated valet parking (AVP). It enables the vehicle to drive to a parking spot and park itself. This paper presents a review of the intelligent vehicles literature on AVP. An overview and analysis of the core components of AVP such as the platforms, sensor setups, maps, localization, perception, environment model, and motion planning is provided. Leveraging the potential of AVP, high density parking (HDP) is reviewed as a future research direction with the capability to either reduce the necessary space for parking by up to 50 % or increase the capacity of future parking facilities. Finally, a synthesized view discussing the remaining challenges in automated valet parking and the technological requirements for high density parking is given."}
{"_id": "4811820379d8db823ac9cae55218b4d60dcd95a8", "title": "Web Application Migration with Closure Reconstruction", "text": "Due to its high portability and simplicity, web application (app) based on HTML/JavaScript/CSS has been widely used for various smart-device platforms. To take advantage of its wide platform pool, a new idea called app migration has been proposed for the web platform. Web app migration is a framework to serialize a web app running on a device and restore it in another device to continue its execution. In JavaScript semantics, one of the language features that does not allow easy app migration is a closure. A JavaScript function can access variables defined in its outer function even if the execution of the outer function is terminated. It is allowed because the inner function is created as a closure such that it contains the outer function\u2019s environment. This feature is widely used in web app development because it is the most common way to implement data encapsulation in web programming. Closures are not easy to serialize because environments can be shared by a number of closures and environments can be created in a nested way. In this paper, we propose a novel approach to fully serialize closures. We created mechanisms to extract information from a closure\u2019s environment through the JavaScript engine and to serialize the information in a proper order so that the original relationship between closures and environments can be restored properly. We implemented our mechanism on the WebKit browser and successfully migrated Octane benchmarks and seven real web apps which heavily exploit closures. We also show that our mechanism works correctly even for some extreme, closure-heavy cases."}
{"_id": "0c50901215676b09da8cc235773b999a1857e558", "title": "A 0 . 5V Ultra-Low-Power OTA With Improved Gain and Bandwidth", "text": "In the context of ultra-low-power and ultra-low-voltage operational transconductance amplifiers (OTAs), there is the inconvenience of low unity-gain frequency and DC gain. In this paper, we presented a 0.5-V improved ultra-low-power OTA using the transistors operate in weak inversion. The proposed topology based on a bulk-driven input differential pair employed an under-weak-inversion gain-stage in the Miller capacitor feedback path to improve the \u201cpole-splitting\u201d effect. Simulations in a standard 0.18 \u03bcm CMOS process resulted on considerable enhancement in the unity-gain bandwidth and the DC gain as well. The topology presents rail-to-rail input and output swings and consumes only 1 \u03bcW."}
{"_id": "950b7920c148ab40633d123046af2f8768d10551", "title": "Performance analysis of supervised machine learning algorithms for text classification", "text": "The demand of text classification is growing significantly in web searching, data mining, web ranking, recommendation systems and so many other fields of information and technology. This paper illustrates the text classification process on different dataset using some standard supervised machine learning techniques. Text documents can be classified through various kinds of classifiers. Labeled text documents are used to classify the text in supervised classifications. This paper applied these classifiers on different kinds of labeled documents and measures the accuracy of the classifiers. An Artificial Neural Network (ANN) model using Back Propagation Network (BPN) is used with several other models to create an independent platform for labeled and supervised text classification process. An existing benchmark approach is used to analysis the performance of classification using labeled documents. Experimental analysis on real data reveals which model works well in terms of classification accuracy."}
{"_id": "a06cc19b0d8530bde5c58d81aa500581876a1222", "title": "A risk management ontology for Quality-by-Design based on a new development approach according GAMP 5.0", "text": "A new approach to the development of a risk management ontology is presented. This method meets the requirements of a pharmaceutical Quality by Design approach, good manufacturing practice and good automated manufacturing practice. The need for a risk management ontology for a pharmaceutical environment is demonstrated, and the term \u2018\u2018ontology\u2019\u2019 is generally defined and described with regard to the knowledge domain of quality risk management. To fulfill software development requirements defined by good manufacturing practice regulations and good automated manufacturing practice 5.0 for the novel development approach, we used a V-model as a process model, which is discussed in detail. The development steps for the new risk management ontology, such as requirement specification, conceptualization, formalization, implementation and validation approach, are elaborated. 2012 Elsevier Ltd. All rights reserved."}
{"_id": "6912446b6382ccbe8fa50a6f06f5b75d345c4d4e", "title": "Low-voltage low-power fast-settling CMOS operational transconductance amplifiers for switched-capacitor applications", "text": "This paper presents a new fully differential operational transconductance amplifier (OTA) for low-voltage and fast-settling switched-capacitor circuits in digital CMOS technology. The proposed two-stage OTA is a hybrid class A/AB that combines a folded cascode as the first stage with active current mirrors as the second stage. It employs a hybrid cascode compensation scheme, merged Ahuja and improved Ahuja style compensations, for fast settling."}
{"_id": "864f0e5e317a7d304dcc1dfca176b7afd230f4c2", "title": "Focal loss dense detector for vehicle surveillance", "text": "Deep learning has been widely recognized as a promising approach in different computer vision applications. Specifically, one-stage object detector and two-stage object detector are regarded as the most important two groups of Convolutional Neural Network based object detection methods. One-stage object detector could usually outperform two-stage object detector in speed; However, it normally trails in detection accuracy, compared with two-stage object detectors. In this study, focal loss based RetinaNet, which works as one-stage object detector, is utilized to be able to well match the speed of regular one-stage detectors and also defeat two-stage detectors in accuracy, for vehicle detection. State-of-the-art performance result has been showed on the DETRAC vehicle dataset."}
{"_id": "267cf2013307453bb796c8c7f59522dc852f840c", "title": "Online Parameter Estimation Technique for Adaptive Control Applications of Interior PM Synchronous Motor Drives", "text": "This paper proposes an online parameter estimation method based on a discrete-time dynamic model for the interior permanent-magnet synchronous motors (IPMSMs). The proposed estimation technique, which takes advantage of the difference in dynamics of motor parameters, consists of two affine projection algorithms. The first one is designed to accurately estimate the stator inductances, whereas the second one is designed to precisely estimate the stator resistance, rotor flux linkage, and load torque. In this paper, the adaptive decoupling proportional-integral (PI) controllers with the maximum torque per ampere operation, which utilize the previously identified parameters in real time, are chosen to verify the effectiveness of the proposed parameter estimation scheme. The simulation results via MATLAB/Simulink and the experimental results via a prototype IPMSM drive system with a TI TMS320F28335 DSP are presented under various conditions. A comparative study with the conventional decoupling PI control method is carried out to demonstrate the better performances (i.e., faster dynamic response, less steady-state error, more robustness, etc.) of the adaptive decoupling PI control scheme based on the proposed online parameter estimation technique."}
{"_id": "80be8624771104ff4838dcba9629bacfe6b3ea09", "title": "Simultaneous Feature and Dictionary Learning for Image Set Based Face Recognition", "text": "In this paper, we propose a simultaneous feature and dictionary learning (SFDL) method for image set-based face recognition, where each training and testing example contains a set of face images, which were captured from different variations of pose, illumination, expression, resolution, and motion. While a variety of feature learning and dictionary learning methods have been proposed in recent years and some of them have been successfully applied to image set-based face recognition, most of them learn features and dictionaries for facial image sets individually, which may not be powerful enough because some discriminative information for dictionary learning may be compromised in the feature learning stage if they are applied sequentially, and vice versa. To address this, we propose a SFDL method to learn discriminative features and dictionaries simultaneously from raw face pixels so that discriminative information from facial image sets can be jointly exploited by a one-stage learning procedure. To better exploit the nonlinearity of face samples from different image sets, we propose a deep SFDL (D-SFDL) method by jointly learning hierarchical non-linear transformations and class-specific dictionaries to further improve the recognition performance. Extensive experimental results on five widely used face data sets clearly shows that our SFDL and D-SFDL achieve very competitive or even better performance with the state-of-the-arts."}
{"_id": "5b8e4b909309910fd393e2eb8ca17c9540862141", "title": "Sentence level emotion recognition based on decisions from subsentence segments", "text": "Emotion recognition from speech plays an important role in developing affective and intelligent systems. This study investigates sentence-level emotion recognition. We propose to use a two-step approach to leverage information from subsentence segments for sentence level decision. First we use a segment level emotion classifier to generate predictions for segments within a sentence. A second component combines the predictions from these segments to obtain a sentence level decision. We evaluate different segment units (words, phrases, time-based segments) and different decision combination methods (majority vote, average of probabilities, and a Gaussian Mixture Model (GMM)). Our experimental results on two different data sets show that our proposed method significantly outperforms the standard sentence-based classification approach. In addition, we find that using time-based segments achieves the best performance, and thus no speech recognition or alignment is needed when using our method, which is important to develop language independent emotion recognition systems."}
{"_id": "11651db02c4a243b5177516e62a45f952dc54430", "title": "Domain Adaptation of Recurrent Neural Networks for Natural Language Understanding", "text": "The goal of this paper is to use multi-task learning to efficiently scale slot filling models for natural language understanding to handle multiple target tasks or domains. The key to scalability is reducing the amount of training data needed to learn a model for a new task. The proposed multi-task model delivers better performance with less data by leveraging patterns that it learns from the other tasks. The approach supports an open vocabulary, which allows the models to generalize to unseen words, which is particularly important when very little training data is used. A newly collected crowd-sourced data set, covering four different domains, is used to demonstrate the effectiveness of the domain adaptation and open vocabulary techniques."}
{"_id": "f70391f9a6aeec75bc52b9fe38588b9d0b4d40c6", "title": "Learning Character-level Representations for Part-of-Speech Tagging", "text": "Distributed word representations have recently been proven to be an invaluable resource for NLP. These representations are normally learned using neural networks and capture syntactic and semantic information about words. Information about word morphology and shape is normally ignored when learning word representations. However, for tasks like part-of-speech tagging, intra-word information is extremely useful, specially when dealing with morphologically rich languages. In this paper, we propose a deep neural network that learns character-level representation of words and associate them with usual word representations to perform POS tagging. Using the proposed approach, while avoiding the use of any handcrafted feature, we produce stateof-the-art POS taggers for two languages: English, with 97.32% accuracy on the Penn Treebank WSJ corpus; and Portuguese, with 97.47% accuracy on the Mac-Morpho corpus, where the latter represents an error reduction of 12.2% on the best previous known result."}
{"_id": "ff1577528a34a11c2a81d2451d346c412c674c02", "title": "Character-based Neural Machine Translation", "text": "We introduce a neural machine translation model that views the input and output sentences as sequences of characters rather than words. Since word-level information provides a crucial source of bias, our input model composes representations of character sequences into representations of words (as determined by whitespace boundaries), and then these are translated using a joint attention/translation model. In the target language, the translation is modeled as a sequence of word vectors, but each word is generated one character at a time, conditional on the previous character generations in each word. As the representation and generation of words is performed at the character level, our model is capable of interpreting and generating unseen word forms. A secondary benefit of this approach is that it alleviates much of the challenges associated with preprocessing/tokenization of the source and target languages. We show that our model can achieve translation results that are on par with conventional word-based models."}
{"_id": "00a28138c74869cfb8236a18a4dbe3a896f7a812", "title": "Better Word Representations with Recursive Neural Networks for Morphology", "text": "Vector-space word representations have been very successful in recent years at improving performance across a variety of NLP tasks. However, common to most existing work, words are regarded as independent entities without any explicit relationship among morphologically related words being modeled. As a result, rare and complex words are often poorly estimated, and all unknown words are represented in a rather crude way using only one or a few vectors. This paper addresses this shortcoming by proposing a novel model that is capable of building representations for morphologically complex words from their morphemes. We combine recursive neural networks (RNNs), where each morpheme is a basic unit, with neural language models (NLMs) to consider contextual information in learning morphologicallyaware word representations. Our learned models outperform existing word representations by a good margin on word similarity tasks across many datasets, including a new dataset we introduce focused on rare words to complement existing ones in an interesting way."}
{"_id": "6ce9d1c1e9a8b889a04faa98f96b0df1b93fcc84", "title": "Adoption and Impact of IT Governance and Management Practices: A COBIT 5 Perspective", "text": "This paper empirically investigates how adoption of IT governance and management processes, as identified in the IT governance framework COBIT 5, relates to the level of IT-related goals achievement, which in turn associates to the level of enterprise goals achievement. Simultaneously, this research project provides an international benchmark on how organizations are currently adopting the governance and management processes as identified in COBIT 5. The findings suggest that organizations are best in adopting the \u201cIT factory\u201d related management processes and that implementation scores drop in management and governance processes when more business and board involvement is required. Additionally, there are significant differences in perceived implementation maturity of COBIT 5 processes between SMEs and larger organizations. Also, the data offers empirical evidence that the COBIT 5 processes have a positive association with enterprise value creation. KeywORdS COBIT 5, Enterprise Goals, Enterprise Governance and Management of IT, IT Governance, SMEs"}
{"_id": "a8fd9be2f7775b123f62094eadd59d18bbbef027", "title": "Peephole: Predicting Network Performance Before Training", "text": "The quest for performant networks has been a significant force that drives the advancements of deep learning in recent years. While rewarding, improving network design has never been an easy journey. The large design space combined with the tremendous cost required for network training poses a major obstacle to this endeavor. In this work, we propose a new approach to this problem, namely, predicting the performance of a network before training, based on its architecture. Specifically, we develop a unified way to encode individual layers into vectors and bring them together to form an integrated description via LSTM. Taking advantage of the recurrent network\u2019s strong expressive power, this method can reliably predict the performances of various network architectures. Our empirical studies showed that it not only achieved accurate predictions but also produced consistent rankings across datasets \u2013 a key desideratum in performance prediction."}
{"_id": "7313097125b82b876e89b47b078fe85a56655e95", "title": "Bangladeshi banknote recognition by neural network with axis symmetrical masks", "text": "Automated banknote recognition system can be a very good utility in banking systems and other field of commerce. It can also aid visually impaired people. Although in Bangladesh, bill money recognition machines are not common but it is used in other countries. In this paper, for the first time, we have proposed a Neural Network based recognition scheme for Bangladeshi banknotes. The scheme can efficiently be implemented in cheap hardware which may be very useful in many places. The recognition system takes scanned images of banknotes which are scanned by low cost optoelectronic sensors and then fed into a Multilayer Perceptron, trained by Backpropagation algorithm, for recognition. Axis Symmetric Masks are used in preprocessing stage which reduces the network size and guarantees correct recognition even if the note is flipped. Experimental results are presented which show that this scheme can recognize currently available 8 notes (1, 2, 5, 10, 20, 50, 100 & 500 Taka) successfully with an average accuracy of 98.57%."}
{"_id": "1f3774fecbe0ee12a2135cd05a761a7d2b537e13", "title": "Cowden syndrome.", "text": "BACKGROUND\nCowden syndrome is a rare genodermatosis charactarized by presence of multiple hamartomas. The aim of the study was to specify the clinical, therapeutic and prognostic aspects of Cowden syndrome.\n\n\nCASES REPORT\nOur study included 4 patients with Cowden syndrome, 2 males and 2 females between 14 and 46 years old. Clinical examination of the skin revealed facials papules (4 cases), acral keratosis (1 case), translucent keratotic papules (2 cases). Oral examination revealed papules (4 cases), papillomatosis (4 cases), gingival hypertrophy (4 cases) and scrotal tongue (2 cases). Investigations revealed thyroid lesions (2 cases), fibrocystic disease and lipoma of the breast in 1 case, \"glycogenic acanthosis\" (1 case), macrocephaly (2 cases), dysmorphic face (1 case) and lichen nitidus (1 case). Oral etretinate and acitretine were temporary efficient in 2 patients. Topical treatment with tretinoin lotion resulted in some improvement in cutaneous, but not mucosal lesions in one patient. No cancer was revealed.\n\n\nCONCLUSION\nThe pathognomonic mucocutaneous lesions were found in all patients. However, no degenerative lesions have been revealed. A new association of Cowden syndrome with lichen nitidus was found. Treatment with oral retinoids was efficient on cutaneous lesions."}
{"_id": "122374a3baf1e0efde03301226344a2d728eafc3", "title": "High resolution stationary digital breast tomosynthesis using distributed carbon nanotube x-ray source array.", "text": "PURPOSE\nThe purpose of this study is to investigate the feasibility of increasing the system spatial resolution and scanning speed of Hologic Selenia Dimensions digital breast tomosynthesis (DBT) scanner by replacing the rotating mammography x-ray tube with a specially designed carbon nanotube (CNT) x-ray source array, which generates all the projection images needed for tomosynthesis reconstruction by electronically activating individual x-ray sources without any mechanical motion. The stationary digital breast tomosynthesis (s-DBT) design aims to (i) increase the system spatial resolution by eliminating image blurring due to x-ray tube motion and (ii) reduce the scanning time. Low spatial resolution and long scanning time are the two main technical limitations of current DBT technology.\n\n\nMETHODS\nA CNT x-ray source array was designed and evaluated against a set of targeted system performance parameters. Simulations were performed to determine the maximum anode heat load at the desired focal spot size and to design the electron focusing optics. Field emission current from CNT cathode was measured for an extended period of time to determine the stable life time of CNT cathode for an expected clinical operation scenario. The source array was manufactured, tested, and integrated with a Selenia scanner. An electronic control unit was developed to interface the source array with the detection system and to scan and regulate x-ray beams. The performance of the s-DBT system was evaluated using physical phantoms.\n\n\nRESULTS\nThe spatially distributed CNT x-ray source array comprised 31 individually addressable x-ray sources covering a 30 angular span with 1 pitch and an isotropic focal spot size of 0.6 mm at full width at half-maximum. Stable operation at 28 kV(peak) anode voltage and 38 mA tube current was demonstrated with extended lifetime and good source-to-source consistency. For the standard imaging protocol of 15 views over 14, 100 mAs dose, and 2\u2009\u00d7\u20092 detector binning, the projection resolution along the scanning direction increased from 4.0 cycles/mm [at 10% modulation-transfer-function (MTF)] in DBT to 5.1 cycles/mm in s-DBT at magnification factor of 1.08. The improvement is more pronounced for faster scanning speeds, wider angular coverage, and smaller detector pixel sizes. The scanning speed depends on the detector, the number of views, and the imaging dose. With 240 ms detector readout time, the s-DBT system scanning time is 6.3 s for a 15-view, 100 mAs scan regardless of the angular coverage. The scanning speed can be reduced to less than 4 s when detectors become faster. Initial phantom studies showed good quality reconstructed images.\n\n\nCONCLUSIONS\nA prototype s-DBT scanner has been developed and evaluated by retrofitting the Selenia rotating gantry DBT scanner with a spatially distributed CNT x-ray source array. Preliminary results show that it improves system spatial resolution substantially by eliminating image blur due to x-ray focal spot motion. The scanner speed of s-DBT system is independent of angular coverage and can be increased with faster detector without image degration. The accelerated lifetime measurement demonstrated the long term stability of CNT x-ray source array with typical clinical operation lifetime over 3 years."}
{"_id": "0fc73c4a6e537b6c718ad54f47ae8847115a5d17", "title": "From Vision to NLP : A Merge", "text": "The study of artificial intelligence can be simplified into one goal: trying to mimic/enhance human senses. This paper attempts to combine computer vision and natural language processing to create a question answer system. This system takes a question and an image as input and outputs a response to the answer based on how the RCNN understands the question asked. The system correlates the question with the image by leveraging attention and memory mechanisms. Mentor: Arun Chaganty"}
{"_id": "b4729432b23842ff6b1b126572f8fa17aca14758", "title": "Fast high-quality non-blind deconvolution using sparse adaptive priors", "text": "We present an efficient approach for high-quality non-blind deconvolution based on the use of sparse adaptive priors. Its regularization term enforces preservation of strong edges while removing noise. We model the image-prior deconvolution problem as a linear system, which is solved in the frequency domain. This clean formulation lends to a simple and efficient implementation. We demonstrate its effectiveness by performing an extensive comparison with existing non-blind deconvolution methods, and by using it to deblur photographs degraded by camera shake. Our experiments show that our solution is faster and its results tend to have higher peak signal-to-noise ratio than the state-of-the-art techniques. Thus, it provides an attractive alternative to perform high-quality non-blind deconvolution of large images, as well as to be used as the final step of blind-deconvolution algorithms."}
{"_id": "41a5499a8e4a55a16c94b1944a74274f4340be74", "title": "Towards Contactless, Low-Cost and Accurate 3D Fingerprint Identification", "text": "Human identification using fingerprint impressions has been widely studied and employed for more than 2000\u00a0years. Despite new advancements in the 3D imaging technologies, widely accepted representation of 3D fingerprint features and matching methodology is yet to emerge. This paper investigates 3D representation of widely employed 2D minutiae features by recovering and incorporating (i) minutiae height z and (ii) its 3D orientation \u03c6 information and illustrates an effective matching strategy for matching popular minutiae features extended in 3D space. One of the obstacles of the emerging 3D fingerprint identification systems to replace the conventional 2D fingerprint system lies in their bulk and high cost, which is mainly contributed from the usage of structured lighting system or multiple cameras. This paper attempts to addresses such key limitations of the current 3D fingerprint technologies bydeveloping the single camera-based 3D fingerprint identification system. We develop a generalized 3D minutiae matching model and recover extended 3D fingerprint features from the reconstructed 3D fingerprints. 2D fingerprint images acquired for the 3D fingerprint reconstruction can themselves be employed for the performance improvement and have been illustrated in the work detailed in this paper. This paper also attempts to answer one of the most fundamental questions on the availability of inherent discriminableinformation from 3D fingerprints. The experimental results are presented on a database of 240 clients 3D fingerprints, which is made publicly available to further research efforts in this area, and illustrate the discriminant power of 3D minutiae representation andmatching to achieve performance improvement."}
{"_id": "8d7c2c3d03ae5360bb73ac818a0e36f324f1e8ce", "title": "PiCANet: Learning Pixel-wise Contextual Attention in ConvNets and Its Application in Saliency Detection", "text": "1Context plays an important role in many computer vision tasks. Previous models usually construct contextual information from the whole context region. However, not all context locations are helpful and some of them may be detrimental to the final task. To solve this problem, we propose a novel pixel-wise contextual attention network, i.e., the PiCANet, to learn to selectively attend to informative context locations for each pixel. Specifically, it can generate an attention map over the context region for each pixel, where each attention weight corresponds to the contextual relevance of each context location w.r.t. the specified pixel location. Thus, an attended contextual feature can be constructed by using the attention map to aggregate the contextual features. We formulate PiCANet in a global form and a local form to attend to global contexts and local contexts, respectively. Our designs for the two forms are both fully differentiable. Thus they can be embedded into any CNN architectures for various computer vision tasks in an end-to-end manner. We take saliency detection as an example application to demonstrate the effectiveness of the proposed PiCANets. Specifically, we embed global and local PiCANets into an encoder-decoder Convnet hierarchically. Thorough * This paper was previously submitted to CVPR 2017 and ICCV 2017. This is a slightly revised version based on our previous submission. analyses indicate that the global PiCANet helps to construct global contrast while the local PiCANets help to enhance the feature maps to be more homogenous, thus making saliency detection results more accurate and uniform. As a result, our proposed saliency model achieves state-of-the-art results on 4 benchmark datasets."}
{"_id": "35875600a30f89ea133ac06afeefc8cacec9fb3d", "title": "Can virtual reality simulations be used as a research tool to study empathy, problems solving and perspective taking of educators?: theory, method and application", "text": "Simulations in virtual environments are becoming an important research tool for educators. These simulations can be used in a variety of areas from the study of emotions and psychological disorders to more effective training. The current study uses animated narrative vignette simulation technology to observe a classroom situation depicting a low level aggressive peer-to-peer victim dyad. Participants were asked to respond to this situation as if they were the teacher, and these responses were then coded and analyzed. Consistent with other literature, the pre-service teachers expressed very little empathic concern, problem-solving or management of the situation with the victim. Future direction and educational implications are presented."}
{"_id": "5f17b4a08d14afaa14a695573ad598dcb763c623", "title": "Comparison of SVM and ANN for classification of eye events in EEG", "text": "The eye events (eye blink, eyes close and eyes open) are usually considered as biological artifacts in the electroencephalographic (EEG) signal. One can control his or her eye blink by proper training and hence can be used as a control signal in Brain Computer Interface (BCI) applications. Support vector machines (SVM) in recent years proved to be the best classification tool. A comparison of SVM with the Artificial Neural Network (ANN) always provides fruitful results. A one-against-all SVM and a multilayer ANN is trained to detect the eye events. A comparison of both is made in this paper."}
{"_id": "f971f858e59edbaeb0b75b7bcbca0d5c7b7d8065", "title": "Effects of Fitness Applications with SNS: How Do They Influence Physical Activity", "text": "Fitness applications with social network services (SNS) have emerged for physical activity management. However, there is little understanding of the effects of these applications on users\u2019 physical activity. Motivated thus, we develop a theoretical model based on social cognitive theory of self-regulation to explain the effects of goal setting, self-tracking, and SNS (through social comparison from browsing others\u2019 tracking data and social support from sharing one\u2019s own tracking data) on physical activity. The model was tested with objective data from 476 Runkeeper users. Our preliminary results show that goal setting, number of uses, the proportion of lower-performing friends, and number of likes positively influence users\u2019 physical activities, while the proportion of higher-performing friends has a negative effect. Moreover, the effect of the proportion of higher-performing friends is moderated by the average difference between the friends and the individual. The initial contributions of the study and remaining research plan are described."}
{"_id": "44911bdb33d0dc1781016e9afe605a9091ea908b", "title": "Vehicle license plate detection and recognition using non-blind image de-blurring algorithm", "text": "This paper proposes the method of vehicle license plate recognition, which is essential in the field of intelligent transportation system. The purpose of the study is to present a simple and effective vehicle license plate detection and recognition using non-bling image de-blurring algorithm. The sharpness of the edges in an image is restored by the prior information on images. The blue kernel is free of noise while using the non-blind image de-blurring algorithm. This non-blind image de-blurring (NBID) algorithm is involved in the process of removing the optimization difficulties with respect to unknown image and unknown blur. Estimation is carried out for the length of the motion kernel with Radon transform in Fourier domain. The proposed algorithm was tested on different vehicle images and achieved satisfactory results in license plate detection. The experimental results deal with the efficiency and robustness of the proposed algorithm in both synthesized and real images."}
{"_id": "36973330ae638571484e1f68aaf455e3e6f18ae9", "title": "Scale-Aware Fast R-CNN for Pedestrian Detection", "text": "In this paper, we consider the problem of pedestrian detection in natural scenes. Intuitively, instances of pedestrians with different spatial scales may exhibit dramatically different features. Thus, large variance in instance scales, which results in undesirable large intracategory variance in features, may severely hurt the performance of modern object instance detection methods. We argue that this issue can be substantially alleviated by the divide-and-conquer philosophy. Taking pedestrian detection as an example, we illustrate how we can leverage this philosophy to develop a Scale-Aware Fast R-CNN (SAF R-CNN) framework. The model introduces multiple built-in subnetworks which detect pedestrians with scales from disjoint ranges. Outputs from all of the subnetworks are then adaptively combined to generate the final detection results that are shown to be robust to large variance in instance scales, via a gate function defined over the sizes of object proposals. Extensive evaluations on several challenging pedestrian detection datasets well demonstrate the effectiveness of the proposed SAF R-CNN. Particularly, our method achieves state-of-the-art performance on Caltech [P. Dollar, C. Wojek, B. Schiele, and P. Perona, \u201cPedestrian detection: An evaluation of the state of the art,\u201d IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 4, pp. 743\u2013761, Apr. 2012], and obtains competitive results on INRIA [N. Dalal and B. Triggs, \u201cHistograms of oriented gradients for human detection,\u201d in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2005, pp. 886\u2013893], ETH [A. Ess, B. Leibe, and L. V. Gool, \u201cDepth and appearance for mobile scene analysis,\u201d in Proc. Int. Conf. Comput. Vis., 2007, pp. 1\u20138], and KITTI [A. Geiger, P. Lenz, and R. Urtasun, \u201cAre we ready for autonomous driving? The KITTI vision benchmark suite,\u201d in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2012, pp. 3354\u20133361]."}
{"_id": "f1e1437ad6cada93dc8627f9c9679ffee02d921c", "title": "Behavior-Based Telecommunication Churn Prediction with Neural Network Approach", "text": "A behavior-based telecom customer churn prediction system is presented in this paper. Unlike conventional churn prediction methods, which use customer demographics, contractual data, customer service logs, call-details, complaint data, bill and payment as inputs and churn as target output, only customer service usage information is included in this system to predict customer churn using a clustering algorithm. It can solve the problems which traditional methods have to face, such as missing or non-reliable data and the correlation among inputs. This study provides a new way to solve traditional churn prediction problems."}
{"_id": "3769e65690e424808361e3eebfdec8ab91908aa9", "title": "Affective Image Retrieval via Multi-Graph Learning", "text": "Images can convey rich emotions to viewers. Recent research on image emotion analysis mainly focused on affective image classification, trying to find features that can classify emotions better. We concentrate on affective image retrieval and investigate the performance of different features on different kinds of images in a multi-graph learning framework. Firstly, we extract commonly used features of different levels for each image. Generic features and features derived from elements-of-art are extracted as low-level features. Attributes and interpretable principles-of-art based features are viewed as mid-level features, while semantic concepts described by adjective noun pairs and facial expressions are extracted as high-level features. Secondly, we construct single graph for each kind of feature to test the retrieval performance. Finally, we combine the multiple graphs together in a regularization framework to learn the optimized weights of each graph to efficiently explore the complementation of different features. Extensive experiments are conducted on five datasets and the results demonstrate the effectiveness of the proposed method."}
{"_id": "00c57bb8c7d2ce7c8f32aef062ef3d61b9961711", "title": "Everyday dwelling with WhatsApp", "text": "In this paper, we present a study of WhatsApp, an instant messaging smartphone application. Through our interviews with participants, we develop anthopologist Tim Ingold's notion of dwelling, and discuss how use of WhatsApp is constitutive of a felt-life of being together with those close by. We focus on the relationship \"doings\" in WhatsApp and how this togetherness and intimacy are enacted through small, continuous traces of narrative, of tellings and tidbits, noticings and thoughts, shared images and lingering pauses; this is constitutive of dwelling. Further, we discuss how an intimate knowing of others in these relationships, through past encounters and knowledge of coming together in the future, pertain to the particular forms of relationship engagements manifest through the possibilities presented in WhatsApp. We suggest that this form of sociality is likely to be manifest in other smartphone IM-like applications."}
{"_id": "41ca84736e55375c73416b7d698fb72ad3ccde67", "title": "Enabling Real-Time Context-Aware Collaboration through 5G and Mobile Edge Computing", "text": "Creating context-aware ad hoc collaborative systems remains to be one of the primary hurdles hampering the ubiquitous deployment of IT and communication services. Especially under mission-critical scenarios, these services must often adhere to strict timing deadlines. We believe empowering such realtime collaboration systems requires context-aware application platforms working in conjunction with ultra-low latency data transmissions. In this paper, we make a strong case that this could be accomplished by combining the novel communication architectures being proposed for 5G with the principles of Mobile Edge Computing (MEC). We show that combining 5G with MEC would enable inter- and intra-domain use cases that are otherwise not feasible."}
{"_id": "75c72ed46042f172d174afe106ce41ec8dee71ae", "title": "Modeling of magnetically biased graphene patch frequency selective surface (FSS)", "text": "A free-standing magnetically biased graphene patch frequency selective surface (FSS) is modelled in this paper. Its transmission coefficients of co- and cross-polarizations can be obtained with an equivalent tensorial surface conductivity. Then, the rotation angle for normal incidence is explored with different values of biasing magnetic field B0. The maximum rotation angle provided by the magnetically biased graphene patch FSS is 43\u00b0 at 4.7 THz with B0=2.5 T, which is much larger than the rotation angle provided by a graphene sheet. This is very promising for THz nano-devices based on the giant Faraday rotation of graphenes."}
{"_id": "1c28f6b59c209e4f73809d5a5d16d672d26ad1d8", "title": "Cooperative Localization for Autonomous Underwater Vehicles", "text": "Self-localization of an underwater vehicle is particularly challenging due to the absence of Global Positioning System (GPS) reception or features at known positions that could otherwise have been used for position computation. Thus Autonomous Underwater Vehicle (AUV) applications typically require the pre-deployment of a set of beacons. This thesis examines the scenario in which the members of a group of AUVs exchange navigation information with one another so as to improve their individual position estimates. We describe how the underwater environment poses unique challenges to vehicle navigation not encountered in other environments in which robots operate and how cooperation can improve the performance of self-localization. As intra-vehicle communication is crucial to cooperation, we also address the constraints of the communication channel and the effect that these constraints have on the design of cooperation strategies. The classical approaches to underwater self-localization of a single vehicle, as well as more recently developed techniques are presented. We then examine how methods used for cooperating land-vehicles can be transferred to the underwater domain. An algorithm for distributed self-localization, which is designed to take the specific characteristics of the environment into account, is proposed. We also address how correlated position estimates of cooperating vehicles can lead to overconfidence in individual position estimates. Finally, key to any successful cooperative navigation strategy is the incorporation of the relative positioning between vehicles. The performance of localization algorithms with different geometries is analyzed and a distributed algorithm for the dynamic positioning of vehicles, which serve as dedicated navigation beacons for a fleet of AUVs, is proposed. Thesis Supervisor: John J. Leonard Title: Professor of Mechanical and Ocean Engineering Massachusetts Institute of Technology Acknowledgments This thesis would not have been possible without the help and support of many friends and colleagues who made the last five years at MIT an exceptionally fulfilling experience. I would like to thank my advisor John Leonard who strongly supported me ever since our first email exchange in 2003. He guided me all the way through the application process, research, thesis and finding a post doc position. His broad range of research interests enabled me to find a thesis subject that exactly matched my interest. And I very much appreciated him allowing me to take a significant amount of time to pursue other projects as well as travel. I would also like to thank my other committee members: Henrik Schmidt for his continued support and great company during several research cruises, and Hanu Singh and Arjuna Balasuriya for their helpful suggestions during my thesis writing. Many thanks to David Battle who helped me through my first steps with Autonomous Underwater Vehicles and was a great mate to have around. Many of the experiments presented in this thesis would not have been possible without the support of Andrew Patrikalakis. The results owe a lot to his countless hours of coding assistance and his efforts to ensure that the kayaks would be ready when needed. They were also made possible by Joe Curcio, the builder of the kayaks, and the support of Jacques Leedekerken and Kevin Cockrell. 
The last years would not have been the same without the many great people I met at MIT. Most important was Matt Walter who convinced me in August 2003 that MIT is not only an interesting place, but that it can be very friendly as well. Throughout the years we shared many great personal and academic experiences. I wish him all the best, wherever life may take him. Patrycja unfortunately left our lab, but made up for it by taking me on a very memorable trip across the country. I wish Alec, Emma, Albert, Tom, Olivier and Aisha all the best for their remaining time and life beyond. Iuliu was always a welcome distraction in the United States and abroad and a great help with all hardware questions. I was glad to join Carrick, Marty and David on a number of exciting research and conference trips, as well as their advisor Daniela Rus. In the last months I was also very fortunate to meet a new group of people. First, I am very thankful to Maurice for carrying on what I started I cannot imagine somebody better suited for it and also Been, Georgios, Hordur and Rob. I would also like to thank the many people of the SEA 2007 cruise, particularly the B watch and Julian, Jamie, Heather, Chris, Toby and Jane. One of the most exciting things during my time at MIT was that I was not only able to pursue my thesis topic but also two other projects. First, the flood warning project introduced me to Elizabeth Basha. We shared many joyful moments as well as blood, sweat and tears in the Central American wilderness. I hope that the end of my PhD only marks the beginning of that partnership. Second, the harbor porpoise tag project led by Stacy DeRuiter was a great design challenge. It also provided an opportunity to reach into other areas of ocean sciences by contributing to marine biology research. Her dedication along with the support from Mark Johnson, Peter Tyack and Tom Hurst ensured the project\u2019s success. The exciting results and the process leading up to them rewarded me with a better experience at MIT than I could have ever hoped for. I would like to thank John Leonard for letting me take this scenic route. The path that led me to MIT would not have been possible without the support from people in the early stages of my engineering career who I would like to thank here: Raimund Eich for patiently answering my first electrical engineering questions; my best friends Jan Horn, Daniel Steffensky, Alexander Zimmer and Ulf Radenz for helping me through my university time in Germany; and John Peatman, Ludger Becks and Uwe Zimmer for their academic guidance. Finally, I would like to thank my parents for their continued support and especially Melissa Pitotti for her encouragement not only to start the work at this institution but also to finish it when the time had come. This work was funded by Office of Naval Research grants N00014-97-1-0202, N00014-05-1-0255, N00014-02-C-0210, N00014-07-1-1102 and the ASAP MURI program led by Naomi Leonard of Princeton University. \u201cOne degree is not a large distance. On a compass it is scarcely the thickness of a fingernail. But in certain conditions, one degree can be a very large distance. Enough to unmake a man.\u201d The Mysterious Geographic Explorations of Jasper Morello, c \u00a9 3D Films, Australia 2005 Meinen Eltern & meinem Bruder"}
{"_id": "d9ad9a0dc7c4032b085aa621bba108a9e14fd83f", "title": "Blitz: A Principled Meta-Algorithm for Scaling Sparse Optimization", "text": "By reducing optimization to a sequence of small subproblems, working set methods achieve fast convergence times for many challenging problems. Despite excellent performance, theoretical understanding of working sets is limited, and implementations often resort to heuristics to determine subproblem size, makeup, and stopping criteria. We propose BLITZ, a fast working set algorithm accompanied by useful guarantees. Making no assumptions on data, our theory relates subproblem size to progress toward convergence. This result motivates methods for optimizing algorithmic parameters and discarding irrelevant variables as iterations progress. Applied to `1-regularized learning, BLITZ convincingly outperforms existing solvers in sequential, limited-memory, and distributed settings. BLITZ is not specific to `1-regularized learning, making the algorithm relevant to many applications involving sparsity or constraints."}
{"_id": "aab8c9514b473c4ec9c47d780b7c79112add9008", "title": "Using Case Studies in Research", "text": "Case study as a research strategy often emerges as an obvious option for students and other new researchers who are seeking to undertake a modest scale research project based on their workplace or the comparison of a limited number of organisations. The most challenging aspect of the application of case study research in this context is to lift the investigation from a descriptive account of \u2018what happens\u2019 to a piece of research that can lay claim to being a worthwhile, if modest addition to knowledge. This article draws heavily on established textbooks on case study research and related areas, such as Yin, 1994, Hamel et al., 1993, Eaton, 1992, Gomm, 2000, Perry, 1998, and Saunders et al., 2000 but seeks to distil key aspects of case study research in such a way as to encourage new researchers to grapple with and apply some of the key principles of this research approach. The article explains when case study research can be used, research design, data collection, and data analysis, and finally offers suggestions for drawing on the evidence in writing up a report or dissertation."}
{"_id": "d7f016f6ceb87092bc5cff84fafd94bfe3fa4adf", "title": "Comparative Evaluation of Binary Features", "text": "Performance evaluation of salient features has a long-standing tradition in computer vision. In this paper, we fill the gap of evaluation for the recent wave of binary feature descriptors, which aim to provide robustness while achieving high computational efficiency. We use established metrics to embed our assessment into the body of existing evaluations, allowing us to provide a novel taxonomy unifying both traditional and novel binary features. Moreover, we analyze the performance of different detector and descriptor pairings, which are often used in practice but have been infrequently analyzed. Additionally, we complement existing datasets with novel data testing for illumination change, pure camera rotation, pure scale change, and the variety present in photo-collections. Our performance analysis clearly demonstrates the power of the new class of features. To benefit the community, we also provide a website for the automatic testing of new description methods using our provided metrics and datasets (www.cs.unc.edu/feature-evaluation)."}
{"_id": "f8c79719e877a9dec27f0fbfb0c3e5e4fd730304", "title": "Appraisal of homogeneous techniques in Distributed Data Mining: classifier approach", "text": "In recent years, Distributed Data Mining (DDM) has evolved in large space aiming at minimizing computation cost and memory overhead in processing huge geographically distributed data. There are two approaches of DDM -- homogeneous and heterogeneous classifier approach. This paper presents implementation of four homogeneous classifier techniques for DDM with different Electronic Health Records (EHRs) homogeneous datasets namely diabetes, hepatitis, hypothyroid and further analyzing results based on metric evaluation."}
{"_id": "26a1ef8133da61e162c2d8142d2691c2d89584f7", "title": "Effects of loneliness and differential usage of Facebook on college adjustment of first-year students", "text": "The popularity of social network sites (SNSs) among college students has stimulated scholarship examining the relationship between SNS use and college adjustment. The present research furthers our understanding of SNS use by studying the relationship between loneliness, varied dimensions of Facebook use, and college adjustment among first-year students. We looked at three facets of college adjustment: social adjustment, academic motivation, and perceived academic performance. Compulsive use of Facebook had a stronger association with academic motivation than habitual use of Facebook, but neither were directly correlated with academic performance. Too much time spent on Facebook was weakly but directly associated with poorer perceived academic performance. Loneliness was a stronger indicator of college adjustment than any dimension of Facebook usage. 2014 Elsevier Ltd. All rights reserved."}
{"_id": "080aebd2cc1019f17e78496354c37195560b0697", "title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems", "text": "MXNet is a multi-language machine learning (ML) library to ease the development of ML algorithms, especially for deep neural networks. Embedded in the host language, it blends declarative symbolic expression with imperative tensor computation. It offers auto differentiation to derive gradients. MXNet is computation and memory efficient and runs on various heterogeneous systems, ranging from mobile devices to distributed GPU clusters. This paper describes both the API design and the system implementation of MXNet, and explains how embedding of both symbolic expression and tensor operation is handled in a unified fashion. Our preliminary experiments reveal promising results on large scale deep neural network applications using multiple GPU machines."}
{"_id": "12569f509a4a8f34717c38e32701fdab9def0e06", "title": "Status of Serverless Computing and Function-as-a-Service(FaaS) in Industry and Research", "text": "This whitepaper summarizes issues raised during the First International Workshop on Serverless Computing (WoSC) 2017 held June 5th 2017 and especially in the panel and associated discussion that concluded the workshop. We also include comments from the keynote and submitted papers. A glossary at the end (section 8) defines many technical terms used in this report."}
{"_id": "24c7cee069066f528d738267207692d640b7bd8f", "title": "Building a Chatbot with Serverless Computing", "text": "Chatbots are emerging as the newest platform used by millions of consumers worldwide due in part to the commoditization of natural language services, which provide provide developers with many building blocks to create chatbots inexpensively. However, it is still difficult to build and deploy chatbots. Developers need to handle the coordination of the cognitive services to build the chatbot interface, integrate the chatbot with external services, and worry about extensibility, scalability, and maintenance. In this work, we present the architecture and prototype of a chatbot using a serverless platform, where developers compose stateless functions together to perform useful actions. We describe our serverless architecture based on function sequences, and how we used these functions to coordinate the cognitive microservices in the Watson Developer Cloud to allow the chatbot to interact with external services. The serverless model improves the extensibility of our chatbot, which currently supports 6 abilities: location based weather reports, jokes, date, reminders, and a simple music tutor."}
{"_id": "2b1759673f0f8da5e9477b127bab976f8d4d50fe", "title": "Serverless Computing: Design, Implementation, and Performance", "text": "We present the design of a novel performance-oriented serverless computing platform implemented in. NET, deployed in Microsoft Azure, and utilizing Windows containers as function execution environments. Implementation challenges such as function scaling and container discovery, lifecycle, and reuse are discussed in detail. We propose metrics to evaluate the execution performance of serverless platforms and conduct tests on our prototype as well as AWS Lambda, Azure Functions, Google Cloud Functions, and IBM's deployment of Apache OpenWhisk. Our measurements show the prototype achieving greater throughput than other platforms at most concurrency levels, and we examine the scaling and instance expiration trends in the implementations. Additionally, we discuss the gaps and limitations in our current design, propose possible solutions, and highlight future research."}
{"_id": "3f60775a4e97573ee90177e5e44983a3f230d7b7", "title": "Cloud-Native, Event-Based Programming for Mobile Applications", "text": "Creating mobile applications often requires both client and server-side code development, each requiring vastly different skills. Recently, cloud providers like Amazon and Google introduced \"server-less\" programming models that abstract away many infrastructure concerns and allow developers to focus on their application logic. In this demonstration, we introduce OpenWhisk, our system for constructing cloud native actions, within the context of mobile application development process. We demonstrate how OpenWhisk is used in mobile application development, allows cloud API customizations for mobile, and simplifies mobile application architectures."}
{"_id": "4182cecefdcf4b6e69203926e3f92e660efe8cf0", "title": "Whose Book is it Anyway ? Using Machine Learning to Identify the Author of Unknown Texts", "text": "In this paper, we present an implementation of a new method for text classification that achieves significantly faster results than most existing classifiers. We extract features that are computationally efficient and forgo more computationally expensive ones used by many other text classifiers, most notably n-grams (contiguous sequences of n words). We then analyze the feature vectors through a hybrid SVM technique that sequentially classifies them with a one-versus-all classifier and a sequence of binary classifiers. Our overall method finishes execution within seconds on large novels, with accuracy comparable to that of standard classifiers but with a significantly shorter runtime."}
{"_id": "f818b08e6d7d69bc497e38a51ea387f48aca1bd3", "title": "Discussions in the comments section: Factors influencing participation and interactivity in online newspapers' reader comments", "text": "Posting comments on the news is one of the most popular forms of user participation in online newspapers, and there is great potential for public discourse that is associated with this form of user communication. However, this potential arises only when several users participate in commenting and when their communication becomes interactive. Based on an adaption of Galtung and Ruge\u2019s theory of newsworthiness, we hypothesized that a news article\u2019s news factors affect both participation levels and interactivity in a news item\u2019s comments section. The data from an online content analysis of political news are consistent with the hypotheses. This article explores the ways in which news factors affect participation levels and interactivity, and it discusses the theoretical, normative, and practical implications of those findings."}
{"_id": "75074802e33d3bbe45290acc4655cc9722b4db70", "title": "Analogical Mapping by Constraint Satisfaction", "text": "similarity. Units not yet reached asymptote: 0 Goodness of network: 0.61 Calculating the best mappings after 72 cycles. Best mapping of SMART is HUNGRY. 0.70 Best mapping of TALL is FRIENDLY. 0.71 Best mapping of TIMID is FRISKY. 0.54 Best mopping of TOM is BLACKIE. 0.70 Best mapping of STEVE is FIDO. 0.54 Best mopping of BILL is ROVER. 0.71 by ACME for this example. The five sentences corresponding to the five propositions in each analog (e.g., \u201cBill is smart\u201d) were listed in adjacent colunms on a piece of paper. Sentences related to the same individual were listed consecutively; otherwise, the order was scrambled. Across subjects two different orders were used. The instructions simply stated, \u201cYour task is to figure out what in the left set of sentences corresponds to what in the right set of sentences.\u201d Subjects were also told that the meaning of the words was irrelevant. The three individuals and three attributes of the analog on the left were listed on the bottom of the page; for each element, subjects were to write down what they believed to be the corresponding element of the analog on the right. Three minutes were allowed for completion of the task. A group of 8 UCLA students in an undergraduate psychology class served as subjects. Five subjects produced the same set of six correspondences identified by ACME, 2 subjects produced four of the six, and 1 subject was unable to understand the task. These results indicate that finding the isomorphism for this example is within the capability of many college students. 344 HOLYOAK AND THAGARD Structure and Pragmatics in Metaphor To explore the performance of ACME in metaphorical mapping, the program was given predicate-calculus representations of the knowledge underlying a metaphor that has been analyzed in detail by Kittay (1987). The metaphor is derived from a passage in Plato\u2019s Theuetetus in which Socrates declares himself to be a \u201cmidwife of ideas,\u201d elaborating the metaphor at length. Table 19 contains predicate-calculus representations based upon Kittay\u2019s analysis of the source analog r.?ncerning the role of a midwife and of the target analog concerning the role of a philosopher-teacher. Roughly, Socrates claims that he is like a midwife in that he introduces the student to intellectual partners, just as a midwife often serves first as a matchmaker; Socrates helps the student evaluate the truth or falsity of his ideas much as a midwife helps a mother to deliver a child. This metaphor was used to provide an illustration of the manner in which structural and pragmatic constraints interact in ACME. Table 19 presents predicate-calculus representations of two versions of the metaphor: an isomorphic version based directly upon Kittay\u2019s analysis, and a nonisomorphic version created by adding irrelevant and misleading information to the representation of the \u201cSocrates\u201d target analog. The best mappings obtained for each object and predicate in the target, produced by three runs of ACME, are reported in Table 20. The asymptotic activations of the best mappings are also presented. A mapping of \u201cnone\u201d means that no mapping unit had an asymptotic activation greater than .20. The run reported in the first column used the isomorphic version without any pragmatic weights. The network settles with a correct set of mappings after 34 cycles. 
Thus Socrates maps to the midwife, his student o the mother, the student\u2019s intellectual partner to the father, and the idea to the child. (Note that there is a homomorphic mapping of the predicates thinks-about and tests-truth to in-labor-with.) The propositions expressing causal relations in the two analogs are not essential here; deletion of them still allows a complete mapping to be discovered. A very different set of mappings is reported in the middle column of Table 20 for the nonisomorphic version of the \u201cSocrates\u201d analog. This version provides additional knowledge about Socrates that would be expected to produce major interference with discovery of the metaphoric relation between the two analogs. The nonisomorphic version contains the information that Socrates drinks hemlock juice, which is of course irrelevant to the metaphor. Far worse, the representation encodes the information that Socrates him\u2019self was matched to his wife by a midwife; and that Socrates\u2019 wife had a child with the help of this midwife. Clearly, this nonisomorphic extension will cause the structural and semantic constraints on mapping to support a much more superficial set of correspondences between the two situations. And indeed, in this second run ACME finds only the barest fragments of the intended metaphoric mappings when the network settles ANALOGICAL MAPPING 345 TABLE 19 Predicate-Calculus Representatlans of Knowledge Underlying the Metaphor \u201cSocrates is a Mldwlfe of Ideas\u201d\u2018(lsamarphlc and Nanlsomorphlc Verslons)"}
{"_id": "ba17b83bc2462b1040490dd08e49aef5233ef257", "title": "Intelligent Buildings of the Future: Cyberaware, Deep Learning Powered, and Human Interacting", "text": "Intelligent buildings are quickly becoming cohesive and integral inhabitants of cyberphysical ecosystems. Modern buildings adapt to internal and external elements and thrive on ever-increasing data sources, such as ubiquitous smart devices and sensors, while mimicking various approaches previously known in software, hardware, and bioinspired systems. This article provides an overview of intelligent buildings of the future from a range of perspectives. It discusses everything from the prospects of U.S. and world energy consumption to insights into the future of intelligent buildings based on the latest technological advancements in U.S. industry and government."}
{"_id": "269ef7beca7de8aa853bcb32ba99bce7c4013fe6", "title": "A Robust, Simple Genotyping-by-Sequencing (GBS) Approach for High Diversity Species", "text": "Advances in next generation technologies have driven the costs of DNA sequencing down to the point that genotyping-by-sequencing (GBS) is now feasible for high diversity, large genome species. Here, we report a procedure for constructing GBS libraries based on reducing genome complexity with restriction enzymes (REs). This approach is simple, quick, extremely specific, highly reproducible, and may reach important regions of the genome that are inaccessible to sequence capture approaches. By using methylation-sensitive REs, repetitive regions of genomes can be avoided and lower copy regions targeted with two to three fold higher efficiency. This tremendously simplifies computationally challenging alignment problems in species with high levels of genetic diversity. The GBS procedure is demonstrated with maize (IBM) and barley (Oregon Wolfe Barley) recombinant inbred populations where roughly 200,000 and 25,000 sequence tags were mapped, respectively. An advantage in species like barley that lack a complete genome sequence is that a reference map need only be developed around the restriction sites, and this can be done in the process of sample genotyping. In such cases, the consensus of the read clusters across the sequence tagged sites becomes the reference. Alternatively, for kinship analyses in the absence of a reference genome, the sequence tags can simply be treated as dominant markers. Future application of GBS to breeding, conservation, and global species and population surveys may allow plant breeders to conduct genomic selection on a novel germplasm or species without first having to develop any prior molecular tools, or conservation biologists to determine population structure without prior knowledge of the genome or diversity in the species."}
{"_id": "9f0eb43643dd4f7252e16b1e3136e565e3ee940a", "title": "Classify or Select: Neural Architectures for Extractive Document Summarization", "text": "We present two novel and contrasting Recurrent Neural Network (RNN) based architectures for extractive summarization of documents. The Classifier based architecture sequentially accepts or rejects each sentence in the original document order for its membership in the final summary. The Selector architecture, on the other hand, is free to pick one sentence at a time in any arbitrary order to piece together the summary. Our models under both architectures jointly capture the notions of salience and redundancy of sentences. In addition, these models have the advantage of being very interpretable, since they allow visualization of their predictions broken up by abstract features such as information content, salience and redundancy. We show that our models reach or outperform state-of-the-art supervised models on two different corpora. We also recommend the conditions under which one architecture is superior to the other based on experimental evidence."}
{"_id": "d3a2dca439d2df686fb09d5d05ae00a97e74a1af", "title": "Domain Generalization with Adversarial Feature Learning", "text": "In this paper, we tackle the problem of domain generalization: how to learn a generalized feature representation for an \"unseen\" target domain by taking the advantage of multiple seen source-domain data. We present a novel framework based on adversarial autoencoders to learn a generalized latent feature representation across domains for domain generalization. To be specific, we extend adversarial autoencoders by imposing the Maximum Mean Discrepancy (MMD) measure to align the distributions among different domains, and matching the aligned distribution to an arbitrary prior distribution via adversarial feature learning. In this way, the learned feature representation is supposed to be universal to the seen source domains because of the MMD regularization, and is expected to generalize well on the target domain because of the introduction of the prior distribution. We proposed an algorithm to jointly train different components of our proposed framework. Extensive experiments on various vision tasks demonstrate that our proposed framework can learn better generalized features for the unseen target domain compared with state-of-the-art domain generalization methods."}
{"_id": "a088bed7ac41ae77dbb23041626eb8424d96a5ba", "title": "A Pattern Learning Approach to Question Answering Within the Ephyra Framework", "text": "This paper describes the Ephyra question answering engine, a modular and extensible framework that allows to integrate multiple approaches to question answering in one system. Our framework can be adapted to languages other than English by replacing language-specific components. It supports the two major approaches to question answering, knowledge annotation and knowledge mining. Ephyra uses the web as a data resource, but could also work with smaller corpora. In addition, we propose a novel approach to question interpretation which abstracts from the original formulation of the question. Text patterns are used to interpret a question and to extract answers from text snippets. Our system automatically learns the patterns for answer extraction, using question-answer pairs as training data. Experimental results revealed the potential of this approach."}
{"_id": "60e897c5270c4970510cfdb6d49ef9513a0d24f1", "title": "Towards time-optimal race car driving using nonlinear MPC in real-time", "text": "This paper addresses the real-time control of autonomous vehicles under a minimum traveling time objective. Control inputs for the vehicle are computed from a nonlinear model predictive control (MPC) scheme. The time-optimal objective is reformulated such that it can be tackled by existing efficient algorithms for real-time nonlinear MPC that build on the generalized Gauss-Newton method. We numerically validate our approach in simulations and present a real-world hardware setup of miniature race cars that is used for an experimental comparison of different approaches."}
{"_id": "45524d7a40435d989579b88b70d25e4d65ac9e3c", "title": "Focusing Attention: Towards Accurate Text Recognition in Natural Images", "text": "Scene text recognition has been a hot research topic in computer vision due to its various applications. The state of the art is the attention-based encoder-decoder framework that learns the mapping between input images and output sequences in a purely data-driven way. However, we observe that existing attention-based methods perform poorly on complicated and/or low-quality images. One major reason is that existing methods cannot get accurate alignments between feature areas and targets for such images. We call this phenomenon \u201cattention drift\u201d. To tackle this problem, in this paper we propose the FAN (the abbreviation of Focusing Attention Network) method that employs a focusing attention mechanism to automatically draw back the drifted attention. FAN consists of two major components: an attention network (AN) that is responsible for recognizing character targets as in the existing methods, and a focusing network (FN) that is responsible for adjusting attention by evaluating whether AN pays attention properly on the target areas in the images. Furthermore, different from the existing methods, we adopt a ResNet-based network to enrich deep representations of scene text images. Extensive experiments on various benchmarks, including the IIIT5k, SVT and ICDAR datasets, show that the FAN method substantially outperforms the existing methods."}
{"_id": "327f0ec65bd0e0dabad23c42514d0e2ac8b05a97", "title": "FACTORS INFLUENCING CONSUMERS \u2019 ATTITUDE TOWARDS E-COMMERCE PURCHASES THROUGH ONLINE SHOPPING", "text": "Online shopping is the process of buying goods and services from merchants who sell on the internet. Shoppers can visit web stores from the comfort of their homes and shop as they sit in front of the computer. The main purpose of this study is to determine the factors influencing consumers\u2019 attitude towards e-commerce purchases through online shopping. The study also investigate how socio-demographic (age, income and occupation), pattern of online buying (types of goods, e-commerce experience and hours use on internet) and purchase perception (product perception, customers\u2019 service and consumers\u2019 risk) affect consumers\u2019 attitude towards online shopping. Convenience sampling method was conducted in this study and the sample comparison of 100 respondents in Taman Tawas Permai, Ipoh. Data were collected via self-administered questionnaire which contains 15 questions in Part A (respondents\u2019 background and their pattern of using internet and online buying), 34 questions in Part B (attitude towards online purchase) and 36 questions in Part C (purchase perception towards online shopping). One-way ANOVA were used to assess the differences between independent variable such as age, income, occupation and pattern of online buying (type of goods) and dependant variable such as attitude towards online shopping. The findings revealed that there is no significant difference in attitude towards online shopping among age group (F = 1.020, p < 0.05) but there is a significant difference in attitude towards online shopping among income group (F = 0.556, p > 0.05). The research finding also showed that there is no significant difference in attitude towards online shopping among occupation group (F = 1.607, p < 0.05) and types of goods group (F = 1.384, p < 0.05). Pearson\u2019s correlation were used to assess the relationship between independent variable such as e-commerce experience, hours spent on internet, product perception, customers\u2019 service and consumers\u2019 risk and dependant variable such as attitude towards online shopping. The findings revealed that there is a significant relationship between e-commerce experience and attitude towards online shopping among the respondents (r = -0.236**, p < 0.05). However, there is no significant relationship between hours spent on internet and attitude towards online shopping among the respondents (r = 0.106, p > 0.05). This study also indicated that there is a significant relationship between product perception and attitude towards online shopping among the respondents (r = 0.471**, p < 0.01) and there is also a significant relationship between customers\u2019 service and attitude towards online shopping among the respondents (r = 0.459**, p < 0.01). Lastly, this result showed that there is no significant relationship between consumers\u2019 risk and attitude towards online shopping among the respondents (r = 0.153, p > 0.05). Further study should explore other factors that influencing consumers\u2019 attitude towards e-commerce purchases through online shopping with a broader range of population and high representative sampling method."}
{"_id": "3b3f93f16e475cf37cd2e6dba208a25575dc301d", "title": "Risk-RRT\u2217: A robot motion planning algorithm for the human robot coexisting environment", "text": "In the human robot coexisting environment, to reach the goal efficiently and safely is very meaningful for the mobile service robot. In this paper, a Risk based Rapidly-exploring Random Tree for optimal motion planning (Risk-RRT\u2217) algorithm is proposed by combining the comfort and collision risk (CCR) map with the RRT\u2217 algorithm, which provides a variant of the RRT\u2217 algorithm in the dynamic human robot coexisting environment. In the experiments, the time cost in the navigation process and the length of the trajectory are utilized for the evaluation of the proposed algorithm. A comparison with the Risk-RRT algorithm is carried out and experimental results reveal that our proposed algorithm can achieve a better performance than that of the Risk-RRT in both static and dynamic environments."}
{"_id": "227ed02b3e5edf4c5b08539c779eca90683549e6", "title": "Towards a sustainable e-Participation implementation model", "text": "A great majority of the existing frameworks are inadequate to address their universal applicability in countries with certain socio-economic and technological settings. Though there is so far no \u201cone size fits all\u201d strategy in implementing eGovernment, there are some essential common elements in the transformation. Therefore, this paper attempts to develop a singular sustainable model based on some theories and the lessons learned from existing e-Participation initiatives of developing and developed countries, so that the benefits of ICT can be maximized and greater participation be ensured."}
{"_id": "8701435bc82840cc5f040871e3690964998e6fd3", "title": "circRNA biogenesis competes with pre-mRNA splicing.", "text": "Circular RNAs (circRNAs) are widely expressed noncoding RNAs. However, their biogenesis and possible functions are poorly understood. Here, by studying circRNAs that we identified in neuronal tissues, we provide evidence that animal circRNAs are generated cotranscriptionally and that their production rate is mainly determined by intronic sequences. We demonstrate that circularization and splicing compete against each other. These mechanisms are tissue specific and conserved in animals. Interestingly, we observed that the second exon of the splicing factor muscleblind (MBL/MBNL1) is circularized in flies and humans. This circRNA (circMbl) and its flanking introns contain conserved muscleblind binding sites, which are strongly and specifically bound by MBL. Modulation of MBL levels strongly affects circMbl biosynthesis, and this effect is dependent on the MBL binding sites. Together, our data suggest that circRNAs can function in gene regulation by competing with linear splicing. Furthermore, we identified muscleblind as a factor involved in circRNA biogenesis."}
{"_id": "13ee420bea383f67b029531f845528ff3ad5b3e1", "title": "Modeling astrocyte-neuron interactions in a tripartite synapse", "text": "Glial cells (microglia, oligodendrocytes, and especially astrocytes) play a critical role in the central nervous system by affecting in various ways the neuronal single cell level interactions as well as connectivity and communication at the network level, both in the developing and mature brain. Numerous studies (see, e.g., [1-3]) indicate an important modulatory role of astrocytes in brain homeostasis but most specifically in neuronal metabolism, plasticity, and survival. Astrocytes are also known to play an important role in many neurological disorders and neurodegenerative diseases. It is therefore important in the light of recent evidence to assess how the astrocytes interact with neurons, both in situ and in silico. The integration of biological knowledge into computational models is becoming increasingly important to help understand the role of astrocytes both in health and disease. We have previously addressed the role of transmitters and amyloid-beta peptide on calcium signals in rat cortical astrocytes [4]. In this work, we extend the work by using a modified version of the previously developed model [5] for astrocyte-neuron interactions in a tripartite synapse to explore the effects of various preand postsynaptic as well as extrasynaptic mechanisms on neuronal activity. We consider extending the model to include various additional mechanisms, such as the role of IP3 receptor function, recycling of neurotransmitters, K+ buffering by the Na+/K+ pump, and retrograde signaling by endocannabinoids. The improved tripartite synapse model for astrocyte-neuron interactions will provide an essential modeling tool for facilitating studies of local network dynamics in the brain. The model may also serve as an important step toward understanding mechanisms behind induction and maintenance of plastic changes in the brain."}
{"_id": "be29cc9cd74fd7d260b4571a4b72518accae5127", "title": "Semisupervised Autoencoder for Sentiment Analysis", "text": "In this paper, we investigate the usage of autoencoders in modeling textual data. Traditional autoencoders suffer from at least two aspects: scalability with the high dimensionality of vocabulary size and dealing with task-irrelevant words. We address this problem by introducing supervision via the loss function of autoencoders. In particular, we first train a linear classifier on the labeled data, then define a loss for the autoencoder with the weights learned from the linear classifier. To reduce the bias brought by one single classifier, we define a posterior probability distribution on the weights of the classifier, and derive the marginalized loss of the autoencoder with Laplace approximation. We show that our choice of loss function can be rationalized from the perspective of Bregman Divergence, which justifies the soundness of our model. We evaluate the effectiveness of our model on six sentiment analysis datasets, and show that our model significantly outperforms all the competing methods with respect to classification accuracy. We also show that our model is able to take advantage of unlabeled dataset and get improved performance. We further show that our model successfully learns highly discriminative feature maps, which explains its superior performance."}
{"_id": "8062e650cf0c469540f03fd1d08b5d9f12ebcfb2", "title": "Rethinking Centrality: The Role of Dynamical Processes in Social Network Analysis", "text": "Many popular measures used in social network analysis, including centrality, are based on the random walk. The random walk is a model of a stochastic process where a node interacts with one other node at a time. However, the random walk may not be appropriate for modeling social phenomena, including epidemics and information diffusion, in which one node may interact with many others at the same time, for example, by broadcasting the virus or information to its neighbors. To produce meaningful results, social network analysis algorithms have to take into account the nature of interactions between the nodes. In this paper we classify dynamical processes as conservative and non-conservative and relate them to well-known measures of centrality used in network analysis: PageRank and Alpha-Centrality. We demonstrate, by ranking users in online social networks used for broadcasting information, that non-conservative Alpha-Centrality generally leads to a better agreement with an empirical ranking scheme than the conservative PageRank."}
{"_id": "6a51422fd215ca9d98e6ce1c930f0046520013e2", "title": "Power Optimization of Battery Charging System Using FPGA Based Neural Network Controller", "text": "This paper involves designing a small scale battery charging system which is powered via a photovoltaic panel. This work aims at the usage of solar energy for charging the battery and optimizing the power of the system. Implementation is done using Artificial Neural Network (ANN) on FPGA. To develop this system an Artificial Neural Network is trained and its result is further used for the PWM technique. PWM pulse generation has been done using Papilio board which is based on XILINX Spartan 3E FPGA. The ANN with PWM technique is ported on FPGA which is programmed using VHDL. This able to automatically control the whole charging system operation without requirement of external sensory unit. The simulation results are achieved by using MATLAB and XILINX. These results allowed demonstrating the charging of the battery using proposed ANN and PWM technique. KeywordsPhotovoltaic battery charger, PWM, ANN, FPGA"}
{"_id": "e180d2d1b58d553a948d778715ddf15246f838a9", "title": "Foot orthoses for plantar heel pain: a systematic review and meta-analysis.", "text": "OBJECTIVE\nTo investigate the effectiveness of foot orthoses for pain and function in adults with plantar heel pain.\n\n\nDESIGN\nSystematic review and meta-analysis. The primary outcome was pain or function categorised by duration of follow-up as short (0 to 6 weeks), medium (7 to 12 weeks) or longer term (13 to 52 weeks).\n\n\nDATA SOURCES\nMedline, CINAHL, SPORTDiscus, Embase and the Cochrane Library from inception to June 2017.\n\n\nELIGIBILITY CRITERIA FOR SELECTING STUDIES\nStudies must have used a randomised parallel-group design and evaluated foot orthoses for plantar heel pain. At least one outcome measure for pain or function must have been reported.\n\n\nRESULTS\nA total of 19 trials (1660 participants) were included. In the short term, there was very low-quality evidence that foot orthoses do not reduce pain or improve function. In the medium term, there was moderate-quality evidence that foot orthoses were more effective than sham foot orthoses at reducing pain (standardised mean difference -0.27 (-0.48 to -0.06)). There was no improvement in function in the medium term. In the longer term, there was very low-quality evidence that foot orthoses do not reduce pain or improve function. A comparison of customised and prefabricated foot orthoses showed no difference at any time point.\n\n\nCONCLUSION\nThere is moderate-quality evidence that foot orthoses are effective at reducing pain in the medium term, however it is uncertain whether this is a clinically important change."}
{"_id": "1e5029cf2a120c0d7453f3ecbd059f97eebbbf6f", "title": "From wireless sensor networks towards cyber physical systems", "text": "In the past two decades, a lot of research activities have been dedicated to the fields of mobile ad hoc network (MANET) and wireless sensor networks (WSN). More recently, the cyber physical system (CPS) has emerged as a promising direction to enrich the interactions between physical and virtual worlds. In this article, we first review some research activities inWSN, including networking issues and coverage and deployment issues. Then,we review some CPS platforms and systems that have been developed recently, including health care, navigation, rescue, intelligent transportation, social networking, and gaming applications. Through these reviews, we hope to demonstrate how CPS applications exploit the physical information collected by WSNs to bridge real and cyber spaces and identify important research challenges related to CPS designs. \u00a9 2011 Elsevier B.V. All rights reserved."}
{"_id": "74473e75ea050a18f1d1d73ebba240c11c21e882", "title": "The Improvisation Effect: A Case Study of User Improvisation and Its Effects on Information System Evolution", "text": "Few studies have examined interactions between IT change and organizational change during information systems evolution (ISE). We propose a dynamic model of ISE where change dynamics are captured in four dimensions: planned change, improvised change, organizational change and IT change. This inductively-generated model yields a rich account of ISE and its drivers by integrating the four change dimensions. The model shows how incremental adjustments in IT and organizational processes often grow into a profound change as users improvise. We demonstrate the value of the dynamic model by illustrating ISE processes in the context of two manufacturing organizations implementing the same system over a study period of five years. This paper makes its contribution by holistically characterizing improvisation in the context of IT and organizational change. Our ISE model moves research in organizational and IT change towards a common framing by showing how each affects the other\u2019s form, function and evolution."}
{"_id": "858ddff549ae0a3094c747fb1f26aa72821374ec", "title": "Survey on RGB, 3D, Thermal, and Multimodal Approaches for Facial Expression Recognition: History, Trends, and Affect-Related Applications", "text": "Facial expressions are an important way through which humans interact socially. Building a system capable of automatically recognizing facial expressions from images and video has been an intense field of study in recent years. Interpreting such expressions remains challenging and much research is needed about the way they relate to human affect. This paper presents a general overview of automatic RGB, 3D, thermal and multimodal facial expression analysis. We define a new taxonomy for the field, encompassing all steps from face detection to facial expression recognition, and describe and classify the state of the art methods accordingly. We also present the important datasets and the bench-marking of most influential methods. We conclude with a general discussion about trends, important questions and future lines of research."}
{"_id": "008b7fbda45b9cd63c89ee5ed3f8e6b2b6bf8457", "title": "Kinematics and the Implementation of an Elephant's Trunk Manipulator and Other Continuum Style Robots", "text": "Traditionally, robot manipulators have been a simple arrangement of a small number of serially connected links and actuated joints. Though these manipulators prove to be very effective for many tasks, they are not without their limitations, due mainly to their lack of maneuverability or total degrees of freedom. Continuum style (i.e., continuous \"back-bone\") robots, on the other hand, exhibit a wide range of maneuverability, and can have a large number of degrees of freedom. The motion of continuum style robots is generated through the bending of the robot over a given section; unlike traditional robots where the motion occurs in discrete locations, i.e., joints. The motion of continuum manipulators is often compared to that of biological manipulators such as trunks and tentacles. These continuum style robots can achieve motions that could only be obtainable by a conventionally designed robot with many more degrees of freedom. In this paper we present a detailed formulation and explanation of a novel kinematic model for continuum style robots. The design, construction, and implementation of our continuum style robot called the elephant trunk manipulator is presented. Experimental results are then provided to verify the legitimacy of our model when applied to our physical manipulator. We also provide a set of obstacle avoidance experiments that help to exhibit the practical implementation of both our manipulator and our kinematic model."}
{"_id": "8b1d430d8d37998c48411ce2e9dc35f3a8529fd7", "title": "Knowledge-Based Reasoning in a Unified Feature Modeling Scheme", "text": "Feature-based modeling is an accepted approach to include high-level geometric information in product models as well as to facilitate a parameterized and constraint-based product development process. Moreover, features are suitable as an intermediate layer between a product\u2019s knowledge model and its geometry model for effective and efficient information management. To achieve this, traditional feature technology must be extended to align with the approach of knowledge-based reasoning. In this paper, based on a previously proposed unified feature modeling scheme, feature definitions are extended to support knowledge-based reasoning. In addition, a communication mechanism between the knowledge-based system and the feature model is established. The methods to embed and use knowledge for information consistency control are described."}
{"_id": "d7d6cbc28cb7751176ad241b9375802a7d6d62df", "title": "FOREX Rate prediction using Chaos and Quantile Regression Random Forest", "text": "This paper presents a hybrid of chaos modeling and Quantile Regression Random Forest (QRRF) for Foreign Exchange (FOREX) Rate prediction. The exchange rates data of US Dollar (USD) versus Japanese Yen (JPY), British Pound (GBP), and Euro (EUR) are used to test the efficacy of proposed model. Based on the experiments conducted, we conclude that the proposed model yielded accurate predictions compared to Chaos + Quantile Regression (QR), Chaos+Random Forest (RF) and that of Pradeepkumar and Ravi [12] in terms of both Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE)."}
{"_id": "5fca285e4e252b99f8bd0a8489c984280e78a976", "title": "Non-parametric Model for Background Subtraction", "text": "Background subtraction is a method typically used to segment moving regions in image sequences taken from a static camera by comparing each new frame to a model of the scene background. We present a novel non-parametric background model and a background subtraction approach. The model can handle situations where the background of the scene is cluttered and not completely static but contains small motions such as tree branches and bushes. The model estimates the probability of observing pixel intensity values based on a sample of intensity values for each pixel. The model adapts quickly to changes in the scene which enables very sensitive detection of moving targets. We also show how the model can use color information to suppress detection of shadows. The implementation of the model runs in real-time for both gray level and color imagery. Evaluation shows that this approach achieves very sensitive detection with very low false alarm rates."}
{"_id": "ccbd40c5d670f4532ad1a9f0003a8b8157388aa0", "title": "Image Difference Threshold Strategies and Shadow Detection", "text": "The paper considers two problems associated with the detection and classification of motion in image sequences obtained from a static camera. Motion is detected by differencing a reference and the \"current\" image frame, and therefore requires a suitable reference image and the selection of an appropriate detection threshold. Several threshold selection methods are investigated, and an algorithm based on hysteresis thresholding is shown to give acceptably good results over a number of test image sets. The second part of the paper examines the problem of detecting shadow regions within the image which are associated with the object motion. This is based on the notion of a shadow as a semi-transparent region in the image which retains a (reduced contrast) representation of the underlying surface pattern, texture or grey value. The method uses a region growing algorithm which uses a growing criterion based on a fixed attenuation of the photometric gain over the shadow region, in comparison to the reference image."}
{"_id": "e182225eb0c1e90f09cc3a0f69abb7ac0e9b3dba", "title": "Pfinder: Real-Time Tracking of the Human Body", "text": "Pfinder is a real-time system for tracking people and interpreting their behavior. It runs at 10Hz on a standard SGI Indy computer, and has performed reliably on thousands of people in many different physical locations. The system uses a multiclass statistical model of color and shape to obtain a 2D representation of head and hands in a wide range of viewing conditions. Pfinder has been successfully used in a wide range of applications including wireless interfaces, video databases, and low-bandwidth coding. Index Terms \u2014Blobs, blob tracking, real-time, person tracking, 3D person tracking, segmentation, gesture recognition, mixture model, MDL."}
{"_id": "f680e7e609ce0729c8a594336e0cf8f447b3ef13", "title": "Adaptive Background Estimation and Foreground Detection using Kalman-Filtering", "text": "In image sequence processing kalman filtering is used for an adaptive background estimation, in order to separate the foreground from the background. The presented work is an approach which takes into account that changing illumination should be considered in the background estimation, and should not be detected as foreground. The new approach assumes a stationary CCD cameras with fixed focal length and considers non-rigid objects moving non-continuously like human bodies. Furthermore, statistic based methods are used to overcome the problems caused by shadow borders and the adaptation, when the background is covered by the foreground."}
{"_id": "1ad7dd7cc87774b0e865c8562553c0ac5a6bd9f8", "title": "Artificial Diversity as Maneuvers in a Control Theoretic Moving Target Defense", "text": "Moving target cyber-defense systems encompass a wide variety of techniques in multiple areas of cyber-security. The dynamic system reconfiguration aspect of moving target cyber-defense can be used as a basis for providing an adaptive attack surface. The goal of this research is to develop novel control theoretic mechanisms by which a range of cyber maneuver techniques are provided such that when an attack is detected the environment can select the most appropriate maneuver to ensure a sufficient shift in the attack surface to render the identified attack ineffective. Effective design of this control theoretic cyber maneuver approach requires the development of two additional theories. First, algorithms are required for the estimation of security state. This will identify when a maneuver is required. Second, a theory for the estimation of the cost of performing a maneuver is required. This is critical for selecting the most cost-effective maneuver while ensuring that the attack is rendered fully ineffective. Finally, we present our moving target control loop as well as a detailed case study examining the impact of our proposed cyber maneuver paradigm on DHCP attacks."}
{"_id": "8cfac6b7417c198ea192511231343793ff2f3b63", "title": "GoSCAN: Decentralized scalable data clustering", "text": "Identifying clusters is an important aspect of analyzing large datasets. Clustering algorithms classically require access to the complete dataset. However, as huge amounts of data are increasingly originating from multiple, dispersed sources in distributed systems, alternative solutions are required. Furthermore, data and network dynamicity in a distributed setting demand adaptable clustering solutions that offer accurate clustering models at a reasonable pace. In this paper, we propose GoScan, a fully decentralized density-based clustering algorithm which is capable of clustering dynamic and distributed datasets without requiring central control or message flooding. We identify two major tasks: finding the core data points, and forming the actual clusters, which we execute in parallel employing gossip-based communication. This approach is very efficient, as it offers each peer enough authority to discover the clusters it is interested in. Our algorithm poses no extra burden of overlay formation in the network, while providing high levels of scalability. We also offer several optimizations to the basic clustering algorithm for improving communication overhead and processing costs. Coping with dynamic data is made possible by introducing an age factor, which gradually detects data-set changes and enables clustering updates. In our experimental evaluation, we will show that GoSCAN can discover the clusters efficiently with scalable transmission cost."}
{"_id": "7adf1aa47f8068158ac66ecb940252f90b8a2e6f", "title": "Analysis and comparison of MIMO radar waveforms", "text": "Choosing a proper waveform is a critical task for the implementation of multiple-input multiple-output (MIMO) radars. In addition to the general requirements for radar waveforms such as good resolution, low sidelobes, etc, MIMO radar waveforms also should possess good orthogonality. In this paper we give a brief overview of MIMO radar waveforms, which are classified into four categories: (1) time division multiple access (TDMA), (2) frequency division multiple access (FDMA), (3) Doppler division multiple access (DDMA), and (4) code division multiple access (CDMA). A special circulating MIMO waveform is also addressed The properties as well as application limitations of different waveforms are analyzed and compared. Some simulations results are also presented to illustrate the respective performance of different waveforms."}
{"_id": "7716aa20c98c233b1dc89ab6143c2ba25441ec49", "title": "A MATLAB toolbox for Granger causal connectivity analysis", "text": "Assessing directed functional connectivity from time series data is a key challenge in neuroscience. One approach to this problem leverages a combination of Granger causality analysis and network theory. This article describes a freely available MATLAB toolbox--'Granger causal connectivity analysis' (GCCA)--which provides a core set of methods for performing this analysis on a variety of neuroscience data types including neuroelectric, neuromagnetic, functional MRI, and other neural signals. The toolbox includes core functions for Granger causality analysis of multivariate steady-state and event-related data, functions to preprocess data, assess statistical significance and validate results, and to compute and display network-level indices of causal connectivity including 'causal density' and 'causal flow'. The toolbox is deliberately small, enabling its easy assimilation into the repertoire of researchers. It is however readily extensible given proficiency with the MATLAB language."}
{"_id": "6afe5319630d966c1355f3812f9d4b4b4d6d9fd0", "title": "Branch Prediction Strategies and Branch Target Buffer Design", "text": ""}
{"_id": "4f5051cadf136ab08953ef987c83c17019b99343", "title": "Conceptual model of enterprise resource planning and business intelligence systems usage", "text": "Businesses have invested considerable resources in the usage of enterprise resource planning (ERP) and business intelligence (BI) systems. These systems were heavily studied in developed countries while there are a few and narrowly focused studies in developing ones. However, studies on the integration of ERP and BI have not been given enough attention (hereafter ERPBI). There are many challenges facing the ERPBI usage in term of the steadily increasing speed with which new technologies are evolving. In addition, there are a number of factors that affecting this usage. Based on the finding of the literature, a model from a critical success factors (CSFs) perspective that examine the relationship between ERPBI usage and organisational performance is proposed. The conceptual model provides a foundation for more research in the future. The expected results of the study will improve the business outcome and help design strategies based on an investigation between ERPBI usage and organisational performance."}
{"_id": "370fc745c07b32330c36b4dc84298db2058d99f5", "title": "Molding CNNs for text: non-linear, non-consecutive convolutions", "text": "The success of deep learning often derives from well-chosen operational building blocks. In this work, we revise the temporal convolution operation in CNNs to better adapt it to text processing. Instead of concatenating word representations, we appeal to tensor algebra and use low-rank n-gram tensors to directly exploit interactions between words already at the convolution stage. Moreover, we extend the n-gram convolution to non-consecutive words to recognize patterns with intervening words. Through a combination of lowrank tensors, and pattern weighting, we can efficiently evaluate the resulting convolution operation via dynamic programming. We test the resulting architecture on standard sentiment classification and news categorization tasks. Our model achieves state-of-the-art performance both in terms of accuracy and training speed. For instance, we obtain 51.2% accuracy on the fine-grained sentiment classification task.1"}
{"_id": "5989d226d4234d7c22d3df6ad208e69245873059", "title": "Scaling Reflection Prompts in Large Classrooms via Mobile Interfaces and Natural Language Processing", "text": "We present the iterative design, prototype, and evaluation of CourseMIRROR (Mobile In-situ Reflections and Review with Optimized Rubrics), an intelligent mobile learning system that uses natural language processing (NLP) techniques to enhance instructor-student interactions in large classrooms. CourseMIRROR enables streamlined and scaffolded reflection prompts by: 1) reminding and collecting students' in-situ written reflections after each lecture; 2) continuously monitoring the quality of a student's reflection at composition time and generating helpful feedback to scaffold reflection writing; and 3) summarizing the reflections and presenting the most significant ones to both instructors and students. Through a combination of a 60-participant lab study and eight semester-long deployments involving 317 students, we found that the reflection and feedback cycle enabled by CourseMIRROR is beneficial to both instructors and students. Furthermore, the reflection quality feedback feature can encourage students to compose more specific and higher-quality reflections, and the algorithms in CourseMIRROR are both robust to cold start and scalable to STEM courses in diverse topics."}
{"_id": "59719ac7c1617878196e36db4fbce7cb2ac16b16", "title": "Dynamic switching-based reliable flooding in low-duty-cycle wireless sensor networks", "text": "Reliable flooding in wireless sensor networks (WSNs) is desirable for a broad range of applications and network operations. However, it is a challenging problem to ensure 100% flooding coverage efficiently considering the combined effects of low-duty-cycle operation and unreliable wireless transmission. In this work, we propose a novel dynamic switching-based reliable flooding (DSRF) framework, which is designed as an enhancement layer to provide efficient and reliable flooding over a variety of existing flooding tree structures in low-duty-cycle WSNs. Through comprehensive simulations, we demonstrate that DSRF can effectively improve both flooding energy efficiency and latency."}
{"_id": "16ac517ca8fccfdd010718a7f29d40ae065d9e78", "title": "Tactical driver behavior prediction and intent inference: A review", "text": "Drawing upon fundamental research in human behavior prediction, recently there has been a research focus on how to predict driver behaviors. In this paper we review the field of driver behavior and intent prediction, with a specific focus on tactical maneuvers, as opposed to operational or strategic maneuvers. The aim of a driver behavior prediction system is to forecast the trajectory of the vehicle prior in real-time, which could allow a Driver Assistance System to compensate for dangerous or uncomfortable circumstances. This review provides insights into the scope of the problem, as well as the inputs, algorithms, performance metrics, and shortcomings in the state-of-the-art systems."}
{"_id": "d10dbb161f4f975e430579a7e71802c824be4087", "title": "Validity and reliability of the Cohen 10-item Perceived Stress Scale in patients with chronic headache: Persian version.", "text": "BACKGROUND\nThe Cohen Perceived Stress Scale is being used widely in various countries. The present study evaluated the validity and reliability of the Cohen 10-item Perceived Stress Scale (PSS-10) in assessing tension headache, migraine, and stress-related diseases in Iran.\n\n\nMETHODS\nThis study is a methodological and cross-sectional descriptive investigation of 100 patients with chronic headache admitted to the pain clinic of Baqiyatallah Educational and Therapeutic Center. Convenience sampling was used for subject selection. PSS psychometric properties were evaluated in two stages. First, the standard scale was translated. Then, the face validity, content, and construct of the translated version were determined.\n\n\nRESULTS\nThe average age of participants was 38 years with a standard deviation (SD) of 13.2. As for stress levels, 12% were within the normal range, 36% had an intermediate level, and 52% had a high level of stress. The face validity and scale content were remarkable, and the KMO coefficient was 0.82. Bartlett's test yielded 0.327 which was statistically significant (p<0.0001) representing the quality of the sample. In factor analysis of the scale, the two elements of \"coping\" and \"distress\" were determined. A Cronbach's Alpha coefficient of 0.72 was obtained. This confirmed the remarkable internal consistency and stability of the scale through repeated measure tests (0.93).\n\n\nCONCLUSION\nThe Persian PSS-10 has good internal consistency and reliability. The availability of a validated Persian PSS-10 would indicate a link between stress and chronic headache."}
{"_id": "c89f47c93c62c107e6bd75acde89ee7417ebf244", "title": "Comparison of Architectural Design Decisions for Resource-Constrained Self-Driving Cars - A Multiple Case-Study", "text": "Context: Self-Driving cars are getting more and more attention with public demonstration from all important automotive OEMs but also from companies, which do not have a long history in the automotive industry. Fostered by large international competitions in the last decade, several automotive OEMs have already announced to bring this technology to the market around 2020. Objective: International competitions like the 2007 DARPA Urban Challenge did not focus on efficient usage of resources to realize the self-driving vehicular functionality. Since the automotive industry is very cost-sensitive, realizing reliable and robust selfdriving functionality is challenging when expensive and sophisticated sensors mounted very visibly on the vehicle\u2019s roof for example cannot be used. Therefore, the goal for this study is to investigate how architectural design decisions of recent self-driving vehicular technology consider resource-efficiency. Method: In a multiple case study, the architectural design decisions derived for resourceconstrained self-driving miniature cars for the international competition CaroloCup are compared with architectural designs from recent real-scale self-driving cars. Results: Scaling down available resources for realizing self-driving vehicular technology puts additional constraints on the architectural design; especially reusability of software components in platform-independent algorithmic concepts are prevailing. Conclusion: Software frameworks like the robotic operating system (ROS) enable fast prototypical solutions; however, architectural support for resource-constrained devices is limited. Here, architectural design drivers as realized in AUTOSAR are more suitable."}
{"_id": "3208ed3d4ff2de382ad6a16431cfe7118c000725", "title": "Constructive Induction of Cartesian Product Attributes", "text": "Constructive induction is the process of changing the representation of examples by creating new attributes from existing attributes. In classi cation, the goal of constructive induction is to nd a representation that facilitates learning a concept description by a particular learning system. Typically, the new attributes are Boolean or arithmetic combinations of existing attributes and the learning algorithms used are decision trees or rule learners. We describe the construction of new attributes that are the Cartesian product of existing attributes. We consider the e ects of this operator on a Bayesian classi er an a nearest neighbor algorithm."}
{"_id": "6b02a00274af1f5a432223adf61205e11ccf7249", "title": "Analyzing User Preference for Social Image Recommendation", "text": "With the incredibly growing amount of multimedia data shared on the social media platforms, recommender systems have become an important necessity to ease users\u2019 burden on the information overload. In such a scenario, extensive amount of heterogeneous information such as tags, image content, in addition to the user-to-item preferences, is extremely valuable for making effective recommendations. In this paper, we explore a novel hybrid algorithm termed STM, for image recommendation. STM jointly considers the problem of image content analysis with the users\u2019 preferences on the basis of sparse representation. STM is able to tackle the challenges of highly sparse user feedbacks and cold-start problmes in the social network scenario. In addition, our model is based on the classical probabilistic matrix factorization and can be easily extended to incorporate other useful information such as the social relationships. We evaluate our approach with a newly collected 0.3 million social image data set from Flickr. The experimental results demonstrate that sparse topic modeling of the image content leads to more effective recommendations, , with a significant performance gain over the state-of-the-art alternatives."}
{"_id": "c603e1febf588ec1eccb99d3af89448400f21388", "title": "Novel microstrip patch antenna design employing flexible PVC substrate suitable for defence and radio-determination applications", "text": "This paper propounds a new flexible material, which has been designed by employing flexible Poly Vinyl Chloride (PVC) having dielectric constant \u03b5r= 2.7 as substrate. The projected antenna design states that a circular shaped copper patch with flag shaped slot on it has been deployed over the hexagonal PVC substrate of thickness 1 mm. Then ground plane is partially minimized and slotted to revamp the antenna performance. The proposed antenna operates within frequency range of 7.2032GHz to 7.9035GHz while resonating at 7.55GHz. The proposed antenna design has minimal return loss of \u221255.226dB with the impedance bandwidth of 700 MHz, gain of 4.379dB and directivity of 4.111dBi. The antenna has VSWR less than maximum acceptable value of 2. The projected antenna design can be suitably employed for UWB, radio-determination applications (7.235GHz to 7.25GHz), naval, defence systems (7.25GHz \u2013 7.3GHz) and weather satellite applications (7.75GHz to 7.90GHz). The antenna has been designed in CST Microwave Studio 2014. The proposed antenna has been fabricated and tested using E5071C Network Analyser and anechoic chamber. It has been observed that the stimulated results closely match with the experimental results."}
{"_id": "bb0c3045d839f5f74b9252581cf45faa8b3b5f7e", "title": "Multi-Type Itemset Embedding for Learning Behavior Success", "text": "Contextual behavior modeling uses data from multiple contexts to discover patterns for predictive analysis. However, existing behavior prediction models often face difficulties when scaling for massive datasets. In this work, we formulate a behavior as a set of context items of different types (such as decision makers, operators, goals and resources), consider an observable itemset as a behavior success, and propose a novel scalable method, \"multi-type itemset embedding\", to learn the context items' representations preserving the success structures. Unlike most of existing embedding methods that learn pair-wise proximity from connection between a behavior and one of its items, our method learns item embeddings collectively from interaction among all multi-type items of a behavior, based on which we develop a novel framework, LearnSuc, for (1) predicting the success rate of any set of items and (2) finding complementary items which maximize the probability of success when incorporated into an itemset. Extensive experiments demonstrate both effectiveness and efficency of the proposed framework."}
{"_id": "3dad5ad11fe72bf708193db543d4bb1d1a6c8262", "title": "Context augmented Dynamic Bayesian Networks for event recognition", "text": "This paper proposes a new Probabilistic Graphical Model (PGM) to incorporate the scene, event object interaction, and the event temporal contexts into Dynamic Bayesian Networks (DBNs) for event recognition in surveillance videos. We first construct the baseline event DBNs for modeling the events from their own appearance and kinematic observations, and then augment the DBN with contexts to improve its event recognition performance. Unlike the existing context methods, our model incorporates various contexts simultaneously into one unified model. Experiments on real scene surveillance datasets with complex backgrounds show that the contexts can effectively improve the event recognition performance even under great challenges like large intra-class variations and low image resolution. The topic of modeling and recognizing events in video surveillance system has attracted growing interest from both academia and industry (Oh et al., 2011). Various graphical, syntactic, and description-based approaches (Turaga et al., 2008) have been introduced for modeling and understanding events. Among those approaches, the time-sliced graphical models, i.e. Hidden Markov Models (HMMs) and Dynamic Bayesian Networks (DBNs), have become popular tools. However, surveillance video event recognition still faces difficulties even with the well-built models for describing the events. The first difficulty arises from the tremendous intra-class variations in events. The same category of events can have huge variations in their observations due to visual appearances differences, target motion variations, viewpoint change and temporal variability. Also, the low resolution of event targets also affects event recognition. To compensate for such challenges, we propose to capture various contextual knowledge and systematically integrate them with image data using a Probabilistic Graphical Model (PGM) (Koller and Friedman, 2009) to improve the performance of event recognition on challenging surveillance videos. Contextual knowledge can be regarded as one type of extra information that does not directly describe the recognition task, but it can support the task. As an additional information that can capture certain temporal, spatial or logical relationships with the recognition target, context plays an important role for various visual recognition tasks. Various contexts that are available/retriev-able both during training and testing are widely used in many approaches. For example, Yao and Fei-Fei (2010) and Yao and Fei-Fei (2012) propose a context model to make human pose estimation task and object detection task as mutual context to help each other. Also, Ding et al. (2012) use both the local features and the context cues in the neighborhood windows to construct a combined feature \u2026"}
{"_id": "2a4ca461fa847e8433bab67e7bfe4620371c1f77", "title": "Learning from Labeled and Unlabeled Data with LabelPropagationXiaojin", "text": "We investigate the use of unlabeled data to help labeled data in cl ssification. We propose a simple iterative algorithm, label pro pagation, to propagate labels through the dataset along high density are as d fined by unlabeled data. We analyze the algorithm, show its solution , and its connection to several other algorithms. We also show how to lear n p ameters by minimum spanning tree heuristic and entropy minimiz ation, and the algorithm\u2019s ability to perform feature selection. Expe riment results are promising."}
{"_id": "3ebe2bd33cdbe2a2e0cd3d01b955f8dc96c9923e", "title": "Mobile App Tagging", "text": "Mobile app tagging aims to assign a list of keywords indicating core functionalities, main contents, key features or concepts of a mobile app. Mobile app tags can be potentially useful for app ecosystem stakeholders or other parties to improve app search, browsing, categorization, and advertising, etc. However, most mainstream app markets, e.g., Google Play, Apple App Store, etc., currently do not explicitly support such tags for apps. To address this problem, we propose a novel auto mobile app tagging framework for annotating a given mobile app automatically, which is based on a search-based annotation paradigm powered by machine learning techniques. Specifically, given a novel query app without tags, our proposed framework (i) first explores online kernel learning techniques to retrieve a set of top-N similar apps that are semantically most similar to the query app from a large app repository; and (ii) then mines the text data of both the query app and the top-N similar apps to discover the most relevant tags for annotating the query app. To evaluate the efficacy of our proposed framework, we conduct an extensive set of experiments on a large real-world dataset crawled from Google Play. The encouraging results demonstrate that our technique is effective and promising."}
{"_id": "1d3c0e77cdcb3d8adbd803503d847bd5f82e9503", "title": "Robust Lane Detection and Tracking for Real-Time Applications", "text": "An effective lane-detection algorithm is a fundamental component of an advanced driver assistant system, as it provides important information that supports driving safety. The challenges faced by the lane detection and tracking algorithm include the lack of clarity of lane markings, poor visibility due to bad weather, illumination and light reflection, shadows, and dense road-based instructions. In this paper, a robust and real-time vision-based lane detection algorithm with an efficient region of interest is proposed to reduce the high noise level and the calculation time. The proposed algorithm also processes a gradient cue and a color cue together and a line clustering with scan-line tests to verify the characteristics of the lane markings. It removes any false lane markings and tracks the real lane markings using the accumulated statistical data. The experiment results show that the proposed algorithm gives accurate results and fulfills the real-time operation requirement on embedded systems with low computing power."}
{"_id": "047f8d710cf229d35d64294bc7d9853f1f4f7fee", "title": "Space-time video completion", "text": "We present a method for space-time completion of large space-time \"holes\" in video sequences of complex dynamic scenes. The missing portions are filled-in by sampling spatio-temporal patches from the available parts of the video, while enforcing global spatio-temporal consistency between all patches in and around the hole. This is obtained by posing the task of video completion and synthesis as a global optimization problem with a well-defined objective function. The consistent completion of static scene parts simultaneously with dynamic behaviors leads to realistic looking video sequences. Space-time video completion is useful for a variety of tasks, including, but not limited to: (i) Sophisticated video removal (of undesired static or dynamic objects) by completing the appropriate static or dynamic background information, (ii) Correction of missing/corrupted video frames in old movies, and (iii) Synthesis of new video frames to add a visual story, modify it, or generate a new one. Some examples of these are shown in the paper."}
{"_id": "ea2ad4358bfd06b288c7cbbec7b0465057711738", "title": "Predictors of physical restraint use in Canadian intensive care units", "text": "INTRODUCTION\nPhysical restraint (PR) use in the intensive care unit (ICU) has been associated with higher rates of self-extubation and prolonged ICU length of stay. Our objectives were to describe patterns and predictors of PR use.\n\n\nMETHODS\nWe conducted a secondary analysis of a prospective observational study of analgosedation, antipsychotic, neuromuscular blocker, and PR practices in 51 Canadian ICUs. Data were collected prospectively for all mechanically ventilated adults admitted during a two-week period. We tested for patient, treatment, and hospital characteristics that were associated with PR use and number of days of use, using logistic and Poisson regression respectively.\n\n\nRESULTS\nPR was used on 374 out of 711 (53%) patients, for a mean number of 4.1 (standard deviation (SD) 4.0) days. Treatment characteristics associated with PR were higher daily benzodiazepine dose (odds ratio (OR) 1.05, 95% confidence interval (CI) 1.00 to 1.11), higher daily opioid dose (OR 1.04, 95% CI 1.01 to 1.06), antipsychotic drugs (OR 3.09, 95% CI 1.74 to 5.48), agitation (Sedation-Agitation Scale (SAS) >4) (OR 3.73, 95% CI 1.50 to 9.29), and sedation administration method (continuous and bolus versus bolus only) (OR 3.09, 95% CI 1.74 to 5.48). Hospital characteristics associated with PR indicated patients were less likely to be restrained in ICUs from university-affiliated hospitals (OR 0.32, 95% CI 0.17 to 0.61). Mainly treatment characteristics were associated with more days of PR, including: higher daily benzodiazepine dose (incidence rate ratio (IRR) 1.07, 95% CI 1.01 to 1.13), daily sedation interruption (IRR 3.44, 95% CI 1.48 to 8.10), antipsychotic drugs (IRR 15.67, 95% CI 6.62 to 37.12), SAS <3 (IRR 2.62, 95% CI 1.08 to 6.35), and any adverse event including accidental device removal (IRR 8.27, 95% CI 2.07 to 33.08). Patient characteristics (age, gender, Acute Physiology and Chronic Health Evaluation II score, admission category, prior substance abuse, prior psychotropic medication, pre-existing psychiatric condition or dementia) were not associated with PR use or number of days used.\n\n\nCONCLUSIONS\nPR was used in half of the patients in these 51 ICUs. Treatment characteristics predominantly predicted PR use, as opposed to patient or hospital/ICU characteristics. Use of sedative, analgesic, and antipsychotic drugs, agitation, heavy sedation, and occurrence of an adverse event predicted PR use or number of days used."}
{"_id": "a4eac8a7218fcafd89dc904d6e9ec14d8d4a470b", "title": "Reference scope identification for citances by classification with text similarity measures", "text": "This paper targets at the first step towards generating citation summaries - to identify the reference scope (i.e., cited text spans) for citances. We present a novel classification-based method that converts the task into binary classification which distinguishes cited and non-cited pairs of citances and reference sentences. The method models pairs of citances and reference sentences as feature vectors where citation-dependent and citation-independent features based on the semantic similarity between texts and the significance of texts are explored. Such vector representations are utilized to train a binary classifier. For a citance, once the set of reference sentences classified as the cited sentences are collected, a heuristic-based filtering strategy is applied to refine the output. The method is evaluated using the CL-SciSumm 2016 datasets and found to perform well with competitive results."}
{"_id": "0da75bded3ae15e255f5bd376960cfeffa173b4e", "title": "The Role of Context for Object Detection and Semantic Segmentation in the Wild", "text": "In this paper we study the role of context in existing state-of-the-art detection and segmentation approaches. Towards this goal, we label every pixel of PASCAL VOC 2010 detection challenge with a semantic category. We believe this data will provide plenty of challenges to the community, as it contains 520 additional classes for semantic segmentation and object detection. Our analysis shows that nearest neighbor based approaches perform poorly on semantic segmentation of contextual classes, showing the variability of PASCAL imagery. Furthermore, improvements of existing contextual models for detection is rather modest. In order to push forward the performance in this difficult scenario, we propose a novel deformable part-based model, which exploits both local context around each candidate detection as well as global context at the level of the scene. We show that this contextual reasoning significantly helps in detecting objects at all scales."}
{"_id": "5e0f8c355a37a5a89351c02f174e7a5ddcb98683", "title": "Microsoft COCO: Common Objects in Context", "text": "We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model."}
{"_id": "a0e03c5b647438299c79c71458e6b1776082a37b", "title": "Areas of Attention for Image Captioning", "text": "We propose \u201cAreas of Attention\u201d, a novel attentionbased model for automatic image captioning. Our approach models the dependencies between image regions, caption words, and the state of an RNN language model, using three pairwise interactions. In contrast to previous attentionbased approaches that associate image regions only to the RNN state, our method allows a direct association between caption words and image regions. During training these associations are inferred from image-level captions, akin to weakly-supervised object detector training. These associations help to improve captioning by localizing the corresponding regions during testing. We also propose and compare different ways of generating attention areas: CNN activation grids, object proposals, and spatial transformers nets applied in a convolutional fashion. Spatial transformers give the best results. They allow for image specific attention areas, and can be trained jointly with the rest of the network. Our attention mechanism and spatial transformer attention areas together yield state-of-the-art results on the MSCOCO dataset."}
{"_id": "a2c2999b134ba376c5ba3b610900a8d07722ccb3", "title": "Bleu: a Method for Automatic Evaluation of Machine Translation", "text": ""}
{"_id": "ab116cf4e1d5ed947f4d762518738305e3a0ab74", "title": "Deep visual-semantic alignments for generating image descriptions", "text": ""}
{"_id": "ebdfa3dc3bff0d6721e9996af66726d71159c36b", "title": "Multi objective outbound logistics network design for a manufacturing supply chain", "text": "Outbound logistics network (OLN) in the downstream supply chain of a firm plays a dominant role in the success or failure of that firm. This paper proposes the design of a hybrid and flexible OLN in multi objective context. The proposed distribution network for a manufacturing supply chain consists of a set of customer zones (CZs) at known locations with known demands being served by a set of potential manufacturing plants, a set of potential central distribution centers (CDCs), and a set of potential regional distribution centers (RDCs). Three variants of a single product classified based on nature of demand are supplied to CZs through three different distribution channels. The decision variables include number of plants, CDCs, RDCs, and quantities of each variant of product delivered to CZs through a designated distribution channel. The goal is to design the network with multiple objectives so as to minimize the total cost, maximize the unit fill rates, and maximize the resource utilization of the facilities in the network. The problem is formulated as a mixed integer linear programming problem and a multiobjective genetic algorithm (MOGA) called non-dominated sorting genetic algorithm\u2014II (NSGA-II) is employed to solve the resulting NP-hard combinatorial optimization problem. Computational experiments conducted on randomly generated data sets are presented and analyzed showing the effectiveness of the solution algorithm for the proposed network. N. C. Hiremath \u00b7 S. Sahu \u00b7 M. K. Tiwari (B) Department of Industrial Engineering and Management, Indian Institute of Technology Kharagpur, Kharagpur, 721302 West Bengal, India e-mail: mkt09@hotmail.com N. C. Hiremath e-mail: nchiremath@gmail.com S. Sahu e-mail: sahus@mech.iitkgp.ernet.in"}
{"_id": "1109d70aad64298fe824d6ffc34aac7f5bd76303", "title": "Formal Models for Computer Security", "text": "Efforts to build \"secure\" computer systems have now been underway for more than a decade. Many designs have been proposed, some prototypes have been constructed, and a few systems are approaching the production stage. A small number of systems are even operating in what the Department of Defense calls the \"multilevel\" mode some information contained m these computer systems may have a clasmfication higher than the clearance of some of the users of those systems. This paper revmws the need for formal security models, describes the structure and operation of military security controls, considers how automation has affected security problems, surveys models that have been proposed and applied to date, and suggests possible d~rectlons for future models"}
{"_id": "0dc08ceff7472e6b23a6074430f0dcbd6cad1025", "title": "The Cerebellum, Sensitive Periods, and Autism", "text": "Cerebellar research has focused principally on adult motor function. However, the cerebellum also maintains abundant connections with nonmotor brain regions throughout postnatal life. Here we review evidence that the cerebellum may guide the maturation of remote nonmotor neural circuitry and influence cognitive development, with a focus on its relationship with autism. Specific cerebellar zones influence neocortical substrates for social interaction, and we propose that sensitive-period disruption of such internal brain communication can account for autism's key features."}
{"_id": "75ac5ebf54a71b4d82651287c6ac28f33c12e5e0", "title": "Configuration Tool for ARINC 653 Operating Systems Eu", "text": "ARINC 653 Specification defines a standardized interface of real-time operating systems and an Application Executive (APEX) to develop the reliable applications for avionics based on Integrated Modular Avionics (IMA). The requirements of system platform based on ARINC 653 Standard are defined as configuration data and are integrated to the XML configuration file(s) in the real-time operating system. Unfortunately, existing configuration tools for integrating requirements do not provide checking the syntax errors of XML and verifying the integrity of input data for partitioning. This paper presents a configuration tool for ARINC 653 OS that consist of Wizard module which generates the basic configuration data for IMA based on XML Scheme of ARINC 653 Standard, XML and Partition Editor for the partitioning system of IMA, and Verification module which checks the integrity of input data and XML syntax with its visualization."}
{"_id": "21eda52dd31188de4b325a8b7fb75b52076c4724", "title": "Energy and Performance Characterization of Mobile Heterogeneous Computing", "text": "A modern mobile application processor is a heterogeneous multi-core SoC which integrates CPU and application-specific accelerators such as GPU and DSP. It provides opportunity to accelerate other compute-intensive applications, yet mapping an algorithm to such a heterogeneous platform is not a straightforward task and has many design decisions to make. In this paper, we evaluate the performance and energy benefits of utilizing the integrated GPU and DSP cores to offload or share CPU's compute-intensive tasks. The evaluation is conducted on three representative mobile platforms, TI's OMAP3530, Qualcomn's Snapdragon S2, and Nvidia's Tegra2, using common computation tasks in mobile applications. We identify key factors that should be considered in energy-optimized mobile heterogeneous computing. Our evaluation results show that, by effectively utilizing all the computing cores concurrently, an average of 3.7X performance improvement can be achieved with the cost of 33% more power consumption, in comparison with the case of utilizing CPU only. This stands for 2.8X energy saving."}
{"_id": "64f51fe4f6b078142166395ed209d423454007fb", "title": "Scene Text Synthesis for Efficient and Effective Deep Network Training", "text": "A large amount of annotated training images is critical for training accurate and robust deep network models but the collection of a large amount of annotated training images is often time-consuming and costly. Image synthesis alleviates this constraint by generating annotated training images automatically by machines which has attracted increasing interest in the recent deep learning research. We develop an innovative image synthesis technique that composes annotated training images by realistically embedding foreground objects of interest (OOI) into background images. The proposed technique consists of two key components that in principle boost the usefulness of the synthesized images in deep network training. The first is context-aware semantic coherence which ensures that the OOI are placed around semantically coherent regions within the background image. The second is harmonious appearance adaptation which ensures that the embedded OOI are agreeable to the surrounding background from both geometry alignment and appearance realism. The proposed technique has been evaluated over two related but very different computer vision challenges, namely, scene text detection and scene text recognition. Experiments over a number of public datasets demonstrate the effectiveness of our proposed image synthesis technique the use of our synthesized images in deep network training is capable of achieving similar or even better scene text detection and scene text recognition performance as compared with using real images."}
{"_id": "ceb4040acf7f27b4ca55da61651a14e3a1ef26a8", "title": "Angry Crowds: Detecting Violent Events in Videos", "text": ""}
{"_id": "3c64af0a3046ea3607d284e164cb162e9cd52441", "title": "A Novel CPW Fed Multiband Circular Microstrip Patch Antenna for Wireless Applications", "text": "In this paper, a novel design of coplanar micro strip patch antenna is presented for ultra wideband (UWB) and smart grid applications. It is designed on a dielectric substrate and fed by a coplanar wave guide (CPW). Micro strip patch antenna consists of a circular patch with an elliptical shaped eye opening in the center to provide multiband operations. The antenna has wide bandwidth of 5 GHz between 10.8-15.8 GHz and 3 GHz between 5.8-8.8 GHz, high return loss of -34 dB at 3.9 GHz and -29 dB at 6.8 GHz with satisfactory radiation properties. The parameters that affect the performance of the antenna in terms of its frequency domain characteristics are investigated. The antenna design has been simulated on Ansoft's High Frequency Structure Simulator (HFSS). It is a compact design of 47 \u00d7 47 mm2 area on Rogers RO3003 (tm) substrate with dielectric constant of 3 and thickness of 1.6 mm. The simulated antenna has three operating frequency bands at 2.82, 5.82-8.8, 10.8-15.8 GHz and shows the band-notch characteristic in the UWB band to avoid interferences, which is caused by WLAN at 5.15-5.825 GHz and Wi-MAX at 5.25-5.85 GHz. The designed antenna structure with 5.68 dBi gain is planar, simple and compact, hence it can be easily embedded in wireless communication systems and integrated with microwave circuitry for low manufacturing cost."}
{"_id": "226cfb67d2d8eba835f2ec695fe28b78b556a19f", "title": "Majority is not enough: bitcoin mining is vulnerable", "text": "The Bitcoin cryptocurrency records its transactions in a public log called the blockchain. Its security rests critically on the distributed protocol that maintains the blockchain, run by participants called miners. Conventional wisdom asserts that the mining protocol is incentive-compatible and secure against colluding minority groups, that is, it incentivizes miners to follow the protocol as prescribed.\n We show that the Bitcoin mining protocol is not incentive-compatible. We present an attack with which colluding miners' revenue is larger than their fair share. The attack can have significant consequences for Bitcoin: Rational miners will prefer to join the attackers, and the colluding group will increase in size until it becomes a majority. At this point, the Bitcoin system ceases to be a decentralized currency.\n Unless certain assumptions are made, selfish mining may be feasible for any coalition size of colluding miners. We propose a practical modification to the Bitcoin protocol that protects Bitcoin in the general case. It prohibits selfish mining by a coalition that command less than 1/4 of the resources. This threshold is lower than the wrongly assumed 1/2 bound, but better than the current reality where a coalition of any size can compromise the system."}
{"_id": "3a03957218eda9094858087538e9668ab0db503b", "title": "Containers and Virtual Machines at Scale: A Comparative Study", "text": "Virtualization is used in data center and cloud environments to decouple applications from the hardware they run on. Hardware virtualization and operating system level virtualization are two prominent technologies that enable this. Containers, which use OS virtualization, have recently surged in interest and deployment. In this paper, we study the differences between the two virtualization technologies. We compare containers and virtual machines in large data center environments along the dimensions of performance, manageability and software development.\n We evaluate the performance differences caused by the different virtualization technologies in data center environments where multiple applications are running on the same servers (multi-tenancy). Our results show that co-located applications can cause performance interference, and the degree of interference is higher in the case of containers for certain types of workloads. We also evaluate differences in the management frameworks which control deployment and orchestration of containers and VMs. We show how the different capabilities exposed by the two virtualization technologies can affect the management and development of applications. Lastly, we evaluate novel approaches which combine hardware and OS virtualization."}
{"_id": "eb7754ea140e73efa14c5ce85cfa1c7a04a18a76", "title": "Path Planning and Steering Control for an Automatic Perpendicular Parking Assist System", "text": "This paper considers the perpendicular reverse parking problem of front wheel steering vehicles. Relationships between the widths of the parking aisle and the parking place, as well as the parameters and initial position of the vehicle for planning a collision-free reverse perpendicular parking in one maneuver are first presented. Two types of steering controllers (bang-bang and saturated tanh-type controllers) for straightline tracking are proposed and evaluated. It is demonstrated that the saturated controller, which is continuous, achieves also quick steering avoiding chattering and can be successfully used in solving parking problems. Simulation results and first experimental tests confirm the effectiveness of the proposed control scheme."}
{"_id": "7bcf85fa463922bedacda5a47338acc59466b0e5", "title": "A Low-Power Wideband Transmitter Front-End Chip for 80 GHz FMCW Radar Systems With Integrated 23 GHz Downconverter VCO", "text": "A low-power FMCW 80 GHz radar transmitter front-end chip is presented, which was fabricated in a SiGe bipolar production technology ( fT=180 GHz, fmax=250 GHz ). Additionally to the fundamental 80 GHz VCO, a 4:1-frequency divider (up to 100 GHz), a 23 GHz local oscillator (VCO) with a low phase noise of -112 dBc/Hz (1 MHz offset), a PLL-mixer and a static frequency divider is integrated together with several output buffers. This chip was designed for low power consumption (in total <; 0.5 W, i.e., 100 mA at 5 V supply voltage), which is dominated by the 80 GHz VCO due to the demands for high output power (\u2248 12 dBm) and low phase noise (minimum -97 dBc/Hz at 1 MHz offset) within the total wide tuning range from 68 GHz to 92.5 GHz (\u0394f = 24.5 GHz). Measurements of the double-PLL system at 80 GHz showed a low phase noise of -88 dBc/Hz at 10 kHz offset frequency."}
{"_id": "2a4153655ad1169d482e22c468d67f3bc2c49f12", "title": "Face Alignment Across Large Poses: A 3D Solution", "text": "Face alignment, which fits a face model to an image and extracts the semantic meanings of facial pixels, has been an important topic in CV community. However, most algorithms are designed for faces in small to medium poses (below 45), lacking the ability to align faces in large poses up to 90. The challenges are three-fold: Firstly, the commonly used landmark-based face model assumes that all the landmarks are visible and is therefore not suitable for profile views. Secondly, the face appearance varies more dramatically across large poses, ranging from frontal view to profile view. Thirdly, labelling landmarks in large poses is extremely challenging since the invisible landmarks have to be guessed. In this paper, we propose a solution to the three problems in an new alignment framework, called 3D Dense Face Alignment (3DDFA), in which a dense 3D face model is fitted to the image via convolutional neutral network (CNN). We also propose a method to synthesize large-scale training samples in profile views to solve the third problem of data labelling. Experiments on the challenging AFLW database show that our approach achieves significant improvements over state-of-the-art methods."}
{"_id": "a7d70bdfd81c27203eab5fc331602494c0ec64f5", "title": "Recommending Self-Regulated Learning Strategies Does Not Improve Performance in a MOOC", "text": "Many committed learners struggle to achieve their goal of completing a Massive Open Online Course (MOOC). This work investigates self-regulated learning (SRL) in MOOCs and tests if encouraging the use of SRL strategies can improve course performance. We asked a group of 17 highly successful learners about their own strategies for how to succeed in a MOOC. Their responses were coded based on a SRL framework and synthesized into seven recommendations. In a randomized experiment, we evaluated the effect of providing those recommendations to learners in the same course (N = 653). Although most learners rated the study tips as very helpful, the intervention did not improve course persistence or achievement. Results suggest that a single SRL prompt at the beginning of the course provides insufficient support. Instead, embedding technological aids that adaptively support SRL throughout the course could better support learners in MOOCs."}
{"_id": "8075177606c8c8e711decb255e7e59c8d19c20f0", "title": "ANMPI-BASED PARALLEL AND DISTRIBUTED MACHINE LEARNING PLATFORM ON LARGE-SCALE HPC CLUSTERS", "text": "This paper presents the design of an MPI-based parallel and distributed machine learning platform on large-scale HPC clusters. Researchers and practitioners can implement easily a class of parallelizable machine learning algorithms on the platform, or port quickly an existing non-parallel implementation of a parallelizable algorithm to the platform with only minor modifications. Complicated functions in parallel programming such as scheduling, caching and load balancing are handled automatically by the platform. The platform performance was evaluated in a series of stress tests by using a k-means clustering task on 7,500 hours of speech data (about 2.7 billion 52dimensional feature vectors). Good scalability is demonstrated on an HPC cluster with thousands of CPU cores."}
{"_id": "57040a4256fc0d1c848a0b6d9333a4d93b1c7881", "title": "Stereo Acoustic Echo Cancellation Employing Frequency-Domain Preprocessing and Adaptive Filter", "text": "This paper proposes a windowing frequency domain adaptive filter and an upsampling block transform preprocessing to solve the stereo acoustic echo cancellation problem. The proposed adaptive filter uses windowing functions with smooth cutoff property to reduce the spectral leakage during filter updating, so the utilization of the independent noise introduced by preprocessing in stereo acoustic echo cancellation can be increased. The proposed preprocessing is operated in short blocks with low processing delay, and it uses frequency-domain upsampling to meet the minimal block length requirement given by the band limit of simultaneous masking. Therefore, the simultaneous masking can be well utilized to improve the audio quality. The acoustic echo cancellation simulations and the audio quality evaluation show that, the proposed windowing frequency domain adaptive filter performs better than the conventional frequency domain adaptive filter in both mono and stereo cases, and the upsampling block transform preprocessing provides better audio quality and stereo acoustic echo cancellation performance than the half-wave preprocessing at the same noise level."}
{"_id": "642d1ea778f078b20b944da89e74bfd1c5a63b44", "title": "Spatial but not verbal cognitive deficits at age 3 years in persistently antisocial individuals.", "text": "Previous studies have repeatedly shown verbal intelligence deficits in adolescent antisocial individuals, but it is not known whether these deficits are in place prior to kindergarten or, alternatively, whether they are acquired throughout childhood. This study assesses whether cognitive deficits occur as early as age 3 years and whether they are specific to persistently antisocial individuals. Verbal and spatial abilities were assessed at ages 3 and 11 years in 330 male and female children, while antisocial behavior was assessed at ages 8 and 17 years. Persistently antisocial individuals (N = 47) had spatial deficits in the absence of verbal deficits at age 3 years compared to comparisons (N = 133), and also spatial and verbal deficits at age 11 years. Age 3 spatial deficits were independent of social adversity, early hyperactivity, poor test motivation, poor test comprehension, and social discomfort during testing, and they were found in females as well as males. Findings suggest that early spatial deficits contribute to persistent antisocial behavior whereas verbal deficits are developmentally acquired. An early-starter model is proposed whereby early spatial impairments interfere with early bonding and attachment, reflect disrupted right hemisphere affect regulation and expression, and predispose to later persistent antisocial behavior."}
{"_id": "3a8174f08abdfb86615ba3385e4a849c4b2db672", "title": "Facile one-pot solvothermal method to synthesize sheet-on-sheet reduced graphene oxide (RGO)/ZnIn2S4 nanocomposites with superior photocatalytic performance.", "text": "Highly reductive RGO (reduced graphene oxide)/ZnIn2S4 nanocomposites with a sheet-on-sheet morphology have been prepared via a facile one-pot solvothermal method in a mixture of N,N-dimethylformamide (DMF) and ethylene glycol (EG) as solvent. A reduction of GO (graphene oxide) to RGO and the formation of ZnIn2S4 nanosheets on highly reductive RGO has been simultaneously achieved. The effect of the solvents on the morphology of final products has been investigated and the formation mechanism was proposed. The as-prepared RGO/ZnIn2S4 nanoscomposites were characterized by powder X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR), Raman spectroscopy, X-ray photoelectron spectroscopy (XPS), N2-adsorption BET surface area, UV-vis diffuse reflectance spectroscopy (DRS), scanning electron microscopy (SEM), transmission electron microscopy (TEM), high-resolution transmission electron microscopy (HRTEM). The photocatalytic activity for hydrogen evolution under visible light irradiations over the as-prepared RGO/ZnIn2S4 nanocomposites has been investigated. The as-prepared RGO/ZnIn2S4 nanocomposites show enhanced photocatalytic activity for hydrogen evolution under visible light irradiations and an optimum photocatalytic activity is observed over 1.0 wt % RGO incorporated ZnIn2S4 nanocomposite. The superior photocatalytic performance observed over RGO/ZnIn2S4 nanocomposites can be ascribed to the existence of highly reductive RGO which has strong interactions with ZnIn2S4 nanosheets. The existence of the strong interaction between ZnIn2S4 nanosheets and RGO in the nancomposites facilitates the electron transfer from ZnIn2S4 to RGO, with the latter serving as a good electron acceptor, mediator as well as the co-catalyst for hydrogen evolution. This study can provide some guidance for us in the developing of RGO-incorporated nanocomposite photocatalysts."}
{"_id": "93d61a07475c234809992265f36bffe3a749c83b", "title": "Improving optical character recognition through efficient multiple system alignment", "text": "Individual optical character recognition (OCR) engines vary in the types of errors they commit in recognizing text, particularly poor quality text. By aligning the output of multiple OCR engines and taking advantage of the differences between them, the error rate based on the aligned lattice of recognized words is significantly lower than the individual OCR word error rates. This lattice error rate constitutes a lower bound among aligned alternatives from the OCR output. Results from a collection of poor quality mid-twentieth century typewritten documents demonstrate an average reduction of 55.0% in the error rate of the lattice of alternatives and a realized word error rate (WER) reduction of 35.8% in a dictionary-based selection process. As an important precursor, an innovative admissible heuristic for the A* algorithm is developed, which results in a significant reduction in state space exploration to identify all optimal alignments of the OCR text output, a necessary step toward the construction of the word hypothesis lattice. On average 0.0079% of the state space is explored to identify all optimal alignments of the documents."}
{"_id": "6ca6ac9b3fec31be02ac296e6acb653107881a95", "title": "RASR - The RWTH Aachen University Open Source Speech Recognition Toolkit", "text": "RASR is the open source version of the well-proven speech recognition toolkit developed and used at RWTH Aachen University. The current version of the package includes state of the art speech recognition technology for acoustic model training and decoding. Speaker adaptation, speaker adaptive training, unsupervised training, discriminative training, lattice processing tools, flexible signal analysis, a finite state automata library, and an efficient dynamic network decoder are notable components. Comprehensive documentation, example setups for training and recognition, and tutorials are provided to support newcomers."}
{"_id": "457dbb5ec48936face9df372b34d7733957eb37d", "title": "Properties and Therapeutic Application of Bromelain: A Review", "text": "Bromelain belongs to a group of protein digesting enzymes obtained commercially from the fruit or stem of pineapple. Fruit bromelain and stem bromelainare prepared differently and they contain different enzymatic composition. \"Bromelain\" refers usually to the \"stem bromelain.\" Bromelain is a mixture of different thiol endopeptidases and other components like phosphatase, glucosidase, peroxidase, cellulase, escharase, and several protease inhibitors. In vitro and in vivo studies demonstrate that bromelain exhibits various fibrinolytic, antiedematous, antithrombotic, and anti-inflammatory activities. Bromelain is considerably absorbable in the body without losing its proteolytic activity and without producing any major side effects. Bromelain accounts for many therapeutic benefits like the treatment of angina pectoris, bronchitis, sinusitis, surgical trauma, and thrombophlebitis, debridement of wounds, and enhanced absorption of drugs, particularly antibiotics. It also relieves osteoarthritis, diarrhea, and various cardiovascular disorders. Bromelain also possesses some anticancerous activities and promotes apoptotic cell death. This paper reviews the important properties and therapeutic applications of bromelain, along with the possible mode of action."}
{"_id": "36e2d0ca25cbdfb623e9eb63758705c2d766b256", "title": "Light propagation with phase discontinuities: generalized laws of reflection and refraction.", "text": "Conventional optical components rely on gradual phase shifts accumulated during light propagation to shape light beams. New degrees of freedom are attained by introducing abrupt phase changes over the scale of the wavelength. A two-dimensional array of optical resonators with spatially varying phase response and subwavelength separation can imprint such phase discontinuities on propagating light as it traverses the interface between two media. Anomalous reflection and refraction phenomena are observed in this regime in optically thin arrays of metallic antennas on silicon with a linear phase variation along the interface, which are in excellent agreement with generalized laws derived from Fermat's principle. Phase discontinuities provide great flexibility in the design of light beams, as illustrated by the generation of optical vortices through use of planar designer metallic interfaces."}
{"_id": "0a4e617157fa43baeba441909de14b799a6e06db", "title": "Automated Test Data Generation Using an Iterative Relaxation Method", "text": "An important problem that arises in path oriented testing is the generation of test data that causes a program to follow a given path. In this paper, we present a novel program execution based approach using an iterative relaxation method to address the above problem. In this method, test data generation is initiated with an arbitrarily chosen input from a given domain. This input is then iteratively refined to obtain an input on which all the branch predicates on the given path evaluate to the desired outcome. In each iteration the program statements relevant to the evaluation of each branch predicate on the path are executed, and a set of linear constraints is derived. The constraints are then solved to obtain the increments for the input. These increments are added to the current input to obtain the input for the next iteration. The relaxation technique used in deriving the constraints provides feedback on the amount by which each input variable should be adjusted for the branches on the path to evaluate to the desired outcome.When the branch conditions on a path are linear functions of input variables, our technique either finds a solution for such paths in one iteration or it guarantees that the path is infeasible. In contrast, existing execution based approaches may require an unacceptably large number of iterations for relatively long paths because they consider only one input variable and one branch predicate at a time and use backtracking. When the branch conditions on a path are nonlinear functions of input variables, though it may take more then one iteration to derive a desired input, the set of constraints to be solved in each iteration is linear and is solved using Gaussian elimination. This makes our technique practical and suitable for automation."}
{"_id": "2443463b62634a46a064ede7a0aa8002308946b2", "title": "An Examination of Multivariate Time Series Hashing with Applications to Health Care", "text": "As large-scale multivariate time series data become increasingly common in application domains, such as health care and traffic analysis, researchers are challenged to build efficient tools to analyze it and provide useful insights. Similarity search, as a basic operator for many machine learning and data mining algorithms, has been extensively studied before, leading to several efficient solutions. However, similarity search for multivariate time series data is intrinsically challenging because (1) there is no conclusive agreement on what is a good similarity metric for multivariate time series data and (2) calculating similarity scores between two time series is often computationally expensive. In this paper, we address this problem by applying a generalized hashing framework, namely kernelized locality sensitive hashing, to accelerate time series similarity search with a series of representative similarity metrics. Experiment results on three large-scale clinical data sets demonstrate the effectiveness of the proposed approach."}
{"_id": "7ea0ea01f47f49028802aa1126582d2d8a80dd59", "title": "Privacy and Security in Internet of Things and Wearable Devices", "text": "Enter the nascent era of Internet of Things (IoT) and wearable devices, where small embedded devices loaded with sensors collect information from its surroundings, process it, and relay it to remote locations for further analysis. Albeit looking harmless, these nascent technologies raise security and privacy concerns. We pose the question of the possibility and effects of compromising such devices. Concentrating on the design flow of IoT and wearable devices, we discuss some common design practices and their implications on security and privacy. Two representatives from each category, the Google Nest Thermostat and the Nike+ Fuelband, are selected as examples on how current industry practices of security as an afterthought or an add-on affect the resulting device and the potential consequences to the user's security and privacy. We then discuss design flow enhancements, through which security mechanisms can efficiently be added into a device, vastly differing from traditional practices."}
{"_id": "1883204b8dbb419fcfa8adf86d61baf62283c667", "title": "Long-range communications in unlicensed bands: the rising stars in the IoT and smart city scenarios", "text": "Connectivity is probably the most basic building block of the IoT paradigm. Up to now, the two main approaches to provide data access to things have been based on either multihop mesh networks using short-range communication technologies in the unlicensed spectrum, or long-range legacy cellular technologies, mainly 2G/GSM/GPRS, operating in the corresponding licensed frequency bands. Recently, these reference models have been challenged by a new type of wireless connectivity, characterized by low-rate, long-range transmission technologies in the unlicensed sub-gigahertz frequency bands, used to realize access networks with star topology referred to as low-power WANs (LPWANs). In this article, we introduce this new approach to provide connectivity in the IoT scenario, discussing its advantages over the established paradigms in terms of efficiency, effectiveness, and architectural design, particularly for typical smart city applications."}
{"_id": "2b00e526490d65f2ec00107fb7bcce0ace5960c7", "title": "The Internet of Things: A survey", "text": "This paper addresses the Internet of Things. Main enabling factor of this promising paradigm is the integration of several technologies and communications solutions. Identification and tracking technologies, wired and wireless sensor and actuator networks, enhanced communication protocols (shared with the Next Generation Internet), and distributed intelligence for smart objects are just the most relevant. As one can easily imagine, any serious contribution to the advance of the Internet of Things must necessarily be the result of synergetic activities conducted in different fields of knowledge, such as telecommunications, informatics, electronics and social science. In such a complex scenario, this survey is directed to those who want to approach this complex discipline and contribute to its development. Different visions of this Internet of Things paradigm are reported and enabling technologies reviewed. What emerges is that still major issues shall be faced by the research community. The most relevant among them are addressed in details. 2010 Elsevier B.V. All rights reserved."}
{"_id": "49565dd40c89680fdf9d6958f721eabcdfb89c22", "title": "Stealthy Dopant-Level Hardware Trojans", "text": "In recent years, hardware Trojans have drawn the attention of governments and industry as well as the scientific community. One of the main concerns is that integrated circuits, e.g., for military or criticalinfrastructure applications, could be maliciously manipulated during the manufacturing process, which often takes place abroad. However, since there have been no reported hardware Trojans in practice yet, little is known about how such a Trojan would look like, and how difficult it would be in practice to implement one. In this paper we propose an extremely stealthy approach for implementing hardware Trojans below the gate level, and we evaluate their impact on the security of the target device. Instead of adding additional circuitry to the target design, we insert our hardware Trojans by changing the dopant polarity of existing transistors. Since the modified circuit appears legitimate on all wiring layers (including all metal and polysilicon), our family of Trojans is resistant to most detection techniques, including fine-grain optical inspection and checking against \u201cgolden chips\u201d. We demonstrate the effectiveness of our approach by inserting Trojans into two designs \u2014 a digital post-processing derived from Intel\u2019s cryptographically secure RNG design used in the Ivy Bridge processors and a side-channel resistant SBox implementation \u2014 and by exploring their detectability and their effects on security."}
{"_id": "8619cf157a41aae1a2080f1689fcaedb0e47eb4d", "title": "ERP in action - Challenges and benefits for management control in SME context", "text": "Article history: Received 31 January 2011 Received in revised form 7 February 2012 Accepted 2 March 2012 ERP systems have fundamentally re-shaped the way business data is collected, stored, disseminated and used throughout the world. However, the existing research in accounting has provided only relatively few empirical findings on the implications for management control when companies implement ERP systems as the technological platform. Especially scarce are the findings concerning the production phase, after implementation, when the information processes, related work practices and the new information contents can be seen as established. In this paper we explored and theorized the benefits, challenges and problems for management control when an ERP system is in use, four years after the implementation. Our findings also illustrate why and under what circumstances these challenges and benefits may exist. For a holistic view of the organization our findings, based on a qualitative case study, are constructed from the viewpoints of people at different levels and functions of the organization. Top management expected a new strategic control system, but due to the many challenges it ended up with merely financial accounting based control. At the operational level, serious challenges lead to inadequate usage of the ERP system. Management control produces the financial basic data and must contend with many practical problems caused by ERP implementation. \u00a9 2012 Elsevier Inc. All rights reserved."}
{"_id": "4f4a9ec008ee88f1eabc7d490b2475bf97913fbb", "title": "Process discovery from event data: Relating models and logs through abstractions", "text": "Event data are collected in logistics, manufacturing, finance, healthcare, customer relationship management, e-learning, e-government, and many other domains. The events found in these domains typically refer to activities executed by resources at particular times and for a particular case (i.e., process instances). Process mining techniques are able to exploit such data. In this article, we focus on process discovery. However, process mining also includes conformance checking, performance analysis, decision mining, organizational mining, predictions, recommendations, etc. These techniques help to diagnose problems and improve processes. All process mining techniques involve both event data and process models. Therefore, a typical first step is to automatically learn a control-flow model from the event data. This is very challenging, but in recent years many powerful discovery techniques have been developed. It is not easy to compare these techniques since they use different representations and make different assumptions. Users often need to resort to trying different algorithms in an ad-hoc manner. Developers of new techniques are often trying to solve specific instances of a more general problem. Therefore, we aim to unify existing approaches by focusing on log and model abstractions. These abstractions link observed and modeled behavior: Concrete behaviors recorded in event logs are related to possible behaviors represented by process models. Hence, such behavioral abstractions provide an \u201cinterface\u201d between both. We discuss four discovery approaches involving three abstractions and different types of process models (Petri nets, block-structured models, and declarative models). The goal is to provide a comprehensive understanding of process discovery and show how to develop new techniques. Examples illustrate the different approaches and pointers to software are given. The discussion on abstractions and process representations is also used to reflect on the gap between process mining literature and commercial process mining tools. This facilitates users to select an appropriate process discovery technique. Moreover, structuring the role of internal abstractions and representations helps to broaden the view and facilitates the creation of new discovery approaches. \u2217Process and Data Science (PADS), RWTH Aachen University, Aachen, Germany"}
{"_id": "b02771eb27df3f69d721cfd64e13e338acbc3336", "title": "End-to-End Conversion of HTML Tables for Populating a Relational Database", "text": "Automating the conversion of human-readable HTML tables into machine-readable relational tables will enable end-user query processing of the millions of data tables found on the web. Theoretically sound and experimentally successful methods for index-based segmentation, extraction of category hierarchies, and construction of a canonical table suitable for direct input to a relational database are demonstrated on 200 heterogeneous web tables. The methods are scalable: the program generates the 198 Access compatible CSV files in ~0.1s per table (two tables could not be indexed)."}
{"_id": "baff7613deb1c84d2570bee2212ebb2391261727", "title": "Reinforcement Learning with Perturbed Rewards", "text": "Recent studies have shown the vulnerability of reinforcement learning (RL) models in noisy settings. The sources of noises differ across scenarios. For instance, in practice, the observed reward channel is often subject to noise (e.g., when observed rewards are collected through sensors), and thus observed rewards may not be credible as a result. Also, in applications such as robotics, a deep reinforcement learning (DRL) algorithm can be manipulated to produce arbitrary errors. In this paper, we consider noisy RL problems where observed rewards by RL agents are generated with a reward confusion matrix. We call such observed rewards as perturbed rewards. We develop an unbiased reward estimator aided robust RL framework that enables RL agents to learn in noisy environments while observing only perturbed rewards. Our framework draws upon approaches for supervised learning with noisy data. The core ideas of our solution include estimating a reward confusion matrix and defining a set of unbiased surrogate rewards. We prove the convergence and sample complexity of our approach. Extensive experiments on different DRL platforms show that policies based on our estimated surrogate reward can achieve higher expected rewards, and converge faster than existing baselines. For instance, the state-of-the-art PPO algorithm is able to obtain 67.5% and 46.7% improvements in average on five Atari games, when the error rates are 10% and 30% respectively."}
{"_id": "79e18d73de9ff62ef19277655d330e514e849462", "title": "Current Trends in Educational Technology Research", "text": "Educational technology research has moved through several stages or \u201cages\u201d, focusing at the beginning on the content to be learned, then on the format of instructional messages and, finally on the interaction between computers and students. The present paper reviews the research in technology-based learning environments in order to give both a historical perspective on educational technology research and a view of the current state of this discipline. We conclude that: 1) trends in educational technology research were forged by the evolution of learning theories and the technological changes, 2) a clear shift from the design of instruction to the design of learning environments can be noticed; 3) there is a positive effect of educational technology on learning, but the size of the effect varies considerably; 4) learning is much more dependent on the activity of the learner than on the quantity of information and processing opportunities provided by the environment."}
{"_id": "839a69a55d862563fe75528ec5d763fb01c09c61", "title": "A COMPRESSED SENSING VIEW OF UNSUPERVISED TEXT EMBEDDINGS , BAG-OF-n-GRAMS , AND LSTM S", "text": "Low-dimensional vector embeddings, computed using LSTMs or simpler techniques, are a popular approach for capturing the \u201cmeaning\u201d of text and a form of unsupervised learning useful for downstream tasks. However, their power is not theoretically understood. The current paper derives formal understanding by looking at the subcase of linear embedding schemes. Using the theory of compressed sensing we show that representations combining the constituent word vectors are essentially information-preserving linear measurements of Bag-of-n-Grams (BonG) representations of text. This leads to a new theoretical result about LSTMs: low-dimensional embeddings derived from a low-memory LSTM are provably at least as powerful on classification tasks, up to small error, as a linear classifier over BonG vectors, a result that extensive empirical work has thus far been unable to show. Our experiments support these theoretical findings and establish strong, simple, and unsupervised baselines on standard benchmarks that in some cases are state of the art among word-level methods. We also show a surprising new property of embeddings such as GloVe and word2vec: they form a good sensing matrix for text that is more efficient than random matrices, the standard sparse recovery tool, which may explain why they lead to better representations in practice."}
{"_id": "8b0feafa8faddf7f4ca1aaf41e0cd19512385914", "title": "Employee turnover prediction and retention policies design: a case study", "text": "This paper illustrates the similarities between the problems of customer churn and employee turnover. An example of employee turnover prediction model leveraging classical machine learning techniques is developed. Model outputs are then discussed to design & test employee retention policies. This type of retention discussion is, to our knowledge, innovative and constitutes the main value of this paper. 2010 Mathematics Subject Classification."}
{"_id": "f8f91f477ffdb979f54da0f762d107bef6b9bc86", "title": "Eastern meditative techniques and hypnosis: a new synthesis.", "text": "In this article major ancient Buddhist meditation techniques, samatha, vipassana, Zen, and ton-len, will be described in reference to contemporary clinical hypnosis. In so doing, the Eastern healing framework out of which these techniques emerged is examined in comparison with and in contrast to its Western counterpart. A growing body of empirical literature shows that meditation and hypnosis have many resemblances despite the distinct differences in underlying philosophy and technical methodologies. Although not all meditation techniques \"fit\" the Western culture, each has much to offer to clinicians who are familiar with hypnosis."}
{"_id": "403c0f91fba1399e9b7a15c5fbea60ce5f28eabb", "title": "Architecture for an Intelligent Tutoring System that Considers Learning Styles", "text": "In this paper we propose the architecture of an Intelligent Tutoring System that considers the student's learning style and the competency-based education. We also describe the processes that have been implemented so far. Our architecture presents innovations in the representation of the tutor module and in the knowledge module; the tutor module incorporates a selector agent, which will choose the content to show, considering the teaching strategies that support the student's learning style."}
{"_id": "4526ae7cd03dfc5d624e0285e3134d38b8640cdd", "title": "Real-time tracking on adaptive critic design with uniformly ultimately bounded condition", "text": "In this paper, we proposed a new nonlinear tracking controller based on heuristic dynamic programming (HDP) with the tracking filter. Specifically, we integrate a goal network into the regular HDP design and provide the critic network with detailed internal reward signal to help the value function approximation. The architecture is explicitly explained with the tracking filter, goal network, critic network and action network, respectively. We provide the stability analysis of our proposed controller with Lyapunov approach. It is shown that the filtered tracking errors and the weights estimation errors in neural networks are all uniformly ultimately bounded (UUB) under certain conditions. Finally, we compare our proposed approach with regular HDP approach in virtual reality (VR)/Simulink environment to justify the improved control performance."}
{"_id": "faae3d1d38cd8c899d610bec0a18931b5da78ab3", "title": "EEG-based BCI and video games: a progress report", "text": "This paper presents a systematic review of electroencephalography (EEG)-based brain\u2013computer interfaces (BCIs) used in the video games, a vibrant field of research that touches upon all relevant questions concerning the future directions of BCI. The paper examines the progress of BCI research with regard to games and shows that gaming applications offer numerous advantages by orienting BCI to concerns and expectations of a gaming application. Different BCI paradigms are investigated, and future directions are discussed."}
{"_id": "170fa6c08c7d13e0b128ce108adc8d0c8d247674", "title": "A CMOS hysteresis undervoltage lockout with current source inverter structure", "text": "This paper describes a simple architecture and low power consumption undervoltage lockout (UVLO) circuit with hysteretic threshold. The UVLO circuit monitors the supply voltage and determines whether or not the supply voltage satisfies a predetermined condition. The under voltage lockout circuit is designed based on CSMC 0.5um CMOS technology, utilizing a relatively few amount of circuitry. It is realized with a current source inverter. The threshold voltage is determined by the W/L ratio of current source inverter and resistor in reference generator. The hysteresis is realized by using a feedback circuit to overcome the bad disturbance and noise rejection of the single threshold. Hysteretic threshold range is 40mV. The quiescent current is about 1uA at 3V supply voltage,while the power of circuit consumes only 3uW."}
{"_id": "6949c82155c51eb3990e4e961449a1a631fe37d9", "title": "A High Torque Density Vernier PM Machines for Hybrid Electric Vehicle Applications", "text": "This paper proposes a novel vernier permanent magnet (VPM) machine for hybrid electric vehicle (HEV) applications. The main features of the proposed machine are concentrated armature windings with short and non- overlapping end-winding, and the permanent magnets (PMs) are on both sides of rotor and stator. Unlike conventional motors, this machine operates with flux modulation principles, and the electromagnetic torque is produced by the interaction between the armature magneto motive force (MMF) and the exciting fields by stator and rotor PMs. In this paper, the electromagnetic performance is simulated by the finite element analysis (FEA). The results show that the torque density is about 33% higher than an existing interior permanent magnet (IPM) machines."}
{"_id": "50d90e75b14fc1527dcc99c06edb3577924e37b6", "title": "A High Speed Architecture for Galois/Counter Mode of Operation (GCM)", "text": "In this paper we present a fully pipelined high speed hardware architecture for Galois/Counter Mode of Operation (GCM) by analyzing the data dependencies in the GCM algorithm at the architecture level. We show that GCM encryption circuit and GCM authentication circuit have similar critical path delays resulting in an efficient pipeline structure. The proposed GCM architecture yields a throughput of 34 Gbps running at 271 MHz using a 0.18 \u03bcm CMOS standard cell library."}
{"_id": "5dfb49cd8cc0568e2cc8204de7fa4aca30ff0ca7", "title": "Small-world networks and functional connectivity in Alzheimer's disease.", "text": "We investigated whether functional brain networks are abnormally organized in Alzheimer's disease (AD). To this end, graph theoretical analysis was applied to matrices of functional connectivity of beta band-filtered electroencephalography (EEG) channels, in 15 Alzheimer patients and 13 control subjects. Correlations between all pairwise combinations of EEG channels were determined with the synchronization likelihood. The resulting synchronization matrices were converted to graphs by applying a threshold, and cluster coefficients and path lengths were computed as a function of threshold or as a function of degree K. For a wide range of thresholds, the characteristic path length L was significantly longer in the Alzheimer patients, whereas the cluster coefficient C showed no significant changes. This pattern was still present when L and C were computed as a function of K. A longer path length with a relatively preserved cluster coefficient suggests a loss of complexity and a less optimal organization. The present study provides further support for the presence of \"small-world\" features in functional brain networks and demonstrates that AD is characterized by a loss of small-world network characteristics. Graph theoretical analysis may be a useful approach to study the complexity of patterns of interrelations between EEG channels."}
{"_id": "a66fe50037d1a4c3798fb0aadb6e9b7c5c8b6319", "title": "The YouTube Lens: Crowdsourced Personality Impressions and Audiovisual Analysis of Vlogs", "text": "Despite an increasing interest in understanding human perception in social media through the automatic analysis of users' personality, existing attempts have explored user profiles and text blog data only. We approach the study of personality impressions in social media from the novel perspective of crowdsourced impressions, social attention, and audiovisual behavioral analysis on slices of conversational vlogs extracted from YouTube. Conversational vlogs are a unique case study to understand users in social media, as vloggers implicitly or explicitly share information about themselves that words, either written or spoken cannot convey. In addition, research in vlogs may become a fertile ground for the study of video interactions, as conversational video expands to innovative applications. In this work, we first investigate the feasibility of crowdsourcing personality impressions from vlogging as a way to obtain judgements from a variate audience that consumes social media video. Then, we explore how these personality impressions mediate the online video watching experience and relate to measures of attention in YouTube. Finally, we investigate on the use of automatic nonverbal cues as a suitable lens through which impressions are made, and we address the task of automatic prediction of vloggers' personality impressions using nonverbal cues and machine learning techniques. Our study, conducted on a dataset of 442 YouTube vlogs and 2210 annotations collected in Amazon's Mechanical Turk, provides new findings regarding the suitability of collecting personality impressions from crowdsourcing, the types of personality impressions that emerge through vlogging, their association with social attention, and the level of utilization of nonverbal cues in this particular setting. In addition, it constitutes a first attempt to address the task of automatic vlogger personality impression prediction using nonverbal cues, with promising results."}
{"_id": "fd6918d4c9ac32028f5bfb53ba8b9f9caca71370", "title": "Packet Injection Attack and Its Defense in Software-Defined Networks", "text": "Software-defined networks (SDNs) are novel networking architectures that decouple the network control and forwarding functions from the data plane. Unlike traditional networking, the control logic of SDNs is implemented in a logically centralized controller which provides a global network view and open programming interface to the applications. While SDNs have become a hot topic among both academia and industry in recent years, little attention has been paid on the security aspect. In this paper, we introduce a novel attack, namely, packet injection attack, in SDNs. By maliciously injecting manipulated packets into SDNs, attackers can affect the services and networking applications in the control plane, and largely consume the resources in the data plane. The consequences could be the disruption of applications built on the top of the topology manager service and rest API, as well as a huge consumption of network resources, such as the bandwidth of the OpenFlow channel. To defend against the packet injection attack, we present PacketChecker, a lightweight extension module on SDN controllers to effectively detect and mitigate the flooding of falsified packets. We implement a prototype of PacketChecker in floodlight controller and conduct experiments to evaluate the efficiency of the defense mechanism. The evaluation shows that the PacketChecker module can effectively mitigate the attack with a minor overhead to the SDN controller."}
{"_id": "faaa47b2545922a3615c1744e8f4f11c77f1985a", "title": "Detecting and Classifying Crimes from Arabic Twitter Posts using Text Mining Techniques", "text": "Crime analysis has become a critical area for helping law enforcement agencies to protect civilians. As a result of a rapidly increasing population, crime rates have increased dramatically, and appropriate analysis has become a timeconsuming effort. Text mining is an effective tool that may help to solve this problem to classify crimes in effective manner. The proposed system aims to detect and classify crimes in Twitter posts that written in the Arabic language, one of the most widespread languages today. In this paper, classification techniques are used to detect crimes and identify their nature by different classification algorithms. The experiments evaluate different algorithms, such as SVM, DT, CNB, and KNN, in terms of accuracy and speed in the crime domain. Also, different features extraction techniques are evaluated, including rootbased stemming, light stemming, n-gram. The experiments revealed the superiority of n-gram over other techniques. Specifically, the results indicate the superiority of SVM with trigram over other classifiers, with a 91.55% accuracy. Keywords\u2014Crimes; text mining; classification; features extraction techniques; arabic posts; twitter"}
{"_id": "2f2dbb88c9266356dfda696ed907067c1e42902c", "title": "Tracking down software bugs using automatic anomaly detection", "text": "This paper introduces DIDUCE, a practical and effective tool that aids programmers in detecting complex program errors and identifying their root causes. By instrumenting a program and observing its behavior as it runs, DIDUCE dynamically formulates hypotheses of invariants obeyed by the program. DIDUCE hypothesizes the strictest invariants at the beginning, and gradually relaxes the hypothesis as violations are detected to allow for new behavior. The violations reported help users to catch software bugs as soon as they occur. They also give programmers new visibility into the behavior of the programs such as identifying rare corner cases in the program logic or even locating hidden errors that corrupt the program's results.We implemented the DIDUCE system for Java programs and applied it to four programs of significant size and complexity. DIDUCE succeeded in identifying the root causes of programming errors in each of the programs quickly and automatically. In particular, DIDUCE is effective in isolating a timing-dependent bug in a released JSSE (Java Secure Socket Extension) library, which would have taken an experienced programmer days to find. Our experience suggests that detecting and checking program invariants dynamically is a simple and effective methodology for debugging many different kinds of program errors across a wide variety of application domains."}
{"_id": "06e04fd496cd805bca69eea2c1977f90afeeef83", "title": "Causal Interventions for Fairness", "text": "Most approaches in algorithmic fairness constrain machine learning methods so the resulting predictions satisfy one of several intuitive notions of fairness. While this may help private companies comply with non-discrimination laws or avoid negative publicity, we believe it is often too little, too late. By the time the training data is collected, individuals in disadvantaged groups have already suffered from discrimination and lost opportunities due to factors out of their control. In the present work we focus instead on interventions such as a new public policy, and in particular, how to maximize their positive effects while improving the fairness of the overall system. We use causal methods to model the effects of interventions, allowing for potential interference\u2013each individual\u2019s outcome may depend on who else receives the intervention. We demonstrate this with an example of allocating a budget of teaching resources using a dataset of schools in New York City."}
{"_id": "7274c36c94c71d10ba687418ff5f68aafad40402", "title": "Atanassov's Intuitionistic Fuzzy Programming Method for Heterogeneous Multiattribute Group Decision Making With Atanassov's Intuitionistic Fuzzy Truth Degrees", "text": "The aim of this paper is to develop a new Atanassov's intuitionistic fuzzy (A-IF) programming method to solve heterogeneous multiattribute group decision-making problems with A-IF truth degrees in which there are several types of attribute values such as A-IF sets (A-IFSs), trapezoidal fuzzy numbers, intervals, and real numbers. In this method, preference relations in comparisons of alternatives with hesitancy degrees are expressed by A-IFSs. Hereby, A-IF group consistency and inconsistency indices are defined on the basis of preference relations between alternatives. To estimate the fuzzy ideal solution (IS) and weights, a new A-IF programming model is constructed on the concept that the A-IF group inconsistency index should be minimized and must be not larger than the A-IF group consistency index by some fixed A-IFS. An effective method is developed to solve the new derived model. The distances of the alternatives to the fuzzy IS are calculated to determine their ranking order. Moreover, some generalizations or specializations of the derived model are discussed. Applicability of the proposed methodology is illustrated with a real supplier selection example."}
{"_id": "95b6f903dc5bb3a6ede84ea2cf7cd48ab7e8e06b", "title": "Individuals\u2019 Stress Assessment Using Human-Smartphone Interaction Analysis", "text": "The increasing presence of stress in people\u2019 lives has motivated much research efforts focusing on continuous stress assessment methods of individuals, leveraging smartphones and wearable devices. These methods have several drawbacks, i.e., they use invasive external devices, thus increasing entry costs and reducing user acceptance, or they use some of privacy-related information. This paper presents an approach for stress assessment that leverages data extracted from smartphone sensors, and that is not invasive concerning privacy. Two different approaches are presented. One, based on smartphone gestures analysis, e.g., \u2018tap\u2019, \u2018scroll\u2019, \u2018swipe\u2019 and \u2018text writing\u2019, and evaluated in laboratory settings with 13 participants (F-measure 79-85 percent within-subject model, 70-80 percent global model); the second one based on smartphone usage analysis and tested in-the-wild with 25 participants (F-measure 77-88 percent within-subject model, 63-83 percent global model). Results show how these two methods enable an accurate stress assessment without being too intrusive, thus increasing ecological validity of the data and user acceptance."}
{"_id": "efaf2ddf50575ae79c458b7c4974620d2a740ac7", "title": "Word Segmentation in Sanskrit Using Path Constrained Random Walks", "text": "In Sanskrit, the phonemes at the word boundaries undergo changes to form new phonemes through a process called as sandhi. A fused sentence can be segmented into multiple possible segmentations. We propose a word segmentation approach that predicts the most semantically valid segmentation for a given sentence. We treat the problem as a query expansion problem and use the path-constrained random walks framework to predict the correct segments."}
{"_id": "25c96ec442fbd6c1dc0f88cfc5657723b248d525", "title": "A unifying view of the basis of social cognition", "text": "In this article we provide a unifying neural hypothesis on how individuals understand the actions and emotions of others. Our main claim is that the fundamental mechanism at the basis of the experiential understanding of others' actions is the activation of the mirror neuron system. A similar mechanism, but involving the activation of viscero-motor centers, underlies the experiential understanding of the emotions of others."}
{"_id": "2787d376f9564122d7ed584d3f2b04cb4ba5376d", "title": "Understanding motor events: a neurophysiological study", "text": "Neurons of the rostral part of inferior premotor cortex of the monkey discharge during goal-directed hand movements such as grasping, holding, and tearing. We report here that many of these neurons become active also when the monkey observes specific, meaningful hand movements performed by the experimenters. The effective experimenters' movements include among others placing or retrieving a piece of food from a table, grasping food from another experimenter's hand, and manipulating objects. There is always a clear link between the effective observed movement and that executed by the monkey and, often, only movements of the experimenter identical to those controlled by a given neuron are able to activate it. These findings indicate that premotor neurons can retrieve movements not only on the basis of stimulus characteristics, as previously described, but also on the basis of the meaning of the observed actions."}
{"_id": "7ae9516d4873688e16f7199cef31adbc6b181213", "title": "Action observation activates premotor and parietal areas in a somatotopic manner: an fMRI study.", "text": "Functional magnetic resonance imaging (fMRI) was used to localize brain areas that were active during the observation of actions made by another individual. Object- and non-object-related actions made with different effectors (mouth, hand and foot) were presented. Observation of both object- and non-object-related actions determined a somatotopically organized activation of premotor cortex. The somatotopic pattern was similar to that of the classical motor cortex homunculus. During the observation of object-related actions, an activation, also somatotopically organized, was additionally found in the posterior parietal lobe. Thus, when individuals observe an action, an internal replica of that action is automatically generated in their premotor cortex. In the case of object-related actions, a further object-related analysis is performed in the parietal lobe, as if the subjects were indeed using those objects. These results bring the previous concept of an action observation/execution matching system (mirror system) into a broader perspective: this system is not restricted to the ventral premotor cortex, but involves several somatotopically organized motor circuits."}
{"_id": "8a81800350e0572202607152fc861644d4e39a54", "title": "Intentional attunement: A neurophysiological perspective on social cognition and its disruption in autism", "text": "A direct form of experiential understanding of others, \"intentional attunement\", is achieved by modeling their behavior as intentional experiences on the basis of the activation of shared neural systems underpinning what the others do an feel and what we do and feel. This modeling mechanism is embodied simulation. In parallel with the detached sensory description of the observed social stimuli, internal representations of the body states associated with actions, emotions, and sensations are evoked in the observer, as if he/she would be doing a similar action or experiencing a similar emotion or sensation. Mirror neuron systems are likely the neural correlate of this mechanism. By means of a shared neural state realized in two different bodies, the \"objectual other\" becomes \"another self\". A defective intentional attunement caused by a lack of embodies simulation might cause some of the social impairments of autistic individuals."}
{"_id": "0f52a233d2e20e7b270a4eed9e06aff1840a46d6", "title": "The mirror-neuron system.", "text": "A category of stimuli of great importance for primates, humans in particular, is that formed by actions done by other individuals. If we want to survive, we must understand the actions of others. Furthermore, without action understanding, social organization is impossible. In the case of humans, there is another faculty that depends on the observation of others' actions: imitation learning. Unlike most species, we are able to learn by imitation, and this faculty is at the basis of human culture. In this review we present data on a neurophysiological mechanism--the mirror-neuron mechanism--that appears to play a fundamental role in both action understanding and imitation. We describe first the functional properties of mirror neurons in monkeys. We review next the characteristics of the mirror-neuron system in humans. We stress, in particular, those properties specific to the human mirror-neuron system that might explain the human capacity to learn by imitation. We conclude by discussing the relationship between the mirror-neuron system and language."}
{"_id": "3181f5190b064398c2bb322b7a5100863e5b6a01", "title": "Pulsed Out of Awareness: EEG Alpha Oscillations Represent a Pulsed-Inhibition of Ongoing Cortical Processing", "text": "Alpha oscillations are ubiquitous in the brain, but their role in cortical processing remains a matter of debate. Recently, evidence has begun to accumulate in support of a role for alpha oscillations in attention selection and control. Here we first review evidence that 8-12 Hz oscillations in the brain have a general inhibitory role in cognitive processing, with an emphasis on their role in visual processing. Then, we summarize the evidence in support of our recent proposal that alpha represents a pulsed-inhibition of ongoing neural activity. The phase of the ongoing electroencephalography can influence evoked activity and subsequent processing, and we propose that alpha exerts its inhibitory role through alternating microstates of inhibition and excitation. Finally, we discuss evidence that this pulsed-inhibition can be entrained to rhythmic stimuli in the environment, such that preferential processing occurs for stimuli at predictable moments. The entrainment of preferential phase may provide a mechanism for temporal attention in the brain. This pulsed inhibitory account of alpha has important implications for many common cognitive phenomena, such as the attentional blink, and seems to indicate that our visual experience may at least some times be coming through in waves."}
{"_id": "9d8a785a4275dafe21c6cdf62d6080bc5fad18ef", "title": "Extracting Opinion Features in Sentiment Patterns", "text": "Due to the increasing amount of opinions and reviews on the Internet, opinion mining has become a hot topic in data mining, in which extracting opinion features is a key step. Most of existing methods utilize a rule-based mechanism or statistics to extract opinion features, but they ignore the structure characteristics of reviews. The performance has hence not been promising. A new approach of OFESP (Opinion Feature Extraction based on Sentiment Patterns) is proposed in this paper, which takes into account the structure characteristics of reviews for higher values of precision and recall. With a self-constructed database of sentiment patterns, OFESP matches each review sentence to obtain its features, and then filters redundant features regarding relevance of the domain, statistics and semantic similarity. Experimental studies on real-world data demonstrate that as compared with the traditional method based on a window mechanism, OFESP outperforms it on precision, recall and F-score. Meanwhile, in comparison to the approach based on syntactic analysis, OFESP performs better on recall and F-score."}
{"_id": "611d86b4d3cebf105ee5fdc6da8fc1f725f3c8d5", "title": "Superfaces: A Super-Resolution Model for 3D Faces", "text": "Face recognition based on the analysis of 3D scans has been an active research subject over the last few years. However, the impact of the resolution of 3D scans on the recognition process has not been addressed explicitly yet being of primal importance after the introduction of a new generation of low cost 4D scanning devices. These devices are capable of combined depth/rgb acquisition over time with a low resolution compared to the 3D scanners typically used in 3D face recognition benchmarks. In this paper, we define a super-resolution model for 3D faces by which a sequence of low-resolution 3D scans can be processed to extract a higher resolution 3D face model, namely the superface model. The proposed solution relies on the Scaled ICP procedure to align the low-resolution 3D models with each other and estimate the value of the high-resolution 3D model based on the statistics of values of the lowresolution scans in corresponding points. The approach is validated on a data set that includes, for each subject, one sequence of low-resolution 3D face scans and one ground-truth high-resolution 3D face model acquired through a high-resolution 3D scanner. In this way, results of the super-resolution process are evaluated qualitatively and quantitatively by measuring the error between the superface and the ground-truth."}
{"_id": "21e96b9f949aef121f1031097442a12308821974", "title": "The maternal brain and its plasticity in humans", "text": "This article is part of a Special Issue \"Parental Care\". Early mother-infant relationships play important roles in infants' optimal development. New mothers undergo neurobiological changes that support developing mother-infant relationships regardless of great individual differences in those relationships. In this article, we review the neural plasticity in human mothers' brains based on functional magnetic resonance imaging (fMRI) studies. First, we review the neural circuits that are involved in establishing and maintaining mother-infant relationships. Second, we discuss early postpartum factors (e.g., birth and feeding methods, hormones, and parental sensitivity) that are associated with individual differences in maternal brain neuroplasticity. Third, we discuss abnormal changes in the maternal brain related to psychopathology (i.e., postpartum depression, posttraumatic stress disorder, substance abuse) and potential brain remodeling associated with interventions. Last, we highlight potentially important future research directions to better understand normative changes in the maternal brain and risks for abnormal changes that may disrupt early mother-infant relationships."}
{"_id": "b94d9bfa0c33b720562dd0d8b8c4143a33e8f95b", "title": "Using randomized response techniques for privacy-preserving data mining", "text": "Privacy is an important issue in data mining and knowledge discovery. In this paper, we propose to use the randomized response techniques to conduct the data mining computation. Specially, we present a method to build decision tree classifiers from the disguised data. We conduct experiments to compare the accuracy of our decision tree with the one built from the original undisguised data. Our results show that although the data are disguised, our method can still achieve fairly high accuracy. We also show how the parameter used in the randomized response techniques affects the accuracy of the results."}
{"_id": "d5b8483294ac4c3af88782bde7cfe90ecd699e99", "title": "Sculpting: an interactive volumetric modeling technique", "text": "We present a new interactive modeling technique based on the notion of sculpting a solid material. A sculpting tool is controlled by a 3D input device and the material is represented by voxel data; the tool acts by modifying the values in the voxel array, much as a \"paint\" program's \"paintbrush\" modifies bitmap values. The voxel data is converted to a polygonal surface using a \"marching-cubes\" algorithm; since the modifications to the voxel data are local, we accelerate this computation by an incremental algorithm and accelerate the display by using a special data structure for determining which polygons must be redrawn in a particular screen region. We provide a variety of tools: one that cuts away material, one that adds material, a \"sandpaper\" tool, a \"heat gun,\" etc. The technique provides an intuitive direct interaction, as if the user were working with clay or wax. The models created are free-form and may have complex topology; however, they are not precise, so the technique is appropriate for modeling a boulder or a tooth but not for modeling a crankshaft."}
{"_id": "46c90184ff78ccab48f2520f56531aea101537dd", "title": "HIGH DIMENSIONAL SPACES", "text": "In this paper, we analyze deep learning from a mathematical point of view and derive several novel results. The results are based on intriguing mathematical properties of high dimensional spaces. We first look at perturbation based adversarial examples and show how they can be understood using topological arguments in high dimensions. We point out fallacy in an argument presented in a published paper in 2015 by Goodfellow et al., see reference [10], and we present a more rigorous, general and correct mathematical result to explain adversarial examples in terms of topology of image manifolds. Second, we look at optimization landscapes of deep neural networks and examine the number of saddle points relative to that of local minima. Third, we show how multiresolution nature of images explains perturbation based adversarial examples in form of a stronger result. Our results state that expectation of L2-norm of adversarial perturbations shrinks to 0 as image resolution becomes arbitrarily large. Finally, by incorporating the parts-whole manifold learning hypothesis for natural images, we investigate the working of deep neural networks and root causes of adversarial examples and discuss how future improvements can be made and how adversarial examples can be eliminated."}
{"_id": "f43a106b220aa20fa0862495153f0184c9f3d8f6", "title": "Certificates of virginity and reconstruction of the hymen.", "text": "Virginity in women has traditionally been associated with the integrity of the hymen, the incomplete membrane occluding the lower end of the vagina. The hymen is often ruptured during the first coitus, which may cause bleeding of varying intensity. In certain communities, the bed-sheet, now soiled with blood, is shown as proof that the first sexual intercourse has taken place and that the bride was indeed a virgin up to that point. Thus, the sullied sheet constitutes evidence that the \u2018honour\u2019 of the woman and that of her family remain intact. Literature data concerning the frequency of bleeding at defloration are amazingly scarce. Olga Loeber conducted a retrospective survey among women of diverse ethnic origin and with different cultural and religious background aborted at one centre in The Netherlands. Somewhat less than half of these women reported that they had not bled during their first coitus! The Jewish, Christian and Muslim faiths all attach considerable importance to premarital virginity, particularly that of women. Yet, the social valorization of virginity dates back to a much earlier time than the imposition of chastity upon unmarried women in Muslim communities. Neither the Koran nor Hadiths state that virginity is a precondition to marriage. Patriarchal societies aim to control the sexuality of women in order to regulate lines of descent and transfer. Islam has only translated a social into a religious norm. It is worth remembering that until the 1960s, moral and behavioural norms kept even European atheists from engaging in premarital sex. These constraints have only been gradually lifted in the last 50 years as a result of the liberalization of sexuality and the improved social status of women."}
{"_id": "0f3ee3493e1148c1ac6ac58889eec1c709af6ed4", "title": "Systematic review of bankruptcy prediction models: Towards a framework for tool selection", "text": "The bankruptcy prediction research domain continues to evolve with many new different predictive models developed using various tools. Yet many of the tools are used with the wrong data conditions or for the wrong situation. Using the Web of Science, Business Source Complete and Engineering Village databases, a systematic review of 49 journal articles published between 2010 and 2015 was carried out. This review shows how eight popular and promising tools perform based on 13 key criteria within the bankruptcy prediction models research area. These tools include two statistical tools: multiple discriminant analysis and Logistic regression; and six artificial intelligence tools: artificial neural network, support vector machines, rough sets, case based reasoning, decision tree and genetic algorithm. The 13 criteria identified include accuracy, result transparency, fully deterministic output, data size capability, data dispersion, variable selection method required, variable types applicable, and more. Overall, it was found that no single tool is predominantly better than other tools in relation to the 13 identified criteria. A tabular and a diagrammatic framework are provided as guidelines for the selection of tools that best fit different situations. It is concluded that an overall better performance model can only be found by informed integration of tools to form a hybrid model. This paper contributes towards a thorough understanding of the features of the tools used to develop bankruptcy prediction models and their related shortcomings."}
{"_id": "50984f8345a3120d0e6c0a75adc2ac1a13e37961", "title": "Impaired face processing in autism: fact or artifact?", "text": "Within the last 10 years, there has been an upsurge of interest in face processing abilities in autism which has generated a proliferation of new empirical demonstrations employing a variety of measuring techniques. Observably atypical social behaviors early in the development of children with autism have led to the contention that autism is a condition where the processing of social information, particularly faces, is impaired. While several empirical sources of evidence lend support to this hypothesis, others suggest that there are conditions under which autistic individuals do not differ from typically developing persons. The present paper reviews this bulk of empirical evidence, and concludes that the versatility and abilities of face processing in persons with autism have been underestimated."}
{"_id": "6695c398a384e5fe09e51c86c0702654c74102d0", "title": "A Variable Threshold Voltage CMOS Comparator for Flash Analog to Digital Converter", "text": "This paper presents a variable threshold voltage CMOS comparator for flash analog to digital converter. The proposed comparator has single-ended type of architecture. The comparator is designed and analyzed by Cadence Virtuoso Analog Design Environment using UMC 180nm technology. The proposed comparator consumes peak power of 34. 97 ?W from 1. 8 V power supply. It achieves the power delay product (PDP) of 8 fJ and propagation delay of 230 ps. The designed comparator eliminates the requirement of resistive ladder network for reference voltage generation. This makes it highly suitable in the design of flash analog to digital converter."}
{"_id": "19bf971a1fb6cd388c0ba07834dc4ddcbc442318", "title": "Automatic Registration Method for Optical Remote Sensing Images with Large Background Variations Using Line Segments", "text": "Image registration is an essential step in the process of image fusion, environment surveillance and change detection. Finding correct feature matches during the registration process proves to be difficult, especially for remote sensing images with large background variations (e.g., images taken pre and post an earthquake or flood). Traditional registration methods based on local intensity probably cannot maintain steady performances, as differences are significant in the same area of the corresponding images, and ground control points are not always available in many disaster images. In this paper, an automatic image registration method based on the line segments on the main shape contours (e.g., coastal lines, long roads and mountain ridges) is proposed for remote sensing images with large background variations because the main shape contours can hold relatively more invariant information. First, a line segment detector called EDLines (Edge Drawing Lines), which was proposed by Akinlar et al. in 2011, is used to extract line segments from two corresponding images, and a line validation step is performed to remove meaningless and fragmented line segments. Then, a novel line segment descriptor with a new histogram binning strategy, which is robust to global geometrical distortions, is generated for each line segment based on the geometrical relationships,including both the locations and orientations of theremaining line segments relative to it. As a result of the invariance of the main shape contours, correct line segment matches will have similar descriptors and can be obtained by cross-matching among the descriptors. Finally, a spatial consistency measure is used to remove incorrect matches, and transformation parameters between the reference and sensed images can be figured out. Experiments with images from different types of satellite datasets, such as Landsat7, QuickBird, WorldView, and so on, demonstrate that the proposed algorithm is automatic, fast (4 ms faster than the second fastest method, i.e., the rotationand scale-invariant shape context) and can achieve a recall of 79.7%, a precision of 89.1% and a root mean square error (RMSE) of 1.0 pixels on average for remote sensing images with large background variations."}
{"_id": "3e79a574d776c46bbe6d34f41b1e83b5d0f698f2", "title": "Sentence-State LSTM for Text Representation", "text": "The baseline BiLSTM model consists of two LSTM components, which process the input in the forward left-to-right and the backward rightto-left directions, respectively. In each direction, the reading of input words is modelled as a recurrent process with a single hidden state. Given an initial value, the state changes its value recurrently, each time consuming an incoming word. Take the forward LSTM component for example. Denoting the initial state as h 0, which is a model parameter, the recurrent state transition step for calculating h 1, . . . , h n+1 is defined as follows (Graves and Schmidhuber, 2005):"}
{"_id": "2d47c8a220ca6ce7e328298b19ba27294954f733", "title": "The causes of welfare state expansion : deindustrialization or globalization ?", "text": "An influential line of argument holds that trade exposure causes economic uncertainty and spurs popular demands for compensatory and risk-sharing welfare state spending. The argument has gained renewed prominence through the recent work of Garrett (1998) and Rodrik (1997; 1998). This paper argues that the relationship between trade openness and welfare state expansion is spurious, and that the engine of welfare state expansion since the 1960s has been deindustrialization. Based on cross-sectional time-series data for 15 OECD countries we show that there is no relationship between trade exposure and the level of labor market risks (in terms of employment and wages), whereas the uncertainty and dislocations caused by deindustrialization have spurred electoral demands for compensating welfare state policies. Yet, while differential rates of deindustrialization explain differences in the overall size of the welfare state, its particular character -in terms of the share of direct government provision and the equality of transfer payments -is shaped by government partisanship. The argument has implications for the study, and the future,of thewelfare state thatarevery different from thosesuggested in the trade openness literature. Zusammenfassung In vielen einflu\u00dfreichen Diskussionsbeitr\u00e4gen wird die Meinung vertreten, da\u00df die Liberalisierung des Handels \u00f6konomische Verunsicherung zur Folge habe und damit zu Forderungen nach ausgleichenden wohlfahrtsstaatlichen Ausgaben f\u00fchre. Die Arbeiten von Garrett (1998) und Rodrik (1997;1998) verliehen diesem Argument zus\u00e4tzliche Relevanz. Gegenstand dieser Untersuchung ist die Beziehung zwischen Ausma\u00df an Offenheit einer Volkswirtschaft und der Ausdehnung des Wohlfahrtsstaates, dessen gro\u00dfz\u00fcgige Entwicklung seit den 1960er Jahren durch zunehmende Deindustrialisierung erm\u00f6glicht wurde. Auf der Grundlage von Analysen l\u00e4nder\u00fcbergreifender Zeitreihen und von 15 OECD-L\u00e4ndern wird gezeigt, da\u00df kein Zusammenhang zwischen einer Handelsliberalisierung und dem Grad der Arbeitsmarktrisiken (bezogen auf L\u00f6hne und Besch\u00e4ftigung) besteht. Angesichts der durch die Deindustrialisierung verursachten Unsicherheit kommt es jedoch von seiten der W\u00e4hler zu Forderungen nach einer ausgleichenden Sozialpolitik. W\u00e4hrend das Ausma\u00df der Deindustrialisierung die Gr\u00f6\u00dfe und Ausstattung des Wohlfahrsstaates determiniert, wird sein spezifischer Charakter hinsichtlich der direkten Regierungsdienstleistungen und der ausgleichenden Transferzahlungen von den Regierungsparteien gepr\u00e4gt. Diese Argumentation ist von gro\u00dfer Tragweite f\u00fcr die Analyse und Zukunft des Wohlfahrtsstaates; sie weicht gravierend von der Literatur \u00fcber offene Volkswirtschaften ab."}
{"_id": "7dc8bfcc945f20ae614629a00251d8f175623462", "title": "A Tutorial on Reinforcement Learning Techniques", "text": "Reinforcement Learning (RL) is learning through direct experimentation. It does not assume the existence of a teacher that provides examples upon which learning of a task takes place. Instead, in RL experience is the only teacher. With historical roots on the study of conditioned reflexes, RL soon attracted the interest of Engineers and Computer Scientists because of its theoretical relevance and potential applications in fields as diverse as Operational Research and Robotics. Computationally, RL is intended to operate in a learning environment composed by two subjects: the learner and a dynamic process. At successive time steps, the learner makes an observation of the process state, selects an action and applies it back to the process. The goal of the learner is to find out an action policy that controls the behavior of this dynamic process, guided by signals (reinforcements) that indicate how well it is performing the required task. These signals are usually associated to some dramatic condition \u2014 e.g., accomplishment of a subtask (reward) or complete failure (punishment), and the learner\u2019s goal is to optimize its behavior based on some performance measure (a function of the received reinforcements). The crucial point is that in order to do that, the learner must evaluate the conditions (associations between observed states and chosen actions) that lead to rewards or punishments. In other words, it must learn how to assign credit to past actions and states by correctly estimating costs associated to these events. Starting from basic concepts, this tutorial presents the many flavors of RL algorithms, develops the corresponding mathematical tools, assess their practical limitations and discusses alternatives that have been proposed for applying RL to realistic tasks, such as those involving large state spaces or partial observability. It relies on examples and diagrams to illustrate the main points, and provides many references to the specialized literature and to Internet sites where relevant demos and additional information can be obtained."}
{"_id": "255befa1707aa15f1fc403fb57eaf76327fc8cef", "title": "Robust Influence Maximization", "text": "In this paper, we address the important issue of uncertainty in the edge influence probability estimates for the well studied influence maximization problem --- the task of finding k seed nodes in a social network to maximize the influence spread. We propose the problem of robust influence maximization, which maximizes the worst-case ratio between the influence spread of the chosen seed set and the optimal seed set, given the uncertainty of the parameter input. We design an algorithm that solves this problem with a solution-dependent bound. We further study uniform sampling and adaptive sampling methods to effectively reduce the uncertainty on parameters and improve the robustness of the influence maximization task. Our empirical results show that parameter uncertainty may greatly affect influence maximization performance and prior studies that learned influence probabilities could lead to poor performance in robust influence maximization due to relatively large uncertainty in parameter estimates, and information cascade based adaptive sampling method may be an effective way to improve the robustness of influence maximization."}
{"_id": "62df185ddf7cd74040dd7f8dd72a9a2d85b734ed", "title": "An fMRI investigation of emotional engagement in moral judgment.", "text": "The long-standing rationalist tradition in moral psychology emphasizes the role of reason in moral judgment. A more recent trend places increased emphasis on emotion. Although both reason and emotion are likely to play important roles in moral judgment, relatively little is known about their neural correlates, the nature of their interaction, and the factors that modulate their respective behavioral influences in the context of moral judgment. In two functional magnetic resonance imaging (fMRI) studies using moral dilemmas as probes, we apply the methods of cognitive neuroscience to the study of moral judgment. We argue that moral dilemmas vary systematically in the extent to which they engage emotional processing and that these variations in emotional engagement influence moral judgment. These results may shed light on some puzzling patterns in moral judgment observed by contemporary philosophers."}
{"_id": "a7e9ba17a4e289d2d0d6e87ccb686fa23e0cd92c", "title": "IoT based Smart Home design using power and security management", "text": "The paper presents the design and implementation of an Ethernet-based Smart Home intelligent system for monitoring the electrical energy consumption based upon the real time tracking of the devices at home an INTEL GALILEO 2ND generation development board, which can be used in homes and societies. The proposed system works on real time monitoring and voice control, so that the electrical devices and switches can be remotely controlled and monitored with or without an android based app. It uses various sensors to not only monitor the real time device tracking but also maintaining the security of your house. It is monitored and controlled remotely from an android app using the Internet or the Intranet connectivity. The proposed outcome of the project aims as multiple benefits of saving on electricity bills of the home as well as keep the users updated about their home security with an option of controlling the switching of the devices by using their voice or simple toggle touch on their smartphone, and last but most importantly, monitor the usage in order to conserve the precious natural resources by reducing electrical energy consumption."}
{"_id": "74db933fcd3d53466ac004e7064fc5db348f1d38", "title": "Scalable Visualisation of Sentiment and Stance", "text": "Natural language processing systems have the ability to analyse not only the sentiment of human language, but also the stance of the speaker. Representing this information visually from unevenly distributed and potentially sparse datasets is challenging, in particular when trying to facilitate exploration and knowledge discovery. We present work on a novel visualisation approach for scalable visualisation of sentiment and stance and provide a language resource of e-government public engagement of 9,278 user comments with stance explicitly declared by the author."}
{"_id": "76bf95815fdef196f017d4699857ecfa478b8755", "title": "Differentially Private K-Means Clustering", "text": "There are two broad approaches for differentially private data analysis. The interactive approach aims at developing customized differentially private algorithms for various data mining tasks. The non-interactive approach aims at developing differentially private algorithms that can output a synopsis of the input dataset, which can then be used to support various data mining tasks. In this paper we study the effectiveness of the two approaches on differentially private k-means clustering. We develop techniques to analyze the empirical error behaviors of the existing interactive and non-interactive approaches. Based on the analysis, we propose an improvement of DPLloyd which is a differentially private version of the Lloyd algorithm. We also propose a non-interactive approach EUGkM which publishes a differentially private synopsis for k-means clustering. Results from extensive and systematic experiments support our analysis and demonstrate the effectiveness of our improvement on DPLloyd and the proposed EUGkM algorithm."}
{"_id": "44dd6443a07f0d139717be74a98988e3ec80beb8", "title": "{31 () Unifying Instance-based and Rule-based Induction Pedro Domingos", "text": "Several well-developed approaches to inductive learning now exist, but each has speci c limitations that are hard to overcome. Multi-strategy learning attempts to tackle this problem by combining multiple methods in one algorithm. This article describes a uni cation of two widely-used empirical approaches: rule induction and instance-based learning. In the new algorithm, instances are treated as maximally speci c rules, and classi cation is performed using a best-match strategy. Rules are learned by gradually generalizing instances until no improvement in apparent accuracy is obtained. Theoretical analysis shows this approach to be e cient. It is implemented in the RISE 3.1 system. In an extensive empirical study, RISE consistently achieves higher accuracies than state-of-the-art representatives of both its parent approaches (PEBLS and CN2), as well as a decision tree learner (C4.5). Lesion studies show that each of RISE's components is essential to this performance. Most signi cantly, in 14 of the 30 domains studied, RISE is more accurate than the best of PEBLS and CN2, showing that a signi cant synergy can be obtained by combining multiple empirical methods."}
{"_id": "384b8bb787f58bb1f1637d0be0c5aa7d562acc46", "title": "Benchmarking short text semantic similarity", "text": "Short Text Semantic Similarity measurement is a ne w and rapidly growing field of research. \u201cShort texts\u201d are typically sentence leng th but are not required to be grammatically correct . There is great potential for applying these measure s in fields such as Information Retrieval, Dialogue Management and Question Answering. A datas et of 65 sentence pairs, with similarity ratings, produced in 2006 has become adopted as a de facto Gold Standard benchmark. This paper discusses the adoption of the 2006 dataset, lays do wn a number of criteria that can be used to determine whether a dataset should be awarded a \u201cGo ld Standard\u201d accolade and illustrates its use as a benchmark. Procedures for the generation of furth er Gold Standard datasets in this field are recommended."}
{"_id": "24b77da5387095aa074d51e75db875e766eefbee", "title": "01 05 03 2 v 2 1 5 N ov 2 00 1 Quantum Digital Signatures", "text": "We present a quantum digital signature scheme whose security is based on fundamental principles of quantum physics. It allows a sender (Alice) to sign a message in such a way that the signature can be validated by a number of different people, and all will agree either that the message came from Alice or that it has been tampered with. To accomplish this task, each recipient of the message must have a copy of Alice\u2019s \u201cpublic key,\u201d which is a set of quantum states whose exact identity is known only to Alice. Quantum public keys are more difficult to deal with than classical public keys: for instance, only a limited number of copies can be in circulation, or the scheme becomes insecure. However, in exchange for this price, we achieve unconditionally secure digital signatures. Sending an m-bit message uses up O(m) quantum bits for each recipient of the public key. We briefly discuss how to securely distribute quantum public keys, and show the signature scheme is absolutely secure using one method of key distribution. The protocol provides a model for importing the ideas of classical public key cryptography into the quantum world."}
{"_id": "6617cd6bdb16054c5369fb8ca40cf538a43f977a", "title": "Medieval Islamic Architecture , Quasicrystals , and Penrose and Girih Tiles : Questions from the Classroom", "text": "Tiling Theory studies how one might cover the plane with various shapes. Medieval Islamic artisans developed intricate geometric tilings to decorate their mosques, mausoleums, and shrines. Some of these patterns, called girih tilings, first appeared in the 12 Century AD. Recent investigations show these medieval tilings contain symmetries similar to those found in aperiodic Penrose tilings first investigated in the West in the 1970\u2019s. These intriguing discoveries may suggest that the mathematical understanding of these artisans was much deeper than originally thought."}
{"_id": "d4375f08528e3326280210f95215237db637f199", "title": "An HMM-Based Approach for Off-Line Unconstrained Handwritten Word Modeling and Recognition", "text": "\u00d0This paper describes a hidden Markov model-based approach designed to recognize off-line unconstrained handwritten words for large vocabularies. After preprocessing, a word image is segmented into letters or pseudoletters and represented by two feature sequences of equal length, each consisting of an alternating sequence of shape-symbols and segmentationsymbols, which are both explicitly modeled. The word model is made up of the concatenation of appropriate letter models consisting of elementary HMMs and an HMM-based interpolation technique is used to optimally combine the two feature sets. Two rejection mechanisms are considered depending on whether or not the word image is guaranteed to belong to the lexicon. Experiments carried out on real-life data show that the proposed approach can be successfully used for handwritten word recognition. Index Terms\u00d0Handwriting modeling, preprocessing, segmentation, feature extraction, hidden Markov models, word recognition, rejection."}
{"_id": "f4373ab2f8d5a457141c77eabd2bab4a698e26cf", "title": "A Full English Sentence Database for Off-line Handwriting Recognition", "text": "In this paper we present a new database for off-line handwriting recognition, together with a few preprocessing and text segmentation procedures. The database is based on the Lancaster-Oslo/Bergen(LOB) corpus. This corpus is a collection of texts that were used to generate forms, which subsequently were filled out by persons with their handwriting. Up to now (December 1998) the database includes 556 forms produced by approximately 250 different writers. The database consists of full English sentences. It can serve as a basis for a variety of handwriting recognition tasks. The main focus, however, is on recognition techniques that use linguistic knowledge beyond the lexicon level. This knowledge can be automatically derived from the corpus or it can be supplied from external sources."}
{"_id": "62a134740314b4469c83c8921ae2e1beea22b8f5", "title": "A Database for Handwritten Text Recognition Research", "text": "We propose in this correspondence a new method to perform two-class clustering of 2-D data in a quick and automatic way by preserving certain features of the input data. The method is analytical, deterministic, unsupervised, automatic, and noniterative. The computation time is of order n if the data size is n, and hence much faster than any other method which requires the computation of an n-by-n dissimilarity matrix. Furthermore, the proposed method does not have the trouble of guessing initial values. This new approach is thus more suitable for fast automatic hierarchical clustering or any other fields requiring fast automatic two-class clustering of 2-D data. The method can be extended to cluster data in higher dimensional space. A 3-D example is included."}
{"_id": "a1dcd66aaf62a428364878b875c7342b660f7f1e", "title": "Continuous speech recognition using hidden Markov models", "text": "The use of hidden Markov models (HMMs) in continuous speech recognition is reviewed. Markov models are presented as a generalization of their predecessor technology, dynamic programming. A unified view is offered in which both linguistic decoding and acoustic matching are integrated into a single, optimal network search framework. Advances in recognition architectures are discussed. The fundamentals of Viterbi beam search, the dominant search algorithm used today in speed recognition, are presented. Approaches to estimating the probabilities associated with an HMM model are examined. The HMM-supervised training paradigm is examined. Several examples of successful HMM-based speech recognition systems are reviewed.<>"}
{"_id": "b38ac03b806a291593c51cb51818ce8e919a1a43", "title": "Pattern classification - a unified view of statistical and neural approaches", "text": ""}
{"_id": "4debb3fe83ea743a888aa2ec8f4252bbe6d0fcb8", "title": "A framework analysis of the open source software development paradigm", "text": "Open Source Software (OSS) has become the subject of much commercial interest of late. Certainly, OSS seems to hold much promise in addressing the core issues of the software crisis, namely that of software taking too long to develop, exceeding its budget, and not working very well. Indeed, there have been several examples of significant OSS success stories\u2014the Linux operating system, the Apache web server, the BIND domain name resolution utility, to name but a few. However, little by way of rigorous academic research on OSS has been conducted to date. In this study, a framework was derived from two previous frameworks which have been very influential in the IS field, namely that of Zachman\u2019s IS architecture (ISA) and Checkland\u2019s CATWOE framework from Soft Systems Methodology (SSM). The resulting framework is used to analyze the OSS approach in detail. The potential future of OSS research is also discussed."}
{"_id": "263cedeadb67c325a933d3ab8eb8fba3db83f627", "title": "On pricing discrete barrier options using conditional expectation and importance sampling Monte Carlo", "text": "Estimators for the price of a discrete barrier option based on conditional expectation and importance sampling variance reduction techniques are given. There are erroneous formulas for the conditional expectation estimator published in the literature: we derive the correct expression for the estimator. We use a simulated annealing algorithm to estimate the optimal parameters of exponential twisting in importance sampling, and compare it with a heuristic used in the literature. Randomized quasi-Monte Carlo methods are used to further increase the accuracy of the estimators. c \u00a9 2007 Elsevier Ltd. All rights reserved."}
{"_id": "638a8302b681221b32aa4a8b69154ec4e6820a54", "title": "On the mechanical design of the Berkeley Lower Extremity Exoskeleton (BLEEX)", "text": "The first energetically autonomous lower extremity exoskeleton capable of carrying a payload has been demonstrated at U.C. Berkeley. This paper summarizes the mechanical design of the Berkeley Lower Extremity Exoskeleton (BLEEX). The anthropomorphically-based BLEEX has seven degrees of freedom per leg, four of which are powered by linear hydraulic actuators. The selection of the degrees of freedom and their ranges of motion are described. Additionally, the significant design aspects of the major BLEEX components are covered."}
{"_id": "9e03eb52f39e226656d2ae58e0e5c8e4d7564a18", "title": "Five Styles of Customer Knowledge Management , and How Smart Companies Use Them To Create Value", "text": "Corporations are beginning to realize that the proverbial \u2018if we only knew what we know\u2019 also includes \u2018if we only knew what our customers know.\u2019 The authors discuss the concept of Customer Knowledge Management (CKM), which refers to the management of knowledge from customers, i.e. knowledge resident in customers. CKM is contrasted with knowledge about customers, e.g. customer characteristics and preferences prevalent in previous work on knowledge management and customer relationship management. Five styles of CKM are proposed and practically illustrated by way of corporate examples. Implications are discussed for knowledge management, the resource based view, and strategy process research. \uf8e9 2002 Elsevier Science Ltd. All rights reserved"}
{"_id": "d8a9d06f5dba9ea4586aba810d4679b8af460885", "title": "Rule Based Part of Speech Tagging of Sindhi Language", "text": "Part of Speech (POS) tagging is a process of assigning correct syntactic categories to each word in the text. Tag set and word disambiguation rules are fundamental parts of any POS tagger. No work has hitherto been published of tag set in Sindhi language. The Sindhi lexicon for computational processing is also not available. In this study, the tag set for Sindhi POS, lexicon and word disambiguation rules are designed and developed. The Sindhi corpus is collected from a comprehensive Sindhi Dictionary. The corpus is based on the most recent available vocabulary used by local people. In this paper, preliminary achievements of rule based Sindhi Part of Speech (SPOS) tagger are presented. Tagging and tokenization algorithms are also designed for the implementation of SPOS. The outputs of SPOS are verified by Sindhi linguist. The development of SPOS tagger may have an important milestone towards computational Sindhi language processing."}
{"_id": "d1cf19c3f9d202d81cd67a999d7c831aa871cc6e", "title": "gNUFFTW : Auto-Tuning for High-Performance GPU-Accelerated Non-Uniform Fast Fourier Transforms by Teresa", "text": "Non-uniform sampling of the Fourier transform appears in many important applications such as magnetic resonance imaging (MRI), optics, tomography and radio interferometry. Computing the inverse often requires fast application of the non-uniform discrete Fourier transform (NUDFT) and its adjoint operation. Non-Uniform Fast Fourier Transform (NUFFT) methods, such as gridding/regridding, are approximate algorithms which often leverage the highly-optimized Fast Fourier Transform (FFT) and localized interpolations. These approaches require selecting several parameters, such as interpolation and FFT grid sizes, which affect both the accuracy and runtime. In addition, different implementations lie on a spectrum of precomputation levels, which can further speed up repeated computations, with various trade-offs in planning time, execution time and memory usage. Choosing the optimal parameters and implementations is important for performance speed, but difficult to do manually since the performance of NUFFT is not well-understood for modern parallel processors. Inspired by the FFTW library, we demonstrate an empirical auto-tuning approach for the NUFFT on General Purpose Graphics Processors Units (GPGPU). We demonstrate order-of-magnitude speed improvements with autotuning compared to typical default choices. Our auto-tuning is implemented in an easy to use proof-of-concept library called gNUFFTW, which leverages existing open-source NUFFT packages, cuFFT and cuSPARSE libraries, as well as our own NUFFT implementations for high performance. Keywords\u2014non-uniform, non-Cartesian, FFT, NUFFT, GPU, auto-tuning, image processing."}
{"_id": "1aa3e563e9179841a2b3a9b026c9f8a00c7fe664", "title": "Automatic Medical Concept Extraction from Free Text Clinical Reports, a New Named Entity Recognition Approach", "text": "Actually in the Hospital Information Systems, there is a wide range of clinical information representation from the Electronic Health Records (EHR), and most of the information contained in clinical reports is written in natural language free text. In this context, we are researching the problem of automatic clinical named entities recognition from free text clinical reports. We are using Snomed-CT (Systematized Nomenclature of Medicine \u2013 Clinical Terms) as dictionary to identify all kind of clinical concepts, and thus the problem we are considering is to map each clinical entity named in a free text report with its Snomed-CT unique ID. More in general, we are developed a new approach for the named entity recognition (NER) problem in specific domains, and we have applied it to recognize clinical concepts in free text clinical reports. In our approach we apply two types of NER approaches, dictionary-based and machine learning-based. We use a specific domain dictionary-based gazetteer (using Snomed-CT to get the standard clinical code for the clinical concept), and the main approach that we introduce is using a unsupervised shallow learning neural network, word2vec from Mikolov et al., to represent words as vectors, and then making the recognition based on the distance between candidates and dictionary terms. We have applied our approach on a Dataset with 318.585 clinical reports in Spanish from the emergency service of the Hospital \u201cRafael M\u00e9ndez\u201d from Lorca (Murcia) Spain, and preliminary results are encouraging. Key-Words: Snomed-CT, word2vec, doc2vec, clinical information extraction, skipgram, medical terminologies, search semantic, named entity recognition, ner, medical entity recognition"}
{"_id": "dceb94c63c2ecb097e0506abbb0dea243465598f", "title": "Femtosecond-Long Pulse-Based Modulation for Terahertz Band Communication in Nanonetworks", "text": "Nanonetworks consist of nano-sized communicating devices which are able to perform simple tasks at the nanoscale. Nanonetworks are the enabling technology of long-awaited applications such as advanced health monitoring systems or high-performance distributed nano-computing architectures. The peculiarities of novel plasmonic nano-transceivers and nano-antennas, which operate in the Terahertz Band (0.1-10 THz), require the development of tailored communication schemes for nanonetworks. In this paper, a modulation and channel access scheme for nanonetworks in the Terahertz Band is developed. The proposed technique is based on the transmission of one-hundred-femtosecond-long pulses by following an asymmetric On-Off Keying modulation Spread in Time (TS-OOK). The performance of TS-OOK is evaluated in terms of the achievable information rate in the single-user and the multi-user cases. An accurate Terahertz Band channel model, validated by COMSOL simulation, is used, and novel stochastic models for the molecular absorption noise in the Terahertz Band and for the multi-user interference in TS-OOK are developed. The results show that the proposed modulation can support a very large number of nano-devices simultaneously transmitting at multiple Gigabits-per-second and up to Terabits-per-second, depending on the modulation parameters and the network conditions."}
{"_id": "84f5622739103b92288f694a557a907b807086bf", "title": "Flow map layout", "text": "Cartographers have long used flow maps to show the movement of objects from one location to another, such as the number of people in a migration, the amount of goods being traded, or the number of packets in a network. The advantage of flow maps is that they reduce visual clutter by merging edges. Most flow maps are drawn by hand and there are few computer algorithms available. We present a method for generating flow maps using hierarchical clustering given a set of nodes, positions, and flow data between the nodes. Our techniques are inspired by graph layout algorithms that minimize edge crossings and distort node positions while maintaining their relative position to one another. We demonstrate our technique by producing flow maps for network traffic, census data, and trade data."}
{"_id": "a426971fa937bc8d8388e8a657ff891c012a855f", "title": "Deep Learning for Biomedical Information Retrieval: Learning Textual Relevance from Click Logs", "text": "We describe a Deep Learning approach to modeling the relevance of a document\u2019s text to a query, applied to biomedical literature. Instead of mapping each document and query to a common semantic space, we compute a variable-length difference vector between the query and document which is then passed through a deep convolution stage followed by a deep regression network to produce the estimated probability of the document\u2019s relevance to the query. Despite the small amount of training data, this approach produces a more robust predictor than computing similarities between semantic vector representations of the query and document, and also results in significant improvements over traditional IR text factors. In the future, we plan to explore its application in improving PubMed search."}
{"_id": "4bd48f4438ba7bf731e91cb29508a290e938a1d0", "title": "Electrically small omni-directional antenna of circular polarization", "text": "A compact omni-directional antenna of circular polarization (CP) is presented for 2.4 GHz WLAN access-point applications. The antenna consists of four bended monopoles and a feeding network simultaneously exciting these four monopoles. The electrical size of the CP antenna is only \u03bb0/5\u00d7\u03bb0/5\u00d7\u03bb0/13. The impedance bandwidth (|S11|<;-10 dB) is 3.85% (2.392 GHz to 2.486 GHz) and the axial ratio in the azimuth plane is lower than 0.5 dB in the operating band."}
{"_id": "a72f412d336d69a82424cb7d610bb1bb9d81ef7c", "title": "Avoiding monotony: improving the diversity of recommendation lists", "text": "The primary premise upon which top-N recommender systems operate is that similar users are likely to have similar tastes with regard to their product choices. For this reason, recommender algorithms depend deeply on similarity metrics to build the recommendation lists for end-users.\n However, it has been noted that the products offered on recommendation lists are often too similar to each other and attention has been paid towards the goal of improving diversity to avoid monotonous recommendations.\n Noting that the retrieval of a set of items matching a user query is a common problem across many applications of information retrieval, we model the competing goals of maximizing the diversity of the retrieved list while maintaining adequate similarity to the user query as a binary optimization problem. We explore a solution strategy to this optimization problem by relaxing it to a trust-region problem.This leads to a parameterized eigenvalue problem whose solution is finally quantized to the required binary solution. We apply this approach to the top-N prediction problem, evaluate the system performance on the Movielens dataset and compare it with a standard item-based top-N algorithm. A new evaluation metric ItemNovelty is proposed in this work. Improvements on both diversity and accuracy are obtained compared to the benchmark algorithm."}
{"_id": "192a668ef5bcca820fe58ee040bc650edff76625", "title": "Small-signal modeling of pulse-width modulated switched-mode power converters", "text": "A power processing system is required to convert electrical energy from one voltage, current, or frequency to another with, ideally, 100-percent efficiency, together with adjustability of the conversion ratio. The allowable elements are switches, capacitors, and magnetic devices. Essential to the design is a knowledge of the transfer functions, and since the switching converter is nonlinear, a suitable modeling approach is needed. This paper discusses the state-space averaging method, which is an almost ideal compromise between accuracy and simplicity. The treatment here emphasizes the motivation and objectives of modeling by equivalent circuits based upon physical interpretation of state-space averaging, and is limited to pulse-width modulated dc-to-dc converters."}
{"_id": "64670ea3b03e812f895eef6cfe6dc842ec08e514", "title": "Software transformations to improve malware detection", "text": "Malware is code designed for a malicious purpose, such as obtaining root privilege on a host. A malware detector identifies malware and thus prevents it from adversely affecting a host. In order to evade detection, malware writers use various obfuscation techniques to transform their malware. There is strong evidence that commercial malware detectors are susceptible to these evasion tactics. In this paper, we describe the design and implementation of a malware transformer that reverses the obfuscations performed by a malware writer. Our experimental evaluation demonstrates that this malware transformer can drastically improve the detection rates of commercial malware detectors."}
{"_id": "50485a11fc03e14031b08960370358c26553d7e5", "title": "Rational Use of Cognitive Resources: Levels of Analysis Between the Computational and the Algorithmic", "text": "Marr's levels of analysis-computational, algorithmic, and implementation-have served cognitive science well over the last 30\u00a0years. But the recent increase in the popularity of the computational level raises a new challenge: How do we begin to relate models at different levels of analysis? We propose that it is possible to define levels of analysis that lie between the computational and the algorithmic, providing a way to build a bridge between computational- and algorithmic-level models. The key idea is to push the notion of rationality, often used in defining computational-level models, deeper toward the algorithmic level. We offer a simple recipe for reverse-engineering the mind's cognitive strategies by deriving optimal algorithms for a series of increasingly more realistic abstract computational architectures, which we call \"resource-rational analysis.\""}
{"_id": "df77e52aa0cdc772a77a54c3e5cc7a510a2d6e59", "title": "True upper bounds for Bermudan products via non-nested Monte", "text": "We present a generic non-nested Monte Carlo procedure for computing true upper bounds for Bermudan products, given an approximation of the Snell envelope. The pleonastic \u201ctrue\u201d stresses that, by construction, the estimator is biased above the Snell envelope. The key idea is a regression estimator for the Doob martingale part of the approximative Snell envelope, which preserves the martingale property. The so constructed martingale can be employed for computing tight dual upper bounds without nested simulation. In general, this martingale can also be used as a control variate for simulation of conditional expectations. In this context, we develop a variance reduced version of the nested primal-dual estimator (Andersen and Broadie, 2004). Numerical experiments indicate the efficiency of the proposed algorithms."}
{"_id": "402530f548db5f383f8a8282858895931d2a3e11", "title": "Forecasting and Trading Currency Volatility : An Application of Recurrent Neural Regression and Model Combination by", "text": "In this paper, we examine the use of GARCH models, Neural Network Regression (NNR), Recurrent Neural Network (RNN) regression and model combinations for forecasting and trading currency volatility, with an application to the GBP/USD and USD/JPY exchange rates. Both the results of the NNR/RNN models and the model combination results are benchmarked against the simpler GARCH alternative. The idea of developing a nonlinear nonparametric approach to forecast FX volatility, identify mispriced options and subsequently develop a trading strategy based upon this process is intuitively appealing. Using daily data from December 1993 through April 1999, we develop alternative FX volatility forecasting models. These models are then tested out-of-sample over the period April 1999-May 2000, not only in terms of forecasting accuracy, but also in terms of trading efficiency: In order to do so, we apply a realistic volatility trading strategy using FX option straddles once mispriced options have been identified. Allowing for transaction costs, most trading strategies retained produce positive returns. RNN models appear as the best single modelling approach yet, somewhat surprisingly, model combination which has the best overall performance in terms of forecasting accuracy, fails to improve the RNN-based volatility trading results. Another conclusion from our results is that, for the period and currencies considered, the currency option market was inefficient and/or the pricing formulae applied by market participants were inadequate. * Christian Dunis is Girobank Professor of Banking and Finance at Liverpool Business School and Director of CIBEF (E-mail: cdunis@globalnet.co.uk). The opinions expressed herein are not those of Girobank. ** Xuehuan Huang is an Associate Researcher with CIBEF. *** CIBEF \u2013 Centre for International Banking, Economics and Finance, JMU, John Foster Building, 98 Mount Pleasant, Liverpool L3 5UZ. We are grateful to Professor Ken Holden and Professor John Thompson of Liverpool Business School for helpful comments on an earlier version of this paper."}
{"_id": "1e0964d83cc60cc91b840559260f2eeca9a2205b", "title": "Neural systems responding to degrees of uncertainty in human decision-making.", "text": "Much is known about how people make decisions under varying levels of probability (risk). Less is known about the neural basis of decision-making when probabilities are uncertain because of missing information (ambiguity). In decision theory, ambiguity about probabilities should not affect choices. Using functional brain imaging, we show that the level of ambiguity in choices correlates positively with activation in the amygdala and orbitofrontal cortex, and negatively with a striatal system. Moreover, striatal activity correlates positively with expected reward. Neurological subjects with orbitofrontal lesions were insensitive to the level of ambiguity and risk in behavioral choices. These data suggest a general neural circuit responding to degrees of uncertainty, contrary to decision theory."}
{"_id": "65f2bae5564fe8c0449439347357a0cdd84e2e97", "title": "Software Defined Networking for Improved Wireless Sensor Network Management: A Survey", "text": "Wireless sensor networks (WSNs) are becoming increasingly popular with the advent of the Internet of things (IoT). Various real-world applications of WSNs such as in smart grids, smart farming and smart health would require a potential deployment of thousands or maybe hundreds of thousands of sensor nodes/actuators. To ensure proper working order and network efficiency of such a network of sensor nodes, an effective WSN management system has to be integrated. However, the inherent challenges of WSNs such as sensor/actuator heterogeneity, application dependency and resource constraints have led to challenges in implementing effective traditional WSN management. This difficulty in management increases as the WSN becomes larger. Software Defined Networking (SDN) provides a promising solution in flexible management WSNs by allowing the separation of the control logic from the sensor nodes/actuators. The advantage with this SDN-based management in WSNs is that it enables centralized control of the entire WSN making it simpler to deploy network-wide management protocols and applications on demand. This paper highlights some of the recent work on traditional WSN management in brief and reviews SDN-based management techniques for WSNs in greater detail while drawing attention to the advantages that SDN brings to traditional WSN management. This paper also investigates open research challenges in coming up with mechanisms for flexible and easier SDN-based WSN configuration and management."}
{"_id": "6991990042c4ace19fc9aaa551cfe4549ac36799", "title": "1.5MVA grid-connected interleaved inverters using coupled inductors for wind power generation system", "text": "In this paper, grid-connected interleaved voltage source inverters for PMSG wind power generation system with coupled inductors is introduced. In parallel operation, the undesirable circulating current flows between inverters. There are some differences in circulating currents according to source configuration. It is mathematically analyzed and simulated for the verification of analysis. In case of interleaved inverters, the uncontrollable circulating current in switching frequency level flows between inverter modules regardless of source configuration such as common or separated if filter inductance which is placed between inverters is significantly low. In order to suppress high frequency circulating current, the coupled inductors are employed in each phase. A case of 1.5MVA grid-connected interleaved inverters using coupled inductors prototype for the PMSG wind power generation Back To Back converters in parallel are introduced and experimentally verified the proposed topology."}
{"_id": "653587df5ffb93d5df4f571a64997aced4b395fe", "title": "Anthropometric and physiological predispositions for elite soccer.", "text": "This review is focused on anthropometric and physiological characteristics of soccer players with a view to establishing their roles within talent detection, identification and development programmes. Top-class soccer players have to adapt to the physical demands of the game, which are multifactorial. Players may not need to have an extraordinary capacity within any of the areas of physical performance but must possess a reasonably high level within all areas. This explains why there are marked individual differences in anthropometric and physiological characteristics among top players. Various measurements have been used to evaluate specific aspects of the physical performance of both youth and adult soccer players. The positional role of a player is related to his or her physiological capacity. Thus, midfield players and full-backs have the highest maximal oxygen intakes ( > 60 ml x kg(-1) x min(-1)) and perform best in intermittent exercise tests. On the other hand, midfield players tend to have the lowest muscle strength. Although these distinctions are evident in adult and elite youth players, their existence must be interpreted circumspectly in talent identification and development programmes. A range of relevant anthropometric and physiological factors can be considered which are subject to strong genetic influences (e.g. stature and maximal oxygen intake) or are largely environmentally determined and susceptible to training effects. Consequently, fitness profiling can generate a useful database against which talented groups may be compared. No single method allows for a representative assessment of a player's physical capabilities for soccer. We conclude that anthropometric and physiological criteria do have a role as part of a holistic monitoring of talented young players."}
{"_id": "c70e94f55397c49d46d531a6d854eca9b95d09c6", "title": "Security advances and challenges in 4G wireless networks", "text": "This paper presents a study of security advances and challenges associated with emergent 4G wireless technologies. The paper makes a number of contributions to the field. First, it studies the security standards evolution across different generations of wireless standards. Second, the security-related standards, architecture and design for the LTE and WiMAX technologies are analyzed. Third, security issues and vulnerabilities present in the above 4G standards are discussed. Finally, we point to potential areas for future vulnerabilities and evaluate areas in 4G security which warrant attention and future work by the research and advanced technology industry."}
{"_id": "2227d14a01f1521939a479d53c5d0b4dda79c22f", "title": "Force feedback based gripper control on a robotic arm", "text": "Telemanipulation (master-slave operation) was developed in order to replace human being in hazardous environment, with using a slave device (e.g. robot). In this paper many kind of master devices are studied, for the precise control. Remote control needs much information about remote environment for good and safe human-machine interaction. Existing master devices with vision system or audio feedback lacks most of such information, therefor force feedback, called haptics become significant recently. In this paper we propose a new concept of force feedback. This system can overcome the bottlenecks of other feedback system in a user friendly way. Force sensor and laser distance sensor communicates the information from the gripper's status to the teleoperator by using force feedback module on a glove. Pneumatic pressure gives the operator distance information, while a Magnetorheological Fluid (MR-Fluid) based actuator presents the gripper's force. The experiment result shows the possibility of usage of such force feedback glove in combination with a robotic arm."}
{"_id": "82a80da61c0f1a3a33deab676213794ee1f8bd6d", "title": "Learning hierarchical control structures for multiple tasks and changing environments", "text": "While the need for hierarchies within control systems is apparent, it is also clear to many researchers that such hierarchies should be learned. Learning both the structure and the component behaviors is a di cult task. The bene t of learning the hierarchical structures of behaviors is that the decomposition of the control structure into smaller transportable chunks allows previously learned knowledge to be applied to new but related tasks. Presented in this paper are improvements to Nested Q-learning (NQL) that allow more realistic learning of control hierarchies in reinforcement environments. Also presented is a simulation of a simple robot performing a series of related tasks that is used to compare both hierarchical and non-hierarchal learning techniques."}
{"_id": "23dc97e672467abe20a4e9d5c6367a18f887dcd1", "title": "Learning object detection from a small number of examples: the importance of good features", "text": "Face detection systems have recently achieved high detection rates and real-time performance. However, these methods usually rely on a huge training database (around 5,000 positive examples for good performance). While such huge databases may be feasible for building a system that detects a single object, it is obviously problematic for scenarios where multiple objects (or multiple views of a single object) need to be detected. Indeed, even for multi-viewface detection the performance of existing systems is far from satisfactory. In this work we focus on the problem of learning to detect objects from a small training database. We show that performance depends crucially on the features that are used to represent the objects. Specifically, we show that using local edge orientation histograms (EOH) as features can significantly improve performance compared to the standard linear features used in existing systems. For frontal faces, local orientation histograms enable state of the art performance using only a few hundred training examples. For profile view faces, local orientation histograms enable learning a system that seems to outperform the state of the art in real-time systems even with a small number of training examples."}
{"_id": "9afe065b7babf94e7d81a18ad32553306133815d", "title": "Feature Selection Using Adaboost for Face Expression Recognition", "text": "We propose a classification technique for face expression recognition using AdaBoost that learns by selecting the relevant global and local appearance features with the most discriminating information. Selectivity reduces the dimensionality of the feature space that in turn results in significant speed up during online classification. We compare our method with another leading margin-based classifier, the Support Vector Machines (SVM) and identify the advantages of using AdaBoost over SVM in this context. We use histograms of Gabor and Gaussian derivative responses as the appearance features. We apply our approach to the face expression recognition problem where local appearances play an important role. Finally, we show that though SVM performs equally well, AdaBoost feature selection provides a final hypothesis model that can easily be visualized and interpreted, which is lacking in the high dimensional support vectors of the SVM."}
{"_id": "0015fa48e4ab633985df789920ef1e0c75d4b7a8", "title": "Training Support Vector Machines: an Application to Face Detection", "text": "Detection (To appear in the Proceedings of CVPR'97, June 17-19, 1997, Puerto Rico.) Edgar Osunay? Robert Freund? Federico Girosiy yCenter for Biological and Computational Learning and ?Operations Research Center Massachusetts Institute of Technology Cambridge, MA, 02139, U.S.A. Abstract We investigate the application of Support Vector Machines (SVMs) in computer vision. SVM is a learning technique developed by V. Vapnik and his team (AT&T Bell Labs.) that can be seen as a new method for training polynomial, neural network, or Radial Basis Functions classi ers. The decision surfaces are found by solving a linearly constrained quadratic programming problem. This optimization problem is challenging because the quadratic form is completely dense and the memory requirements grow with the square of the number of data points. We present a decomposition algorithm that guarantees global optimality, and can be used to train SVM's over very large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions which are used both to generate improved iterative values, and also establish the stopping criteria for the algorithm. We present experimental results of our implementation of SVM, and demonstrate the feasibility of our approach on a face detection problem that involves a data set of 50,000 data points."}
{"_id": "0224e11e8582dd35b32203e9da064d4a3935a792", "title": "Fast Pose Estimation with Parameter-Sensitive Hashing", "text": "Example-basedmethodsareeffectivefor parameterestimationproblemswhentheunderlyingsystemis simpleor thedimensionalityof the input is low. For complex andhigh-dimensional problemssuch asposeestimation,thenumberof required examplesand the computationalcomplexity rapidly becmeprohibitivelyhigh. We introducea new algorithm that learnsa setof hashingfunctionsthat efficiently index examplesrelevant to a particular estimationtask. Our algorithm extendsa recentlydevelopedmethodfor locality-sensitivehashing, which findsapproximateneighborsin timesublinearin thenumber of examples.Thismethoddependscritically on thechoiceof hashfunctions;weshowhowto find thesetof hashfunctions thatare optimallyrelevantto a particular estimationproblem.Experimentsdemonstr atethat theresultingalgorithm,which wecall Parameter -SensitiveHashing, canrapidlyandaccuratelyestimatethearticulatedposeof humanfiguresfroma large databaseof exampleimages. 0Part of thiswork wasdonewhenG.S.andP.V. werewith MitsubishiElectricResearchLabs,Cambridge,MA."}
{"_id": "c0ee9a28a84bb7eb9ac67ded94112298c4ff2be6", "title": "The effects of morphology and fitness on catastrophic interference", "text": "Catastrophic interference occurs when an agent improves in one training instance but becomes worse in other instances. Many methods intended to combat interference have been reported in the literature that modulate properties of a neural controller, such as synaptic plasticity or modularity. Here, we demonstrate that adjustments to the body of the agent, or the way its performance is measured, can also reduce catastrophic interference without requiring changes to the controller. Additionally, we introduce new metrics to quantify catastrophic interference. We do not show that our approach outperforms others on benchmark tests. Instead, by more precisely measuring interactions between morphology, fitness, and interference, we demonstrate that embodiment is an important aspect of this problem. Furthermore, considerations into morphology and fitness can combine with, rather than compete with, existing methods for combating catastrophic interference."}
{"_id": "6e42274914360a6cf89618a28eba8086fe04fe6f", "title": "Review of Dercum\u2019s disease and proposal of diagnostic criteria, diagnostic methods, classification and management", "text": "UNLABELLED\nDEFINITION AND CLINICAL PICTURE: We propose the minimal definition of Dercum's disease to be generalised overweight or obesity in combination with painful adipose tissue. The associated symptoms in Dercum's disease include fatty deposits, easy bruisability, sleep disturbances, impaired memory, depression, difficulty concentrating, anxiety, rapid heartbeat, shortness of breath, diabetes, bloating, constipation, fatigue, weakness and joint aches.\n\n\nCLASSIFICATION\nWe suggest that Dercum's disease is classified into: I. Generalised diffuse form A form with diffusely widespread painful adipose tissue without clear lipomas, II. Generalised nodular form - a form with general pain in adipose tissue and intense pain in and around multiple lipomas, and III. Localised nodular form - a form with pain in and around multiple lipomas IV. Juxtaarticular form - a form with solitary deposits of excess fat for example at the medial aspect of the knee.\n\n\nEPIDEMIOLOGY\nDercum's disease most commonly appears between the ages of 35 and 50 years and is five to thirty times more common in women than in men. The prevalence of Dercum's disease has not yet been exactly established.\n\n\nAETIOLOGY\nProposed, but unconfirmed aetiologies include: nervous system dysfunction, mechanical pressure on nerves, adipose tissue dysfunction and trauma. DIAGNOSIS AND DIAGNOSTIC METHODS: Diagnosis is based on clinical criteria and should be made by systematic physical examination and thorough exclusion of differential diagnoses. Advisably, the diagnosis should be made by a physician with a broad experience of patients with painful conditions and knowledge of family medicine, internal medicine or pain management. The diagnosis should only be made when the differential diagnoses have been excluded.\n\n\nDIFFERENTIAL DIAGNOSIS\nDifferential diagnoses include: fibromyalgia, lipoedema, panniculitis, endocrine disorders, primary psychiatric disorders, multiple symmetric lipomatosis, familial multiple lipomatosis, and adipose tissue tumours. GENETIC COUNSELLING: The majority of the cases of Dercum's disease occur sporadically. A to G mutation at position A8344 of mitochondrial DNA cannot be detected in patients with Dercum's disease. HLA (human leukocyte antigen) typing has not revealed any correlation between typical antigens and the presence of the condition. MANAGEMENT AND TREATMENT: The following treatments have lead to some pain reduction in patients with Dercum's disease: Liposuction, analgesics, lidocaine, methotrexate and infliximab, interferon \u03b1-2b, corticosteroids, calcium-channel modulators and rapid cycling hypobaric pressure. As none of the treatments have led to long lasting complete pain reduction and revolutionary results, we propose that Dercum's disease should be treated in multidisciplinary teams specialised in chronic pain.\n\n\nPROGNOSIS\nThe pain in Dercum's disease seems to be relatively constant over time."}
{"_id": "bff3d3aa6ee2e311c4363f52b23496d75d48a678", "title": "Mapping the game landscape: Locating genres using functional classification", "text": "Are typical computer game genres still valid descriptors and useful for describing game structure and game content? Games have changed from simple to complex and from single function to multi function. By identifying structural differences in game elements we develop a more nuanced model to categorized games and use cluster analysis as a descriptive tool in order to do so. The cluster analysis of 75 functionally different games shows that the two perspectives (omnipresent and vagrant), as well as challenges, mutability and savability are important functional categories to use in order to describe games. Author"}
{"_id": "1fbb5c3abd953e3e2280b4ff86cadd8ffc4144ab", "title": "Reasoning for Semantic Error Detection in Text", "text": "Identifying incorrect content (i.e., semantic error) in text is a difficult task because of the ambiguous nature of written natural language and themany factors that canmake a statement semantically erroneous. Current methods identify semantic errors in a sentence by determining whether it contradicts the domain to which the sentence belongs. However, because thesemethods are constructedon expected logic contradictions, they cannot handle new or unexpected semantic errors. In this paper, we propose a new method for detecting semantic errors that is based on logic reasoning. Our proposed method converts text into logic clauses, which are later analyzed against a domain ontology by an automatic reasoner to determine its consistency. This approach can provide a complete analysis of the text, since it can analyze a single sentence or sets of multiple sentences. When there are multiple sentences to analyze, in order to avoid the high complexity of reasoning over a large set of logic clauses, we propose rules that reduce the set of sentences to analyze, based on the logic relationships between sentences. In our evaluation, we have found that our proposed method can identify a significant percentage of semantic errors and, in the case of multiple sentences, it does so without significant computational cost. We have also found that both the quality of the information extraction output and modeling elements B Dejing Dou dou@cs.uoregon.edu Fernando Gutierrez fernando@cs.uoregon.edu Nisansa de Silva nisansa@cs.uoregon.edu Stephen Fickas fickas@cs.uoregon.edu 1 Department of Computer and Information Science, 1202 University of Oregon, Eugene, OR 97403, USA of the ontology (i.e., property domain and range) affect the capability of detecting errors."}
{"_id": "070705fe7fd1a7f48498783ee2ac80963a045450", "title": "Panda: Public Auditing for Shared Data with Efficient User Revocation in the Cloud", "text": "With data storage and sharing services in the cloud, users can easily modify and share data as a group. To ensure shared data integrity can be verified publicly, users in the group need to compute signatures on all the blocks in shared data. Different blocks in shared data are generally signed by different users due to data modifications performed by different users. For security reasons, once a user is revoked from the group, the blocks which were previously signed by this revoked user must be re-signed by an existing user. The straightforward method, which allows an existing user to download the corresponding part of shared data and re-sign it during user revocation, is inefficient due to the large size of shared data in the cloud. In this paper, we propose a novel public auditing mechanism for the integrity of shared data with efficient user revocation in mind. By utilizing the idea of proxy re-signatures, we allow the cloud to re-sign blocks on behalf of existing users during user revocation, so that existing users do not need to download and re-sign blocks by themselves. In addition, a public verifier is always able to audit the integrity of shared data without retrieving the entire data from the cloud, even if some part of shared data has been re-signed by the cloud. Moreover, our mechanism is able to support batch auditing by verifying multiple auditing tasks simultaneously. Experimental results show that our mechanism can significantly improve the efficiency of user revocation."}
{"_id": "50bdbae105d717821dbcf60a48b0891d96e32990", "title": "100kHz IGBT inverter use of LCL topology for high power induction heating", "text": "A power electronic inverter is developed for a high frequency induction heating application. This equipment is available for the implementation of ZVS LCL resonance method was used. Using IGBT 20MVA, 100kHz LCL resonant induction heating equipment during the development process, 100kHz IGBT module development process and understanding gained from the design of LCL resonant Zero Crossing through the PLL control over operation of the IGBT Soft Switching, and focused on minimizing losses. In addition, through the actual design and implementation of devices using the area to test drive the development of large inverter technology based on the test"}
{"_id": "288eb3c71c76ef2a820c0c2e372436ce078224d0", "title": "P-MOD: Secure Privilege-Based Multilevel Organizational Data-Sharing in Cloud Computing", "text": "Cloud computing has changed the way enterprises store, access and share data. Data is constantly being uploaded to the cloud and shared within an organization built on a hierarchy of many different individuals that are given certain data access privileges. With more data storage needs turning over to the cloud, finding a secure and efficient data access structure has become a major research issue. With different access privileges, individuals with more privileges (at higher levels of the hierarchy) are granted access to more sensitive data than those with fewer privileges (at lower levels of the hierarchy). In this paper, a Privilege-based Multilevel Organizational Data-sharing scheme (P-MOD) is proposed that incorporates a privilege-based access structure into an attribute-based encryption mechanism to handle these concerns. Each level of the privilege-based access structure is affiliated with an access policy that is uniquely defined by specific attributes. Data is then encrypted under each access policy at every level to grant access to specific data users based on their data access privileges. An individual ranked at a certain level can decrypt the ciphertext (at that specific level) if and only if that individual owns a correct set of attributes that can satisfy the access policy of that level. The user may also decrypt the ciphertexts at the lower levels with respect to the user\u2019s level. Security analysis shows that P-MOD is secure against adaptively chosen plaintext attack assuming the DBDH assumption holds. The comprehensive performance analysis demonstrates that PMOD is more efficient in computational complexity and storage space than the existing schemes in secure data sharing within an organization."}
{"_id": "f28fc9551aed8905844feae98b4697301b9e983b", "title": "Generating ROC curves for artificial neural networks", "text": "Receiver operating characteristic (ROC) analysis is an established method of measuring diagnostic performance in medical imaging studies. Traditionally, artificial neural networks (ANN's) have been applied as a classifier to find one \"best\" detection rate. Recently researchers have begun to report ROC curve results for ANN classifiers. The current standard method of generating ROC curves for an ANN is to vary the output node threshold for classification. Here, the authors propose a different technique for generating ROC curves for a two class ANN classifier. They show that this new technique generates better ROC curves in the sense of having greater area under the ROC curve (AUC), and in the sense of being composed of a better distribution of operating points."}
{"_id": "7197ccb285c37c175fc95de58656fd2bf912adbc", "title": "Socio-emotional development: from infancy to young adulthood.", "text": "Results from the Uppsala Longitudinal Study (ULS), which started in 1985, are reported in two sections. The first section gives a summary of longitudinal data from infancy to middle childhood (age 9 years; n = 96) concerning predictions of social functioning aspects from the theoretical perspectives of temperament, attachment, and health psychology (social factors). The second section presents the first results emanating from a follow-up when participants were 21 years old (n = 85). The developmental roots of social anxiety symptoms were studied from the same perspectives as above, although with a special focus on the predictive power of the temperament trait of shyness/inhibition. Results for middle childhood outcomes showed that temperament characteristics were relevant for most outcomes, whereas the contribution of attachment was most convincingly shown in relation to social competence and personality. Social factors were found to have moderating functions, but direct effects were also shown, the most interesting perhaps being positive effects of non-parental day care. Results from the 21-year data confirmed the expected predictive relation from shyness/inhibition to symptoms of social anxiety and further showed this relation to be specific; the relation to symptoms of depression did not survive control for social anxiety, although the opposite was true. The broad analysis of predictor associations with social anxiety, showing the relevance of other temperament factors as well as interactive effects, again attested to the need for multi-faceted models to analyze developmental trajectories."}
{"_id": "34dda94f63ddb27a6e2525efcd2e260670f14668", "title": "Contingency planning over probabilistic hybrid obstacle predictions for autonomous road vehicles", "text": "This paper presents a novel optimization based path planner that can simultaneously plan multiple contingency paths to account for the uncertain actions of dynamic obstacles. This planner addresses the particular problem of collision avoidance for autonomous road vehicles which are required to safely interact with other vehicles with unknown intentions. The presented path planner utilizes an efficient spline based trajectory representation and fast but accurate collision probability approximations to enable the simultaneous optimization of multiple contingency paths."}
{"_id": "66c2225def7c1a5ace4c58883169aaaa73955f4c", "title": "Motion Detection and Face Recognition for CCTV Surveillance System", "text": "Closed Circuit Television (CCTV) is currently used in daily basis for a wide variety of purposes. The development of CCTV has transformed from a simple passive surveillance into an integrated intelligent control system. In this research, motion detection and facial recognition in CCTV video is used as the basis of decision making to produce automatic, effective and efficient integrated system. This CCTV video process provides three outputs, motion detection information, face detection information and face identification information. Accumulative Differences Images (ADI) method is used for motion detection, and Haar Classifiers Cascade method is used for face detection. Feature extraction is done with Speeded-Up Robust Features (SURF) and Principal Component Analysis (PCA). Then, these features are trained by Counter-Propagation Network (CPN). Offline tests are performed on 45 CCTV video. The result shows 92.655% success rate on motion detection,76% success rate on face detection, and 60% success rate on face detection. It shows that this faces detection and identification through CCTV video have not been able to obtain optimal results. The motion detection process is ideal to be applied in real-time conditions. Yet if it\u2019s combined with face recognition process, it causes a significant time delay. Keywords\u2014 ADI, Haar Cascade Classifiers, SURF, PCA, CPN \uf06e ISSN (print): 1978-1520, ISSN (online): 2460-7258 IJCCS Vol. 12, No. 2, July 2018 : 107 \u2013 118 108"}
{"_id": "a26d65817df33f8fee7c39a26a70c0e8a41b2e3e", "title": "Deep neural networks with Elastic Rectified Linear Units for object recognition", "text": "Rectified Linear Unit (ReLU) is crucial to the recent success of deep neural networks (DNNs). In this paper, we propose a novel Elastic Rectified Linear Unit (EReLU) that focuses on processing the positive part of input. Unlike previous variants of ReLU that typically adopt linear or piecewise linear functions to represent the positive part, EReLU is characterized by that each positive value scales within a moderate range like a spring during training stage. On test time, EReLU becomes standard ReLU. EReLU improves model fitting with no extra parameters and little overfitting risk. Furthermore, we propose Elastic Parametric Rectified Linear Unit (EPReLU) by taking advantage of EReLU and parametric ReLU (PReLU). EPReLU \u2217Corresponding author Email addresses: jiangxiaoheng@tju.edu.cn (Xiaoheng Jiang), pyw@tju.edu.cn (Yanwei Pang ), xuelong_li@opt.ac.cn (Xuelong Li), jingpan23@gmail.com (Jing Pan), xieyinghong@163.com (Yinghong Xie) Preprint submitted to Neurocomputing September 3, 2017"}
{"_id": "ca74a59166af72a14af031504e31d86c7953dc91", "title": "The exact bound in the Erd\u00f6s - Ko - Rado theorem", "text": ""}
{"_id": "8577f8716ef27982a47963c498bd12f891c70702", "title": "Interaction Proxies for Runtime Repair and Enhancement of Mobile Application Accessibility", "text": "We introduce interaction proxies as a strategy for runtime repair and enhancement of the accessibility of mobile applications. Conceptually, interaction proxies are inserted between an application's original interface and the manifest interface that a person uses to perceive and manipulate the application. This strategy allows third-party developers and researchers to modify an interaction without an application's source code, without rooting the phone, without otherwise modifying an application, while retaining all capabilities of the system (e.g., Android's full implementation of the TalkBack screen reader). This paper introduces interaction proxies, defines a design space of interaction re-mappings, identifies necessary implementation abstractions, presents details of implementing those abstractions in Android, and demonstrates a set of Android implementations of interaction proxies from throughout our design space. We then present a set of interviews with blind and low-vision people interacting with our prototype interaction proxies, using these interviews to explore the seamlessness of interaction, the perceived usefulness and potential of interaction proxies, and visions of how such enhancements could gain broad usage. By allowing third-party developers and researchers to improve an interaction, interaction proxies offer a new approach to personalizing mobile application accessibility and a new approach to catalyzing development, deployment, and evaluation of mobile accessibility enhancements."}
{"_id": "1e978d2b07a1e9866ea079c06e27c9565e9051b8", "title": "The Shaping of Information by Visual Metaphors", "text": "The nature of an information visualization can be considered to lie in the visual metaphors it uses to structure information. The process of understanding a visualization therefore involves an interaction between these external visual metaphors and the user's internal knowledge representations. To investigate this claim, we conducted an experiment to test the effects of visual metaphor and verbal metaphor on the understanding of tree visualizations. Participants answered simple data comprehension questions while viewing either a treemap or a node-link diagram. Questions were worded to reflect a verbal metaphor that was either compatible or incompatible with the visualization a participant was using. The results suggest that the visual metaphor indeed affects how a user derives information from a visualization. Additionally, we found that the degree to which a user is affected by the metaphor is strongly correlated with the user's ability to answer task questions correctly. These findings are a first step towards illuminating how visual metaphors shape user understanding, and have significant implications for the evaluation, application, and theory of visualization."}
{"_id": "e42485ca1ba3f9c5f02e41d1bcb459ed94f91975", "title": "Rehabilitation robots assisting in walking training for SCI patient", "text": "We have been developing a series of robots to apply for each step of spinal cord injury (SCI) recovery. We describe the preliminary walking pattern assisting robot and the practical walking assisting robot, which will be applied to the incomplete type of SCI to Quadriplegia. The preliminary experimental results are performed by normal subjects to verify the basic functions of the robot assisting and to prepare the test by SCI patients."}
{"_id": "f180d4f9e9d7606aed2dd161cc8388cb5a553fed", "title": "In good company: how social capital makes organizations work [Book Review]", "text": "Don Cohen and Laurence Prusak define \"social capital\" in terms of connections among people. It emerges, they write, from \"trust, mutual understanding, and shared values and behaviors that bind the members of human networks and communities and make more cooperative action possible.\" Continuing a theme that Prusak, executive director of the IBM Institute for Knowledge Management, has been developing, they offer examples of how social capital is created and destroyed and the role it plays in collective human endeavors, particularly in modern business. The authors suggest that this understanding is essential for developing models that respect and nourish human relationships. Seekers of a rigorous, quantified theoretical framework won't find it here. Instead, the book explores the role of trust, community, connectedness and loyalty in making companies work today.--Alan S. Kay"}
{"_id": "52cf369ecd012dd4c9b3b49753b9141e1098ecf2", "title": "Harmony: A Hand Wash Monitoring and Reminder System using Smart Watches", "text": "Hand hygiene compliance is extremely important in hospitals, clinics and food businesses. Caregivers\u2019 compliance with hand hygiene is one of the most effective tools in preventing healthcare associated infections (HAIs) in hospitals and clinics. In food businesses, hand hygiene compliance is essential to prevent food contamination, and thus food borne illness. Washing hands properly is the cornerstone of hand hygiene. However, the hand wash compliance rate by the workers (care givers, waiters, chefs, food processors and so on) is not up to the mark. Monitoring hand wash compliance along with a reminder system increases the compliance rate significantly. Quality of a hand wash is also important which can be achieved by washing hands in accordance with standard guidelines. In this paper, we present Harmony, a hand wash monitoring and reminder system that monitors hand wash events and their quality, provides real time feedback, reminds the person of interest when he/she is required to wash hands, and stores related data in a server for further use. Worker worn smart watches are the key components of Harmony that can differentiate hand wash gestures from other gestures with an average accuracy of about 88%. Harmony is robust, scalable, and easy to install, and it overcomes most of the problems of existing related systems."}
{"_id": "a5baf22b35c5ac2315819ce64be63d4e9bc05f27", "title": "Subjective Reality and Strong Artificial Intelligence", "text": "The main prospective aim of modern research related to Artificial Intelligence is the creation of technical systems that implement the idea of Strong Intelligence. According our point of view the path to the development of such systems comes through the research in the field related to perceptions. Here we formulate the model of the perception of external world which may be used for the description of perceptual activity of intelligent beings. We consider a number of issues related to the development of the set of patterns which will be used by the intelligent system when interacting with environment. The key idea of the presented perception model is the idea of subjective reality. The principle of the relativity of perceived world is formulated. It is shown that this principle is the immediate consequence of the idea of subjective reality. In this paper we show how the methodology of subjective reality may be used for the creation of different types of Strong AI systems."}
{"_id": "3ea09660687628d3afb503bccc62126bb6304d18", "title": "A systematic review and comparative analysis of cross-document coreference resolution methods and tools", "text": "Information extraction (IE) is the task of automatically extracting structured information from unstructured/semi-structured machine-readable documents. Among various IE tasks, extracting actionable intelligence from an ever-increasing amount of data depends critically upon cross-document coreference resolution (CDCR) - the task of identifying entity mentions across information sources that refer to the same underlying entity. CDCR is the basis of knowledge acquisition and is at the heart of Web search, recommendations, and analytics. Real time processing of CDCR processes is very important and have various applications in discovering must-know information in real-time for clients in finance, public sector, news, and crisis management. Being an emerging area of research and practice, the reported literature on CDCR challenges and solutions is growing fast but is scattered due to the large space, various applications, and large datasets of the order of peta-/tera-bytes. In order to fill this gap, we provide a systematic review of the state of the art of challenges and solutions for a CDCR process. We identify a set of quality attributes, that have been frequently reported in the context of CDCR processes, to be used as a guide to identify important and outstanding issues for further investigations. Finally, we assess existing tools and techniques for CDCR subtasks and provide guidance on selection of tools and algorithms."}
{"_id": "2b0c5d98a2a93e49411a14d2b89f57a14f253514", "title": "High throughput heavy hitter aggregation for modern SIMD processors", "text": "Heavy hitters are data items that occur at high frequency in a data set. They are among the most important items for an organization to summarize and understand during analytical processing. In data sets with sufficient skew, the number of heavy hitters can be relatively small. We take advantage of this small footprint to compute aggregate functions for the heavy hitters in fast cache memory in a single pass.\n We design cache-resident, shared-nothing structures that hold only the most frequent elements. Our algorithm works in three phases. It first samples and picks heavy hitter candidates. It then builds a hash table and computes the exact aggregates of these elements. Finally, a validation step identifies the true heavy hitters from among the candidates.\n We identify trade-offs between the hash table configuration and performance. Configurations consist of the probing algorithm and the table capacity that determines how many candidates can be aggregated. The probing algorithm can be perfect hashing, cuckoo hashing and bucketized hashing to explore trade-offs between size and speed.\n We optimize performance by the use of SIMD instructions, utilized in novel ways beyond single vectorized operations, to minimize cache accesses and the instruction footprint."}
{"_id": "617af330639f56dd2b3472b8d5fbda239506610f", "title": "Analyzing Uber's Ride-sharing Economy", "text": "Uber is a popular ride-sharing application that matches people who need a ride (or riders) with drivers who are willing to provide it using their personal vehicles. Despite its growing popularity, there exist few studies that examine large-scale Uber data, or in general the factors affecting user participation in the sharing economy. We address this gap through a study of the Uber market that analyzes large-scale data covering 59 million rides which spans a period of 7 months. The data were extracted from email receipts sent by Uber collected on Yahoo servers, allowing us to examine the role of demographics (e.g., age and gender) on participation in the ride-sharing economy. In addition, we evaluate the impact of dynamic pricing (i.e., surge pricing) and income on both rider and driver behavior. We find that the surge pricing does not bias Uber use towards higher income riders. Moreover, we show that more homophilous matches (e.g., riders to drivers of a similar age) can result in higher driver ratings. Finally, we focus on factors that affect retention and use information from earlier rides to accurately predict which riders or drivers will become active Uber users."}
{"_id": "03695a9a42b0f64b8a13b4ddd3bfde076e9f604a", "title": "Operating System Profiling via Latency Analysis", "text": "Operating systems are complex and their behavior depends on many factors. Source code, if available, does not directly help one to understand the OS's behavior, as the behavior depends on actual workloads and external inputs. Runtime profiling is a key technique to prove new concepts, debug problems, and optimize performance. Unfortunately, existing profiling methods are lacking in important areas---they do not provide enough information about the OS's behavior, they require OS modification and therefore are not portable, or they incur high overheads thus perturbing the profiled OS.\n We developed OSprof: a versatile, portable, and efficient OS profiling method based on latency distributions analysis. OSprof automatically selects important profiles for subsequent visual analysis. We have demonstrated that a suitable workload can be used to profile virtually any OS component. OSprof is portable because it can intercept operations and measure OS behavior from user-level or from inside the kernel without requiring source code. OSprof has typical CPU time overheads below 4%. In this paper we describe our techniques and demonstrate their usefulness through a series of profiles conducted on Linux, FreeBSD, and Windows, including client/server scenarios. We discovered and investigated a number of interesting interactions, including scheduler behavior, multi-modal I/O distributions, and a previously unknown lock contention, which we fixed."}
{"_id": "d385f52c9ffbf4bb1d689406cebc5075f5ad4d6a", "title": "Affective Computing and Sentiment Analysis", "text": "Understanding emotions is an important aspect of personal development and growth, and as such it is a key tile for the emulation of human intelligence. Besides being important for the advancement of AI, emotion processing is also important for the closely related task of polarity detection. The opportunity to automatically capture the general public's sentiments about social events, political movements, marketing campaigns, and product preferences has raised interest in both the scientific community, for the exciting open challenges, and the business world, for the remarkable fallouts in marketing and financial market prediction. This has led to the emerging fields of affective computing and sentiment analysis, which leverage human-computer interaction, information retrieval, and multimodal signal processing for distilling people's sentiments from the ever-growing amount of online social data."}
{"_id": "20b877e96201a08332b5dcd4e73a1a30c9ac5a9e", "title": "MapReduce : Distributed Computing for Machine Learning", "text": "We use Hadoop, an open-source implementation of Google\u2019s distributed file system and the MapReduce framework for distributed data processing, on modestly-sized compute clusters to evaluate its efficacy for standard machine learning tasks. We show benchmark performance on searching and sorting tasks to investigate the effects of various system configurations. We also distinguish classes of machine-learning problems that are reasonable to address within the MapReduce framework, and offer improvements to the Hadoop implementation. We conclude that MapReduce is a good choice for basic operations on large datasets, although there are complications to be addressed for more complex machine learning tasks."}
{"_id": "f09a1acd38a781a3ab6cb31c8e4f9f29be5f58cb", "title": "Large-scale Artificial Neural Network: MapReduce-based Deep Learning", "text": "Faced with continuously increasing scale of data, original back-propagation neural network based machine learning algorithm presents two non-trivial challenges: huge amount of data makes it difficult to maintain both efficiency and accuracy; redundant data aggravates the system workload. This project is mainly focused on the solution to the issues above, combining deep learning algorithm with cloud computing platform to deal with large-scale data. A MapReduce-based handwriting character recognizer will be designed in this project to verify the efficiency improvement this mechanism will achieve on training and practical large-scale data. Careful discussion and experiment will be developed to illustrate how deep learning algorithm works to train handwritten digits data, how MapReduce is implemented on deep learning neural network, and why this combination accelerates computation. Besides performance, the scalability and robustness will be mentioned in this report as well. Our system comes with two demonstration software that visually illustrates our handwritten digit recognition/encoding application. 1"}
{"_id": "0122e063ca5f0f9fb9d144d44d41421503252010", "title": "Large Scale Distributed Deep Networks", "text": "Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with billions of parameters using tens of thousands of CPU cores. We have developed a software framework called DistBelief that can utilize computing clusters with thousands of machines to train large models. Within this framework, we have developed two algorithms for large-scale distributed training: (i) Downpour SGD, an asynchronous stochastic gradient descent procedure supporting a large number of model replicas, and (ii) Sandblaster, a framework that supports a variety of distributed batch optimization procedures, including a distributed implementation of L-BFGS. Downpour SGD and Sandblaster L-BFGS both increase the scale and speed of deep network training. We have successfully used our system to train a deep network 30x larger than previously reported in the literature, and achieves state-of-the-art performance on ImageNet, a visual object recognition task with 16 million images and 21k categories. We show that these same techniques dramatically accelerate the training of a more modestlysized deep network for a commercial speech recognition service. Although we focus on and report performance of these methods as applied to training large neural networks, the underlying algorithms are applicable to any gradient-based machine learning algorithm."}
{"_id": "ea1a7c6ba9357ba597cd182ec70e1f03dca485f7", "title": "A novel modular compliant knee joint actuator for use in assistive and rehabilitation orthoses", "text": "Despite significant advancements in the field of wearable robots (WRs), commercial WRs still use traditional direct-drive actuation units to power their joints. On the other hand, in research prototypes compliant actuators are increasingly being used to more adequately address the issues of safety, robustness, control and overall system efficiency. The advantages of mechanical compliance are exploited in a novel modular actuator prototype designed for the knee joint. Due to its modularity, the actuator can be implemented in a knee joint of a standalone or a multi-joint lower-limbs orthosis, for use in gait rehabilitation and/or walking assistance. Differently from any other actuator used in orthotic research prototypes, it combines a Variable Stiffness Actuator (VSA) and a Parallel Elasticity Actuation (PEA) unit in a single modular system. Although independent, the units are designed to work together in order to fully mimic dynamic behavior of the human knee joint. In this paper, design aspects and functional evaluation of the new actuator are presented and a rationale for such a design in biomechanics of the human knee joint is given. The VSA subsystem is characterized in a quasi-static benchmarking environment and the results showing main performance indicators are presented."}
{"_id": "14671928b6bd0c6dbb1145b92d1f3e35513452a8", "title": "How similar are fluid cognition and general intelligence? A developmental neuroscience perspective on fluid cognition as an aspect of human cognitive ability.", "text": "This target article considers the relation of fluid cognitive functioning to general intelligence. A neurobiological model differentiating working memory/executive function cognitive processes of the prefrontal cortex from aspects of psychometrically defined general intelligence is presented. Work examining the rise in mean intelligence-test performance between normative cohorts, the neuropsychology and neuroscience of cognitive function in typically and atypically developing human populations, and stress, brain development, and corticolimbic connectivity in human and nonhuman animal models is reviewed and found to provide evidence of mechanisms through which early experience affects the development of an aspect of cognition closely related to, but distinct from, general intelligence. Particular emphasis is placed on the role of emotion in fluid cognition and on research indicating fluid cognitive deficits associated with early hippocampal pathology and with dysregulation of the hypothalamic-pituitary-adrenal axis stress-response system. Findings are seen to be consistent with the idea of an independent fluid cognitive construct and to assist with the interpretation of findings from the study of early compensatory education for children facing psychosocial adversity and from behavior genetic research on intelligence. It is concluded that ongoing development of neurobiologically grounded measures of fluid cognitive skills appropriate for young children will play a key role in understanding early mental development and the adaptive success to which it is related, particularly for young children facing social and economic disadvantage. Specifically, in the evaluation of the efficacy of compensatory education efforts such as Head Start and the readiness for school of children from diverse backgrounds, it is important to distinguish fluid cognition from psychometrically defined general intelligence."}
{"_id": "4b38d1951822e71dafac01cba257fd49e1c3b033", "title": "Solution of the Generalized Noah's Ark Problem.", "text": "The phylogenetic diversity (PD) of a set of species is a measure of the evolutionary distance among the species in the collection, based on a phylogenetic tree. Such a tree is composed of a root, internal nodes, and leaves that correspond to the set of taxa under study. With each edge of the tree is associated a non-negative branch length (evolutionary distance). If a particular survival probability is associated with each taxon, the PD measure becomes the expected PD measure. In the Noah's Ark Problem (NAP) introduced by Weitzman (1998), these survival probabilities can be increased at some cost. The problem is to determine how best to allocate a limited amount of resources to maximize the expected PD of the considered species. It is easy to formulate the NAP as a (difficult) nonlinear 0-1 programming problem. The aim of this article is to show that a general version of the NAP (GNAP) can be solved simply and efficiently with any set of edge weights and any set of survival probabilities by using standard mixed-integer linear programming software. The crucial point to move from a nonlinear program in binary variables to a mixed-integer linear program, is to approximate the logarithmic function by the lower envelope of a set of tangents to the curve. Solving the obtained mixed-integer linear program provides not only a near-optimal solution but also an upper bound on the value of the optimal solution. We also applied this approach to a generalization of the nature reserve problem (GNRP) that consists of selecting a set of regions to be conserved so that the expected PD of the set of species present in these regions is maximized. In this case, the survival probabilities of different taxa are not independent of each other. Computational results are presented to illustrate potentialities of the approach. Near-optimal solutions with hypothetical phylogenetic trees comprising about 4000 taxa are obtained in a few seconds or minutes of computing time for the GNAP, and in about 30 min for the GNRP. In all the cases the average guarantee varies from 0% to 1.20%."}
{"_id": "18e04dbd350ba6dcb5ed10a76476f929b6b5598e", "title": "The Design and Implementation of a Log-Structured File System", "text": "This paper presents a new technique for disk storage management called a log-structured file system. A log-structured file system writes all modifications to disk sequentially in a log-like structure, thereby speeding up both file writing and crash recovery. The log is the only structure on disk; it contains indexing information so that files can be read back from the log efficiently. In order to maintain large free areas on disk for fast writing, we divide the log into segments and use a segment cleaner to compress the live information from heavily fragmented segments. We present a series of simulations that demonstrate the efficiency of a simple cleaning policy based on cost and benefit. We have implemented a prototype log-structured file system called Sprite LFS; it outperforms current Unix file systems by an order of magnitude for small-file writes while matching or exceeding Unix performance for reads and large writes. Even when the overhead for cleaning is included, Sprite LFS can use 70% of the disk bandwidth for writing, whereas Unix file systems typically can use only 5--10%."}
{"_id": "2512284d4cb66ee63a3f349abfab75476130b28e", "title": "Robot team learning enhancement using Human Advice", "text": "The paper discusses the augmentation of the Concurrent Individual and Social Learning (CISL) mechanism with a new Human Advice Layer (HAL). The new layer is characterized by a Gaussian Mixture Model (GMM), which is trained on human experience data. The CISL mechanism consists of the Individual Performance and Task Allocation Markov Decision Processes (MDP), and the HAL can provide preferred action selection policies to the individual agents. The data utilized for training the GMM is collected using a heterogeneous team foraging simulation. When leveraging human experience in the multi-agent learning process, the team performance is enhanced significantly."}
{"_id": "79df628dc7b292d6d6387365f074253fbbfbe826", "title": "UI on the Fly: Generating a Multimodal User Interface", "text": "UI on the Fly is a system that dynamically presents coordinated multimodal content through natural language and a small-screen graphical user interface. It adapts to the user\u2019s preferences and situation. Multimodal Functional Unification Grammar (MUG) is a unification-based formalism that uses rules to generate content that is coordinated across several communication modes. Faithful variants are scored with a heuristic function."}
{"_id": "b15b3e0fb6ac1cfb16df323c14ae0bbbf3779c1b", "title": "NFC antenna research and a simple impedance matching method", "text": "Near Field Communication (NFC) technology evolved from a non-contact radio frequency identification (RFID) technology and intercommunication technology. Sensor reader, contactless card and peer to peer features integrate to a single chip. This chip could communicate with compatible devices by the identification and data exchange at a short distance. The technology was a simple merger of RFID technology and network technology at first, but it has evolved into a short-range wireless communications technology, which develops very quickly. This paper introduces the concept and characteristics of NFC technology first. Then, starting from the antenna coil used by NFC technology, this paper accomplished the theoretical analysis, Matlab simulation, HFSS and ADS2009 joint simulation respectively, and improved design an antenna impedance matching approach."}
{"_id": "0200f2b70d414bcc13debe90e4325817b9726d5a", "title": "No Unbiased Estimator of the Variance of K-Fold Cross-Validation", "text": "Most machine learning researchers perform quantitative experiments to estimate generalization error and compare algorithm performances. In order to draw statistically convincing conclusions, it is important to estimate the uncertainty of such estimates. This paper studies the estimation of uncertainty around the K-fold cross-validation estimator. The main theorem shows that there exists no universal unbiased estimator of the variance of K-fold cross-validation. An analysis based on the eigendecomposition of the covariance matrix of errors helps to better understand the nature of the problem and shows that naive estimators may grossly underestimate variance, as con\u00a3rmed by numerical experiments."}
{"_id": "897eacd4b98c8862dbf166e1d677a1d0d6015297", "title": "Analysis of SDN Contributions for Cloud Computing Security", "text": "Cloud infrastructures are composed fundamentally of computing, storage, and networking resources. In regards to network, Software-Defined Networking (SDN) has become one of the most important architectures for the management of networks that require frequent re-policing or re-configurations. Considering the already known security issues of Cloud Computing, SDN helps to give fast answers to emerging threats, but also introduces new vulnerabilities related to its own architecture. In this paper, we analyze recent security proposals derived from the use of SDN, and elaborate on whether it helps to improve trust, security and privacy in Cloud Computing. Moreover, we discuss security concerns introduced by the SDN architecture and how they could compromise Cloud services. Finally, we explore future security perspectives with regard to leveraging SDN benefits and mitigating its security issues."}
{"_id": "f7f4f47b3ef2a2428754f46bb3f7a6bf0c765425", "title": "A Digital Twin-Based Approach for Designing and Multi-Objective Optimization of Hollow Glass Production Line", "text": "Various new national advanced manufacturing strategies, such as Industry 4.0, Industrial Internet, and Made in China 2025, are issued to achieve smart manufacturing, resulting in the increasing number of newly designed production lines in both developed and developing countries. Under the individualized designing demands, more realistic virtual models mirroring the real worlds of production lines are essential to bridge the gap between design and operation. This paper presents a digital twin-based approach for rapid individualized designing of the hollow glass production line. The digital twin merges physics-based system modeling and distributed real-time process data to generate an authoritative digital design of the system at pre-production phase. A digital twin-based analytical decoupling framework is also developed to provide engineering analysis capabilities and support the decision-making over the system designing and solution evaluation. Three key enabling techniques as well as a case study in hollow glass production line are addressed to validate the proposed approach."}
{"_id": "209b7b98b460c6a0f14cf751a181e971cd4d2a9f", "title": "A Focus on Recent Developments and Trends in Underwater Imaging", "text": "\u25a0 Compact, efficient and easy to program digital signal processors execute algorithms once too computationally expensive for real-time applications. \u25a0 Modeling and simulation programs more accurately predict the effects that physical ocean parameters have on the performance of imaging systems under different geometric configurations. \u25a0 Image processing algorithms that handle data from multiple synchronous sources and that can extract and match feature points from each such source derive accurate 3-D scene information. \u25a0 Digital compression schemes provide high-quality standardizations for increased data transfer rates (i.e. streaming video) and reduced storage requirements. This paper reports developments over the past three years in the following topics: \u25a0 Image formation and image processing methods; \u25a0 Extended range imaging techniques;"}
{"_id": "075ef34a0203477c41438743a272f106f95ae909", "title": "Velocity and acceleration estimation for optical incremental encoders q", "text": "Optical incremental encoders are extensively used for position measurements in motion systems. The position measurements suffer from quantization errors. Velocity and acceleration estimations obtained by numerical differentiation largely amplify the quantization errors. In this paper, the time stamping concept is used to obtain more accurate position, velocity and acceleration estimations. Time stamping makes use of stored events, consisting of the encoder counts and their time instants, captured at a high resolution clock. Encoder imperfections and the limited resolution of the capturing rate of the encoder events result in errors in the estimations. In this paper, we propose a method to extend the observation interval of the stored encoder events using a skip operation. Experiments on a motion system show that the velocity estimation is improved by 54% and the acceleration estimation by 92%. 2009 Elsevier Ltd. All rights reserved."}
{"_id": "70091b309add6d3e6006bae68a5df5931cfc9d2f", "title": "Phase spectrum prediction of audio signals", "text": "Modeling the phases of audio signals has received significantly less attention in comparison to the modeling of magnitudes. This paper proposes to use linear least squares and neural networks to predict phases from the neighboring points only in the phase spectrum. The simulation results show that there is a structure in the phase components which could be used in further analysis algorithms based on the phase spectrum."}
{"_id": "d7bf5c4e8d640f877efefa6faaaed5b4556fbe8e", "title": "About Face 3: the essentials of interaction design", "text": "Some people may be laughing when looking at you reading in your spare time. Some may be admired of you. And some may want be like you who have reading hobby. What about your own feel? Have you felt right? Reading is a need and a hobby at once. This condition is the on that will make you feel that you must read. If you know are looking for the book enPDFd about face 3 the essentials of interaction design alan cooper as the choice of reading, you can find here."}
{"_id": "3decb702ee67b996190692b3eaeef3ba6f41b61d", "title": "From Distributional Semantics to Conceptual Spaces: A Novel Computational Method for Concept Creation", "text": "We investigate the relationship between lexical spaces and contextually-defined conceptual spaces, offering applications to creative concept discovery. We define a computational method for discovering members of concepts based on semantic spaces: starting with a standard distributional model derived from corpus co-occurrence statistics, we dynamically select characteristic dimensions associated with seed terms, and thus a subspace of terms defining the related concept. This approach performs as well as, and in some cases better than, leading distributional semantic models on a WordNet-based concept discovery task, while also providing a model of concepts as convex regions within a space with interpretable dimensions. In particular, it performs well on more specific, contextualized concepts; to investigate this we therefore move beyond WordNet to a set of human empirical studies, in which we compare output against human responses on a membership task for novel concepts. Finally, a separate panel of judges rate both model output and human responses, showing similar ratings in many cases, and some commonalities and divergences which reveal interesting issues for computational concept discovery."}
{"_id": "6bc111834108cbcd6cd831f8ac34342d84bc886a", "title": "Modeling and Simulation of Lithium-Ion Batteries from a Systems Engineering Perspective", "text": "The lithium-ion battery is an ideal candidate for a wide variety of applications due to its high energy/power density and operating voltage. Some limitations of existing lithium-ion battery technology include underutilization, stress-induced material damage, capacity fade, and the potential for thermal runaway. This paper reviews efforts in the modeling and simulation of lithium-ion batteries and their use in the design of better batteries. Likely future directions in battery modeling and design including promising research opportunities are outlined. \u00a9 2011 The Electrochemical Society. [DOI: 10.1149/2.018203jes] All rights reserved."}
{"_id": "92106be17def485cba1c5997b80bbb2d0b21b813", "title": "An analytical framework for evaluating e-commerce business models and strategies", "text": "Electronic commerce or business is more than just another way to sustain or enhance existing business practices. Rather, e-commerce is a paradigm shift. It is a `\u0300 disruptive\u2019\u2019 innovation that is radically changing the traditional way of doing business. The industry is moving so fast because it operates under totally different principles and work rules in the digital economy. A general rule in e-commerce is that there is no simple prescription and almost no such thing as an established business or revenue model for companies even within the same industry. Under such conditions, an analytical framework is needed to assist e-commerce planners and strategic managers in assessing the critical success factors when formulating e-commerce business models and strategies. This research develops an analytical framework based on the theories of transaction costs and switching costs. Both demand-side and supply-side economies of scale and scope are also applied to the development of this framework. In addition, e-commerce revenue models and strategies are also discussed. Based on the analytical framework developed by this research, this paper discusses the five essential steps for e-commerce success. They are: redefine the competitive advantage; rethink business strategy; re-examine traditional business and revenue models, re-engineer the corporation and Web site; and re-invent customer service. E-commerce planners and strategic managers will be able to use the framework to analyze and evaluate the critical successful factors for e-commerce success."}
{"_id": "4077c4e01162471b8947f2c08b27cb2d2970b09a", "title": "OpenPAWS: An open source PAWS and UHF TV White Space database implementation for India", "text": "TV white space (TVWS) geolocation database is being used for the protection of the terrestrial TV broadcast receivers, and the coexistence of secondary devices. To the best of our knowledge, though TV White Space calculations are available, an active online database does not exist for India. In this paper, the development of the first TVWS database for India is detailed and is released for public access. A standardized protocol to access the TVWS database on a readily available hardware platform is implemented. A hardware prototype, which is capable of querying the TVWS database and operating in the TV band without causing harmful interference to the TV receivers in UHF TV bands, is developed. The source code of our implementation has been released under the GNU general public license version 2.0."}
{"_id": "5d7effed9fc5824359ef315a4a517c90eeb286d6", "title": "Automatic Verb Classification Based on Statistical Distributions of Argument Structure", "text": "Automatic acquisition of lexical knowledge is critical to a wide range of natural language processing tasks. Especially important is knowledge about verbs, which are the primary source of relational information in a sentence-the predicate-argument structure that relates an action or state to its participants (i.e., who did what to whom). In this work, we report on supervised learning experiments to automatically classify three major types of English verbs, based on their argument structure-specifically, the thematic roles they assign to participants. We use linguistically-motivated statistical indicators extracted from large annotated corpora to train the classifier, achieving 69.8 accuracy for a task whose baseline is 34, and whose expert-based upper bound we calculate at 86.5. A detailed analysis of the performance of the algorithm and of its errors confirms that the proposed features capture properties related to the argument structure of the verbs. Our results validate our hypotheses that knowledge about thematic relations is crucial for verb classification, and that it can be gleaned from a corpus by automatic means. We thus demonstrate an effective combination of deeper linguistic knowledge with the robustness and scalability of statistical techniques."}
{"_id": "75c5e7566ef28c5d51fe0825afe40d4bd3d90746", "title": "Revealing Event Saliency in Unconstrained Video Collection", "text": "Recent progresses in multimedia event detection have enabled us to find videos about a predefined event from a large-scale video collection. Research towards more intrinsic unsupervised video understanding is an interesting but understudied field. Specifically, given a collection of videos sharing a common event of interest, the goal is to discover the salient fragments, i.e., the curt video fragments that can concisely portray the underlying event of interest, from each video. To explore this novel direction, this paper proposes an unsupervised event saliency revealing framework. It first extracts features from multiple modalities to represent each shot in the given video collection. Then, these shots are clustered to build the cluster-level event saliency revealing framework, which explores useful information cues (i.e., the intra-cluster prior, inter-cluster discriminability, and inter-cluster smoothness) by a concise optimization model. Compared with the existing methods, our approach could highlight the intrinsic stimulus of the unseen event within a video in an unsupervised fashion. Thus, it could potentially benefit to a wide range of multimedia tasks like video browsing, understanding, and search. To quantitatively verify the proposed method, we systematically compare the method to a number of baseline methods on the TRECVID benchmarks. Experimental results have demonstrated its effectiveness and efficiency."}
{"_id": "15dd88a84e2581e9d408b2be142b0dba6b9c4c3e", "title": "RTL Hardware IP Protection Using Key-Based Control and Data Flow Obfuscation", "text": "Recent trends of hardware intellectual property (IP) piracy and reverse engineering pose major business and security concerns to an IP-based system-on-chip (SoC) design flow. In this paper, we propose a Register Transfer Level (RTL) hardware IP protection technique based on low-overhead key-based obfuscation of control and data flow. The basic idea is to transform the RTL core into control and data flow graph (CDFG) and then integrate a well-obfuscated finite state machine (FSM) of special structure, referred as \u201cMode-Control FSM\u201d, into the CDFG in a manner that normal functional behavior is enabled only after application of a specific input sequence. We provide formal analysis of the effectiveness of the proposed approach and present a simple metric to quantify the level of obfuscation. We also present an integrated design flow that implements the proposed obfuscation at low computational overhead. Simulation results for two open-source IP cores show that high levels of security is achievable at nominal area and power overheads under delay constraint."}
{"_id": "c32d241a6857a13750b51243766bee102f2e3df3", "title": "Micro bubble fluidics by EWOD and ultrasonic excitation for micro bubble tweezers", "text": "Recently, we envisioned so called micro bubble tweezers where EWOD (electrowetting-on-dielectric) actuated bubbles can manipulate micro objects such as biological cells by pushing or pulling them. Besides, oscillating (shrinking and expanding) bubbles in the presence of ultrasonic wave act as a to deliver drugs and molecules into the cells. In this paper, as a great stride in our quest for micro bubble tweezers, we present (1) full realization of two critical bubble operations (generating of bubbles in an on-chip and on-demand manner and splitting of single bubbles) and (2) two possible applications of mobile bubbles oscillating under acoustic excitation (a mobile vortex generator and micro particle carrier)."}
{"_id": "76e12e9b8c02ed94cd5e1cae7174060ca13098f2", "title": "Energy-efficient deployment of Intelligent Mobile sensor networks", "text": "Many visions of the future include people immersed in an environment surrounded by sensors and intelligent devices, which use smart infrastructures to improve the quality of life and safety in emergency situations. Ubiquitous communication enables these sensors or intelligent devices to communicate with each other and the user or a decision maker by means of ad hoc wireless networking. Organization and optimization of network resources are essential to provide ubiquitous communication for a longer duration in large-scale networks and are helpful to migrate intelligence from higher and remote levels to lower and local levels. In this paper, distributed energy-efficient deployment algorithms for mobile sensors and intelligent devices that form an Ambient Intelligent network are proposed. These algorithms employ a synergistic combination of cluster structuring and a peer-to-peer deployment scheme. An energy-efficient deployment algorithm based on Voronoi diagrams is also proposed here. Performance of our algorithms is evaluated in terms of coverage, uniformity, and time and distance traveled until the algorithm converges. Our algorithms are shown to exhibit excellent performance."}
{"_id": "8714d4596da62426a37516f45ef1aa4dd5b79a56", "title": "The spatial and temporal meanings of English prepositions can be independently impaired", "text": "English uses the same prepositions to describe both spatial and temporal relationships (e.g., at the corner, at 1:30), and other languages worldwide exhibit similar patterns. These space-time parallelisms have been explained by the Metaphoric Mapping Theory, which maintains that humans have a cognitive predisposition to structure temporal concepts in terms of spatial schemas through the application of a TIME IS SPACE metaphor. Evidence comes from (among other sources) historical investigations showing that languages consistently develop in such a way that expressions that originally have only spatial meanings are gradually extended to take on analogous temporal meanings. It is not clear, however, if the metaphor actively influences the way that modern adults process prepositional meanings during language use. To explore this question, a series of experiments was conducted with four brain-damaged subjects with left perisylvian lesions. Two subjects exhibited the following dissociation: they failed a test that assesses knowledge of the spatial meanings of prepositions, but passed a test that assesses knowledge of the corresponding temporal meanings of the same prepositions. This result suggests that understanding the temporal meanings of prepositions does not necessarily require establishing structural alignments with their spatial correlates. Two other subjects exhibited the opposite dissociation: they performed better on the spatial test than on the temporal test. Overall, these findings support the view that although the spatial and temporal meanings of prepositions are historically linked by virtue of the TIME IS SPACE metaphor, they can be (and may normally be) represented and processed independently of each other in the brains of modern adults."}
{"_id": "e46c54636bff2ec3609a67fd33e21a7fc8816179", "title": "Mental simulation in literal and figurative language understanding", "text": "Suppose you ask a colleague how a class he just taught went, and he replies that \"It was a great class the students were glued to their seats.\" There are two clearly distinct interpretations of this utterance, which might usefully be categorized as a literal and a figurative one. Which interpretation you believe is appropriate will determine your course of action; whether you congratulate your colleague for his fine work, or whether you report him to the dean for engaging in false imprisonment. What distinguishes between these two types of interpretation? Classically, the notions of literalness and figurativity are viewed as pertaining directly to language words have literal meanings, and can be used figuratively when specific figures of speech cue appropriate interpretation processes (Katz 1977, Searle 1978, Dascal 1987). Indeed, this approach superficially seems to account for the two interpretations described above. The literal (and one would hope, less likely) interpretation involves simply interpreting each word in the sentence and their combination in terms of its straightforward meaning the glue is literal glue and the seats are literal seats. The figurative interpretation, by contrast, requires knowledge of an idiom a figure of speech. This idiom has a particular form, using the words glued, to, and seat(s), with a possessive pronoun or other noun phrase indicating who was glued to their seat(s) in the middle. It also carries with it a particular meaning: the person or people described were in rapt attention. Thus the figurative interpretation of the sentence differs from the literal one in that the meaning to be taken from it is not built up compositionally from the meanings of words included in the utterance. Rather, the idiom imposes a non-compositional interpretation. As a result, the meaning of the whole can be quite distinct from the meanings of the words included within the idiom. This distinction between idiomaticity and compositionality is an important component of the classical figurative-literal distinction. A consequence of equating figurativity with particular figures of speech is that figurative language can be seen as using words in ways that differ from their real, literal meaning. While this classical notion of figurative and literal language may seem sensible, it leads to a number of incorrect and inconsistent claims about the relation between literal and figurative language. (Note that although it is not strictly correct to talk about language itself as being literal or figurative instead we should discuss acts of figurative or literal language processing we will adopt this convention for ease of exposition.) One major problem is that it claims that figurative language processing mechanisms are triggered by language explicit linguistic cues or figures of speech. Indeed, such appears to be case in the above example, but in many cases it is not. A sentence like You'll find yourself glued to the latest shareholders' report can be interpreted \"figuratively\" even though it has in it no figure of speech equivalent of glued to X's seat(s) that might trigger a search for such an interpretation. The only trigger for this figurative meaning is the verb glue, which would be hard to justify as an idiom or figure of speech. 
The absence of overt indicators of figurativity might lead us to hypothesize that some words may encode both literal and figurative meanings (as suggests by"}
{"_id": "15b45650fa30c56bdc4c595a5afd31663f7f3eb4", "title": "Does Language Shape Thought?: Mandarin and English Speakers' Conceptions of Time", "text": "Does the language you speak affect how you think about the world? This question is taken up in three experiments. English and Mandarin talk about time differently--English predominantly talks about time as if it were horizontal, while Mandarin also commonly describes time as vertical. This difference between the two languages is reflected in the way their speakers think about time. In one study, Mandarin speakers tended to think about time vertically even when they were thinking for English (Mandarin speakers were faster to confirm that March comes earlier than April if they had just seen a vertical array of objects than if they had just seen a horizontal array, and the reverse was true for English speakers). Another study showed that the extent to which Mandarin-English bilinguals think about time vertically is related to how old they were when they first began to learn English. In another experiment native English speakers were taught to talk about time using vertical spatial terms in a way similar to Mandarin. On a subsequent test, this group of English speakers showed the same bias to think about time vertically as was observed with Mandarin speakers. It is concluded that (1) language is a powerful tool in shaping thought about abstract domains and (2) one's native language plays an important role in shaping habitual thought (e.g., how one tends to think about time) but does not entirely determine one's thinking in the strong Whorfian sense."}
{"_id": "2e36f88fa9826e44182fc266d1e16ac7704a65b7", "title": "As time goes by : Evidence for two systems in processing space 4 ime metaphors", "text": "Temporal language is often couched in spatial metaphors. English has been claimed to have two space-time metaphoric systems: the ego-moving metaphor, wherein the observer\u2019s context progresses along the time-line towards the future, and the time-moving metaphor, wherein time is conceived of as a river or conveyor belt on which events are moving from the future to the past. In three experiments, we investigated the psychological status of these metaphors by asking subjects to carry out temporal inferences stated in terms of spatial metaphors. In Experiment 1, we found that subjects were slowed in their processing when the assertions shifted from one spatial metaphoric system to the other. In Experiment 2, we determined that this cost of shifting could not be attributed to local lexical factors. In Experiment 3, we again found this metaphor consistency effect in a naturalistic version of the study in which we asked commonsense time questions of passengers at an airport. The results of the three studies provide converging evidence that people use spatial metaphors in temporal reasoning. Implications for the status of metaphoric systems arc discussed."}
{"_id": "c1c1907129cd058d84e0cc7c89e1c34f2adaa34a", "title": "Metaphoric structuring: understanding time through spatial metaphors", "text": "The present paper evaluates the claim that abstract conceptual domains are structured through metaphorical mappings from domains grounded directly in experience. In particular, the paper asks whether the abstract domain of time gets its relational structure from the more concrete domain of space. Relational similarities between space and time are outlined along with several explanations of how these similarities may have arisen. Three experiments designed to distinguish between these explanations are described. The results indicate that (1) the domains of space and time do share conceptual structure, (2) spatial relational information is just as useful for thinking about time as temporal information, and (3) with frequent use, mappings between space and time come to be stored in the domain of time and so thinking about time does not necessarily require access to spatial schemas. These findings provide some of the first empirical evidence for Metaphoric Structuring. It appears that abstract domains such as time are indeed shaped by metaphorical mappings from more concrete and experiential domains such as space."}
{"_id": "60388818aa7faf07945d53292a21d3efa2ea841e", "title": "Bayesian Gaussian process models : PAC-Bayesian generalisation error bounds and sparse approximations", "text": "Non-parametric models and techniques enjoy a growing popularity in the field of machine learning, and among these Bayesian inference for Gaussian process (GP) models has recently received significant attention. We feel that GP priors should be part of the standard toolbox for constructing models relevant to machine learning in the same way as parametric linear models are, and the results in this thesis help to remove some obstacles on the way towards this goal. In the first main chapter, we provide a distribution-free finite sample bound on the difference between generalisation and empirical (training) error for GP classification methods. While the general theorem (the PAC-Bayesian bound) is not new, we give a much simplified and somewhat generalised derivation and point out the underlying core technique (convex duality) explicitly. Furthermore, the application to GP models is novel (to our knowledge). A central feature of this bound is that its quality depends crucially on task knowledge being encoded faithfully in the model and prior distributions, so there is a mutual benefit between a sharp theoretical guarantee and empirically well-established statistical practices. Extensive simulations on real-world classification tasks indicate an impressive tightness of the bound, in spite of the fact that many previous bounds for related kernel machines fail to give non-trivial guarantees in this practically relevant regime. In the second main chapter, sparse approximations are developed to address the problem of the unfavourable scaling of most GP techniques with large training sets. Due to its high importance in practice, this problem has received a lot of attention recently. We demonstrate the tractability and usefulness of simple greedy forward selection with information-theoretic criteria previously used in active learning (or sequential design) and develop generic schemes for automatic model selection with many (hyper)parameters. We suggest two new generic schemes and evaluate some of their variants on large real-world classification and regression tasks. These schemes and their underlying principles (which are clearly stated and analysed) can be applied to obtain sparse approximations for a wide regime of GP models far beyond the special cases we studied here."}
{"_id": "5ce872665d058a4500d143797ac7d2af04139c9d", "title": "Romantic Relationships among Young Adults : An Attachment Perspective", "text": "The present study used adult attachment as a theoretical framework to investigate individual differences in young adults\u2019 romantic relationships. Emphasis was put on studying the relational aspects of the self and coping strategies in an attempt to better understand how romantic partners feel about themselves and how they subsequently behave in periods of conflict. The sample comprised undergraduate university students (N=377) who responded to a self report questionnaire in English. Most findings were consistent with the proposed hypotheses. Results indicated that anxious individuals tend to report an ambivalent coping style and that fearful individuals were most vulnerable to health risks such as depression. Gender differences were also explored and results showed for instance that women generally tend to seek more support, and men might be more dismissing than women."}
{"_id": "d9c012f793800b470e1be86aff533a54ce333990", "title": "Block-diagonal Hessian-free Optimization for Training Neural Networks", "text": "Second-order methods for neural network optimization have several advantages over methods based on first-order gradient descent, including better scaling to large mini-batch sizes and fewer updates needed for convergence. But they are rarely applied to deep learning in practice because of high computational cost and the need for model-dependent algorithmic variations. We introduce a variant of the Hessian-free method that leverages a block-diagonal approximation of the generalized Gauss-Newton matrix. Our method computes the curvature approximation matrix only for pairs of parameters from the same layer or block of the neural network and performs conjugate gradient updates independently for each block. Experiments on deep autoencoders, deep convolutional networks, and multilayer LSTMs demonstrate better convergence and generalization compared to the original Hessian-free approach and the Adam method."}
{"_id": "70d930de222e3243d597da448f3f067dee173b24", "title": "A 0.018% THD+N, 88-dB PSRR PWM Class-D Amplifier for Direct Battery Hookup", "text": "A low-distortion third-order class-D amplifier that is fully integrated into a 0.18-\u03bc m CMOS process was designed for direct battery hookup in a mobile application. A class-D amplifier for direct battery hookup must have a sufficiently high power supply rejection ratio (PSRR) in preparation for noise, such as when a global system for mobile communications (GSM) bursts ripples through the system power line. This amplifier has a high PSRR of 88 dB for 217-Hz power supply ripples, using a third-order loop filter. System performance and stability are improved by applying the design technique of input-feedforward delta-sigma (\u0394\u03a3) modulators to the pulse-width modulation (PWM) class-D amplifier. A filterless method that can remove the external LC filter is employed, which offers great advantages in terms of PCB space and system cost. This amplifier achieves a power efficiency of 85.5% while delivering an output power of 750 mW into an 8-\u03a9 load from a 3.7-V supply voltage. Maximum achieved output power at 1% total harmonic distortion plus noise (THD+N) from a 4.9-V supply voltage into an 8-\u03a9 load is 1.15 W. This class-D amplifier is designed to have a broad operational range of 2.7-4.9 V for the direct use of mobile phone battery power. It has a total area of 1.01 mm2 and achieves a THD+N of 0.018%."}
{"_id": "5b25081ac8d857e7f1b09ace9f98eaa657220fe3", "title": "SeiSIM : Structural Similarity Evaluation for Seismic Data Retrieval", "text": "Structural similarity evaluation is a critical step for retrieving existing databases to find matching records for a given seismic data set. The objective is to enable the re-use of historical findings to assist exploration with new survey data. Currently there are very few structural similarity metrics specifically designed for seismic data, especially seismic survey maps. In this paper, we propose a metric that combines texture similarity and geological similarity, which is derived from discontinuity maps. We test the seismic similarity metric in a retrieval application. The successful results indicate that our approach is promising."}
{"_id": "4d91926b9291fd4542547f5e974be8c82cf6c822", "title": "A BIOLOGICALLY INSPIRED VISUAL WORKING MEMORY", "text": "The ability to look multiple times through a series of pose-adjusted glimpses is fundamental to human vision. This critical faculty allows us to understand highly complex visual scenes. Short term memory plays an integral role in aggregating the information obtained from these glimpses and informing our interpretation of the scene. Computational models have attempted to address glimpsing and visual attention but have failed to incorporate the notion of memory. We introduce a novel, biologically inspired visual working memory architecture that we term the Hebb-Rosenblatt memory. We subsequently introduce a fully differentiable Short Term Attentive Working Memory model (STAWM) which uses transformational attention to learn a memory over each image it sees. The state of our HebbRosenblatt memory is embedded in STAWM as the weights space of a layer. By projecting different queries through this layer we can obtain goal-oriented latent representations for tasks including classification and visual reconstruction. Our model obtains highly competitive classification performance on MNIST and CIFAR-10. As demonstrated through the CelebA dataset, to perform reconstruction the model learns to make a sequence of updates to a canvas which constitute a parts-based representation. Classification with the self supervised representation obtained from MNIST is shown to be in line with the state of the art models (none of which use a visual attention mechanism). Finally, we show that STAWM can be trained under the dual constraints of classification and reconstruction to provide an interpretable visual sketchpad which helps open the \u2018black-box\u2019 of deep learning."}
{"_id": "7d360a98dd7dfec9e3079ea18e78e74c6a7038f1", "title": "FRIENDSHIPS IN ONLINE PEER-TO-PEER LENDING : PIPES , PRISMS , AND RELATIONAL HERDING 1", "text": "This paper investigates how friendship relationships act as pipes, prisms, and herding signals in a large online, peer-to-peer (P2P) lending site. By analyzing decisions of lenders, we find that friends of the borrower, especially close offline friends, act as financial pipes by lending money to the borrower. On the other hand, the prism effect of friends\u2019 endorsements via bidding on a loan negatively affects subsequent bids by third parties. However, when offline friends of a potential lender, especially close friends, place a bid, a relational herding effect occurs as potential lenders are likely to follow their offline friends with a bid."}
{"_id": "1a37f07606d60df365d74752857e8ce909f700b3", "title": "Deep Neural Networks for Learning Graph Representations", "text": "In this paper, we propose a novel model for learning graph representations, which generates a low-dimensional vector representation for each vertex by capturing the graph structural information. Different from other previous research efforts, we adopt a random surfing model to capture graph structural information directly, instead of using the samplingbased method for generating linear sequences proposed by Perozzi et al. (2014). The advantages of our approach will be illustrated from both theorical and empirical perspectives. We also give a new perspective for the matrix factorization method proposed by Levy and Goldberg (2014), in which the pointwise mutual information (PMI) matrix is considered as an analytical solution to the objective function of the skipgram model with negative sampling proposed by Mikolov et al. (2013). Unlike their approach which involves the use of the SVD for finding the low-dimensitonal projections from the PMI matrix, however, the stacked denoising autoencoder is introduced in our model to extract complex features and model non-linearities. To demonstrate the effectiveness of our model, we conduct experiments on clustering and visualization tasks, employing the learned vertex representations as features. Empirical results on datasets of varying sizes show that our model outperforms other stat-of-the-art models in such tasks."}
{"_id": "1a4b6ee6cd846ef5e3030a6ae59f026e5f50eda6", "title": "Deep Learning for Video Classification and Captioning", "text": "Accelerated by the tremendous increase in Internet bandwidth and storage space, video data has been generated, published and spread explosively, becoming an indispensable part of today's big data. In this paper, we focus on reviewing two lines of research aiming to stimulate the comprehension of videos with deep learning: video classification and video captioning. While video classification concentrates on automatically labeling video clips based on their semantic contents like human actions or complex events, video captioning attempts to generate a complete and natural sentence, enriching the single label as in video classification, to capture the most informative dynamics in videos. In addition, we also provide a review of popular benchmarks and competitions, which are critical for evaluating the technical progress of this vibrant field."}
{"_id": "6faf92529ccb3680c3965babc343c69cff5d9576", "title": "Generalized Sampling and Infinite-Dimensional Compressed Sensing", "text": "We introduce and analyze a framework and corresponding method for compressed sensing in infinite dimensions. This extends the existing theory from finite-dimensional vector spaces to the case of separable Hilbert spaces. We explain why such a new theory is necessary by demonstrating that existing finitedimensional techniques are ill-suited for solving a number of key problems. This work stems from recent developments in generalized sampling theorems for classical (Nyquist rate) sampling that allows for reconstructions in arbitrary bases. A conclusion of this paper is that one can extend these ideas to allow for significant subsampling of sparse or compressible signals. Central to this work is the introduction of two novel concepts in sampling theory, the stable sampling rate and the balancing property, which specify how to appropriately discretize an infinite-dimensional problem."}
{"_id": "40f08a80714e7384651ddab62f4597b37779dff5", "title": "Walking away from the Desktop Computer: Distributed Collaboration and Mobility in a Product Design Team", "text": "A study of a spatially distributed product design team shows that most members are rarely at their individual desks. Mobility is essential for the use of shared resources and for communication. It facilitates informal interactions and awareness unavailable to colleagues at remote sites. Implications for technology design include portable and distributed computing resources, in particular, moving beyond individual workstation-centric CSCW applications."}
{"_id": "6426be25b02086a78a5dd5505c077c5d4205f275", "title": "Um estudo preliminar sobre Tipos de Personalidade em Equipes Scrum", "text": "In software development, people have a fundamental role as the basis for a project\u2019s success. Regarding agile methodologies, this factor is increased by the need of self-organized teams, which is related to its member\u2019s personality and the relationships between them. This paper evaluates how the member\u2019s personality types and social relations influence the outcome of Scrum teams, based on MBTI and sociometry. As a result it was possible to identify how psychological profiles influence the quality, productivity and achievement of goals defined in the Sprint Backlog."}
{"_id": "37bed9d4fec61263e109402b46136272bcd8c5ce", "title": "Long short term memory for driver intent prediction", "text": "Advanced Driver Assistance Systems have been shown to greatly improve road safety. However, existing systems are typically reactive with an inability to understand complex traffic scenarios. We present a method to predict driver intention as the vehicle enters an intersection using a Long Short Term Memory (LSTM) based Recurrent Neural Network (RNN). The model is learnt using the position, heading and velocity fused from GPS, IMU and odometry data collected by the ego-vehicle. In this paper we focus on determining the earliest possible moment in which we can classify the driver's intention at an intersection. We consider the outcome of this work an essential component for all levels of road vehicle automation."}
{"_id": "bd08d97a8152216e0bb234a2c5a56a378c7a738e", "title": "Multi-Antenna Wireless Legitimate Surveillance Systems: Design and Performance Analysis", "text": "To improve national security, government agencies have long been committed to enforcing powerful surveillance measures on suspicious individuals or communications. In this paper, we consider a wireless legitimate surveillance system, where a full-duplex multi-antenna legitimate monitor aims to eavesdrop on a dubious communication link between a suspicious pair via proactive jamming. Assuming that the legitimate monitor can successfully overhear the suspicious information only when its achievable data rate is no smaller than that of the suspicious receiver, the key objective is to maximize the eavesdropping non-outage probability by joint design of the jamming power, receive and transmit beamformers at the legitimate monitor. Depending on the number of receive/transmit antennas implemented, i.e., single-input single-output, single-input multiple-output, multiple-input single-output, and multiple-input multiple-output (MIMO), four different scenarios are investigated. For each scenario, the optimal jamming power is derived in a closed form and efficient algorithms are obtained for the optimal transmit/receive beamforming vectors. Moreover, low-complexity suboptimal beamforming schemes are proposed for the MIMO case. Our analytical findings demonstrate that by exploiting multiple antennas at the legitimate monitor, the eavesdropping non-outage probability can be significantly improved compared with the single-antenna case. In addition, the proposed suboptimal transmit zero-forcing scheme yields similar performance as the optimal scheme."}
{"_id": "de5576fbe7efefff0ef9d4b8231e9a5a67fc9d77", "title": "Detection , localization and characterization of damage in plates with an in situ array of spatially distributed ultrasonic sensors", "text": "Permanently attached piezoelectric sensors arranged in a spatially distributed array are under consideration for structural health monitoring systems incorporating active ultrasonic methods. Most damage detection and localization methods that have been proposed are based upon comparing monitored signals to baselines recorded from the structure prior to initiation of damage. To be effective, this comparison process must take into account any conditions other than damage that have changed the ultrasonic signals. Proposed here is a two-step process whereby damage is first detected and is then localized and characterized. The detection strategy considers the long time behavior of the signals in the diffuse-like regime where distinct echoes can no longer be identified. The localization strategy is to generate images of damage based upon the early time regime when discrete echoes from boundary reflections and scattering sites are meaningful. Results are shown for an aluminum plate with artificial damage introduced in combination with temperature variations. The loss of local temporal coherence combined with an optimal baseline selection procedure is shown to be effective for the detection of damage, and a delay-and-sum imaging method applied to the residual signals both localizes the damage and provides characterization information. (Some figures in this article are in colour only in the electronic version)"}
{"_id": "ca0332af5ac073a32fbce802b7d02bb2fb0d827e", "title": "Design, development and performance test of an automatic two-Axis solar tracker system", "text": "The energy extracted from solar photovoltaic (PV) or solar thermal depends on solar insolation. For the extraction of maximum energy from the sun, the plane of the solar collector should always be normal to the incident radiation. The diurnal and seasonal movement of the earth affects the radiation intensity received on the solar collector. Sun trackers move the solar collector to follow the sun trajectories and keep the orientation of the solar collector at an optimal tilt angle. Energy efficiency of solar PV or solar thermal can be substantially improved using solar tracking system. In this work, an automatic solar tracking system has been designed and developed using LDR sensors and DC motors on a mechanical structure with gear arrangement. Two-axis solar tracking (azimuth angle as well as altitude angle) has been implemented through microcontroller based sophisticated control logic. Performance of the proposed system over the important parameters like solar radiation received on the collector, maximum hourly electrical power, efficiency gain, short circuit current, open circuit voltage and fill factor has been evaluated and compared with those for fixed tilt angle solar collector."}
{"_id": "bb6782a9f12310b196d57a8bcce2b0445a9d7e2a", "title": "A combined pose, object, and feature model for action understanding", "text": "Understanding natural human activity involves not only identifying the action being performed, but also locating the semantic elements of the scene and describing the person's interaction with them. We present a system that is able to recognize complex, fine-grained human actions involving the manipulation of objects in realistic action sequences. Our method takes advantage of recent advances in sensors and pose trackers in learning an action model that draws on successful discriminative techniques while explicitly modeling both pose trajectories and object manipulations. By combining these elements in a single model, we are able to simultaneously recognize actions and track the location and manipulation of objects. To showcase this ability, we introduce a novel Cooking Action Dataset that contains video, depth readings, and pose tracks from a Kinect sensor. We show that our model outperforms existing state of the art techniques on this dataset as well as the VISINT dataset with only video sequences."}
{"_id": "aa3eaccca6a4c9082a98ec60fe1de798a8c77e73", "title": "Role of mask pattern in intelligibility of ideal binary-masked noisy speech.", "text": "Intelligibility of ideal binary masked noisy speech was measured on a group of normal hearing individuals across mixture signal to noise ratio (SNR) levels, masker types, and local criteria for forming the binary mask. The binary mask is computed from time-frequency decompositions of target and masker signals using two different schemes: an ideal binary mask computed by thresholding the local SNR within time-frequency units and a target binary mask computed by comparing the local target energy against the long-term average speech spectrum. By depicting intelligibility scores as a function of the difference between mixture SNR and local SNR threshold, alignment of the performance curves is obtained for a large range of mixture SNR levels. Large intelligibility benefits are obtained for both sparse and dense binary masks. When an ideal mask is dense with many ones, the effect of changing mixture SNR level while fixing the mask is significant, whereas for more sparse masks the effect is small or insignificant."}
{"_id": "65952bad69d5331c1760a37ae0af5817cb666edd", "title": "Are Computers Gender-Neutral ? Gender Stereotypic Responses to Computers", "text": "This study tested whether computers embedded with the most minimal gender cues will evoke sex-based stereotypic responses. Using an experimental paradigm (N=40) that involved computers with voice output, the study tested three sex-based stereotypes under conditions in which all suggestions of gender were removed, with the sole exception of vocal cues. In all three cases, gender stereotypic responses were obtained. Because the experimental manipulation involved no deception regarding the source of the voices, this study presents evidence that the tendency to gender stereotype is extremely powerful, extending even to inanimate machines. Are computers gender-neutral?"}
{"_id": "432ed941958594a886753f4fb874e143e4e6995c", "title": "Creating health typologies with random forest clustering", "text": "In this paper, we describe the creation of a health-specific geodemographic classification system for the whole city of Birmingham UK. Compared to some existing open source and commercial systems, the proposed work has a couple of distinct advantages: (i) It is particularly designed for the public health domain by combining most reliable health data and other sources accounting for the main determinants of health. (ii) A novel random forest clustering algorithm is used for generating clusters and it has several obvious advantages over the commonly used k-means algorithm in practice. These resultant health typologies will help local authorities to understand and design customized health interventions for the population. A Birmingham map illustrating the distribution of all health typologies is produced."}
{"_id": "1b482decdcf473fda7fb883cc8b232f302738e52", "title": "Crowd disasters as systemic failures: analysis of the Love Parade disaster", "text": "Each year, crowd disasters happen in different areas of the world. How and why do such disasters happen? Are the fatalities caused by relentless behavior of people or a psychological state of panic that makes the crowd \u2018go mad\u2019? Or are they a tragic consequence of a breakdown of coordination? These and other questions are addressed, based on a qualitative analysis of publicly available videos and materials, which document the planning and organization of the Love Parade in Duisburg, Germany, and the crowd disaster on July 24, 2010. Our analysis reveals a number of misunderstandings that have widely spread. We also provide a new perspective on concepts such as \u2018intentional pushing\u2019, \u2018mass panic\u2019, \u2018stampede\u2019, and \u2018crowd crushes\u2019. The focus of our analysis is on the contributing causal factors and their mutual interdependencies, not on legal issues or the judgment of personal or institutional responsibilities. Video recordings show that people stumbled and piled up due to a \u2018domino effect\u2019, resulting from a phenomenon called \u2018crowd turbulence\u2019 or \u2018crowd quake\u2019. Crowd quakes are a typical reason for crowd disasters, to be distinguished from crowd disasters resulting from \u2018mass panic\u2019 or \u2018crowd crushes\u2019. In Duisburg, crowd turbulence was the consequence of amplifying feedback and cascading effects, which are typical for systemic instabilities. Accordingly, things can go terribly wrong in spite of no bad intentions from anyone. Comparing the incident in Duisburg with others, we give recommendations to help prevent future crowd disasters. In particular, we introduce a new scale to assess the criticality of conditions in the crowd. This may allow preventative measures to be taken earlier on. Furthermore, we discuss the merits and limitations of citizen science for public investigation, considering that today, almost every event is recorded and reflected in the World Wide Web."}
{"_id": "7d44b2416c1563c75077b727170c1791641d0f67", "title": "An Internet based wireless home automation system for multifunctional devices", "text": "The aim of home automation is to control home devices from a central control point. In this paper, we present the design and implementation of a low cost but yet flexible and secure Internet based home automation system. The communication between the devices is wireless. The protocol between the units in the design is enhanced to be suitable for most of the appliances. The system is designed to be low cost and flexible with the increasing variety of devices to be controlled."}
{"_id": "6f7ef31d1f59993a5004b46b598cf3f5a1743ebb", "title": "Examining the Contribution of Information Technology Toward Productivity and Profitability in U . S . Retail Banking", "text": "There has been much debate on whether or not the investment in Information Technology (IT) provides improvements in productivity and business efficiency. Several studies both at the industry-level and at the firm-level have contributed differing understandings of this phenomenon. Of late, however, firm-level studies, primarily in the manufacturing sector, have shown that there are significant positive contributions from IT investments toward productivity. This study examines the effect of IT investment on both productivity and profitability in the retail banking sector. Using data collected through a major study of retail banking institutions in the United States, this paper concludes that additional investment in IT capital may have no real benefits and may be more of a strategic necessity to stay even with the competition. However, the results indicate that there are substantially high returns to increase in investment in IT labor, and that retail banks need to shift their emphasis in IT investment from capital to labor. Prasad and Harker IT Impact in Retail Banking"}
{"_id": "2dc3ec722948c08987127647ae34a502cabaa6db", "title": "Scalable Hashing-Based Network Discovery", "text": "Discovering and analyzing networks from non-network data is a task with applications in fields as diverse as neuroscience, genomics, energy, economics, and more. In these domains, networks are often constructed out of multiple time series by computing measures of association or similarity between pairs of series. The nodes in a discovered graph correspond to time series, which are linked via edges weighted by the association scores of their endpoints. After graph construction, the network may be thresholded such that only the edges with stronger weights remain and the desired sparsity level is achieved. While this approach is feasible for small datasets, its quadratic time complexity does not scale as the individual time series length and the number of compared series increase. Thus, to avoid the costly step of building a fully-connected graph before sparsification, we propose a fast network discovery approach based on probabilistic hashing of randomly selected time series subsequences. Evaluation on real data shows that our methods construct graphs nearly 15 times as fast as baseline methods, while achieving both network structure and accuracy comparable to baselines in task-based evaluation."}
{"_id": "9e6c094ffc5ba2cd0776d798692f3374920de574", "title": "Study of battery modeling using mathematical and circuit oriented approaches", "text": "Energy storage improves the efficiency and reliability of the electric utility system. The most common device used for storing electrical energy is batteries. To investigate power converter-based charge and discharge control of a battery storage device, effective battery models are critically needed. This paper presents a comparison study of mathematical and circuit-oriented battery models with a focus on lead-acid batteries that are normally used for large power storage applications. The paper shows how mathematical and circuit-oriented battery models are developed to reflect typical battery electrochemical properties. The relation between mathematical and circuit-oriented battery models is analyzed in the paper. Comparison study is made to investigate the difference and complexity in parameter extraction using the two different modeling approaches. The paper shows that the fundamental battery electromechanical relationship is usually built into the battery mathematical model but is not directly available in the circuit-oriented model. In terms of the computational complexity, the circuit-oriented battery model requires much expensive computing resources. Performance study is conducted to evaluate various factors that may affect the behavior of the mathematical battery models."}
{"_id": "4a9c1b4569289623bf9812ffe2225e4b3d7acb22", "title": "Real-time flood monitoring and warning system", "text": "Flooding is one of the major disasters occurring in various parts of the world. The system for real-time monitoring of water conditions: water level; flow; and precipitation level, was developed to be employed in monitoring flood in Nakhon Si Thammarat, a southern province in Thailand. The two main objectives of the developed system is to serve 1) as information channel for flooding between the involved authorities and experts to enhance their responsibilities and collaboration and 2) as a web based information source for the public, responding to their need for information on water condition and flooding. The developed system is composed of three major components: sensor network, processing/transmission unit, and database/ application server. These real-time data of water condition can be monitored remotely by utilizing wireless sensors network that utilizes the mobile General Packet Radio Service (GPRS) communication in order to transmit measured data to the application server. We implemented a so-called VirtualCOM, a middleware that enables application server to communicate with the remote sensors connected to a GPRS data unit (GDU). With VirtualCOM, a GDU behaves as if it is a cable directly connected the remote sensors to the application server. The application server is a web-based system implemented using PHP and JAVA as the web application and MySQL as its relational database. Users can view real-time water condition as well as the forecasting of the water condition directly from the web via web browser or via WAP. The developed system has demonstrated the applicability of today\u2019s sensors in wirelessly monitor real-time water conditions."}
{"_id": "f5fca08badb5f182bfc5bc9050e786d40e0196df", "title": "Design of a Water Environment Monitoring System Based on Wireless Sensor Networks", "text": "A water environmental monitoring system based on a wireless sensor network is proposed. It consists of three parts: data monitoring nodes, data base station and remote monitoring center. This system is suitable for the complex and large-scale water environment monitoring, such as for reservoirs, lakes, rivers, swamps, and shallow or deep groundwaters. This paper is devoted to the explanation and illustration for our new water environment monitoring system design. The system had successfully accomplished the online auto-monitoring of the water temperature and pH value environment of an artificial lake. The system's measurement capacity ranges from 0 to 80 \u00b0C for water temperature, with an accuracy of \u00b10.5 \u00b0C; from 0 to 14 on pH value, with an accuracy of \u00b10.05 pH units. Sensors applicable to different water quality scenarios should be installed at the nodes to meet the monitoring demands for a variety of water environments and to obtain different parameters. The monitoring system thus promises broad applicability prospects."}
{"_id": "0969bae35536395aff521f6fbcd9d5ff379664e3", "title": "Routing in multi-radio, multi-hop wireless mesh networks", "text": "We present a new metric for routing in multi-radio, multi-hop wireless networks. We focus on wireless networks with stationary nodes, such as community wireless networks.The goal of the metric is to choose a high-throughput path between a source and a destination. Our metric assigns weights to individual links based on the Expected Transmission Time (ETT) of a packet over the link. The ETT is a function of the loss rate and the bandwidth of the link. The individual link weights are combined into a path metric called Weighted Cumulative ETT (WCETT) that explicitly accounts for the interference among links that use the same channel. The WCETT metric is incorporated into a routing protocol that we call Multi-Radio Link-Quality Source Routing.We studied the performance of our metric by implementing it in a wireless testbed consisting of 23 nodes, each equipped with two 802.11 wireless cards. We find that in a multi-radio environment, our metric significantly outperforms previously-proposed routing metrics by making judicious use of the second radio."}
{"_id": "50fc6949a8208486e26a716c2f4b255405715bbd", "title": "A Review of Wireless Sensor Technologies and Applications in Agriculture and Food Industry: State of the Art and Current Trends", "text": "The aim of the present paper is to review the technical and scientific state of the art of wireless sensor technologies and standards for wireless communications in the Agri-Food sector. These technologies are very promising in several fields such as environmental monitoring, precision agriculture, cold chain control or traceability. The paper focuses on WSN (Wireless Sensor Networks) and RFID (Radio Frequency Identification), presenting the different systems available, recent developments and examples of applications, including ZigBee based WSN and passive, semi-passive and active RFID. Future trends of wireless communications in agriculture and food industry are also discussed."}
{"_id": "71573d5dc03b28279c1a337e9fd91ffbeff47569", "title": "Attentive Fashion Grammar Network for Fashion Landmark Detection and Clothing Category Classification", "text": "This paper proposes a knowledge-guided fashion network to solve the problem of visual fashion analysis, e.g., fashion landmark localization and clothing category classification. The suggested fashion model is leveraged with high-level human knowledge in this domain. We propose two important fashion grammars: (i) dependency grammar capturing kinematics-like relation, and (ii) symmetry grammar accounting for the bilateral symmetry of clothes. We introduce Bidirectional Convolutional Recurrent Neural Networks (BCRNNs) for efficiently approaching message passing over grammar topologies, and producing regularized landmark layouts. For enhancing clothing category classification, our fashion network is encoded with two novel attention mechanisms, i.e., landmark-aware attention and category-driven attention. The former enforces our network to focus on the functional parts of clothes, and learns domain-knowledge centered representations, leading to a supervised attention mechanism. The latter is goal-driven, which directly enhances task-related features and can be learned in an implicit, top-down manner. Experimental results on large-scale fashion datasets demonstrate the superior performance of our fashion grammar network."}
{"_id": "86df5ac1f7065fd8e58e371fb733b3992affcf9e", "title": "An Ontology For Specifying Spatiotemporal Scopes in Life Cycle Assessment", "text": "Life Cycle Assessment (LCA) evaluates the environmental impact of a product through its entire life cycle, from material extraction to final disposal or recycling. The environmental impacts of an activity depend on both the activity\u2019s direct emissions to the environment as well as indirect emissions caused by activities elsewhere in the supply chain. Both the impacts of direct emissions and the provisioning of supply chain inputs to an activity depend on the activity\u2019s spatiotemporal scope. When accounting for spatiotemporal dynamics, LCA often faces significant data interoperability challenges. Ontologies and Semantic technologies can foster interoperability between diverse data sets from a variety of domains. Thus, this paper presents an ontology for modeling spatiotemporal scopes, i.e., the contexts in which impact estimates are valid. We discuss selected axioms and illustrate the use of the ontology by providing an example from LCA practice. The ontology enables practitioners to address key competency questions regarding the effect of spatiotemporal scopes on environmental impact estimation."}
{"_id": "07119bc66e256f88b7436e62a4ac3384365e4e9b", "title": "RASL: Robust Alignment by Sparse and Low-Rank Decomposition for Linearly Correlated Images", "text": "This paper studies the problem of simultaneously aligning a batch of linearly correlated images despite gross corruption (such as occlusion). Our method seeks an optimal set of image domain transformations such that the matrix of transformed images can be decomposed as the sum of a sparse matrix of errors and a low-rank matrix of recovered aligned images. We reduce this extremely challenging optimization problem to a sequence of convex programs that minimize the sum of \u21131-norm and nuclear norm of the two component matrices, which can be efficiently solved by scalable convex optimization techniques. We verify the efficacy of the proposed robust alignment algorithm with extensive experiments on both controlled and uncontrolled real data, demonstrating higher accuracy and efficiency than existing methods over a wide range of realistic misalignments and corruptions."}
{"_id": "2cad955c4667a8625553cd02e8bd0f84126b259a", "title": "Semantic image retrieval system based on object relationships", "text": "Semantic-based image retrieval has recently become popular as an avenue to improve retrieval accuracy. The \u201csemantic gap\u201d between the visual features and the high-level semantic features could be narrowed down by utilizing this kind of retrieval method. However, most of the current methods of semantic-based image retrieval utilize visual semantic features and do not consider spatial relationships. We build a system for content-based image retrieval from image collections on the web and tackle the challenges of distinguishing between images that contain similar objects, in order to capture the semantic meaning of a search query. In order to do so, we utilize a combination of segmentation into objects as well as the relationships of these objects with each other."}
{"_id": "9dbe7462761fa90b4b0340ceaf6661b25b7a4313", "title": "Development of HILS simulator for steering control in hot strip finishing mill", "text": "In this paper, hardware-in-the loop-simulation(HILS) simulator for steering control is developed. The simulator describes lateral movement of the strip in hot strip finishing mill. The simulator include strip dynamics, mill model, mill motor and AGC controllers. The factors considered for strip dynamics are initial off-center and wedge difference. The wedge is realized using the expansion of deformation process. To test validation of a simulator, PID controller is tested."}
{"_id": "fa453f2ae1f944a61d7dd0ea576b5bd05d08b9ed", "title": "Critical period regulation.", "text": "Neuronal circuits are shaped by experience during critical periods of early postnatal life. The ability to control the timing, duration, and closure of these heightened levels of brain plasticity has recently become experimentally accessible, especially in the developing visual system. This review summarizes our current understanding of known critical periods across several systems and species. It delineates a number of emerging principles: functional competition between inputs, role for electrical activity, structural consolidation, regulation by experience (not simply age), special role for inhibition in the CNS, potent influence of attention and motivation, unique timing and duration, as well as use of distinct molecular mechanisms across brain regions and the potential for reactivation in adulthood. A deeper understanding of critical periods will open new avenues to \"nurture the brain\"-from international efforts to link brain science and education to improving recovery from injury and devising new strategies for therapy and lifelong learning."}
{"_id": "304097b8cacc87977fd745716dff1412648a2326", "title": "Towards a holistic analysis of mobile payments: A multiple perspectives approach", "text": "As the mobile technologies and services are in constant evolution, many speculate on whether or not mobile payments will be a killer application for mobile commerce. To have a better understanding of the market, there is a need to analyze not only the technology but also the different actors that are involved. For this purpose, we propose to conduct two disruption analyses to draw the disruptiveness profile of mobile payment solutions compared to other payment instruments. Then, we try to discover what factors have hindered the technical and commercial development by using a DSS based on a multi-criteria decision making method called Electre I."}
{"_id": "8e6a2774982d24492273558a85eef0cec056efba", "title": "The Impact of Information Technology Investment Announcements on the Market Value of the Firm", "text": "Determining whether investments in information technology (IT) have an impact on firm performance has been and continues to be a major problem for information systems researchers and practitioners. Financial theory suggests that managers should make investment'decisions that maximize the value of the firm. Using event-study methodology, we provide empirical evidence on the effect of announcements of IT investments on the market value of the firm for a sample of 97 IT investments from the finance and manufacturing industries from 1981 to 1988. Over the announcement period, we find no excess returns for either the full sample or for any one of the industry subsamples. However, cross-sectional analysis reveals that the market reacts differently to announcements of innovative IT investments than to foUowup, or noninnovative investments in IT. Innovative IT investments increase firm value, while noninnovative investments do not. Furthermore, the market's reaction to announcements of innovative and noninnovative IT investments is independent of industry classification. These results indicate that, on average, IT investments are zero net present value (NPV) investments; they are worth as much as they cost. Innovative IT investments, however, increase the value of the firm."}
{"_id": "827d2d768f58759e193ed17da6d6f7e87749686a", "title": "Fingerprint matching by thin-plate spline modelling of elastic deformations", "text": "This paper presents a novel minutiae matching method that describes elastic distortions in 4ngerprints by means of a thin-plate spline model, which is estimated using a local and a global matching stage. After registration of the 4ngerprints according to the estimated model, the number of matching minutiae can be counted using very tight matching thresholds. For deformed 4ngerprints, the algorithm gives considerably higher matching scores compared to rigid matching algorithms, while only taking 100 ms on a 1 GHz P-III machine. Furthermore, it is shown that the observed deformations are di6erent from those described by theoretical models proposed in the literature. ? 2003 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved."}
{"_id": "ab66f5bcea5ca7adc638342958206262007f8afd", "title": "1 Mobile Landscapes : using location data from cellphones for urban analysis", "text": "The technology for determining the geographic location of cell phones and other hand-held devices is becoming increasingly available. It is opening the way to a wide range of applications, collectively referred to as Location Based Services (LBS), that are primarily aimed at individual users. However, if deployed to retrieve aggregated data in cities, LBS could become a powerful tool for urban analysis. This paper aims to review and introduce the potential of this technology to the urban planning community. In addition, it presents the \u2018Mobile Landscapes\u2019 project: an application in the metropolitan area of Milan, Italy, based on the geographical mapping of cell phone usage at different times of the day. The results enable a graphic representation of the intensity of urban activities and their evolution through space and time. Finally, a number of future applications are discussed and their potential for urban studies and planning is assessed."}
{"_id": "4b77ac9081958ff56543e9ab2eb942c0cc65610e", "title": "Comparison of SQL, NoSQL and NewSQL databases for internet of things", "text": "The Internet of Things (IoT) in all its essentiality is a collection of sensors bridged together tightly. The present day development in the technology industry has thus elevated the emphasis of large amounts of data. IoT is expected to generate and collect an enormous amount of data from varied locations very quickly. The concerns for features like storage and performance factors are coercing the databases to outperform themselves. Traditional relational databases have time and again proved to be efficient. NoSQL and more recently NewSQL databases having gained an impetus are asserted to perform better than the classic SQL counterpart. This paper compares the performance of three databases each from the said technologies; SQL (MySQL), NoSQL (MongoDB) and NewSQL (VoltDB) respectively for sensor readings. The sensor data handled ranges in vast (MB to GB) size and tested against: single write, single read, single delete and multi write operations."}
{"_id": "c3a2c1eedee7cd170606c5bda4afd05289af8562", "title": "Agenda-Based User Simulation for Bootstrapping a POMDP Dialogue System", "text": "This paper investigates the problem of bootstrapping a statistical dialogue manager without access to training data and proposes a new probabilistic agenda-based method for simulating user behaviour. In experiments with a statistical POMDP dialogue system, the simulator was realistic enough to successfully test the prototype system and train a dialogue policy. An extensive study with human subjects showed that the learned policy was highly competitive, with task completion rates above 90%. 1 Background and Introduction 1.1 Bootstrapping Statistical Dialogue Managers One of the key advantages of a statistical approach to Dialogue Manager (DM) design is the ability to formalise design criteria as objective reward functions and to learn an optimal dialogue policy from real dialogue data. In cases where a system is designed from scratch, however, it is often the case that no suitable in-domain data is available for training the DM. Collecting dialogue data without a working prototype is problematic, leaving the developer with a classic chicken-and-egg problem. Wizard-of-Oz (WoZ) experiments can be carried out to record dialogues, but they are often time-consuming and the recorded data may show characteristics of humanhuman conversation rather than typical human-computer dialogue. Alternatively, human-computer dialogues can be recorded with a handcrafted DM prototype but neither of these two methods enables the system designer to test the implementation of the statistical DM and the learning algorithm. Moreover, the size of the recorded corpus (typically 10 dialogues) usually falls short of the requirements for training a statistical DM (typically 10 dialogues). 1.2 User Simulation-Based Training In recent years, a number of research groups have investigated the use of a two-stage simulation-based setup. A statistical user model is first trained on a limited amount of dialogue data and the model is then used to simulate dialogues with the interactively learning DM (see Schatzmann et al. (2006) for a literature review). The simulation-based approach assumes the presence of a small corpus of suitably annotated in-domain dialogues or out-of-domain dialogues with a matching dialogue format (Lemon et al., 2006). In cases when no such data is available, handcrafted values can be assigned to the model parameters given that the model is sufficiently simple (Levin et al., 2000; Pietquin and Dutoit, 2005) but the performance of dialogue policies learned this way has not been evaluated using real users."}
{"_id": "dd569c776064693d02374ee89ec864f2eec53b02", "title": "A tomographic formulation of spotlight-mode synthetic aperture radar", "text": "Spotlight-mode synthetic aperture radar (spotlight-mode SAR) synthesizes high-resolution terrain maps using data gathered from multiple observation angles. This paper shows that spotlight-mode SAR can be interpreted as a tomographic reeonstrution problem and analyzed using the projection-slice theorem from computer-aided tomograpy (CAT). The signal recorded at each SAR transmission point is modeled as a portion of the Fourier transform of a central projection of the imaged ground area. Reconstruction of a SAR image may then be accomplished using algorithms from CAT. This model permits a simple understanding of SAR imaging, not based on Doppler shifts. Resolution, sampling rates, waveform curvature, the Doppler effect, and other issues are also discussed within the context of this interpretation of SAR."}
{"_id": "3dd688dbd5d425340203e73ed6590e58c970b083", "title": "An Accelerator for High Efficient Vision Processing", "text": "In recent years, neural network accelerators have been shown to achieve both high energy efficiency and high performance for a broad application scope within the important category of recognition and mining applications. Still, both the energy efficiency and performance of such accelerators remain limited by memory accesses. In this paper, we focus on image applications, arguably the most important category among recognition and mining applications. The neural networks which are state-of-the-art for these applications are convolutional neural networks (CNNs), and they have an important property: weights are shared among many neurons, considerably reducing the neural network memory footprint. This property allows to entirely map a CNN within an SRAM, eliminating all DRAM accesses for weights. By further hoisting this accelerator next to the image sensor, it is possible to eliminate all remaining DRAM accesses, i.e., for inputs and outputs. In this paper, we propose such a CNN accelerator, placed next to a CMOS or CCD sensor. The absence of DRAM accesses combined with a careful exploitation of the specific data access patterns within CNNs allows us to design an accelerator which is highly energy-efficient. We present a single-core implementation down to the layout at 65 nm, with a modest footprint of 5.94mm $^{\\boldsymbol {2}}$ and consuming only 336mW, but still about $\\boldsymbol {30\\times }$ faster than high-end GPUs. For visual processing with higher resolution and frame-rate requirements, we further present a multicore implementation with elevated performance."}
{"_id": "3cf12282c9a37211f8f7adbc5e667a1436317626", "title": "Adaptive Control of KNTU Planar Cable-Driven Parallel Robot with Uncertainties in Dynamic and Kinematic Parameters", "text": "This paper addresses the design and implementation of adaptive control on a planar cable-driven parallel robot with uncertainties in dynamic and kinematic parameters. To develop the idea, firstly, adaptation is performed on dynamic parameters and it is shown that the controller is stable despite the kinematic uncertainties. Then, internal force term is linearly separated into a regressor matrix in addition to a kinematic parameter vector that contains estimation error. In the next step to improve the controller performance, adaptation is performed on both the dynamic and kinematic parameters. It is shown that the performance of the proposed controller is improved by correction in the internal forces. The proposed controller not only keeps all cables in tension for the whole workspace of the robot, it is computationally simple and it does not require measurement of the end-effector acceleration as well. Finally, the effectiveness of the proposed control algorithm is examined through some experiments on KNTU planar cable-driven parallel robot and it is shown that the proposed control algorithm is able to provide suitable performance in practice."}
{"_id": "7626ebf435036928046afbd7ab88d4b76d3de008", "title": "Integration of Text and Audio Features for Genre Classification in Music Information Retrieval", "text": "Multimedia content can be described in versatile ways as its essence is not limited to one view. For music data these multiple views could be a song\u2019s audio features as well as its lyrics. Both of these modalities have their advantages as text may be easier to search in and could cover more of the \u2018content semantics\u2019 of a song, while omitting other types of semantic categorisation. (Psycho)acoustic feature sets, on the other hand, provide the means to identify tracks that \u2018sound similar\u2019 while less supporting other kinds of semantic categorisation. Those discerning characteristics of different feature sets meet users\u2019 differing information needs. We will explain the nature of text and audio feature sets which describe the same audio tracks. Moreover, we will propose the use of textual data on top of low level audio features for music genre classification. Further, we will show the impact of different combinations of audio features and textual features based on content words."}
{"_id": "155c4153aa867e0d36e81aef2b9a677712c349d4", "title": "Extraction of high-resolution frames from video sequences", "text": "The human visual system appears to be capable of temporally integrating information in a video sequence in such a way that the perceived spatial resolution of a sequence appears much higher than the spatial resolution of an individual frame. While the mechanisms in the human visual system that do this are unknown, the effect is not too surprising given that temporally adjacent frames in a video sequence contain slightly different, but unique, information. This paper addresses the use of both the spatial and temporal information present in a short image sequence to create a single high-resolution video frame. A novel observation model based on motion compensated subsampling is proposed for a video sequence. Since the reconstruction problem is ill-posed, Bayesian restoration with a discontinuity-preserving prior image model is used to extract a high-resolution video still given a short low-resolution sequence. Estimates computed from a low-resolution image sequence containing a subpixel camera pan show dramatic visual and quantitative improvements over bilinear, cubic B-spline, and Bayesian single frame interpolations. Visual and quantitative improvements are also shown for an image sequence containing objects moving with independent trajectories. Finally, the video frame extraction algorithm is used for the motion-compensated scan conversion of interlaced video data, with a visual comparison to the resolution enhancement obtained from progressively scanned frames."}
{"_id": "f66a2d4ee08d388e9919fa6a2a2dad5227a256e8", "title": "The Big Bang: Facial Trauma Caused by Recreational Fireworks.", "text": "In the Netherlands, it is a tradition of setting off fireworks to celebrate the turn of the year. In our medical facility, each year patients with severe skeletal maxillofacial trauma inflicted by recreational fireworks are encountered. We present two cases of patients with severe blast injury to the face, caused by direct impact of rockets, and thereby try to contribute to the limited literature on facial blast injuries, their treatment, and clinical outcome. These patients require multidisciplinary treatment, involving multiple reconstructive surgeries, and the overall recovery process is long. The severity of these traumas raises questions about the firework traditions and legislations not only in the Netherlands but also worldwide. Therefore, the authors support restrictive laws on personal use of fireworks in the Netherlands."}
{"_id": "1786aab096f743614cbe3950602004b9f104d4ba", "title": "A State-Space Approach to Dynamic Nonnegative Matrix Factorization", "text": "Nonnegative matrix factorization (NMF) has been actively investigated and used in a wide range of problems in the past decade. A significant amount of attention has been given to develop NMF algorithms that are suitable to model time series with strong temporal dependencies. In this paper, we propose a novel state-space approach to perform dynamic NMF (D-NMF). In the proposed probabilistic framework, the NMF coefficients act as the state variables and their dynamics are modeled using a multi-lag nonnegative vector autoregressive (N-VAR) model within the process equation. We use expectation maximization and propose a maximum-likelihood estimation framework to estimate the basis matrix and the N-VAR model parameters. Interestingly, the N-VAR model parameters are obtained by simply applying NMF. Moreover, we derive a maximum a posteriori estimate of the state variables (i.e., the NMF coefficients) that is based on a prediction step and an update step, similarly to the Kalman filter. We illustrate the benefits of the proposed approach using different numerical simulations where D-NMF significantly outperforms its static counterpart. Experimental results for three different applications show that the proposed approach outperforms two state-of-the-art NMF approaches that exploit temporal dependencies, namely a nonnegative hidden Markov model and a frame stacking approach, while it requires less memory and computational power."}
{"_id": "73e5f9f4ea344768ba464f146b9ba1e3d965c6b0", "title": "Standalone OPC UA Wrapper for Industrial Monitoring and Control Systems", "text": "OPC unified architecture (UA), a communication standard for the manufacturing industry, enables exchanging control and management data among distributed entities in industrial automation systems. OPC UA wrapper is a migration strategy that provides UA clients with seamless access to legacy servers having OPC classic interfaces. This paper presents the design of a standalone OPC UA wrapper and discusses its performance through extensive experiments using a prototype implementation. The wrapper consists of two main components, i.e., UA server and classic client, which communicate with each other via shared memory and semaphore. One important feature of the design is that it employs a distributed component object model runtime library implemented in Java for platform independence. This makes it possible to build a cost-competitive wrapper system by using commercial off-the-shelf non-Windows solutions with low-cost microprocessors. Another key feature is the event-driven update interface between the UA and classic components, which we propose as an alternative to the sampling-based mechanism for the reduced delay. Through experiments using workloads from an industrial monitoring system, we present a systematic approach of identifying the system parameters having a direct impact on the wrapper performance and eventually tuning them such that the read and subscription services of OPC UA exhibit the best performance."}
{"_id": "6044b30751c19b3231782fb0475c9ca438940690", "title": "Real-time Action Recognition with Dissimilarity-based Training of Specialized Module Networks", "text": "This paper addresses the problem of real-time action recognition in trimmed videos, for which deep neural networks have defined the state-of-the-art performance in the recent literature. For attaining higher recognition accuracies with efficient computations, researchers have addressed the various aspects of limitations in the recognition pipeline. This includes network architecture, the number of input streams (where additional streams augment the color information), the cost function to be optimized, in addition to others. The literature has always aimed, though, at assigning the adopted network (or networks, in case of multiple streams) the task of recognizing the whole number of action classes in the dataset at hand. We propose to train multiple specialized module networks instead. Each module is trained to recognize a subset of the action classes. Towards this goal, we present a dissimilarity-based optimized procedure for distributing the action classes over the modules, which can be trained simultaneously offline. On two standard datasets\u2013UCF-101 and HMDB-51\u2013the proposed method demonstrates a comparable performance, that is superior in some aspects, to the state-of-the-art, and that satisfies the real-time constraint. We achieved 72.5% accuracy on the challenging HMDB-51 dataset. By assigning fewer and unalike classes to each module network, this research paves the way to benefit from light-weight architectures without compromising recognition accuracy1."}
{"_id": "d2998f77f7b16fde8e1146d1e4b96f4fbb267577", "title": "Edge Computing and IoT Based Research for Building Safe Smart Cities Resistant to Disasters", "text": "Recently, several researches concerning with smart and connected communities have been studied. Soon the 4G / 5G technology becomes popular, and cellular base stations will be located densely in the urban space. They may offer intelligent services for autonomous driving, urban environment improvement, disaster mitigation, elderly/disabled people support and so on. Such infrastructure might function as edge servers for disaster support base. In this paper, we enumerate several research issues to be developed in the ICDCS community in the next decade in order for building safe, smart cities resistant to disasters. In particular, we focus on (A) up-to-date urban crowd mobility prediction and (B) resilient disaster information gathering mechanisms based on the edge computing paradigm. We investigate recent related works and projects, and introduce our on-going research work and insight for disaster mitigation."}
{"_id": "505253630ab7e8f35e26e27904bd3c8faea3c5ce", "title": "Predicting clicks: estimating the click-through rate for new ads", "text": "Search engine advertising has become a significant element of the Web browsing experience. Choosing the right ads for the query and the order in which they are displayed greatly affects the probability that a user will see and click on each ad. This ranking has a strong impact on the revenue the search engine receives from the ads. Further, showing the user an ad that they prefer to click on improves user satisfaction. For these reasons, it is important to be able to accurately estimate the click-through rate of ads in the system. For ads that have been displayed repeatedly, this is empirically measurable, but for new ads, other means must be used. We show that we can use features of ads, terms, and advertisers to learn a model that accurately predicts the click-though rate for new ads. We also show that using our model improves the convergence and performance of an advertising system. As a result, our model increases both revenue and user satisfaction."}
{"_id": "270d8ac5350039c06f48e13d88da631634afaadb", "title": "GeoDa: An Introduction to Spatial Data Analysis", "text": "This article presents an overview of GeoDa, a free software program intended to serve as a user-friendly and graphical introduction to spatial analysis for nongeographic information systems (GIS) specialists. It includes functionality ranging from simple mapping to exploratory data analysis, the visualization of global and local spatial autocorrelation, and spatial regression. A key feature of GeoDa is an interactive environment that combines maps with statistical graphics, using the technology of dynamically linked windows. A brief review of the software design is given, as well as some illustrative examples that highlight distinctive features of the program in applications dealing with public health, economic development, real estate analysis, and criminology."}
{"_id": "37316a323a2808f3d4dd6a5086e72b56ba960a96", "title": "Exploratory analysis of spatial and temporal data - a systematic approach", "text": "To open the e-book, you will require Adobe Reader software program. You can download the installer and instructions free from the Adobe Web site if you do not have Adobe Reader already installed on your computer. You might download and conserve it to your PC for later read through. Be sure to follow the download button above to download the PDF file. Review s Review s This ebook may be worth purchasing. it absolutely was writtern quite flawlessly and beneficial. I discovered this ebook from my dad and i suggested this pdf to discover.-Ma ximil ia n Wil kin so n DDS-Ma ximil ia n Wil kin so n DDS This is actually the very best publication i have read through till now. It is definitely simplistic but unexpected situations in the 50 % in the pdf. You can expect to like just how the article writer compose this pdf. A new electronic book with a new perspective. Better then never, though i am quite late in start reading this one. Your life period will be change the instant you comprehensive looking at this pdf."}
{"_id": "7918f936313ae27647e77aea8779dc02a1764f8f", "title": "How Maps Work - Representation, Visualization, and Design", "text": "how maps work representation visualization and design. Book lovers, when you need a new book to read, find the book here. Never worry not to find what you need. Is the how maps work representation visualization and design your needed book now? That's true; you are really a good reader. This is a perfect book that comes from great author to share with you. The book offers the best experience and lesson to take, not only take, but also learn."}
{"_id": "987560b6faaf0ced5e6eb97826dcf7f3ce367df2", "title": "Local Indicators of Spatial Association", "text": "The capabilities for visualization, rapid data retrieval, and manipulation in geographic information systems (GIS) have created the need for new techniques of exploratory data analysis that focus on the \"spatial\" aspects of the data. The identification of local patterns of spatial association is an important concern in this respect. In this paper, I outline a new general class of local indicators of spatial association (LISA) and show how they allow for the decomposition of global indicators, such as Moran's I, into the contribution of each observation. The LISA statistics serve two purposes. On one hand, they may be interpreted as indicators of local pockets of nonstationarity, or hot spots, similar to the Gi and G; statistics of Getis and Ord (1992). On the other hand, they may be used to assess the influence of individual locations on the magnitude of the global statistic and to identify \"outliers,\" as in Anselin's Moran scatterplot (1993a). An initial evaluation of the properties of a LISA statistic is carried out for the local Moran, which is applied in a study of the spatial pattern of conflict for African countries and in a number of Monte Carlo simulations."}
{"_id": "54c13129cbbc8737dce7d14dd1c7e6462016409f", "title": "Detection of Influential Observation in Linear Regression", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "555e1d6ecc7af031f29b0225bdca06d4a6da77ed", "title": "Clause Restructuring for Statistical Machine Translation", "text": "We describe a method for incorporating syntactic information in statistical machine translation systems. The first step of the method is to parse the source language string that is being translated. The second step is to apply a series of transformations to the parse tree, effectively reordering the surface string on the source language side of the translation system. The goal of this step is to recover an underlying word order that is closer to the target language word-order than the original string. The reordering approach is applied as a pre-processing step in both the training and decoding phases of a phrase-based statistical MT system. We describe experiments on translation from German to English, showing an improvement from 25.2% Bleu score for a baseline system to 26.8% Bleu score for the system with reordering, a statistically significant improvement."}
{"_id": "0fce0891edb205343ff126823b9df0bae04fe9dd", "title": "Temporal Convolutional Neural Networks for Diagnosis from Lab Tests", "text": "Early diagnosis of treatable diseases is essential for improving healthcare, and many diseases\u2019 onsets are predictable from annual lab tests and their temporal trends. We introduce a multi-resolution convolutional neural network for early detection of multiple diseases from irregularly measured sparse lab values. Our novel architecture takes as input both an imputed version of the data and a binary observation matrix. For imputing the temporal sparse observations, we develop a flexible, fast to train method for differentiable multivariate kernel regression. Our experiments on data from 298K individuals over 8 years, 18 common lab measurements, and 171 diseases show that the temporal signatures learned via convolution are significantly more predictive than baselines commonly used for early disease diagnosis."}
{"_id": "9bb386b2e75df0445126741d24a89c3740d07804", "title": "Automatic Generation of Visual-Textual Presentation Layout", "text": "Visual-textual presentation layout (e.g., digital magazine cover, poster, Power Point slides, and any other rich media), which combines beautiful image and overlaid readable texts, can result in an eye candy touch to attract users\u2019 attention. The designing of visual-textual presentation layout is therefore becoming ubiquitous in both commercially printed publications and online digital magazines. However, handcrafting aesthetically compelling layouts still remains challenging for many small businesses and amateur users. This article presents a system to automatically generate visual-textual presentation layouts by investigating a set of aesthetic design principles, through which an average user can easily create visually appealing layouts. The system is attributed with a set of topic-dependent layout templates and a computational framework integrating high-level aesthetic principles (in a top-down manner) and low-level image features (in a bottom-up manner). The layout templates, designed with prior knowledge from domain experts, define spatial layouts, semantic colors, harmonic color models, and font emotion and size constraints. We formulate the typography as an energy optimization problem by minimizing the cost of text intrusion, the utility of visual space, and the mismatch of information importance in perception and semantics, constrained by the automatically selected template and further preserving color harmonization. We demonstrate that our designs achieve the best reading experience compared with the reimplementation of parts of existing state-of-the-art designs through a series of user studies."}
{"_id": "9005c34200880bc2ca0bad398d0a6391667a2dfc", "title": "Disability studies as a source of critical inquiry for the field of assistive technology", "text": "Disability studies and assistive technology are two related fields that have long shared common goals - understanding the experience of disability and identifying and addressing relevant issues. Despite these common goals, there are some important differences in what professionals in these fields consider problems, perhaps related to the lack of connection between the fields. To help bridge this gap, we review some of the key literature in disability studies. We present case studies of two research projects in assistive technology and discuss how the field of disability studies influenced that work, led us to identify new or different problems relevant to the field of assistive technology, and helped us to think in new ways about the research process and its impact on the experiences of individuals who live with disability. We also discuss how the field of disability studies has influenced our teaching and highlight some of the key publications and publication venues from which our community may want to draw more deeply in the future."}
{"_id": "779bb1441b3f06eab3eb8424336920f6dc10827c", "title": "Innovation Engines: Automated Creativity and Improved Stochastic Optimization via Deep Learning", "text": "The Achilles Heel of stochastic optimization algorithms is getting trapped on local optima. Novelty Search avoids this problem by encouraging a search in all interesting directions. That occurs by replacing a performance objective with a reward for novel behaviors, as defined by a human-crafted, and often simple, behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and heroes instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, wherein novelty means interesting novelty. For example, a novelty pressure in image space does not explore in the low-level pixel space, but instead creates a pressure to create new types of images (e.g. churches, mosques, obelisks, etc.). Here we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of the algorithm's key motivations. Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: e.g. producing intelligent software, robot controllers, optimized physical components, and art."}
{"_id": "1563adc48cd0b56724ea4fe71cfb4193865fe601", "title": "Being Bayesian About Network Structure. A Bayesian Approach to Structure Discovery in Bayesian Networks", "text": "In many multivariate domains, we are interested in analyzing the dependency structure of the underlying distribution, e.g., whether two variables are in direct interaction. We can represent dependency structures using Bayesian network models. To analyze a given data set, Bayesian model selection attempts to find the most likely (MAP) model, and uses its structure to answer these questions. However, when the amount of available data is modest, there might be many models that have non-negligible posterior. Thus, we want compute the Bayesian posterior of a feature, i.e., the total posterior probability of all models that contain it. In this paper, we propose a new approach for this task. We first show how to efficiently compute a sum over the exponential number of networks that are consistent with a fixed order over network variables. This allows us to compute, for a given order, both the marginal probability of the data and the posterior of a feature. We then use this result as the basis for an algorithm that approximates the Bayesian posterior of a feature. Our approach uses a Markov Chain Monte Carlo (MCMC) method, but over orders rather than over network structures. The space of orders is smaller and more regular than the space of structures, and has much a smoother posterior \u201clandscape\u201d. We present empirical results on synthetic and real-life datasets that compare our approach to full model averaging (when possible), to MCMC over network structures, and to a non-Bayesian bootstrap approach."}
{"_id": "482c9eefd106e109cc13daf7253e74553a6eb141", "title": "Design Thinking Methods and Tools for Innovation in Multidisciplinary Teams", "text": "Design Thinking (DT), a human-centered approach to innovation, is regarded as a system of overlapping spaces of viability, desirability and feasibility. Innovation increases when these three perspectives are addressed. This position paper proposes DT methods and tools to foster innovation in a multidisciplinary team by facilitating decision-making processes. We discuss how DT methods and tools reflect one or more DT perspectives, namely, the human, business and technology perspectives. We also discuss how these DT methods and tools can support decision-making processes, collaboration and engagement in a multidisciplinary team. Author"}
{"_id": "3a01f9933066f0950435a509c2b7bf427a1ebd7f", "title": "Exfiltration of information from air-gapped machines using monitor's LED indicator", "text": "In this paper we present a new approach for data exfiltration by leaking data from monitor's LED to Smartphone's camera. The new approach may be used by attackers to leak valuable information from the organization as part of an Advanced Persistent Threat (APT). The proof of concept that was developed is described in the paper followed by a description of an experiment that demonstrates that practically people are not aware of the attack. We propose ways that will facilitate the detection of such threats and some possible countermeasures."}
{"_id": "e6a4afd84109b6af4a10c2f9b2b888b446b79e98", "title": "Unsupervised Learning without Overfitting: Empirical Risk Approximation as an Induction Principle for Reliable Clustering", "text": "Unsupervised learning algorithms are designed to extract structure from data s amples on the basis of a cost function for structures. For a reliable and robust inference proce ss, th unsupervised learning algorithm has to guarantee that the extracted structures ar e typical for the data source. In particular, it has to reject all structures where the infe renc is dominated by the arbitrariness of the sample noise and which, consequently, can be characterize d as overfitting in unsupervised learning. This paper summarizes an inference principle called Empirical Risk Approximationwhich allows us to quantitatively measure the overfitting effect and to derive a criterion as a saveguard against it. The crucial condition for learning is me t if (i) the empirical risk of learning uniformly converges towards the expected risk and if ( ii) the hypothesis class retains a minimal variety for consistent inference. Parameter s l ction of learnable data structures is demonstrated for the case of k-means clustering and Monte Carlo simulations are presented to support the selection principle. 1 PHILOSOPHY OF UNSUPERVISED LEARNING Learning algorithms are designed to extract structure from data. Two class es of algorithms have been widely discussed in the literature \u2013 supervised andunsupervised learning. The distinction between the two classes depends on supervision or teacher informati on which is either available to the learning algorithm or missing in the learning process. T hi paper describes a statistical learning theory of unsupervised learning. We summarize an induct ion principle for unsupervised learning which is refered to as Empirical Risk Approximation[1]. This principle is based on the optimization of a quality functional for structures in data and, most i portantly, it contains a safeguard against overfitting of structure to the noise in a sampl e set. The extracted structure of the data is encoded by a loss function and it is assumed to produce a lear ning risk below a predefined risk threshold. This induction principle is summarized by the fol lowing two inference steps: 1. Define a hypothesis class containing loss functions which evaluate candidate structures of the data and which measure their quality. Control the complexity of the hypothesis class by an upper bound on the costs. 2. Select an arbitrary loss function from the smallest subset which still guarantees c onsistent learning (in the sense of Statistical Learning Theory). The reader should note that the learning algorithm has to return a structure with c ostsbounded by a preselected cost threshold, but it is not required to return the structure w ith minimal empirical risk as in Vapnik\u2019s \u201cEmpirical Risk Minimization\u201d induction principle for classi fication and regression [2]. All structures in the data with risk smaller than the sel cted risk bound are considered to be equivalent in the approximation process without further distinction. 
Various cases of unsupervised learning algorithms are discussed in the literature wi h mphasis on the optimization aspect, i.e., data clustering or vector quantization, self-organi zing maps, principal or independent component analysis and principal curves, projection pursuit or algorithms for relational data as multidimensional scaling and pairwise clustering and histogram based clustering methods. How can we select a loss function from a hypothesis class with desired approxim ation quality or, equivalently, find a structure with bounded costs? The nested sequence of hypothesis classe suggests a tracking strategy of solutions which are elements of smaller and sm ller hypothesis classes in that sequence. The sequential sampling can be implemented by soluti on tracking, i.e., admissible solutions are incrementally improved by gradient descent if t he process samples from a subset with smaller risk bound than in the previous sampling step. Candidate procedure for the sampling of structures from the hypothesis class are stochastic search t chniques or continuation methods like simulated or deterministic annealing, although the theory does not refer to any particular learning algorithm. The general framework of Empirical Risk Approximationand its mathematical formulation is presented in Sec. 2, whereas a more detailed discussion can be found in [1]. Thi s theory of unsupervised learning is applied to the case of central clustering in Sec. 3. 2 MATHEMATICAL FRAMEWORK FOR UNSUPERVISED LEARNING The data samples X = xi 2 : IRd; 1 i l which have to be analysed by the unsupervised learning algorithm are elements of a suitable d-dimensional Euclidian space. The data are distributed according to a measure which is assumed to be known for the analysis. A mathematically precise statement of the Empirical Risk Approximationinduction principle requires several definitions which formalize the notion of searching for struct ure in the data. The quality of structures extracted from the data set X is evaluated by a learning cost function R\u0302( ;X ) = 1l l Xi=1 h(xi; ): (1) R\u0302( ;X ) denotes theempirical risk of learning a structure for an i.i.d. sample set X . The functionh(x; ) is known asloss function in statistics. It measures the costs for processing a generic datumx and often corresponds to the negative log-likelihood of a stochastic model. For example in vector quantization, the loss function quantifies what the costs are t o ssign a data pointx to a particular prototype or codebook vector. Each value 2 parametrizes an individual loss functions with denoting the parameter set. The parameter characterizes the different structures of the data set which are hypothetically considered candida te structures in the learning process and which have to be validated. Statistical learning t heory distinguishes between different classes of loss functions, i.e., 0 1 functions in classification problems, bounded functions and unbounded non-negative functions in regression problems. This paper is concerned with the class of unbounded non-negative functions since we are particularly interested in polynomially increasing loss function ( O kxkp)) as they occur in vector quantization. Note that the quality measur\u00ea R( ;X ) depends on the i.i.d. data set X and, therefore, is itself a random variable. 
The relevant quality measure for unsupervised learning, howev er, is the expectation value of this random variable, known as the expected risk of learning R( ) = Z h(x; ) d (x): (2) While minima ofR\u0302( ;X ) or solutions with bounded empirical risk are influenced by fluctuations in the samples, it is the expected risk R( ) which completely assesses the quality of learning results in a sound probabilistic fashion. The distribution is assumed to be known in the following analysis and it has to decay sufficiently fast such that all rth moments ( r > 2) of the loss functionh(x; ) are bounded byE fjh(x; ) R( )jrg r! r 2V fh(x; )g; 8 2 . E f:g andV f:g denote expectation and variance of a random variable, respectively. is a distribution dependent constant. The moment constraint on the distribution h lds for all distributions with exponentially fast decaying tails which include all distri butions with finite support. Empirical Risk Approximationis an induction principle which requires the learning algorithm to sample from the smallest consistently learnable subset of the hypothesis cla s . In the following, the hypothesis class H contains all loss functions h(x; ); 2 and the subsets of risk bounded loss functions HR are defined as HR = fh(x; ) : 2 ^ R( ) R g : (3) The subsetsHR are obviously nested since HR 1 HR 2 H forR 1 R 2 1 andH = limR !1HR . R induces a structure on the hypothesis class in the sense of Vapnik\u2019s \u201cStructural Risk\u201d and essentially controls the complexity of the hypothesis clas . TheEmpirical Risk Approximationinduction principle requires to define a nested sequence of hypothesis classes with bounded expected risk and to sample from the hypothesis clas s with the desired approximation quality. Algorithmically, however, we select t he loss function according to the bounded empirical risk\u0302 R( ;X ) R . This induction principle is consistent if a bounded empirical risk implies in the asymptotic limit that the loss functi o has bounded expected riskR( ) R and, therefore, is an element of the hypothesis class HR , i.e., 8 2 lim l!1 R\u0302( ;X ) R =) h(x; ) 2 HR : (4) This implication essentially states that the expected risk asymptotica lly does not exceed the risk bound (R( ) R ) and, therefore, the inferred structure is an approximation of the optimal data structure with risk not worse than R . The consistency assumption (4) for Empirical Risk Approximationholds if the empirical risk uniformly converges towards the expected risk lim l!1P(sup 2 jR( ) R\u0302( ;X )j pV fh(x; )g > ) = 0; 8 > 0; (5) where denotes the hypothesis set of distinguishable structures. defines an -netH ;R on the set of loss functions HR or coarsens the hypothesis class with an -separated set if there does not exist a finite -net forHR . Using results from the theory of empirical processes the probability of an \u2013deviation of empirical risk from expected risk can be bounded by Bernstein\u2019s inequality ([3], Lemma 2.2.11) P( sup 2 jR( ) R\u0302( ;X )j pV fh(x; )g > ) 2jH j exp l 2 2(1 + = min) : (6) The minimal variance of all loss functions is denoted by 2 min = inf 2 V fh(x; )g. jH j denotes the cardinality of an -net constructed for the hypothesis class HR under the assumption of the measure . The confidence level limits the probability of large deviations [1]. 
The large deviation inequality weighs two competing effects in the learning probl em, i.e., the probability of a large deviation exponentially decreases with growing sample si ze l, whereas a large deviation becomes increasingly likely with growing cardinality of the hypothesis class. A compromise between both effects determines how reliable an estimate ac tually is for a given data set X . The sample complexity l0( ; ) is defined as the nec"}
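To make the ERA principle concrete for the central clustering case discussed in the record above, the sketch below evaluates the empirical risk of k-means prototypes and accepts any candidate whose risk stays below a preselected bound R_gamma, rather than searching for the exact minimizer. The data, the bound, and the rejection-sampling loop are illustrative only.

```python
import numpy as np

def kmeans_empirical_risk(X, centers):
    """Empirical risk (1/l) * sum_i min_k ||x_i - y_k||^2 for central
    clustering: the loss h(x; alpha) is the squared distance to the
    closest prototype, alpha being the prototype set."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

# ERA-style acceptance: every prototype set with empirical risk below the
# bound R_gamma is treated as an admissible, mutually equivalent structure.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
R_gamma = 2.5
candidates = [rng.uniform(-1, 6, (2, 2)) for _ in range(200)]
admissible = [c for c in candidates if kmeans_empirical_risk(X, c) <= R_gamma]
print(len(admissible), "admissible structures out of", len(candidates))
```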
{"_id": "de135542d20285fa40a8ae91eb109385572f6160", "title": "Paper windows: interaction techniques for digital paper", "text": "In this paper, we present Paper Windows, a prototype windowing environment that simulates the use of digital paper displays. By projecting windows on physical paper, Paper Windows allows the capturing of physical affordances of paper in a digital world. The system uses paper as an input device by tracking its motion and shape with a Vicon Motion Capturing System. We discuss the design of a number of interaction techniques for manipulating information on paper displays."}
{"_id": "5b0f9417de6b616199c6bd15b3ca552d46973de8", "title": "Deep Convolutional Neural Network for 6-DOF Image Localization", "text": "Wee present an accurate and robust method for six degree of freedom image localization. There are two key-points of our method, 1). automatic immense photo synthesis and labeling from point cloud model and, 2). pose estimation with deep convolutional neural networks (ConvNets) regression. Our model can directly regresses 6-DOF camera poses from images, accurately describing where and how it was captured. We achieved an accuracy within 1 meters and 1 degree on our outdoor dataset, which covers about 20, 000m on our school campus. Unlike previous point cloud registration solutions, our model supports low resolution images (i.e. 224\u00d7224 in our settings), and is tiny in size when finished training. Moreover, in pose estimation, our model uses O(1) time & space complexity as trainset grows. We will show the importance to localization using hundreds of thousands of generated and self-labeled \u201dphotos\u201d came from a short video. We will show our model\u2019s robustness despite of illumination and seasonal variances, which usually fails methods that leverage image feature descriptors like SIFT. Furthermore, we will show the ability of transfer our model trained on one scene to another, and the gains in accuracy and efficiency."}
{"_id": "9b63b9da0351abddee88114971a6b3f62e3a528d", "title": "Characterizing in-text citations in scientific articles: A large-scale analysis", "text": "We report characteristics of in-text citations in over five million full text articles from two large databases \u2013 the PubMed Central Open Access subset and Elsevier journals \u2013 as functions of time, textual progression, and scientific field. The purpose of this study is to understand the characteristics of in-text citations in a detailed way prior to pursuing other studies focused on answering more substantive research questions. As such, we have analyzed in-text citations in several ways and report many findings here. Perhaps most significantly, we find that there are large field-level differences that are reflected in position within the text, citation interval (or reference age), and citation counts of references. In general, the fields of Biomedical and Health Sciences, Life and Earth Sciences, and Physical Sciences and Engineering have similar reference distributions, although they vary in their specifics. The two remaining fields, Mathematics and Computer Science and Social Science and Humanities, have different reference distributions from the other three fields and between themselves. We also show that in all fields the numbers of sentences, references, and in-text mentions per article have increased over time, and that there are field-level and temporal differences in the numbers of in-text mentions per reference. A final finding is that references mentioned only once tend to be much more highly cited than those mentioned multiple times."}
{"_id": "1c99df3eba23275195679d14015eabd954157ca8", "title": "Determined to Die! Ability to Act Following Multiple Self-inflicted Gunshot Wounds to the Head. The Cook County Office of Medical Examiner Experience (2005-2012) and Review of Literature.", "text": "Cases of multiple (considered 2+) self-inflicted gunshot wounds are a rarity and require careful examination of the scene of occurrence; thorough consideration of the decedent's psychiatric, medical, and social histories; and accurate postmortem documentation of the gunshot wounds. We present a series of four cases of multiple self-inflicted gunshot wounds to the head from the Cook County Medical Examiner's Office between 2005 and 2012 including the first case report of suicide involving eight gunshot wounds to the head. In addition, a review of the literature concerning multiple self-inflicted gunshot wounds to the head is performed. The majority of reported cases document two gunshot entrance wound defects. Temporal regions are the most common affected regions (especially the right and left temples). Determining the capability to act following a gunshot wound to the head is necessary in crime scene reconstruction and in differentiation between homicide and suicide."}
{"_id": "c5b51c0a5aad2996a4d2548c003a4262203c718f", "title": "Definition and Multidimensionality of Security Awareness: Close Encounters of the Second Order", "text": "This study proposes and examines a multidimensional definition of information security awareness. We also investigate its antecedents and analyze its effects on compliance with organizational information security policies. The above research goals are tested through the theoretical lens of technology threat avoidance theory and protection motivation theory. Information security awareness is defined as a second-order construct composed of the elements of threat and coping appraisals supplemented by the responsibilities construct to account for organizational environment. The study was executed in two stages. First, the participants (employees of a municipality) were exposed to a series of phishing messages. Second, the same individuals were asked to participate in a survey designed to examine their security awareness. The research model was tested using PLS-SEM approach. The results indicate that security awareness is in fact a second-order formative construct composed of six components. There are significant differences in security awareness levels between the victims of the phishing experiment and the employees who maintain compliance with security policies. Our study extends the theory by proposing and validating a general, yet practical definition of security awareness. It also bridges the gap between theory and practice - our contextualization of security awareness draws heavily on both fields."}
{"_id": "1c1b8a049ac9e76b88ef4cea43f88097d275abdc", "title": "Knowledge discovery and data mining in toxicology.", "text": "Knowledge discovery and data mining tools are gaining increasing importance for the analysis of toxicological databases. This paper gives a survey of algorithms, capable to derive interpretable models from toxicological data, and presents the most important application areas. The majority of techniques in this area were derived from symbolic machine learning, one commercial product was developed especially for toxicological applications. The main application area is presently the detection of structure-activity relationships, very few authors have used these techniques to solve problems in epidemiological and clinical toxicology. Although the discussed algorithms are very flexible and powerful, further research is required to adopt the algorithms to the specific learning problems in this area, to develop improved representations of chemical and biological data and to enhance the interpretability of the derived models for toxicological experts."}
{"_id": "1c9eb7a0a96cceb93c83827bba1c17d33d7240cc", "title": "Toward autonomous mapping and exploration for mobile robots through deep supervised learning", "text": "We consider an autonomous mapping and exploration problem in which a range-sensing mobile robot is guided by an information-based controller through an a priori unknown environment, choosing to collect its next measurement at the location estimated to yield the maximum information gain within its current field of view. We propose a novel and time-efficient approach to predict the most informative sensing action using a deep neural network. After training the deep neural network on a series of thousands of randomly-generated \u201cdungeon maps\u201d, the predicted optimal sensing action can be computed in constant time, with prospects for appealing scalability in the testing phase to higher dimensional systems. We evaluated the performance of deep neural networks on the autonomous exploration of two-dimensional workspaces, comparing several different neural networks that were selected due to their success in recent ImageNet challenges. Our computational results demonstrate that the proposed method provides high efficiency as well as accuracy in selecting informative sensing actions that support autonomous mobile robot exploration."}
{"_id": "4e343bb81bfd683778bce920039851f479d9c70a", "title": "Analysing the use of interactive technology to implement interactive teaching", "text": "Recent policy initiatives in England have focused on promoting \u2018interactive\u2019 teaching in schools, with a clear expectation that this will lead to improvements in learning. This expectation is based on the perceived success of such approaches in other parts of the world.At the same time, there has been a large investment in Information and Communication Technology (ICT) resources, and particularly in interactive whiteboard technology. This paper explores the idea of interactive teaching in relation to the interactive technology which might be used to support it. It explains the development of a framework for the detailed analysis of teaching and learning in activity settings which is designed to represent the features and relationships involved in interactivity. When applied to a case study of interactive teaching during a lesson involving a variety of technology-based activities, the framework reveals a confusion of purpose in students\u2019 use of an ICT resource that limits the potential for learning when students are working independently. Discussion of relationships between technical and pedagogical interactivity points a way forward concerning greater focus on learning goals during activity in order to enable learners to be more autonomous in exploiting ICT\u2019s affordances, and the conclusion identifies the variables and issues which need to be considered in future research which will illuminate this path."}
{"_id": "d611408b08c1950160fb15b4ca865056c1afc91e", "title": "Facial Expression Recognition From Image Sequence Based on LBP and Taylor Expansion", "text": "The aim of an automatic video-based facial expression recognition system is to detect and classify human facial expressions from image sequence. An integrated automatic system often involves two components: 1) peak expression frame detection and 2) expression feature extraction. In comparison with the image-based expression recognition system, the video-based recognition system often performs online detection, which prefers low-dimensional feature representation for cost-effectiveness. Moreover, effective feature extraction is needed for classification. Many recent recognition systems often incorporate rich additional subjective information and thus become less efficient for real-time application. In our facial expression recognition system, first, we propose the double local binary pattern (DLBP) to detect the peak expression frame from the video. The proposed DLBP method has a much lower-dimensional size and can successfully reduce detection time. Besides, to handle the illumination variations in LBP, logarithm-laplace (LL) domain is further proposed to get a more robust facial feature for detection. Finally, the Taylor expansion theorem is employed in our system for the first time to extract facial expression feature. We propose the Taylor feature pattern (TFP) based on the LBP and Taylor expansion to obtain an effective facial feature from the Taylor feature map. Experimental results on the JAFFE and Cohn-Kanade data sets show that the proposed TFP method outperforms some state-of-the-art LBP-based feature extraction methods for facial expression feature extraction and can be suited for real-time applications."}
{"_id": "ca4ef779dc1e5dc01231eb9805fa05bbbc51fec3", "title": "A 1.2V 64Gb 341GB/S HBM2 stacked DRAM with spiral point-to-point TSV structure and improved bank group data control", "text": "With the recent increasing interest in big data and artificial intelligence, there is an emerging demand for high-performance memory system with large density and high data-bandwidth. However, conventional DIMM-type memory has difficulty achieving more than 50GB/s due to its limited pin count and signal integrity issues. High-bandwidth memory (HBM) DRAM, with TSV technology and wide IOs, is a prominent solution to this problem, but it still has many limitations: including power consumption and reliability. This paper presents a power-efficient structure of TSVs with reliability and a cost-effective HBM DRAM core architecture."}
{"_id": "698b8181cd613a72adeac0d75252afe7f57a5180", "title": "Parallel tree-ensemble algorithms for GPUs using CUDA", "text": "We present two new parallel implementations of the tree-ensemble algorithms Random Forest (RF) and Extremely randomized trees (ERT) for emerging many-core platforms, e.g., contemporary graphics cards suitable for general-purpose computing (GPGPU). Random Forest and Extremely randomized trees are ensemble learners for classification and regression. They operate by constructing a multitude of decision trees at training time and outputting a prediction by comparing the outputs of the individual trees. Thanks to the inherent parallelism of the task, an obvious platform for its computation is to employ contemporary GPUs with a large number of processing cores. Previous parallel algorithms for Random Forests in the literature are either designed for traditional multi-core CPU platforms or early history GPUs with simpler hardware architecture and relatively few number of cores. The new parallel algorithms are designed for contemporary GPUs with a large number of cores and take into account aspects of the newer hardware architectures as memory hierarchy and thread scheduling. They are implemented using the C/C++ language and the CUDA interface for best possible performance on NVidia-based GPUs. An experimental study comparing with the most important previous solutions for CPU and GPU platforms shows significant improvement for the new implementations, often with several magnitudes."}
{"_id": "b135aaf8822b97e139ff87a0dd5515e044ba2ae1", "title": "The Role of Network Analysis in Industrial and Applied Mathematics", "text": "Many problems in industry \u2014 and in the social, natural, information, and medical sciences \u2014 involve discrete data and benefit from approaches from subjects such as network science, information theory, optimization, probability, and statistics. Because the study of networks is concerned explicitly with connectivity between different entities, it has become very prominent in industrial settings, and this importance has been accentuated further amidst the modern data deluge. In this commentary, we discuss the role of network analysis in industrial and applied mathematics, and we give several examples of network science in industry. We focus, in particular, on discussing a physical-applied-mathematics approach to the study of networks. 1 ar X iv :1 70 3. 06 84 3v 2 [ cs .S I] 1 4 Se p 20 17 The Role of Network Analysis in Industrial and Applied Mathematics: A Physical-Applied-Mathematics Perspective Mason A. Porter and Sam D. Howison Department of Mathematics, University of California, Los Angeles, Los Angeles, California 90095, USA Mathematical Institute, University of Oxford, Oxford OX2 6GG, UK CABDyN Complexity Centre, University of Oxford, Oxford OX1 1HP, UK"}
{"_id": "9904cbfba499285206d51cffe50e9a060c43e60f", "title": "Measuring retail company performance using credit scoring techniques", "text": "This paper proposes a theoretical framework for predicting financial distress based on Hunt\u2019s (2000) \u2018Resource-Advantage (R-A) Theory of Competition\u2019. The study focuses on the US retail market. Five credit scoring methodologies \u2013Na\u00efve Bayes, Logistic Regression, Recursive Partition, Artificial Neural Network, and Sequential Minimal Optimization\u2014 are used on a sample of 195 healthy companies and 51 distressed firms over different time periods from 1994 to 2002. Analyses provide sufficient evidence that the five credit scoring methodologies have sound classification ability. In the time period of one year before financial distress, logistic regression model shows identical performance with neural network model based on the accuracy rate and shows the best performance in terms of AUROC value. Other models are slightly worse for predicting financial distress, but still present high accuracy rate and AUROC value. Moreover, the methodologies remain sound even five years prior to financial distress with classification accuracy rates above 85% and AUROC values above 0.85 for all five methodologies. This paper also shows external environment influences exist based on the na\u00efve bayes, logistic, recursive partition and SMO models, but these influences are weak. With regards to the model applicability, a subset of the different models is compared with Moody\u2019s rankings. It is found that both SMO and Logistic models are better than the Neural Network model in terms of similarity with Moody\u2019s ranking, with SMO being slightly better than the Logistic Model."}
{"_id": "1458f7b234b827931155aea5eab1a80b652d6c4a", "title": "Are we dependent upon coffee and caffeine? A review on human and animal data", "text": "Caffeine is the most widely used psychoactive substance and has been considered occasionally as a drug of abuse. The present paper reviews available data on caffeine dependence, tolerance, reinforcement and withdrawal. After sudden caffeine cessation, withdrawal symptoms develop in a small portion of the population but are moderate and transient. Tolerance to caffeine-induced stimulation of locomotor activity has been shown in animals. In humans, tolerance to some subjective effects of caffeine seems to occur, but most of the time complete tolerance to many effects of caffeine on the central nervous system does not occur. In animals, caffeine can act as a reinforcer, but only in a more limited range of conditions than with classical drugs of dependence. In humans, the reinforcing stimuli functions of caffeine are limited to low or rather moderate doses while high doses are usually avoided. The classical drugs of abuse lead to quite specific increases in cerebral functional activity and dopamine release in the shell of the nucleus accumbens, the key structure for reward, motivation and addiction. However, caffeine doses that reflect the daily human consumption, do not induce a release of dopamine in the shell of the nucleus accumbens but lead to a release of dopamine in the prefrontal cortex, which is consistent with caffeine reinforcing properties. Moreover, caffeine increases glucose utilization in the shell of the nucleus accumbens only at rather high doses that stimulate most brain structures, non-specifically, and likely reflect the side effects linked to high caffeine ingestion. That dose is also 5-10-fold higher than the one necessary to stimulate the caudate nucleus, which mediates motor activity and the structures regulating the sleep-wake cycle, the two functions the most sensitive to caffeine. In conclusion, it appears that although caffeine fulfils some of the criteria for drug dependence and shares with amphetamines and cocaine a certain specificity of action on the cerebral dopaminergic system, the methylxanthine does not act on the dopaminergic structures related to reward, motivation and addiction."}
{"_id": "bb5588e5726e67c6368cf173d54d431a26632cc1", "title": "Approximation algorithms for data placement in arbitrary networks", "text": "We study approximation algorithms for placing replicated data in arbitrary networks. Consider a network of nodes with individual storage capacities and a metric communication cost function, in which each node periodically issues a request for an object drawn from a collection of uniform-length objects. We consider the problem of placing copies of the objects among the nodes such that the average access cost is minimized. Our main result is a polynomial-time constant-factor approximation algorithm for this placement problem. Our algorithm is based on a careful rounding of a linear programming relaxation of the problem. We also show that the data placement problem is MAXSNP-hard.\nWe extend our approximation result to a generalization of the data placement problem that models additional costs such as the cost of realizing the placement. We also show that when object lengths are non-uniform, a constant-factor approximation is achievable if the capacity at each node in the approximate solution is allowed to exceed that in the optimal solution by the length of the largest object."}
{"_id": "0157dcd6122c20b5afc359a799b2043453471f7f", "title": "Exploiting Similarities among Languages for Machine Translation", "text": "Dictionaries and phrase tables are the basis of modern statistical machine translation systems. This paper develops a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures based on large monolingual data and mapping between languages from small bilingual data. It uses distributed representation of words and learns a linear mapping between vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90% precision@5 for translation of words between English and Spanish. This method makes little assumption about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pairs."}
{"_id": "0e2795b1329b25ba3709584b96fd5cb4c96f6f22", "title": "A Systematic Comparison of Various Statistical Alignment Models", "text": "We present and compare various methods for computing word alignments using statistical or heuristic models. We consider the five alignment models presented in Brown, Della Pietra, Della Pietra, and Mercer (1993), the hidden Markov alignment model, smoothing techniques, and refinements. These statistical models are compared with two heuristic models based on the Dice coefficient. We present different methods for combining word alignments to perform a symmetrization of directed statistical alignment models. As evaluation criterion, we use the quality of the resulting Viterbi alignment compared to a manually produced reference alignment. We evaluate the models on the German-English Verbmobil task and the French-English Hansards task. We perform a detailed analysis of various design decisions of our statistical alignment system and evaluate these on training corpora of various sizes. An important result is that refined alignment models with a first-order dependence and a fertility model yield significantly better results than simple heuristic models. In the Appendix, we present an efficient training algorithm for the alignment models presented."}
{"_id": "0f16ab376632ee83f2a3af21e96ebb925a8ac8b8", "title": "Combining Bilingual and Comparable Corpora for Low Resource Machine Translation", "text": "Statistical machine translation (SMT) performance suffers when models are trained on only small amounts of parallel data. The learned models typically have both low accuracy (incorrect translations and feature scores) and low coverage (high out-of-vocabulary rates). In this work, we use an additional data resource, comparable corpora, to improve both. Beginning with a small bitext and corresponding phrase-based SMT model, we improve coverage by using bilingual lexicon induction techniques to learn new translations from comparable corpora. Then, we supplement the model\u2019s feature space with translation scores estimated over comparable corpora in order to improve accuracy. We observe improvements between 0.5 and 1.7 BLEU translating Tamil, Telugu, Bengali, Malayalam, Hindi, and Urdu into English."}
{"_id": "1b4e04381ddd2afab1660437931cd62468370a98", "title": "Part-of-Speech Tagging with Neural Networks", "text": "Text corpora which are tagged with part-of-speech information are useful in many areas of linguistic research. In this paper, a new part-of-speech tagging method hased on neural networks (Net-Tagger) is presented and its performance is compared to that of a llMM-tagger (Cutting et al., 1992) and a trigrambased tagger (Kempe, 1993). It is shown that the Net-Tagger performs as well as the trigram-based tagger and better than the iIMM-tagger."}
{"_id": "d42ef23ee18d80f75643bdb3625fedf4d943a51b", "title": "Review of Ontology Matching Approaches and Challenges", "text": "Ontology mapping aims to solve the semantic heterogeneity problems such as ambiguous entity names, different entity granularity, incomparable categorization, and various instances of different ontologies. The mapping helps to search or query data from different sources. Ontology mapping is necessary in many applications such as data integration, ontology evolution, data warehousing, e-commerce and data exchange in various domains such as purchase order, health, music and e-commerce. It is performed by ontology matching approaches that find semantic correspondences between ontology entities. In this paper, we review state of the art ontology matching approaches. We describe the approaches according to instancebased, schema-based, instance and schema-based, usage-based, element-level, and structure-level. The analysis of the existing approaches will assist us in revealing some challenges in ontology mapping such as handling ontology matching errors, user involvement and reusing previous match operations. We explain the way of handling the challenges using new strategy in order to increase the performance."}
{"_id": "bfea83f7d17793ec99cac2ceab07c76db5be9fba", "title": "Decision Trees: Theory and Algorithms", "text": "4."}
{"_id": "8c235d86755720ae535c9e4128fa64b5ac4d6fb0", "title": "Clustering of periodic multichannel timeseries data with application to plasma fluctuations", "text": "A periodic datamining algorithm has been developed and used to extract distinct plasma fluctuations in multichannel oscillatory timeseries data. The technique uses the Expectation Maximisation algorithm to solve for the maximum likelihood estimates and cluster assignments of a mixture of multivariate independent von Mises distributions (EM-VMM). The performance of the algorithm shows significant benefits when compared to a periodic k-means algorithm and clustering using non-periodic techniques on several artificial datasets and real experimental data. Additionally, a new technique for identifying interesting features in multichannel oscillatory timeseries data is described (STFT-clustering). STFTclustering identifies the coincidence of spectral features overmost channels of amulti-channel array using the averaged short time Fourier transform of the signals. These features are filtered using clustering to remove noise. This method is particularly good at identifying weaker features and complements existing methods of feature extraction. Results from applying the STFT-clustering and EM-VMM algorithm to the extraction and clustering of plasma wave modes in the time series data from a helical magnetic probe array on the H-1NF heliac are presented. \u00a9 2014 Elsevier B.V. All rights reserved."}
{"_id": "88f68acad516005b008d16ed99062156afdde34b", "title": "Gamification Solutions to Enhance Software User Engagement - A Systematic Review", "text": "Gamification is the use of video-game mechanics and elements in nongame contexts to enhance user engagement and performance. The purpose of this study is to conduct a systematic review to have an in-depth investigation into the existing gamification solutions targeted at solving user engagement problems in different categories of software. We carried out this systematic review by proposing a framework of gamifying process, which is the basis for comparison of existing gamification solutions. In order to report the review, the primary studies are categorized according to the following: a) gamified software and their platforms, b) elements of the gamifying process, c) gamification solutions in each software type, d) gamification solutions for software user engagement problems, e) gamification solutions in general, and f) effects of gamification on software user engagement and performance. Based on the search procedure and criteria, a total of 78 primary studies were extracted. Most of the studies focused on educational and social software, which were developed for web or mobile platforms. We concluded that the number of studies on motivating users to use software content, solving problems in learning software, and using real identity is very limited. Furthermore, few studies have been carried out on gamifying the following software categories: productivity software, cloud storage, utility software, entertainment software, search engine software, tool software, fitness software, software engineering, information worker software, and health-care software. In addition, a large number of gamification solutions are relatively simple and require improvement. Thus, for future studies, researchers can work on the items discovered in this review; they can improve the quality of the current gamified systems by using a wide variety of game mechanics and interface elements, utilizing a combination of contextual types of rewards and giving users the ability to use received rewards \u201cin-game\u201d and \u201cout-game\u201d."}
{"_id": "8aef56e3ed2a4743cb393bf067f3a17a7412a4bc", "title": "Optimal control approach for pneumatic artificial muscle with using pressure-force conversion model", "text": "In this paper, we propose an optimal control framework for pneumatic actuators. In particular, we consider using Pneumatic Artificial Muscle (PAM) as a part of Pneumatic-Electric (PE) hybrid actuation system. An optimal control framework can be useful for PE hybrid system to properly distribute desired torque outputs to the actuators that have different characteristics. In the optimal control framework, the standard choice to represent control cost is squared force or torque outputs. However, since the control input for PAM is pressure rather than the force or the torque, we should explicitly consider the pressure of PAM as the control cost in an objective function of the optimal control method. We show that we are able to use pressure input as the control cost for PAM by explicitly considering the model which represents a relationship between the pressure input and the force output of PAM. We demonstrate that one-DOF robot with the PE hybrid actuation system can generate pressure-optimized ball throwing movements by using the optimal control method."}
{"_id": "68ba338be70fd3c5bdbc1c271243740f2e0a0f0c", "title": "Beyond Independence: Probabilistic Models for Query Approximation on Binary Transaction Data", "text": "We investigate the problem of generating fast approximate a nswers to queries posed to large sparse binary data sets. We focus in particular on probabilistic mode l-based approaches to this problem and develop a number of techniques that are significantly more accurate t han a baseline independence model. In particular, we introduce two techniques for building probabil ist c models from frequent itemsets: the itemset maximum entropy method, and the itemset inclusion-exclusi on model. In the maximum entropy method we treat itemsets as constraints on the distribution of the q uery variables and use the maximum entropy principle to build a joint probability model for the query at tributes online. In the inclusion-exclusion model itemsets and their frequencies are stored in a data structur e alled an ADtree that supports an efficient implementation of the inclusion-exclusion principle in orde r to answer the query. We empirically compare these two itemset-based models to direct querying of the ori ginal data, querying of samples of the original data, as well as other probabilistic models such as the indep endence model, the Chow-Liu tree model, and the Bernoulli mixture model. These models are able to handle high-dimensionality (hundreds or thousands of attributes), whereas most other work on this topic has foc used on relatively low-dimensional OLAP problems. Experimental results on both simulated and realwor d transaction data sets illustrate various fundamental tradeoffs between approximation error, model complexity, and the online time required to compute a query answer."}
{"_id": "84b54e5fb253e2b15caa21776c1bdd11ceae0eb0", "title": "MapMarker: Extraction of Postal Addresses and Associated Information for General Web Pages", "text": "Address information is essential for people\u2019s daily life. People often need to query addresses of unfamiliar location through Web and then use map services to mark down the location for direction purpose. Although both address information and map services are available online, they are not well combined. Users usually need to copy individual address from a Web site and paste it to another Web site with map services to locate its direction. Such copy and paste operations have to be repeated if multiple addresses are listed on a single page such as public school list or apartment list. Furthermore, associated information with individual address has to be copied and included on each marker for better comprehension. Our research is devoted to automate the above process and make the combination an easier task for users. The main techniques applied here include postal address extraction and associated information extraction. We apply sequence labeling algorithm based on Conditional Random Fields (CRFs) to train models for address extraction. Meanwhile, using the extracted addresses as landmarks, we apply pattern mining to identify the boundaries of address blocks and extract associated information with each individual address. The experimental result shows high F-score at 91% for postal address extraction and 87% accuracy for associated information extraction."}
{"_id": "90522a98ccce3aa0ce20b4dfedb76518b886ed96", "title": "Revising the Structural Framework for Marketing Management", "text": "Special thanks to Robert Skipper and Aaron Hyman for their assistance on an earlier version of this manuscript. Also thanks to Shaun McQuitty, Robin Peterson, Chuck Pickett, Kevin Shanahan, and the Journal of Business Research editors and reviewers, for their helpful comments. An earlier version of this manuscript won the Shaw Award for best paper presented at 2001 Society for Marketing Advances conference. An abridged version of this manuscript has been accepted for publication in Journal of Business Research."}
{"_id": "ae4315453cd8378ce73b744ca9589657ff79de37", "title": "End-to-End Learning of Task-Oriented Dialogs", "text": "In this thesis proposal, we address the limitations of conventional pipeline design of taskoriented dialog systems and propose end-toend learning solutions. We design neural network based dialog system that is able to robustly track dialog state, interface with knowledge bases, and incorporate structured query results into system responses to successfully complete task-oriented dialog. In learning such neural network based dialog systems, we propose hybrid offline training and online interactive learning methods. We introduce a multi-task learning method in pre-training the dialog agent in a supervised manner using task-oriented dialog corpora. The supervised training agent can further be improved via interacting with users and learning online from user demonstration and feedback with imitation and reinforcement learning. In addressing the sample efficiency issue with online policy learning, we further propose a method by combining the learning-from-user and learningfrom-simulation approaches to improve the online interactive learning efficiency."}
{"_id": "e03c61fbcf8b7e6df3dbb9b7d80f1915b048f0ab", "title": "Learning model-based strategies in simple environments with hierarchical q-networks", "text": "Recent advances in deep learning have allowed artificial agents to rival human-level performance on a wide range of complex tasks; however, the ability of these networks to learn generalizable strategies remains a pressing challenge. This critical limitation is due in part to two factors: the opaque information representation in deep neural networks and the complexity of the task environments in which they are typically deployed. Here we propose a novel Hierarchical Q-Network (HQN), motivated by theories of the hierarchical organization of the human prefrontal cortex, that attempts to identify lower dimensional patterns in the value landscape that can be exploited to construct an internal model of rules in simple environments. We draw on combinatorial games, where there exists a single optimal strategy for winning that generalizes across other features of the game, to probe the strategy generalization of the HQN and other reinforcement learning (RL) agents using variations of Wythoff\u2019s game. Traditional RL approaches failed to reach satisfactory performance on variants of Wythoff\u2019s Game; however, the HQN learned heuristic-like strategies that generalized across changes in board configuration. More importantly, the HQN allowed for transparent inspection of the agent\u2019s internal model of the game following training. Our results show how a biologically inspired hierarchical learner can facilitate learning abstract rules to promote robust and flexible action policies in simplified training environments with clearly delineated optimal strategies. 1 ar X iv :1 80 1. 06 68 9v 1 [ cs .A I] 2 0 Ja n 20 18"}
{"_id": "e8403aaac14c5abb3e617391ff71d491f644e1e2", "title": "Ending the war between Sales & Marketing.", "text": "Sales departments tend to believe that marketers are out of touch with what's really going on in the marketplace. Marketing people, in turn, believe the sales force is myopic--too focused on individual customer experiences, insufficiently aware of the larger market, and blind to the future. In short, each group undervalues the other's contributions. Both stumble (and organizational performance suffers) when they are out of sync. Yet few firms seem to make serious overtures toward analyzing and enhancing the relationship between these two critical functions. Curious about the misalignment between Sales and Marketing, the authors interviewed pairs of chief marketing officers and sales vice presidents to capture their perspectives. They looked in depth at the relationship between Sales and Marketing in a variety of companies in different industries. Their goal was to identify best practices that could enhance the joint performance and increase the contributions of these two functions. Among their findings: The marketing function takes different forms in different companies at different product life cycle stages. Marketing's increasing influence in each phase of an organization's growth profoundly affects its relationship with Sales. The strains between Sales and Marketing fall into two main categories: economic (a single budget is typically divided, between Sales and Marketing, and not always evenly) and cultural (the two functions attract very different types of people who achieve success by spending their time in very different ways). In this article, the authors describe the four types of relationships Sales and Marketing typically exhibit. They provide a diagnostic to help readers assess their companies' level of integration, and they offer recommendations for more closely aligning the two functions."}
{"_id": "51331e1a0607285ab710bff9d6b25aadf2546ba6", "title": "The Enigmatic temporal pole: a review of findings on social and emotional processing.", "text": "The function of the anterior-most portion of the temporal lobes, the temporal pole, is not well understood. Anatomists have long considered it part of an extended limbic system based on its location posterior to the orbital frontal cortex and lateral to the amygdala, along with its tight connectivity to limbic and paralimbic regions. Here we review the literature in both non-human primates and humans to assess the temporal pole's putative role in social and emotional processing. Reviewed findings indicate that it has some role in both social and emotional processes, including face recognition and theory of mind, that goes beyond semantic memory. We propose that the temporal pole binds complex, highly processed perceptual inputs to visceral emotional responses. Because perceptual inputs remain segregated into dorsal (auditory), medial (olfactory) and ventral (visual) streams, the integration of emotion with perception is channel specific."}
{"_id": "813308251c76640f0f9f98c54339ae73752793aa", "title": "Short-Text Clustering using Statistical Semantics", "text": "Short documents are typically represented by very sparse vectors, in the space of terms. In this case, traditional techniques for calculating text similarity results in measures which are very close to zero, since documents even the very similar ones have a very few or mostly no terms in common. In order to alleviate this limitation, the representation of short-text segments should be enriched by incorporating information about correlation between terms. In other words, if two short segments do not have any common words, but terms from the first segment appear frequently with terms from the second segment in other documents, this means that these segments are semantically related, and their similarity measure should be high. Towards achieving this goal, we employ a method for enhancing document clustering using statistical semantics. However, the problem of high computation time arises when calculating correlation between all terms. In this work, we propose the selection of a few terms, and using these terms with the Nystr\\\"om method to approximate the term-term correlation matrix. The selection of the terms for the Nystr\\\"om method is performed by randomly sampling terms with probabilities proportional to the lengths of their vectors in the document space. This allows more important terms to have more influence on the approximation of the term-term correlation matrix and accordingly achieves better accuracy."}
{"_id": "3819427b2653724372a13f2194d113bdc67f2b72", "title": "QUO VADIS , DYNAMIC CAPABILITIES ? ACONTENT-ANALYTICREVIEWOFTHECURRENTSTATEOF KNOWLEDGE AND RECOMMENDATIONS FOR FUTURE RESEARCH", "text": "Although the dynamic capabilities perspective has become one of the most frequently used theoretical lenses in management research, critics have repeatedly voiced their frustration with this literature, particularly bemoaning the lack of empirical knowledge and the underspecification of the construct of dynamic capabilities. But research on dynamic capabilities has advanced considerably since its early years, in which most contributions to this literature were purely conceptual. A plethora of empirical studies as well as further theoretical elaborations have shed substantial light on a variety of specific, measurable factors connected to dynamic capabilities. Our article starts out by analyzing these studies to develop a meta-framework that specifies antecedents, dimensions, mechanisms, moderators, and outcomes of dynamic capabilities identified in the literature to date. This framework provides a comprehensive and systematic synthesis of the dynamic capabilities perspective that reflects the richness of the research while at the same time unifying it into a cohesive, overarching model. Such an analysis has not yet been undertaken; no comprehensive framework with this level of detail has previously been presented for dynamic capabilities. Our analysis shows where research has made the most progress and where gaps and unresolved tensions remain. Based on this analysis, we propose a forward-looking research agenda that outlines directions for future research."}
{"_id": "bf1cee21ef693725c9e05da8eeb464573ee5cefd", "title": "Is Tom Cruise Threatened ? Using Netflix Prize Data to Examine the Long Tail of Electronic Commerce", "text": "We analyze a large data set from Netflix, the leading online movie rental company, to shed new light on the causes and consequences of the Long Tail effect, which suggests that on the Internet, over time, consumers will increasingly shift away from hit products and toward niche products. We examine the aggregate level demand as well as demand at the individual consumer level and we find that the consumption of both the hit and the niche movies decreased over time when the popularity of the movies is ranked in absolute terms (e.g., the top/bottom 10 titles). However, we also observe that the active product variety has increased dramatically over the study period. To separate out the demand diversification effect from the shift in consumer preferences, we propose to measure the popularity of movies in relative terms by dynamically adjusting for the current product variety (e.g., the top/bottom 1% of titles). Using this alternative definition of popularity, we find that the demand for the hits rises, while the demand for the niches still falls. We conclude that new movie titles appear much faster than consumers discover them. Finally, we find no evidence that niche titles satisfy consumer tastes better than hit titles and that a small number of heavy users are more likely to venture into niches than light users."}
{"_id": "edfc90cef4872faa942136c5f824bbe4c6839c57", "title": "Improvasher: A Real-Time Mashup System for Live Musical Input", "text": "In this paper we present Improvasher a real-time musical accompaniment system which creates an automatic mashup to accompany live musical input. Improvasher is built around two music processing modules, the first, a performance following technique, makes beat-synchronous predictions of chroma features from a live musical input. The second, a music mashup system, determines the compatibility between beat-synchronous chromagrams from different pieces of music. Through the combination of these two techniques, a real-time predictive mashup can be generated towards a new form of automatic accompaniment for interactive musical performance."}
{"_id": "6321f959144d155276ec11ce3e6d366dfaefa163", "title": "Implementing Intelligent Traffic Control System for Congestion Control, Ambulance Clearance, and Stolen Vehicle Detection", "text": "This paper presents an intelligent traffic control system to pass emergency vehicles smoothly. Each individual vehicle is equipped with special radio frequency identification (RFID) tag (placed at a strategic location), which makes it impossible to remove or destroy. We use RFID reader, NSK EDK-125-TTL, and PIC16F877A system-on-chip to read the RFID tags attached to the vehicle. It counts number of vehicles that passes on a particular path during a specified duration. It also determines the network congestion, and hence the green light duration for that path. If the RFID-tag-read belongs to the stolen vehicle, then a message is sent using GSM SIM300 to the police control room. In addition, when an ambulance is approaching the junction, it will communicate to the traffic controller in the junction to turn ON the green light. This module uses ZigBee modules on CC2500 and PIC16F877A system-on-chip for wireless communications between the ambulance and traffic controller. The prototype was tested under different combinations of inputs in our wireless communication laboratory and experimental results were found as expected."}
{"_id": "498e08e8e2f4e44a115fdf541583dbb3fa80d284", "title": "CAD-based pose estimation for random bin-picking of multiple objects using a RGB-D camera", "text": "In this paper, we propose a CAD-based 6-DOF pose estimation design for random bin-picking of multiple different objects using a Kinect RGB-D sensor. 3D CAD models of objects are constructed via a virtual camera, which generates a point cloud database for object recognition and pose estimation. A voxel grid filter is suggested to reduce the number of 3D point cloud of objects for reducing computing time of pose estimation. A voting-scheme method was adopted for the 6-DOF pose estimation a swell as object recognition of different type objects in the bin. Furthermore, an outlier filter is designed to filter out bad matching poses and occluded ones, so that the robot arm always picks up the upper object in the bin to increase pick up success rate. A series of experiments on a Kuka 6-axis robot revels that the proposed system works satisfactorily to pick up all random objects in the bin. The average recognition rate of three different type objects is 93.9% and the pickup success rate is 89.7%."}
{"_id": "e92771cf6244a4b5965f3cae60d16131774b794c", "title": "Linear codes over Z4+uZ4: MacWilliams identities, projections, and formally self-dual codes", "text": "Linear codes are considered over the ring Z 4 + uZ 4 , a non-chain extension of Z 4. Lee weights, Gray maps for these codes are defined and MacWilliams identities for the complete, symmetrized and Lee weight enumer-ators are proved. Two projections from Z 4 + uZ 4 to the rings Z 4 and F 2 + uF 2 are considered and self-dual codes over Z 4 +uZ 4 are studied in connection with these projections. Finally three constructions are given for formally self-dual codes over Z 4 + uZ 4 and their Z 4-images together with some good examples of formally self-dual Z 4-codes obtained through these constructions."}
{"_id": "bd7ccb1ba361b4bba213c595b55eccaf3482423c", "title": "Millimeter-Wave Compact EBG Structure for Mutual Coupling Reduction Applications", "text": "A new millimeter-wave (MMW), electromagnetic band-gap (EBG) structure is presented. The proposed EBG structure without the use of metallic vias or vertical components is formed by etching two slots and adding two connecting bridges to a conventional uniplanar EBG unit-cell. The transmission characteristics of the proposed EBG structure are measured. Results show that the proposed EBG structure has a wide bandgap around the 60 GHz band. The size of the proposed EBG unit-cell is 78% less than a conventional uniplanar EBG, and 72% less than a uniplanar-compact EBG (UC-EBG) operating at the same frequency band. Moreover, and despite the fabrication limitations at the 60 GHz band, the proposed EBG unit-cell provides at least 12% more size reduction than any other planar EBG structures at microwave frequencies. Its enhanced performance and applicability to reduce mutual coupling in antenna arrays are then investigated. Results show a drastic decrease in the mutual coupling level. This EBG structure can find its applications in MMW wireless communication systems."}
{"_id": "eeb889a7b1c82ab8bcdd22615abefbdddbae8e99", "title": "Alternative activation of macrophages", "text": "The classical pathway of interferon-\u03b3-dependent activation of macrophages by T helper 1 (TH1)-type responses is a well-established feature of cellular immunity to infection with intracellular pathogens, such as Mycobacterium tuberculosis and HIV. The concept of an alternative pathway of macrophage activation by the TH2-type cytokines interleukin-4 (IL-4) and IL-13 has gained credence in the past decade, to account for a distinctive macrophage phenotype that is consistent with a different role in humoral immunity and repair. In this review, I assess the evidence in favour of alternative macrophage activation in the light of macrophage heterogeneity, and define its limits and relevance to a range of immune and inflammatory conditions."}
{"_id": "1e5668364b5831c3dc1fac0cfcf8ef3f275f08c3", "title": "EEG Signal Classification Using Wavelet Feature Extraction and Neural Networks", "text": "Decision support systems have been utilised since 1960, providing physicians with fast and accurate means towards more accurate diagnoses and increased tolerance when handling missing or incomplete data. This paper describes the application of neural network models for classification of electroencephalogram (EEG) signals. Decision making was performed in two stages: initially, a feature extraction scheme using the wavelet transform (WT) has been applied and then a learning-based algorithm classifier performed the classification. The performance of the neural model was evaluated in terms of training performance and classification accuracies and the results confirmed that the proposed scheme has potential in classifying the EEG signals"}
{"_id": "e577546dd7c767a207f7ca3d8e0148c94aaac857", "title": "ProM: The Process Mining Toolkit", "text": "Nowadays, all kinds of information systems store detailed information in logs. Process mining has emerged as a way to analyze these systems based on these detailed logs. Unlike classical data mining, the focus of process mining is on processes. First, process mining allows us to extract a process model from an event log. Second, it allows us to detect discrepancies between a modeled process (as it was envisioned to be) and an event log (as it actually is). Third, it can enrich an existing model with knowledge derived from an event log. This paper presents our tool ProM, which is the world-leading tool in the area of process mining."}
{"_id": "fc40ad1238fba787dd8a58a7aed57a8d020a6fdc", "title": "Artificial neural networks: fundamentals, computing, design, and application.", "text": "Artificial neural networks (ANNs) are relatively new computational tools that have found extensive utilization in solving many complex real-world problems. The attractiveness of ANNs comes from their remarkable information processing characteristics pertinent mainly to nonlinearity, high parallelism, fault and noise tolerance, and learning and generalization capabilities. This paper aims to familiarize the reader with ANN-based computing (neurocomputing) and to serve as a useful companion practical guide and toolkit for the ANNs modeler along the course of ANN project development. The history of the evolution of neurocomputing and its relation to the field of neurobiology is briefly discussed. ANNs are compared to both expert systems and statistical regression and their advantages and limitations are outlined. A bird's eye review of the various types of ANNs and the related learning rules is presented, with special emphasis on backpropagation (BP) ANNs theory and design. A generalized methodology for developing successful ANNs projects from conceptualization, to design, to implementation, is described. The most common problems that BPANNs developers face during training are summarized in conjunction with possible causes and remedies. Finally, as a practical application, BPANNs were used to model the microbial growth curves of S. flexneri. The developed model was reasonably accurate in simulating both training and test time-dependent growth curves as affected by temperature and pH."}
{"_id": "25406e6733a698bfc4ac836f8e74f458e75dad4f", "title": "What Size Net Gives Valid Generalization?", "text": "We address the question of when a network can be expected to generalize from m random training examples chosen from some arbitrary probability distribution, assuming that future test examples are drawn from the same distribution. Among our results are the following bounds on appropriate sample vs. network size. Assume 0 < \u220a 1/8. We show that if m O(W/\u220a log N/\u220a) random examples can be loaded on a feedforward network of linear threshold functions with N nodes and W weights, so that at least a fraction 1 \u220a/2 of the examples are correctly classified, then one has confidence approaching certainty that the network will correctly classify a fraction 1 \u220a of future test examples drawn from the same distribution. Conversely, for fully-connected feedforward nets with one hidden layer, any learning algorithm using fewer than (W/\u220a) random training examples will, for some distributions of examples consistent with an appropriate weight choice, fail at least some fixed fraction of the time to find a weight choice that will correctly classify more than a 1 \u220a fraction of the future test examples."}
{"_id": "656a33c1db546da8490d6eba259e2a849d73a001", "title": "Learning in Artificial Neural Networks: A Statistical Perspective", "text": "The premise of this article is that learning procedures used to train artificial neural networks are inherently statistical techniques. It follows that statistical theory can provide considerable insight into the properties, advantages, and disadvantages of different network learning methods. We review concepts and analytical results from the literatures of mathematical statistics, econometrics, systems identification, and optimization theory relevant to the analysis of learning in artificial neural networks. Because of the considerable variety of available learning procedures and necessary limitations of space, we cannot provide a comprehensive treatment. Our focus is primarily on learning procedures for feedforward networks. However, many of the concepts and issues arising in this framework are also quite broadly relevant to other network learning paradigms. In addition to providing useful insights, the material reviewed here suggests some potentially useful new training methods for artificial neural networks."}
{"_id": "fbe24a2d9598c620324e3bd51e2f817cd35e9c81", "title": "Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights", "text": "A two-layer neural network can be used to approximate any nonlinear function. T h e behavior of the hidden nodes tha t allows the network to do this is described. Networks w i th one input are analyzed first, and the analysis is then extended to networks w i t h mult iple inputs. T h e result of th is analysis is used to formulate a method for ini t ial izat ion o f the weights o f neural networks to reduce t ra in ing t ime. Training examples are given and the learning curve for these examples are shown to illustrate the decrease in necessary training t ime. Introduction Two-layer feed forward neural networks have been proven capable of approximating any arbitrary functions [l], given that they have sufficient numbers of nodes in their hidden layers. We offer a description of how this works, along with a method of speeding up the training process by choosing the networks\u2019 initial weights. The relationship between the inputs and the output of a two-layer neural network may be described by Equation (1) H l y = wi . sigmoid(LqX + W b i ) (1) i=O where y is the network\u2019s output, X is the input vector, H is the number of hidden nodes, Wi is the weight vector of the i th node of the hidden layer, Wbi is the bias weight of the ith hidden node, w i is the weight of the output layer which connects the i th hidden unit to the output. The behavior of hidden nodes in two-layer networks with one input To illustrate the behavior of the hidden nodes, a two-layer network with one input is trained to approximate a function of one variable d(z) . That is, the network is trained to produce d(z) given z as input using the back-propagation algorithm [2]. The output of the network is given as"}
{"_id": "f66696ce5c60b8fe71590521718213316fb2ea2f", "title": "Large Margin Metric Learning for Multi-Label Prediction", "text": "Canonical correlation analysis (CCA) and maximum margin output coding (MMOC) methods have shown promising results for multi-label prediction, where each instance is associated with multiple labels. However, these methods require an expensive decoding procedure to recover the multiple labels of each testing instance. The testing complexity becomes unacceptable when there are many labels. To avoid decoding completely, we present a novel large margin metric learning paradigm for multi-label prediction. In particular, the proposed method learns a distance metric to discover label dependency such that instances with very different multiple labels will be moved far away. To handle many labels, we present an accelerated proximal gradient procedure to speed up the learning process. Comprehensive experiments demonstrate that our proposed method is significantly faster than CCA and MMOC in terms of both training and testing complexities. Moreover, our method achieves superior prediction performance compared with state-of-the-art methods."}
{"_id": "4afa918506a45be77b0682156c3dfdc956272fa3", "title": "Optimal Design of Energy-Efficient Multi-User MIMO Systems: Is Massive MIMO the Answer?", "text": "Assume that a multi-user multiple-input multiple-output (MIMO) system is designed from scratch to uniformly cover a given area with maximal energy efficiency (EE). What are the optimal number of antennas, active users, and transmit power? The aim of this paper is to answer this fundamental question. We consider jointly the uplink and downlink with different processing schemes at the base station and propose a new realistic power consumption model that reveals how the above parameters affect the EE. Closed-form expressions for the EE-optimal value of each parameter, when the other two are fixed, are provided for zero-forcing (ZF) processing in single-cell scenarios. These expressions prove how the parameters interact. For example, in sharp contrast to common belief, the transmit power is found to increase (not to decrease) with the number of antennas. This implies that energy-efficient systems can operate in high signal-to-noise ratio regimes in which interference-suppressing signal processing is mandatory. Numerical and analytical results show that the maximal EE is achieved by a massive MIMO setup wherein hundreds of antennas are deployed to serve a relatively large number of users using ZF processing. The numerical results show the same behavior under imperfect channel state information and in symmetric multi-cell scenarios."}
{"_id": "0d94ee90e5d91fa1eb3d5e26d8a2cd9a1e57812c", "title": "A systematic analysis of textual variability modeling languages", "text": "Industrial variability models tend to grow in size and complexity due to ever-increasing functionality and complexity of software systems. Some authors report on variability models specifying several thousands of variabilities. However, traditional variability modeling approaches do not seem to scale adequately to cope with size and complexity of such models. Recently, textual variability modeling languages have been advocated as one scalable solution.\n In this paper, we provide a systematic analysis of the capabilities of current textual variability modeling languages, in particular regarding variability management in the large. Towards this aim, we define a classification schema consisting of five dimensions, classify ten different textual variability modeling languages using the classification schema and provide an analysis. In summary, some textual variability modeling languages go beyond textual representations of traditional variability modeling approaches and provide sophisticated modeling concepts and constraint languages. Three textual variability modeling approaches already support mechanisms for large-scale variability modeling such as model composition, modularization, or evolution support."}
{"_id": "0a70ea1496ccd01ea3e51afe60f508ee6c0984ec", "title": "Cracking the Code of Biodiversity Responses to Past Climate Change.", "text": "How individual species and entire ecosystems will respond to future climate change are among the most pressing questions facing ecologists. Past biodiversity dynamics recorded in the paleoecological archives show a broad array of responses, yet significant knowledge gaps remain. In particular, the relative roles of evolutionary adaptation, phenotypic plasticity, and dispersal in promoting survival during times of climate change have yet to be clarified. Investigating the paleo-archives offers great opportunities to understand biodiversity responses to future climate change. In this review we discuss the mechanisms by which biodiversity responds to environmental change, and identify gaps of knowledge on the role of range shifts and tolerance. We also outline approaches at the intersection of paleoecology, genomics, experiments, and predictive models that will elucidate the processes by which species have survived past climatic changes and enhance predictions of future changes in biological diversity."}
{"_id": "5dd5f3844c0402141e4083bccdde66303750f87c", "title": "Application of Artificial Intelligence to Real-Time Fault Detection in Permanent-Magnet Synchronous Machines", "text": "This paper discusses faults in rotating electrical machines in general and describes a fault detection technique using artificial neural network (ANN) which is an expert system to detect short-circuit fault currents in the stator windings of a permanent-magnet synchronous machine (PMSM). The experimental setup consists of PMSM coupled mechanically to a dc motor configured to run in torque mode. Particle swarm optimization is used to adjust the weights of the ANN. All simulations are carried out in MATLAB/SIMULINK environment. The technique is shown to be effective and can be applied to real-time fault detection."}
{"_id": "38438103e787b7b2d112596fd14225872a5403f3", "title": "Language Models with Pre-Trained (GloVe) Word Embeddings", "text": "In this work we present a step-by-step implementation of training a Language Model (LM) , using Recurrent Neural Network (RNN) and pre-trained GloVe word embeddings, introduced by Pennigton et al. in [1]. The implementation is following the general idea of training RNNs for LM tasks presented in [2] , but is rather using Gated Recurrent Unit (GRU) [3] for a memory cell, and not the more commonly used LSTM [4]. The implementation presented is based on using keras1 [5]."}
{"_id": "e22be626987f1288744bb9f0ffc60806b4ed8bbc", "title": "Comparison study of non-orthogonal multiple access schemes for 5G", "text": "With the development of mobile Internet and Internet of things (IoT), the 5th generation (5G) wireless communications will foresee explosive increase in mobile traffic. To address challenges in 5G such as higher spectral efficiency, massive connectivity, and lower latency, some non-orthogonal multiple access (NOMA) schemes have been recently actively investigated, including power-domain NOMA, multiple access with low-density spreading (LDS), sparse code multiple access (SCMA), multiuser shared access (MUSA), pattern division multiple access (PDMA), etc. Different from conventional orthogonal multiple access (OMA) schemes, NOMA can realize overloading by introducing some controllable interferences at the cost of slightly increased receiver complexity, which can achieve significant gains in spectral efficiency and accommodate much more users. In this paper, we will discuss basic principles and key features of three typical NOMA schemes, i.e., SCMA, MUSA, and PDMA. What's more, their performance in terms of uplink bit error rate (BER) will be compared. Simulation results show that in typical Rayleigh fading channels, SCMA has the best performance, while the BER performance of MUSA and PDMA are very close to each other. In addition, we also analyze the performance of PDMA using the same factor graph as SCMA, which indicates that the performance gain of SCMA over PDMA comes from both the difference of factor graph and the codebook optimization."}
{"_id": "1ac8d6ec012f0bca057ea0ca71df8c8746cf39c5", "title": "RPL-based multipath Routing Protocols for Internet of Things on Wireless Sensor Networks", "text": "In the last few years, Wireless Sensor Network (WSN) emerges and appears as an essential platform for prominent concept of Internet of Things (IoT). Their application ranges from so-called \u201csmart cities\u201d, \u201csmart homes\u201d over environmental monitoring. The connectivity in IoT mainly relies on RPL (IPv6 Routing Protocol for Low Power and Lossy Network) - a routing algorithm that constructs and maintains DODAGs (Destination Oriented Directed Acyclic Graph) to transmit data from sensors to root over a single path. However, due to the resource constraints of sensor nodes and the unreliability of wireless links, single-path routing approaches cannot be considered effective techniques to meet the performance demands of various applications. In order to overcome these problems, many individuals and group research focuses on multi-path solutions for RPL routing protocol. In this paper, we propose three multipath schemes based on RPL (Energy Load Balancing-ELB, Fast Local Repair-FLR and theirs combination-ELB-FLR) and integrate them in a modified IPv6 communication stack for IoT. These schemes are implemented in OMNET++ simulator and the experiment outcomes show that our approaches have achieved better energy efficiency, better end-to-end delay, packet delivery rate and network load balance compared to traditional solution of RPL."}
{"_id": "041326c202655cd60df276bf7a148f2ecddfc479", "title": "Cognitive architectures: Research issues and challenges", "text": "In this paper, we examine the motivations for research on cognitive architectures and review some candidates that have been explored in the literature. After this, we consider the capabilities that a cognitive architecture should support, some properties that it should exhibit related to representation, organization, performance, and learning, and some criteria for evaluating such architectures at the systems level. In closing, we discuss some open issues that should drive future research in this important area. 2008 Published by Elsevier B.V."}
{"_id": "e2a3bbfd375811c5fef523be8623904455af1cec", "title": "GRS: The green, reliability, and security of emerging machine to machine communications", "text": "Machine-to-machine communications is characterized by involving a large number of intelligent machines sharing information and making collaborative decisions without direct human intervention. Due to its potential to support a large number of ubiquitous characteristics and achieving better cost efficiency, M2M communications has quickly become a market-changing force for a wide variety of real-time monitoring applications, such as remote e-healthcare, smart homes, environmental monitoring, and industrial automation. However, the flourishing of M2M communications still hinges on fully understanding and managing the existing challenges: energy efficiency (green), reliability, and security (GRS). Without guaranteed GRS, M2M communications cannot be widely accepted as a promising communication paradigm. In this article, we explore the emerging M2M communications in terms of the potential GRS issues, and aim to promote an energy-efficient, reliable, and secure M2M communications environment. Specifically, we first formalize M2M communications architecture to incorporate three domains - the M2M, network, and application domains - and accordingly define GRS requirements in a systematic manner. We then introduce a number of GRS enabling techniques by exploring activity scheduling, redundancy utilization, and cooperative security mechanisms. These techniques hold promise in propelling the development and deployment of M2M communications applications."}
{"_id": "b6bf558edda0378cd756a0267801164109f5f5c5", "title": "Adverse drug reactions as cause of admission to hospital: prospective analysis of 18 820 patients.", "text": "OBJECTIVE\nTo ascertain the current burden of adverse drug reactions (ADRs) through a prospective analysis of all admissions to hospital.\n\n\nDESIGN\nProspective observational study.\n\n\nSETTING\nTwo large general hospitals in Merseyside, England.\n\n\nPARTICIPANTS\n18 820 patients aged > 16 years admitted over six months and assessed for cause of admission.\n\n\nMAIN OUTCOME MEASURES\nPrevalence of admissions due to an ADR, length of stay, avoidability, and outcome.\n\n\nRESULTS\nThere were 1225 admissions related to an ADR, giving a prevalence of 6.5%, with the ADR directly leading to the admission in 80% of cases. The median bed stay was eight days, accounting for 4% of the hospital bed capacity. The projected annual cost of such admissions to the NHS is 466m pounds sterling (706m Euros, 847m dollars). The overall fatality was 0.15%. Most reactions were either definitely or possibly avoidable. Drugs most commonly implicated in causing these admissions included low dose aspirin, diuretics, warfarin, and non-steroidal anti-inflammatory drugs other than aspirin, the most common reaction being gastrointestinal bleeding.\n\n\nCONCLUSION\nThe burden of ADRs on the NHS is high, accounting for considerable morbidity, mortality, and extra costs. Although many of the implicated drugs have proved benefit, measures need to be put into place to reduce the burden of ADRs and thereby further improve the benefit:harm ratio of the drugs."}
{"_id": "ed6e0366584769a5b4c45b4fdf847583038611ae", "title": "Health-related rumour detection on Twitter", "text": "In the last years social networks have emerged as a critical mean for information spreading. In spite of all the positive consequences this phenomenon brings, unverified and instrumentally relevant information statements in circulation, named as rumours, are becoming a potential threat to the society. Recently, there have been several studies on topic-independent rumour detection on Twitter. In this paper we present a novel rumour detection system which focuses on a specific topic, that is health-related rumours on Twitter. To this aim, we constructed a new subset of features including influence potential and network characteristics features. We tested our approach on a real dataset observing promising results, as it is able to correctly detect about 89% of rumours, with acceptable levels of precision."}
{"_id": "cf1fb664b28fa5dd4952d08c1cb631c20a504b02", "title": "Full bridge phase-shifted soft switching high-frequency inverter with boost PFC function for induction heating system", "text": "This paper is mainly concerned with a high frequency soft-switching PWM inverter suitable for consumer induction heating system. Proposed system is composed of soft switching chopper based voltage boost PFC input stage and phase shifted PWM controlled full bridge ZCZVS high frequency inverter stage. Its fundamentals of operating performances are illustrated and evaluated on the experimental results. Its effectiveness is substantially proved on the basis of the experimental results from a practical point of view."}
{"_id": "68af6bcc37d08863af5bb081301e562b597cb4fc", "title": "Automatic Modulation Classification of Overlapped Sources Using Multi-Gene Genetic Programming With Structural Risk Minimization Principle", "text": "As the spectrum environment becomes increasingly crowded and complicated, primary users may be interfered by secondary users and other illegal users. Automatic modulation classification (AMC) of a single source cannot recognize the overlapped sources. Consequently, the AMC of overlapped sources attracts much attention. In this paper, we propose a genetic programming-based modulation classification method for overlapped sources (GPOS). The proposed GPOS consists of two stages, the training stage, and the classification stage. In the training stage, multi-gene genetic programming (MGP)-based feature engineering transforms sample estimates of cumulants into highly discriminative MGP-features iteratively, until optimal MGP-features (OMGP-features) are obtained, where the structural risk minimization principle (SRMP) is employed to evaluate the classification performance of MGP-features and train the classifier. Moreover, a self-adaptive genetic operation is designed to accelerate the feature engineering process. In the classification stage, the classification decision is made by the trained classifier using the OMGP-features. Through simulation results, we demonstrate that the proposed scheme outperforms other existing methods in terms of classification performance and robustness in case of varying power ratios and fading channel."}
{"_id": "87a7ccc5f37cd846a978ae17d60b6dcd923bd996", "title": "Measuring geographical regularities of crowd behaviors for Twitter-based geo-social event detection", "text": "Recently, microblogging sites such as Twitter have garnered a great deal of attention as an advanced form of location-aware social network services, whereby individuals can easily and instantly share their most recent updates from any place. In this study, we aim to develop a geo-social event detection system by monitoring crowd behaviors indirectly via Twitter. In particular, we attempt to find out the occurrence of local events such as local festivals; a considerable number of Twitter users probably write many posts about these events. To detect such unusual geo-social events, we depend on geographical regularities deduced from the usual behavior patterns of crowds with geo-tagged microblogs. By comparing these regularities with the estimated ones, we decide whether there are any unusual events happening in the monitored geographical area. Finally, we describe the experimental results to evaluate the proposed unusuality detection method on the basis of geographical regularities obtained from a large number of geo-tagged tweets around Japan via Twitter."}
{"_id": "bccda0e4b34cc1b73b50f0faeb1d340919619825", "title": "Classic Hallucinogens and Mystical Experiences: Phenomenology and Neural Correlates.", "text": "This chapter begins with a brief review of descriptions and definitions of mystical-type experiences and the historical connection between classic hallucinogens and mystical experiences. The chapter then explores the empirical literature on experiences with classic hallucinogens in which claims about mystical or religious experiences have been made. A psychometrically validated questionnaire is described for the reliable measurement of mystical-type experiences occasioned by classic hallucinogens. Controlled laboratory studies show that under double-blind conditions that provide significant controls for expectancy bias, psilocybin can occasion complete mystical experiences in the majority of people studied. These effects are dose-dependent, specific to psilocybin compared to placebo or a psychoactive control substance, and have enduring impact on the moods, attitudes, and behaviors of participants as assessed by self-report of participants and ratings by community observers. Other studies suggest that enduring personal meaning in healthy volunteers and therapeutic outcomes in patients, including reduction and cessation of substance abuse behaviors and reduction of anxiety and depression in patients with a life-threatening cancer diagnosis, are related to the occurrence of mystical experiences during drug sessions. The final sections of the chapter draw parallels in human neuroscience research between the neural bases of experiences with classic hallucinogens and the neural bases of meditative practices for which claims of mystical-type experience are sometimes made. From these parallels, a functional neural model of mystical experience is proposed, based on changes in the default mode network of the brain that have been observed after the administration of classic hallucinogens and during meditation practices for which mystical-type claims have been made."}
{"_id": "97626d505052d2eb6ebeb194d4ba3d993c320fe4", "title": "Sparsely sampled Fourier ptychography.", "text": "Fourier ptychography (FP) is an imaging technique that applies angular diversity functions for high-resolution complex image recovery. The FP recovery routine switches between two working domains: the spectral and spatial domains. In this paper, we investigate the spectral-spatial data redundancy requirement of the FP recovery process. We report a sparsely sampled FP scheme by exploring the sampling interplay between these two domains. We demonstrate the use of the reported scheme for bypassing the high-dynamic-range combination step in the original FP recovery routine. As such, it is able to shorten the acquisition time of the FP platform by ~50%. As a special case of the sparsely sample FP, we also discuss a sub-sampled scheme and demonstrate its application in solving the pixel aliasing problem plagued in the original FP algorithm. We validate the reported schemes with both simulations and experiments. This paper provides insights for the development of the FP approach."}
{"_id": "fdbf82918de37e14d047893c074f59deb2879f32", "title": "Threats to Feminist Identity and Reactions to Gender Discrimination", "text": "The aim of this research was to examine conditions that modify feminists' support for women as targets of gender discrimination. In an experimental study we tested a hypothesis that threatened feminist identity will lead to greater differentiation between feminists and conservative women as victims of discrimination and, in turn, a decrease in support for non-feminist victims. The study was conducted among 96 young Polish female professionals and graduate students from Gender Studies programs in Warsaw who self-identified as feminists (Mage \u2009=\u200922.23). Participants were presented with a case of workplace gender discrimination. Threat to feminist identity and worldview of the discrimination victim (feminist vs. conservative) were varied between research conditions. Results indicate that identity threat caused feminists to show conditional reactions to discrimination. Under identity threat, feminists perceived the situation as less discriminatory when the target held conservative views on gender relations than when the target was presented as feminist. This effect was not observed under conditions of no threat. Moreover, feminists showed an increase in compassion for the victim when she was portrayed as a feminist compared to when she was portrayed as conservative. Implications for the feminist movement are discussed."}
{"_id": "9b9906a2cf7fe150faed8d618def803232684719", "title": "Dynamic Filters in Graph Convolutional Networks", "text": "Convolutional neural networks (CNNs) have massively impacted visual recognition in 2D images, and are now ubiquitous in state-of-the-art approaches. While CNNs naturally extend to other domains, such as audio and video, where data is also organized in rectangular grids, they do not easily generalize to other types of data such as 3D shape meshes, social network graphs or molecular graphs. To handle such data, we propose a novel graph-convolutional network architecture that builds on a generic formulation that relaxes the 1-to-1 correspondence between filter weights and data elements around the center of the convolution. The main novelty of our architecture is that the shape of the filter is a function of the features in the previous network layer, which is learned as an integral part of the neural network. Experimental evaluations on digit recognition, semi-supervised document classification, and 3D shape correspondence yield state-of-the-art results, significantly improving over previous work for shape correspondence."}
{"_id": "50dad1b5f35c0ba613fd79fae91d7270c64cea0f", "title": "BINSEC/SE: A Dynamic Symbolic Execution Toolkit for Binary-Level Analysis", "text": "When it comes to software analysis, several approaches exist from heuristic techniques to formal methods, which are helpful at solving different kinds ofproblems. Unfortunately very few initiative seek to aggregate this techniques in the same platform. BINSEC intend to fulfill this lack of binary analysis platform by allowing to perform modular analysis. This work focusses on BINSEC/SE, the new dynamic symbolic execution engine (DSE) implemented in BINSEC. We will highlight the novelties of the engine, especially in terms of interactions between concrete and symbolic execution or optimization of formula generation. Finally, two reverse engineering applications are shown in order to emphasize the tool effectiveness."}
{"_id": "4a5f9152c66f61158abc8e9eaa8de743129cdbba", "title": "Artificial intelligent firewall", "text": "Firewalls are now an integral part of network security. An intelligent firewall that prevents unauthorized access to a system has been developed. Artificial intelligence applications are uniquely suited for the ever-changing, ever-evolving world of network security. Typical firewalls are only as good as the information provided by the Network Administrator. A new type of attack creates vulnerabilities, which a static firewall does not have the ability to avoid without human direction. An AI-managed firewall service, however, can protect a computer network from known and future threats. We report in this paper on research in progress concerning the integration of different security techniques. A main purpose of the project is to integrate a smart detection engine into a firewall. The smart detection engine will aim at not only detecting anomalous network traffic as in classical IDSs, but also detecting unusual structures data packets that suggest the presence of virus data. We will report in this paper on the concept of an intelligent firewall that contains a smart detection engine for potentially malicious data packets."}
{"_id": "bc12715a1ddf1a540dab06bf3ac4f3a32a26b135", "title": "Tracking the Trackers: An Analysis of the State of the Art in Multiple Object Tracking", "text": "Standardized benchmarks are crucial for the majority of computer vision applications. Although leaderboards and ranking tables should not be over-claimed, benchmarks often provide the most objective measure of performance and are therefore important guides for research. We present a benchmark for Multiple Object Tracking launched in the late 2014, with the goal of creating a framework for the standardized evaluation of multiple object tracking methods. This paper collects the two releases of the benchmark made so far, and provides an in-depth analysis of almost 50 state-of-the-art trackers that were tested on over 11000 frames. We show the current trends and weaknesses of multiple people tracking methods, and provide pointers of what researchers should be focusing on to push the field forward."}
{"_id": "2e0db4d4c8bdc7e11541b362cb9f8972f66563ab", "title": "Personality Analysis Through Handwriting", "text": ""}
{"_id": "02599a02d46ea2f1c00e14cac2a76dcb156df8ee", "title": "Search Based Software Engineering", "text": "The use of evolutionary algorithms for solving multi-objective optimization problems has become increasingly popular, mainly within the last 15 years. From among the several research trends that have originated in recent years, one of the most promising is the use of hybrid approaches that allow to improve the performance of multi-objective evolutionary algorithms (MOEAs). In this talk, some of the most representative research on the use of hybrid approaches in evolutionary multi-objective optimization will be discussed. The topics discussed will include multi-objective memetic algorithms, hybridization of MOEAs with gradient-based methods and with direct search methods, as well as multi-objective hyperheuristics. Some applications of these approaches as well as some potential paths for future research in this area will also be briefly discussed. Towards Evolutionary Multitasking: A New Paradigm"}
{"_id": "2b286ed9f36240e1d11b585d65133db84b52122c", "title": "Real-time 3D eyelids tracking from semantic edges", "text": "State-of-the-art real-time face tracking systems still lack the ability to realistically portray subtle details of various aspects of the face, particularly the region surrounding the eyes. To improve this situation, we propose a technique to reconstruct the 3D shape and motion of eyelids in real time. By combining these results with the full facial expression and gaze direction, our system generates complete face tracking sequences with more detailed eye regions than existing solutions in real-time. To achieve this goal, we propose a generative eyelid model which decomposes eyelid variation into two low-dimensional linear spaces which efficiently represent the shape and motion of eyelids. Then, we modify a holistically-nested DNN model to jointly perform semantic eyelid edge detection and identification on images. Next, we correspond vertices of the eyelid model to 2D image edges, and employ polynomial curve fitting and a search scheme to handle incorrect and partial edge detections. Finally, we use the correspondences in a 3D-to-2D edge fitting scheme to reconstruct eyelid shape and pose. By integrating our fast fitting method into a face tracking system, the estimated eyelid results are seamlessly fused with the face and eyeball results in real time. Experiments show that our technique applies to different human races, eyelid shapes, and eyelid motions, and is robust to changes in head pose, expression and gaze direction."}
{"_id": "05c025af60aeab10a3069256674325802c844212", "title": "Recurrent Network Models for Human Dynamics", "text": "We propose the Encoder-Recurrent-Decoder (ERD) model for recognition and prediction of human body pose in videos and motion capture. The ERD model is a recurrent neural network that incorporates nonlinear encoder and decoder networks before and after recurrent layers. We test instantiations of ERD architectures in the tasks of motion capture (mocap) generation, body pose labeling and body pose forecasting in videos. Our model handles mocap training data across multiple subjects and activity domains, and synthesizes novel motions while avoiding drifting for long periods of time. For human pose labeling, ERD outperforms a per frame body part detector by resolving left-right body part confusions. For video pose forecasting, ERD predicts body joint displacements across a temporal horizon of 400ms and outperforms a first order motion model based on optical flow. ERDs extend previous Long Short Term Memory (LSTM) models in the literature to jointly learn representations and their dynamics. Our experiments show such representation learning is crucial for both labeling and prediction in space-time. We find this is a distinguishing feature between the spatio-temporal visual domain in comparison to 1D text, speech or handwriting, where straightforward hard coded representations have shown excellent results when directly combined with recurrent units [31]."}
{"_id": "02a88a2f2765b17c9ea76fe13148b4b8a9050b95", "title": "DeepPose: Human Pose Estimation via Deep Neural Networks", "text": "We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regres- sors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formula- tion which capitalizes on recent advances in Deep Learn- ing. We present a detailed empirical analysis with state-of- art or better performance on four academic benchmarks of diverse real-world images."}
{"_id": "092b64ce89a7ec652da935758f5c6d59499cde6e", "title": "Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments", "text": "We introduce a new dataset, Human3.6M, of 3.6 Million accurate 3D Human poses, acquired by recording the performance of 5 female and 6 male subjects, under 4 different viewpoints, for training realistic human sensing systems and for evaluating the next generation of human pose estimation models and algorithms. Besides increasing the size of the datasets in the current state-of-the-art by several orders of magnitude, we also aim to complement such datasets with a diverse set of motions and poses encountered as part of typical human activities (taking photos, talking on the phone, posing, greeting, eating, etc.), with additional synchronized image, human motion capture, and time of flight (depth) data, and with accurate 3D body scans of all the subject actors involved. We also provide controlled mixed reality evaluation scenarios where 3D human models are animated using motion capture and inserted using correct 3D geometry, in complex real environments, viewed with moving cameras, and under occlusion. Finally, we provide a set of large-scale statistical models and detailed evaluation baselines for the dataset illustrating its diversity and the scope for improvement by future work in the research community. Our experiments show that our best large-scale model can leverage our full training set to obtain a 20% improvement in performance compared to a training set of the scale of the largest existing public dataset for this problem. Yet the potential for improvement by leveraging higher capacity, more complex models with our large dataset, is substantially vaster and should stimulate future research. The dataset together with code for the associated large-scale learning models, features, visualization tools, as well as the evaluation server, is available online at http://vision.imar.ro/human3.6m."}
{"_id": "1fdc785c0152d86d661213038150195058a24703", "title": "Sparse deep belief net model for visual area V2", "text": "Motivated in part by the hierarchical organization of the cortex, a number of algorithms have recently been proposed that try to learn hierarchical, or \u201cdeep,\u201d structure from unlabeled data. While several authors have formally or informally compared their algorithms to computations performed in visual area V1 (and the cochlea), little attempt has been made thus far to evaluate these algorithms in terms of their fidelity for mimicking computations at deeper levels in the cortical hierarchy. This paper presents an unsupervised learning model that faithfully mimics certain properties of visual area V2. Specifically, we develop a sparse variant of the deep belief networks of Hinton et al. (2006). We learn two layers of nodes in the network, and demonstrate that the first layer, similar to prior work on sparse coding and ICA, results in localized, oriented, edge filters, similar to the Gabor functions known to model V1 cell receptive fields. Further, the second layer in our model encodes correlations of the first layer responses in the data. Specifically, it picks up both colinear (\u201ccontour\u201d) features as well as corners and junctions. More interestingly, in a quantitative comparison, the encoding of these more complex \u201ccorner\u201d features matches well with the results from the Ito & Komatsu\u2019s study of biological V2 responses. This suggests that our sparse variant of deep belief networks holds promise for modeling more higher-order features."}
{"_id": "497a80b2813cffb17f46af50e621a71505094528", "title": "Modeling Human Motion Using Binary Latent Variables", "text": "We propose a non-linear generative model for human motion data that uses an undirected model with binary latent variables and real-valued \u201cvisible\u201d variables that represent joint angles. The latent and visible variables at each time step receive directed connections from the visible variables at the last few time-steps. Such an architecture makes on-line inference efficient and allows us to use a simple approximate learning procedure. After training, the model finds a single set of parameters that simultaneously capture several different kinds of motion. We demonstrate the power of our approach by synthesizing various motion sequences and by performing on-line filling in of data lost during motion capture. Website: http://www.cs.toronto.edu/ \u223cgwtaylor/publications/nips2006mhmublv/"}
{"_id": "02de9d7b2c76a11896902c79b329a3034fc572b6", "title": "Efficient Large Message Broadcast using NCCL and CUDA-Aware MPI for Deep Learning", "text": "Emerging paradigms like High Performance Data Analytics (HPDA) and Deep Learning (DL) pose at least two new design challenges for existing MPI runtimes. First, these paradigms require an efficient support for communicating unusually large messages across processes. And second, the communication buffers used by HPDA applications and DL frameworks generally reside on a GPU's memory. In this context, we observe that conventional MPI runtimes have been optimized over decades to achieve lowest possible communication latency for relatively smaller message sizes (up-to 1 Megabyte) and that too for CPU memory buffers. With the advent of CUDA-Aware MPI runtimes, a lot of research has been conducted to improve performance of GPU buffer based communication. However, little exists in current state of the art that deals with very large message communication of GPU buffers. In this paper, we investigate these new challenges by analyzing the performance bottlenecks in existing CUDA-Aware MPI runtimes like MVAPICH2-GDR, and propose hierarchical collective designs to improve communication latency of the MPI_Bcast primitive by exploiting a new communication library called NCCL. To the best of our knowledge, this is the first work that addresses these new requirements where GPU buffers are used for communication with message sizes surpassing hundreds of megabytes. We highlight the design challenges for our work along with the details of design and implementation. In addition, we provide a comprehensive performance evaluation using a Micro-benchmark and a CUDA-Aware adaptation of Microsoft CNTK DL framework. We report up to 47% improvement in training time for CNTK using the proposed hierarchical MPI_Bcast design."}
{"_id": "bd1c2b1cda567cc73be398164fdd61c7c341cef5", "title": "Bifurcation Analysis of a Mathematical Model for Malaria Transmission", "text": "We present an ordinary differential equation mathematical model for the spread of malaria in human and mosquito populations. Susceptible humans can be infected when they are bitten by an infectious mosquito. They then progress through the exposed, infectious, and recovered classes, before reentering the susceptible class. Susceptible mosquitoes can become infected when they bite infectious or recovered humans, and once infected they move through the exposed and infectious classes. Both species follow a logistic population model, with humans having immigration and disease-induced death. We define a reproductive number, R0, for the number of secondary cases that one infected individual will cause through the duration of the infectious period. We find that the disease-free equilibrium is locally asymptotically stable when R0 < 1 and unstable when R0 > 1. We prove the existence of at least one endemic equilibrium point for all R0 > 1. In the absence of disease-induced death, we prove that the transcritical bifurcation at R0 = 1 is supercritical (forward). Numerical simulations show that for larger values of the disease-induced death rate, a subcritical (backward) bifurcation is possible at R0 = 1."}
{"_id": "406847bc624fd81a2fe63d42827e6755aa439ebb", "title": "The Distributed Dissimilar Redundancy Architecture of Fly-by-Wire Flight Control System", "text": "This article presents a distributed dissimilar redundancy architecture of fly-by-wire Flight Control System. It is based on many dissimilar flight control computer, redundancy structure which includes the cross arranged hydraulic and power supply, three types of data buses, lots of data correction monitors, a distributed control loop to avoid the correlated faults and achieve the rigid safety airworthiness requirements. A Fault Tree Analysis (FTA) is implemented to verify whether the system safety could meet system design targets."}
{"_id": "fd0e2f95fbe0da2e75288c8561e7553d8efe3325", "title": "Handwritten Digit Recognition: A Neural Network Demo", "text": "A handwritten digit recognition system was used in a demonstration project to visualize artificial neural networks, in particular Kohonen\u2019s self-organizing feature map. The purpose of this project was to introduce neural networks through a relatively easy-to-understand application to the general public. This paper describes several techniques used for preprocessing the handwritten digits, as well as a number of ways in which neural networks were used for the recognition task. Whereas the main goal was a purely educational one, a moderate recognition rate of 98% was reached on a test set."}
{"_id": "24c180807250d54a733ac830bda979f13cf12231", "title": "Unmanned Aerial Vehicle Based Wireless Sensor Network for Marine-Coastal Environment Monitoring", "text": "Marine environments are delicate ecosystems which directly influence local climates, flora, fauna, and human activities. Their monitorization plays a key role in their preservation, which is most commonly done through the use of environmental sensing buoy networks. These devices transmit data by means of satellite communications or close-range base stations, which present several limitations and elevated infrastructure costs. Unmanned Aerial Vehicles (UAV) are another alternative for remote environmental monitoring which provide new types of data and ease of use. These aircraft are mainly used in video capture related applications, in its various light spectrums, and do not provide the same data as sensing buoys, nor can they be used for such extended periods of time. The aim of this research is to provide a flexible, easy to deploy and cost-effective Wireless Sensor Network (WSN) for monitoring marine environments. This proposal uses a UAV as a mobile data collector, low-power long-range communications and sensing buoys as part of a single WSN. A complete description of the design, development, and implementation of the various parts of this system is presented, as well as its validation in a real-world scenario."}
{"_id": "b9d831d6e7c015317b2be3fdbbc467e3e717e58d", "title": "A Modularization Method for Battery Equalizers Using Multiwinding Transformers", "text": "This paper proposes a modularized global architecture using multi-winding transformers for battery cell balancing. The global balancing for a series-connected battery string is achieved based on forward conversion in each battery module and based on flyback conversion among modules. The demagnetization of the multiwinding transformers is also simultaneously achieved by the flyback conversion among modules without the need of additional demagnetizing circuits. Moreover, all MOSFET switches are driven by two complementary pulse width modulation signals without the requirement of cell voltage sensors, and energy can be automatically and simultaneously delivered from any high voltage cells to any low voltage cells. Compared with existing equalizers requiring additional balancing circuits for battery modules, the proposed modularized equalizer shares one circuit for the balancing among cells and modules. The balancing performance of the proposed equalizer is perfectly verified through experimental results, and the maximum balancing efficiency is up to 91.3%. In summary, the proposed modularized equalizer has the advantages of easier modularization, simpler control, higher efficiency, smaller size, and lower cost, ensuring the battery system higher reliability and easier implementation."}
{"_id": "b65bee86a4d65a52796a2db19ab2c49d855a59fa", "title": "Probabilistic diffusion tractography with multiple fibre orientations: What can we gain?", "text": "We present a direct extension of probabilistic diffusion tractography to the case of multiple fibre orientations. Using automatic relevance determination, we are able to perform online selection of the number of fibre orientations supported by the data at each voxel, simplifying the problem of tracking in a multi-orientation field. We then apply the identical probabilistic algorithm to tractography in the multi- and single-fibre cases in a number of example systems which have previously been tracked successfully or unsuccessfully with single-fibre tractography. We show that multi-fibre tractography offers significant advantages in sensitivity when tracking non-dominant fibre populations, but does not dramatically change tractography results for the dominant pathways."}
{"_id": "ecee90e6b4ac297403a8714138ae93913734a5c2", "title": "CAPNet: Continuous Approximation Projection For 3D Point Cloud Reconstruction Using 2D Supervision", "text": "Knowledge of 3D properties of objects is a necessity in order to build effective computer vision systems. However, lack of large scale 3D datasets can be a major constraint for datadriven approaches in learning such properties. We consider the task of single image 3D point cloud reconstruction, and aim to utilize multiple foreground masks as our supervisory data to alleviate the need for large scale 3D datasets. A novel differentiable projection module, called \u2018CAPNet\u2019, is introduced to obtain such 2D masks from a predicted 3D point cloud. The key idea is to model the projections as a continuous approximation of the points in the point cloud. To overcome the challenges of sparse projection maps, we propose a loss formulation termed \u2018affinity loss\u2019 to generate outlierfree reconstructions. We significantly outperform the existing projection based approaches on a large-scale synthetic dataset. We show the utility and generalizability of such a 2D supervised approach through experiments on a real-world dataset, where lack of 3D data can be a serious concern. To further enhance the reconstructions, we also propose a test stage optimization procedure to obtain reconstructions that display high correspondence with the observed input image."}
{"_id": "2f16ae3f04933f6c95a6d6ca664d05be47288e60", "title": "A Learning Scheme for Microgrid Islanding and Reconnection", "text": "This paper introduces a robust learning scheme that can dynamically predict the stability of the reconnection of subnetworks to a main grid. As the future electrical power systems tend towards smarter and greener technology, the deploymen t of self sufficient networks, or microgrids, becomes more likel y. Microgrids may operate on their own or synchronized with the main grid, thus control methods need to take into account islandi ng and reconnecting said networks. The ability to optimally and safely reconnect a portion of the grid is not well understood and, as of now, limited to raw synchronization between interconnection points. A support vector machine (SVM) leveraging real-time data from phasor measurement units (PMUs) is proposed to predict in real time whether the reconnection of a sub-network to the main grid would lead to stability or instability. A dynam ics simulator fed with pre-acquired system parameters is used t o create training data for the SVM in various operating states. The classifier was tested on a variety of cases and operating poin ts to ensure diversity. Accuracies of approximately 90% were obs erved throughout most conditions when making dynamic predictions of a given network. Keywords\u2014Synchrophasor, machine learning, microgrid, islanding, reconnection I. I NTRODUCTION As we make strides towards a smarter power system, it is important to explore new techniques and innovations to fully capture the potential of such a dynamic entity. Many large blackout events, such as the blackout of 2003, could have been prevented with smarter controls and better monito ring [1]. Phasor measurement units, or PMUs, are one such breakthrough that will allow progress to be made in both monitoring and implementing control to the system [2]. PMUs allow for direct measurement of bus voltages and angles at high sample rates which makes dynamic state estimation more feasible [3], [4]. With the use of PMUs, it is possible to improve upon current state estimation [5] and potentially o pen up new ways to control the grid. The addition of control techniques and dynamic monitoring will be important as we begin to integrate newer solutions, such as microgrids, int o the power network. With these advanced monitoring devices, microgrids become more feasible due to the potential for rea ltime monitoring schemes. The integration of microgrids bri ng many benefits such as the ability to operate while islanded as well as interconnected with the main grid; they provide a smooth integration for renewable energy sources that matc h C. Lassetter, E. Cotilla-Sanchez, and J. Kim are with the Sch ool of Electrical Engineering & Computer Science, Oregon State University, C orvallis, OR, 97331 USA e-mail:{lassettc,ecs,kimjinsu }@oregonstate.edu local demand. Unfortunately the implementation of microgr ids is still challenging due to lacking experience with the beha vior of control schemes during off-nominal operation. Currently, microgrids are being phased in slowly due in part to the difficulty of operating subnetworks independent ly as well as determining when they can be reconnected to the main grid. Works in the literature have focused on the potential of reconnecting microgrids to the main grid, in particular aiming at synchronizing the buses at points of interconnect with respects to their voltages, frequencies , and angles [6]. 
Effort has been directed at creating control sch emes to minimize power flow at the point of common coupling (PCC) using direct machine control, load shedding, as well a s energy storage, to aid in smooth reconnection [7], [8]. Upon reconnection of an islanded sub-network to the main grid, instability can cause damage on both ends. It is important to track instabilities on both the microgrid and main grid upon reconnection to accurately depict the outcome of reconnect ion. Most works focus on the synchronization of two networks before reconnection [9], [10]. In some cases we may need to look at larger microgrids or subnetworks in which multipl e PCCs exist. In such scenarios, it becomes much more difficult to implement a control scheme that satisfies goodreconnection tolerances in regards to minimizing bus frequency, angle, a nd voltage differences at each PCC. In addition to the possibil ity of multiple PCCs, it is possible that direct manipulation of the system becomes limited, compromised, or unsupported with respect to synchronization. In order to address these short comings, we implement an algorithm that dynamically tracks and makes predictions based on the system states, providing real-time stability information of potential reconnectio ns. An algorithm that tracks potential reconnection and island ing times needs to be robust to prevent incorrect operation during these scenarios. PMUs make use of GPS synchronization [11] which can create an attack platform for adversarie s by changing or shifting the time synchronization. Incorrec t or compromised usage could lead to incorrect predictions that would degrade system stability due to hidden failures that remain dormant until triggered by contingencies [12]. We propose an algorithm that can make accurate predictions in face of potentially compromised measurements. Due to the complexity of the power grid, it is difficult to come up with a verbatim standard depicting the potential sta bility after reconnection of a subnetwork. With advances in the artificial intelligence community, we can make use of machin e learning algorithms in order to explore vast combinations o f sensor inputs, states, and control actions. This can be done in a"}
{"_id": "849ff867cba2c102a593da5f6c4c61c95e09ffe2", "title": "Notification and awareness: synchronizing task-oriented collaborative activity", "text": "People working collaboratively must establish and maintain awareness of one another\u2019s intentions, actions and results. Notification systems typically support awareness of the presence, tasks and actions of collaborators, but they do not adequately support awareness of persistent and complex activities. We analysed awareness breakdowns in use of our Virtual School system\u2014stemming from problems related to the collaborative situation, group, task and tool support\u2014to motivate the concept of activity awareness. Activity awareness builds on prior conceptions of social and action awareness, but emphasizes the importance of activity context factors like planning and coordination. This work suggests design strategies for notification systems to better support collaborative activity. r 2003 Elsevier Science Ltd. All rights reserved."}
{"_id": "ba4a037153bff392b1e56a4109de4b04521f17b2", "title": "Design Challenges/Solutions for Environments Supporting the Analysis of Social Media Data in Crisis Informatics Research", "text": "Crisis informatics investigates how society's pervasive access to technology is transforming how it responds to mass emergency events. To study this transformation, researchers require access to large sets of data that because of their volume and heterogeneous nature are difficult to collect and analyze. To address this concern, we have designed and implemented an environment - EPIC Analyze - that supports researchers with the collection and analysis of social media data. Our research has identified the types of components - such as NoSQL, MapReduce, caching, and search - needed to ensure that these services are reliable, scalable, extensible, and efficient. We describe the design challenges encountered - such as data modeling, time vs. Space tradeoffs, and the need for a useful and usable system - when building EPIC Analyze and discuss its scalability, performance, and functionality."}
{"_id": "2326a12358718ae6cf3127eae6be91e0de7b7363", "title": "Anomaly Detection and Attribution Using Bayesian Networks", "text": "We present a novel approach to anomaly detection in Bayesian networks, enabling both the detection and explanation of anomalous cases in a dataset. By exploiting the structure of a Bayesian network, our algorithm is able to efficiently search for local maxima of data conflict between closely related variables. Benchmark tests using data simulated from complex Bayesian networks show that our approach provides a significant improvement over techniques that search for anomalies using the entire network, rather than its subsets. We conclude with demonstrations of the unique explanatory power of our approach in determining the observation(s) responsible for an anomaly. APPROVED FOR PUBLIC RELEASE"}
{"_id": "850d941ece492fd57c0bffad4b1eaf7b8d241337", "title": "Automatic Detection of Diabetic Retinopathy using Deep Convolutional Neural Network", "text": "The purpose of this project is to design an automated and efficient solution that could detect the symptoms of DR from a retinal image within seconds and simplify the process of reviewing and examination of images. Diabetic Retinopathy (DR) is a complication of diabetes that is caused by changes in the blood vessel of the retina and it is one of the leading causes of blindness in the developed world. Currently, detecting DR symptoms is a manual and time-consuming process. Recently, fullyconnected and convolutional neural networks have been trained to achieve state-of-the-art performance on a wide variety of tasks such as speech recognition, image classification, natural language processing, and bioinformatics. In our approach, we trained a deep Convolutional Neural Network model on a large dataset consisting around 35,000 images and used dropout layer techniques to achieve higher accuracy."}
{"_id": "19751e0f81a103658bbac2506f5d5c8e06a1c06a", "title": "STDP-based spiking deep convolutional neural networks for object recognition", "text": "Previous studies have shown that spike-timing-dependent plasticity (STDP) can be used in spiking neural networks (SNN) to extract visual features of low or intermediate complexity in an unsupervised manner. These studies, however, used relatively shallow architectures, and only one layer was trainable. Another line of research has demonstrated - using rate-based neural networks trained with back-propagation - that having many layers increases the recognition robustness, an approach known as deep learning. We thus designed a deep SNN, comprising several convolutional (trainable with STDP) and pooling layers. We used a temporal coding scheme where the most strongly activated neurons fire first, and less activated neurons fire later or not at all. The network was exposed to natural images. Thanks to STDP, neurons progressively learned features corresponding to prototypical patterns that were both salient and frequent. Only a few tens of examples per category were required and no label was needed. After learning, the complexity of the extracted features increased along the hierarchy, from edge detectors in the first layer to object prototypes in the last layer. Coding was very sparse, with only a few thousands spikes per image, and in some cases the object category could be reasonably well inferred from the activity of a single higher-order neuron. More generally, the activity of a few hundreds of such neurons contained robust category information, as demonstrated using a classifier on Caltech 101, ETH-80, and MNIST databases. We also demonstrate the superiority of STDP over other unsupervised techniques such as random crops (HMAX) or auto-encoders. Taken together, our results suggest that the combination of STDP with latency coding may be a key to understanding the way that the primate visual system learns, its remarkable processing speed and its low energy consumption. These mechanisms are also interesting for artificial vision systems, particularly for hardware solutions."}
{"_id": "4416236e5ee4239e86e3cf3db6a2d1a2ff2ae720", "title": "Weld : A Common Runtime for High Performance Data Analytics", "text": "Modern analytics applications combine multiple functions from different libraries and frameworks to build increasingly complex workflows. Even though each function may achieve high performance in isolation, the performance of the combined workflow is often an order of magnitude below hardware limits due to extensive data movement across the functions. To address this problem, we propose Weld, a runtime for data-intensive applications that optimizes across disjoint libraries and functions. Weld uses a common intermediate representation to capture the structure of diverse dataparallel workloads, including SQL, machine learning and graph analytics. It then performs key data movement optimizations and generates efficient parallel code for the whole workflow. Weld can be integrated incrementally into existing frameworks like TensorFlow, Apache Spark, NumPy and Pandas without changing their user-facing APIs. We show that Weld can speed up these frameworks, as well as applications that combine them, by up to 30\u00d7."}
{"_id": "1ed826370fd34f165744653c6c5e0a56a389dd7c", "title": "Passivity-Based Controller Design of Grid-Connected VSCs for Prevention of Electrical Resonance Instability", "text": "The time delay in the current control loop of a grid-connected voltage-source converter (VSC) may cause destabilization of electrical resonances in the grid or in the VSC's input filter. Instability is prevented if the input admittance of the VSC can be made passive. This paper presents an analytical controller design method for obtaining passivity. The method is equally applicable to single- and three-phase systems, i.e., in the latter case, for both stationary- and synchronous-frame control. Simulations and experiments verify the theoretical results."}
{"_id": "47d4838087a7ac2b995f3c5eba02ecdd2c28ba14", "title": "Automatic Recognition of Deceptive Facial Expressions of Emotion", "text": "Humans modify facial expressions in order to mislead observers regarding their true emotional states. Being able to recognize the authenticity of emotional displays is notoriously difficult for human observers. Evidence in experimental psychology shows that discriminative facial responses are short and subtle. This suggests that such behavior would be easier to distinguish when captured in high resolution at an increased frame rate. We are proposing SASE-FE, the first dataset of genuine and deceptive facial expressions of emotions for automatic recognition1 . We show that overall the problem of recognizing deceptive facial expressions can be successfully addressed by learning spatio-temporal representations of the data. For this purpose, we propose a method that aggregates features along fiducial trajectories in a deeply learnt feature space. Interesting additional results show that on average it is easier to distinguish among genuine expressions than deceptive ones and that certain emotion pairs are more difficult to distinguish than others."}
{"_id": "72468505ea6ced938493f4d47beda85e318d4c72", "title": "Coalition Structure Generation: Dynamic Programming Meets Anytime Optimization", "text": "Coalition structure generation involves partitioning a set of agents into exhaustive and disjoint coalitions so as to maximize the social welfare. What makes this such a challenging problem is that the number of possible solutions grows exponentially as the number of agents increases. To date, two main approaches have been developed to solve this problem, each with its own strengths and weaknesses. The state of the art in the first approach is the Improved Dynamic Programming (IDP) algorithm, due to Rahwan and Jennings, that is guaranteed to find an optimal solution in O(3), but which cannot generate a solution until it has completed its entire execution. The state of the art in the second approach is an anytime algorithm called IP, due to Rahwan et al., that provides worst-case guarantees on the quality of the best solution found so far, but which is O(n). In this paper, we develop a novel algorithm that combines both IDP and IP, resulting in a hybrid performance that exploits the strength of both algorithms and, at the same, avoids their main weaknesses. Our approach is also significantly faster (e.g. given 25 agents, it takes only 28% of the time required by IP, and 0.3% of the time required by IDP)."}
{"_id": "255a80360b407a92621ca70f89803eff4a6d3e32", "title": "Privacy-Preserving User-Auditable Pseudonym Systems", "text": "Personal information is often gathered and processed in a decentralized fashion. Examples include health records and governmental data bases. To protect the privacy of individuals, no unique user identifier should be used across the different databases. At the same time, the utility of the distributed information needs to be preserved which requires that it be nevertheless possible to link different records if they relate to the same user. Recently, Camenisch and Lehmann (CCS 15) have proposed a pseudonym scheme that addresses this problem by domain-specific pseudonyms. Although being unlinkable, these pseudonyms can be converted by a central authority (the converter). To protect the users' privacy, conversions are done blindly without the converter learning the pseudonyms or the identity of the user. Unfortunately, their scheme sacrifices a crucial privacy feature: transparency. Users are no longer able to inquire with the converter and audit the flow of their personal data. Indeed, such auditability appears to be diametral to the goal of blind pseudonym conversion. In this paper we address these seemingly conflicting requirements and provide a system where user-centric audits logs are created by the oblivious converter while maintaining all privacy properties. We prove our protocol to be UC-secure and give an efficient instantiation using novel building blocks."}
{"_id": "8501fa541d13634b021a7cab8d1d84ba2c5b9a7c", "title": "Boosting slow oscillations during sleep potentiates memory", "text": "There is compelling evidence that sleep contributes to the long-term consolidation of new memories. This function of sleep has been linked to slow (<1\u2009Hz) potential oscillations, which predominantly arise from the prefrontal neocortex and characterize slow wave sleep. However, oscillations in brain potentials are commonly considered to be mere epiphenomena that reflect synchronized activity arising from neuronal networks, which links the membrane and synaptic processes of these neurons in time. Whether brain potentials and their extracellular equivalent have any physiological meaning per se is unclear, but can easily be investigated by inducing the extracellular oscillating potential fields of interest. Here we show that inducing slow oscillation-like potential fields by transcranial application of oscillating potentials (0.75\u2009Hz) during early nocturnal non-rapid-eye-movement sleep, that is, a period of emerging slow wave sleep, enhances the retention of hippocampus-dependent declarative memories in healthy humans. The slowly oscillating potential stimulation induced an immediate increase in slow wave sleep, endogenous cortical slow oscillations and slow spindle activity in the frontal cortex. Brain stimulation with oscillations at 5 Hz\u2014another frequency band that normally predominates during rapid-eye-movement sleep\u2014decreased slow oscillations and left declarative memory unchanged. Our findings indicate that endogenous slow potential oscillations have a causal role in the sleep-associated consolidation of memory, and that this role is enhanced by field effects in cortical extracellular space."}
{"_id": "acedd1d4b7ab46a424dd9860557bd6b81c067c1d", "title": "Gestational diabetes mellitus.", "text": "Gestational diabetes mellitus (GDM) is defined as glucose intolerance of various degrees that is first detected during pregnancy. GDM is detected through the screening of pregnant women for clinical risk factors and, among at-risk women, testing for abnormal glucose tolerance that is usually, but not invariably, mild and asymptomatic. GDM appears to result from the same broad spectrum of physiological and genetic abnormalities that characterize diabetes outside of pregnancy. Indeed, women with GDM are at high risk for having or developing diabetes when they are not pregnant. Thus, GDM provides a unique opportunity to study the early pathogenesis of diabetes and to develop interventions to prevent the disease."}
{"_id": "65c876f9b8765499a5164c6cdedcb3792aac8eea", "title": "From the Shortest Vector Problem to the Dihedral Hidden Subgroup Problem", "text": "In Quantum Computation and Lattice Problems [11] Oded Regev presented the first known connection between lattices and quantum computation, in the form of a quantum reduction from the poly(n)-unique shortest vector problem to the dihedral hidden subgroup problem by sampling cosets. This article contains a summary of Regev\u2019s result."}
{"_id": "dd6330dbabf0e321a5fe2ea238eb0a96f54f2b59", "title": "Ontologies in E-Learning: Review of the Literature", "text": "We have witnessed a great interest in ontologies as an emerging technology within the Semantic Web. Its foundation that was laid in the last decade paved the way for the development of ontologies and systems that use ontologies in various domains, including the E-Learning domain. In this article, we survey key contributions related to the development and use of ontologies in the domain of E-Learning systems. We provide a framework for classification of the literature that is useful for the community of practice in categorizing work in this field and for determining possible research lines. We also discuss future trends in this area."}
{"_id": "59fe0f477f81a8671956b8d1363bdc06ae8b08b3", "title": "Vision-Based Gesture Recognition: A Review", "text": "The use of gesture as a natural interface serves as a motivating force for research in modeling, analyzing and recognition of gestures. In particular, human computer intelligent interaction needs vision-based gesture recognition, which involves many interdisciplinary studies. A survey on recent vision-based gesture recognition approaches is given in this paper. We shall review methods of static hand posture and temporal gesture recognition. Several application systems of gesture recognition are also described in this paper. We conclude with some thoughts about future research directions."}
{"_id": "87431da2f57fa6471712e9e48cf3b724af723d94", "title": "Methods for Designing Multiple Classifier Systems", "text": "In the field of pattern recognition, multiple classifier systems based on the combination of outputs of a set of different classifiers have been proposed as a method for the development of high performance classification systems. In this paper, the problem of design of multiple classifier system is discussed. Six design methods based on the so-called \u201coverproduce and choose\u201d paradigm are described and compared by experiments. Although these design methods exhibited some interesting features, they do not guarantee to design the optimal multiple classifier system for the classification task at hand. Accordingly, the main conclusion of this paper is that the problem of the optimal MCS design still remains open."}
{"_id": "d1ee87290fa827f1217b8fa2bccb3485da1a300e", "title": "Bagging predictors", "text": "Bagging predictors is a method for generating multiple versions of a predictor and using these to get an aggregated predictor. The aggregation averages over the versions when predicting a numerical outcome and does a plurality vote when predicting a class. The multiple versions are formed by making bootstrap replicates of the learning set and using these as new learning sets. Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy. The vital element is the instability of the prediction method. If perturbing the learning set can cause significant changes in the predictor constructed, then bagging can improve accuracy."}
{"_id": "e16b5afbe2acc57530359c065227dbb634b2109b", "title": "A Real-Time Continuous Gesture Recognition System for Sign Language", "text": "In this paper, a large vocabulary sign language interpreter is presented with real-time continuous gesture recognition of sign language using a DataGlove . The most critical problem, end-point detection in a stream of gesture input is first solved and then statistical analysis is done according to 4 parameters in a gesture : posture, position, orientation, and motion. We have implemented a prototype system with a lexicon of 250 vocabularies in Taiwanese Sign Language (TWL). This system uses hidden Markov models (HMMs) for 51 fundamental postures, 6 orientations, and 8 motion primitives. In a signerdependent way, a sentence of gestures based on these vocabularies can be continuously recognized in real-time and the average recognition rate is 80.4%."}
{"_id": "ec25da04ef7f09396ca00da3f9b5f2d9670cb6fc", "title": "Methods of combining multiple classifiers and their applications to handwriting recognition", "text": "Method of combining the classification powers of several classifiers is regarded as a general problem in various application areas of pattern recognition, and a systematic investigation has been made. Possible solutions to the problem can be divided into three categories according to the levels of information available from the various classifiers. Four approaches are proposed based on different methodologies for solving this problem. One is suitable for combining individual classifiers such as Bayesian, k-NN and various distance classifiers. The other three could be used for combining any kind of individual classifiers. On applying these methods to combine several classifiers for recognizing totally unconstrained handwritten numerals, the experimental results show that the performance of individual classifiers could be improved significantly. For example, on the U.S. zipcode database, the result of 98.9% recognition with 0.90% substitution and 0.2% rejection can be obtained, as well as a high reliability with 95% recognition, 0% substitution and 5% rejection. These results compared favorably to other research p u p s in Europe, Asia, and North America."}
{"_id": "bc0a189bfccc7634789f2c3ccd5beedac497b039", "title": "A single-stage three-phase high power factor rectifier with high-frequency isolation and regulated DC-bus based on the DCM SEPIC converter", "text": "In this paper, the main concepts related to the development of a single-stage three-phase high power factor rectifier, with high-frequency isolation and regulated DC bus are described. The structure operation is presented, being based on the DC-DC SEPIC converter operating in the discontinuous conduction mode. This operational mode provides to the rectifier a high power factor feature, with sinusoidal input current, without the use of any current sensors and current loop control. A design example and simulation results are presented in order to validate the theoretical analysis."}
{"_id": "649197627a94fc003384fb743cfd78cdf12b3306", "title": "SYSTEM DYNAMICS : SYSTEMIC FEEDBACK MODELING FOR POLICY ANALYSIS", "text": ""}
{"_id": "4ce68170f85560942ee51465e593b16560f9c580", "title": "Practical Matrix Completion and Corruption Recovery Using Proximal Alternating Robust Subspace Minimization", "text": "Low-rank matrix completion is a problem of immense practical importance. Recent works on the subject often use nuclear norm as a convex surrogate of the rank function. Despite its solid theoretical foundation, the convex version of the problem often fails to work satisfactorily in real-life applications. Real data often suffer from very few observations, with support not meeting the randomness requirements, ubiquitous presence of noise and potentially gross corruptions, sometimes with these simultaneously occurring. This paper proposes a Proximal Alternating Robust Subspace Minimization method to tackle the three problems. The proximal alternating scheme explicitly exploits the rank constraint on the completed matrix and uses the $$\\ell _0$$ \u2113 0 pseudo-norm directly in the corruption recovery step. We show that the proposed method for the non-convex and non-smooth model converges to a stationary point. Although it is not guaranteed to find the global optimal solution, in practice we find that our algorithm can typically arrive at a good local minimizer when it is supplied with a reasonably good starting point based on convex optimization. Extensive experiments with challenging synthetic and real data demonstrate that our algorithm succeeds in a much larger range of practical problems where convex optimization fails, and it also outperforms various state-of-the-art algorithms."}
{"_id": "b8f65d04152a8ecb0e632635e41c5383883f2754", "title": "Self-calibration and visual SLAM with a multi-camera system on a micro aerial vehicle", "text": "The use of a multi-camera system enables a robot to obtain a surround view, and thus, maximize its perceptual awareness of its environment. An accurate calibration is a necessary prerequisite if vision-based simultaneous localization and mapping (vSLAM) is expected to provide reliable pose estimates for a micro aerial vehicle (MAV) with a multi-camera system. On our MAV, we set up each camera pair in a stereo configuration. We propose a novel vSLAM-based self-calibration method for a multi-sensor system that includes multiple calibrated stereo cameras and an inertial measurement unit (IMU). Our selfcalibration estimates the transform with metric scale between each camera and the IMU. Once the MAV is calibrated, the MAV is able to estimate its global pose via a multi-camera vSLAM implementation based on the generalized camera model. We propose a novel minimal and linear 3-point algorithm that uses inertial information to recover the relative motion of the MAV with metric scale. Our constant-time vSLAM implementation with loop closures runs on-board the MAV in real-time. To the best of our knowledge, no published work has demonstrated realtime on-board vSLAM with loop closures. We show experimental results in both indoor and outdoor environments. The code for both the self-calibration and vSLAM is available as a set of ROS packages at https://github.com/hengli/vmav-ros-pkg."}
{"_id": "8e645951b8135d15e3229a8813a1782f9fbd18c7", "title": "CAB: Connectionist Analogy Builder", "text": "The ability to make informative comparisons is central to human cognition. Comparison involves aligning two representations and placing their elements into correspondence. Detecting correspondences is a necessary component of analogical inference, recognition, categorization, schema formation, and similarity judgment. Connectionist Analogy Builder (CAB) determines correspondences through a simple iterative computation that matches elements in one representation with elements playing compatible roles in the other representation while simultaneously enforcing structural constraints. CAB shows promise as a process model of comparison as its performance can be related to human performance (e.g., solution trajectory, error patterns, time-on-task). Furthermore, CAB\u2019s bounded working memory allows it to account for the inherent capacity limitations of human processing. CAB\u2019s strengths are its parsimony, transparency of operations, and ability to generate performance predictions. In this paper, CAB is evaluated against benchmark phenomena from the analogy literature. \u00a9 2003 Cognitive Science Society, Inc. All rights reserved."}
{"_id": "1a1df35585975e2ee551a88a615923a99f1b44b5", "title": "Co-fusion: Real-time segmentation, tracking and fusion of multiple objects", "text": "In this paper we introduce Co-Fusion, a dense SLAM system that takes a live stream of RGB-D images as input and segments the scene into different objects (using either motion or semantic cues) while simultaneously tracking and reconstructing their 3D shape in real time. We use a multiple model fitting approach where each object can move independently from the background and still be effectively tracked and its shape fused over time using only the information from pixels associated with that object label. Previous attempts to deal with dynamic scenes have typically considered moving regions as outliers, and consequently do not model their shape or track their motion over time. In contrast, we enable the robot to maintain 3D models for each of the segmented objects and to improve them over time through fusion. As a result, our system can enable a robot to maintain a scene description at the object level which has the potential to allow interactions with its working environment; even in the case of dynamic scenes."}
{"_id": "d468d43eab97ab090aa229108c7b75cb12dd0b18", "title": "Reverse Curriculum Generation for Reinforcement Learning", "text": "Many relevant tasks require an agent to reach a certain state, or to manipulate objects into a desired configuration. For example, we might want a robot to align and assemble a gear onto an axle or insert and turn a key in a lock. These tasks present considerable difficulties for reinforcement learning approaches, since the natural reward function for such goal-oriented tasks is sparse and prohibitive amounts of exploration are required to reach the goal and receive a learning signal. Past approaches tackle these problems by manually designing a task-specific reward shaping function to help guide the learning. Instead, we propose a method to learn these tasks without requiring any prior task knowledge other than obtaining a single state in which the task is achieved. The robot is trained in \u201creverse\u201d, gradually learning to reach the goal from a set of starting positions increasingly far from the goal. Our method automatically generates a curriculum of starting positions that adapts to the agent\u2019s performance, leading to efficient training on such tasks. We demonstrate our approach on difficult simulated fine-grained manipulation problems, not solvable by state-of-the-art reinforcement learning methods."}
{"_id": "5277f5662973e5251c70c8a04d7e995496ca9f4d", "title": "Error Mechanisms of the Oscillometric Fixed-Ratio Blood Pressure Measurement Method", "text": "The oscillometric fixed-ratio method is widely employed for non-invasive measurement of systolic and diastolic pressures (SP and DP) but is heuristic and prone to error. We investigated the accuracy of this method using an established mathematical model of oscillometry. First, to determine which factors materially affect the errors of the method, we applied a thorough parametric sensitivity analysis to the model. Then, to assess the impact of the significant parameters, we examined the errors over a physiologically relevant range of those parameters. The main findings of this model-based error analysis of the fixed-ratio method are that: (1) SP and DP errors drastically increase as the brachial artery stiffens over the zero trans-mural pressure regime; (2) SP and DP become overestimated and underestimated, respectively, as pulse pressure (PP) declines; (3) the impact of PP on SP and DP errors is more obvious as the brachial artery stiffens over the zero trans-mural pressure regime; and (4) SP and DP errors can be as large as 58\u00a0mmHg. Our final and main contribution is a comprehensive explanation of the mechanisms for these errors. This study may have important implications when using the fixed-ratio method, particularly in subjects with arterial disease."}
{"_id": "1273db39180f894a8b04e517669896f3b5975f14", "title": "2 Conditional Saliency 2 . 1 Lossy Coding", "text": "By the guidance of attention, human visual system is able to locate objects of interest in complex scene. In this paper, we propose a novel visual saliency detection method the conditional saliency for both image and video. Inspired by biological vision, the definition of visual saliency follows a strictly local approach. Given the surrounding area, the saliency is defined as the minimum uncertainty of the local region, namely the minimum conditional entropy, when the perceptional distortion is considered. To simplify the problem, we approximate the conditional entropy by the lossy coding length of multivariate Gaussian data. The final saliency map is accumulated by pixels and further segmented to detect the proto-objects. Experiments are conducted on both image and video. And the results indicate a robust and reliable feature invariance saliency."}
{"_id": "08bb1bc1cc40f3a0dfe264c80105fcd82e5a70d1", "title": "New reverse-conducting IGBT (1200V) with revolutionary compact package", "text": "Fuji Electric developed a 1200V class RC-IGBT based on our latest thin wafer process. The performance of this RC-IGBT shows the same relationship between conduction loss and switching loss as our 6th generation conventional IGBT and FWD. In addition its trade-off can be optimized for hard switching by lifetime killer. Calculations of the hard switching inverter loss and chip junction temperature (Tj) show that the optimized RC-IGBT can handle 35% larger current density per chip area. In order to utilize the high performance characteristics of the RC-IGBT, we assembled them in our newly developed compact package. This module can handle 58% higher current than conventional 100A modules at a 51% smaller footprint."}
{"_id": "1ca43c217aceabea4cff14bff1d81df2debe058f", "title": "Saliency Unified: A Deep Architecture for simultaneous Eye Fixation Prediction and Salient Object Segmentation", "text": "Human eye fixations often correlate with locations of salient objects in the scene. However, only a handful of approaches have attempted to simultaneously address the related aspects of eye fixations and object saliency. In this work, we propose a deep convolutional neural network (CNN) capable of predicting eye fixations and segmenting salient objects in a unified framework. We design the initial network layers, shared between both the tasks, such that they capture the object level semantics and the global contextual aspects of saliency, while the deeper layers of the network address task specific aspects. In addition, our network captures saliency at multiple scales via inception-style convolution blocks. Our network shows a significant improvement over the current state-of-the-art for both eye fixation prediction and salient object segmentation across a number of challenging datasets."}
{"_id": "452aec9ce1f26098d17fa5e80134912f9ff043e4", "title": "Center-Fed Patch Antenna Array Excited by an Inset Dielectric Waveguide for 60-GHz Applications", "text": "A patch antenna array proximity-coupled fed by the inset dielectric waveguide (IDW) is proposed in 60-GHz frequency band. The antenna array consists of four series-fed subarrays each having eight pairs of circular patch elements, four substrate integrated IDWs, and a compact four-way power divider transmitting the signal from the grounded coplanar waveguide (GCPW) to the IDWs. The IDW is realized by metallized via-hole arrays in a printed circuit board (PCB). The parallel center-fed configuration is applied to suppress the frequency-dependent beam squinting of the series-fed subarray. Measured results show that the proposed antenna exhibits a gain variation between 15.8 and 19.2 dBi, fixed main beam in the direction of \u03b8 = 0\u00b0, and the reflection coefficient |S11| \u2264 -10 dB over the frequency band from 58 to 64 GHz."}
{"_id": "e2a533a7966b9c9d338149bac78bc3d7c1a3b420", "title": "A systematic review and meta-analysis of dressings used for wound healing: the efficiency of honey compared to silver on burns.", "text": "BACKGROUND\nHoney has the antibacterial effect of silver without the toxic effect of silver on the skin. Even so, silver is the dominant antibacterial dressing used in wound healing.\n\n\nOBJECTIVES\nTo evaluate the healing effects of honey dressings compared to silver dressings for acute or chronic wounds.\n\n\nDESIGN\nA systematic review with meta-analysis.\n\n\nMETHOD\nThe search, conducted in seven databases, resulted in six randomised controlled trial studies from South Asia focusing on antibacterial properties and healing times of honey and silver.\n\n\nRESULT\nHoney was more efficacious for wound healing than silver, as measured in the number of days needed for wounds to heal (pooled risk difference -20, 95% CI -0.29 to -0.11, p < .001). Honey turned out to have more antibacterial qualities than silver.\n\n\nCONCLUSION\nAll the included studies based on burns showed the unequivocal result that honey had an even more positive effect than silver on wound healing."}
{"_id": "39ec37905f9b2321fbb2173eb6452410b010e771", "title": "A novel, compact, low-cost, impulse ground-penetrating radar for nondestructive evaluation of pavements", "text": "This paper reports on the development of a novel, compact, low-cost, impulse ground-penetrating radar (GPR) and demonstrate its use for nondestructive evaluation of pavement structures. This GPR consists of an ultrashort-monocycle-pulse transmitter (330 ps), an ultrawide-band (UWB) sampling receiver (0-6 GHz), and two UWB antennas (0.2-20 GHz)-completely designed using microwave-integrated circuits with seamless electrical connections between them. An approximate analysis is used to determine the signal loss and power budget. Performance of this GPR has been verified through the measurements of relative permittivity and thicknesses of various samples, and a good agreement between the experimental and theoretical results has been achieved."}
{"_id": "65e9f4d7a80ea39a265b9d60d57397011395efcc", "title": "Tracing Data Lineage Using Schema Transformation Pathways", "text": "With the increasing amount and diversity of information available on the Internet, there has been a huge growth in information systems that need to integrate data from distributed, heterogeneous data sources. Tracing the lineage of the integrated data is one of the current problems being addressed in data warehouse research. In this paper, we propose a new approach for tracing data linage based on schema transformation pathways. We show how the individual transformation steps in a transformation pathway can be used to trace the derivation of the integrated data in a step-wise fashion, thus simplifying the lineage tracing process."}
{"_id": "715592bfe309cc85c9df6fb314270cbdfb67d543", "title": "Wireless sensor networks: a survey on recent developments and potential synergies", "text": "Wireless sensor network (WSN) has emerged as one of the most promising technologies for the future. This has been enabled by advances in technology and availability of small, inexpensive, and smart sensors resulting in cost effective and easily deployable WSNs. However, researchers must address a variety of challenges to facilitate the widespread deployment of WSN technology in real-world domains. In this survey, we give an overview of wireless sensor networks and their application domains including the challenges that should be addressed in order to push the technology further. Then we review the recent technologies and testbeds for WSNs. Finally, we identify several open research issues that need to be investigated in future. Our survey is different from existing surveys in that we focus on recent developments in wireless sensor network technologies. We review the leading research projects, standards and technologies, and platforms. Moreover, we highlight a recent phenomenon in WSN research that is to explore synergy between sensor networks and other technologies and explain how this can help sensor networks achieve their full potential. This paper intends to help new researchers entering the domain of WSNs by providing a comprehensive survey on recent developments."}
{"_id": "12fa61561442c4524f8f99146c29ca131e749293", "title": "Control of systems integrating logic, dynamics, and constraints", "text": "This paper proposes a framework for modeling and controlling systems described by interdependent physical laws, logic rules, and operating constraints, denoted as Mixed Logical Dynamical (MLD) systems. These are described by linear dynamic equations subject to linear inequalities involving real and integer variables. MLD systems include constrained linear systems, finite state machines, some classes of discrete event systems, and nonlinear systems which can be approximated by piecewise linear functions. A predictive control scheme is proposed which is able to stabilize MLD systems on desired reference trajectories while fulfilling operating constraints, and possibly take into account previous qualitative knowledge in the form of heuristic rules. Due to the presence of integer variables, the resulting on-line optimization procedures are solved through Mixed Integer Quadratic Programming (MIQP), for which efficient solvers have been recently developed. Some examples and a simulation case study on a complex gas supply system are reported."}
{"_id": "7dc9c3eecd1aa717b9eab494d8671bdad04bf171", "title": "Fabric-based actuator modules for building soft pneumatic structures with high payload-to-weight ratio", "text": "This paper introduces a new concept of building soft pneumatic structures by assembling modular units of fabric-based rotary actuators (FRAs) and beams. Upon pressurization, the inner folds of FRA would expand, which causes the FRA module to unfold, generating angular displacement. Hence, FRAs would enable mobility function of the structure and its range of motion. The modular nature of the actuator units enables customized configuration of pneumatic structures, which can be modified and scaled by selecting the appropriate modules. FRAs are also designed to be bladder-less, that is, they are made without an additional layer of inner bladder. Thus, a simple fabrication process can be used to prepare the actuators. In this paper, we studied how the performance of the FRA modules changes with their dimensions and demonstrated how a soft gripper can be constructed using these modules. The kinematic response of the actuator segments of the gripper was analyzed and a pressure control algorithm was developed to regulate the pneumatic pressure of the actuator. The modular based soft robotic gripper alone weighs about 140g. Yet, based on the grip tests, it is able to lift heavier objects (up to 2.4kg), achieving a high payload-to-weight ratio of about 1714%, which is higher than the value reported by previously developed soft pneumatic grippers using elastomeric materials. Lastly, we also demonstrated that the gripper is capable of performing two essential grasping modes, which are power grasping and fingertip grasping, for objects of various shapes."}
{"_id": "83e5bd4d8b4cc509585969b5a895f44be4483ca6", "title": "A critical evaluation of the complex PTSD literature: implications for DSM-5.", "text": "Complex posttraumatic stress disorder (CPTSD) has been proposed as a diagnosis for capturing the diverse clusters of symptoms observed in survivors of prolonged trauma that are outside the current definition of PTSD. Introducing a new diagnosis requires a high standard of evidence, including a clear definition of the disorder, reliable and valid assessment measures, support for convergent and discriminant validity, and incremental validity with respect to implications for treatment planning and outcome. In this article, the extant literature on CPTSD is reviewed within the framework of construct validity to evaluate the proposed diagnosis on these criteria. Although the efforts in support of CPTSD have brought much needed attention to limitations in the trauma literature, we conclude that available evidence does not support a new diagnostic category at this time. Some directions for future research are suggested."}
{"_id": "ca9a5c4887396637f53123fa0176f475703bfbca", "title": "Agent based urban growth modeling framework on Apache Spark", "text": "The simulation of urban growth is an important part of urban planning and development. Due to large data and computational challenges, urban growth simulation models demand efficient data analytic frameworks for scaling them to large geographic regions. Agent-based models are widely used to observe and analyze the urban growth simulation at various scales. The incorporation of the agent-based model makes the scaling task even harder due to communication and coordination among agents. Many existing agent-based model frameworks were implemented using traditional shared and distributed memory programming models. On the other hand, Apache Spark is becoming a popular platform for distributed big data in-memory analytics. This paper presents an implementation of agent-based sub-model in Apache Spark framework. With the in-memory computation, Spark implementation outperforms the traditional distributed memory implementation using MPI. This paper provides (i) an overview of our framework capable of running urban growth simulations at a fine resolution of 30 meter grid cells, (ii) a scalable approach using Apache Spark to implement an agent-based model for simulating human decisions, and (iii) the comparative analysis of performance of Apache Spark and MPI based implementations."}
{"_id": "0b440695c822a8e35184fb2f60dcdaa8a6de84ae", "title": "KinectFaceDB: A Kinect Database for Face Recognition", "text": "The recent success of emerging RGB-D cameras such as the Kinect sensor depicts a broad prospect of 3-D data-based computer applications. However, due to the lack of a standard testing database, it is difficult to evaluate how the face recognition technology can benefit from this up-to-date imaging sensor. In order to establish the connection between the Kinect and face recognition research, in this paper, we present the first publicly available face database (i.e., KinectFaceDB1) based on the Kinect sensor. The database consists of different data modalities (well-aligned and processed 2-D, 2.5-D, 3-D, and video-based face data) and multiple facial variations. We conducted benchmark evaluations on the proposed database using standard face recognition methods, and demonstrated the gain in performance when integrating the depth data with the RGB data via score-level fusion. We also compared the 3-D images of Kinect (from the KinectFaceDB) with the traditional high-quality 3-D scans (from the FRGC database) in the context of face biometrics, which reveals the imperative needs of the proposed database for face recognition research."}
{"_id": "8ef7b152c35434eba0c5f1cf03051a65426e3463", "title": "Electromagnetic energy harvesting from train induced railway track vibrations", "text": "Anelectromagnetic energy harvester is designed to harness the vibrational power from railroad track deflections due to passing trains. Whereas typical existing vibration energy harvester technologies are built for low power applications of milliwatts range, the proposed harvester will be designed for higher power applications for major track-side equipment such as warning signals, switches, and health monitoring sensors, which typically require a power supply of 10 Watts or more. To achieve this goal, we implement a new patent pending motion conversion mechanism which converts irregular pulse-like bidirectional linear vibration into regulated unidirectional rotational motion. Features of the motion mechanism include bidirectional to unidirectional conversion and flywheel speed regulation, with advantages of improved reliability, efficiency, and quality of output power. It also allows production of DC power directly from bidirectional vibration without electronic diodes. Preliminary harvester prototype testing results illustrate the features and benefits of the proposed motion mechanism, showing reduction of continual system loading, regulation of generator speed, and capability for continuous DC power generation."}
{"_id": "b90b53780ef8defacd0698d240c503db71329701", "title": "Sensorized pneumatic muscle for force and stiffness control", "text": "This paper presents the design and experimental validation of a soft pneumatic artificial muscle with position and force sensing capabilities. Conductive liquid-based soft sensors are embedded in a fiber-reinforced contractile actuator to measure two modes of deformation \u2014 axial strain and diametral expansion \u2014 which, together, are used to determine the stroke length and contractile force generated under internal pressure. We validate the proposed device by using data from the embedded sensors to estimate the force output of the actuator at fixed lengths and the stiffness and force output of a one degree-of-freedom hinge joint driven by an antagonist pair of the sensorized pneumatic muscles."}
{"_id": "e5d8f00a413a1e6d111c52f3d984c6761151f364", "title": "Spelling Error Trends and Patterns in Sindhi", "text": "Statistical error Correction technique is the most accurate and widely used approach today, but for a language like Sindhi which is a low resourced language the trained corpora\u2019s are not available, so the statistical techniques are not possible at all. Instead a useful alternative would be to exploit various spelling error trends in Sindhi by using a Rule based approach. For designing such technique an essential prerequisite would be to study the various error patterns in a language. This paper presents various studies of spelling error trends and their types in Sindhi Language. The research shows that the error trends common to all languages are also encountered in Sindhi but their do exist some error patters that are catered specifically to a Sindhi language."}
{"_id": "fc47024ca5d2bdc1737d0d1c41525193fbf8fc32", "title": "Induction motor fault diagnosis using labview", "text": "Now-a-days, manufacturing companies of electrical machines are in need of easy and early fault detection techniques as they are attire by the customers. Though many researchers have developed many fault diagnosing techniques, user friendly software with automated long measurement process is needed so that the non-technical persons also can operate to identify the faults occurred. In our work the healthy induction motor is modeled in stationary reference frame. Faulty conditions are included in the model to analyze the effect of fault in the motor using the user friendly software, LabVIEW."}
{"_id": "a85275f12472ecfbf4f4f00a61514b0773923b86", "title": "Applications, challenges, and prospective in emerging body area networking technologies", "text": "Advances in wireless technology and supporting infrastructure provide unprecedented opportunity for ubiquitous real-time healthcare and fitness monitoring without constraining the activities of the user. Wirelessly connected miniaturized sensors and actuators placed in, on, and around the body form a body area network for continuous, automated, and unobtrusive monitoring of physiological signs to support medical, lifestyle and entertainment applications. BAN technology is in the early stage of development, and several research challenges have to be overcome for it to be widely accepted. In this article we study the core set of application, functional, and technical requirements of the BAN. We also discuss fundamental research challenges such as scalability (in terms of data rate, power consumption, and duty cycle), antenna design, interference mitigation, coexistence, QoS, reliability, security, privacy, and energy efficiency. Several candidate technologies poised to address the emerging BAN market are evaluated, and their merits and demerits are highlighted. A brief overview of standardization activities relevant to BANs is also presented."}
{"_id": "aa531e41c4c646285b522cf6f33f82a9d68d5062", "title": "Addressing Security and Privacy Risks in Mobile Applications", "text": "Applications for mobile platforms are being developed at a tremendous rate, but often without proper security implementation. Insecure mobile applications can cause serious information security and data privacy issues and can have severe repercussions on users and organizations alike."}
{"_id": "f4abebef4e39791f358618294cd8d040d7024399", "title": "Security Analysis of Wearable Fitness Devices ( Fitbit )", "text": "This report describes an analysis of the Fitbit Flex ecosystem. Our objectives are to describe (1) the data Fitbit collects from its users, (2) the data Fitbit provides to its users, and (3) methods of recovering data not made available to device owners. Our analysis covers four distinct attack vectors. First, we analyze the security and privacy properties of the Fitbit device itself. Next, we observe the Bluetooth traffic sent between the Fitbit device and a smartphone or personal computer during synchronization. Third, we analyze the security of the Fitbit Android app. Finally, we study the security properties of the network traffic between the Fitbit smartphone or computer application and the Fitbit web service. We provide evidence that Fitbit unnecessarily obtains information about nearby Flex devices under certain circumstances. We further show that Fitbit does not provide device owners with all of the data collected. In fact, we find evidence of per-minute activity data that is sent to the Fitbit web service but not provided to the owner. We also discovered that MAC addresses on Fitbit devices are never changed, enabling usercorrelation attacks. BTLE credentials are also exposed on the network during device pairing over TLS, which might be intercepted by MITM attacks. Finally, we demonstrate that actual user activity data is authenticated and not provided in plaintext on an end-to-end basis from the device to the Fitbit web service."}
{"_id": "0e74f8e8763bd7a9c9507badaee390d449b1f8ca", "title": "Versatile low power media access for wireless sensor networks", "text": "We propose B-MAC, a carrier sense media access protocol for wireless sensor networks that provides a flexible interface to obtain ultra low power operation, effective collision avoidance, and high channel utilization. To achieve low power operation, B-MAC employs an adaptive preamble sampling scheme to reduce duty cycle and minimize idle listening. B-MAC supports on-the-fly reconfiguration and provides bidirectional interfaces for system services to optimize performance, whether it be for throughput, latency, or power conservation. We build an analytical model of a class of sensor network applications. We use the model to show the effect of changing B-MAC's parameters and predict the behavior of sensor network applications. By comparing B-MAC to conventional 802.11-inspired protocols, specifically SMAC, we develop an experimental characterization of B-MAC over a wide range of network conditions. We show that B-MAC's flexibility results in better packet delivery rates, throughput, latency, and energy consumption than S-MAC. By deploying a real world monitoring application with multihop networking, we validate our protocol design and model. Our results illustrate the need for flexible protocols to effectively realize energy efficient sensor network applications."}
{"_id": "4241cf81fce2e4a4cb486f8221c8c33bbaabc426", "title": "Security and Privacy Issues in Wireless Sensor Networks for Healthcare Applications", "text": "The use of wireless sensor networks (WSN) in healthcare applications is growing in a fast pace. Numerous applications such as heart rate monitor, blood pressure monitor and endoscopic capsule are already in use. To address the growing use of sensor technology in this area, a new field known as wireless body area networks (WBAN or simply BAN) has emerged. As most devices and their applications are wireless in nature, security and privacy concerns are among major areas of concern. Due to direct involvement of humans also increases the sensitivity. Whether the data gathered from patients or individuals are obtained with the consent of the person or without it due to the need by the system, misuse or privacy concerns may restrict people from taking advantage of the full benefits from the system. People may not see these devices safe for daily use. There may also possibility of serious social unrest due to the fear that such devices may be used for monitoring and tracking individuals by government agencies or other private organizations. In this paper we discuss these issues and analyze in detail the problems and their possible measures."}
{"_id": "22e1903702dd328e10cb9be1d273bb5408ca638d", "title": "New Beam Tracking Technique for Millimeter Wave-band Communications", "text": "In this paper, we propose an efficient beam tracking method for mobility scenario in mmWave-band communications. When the position of the mobile changes in mobility scenario , the base-station needs to perform beam training frequently to t rack the time-varying channel, thereby spending significant res ources for training beams. In order to reduce the training overhead, we propose a new beam training approach called \u201cbeam tracking\u201d which exploits the continuous nature of time varying angle of departure (AoD) for beam selection. We show that transmissi on of only two training beams is enough to track the time-varying AoD at good accuracy. We derive the optimal selection of beam pai r which minimizes Cramer-Rao Lower Bound (CRLB) for AoD estimation averaged over statistical distribution of the AoD. Our numerical results demonstrate that the proposed beam track ing scheme produces better AoD estimation than the conventiona l beam training protocol with less training overhead. I. I NTRODUCTION The next generation wireless communication systems aim to achieve Giga bit/s throughput to support high speed multimedia data service [1], [2]. Since there exist ample amount of unutilized frequency spectrum in millimeter Wave (mmWave) band (30 GHz-300 GHz), wireless communication over mmWave band is considered as a promising solution to achieve significant leap in spectral efficiency [3]. However , one major limitation of mmWave communications is significant free space path loss, which causes large attenuation of sign al power at the receiver. Furthermore, the overall path loss ge t worse when the signal goes through obstacles, rain, foliage , and any blockage to mobile devices. Recently, active resear ch on mmWave communication has been conducted in order to overcome these limitations [1]\u2013[5]. In mmWave band, many antenna elements can be integrated in a small form factor and hence, we can employ high directional beamforming using a large number of antennas to compensate high path loss. In order to perform high directional beamforming, it is necessary to estimate channels for all transmitter and rece iver antenna pair. While this step requires high computational c omplexity due to large number of antennas, channel estimation can be performed efficiently by using the angular domain representation of channels [6]. In angular domain, only a fe w angular bins contain the most of the received energy. Hence, if we identify the dominant angular bins (which correspond to the angle of arrival (AoA) and the angle of departure (AoD)), we can obtain the channel estimate without incurrin g computational complexity. Basically, both AoD and AoA can be estimated using so called \u201cbeam training\u201d procedure. The base-station sends t he training beams at the designated direction and the receiver estimates the AoD/AoA based on the received signals. Widely used beam training method (called \u201cbeam cycling method\u201d) is to allow the base-station to transmit N training beams one by one at the equally spaced directions. However, to ensure good estimate of AoD/AoA, N should be large, leading to significant training overhead. This problem becomes even more serious for the mobility scenario in mmWave communications. 
Since the location of mobiles keeps changing, th e base-station should transmit training beams more frequent ly to update AoD/AoA estimates, causing significant drop in data throughput [7]. Recently, several adaptive beam train ing schemes have been proposed to improve the conventional beam training method [10]\u2013[14]. In this paper, we introduce a novel beam training method for mobility scenario in mmWave communications. Our idea is based on the observation that for mobility scenario, the AoD of the particular user does not change drastically so that continuous nature of the AoD change can be accounted to improve the efficacy of the beam training. Since this approach exploits temporal dynamics of AoD, we call such beam training scheme \u201cbeam tracking\u201d. While the convenional method makes no assumption on the state of AoD, we use statistical distribution of the AoD given the previousl y state of AoD. Using the probabilistic model on AoD change, we derive effective beam tracking strategy which employs transmission of two training beams from the base-station. Optimal placement of two training beams in angular domain is sought by minimizing (the lower bound of) variance of the estimation error for AoD. As a result, we choose the best beam p ir from the beam codebook for the given prior knowledge on AoD. Our simulation results show that the proposed beam tracking method offers the channel estimation performance comparable to the conventional beam training methods with significantly reduced training overhead. The rest of this paper is organized as follows; In section II, we introduce the system and channel models for mmWave communications and in section III, we describe the proposed b am tracking method and the simulation results are provide d in section IV. Finally, the paper is concluded in section V. II. SYSTEM MODEL In this section, we describe the system model for mmWave communications. First, we describe the angular domain repr esentation of the mmWave channel and then we introduce the procedure for beam training and channel estimation. A. Channel Model Consider single user mmWave MIMO systems with the base-station withNb antennas and the mobile with Nm antennas. The MIMO channel model with L paths at timet is described by [8]"}
{"_id": "3ed2bba32887f8f216106849e2652b7d2f814827", "title": "Curvature-controlled curve editing using piecewise clothoid curves", "text": "Two-dimensional curves are conventionally designed using splines or B\u00e9zier curves. Although formally they are C or higher, the variation of the curvature of (piecewise) polynomial curves is difficult to control; in some cases it is practically impossible to obtain the desired curvature. As an alternative we propose piecewise clothoid curves (PCCs). We show that from the design point of view they have many advantages: control points are interpolated, curvature extrema lie in the control points, and adding control points does not change the curve. We present a fast localized clothoid interpolation algorithm that can also be used for curvature smoothing, for curve fitting, for curvature blending, and even for directly editing the curvature. We give a physical interpretation of variational curvature minimization, from which we derive our scheme. Finally, we demonstrate the achievable quality with a range of examples. & 2013 Elsevier Ltd. All rights reserved."}
{"_id": "fd7233e36dfcda4ab85c46536d2b9875c3298819", "title": "Linking received packet to the transmitter through physical-fingerprinting of controller area network", "text": "The Controller Area Network (CAN) bus serves as a legacy protocol for in-vehicle data communication. Simplicity, robustness, and suitability for real-time systems are the salient features of the CAN bus protocol. However, it lacks the basic security features such as massage authentication, which makes it vulnerable to the spoofing attacks. In a CAN network, linking CAN packet to the sender node is a challenging task. This paper aims to address this issue by developing a framework to link each CAN packet to its source. Physical signal attributes of the received packet consisting of channel and node (or device) which contains specific unique artifacts are considered to achieve this goal. Material and design imperfections in the physical channel and digital device, which are the main contributing factors behind the device-channel specific unique artifacts, are leveraged to link the received electrical signal to the transmitter. Generally, the inimitable patterns of signals from each ECUs exist over the course of time that can manifest the stability of the proposed method. Uniqueness of the channel-device specific attributes are also investigated for time-and frequency-domain. Feature vector is made up of both time and frequency domain physical attributes and then employed to train a neural network-based classifier. Performance of the proposed fingerprinting method is evaluated by using a dataset collected from 16 different channels and four identical ECUs transmitting same message. Experimental results indicate that the proposed method achieves correct detection rates of 95.2% and 98.3% for channel and ECU classification, respectively."}
{"_id": "7827e8fa0e8934676df05e1248a9dd70f3dd7525", "title": "Multi-objective model predictive control for grid-tied 15-level packed U cells inverter", "text": "This paper presents a multi-objective Model Predictive Control (MPC) for grid-tied 4-cell 15-level Packed U Cells (PUC) inverter. This challenging topology is characterized by high power quality with reduced number of components compared to conventional multilevel inverters. Compared to traditional PI controller, MPC is attracting more interest due to its good dynamic response and high accuracy of reference tracking, through the minimization of a flexible user-defined cost function. For the presented PUC topology, the grid current should be jointly controlled with the capacitors' voltages for ensuring proper operation of the inverter, which leads to an additional requirement of pre-charge circuits for the capacitors in case of using PI current controller (or using additional PI controllers). The proposed MPC achieves grid current injection, low current THD, unity PF, while balancing the capacitors' voltages."}
{"_id": "a6f726fa39b189e56b1bcb0756a03796c8aa16f8", "title": "Test anxiety and direction of attention.", "text": "The literature reviewed suggests an attentional interpretation, of the adverse effects which test anxiety has on task performance. During task performance the highly test-anxious person divides his attention between self-relevant and task-relevant variables, in contrast to the low-test-anxious person who focuses his attention more fully on the task. This interpretation was supported by literature from diverse areas suggesting that (a) highly anxious persons are generally more self-preoccupied than are people low in anxiety; (b) the selffocusing tendencies of highly test-anxious persons are activated in testing situations; (c) those situational conditions in which the greatest performance differences occur are ones which elicit the self-focusing tendencies of highly test-anxious subjects, and the task-focusing tendencies of low-anxious subjects; (d) research examining the relationship between anxiety and task variables suggests that anxiety reduces the range of task cues utilized in performance; (e) \"worry,\" an attentionally demanding cognitive activity, is more debilitating of task performance than is autonomic arousal. Treatment and research implications of this attentional interpretation of test anxiety are briefly discussed."}
{"_id": "4de602b8737642b80b50940b9ae9eb2da7dbc051", "title": "GPS positioning in a multipath environment", "text": "We address the problem of GPS signal delay estimation in a multipath environment with a low-complexity constraint. After recalling the usual early\u2013late estimator and its bias in a multipath propagation context, we study the maximum-likelihood estimator (MLE) based on a signal model including the parametric contribution of reflected components. It results in an efficient algorithm using the existing architecture, which is also very simple and cheap to implement. Simulations show that the results of the proposed algorithm, in a multipath environment, are similar to these of the early\u2013late in a single-path environment. The performance are further characterized, for both MLEs (based on the single-path and multipath propagation) in terms of bias and standard deviation. The expressions of the corresponding Cram\u00e9r\u2014Rao (CR) bounds are derived in both cases to show the good performance of the estimators when unbiased."}
{"_id": "9e71d67774159fd7094c39c2efbc8dab497c12d7", "title": "Counter-forensics in machine learning based forgery detection", "text": "With the powerful image editing tools available today, it is very easy to create forgeries without leaving visible traces. Boundaries between host image and forgery can be concealed, illumination changed, and so on, in a naive form of counter-forensics. For this reason, most modern techniques for forgery detection rely on the statistical distribution of micro-patterns, enhanced through high-level filtering, and summarized in some image descriptor used for the final classification. In this work we propose a strategy to modify the forged image at the level of micro-patterns to fool a state-of-the-art forgery detector. Then, we investigate on the effectiveness of the proposed strategy as a function of the level of knowledge on the forgery detection algorithm. Experiments show this approach to be quite effective especially if a good prior knowledge on the detector is available."}
{"_id": "09d08e543a9b2fc350cb37e47eb087935c12be16", "title": "A Multimodal, Full-Surround Vehicular Testbed for Naturalistic Studies and Benchmarking: Design, Calibration and Deployment", "text": "Recent progress in autonomous and semiautonomous driving has been made possible in part through an assortment of sensors that provide the intelligent agent with an enhanced perception of its surroundings. It has been clear for quite some while now that for intelligent vehicles to function effectively in all situations and conditions, a fusion of different sensor technologies is essential. Consequently, the availability of synchronized multi-sensory data streams are necessary to promote the development of fusion based algorithms for low, mid and high level semantic tasks. In this paper, we provide a comprehensive description of our heavily sensorized, panoramic testbed capable of providing high quality data from a slew of synchronized and calibrated sensors such as cameras, LIDARs, radars, and the IMU/GPS. The vehicle has recorded over 100 hours of real world data for a very diverse set of weather, traffic and daylight conditions. All captured data is accurately calibrated and synchronized using timestamps, and stored safely in high performance servers mounted inside the vehicle itself. Details on the testbed instrumentation, sensor layout, sensor outputs, calibration and synchronization are described in this paper."}
{"_id": "8164171501d5d7418a3e83673923466b77b2fd5b", "title": "Prototypical Networks for Few-shot Learning", "text": "We propose Prototypical Networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical Networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend Prototypical Networks to zero-shot learning and achieve state-ofthe-art results on the CU-Birds dataset."}
{"_id": "694064760c02d2748ecda38151a25fe2b1ff6870", "title": "Prozessmodellierung in eGovernment-Projekten mit der eEPK", "text": "Im Rahmen der Informationssystemgestaltung unterscheiden sich die Projektbeteiligten insbesondere hinsichtlich ihrer Anforderungen an die Inhalte und Repr\u00e4sentationsformen der f\u00fcr die Anwendungssystemund Organisationsgestaltung verwendeten Modellen. Die multiperspektivische Modellierung stellt einen viel versprechenden Ansatz dar, um diesem Umstand durch die Bereitstellung perspektivenspezifischer Sichten auf Informationsmodelle gerecht zu werden. Der Beitrag stellt eine Erweiterung ereignisgesteuerter Prozessketten durch Konfigurationsmechanismen vor, welche die Erstellung und Nutzung von Perspektiven auf Prozessmodelle unterst\u00fctzt. 1 Perspektiven auf Prozessmodelle Die wissenschaftliche Diskussion \u00fcber die Qualit\u00e4t von Prozessmodellen wurde in den letzten Jahren wesentlich durch die Entwicklung von allgemeinen Modellierungsempfehlungen, welche auf die syntaktische Korrektheit der Modelle und der Einhaltung der Grunds\u00e4tze ordnungsm\u00e4\u00dfiger Modellierung bei der Erstellung von Prozessmodellen abzielen [BRS95; Ro96; BRU00], gepr\u00e4gt. Ein dominierendes Kriterium, das die Qualit\u00e4t von Prozessmodellen determiniert, ist dabei die Entsprechung der Modellkonzeption und -repr\u00e4sentation mit den Anforderungen der jeweiligen Modellnutzergruppe [RS99, S. 25f.; Be02, S. 28ff.; RSD03, S. 49]. So werden z. B. innerhalb eines prozessbasierten Reorganisationsprojektes diese Modellnutzergruppen durch die verschiedenen organisatorischen Rollen der Projektteilnehmer bestimmt. Ein Sachbearbeiter ben\u00f6tigt hierbei lediglich die f\u00fcr ihn relevanten Prozessmodelle und deren Schnittstellen. Aus der Sicht des Managers der zu reorganisierenden Unternehmung sollten die Modelle in aggregierter Form vorliegen und durch erweiterte visuelle Merkmale, wie z. B. ansprechender Farboder Symboldarstellung, gekennzeichnet sein. Einem Anwendungsentwickler sollten sehr detaillierte Modelle vorgelegt werden, damit sich die aus der Reorganisation ergebenden Konsequenzen im Anwendungssystem ber\u00fccksichtigt werden k\u00f6nnen. Die verschiedenen Anforderungen der Nutzergruppen resultieren aus den Differenzen ihrer subjektabh\u00e4ngigen Vorstellungswelten und den daraus resultierenden Probleml\u00f6sungsans\u00e4tzen [Be01a; Be02, S. 30ff.]. Ein Ansatz zur expliziten Vermittlung zwischen diesen Vorstellungswelten findet sich in der Bereitstellung von anforderungsgerechten Perspektiven wieder [Fr94, S. 36f.]. Perspektiven werden dadurch determiniert, welcher Modellierungszweck verfolgt wird, welche organisatorische Rolle die Nutzer einnehmen und welche pers\u00f6nlichen Pr\u00e4ferenzen bzgl. der Modellkonzeption und -repr\u00e4sentation bestehen [Be02, S. 38ff., RSD03, S. 49]. Die Entwicklung und Bereitstellung perspektivenspezifischer Modelle wird unter dem Begriff der multiperspektivischen Informationsmodellierung diskutiert [Fi92; RS99, S. 25f.; RSD03, S. 52]. Soll den Anforderungen mehrerer Nutzergruppen entsprochen werden, d. h. sollen mehrere Perspektiven ber\u00fccksichtigt werden, ergeben sich f\u00fcr den Modellersteller zwei Alternativen. Modelle k\u00f6nnten einerseits f\u00fcr unterschiedliche Perspektiven redundant vorgehalten werden. Nachteile dieser Vorgehensweise sind die \u00fcblichen, mit Redundanzen verbundenen Zusatzaufw\u00e4nde wie erh\u00f6hter Pflegeaufwand und Gefahr von Inkonsistenzen. 
Andererseits kann ein Gesamtmodell derartig erstellt werden, dass es die f\u00fcr s\u00e4mtliche Perspektiven relevanten Elemente redundanzfrei enth\u00e4lt. Den einzelnen Vertretern der Perspektiven werden die f\u00fcr sie relevanten Modelle bereitgestellt, indem ihnen eine View auf das Gesamtmodell zur Verf\u00fcgung gestellt wird, die das Gesamtmodell um die nicht relevanten Inhalte reduziert. Die Ma\u00dfnahmen, welche vom Modellierer zur Erstellung dieser Views durchgef\u00fchrt werden m\u00fcssen, k\u00f6nnen hierbei durchaus zu einer komplexen Aufgabe werden. Im Folgenden wird anhand ereignisgesteuerter Prozessketten (EPKs) [KNS92] die Umsetzung der zweiten Variante diskutiert. Dabei wird in Abschnitt 2 zun\u00e4chst eine metamodellbasierte Formalisierung der EPK vorgenommen, um eine konzeptionelle Basis f\u00fcr die multiperspektivische Erweiterung der Modellierungstechnik durch Konfigurationsmechanismen zu schaffen. Darauf aufbauend werden in Abschnitt 3 die notwendigen Erweiterungen der Modellierungstechnik vorgenommen. Abschnitt 4 behandelt die Problematik und Notwendigkeit von Konsistenzsicherungsmechanismen, die bei der Verwendung der Konfigurationsmechanismen auftreten k\u00f6nnen. Nach einem Praxisbeispiel in Abschnitt 5 schlie\u00dft der Beitrag mit einem Ausblick in Abschnitt 6. 2 Metamodellbasierte Formalisierung der EPK Das hier vorgestellte Verfahren zur Erstellung multiperspektivischer ereignisgesteuerter Prozessketten basiert auf der Manipulation von Modellstrukturen durch die Anwendung von Konfigurationsmechanismen. Da hierbei die strukturellen Ver\u00e4nderungen auf Ebene der Sprachspezifikation sowie deren Auswirkungen auf die Struktur der Modelle im Zentrum der Betrachtung stehen, empfiehlt es sich, die Sprache der EPK formalisiert darzustellen. Dies kann wiederum durch Informationsmodelle \u2013 sogenannte sprachorientierte Metamodelle \u2013 erfolgen [St96, S. 24ff.]. Als Notationsform eignen sich insbesondere Struktursprachen wie das Entity-Relationship-Modell [Ch76]. Ein erweiterter Dialekt dieser Modellierungssprache wird im vorliegenden Beitrag zur Metamodellierung verwendet. 2 Das zentrale Element der EPK Sprache ist das Prozesselement (vgl. Abbildung 1). Alle am Prozessgraphen beteiligten Knoten werden als Spezialisierung dieses Elements mo1 Der Begriff EPK wird im Folgenden synonym zur erweiterten ereignisgesteuerten Prozesskette (eEPK) verwendet, die neben der Ablauflogik die Annotation von ben\u00f6tigten Ressourcen erlaubt. 2 Es handelt sich hierbei um einen Dialekt, der die perspektivenabh\u00e4ngige Modifikation von Sprachen erlaubt (vgl. ausf\u00fchrlich [Be02, S. 77ff.]). delliert. Hierzu geh\u00f6ren die Elemente Prozessfunktion, Prozessereignis und Operator (vgl. Spezialisierung des Entitytyps Prozesselement). Eine Prozessfunktion beschreibt eine Aktivit\u00e4t innerhalb eines Prozesses. Diese Aktivit\u00e4ten werden beim Eintreten eines oder einer Kombination von Ereignissen ausgef\u00fchrt und f\u00fchren nach ihrer Beendigung selbst wieder zu dem Eintreten von Ereignissen. Somit kann ein Ereignis als Zustandswechsel interpretiert werden. Die notwendige Kombination der Ereignisse f\u00fcr die Ausf\u00fchrung einer Aktivit\u00e4t sowie die Definition der Menge der eintretenden Ereignisse nach erfolgreicher Ausf\u00fchrung werden \u00fcber die Operatoren definiert. Hierdurch lassen sich parallele Prozessstr\u00e4nge und deren Wiederzusammenf\u00fchrung im Prozessmodell abbilden. 
Prozessfunktion Operator Prozessereignis Prozesselement D,T (0,n) (0,n) Ressource (0,n) (0,n) D,P Organisationseinheit Fachbegriff Anwendungssystemtyp Vorg\u00e4nger/ Nachfolger PERessourcenZuO (0,n) ProzessRessourcenBeziehungstyp Typ PRBeziehungstyphierarchie"}
{"_id": "cd6b17b5954011c619d687a88493564c6ab345b7", "title": "Decoding of EEG Signals Using Deep Long Short-Term Memory Network in Face Recognition Task", "text": "The paper proposes a novel approach to classify the human memory response involved in the face recognition task by the utilization of event related potentials. Electroencephalographic signals are acquired when a subject engages himself/herself in familiar or unfamiliar face recognition tasks. The signals are analyzed through source Iocalization using eLORETA and artifact removal by ICA from a set of channels corresponding to those selected sources, with an ultimate aim to classify the EEG responses of familiar and unfamiliar faces. The EEG responses of the two different classes (familiar and unfamiliar face recognition)are distinguished by analyzing the Event Related Potential signals that reveal the existence of large N250 and P600 signals during familiar face recognition.The paper introduces a novel LSTM classifier network which is designed to classify the ERP signals to fulfill the prime objective of this work. The first layer of the novel LSTM network evaluates the spatial and local temporal correlations between the obtained samples of local EEG time-windows. The second layer of this network models the temporal correlations between the time-windows. An attention mechanism has been introduced in each layer of the proposed model to compute the contribution of each EEG time-window in face recognition task. Performance analysis reveals that the proposed LSTM classifier with attention mechanism outperforms the efficiency of the conventional LSTM and other classifiers with a significantly large margin. Moreover, source Iocalization using eLORETA shows the involvement of inferior temporal and frontal lobes during familiar face recognition and pre-frontal lobe during unfamiliar face recognition. Thus, the present research outcome can be used in criminal investigation, where meticulous differentiation of familiar and unfamiliar face detection by criminals can be performed from their acquired brain responses."}
{"_id": "0f202ec12a845564634455e562d1b297fa56ce64", "title": "Algorithmic Prediction of Health-Care Costs", "text": "The rising cost of health care is one of the world\u2019s most important problems. Accordingly, predicting such costs with accuracy is a significant first step in addressing this problem. Since the 1980s, there has been research on the predictive modeling of medical costs based on (health insurance) claims data using heuristic rules and regression methods. These methods, however, have not been appropriately validated using populations that the methods have not seen. We utilize modern data-mining methods, specifically classification trees and clustering algorithms, along with claims data from over 800,000 insured individuals over three years, to provide rigorously validated predictions of health-care costs in the third year, based on medical and cost data from the first two years. We quantify the accuracy of our predictions using unseen (out-of-sample) data from over 200,000 members. The key findings are: (a) our data-mining methods provide accurate predictions of medical costs and represent a powerful tool for prediction of health-care costs, (b) the pattern of past cost data is a strong predictor of future costs, and (c) medical information only contributes to accurate prediction of medical costs of high-cost members."}
{"_id": "25849e48b1436aeedb7f1c1d1532f62799c42b1a", "title": "Extensive Benchmark and Survey of Modeling Methods for Scene Background Initialization", "text": "Scene background initialization is the process by which a method tries to recover the background image of a video without foreground objects in it. Having a clear understanding about which approach is more robust and/or more suited to a given scenario is of great interest to many end users or practitioners. The aim of this paper is to provide an extensive survey of scene background initialization methods as well as a novel benchmarking framework. The proposed framework involves several evaluation metrics and state-of-the-art methods, as well as the largest video data set ever made for this purpose. The data set consists of several camera-captured videos that: 1) span categories focused on various background initialization challenges; 2) are obtained with different cameras of different lengths, frame rates, spatial resolutions, lighting conditions, and levels of compression; and 3) contain indoor and outdoor scenes. The wide variety of our data set prevents our analysis from favoring a certain family of background initialization methods over others. Our evaluation framework allows us to quantitatively identify solved and unsolved issues related to scene background initialization. We also identify scenarios for which state-of-the-art methods systematically fail."}
{"_id": "18801e8f7ae36e1231497140d9f4ad065f913704", "title": "PUMA: Planning Under Uncertainty with Macro-Actions", "text": "Planning in large, partially observable domains is challenging, especially when a long-horizon lookahead is necessary to obtain a good policy. Traditional POMDP planners that plan a different potential action for each future observation can be prohibitively expensive when planning many steps ahead. An efficient solution for planning far into the future in fully observable domains is to use temporallyextended sequences of actions, or \u201cmacro-actions.\u201d In this paper, we present a POMDP algorithm for planning under uncertainty with macro-actions (PUMA) that automatically constructs and evaluates open-loop macro-actions within forward-search planning, where the planner branches on observations only at the end of each macro-action. Additionally, we show how to incrementally refine the plan over time, resulting in an anytime algorithm that provably converges to an \u01eb-optimal policy. In experiments on several large POMDP problems which require a long horizon lookahead, PUMA outperforms existing state-of-the art solvers. Most partially observable Markov decision process (POMDP) planners select actions conditioned on the prior observation at each timestep: we refer to such planners as fully-conditional. When good performance relies on considering different possible observations far into the future, both online and offline fully-conditional planners typically struggle. An extreme alternative is unconditional (or \u201copenloop\u201d) planning where a sequence of actions is fixed and does not depend on the observations that will be received during execution. While open-loop planning can be extremely fast and perform surprisingly well in certain domains, acting well in most real-world domains requires plans where at least some action choices are conditional on the obtained observations. This paper focuses on the significant subset of POMDP domains, including scientific exploration, target surveillance, and chronic care management, where it is possible to act well by planning using conditional sequences of openloop, fixed-length action chains, or \u201cmacro-actions.\u201d We call this approach semi-conditional planning, in that actions are chosen based on the received observations only at the end of each macro-action. Copyright c \u00a9 2010, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. For a discussion of using open-loop planning for multi-robot tag for open-loop planning see Yu et al. (2005). We demonstrate that for certain domains, planning with macro-actions can offer performance close to fullyconditional planning at a dramatically reduced computational cost. In comparison to prior macro-action work, where a domain expert often hand-coded a good set of macro-actions for each problem, we present a technique for automatically constructing finite-length open-loop macroactions. Our approach uses sub-goal states based on immediate reward and potential information gain. We then describe how to incrementally refine an initial macro-action plan by incorporating successively shorter macro-actions. We combine these two contributions in a forward-search algorithm for planning under uncertainty with macro-actions (PUMA). PUMA is an anytime algorithm which guarantees eventual convergence to an \u01eb-optimal policy, even in domains that may require close to fully-conditional plans. 
PUMA outperforms a state-of-the-art POMDP planner both in terms of plan quality and computational cost on two large simulated POMDP problems. However, semiconditional planning does not yield an advantage in all domains, and we provide preliminary experimental analysis towards determining a priori when planning in a semiconditional manner will be helpful. However, even in domains that are not well suited to semi-conditional planning, our anytime improvement allows PUMA to still eventually compute a good policy, suggesting that PUMA may be viable as a generic planner for large POMDP problems."}
{"_id": "bd7eac15c893453a7076078348c7ae6a1b69ad0a", "title": "Revising Perceptual Linear Prediction (PLP)", "text": "Mel Frequency Cepstral Coefficients (MFCC) and Perceptual Linear Prediction (PLP) are the most popular acoustic features used in speech recognition. Often it depends on the task, which of the two methods leads to a better performance. In this work we develop acoustic features that combine the advantages of MFCC and PLP. Based on the observation that the techniques have many similarities, we revise the processing steps of PLP. In particular, the filter-bank, the equal-loudness pre-emphasis and the input for the linear prediction are improved. It is shown for a broadcast news transcription task and a corpus of children\u2019s speech that the new variant of PLP performs better than both MFCC and conventional PLP for a wide range of clean and noisy acoustic conditions."}
{"_id": "846996945fc33abebdcbeb92fe2fe88afb92c47e", "title": "Multi-speaker modeling and speaker adaptation for DNN-based TTS synthesis", "text": "In DNN-based TTS synthesis, DNNs hidden layers can be viewed as deep transformation for linguistic features and the output layers as representation of acoustic space to regress the transformed linguistic features to acoustic parameters. The deep-layered architectures of DNN can not only represent highly-complex transformation compactly, but also take advantage of huge amount of training data. In this paper, we propose an approach to model multiple speakers TTS with a general DNN, where the same hidden layers are shared among different speakers while the output layers are composed of speaker-dependent nodes explaining the target of each speaker. The experimental results show that our approach can significantly improve the quality of synthesized speech objectively and subjectively, comparing with speech synthesized from the individual, speaker-dependent DNN-based TTS. We further transfer the hidden layers for a new speaker with limited training data and the resultant synthesized speech of the new speaker can also achieve a good quality in term of naturalness and speaker similarity."}
{"_id": "6a6b757a7640e43544df78fd15db7f14a8084263", "title": "Classroom Response Systems: A Review of the Literature.", "text": "As the frequency with which Classroom Response Systems (CRSs) are used is increasing, it becomes more and more important to define the affordances and limitations of these tools. Currently existing literature is largely either anecdotal or focuses on comparing CRS and non-CRS environments that are unequal in other aspects as well. In addition, the literature primarily describes situations in which the CRS is used to provide an individual as opposed to a group response. This article points to the need for a concerted research effort, one that rigorously explores conditions of use across diverse settings and pedagogies."}
{"_id": "10349122e4a67d60c2e0cc3e382b9502d95a6b55", "title": "An energy-efficient dual sampling SAR ADC with reduced capacitive DAC", "text": "This paper presents an energy-efficient SAR ADC which adopts reduced MSB cycling step with dual sampling of the analog signal. By sampling and holding the analog signal asymmetrically at both input sides of comparator, the MSB cycling step can be hidden by hold mode. Benefits from this technique, not only the total capacitance of DAC is reduced by half, but also the average switching energy is reduced by 68% compared with conventional SAR ADC. Moreover, switching energy distribution is more uniform over entire output code compared with previous works."}
{"_id": "abf21d598275ea3bf51f4f59a4fc1388cb0a58b8", "title": "The impact of pretend play on children's development: a review of the evidence.", "text": "Pretend play has been claimed to be crucial to children's healthy development. Here we examine evidence for this position versus 2 alternatives: Pretend play is 1 of many routes to positive developments (equifinality), and pretend play is an epiphenomenon of other factors that drive development. Evidence from several domains is considered. For language, narrative, and emotion regulation, the research conducted to date is consistent with all 3 positions but insufficient to draw conclusions. For executive function and social skills, existing research leans against the crucial causal position but is insufficient to differentiate the other 2. For reasoning, equifinality is definitely supported, ruling out a crucially causal position but still leaving open the possibility that pretend play is epiphenomenal. For problem solving, there is no compelling evidence that pretend play helps or is even a correlate. For creativity, intelligence, conservation, and theory of mind, inconsistent correlational results from sound studies and nonreplication with masked experimenters are problematic for a causal position, and some good studies favor an epiphenomenon position in which child, adult, and environment characteristics that go along with play are the true causal agents. We end by considering epiphenomenalism more deeply and discussing implications for preschool settings and further research in this domain. Our take-away message is that existing evidence does not support strong causal claims about the unique importance of pretend play for development and that much more and better research is essential for clarifying its possible role."}
{"_id": "14b8cfb5585af4952d43a5dfd21b76e1d1ee2b81", "title": "Adversarial Examples: Attacks on Machine Learning-based Malware Visualization Detection Methods", "text": "As the threat of malicious software (malware) becomes urgently serious, automatic malware detection techniques have received increasing attention recently, where the machine learning (ML)-based visualization detection plays a significant role. However, this leads to a fundamental problem whether such detection methods can be robust enough against various potential attacks. Even though ML algorithms show superiority to conventional ones in malware detection in terms of high efficiency and accuracy, this paper demonstrates that such ML-based malware detection methods are vulnerable to adversarial examples (AE) attacks. We propose the first AE-based attack framework, named Adversarial Texture Malware Perturbation Attacks (ATMPA), based on the gradient descent or L-norm optimization method. By introducing tiny perturbations on the transformed dataset, ML-based malware detection methods completely fail. The experimental results on the MS BIG malware dataset show that a small interference can reduce the detection rate of convolutional neural network (CNN), support vector machine (SVM) and random forest(RF)-based malware detectors down to 0 and the attack transferability can achieve up to 88.7% and 74.1% on average in different ML-based detection methods."}
{"_id": "88e1ce8272282bf1d1b5e55894cc545e72ff9aa1", "title": "Block Chain based Searchable Symmetric Encryption", "text": "The mechanism for traditional Searchable Symmetric Encryption is pay-then-use. That is to say, if a user wants to search some documents that contain special keywords, he needs to pay to the server firstly, then he can enjoy search service. Under this situation, these kinds of things will happen: After the user paying the service fees, the server may either disappear because of the poor management or returning nothing. As a result, the money that the user paid cannot be brought back quickly. Another case is that the server may return incorrect document sets to the user in order to save his own cost. Once such events happen, it needs the arbitration institution to mediate which will cost a long time. Besides, to settle the disputes the user has to pay to the arbitration institution. Ideally, we deeply hope that when the user realizes the server has a tendency to cheat in the task of searching, he can immediately and automatically withdraw his money to safeguard his right. However, the existing SSE protocols cannot satisfy this demand. To solve this dilemma, we find a compromised method by introducing the block chain into SSE. Our scheme achieves three goals stated below. Firstly, when the server does not return any thing to user after he gets the search token, the user can get some compensation from the server, because the server can infer some important information from the Index and this token. Besides, the user also doesn\u2019t pay the service charge. Secondly, if the documents that the server returns are false, the server cannot receive service fees, meanwhile, he will be punished. Lastly, when the user receives some bitcoin from server at the beginning, he may terminate the protocol. Under this situation, the server is a victim. In order to prevent such thing from happening, the server will broadcast a transaction to redeem his pledge after an appointed time. \u2217Corresponding author Email addresses: tianhb@mail.sysu.edu.cn (Haibo Tian), isszhfg@mail.sysu.edu.cn (Fangguo Zhang ) Preprint submitted to Elsevier April 26, 2017"}
{"_id": "b91b36da582de570d7c368a192914c43961a6eff", "title": "Ionospheric Time-Delay Algorithm for Single-Frequency GPS Users", "text": "The goal in designing an ionospheric time-delay correctionalgorithm for the single-frequency global positioning system userwas to include the main features of the complex behavior of theionosphere, yet require a minimum of coefficients and usercomputational time, while still yielding an rms correction of at least50 percent. The algorithm designed for this purpose, andimplemented in the GPS satellites, requires only eight coefficientssent as part of the satellite message, contains numerousapproximations designed to reduce user computationalrequirements, yet preserves the essential elements required to obtaingroup delay values along multiple satellite viewing directions."}
{"_id": "6d535375536d5a71eb9b18d93aa9fb09e3e050fb", "title": "Weighted Similarity Schemes for High Scalability in User-Based Collaborative Filtering", "text": "Similarity-based algorithms, often referred to as memory-based collaborative filtering techniques, are one of the most successful methods in recommendation systems. When explicit ratings are available, similarity is usually defined using similarity functions, such as the Pearson correlation coefficient, cosine similarity or mean square difference. These metrics assume similarity is a symmetric criterion. Therefore, two users have equal impact on each other in recommending new items. In this paper, we introduce new weighting schemes that allow us to consider new features in finding similarities between users. These weighting schemes, first, transform symmetric similarity to asymmetric similarity by considering the number of ratings given by users on non-common items. Second, they take into account the habit effects of users are regarded on rating items by measuring the proximity of the number of repetitions for each rate on common rated items. Experiments on two datasets were implemented and compared to other similarity measures. The results show that adding weighted schemes to traditional similarity measures These authors contributed equally to this work as the first author. P. Pirasteh \u00b7 D. Hwang Department of Computer Engineering, Yeungnam University, Gyeongsan, Republic of Korea P. Pirasteh e-mail: parivash63@gmail.com D. Hwang e-mail: dosamhwang@gmail.com J. E. Jung ( ) Department of Computer Engineering, Chung-Ang University, Seoul, Republic of Korea e-mail: ontology.society@gmail.com significantly improve the results obtained from traditional similarity measures."}
{"_id": "c6e3df7ea9e28e25e048f840f59088a34bed8322", "title": "Optimization tool for direct water cooling system of high power IGBT modules", "text": "Thermal management of power electronic devices is essential for reliable system performance especially at high power levels. Since even the most efficient electronic circuit becomes hot because of ohmic losses, it is clear that cooling is needed in electronics and even more as the power increases. One of the most important activities in the thermal management and reliability improvement is the cooling system design. As industries are developing smaller power devices with higher power densities, optimized design of cooling systems with minimum thermal resistance and pressure drop become important issue for thermal design engineers. This paper aims to present a user friendly optimization tool for direct water cooling system of a high power module which enables the cooling system designer to identify the optimized solution depending on customer load profiles and available pump power. CFD simulations are implemented to find best solution for each scenario."}
{"_id": "5d3d9ac765d5dbe27ac1b967aece1b3fc0c3893f", "title": "BER Analysis of MIMO-OFDM System using Alamouti STBC and MRC Diversity Scheme over Rayleigh Multipath Channel", "text": "Commons Attribution-Noncommercial 3.0 Unported License http://creativecommons.org/licenses/by-nc/3.0/), permitting all non commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. Global Journal of Researches in Engineering Electrical and Electronics Engineering Volume 13 Issue 13 Version 1.0 Year 2013 Type: Double Blind Peer Reviewed International Research Journal Publisher: Global Journals Inc. (USA) Online ISSN: 2249-4596 & Print ISSN: 0975-5861"}
{"_id": "8b3539e359f4dd445c0651ebcb911c034d1ffc5b", "title": "A passivity based Cartesian impedance controller for flexible joint robots - part I: torque feedback and gravity compensation", "text": "In this paper a novel approach to the Cartesian impedance control problem for robots with flexible joints is presented. The proposed controller structure is based on simple physical considerations, which are motivating the extension of classical position feedback by an additional feedback of the joint torques. The torque feedback action can be interpreted as a scaling of the apparent motor inertia. Furthermore the problem of gravity compensation is addressed. Finally, it is shown that the closed loop system can be seen as a feedback interconnection of passive systems. Based on this passivity property a proof of asymptotic stability is presented."}
{"_id": "33576c0fc316c45c3672523114b20a5bb996e1f4", "title": "A unified approach for motion and force control of robot manipulators: The operational space formulation", "text": "A framework for the analysis and control of manipulator systems with respect to the dynamic behavior of their end-effectors is developed. First, issues related to the description of end-effector tasks that involve constrained motion and active force control are discussed. The fundamentals of the operational space formulation are then presented, and the unified approach for motion and force control is developed. The extension of this formulation to redundant manipulator systems is also presented, constructing the end-effector equations of motion and describing their behavior with respect to joint forces. These results are used in the development of a new and systematic approach for dealing with the problems arising at kinematic singularities. At a singular configuration, the manipulator is treated as a mechanism that is redundant with respect to the motion of the end-effector in the subspace of operational space orthogonal to the singular direction."}
{"_id": "4358e29f6c95f371ade11a56e8b2ffea549d5842", "title": "A passivity based Cartesian impedance controller for flexible joint robots - part II: full state feedback, impedance design and experiments", "text": "The paper presents a Cartesian impedance controller for flexible joint robots based on the feedback of the complete state of the system, namely the motor position, the joint torque and their derivatives. The approach is applied to a quite general robot model, in which also a damping element is considered in parallel to the joint stiffness. Since passivity and asymptotic stability of the controller hold also for varying damping matrices, some possibilities of designing those gain matrices (depending on the actual inertia matrix) are addressed. The passivity of the controller relies on the usage of only motor side measurements for the position feedback. A method is introduced, which provides the exact desired link side stiffness based on this motor position information. Experimental results are validating the proposed controller."}
{"_id": "666e4ea184b0d093faacb0dd301e8e598a62603b", "title": "Nonlinear Systems Analysis", "text": "THIS IS a completely rewritten version of the first edition which appeared in 1978. At that time the book gave a good overview of the main body of nonlinear systems and control theory, except for nonlinear optimal control theory. The emphasis was very much on the analysis of systems (in openor closed-loop configuration), instead of on the actual design of controllers. Most attention was being focused on stability issues, both from the input-output and the state-space point of view. Highlights in this theory were the concepts of input-output stability including Popov's criterion and the circle criterion (see also Desoer and Vidyasagar, 1975), the theory of Lyapunov stability, and perhaps the method of describing functions. Since the appearance of the first edition, geometric nonlinear control theory has become a prevailing trend. (In fact these developments were already initiated at the beginning of the seventies.) Geometric nonlinear control theory started as a successful approach to deal with basic system-theoretic questions in the state-space formulation of nonlinear control systems, such as controllability and observability properties and minimal realization theory. It gained strong impetus at the beginning of the eighties by the systematic study of nonlinear (state) feedback for various synthesis problems; also stimulated by the geometric approach to linear control theory of Wonham and Morse and Basile and Marro: In particular the Lie bracket conditions for controllability and feedback linearizability have become popular and powerful tools in nonlinear control. The books by Isidori (1989) and Nijmeijer and Van der Schaft (1990) are clear exponents of the achievements of geometric nonlinear control theory. In the last couple of years one can witness some developments which can be looked at as attempts to bridge the gap between nonlinear systems and control theory as put forward in the first edition of the present book on the one hand, and geometric nonlinear control theory on the other hand. Indeed, in nonlinear adaptive control as well as in nonlinear robust control (including nonlinear ~-cont ro l ) there is much need for the kind of stability concepts as exposed in the first edition of the book, while at the same time geometric nonlinear control theory offers an underlying structural framework. In particular passivity, and more generally, dissipativity concepts turn out to be extremely useful in these areas, especially within the context of the control of physical systems. Also, there has been recently some rapprochement between nonlinear input-output theory and (geometric) state-space theory. In my opinion these converging developments are very important and indeed offer promising perspectives for a truly nonlinear control design. In the present second edition Professor Vidyasagar has made an admirable attempt to include at least an introduction to geometric nonlinear control theory in an additional chapter of the book. In fact, this new chapter deals with the perhaps most 'useful' parts of geometric nonlinear control theory such as controllability, feedback linearization and input-output linearization. The other most noticeable differences with the first edition are the inclusion of some"}
{"_id": "d04c3ab918aad5f2b3fcec971db5cbe81de72d3e", "title": "On a New Generation of Torque Controlled Light-Weight Robots", "text": "The paper describes the recent design and development efforts in DLR \u0301s robotics lab towards the second generation of light-weight robots. The design of the lightweight mechanics, integrated sensors and electronics is outlined. The fully sensorized joint, with motor and link position sensors as well as joint torque sensors enables the implementation of effective vibration damping and advanced control strategies for compliant manipulation. The mechatronic approach incorporates a tight collaboration between mechanics, electronics and controller design as well. Thus we hope, that important steps towards a new generation of service and personal robots have been achieved."}
{"_id": "a1d66f1344424f8294a740dcda9847bbc853be2e", "title": "Compact Zigzag-Shaped-Slit Microstrip Antenna With Circular Defected Ground Structure for Wireless Applications", "text": "In this letter, a compact zigzag-shaped-slit rectangular microstrip patch antenna with circular defected ground structure (DGS) is designed for wireless applications. The probe-fed antenna consisting of a zigzag-shaped slit, dual T-shaped slits on either sides of a rectangular patch, and circular dumbbell-shaped defected ground plane is optimized. The antenna was able to generate three separate resonances to cover both the 2.45/5.28-GHz WLAN bands and the 3.5-GHz WiMAX bands while maintaining a small overall size of 40 \u00d728 \u00d73.175 mm3. The return-loss impedance bandwidth values are enhanced significantly for three resonant frequencies. The designed antenna is characterized with better radiation patterns and potentially stable gain around 4-6 dBi over the working bands. Good agreement was obtained between measurements and simulations."}
{"_id": "43751b572e8608f88d851a541f67a8ae456b780b", "title": "FIRST EXPERIMENTAL INVESTIGATIONS ON WHEEL-WALKING FOR IMPROVING TRIPLE-BOGIE ROVER LOCOMOTION PERFORMANCES", "text": "Deployment actuators of a triple-bogie rover locomotion platform can be used to perform Wheel-Walking (WW) manoeuvres. How WW could affect the traversing capabilities of rovers is a recurrent debate in the planetary robotics community. The Automation and Robotics Section of ESTEC has initiated a long term project to evaluate the performance of WW manoeuvres in different scenarios. This paper presents the first experimental results on this project, obtained during the test campaign run on November 2014 at the Planetary Robotics Lab (PRL) of ESTEC, and shows the performance analysis made when comparing WW with standard rolling. The PRL rover prototype ExoTeR was used to test three different scenarios: entrapment in loose soil, up-slope traverse and lander egressing. WW locomotion showed increased capabilities in all scenarios and proved its relevance and advantages for planetary exploration missions."}
{"_id": "61723cdd6f8195c8bb7407a04bb29a690de6a08c", "title": "ISO 9241-11 Revised: What Have We Learnt About Usability Since 1998?", "text": "A revision is currently being undertaken of ISO 9241-11, published in 1998 to provide guidance on usability. ISO-9241-11 defines usability in terms of effectiveness, efficiency and satisfaction in a particular context of use. The intention was to emphasise that usability is an outcome of interaction rather than a property of a product. This is now widely accepted. However, the standard also places emphasis on usability measurement and it is now appreciated that there is more to usability evaluation than measurement. Other developments include an increasing awareness of the importance of the individual user's emotional experience as discretionary usage of complex consumer products and use of the World Wide Web have became more widespread. From an organisational perspective, it is now appreciated that usability plays an important role in managing the potentials risks that can arise from inappropriate outcomes of interaction. The revision of ISO 9241-11 takes account of these issues and other feedback."}
{"_id": "c4938b4967b5953d95bd531d84f10ae6585fb434", "title": "Aerodynamic Design Optimization Studies of a Blended-Wing-Body Aircraft", "text": "Abstract The blended-wing body is an aircraft configuration that has the potential to be more efficient than conventional large transport aircraft configurations with the same capability. However, the design of the blended-wing is challenging due to the tight coupling between aerodynamic performance, trim, and stability. Other design challenges include the nature and number of the design variables involved, and the transonic flow conditions. To address these issues, we perform a series of aerodynamic shape optimization studies using Reynolds-averaged Navier\u2013Stokes computational fluid dynamics with a Spalart\u2013Allmaras turbulence model. A gradient-based optimization algorithm is used in conjunction with a discrete adjoint method that computes the derivatives of the aerodynamic forces. A total of 273 design variables\u2014twist, airfoil shape, sweep, chord, and span\u2014are considered. The drag coefficient at the cruise condition is minimized subject to lift, trim, static margin, and center plane bending moment constraints. The studies investigate the impact of the various constraints and design variables on optimized blended-wing-body configurations. The lowest drag among the trimmed and stable configurations is obtained by enforcing a 1% static margin constraint, resulting in a nearly elliptical spanwise lift distribution. Trim and static stability are investigated at both onand off-design flight conditions. The single-point designs are relatively robust to the flight conditions, but further robustness is achieved through a multi-point optimization."}
{"_id": "864fe713fd082585b67198ad7942a1568e536bd2", "title": "Group-Sensitive Triplet Embedding for Vehicle Reidentification", "text": "The widespread use of surveillance cameras toward smart and safe cities poses the critical but challenging problem of vehicle reidentification (Re-ID). The state-of-the-art research work performed vehicle Re-ID relying on deep metric learning with a triplet network. However, most existing methods basically ignore the impact of intraclass variance-incorporated embedding on the performance of vehicle reidentification, in which robust fine-grained features for large-scale vehicle Re-ID have not been fully studied. In this paper, we propose a deep metric learning method, group-sensitive-triplet embedding (GS-TRE), to recognize and retrieve vehicles, in which intraclass variance is elegantly modeled by incorporating an intermediate representation \u201cgroup\u201d between samples and each individual vehicle in the triplet network learning. To capture the intraclass variance attributes of each individual vehicle, we utilize an online grouping method to partition samples within each vehicle ID into a few groups, and build up the triplet samples at multiple granularities across different vehicle IDs as well as different groups within the same vehicle ID to learn fine-grained features. In particular, we construct a large-scale vehicle database \u201cPKU-Vehicle,\u201d consisting of 10 million vehicle images captured by different surveillance cameras in several cities, to evaluate the vehicle Re-ID performance in real-world video surveillance applications. Extensive experiments over benchmark datasets VehicleID, VeRI, and CompCar have shown that the proposed GS-TRE significantly outperforms the state-of-the-art approaches for vehicle Re-ID."}
{"_id": "8039a25fa3408d73829c3746e0e453ad7dc268b4", "title": "Integrating technology in the classroom: a visual conceptualization of teachers' knowledge, goals and beliefs", "text": "In this paper, we devise a diagrammatic conceptualization to describe and represent the complex interplay of a teacher\u2019s knowledge (K), goals (G) and beliefs (B) in leveraging technology effectively in the classroom. The degree of coherency between the KGB region and the affordances of the technology serves as an indicator of the teachers\u2019 developmental progression through the initiation, implementation and maturation phases of using technology in the classroom. In our study, two teachers with differing knowledge, goals and beliefs are studied as they integrated GroupScribbles technology in their classroom lessons over a period of 1 year. Our findings reveal that the transition between the teacher\u2019s developmental states (as indicated by coherency diagrams) is nonlinear, and thus the importance of ensuring high coherency right at the initiation stage. Support for the teacher from other teachers and researchers remains an important factor in developing the teacher\u2019s competency to leverage the technology successfully. The stability of the KGB region further ensures smooth progression of the teacher\u2019s effective integration of technology in the classroom."}
{"_id": "94d03985252d5a497feb407f1366fc29b34cb0fc", "title": "EvalVid - A Framework for Video Transmission and Quality Evaluation", "text": "With EvalVid we present a complete framework and tool-set for evaluation of the quality of video transmitted over a real or simu lated communication network. Besides measuring QoS parameters of the underlyin g network, like loss rates, delays, and jitter, we support also a subjective vide o quality evaluation of the received video based on the frame-by-frame PSNR calcula tion. The tool-set has a modular construction, making it possible to exchange b oth the network and the codec. We present here its application for MPEG-4 as exam ple. EvalVid is targeted for researchers who want to evaluate their network designs or setups in terms of user perceived video quality. The tool-set is publi cly available [11]."}
{"_id": "fbad8d246083321ffbfdb4b07c28e3868f45e1ce", "title": "Design and control of an actuated thumb exoskeleton for hand rehabilitation following stroke", "text": "Chronic hand impairment is common following stroke. This paper presents an actuated thumb exoskeleton (ATX) to facilitate research in hand rehabilitation therapy. The ATX presented in this work permits independent bi-directional actuation in each of the 5 degrees-of-freedom (DOF) of the thumb using a mechanism that has 5 active DOF and 3 passive DOF. The ATX is able to provide considerable joint torques for the user while still allowing backdrivability through flexible shaft transmission. A prototype has been built and experiments were conducted to evaluate the closed-loop position control. Further improvement and future work are discussed."}
{"_id": "917a10ee4d12cf07a59e16e0d3620545218a60c6", "title": "Robotic Cutting : Mechanics and Control of Knife Motion", "text": "Effectiveness of cutting is measured by the ability to achieve material fracture with smooth knife movements. The work performed by the knife overcomes the material toughness, acts against the blade-material friction, and generates shape deformation. This paper studies how to control a 2-DOF robotic arm equipped with a force/torque sensor to cut through an object in a sequence of three moves: press, push, and slice. For each move, a separate control strategy in the Cartesian space is designed to incorporate contact and/or force constraints while following some prescribed trajectory. Experiments conducted over several types of natural foods have demonstrated smooth motions like would be commanded by a human hand."}
{"_id": "ff00e759842a949776fb15235db1f4a433d4a303", "title": "Pravastatin in elderly individuals at risk of vascular disease (PROSPER): a randomised controlled trial", "text": "BACKGROUND\nAlthough statins reduce coronary and cerebrovascular morbidity and mortality in middle-aged individuals, their efficacy and safety in elderly people is not fully established. Our aim was to test the benefits of pravastatin treatment in an elderly cohort of men and women with, or at high risk of developing, cardiovascular disease and stroke.\n\n\nMETHODS\nWe did a randomised controlled trial in which we assigned 5804 men (n=2804) and women (n=3000) aged 70-82 years with a history of, or risk factors for, vascular disease to pravastatin (40 mg per day; n=2891) or placebo (n=2913). Baseline cholesterol concentrations ranged from 4.0 mmol/L to 9.0 mmol/L. Follow-up was 3.2 years on average and our primary endpoint was a composite of coronary death, non-fatal myocardial infarction, and fatal or non-fatal stroke. Analysis was by intention-to-treat.\n\n\nFINDINGS\nPravastatin lowered LDL cholesterol concentrations by 34% and reduced the incidence of the primary endpoint to 408 events compared with 473 on placebo (hazard ratio 0.85, 95% CI 0.74-0.97, p=0.014). Coronary heart disease death and non-fatal myocardial infarction risk was also reduced (0.81, 0.69-0.94, p=0.006). Stroke risk was unaffected (1.03, 0.81-1.31, p=0.8), but the hazard ratio for transient ischaemic attack was 0.75 (0.55-1.00, p=0.051). New cancer diagnoses were more frequent on pravastatin than on placebo (1.25, 1.04-1.51, p=0.020). However, incorporation of this finding in a meta-analysis of all pravastatin and all statin trials showed no overall increase in risk. Mortality from coronary disease fell by 24% (p=0.043) in the pravastatin group. Pravastatin had no significant effect on cognitive function or disability.\n\n\nINTERPRETATION\nPravastatin given for 3 years reduced the risk of coronary disease in elderly individuals. PROSPER therefore extends to elderly individuals the treatment strategy currently used in middle aged people."}
{"_id": "26aa2421657e68751f6c9954f157ae54cc27049f", "title": "A framework for evaluating quality-driven self-adaptive software systems", "text": "Over the past decade the dynamic capabilities of self-adaptive software-intensive systems have proliferated and improved significantly. To advance the field of self-adaptive and self-managing systems further and to leverage the benefits of self-adaptation, we need to develop methods and tools to assess and possibly certify adaptation properties of self-adaptive systems, not only at design time but also, and especially, at run-time. In this paper we propose a framework for evaluating quality-driven self-adaptive software systems. Our framework is based on a survey of self-adaptive system papers and a set of adaptation properties derived from control theory properties. We also establish a mapping between these properties and software quality attributes. Thus, corresponding software quality metrics can then be used to assess adaptation properties."}
{"_id": "a4fa356a2652a87eaa0fe362b21e89e4b17fd0ed", "title": "A multimodal adaptive session manager for physical rehabilitation exercising", "text": "Physical exercising is an essential part of any rehabilitation plan. The subject must be committed to a daily exercising routine, as well as to a frequent contact with the therapist. Rehabilitation plans can be quite expensive and time-consuming. On the other hand, tele-rehabilitation systems can be really helpful and efficient for both subjects and therapists. In this paper, we present ReAdapt, an adaptive module for a tele-rehabilitation system that takes into consideration the progress and performance of the exercising utilizing multisensing data and adjusts the session difficulty resulting to a personalized session. Multimodal data such as speech, facial expressions and body motion are being collected during the exercising and feed the system to decide on the exercise and session difficulty. We formulate the problem as a Markov Decision Process and apply a Reinforcement Learning algorithm to train and evaluate the system on simulated data."}
{"_id": "a4fea01c735f9515636291696e0860a17d63918a", "title": "Environmental influence in the brain, human welfare and mental health", "text": "The developing human brain is shaped by environmental exposures\u2014for better or worse. Many exposures relevant to mental health are genuinely social in nature or believed to have social subcomponents, even those related to more complex societal or area-level influences. The nature of how these social experiences are embedded into the environment may be crucial. Here we review select neuroscience evidence on the neural correlates of adverse and protective social exposures in their environmental context, focusing on human neuroimaging data and supporting cellular and molecular studies in laboratory animals. We also propose the inclusion of innovative methods in social neuroscience research that may provide new and ecologically more valid insight into the social-environmental risk architecture of the human brain."}
{"_id": "d4cb8486072db071eb3388078fb2a01b724ef262", "title": "Compact Rep-Rate GW Pulsed Generator Based on Forming Line With Built-In High-Coupling Transformer", "text": "In this paper, a compact rep-rate GW pulsed power generator is developed. First, its three key subsystems are theoretically analyzed, engineering designed, and experimentally investigated, respectively. The emphases are put on these four problems: the theoretical analysis of the voltage distribution across the conical secondary windings of the high-coupling transformer, the investigation of the high energy storage density dielectric used in the pulse forming line, the choice of the gas flow velocity of the gas blowing system, and theoretical analysis of the passive stability of the pulsed power generator operated in rep-rate mode. Second, the developed pulsed power generator is described in detail. It has a 0.2-m diameter, a 1.0-m length, and a 20- \u03a9 wave impedance. Across a 100- \u03a9 resistive dummy load, it can steadily operate at a 300-kV output voltage in 50-Hz rep-rate and 250 kV in 150 Hz without gas blowing system. The average power is ~ 1 kW. Finally, the pulsed power generator is applied to drive a relativistic backward-wave oscillator, generating a high-power microwave with peak output power of 200 MW and duration (full-width at half-maximum) of 5 ns in 150-Hz rep-rate. These efforts set a good foundation for the development of a compact rep-rate pulsed power generator and show a promising application for the future."}
{"_id": "ae4f36ed7915db38c0baa21c14ed1455df95a738", "title": "Myanmar Spell Checker", "text": "Natural Language Processing (NLP) is one of the most important research area carried out in the world of Human Language. For every language, spell checker is an essential component of many of Office Automation Systems and Machine Translation Systems. In this paper, we develop a Myanmar Spell Checker System which can handle Typographic errors, Sequence errors, Phonetic errors, and Context errors. A Myanmar Text Corpus is created for developing Myanmar Spell checker. To check Typographic Errors, corpus look up approach is applied. Myanmar3 Unicode is applied in this system so that it can automatically reorder the character sequence. A compound misused word detection algorithm is proposed for Phonetic Errors checking and Bayesian Classifier is applied for Context Errors checking. In this system, Levenshtein Distance Algorithm is applied to improve users\u2019 efficiency by providing a suggestion list for misspelled Myanmar Words. We provide evaluation results of the system and our approach can handle various types of Myanmar spell errors."}
{"_id": "79b58b2ddabb506637508b6d6d5a2081586d69cf", "title": "A Supervised Named-Entity Extraction System for Medical Text", "text": "TOKEN LEMMA POS Chunk Capitalized Token Class Dependency Sem Group Doc Type Section CUI TUI SemGroup UMLS Categ LEXIQUE SG LEXIQUE CUI DISO TOKEN ANAT TOKEN Wiki Gold label Nails nail NNS B-NP Capitalized Word ROOT DIS heent B-C1 B-T023 B-ANAT B DISO B-ANAT B-C0027342 1 1 \u2013 O : : : O \u2013 Punct P ANAT DIS heent \u2013 \u2013 \u2013 O O O 0 0 \u2013 O No no DT B-NP Capitalized Word NMOD #NA DIS heent \u2013 \u2013 \u2013 O O O 0 0 \u2013 O bed bed NN I-NP \u2013 Word NMOD #NA DIS heent \u2013 \u2013 \u2013 O O O 1 1 \u2013 B-DISORDER abnormalities abnormality NNS I-NP \u2013 Word NMOD DISO DIS heent \u2013 \u2013 \u2013 O O O 1 0 \u2013 I-DISORDER , , , O \u2013 Punct P #NA DIS heent \u2013 \u2013 \u2013 O O O 0 0 \u2013 O lunulas lunula NNS B-NP \u2013 Word AMOD #NA DIS heent \u2013 \u2013 \u2013 O O O 0 0 \u2013 O present present JJ B-ADJP \u2013 Word APPO #NA DIS heent \u2013 \u2013 \u2013 O O O 1 1 \u2013 O , , , O \u2013 Punct P ANAT DIS heent \u2013 \u2013 \u2013 O O O 0 0 \u2013 O no no DT B-NP \u2013 Word NMOD DISO DIS heent \u2013 \u2013 \u2013 O O O 0 0 \u2013 O splinters splinter NNS I-NP \u2013 Word ROOT DIS heent B-C2 C3 B-T037 B-DISO O O O 1 0 B-DISO B-DISORDER , , , I-NP \u2013 Punct P DISO DIS heent \u2013 \u2013 \u2013 O O O 0 0 \u2013 O pulses pulse NNS I-NP \u2013 Word COORD DISO DIS heent \u2013 \u2013 \u2013 O B-PHYS B-C0391850 0 0 \u2013 O I Lexical and morphological features (e.g., capitalization, lemma)"}
{"_id": "f306c0d24a5eb338b7a577a17d8b35d78716d880", "title": "Dual-Transfer Face Sketch\u2013Photo Synthesis", "text": "Recognizing the identity of a sketched face from a face photograph dataset is a critical yet challenging task in many applications, not least law enforcement and criminal investigations. An intelligent sketched face identification system would rely on automatic face sketch synthesis from photographs, thereby avoiding the cost of artists manually drawing sketches. However, conventional face sketch\u2013photo synthesis methods tend to generate sketches that are consistent with the artists\u2019drawing styles. Identity-specific information is often overlooked, leading to unsatisfactory identity verification and recognition performance. In this paper, we discuss the reasons why conventional methods fail to recover identity-specific information. Then, we propose a novel dual-transfer face sketch\u2013photo synthesis framework composed of an inter-domain transfer process and an intra-domain transfer process. In the inter-domain transfer, a regressor of the test photograph with respect to the training photographs is learned and transferred to the sketch domain, ensuring the recovery of common facial structures during synthesis. In the intra-domain transfer, a mapping characterizing the relationship between photographs and sketches is learned and transferred across different identities, such that the loss of identity-specific information is suppressed during synthesis. The fusion of information recovered by the two processes is straightforward by virtue of an ad hoc information splitting strategy. We employ both linear and nonlinear formulations to instantiate the proposed framework. Experiments on The Chinese University of Hong Kong face sketch database demonstrate that compared to the current state-of-the-art the proposed framework produces more identifiable facial structures and yields higher face recognition performance in both the photo and sketch domains."}
{"_id": "6e7f16af764e42f8386705bbc955fd65ae2a5f71", "title": "Skin microbiota: overview and role in the skin diseases acne vulgaris and rosacea.", "text": "As the first barrier to environmental exposures, human skin has developed an integrated immune system to protect the inner body from chemical, physical or microbial insults. Microorganisms inhabiting superficial skin layers are known as skin microbiota and include bacteria, viruses, archaea and fungi. The microbiota composition is crucial in the instruction and support of the skin's immune system. Changes in microbiota can be due to individual, environmental or behavioral factors, such as age, climate, hygiene or antibiotic consumption, which can cause dysbiosis. The contribution of skin microbiota to disease development is known in atopic dermatitis, where there is an increase in Staphylococcus aureus. Culture-independent studies have enabled more accurate descriptions of this complex interplay. Microbial imbalance is associated with the development of various diseases. This review focuses on microbial imbalances in acne vulgaris and rosacea."}
{"_id": "60b87f87bee45a05ab2e711c893fddf621ac37cb", "title": "Eliminating Tight Coupling using Subscriptions Subgrouping in Structured Overlays", "text": "An advertisement and matching subscription are tightly coupled to activate a single publication routing path in content-based publish/subscribe systems. Due to this tight coupling, instantaneous updates in routing tables are required to generate alternative paths for dynamic routing. This poses serious challenges in offering scalable and robust dynamic routing in cyclic overlays when network conditions like link congestion is detected. We propose, OctopiA, a distributed publish/subscribe system for content-based dynamic routing in structured cyclic overlays. OctopiA uses a novel concept of subscription subgrouping to divide subscriptions into disjoint sets, called subscription subgroups, to eliminate tight coupling. While aiming at deployment in data center networks, OctopiA generates routing paths of minimum lengths. We use a homogeneous clustering approach with a bit-vector to realize subscription subgrouping and offer inter-cluster dynamic routing without requiring updates in routing tables. Experiments on a cluster testbed with real world data show that OctopiA reduces the number of saved advertisements in routing tables by 93%, subscription broadcast delay by 33%, static and dynamic publication delivery delays by 25% and 54%, respectively."}
{"_id": "e5c290307d3df32b1aaa872e2b5cd60f3f410f10", "title": "Virtual Reality as an Innovative Setting for Simulations in Education", "text": "The increasingly widespread use of simulations in education underlines the fact that these teaching tools compared to other methods or media allow students to approach and experience new topics in a more realistic way. This realism enhances their learning and understanding of complex subjects. So far it has been difficult to interactively simulate three-dimensional dynamic learning content. In this field, the continuing development of Virtual Reality (VR) offers new opportunities for educators to convey a wide variety of subjects. It is the aim of this paper to characterize the nature of Virtual Reality as an educational setting for simulations, and to show that Studierstube, our multi-user collaborative Virtual Environment, comprises the necessary features for applying VR-techniques to educational purposes. We further discuss the general applicability of VR to various fields of education and demonstrate its specific application as a tool for teaching elementary three-dimensional geometry."}
{"_id": "488f675419b6692e388e57e1324db87a82daa895", "title": "Causal discovery and inference: concepts and recent methodological advances", "text": "This paper aims to give a broad coverage of central concepts and principles involved in automated causal inference and emerging approaches to causal discovery from i.i.d data and from time series. After reviewing concepts including manipulations, causal models, sample predictive modeling, causal predictive modeling, and structural equation models, we present the constraint-based approach to causal discovery, which relies on the conditional independence relationships in the data, and discuss the assumptions underlying its validity. We then focus on causal discovery based on structural equations models, in which a key issue is the identifiability of the causal structure implied by appropriately defined structural equation models: in the two-variable case, under what conditions (and why) is the causal direction between the two variables identifiable? We show that the independence between the error term and causes, together with appropriate structural constraints on the structural equation, makes it possible. Next, we report some recent advances in causal discovery from time series. Assuming that the causal relations are linear with nonGaussian noise, we mention two problems which are traditionally difficult to solve, namely causal discovery from subsampled data and that in the presence of confounding time series. Finally, we list a number of open questions in the field of causal discovery and inference."}
{"_id": "37eba6d7c346813ceb21c1e17aee53df34527962", "title": "Learning to Reason With Adaptive Computation", "text": "Multi-hop inference is necessary for machine learning systems to successfully solve tasks such as Recognising Textual Entailment and Machine Reading. In this work, we demonstrate the effectiveness of adaptive computation for learning the number of inference steps required for examples of different complexity and that learning the correct number of inference steps is difficult. We introduce the first model involving Adaptive Computation Time which provides a small performance benefit on top of a similar model without an adaptive component as well as enabling considerable insight into the reasoning process of the model."}
{"_id": "b6d2863a6a7afcda3630c83a7d9c353864f50086", "title": "Alignment of Monophonic and Polyphonic Music to a Score", "text": "Music alignmentis the associationof eventsin a score with pointsin the time axisof an audio signal. Thesignal is thus segmentedaccording to theeventsin thescore. We proposea new methodology for automaticalignmentbasedon dynamic time warping, where the spectral peak structure is usedto computethe local distance, enhancedby a modelof attacks andof silence. Themethodologycancopewith performances consider eddifficult to align, like polyphonicmusic,trills, fast sequences, or multi-instrumentmusic.An optimisationof the representationof thealignmentpathmakesthemethodapplicableto long soundfiles, so that unit databasescan be fully automaticallysegmentedand labeled. On 708 sequencesof synthesisedmusic,weachievedanaverageoffsetof 18msand an error rateof 2.5%."}
{"_id": "3605b9befd5f1b53019b8edb3b3d227901e76c89", "title": "Adaptive Mixtures of Local Experts", "text": "We present a new supervised learning procedure for systems composed of many separate networks, each of which learns to handle a subset of the complete set of training cases. The new procedure can be viewed either as a modular version of a multilayer supervised network, or as an associative version of competitive learning. It therefore provides a new link between these two apparently different approaches. We demonstrate that the learning procedure divides up a vowel discrimination task into appropriate subtasks, each of which can be solved by a very simple expert network."}
{"_id": "4682c9cbcb19aa1b381e64cc119a82b99ffdc6df", "title": "Bayesian Methods for Mixtures of Experts", "text": "Tony Robinson Cambridge University Engineering Department Cambridge CB2 1PZ England. Tel: [+44] 1223 332815 ajr@eng.cam.ac.uk We present a Bayesian framework for inferring the parameters of a mixture of experts model based on ensemble learning by variational free energy minimisation. The Bayesian approach avoids the over-fitting and noise level under-estimation problems of traditional maximum likelihood inference. We demonstrate these methods on artificial problems and sunspot time series prediction."}
{"_id": "58ceeb151558c1f322b9f6273b47e90e9c04e6b1", "title": "Neural Networks and the Bias/Variance Dilemma", "text": "Feedforward neural networks trained by error backpropagation are examples of nonparametric regression estimators. We present a tutorial on nonparametric inference and its relation to neural networks, and we use the statistical viewpoint to highlight strengths and weaknesses of neural models. We illustrate the main points with some recognition experiments involving artificial data as well as handwritten numerals. In way of conclusion, we suggest that current-generation feedforward neural networks are largely inadequate for difficult problems in machine perception and machine learning, regardless of parallel-versus-serial hardware or other implementation issues. Furthermore, we suggest that the fundamental challenges in neural modeling are about representation rather than learning per se. This last point is supported by additional experiments with handwritten numerals."}
{"_id": "630cb3d77f6fd9e8e81beecabed54df4ef87c627", "title": "The variational approximation for Bayesian inference", "text": "The influence of this Thomas Bayes' work was immense. It was from here that \"Bayesian\" ideas first spread through the mathematical world, as Bayes's own article was ignored until 1780 and played no important role in scientific debate until the 20th century. It was also this article of Laplace's that introduced the mathematical techniques for the asymptotic analysis of posterior distributions that are still employed today. And it was here that the earliest example of optimum estimation can be found, the derivation and characterization of an estimator that minimized a particular measure of posterior expected loss. After more than two centuries, we mathematicians, statisticians cannot only recognize our roots in this masterpiece of our science, we can still learn from it."}
{"_id": "3e1dd9fb09c6cddc6bf9d38686bea81986f9dabe", "title": "A multi-objective artificial bee colony algorithm", "text": "This work presents a multi-objective optimization method based on the artificial bee colony, called the MOABC, for optimizing problems with multiple objectives. The MOABC uses a grid-based approach to adaptively assess the Pareto frontmaintained in an external archive. The external archive is used to control the flying behaviours of the individuals and structuring the bee colony. The employed bees adjust their trajectories based on the non-dominated solutionsmaintained in the external archive. On the other hand, the onlooker bees select the food sources advertised by the employed bees to update their positions. The qualities of these food sources are computed based on the Pareto dominance notion. The scout bees are used by the MOABC to get rid of food sources with poor qualities. The proposed algorithm was evaluated on a set of standard test problems in comparison with other state-of-the-art algorithms. Experimental results indicate that the proposed approach is competitive compared to other algorithms considered in this work. \u00a9 2011 Elsevier B.V. All rights reserved."}
{"_id": "9b838caf2aff2040f4ac1b676e6af98b20d8b33c", "title": "Refining Geometry from Depth Sensors using IR Shading Images", "text": "We propose a method to refine geometry of 3D meshes from a consumer level depth camera, e.g. Kinect, by exploiting shading cues captured from an infrared (IR) camera. A major benefit to using an IR camera instead of an RGB camera is that the IR images captured are narrow band images that filter out most undesired ambient light, which makes our system robust against natural indoor illumination. Moreover, for many natural objects with colorful textures in the visible spectrum, the subjects appear to have a uniform albedo in the IR spectrum. Based on our analyses on the IR projector light of the Kinect, we define a near light source IR shading model that describes the captured intensity as a function of surface normals, albedo, lighting direction, and distance between light source and surface points. To resolve the ambiguity in our model between the normals and distances, we utilize an initial 3D mesh from the Kinect fusion and multi-view information to reliably estimate surface details that were not captured and reconstructed by the Kinect fusion. Our approach directly operates on the mesh model for geometry refinement. We ran experiments on our algorithm for geometries captured by both the Kinect I and Kinect II, as the depth acquisition in Kinect I is based on a structured-light technique and that of the Kinect II is based on a time-of-flight technology. The effectiveness of our approach is demonstrated through several challenging real-world examples. We have also performed a user study to evaluate the quality of the mesh models before and after our refinements."}
{"_id": "6f20e254e3993538c79e0ff2b9b8f198d3359cb3", "title": "Receptive fields of single neurones in the cat's striate cortex.", "text": "In the central nervous system the visual pathway from retina to striate cortex provides an opportunity to observe and compare single unit responses at several distinct levels. Patterns of light stimuli most effective in influencing units at one level may no longer be the most effective at the next. From differences in responses at successive stages in the pathway one may hope to gain some understanding of the part each stage plays in visual perception. By shining small spots of light on the light-adapted cat retina Kuffler (1953) showed that ganglion cells have concentric receptive fields, with an 'on' centre and an 'off ' periphery, or vice versa. The 'on' and 'off' areas within a receptive field were found to be mutually antagonistic, and a spot restricted to the centre of the field was more effective than one covering the whole receptive field (Barlow, FitzHugh & Kuffler, 1957). In the freely moving lightadapted cat it was found that the great majority of cortical cells studied gave little or no response to light stimuli covering most of the animal's visual field, whereas small spots shone in a restricted retinal region often evoked brisk responses (Hubel, 1959). A moving spot of light often produced stronger responses than a stationary one, and sometimes a moving spot gave more activation for one direction than for the opposite. The present investigation, made in acute preparations, includes a study of receptive fields of cells in the cat's striate cortex. Receptive fields of the cells considered in this paper were divided into separate excitatory and inhibitory ('on' and 'off') areas. In this respect they resembled retinal ganglion-cell receptive fields. However, the shape and arrangement of excitatory and inhibitory areas differed strikingly from the concentric pattern found in retinal ganglion cells. An attempt was made to correlate responses to moving stimuli"}
{"_id": "dcbe97cf05f157d43d17111665648fdbd54aa462", "title": "A handheld mirror simulation", "text": "We present the design and construction of a handheld mirror simulation device. The perception of the world reflected through a mirror depends on the viewer's position with respect to the mirror and the 3-D geometry of the world. In order to simulate a real mirror on a computer screen, images of the observed world, consistent with the viewer's position, must be synthesized and displayed in realtime. Our system is build around a LCD screen manipulated by the user, a single camera fixed on the screen, and a tracking device. The continuous input video stream and tracker data is used to synthesize, in real-time, a continuous video stream displayed on the LCD screen. The synthesized video stream is a close approximation of what the user would see on the screen surface if it were a real mirror. Our system provides a generic interface for applications involving rich, first-person interaction, such as the Virtual Daguerreotype."}
{"_id": "95d0e83f8c6c6574d76db83b83923b5f8e1dfdc5", "title": "Magic decorator: automatic material suggestion for indoor digital scenes", "text": "Assigning textures and materials within 3D scenes is a tedious and labor-intensive task. In this paper, we present Magic Decorator, a system that automatically generates material suggestions for 3D indoor scenes. To achieve this goal, we introduce local material rules, which describe typical material patterns for a small group of objects or parts, and global aesthetic rules, which account for the harmony among the entire set of colors in a specific scene. Both rules are obtained from collections of indoor scene images. We cast the problem of material suggestion as a combinatorial optimization considering both local material and global aesthetic rules. We have tested our system on various complex indoor scenes. A user study indicates that our system can automatically and efficiently produce a series of visually plausible material suggestions which are comparable to those produced by artists."}
{"_id": "9e23f09b17891ea03e92b4bf4e2fd7378adb4647", "title": "Efficiency Degradation in Wideband Power Amplifiers", "text": "This paper addresses the efficiency degradation observed in wideband power amplifiers. It starts by presenting a detailed explanation that relates this observed performance degradation with the terminations at baseband. Then, a comparison between two implemented power amplifiers with the same fundamental and harmonic terminations, but with different baseband networks is presented, showing that an optimized bias network design can improve the observed efficiency degradation."}
{"_id": "76ea8a16454a878b5613f398a62e022097cab39c", "title": "Generating Keyword Queries for Natural Language Queries to Alleviate Lexical Chasm Problem", "text": "In recent years, the task of reformulating natural language queries has received considerable attention from both industry and academic communities. Because of the lexical chasm problem between natural language queries and web documents, if we directly use natural language queries as inputs for retrieval, the results are usually unsatisfactory. In this work, we formulated the task as a translation problem to convert natural language queries into keyword queries. Since the nature language queries users input are diverse and multi-faceted, general encoder-decoder models cannot effectively handle low-frequency words and out-of-vocabulary words. We propose a novel encoder-decoder method with two decoders: the pointer decoder firstly extracts query terms directly from the source text via copying mechanism, then the generator decoder generates query terms using two attention modules simultaneously considering the source text and extracted query terms. For evaluation and training, we also proposed a semi-automatic method to construct a large-scale dataset about natural language query-keyword query pairs. Experimental results on this dataset demonstrated that our model could achieve better performance than the previous state-of-the-art methods."}
{"_id": "9a2607cbf136ae8094e0ad9a7a3c23e07ced9e4d", "title": "Customizing Computational Methods for Visual Analytics with Big Data", "text": "The volume of available data has been growing exponentially, increasing data problem's complexity and obscurity. In response, visual analytics (VA) has gained attention, yet its solutions haven't scaled well for big data. Computational methods can improve VA's scalability by giving users compact, meaningful information about the input data. However, the significant computation time these methods require hinders real-time interactive visualization of big data. By addressing crucial discrepancies between these methods and VA regarding precision and convergence, researchers have proposed ways to customize them for VA. These approaches, which include low-precision computation and iteration-level interactive visualization, ensure real-time interactive VA for big data."}
{"_id": "483c708a3ca76ddcbd3d0ba3a4fc5355a5611cad", "title": "EW-SHOT L EARNING", "text": "A recent approach to few-shot classification called matching networks has demonstrated the benefits of coupling metric learning with a training procedure that mimics test. This approach relies on an attention scheme that forms a distribution over all points in the support set, scaling poorly with its size. We propose a more streamlined approach, prototypical networks, that learns a metric space in which few-shot classification can be performed by computing Euclidean distances to prototype representations of each class, rather than individual points. Our method is competitive with state-of-the-art few-shot classification approaches while being much simpler and more scalable with the size of the support set. We empirically demonstrate the performance of our approach on the Omniglot and miniImageNet datasets. We further demonstrate that a similar idea can be used for zero-shot learning, where each class is described by a set of attributes, and achieve state-ofthe-art results on the Caltech UCSD bird dataset."}
{"_id": "353e31b5facc3f8016211a95325326c5a2d39033", "title": "Setting Lower Bounds on Truthfulness", "text": "We present general techniques for proving inapproximability results for several paradigmatic truthful multidimensional mechanism design problems. In particular, we demonstrate the strength of our techniques by exhibiting a lower bound of 2 \u2212 1 m for the scheduling problem with m unrelated machines (formulated as a mechanism design problem in the seminal paper of Nisan and Ronen on Algorithmic Mechanism Design). Our lower bound applies to truthful randomized mechanisms, regardless of any computational assumptions on the running time of these mechanisms. Moreover, it holds even for the wider class of truthfulness-in-expectation mechanisms. This lower bound nearly matches the known 1.58606 randomized truthful upper bound for the case of two machines (a non-truthful FPTAS exists). Recently, Daskalakis and Weinberg [17] show that there is a polynomial-time 2approximately optimal Bayesian mechanism for makespan minimization for unrelated machines. We complement this result by showing an appropriate lower bound of 1.25 for deterministic incentive compatible Bayesian mechanisms. We then show an application of our techniques to the workload-minimization problem in networks. We prove our lower bounds for this problem in the inter-domain routing setting presented by Feigenbaum, Papadimitriou, Sami, and Shenker. Finally, we discuss several notions of non-utilitarian fairness (Max-Min fairness, Min-Max fairness, and envy minimization) and show how our techniques can be used to prove lower bounds for these notions. No lower bounds for truthful mechanisms in multidimensional probabilistic settings were previously known.1 \u2217Computer Science Department, Technion, Haifa, Israel. ahumu@yahoo.com. \u2020School of Computer Science and Engineering, Hebrew University, Jerusalem, Israel. schapiram@huji.ac.il. The current paper supersedes \"Setting Lower Bounds on Truthfulness\" that appeared as an extended abstract in the Proceedings of the 18th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA07), pages 1143-1152, 2007. The current version includes a new lower bound result for Bayesian Incentive compatible Mechanisms."}
{"_id": "e9146b13946071d6273012aeb90b09d8c39eb25c", "title": "A d\u2013q Voltage Droop Control Method With Dynamically Phase-Shifted Phase-Locked Loop for Inverter Paralleling Without Any Communication Between Individual Inverters", "text": "This paper presents a modified droop control method for equal load sharing of parallel-connected inverters, without any communication between individual inverters. Droop in d- and q-axis voltages are given depending upon d- and q-axis currents, respectively. Each inverter works in the voltage control mode, where it controls the filter capacitor voltage. Voltage references of each inverter come from d- and q-axis voltage droops. This d- and q -axis voltage droops force the parallel-connected inverters to share equal current and hence equal active and reactive power. A dynamically phase-shifted phase-locked loop (PLL) technique is locally designed for generating phase reference of each inverter. The phase angle between filter capacitor voltage vector and d-axis is dynamically adjusted with the change in q-axis inverter current to generate the phase reference of each inverter. The strategy has been verified with simulations and experiments and results are presented in this paper."}
{"_id": "d6ae945f3a12f1151be8c4f4e5a41c486205f682", "title": "A Practical Approach to Insertion with Variable Socket Position Using Deep Reinforcement Learning", "text": "Insertion is a challenging haptic and visual control problem with significant practical value for manufacturing. Existing approaches in the model-based robotics community can be highly effective when task geometry is known, but are complex and cumbersome to implement, and must be tailored to each individual problem by a qualified engineer. Within the learning community there is a long history of insertion research, but existing approaches are typically either too sample-inefficient to run on real robots, or assume access to high-level object features, e.g. socket pose. In this paper we show that relatively minor modifications to an off-the-shelf Deep-RL algorithm (DDPG), combined with a small number of human demonstrations, allows the robot to quickly learn to solve these tasks efficiently and robustly. Our approach requires no modeling or simulation, no parameterized search or alignment behaviors, no vision system aside from raw images, and no reward shaping. We evaluate our approach on a narrowclearance peg-insertion task and a deformable clip-insertion task, both of which include variability in the socket position. Our results show that these tasks can be solved reliably on the real robot in less than 10 minutes of interaction time, and that the resulting policies are robust to variance in the socket position and orientation."}
{"_id": "ff74b6935cd793f1ab2a8029d9fc81e8bc7e065b", "title": "LTE in the sky: trading off propagation benefits with interference costs for aerial nodes", "text": "The popularity of unmanned aerial vehicles has exploded over the last few years, urgently demanding solutions to transfer large amounts of data from the UAV to the ground. Conversely, a control channel to the UAV is desired, in order to safely operate these vehicles remotely. This article analyzes the use of LTE for realizing this downlink data and uplink control. By means of measurements and simulations, we study the impact of interference and path loss when transmitting data to and from the UAV. Two scenarios are considered in which UAVs act as either base stations transmitting in downlink or UEs transmitting in uplink, and their impact on the respective downlink and uplink performance of an LTE ground network is analyzed. Both measurements and simulations are used to quantify such impact for a range of scenarios with varying altitude, distance from the base station, or UAV density. The measurement sets show that signal-to-interference ratio decreases up to 7 dB for UAVs at 150 m compared to ground users. Simulation results show that a UAV density of 10/km2 gives an average degradation of the signal-to-interference ratio of more than 6 dB. It is concluded that interference is going to be a major limiting factor when LTE enabled UAVs are introduced, and that strong technical solutions will have to be found."}
{"_id": "e567683dd6c1d226a8a34d1915fe74e04a3a585b", "title": "Scattered data modeling", "text": "A variety of methods for modeling scattered data are discussed, with an emphasis on two types of data: volumetric and spherical. To demonstrate the performance of the various methods, results from an empirical study using trivariate scattered data are presented. The author's objective is to provide information to aid in selecting a type of method or to form the basis for customizing a method for a particular application.<>"}
{"_id": "0421b5198da32440eaf1275b0cb247332e7330b9", "title": "Benign Osteoblastoma Involving Maxilla: A Case Report and Review of the Literature", "text": "Background. Osteoblastoma is a rare benign tumor. This tumor is characterized by osteoid and bone formation with the presence of numerous osteoblasts. The lesion is more frequently seen in long bones and rarely involves maxilla and mandible. Due to its clinical and histological similarity with other bone tumors such as osteoid osteoma and fibro-osseous lesions, osteoblastoma presents a diagnostic dilemma. Case Report. Very few cases of osteoblastomas involving maxillofacial region have been reported in the literature. This case report involves osteoblastoma involving right maxilla in an 18-year-old male patient. Following detailed clinical examination, radiological interpretation, and histopathological diagnosis, surgical excision was performed. The patient was followed up for a period of 3 years and was disease free. Summary and Conclusion. Benign osteoblastoma involving jaw bones is a rare tumor. There is a close resemblance of this tumor with other lesions such as fibro-osseous lesions and odontogenic tumors and thus faces a diagnostic challenge. Surgical excision with a long-term follow-up gives good prognosis to this lesion-Benign Osteoblastoma."}
{"_id": "a5799efdd5a7117a0d6e8b9a5ce0055d7a4499b4", "title": "Almost Optimal Exploration in Multi-Armed Bandits", "text": "We study the problem of exploration in stochastic Multi-Armed Bandits. Even in the simplest setting of identifying the best arm, there remains a logarithmic multiplicative gap between the known lower and upper bounds for the number of arm pulls required for the task. This extra logarithmic factor is quite meaningful in nowadays large-scale applications. We present two novel, parameterfree algorithms for identifying the best arm, in two di\u21b5erent settings: given a target confidence and given a target budget of arm pulls, for which we prove upper bounds whose gap from the lower bound is only doublylogarithmic in the problem parameters. We corroborate our theoretical results with experiments demonstrating that our algorithm outperforms the state-of-the-art and scales better as the size of the problem increases."}
{"_id": "dbe1f69118e5cd182add6ba115cb0d27a06a437a", "title": "Buck-boost converter fed BLDC motor drive for solar PV array based water pumping", "text": "Solar photovoltaic (SPV) array based water pumping is receiving wide attention now a days because the everlasting solar energy is the best alternative to the conventional energy sources. This paper deals with the utilization of a buck-boost converter in solar PV array based water pumping as an intermediate DC-DC converter between a solar PV array and a voltage source inverter (VSI) in order to achieve the maximum efficiency of the solar PV array and the soft starting of the permanent magnet brushless DC (BLDC) motor by proper control. Consisting of a least number of components and single switch, the buck-boost converter exhibits very good conversion efficiency. Moreover, buck-boost DC-DC converter topology is the only one allowing follow-up of the PV array maximum power point (MPP) regardless of temperature, irradiance, and connected load. A BLDC motor is employed to drive a centrifugal type of water pump because its load characteristic is well matched to the maximum power locus of the PV generator. The transient, dynamic and steady state behaviors of the proposed solar PV array powered buck-boost converter fed BLDC motor driven water pumping system are evaluated under the rapid and slowly varying atmospheric conditions using the sim-power-system toolboxes of the MATLAB/Simulink environment."}
{"_id": "75620a00ac0b00c5d94185409dd77b38d9cebe0e", "title": "Event detection from social media: 5W1H analysis on big data", "text": "Increasing number of social media content shared across the globe in real time creates a fascinating area of research. For most events, social media users act as collective sensors by sharing important information about events of any scale. Due to its real time nature, social media is much quicker to respond to such events relative to conventional news media. This paper proposes an event detection system which provides 5W1H (What, Where, When, Why, Who, How) analysis for each detected event. We make use of a myriad of techniques such as anomaly detection, named entity recognition, automatic summary generation, user link analysis. Our experimental results for the system indicate a faster event detection performance compared to conventional news sources. Event analysis results are also in line with the corresponding news articles about detected events."}
{"_id": "288366387becf5a15eda72c94a18e5b4f179578f", "title": "Persistence and periodicity in a dynamic proximity network", "text": "The topology of social networks can be understood as being inherently dynamic, with edges having a distinct position in time. Most characterizations of dynamic networks discretize time by converting temporal information into a sequence of network \u201csnapshots\u201d for further analysis. Here we study a highly resolved data set of a dynamic proximity network of 66 individuals. We show that the topology of this network evolves over a very broad distribution of time scales, that its behavior is characterized by strong periodicities driven by external calendar cycles, and that the conversion of inherently continuous-time data into a sequence of snapshots can produce highly biased estimates of network structure. We suggest that dynamic social networks exhibit a natural time scale \u2206nat, and that the best conversion of such dynamic data to a discrete sequence of networks is done at this natural rate."}
{"_id": "a7b1a3901b9a55d042d7287dee32b5fcc6c4a97b", "title": "Metal-oxide-semiconductor field-effect transistor with a vacuum channel.", "text": "High-speed electronic devices rely on short carrier transport times, which are usually achieved by decreasing the channel length and/or increasing the carrier velocity. Ideally, the carriers enter into a ballistic transport regime in which they are not scattered. However, it is difficult to achieve ballistic transport in a solid-state medium because the high electric fields used to increase the carrier velocity also increase scattering. Vacuum is an ideal medium for ballistic transport, but vacuum electronic devices commonly suffer from low emission currents and high operating voltages. Here, we report the fabrication of a low-voltage field-effect transistor with a vertical vacuum channel (channel length of ~20 nm) etched into a metal-oxide-semiconductor substrate. We measure a transconductance of 20 nS \u00b5m(-1), an on/off ratio of 500 and a turn-on gate voltage of 0.5 V under ambient conditions. Coulombic repulsion in the two-dimensional electron system at the interface between the oxide and the metal or the semiconductor reduces the energy barrier to electron emission, leading to a high emission current density (~1 \u00d7 10(5) A cm(-2)) under a bias of only 1 V. The emission of two-dimensional electron systems into vacuum channels could enable a new class of low-power, high-speed transistors."}
{"_id": "97a3f0901e715c12b39555194cad91818f22aa8a", "title": "Privacy attitudes and privacy behaviour: A review of current research on the privacy paradox phenomenon", "text": "Do people really care about their privacy? Surveys show that privacy is a primary concern for citizens in the digital age. On the other hand, individuals reveal personal information for relatively small rewards, often just for drawing the attention of peers in an online social network. This inconsistency of privacy attitudes and privacy behavior is often referred to as the \u201cprivacy paradox\u201d. In this paper, we present the results of a review of research literature on the privacy paradox. We analyze studies that provide evidence of a paradoxical dichotomy between attitudes and behavior and studies that challenge the existence of such a phenomenon. The diverse research results are explained by the diversity in research methods, the different contexts and the different conceptualizations of the privacy paradox. We also present several interpretations of the privacy paradox, stemming from social theory, psychology, behavioral economics and, in one case, from quantum theory. We conclude that current research has improved our understanding of the privacy paradox phenomenon. It is, however, a complex phenomenon that requires extensive further research. Thus, we call for synthetic studies to be based on comprehensive theoretical models that take into account the diversity of personal information and the diversity of privacy concerns. We suggest that future studies should use evidence of actual behavior rather than self-reported behavior."}
{"_id": "3007a8f5416404432166ff3f0158356624d282a1", "title": "GraphBuilder: scalable graph ETL framework", "text": "Graph abstraction is essential for many applications from finding a shortest path to executing complex machine learning (ML) algorithms like collaborative filtering. Graph construction from raw data for various applications is becoming challenging, due to exponential growth in data, as well as the need for large scale graph processing. Since graph construction is a data-parallel problem, MapReduce is well-suited for this task. We developed GraphBuilder, a scalable framework for graph Extract-Transform-Load (ETL), to offload many of the complexities of graph construction, including graph formation, tabulation, transformation, partitioning, output formatting, and serialization. GraphBuilder is written in Java, for ease of programming, and it scales using the MapReduce model. In this paper, we describe the motivation for GraphBuilder, its architecture, MapReduce algorithms, and performance evaluation of the framework. Since large graphs should be partitioned over a cluster for storing and processing and partitioning methods have significant performance impacts, we develop several graph partitioning methods and evaluate their performance. We also open source the framework at https://01.org/graphbuilder/."}
{"_id": "74b5640a6611a96cfa469cbd01fd7ea250b7eaed", "title": "Convolutional neural networks for SAR image segmentation", "text": "Segmentation of Synthetic Aperture Radar (SAR) images has several uses, but it is a difficult task due to a number of properties related to SAR images. In this article we show how Convolutional Neural Networks (CNNs) can easily be trained for SAR image segmentation with good results. Besides this contribution we also suggest a new way to do pixel wise annotation of SAR images that replaces a human expert manual segmentation process, which is both slow and troublesome. Our method for annotation relies on 3D CAD models of objects and scene, and converts these to labels for all pixels in a SAR image. Our algorithms are evaluated on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset which was released by the Defence Advanced Research Projects Agency during the 1990s. The method is not restricted to the type of targets imaged in MSTAR but can easily be extended to any SAR data where prior information about scene geometries can be estimated."}
{"_id": "20eb9c6085fc129b4f7348809599542e324c5d23", "title": "Analysis and implementation of software rejuvenation in cluster systems", "text": "Several recent studies have reported the phenomenon of \"software aging\", one in which the state of a software system degrades with time. This may eventually lead to performance degradation of the software or crash/hang failure or both. \"Software rejuvenation\" is a pro-active technique aimed to prevent unexpected or unplanned outages due to aging. The basic idea is to stop the running software, clean its internal state and restart it. In this paper, we discuss software rejuvenation as applied to cluster systems. This is both an innovative and an efficient way to improve cluster system availability and productivity. Using Stochastic Reward Nets (SRNs), we model and analyze cluster systems which employ software rejuvenation. For our proposed time-based rejuvenation policy, we determine the optimal rejuvenation interval based on system availability and cost. We also introduce a new rejuvenation policy based on prediction and show that it can dramatically increase system availability and reduce downtime cost. These models are very general and can capture a multitude of cluster system characteristics, failure behavior and performability measures, which we are just beginning to explore. We then briefly describe an implementation of a software rejuvenation system that performs periodic and predictive rejuvenation, and show some empirical data from systems that exhibit aging"}
{"_id": "2e526c2fac79c080b818b304485ddf84d09cf08b", "title": "Predicting Rare Events In Temporal Domains", "text": "Temporal data mining aims at finding patterns in historical data. Our work proposes an approach to extract temporal patterns from data to predict the occurrence of target events, such as computer attacks on host networks, or fraudulent transactions in financial institutions. Our problem formulation exhibits two major challenges: 1) we assume events being characterized by categorical features and displaying uneven inter-arrival times; such an assumption falls outside the scope of classical time-series analysis, 2) we assume target events are highly infrequent; predictive techniques must deal with the class-imbalance problem. We propose an efficient algorithm that tackles the challenges above by transforming the event prediction problem into a search for all frequent eventsets preceding target events. The class imbalance problem is overcome by a search for patterns on the minority class exclusively; the discrimination power of patterns is then validated against other classes. Patterns are then combined into a rule-based model for prediction. Our experimental analysis indicates the types of event sequences where target events can be accurately predicted."}
{"_id": "95a357bf35e78cc2efa14b2d03913616700174b4", "title": "The impact of technology scaling on lifetime reliability", "text": "The relentless scaling of CMOS technology has provided a steady increase in processor performance for the past three decades. However, increased power densities (hence temperatures) and other scaling effects have an adverse impact on long-term processor lifetime reliability. This paper represents a first attempt at quantifying the impact of scaling on lifetime reliability due to intrinsic hard errors, taking workload characteristics into consideration. For our quantitative evaluation, we use RAMP (Srinivasan et al., 2004), a previously proposed industrial-strength model that provides reliability estimates for a workload, but for a given technology. We extend RAMP by adding scaling specific parameters to enable workload-dependent lifetime reliability evaluation at different technologies. We show that (1) scaling has a significant impact on processor hard failure rates - on average, with SPEC benchmarks, we find the failure rate of a scaled 65nm processor to be 316% higher than a similarly pipelined 180nm processor; (2) time-dependent dielectric breakdown and electromigration have the largest increases; and (3) with scaling, the difference in reliability from running at worst-case vs. typical workload operating conditions increases significantly, as does the difference from running different workloads. Our results imply that leveraging a single microarchitecture design for multiple remaps across a few technology generations will become increasingly difficult, and motivate a need for workload specific, microarchitectural lifetime reliability awareness at an early design stage."}
{"_id": "aff00508b54357ee5cf766401c8677773c833dba", "title": "Temperature-Aware Microarchitecture", "text": "With power density and hence cooling costs rising exponentially, processor packaging can no longer be designed for the worst case, and there is an urgent need for runtime processor-level techniques that can regulate operating temperature when the package's capacity is exceeded. Evaluating such techniques, however, requires a thermal model that is practical for architectural studies.This paper describes HotSpot, an accurate yet fast model based on an equivalent circuit of thermal resistances and capacitances that correspond to microarchitecture blocks and essential aspects of the thermal package. Validation was performed using finite-element simulation. The paper also introduces several effective methods for dynamic thermal management (DTM): \"temperature-tracking\" frequency scaling, localized toggling, and migrating computation to spare hardware units. Modeling temperature at the microarchitecture level also shows that power metrics are poor predictors of temperature, and that sensor imprecision has a substantial impact on the performance of DTM."}
{"_id": "2e1dab46b0547f4a08adf8d4dfffc9e8cd6b0054", "title": "CMAR: Accurate and Efficient Classification Based on Multiple Class-Association Rules", "text": "Previous studies propose that associative classification has high classification accuracy and strong flexibility at handling unstructured data. However, it still suffers from the huge set of mined rules and sometimes biased classification or overfitting since the classification is based on only single high-confidence rule. In this study, we propose a new associative classification method, CMAR, i.e., Classification based on Multiple Association Rules. The method extends an efficient frequent pattern mining method, FP-growth, constructs a class distribution-associated FP-tree, and mines large database efficiently. Moreover, it applies a CR-tree structure to store and retrieve mined association rules efficiently, and prunes rules effectively based on confidence, correlation and database coverage. The classification is performed based on a weighted analysis using multiple strong association rules. Our extensive experiments on databases from UCI machine learning database repository show that CMAR is consistent, highly effective at classification of various kinds of databases and has better average classification accuracy in comparison with CBA and C4.5. Moreover, our performance study shows that the method is highly efficient and scalable in comparison with other reported associative classification methods."}
{"_id": "3970adba1270ab5aa9bd5e0cc2a46e870f369956", "title": "ApLeaf: An efficient android-based plant leaf identification system", "text": "To automatically identify plant species is very useful for ecologists, amateur botanists, educators, and so on. The Leafsnap is the first successful mobile application system which tackles this problem. However, the Leafsnap is based on the IOS platform. And to the best of our knowledge, as the mobile operation system, the Android is more popular than the IOS. In this paper, an Android-based mobile application designed to automatically identify plant species according to the photographs of tree leaves is described. In this application, one leaf image can be either a digital image from one existing leaf image database or a picture collected by a camera. The picture should be a single leaf placed on a light and untextured background without other clutter. The identification process consists of three steps: leaf image segmentation, feature extraction, and species identification. The demo system is evaluated on the ImageCLEF2012 Plant Identification database which contains 126 tree species from the French Mediterranean area. The outputs of the system to users are the top several species which match the query leaf image the best, as well as the textual descriptions and additional images about plant leaves, flowers, etc. Our system works well with state-of-the-art identification performance."}
{"_id": "307e641b7e8713b5a37ece0b44809cd6dacc418b", "title": "Usability evaluation considered harmful (some of the time)", "text": "Current practice in Human Computer Interaction as encouraged by educational institutes, academic review processes, and institutions with usability groups advocate usability evaluation as a critical part of every design process. This is for good reason: usability evaluation has a significant role to play when conditions warrant it. Yet evaluation can be ineffective and even harmful if naively done 'by rule' rather than 'by thought'. If done during early stage design, it can mute creative ideas that do not conform to current interface norms. If done to test radical innovations, the many interface issues that would likely arise from an immature technology can quash what could have been an inspired vision. If done to validate an academic prototype, it may incorrectly suggest a design's scientific worthiness rather than offer a meaningful critique of how it would be adopted and used in everyday practice. If done without regard to how cultures adopt technology over time, then today's reluctant reactions by users will forestall tomorrow's eager acceptance. The choice of evaluation methodology - if any - must arise from and be appropriate for the actual problem or research question under consideration."}
{"_id": "507844d654911a6bbbe9690e5396c91d07c39b3f", "title": "Competing for Attention in Social Media under Information Overload Conditions", "text": "Modern social media are becoming overloaded with information because of the rapidly-expanding number of information feeds. We analyze the user-generated content in Sina Weibo, and find evidence that the spread of popular messages often follow a mechanism that differs from the spread of disease, in contrast to common belief. In this mechanism, an individual with more friends needs more repeated exposures to spread further the information. Moreover, our data suggest that for certain messages the chance of an individual to share the message is proportional to the fraction of its neighbours who shared it with him/her, which is a result of competition for attention. We model this process using a fractional susceptible infected recovered (FSIR) model, where the infection probability of a node is proportional to its fraction of infected neighbors. Our findings have dramatic implications for information contagion. For example, using the FSIR model we find that real-world social networks have a finite epidemic threshold in contrast to the zero threshold in disease epidemic models. This means that when individuals are overloaded with excess information feeds, the information either reaches out the population if it is above the critical epidemic threshold, or it would never be well received."}
{"_id": "3fe06b70c2ce351162da36ee7cac060279e9c23f", "title": "An adaptive FEC algorithm using hidden Markov chains", "text": "Anumber of performance issues must be addressed in order to transmit continuous media stream over the Internet with acceptable quality [9, 6] . These include reducing jitter and recovering from packet losses mainly due to congestion in the routers . Several methods can be used to deal with the loss of a packet such as : retransmission schemes, error concealment, error resilience, interleaving, and FEC (forward error correction) . Each has its advantages/disadvantages depending on the network scenario and conditions (RTT, propagation delay, topology, channel error rate, congestion in the sender/receiver path, etc.) . In this work we focus on FEC techniques. FEC works by adding redundant information to the stream being transmitted in order to reconstruct the original stream at the receiver when packet losses occur. Among its advantages are the small delays introduced to recover information compared to retransmission, for instance, and simplicity of implementation . A clear disadvantage is the increase in the transmission rate due to the redundancy added. One of the simplest approaches among FEC techniques that have been proposed in the literature [11] is to add a packet to each group of N 1 packets with a payload equal to the result of an XOR operation performed in the group. Clearly, if one packet out of the N is lost, the stream is completely recovered . This is accomplished at the expense ofextra bandwidth, in this example an increase of 1/(N 1) . In [4] a clever FEC scheme was proposed in which each packet carries a sample of the previous transmitted packet but compression is used to reduce the overhead . In [7] another technique was developed aimed at maintaining a good compromise between recovery efficiency and bandwidth usage. Since packet losses are mostly due to congestion, increasing the amount of protection using FEC may be unfair to flow controlled streams and also may even have an adverse effect on the results [1] . Therefore, the FEC algorithm should be carefully chosen according to the error characteristics ofthe path between the sender and receiver. Another issue is to develop accurate models ofthe loss process . Many studies exist to characterize the loss process [2, 14, 8] . Simple models such as the Bernoulli process, the Gilbert model and sophisticated Markov models have been proposed. Using either the Bernoulli or the Gilbert process, the work of[3] propose an adaptive algorithm based on the FECscheme of[4] . Recently, the work of Salamatian and Vaton [13] has shown that Hidden"}
{"_id": "899545faba813316feb194438397eb0530f31b6c", "title": "Brain\u2013machine interface via real-time fMRI: Preliminary study on thought-controlled robotic arm", "text": "Real-time functional MRI (rtfMRI) has been used as a basis for brain-computer interface (BCI) due to its ability to characterize region-specific brain activity in real-time. As an extension of BCI, we present an rtfMRI-based brain-machine interface (BMI) whereby 2-dimensional movement of a robotic arm was controlled by the regulation (and concurrent detection) of regional cortical activations in the primary motor areas. To do so, the subjects were engaged in the right- and/or left-hand motor imagery tasks. The blood oxygenation level dependent (BOLD) signal originating from the corresponding hand motor areas was then translated into horizontal or vertical robotic arm movement. The movement was broadcasted visually back to the subject as a feedback. We demonstrated that real-time control of the robotic arm only through the subjects' thought processes was possible using the rtfMRI-based BMI trials."}
{"_id": "73e936f8d4f4e6fc762c368d53b74fa5043eed2b", "title": "Similarity-Based Approach for Positive and Unlabeled Learning", "text": "Positive and unlabelled learning (PU learning) has been investigated to deal with the situation where only the positive examples and the unlabelled examples are available. Most of the previous works focus on identifying some negative examples from the unlabelled data, so that the supervised learning methods can be applied to build a classifier. However, for the remaining unlabelled data, which can not be explicitly identified as positive or negative (we call them ambiguous examples), they either exclude them from the training phase or simply enforce them to either class. Consequently, their performance may be constrained. This paper proposes a novel approach, called similarity-based PU learning (SPUL) method, by associating the ambiguous examples with two similarity weights, which indicate the similarity of an ambiguous example towards the positive class and the negative class, respectively. The local similarity-based and global similarity-based mechanisms are proposed to generate the similarity weights. The ambiguous examples and their similarity-weights are thereafter incorporated into an SVM-based learning phase to build a more accurate classifier. Extensive experiments on real-world datasets have shown that SPUL outperforms state-of-the-art PU learning methods."}
{"_id": "a046d7034c48e9a8c0efa69cbbf28c16d22fe273", "title": "Systematic Variation of Prosthetic Foot Spring Affects Center-of-Mass Mechanics and Metabolic Cost During Walking", "text": "Lower-limb amputees expend more energy to walk than non-amputees and have an elevated risk of secondary disabilities. Insufficient push-off by the prosthetic foot may be a contributing factor. We aimed to systematically study the effect of prosthetic foot mechanics on gait, to gain insight into fundamental prosthetic design principles. We varied a single parameter in isolation, the energy-storing spring in a prototype prosthetic foot, the controlled energy storage and return (CESR) foot, and observed the effect on gait. Subjects walked on the CESR foot with three different springs. We performed parallel studies on amputees and on non-amputees wearing prosthetic simulators. In both groups, spring characteristics similarly affected ankle and body center-of-mass (COM) mechanics and metabolic cost. Softer springs led to greater energy storage, energy return, and prosthetic limb COM push-off work. But metabolic energy expenditure was lowest with a spring of intermediate stiffness, suggesting biomechanical disadvantages to the softest spring despite its greater push-off. Disadvantages of the softest spring may include excessive heel displacements and COM collision losses. We also observed some differences in joint kinetics between amputees and non-amputees walking on the prototype foot. During prosthetic push-off, amputees exhibited reduced energy transfer from the prosthesis to the COM along with increased hip work, perhaps due to greater energy dissipation at the knee. Nevertheless, the results indicate that spring compliance can contribute to push-off, but with biomechanical trade-offs that limit the degree to which greater push-off might improve walking economy."}
{"_id": "0a54d2f49bda694071bbf43d8e653f5adf85be19", "title": "Meta-Learning in Distributed Data Mining Systems: Issues and Approaches", "text": "Data mining systems aim to discover patterns and extract useful information from facts recorded in databases. A widely adopted approach to this objective is to apply various machine learning algorithms to compute descriptive models of the available data. Here, we explore one of the main challenges in this research area, the development of techniques that scale up to large and possibly physically distributed databases. Meta-learning is a technique that seeks to compute higher-level classifiers (or classification models), called meta-classifiers, that integrate in some p rincipled fashion multiple classifiers computed separately over different databas es. This study, describes meta-learning and presents the JAM system (Java Ag ents for Meta-learning), an agent-based meta-learning system for large-scale data mining applications. Specifically, it identifies and addresses several impor tant desiderata for distributed data mining systems that stem from thei r additional complexity compared to centralized or host-based systems. Distributed systems may need to deal with heterogenous platforms, with multiple databases and (possibly) different schemas, with the design and imp lementation of scalable and effective protocols for communicating among the data sites, and the selective and efficient use of the information that is gat hered from other peer data sites. Other important problems, intrinsic wi thin \u2217Supported in part by an IBM fellowship. data mining systems that must not be ignored, include, first, the abil ity to take advantage of newly acquired information that was not previously av ailable when models were computed and combine it with existing models, and second, the flexibility to incorporate new machine learning methods and d ta mining technologies. We explore these issues within the context of JAM and evaluate various proposed solutions through extensive empirical st udie ."}
{"_id": "0390337f0a3315051ed685e943c2fe676b332357", "title": "Observations on periorbital and midface aging.", "text": "BACKGROUND\nMany of the anatomical changes of facial aging are still poorly understood. This study looked at the aging process in individuals linearly over time, focusing on aspects of periorbital aging and the upper midface.\n\n\nMETHODS\nThe author compared photographs of patients' friends and relatives taken 10 to 50 years before with closely matched recent follow-up pictures. The best-matching old and recent pictures were equally sized and superimposed in the computer. The images were then assembled into GIF animations, which automate the fading of one image into the other and back again indefinitely.\n\n\nRESULTS\nThe following findings were new to the author: (1) the border of the pigmented lid skin and thicker cheek skin (the lid-cheek junction) is remarkably stable in position over time, becoming more visible by contrast, not by vertical descent as is commonly assumed. (2) Orbicularis wrinkles on the cheek and moles and other markers on the upper midface were also stable over decades. (3) With aging, there can be a distinct change in the shape of the upper eyelid. The young upper lid frequently has a medially biased peak. The upper lid peak becomes more central in the older lid. This article addresses these three issues. No evidence was seen here for descent of the globe in the orbit.\n\n\nCONCLUSIONS\nThere seems to be very little ptosis (inferior descent) of the lid-cheek junction or of the upper midface. These findings suggest that vertical descent of skin, and by association, subcutaneous tissue, is not necessarily a major component of aging in those areas. In addition, the arc of the upper lid changes shape in a characteristic way in some patients. Other known changes of the periorbital area are visualized."}
{"_id": "9185718ea951f6b6c9f15a64a01225109fcf3da4", "title": "Performance of elasticsearch in cloud environment with nGram and non-nGram indexing", "text": "The fact that technology have changed the lives of human beings cannot be denied. It has drastically reduced the effort needed to perform a particular task and has increased the productivity and efficiency. Computers especially have been playing a very important role in almost all fields in today's world. They are used to store large amount of data in almost all sectors, be it business and industrial sectors, personal lives or any other. The research areas of science and technology uses computers to solve complex and critical problems. Information is the most important requirement of each individual. In this era of quick-growing and huge data, it has become increasingly illogical to analyse it with the help of traditional techniques or relational databases. New big data instruments, architectures and designs have come into existence to give better support to the requirements of organizations/institutions in analysing large data. Specifically, Elasticsearch, a full-text java based search engine, designed keeping cloud environment in mind solves issues of scalability, search in real time, and efficiency that relational databases were not able to address. In this paper, we present our own experience with Elasticsearch an open source, Apache Lucene based, full-text search engine that provides near real-time search ability, as well as a RESTful API for the ease of user in the field of research."}
{"_id": "901528adf0747612c559274100581bc305e55faa", "title": "Development of the radiographic union score for tibial fractures for the assessment of tibial fracture healing after intramedullary fixation.", "text": "BACKGROUND\n: The objective was to evaluate the newly developed Radiographic Union Score for Tibial fractures (RUST). Because there is no \"gold standard,\" it was hypothesized that the RUST score would provide substantial improvements compared with previous scores presented in the literature.\n\n\nMETHODS\n: Forty-five sets of X-rays of tibial shaft fractures treated with intramedullary fixation were selected. Seven orthopedic reviewers independently scored bony union using RUST. Radiographs were reassessed at 9 weeks. Intraclass correlation coefficients (ICC) with 95% confidence intervals (CI) measured agreement.\n\n\nRESULTS\n: Overall agreement was substantial (ICC, 0.86; 95% CI, 0.79-0.91). There was improved reliability among traumatologists compared with others (ICC = 0.86, 0.81, and 0.83, respectively). Overall intraobserver reliability was also substantial (ICC, 0.88; 95% CI, 0.80-0.96).\n\n\nCONCLUSIONS\n: The RUST score exhibits substantial improvements in reliability from previously published scores and produces equally reproducible results among a variety of orthopedic specialties and experience levels. Because no \"gold standards\" currently exist against which RUST can be compared, this study provides only the initial step in the score's full validation for use in a clinical context."}
{"_id": "064d2ddd75aa931899136173c8364ced0c5cbea3", "title": "Comparison of Measurement-Based Call Admission Control Algorithms for Controlled-Load Service", "text": "We compare the performance of four admission control algorithms\u2014one parameter-based and three measurementbased\u2014for controlled-load service. The parameter-based admission control ensures that the sum of reserved resources is bounded by capacity. The three measurementbased algorithms are based on measured bandwidth, acceptance region [9], and equivalent bandwidth [7]. We use simulationon several network scenarios to evaluate the link utilization and adherence to service commitment achieved by these four algorithms."}
{"_id": "2ae2684f120dab4c319e30d33b33e7adf384810a", "title": "ProteusTM: Abstraction Meets Performance in Transactional Memory", "text": "The Transactional Memory (TM) paradigm promises to greatly simplify the development of concurrent applications. This led, over the years, to the creation of a plethora of TM implementations delivering wide ranges of performance across workloads. Yet, no universal implementation fits each and every workload. In fact, the best TM in a given workload can reveal to be disastrous for another one. This forces developers to face the complex task of tuning TM implementations, which significantly hampers their wide adoption. In this paper, we address the challenge of automatically identifying the best TM implementation for a given workload. Our proposed system, ProteusTM, hides behind the TM interface a large library of implementations. Underneath, it leverages a novel multi-dimensional online optimization scheme, combining two popular learning techniques: Collaborative Filtering and Bayesian Optimization.\n We integrated ProteusTM in GCC and demonstrate its ability to switch between TMs and adapt several configuration parameters (e.g., number of threads). We extensively evaluated ProteusTM, obtaining average performance <3% from optimal, and gains up to 100x over static alternatives."}
{"_id": "c361896da526a9a65e23127187dee10c391637a1", "title": "Upper limb exoskeleton control based on sliding mode control and feedback linearization", "text": "Exoskeletons have potential to improve human quality of life by relieving loads on the human musculoskeletal system or by helping motor rehabilitation. Controllers that were initially developed for industrial applications are also applied to control an exoskeleton, despite of major differences in requirements. Nevertheless, which controller has better performance for this specific application remains an open question. This paper presents a comparison between sliding mode control and feedback linearization control. The implementation of the sliding mode controller assumes a complete measurement of the biological system's dynamics. On the other hand, the feedback linearization method does not need any information from the biological system. Our results indicate that a model combining biological control and dynamics could improve several characteristics of robotic exoskeleton, such as energy consumption."}
{"_id": "22b6bd1980ea40d0f095a151101ea880fae33049", "title": "Part 5: Machine Translation Evaluation Chapter 5.1 Introduction", "text": "The evaluation of machine translation (MT) systems is a vital field of research, both for determining the effectiveness of existing MT systems and for optimizing the performance of MT systems. This part describes a range of different evaluation approaches used in the GALE community and introduces evaluation protocols and methodologies used in the program. We discuss the development and use of automatic, human, task-based and semi-automatic (human-in-the-loop) methods of evaluating machine translation, focusing on the use of a human-mediated translation error rate HTER as the evaluation standard used in GALE. We discuss the workflow associated with the use of this measure, including post editing, quality control, and scoring. We document the evaluation tasks, data, protocols, and results of recent GALE MT Evaluations. In addition, we present a range of different approaches for optimizing MT systems on the basis of different measures. We outline the requirements and specific problems when using different optimization approaches and describe how the characteristics of different MT metrics affect the optimization. Finally, we describe novel recent and ongoing work on the development of fully automatic MT evaluation metrics that can have the potential to substantially improve the effectiveness of evaluation and optimization of MT systems. Progress in the field of machine translation relies on assessing the quality of a new system through systematic evaluation, such that the new system can be shown to perform better than pre-existing systems. The difficulty arises in the definition of a better system. When assessing the quality of a translation, there is no single correct answer; rather, there may be any number of possible correct translations. In addition, when two translations are only partially correct but in different ways it is difficult to distinguish quality. Moreover, quality assessments may be dependent on the intended use for the translation, e.g., the tone of a translation may be crucial in some applications, but irrelevant in other applications. Traditionally, there are two paradigms of machine translation evaluation: (1) Glass Box evaluation, which measures the quality of a system based upon internal system properties, and (2) Black Box evaluation, which measures the quality of a system based solely upon its output, without respect to the internal mechanisms of the translation system. Glass Box evaluation focuses upon an examination of the system's linguistic"}
{"_id": "da2802b3af2458adfd2246202e3f4a4ca5afe7a0", "title": "A Model-Based Method for Computer-Aided Medical Decision-Making", "text": "While MYCIN and PIP were under development at Stanford and Tufts! M.I. T., a group of computer scientists at Rutgers University was developing a system to aid in the evaluation and treatment of patients with glaucoma. The group was led by Professor Casimir K ulikou1ski, a researcher with extensive background in mathematical and pattern-recognition approaches to computer-based medical decision making (Nordyke et al., 1971), working within the Rutgers Research Resource on Computers in Biomedicine headed by Professor Saul Amarel. Working collaboratively with Dr. Arin Safir, Professor of Ophthalmology, \"who was then based at the Mt. Sinai School of Medicine in New York City, Kulikowski and Sholom Weiss (a graduate student at Rutgers who \"went on to become a research scientist there) developed a method of computer-assisted medical decision making that was based on causal-associational network (CASNET) models of disease. Although the work was inspired by the glaucoma domain, the approach had general features that were later refined in the development of the EXPERT system-building tool (see Chapters 18 and 20). A CASNET model consists of three rnain components: observations of a patient, pathophysiological states, and disease classifications. As observations are recorded, they are associated with the appropriate intennediate states. These states, in turn, are typically causally related, thereby forming a network that summarizes the mechanisms of disease. It is these patterns of states in the network that are linked to individual disease classes. Strat-"}
{"_id": "075077bcf5a33838ab4f980f141eb4771473fd54", "title": "Storage and Querying of E-Commerce Data", "text": "New generation of e-commerce applications require data sch emas that are constantly evolving and sparsely populated. The co nventional horizontal row representation fails to meet these re quir ments. We represent objects in a vertical format storing an o bject as a set of tuples. Each tuple consists of an object identifier and attribute name-value pair. Schema evolution is now easy. Ho wever, writing queries against this format becomes cumberso me. We create a logical horizontal view of the vertical representa tion and transform queries on this view to the vertical table. We pres ent alternative implementations and performance results that s ow the effectiveness of the vertical representation for sparse da ta. We also identify additional facilities needed in database systems to support these applications well."}
{"_id": "0324ed4e0a3a7eb2a3b32198996bc72ecdfde26b", "title": "Jitter and Phase Noise in Ring Oscillators", "text": "A companion analysis of clock jitter and phase noise of single-ended and differential ring oscillators is presented. The impulse sensitivity functions are used to derive expressions for the jitter and phase noise of ring oscillators. The effect of the number of stages, power dissipation, frequency of oscillation, and shortchannel effects on the jitter and phase noise of ring oscillators is analyzed. Jitter and phase noise due to substrate and supply noise is discussed, and the effect of symmetry on the upconversion of 1/f noise is demonstrated. Several new design insights are given for low jitter/phase-noise design. Good agreement between theory and measurements is observed."}
{"_id": "2ce9158c722551fa522aff365d4ebdcdc892116c", "title": "Infinite Edge Partition Models for Overlapping Community Detection and Link Prediction", "text": "A hierarchical gamma process infinite edge partition model is proposed to factorize the binary adjacency matrix of an unweighted undirected relational network under a Bernoulli-Poisson link. The model describes both homophily and stochastic equivalence, and is scalable to big sparse networks by focusing its computation on pairs of linked nodes. It can not only discover overlapping communities and inter-community interactions, but also predict missing edges. A simplified version omitting inter-community interactions is also provided and we reveal its interesting connections to existing models. The number of communities is automatically inferred in a nonparametric Bayesian manner, and efficient inference via Gibbs sampling is derived using novel data augmentation techniques. Experimental results on four real networks demonstrate the models\u2019 scalability and state-of-the-art performance."}
{"_id": "8e4caf932910122ba7618d64db3b4a3bad0a1514", "title": "GCap: Graph-based Automatic Image Captioning", "text": "Given an image, how do we automatically assign keywords to it? In this paper, we propose a novel, graph-based approach (GCap) which outperforms previously reported methods for automatic image captioning. Moreover, it is fast and scales well, with its training and testing time linear to the data set size. We report auto-captioning experiments on the \"standard\" Corel image database of 680 MBytes, where GCap outperforms recent, successful auto-captioning methods by up to 10 percentage points in captioning accuracy (50% relative improvement)."}
{"_id": "24fe430b21931c00f310ed05b3b9fbff02aea1a7", "title": "A cortical motor nucleus drives the basal ganglia-recipient thalamus in singing birds", "text": "The pallido-recipient thalamus transmits information from the basal ganglia to the cortex and is critical for motor initiation and learning. Thalamic activity is strongly inhibited by pallidal inputs from the basal ganglia, but the role of nonpallidal inputs, such as excitatory inputs from cortex, remains unclear. We simultaneously recorded from presynaptic pallidal axon terminals and postsynaptic thalamocortical neurons in a basal ganglia\u2013recipient thalamic nucleus that is necessary for vocal variability and learning in zebra finches. We found that song-locked rate modulations in the thalamus could not be explained by pallidal inputs alone and persisted following pallidal lesion. Instead, thalamic activity was likely driven by inputs from a motor cortical nucleus that is also necessary for singing. These findings suggest a role for cortical inputs to the pallido-recipient thalamus in driving premotor signals that are important for exploratory behavior and learning."}
{"_id": "9222c69ca851b26e8338b0082dfafbc663d1be50", "title": "A Graph Traversal Based Approach to Answer Non-Aggregation Questions Over DBpedia", "text": "We present a question answering system over DBpedia, filling the gap between user information needs expressed in natural language and a structured query interface expressed in SPARQL over the underlying knowledge base (KB). Given the KB, our goal is to comprehend a natural language query and provide corresponding accurate answers. Focusing on solving the non-aggregation questions, in this paper, we construct a subgraph of the knowledge base from the detected entities and propose a graph traversal method to solve both the semantic item mapping problem and the disambiguation problem in a joint way. Compared with existing work, we simplify the process of query intention understanding and pay more attention to the answer path ranking. We evaluate our method on a non-aggregation question dataset and further on a complete dataset. Experimental results show that our method achieves best performance compared with several state-of-the-art systems."}
{"_id": "5a9a26fd264187428da6662392857a1d3e485175", "title": "Optimal Cell Load and Throughput in Green Small Cell Networks With Generalized Cell Association", "text": "This paper thoroughly explores the fundamental interactions between cell association, cell load, and throughput in a green (energy-efficient) small cell network in which all base stations form a homogeneous Poisson point process (PPP) of intensity \u03bbB and all users form another independent PPP of intensity \u03bb\u222a. Cell voidness, usually disregarded due to rarity in cellular network modeling, is first theoretically analyzed under generalized (channel-aware) cell association (GCA). We show that the void cell probability cannot be neglected any more since it is bounded above by exp(-\u03bb\u222a/\u03bbB) that is typically not small in a small cell network. The accurate expression of the void cell probability for GCA is characterized and it is used to derive the average cell and user throughputs. We learn that cell association and cell load \u03bb\u222a/\u03bbB significantly affect these two throughputs. According to the average cell and user throughputs, the green cell and user throughputs are defined respectively to reflect whether the energy of a base station is efficiently used to transmit information or not. In order to achieve satisfactory throughput with certain level of greenness, cell load should be properly determined. We present the theoretical solutions of the optimal cell loads that maximize the green cell and user throughputs, respectively, and verify their correctness by simulation."}
{"_id": "23e8236644775fd5d8ff5536ba06b960e19f904b", "title": "Control Flow Integrity for COTS Binaries", "text": "Control-Flow Integrity (CFI) has been recognized as an important low-level security property. Its enforcement can defeat most injected and existing code attacks, including those based on Return-Oriented Programming (ROP). Previous implementations of CFI have required compiler support or the presence of relocation or debug information in the binary. In contrast, we present a technique for applying CFI to stripped binaries on x86/Linux. Ours is the first work to apply CFI to complex shared libraries such as glibc. Through experimental evaluation, we demonstrate that our CFI implementation is effective against control-flow hijack attacks, and eliminates the vast majority of ROP gadgets. To achieve this result, we have developed robust techniques for disassembly, static analysis, and transformation of large binaries. Our techniques have been tested on over 300MB of binaries (executables and shared libraries)."}
{"_id": "7d5e165a55d62750e9ad69bb317c764a2e4e12fc", "title": "SOK: (State of) The Art of War: Offensive Techniques in Binary Analysis", "text": "Finding and exploiting vulnerabilities in binary code is a challenging task. The lack of high-level, semantically rich information about data structures and control constructs makes the analysis of program properties harder to scale. However, the importance of binary analysis is on the rise. In many situations binary analysis is the only possible way to prove (or disprove) properties about the code that is actually executed. In this paper, we present a binary analysis framework that implements a number of analysis techniques that have been proposed in the past. We present a systematized implementation of these techniques, which allows other researchers to compose them and develop new approaches. In addition, the implementation of these techniques in a unifying framework allows for the direct comparison of these apporaches and the identification of their advantages and disadvantages. The evaluation included in this paper is performed using a recent dataset created by DARPA for evaluating the effectiveness of binary vulnerability analysis techniques. Our framework has been open-sourced and is available to the security community."}
{"_id": "a2be3a50e1b31bd226941ce7ef70e15bf21b83a1", "title": "Binary code is not easy", "text": "Binary code analysis is an enabling technique for many applications. Modern compilers and run-time libraries have introduced significant complexities to binary code, which negatively affect the capabilities of binary analysis tool kits to analyze binary code, and may cause tools to report inaccurate information about binary code. Analysts may hence be confused and applications based on these tool kits may have degrading quality. We examine the problem of constructing control flow graphs from binary code and labeling the graphs with accurate function boundary annotations. We identified several challenging code constructs that represent hard-to-analyze aspects of binary code, and show code examples for each code construct. As part of this discussion, we present new code parsing algorithms in our open source Dyninst tool kit that support these constructs, including a new model for describing jump tables that improves our ability to precisely determine the control flow targets, a new interprocedural analysis to determine when a function is non-returning, and techniques for handling tail calls. We evaluated how various tool kits fare when handling these code constructs with real software as well as test binaries patterned after each challenging code construct we found in real software."}
{"_id": "b00672fc5ff99434bf5347418a2d2762a3bb2639", "title": "Firmalice - Automatic Detection of Authentication Bypass Vulnerabilities in Binary Firmware", "text": "Embedded devices have become ubiquitous, and they are used in a range of privacy-sensitive and security-critical applications. Most of these devices run proprietary software, and little documentation is available about the software\u2019s inner workings. In some cases, the cost of the hardware and protection mechanisms might make access to the devices themselves infeasible. Analyzing the software that is present in such environments is challenging, but necessary, if the risks associated with software bugs and vulnerabilities must be avoided. As a matter of fact, recent studies revealed the presence of backdoors in a number of embedded devices available on the market. In this paper, we present Firmalice, a binary analysis framework to support the analysis of firmware running on embedded devices. Firmalice builds on top of a symbolic execution engine, and techniques, such as program slicing, to increase its scalability. Furthermore, Firmalice utilizes a novel model of authentication bypass flaws, based on the attacker\u2019s ability to determine the required inputs to perform privileged operations. We evaluated Firmalice on the firmware of three commercially-available devices, and were able to detect authentication bypass backdoors in two of them. Additionally, Firmalice was able to determine that the backdoor in the third firmware sample was not exploitable by an attacker without knowledge of a set of unprivileged credentials."}
{"_id": "00e90d7acd9c0a8b0efdb68c38af1632d5e60beb", "title": "Who Wrote This Code? Identifying the Authors of Program Binaries", "text": "Program authorship attribution\u2014identifying a programmer based on stylistic characteristics of code\u2014has practical implications for detecting software theft, digital forensics, and malware analysis. Authorship attribution is challenging in these domains where usually only binary code is available; existing source code-based approaches to attribution have left unclear whether and to what extent programmer style survives the compilation process. Casting authorship attribution as a machine learning problem, we present a novel program representation and techniques that automatically detect the stylistic features of binary code. We apply these techniques to two attribution problems: identifying the precise author of a program, and finding stylistic similarities between programs by unknown authors. Our experiments provide strong evidence that programmer style is preserved in program binaries."}
{"_id": "3c125732664d6e5be3d21869b536066a7591b53d", "title": "Importance filtering for image retargeting", "text": "Content-aware image retargeting has attracted a lot of interests recently. The key and most challenging issue for this task is how to balance the tradeoff between preserving the important contents and minimizing the visual distortions on the consistency of the image structure. In this paper we present a novel filtering-based technique to tackle this issue, called \u201dimportance filtering\u201d. Specifically, we first filter the image saliency guided by the image itself to achieve a structure-consistent importance map. We then use the pixel importance as the key constraint to compute the gradient map of pixel shifts from the original resolution to the target. Finally, we integrate the shift gradient across the image using a weighted filter to construct a smooth shift map and render the target image. The weight is again controlled by the pixel importance. The two filtering processes enforce to maintain the structural consistency and yet preserve the important contents in the target image. Furthermore, the simple nature of filter operations allows highly efficient implementation for real-time applications and easy extension to video retargeting, as the structural constraints from the original image naturally convey the temporal coherence between frames. The effectiveness and efficiency of our importance filtering algorithm are confirmed in extensive experiments."}
{"_id": "ce24c942ccb56891a984c38e9bd0f7aa3e681512", "title": "PUBLIC-KEY STEGANOGRAPHY BASED ON MODIFIED LSB METHOD", "text": "Steganography is the art and science of invisible communication. It is a technique which keeps the existence of the message secret. This paper proposed a technique to implement steganogaraphy and cryptography together to hide the data into an image. This technique describes as: Find the shared stego-key between the two communication parties by applying Diffie-Hellman Key exchange protocol, then encrypt the data using secret stego-key and then select the pixels by encryption process with the help of same secret stegokey to hide the data. Each selected pixel will be used to hide 8 bits of data by using LSB method."}
{"_id": "16334a6d7cf985a2a57ac9889a5a3d4ef8d460b3", "title": "Bayesian Pot-Assembly from Fragments as Problems in Perceptual-Grouping and Geometric-Learning", "text": "A heretofore unsolved problem of great archaeological importance is the automatic assembly of pots made on a wheel from the hundreds (or thousands) of sherds found at an excavation site. An approach is presented to the automatic estimation of mathematical models of such pots from 3D measurements of sherds. A Bayesian approach is formulated beginning with a description of the complete set of geometric parameters that determine the distribution of the sherd measurement data. Matching of fragments and aligning them geometrically into configurations is based on matching break-curves (curves on a pot surface separating fragments), estimated axis and profile curve pairs for individual fragments and configurations of fragments, and a number of features of groups of break-curves. Pot assembly is a bottom-up maximum likelihood performance-based search. Experiments are illustrated on pots which were broken for the purpose, and on sherds from an archaeological dig located in Petra, Jordan. The performance measure can also be aposteriori probability, and many other types of information can be included, e.g., pot wall thickness, surface color, patterns on the surface, etc. This can also be viewed as the problem of learning a geometric object from an unorganized set of free-form fragments of the object and of clutter, or as a problem of perceptual grouping."}
{"_id": "ccfd4a2a22b6c38f8206d1021f9be31f8c33a430", "title": "Drawing lithography for microneedles: a review of fundamentals and biomedical applications.", "text": "A microneedle is a three-dimensional (3D) micromechanical structure and has been in the spotlight recently as a drug delivery system (DDS). Because a microneedle delivers the target drug after penetrating the skin barrier, the therapeutic effects of microneedles proceed from its 3D structural geometry. Various types of microneedles have been fabricated using subtractive micromanufacturing methods which are based on the inherently planar two-dimensional (2D) geometries. However, traditional subtractive processes are limited for flexible structural microneedles and makes functional biomedical applications for efficient drug delivery difficult. The authors of the present study propose drawing lithography as a unique additive process for the fabrication of a microneedle directly from 2D planar substrates, thus overcoming a subtractive process shortcoming. The present article provides the first overview of the principal drawing lithography technology: fundamentals and biomedical applications. The continuous drawing technique for an ultrahigh-aspect ratio (UHAR) hollow microneedle, stepwise controlled drawing technique for a dissolving microneedle, and drawing technique with antidromic isolation for a hybrid electro-microneedle (HEM) are reviewed, and efficient biomedical applications by drawing lithography-mediated microneedles as an innovative drug and gene delivery system are described. Drawing lithography herein can provide a great breakthrough in the development of materials science and biotechnology."}
{"_id": "698b7261b4bb888c907aff82293e27adf41507d2", "title": "Optimization of Cooperative Sensing in Cognitive Radio Networks: A Sensing-Throughput Tradeoff View", "text": "In cognitive radio networks, the performance of the spectrum sensing depends on the sensing time and the fusion scheme that are used when cooperative sensing is applied. In this paper, we consider the case where the secondary users cooperatively sense a channel using the k -out-of-N fusion rule to determine the presence of the primary user. A sensing-throughput tradeoff problem under a cooperative sensing scenario is formulated to find a pair of sensing time and k value that maximize the secondary users' throughput subject to sufficient protection that is provided to the primary user. An iterative algorithm is proposed to obtain the optimal values for these two parameters. Computer simulations show that significant improvement in the throughput of the secondary users is achieved when the parameters for the fusion scheme and the sensing time are jointly optimized."}
{"_id": "0d2c07f3373c2bd63b3f2d0ba98d332daff33dcb", "title": "Primary gonadal failure and precocious adrenarche in a boy with Prader-Labhart-Willi syndrome", "text": "A 7-year-old boy with Prader-Labhart-Willi syndrome who had precocious adrenarche was found to have primary gonadal failure, as evidenced by appropriate laboratory investigations: elevated basal levels of plasma FSH and LH with exaggerated responses to LH-RH stimulation and unresponsiveness of plasma testosterone to repeated hCG stimulations. The elevated values of plasma DHEA which were found indicate an early activation of the adrenal gland. This patient demonstrates the varibility of pubertal development in the Prader-Labhart-Willi syndrome, with the unusual association of primary gonadal failure and precocious adrenarche."}
{"_id": "322c063e97cd26f75191ae908f09a41c534eba90", "title": "Improving Image Classification Using Semantic Attributes", "text": "The Bag-of-Words (BoW) model\u2014commonly used for image classification\u2014has two strong limitations: on one hand, visual words are lacking of explicit meanings, on the other hand, they are usually polysemous. This paper proposes to address these two limitations by introducing an intermediate representation based on the use of semantic attributes. Specifically, two different approaches are proposed. Both approaches consist in predicting a set of semantic attributes for the entire images as well as for local image regions, and in using these predictions to build the intermediate level features. Experiments on four challenging image databases (PASCAL VOC 2007, Scene-15, MSRCv2 and SUN-397) show that both approaches improve performance of the BoW model significantly. Moreover, their combination achieves the state-of-the-art results on several of these image databases."}
{"_id": "9061fb46185cffde44168aff4eb17f25d520f93a", "title": "Deceptive Reviews : The Influential Tail", "text": "Research in the psycholinguistics literature has identified linguistic cues indicating when a message is more likely to be deceptive and we find that the textual comments in the reviews without prior transactions exhibit many of these characteristics. In contrast, these reviews are less likely to contain expressions describing the fit or feel of the items, which can generally only be assessed by physical inspection of the items."}
{"_id": "6949a33423051ce6fa5b08fb7d5f06ac9dcc721b", "title": "Process Mining and Fraud Detection", "text": "A case study on the theoretical and practical value of using process mining for the detection of fraudulent behavior in the procurement process Abstract This thesis presents the results of a six month research period on process mining and fraud detection. This thesis aimed to answer the research question as to how process mining can be utilized in fraud detection and what the benefits of using process mining for fraud detection are. Based on a literature study it provides a discussion of the theory and application of process mining and its various aspects and techniques. Using both a literature study and an interview with a domain expert, the concepts of fraud and fraud detection are discussed. These results are combined with an analysis of existing case studies on the application of process mining and fraud detection to construct an initial setup of two case studies, in which process mining is applied to detect possible fraudulent behavior in the procurement process. Based on the experiences and results of these case studies, the 1+5+1 methodology is presented as a first step towards operationalizing principles with advice on how process mining techniques can be used in practice when trying to detect fraud. This thesis presents three conclusions: (1) process mining is a valuable addition to fraud detection, (2) using the 1+5+1 concept it was possible to detect indicators of possibly fraudulent behavior (3) the practical use of process mining for fraud detection is diminished by the poor performance of the current tools. The techniques and tools that do not suffer from performance issues are an addition, rather than a replacement, to regular data analysis techniques by providing either new, quicker, or more easily obtainable insights into the process and possible fraudulent behavior. iii Occam's Razor: \" One should not increase, beyond what is necessary, the number of entities required to explain anything \" iv Contents"}
{"_id": "8aef832372c6e3e83f10532f94f18bd26324d4fd", "title": "Question Answering on Freebase via Relation Extraction and Textual Evidence", "text": "Existing knowledge-based question answering systems often rely on small annotated training data. While shallow methods like relation extraction are robust to data scarcity, they are less expressive than the deep meaning representation methods like semantic parsing, thereby failing at answering questions involving multiple constraints. Here we alleviate this problem by empowering a relation extraction method with additional evidence from Wikipedia. We first present a neural network based relation extractor to retrieve the candidate answers from Freebase, and then infer over Wikipedia to validate these answers. Experiments on the WebQuestions question answering dataset show that our method achieves an F1 of 53.3%, a substantial improvement over the state-of-the-art."}
{"_id": "f4fcc5125a4eb702cdff0af421e500433fbe9a16", "title": "Beyond Big and Little : The Four C Model of Creativity", "text": "Most investigations of creativity tend to take one of two directions: everyday creativity (also called \u201clittle-c\u201d), which can be found in nearly all people, and eminent creativity (also called \u201cBig-C\u201d), which is reserved for the great. In this paper, the authors propose a Four C model of creativity that expands this dichotomy. Specifically, the authors add the idea of \u201cmini-c,\u201d creativity inherent in the learning process, and Pro-c, the developmental and effortful progression beyond little-c that represents professional-level expertise in any creative area. The authors include different transitions and gradations of these four dimensions of creativity, and then discuss advantages and examples of the Four C Model."}
{"_id": "154901c49ce0e1640547c0193ba0dfcc06d90ec1", "title": "Comparing a computer agent with a humanoid robot", "text": "HRI researchers interested in social robots have made large investments in humanoid robots. There is still sparse evidence that peoples' responses to robots differ from their responses to computer agents, suggesting that agent studies might serve to test HRI hypotheses. To help us understand the difference between people's social interactions with an agent and a robot, we experimentally compared people's responses in a health interview with (a) a computer agent projected either on a computer monitor or life-size on a screen, (b) a remote robot projected life-size on a screen, or (c) a collocated robot in the same room. We found a few behavioral and large attitude differences across these conditions. Participants forgot more and disclosed least with the collocated robot, next with the projected remote robot, and then with the agent. They spent more time with the collocated robot and their attitudes were most positive toward that robot. We discuss tradeoffs for HRI research of using collocated robots, remote robots, and computer agents as proxies of robots."}
{"_id": "b4e1853acf75a91f64bd65de91ae05f4f7ef35a4", "title": "Why is image quality assessment so difficult?", "text": "Image quality assessment plays an important role in various image processing applications. A great deal of effort has been made in recent years to develop objective image quality metrics that correlate with perceived quality measurement. Unfortunately, only limited success has been achieved. In this paper, we provide some insights on why image quality assessment is so difficult by pointing out the weaknesses of the error sensitivity based framework, which has been used by most image quality assessment approaches in the literature. Furthermore, we propose a new philosophy in designing image quality metrics: The main function of the human eyes is to extract structural information from the viewing field, and the human visual system is highly adapted for this purpose. Therefore, a measurement of structural distortion should be a good approximation of perceived image distortion. Based on the new philosophy, we implemented a simple but effective image quality indexing algorithm, which is very promising as shown by our current results."}
{"_id": "f2be739f0805729f754edb5238fde49b37afb00c", "title": "Performance indicators for an objective measure of public transport service quality", "text": "The measurement of transit performance represents a very useful tool for ensuring continuous increase of the quality of the delivered transit services, and for allocating resources among competing transit agencies. Transit service quality can be evaluated by subjective measures based on passengers\u2019 perceptions, and objective measures represented by disaggregate performance measures expressed as numerical values, which must be compared with fixed standards or past performances. The proposed research work deals with service quality evaluation based on objective measures; specifically, an extensive overview and an interpretative review of the objective indicators until investigated by researchers are proposed. The final aim of the work is to give a review as comprehensive as possible of the objective indicators, and to provide some suggestions for the selection of the most appropriate indicators for evaluating a transit service aspect."}
{"_id": "9de64889ea7d467fdb3d3fa615d91b4d7ff2b068", "title": "Novel paradigms for advanced distribution grid energy management", "text": "The electricity distribution grid was not designed to cope with load dynamics imposed by high penetration of electric vehicles, neither to deal with the increasing deployment of distributed Renewable Energy Sources. Distribution System Operators (DSO) will increasingly rely on flexible Distributed Energy Resources (flexible loads, controllable generation and storage) to keep the grid stable and to ensure quality of supply. In order to properly integrate demand-side flexibility, DSOs need new energy management architectures, capable of fostering collaboration with wholesale market actors and prosumers. We propose the creation of Virtual Distribution Grids (VDG) over a common physical infrastructure, to cope with heterogeneity of resources and actors, and with the increasing complexity of distribution grid management and related resources allocation problems. Focusing on residential VDG, we propose an agent-based hierarchical architecture for providing Demand-Side Management services through a market-based approach, where households transact their surplus/lack of energy and their flexibility with neighbours, aggregators, utilities and DSOs. For implementing the overall solution, we consider fine-grained control of smart homes based on Internet of Things technology. Homes seamlessly transact self-enforcing smart contracts over a blockchainbased generic platform. Finally, we extend the architecture to solve existing problems on smart home control, beyond energy management."}
{"_id": "574ec01e67e69a072e1e7cf4d6138d43491170ea", "title": "Planar Differential Elliptical UWB Antenna Optimization", "text": "A recently proposed optimization procedure, based on the time domain characteristics of an antenna, is exploited to design a planar differential elliptical antenna for ultrawideband (UWB) applications. The optimization procedure aims at finding an antenna not only with low VSWR but also one exhibiting low-dispersion characteristics over the relevant frequency band. Furthermore, since in pulse communications systems the input signal is often of a given form, suited to a particular purpose, the optimization procedure also aims at finding the best antenna for the given input signal form. Specifically, the optimized antenna is designed for high temporal correlation between its electric field intensity signal and the given input signal. The optimization technique followed in this work makes use of genetic algorithm (GA) search concepts. The electromagnetic analysis of the antenna is done by means of a finite-difference time-domain method using the commercially available CST Microwave Studio software"}
{"_id": "befc714b8b0611d5582960ce3d8573db0cc35f52", "title": "Tracking-based deer vehicle collision detection using thermal imaging", "text": "Deer vehicle collision (DVC) is constantly a major safety issue for the driving on rural road. It is estimated that there are over 35,000 DVCs yearly in the US resulting in about 200 deaths and close to 4,000 reported property damages of one thousand dollars or more. This justifies many attempts trying to detect deer on road. However, very little success has been achieved. In order to reduce the number of DVCs, this work focused on the study of using an infrared thermal camera with tracking system to detect the presence of deer to avoid DVCs. The prototype consists of an infrared thermal temperature image grabbing and processing system, which includes an infrared thermal camera, a frame grabber, an image processing system and a motion tracking system, which includes two motors with their motion control system. By analyzing the infrared thermal images which are independent of visible light, the presence of an animal can be determined in either night or day time through pattern recognition and matching."}
{"_id": "442cf9b24661c9ea5c2a1dcabd4a5b8af1cd89da", "title": "Beyond One-hot Encoding: lower dimensional target embedding", "text": "Target encoding plays a central role when learning Convolutional Neural Networks. In this realm, One-hot encoding is the most prevalent strategy due to its simplicity. However, this so widespread encoding schema assumes a flat label space, thus ignoring rich relationships existing among labels that can be exploited during training. In large-scale datasets, data does not span the full label space, but instead lies in a low-dimensional output manifold. Following this observation, we embed the targets into a lowdimensional space, drastically improving convergence speed while preserving accuracy. Our contribution is two fold: (i) We show that random projections of the label space are a valid tool to find such lower dimensional embeddings, boosting dramatically convergence rates at zero computational cost; and (ii) we propose a normalized eigenrepresentation of the class manifold that encodes the targets with minimal information loss, improving the accuracy of random projections encoding while enjoying the same convergence rates. Experiments on CIFAR-100, CUB200-2011, Imagenet, and MIT Places demonstrate that the proposed approach drastically improves convergence speed while reaching very competitive accuracy rates."}
{"_id": "311e8b4881482e9d940f4f1929d78a2b4a4337f8", "title": "Simba: Efficient In-Memory Spatial Analytics", "text": "Large spatial data becomes ubiquitous. As a result, it is critical to provide fast, scalable, and high-throughput spatial queries and analytics for numerous applications in location-based services (LBS). Traditional spatial databases and spatial analytics systems are disk-based and optimized for IO efficiency. But increasingly, data are stored and processed in memory to achieve low latency, and CPU time becomes the new bottleneck. We present the Simba (Spatial In-Memory Big data Analytics) system that offers scalable and efficient in-memory spatial query processing and analytics for big spatial data. Simba is based on Spark and runs over a cluster of commodity machines. In particular, Simba extends the Spark SQL engine to support rich spatial queries and analytics through both SQL and the DataFrame API. It introduces indexes over RDDs in order to work with big spatial data and complex spatial operations. Lastly, Simba implements an effective query optimizer, which leverages its indexes and novel spatial-aware optimizations, to achieve both low latency and high throughput. Extensive experiments over large data sets demonstrate Simba's superior performance compared against other spatial analytics system."}
{"_id": "fe404310e7f485fd77367cb9c6c1cdc3884381d1", "title": "Smart fabrics and interactive textile enabling wearable personal applications: R&D state of the art and future challenges", "text": "Smart fabrics and interactive textiles (SFIT) are fibrous structures that are capable of sensing, actuating, generating/storing power and/or communicating. Research and development towards wearable textile-based personal systems allowing e.g. health monitoring, protection & safety, and healthy lifestyle gained strong interest during the last 10 years. Under the Information and Communication Programme of the European Commission, a cluster of R&D projects dealing with smart fabrics and interactive textile wearable systems regroup activities along two different and complementary approaches i.e. \u201capplication pull\u201d and \u201ctechnology push\u201d. This includes projects aiming at personal health management through integration, validation, and use of smart clothing and other networked mobile devices as well as projects targeting the full integration of sensors/actuators, energy sources, processing and communication within the clothes to enable personal applications such as protection/safety, emergency and healthcare. The integration part of the technologies into a real SFIT product is at present stage on the threshold of prototyping and testing. Several issues, technical as well user-centred, societal and business, remain to be solved. The paper presents on going major R&D activities, identifies gaps and discuss key challenges for the future."}
{"_id": "586d39f024325314528f1eaa64f4b2bbe2a1baa5", "title": "Multilingual speech-to-speech translation system for mobile consumer devices", "text": "Along with the advancement of speech recognition technology and machine translation technology in addition to the fast distribution of mobile devices, speech-to-speech translation technology no longer remains as a subject of research as it has become popularized throughout many users. In order to develop a speech-to-speech translation system that can be widely used by many users, however, the system needs to reflect various characteristics of utterances by the users who are actually to use the speech-to-speech translation system other than improving the basic functions under the experimental environment. This study has established a massive language and speech database closest to the environment where speech-to- speech translation device actually is being used after mobilizing plenty of people based on the survey on users' demands. Through this study, it was made possible to secure excellent basic performance under the environment similar to speech-to-speech translation environment, rather than just under the experimental environment. Moreover, with the speech-to-speech translation UI, a user-friendly UI has been designed; and at the same time, errors were reduced during the process of translation as many measures to enhance user satisfaction were employed. After implementing the actual services, the massive database collected through the service was additionally applied to the system following a filtering process in order to procure the best-possible robustness toward both the details and the environment of the users' utterances. By applying these measures, this study is to unveil the procedures where multi-language speech-to-speech translation system has been successfully developed for mobile devices."}
{"_id": "0058c41f797d48aa8544894b75a26a26602a8152", "title": "Missing value imputation for gene expression data: computational techniques to recover missing data from available information", "text": "Microarray gene expression data generally suffers from missing value problem due to a variety of experimental reasons. Since the missing data points can adversely affect downstream analysis, many algorithms have been proposed to impute missing values. In this survey, we provide a comprehensive review of existing missing value imputation algorithms, focusing on their underlying algorithmic techniques and how they utilize local or global information from within the data, or their use of domain knowledge during imputation. In addition, we describe how the imputation results can be validated and the different ways to assess the performance of different imputation algorithms, as well as a discussion on some possible future research directions. It is hoped that this review will give the readers a good understanding of the current development in this field and inspire them to come up with the next generation of imputation algorithms."}
{"_id": "94a6d653f4c54dbfb8f62a5bb54b4286c9affdb2", "title": "FML-based feature similarity assessment agent for Japanese/Taiwanese language learning", "text": "In this paper, we propose a fuzzy markup language (FML)-based feature similarity assessment agent with machine-learning ability to evaluate easy-to-learn degree of the Japanese and Taiwanese words. The involved domain experts define knowledge base (KB) and rule base (RB) of the proposed agent. The KB and RB are stored in the constructed ontology, including features of pronunciation similarity, writing similarity, and culture similarity. Next, we calculate feature similarity in pronunciation, writing, and culture for each word pair between Japanese and Taiwanese. Finally, we infer the easy-to-learn degree for one Japanese word and its corresponding Taiwanese one. Additionally, a genetic learning is also adopted to tune the KB and RB of the intelligent agent. The experimental results show that after-learning results perform better than before-learning ones."}
{"_id": "66b2d52519a7e6ecc19ac8a46edf6932aba14695", "title": "foo . castr : visualising the future AI workforce", "text": "*Correspondence: mmolinas@ic.ac.uk 1Data Science Institute, Imperial College London, London, UK Full list of author information is available at the end of the article Abstract Organization of companies and their HR departments are becoming hugely affected by recent advancements in computational power and Artificial Intelligence, with this trend likely to dramatically rise in the next few years. This work presents foo.castr, a tool we are developing to visualise, communicate and facilitate the understanding of the impact of these advancements in the future of workforce. It builds upon the idea that particular tasks within job descriptions will be progressively taken by computers, forcing the shaping of human jobs. In its current version, foo.castr presents three different scenarios to help HR departments planning potential changes and disruptions brought by the adoption of Artificial Intelligence."}
{"_id": "16edc3faf625fd437aaca1527e8821d979354fba", "title": "On happiness and human potentials: a review of research on hedonic and eudaimonic well-being.", "text": "Well-being is a complex construct that concerns optimal experience and functioning. Current research on well-being has been derived from two general perspectives: the hedonic approach, which focuses on happiness and defines well-being in terms of pleasure attainment and pain avoidance; and the eudaimonic approach, which focuses on meaning and self-realization and defines well-being in terms of the degree to which a person is fully functioning. These two views have given rise to different research foci and a body of knowledge that is in some areas divergent and in others complementary. New methodological developments concerning multilevel modeling and construct comparisons are also allowing researchers to formulate new questions for the field. This review considers research from both perspectives concerning the nature of well-being, its antecedents, and its stability across time and culture."}
{"_id": "200c10cbb2eaf70727af4751f0d8e0a5a6661e07", "title": "Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being.", "text": "Human beings can be proactive and engaged or, alternatively, passive and alienated, largely as a function of the social conditions in which they develop and function. Accordingly, research guided by self-determination theory has focused on the social-contextual conditions that facilitate versus forestall the natural processes of self-motivation and healthy psychological development. Specifically, factors have been examined that enhance versus undermine intrinsic motivation, self-regulation, and well-being. The findings have led to the postulate of three innate psychological needs--competence, autonomy, and relatedness--which when satisfied yield enhanced self-motivation and mental health and when thwarted lead to diminished motivation and well-being. Also considered is the significance of these psychological needs and processes within domains such as health care, education, work, sport, religion, and psychotherapy."}
{"_id": "66626e65418d696ed913ce3fa75604fb2946a2a1", "title": "Mean and Covariance Structures (MACS) Analyses of Cross-Cultural Data: Practical and Theoretical Issues.", "text": "Practical and theoretical issues are discussed for testing (a) the comparability, or measurement equivalence, of psychological constructs and (b) detecting possible sociocultural difference on the constructs in cross-cultural research designs. Specifically, strong factorial invariance (Meredith, 1993) of each variable's loading and intercept (mean-level) parameters implies that constructs are fundamentally the same in each sociocultural group, and thus comparable. Under this condition, hypotheses about the nature of sociocultural differences and similarities can be confidently and meaningfully tested among the constructs' moments in each sociocultural sample. Some of the issues involved in making such tests are reviewed and explicated within the framework of multiple-group mean and covariance structures analyses."}
{"_id": "81c920b0892b7636c127be90ec3775ae4ef8143c", "title": "Daily Well-Being : The Role of Autonomy , Competence , and Relatedness", "text": "Emotional well-being is most typically studied in trait or traitlike terms, yet a growing literature indicates that daily (withinperson) fluctuations in emotional well-being may be equally important. The present research explored the hypothesis that daily variations may be understood in terms of the degree to which three basic needs\u2014autonomy, competence, and relatedness\u2014are satisfied in daily activity. Hierarchical linear models were used to examine this hypothesis across 2 weeks of daily activity and well-being reports controlling for trait-level individual differences. Results strongly supported the hypothesis. The authors also examined the social activities that contribute to satisfaction of relatedness needs. The best predictors were meaningful talk and feeling understood and appreciated by interaction partners. Finally, the authors found systematic day-of-the-week variations in emotional well-being and need satisfaction. These results are discussed in terms of the importance of daily activities and the need to consider both trait and day-level determinants of well-being."}
{"_id": "88af62beb2ef314e6d50c750e6ff0a3551fb9f26", "title": "Personal projects, happiness, and meaning: on doing well and being yourself.", "text": "Personal Projects Analysis (B. R. Little, 1983) was adapted to examine relations between participants' appraisals of their goal characteristics and orthogonal happiness and meaning factors that emerged from factor analyses of diverse well-being measures. In two studies with 146 and 179 university students, goal efficacy was associated with happiness and goal integrity was associated with meaning. A new technique for classifying participants according to emergent identity themes is introduced. In both studies, identity-compensatory predictors of happiness were apparent. Agentic participants were happiest if their goals were supported by others, communal participants were happiest if their goals were fun, and hedonistic participants were happiest if their goals were being accomplished. The distinction between happiness and meaning is emphasized, and the tension between efficacy and integrity is discussed. Developmental implications are discussed with reference to results from archival data from a sample of senior managers."}
{"_id": "397fd411c58b25d04dd6a4c1896e86a78ae51004", "title": "Quasi-Dense Reconstruction from Image Sequence", "text": "This paper proposes a quasi-dense reconstruction from uncalibrated sequence. The main innovation is that all geometry is computed based on re-sampled quasi-dense correspondences rather than the standard sparse points of interest. It not only produces more accurate and robust reconstruction due to highly redundant and well spread input data, but also fills the gap of insufficiency of sparse reconstruction for visualization application. The computational engine is the quasi-dense 2-view and the quasi-dense 3-view algorithms developed in this paper. Experiments on real sequences demonstrate the superior performance of quasi-dense w.r.t. sparse reconstruction both in accuracy and robustness."}
{"_id": "b5e3beb791cc17cdaf131d5cca6ceb796226d832", "title": "Novel Dataset for Fine-Grained Image Categorization : Stanford Dogs", "text": "We introduce a 120 class Stanford Dogs dataset, a challenging and large-scale dataset aimed at fine-grained image categorization. Stanford Dogs includes over 22,000 annotated images of dogs belonging to 120 species. Each image is annotated with a bounding box and object class label. Fig. 1 shows examples of images from Stanford Dogs. This dataset is extremely challenging due to a variety of reasons. First, being a fine-grained categorization problem, there is little inter-class variation. For example the basset hound and bloodhound share very similar facial characteristics but differ significantly in their color, while the Japanese spaniel and papillion share very similar color but greatly differ in their facial characteristics. Second, there is very large intra-class variation. The images show that dogs within a class could have different ages (e.g. beagle), poses (e.g. blenheim spaniel), occlusion/self-occlusion and even color (e.g. Shih-tzu). Furthermore, compared to other animal datasets that tend to exist in natural scenes, a large proportion of the images contain humans and are taken in manmade environments leading to greater background variation. The aforementioned reasons make this an extremely challenging dataset."}
{"_id": "79a928769af7c9b1e2dba81a1eb0247ed34faf55", "title": "The importance of being flexible: the ability to both enhance and suppress emotional expression predicts long-term adjustment.", "text": "Researchers have documented the consequences of both expressing and suppressing emotion using between-subjects designs. It may be argued, however, that successful adaptation depends not so much on any one regulatory process, but on the ability to flexibly enhance or suppress emotional expression in accord with situational demands. We tested this hypothesis among New York City college students in the aftermath of the September 11th terrorist attacks. Subjects' performance in a laboratory task in which they enhanced emotional expression, suppressed emotional expression, and behaved normally on different trials was examined as a prospective predictor of their adjustment across the first two years of college. Results supported the flexibility hypothesis. A regression analysis controlling for initial distress and motivation and cognitive resources found that subjects who were better able to enhance and suppress the expression of emotion evidenced less distress by the end of the second year. Memory deficits were also observed for both the enhancement and the suppression tasks, suggesting that both processes require cognitive resources."}
{"_id": "e11edb4201007530c3692814a155b22f78a0d659", "title": "OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles", "text": "We present a new major release of the OpenSubtitles collection of parallel corpora. The release is compiled from a large database of movie and TV subtitles and includes a total of 1689 bitexts spanning 2.6 billion sentences across 60 languages. The release also incorporates a number of enhancements in the preprocessing and alignment of the subtitles, such as the automatic correction of OCR errors and the use of meta-data to estimate the quality of each subtitle and score subtitle pairs."}
{"_id": "175059c67687eefbdb7ea12e38a515db77086af0", "title": "Training RNN simulated vehicle controllers using the SVD and evolutionary algorithms", "text": "We describe an approach to creating a controller for The Open Car Racing Simulator (TORCS), based on The Simulated Car Racing Championship (SCRC) client, using unsupervised evolutionary learning for recurrent neural networks. Our method of training the recurrent neural network controllers relies on combining the components of the singular value decomposition of two different neural network connection matrices."}
{"_id": "40e927fdee7517fb7513d03735754af80fb9c7b0", "title": "Label Embedding Trees for Large Multi-Class Tasks", "text": "Multi-class classification becomes challenging at test time when the number of classes is very large and testing against every possible class can become computationally infeasible. This problem can be alleviated by imposing (or learning) a structure over the set of classes. We propose an algorithm for learning a treestructure of classifiers which, by optimizing the overall tree loss, provides superior accuracy to existing tree labeling methods. We also propose a method that learns to embed labels in a low dimensional space that is faster than non-embedding approaches and has superior accuracy to existing embedding approaches. Finally we combine the two ideas resulting in the label embedding tree that outperforms alternative methods including One-vs-Rest while being orders of magnitude faster."}
{"_id": "03563044af9cb9c84b14e34516c8135fa5165c25", "title": "Dependency Grammar and Dependency Parsing", "text": "Despite a long and venerable tradition in descriptive lingu stics, dependency grammar has until recently played a fairly marginal role both in t heoretical linguistics and in natural language processing. The increasing interes t in dependency-based representations in natural language parsing in recent year s appears to be motivated both by the potential usefulness of bilexical relations in d isambiguation and by the gains in efficiency that result from the more constrained par sing problem for these representations. In this paper, we will review the state of the art in dependenc y-based parsing, starting with the theoretical foundations of dependency gr ammar and moving on to consider both grammar-driven and data-driven methods fo r dependency parsing. We will limit our attention to systems for dependency parsin g i a narrow sense, i.e. systems where the analysis assigned to an input sentenc akes the form of a dependency structure. This means that we will not discuss sy tems that exploit dependency relations for the construction of another type o f r presentation, such as the head-driven parsing models of Collins (1997, 1999). M oreover, we will restrict ourselves to systems for full parsing, which means that we will not deal with systems that produce a partial or underspecified repres ntation of dependency structure, such as Constraint Grammar parsers (Karlsson, 1 990; Karlsson et al., 1995)."}
{"_id": "dedf3eea39a97d606217462638f38adabc7321d1", "title": "Realized Volatility and Variance : Options via Swaps", "text": "Let St denote the value of a stock or stock index at time t. Given a variance/volatility option to be priced and hedged, let us designate as time 0 the start of its averaging period, and time T the end. For t \u2208 [0, T ], and \u03c4 \u2264 t, let R2 \u03c4,t denote the realized variance of returns over the time interval [\u03c4, t]. The mathematical results about the synthesis of volatility and variance swaps will hold exactly if R2 refers to the continuously-monitored variance. This means the quadratic variation of logS, \u2217Bloomberg L.P. and NYU Courant Institute pcarr@nyc.rr.com. \u2020University of Chicago. RL@math.uchicago.edu. We thank the International Securities Exchange, Kris Monaco,"}
{"_id": "7f3e1903023fae5ca343db2546d3eebd2b89c827", "title": "Single-Stage Single-Switch PFC Flyback Converter Using a Synchronous Rectifier", "text": "A single-stage single-switch power factor correction (PFC) flyback converter with a synchronous rectifier (SR) is proposed for improving power factor and efficiency. Using a variable switching-frequency controller, this converter is continuously operated with a reduced turn-on switching loss at the boundary of the continuous conduction mode and discontinuous conduction mode (DCM). The proposed PFC circuit provides relatively low dc-link voltage in the universal line voltage, and also complies with Standard IEC 61000-3-2 Class D limits. In addition, a new driving circuit as the voltage driven-synchronous rectifier is proposed to achieve high efficiency. In particular, since a driving signal is generated according to the voltage polarity, the SR driving circuit can easily be used in DCM applications. The proposed PFC circuit and SR driving circuit in the flyback converter with the reduced switching loss are analyzed in detail and optimized for high performance. Experimental results for a 19 V/90 W adapter at the variable switching-frequency of 30~70 kHz were obtained to show the performance of the proposed converter."}
{"_id": "286381f03a2e30a942f29b8ed649c63e5b668e63", "title": "Current and Emerging Topics in Sports Video Processing", "text": "Sports video processing is an interesting topic for research, since the clearly defined game rules in sports provide the rich domain knowledge for analysis. Moreover, it is interesting because many specialized applications for sports video processing are emerging. This paper gives an overview of sports video research, where we describe both basic algorithmic techniques and applications."}
{"_id": "9151541a84d25e31a1101e82a4b345639223bbd8", "title": "Improving Business Process Quality through Exception Understanding, Prediction, and Prevention", "text": "Business process automation technologies are being increasingly used by many companies to improve the efficiency of both internal processes as well as of e-services offered to customers. In order to satisfy customers and employees, business processes need to be executed with a high and predictable quality. In particular, it is crucial for organizations to meet the Service Level Agreements (SLAs) stipulated with the customers and to foresee as early as possible the risk of missing SLAs, in order to set the right expectations and to allow for corrective actions. In this paper we focus on a critical issue in business process quality: that of analyzing, predicting and preventing the occurrence of exceptions, i.e., of deviations from the desired or acceptable behavior. We characterize the problem and propose a solution, based on data warehousing and mining techniques. We then describe the architecture and implementation of a tool suite that enables exception analysis, prediction, and prevention. Finally, we show experimental results obtained by using the tool suite to analyze internal HP processes."}
{"_id": "ac4b1536d8e1dd45c7816dd446e7dd890ba00067", "title": "A Service-Oriented Architecture for Virtualizing Robots in Robot-as-a-Service Clouds", "text": "Exposing software and hardware computing resources as services through a cloud is increasingly emerging in the recent years. This comes as a result of extending the service-oriented architecture (SOA) paradigm to virtualize computing resources. In this paper, we extend the paradigm of the SOA approach to virtualize robotic hardware and software resources to expose them as services through the Web. This allows non-technical users to access, interact and manipulate robots simply through a Web browser. The proposed RoboWeb system is based on a SOAP-based Web service middleware that binds robots computing resources as services and publish them to the end-users. We consider robots that operates with the Robotic Operating System (ROS), as it provides hardware abstraction that makes easier applications development. We describe the implementation of RoboWeb and demonstrate how researchers can use it to interact remotely with the robots. We believe that this work consistently contributes to enabling remote robotic labs using the cloud paradigm."}
{"_id": "79acd526a54ea8508bc80f64f5f4d7ac44f1f53b", "title": "TumorNet: Lung nodule characterization using multi-view Convolutional Neural Network with Gaussian Process", "text": "Characterization of lung nodules as benign or malignant is one of the most important tasks in lung cancer diagnosis, staging and treatment planning. While the variation in the appearance of the nodules remains large, there is a need for a fast and robust computer aided system. In this work, we propose an end-to-end trainable multi-view deep Convolutional Neural Network (CNN) for nodule characterization. First, we use median intensity projection to obtain a 2D patch corresponding to each dimension. The three images are then concatenated to form a tensor, where the images serve as different channels of the input image. In order to increase the number of training samples, we perform data augmentation by scaling, rotating and adding noise to the input image. The trained network is used to extract features from the input image followed by a Gaussian Process (GP) regression to obtain the malignancy score. We also empirically establish the significance of different high level nodule attributes such as calcification, sphericity and others for malignancy determination. These attributes are found to be complementary to the deep multi-view CNN features and a significant improvement over other methods is obtained."}
{"_id": "fb26a77fb3532da237ef9c372af8e9c46dc4b16c", "title": "Behavior Analysis Using a Multilevel Motion Pattern Learning Framework", "text": "The increasing availability of video data, through existing traffic cameras or dedicated field data collection, paves the way for the collection of massive datasets about the microscopic behaviour of road users using computer vision techniques. Analysis of such datasets helps to understand the normal road user behaviour, and it can be used for realistic prediction of future motion and computing surrogate safety indicators. A multi-level motion pattern learning framework is developed that enables automated scene interpretation, anomalous behaviour detection and surrogate safety analysis. Firstly, Points of interest (POI) are learnt based on Gaussian Mixture Model and the Expectation Maximization algorithm and then used to form activity paths (APs). Secondly, motion patterns, represented by trajectory prototypes, are learnt from road users\u2019 trajectories in each AP using a two-stage trajectory clustering method based on spatial then temporal (speed) information. Finally, motion prediction relies on matching at each instant partial trajectories to the learnt prototypes to evaluate the potential for collision by computing indicators. An intersection case study demonstrates the framework ability in many ways: it helps to reduce the computation cost up to 90 %, clean trajectories dataset from tracking outliers, use actual trajectories as prototypes without any preand post-processing and predict future motion realistically to compute surrogate safety indicators."}
{"_id": "b9c4f8f76680070a9d3a41284320bdd9561a6204", "title": "A High-Power Broadband Passive Terahertz Frequency Doubler in CMOS", "text": "To realize a high-efficiency terahertz signal source, a varactor-based frequency-doubler topology is proposed. The structure is based on a compact partially coupled ring that simultaneously achieves isolation, matching, and harmonic filtering for both input and output signals at f0 and 2f0. The optimum varactor pumping/loading conditions for the highest conversion efficiency are also presented analytically along with intuitive circuit representations. Using the proposed circuit, a passive 480-GHz frequency doubler with a measured minimum conversion loss of 14.3 dB and an unsaturated output power of 0.23 mW is reported. Within 20-GHz range, the fluctuation of the measured output power is less than 1.5 dB, and the simulated 3-dB output bandwidth is 70 GHz (14.6%). The doubler is fabricated using 65-nm low-power bulk CMOS technology and consumes near zero dc power."}
{"_id": "9d5d3fa5bc48de89c398042236b85566f433eb5c", "title": "Anonymous CoinJoin Transactions with Arbitrary Values", "text": "Bitcoin, the arguably most popular cryptocurrency to date, allows users to perform transactions using freely chosen pseudonymous addresses. Previous research, however, suggests that these pseudonyms can easily be linked, implying a lower level of privacy than originally expected. To obfuscate the links between pseudonyms, different mixing methods have been proposed. One of the first approaches is the CoinJoin concept, where multiple users merge their transactions into one larger transaction. In theory, CoinJoin can be used to mix and transact bitcoins simultaneously, in one step. Yet, it is expected that differing bitcoin amounts would allow an attacker to derive the original single transactions. Solutions based on CoinJoin therefore prescribe the use of fixed bitcoin amounts and cannot be used to perform arbitrary transactions.In this paper, we define a model for CoinJoin transactions and metrics that allow conclusions about the provided anonymity. We generate and analyze CoinJoin transactions and show that with differing, representative amounts they generally do not provide any significant anonymity gains. As a solution to this problem, we present an output splitting approach that introduces sufficient ambiguity to effectively prevent linking in CoinJoin transactions. Furthermore, we discuss how this approach could be used in Bitcoin today."}
{"_id": "8124e1c470957979625142b876acad033bdbe69f", "title": "Blockchain \u2014 Literature survey", "text": "In the modern world that we live in everything is gradually shifting towards a more digitized outlook. From teaching methods to online transactions, every small aspect of our life is given a more technological touch. In such a case \u201cmoney\u201d is not left behind either. An approach towards digitizing money led to the creation of \u201cbitcoin\u201d. Bitcoin is the most efficient crypto-currency so far in the history of digital currency. Ruling out the interference of any third party which monitors the cash flow, Bitcoin is a decentralized form of online currency and is widely accepted for internet transactions all over the world. The need for digital money would be extensive in the near future."}
{"_id": "06c75fd841ea83440a9496ecf6d8e3d6341f655a", "title": "IMPLICIT SLICING METHOD FOR ADDITIVE MANUFACTURING PROCESSES", "text": "All additive manufacturing processes involve a distinct preprocessing stage in which a set of instructions, or GCode, that control the process specific manufacturing tool are generated, otherwise known as slicing. In regards to fused deposition modeling, the GCode defines many crucial parameters which are needed to produce a part, including tool path, infill density, layer height, feed rate, tool head and plate temperature. The majority of current commercial slicing programs generate tool paths explicitly, and do not consider particular geometric properties of parts, such as thin walls, small corners and round profiles that can result in critical voids leading to part failure. This work replicates an implicit slicing algorithm in which functionally derived infill patterns are overlaid onto each layer of a part, reducing the possibility of undesired voids and flaws. This work also further investigates the effects that varying implicitly derived infill patterns have on a part\u2019s mechanical properties through tensile testing dog bone specimens with three different infill patterns and comparing ultimate stress and elastic modulus properties."}
{"_id": "61060bea27a3410260988540b627ccc5ba131822", "title": "Adversarial Cross-Modal Retrieval", "text": "Cross-modal retrieval aims to enable flexible retrieval experience across different modalities (e.g., texts vs. images). The core of cross-modal retrieval research is to learn a common subspace where the items of different modalities can be directly compared to each other. In this paper, we present a novel Adversarial Cross-Modal Retrieval (ACMR) method, which seeks an effective common subspace based on adversarial learning. Adversarial learning is implemented as an interplay between two processes. The first process, a feature projector, tries to generate a modality-invariant representation in the common subspace and to confuse the other process, modality classifier, which tries to discriminate between different modalities based on the generated representation. We further impose triplet constraints on the feature projector in order to minimize the gap among the representations of all items from different modalities with same semantic labels, while maximizing the distances among semantically different images and texts. Through the joint exploitation of the above, the underlying cross-modal semantic structure of multimedia data is better preserved when this data is projected into the common subspace. Comprehensive experimental results on four widely used benchmark datasets show that the proposed ACMR method is superior in learning effective subspace representation and that it significantly outperforms the state-of-the-art cross-modal retrieval methods."}
{"_id": "25387000175f1622ddc4b485a91c9fa12bd253a3", "title": "Key drivers of internet banking services use", "text": "Purpose \u2013 The purpose of this paper is to analyse the determinants of internet banking use, paying special attention to the role of product involvement, perceived risk and trust. Design/methodology/approach \u2013 The impact of trust, perceived risks, product involvement and TAM beliefs (ease of use and usefulness) on internet banking adoption is tested through structural equation modelling techniques. The sample consists of 511 Spanish internet banking services users and the data are collected through an internet survey. Risk is measured as a formative construct. Findings \u2013 Data analysis shows that TAM beliefs and perceived risks (security, privacy, performance and social) have a direct influence on e-banking adoption. Trust appears as a key variable that reduces perceived risk. Involvement plays an important role in increasing perceived ease of use. Practical implications \u2013 This research provides banks with knowledge of what aspects to highlight in their communications strategies to increase their internet banking services adoption rates. The research findings show managers that web contents and design are key tools to increase internet banking services adoption. Practical recommendations to increase web usefulness and trust, and guidelines to reduce perceived risk dimensions are also provided. Originality/value \u2013 Despite the importance of trust issues and risk perceptions for internet banking adoption, only limited work has been done to identify trust and risk dimensions in an online banking context. We have evaluated the impact of each risk dimension instead of treating risk as a whole. Furthermore, risk has been measured as a formative construct because there is no reason to expect that risk dimensions in online financial services are correlated."}
{"_id": "ac8c2e1fa35e797824958ced835257cd49e1be9c", "title": "Information technology and organizational learning: a review and assessment of research", "text": "This paper reviews and assesses the emerging research literature on information technology and organizational learning. After discussing issues of meaning and measurement, we identify and assess two main streams of research: studies that apply organizational learning concepts to the process of implementing and using information technology in organizations; and studies concerned with the design of information technology applications to support organizational learning. From the former stream of research, we conclude that experience plays an important, yet indeterminate role in implementation success; learning is accomplished through both formal training and participation in practice; organizational knowledge barriers may be overcome by learning from other organizations; and that learning new technologies is a dynamic process characterized by relatively narrow windows of opportunity. From the latter stream, we conclude that conceptual designs for organizational memory information systems are a valuable contribution to artifact development; learning is enhanced through systems that support communication and discourse; and that information technologies have the potential to both enable and disable organizational learning. Currently, these two streams flow independently of each other, despite their close conceptual and practical links. We advise that future research on information technology and organizational learning proceeds in a more integrated fashion, recognizes the situated nature of organizational learning, focuses on distributed organizational memory, demonstrates the effectiveness of artifacts in practice, and looks for relevant research findings in related fields."}
{"_id": "12b8cf6ab20d0b66f975794767ead30e9f11fc49", "title": "Cyber attack modeling and simulation for network security analysis", "text": "Cyber security methods are continually being developed. To test these methods many organizations utilize both virtual and physical networks which can be costly and time consuming. As an alternative, in this paper, we present a simulation modeling approach to represent computer networks and intrusion detection systems (IDS) to efficiently simulate cyber attack scenarios. The outcome of the simulation model is a set of IDS alerts that can be used to test and evaluate cyber security systems. In particular, the simulation methodology is designed to test information fusion systems for cyber security that are under development."}
{"_id": "26dd163ee6a0e75659a7f1485d1d442852d74436", "title": "Using Knowledge Representation and Reasoning Tools in the Design of Robots", "text": "The paper describes the authors\u2019 experience in using knowledge representation and reasoning tools in the design of robots. The focus is on the systematic construction of models of the robot\u2019s capabilities and its domain at different resolutions, and on establishing a clear relationship between the models at the different resolutions."}
{"_id": "65f797e97cca7463d3889e2f2f9de4c0f8121742", "title": "An Experience Teaching a Graduate Course in Cryptography", "text": "This article describes an experience of teaching a graduate-level course in cryptography and computer security at New York University. The course content as well as lessons learned and plans for the future are discussed."}
{"_id": "24019050c30b7e5bf1be28e48b8cb5278c4286fd", "title": "PH2 - A dermoscopic image database for research and benchmarking", "text": "The increasing incidence of melanoma has recently promoted the development of computer-aided diagnosis systems for the classification of dermoscopic images. Unfortunately, the performance of such systems cannot be compared since they are evaluated in different sets of images by their authors and there are no public databases available to perform a fair evaluation of multiple systems. In this paper, a dermoscopic image database, called PH2, is presented. The PH2 database includes the manual segmentation, the clinical diagnosis, and the identification of several dermoscopic structures, performed by expert dermatologists, in a set of 200 dermoscopic images. The PH2 database will be made freely available for research and benchmarking purposes."}
{"_id": "ac9748ea3945eb970cc32a37db7cfdfd0f22e74c", "title": "Ridge-based vessel segmentation in color images of the retina", "text": "A method is presented for automated segmentation of vessels in two-dimensional color images of the retina. This method can be used in computer analyses of retinal images, e.g., in automated screening for diabetic retinopathy. The system is based on extraction of image ridges, which coincide approximately with vessel centerlines. The ridges are used to compose primitives in the form of line elements. With the line elements an image is partitioned into patches by assigning each image pixel to the closest line element. Every line element constitutes a local coordinate frame for its corresponding patch. For every pixel, feature vectors are computed that make use of properties of the patches and the line elements. The feature vectors are classified using a kNN-classifier and sequential forward feature selection. The algorithm was tested on a database consisting of 40 manually labeled images. The method achieves an area under the receiver operating characteristic curve of 0.952. The method is compared with two recently published rule-based methods of Hoover et al. and Jiang et al. . The results show that our method is significantly better than the two rule-based methods (p<0.01). The accuracy of our method is 0.944 versus 0.947 for a second observer."}
{"_id": "dffaa9e4bfd2bca125883372820c1dcfd87bc081", "title": "Active Contours without Edges for Vector-Valued Images", "text": "In this paper, we propose an active contour algorithm for object detection in vectorvalued images (such as RGB or multispectral). The model is an extension of the scalar Chan\u2013Vese algorithm to the vector-valued case [1]. The model minimizes a Mumford\u2013Shah functional over the length of the contour, plus the sum of the fitting error over each component of the vector-valued image. Like the Chan\u2013Vese model, our vector-valued model can detect edges both with or without gradient. We show examples where our model detects vector-valued objects which are undetectable in any scalar representation. For instance, objects with different missing parts in different channels are completely detected (such as occlusion). Also, in color images, objects which are invisible in each channel or in intensity can be detected by our algorithm. Finally, the model is robust with respect to noise, requiring no a priori denoising step. C \u00a9 2000 Academic Press"}
{"_id": "2d88e7922d9f046ace0234f9f96f570ee848a5b5", "title": "Building Better Detection with Privileged Information", "text": "Modern detection systems use sensor outputs available in the deployment environment to probabilistically identify attacks. These systems are trained on past or synthetic feature vectors to create a model of anomalous or normal behavior. Thereafter, run-time collected sensor outputs are compared to the model to identify attacks (or the lack of attack). While this approach to detection has been proven to be effective in many environments, it is limited to training on only features that can be reliably collected at test-time. Hence, they fail to leverage the often vast amount of ancillary information available from past forensic analysis and post-mortem data. In short, detection systems don\u2019t train (and thus don\u2019t learn from) features that are unavailable or too costly to collect at run-time. In this paper, we leverage recent advances in machine learning to integrate privileged information \u2013 features available at training time, but not at run-time\u2013 into detection algorithms. We apply three different approaches to model training with privileged information; knowledge transfer, model influence, and distillation, and empirically validate their performance in a range of detection domains. Our evaluation shows that privileged information can increase detector precision and recall: we observe an average of 4.8% decrease in detection error for malware traffic detection over a system with no privileged information, 3.53% for fast-flux domain bot detection, 3.33% for malware classification, 11.2% for facial user authentication. We conclude by exploring the limitations and applications of different privileged information techniques in detection systems."}
{"_id": "3723eb29efd2e5193834b9d1ef71f1169b3d4b9a", "title": "Combining Textual Entailment and Argumentation Theory for Supporting Online Debates Interactions", "text": "Blogs and forums are widely adopted by online communities to debate about various issues. However, a user that wants to cut in on a debate may experience some difficulties in extracting the current accepted positions, and can be discouraged from interacting through these applications. In our paper, we combine textual entailment with argumentation theory to automatically extract the arguments from debates and to evaluate their acceptability."}
{"_id": "654d129eafc136bf5fccbc54e6c8078e87989ea8", "title": "A multimode-beamforming 77-GHz FMCW radar system", "text": "In this work a multimode-beamforming 77-GHz frequency-modulated continuous-wave radar system is presented. Four transceiver chips with integrated inphase/quadrature modulators in the transmit path are used in order to simultaneously realize a short-range frequency-division multiple-access (FDMA) multiple-input multiple-output (MIMO) and a long-range transmit phased-array (PA) radar system with the same antennas. It combines the high angular resolution of FDMA MIMO radars and the high-gain and steerable beam of PA transmit antennas. Several measurements were carried out to show the potential benefits of using this concept for a linear antenna array with four antennas and methods of digital beamforming in the receive path."}
{"_id": "eeba24b5ae5034665de50ed56c9f76adb95167c3", "title": "X509Cloud \u2014 Framework for a ubiquitous PKI", "text": "The SSL protocol has been widely used for verifying digital identities and to secure Internet traffic since the early days of the web. Although X.509 certificates have been in existence for more than two decades, individual user uptake has been low due to the high cost of issuance and maintenance of such certs. This has led to a situation whereby users are able to verify the identity of an organization or e-commerce retailer via their digital certificate, but organizations have to rely on weak username and password combinations to verify the identity of customers registered with their service. We propose the X509Cloud framework which enables organizations to issue certificates to their users at zero cost, and allows them to securely store and disseminate client certificates using the Bitcoin inspired blockchain protocol. This in turn will enable organizations and individuals to authenticate and to securely communicate with other users on the Internet."}
{"_id": "45b1f6b683fdc5f8d5a340a7ba203c574c1d93a5", "title": "Finding and Optimizing Solvable Priority Schemes for Decoupled Path Planning Techniques for Teams of Mobile Robots", "text": "Coordinating the motion of multiple mobile robots is one of the fundamental problems in robotics. The predominant algorithms for coordinating teams of robots are decoupled and prioritized, thereby avoiding combinatorially hard planning problems typically faced by centralized approaches. While these methods are very efficient, they have two major drawbacks. First, they are incomplete, i.e. they sometimes fail to find a solution even if one exists, and second, the resulting solutions are often not optimal. In this paper we present a method for finding and optimizing priority schemes for such prioritized and decoupled planning techniques. Existing approaches apply a single priority scheme which makes them overly prone to failure in cases where valid solutions exist. By searching in the space of priorization schemes, our approach overcomes this limitation. It performs a randomized search with hill-climbing to find solutions and to minimize the overall path length. To focus the search, our algorithm is guided by constraints generated from the task specification. To illustrate the appropriateness of this approach, this paper discusses experimental results obtained with real robots and through systematic robot simulation. The experimental results illustrate the superior performance of our approach, both in terms of efficiency of robot motion and in the ability to find valid plans."}
{"_id": "21cc39f04252e7ea98ab3492fa1945a258708f2c", "title": "The Impact of Node Selfishness on Multicasting in Delay Tolerant Networks", "text": "Due to the uncertainty of transmission opportunities between mobile nodes, delay tolerant networks (DTNs) exploit the opportunistic forwarding mechanism. This mechanism requires nodes to forward messages in a cooperative and selfish way. However, in the real word, most of the nodes exhibit selfish behaviors, such as individual and social selfishness. In this paper, we are the first to investigate how the selfish behaviors of nodes affect the performance of DTN multicast. We consider two typical multicast relaying schemes, namely, two-hop relaying and epidemic relaying, and study their performance in terms of average message transmission delay and transmission cost. Specifically, we model the message delivery process under selfish behaviors by a 3-D continuous time Markov chain; under this model, we derive closed-form formulas for the message transmission delay and cost. Then, we evaluate the accuracy of the proposed Markov chain model by comparing the theoretical results with the simulation results obtained by simulating the message dissemination under both two-hop and epidemic relaying with different network sizes and mobility models. Our study shows that different selfish behaviors may have different impacts on different performance metrics. In addition, selfish behaviors influence epidemic relaying more than two-hop relaying. Furthermore, our results show that the performance of multicast with selfish nodes depends on the multicast group size."}
{"_id": "a4188db982a6e979d838c955a1688c1a10db5d48", "title": "Implementing & Improvisation of K-means Clustering Algorithm", "text": "The clustering techniques are the most important part of the data analysis and k-means is the oldest and popular clustering technique used. The paper discusses the traditional K-means algorithm with advantages and disadvantages of it. It also includes researched on enhanced k-means proposed by various authors and it also includes the techniques to improve traditional K-means for better accuracy and efficiency. There are two area of concern for improving K-means; 1) is to select initial centroids and 2) by assigning data points to nearest cluster by using equations for calculating mean and distance between two data points. The time complexity of the proposed K-means technique will be lesser that then the traditional one with increase in accuracy and efficiency. The main purpose of the article is to proposed techniques to enhance the techniques for deriving initial centroids and the assigning of the data points to its nearest clusters. The clustering technique proposed in this paper is enhancing the accuracy and time complexity but it still needs some further improvements and in future it is also viable to include efficient techniques for selecting value for initial clusters(k). Experimental results show that the improved method can effectively improve the speed of clustering and accuracy, reducing the computational complexity of the k-means. Introduction Clustering is a process of grouping data objects into disjointed clusters so that the data in the same cluster are similar, but data belonging to different cluster differ. A cluster is a collection of data object that are similar to one another are in same cluster and dissimilar to the objects are in other clusters [k-4]. At present the applications of computer technology in increasing rapidly which Unnati R. Raval et al, International Journal of Computer Science and Mobile Computing, Vol.5 Issue.5, May2016, pg. 191-203 \u00a9 2016, IJCSMC All Rights Reserved 192 created high volume and high dimensional data sets [10]. These data is stored digitally in electronic media, thus providing potential for the development of automatic data analysis, classification and data retrieval [10]. The clustering is important part of the data analysis which partitioned given dataset in to subset of similar data points in each subset and dissimilar to data from other clusters [1]. The clustering analysis is very useful with increasing in digital data to draw meaningful information or drawing interesting patters from the data sets hence it finds applications in many fields like bioinformatics, pattern recognition, image processing, data mining, marketing and economics etc [4]. There have been many clustering techniques proposed but K-means is one of the oldest and most popular clustering techniques. In this method the number of cluster (k) is predefined prior to analysis and then the selection of the initial centroids will be made randomly and it followed by iterative process of assigning each data point to its nearest centroid. This process will keep repeating until convergence criteria met. However, there are shortcomings of K-means, it is important to proposed techniques that enhance the final result of analysis. This article includes researched on papers [1,2,3,4,5,6] which made some very important improvements towards the accuracy and efficiency of the clustering technique. 
Basic K-means Algorithm : A centroid-based Clustering technique According to the basic K-mean clustering algorithm, clusters are fully dependent on the selection of the initial clusters centroids. K data elements are selected as initial centers; then distances of all data elements are calculated by Euclidean distance formula. Data elements having less distance to centroids are moved to the appropriate cluster. The process is continued until no more changes occur in clusters[k-1]. This partitioning clustering is most popular and fundamental technique [1]. It is vastly used clustering technique which requires user specified parameters like number of clusters k, cluster initialisation and cluster metric [2]. First it needs to define initial clusters which makes subsets (or groups) of nearest points (from centroid) inside the data set and these subsets (or groups) called clusters [1]. Secondly, it finds means value for each cluster and define new centroid to allocate data points to this new centroid and this iterative process will goes on until centroid [3] does not changes. The simplest algorithm for the traditional K-means [2] is as follows; Input: D = {d1, d2, d3,.......dn} // set of n numbers of data points K // The number of desire Clusters Output: A set of k clusters 1. Select k points as initial centroids. 2. Repeat 3. From K clusters by assigning each data point to its nearest centroid. 4. Recompute the centroid for each cluster until centroid does not change [2]. Unnati R. Raval et al, International Journal of Computer Science and Mobile Computing, Vol.5 Issue.5, May2016, pg. 191-203 \u00a9 2016, IJCSMC All Rights Reserved 193 Start"}
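As a companion to the four-step pseudocode in the record above, here is a minimal runnable K-means in NumPy; the random initialisation and the convergence test are the textbook defaults, not the enhanced centroid-selection technique the paper itself proposes:

```python
import numpy as np

def kmeans(D: np.ndarray, k: int, max_iter: int = 100, seed: int = 0):
    """Basic K-means on D with shape (n, d); returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = D[rng.choice(len(D), size=k, replace=False)]   # 1. initial centroids
    for _ in range(max_iter):                                  # 2. repeat
        # 3. assign each point to its nearest centroid (Euclidean distance)
        dists = np.linalg.norm(D[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # 4. recompute centroids; keep the old one if a cluster went empty
        new_centroids = np.array([
            D[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):              # until no change
            break
        centroids = new_centroids
    return centroids, labels

centroids, labels = kmeans(np.random.rand(200, 2), k=3)
print(centroids)
```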
{"_id": "ba6c855521307dec467bffadf5460c39e1ececa8", "title": "An Analysis on the factors causing telecom churn: First Findings", "text": "The exponential growth in number of telecom providers and unceasing competition in the market has caused telecom operators to divert their focus more towards customer retention rather than customer acquisition. Hence it becomes extremely essential to identify the factors causing a customer to switch and take proactive actions in order to retain them. The paper aims to review existing literature to bring up a holistic view of various factors and their relationships that cause a customer to churn in the telecommunication industry. A total of 140 factors related to customer churn were compiled by reviewing several studies on customer churn behavior. The identified factors are then discussed in detail with their impact on customer\u2019s switching behavior. The limitations of the existing studies and directions for future research have also been provided. The paper will aid practitioners and researchers by serving a strong base for their industrial practice and future research."}
{"_id": "289d3a9562f57d0182d1aae9376b0e3793d80272", "title": "The role of knowledge in discourse comprehension: a construction-integration model.", "text": "In contrast to expectation-based, predictive views of discourse comprehension, a model is developed in which the initial processing is strictly bottom-up. Word meanings are activated, propositions are formed, and inferences and elaborations are produced without regard to the discourse context. However, a network of interrelated items is created in this manner, which can be integrated into a coherent structure through a spreading activation process. Data concerning the time course of word identification in a discourse context are examined. A simulation of arithmetic word-problem understanding provides a plausible account for some well-known phenomena in this area."}
{"_id": "6e54df7c65381b40948f50f6e456e7c06c009fd0", "title": "Co-evolution of Fitness Maximizers and Fitness Predictors", "text": "We introduce an estimation of distribution algorithm (EDA) based on co-evolution of fitness maximizers and fitness predictors for improving the performance of evolutionary search when evaluations are prohibitively expensive. Fitness predictors are lightweight objects which, given an evolving individual, heuristically approximate the true fitness. The predictors are trained by their ability to correctly differentiate between good and bad solutions using reduced computation. We apply co-evolving fitness prediction to symbolic regression and measure its impact. Our results show that the additional computational investment in training the co-evolving fitness predictors can greatly enhance both speed and convergence of individual solutions while overall reducing the number of evaluations. In application to symbolic regression, the advantage of using fitness predictors grows as the complexity of models increases."}
{"_id": "eaede512ffebb375e3d76e9103d03a32b235e8f9", "title": "An MLP-based representation of neural tensor networks for the RDF data models", "text": "In this paper, a new representation of neural tensor networks is presented. Recently, state-of-the-art neural tensor networks have been introduced to complete RDF knowledge bases. However, mathematical model representation of these networks is still a challenging problem, due to tensor parameters. To solve this problem, it is proposed that these networks can be represented as two-layer perceptron network. To complete the network topology, the traditional gradient based learning rule is then developed. It should be mentioned that for tensor networks there have been developed some learning rules which are complex in nature due to the complexity of the objective function used. Indeed, this paper is aimed to show that the tensor network can be viewed and represented by the two-layer feedforward neural network in its traditional form. The simulation results presented in the paper easily verify this claim."}
{"_id": "a21fd0749a59611af146765e4883bcfc73dcbdc8", "title": "PointCNN: Convolution On $\\mathcal{X}$-Transformed Points", "text": "We present a simple and general framework for feature learning from point clouds. The key to the success of CNNs is the convolution operator that is capable of leveraging spatially-local correlation in data represented densely in grids (e.g. images). However, point clouds are irregular and unordered, thus directly convolving kernels against features associated with the points will result in desertion of shape information and variance to point ordering. To address these problems, we propose to learn an X -transformation from the input points to simultaneously promote two causes: the first is the weighting of the input features associated with the points, and the second is the permutation of the points into a latent and potentially canonical order. Element-wise product and sum operations of the typical convolution operator are subsequently applied on the X -transformed features. The proposed method is a generalization of typical CNNs to feature learning from point clouds, thus we call it PointCNN. Experiments show that PointCNN achieves on par or better performance than state-of-the-art methods on multiple challenging benchmark datasets and tasks."}
{"_id": "b1c6c92e8c1c3f0f395b333acf9bb85aaeef926e", "title": "Increased corpus callosum size in musicians", "text": "Using in-vivo magnetic resonance morphometry it was investigated whether the midsagittal area of the corpus callosum (CC) would differ between 30 professional musicians and 30 age-, sex- and handedness-matched controls. Our analyses revealed that the anterior half of the CC was significantly larger in musicians. This difference was due to the larger anterior CC in the subgroup of musicians who had begun musical training before the age of 7. Since anatomic studies have provided evidence for a positive correlation between midsagittal callosal size and the number of fibers crossing through the CC, these data indicate a difference in interhemispheric communication and possibly in hemispheric (a)symmetry of sensorimotor areas. Our results are also compatible with plastic changes of components of the CC during a maturation period within the first decade of human life, similar to those observed in animal studies."}
{"_id": "bd863d284d552d9c506fc2978d4019d45101250a", "title": "Integrated pathology and radiology learning for a musculoskeletal system module: an example of interdisciplinary integrated form", "text": "Introduction\nMany curricula integrate radiology with anatomy courses but none of these curricula adopt integration of pathology with radiology as interdisciplinary form at the undergraduate level. The aim of the current study was to identify the outcome of interdisciplinary integrated course of pathology and radiology in musculoskeletal system module (MSK).\n\n\nMethods\nA comparative interventional study was conducted on 60 students representing a whole class of the third year of level V. MSK and gastrointestinal module (GIT) were selected as study and control module, respectively, as being adopted for the same level/allocated hours, enriched with many subject areas for both fields, and availability of learning resources for both. A planned interdisciplinary integrated course for MSK pathology and radiology was implemented in the pathology lab. The subject area was selected and taught for both fields in consecutive ways by pathology and radiology experts. After teaching, gross/histopathologic specimens and radiology imaging/reports were distributed over benches and the students investigated the same. Conversely, in GIT control module, both fields were delivered separately, and no interdisciplinary form of integration occurred. Students' scores for both fields were filtered from the objective structured practical exam, quiz, and final exam. Students' marks and satisfaction were subjected to multiple comparisons using independent student's t-test. SPSS version 17 was used.\n\n\nResults\nSignificances were obtained between total marks of students for both modules and between radiology courses for both with P=0.0152 and 0.0199, respectively. Number of students who achieved >80% in MSK was 20 and 26 compared to 15 and 17 in GIT for pathology and radiology, respectively. Student satisfaction was high for interdisciplinary integration in MSK with significant difference obtained between MSK and GIT.\n\n\nConclusion\nThe integration of both fields augments student performance for both. This experience must encourage curriculum committee to globalize it over all other modules."}
{"_id": "d327ef1bfceda3a0ecbed76e030918445770c5db", "title": "Born Digital: Understanding the First Generation of Digital Natives", "text": "Perhaps you have seen them on the train in the morning, adjusting their iPods and texting with their preternaturally agile thumbs. Or perhaps one sits in the cubicle next to yours, cycling through a seemingly endless loop of news feeds, YouTube videos, cell phone calls, and work. Perhaps you have one living in your home, capable of reprogramming the wireless router and leaping insurmountable DVR instructions in a single, tech-savvy bound."}
{"_id": "cbd61961cb67ac04d7c6d1ac3f4ce27a802fc0cc", "title": "Towards a 99% efficient three-phase buck-type PFC rectifier for 400 V DC distribution systems", "text": "In telecom applications, the vision for a total power conversion efficiency from the mains to the output of PoL converters of 95% demands for an optimization of every conversion step, i.e. the PFC rectifier front-end should show an outstanding efficiency in the range of 99%. For recently discussed 400 V DC distribution bus voltages a buck-type PFC rectifier is a logical solution. In this paper, an efficiency-optimized, nearly 99% efficient, 5 kW three-phase buck-type PFC rectifier with 400 V output is presented. Methods for calculating losses of all components are described, and are used to optimize the converter design for efficiency at full load. Special attention is paid to semiconductor losses, which are shown to be dominant, with the parasitic device capacitance losses being a significant component. A prototype of the proposed rectifier is constructed which verifies the accuracy of the models used for loss calculation and optimization."}
{"_id": "60611349d1b6d64488a5a88a9193e62d9db27b71", "title": "REVIEW OF FATIGUE DETECTION AND PREDICTION TECHNOLOGIES September 2000 Prepared", "text": "This report reviews existing fatigue detection and prediction technologies. Data regarding the different technologies available were collected from a wide variety of worldwide sources. The first half of this report summarises the current state of research and development of the technologies and summarises the status of the technologies with respect to the key issues of sensitivity, reliability, validity and acceptability. The second half evaluates the role of the technologies in transportation, and comments on the place of the technologies vis-a-vis other enforcement and regulatory frameworks, especially in Australia and New Zealand. The report authors conclude that the hardware technologies should never be used as the company fatigue management system. Hardware technologies only have the potential to be a last ditch safety device. Nevertheless, the output of hardware technologies could usefully feed into company fatigue management systems to provide real time risk assessment. However, hardware technology output should never be the only input into a management system. Other inputs should at least come from validated software technologies, mutual assessment of fitness for duty and other risk assessments of work load, schedules and rosters. Purpose: For information: to provide an understanding of the place of fatigue detection and prediction technologies in the management of fatigue in drivers of heavy vehicles."}
{"_id": "057dd526f262be89e6ba22bfc12bb89f61537c6b", "title": "\"Eczema coxsackium\" and unusual cutaneous findings in an enterovirus outbreak.", "text": "OBJECTIVE\nTo characterize the atypical cutaneous presentations in the coxsackievirus A6 (CVA6)-associated North American enterovirus outbreak of 2011-2012.\n\n\nMETHODS\nWe performed a retrospective case series of pediatric patients who presented with atypical cases of hand, foot, and mouth disease (HFMD) from July 2011 to June 2012 at 7 academic pediatric dermatology centers. Patients were included if they tested positive for CVA6 or if they met clinical criteria for atypical HFMD (an enanthem or exanthem characteristic of HFMD with unusual morphology or extent of cutaneous findings). We collected demographic, epidemiologic, and clinical data including history of skin conditions, morphology and extent of exanthem, systemic symptoms, and diagnostic test results.\n\n\nRESULTS\nEighty patients were included in this study (median age 1.5 years, range 4 months-16 years). Seventeen patients were CVA6-positive, and 63 met clinical inclusion criteria. Ninety-nine percent of patients exhibited a vesiculobullous and erosive eruption; 61% of patients had rash involving >10% body surface area. The exanthem had a perioral, extremity, and truncal distribution in addition to involving classic HFMD areas such as palms, soles, and buttocks. In 55% of patients, the eruption was accentuated in areas of eczematous dermatitis, termed \"eczema coxsackium.\" Other morphologies included Gianotti-Crosti-like (37%), petechial/purpuric (17%) eruptions, and delayed onychomadesis and palm and sole desquamation. There were no patients with serious systemic complications.\n\n\nCONCLUSIONS\nThe CVA6-associated enterovirus outbreak was responsible for an exanthem potentially more widespread, severe, and varied than classic HFMD that could be confused with bullous impetigo, eczema herpeticum, vasculitis, and primary immunobullous disease."}
{"_id": "3ba631631031863c748cf4e37fd042541db934e3", "title": "Removal Mechanism in Chemical Mechanical Polishing : Theory and Modeling", "text": "The abrasion mechanism in solid-solid contact mode of the chemical mechanical polishing (CMP) process is investigated in detail. Based on assumptions of plastic contact over wafer-abrasive and pad-abrasive interfaces, the normal distribution of abrasive size and an assumed periodic roughness of pad surface, a novel model is developed for material removal in CMP. The basic model is = removed, where is the density of wafer, the number of active abrasives, and removed the volume of material removed by a single abrasive. The model proposed integrates process parameters including pressure and velocity and other important input parameters including the wafer hardness, pad hardness, pad roughness, abrasive size, and abrasive geometry into the same formulation to predict the material removal rate ( ). An interface between the chemical effect and mechanical effect has been constructed through a fitting parameter , a \u201cdynamical\u201d hardness value of the wafer surface, in the model. It reflects the influences of chemicals on the mechanical material removal. The fluid effect in the current model is attributed to the number of active abrasives. It is found that the nonlinear down pressure dependence of material removal rate is related to a probability density function of the abrasive size and the elastic deformation of the pad. Compared with experimental results, the model accurately predicts . With further verification of the model, a better understanding of the fundamental mechanism involved in material removal in the CMP process, particularly different roles played by the consumables and their interactions, can be obtained."}
{"_id": "afdee392c887aae5fc529c815ec88cabaeba6bcf", "title": "Abductive Matching in Question Answering", "text": "We study question-answering over semi-structured data. We introduce a new way to apply the technique of semantic parsing by applying machine learning only to provide annotations that the system infers to be missing; all the other parsing logic is in the form of manually authored rules. In effect, the machine learning is used to provide non-syntactic matches, a step that is ill-suited to manual rules. The advantage of this approach is in its debuggability and in its transparency to the end-user. We demonstrate the effectiveness of the approach by achieving state-of-the-art performance of 40.42% on a standard benchmark dataset over tables from Wikipedia."}
{"_id": "36c54ce6a0da98f5812269afccc3b931d304f6f8", "title": "Academic-Industrial Perspective on the Development and Deployment of a Moderation System for a Newspaper Website", "text": "This paper describes an approach and our experiences from the development, deployment and usability testing of a Natural Language Processing (NLP) and Information Retrieval system that supports the moderation of user comments on a large newspaper website. We highlight some of the differences between industry-oriented and academic research settings and their influence on the decisions made in the data collection and annotation processes, selection of document representation and machine learning methods. We report on classification results, where the problems to solve and the data to work with come from a commercial enterprise. In this context typical for NLP research, we discuss relevant industrial aspects. We believe that the challenges faced as well as the solutions proposed for addressing them can provide insights to others working in a similar setting. Data and experiment code related to this paper are available for download at https://ofai.github.io/million-post-corpus."}
{"_id": "80c363061420aea6d1557c64f2df188b16f32ddd", "title": "Divide and conquer: neuroevolution for multiclass classification", "text": "Neuroevolution is a powerful and general technique for evolving the structure and weights of artificial neural networks. Though neuroevolutionary approaches such as NeuroEvolution of Augmenting Topologies (NEAT) have been successfully applied to various problems including classification, regression, and reinforcement learning problems, little work has explored application of these techniques to larger-scale multiclass classification problems. In this paper, NEAT is evaluated in several multiclass classification problems, and then extended via two ensemble approaches: One-vs-All and One-vs-One. These approaches decompose multiclass classification problems into a set of binary classification problems, in which each binary problem is solved by an instance of NEAT. These ensemble models exhibit reduced variance and increasingly superior accuracy as the number of classes increases. Additionally, higher accuracy is achieved early in training, even when artificially constrained for the sake of fair comparison with standard NEAT. However, because the approach can be trivially distributed, it can be applied quickly at large scale to solve real problems. In fact, these approaches are incorporated into DarwinTM, an enterprise automatic machine learning solution that also incorporates various other algorithmic enhancements to NEAT. The resulting complete system has proven robust to a wide variety of client datasets."}
{"_id": "942327cd70be7c41d7085efac9ba21c3d5077c97", "title": "Homicides with mutilation of the victim's body.", "text": "Information on homicide offenders guilty of mutilation is sparse. The current study estimates the rate of mutilation of the victim's body in Finnish homicides and compares sociodemographic characteristics, crime history, life course development, psychopathy, and psychopathology of these and other homicide offenders. Crime reports and forensic examination reports of all offenders subjected to forensic examination and convicted for a homicide in 1995-2004 (n = 676) were retrospectively analyzed for offense and offender variables and scored with the Psychopathy Check List Revised. Thirteen homicides (2.2%) involved mutilation. Educational and mental health problems in childhood, inpatient mental health contacts, self-destructiveness, and schizophrenia were significantly more frequent in offenders guilty of mutilation. Mutilation bore no significant association with psychopathy or substance abuse. The higher than usual prevalence of developmental difficulties and mental disorder of this subsample of offenders needs to be recognized."}
{"_id": "e75ef7017fc3c9f4e83e643574bc03a27d1c3851", "title": "Why do they still use paper?: understanding data collection and use in Autism education", "text": "Autism education programs for children collect and use large amounts of behavioral data on each student. Staff use paper almost exclusively to collect these data, despite significant problems they face in tracking student data in situ, filling out data sheets and graphs on a daily basis, and using the sheets in collaborative decision making. We conducted fieldwork to understand data collection and use in the domain of autism education to explain why current technology had not met staff needs. We found that data needs are complex and unstandardized, immediate demands of the job interfere with staff ability to collect in situ data, and existing technology for data collection is inadequate. We also identified opportunities for technology to improve sharing and use of data. We found that data sheets are idiosyncratic and not useful without human mediation; improved communication with parents could benefit children's development; and staff are willing, and even eager, to incorporate technology. These factors explain the continued dependence on paper for data collection in this environment, and reveal opportunities for technology to support data collection and improve use of collected data."}
{"_id": "11543c44ed72784c656362b2ef42f7509250a423", "title": "ARSA: a sentiment-aware model for predicting sales performance using blogs", "text": "Due to its high popularity, Weblogs (or blogs in short) present a wealth of information that can be very helpful in assessing the general public's sentiments and opinions. In this paper, we study the problem of mining sentiment information from blogs and investigate ways to use such information for predicting product sales performance. Based on an analysis of the complex nature of sentiments, we propose Sentiment PLSA (S-PLSA), in which a blog entry is viewed as a document generated by a number of hidden sentiment factors. Training an S-PLSA model on the blog data enables us to obtain a succinct summary of the sentiment information embedded in the blogs. We then present ARSA, an autoregressive sentiment-aware model, to utilize the sentiment information captured by S-PLSA for predicting product sales performance. Extensive experiments were conducted on a movie data set. We compare ARSA with alternative models that do not take into account the sentiment information, as well as a model with a different feature selection method. Experiments confirm the effectiveness and superiority of the proposed approach."}
{"_id": "1c30b2ad55a3db9d201a96616cb66dda3bb757bc", "title": "Learning Extraction Patterns for Subjective Expressions", "text": "This paper presents a bootstrapping process that learns linguistically rich extraction patterns for subjective (opinionated) expressions. High-precision classifiers label unannotated data to automatically create a large training set, which is then given to an extraction pattern learning algorithm. The learned patterns are then used to identify more subjective sentences. The bootstrapping process learns many subjective patterns and increases recall while maintaining high precision."}
{"_id": "3e04c16376620a46f65a9dd8199cbd45d224c371", "title": "Crowdsourcing to smartphones: incentive mechanism design for mobile phone sensing", "text": "Mobile phone sensing is a new paradigm which takes advantage of the pervasive smartphones to collect and analyze data beyond the scale of what was previously possible. In a mobile phone sensing system, the platform recruits smartphone users to provide sensing service. Existing mobile phone sensing applications and systems lack good incentive mechanisms that can attract more user participation. To address this issue, we design incentive mechanisms for mobile phone sensing. We consider two system models: the platform-centric model where the platform provides a reward shared by participating users, and the user-centric model where users have more control over the payment they will receive. For the platform-centric model, we design an incentive mechanism using a Stackelberg game, where the platform is the leader while the users are the followers. We show how to compute the unique Stackelberg Equilibrium, at which the utility of the platform is maximized, and none of the users can improve its utility by unilaterally deviating from its current strategy. For the user-centric model, we design an auction-based incentive mechanism, which is computationally efficient, individually rational, profitable, and truthful. Through extensive simulations, we evaluate the performance and validate the theoretical properties of our incentive mechanisms."}
{"_id": "9b1454154e82d61a519987620b47b975b0f8368d", "title": "A Three-Step Vehicle Detection Framework for Range Estimation Using a Single Camera", "text": "This paper proposes and validates a real-time on-road vehicle detection system, which uses a single camera for the purpose of intelligent driver assistance. A three-step vehicle detection framework is presented to detect and track the target vehicle within an image. In the first step, probable vehicle locations are hypothesized using pattern recognition. The vehicle candidates are then verified in the hypothesis verification step. In this step, lane detection is used to filter vehicle candidates that are not within the lane region of interest. In the final step tracking and online learning are implemented to optimize the detection algorithm during misdetection and temporary occlusion. Good detection performance and accuracy was observed in highway driving environments with minimal shadows."}
{"_id": "33bd9454417a01a63ba452b974cb4c265d94d312", "title": "A Blind Baud-Rate ADC-Based CDR", "text": "This paper proposes a 10-Gb/s blind baud-rate ADC-based CDR. The blind baud-rate operation is made possible by using a 2UI integrate-and-dump filter, which creates intentional ISI in adjacent bit periods. The blind samples are interpolated to recover center-of-the-eye samples for a speculative Mueller-Muller PD and a 2-tap DFE operation. A test chip, fabricated in 65-nm CMOS, implements a 10-Gb/s CDR with a measured high-frequency jitter tolerance of 0.19 UIPP and \u00b1300 ppm of frequency offset."}
{"_id": "50a7d2139cb4203160c0da0908291342d8e1ca78", "title": "Three-axis NC milling simulation based on adaptive triangular mesh", "text": "0360-8352/$ see front matter 2010 Elsevier Ltd. A doi:10.1016/j.cie.2010.06.016 * Corresponding author. Tel.: +86 21 67791413; fax E-mail address: mainpowers@gmail.com (J. Mao). NC milling simulation has become an important step in computer aided manufacturing (CAM). To achieve real-time simulation, the total number of polygons has to be reduced, which results in poor image quality. This paper presents an adaptive triangular mesh algorithm to reduce the number of polygons while image quality remains high. Binary tree is used to represent the milling surface, and the optimization of the mesh is performed dynamically in the process of simulation. In this algorithm, the resolution of triangles is automatically updated according to local surface flatness, thus greatly reducing the number of triangles at planar regions. By doing this, real-time and high quality of visual presentation is insured and the translation, rotation and zooming operations are still applicable. When machining precision is evaluated, or overcut, undercut and interference are inspected, full resolution model stored in memory is automatically loaded to ensure the accuracy and correctness of these inspections. Finally, an example is presented to illustrate the validity of proposed algorithm. 2010 Elsevier Ltd. All rights reserved."}
{"_id": "a2a1a84bc6be32d981d8fac5da813a0f179a3f75", "title": "Deep Generative Model using Unregularized Score for Anomaly Detection with Heterogeneous Complexity", "text": "Accurate and automated detection of anomalous samples in a natural image dataset can be accomplished with a probabilistic model for end-to-end modeling of images. Such images have heterogeneous complexity, however, and a probabilistic model overlooks simply shaped objects with small anomalies. This is because the probabilistic model assigns undesirably lower likelihoods to complexly shaped objects that are nevertheless consistent with set standards. To overcome this difficulty, we propose an unregularized score for deep generative models (DGMs), which are generative models leveraging deep neural networks. We found that the regularization terms of the DGMs considerably influence the anomaly score depending on the complexity of the samples. By removing these terms, we obtain an unregularized score, which we evaluated on a toy dataset and real-world manufacturing datasets. Empirical results demonstrate that the unregularized score is robust to the inherent complexity of samples and can be used to better detect anomalies."}
{"_id": "8877f87b0c61f08c86c9e1114fb8ca1a61ab6105", "title": "Surface Plasmon Resonance Imaging Sensors : A Review", "text": "Surface plasmon resonance (SPR) imaging sensors realize label free, real-time, highly sensitive, quantitative, high throughput biological interaction monitoring and the binding profiles from multi-analytes further provide the binding kinetic parameters between different bio-molecules. In the past two decades, SPR imaging sensors found rapid increasing applications in fundamental biological studies, medical diagnostics, drug discovery, food safety, precision measurement and environmental monitoring. In this paper, we review the recent advances of SPR imaging sensor technology towards high throughput multi-analyte screening. Finally, we describe our multiplex colorimetric SPR imaging biosensor based on polarization orientation for high throughput bio-sensing applications."}
{"_id": "9026755081d838609c62878c3deab601090c67da", "title": "Individual differences in non-verbal number acuity correlate with maths achievement", "text": "Human mathematical competence emerges from two representational systems. Competence in some domains of mathematics, such as calculus, relies on symbolic representations that are unique to humans who have undergone explicit teaching. More basic numerical intuitions are supported by an evolutionarily ancient approximate number system that is shared by adults, infants and non-human animals\u2014these groups can all represent the approximate number of items in visual or auditory arrays without verbally counting, and use this capacity to guide everyday behaviour such as foraging. Despite the widespread nature of the approximate number system both across species and across development, it is not known whether some individuals have a more precise non-verbal \u2018number sense\u2019 than others. Furthermore, the extent to which this system interfaces with the formal, symbolic maths abilities that humans acquire by explicit instruction remains unknown. Here we show that there are large individual differences in the non-verbal approximation abilities of 14-year-old children, and that these individual differences in the present correlate with children\u2019s past scores on standardized maths achievement tests, extending all the way back to kindergarten. Moreover, this correlation remains significant when controlling for individual differences in other cognitive and performance factors. Our results show that individual differences in achievement in school mathematics are related to individual differences in the acuity of an evolutionarily ancient, unlearned approximate number sense. Further research will determine whether early differences in number sense acuity affect later maths learning, whether maths education enhances number sense acuity, and the extent to which tertiary factors can affect both."}
{"_id": "475274fd477632da62cf75e2cd61b3e8e8ac3c1a", "title": "Adult hippocampal neurogenesis and its role in Alzheimer's disease", "text": "The hippocampus, a brain area critical for learning and memory, is especially vulnerable to damage at early stages of Alzheimer's disease (AD). Emerging evidence has indicated that altered neurogenesis in the adult hippocampus represents an early critical event in the course of AD. Although causal links have not been established, a variety of key molecules involved in AD pathogenesis have been shown to impact new neuron generation, either positively or negatively. From a functional point of view, hippocampal neurogenesis plays an important role in structural plasticity and network maintenance. Therefore, dysfunctional neurogenesis resulting from early subtle disease manifestations may in turn exacerbate neuronal vulnerability to AD and contribute to memory impairment, whereas enhanced neurogenesis may be a compensatory response and represent an endogenous brain repair mechanism. Here we review recent findings on alterations of neurogenesis associated with pathogenesis of AD, and we discuss the potential of neurogenesis-based diagnostics and therapeutic strategies for AD."}
{"_id": "25a2da00c9572e2e811eda096ebf6a3f8764da87", "title": "Joint SFM and detection cues for monocular 3D localization in road scenes", "text": "We present a system for fast and highly accurate 3D localization of objects like cars in autonomous driving applications, using a single camera. Our localization framework jointly uses information from complementary modalities such as structure from motion (SFM) and object detection to achieve high localization accuracy in both near and far fields. This is in contrast to prior works that rely purely on detector outputs, or motion segmentation based on sparse feature tracks. Rather than completely commit to tracklets generated by a 2D tracker, we make novel use of raw detection scores to allow our 3D bounding boxes to adapt to better quality 3D cues. To extract SFM cues, we demonstrate the advantages of dense tracking over sparse mechanisms in autonomous driving scenarios. In contrast to complex scene understanding, our formulation for 3D localization is efficient and can be regarded as an extension of sparse bundle adjustment to incorporate object detection cues. Experiments on the KITTI dataset show the efficacy of our cues, as well as the accuracy and robustness of our 3D object localization relative to ground truth and prior works."}
{"_id": "57f7390d57d398d60c240bc1bfb023239a8487ae", "title": "Dynamic Traffic Diversion in SDN: testbed vs Mininet", "text": "In this paper, we first propose a simple Dynamic Traffic Diversion (DTD) algorithm for Software Defined Networks (SDN). After implementing the algorithm inside the controller, we then compare the results obtained under two different test environments: 1) a testbed using real Cisco equipment and 2) a network emulation using Mininet. From the results, we get two key messages. First, we can clearly see that dynamically diverting important traffic on a backup path will prevent packet loss and reduce jitter. Finally, the two test environments provide relatively similar results. The small differences could be explained by the early field trial image that was used on the Cisco equipment and by the many setting parameters that are available in both environments."}
{"_id": "0d3950fa9967d74825d7685be052ed55b231347c", "title": "Neural Joint Model for Transition-based Chinese Syntactic Analysis", "text": "We present neural network-based joint models for Chinese word segmentation, POS tagging and dependency parsing. Our models are the first neural approaches for fully joint Chinese analysis that is known to prevent the error propagation problem of pipeline models. Although word embeddings play a key role in dependency parsing, they cannot be applied directly to the joint task in the previous work. To address this problem, we propose embeddings of character strings, in addition to words. Experiments show that our models outperform existing systems in Chinese word segmentation and POS tagging, and perform preferable accuracies in dependency parsing. We also explore bi-LSTM models with fewer features."}
{"_id": "0b181362cb46b50e31ee451c16d669cee9c90860", "title": "A Novel CMOS Envelope Detector Structure", "text": "A novel high performance envelope detector is offered in this work. Proposed circuit holds the signal peaks in two periods and combines them to obtain the envelope of the signal. This way, ripple is fixed by the peak holder and tracking can be improved without the traditional compensation between keeping and tracking required in these circuits. A comparison is offered between a conventional circuit, a previous work and the proposed envelope detector. It is shown the superior performance of the latter obtaining small ripple (<1%), fast settling (0.4mus) and high linearity."}
{"_id": "114d06cd7f6ba4d6fa0501057655a56f38d429e8", "title": "Exploring the Role of Prior Beliefs for Argument Persuasion", "text": "Public debate forums provide a common platform for exchanging opinions on a topic of interest. While recent studies in natural language processing (NLP) have provided empirical evidence that the language of the debaters and their patterns of interaction play a key role in changing the mind of a reader, research in psychology has shown that prior beliefs can affect our interpretation of an argument and could therefore constitute a competing alternative explanation for resistance to changing one\u2019s stance. To study the actual effect of language use vs. prior beliefs on persuasion, we provide a new dataset and propose a controlled setting that takes into consideration two reader-level factors: political and religious ideology. We find that prior beliefs affected by these reader-level factors play a more important role than language use effects and argue that it is important to account for them in NLP studies of persuasion."}
{"_id": "207a737774a54537d804e9353346002dd75b62da", "title": "77 GHz radar scattering properties of pedestrians", "text": "Using radars for detecting and tracking pedestrians has important safety and security applications. Most existing human detection radar systems operate in UHF, X-band, and 24GHz bands. The newly allocated 76-77 GHz frequency band for automobile collision avoidance radar systems has once again raised great interest in pedestrian detection and tracking at these frequencies due to its longer detection range and better location accuracy. The electromagnetic scattering properties must be thoroughly understood and analyzed so a catalog of human scattering can be utilized for intensive automotive radar testing. Measurements of real human subjects at these frequencies is not a trivial task. This paper presents validation between the measured radar cross section (RCS) patterns of various human subjects and a full-wave EM simulation. The RCS of a human is shown to depend on a number of factors, such as posture, body shape, clothing type, etc."}
{"_id": "16b65be3bba1eaf243b262614a142a48c6261e68", "title": "An Evaluation Framework for Plagiarism Detection", "text": "We present an evaluation framework for plagiarism detection.1 The framework provides performance measures that address the specifics of plagiarism detection, and the PAN-PC-10 corpus, which contains 64 558 artificial and 4 000 simulated plagiarism cases, the latter generated via Amazon\u2019s Mechanical Turk. We discuss the construction principles behind the measures and the corpus, and we compare the quality of our corpus to existing corpora. Our analysis gives empirical evidence that the construction of tailored training corpora for plagiarism detection can be automated, and hence be done on a large scale."}
{"_id": "8f95ecbdf27b1e95e5eecd9cfbaf4f5c9820c5f0", "title": "It's always April fools' day!: On the difficulty of social network misinformation classification via propagation features", "text": "Given the huge impact that Online Social Networks (OSN) had in the way people get informed and form their opinion, they became an attractive playground for malicious entities that want to spread misinformation, and leverage their effect. In fact, misinformation easily spreads on OSN, and this is a huge threat for modern society, possibly influencing also the outcome of elections, or even putting people's life at risk (e.g., spreading \"anti-vaccines\" misinformation). Therefore, it is of paramount importance for our society to have some sort of \"validation\" on information spreading through OSN. The need for a wide-scale validation would greatly benefit from automatic tools. In this paper, we show that it is difficult to carry out an automatic classification of misinformation considering only structural properties of content propagation cascades. We focus on structural properties, because they would be inherently difficult to be manipulated, with the the aim of circumventing classification systems. To support our claim, we carry out an extensive evaluation on Facebook posts belonging to conspiracy theories (representative of misinformation), and scientific news (representative of fact-checked content). Our findings show that conspiracy content reverberates in a way which is hard to distinguish from scientific content: for the classification mechanism we investigated, classification F-score never exceeds 0.7."}
{"_id": "411a9b4b1db38379ff33b78eff234f9267bda30a", "title": "Scaling Human-Object Interaction Recognition Through Zero-Shot Learning", "text": "Recognizing human object interactions (HOI) is an important part of distinguishing the rich variety of human action in the visual world. While recent progress has been made in improving HOI recognition in the fully supervised setting, the space of possible human-object interactions is large and it is impractical to obtain labeled training data for all interactions of interest. In this work, we tackle the challenge of scaling HOI recognition to the long tail of categories through a zero-shot learning approach. We introduce a factorized model for HOI detection that disentangles reasoning on verbs and objects, and at test-time can therefore produce detections for novel verb-object pairs. We present experiments on the recently introduced large-scale HICODET dataset, and show that our model is able to both perform comparably to state-of-the-art in fully-supervised HOI detection, while simultaneously achieving effective zeroshot detection of new HOI categories."}
{"_id": "979e9b8ddd64c0251740bd8ff2f65f3c9a1b3408", "title": "Phytochemical screening and Extraction: A Review", "text": "Prashant Tiwari*, Bimlesh Kumar, Mandeep Kaur, Gurpreet Kaur, Harleen Kaur Plants are a source of large amount of drugs comprising to different groups such as antispasmodics, emetics, anti-cancer, antimicrobials etc. A large number of the plants are claimed to possess the antibiotic properties in the traditional system and are also used extensively by the tribal people worldwide. It is now believed that nature has given the cure of every disease in one way or another. Plants have been known to relieve various diseases in Ayurveda. Therefore, the researchers today are emphasizing on evaluation and characterization of various plants and plant constituents against a number of diseases based on their traditional claims of the plants given in Ayurveda. Extraction of the bioactive plant constituents has always been a challenging task for the researchers. In this present review, an attempt has been made to give an overview of certain extractants and extraction processes with their advantages and disadvantages. Department of Pharmaceutical Sciences, Lovely School of Pharmaceutical Sciences, Phagwara, Punjab Date of Submission: 12-01-2011 Date of Acceptance: 22-02-2011 Conflict of interest: Nil Source of support: None"}
{"_id": "bb78cecf9982cd3ae4b1eaaab60d48e06fe304e7", "title": "Relationships between handwriting performance and organizational abilities among children with and without dysgraphia: a preliminary study.", "text": "Organizational ability constitutes one executive function (EF) component essential for common everyday performance. The study aim was to explore the relationship between handwriting performance and organizational ability in school-aged children. Participants were 58 males, aged 7-8 years, 30 with dysgraphia and 28 with proficient handwriting. Group allocation was based on children's scores in the Handwriting Proficiency Screening Questionnaire (HPSQ). They performed the Hebrew Handwriting Evaluation (HHE), and their parents completed the Questionnaire for Assessing Students' Organizational Abilities-for Parents (QASOA-P). Significant differences were found between the groups for handwriting performance (HHE) and organizational abilities (QASOA-P). Significant correlations were found in the dysgraphic group between handwriting spatial arrangement and the QASOA-P mean score. Linear regression indicated that the QASOA-P mean score explained 42% of variance of handwriting proficiency (HPSQ). Based on one discriminant function, 81% of all participants were correctly classified into groups. Study results strongly recommend assessing organizational difficulties in children referred for therapy due to handwriting deficiency."}
{"_id": "928e85a76a346ef5296e6e2e895a03f915cd952f", "title": "FPGA Implementations of the AES Masked Against Power Analysis Attacks", "text": "Power analysis attacks are a serious treat for implementations of modern cryptographic algorithms. Masking is a particularly appealing countermeasure against such attacks since it increases the security to a well quantifiable level and can be implemented without modifying the underlying technology. Its main drawback is the performance overhead it implies. For example, due to prohibitive memory costs, the straightforward application of masking to the AES algorithm, with precomputed tables, is hardly practical. In this paper, we exploit both the increased size of state-of-the-art reconfigurable hardware devices and previous optimization techniques to minimize the memory occupation of software S-boxes, in order to provide an efficient FPGA implementation of the AES algorithm, masked against side-channel attacks. We describe two high throughput architectures, based on 32-bit and 128-bit datapaths that are suitable for Xilinx Virtex-5 devices. In this way, we demonstrate the possibility to efficiently combine technological advances with algorithmic optimizations in this context."}
{"_id": "eadf5023c90a6af8a0f8e8605bd8050cc13c23a3", "title": "The Fifth 'CHiME' Speech Separation and Recognition Challenge: Dataset, Task and Baselines", "text": "The CHiME challenge series aims to advance robust automatic speech recognition (ASR) technology by promoting research at the interface of speech and language processing, signal processing, and machine learning. This paper introduces the 5th CHiME Challenge, which considers the task of distant multimicrophone conversational ASR in real home environments. Speech material was elicited using a dinner party scenario with efforts taken to capture data that is representative of natural conversational speech and recorded by 6 Kinect microphone arrays and 4 binaural microphone pairs. The challenge features a single-array track and a multiple-array track and, for each track, distinct rankings will be produced for systems focusing on robustness with respect to distant-microphone capture vs. systems attempting to address all aspects of the task including conversational language modeling. We discuss the rationale for the challenge and provide a detailed description of the data collection procedure, the task, and the baseline systems for array synchronization, speech enhancement, and conventional and end-to-end ASR."}
{"_id": "a280060d463f1eb81f31fc92371356e6de3988b9", "title": "Detection Method of DNS-based Botnet Communication Using Obtained NS Record History", "text": "To combat with botnet, early detection of the botnet communication and fast identification of the bot-infected PCs is very important for network administrators. However, in DNS protocol, which appears to have been used for botnet communication recently, it is difficult to differentiate the ordinary domain name resolution and suspicious communication. Our key idea is that the most of domain name resolutions first obtain the corresponding NS (Name Server) record from authoritative name servers in the Internet, whereas suspicious communication may omit the procedures to hide their malicious activities. Based on this observation, we propose a detection method of DNS basis botnet communication using obtained NS record history. Our proposed method checks whether the destined name server (IP address) of a DNS query is included in the obtained NS record history to detect the botnet communications."}
{"_id": "a5e93260c0a0a974419bfcda9a25496b311d9d5b", "title": "Algorithmic bias amplifies opinion polarization: A bounded confidence model", "text": "The flow of information reaching us via the online media platforms is optimized not by the information content or relevance but by popularity and proximity to the target. This is typically performed in order to maximise platform usage. As a side effect, this introduces an algorithmic bias that is believed to enhance polarization of the societal debate. To study this phenomenon, we modify the well-known continuous opinion dynamics model of bounded confidence in order to account for the algorithmic bias and investigate its consequences. In the simplest version of the original model the pairs of discussion participants are chosen at random and their opinions get closer to each other if they are within a fixed tolerance level. We modify the selection rule of the discussion partners: there is an enhanced probability to choose individuals whose opinions are already close to each other, thus mimicking the behavior of online media which suggest interaction with similar peers. As a result we observe: a) an increased tendency towards polarization, which emerges also in conditions where the original model would predict convergence, and b) a dramatic slowing down of the speed at which the convergence at the asymptotic state is reached, which makes the system highly unstable. Polarization is augmented by a fragmented initial population. 1 ar X iv :1 80 3. 02 11 1v 1 [ ph ys ic s. so cph ] 6 M ar 2 01 8"}
{"_id": "283b528ca325bd2fede81b5d9273b080f98d18bb", "title": "Automatic License Plate Recognition: a Review", "text": "In recent years, many researches on Intelligent Transportation Systems (ITS) have been reported. Automatic License Plate Recognition (ALPR) is one form of ITS technology that not only recognizes and counts vehicles, but distinguishes each as unique by recognizing and recording the license plate\u2019s characters. This paper discusses the main techniques of ALPR. Several open problems are proposed at the end of the paper for future research."}
{"_id": "d26c517baa9d6acbb826611400019297df2476a9", "title": "Artificial Intelligence and Soft Computing: Behavioral and Cognitive Modeling of the Human Brain", "text": ""}
{"_id": "0ee1916a0cb2dc7d3add086b5f1092c3d4beb38a", "title": "The Pascal Visual Object Classes (VOC) Challenge", "text": "The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension."}
{"_id": "2f93877a1e10ae594846660102c1decabaf33a73", "title": "Detecting readers with dyslexia using machine learning with eye tracking measures", "text": "Worldwide, around 10% of the population has dyslexia, a specific learning disorder. Most of previous eye tracking experiments with people with and without dyslexia have found differences between populations suggesting that eye movements reflect the difficulties of individuals with dyslexia. In this paper, we present the first statistical model to predict readers with and without dyslexia using eye tracking measures. The model is trained and evaluated in a 10-fold cross experiment with a dataset composed of 1,135 readings of people with and without dyslexia that were recorded with an eye tracker. Our model, based on a Support Vector Machine binary classifier, reaches 80.18% accuracy using the most informative features. To the best of our knowledge, this is the first time that eye tracking measures are used to predict automatically readers with dyslexia using machine learning."}
{"_id": "981fef7155742608b8b6673f4a9566158b76cd67", "title": "ImageNet Large Scale Visual Recognition Challenge", "text": ""}
{"_id": "8c03ea8dfe9f2aa5f806d3ce4a7f83671a5db3f5", "title": "Disassortative Degree Mixing and Information Diffusion for Overlapping Community Detection in Social Networks (DMID)", "text": "In this paper we propose a new two-phase algorithm for overlapping community detection (OCD) in social networks. In the first phase, called disassortative degree mixing, we identify nodes with high degrees through a random walk process on the row-normalized disassortative matrix representation of the network. In the second phase, we calculate how closely each node of the network is bound to the leaders via a cascading process called network coordination game. We implemented the algorithm and four additional ones as a Web service on a federated peer-to-peer infrastructure. Comparative test results for small and big real world networks demonstrated the correct identification of leaders, high precision and good time complexity. The Web service is available as open source software."}
{"_id": "ed31a0422cc5580c176d648e6e143b9700a38ce2", "title": "SustData: A Public Dataset for ICT4S Electric Energy Research", "text": "Energy and environmental sustainability can benefit a lot from advances in data mining and machine learning techniques. However, those advances rely on the availability of relevant datasets required to develop, improve and validate new techniques. Only recently the first datasets were made publicly available for the energy and sustainability research community. In this paper we present a freely available dataset containing power usage and related information from 50 homes. Here we describe our dataset, the hardware and software setups used when collecting the data and how others can access it. We then discuss potential uses of this data in the future of energy ecofeedback and demand side management research."}
{"_id": "418e5e5e58cd9cafe802d8b679651f66160d3728", "title": "BlueSky: a cloud-backed file system for the enterprise", "text": "We present BlueSky, a network file system backed by cloud storage. BlueSky stores data persistently in a cloud storage provider such as Amazon S3 or Windows Azure, allowing users to take advantage of the reliability and large storage capacity of cloud providers and avoid the need for dedicated server hardware. Clients access the storage through a proxy running on-site, which caches data to provide lower-latency responses and additional opportunities for optimization. We describe some of the optimizations which are necessary to achieve good performance and low cost, including a log-structured design and a secure in-cloud log cleaner. BlueSky supports multiple protocols\u2014both NFS and CIFS\u2014and is portable to different providers."}
{"_id": "9d23def1f0a13530cce155fa94f1480de6af91ea", "title": "Extreme learning machine: Theory and applications", "text": "It is clear that the learning speed of feedforward neural networks is in general far slower than required and it has been a major bottleneck in their applications for past decades. Two key reasons behind may be: (1) the slow gradient-based learning algorithms are extensively used to train neural networks, and (2) all the parameters of the networks are tuned iteratively by using such learning algorithms. Unlike these conventional implementations, this paper proposes a new learning algorithm called extreme learning machine (ELM) for single-hidden layer feedforward neural networks (SLFNs) which randomly chooses hidden nodes and analytically determines the output weights of SLFNs. In theory, this algorithm tends to provide good generalization performance at extremely fast learning speed. The experimental results based on a few artificial and real benchmark function approximation and classification problems including very large complex applications show that the new algorithm can produce good generalization performance in most cases and can learn thousands of times faster than conventional popular learning algorithms for feedforward neural networks. r 2006 Elsevier B.V. All rights reserved."}
{"_id": "9dd2c8891384b13dc284e3a77cf4da29fb184a24", "title": "Training of Airport Security Screeners", "text": "scientifically based training system that is effective and efficient and allows achieving an excellent level of detection performance. Object recognition is a very complex process but essentially it means to compare visual information to object representations stored in visual memory. The ability to recognize an object class depends on whether itself or similar instance has been stored previously in visual memory. In other words, you can only recognize what you have learned. This explains why training is so important. Identifying the threat items in Fig. 1a and 1b is difficult without training because the objects are depicted in a view that is rather unusual in everyday life. Detecting a bomb such as in Fig. 1c is difficult for untrained people because usually we do not encounter bombs in everyday life. Therefore, a good training system must contain many forbidden objects in many viewpoints in order to train screeners to detect them reliably. Indeed, several studies from our lab and many others worldwide have found that object recognition is often dependent on viewpoint. Moreover, there are numerous studies from neuroscience suggesting that objects are stored in a view-based format in the brain. As you can see in Fig. 2 the hammer, dirk, grenade and gun, which are visible in the bags of Fig. 1a and 1b are indeed much easier to recognize if they are shown in a view that is more often encountered in real life. Because you never know how terrorists place their threat items in a bag, airport security screeners should be trained to detect prohibited items from all kinds of different viewpoints. In a close collaboration with Zurich State Police, Airport division we have Current x-ray machines provide high resolution images, many image processing features and even automatic explosive detection. But the machine is only one half of the whole system. The last and most important decision is always taken by the human operator. In fact, the best and most expensive equipment is of limited use, if a screener finally fails to recognize a threat in the x-ray image. This is of special importance because according to several aviation security experts the human operator is currently the weakest link in airport security. This matter is being realized more and more and several authorities as well as airports are planning to increase investments into a very important element of aviation security: Effective and efficient training of screeners. Indeed, \u2026"}
{"_id": "68167e5e7d8c4477fb0bf570aea3f11a4f4504db", "title": "Chord-PKI: A distributed trust infrastructure based on P2P networks", "text": "Many P2P applications require security services such as privacy, anonymity, authentication, and non-repudiation. Such services could be provided through a hierarchical Public Key Infrastructure. However, P2P networks are usually Internet-scale distributed systems comprised of nodes with undetermined trust level, thus making hierarchical solutions unrealistic. In this paper, we propose Chord-PKI, a distributed PKI architecture which is build upon the Chord overlay network, in order to provide security services for P2P applications. Our solution distributes the functionality of a PKI across the peers, by using threshold cryptography and proactive updating. We analyze the security of the proposed infrastructure and through simulations, we evaluate its performance for various scenarios of untrusted node distributions."}
{"_id": "ae2eecbe2d5a4365107dc2e4b8a2dcbd0b3938b7", "title": "Ip = Pspace", "text": "In this paper, it is proven that when both randomization and interaction are allowed, the proofs that can be verified in polynomial time are exactly those proofs that can be generated with polynomial space."}
{"_id": "a10b7a2c80e2bae49d46b980ed03c074fe36bb2a", "title": "The International Exascale Software Project roadmap", "text": "Over the last 20 years, the open-source community has provided more and more software on which the world\u2019s highperformance computing systems depend for performance and productivity. The community has invested millions of dollars and years of effort to build key components. However, although the investments in these separate software elements have been tremendously valuable, a great deal of productivity has also been lost because of the lack of planning, coordination, and key integration of technologies necessary to make them work together smoothly and efficiently, both within individual petascale systems and between different systems. It seems clear that this completely uncoordinated development model will not provide the software needed to support the unprecedented parallelism required for peta/ exascale computation on millions of cores, or the flexibility required to exploit new hardware models and features, such as transactional memory, speculative execution, and graphics processing units. This report describes the work of the community to prepare for the challenges of exascale computing, ultimately combing their efforts in a coordinated International Exascale Software Project."}
{"_id": "0b563ad5fdea6923d2ee91f9ee4ad3d6dcdf9cd0", "title": "Steering Behaviors For Autonomous Characters", "text": "This paper presents solutions for one requirement of autonomous characters in animation and games: the ability to navigate around their world in a life-like and improvisational manner. These \u201csteering behaviors\u201d are largely independent of the particulars of the character\u2019s means of locomotion. Combinations of steering behaviors can be used to achieve higher level goals (For example: get from here to there while avoiding obstacles, follow this corridor, join that group of characters...) This paper divides motion behavior into three levels. It will focus on the middle level of steering behaviors, briefly describe the lower level of locomotion, and touch lightly on the higher level of goal setting and strategy."}
{"_id": "78c7428e1d1c6c27aec8a589b97002c95de391ed", "title": "Compositional Distributional Semantics with Compact Closed Categories and Frobenius Algebras", "text": "The provision of compositionality in distributional models of meaning, where a word is represented as a vector of co-occurrence counts with every other word in the vocabulary, offers a solution to the fact that no text corpus, regardless of its size, is capable of providing reliable co-occurrence statistics for anything but very short text constituents. The purpose of a compositional distributional model is to provide a function that composes the vectors for the words within a sentence, in order to create a vectorial representation that reflects its meaning. Using the abstract mathematical framework of category theory, Coecke, Sadrzadeh and Clark showed that this function can directly depend on the grammatical structure of the sentence, providing an elegant mathematical counterpart of the formal semantics view. The framework is general and compositional but stays abstract to a large extent. This thesis contributes to ongoing research related to the above categorical model in three ways: Firstly, I propose a concrete instantiation of the abstract framework based on Frobenius algebras (joint work with Sadrzadeh). The theory improves shortcomings of previous proposals, extends the coverage of the language, and is supported by experimental work that improves existing results. The proposed framework describes a new class of compositional models that find intuitive interpretations for a number of linguistic phenomena. Secondly, I propose and evaluate in practice a new compositional methodology which explicitly deals with the different levels of lexical ambiguity (joint work with Pulman). A concrete algorithm is presented, based on the separation of vector disambiguation from composition in an explicit prior step. Extensive experimental work shows that the proposed methodology indeed results in more accurate composite representations for the framework of Coecke et al. in particular and every other class of compositional models in general. As a last contribution, I formalize the explicit treatment of lexical ambiguity in the context of the categorical framework by resorting to categorical quantum mechanics (joint work with Coecke). In the proposed extension, the concept of a distributional vector is replaced with that of a density matrix, which compactly represents a probability distribution over the potential different meanings of the specific word. Composition takes the form of quantum measurements, leading to interesting analogies between quantum physics and linguistics."}
{"_id": "e48f224826da49e369107db51d33d201e5e8a3ee", "title": "Tibetan-Chinese Cross Language Text Similarity Calculation Based onLDA Topic Model", "text": "Topic model building is the basis and the most critical module of cross-language topic detection and tracking. Topic model also can be applied to cross-language text similarity calculation. It can improve the efficiency and the speed of calculation by reducing the texts\u2019 dimensionality. In this paper, we use the LDA model in cross-language text similarity computation to obtain Tibetan-Chinese comparable corpora: (1) Extending Tibetan-Chinese dictionary by extracting Tibetan-Chinese entities from Wikipedia. (2) Using topic model to make the texts mapped to the feature space of topics. (3) Calculating the similarity of two texts in different language according to the characteristics of the news text. The method for text similarity calculation based on LDA model reduces the dimensions of text space vector, and enhances the understanding of the text\u2019s semantics. It also improves the speed and efficiency of calculation."}
{"_id": "a48ba720733efa324ed247e3fea1d9d2cb08336e", "title": "The emotional brain", "text": "The discipline of affective neuroscience is concerned with the neural bases of emotion and mood. The past 30 years have witnessed an explosion of research in affective neuroscience that has addressed questions such as: which brain systems underlie emotions? How do differences in these systems relate to differences in the emotional experience of individuals? Do different regions underlie different emotions, or are all emotions a function of the same basic brain circuitry? How does emotion processing in the brain relate to bodily changes associated with emotion? And, how does emotion processing in the brain interact with cognition, motor behaviour, language and motivation?"}
{"_id": "408b8d34b7467c0b25b27fdafa77ee241ce7f4c4", "title": "Fast in-memory transaction processing using RDMA and HTM", "text": "We present DrTM, a fast in-memory transaction processing system that exploits advanced hardware features (i.e., RDMA and HTM) to improve latency and throughput by over one order of magnitude compared to state-of-the-art distributed transaction systems. The high performance of DrTM are enabled by mostly offloading concurrency control within a local machine into HTM and leveraging the strong consistency between RDMA and HTM to ensure serializability among concurrent transactions across machines. We further build an efficient hash table for DrTM by leveraging HTM and RDMA to simplify the design and notably improve the performance. We describe how DrTM supports common database features like read-only transactions and logging for durability. Evaluation using typical OLTP workloads including TPC-C and SmallBank show that DrTM scales well on a 6-node cluster and achieves over 5.52 and 138 million transactions per second for TPC-C and SmallBank Respectively. This number outperforms a state-of-the-art distributed transaction system (namely Calvin) by at least 17.9X for TPC-C."}
{"_id": "aa75853bfbc3801c2e936a69651f0b22cff7229b", "title": "Teaching theory of mind: a new approach to social skills training for individuals with autism.", "text": "This study examined the effectiveness of a social skills training program for normal-IQ adolescents with autism. Five boys participated in the 4 1/2-month treatment condition; four boys matched on age, IQ, and severity of autism constituted the no-treatment control group. In addition to teaching specific interactional and conversational skills, the training program provided explicit and systematic instruction in the underlying social-cognitive principles necessary to infer the mental states of others (i.e., theory of mind). Pre- and post-intervention assessment demonstrated meaningful change in the treatment group's performance on several false belief tasks, but no improvement in the control sample. No changes, however, were demonstrated on general parent and teacher ratings of social competence for either group."}
{"_id": "7e31e59f7d7e40041ee2b0bdb2f4ce3dd27cc9e7", "title": "A GA based approach for task scheduling in multi-cloud environment", "text": "In multi-cloud environment, task scheduling has attracted a lot of attention due to NP-Complete nature of the problem. Moreover, it is very challenging due to heterogeneity of the cloud resources with varying capacities and functionalities. Therefore, minimizing the makespan for task scheduling is a challenging issue. In this paper, we propose a genetic algorithm (GA) based approach for solving task scheduling problem. The algorithm is described with innovative idea of fitness function derivation and mutation. The proposed algorithm is exposed to rigorous testing using various benchmark datasets and its performance is evaluated in terms of total makespan."}
{"_id": "6029ecc36dfa3d55a8a482c0d415776bda80b195", "title": "Kernel auto-encoder for semi-supervised hashing", "text": "Hashing-based approaches have gained popularity for large-scale image retrieval in recent years. It has been shown that semi-supervised hashing, which incorporates similarity/dissimilarity information into hash function learning could improve the hashing quality. In this paper, we present a novel kernel-based semi-supervised binary hashing model for image retrieval by taking into account auxiliary information, i.e., similar and dissimilar data pairs in achieving high quality hashing. The main idea is to map the data points into a highly non-linear feature space and then map the non-linear features into compact binary codes such that similar/dissimilar data points have similar/dissimilar hash codes. Empirical evaluations on three benchmark datasets demonstrate the superiority of the proposed method over several existing unsupervised and semi-supervised hash function learning methods."}
{"_id": "67bf0b6bc7d09b0fe7a97469f786e26f359910ef", "title": "Abnormal use of facial information in high-functioning autism.", "text": "Altered visual exploration of faces likely contributes to social cognition deficits seen in autism. To investigate the relationship between face gaze and social cognition in autism, we measured both face gaze and how facial regions were actually used during emotion judgments from faces. Compared to IQ-matched healthy controls, nine high-functioning adults with autism failed to make use of information from the eye region of faces, instead relying primarily on information from the mouth. Face gaze accounted for the increased reliance on the mouth, and partially accounted for the deficit in using information from the eyes. These findings provide a novel quantitative assessment of how people with autism utilize information in faces when making social judgments."}
{"_id": "66726950d756256ca479a068664ecb052b048991", "title": "Tracking the Evolution of the Internet of Things Concept Across Different Application Domains", "text": "Both the idea and technology for connecting sensors and actuators to a network to remotely monitor and control physical systems have been known for many years and developed accordingly. However, a little more than a decade ago the concept of the Internet of Things (IoT) was coined and used to integrate such approaches into a common framework. Technology has been constantly evolving and so has the concept of the Internet of Things, incorporating new terminology appropriate to technological advances and different application domains. This paper presents the changes that the IoT has undertaken since its conception and research on how technological advances have shaped it and fostered the arising of derived names suitable to specific domains. A two-step literature review through major publishers and indexing databases was conducted; first by searching for proposals on the Internet of Things concept and analyzing them to find similarities, differences, and technological features that allow us to create a timeline showing its development; in the second step the most mentioned names given to the IoT for specific domains, as well as closely related concepts were identified and briefly analyzed. The study confirms the claim that a consensus on the IoT definition has not yet been reached, as enabling technology keeps evolving and new application domains are being proposed. However, recent changes have been relatively moderated, and its variations on application domains are clearly differentiated, with data and data technologies playing an important role in the IoT landscape."}
{"_id": "478cecc85fdad9b1f6ba9a48eff35e76ea3f0002", "title": "Online sketching hashing", "text": "Recently, hashing based approximate nearest neighbor (ANN) search has attracted much attention. Extensive new algorithms have been developed and successfully applied to different applications. However, two critical problems are rarely mentioned. First, in real-world applications, the data often comes in a streaming fashion but most of existing hashing methods are batch based models. Second, when the dataset becomes huge, it is almost impossible to load all the data into memory to train hashing models. In this paper, we propose a novel approach to handle these two problems simultaneously based on the idea of data sketching. A sketch of one dataset preserves its major characters but with significantly smaller size. With a small size sketch, our method can learn hash functions in an online fashion, while needs rather low computational complexity and storage space. Extensive experiments on two large scale benchmarks and one synthetic dataset demonstrate the efficacy of the proposed method."}
{"_id": "4bf25bc8d2727766ffd11b6994b1a56b1333c2eb", "title": "4 RFID-enabled Supply Chain Traceability : Existing Methods , Applications and Challenges", "text": "Radio Frequency Identification (RFID) technology promises to revolutionize various areas in supply chain. Recently, many researchers have investigated on how to improve the ability to track and trace a specific product along the supply chain in terms of both effectiveness and efficiency with the help of this technology. To enable traceability over the entire lifecycle a robust and seamless traceability system has to be constructed. This requires for the following three elements: (1) data model and storage scheme that allows unique identification and scalable database, (2) system framework which enables to share the traceability data between trading partners while maintaining a sovereignty over what is shared and with whom, and (3) a tracing mechanism in order to achieve end-to-end traceability and to provide the history information of products in question. Along with the studies addressing the requirements and design of traceability system architecture, applications in the real environment have also been reported. Due to the strong regulation in EU which states that food business operators shall be able to identify any person who supplied them and any business which takes food from them, RFID-enabled traceability systems are well implemented in the food supply chain. However, there exist other industries adopting RFID to enhance traceability such as pharmaceutical and aviation and even in the continuous process industry like iron ore refining. Despite the promising nature of RFID in tracking, there are several challenges to be addressed. Since an RFID tag does not require line-of-sight, multiple tags can be read simultaneously but also tag collisions may occur. Therefore there is no guarantee that a tag will be continuously detected on consecutive scans. Moreover, the use of RFID tags can be of serious threat to the privacy of information. This may facilitate the espionage of unauthorized personnel. In this chapter, we analyze the main issues of RFID-enabled traceability along the supply chain mentioned above: existing methods, applications and future challenges. Section 2 starts with pointing out the characteristics of RFID data and the requirements for RFIDenabled traceability. Subsequently, we introduce data types, storage schemes and system frameworks proposed in the existing literatures. Then, we discuss tracing methods based on the traceability system architecture. Section 3 presents current applications in real settings of both discrete and continuous production. We also discuss challenges that are preventing companies from adopting RFID for their traceability solutions in section 4. Finally, we conclude our study in section 5."}
{"_id": "9509c435260fce9dbceaf44b52791ac8fc5343bf", "title": "A Survey on Image Classification Approaches and Techniques", "text": "Object Classification is an important task within the field of computer vision. Image classification refers to the labelling of images into one of a number of predefined categories. Classification includes image sensors, image pre-processing, object detection, object segmentation, feature extraction and object classification.Many classification techniques have been developed for image classification. In this survey various classification techniques are considered; Artificial Neural Network(ANN), Decision Tree(DT), Support Vector Machine(SVM) and Fuzzy Classification"}
{"_id": "a48c71153265d6da7fbc4b16327320a5cbfa6cba", "title": "Unite the People : Closing the loop between 3 D and 2 D Human Representations Supplementary Material", "text": "We have obtained human segmentation labels to integrate shape information into the SMPLify 3D fitting procedure and for the evaluation of methods introduced in the main paper. The labels consist of foreground segmentation for multiple human pose datasets and six body part segmentation for the LSP dataset. Whereas we discuss their use in the context of the UP dataset in the main paper, we discuss the annotation tool that we used for the collection (see Sec. 2.1) as well as the direct use of the human labels for model training (see Sec. 2.2) in this document."}
{"_id": "9311c5d48dbb57cc8cbc226dd517f09ca18207b7", "title": "Mass Cytometry: Single Cells, Many Features", "text": "Technology development in biological research often aims to either increase the number of cellular features that can be surveyed simultaneously or enhance the resolution at which such observations are possible. For decades, flow cytometry has balanced these goals to fill a critical need by enabling the measurement of multiple features in single cells, commonly to examine complex or hierarchical cellular systems. Recently, a format for flow cytometry has been developed that leverages the precision of mass spectrometry. This fusion of the two technologies, termed mass cytometry, provides measurement of over 40 simultaneous cellular parameters at single-cell resolution, significantly augmenting the ability of cytometry to evaluate complex cellular systems and processes. In this Primer, we review the current state of mass cytometry, providing an overview of the instrumentation, its present capabilities, and methods of data analysis, as well as thoughts on future developments and applications."}
{"_id": "54d0d36eb833b1ffed905f25e06abc3dc6db233b", "title": "Manga content analysis using physiological signals", "text": "Recently, the physiological signals have been analyzed more and more, especially in the context of everyday life activities such as watching video or looking at pictures. Tracking these signals gives access to the mental state of the user (interest, tiredness, stress) but also to his emotions (sadness, fright, happiness). The analysis of the reader's physiological signals during reading can provide a better understanding of the reader's feelings but also a better understanding of the documents. Our main research direction is to find the relationship between a change in the reader's physiological signal and the content of the reading. As a first step, we investigate whether it is possible to distinguish a manga (Japanese comic book) from another by analyzing the physiological signals of the reader. We use 3 different manga genres (horror, romance, comedy) and try to predict which one is read by analyzing the features extracted from the physiological signals of the reader. Our method uses the blood volume pulse, the electrodermal activity and the skin temperature of the reader while reading. We show that by using these physiological signals with a support vector machine we can retrieve which manga has been read with a 90% average accuracy."}
{"_id": "404abe4a6b47cb210512b7ba10c155dda6331585", "title": "Blood monocytes: development, heterogeneity, and relationship with dendritic cells.", "text": "Monocytes are circulating blood leukocytes that play important roles in the inflammatory response, which is essential for the innate response to pathogens. But inflammation and monocytes are also involved in the pathogenesis of inflammatory diseases, including atherosclerosis. In adult mice, monocytes originate in the bone marrow in a Csf-1R (MCSF-R, CD115)-dependent manner from a hematopoietic precursor common for monocytes and several subsets of macrophages and dendritic cells (DCs). Monocyte heterogeneity has long been recognized, but in recent years investigators have identified three functional subsets of human monocytes and two subsets of mouse monocytes that exert specific roles in homeostasis and inflammation in vivo, reminiscent of those of the previously described classically and alternatively activated macrophages. Functional characterization of monocytes is in progress in humans and rodents and will provide a better understanding of the pathophysiology of inflammation."}
{"_id": "2283a70e539feea50f2e888863f68c2410082e0c", "title": "Participatory networking: an API for application control of SDNs", "text": "We present the design, implementation, and evaluation of an API for applications to control a software-defined network (SDN). Our API is implemented by an OpenFlow controller that delegates read and write authority from the network's administrators to end users, or applications and devices acting on their behalf. Users can then work with the network, rather than around it, to achieve better performance, security, or predictable behavior. Our API serves well as the next layer atop current SDN stacks. Our design addresses the two key challenges: how to safely decompose control and visibility of the network, and how to resolve conflicts between untrusted users and across requests, while maintaining baseline levels of fairness and security. Using a real OpenFlow testbed, we demonstrate our API's feasibility through microbenchmarks, and its usefulness by experiments with four real applications modified to take advantage of it."}
{"_id": "3f623aed8de2c45820eab2e4591b1207b3a65c13", "title": "What happens when HTTP adaptive streaming players compete for bandwidth?", "text": "With an increasing demand for high-quality video content over the Internet, it is becoming more likely that two or more adaptive streaming players share the same network bottleneck and compete for available bandwidth. This competition can lead to three performance problems: player instability, unfairness between players, and bandwidth underutilization. However, the dynamics of such competition and the root cause for the previous three problems are not yet well understood. In this paper, we focus on the problem of competing video players and describe how the typical behavior of an adaptive streaming player in its Steady-State, which includes periods of activity followed by periods of inactivity (ON-OFF periods), is the main root cause behind the problems listed above. We use two adaptive players to experimentally showcase these issues. Then, focusing on the issue of player instability, we test how several factors (the ON-OFF durations, the available bandwidth and its relation to available bitrates, and the number of competing players) affect stability."}
{"_id": "4b55ece626f3aca49ea7774f62b22d2a0e18201f", "title": "Towards network-wide QoE fairness using openflow-assisted adaptive video streaming", "text": "Video streaming is an increasingly popular way to consume media content. Adaptive video streaming is an emerging delivery technology which aims to increase user QoE and maximise connection utilisation. Many implementations naively estimate bandwidth from a one-sided client perspective, without taking into account other devices in the network. This behaviour results in unfairness and could potentially lower QoE for all clients. We propose an OpenFlow-assisted QoE Fairness Framework that aims to fairly maximise the QoE of multiple competing clients in a shared network environment. By leveraging a Software Defined Networking technology, such as OpenFlow, we provide a control plane that orchestrates this functionality. The evaluation of our approach in a home networking scenario introduces user-level fairness and network stability, and illustrates the optimisation of QoE across multiple devices in a network."}
{"_id": "1f0ea586a80833ee7b27ada93cc751449c4a3cdf", "title": "A network in a laptop: rapid prototyping for software-defined networks", "text": "Mininet is a system for rapidly prototyping large networks on the constrained resources of a single laptop. The lightweight approach of using OS-level virtualization features, including processes and network namespaces, allows it to scale to hundreds of nodes. Experiences with our initial implementation suggest that the ability to run, poke, and debug in real time represents a qualitative change in workflow. We share supporting case studies culled from over 100 users, at 18 institutions, who have developed Software-Defined Networks (SDN). Ultimately, we think the greatest value of Mininet will be supporting collaborative network research, by enabling self-contained SDN prototypes which anyone with a PC can download, run, evaluate, explore, tweak, and build upon."}
{"_id": "23e1cd65fc01e8dfe3beecaa07484a279ad396de", "title": "Network characteristics of video streaming traffic", "text": "Video streaming represents a large fraction of Internet traffic. Surprisingly, little is known about the network characteristics of this traffic. In this paper, we study the network characteristics of the two most popular video streaming services, Netflix and YouTube. We show that the streaming strategies vary with the type of the application (Web browser or native mobile application), and the type of container (Silverlight, Flash, or HTML5) used for video streaming. In particular, we identify three different streaming strategies that produce traffic patterns from non-ack clocked ON-OFF cycles to bulk TCP transfer. We then present an analytical model to study the potential impact of these streaming strategies on the aggregate traffic and make recommendations accordingly."}
{"_id": "0938ee6e489b9ac8fcc78df9b75e5395a734d357", "title": "Software architecture: a roadmap", "text": "Over the past decade software architecture has received increasing attention as an important subfield of software engineering. During that time there has been considerable progress in developing the technological and methodological base for treating architectural design as an engineering discipline. However, much remains to be done to achieve that goal. Moreover, the changing face of technology raises a number of new challenges for software architecture. This paper examines some of the important trends of software architecture in research and practice, and speculates on the important emerging trends, challenges, and aspirations."}
{"_id": "f4150e2fb4d8646ebc2ea84f1a86afa1b593239b", "title": "Threat detection in online discussions", "text": "This paper investigates the effect of various types of linguistic features (lexical, syntactic and semantic) for training classifiers to detect threats of violence in a corpus of YouTube comments. Our results show that combinations of lexical features outperform the use of more complex syntactic and semantic features for this task."}
{"_id": "a3819dda9a5f00dbb8cd3413ca7422e37a0d5794", "title": "Watersheds in Digital Spaces: An Efficient Algorithm Based on Immersion Simulations", "text": "In this paper, a fast and flexible algorithm for computing watersheds in digital grayscale images is introduced. A review of watersheds and related notion is first presented, and the major methods to determine watersheds are discussed. The present algorithm is based on an immersion process analogy, in which the flooding of the water in the picture is efficiently simulated using a queue of pixels. It is described in detail and provided in a pseudo C language. We prove the accuracy of this algorithm is superior to that of the existing implementations. Furthermore, it is shown that its adaptation to any kind of digital grid and its generalization to n-dimensional images and even to graphs are straightforward. In addition, its strongest point is that it is faster than any other watershed algorithm. Applications of this algorithm with regard to picture segmentation are presented for MR imagery and for digital elevation models. An example of 3-D watershed is also provided. Lastly, some ideas are given on how to solve complex segmentation tasks using watersheds on graphs. Zndex Terms-Algorithm, digital image, FIFO structure, graph, grid, mathematical morphology, picture segmentation, watersheds."}
{"_id": "6d21e6fff7c7de7e4758cf02b5933e7ad987a718", "title": "Key Challenges for the Smart City: Turning Ambition into Reality", "text": "Smart city is a label internationally used by cities, researchers and technology providers with different meanings. As a popular concept it is widely used by city administrators and politicians to promote their efforts. It is hard enough to find a good definition for smart cities, but even harder to find a trustworthy description of what it takes to become a smart city and how a city administration is impacted. This paper sets out to investigate how a city, aspiring to become a 'smart city', can manage the organization to realize that ambition. Specifically, the paper describes the case of the City of Ghent, Belgium, and the key challenges it has been facing in its ongoing efforts to be a smart city. Based on in depth interviews with city representatives six key challenges for smart city realization were identified and tested with a panel of representatives from five European cities that are in the process of becoming a smart city. This way, the study contributes to a more professional pursuit of the smart city concept."}
{"_id": "a6eb10b1d30b4547b04870a82ec0c65baf2198f8", "title": "Big Data Management Assessed Coursework Two Big Data vs Semantic Web F 21 BD", "text": ""}
{"_id": "712194bd90119fd9f059025fd41b72ed12e5cc32", "title": "A multiscale random field model for Bayesian image segmentation", "text": "Many approaches to Bayesian image segmentation have used maximum a posteriori (MAP) estimation in conjunction with Markov random fields (MRF). Although this approach performs well, it has a number of disadvantages. In particular, exact MAP estimates cannot be computed, approximate MAP estimates are computationally expensive to compute, and unsupervised parameter estimation of the MRF is difficult. The authors propose a new approach to Bayesian image segmentation that directly addresses these problems. The new method replaces the MRF model with a novel multiscale random field (MSRF) and replaces the MAP estimator with a sequential MAP (SMAP) estimator derived from a novel estimation criteria. Together, the proposed estimator and model result in a segmentation algorithm that is not iterative and can be computed in time proportional to MN where M is the number of classes and N is the number of pixels. The also develop a computationally efficient method for unsupervised estimation of model parameters. Simulations on synthetic images indicate that the new algorithm performs better and requires much less computation than MAP estimation using simulated annealing. The algorithm is also found to improve classification accuracy when applied to the segmentation of multispectral remotely sensed images with ground truth data."}
{"_id": "db9bbebbe0c4dab997583b2a3de2bca6b2217af9", "title": "A systemic and cognitive view on collaborative knowledge building with wikis", "text": "Wikis provide new opportunities for learning and for collaborative knowledge building as well as for understanding these processes. This article presents a theoretical framework for describing how learning and collaborative knowledge building take place. In order to understand these processes, three aspects need to be considered: the social processes facilitated by a wiki, the cognitive processes of the users, and how both processes influence each other mutually. For this purpose, the model presented in this article borrows from the systemic approach of Luhmann as well as from Piaget\u2019s theory of equilibration and combines these approaches. The model analyzes processes which take place in the social system of a wiki as well as in the cognitive systems of the users. The model also describes learning activities as processes of externalization and internalization. Individual learning happens through internal processes of assimilation and accommodation, whereas changes in a wiki are due to activities of external assimilation and accommodation which in turn lead to collaborative knowledge building. This article provides empirical examples for these equilibration activities by analyzing Wikipedia articles. Equilibration activities are described as being caused by subjectively perceived incongruities between an individuals\u2019 knowledge and the information provided by a wiki. Incongruities of medium level cause cognitive conflicts which in turn activate the described processes of equilibration and facilitate individual learning and collaborative knowledge building."}
{"_id": "a68228a6108ac558bed194d098b02a2c90b58e0f", "title": "Anaerobic biotechnology for industrial wastewater treatment.", "text": "Microbiological formation of methane has been occurring naturally for ages in such diverse habitats as marshes, rice paddies, benthic deposits, deep ocean trenches, hot springs, trees, cattle, pigs, iguanas, termites, and human beings (Mah and Smith, 1981; Steggerda and Dimmick, 1966; Prins, 1979; Balch et al., 1979). In the past decade, interest in anaerobic biotechnology has grown considerably, both in the harnessing of the process for industrial wastewater treatment and in the bioconversion of crop-grown biomass to methane (Sheridan, 1982; Chynoweth and Srivastava, 1980)."}
{"_id": "b2165397df5c5b9dda23b74c7c592aaec30bf9ec", "title": "Biometric Template Protection: Bridging the performance gap between theory and practice", "text": "Biometric recognition is an integral component of modern identity management and access control systems. Due to the strong and permanent link between individuals and their biometric traits, exposure of enrolled users' biometric information to adversaries can seriously compromise biometric system security and user privacy. Numerous techniques have been proposed for biometric template protection over the last 20 years. While these techniques are theoretically sound, they seldom guarantee the desired noninvertibility, revocability, and nonlinkability properties without significantly degrading the recognition performance. The objective of this work is to analyze the factors contributing to this performance divide and highlight promising research directions to bridge this gap. The design of invariant biometric representations remains a fundamental problem, despite recent attempts to address this issue through feature adaptation schemes. The difficulty in estimating the statistical distribution of biometric features not only hinders the development of better template protection algorithms but also diminishes the ability to quantify the noninvertibility and nonlinkability of existing algorithms. Finally, achieving nonlinkability without the use of external secrets (e.g., passwords) continues to be a challenging proposition. Further research on the above issues is required to cross the chasm between theory and practice in biometric template protection."}
{"_id": "a2a3c8dcc38e6f8708e03413bdff6cd6c7c6e0d3", "title": "Lava flow Simulation with Cellular Automata: Applications for Civil Defense and Land Use Planning", "text": "The determination of areas exposed to new eruptive events in volcanic regions is crucial for diminishing consequences in terms of human causalities and damages of material properties. In this paper, we illustrate a methodology for defining flexible high-detailed lava invasion hazard maps which is based on an robust and efficient Cellular Automata model for simulating lava flows. We also present some applications for land use planning and civil defense to some inhabited areas of Mt Etna (South Italy), Europe\u2019s most active volcano, showing the methodology\u2019s appropriateness."}
{"_id": "680e7d3f1400c4f3ebd587290f8742cba4542599", "title": "Asynchronous Event-Based Hebbian Epipolar Geometry", "text": "Epipolar geometry, the cornerstone of perspective stereo vision, has been studied extensively since the advent of computer vision. Establishing such a geometric constraint is of primary importance, as it allows the recovery of the 3-D structure of scenes. Estimating the epipolar constraints of nonperspective stereo is difficult, they can no longer be defined because of the complexity of the sensor geometry. This paper will show that these limitations are, to some extent, a consequence of the static image frames commonly used in vision. The conventional frame-based approach suffers from a lack of the dynamics present in natural scenes. We introduce the use of neuromorphic event-based-rather than frame-based-vision sensors for perspective stereo vision. This type of sensor uses the dimension of time as the main conveyor of information. In this paper, we present a model for asynchronous event-based vision, which is then used to derive a general new concept of epipolar geometry linked to the temporal activation of pixels. Practical experiments demonstrate the validity of the approach, solving the problem of estimating the fundamental matrix applied, in a first stage, to classic perspective vision and then to more general cameras. Furthermore, this paper shows that the properties of event-based vision sensors allow the exploration of not-yet-defined geometric relationships, finally, we provide a definition of general epipolar geometry deployable to almost any visual sensor."}
{"_id": "0e036ebbabb6683c03f6bc10adb58724a8644d1c", "title": "Invalid retro-cues can eliminate the retro-cue benefit: Evidence for a hybridized account.", "text": "The contents of visual working memory (VWM) are capacity limited and require frequent updating. The retrospective cueing (retro-cueing) paradigm clarifies how directing internal attention among VWM items boosts VWM performance. In this paradigm a cue appears prior to retrieval, but after encoding and maintenance. The retro-cue effect (RCE) refers to superior VWM after valid versus neutral retro-cues. Here we investigated the effect of the invalid retro-cues' inclusion on VWM performance. We conducted 2 pairs of experiments, changing both probe type (recognition and recall) as well as presence and absence of invalid retro-cue trials. Furthermore, to fully characterize these effects over time, we used extended post-retro-cue delay durations. In the first set of experiments, probing VWM using recognition indicated that the RCE remained consistent in magnitude with or without invalid retro-cue trials. In the second set of experiments, VWM was probed with recall. Here, the RCE was eliminated when invalid retro-cues were included. This finer-grained measure of VWM fidelity showed that all items were subject to decay over time. We conclude that the invalid retro-cues impaired the protection of validly cues items, but they remain accessible, suggesting greater concordance with a prioritization account."}
{"_id": "40e06608324781f6de425617a870a103d4233d5c", "title": "Macro process of knowledge management for continuous innovation", "text": "Purpose The purpose of this research is to understand the mechanisms of knowledge management (KM) for innovation and provide an approach for enterprises to leverage KM activities into continuous innovation. Design/methodology/approach \u2013 By reviewing the literature from multidisciplinary fields, the concepts of knowledge, KM and innovation are investigated. The physical, human and technological perspectives of KM are distinguished with the identification of two core activities for innovation: knowledge creation and knowledge usage. Then an essential requirement for continuous innovation \u2013an internalization phase is defined. The systems thinking and human-centred perspectives are adopted for providing a comprehensive understanding about the mechanisms of KM for innovation. Findings \u2013 A networking process of continuous innovation based on KM is proposed by incorporating the phase of internalization. According to the three perspectives of KM, three sources of organizational knowledge assets in innovation are identified. Then based on the two core activities of innovation, a meta-model and a macro process of KM are proposed to model the mechanisms of KM for continuous innovation. Then, in order to operationalize the KM mechanisms, a hierarchical model is constructed by integrating three sources of knowledge assets, the meta-model and the macro process into the process of continuous innovation. This model decomposes the complex relationships between knowledge and innovation into four layers. Practical implications \u2013 According to the lessons learned about KM practices in previous research, the three perspectives of KM should collaborate with each other for successful implementation of KM projects for innovation; and the hierarchical model provides a suitable architecture to implement systems of KM for innovation. Originality/value \u2013 The meta-model and macro process of KM explain how the next generation of KM can help the value creation and support the continuous innovation from the systems thinking perspective. The hierarchical model illustrates the complicate knowledge dynamics in the process of continuous innovation."}
{"_id": "6a85c70922e618d5aec15065f88f7b23c48a676b", "title": "Repellent and Contact Toxicity of Alpinia officinarum Rhizome Extract against Lasioderma serricorne Adults", "text": "The repellent and contact toxicities of Alpinia officinarum rhizome extract on Lasioderma serricorne adults, and its ability to protect stored wheat flour from L. serricorne adults infestation were investigated. The A. officinarum extract exhibited strong repellent and contact toxicities against L. serricorne adults. The toxicities enhanced significantly with the increasing treatment time and treatment dose. The mean percentage repellency value reached 91.3% at class V at the dose of 0.20 \u03bcL/cm2 after 48 h of exposure. The corrected mortality reached over 80.0% at the dose of 0.16 \u03bcL/cm2 after 48 h of exposure. The A. officinarum extract could significantly reduce L. serricorne infestation level against stored wheat flour. Particularly, the insect infestation was nil in wheat flour packaged with kraft paper bags coated with the A. officinarum extract at the dose of above 0.05 \u03bcL/cm2. The naturally occurring A. officinarum extract could be useful for integrated management of L. serricorne."}
{"_id": "9cb6333ecb28ecb661d76dc8cda8e0766f1c06f4", "title": "A regression-based radar-mote system for people counting", "text": "People counting is key to a diverse set of sensing applications. In this paper, we design a mote-scale event-driven solution that uses a low-power pulsed radar to estimate the number of people within the ~10m radial range of the radar. In contrast to extant solutions, most of which use computer vision, our solution is light-weight and private. It also better tolerates the presence of obstacles that partially or fully impair line of sight; this is achieved by accounting for \u201csmall\u201d indirect radio reflections via joint time-frequency domain features. The counter itself is realized using Support Vector Regression; the regression map is learned from a medium sized dataset of 0-~40 people in various indoor room settings. 10-fold cross validation of our counter yields a mean absolute error of 2.17 between the estimated count and the ground truth and a correlation coefficient of 0.97.We compare the performance of our solution with baseline counters."}
{"_id": "027dc7d698bb161e1b7437c218b5811c015840e6", "title": "Automatic Image Alignment and Stitching of Medical Images with Seam Blending", "text": "This paper proposes an algorithm which automatically aligns and stitches the component medical images (fluoroscopic) with varying degrees of overlap into a single composite image. The alignment method is based on similarity measure between the component images. As applied here the technique is intensity based rather than feature based. It works well in domains where feature based methods have difficulty, yet more robust than traditional correlation. Component images are stitched together using the new triangular averaging based blending algorithm. The quality of the resultant image is tested for photometric inconsistencies and geometric misalignments. This method cannot correct rotational, scale and perspective artifacts. Keywords\u2014Histogram Matching, Image Alignment, Image Stitching, Medical Imaging."}
{"_id": "530de5705066057ead2f911cef2b45f8d237e5e9", "title": "Formal verification of security protocol implementations: a survey", "text": "Automated formal verification of security protocols has been mostly focused on analyzing high-level abstract models which, however, are significantly different from real protocol implementations written in programming languages. Recently, some researchers have started investigating techniques that bring automated formal proofs closer to real implementations. This paper surveys these attempts, focusing on approaches that target the application code that implements protocol logic, rather than the libraries that implement cryptography. According to these approaches, libraries are assumed to correctly implement some models. The aim is to derive formal proofs that, under this assumption, give assurance about the application code that implements the protocol logic. The two main approaches of model extraction and code generation are presented, along with the main techniques adopted for each approach."}
{"_id": "197904cd4c8387244761acab5bd0fe455ec41108", "title": "Multiple-path testing for cross site scripting using genetic algorithms", "text": "Web applications suffer from different security vulnerabilities that could be exploited by hackers to cause harm in a variety of ways. A number of approaches have been proposed to test for such vulnerabilities. However, some gaps are still to be addressed. In this paper, we address one of such gaps: the problem of automatically generating test data (i.e., possible attacks) to test for cross site scripting (XSS) type of vulnerability. The objective is to generate a set of test data to exercise candidate security-vulnerable paths in a given script. The desirable set of test data must be effective in the sense that it uncovers whether any path can indeed be exploited to launch an attack. We designed a genetic algorithm-based test data generator that uses a database of XSS attack patterns to generate possible attacks and assess whether the attack is successful. We considered different types of XSS vulnerability: stored, reflected and DOM based. We empirically validated our test data generator using case studies of Web applications developed using PHP and MySQL. Empirical results show that our test data generator is effective in generating, in one run, multiple test data to cover multiple target"}
{"_id": "14ecc44aaaf525955fa0cc248787be310f08cdc4", "title": "Multi-task assignment for crowdsensing in mobile social networks", "text": "Mobile crowdsensing is a new paradigm in which a crowd of mobile users exploit their carried smart devices to conduct complex computation and sensing tasks in mobile social networks (MSNs). In this paper, we focus on the task assignment problem in mobile crowdsensing. Unlike traditional task scheduling problems, the task assignment in mobile crowdsensing must follow the mobility model of users in MSNs. To solve this problem, we propose an oFfline Task Assignment (FTA) algorithm and an oNline Task Assignment (NTA) algorithm. Both FTA and NTA adopt a greedy task assignment strategy. Moreover, we prove that the FTA algorithm is an optimal offline task assignment algorithm, and give a competitive ratio of the NTA algorithm. In addition, we demonstrate the significant performance of our algorithms through extensive simulations, based on four real MSN traces and a synthetic MSN trace."}
{"_id": "1dba1fa6dd287fde87823218d4f03559dde4e15b", "title": "Natural Language Annotations for Question Answering", "text": "This paper presents strategies and lessons learned from the use of natural language annotations to facilitate question answering in the START information access system."}
{"_id": "4e06b359a9452d1420b398fa7391cad411d4cb23", "title": "Preventing man-in-the-middle attack in Diffie-Hellman key exchange protocol", "text": "The acceleration in developments in communication technology has led to a consequent increase in the vulnerability of data due to penetration attacks. These attacks often came from outside where non-qualified companies develop IT projects. Cryptography can offer high levels of security but has recently shown vulnerabilities such as the man-in-the-middle (MITM) attack in areas of key exchange protocols, especially in the Diffie-Hellman (DH) protocol. Firstly, this paper presents an overview of MITM attacks targeted at the DH protocol then discusses some of the shortcomings of current defenses. A proposed method to secure DH, which helps secure systems against MITM attacks, is then presented. This method involves the use of Geffe generation of binary sequences. The use of Geffe generator offers high levels of randomness. Data hashed and encrypted using this proposed method will be so difficult to intercept and decrypt without the appropriate keys. This offers high levels of security and helps prevent MITM attacks."}
{"_id": "f0f2b2cda279843d0b32b4ca91d014592030ba45", "title": "Information security behavior: Recognizing the influencers", "text": "With the wide spread use of Internet, comes an increase in Information Security threats. To protect against these threats, technology alone have been found not enough as it can be misused by users and become vulnerable to various threats, thus, losing its usefulness. This is evident as users tend to use weak passwords, open email attachments without checking and do not set correct security settings. However, especially with the continuously evolving threat landscape, one cannot assume that users are always motivated to learn about Information Security and practice it. Actually, there are situations of an aware user who knows how to protect himself but, simply, chooses not to, because they do not care, usability problems or because they do not consider themselves as targets. Thus, understanding human security behavior is vital for ensuring an efficient Information Security environment that cannot depend on technology only. Although a number of psychological theories and models, such as Protection Motivation Theory and Technology Acceptance Model, were used in the literature to interpret these behaviors, they tend to assess users' intensions rather than actual behavior. The aim of this paper is to understand and assess these behaviors from a holistic view by finding the significant factors that influence them and how to best assist users to protect themselves. To accomplish this, a systematic literature review was conducted where relevant literature was sought in a number of academic digital databases. As a result, several key behavioral influencers were identified to be essential to consider when educating and directing users' security behavior. Further to that, a number of Information Security awareness approaches were proposed that may transform the user from being ill-informed into a security-minded user who is able to make an informed decision."}
{"_id": "9e948ba3b3431ff2b1da9077955bec5326288f8c", "title": "Near-optimal hybrid analog and digital precoding for downlink mmWave massive MIMO systems", "text": "Millimeter wave (mmWave) massive MIMO can achieve orders of magnitude increase in spectral and energy efficiency, and it usually exploits the hybrid analog and digital precoding to overcome the serious signal attenuation induced by mmWave frequencies. However, most of hybrid precoding schemes focus on the full-array structure, which involves a high complexity. In this paper, we propose a near-optimal iterative hybrid precoding scheme based on the more realistic subarray structure with low complexity. We first decompose the complicated capacity optimization problem into a series of ones easier to be handled by considering each antenna array one by one. Then we optimize the achievable capacity of each antenna array from the first one to the last one by utilizing the idea of successive interference cancelation (SIC), which is realized in an iterative procedure that is easy to be parallelized. It is shown that the proposed hybrid precoding scheme can achieve better performance than other recently proposed hybrid precoding schemes, while it also enjoys an acceptable computational complexity."}
{"_id": "d7f6446205d8c30711d9135df7719ce0a9a45d32", "title": "Self-compassion increases self-improvement motivation.", "text": "Can treating oneself with compassion after making a mistake increase self-improvement motivation? In four experiments, the authors examined the hypothesis that self-compassion motivates people to improve personal weaknesses, moral transgressions, and test performance. Participants in a self-compassion condition, compared to a self-esteem control condition and either no intervention or a positive distraction control condition, expressed greater incremental beliefs about a personal weakness (Experiment 1); reported greater motivation to make amends and avoid repeating a recent moral transgression (Experiment 2); spent more time studying for a difficult test following an initial failure (Experiment 3); exhibited a preference for upward social comparison after reflecting on a personal weakness (Experiment 4); and reported greater motivation to change the weakness (Experiment 4). These findings suggest that, somewhat paradoxically, taking an accepting approach to personal failure may make people more motivated to improve themselves."}
{"_id": "97c76f09bcb077ca7b9f47e82a34e37171d01f41", "title": "Collaborative Joint Training With Multitask Recurrent Model for Speech and Speaker Recognition", "text": "Automatic speech and speaker recognition are traditionally treated as two independent tasks and are studied separately. The human brain in contrast deciphers the linguistic content, and the speaker traits from the speech in a collaborative manner. This key observation motivates the work presented in this paper. A collaborative joint training approach based on multitask recurrent neural network models is proposed, where the output of one task is backpropagated to the other tasks. This is a general framework for learning collaborative tasks and fits well with the goal of joint learning of automatic speech and speaker recognition. Through a comprehensive study, it is shown that the multitask recurrent neural net models deliver improved performance on both automatic speech and speaker recognition tasks as compared to single-task systems. The strength of such multitask collaborative learning is analyzed, and the impact of various training configurations is investigated."}
{"_id": "1005645c05585c2042e3410daeed638b55e2474d", "title": "A Scalable Hierarchical Distributed Language Model", "text": "Neural probabilistic language models (NPLMs) have been shown to be competitive with and occasionally superior to the widely-used n-gram language models. The main drawback of NPLMs is their extremely long training and testing times. Morin and Bengio have proposed a hierarchical language model built around a binary tree of words, which was two orders of magnitude faster than the nonhierarchical model it was based on. However, it performed considerably worse than its non-hierarchical counterpart in spite of using a word tree created using expert knowledge. We introduce a fast hierarchical language model along with a simple feature-based algorithm for automatic construction of word trees from the data. We then show that the resulting models can outperform non-hierarchical neural models as well as the best n-gram models."}
{"_id": "3a91c3eb9a05ef5c8b875ab112448cc3f44a1268", "title": "Extensions of recurrent neural network language model", "text": "We present several modifications of the original recurrent neural network language model (RNN LM).While this model has been shown to significantly outperform many competitive language modeling techniques in terms of accuracy, the remaining problem is the computational complexity. In this work, we show approaches that lead to more than 15 times speedup for both training and testing phases. Next, we show importance of using a backpropagation through time algorithm. An empirical comparison with feedforward networks is also provided. In the end, we discuss possibilities how to reduce the amount of parameters in the model. The resulting RNN model can thus be smaller, faster both during training and testing, and more accurate than the basic one."}
{"_id": "0b47b6ffe714303973f40851d975c042ff4fcde1", "title": "Distributional Clustering of English Words", "text": "We describe and experimentally evaluate a method for automatically clustering words according to their distribution in particular syntactic contexts. Deterministic annealing is used to find lowest distortion sets of clusters. As the annealing parameter increases, existing clusters become unstable and subdivide, yielding a hierarchical \u201csoft\u201d clustering of the data. Clusters are used as the basis for class models of word coocurrence, and the models evaluated with respect to held-out test data."}
{"_id": "77fbbb9ff612c48dad8313087b0e6ed03c31812a", "title": "Characterization of liquid crystal polymer (LCP) material and transmission lines on LCP substrates from 30 to 110 GHz", "text": "Liquid crystal polymer (LCP) is a material that has gained attention as a potential high-performance microwave substrate and packaging material. This investigation uses several methods to determine the electrical properties of LCP for millimeter-wave frequencies. Microstrip ring resonators and cavity resonators are measured in order to characterize the dielectric constant (/spl epsi//sub r/) and loss tangent (tan/spl delta/) of LCP above 30 GHz. The measured dielectric constant is shown to be steady near 3.16, and the loss tangent stays below 0.0049. In addition, various transmission lines are fabricated on different LCP substrate thicknesses and the loss characteristics are given in decibels per centimeter from 2 to 110 GHz. Peak transmission-line losses at 110 GHz vary between 0.88-2.55 dB/cm, depending on the line type and geometry. These results show, for the first time, that LCP has excellent dielectric properties for applications extending through millimeter-wave frequencies."}
{"_id": "ccd80fefa3b6ca995c62f5a5ee0fbfef5732aaf3", "title": "Efficient Multi-task Feature and Relationship Learning", "text": "We consider a multitask learning problem, in which several predictors are learned jointly. Prior research has shown that learning the relations between tasks, and between the input features, together with the predictor, can lead to better generalization and interpretability, which proved to be useful for applications in many domains. In this paper, we consider a formulation of multitask learning that learns the relationships both between tasks and between features, represented through a task covariance and a feature covariance matrix, respectively. First, we demonstrate that existing methods proposed for this problem present an issue that may lead to ill-posed optimization. We then propose an alternative formulation, as well as an efficient algorithm to optimize it. Using ideas from optimization and graph theory, we propose an efficient coordinate-wise minimization algorithm that has a closed form solution for each block subproblem. Our experiments show that the proposed optimization method is orders of magnitude faster than its competitors. We also provide a nonlinear extension that is able to achieve better generalization than existing methods."}
{"_id": "bd564543735722c1f5040ce52e57c324ead4e499", "title": "Sensor relocation in mobile sensor networks", "text": "Recently there has been a great deal of research on using mobility in sensor networks to assist in the initial deployment of nodes. Mobile sensors are useful in this environment because they can move to locations that meet sensing coverage requirements. This paper explores the motion capability to relocate sensors to deal with sensor failure or respond to new events. We define the problem of sensor relocation and propose a two-phase sensor relocation solution: redundant sensors are first identified and then relocated to the target location. We propose a Grid-Quorum solution to quickly locate the closest redundant sensor with low message complexity, and propose to use cascaded movement to relocate the redundant sensor in a timely, efficient and balanced way. Simulation results verify that the proposed solution outperforms others in terms of relocation time, total energy consumption, and minimum remaining energy."}
{"_id": "3c03973d488666fa61a2c7dad65f8e0dea24b012", "title": "Distributed Optimization for Model Predictive Control of Linear Dynamic Networks With Control-Input and Output Constraints", "text": "A linear dynamic network is a system of subsystems that approximates the dynamic model of large, geographically distributed systems such as the power grid and traffic networks. A favorite technique to operate such networks is distributed model predictive control (DMPC), which advocates the distribution of decision-making while handling constraints in a systematic way. This paper contributes to the state-of-the-art of DMPC of linear dynamic networks in two ways. First, it extends a baseline model by introducing constraints on the output of the subsystems and by letting subsystem dynamics to depend on the state besides the control signals of the subsystems in the neighborhood. With these extensions, constraints on queue lengths and delayed dynamic effects can be modeled in traffic networks. Second, this paper develops a distributed interior-point algorithm for solving DMPC optimization problems with a network of agents, one for each subsystem, which is shown to converge to an optimal solution. In a traffic network, this distributed algorithm permits the subsystem of an intersection to be reconfigured by only coordinating with the subsystems in its vicinity."}
{"_id": "4b75c15b4bff6fac5dc1a45ab1b9edbf29f6a1ec", "title": "Semi-Global Matching: A Principled Derivation in Terms of Message Passing", "text": "Semi-global matching, originally introduced in the context of dense stereo, is a very successful heuristic to minimize the energy of a pairwise multi-label Markov Random Field defined on a grid. We offer the first principled explanation of this empirically successful algorithm, and clarify its exact relation to belief propagation and tree-reweighted message passing. One outcome of this new connection is an uncertainty measure for the MAP label of a variable in a Markov Random Field."}
{"_id": "a00d2c18777b97f60554d3a88dd2e948d74538bc", "title": "Challenges and implications of using ultrasonic communications in intra-body area networks", "text": "Body area networks (BANs) promise to enable revolutionary biomedical applications by wirelessly interconnecting devices implanted or worn by humans. However, BAN wireless communications based on radio-frequency (RF) electromagnetic waves suffer from poor propagation of signals in body tissues, which leads to high levels of attenuation. In addition, in-body transmissions are constrained to be low-power to prevent overheating of tissues and consequent death of cells. To address the limitations of RF propagation in the human body, we propose a paradigm shift by exploring the use of ultrasonic waves as the physical medium to wirelessly interconnect in-body implanted devices. Acoustic waves are the transmission technology of choice for underwater communications, since they are known to propagate better than their RF counterpart in media composed mainly of water. Similarly, we envision that ultrasound (e.g., acoustic waves at non-audible frequencies) will provide support for communications in the human body, which is composed for 65% of water. In this paper, we first assess the feasibility of using ultrasonic communications in intra-body BANs, i.e., in-body networks where the devices are biomedical sensors that communicate with an actuator/gateway device located inside the body. We discuss the fundamentals of ultrasonic propagation in tissues, and explore important tradeoffs, including the choice of a transmission frequency, transmission power, bandwidth, and transducer size. Then, we discuss future research challenges for ultrasonic networking of intra-body devices at the physical, medium access and network layers of the protocol stack."}
{"_id": "0c9c4daa230bcc62cf3d78236ccf54ff686a36e0", "title": "Biobanks: transnational, European and global networks.", "text": "Biobanks contain biological samples and associated information that are essential raw materials for advancement of biotechnology, human health, and research and development in life sciences. Population-based and disease-oriented biobanks are major biobank formats to establish the disease relevance of human genes and provide opportunities to elucidate their interaction with environment and lifestyle. The developments in personalized medicine require molecular definition of new disease subentities and biomarkers for identification of relevant patient subgroups for drug development. These emerging demands can only be met if biobanks cooperate at the transnational or even global scale. Establishment of common standards and strategies to cope with the heterogeneous legal and ethical landscape in different countries are seen as major challenges for biobank networks. The Central Research Infrastructure for Molecular Pathology (CRIP), the concept for a pan-European Biobanking and Biomolecular Resources Research Infrastructure (BBMRI), and the Organization for Economic Co-operation and Development (OECD) global Biological Resources Centres network are examples for transnational, European and global biobank networks that are described in this article."}
{"_id": "8b3bbf6a4b1cb54b4141eb2696e192fefc7fa7f2", "title": "Implantable Antennas for Biomedical Applications: An Overview on Alternative Antenna Design Methods and Challenges", "text": "Implanted biomedical devices are witnessing great attention in finding solutions to complex medical conditions. Many challenges face the design of implantable biomedical devices including designing and implanting antennas within hostile environment due to the surrounding tissues of human body. Implanted antennas must be compact in size, efficient, safe, and can effectively work within adequate medical frequency bands. This paper presents an overview of the major aspects related to the design and challenges of in body implanted antennas. The review includes surveying the applications, design methods, challenges, simulation tools, and testing and manufacturing of implantable biomedical antennas."}
{"_id": "fd04694eef08eee47d239b6cac5afacf943bf0a1", "title": "Neural networks for computer-aided diagnosis: detection of lung nodules in chest radiograms", "text": "The paper describes a neural-network-based system for the computer aided detection of lung nodules in chest radiograms. Our approach is based on multiscale processing and artificial neural networks (ANNs). The problem of nodule detection is faced by using a two-stage architecture including: 1) an attention focusing subsystem that processes whole radiographs to locate possible nodular regions ensuring high sensitivity; 2) a validation subsystem that processes regions of interest to evaluate the likelihood of the presence of a nodule, so as to reduce false alarms and increase detection specificity. Biologically inspired filters (both LoG and Gabor kernels) are used to enhance salient image features. ANNs of the feedforward type are employed, which allow an efficient use of a priori knowledge about the shape of nodules, and the background structure. The images from the public JSRT database, including 247 radiograms, were used to build and test the system. We performed a further test by using a second private database with 65 radiograms collected and annotated at the Radiology Department of the University of Florence. Both data sets include nodule and nonnodule radiographs. The use of a public data set along with independent testing with a different image set makes the comparison with other systems easier and allows a deeper understanding of system behavior. Experimental results are described by ROC/FROC analysis. For the JSRT database, we observed that by varying sensitivity from 60 to 75% the number of false alarms per image lies in the range 4-10, while accuracy is in the range 95.7-98.0%. When the second data set was used comparable results were obtained. The observed system performances support the undertaking of system validation in clinical settings."}
{"_id": "e726318569f5670369684e1a9f5b8f632d29bffc", "title": "Particle Swarm Optimization of the Multioscillatory LQR for a Three-Phase Four-Wire Voltage-Source Inverter With an $LC$ Output Filter", "text": "This paper presents evolutionary optimization of the linear quadratic regulator (LQR) for a voltage-source inverter with an LC output filter. The procedure involves particle-swarm-based search for the best weighting factors in the quadratic cost function. It is common practice that the weights in the cost function are set using the guess-and-check method. However, it becomes quite challenging, and usually very time-consuming, if there are many auxiliary states added to the system. In order to immunize the system against unbalanced and nonlinear loads, oscillatory terms are incorporated into the control scheme, and this significantly increases the number of weights to be guessed. All controller gains are determined altogether in one LQR procedure call, and the originality reported here refers to evolutionary tuning of the weighting matrix. There is only one penalty factor to be set by the designer during the controller synthesis procedure. This coefficient enables shaping the dynamics of the closed-loop system by penalizing the dynamics of control signals instead of selecting individual weighting factors for augmented state vector components. Simulational tuning and experimental verification (the physical converter at the level of 21 kVA) are included."}
{"_id": "b109f7d8b90a789a962649e65c35e041370f6bf4", "title": "Towards fabrication of Vertical Slit Field Effect Transistor (VeSFET) as new device for nano-scale CMOS technology", "text": "This paper proposes a CMOS based process for Vertical Slit Field Effect Transistors. The central part of the device, namely, the vertical slit, is defined by using electron beam lithography and silicon dry etching. In order to verify the validity and the reproducibility of the process, devices having the slit width ranging from 16 nm to 400 nm were fabricated, with slit conductance in the range 0.6 to 3 milliSiemens, in agreement with the expected values."}
{"_id": "0605a012aeeee9bef773812a533c4f3cb7fa5a5f", "title": "Interpretable Counting for Visual Question Answering", "text": "Questions that require counting a variety of objects in images remain a major challenge in visual question answering (VQA). The most common approaches to VQA involve either classifying answers based on fixed length representations of both the image and question or summing fractional counts estimated from each section of the image. In contrast, we treat counting as a sequential decision process and force our model to make discrete choices of what to count. Specifically, the model sequentially selects from detected objects and learns interactions between objects that influence subsequent selections. A distinction of our approach is its intuitive and interpretable output, as discrete counts are automatically grounded in the image. Furthermore, our method outperforms the state of the art architecture for VQA on multiple metrics that evaluate counting."}
{"_id": "9d1c4f3f946f837d59a256df3bee3d5204271537", "title": "A systematic review of the impact of the use of social networking sites on body image and disordered eating outcomes.", "text": "A large body of literature has demonstrated mass media effects on body image and disordered eating. More recently, research in this area has turned to 'new' forms of media, such as the Internet, and particularly Social Networking Sites (SNSs). A systematic search for peer-reviewed articles on SNS use and body image and eating disorders resulted in 20 studies meeting specific inclusion criteria. As a whole, these articles demonstrated that use of SNSs is associated with body image and disordered eating. Specific SNS activities, such as viewing and uploading photos and seeking negative feedback via status updates, were identified as particularly problematic. A small number of studies also addressed underlying processes and found that appearance-based social comparison mediated the relationship between SNS use and body image and eating concerns. Gender was not found to be a moderating factor. It was concluded that, although there is a good deal of correlational research supporting the maladaptive effect of SNS use on body image and disordered eating, more longitudinal and experimental studies are needed."}
{"_id": "235af8cac488c9347bf7a958fea93835fe9b896e", "title": "Supporting Cloud Computing in Thin-Client/Server Computing Model", "text": "This paper addresses the issue of how to support cloud computing in thin-client/server computing model. We describe the design and implementation of the multiple-application-server thin-client/server computing (MAS TC/S) model that allows users with thin-client devices to roam around a wide area network whilst experiencing transparent working environments. MAS TC/S can be applied to a wide variety of applications in wide area network, such as pervasive software rental, service-oriented infrastructure for an Internet service provider, and office automation for transnational corporations. The MAS TC/S architecture model comprises five major components: the display protocol, the multiple-application-server topology, the application-server discovery protocol, the distributed file system, and the network input/output protocol. We discuss problems and present solutions in the design of each constituent component. A prototype of the MAS TC/S that spans the campuses of two universities in Taiwan has been built \u2013 we also report our experiences in constructing this prototype."}
{"_id": "7220e204d4c81e309e2a20a4ba3aadbbb8e34315", "title": "Explaining Inconsistencies in OWL Ontologies", "text": "Justifications play a central role as the basis for explaining entailments in OWL ontologies. While techniques for computing justifications for entailments in consistent ontologies are theoretically and practically well-understood, little is known about the practicalities of computing justifications for inconsistent ontologies. This is despite the fact that justifications are important for repairing inconsistent ontologies, and can be used as a basis for paraconsistent reasoning. This paper presents algorithms, optimisations, and experiments in this area. Surprisingly, it turns out that justifications for inconsistent ontologies are more \u201cdifficult\u201d to compute and are often more \u201cnumerous\u201d than justifications for entailments in consistent ontologies: whereas it is always possible to compute some justifications, it is often not possible to compute all justifications for real world inconsistent ontologies."}
{"_id": "6ff17302e0a3c583be76ee2f7d90d6f2067a74d7", "title": "Impact of interleaving on common-mode EMI filter weight reduction of paralleled three-phase voltage-source converters", "text": "This paper presents a detailed analysis on the impact of the interleaving angle on EMI filter weight reduction in a paralleled three-phase VSI motor drive system. The EMI noise analysis equivalent circuits are given in frequency domain and EMI filter design methods for a motor drive system are discussed. Based on the analysis, design examples are given for DC and AC common-mode (CM) EMI filters showing that minimum corner frequency in this case cannot ensure minimum EMI filter weight. In effect, the EMI filter weight is also determined by the volt-seconds on the inductor. With this consideration, the impact of interleaving is analyzed based on the equivalent circuit developed. Specifically, it is shown that interleaving can either reduce the volt-second on the inductor or help reduce the inductance needed, which has a different impact on the filter weight when the design condition is different. EMI filters are designed for both AC and DC sides as examples, showing the impact that different interleaving angles can have on filter weight reduction. Verifications are carried out both in simulation and experimentally on a 2 kW converter system."}
{"_id": "cb84ef73db0a259b07289590f0dfcb9b8b9bbe79", "title": "A hybrid RF and vibration energy harvester for wearable devices", "text": "This paper describes a hybrid radio frequency (RF) and piezoelectric thin film polyvinylidene fluoride (PVDF) vibration energy harvester for wearable devices. By exploiting the impedance characteristics of parasitic capacitances and discrete inductors, the proposed harvester not only scavenges 15 Hz vibration energy but also works as a 915 MHz flexible silver-ink RF dipole antenna. In addition, an interface circuit including a 6-stage Dickson RF-to-DC converter and a diode bridge rectifier to convert the RF and vibration outputs of the hybrid harvester into DC signals to power resistive loads is evaluated. A maximum DC output power of 20.9 \u03bc\u201d, when using the RF to DC converter and \u22128 dBm input RF power, is achieved at 36 % of the open-circuit output voltage while the DC power harvested from 3 g vibration excitation reaches a maximum of 2.8 \u03bcW at 51% of open-circuit voltage. Experimental results show that the tested hybrid harvesting system simultaneously generates 7.3 \u03bcW DC power, when the distance from the harvester to a 3 W EIRP 915 MHz transmitter is 5.5 m, and 1.8 \u03bcW DC power from a 1.8 g vibration acceleration peak."}
{"_id": "3b2bb6b3b1dfc8e036106ad73a58ee9b7bc8f887", "title": "An Efficient End-to-End Neural Model for Handwritten Text Recognition", "text": "Offline handwritten text recognition from images is an important problem for enterprises attempting to digitize large volumes of handmarked scanned documents/reports. Deep recurrent models such as Multi-dimensional LSTMs [10] have been shown to yield superior performance over traditional Hidden Markov Model based approaches that suffer from the Markov assumption and therefore lack the representational power of RNNs. In this paper we introduce a novel approach that combines a deep convolutional network with a recurrent Encoder-Decoder network to map an image to a sequence of characters corresponding to the text present in the image. The entire model is trained end-to-end using Focal Loss [18], an improvement over the standard Cross-Entropy loss that addresses the class imbalance problem, inherent to text recognition. To enhance the decoding capacity of the model, Beam Search algorithm is employed which searches for the best sequence out of a set of hypotheses based on a joint distribution of individual characters. Our model takes as input a downsampled version of the original image thereby making it both computationally and memory efficient. The experimental results were benchmarked against two publicly available datasets, IAM and RIMES. We surpass the state-of-the-art word level accuracy on the evaluation set of both datasets by 3.5% & 1.1%, respectively."}
{"_id": "e4efcdcf039f48dcbe1024c013e90dc778808f03", "title": "Supply chain information systems strategy : Impacts on supply chain performance and firm performance", "text": "This paper examines the relationship between supply chain (SC) strategy and supply chain information systems (IS) strategy, and its impact on supply chain performance and firm performance. Theorizing from the supply chain and IS literatures within an overarching framework of the information processing theory (IPT), we develop hypotheses proposing a positive moderating effect of two supply chain IS strategies \u2013 IS for Efficiency and IS for Flexibility \u2013 on the respective relationships between two SC strategies \u2013 Lean and Agile, and supply chain performance. Based on confirmatory analysis and structural equation modeling of survey data from members of senior and executive management in the purchase/materials management/logistics/supply chain functions, from 205 firms, we validate these hypotheses and show that the IS for Efficiency (IS for Flexibility) IS strategy enhances the relationship between Lean (Agile) SC strategy and supply chain performance. We also show a positive association between supply chain performance and firm performance, and a full (partial) mediation effect of supply chain performance on the relation between Agile (Lean) SC strategy and firm performance. The paper contributes to the supply chain literature by providing theoretical understanding and empirical support of how SC strategies and IS strategies can work together to boost supply chain performance. In doing so, it identifies particular types of supply chain IS application portfolios that can enhance the benefits from specific SC strategies. The paper also develops and validates instruments for measuring two types of SC strategies and supply chain IS strategies. For practice, the paper offers guidance in making investment decisions for adopting and deploying IS appropriate to particular SC strategies and analyzing possible lack of alignment between applications that the firm deploys in its supply chain, and the information processing needs of its SC strategy. & 2012 Elsevier B.V. All rights reserved."}
{"_id": "64e1ced6f61385bc8bfbc685040febca49384607", "title": "CIAN: Cross-Image Affinity Net for Weakly Supervised Semantic Segmentation", "text": "Weakly supervised semantic segmentation based on image-level labels aims for alleviating the data scarcity problem by training with coarse labels. State-of-the-art methods rely on image-level labels to generate proxy segmentation masks, then train the segmentation network on these masks with various constraints. These methods consider each image independently and lack the exploration of cross-image relationships. We argue the cross-image relationship is vital to weakly supervised learning. We propose an end-to-end affinity module for explicitly modeling the relationship among a group of images. By means of this, one image can benefit from the complementary information from other images, and the supervision guidance can be shared in the group. The proposed method improves over the baseline with a large margin. Our method achieves 64.1% mIOU score on Pascal VOC 2012 validation set, and 64.7% mIOU score on test set, which is a new state-of-theart by only using image-level labels, demonstrating the effectiveness of the method."}
{"_id": "317bc06798ad37993f79f5dca38e177d12a116f0", "title": "Where2Stand: A Human Position Recommendation System for Souvenir Photography", "text": "People often take photographs at tourist sites and these pictures usually have two main elements: a person in the foreground and scenery in the background. This type of \u201csouvenir photo\u201d is one of the most common photos clicked by tourists. Although algorithms that aid a user-photographer in taking a well-composed picture of a scene exist [Ni et al. 2013], few studies have addressed the issue of properly positioning human subjects in photographs. In photography, the common guidelines of composing portrait images exist. However, these rules usually do not consider the background scene. Therefore, in this article, we investigate human-scenery positional relationships and construct a photographic assistance system to optimize the position of human subjects in a given background scene, thereby assisting the user in capturing high-quality souvenir photos. We collect thousands of well-composed portrait photographs to learn human-scenery aesthetic composition rules. In addition, we define a set of negative rules to exclude undesirable compositions. Recommendation results are achieved by combining the first learned positive rule with our proposed negative rules. We implement the proposed system on an Android platform in a smartphone. The system demonstrates its efficacy by producing well-composed souvenir photos."}
{"_id": "032e660447156a045ad6cf50272bca46246f4645", "title": "Extreme Adaptation for Personalized Neural Machine Translation", "text": "Every person speaks or writes their own flavor of their native language, influenced by a number of factors: the content they tend to talk about, their gender, their social status, or their geographical origin. When attempting to perform Machine Translation (MT), these variations have a significant effect on how the system should perform translation, but this is not captured well by standard one-sizefits-all models. In this paper, we propose a simple and parameter-efficient adaptation technique that only requires adapting the bias of the output softmax to each particular user of the MT system, either directly or through a factored approximation. Experiments on TED talks in three languages demonstrate improvements in translation accuracy, and better reflection of speaker traits in the target text."}
{"_id": "a0bf0c6300640a3c5757d1ca14418be178a33e99", "title": "Three-Level Bidirectional Converter for Fuel-Cell/Battery Hybrid Power System", "text": "A novel three-level (3L) bidirectional converter (BDC) is proposed in this paper. Compared with the traditional BDC, the inductor of the 3L BDC can be reduced significantly so that the dynamic response is greatly improved. Hence, the proposed converter is very suitable for fuel-cell/battery hybrid power systems. In addition, the voltage stress on the switch of the proposed converter is only half of the voltage on the high-voltage side, so it is also suitable for high-voltage applications. The operation principle and the implementation of the control circuit are presented in detail. This paper also proposes a novel bidirectional soft-start control strategy for the BDC. A 1-kW prototype converter is built to verify the theoretical analysis."}
{"_id": "59363d255f07e89e81f727ab4c627e21da888fe5", "title": "Gamut Mapping to Preserve Spatial Luminance Variations", "text": "A spatial gamut mapping technique is proposed to overcome the shortcomings encountered with standard pointwise gamut mapping algorithms by preserving spatially local luminance variations in the original image. It does so by first processing the image through a standard pointwise gamut mapping algorithm. The difference between the original image luminance Y and gamut mapped image luminance Y\u2019 is calculated. A spatial filter is then applied to this difference signal, and added back to the gamut mapped signal Y\u2019. The filtering operation can result in colors near the gamut boundary being placed outside the gamut, hence a second gamut mapping step is required to move these pixels back into the gamut. Finally, the in-gamut pixels are processed through a color correction function for the output device, and rendered to that device. Psychophysical experiments validate the superior performance of the proposed algorithm, which reduces many of the artifacts arising from standard pointwise techniques."}
{"_id": "d8e8bdd687dd588b71d92ff8f6018a1084f85437", "title": "Intelligent Device-to-Device Communication in the Internet of Things", "text": "Analogous to the way humans use the Internet, devices will be the main users in the Internet of Things (IoT) ecosystem. Therefore, device-to-device (D2D) communication is expected to be an intrinsic part of the IoT. Devices will communicate with each other autonomously without any centralized control and collaborate to gather, share, and forward information in a multihop manner. The ability to gather relevant information in real time is key to leveraging the value of the IoT as such information will be transformed into intelligence, which will facilitate the creation of an intelligent environment. Ultimately, the quality of the information gathered depends on how smart the devices are. In addition, these communicating devices will operate with different networking standards, may experience intermittent connectivity with each other, and many of them will be resource constrained. These characteristics open up several networking challenges that traditional routing protocols cannot solve. Consequently, devices will require intelligent routing protocols in order to achieve intelligent D2D communication. We present an overview of how intelligent D2D communication can be achieved in the IoT ecosystem. In particular, we focus on how state-of-the-art routing algorithms can achieve intelligent D2D communication in the IoT."}
{"_id": "5bcc988bc50428074f123495a426c2637230a1f8", "title": "Social trust and social reciprocity based cooperative D2D communications", "text": "Thanks to the convergence of pervasive mobile communications and fast-growing online social networking, mobile social networking is penetrating into our everyday life. Aiming to develop a systematic understanding of the interplay between social structure and mobile communications, in this paper we exploit social ties in human social networks to enhance cooperative device-to-device communications. Specifically, as hand-held devices are carried by human beings, we leverage two key social phenomena, namely social trust and social reciprocity, to promote efficient cooperation among devices. With this insight, we develop a coalitional game theoretic framework to devise social-tie based cooperation strategies for device-to-device communications. We also develop a network assisted relay selection mechanism to implement the coalitional game solution, and show that the mechanism is immune to group deviations, individually rational, and truthful. We evaluate the performance of the mechanism by using real social data traces. Numerical results show that the proposed mechanism can achieve up-to 122% performance gain over the case without D2D cooperation."}
{"_id": "a250172e20563f689388ae6a5e4c0b572259fa58", "title": "Resource allocation for device-to-device communications underlaying LTE-advanced networks", "text": "The Long Term Evolution-Advanced (LTEAdvanced) networks are being developed to provide mobile broadband services for the fourth generation (4G) cellular wireless systems. Deviceto- device (D2D) communications is a promising technique to provide wireless peer-to-peer services and enhance spectrum utilization in the LTE-Advanced networks. In D2D communications, the user equipments (UEs) are allowed to directly communicate between each other by reusing the cellular resources rather than using uplink and downlink resources in the cellular mode when communicating via the base station. However, enabling D2D communications in a cellular network poses two major challenges. First, the interference caused to the cellular users by D2D devices could critically affect the performances of the cellular devices. Second, the minimum quality-of-service (QoS) requirements of D2D communications need to be guaranteed. In this article, we introduce a novel resource allocation scheme (i.e. joint resource block scheduling and power control) for D2D communications in LTE-Advanced networks to maximize the spectrum utilization while addressing the above challenges. First, an overview of LTE-Advanced networks, and architecture and signaling support for provisioning of D2D communications in these networks are described. Furthermore, research issues and the current state-of-the-art of D2D communications are discussed. Then, a resource allocation scheme based on a column generation method is proposed for D2D communications. The objective is to maximize the spectrum utilization by finding the minimum transmission length in terms of time slots for D2D links while protecting the cellular users from harmful interference and guaranteeing the QoS of D2D links. The performance of this scheme is evaluated through simulations."}
{"_id": "7fb0a03eee8369ca214c6ed5e6bbefcdaa11a153", "title": "THREE LAYERS APPROACH FOR NETWORK SCANNING DETECTION", "text": "Computer networks became one of the most important dimensions in any organization. This importance is due to the connectivity benefits that can be given by networks, such as computing power, data sharing and enhanced performance. However using networks comes with a cost, there are some threats and issues that need to be addressed, such as providing sufficient level of security. One of the most challenging issues in network security is network scanning. Network scanning is considered to be the initial step in any attacking process. Therefore, detecting networks scanning helps to protect networks resources, services, and data before the real attack happens. This paper proposes an approach that consists of three layers to detect Sequential and Random network scanning for both TCP and UDP protocols. The proposed Three Layers Approach aims to increase network scanning detection accuracy. The Three Layers Approach defines some packets to be used as signs of network scanning existence. Before applying the approach in a network, there is a Thresholds Generation Stage to that aims to determine descriptive set of thresholds. After that, the first layer of the approach aggregates sign packets in separated tables. Then the second layer of the approach analyzes these tables in new tables by counting packets generated by each IP. Finally, the last layer makes a decision of whether or not a network is being scanned."}
{"_id": "ebaccd68ab660c7d534a2c0f1bf3d10d03c4dcc1", "title": "Study of induction heating power supply based on fuzzy controller", "text": "In order to satisfy the higher control performance requirement of the Induction Heating Supply, a fuzzy logic control technology for induction heating power supply power control system is researched. This study presents the composition and design of the induction heating control system based on the fuzzy logic controller. In this paper, a complete simulation model of induction heating systems is obtained by using the Matlab /Simulink software, Simulation results show the effectiveness and superiority of the control system."}
{"_id": "24f6497bb4cb6cfbb68492f07624ab5212bc39e3", "title": "Kawaii/Cute interactive media", "text": "Cuteness in interactive systems is a relatively new development yet has its roots in the aesthetics of many historical and cultural elements. Symbols of cuteness abound in nature as in the creatures of neotenous proportions: drawing in the care and concern of the parent and the care from a protector. We provide an in-depth look at the role of cuteness in interactive systems beginning with a history. We particularly focus on the Japanese culture of Kawaii, which has made large impact around the world, especially in entertainment, fashion, and animation. We then take the approach of defining cuteness in contemporary popular perception. User studies are presented offering an in-depth understanding of key perceptual elements, which are identified as cute. This knowledge provides for the possibility to create a cute filter that can transform inputs and automatically create more cute outputs. This paper also provides an insight into the next generation of interactive systems that bring happiness and comfort to users of all ages and cultures through the soft power of cute."}
{"_id": "a2729b6ca8d24bb806c168528eb81de950871446", "title": "A unified approach for mining outliers", "text": "This paper deals with nding outliers (exceptions) in large datasets. The identiication of outliers can often lead to the discovery of truly unexpected knowledge in areas such as electronic commerce, credit card fraud, and even the analysis of performance statistics of professional athletes. One contribution of this paper is to show how our proposed, intuitive notion of outliers can unify or generalize many of the existing notions of outliers provided by discordancy tests for standard statistical distributions. Thus, when mining large datasets containing many attributes, a uniied approach can replace many statistical discordancy tests, regardless of any knowledge about the underlying distribution of the attributes. A second contribution of this paper is the development of an algorithm to nd all outliers in a dataset. An important advantage of this algorithm is that its time complexity is linear with respect to the number of objects in the dataset. We include preliminary performance results."}
{"_id": "5e6035535d6d258a29598faf409b57a71ec28f21", "title": "An efficient centroid type-reduction strategy for general type-2 fuzzy logic system", "text": ""}
{"_id": "1e00b499af499ff00b97a737bc9053b34dcc352a", "title": "Automatic Ontology Matching via Upper Ontologies: A Systematic Evaluation", "text": "\u00bfOntology matching\u00bf is the process of finding correspondences between entities belonging to different ontologies. This paper describes a set of algorithms that exploit upper ontologies as semantic bridges in the ontology matching process and presents a systematic analysis of the relationships among features of matched ontologies (number of simple and composite concepts, stems, concepts at the top level, common English suffixes and prefixes, and ontology depth), matching algorithms, used upper ontologies, and experiment results. This analysis allowed us to state under which circumstances the exploitation of upper ontologies gives significant advantages with respect to traditional approaches that do no use them. We run experiments with SUMO-OWL (a restricted version of SUMO), OpenCyc, and DOLCE. The experiments demonstrate that when our \u00bfstructural matching method via upper ontology\u00bf uses an upper ontology large enough (OpenCyc, SUMO-OWL), the recall is significantly improved while preserving the precision obtained without upper ontologies. Instead, our \u00bfnonstructural matching method\u00bf via OpenCyc and SUMO-OWL improves the precision and maintains the recall. The \u00bfmixed method\u00bf that combines the results of structural alignment without using upper ontologies and structural alignment via upper ontologies improves the recall and maintains the F-measure independently of the used upper ontology."}
{"_id": "59483664dfb38a7ce66e7dc279ac2d0d8456dbb6", "title": "A Wideband Slotted Bow-Tie Antenna With Reconfigurable CPW-to-Slotline Transition for Pattern Diversity", "text": "We propose a slotted bow-tie antenna with pattern reconfigurability. The antenna consists of a coplanar waveguide (CPW) input, a pair of reconfigurable CPW-to-slotline transitions, a pair of Vivaldi-shaped radiating tapered slots, and four PIN diodes for reconfigurability. With suitable arrangement of the bias network, the proposed antenna demonstrates reconfigurable radiation patterns in the frequency range from 3.5 to 6.5 GHz in three states: a broadside radiation with fairly omnidirectional pattern and two end-fire radiations whose main beams are directed to exactly opposite directions. The proposed antenna is investigated comprehensively with the help of the radiation patterns in the two principal cuts and also the antenna gain responses versus frequencies. The simulation and measurement results reveal fairly good agreement and hence sustain the reconfigurability of the proposed design."}
{"_id": "422be962dc717f88f412b954420f394490a5c110", "title": "Analysis of Software Aging in a Web Server", "text": "Several recent studies have reported & examined the phenomenon that long-running software systems show an increasing failure rate and/or a progressive degradation of their performance. Causes of this phenomenon, which has been referred to as \"software aging\", are the accumulation of internal error conditions, and the depletion of operating system resources. A proactive technique called \"software rejuvenation\" has been proposed as a way to counteract software aging. It involves occasionally terminating the software application, cleaning its internal state and/or its environment, and then restarting it. Due to the costs incurred by software rejuvenation, an important question is when to schedule this action. While periodic rejuvenation at constant time intervals is straightforward to implement, it may not yield the best results. The rate at which software ages is usually not constant, but it depends on the time-varying system workload. Software rejuvenation should therefore be planned & initiated in the face of the actual system behavior. This requires the measurement, analysis, and prediction of system resource usage. In this paper, we study the development of resource usage in a web server while subjecting it to an artificial workload. We first collect data on several system resource usage & activity parameters. Non-parametric statistical methods are then applied toward detecting & estimating trends in the data sets. Finally, we fit time series models to the data collected. Unlike the models used previously in the research on software aging, these time series models allow for seasonal patterns, and we show how the exploitation of the seasonal variation can help in adequately predicting the future resource usage. Based on the models employed here, proactive management techniques like software rejuvenation triggered by actual measurements can be built"}
{"_id": "5cf3659f1741e5962fdf183a874db9b5a6878f47", "title": "A Flexible Fabrication Approach Toward the Shape Engineering of Microscale Soft Pneumatic Actuators", "text": "The recent developments of soft robots have inspired a variety of applications that require compliance for safety and softness for biomimetics. With such prevalence and advantages of soft robots, researchers are diverting their interests toward applications like micromanipulation of biological tissue. However, the progress has been thwarted by the difficulty in producing soft robots in miniature scale. In this paper, we present a new kind of microscale soft pneumatic actuators (SPA) made from streamlined and standardized fabrication procedure with customizable bending modalities inspired by shape engineering. Preliminary mathematical models are given to interpret width-based shape engineering for customization and to compare the bending angle and radius of curvature measured from the characterization experiments. The fabricated SPA was tested on the sciatic nerve of a rat in order to show its future potentials in biomedical field. Ultimately, this paper will contribute to the diversification of SPA fabrication and customization, as well as biomedical applications that demand smaller dimensions and higher precision."}
{"_id": "771b9c74d58f4fb29ce3303a3d4e864c6707340d", "title": "Cognitive Wireless Powered Network: Spectrum Sharing Models and Throughput Maximization", "text": "The recent advance in radio-frequency (RF) wireless energy transfer (WET) has motivated the study of wireless powered communication network (WPCN) in which distributed wireless devices are powered via dedicated WET by the hybrid access-point (H-AP) in the downlink (DL) for uplink (UL) wireless information transmission (WIT). In this paper, by utilizing the cognitive radio (CR) technique, we study a new type of CR enabled secondary WPCN, called cognitive WPCN, under spectrum sharing with the primary wireless communication system. In particular, we consider a cognitive WPCN, consisting of one single H-AP with constant power supply and distributed wireless powered users, shares the same spectrum for its DL WET and UL WIT with an existing primary communication link, where the WPCN's WET/WIT and the primary link's WIT may interfere with each other. Under this new setup, we propose two coexisting models for spectrum sharing of the two systems, namely underlay and overlay-based cognitive WPCNs, depending on different types of knowledge on the primary user transmission available for the cognitive WPCN. For each model, we maximize the sum-throughput of the cognitive WPCN by optimizing its transmission under different constraints applied to protect the primary user transmission. Analysis and simulation results are provided to compare the sum-throughput of the cognitive WPCN versus the achievable rate of the primary user under two proposed coexisting models. It is shown that the overlay based cognitive WPCN outperforms the underlay based counterpart, thanks to its fully co-operative WET/WIT design with the primary WIT, while also requiring higher complexity for implementation."}
{"_id": "7a02337958a489f4d7817f198b8e36e92ae1e155", "title": "Accounting Restatements and External Financing Choices", "text": "There is little research on how accounting information quality affects a firm\u2019s external financing choices. In this paper, we use the occurrence of accounting restatements as a proxy for the reduced credibility of accounting information and investigate how restatements affect a firm\u2019s external financing choices. We find that for firms that obtain external financing after restatements, they rely more on debt financing, especially private debt financing, and less on equity financing. The increase in debt financing is more pronounced for firms with more severe information problems and less pronounced for firms with prompt CEO/CFO turnover and auditor dismissal. Our evidence indicates that accounting information quality affects capital providers\u2019 resource allocation and that debt holders help alleviate information problems after accounting restatements."}
{"_id": "743675c5510ba05ddc24d4b5f9b07589dac6c006", "title": "Integrating Perception and Planning for Autonomous Navigation of Urban Vehicles", "text": "The paper addresses the problem of autonomous navigation of a car-like robot evolving in an urban environment. Such an environment exhibits an heterogeneous geometry and is cluttered with moving obstacles. Furthermore, in this context, motion safety is a critical issue. The proposed approach to the problem lies in the coupling of two crucial robotic capabilities, namely perception and planning. The main contributions of this work are the development and integration of these modules into one single application, considering explicitly the constraints related to the environment and the system"}
{"_id": "0b9ef11912c9667cd9150b4295e0b69705ab7d61", "title": "The international personality item pool and the future of public-domain personality measures", "text": "Seven experts on personality measurement here discuss the viability of public-domain personality measures, focusing on the International Personality Item Pool (IPIP) as a prototype. Since its inception in 1996, the use of items and scales from the IPIP has increased dramatically. Items from the IPIP have been translated from English into more than 25 other languages. Currently over 80 publications using IPIP scales are listed at the IPIP Web site (http://ipip.ori.org), and the rate of IPIPrelated publications has been increasing rapidly. The growing popularity of the IPIP can be attributed to Wve factors: (1) It is cost free; (2) its items can be obtained instantaneously via the Internet; (3) it includes over 2000 items, all easily available for inspection; (4) scoring keys for IPIP scales are This article represents a synthesis of contributions to the presidential symposium, The International Personality Item Pool and the Future of Public-Domain Personality Measures (L.R. Goldberg, Chair) at the sixth annual meeting of the Association for Research in Personality, New Orleans, January 20, 2005. Authorship order is based on the order of participation in the symposium. The IPIP project has been continually supported by Grant MH049227 from the National Institute of Mental Health, U.S. Public Health Service. J.A. Johnson\u2019s research was supported by the DuBois Educational Foundation. The authors thank Paul T. Costa Jr., Samuel D. Gosling, Leonard G. Rorer, Richard Reed, and Krista Trobst for their helpful suggestions. * Corresponding author. Fax: +1 814 375 4784. E-mail address: j5j@psu.edu (J.A. Johnson). 0092-6566/$ see front matter \uf6d9 2005 Elsevier Inc. All rights reserved. doi:10.1016/j.jrp.2005.08.007 L.R. Goldberg et al. / Journal of Research in Personality 40 (2006) 84\u201396 85 provided; and (5) its items can be presented in any order, interspersed with other items, reworded, translated into other languages, and administered on the World Wide Web without asking permission of anyone. The unrestricted availability of the IPIP raises concerns about possible misuse by unqualiWed persons, and the freedom of researchers to use the IPIP in idiosyncratic ways raises the possibility of fragmentation rather than scientiWc uniWcation in personality research. \uf6d9 2005 Elsevier Inc. All rights reserved."}
{"_id": "3971b8751d29266d38e1f12fc1935b1469d73af8", "title": "A CCPP algorithm based on the standard map for the mobile robot", "text": "This paper introduces a new integrated algorithm to achieve the complete coverage path planning (CCPP) task for the mobile robot in a given obstacles-included terrain. The algorithm combines the cellular decomposition approach and the chaotic Standard map together to design the coverage procedure. The cellular decomposition approach decompose the target region into several rectangular feasible sub-regions. Then the chaotic Standard map in the full mapping state produces the complete coverage trajectories inside the feasible sub-regions, and the connection trajectories between two adjacent feasible sub-regions. Compared with the general cellular decomposition method, the proposed integrated algorithm needs no designated start point and goal point to link two adjacent sub-regions. The planned trajectories demonstrate a good distribution characteristics with regard to completeness and evenness. No obstacles-avoidance method and boundaries detection are needed in the coverage procedure."}
{"_id": "7a5cc570c628d8afe267f8344cfc7762f1532dc9", "title": "Probabilistic slope stability analysis by finite elements", "text": "The paper investigates the probability of failure of a cohesive slope using both simple and more advanced probabilistic analysis tools. The influence of local averaging on the probability of failure of a test problem is thoroughly investigated. In the simple approach, classical slope stability analysis techniques are used, and the shear strength is treated as a single random variable. The advanced method, called the random finite element method (RFEM), uses elastoplasticity combined with random field theory. The RFEM method is shown to offer many advantages over traditional probabilistic slope stability techniques, because it enables slope failure to develop naturally by \u201cseeking out\u201d the most critical mechanism. Of particular importance in this work, is the conclusion that simplified probabilistic analysis, in which spatial variability is ignored by assuming perfect correlation, can lead to unconservative estimates of the probability of failure. This contradicts the findings of other investigators using classical slope stability analysis tools."}
{"_id": "f11fce8cd480ba13c4ef44cf61c19b243c5c0288", "title": "New low-voltage class AB/AB CMOS op amp with rail-to-rail input/output swing", "text": "A new low-voltage CMOS Class AB/AB fully differential opamp with rail-to-rail input/output swing and supply voltage lower than two V/sub GS/ drops is presented. The scheme is based on combining floating-gate transistors and Class AB input and output stages. The op amp is characterized by low static power consumption and enhanced slew-rate. Moreover the proposed opamp does not suffer from typical reliability problems related to initial charge trapped in the floating-gate devices. Simulation and experimental results in 0.5-/spl mu/m CMOS technology verify the scheme operating with /spl plusmn/0.9-V supplies and close to rail-to-rail input and output swing."}
{"_id": "f1c47a061547bc1de9ac5e56a12ca173d32313af", "title": "Ultrafast photonic reinforcement learning based on laser chaos", "text": "Reinforcement learning involves decision making in dynamic and uncertain environments and constitutes an important element of artificial intelligence (AI). In this work, we experimentally demonstrate that the ultrafast chaotic oscillatory dynamics of lasers efficiently solve the multi-armed bandit problem (MAB), which requires decision making concerning a class of difficult trade-offs called the exploration\u2013exploitation dilemma. To solve the MAB, a certain degree of randomness is required for exploration purposes. However, pseudorandom numbers generated using conventional electronic circuitry encounter severe limitations in terms of their data rate and the quality of randomness due to their algorithmic foundations. We generate laser chaos signals using a semiconductor laser sampled at a maximum rate of 100 GSample/s, and combine it with a simple decision-making principle called tug of war with a variable threshold, to ensure ultrafast, adaptive, and accurate decision making at a maximum adaptation speed of 1\u2009GHz. We found that decision-making performance was maximized with an optimal sampling interval, and we highlight the exact coincidence between the negative autocorrelation inherent in laser chaos and decision-making performance. This study paves the way for a new realm of ultrafast photonics in the age of AI, where the ultrahigh bandwidth of light wave can provide new value."}
{"_id": "0b68ad65eb35eeb54e858ea427f14985f211a28c", "title": "A generalized maximum entropy approach to bregman co-clustering and matrix approximation", "text": "Co-clustering is a powerful data mining technique with varied applications such as text clustering, microarray analysis and recommender systems. Recently, an information-theoretic co-clustering approach applicable to empirical joint probability distributions was proposed. In many situations, co-clustering of more general matrices is desired. In this paper, we present a substantially generalized co-clustering framework wherein any Bregman divergence can be used in the objective function, and various conditional expectation based constraints can be considered based on the statistics that need to be preserved. Analysis of the co-clustering problem leads to the minimum Bregman information principle, which generalizes the maximum entropy principle, and yields an elegant meta algorithm that is guaranteed to achieve local optimality. Our methodology yields new algorithms and also encompasses several previously known clustering and co-clustering algorithms based on alternate minimization."}
{"_id": "737596df3ceee6e8db69dcaae64decbacda102e6", "title": "Adolescent coping style and behaviors: conceptualization and measurement.", "text": "The developmental tasks associated with adolescence pose a unique set of stressors and strains. Included in the normative tasks of adolescence are developing and identity, differentiating from the family while still staying connected, and fitting into a peer group. The adolescent's adaptation to these and other, often competing demands is achieved through the process of coping which involves cognitive and behavioral strategies directed at eliminating or reducing demands, redefining demands so as to make them more manageable, increasing resources for dealing with demands, and/or managing the tension which is felt as a result of experiencing demands. In this paper, individual copying theory and family stress theory are reviewed to provide a theoretical foundation for assessing adolescent coping. In addition, the development and testing of an adolescent self-report coping inventory, Adolescent Coping Orientation for Problem Experiences (A-COPE) is presented. Gender differences in coping style are presented and discussed. These coping patterns were validated against criterion indices of adolescents' use of cigarettes, liquor, and marijuana using data from a longitudinal study of 505 families with adolescents. The findings are discussed in terms of coping theory and measurement and in terms of adolescent development and substance use."}
{"_id": "7fa2f9b6f3894555aabe025885a29749cfa80f19", "title": "VisTA: Visual Terminology Alignment Tool for Factual Knowledge Aggregation", "text": "The alignment of terminologies can be considered as a kind of \u201ctranslation\u201d between two or more terminologies that aims to enhance the communication among people coming from different domains or expertise. In this paper we introduce the Visual Terminology Alignment tool (VisTA) that enables the exact alignment for RDF/SKOS-like terminologies, suitable for integrating knowledge bases, rather than information retrieval systems. The tool provides a simple and friendly web-based user interface for the alignment between two terminologies, while it visualizes the terminology hierarchies, enables the interactive alignment process, and presents the alignment result. The latter is a native RDF/SKOS graph that interconnects the two terminology graphs, supporting interoperability and extending the search capabilities over the integrated semantic graph, using the broader and exact match properties."}
{"_id": "766c251bd7686dd707acd500e80d7184929035c6", "title": "Evaluating State-of-the-Art Object Detector on Challenging Traffic Light Data", "text": "Traffic light detection (TLD) is a vital part of both intelligent vehicles and driving assistance systems (DAS). General for most TLDs is that they are evaluated on small and private datasets making it hard to determine the exact performance of a given method. In this paper we apply the state-of-the-art, real-time object detection system You Only Look Once, (YOLO) on the public LISA Traffic Light dataset available through the VIVA-challenge, which contain a high number of annotated traffic lights, captured in varying light and weather conditions.,,,,,,The YOLO object detector achieves an AUC of impressively 90.49% for daysequence1, which is an improvement of 50.32% compared to the latest ACF entry in the VIVAchallenge. Using the exact same training configuration as the ACF detector, the YOLO detector reaches an AUC of 58.3%, which is in an increase of 18.13%."}
{"_id": "86585bd7288f41a28eeda883a35be6442224110a", "title": "A Variational Observation Model of 3 D Object for Probabilistic Semantic SLAM", "text": "We present a Bayesian object observation model for complete probabilistic semantic SLAM. Recent studies on object detection and feature extraction have become important for scene understanding and 3D mapping. However, 3D shape of the object is too complex to formulate the probabilistic observation model; therefore, performing the Bayesian inference of the object-oriented features as well as their pose is less considered. Besides, when the robot equipped with an RGB mono camera only observes the projected single view of an object, a significant amount of the 3D shape information is abandoned. Due to these limitations, semantic SLAM and viewpoint-independent loop closure using volumetric 3D object shape is challenging. In order to enable the complete formulation of probabilistic semantic SLAM, we approximate the observation model of a 3D object with a tractable distribution. We also estimate the variational likelihood from the 2D image of the object to exploit its observed single view. In order to evaluate the proposed method, we perform pose and feature estimation, and demonstrate that the automatic loop closure works seamlessly without additional loop detector in various environments."}
{"_id": "d83a48c8bb4324de2f4c701e786a40e264b3cfe1", "title": "The level and nature of autistic intelligence.", "text": "Autistics are presumed to be characterized by cognitive impairment, and their cognitive strengths (e.g., in Block Design performance) are frequently interpreted as low-level by-products of high-level deficits, not as direct manifestations of intelligence. Recent attempts to identify the neuroanatomical and neurofunctional signature of autism have been positioned on this universal, but untested, assumption. We therefore assessed a broad sample of 38 autistic children on the preeminent test of fluid intelligence, Raven's Progressive Matrices. Their scores were, on average, 30 percentile points, and in some cases more than 70 percentile points, higher than their scores on the Wechsler scales of intelligence. Typically developing control children showed no such discrepancy, and a similar contrast was observed when a sample of autistic adults was compared with a sample of nonautistic adults. We conclude that intelligence has been underestimated in autistics."}
{"_id": "7c2cb6c31f7d2c99cb54c9c1b0fc6b1fc045780a", "title": "A theory of biological pattern formation", "text": "One of the elementary processes in morphogenesis is the formation of a spatial pattern of tissue structures, starting from almost homogeneous tissue. It will be shown that relatively simple molecular mechanisms based on auto- and cross catalysis can account for a primary pattern of morphogens to determine pattern formation of the tissue. The theory is based on short range activation, long range inhibition, and a distinction between activator and inhibitor concentrations on one hand, and the densities of their sources on the other. While source density is expected to change slowly, e.g. as an effect of cell differentiation, the concentration of activators and inhibitors can change rapidly to establish the primary pattern; this results from auto- and cross catalytic effects on the sources, spreading by diffusion or other mechanisms, and degradation. Employing an approximative equation, a criterium is derived for models, which lead to a striking pattern, starting from an even distribution of morphogens, and assuming a shallow source gradient. The polarity of the pattern depends on the direction of the source gradient, but can be rather independent of other features of source distribution. Models are proposed which explain size regulation (constant proportion of the parts of the pattern irrespective of total size). Depending on the choice of constants, aperiodic patterns, implying a one-to-one correlation between morphogen concentration and position in the tissue, or nearly periodic patterns can be obtained. The theory can be applied not only to multicellular tissues, but also to intracellular differentiation, e.g. of polar cells. The theory permits various molecular interpretations. One of the simplest models involves bimolecular activation and monomolecular inhibition. Source gradients may be substituted by, or added to, sink gradients, e.g. of degrading enzymes. Inhibitors can be substituted by substances required for, and depleted by activation. Sources may be either synthesizing systems or particulate structures releasing activators and inhibitors. Calculations by computer are presented to exemplify the main features of the theory proposed. The theory is applied to quantitative data on hydra \u2014 a suitable one-dimensional model for pattern formation \u2014 and is shown to account for activation and inhibition of secondary head formation."}
{"_id": "6a7384bf0d319d19bbbbf578ac7052bb72ef940c", "title": "Topological Value Iteration Algorithm for Markov Decision Processes", "text": "Value Iteration is an inefficient algorithm for Markov decision processes (MDPs) because it puts the majority of its effort into backing up the entire state space, which turns out to be unnecessary in many cases. In order to overcome this problem, many approaches have been proposed. Among them, LAO*, LRTDP and HDP are state-of-theart ones. All of these use reachability analysis and heuristics to avoid some unnecessary backups. However, none of these approaches fully exploit the graphical features of the MDPs or use these features to yield the best backup sequence of the state space. We introduce an algorithm named Topological Value Iteration (TVI) that can circumvent the problem of unnecessary backups by detecting the structure of MDPs and backing up states based on topological sequences. We prove that the backup sequence TVI applies is optimal. Our experimental results show that TVI outperforms VI, LAO*, LRTDP and HDP on our benchmark MDPs."}
{"_id": "0aba945c25c1f71413746f550aa81587db37c42a", "title": "On the segmentation of 3D LIDAR point clouds", "text": "This paper presents a set of segmentation methods for various types of 3D point clouds. Segmentation of dense 3D data (e.g. Riegl scans) is optimised via a simple yet efficient voxelisation of the space. Prior ground extraction is empirically shown to significantly improve segmentation performance. Segmentation of sparse 3D data (e.g. Velodyne scans) is addressed using ground models of non-constant resolution either providing a continuous probabilistic surface or a terrain mesh built from the structure of a range image, both representations providing close to real-time performance. All the algorithms are tested on several hand labeled data sets using two novel metrics for segmentation evaluation."}
{"_id": "13dd25c5e7df2b23ec9a168a233598702c2afc97", "title": "Efficient Graph-Based Image Segmentation", "text": "This paper addresses the problem of segmenting an image into regions. We define a predicate for measuring the evidence for a boundary between two regions using a graph-based representation of the image. We then develop an efficient segmentation algorithm based on this predicate, and show that although this algorithm makes greedy decisions it produces segmentations that satisfy global properties. We apply the algorithm to image segmentation using two different kinds of local neighborhoods in constructing the graph, and illustrate the results with both real and synthetic images. The algorithm runs in time nearly linear in the number of graph edges and is also fast in practice. An important characteristic of the method is its ability to preserve detail in low-variability image regions while ignoring detail in high-variability regions."}
{"_id": "30c3e410f689516983efcd780b9bea02531c387d", "title": "Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes", "text": "We present a 3-D shape-based object recognition system for simultaneous recognition of multiple objects in scenes containing clutter and occlusion. Recognition is based on matching surfaces by matching points using the spin-image representation. The spin-image is a data level shape descriptor that is used to match surfaces represented as surface meshes. We present a compression scheme for spin-images that results in efficient multiple object recognition which we verify with results showing the simultaneous recognition of multiple objects from a library of 20 models. Furthermore, we demonstrate the robust performance of recognition in the presence of clutter and occlusion through analysis of recognition trials on 100 scenes. This research was performed at Carnegie Mellon University and was supported by the US Department of Energy under contract DE-AC21-92MC29104. 1"}
{"_id": "5f0d86c9c5b7d37b4408843aa95119bf7771533a", "title": "A Method for Registration of 3-D Shapes", "text": "Tianhe Yang Summary of \" A Method for Registration of 3-D Shapes \""}
{"_id": "befad8e606dcf37e5e0883367d2d7d6fac81513c", "title": "Underwater Vehicle Obstacle Avoidance and Path Planning Using a MultiBeam Forward Looking Sonar", "text": "This paper describes a new framework for segmentation of sonar images, tracking of underwater objects and motion estimation. This framework is applied to the design of an obstacle avoidance and path planning system for underwater vehicles based on a multi-beam forward looking sonar sensor. The real-time data flow (acoustic images) at the input of the system is first segmented and relevant features are extracted. We also take advantage of the real-time data stream to track the obstacles in following frames to obtain their dynamic characteristics. This allows us to optimize the preprocessing phases in segmenting only the relevant part of the images. Once the static (size and shape) as well as dynamic characteristics (velocity, acceleration, ...) of the obstacles have been computed, we create a representation of the vehicle\u2019s workspace based on these features. This representation uses constructive solid geometry(CSG) to create a convex set of obstacles defining the workspace. The tracking takes also into account obstacles which are no longer in the field of view of the sonar in the path planning phase. A well-proven nonlinear search (sequential quadratic programming) is then employed, where obstacles are expressed as constraints in the search space. This approach is less affected by local minima than classical methods using potential fields. The proposed system is not only capable of obstacle avoidance but also of path planning in complex environments which include fast moving obstacles. Results obtained on real sonar data are shown and discussed. Possible applications to sonar servoing and real-time motion estimation are also discussed."}
{"_id": "60036886e7a44d2c254fb50f8b8aa480d6659ee0", "title": "Finding the shortest paths by node combination", "text": "By repeatedly combining the source node's nearest neighbor, we propose a node combination (NC) method to implement the Dijkstra's algorithm. The NC algorithm finds the shortest paths with three simple iterative steps: find the nearest neighbor of the source node, combine that node with the source node, and modify the weights on edges that connect to the nearest neighbor. The NC algorithm is more comprehensible and convenient for programming as there is no need to maintain a set with the nodes' distances. Experimental evaluations on various networks reveal that the NC algorithm is as efficient as Dijkstra's algorithm. As the whole process of the NC algorithm can be implemented with vectors, we also show how to find the shortest paths on a weight matrix. } is the set of nodes, E = {e ij j if there is a link from v i to v j } is the set of edges and W = {w ij j 1 6 i, j 6 N} is the weight matrix for E. Given two nodes v s , v t of G, the shortest path problem can be defined as how to find a path with the minimum sum of the weights on the edges in a v s , v t-path. Generally, v s and v t are called source node and sink node, respectively. The shortest path problem is one of the most fundamental network optimization problems with widespread applications [1\u20134]. Among the various shortest path algorithms developed [5\u201312], Dijkstra's algorithm is probably the most well-known. It maintains a set S of solved nodes, comprising those nodes whose final shortest path distance from the source v s has determined, and labels d(i), storing the upper bound of the shortest path distance from v s to v i. The algorithm repeatedly selects the node v k 2 VnS with the minimum d(i), adds v k to S, and updates d(i) for nodes that are incident to v k Step 1. Select a node v k from Q such that d\u00f0v k \u00de \u00bc min v j 2Q d\u00f0v j \u00de, if d(v k) = 1, stop, otherwise go to Step 2. Step 3. for every v j 2 Q, update d(v j) = min{d(v j), d(v k) + w kj }. Go to Step 1. In practice, Dijkstra's algorithm relies heavily on the strategies used to select the next minimum labeled \u2026"}
{"_id": "0af7d6ed75ec9436426edfb75c0aeb774df5607a", "title": "User friendly SLAM initialization", "text": "The development of new Simultaneous Localization and Mapping (SLAM) techniques is quickly advancing in research communities and rapidly transitioning into commercial products. Creating accurate and high-quality SLAM maps relies on a robust initialization process. However, the robustness and usability of SLAM initialization for end-users has often been disregarded. This paper presents and evaluates a novel tracking system for 6DOF pose tracking between a single keyframe and the current camera frame, without any prior scene knowledge. Our system is particularly suitable for SLAM initialization, since it allows 6DOF pose tracking in the intermediate frames before a wide-enough baseline between two keyframes has formed. We investigate how our tracking system can be used to interactively guide users in performing an optimal motion for SLAM initialization. However, our findings from a pilot study indicate that the need for such motion can be completely hidden from the user and outsourced to our tracking system. Results from a second user study show that letting our tracking system create a SLAM map as soon as possible is a viable and usable solution. Our work provides important insight for SLAM systems, showing how our novel tracking system can be integrated with a user interface to support fast, robust and user-friendly SLAM initialization."}
{"_id": "90520ec673a792634f1c7765fb93372c288f4816", "title": "Online Dictionary Learning on Symmetric Positive Definite Manifolds with Vision Applications", "text": "Symmetric Positive Definite (SPD) matrices in the form of region covariances are considered rich descriptors for images and videos. Recent studies suggest that exploiting the Riemannian geometry of the SPD manifolds could lead to improved performances for vision applications. For tasks involving processing large-scale and dynamic data in computer vision, the underlying model is required to progressively and efficiently adapt itself to the new and unseen observations. Motivated by these requirements, this paper studies the problem of online dictionary learning on the SPD manifolds. We make use of the Stein divergence to recast the problem of online dictionary learning on the manifolds to a problem in Reproducing Kernel Hilbert Spaces, for which, we develop efficient algorithms by taking into account the geometric structure of the SPD manifolds. To our best knowledge, our work is the first study that provides a solution for online dictionary learning on the SPD manifolds. Empirical results on both large-scale image classification task and dynamic video processing tasks validate the superior performance of our approach as compared to several state-of-the-art algorithms."}
{"_id": "d05507aec2e217a1f3ba35153153a64e135fb42c", "title": "Web Content Analysis: Expanding the Paradigm", "text": "Are established methods of content analysis (CA) adequate to analyze web content, or should new methods be devised to address new technological developments? This chapter addresses this question by contrasting narrow and broad interpretations of the concept of web content analysis. The utility of a broad interpretation that subsumes the narrow one is then illustrated with reference to research on weblogs (blogs), a popular web format in which features of HTML documents and interactive computer-mediated communication converge. The chapter concludes by proposing an expanded Web Content Analysis (WebCA) paradigm in which insights from paradigms such as discourse analysis and social network analysis are operationalized and implemented within a general content analytic framework."}
{"_id": "35c473bae9d146072625cc3d452c8f6b84c8cc47", "title": "ZoomNet: Deep Aggregation Learning for High-Performance Small Pedestrian Detection", "text": "It remains very challenging for a single deep model to detect pedestrians of different sizes appears in an image. One typical remedy for the small pedestrian detection is to upsample the input and pass it to the network multiple times. Unfortunately this strategy not only exponentially increases the computational cost but also probably impairs the model effectiveness. In this work, we present a deep architecture, refereed to as ZoomNet, which performs small pedestrian detection by deep aggregation learning without up-sampling the input. ZoomNet learns and aggregates deep feature representations at multiple levels and retains the spatial information of the pedestrian from different scales. ZoomNet also learns to cultivate the feature representations from the classification task to the detection task and obtains further performance improvements. Extensive experimental results demonstrate the state-of-the-art performance of ZoomNet. The source code of this work will be made public available to facilitate further studies on this problem."}
{"_id": "e28935d4570b8e3c67b16080fc533bceffff4548", "title": "On Oblique Random Forests", "text": "Abstract. In his original paper on random forests, Breiman proposed two different decision tree ensembles: one generated from \u201corthogonal\u201d trees with thresholds on individual features in every split, and one from \u201coblique\u201d trees separating the feature space by randomly oriented hyperplanes. In spite of a rising interest in the random forest framework, however, ensembles built from orthogonal trees (RF) have gained most, if not all, attention so far. In the present work we propose to employ \u201coblique\u201d random forests (oRF) built from multivariate trees which explicitly learn optimal split directions at internal nodes using linear discriminative models, rather than using random coefficients as the original oRF. This oRF outperforms RF, as well as other classifiers, on nearly all data sets but those with discrete factorial features. Learned node models perform distinctively better than random splits. An oRF feature importance score shows to be preferable over standard RF feature importance scores such as Gini or permutation importance. The topology of the oRF decision space appears to be smoother and better adapted to the data, resulting in improved generalization performance. Overall, the oRF propose here may be preferred over standard RF on most learning tasks involving numerical and spectral data."}
{"_id": "524dd11ce1249bae235bf06de89621c59c18286e", "title": "The ATILF-LLF System for Parseme Shared Task: a Transition-based Verbal Multiword Expression Tagger", "text": "We describe the ATILF-LLF system built for the MWE 2017 Shared Task on automatic identification of verbal multiword expressions. We participated in the closed track only, for all the 18 available languages. Our system is a robust greedy transition-based system, in which MWE are identified through a MERGE transition. The system was meant to accommodate the variety of linguistic resources provided for each language, in terms of accompanying morphological and syntactic information. Using per-MWE Fscore, the system was ranked first1 for all but two languages (Hungarian and Romanian)."}
{"_id": "449b47c55dac9cae588086dde9249caa230e01b1", "title": "Indistinguishability of Random Systems", "text": "An (X ,Y)-random system takes inputs X1, X2, . . . \u2208 X and generates, for each new input Xi, an output Yi \u2208 Y, depending probabilistically on X1, . . . , Xi and Y1, . . . , Yi\u22121. Many cryptographic systems like block ciphers, MAC-schemes, pseudo-random functions, etc., can be modeled as random systems, where in fact Yi often depends only on Xi, i.e., the system is stateless. The security proof of such a system (e.g. a block cipher) amounts to showing that it is indistinguishable from a certain perfect system (e.g. a random permutation). We propose a general framework for proving the indistinguishability of two random systems, based on the concept of the equivalence of two systems, conditioned on certain events. This abstraction demonstrates the common denominator among many security proofs in the literature, allows to unify, simplify, generalize, and in some cases strengthen them, and opens the door to proving new indistinguishability results. We also propose the previously implicit concept of quasi-randomness and give an efficient construction of a quasi-random function which can be used as a building block in cryptographic systems based on pseudorandom functions."}
{"_id": "195c63fd6b6c9e7f1b5f6920da5a8de4de896f4a", "title": "CEC : Research in visualization techniques for field construction", "text": "Field construction can be planned, monitored, and controlled at two distinct levels: 1) the activity or schedule level; and 2) the operation or process level. Graphical 3D visualization can serve as an effective communication method at both levels. Several research efforts in visualizing construction are rooted in scheduling. They typically involve linking activity-based construction schedules and 3D CAD models of facilities to describe discretely-evolving construction \u201cproduct\u201d visualizations (often referred to as 4D CAD). The focus is on communicating what component(s) are built where and when, with the intention of studying the optimal activity sequence, spatial, and temporal interferences, etc. The construction processes or operations actually involved in building the components are usually implied. A second approach in visualizing construction is rooted in discrete-event simulation that, in addition to visualizing evolving construction products, also concerns the visualization of the operations and processes that are performed in building them. In addition to what is built where and when, the approach communicates who builds it and how by depicting the interaction between involved machines, resources, and materials. This paper introduces the two approaches, and describes the differences in concept, form, and content between activity level and operations level construction visualization. An example of a structural steel framing operation is presented to elucidate the comparison. This work was originally published in the proceedings of the 2002 IEEE Winter Simulation Conference (Kamat and Martinez 2002). This paper expands on the original work, by describing recent advances in both activity and operations level construction visualization."}
{"_id": "b3e3fc76e4f0a73de2562d88a64d864d0236ba26", "title": "Brain\u2014Computer Interface", "text": "Human-computer interfaces (HCIs) have become ubiquitous. Interfaces such as keyboards and mouses are used daily while interacting with computing devices (Ebrahimi et al., 2003). There is a developing need, however, for HCIs that can be used in situations where these typical interfaces are not viable. Direct brain-computer interfaces (BCI) is a developing field that has been adding this new dimension of functionality to HCI. BCI has created a novel communication channel, especially for those users who are unable to generate necessary muscular movements to use typical HCI devices."}
{"_id": "dca9e15cde7665415b88486ae182c77ce8806b26", "title": "Human Performance Issues and User Interface Design for Teleoperated Robots", "text": "In the future, it will become more common for humans to team up with robotic systems to perform tasks that humans cannot realistically accomplish alone. Even for autonomous and semiautonomous systems, teleoperation will be an important default mode. However, teleoperation can be a challenging task because the operator is remotely located. As a result, the operator's situation awareness of the remote environment can be compromised and the mission effectiveness can suffer. This paper presents a detailed examination of more than 150 papers covering human performance issues and suggested mitigation solutions. The paper summarizes the performance decrements caused by video images bandwidth, time lags, frame rates, lack of proprioception, frame of reference, two-dimensional views, attention switches, and motion effects. Suggested solutions and their limitations include stereoscopic displays, synthetic overlay, multimodal interfaces, and various predicative and decision support systems."}
{"_id": "ada5876e216130cdd7ad6e44539849049dd2de39", "title": "Study on advantages and disadvantages of Cloud Computing \u2013 the advantages of Telemetry Applications in the Cloud", "text": "As companies of all shapes and sizes begin to adapt to cloud computing, this new technology is evolving like never before. Industry experts believe that this trend will only continue to grow and develop even further in the coming few years. While Cloud computing is undoubtedly beneficial for mid-size to large companies, it is not without its downsides, especially for smaller businesses. In this paper we are presenting a list of advantages and disadvantages of Cloud computing technology, with a view to helping enterprises fully understand and adopt the concept of Cloud computing. Also, in the last chapter we are presenting a cloud application for telemetry with a focus on monitoring hydro-energy, in order to demonstrate the advantages that cloud technology can have for this domain. We consider that the way to make cloud vastly benefit all types of businesses is to know very well it's ups and downs and adapt to them accordingly. Key-Words: Cloud Computing, Grid Computing, telemetry, architecture, advantages, disadvantages."}
{"_id": "396514fb219879a4a18762cddfae2a6a607f439f", "title": "Finding a Needle in Haystack: Facebook's Photo Storage", "text": "Facebook needed an architecture to address the need of storing metadata to find photos on storage machines efficiently in memory. They realized that storing a single photo in a single file would prevent them from achieving this goal; thus, they chose to store multiple photos in a single file, reducing the metadata overhead and making it more possible to store the metadata in memory. They ultimately came up with an architecture that consists of a Haystack Store (the persistent storage), Haystack Directory (used to locate photos and manage storage), and Haystack Cache (used to provide quick access to photos not found in the CDN but generating a large number of requests from users). Their architecture allowed them to choose whether to use a CDN or not, but in either case, the Cache was used to address the issue of the \u201clong tail\u201d problem, where less popular photos still generate requests, and does so by having any request from the user directly be stored in the Cache or by having photos fetched from a write-enabled machine to be stored in the Cache. Haystack addressed the potential of failures wiping inmemory data structures by storing these structures in index files. These index files are updated asynchronously of writes preventing performance decrease. To efficiently handle modifications, Haystack chose to append modified files to the end of the file as well as record file to delete the old version of the file. The old version of the file, along with any files marked deleted through delete requests are deleted through an operation called compaction, which takes files not marked deleted or only new versions of modified file and stores it in a new file on disk. This way, deletes and modifications are handled asynchronously of writes and thus is efficient."}
{"_id": "39373d7c6d85f6189dd798e3c419d9aac3ad04db", "title": "Similarity Preserving Snippet-Based Visualization of Web Search Results", "text": "Internet users are very familiar with the results of a search query displayed as a ranked list of snippets. Each textual snippet shows a content summary of the referred document (or webpage) and a link to it. This display has many advantages, for example, it affords easy navigation and is straightforward to interpret. Nonetheless, any user of search engines could possibly report some experience of disappointment with this metaphor. Indeed, it has limitations in particular situations, as it fails to provide an overview of the document collection retrieved. Moreover, depending on the nature of the query for example, it may be too general, or ambiguous, or ill expressed the desired information may be poorly ranked, or results may contemplate varied topics. Several search tasks would be easier if users were shown an overview of the returned documents, organized so as to reflect how related they are, content wise. We propose a visualization technique to display the results of web queries aimed at overcoming such limitations. It combines the neighborhood preservation capability of multidimensional projections with the familiar snippet-based representation by employing a multidimensional projection to derive two-dimensional layouts of the query search results that preserve text similarity relations, or neighborhoods. Similarity is computed by applying the cosine similarity over a \"bag-of-wordsa\u0302' vector representation of collection built from the snippets. If the snippets are displayed directly according to the derived layout, they will overlap considerably, producing a poor visualization. We overcome this problem by defining an energy functional that considers both the overlapping among snippets and the preservation of the neighborhood structure as given in the projected layout. Minimizing this energy functional provides a neighborhood preserving two-dimensional arrangement of the textual snippets with minimum overlap. The resulting visualization conveys both a global view of the query results and visual groupings that reflect related results, as illustrated in several examples shown."}
{"_id": "bae7025af534d5f8b75de243b03311af4645bc48", "title": "The Tale of e-Government: A Review of the Stories that Have Been Told So Far and What is Yet to Come", "text": "Since its first appearance, the concept of eGovernment has evolved into a recognized means that has helped the public sector to increase its efficiency and effectiveness. A lot of research has therefore been done in this area to elaborate on the different aspects encompassing this concept. However, when looking at the existing e-Government literature, research mostly focuses on one specific aspect of e-Government and there are few generic publications that provide an overview of the diversity of this interdisciplinary research field over a longer term period. This study analyzes the abstracts of eight e-Government journals from 2000 to 2016 by means of a quantitative text mining analysis, backed by a qualitative Delphi approach. The article concludes with a discussion on the findings and implications as well as directions for future research."}
{"_id": "ac3a287c972b9fda317e918c864bb3b738ed4566", "title": "Public health implications of altered puberty timing.", "text": "Changes in puberty timing have implications for the treatment of individual children, for the risk of later adult disease, and for chemical testing and risk assessment for the population. Children with early puberty are at a risk for accelerated skeletal maturation and short adult height, early sexual debut, potential sexual abuse, and psychosocial difficulties. Altered puberty timing is also of concern for the development of reproductive tract cancers later in life. For example, an early age of menarche is a risk factor for breast cancer. A low age at male puberty is associated with an increased risk for testicular cancer according to several, but not all, epidemiologic studies. Girls and, possibly, boys who exhibit premature adrenarche are at a higher risk for developing features of metabolic syndrome, including obesity, type 2 diabetes, and cardiovascular disease later in adulthood. Altered timing of puberty also has implications for behavioral disorders. For example, an early maturation is associated with a greater incidence of conduct and behavior disorders during adolescence. Finally, altered puberty timing is considered an adverse effect in reproductive toxicity risk assessment for chemicals. Recent US legislation has mandated improved chemical testing approaches for protecting children's health and screening for endocrine-disrupting agents, which has led to changes in the US Environmental Protection Agency's risk assessment and toxicity testing guidelines to include puberty-related assessments and to the validation of pubertal male and female rat assays for endocrine screening."}
{"_id": "242a3f4c68e6153debf889560390cd4c75900924", "title": "Effect of written emotional expression on immune function in patients with human immunodeficiency virus infection: a randomized trial.", "text": "OBJECTIVES\nTo determine whether writing about emotional topics compared with writing about neutral topics could affect CD4+ lymphocyte count and human immunodeficiency virus (HIV) viral load among HIV-infected patients.\n\n\nMETHODS\nThirty-seven HIV-infected patients were randomly allocated to 2 writing conditions focusing on emotional or control topics. Participants wrote for 4 days, 30 minutes per day. The CD4+ lymphocyte count and HIV viral load were measured at baseline and at 2 weeks, 3 months, and 6 months after writing.\n\n\nRESULTS\nThe emotional writing participants rated their essays as more personal, valuable, and emotional than those in the control condition. Relative to the drop in HIV viral load, CD4+ lymphocyte counts increased after the intervention for participants in the emotional writing condition compared with control writing participants.\n\n\nCONCLUSIONS\nThe results are consistent with those of previous studies using emotional writing in other patient groups. Based on the self-reports of the value of writing and the preliminary laboratory findings, the results suggest that emotional writing may provide benefit for patients with HIV infection."}
{"_id": "9457b48cf5932d9528a32f36e23ab003b54e17ac", "title": "A Quantitative Comparison of Semantic Web Page Segmentation Approaches", "text": "This paper explores the effectiveness of different semantic web page segmentation algorithms on modern websites. We compare three known algorithms each serving as an example of a particular approach to the problem, and one self-developed algorithm, WebTerrain, that combines two of the approaches. With our testing framework we have compared the performance of four algorithms for a large benchmark we have constructed. We have examined each algorithm for a total of eight different configurations (varying datasets, evaluation metric and the type of the input HTML documents). We found that all algorithms performed better on random pages on average than on popular pages, and results are better when running the algorithms on the HTML obtained from the DOM rather than on the plain HTML. Overall there is much room for improvement as we find the best average F-score to be 0.49, indicating that for modern websites currently available algorithms are not yet of practical use."}
{"_id": "108520cc4be3b7558fc22b8287999f921f01b34d", "title": "Smith-Petersen osteotomy in thoracolumbar deformity surgery.", "text": "OBJECTIVE\nTo describe the technique and indications of a Smith-Petersen osteotomy in spinal deformity surgery.\n\n\nMETHODS\nPertinent literature was reviewed to describe the indications and reported complications of this corrective technique.\n\n\nRESULTS\nThe operative nuances of the technique are described.\n\n\nCONCLUSION\nA Smith-Petersen osteotomy is a safe and effective surgical technique to obtain correction of spinal deformity in both the sagittal and coronal planes."}
{"_id": "36445111c9f9eb6763fedab5294aca792519f925", "title": "Ensembles for unsupervised outlier detection: challenges and research questions a position paper", "text": "Ensembles for unsupervised outlier detection is an emerging topic that has been neglected for a surprisingly long time (although there are reasons why this is more difficult than supervised ensembles or even clustering ensembles). Aggarwal recently discussed algorithmic patterns of outlier detection ensembles, identified traces of the idea in the literature, and remarked on potential as well as unlikely avenues for future transfer of concepts from supervised ensembles. Complementary to his points, here we focus on the core ingredients for building an outlier ensemble, discuss the first steps taken in the literature, and identify challenges for future research."}
{"_id": "3fc840191f8358def2aa745cca1c0425cf2c5938", "title": "Efficient SIMD code generation for runtime alignment and length conversion", "text": "When generating codes for today's multimedia extensions, one of the major challenges is to deal with memory alignment issues. While hand programming still yields best performing SIMD codes, it is both time consuming and error prone. Compiler technology has greatly improved, including techniques that simdize loops with misaligned accesses by automatically rearranging mis-aligned memory streams in registers. Current techniques are applicable to runtime alignments, but they aggressively reduce the alignment overhead only when all alignments are known at compile time. This paper presents two major enhancements to the state of the art, improving both performance and coverage. First, we propose a novel technique to simdize loops with runtime alignment nearly as efficiently as those with compile-time misalignment. Runtime alignment is pervasive in real applications because it is either part of the algorithms, or it is an artifact of the compiler's inability to extract accurate alignment information from complex applications. Second, we incorporate length conversion operations, e.g., conversions between data of different sizes, into the alignment handling framework. Length conversions are pervasive in multimedia applications where mixed integer types are often used. Supporting length conversion can greatly improve the coverage of simdizable loops. Experimental results indicate that our runtime alignment technique achieves a 19% to 32% speedup increase over prior art for a benchmark stressing the impact of misaligned data. We also demonstrate speedup factors of up to 8.11 for real benchmarks over sequential execution."}
{"_id": "e7830a70f9170eccc088b5e70523b4aa1cc03b6a", "title": "Turning the Waiting Room into a Classroom: Weekly Classes Using a Vegan or a Portion-Controlled Eating Plan Improve Diabetes Control in a Randomized Translational Study.", "text": "BACKGROUND\nIn research settings, plant-based (vegan) eating plans improve diabetes management, typically reducing weight, glycemia, and low-density lipoprotein (LDL) cholesterol concentrations to a greater extent than has been shown with portion-controlled eating plans.\n\n\nOBJECTIVE\nThe study aimed to test whether similar benefits could be found using weekly nutrition classes in a typical endocrinology practice, hypothesizing that a vegan eating plan would improve glycemic control, weight, lipid concentrations, blood pressure, and renal function and would do so more effectively than a portion-controlled eating plan.\n\n\nDESIGN\nIn a 20-week trial, participants were randomly assigned to a low-fat vegan or portion-controlled eating plan.\n\n\nPARTICIPANTS/SETTING\nIndividuals with type 2 diabetes treated in a single endocrinology practice in Washington, DC, participated (45 starters, 40 completers).\n\n\nINTERVENTION\nParticipants attended weekly after-hours classes in the office waiting room. The vegan plan excluded animal products and added oils and favored low-glycemic index foods. The portion-controlled plan included energy intake limits for weight loss (typically a deficit of 500 calories/day) and provided guidance on portion sizes.\n\n\nMAIN OUTCOME MEASURES\nBody weight, hemoglobin A1c (HbA1c), plasma lipids, urinary albumin, and blood pressure were measured.\n\n\nSTATISTICAL ANALYSES PERFORMED\nFor normally distributed data, t tests were used; for skewed outcomes, rank-based approaches were implemented (Wilcoxon signed-rank test for within-group changes, Wilcoxon two-sample test for between-group comparisons, and exact Hodges-Lehmann estimation to estimate effect sizes).\n\n\nRESULTS\nAlthough participants were in generally good metabolic control at baseline, body weight, HbA1c, and LDL cholesterol improved significantly within each group, with no significant differences between the two eating plans (weight:\u00a0-6.3 kg vegan,\u00a0-4.4 kg portion-controlled, between-group P=0.10; HbA1c,\u00a0-0.40 percentage point in both groups, P=0.68; LDL cholesterol\u00a0-11.9 mg/dL vegan,\u00a0-12.7 mg/dL portion-controlled, P=0.89). Mean urinary albumin was normal at baseline and did not meaningfully change. Blood pressure changes were not significant.\n\n\nCONCLUSIONS\nWeekly classes, integrated into a clinical practice and using either a low-fat vegan or portion-controlled eating plan, led to clinical improvements in individuals with type 2 diabetes."}
{"_id": "8a6ab194c992f33b7066f6866f5339e57fcfd31a", "title": "On Two Metaphors for Learning and the Dangers of Choosing Just One", "text": "The upshots of the former section can be put as follows: All our concepts and beliefs have their roots in a limited number of fundamental ideas that cross disciplinary boundaries and are carried from one domain to another by the language we use. One glance at the current discourse on learning should be enough to realize that nowadays educational research is caught between two metaphors that, in this article, will be called the acquisition metaphor and the participation metaphor. Both of these metaphors are simultaneously present in most recent texts, but while the acquisition metaphor is likely to be more prominent in older writings, more recent studies are often dominated by the participation metaphor."}
{"_id": "c90c3c10cba3c4700298ab2883c2bfecd7401fae", "title": "Visual feature integration and the temporal correlation hypothesis.", "text": "The mammalian visual system is endowed with a nearly infinite capacity for the recognition of patterns and objects. To have acquired this capability the visual system must have solved what is a fundamentally combinatorial problem. Any given image consists of a collection of features, consisting of local contrast borders of luminance and wavelength, distributed across the visual field. For one to detect and recognize an object within a scene, the features comprising the object must be identified and segregated from those comprising other objects. This problem is inherently difficult to solve because of the combinatorial nature of visual images. To appreciate this point, consider a simple local feature such as a small vertically oriented line segment placed within a fixed location of the visual field. When combined with other line segments, this feature can form a nearly infinite number of geometrical objects. Any one of these objects may coexist with an equally large number of other"}
{"_id": "860d51e4bcaf1b6519d36772ccf645861dded118", "title": "Virgin soil in irony research : Personality , humor , and the \u201c sense of irony \u201d", "text": "he aim of the paper is fourfold: (a) show why humor scholars should study irony, (b) explore the need for considering interindividual differences in healthy adults\u2019 irony performance, (c) stress the necessity for developing tools assessing habitual differences in irony performance, and (d) indicate future directions for joint irony and humor research and outline possible applications. Verbal irony is often employed with a benevolent humorous intent by speakers, but can also serve as a means of disparagement humor. In both cases, encoding and decoding activities entailing irony need to be considered in the context of the psychology of humor. We argue that verbal irony performance can be considered a phenomenon native to the realm of humor and individual differences. We point out that research has widely neglected the meaningfulness of variance in irony performance within experimental groups when looking at determinants of irony detection and production. Based on theoretical considerations and previous empirical findings we show that this variance can be easily related to individual-differences variables such as the sense of humor, dispositions toward laughter and ridicule (e.g., gelotophobia), and general mental ability. Furthermore, we hypothesize that there is an enduring trait determining irony performance we will label the sense of irony. The sense of irony possibly goes along with inclinations toward specific affective and cognitive processing patterns when dealing with verbal irony. As an application, novel irony performance tests can help to study psychological and neurophysiological correlates of irony performance more feasibly, that is, in nonclinical groups. DOI: https://doi.org/10.1037/tps0000054 Posted at the Zurich Open Repository and Archive, University of Zurich ZORA URL: https://doi.org/10.5167/uzh-124094 Accepted Version Originally published at: Bruntsch, Richard; Hofmann, Jennifer; Ruch, Willibald (2016). Virgin soil in irony research: Personality, humor, and the \u201csense of irony\u201d. Translational Issues in Psychological Science, 2(1):25-34. DOI: https://doi.org/10.1037/tps0000054 VIRGIN SOIL IN IRONY RESEARCH 1 Running head: VIRGIN SOIL IN IRONY RESEARCH Virgin soil in irony research: Personality, humor, and the \u201csense of irony\u201d"}
{"_id": "0cf7da0df64557a4774100f6fde898bc4a3c4840", "title": "Shape matching and object recognition using low distortion correspondences", "text": "We approach recognition in the framework of deformable shape matching, relying on a new algorithm for finding correspondences between feature points. This algorithm sets up correspondence as an integer quadratic programming problem, where the cost function has terms based on similarity of corresponding geometric blur point descriptors as well as the geometric distortion between pairs of corresponding feature points. The algorithm handles outliers, and thus enables matching of exemplars to query images in the presence of occlusion and clutter. Given the correspondences, we estimate an aligning transform, typically a regularized thin plate spline, resulting in a dense correspondence between the two shapes. Object recognition is then handled in a nearest neighbor framework where the distance between exemplar and query is the matching cost between corresponding points. We show results on two datasets. One is the Caltech 101 dataset (Fei-Fei, Fergus and Perona), an extremely challenging dataset with large intraclass variation. Our approach yields a 48% correct classification rate, compared to Fei-Fei et al 's 16%. We also show results for localizing frontal and profile faces that are comparable to special purpose approaches tuned to faces."}
{"_id": "136b9952f29632ab3fa2bbf43fed277204e13cb5", "title": "SUN database: Large-scale scene recognition from abbey to zoo", "text": "Scene categorization is a fundamental problem in computer vision. However, scene understanding research has been constrained by the limited scope of currently-used databases which do not capture the full variety of scene categories. Whereas standard databases for object categorization contain hundreds of different classes of objects, the largest available dataset of scene categories contains only 15 classes. In this paper we propose the extensive Scene UNderstanding (SUN) database that contains 899 categories and 130,519 images. We use 397 well-sampled categories to evaluate numerous state-of-the-art algorithms for scene recognition and establish new bounds of performance. We measure human scene classification performance on the SUN database and compare this with computational methods. Additionally, we study a finer-grained scene representation to detect scenes embedded inside of larger scenes."}
{"_id": "1e6fbe7e923ffadd22bf63ce80997bf248659a1d", "title": "Shape matching and object recognition using shape contexts", "text": "This paper presents my work on computing shape models that are computationally fast and invariant basic transformations like translation, scaling and rotation. In this paper, I propose shape detection using a feature called shape context. Shape context describes all boundary points of a shape with respect to any single boundary point. Thus it is descriptive of the shape of the object. Object recognition can be achieved by matching this feature with a priori knowledge of the shape context of the boundary points of the object. Experimental results are promising on handwritten digits, trademark images."}
{"_id": "61f20cd6ec2c9f9ecad8dd9216eddab3ad606fb0", "title": "Backward Stochastic Differential Equations and Applications", "text": "A new type of stochastic differential equation, called the backward stochastic differentil equation (BSDE), where the value of the solution is prescribed at the final (rather than the initial) point of the time interval, but the solution is nevertheless required to be at each time a function of the past of the underlying Brownian motion, has been introduced recently, independently by Peng and the author in [16], and by Dufne and Epstein in [7]. This class of equations is a natural nonlinear extension of linear equations that appear both as the equation for the adjoint process in the maximum principle for optimal stochastic control (see [2]), and as a basic model for asset pricing in financial mathematics. It was soon after discovered (see [22], [17]) that those BSDEs provide probabilistic formulas for solutions of certain semilinear partial differential equations (PDEs), which generalize the well-known Feynmann-Kac formula for second order linear PDEs. This provides a new additional tool for analyzing solutions of certain PDEs, for instance reaction-diffusion equations."}
{"_id": "79dd787b2877cf9ce08762d702589543bda373be", "title": "Face detection using SURF cascade", "text": "We present a novel boosting cascade based face detection framework using SURF features. The framework is derived from the well-known Viola-Jones (VJ) framework but distinguished by two key contributions. First, the proposed framework deals with only several hundreds of multidimensional local SURF patches instead of hundreds of thousands of single dimensional haar features in the VJ framework. Second, it takes AUC as a single criterion for the convergence test of each cascade stage rather than the two conflicting criteria (false-positive-rate and detection-rate) in the VJ framework. These modifications yield much faster training convergence and much fewer stages in the final cascade. We made experiments on training face detector from large scale database. Results shows that the proposed method is able to train face detectors within one hour through scanning billions of negative samples on current personal computers. Furthermore, the built detector is comparable to the state-of-the-art algorithm not only on the accuracy but also on the processing speed."}
{"_id": "883492ad4c49f7a6d0d1c38119e908fc0faed911", "title": "Automatic detection and classification of diabetic retinopathy stages using CNN", "text": "A Convolutional Neural Networks (CNNs) approach is proposed to automate the method of Diabetic Retinopathy(DR) screening using color fundus retinal photography as input. Our network uses CNN along with denoising to identify features like micro-aneurysms and haemorrhages on the retina. Our models were developed leveraging Theano, an open source numerical computation library for Python. We trained this network using a high-end GPU on the publicly available Kaggle dataset. On the data set of over 30,000 images our proposed model achieves around 95% accuracy for the two class classification and around 85% accuracy for the five class classification on around 3,000 validation images."}
{"_id": "e8d5aeca722bcd28a6151640b105b72b43e48ab5", "title": "Qualitative research in health care. Assessing quality in qualitative research.", "text": "In the past decade, qualitative methods have become more commonplace in areas such as health services research and health technology assessment, and there has been a corresponding rise in the reporting of qualitative research studies in medical and related journals. Interest in these methods and their wider exposure in health research has led to necessary scrutiny of qualitative research. Researchers from other traditions are increasingly concerned to understand qualitative methods and, most importantly, to examine the claims researchers make about the findings obtained from these methods. The status of all forms of research depends on the quality of the methods used. In qualitative research, concern about assessing quality has manifested itself recently in the proliferation of guidelines for doing and judging qualitative work. Users and funders of research have had an important role in developing these guidelines as they become increasingly familiar with qualitative methods, but require some means of assessing their quality and of distinguishing \u201cgood\u201d and \u201cpoor\u201d quality research. However, the issue of \u201cquality\u201d in qualitative research is part of a much larger and contested debate about the nature of the knowledge produced by qualitative research, whether its quality can legitimately be judged, and, if so, how. This paper cannot do full justice to this wider epistemological debate. Rather it outlines two views of how qualitative methods might be judged and argues that qualitative research can be assessed according to two broad criteria: validity and relevance."}
{"_id": "a75791024a2aca42641e492b992a433eae35cadb", "title": "An empirical examination of factors contributing to the creation of successful e-learning environments", "text": "Although existing models of e-learning effectiveness in information systems (IS) have increased our understanding of how technology can support and enhance learning, most of our models do not take into account the importance of social presence. Thus, this study extends previous research by developing a model of e-learning effectiveness which adds social presence to other oft studied variables including application-specific computer self-efficacy (AS-CSE), perceived usefulness, course interaction, and e-learning effectiveness. Using data from 345 individuals, this model was validated through a field study in an introductory IS survey course. Results indicate that AS-CSE and perceived usefulness were related to course performance, course satisfaction, and course instrumentality. In addition, course interaction was related to course performance and satisfaction. Finally, social presence was related to course satisfaction and course instrumentality. Implications for research and practice are discussed. r 2007 Elsevier Ltd. All rights reserved."}
{"_id": "e20469c66f44e6e18eccc722ef3f74a630d102df", "title": "peepHole : Securing your homes with Raspberry Pi and Amazon Echo", "text": "In recent years, the field of Internet of Things (IoT) has seen significant investments made by the research community and the industry. Specifically, the Smart Home space has been a prime focus with the introduction of devices such as Amazon Echo, Google Home, Samsung Smart Things among others. The growth of an industry results in innovative, economic, and advanced solutions. In this paper, we focus on smart home security and how to build a robust, cost-effective system that can be widely used. We power our system using Amazon Echo, Amazon\u2019s cloud services, its speech services. A Raspberry Pi with a camera module is used as the hardware component for providing home security. We describe the different components of our product and we show that our system works effectively to identify the person at the doorstep, and have thus named our system peepHole. Keywords\u2014Intenet of Things, Smart Homes, Home Security, Amazon Web Services, Amazon Echo, Raspberry Pi, Face Verification"}
{"_id": "f48ec0076b6d07ffb68c52d457947695310f161b", "title": "Chemical structure recognition and prediction: A machine learning technique", "text": "A chemical structure recognition system designed to identify and predict hand-drawn chemical diagrams is presented. Although the proposed system is primarily intended to help chemists complete their chemical structure drawings and predict and complete the chemical structure, it is also envisioned to be utilized as a learning aid in chemistry education on a student's touchscreen device or on a teacher's smart board. The proposed system continuously provides the user with real-time feedback about the most relevant combinations to complete the chemical compound. A constantly updated list of prioritized candidate structures is displayed to the user. If the user is an expert, he or she will be allowed to contribute to the learning process of the proposed system. The proposed system deploys two trainable Bayesian networks, one at its input and another one at its output. Between the two networks, the classification of hand- drawn chemical structure is done using morphological image processing, LDA and PCA, incorporating therefore both image- based and feature-based techniques. This incorporation contributes considerably to the increased performance of the proposed system by enabling it to efficiently recognize messy drawings and noisy sketches like those containing over-traced strokes. Performance comparison with existing state of the art techniques proved the system to be outperforming in terms of recognition accuracy."}
{"_id": "1c2adf688022702aecf149ab8e94d408dc54275d", "title": "Breath acetone monitoring by portable Si:WO3 gas sensors.", "text": "Breath analysis has the potential for early stage detection and monitoring of illnesses to drastically reduce the corresponding medical diagnostic costs and improve the quality of life of patients suffering from chronic illnesses. In particular, the detection of acetone in the human breath is promising for non-invasive diagnosis and painless monitoring of diabetes (no finger pricking). Here, a portable acetone sensor consisting of flame-deposited and in situ annealed, Si-doped epsilon-WO(3) nanostructured films was developed. The chamber volume was miniaturized while reaction-limited and transport-limited gas flow rates were identified and sensing temperatures were optimized resulting in a low detection limit of acetone (\u223c20ppb) with short response (10-15s) and recovery times (35-70s). Furthermore, the sensor signal (response) was robust against variations of the exhaled breath flow rate facilitating application of these sensors at realistic relative humidities (80-90%) as in the human breath. The acetone content in the breath of test persons was monitored continuously and compared to that of state-of-the-art proton transfer reaction mass spectrometry (PTR-MS). Such portable devices can accurately track breath acetone concentration to become an alternative to more elaborate breath analysis techniques."}
{"_id": "97a987931d616233569e070a0ac110b4db583725", "title": "Performance analysis of latent fingerprint enhancement techniques", "text": "Fingerprint Enhancement is an important step used to improve the accuracy and rate of automated detection of fingerprints. Enhancing the latent fingerprints facilitates the extraction of minutiae from the fingerprint image. In this paper, two methods of latent fingerprint enhancement are carried out. The first uses Histogram Equalization for enhancing the latent fingerprint. The second method uses a novel approach of Binarization followed by Wiener filtering and an additional Median filter. The improvements in the image are then analyzed based on their entropy, contrast and moment."}
{"_id": "eb06182a2817d06e82612a0c32a6c843f01c6a03", "title": "Text Generation From Tables", "text": "This paper proposes a neural generative model, namely Table2Seq, to generate a natural language sentence based on a table. Specifically, the model maps a table to continuous vectors and then generates a natural language sentence by leveraging the semantics of a table. Since rare words, e.g., entities and values, usually appear in a table, we develop a flexible copying mechanism that selectively replicates contents from the table to the output sequence. We conduct extensive experiments to demonstrate the effectiveness of our Table2Seq model and the utility of the designed copying mechanism. On the WIKIBIO and SIMPLEQUESTIONS datasets, the Table2Seq model improves the state-of-the-art results from 34.70 to 40.26 and from 33.32 to 39.12 in terms of BLEU-4 scores, respectively. Moreover, we construct an open-domain dataset WIKITABLETEXT that includes 13\u00a0318 descriptive sentences for 4962 tables. Our Table2Seq model achieves a BLEU-4 score of 38.23 on WIKITABLETEXT outperforming template-based and language model based approaches. Furthermore, through experiments on 1\u00a0M table-query pairs from a search engine, our Table2Seq model considering the structured part of a table, i.e., table attributes and table cells, as additional information outperforms a sequence-to-sequence model considering only the sequential part of a table, i.e., table caption."}
{"_id": "58d2a4c5c58085c3b6878473f7fdcc55984aacd6", "title": "Standards for smart education \u2013 towards a development framework", "text": "Smart learning environments (SLEs) utilize a range of digital technologies in supporting learning, education and training; they also provide a prominent signpost for how future learning environments might be shaped. Thus, while innovation proceeds, SLEs are receiving growing attention from the research community, outputs from which are discussed in this paper. Likewise, this broad application of educational digital technologies is also the remit of standardization in an ISO committee, also discussed in this paper. These two communities share a common interest in, conceptualizing this emerging domain with the aim to identifying direction to further development. In doing so, terminology issues arise along with key questions such as, \u2018how is smart learning different from traditional learning?\u2019 Presenting a bigger challenge is the question, \u2018how can standardization work be best scoped in today's innovation-rich, networked, cloud-based and data-driven learning environments?\u2019 In responding, this conceptual paper seeks to identify candidate constructs and approaches that might lead to stable, coherent and exhaustive understanding of smart learning environments, thereby providing standards development for learning, education and training a needed direction. Based on reviews of pioneering work within smart learning, smart education and smart learning environments we highlight two models, a cognitive smart learning model and a smartness level model. These models are evaluated against current standardization challenges in the field of learning, education and training to form the basis for a development platform for new standards in this area."}
{"_id": "e0a5db6594b592d46dbbbeca28a69248f0bfe421", "title": "Early Detection of Sudden Pedestrian Crossing for Safe Driving During Summer Nights", "text": "Sudden pedestrian crossing (SPC) is the major reason for pedestrian-vehicle crashes. In this paper, we focus on detecting SPCs at night for supporting an advanced driver assistance system using a far-infrared (FIR) camera mounted on the front roof of a vehicle. Although the thermal temperature of the road is similar to or higher than that of the pedestrians during summer nights, many previous researches have focused on pedestrian detection during the winter, spring, or autumn seasons. However, our research concentrates on SPC during the hot summer season because the number of collisions between pedestrians and vehicles in Korea is higher at that time than during the other seasons. For real-time processing, we first decide the optimal levels of image scaling and search area. We then use our proposed method for detecting virtual reference lines that are associated with road segmentation without using color information and change these lines according to the turning direction of the vehicle. Pedestrian detection is conducted using a cascade random forest with low-dimensional Haar-like features and oriented center-symmetric local binary patterns. The SPC is predicted based on the likelihood and the spatiotemporal features of the pedestrians, such as their overlapping ratio with virtual reference lines, as well as the direction and magnitude of each pedestrian\u2019s movement. The proposed algorithm was successfully applied to various pedestrian data sets captured by an FIR camera, and the results show that its SPC detection performance is better than those of other methods."}
{"_id": "728ebf81883fccccd76f092d90f1538491b961f7", "title": "Experiences with rapid mobile game development using unity engine", "text": "In this work, we describe our experiences with FunCopter, a casual game we have designed and implemented using Unity engine, suitable for portable devices. We emphasize some general principles, particularly with respect to rapid game development and constrained graphics. In addition to that, we present the main activities realized in each software engineering phase and the lessons we learned during its development. These descriptions include details from the initial concept to the fully realized, playable game. This work will share general but important experiences with Unity, particularly interesting for game developers in beginner's level."}
{"_id": "ea951c82efe26424e3ce0d167e01f59e5135a2da", "title": "Dimensionality reduction for the quantitative evaluation of a smartphone-based Timed Up and Go test", "text": "The Timed Up and Go is a clinical test to assess mobility in the elderly and in Parkinson's disease. Lately instrumented versions of the test are being considered, where inertial sensors assess motion. To improve the pervasiveness, ease of use, and cost, we consider a smartphone's accelerometer as the measurement system. Several parameters (usually highly correlated) can be computed from the signals recorded during the test. To avoid redundancy and obtain the features that are most sensitive to the locomotor performance, a dimensionality reduction was performed through principal component analysis (PCA). Forty-nine healthy subjects of different ages were tested. PCA was performed to extract new features (principal components) which are not redundant combinations of the original parameters and account for most of the data variability. They can be useful for exploratory analysis and outlier detection. Then, a reduced set of the original parameters was selected through correlation analysis with the principal components. This set could be recommended for studies based on healthy adults. The proposed procedure could be used as a first-level feature selection in classification studies (i.e. healthy-Parkinson's disease, fallers-non fallers) and could allow, in the future, a complete system for movement analysis to be incorporated in a smartphone."}
{"_id": "e4911ece92989fbdcd68d95f1d9fc06c9d7616bb", "title": "Let's stop trying to be \"sexy\" - preparing managers for the (big) data-driven business era", "text": "Let's stop trying to be 'sexy' \u2013 preparing managers for the (big) data-driven business era Kevin Daniel Andr\u00e9 Carillo, Article information: To cite this document: Kevin Daniel Andr\u00e9 Carillo, (2017) \"Let's stop trying to be 'sexy' \u2013 preparing managers for the (big) data-driven business era\", Business Process Management Journal, Vol. 23 Issue: 3, doi: 10.1108/BPMJ-09-2016-0188 Permanent link to this document: http://dx.doi.org/10.1108/BPMJ-09-2016-0188"}
{"_id": "c0ddc972ae95764639350f272c45cbc5f9fb77d2", "title": "The neurobiology of social cognition", "text": "Recent studies have begun to elucidate the roles played in social cognition by specific neural structures, genes, and neurotransmitter systems. Cortical regions in the temporal lobe participate in perceiving socially relevant stimuli, whereas the amygdala, right somatosensory cortices, orbitofrontal cortices, and cingulate cortices all participate in linking perception of such stimuli to motivation, emotion, and cognition. Open questions remain about the domain-specificity of social cognition, about its overlap with emotion and with communication, and about the methods best suited for its investigation."}
{"_id": "739b0a79b851b174558e17355e09e26c3236daeb", "title": "Conformal Surface Parameterization for Texture Mapping", "text": "In this paper , we give an explicit methodfor mappingany simply connectedsurfaceonto the spherein a mannerwhich preservesangles. This techniquerelieson certainconformalmappingsfrom differentialgeometry. Our methodprovidesa new way to automaticallyassigntexturecoordinates to complex undulatingsurfaces. We demonstratea finite elementmethod thatcanbeusedto applyour mappingtechniqueto a triangulatedgeometric descriptionof a surface."}
{"_id": "82570dbb0c240876ca70289035e8e51eb6d1c4f6", "title": "The ARTEMIS European driving cycles for measuring car pollutant emissions.", "text": "In the past 10 years, various work has been undertaken to collect data on the actual driving of European cars and to derive representative real-world driving cycles. A compilation and synthesis of this work is provided in this paper. In the frame of the European research project: ARTEMIS, this work has been considered to derive a set of reference driving cycles. The main objectives were as follows: to derive a common set of reference real-world driving cycles to be used in the frame of the ARTEMIS project but also in the frame of on-going national campaigns of pollutant emission measurements, to ensure the compatibility and integration of all the resulting emission data in the European systems of emission inventory; to ensure and validate the representativity of the database and driving cycles by comparing and taking into account all the available data regarding driving conditions; to include in three real-world driving cycles (urban, rural road and motorway) the diversity of the observed driving conditions, within sub-cycles allowing a disaggregation of the emissions according to more specific driving conditions (congested and free-flow urban). Such driving cycles present a real advantage as they are derived from a large database, using a methodology that was widely discussed and approved. In the main, these ARTEMIS driving cycles were designed using the available data, and the method of analysis was based to some extent on previous work. Specific steps were implemented. The study includes characterisation of driving conditions and vehicle uses. Starting conditions and gearbox use are also taken into account."}
{"_id": "ee9018d1c4e47b1e5d0df8b9a9ab6bb0e19a8e73", "title": "Syntax and semantics in the acquisition of locative verbs.", "text": "Children between the ages of three and seven occasionally make errors with locative verbs like pour and fill, such as *I filled water into the glass and *I poured the glass with water (Bowerman, 1982). To account for this pattern of errors, and for how they are eventually unlearned, we propose that children use a universal linking rule called object affectedness: the direct object corresponds to the argument that is specified as 'affected' in some particular way in the semantic representation of a verb. However, children must learn which verbs specify which of their arguments as being affected; specifically, whether it is the argument whose referent is undergoing a change of location, such as the content argument of pour, or the argument whose referent is undergoing a change of state, such as the container argument of fill. This predicts that syntactic errors should be associated with specific kinds of misinterpretations of verb meaning. Two experiments were performed on the ability of children and adults to understand and produce locative verbs. The results confirm that children tend to make syntactic errors with sentences containing fill and empty, encoding the content argument as direct object (e.g. fill the water). As predicted, children also misinterpreted the meanings of fill and empty as requiring not only that the container be brought into a full or empty state, but also that the content move in some specific manner (by pouring, or by dumping). Furthermore, children who misinterpreted the verbs' meanings were more likely to make syntactic errors with them. These findings support the hypothesis that verb meaning and syntax are linked in precise ways in the lexicons of language learners."}
{"_id": "8a4bf74449a4c011cae16f4ebfcff3976cf1e9c7", "title": "Dermal and subdermal tissue filling with fetal connective tissue and cartilage, collagen, and silicone: Experimental study in the pig compared with clinical results. A new technique of dermis mini-autograft injections", "text": "The early reaction to the injection of silicone, collagen, and lyophilized heterologous fetal connective and cartilage tissues into the limiting zone deep dermis-superficial subcutaneous tissue was histologically examined in the pig and compared with clinical results. The inflammatory reaction to lyophilized heterologous fetal tissue is considerably more intense than that to collagen and silicone and lasts for several weeks. Therefore, it is not recommended for soft tissue filling in the face. Admitting an inferior antigenicity of fetal tissues, the authors suggest that enzymatically denaturalized collagen should be manufactured from heterologous fetal connective tissue, to be then further tested. The reaction of tissue to silicone and collagen is minimal. Silicone is preferred for dermal injections since in clinical experience it remains in the site of injection much longer. For subdermal injections, however, collagen is preferred. Based on experience with over 600 patients since 1958, the first author continues using liquid silicone. The lack of complications is probably a result of the fact that only small amounts (milliliters) of silicone were used in wrinkles or small depressions in the dermal layer and that from the beginning injection into the subcutaneous tissue was avoided. Since 1988 a new technique for the treatment of wrinkles and skin depressions with injections of dermal miniautografts has been used with satisfactory results."}
{"_id": "f2560b735ee734f4b0ee0b523a8069f44a21bbf4", "title": "Multi-modal Attention Mechanisms in LSTM and Its Application to Acoustic Scene Classification", "text": "Neural network architectures such as long short-term memory (LSTM) have been proven to be powerful models for processing sequences including text, audio and video. On the basis of vanilla LSTM, multi-modal attention mechanisms are proposed in this paper to synthesize the time and semantic information of input sequences. First, we reconstruct the forget and input gates of the LSTM unit from the perspective of attention model in the temporal dimension. Then the memory content of the LSTM unit is recalculated using a cluster-based attention mechanism in semantic space. Experiments on acoustic scene classification tasks show performance improvements of the proposed methods when compared with vanilla LSTM. The classification errors on LITIS ROUEN dataset and DCASE2016 dataset are reduced by 16.5% and 7.7% relatively. We get a second place in the Kaggle\u2019s YouTube-8M video understanding challenge, and multi-modal attention based LSTM model is one of our bestperforming single systems."}
{"_id": "7ae1b915e58af9a5755ad2b33d56a2717d2802fc", "title": "The regulation of explicit and implicit race bias: the role of motivations to respond without prejudice.", "text": "Three studies examined the moderating role of motivations to respond without prejudice (e.g., internal and external) in expressions of explicit and implicit race bias. In all studies, participants reported their explicit attitudes toward Blacks. Implicit measures consisted of a sequential priming task (Study 1) and the Implicit Association Test (Studies 2 and 3). Study 3 used a cognitive busyness manipulation to preclude effects of controlled processing on implicit responses. In each study, explicit race bias was moderated by internal motivation to respond without prejudice, whereas implicit race bias was moderated by the interaction of internal and external motivation to respond without prejudice. Specifically, high internal, low external participants exhibited lower levels of implicit race bias than did all other participants. Implications for the development of effective self-regulation of race bias are discussed."}
{"_id": "e467278d981ba30ab3b24235d09205e2aaba3d6f", "title": "If Only my Leader Would just Do Something! Passive Leadership Undermines Employee Well-being Through Role Stressors and Psychological Resource Depletion.", "text": "The goal of this study was to develop and test a sequential mediational model explaining the negative relationship of passive leadership to employee well-being. Based on role stress theory, we posit that passive leadership will predict higher levels of role ambiguity, role conflict and role overload. Invoking Conservation of Resources theory, we further hypothesize that these role stressors will indirectly and negatively influence two aspects of employee well-being, namely overall mental health and overall work attitude, through psychological work fatigue. Using a probability sample of 2467 US workers, structural equation modelling supported the model by showing that role stressors and psychological work fatigue partially mediated the negative relationship between passive leadership and both aspects of employee well-being. The hypothesized, sequential indirect relationships explained 47.9% of the overall relationship between passive leadership and mental health and 26.6% of the overall relationship between passive leadership and overall work attitude. Copyright \u00a9 2016 John Wiley & Sons, Ltd."}
{"_id": "746234e6678624c9f6a68aa99c7fcef645fcbae1", "title": "THE EXPERIENCE FACTORY", "text": "This article presents an infrastructure, called the experience factory , aimed at capitalization and reuse of life cycle experience and products. The experience factory is a logical and physical organization, and its activities are independent from the ones of the development organization. The activities of the development organization and of the experience factory can be outlined in the following way:"}
{"_id": "abd0905c49d98ba0ac7488bf009be6ed119dcaf3", "title": "The Timeboxing process model for iterative software development", "text": "In today\u2019s business where speed is of essence, an iterative development approach that allows the functionality to be delivered in parts has become a necessity and an effective way to manage risks. In an iterative process, the development of a software system is done in increments, each increment forming of an iteration and resulting in a working system. A common iterative approach is to decide what should be developed in an iteration and then plan the iteration accordingly. A somewhat different iterative is approach is to time box different iterations. In this approach, the length of an iteration is fixed and what should be developed in an iteration is adjusted to fit the time box. Generally, the time boxed iterations are executed in sequence, with some overlap where feasible. In this paper we propose the timeboxing process model that takes the concept of time boxed iterations further by adding pipelining concepts to it for permitting overlapped execution of different iterations. In the timeboxing process model, each time boxed iteration is divided into equal length stages, each stage having a defined function and resulting in a clear work product that is handed over to the next stage. With this division into stages, pipelining concepts are employed to have multiple time boxes executing concurrently, leading to a reduction in the delivery time for product releases. We illustrate the use of this process model through an example of a commercial project that was successfully executed using the proposed model."}
{"_id": "058f712a7dd173dd0eb6ece7388bd9cdd6f77d67", "title": "Iterative and incremental developments. a brief history", "text": "Although many view iterative and incremental development as a modern practice, its application dates as far back as the mid-1950s. Prominent software-engineering thought leaders from each succeeding decade supported IID practices, and many large projects used them successfully. These practices may have differed in their details, but all had a common theme-to avoid a single-pass sequential, document-driven, gated-step approach."}
{"_id": "6b8283005a83f24e6301605acbaad3bb6d277ca5", "title": "A Methodology for Collecting Valid Software Engineering Data", "text": "An effective data collection method for evaluating software development methodologies and for studying the software development process is described. The method uses goal-directed data collection to evaluate methodologies with respect to the claims made for them. Such claims are used as a basis for defining the goals of the data collection, establishing a list of questions of interest to be answered by data analysis, defining a set of data categorization schemes, and designing a data collection form. The data to be collected are based on the changes made to the software during development, and are obtained when the changes are made. To ensure accuracy of the data, validation is performed concurrently with software development and data collection. Validation is based on interviews with those people supplying the data. Results from using the methodology show that data validation is a necessary part of change data collection. Without it, as much as 50 percent of the data may be erroneous. Feasibility of the data collection methodology was demonstrated by applying it to five different projects in two different environments. The application showed that the methodology was both feasible and useful."}
{"_id": "86d717a749483ba1583f04eb251888d795b25fc0", "title": "Quantitative Evaluation of Software Quality", "text": "The study reported in this paper establishes a conceptual framework and some key initial results in the analysis of the characteristics of software quality. Its main results and conclusions are:\n \u2022 Explicit attention to characteristics of software quality can lead to significant savings in software life-cycle costs.\n \u2022 The current software state-of-the-art imposes specific limitations on our ability to automatically and quantitatively evaluate the quality of software.\n \u2022 A definitive hierarchy of well-defined, well-differentiated characteristics of software quality is developed. Its higher-level structure reflects the actual uses to which software quality evaluation would be put; its lower-level characteristics are closely correlated with actual software metric evaluations which can be performed.\n \u2022 A large number of software quality-evaluation metrics have been defined, classified, and evaluated with respect to their potential benefits, quantifiability, and ease of automation.\n \u2022Particular software life-cycle activities have been identified which have significant leverage on software quality.\n Most importantly, we believe that the study reported in this paper provides for the first time a clear, well-defined framework for assessing the often slippery issues associated with software quality, via the consistent and mutually supportive sets of definitions, distinctions, guidelines, and experiences cited. This framework is certainly not complete, but it has been brought to a point sufficient to serve as a viable basis for future refinements and extensions."}
{"_id": "291119421373a0bcebbf11d4b0ff89523cdbd9da", "title": "A tutorial on the dynamics and control of wind turbines and wind farms", "text": "Wind energy is currently the fastest-growing energy source in the world, with a concurrent growth in demand for the expertise of engineers and researchers in the wind energy field. There are still many unsolved challenges in expanding wind power, and there are numerous problems of interest to systems and control researchers. In this paper, we first review the basic structure of wind turbines and then describe wind turbine control systems and control loops. Of great interest are the generator torque and blade pitch control systems, where significant performance improvements are achievable with more advanced systems and control research. We describe recent developments in advanced controllers for wind turbines and wind farms, and we also outline many open problems in the areas of modeling and control of wind turbines."}
{"_id": "75bf3e75b4842b273fa8fd12bc98bfd3b91efa26", "title": "ReLiDSS: Novel lie detection system from speech signal", "text": "Lying is among the most common wrong human acts that merits spending time thinking about it. The lie detection is until now posing a problem in recent research which aims to develop a non-contact application in order to estimate physiological changes. In this paper, we have proposed a preliminary investigation on which relevant acoustic parameter can be useful to classify lie or truth from speech signal. Our proposed system in is based on the Mel Frequency Cepstral Coefficient (MFCC) commonly used in automatic speech processing on our own constructed database ReLiDDB (ReGIM-Lab Lie Detection DataBase) for both cases lie detection and person voice recognition. We have performed on this database the Support Vector Machines (SVM) classifier using Linear kernel and we have obtained an accuracy of Lie and Truth detection of speech audio respectively 88.23% and 84.52%."}
{"_id": "e955cf5f5a7d9722f6bb14d09d964df8fefea654", "title": "On Software Implementation of High Performance GHASH Algorithms", "text": "There have been several modes of operations available for symmetric key block ciphers, among which Galois Counter Mode (GCM) of operation is a standard. GCM mode of operation provides con dentiality with the help of symmetric key block cipher operating in counter mode. The authentication component of GCM comprises of Galois hash (GHASH) computation which is a keyed hash function. The most important component of GHASH computation is carry-less multiplication of 128bit operands which is followed by a modulo reduction. There have been a number of schemes proposed for e cient software implementation of carry-less multiplication to improve performance of GHASH by increasing the speed of multiplications. This thesis focuses on providing an e cient way of software implementation of high performance GHASH function as being proposed by Meloni et al., and also on the implementation of GHASH using a carry-less multiplication instruction provided by Intel on their Westmere architecture. The thesis work includes implementation of the high performance GHASH and its comparison to the older or standard implementation of GHASH function. It also includes comparison of the two implementations using Intel's carry-less multiplication instruction. This is the rst time that this kind of comparison is being done on software implementations of these algorithms. Our software implementations suggest that the new GHASH algorithm, which was originally proposed for the hardware implementations due to the required parallelization, can't take advantage of the Intel carry-less multiplication instruction PCLMULQDQ. On the other hand, when implementations are done without using the PCLMULQDQ instruction the new algorithm performs better, even if its inherent parallelization is not utilized. This suggest that the new algorithm will perform better on embedded systems that do not support PCLMULQDQ."}
{"_id": "24d90bf184b1de66726e817d00b6edea192e5e53", "title": "Theory of Credit Card Networks : A Survey of the Literature", "text": "Credit cards provide benefits to consumers and merchants not provided by other payment instruments as evidenced by their explosive growth in the number and value of transactions over the last 20 years. Recently, credit card networks have come under scrutiny from regulators and antitrust authorities around the world. The costs and benefits of credit cards to network participants are discussed. Focusing on interrelated bilateral transactions, several theoretical models have been constructed to study the implications of several business practices of credit card networks. The results and implications of these economic models along with future research topics are discussed."}
{"_id": "f0ca32430b12470178eb1cd7e4f8093502d1a33e", "title": "Cytoscape.js: a graph theory library for visualisation and analysis", "text": "UNLABELLED\nCytoscape.js is an open-source JavaScript-based graph library. Its most common use case is as a visualization software component, so it can be used to render interactive graphs in a web browser. It also can be used in a headless manner, useful for graph operations on a server, such as Node.js.\n\n\nAVAILABILITY AND IMPLEMENTATION\nCytoscape.js is implemented in JavaScript. Documentation, downloads and source code are available at http://js.cytoscape.org.\n\n\nCONTACT\ngary.bader@utoronto.ca."}
{"_id": "9a8d78c61fb3ad2f72bdcd1ab8795da5e4e60b77", "title": "Time series forecasting using a deep belief network with restricted Boltzmann machines", "text": "Multi-layer perceptron (MLP) and other artificial neural networks (ANNs) have been widely applied to time series forecasting since 1980s. However, for some problems such as initialization and local optima existing in applications, the improvement of ANNs is, and still will be the most interesting study for not only time series forecasting but also other intelligent computing fields. In this study, we propose a method for time series prediction using Hinton & Salakhutdinov\u2019s deep belief nets (DBN) which are probabilistic generative neural network composed by multiple layers of restricted Boltzmann machine (RBM). We use a 3-layer deep network of RBMs to capture the feature of input space of time series data, and after pretraining of RBMs using their energy functions, gradient descent training, i.e., back-propagation learning algorithm is used for fine-tuning connection weights between \u201cvisible layers\u201d and \u201chidden layers\u201d of RBMs. To decide the sizes of neural networks and the learning rates, Kennedy & Eberhart\u2019s particle swarm optimization (PSO) is adopted during the training processes. Furthermore, \u201ctrend removal\u201d, a preprocessing to the original data, is also approached in the forecasting experiment using CATS benchmark data. Additionally, approximating and short-term prediction of chaotic time series such as Lorenz chaos and logistic map were also applied by the proposed method."}
{"_id": "00b7ffd43e9b6b70c80449872a8c9ec49c7d045a", "title": "Hierarchical structure and the prediction of missing links in networks", "text": "Networks have in recent years emerged as an invaluable tool for describing and quantifying complex systems in many branches of science. Recent studies suggest that networks often exhibit hierarchical organization, in which vertices divide into groups that further subdivide into groups of groups, and so forth over multiple scales. In many cases the groups are found to correspond to known functional units, such as ecological niches in food webs, modules in biochemical networks (protein interaction networks, metabolic networks or genetic regulatory networks) or communities in social networks. Here we present a general technique for inferring hierarchical structure from network data and show that the existence of hierarchy can simultaneously explain and quantitatively reproduce many commonly observed topological properties of networks, such as right-skewed degree distributions, high clustering coefficients and short path lengths. We further show that knowledge of hierarchical structure can be used to predict missing connections in partly known networks with high accuracy, and for more general network structures than competing techniques. Taken together, our results suggest that hierarchy is a central organizing principle of complex networks, capable of offering insight into many network phenomena."}
{"_id": "a70f3db55d561e0f0df7e58e4c8082d6b114def2", "title": "PROPOSED METHODOLOGY FOR EARTHQUAKE-INDUCED LOSS ASSESSMENT OF INSTRUMENTED STEEL FRAME BUILDINGS : BUILDING-SPECIFIC AND CITY-SCALE APPROACHES", "text": "The performance-based earthquake engineering framework utilizes probabilistic seismic demand models to obtain accurate estimates of building engineering demand parameters. These parameters are utilized to estimate earthquake-induced losses in terms of probabilistic repair costs, life-safety impact, and loss of function due to damage to a wide variety of building structural and non-structural components and content. Although nonlinear response history analysis is a reliable tool to develop probabilistic seismic demand models, it typically requires a considerable time investment in developing detailed building numerical model representations. In that respect, the challenge of city-scale damage assessment still remains. This paper proposes a simplified methodology that rapidly assesses the story-based engineering demand parameters (e.g., peak story drift ratios, residual story drift ratios, peak absolute floor accelerations) along the height of a steel frame building in the aftermath of an earthquake. The proposed methodology can be employed at a city-scale in order to facilitate rapid earthquake-induced loss assessment of steel frame buildings in terms of structural damage and monetary losses within a region. Therefore, buildings within an urban area that are potentially damaged after an earthquake event or scenario can be easily identified without a detailed engineering inspection. To illustrate the methodology for rapid earthquake-induced loss assessment at the city-scale we employ data from instrumented steel frame buildings in urban California that experienced the 1994 Northridge earthquake. Maps that highlight the expected structural and nonstructural damage as well as the expected earthquake-induced monetary losses are developed. The maps are developed with the geographical information system for steel frame buildings located in Los Angeles. Seong-Hoon Hwang and Dimitrios G. Lignos"}
{"_id": "114a2c60da136f80c304f4ed93fa7c796cc76f28", "title": "Forecasting the winner of a tennis match", "text": "We propose a method to forecast the winner of a tennis match, not only at the beginning of the match, but also (and in particular) during the match. The method is based on a fast and flexible computer program, and on a statistical analysis of a large data set from Wimbledon, both at match and at point level. 2002 Elsevier Science B.V. All rights reserved."}
{"_id": "5d1b386ec5c1ca0c5d9fd6540ede1d68d25a98fd", "title": "Overconfidence in IT Investment Decisions: Why Knowledge can be a Boon and Bane at the same Time", "text": "Despite their strategic relevance in organizations, information technology (IT) investments still result in alarmingly high failure rates. As IT investments are such a delicate high-risk/high-reward matter, it is crucial for organizations to avoid flawed IT investment decision-making. Previous research in consumer and organizational decision-making shows that a decision\u2019s accuracy is often influenced by decisionmakers\u2019 overconfidence and that the magnitude of overconfidence strongly depends on decision-makers\u2019 certainty of their knowledge. Drawing on these strands of research, our findings from a field survey (N=166) show that IT managers\u2019 decisions in IT outsourcing are indeed affected by overconfidence. However, an in-depth investigation of three types of knowledge, namely experienced, objective and subjective knowledge, reveals that different types of knowledge can have contrasting effects on overconfidence and thus on the quality of IT outsourcing decisions. Knowledge can be a boon and bane at the same time. Implications for research and practice are discussed."}
{"_id": "9a86ae8e9b946dc6d957357e0670f262fa1ead9d", "title": "Bare bones differential evolution", "text": "Article history: Received 22 August 2007 Accepted 29 February 2008 Available online xxxx"}
{"_id": "f8acaabc99801a89baa5a9eff445fc5922498dd0", "title": "Domain Alignment with Triplets", "text": "Deep domain adaptation methods can reduce the distribution discrepancy by learning domain-invariant embedddings. However, these methods only focus on aligning the whole data distributions, without considering the classlevel relations among source and target images. Thus, a target embeddings of a bird might be aligned to source embeddings of an airplane. This semantic misalignment can directly degrade the classifier performance on the target dataset. To alleviate this problem, we present a similarity constrained alignment (SCA) method for unsupervised domain adaptation. When aligning the distributions in the embedding space, SCA enforces a similarity-preserving constraint to maintain class-level relations among the source and target images, i.e., if a source image and a target image are of the same class label, their corresponding embeddings are supposed to be aligned nearby, and vise versa. In the absence of target labels, we assign pseudo labels for target images. Given labeled source images and pseudo-labeled target images, the similarity-preserving constraint can be implemented by minimizing the triplet loss. With the joint supervision of domain alignment loss and similarity-preserving constraint, we train a network to obtain domain-invariant embeddings with two critical characteristics, intra-class compactness and inter-class separability. Extensive experiments conducted on the two datasets well demonstrate the effectiveness of SCA."}
{"_id": "277048b7eede669b105c762446e890d49eb3c6a9", "title": "Sensor fusion for robot control through deep reinforcement learning", "text": "Deep reinforcement learning is becoming increasingly popular for robot control algorithms, with the aim for a robot to self-learn useful feature representations from unstructured sensory input leading to the optimal actuation policy. In addition to sensors mounted on the robot, sensors might also be deployed in the environment, although these might need to be accessed via an unreliable wireless connection. In this paper, we demonstrate deep neural network architectures that are able to fuse information generated by multiple sensors and are robust to sensor failures at runtime. We evaluate our method on a search and pick task for a robot both in simulation and the real world."}
{"_id": "29c352ca636950e2f84b488be6ff3060cfe3ca6a", "title": "Optimization Criteria, Sensitivity and Robustness of Motion and Structure Estimation", "text": "The prevailing efforts to study the standard formulation of motion and structure recovery have been recently focused on issues of sensitivity and and robustness of existing techniques. While many cogent observations have been made and verified experimentally, many statements do not hold in general settings and make a comparison of existing techniques difficult. With an ultimate goal of clarifying these issues we study the main aspects of the problem: the choice of objective functions, optimization techniques and the sensitivity and robustness issues in the presence of noise. We clearly reveal the relationship among different objective functions, such as \u201c(normalized) epipolar constraints\u201d, \u201creprojection error\u201d or \u201ctriangulation\u201d, which can all be be unified in a new \u201c optimal triangulation\u201d procedure formulated as a constrained optimization problem. Regardless of various choices of the objective function, the optimization problems all inherit the same unknown parameter space, the so called \u201cessential manifold\u201d, making the new optimization techniques on Riemanian manifolds directly applicable. Using these analytical results we provide a clear account of sensitivity and robustness of the proposed linear and nonlinear optimization techniques and study the analytical and practical equivalence of different objective functions. The geometric characterization of critical points of a function defined on essential manifold and the simulation results clarify the difference between the effect of bas relief ambiguity and other types of local minima leading to a consistent interpretations of simulation results over large range of signal-to-noise ratio and variety of configurations. Further more we justify the choice of the linear techniques for (re)initialization and for detection of incorrect local minima."}
{"_id": "aab29a13185bee54a17573024c309c37b3d1456f", "title": "A comparison of loop closing techniques in monocular SLAM", "text": "Loop closure detection systems for monocular SLAM come in three broad categories: (i) map-to-map, (ii) image-to-image and (iii) image-to-map. In this paper, we have chosen an implementation of each and performed experiments allowing the three approaches to be compared. The sequences used include both indoor and outdoor environments and single and multiple loop trajectories. \u00a9 2009 Elsevier B.V. All rights reserved."}
{"_id": "8d281dd3432fea0e3ca4924cd760786008f43a07", "title": "A Simplified Model for Normal Mode Helical Antennas", "text": "\u2500 Normal mode helical antennas are widely used for RFID and mobile communications applications due to their relatively small size and omni-directional radiation pattern. However, their highly curved geometry can make the design and analysis of helical antennas that are part of larger complex structures quite difficult. A simplified model is proposed that replaces the curved helix with straight wires and lumped elements. The simplified model can be used to reduce the complexity of full-wave models that include a helical antenna. It also can be used to estimate the performance of a helical antenna without fullwave modeling of the helical structure. Index Terms\u2500 Helical Antennas, RFID."}
{"_id": "0c2bbea0d28861b23879cec33f04125cf556173d", "title": "Likelihood-informed dimension reduction for nonlinear inverse problems", "text": "The intrinsic dimensionality of an inverse problem is affected by prior information, the accuracy and number of observations, and the smoothing properties of the forward operator. From a Bayesian perspective, changes from the prior to the posterior may, in many problems, be confined to a relatively lowdimensional subspace of the parameter space. We present a dimension reduction approach that defines and identifies such a subspace, called the \u201clikelihoodinformed subspace\u201d (LIS), by characterizing the relative influences of the prior and the likelihood over the support of the posterior distribution. This identification enables new and more efficient computational methods for Bayesian inference with nonlinear forward models and Gaussian priors. In particular, we approximate the posterior distribution as the product of a lower-dimensional posterior defined on the LIS and the prior distribution marginalized onto the complementary subspace. Markov chain Monte Carlo sampling can then proceed in lower dimensions, with significant gains in computational efficiency. We also introduce a Rao-Blackwellization strategy that de-randomizes Monte Carlo estimates of posterior expectations for additional variance reduction. We demonstrate the efficiency of our methods using two numerical examples: inference of permeability in a groundwater system governed by an elliptic PDE, and an atmospheric remote sensing problem based on Global Ozone Monitoring System (GOMOS) observations."}
{"_id": "cf6508043c418891c1e7299debccfc1527e4ca2a", "title": "A Logic-based Approach for Recognizing Textual Entailment Supported by Ontological Background Knowledge", "text": "We present the architecture and the evaluation of a new system for recognizing textual entailment (RTE). In RTE we want to identify automatically the type of a logical relation between two input texts. In particular, we are interested in proving the existence of an entailment between them. We conceive our system as a modular environment allowing for a high-coverage syntactic and semantic text analysis combined with logical inference. For the syntactic and semantic analysis we combine a deep semantic analysis with a shallow one supported by statistical models in order to increase the quality and the accuracy of results. For RTE we use logical inference of first-order employing model-theoretic techniques and automated reasoning tools. The inference is supported with problem-relevant background knowledge extracted automatically and on demand from external sources like, e.g., WordNet, YAGO, and OpenCyc, or other, more experimental sources with, e.g., manually defined presupposition resolutions, or with axiomatized general and common sense knowledge. The results show that fine-grained and consistent knowledge coming from diverse sources is a necessary condition determining the correctness and traceability of results."}
{"_id": "28b2723d1feb5373e3136d358210efc76f7d6f46", "title": "Building Consumer Trust Online", "text": "M oving some Web consumers along to the purchase click is proving to be difficult, despite the impressive recent growth in online shopping. Consumer online shopping revenues and related corporate profits are still meager, though the industry is optimistic, thanks to bullish forecasts of cyberconsumer activity for the new millennium. In 1996, Internet shopping revenues for U.S. users, excluding cars and real estate, were estimated by Jupiter Communications , an e-commerce consulting firm in New York, at approximately $707 million but are expected to How merchants can win back lost consumer trust in the interests of e-commerce sales. reach nearly $37.5 billion by 2002 [1]. Meanwhile, the business-to-business side is taking off with more than $8 billion in revenues for 1997 and $327 billion predicted by 2002 just in the U.S., according to For-rester Research, an information consulting firm in Cambridge, Mass. [4]. On the consumer side, a variety of barriers are invoked to explain the continuing difficulties. There are, to be sure, numerous barriers. Such factors as the lack of standard technologies for secure payment, and the lack of profitable business models play important roles in the relative dearth of commercial activity by businesses and consumers on the Internet compared to what analysts expect in the near future. Granted, the commercial development of the Web is still in its infancy, so few expect these barriers to commercial development to persist. Still, commercial development of the Web faces a far more formidable barrier\u2014consumers' fear of divulging their personal data\u2014to its ultimate com-mercialization. The reason more people have yet to shop online or even provide information to Web providers in exchange for access to information, is the fundamental lack of faith between most businesses and consumers on the Web today. In essence, consumers simply do not trust most Web providers enough to engage in \" relationship exchanges \" involving money and personal information with them. Our research reveals that this lack of trust arises from the fact that cyberconsumers feel they lack control over the access that Web merchants have to their"}
{"_id": "9a79e11b4261aa601d7f77f4ba5aed22ca1f6ad6", "title": "Perception-Guided Multimodal Feature Fusion for Photo Aesthetics Assessment", "text": "Photo aesthetic quality evaluation is a challenging task in multimedia and computer vision fields. Conventional approaches suffer from the following three drawbacks: 1) the deemphasized role of semantic content that is many times more important than low-level visual features in photo aesthetics; 2) the difficulty to optimally fuse low-level and high-level visual cues in photo aesthetics evaluation; and 3) the absence of a sequential viewing path in the existing models, as humans perceive visually salient regions sequentially when viewing a photo.\n To solve these problems, we propose a new aesthetic descriptor that mimics humans sequentially perceiving visually/semantically salient regions in a photo. In particular, a weakly supervised learning paradigm is developed to project the local aesthetic descriptors (graphlets in this work) into a low-dimensional semantic space. Thereafter, each graphlet can be described by multiple types of visual features, both at low-level and in high-level. Since humans usually perceive only a few salient regions in a photo, a sparsity-constrained graphlet ranking algorithm is proposed that seamlessly integrates both the low-level and the high-level visual cues. Top-ranked graphlets are those visually/semantically prominent graphlets in a photo. They are sequentially linked into a path that simulates the process of humans actively viewing. Finally, we learn a probabilistic aesthetic measure based on such actively viewing paths (AVPs) from the training photos that are marked as aesthetically pleasing by multiple users. Experimental results show that: 1) the AVPs are 87.65% consistent with real human gaze shifting paths, as verified by the eye-tracking data; and 2) our photo aesthetic measure outperforms many of its competitors."}
{"_id": "186336fb15a47ebdc6f0730d0cf4f56c58c5b906", "title": "Car that Knows Before You Do: Anticipating Maneuvers via Learning Temporal Driving Models", "text": "Advanced Driver Assistance Systems (ADAS) have made driving safer over the last decade. They prepare vehicles for unsafe road conditions and alert drivers if they perform a dangerous maneuver. However, many accidents are unavoidable because by the time drivers are alerted, it is already too late. Anticipating maneuvers beforehand can alert drivers before they perform the maneuver and also give ADAS more time to avoid or prepare for the danger. In this work we anticipate driving maneuvers a few seconds before they occur. For this purpose we equip a car with cameras and a computing device to capture the driving context from both inside and outside of the car. We propose an Autoregressive Input-Output HMM to model the contextual information alongwith the maneuvers. We evaluate our approach on a diverse data set with 1180 miles of natural freeway and city driving and show that we can anticipate maneuvers 3.5 seconds before they occur with over 80% F1-score in real-time."}
{"_id": "34ac076a96456860f674fb4dcd714762b847a341", "title": "Pattern recognition based techniques for fruit sorting : A survey", "text": "In most of the food industries, farms and fruit markets the quality sorting of the fruits is done manually; only a few automated sorting systems are developed for this. Each fruit changes its color in its life span; including stages not ripe, semi ripped, completely ripped or at the end rotten. Hence, we can predict the quality of the fruits by processing its color images and then applying pattern recognition techniques on those images. By quality of fruit we mean size, ripeness, sweetness, longetivity, diseased/ rotten etc. Some advance technologies which are used for classification of fruits are Infrared Imaging, Magnetic Resonance Imaging, Thermal Sensing, Electronic Nose sensing etc; but these are costlier as compared to color image processing and pattern recognition techniques. This paper summarizes and compares various techniques of Pattern Recognition which are used effectively for fruit sorting. All the fruits have their unique ways of ripening stages; hence the pattern recognition techniques used for one fruit does not necessarily match for the other fruit. This paper reviews the high quality research work done for the quality classification of Apples, Dates, Mangoes, Cranberry, Jatropha, Tomatoes, and some other fruits using pattern recognition techniques. KeywordsClassification & predication, Pattern Recognition, Image Processing, Fruit Quality __________________________________________________*****_________________________________________________"}
{"_id": "76d5a90f26e1270c952eac1fa048a83d63f1dd39", "title": "The moderator-mediator variable distinction in social psychological research: conceptual, strategic, and statistical considerations.", "text": "In this article, we attempt to distinguish between the properties of moderator and mediator variables at a number of levels. First, we seek to make theorists and researchers aware of the importance of not using the terms moderator and mediator interchangeably by carefully elaborating, both conceptually and strategically, the many ways in which moderators and mediators differ. We then go beyond this largely pedagogical function and delineate the conceptual and strategic implications of making use of such distinctions with regard to a wide range of phenomena, including control and stress, attitudes, and personality traits. We also provide a specific compendium of analytic procedures appropriate for making the most effective use of the moderator and mediator distinction, both separately and in terms of a broader causal system that includes both moderators and mediators."}
{"_id": "a3c3c084d4c30cf40e134314a5dcaf66b4019171", "title": "Predicting User Intentions: Comparing the Technology Acceptance Model with the Theory of Planned Behavior", "text": ""}
{"_id": "ba0644aa7569f33194090ade9f8f91fa51968b18", "title": "USER ACCEPTANCE OF COMPUTER TECHNOLOGY : A COMPARISON OF TWO THEORETICAL MODELS *", "text": "Computer systems cannot improve organizational performance if they aren't used. Unfortunately, resistance to end-user systems by managers and professionals is a widespread problem. To better predict, explain, and increase user acceptance, we need to better understand why people accept or reject computers. This research addresses the ability to predict peoples' computer acceptance from a measure oftheir intentions, and the ability to explain their intentions in terms oftheir attitudes, subjective norms, perceived usefulness, perceived ease of use, and related variables. In a longitudinal study of 107 users, intentions to use a specific system, measured after a onehour introduction to the system, were correlated 0.35 with system use 14 weeks later. The intentionusage correlation was 0.63 at the end of this time period. Perceived usefulness strongly influenced peoples' intentions, explaining more than half of the variance in intentions at the end of 14 weeks. Perceived ease of use had a small but significant effect on intentions as well, although this effect subsided over time. Attitudes only partially mediated the effects of these beliefs on intentions. Subjective norms had no effect on intentions. These results suggest the possibility of simple but powerful models ofthe determinants of user acceptance, with practical value for evaluating systems and guiding managerial interventions aimed at reducing the problem of underutilized computer technology. (INFORMATION TECHNOLOGY; USER ACCEPTANCE; INTENTION MODELS)"}
{"_id": "d761aaacd25b762a1cd82413a34eee0992b86130", "title": "Self-perception: An alternative interpretation of cognitive dissonance phenomena.", "text": "A theory of self-perception is proposed to provide an alternative interpretation for several of the major phenomena embraced by Festinger's theory of cognitive dissonance and to explicate some of the secondary patterns of data that have appeared in dissonance experiments. It is suggested that the attitude statements which comprise the major dependent variables in dissonance experiments may be regarded as interpersonal judgments in which the observer and the observed happen to be the same individual and that it is unnecessary to postulate an aversive motivational drive toward consistency to account for the attitude change phenomena observed. Supporting experiments are presented, and metatheoretical contrasts between the \"radical\" behavioral approach utilized and the phenomenological approach typified by dissonance theory are discussed."}
{"_id": "d9973f9367de4c2b1ac54c3157464e79ddeb3d6a", "title": "Attitudes, satisfaction and usage: Factors contributing to each in the acceptance of information technology", "text": "This research tests and develops the Technology Acceptance Model (TAM), introduced by Davis (1986), which attempts to explain end users\u2019 attitudes to computing technologies. It introduces several new variables, including compatibility, user characteristics, system rating and the enduser computing satisfaction (EUCS) construct, a surrogate measure for IT success and acceptance. A questionnaire with over seventy items was completed by a large number of users and LISREL, a technique for modelling a system of structural equations, was used to analyse the responses. The output shows the model as a whole \u00ae ts the data very well and indicates signi\u00ae cant relationships between variables in the model. These results con\u00ae rm that TAM is a valuable tool for predicting attitudes, satisfaction, and usage from beliefs and external variables. They also show that relative advantage of the system contributed most to attitudes and satisfaction. Compatibility (of the system to the task performed) contributed most to usage and was the most important antecedent of the belief variables, including relative advantage."}
{"_id": "cfa6986dd9f124aeb5c9430a5f5b638fb71d405b", "title": "Cardiorespiratory fitness estimation in free-living using wearable sensors", "text": "OBJECTIVE\nIn this paper we propose artificial intelligence methods to estimate cardiorespiratory fitness (CRF) in free-living using wearable sensor data.\n\n\nMETHODS\nOur methods rely on a computational framework able to contextualize heart rate (HR) in free-living, and use context-specific HR as predictor of CRF without need for laboratory tests. In particular, we propose three estimation steps. Initially, we recognize activity primitives using accelerometer and location data. Using topic models, we group activity primitives and derive activities composites. We subsequently rank activity composites, and analyze the relation between ranked activity composites and CRF across individuals. Finally, HR data in specific activity primitives and composites is used as predictor in a hierarchical Bayesian regression model to estimate CRF level from the participant's habitual behavior in free-living.\n\n\nRESULTS\nWe show that by combining activity primitives and activity composites the proposed framework can adapt to the user and context, and outperforms other CRF estimation models, reducing estimation error between 10.3% and 22.6% on a study population of 46 participants.\n\n\nCONCLUSIONS\nOur investigation showed that HR can be contextualized in free-living using activity primitives and activity composites and robust CRF estimation in free-living is feasible."}
{"_id": "d65b6d948aa3ec4a1640a139362f7bab211a5f45", "title": "Sindhi Language Processing: A survey", "text": "In this era of information technology, natural language processing (NLP) has become volatile field because of digital reliance of today's communities. The growth of Internet usage bringing the communities, cultures and languages online. In this regard much of the work has been done of the European and east Asian languages, in the result these languages have reached mature level in terms of computational processing. Despite the great importance of NLP science, still most of the South Asian languages are under developing phase. Sindhi language is one of them, which stands among the most ancient languages in the world. The Sindhi language has a great influence on the large community in Sindh province of Pakistan and some states of India and other countries. But unfortunately, it is at infant level in terms of computational processing, because it has not received such attention of language engineering community, due to its complex morphological structure and scarcity of language resources. Therefore, this study has been carried out in order to summarize the existing work on Sindhi Language Processing (SLP) and to explore future research opportunities, also some potential research problems. This paper will be helpful for the researchers in order to find all the information regarding SLP at one place in a unique way."}
{"_id": "2c69688a2fc686cad14bfa15f8a0335b26b54054", "title": "Multi-View Representation Learning: A Survey from Shallow Methods to Deep Methods", "text": "Recently, multi-view representation learning has become a rapidly growing direction in machine learning and data mining areas. This paper introduces several principles for multi-view representation learning: correlation, consensus, and complementarity principles. Consequently, we first review the representative methods and theories of multi-view representation learning based on correlation principle, especially on canonical correlation analysis (CCA) and its several extensions. Then from the viewpoint of consensus and complementarity principles we investigate the advancement of multi-view representation learning that ranges from shallow methods including multi-modal topic learning, multi-view sparse coding, and multi-view latent space Markov networks, to deep methods including multi-modal restricted Boltzmann machines, multi-modal autoencoders, and multi-modal recurrent neural networks. Further, we also provide an important perspective from manifold alignment for multi-view representation learning. Overall, this survey aims to provide an insightful overview of theoretical basis and state-of-the-art developments in the field of multi-view representation learning and to help researchers find the most appropriate tools for particular applications."}
{"_id": "21aebb53a45ccac7f6763d9c47477092599f6be1", "title": "A survey of the effects of aging on biometric identity verification", "text": ""}
{"_id": "12e1923fb86ed06c702878bbed51b4ded2b16be1", "title": "Gesture recognition for smart home applications using portable radar sensors", "text": "In this article, we consider the design of a human gesture recognition system based on pattern recognition of signatures from a portable smart radar sensor. Powered by AAA batteries, the smart radar sensor operates in the 2.4 GHz industrial, scientific and medical (ISM) band. We analyzed the feature space using principle components and application-specific time and frequency domain features extracted from radar signals for two different sets of gestures. We illustrate that a nearest neighbor based classifier can achieve greater than 95% accuracy for multi class classification using 10 fold cross validation when features are extracted based on magnitude differences and Doppler shifts as compared to features extracted through orthogonal transformations. The reported results illustrate the potential of intelligent radars integrated with a pattern recognition system for high accuracy smart home and health monitoring purposes."}
{"_id": "3a488bded1695743b34e39338ccc3c687ea771d0", "title": "Glomosim: a scalable network simulation environment", "text": "Large-scale hybrid networks that include wireless, wired, and satellite based communications are becoming common in both military and commercial situations. This paper describes a scalable simulation environment called GloMoSim (for Global Mobile Information System Simulator) that effectively utilizes parallel execution to reduce the simulation time of detailed high-fidelity models of large communication networks. The paper also presents a set of case studies that evaluate the performance of large wireless networks with thousands of nodes and compares the impact of different lower layer protocols on the performance of typical applications."}
{"_id": "d55e7b0d6b4fb6e2fe7a74b66103b5866c1bfa84", "title": "School Leadership and Cyberbullying\u2014A Multilevel Analysis", "text": "Cyberbullying is a relatively new form of bullying, with both similarities and differences to traditional bullying. While earlier research has examined associations between school-contextual characteristics and traditional bullying, fewer studies have focused on the links to students' involvement in cyberbullying behavior. The aim of the present study is to assess whether school-contextual conditions in terms of teachers' ratings of the school leadership are associated with the occurrence of cyberbullying victimization and perpetration among students. The data are derived from two separate data collections performed in 2016: The Stockholm School Survey conducted among students in the second grade of upper secondary school (ages 17-18 years) in Stockholm municipality, and the Stockholm Teacher Survey which was carried out among teachers in the same schools. The data include information from 6067 students distributed across 58 schools, linked with school-contextual information based on reports from 1251 teachers. Cyberbullying victimization and perpetration are measured by students' self-reports. Teachers' ratings of the school leadership are captured by an index based on 10 items; the mean value of this index was aggregated to the school level. Results from binary logistic multilevel regression models show that high teacher ratings of the school leadership are associated with less cyberbullying victimization and perpetration. We conclude that a strong school leadership potentially prevents cyberbullying behavior among students."}
{"_id": "694210eec3a6262c5e7edbc2cfbf2d9341f9f426", "title": "Probabilistic graphical models for semi-supervised traffic classification", "text": "Traffic classification using machine learning continues to be an active research area. The majority of work in this area uses off-the-shelf machine learning tools and treats them as black-box classifiers. This approach turns all the modelling complexity into a feature selection problem. In this paper, we build a problem-specific solution to the traffic classification problem by designing a custom probabilistic graphical model. Graphical models are a modular framework to design classifiers which incorporate domain-specific knowledge. More specifically, our solution introduces semi-supervised learning which means we learn from both labelled and unlabelled traffic flows. We show that our solution performs competitively compared to previous approaches while using less data and simpler features."}
{"_id": "87df1f7da151df6797bf5a751f1cf5a3cdb1edcf", "title": "Risk Management: A Maturity Model Based on ISO 31000", "text": "Risk Management, according with the ISO Guide 73 is the set of \"coordinated activities to direct and control an organization with regard to risk\". In a nutshell, Risk Management is the business process used to manage risk in organizations. ISO 31000 defines a framework and process for risk management. However, implementing this standard without a detailed plan can become a burden on organizations. This paper presents a maturity model for the risk management process based on ISO 31000. The purpose of this model is to provide an assessment tool for organizations to use in order to get their current risk management maturity level. The results can then be used to create an improvement plan which will guide organizations to reach their target maturity level. This maturity model allows organizations to assess a risk management process according to the best practices defined in risk management references. The maturity model can also be used as a reference for improving this process since it sets a clear path of how a risk management process should be performed."}
{"_id": "c81d2f308b80f8a008d329a22b63dc09503ae045", "title": "Recommending citations with translation model", "text": "Citation Recommendation is useful for an author to find out the papers or books that can support the materials she is writing about. It is a challengeable problem since the vocabulary used in the content of papers and in the citation contexts are usually quite different. To address this problem, we propose to use translation model, which can bridge the gap between two heterogeneous languages. We conduct an experiment and find the translation model can provide much better candidates of citations than the state-of-the-art methods."}
{"_id": "bac06fd65318737c5d445a2a2679b0daca7766c2", "title": "An Affordable Augmented Reality Based Rehabilitation System for Hand Motions", "text": "Repetitive practices can facilitate the rehabilitation of the motor functions of the upper extremity for stroke survivors whose motor functions have been impaired. An affordable rehabilitation system for hand motion would be useful to provide intensive training to the patients. Augmented Reality (AR) can provide an intuitive interface to the users with programmable environment and realism feelings. In this research, a low cost viable rehabilitation system for hand motion is developed based on the AR technology. In this application, seamless interaction between the virtual environment and the real hand is supported with markers and a self-designed data-glove. The data-glove is low cost (("}
{"_id": "901f60e47aaf76e48a50b527d0ef249a16b4b11d", "title": "A Deep Learning Approach to IoT Authentication", "text": "At its peak, the Internet-of-Things will largely be composed of low-power devices with wireless radios attached. Yet, secure authentication of these devices amidst adversaries with much higher power and computational capability remains a challenge, even for advanced cryptographic and wireless security protocols. For instance, a high-power software radio could simply replay chunks of signals from a low-power device to emulate it. This paper presents a deep-learning classifier that learns hardware imperfections of low-power radios that are challenging to emulate, even for high- power adversaries. We build an LSTM framework, specifically sensitive to signal imperfections that persist over long durations. Experimental results from a testbed of 30 low-power nodes demonstrate high resilience to advanced software radio adversaries."}
{"_id": "f395bfa935029944e13057e75b1f5525be4ca478", "title": "The Blockchain-Based Digital Content Distribution System", "text": "The blockchain-based digital content distribution system was developed. Decentralized and pear-to-pear authentication mechanism can be considered as the ideal rights management mechanism. The blockchain has the potential to realize this ideal content distribution system. This is the successful model of the Superdistribution concept which was announced almost 30 years ago. The proposed system was demonstrated and got a lot of feedback for the future practical system."}
{"_id": "9a5c2e7a6058e627b24b6f5771215d8515769f3f", "title": "Applying Epidemic Algorithm for Financial Service Based on Blockchain Technology", "text": "Our global market is emerging transformation strategy that can make the difference between success and failure. Smart contract systems through developments of technological innovations are increasingly seen as alternative technologies to impact transactional processes significantly. Blockchain is a smart contract protocol with trust offering the potential for creating new transaction platforms and thus shows a radical change of the current core value creation in third parties. These results in enormous cost and time savings and the reduced risk for the parties. This study proposed a method to improve the efficiency of distributed consensus in blockchains using epidemic algorithm. The results showed that epidemic protocols can distribute the information similar to blockchain."}
{"_id": "73a1116905643fad65c242c9f43e6b7fcc6b3aad", "title": "M4: A Visualization-Oriented Time Series Data Aggregation", "text": "Visual analysis of high-volume time series data is ubiquitous in many industries, including finance, banking, and discrete manufacturing. Contemporary, RDBMS-based systems for visualization of high-volume time series data have difficulty to cope with the hard latency requirements and high ingestion rates of interactive visualizations. Existing solutions for lowering the volume of time series data disregard the semantics of visualizations and result in visualization errors. In this work, we introduce M4, an aggregation-based time series dimensionality reduction technique that provides errorfree visualizations at high data reduction rates. Focusing on line charts, as the predominant form of time series visualization, we explain in detail the drawbacks of existing data reduction techniques and how our approach outperforms state of the art, by respecting the process of line rasterization. We describe how to incorporate aggregation-based dimensionality reduction at the query level in a visualizationdriven query rewriting system. Our approach is generic and applicable to any visualization system that uses an RDBMS as data source. Using real world data sets from high tech manufacturing, stock markets, and sports analytics domains we demonstrate that our visualization-oriented data aggregation can reduce data volumes by up to two orders of magnitude, while preserving perfect visualizations."}
{"_id": "1e7cf9047604f39e517951d129b2b3eecf9e1cfb", "title": "Modeling Interestingness with Deep Neural Networks", "text": "This paper presents a deep semantic similarity model (DSSM), a special type of deep neural networks designed for text analysis, for recommending target documents to be of interest to a user based on a source document that she is reading. We observe, identify, and detect naturally occurring signals of interestingness in click transitions on the Web between source and target documents, which we collect from commercial Web browser logs. The DSSM is trained on millions of Web transitions, and maps source-target document pairs to feature vectors in a latent space in such a way that the distance between source documents and their corresponding interesting targets in that space is minimized. The effectiveness of the DSSM is demonstrated using two interestingness tasks: automatic highlighting and contextual entity search. The results on large-scale, real-world datasets show that the semantics of documents are important for modeling interestingness and that the DSSM leads to significant quality improvement on both tasks, outperforming not only the classic document models that do not use semantics but also state-of-the-art topic models."}
{"_id": "4ae1a887623634d517e289ec676d90464996bd6a", "title": "Developments in Radar Imaging", "text": "Using range and Doppler information to produce radar images is a technique used in such diverse fields as air-to-ground imaging of objects, terrain, and oceans and ground-to-air imaging of aircraft, space objects, and planets. A review of the range-Doppler technique is presented along with a description of radar imaging forms including details of data acquisition and processing techniques."}
{"_id": "30420d73113586934cb680e999c8a9def7f5bccb", "title": "The Expressive Power of SPARQL", "text": "This paper studies the expressive power of SPARQL. The main result is that SPARQL and non-recursive safe Datalog with negation have equivalent expressive power, and hence, by classical results, SPARQL is equivalent from an expressive point of view to Relational Algebra. We present explicit generic rules of the transformations in both directions. Among other findings of the paper are the proof that negation can be simulated in SPARQL, that non-safe filters are superfluous, and that current SPARQL W3C semantics can be simplified to a standard"}
{"_id": "c8934bd3b0d8825a2a9bb112653250956b3a2a67", "title": "Working Memory Deficits can be Overcome : Impacts of Training and Medication on Working Memory in Children with ADHD", "text": "This study evaluated the impact of two interventions\u2014a training program and stimulant medication\u2014on working memory (WM) function in children with attention deficit hyperactivity disorder (ADHD). Twenty-five children aged between 8 and 11 years participated in training that taxed WM skills to the limit for a minimum of 20 days, and completed other assessments of WM and IQ before and after training, and with and without prescribed drug treatment. While medication significantly improved visuo-spatial memory performance, training led to substantial gains in all components of WM across untrained tasks. Training gains associated with the central executive persisted over a 6month period. IQ scores were unaffected by either intervention. These findings indicate that the WM impairments in children with ADHD can be differentially ameliorated by training and by stimulant medication. Copyright # 2009 John Wiley & Sons, Ltd."}
{"_id": "4bd70c3a9f1455e59d5ff9a2cc477b3028784556", "title": "Toward Fast and Accurate Neural Discourse Segmentation", "text": "Discourse segmentation, which segments texts into Elementary Discourse Units, is a fundamental step in discourse analysis. Previous discourse segmenters rely on complicated hand-crafted features and are not practical in actual use. In this paper, we propose an endto-end neural segmenter based on BiLSTMCRF framework. To improve its accuracy, we address the problem of data insufficiency by transferring a word representation model that is trained on a large corpus. We also propose a restricted self-attention mechanism in order to capture useful information within a neighborhood. Experiments on the RST-DT corpus show that our model is significantly faster than previous methods, while achieving new stateof-the-art performance. 1"}
{"_id": "b2d9a76773ba090210145e0ab72a6e339c953461", "title": "Apollonian Circle Packings: Geometry and Group Theory II. Super-Apollonian Group and Integral Packings", "text": "Apollonian circle packings arise by repeatedly filling the interstices between four mutually tangent circles with further tangent circles. Such packings can be described in terms of the Descartes configurations they contain, where a Descartes configuration is a set of four mutually tangent circles in the Riemann sphere, having disjoint interiors. Part I showed there exists a discrete group, the Apollonian group, acting on a parameter space of (ordered, oriented) Descartes configurations, such that the Descartes configurations in a packing formed an orbit under the action of this group. It is observed there exist infinitely many types of integral Apollonian packings in which all circles had integer curvatures, with the integral structure being related to the integral nature of the Apollonian group. Here we consider the action of a larger discrete group, the super-Apollonian group, also having an integral structure, whose orbits describe the Descartes quadruples of a geometric object we call a super-packing. The circles in a super-packing never cross each other but are nested to an arbitrary depth. Certain Apollonian packings and super-packings are strongly integral in the sense that the curvatures of all circles are integral and the curvature\u00d7centers of all circles are integral. We show that (up to scale) there are exactly 8 different (geometric) strongly integral super-packings, and that"}
{"_id": "6753404e2515bc76f17016d0ec52d91b65eb1aa3", "title": "Addressing challenges in the clinical applications associated with CRISPR/Cas9 technology and ethical questions to prevent its misuse", "text": "Xiang Jin Kang, Chiong Isabella Noelle Caparas, Boon Seng Soh, Yong Fan 1 Key Laboratory for Major Obstetric Diseases of Guangdong Province, Key Laboratory of Reproduction and Genetics of Guangdong Higher Education Institutes, Center for Reproductive Medicine, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou 510150, China 2 Disease Modeling and Therapeutics Laboratory, A*STAR Institute of Molecular and Cell Biology, 61 Biopolis Drive Proteos, Singapore 138673, Singapore & Correspondence: bssoh@imcb.a-star.edu.sg (B. S. Soh), yongfan011@gzhmu.edu.cn (Y. Fan)"}
{"_id": "a8b0e7517ff5121eda0b60cf7cbf203d6fac525d", "title": "Common mode EMI reduction technique for interleaved MHz critical mode PFC converter with coupled inductor", "text": "Coupled inductor has been widely adopted in VR applications for many years because of its benefits such as reducing current ripple and improving transient performance. In this presentation, coupled inductor concept is applied to interleaved MHz totem-pole CRM PFC converter with GaN devices. The coupled inductor in CRM PFC converter can reduce switching frequency variation, help achieving ZVS and reduce circulating energy. Therefore the coupled inductor can improve the efficiency of PFC converter. In addition, balance technique is applied to help minimize CM noise. This paper will introduce how to achieve balance with coupled inductor. The novel PCB winding inductor design will be provided. The PCB winding coupled inductor will have similar loss with litz wire inductor but much less labor in manufacture."}
{"_id": "8ad12d3ee186403b856639b58d7797aa4b89a6c7", "title": "Temporal Relational Reasoning in Videos", "text": "Temporal relational reasoning, the ability to link meaningful transformations of objects or entities over time, is a fundamental property of intelligent species. In this paper, we introduce an effective and interpretable network module, the Temporal Relation Network (TRN), designed to learn and reason about temporal dependencies between video frames at multiple time scales. We evaluate TRN-equipped networks on activity recognition tasks using three recent video datasets SomethingSomething, Jester, and Charades which fundamentally depend on temporal relational reasoning. Our results demonstrate that the proposed TRN gives convolutional neural networks a remarkable capacity to discover temporal relations in videos. Through only sparsely sampled video frames, TRN-equipped networks can accurately predict human-object interactions in the Something-Something dataset and identify various human gestures on the Jester dataset with very competitive performance. TRN-equipped networks also outperform two-stream networks and 3D convolution networks in recognizing daily activities in the Charades dataset. Further analyses show that the models learn intuitive and interpretable visual common sense knowledge in videos."}
{"_id": "2a628e4a9c5f78bc6dcdf16514353336547846cc", "title": "Achieving high utilization with software-driven WAN", "text": "We present SWAN, a system that boosts the utilization of inter-datacenter networks by centrally controlling when and how much traffic each service sends and frequently re-configuring the network's data plane to match current traffic demand. But done simplistically, these re-configurations can also cause severe, transient congestion because different switches may apply updates at different times. We develop a novel technique that leverages a small amount of scratch capacity on links to apply updates in a provably congestion-free manner, without making any assumptions about the order and timing of updates at individual switches. Further, to scale to large networks in the face of limited forwarding table capacity, SWAN greedily selects a small set of entries that can best satisfy current demand. It updates this set without disrupting traffic by leveraging a small amount of scratch capacity in forwarding tables. Experiments using a testbed prototype and data-driven simulations of two production networks show that SWAN carries 60% more traffic than the current practice."}
{"_id": "35deb0910773b810a642ff3b546de4eecfdc3ac3", "title": "B4: experience with a globally-deployed software defined wan", "text": "We present the design, implementation, and evaluation of B4, a private WAN connecting Google's data centers across the planet. B4 has a number of unique characteristics: i) massive bandwidth requirements deployed to a modest number of sites, ii) elastic traffic demand that seeks to maximize average bandwidth, and iii) full control over the edge servers and network, which enables rate limiting and demand measurement at the edge.\n These characteristics led to a Software Defined Networking architecture using OpenFlow to control relatively simple switches built from merchant silicon. B4's centralized traffic engineering service drives links to near 100% utilization, while splitting application flows among multiple paths to balance capacity against application priority/demands. We describe experience with three years of B4 production deployment, lessons learned, and areas for future work."}
{"_id": "122560e02003d5c0f1de3e0083c7a0474c8f1a53", "title": "BotMiner: Clustering Analysis of Network Traffic for Protocol- and Structure-Independent Botnet Detection", "text": "Botnets are now the key platform for many Internet attacks, such as spam, distributed denial-of-service (DDoS), identity theft, and phishing. Most of the current botnet detection approaches work only on specific botnet command and control (C&C) protocols (e.g., IRC) and structures (e.g., centralized), and can become ineffective as botnets change their C&C techniques. In this paper, we present a general detection framework that is independent of botnet C&C protocol and structure, and requires noa priori knowledge of botnets (such as captured bot binaries and hence the botnet signatures, and C&C server names/addresses). We start from the definition and essential properties of botnets. We define a botnet as acoordinated groupof malware instances that arecontrolled via C&C communication channels. The essential properties of a botnet are that the bots communicate with some C&C servers/peers, perform malicious activities, and do so in a similar or correlated way. Accordingly, our detection framework clusters similar communication traffic and similar malicious traffic, and performs cross cluster correlation to identify the hosts that share both similar communication patterns and similar malicious activity patterns. These hosts are thus bots in the monitored network. We have implemented our BotMiner prototype system and evaluated it using many real network traces. The results show that it can detect real-world botnets ( IRC-based, HTTP-based, and P2P botnets including Nugache and Storm worm), and has a very low false positive rate."}
{"_id": "1783cbfb965484403b6702d07ccde5dc74ebb132", "title": "Forwarding metamorphosis: fast programmable match-action processing in hardware for SDN", "text": "In Software Defined Networking (SDN) the control plane is physically separate from the forwarding plane. Control software programs the forwarding plane (e.g., switches and routers) using an open interface, such as OpenFlow. This paper aims to overcomes two limitations in current switching chips and the OpenFlow protocol: i) current hardware switches are quite rigid, allowing ``Match-Action'' processing on only a fixed set of fields, and ii) the OpenFlow specification only defines a limited repertoire of packet processing actions. We propose the RMT (reconfigurable match tables) model, a new RISC-inspired pipelined architecture for switching chips, and we identify the essential minimal set of action primitives to specify how headers are processed in hardware. RMT allows the forwarding plane to be changed in the field without modifying hardware. As in OpenFlow, the programmer can specify multiple match tables of arbitrary width and depth, subject only to an overall resource limit, with each table configurable for matching on arbitrary fields. However, RMT allows the programmer to modify all header fields much more comprehensively than in OpenFlow. Our paper describes the design of a 64 port by 10 Gb/s switch chip implementing the RMT model. Our concrete design demonstrates, contrary to concerns within the community, that flexible OpenFlow hardware switch implementations are feasible at almost no additional cost or power."}
{"_id": "2740b1c450c7b6fa0d7cb8b52681c2e5b6f72752", "title": "Stability and Generalization", "text": "We define notions of stability for learning algorithms and show how to use these notions to derive generalization error bounds based on the empirical error and the leave-one-out error. The methods we use can be applied in the regression framework as well as in the classification one when the classifier is obtained by thresholding a real-valued function. We study the stability properties of large classes of learning algorithms such as regularization based algorithms. In particular we focus on Hilbert space regularization and Kullback-Leibler regularization. We demonstrate how to apply the results to SVM for regression and classification."}
{"_id": "cf28677d9a095fde0ea19cb90d1f84f8b8d2e56c", "title": "Zero-voltage switching flyback-boost converter with voltage-doubler rectifier for high step-up applications", "text": "A zero-voltage switching (ZVS) flyback-boost (FB) converter with a voltage-doubler rectifier (VDR) has been proposed. By combining the common part between a flyback converter and a boost converter as a parallel-input/series-output (PISO) configuration, this proposed circuit can increase a step-up ratio and clamp the surge voltage of switches. The secondary VDR provides a further extended step-up ratio as well as its voltage stress to be clamped. An auxiliary switch instead of a boost diode enables all switches to be turned on under ZVS conditions. The zero-current turn-off of the secondary VDR alleviates its reverse-recovery losses. The operation principles, the theoretical analysis, and the design consideration are investigated. The experimental results from a 250W and 42V-to-400V prototype are shown to verify the proposed scheme."}
{"_id": "9d3133a2d3c536fe628fd1fe1d428371b414ba0e", "title": "Human Gait Recognition Using Patch Distribution Feature and Locality-Constrained Group Sparse Representation", "text": "In this paper, we propose a new patch distribution feature (PDF) (i.e., referred to as Gabor-PDF) for human gait recognition. We represent each gait energy image (GEI) as a set of local augmented Gabor features, which concatenate the Gabor features extracted from different scales and different orientations together with the X-Y coordinates. We learn a global Gaussian mixture model (GMM) (i.e., referred to as the universal background model) with the local augmented Gabor features from all the gallery GEIs; then, each gallery or probe GEI is further expressed as the normalized parameters of an image-specific GMM adapted from the global GMM. Observing that one video is naturally represented as a group of GEIs, we also propose a new classification method called locality-constrained group sparse representation (LGSR) to classify each probe video by minimizing the weighted l1, 2 mixed-norm-regularized reconstruction error with respect to the gallery videos. In contrast to the standard group sparse representation method that is a special case of LGSR, the group sparsity and local smooth sparsity constraints are both enforced in LGSR. Our comprehensive experiments on the benchmark USF HumanID database demonstrate the effectiveness of the newly proposed feature Gabor-PDF and the new classification method LGSR for human gait recognition. Moreover, LGSR using the new feature Gabor-PDF achieves the best average Rank-1 and Rank-5 recognition rates on this database among all gait recognition algorithms proposed to date."}
{"_id": "c6a2aaf173061731ec7f06f3a1d4b3734f10af18", "title": "A series resonant converter with constant on-time control for capacitor charging applications", "text": "A capacitor charging power supply (CCPS) has been assembled. A full-bridge series resonant topology using MOSFETs as switching elements makes up the bulk of the high-power section of the CCPS. A high-voltage transformer couples the output stage to the resonant inverter, and the secondary current of this transformer is rectified to provide the charging current. The CCPS uses constant on-time control with zero current switching. The control scheme is implemented with Unitrode's UC3860 resonant mode power supply controller. The control circuitry also protects the CCPS against both overvoltage and overcurrent conditions. The CCPS has been tested in the laboratory, where it successfully charged capacitors ranging from 90 nF to 1 mu F to 2000 V.<>"}
{"_id": "770ce8f2b66749c017a7e38ba7e8d504576afccf", "title": "A vehicular rooftop, shark-fin, multiband antenna for the GPS/LTE/cellular/DSRC systems", "text": "A shark-fin rooftop multiband antenna for the automotive industry is proposed. The antenna system receives Right Hand Circularly Polarized (RHCP) satellite signals for the Global Positioning System's (GPS) L1-Band and Vertically Linearly Polarized (VLP) signals for the Long Term Evolution (LTE), the cellular frequencies, the Universal Mobile Telecommunications Systems (UMTS) and the Dedicated Short Range Communications (DSRC) system. A ceramic patch antenna element was used for the GPS signals and a printed monopole utilizing printed ground sleeve technique for the cellular and LTE signals. A study of four potential DSRC antenna elements to determine the appropriate antenna element location within the antenna cover to obtain optimum performance is also presented and discussed in detail. The antenna module was measured on a 1-meter diameter ground plane and on a vehicle roof model. The design and measurements results are presented and discussed below."}
{"_id": "25b87d1d17adabe2923da63e0b93fb7d2bac73f7", "title": "Service specific anomaly detection for network intrusion detection", "text": "The constant increase of attacks against networks and their resources (as recently shown by the CodeRed worm) causes a necessity to protect these valuable assets. Firewalls are now a common installation to repel intrusion attempts in the first place. Intrusion detection systems (IDS), which try to detect malicious activities instead of preventing them, offer additional protection when the first defense perimeter has been penetrated. ID systems attempt to pin down attacks by comparing collected data to predefined signatures known to be malicious (signature based) or to a model of legal behavior (anomaly based).Anomaly based systems have the advantage of being able to detect previously unknown attacks but they suffer from the difficulty to build a solid model of acceptable behavior and the high number of alarms caused by unusual but authorized activities. We present an approach that utilizes application specific knowledge of the network services that should be protected. This information helps to extend current, simple network traffic models to form an application model that allows to detect malicious content hidden in single network packets. We describe the features of our proposed model and present experimental data that underlines the efficiency of our systems."}
{"_id": "dc1b9c537812b3542c08ab982394abca998fc211", "title": "Wideband and Unidirectional Cavity-Backed Folded Triangular Bowtie Antenna", "text": "In this paper, a cavity-backed folded triangular bowtie antenna (FTBA) is proposed and investigated. Comparisons show that it has much smaller fluctuation of input impedance and much larger bandwidth (BW) for stable radiation patterns than cavity-backed ordinary TBA. The proposed antenna is fed by a conventional balun composed of a transition from a microstrip line to a parallel stripline. It achieves an impedance BW of 92.2% for |S11| les -10 dB, stable gain of around 9.5 dBi and unidirectional radiation patterns over the whole operating band. A close examination of the antenna aperture efficiency and the electric field distribution in the aperture provides an explanation for the stable gain across such large frequency band. Comparisons are made against several conventional antennas, including short backfire and microstrip patch antennas, to illustrate the salient features of the proposed antenna. A design guideline is also given for practical applications."}
{"_id": "e50d15d7d5a7435d709757fa29da1a752e977d3c", "title": "A new and rapid colorimetric determination of acetylcholinesterase activity.", "text": "A photometric method for determining acetylcholinesterase activity of tissue extracts, homogenates, cell suspensions, etc., has been described. The enzyme activity is measured by following the increase of yellow color produced from thiocholine when it reacts with dithiobisnitrobenzoate ion. It is based on coupling of these reactions: (enzyme) acetylthiocholine -----+ thiocholine + acetate thiocholine + dithiobisnitrobenzoate -+ yellow color The latter reaction is rapid and the assay is sensitive (i.e. a 10 pl sample of blood is adequate). The use of a recorder has been most helpful, but is not essential. The method has been used to study the enzyme in human erythrocytes and homogenates of rat brain, kidney, lungs, liver and muscle tissue. Kinetic constants determined by this system for erythrocyte cholinesterase are presented. The data obtained with acetylthiocholine as substrate are similar to those with acetylcholine. INTRODUCTION A FEW years ago Bonting and Featherstonel introduced a modification of the Hestrin hydroxamic acid method2 suitable for the determination of cholinesterase levels in small quantities of cells cultured in vitro. This modification was used successfully in several studies involving the control of enzyme levels in cells by manipulating the levels of substrates or closely related compounds present in the medium.3 Several interesting areas of research were indicated by these studies. However, this modified method of enzyme assay, although scaled down to the micro-level, had several disadvantages. Among these was the fact that only a terminal figure could be obtained from the material in one tube of cultured cells. Thus, the time course of the reaction could not be followed without resorting to separate experimental tubes for each time interval desired. The method also had the disadvantage that the color measured *was developed from the remainder of an added substrate-a procedure in which the possibility of error is relatively great when the level of enzyme activity is small. Consideration of the relative merits of various methods which might be useful in studying the time-course of acetylcholinesterase activity in very small tissue samples led us to combine a method reported by Koelle4 with a sulfhydryl reagent studied by Ellman.5 This new method, which is presented here, is extremely sensitive and is applicable to either small amounts of tissue or to low concentrations of enzyme. It makes detailed kinetic studies of acetylcholinesterase activity possible. The progress of the hydrolysis is followed by the measurement of a product of the reaction. 88 A new and rapid calorimetric determination of acetylcholinesterase activity 89 Acetylthiocholine is used as the substrate. This analog of the natural substrate has been used most extensively by Koelle4 for histochemical localization. Other workerss, \u2019 have used the sulfur analog in the enzyme assay. Their work, in addition to data we shall present, suggests that this compound is a satisfactory substitute for the natural substrate, and differs much less than some of the synthetic substrates frequently used as in assays of phosphatases, trypsin, chymotrypsin, pepsin, etc. The principle of the method is the measurement of the rate of production of thiocholine as acetylthiocholine is hydrolyzed. 
This is accomplished by the continuous reaction of the thiol with 5 : Sdithiobis-2-nitrobenzoate ion (I)5 (enzyme) H,O + (CH,),&CH,CH,SCOCH, h (CH&CH2CH2S+ + CH,COO+ 2H+ (CH&H$H$+ RSSR + (CH&&CH&H,SSR + RS(I) (II) to produce the yellow anion of 5-thio-2-nitro-benzoic acid (II). The rate of color production is measured at 412 rnp in a photometer. The reaction with the thiol has been shown to be sufficiently rapid so as not to be rate limiting in the measurement of the enzyme, and in the concentrations used does not inhibit the enzymic hydrolysis. By recording the output of the photometer continuously, records of the complete assay can be obtained (Fig. 1). We considered it desirable to establish that this method yields results comparable with other procedures. For this reason, the effects of inhibitors were examined; the kinetic constants were calculated and compared with those obtained by other methods. In addition, we were able to compare assays on blood samples by the ferrichydroxamate methods and the present one. METHODS The reaction rates were recorded with a Beckman DU spectrophotometer equipped with a Beckman adapter and a Minneapolis-Honeywell recorder. The general method used was to place buffer in the photocell and add concentrated solutions of reagents by means of micropipettes. The mixture was stirred by continued blowing through the pipettes while moving them around the bottom of the photometer cells. In this way, reagents were added, mixed, and the cover of the cell compartment replaced within 10-15 sec. Our photometer required 3040 set to become stabilized to new light conditions. Thus, there was about 1 min when the readings were due to a combination of factors (e.g. bubbles rising through the solutions, sulfhydryl material in the \u201cenzyme\u201d, etc.) which were unrelated to the desired measurements. Subsequent readings were strictly dependent on the absorption of the solution under consideration and even rapid changes were followed by the recorder faithfully, as evidenced by the reproducibility of time-transmission curves (Fig. 1). 90 GEORGE L. ELLMAN et. al. Solutions Buffer. Phosphate, 0.1 M, pH 8.0. Substrate. Acetylthiocholine iodide, * O-075 M (21.67 mg/ml). This solution was used successfully for lo-15 days if kept refrigerated. Reagent. Dithiobisnitrobenzoic acid (DTNB) 0.01 M of the 5 : 5-dithiobis-2nitrobenzoic acid7 prepared as described previously,5 39.6 mg were dissolved in 10 ml pH 7.0 phosphate buffer (O-1 M) and 15 mg of sodium bicarbonate were added. The reagent was made up in buffer of pH 7 in which it was more stable than in that of pH 8. Enzyme. Bovine erythrocyte cholinesterase (Nutritional Biochem. Corp., 20,000 units) was dissolved in 20 ml of 1% gelatin. This solution was diluted 1 : 200 with water for use, yielding a solution of 5 units/ml. General method A typical run used : 3.0 ml pH 8.0 buffer, 20.0 ~1 substrate, 100.0 ~1 DTNB (reagent), 50.0 ~1 enzyme. The results of several runs are shown in Fig. 1. The blank for such a run consists of buffer, substrate, and DTNB solutions. The absorbances: were read from the strip charts and plotted on a rectangular graph paper, the best line drawn through the points and the slope measured. In a run such as that described above, the linear portion of the curve describing the hydrolysis was observed during the first 15-20 min of the reaction; the slope is the rate in absorbance units/min. 
At this pH level, there is an appreciable non-enzymic hydrolysis of the substrate, and for long runs it was necessary to correct for this. The rate of non-enzymic hydrolysis of acetylthiocholine at 25\u201d was 0.0016 absorbance units per min. The procedures have been extended to micro-size. A run comparable to those in Fig. 1 was made in a micro-cell (total solution volume was O-317 ml). The rate was 0*102/min, the same as that determined in the larger cuvettes. Since the extinction coefficient of the yellow anion (II) is known5 the rates can be converted to absolute units, viz. : A absorbance/min rate (moles/l. per min) = 1.36 x lo4 In dealing with cell extracts or suspensions, a blank consisting of extract or suspension, DTNB, and buffer may be required to correct for release of thiol material from the cells and the absorbance of the other materials in the suspension. Method for blood A fairly stable suspension was formed from whole blood or washed human erythrocytes. Since the acetylcholinesterase is on the cell membrane, hemolysis was not necessary. The assay of blood was carried out as follows: (1) A suspension of the blood cells4 in phosphate buffer (pH 8.0, 0.1 M) was prepared. The most practical dilution was 1 : 600 (e.g. 10~1 blood into 6 ml buffer). (2) Exactly 3.0 ml of the suspension were pipetted into a cuvette. * California Corporation for Biochemical Research, Los Angeles, California. t This is now available from the Aldrich Chemical Co.. 2369 No. 29th. Milwaukee 10, Wisconsin. $ Strip charts printed in absorbance units are available from Minneapolis-Honeywell Corporation, Chart 5871. 4 Red cell counts were performed by the clinical laboratory. FI G . 1. P ho to gr ap h of s tr ip ch ar t re co rd of t w o id en tic al as sa ys . A t th e ar ro w , ph ys os tig m in e sa lic yl at e (f in al c on ce nt ra tio n, 3 x IO -\u2019 M ) w as a dd ed to a t hi rd re pl ic at e."}
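The conversion formula quoted in the record translates directly into code; the 1 cm light path is our assumption (a standard cuvette), and the blank-slope argument is the optional correction for the non-enzymic hydrolysis mentioned in the text:

```python
# Convert a recorded slope (absorbance units per minute at 412 nm) into an
# absolute hydrolysis rate using the DTNB anion's extinction coefficient
# of 1.36 x 10^4 l/(mol*cm) quoted in the paper.
def rate_mol_per_l_min(slope_abs_per_min, blank_slope=0.0, path_cm=1.0):
    """rate = (dA/min - blank) / (1.36e4 * path); blank_slope may be set to
    0.0016 (the reported non-enzymic hydrolysis at 25 deg) for long runs."""
    return (slope_abs_per_min - blank_slope) / (1.36e4 * path_cm)

print(rate_mol_per_l_min(0.102))   # the micro-cell run quoted in the text
```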
{"_id": "59b6d25b2e69f6114646e2913005263817fdf00e", "title": "Entity-Relationship Extraction from Wikipedia Unstructured Text", "text": "Wikipedia has been the primary source of information for many automatically-generated Semantic Web data sources. However, they suffer from incompleteness since they largely do not cover information contained in the unstructured texts of Wikipedia. Our goal is to extract structured entity-relationships in RDF from such unstructured texts, ultimately using them to enrich existing data sources. Our extraction technique is aimed to be topic-independent, leveraging grammatical dependency of sentences and semantic refinement. Preliminary evaluations of the proposed approach have shown some promising results."}
{"_id": "71271e751e94ede85484250d0d8f7fc444423533", "title": "Future Internet of Things: open issues and challenges", "text": "Internet of Things (IoT) and its relevant technologies have been attracting the attention of researchers from academia, industry, and government in recent years. However, since the requirements of the IoT are quite different from what the Internet today can offer, several innovative techniques have been gradually developed and incorporated into IoT, which is referred to as the Future Internet of Things (FIoT). Among them, how to extract \u2018\u2018data\u2019\u2019 and transfer them into \u2018\u2018knowledge\u2019\u2019 from sensing layer to application layer has become a vital issue. This paper begins with an overview of IoT and FIoT, followed by discussions on how to apply data mining and computational intelligence to FIoT. An intelligent data management framework inspired by swarm optimization will then given. Finally, open issues and future trends of this field"}
{"_id": "e773cdd58f066db97b305fcb5dbad551aa38f00c", "title": "Reading and Writing as a Creative Cycle: the Need for a Computational Model", "text": "The field of computational narratology has produced many efforts aimed at generating narrative by computational means. In recent times, a number of such efforts have considered the task of modelling how a reader might consume the story. Whereas all these approaches are clearly different aspects of the task of generating narrative, so far the efforts to model them have occurred as separate and disjoint initiatives. There is an enormous potential for improvement if a way was found to combine results from these initiatives with one another. The present position paper provides a breakdown of the activity of creating stories into five stages that are conceptually different from a computational point of view and represent important aspects of the overall process as observed either in humans or in existing systems. These stages include a feedback loop that builds interpretations of an ongoing composition and provides feedback based on these to inform the composition process. This model provides a theoretical framework that can be employed first to understand how the various aspects of the task of generating narrative relate to one another, second to identify which of these aspects are being addressed by the different existing research efforts, and finally to point the way towards possible integrations of these aspects within progressively more complex systems."}
{"_id": "df91209c3933b263885a42934f78e7216bc2dc74", "title": "Kinematics, Dynamics and Power Consumption Analyses for Turning Motion of a Six-Legged Robot", "text": "This paper deals with kinematics, dynamics and power consumption analyses of a six-legged robot generating turning motions to follow a circular path. Direct and inverse kinematics analysis has been carried out for each leg in order to develop an overall kinematics model of the six-legged robot. It aims to estimate energyoptimal feet forces and joint torques of the sixlegged robot, which are necessary to have for its real-time control. To determine the optimum feet forces, two approaches are developed, such as minimization of norm of feet forces and minimization of norm of joint torques using a least square method, and their performances are compared. The developed kinematics and dynamics models are tested through computer simulations for generating turning motion of a statically stable sixlegged robot over flat terrain with four different duty factors. The maximum values of feet forces and joint torques decrease with the increase of duty factor. A power consumption model has S. S. Roy Department of Mechanical Engineering, National Institute of Technology, Durgapur, India e-mail: ssroy99@yahoo.com D. K. Pratihar (B) Department of Mechanical Engineering, Indian Institute of Technology, Kharagpur, India e-mail: dkpra@mech.iitkgp.ernet.in been derived for the statically stable wave gaits to minimize the power requirement for both optimal foot force distributions and optimal foot-hold selection. The variations of average power consumption with the height of the trunk body and radial offset have been analyzed in order to find out energy-optimal foothold. A parametric study on energy consumption has been carried out by varying angular velocity of the robot to minimize the total energy consumption during locomotion. It has been found that the energy consumption decreases with the increase of angular velocity for a particular traveled distance."}
{"_id": "34ad25995c2e2ab2ef6e39d36db8808805b2914b", "title": "An Autonomous Reliability-Aware Negotiation Strategy for Cloud Computing Environments", "text": "Cloud computing paradigm allows subscription-based access to computing and storages services over the Internet. Since with advances of Cloud technology, operations such as discovery, scaling, and monitoring are accomplished automatically, negotiation between Cloud service requesters and providers can be a bottleneck if it is carried out by humans. Therefore, our objective is to offer a state-of-the-art solution to automate the negotiation process in Cloud environments. In previous works in the SLA negotiation area, requesters trust whatever QoS criteria values providers offer in the process of negotiation. However, the proposed negotiation strategy for requesters in this work is capable of assessing reliability of offers received from Cloud providers. In addition, our proposed negotiation strategy for Cloud providers considers utilization of resources when it generates new offers during negotiation and concedes more on the price of less utilized resources. The experimental results show that our strategy helps Cloud providers to increase their profits when they are participating in parallel negotiation with multiple requesters."}
{"_id": "790989568b2290298bcf15a7b06abc6c27720f75", "title": "Medication intake adherence with real time activity recognition on IoT", "text": "Usefulness of health care services is seriously effected by medication adherence. Medication intake is one of the cases where adherence may be difficult for the patients, who are willing to undertake the prescribed therapy. Internet of Things (IoT) infrastructure for activity monitoring is a strong candidate solution to maintain adherence of forgetful patients. In this study, we propose an IoT framework where medication intake is ensured with real time continuous activity recognition. We present our use case with the focus of application and network layers. We utilize an activity classification scheme, which considers inter-activity detection consistency based on non-predefined feature extraction as the application layer. The network layer includes a gateway structure ensuring end-to-end reliability in the connection between a wireless sensor network (WSN) and Internet. Results obtained in simulation environment suggest that the selected application and network layers introduce a feasible solution for the medication intake use case."}
{"_id": "c1ff5cdadaa8215d885c398fb0a3691550ad5770", "title": "An Overview of Small Unmanned Aerial Vehicles for Air Quality Measurements: Present Applications and Future Prospectives", "text": "Assessment of air quality has been traditionally conducted by ground based monitoring, and more recently by manned aircrafts and satellites. However, performing fast, comprehensive data collection near pollution sources is not always feasible due to the complexity of sites, moving sources or physical barriers. Small Unmanned Aerial Vehicles (UAVs) equipped with different sensors have been introduced for in-situ air quality monitoring, as they can offer new approaches and research opportunities in air pollution and emission monitoring, as well as for studying atmospheric trends, such as climate change, while ensuring urban and industrial air safety. The aims of this review were to: (1) compile information on the use of UAVs for air quality studies; and (2) assess their benefits and range of applications. An extensive literature review was conducted using three bibliographic databases (Scopus, Web of Knowledge, Google Scholar) and a total of 60 papers was found. This relatively small number of papers implies that the field is still in its early stages of development. We concluded that, while the potential of UAVs for air quality research has been established, several challenges still need to be addressed, including: the flight endurance, payload capacity, sensor dimensions/accuracy, and sensitivity. However, the challenges are not simply technological, in fact, policy and regulations, which differ between countries, represent the greatest challenge to facilitating the wider use of UAVs in atmospheric research."}
{"_id": "2815c2835d6056479d64aa64d879df9cd2572d2f", "title": "Predicting India Volatility Index: An Application of Artificial Neural Network", "text": "Forecasting has always been an area of interest for the researchers in various realms of finance especially in the stock market e. g. stock index, return on a stock, etc. Stock market volatility is one such area. Since the inception of implied volatility index (VIX) by the Chicago Board of Options Exchange (CBOE) in 1993, VIX index has generated a lot of interest. This study examines the predicting ability of several technical indicators related to VIX index to forecast the next trading day's volatility. There is a wide set of methods available for forecasting in finance. In this study, Artificial neural network (ANN) modeling technique has been employed to forecast the upwards or downwards movement in next trading day's volatility using India VIX (a volatility index based on the NIFTY Index Option prices) based indicators. The results of the study reveal that ANN models can be real handy in forecasting the downwards movement in VIX. The knowledge about a more probable downwards movement in volatility might be significant value add for the investors and help them in making decisions related to trading."}
{"_id": "873c4a435d52f803e8391dde9be89044ff630725", "title": "A Novel Transformer Structure for High power, High Frequency converter", "text": "Power transformer structure is a key factor for the high power, high frequency converter performance which includes efficiency, thermal performance and power density. This paper proposes a novel transformer structure for the kilo-watt level, high frequency converter which is reinforce insulation needed for the secondary side to primary side. The transformer has spiral-wound primary layers using TIW (triple insulation wire) and PCB-winding secondary layers. All the windings are arranged by full interleaving structure to minimize the leakage inductance and eddy current loss. Further more, the secondary rectifiers and filter capacitors are mounted in PCB-winding secondary layers to further minimize the termination effect. A 1.2 KW (O/P: 12 V/100 A, I/P: 400 V) Mega Hz LLC converter prototype employed the proposed transformer structure is constructed, and over 96% efficiency achieved."}
{"_id": "10338babf0119e3dba196aef44fa717a1d9a06df", "title": "Private local automation clouds built by CPS: Potential and challenges for distributed reasoning", "text": ""}
{"_id": "bffe23d44eec0324f1be57877f05b06c379e77d5", "title": "Overview: The Design, Adoption, and Analysis of a Visual Document Mining Tool for Investigative Journalists", "text": "For an investigative journalist, a large collection of documents obtained from a Freedom of Information Act request or a leak is both a blessing and a curse: such material may contain multiple newsworthy stories, but it can be difficult and time consuming to find relevant documents. Standard text search is useful, but even if the search target is known it may not be possible to formulate an effective query. In addition, summarization is an important non-search task. We present Overview, an application for the systematic analysis of large document collections based on document clustering, visualization, and tagging. This work contributes to the small set of design studies which evaluate a visualization system \u201cin the wild\u201d, and we report on six case studies where Overview was voluntarily used by self-initiated journalists to produce published stories. We find that the frequently-used language of \u201cexploring\u201d a document collection is both too vague and too narrow to capture how journalists actually used our application. Our iterative process, including multiple rounds of deployment and observations of real world usage, led to a much more specific characterization of tasks. We analyze and justify the visual encoding and interaction techniques used in Overview's design with respect to our final task abstractions, and propose generalizable lessons for visualization design methodology."}
{"_id": "8b9b7e1fb101a899b0309ec508ac5912787cc12d", "title": "Securing Bitcoin wallets via a new DSA / ECDSA threshold signature scheme", "text": "The Bitcoin ecosystem has suffered frequent thefts and losses affecting both businesses and individuals. Due to the irreversibility, automation, and pseudonymity of transactions, Bitcoin currently lacks support for the sophisticated internal control systems deployed by modern businesses to deter fraud. To address this problem, we present the first threshold signature scheme compatible with Bitcoin\u2019s ECDSA signatures and show how distributed Bitcoin wallets can be built using this primitive. For businesses, we show how our distributed wallets can be used to systematically eliminate single points of failure at every stage of the flow of bitcoins through the system. For individuals, we design, implement, and evaluate a two-factor secure Bitcoin wallet."}
{"_id": "a477778d8ae4cb70b67d02d094d030c44d0c6ffd", "title": "Joint Learning Templates and Slots for Event Schema Induction", "text": "Automatic event schema induction (AESI) means to extract meta-event from raw text, in other words, to find out what types (templates) of event may exist in the raw text and what roles (slots) may exist in each event type. In this paper, we propose a joint entity-driven model to learn templates and slots simultaneously based on the constraints of templates and slots in the same sentence. In addition, the entities\u2019 semantic information is also considered for the inner connectivity of the entities. We borrow the normalized cut criteria in image segmentation to divide the entities into more accurate template clusters and slot clusters. The experiment shows that our model gains a relatively higher result than previous work."}
{"_id": "70fc8f3b9727a028e23ae89bbf1bb4ec4feb8db6", "title": "Information & Environment: IoT-Powered Recommender Systems", "text": "Internet of Things (IoT) infrastructure within the physical library environment is the basis for an integrative, hybrid approach to digital resource recommenders. The IoT infrastructure provides mobile, dynamic wayfinding support for items in the collection, which includes features for location-based recommendations. The evaluation and analysis herein clarified the nature of users\u2019 requests for recommendations based on their location, and describes subject areas of the library for which users request recommendations. The results indicated that users of IoTbased recommendation are interested in a broad distribution of subjects, with a short-head distribution from this collection in American and English Literature. A long-tail finding showed a diversity of topics that are recommended to users in the library book stacks with IoT-powered recommendations."}
{"_id": "36e41cdfddd190d7861b91b04a515967fd1541d9", "title": "Giving too much social support: social overload on social networking sites", "text": "Received: 20 July 2012 Revised: 18 February 2013 2nd Revision: 28 June 2013 3rd Revision: 20 September 2013 4th Revision: 7 November 2013 Accepted: 1 February 2014 Abstract As the number of messages and social relationships embedded in social networking sites (SNS) increases, the amount of social information demanding a reaction from individuals increases as well. We observe that, as a consequence, SNS users feel they are giving too much social support to other SNS users. Drawing on social support theory (SST), we call this negative association with SNS usage \u2018social overload\u2019 and develop a latent variable to measure it. We then identify the theoretical antecedents and consequences of social overload and evaluate the social overload model empirically using interviews with 12 and a survey of 571 Facebook users. The results show that extent of usage, number of friends, subjective social support norms, and type of relationship (online-only vs offline friends) are factors that directly contribute to social overload while age has only an indirect effect. The psychological and behavioral consequences of social overload include feelings of SNS exhaustion by users, low levels of user satisfaction, and a high intention to reduce or even stop using SNS. The resulting theoretical implications for SST and SNS acceptance research are discussed and practical implications for organizations, SNS providers, and SNS users are drawn. European Journal of Information Systems advance online publication, 4 March 2014; doi:10.1057/ejis.2014.3; corrected online 11 March 2014"}
{"_id": "d67cd96b0a30c5d2b2703491e02ec0f7d1954bb8", "title": "A taxonomy of sequential pattern mining algorithms", "text": "Owing to important applications such as mining web page traversal sequences, many algorithms have been introduced in the area of sequential pattern mining over the last decade, most of which have also been modified to support concise representations like closed, maximal, incremental or hierarchical sequences. This article presents a taxonomy of sequential pattern-mining techniques in the literature with web usage mining as an application. This article investigates these algorithms by introducing a taxonomy for classifying sequential pattern-mining algorithms based on important key features supported by the techniques. This classification aims at enhancing understanding of sequential pattern-mining problems, current status of provided solutions, and direction of research in this area. This article also attempts to provide a comparative performance analysis of many of the key techniques and discusses theoretical aspects of the categories in the taxonomy."}
{"_id": "b24799316c69f367fccbd7ce2c4c732f9f92d1b5", "title": "Predicting consumer intentions to use on-line shopping: the case for an augmented technology acceptance model", "text": "Derived from the theory of reasoned action, the technology acceptance model (TAM) focuses on two specific salient beliefs\u2014 ease of use and usefulness. It has been applied in the study of user adoption of different technologies, and has emerged as a reliable and robust model. However, this has not discouraged researchers from incorporating additional constructs to the original model in their quest for increased predictive power. Here, an attempt is made in the context of explaining consumer intention to use on-line shopping. Besides ease of use and usefulness, compatibility, privacy, security, normative beliefs, and selfefficacy are included in an augmented TAM. A test of this model, with data collected from 281 consumers, show support for seven of nine research hypotheses. Specifically, compatibility, usefulness, ease of use, and security were found to be significant predictors of attitude towards on-line shopping, but privacy was not. Further, intention to use on-line shopping was strongly influenced by attitude toward on-line shopping, normative beliefs, and self-efficacy. # 2003 Elsevier B.V. All rights reserved."}
{"_id": "40457623fcdbaa253e2894c2f114837fde1c11e5", "title": "A new approach for sparse matrix vector product on NVIDIA GPUs", "text": "The sparse matrix vector product (SpMV) is a key operation in engineering and scientific computing and, hence, it has been subjected to intense research for a long time. The irregular computations involved in SpMV make its optimization challenging. Therefore, enormous effort has been devoted to devise data formats to store the sparse matrix with the ultimate aim of maximizing the performance. Graphics Processing Units (GPUs) have recently emerged as platforms that yield outstanding acceleration factors. SpMV implementations for NVIDIA GPUs have already appeared on the scene. This work proposes and evaluates a new implementation of SpMV for NVIDIA GPUs based on a new format, ELLPACK-R, that allows storage of the sparse matrix in a regular manner. A comparative evaluation against a variety of storage formats previously proposed has been carried out based on a representative set of test matrices. The results show that, although the performance strongly depends on the specific pattern of the matrix, the implementation based on ELLPACK-R achieves higher overall performance. Moreover, a comparison with standard state-of-the-art superscalar processors reveals that significant speedup factors are achieved with GPUs. Copyright 2010 John Wiley & Sons, Ltd."}
{"_id": "ffcb7146dce1aebf47a910b51a873cfec897d602", "title": "Fast scan algorithms on graphics processors", "text": "Scan and segmented scan are important data-parallel primitives for a wide range of applications. We present fast, work-efficient algorithms for these primitives on graphics processing units (GPUs). We use novel data representations that map well to the GPU architecture. Our algorithms exploit shared memory to improve memory performance. We further improve the performance of our algorithms by eliminating shared-memory bank conflicts and reducing the overheads in prior shared-memory GPU algorithms. Furthermore, our algorithms are designed to work well on general data sets, including segmented arrays with arbitrary segment lengths. We also present optimizations to improve the performance of segmented scans based on the segment lengths. We implemented our algorithms on a PC with an NVIDIA GeForce 8800 GPU and compared our results with prior GPU-based algorithms. Our results indicate up to 10x higher performance over prior algorithms on input sequences with millions of elements."}
{"_id": "356869aa0ae8d598e956c7f2ae884bbf5009c98c", "title": "NVIDIA Tesla: A Unified Graphics and Computing Architecture", "text": "To enable flexible, programmable graphics and high-performance computing, NVIDIA has developed the Tesla scalable unified graphics and parallel computing architecture. Its scalable parallel array of processors is massively multithreaded and programmable in C or via graphics APIs."}
{"_id": "3dcd4a259e47d171d0728e01ac71f2421ab8f7fe", "title": "Iterative methods for sparse linear systems", "text": "Sometimes we need to solve the linear equation Ax = b for a very big and very sparse A. For example, the Poisson equation where only 5 entries of each row of the matrix A are non-zero. Standard methods such as inverting the matrix A (numerically unstable) or Guass elimination do not take advantage of the sparsity of A. In this lesson we will discuss two iterative methods suitable for sparse linear systems: Jacobi and Gauss-Seidel."}
{"_id": "6a640438a4e50fa31943462eeca716413891a773", "title": "Ranking , Boosting , and Model Adaptation", "text": "We present a new ranking algorithm that combines the strengt hs of two previous methods: boosted tree classification, and LambdaR ank, which has been shown to be empirically optimal for a widely used inform ation retrieval measure. The algorithm is based on boosted regression trees , although the ideas apply to any weak learners, and it is significantly fast er in both train and test phases than the state of the art, for comparable accu racy. We also show how to find the optimal linear combination for any two ran kers, and we use this method to solve the line search problem exactly du ring boosting. In addition, we show that starting with a previously tra ined model, and boosting using its residuals, furnishes an effective techn ique for model adaptation, and we give results for a particularly pressing prob lem in Web Search training rankers for markets for which only small amounts o f labeled data are available, given a ranker trained on much more data from a larger market."}
{"_id": "d78c26f2e2fe87ea600ef6f667020fd933b8060f", "title": "Heart rate variability biofeedback increases baroreflex gain and peak expiratory flow.", "text": "OBJECTIVE\nWe evaluated heart rate variability biofeedback as a method for increasing vagal baroreflex gain and improving pulmonary function among 54 healthy adults.\n\n\nMETHODS\nWe compared 10 sessions of biofeedback training with an uninstructed control. Cognitive and physiological effects were measured in four of the sessions.\n\n\nRESULTS\nWe found acute increases in low-frequency and total spectrum heart rate variability, and in vagal baroreflex gain, correlated with slow breathing during biofeedback periods. Increased baseline baroreflex gain also occurred across sessions in the biofeedback group, independent of respiratory changes, and peak expiratory flow increased in this group, independently of cardiovascular changes. Biofeedback was accompanied by fewer adverse relaxation side effects than the control condition.\n\n\nCONCLUSIONS\nHeart rate variability biofeedback had strong long-term influences on resting baroreflex gain and pulmonary function. It should be examined as a method for treating cardiovascular and pulmonary diseases. Also, this study demonstrates neuroplasticity of the baroreflex."}
{"_id": "336222eedc745e17f85cc85141891ed3b48b9ef7", "title": "Moderating variables of music training-induced neuroplasticity: a review and discussion", "text": "A large body of literature now exists to substantiate the long-held idea that musicians' brains differ structurally and functionally from non-musicians' brains. These differences include changes in volume, morphology, density, connectivity, and function across many regions of the brain. In addition to the extensive literature that investigates these differences cross-sectionally by comparing musicians and non-musicians, longitudinal studies have demonstrated the causal influence of music training on the brain across the lifespan. However, there is a large degree of inconsistency in the findings, with discordance between studies, laboratories, and techniques. A review of this literature highlights a number of variables that appear to moderate the relationship between music training and brain structure and function. These include age at commencement of training, sex, absolute pitch (AP), type of training, and instrument of training. These moderating variables may account for previously unexplained discrepancies in the existing literature, and we propose that future studies carefully consider research designs and methodologies that control for these variables."}
{"_id": "70fa13b31906c59c0b79d8c18a0614c5aaf77235", "title": "KPB-SIFT: a compact local feature descriptor", "text": "Invariant feature descriptors such as SIFT and GLOH have been demonstrated to be very robust for image matching and object recognition. However, such descriptors are typically of high dimensionality, e.g. 128-dimension in the case of SIFT. This limits the performance of feature matching techniques in terms of speed and scalability. A new compact feature descriptor, called Kernel Projection Based SIFT (KPB-SIFT), is presented in this paper. Like SIFT, our descriptor encodes the salient aspects of image information in the feature point's neighborhood. However, instead of using SIFT's smoothed weighted histograms, we apply kernel projection techniques to orientation gradient patches. The produced KPB-SIFT descriptor is more compact as compared to the state-of-the-art, does not require pre-training step needed by PCA based descriptors, and shows superior advantages in terms of distinctiveness, invariance to scale, and tolerance of geometric distortions. We extensively evaluated the effectiveness of KPB-SIFT with datasets acquired under varying circumstances."}
{"_id": "a29f2bd2305e11d8fe139444e733e9b50ea210d6", "title": "NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study", "text": "This paper introduces a novel large dataset for example-based single image super-resolution and studies the state-of-the-art as emerged from the NTIRE 2017 challenge. The challenge is the first challenge of its kind, with 6 competitions, hundreds of participants and tens of proposed solutions. Our newly collected DIVerse 2K resolution image dataset (DIV2K) was employed by the challenge. In our study we compare the solutions from the challenge to a set of representative methods from the literature and evaluate them using diverse measures on our proposed DIV2K dataset. Moreover, we conduct a number of experiments and draw conclusions on several topics of interest. We conclude that the NTIRE 2017 challenge pushes the state-of-the-art in single-image super-resolution, reaching the best results to date on the popular Set5, Set14, B100, Urban100 datasets and on our newly proposed DIV2K."}
{"_id": "b07e04eb64705f0f1b5e6403320e0230e2eadc4d", "title": "A multiagent reinforcement learning algorithm using extended optimal response", "text": "Stochastic games provides a theoretical framework to multiagent reinforcement learning. Based on the framework, a multiagent reinforcement learning algorithm for zero-sum stochastic games was proposed by Littman and it was extended to general-sum games by Hu and Wellman. Given a stochastic game, if all agents learn with their algorithm, we can expect that the policies of the agents converge to a Nash equilibrium. However, agents with their algorithm always try to converge to a Nash equilibrium independent of the policies used by the other agents. In addition, in case there are multiple Nash equilibria, agents must agree on the equilibrium where they want to reach. Thus, their algorithm lacks adaptability in a sense. In this paper, we propose a multiagent reinforcement learning algorithm. The algorithm uses the extended optimal response which we introduce in this paper. It will converge to a Nash equilibrium when other agents are adaptable, otherwise it will make an optimal response. We also provide some empirical results in three simple stochastic games, which show that the algorithm can realize what we intend."}
{"_id": "5c89841a22541d66f7bad32da6de2991b2d298ef", "title": "Applying MIPVU Metaphor Identification Procedure on Czech", "text": "This paper represents the current state of the research project aimed at modifying the MIPVU protocol for metaphor annotation for usage on Czech-language texts. Three annotators were trained to use metaphor identification procedure MIPVU and annotated 2 short text excerpts of about 600 tokens length, then the reliability of annotation was measured using Fleiss\u2019 kappa. The resultant interannotator agreement of 0.70 was below kappa values reported by annotators of VU Amsterdam Metaphor Corpus (Steen et al., 2010) and very similar to the agreement that researchers (Badryzlova et al., 2013) got in their first reliability test with unmodified MIPVU procedure applied on Russian texts. Some modifications of the annotation procedure are proposed in order for it to be more suitable for Czech language. The modifications are based on the observations made by annotators in error analysis and by authors of similar projects aimed to transfer MIPVU procedure to Slavic/inflected languages. The functionality of the annotation procedure refinements now have to be tested in the second reliability test."}
{"_id": "fd88e20c0ddaa0964bfbb8a9d985475dadf994eb", "title": "A Novel Phase-Shift Control of Semibridgeless Active Rectifier for Wireless Power Transfer", "text": "A novel phase-shift control of a semibridgeless active rectifier (S-BAR) is investigated in order to utilize the S-BAR in wireless energy transfer applications. The standard receiver-side rectifier topology is developed by replacing rectifier lower diodes with synchronous switches controlled by a phase-shifted PWM signal. Theoretical and simulation results show that with the proposed control technique, the output quantities can be regulated without communication between the receiver and transmitter. To confirm the performance of the proposed converter and control, experimental results are provided using 8-, 15-, and 23-cm air gap coreless transformer which has dimension of 76 cm \u00d7 76 cm, with 120-V input and the output power range of 0 to 1kW with a maximum efficiency of 94.4%."}
{"_id": "3106bf5bb397125cb390daa006cb6f0fca853d6c", "title": "Creating Social Contagion Through Viral Product Design: A Randomized Trial of Peer Influence in Networks", "text": "We examine how firms can create word-of-mouth peer influence and social contagion by designing viral features into their products and marketing campaigns. Word-of-mouth (WOM) is generally considered to be more effective at promoting product contagion when it is personalized and active. Unfortunately, the relative effectiveness of different viral features has not been quantified, nor has their effectiveness been definitively established, largely because of difficulties surrounding econometric identification of endogenous peer effects. We therefore designed a randomized field experiment on a popular social networking website to test the effectiveness of a range of viral messaging capabilities in creating peer influence and social contagion among the 1.4 million friends of 9,687 experimental users. Overall, we find that viral product design features can indeed generate econometrically identifiable peer influence and social contagion effects. More surprisingly, we find that passive-broadcast viral messaging generates a 246% increase in local peer influence and social contagion effects, while adding active-personalized viral messaging only generates an additional 98% increase in contagion. Although active-personalized messaging is more effective in encouraging adoption per message and is correlated with more user engagement and sustained product use, passivebroadcast messaging is used more often enough to eclipse those benefits, generating more total peer adoption in the network. In addition to estimating the effects of viral product design on social contagion and product diffusion, our work also provides a model for how randomized trials can be used to identify peer influence effects in networks."}
{"_id": "65ca6f17a7972fae19b12efdc88c9c9d6d0cf2e8", "title": "Multi-factor Authentication as a Service", "text": "An architecture for providing multi-factor authentication as a service is proposed, resting on the principle of a loose coupling and separation of duties between network entities and end user devices. The multi-factor authentication architecture leverages Identity Federation and Single-Sign-On technologies, such as the OpenID framework, in order to provide for a modular integration of various factors of authentication. The architecture is robust and scalable enabling service providers to define risk-based authentication policies by way of assurance level requirements, which map to concrete authentication factor capabilities on user devices."}
{"_id": "c4e416e2b5683306aebd07eb7d8854e6375a57bb", "title": "Trajectory planning for car-like vehicles: A modular approach", "text": "We consider the problem of trajectory planning with geometric constraints for a car-like vehicle. The vehicle is described by its dynamic model, considering such effects as lateral slipping and aerodynamic drag. We propose a modular solution, where three different problems are identified and solved by specific modules. The execution of the three modules can be orchestrated in different ways in order to produce efficient solutions to a variety of trajectory planning problems (e.g., obstacle avoidance, or overtake). As a specific example, we show how to generate the optimal lap on a specified racing track. The numeric examples provided in the paper are good witnesses of the effectiveness of our strategy."}
{"_id": "219563417819f1129cdfcfa8a75c03d074577be1", "title": "Annotating Opinions in the World Press", "text": "In this paper we present a detailed scheme for annotating expressions of opinions, beliefs, emotions, sentiment and speculation (private states) in the news and other discourse. We explore inter-annotator agreement for individual private state expressions, and show that these low-level annotations are useful for producing higher-level subjective sentence annotations."}
{"_id": "7bcc937e81d135061f1d4f7a466e908e0234093a", "title": "Behavioral and Neural Adaptation in Approach Behavior", "text": "People often make approachability decisions based on perceived facial trustworthiness. However, it remains unclear how people learn trustworthiness from a population of faces and whether this learning influences their approachability decisions. Here we investigated the neural underpinning of approach behavior and tested two important hypotheses: whether the amygdala adapts to different trustworthiness ranges and whether the amygdala is modulated by task instructions and evaluative goals. We showed that participants adapted to the stimulus range of perceived trustworthiness when making approach decisions and that these decisions were further modulated by the social context. The right amygdala showed both linear response and quadratic response to trustworthiness level, as observed in prior studies. Notably, the amygdala's response to trustworthiness was not modulated by stimulus range or social context, a possible neural dynamic adaptation. Together, our data have revealed a robust behavioral adaptation to different trustworthiness ranges as well as a neural substrate underlying approach behavior based on perceived facial trustworthiness."}
{"_id": "72691b1adb67830a58bebdfdf213a41ecd38c0ba", "title": "Clearing the Skies: A Deep Network Architecture for Single-Image Rain Removal", "text": "We introduce a deep network architecture called DerainNet for removing rain streaks from an image. Based on the deep convolutional neural network (CNN), we directly learn the mapping relationship between rainy and clean image detail layers from data. Because we do not possess the ground truth corresponding to real-world rainy images, we synthesize images with rain for training. In contrast to other common strategies that increase depth or breadth of the network, we use image processing domain knowledge to modify the objective function and improve deraining with a modestly sized CNN. Specifically, we train our DerainNet on the detail (high-pass) layer rather than in the image domain. Though DerainNet is trained on synthetic data, we find that the learned network translates very effectively to real-world images for testing. Moreover, we augment the CNN framework with image enhancement to improve the visual results. Compared with the state-of-the-art single image de-raining methods, our method has improved rain removal and much faster computation time after network training."}
{"_id": "48320a4be9cc741fdb28ad72f359c449e41309cc", "title": "Manga109 dataset and creation of metadata", "text": "We have created Manga109, a dataset of a variety of 109 Japanese comic books publicly available for use for academic purposes. This dataset provides numerous comic images but lacks the annotations of elements in the comics that are necessary for use by machine learning algorithms or evaluation of methods. In this paper, we present our ongoing project to build metadata for Manga109. We first define the metadata in terms of frames, texts and characters. We then present our web-based software for efficiently creating the ground truth for these images. In addition, we provide a guideline for the annotation with the intent of improving the quality of the metadata."}
{"_id": "b743dafa3dcb8924244c14f0a719cde5e93d9155", "title": "Efficient 3D LIDAR based loop closing using deep neural network", "text": "Loop closure detection in 3D LIDAR data is an essential but challenging problem in SLAM system. It is important to reduce global inconsistency or re-localize the robot that loses the localization, while is difficult for the lack of prior information. We present a semi-handcrafted representation learning method for LIDAR point cloud using siamese convolution neural network, which states the loop closure detection to a similarity modeling problem. With the learned representation, the similarity between two LIDAR scans is transformed as the Euclidean distance between the representations respectively. Based on it, we furthermore establish kd-tree to accelerate the searching of similar scans. To demonstrate the performance and effectiveness of the proposed method, the KITTI dataset is employed for comparison with other LIDAR loop closure detection methods. The result shows that our method can achieve both higher accuracy and efficiency."}
{"_id": "4ea80c206b8ad73a6d320c9d8ed0321d84fe6d85", "title": "Recursive Neural Networks for Learning Logical Semantics", "text": "Supervised recursive neural network models (RNNs) for sent ence meaning have been successful in an array of sophisticated language tasks , but it remains an open question whether they can learn compositional semantic gra mm rs that support logical deduction. We address this question directly by for the first time evaluating whether each of two classes of neural model \u2014 plain RNNs and re cursive neural tensor networks (RNTNs) \u2014 can correctly learn relationship such as entailment and contradiction between pairs of sentences, where we have generated controlled data sets of sentences from a logical grammar. Our first exper iment evaluates whether these models can learn the basic algebra of logical r elations involved. Our second and third experiments extend this evaluation to c omplex recursive structures and sentences involving quantification. We find t hat he plain RNN achieves only mixed results on all three experiments, where as the stronger RNTN model generalizes well in every setting and appears capable of learning suitable representations for natural language logical inference."}
{"_id": "40d5019ecad67026db8bb4a813719f1d48222405", "title": "Analysis of dynamically scheduled tile algorithms for dense linear algebra on multicore architectures", "text": "The objective of this paper is to analyze the dynamic scheduling of dense linear algebra algorithms on shared-memory, multicore architectures. Current numerical libraries, e.g., LAPACK, show clear limitations on such emerging systems mainly due to their coarse granularity tasks. Thus, many numerical algorithms need to be redesigned to better fit the architectural design of the multicore platform. The PLASMA library (Parallel Linear Algebra for Scalable Multi-core Architectures) developed at the University of Tennessee tackles this challenge by using tile algorithms to achieve a finer task granularity. These tile algorithms can then be represented by Directed Acyclic Graphs (DAGs), where nodes are the tasks and edges are the dependencies between the tasks. The paramount key to achieve high performance is to implement a runtime environment to efficiently schedule the DAG across the multicore platform. This paper studies the impact on the overall performance of some parameters, both at the level of the scheduler, e.g., window size and locality, and the algorithms, e.g., Left Looking (LL) and Right Looking (RL) variants. The conclusion of this study claims that some commonly accepted rules for dense linear algebra algorithms may need to be revisited."}
{"_id": "36340f1d2f1d6edb1303905225ef8dd7bd2b5ab6", "title": "Underwater electric field measurement and analysis", "text": "The University of Idaho (UI) is developing electromagnetic field sensor systems that are attachable to autonomous underwater vehicles with the intent of taking survey measurements in underwater ocean environments. This paper presents the testing of these sensors and compares measurements to predictions. Testing was conducted off the coast of Florida with a moving artificial electric field source and an electric field sensor equipped AUV. At the closest pass, the peak value of the AUV-acquired electric field was consistent with the predicted field of the artificial source when the accuracy of AUV position was known to within \u223c1 m."}
{"_id": "2c75342fc4d809f4e84b997a4a605c985fdbe0fb", "title": "259 Second ring-down time and 4.45 million quality factor in 5.5 kHz fused silica birdbath shell resonator", "text": "The fused silica birdbath (BB) resonator is an axisymmetric 3D shell resonator that could be used in high-performance MEMS gyroscopes. We report a record quality factor (0 for a 5-mm-radius resonator, which is expected to reduce gyroscope bias drift. We present measurement results for two sizes of resonators with long vibratory decay time constants (\u03c4), high Qs, and low damping asymmetry (\u0394\u03c4\u22121) between their n = 2 wine-glass (WG) modes. We find a reduction in damping for larger resonators and a correlation between higher Q and lower as well as evidence of a lower bound on Q for resonators with low damping asymmetry."}
{"_id": "3981e2fc542b439a4e3f10c33f4fca2a445aa1c7", "title": "Feeling Love and Doing More for Distant Others : Specific Positive Emotions Differentially Affect Prosocial Consumption", "text": "Marketers often employ a variety of positive emotions to encourage consumption or promote a particular behavior (e.g., buying, donating, recycling) to benefit an organization or cause. The authors show that specific positive emotions do not universally increase prosocial behavior but, rather, encourage different types of prosocial behavior. Four studies show that whereas positive emotions (i.e., love, hope, pride, and compassion) all induce prosocial behavior toward close entities (relative to a neutral emotional state), only love induces prosocial behavior toward distant others and international organizations. Love\u2019s effect is driven by a distinct form of broadening, characterized by extending feelings of social connection and the boundary of caring to be more inclusive of others regardless of relatedness. Love\u2014as a trait and a momentary emotion\u2014 is unique among positive emotions in fostering connectedness that other positive emotions (hope and pride) do not and broadening behavior in a way that other connected emotions (compassion) do not. This research contributes to the broaden-and-build theory of positive emotion by demonstrating a distinct type of broadening for love and adds an important qualification to the general finding that positive emotions uniformly encourage prosocial behavior."}
{"_id": "e2b7a371b7cfb5f2bdc2abeb41397aee03fd5480", "title": "On Compiling CNF into Decision-DNNF", "text": "Decision-DNNF is a strict subset of decomposable negation normal form (DNNF) that plays a key role in analyzing the complexity of model counters (the searches performed by these counters have their traces in Decision-DNNF). This paper presents a number of results on Decision-DNNF. First, we introduce a new notion of CNF width and provide an algorithm that compiles CNFs into Decision-DNNFs in time and space that are exponential only in this width. The new width strictly dominates the treewidth of the CNF primal graph: it is no greater and can be bounded when the treewidth of the primal graph is unbounded. This new result leads to a tighter bound on the complexity of model counting. Second, we show that the output of the algorithm can be converted in linear time to a sentential decision diagram (SDD), which leads to a tighter bound on the complexity of compiling CNFs into SDDs."}
{"_id": "34d1ba9476ae474f1895dbd84e8dc82b233bc32e", "title": "A Redundancy Dual-backup Method of the Battery Pack for the Two-wheeled Self-balanced Vehicle", "text": ""}
{"_id": "35c931611b9ceab623839ce4f5c9000eaa41506a", "title": "Automatic mood detection of indian music using mfccs and k-means algorithm", "text": "This paper proposes a method of identifying the mood underlying a piece of music by extracting suitable and robust features from music clip. To recognize the mood, K-means clustering and global thresholding was used. Three features were amalgamated to decide the mood tag of the musical piece. Mel frequency cepstral coefficients, frame energy and peak difference are the features of interest. These features were used for clustering and further achieving silhouette plot which formed the basis of deciding the limits of threshold for classification. Experiments were performed on a database of audio clips of various categories. The accuracy of the mood extracted is around 90% indicating that the proposed technique provides encouraging results."}
{"_id": "5176f5207ba955fa9a0927a242bdb7ec3cd319c0", "title": "Synthetic aperture radar interferometry", "text": "Synthetic aperture radar interferometry is an imaging technique for measuring the topography of a surface, its changes over time, and other changes in the detailed characteristic of the surface. By exploiting the phase of the coherent radar signal, interferometry has transformed radar remote sensing from a largely interpretive science to a quantitative tool, with applications in cartography, geodesy, land cover characterization, and natural hazards. This paper reviews the techniques of interferometry, systems and limitations, and applications in a rapidly growing area of science and engineering."}
{"_id": "71731832d54d299a8582b22f78fd5eaf80c78589", "title": "The evolution of synthetic aperture radar systems and their progression to the EOS SAR", "text": "The spaceborne imaging radar program of the National Aeronautics and Space Administration (NASA) has evolved primarily through Seasat and the Shuttle Imaging Radar (SIR) to a multifrequency, multipolarization system capable of monitoring the Earth with a variety of imaging geometries, resolutions, and swaths. In particular, the ability to digitally process and calibrate the data has been a key factor in developing the algorithms which will operationally provide geophysical and biophysical information about the surface of the Earth using synthetic aperture radar (SAR). This paper describes the evolution of the spaceborne imaging radar starting with the Seasat SAR, through the SIR-A, SIR-B, and SIR-CI X-SAR missions, to the EOS SAR which is scheduled for launch as part of the Earth Observing System (EOS) at the end of the 1990s. A summary of the planned international missions, which may produce a permanent active microwave capability in space starting as early as 1991, is also presented, along with a description of the airborne systems which will be essential to the algorithm development and long-term calibration of the spaceborne data. Finally, a brief summary of the planetary missions utilizing SAR and a comparison of their imaging capabilities with those available on Earth is presented."}
{"_id": "65b6c31bccadf0bfb0427929439f2ae1dc2b45bc", "title": "On the derivation of coseismic displacement fields using differential radar interferometry: The Landers earthquake", "text": "We present a map of the coseismic displacement field resulting from the Landers, California, June 28, 1992, earthquake derived using data acquired from an orbiting high-resolution radar system. We achieve results more accurate than previous space studies and similar in accuracy to those obtained by conventional field survey techniques. Data from the ERS 1 synthetic aperture radar instrument acquired in April, July, and August 1992 are used to generate a high-resolution, wide area map of the displacements. The data represent the motion in the direction of the radar line of sight to centimeter level precision of each 30-m resolution element in a 113 km by 90 km image. Our coseismic displacement contour map gives a lobed pattern consistent with theoretical models of the displacement field from the earthquake. Fine structure observed as displacement tiling in regions several kilometers from the fault appears to be the result of local surface fracturing. Comparison of these data with Global Positioning System and electronic distance measurement survey data yield a correlation of 0.96; thus the radar measurements are a means to extend the point measurements acquired by traditional techniques to an area map format. The technique we use is (1) more automatic, (2) more precise, and (3) better validated than previous similar applications of differential radar interferometry. Since we require only remotely sensed satellite data with no additional requirements for ancillary information, the technique is well suited for global seismic monitoring and analysis."}
{"_id": "daa549ac4fcf598f6c7016656f926978734096ac", "title": "Accuracy of topographic maps derived from ERS-1 interferometric radar", "text": "A interferometric radar technique for topographic mapping of surfaces promises a high-resolution approach to the generation of digital elevation models. We present here analyses of data collected by the synthetic aperture radar instrument on-board the ERS-1 satellite on successive orbits. Use of a single satellite in a nearly repeating orbit is attractive for reducing cost and spaceborne hardware complexity; also it permits inference of changes in the surface from the correlation properties of the radar echoes. The data have been reduced to correlation maps and digital elevation models. The correlation maps show that temporal correlation decreases significantly with time, but not necessarily at a constant well-defined rate, likely depending on environmental factors. When correlation among passes remains high, however, it is possible to form digital elevation models. Analyses of noise expected in ERS-1 interferometric data collected over Alaska and the southwestern United States indicate that maps with relative errors less than 5 m rms are possible in some regions. However, orbit uncertainties imply that tie points are required in order to reduce absolute height errors to a similar magnitude. We find that about 6 tie points per 40 X 40 km scene with 5 m rms or better height accuracy are needed to keep systematic map height errors below 5 m rms. The performance of the ERS-1 radar system for topographic applications, though useful for a variety of regional and local discipline studies, may be improved with respect to temporal decorrelation errors and absolute height acuity by modifying the orbit repeat period and incorporating precise orbit determination techniques. The resulting implementation will meet many, but not all, objectives of a global mapping mission."}
{"_id": "e40446ad0153b3703f8578a4bb10b551643f8d1e", "title": "Satellite radar interferometry for monitoring ice sheet motion: application to an antarctic ice stream.", "text": "Satellite radar interferometry (SRI) provides a sensitive means of monitoring the flow velocities and grounding-line positions of ice streams, which are indicators of response of the ice sheets to climatic change or internal instability. The detection limit is about 1.5 millimeters for vertical motions and about 4 millimeters for horizontal motions in the radar beam direction. The grounding line, detected by tidal motions where the ice goes afloat, can be mapped at a resolution of approximately 0.5 kilometer. The SRI velocities and grounding line of the Rutford Ice Stream, Antarctica, agree fairly well with earlier ground-based data. The combined use of SRI and other satellite methods is expected to provide data that will enhance the understanding of ice stream mechanics and help make possible the prediction of ice sheet behavior."}
{"_id": "4b34c6fa16e48b0e5fc5b6c32514d4405382bbd3", "title": "Towards the enhancement of e-democracy: identifying the notion of the 'middleman paradox'", "text": "The challenge towards e-democracy, through the electronic transformation of political systems, has become increasingly evident within developed economies. It is regarded as an approach for increased and better quality citizen participation in the democratic processes. E-democracy forms a component of overall e-government initiatives where technology adoption and diffusion, to enhance wider access to, and the delivery of, government services, are apparent. However, previous research demonstrates that very few e-democracy proposals survive the stage of formal political decision-making to become substantive egovernment projects within national or international agendas. Furthermore, the implementation of e-democracy projects is undertaken at a much slower pace and with dramatically less support than the implementation of other, so-called e-administration, activities in the public sector. The research in this paper considers the notion of the \u2018middleman paradox\u2019, presenting theoretical and empirical evidence that further investigates the phenomenon associated with potential e-democracy improvements. Specifically, the paper adds a new dimension to existing theories on the hesitant evolution of e-democracy that clearly identifies politicians as an inhibiting factor. Proposals are made for an enhancement of these processes, and suggestions for further applicable research are demonstrated."}
{"_id": "48444fdfe803bb62d3f1a5595f5821954f03b6ea", "title": "Burden of Depressive Disorders by Country, Sex, Age, and Year: Findings from the Global Burden of Disease Study 2010", "text": "BACKGROUND\nDepressive disorders were a leading cause of burden in the Global Burden of Disease (GBD) 1990 and 2000 studies. Here, we analyze the burden of depressive disorders in GBD 2010 and present severity proportions, burden by country, region, age, sex, and year, as well as burden of depressive disorders as a risk factor for suicide and ischemic heart disease.\n\n\nMETHODS AND FINDINGS\nBurden was calculated for major depressive disorder (MDD) and dysthymia. A systematic review of epidemiological data was conducted. The data were pooled using a Bayesian meta-regression. Disability weights from population survey data quantified the severity of health loss from depressive disorders. These weights were used to calculate years lived with disability (YLDs) and disability adjusted life years (DALYs). Separate DALYs were estimated for suicide and ischemic heart disease attributable to depressive disorders. Depressive disorders were the second leading cause of YLDs in 2010. MDD accounted for 8.2% (5.9%-10.8%) of global YLDs and dysthymia for 1.4% (0.9%-2.0%). Depressive disorders were a leading cause of DALYs even though no mortality was attributed to them as the underlying cause. MDD accounted for 2.5% (1.9%-3.2%) of global DALYs and dysthymia for 0.5% (0.3%-0.6%). There was more regional variation in burden for MDD than for dysthymia; with higher estimates in females, and adults of working age. Whilst burden increased by 37.5% between 1990 and 2010, this was due to population growth and ageing. MDD explained 16 million suicide DALYs and almost 4 million ischemic heart disease DALYs. This attributable burden would increase the overall burden of depressive disorders from 3.0% (2.2%-3.8%) to 3.8% (3.0%-4.7%) of global DALYs.\n\n\nCONCLUSIONS\nGBD 2010 identified depressive disorders as a leading cause of burden. MDD was also a contributor of burden allocated to suicide and ischemic heart disease. These findings emphasize the importance of including depressive disorders as a public-health priority and implementing cost-effective interventions to reduce its burden. Please see later in the article for the Editors' Summary."}
{"_id": "3ea6601e24f37697ed0a3e9018f76952625ceb10", "title": "Enhance emotional and social adaptation skills for children with autism spectrum disorder: A virtual reality enabled approach", "text": "Deficits in social-emotional reciprocity, one of the diagnostic criteria of Autism Spectrum Disorder (ASD), greatly hinders children with ASD from responding appropriately and adapting themselves in various social situations. Although evidences have shown that virtual reality environment is a promising tool for emotional and social adaptation skills training on ASD population, there is a lack of large-scale trials with intensive evaluations to support such findings. This paper presents a virtual reality enabled program for enhancing emotional and social adaptation skills for children with ASD. Six unique learning scenarios, of which one focuses on emotion control and relaxation strategies, four that simulate various social situations, and one that facilitates consolidation and generalization, are designed and developed with corresponding psychoeducation procedures and protocols. The learning scenarios are presented to the children via a 4-side immersive virtual reality environment (a.k.a., half-CAVE) with non-intrusive motion tracking. A total number of 94 children between the ages of 6 to 12 with clinical diagnosis of ASD participated in the 28-session program that lasted for 14 weeks. By comparing preand post-assessments, results reported in this paper show significant improvements in the project\u2019s primary measures on children\u2019s emotion expression and regulation and social-emotional reciprocity but not on other secondary measures."}
{"_id": "6627b056d128e6c1d8e0b3eb7d2471c23ef99547", "title": "Real time Sign Language Recognition using PCA", "text": "The Sign Language is a method of communication for deaf-dumb people. This paper presents the Sign Language Recognition system capable of recognizing 26 gestures from the Indian Sign Language by using MATLAB. The proposed system having four modules such as: pre-processing and hand segmentation, feature extraction, sign recognition and sign to text and voice conversion. Segmentation is done by using image processing. Different features are extracted such as Eigen values and Eigen vectors which are used in recognition. The Principle Component Analysis (PCA) algorithm was used for gesture recognition and recognized gesture is converted into text and voice format. The proposed system helps to minimize communication barrier between deaf-dumb people and normal people."}
{"_id": "b8061918d0edbb5cd3042aa3d8b6ce7becc1d961", "title": "Noninvasive diagnosis of fetal aneuploidy by shotgun sequencing DNA from maternal blood.", "text": "We directly sequenced cell-free DNA with high-throughput shotgun sequencing technology from plasma of pregnant women, obtaining, on average, 5 million sequence tags per patient sample. This enabled us to measure the over- and underrepresentation of chromosomes from an aneuploid fetus. The sequencing approach is polymorphism-independent and therefore universally applicable for the noninvasive detection of fetal aneuploidy. Using this method, we successfully identified all nine cases of trisomy 21 (Down syndrome), two cases of trisomy 18 (Edward syndrome), and one case of trisomy 13 (Patau syndrome) in a cohort of 18 normal and aneuploid pregnancies; trisomy was detected at gestational ages as early as the 14th week. Direct sequencing also allowed us to study the characteristics of cell-free plasma DNA, and we found evidence that this DNA is enriched for sequences from nucleosomes."}
{"_id": "72405632d3054f1ccb32ea102b1c159145413e39", "title": "Design and Implementation of Phalanges Based on Iteration of a Prosthesis", "text": "In this work an electromechanical hand is implemented based on the study and implementation of the phalanges of the five fingers of the hand, a control system based on a microcontroller was implemented that takes as feedback signals coming from flexosensors and strain gauges. The movement of the fingers was made based on myoelectric signals in real time, these signals were acquired through an innovative method of acquisition based on patterns. The tests performed were measurements with extended hand, closed hand and pressing an object that validates the present proposal."}
{"_id": "8f2eb4dfe08f00faa0567c88b5ef1019c08714e3", "title": "Resilience and Sustainable Development : Theory of resilience , systems thinking and adaptive governance", "text": "\uf0b7 Basic information on SD \uf0b7 Country profiles \uf0b7 Quarterly reports \uf0b7 Policy briefs \uf0b7 Case studies \uf0b7 Conference papers \uf0b7 Workshop papers \uf0b7 Getting in touch with us The European Sustainable Development Network (ESDN) is an informal network of public administrators and other experts who deal with sustainable development strategies and policies. The network covers all 27 EU Member States, plus other European countries. The ESDN is active in promoting sustainable development and facilitating the exchange of good practices in Europe and gives advice to policy-makers at the European and national levels. This ESDN Quarterly Report (QR) provides a condensed overview of the concept of resilience. Despite the complexity of the theory behind resilience, this QR tries to communicate the main notions behind this concept in a way that is understandable and not overly technical. The intention of this QR is to serve as a guide through the concept of resilience. The report does not aim at being exhaustive but intends to provide an overview on the links which are particularly relevant for sustainable development (SD) in general and SD governance in particular. A multitude of diverse sources have been used, mainly from the academic literature. It has to be mentioned the significant and decisive role that the Resilience Alliance has in providing extensive knowledge: the website that they are running, is an exceptionally good source of information for those who are interested and want to deepen their knowledge of resilience. Additionally, among all the scientific publications cited throughout the report, a special mention goes to the book by Walker and Salt (2006) entitled \" Resilience thinking: sustaining ecosystems and people in a changing world \" , which is very much suggested as a practical source of information on resilience. As a first chapter, an executive summary is provided with a short overview of the report with the essential notions that are depicted throughout the QR. The second chapter then introduces the concept of resilience and gives an extensive background on the notions behind it. It intends to provide guidance, especially to understand the linkages between the concept of resilience and sustainable development, and the importance of resilience and systems thinking for policy-makers and for those who work on SD governance. The third chapter summarizes the relationships among resilience, society, governance and policy. Therefore, the concept of 'adaptive governance' is advanced as a more appropriate way to deal with \u2026"}
{"_id": "a6c8433d244b388b7ebbdeb81613a863dbbc532a", "title": "GRID-CLUSTERING: A FAST HIERARCHICAL CLUSTERING METHOD FOR VERY LARGE DATA SETS", "text": "This paper presents a new approach to hierarchical clustering of very large data sets, named GridClustering. The method organizes unlike the conventional methods the space surrounding the patterns and not the patterns. It uses a multidimensional grid data structure. The resulting block partitioning of the value space is clustered via a topological neighbor search. The Grid-Clustering method is able to deliver structural pattern distribution information for very large data sets. It superceeds all conventional hierarchical algorithms in runtime behavior and memory space requirements. The algorithm was analyzed within a testbed and suitable values for the tunable parameters of the algorithm are proposed. A comparison of the executions times to other commonly used clustering algorithms and a heuristic runtime analysis is given."}
{"_id": "7216da0fbb285a5de12c860ad6033087ce9392a4", "title": "Optimal Parallel Algorithms for Computing the Sum, the Prefix-Sums, and the Summed Area Table on the Memory Machine Models", "text": "The main contribution of this paper is to show optimal parallel algorithms to compute the sum, the prefix-sums, and the summed area table on two memory machine models, the Discrete Memory Machine (DMM) and the Unified Memory Machine (UMM). The DMM and the UMM are theoretical parallel computing models that capture the essence of the shared memory and the global memory of GPUs. These models have three parameters, the number p of threads, and the width w of the memory, and the memory access latency l. We first show that the sum of n numbers can be computed in O(n w + nl p + l log n) time units on the DMM and the UMM. We then go on to show that \u03a9(n w + nl p + l log n) time units are necessary to compute the sum. We also present a parallel algorithm that computes the prefix-sums of n numbers in O( n w + nl p + l log n) time units on the DMM and the UMM. Finally, we show that the summed area table of size \u221a n \u00d7 \u221an can be computed in O( n w + nl p + l log n) time units on the DMM and the UMM. Since the computation of the prefix-sums and the summed area table is at least as hard as the sum computation, these parallel algorithms are also optimal. key words: Memory machine models, prefix-sums computation, parallel algorithm, GPU, CUDA"}
{"_id": "a1ea65c00fe1fb7d2c6f494a3114bf52ddbd5401", "title": "From checking on to checking in: designing for low socio-economic status older adults", "text": "In this paper we describe the design evolution of a novel technology that collects and displays presence information to be used in the homes of older adults. The first two iterations, the Ambient Plant and Presence Clock, were designed for higher socio-economic status (SES) older adults, whereas the Check-In Tree was designed for low SES older adults. We describe how feedback from older adult participants drove our design decisions, and give an in-depth account of how the Check-In Tree evolved from concept to a final design ready for in situ deployment."}
{"_id": "bcc849022cf59b0d012a795178e4bdff3a9c18f7", "title": "Design of dual-polarized waveguide slotted antenna array for Ka-band application", "text": "A dual polarized waveguide slotted array for Ka band application is designed. Vertical polarization (VP) is realized by longitudinal slots on broad wall of the ridge waveguide sub array, while horizontal polarization (HP) is realized by tilted slot pairs in the narrow wall of the rectangular waveguide sub array. The feed networks are also designed for each polarization. Especially that for VP, a method to improve the coupling is presented. The antenna has low cross-polarization level and high degree of isolation between two ports."}
{"_id": "27862ec7411fd4e7768e6a4e2da699fe09ea3a06", "title": "Analysis and Support of Lifestyle via Emotions Using Social Media", "text": "Using recent insights from Cognitive, Affective and Social Neuroscience this paper addresses how affective states in social interactions can be used through social media to analyze and support behaviour for a certain lifestyle. A computational model is provided integrating mechanisms for the impact of one\u2019s emotions on behaviour, and for the impact of emotions of others on one\u2019s own emotion. The model is used to reason about and assess the state of a user with regard to a lifestyle goal (such as exercising frequently), based on extracted information about emotions exchanged in social interaction. Support is provided by proposing ways to affecting these social interactions, thereby indirectly influencing the impact of the emotions of others. An ambient intelligent system incorporating this has been implemented for the social medium Twitter."}
{"_id": "052aef133dec5629446f1f6f46f294c52f07f123", "title": "Is Proof More Cost-Effective Than Testing?", "text": "\u00d0This paper describes the use of formal development methods on an industrial safety-critical application. The Z notation was used for documenting the system specification and part of the design, and the SPARK subset of Ada was used for coding. However, perhaps the most distinctive nature of the project lies in the amount of proof that was carried out: proofs were carried out both at the Z level\u00d0approximately 150 proofs in 500 pages\u00d0and at the SPARK code level\u00d0approximately 9,000 verification conditions generated and discharged. The project was carried out under UK Interim Defence Standards 00-55 and 00-56, which require the use of formal methods on safety-critical applications. It is believed to be the first to be completed against the rigorous demands of the 1991 version of these standards. The paper includes comparisons of proof with the various types of testing employed, in terms of their efficiency at finding faults. The most striking result is that the Z proof appears to be substantially more efficient at finding faults than the most efficient testing phase. Given the importance of early fault detection, we believe this helps to show the significant benefit and practicality of large-scale proof on projects of this kind. Index Terms\u00d0Safety-critical software, formal specification, SPARK, specification proof, code proof, proof vs. testing, industrial case study."}
{"_id": "49104b50403c410ba43bba458cd065ee9e05772b", "title": "An Immune Genetic Algorithm for Software Test Data Generation", "text": "This paper aims at incorporating immune operators in genetic algorithms as an advanced method for solving the problem of test data generation. The new proposed hybrid algorithm is called immune genetic algorithm (IGA). A full description of this algorithm is presented before investigating its application in the context of software test data generation using some benchmark programs. Moreover, the algorithm is compared with other evolutionary algorithms."}
{"_id": "9cb813f60a762795558e9d5621efc8afd6363d35", "title": "High speed and low power preset-able modified TSPC D flip-flop design and performance comparison with TSPC D flip-flop", "text": "Positron emission tomography (PET) is a nuclear functional imaging technique that produces a three-dimensional image of functional organs in the body. PET requires high resolution, fast and low power multichannel analog to digital converter (ADC). A typical multichannel ADC for PET scanner architecture consists of several blocks. Most of the blocks can be designed by using fast, low power D flip-flops. A preset-able true single phase clocked (TSPC) D flip-flop shows numerous glitches (noise) at the output due to unnecessary toggling at the intermediate nodes. Preset-able modified TSPC (MTSPC) D flip-flop have been proposed as an alternative solution to alleviate this problem. However, the MTSPC D flip-flop requires one extra PMOS to suspend toggling of the intermediate nodes. In this work, we designed a 7-bit preset-able gray code counter by using the proposed D flip-flop. This work involves UMC 180 nm CMOS technology for preset-able 7-bit gray code counter where we achieved 1 GHz maximum operation frequency with most significant bit (MSB) delay 0.96 ns, power consumption 244.2 \u03bcW (micro watt) and power delay product (PDP) 0.23 pJ (Pico joule) from 1.8 V power supply."}
{"_id": "147668a915f6a7ae3acd7ea79bd7548de9e0555b", "title": "Multifocal planes head-mounted displays.", "text": "Stereoscopic head-mounted displays (HMD's) provide an effective capability to create dynamic virtual environments. For a user of such environments, virtual objects would be displayed ideally at the appropriate distances, and natural concordant accommodation and convergence would be provided. Under such image display conditions, the user perceives these objects as if they were objects in a real environment. Current HMD technology requires convergent eye movements. However, it is currently limited by fixed visual accommodation, which is inconsistent with real-world vision. A prototype multiplanar volumetric projection display based on a stack of laminated planes was built for medical visualization as discussed in a paper presented at a 1999 Advanced Research Projects Agency workshop (Sullivan, Advanced Research Projects Agency, Arlington, Va., 1999). We show how such technology can be engineered to create a set of virtual planes appropriately configured in visual space to suppress conflicts of convergence and accommodation in HMD's. Although some scanning mechanism could be employed to create a set of desirable planes from a two-dimensional conventional display, multiplanar technology accomplishes such function with no moving parts. Based on optical principles and human vision, we present a comprehensive investigation of the engineering specification of multiplanar technology for integration in HMD's. Using selected human visual acuity and stereoacuity criteria, we show that the display requires at most 27 equally spaced planes, which is within the capability of current research and development display devices, located within a maximal 26-mm-wide stack. We further show that the necessary in-plane resolution is of the order of 5 microm."}
{"_id": "dbf86b0708407de8d90cdb3d35773d437e0bfe74", "title": "The dynamic kinetochore-microtubule interface.", "text": "The kinetochore is a control module that both powers and regulates chromosome segregation in mitosis and meiosis. The kinetochore-microtubule interface is remarkably fluid, with the microtubules growing and shrinking at their point of attachment to the kinetochore. Furthermore, the kinetochore itself is highly dynamic, its makeup changing as cells enter mitosis and as it encounters microtubules. Active kinetochores have yet to be isolated or reconstituted, and so the structure remains enigmatic. Nonetheless, recent advances in genetic, bioinformatic and imaging technology mean we are now beginning to understand how kinetochores assemble, bind to microtubules and release them when the connections made are inappropriate, and also how they influence microtubule behaviour. Recent work has begun to elucidate a pathway of kinetochore assembly in animal cells; the work has revealed that many kinetochore components are highly dynamic and that some cycle between kinetochores and spindle poles along microtubules. Further studies of the kinetochore-microtubule interface are illuminating: (1) the role of the Ndc80 complex and components of the Ran-GTPase system in microtubule attachment, force generation and microtubule-dependent inactivation of kinetochore spindle checkpoint activity; (2) the role of chromosomal passenger proteins in the correction of kinetochore attachment errors; and (3) the function of microtubule plus-end tracking proteins, motor depolymerases and other proteins in kinetochore movement on microtubules and movement coupled to microtubule poleward flux."}
{"_id": "1baff891e92bd7693bcb358296f2220137b352bb", "title": "Active compensation of current unbalance in paralleled silicon carbide MOSFETs", "text": "Current unbalance in paralleled power electronic devices can affect the performance and reliability of them. In this paper, the factors causing current unbalance in parallel connected silicon carbide (SiC) MOSFETs are analyzed, and the threshold mismatch is identified as the major factor. Then the distribution and temperature dependence of SiC MOSFETs' threshold voltage are studied experimentally. Based on the analysis and study, an active current balancing (ACB) scheme is presented. The scheme directly measures the unbalance current, and eliminates it in closed loop by varying the gate delay to each device. The turn-on and turn-off current unbalance are sensed and independently compensated to yield an optimal performance at both switching transitions. The proposed scheme provides robust compensation of current unbalance in fast-switching wide-band-gap devices while keeping circuit complexity and cost low. The performance of the proposed ACB scheme is verified by both simulation and experimental results."}
{"_id": "682f92a3deedbed883b7fb7faac0f4f29fa46877", "title": "Assessing The Impact Of Gender And Personality On Film Preferences", "text": "In this paper, the impact of gender and Big Five personality factors (Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism) on visual media preferences (for comedy, horror, action, romance and fantasy film genres) was examined. The analyses were carried out using data collected by Stillwell\u2019s (2007) myPersonality application on Facebook; and focused on a sub-sample of British residents aged 16-25 (n=29,197). It was confirmed that females prefer romantic films, whereas males favour action films. As predicted, there was also a marked interaction between gender and personality characteristics: females score higher on Neuroticism and Agreeableness than males. Individuals scoring high on the dimension of Openness were more likely to prefer comedy and fantasy films than romantic films. Conscientious individuals were more inclined to enjoy action and romance genres. Individuals scoring high on Neuroticism tended to prefer romantic films. Finally, significant interactions were noted between gender and Openness with regard to preferences for comedy, action and romance genres. Overall, it is certain that the combination of gender and Big Five personality factors contributes substantially to an understanding of people\u2019s film preferences. GENDER, PERSONALITY AND FILM PREFERENCES 3 Assessing the impact of Gender and Personality on Film Preferences Do comedies have a sexually differentiated public? Are males and females who enjoy horror films similar in terms of personality? Regardless of gender, do personality characteristics influence preference for the fantasy genre? In the present study, we have attempted to provide a meaningful answer to the aforementioned set of questions, based on the analysis of gender and personality interactions with regard to film preferences. In an attempt to complement previous research findings, we focused our analysis on the role of sex and personality in film preferences for British Facebook users aged 16-25. We paid attention to their scores on online Big Five personality questionnaires and their stated gender, using both single and multivariate measures to relate the two variables to film preferences, also listed on their profiles. The following hypotheses were tested: 1. Females and males differ with regard to film preferences. 2. Personality characteristics differ according to gender. 3. Film preferences vary in accordance with personality characteristics. 4. There is a significant interaction between gender and personality with regard to film preferences. the explanatory potential of online social networks Online social networks are mass repositories of valuable social and psychological information for researchers. Facebook, ranked as the most popular social network by worldwide monthly users, hosts for instance more than 400 million user profiles, in which individuals\u2019 number of friends, media content preferences and socio-demographic background\u2013 amongst other things are displayed. Their systematic use for data collection purposes is a relatively recent trend in social psychological research; Stillwell\u2019s creation of the myPersonality application (2007) on Facebook is a striking exemplar of the considerable explanatory potential of those online social networks. 
With myPersonality, users can take an online version of the Big Five personality questionnaire and agree to share their profile information for research purposes. As such, users\u2019 personalities can be matched to any other single or collection of individual information available on Facebook, opening up an array of possible areas of investigation. GENDER, PERSONALITY AND FILM PREFERENCES 4 the predominance of visual media The appeal of visual media is easily demonstrated in terms of the importance it has in the lives of most individuals. Data from the National Survey of Culture, Leisure and Sport (2007) indicates that watching television is the most common leisure activity for over eight out of ten men and women in England, taking precedence over spending time with friends and family, sport, etc. Another intriguing finding relates to TV and movie watching online: the number of Britons visiting such websites reached 21 million in 2007. Although it is clear that many people enjoy visual media offerings, it is much more difficult to pinpoint the specific viewer characteristics that predict genre enjoyment. In other words, who prefers which type of content? visual media content: an explanation of viewer preferences Initially the domain of inquiry of communication theorists, media preferences and their determinants are increasingly studied from the vantage point of a variety of disciplines, amongst which psychology and sociology feature predominantly (Kraaykamp et al, 2005). Their combined explanatory potential finds its articulation in the uses and gratification approach (Blumler & Katz, 1974; Rosengreen & Windhal, 1989), according to which preferences for specific media and/or specific content result from an individual\u2019s psychological and social attributes. gender differences in media content preferences Research on media preferences has paid notable attention to sex differences in responses to different types of film. It has been established that various movie genres elicit differentiated affective responses between the two sexes. Rather interestingly, the widespread gender \u2013 stereotyping running through entertainment products seems to correspond to actual viewer preferences. Males indeed prefer action/adventure genres typically associated with the masculine sex, whilst females have a soft spot for romance/drama. Oliver et al. (2000) point to the explanatory potential of differential gender-linked expectations, which can be conceptualized along two dimensions: (1) communality, typically associated with females and (2) agency, often associated with males (Eagly, 1987). Depending on whether they are communal or agency-consistent, the themes preponderant in each genre make up for a sexually differentiated public. A number of studies highlighted the importance of sex in social affiliation with media characters: viewers tend to affiliate more strongly with and GENDER, PERSONALITY AND FILM PREFERENCES 5 experience more intense emotional responses to same-sex characters (e.g. Deutsch, 1975; Hoffner & Cantor, 1991). Another potential explanation is that of sex differences in mean personality levels, which remain fairly constant across the lifespan (Feingold, 1994; McCrae & Costa, 1984). personality differences in media preferences Theorists have also focused considerable attention on the role of personality characteristics in modern mass communication theory (Blumler & Katz, 1974; Wober, 1986). 
In most personality/media use models, personality is conceptualized as the nexus of attitudes, beliefs and values that guide one\u2019s cognitive and affective interactions with the social environment (Weaver et al, 1993). Personality characteristics are thus believed to influence media preferences, which are essentially evaluative judgements pertaining to the gratifications consumers anticipate from their interaction with the media (Palmgreen, 1984). The Big Five framework of personality is most frequently employed by researchers seeking to demonstrate empirical connections between media gratifications and their psychological roots (e.g. Kraaykamp, 2001; Kraaykamp et al, 2005). It includes the following domains: Openness (traits like originality and open-mindedness), Conscientiousness (traits like hardworking and orderly), Extraversion (traits like energetic and sociable), Agreeableness (traits like affectionate and considerate) and Neuroticism (traits like nervous and tense) (Donellan & Lucas, 2008). With regard to media preferences, individuals scoring high on the dimension of Openness seem to prefer original and serious media content, whilst conscientious individuals like predictable and structured formats (Kraaykamp et al, 2005). The findings for extroverts are less straightforward: they seemingly favour sensory stimulation because of relatively high arousal levels (Costa & McCrae, 1988). Agreeableness appears to be the second most important personality trait, after Openness, in predicting visual preferences. Friendly individuals tend to watch popular content, devoid of upsetting and unconventional images (Kraykaamp, 2001). Finally, emotionally unstable individuals are thought to find easy means of escape from feelings of tension and stress in watching TV (Conway & Rubin, 1991). Most studies on personality effects in media preferences tend to either overlook or control for participants\u2019 gender (e.g. Weaver et al, 1993; Kraaykamp, 2001). As a result, it is difficult to tell in the former case whether the differentiated association between media GENDER, PERSONALITY AND FILM PREFERENCES 6 preferences and personality characteristics is partially caused by the fact that personality traits differ systematically between people of different sexes (Kraaykamp & Van Eijck, 2005). The latter design precludes the possibility of assessing the interaction between gender and personality when it comes to predicting media taste. It seemed to us as though no one has investigated gendered differences in personality in the context of similar genre preferences."}
{"_id": "1cdc4ad61825d3a7527b85630fe60e0585fb9347", "title": "The Open University \u2019 s repository of research publications and other research outputs Learning analytics : drivers , developments and challenges", "text": "Learning analytics is a significant area of technology-enhanced learning that has emerged during the last decade. This review of the field begins with an examination of the technological, educational and political factors that have driven the development of analytics in educational settings. It goes on to chart the emergence of learning analytics, including their origins in the 20th century, the development of data-driven analytics, the rise of learningfocused perspectives and the influence of national economic concerns. It next focuses on the relationships between learning analytics, educational data mining and academic analytics. Finally, it examines developing areas of learning analytics research, and identifies a series of future challenges."}
{"_id": "41a5aa76a0d6b2ff234546a5e9efba48b403ce19", "title": "Color , Shape and Texture based Fruit Recognition System", "text": "The paper presents an automated system for classification of fruits. A dataset containing five different fruits was constructed using an ordinary camera. All the fruits were analyzed on the basis of their color (RGB space), shape and texture and then classified using different classifiers to find the classifier that gives the best accuracy. Gray Level Cooccurance Matrix(GLCM) is used to calculate texture features .Best accuracy was achieved by support vector machine (svm). All the processing was carried out in Matlab. Keywords\u2014computer vision, pattern recognition, support vector machine, texture features."}
{"_id": "f3ac0d94ba2374e46dfa3a13effcc540205faf21", "title": "Exploring the relationships between college students' cell phone use, personality and leisure", "text": ""}
{"_id": "f93a76ffaf8c824c4100557e78cbb208f5fe5efb", "title": "A New ADS-B Authentication Framework Based on Efficient Hierarchical Identity-Based Signature with Batch Verification", "text": "Automatic dependent surveillance-broadcast (ADS-B) has become a crucial part of next generation air traffic surveillance technology and will be mandatorily deployed for most of the airspaces worldwide by 2020. Each aircraft equipped with an ADS-B device keeps broadcasting plaintext messages to other aircraft and the ground station controllers once or twice per second. The lack of security measures in ADS-B systems makes it susceptible to different attacks. Among the various security issues, we investigate the integrity and authenticity of ADS-B messages. We propose a new framework for providing ADS-B with authentication based on three-level hierarchical identity-based signature (HIBS) with batch verification. Previous signature-based ADS-B authentication protocols focused on how to generate signatures efficiently, while our schemes can also significantly reduce the verification cost, which is critical to ADS-B systems, since at any time an ADS-B receiver may receive lots of signatures. We design two concrete schemes. The basic scheme supports partial batch verification and the extended scheme provides full batch verification. We give a formal security proof for the extended scheme. Experiment results show that our schemes with batch verification are tremendously more efficient in batch verifying $n$ signatures than verifying $n$ signatures independently. For example, the running time of verifying 100 signatures is 502 and 484\u00a0ms for the basic scheme and the extended scheme respectively, while the time is 2500\u00a0ms if verifying the signatures independently."}
{"_id": "ee29d24b5e5bc8a8a71d5b57368f9a69e537fda7", "title": "Dual-Band and Dual-Polarized SIW-Fed Microstrip Patch Antenna", "text": "A new dual-band and dual-polarized antenna is investigated in this letter. Two longitudinal and transverse slots on the broad wall of the substrate integrated waveguide (SIW) are responsible for creating two different frequency bands with distinct polarization. A frequency selective surface (FSS) is placed on top of the microstrip patches to reduce the cross-polarization level of the antenna. The resonance frequencies of the patch and slot are set near to each other to get a wider impedance bandwidth. Bandwidth of more than 7% is achieved in each band. SIW feeding network increases the efficiency of the antenna. The simulated radiation efficiency of the antenna is better than 87% over the impedance bandwidth ."}
{"_id": "24c3961036ba2c0d3e548b2d94af3410ad8d7e6d", "title": "Simulation of lightning impulse voltage stresses in underground cables", "text": "The goal of this work is to study the transient behavior of the cable against the application of standard and non-standard lightning impulse voltage waveforms. A 66 kV cable model has been developed in MATLAB Simulink and standard and non-standard impulse voltages are applied to it. A preliminary comparative study on the obtained voltages indicated that non-standard impulse voltage waveform developed higher voltage stress in the cable. Simulation results helped to investigate which impulses (standard or non-standard) represent the worst possible voltage stresses on the cables. This study provides the basis for further study of effects of non-standard impulse voltage waveforms and make necessary correction in the existing impulse testing standards."}
{"_id": "7fb8b44968e65b668ab09ad0e64763785c31bc1d", "title": "Skeletal Quads: Human Action Recognition Using Joint Quadruples", "text": "Recent advances on human motion analysis have made the extraction of human skeleton structure feasible, even from single depth images. This structure has been proven quite informative for discriminating actions in a recognition scenario. In this context, we propose a local skeleton descriptor that encodes the relative position of joint quadruples. Such a coding implies a similarity normalisation transform that leads to a compact (6D) view-invariant skeletal feature, referred to as skeletal quad. Further, the use of a Fisher kernel representation is suggested to describe the skeletal quads contained in a (sub)action. A Gaussian mixture model is learnt from training data, so that the generation of any set of quads is encoded by its Fisher vector. Finally, a multi-level representation of Fisher vectors leads to an action description that roughly carries the order of sub-action within each action sequence. Efficient classification is here achieved by linear SVMs. The proposed action representation is tested on widely used datasets, MSRAction3D and HDM05. The experimental evaluation shows that the proposed method outperforms state-of-the-art algorithms that rely only on joints, while it competes with methods that combine joints with extra cues."}
{"_id": "c3833f53c947bf89e2c06fd152ca4c7e5a651d6e", "title": "Pedestrian detection and tracking with night vision", "text": "This paper presents a method for pedestrian detection and tracking using a single night-vision video camera installed on the vehicle. To deal with the nonrigid nature of human appearance on the road, a two-step detection/tracking method is proposed. The detection phase is performed by a support vector machine (SVM) with size-normalized pedestrian candidates and the tracking phase is a combination of Kalman filter prediction and mean shift tracking. The detection phase is further strengthened by information obtained by a road-detection module that provides key information for pedestrian validation. Experimental comparisons (e.g., grayscale SVM recognition versus binary SVM recognition and entire-body detection versus upper-body detection) have been carried out to illustrate the feasibility of our approach."}
{"_id": "07f488bf2285b290058eb49cf8c25abfd3a13c7d", "title": "Video Google: A Text Retrieval Approach to Object Matching in Videos", "text": "We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieval is immediate, returning a ranked list of key frames/shots in the manner of Google. The method is illustrated for matching on two full length feature films."}
{"_id": "a5134affedeeac45a1123167e0ecbf52aa96cc1e", "title": "Multi-Granularity Chinese Word Embedding", "text": "This paper considers the problem of learning Chinese word embeddings. In contrast to English, a Chinese word is usually composed of characters, and most of the characters themselves can be further divided into components such as radicals. While characters and radicals contain rich information and are capable of indicating semantic meanings of words, they have not been fully exploited by existing word embedding methods. In this work, we propose multi-granularity embedding (MGE) for Chinese words. The key idea is to make full use of such word-character-radical composition, and enrich word embeddings by further incorporating finer-grained semantics from characters and radicals. Quantitative evaluation demonstrates the superiority of MGE in word similarity computation and analogical reasoning. Qualitative analysis further shows its capability to identify finer-grained semantic meanings of words."}
{"_id": "4212f7c3534e5c549504ffa602c985b08788301c", "title": "When is a linguistic metaphor a conceptual metaphor ?", "text": "Volume 24 New Directions in Cognitive Linguistics Edited by Vyvyan Evans and St\u00e9phanie Pourcel Editorial Advisory Board Melissa F. Bowerman Nijmegen Wallace Chafe Santa Barbara, CA Philip R. Cohen Portland, OR Antonio Damasio Iowa City, IA Morton Ann Gernsbacher Madison, WI David McNeill Chicago, IL Eric Pederson Eugene, OR Fran\u00e7ois Recanati Paris Sally Rice Edmonton, Alberta Benny Shanon Jerusalem Lokendra Shastri Berkeley, CA Paul !agard Waterloo, Ontario Human Cognitive Processing (HCP)"}
{"_id": "49fd00a22f44a52f4699730403033416e0762e6d", "title": "Spam, Phishing, and Fraud Detection Using Random Projections, Adversarial Learning, and Semi-Supervised Learning", "text": ""}
{"_id": "a20bd2c0b96b352a94a97f224bd2160a037f0e1f", "title": "Pathways for irony detection in tweets", "text": "Posts on Twitter allow users to express ideas and opinions in a very dynamic way. The volume of data available is incredibly high in this support, and it may provide relevant clues regarding the judgement of the public on certain product, event, service etc. While in standard sentiment analysis the most common task is to classify the utterances according to their polarity, it is clear that detecting ironic senses represent a big challenge for Natural Language Processing. By observing a corpus constitued by tweets, we propose a set of patterns that might suggest ironic/sarcastic statements. Thus, we developed special clues for irony detection, through the implementation and evaluation of a set of patterns."}
{"_id": "860d3d4114711fa4ce9a5a4ccf362b80281cc981", "title": "Developing an Ontology of the Cyber Security Domain", "text": "This paper reports on a trade study we performed to support the development of a Cyber ontology from an initial malware ontology. The goals of the Cyber ontology effort are first described, followed by a discussion of the ontology development methodology used. The main body of the paper then follows, which is a description of the potential ontologies and standards that could be utilized to extend the Cyber ontology from its initially constrained malware focus. These resources include, in particular, Cyber and malware standards, schemas, and terminologies that directly contributed to the initial malware ontology effort. Other resources are upper (sometimes called 'foundational') ontologies. Core concepts that any Cyber ontology will extend have already been identified and rigorously defined in these foundational ontologies. However, for lack of space, this section is profoundly reduced. In addition, utility ontologies that are focused on time, geospatial, person, events, and network operations are briefly described. These utility ontologies can be viewed as specialized super-domain or even mid-level ontologies, since they span many, if not most, ontologies -including any Cyber ontology. An overall view of the ontological architecture used by the trade study is also given. The report on the trade study concludes with some proposed next steps in the iterative evolution of the"}
{"_id": "18640600aa6dfdc25ce89031939b14fa3fe85108", "title": "The Painting Fool Sees! New Projects with the Automated Painter", "text": "We report the most recent advances in The Painting Fool project, where we have integrated machine vision capabilities from the DARCI system into the automated painter, to enhance its abilities before, during and after the painting process. This has enabled new art projects, including a commission from an Artificial Intelligence company, and we report on this collaboration, which is one of the first instances in Computational Creativity research where creative software has been commissioned directly. The new projects have advanced The Painting Fool as an independent artist able to produce more diverse styles which break away from simulating natural media. The projects have also raised a philosophical question about whether software artists need to see in the same way as people, which we discuss briefly."}
{"_id": "e21c77eaf5330a8cec489d91743ce7688811b38b", "title": "Socializing by Gaming: Revealing Social Relationships in Multiplayer Online Games", "text": "Multiplayer Online Games (MOGs) like Defense of the Ancients and StarCraft II have attracted hundreds of millions of users who communicate, interact, and socialize with each other through gaming. In MOGs, rich social relationships emerge and can be used to improve gaming services such as match recommendation and game population retention, which are important for the user experience and the commercial value of the companies who run these MOGs. In this work, we focus on understanding social relationships in MOGs. We propose a graph model that is able to capture social relationships of a variety of types and strengths. We apply our model to real-world data collected from three MOGs that contain in total over ten years of behavioral history for millions of players and matches. We compare social relationships in MOGs across different game genres and with regular online social networks like Facebook. Taking match recommendation as an example application of our model, we propose SAMRA, a Socially Aware Match Recommendation Algorithm that takes social relationships into account. We show that our model not only improves the precision of traditional link prediction approaches, but also potentially helps players enjoy games to a higher extent."}
{"_id": "6d255f6d3b99296545e435744a8745d085621167", "title": "Private Collaborative Neural Network Learning", "text": "Machine learning algorithms, such as neural networks, create better predictive models when having access to larger datasets. In many domains, such as medicine and finance, each institute has only access to limited amounts of data, and creating larger datasets typically requires collaboration. However, there are privacy related constraints on these collaborations for legal, ethical, and competitive reasons. In this work, we present a feasible protocol for learning neural networks in a collaborative way while preserving the privacy of each record. This is achieved by combining Differential Privacy and Secure Multi-Party Computation with Machine Learning."}
{"_id": "617e96136117a10dd01445c8d5531d1be47f1e97", "title": "Supervised Hashing with Deep Neural Networks", "text": "In this paper, we propose training very deep neural networks (DNNs) for supervised learning of hash codes. Existing methods in this context train relatively \u201cshallow\u201d networks limited by the issues arising in back propagation (e.g. vanishing gradients) as well as computational efficiency. We propose a novel and efficient training algorithm inspired by alternating direction method of multipliers (ADMM) that overcomes some of these limitations. Our method decomposes the training process into independent layer-wise local updates through auxiliary variables. Empirically we observe that our training algorithm always converges and its computational complexity is linearly proportional to the number of edges in the networks. Empirically we manage to train DNNs with 64 hidden layers and 1024 nodes per layer for supervised hashing in about 3 hours using a single GPU. Our proposed very deep supervised hashing (VDSH) method significantly outperforms the state-of-theart on several benchmark datasets."}
{"_id": "833d2d79d6563df0ee201f5067ab0081220d0d0d", "title": "Treatment of thoracolumbar burst fractures without neurologic deficit by indirect reduction and posterior instrumentation: bisegmental stabilization with monosegmental fusion", "text": "This study retrospectively reviews 20 sequential patients with thoracolumbar burst fractures without neurologic deficit. All patients were treated by indirect reduction, bisegmental posterior transpedicular instrumentation and monosegmental fusion. Clinical and radiological outcome was analyzed after an average follow-up of 6.4 years. Re-kyphosis of the entire segment including the cephaled disc was significant with loss of the entire postoperative correction over time. This did not influence the generally benign clinical outcome. Compared to its normal height the fused cephalad disc was reduced by 70% and the temporarily spanned caudal disc by 40%. Motion at the temporarily spanned segment could be detected in 11 patients at follow-up, with no relation to the clinical result. Posterior instrumentation of thoracolumbar burst fractures can initially reduce the segmental kyphosis completely. The loss of correction within the fractured vertebral body is small. However, disc space collapse leads to eventual complete loss of segmental reduction. Therefore, posterolateral fusion alone does not prevent disc space collapse. Nevertheless, clinical long-term results are favorable. However, if disc space collapse has to prevented, an interbody disc clearance and fusion is recommended."}
{"_id": "14c8609e5632205c6af493277b1bcc36db266411", "title": "Socioeconomic disparities in health: pathways and policies.", "text": "Socioeconomic status (SES) underlies three major determinants of health: health care, environmental exposure, and health behavior. In addition, chronic stress associated with lower SES may also increase morbidity and mortality. Reducing SES disparities in health will require policy initiatives addressing the components of socioeconomic status (income, education, and occupation) as well as the pathways by which these affect health. Lessons for U.S. policy approaches are taken from the Acheson Commission in England, which was charged with reducing health disparities in that country."}
{"_id": "5d9fbab509a8fb09eac6eea88dbb8dcfb646f5df", "title": "Human wayfinding in information networks", "text": "Navigating information spaces is an essential part of our everyday lives, and in order to design efficient and user-friendly information systems, it is important to understand how humans navigate and find the information they are looking for. We perform a large-scale study of human wayfinding, in which, given a network of links between the concepts of Wikipedia, people play a game of finding a short path from a given start to a given target concept by following hyperlinks. What distinguishes our setup from other studies of human Web-browsing behavior is that in our case people navigate a graph of connections between concepts, and that the exact goal of the navigation is known ahead of time. We study more than 30,000 goal-directed human search paths and identify strategies people use when navigating information spaces. We find that human wayfinding, while mostly very efficient, differs from shortest paths in characteristic ways. Most subjects navigate through high-degree hubs in the early phase, while their search is guided by content features thereafter. We also observe a trade-off between simplicity and efficiency: conceptually simple solutions are more common but tend to be less efficient than more complex ones. Finally, we consider the task of predicting the target a user is trying to reach. We design a model and an efficient learning algorithm. Such predictive models of human wayfinding can be applied in intelligent browsing interfaces."}
{"_id": "366e55e7cc93240e8360b85dfec95124410c48bc", "title": "Designing and Packaging Wide-Band PAs: Wideband PA and Packaging, History, and Recent Advances: Part 1", "text": "In current applications such as communication, aerospace and defense, electronic warfare (EW), electromagnetic compatibility (EMC), and sensing, among others, there is an ever-growing demand for more linear power with increasingly greater bandwidth and efficiency. Critical for these applications are the design and packaging of wide-band, high-power amplifiers that are both compact in size and low in cost [1]. Most such applications, including EW, radar, and EMC testers, require high 1-dB compression point (P1dB) power with good linearity across a wide band (multi-octave to decade bandwidth) [2]. In addition to linear power, wide bandwidth is essential for high-data-rate communication and high resolution in radar and active imagers [3]. In modern electronics equipment such as automated vehicles, rapidly increasing complexity imposes strict EMC regulations for human safety and security [4]. This often requires challenging specifications for the power amplifier (PA), such as very high P1dB power [to kilowatt, continuous wave (CW)] across approximately a decade bandwidth with high linearity, reliability, and long life, even for 100% load mismatch [5]."}
{"_id": "abe51b31659d4c45f10b99ada0c346be78d333b1", "title": "Grid-Connected Photovoltaic Generation System", "text": "This study addresses a grid-connected photovoltaic (PV) generation system. In order to make the PV generation system more flexible and expandable, the backstage power circuit is composed of a high step-up converter and a pulsewidth-modulation (PWM) inverter. In the dc-dc power conversion, the high step-up converter is introduced to improve the conversion efficiency of conventional boost converters and to allow the parallel operation of low-voltage PV modules. Moreover, an adaptive total sliding-mode control system is designed for the current control of the PWM inverter to maintain the output current with a higher power factor and less variation under load changes. In addition, an adaptive step-perturbation method is proposed to achieve the objective of maximum power point tracking, and an active sun tracking scheme without any light sensors is investigated to make PV plates face the sun directly in order to capture maximum irradiation and enhance system efficiency. Experimental results are given to verify the validity of the high step-up converter, the PWM inverter control, the ASP method, and the active sun tracker for a grid-connected PV generation system."}
{"_id": "07ee87c822e9bb6deac504cf45439ff856352b5e", "title": "Development of the Perceived Stress Questionnaire: a new tool for psychosomatic research.", "text": "A 30-question Perceived Stress Questionnaire (PSQ) was validated, in Italian and English, among 230 subjects. Test-retest reliability was 0.82 for the General (past year or two) PSQ, while monthly Recent (past month) PSQs varied by a mean factor of 1.9 over 6 months; coefficient alpha > 0.9. General and/or Recent PSQ scores were associated with trait anxiety (r = 0.75), Cohen's Perceived Stress Scale (r = 0.73), depression (r = 0.56), self-rated stress (r = 0.56), and stressful life events (p < 0.05). The General PSQ was higher in in-patients than in out-patients (p < 0.05); both forms were correlated with a somatic complaints scale in a non-patient population (r > 0.5), and were higher, among 27 asymptomatic ulcerative colitis patients, in the seven who had rectal inflammation than in those with normal proctoscopy (p = 0.03). Factor analysis yielded seven factors, of which those reflecting interpersonal conflict and tension were significantly associated with health outcomes. The Perceived Stress Questionnaire may be a valuable addition to the armamentarium of psychosomatic researchers."}
{"_id": "640a932208e0fc67b49019961bc420dfb91ac676", "title": "Dynamically pre-trained deep recurrent neural networks using environmental monitoring data for predicting PM2.5", "text": "Fine particulate matter ( $$\\hbox {PM}_{2.5}$$ PM 2.5 ) has a considerable impact on human health, the environment and climate change. It is estimated that with better predictions, US$9 billion can be saved over a 10-year period in the USA (State of the science fact sheet air quality. http://www.noaa.gov/factsheets/new , 2012). Therefore, it is crucial to keep developing models and systems that can accurately predict the concentration of major air pollutants. In this paper, our target is to predict $$\\hbox {PM}_{2.5}$$ PM 2.5 concentration in Japan using environmental monitoring data obtained from physical sensors with improved accuracy over the currently employed prediction models. To do so, we propose a deep recurrent neural network (DRNN) that is enhanced with a novel pre-training method using auto-encoder especially designed for time series prediction. Additionally, sensors selection is performed within DRNN without harming the accuracy of the predictions by taking advantage of the sparsity found in the network. The numerical experiments show that DRNN with our proposed pre-training method is superior than when using a canonical and a state-of-the-art auto-encoder training method when applied to time series prediction. The experiments confirm that when compared against the $$\\hbox {PM}_{2.5}$$ PM 2.5 prediction system VENUS (National Institute for Environmental Studies. Visual Atmospheric Environment Utility System. http://envgis5.nies.go.jp/osenyosoku/ , 2014), our technique improves the accuracy of $$\\hbox {PM}_{2.5}$$ PM 2.5 concentration level predictions that are being reported in Japan."}
{"_id": "6864a3cab546cf9de7dd183c73ae344fddb033b6", "title": "Feature selection in text classification", "text": "In recent years, text classification have been widely used. Dimension of text data has increased more and more. Working of almost all classification algorithms is directly related to dimension. In high dimension data set, working of classification algorithms both takes time and occurs over fitting problem. So feature selection is crucial for machine learning techniques. In this study, frequently used feature selection metrics Chi Square (CHI), Information Gain (IG) and Odds Ratio (OR) have been applied. At the same time the method Relevancy Frequency (RF) proposed as term weighting method has been used as feature selection method in this study. It is used for tf.idf term as weighting method, Sequential Minimal Optimization (SMO) and Naive Bayes (NB) in the classification algorithm. Experimental results show that RF gives successful results."}
{"_id": "f5feb2a151c54ec9699924d401a66c193ddd3c8b", "title": "Design of a Multi-Sensor Cooperation Travel Environment Perception System for Autonomous Vehicle", "text": "This paper describes the environment perception system designed for intelligent vehicle SmartV-II, which won the 2010 Future Challenge. This system utilizes the cooperation of multiple lasers and cameras to realize several necessary functions of autonomous navigation: road curb detection, lane detection and traffic sign recognition. Multiple single scan lasers are integrated to detect the road curb based on Z-variance method. Vision based lane detection is realized by two scans method combining with image model. Haar-like feature based method is applied for traffic sign detection and SURF matching method is used for sign classification. The results of experiments validate the effectiveness of the proposed algorithms and the whole system."}
{"_id": "49c88aa6a22a41eef4058578ce1470964439b35f", "title": "3D laser scan classification using web data and domain adaptation", "text": "Over the last years, object recognition has become a more and more active field of research in robotics. An important problem in object recognition is the need for sufficient labeled training data to learn good classifiers. In this paper we show how to significantly reduce the need for manually labeled training data by leveraging data sets available on the World Wide Web. Specifically, we show how to use objects from Google\u2019s 3D Warehouse to train classifiers for 3D laser scans collected by a robot navigating through urban environments. In order to deal with the different characteristics of the web data and the real robot data, we additionally use a small set of labeled 3D laser scans and perform domain adaptation. Our experiments demonstrate that additional data taken from the 3D Warehouse along with our domain adaptation greatly improves the classification accuracy on real laser scans."}
{"_id": "707012ec660e31ba101cc43609e2201a2bd833fa", "title": "Detecting human activities in retail surveillance using hierarchical finite state machine", "text": "Cashiers in retail stores usually exhibit certain repetitive and periodic activities when processing items. Detecting such activities plays a key role in most retail fraud detection systems. In this paper, we propose a highly efficient, effective and robust vision technique to detect checkout-related primitive activities, based on a hierarchical finite state machine (FSM). Our deterministic approach uses visual features and prior spatial constraints on the hand motion to capture particular motion patterns performed in primitive activities. We also apply our approach to the problem of retail fraud detection. Experimental results on a large set of video data captured from retail stores show that our approach, while much simpler and faster, achieves significantly better results than state-of-the-art machine learning-based techniques both in detecting checkout-related activities and in detecting checkout-related fraudulent incidents."}
{"_id": "49e78709e255efb02d9eca2e34b4a0edf8fcd026", "title": "COMPARING PROGRAMMER PRODUCTIVITY IN OPENACC AND CUDA: AN EMPIRICAL INVESTIGATION", "text": "OpenACC has been touted as a \"high productivity\" API designed to make GPGPU programming accessible to scientific programmers, but to date, no studies have attempted to verify this quantitatively. In this paper, we conduct an empirical investigation of program productivity comparisons between OpenACC and CUDA in the programming time, the execution time and the analysis of independence of OpenACC model in high performance problems. Our results show that, for our programs and our subject pool, this claim is true. We created two assignments called Machine Problem 3(MP3) and Machine Problem 4(MP4) in the classroom environment and instrumented the WebCode website developed by ourselves to record details of students\u2019 coding process. Three hypotheses were supported by the statistical data: for the same parallelizable problem, (1) the OpenACC programming time is at least 37% shorter than CUDA; (2) the CUDA running speed is 9x faster than OpenACC; (3) the OpenACC development work is not significantly affected by previous CUDA experience"}
{"_id": "7d426470b5f705404d643b796c4b8b3826e8af99", "title": "PROFITING FROM ARBITRAGE AND ODDS BIASES OF THE EUROPEAN FOOTBALL GAMBLING MARKET", "text": "A gambling market is usually described as being inefficient if there are one or more betting strategies that generate profit, at a consistent rate, as a consequence of exploiting market flaws. This paper examines the online European football gambling market based on 14 European football leagues over a period of seven years, from season 2005/06 to 2011/12 inclusive, and takes into consideration the odds provided by numerous bookmaking firms. Contrary to common misconceptions, we demonstrate that the accuracy of bookmakers' odds has not improved over this period. More importantly, our results question market efficiency by demonstrating high profitability on the basis of consistent odds biases and numerous arbitrage opportunities."}
{"_id": "7b0f2b6485fe5d4c7332030b98bd11f43dcd90fd", "title": "Low-cost, real-time obstacle avoidance for mobile robots", "text": "The goal of this project1 is to advance the field of automation and robotics by utilizing recently-released, low-cost sensors and microprocessors to develop a mechanism that provides depth-perception and autonomous obstacle avoidance in a plug-and-play fashion. We describe the essential hardware components that can enable such a low-cost solution and an algorithm to avoid static obstacles present in the environment. The mechanism utilizes a novel single-point LIDAR module that affords more robustness and invariance than popular approaches, such as Neural Networks and Stereo. When this hardware is coupled with the proposed efficient obstacle avoidance algorithm, this mechanism is able to accurately represent environments through point clouds and construct obstacle-free paths to a destination, in a small timeframe. A prototype mechanism has been installed on a quadcopter for visualization on how actual implementation may take place2. We describe experimental results based on this prototype."}
{"_id": "4767a0c9f7261a4265db650d3908c6dd1d10a076", "title": "Joint tracking and segmentation of multiple targets", "text": "Tracking-by-detection has proven to be the most successful strategy to address the task of tracking multiple targets in unconstrained scenarios [e.g. 40, 53, 55]. Traditionally, a set of sparse detections, generated in a preprocessing step, serves as input to a high-level tracker whose goal is to correctly associate these \u201cdots\u201d over time. An obvious short-coming of this approach is that most information available in image sequences is simply ignored by thresholding weak detection responses and applying non-maximum suppression. We propose a multi-target tracker that exploits low level image information and associates every (super)-pixel to a specific target or classifies it as background. As a result, we obtain a video segmentation in addition to the classical bounding-box representation in unconstrained, real-world videos. Our method shows encouraging results on many standard benchmark sequences and significantly outperforms state-of-the-art tracking-by-detection approaches in crowded scenes with long-term partial occlusions."}
{"_id": "6ae0a919ffee81098c9769a4503cc15e42c5b585", "title": "Parallel Formulations of Decision-Tree Classification Algorithms", "text": "Classification decision tree algorithms are used extensively for data mining in many domains such as retail target marketing, fraud detection, etc. Highly parallel algorithms for constructing classification decision trees are desirable for dealing with large data sets in reasonable amount of time. Algorithms for building classification decision trees have a natural concurrency, but are difficult to parallelize due to the inherent dynamic nature of the computation. In this paper, we present parallel formulations of classification decision tree learning algorithm based on induction. We describe two basic parallel formulations. One is based on Synchronous Tree Construction Approach and the other is based on Partitioned Tree Construction Approach. We discuss the advantages and disadvantages of using these methods and propose a hybrid method that employs the good features of these methods. We also provide the analysis of the cost of computation and communication of the proposed hybrid method. Moreover, experimental results on an IBM SP-2 demonstrate excellent speedups and scalability."}
{"_id": "146bb2ea1fbdd86f81cd0dae7d3fd63decac9f5c", "title": "Genetic Algorithms in Search Optimization and Machine Learning", "text": "This book brings together-in an informal and tutorial fashion-the computer techniques, mathematical tools, and research results that will enable both students and practitioners to apply genetic algorithms to problems..."}
{"_id": "97d3a8863cf2ea6a4670a911cf2fbeca6e043b36", "title": "Indian Sign Language gesture classification as single or double handed gestures", "text": "The development of a sign language recognition system can have a great impact on the daily lives of humans with hearing disabilities. Recognizing gestures from the Indian Sign Language (ISL) with a camera can be difficult due to complexity of various gestures. The motivation behind the paper is to develop an approach to successfully classify gestures in the ISL under ambiguous conditions from static images. A novel approach involving the decomposition of gestures into single handed or double handed gesture has been presented in this paper. Classifying gesture into these subcategories simplifies the process of gesture recognition in the ISL due to presence of lesser number of gestures in each subcategory. Various approaches making use of Histogram of Gradients (HOG) features and geometric descriptors using KNN and SVM classifiers were tried on a dataset consisting of images of all 26 English alphabets present in the ISL under variable background. HOG features when classified with Support Vector Machine were found to be the most efficient approach resulting in an accuracy of 94.23%."}
{"_id": "006c843a29a3c77f232638bf7aea63fc8601a73a", "title": "Estimation of the Mathematical Parameters of Double-Exponential Pulses Using the Nelder\u2013Mead Algorithm", "text": "Transient pulses for electromagnetic compatibility problems, such as the high-altitude electromagnetic pulse and ultrawideband pulses, are often described by a double-exponential pulse. Such a pulse shape is specified physically by the three characteristic parameters rise time tr, pulsewidth tfwhm (full-width at half-maximum), and maximum amplitude Emax. The mathematical description is a double-exponential function with the parameters \u03b1, \u03b2, and E0. In practice, it is often necessary to transform the two groups of parameters into each other. This paper shows a novel relationship between the physical parameters tr and tfwhm on the one hand and the mathematical parameters \u03b1 and \u03b2 on the other. It is shown that the least-squares method in combination with the Nelder-Mead simplex algorithm is appropriate to determine an approximate closed-form formula between these parameters. Therefore, the extensive analysis of double-exponential pulses is possible in a considerably shorter computation time. The overall approximation error is less than 3.8%."}
{"_id": "3c32cd58de4ed693e196d51eb926396700ed8488", "title": "An Error-Oriented Approach to Word Embedding Pre-Training", "text": "We propose a novel word embedding pretraining approach that exploits writing errors in learners\u2019 scripts. We compare our method to previous models that tune the embeddings based on script scores and the discrimination between correct and corrupt word contexts in addition to the generic commonly-used embeddings pre-trained on large corpora. The comparison is achieved by using the aforementioned models to bootstrap a neural network that learns to predict a holistic score for scripts. Furthermore, we investigate augmenting our model with error corrections and monitor the impact on performance. Our results show that our error-oriented approach outperforms other comparable ones which is further demonstrated when training on more data. Additionally, extending the model with corrections provides further performance gains when data sparsity is an issue."}
{"_id": "851262970e8c26cb1fa0bef54eb21f22a0d5233f", "title": "Single-Channel and Multi-Channel Sinusoidal Audio Coding Using Compressed Sensing", "text": "Compressed sensing (CS) samples signals at a much lower rate than the Nyquist rate if they are sparse in some basis. In this paper, the CS methodology is applied to sinusoidally modeled audio signals. As this model is sparse by definition in the frequency domain (being equal to the sum of a small number of sinusoids), we investigate whether CS can be used to encode audio signals at low bitrates. In contrast to encoding the sinusoidal parameters (amplitude, frequency, phase) as current state-of-the-art methods do, we propose encoding few randomly selected samples of the time-domain description of the sinusoidal component (per signal segment). The potential of applying compressed sensing both to single-channel and multi-channel audio coding is examined. The listening test results are encouraging, indicating that the proposed approach can achieve comparable performance to that of state-of-the-art methods. Given that CS can lead to novel coding systems where the sampling and compression operations are combined into one low-complexity step, the proposed methodology can be considered as an important step towards applying the CS framework to audio coding applications."}
{"_id": "7ad5ccfc923be803044c91422e6537ce979f4f09", "title": "Knowledge Discovery in Biomedical Data Facilitated by Domain Ontologies", "text": "In some real-world areas, it is important to enrich the data with external background knowledge so as to provide context and to facilitate pattern recognition. These areas may be described as data rich but knowledge poor. There are two challenges to incorporate this biological knowledge into the data mining cycle: (1) generating the ontologies; and (2) adapting the data mining algorithms to make use of the ontologies. This chapter presents the state-of-the-art in bringing the background ontology knowledge into the pattern recognition task for biomedical data."}
{"_id": "9bb9a6e6dcbe47417448b40557d7d7a200f02792", "title": "AUTOMATIC GENERATION OF PRESENTATION SLIDES FOR ACADEMIC PAPERS USING INTEGER LINEAR PROGRAMMING", "text": "Presentations are one of the most common and effective ways of communicating the overview of a work to the audience. Given a specialized paper, programmed era of introduction slides diminishes the exertion of the moderator and aides in making an organized synopsis of the paper. In this Project, we propose the structure of another framework that does this assignment. Any paper that has a theoretical and whose segments can be ordered under presentation, related work, model, examinations and conclusions can be given as info. This XML record is parsed and data in it is extricated. An inquiry particular extractive summarizer has been utilized to create slides. In numerous meetings and social occasions, a moderator takes the guide of slides to present his work deliberately (pictorial). The introduction slides for the paper are then made by utilizing the Integer Linear Programming (ILP) demonstrate with complicatedly organized target breaking points and goals to pick and adjust key expressions and sentences. The framework removed theme and non subject parts from the article in light of the basic talk structure. The isolated subject and non-topic parts were determined to the slides by giving fitting indents in perspective of the examination of their syntactic structure. Keywords\u2014 Abstracting methods, Integer Linear Programming, Support Vector Regression model, text mining."}
{"_id": "8eefd28eb47e72794bb0355d8abcbebaac9d8ab1", "title": "Semi-Supervised Text Classification Using EM", "text": "For several decades, statisticians have advocated using a c ombination of labeled and unlabeled data to train classifiers by estimating parameters o f a generative model through iterative Expectation-Maximization (EM) techniques. Thi s chapter explores the effectiveness of this approach when applied to the domain of text class ification. Text documents are represented here with a bag-of-words model, which leads to a generative classification model based on a mixture of multinomials. This model is an ext remely simplistic representation of the complexities of written text. This chapter explains and illustrates three key points about semi-supervised learning for text classifi cat on with generative models. First, despite the simplistic representation, some text do mains have a high positive correlation between generative model probability and classifica tion accuracy. In these domains, a straightforward application of EM with the naive Bayes tex t model works well. Second, some text domains do not have this correlation. Here we can ad opt a more expressive and appropriate generative model that does have a positive c orrelation. In these domains, semi-supervised learning again improves classification ac curacy. Finally, EM suffers from the problem of local maxima, especially in high dimension do mains such as text classification. We demonstrate that deterministic annealing, a varia nt of EM, can help overcome the problem of local maxima and increase classification accurac y further when the generative model is appropriate."}
{"_id": "7cfa413af7b561f910edaf3cbaf5a75df4e60e92", "title": "Crown and post-free adhesive restorations for endodontically treated posterior teeth: from direct composite to endocrowns.", "text": "Coronal rehabilitation of endodontically treated posterior teeth is still a controversial issue. Although the classical crown supported by radicular metal posts remains widely spread in dentistry, its invasiveness has been largely criticized. New materials and therapeutic options based entirely on adhesion are nowadays available. They allow performing a more conservative, faster and less expensive dental treatment. All clinical cases presented in this paper are solved by using these modern techniques, from direct composite restorations to indirect endocrowns."}
{"_id": "5034d0c127d3b881b3afe4b68690608ceecf1b04", "title": "Advanced Safe PIN-Entry Against Human Shoulder-Surfing Ms", "text": "When users insert their passwords in a common area, they might be at risk of aggressor stealing their password. The PIN entry can be perceived by close by adversaries, more effectually in a crowded place. A new technique has been established to cope with this problem that is cryptography prevention techniques. Instead, there have been alternative approaches among them, the PIN entry was elegant because of its simplicity and accessibility. The basic BW method is focused to withstand a human shoulder surfing attack. In every round, a well ordered numeric keypad is colored at odd. A user who knows the accurate PIN digit can enter by pressing the separate color key. The IBW method is examined to be confidential against human nemesis due to the restricted cognitive abilities of humans. Also the IBW method is proven to be robust against any hacking attacks."}
{"_id": "b9e5a91a84d541097b42b60ba673b506a206f5a0", "title": "A recommender system using GA K-means clustering in an online shopping market", "text": "The Internet is emerging as a new marketing channel, so understanding the characteristics of online customers\u2019 needs and expectations is considered a prerequisite for activating the consumer-oriented electronic commerce market. In this study, we propose a novel clustering algorithm based on genetic algorithms (GAs) to effectively segment the online shopping market. In general, GAs are believed to be effective on NP-complete global optimization problems, and they can provide good near-optimal solutions in reasonable time. Thus, we believe that a clustering technique with GA can provide a way of finding the relevant clusters more effectively. The research in this paper applied K-means clustering whose initial seeds are optimized by GA, which is called GA K-means, to a real-world online shopping market segmentation case. In this study, we compared the results of GA K-means to those of a simple K-means algorithm and self-organizing maps (SOM). The results showed that GA K-means clustering may improve segmentation performance in comparison to other typical clustering algorithms. In addition, our study validated the usefulness of the proposed model as a preprocessing tool for recommendation systems. 2007 Elsevier Ltd. All rights reserved."}
{"_id": "e0f6a9dbe9a7eb5b0d899658579545fdfc3f1994", "title": "Large-scale deep learning at Baidu", "text": "In the past 30 years, tremendous progress has been achieved in building effective shallow classification models. Despite the success, we come to realize that, for many applications, the key bottleneck is not the qualify of classifiers but that of features. Not being able to automatically get useful features has become the main limitation for shallow models. Since 2006, learning high-level features using deep architectures from raw data has become a huge wave of new learning paradigms. In recent two years, deep learning has made many performance breakthroughs, for example, in the areas of image understanding and speech recognition. In this talk, I will walk through some of the latest technology advances of deep learning within Baidu, and discuss the main challenges, e.g., developing effective models for various applications, and scaling up the model training using many GPUs. In the end of the talk I will discuss what might be interesting future directions."}
{"_id": "6be9cd1b6be302e0440a5d1882e4b128172d6fd9", "title": "SD-Layer: Stain Deconvolutional Layer for CNNs in Medical Microscopic Imaging", "text": "Convolutional Neural Networks (CNNs) are typically trained in the RGB color space. However, in medical imaging, we believe that pixel stain quantities offer a fundamental view of the interaction between tissues and stain chemicals. Since the optical density (OD) colorspace allows to compute pixel stain quantities from pixel RGB intensities using the Beer-Lambert\u2019s law, we propose a stain deconvolutional layer, hereby named as SD-Layer, affixed at the front of CNN that performs two functions: (1) it transforms the input RGB microscopic images to Optical Density (OD) space and (2) this layer deconvolves OD image with the stain basis learned through backpropagation and provides tissue-specific stain absorption quantities as input to the following CNN layers. With the introduction of only nine additional learnable parameters in the proposed SD-Layer, we obtain a considerably improved performance on two standard CNN architectures: AlexNet and T-CNN. Using the T-CNN architecture prefixed with the proposed SD-Layer, we obtain 5-fold crossvalidation accuracy of 93.2% in the problem of differentiating malignant immature White Blood Cells (WBCs) from normal immature WBCs for cancer detection."}
{"_id": "b554333fb0d318606d6663e237f59858584e42f2", "title": "Principles of spatial database analysis and design", "text": "This chapter covers the fundamentals of spatial database analysis and design, It begins by defining the most important concepts: \u2018spatial database\u2019, \u2018analysis\u2019, \u2018design\u2019, and \u2018model\u2019; and continues with a presentation of the rationale supporting the use of formal methods for analysis and design. The basic elements and approaches of such methods are described, in addition to the processes used. Emphasis is placed on the particularities of spatial databases and the improvements needed for non-spatial methods and tools in order to enhance their efficiency. Finally, the chapter presents a set of tools, called CASE (computer-assisted software engineering), which are built to support the formal analysis and design methods."}
{"_id": "e8b109a3881eeb3c156568536f351a6d39fd2023", "title": "A BiCMOS Ultra-Wideband 3.1–10.6-GHz Front-End", "text": "This paper presents a direct-conversion receiver for FCC-compliant ultra-wideband (UWB) Gaussian-shaped pulses that are transmitted in one of fourteen 500-MHz-wide channels within the 3.1-10.6-GHz band. The receiver is fabricated in 0.18-mum SiGe BiCMOS. The packaged chip consists of an unmatched wideband low-noise amplifier (LNA), filter, phase-splitter, 5-GHz ISM band switchable notch filter, 3.1-10.6-GHz local oscillator (LO) amplifiers, mixers, and baseband channel-select filters/buffers. The required quadrature single-ended LO signals are generated externally. The average conversion gain and input P1dB are 32 dB and -41 dBm, respectively. The unmatched LNA provides a system noise figure of 3.3 to 5 dB over the entire band. The chip draws 30 mA from 1.8 V. To verify the unmatched LNA's performance in a complete system, wireless testing of the front-end embedded in a full receiver at 100 Mbps reveals a 10-3 bit-error rate (BER) at -80 dBm sensitivity. The notch filter suppresses out-of-band interferers and reduces the effects of intermodulation products that appear in the baseband. BER improvements of an order of magnitude and greater are demonstrated with the filter"}
{"_id": "696ad1c38b588dae3295668a0fa34021c4481030", "title": "Information-theoretical label embeddings for large-scale image classification", "text": "We present a method for training multi-label, massively multi-class image classification models, that is faster and more accurate than supervision via a sigmoid cross-entropy loss (logistic regression). Our method consists in embedding high-dimensional sparse labels onto a lower-dimensional dense sphere of unit-normed vectors, and treating the classification problem as a cosine proximity regression problem on this sphere. We test our method on a dataset of 300 million high-resolution images with 17,000 labels, where it yields considerably faster convergence, as well as a 7% higher mean average precision compared to logistic regression."}
{"_id": "e6cfbf052d38c2ac09d88f1114d775b5c71ddb72", "title": "Analysis of a microstrip patch array fed cylindric lens antenna for 77GHz automotive radar", "text": "Regarding gain and SLL, the new lens antenna concept has better focusing abilities compared to the column antenna. Pattern perturbations due to lens edge effects were analyzed with regard to digital array processing. They can be confined either with an appropriate choice for the lens length L and the xlscr,i of the radar sensorpsilas receiving ULA or with narrower feeding beams in the H-plane."}
{"_id": "941427ee3ce73c7855fee71833a93803e1fb3576", "title": "Towards Modelling Pedestrian-Vehicle Interactions: Empirical Study on Urban Unsignalized Intersection", "text": "The modelling and simulation of the interaction among vehicles and pedestrians during crosswalking is an open challenge for both research and practical computational solutions supporting urban/traffic decision makers and managers. The social cost of pedestrians\u2019 risky behaviour pushes the development of a new generation of computational models integrating analytical knowledge, data and experience about the complex dynamics occurring in pedestrian/vehicle interactions, which are not completely understood despite recent efforts. This paper presents the results of a significant data gathering campaign realised at an unsignalized zebra crossing. The selected area of the city of Milan (Italy) is characterised by a significant presence of elderly inhabitants and pedestrian-vehicle risky interactions, testified by a high number of accidents involving pedestrians in the past years. The results concern the analysis of: (i) vehicular and pedestrian traffic volumes; (ii) level of service; (iii) pedestrian-vehicle interactions, considering the impact of ageing on crossing behaviour. Results showed that the phenomenon is characterised by three main phases: approaching, appraising (evaluation of the distance and speed of oncoming vehicles) and crossing. The final objective of the research is to support the development of a microscopic agent-based tool for simulating pedestrian behaviour at unsignalized crosswalks, focusing on the specific needs of the elderly pedestrians."}
{"_id": "a2b5b8a05db3ffdc02db7d4c2b1e0e64ad2f2f2a", "title": "Status of market, regulation and research of genetically modified crops in Chile.", "text": "Agricultural biotechnology and genetically modified (GM) crops are effective tools to substantially increase productivity, quality, and environmental sustainability in agricultural farming. Furthermore, they may contribute to improving the nutritional content of crops, addressing needs related to public health. Chile has become one of the most important global players for GM seed production for counter-season markets and research purposes. It has a comprehensive regulatory framework to carry out this activity, while at the same time there are numerous regulations from different agencies addressing several aspects related to GM crops. Despite imports of GM food/feed or ingredients for the food industry being allowed without restrictions, Chilean farmers are not using GM seeds for farming purposes because of a lack of clear guidelines. Chile is in a rather contradictory situation about GM crops. The country has invested considerable resources to fund research and development on GM crops, but the lack of clarity in the current regulatory situation precludes the use of such research to develop new products for Chilean farmers. Meanwhile, a larger scientific capacity regarding GM crop research continues to build up in the country. The present study maps and analyses the current regulatory environment for research and production of GM crops in Chile, providing an updated overview of the current status of GM seeds production, research and regulatory issues."}
{"_id": "d0f95a9ae047ca69ca30b0ca88be7387b2074c23", "title": "A Machine Learning Approach to Software Requirements Prioritization", "text": "Deciding which, among a set of requirements, are to be considered first and in which order is a strategic process in software development. This task is commonly referred to as requirements prioritization. This paper describes a requirements prioritization method called Case-Based Ranking (CBRank), which combines project's stakeholders preferences with requirements ordering approximations computed through machine learning techniques, bringing promising advantages. First, the human effort to input preference information can be reduced, while preserving the accuracy of the final ranking estimates. Second, domain knowledge encoded as partial order relations defined over the requirement attributes can be exploited, thus supporting an adaptive elicitation process. The techniques CBRank rests on and the associated prioritization process are detailed. Empirical evaluations of properties of CBRank are performed on simulated data and compared with a state-of-the-art prioritization method, providing evidence of the method ability to support the management of the tradeoff between elicitation effort and ranking accuracy and to exploit domain knowledge. A case study on a real software project complements these experimental measurements. Finally, a positioning of CBRank with respect to state-of-the-art requirements prioritization methods is proposed, together with a discussion of benefits and limits of the method."}
{"_id": "57fa2f7be1f0ef44f738698d7088d8b281f5d806", "title": "Novel Feed Network for Circular Polarization Antenna Diversity", "text": "In this letter, we propose a novel feed network for circular polarization (CP) diversity antennas to enhance communications performances. The previous CP diversity systems adopted dual polarization diversity using right-hand circular polarization (RHCP) and left-hand circular polarization (LHCP) within a small space. However, the same rotational CPs need further antenna distance. To overcome the space limitation for the same CP diversity antenna, we propose a feed network for CP antenna using conventional CP with orthogonal polarization radiation characteristics of a linear polarization (LP) diversity system without additional antennas. The port isolation characteristics and impedance matching conditions are numerically verified for the diversity system and antenna characteristics. The proposed feed network with CP patch antennas is fabricated and measured in a reverberation chamber for diversity performances, and their validity is verified."}
{"_id": "3fa9ab38d81d89e601fe7a704b8c693bf48bd4e8", "title": "Microcephaly and early-onset nephrotic syndrome \u2014confusion in Galloway-Mowat syndrome", "text": "We report a 2-year-old girl with nephrotic syndrome, microcephaly, seizures and psychomotor retardation. Histological studies of a renal biopsy revealed focal glomerular sclerosis with mesangiolysis and capillary microaneurysms. Dysmorphic features were remarkable: abnormal-shaped skull, coarse hair, narrow forehead, large low-set ears, almond-shaped eyes, low nasal bridge, pinched nose, thin lips and micrognathia. Cases with this rare combination of microcephaly and early onset of nephrotic syndrome with various neurological abnormalities have been reported. However, clinical manifestations and histological findings showed a wide variation, and there is a lot of confusion in this syndrome. We therefore reviewed the previous reports and propose a new clasification of this syndrome."}
{"_id": "17598f2061549df6a28c2fca5fe6d244fd9b2ad0", "title": "Learning Parameterized Models of Image Motion", "text": "A f?umeworkfor learning parumeterized models oj\u2018optical flow from image sequences is presented. A class of motions is represented by U set of orthogonul basis $ow fields that are computed froni a training set using priiicipal component analysis. Many complex image motions can be represeiited by a linear combiriution of U small number of these basisjows. The learned motion models may be usedfilr optical flow estimation und for model-bused recognition. For optical flow estimation we describe a robust, multi-resolution scheme for directly computing the parameters of the learned flow models from image derivatives. As examples we consider learning motion discontinuities, non-rigid motion of human mouths, und urticulated human motion."}
{"_id": "c5e01b4bd9e7dbe22a0aaecd1311cede2087052c", "title": "Teaching cryptography with open-source software", "text": "Cryptography has become an important topic in undergraduate curricula in mathematics and computer science, not just for its intrinsic interest---``about the most fun you can have with mathematics''\\cite{ferg04, but for its current standing as the basis for almost all computer security. From wireless networking to secure email to password protection, cryptographic methods are used to secure information, to protect users, and to protect data.\n At Victoria University, cryptography has been taught as part of a mathematics and computer science degree for several years. The students all have had at least a year of tertiary mathematics, and some exposure to a computer algebra system (Maple). However, the cost of Maple, and the current licensing agreement, means that students are unable to experiment with the software away from the computer laboratories at the University. For this reason we have decided to investigate the use of open-source computer algebra systems Maxima and Axiom. Although not as full-featured and powerful as the commercial systems Maple and Mathematica, we show they are in fact admirably suited for a subject such as cryptography. In some ways Maxima and Axiom even surpass Maple and Mathematica. Student response to the introduction of these systems has been very positive."}
{"_id": "1bdc66bf35f7443cea550eb82a691966761f1111", "title": "Web-based models for natural language processing", "text": "Previous work demonstrated that Web counts can be used to approximate bigram counts, suggesting that Web-based frequencies should be useful for a wide variety of Natural Language Processing (NLP) tasks. However, only a limited number of tasks have so far been tested using Web-scale data sets. The present article overcomes this limitation by systematically investigating the performance of Web-based models for several NLP tasks, covering both syntax and semantics, both generation and analysis, and a wider range of n-grams and parts of speech than have been previously explored. For the majority of our tasks, we find that simple, unsupervised models perform better when n-gram counts are obtained from the Web rather than from a large corpus. In some cases, performance can be improved further by using backoff or interpolation techniques that combine Web counts and corpus counts. However, unsupervised Web-based models generally fail to outperform supervised state-of-the-art models trained on smaller corpora. We argue that Web-based models should therefore be used as a baseline for, rather than an alternative to, standard supervised models."}
{"_id": "ad5974c04b316f4f379191e4dbea836fd766f47c", "title": "Large Language Models in Machine Translation", "text": "This paper reports on the benefits of largescale statistical language modeling in machine translation. A distributed infrastructure is proposed which we use to train on up to 2 trillion tokens, resulting in language models having up to 300 billion n-grams. It is capable of providing smoothed probabilities for fast, single-pass decoding. We introduce a new smoothing method, dubbed Stupid Backoff, that is inexpensive to train on large data sets and approaches the quality of Kneser-Ney Smoothing as the amount of training data increases."}
{"_id": "0c739b915d633cc3c162e4ef1e57b796c2dc2217", "title": "VerbOcean: Mining the Web for Fine-Grained Semantic Verb Relations", "text": "Broad-coverage repositories of semantic relations between verbs could benefit many NLP tasks. We present a semi-automatic method for extracting fine-grained semantic relations between verbs. We detect similarity, strength, antonymy, enablement, and temporal happens-before relations between pairs of strongly associated verbs using lexicosyntactic patterns over the Web. On a set of 29,165 strongly associated verb pairs, our extraction algorithm yielded 65.5% accuracy. Analysis of error types shows that on the relation strength we achieved 75% accuracy. We provide the resource, called VERBOCEAN, for download at http://semantics.isi.edu/ocean/."}
{"_id": "12644d51a8ccbdc092ea322907989c098bd16813", "title": "Web question answering: is more always better?", "text": "This paper describes a question answering system that is designed to capitalize on the tremendous amount of data that is now available online. Most question answering systems use a wide variety of linguistic resources. We focus instead on the redundancy available in large corpora as an important resource. We use this redundancy to simplify the query rewrites that we need to use, and to support answer mining from returned snippets. Our system performs quite well given the simplicity of the techniques being utilized. Experimental results show that question answering accuracy can be greatly improved by analyzing more and more matching passages. Simple passage ranking and n-gram extraction techniques work well in our system making it efficient to use with many backend retrieval engines."}
{"_id": "216dcb818763ed3493536d44b0a9e4c8e16738b1", "title": "A Winnow-Based Approach to Context-Sensitive Spelling Correction", "text": "A large class of machine-learning problems in natural language require the characterization of linguistic context. Two characteristic properties of such problems are that their feature space is of very high dimensionality, and their target concepts depend on only a small subset of the features in the space. Under such conditions, multiplicative weight-update algorithms such as Winnow have been shown to have exceptionally good theoretical properties. In the work reported here, we present an algorithm combining variants of Winnow and weighted-majority voting, and apply it to a problem in the aforementioned class: context-sensitive spelling correction. This is the task of fixing spelling errors that happen to result in valid words, such as substituting to for too, casual for causal, and so on. We evaluate our algorithm, WinSpell, by comparing it against BaySpell, a statistics-based method representing the state of the art for this task. We find: (1) When run with a full (unpruned) set of features, WinSpell achieves accuracies significantly higher than BaySpell was able to achieve in either the pruned or unpruned condition; (2) When compared with other systems in the literature, WinSpell exhibits the highest performance; (3) While several aspects of WinSpell's architecture contribute to its superiority over BaySpell, the primary factor is that it is able to learn a better linear separator than BaySpell learns; (4) When run on a test set drawn from a different corpus than the training set was drawn from, WinSpell is better able than BaySpell to adapt, using a strategy we will present that combines supervised learning on the training set with unsupervised learning on the (noisy) test set."}
{"_id": "4478fb662060be76cefacbcd2b1f7529824245a8", "title": "Rule based Autonomous Citation Mining with TIERL", "text": "Citations management is an important task in managing digital libraries. Citations provide valuable information e.g., used in evaluating an author's influences or scholarly quality (the impact factor of research journals). But although a reliable and effective autonomous citation management is essential, manual citation management can be extremely costly. Automatic citation mining on the other hand is a non-trivial task mainly due to non-conforming citation styles, spelling errors and the difficulty of reliably extracting text from PDF documents. In this paper we propose a novel rule-based autonomous citation mining technique, to address this important task. We define a set of common heuristics that together allow to improve the state of the art in automatic citation mining. Moreover, by first disambiguating citations based on venues, our technique significantly enhances the correct discovery of citations. Our experiments show that the proposed approach is indeed able to overcome limitations of current leading citation indexes such as ISI Web of Knowledge , Citeseer and Google Scholar."}
{"_id": "ab200f31fe7c50f8b468cb3ed9274883140167d9", "title": "Feature selection for high-dimensional data", "text": "This paper offers a comprehensive approach to feature selection in the scope of classification problems, explaining the foundations, real application problems and the challenges of feature selection in the context of high-dimensional data. First, we focus on the basis of feature selection, providing a review of its history and basic concepts. Then, we address different topics in which feature selection plays a crucial role, such as microarray data, intrusion detection, or medical applications. Finally, we delve into the open challenges that researchers in the field have to deal with if they are interested to confront the advent of \u201cBig Data\u201d and, more specifically, the \u201cBig Dimensionality\u201d."}
{"_id": "8dbac520e4a89916cab47be23aab46793f39bb28", "title": "Is Memory Schematic ?", "text": "ion Information that has been selected because it is important and/or relevant to the schema is further reduced during the encoding process by abstraction. This process codes the meaning but not the format of a message (e.g., Bobrow, 1970; Bransford, Barclay, & Franks, 1972). Thus, details such as the lexical form of an individual word (e.g., Schank, 1972, 1976) and the syntactic form of a sentence (e.g., Sachs, 1967) will not be preserved in memory. Because memory for syntax appears to be particularly sparce as well as brief (e.g., J. R. Anderson, 1974; Begg & Wickelgren, 1974; Jarvella, 1971; Sachs, 1967, 1974), the abstraction process is thought to operate during encoding. Additional support for the notion that what is stored is an abstracted representation of the original stimulus comes from studies that demonstrate that after a passage is read, it takes subjects the same amount of time to verify information originally presented in a complex linguistic format as it does to verify that same information presented in a simpler format (e.g., King & Greeno, 1974; Kintsch &Monk, 1972). There are a considerable number of theories that assume that memory consists of sets of propositions and their relations (e.g., J. R. Anderson, 1976; Bransford, et al., 1972; Brewer, 1975; Frederiksen, 1975a; Kintsch, 1974; Norman & Rumelhart, 1975; Schank, 1972, 1976). One formalized presentation of this idea is Schank's conceptual dependency theory (1972). The theory asserts that all propositions can be expressed by a small set of primitive concepts. All lexical expressions that share an identical meaning will be represented in one way (and so stored economically) regardless of their presentation format. As a result people should often incorrectly recall or misrecognize synonyms of originally presented words, and they do (e.g., Anderson & Bower, 1973; R. C. Anderson, 1974; Anisfeld & Knapp, 1968; Brewer, 1975; Graesser, 1978b; Sachs, 1974). Abstraction and memory theories. Since considerable detail is lost via the abstraction process, this process can easily account for the incompleteness that is characteristic of people's recall of complex events. In light of the abstraction process, the problem for schema theories becomes one of accounting for accurate recall. Schema theories do this by borrowing a finding from psycholinguistic research, to wit, that speakers of a language share preferred ways of expressing information. If both the creator and perceiver of a message are operating with the same preferences or under the same biases, the perceiver's reproduction of the input may appear to be accurate. The accuracy, however, is the product of recalling the semantic content of the message and imposing the preferred structure onto it. Thus, biases operate in a manner that is similar to the \"probable detail\" reconstruction process. Biases have been documented for both syntactic information (J. R. Anderson, 1974; Bock, 1977; Bock & Brewer, 1974; Clark & Clark, 1968; James, Thompson, & Baldwin, 1973) and lexical information (Brewer, 1975; Brewer & Lichtenstein, 1974). Distortions may result from the abstraction process if biases are not shared by the person who creates the message and the one who receives it. More importantly, the ab-"}
{"_id": "3722384bd171b2245102ad2d99f2d0fd230910c9", "title": "Effects of a 7-Day Meditation Retreat on the Brain Function of Meditators and Non-Meditators During an Attention Task", "text": "Meditation as a cognitive enhancement technique is of growing interest in the field of health and research on brain function. The Stroop Word-Color Task (SWCT) has been adapted for neuroimaging studies as an interesting paradigm for the understanding of cognitive control mechanisms. Performance in the SWCT requires both attention and impulse control, which is trained in meditation practices. We presented SWCT inside the MRI equipment to measure the performance of meditators compared with non-meditators before and after a meditation retreat. The aim of this study was to evaluate the effects of a 7-day Zen intensive meditation training (a retreat) on meditators and non-meditators in this task on performance level and neural mechanisms. Nineteen meditators and 14 non-meditators were scanned before and after a 7-day Zen meditation retreat. No significant differences were found between meditators and non-meditators in the number of the correct responses and response time (RT) during SWCT before and after the retreat. Probably, due to meditators training in attention, their brain activity in the contrast incongruent > neutral during the SWCT in the anterior cingulate, ventromedial prefrontal cortex/anterior cingulate, caudate/putamen/pallidum/temporal lobe (center), insula/putamen/temporal lobe (right) and posterior cingulate before the retreat, were reduced compared with non-meditators. After the meditation retreat, non-meditators had reduced activation in these regions, becoming similar to meditators before the retreat. This result could be interpreted as an increase in the brain efficiency of non-meditators (less brain activation in attention-related regions and same behavioral response) promoted by their intensive training in meditation in only 7 days. On the other hand, meditators showed an increase in brain activation in these regions after the same training. Intensive meditation training (retreat) presented distinct effects on the attention-related regions in meditators and non-meditators probably due to differences in expertise, attention processing as well as neuroplasticity."}
{"_id": "3e832919bf4156dd2c7c191acbbc714535f1d8c2", "title": "LOMIT: LOCAL MASK-BASED IMAGE-TO-IMAGE TRANSLATION VIA PIXEL-WISE HIGHWAY ADAPTIVE INSTANCE NORMALIZATION", "text": "Recently, image-to-image translation has seen a significant success. Among many approaches, image translation based on an exemplar image, which contains the target style information, has been popular, owing to its capability to handle multimodality as well as its suitability for practical use. However, most of the existing methods extract the style information from an entire exemplar and apply it to the entire input image, which introduces excessive image translation in irrelevant image regions. In response, this paper proposes a novel approach that jointly extracts out the local masks of the input image and the exemplar as targeted regions to be involved for image translation. In particular, the main novelty of our model lies in (1) co-segmentation networks for local mask generation and (2) the local mask-based highway adaptive instance normalization technique. We demonstrate the quantitative and the qualitative evaluation results to show the advantages of our proposed approach. Finally, the code is available at https://github.com/AnonymousIclrAuthor/ Highway-Adaptive-Instance-Normalization."}
{"_id": "bbeacfbe469913de3549a8839abed0cb4415675e", "title": "Designing Algorithms To Aid Discovery by Chemical\nRobots", "text": "Recently, automated robotic systems have become very efficient, thanks to improved coupling between sensor systems and algorithms, of which the latter have been gaining significance thanks to the increase in computing power over the past few decades. However, intelligent automated chemistry platforms for discovery orientated tasks need to be able to cope with the unknown, which is a profoundly hard problem. In this Outlook, we describe how recent advances in the design and application of algorithms, coupled with the increased amount of chemical data available, and automation and control systems may allow more productive chemical research and the development of chemical robots able to target discovery. This is shown through examples of workflow and data processing with automation and control, and through the use of both well-used and cutting-edge algorithms illustrated using recent studies in chemistry. Finally, several algorithms are presented in relation to chemical robots and chemical intelligence for knowledge discovery."}
{"_id": "6cb45af3db1de2ba5466aedcb698deb6c4bb4678", "title": "CS 224N Assignment 4: Question Answering on SQuAD", "text": "In this project, we are interested in building an end-to-end neural network architecture for the Question Answering task on the well-known Stanford Question Answering Dataset (SQuAD). Our implementation is motivated from a recent highperformance achieving method that combines coattention encoder with a dynamic pointing decoder known as Dynamic Coattention Network. We explored different ensemble and test decoding techniques that we believe might improve the performance of such systems."}
{"_id": "55d87169355378ab5bded7488c899dc0069219dd", "title": "Neural Text Generation: A Practical Guide", "text": "Deep learning methods have recently achieved great empirical success on machine translation, dialogue response generation, summarization, and other text generation tasks. At a high level, the technique has been to train end-to-end neural network models consisting of an encoder model to produce a hidden representation of the source text, followed by a decoder model to generate the target. While such models have significantly fewer pieces than earlier systems, significant tuning is still required to achieve good performance. For text generation models in particular, the decoder can behave in undesired ways, such as by generating truncated or repetitive outputs, outputting bland and generic responses, or in some cases producing ungrammatical gibberish. This paper is intended as a practical guide for resolving such undesired behavior in text generation models, with the aim of helping enable real-world applications."}
{"_id": "25e2a24478c8c8c1b26482fa06d3a4fb445303e5", "title": "Compatibility Is Not Transparency: VMM Detection Myths and Realities", "text": "Recent work on applications ranging from realistic honeypots to stealthier rootkits has speculated about building transparent VMMs \u2013 VMMs that are indistinguishable from native hardware, even to a dedicated adversary. We survey anomalies between real and virtual hardware and consider methods for detecting such anomalies, as well as possible countermeasures. We conclude that building a transparent VMM is fundamentally infeasible, as well as impractical from a performance and engineering standpoint."}
{"_id": "48c26d5edecb484ec1c34c2b148a1c843ab24327", "title": "Textual resource acquisition and engineering", "text": "and engineering J. Chu-Carroll J. Fan N. Schlaefer W. Zadrozny A key requirement for high-performing question-answering (QA) systems is access to high-quality reference corpora from which answers to questions can be hypothesized and evaluated. However, the topic of source acquisition and engineering has received very little attention so far. This is because most existing systems were developed under organized evaluation efforts that included reference corpora as part of the task specification. The task of answering Jeopardy!i questions, on the other hand, does not come with such a well-circumscribed set of relevant resources. Therefore, it became part of the IBM Watsoni effort to develop a set of well-defined procedures to acquire high-quality resources that can effectively support a high-performing QA system. To this end, we developed three procedures, i.e., source acquisition, source transformation, and source expansion. Source acquisition is an iterative development process of acquiring new collections to cover salient topics deemed to be gaps in existing resources based on principled error analysis. Source transformation refers to the process in which information is extracted from existing sources, either as a whole or in part, and is represented in a form that the system can most easily use. Finally, source expansion attempts to increase the coverage in the content of each known topic by adding new information as well as lexical and syntactic variations of existing information extracted from external large collections. In this paper, we discuss the methodology that we developed for IBM Watson for performing acquisition, transformation, and expansion of textual resources. We demonstrate the effectiveness of each technique through its impact on candidate recall and on end-to-end QA performance."}
{"_id": "b321c6d6059e448d85e8ffb5cb2318cf7f2e9ebc", "title": "The definition and classification of glaucoma in prevalence surveys.", "text": "This review describes a scheme for diagnosis of glaucoma in population based prevalence surveys. Cases are diagnosed on the grounds of both structural and functional evidence of glaucomatous optic neuropathy. The scheme also makes provision for diagnosing glaucoma in eyes with severe visual loss where formal field testing is impractical, and for blind eyes in which the optic disc cannot be seen because of media opacities."}
{"_id": "3d29f938094672cb45a119e8b1a08e299672f7b6", "title": "Empathy circuits", "text": "The social neuroscientific investigation of empathy has revealed that the same neural networks engaged during first-hand experience of affect subserve empathic responses. Recent meta-analyses focusing on empathy for pain for example reliably identified a network comprising anterior insula and anterior midcingulate cortex. Moreover, recent studies suggest that the generation of empathy is flexibly supported by networks involved in action simulation and mentalizing depending on the information available in the environment. Further, empathic responses are modulated by many factors including the context they occur in. Recent work shows how this modulation can be afforded by the engagement of antagonistic motivational systems or by cognitive control circuits, and these modulatory systems can also be employed in efforts to regulate one's empathic responses."}
{"_id": "511c905fcd908e46116c37dbb650e5609d4b3a14", "title": "Performance analysis of two open source intrusion detection systems", "text": "Several studies have been conducted where authors compared the performance of open source Intrusion detection systems, namely Snort and Suricata. However, most studies were limited to either security indicators or performance measurements under the same operating system. The objective of this study is to give a comprehensive analysis of both products in terms of several security related and performance related indicators. In addition, we tested the products under two different operating systems. Several experiments were run to evaluate the effects of open source intrusion detection and prevention systems Snort and Suricata, operating systems Windows, Linux and various attack types on system resource usage, dropped packets rate and ability to detect intrusions. The results show that Suricata has a higher CPU and RAM utilization than Snort in all cases on both operating systems, but lower percentage of dropped packets when evaluated during five of six simulated attacks. Both products had the same number of correctly identified intrusions. The results show that Linux-based solutions consume more system resources, but Windows-based systems had a higher rate of dropped packets. This indicates that these two intrusion detection and prevention systems should be run on Linux. However, both systems are inappropriate for high volumes of traffic in single-server setting."}
{"_id": "211855f1de279c452858177331860cbc326351ab", "title": "Designing Neural Networks Using Genetic Algorithms with Graph Generation System", "text": "We present a new method of designing neural networks using the genetic algorithm. Recently there have been several reports claiming attempts to design neural networks using genetic algorithms were successful. However, these methods have a problem in scalability, i.e., the convergence characteristic degrades significantly as the size of the network increases. This is because these methods employ direct mapp ing of chromosomes into network connectivities. As an alternative approach, we propose a graph grammatical encoding that will encode graph generation grammar to the chromosome so that it generates more regular connectivity patterns with shorter chromosome length. Experimental results support that our new scheme provides magnitude of speedup in convergence of neural network design and exhibits desirable scaling property."}
{"_id": "b91f7fe7a54159663946e5a88937af4e268edbb4", "title": "Unbiased classification of sensory neuron types by large-scale single-cell RNA sequencing", "text": "The primary sensory system requires the integrated function of multiple cell types, although its full complexity remains unclear. We used comprehensive transcriptome analysis of 622 single mouse neurons to classify them in an unbiased manner, independent of any a priori knowledge of sensory subtypes. Our results reveal eleven types: three distinct low-threshold mechanoreceptive neurons, two proprioceptive, and six principal types of thermosensitive, itch sensitive, type C low-threshold mechanosensitive and nociceptive neurons with markedly different molecular and operational properties. Confirming previously anticipated major neuronal types, our results also classify and provide markers for new, functionally distinct subtypes. For example, our results suggest that itching during inflammatory skin diseases such as atopic dermatitis is linked to a distinct itch-generating type. We demonstrate single-cell RNA-seq as an effective strategy for dissecting sensory responsive cells into distinct neuronal types. The resulting catalog illustrates the diversity of sensory types and the cellular complexity underlying somatic sensation."}
{"_id": "5bc7ddd557b655eb1115a354550254dfd0b0d826", "title": "Learning Semantically Coherent and Reusable Kernels in Convolution Neural Nets for Sentence Classification", "text": "The purpose of this work is to empirically study desirable properties such as semantic coherence, attention mechanism and kernel reusability in Convolution Neural Networks (CNNs) for learning sentence classification tasks. We propose to learn semantically coherent kernels using clustering scheme combined with Word2Vec representation and domain knowledge such as SentiWordNet. We also suggest a technique to visualize attention mechanism of CNNs. These ideas are useful for decision explanation purpose. Reusable property enables kernels learned on one problem to be used in another problem. This helps in efficient learning as only a few additional domain specific kernels may have to be learned. Experimental results demonstrate the usefulness of our approach. The performance of the proposed approach, which uses semantic and re-usability properties, is close to that of the state-of-the-art approaches on many real-world datasets."}
{"_id": "09702e1165eeac3504faf7a32065ea4c08aad467", "title": "Standardization of the in-car gesture interaction space", "text": "Driven by technological advancements, gesture interfaces have recently found their way into vehicular prototypes of various kind. Unfortunately, their application is less than perfect and detailed information about preferred gesture execution regions, spatial extent, and time behavior are not available yet. Providing car (interior) manufacturer with gesture characteristics would allow them to design future in-vehicle concepts in a way to not interfere with gestural interaction. To tackle the problem, this research aims as preliminary work for a later standardization of the diverse properties of gestures and gesture classes similarly to what is already standardized in norms such as ISO 3958/4040 for placement and reachability of traditional controls and indicators. We have set up a real driving experiment recording trajectories and time behavior of gestures related to car and media control tasks. Data evaluation reveals that most of the subjects perform gestures in the same region (bounded by a \"triangle\" steering wheel, rear mirror, and gearshift) and with similar spatial extent (on average below 2 sec.). The generated density plots can be further used for an initial discussion about gesture execution in the passenger compartment. The final aim is to propose a new standard on permitted gesture properties (time, space) in the car."}
{"_id": "fd86074685e3da33e31b36914f2d4ccf64821491", "title": "The Modified Abbreviated Math Anxiety Scale: A Valid and Reliable Instrument for Use with Children", "text": "Mathematics anxiety (MA) can be observed in children from primary school age into the teenage years and adulthood, but many MA rating scales are only suitable for use with adults or older adolescents. We have adapted one such rating scale, the Abbreviated Math Anxiety Scale (AMAS), to be used with British children aged 8-13. In this study, we assess the scale's reliability, factor structure, and divergent validity. The modified AMAS (mAMAS) was administered to a very large (n = 1746) cohort of British children and adolescents. This large sample size meant that as well as conducting confirmatory factor analysis on the scale itself, we were also able to split the sample to conduct exploratory and confirmatory factor analysis of items from the mAMAS alongside items from child test anxiety and general anxiety rating scales. Factor analysis of the mAMAS confirmed that it has the same underlying factor structure as the original AMAS, with subscales measuring anxiety about Learning and Evaluation in math. Furthermore, both exploratory and confirmatory factor analysis of the mAMAS alongside scales measuring test anxiety and general anxiety showed that mAMAS items cluster onto one factor (perceived to represent MA). The mAMAS provides a valid and reliable scale for measuring MA in children and adolescents, from a younger age than is possible with the original AMAS. Results from this study also suggest that MA is truly a unique construct, separate from both test anxiety and general anxiety, even in childhood."}
{"_id": "2cf3b8d8865fdeab8e251ef803ecde1ddbf6f6a3", "title": "Using RDF Summary Graph For Keyword-based Semantic Searches", "text": "The Semantic Web began to emerge as its standards and technologies developed rapidly in the recent years. The continuing development of Semantic Web technologies has facilitated publishing explicit semantics with data on the Web in RDF data model. This study proposes a semantic search framework to support efficient keyword-based semantic search on RDF data utilizing near neighbor explorations. The framework augments the search results with the resources in close proximity by utilizing the entity type semantics. Along with the search results, the system generates a relevance confidence score measuring the inferred semantic relatedness of returned entities based on the degree of similarity. Furthermore, the evaluations assessing the effectiveness of the framework and the accuracy of the results are presented."}
{"_id": "11ad7734bbb81e901f2e59b73456324b299d8980", "title": "Localization for mobile sensor networks", "text": "Many sensor network applications require location awareness, but it is often too expensive to include a GPS receiver in a sensor network node. Hence, localization schemes for sensor networks typically use a small number of seed nodes that know their location and protocols whereby other nodes estimate their location from the messages they receive. Several such localization techniques have been proposed, but none of them consider mobile nodes and seeds. Although mobility would appear to make localization more difficult, in this paper we introduce the sequential Monte Carlo Localization method and argue that it can exploit mobility to improve the accuracy and precision of localization. Our approach does not require additional hardware on the nodes and works even when the movement of seeds and nodes is uncontrollable. We analyze the properties of our technique and report experimental results from simulations. Our scheme outperforms the best known static localization schemes under a wide range of conditions."}
{"_id": "e11d5a4edec55f5d5dc8ea25621ecbf89e9bccb7", "title": "Taxonomy and Survey of Collaborative Intrusion Detection", "text": "The dependency of our society on networked computers has become frightening: In the economy, all-digital networks have turned from facilitators to drivers; as cyber-physical systems are coming of age, computer networks are now becoming the central nervous systems of our physical world\u2014even of highly critical infrastructures such as the power grid. At the same time, the 24/7 availability and correct functioning of networked computers has become much more threatened: The number of sophisticated and highly tailored attacks on IT systems has significantly increased. Intrusion Detection Systems (IDSs) are a key component of the corresponding defense measures; they have been extensively studied and utilized in the past. Since conventional IDSs are not scalable to big company networks and beyond, nor to massively parallel attacks, Collaborative IDSs (CIDSs) have emerged. They consist of several monitoring components that collect and exchange data. Depending on the specific CIDS architecture, central or distributed analysis components mine the gathered data to identify attacks. Resulting alerts are correlated among multiple monitors in order to create a holistic view of the network monitored. This article first determines relevant requirements for CIDSs; it then differentiates distinct building blocks as a basis for introducing a CIDS design space and for discussing it with respect to requirements. Based on this design space, attacks that evade CIDSs and attacks on the availability of the CIDSs themselves are discussed. The entire framework of requirements, building blocks, and attacks as introduced is then used for a comprehensive analysis of the state of the art in collaborative intrusion detection, including a detailed survey and comparison of specific CIDS approaches."}
{"_id": "3fa2b1ea36597f2b3055844dcf505bacd884f437", "title": "Confabulation based sentence completion for machine reading", "text": "Sentence completion and prediction refers to the capability of filling missing words in any incomplete sentences. It is one of the keys to reading comprehension, thus making sentence completion an indispensible component of machine reading. Cogent confabulation is a bio-inspired computational model that mimics the human information processing. The building of confabulation knowledge base uses an unsupervised machine learning algorithm that extracts the relations between objects at the symbolic level. In this work, we propose performance improved training and recall algorithms that apply the cogent confabulation model to solve the sentence completion problem. Our training algorithm adopts a two-level hash table, which significantly improves the training speed, so that a large knowledge base can be built at relatively low computation cost. The proposed recall function fills missing words based on the sentence context. Experimental results show that our software can complete trained sentences with 100% accuracy. It also gives semantically correct answers to more than two thirds of the testing sentences that have not been trained before."}
{"_id": "bf3302bec256bddceabda4d5185ec289ac38ea9f", "title": "Seismic Data Classification Using Machine Learning", "text": "Earthquakes around the world have been a cause of major destruction and loss of life and property. An early detection and prediction system using machine learning classification models can prove to be very useful for disaster management teams. The earthquake stations continuously collect data even when there is no event. From this data, we need to distinguish earthquake and non-earthquake. Machine learning techniques can be used to analyze continuous time series data to detect earthquakes effectively. Furthermore, the earthquake data can be used to predict the P-wave and S-wave arrival times."}
{"_id": "8552f6e3f73db564a2e625cceb1d1348d70b598c", "title": "Learning Compact Appearance Representation for Video-based Person Re-Identification", "text": "This paper presents a novel approach for video-based person re-identification using multiple Convolutional Neural Networks (CNNs). Unlike previous work, we intend to extract a compact yet discriminative appearance representation from several frames rather than the whole sequence. Specifically, given a video, the representative frames are selected based on the walking profile of consecutive frames. A multiple CNN architecture incorporated with feature pooling is proposed to learn and compile the features of the selected representative frames into a compact description about the pedestrian for identification. Experiments are conducted on benchmark datasets to demonstrate the superiority of the proposed method over existing person re-identification approaches."}
{"_id": "83a281e049de09f5b2e667786125da378fd7a14c", "title": "Extracting Social Network and Character Categorization From Bengali Literature", "text": "Literature network analysis is an emerging area in the computational research domain. Literature network is a type of social network with various distinct features. The analysis explores significance of human behavior and complex social relationships. The story consists of some characters and creates an interconnected social system. Each character of the literature represents a node and the edge between any two nodes offered the interaction between them. An annotation and a novel character categorization method are developed to extract interactive social network from the Bengali drama. We analyze Raktakarabi and Muktodhara, two renowned Bengali dramas of Rabindranath Tagore. Weighted degree, closeness, and betweenness centrality analyze the correlation among the characters. We propose an edge contribution-based centrality and diversity metric of a node to determine the influence of one character over others. High diverse nodes show low clustering coefficient and vice versa. We propose a novel idea to analyze the characteristics of protagonist and antagonist from the influential nodes based on the complex graph. We also present a game theory-based community detection method that clusters the actors with a high degree of relationship. Evaluation on real-world networks demonstrates the superiority of the proposed method over the other existing algorithms. Interrelationship of the actors within the drama is also shown from the detected communities, as underlying theme of the narrations is identical. The analytical results show that our method efficiently finds the protagonist and antagonist from the literature network. The method is unique, and the analytical results are more accurate and unbiased than the human perspective. Our approach establishes similar results compared with the benchmark analysis available in Tagore\u2019s Bengali literature."}
{"_id": "1a54a8b0c7b3fc5a21c6d33656690585c46ca08b", "title": "Fast Feature Pyramids for Object Detection", "text": "Multi-resolution image features may be approximated via extrapolation from nearby scales, rather than being computed explicitly. This fundamental insight allows us to design object detection algorithms that are as accurate, and considerably faster, than the state-of-the-art. The computational bottleneck of many modern detectors is the computation of features at every scale of a finely-sampled image pyramid. Our key insight is that one may compute finely sampled feature pyramids at a fraction of the cost, without sacrificing performance: for a broad family of features we find that features computed at octave-spaced scale intervals are sufficient to approximate features on a finely-sampled pyramid. Extrapolation is inexpensive as compared to direct feature computation. As a result, our approximation yields considerable speedups with negligible loss in detection accuracy. We modify three diverse visual recognition systems to use fast feature pyramids and show results on both pedestrian detection (measured on the Caltech, INRIA, TUD-Brussels and ETH data sets) and general object detection (measured on the PASCAL VOC). The approach is general and is widely applicable to vision algorithms requiring fine-grained multi-scale analysis. Our approximation is valid for images with broad spectra (most natural images) and fails for images with narrow band-pass spectra (e.g., periodic textures)."}
{"_id": "6c5c61bc780b9a696ef72fb8f27873fa7ae33215", "title": "Fast Explicit Diffusion for Accelerated Features in Nonlinear Scale Spaces", "text": "We propose a novel and fast multiscale feature detection and description approach that exploits the benefits of nonlinear scale spaces. Previous attempts to detect and describe features in nonlinear scale spaces are highly time consuming due to the computational burden of creating the nonlinear scale space. In this paper we propose to use recent numerical schemes called Fast Explicit Diffusion (FED) embedded in a pyramidal framework to dramatically speed-up feature detection in nonlinear scale spaces. In addition, we introduce a Modified-Local Difference Binary (M-LDB) descriptor that is highly efficient, exploits gradient information from the nonlinear scale space, is scale and rotation invariant and has low storage requirements. We present an extensive evaluation that shows the excellent compromise between speed and performance of our approach compared to state-of-the-art methods such as BRISK, ORB, SURF, SIFT and KAZE."}
{"_id": "d716ea2b7975385d770162230d253d9a69ed6e07", "title": "Polynomial Eigenvalue Solutions to the 5-pt and 6-pt Relative Pose Problems", "text": "In this paper we provide new fast and simple solutions to two important minimal problems in computer vision, the five-point relative pose problem and the six-point focal length problem. We show that these two problems can easily be formulated as polynomial eigenvalue problems of degree three and two and solved using standard efficient numerical algorithms. Our solutions are somewhat more stable than state-of-the-art solutions by Nister and Stewenius and are in some sense more straightforward and easier to implement since polynomial eigenvalue problems are well studied with many efficient and robust algorithms available. The quality of the solvers is demonstrated in experiments 1."}
{"_id": "128477ed907857a8ae96cfd25ea6b2bef74cf827", "title": "Multi-image matching using multi-scale oriented patches", "text": "This paper describes a novel multi-view matching framework based on a new type of invariant feature. Our features are located at Harris corners in discrete scale-space and oriented using a blurred local gradient. This defines a rotationally invariant frame in which we sample a feature descriptor, which consists of an 8 /spl times/ 8 patch of bias/gain normalised intensity values. The density of features in the image is controlled using a novel adaptive non-maximal suppression algorithm, which gives a better spatial distribution of features than previous approaches. Matching is achieved using a fast nearest neighbour algorithm that indexes features based on their low frequency Haar wavelet coefficients. We also introduce a novel outlier rejection procedure that verifies a pairwise feature match based on a background distribution of incorrect feature matches. Feature matches are refined using RANSAC and used in an automatic 2D panorama stitcher that has been extensively tested on hundreds of sample inputs."}
{"_id": "38fffdbb23a55d7e15fcb81fee69a5a644a1334d", "title": "The Option-Critic Architecture", "text": "We first studied the behavior of the intra-option policy gradient algorithm when the initiation sets and subgoals are fixed by hand. In this case, options terminate with probability 0.9 in a hallway state and four of its incoming neighboring states. We chose to parametrize the intra-option policies using the softmax distribution with a one-hot encoding of state-action pairs as basis functions. Learning both intra-option policies and terminations"}
{"_id": "3f0196d0dd25ee48d3ed0f8f112c60cf9fdaf41d", "title": "Dark vs. Dim Silicon and Near-Threshold Computing", "text": "Due to limited scaling of supply voltage, power density is expected to grow in future technology nodes. This increasing power density potentially limits the number of transistors switching at full speed in the future. Near-threshold operation can increase the number of simultaneously active cores, at the expense of much lower operating frequency (\u201cdim silicon\u201d). Although promising to increase overall throughput, dim cores suffer from diminishing returns as the number of cores increases. At this point, hardware accelerators become more efficient alternatives. To explore such a broad design space, we have developed a framework called Lumos to analytically quantify the performance limits of many-core, heterogeneous systems operating at near-threshold voltage. Lumos augments Amdahl\u2019s Law with detailed scaling of frequency and power, calibrated by circuit-level simulations using a modified Predictive Technology Model (PTM) and factors in effects of process variations. While our results show that dim cores do indeed boost throughput, even in the presence of process variations, significant benefits are only achieved in applications with very high parallelism or with novel architectures to mitigate variation. A more beneficial 1 and scalable approach is to use accelerators. However, reconfigurable logic that supports a variety of accelerators is more beneficial than a dedicated, fixed-logic accelerator, unless 1) the dedicated kernel has overwhelming coverage across applications (e.g. twice as large as the total of all others), or 2) the speedup of the dedicated accelerator over the reconfigurable equivalent is significant (e.g. 10x-50x)."}
{"_id": "11ee2c34abc0262364d76e94ed7be7d7c885d197", "title": "AmiGO: online access to ontology and annotation data", "text": "AmiGO is a web application that allows users to query, browse and visualize ontologies and related gene product annotation (association) data. AmiGO can be used online at the Gene Ontology (GO) website to access the data provided by the GO Consortium; it can also be downloaded and installed to browse local ontologies and annotations. AmiGO is free open source software developed and maintained by the GO Consortium."}
{"_id": "20a152d5e7dbd6523418d591b1700f09bbb8c752", "title": "Stress and hippocampal plasticity.", "text": "The hippocampus is a target of stress hormones, and it is an especially plastic and vulnerable region of the brain. It also responds to gonadal, thyroid, and adrenal hormones, which modulate changes in synapse formation and dendritic structure and regulate dentate gyrus volume during development and in adult life. Two forms of structural plasticity are affected by stress: Repeated stress causes atrophy of dendrites in the CA3 region, and both acute and chronic stress suppresses neurogenesis of dentate gyrus granule neurons. Besides glucocorticoids, excitatory amino acids and N-methyl-D-aspartate (NMDA) receptors are involved in these two forms of plasticity as well as in neuronal death that is caused in pyramidal neurons by seizures and by ischemia. The two forms of hippocampal structural plasticity are relevant to the human hippocampus, which undergoes a selective atrophy in a number of disorders, accompanied by deficits in declarative episodic, spatial, and contextual memory performance. It is important, from a therapeutic standpoint, to distinguish between a permanent loss of cells and a reversible atrophy."}
{"_id": "2d2ea08f87d6b29119ea8b896689005dbea96f15", "title": "Conservative treatment of carpal tunnel syndrome: comparison between laser therapy and Fascial Manipulation(\u00ae).", "text": "The etiopathogenesis of Carpal Tunnel Syndrome (CTS) is multifactorial and most cases are classified as idiopathic (Thurston 2013). A randomized controlled trial was performed to compare the effectiveness of Fascial Manipulation(\u00ae) (FM) and Low-Level Laser Therapy (LLLT) for CTS. This prospective trial included 42 patients (70 hands with symptoms) with clinical and electroneuromyographic diagnosis of CTS. The patients were randomly assigned to receive multiple sessions of FM or multiple session of LLLT. The Visual Analogic Scale (VAS) and Boston Carpal Tunnel Questionnaire (BCTQ) were performed at baseline, end of treatment and after three months. The group that received FM showed a significant reduction in subjective pain perception and an increased function assessed by BCTQ at the end of the treatment and follow-up. The group that received LLLT showed an improvement in the BCTQ at the end of the treatment but the improvement level was not sustained at the three month follow-up. FM is a valid alternative treatment for CTS."}
{"_id": "94b73b4bfcee507cc73937cd802e96dd91ae2790", "title": "An OFDMA based multiple access protocol with QoS guarantee for next generation WLAN", "text": "To provide better QoS guarantee for the next generation WLAN, IEEE 802.11ax task group is founded in March 2014. As a promising technology to accommodate multiple nodes concurrent transmissions in dense deployment scenario, orthogonal frequency division multiple access (OFDMA) will be adopted in IEEE 802.11ax with great possibility. In this paper, an OFDMA based multiple access protocol with QoS guarantee is proposed for the next generation WLAN. Firstly, a redundant access mechanism is given to increase the access success probability of the video traffic where the video stations can concurrently send multiple RTS packets in multiple subchannels. Secondly, a priority based resource allocation scheme is presented to let AP allocate more resources to the video stations. Simulation results show that our protocol outperforms the existing OFDMA based multiple access for IEEE 802.11ax (OMAX) protocol in terms of delay and delay jitter of video traffic in dense deployment scenario."}
{"_id": "f73038f3e683f87b7d360de982effb718a39a668", "title": "10.6 THz figure-of-merit phase-change RF switches with embedded micro-heater", "text": "We report on GeTe-based phase-change RF switches with embedded micro-heater for thermal switching. With heater parasitics reduced, GeTe RF switches show onstate resistance of 0.05 ohm*mm and off-state capacitance of 0.3 pF/mm. The RF switch figure-of-merit is estimated to be 10.6 THz, which is about 15 times better than state-of-the-art silicon-on-insulator switches. With on-state resistance of 1 ohm and off-state capacitance of 15 fF, RF insertion loss was measured at <;0.2 dB, and isolation was >25 dB at 20 GHz, respectively. RF power handling was >5.6 W for both onand off-state of GeTe."}
{"_id": "d9128cef2a167a5aca31def620d8aa46358509f1", "title": "The Mediator complex: a central integrator of transcription", "text": "The RNA polymerase II (Pol II) enzyme transcribes all protein-coding and most non-coding RNA genes and is globally regulated by Mediator \u2014 a large, conformationally flexible protein complex with a variable subunit composition (for example, a four-subunit cyclin-dependent kinase 8 module can reversibly associate with it). These biochemical characteristics are fundamentally important for Mediator's ability to control various processes that are important for transcription, including the organization of chromatin architecture and the regulation of Pol II pre-initiation, initiation, re-initiation, pausing and elongation. Although Mediator exists in all eukaryotes, a variety of Mediator functions seem to be specific to metazoans, which is indicative of more diverse regulatory requirements."}
{"_id": "cd373a4bb33e7a123f69de3d0ff697a0388cd2b8", "title": "From information security to ... business security?", "text": "This short opinion paper argues that information security, the discipline responsible for protecting a company's information assets against business risks, has now become such a crucial component of good Corporate Governance, that it should rather be called Business Security instead of Information Security. During the last year or two, driven by developments in the field of Corporate Governance, including IT Governance, it became apparent that the scope of Information Security is much wider than just (directly) protecting the data, information and software of a business. Such data, information and software had become invaluable assets of the business as a whole, and not properly protecting this information could have profound business and legal implications. Basically, the data and information of the business became its 'life blood', and compromising this life blood, could kill the business. Executive Management and Boards started realizing that Information Security Governance was becoming their direct responsibility, and that serious personal consequences, specifically legally, could flow from ignoring information security. Information security governance had become an"}
{"_id": "34dbcb9b88ef298079b4632ec25e171895c6e7fe", "title": "Support Vector Machines For Synthetic Aperture Radar Automatic Target Recognition", "text": "Algorithms that produce classifiers with large margins, such as support vector machines (SVMs), AdaBoost, etc. are receiving more and more attention in the literature. This paper presents a real application of SVMs for synthetic aperture radar automatic target recognition (SAR/ATR) and compares the result with conventional classifiers. The SVMs are tested for classification both in closed and open sets (recognition). Experimental results showed that SVMs outperform conventional classifiers in target classification. Moreover, SVMs with the Gaussian kernels are able to form a local \" bounded \" decision region around each class that presents better rejection to confusers."}
{"_id": "713a8b22d15359e3e8380ee81da6de633eed9c8b", "title": "A SiC-Based High-Efficiency Isolated Onboard PEV Charger With Ultrawide DC-Link Voltage Range", "text": "In LLC-based onboard battery charging architectures used in plug-in electric vehicles (PEV), the dc link voltage can be actively regulated to follow the battery pack voltage so that the LLC converter can operate in proximity of resonant frequency and achieve high efficiencies over the wide range of battery pack voltage. However, conventional boost-type power factor correction (PFC) converters are unable to provide ultrawide dc link voltages since their output voltages should always be larger than their input voltages. This paper proposes a Silicon Carbide (SiC)-based onboard PEV charger using single-ended primary-inductor converter (SEPIC) PFC converter followed by an isolated LLC resonant converter. With the proposed charger architecture, the SEPIC PFC converter is able to provide an ultrawide range for dc link voltage, and consequently enhance the efficiency of the LLC stage by ensuring operation in proximity of resonant frequency. A 1-kW SiC-based prototype is designed to validate the proposed idea. The experimental result shows that the SEPIC PFC converter achieves unity power factor, 2.72% total harmonic distortion, and 95.3% peak conversion efficiency. The LLC converter achieves 97.1% peak efficiency and always demonstrates a very high efficiency across the ultrawide dc-link voltage range. The overall efficiency of the charger is 88.5% to 93.5% from 20% of the rated load to full load."}
{"_id": "720158a53b79667e39c2caf2f7ebb2670b848693", "title": "EKG-based key agreement in Body Sensor Networks", "text": "Preserving a person's privacy in an efficient manner is very important for critical, life-saving infrastructures like body sensor networks (BSN). This paper presents a novel key agreement scheme which allows two sensors in a BSN to agree to a common key generated using electrocardiogram (EKG) signals. This EKG-based key agreement (EKA) scheme aims to bring the \"plug-n-play\" paradigm to BSN security whereby simply deploying sensors on the subject can enable secure communication, without requiring any form of initialization such as pre-deployment. Analysis of the scheme based on real EKG data (obtained from MIT PhysioBank database) shows that keys resulting from EKA are: random, time variant, can be generated based on short-duration EKG measurements, identical for a given subject and different for separate individuals."}
{"_id": "e29a55474198b61590dfe20399d04f9f5b6ee7c9", "title": "Wireless access monitoring and control system based on digital door lock", "text": "We propose a novel wireless access monitoring and control system based on the digital door lock, which is explosively used as a digital consumer device. Digital door lock is an electronic locking system operated by the combination of digital key, security password or number codes. This paper presents a prototype of the proposed system and shows a scheme for the implementation. To implement the system with ZigBee network protocol, four types of modules are developed, ZigBee module, digital door lock module, human detection module, and ZigBee relay module. ZigBee module is designed to support wireless sensor network and also used for the ZigBee tag to identify the access objects. Digital door lock module is implemented as a digital consumer device to control the access system as well as locking system. It is very convenient system for the consumers and has extensible and flexible characteristics. That is, it can be used as a home security system by the ZigBee network with additional sensor devices. Therefore, it can be a good practical product for the realization of an access monitoring and control system. It also can be applied to the real market for home networking system. Furthermore, the system can be extended to another service such as a connection between mobile phone and home networking system."}
{"_id": "f692c692d3426cc663f3ec9be0c7025b670b2e5c", "title": "Code Conjurer: Pulling Reusable Software out of Thin Air", "text": "For many years, the IT industry has sought to accelerate the software development process by assembling new applications from existing software assets. However, true component-based reuse of the form Douglas Mcllroy envisaged in the 1960s is still the exception rather than the rule, and most of the systematic software reuse practiced today uses heavyweight approaches such as product-line engineering or domain-specific frameworks. By component, we mean any cohesive and compact unit of software functionality with a well-defined interface - from simple programming language classes to more complex artifacts such as Web services and Enterprise JavaBeans."}
{"_id": "8027ec37cbcc4bd5b87a608a2003db37f4d40385", "title": "Synthetic aperture radar at millimeter wavelength for UAV surveillance applications", "text": "The airborne monitoring of scenes using unmanned aircrafts is becoming increasingly important. Several types of airborne sensors - in the optical, infrared or millimeter wave spectrum - are available for the different platforms. Beside the all-weather suitability of the sensors, the deployment scenarios, often also demand for the ability to look through dust clouds, smoke, and fog. The only sensor, which is capable to cope with such environmental restrictions and is able to deliver high-resolution images, is the synthetic aperture radar (SAR). In this paper we focus on miniaturized SAR systems which were developed and optimized for utilization in a UA V (unmanned aerial vehicle) with a low loading capacity. This not only requires a compact and light radar sensor but the processing also has to cope with the unstable flight conditions of a small aircraft. Therefore, a high-precision inertial measurement unit (IMU) and motion compensating SAR-algorithms are needed. Thanks to the utilization of a high transmit frequency of either 35 GHz or 94 GHz, the sensors are suitable for the detection of small-scale objects. A very high resolution of 15 cm \u00d7 15 cm can be achieved when using modern FMCW (frequency modulated continuous wave) generation with a high bandwidth (up to 1 GHz) in combination with small antennas."}
{"_id": "1f1122f01db3a48b2df32d2c929bcf5193b0e89c", "title": "Effect of rule weights in fuzzy rule-based classification systems", "text": "This paper examines the effect of rule weights in fuzzy rule-based classification systems. Each fuzzy IF\u2013THEN rule in our classification system has antecedent linguistic values and a single consequent class. We use a fuzzy reasoning method based on a single winner rule in the classification phase. The winner rule for a new pattern is the fuzzy IF\u2013THEN rule that has the maximum compatibility grade with the new pattern. When we use fuzzy IF\u2013THEN rules with certainty grades (i.e., rule weights), the winner is determined as the rule with the maximum product of the compatibility grade and the certainty grade. In this paper, the effect of rule weights is illustrated by drawing classification boundaries using fuzzy IF\u2013THEN rules with/without certainty grades. It is also shown that certainty grades play an important role when a fuzzy rule-based classification system is a mixture of general rules and specific rules. Through computer simulations, we show that comprehensible fuzzy rule-based systems with high classification performance can be designed without modifying the membership functions of antecedent linguistic values when we use fuzzy IF\u2013THEN rules with certainty grades."}
{"_id": "202891ef448753dd07cc42039440f35ce217df7d", "title": "FindingHuMo: Real-Time Tracking of Motion Trajectories from Anonymous Binary Sensing in Smart Environments", "text": "In this paper we have proposed and designed FindingHuMo (Finding Human Motion), a real-time user tracking system for Smart Environments. FindingHuMo can perform device-free tracking of multiple (unknown and variable number of) users in the Hallway Environments, just from non-invasive and anonymous (not user specific) binary motion sensor data stream. The significance of our designed system are as follows: (a) fast tracking of individual targets from binary motion data stream from a static wireless sensor network in the infrastructure. This needs to resolve unreliable node sequences, system noise and path ambiguity, (b) Scaling for multi-user tracking where user motion trajectories may crossover with each other in all possible ways. This needs to resolve path ambiguity to isolate overlapping trajectories, FindingHumo applies the following techniques on the collected motion data stream: (i) a proposed motion data driven adaptive order Hidden Markov Model with Viterbi decoding (called Adaptive-HMM), and then (ii) an innovative path disambiguation algorithm (called CPDA). Using this methodology the system accurately detects and isolates motion trajectories of individual users. The system performance is illustrated with results from real-time system deployment experience in a Smart Environment."}
{"_id": "01e343f4253001cb8437ecea40c5e0b7c49d5109", "title": "On the role of color in the perception of motion in animated visualizations", "text": "Although luminance contrast plays a predominant role in motion perception, significant additional effects are introduced by chromatic contrasts. In this paper, relevant results from psychophysical and physiological research are described to clarify the role of color in motion detection. Interpreting these psychophysical experiments, we propose guidelines for the design of animated visualizations, and a calibration procedure that improves the reliability of visual motion representation. The guidelines are applied to examples from texture-based flow visualization, as well as graph and tree visualization."}
{"_id": "8e843cb073ffd749d0786bb852b2a7920081ac34", "title": "Forward models and passive psychotic symptoms", "text": "Pickering and Clark (2014) present two ways of viewing the role of forward models in human cognition: the auxiliary forward model (AFM) account and the integral forward model (IFM) account. The AFM account \u201cassumes a dedicated prediction mechanism implemented by additional circuitry distinct from the core mechanisms of perception and action\u201d (p. 451). The standard AFM account exploits a corollary discharge from the motor command in order to compute the sensory consequences of the action. In contrast, on the IFM account, \u201cperception itself involves the use of a forward (generative) model, whose role is to construct the incoming sensory signal \u2018from the top down\u201d\u2019 (p. 453). Furthermore, within this account, motor commands are dispensed with: they are predictions that are fulfilled by movement as part of the prediction error minimization that is taken to govern all aspects of cognition (Adams et al., 2013). Pickering and Clark present two \u201ctesting grounds\u201d for helping us adjudicate between IFMs and AFMs, which are committed to the idea, derived from Pickering\u2019s own work, that one predicts others in a way that is similar to the way that one predicts oneself. Although I like this \u201cprediction by simulation\u201d account, in this commentary, I want to emphasize that neither the IFM nor the AFM accounts are necessarily wedded to it. A less committal, and hence perhaps more compelling, testing ground is to be found in psychotic symptoms, and the capacity of the two frameworks to account for them. Indeed, using psychosis to illustrate forward modelling is not new: the inability to self-tickle was taken to be convincing data for the presence of forward models (viewed, by default, within an AFM account), and in particular for problems with them in patients with diagnoses of schizophrenia (Frith et al., 2000). The AFM account has been used more generally to explain symptoms of schizophrenia. Something goes wrong with the generation of the forward model, and so the sensory consequences of selfgenerated stimuli are poorly predicted, and hence fail to be attenuated, and are, ultimately, misattributed to an external source. Although most have accepted this for delusions of control, some have questioned the application of this model to passive symptoms (Stephens and Graham, 2000), namely those which do not involve action, such as auditory verbal hallucinations (AVHs). If the symptoms of schizophrenia are explainable in terms of problems with an AFM, and this is constructed out of a motor command, then non-motoric (\u201cpassive\u201d) symptoms cannot be so explained. One move has been to keep working within the AFM framework but claim that \u201cpassive\u201d symptoms merely look passive: they are actually \u201cactive.\u201d Several theorists (e.g., Jones and Fernyhough, 2007) have attempted to explain AVHs in terms of inner speech misattribution, where inner speech is taken to involve motoric elements. This motoric involvement has been empirically supported by several electromyographical (EMG) studies (which measured muscular activity during inner speech) some of which date as far back as the early 1930s (Jacobsen, 1931). Later experiments made the connection between inner speech and AVH, showing that similar muscular activation is involved in healthy inner speech and AVH (Gould, 1948). 
The involvement of motoric elements in both inner speech and in AVH is further supported by findings (Gould, 1950) showing that when subjects hallucinated, subvocalizations occurred which could be picked up with a throat microphone. That these subvocalizations were causally responsible for the inner speech allegedly implicated in AVHs, and not just echoing it, was suggested by data (Bick and Kinsbourne, 1987) demonstrating that if people experiencing hallucinations opened their mouths wide, stopping vocalizations, then the majority of AVHs stopped. However, this does not seem to capture all AVH subtypes. For example, Dodgson and Gordon (2009) convincingly present \u201chypervigilance hallucinations,\u201d which are not based on self-generated stimuli, but constitute hypervigilant boosting and molding of external stimuli. As I have argued (Wilkinson, 2014) recently, one can account for both inner speech-based and hypervigilance hallucinations, within an IFM framework (although I called it a \u201cPredictive Processing Framework\u201d). Since it is good practice to support models that accommodate more phenomena, assuming (as seems plausible) that hypervigilance hallucinations are a genuine subtype of AVH, the IFM account is preferable to the AFM account. In conclusion, although I agree with Pickering and Clark that the IFM account is preferable, I do so on the basis of a"}
{"_id": "3f28447c7a8b85ae2f3b0966b3be839ec6c99f40", "title": "Finding objects for blind people based on SURF features", "text": "Nowadays computer vision technology is helping the visually impaired by recognizing objects in their surroundings. Unlike research of navigation and wayfinding, there are no camera-based systems available in the market to find personal items for the blind. This paper proposes an object recognition method to help blind people find missing items using Speeded-Up Robust Features (SURF). SURF features can extract distinctive invariant features that can be utilized to perform reliable matching between different images in multiple scenarios. These features are invariant to image scale, translation, rotation, illumination, and partial occlusion. The proposed recognition process begins by matching individual features of the user queried object to a database of features with different personal items which are saved in advance. Experiment results demonstrate the effectiveness and efficiency of the proposed method."}
{"_id": "c29b2b2db4f1cbe4931a21ec8b95ce808ccff96b", "title": "UAV vision aided positioning system for location and landing", "text": "The research and use of UAVs (Unmanned Aerial Vehicles) have grown in attention and new ideas in the last couple of decades, mainly due to technology improvement of its construction parts, sensors and controllers. When it comes to its autonomous behavior, robust techniques make themselves necessary to assure the security of the aircraft and people around it. This work presents the use of computer vision techniques to improve the positioning of a quadrotor UAV in the landing moment, adding security and precision, overcoming intrinsic errors of other positioning sensors, such as GPS. A landmark was built to be detected by the camera with contour detection, image thresholding and other mathematical techniques in order to obtain fast, simple and robust processing and results. A GoPro camera model was used with a developed algorithm to remove the image distortion. With this set up, the vehicle can be programmed to develop any mission, with this works control algorithm taking part with the vision aided positioning in the landing moment. The quadrotor was conceived and built at the GRIn (Grupo de Robtica Inteligente) laboratory at UFJF (Juiz de Fora Federal University), and uses open source code for its control, with Mavlink communication protocol to exchange messages with the vision-based control algorithm for mission and landing. This algorithm was developed in Python language, and is in charge of image characteristics segmentation, extracting distances and altitude from the data. The control commands are sent to the quadrotor using the mentioned protocol. The system is structured with a remote laptop connected with a serial port to the quadrotor, and a router transmits messages back and forth among the GCS (Ground Control Station, on the laptop) and the high-level control algorithm (mission planning and vision based control) through TCP ports to the quadrotor itself. The camera is connected via Wi-Fi built-in network to the laptop, so all the image processing is done inside the last one. The results obtained in outdoor tests show the efficacy and fast processing of the vision methods and ensure safety in the landing moment. Furthermore, the positioning techniques can be extended to other UAV applications, such as transmission lines and utility equipment inspection and monitoring. Future works look forward to embedding the process in a companion board attached to the UAV, so the process would become automatic and the risk of communication interference and lost heavily reduced."}
{"_id": "4a667c9c1154aae10980b559d30730ad225683c5", "title": "On semi-supervised linear regression in covariate shift problems", "text": "Semi-supervised learning approaches are trained using the full training (labeled) data and available testing (unlabeled) data. Demonstrations of the value of training with unlabeled data typically depend on a smoothness assumption relating the conditional expectation to high density regions of the marginal distribution and an inherent missing completely at random assumption for the labeling. So-called covariate shift poses a challenge for many existing semi-supervised or supervised learning techniques. Covariate shift models allow the marginal distributions of the labeled and unlabeled feature data to differ, but the conditional distribution of the response given the feature data is the same. An example of this occurs when a complete labeled data sample and then an unlabeled sample are obtained sequentially, as it would likely follow that the distributions of the feature data are quite different between samples. The value of using unlabeled data during training for the elastic net is justified geometrically in such practical covariate shift problems. The approach works by obtaining adjusted coefficients for unlabeled prediction which recalibrate the supervised elastic net to compromise: (i) maintaining elastic net predictions on the labeled data with (ii) shrinking unlabeled predictions to zero. Our approach is shown to dominate linear supervised alternatives on unlabeled response predictions when the unlabeled feature data are concentrated on a low dimensional manifold away from the labeled data and the true coefficient vector emphasizes directions away from this manifold. Large variance of the supervised predictions on the unlabeled set is reduced more than the increase in squared bias when the unlabeled responses are expected to be small, so an improved compromise within the bias-variance tradeoff is the rationale for this performance improvement. Performance is validated on simulated and real data."}
{"_id": "6a47c811ec4174dd5aa6be5eb7b8e48777eb7b42", "title": "VIDEO GAMES: PERSPECTIVE, POINT-OF-VIEW, AND IMMERSION", "text": "of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Arts VIDEO GAMES: PERSPECTIVE, POINT-OF-VIEW, AND IMMERSION By Laurie N. Taylor May 2002 Chair: Dr. Terry Harpold Department: English Presently, video game designers, players, and often even video game theorists approach the representation of point-of-view in video games as intuitive and uncomplicated. Drawing on the current critical work on video games, theories of immersion and engagement, and theories of human-computer-interaction, I question the significance of optical perspective in video games in terms of player point-of-view. Because video games are an experiential medium, critical study of video games must include the study of the diegetic, visual-auditory, interface, and experiential (often termed interactive or participatory) aspects of video game play. Through an analysis of the various points-of-view within different games and the differing points-of-view within a single game, I attempt to delineate the differences both in the creation of the experiential game space and in the possibilities for immersion within the game space. For this, I delimit two types of immersion: diegetic immersion, which corresponds to immersion in the game, and intra-diegetic or situated immersion, which corresponds to immersion"}
{"_id": "0a2ebb26cfbd456ec6792e776d550671c16b5da7", "title": "Sensorless speed control of a PM synchronous motor based on sliding mode observer and extended Kalman filter", "text": "In this paper a method for rotor speed and position detection in permanent magnet synchronous motors is presented, suitable for applications where low speed and standstill operations are not required. The approach is based on the estimation of the motor back-EMF through a sliding mode observer and a cascaded Kalman filtering. Due to its characteristics, the technique can be applied to motors having distorted and whatever shaped back-EMF waveform. The equation error for the rotor position estimation is derived and discussed. Test results are presented, which show the system performance including start-up capability, which is one of the most critical conditions in back-EMF based sensorless schemes."}
{"_id": "21371d12e9f1f900fffa78229768609a87556681", "title": "Back EMF Sensorless-Control Algorithm for High-Dynamic Performance PMSM", "text": "In this paper, a low-time-consuming and low-cost sensorless-control algorithm for high-dynamic performance permanent-magnet synchronous motors, both surface and internal permanent-magnet mounted for position and speed estimation, is introduced, discussed, and experimentally validated. This control algorithm is based on the estimation of rotor speed and angular position starting from the back electromotive force space-vector determination without voltage sensors by using the reference voltages given by the current controllers instead of the actual ones. This choice obviously introduces some errors that must be vanished by means of a compensating function. The novelties of the proposed estimation algorithm are the position-estimation equation and the process of compensation of the inverter phase lag that also suggests the final mathematical form of the estimation. The mathematical structure of the estimation guarantees a high degree of robustness against parameter variation as shown by the sensitivity analysis reported in this paper. Experimental verifications of the proposed sensorless-control system have been made with the aid of a flexible test bench for brushless motor electrical drives. The test results presented in this paper show the validity of the proposed low-cost sensorless-control algorithm and, above all, underline the high dynamic performances of the sensorless-control system also with a reduced equipment."}
{"_id": "b17a4d96b87422bb41681ccdfd7788011dc56bb8", "title": "Mechanical sensorless control of PMSM with online estimation of stator resistance", "text": "This paper provides an improvement in sensorless control performance of nonsalient-pole permanent-magnet synchronous machines. To ensure sensorless operation, most of the existing methods require that the initial position error as well as the parameters uncertainties must be limited. In order to overcome these drawbacks, we study them analytically and present a solution using an online identification method which is easy to implement and is highly stable. A stability analysis based on Lyapunov's linearization method shows the stability of the closed-loop system with the proposed estimator combined with the sensorless algorithm. This approach does not need a well-known initial rotor position and makes the sensorless control more robust with respect to the stator resistance variations at low speed. The simulation and experimental results illustrate the validity of the analytical approach and the efficiency of the proposed method."}
{"_id": "f50db0afc953ca5b24f197b21491c80737f22c89", "title": "Lyapunov-function-based flux and speed observer for AC induction motor sensorless control and parameters estimation", "text": "AC induction motors have become very popular for motion-control applications due to their simple and reliable construction. Control of drives based on ac induction motors is a quite complex task. Provided the vector-control algorithm is used, not only the rotor speed but also the position of the magnetic flux inside the motor during the control process should be known. In most applications, the flux sensors are omitted and the magnetic-flux phasor position has to be calculated. However, there are also applications in which even speed sensors should be omitted. In such a situation, the task of state reconstruction can be solved only from voltage and current measurements. In the current paper, a method based on deterministic evaluation of measurement using the state observer based on the Lyapunov function is presented. The method has been proven in testing on a real ac induction machine."}
{"_id": "178881b589085ad4e0ac00817ae96598c117f831", "title": "DSP-based speed adaptive flux observer of induction motor", "text": "A method of estimating the speed of an induction motor is presented. This method is based on the adaptive control theory. Experimental results of a direct field oriented induction motor control without speed sensors are presented.<>"}
{"_id": "686d4e2aee9499136eb1ae7f21a3cb6f8b810ee3", "title": "Transfer Learning from Deep Features for Remote Sensing and Poverty Mapping", "text": "The lack of reliable data in developing countries is a major obstacle to sustainable development, food security, and disaster relief. Poverty data, for example, is typically scarce, sparse in coverage, and labor-intensive to obtain. Remote sensing data such as high-resolution satellite imagery, on the other hand, is becoming increasingly available and inexpensive. Unfortunately, such data is highly unstructured and currently no techniques exist to automatically extract useful insights to inform policy decisions and help direct humanitarian efforts. We propose a novel machine learning approach to extract large-scale socioeconomic indicators from highresolution satellite imagery. The main challenge is that training data is very scarce, making it difficult to apply modern techniques such as Convolutional Neural Networks (CNN). We therefore propose a transfer learning approach where nighttime light intensities are used as a data-rich proxy. We train a fully convolutional CNN model to predict nighttime lights from daytime imagery, simultaneously learning features that are useful for poverty prediction. The model learns filters identifying different terrains and man-made structures, including roads, buildings, and farmlands, without any supervision beyond nighttime lights. We demonstrate that these learned features are highly informative for poverty mapping, even approaching the predictive performance of survey data collected in the field."}
{"_id": "0fd6d5727215283bca296fe75852475e7c6c7ca0", "title": "Deep learning methods for knowledge base population", "text": "xxi Zusammenfassung xxiii"}
{"_id": "d880c174661118ec23fc7e7cec0e2656cc0654ec", "title": "Structural Balance Theory-Based E-Commerce Recommendation over Big Rating Data", "text": "Recommending appropriate product items to the target user is becoming the key to ensure continuous success of E-commerce. Today, many E-commerce systems adopt various recommendation techniques, e.g., Collaborative Filtering (abbreviated as CF)-based technique, to realize product item recommendation. Overall, the present CF recommendation can perform very well, if the target user owns similar friends (user-based CF), or the product items purchased and preferred by target user own one or more similar product items (item-based CF). While due to the sparsity of big rating data in E-commerce, similar friends and similar product items may be both absent from the user-product purchase network, which lead to a big challenge to recommend appropriate product items to the target user. Considering the challenge, we put forward a Structural Balance Theory-based Recommendation (i.e., SBT-Rec) approach. In the concrete, (I) user-based recommendation: we look for target user's \u201cenemy\u201d (i.e., the users having opposite preference with target user); afterwards, we determine target user's \u201cpossible friends\u201d, according to \u201cenemy's enemy is a friend\u201d rule of Structural Balance Theory, and recommend the product items preferred by \u201cpossible friends\u201d of target user to the target user. (II) likewise, for the product items purchased and preferred by target user, we determine their \u201cpossibly similar product items\u201d based on Structural Balance Theory and recommend them to the target user. At last, the feasibility of SBT-Rec is validated, through a set of experiments deployed on MovieLens-1M dataset."}
{"_id": "77ee479c9df201d7e93366c74198e9661316f1bb", "title": "Improved Localization of Cortical Activity By Combining EEG and MEG with MRI Cortical Surface Reconstruction", "text": "We describe a comprehensive linear approach to the problem of imaging brain activity with high temporal as well as spatial resolution based on combining EEG and MEG data with anatomical constraints derived from MRI images. The \u201c inverse problem\u201d of estimating the distribution of dipole strengths over the cortical surface is highly underdetermined, even given closely spaced EEG and MEG recordings. We have obtained much better solutions to this problem by explicitly incorporating both local cortical orientation as well as spatial covariance of sources and sensors into our for-"}
{"_id": "4cd0ef755d5473415b5a99555c12f52ce7ce9329", "title": "Modified Firefly Algorithm", "text": "Firefly algorithm is one of the new metaheuristic algorithms for optimization problems. The algorithm is inspired by the flashing behavior of fireflies. In the algorithm, randomly generated solutions will be considered as fireflies, and brightness is assigned depending on their performance on the objective function. One of the rules used to construct the algorithm is, a firefly will be attracted to a brighter firefly, and if there is no brighter firefly, it will move randomly. In this paper we modify this random movement of the brighter firefly by generating random directions in order to determine the best direction in which the brightness increases. If such a direction is not generated, it will remain in its current position. Furthermore the assignment of attractiveness is modified in such a way that the effect of the objective function is magnified. From the simulation result it is shown that the modified firefly algorithm performs better than the standard one in finding the best solution with smaller CPU time."}
{"_id": "d5ba527534daf11748a5a1f2d417283ce88d5cd1", "title": "Real-time human gesture grading based on OpenPose", "text": "In this paper, we presented a real-time 2D human gesture grading system from monocular images based on OpenPose, a library for real-time multi-person keypoint detection. After capturing 2D positions of a person's joints and skeleton wireframe of the body, the system computed the equation of motion trajectory for every joint. Similarity metric was defined as distance between motion trajectories of standard and real-time videos. A modifiable scoring formula was used for simulating the gesture grading scenario. Experimental results showed that the system worked efficiently with high real-time performance, low cost of equipment and strong robustness to the interference of noise."}
{"_id": "25f3d15130c2e5591d7daf6f69e88edbcfc1ab9d", "title": "The effectiveness of yoga in modifying risk factors for cardiovascular disease and metabolic syndrome: A systematic review and meta-analysis of randomized controlled trials.", "text": "BACKGROUND\nYoga, a popular mind-body practice, may produce changes in cardiovascular disease (CVD) and metabolic syndrome risk factors.\n\n\nDESIGN\nThis was a systematic review and random-effects meta-analysis of randomized controlled trials (RCTs).\n\n\nMETHODS\nElectronic searches of MEDLINE, EMBASE, CINAHL, PsycINFO, and The Cochrane Central Register of Controlled Trials were performed for systematic reviews and RCTs through December 2013. Studies were included if they were English, peer-reviewed, focused on asana-based yoga in adults, and reported relevant outcomes. Two reviewers independently selected articles and assessed quality using Cochrane's Risk of Bias tool.\n\n\nRESULTS\nOut of 1404 records, 37 RCTs were included in the systematic review and 32 in the meta-analysis. Compared to non-exercise controls, yoga showed significant improvement for body mass index (-0.77\u2009kg/m(2) (95% confidence interval -1.09 to -0.44)), systolic blood pressure (-5.21\u2009mmHg (-8.01 to -2.42)), low-density lipoprotein cholesterol (-12.14\u2009mg/dl (-21.80 to -2.48)), and high-density lipoprotein cholesterol (3.20\u2009mg/dl (1.86 to 4.54)). Significant changes were seen in body weight (-2.32\u2009kg (-4.33 to -0.37)), diastolic blood pressure (-4.98\u2009mmHg (-7.17 to -2.80)), total cholesterol (-18.48\u2009mg/dl (-29.16 to -7.80)), triglycerides (-25.89\u2009mg/dl (-36.19 to -15.60), and heart rate (-5.27 beats/min (-9.55 to -1.00)), but not fasting blood glucose (-5.91\u2009mg/dl (-16.32 to 4.50)) nor glycosylated hemoglobin (-0.06% Hb (-0.24 to 0.11)). No significant difference was found between yoga and exercise. One study found an impact on smoking abstinence.\n\n\nCONCLUSIONS\nThere is promising evidence of yoga on improving cardio-metabolic health. Findings are limited by small trial sample sizes, heterogeneity, and moderate quality of RCTs."}
{"_id": "0eb5872733e643f43a0c1a7ff78953dfea74dfea", "title": "Automated Scoring: Beyond Natural Language Processing", "text": "In this position paper, we argue that building operational automated scoring systems is a task that has disciplinary complexity above and beyond standard competitive shared tasks which usually involve applying the latest machine learning techniques to publicly available data in order to obtain the best accuracy. Automated scoring systems warrant significant cross-discipline collaboration of which natural language processing and machine learning are just two ofmany important components. Such systems have multiple stakeholders with different but valid perspectives that can often times be at odds with each other. Our position is that it is essential for us as NLP researchers to understand and incorporate these perspectives into our research and work towards a mutually satisfactory solution in order to build automated scoring systems that are accurate, fair, unbiased, and useful. 1 What is Automated Scoring? Automated scoring is an NLP application usually deployed in the educational domain. It involves automatically analyzing a student\u2019s response to a question and generating either (a) a score in order to assess the student\u2019s knowledge and/or other skills and/or (b) some actionable feedback on how the student can improve the response (Page, 1966; Burstein et al., 1998; Burstein et al., 2004; Zechner et al., 2009; Bernstein et al., 2010). It is considered an NLP application since typically the core technology behind the automated analysis of the student response enlists NLP techniques. The student responses can include essays, short answers, or spoken responses and the two most common kinds of automated scoring are the automated evaluation of writing quality and content knowledge. Both the scores and feedback are usually based on linguistic characteristics of the responses including but not limited to: (i) Lower-level errors in the response (e.g., pronunciation errors in spoken responses and grammatical/spelling errors in written responses), (ii) The discourse structure and/or organization of the response, (iii) Relevance of the response to the question that was asked."}
{"_id": "4ecbc7119802103dfcc21c243575e598b84fd108", "title": "Software risk management barriers: An empirical study", "text": "This paper reports results from a survey of experienced project managers on their perception of software risk management. From a sample of 18 experienced project managers, we have found good awareness of risk management, but low tool usage. We offer evidence that the main barriers to performing risk management are related to its perceived high cost and comparative low value. Psychological issues are also important, but less so. Risk identification and monitoring, in particular, are perceived as effort intensive and costly. The perception is that risk management is not prioritised highly enough. Our conclusion is that more must be done to visibly prove the value : cost ratio for risk management activities."}
{"_id": "1b969b308baea3cfec03f8f08d6f5fe7493e55ad", "title": "Football analysis using spatio-temporal tools", "text": "Analysing a football match is without doubt an important task for coaches, talent scouts, players and even media; and with current technologies more and more match data is collected. Several companies offer the ability to track the position of the players and the ball with high accuracy and high resolution. They also offer software that include basic analysis tools, for example straight-forward statistics about distance run and number of passes. It is, however, a non-trivial task to perform more advanced analysis. We present a collection of tools that we developed specifically for analysing the performance of football players and teams."}
{"_id": "f307c6b3058a09ba1bda5bafa89ad4c501e5079a", "title": "DNA Cryptography and Deep Learning using Genetic Algorithm with NW algorithm for Key Generation", "text": "Cryptography is not only a science of applying complex mathematics and logic to design strong methods to hide data called as encryption, but also to retrieve the original data back, called decryption. The purpose of cryptography is to transmit a message between a sender and receiver such that an eavesdropper is unable to comprehend it. To accomplish this, not only we need a strong algorithm, but a strong key and a strong concept for encryption and decryption process. We have introduced a concept of DNA Deep Learning Cryptography which is defined as a technique of concealing data in terms of DNA sequence and deep learning. In the cryptographic technique, each alphabet of a letter is converted into a different combination of the four bases, namely; Adenine (A), Cytosine (C), Guanine (G) and Thymine (T), which make up the human deoxyribonucleic acid (DNA). Actual implementations with the DNA don\u2019t exceed laboratory level and are expensive. To bring DNA computing on a digital level, easy and effective algorithms are proposed in this paper. In proposed work we have introduced firstly, a method and its implementation for key generation based on the theory of natural selection using Genetic Algorithm with Needleman-Wunsch (NW) algorithm and Secondly, a method for implementation of encryption and decryption based on DNA computing using biological operations Transcription, Translation, DNA Sequencing and Deep Learning."}
{"_id": "59ab10d40edcef929642881812d670ecdd86bf7a", "title": "Robust Non-Intrusive Load Monitoring (NILM) with unknown loads", "text": "A Non-Intrusive Load Monitoring (NILM) method, robust even in the presence of unlearned or unknown appliances (UUAs) is presented in this paper. In the absence of such UUAs, this NILM algorithm is capable of accurately identifying each of the turned-ON appliances as well as their energy levels. However, when there is an UUA or set of UUAs are turned-ON during a particular time window, proposed NILM method detects their presence. This enables the operator to detect presence of anomalies or unlearned appliances in a household. This quality increases the reliability of the NILM strategy and makes it more robust compared to existing NILM methods. The proposed Robust NILM strategy (RNILM) works accurately with a single active power measurement taken at a low sampling rate as low as one sample per second. Here first, a unique set of features for each appliance was extracted through decomposing their active power signal traces into uncorrelated subspace components (SCs) via a high-resolution implementation of the Karhunen-Loeve (KLE). Next, in the appliance identification stage, through considering power levels of the SCs, the number of possible appliance combinations were rapidly reduced. Finally, through a Maximum a Posteriori (MAP) estimation, the turned-ON appliance combination and/or the presence of UUA was determined. The proposed RNILM method was validated using real data from two public databases: Reference Energy Disaggregation Dataset (REDD) and Tracebase. The presented results demonstrate the capability of the proposed RNILM method to identify, the turned-ON appliance combinations, their energy level disaggregation as well as the presence of UUAs accurately in real households."}
{"_id": "e8a3306cceffd3d8f19c253137fc93664e4ef8f6", "title": "A novel SVM-kNN-PSO ensemble method for intrusion detection system", "text": "In machine learning, a combination of classifiers, known as an ensemble classifier, often outperforms individual ones. While many ensemble approaches exist, it remains, however, a difficult task to find a suitable ensemble configuration for a particular dataset. This paper proposes a novel ensemble construction method that uses PSO generated weights to create ensemble of classifiers with better accuracy for intrusion detection. Local unimodal sampling (LUS) method is used as a meta-optimizer to find better behavioral parameters for PSO. For our empirical study, we took five random subsets from the well-known KDD99 dataset. Ensemble classifiers are created using the new approaches as well as the weighted majority algorithm (WMA) approach. Our experimental results suggest that the new approach can generate ensembles that outperform WMA in terms of classification accuracy . \u00a9 2015 Published by Elsevier B.V. US ultiple classifier ptimization SO 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 VM eighted majority voting (WMV) MA"}
{"_id": "8b20232f8df62fdda47906c00598712e18f25d47", "title": "The thoracolumbar fascia: anatomy, function and clinical considerations.", "text": "In this overview, new and existent material on the organization and composition of the thoracolumbar fascia (TLF) will be evaluated in respect to its anatomy, innervation biomechanics and clinical relevance. The integration of the passive connective tissues of the TLF and active muscular structures surrounding this structure are discussed, and the relevance of their mutual interactions in relation to low back and pelvic pain reviewed. The TLF is a girdling structure consisting of several aponeurotic and fascial layers that separates the paraspinal muscles from the muscles of the posterior abdominal wall. The superficial lamina of the posterior layer of the TLF (PLF) is dominated by the aponeuroses of the latissimus dorsi and the serratus posterior inferior. The deeper lamina of the PLF forms an encapsulating retinacular sheath around the paraspinal muscles. The middle layer of the TLF (MLF) appears to derive from an intermuscular septum that developmentally separates the epaxial from the hypaxial musculature. This septum forms during the fifth and sixth weeks of gestation. The paraspinal retinacular sheath (PRS) is in a key position to act as a 'hydraulic amplifier', assisting the paraspinal muscles in supporting the lumbosacral spine. This sheath forms a lumbar interfascial triangle (LIFT) with the MLF and PLF. Along the lateral border of the PRS, a raphe forms where the sheath meets the aponeurosis of the transversus abdominis. This lateral raphe is a thickened complex of dense connective tissue marked by the presence of the LIFT, and represents the junction of the hypaxial myofascial compartment (the abdominal muscles) with the paraspinal sheath of the epaxial muscles. The lateral raphe is in a position to distribute tension from the surrounding hypaxial and extremity muscles into the layers of the TLF. At the base of the lumbar spine all of the layers of the TLF fuse together into a thick composite that attaches firmly to the posterior superior iliac spine and the sacrotuberous ligament. This thoracolumbar composite (TLC) is in a position to assist in maintaining the integrity of the lower lumbar spine and the sacroiliac joint. The three-dimensional structure of the TLF and its caudally positioned composite will be analyzed in light of recent studies concerning the cellular organization of fascia, as well as its innervation. Finally, the concept of a TLC will be used to reassess biomechanical models of lumbopelvic stability, static posture and movement."}
{"_id": "2b435ee691718d0b55d057d9be4c3dbb8a81526e", "title": "SURF-Face: Face Recognition Under Viewpoint Consistency Constraints", "text": "We analyze the usage of Speeded Up Robust Features (SURF) as local descriptors for face recognition. The effect of different feature extraction and viewpoint consistency constrained matching approaches are analyzed. Furthermore, a RANSAC based outlier removal for system combination is proposed. The proposed approach allows to match faces under partial occlusions, and even if they are not perfectly aligned or illuminated. Current approaches are sensitive to registration errors and usually rely on a very good initial alignment and illumination of the faces to be recognized. A grid-based and dense extraction of local features in combination with a block-based matching accounting for different viewpoint constraints is proposed, as interest-point based feature extraction approaches for face recognition often fail. The proposed SURF descriptors are compared to SIFT descriptors. Experimental results on the AR-Face and CMU-PIE database using manually aligned faces, unaligned faces, and partially occluded faces show that the proposed approach is robust and can outperform current generic approaches."}
{"_id": "96ea8f0927f87ab4be3a7fd5a3b1dd38eeaa2ed6", "title": "Trefoil Torus Knot Monopole Antenna", "text": "A wideband and simple torus knot monopole antenna is presented in this letter. The antenna is fabricated using additive manufacturing technology, commonly known as 3-D printing. The antenna is mechanically simple to fabricate and has stable radiation pattern as well as input reflection coefficient below -10 dB over a frequency range of 1-2 GHz. A comparison of measured and simulated performance of the antenna is also presented."}
{"_id": "d7775931803aab23494937856bbfcb31233c2537", "title": "Human Body Posture Classification by a Neural Fuzzy Network and Home Care System Application", "text": "A new classification approach for human body postures based on a neural fuzzy network is proposed in this paper, and the approach is applied to detect emergencies that are caused by accidental falls. Four main body postures are used for posture classification, including standing, bending, sitting, and lying. After the human body is segmented from the background, the classification features are extracted from the silhouette. The body silhouette is projected onto horizontal and vertical axes, and then, a discrete Fourier transform is applied to each projected histogram. Magnitudes of significant Fourier transform coefficients together with the silhouette length-width ratio are used as features. The classifier is designed by a neural fuzzy network. The four postures can be classified with high accuracy according to experimental results. Classification results are also applicable to home care emergency detection of a person who suddenly falls and remains in the lying posture for a period of time due to experiments that were performed."}
{"_id": "379696f21d6bc8829c425295b917824edf687e0a", "title": "CORE: Context-Aware Open Relation Extraction with Factorization Machines", "text": "We propose CORE, a novel matrix factorization model that leverages contextual information for open relation extraction. Our model is based on factorization machines and integrates facts from various sources, such as knowledge bases or open information extractors, as well as the context in which these facts have been observed. We argue that integrating contextual information\u2014such as metadata about extraction sources, lexical context, or type information\u2014significantly improves prediction performance. Open information extractors, for example, may produce extractions that are unspecific or ambiguous when taken out of context. Our experimental study on a large real-world dataset indicates that CORE has significantly better prediction performance than state-ofthe-art approaches when contextual information is available."}
{"_id": "e346cb8ecd797c7001bab004f10ebfca25d58074", "title": "Urethrocutaneous fistula repair after hypospadias surgery.", "text": "OBJECTIVE\nTo evaluate and compare the success rates of simple and layered repairs of urethrocutaneous fistulae after hypospadias repair.\n\n\nPATIENTS AND METHODS\nThe charts of 72 children who developed fistulae after hypospadias repair were reviewed; 39 had a simple closure of the fistula, whereas 32 had a 'pants over vest' repair, in all cases after excluding an impairment of urine outflow.\n\n\nRESULTS\nThe success rate at the first attempt was 74% for simple closure and 94% for the layered repair; at the second attempt it was 80% and 100%, the difference being statistically significant for both repairs.\n\n\nCONCLUSIONS\nAlthough probably far from an optimal technique for repairing urethrocutaneous fistulae, the pants-over-vest repair allows a good success rate for penile shaft fistulae."}
{"_id": "745cd47479008f8f6f3a4262f5817901ee44c1a3", "title": "Directly Fabricating Soft Robotic Actuators With an Open-Source 3-D Printer", "text": "3-D printing silicone has been a long sought for goal by roboticists. Fused deposition manufacturing (FDM) is a readily accessible and simple 3-D printing scheme that could hold the key to printing silicone. This study details an approach to 3-D print silicone elastomer through use of a thickening additive and heat curing techniques. We fabricated an identical control actuator using molding and 3-D printing techniques for comparison. By comparing the free space elongation and fixed length force of both actuators, we were able to evaluate the quality of the print. We observed that the 3-D printed linear actuator was able to perform similarly to the molded actuator, with an average error of 5.08% in actuator response, establishing the feasibility of such a system. We envision that further development of this system would contribute to the way soft robotic systems are fabricated."}
{"_id": "206b204618640917f278e72bd0e2a881d8cec7ad", "title": "A family of algorithms for approximate Bayesian inference", "text": "One of the major obstacles to using Bayesian methods for pattern recognition has been its computational expense. This thesis presents an approximation technique that can perform Bayesian inference faster and more accurately than previously possible. This method, \"Expectation Propagation,\" unifies and generalizes two previous techniques: assumeddensity filtering, an extension of the Kalman filter, and loopy belief propagation, an extension of belief propagation in Bayesian networks. The unification shows how both of these algorithms can be viewed as approximating the true posterior distribution with a simpler distribution, which is close in the sense of KL-divergence. Expectation Propagation exploits the best of both algorithms: the generality of assumed-density filtering and the accuracy of loopy belief propagation. Loopy belief propagation, because it propagates exact belief states, is useful for limited types of belief networks, such as purely discrete networks. Expectation Propagation approximates the belief states with expectations, such as means and variances, giving it much wider scope. Expectation Propagation also extends belief propagation in the opposite direction-propagating richer belief states which incorporate correlations between variables. This framework is demonstrated in a variety of statistical models using synthetic and real-world data. On Gaussian mixture problems, Expectation Propagation is found, for the same amount of computation, to be convincingly better than rival approximation techniques: Monte Carlo, Laplace's method, and variational Bayes. For pattern recognition, Expectation Propagation provides an algorithm for training Bayes Point Machine classifiers that is faster and more accurate than any previously known. The resulting classifiers outperform Support Vector Machines on several standard datasets, in addition to having a comparable training time. Expectation Propagation can also be used to choose an appropriate feature set for classification, via Bayesian model selection. Thesis Supervisor: Rosalind Picard Title: Associate Professor of Media Arts and Sciences"}
{"_id": "ccfec3649345ec324821313a3211560bed2ba5e1", "title": "Neural Extractive Text Summarization with Syntactic Compression", "text": "Recent neural network approaches to summarization are largely either sentence-extractive, choosing a set of sentences as the summary, or abstractive, generating the summary from a seq2seq model. In this work, we present a neural model for single-document summarization based on joint extraction and compression. Following recent successful extractive models, we frame the summarization problem as a series of local decisions. Our model chooses sentences from the document and then decides which of a set of compression options to apply to each selected sentence. We compute this set of options using discrete compression rules based on syntactic constituency parses; however, our approach is modular and can flexibly use any available source of compressions. For learning, we construct oracle extractive-compressive summaries that reflect uncertainty over our model\u2019s decision sequence, then learn both of our components jointly with this supervision. Experimental results on the CNN/Daily Mail and New York Times datasets show that our model achieves the state-of-the-art performance on content selection evaluated by ROUGE. Moreover, human and manual evaluation show that our model\u2019s output generally remains grammatical."}
{"_id": "a9062945a0377878cc1c2161012375270bbb3ae3", "title": "Category-Based Induction", "text": "An argument is categorical if its premises and conclusion are of the form All members ofC have property F, where C is a natural category like FALCON or BIRD, and P remains the same across premises and conclusion. An example is Grizzly bears love onions. Therefore, all bears love onions. Such an argument is psychologically strong to the extent that belief in its premises engenders belief in its conclusion. A subclass of categorical arguments is examined, and the following hypothesis is advanced: The strength of a categorical argument increases with (a) the degree to which the premise categories are similar to the conclusion category and (b) the degree to which the premise categories are similar to members of the lowest level category that includes both the premise and the conclusion categories. A model based on this hypothesis accounts for 13 qualitative phenomena and the quantitative results of several experiments."}
{"_id": "58d19026a8ee2c2b719d19b9fe12e1762da9b3fd", "title": "Knowledge Based Approach for Word Sense Disambiguation using Hindi Wordnet", "text": "------------------------------------------------------------Abstract-------------------------------------------------------------Word sense disambiguation (WSD) is an open research area in natural language processing and artificial intelligence. and it is also based on computational linguistics. It is of considerable theoretical and practical interest. WSD is to analyze word tokens in context and specify exactly,which sense of several word is being used.It can be declare as theortically motivated,one or more methods and techniques. WSD is a big problem of natural language processing (NLP) and artificial intelligence. But here, the problem is to search the sense for a word in given a context and lexicon relation. It is a technique of natural language processing in which requires queries and documents in NLP or texts from Machine Translation (MT). MT is an automatic translation. It is involving some languages such as Marathi, Urdu, Bengali, Punjabi, Hindi, and English etc. Most of the work has been completed in English, and now the point of convergence has shifted to others languages. The applications of WSD are disambiguation of content in information retrieval (IR), machine translation (MT), speech processing, lexicography, and text processing. In our paper, we are using knowledge based approach to WSD with Hindi language . A knowledge based approach use of external lexical resources suach as dictionary and thesauri . It involve incorporation of word knowledge to disambiguate words. We have developed a WSD tool using knowledge base d approach with Hindi wordnet. Wordnet is built from cooccurrence and collocation and it includes synset or synonyms which belong to either noun, verb, adjective, or adverb, these are called as part-ofspeech (POS) tagger. In this paper we shall introduce the implementation of our tool and its evaluation."}
{"_id": "10efde0973c9b8221202bacfcdb79a77e1a47fa0", "title": "Internet paradox. A social technology that reduces social involvement and psychological well-being?", "text": "The Internet could change the lives of average citizens as much as did the telephone in the early part of the 20th century and television in the 1950s and 1960s. Researchers and social critics are debating whether the Internet is improving or harming participation in community life and social relationships. This research examined the social and psychological impact of the Internet on 169 people in 73 households during their first 1 to 2 years on-line. We used longitudinal data to examine the effects of the Internet on social involvement and psychological well-being. In this sample, the Internet was used extensively for communication. Nonetheless, greater use of the Internet was associated with declines in participants' communication with family members in the household, declines in the size of their social circle, and increases in their depression and loneliness. These findings have implications for research, for public policy and for the design of technology."}
{"_id": "4942001ded918f11eda28251522031c6e6bc68ae", "title": "Effects of violent video games on aggressive behavior, aggressive cognition, aggressive affect, physiological arousal, and prosocial behavior: a meta-analytic review of the scientific literature.", "text": "Research on exposure to television and movie violence suggests that playing violent video games will increase aggressive behavior. A metaanalytic review of the video-game research literature reveals that violent video games increase aggressive behavior in children and young adults. Experimental and nonexperimental studies with males and females in laboratory and field settings support this conclusion. Analyses also reveal that exposure to violent video games increases physiological arousal and aggression-related thoughts and feelings. Playing violent video games also decreases prosocial behavior."}
{"_id": "9d526feb12b50b8956432dde911513b55f708bdd", "title": "Development of a Compendium of Energy Expenditures for Youth", "text": "BACKGROUND\nThis paper presents a Compendium of Energy Expenditures for use in scoring physical activity questionnaires and estimating energy expenditure levels in youth.\n\n\nMETHOD/RESULTS\nModeled after the adult Compendium of Physical Activities, the Compendium of Energy Expenditures for Youth contains a list of over 200 activities commonly performed by youth and their associated MET intensity levels. A review of existing data collected on the energy cost of youth performing activities was undertaken and incorporated into the compendium. About 35% of the activity MET levels were derived from energy cost data measured in youth and the remaining MET levels estimated from the adult compendium.\n\n\nCONCLUSION\nThe Compendium of Energy Expenditures for Youth is useful to researchers and practitioners interested in identifying physical activity and energy expenditure values in children and adolescents in a variety of settings."}
{"_id": "2fe9bd2c1ef121cffcb44924af219dd8505e20c8", "title": "Video games and aggressive thoughts, feelings, and behavior in the laboratory and in life.", "text": "Two studies examined violent video game effects on aggression-related variables. Study 1 found that real-life violent video game play was positively related to aggressive behavior and delinquency. The relation was stronger for individuals who are characteristically aggressive and for men. Academic achievement was negatively related to overall amount of time spent playing video games. In Study 2, laboratory exposure to a graphically violent video game increased aggressive thoughts and behavior. In both studies, men had a more hostile view of the world than did women. The results from both studies are consistent with the General Affective Aggression Model, which predicts that exposure to violent video games will increase aggressive behavior in both the short term (e.g., laboratory aggression) and the long term (e.g., delinquency)."}
{"_id": "ad40428b40b051164ade961bc841a0da2c44515d", "title": "The CES-D Scale : A Self-Report Depression Scale for Research in the General Population", "text": ""}
{"_id": "9972d618fdac424c97e26205f364357d275b995d", "title": "Chapter 8 Location-Based Social Networks : Users", "text": "In this chapter, we introduce and define the meaning of location-based social network (LBSN) and discuss the research philosophy behind LBSNs from the perspective of users and locations. Under the circumstances of trajectory-centric LBSN, we then explore two fundamental research points concerned with understanding users in terms of their locations. One is modeling the location history of an individual using the individual\u2019s trajectory data. The other is estimating the similarity between two different people according to their location histories. The inferred similarity represents the strength of connection between two users in a locationbased social network, and can enable friend recommendations and community discovery. The general approaches for evaluating these applications are also presented."}
{"_id": "f4d133d9933879f550d09955aa66e49a98110609", "title": "Personality Traits and Music Genres: What Do People Prefer to Listen To?", "text": "Personality-based personalized systems are increasingly gaining interest as personality traits has been shown to be a stable construct within humans. In order to provide a personality-based experience to the user, users' behavior, preferences, and needs in relation to their personality need to be investigated. Although for a technological mediated environment the search for these relationships is often new territory, there are findings from personality research of the real world that can be used in personalized systems. However, for these findings to be implementable, we need to investigate whether they hold in a technologically mediated environment. In this study we assess prior work on personality-based music genre preferences from traditional personality research. We analyzed a dataset consisting of music listening histories and personality scores of 1415 Last.fm users. Our results show agreements with prior work, but also important differences that can help to inform personalized systems."}
{"_id": "3c24dbb4f1e49b6e8b13dc376cd4bb944aaf9968", "title": "DeepHTTP: Semantics-Structure Model with Attention for Anomalous HTTP Traffic Detection and Pattern Mining", "text": "In the Internet age, cyber-attacks occur frequently with complex types. Traffic generated by access activities can record website status and user request information, which brings a great opportunity for network attack detection. Among diverse network protocols, Hypertext Transfer Protocol (HTTP) is widely used in government, organizations and enterprises. In this work, we propose DeepHTTP, a semantics structure integration model utilizing Bidirectional Long Short-TermMemory (Bi-LSTM) with attention mechanism to model HTTP traffic as a natural language sequence. In addition to extracting traffic content information, we integrate structural information to enhance the generalization capabilities of the model. Moreover, the application of attention mechanism can assist in discovering critical parts of anomalous traffic and furthermining attack patterns. Additionally, we demonstrate how to incrementally update the data set and retrain model so that it can be adapted to new anomalous traffic. Extensive experimental evaluations over large traffic data have illustrated that DeepHTTP has outstanding performance in traffic detection and pattern discovery."}
{"_id": "9d2fbb950e218149ca8063a54468bacb837d9ce6", "title": "How to detect the Cuckoo Sandbox and to Strengthen it?", "text": "Nowadays a lot of malware are analyzed with virtual machines. The Cuckoo sandbox (Cuckoo DevTeam: Cuckoo sandbox. http://www.cuckoosandbox.org , 2013) offers the possibility to log every actions performed by the malware on the virtual machine. To protect themselves and to evande detection, malware need to detect whether they are in an emulated environment or in a real one. With a few modifications and tricks on Cuckoo and the virtual machine we can try to prevent malware to detect that they are under analyze, or at least make it harder. It is not necessary to apply all the modifications, because it may produce a significant overhead and if malware checks his execution time, it may detect an anomaly and consider that it is running in a virtual machine. The present paper will show how a malware can detect the Cuckoo sandbox and how we can counter that."}
{"_id": "0b2267913519ffe3cdff8681d3c43ac1bb0aa35d", "title": "Synchronous Bidirectional Inference for Neural Sequence Generation", "text": "In sequence to sequence generation tasks (e.g. machine translation and abstractive summarization), inference is generally performed in a left-to-right manner to produce the result token by token. The neural approaches, such as LSTM and self-attention networks, are now able to make full use of all the predicted history hypotheses from left side during inference, but cannot meanwhile access any future (right side) information and usually generate unbalanced outputs in which left parts are much more accurate than right ones. In this work, we propose a synchronous bidirectional inference model to generate outputs using both left-to-right and right-to-left decoding simultaneously and interactively. First, we introduce a novel beam search algorithm that facilitates synchronous bidirectional decoding. Then, we present the core approach which enables left-to-right and right-to-left decoding to interact with each other, so as to utilize both the history and future predictions simultaneously during inference. We apply the proposed model to both LSTM and self-attention networks. In addition, we propose two strategies for parameter optimization. The extensive experiments on machine translation and abstractive summarization demonstrate that our synchronous bidirectional inference model can achieve remarkable improvements over the strong baselines."}
{"_id": "846a41ff58ec488ea798069d98eb0156a4b7fa0a", "title": "Towards Modeling False Memory With Computational Knowledge Bases", "text": "One challenge to creating realistic cognitive models of memory is the inability to account for the vast common-sense knowledge of human participants. Large computational knowledge bases such as WordNet and DBpedia may offer a solution to this problem but may pose other challenges. This paper explores some of these difficulties through a semantic network spreading activation model of the Deese-Roediger-McDermott false memory task. In three experiments, we show that these knowledge bases only capture a subset of human associations, while irrelevant information introduces noise and makes efficient modeling difficult. We conclude that the contents of these knowledge bases must be augmented and, more important, that the algorithms must be refined and optimized, before large knowledge bases can be widely used for cognitive modeling."}
{"_id": "e4bd80adc5a3486c3a5c3d82cef91b70b67ae681", "title": "Structural Models of Corporate Bond Pricing : An Empirical Analysis", "text": "This article empirically tests five structural models of corporate bond pricing: those of Merton (1974), Geske (1977), Longstaff and Schwartz (1995), Leland and Toft (1996), and Collin-Dufresne and Goldstein (2001). We implement the models using a sample of 182 bond prices from firms with simple capital structures during the period 1986\u20131997. The conventional wisdom is that structural models do not generate spreads as high as those seen in the bond market, and true to expectations, we find that the predicted spreads in our implementation of the Merton model are too low. However, most of the other structural models predict spreads that are too high on average. Nevertheless, accuracy is a problem, as the newer models tend to severely overstate the credit risk of firms with high leverage or volatility and yet suffer from a spread underprediction problem with safer bonds. The Leland and Toft model is an exception in that it overpredicts spreads on most bonds, particularly those with high coupons. More accurate structural models must avoid features that increase the credit risk on the riskier bonds while scarcely affecting the spreads of the safest bonds."}
{"_id": "9e60b8314ed2b9d651622a156fe4703cdff8ae05", "title": "A survey and comparative study of data deduplication techniques", "text": "Increase in enormous amount of digital data needs more storage space, which in turn significantly increases the cost of backup and its performance. Traditional backup solutions do not provide any inherent capability to prevent duplicate data from being backed up. Backup of duplicate data significantly increases the backup time and unnecessary consumption of resources. Data Deduplication plays an important role in eliminating this redundant data and reducing the storage consumption. Its main intend is how to reduce more duplicates efficiently, removing them at high speed and to achieve good duplicate removal ratio. Many mechanisms have been proposed to meet these objectives. It has been observed that achieving one objective may affect the other. In this paper we classify and review the existing deduplication techniques. We also present evaluation of deduplication algorithms by measuring their performance in terms of complexity and efficiency with unstructured files. We propose an efficient means to achieve high deduplication ratio with minimum backup window. We also indicate some key issues need to focus in the future work."}
{"_id": "201c8d41fe9c56d319908041af5063570ea37bed", "title": "Detecting concept change in dynamic data streams", "text": "In this research we present a novel approach to the concept change detection problem. Change detection is a fundamental issue with data stream mining as classification models generated need to be updated when significant changes in the underlying data distribution occur. A number of change detection approaches have been proposed but they all suffer from limitations with respect to one or more key performance factors such as high computational complexity, poor sensitivity to gradual change, or the opposite problem of high false positive rate. Our approach uses reservoir sampling to build a sequential change detection model that offers statistically sound guarantees on false positive and false negative rates but has much smaller computational complexity than the ADWIN concept drift detector. Extensive experimentation on a wide variety of datasets reveals that the scheme also has a smaller false detection rate while maintaining a competitive true detection rate to ADWIN."}
{"_id": "8bc6a2b77638eb934b107fb61e9a1ce5876aa9a0", "title": "Smarter Cities and Their Innovation Challenges", "text": "The transformation to smarter cities will require innovation in planning, management, and operations. Several ongoing projects around the world illustrate the opportunities and challenges of this transformation. Cities must get smarter to address an array of emerging urbanization challenges, and as the projects highlighted in this article show, several distinct paths are available. The number of cities worldwide pursuing smarter transformation is growing rapidly. However, these efforts face many political, socioeconomic, and technical hurdles. Changing the status quo is always difficult for city administrators, and smarter city initiatives often require extensive coordination, sponsorship, and support across multiple functional silos. The need to visibly demonstrate a continuous return on investment also presents a challenge. The technical obstacles will center on achieving system interoperability, ensuring security and privacy, accommodating a proliferation of sensors and devices, and adopting a new closed-loop human-computer interaction paradigm."}
{"_id": "9fc8123aa2b5be1f802e83bb80b4c32ab97faa6f", "title": "Differential Impairment of Remembering the Past and Imagining Novel Events after Thalamic Lesions", "text": "Vividly remembering the past and imagining the future (mental time travel) seem to rely on common neural substrates and mental time travel impairments in patients with brain lesions seem to encompass both temporal domains. However, because future thinking\u2014or more generally imagining novel events\u2014involves the recombination of stored elements into a new event, it requires additional resources that are not shared by episodic memory. We aimed to demonstrate this asymmetry in an event generation task administered to two patients with lesions in the medial dorsal thalamus. Because of the dense connection with pFC, this nucleus of the thalamus is implicated in executive aspects of memory (strategic retrieval), which are presumably more important for future thinking than for episodic memory. Compared with groups of healthy matched control participants, both patients could only produce novel events with extensive help of the experimenter (prompting) in the absence of episodic memory problems. Impairments were most pronounced for imagining personal fictitious and impersonal events. More precisely, the patients' descriptions of novel events lacked content and spatio-temporal relations. The observed impairment is unlikely to trace back to disturbances in self-projection, scene construction, or time concept and could be explained by a recombination deficit. Thus, although memory and the imagination of novel events are tightly linked, they also partly rely on different processes."}
{"_id": "0bf69a49c2baed67fa9a044daa24b9e199e73093", "title": "Inducing Probabilistic Grammars by Bayesian Model Merging", "text": "We describe a framework for inducing probabilistic grammars from corpora of positive samples. First, samples are incorporated by adding ad-hoc rules to a working grammar; subsequently, elements of the model (such as states or nonterminals) are merged to achieve generalization and a more compact representation. The choice of what to merge and when to stop is governed by the Bayesian posterior probability of the grammar given the data, which formalizes a trade-off between a close fit to the data and a default preference for simpler models (\u2018Occam\u2019s Razor\u2019). The general scheme is illustrated using three types of probabilistic grammars: Hidden Markov models, class-based n-grams, and stochastic context-free grammars."}
{"_id": "e3f20444124a645b825afa1f79186e9547175fe9", "title": "Statistics: a brief overview.", "text": "The Accreditation Council for Graduate Medical Education sets forth a number of required educational topics that must be addressed in residency and fellowship programs. We sought to provide a primer on some of the important basic statistical concepts to consider when examining the medical literature. It is not essential to understand the exact workings and methodology of every statistical test encountered, but it is necessary to understand selected concepts such as parametric and nonparametric tests, correlation, and numerical versus categorical data. This working knowledge will allow you to spot obvious irregularities in statistical analyses that you encounter."}
{"_id": "7262bc3674c4c063526eaf4d2dcf54eecea7bf77", "title": "ParaNMT-50M: Pushing the Limits of Paraphrastic Sentence Embeddings with Millions of Machine Translations", "text": "We describe PARANMT-50M, a dataset of more than 50 million English-English sentential paraphrase pairs. We generated the pairs automatically by using neural machine translation to translate the nonEnglish side of a large parallel corpus, following Wieting et al. (2017). Our hope is that PARANMT-50M can be a valuable resource for paraphrase generation and can provide a rich source of semantic knowledge to improve downstream natural language understanding tasks. To show its utility, we use PARANMT-50M to train paraphrastic sentence embeddings that outperform all supervised systems on every SemEval semantic textual similarity competition, in addition to showing how it can be used for paraphrase generation.1"}
{"_id": "be59652a90db393242dc95bcde049fc14cf8faad", "title": "User-click modeling for understanding and predicting search-behavior", "text": "Recent advances in search users' click modeling consider both users' search queries and click/skip behavior on documents to infer the user's perceived relevance. Most of these models, including dynamic Bayesian networks (DBN) and user browsing models (UBM), use probabilistic models to understand user click behavior based on individual queries. The user behavior is more complex when her actions to satisfy her information needs form a search session, which may include multiple queries and subsequent click behaviors on various items on search result pages. Previous research is limited to treating each query within a search session in isolation, without paying attention to their dynamic interactions with other queries in a search session.\n Investigating this problem, we consider the sequence of queries and their clicks in a search session as a task and propose a task-centric click model~(TCM). TCM characterizes user behavior related to a task as a collective whole. Specifically, we identify and consider two new biases in TCM as the basis for user modeling. The first indicates that users tend to express their information needs incrementally in a task, and thus perform more clicks as their needs become clearer. The other illustrates that users tend to click fresh documents that are not included in the results of previous queries. Using these biases, TCM is more accurately able to capture user search behavior. Extensive experimental results demonstrate that by considering all the task information collectively, TCM can better interpret user click behavior and achieve significant improvements in terms of ranking metrics of NDCG and perplexity."}
{"_id": "6f8f096471cd696a0b95073776ed045e3c96fc4f", "title": "Distributed intelligence for multi-camera visual surveillance", "text": "Latest advances in hardware technology and state of the art of computer vision and arti,cial intelligence research can be employed to develop autonomous and distributed monitoring systems. The paper proposes a multi-agent architecture for the understanding of scene dynamics merging the information streamed by multiple cameras. A typical application would be the monitoring of a secure site, or any visual surveillance application deploying a network of cameras. Modular software (the agents) within such architecture controls the di0erent components of the system and incrementally builds a model of the scene by merging the information gathered over extended periods of time. The role of distributed arti,cial intelligence composed of separate and autonomous modules is justi,ed by the need for scalable designs capable of co-operating to infer an optimal interpretation of the scene. Decentralizing intelligence means creating more robust and reliable sources of interpretation, but also allows easy maintenance and updating of the system. Results are presented to support the choice of a distributed architecture, and to prove that scene interpretation can be incrementally and e4ciently built by modular software. ? 2003 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved."}
{"_id": "96b95da0ab88de23641014abff2a5c0b5fec00c9", "title": "O Bitcoin Where Art Thou? Insight into Large-Scale Transaction Graphs", "text": "Bitcoin is a rising digital currency and exemplifies the growing need for systematically gathering and analyzing public transaction data sets such as the blockchain. However, the blockchain in its raw form is just a large ledger listing transfers of currency units between alphanumeric character strings, without revealing contextually relevant real-world information. In this demo, we present GraphSense, which is a solution that applies a graph-centric perspective on digital currency transactions. It allows users to explore transactions and follow the money flow, facilitates analytics by semantically enriching the transaction graph, supports path and graph pattern search, and guides analysts to anomalous data points. To deal with the growing volume and velocity of transaction data, we implemented our solution on a horizontally scalable data processing and analytics infrastructure. Given the ongoing digital transformation in financial services and technologies, we believe that our approach contributes to development of analytics solutions for digital currency ecosystems, which is relevant in fields such as financial analytics, law enforcement, or scientific research."}
{"_id": "c1a38bed9cdba8c51b9562184296a0720115120c", "title": "Layering concepts in anterior composite restorations.", "text": "SUMMARY\nWith increasing patient demands for esthetic dentition, composite resin restorations enjoy great popularity due to excellent esthetics, acceptable longevity, and relatively low costs. Shading concepts used by manufacturers, however, may be confusing to many clinicians.\n\n\nPURPOSE\nTo review and describe the current main shading concepts, evaluate their potential for creating natural esthetics, and provide guidelines for application."}
{"_id": "358425aa898ec0a3e8e129b94473a58d8bba366c", "title": "Distribution Free Prediction Bands", "text": "We study distribution free, nonparametric prediction bands with a special focus on their finite sample behavior. First we investigate and develop different notions of finite sample coverage guarantees. Then we give a new prediction band estimator by combining the idea of \u201cconformal prediction\u201d (Vovk et al., 2009) with nonparametric conditional density estimation. The proposed estimator, called COPS (Conformal Optimized Prediction Set), always has finite sample guarantee in a stronger sense than the original conformal prediction estimator. Under regularity conditions the estimator converges to an oracle band at a minimax optimal rate. A fast approximation algorithm and a data driven method for selecting the bandwidth are developed. The method is illustrated first in simulated data. Then, an application shows that the proposed method gives desirable prediction intervals in an automatic way, as compared to the classical linear regression modeling."}
{"_id": "3a19f83c37afc6320315f4084618073514d481af", "title": "Harnessing the Influence of Social Proof in Online Shopping: The Effect of Electronic Word of Mouth on Sales of Digital Microproducts", "text": "Social commerce has taken the e\u2010tailing world by storm. Business\u2010to\u2010consumer sites and, more important, intermediaries that facilitate shopping experience, continue to offer more and more innovative technologies to support social interaction among like\u2010 minded community members or friends who share the same shopping interests. Among these technologies, reviews, ratings, and recommendation systems have become some of the most popular social shopping platforms due to their ease of use and simplicity in sharing buying experience and aggregating evaluations. This paper studies the effect of electronic word of mouth (eWOM) communication among a closed community of book readers. We studied the entire market of Amazon Shorts e\u2010books, which are digital mi\u2010 croproducts sold at a low and uniform price. With the minimal role of price in the buying decision, social discussion via eWOM becomes a collective signal of reputation, and ultimately a significant demand driver. Our empirical study suggests that eWOM can be used to convey the reputation of the product (e.g., the book), the reputation of the brand (i.e., the author), and the reputation of complementary goods (e.g., books in the same category). Until newer social shopping technologies gain acceptance, eWOM technolo\u2010 gies should be considered by both e\u2010tailers and shoppers as the first and perhaps primary source of social buying experience."}
{"_id": "47dec4bc234f03fd6dbc0728cfaae4cccde5cd7c", "title": "EARNING INDEPENDENT CAUSAL MECHANISMS", "text": "Independent causal mechanisms are a central concept in the study of causality with implications for machine learning tasks. In this work we develop an algorithm to recover a set of (inverse) independent mechanisms relating a distribution transformed by the mechanisms to a reference distribution. The approach is fully unsupervised and based on a set of experts that compete for data to specialize and extract the mechanisms. We test and analyze the proposed method on a series of experiments based on image transformations. Each expert successfully maps a subset of the transformed data to the original domain, and the learned mechanisms generalize to other domains. We discuss implications for domain transfer and links to recent trends in generative modeling."}
{"_id": "6efe199ccd3c46ea0dbd06a016e731d3f750b15a", "title": "Experiences in Improving Risk Management Processes Using the Concepts of the Riskit Method", "text": "This paper describes experiences from two organizations that have used the Riskit method for risk management in their software projects. This paper presents the Riskit method, the organizations involved, case study designs, and findings from case studies. We focus on the experiences and insights gained through the application of the method in industrial context and propose some general conclusions based on the case studies."}
{"_id": "42f7bb3f5e07cf8bf3f73fc75bddc6e9b845b085", "title": "Convolutional Recurrent Deep Learning Model for Sentence Classification", "text": "As the amount of unstructured text data that humanity produces overall and on the Internet grows, so does the need to intelligently to process it and extract different types of knowledge from it. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have been applied to natural language processing systems with comparative, remarkable results. The CNN is a noble approach to extract higher level features that are invariant to local translation. However, it requires stacking multiple convolutional layers in order to capture long-term dependencies, due to the locality of the convolutional and pooling layers. In this paper, we describe a joint CNN and RNN framework to overcome this problem. Briefly, we use an unsupervised neural language model to train initial word embeddings that are further tuned by our deep learning network, then, the pre-trained parameters of the network are used to initialize the model. At a final stage, the proposed framework combines former information with a set of feature maps learned by a convolutional layer with long-term dependencies learned via long-short-term memory. Empirically, we show that our approach, with slight hyperparameter tuning and static vectors, achieves outstanding results on multiple sentiment analysis benchmarks. Our approach outperforms several existing approaches in term of accuracy; our results are also competitive with the state-of-the-art results on the Stanford Large Movie Review data set with 93.3% accuracy, and the Stanford Sentiment Treebank data set with 48.8% fine-grained and 89.2% binary accuracy, respectively. Our approach has a significant role in reducing the number of parameters and constructing the convolutional layer followed by the recurrent layer as a substitute for the pooling layer. Our results show that we were able to reduce the loss of detailed, local information and capture long-term dependencies with an efficient framework that has fewer parameters and a high level of performance."}
{"_id": "da16883de44b888d983e1252315c78ae7fcdd9dc", "title": "Predicting The Next App That You Are Going To Use", "text": "Given the large number of installed apps and the limited screen size of mobile devices, it is often tedious for users to search for the app they want to use. Although some mobile OSs provide categorization schemes that enhance the visibility of useful apps among those installed, the emerging category of homescreen apps aims to take one step further by automatically organizing the installed apps in a more intelligent and personalized way. In this paper, we study how to improve homescreen apps' usage experience through a prediction mechanism that allows to show to users which app she is going to use in the immediate future. The prediction technique is based on a set of features representing the real-time spatiotemporal contexts sensed by the homescreen app. We model the prediction of the next app as a classification problem and propose an effective personalized method to solve it that takes full advantage of human-engineered features and automatically derived features. Furthermore, we study how to solve the two naturally associated cold-start problems: app cold-start and user cold-start. We conduct large-scale experiments on log data obtained from Yahoo Aviate, showing that our approach can accurately predict the next app that a person is going to use."}
{"_id": "441836b27ce2d1ac795ab5b42dd203514a5f46c3", "title": "Osteology of Ischikauia steenackeri (Teleostei: Cypriniformes) with comments on its systematic position", "text": "The lakeweed chub Ischikauia steenackeri is a medium-sized, herbivorous fish and the sole extant member of the genus Ischikauia, which is endemic to Lake Biwa and the Yodo River drainage, Japan. In order to clarify its systematic position, the skeletal anatomy of I. steenackeri is described and its relationships with related genera are discussed. The present data suggest the monophyly of Ischikauia and seven cultrine genera (Culter, Chanodichthys, Megalobrama, Sinibrama, Hemiculter, Toxabramis, and Anabarilius) based on a unique character, the metapterygoid elongated dorsally. Additionally, our data suggest that Ischikauia is closely related to Culter, Chanodichthys, Megalobrama, and Sinibrama. This relationship is supported by three synapomorphies that are common to them: a narrow third infraorbital, dorsal extension of the third supraneural, and a large quadrate foramen."}
{"_id": "1e4558354f9509ab9992001340f88be9f298debe", "title": "Feature Selection for Unsupervised Learning", "text": "In this paper, we identify two issues involved in developing an automated feature subset selection algorithm for unlabeled data: the need for finding the number of clusters in conjunction with feature selection, and the need for normalizing the bias of feature selection criteria with respect to dimension. We explore the feature selection problem and these issues through FSSEM (Feature Subset Selection using Expectation-Maximization (EM) clustering) and through two different performance criteria for evaluating candidate feature subsets: scatter separability and maximum likelihood. We present proofs on the dimensionality biases of these feature criteria, and present a cross-projection normalization scheme that can be applied to any criterion to ameliorate these biases. Our experiments show the need for feature selection, the need for addressing these two issues, and the effectiveness of our proposed solutions."}
{"_id": "36dd3331060e5e6157d9558563b95253308709cb", "title": "Machine learning: a review of classification and combining techniques", "text": "Supervised classification is one of the tasks most frequently carried out by so-called Intelligent Systems. Thus, a large number of techniques have been developed based on Artificial Intelligence (Logic-based techniques, Perceptron-based techniques) and Statistics (Bayesian Networks, Instance-based techniques). The goal of supervised learning is to build a concise model of the distribution of class labels in terms of predictor features. The resulting classifier is then used to assign class labels to the testing instances where the values of the predictor features are known, but the value of the class label is unknown. This paper describes various classification algorithms and the recent attempt for improving classification accuracy\u2014ensembles of classifiers."}
{"_id": "9971ff92d9d09b4736228bdbd7726e2e19b9aa2d", "title": "Radiomics: extracting more information from medical images using advanced feature analysis.", "text": "Solid cancers are spatially and temporally heterogeneous. This limits the use of invasive biopsy based molecular assays but gives huge potential for medical imaging, which has the ability to capture intra-tumoural heterogeneity in a non-invasive way. During the past decades, medical imaging innovations with new hardware, new imaging agents and standardised protocols, allows the field to move towards quantitative imaging. Therefore, also the development of automated and reproducible analysis methodologies to extract more information from image-based features is a requirement. Radiomics--the high-throughput extraction of large amounts of image features from radiographic images--addresses this problem and is one of the approaches that hold great promises but need further validation in multi-centric settings and in the laboratory."}
{"_id": "0345aa6780b7daf1f04698d8fe64f5aae9e0233a", "title": "Reduction Techniques for Instance-Based Learning Algorithms", "text": "Instance-based learning algorithms are often faced with the problem of deciding which instances to store for use during generalization. Storing too many instances can result in large memory requirements and slow execution speed, and can cause an oversensitivity to noise. This paper has two main purposes. First, it provides a survey of existing algorithms used to reduce storage requirements in instance-based learning algorithms and other exemplar-based algorithms. Second, it proposes six additional reduction algorithms called DROP1\u2013DROP5 and DEL (three of which were first described in Wilson & Martinez, 1997c, as RT1\u2013RT3) that can be used to remove instances from the concept description. These algorithms and 10 algorithms from the survey are compared on 31 classification tasks. Of those algorithms that provide substantial storage reduction, the DROP algorithms have the highest average generalization accuracy in these experiments, especially in the presence of uniform class noise."}
{"_id": "0bd261bac58d7633138350db3727d60be8a94f29", "title": "Decision Tree Induction Based on Efficient Tree Restructuring", "text": "The ability to restructure a decision tree efficiently enables a variety of approaches to decision tree induction that would otherwise be prohibitively expensive. Two such approaches are described here, one being incremental tree induction (ITI), and the other being non-incremental tree induction using a measure of tree quality instead of test quality (DMTI). These approaches and several variants offer new computational and classifier characteristics that lend themselves to particular applications."}
{"_id": "140a6bdfb0564eb18a1f51a39dff36f20272a461", "title": "Automatic camera and range sensor calibration using a single shot", "text": "As a core robotic and vision problem, camera and range sensor calibration have been researched intensely over the last decades. However, robotic research efforts still often get heavily delayed by the requirement of setting up a calibrated system consisting of multiple cameras and range measurement units. With regard to removing this burden, we present a toolbox with web interface for fully automatic camera-to-camera and camera-to-range calibration. Our system is easy to setup and recovers intrinsic and extrinsic camera parameters as well as the transformation between cameras and range sensors within one minute. In contrast to existing calibration approaches, which often require user intervention, the proposed method is robust to varying imaging conditions, fully automatic, and easy to use since a single image and range scan proves sufficient for most calibration scenarios. Experimentally, we demonstrate that the proposed checkerboard corner detector significantly outperforms current state-of-the-art. Furthermore, the proposed camera-to-range registration method is able to discover multiple solutions in the case of ambiguities. Experiments using a variety of sensors such as grayscale and color cameras, the Kinect 3D sensor and the Velodyne HDL-64 laser scanner show the robustness of our method in different indoor and outdoor settings and under various lighting conditions."}
{"_id": "0a388e51dcc3ac6d38687e1e557e8a000b97b45a", "title": "A taxonomy for multi-agent robotics", "text": "A key di culty in the design of multi-agent robotic systems is the size and complexity of the space of possible designs. In order to make principled design decisions, an understanding of the many possible system con gurations is essential. To this end, we present a taxonomy that classi es multiagent systems according to communication, computational and other capabilities. We survey existing e orts involving multi-agent systems according to their positions in the taxonomy. We also present additional results concerning multi-agent systems, with the dual purposes of illustrating the usefulness of the taxonomy in simplifying discourse about robot collective properties, and also demonstrating that a collective can be demonstrably more powerful than a single unit of the collective."}
{"_id": "da67375c8b6a250fbd5482bfbfce14f4eb7e506c", "title": "A Survey of Artificial Cognitive Systems: Implications for the Autonomous Development of Mental Capabilities in Computational Agents", "text": "This survey presents an overview of the autonomous development of mental capabilities in computational agents. It does so based on a characterization of cognitive systems as systems which exhibit adaptive, anticipatory, and purposive goal-directed behavior. We present a broad survey of the various paradigms of cognition, addressing cognitivist (physical symbol systems) approaches, emergent systems approaches, encompassing connectionist, dynamical, and enactive systems, and also efforts to combine the two in hybrid systems. We then review several cognitive architectures drawn from these paradigms. In each of these areas, we highlight the implications and attendant problems of adopting a developmental approach, both from phylogenetic and ontogenetic points of view. We conclude with a summary of the key architectural features that systems capable of autonomous development of mental capabilities should exhibit"}
{"_id": "c9904943c2d0dbbb8cf7dfc6649a9cd8ff10526b", "title": "Lexical Analysis for the Webshell Attacks", "text": "In this paper, we proposed a design considering for web application firewall system. Anyway, the website managing, web technology and hacker technology are vastly different from each other, whilst hacker techs are keep on moving forward, while company and government MIS are difficult of updating their information to the latest state due to working, they are often not updated and followed to the latest stuffs. And moreover, these newest techs are in generally needed to be figured out whether it matches the compatibility of the old versions, such as the recent PHP 7, many MIS are not capable to update their software to handle the variants of WAF bypass methods."}
{"_id": "517f6372302bcbb458f253d840d4b3c467c889bd", "title": "Tagging The Web: Building A Robust Web Tagger with Neural Network", "text": "In this paper, we address the problem of web-domain POS tagging using a twophase approach. The first phase learns representations that capture regularities underlying web text. The representation is integrated as features into a neural network that serves as a scorer for an easy-first POS tagger. Parameters of the neural network are trained using guided learning in the second phase. Experiment on the SANCL 2012 shared task show that our approach achieves 93.15% average tagging accuracy, which is the best accuracy reported so far on this data set, higher than those given by ensembled syntactic parsers."}
{"_id": "f1309720a0c32f22f5a1d2c65b4f09d844ac8516", "title": "A hardware CABAC encoder for HEVC", "text": "This paper presents a hardware design of context-based adaptive binary arithmetic coding (CABAC) for the emerging High efficiency video coding (HEVC) standard. While aiming at higher compression efficiency, the CABAC in HEVC also invests a lot of effort in the pursuit of parallelism and reducing hardware cost. Simulation results show that our design processes 1.18 bins per cycle on average. It can work at 357 MHz with 48.940K gates targeting 0.13 \u03bcm CMOS process. This processing rate can support real-time encoding for all sequences under common test conditions of HEVC standard conforming to the main profile level 6.1 of main tier or main profile level 5.1 of high tier."}
{"_id": "73defb621377dfd4680b3ed1c4d7027cc1a796f0", "title": "Cervical Cancer Diagnosis Using Random Forest Classifier With SMOTE and Feature Reduction Techniques", "text": "Cervical cancer is the fourth most common malignant disease in women\u2019s worldwide. In most cases, cervical cancer symptoms are not noticeable at its early stages. There are a lot of factors that increase the risk of developing cervical cancer like human papilloma virus, sexual transmitted diseases, and smoking. Identifying those factors and building a classification model to classify whether the cases are cervical cancer or not is a challenging research. This study aims at using cervical cancer risk factors to build classification model using Random Forest (RF) classification technique with the synthetic minority oversampling technique (SMOTE) and two feature reduction techniques recursive feature elimination and principle component analysis (PCA). Most medical data sets are often imbalanced because the number of patients is much less than the number of non-patients. Because of the imbalance of the used data set, SMOTE is used to solve this problem. The data set consists of 32 risk factors and four target variables: Hinselmann, Schiller, Cytology, and Biopsy. After comparing the results, we find that the combination of the random forest classification technique with SMOTE improve the classification performance."}
{"_id": "c875fdb3efe590b6ccbbf7c27d67338be736c102", "title": "Social media-induced technostress: Its impact on the job performance of it professionals and the moderating role of job characteristics", "text": "Using social media during work hours for non-work-related reasons is becoming commonplace. Organizations are therefore challenged with identifying and overcoming the consequences of such use. Social media-induced technostress has been identified as an important unintended consequence of using social media at work, as it could negatively impact job performance. This study draws on Person-Environment Fit to investigate the relationship between social media-induced technostress and job performance in IT professionals, and the moderating effect of job characteristics on this relationship. The results indicate that social media-induced technostress is negatively related to job performance and the negative impact of social media-induced technostress is intensified when the job characteristics are low. This work extends the literature on job-stress, social media, technostress, and job characteristics."}
{"_id": "a8b916062661ddacc54d586fa5676344130b84b5", "title": "Mining long-term search history to improve search accuracy", "text": "Long-term search history contains rich information about a user's search preferences, which can be used as search context to improve retrieval performance. In this paper, we study statistical language modeling based methods to mine contextual information from long-term search history and exploit it for a more accurate estimate of the query language model. Experiments on real web search data show that the algorithms are effective in improving search accuracy for both fresh and recurring queries. The best performance is achieved when using clickthrough data of past searches that are related to the current query."}
{"_id": "5113d4b4737841fbce92427e3252f81fa99397f2", "title": "Cobalt fill for advanced interconnects", "text": "Cobalt as a material promises to change the conductor landscape in many areas, particularly logic contact and interconnect. In this paper, we focus on Cobalt as the conductor for logic interconnect \u2014 a potential Cu replacement. We demonstrate Co fill capability down to 10nm CD, and show that Co-Cu resistivity cross-over will occur below 14nm CD. Single damascene Co-ULK structures are used to establish an optimized metallization stack with robust CMP performance that has electromigration and TDDB reliability better than copper interconnect."}
{"_id": "0b48ef877d32694b4f34b618e3994d94203fe242", "title": "We're in it together: interpersonal management of disclosure in social network services", "text": "The workload needed for managing privacy and publicness in current social network services (SNSs) is placed on individuals, yet people have few means to control what others disclose about them. This paper considers SNS-users' concerns in relation to online disclosure and the ways in which they cope with these both individually and collaboratively. While previous work has focused mainly on individual coping strategies, our findings from a qualitative study with 27 participants suggest that collaborative strategies in boundary regulation are of additional importance. We present a framework of strategies for boundary regulation that informs both theoretical work and design practice related to management of disclosure in SNSs. The framework considers disclosure as an interpersonal process of boundary regulation, in which people are dependent on what others choose to disclose about them. The paper concludes by proposing design solutions supportive of collaborative and preventive strategies in boundary regulation that facilitate the management of disclosure online."}
{"_id": "159b52158512481df7684c341401efbdbc5d8f02", "title": "Object Detection with Active Sample Harvesting", "text": "The work presented in this dissertation lies in the domains of image classification, object detection, and machine learning. Whether it is training image classifiers or object detectors, the learning phase consists in finding an optimal boundary between populations of samples. In practice, all the samples are not equally important: some examples are trivially classified and do not bring much to the training, while others close to the boundary or misclassified are the ones that truly matter. Similarly, images where the samples originate from are not all rich in informative samples. However, most training procedures select samples and images uniformly or weigh them equally. The common thread of this dissertation is how to efficiently find the informative samples/images for training. Although we never consider all the possible samples \u201cin the world\u201d, our purpose is to select the samples in a smarter manner, without looking at all the available ones. The framework adopted in this work consists in organising the data (samples or images) in a tree to reflect the statistical regularities of the training samples, by putting \u201csimilar\u201d samples in the same branch. Each leaf carries a sample and a weight related to the \u201cimportance\u201d of the corresponding sample, and each internal node carries statistics about the weights below. The tree is used to select the next sample/image for training, by applying a sampling policy, and the \u201cimportance\u201d weights are updated accordingly, to bias the sampling towards informative samples/images in future iterations. Our experiments show that, in the various applications, properly focusing on informative images or informative samples improves the learning phase by either reaching better performances faster or by reducing the training loss faster."}
{"_id": "2e7d283fd0064da3a97fef9562a8cc1bd55fa84a", "title": "Behavioral resource-aware model inference", "text": "Software bugs often arise because of differences between what developers think their system does and what the system actually does. These differences frustrate debugging and comprehension efforts. We describe Perfume, an automated approach for inferring behavioral, resource-aware models of software systems from logs of their executions. These finite state machine models ease understanding of system behavior and resource use.\n Perfume improves on the state of the art in model inference by differentiating behaviorally similar executions that differ in resource consumption. For example, Perfume separates otherwise identical requests that hit a cache from those that miss it, which can aid understanding how the cache affects system behavior and removing cache-related bugs. A small user study demonstrates that using Perfume is more effective than using logs and another model inference tool for system comprehension. A case study on the TCP protocol demonstrates that Perfume models can help understand non-trivial protocol behavior. Perfume models capture key system properties and improve system comprehension, while being reasonably robust to noise likely to occur in real-world executions."}
{"_id": "6b1790a5f326831727b41b29c9246bc78f928c78", "title": "High-level visual features for underwater place recognition", "text": "This paper reports on a method to perform robust visual relocalization between temporally separated sets of underwater images gathered by a robot. The place recognition and relocalization problem is more challenging in the underwater environment mainly due to three factors: 1) changes in illumination; 2) long-term changes in the visual appearance of features because of phenomena like biofouling on man-made structures and growth or movement in natural features; and 3) low density of visually salient features for image matching. To address these challenges, a patch-based feature matching approach is proposed, which uses image segmentation and local intensity contrast to locate salient patches and HOG description to make correspondences between patches. Compared to traditional point-based features that are sensitive to dramatic appearance changes underwater, patch-based features are able to encode higher level information such as shape or structure which tends to persist across years in underwater environments. The algorithm is evaluated on real data, from multiple years, collected by a Hovering Autonomous Underwater Vehicle for ship hull inspection. Results in relocalization performance across missions from different years are compared to other traditional methods."}
{"_id": "bced698f9a71ec586931c6deefd6d3c55f19d02d", "title": "Increasing individual upper alpha power by neurofeedback improves cognitive performance in human subjects.", "text": "The hypothesis was tested of whether neurofeedback training (NFT)--applied in order to increase upper alpha but decrease theta power--is capable of increasing cognitive performance. A mental rotation task was performed before and after upper alpha and theta NFT. Only those subjects who were able to increase their upper alpha power (responders) performed better on mental rotations after NFT. Training success (extent of NFT-induced increase in upper alpha power) was positively correlated with the improvement in cognitive performance. Furthermore, the EEG of NFT responders showed a significant increase in reference upper alpha power (i.e. in a time interval preceding mental rotation). This is in line with studies showing that increased upper alpha power in a prestimulus (reference) interval is related to good cognitive performance."}
{"_id": "38c76a87a94818209e0da82f68a95d0947ce5d25", "title": "The Impact of Blogging and Scaffolding on Primary School Pupils' Narrative Writing: A Case Study", "text": "Narrative writing is a skill that all primary (elementary) school pupils in Singapore are required to develop in their learning of the English language. However, this is an area in which not all pupils excel. This study investigates if the use of blogging and scaffolding can improve pupils\u2019 narrative writing. Data were gathered from 36 primary five (grade five) pupils through pre-post writing tests, reflection sheets, and interviews. The pre-post writing tests were administered before and after the pupils had completed their blogging activities, while the blogs were used to draft their narrative writings and to comment on their peers\u2019 writings. The teacher also used a writing guide that served as a scaffold to help pupils plan their writing on their blogs. Overall, results showed a statistically significant difference of medium effect size between the pre-post test scores. Pupils\u2019 perceptions of using blogs as a tool for writing were also explored. attend school (Huffaker, 2004). In order to have success throughout life, verbal literacy is crucial and especially so from the beginnings of education to the future employment of adults (Cassell, 2004). In Singapore, the primary education system consists of a six-year program: a four-year foundation stage from primary (grade) 1 to 4 and a two-year orientation stage from primary 5 to 6 (Ministry of Education, 2010). One of the overall aims of the Singapore primary education is to give pupils a good grasp of the English Language. Literacy development such DOI: 10.4018/jwltt.2010040101 2 International Journal of Web-Based Learning and Teaching Technologies, 5(2), 1-17, April-June 2010 Copyright \u00a9 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. as reading and writing is at the heart of the English Language instructional programme in Singapore schools (Curriculum Planning and Development Division (CPDD), 2001). As indicated in the 2001 English Language Syllabus"}
{"_id": "8ff1f35121c13cabc0b070f23edc46ea88d09b38", "title": "Combined software-defined network (SDN) and Internet of Things (IoT)", "text": "The Internet of Things (IoT) is an impending technology that captured the industries and research interests in a brief period. Presently, the number of Internet-connected devices are estimated at billions and by 2020 many studies expect to have around 50 billion devices that are connected to the Internet. The IoT devices have produced a significant amount of data that could be threatening to the security and directing the flow of data in the network. One of the alternative solutions for problems caused by IoT is to have some kind of solutions that are based on programming and primary control. Software Define Network (SDN) offers a programmable and primary control for the basic network without variability of the current architecture of the network. This paper discusses some solutions for various domains for the combined SDN based on IoT and current directions in studies. The relative analysis of these solutions prepares a simple and brief view of those directions."}
{"_id": "107971a12550051421d4c570b3e9f285df09d6a8", "title": "Approximation Algorithms", "text": "There are many variations of local search: Hill-climbing: The name for the basic method described above. Metropolis: Pick a random y \u2208 N(x). If f(y) < f(x), set x = y. If f(y) > f(x), set x = y with some probability. Simulated annealing: Like Metropolis, but where the probability of moving to a higher-cost neighbor decreases over time. Tabu search: Like the previous two, but with memory in order to avoid getting stuck in suboptimal regions or in plateaus (\u201ctaboo\u201d areas). Parallel search: Do more than one search, and occasionally replace versions that are doing poorly with copies of versions that are doing well. Genetic algorithm: Keep a population of searches that changes over time via \u201cbreeding\u201d of the best-performing searches."}
{"_id": "c7d0dc4a94b027b0242cd7cf82c9779dc0dd8b33", "title": "Vitamins, minerals, and mood.", "text": "In this article, the authors explore the breadth and depth of published research linking dietary vitamins and minerals (micronutrients) to mood. Since the 1920s, there have been many studies on individual vitamins (especially B vitamins and Vitamins C, D, and E), minerals (calcium, chromium, iron, magnesium, zinc, and selenium), and vitamin-like compounds (choline). Recent investigations with multi-ingredient formulas are especially promising. However, without a reasonable conceptual framework for understanding mechanisms by which micronutrients might influence mood, the published literature is too readily dismissed. Consequently, 4 explanatory models are presented, suggesting that mood symptoms may be expressions of inborn errors of metabolism, manifestations of deficient methylation reactions, alterations of gene expression by nutrient deficiency, and/or long-latency deficiency diseases. These models provide possible explanations for why micronutrient supplementation could ameliorate some mental symptoms."}
{"_id": "8ff70a1aaa2aeeef42104b3b4f1e686f7be07243", "title": "NameClarifier: A Visual Analytics System for Author Name Disambiguation", "text": "In this paper, we present a novel visual analytics system called NameClarifier to interactively disambiguate author names in publications by keeping humans in the loop. Specifically, NameClarifier quantifies and visualizes the similarities between ambiguous names and those that have been confirmed in digital libraries. The similarities are calculated using three key factors, namely, co-authorships, publication venues, and temporal information. Our system estimates all possible allocations, and then provides visual cues to users to help them validate every ambiguous case. By looping users in the disambiguation process, our system can achieve more reliable results than general data mining models for highly ambiguous cases. In addition, once an ambiguous case is resolved, the result is instantly added back to our system and serves as additional cues for all the remaining unidentified names. In this way, we open up the black box in traditional disambiguation processes, and help intuitively and comprehensively explain why the corresponding classifications should hold. We conducted two use cases and an expert review to demonstrate the effectiveness of NameClarifier."}
{"_id": "2fb9660e25df30a61803f33b74d37021288afe14", "title": "LegUp: high-level synthesis for FPGA-based processor/accelerator systems", "text": "In this paper, we introduce a new open source high-level synthesis tool called LegUp that allows software techniques to be used for hardware design. LegUp accepts a standard C program as input and automatically compiles the program to a hybrid architecture containing an FPGA-based MIPS soft processor and custom hardware accelerators that communicate through a standard bus interface. Results show that the tool produces hardware solutions of comparable quality to a commercial high-level synthesis tool."}
{"_id": "11764d0125d5687cef2945a3bf335775835d192b", "title": "Differential Evolution-A simple and efficient adaptive scheme for global optimization over continuous spaces by", "text": "A new heuristic approach for minimizing possibly nonlinear and non differentiable continuous space functions is presented. By means of an extensive testbed, which includes the De Jong functions, it will be demonstrated that the new method converges faster and with more certainty than Adaptive Simulated Annealing as well as the Annealed Nelder&Mead approach, both of which have a reputation for being very powerful. The new method requires few control variables, is robust, easy to use and lends itself very well to parallel computation. ________________________________________ 1)International Computer Science Institute, 1947 Center Street, Berkeley, CA 94704-1198, Suite 600, Fax: 510-643-7684. E-mail: storn@icsi.berkeley.edu. On leave from Siemens AG, ZFE T SN 2, OttoHahn-Ring 6, D-81739 Muenchen, Germany. Fax: 01149-636-44577, Email:rainer.storn@zfe.siemens.de. 2)836 Owl Circle, Vacaville, CA 95687, kprice@solano.community.net."}
{"_id": "777896b0f4cb6d2823e15a9265b890c0db9b6de5", "title": "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions", "text": "This paper presents a variant of particle swarm optimizers (PSOs) that we call the comprehensive learning particle swarm optimizer (CLPSO), which uses a novel learning strategy whereby all other particles' historical best information is used to update a particle's velocity. This strategy enables the diversity of the swarm to be preserved to discourage premature convergence. Experiments were conducted (using codes available from http://www.ntu.edu.sg/home/epnsugan) on multimodal test functions such as Rosenbrock, Griewank, Rastrigin, Ackley, and Schwefel and composition functions both with and without coordinate rotation. The results demonstrate good performance of the CLPSO in solving multimodal problems when compared with eight other recent variants of the PSO."}
{"_id": "1e4cddd9d4b89a9dbdf854aaf84b584ba47386ab", "title": "Hybrid Particle Swarm Optimiser with breeding and subpopulations", "text": "In this paper we present two hybrid Particle Swarm Optimisers combining the idea of the particle swarm with concepts from Evolutionary Algorithms. The hybrid PSOs combine the traditional velocity and position update rules with the ideas of breeding and subpopulations. Both hybrid models were tested and compared with the standard PSO and standard GA models. This is done to illustrate that PSOs with breeding strategies have the potential to achieve faster convergence and the potential to find a better solution. The objective of this paper is to describe how to make the hybrids benefit from genetic methods and to test their potential and competetiveness on function optimisation."}
{"_id": "2483949663c6ba1632924e5a891edf7c8e2d3674", "title": "The particle swarm - explosion, stability, and convergence in a multidimensional complex space", "text": "The particle swarm is an algorithm for finding optimal regions of complex search spaces through the interaction of individuals in a population of particles. Even though the algorithm, which is based on a metaphor of social interaction, has been shown to perform well, researchers have not adequately explained how it works. Further, traditional versions of the algorithm have had some undesirable dynamical properties, notably the particles\u2019 velocities needed to be limited in order to control their trajectories. The present paper analyzes a particle\u2019s trajectory as it moves in discrete time (the algebraic view), then progresses to the view of it in continuous time (the analytical view). A five-dimensional depiction is developed, which describes the system completely. These analyses lead to a generalized model of the algorithm, containing a set of coefficients to control the system\u2019s convergence tendencies. Some results of the particle swarm optimizer, implementing modifications derived from the analysis, suggest methods for altering the original algorithm in ways that eliminate problems and increase the ability of the particle swarm to find optima of some well-studied test functions."}
{"_id": "bcc63fa7900d45b6b939feb430cf154a6605be17", "title": "Simulated annealing: Practice versus theory", "text": "Simulated annealing (SA) presents an optimization technique with several striking positive and negative features. Perhaps its most salient feature, statistically promising to deliver an optimal solution, in current practice is often spurned to use instead modified faster algorithms, \u201csimulated quenching\u201d (SQ). Using the author\u2019s Adaptive Simulated Annealing (ASA) code, some examples are given which demonstrate how SQ can be much faster than SA without sacrificing accuracy."}
{"_id": "57a5b23825df99aa8ec0fc9f1eaacc2042a70200", "title": "Deep Image Segmentation by Quality Inference", "text": "Traditionally, convolutional neural networks are trained for semantic segmentation by having an image given as input and the segmented mask as output. In this work, we propose a neural network trained by being given an image and mask pair, with the output being the quality of that pairing. The segmentation is then created afterwards through backpropagation on the mask. This allows enriching training with semi-supervised synthetic variations on the ground-truth. The proposed iterative segmentation technique allows improving an existing segmentation or creating one from scratch. We compare the performance of the proposed methodology with state-of-the-art deep architectures for image segmentation and achieve competitive results, being able to improve their segmentations."}
{"_id": "4808afa14fcb89cffcba491fbae284383c39fb17", "title": "1 ACO Algorithms for the Traveling Salesman Problemy", "text": "Ant algorithms [18, 14, 19] are a recently developed, population-based approach which has been successfully applied to several NP-hard combinatorial optimization problems [6, 13, 17, 23, 34, 40, 49]. As the name suggests, ant algorithms have been inspired by the behavior of real ant colonies, in particular, by their foraging behavior. One of the main ideas of ant algorithms is the indirect communication of a colony of agents, called (artificial) ants, based on pheromone trails (pheromones are also used by real ants for communication). The (artificial) pheromone trails are a kind of distributed numeric information which is modified by the ants to reflect their experience while solving a particular problem. Recently, the Ant Colony Optimization (ACO) metaheuristic has been proposed which provides a unifying framework for most applications of ant algorithms [15, 16] to combinatorial optimization problems. In particular, all the ant algorithms applied to the TSP fit perfectly into the ACO meta-heuristic and, therefore, we will call these algorithms also ACO algorithms. The first ACO algorithm, called Ant System (AS) [18, 14, 19], has been applied to the Traveling Salesman Problem (TSP). Starting from Ant System, several improvements of the basic algorithm have been proposed [21, 22, 17, 51, 53, 7]. Typically, these improved algorithms have been tested again on the TSP. All these improved versions of AS have in common a stronger exploita-"}
{"_id": "1d262c871b775990a34dbfe8b8f07db69f31deb6", "title": "Variational shape approximation", "text": "A method for concise, faithful approximation of complex 3D datasets is key to reducing the computational cost of graphics applications. Despite numerous applications ranging from geometry compression to reverse engineering, efficiently capturing the geometry of a surface remains a tedious task. In this paper, we present both theoretical and practical contributions that result in a novel and versatile framework for geometric approximation of surfaces. We depart from the usual strategy by casting shape approximation as a variational geometric partitioning problem. Using the concept of geometric proxies, we drive the distortion error down through repeated clustering of faces into best-fitting regions. Our approach is entirely discrete and error-driven, and does not require parameterization or local estimations of differential quantities. We also introduce a new metric based on normal deviation, and demonstrate its superior behavior at capturing anisotropy."}
{"_id": "68711fe2a5d89e3f13855e6e9a83f735b3d6dfab", "title": "End-user feature labeling: Supervised and semi-supervised approaches based on locally-weighted logistic regression", "text": "When intelligent interfaces, such as intelligent desktop assistants, email classif iers, and recommender systems, customize themselves to a particular end user , such customizations can decrease productivity and increase frustration due to inacc urate predictions\u2014especially in early stages when training data is limited. The end user can improve the learning algorithm by tediously labeling a substantial amoun t of additional training data, but this takes time and is too ad hoc to target a particular area of inaccuracy. To solve this problem, we propose new supervised and semi-supervised learning algorithms based on locally weighted logistic regression for feature labeling by end users , enabling them to point out which features are important for a class, rath er than provide new training instances. We first evaluate our algorithms against other feature labeling algorit hms under idealized conditions using feature labels generated by an oracle . In addition, another of our contributions is an evaluation of feature labeling algorithms under real world conditions using feature labels harvested from actual end users in our user study. Our user study is the first statistical user study for feature labeling invo lvi g a large number of end users (43 participants), all of whom have no background in machine learning. Our supervised and semi-supervised algorithms were among the best perfo rmers when compared to other feature labeling algorithms in the idealized setting and they are also robust to poor quality feature labels provided by ordinary end u sers in our study. We also perform an analysis to investigate the relative gains of incorporatin g the different sources of knowledge available in the labeled training set, the feature labels and the unlabeled data. Together, our results strongly suggest that feature labeling by e d users is both viable and effective for allowing end users to improve the learnin g algorithm behind their customized applicati ons. * Corresponding author E-mail address: dassh@eecs.oregonstate.edu (Shubhomoy Das, 1148 Kelley Engineering Center, Corvallis, OR 97331-5501, USA, Ph: 1-541-908-6949). 1 Early versions of portions of this work appeared in [36, 37] S. Das, T. Moore, W-K. Wong et. al./Artificial Intelligence \u00a9 2012 Elsevier B.V. All rights reserved."}
{"_id": "7d40c75a12b98263eb61701d3c670c147cdd0054", "title": "On the Kinematic Analysis of Robotic Mechanisms", "text": "The kinematic analyses, of manipulators and other robotic devices composed of mechanical links, usually depend on the solution of sets of nonlinear equations. There are a variety of both numerical and algebraic techniques available to solve such systems of equations and to give bounds on the number of solutions. These solution methods have also led to an understanding of how special choices of the various structural parameters of a mechanism influence the number of solutions inherent to the kinematic geometry of a given structure. In this paper, results from studying the kinematic geometry of such systems are reviewed, and the three most useful solution techniques are summarized. The solution techniques are polynomial continuation, Gr\u00f6bner bases, and elimination. We then discuss the results that have been obtained with these techniques in the solution of two basic problems, namely, the inverse kinematics for serial-chain manipulators, and the direct kinematics of in-parallel platform devices. KEY WORDS\u2014inverse kinematics, direct kinematics, manipulators, in-parallel manipulators, series manipulators, polynomial continuation, characteristic polynomial"}
{"_id": "24c495e35cd75a44d8cd0b59421c8977dbc75e0c", "title": "The spread of alcohol consumption behavior in a large social network.", "text": "Context A person's alcohol use might mirror that of his or her social contacts. Contribution Using the same group of Framingham Heart Study participants who helped to define the associations between social networks and other health behaviors, the researchers found that alcohol use was similar among individuals in a social cluster. Furthermore, changes in a person's alcohol intake over time followed changes in the alcohol intake of their social contacts. Caution Alcohol use was self-reported, and the researchers did not have access to social contacts who were not participating in the study. Implication Changing alcohol use may require intervening with social groups as well as with individuals. The Editors Alcohol use is common in the United States. In 2002, 55% of adults reported having had at least 1 drink in the previous month, and the prevalence of past-month alcohol consumption was somewhat higher for men (62%) than for women (48%) (1). The lifetime prevalence of alcohol use disorders has been measured at 14.6% (1). Excessive alcohol use, either in the form of heavy drinking or binge drinking, increases the risk for numerous health and social problems (2, 3), and approximately 75000 deaths in 2001 were attributable to excessive alcohol use, which makes it the third-leading lifestyle-related cause of death (3). Alcohol consumption behavior has many determinants. Previous studies (3, 4) suggest that biological factors have a significant effect on the progression from experimentation to regular use and that social and cultural factors play a critical role in experimentation with alcohol and the development of drinking patterns over time. Given the social nature of this behavior, it is not surprising that previous work has identified interactions with friends and family members as key factors (48). Although this literature primarily focused on cross-sectional panels, some studies (68) have attempted to test whether social influences act over time. These studies, which focused on peer influence among college students, showed inconsistent results and tended to focus just on pairs of connected persons. The study of social influences on behavior has expanded in recent years to the study of networks of linked individuals over time (9). Recent work in this area has shown that various health-related phenomena, ranging from sexually transmitted diseases to obesity, smoking, and even suicide, may travel along and within social networks (1015). Using a longitudinal, dynamic network of 12067 persons, we analyzed the role of social networks in alcohol use, focusing on 1) whether clusters of heavy drinkers and abstainers existed within the network; 2) whether a person's alcohol consumption behavior was associated with that of his or her social contacts; 3) the extent to which such associations depended on the nature and direction of the social ties (for example, friends of different kinds, siblings, spouses, coworkers, or neighbors); and 4) whether gender affected the spread of alcohol consumption across social ties. Methods Source Data We used data from participants in the Framingham Heart Study (FHS). The FHS is a population-based, longitudinal, observational cohort study that was initiated in 1948 to prospectively investigate risk factors for cardiovascular disease. 
Four cohorts, who mostly represent different generations linked to an original cohort, are included in the entire FHS. Participant data, collected every 2 to 4 years, includes physical examinations, laboratory tests, noninvasive cardiac and vascular testing, battery testing (such as the Mini-Mental State Examination), questionnaire results, and basic demographic information. For our analyses, we aligned the examination waves for the original cohort with those of the second-generation offspring cohort, which allowed us to treat all participants as having been examined in 7 waves. The offspring cohort, initiated in 1971, is the source of our study's principals, or focal individuals in the network (16). However, we included other FHS participants whom the principals listed as social contacts and refer to them here as contacts. Therefore, even though principals come only from the offspring cohort, contacts are drawn from the entire set of both the original and offspring cohorts. To ascertain social network ties, we created a separate data set that linked individuals through self-described social ties, collected in each of the 7 waves of the study. We could then detect relationships between participants (for example, spouse, sibling, friend, coworker, or neighbor) and observe changes in these ties across time. Either party to a link between 2 people might identify his or her link to the other. This is most relevant to the friend link, which could exist if A nominated B or B nominated A as a friend. We also used complete records of participants' and their contacts' address in each wave since 1971 in our analyses, although we have no information about relationships that participants did not report. For each wave, we could determine who is whose neighbor and the geographic distance between persons (10, 17). Table 1 provides descriptive statistics for the 5124 principals in our sample. Table 1. Summary Statistics for Principals Measures Alcohol consumption was self-reported in all studied waves, with participants reporting their average number of drinks per week over the past year as well as the number of days within the past week during which they consumed alcohol (beer, wine, and liquor). Self-reported data are generally considered a valid and reliable source when assessing alcohol consumption, although recall measures, such as those used in this study, can be subject to recall bias from participants (18). We treated alcohol consumption as a continuous variable in some analyses (for example, number of drinks per day, calculated from participant responses) but conducted others with dichotomous cut-points, defining heavy drinkers as those who averaged more than 1 (for women) or 2 (for men) drinks per day; moderate drinkers as those whose alcohol consumption was less than the cutoff values for heavy drinkers; and abstainers as those who reported no alcohol consumption. We did not use self-reported number of days drinking in the past week as a measure in and of itself but rather as a means to calculate average number of drinks in a day. (These labels do not reflect clinical definitions of alcohol abuse or dependence.) Table 2 shows averages for the study population across time, including age, alcohol consumption, and percentages of abstainers and drinkers. 
Although the differences in how we measured heavy drinking made it difficult to compare our results with those for other population samples, the other averages for the mean-age groups in each year of the given waves are roughly similar to national averages of alcohol consumption behavior (1, 19, 20). Table 2. Average Age and Alcohol Consumption Behavior, by Examination Statistical Analysis Our first goal was to evaluate whether a person's alcohol consumption behavior was associated with that of his or her social network ties at various degrees of separation. To test this hypothesis, we took an observed clustering of persons (and their alcohol consumption behavior) within the whole network and compared them with 1000 simulated networks with the same topology and overall prevalence of drinking as the observed network, but with the incidence of drinking (for example, at least 1 drink per day) randomly distributed across the nodes (random drinking networks). If clustering occurs in drinking behavior, then the probability that a contact is a drinker given that a principal is a drinker should be higher in the observed network than in the random drinking networks (21). We used the KamadaKawai algorithm, which iteratively repositions nodes to reduce the number of ties that cross each other, to draw the networks (22). Our second goal was to examine the possible determinants of any clustering in alcohol consumption behavior. We considered 3 explanations for nonrandom clustering of alcohol consumption behavior in the network: principals might choose to associate with like contacts (homophily) (23, 24); principals and contacts might share attributes or jointly experience unobserved contemporaneous events that cause their alcohol consumption behavior to covary (omitted variables or confounding); and contacts might exert social influence or peer effects on principals (induction). The availability of dynamic, longitudinal data on both network connections and drinking behavior allowed us to distinguish between interpersonal induction of drinking and homophily (25). Our basic statistical approach involved specifying longitudinal logistic regression models in which a principal's drinking status at time t + 1 is a function of his or her various attributes, such as age, sex, and education; his or her drinking status at time t; and the drinking status of his or her contacts at times t and t + 1 (25). We used generalized estimating equation procedures to account for multiple observations of the same principal across both waves and principalcontact pairings (26). We assumed an independent working correlation structure for the clusters (27). By using a time-lagged dependent variable (lagged to the previous examination) for alcohol consumption, we eliminated serial correlation in the errors (28) (evaluated with a Lagrange multiplier test) and substantially controlled for the principal's genetic endowment and any intrinsic, stable predilection to drink. In addition, the lagged independent variable for a contact's drinking status substantially controlled for homophily (25, 29). The key variable of interest is a contact's alcohol consumption behavior at time t + 1. A significant coefficient on this variable would suggest either that a contact's drinking affects a principal's drinking or that a principal and a contact experience contemporaneous"}
{"_id": "e486a10208969181c82aada7dae2f365585a5219", "title": "Chapter 2 : Game Analytics \u2013 The Basics", "text": "Take Away Points: \uf0b7 Overview of important key terms in game analytics \uf0b7 Introduction to game telemetry as a source of business intelligence. \uf0b7 In-depth description and discussion of user-derived telemetry and metrics. \uf0b7 Introduction to feature selection in game analytics \uf0b7 Introduction to the knowledge discovery process in game analytics \uf0b7 References to essential further reading. 1 Analytics \u2013 a new industry paradigm Developing a profitable game in today's market is a challenging endeavor. Thousands of commercial titles are published yearly, across a number of hardware platforms and distribution channels, all competing for players' time and attention, and the game industry is decidedly competitive. In order to effectively develop games, a variety of tools and techniques from e.g. business practices, project management to user testing have been developed in the game industry, or adopted and adapted from other IT sectors. One of these methods is analytics, which in recent years has decidedly impacted the game industry and game research environment. Analytics is the process of discovering and communicating patterns in data, towards solving problems in business or conversely predictions for supporting enterprise decision management, driving action and/or improving performance. The meth-odological foundations for analytics are statistics, data mining, mathematics, programming and operations research, as well as data visualization in order to communicate insights learned to the relevant stakeholders. Analytics is not just the querying and reporting of BI data, but rests on actual analysis, e.g. statistical analysis, predic-tive modeling, optimization, forecasting, etc. (Davenport and Harris, 2007). Analytics typically relies on computational modeling. There are several branches or domains of analytics, e.g. marketing analytics, risk analytics, web analytics \u2013 and game analytics. Importantly, analytics is not the same thing as data analysis. Analytics is an umbrella term, covering the entire methodology of finding and communicating patterns in data, whereas analysis is used for individual applied instances, e.g. running"}
{"_id": "d7257e47cdd257fa73af986342fd966d047b8e50", "title": "Building an Intrusion Detection System Using a Filter-Based Feature Selection Algorithm", "text": "Redundant and irrelevant features in data have caused a long-term problem in network traffic classification. These features not only slow down the process of classification but also prevent a classifier from making accurate decisions, especially when coping with big data. In this paper, we propose a mutual information based algorithm that analytically selects the optimal feature for classification. This mutual information based feature selection algorithm can handle linearly and nonlinearly dependent data features. Its effectiveness is evaluated in the cases of network intrusion detection. An Intrusion Detection System (IDS), named Least Square Support Vector Machine based IDS (LSSVM-IDS), is built using the features selected by our proposed feature selection algorithm. The performance of LSSVM-IDS is evaluated using three intrusion detection evaluation datasets, namely KDD Cup 99, NSL-KDD and Kyoto 2006+ dataset. The evaluation results show that our feature selection algorithm contributes more critical features for LSSVM-IDS to achieve better accuracy and lower computational cost compared with the state-of-the-art methods."}
{"_id": "df7a9c107ee81e00db7b782304c68ca4aa7aec63", "title": "Small form factor PIFA antenna design at 28 GHz for 5G applications", "text": "This paper presents the design and analysis of lowest form factor planar inverted-F Antenna at 28GHz for 5G Mobile applications. Feeding and shorting of the antenna is done using metallic strips. The Important properties of this antenna are its small footprint (0.25\u03bbg), good gain of 4.5dBi, 10dB impedance bandwidth of 1.55 GHz and radiation efficiency of 94%. The total size of the PIFA antenna is 0.25\u03bbg. PIFA antenna is one of the antennas whose performance greatly depends on ground plane size and its position on the ground plane. It shows good radiation pattern when placed at the corner or edge."}
{"_id": "821ea10539fefb9e911dd0f93a0fec2e9c0cbff3", "title": "Amharic-English Speech Translation in Tourism Domain", "text": "This paper describes speech translation from Amharic-to-English, particularly Automatic Speech Recognition (ASR) with post-editing feature and AmharicEnglish Statistical Machine Translation (SMT). ASR experiment is conducted using morpheme language model (LM) and phoneme acoustic model (AM). Likewise, SMT conducted using word and morpheme as unit. Morpheme based translation shows a 6.29 BLEU score at a 76.4% of recognition accuracy while word based translation shows a 12.83 BLEU score using 77.4% word recognition accuracy. Further, after post-edit on Amharic ASR using corpus based n-gram, the word recognition accuracy increased by 1.42%. Since post-edit approach reduces error propagation, the word based translation accuracy improved by 0.25 (1.95%) BLEU score. We are now working towards further improving propagated errors through different algorithms at each unit of speech translation cascading component."}
{"_id": "2381e81d414c7fec5157f4e7275082d66030979d", "title": "Optimal management of shoulder impingement syndrome", "text": "Shoulder impingement is a progressive orthopedic condition that occurs as a result of altered biomechanics and/or structural abnormalities. An effective nonoperative treatment for impingement syndrome is aimed at addressing the underlying causative factor or factors that are identified after a complete and thorough evaluation. The clinician devises an effective rehabilitation program to regain full glenohumeral range of motion, reestablish dynamic rotator cuff stability, and implement a progression of resistive exercises to fully restore strength and local muscular endurance in the rotator cuff and scapular stabilizers. The clinician can introduce stresses and forces via sport-specific drills and functional activities to allow a return to activity."}
{"_id": "7a402e00e2264bfc80faaf1d913a69f654eaf6b6", "title": "Usefulness of altmetrics for measuring the broader impact of research: A case study using data from PLOS and F1000Prime", "text": "Purpose: The present case study investigates the usefulness of altmetrics for measuring the broad impact of research. Methods: This case study is based on a sample of 1,082 PLOS (the Public Library of Science) journal articles recommended in F1000. The dataset includes altmetrics which were provided by PLOS. The F1000 dataset contains tags on papers which were assigned by experts to characterise them. Findings: Results from the Facebook and Twitter models show higher predicted numbers of counts for \"good for teaching\" papers than for those papers where the tag is not set. Further model estimations show that saves by Mendeley users are particularly to be expected when a paper introduces a new practical/ theoretical technique (tag: \"technical advance\"). The tag \"New finding\" is statistically significant in the model with which the Facebook counts are evaluated. Conclusions: The \"good for teaching\" is assigned to papers which could be of interest to a wider circle of readers than the peers in a specialized area. Thus, the results of the current study indicate that Facebook and Twitter, but not Figshare or Mendeley, might provide an indication of which papers are of interest to a broader circle of readers (and not only for the peers in a specialist area), and could therefore be useful for the measurement of the social impact of research."}
{"_id": "12d6ce2b6a940862b18b664a754335de0942dbb9", "title": "Introduction of the automated assessment of homework assignments in a university-level programming course", "text": "Modern teaching paradigms promote active student participation, encouraging teachers to adapt the teaching process to involve more practical work. In the introductory programming course at the Faculty of Computer and Information Science, University of Ljubljana, Slovenia, homework assignments contribute approximately one half to the total grade, requiring a significant investment of time and human resources in the assessment process. This problem was alleviated by the automated assessment of homework assignments. In this paper, we introduce an automated assessment system for programming assignments that includes dynamic testing of student programs, plagiarism detection, and a proper presentation of the results. We share our experience and compare the introduced system with the manual assessment approach used before."}
{"_id": "496a67013da1995d89f81a392ee545d713b4c544", "title": "Sediment quality criteria in use around the world", "text": "There have been numerous sediment quality guidelines (SQGs) developed during the past 20 years to assist regulators in dealing with contaminated sediments. Unfortunately, most of these have been developed in North America. Traditionally, sediment contamination was determined by assessing the bulk chemical concentrations of individual compounds and often comparing them with background or reference values. Since the 1980s, SQGs have attempted to incorporate biological effects in their derivation approach. These approaches can be categorized as empirical, frequency-based approaches to establish the relationship between sediment contamination and toxic response, and theoretically based approaches that attempt to account for differences in bioavailability through equilibrium partitioning (EqP) (i.e., using organic carbon or acid volatile sulfides). Some of these guidelines have been adopted by various regulatory agencies in several countries and are being used as cleanup goals in remediation activities and to identify priority polluted sites. The original SQGs, which compared bulk chemical concentrations to a reference or to background, provided little insight into the ecosystem impact of sediment contaminants. Therefore, SQGs for individual chemicals were developed that relied on field sediment chemistry paired with field or laboratory-based biological effects data. Although some SQGs have been found to be relatively good predictors of significant site contamination, they also have several limitations. False positive and false negative predictions are frequently in the 20% to 30% range for many chemicals and higher for others. The guidelines are chemical specific and do not establish causality where chemical mixtures occur. Equilibrium-based guidelines do not consider sediment ingestion as an exposure route. The guidelines do not consider spatial and temporal variability, and they may not apply in dynamic or larger-grained sediments. Finally, sediment chemistry and bioavailability are easily altered by sampling and subsequent manipulation processes, and therefore, measured SQGs may not reflect in situ conditions. All the assessment tools provide useful information, but some (such as SQGs, laboratory toxicity and bioaccumulation, and benthic indices) are prone to misinterpretation without the availability of specific in situ exposure and effects data. SQGs should be used only in a \u201cscreening\u201d manner or in a \u201cweight-of-evidence\u201d approach. Aquatic ecosystems (including sediments) must be assessed in a \u201cholistic\u201d manner in which multiple components are assessed (e.g., habitat, hydrodynamics, resident biota, toxicity, and physicochemistry, including SQGs) by using integrated approaches."}
{"_id": "164e2afbb0a39993581fa3a05530a67c1e3e6da6", "title": "Mapping science through bibliometric triangulation: An experimental approach applied to water research", "text": "The idea of constructing science maps based on bibliographic data has intrigued researchers for decades, and various techniques have been developed to map the structure of research disciplines. Most science mapping studies use a single method. However, as research fields have various properties, a valid map of a field should actually be composed of a set of maps derived from a series of investigations using different methods. That leads to the question what can be learned from a combination \u2013 triangulation \u2013 of these different science maps. In this paper, we propose a method for triangulation, using the example of water science. We combine three different mapping approaches: journal-journal citation relations (JJCR), shared author keywords (SAK), and title word-cited reference co-occurrence (TWRC). Our results demonstrate that triangulation of JJCR, SAK, and TWRC produces a more comprehensive picture than each method does individually. The outcomes from the three different approaches are associated with each other and can be systematically interpreted, and provide insights into the complex multidisciplinary structure of the field of water"}
{"_id": "2ef4a3f755beb0de66454678e2dce47d926eb0c9", "title": "Iterative Document Representation Learning Towards Summarization with Polishing", "text": "In this paper, we introduce Iterative Text Summarization (ITS), an iteration-based model for supervised extractive text summarization, inspired by the observation that it is often necessary for a human to read an article multiple times in order to fully understand and summarize its contents. Current summarization approaches read through a document only once to generate a document representation, resulting in a sub-optimal representation. To address this issue we introduce a model which iteratively polishes the document representation on many passes through the document. As part of our model, we also introduce a selective reading mechanism that decides more accurately the extent to which each sentence in the model should be updated. Experimental results on the CNN/DailyMail and DUC2002 datasets demonstrate that our model significantly outperforms state-of-the-art extractive systems when evaluated by machines and by humans."}
{"_id": "14d8c69d5a6e1dae271a9763262a3a85b5cf90ee", "title": "TouchIn: Sightless two-factor authentication on multi-touch mobile devices", "text": "Mobile authentication is indispensable for preventing unauthorized access to multi-touch mobile devices. Existing mobile authentication techniques are often cumbersome to use and also vulnerable to shoulder-surfing and smudge attacks. This paper focuses on designing, implementing, and evaluating TouchIn, a two-factor authentication system on multi-touch mobile devices. TouchIn works by letting a user draw on the touchscreen with one or multiple fingers to unlock his mobile device, and the user is authenticated based on the geometric properties of his drawn curves as well as his behavioral and physiological characteristics. TouchIn allows the user to draw on arbitrary regions on the touchscreen without looking at it. This nice sightless feature makes TouchIn very easy to use and also robust to shoulder-surfing and smudge attacks. Comprehensive experiments on Android devices confirm the high security and usability of TouchIn."}
{"_id": "2972719029a179e9cee3ac1a3f0d4fa519563d4d", "title": "Understanding and Using Patterns in Software Development", "text": "Patterns have shown to be an effective means of capturing and communicating software design experience. However, there is more to patterns than software design patterns: We believe that patterns work for software development on several levels. In this paper we explore what we have come to understand as crucial aspects of the pattern concept, relate patterns to the different models built during software design, discuss pattern forms and how we think that patterns can form larger wholes like pattern handbooks."}
{"_id": "c3f6bdea5a42fa86907fd02a329911bc48d9074d", "title": "Korean pop takes off! Social media strategy of Korean entertainment industry", "text": "The Korean Wave (K-wave), or Hallyu, referred as \u201cthe phenomenon of Korean pop culture, such as TV dramas, films, pop music, fashion, and online games being widely embraced and shared among the people of Japan, China, Hong Kong, Taiwan, and other Asian countries\u201d. After the drama-oriented craze of the first K-wave, Korean pop music (shortly K-pop) took the integral part of the second K-wave. In addition, with the rapid spread of social media like YouTube and Twitter, K-pop has expanded its fandom outside of Asia to the West. The world-wide success of K-pop contributes to improve the `Korea' image and make a positive impact on Korean economy. It is reported that around 1,000 entertainment agencies are active in Korea, while there exist the \u201cbig three\u201d record labels and entertainment agencies: SM Entertainment, YG Entertainment and JYP Entertainment. In this case, we address the world-wide phenomenon of K-pop popularity and the role of social media on recent boom of K-pop. Focusing the major agencies above, we present lessons on how to manage social media strategically: align strategic business model with social media; maximize various social media channels; engage customers with on-and offline promotions; and stimulate audience with exclusive contents."}
{"_id": "44f10dd2458e5f3c6f9e6a3f070df5191de8613b", "title": "Performance Evaluation of Channel Decoding with Deep Neural Networks", "text": "With the demand of high data rate and low latency in fifth generation (5G), deep neural network decoder (NND) has become a promising candidate due to its capability of one-shot decoding and parallel computing. In this paper, three types of NND, i.e., multi-layer perceptron (MLP), convolution neural network (CNN) and recurrent neural network (RNN), are proposed with the same parameter magnitude. The performance of these deep neural networks are evaluated through extensive simulation. Numerical results show that RNN has the best decoding performance, yet at the price of the highest computational overhead. Moreover, we find there exists a saturation length for each type of neural network, which is caused by their restricted learning abilities."}
{"_id": "067420e38c16b83c3e3374dcb8d3f40dffea55c1", "title": "Impact Analysis of Flow Shaping in Ethernet-AVB/TSN and AFDX from Network Calculus and Simulation Perspective", "text": "Ethernet-AVB/TSN (Audio Video Bridging/Time-Sensitive Networking) and AFDX (Avionics Full DupleX switched Ethernet) are switched Ethernet technologies, which are both candidates for real-time communication in the context of transportation systems. AFDX implements a fixed priority scheduling strategy with two priority levels. Ethernet-AVB/TSN supports a similar fixed priority scheduling with an additional Credit-Based Shaper (CBS) mechanism. Besides, TSN can support time-triggered scheduling strategy. One direct effect of CBS mechanism is to increase the delay of its flows while decreasing the delay of other priority ones. The former effect can be seen as the shaping restriction and the latter effect can be seen as the shaping benefit from CBS. The goal of this paper is to investigate the impact of CBS on different priority flows, especially on the intermediate priority ones, as well as the effect of CBS bandwidth allocation. It is based on a performance comparison of AVB/TSN and AFDX by simulation in an automotive case study. Furthermore, the shaping benefit is modeled based on integral operation from network calculus perspective. Combing with the analysis of shaping restriction and shaping benefit, some configuration suggestions on the setting of CBS bandwidth are given. Results show that the effect of CBS depends on flow loads and CBS configurations. A larger load of high priority flows in AVB tends to a better performance for the intermediate priority flows when compared with AFDX. Shaping benefit can be explained and calculated according to the changing from the permitted maximum burst."}
{"_id": "24c64dc1e20924d4c2f65bf1e71f59abe2195f2e", "title": "The empathic brain: how, when and why?", "text": "Recent imaging results suggest that individuals automatically share the emotions of others when exposed to their emotions. We question the assumption of the automaticity and propose a contextual approach, suggesting several modulatory factors that might influence empathic brain responses. Contextual appraisal could occur early in emotional cue evaluation, which then might or might not lead to an empathic brain response, or not until after an empathic brain response is automatically elicited. We propose two major roles for empathy; its epistemological role is to provide information about the future actions of other people, and important environmental properties. Its social role is to serve as the origin of the motivation for cooperative and prosocial behavior, as well as help for effective social communication."}
{"_id": "1e91cc9eb0ba68a7167491876360027b3fe7819d", "title": "Marine animal classification using UMSLI in HBOI optical test facility", "text": "Environmental monitoring is a critical aspect of marine renewable energy project success. A new system called Unobtrusive Multistatic Serial LiDAR Imager (UMSLI) has been prepared to capture and classify marine life interaction with electrical generation equipment. We present both hardware and software innovations of the UMSLI system. Underwater marine animal imagery has been captured for the first time using red laser diode serial LiDAR, which has advantages over conventional optical cameras in many areas. Moreover, given the scarcity of existing underwater LiDAR data, a shape matching based classification algorithm is proposed which requires few training data. On top of applying shape descriptors, the algorithm also adopts information theoretical learning based affine shape registration, improving point correspondences found by shape descriptors as well as the final similarity measure. Within Florida Atlantic University\u2019s Harbor Branch Oceanographic Institute optical test facility, experimental LiDAR data are collected through the front end of the UMSLI prototype, on which the classification algorithm is validated."}
{"_id": "744fdd298a2abff4273a5d1e8a9aa3c378fbf9d5", "title": "\u201cThe Internet is a Mask\u201d: High School Students' Suggestions for Preventing Cyberbullying", "text": "INTRODUCTION\nInteractions through technology have an important impact on today's youth. While some of these interactions are positive, there are concerns regarding students engaging in negative interactions like cyberbullying behaviors and the negative impact these behaviors have on others. The purpose of the current study was to explore participant suggestions for both students and adults for preventing cyberbullying incidents.\n\n\nMETHODS\nForty high school students participated in individual, semi-structured interviews. Participant experiences and perceptions were coded using constant comparative methods to illustrate ways in which students and adults may prevent cyberbullying from occurring within their school and community.\n\n\nRESULTS\nStudents reported that peers would benefit from increasing online security, as well as becoming more aware of their cyber-surroundings. Regarding adult-provided prevention services, participants often discussed that there is little adults can do to reduce cyberbullying. Reasons included the difficulties in restricting online behaviors or providing effective consequences. However, some students did discuss the use of in-school curricula while suggesting that adults blame people rather than technology as potential ways to prevent cyberbullying.\n\n\nCONCLUSION\nFindings from the current study indicate some potential ways to improve adult efforts to prevent cyberbullying. These strategies include parent/teacher training in technology and cyberbullying, interventions focused more on student behavior than technology restriction, and helping students increase their online safety and awareness."}
{"_id": "a204471ad4722a5e4ade844f8a25aa1c1037e1c1", "title": "Brain regions with mirror properties: A meta-analysis of 125 human fMRI studies", "text": "Mirror neurons in macaque area F5 fire when an animal performs an action, such as a mouth or limb movement, and also when the animal passively observes an identical or similar action performed by another individual. Brain-imaging studies in humans conducted over the last 20 years have repeatedly attempted to reveal analogous brain regions with mirror properties in humans, with broad and often speculative claims about their functional significance across a range of cognitive domains, from language to social cognition. Despite such concerted efforts, the likely neural substrates of these mirror regions have remained controversial, and indeed the very existence of a distinct subcategory of human neurons with mirroring properties has been questioned. Here we used activation likelihood estimation (ALE), to provide a quantitative index of the consistency of patterns of fMRI activity measured in human studies of action observation and action execution. From an initial sample of more than 300 published works, data from 125 papers met our strict inclusion and exclusion criteria. The analysis revealed 14 separate clusters in which activation has been consistently attributed to brain regions with mirror properties, encompassing 9 different Brodmann areas. These clusters were located in areas purported to show mirroring properties in the macaque, such as the inferior parietal lobule, inferior frontal gyrus and the adjacent ventral premotor cortex, but surprisingly also in regions such as the primary visual cortex, cerebellum and parts of the limbic system. Our findings suggest a core network of human brain regions that possess mirror properties associated with action observation and execution, with additional areas recruited during tasks that engage non-motor functions, such as auditory, somatosensory and affective components."}
{"_id": "fb4dcbd818e5839f025a6bc247b3bc5632be502f", "title": "Immersion and Emotion: Their Impact on the Sense of Presence", "text": "The present study is designed to test the role of immersion and media content in the sense of presence. Specifically, we are interested in the affective valence of the virtual environments. This paper describes an experiment that compares three immersive systems (a PC monitor, a rear projected video wall, and a head-mounted display) and two virtual environments, one involving emotional content and the other not. The purpose of the experiment was to test the interactive role of these two media characteristics (form and content). Scores on two self-report presence measurements were compared among six groups of 10 people each. The results suggest that both immersion and affective content have an impact on presence. However, immersion was more relevant for non-emotional environments than for emotional ones."}
{"_id": "1f315e1ba65cdce34d372ae445b737c0bcc4dac7", "title": "Understanding emotions in others: mirror neuron dysfunction in children with autism spectrum disorders", "text": "To examine mirror neuron abnormalities in autism, high-functioning children with autism and matched controls underwent fMRI while imitating and observing emotional expressions. Although both groups performed the tasks equally well, children with autism showed no mirror neuron activity in the inferior frontal gyrus (pars opercularis). Notably, activity in this area was inversely related to symptom severity in the social domain, suggesting that a dysfunctional 'mirror neuron system' may underlie the social deficits observed in autism."}
{"_id": "f7785ec6cc6d443151dcbb7560da7d8bdf4e9ed0", "title": "The successive mean quantization transform", "text": "This paper presents the successive mean quantization transform (SMQT). The transform reveals the organization or structure of the data and removes properties such as gain and bias. The transform is described and applied in speech processing and image processing. The SMQT is considered as an extra processing step for the mel frequency cepstral coefficients commonly used in speech recognition. In image processing the transform is applied in automatic image enhancement and dynamic range compression."}
{"_id": "b18796d9ecbc3371bea65b136f77b7d65f129390", "title": "Global asymptotic and robust stability of recurrent neural networks with time delays", "text": "In this paper, two related problems, global asymptotic stability (GAS) and global robust stability (GRS) of neural networks with time delays, are studied. First, GAS of delayed neural networks is discussed based on Lyapunov method and linear matrix inequality. New criteria are given to ascertain the GAS of delayed neural networks. In the designs and applications of neural networks, it is necessary to consider the deviation effects of bounded perturbations of network parameters. In this case, a delayed neural network must be formulated as a interval neural network model. Several sufficient conditions are derived for the existence, uniqueness, and GRS of equilibria for interval neural networks with time delays by use of a new Lyapunov function and matrix inequality. These results are less restrictive than those given in the earlier references."}
{"_id": "871314d440dc55cfd2198e229cde92584964be0f", "title": "Evaluating iterative optimization across 1000 datasets", "text": "While iterative optimization has become a popular compiler optimization approach, it is based on a premise which has never been truly evaluated: that it is possible to learn the best compiler optimizations across data sets. Up to now, most iterative optimization studies find the best optimizations through repeated runs on the same data set. Only a handful of studies have attempted to exercise iterative optimization on a few tens of data sets.\n In this paper, we truly put iterative compilation to the test for the first time by evaluating its effectiveness across a large number of data sets. We therefore compose KDataSets, a data set suite with 1000 data sets for 32 programs, which we release to the public. We characterize the diversity of KDataSets, and subsequently use it to evaluate iterative optimization.We demonstrate that it is possible to derive a robust iterative optimization strategy across data sets: for all 32 programs, we find that there exists at least one combination of compiler optimizations that achieves 86% or more of the best possible speedup across all data sets using Intel's ICC (83% for GNU's GCC). This optimal combination is program-specific and yields speedups up to 1.71 on ICC and 2.23 on GCC over the highest optimization level (-fast and -O3, respectively). This finding makes the task of optimizing programs across data sets much easier than previously anticipated, and it paves the way for the practical and reliable usage of iterative optimization. Finally, we derive pre-shipping and post-shipping optimization strategies for software vendors."}
{"_id": "85f9441dfd90b1143c6a9afcdca1f279a2e555c0", "title": "SDN-Guard: DoS Attacks Mitigation in SDN Networks", "text": "Software Defined Networking (SDN) has recently emerged as a new networking technology offering an unprecedented programmability that allows network operators to dynamically configure and manage their infrastructures. The main idea of SDN is to move the control plane into a central controller that is in charge of taking all routing decisions in the network. However, despite all the advantages offered by this technology, Deny-of-Service (DoS) attacks are considered a major threat to such networks as they can easily overload the controller processing and communication capacity and flood switch CAM tables, resulting in a critical degradation of the overall network performance. To address this issue, we propose in this paper SDN-Guard, a novel scheme able to efficiently protect SDN networks against DoS attacks by dynamically (1) rerouting potential malicious traffic, (2) adjusting flow timeouts and (3) aggregating flow rules. Realistic experiments using Mininet show that the proposed solution succeeds in minimizing by up to 32% the impact of DoS attacks on~the controller performance, switch memory usage and control plane bandwidth and thereby maintaining acceptable network performance during such attacks."}
{"_id": "92b89ddca96c90b56410396bedcfaec68bf13578", "title": "Twitter Geolocation Prediction Shared Task of the 2016 Workshop on Noisy User-generated Text", "text": "This paper describes the shared task for the English Twitter geolocation prediction associated with WNUT 2016. We discuss details of the task settings, data preparation and participant systems. The derived dataset and performance figures from each system provide baselines for future research in this realm."}
{"_id": "99fbfa347fa1ce0ac5828d5fde6177b9a2e3d47c", "title": "Feasibility of a DC network for commercial facilities", "text": "This paper analyzes the feasibility of direct current for the supply of offices and commercial facilities. This is done by analyzing a case study, i.e. the supply to a university department. Voltage drop calculations have been carried out for different voltage levels. A back-up system for reliable power supply is designed based on commercially available batteries. Finally, an economic evaluation of AC vs. DC is performed."}
{"_id": "7b0bc241de12eeeeaacefa8cd8b86a81cfc0a87d", "title": "Sexual arousal: The correspondence of eyes and genitals", "text": "Men's, more than women's, sexual responses may include a coordination of several physiological indices in order to build their sexual arousal to relevant targets. Here, for the first time, genital arousal and pupil dilation to sexual stimuli were simultaneously assessed. These measures corresponded more strongly with each other, subjective sexual arousal, and self-reported sexual orientation in men than women. Bisexual arousal is more prevalent in women than men. We therefore predicted that if bisexual-identified men show bisexual arousal, the correspondence of their arousal indices would be more female-typical, thus weaker, than for other men. Homosexual women show more male-typical arousal than other women; hence, their correspondence of arousal indices should be stronger than for other women. Findings, albeit weak in effect, supported these predictions. Thus, if sex-specific patterns are reversed within one sex, they might affect more than one aspect of sexual arousal. Because pupillary responses reflected sexual orientation similar to genital responses, they offer a less invasive alternative for the measurement of sexual arousal."}
{"_id": "dabb16e3bbcd382e2bc4c5f380d7bfa21b7cc9d6", "title": "Personal Safety is More Important Than Cost of Damage During Robot Failure", "text": "As robots become more common in everyday life it will be increasingly important to understand how non-experts will view robot failure. In this study, we found that severity of failure seems to be tightly coupled with perceived risk to self rather than risk to the robot's task and object. We initially thought perceived severity would be tied to the cost of damage. Instead, participants placed falling drinking glasses above a laptop when rating the severity of the failure. Related results reinforce the primacy of personal safety over the financial cost of damage and suggest the results were tied to proximity to breaking glass."}
{"_id": "b7682634c8633822145193242e7a3e3739042768", "title": "MusicMixer: computer-aided DJ system based on an automatic song mixing", "text": "In this paper, we present MusicMixer, a computer-aided DJ system that helps DJs, specifically with song mixing. MusicMixer continuously mixes and plays songs using an automatic music mixing method that employs audio similarity calculations. By calculating similarities between song sections that can be naturally mixed, MusicMixer enables seamless song transitions. Though song mixing is the most fundamental and important factor in DJ performance, it is difficult for untrained people to seamlessly connect songs. MusicMixer realizes automatic song mixing using an audio signal processing approach; therefore, users can perform DJ mixing simply by selecting a song from a list of songs suggested by the system, enabling effective DJ song mixing and lowering entry barriers for the inexperienced. We also propose personalization for song suggestions using a preference memorization function of MusicMixer."}
{"_id": "e7ee27816ade366584d411f4287e50bdc4771e56", "title": "Fast Discovery of Association Rules", "text": ""}
{"_id": "38215c283ce4bf2c8edd597ab21410f99dc9b094", "title": "The SEMAINE Database: Annotated Multimodal Records of Emotionally Colored Conversations between a Person and a Limited Agent", "text": "SEMAINE has created a large audiovisual database as a part of an iterative approach to building Sensitive Artificial Listener (SAL) agents that can engage a person in a sustained, emotionally colored conversation. Data used to build the agents came from interactions between users and an \"operator\u201d simulating a SAL agent, in different configurations: Solid SAL (designed so that operators displayed an appropriate nonverbal behavior) and Semi-automatic SAL (designed so that users' experience approximated interacting with a machine). We then recorded user interactions with the developed system, Automatic SAL, comparing the most communicatively competent version to versions with reduced nonverbal skills. High quality recording was provided by five high-resolution, high-framerate cameras, and four microphones, recorded synchronously. Recordings total 150 participants, for a total of 959 conversations with individual SAL characters, lasting approximately 5 minutes each. Solid SAL recordings are transcribed and extensively annotated: 6-8 raters per clip traced five affective dimensions and 27 associated categories. Other scenarios are labeled on the same pattern, but less fully. Additional information includes FACS annotation on selected extracts, identification of laughs, nods, and shakes, and measures of user engagement with the automatic system. The material is available through a web-accessible database."}
{"_id": "af77be128c7a0efb1e9c559b4e77fa0b1b1e77d6", "title": "Analysis and Simulation of Theme Park Queuing System", "text": "It has been an important issue to improve customers' satisfaction in theme parks for which become a major role of recreation in our daily life. Waiting for rides has been identified as a factor decreasing satisfaction. A previous study indicated that a virtual queuing system can reduce the total waiting time so the customer's satisfaction is improved. The results from a simulation tool Arena show that an index Satisfaction Value (SV) increases when the queuing system is introduced. In this study, a more complex scenario of theme park queuing system (TPQS) is first designed, followed by comparison of a number of combinations of the rides with various waiting time and distribution factors. Analysis is also carried out."}
{"_id": "802b80852996d87dc16082b86f6e77115eb6c9a6", "title": "Reverse Engineering Flash EEPROM Memories Using Scanning Electron Microscopy", "text": "In this article, a methodology to extract Flash EEPROM memory contents is presented. Samples are first backside prepared to expose the tunnel oxide of floating gate transistors. Then, a Scanning Electron Microscope (SEM) in the so called Passive Voltage Contrast (PVC) mode allows distinguishing \u20180\u2019 and \u20181\u2019 bit values stored in individual memory cell. Using SEM operator-free acquisition and standard image processing technique we demonstrate the possible automating of such technique over a full memory. The presented fast, efficient and low cost technique is successfully implemented on 0.35\u03bcm technology node microcontrollers and on a 0.21\u03bcm smart card type integrated circuit. The technique is at least two orders of magnitude faster than state-of-the-art Scanning Probe Microscopy (SPM) methods. Without adequate protection an adversary could obtain the full memory array content within minutes. The technique is a first step for reverse engineering secure embedded systems."}
{"_id": "b2c31108c126ec05d0741146dc5d639f30a12d11", "title": "Multi-stage beamforming codebook for 60GHz WPAN", "text": "Beamforming(BF) based on codebook is regarded as an attractive solution to resolve the poor link budget of millimeter-wave 60GHz wireless communication. Because the number of the elements of antenna array in 60GHz increases, the beam patterns generated are more than common BF, and it causes long set-up time during beam patterns searching. In order to reduce the set-up time, three stages protocol, namely the device (DEV) to DEV linking, sector-level searching and beam-level searching has been adopted by the IEEE 802.15.3c as an optional functionality to realize Gbps communication systems. However, it is still a challenge to create codebook of different patterns to support three stages protocol from common codebook of beam pattern. In this paper, we proposes a multi-stage codebook design and the realization architecture to support three stages BF. The multi-stage codebook can create different granularity of beam patterns and realize progressive searching. Simulation results for eight elements uniform linear array (ULA) show that this design can divide the beam searching to three stages searching without increasing the system complexity."}
{"_id": "ca036c1e6a18931386016df9733a8c1366098235", "title": "Explanation of Two Anomalous Results in Statistical Mediation Analysis.", "text": "Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special concern as the bias-corrected bootstrap is often recommended and used due to its higher statistical power compared with other tests. The second result is statistical power reaching an asymptote far below 1.0 and in some conditions even declining slightly as the size of the relationship between X and M, a, increased. Two computer simulations were conducted to examine these findings in greater detail. Results from the first simulation found that the increased Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap are a function of an interaction between the size of the individual paths making up the mediated effect and the sample size, such that elevated Type I error rates occur when the sample size is small and the effect size of the nonzero path is medium or larger. Results from the second simulation found that stagnation and decreases in statistical power as a function of the effect size of the a path occurred primarily when the path between M and Y, b, was small. Two empirical mediation examples are provided using data from a steroid prevention and health promotion program aimed at high school football players (Athletes Training and Learning to Avoid Steroids; Goldberg et al., 1996), one to illustrate a possible Type I error for the bias-corrected bootstrap test and a second to illustrate a loss in power related to the size of a. Implications of these findings are discussed."}
{"_id": "dd5a7641b70120813cff6f3a08573a2215f4ad40", "title": "The Use Of Edmodo In Creating An Online Learning Community Of Practice For Learning To Teach Science", "text": "This study aimed to create an online community of practice by creating a virtual classroom in the Edmodo application and ascertain the opinions of pre-service primary teachers about the effects of Edmodo on their learning to teach science and availability of Edmodo. The research used a case study, which is one method of descriptive research. During the implementation process, pre-service primary teachers used Edmodo to share activities they had designed that centred on the scientific concepts taught in primary science education programmes. They also shared their diary entries that outlined their experiences and views after they had practised their activities in a real classroom. 58 pre-service primary teachers participated in the study. The author developed a questionnaire and it included one closed-ended and six open-ended questions; the questionnaire was used as the data collection tool. The pre-service primary teachers used Edmodo for 12 weeks. Descriptive and content analysis methods were used to analyse the data obtained from the study. The results obtained from the data analysis showed that pre-service primary teachers generally had positive views about the use of Edmodo in teacher education programmes. Most pre-service primary teachers stated that Edmodo provides the possibility of sharing knowledge, experiences and views. However, some pre-service teachers stated that Edmodo has some limitations; for example, the fact that it requires the user to have internet access. As a result, it can be said that Edmodo can be used to create an online community of practice in teacher education programmes."}
{"_id": "3d1f7de876b57952fded28adfbb74d5cd4249b02", "title": "QUANTUM COMPUTING : AN INTRODUCTION", "text": "After some remarks on the fundamental physical nature of information, Bennett and Fredkin's ideas of reversible computation are introduced. This leads on to the suggestions of Benioff and Feynman as to the possibilit y of a new type of essentially \u2018quantum computers\u2019 . If we can build such devices, Deutsch showed that \u2018quantum paralleli sm\u2019 leads to new algorithms and new complexity classes. This is dramatically ill ustrated by Shor's quantum algorithm for factorization which is polynomial in time in contrast to algorithms for factorization on a classical Turing computer. This discovery has potentiall y important implications for the security of many modern cryptographic systems. The fundamentals of quantum computing are then introduced reversible logic gates, qubits and quantum registers. The key quantum property of \u2018 entanglement\u2019 is described, with due homage to Einstein and Bell . As an ill ustration of a quantum program, Grover's database search algorithm is described in some detail . After all this theory, the status of experimental attempts to build a quantum computer is reviewed: it will become evident that we have a long way to go before we can factorize even small numbers. Finally, we end with some thoughts about the process of \u2018quantum compilation\u2019 translating a quantum algorithm into actual physical operations on a quantum system and some comments on prospects for future progress."}
{"_id": "b155c6ddeba5d094207d6dce1b55893210b65c2d", "title": "Classification methods for activity recognition", "text": "Activity recognition is an important subject in different areas of research, like e-health and ubiquitous computing. In this paper we give an overview of recent work in the field of activity recognition from both body mounted sensors and ambient sensors. We discuss some of the differences among approaches and present guidelines for choosing a suitable approach under various circumstances."}
{"_id": "a23e7b612245366c3f7934eebf5bac6205ff023a", "title": "Biocybernetic system evaluates indices of operator engagement in automated task", "text": "A biocybernetic system has been developed as a method to evaluate automated flight deck concepts for compatibility with human capabilities. A biocybernetic loop is formed by adjusting the mode of operation of a task set (e.g., manual/automated mix) based on electroencephalographic (EEG) signals reflecting an operator's engagement in the task set. A critical issue for the loop operation is the selection of features of the EEG to provide an index of engagement upon which to base decisions to adjust task mode. Subjects were run in the closed-loop feedback configuration under four candidate and three experimental control definitions of an engagement index. The temporal patterning of system mode switching was observed for both positive and negative feedback of the index. The indices were judged on the basis of their relative strength in exhibiting expected feedback control system phenomena (stable operation under negative feedback and unstable operation under positive feedback). Of the candidate indices evaluated in this study, an index constructed according to the formula, beta power/(alpha power + theta power), reflected task engagement best."}
{"_id": "1a85b157bc6237aa1e556ccfb84a1e3c6c9d602f", "title": "AmphiBot I: an amphibious snake-like robot", "text": "This article presents a project that aims at constructing a biologically inspired amphibious snake-like robot. The robot is designed to be capable of anguilliform swimming like sea-snakes and lampreys in water and lateral undulatory locomotion like a snake on ground. Both the structure and the controller of the robot are inspired by elongate vertebrates. In particular, the locomotion of the robot is controlled by a central pattern generator (a system of coupled oscillators) that produces travelling waves of oscillations as limit cycle behavior. We present the design considerations behind the robot and its controller. Experiments are carried out to identify the types of travelling waves that optimize speed during lateral undulatory locomotion on ground. In particular, the optimal frequency, amplitude and wavelength are thus identified when the robot is crawling on a particular surface."}
{"_id": "47c80e2ba776d82edfe038cb0e132e20170a79e6", "title": "Breakthrough in improving the skin sagging with focusing on the subcutaneous tissue structure , retinacula cutis", "text": "Skin sagging is one of the most prominent aging signs and a concerning issue for people over middle age. Although many cosmetic products are challenging the reduction of the skin sagging by improving the dermal elasticity, which decreases with age, the effects are insufficient for giving drastic changes to the facial morphology. This study focused on subcutaneous tissue for investigating a skin sagging mechanism. Subcutaneous tissue consists of predominantly adipose tissue with fibrous network structures, called retinacula cutis (RC), which is reported to have a possibility to maintain the soft tissue structure and morphology. This study investigated the effect of subcutaneous tissue physical-property alteration due to RC-deterioration on the skin sagging. For evaluating RC structure noninvasively, the tomographic images of faces were obtained by magnetic resonance (MR) imaging. Subcutaneous tissue network structures observed by MR imaging were indicated to be RC by comparing MR images and the histological specimens of human skin. The density of RC was measured by image analysis. For evaluating sagging degree and physical properties of the skin, sagging scoring and the measurement of elasticity of deeper skin layers were performed. The density of RC was correlated with the elasticity data of deeper skin layers, and the sagging scores tended to increase with decreasing the density. These results suggested that the sparse RC structure gave a decrease in the elasticity of subcutaneous tissue layer, which consequently would be the cause of facial sagging. This study would be a pathfinder for the complete elimination of skin sagging."}
{"_id": "3cb35dda9637a582c650ecfd80214bfd2914c9f3", "title": "Predicting defects using network analysis on dependency graphs", "text": "In software development, resources for quality assurance are limited by time and by cost. In order to allocate resources effectively, managers need to rely on their experience backed by code complexity metrics. But often dependencies exist between various pieces of code over which managers may have little knowledge. These dependencies can be construed as a low level graph of the entire system. In this paper, we propose to use network analysis on these dependency graphs. This allows managers to identify central program units that are more likely to face defects. In our evaluation on Windows Server 2003, we found that the recall for models built from network measures is by 10% points higher than for models built from complexity metrics. In addition, network measures could identify 60% of the binaries that the Windows developers considered as critical-twice as many as identified by complexity metrics."}
{"_id": "55289d3feef4bc1e4ff17008120e371eb7f55a24", "title": "Conditional Generation and Snapshot Learning in Neural Dialogue Systems", "text": "Recently a variety of LSTM-based conditional language models (LM) have been applied across a range of language generation tasks. In this work we study various model architectures and different ways to represent and aggregate the source information in an endto-end neural dialogue system framework. A method called snapshot learning is also proposed to facilitate learning from supervised sequential signals by applying a companion cross-entropy objective function to the conditioning vector. The experimental and analytical results demonstrate firstly that competition occurs between the conditioning vector and the LM, and the differing architectures provide different trade-offs between the two. Secondly, the discriminative power and transparency of the conditioning vector is key to providing both model interpretability and better performance. Thirdly, snapshot learning leads to consistent performance improvements independent of which architecture is used."}
{"_id": "027b0d240066f8d1560edcd75505f5650291cced", "title": "Solving optimization problems using black hole algorithm", "text": "Various meta-heuristic optimization approaches have been recently created and applied in different areas. Many of these approaches are inspired by swarm behaviors in the nature. This paper studies the solving optimization problems using Black Hole Algorithm (BHA) which is a population-based algorithm. Since the performance of this algorithm was not tested in mathematical functions, we have studied this issue using some standard functions. The results of the BHA are compared with the results of GA and PSO algorithms which indicate that the performance of BHA is better than the other two mentioned algorithms."}
{"_id": "75c4b33059aa300e7b52d1b5dab37968ac927e89", "title": "A Wideband Dual-Polarized L-Probe Stacked Patch Antenna Array", "text": "A 2 times 1 dual-polarized L-probe stacked patch antenna array is presented. It has employed a novel technique to achieve high isolation between the two input ports. The proposed antenna has a 14-dB return loss bandwidth of 19.8%, which is ranged from 0.808 to 0.986 GHz, for both ports. Also, it has an input port isolation of more than 30 dB and an average gain of 10.5 dBi over this bandwidth. Moreover, its radiation patterns in the two principal planes have cross-polarization level of less than -15 dB within the 3-dB beamwidths across the passband. Due to these features, this antenna array is highly suitable for the outdoor base station that is required to cover the operating bandwidths of both CDMA800 and GSM900 mobile communication systems."}
{"_id": "9268b7db4bd7f94e5c55602a8f7e5c1befa3604f", "title": "High-Isolation XX-Polar Antenna", "text": "A new configuration of the dual-band and dual-polarized antenna is presented, which is fed by aperture coupling for the existing mobile communication systems working over 870-960 MHz (GSM) and 1710-2180 MHz (DCS/UMTS) frequency band. XX-Polar antenna stands for an antenna with dual-band and dual linear slant polarization. In this paper DU band stands for DCS/UMTS band. Measurement results show that the proposed antenna yields good broadside radiation characteristics including symmetric radiation patterns, low cross-polarization level ( <; -14 dB), low backlobe level (F/B >; 20 dB) and high isolation (>; 30 dB) at both bands. The designed antenna has an impedance bandwidth of 20.4% (790-970 MHz) for VSWR <; 1.5 in the lower band and 29.3% for VSWR <; 1.6 (1630-2190 MHz) in the upper band. The measured average gains are about 9.3-10.2 and 8.6-10 dBi in the lower and upper band, respectively. It is also promising for array antenna in various wireless systems."}
{"_id": "aa33d8f540baf255529392d8d2a96127256307d1", "title": "A Compact Dual-Polarized Printed Dipole Antenna With High Isolation for Wideband Base Station Applications", "text": "A compact dual-polarized printed dipole antenna for wideband base station applications is presented in this communication. The proposed dipole antenna is etched on three assembled substrates. Four horizontal triangular patches are introduced to form two dipoles in two orthogonal polarizations. Two integrated baluns connected with 50 \u03a9 SMA launchers are used to excite the dipole antenna. The proposed dipole antenna achieves a more compact size than many reported wideband printed dipole and magneto-electric dipole antennas. Both simulated and measured results show that the proposed antenna has a port isolation higher than 35 dB over 52% impendence bandwidth (VSWR <; 1.5). Moreover, stable radiation pattern with a peak gain of 7 dBi - 8.6 dBi is obtained within the operating band. The proposed dipole antenna is suitable as an array element and can be used for wideband base station antennas in the next generation IMT-advanced communications."}
{"_id": "f20c6d8c99e06646a063ef71c747d1fe3759d701", "title": "Wideband Dual-Polarized Patch Antenna With Broadband Baluns", "text": "A pair of novel 180deg broadband microstrip baluns are used to feed a dual-polarized patch antenna. The 180deg broadband balun delivers equal amplitude power division and consistent 180deg (plusmn5deg) phase shifting over a wide bandwidth (>50%). We demonstrate that for a dual-polarized quadruple L-probe square patch antenna, the use of the proposed 180deg broadband balun pair, in place of a conventional 180deg narrowband balun pair, provides improved input port isolation and reduced H-plane cross-polarization levels over a wider frequency range, while maintaining low E-plane cross-polarization levels and stable E- and H-plane co-polarization patterns throughout the impedance passband"}
{"_id": "fa0092adfb76508be15ec05fde2dcf854d4e5d02", "title": "Design of dual-polarized L-probe patch antenna arrays with high isolation", "text": "An experimental study of a dual-polarized L-probe patch antenna is presented. The antenna is designed to operate at around 1.8 GHz. A \"dual-feed\" technique is introduced to achieve high isolation between two input ports. The proposed antenna has an impedance bandwidth of 23.8% (SWR/spl les/2), 15% (SWR/spl les/1.5) and an isolation over 30 dB. In array designs, techniques for improving the isolation between two adjacent elements of an antenna array are also investigated. A two-element array with more than 30 dB isolation is designed and tested."}
{"_id": "3cf0b2fa44c38feb0276d22e93f2719889b0667a", "title": "Mining process models with non-free-choice constructs", "text": "Process mining aims at extracting information from event logs to capture the business process as it is being executed. Process mining is particularly useful in situations where events are recorded but there is no system enforcing people to work in a particular way. Consider for example a hospital where the diagnosis and treatment activities are recorded in the hospital information system, but where health-care professionals determine the \u201ccareflow.\u201d Many process mining approaches have been proposed in recent years. However, in spite of many researchers\u2019 persistent efforts, there are still several challenging problems to be solved. In this paper, we focus on mining non-free-choice constructs, i.e., situations where there is a mixture of choice and synchronization. Although most real-life processes exhibit non-free-choice behavior, existing algorithms are unable to adequately deal with such constructs. Using a Petri-net-based representation, we will show that there are two kinds of causal dependencies between tasks, i.e., explicit and implicit ones. We propose an algorithm that is able to deal with both kinds of dependencies. The algorithm has been implemented in the ProM framework and experimental results shows that the algorithm indeed significantly improves existing process mining techniques."}
{"_id": "4dbe9b736de47d92127c8211bf38566ee45d6b43", "title": "EigenTrustp++: Attack resilient trust management", "text": "This paper argues that trust and reputation models should take into account not only direct experiences (local trust) and experiences from the circle of \u201cfriends\u201d, but also be attack resilient by design in the presence of dishonest feedbacks and sparse network connectivity. We first revisit EigenTrust, one of the most popular reputation systems to date, and identify the inherent vulnerabilities of EigenTrust in terms of its local trust vector, its global aggregation of local trust values, and its eigenvector based reputation propagating model. Then we present EigenTrust++, an attack resilient trust management scheme. EigenTrust++ extends the eigenvector based reputation propagating model, the core of EigenTrust, and counters each of vulnerabilities identified with alternative methods that are by design more resilient to dishonest feedbacks and sparse network connectivity under four known attack models. We conduct extensive experimental evaluation on EigenTrust++, and show that EigenTrust++ can significantly outperform EigenTrust in terms of both performance and attack resilience in the presence of dishonest feedbacks and sparse network connectivity against four representative attack models."}
{"_id": "3102866d1d3f9e616a6437a719197a78c1590bb1", "title": "Expression and regulation of lincRNAs during T cell development and differentiation", "text": "Although intergenic long noncoding RNAs (lincRNAs) have been linked to gene regulation in various tissues, little is known about lincRNA transcriptomes in the T cell lineages. Here we identified 1,524 lincRNA clusters in 42 T cell samples, from early T cell progenitors to terminally differentiated helper T cell subsets. Our analysis revealed highly dynamic and cell-specific expression patterns for lincRNAs during T cell differentiation. These lincRNAs were located in genomic regions enriched for genes that encode proteins with immunoregulatory functions. Many were bound and regulated by the key transcription factors T-bet, GATA-3, STAT4 and STAT6. We found that the lincRNA LincR-Ccr2-5\u2032AS, together with GATA-3, was an essential component of a regulatory circuit in gene expression specific to the TH2 subset of helper T cells and was important for the migration of TH2 cells."}
{"_id": "4c60b754ead9755121d7cf7ef31f86ff282cc8fb", "title": "Hierarchical Complex Activity Representation and Recognition Using Topic Model and Classifier Level Fusion", "text": "Human activity recognition is an important area of ubiquitous computing. Most current researches in activity recognition mainly focus on simple activities, e.g., sitting, running, walking, and standing. Compared with simple activities, complex activities are more complicated with high-level semantics, e.g., working, commuting, and having a meal. This paper presents a hierarchical model to recognize complex activities as mixtures of simple activities and multiple actions. We generate the components of complex activities using a clustering algorithm, represent and recognize complex activities by applying a topic model on these components. It is a data-driven method that can retain effective information for representing and recognizing complex activities. In addition, acceleration and physiological signals are fused in classifier level to ensure the overall performance of complex activity recognition. The results of experiments show that our method has ability to represent and recognize complex activities effectively."}
{"_id": "5d85f0ab7acd995669a24dc93422325e0caea009", "title": "5G security architecture and light weight security authentication", "text": "The concept of future 5G (the 5th generation of mobile networks) widely spreads in industry, but the related standards are still unclear. The definition of 5G is to provide adequate RF coverage, more bits/Hz and to interconnect all wireless heterogeneous networks to provide seamless, consistent telecom experience to the user. In this paper we have surveyed 5G network architecture and proposed a multi-tier security architecture and a lightweight authentication measure for 5G. The proposed scheme aims at achieving fast authentication and minimizing the packet transmission overhead without compromising the security requirements."}
{"_id": "43512258e2670a0abeba714643ec28f3d1c9f8ef", "title": "Higher-Order Inference for Multi-class Log-Supermodular Models", "text": "Higher-order models have been shown to be very useful for a plethora of computer vision tasks. However, existing techniques have focused mainly on MAP inference. In this paper, we present the first efficient approach towards approximate Bayesian marginal inference in a general class of high-order, multi-label attractive models, where previous techniques slow down exponentially with the order (clique size). We formalize this task as performing inference in log-supermodular models under partition constraints, and present an efficient variational inference technique. The resulting optimization problems are convex and yield bounds on the partition function. We also obtain a fully factorized approximation to the posterior, which can be used in lieu of the true complicated distribution. We empirically demonstrate the performance of our approach by comparing it to traditional inference methods on a challenging high-fidelity multi-label image segmentation dataset. We obtain state-of-the-art classification accuracy for MAP inference, and substantially improved ROC curves using the approximate marginals."}
{"_id": "3562ed47225c461df47b1d69410e40d58e8860e0", "title": "EOMM: An Engagement Optimized Matchmaking Framework", "text": "Matchmaking connects multiple players to participate in online player-versus-player games. Current matchmaking systems depend on a single core strategy: create fair games at all times. These systems pair similarly skilled players on the assumption that a fair game is best player experience. We will demonstrate, however, that this intuitive assumption sometimes fails and that matchmaking based on fairness is not optimal for engagement. In this paper, we propose an Engagement Optimized Matchmaking (EOMM) framework that maximizes overall player engagement. We prove that equal-skill based matchmaking is a special case of EOMM on a highly simplified assumption that rarely holds in reality. Our simulation on real data from a popular game made by Electronic Arts,Inc. (EA) supports our theoretical results, showing significant improvement in enhancing player engagement compared to existing matchmaking methods."}
{"_id": "0c5b32acd045996716f66e327a2d39423446a220", "title": "Local information statistics of LBP and HOG for pedestrian detection", "text": "We present several methods of pedestrian detection in intensity images using different local statistical measures applied to two classes of features extensively used in pedestrian detection: uniform local binary patterns - LBP and a modified version of histogram of oriented gradients - HOG. Our work extracts local binary patterns and magnitude and orientation of the gradient image. Then we divide the image into blocks. Within each block we extract different statistics like: histogram (weighted by the gradient magnitude in the case of HOG), information, entropy and energy of the local binary code. We use Adaboost for training four classifiers and we analyze the classification error of each method on the Daimler benchmark pedestrian dataset."}
{"_id": "02dd5b65dfc005c30c2c4f49ba5afb586cd7770c", "title": "Event Detection in Social Media Detecting", "text": "The proliferation of social media and user-generated content in the Web has opened new opportunities for detecting and disseminating information quickly. The Twitter stream is one large source of information, but the magnitude of tweets posted and the noisy nature of its content makes the harvesting of knowledge from Twitter very hard. Aiming at overcoming some of the challenges and extract some of the hidden information, this thesis proposes a system for real-time detection of news events from the Twitter stream. The first step of our approach is to let a classifier, based on an Artificial Neural Network and deep learning, detect news relevant tweets from the stream. Next, a novel streaming data clustering algorithm is applied to the detected news tweets to form news events. Finally, the events of highest interest is retrieved based on events\u2019 sizes and rapid growth in tweet frequencies, before the news events are presented and visualized in a web user interface. We evaluate the proposed system on a large, publicly available corpus of annotated news events from Twitter. As part of the evaluation, we compare our approach with a related state-of-the-art solution. Overall, our experiments and user-based evaluation show that our approach on detecting current (real) news events delivers state-of-the-art performance."}
{"_id": "63396a1a658863faeb9e3c8ee1f1934fd00a7512", "title": "Predicting statistics of asynchronous SGD parameters for a large-scale distributed deep learning system on GPU supercomputers", "text": "Many studies have shown that Deep Convolutional Neural Networks (DCNNs) exhibit great accuracies given large training datasets in image recognition tasks. Optimization technique known as asynchronous mini-batch Stochastic Gradient Descent (SGD) is widely used for deep learning because it gives fast training speed and good recognition accuracies, while it may increases generalization error if training parameters are in inappropriate ranges. We propose a performance model of a distributed DCNN training system called SPRINT that uses asynchronous GPU processing based on mini-batch SGD. The model considers the probability distribution of mini-batch size and gradient staleness that are the core parameters of asynchronous SGD training. Our performance model takes DCNN architecture and machine specifications as input parameters, and predicts time to sweep entire dataset, mini-batch size and staleness with 5%, 9% and 19% error in average respectively on several supercomputers with up to thousands of GPUs. Experimental results on two different supercomputers show that our model can steadily choose the fastest machine configuration that nearly meets a target mini-batch size."}
{"_id": "3e28d80281d769a32ff5eb7f9a713b3ae01f6475", "title": "Analysis on robust H\u221e performance and stability for linear systems with interval time-varying state delays via some new augmented Lyapunov-Krasovskii functional", "text": "In this paper, the problem ofH1 performance and stability analysis for linear systems with interval time-varying delays is considered. First, by constructing a newly augmented Lyapunov\u2013Krasovskii functional which has not been proposed yet, an improvedH1 performance criterion with the framework of linear matrix inequalities (LMIs) is introduced. Next, the result and method are extended to the problem of a delay-dependent stability criterion. Finally, numerical examples are given to show the superiority of proposed method. 2013 Elsevier Inc. All rights reserved."}
{"_id": "257a795fa475c98906b319ffdb38d49d54e199f2", "title": "Existence and Stability of Equilibrium of DC Microgrid With Constant Power Loads", "text": "Constant power loads (CPLs) are often the cause of instability and no equilibrium of dc microgrids. In this study, we analyze the existence and stability of equilibrium of dc microgirds with CPLs and the sufficient conditions for them are provided. To derive the existence of system equilibrium, we transform the problem of quadratic equation solvability into the existence of a fixed point for an increasing fractional mapping. Then, the sufficient condition based on the Tarski fixed-point theorem is derived. It is less conservative compared with the existing results. Moreover, we adopt the small-signal model to predict the system qualitative behavior around equilibrium. The analytic conditions of robust stability are determined by analyzing quadratic eigenvalue. Overall, the obtained conditions provide the references for building reliable dc microgrids. The simulation results verify the correctness of the proposed conditions."}
{"_id": "9e2803a17936323ea7fcf2f9b1dd86612b176062", "title": "A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications", "text": "Particle swarmoptimization (PSO) is a heuristic global optimizationmethod, proposed originally byKennedy and Eberhart in 1995. It is now one of themost commonly used optimization techniques.This survey presented a comprehensive investigation of PSO. On one hand, we provided advances with PSO, including its modifications (including quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topology (as fully connected, von Neumann, ring, star, random, etc.), hybridization (with genetic algorithm, simulated annealing, Tabu search, artificial immune system, ant colony algorithm, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (tomultiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementation (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we offered a survey on applications of PSO to the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research,mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey would be beneficial for the researchers studying PSO algorithms."}
{"_id": "91422d30847919aab2b80a7a14fc1a01f597a8a0", "title": "Multi-Scale Patch-Based Image Restoration", "text": "Many image restoration algorithms in recent years are based on patch processing. The core idea is to decompose the target image into fully overlapping patches, restore each of them separately, and then merge the results by a plain averaging. This concept has been demonstrated to be highly effective, leading often times to the state-of-the-art results in denoising, inpainting, deblurring, segmentation, and other applications. While the above is indeed effective, this approach has one major flaw: the prior is imposed on intermediate (patch) results, rather than on the final outcome, and this is typically manifested by visual artifacts. The expected patch log likelihood (EPLL) method by Zoran and Weiss was conceived for addressing this very problem. Their algorithm imposes the prior on the patches of the final image, which in turn leads to an iterative restoration of diminishing effect. In this paper, we propose to further extend and improve the EPLL by considering a multi-scale prior. Our algorithm imposes the very same prior on different scale patches extracted from the target image. While all the treated patches are of the same size, their footprint in the destination image varies due to subsampling. Our scheme comes to alleviate another shortcoming existing in patch-based restoration algorithms-the fact that a local (patch-based) prior is serving as a model for a global stochastic phenomenon. We motivate the use of the multi-scale EPLL by restricting ourselves to the simple Gaussian case, comparing the aforementioned algorithms and showing a clear advantage to the proposed method. We then demonstrate our algorithm in the context of image denoising, deblurring, and super-resolution, showing an improvement in performance both visually and quantitatively."}
{"_id": "7b7683a2f570cd3a4c8c139f25af2bad8e889dc9", "title": "Virtual machine placement strategies in cloud computing", "text": "The prominent technology that drives the industry now-a-days is cloud computing. The growth of cloud computing has resulted in the setup of large number of data centers around the world. The data centers consume more power making it source for the carbon dioxide emission and a major contributor to greenhouse effect. This led to the deployment of virtualization. Infrastructure as a Service is one of the important services offered by cloud computing that allows virtualization and hardware to get virtualized by creating many instances of Virtual Machine (VM) on a single Physical Machine (PM) and helps in improving utilization of resources. VM consolidation includes method of choosing the more appropriate algorithm for migration of VM's and placement of VMs to the most suitable host. VM placement is a part of VM migration. The effective placement of VM is aimed to improve performance, resource utilization and reduce the energy consumption in data centers without SLA violation. This paper aims to focus on various VM placement schemes."}
{"_id": "b891a8df3d7b4a6b73c9de7194f7341b00d93f6f", "title": "Boosting Response Aware Model-Based Collaborative Filtering", "text": "Recommender systems are promising for providing personalized favorite services. Collaborative filtering (CF) technologies, making prediction of users' preference based on users' previous behaviors, have become one of the most successful techniques to build modern recommender systems. Several challenging issues occur in previously proposed CF methods: (1) most CF methods ignore users' response patterns and may yield biased parameter estimation and suboptimal performance; (2) some CF methods adopt heuristic weight settings, which lacks a systematical implementation; and (3) the multinomial mixture models may weaken the computational ability of matrix factorization for generating the data matrix, thus increasing the computational cost of training. To resolve these issues, we incorporate users' response models into the probabilistic matrix factorization (PMF), a popular matrix factorization CF model, to establish the response aware probabilistic matrix factorization (RAPMF) framework. More specifically, we make the assumption on the user response as a Bernoulli distribution which is parameterized by the rating scores for the observed ratings while as a step function for the unobserved ratings. Moreover, we speed up the algorithm by a mini-batch implementation and a crafting scheduling policy. Finally, we design different experimental protocols and conduct systematical empirical evaluation on both synthetic and real-world datasets to demonstrate the merits of the proposed RAPMF and its mini-batch implementation."}
{"_id": "2da593f91bd50a4d7a49a3997a0704f4785124de", "title": "Interactive painterly stylization of images, videos and 3D animations", "text": "We introduce a real-time system that converts images, video, or 3D animation sequences to artistic renderings in various painterly styles. The algorithm, which is entirely executed on the GPU, can efficiently process 512 resolution frames containing 60,000 individual strokes at over 30 fps. In order to exploit the parallel nature of GPUs, our algorithm determines the placement of strokes entirely from local pixel neighborhood information. The strokes are rendered as point sprites with textures. Temporal coherence is achieved by treating the brush strokes as particles and moving them based on optical flow. Our system renders high quality results while allowing the user interactive control over many stylistic parameters such as stroke size, texture and density."}
{"_id": "25b63e08ac2a2c4d1dd2f66983a010955ea90192", "title": "Energy-traffic tradeoff cooperative offloading for mobile cloud computing", "text": "This paper presents a quantitative study on the energy-traffic tradeoff problem from the perspective of entire Wireless Local Area Network (WLAN). We propose a novel Energy-Efficient Cooperative Offloading Model (E2COM) for energy-traffic tradeoff, which can ensure the fairness of energy consumption of mobile devices and reduce the computation repetition and eliminate the Internet data traffic redundancy through cooperative execution and sharing computation results. We design an Online Task Scheduling Algorithm (OTS) based on a pricing mechanism and Lyapunov optimization to address the problem without predicting future information on task arrivals, transmission rates and so on. OTS can achieve a desirable tradeoff between the energy consumption and Internet data traffic by appropriately setting the tradeoff coefficient. Simulation results demonstrate that E2COM is more efficient than no offloading and cloud offloading for a variety of typical mobile devices, applications and link qualities in WLAN."}
{"_id": "682d8bc7bc07cbfa3beed789afafea7e9702e928", "title": "Global Hypothesis Generation for 6D Object Pose Estimation", "text": "This paper addresses the task of estimating the 6D-pose of a known 3D object from a single RGB-D image. Most modern approaches solve this task in three steps: i) compute local features, ii) generate a pool of pose-hypotheses, iii) select and refine a pose from the pool. This work focuses on the second step. While all existing approaches generate the hypotheses pool via local reasoning, e.g. RANSAC or Hough-Voting, we are the first to show that global reasoning is beneficial at this stage. In particular, we formulate a novel fully-connected Conditional Random Field (CRF) that outputs a very small number of pose-hypotheses. Despite the potential functions of the CRF being non-Gaussian, we give a new, efficient two-step optimization procedure, with some guarantees for optimality. We utilize our global hypotheses generation procedure to produce results that exceed state-of-the-art for the challenging Occluded Object Dataset."}
{"_id": "9916ddf04e038c7ca333ff94f8ff613a69778587", "title": "Decision-Making in a Robotic Architecture for Autonomy", "text": "This paper presents an overview of the intelligent decision-making capabilities of the CLARAty robotic architecture for autonomy. CLARAty is a two layered architecture where the top Decision Layer contains techniques for autonomously creating a plan of robot commands and the bottom Functional Layer provides standard robot capabilities that interface to system hardware. This paper focuses on the Decision Layer organization and capabilities. Specifically, the Decision Layer provides a framework for utilizing AI planning and executive techniques, which provide onboard, autonomous command generation and replanning for planetary rovers. The Decision Layer also provides a flexible interface to the Functional Layer, which can be tailored based on user preferences and domain features. This architecture is currently being tested on several JPL rovers."}
{"_id": "08d2858762644d85df9112886f10d3abb6465ccb", "title": "Optimal estimation of the mean function based on discretely sampled functional data: Phase transition", "text": "The problem of estimating the mean of random functions based on discretely sampled data arises naturally in functional data analysis. In this paper, we study optimal estimation of the mean function under both common and independent designs. Minimax rates of convergence are established and easily implementable rateoptimal estimators are introduced. The analysis reveals interesting and different phase transition phenomena in the two cases. Under the common design, the sampling frequency solely determines the optimal rate of convergence when it is relatively small and the sampling frequency has no effect on the optimal rate when it is large. On the other hand, under the independent design, the optimal rate of convergence is determined jointly by the sampling frequency and the number of curves when the sampling frequency is relatively small. When it is large, the sampling frequency has no effect on the optimal rate. Another interesting contrast between the two settings is that smoothing is necessary under the independent design, while, somewhat surprisingly, it is not essential under the common design."}
{"_id": "1e3182738045e85b289a90f2f6f53565f0d6f9ca", "title": "CTrigger: exposing atomicity violation bugs from their hiding places", "text": "Multicore hardware is making concurrent programs pervasive. Unfortunately, concurrent programs are prone to bugs. Among different types of concurrency bugs, atomicity violation bugs are common and important. Existing techniques to detect atomicity violation bugs suffer from one limitation: requiring bugs to manifest during monitored runs, which is an open problem in concurrent program testing.\n This paper makes two contributions. First, it studies the interleaving characteristics of the common practice in concurrent program testing (i.e., running a program over and over) to understand why atomicity violation bugs are hard to expose. Second, it proposes CTrigger to effectively and efficiently expose atomicity violation bugs in large programs. CTrigger focuses on a special type of interleavings (i.e., unserializable interleavings) that are inherently correlated to atomicity violation bugs, and uses trace analysis to systematically identify (likely) feasible unserializable interleavings with low occurrence-probability. CTrigger then uses minimum execution perturbation to exercise low-probability interleavings and expose difficult-to-catch atomicity violation.\n We evaluate CTrigger with real-world atomicity violation bugs from four sever/desktop applications (Apache, MySQL, Mozilla, and PBZIP2) and three SPLASH2 applications on 8-core machines. CTrigger efficiently exposes the tested bugs within 1--235 seconds, two to four orders of magnitude faster than stress testing. Without CTrigger, some of these bugs do not manifest even after 7 full days of stress testing. In addition, without deterministic replay support, once a bug is exposed, CTrigger can help programmers reliably reproduce it for diagnosis. Our tested bugs are reproduced by CTrigger mostly within 5 seconds, 300 to over 60000 times faster than stress testing."}
{"_id": "1459d4d16088379c3748322ab0835f50300d9a38", "title": "Cross-Domain Visual Matching via Generalized Similarity Measure and Feature Learning", "text": "Cross-domain visual data matching is one of the fundamental problems in many real-world vision tasks, e.g., matching persons across ID photos and surveillance videos. Conventional approaches to this problem usually involves two steps: i) projecting samples from different domains into a common space, and ii) computing (dis-)similarity in this space based on a certain distance. In this paper, we present a novel pairwise similarity measure that advances existing models by i) expanding traditional linear projections into affine transformations and ii) fusing affine Mahalanobis distance and Cosine similarity by a data-driven combination. Moreover, we unify our similarity measure with feature representation learning via deep convolutional neural networks. Specifically, we incorporate the similarity measure matrix into the deep architecture, enabling an end-to-end way of model optimization. We extensively evaluate our generalized similarity model in several challenging cross-domain matching tasks: person re-identification under different views and face verification over different modalities (i.e., faces from still images and videos, older and younger faces, and sketch and photo portraits). The experimental results demonstrate superior performance of our model over other state-of-the-art methods."}
{"_id": "95167cfafcd1dd7d15c02d7bb21232b90f9682b1", "title": "Design of Bidirectional DC\u2013DC Resonant Converter for Vehicle-to-Grid (V2G) Applications", "text": "In this paper, a detailed design procedure is presented for a bidirectional CLLLC-type resonant converter for a battery charging application. This converter is similar to an LLC-type resonant converter with an extra inductor and capacitor in the secondary side. Soft-switching can be ensured in all switches without additional snubber or clamp circuitry. Because of soft-switching in all switches, very high-frequency operation is possible; thus, the size of the magnetics and the filter capacitors can be made small. To reduce the size and cost of the converter, a CLLC-type resonant network is derived from the original CLLLC-type resonant network. First, in this paper, an equivalent model for the bidirectional converter is derived for the steady-state analysis. Then, the design methodology is presented for the CLLLC-type resonant converter. Design of this converter includes determining the transformer turns ratio, design of the magnetizing inductance based on zero-voltage switching condition, design of the resonant inductances and capacitances. Then, the CLLCtype resonant network is derived from the CLLLC-type resonant network. To validate the design procedure, a 3.5-kW converter was designed following the guidelines in the proposed methodology. A prototype was built and tested in the laboratory. Experimental results verified the design procedure presented."}
{"_id": "9b16e0fe76bc9a529d07557f44475b4fd851f71f", "title": "Evolutionary Computation Meets Machine Learning: A Survey", "text": "Evolutionary computation (EC) is a kind of optimization methodology inspired by the mechanisms of biological evolution and behaviors of living organisms. In the literature, the terminology evolutionary algorithms is frequently treated the same as EC. This article focuses on making a survey of researches based on using ML techniques to enhance EC algorithms. In the framework of an ML-technique enhanced-EC algorithm (MLEC), the main idea is that the EC algorithm has stored ample data about the search space, problem features, and population information during the iterative search process, thus the ML technique is helpful in analyzing these data for enhancing the search performance. The paper presents a survey of five categories: ML for population initialization, ML for fitness evaluation and selection, ML for population reproduction and variation, ML for algorithm adaptation, and ML for local search."}
{"_id": "135772775121ba60b47b9f2f012e682fe4128761", "title": "Obstruction-Free Synchronization: Double-Ended Queues as an Example", "text": "We introduce obstruction-freedom, a new nonblocking property for shared data structure implementations. This property is strong enough to avoid the problems associated with locks, but it is weaker than previous nonblocking properties\u2014specifically lock-freedom and wait-freedom\u2014 allowing greater flexibility in the design of efficient implementations. Obstruction-freedom admits substantially simpler implementations, and we believe that in practice it provides the benefits of wait-free and lock-free implementations. To illustrate the benefits of obstruction-freedom, we present two obstruction-free CAS-based implementations of double-ended queues (deques); the first is implemented on a linear array, the second on a circular array. To our knowledge, all previous nonblocking deque implementations are based on unrealistic assumptions about hardware support for synchronization, have restricted functionality, or have operations that interfere with operations at the opposite end of the deque even when the deque has many elements in it. Our obstruction-free implementations have none of these drawbacks, and thus suggest that it is much easier to design obstruction-free implementations than lock-free and waitfree ones. We also briefly discuss other obstruction-free data structures and operations that we have implemented."}
{"_id": "3ee0eec0553ca719f602b777f2b600a7aa5c31aa", "title": "Scheduling Multithreaded Computations by Work Stealing", "text": "This paper studies the problem of efficiently schedulling fully strict (i.e., well-structured) multithreaded computations on parallel computers. A popular and practical method of scheduling this kind of dynamic MIMD-style computation is \u201cwork stealing,\u201d in which processors needing work steal computational threads from other processors. In this paper, we give the first provably good work-stealing scheduler for multithreaded computations with dependencies.\nSpecifically, our analysis shows that the expected time to execute a fully strict computation on P processors using our work-stealing scheduler is T1/P + O(T \u221e , where T1 is the minimum serial execution time of the multithreaded computation and (T \u221e is the minimum execution time with an infinite number of processors. Moreover, the space required by the execution is at most S1P, where S1 is the minimum serial space requirement. We also show that the expected total communication of the algorithm is at most O(PT \u221e ( 1 + nd)Smax), where Smax is the size of the largest activation record of any thread and nd is the maximum number of times that any thread synchronizes with its parent. This communication bound justifies the folk wisdom that work-stealing schedulers are more communication efficient than their work-sharing counterparts. All three of these bounds are existentially optimal to within a constant factor."}
{"_id": "42142c121b2dbe48d55e81c2ce198a5639645030", "title": "Linearizability: A Correctness Condition for Concurrent Objects", "text": "A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object's operations can be given by pre- and post-conditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable."}
{"_id": "03a00248b7d5e2d89f5337e62c39fad277c66102", "title": "Introduction to Algorithms", "text": "problems To understand the class of polynomial-time solvable proble ms, we must first have a formal notion of what a \u201cproblem\u201d is. We define anbstract problemQ to be a binary relation on a set I of probleminstancesand a setS of problemsolutions. For example, an instance for SHORTEST-PATH is a triple consi sting of a graph and two vertices. A solution is a sequence of vertices in the g raph, with perhaps the empty sequence denoting that no path exists. The problem SHORTEST-PATH itself is the relation that associates each instance of a gra ph and two vertices with a shortest path in the graph that connects the two vertices. S ince shortest paths are not necessarily unique, a given problem instance may have mo r than one solution. This formulation of an abstract problem is more general than is required for our purposes. As we saw above, the theory of NP-completeness res tricts attention to decision problems : those having a yes/no solution. In this case, we can view an abstract decision problem as a function that maps the instan ce setI to the solution set {0, 1}. For example, a decision problem related to SHORTEST-PATH i s the problem PATH that we saw earlier. If i = \u3008G,u, v,k\u3009 is an instance of the decision problem PATH, then PATH(i ) = 1 (yes) if a shortest path fromu to v has at mostk edges, and PATH (i ) = 0 (no) otherwise. Many abstract problems are not decision problems, but rather optimization problems , in which some value must be minimized or maximized. As we saw above, however, it is usual ly a simple matter to recast an optimization problem as a decision problem that is no harder. 1See Hopcroft and Ullman [156] or Lewis and Papadimitriou [20 4] for a thorough treatment of the Turing-machine model. 34.1 Polynomial time 973"}
{"_id": "05c34e5fc12aadcbb309b36dc9f0ed309fd2dd50", "title": "An Axiomatic Basis for Computer Programming", "text": "In this paper an attempt is made to explore the logical foundations of computer programming by use of techniques which were first applied in the study of geometry and have later been extended to other branches of mathematics. This involves the elucidation of sets of axioms and rules of inference which can be used in proofs of the properties of computer programs. Examples are given of such axioms and rules, and a formal proof of a simple theorem is displayed. Finally, it is argued that important advantage, both theoretical and practical, may follow from a pursuance of these topics."}
{"_id": "55e8fe04a8af5ad6eaeee8383bd9686dff1d0ab3", "title": "Production of first and second generation biofuels : A comprehensive review", "text": "Sustainable economic and industrial growth requires safe, sustainable resources of energy. For the future re-arrangement of a sustainable economy to biological raw materials, completely new approaches in research and development, production, and economy are necessary. The \u2018first-generation\u2019 biofuels appear unsustainable because of the potential stress that their production places on food commodities. For organic chemicals and materials these needs to follow a biorefinery model under environmentally sustainable conditions. Where these operate at present, their product range is largely limited to simple materials (i.e. cellulose, ethanol, and biofuels). Second generation biorefineries need to build on the need for sustainable chemical products through modern and proven green chemical technologies such as bioprocessing including pyrolysis, Fisher Tropsch, and other catalytic processes in order to make more complex molecules and materials on which a future sustainable society will be based. This review focus on cost effective technologies and the processes to convert biomass into useful liquid biofuels and bioproducts, with particular focus on some biorefinery concepts based on different feedstocks aiming at the integral utilization of these feedstocks for the production of value added chemicals. 2009 Elsevier Ltd. All rights reserved."}
{"_id": "2c3bc86c5bceced5c1bb989227b099f46f7f478f", "title": "Dissolvable films of silk fibroin for ultrathin conformal bio-integrated electronics.", "text": "Electronics that are capable of intimate, non-invasive integration with the soft, curvilinear surfaces of biological tissues offer important opportunities for diagnosing and treating disease and for improving brain/machine interfaces. This article describes a material strategy for a type of bio-interfaced system that relies on ultrathin electronics supported by bioresorbable substrates of silk fibroin. Mounting such devices on tissue and then allowing the silk to dissolve and resorb initiates a spontaneous, conformal wrapping process driven by capillary forces at the biotic/abiotic interface. Specialized mesh designs and ultrathin forms for the electronics ensure minimal stresses on the tissue and highly conformal coverage, even for complex curvilinear surfaces, as confirmed by experimental and theoretical studies. In vivo, neural mapping experiments on feline animal models illustrate one mode of use for this class of technology. These concepts provide new capabilities for implantable and surgical devices."}
{"_id": "2dac046a9303a568ed6253a7132a4e9194b6924c", "title": "FURTHER ANALYSIS OF THE HIPPOCAMPAL AMNESIC SYNDROME : 14-YEAR FOLLOW-UP STUDY OF H . M . *", "text": "-The report attempts to delineatecertain residual learning capacities of H.M., a young man who became amnesic in 1953 following a bilateral removal in the hippocampal zone. In addition to being able to acquire new motor skills (CoRKIN [2]), this patient shows some evidence of perceptual learning. He also achieves some retention of very simple visual and tactual mazes in which the sequence of required turns is short enough to fit into his immediate memory span; even then, the rate of acquisition is extremely slow. These vestigial abilities, which have their occasional parallels in the patient's everyday life, are assessed against the background of his continuing profound amnesia for most on-going events, an amnesia that persists in spite of above-average intelligence and superior performance on many perceptual tasks. THE PRESENT repor t has three aims. In the first place, it describes the persis tent features of a severe amnesic syndrome acquired 14 years ago, fol lowing bi la tera l mesial t empora l lobec tomy (ScoVILLE [28]). Secondly, the repor t a t tempts to give further substance to our previous ly held belief tha t the pa t ien t ' s perceptua l and intel lectual capacit ies remain intact , as mani fes ted by no rma l or super ior pe r tb rmance on a fair ly wide range o f exper imental tasks. Thirdly , we are explor ing the na ture o f the m e m o r y detect in some detai l by trying to discover which learning tasks the pat ient can master , as c o m p a r e d with those on which he always fails. I N T E R V A L H I S T O R Y Since the onset of his amnes ia in 1953, we have twice had the oppor tun i ty o f br inging this pa t ien t under intensive observat ion. In 1962, he spent one week at the Mont rea l Neuro log ica l Inst i tute, and most results o f these and earl ier examinat ions have a l ready been repor ted (CoRK~N [1]; MILNER [16, 18, 19]). Extensive testing was again carr ied out in 1966, dur ing a two-week admiss ion to the Clinical Research Center at M.I .T. Findings ob ta ined dur ing that per iod, supplemented later by visits to the pa t ien t ' s home, form the basis of the present report . * From the Montreal Neurological Institute and from the Psychophysiological Laboratory and Clinical Research Center, M.I.T. \"Ilais work was supported in part by the Medical Research Council of Canada, and in part by grants to H.-L. TEtmER from the John A. Hartford Foundation, NASA, and the National Institutes of Health (under MH-05673, and a Clinical Center Grant, FR88). The follow-up study of K.M. was supported by United States Public Health Small Grant M5774A to BRENDA MILNER."}
{"_id": "7a0714e12dc106ba33a11bc0daa202caf9e1f07d", "title": "Animation of dynamic legged locomotion", "text": "This paper is about the use of control algorithms to animate dynamic legged locomotion. Control could free the animator from specifying the details of joint and limb motion while producing both physically realistic and natural looking results. We implemented computer animations of a biped robot, a quadruped robot, and a kangaroo. Each creature was modeled as a linked set of rigid bodies with compliant actuators at its joints. Control algorithms regulated the running speed, organized use of the legs, and maintained balance. All motions were generated by numerically integrating equations of motion derived from the physical models. The resulting behavior included running at various speeds, traveling with several gaits (run, trot, bound, gallop, and hop), jumping, and traversing simple paths. Whereas the use of control permitted a variety of physically realistic animated behavior to be generated with limited human intervention, the process of designing the control algorithms was not automated: the algorithms were \"tweaked\" and adjusted for each new creature."}
{"_id": "3a1d329c2f3c782e574bb9aece9e7b44214f28c6", "title": "A Wideband Dual Circularly Polarized Full-Corporate Waveguide Array Antenna Fed by Triple-Resonant Cavities", "text": "A dual circularly polarized (CP) waveguide array antenna is presented for Ka-band wireless communication. A novel triple-resonant cavity is proposed to implement mode conversion and impedance matching in the $2 \\times 2$ -element subarray. A ridge waveguide polarizer is integrated to form the left-hand circular polarization (LHCP) and the right-hand circular polarization (RHCP). A flyover-shaped full-corporate feeding network is designed to accommodate the dual polarizations and keep the wideband characteristic for a large-scale array. Subarray spacing is used to optimize the port-to-port isolation and decrease the sidelobe level at some definite angles. Computer numerically controlled machining technology is applied to fabricating a $16 \\times 16$ -element prototype. Measured results demonstrate a bandwidth of 16% from 27.6 to 32.4 GHz for the dual polarizations, in terms of good reflection coefficient and axial ratio. A maximum gain of 32.8 dBic is achieved at 28.8 GHz. A total efficiency over 60% is obtained throughout the above band for both LHCP and RHCP ports."}
{"_id": "809d484629228c94889c47ffcfd530e8edff17cc", "title": "Microfinance and Poverty : Evidence Using Panel Data from Bangladesh", "text": "Microfinance supports mainly informal activities that often have a low return and low market demand. It may therefore be hypothesized that the aggregate poverty impact of microfinance is modest or even nonexistent. If true, the poverty impact of microfinance observed at the participant level represents either income redistribution or short-run income generation from the microfinance intervention. This article examines the effects of microfinance on poverty reduction at both the participant and the aggregate levels using panel data from Bangladesh. The results suggest that access to microfinance contributes to poverty reduction, especially for female participants, and to overall poverty reduction at the village level. Microfinance thus helps not only poor participants but also the local economy."}
{"_id": "5088f57ee6eeacb2b125239870a7f1f5ca3acde2", "title": "Millimeter-Wave Microstrip Array Antenna with High Efficiency for Automotive Radar Systems", "text": "Automotive radar systems utilizing the millimeterwave band have been developed 2) since they can detect targets even in bad weather. In the automotive radar systems, linear polarization inclined 45 degrees is utilized in order to avoid mutual radio interference between them. The antenna for the automotive radar systems is required to have high efficiency, which means high gain against for an aperture area determined by the size of a radar sensor, in order to detect the targets at a long distance. In addition, low profile is required not to spoil the appearance of a car. Ease of manufacturing is also one of the significant factors in order to install the automotive radar systems in popular cars. As candidate antennas for the automotive radar systems, a slotted waveguide array antenna and a triplate-type array antenna have been studied. These antennas have low profile and high efficiency 7"}
{"_id": "bc7120aaebcd69113ad8e905a9eebee1f6986e37", "title": "A Calibration Technique for Bang-Bang ADPLLs Using Jitter Distribution Monitoring", "text": "This brief presents a built-in self-calibration (BISC) technique for minimization of the total jitter in bang-bang all-digital phase-locked loops (ADPLLs). It is based on the addition of a monitoring phase-frequency detector (PFD) with tunable delay cells for the reference clock and the divider clock and a counter for this PFD output signal. This allows for on-chip binary comparison of the jitter distribution widths at the ADPLL PFD input, when ADPLL filter parameters are altered. Since only a relative comparison is performed, no accurate delay calibration is required. The statistical properties of this comparison of two random distributions are analyzed theoretically, and guidelines for circuit dimensioning are derived. The proposed method is used for BISC by adaption of the ADPLL filter coefficients. This allows for jitter minimization under process, voltage and temperature variations as well as gain and period jitter of the digitally controlled oscillator. The proposed calibration technique is verified by system simulations and measurements of a silicon prototype implementation in 28-nm CMOS technology."}
{"_id": "0b3ef771ef086c115443ffd253f40a9f0d1436ac", "title": "Towards Future THz Communications Systems", "text": "Carrier frequencies beyond 300 GHz, belonging to the so-called THz range, have received attention for considering this frequency band for future multi-gigabit short-range communication systems. This review paper gives an overview of current issues in the emerging field of THz communications targeting to deliver wireless 100 Gbps over short distances. The paper will start by introducing scenarios and applications requiring such high data rates followed by a discussion on the radio channel characteristics. In the 300 GHz frequency band, the path loss is even more significant than at 60 GHz and appropriate measures to mitigate effects in none-line-of-sight (NLOS) cases, caused e. g. by the influence of moving people, are required. Advanced antenna techniques like beam forming or beam switching are a pre-requisite to guarantee seamless service. In order to consider such techniques in the standards development, the propagation channel operating at these mmand sub-mm wave bands in realistic environments must be well understood. Therefore, intensive channel measurement and modeling activities have been done at 300 GHz, which started a couple of years ago at Terahertz Communications Lab (TCL). Due to the short wave length, the influence of surface roughness of typical building materials plays an important role. Hence, the modeling of rough surface scattering has been one of the main areas in these investigations. In this contribution, the main results of the propagation research activities at TCL are summarized. In the last part of the paper, an overview of the state-of-the-art in technology development and successful demonstrations of data transmission will be given together with a report on the status quo of ongoing activities in standardization at IEEE 802.15 IG THz and the regulation of the spectrum beyond 300 GHz."}
{"_id": "9ac5b66036da98f2c1e62c6ca2bdcc075083ef85", "title": "Election Analysis and Prediction of Election Results with Twitter", "text": ""}
{"_id": "20070c1778da82394ddb1bc11968920ab94c3112", "title": "A soft hand model for physically-based manipulation of virtual objects", "text": "We developed a new hand model for increasing the robustness of finger-based manipulations of virtual objects. Each phalanx of our hand model consists of a number of deformable soft bodies, which dynamically adapt to the shape of grasped objects based on the applied forces. Stronger forces directly result in larger contact areas, which increase the friction between hand and object as would occur in reality. For a robust collision-based soft body simulation, we extended the lattice-shape matching algorithm to work with adaptive stiffness values, which are dynamically derived from force and velocity thresholds. Our implementation demonstrates that this approach allows very precise and robust grasping, manipulation and releasing of virtual objects and performs in real-time for a variety of complex scenarios. Additionally, laborious tuning of object and friction parameters is not necessary for the wide range of objects that we typically grasp with our hands."}
{"_id": "8ee38ec2d2da62ad96e36c7804b3bbf3a5153ab7", "title": "Stitching Web Tables for Improving Matching Quality", "text": "HTML tables on web pages (\u201cweb tables\u201d) cover a wide variety of topics. Data from web tables can thus be useful for tasks such as knowledge base completion or ad hoc table extension. Before table data can be used for these tasks, the tables must be matched to the respective knowledge base or base table. The challenges of web table matching are the high heterogeneity and the small size of the tables. Though it is known that the majority of web tables are very small, the gold standards that are used to compare web table matching systems mostly consist of larger tables. In this experimental paper, we evaluate T2K Match, a web table to knowledge base matching system, and COMA, a standard schema matching tool, using a sample of web tables that is more realistic than the gold standards that were previously used. We find that both systems fail to produce correct results for many of the very small tables in the sample. As a remedy, we propose to stitch (combine) the tables from each web site into larger ones and match these enlarged tables to the knowledge base or base table afterwards. For this stitching process, we evaluate different schema matching methods in combination with holistic correspondence refinement. Limiting the stitching procedure to web tables from the same web site decreases the heterogeneity and allows us to stitch tables with very high precision. Our experiments show that applying table stitching before running the actual matching method improves the matching results by 0.38 in F1-measure for T2K Match and by 0.14 for COMA. Also, stitching the tables allows us to reduce the amount of tables in our corpus from 5 million original web tables to as few as 100,000 stitched tables."}
{"_id": "a42aff222c6c8bc248a0504d063ef7334fe5a177", "title": "Efficient adaptive density estimation per image pixel for the task of background subtraction", "text": "We analyze the computer vision task of pixel-level background subtraction. We present recursive equations that are used to constantly update the parameters of a Gaussian mixture model and to simultaneously select the appropriate number of components for each pixel. We also present a simple non-parametric adaptive density estimation method. The two methods are compared with each other and with some previously proposed algorithms. 2005 Elsevier B.V. All rights reserved."}
{"_id": "82d0595da0a3f391fab5263fee3b0328916dad37", "title": "Automatic Topic Labeling in Asynchronous Conversations", "text": "Asynchronous conversations are conversations where participants collaborate with each other at different times (e.g., email, blog, forum). The huge amount of textual data generated everyday in these conversations calls for automated methods of conversational text analysis. Topic segmentation and labeling is often considered a prerequisite for higher-level conversation analysis and has been shown to be useful in many NLP applications including summarization, information extraction and conversation visualization. Topic segmentation in asynchronous conversation refers to the task of clustering the sentences into a set of coherent topical clusters. In [6, 7], we presented unsupervised and supervised topic segmentation models for asynchronous conversations, which to our knowledge are the state-of-the-art. This study concerns topic labeling, that is, given a set of topical clusters in a conversation, the task is to assign appropriate topic labels to the clusters. For example, five topic labels in a Slashdot [2] blog conversation about releasing a new game called Daggerfall are game contents and size, game design, bugs/faults, other gaming options and performance/speed issues. Such topic labels can serve as a concise summary of the conversation and can also be used for indexing. Ideally, topic labels should be meaningful, semantically similar to the underlying topic, general (i.e., broad coverage of the topic) and discriminative (or exclusive) when there are multiple topics [10]. Traditionally, the top K terms in a multinomial topic model (e.g., LDA [3]) are used to represent a topic. However, as pointed out by [10], at the word-level, topic labels may become too general that it can impose cognitive difficulties on a user to interpret the meaning of the topic by associating the words together. On the other hand, if the labels are expressed at the sentence-level, they may become too specific to cover the whole theme of the topic. Based on these observations, [10] and other recent studies (e.g., [9]) advocate for phrase-level topic labels. This is also consistent with the monologue corpora built as part of the Topic Detection and Tracking (TDT) project [1] as well as with our own email and blog conversational corpora in which human annotators without specific instructions spontaneously generated labels at the phrase-level. Considering all these factors, we also treat phrase-level as the right level of granularity for a label in this work. Few prior studies have addressed the topic labeling problem in different settings [10, 9]. Common to their approaches is that they first mine topics in the form of topic-word distributions from the whole corpus using topic models like LDA. Then, they try to label the topics (i.e., topic-word distributions) with an appropriate label using the statistical association metrics (e.g., point-wise mutual information, t-test) computed from either the source corpus or an external knowledge base (e.g., Wikipedia). In contrast, our task is to label the topical clusters in a given conversation, where topics are closely related and distributional variations are subtle (e.g., game contents and size, game design). Therefore, corpus-based statistical association metrics are not reliable in our case. Also at the conversation-level, the topics are too specific to find their labels in an external source. 
To our knowledge, none has studied this problem before. Therefore, there is no standard corpus and no agreed-upon evaluation metrics available. Our contributions aim to remedy these problems. First, we present a blog and an email corpus annotated with topics. Second, we propose to generate topic labels using an extractive approach that finds the most representative phrases from the text without relying on an external source. Since graph-based key phrase ranking has proved to be the state-of-the-art (unsupervised) method [11], we adopt the same framework. We propose a novel biased random walk model that exploits the fact that the leading sentences in a topic often carry the most informative clues for its label. However, the phrases extracted only by considering the sentences in a topic may ignore the global aspect of the conversation. As another contribution, we propose to re-rank the phrases extracted from the whole conversation with respect to the individual topics and include the relevant ones. Experimental results show that our approach outperforms other ranking models, including a general random walk model (i.e., TextRank) proposed by [11], a lead-oriented model and a frequency-based model, and that including the relevant conversation-level phrases improves the performance."}
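One way to realize a lead-biased random walk is personalized PageRank over a phrase co-occurrence graph; the sketch below is an interpretation of that idea using networkx, not the authors' exact model, and the edge weights, bias value and example phrases are made up:

```python
# Sketch: biased random walk over a phrase graph, in the spirit of the
# lead-biased ranking described above. Personalization weights favor
# phrases that appear in a topic's leading sentences (names illustrative).
import networkx as nx

def rank_phrases(cooccur_edges, lead_phrases, bias=0.8):
    """cooccur_edges: iterable of (phrase_a, phrase_b, weight) triples."""
    G = nx.Graph()
    G.add_weighted_edges_from(cooccur_edges)
    lead = set(lead_phrases) & set(G.nodes)
    # Put `bias` mass on leading-sentence phrases, the rest spread uniformly.
    p = {n: bias / max(len(lead), 1) if n in lead
         else (1 - bias) / max(len(G) - len(lead), 1) for n in G.nodes}
    scores = nx.pagerank(G, personalization=p, weight="weight")
    return sorted(scores, key=scores.get, reverse=True)

edges = [("game design", "game contents", 2.0),
         ("game contents", "game size", 3.0),
         ("game design", "bugs", 1.0)]
print(rank_phrases(edges, lead_phrases=["game contents"])[:2])
```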
{"_id": "4e8ff5811686a5c8e45decfb168fd7ecd1cb088a", "title": "A framework for facial surgery simulation", "text": "The accurate prediction of the post-surgical facial shape is of paramount importance for surgical planning in facial surgery. In this paper we present a framework for facial surgery simulation which is based on volumetric finite element modeling. We contrast conventional procedures for surgical planning against our system by accompanying a patient during the entire process of planning, medical treatment and simulation. In various preprocessing steps a 3D physically based facial model is reconstructed from CT and laser range scans. All geometric and topological changes are modeled interactively using Alias.#8482; Applying fully 3D volumetric elasticity allows us to represent important volumetric effects such as incompressibility in a natural and physically accurate way. For computational efficiency, we devised a novel set of prismatic shape functions featuring a globally C1-continuous surface in combination with a C0 interior. Not only is it numerically accurate, but this construction enables us to compute smooth and visually appealing facial shapes."}
{"_id": "11b11d5b58f9d9519031b79385d7e0a712390ff9", "title": "Design of an adaptive sliding mode controller for a novel spherical rolling robot", "text": "This paper presents a novel spherical rolling robot and the design procedure of an adaptive sliding mode controller for the proposed robot. Towards containing the uncertainties, the sliding mode controller is utilized and simulation tests represent the effectiveness of this method. However, the chattering phenomenon in the robot operation is the main insufficiency of this controller. Hence, in order to present a more accurate controller, sliding mode controller is equipped with an identification method which works in an online manner for identifying the exact model of the robot. The provided identification procedure is based on Recursive Least Square approach, which is one of the most promising approaches toward identifying the model. It should be also noted that, this adaptive controller identify the model without taking into account any data from the derived model."}
{"_id": "318c86751f018b5d7415dafc58e20c0ce06c68b6", "title": "A Memory Soft Error Measurement on Production Systems", "text": "Memory state can be corrupted by the impact of particles causing single-event upsets (SEUs). Understanding and dealing with these soft (or transient) errors is important for system reliability. Several earlier studies have p rovided field test measurement results on memory soft error rate, but no results were available for recent production computer systems. We believe the measurement results on real production systems are uniquely valuable due to various environmental effects. This paper presents methodologies for memory soft error measurement on production systems where performance impact on existing running applications must be negligible and the system administrativ e control might or might not be available. We conducted measurements in three distinct system environments: a rack-mounted server farm for a popular Internet service (Ask.com search engine), a set of office desktop computers (Univ. of Rochester), and a geographically distributed network testbed (PlanetLab). Our preliminary measurement on over 300 machines for varying multi-month periods finds 2 suspected soft errors. In particular, our result on the Internet servers indicates that, with high probability, the soft error rate is at least two orders of magnitude lower than those reported previously. We provide discussions that attribute the low error rate to several factors in today\u2019s production system environments . As a contrast, our measurement unintentionally discovers permanent (or hard) memory faults on 9 out of 212 Ask.com machines, suggesting the relative commonness of"}
{"_id": "bcc709bca17483239fb5e43cfdad62b0f6df9827", "title": "Winding Machine for Automated Production of an Innovative Air-Gap Winding for Lightweight Electric Machines", "text": "This paper presents a newly developed winding machine, which enables an automated production of stator-mounted air-gap windings with meander structure. This structure has very high accuracy requirements. Therefore, automation is realized by the interaction of 15 actuators and a compound construction with 13 degrees of freedom. The programming works with discrete open-loop motion control to generate the kinematics. Above all, a flexible prototype of the winding machine is developed, manufactured, and tested for a motor with external rotor. Finally, experimental results of the developed automation for air-gap windings with meander structure are presented."}
{"_id": "920ca8ae80f659328fe6248b9ba2d40a18c4a9c4", "title": "Classification of malignant melanoma and benign skin lesions: implementation of automatic ABCD rule", "text": "The ABCD (asymmetry, border irregularity, colour and dermoscopic structure) rule of dermoscopy is a scoring method used by dermatologists to quantify dermoscopy findings and effectively separate melanoma from benign lesions. Automatic detection of the ABCD features and separation of benign lesions from melanoma could enable earlier detection of melanoma. In this study, automatic ABCD scoring of dermoscopy lesions is implemented. Pre-processing enables automatic detection of hair using Gabor filters and lesion boundaries using geodesic active contours. Algorithms are implemented to extract the characteristics of ABCD attributes. Methods used here combine existing methods with novel methods to detect colour asymmetry and dermoscopic structures. To classify lesions as melanoma or benign nevus, the total dermoscopy score is calculated. The experimental results, using 200 dermoscopic images, where 80 are malignant melanomas and 120 benign lesions, show that the algorithm achieves 91.25% sensitivity of 91.25 and 95.83% specificity. This is comparable to the 92.8% sensitivity and 90.3% specificity reported for human implementation of the ABCD rule. The experimental results show that the extracted features can be used to build a promising classifier for melanoma detection."}
{"_id": "9fed98656b42d08e2f7bf2589f4af1d877b2b650", "title": "A Hybrid Framework for News Clustering Based on the DBSCAN-Martingale and LDA", "text": "Nowadays there is an important need by journalists and media monitoring companies to cluster news in large amounts of web articles, in order to ensure fast access to their topics or events of interest. Our aim in this work is to identify groups of news articles that share a common topic or event, without a priori knowledge of the number of clusters. The estimation of the correct number of topics is a challenging issue, due to the existence of \u201cnoise\u201d, i.e. news articles which are irrelevant to all other topics. In this context, we introduce a novel density-based news clustering framework, in which the assignment of news articles to topics is done by the well-established Latent Dirichlet Allocation, but the estimation of the number of clusters is performed by our novel method, called \u201cDBSCAN-Martingale\u201d, which allows for extracting noise from the dataset and progressively extracts clusters from an OPTICS reachability plot. We evaluate our framework and the DBSCAN-Martingale on the 20newsgroups-mini dataset and on 220 web news articles, which are references to specific Wikipedia pages. Among twenty methods for news clustering, without knowing the number of clusters k, the framework of DBSCAN-Martingale provides the correct number of clusters and the highest Normalized Mutual Information."}
{"_id": "25599e9f11ae6fa901299d44d9a2666c7072af1f", "title": "Overview of ImageCLEFcaption 2017 - Image Caption Prediction and Concept Detection for Biomedical Images", "text": "This paper presents an overview of the ImageCLEF 2017 caption tasks on the analysis of images from the biomedical literature. Two subtasks were proposed to the participants: a concept detection task and caption prediction task, both using only images as input. The two subtasks tackle the problem of providing image interpretation by extracting concepts and predicting a caption based on the visual information of an image alone. A dataset of 184,000 figure-caption pairs from the biomedical open access literature (PubMed Central) are provided as a testbed with the majority of them as trainign data and then 10,000 as validation and 10,000 as test data. Across two tasks, 11 participating groups submitted 71 runs. While the domain remains challenging and the data highly heterogeneous, we can note some surprisingly good results of the difficult task with a quality that could be beneficial for health applications by better exploiting the visual content of biomedical figures."}
{"_id": "286e3064e650ec49b4c2216b4cf27499d3aaceb3", "title": "A Hierarchical Cluster Synchronization Framework of Energy Internet", "text": "The concept of \u201cEnergy Internet\u201d has been recently proposed to improve the efficiency of renewable energy resources. Accordingly, this paper investigates the synchronization of the Energy Internet system. Specifically, an appropriate hierarchical cluster synchronization framework is firstly developed for the Energy Internet, which includes primary synchronization, secondary synchronization, and tertiary synchronization. Furthermore, a secondary synchronization strategy (cluster synchronization) is proposed for the synchronization of multiple microgrid clusters in the Energy Internet, which is based on the consensus algorithm of multiagent networks. The detailed synchronization framework and control scheme are presented in this paper. Finally, some simulation results are provided and discussed to validate the performance and effectiveness of the proposed strategy."}
{"_id": "35fcbcda34a18ed46399f2922f808e82ba729a55", "title": "Optimal Power Flow by Black Hole Optimization Algorithm", "text": "In this paper, a black hole optimization algorithm (BH) is utilized to solve the optimal power flow problem considering the generation fuel cost, reduction of voltage deviation and improvement of voltage stability as an objective functions. The black hole algorithm simulate the black hole phenomenon which relay on tow operations, the star absorption and star sucking. The IEEE 30-Bus and IEEE 57-Bus systems are used to illustrate performance of the proposed algorithm and results are compared with those in literatures."}
{"_id": "8d243d0a43298f39f6bf1b4846b7161b3dfeb697", "title": "Extraction and classification texture of inflammatory cells and nuclei in normal pap smear images", "text": "The presence of inflammatory cells complicates the process of identifying the nuclei in the early detection of cervical cancer. Inflammatory cells need to be eliminated to assist pathologists in reading Pap smear slides. The texture of Grey-Level Run-Length Matrix (GLRLM) for inflammatory cells and nuclei types are investigated. The inflammatory cells and nuclei have different texture, and it can be used to differentiate them. To extract all of the features, firstly manual cropping of inflammatory cells and nuclei needs to be done. All of extracted features have been analyzed and selected by Decision Tree classifier (J48). Originally there have been eleven features in the direction of 135\u00b0 which are extracted to classify cropping cells into inflammatory cells and nuclei. Then the eleven features are reduced into eight, namely low gray level run emphasis, gray level non uniformity, run length non-uniformity, long run low gray-level emphasis, short run high gray-level emphasis, short run low gray-level emphasis, long run high gray-level emphasis and run percentage based on the rule of classification. This experiment is applied into 957 cells which were from 50 images. The compositions of these cells were 122 cells of nuclei and 837 cells of inflammatory. The proposed algorithm applied to all of the cells and the result of classification by using these eight texture features obtains the sensitivity rates which show that there are still nuclei of cells that were considered as inflammatory cells. It was in accordance with the conditions of the difficulties faced by the pathologist while the specificity rate suggests that inflammatory cells detected properly and few inflammatory cells are considered as nucleus."}
{"_id": "f45eb5367bb9fa9a52fd4321a63308a37960e93a", "title": "Development of Autonomous Car\u2014Part II: A Case Study on the Implementation of an Autonomous Driving System Based on Distributed Architecture", "text": "Part I of this paper proposed a development process and a system platform for the development of autonomous cars based on a distributed system architecture. The proposed development methodology enabled the design and development of an autonomous car with benefits such as a reduction in computational complexity, fault-tolerant characteristics, and system modularity. In this paper (Part II), a case study of the proposed development methodology is addressed by showing the implementation process of an autonomous driving system. In order to describe the implementation process intuitively, core autonomous driving algorithms (localization, perception, planning, vehicle control, and system management) are briefly introduced and applied to the implementation of an autonomous driving system. We are able to examine the advantages of a distributed system architecture and the proposed development process by conducting a case study on the autonomous system implementation. The validity of the proposed methodology is proved through the autonomous car A1 that won the 2012 Autonomous Vehicle Competition in Korea with all missions completed."}
{"_id": "03f4141389da40a98517efc02b0fe910db17b8f0", "title": "Digital Image Processing", "text": "Image defects which could be caused by the digitization process or by faults in the imaging set-up (for example, bad lighting) can be corrected using Image Enhancement techniques. Once the image is in good condition, the Measurement Extraction operations can be used to obtain useful information from the image. Course Objectives \u201cIntroduction to Digital Image Processing\u201d is two-day hands-on course that provide fundamental concepts and techniques for digital image processing and the software principles used in their practical implementation."}
{"_id": "773e2eb3682962df329539fab6af2ab4148f3db7", "title": "Real-time computer vision with OpenCV", "text": "Mobile computer-vision technology will soon become as ubiquitous as touch interfaces."}
{"_id": "f9048bac0bd8cb284902eac1baa1bcbc4e610cf3", "title": "Python for Prototyping Computer Vision Applications", "text": "Python is a popular language widely adopted by the scientific community due to its clear syntax and an extensive number of specialized packages. For image processing or computer vision development, two libraries are prominently used: NumPy/SciPy and OpenCV with a Python wrapper. In this paper, we present a comparative evaluation of both libraries, assessing their performance and their usability. We also investigate the performance of OpenCV when accessed through a python wrapper versus directly using the native C implementation."}
{"_id": "6181671fc9c9caef9e3d75b56241b202a011c55a", "title": "Python for Scientific Computing", "text": "Python is an excellent \"steering\" language for scientific codes written in other languages. However, with additional basic tools, Python transforms into a high-level language suited for scientific and engineering code that's often fast enough to be immediately useful but also flexible enough to be sped up with additional extensions."}
{"_id": "719de8345edd73a78566e1267aeca60c9545e3df", "title": "Image Alignment and Stitching : A Tutorial 1", "text": "This tutorial reviews image alignment and image stitching algorithms. Image alignment algorithms can discover the correspondence relationships among images with varying degrees of overlap. They are ideally suited for applications such as video stabilization, summarization, and the creation of panoramic mosaics. Image stitching algorithms take the alignment estimates produced by such registration algorithms and blend the images in a seamless manner, taking care to deal with potential problems such as blurring or ghosting caused by parallax and scene movement as well as varying image exposures. This tutorial reviews the basic motion models underlying alignment and stitching algorithms, describes effective direct (pixel-based) and feature-based alignment algorithms, and describes blending algorithms used to produce seamless mosaics. It closes with a discussion of open research problems in the area."}
{"_id": "db17a183cb220ae8473bf1b25d62d5ef6fcfeac7", "title": "Substrate-Independent Microwave Components in Substrate Integrated Waveguide Technology for High-Performance Smart Surfaces", "text": "Although all existing air-filled substrate integrated waveguide (AFSIW) topologies yield a substrate-independent electrical performance, they rely on dedicated, expensive, laminates to form air-filled regions that contain the electromagnetic fields. This paper proposes a novel substrate-independent AFSIW manufacturing technology, enabling straightforward integration of high-performance microwave components into a wide range of general-purpose commercially available surface materials by means of standard additive (3-D printing) or subtractive (computer numerically controlled milling/laser cutting) manufacturing processes. First, an analytical formula is derived for the effective permittivity and loss tangent of the AFSIW waveguide. This allows the designer to reduce substrate losses to levels typically encountered in high-frequency laminates. Then, several microwave components are designed and fabricated. Measurements of multiple AFSIW waveguides and a four-way power divider/combiner, both relying on a new coaxial-to-air-filled SIW transition, prove that this novel approach yields microwave components suitable for direct integration into everyday surfaces, with low insertion loss, and excellent matching and isolation over the entire [5.15\u20135.85] GHz band. Hence, this innovative approach paves the way for a new generation of cost-effective, high-performance, and invisibly integrated smart surface systems that efficiently exploit the area and the materials available in everyday objects."}
{"_id": "7aee5d2bdb153be534685f2598ad1b2e09621ce9", "title": "VIPS: a Vision-based Page Segmentation Algorithm", "text": "A new web content structure analysis based on visual representation is proposed in this paper. Many web applications such as information retrieval, information extraction and automatic page adaptation can benefit from this structure. This paper presents an automatic top-down, tag-tree independent approach to detect web content structure. It simulates how a user understands web layout structure based on his visual perception. Comparing to other existing techniques, our approach is independent to underlying documentation representation such as HTML and works well even when the HTML structure is far different from layout structure. Experiments show satisfactory results."}
{"_id": "4d2cbdc1291c45ed9d87ad0c37e20967c97172e5", "title": "Modality Effects in Deception Detection and Applications in Automatic-Deception-Detection", "text": "Modality is an important context factor in deception, which is context-dependent. In order to build a reliable and flexible tool for automatic-deception-detection (ADD), we investigated the characteristics of verbal cues to deceptive behavior in three modalities: text, audio and face-to-face communication. Seven categories of verbal cues (21 cues) were studied: quantity, complexity, diversity, verb nonimmediacy, uncertainty, specificity and affect. After testing the interaction effects between modality and condition (deception or truth), we found significance only with specificity and observed that differences between deception and truth were in general consistent across the three modalities. However, modality had strong effects on verbal cues. For example, messages delivered face-to-face were largest in quantity (number of words, verbs, and sentences), followed by the audio modality. Text had the sparsest examples. These modality effects are an important factor in building baselines in ADD tools, because they make it possible to use them to adjust the baseline for an unknown modality according to a known baseline, thereby simplifying the process of ADD. The paper discusses in detail the implications of these findings on modality effects in three modalities."}
{"_id": "b80ba9b52a6286199872c0f892c3b2fa1033ec1e", "title": "Model-based clustering of high-dimensional data: A review", "text": "Model-based clustering is a popular tool which is renowned for its probabilistic foundations and its flexibility. However, high-dimensional data are nowadays more and more frequent and, unfortunately, classical model-based clustering techniques show a disappointing behavior in high-dimensional spaces. This is mainly due to the fact that model-based clustering methods are dramatically over-parametrized in this case. However, high-dimensional spaces have specific characteristics which are useful for clustering and recent techniques exploit those characteristics. After having recalled the bases of model-based clustering, this article will review dimension reduction approaches, regularization-based techniques, parsimonious modeling, subspace clustering methods and clustering methods based on variable selection. Existing softwares for model-based clustering of high-dimensional data will be also reviewed and their practical use will be illustrated on real-world data sets."}
{"_id": "624da56c523f93172b417291aa331e6a8b8e1442", "title": "Categorical Data Clustering using Frequency and Tf-Idf based Cosine Similarity", "text": "Clustering is the process of grouping a set of physical objects into classes of similar object. Objects in real world consist of both numerical and categorical data. Categorical data are not analyzed as numerical data due to the lack of inherit ordering. This paper describes Frequency and Tf-Idf Based Categorical Data Clustering (FTCDC) technique based on cosine similarity measure. The FTCDC system consists of four modules, such as data pre-processing, similarity matrix generation, cluster formation and validation. The System architecture of FTCDC is explained and its performance is examined using a simple example scenario. The performance on real world data is measured using accuracy and error rate. The performance of the system much relay on the similarity threshold selected for the clustering process."}
{"_id": "545a2844de1a253357403c1fa37561f810bdb553", "title": "Transcription Methods for Consistency, Volume and Efficiency", "text": "This paper describes recent efforts at Linguistic Data Consortium at the University of Pennsylvania to create manual transcripts as a shared resource for human language technology research and evaluation. Speech recognition and related technologies in particular call for substantial volumes of transcribed speech for use in system development, and for human gold standard references for evaluating performance over time. Over the past several years LDC has developed a number of transcription approaches to support the varied goals of speech technology evaluation programs in multiple languages and genres. We describe each transcription method in detail, and report on the results of a comparative analysis of transcriber consistency and efficiency, for two transcription methods in three languages and five genres. Our findings suggest that transcripts for planned speech are generally more consistent than those for spontaneous speech, and that careful transcription methods result in higher rates of agreement when compared to quick transcription methods. We conclude with a general discussion of factors contributing to transcription quality, efficiency and consistency."}
{"_id": "767ad42da2e1dcd79dba2e5822eed65ac2fe1dea", "title": "GraphP: Reducing Communication for PIM-Based Graph Processing with Efficient Data Partition", "text": "Processing-In-Memory (PIM) is an effective technique that reduces data movements by integrating processing units within memory. The recent advance of \u201cbig data\u201d and 3D stacking technology make PIM a practical and viable solution for the modern data processing workloads. It is exemplified by the recent research interests on PIM-based acceleration. Among them, TESSERACT is a PIM-enabled parallel graph processing architecture based on Micron\u2019s Hybrid Memory Cube (HMC), one of the most prominent 3D-stacked memory technologies. It implements a Pregel-like vertex-centric programming model, so that users could develop programs in the familiar interface while taking advantage of PIM. Despite the orders of magnitude speedup compared to DRAM-based systems, TESSERACT generates excessive crosscube communications through SerDes links, whose bandwidth is much less than the aggregated local bandwidth of HMCs. Our investigation indicates that this is because of the restricted data organization required by the vertex programming model. In this paper, we argue that a PIM-based graph processing system should take data organization as a first-order design consideration. Following this principle, we propose GraphP, a novel HMC-based software/hardware co-designed graph processing system that drastically reduces communication and energy consumption compared to TESSERACT. GraphP features three key techniques. 1) \u201cSource-cut\u201d partitioning, which fundamentally changes the cross-cube communication from one remote put per cross-cube edge to one update per replica. 2) \u201cTwo-phase Vertex Program\u201d, a programming model designed for the \u201csource-cut\u201d partitioning with two operations: GenUpdate and ApplyUpdate. 3) Hierarchical communication and overlapping, which further improves performance with unique opportunities offered by the proposed partitioning and programming model. We evaluate GraphP using a cycle accurate simulator with 5 real-world graphs and 4 algorithms. The results show that it provides on average 1.7 speedup and 89% energy saving compared to TESSERACT."}
{"_id": "41ef0f5baf35793ac780885d5186d65d12f91df9", "title": "Architectural optimizations for high performance and energy efficient Smith-Waterman implementation on FPGAs using OpenCL", "text": "Smith-Waterman is a dynamic programming algorithm that plays a key role in the modern genomics pipeline as it is guaranteed to find the optimal local alignment between two strings of data. The state of the art presents many hardware acceleration solutions that have been implemented in order to exploit the high degree of parallelism available in this algorithm. The majority of these implementations use heuristics to increase the performance of the system at the expense of the accuracy of the result. In this work, we present an implementation of the pure version of the algorithm. We include the key architectural optimizations to achieve highest possible performance for a given platform and leverage the Berkeley roofline model to track the performance and guide the optimizations. To achieve scalability, our custom design comprises of systolic arrays, data compression features and shift registers, while a custom port mapping strategy aims to maximize performance. Our designs are built leveraging an OpenCL-based design entry, namely Xilinx SDAccel, in conjunction with a Xilinx Virtex 7 and Kintex Ultrascale platform. Our final design achieves a performance of 42.47 GCUPS (giga cell updates per second) with an energy efficiency of 1.6988 GCUPS/W. This represents an improvement of 1.72x in performance and energy efficiency over previously published FPGA implementations and 8.49x better in energy efficiency over comparable GPU implementations."}
{"_id": "338557d69772b2018793e026cda7afa091a638dd", "title": "An efficient Bayesian inference framework for coalescent-based nonparametric phylodynamics", "text": "MOTIVATION\nThe field of phylodynamics focuses on the problem of reconstructing population size dynamics over time using current genetic samples taken from the population of interest. This technique has been extensively used in many areas of biology but is particularly useful for studying the spread of quickly evolving infectious diseases agents, e.g. influenza virus. Phylodynamic inference uses a coalescent model that defines a probability density for the genealogy of randomly sampled individuals from the population. When we assume that such a genealogy is known, the coalescent model, equipped with a Gaussian process prior on population size trajectory, allows for nonparametric Bayesian estimation of population size dynamics. Although this approach is quite powerful, large datasets collected during infectious disease surveillance challenge the state-of-the-art of Bayesian phylodynamics and demand inferential methods with relatively low computational cost.\n\n\nRESULTS\nTo satisfy this demand, we provide a computationally efficient Bayesian inference framework based on Hamiltonian Monte Carlo for coalescent process models. Moreover, we show that by splitting the Hamiltonian function, we can further improve the efficiency of this approach. Using several simulated and real datasets, we show that our method provides accurate estimates of population size dynamics and is substantially faster than alternative methods based on elliptical slice sampler and Metropolis-adjusted Langevin algorithm.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe R code for all simulation studies and real data analysis conducted in this article are publicly available at http://www.ics.uci.edu/\u223cslan/lanzi/CODES.html and in the R package phylodyn available at https://github.com/mdkarcher/phylodyn.\n\n\nCONTACT\nS.Lan@warwick.ac.uk or babaks@uci.edu\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."}
{"_id": "b1ce62b6831c93cb4c8d16ebf4c8614235cb3f38", "title": "Multiuser Millimeter Wave Communications With Nonorthogonal Beams", "text": "Recently, millimeter-wave (mmWave) and even terahertz wireless (with higher frequency) networks have attracted significant research interests as alternatives to support the formidable growth of wireless communication services. Normally, directional beamforming (BF) is shown as a promising technique to compensate its high path loss. We focus on mmWave communications and assume that both mmWave base stations (MBSs) and user equipments (UEs) can support directional BF. As mmWave spectrum has short wavelength, massive antenna arrays can be deployed at MBSs to form multiple directional beams through BF training. Then, an MBS can transmit simultaneously to multiple UEs (SUEs) with different beams in the networks. However, the beams that serve different SUEs may transmit (almost) in the same path, especially when SUEs are distributed densely. Thus, they are not in perfect orthogonal beams. Due to the leakage of transmission power, the interference among these beams may be severe. To address this problem, typically the MBS could serve these SUEs in time division multiplex. This will degrade the spectral efficiency. In this context, we investigate the effect of nonorthogonal beam interference and then propose two novel solutions (i.e., dynamic beam switching and static beam selection) to coordinate the transmitting beams effectively. Then, an improved downlink multiuser simultaneous transmission scheme is introduced. In the scheme, an MBS can serve multiple SUEs simultaneously with multiple orthogonal and/or nonorthogonal beams to guarantee SUEs\u2019 Quality of Service. The theoretical and numerical results have shown that our scheme can largely improve the performance of the achievable rate and, meanwhile, can serve lots of SUEs simultaneously."}
{"_id": "fbac6141263b54f1009df931857d249a1299fa27", "title": "An Adaptive Privacy Protection Method for Smart Home Environments Using Supervised Learning", "text": "In recent years, smart home technologies have started to be widely used, bringing a great deal of convenience to people\u2019s daily lives. At the same time, privacy issues have become particularly prominent. Traditional encryption methods can no longer meet the needs of privacy protection in smart home applications, since attacks can be launched even without the need for access to the cipher. Rather, attacks can be successfully realized through analyzing the frequency of radio signals, as well as the timestamp series, so that the daily activities of the residents in the smart home can be learnt. Such types of attacks can achieve a very high success rate, making them a great threat to users\u2019 privacy. In this paper, we propose an adaptive method based on sample data analysis and supervised learning (SDASL), to hide the patterns of daily routines of residents that would adapt to dynamically changing network loads. Compared to some existing solutions, our proposed method exhibits advantages such as low energy consumption, low latency, strong adaptability, and effective privacy protection."}
{"_id": "837c8d073dab369f284bf297682c44602d749a10", "title": "65GHz Doppler Sensor with On-Chip Antenna in 0.18\u03bcm SiGe BiCMOS", "text": "A single-chip 65GHz Doppler radar transceiver with on-chip patch antenna is reported. Implemented in a production 0.18mum SiGe BiCMOS process, it features a differential output transmit power of 4.3dBm, 16.5dB single-ended down-conversion gain and a double-sideband noise figure of 12.8dB. The radar includes a 65GHz 2-stage cascode LNA with S 11<-15dB at 50-94GHz and 14dB gain at 65GHz, a double-balanced down-convert mixer, a SiGe HBT IF amplifier with low 1/f noise, a VCO and a 65GHz output buffer. The LO is provided by an integrated varactor-tuned 61-67GHz VCO optimized for low phase noise. The patch antenna is designed to be impedance-matched for dual-band operation at 62.8 and 64.6GHz. The use of lumped inductors in all blocks and a vertically-stacked transformer for single-ended to differential conversion in the receive path help reduce the transceiver area to 1mm times 1mm"}
{"_id": "4575716ba329ca8332787427dc19b3f6af5ca35e", "title": "In-class distractions: The role of Facebook and the primary learning task", "text": "While laptops and other Internet accessible technologies facilitate student learning in the classroom, they also increase opportunities for interruptions from off-task social networking sites such as Facebook (FB). A small number of correlational studies have suggested that FB has a detrimental effect on learning performance, however; these studies had neglected to investigate student-engagement in the primary learning task and how this affects task-switching to goal-irrelevant FB intrusions (distractions); and how purposeful deployment of attention to FB (goal-relevant interruptions) affect lecture comprehension on such tasks. This experiment fills a gap in the literature by manipulating lecture interest-value and controls for duration of FB exposure, time of interruption, FB material and the order of FB posts. One hundred and fifty participants were randomly allocated to one of six conditions: (A) no FB intrusions, high-interest (HI) lecture; (B) no FB intrusions, low-interest (LI) lecture (C) goal-relevant FB intrusions, HI lecture (D) goal-relevant FB intrusions, LI lecture (E) goal-irrelevant FB intrusions, HI lecture (F) goal-irrelevant FB intrusions, LI lecture. As predicted, participants were more susceptible to FB distractions when the primary learning task was of low-interest. The study also found that goal-relevant FB intrusions significantly reduced HI lecture comprehension compared to the control condition (A). The results highlight the need for recourses that will help educators increase student engagement with their learning task. Implications for future research are discussed. 2014 Elsevier Ltd. All rights reserved."}
{"_id": "8216673632b897ec50db06358b77f13ddd432c47", "title": "Guidelines for human electromyographic research.", "text": ""}
{"_id": "6869d0101515fd56b52e4370db52c30d1d279df6", "title": "Design and experimental validation of HyTAQ, a Hybrid Terrestrial and Aerial Quadrotor", "text": "This paper details the design, modeling, and experimental validation of a novel mobile robot capable of both aerial and terrestrial locomotion. Flight is achieved through a quadrotor configuration; four actuators provide the required thrust. Adding a rolling cage to the quadrotor makes terrestrial locomotion possible using the same actuator set and control system. Thus, neither the mass nor the system complexity is increased by inclusion of separate actuators for terrestrial and aerial locomotion. An analysis of the system's energy consumption demonstrates that during terrestrial locomotion, the robot only needs to overcome rolling resistance and consumes much less energy compared to the aerial mode. This solves one of the most vexing problems of quadrotors and rotorcraft in general - their short operation time. Experimental results show that the hybrid robot can travel a distance four times greater and operate almost six times longer than an aerial only system. It also solves one of the most challenging problems in terrestrial robot design - obstacle avoidance. When an obstacle is encountered, the system simply flies over it."}
{"_id": "1d973294d8ed400806e728196b1b3d6ab4002dea", "title": "Inference and Analysis of Population Structure Using Genetic Data and Network Theory.", "text": "Clustering individuals to subpopulations based on genetic data has become commonplace in many genetic studies. Inference about population structure is most often done by applying model-based approaches, aided by visualization using distance-based approaches such as multidimensional scaling. While existing distance-based approaches suffer from a lack of statistical rigor, model-based approaches entail assumptions of prior conditions such as that the subpopulations are at Hardy-Weinberg equilibria. Here we present a distance-based approach for inference about population structure using genetic data by defining population structure using network theory terminology and methods. A network is constructed from a pairwise genetic-similarity matrix of all sampled individuals. The community partition, a partition of a network to dense subgraphs, is equated with population structure, a partition of the population to genetically related groups. Community-detection algorithms are used to partition the network into communities, interpreted as a partition of the population to subpopulations. The statistical significance of the structure can be estimated by using permutation tests to evaluate the significance of the partition's modularity, a network theory measure indicating the quality of community partitions. To further characterize population structure, a new measure of the strength of association (SA) for an individual to its assigned community is presented. The strength of association distribution (SAD) of the communities is analyzed to provide additional population structure characteristics, such as the relative amount of gene flow experienced by the different subpopulations and identification of hybrid individuals. Human genetic data and simulations are used to demonstrate the applicability of the analyses. The approach presented here provides a novel, computationally efficient model-free method for inference about population structure that does not entail assumption of prior conditions. The method is implemented in the software NetStruct (available at https://giligreenbaum.wordpress.com/software/)."}
{"_id": "fb6281592efee6d356982137cf4579540881debf", "title": "A comprehensive review of firefly algorithms", "text": "The firefly algorithm has become an increasingly important tool of Swarm Intelligence that has been applied in almost all areas of optimization, as well as engineering practice. Many problems from various areas have been successfully solved using the firefly algorithm and its variants. In order to use the algorithm to solve diverse problems, the original firefly algorithm needs to be modified or hybridized. This paper carries out a comprehensive review of this living and evolving discipline of Swarm Intelligence, in order to show that the firefly algorithm could be applied to every problem arising in practice. On the other hand, it encourages new researchers and algorithm developers to use this simple and yet very efficient algorithm for problem solving. It often guarantees that the obtained results will meet the expectations. & 2013 Elsevier Ltd. All rights reserved."}
{"_id": "05eef019bac01e6520526510c2590cc1718f7fe6", "title": "Third-wave livestreaming: teens' long form selfie", "text": "Mobile livestreaming is now well into its third wave. From early systems such as Bambuser and Qik, to more popular apps Meerkat and Periscope, to today's integrated social streaming features in Facebook and Instagram, both technology and usage have changed dramatically. In this latest phase of livestreaming, cameras turn inward to focus on the streamer, instead of outwards on the surroundings. Teens are increasingly using these platforms to entertain friends, meet new people, and connect with others on shared interests. We studied teens' livestreaming behaviors and motivations on these new platforms through a survey completed by 2,247 American livestreamers and interviews with 20 teens, highlighting changing practices, teens' differences from the broader population, and implications for designing new livestreaming services."}
{"_id": "40519f6c6732791339f29cf676cb4c841613a3eb", "title": "State-space solutions to standard H2 and H\u221e control problems", "text": "Simple state-space formulas are presented for a controller solving a standard H\u221e-problem. The controller has the same state-dimension as the plant, its computation involves only two Riccati equations, and it has a separation structure reminiscent of classical LQG (i.e., H2) theory. This paper is also intended to be of tutorial value, so a standard H2-solution is developed in parallel."}
{"_id": "40bd61c220ec5d098ab6a10285249255c45421be", "title": "What Makes a Great Software Engineer?", "text": "Good software engineers are essential to the creation of good software. However, most of what we know about software-engineering expertise are vague stereotypes, such as 'excellent communicators' and 'great teammates'. The lack of specificity in our understanding hinders researchers from reasoning about them, employers from identifying them, and young engineers from becoming them. Our understanding also lacks breadth: what are all the distinguishing attributes of great engineers (technical expertise and beyond)? We took a first step in addressing these gaps by interviewing 59 experienced engineers across 13 divisions at Microsoft, uncovering 53 attributes of great engineers. We explain the attributes and examine how the most salient of these impact projects and teams. We discuss implications of this knowledge on research and the hiring and training of engineers."}
{"_id": "bc69383a7d46cbaf80b5b5ef902a3dccf23df696", "title": "TF.Learn: TensorFlow's High-level Module for Distributed Machine Learning", "text": "TF.Learn is a high-level Python module for distributed machine learning inside TensorFlow (Abadi et al., 2015). It provides an easy-to-use Scikit-learn (Pedregosa et al., 2011) style interface to simplify the process of creating, configuring, training, evaluating, and experimenting a machine learning model. TF.Learn integrates a wide range of state-ofart machine learning algorithms built on top of TensorFlow\u2019s low level APIs for small to large-scale supervised and unsupervised problems. This module focuses on bringing machine learning to non-specialists using a general-purpose high-level language as well as researchers who want to implement, benchmark, and compare their new methods in a structured environment. Emphasis is put on ease of use, performance, documentation, and API consistency."}
{"_id": "7e27a053c720d58d78ab32e236ee185a90c61b2a", "title": "Drunk person identification using thermal infrared images", "text": "Drunk person identification is carried out using thermal infrared images. The features used for this purpose are simply the pixel values on specific points on the face of the person. It is proved that for a drunk person, the corresponding cluster in the feature space moves far away from its original position for the sober person. The concept behind the proposed approach is based on the physiology-based face identification. For demonstration purposes, Fisher Linear Discriminant approach is used for space dimensionality reduction. The feature space is found to be of very low dimensionality."}
{"_id": "ac8db14cbc7ad0119d0130e88f98ccb3ec61780f", "title": "Big Data , Digital Media , and Computational Social Science : Possibilities and Perils", "text": "We live life in the network. We check our e-mails regularly, make mobile phone calls from almost any location ... make purchases with credit cards ... [and] maintain friendships through online social networks. ... These transactions leave digital traces that can be compiled into comprehensive pictures of both individual and group behavior, with the potential to transform our understanding of our lives, organizations, and societies."}
{"_id": "0817e3f73342ce3c6c5a5739878dd1aeea6adec7", "title": "A combinatorial strongly polynomial algorithm for minimizing submodular functions", "text": "This paper presents a combinatorial polynomial-time algorithm for minimizing submodular functions, answering an open question posed in 1981 by Gr\u00f6tschel, Lov\u00e1sz, and Schrijver. The algorithm employs a scaling scheme that uses a flow in the complete directed graph on the underlying set with each arc capacity equal to the scaled parameter. The resulting algorithm runs in time bounded by a polynomial in the size of the underlying set and the length of the largest absolute function value. The paper also presents a strongly polynomial version in which the number of steps is bounded by a polynomial in the size of the underlying set, independent of the function values."}
{"_id": "08c30bbfb9ff90884f9d1f873a1eeb6bb616e761", "title": "The combinatorial assignment problem: approximate competitive equilibrium from equal incomes", "text": "Impossibility theorems suggest that the only efficient and strategyproof mechanisms for the problem of combinatorial assignment - e.g., assigning schedules of courses to students - are dictatorships. Dictatorships are mostly rejected as unfair: for any two agents, one chooses all their objects before the other chooses any. Any solution will involve compromise amongst efficiency, incentive and fairness considerations.\n This paper proposes a solution to the combinatorial assignment problem. It is developed in four steps. First, I propose two new criteria of outcome fairness, the maximin share guarantee and envy bounded by a single good, which weaken well-known criteria to accommodate indivisibilities; the criteria formalize why dictatorships are unfair. Second, I prove existence of an approximation to Competitive Equilibrium from Equal Incomes in which (i) incomes are unequal but arbitrarily close together; (ii) the market clears with error, which approaches zero in the limit and is small for realistic problems. Third, I show that this Approximate CEEI satisfies the fairness criteria. Last, I define a mechanism based on Approximate CEEI that is strategyproof for the zero-measure agents economists traditionally regard as price takers. The proposed mechanism is calibrated on real data and is compared to alternatives from theory and practice: all other known mechanisms are either manipulable by zero-measure agents or unfair ex-post, and most are both manipulable and unfair."}
{"_id": "10bd0bab60e38e8052e167e3f7379ea0aeade2e4", "title": "On approximately fair allocations of indivisible goods", "text": "We study the problem of fairly allocating a set of indivisible goods to a set of people from an algorithmic perspective. fair division has been a central topic in the economic literature and several concepts of fairness have been suggested. The criterion that we focus on is envy-freeness. In our model, a monotone utility function is associated with every player specifying the value of each subset of the goods for the player. An allocation is envy-free if every player prefers her own share than the share of any other player. When the goods are divisible, envy-free allocations always exist. In the presence of indivisibilities, we show that there exist allocations in which the envy is bounded by the maximum marginal utility, and present a simple algorithm for computing such allocations. We then look at the optimization problem of finding an allocation with minimum possible envy. In the general case the problem is not solvable or approximable in polynomial time unless P = NP. We consider natural special cases (e.g.additive utilities) which are closely related to a class of job scheduling problems. Approximation algorithms as well as inapproximability results are obtained. Finally we investigate the problem of designing truthful mechanisms for producing allocations with bounded envy."}
{"_id": "2fd40688614f94dbecba7ef06dc37d41473328ed", "title": "Allocating indivisible goods", "text": "The problem of allocating divisible goods has enjoyed a lot of attention in both mathematics (e.g. the cake-cutting problem) and economics (e.g. market equilibria). On the other hand, the natural requirement of indivisible goods has been somewhat neglected, perhaps because of its more complicated nature. In this work we study a fairness criterion, called the Max-Min Fairness problem, for k players who want to allocate among themselves m indivisible goods. Each player has a specified valuation function on the subsets of the goods and the goal is to split the goods between the players so as to maximize the minimum valuation. Viewing the problem from a game theoretic perspective, we show that for two players and additive valuations the expected minimum of the (randomized) cut-and-choose mechanism is a 1/2-approximation of the optimum. To complement this result we show that no truthful mechanism can compute the exact optimum.We also consider the algorithmic perspective when the (true) additive valuation functions are part of the input. We present a simple 1/(m - k + 1) approximation algorithm which allocates to every player at least 1/k fraction of the value of all but the k - 1 heaviest items. We also give an algorithm with additive error against the fractional optimum bounded by the value of the largest item. The two approximation algorithms are incomparable in the sense that there exist instances when one outperforms the other."}
{"_id": "7d2c7748359f57c2b4227b31eca9e5f7a70a6b5c", "title": "A polynomial-time approximation scheme for maximizing the minimum machine completion time", "text": ""}
{"_id": "0d1fd04c0dec97bd0b1c4deeba21b8833f792651", "title": "Design Considerations and Performance Evaluation of Single-Stage TAIPEI Rectifier for HVDC Distribution Applications", "text": "Design considerations and performance evaluations of a three-phase, four-switch, single-stage, isolated zero-voltage-switching (ZVS) rectifier are presented. The circuit is obtained by integrating the three-phase, two-switch, ZVS, discontinuous-current-mode (DCM), boost power-factorcorrection (PFC) rectifier, named for short the TAIPEI rectifier, with the ZVS full-bridge (FB) phase-shift dc/dc converter. The performance was evaluated on a three-phase 2.7-kW prototype designed for HVDC distribution applications with the line-toline voltage range from 180 VRMS to 264 VRMS and with a tightly regulated variable dc output voltage from 200 V to 300 V. The prototype operates with ZVS over the entire input-voltage and load-current range and achieves less than 5% input-current THD with the efficiency in the 95% range."}
{"_id": "48451a29f8627b2c7fa1d27e6c7a8c61dc5fda7c", "title": "The induction of apoptosis and autophagy by Wasabia japonica extract in colon cancer.", "text": "PURPOSE\nWasabia japonica (wasabi) has been shown to exhibit properties of detoxification, anti-inflammation and the induction of apoptosis in cancer cells. This study aimed to investigate the molecular mechanism of the cytotoxicity of wasabi extract (WE) in colon cancer cells to evaluate the potential of wasabi as a functional food for chemoprevention.\n\n\nMETHODS\nColo 205 cells were treated with different doses of WE, and the cytotoxicity was analyzed by 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide. Apoptosis and autophagy were detected by 4',6-diamidino-2-phenylindole, 5,5',6,6'-tetrachloro-1,1',3,3'-tetraethyl-imidacarbo-yanine iodide and staining for acidic vascular organelles (AVOs), along with Western blotting.\n\n\nRESULTS\nThe results demonstrated that WE induced the extrinsic pathway and mitochondrial death machinery through the activation of TNF-\u03b1, Fas-L, caspases, truncated Bid and cytochrome C. WE also induced autophagy by decreasing the phosphorylation of Akt and mTOR and promoting the expression of microtubule-associated protein 1 light chain 3-II and AVO formation. An in vivo xenograft model verified that tumor growth was delayed by WE treatment.\n\n\nCONCLUSION\nOur studies revealed that WE exhibits anti-colon cancer properties through the induction of apoptosis and autophagy. These results provide support for the application of WE as a chemopreventive functional food and as a prospective treatment of colon cancer."}
{"_id": "304412a849e6ec260f15ad42d8205c43bcdea54f", "title": "Detecting Image Splicing using Geometry Invariants and Camera Characteristics Consistency", "text": "Recent advances in computer technology have made digital image tampering more and more common. In this paper, we propose an authentic vs. spliced image classification method making use of geometry invariants in a semi-automatic manner. For a given image, we identify suspicious splicing areas, compute the geometry invariants from the pixels within each region, and then estimate the camera response function (CRF) from these geometry invariants. The cross-fitting errors are fed into a statistical classifier. Experiments show a very promising accuracy, 87%, over a large data set of 363 natural and spliced images. To the best of our knowledge, this is the first work detecting image splicing by verifying camera characteristic consistency from a single-channel image"}
{"_id": "a2c1176eccf51cd3d30d46757d44fcf7f6fcb160", "title": "Internet Reflections on Teenagers", "text": "The Internet is a piece of today's way of life that numerous young people can't envision what the world resembled before the Internet existed. The Internet is fun, enlightening and an extraordinary wellspring of correspondence with others. It's an instructive device and clients can find out about just about anything. Sharing data through Internet is simple, shoddy and quick. Young people have admittance to billions of sites containing data as content, pictures and recordings [1-10]."}
{"_id": "9bd6bff7c3eae7ec6da8ed7aeb70491bcd40177e", "title": "Optimal Mass Transport for Registration and Warping", "text": "Image registration is the process of establishing a common geometric reference frame between two or more image data sets possibly taken at different times. In this paper we present a method for computing elastic registration and warping maps based on the Monge\u2013Kantorovich theory of optimal mass transport. This mass transport method has a number of important characteristics. First, it is parameter free. Moreover, it utilizes all of the grayscale data in both images, places the two images on equal footing and is symmetrical: the optimal mapping from image A to image B being the inverse of the optimal mapping from B to A. The method does not require that landmarks be specified, and the minimizer of the distance functional involved is unique; there are no other local minimizers. Finally, optimal transport naturally takes into account changes in density that result from changes in area or volume. Although the optimal transport method is certainly not appropriate for all registration and warping problems, this mass preservation property makes the Monge\u2013Kantorovich approach quite useful for an interesting class of warping problems, as we show in this paper. Our method for finding the registration mapping is based on a partial differential equation approach to the minimization of the L 2 Kantorovich\u2013Wasserstein or \u201cEarth Mover's Distance\u201d under a mass preservation constraint. We show how this approach leads to practical algorithms, and demonstrate our method with a number of examples, including those from the medical field. We also extend this method to take into account changes in intensity, and show that it is well suited for applications such as image morphing."}
{"_id": "7e8668156c88eb77f5d397522c02fe528db333c3", "title": "Formation and suppression of acoustic memories during human sleep", "text": "Sleep and memory are deeply related, but the nature of the neuroplastic processes induced by sleep remains unclear. Here, we report that memory traces can be both formed or suppressed during sleep, depending on sleep phase. We played samples of acoustic noise to sleeping human listeners. Repeated exposure to a novel noise during Rapid Eye Movements (REM) or light non-REM (NREM) sleep leads to improvements in behavioral performance upon awakening. Strikingly, the same exposure during deep NREM sleep leads to impaired performance upon awakening. Electroencephalographic markers of learning extracted during sleep confirm a dissociation between sleep facilitating memory formation (light NREM and REM sleep) and sleep suppressing learning (deep NREM sleep). We can trace these neural changes back to transient sleep events, such as spindles for memory facilitation and slow waves for suppression. Thus, highly selective memory processes are active during human sleep, with intertwined episodes of facilitative and suppressive plasticity. Though memory and sleep are related, it is still unclear whether new memories can be formed during sleep. Here, authors show that people could learn new sounds during REM or light non-REM sleep, but that learning was suppressed when sounds were played during deep NREM sleep."}
{"_id": "1045cc9599944e0088ccc6f6c13ee9f960ecffd7", "title": "Shape Segmentation by Approximate Convexity Analysis", "text": "We present a shape segmentation method for complete and incomplete shapes. The key idea is to directly optimize the decomposition based on a characterization of the expected geometry of a part in a shape. Rather than setting the number of parts in advance, we search for the smallest number of parts that admit the geometric characterization of the parts. The segmentation is based on an intermediate-level analysis, where first the shape is decomposed into approximate convex components, which are then merged into consistent parts based on a nonlocal geometric signature. Our method is designed to handle incomplete shapes, represented by point clouds. We show segmentation results on shapes acquired by a range scanner, and an analysis of the robustness of our method to missing regions. Moreover, our method yields results that are comparable to state-of-the-art techniques evaluated on complete shapes."}
{"_id": "29c386498f174f4d20a326f8c779840ba88d394f", "title": "A Survey Paper : Areas , Techniques and Challenges of Opinion Mining", "text": "Opinion Mining is a promising discipline, defined as an intersection of information retrieval and computational linguistic techniques to deal with the opinions expressed in a document. The field aims at solving the problems related to opinions about products, Politian in newsgroup posts, review sites, comments on Facebook posts and twitter etc. This paper is about to covers the techniques, applications, long and short future research areas, Research gaps and future challenges in opinion mining. Further, an attempt has been made to discuss in detail the use of supervised, unsupervised, machine learning and case based reasoning techniques in opinion mining to perform computational treatment of sentiments."}
{"_id": "b8358032fb3124dce867b42aabb5c532607e9d1e", "title": "A Study of Influencing Factors for Repurchase Intention in Internet Shopping Malls", "text": "This research studied the effect of 15 variables on the consumers' overall satisfaction and repurchase intention in Internet shopping malls. As the value of loyal customers is incomparably high in Electronic Commerce, winning customers\u2019 loyalty is vital to success of shopping malls. In this study, a customer is defined as one who has purchased goods or services at least once from Internet shopping malls. The research variables are demographic factors, product perceptions, customer service, perceived ease of use, site image, promotion, perceived consumer risk, personal characteristics and Internet communications environments. The outcome of the research is as follows: Perceived consumer risk shows a negative relationship with the repurchase intention, and all the other variables product perceptions, customer service, perceived ease of use, site image, promotion, communications environments are positively related with the repurchase intention. Also, the overall satisfaction level of customers for the Internet shopping malls positively influences repurchase intention."}
{"_id": "24dd60ec6fbb2a25f6dd0c1955b3b0e0b29c7d42", "title": "Adding Chinese Captions to Images", "text": "This paper extends research on automated image captioning in the dimension of language, studying how to generate Chinese sentence descriptions for unlabeled images. To evaluate image captioning in this novel context, we present Flickr8k-CN, a bilingual extension of the popular Flickr8k set. The new multimedia dataset can be used to quantitatively assess the performance of Chinese captioning and English-Chinese machine translation. The possibility of re-using existing English data and models via machine translation is investigated. Our study reveals to some extent that a computer can master two distinct languages, English and Chinese, at a similar level for describing the visual world. Data is publicly available at http://tinyurl.com/flickr8kcn"}
{"_id": "5417bd72d1b787ade0c485f1188189474c199f4d", "title": "MAGAN: Margin Adaptation for Generative Adversarial Networks", "text": "We propose a novel training procedure for Generative Adversarial Networks (GANs) to improve stability and performance by using an adaptive hinge loss objective function. We estimate the appropriate hinge loss margin with the expected energy of the target distribution, and derive both a principled criterion for updating the margin and an approximate convergence measure. The resulting training procedure is simple yet robust on a diverse set of datasets. We evaluate the proposed training procedure on the task of unsupervised image generation, noting both qualitative and quantitative performance improvements."}
{"_id": "96a3c95a069d20fda310c315692a2a96635f4384", "title": "Efficient and fast multi-modal foreground-background segmentation using RGBD data", "text": "This paper addresses the problem of foreground and background segmentation. Multi-modal data specifically RGBD data has gain many tasks in computer vision recently. However, techniques for background subtraction based only on single-modality still report state-of-the-art results in many benchmarks. Succeeding the fusion of depth and color data for this task requires a robust formalization allowing at the same time higher precision and faster processing. To this end, we propose to make use of kernel density estimation technique to adapt multi-modal data. To speed up kernel density estimation, we explore the fast Gauss transform which allows the summation of a mixture of M kernel at N evaluation points in O(M+N) time as opposed to O(MN) time for a direct evaluation. Extensive experiments have been carried out on four publicly available RGBD foreground/background datasets. Results demonstrate that our proposal outperforms state-of-the-art methods for almost all of the sequences acquired in challenging indoor and outdoor contexts with a fast and non-parametric operation. c \u00a9 2017 Elsevier Ltd. All rights reserved."}
{"_id": "2858c69e5d9432c8b9529e3d97017d3909eb0b9c", "title": "Semi-Supervised Ensemble Ranking", "text": "Ranking plays a central role in many Web search and information retrieval applications. Ensemble ranking, sometimes called meta-search, aims to improve the retrieval performance by combining the outputs from multiple ranking algorithms. Many ensemble ranking approaches employ supervised learning techniques to learn appropriate weights for combining multiple rankers. The main shortcoming with these approaches is that the learned weights for ranking algorithms are query independent. This is suboptimal since a ranking algorithm could perform well for certain queries but poorly for others. In this paper, we propose a novel semi-supervised ensemble ranking (SSER) algorithm that learns query-dependent weights when combining multiple rankers in document retrieval. The proposed SSER algorithm is formulated as an SVM-like quadratic program (QP), and therefore can be solved efficiently by taking advantage of optimization techniques that were widely used in existing SVM solvers. We evaluated the proposed technique on a standard document retrieval testbed and observed encouraging results by comparing to a number of state-of-the-art techniques."}
{"_id": "08e4982410ebaa6dbd203a953113214bc9740b80", "title": "Comparative evaluation of text- and citation-based plagiarism detection approaches using guttenplag", "text": "Various approaches for plagiarism detection exist. All are based on more or less sophisticated text analysis methods such as string matching, fingerprinting or style comparison. In this paper a new approach called Citation-based Plagiarism Detection is evaluated using a doctoral thesis, in which a volunteer crowd-sourcing project called GuttenPlag identified substantial amounts of plagiarism through careful manual inspection. This new approach is able to identify similar and plagiarized documents based on the citations used in the text. It is shown that citation-based plagiarism detection performs significantly better than text-based procedures in identifying strong paraphrasing, translation and some idea plagiarism. Detection rates can be improved by combining citation-based with text-based plagiarism detection."}
{"_id": "486b5cd2cdd15436c090431d0a442328dcb637f3", "title": "Routing, scheduling and channel assignment in Wireless Mesh Networks: Optimization models and algorithms", "text": "Wireless Mesh networks (WMNs) can partially replace the wired backbone of traditional wireless access networks and, similarly, they require to carefully plan radio resource assignment in order to provide the same quality guarantees to traffic flows. In this paper we study the radio resource assignment optimization problem in Wireless Mesh Networks assuming a time division multiple access (TDMA) scheme, a dynamic power control able to vary emitted power slot-by-slot, and a rate adaptation mechanism that sets transmission rates according to the signalto-interference-and-noise ratio (SINR). The proposed optimization framework includes routing, scheduling and channel assignment. Quality requirements of traffic demands are expressed in terms of minimum bandwidth and modeled with constraints defining the number of information units (packets) that must be delivered per frame. We consider an alternative problem formulation where decision variables represent compatible sets of links active in the same slot and channel, called configurations. We propose a two phases solution approach where a set of configurations is first selected to meet traffic requirements along the best available paths, and then configurations are assigned to channels according to device characteristics and constraints. The optimization goal is to minimize the number of used slots, which is directly related to the global resource allocation efficiency. We provide a lower bound of the optimal solution solving the continuous relaxation of problem formulation. Moreover, we propose a heuristic approach to determine practical integer solutions (upper bound). Since configuration variables are exponentially many, our solution approaches are based on the Column Generation technique. In order to assess the effectiveness of the proposed algorithms we show the numerical results obtained on a set of realistic-size randomly generated instances."}
{"_id": "2ae60aba40b0af8b83f65bdbf1b556f12b7672ce", "title": "A high performance 76.5 GHz FMCW RADAR for advanced driving assistance system", "text": "Frequency modulated Continuous Wave (FMCW) is continuous wave RADAR widely used for Intelligent Adaptive Cruise Control (ACC) and Collision Warning System (CWS). The paper presents a design and simulation of millimeter wave FMCW RADAR. Paper introduces a mathematical modeling of a FMCW RADAR including its target model. Design of a 76.5 GHz FMCW RADAR & its target model by using Advanced Design System (ADS) software and target range & velocity at various conditions are employed in the simulated FMCW RADAR system to verify the operation of the system."}
{"_id": "007ee2559d4a2a8c661f4f5182899f03736682a7", "title": "CANAuth - A Simple, Backward Compatible Broadcast Authentication Protocol for CAN bus", "text": "The Controller-Area Network (CAN) bus protocol [1] is a bus protocol invented in 1986 by Robert Bosch GmbH, originally intended for automotive use. By now, the bus can be found in devices ranging from cars and trucks, over lightning setups to industrial looms. Due to its nature, it is a system very much focused on safety, i.e., reliability. Unfortunately, there is no build-in way to enforce security, such as encryption or authentication. In this paper, we investigate the problems associated with implementing a backward compatible message authentication protocol on the CAN bus. We show which constraints such a protocol has to meet and why this eliminates, to the best of our knowledge, all the authentication protocols published so far. Furthermore, we present a message authentication protocol, CANAuth, that meets all of the requirements set forth and does not violate any constraint of the CAN bus. Keywords\u2014CAN bus, embedded networks, broadcast authentication, symmetric cryptography"}
{"_id": "8d19205039af3056b4b57eb6637462d12982e2a2", "title": "Finding social roles in Wikipedia", "text": "This paper investigates some of the social roles people play in the online community of Wikipedia. We start from qualitative comments posted on community oriented pages, wiki project memberships, and user talk pages in order to identify a sample of editors who represent four key roles: substantive experts, technical editors, vandal fighters, and social networkers. Patterns in edit histories and egocentric network visualizations suggest potential \"structural signatures\" that could be used as quantitative indicators of role adoption. Using simple metrics based on edit histories we compare two samples of Wikipedians: a collection of long term dedicated editors, and a cohort of editors from a one month window of new arrivals. According to these metrics, we find that the proportions of editor types in the new cohort are similar those observed in the sample of dedicated contributors. The number of new editors playing helpful roles in a single month's cohort nearly equal the number found in the dedicated sample. This suggests that informal socialization has the potential provide sufficient role related labor despite growth and change in Wikipedia. These results are preliminary, and we describe several ways that the method can be improved, including the expansion and refinement of role signatures and identification of other important social roles."}
{"_id": "8dc4025cfa05e6aab60ce578f9c9b55e356aeb79", "title": "Active Magnetic Anomaly Detection Using Multiple Micro Aerial Vehicles", "text": "Magnetic anomaly detection (MAD) is an important problem in applications ranging from geological surveillance to military reconnaissance. MAD sensors detect local disturbances in the magnetic field, which can be used to detect the existence of and to estimate the position of buried, hidden, or submerged objects, such as ore deposits or mines. These sensors may experience false positive and false negative detections and, without prior knowledge of the targets, can only determine proximity to a target. The uncertainty in the sensors, coupled with a lack of knowledge of even the existence of targets, makes the estimation and control problems challenging. We utilize a hierarchical decomposition of the environment, coupled with an estimation algorithm based on random finite sets, to determine the number of and the locations of targets in the environment. The small team of robots follow the gradient of mutual information between the estimated set of targets and the future measurements, locally maximizing the rate of information gain. We present experimental results of a team of quadrotor micro aerial vehicles discovering and localizing an unknown number of permanent magnets."}
{"_id": "c6654cd8a1ce6562330fad4e42ac5a5e86a52c98", "title": "Demand Side Management in Smart Grid Using Heuristic Optimization", "text": "Demand side management (DSM) is one of the important functions in a smart grid that allows customers to make informed decisions regarding their energy consumption, and helps the energy providers reduce the peak load demand and reshape the load profile. This results in increased sustainability of the smart grid, as well as reduced overall operational cost and carbon emission levels. Most of the existing demand side management strategies used in traditional energy management systems employ system specific techniques and algorithms. In addition, the existing strategies handle only a limited number of controllable loads of limited types. This paper presents a demand side management strategy based on load shifting technique for demand side management of future smart grids with a large number of devices of several types. The day-ahead load shifting technique proposed in this paper is mathematically formulated as a minimization problem. A heuristic-based Evolutionary Algorithm (EA) that easily adapts heuristics in the problem was developed for solving this minimization problem. Simulations were carried out on a smart grid which contains a variety of loads in three service areas, one with residential customers, another with commercial customers, and the third one with industrial customers. The simulation results show that the proposed demand side management strategy achieves substantial savings, while reducing the peak load demand of the smart grid."}
{"_id": "a481ad59a970cd605265b4397de1bc1974d3a48e", "title": "Cookie Hijacking in the Wild : Security and Privacy Implications", "text": "The widespread demand for online privacy, also fueled by widely-publicized demonstrations of session hijacking attacks against popular websites, has spearheaded the increasing deployment of HTTPS. However, many websites still avoid ubiquitous encryption due to performance or compatibility issues. The prevailing approach in these cases is to force critical functionality and sensitive data access over encrypted connections, while allowing more innocuous functionality to be accessed over HTTP. In practice, this approach is prone to flaws that can expose sensitive information or functionality to third parties. In this work, we conduct an in-depth assessment of a diverse set of major websites and explore what functionality and information is exposed to attackers that have hijacked a user\u2019s HTTP cookies. We identify a recurring pattern across websites with partially deployed HTTPS; service personalization inadvertently results in the exposure of private information. The separation of functionality across multiple cookies with different scopes and inter-dependencies further complicates matters, as imprecise access control renders restricted account functionality accessible to non-session cookies. Our cookie hijacking study reveals a number of severe flaws; attackers can obtain the user\u2019s home and work address and visited websites from Google, Bing and Baidu expose the user\u2019s complete search history, and Yahoo allows attackers to extract the contact list and send emails from the user\u2019s account. Furthermore, e-commerce vendors such as Amazon and Ebay expose the user\u2019s purchase history (partial and full respectively), and almost every website exposes the user\u2019s name and email address. Ad networks like Doubleclick can also reveal pages the user has visited. To fully evaluate the practicality and extent of cookie hijacking, we explore multiple aspects of the online ecosystem, including mobile apps, browser security mechanisms, extensions and search bars. To estimate the extent of the threat, we run IRB-approved measurements on a subset of our university\u2019s public wireless network for 30 days, and detect over 282K accounts exposing the cookies required for our hijacking attacks. We also explore how users can protect themselves and find that, while mechanisms such as the EFF\u2019s HTTPS Everywhere extension can reduce the attack surface, HTTP cookies are still regularly exposed. The privacy implications of these attacks become even more alarming when considering how they can be used to deanonymize Tor users. Our measurements suggest that a significant portion of Tor users may currently be vulnerable to cookie hijacking."}
{"_id": "f1c03f90c35d9c66baf0b14e7852890dedeed794", "title": "Learning Profiles in Duplicate Question Detection", "text": "This paper presents the results of systematic and comparative experimentation with major types of methodologies for automatic duplicate question detection when these are applied to datasets of progressively larger sizes, thus allowing to study the learning profiles of this task under these different approaches and evaluate their merits. This study was made possible by resorting to the recent release for research purposes, by the Quora online question answering engine, of a new dataset with over 400,000 pairs labeled with respect to their elements being duplicate interrogative segments."}
{"_id": "af4ab6727929ca80227e5edbbe7135f09eedc5f0", "title": "Neurobiology of Schizophrenia", "text": "With its hallucinations, delusions, thought disorder, and cognitive deficits, schizophrenia affects the most basic human processes of perception, emotion, and judgment. Evidence increasingly suggests that schizophrenia is a subtle disorder of brain development and plasticity. Genetic studies are beginning to identify proteins of candidate genetic risk factors for schizophrenia, including dysbindin, neuregulin 1, DAOA, COMT, and DISC1, and neurobiological studies of the normal and variant forms of these genes are now well justified. We suggest that DISC1 may offer especially valuable insights. Mechanistic studies of the properties of these candidate genes and their protein products should clarify the molecular, cellular, and systems-level pathogenesis of schizophrenia. This can help redefine the schizophrenia phenotype and shed light on the relationship between schizophrenia and other major psychiatric disorders. Understanding these basic pathologic processes may yield novel targets for the development of more effective treatments."}
{"_id": "061f07269d1bbeffd14509043dd5bee6f425e494", "title": "The design and implementation of microdrivers", "text": "Device drivers commonly execute in the kernel to achieve high performance and easy access to kernel services. However, this comes at the price of decreased reliability and increased programming difficulty. Driver programmers are unable to use user-mode development tools and must instead use cumbersome kernel tools. Faults in kernel drivers can cause the entire operating system to crash. User-mode drivers have long been seen as a solution to this problem, but suffer from either poor performance or new interfaces that require a rewrite of existing drivers.\n This paper introduces the Microdrivers architecture that achieves high performance and compatibility by leaving critical path code in the kernel and moving the rest of the driver code to a user-mode process. This allows data-handling operations critical to I/O performance to run at full speed, while management operations such as initialization and configuration run at reduced speed in user-level. To achieve compatibility, we present DriverSlicer, a tool that splits existing kernel drivers into a kernel-level component and a user-level component using a small number of programmer annotations. Experiments show that as much as 65% of driver code can be removed from the kernel without affecting common-case performance, and that only 1-6 percent of the code requires annotations."}
{"_id": "129359a872783b7c3a82c2c9dbef75df2956d2d3", "title": "XFI: Software Guards for System Address Spaces", "text": "XFI is a comprehensive protection system that offers both flexible access control and fundamental integrity guarantees, at any privilege level and even for legacy code in commodity systems. For this purpose, XFI combines static analysis with inline software guards and a two-stack execution model. We have implemented XFI for Windows on the x86 architecture using binary rewriting and a simple, stand-alone verifier; the implementation's correctness depends on the verifier, but not on the rewriter. We have applied XFI to software such as device drivers and multimedia codecs. The resulting modules function safely within both kernel and user-mode address spaces, with only modest enforcement overheads."}
{"_id": "287d5bd4a085eac093591ce72c07f06b3c64acec", "title": "CuriOS: Improving Reliability through Operating System Structure", "text": "An error that occurs in a microkernel operating system service can potentially result in state corruption and service failure. A simple restart of the failed service is not always the best solution for reliability. Blindly restarting a service which maintains client-related state such as session information results in the loss of this state and affects all clients that were using the service. CuriOS represents a novel OS design that uses lightweight distribution, isolation and persistence of OS service state to mitigate the problem of state loss during a restart. The design also significantly reduces error propagation within client-related state maintained by an OS service. This is achieved by encapsulating services in separate protection domains and granting access to client-related state only when required for request processing. Fault injection experiments show that it is possible to recover from between 87% and 100% of manifested errors in OS services such as the file system, network, timer and scheduler while maintaining low performance overheads."}
{"_id": "585706dc56e146c8fb42228fc5cbe1de0bb0a69d", "title": "CIL: Intermediate Language and Tools for Analysis and Transformation of C Programs", "text": "This paper describes the C Intermediate Language: a highlevel representation along with a set of tools that permit easy analysis and source-to-source transformation of C programs. Compared to C, CIL has fewer constructs. It breaks down certain complicated constructs of C into simpler ones, and thus it works at a lower level than abstract-syntax trees. But CIL is also more high-level than typical intermediate languages (e.g., three-address code) designed for compilation. As a result, what we have is a representation that makes it easy to analyze and manipulate C programs, and emit them in a form that resembles the original source. Moreover, it comes with a front-end that translates to CIL not only ANSI C programs but also those using Microsoft C or GNU C extensions. We describe the structure of CIL with a focus on how it disambiguates those features of C that we found to be most confusing for program analysis and transformation. We also describe a whole-program merger based on structural type equality, allowing a complete project to be viewed as a single compilation unit. As a representative application of CIL, we show a transformation aimed at making code immune to stack-smashing attacks. We are currently using CIL as part of a system that analyzes and instruments C programs with run-time checks to ensure type safety. CIL has served us very well in this project, and we believe it can usefully be applied in other situations as well."}
{"_id": "9090142233801801411a28b30c653aae5408182a", "title": "Thorough static analysis of device drivers", "text": "Bugs in kernel-level device drivers cause 85% of the system crashes in the Windows XP operating system [44]. One of the sources of these errors is the complexity of the Windows driver API itself: programmers must master a complex set of rules about how to use the driver API in order to create drivers that are good clients of the kernel. We have built a static analysis engine that finds API usage errors in C programs. The Static Driver Verifier tool (SDV) uses this engine to find kernel API usage errors in a driver. SDV includes models of the OS and the environment of the device driver, and over sixty API usage rules. SDV is intended to be used by driver developers \"out of the box.\" Thus, it has stringent requirements: (1) complete automation with no input from the user; (2) a low rate of false errors. We discuss the techniques used in SDV to meet these requirements, and empirical results from running SDV on over one hundred Windows device drivers."}
{"_id": "0d441ab58a1027cb64084ad065cfea5e15b8e74c", "title": "Why We Need New Evaluation Metrics for NLG", "text": "The majority of NLG evaluation relies on automatic metrics, such as BLEU. In this paper, we motivate the need for novel, systemand data-independent automatic evaluation methods: We investigate a wide range of metrics, including state-of-the-art word-based and novel grammar-based ones, and demonstrate that they only weakly reflect human judgements of system outputs as generated by data-driven, end-to-end NLG. We also show that metric performance is dataand system-specific. Nevertheless, our results also suggest that automatic metrics perform reliably at system-level and can support system development by finding cases where a system performs poorly."}
{"_id": "045b415f039ca547e0720f6bbc7d80256c725681", "title": "Living in two homes-a Swedish national survey of wellbeing in 12 and 15\u00a0year olds with joint physical custody", "text": "BACKGROUND\nThe practice of joint physical custody, where children spend equal time in each parent's home after they separate, is increasing in many countries. It is particularly common in Sweden, where this custody arrangement applies to 30 per cent of children with separated parents. The aim of this study was to examine children's health-related quality of life after parental separation, by comparing children living with both parents in nuclear families to those living in joint physical custody and other forms of domestic arrangements.\n\n\nMETHODS\nData from a national Swedish classroom study of 164,580 children aged 12 and 15-years-old were analysed by two-level linear regression modelling. Z-scores were used to equalise scales for ten dimensions of wellbeing from the KIDSCREEN-52 and the KIDSCREEN-10 Index and analysed for children in joint physical custody in comparison with children living in nuclear families and mostly or only with one parent.\n\n\nRESULTS\nLiving in a nuclear family was positively associated with almost all aspects of wellbeing in comparison to children with separated parents. Children in joint physical custody experienced more positive outcomes, in terms of subjective wellbeing, family life and peer relations, than children living mostly or only with one parent. For the 12-year-olds, beta coefficients for moods and emotions ranged from -0.20 to -0.33 and peer relations from -0.11 to -0.20 for children in joint physical custody and living mostly or only with one parent. The corresponding estimates for the 15-year-olds varied from -0.08 to -0.28 and from -0.03 to -0.13 on these subscales. The 15-year-olds in joint physical custody were more likely than the 12-year-olds to report similar wellbeing levels on most outcomes to the children in nuclear families.\n\n\nCONCLUSIONS\nChildren who spent equal time living with both parents after a separation reported better wellbeing than children in predominantly single parent care. This was particularly true for the 15-year-olds, while the reported wellbeing of 12-years-olds was less satisfactory. There is a need for further studies that can account for the pre and post separation context of individual families and the wellbeing of younger age groups in joint physical custody."}
{"_id": "2033551a919723b8b70fadfed2178909740a19ff", "title": "Reaching Agreement in the Presence of Faults", "text": "The problem addressed here concerns a set of isolated processors, some unknown subset of which may be faulty, that communicate only by means of two-party messages. Each nonfaulty processor has a private value of information that must be communicated to each other nonfaulty processor. Nonfaulty processors always communicate honestly, whereas faulty processors may lie. The problem is to devise an algorithm in which processors communicate their own values and relay values received from others that allows each nonfaulty processor to infer a value for each other processor. The value inferred for a nonfaulty processor must be that processor's private value, and the value inferred for a faulty one must be consistent with the corresponding value inferred by each other nonfaulty processor.\nIt is shown that the problem is solvable for, and only for, n \u2265 3m + 1, where m is the number of faulty processors and n is the total number. It is also shown that if faulty processors can refuse to pass on information but cannot falsely relay information, the problem is solvable for arbitrary n \u2265 m \u2265 0. This weaker assumption can be approximated in practice using cryptographic methods."}
{"_id": "72a44aa53a3fe7d3a0edfc66b8e6fb0de2728cb6", "title": "User privacy risk calculator", "text": "User privacy is becoming an issue on the Internet due to common data breaches and various security threats. Services tend to require private user data in order to provide more personalized content and users are typically unaware of potential risks to their privacy. This paper proposes a risk calculator based on a feedforward neural network that will provide users with an ability to calculate risks to their privacy. The proposed calculator is evaluated on a set of real world example scenarios. Furthermore, to give more insight into privacy issues, each estimated risk is explained by several real life scenarios that might happen if the observed parameters are obtained by an attacker. In turn, this should raise user awareness and knowledge about privacy issues on the Internet."}
{"_id": "ed8e145b1c1da64c4d30af097148e9798e712985", "title": "VALUE , RARENESS , COMPETITIVE ADVANTAGE , AND PERFORMANCE : A CONCEPTUAL-LEVEL EMPIRICAL INVESTIGATION OF THE RESOURCE-BASED VIEW OF THE FIRM", "text": "The resource-based view of the firm (RBV) hypothesizes that the exploitation of valuable, rare resources and capabilities contributes to a firm\u2019s competitive advantage, which in turn contributes to its performance. Despite this notion, few empirical studies test these hypotheses at the conceptual level. In response to this gap, this study empirically examines the relationships between value, rareness, competitive advantage, and performance. The results suggest that value and rareness are related to competitive advantage, that competitive advantage is related to performance, and that competitive advantage mediates the rareness-performance relationship. These findings have important academic and practitioner implications which are then discussed. Copyright \uf6d9 2008 John Wiley & Sons, Ltd."}
{"_id": "fecf0793c768fcb6e1d45fe52e2bfe43731ffa99", "title": "A Configurable 12\u2013237 kS/s 12.8 mW Sparse-Approximation Engine for Mobile Data Aggregation of Compressively Sampled Physiological Signals", "text": "Compressive sensing (CS) is a promising technology for realizing low-power and cost-effective wireless sensor nodes (WSNs) in pervasive health systems for 24/7 health monitoring. Due to the high computational complexity (CC) of the reconstruction algorithms, software solutions cannot fulfill the energy efficiency needs for real-time processing. In this paper, we present a 12-237 kS/s 12.8 mW sparse-approximation (SA) engine chip that enables the energy-efficient data aggregation of compressively sampled physiological signals on mobile platforms. The SA engine chip integrated in 40 nm CMOS can support the simultaneous reconstruction of over 200 channels of physiological signals while consuming (1% of a smartphone's power budget. Such energyefficient reconstruction enables two-to-three times energy saving at the sensor nodes in a CS-based health monitoring system as compared to traditional Nyquist-based systems, while providing timely feedback and bringing signal intelligence closer to the user."}
{"_id": "8149e52a8c549aa9e1f4a93fe315263849bc92c1", "title": "Dragonfly wing nodus: A one-way hinge contributing to the asymmetric wing deformation.", "text": "Dragonfly wings are highly specialized locomotor systems, which are formed by a combination of several structural components. The wing components, also known as structural elements, are responsible for the various aspects of the wing functionality. Considering the complex interactions between the wing components, modelling of the wings as a whole is only possible with inevitable huge oversimplifications. In order to overcome this difficulty, we have recently proposed a new approach to model individual components of complex wings comparatively. Here, we use this approach to study nodus, a structural element of dragonfly wings which has been less studied to date. Using a combination of several imaging techniques including scanning electron microscopy (SEM), wide-field fluorescence microscopy (WFM), confocal laser scanning microscopy (CLSM) and micro-computed tomography (micro-CT) scanning, we aim to characterize the spatial morphology and material composition of fore- and hindwing nodi of the dragonfly Brachythemis contaminata. The microscopy results show the presence of resilin in the nodi, which is expected to help the deformability of the wings. The computational results based on three-dimensional (3D) structural data suggest that the specific geometry of the nodus restrains its displacements when subjected to pressure on the ventral side. This effect, resulting from an interlocking mechanism, is expected to contribute to the dorso-ventral asymmetry of wing deformation and to provide a higher resistance to aerodynamic forces during the downstroke. Our results provide an important step towards better understanding of the structure-property-function relationship in dragonfly wings.\n\n\nSTATEMENT OF SIGNIFICANCE\nIn this study, we investigate the wing nodus, a specialized wing component in dragonflies. Using a combination of modern imaging techniques, we demonstrate the presence of resilin in the nodus, which is expected to facilitate the wing deformability in flight. The specific geometry of the nodus, however, seems to restrain its displacements when subjected to pressure on the ventral side. This effect, resulting from an interlocking mechanism, is suggested to contribute to dorso-ventral asymmetry of wing deformations and to provide a higher resistance to aerodynamic forces during the downstroke. Our results provide an important step towards better understanding of the structure-property-function relationship in dragonfly wings and might help to design more efficient wings for biomimetic micro-air vehicles."}
{"_id": "8c878f4e0e17be1a1ee840fa3479304f60f175fc", "title": "Single-stage 6.78 MHz power-amplifier design using high-voltage GaN power ICs for wireless charging applications", "text": "Traditional power amplifiers for wireless power transfer require a low voltage dc power supply generated by a power adapter. This multi-stage power conversion solution reduces the overall efficiency. A single-stage \u201cAC-RF\u201d power amplifier architecture using a phase-modulated full-bridge topology and high voltage GaN Power ICs is proposed to directly generate 6.78 MHz wireless power from a rectified ac source. A 50W power amplifier is built and achieves 90% ac-to-coil efficiency, which reduces overall power loss by half, comparing to existing multi-stage solutions. The operating principle of a phase-shifted full bridge power amplifier is discussed, and analytical equations are provided for the passive filter network design. A coupled ZVS-tank scheme is also proposed to improve efficiency."}
{"_id": "3653be0cf0eafea9a43388e1be93e5520f0ad898", "title": "Batch Mode Active Learning for Regression With Expected Model Change", "text": "While active learning (AL) has been widely studied for classification problems, limited efforts have been done on AL for regression. In this paper, we introduce a new AL framework for regression, expected model change maximization (EMCM), which aims at choosing the unlabeled data instances that result in the maximum change of the current model once labeled. The model change is quantified as the difference between the current model parameters and the updated parameters after the inclusion of the newly selected examples. In light of the stochastic gradient descent learning rule, we approximate the change as the gradient of the loss function with respect to each single candidate instance. Under the EMCM framework, we propose novel AL algorithms for the linear and nonlinear regression models. In addition, by simulating the behavior of the sequential AL policy when applied for $k$ iterations, we further extend the algorithms to batch mode AL to simultaneously choose a set of $k$ most informative instances at each query time. Extensive experimental results on both UCI and StatLib benchmark data sets have demonstrated that the proposed algorithms are highly effective and efficient."}
{"_id": "ad5b3af1764a553a16d81d5dcb53a0432405e32b", "title": "End-to-End Sound Source Separation Conditioned On Instrument Labels", "text": "Can we perform an end-to-end sound source separation (SSS) with a variable number of sources using a deep learning model? This paper presents an extension of the WaveU-Net [1] model which allows end-to-end monaural source separation with a non-fixed number of sources. Furthermore, we propose multiplicative conditioning with instrument labels at the bottleneck of the Wave-U-Net and show its effect on the separation results. This approach can be further extended to other types of conditioning such as audio-visual SSS and score-informed SSS."}
{"_id": "9b2ac79115d61001ddaee57597403e845c1a29c4", "title": "Applying Genre-Based Ontologies to Enterprise Architecture", "text": "This paper elaborates the approach of using ontologies as a conceptual base for enterprise architecture (EA) descriptions. The method focuses on recognising and modelling business critical information concepts, their content, and semantics used to operate the business. Communication genres and open and semi-structured information need interviews are used as a domain analysis method. Ontologies aim to explicate the results of domain analysis and to provide a common reference model for Business Information Architecture (BIA) descriptions. The results are generalised to model further aspects of EA."}
{"_id": "d1c3a97700e42ad1f5aafe6bb22725b401558df0", "title": "Evaluation of LoRa receiver performance under co-technology interference", "text": "LoRa networks achieve a remarkably wide coverage as compared to that of the conventional wireless systems by employing the chirp spread spectrum (CSS). However, single-hop LoRa networks have limited penetration in indoor environments and cannot efficiently utilize the multiple-access dimensions arising from different spreading factors (SF) because the star topology is typically used for LoRa. On the other hand, a multi-hop LoRa network has the potential to realize extensive network coverage by utilizing packet relaying and improving the network capacity by utilizing different SFs. We evaluated the LoRa receiver performance under co-technology interference via simulation and experiments to realize these potentials. The results show that LoRa can survive interference from time-synchronized packets transmitted by multiple transmitters with an identical SF. Furthermore, we examined the orthogonality between different SFs by evaluating the required signal-to-interference ratio (SIR). Finally, we demonstrated the possibility of employing different SFs to construct the pipeline in a multi-hop relay network to increase network efficiency."}
{"_id": "444b29b93d3e9828a69a60f6c1540d8130ae325d", "title": "A decision model for evaluating third-party logistics providers using fuzzy analytic hierarchy process", "text": "As a consequence of an increasing trend toward the outsourcing of logistics activities, shippers have been faced with the inevitability of selecting an appropriate third-party logistics (3PL) provider. The selection process for the identification of a 3PL provider that best fits user requirements involves multiple criteria and alternatives and may be one of the most complex decisions facing logistics users. In this regard, this study proposes an evaluation framework and methodology for selecting a suitable 3PL provider and illustrates the process of evaluation and selection through a case study. It is expected that the results of this study will provide a practical reference for logistics managers who want to engage the best 3PL provider. Future research using different datasets is warranted to verify the generalizability of the findings."}
{"_id": "fc26b3804f27cd2e2d79f3f5db95ecf5c4370792", "title": "A Short-Term Rainfall Prediction Model Using Multi-task Convolutional Neural Networks", "text": "Precipitation prediction, such as short-term rainfall prediction, is a very important problem in the field of meteorological service. In practice, most of recent studies focus on leveraging radar data or satellite images to make predictions. However, there is another scenario where a set of weather features are collected by various sensors at multiple observation sites. The observations of a site are sometimes incomplete but provide important clues for weather prediction at nearby sites, which are not fully exploited in existing work yet. To solve this problem, we propose a multi-task convolutional neural network model to automatically extract features from the time series measured at observation sites and leverage the correlation between the multiple sites for weather prediction via multi-tasking. To the best of our knowledge, this is the first attempt to use multi-task learning and deep learning techniques to predict short-term rainfall amount based on multi-site features. Specifically, we formulate the learning task as an end-to-end multi-site neural network model which allows to leverage the learned knowledge from one site to other correlated sites, and model the correlations between different sites. Extensive experiments show that the learned site correlations are insightful and the proposed model significantly outperforms a broad set of baseline models including the European Centre for Medium-range Weather Forecasts system (ECMWF)."}
{"_id": "f05af9088a50918154628b54e5467ca5b365670d", "title": "Consumer Brand Engagement in Social Media : Conceptualization , Scale Development & Validation", "text": "Consumer Brand Engagement in Social Media: Conceptualization, Scale Development & Validation Abstract In the last three decades an influential research stream has emerged, which highlights the dynamics of focal consumer/brand relationships. Specifically, more recently the \u2018consumer brand engagement\u2019 (CBE) concept has been postulated to more comprehensively reflect the nature of consumers\u2019 particular interactive brand relationships, relative to traditional concepts, including \u2018involvement.\u2019 However, despite the growing scholarly interest regarding the undertaking of marketing research addressing \u2018engagement,\u2019 studies have been predominantly exploratory in nature; thus generating a lack of empirical research in this area to date. By developing and validating a CBE scale in specific social media settings, we address this identified literature gap. Specifically, we conceptualize CBE as a consumer\u2019s positively valenced brand-related cognitive, emotional and behavioral activity during or related to focal consumer/brand interactions. We derive three CBE dimensions, including cognitive processing, affection, and activation. Within three different social media contexts, we employ exploratory and confirmatory factor analyses to develop a reliable, 10-item CBE scale, which we proceed to validate within a nomological net of conceptual relationships and a rival model. The findings suggest that while consumer brand \u2018involvement\u2019 acts as a CBE antecedent, consumer \u2018self-brand connection\u2019 and \u2018brand usage intent\u2019 represent key CBE consequences; thus providing a platform for further research in this emerging area. We conclude with an overview of key managerial and scholarly implications arising from this research."}
{"_id": "523eea06d191218488f11b52bed5246e5cdb3f31", "title": "The efficiency frontier approach to economic evaluation of health-care interventions.", "text": "BACKGROUND\nIQWiG commissioned an international panel of experts to develop methods for the assessment of the relation of benefits to costs in the German statutory health-care system.\n\n\nPROPOSED METHODS\nThe panel recommended that IQWiG inform German decision makers of the net costs and value of additional benefits of an intervention in the context of relevant other interventions in that indication. To facilitate guidance regarding maximum reimbursement, this information is presented in an efficiency plot with costs on the horizontal axis and value of benefits on the vertical. The efficiency frontier links the interventions that are not dominated and provides guidance. A technology that places on the frontier or to the left is reasonably efficient, while one falling to the right requires further justification for reimbursement at that price. This information does not automatically give the maximum reimbursement, as other considerations may be relevant. Given that the estimates are for a specific indication, they do not address priority setting across the health-care system.\n\n\nCONCLUSION\nThis approach informs decision makers about efficiency of interventions, conforms to the mandate and is consistent with basic economic principles. Empirical testing of its feasibility and usefulness is required."}
{"_id": "abd2a82e0b2a41e5db311e66b53f8e8f947c9710", "title": "Lane detection and tracking system based on the MSER algorithm, hough transform and kalman filter", "text": "We present a novel lane detection and tracking system using a fusion of Maximally Stable Extremal Regions (MSER) and Progressive Probabilistic Hough Transform (PPHT). First, MSER is applied to obtain a set of blobs including noisy pixels (e.g., trees, cars and traffic signs) and the candidate lane markings. A scanning refinement algorithm is then introduced to enhance the results of MSER and filter out noisy data. After that, to achieve the requirements of real-time systems, the PPHT is applied. Compared to Hough transform which returns the parameters \u03c1 and \u0398, PPHT returns two end-points of the detected line markings. To track lane markings, two kalman trackers are used to track both end-points. Several experiments are conducted in Ottawa roads to test the performance of our framework. The detection rate of the proposed system averages 92.7% and exceeds 84.9% in poor conditions."}
{"_id": "1034bfdcdfd1ae0941bbee4660255a2130f56bd2", "title": "IT capability and organizational performance: the roles of business process agility and environmental factors", "text": "School of Business Administration, Southwestern University of Finance and Economics, Sichuan, P. R. of China; Business School, Shantou University, Guangdong, P. R. of China; Information Technology Management, School of Business, University at Albany, Albany, NY, U.S.A; Department of Management, Hong Kong University of Science and Technology, Hong Kong, P.R. China; Department of Finance and Decision Sciences, School of Business, Hong Kong Baptist University, Hong Kong, P.R. China"}
{"_id": "a1126609dd3cceb494a3e2aba6adc7509c94479c", "title": "Maximum Performance Computing with Dataflow Engines", "text": "Multidisciplinary dataflow computing is a powerful approach to scientific computing that has led to orders-of-magnitude performance improvements for a wide range of applications."}
{"_id": "1499fe40fdf50f1e85a2757b82b4538b5d2b2f9b", "title": "Crowdfunding : tapping the right crowd", "text": "The basic idea of crowdfunding is to raise external finance from a large audience (the \u201ccrowd\u201d), where each individual provides a very small amount, instead of soliciting a small group of sophisticated investors. The paper develops a model that associates crowdfunding with pre-ordering and price discrimination, and studies the conditions under which crowdfunding is preferred to traditional forms of external funding. Compared to traditional funding, crowdfunding has the advantage of offering an enhanced experience to some consumers and, thereby, of allowing the entrepreneur to practice menu pricing and extract a larger share of the consumer surplus; the disadvantage is that the entrepreneur is constrained in his/her choice of prices by the amount of capital that he/she needs to raise: the larger this amount, the more prices have to be twisted so as to attract a large number of \u201ccrowdfunders\u201d who pre-order, and the less profitable the menu pricing scheme."}
{"_id": "411476007a7673c87b497e61848d0962fdb03d07", "title": "Opportunistic Flooding in Low-Duty-Cycle Wireless Sensor Networks with Unreliable Links", "text": "Intended for network-wide dissemination of commands, configurations and code binaries, flooding has been investigated extensively in wireless networks. However, little work has yet been done on low-duty-cycle wireless sensor networks in which nodes stay asleep most of time and wake up asynchronously. In this type of network, a broadcasting packet is rarely received by multiple nodes simultaneously, a unique constraining feature that makes existing solutions unsuitable. Combined with unreliable links, flooding in low-duty-cycle networks is a new challenging issue.\n In this paper, we introduce Opportunistic Flooding, a novel design tailored for low-duty-cycle networks with unreliable wireless links and predetermined working schedules. The key idea is to make probabilistic forwarding decisions at a sender based on the delay distribution of next-hop nodes. Only opportunistically early packets are forwarded using links outside the energy optimal tree to reduce the flooding delay and redundancy in transmission. To improve performance further, we propose a forwarder selection method to alleviate the hidden terminal problem and a link-quality-based backoff method to resolve simultaneous forwarding operations. We evaluate Opportunistic Flooding with extensive simulation and a test-bed implementation consisting of 30 MicaZ nodes. Evaluation shows our design is close to the optimal performance achievable by oracle flooding designs. Compared with improved traditional flooding, our design achieves significantly shorter flooding delay while consuming only 20% ~ 60% of the transmission energy in various low-duty-cycle network settings."}
{"_id": "9471f2f4b13886516976d4ddc1bee223312dfe77", "title": "How can machine learning help stock investment ?", "text": "The million-dollar question for stock investors is if the price of a stock will rise or not. The fluctuation of stock market is violent and there are many complicated financial indicators. Only people with extensive experience and knowledge can understand the meaning of the indicators, use them to make good prediction to get fortune. Most of other people can only rely on lucky to earn money from stock trading. Machine learning is an opportunity for ordinary people to gain steady fortune from stock market and also can help experts to dig out the most informative indicators and make better prediction."}
{"_id": "fa4cdf31d062cae4d6425648a28b88f555ad71f4", "title": "Activities on Facebook Reveal the Depressive State of Users", "text": "BACKGROUND\nAs online social media have become prominent, much effort has been spent on identifying users with depressive symptoms in order to aim at early diagnosis, treatment, and even prevention by using various online social media. In this paper, we focused on Facebook to discern any correlations between the platform's features and users' depressive symptoms. This work may be helpful in trying to reach and detect large numbers of depressed individuals more easily.\n\n\nOBJECTIVE\nOur goal was to develop a Web application and identify depressive symptom-related features from users of Facebook, a popular social networking platform.\n\n\nMETHODS\n55 Facebook users (male=40, female=15, mean age 24.43, SD 3.90) were recruited through advertisement fliers distributed to students in a large university in Korea. Using EmotionDiary, the Facebook application we developed, we evaluated depressive symptoms using the Center for Epidemiological Studies-Depression (CES-D) scale. We also provided tips and facts about depression to participants and measured their responses using EmotionDiary. To identify the Facebook features related to depression, correlation analyses were performed between CES-D and participants' responses to tips and facts or Facebook social features. Last, we interviewed depressed participants (CES-D\u226525) to assess their depressive symptoms by a psychiatrist.\n\n\nRESULTS\nFacebook activities had predictive power in distinguishing depressed and nondepressed individuals. Participants' response to tips and facts, which can be explained by the number of app tips viewed and app points, had a positive correlation (P=.04 for both cases), whereas the number of friends and location tags had a negative correlation with the CES-D scale (P=.08 and P=.045 respectively). Furthermore, in finding group differences in Facebook social activities, app tips viewed and app points resulted in significant differences (P=.01 and P=.03 respectively) between probably depressed and nondepressed individuals.\n\n\nCONCLUSIONS\nOur results using EmotionDiary demonstrated that the more depressed one is, the more one will read tips and facts about depression. We also confirmed depressed individuals had significantly fewer interactions with others (eg, decreased number of friends and location tagging). Our app, EmotionDiary, can successfully evaluate depressive symptoms as well as provide useful tips and facts to users. These results open the door for examining Facebook activities to identify depressed individuals. We aim to conduct the experiment in multiple cultures as well."}
{"_id": "6ba074a7114112ddb74a205540f568d9737a7f08", "title": "KidCAD: digitally remixing toys through tangible tools", "text": "Children have great facility in the physical world, and can skillfully model in clay and draw expressive illustrations. Traditional digital modeling tools have focused on mouse, keyboard and stylus input. These tools may be complicated and difficult for young users to easily and quickly create exciting designs. We seek to bring physical interaction to digital modeling, to allow users to use existing physical objects as tangible building blocks for new designs. We introduce KidCAD a digital clay interface for children to remix toys. KidCAD allows children to imprint 2.5D shapes from physical objects into their digital models by deforming a malleable gel input device, deForm. Users can mashup existing objects, edit and sculpt or draw new designs on a 2.5D canvas using physical objects, hands and tools as well as 2D touch gestures. We report on a preliminary user study with 13 children, ages 7 to 10, which provides feedback for our design and helps guide future work in tangible modeling for children."}
{"_id": "7861d8ce85b5cbb66f8a9deccf5f08ff59e1c2a8", "title": "3D Human Body Reconstruction from a Single Image via Volumetric Regression", "text": "This paper proposes the use of an end-to-end Convolutional Neural Network for direct reconstruction of the 3D geometry of humans via volumetric regression. The proposed method does not require the fitting of a shape model and can be trained to work from a variety of input types, whether it be landmarks, images or segmentation masks. Additionally, non-visible parts, either self-occluded or otherwise, are still reconstructed, which is not the case with depth map regression. We present results that show that our method can handle both pose variation and detailed reconstruction given appropriate datasets for training."}
{"_id": "0adfe845a05ba4887f9bf21949f6f276246b789f", "title": "Human hand modelling: kinematics, dynamics, applications", "text": "An overview of mathematical modelling of the human hand is given. We consider hand models from a specific background: rather than studying hands for surgical or similar goals, we target at providing a set of tools with which human grasping and manipulation capabilities can be studied, and hand functionality can be described. We do this by investigating the human hand at various levels: (1) at the level of kinematics, focussing on the movement of the bones of the hand, not taking corresponding forces into account; (2) at the musculotendon structure, i.e. by looking at the part of the hand generating the forces and thus inducing the motion; and (3) at the combination of the two, resulting in hand dynamics as well as the underlying neurocontrol. Our purpose is to not only provide the reader with an overview of current human hand modelling approaches but also to fill the gaps with recent results and data, thus allowing for an encompassing picture."}
{"_id": "2fccaa0c8ad0c727f1f7ec948ba9256092c2a64d", "title": "Accurate, Robust, and Flexible Real-time Hand Tracking", "text": "We present a new real-time hand tracking system based on a single depth camera. The system can accurately reconstruct complex hand poses across a variety of subjects. It also allows for robust tracking, rapidly recovering from any temporary failures. Most uniquely, our tracker is highly flexible, dramatically improving upon previous approaches which have focused on front-facing close-range scenarios. This flexibility opens up new possibilities for human-computer interaction with examples including tracking at distances from tens of centimeters through to several meters (for controlling the TV at a distance), supporting tracking using a moving depth camera (for mobile scenarios), and arbitrary camera placements (for VR headsets). These features are achieved through a new pipeline that combines a multi-layered discriminative reinitialization strategy for per-frame pose estimation, followed by a generative model-fitting stage. We provide extensive technical details and a detailed qualitative and quantitative analysis."}
{"_id": "fec0aabf0adce530b83695b83312aef46176519d", "title": "Real-time Joint Tracking of a Hand Manipulating an Object from RGB-D Input", "text": "Real-time simultaneous tracking of hands manipulating and interacting with external objects has many potential applications in augmented reality, tangible computing, and wearable computing. However, due to difficult occlusions, fast motions, and uniform hand appearance, jointly tracking hand and object pose is more challenging than tracking either of the two separately. Many previous approaches resort to complex multi-camera setups to remedy the occlusion problem and often employ expensive segmentation and optimization steps which makes real-time tracking impossible. In this paper, we propose a real-time solution that uses a single commodity RGB-D camera. The core of our approach is a 3D articulated Gaussian mixture alignment strategy tailored to handobject tracking that allows fast pose optimization. The alignment energy uses novel regularizers to address occlusions and hand-object contacts. For added robustness, we guide the optimization with discriminative part classification of the hand and segmentation of the object. We conducted extensive experiments on several existing datasets and introduce a new annotated hand-object dataset. Quantitative and qualitative results show the key advantages of our method: speed, accuracy, and robustness."}
{"_id": "38bb98c164c4c84f2c10c8ea97636075240b66c0", "title": "Terminological ontology learning and population using latent Dirichlet allocation", "text": "The success of Semantic Web will heavily rely on the availability of formal ontologies to structure machine understanding data. However, there is still a lack of general methodologies for ontology automatic learning and population, i.e. the generation of domain ontologies from various kinds of resources by applying natural language processing and machine learning techniques In this paper, the authors present an ontology learning and population system that combines both statistical and semantic methodologies. Several experiments have been carried out, demonstrating the effectiveness of the proposed system. Keywords-Ontologies, Ontology Learning, Ontology Population, Latent Dirichlet Allocation"}
{"_id": "d9fc07892b677f5365cb889e3da1f0335b01e1a8", "title": "Evolutionary Competition in Platform Ecosystems", "text": "I competition has received scant attention in prior studies, which predominantly study interplatform competition. We develop a middle-range theory of how complementarity between input control and a platform extension\u2019s modularization\u2014by inducing evolution\u2014influences its performance in a platform market. Primary and archival data spanning five years from 342 Firefox extensions show that such complementarity fosters performance by accelerating an extension\u2019s perpetual evolution."}
{"_id": "2d8acf4fa3ac825387ce416972af079b6562f5e0", "title": "Cache in the air: exploiting content caching and delivery techniques for 5G systems", "text": "The demand for rich multimedia services over mobile networks has been soaring at a tremendous pace over recent years. However, due to the centralized architecture of current cellular networks, the wireless link capacity as well as the bandwidth of the radio access networks and the backhaul network cannot practically cope with the explosive growth in mobile traffic. Recently, we have observed the emergence of promising mobile content caching and delivery techniques, by which popular contents are cached in the intermediate servers (or middleboxes, gateways, or routers) so that demands from users for the same content can be accommodated easily without duplicate transmissions from remote servers; hence, redundant traffic can be significantly eliminated. In this article, we first study techniques related to caching in current mobile networks, and discuss potential techniques for caching in 5G mobile networks, including evolved packet core network caching and radio access network caching. A novel edge caching scheme based on the concept of content-centric networking or information-centric networking is proposed. Using trace-driven simulations, we evaluate the performance of the proposed scheme and validate the various advantages of the utilization of caching content in 5G mobile networks. Furthermore, we conclude the article by exploring new relevant opportunities and challenges."}
{"_id": "22fe619996b59c09cb73be40103a123d2e328111", "title": "The German Traffic Sign Recognition Benchmark: A multi-class classification competition", "text": "The \u201cGerman Traffic Sign Recognition Benchmark\u201d is a multi-category classification competition held at IJCNN 2011. Automatic recognition of traffic signs is required in advanced driver assistance systems and constitutes a challenging real-world computer vision and pattern recognition problem. A comprehensive, lifelike dataset of more than 50,000 traffic sign images has been collected. It reflects the strong variations in visual appearance of signs due to distance, illumination, weather conditions, partial occlusions, and rotations. The images are complemented by several precomputed feature sets to allow for applying machine learning algorithms without background knowledge in image processing. The dataset comprises 43 classes with unbalanced class frequencies. Participants have to classify two test sets of more than 12,500 images each. Here, the results on the first of these sets, which was used in the first evaluation stage of the two-fold challenge, are reported. The methods employed by the participants who achieved the best results are briefly described and compared to human traffic sign recognition performance and baseline results."}
{"_id": "9ab30776f4fbf24ae2d0ff56054e085b256d158e", "title": "American Sign Language Translation through Sensory Glove ; SignSpeak", "text": "To make a communication bridge, a highly accurate, cost effective and an independent glove was designed for deaf/mute people to enable them to communicate. The glove translates the sign language gestures into speech according to the American Sign Language Standard. The glove contained flex and contact sensors to detect the movements of the fingers and bending of the palm. In addition an accelerometer was built in the glove to measure the acceleration produced by the changing positions of the hand. Principal Component Analysis (PCA) was used to train the glove into recognizing various gestures, and later classify the gestures into alphabets in real time. The glove then established a Bluetooth link with an Android phone, which was used to display the received letters and words and convert the text into speech. The glove was found to have an accuracy of 92%."}
{"_id": "78fb896b01cce8e5c6d47a090935169b1e0c276f", "title": "Extreme scale with full SQL language support in microsoft SQL Azure", "text": "Cloud SQL Server is an Internet scale relational database service which is currently used by Microsoft delivered services and also offered directly as a fully relational database service known as \"SQL Azure\". One of the principle design objectives in Cloud SQL Server was to provide true SQL support with full ACID transactions within controlled scale \"consistency domains\" and provide a relaxed degree of consistency across consistency domains that would be viable to clusters of 1,000's of nodes. In this paper, we describe the implementation of Cloud SQL Server with an emphasis on this core design principle."}
{"_id": "3ee02dff33c03d98fb5a2cecf298b77171f0d0dc", "title": "Multiwave: Doppler Effect Based Gesture Recognition in Multiple Dimensions", "text": "We constructed an acoustic, gesture-based recognition system called Multiwave, which leverages the Doppler Effect to translate multidimensional movements into user interface commands. Our system only requires the use of two speakers and a microphone to be operational. Since these components are already built in to most end user systems, our design makes gesture-based input more accessible to a wider range of end users. By generating a known high frequency tone from multiple speakers and detecting movement using changes in the sound waves, we are able to calculate a Euclidean representation of hand velocity that is then used for more natural gesture recognition and thus, more meaningful interaction mappings. We present the results of a user study of Multiwave to evaluate recognition rates for different gestures and report accuracy rates comparable to or better than the current state of the art. We also report subjective user feedback and some lessons learned from our system that provide additional insight for future applications of multidimensional gesture recognition."}
{"_id": "1a3e334f16b6888843a8150ddf7e4f46b2b28fd5", "title": "An inventory for measuring clinical anxiety: psychometric properties.", "text": "The development ofa 2 l-item self-report inventory for measuring the severity of anxiety in psychiatric populations is described. The initial item pool of 86 items was drawn from three preexisting scales: the Anxiety Checklist, the Physician's Desk Reference Checklist, and the Situational Anxiety Checklist. A series of analyses was used to reduce the item pool. The resulting Beck Anxiety Inventory (BAI) is a 21-item scale that showed high internal consistency (a = .92) and test-retest reliability over 1 week, r(81) = .75. The BAI discriminated anxious diagnostic groups (panic disorder, generalized anxiety disorder, etc.) from nonanxious diagnostic groups (major depression, dysthymic disorder, etc). In addition, the BAI was moderately correlated with the revised Hamilton Anxiety Rating Scale, r(150) = .51, and was only mildly correlated with the revised Hamilton Depression Rating Scale, r(l 53) = .25."}
{"_id": "abc7254b751b124ff98cbf522526cf2ce5376e95", "title": "Manifold surface reconstruction of an environment from sparse Structure-from-Motion data", "text": "The majority of methods for the automatic surface reconstruction of an environment from an image sequence have two steps: Structure-from-Motion and dense stereo. From the computational standpoint, it would be interesting to avoid dense stereo and to generate a surface directly from the sparse cloud of 3D points and their visibility information provided by Structure-fromMotion. The previous attempts to solve this problem are currently very limited: the surface is non-manifold or has zero genus, the experiments are done on small scenes or objects using a few dozens of images. Our solution does not have these limitations. Furthermore, we experiment with hand-held or helmet-held catadioptric cameras moving in a city and generate 3D models such that the camera trajectory can be longer than one kilometer."}
{"_id": "3b938f66d03559e1144fa2ab63a3a9a076a6b48b", "title": "A coordinate gradient descent method for \u21131-regularized convex minimization", "text": "In applications such as signal processing and statistics, many problems involve finding sparse solutions to under-determined linear systems of equations. These problems can be formulated as a structured nonsmooth optimization problems, i.e., the problem of minimizing `1-regularized linear least squares problems. In this paper, we propose a block coordinate gradient descent method (abbreviated as CGD) to solve the more general `1-regularized convex minimization problems, i.e., the problem of minimizing an `1-regularized convex smooth function. We establish a Q-linear convergence rate for our method when the coordinate block is chosen by a Gauss-Southwell-type rule to ensure sufficient descent. We propose efficient implementations of the CGD method and report numerical results for solving large-scale `1-regularized linear least squares problems arising in compressed sensing and image deconvolution as well as large-scale `1-regularized logistic regression problems for feature selection in data classification. Comparison with several state-of-the-art algorithms specifically designed for solving large-scale `1-regularized linear least squares or logistic regression problems suggests that an efficiently implemented CGD method may outperform these algorithms despite the fact that the CGD method is not specifically designed just to solve these special classes of problems."}
{"_id": "8ad03b36ab3cba911699fe1699332c6353f227bc", "title": "Application of Computational Intelligence to Improve Education in Smart Cities", "text": "According to UNESCO, education is a fundamental human right and every nation's citizens should be granted universal access with equal quality to it. Because this goal is yet to be achieved in most countries, in particular in the developing and underdeveloped countries, it is extremely important to find more effective ways to improve education. This paper presents a model based on the application of computational intelligence (data mining and data science) that leads to the development of the student's knowledge profile and that can help educators in their decision making for best orienting their students. This model also tries to establish key performance indicators to monitor objectives' achievement within individual strategic planning assembled for each student. The model uses random forest for classification and prediction, graph description for data structure visualization and recommendation systems to present relevant information to stakeholders. The results presented were built based on the real dataset obtained from a Brazilian private k-9 (elementary school). The obtained results include correlations among key data, a model to predict student performance and recommendations that were generated for the stakeholders."}
{"_id": "bf45ece00d825c2a9a8cdf333965b7acded1a128", "title": "On self-aggrandizement and anger: a temporal analysis of narcissism and affective reactions to success and failure.", "text": "Narcissists are thought to display extreme affective reactions to positive and negative information about the self. Two experiments were conducted in which high- and low-narcissistic individuals, as defined by the Narcissistic Personality Inventory (NPI), completed a series of tasks in which they both succeeded and failed. After each task, participants made attributions for their performance and reported their moods. High-NPI participants responded with greater changes in anxiety, anger, and self-esteem. Low self-complexity was examined, but it neither mediated nor moderated affective responses. High-NPI participants tended to attribute initial success to ability, leading to more extreme anger responses and greater self-esteem reactivity to failure. A temporal sequence model linking self-attribution and emotion to narcissistic rage is discussed."}
{"_id": "a74720eeb8a9f6289dbece69902e85acf0e99871", "title": "Optimal and Low-Complexity Algorithms for Dynamic Spectrum Access in Centralized Cognitive Radio Networks with Fading Channels", "text": "In this paper, we develop a centralized spectrum sensing and Dynamic Spectrum Access (DSA) scheme for secondary users (SUs) in a Cognitive Radio (CR) network. Assuming that the primary channel occupancy follows a Markovian evolution, the channel sensing problem is modeled as a Partially Observable Markov Decision Process (POMDP). We assume that each SU can sense only one channel at a time by using energy detection, and the sensing outcomes are then reported to a central unit, called the secondary system decision center (SSDC), that determines the channel sensing/accessing policies. We derive both the optimal channel assignment policy for secondary users to sense the primary channels, and the optimal channel access rule. Our proposed optimal sensing and accessing policies alleviate many shortcomings and limitations of existing proposals: (a) ours allows fully utilizing all available primary spectrum white spaces, (b) our model, and thus the proposed solution, exploits the temporal and spatial diversity across different primary channels, and (c) is based on realistic local sensing decisions rather than complete knowledge of primary signalling structure. As an alternative to the high complexity of the optimal channel sensing policy, a suboptimal sensing policy is obtained by using the Hungarian algorithm iteratively, which reduces the complexity of the channel assignment from an exponential to a polynomial order. We also propose a heuristic algorithm that reduces the complexity of the sensing policy further to a linear order. The simulation results show that the proposed algorithms achieve a near-optimal performance with a significant reduction in computational time."}
{"_id": "b24a8abf5f70c1ac20e77b7646efdd6f2058ff16", "title": "Compact Design of a Hydraulic Driving Robot for Intraoperative MRI-Guided Bilateral Stereotactic Neurosurgery", "text": "In this letter, we present an intraoperative magnetic resonance imaging (MRI)-guided robot for bilateral stereotactic procedures. Its compact design enables robot's operation within the constrained space of standard imaging head coil. magnetic resonance (MR) safe and high-performance hydraulic transmissions are incorporated. A maximum stiffness coefficient of 24.35\u00a0N/mm can be achieved with transmission fluid preloaded at 2\u00a0bar. Sufficient targeting accuracy (average within \u22641.73\u00a0mm) has been demonstrated in a simulated needle insertion task of deep brain stimulation. A novel MR-based wireless tracking technique is adopted. It is capable of offering real-time and continuous (30\u201340\u00a0Hz) three-dimensional (3-D) localization of robotic instrument under the proper MR tracking sequence. It outperforms the conventional method of using low-contrast passive fiducials that can only be revealed in the MR image domain. Two wireless tracking units/markers are integrated with the robot, which are two miniaturized coil circuits in size of 1.5\u00a0\u00d7\u00a05\u00a0\u00d7\u00a00.2\u00a0mm3, fabricated on flexible thin films. A navigation test was performed under the standard MRI settings in order to visualize the 3-D localization of the robotic instrument. MRI compatibility test was also carried out to prove the minimal interference to MR images of the presented hydraulic robotic platform."}
{"_id": "0d5676d90f20215d08dfe7e71fb55303f23604f7", "title": "The Cricket location-support system", "text": "This paper presents the design, implementation, and evaluation of Cricket, a location-support system for in-building, mobile, location-dependent applications. It allows applications running on mobile and static nodes to learn their physical location by using listeners that hear and analyze information from beacons spread throughout the building. Cricket is the result of several design goals, including user privacy, decentralized administration, network heterogeneity, and low cost. Rather than explicitly tracking user location, Cricket helps devices learn where they are and lets them decide whom to advertise this information to; it does not rely on any centralized management or control and there is no explicit coordination between beacons; it provides information to devices regardless of their type of network connectivity; and each Cricket device is made from off-the-shelf components and costs less than U.S. $10. We describe the randomized algorithm used by beacons to transmit information, the use of concurrent radio and ultrasonic signals to infer distance, the listener inference algorithms to overcome multipath and interference, and practical beacon configuration and positioning techniques that improve accuracy. Our experience with Cricket shows that several location-dependent applications such as in-building active maps and device control can be developed with little effort or manual configuration."}
{"_id": "8476714438669d5703690d4bbee9bfe751f61144", "title": "Debunking the Myths of Influence Maximization: An In-Depth Benchmarking Study", "text": "Influence maximization (IM) on social networks is one of the most active areas of research in computer science. While various IM techniques proposed over the last decade have definitely enriched the field, unfortunately, experimental reports on existing techniques fall short in validity and integrity since many comparisons are not based on a common platform or merely discussed in theory. In this paper, we perform an in-depth benchmarking study of IM techniques on social networks. Specifically, we design a benchmarking platform, which enables us to evaluate and compare the existing techniques systematically and thoroughly under identical experimental conditions. Our benchmarking results analyze and diagnose the inherent deficiencies of the existing approaches and surface the open challenges in IM even after a decade of research. More fundamentally, we unearth and debunk a series of myths and establish that there is no single state-of-the-art technique in IM. At best, a technique is the state of the art in only one aspect."}
{"_id": "f8f5a4a3c1ad91d4971758f393f115af203ea8aa", "title": "A 5.8-Gb/s Adaptive Integrating Duobinary DFE Receiver for Multi-Drop Memory Interface", "text": "This paper describes a 5.8 Gb/s adaptive integrating duobinary decision-feedback equalizer (DFE) for use in next-generation multi-drop memory interface. The proposed receiver combines traditional interface techniques like the integrated signaling and the duobinary signaling, in which the duobinary signal is generated by current integration in the receiver. It can address issues such as input data dependence during integration, need for precursor equalization, high equalizer gain boosting, and sensitivity to high-frequency noise. The proposed receiver also alleviates DFE critical timing to provide gain in speed, and embed DFE taps in duobinary decoding to provide gain in power and area. The adaptation for adjusting the equalizer common-mode level, duobinary zero level, tap coefficient values, and timing recovery is incorporated. The proposed DFE receiver was fabricated in a 45 nm CMOS process, whose measurement results indicated that it worked at 5.8 Gb/s speed in a four-drop channel configuration with seven slave ICs, and the bathtub curve shows 36% open for $10^{-10}$ bit error rate."}
{"_id": "0d81a9e8545dc31d441f2c58a548371e3d888a35", "title": "A Relatively Small Turing Machine Whose Behavior Is Independent of Set Theory", "text": "Since the definition of the busy beaver function by Rado in 1962, an interesting open question has been the smallest value of n for which BB(n) is independent of Zermelo\u2013Fraenkel set theory with the axiom of choice (ZFC). Is this n approximately 10, or closer to 1 000 000, or is it even larger? In this paper, we show that it is at most 7910 by presenting an explicit description of a 7910-state Turing machine Z with one tape and a two-symbol alphabet that cannot be proved to run forever in ZFC (even though it presumably does), assuming ZFC is consistent. The machine is based on work of Harvey Friedman on independent statements involving order-invariant graphs. In doing so, we give the first known upper bound on the highest provable busy beaver number in ZFC. To create Z, we develop and use a higher-level language, Laconic, which is much more convenient than direct state manipulation. We also use Laconic to design two Turing machines, G and R, that halt if and only if there are counterexamples to Goldbach\u2019s conjecture and the Riemann hypothesis, respectively."}
{"_id": "43ea93b01be7d3eed2641b9393c6438d19b825a0", "title": "Graph analytics using vertica relational database", "text": "Graph analytics is becoming increasingly popular, with a number of new applications and systems developed in the past few years. In this paper, we study Vertica relational database as a platform for graph analytics. We show that vertex-centric graph analysis can be translated to SQL queries, typically involving table scans and joins, and that modern column-oriented databases are very well suited to running such queries. Furthermore, we show how developers can trade memory footprint for significantly reduced I/O costs in Vertica. We present an experimental evaluation of the Vertica relational database system on a variety of graph analytics, including iterative analysis, a combination of graph and relational analyses, and more complex 1-hop neighborhood graph analytics, showing that it is competitive to two popular vertex-centric graph analytics systems, namely Giraph and GraphLab."}
{"_id": "6149339ccd74f9c38ec67281651ad8efa3b375f6", "title": "On the Reuse of Past Optimal Queries", "text": "Vijay V. Raghavan Hayri Sever The Center for Advanced Computer Studies The Department of Computer Science University of Southwestern Louisiana University of Southwestern Louisiana Lafayette, LA 70504, USA Lafayette, LA 70504, USA e-mail: raghavan@cacs.usl. edu Information Retrieval (IR) systems exploit user feedback by generating an optimal query with respect to a particular information need. Since obtaining an optimal query is an expensive process, the need for mechanisms to save and reuse past optimal queries for future queries is obvions. In this article, we propose the use of a query base, a set of persistent past optimal queries, and investigate similarity measures between queries. The query base can be used either to answer user queries or to formulate optimal queries. We justify the former case analytically and the latter case by experiment."}
{"_id": "14423a62355ff3a1cff997f24534390921f854b5", "title": "Learning to map between ontologies on the semantic web", "text": "Ontologies play a prominent role on the Semantic Web. They make possible the widespread publication of machine understandable data, opening myriad opportunities for automated information processing. However, because of the Semantic Web's distributed nature, data on it will inevitably come from many different ontologies. Information processing across ontologies is not possible without knowing the semantic mappings between their elements. Manually finding such mappings is tedious, error-prone, and clearly not possible at the Web scale. Hence, the development of tools to assist in the ontology mapping process is crucial to the success of the Semantic Web.We describe glue, a system that employs machine learning techniques to find such mappings. Given two ontologies, for each concept in one ontology glue finds the most similar concept in the other ontology. We give well-founded probabilistic definitions to several practical similarity measures, and show that glue can work with all of them. This is in contrast to most existing approaches, which deal with a single similarity measure. Another key feature of glue is that it uses multiple learning strategies, each of which exploits a different type of information either in the data instances or in the taxonomic structure of the ontologies. To further improve matching accuracy, we extend glue to incorporate commonsense knowledge and domain constraints into the matching process. For this purpose, we show that relaxation labeling, a well-known constraint optimization technique used in computer vision and other fields, can be adapted to work efficiently in our context. Our approach is thus distinguished in that it works with a variety of well-defined similarity notions and that it efficiently incorporates multiple types of knowledge. We describe a set of experiments on several real-world domains, and show that glue proposes highly accurate semantic mappings."}
{"_id": "500923d2513d30299350a6a0e9b84b077250dc78", "title": "Determining Semantic Similarity among Entity Classes from Different Ontologies", "text": "Semantic similarity measures play an important role in information retrieval and information integration. Traditional approaches to modeling semantic similarity compute the semantic distance between definitions within a single ontology. This single ontology is either a domain-independent ontology or the result of the integration of existing ontologies. We present an approach to computing semantic similarity that relaxes the requirement of a single ontology and accounts for differences in the levels of explicitness and formalization of the different ontology specifications. A similarity function determines similar entity classes by using a matching process over synonym sets, semantic neighborhoods, and distinguishing features that are classified into parts, functions, and attributes. Experimental results with different ontologies indicate that the model gives good results when ontologies have complete and detailed representations of entity classes. While the combination of word matching and semantic neighborhood matching is adequate for detecting equivalent entity classes, feature matching allows us to discriminate among similar, but not necessarily equivalent, entity classes."}
{"_id": "1c58b4c7adee37874ac96f7d859d1a51f97bf6aa", "title": "Issues in Stacked Generalization", "text": "Stacked generalization is a general method of using a high-level model to combine lowerlevel models to achieve greater predictive accuracy. In this paper we address two crucial issues which have been considered to be a `black art' in classi cation tasks ever since the introduction of stacked generalization in 1992 by Wolpert: the type of generalizer that is suitable to derive the higher-level model, and the kind of attributes that should be used as its input. We nd that best results are obtained when the higher-level model combines the con dence (and not just the predictions) of the lower-level ones. We demonstrate the e ectiveness of stacked generalization for combining three di erent types of learning algorithms for classi cation tasks. We also compare the performance of stacked generalization with majority vote and published results of arcing and bagging."}
{"_id": "c96f0b75eabfdf1797b69ac3a92487b45c2d9855", "title": "PROMPT: Algorithm and Tool for Automated Ontology Merging and Alignment", "text": "Researchers in the ontology-design field have developed the content for ontologies in many domain areas. Recently, ontologies have become increasingly common on the WorldWide Web where they provide semantics for annotations in Web pages. This distributed nature of ontology development has led to a large number of ontologies covering overlapping domains. In order for these ontologies to be reused, they first need to be merged or aligned to one another. The processes of ontology alignment and merging are usually handled manually and often constitute a large and tedious portion of the sharing process. We have developed and implemented PROMPT, an algorithm that provides a semi-automatic approach to ontology merging and alignment. PROMPT performs some tasks automatically and guides the user in performing other tasks for which his intervention is required. PROMPT also determines possible inconsistencies in the state of the ontology, which result from the user\u2019s actions, and suggests ways to remedy these inconsistencies. PROMPT is based on an extremely general knowledge model and therefore can be applied across various platforms. Our formative evaluation showed that a human expert followed 90% of the suggestions that PROMPT generated and that 74% of the total knowledge-base operations invoked by the user were suggested by PROMPT. 1 Ontologies in AI and on the Web Ontologies today are available in many different forms: as artifacts of a tedious knowledge-engineering process, as information that was extracted automatically from informal electronic sources, or as simple \u201clight-weight\u201d ontologies that specify semantic relationships among resources available on the World-Wide Web (Brickley and Guha 1999). But what does a user do when he finds several ontologies that he would like to use but that do not conform to one another? The user must establish correspondences among the source ontologies, and to determine the set of overlapping concepts, concepts that are similar in meaning but have different names or structure, concepts that are unique to each of the sources. This work must be done regardless of whether the ultimate goal is to create a single coherent ontology that includes the information from all the sources ( merging) or if the sources must be made consistent and coherent with one another but kept separately ( alignment). Copyright \u00a9 2000, American Association for Artificial Intelligence ( www.aaai.org) . All rights reserved. Currently the work of mapping, merging, or aligning ontologies is performed mostly by hand, without any tools to automate the process fully or partially (Fridman Noy and Musen 1999).Our participation in the ontology-alignment effort within DARPA\u2019s High-Performance KnowledgeBases project (Cohen et al. 1999) was a strong motivation for developing semi-automated specialized tools for ontology merging and alignment. Several teams developed ontologies in the domain of military planning, which then needed to be aligned to one another. We found the experience of manually aligning the ontologies to be an extremely tedious and time-consuming process. At the same time we noticed many steps in the process that could be automated, many points where a tool could make reasonable suggestions, and many conflicts and constraint violations for which a tool could check. 
We developed a formalism-independent algorithm for ontology merging and alignment\u2014PROMPT (formerly SMART)\u2014which automates the process as much as possible. Where an automatic decision is not possible, the algorithm guides the user to the places in the ontology where his intervention is necessary, suggests possible actions, and determines the conflicts in the ontology and proposes solutions for these conflicts. We implemented the algorithm in an interactive tool based on the Prot\u00e9g\u00e9-2000 knowledge-modeling environment (Fridman Noy et al. 2000). Prot\u00e9g\u00e9-2000 is an ontology-design and knowledgeacquisition tool with an OKBC-compatible (Chaudhri et al. 1998) knowledge model, which allows domain experts (and not necessarily knowledge engineers) to develop ontologies and perform knowledge acquisition. We have evaluated PROMPT, comparing its performance with the humanexpert performance and with the performance of another ontology-merging tool."}
{"_id": "0bdb56c3e9141d654a4809f014d0369305f1e7fd", "title": "You are what you say: privacy risks of public mentions", "text": "In today's data-rich networked world, people express many aspects of their lives online. It is common to segregate different aspects in different places: you might write opinionated rants about movies in your blog under a pseudonym while participating in a forum or web site for scholarly discussion of medical ethics under your real name. However, it may be possible to link these separate identities, because the movies, journal articles, or authors you mention are from a sparse relation space whose properties (e.g., many items related to by only a few users) allow re-identification. This re-identification violates people's intentions to separate aspects of their life and can have negative consequences; it also may allow other privacy violations, such as obtaining a stronger identifier like name and address.This paper examines this general problem in a specific setting: re-identification of users from a public web movie forum in a private movie ratings dataset. We present three major results. First, we develop algorithms that can re-identify a large proportion of public users in a sparse relation space. Second, we evaluate whether private dataset owners can protect user privacy by hiding data; we show that this requires extensive and undesirable changes to the dataset, making it impractical. Third, we evaluate two methods for users in a public forum to protect their own privacy, suppression and misdirection. Suppression doesn't work here either. However, we show that a simple misdirection strategy works well: mention a few popular items that you haven't rated."}
{"_id": "d5d034e16247370356ccca7a9920138fd70d0bbb", "title": "Richard Thaler and the Rise of Behavioral Economics", "text": "The emergence of behavioral economics is one of the most prominent conceptual developments in the social sciences in recent decades. The central figure in the field in its early years was Richard Thaler. In this article, I review and discuss his scientific contributions. JEL classification: B2, D9, G1"}
{"_id": "594f60fdc0e99c136f527c152ff22cf625879abb", "title": "Development of a deep silicon phase Fresnel lens using Gray-scale lithography and deep reactive ion etching", "text": "We report the first fabrication and development of a deep phase Fresnel lens (PFL) in silicon through the use of gray-scale lithography and deep-reactive ion etching (DRIE). A Gaussian tail approximation is introduced as a method of predicting the height of photoresist gray levels given the relative amount of transmitted light through a gray-scale optical mask. Device mask design is accomplished through command-line scripting in a CAD tool to precisely define the millions of pixels required to generate the appropriate profile in photoresist. Etch selectivity during DRIE pattern transfer is accurately controlled to produce the desired scaling factor between the photoresist and silicon profiles. As a demonstration of this technology, a 1.6-mm diameter PFL is etched 43 /spl mu/m into silicon with each grating profile designed to focus 8.4 keV photons a distance of 118 m."}
{"_id": "de864fce3469f0175fc0ff2869ddbe8d144cf53e", "title": "A Comparative Analysis of Enterprise Architecture Frameworks Based on EA Quality Attributes", "text": "Many Enterprise Architecture Frameworks (EAFs) are in use today to guide or serve as a model for the enterprise architecture development. Researchers offer general comparative information about EAF. However, there has been little work on the quality of Enterprise Architecture (EA). This study provides the characteristics of the five EAFs using comparative analysis based on Enterprise Architecture Quality Attributes (EAQAs). The attribute of EA quality was extracted from the EAFs and the definition of EAQA was defined from EA user viewpoint. The criteria for each quality attribute were developed to compare EAFs using four dimensional concepts. This paper compared several frameworks using the criteria to provide guidance in the selection of an EAF that meets the quality requirement of EA"}
{"_id": "3b55f6e637d13503bf7d0ff3b2a0f4b2f8814239", "title": "Rational Choice Theory : Toward a Psychological , Social , and Material Contextualization of Human Choice Behavior", "text": "The main purpose of this paper is to provide a brief overview of the rational choice approach, followed by an identification of several of the major criticisms of RCT and its conceptual and empirical limitations. It goes on to present a few key initiatives to develop alternative, more realistic approaches which transcend some of the limitations of Rational Choice Theory (RCT). Finally, the article presents a few concluding reflections and a table comparing similarities and differences between the mainstream RCT and some of the initial components of an emerging choice theory. Our method has been to conduct a brief selective review of rational choice theoretical formulations and applications as well as a review of diverse critical literature in the social sciences where rational choice has been systematically criticized. We have focused on a number of leading contributors (among others, several Nobel Prize Recipients in economics, who have addressed rational choice issues). So this article makes no claim for completeness. The review maps a few key concepts and assumptions underpinning the conceptual model and empirical applications of RCT. It reviews also a range of critical arguments and evidence of limitations. It identifies selected emerging concepts and theoretical revisions and adaptations to choice theory and what they entail. The results obtained, based on our literature reviews and analyses, are the identification of several major limitations of RCT as well as selected modifications and adaptations of choice theory which overcome or promise to overcome some of the RCT limitations. Thus, the article with Table 1 in hand provides a point of departure for follow-up systematic reviews and more precise questions for future theory development. The criticisms and adaptations of RCT have contributed to greater realism, empirical relevance, and increased moral considerations. The developments entail, among other things: the now well-known cognitive limitations (\u201cbounded rationality\u201d) and, for instance, the role of satisficing rather than maximizing in decision-making to deal with cognitive T. Burns, E. Roszkowska 196 complexity and the uncertainties of multiple values; choice situations are re-contextualized with psychology, sociology, economic, and material conditions and factors which are taken into account explicitly and insightfully in empirical and theoretical work. Part of the contextualization concerns the place of multiple values, role and norm contradictions, and moral dilemmas in much choice behavior. In concluding, the article suggests that the adaptations and modifications made in choice theory have led to substantial fragmentation of choice theory and as of yet no integrated approach has appeared to simply displace RCT."}
{"_id": "0aa2ba1d8ffd0802cc12468bb566da029c0a9230", "title": "SASI: A New Ultralightweight RFID Authentication Protocol Providing Strong Authentication and Strong Integrity", "text": "As low-cost RFIDs become more and more popular, it is imperative to design ultralightweight RFID authentication protocols to resist all possible attacks and threats. However, all of the previous ultralightweight authentication schemes are vulnerable to various attacks. In this paper, we propose a new ultralightweight RFID authentication protocol that provides strong authentication and strong integrity protection of its transmission and of updated data. The protocol requires only simple bit-wise operations on the tag and can resist all the possible attacks. These features make it very attractive to low-cost RFIDs and very low-cost RFIDs."}
{"_id": "017ee86aa9be09284a2e07c9200192ab3bea9671", "title": "Self-Supervised Generative Adversarial Networks", "text": "Conditional GANs are at the forefront of natural image synthesis. The main drawback of such models is the necessity for labelled data. In this work we exploit two popular unsupervised learning techniques, adversarial training and self-supervision, to close the gap between conditional and unconditional GANs. In particular, we allow the networks to collaborate on the task of representation learning, while being adversarial with respect to the classic GAN game. The role of self-supervision is to encourage the discriminator to learn meaningful feature representations which are not forgotten during training. We test empirically both the quality of the learned image representations, and the quality of the synthesized images. Under the same conditions, the self-supervised GAN attains a similar performance to stateof-the-art conditional counterparts. Finally, we show that this approach to fully unsupervised learning can be scaled to attain an FID of 33 on unconditional IMAGENET generation."}
{"_id": "0fa3f365a1e7803df60d7298e6410a0b8b4063fc", "title": "Digital Transformation in the Automotive Industry: towards a Generic Value Network", "text": "The emergence of digital innovations is accelerating and intervening existing business models by delivering opportunities for new services. Drawing on the automotive industry, leading trends like self-driving cars, connectivity and car sharing are creating new business models. These are simultaneously giving rise for innovative market entrants, which begin to transform the automotive industry. However, literature does not provide a generic value network of the automotive industry, including new market players. The paper aims to visualize the current automotive ecosystem, by evolving a generic value network using the e3-value method. We define different roles, which are operating in the automotive industry by analyzing 650 companies reported in the Crunchbase database and present the value streams within the ecosystem. To validate the proposed generic value network we conducted five preliminary interviews with experts from the automotive industry. Our results show the central role of mobility service platforms, emerging disruptive technology providers and the dissemination of industries, e.g., as OEMs collaborate with mobile payment providers. Scholars in this field can apply the developed generic value network for further research, while car manufacturers may apply the model to position themselves in their market and to identify possible disruptive actors or potential business opportunities."}
{"_id": "ac1aa7aa49192ed51c03c862941ba04e420de7a8", "title": "Design of RYSEN: An Intrinsically Safe and Low-Power Three-Dimensional Overground Body Weight Support", "text": "Body weight support (BWS) systems are widely used in gait research and rehabilitation. This letter introduces a new three-dimensional overground BWS system, called the RYSEN. The RYSEN is designed to be intrinsically safe and low power consuming, while still performing at least as well as existing BWS systems regarding human\u2013robot interaction. These features are mainly achieved by decoupling degrees of freedom between motors: slow/high-torque motors for vertical motion and fast/low-torque motors for horizontal motion. This letter explains the design and evaluates its performance on power consumption and safety. Power consumption is expressed in terms of the sum of the positive mechanical output power of all motor axes. Safety is defined as the difference between the mechanical power available for horizontal and vertical movements and the mechanical power that is needed to perform its task. The results of the RYSEN are compared to the performance of three similar systems: a gantry, the FLOAT, and a classic cable robot. The results show that the RYSEN and a gantry consume approximately the same amount of power. The amount is approximately half the power consumed by the next-best system. For the safety, the gantry is taken as the benchmark, because of its perfect decoupling of directions. The RYSEN has a surplus of 268\u00a0W and 126 W for horizontal and vertical movements, respectively. This is significantly lower than the next-best system, which has a surplus of 1088 W and 1967 W, respectively."}
{"_id": "ccebcfc9f1f5d4355bebf0756f6f1da965629dd9", "title": "Interpretable and effective opinion spam detection via temporal patterns mining across websites", "text": "Millions of ratings and reviews on online review websites are influential over business revenues and customer experiences. However, spammers are posting fake reviews in order to gain financial benefits, at the cost of harming honest businesses and customers. Such fake reviews can be illegal and it is important to detect spamming attacks to eliminate unjust ratings and reviews. However, most of the current approaches can be incompetent as they can only utilize data from individual websites independently, or fail to detect more subtle attacks even they can fuse data from multiple sources. Further, the revealed evidence fails to explain the more complicated real world spamming attacks, hindering the detection processes that usually have human experts in the loop. We close this gap by introducing a novel framework that can jointly detect and explain the potential attacks. The framework mines both macroscopic level temporal sentimental patterns and microscopic level features from multiple review websites. We construct multiple sentimental time series to detect atomic dynamics, based on which we mine various cross-site sentimental temporal patterns that can explain various attacking scenarios. To further identify individual spams within the attacks with more evidence, we study and identify effective microscopic textual and behavioral features that are indicative of spams. We demonstrate via human annotations, that the simple and effective framework can spot a sizable collection of spams that have bypassed one of the current commercial anti-spam systems."}
{"_id": "5c695f1810951ad1bbdf7da5f736790dca240e5b", "title": "Aspect based sentiment analysis in social media with classifier ensembles", "text": "The analysis of user generated content on social media and the accurate specification of user opinions towards products and events is quite valuable to many applications. With the proliferation of Web 2.0 and the rapid growth of user-generated content on the web, approaches on aspect level sentiment analysis that yield fine grained information are of great interest. In this work, a classifier ensemble approach for aspect based sentiment analysis is presented. The approach is generic and utilizes latent dirichlet allocation to model a topic and to specify the main aspects that users address. Then, each comment is further analyzed and word dependencies that indicate the interactions between words and aspects are extracted. An ensemble classifier formulated by naive bayes, maximum entropy and support vector machines is designed to recognize the polarity of the user's comment towards each aspect. The evaluation results show sound improvement compared to individual classifiers and indicate that the ensemble system is scalable and accurate in analyzing user generated content and in specifying users' opinions and attitudes."}
{"_id": "f2942c955ad0200c43a4c6924784c6875ca3ece8", "title": "Bioaccumulation of nutrients and metals in sediment, water, and phoomdi from Loktak Lake (Ramsar site), northeast India: phytoremediation options and risk assessment.", "text": "In order to determine the potential of phoomdi to accumulate nutrients and metals, 11 dominant species belonging to 10 different families, sediment, and water were analyzed for a period of 2\u00a0years from the largest freshwater wetland of north-east India, Loktak (Ramsar site). Results revealed nutrient (TN and TP) and metal (Fe, Mn, Zn, and Cu) compartmentalization in the order phoomdi > sediment > water. Iron concentrations in water (0.37\u2009\u00b1\u20090.697 to 0.57\u2009\u00b1\u20091.010\u00a0mg\u00a0L(-1)) and sediments (81.8\u2009\u00b1\u20090.45 to 253.1\u2009\u00b1\u20090.51\u00a0mg\u00a0kg(-1)) show high metal discharge into the wetland. Metal accumulation in phoomdi ranged up to 212.3\u2009\u00b1\u20090.46-9461.4\u2009\u00b1\u20091.09\u00a0mg\u00a0kg(-1) for Fe; 85.9\u2009\u00b1\u20090.31-3565.1\u2009\u00b1\u20090.87\u00a0mg\u00a0kg(-1) for Mn; 9.6\u2009\u00b1\u20090.41-85.39\u2009\u00b1\u20090.58\u00a0mg\u00a0kg(-1) for Zn; and 0.31\u2009\u00b1\u20090.04-9.2\u2009\u00b1\u20090.04\u00a0mg\u00a0kg(-1) for Cu, respectively. High bioaccumulation factors (BAF) for metals (S. cucullata, 5.8\u2009\u00d7\u200910(4) Fe, 3.9\u2009\u00d7\u200910(4) Mn, and 1.7\u2009\u00d7\u200910(4) Cu, and O. javanica, 4.9\u2009\u00d7\u200910(3) Zn) and nutrients (S. polyrhiza, 9.7\u2009\u00d7\u200910(2) TN, and Z. latifolia, 7.9\u2009\u00d7\u200910(4) TP) revealed good accumulation in phoomdi compared to the wetland water column and indicate their potential to maintain a safe environment of Loktak. Further, the paper analyzed the health hazard of metals via phoomdi wild edible consumption, with the results confirming potential risk. Thus, the paper showed the need of in-depth monitoring and ample management strategies to ensure nutritional safety conditions of locals from the metals."}
{"_id": "e529218f0071de888146f7efedf5db1d74999044", "title": "Me , myself , and lie : The role of self-awareness in deception", "text": "Deception has been studied extensively but still little is known about individual differences in deception ability. We investigated the relationship between self-awareness and deception ability. We enlisted novice actors to portray varying levels of deception. Forty-two undergraduates viewed the videotaped portrayals and rated the actors believability. Actors with high private self-awareness were more effective deceivers, suggesting that high self-monitors are more effective at deceiving. Self-awareness may lead to knowledge of another s mental state (i.e., Theory of Mind), which may improve an individual s deception ability. 2005 Elsevier Ltd. All rights reserved."}
{"_id": "e5aa59866cd8df4f2b88ee6411866a46756b28e1", "title": "A Location-Sentiment-Aware Recommender System for Both Home-Town and Out-of-Town Users", "text": "Spatial item recommendation has become an important means to help people discover interesting locations, especially when people pay a visit to unfamiliar regions. Some current researches are focusing on modelling individual and collective geographical preferences for spatial item recommendation based on users' check-in records, but they fail to explore the phenomenon of user interest drift across geographical regions, i.e., users would show different interests when they travel to different regions. Besides, they ignore the influence of public comments for subsequent users' check-in behaviors. Specifically, it is intuitive that users would refuse to check in to a spatial item whose historical reviews seem negative overall, even though it might fit their interests. Therefore, it is necessary to recommend the right item to the right user at the right location. In this paper, we propose a latent probabilistic generative model called LSARS to mimic the decision-making process of users' check-in activities both in home-town and out-of-town scenarios by adapting to user interest drift and crowd sentiments, which can learn location-aware and sentiment-aware individual interests from the contents of spatial items and user reviews. Due to the sparsity of user activities in out-of-town regions, LSARS is further designed to incorporate the public preferences learned from local users' check-in behaviors. Finally, we deploy LSARS into two practical application scenes: spatial item recommendation and target user discovery. Extensive experiments on two large-scale location-based social networks (LBSNs) datasets show that LSARS achieves better performance than existing state-of-the-art methods."}
{"_id": "0ae944eb32cdce405125f948b2eef2e7c0512fd3", "title": "HF , VHF , and UHF Systems and Technology", "text": "A wide variety of unique systems and components inhabits the HF, VHF, and UHF bands. Many communication systems (ionospheric, meteor-burst, and troposcatter) provide beyond-line-of-sight coverage and operate independently of external infrastructure. Broadcasting and over-the-horizon radar also operate in these bands. Magnetic-resonance imaging uses HF/VHF signals to see the interior of a human body, and RF heating is used in a variety of medical and industrial applications. Receivers typically employ a mix of analog and digital-signal-processing techniques. Systems for these frequencies make use of RF-power MOSFETs, p-i-n diodes, and ferrite-loaded transmission-line transformers."}
{"_id": "b274ec2cd276a93d899ae923932d27293417b26e", "title": "Experimental study of desalination using direct contact membrane distillation : a new approach to flux enhancement", "text": "New membrane distillation configurations and a new membrane module were investigated to improve water desalination. The performances of three hydrophobic microporous membranes were evaluated under vacuum enhanced direct contact membrane distillation (DCMD) with a turbulent flow regime and with a feed water temperature of only 40 \u25e6C. The new configurations provide reduced temperature polarization effects due to better mixing and increased mass transport of water due to higher permeability through the membrane and due to a total pressure gradient across the membrane. Comparison with previously reported results in the literature reveals that mass transport of water vapors is substantially improved with the new approach. The performance of the new configuration was investigated with both NaCl and synthetic sea salt feed solutions. Salt rejection was greater than 99.9% in almost all cases. Salt concentrations in the feed stream had only a minor effect on water flux. The economic aspects of the enhanced DCMD process are briefly discussed and comparisons are made with the reverse osmosis (RO) process for desalination. \u00a9 2003 Elsevier B.V. All rights reserved."}
{"_id": "dd9970b4a06ff90f287d761bf9e9cf09a6483400", "title": "Dynamic hand gesture recognition using hidden Markov models", "text": "Hand gesture has become a powerful means for human-computer interaction. Traditional gesture recognition just consider hand trajectory. For some specific applications, such as virtual reality, more natural gestures are needed, which are complex and contain movement in 3-D space. In this paper, we introduce an HMM-based method to recognize complex single hand gestures. Gesture images are gained by a common web camera. Skin color is used to segment hand area from the image to form a hand image sequence. Then we put forward a state-based spotting algorithm to split continuous gestures. After that, feature extraction is executed on each gesture. Features used in the system contain hand position, velocity, size, and shape. We raise a data aligning algorithm to align feature vector sequences for training. Then an HMM is trained alone for each gesture. The recognition results demonstrate that our methods are effective and accurate."}
{"_id": "6985a4cb132b23778cd74bd9abed8764d103ae59", "title": "Human Action Recognition Using Multi-Velocity STIPs and Motion Energy Orientation Histogram", "text": "Local image features in space-time or spatio-temporal interest points provide compact and abstract representations of patterns in a video sequence. In this paper, we present a novel human action recognition method based on multi-velocity spatio-temporal interest points (MVSTIPs) and a novel local descriptor called motion energy (ME) orientation histogram (MEOH). The MVSTIP detection includes three steps: first, filtering video frames with multi-direction ME filters at different speeds to detect significant changes at the pixel level; thereafter, a surround suppression model is employed to rectify the ME deviation caused by the camera motion and complicated backgrounds (e.g., dynamic texture); finally, MVSTIPs are obtained with local maximum filters at multispeeds. After detection, we develop MEOH descriptor to capture the motion features in local regions around interest points. The performance of the proposed method is evaluated on KTH, Weizmann, and UCF sports human action datasets. Results show that our method is robust to both simple and complex backgrounds and the method is superior to other methods that are based on local features."}
{"_id": "02677b913f430f419ee8a8f2a8860d1af5e86b63", "title": "Pixelwise View Selection for Unstructured Multi-View Stereo", "text": "This work presents a Multi-View Stereo system for robust and efficient dense modeling from unstructured image collections. Our core contributions are the joint estimation of depth and normal information, pixelwise view selection using photometric and geometric priors, and a multi-view geometric consistency term for the simultaneous refinement and image-based depth and normal fusion. Experiments on benchmarks and large-scale Internet photo collections demonstrate stateof-the-art performance in terms of accuracy, completeness, and efficiency. Fig. 1. Reconstructions for Louvre, Todai-ji, Paris Opera, and Astronomical Clock."}
{"_id": "b75f57b742cbfc12fe15790ce27e75ed5f4a9349", "title": "Hybrid Forest: A Concept Drift Aware Data Stream Mining Algorithm", "text": "Nowadays with a growing number of online controlling systems in the organization and also a high demand of monitoring and stats facilities that uses data streams to log and control their subsystems, data stream mining becomes more and more vital. Hoeffding Trees (also called Very Fast Decision Trees a.k.a. VFDT) as a Big Data approach in dealing with the data stream for classification and regression problems showed good performance in handling facing challenges and making the possibility of any-time prediction. Although these methods outperform other methods e.g. Artificial Neural Networks (ANN) and Support Vector Regression (SVR), they suffer from high latency in adapting with new concepts when the statistical distribution of incoming data changes. In this article, we introduced a new algorithm that can detect and handle concept drift phenomenon properly. This algorithms also benefits from fast startup ability which helps systems to be able to predict faster than other algorithms at the beginning of data stream arrival. We also have shown that our approach will overperform other controversial approaches for classification and regression tasks."}
{"_id": "f9bf3d4810d169014474cab501ffb09abf0ac9db", "title": "An induction furnace employing with half bridge series resonant inverter", "text": "In this paper, an induction furnace employing with half bridge series resonant inverter built around IGBT as its switching devices suitable for melting 500 grams of brass is presented. Melting times of 10 minutes were achieved at a power level of approximately 4 kW. The operating frequency is automatically tracked to maintain a small constant lagging phase angle using dual phase lock loop when brass is melted. The coil voltage is controlled to protect the resonant capacitors. The experimental results are presented."}
{"_id": "d539082b5b951fb38a7d6e2a6d1dd73d2d67d366", "title": "A Review of Several Optimization Problems Related to Security in Networked System", "text": "Security issues are becoming more and more important to acti vities of individuals, organizations and the society in our modern ne tworked computerized world. In this chapter we survey a few optimization framewor ks for problems related to security of various networked system such as the int erne or the power grid system."}
{"_id": "fb7dd133450e99abd8449ae1941e5ed4cf267eea", "title": "Improving deep learning performance using random forest HTM cortical learning algorithm", "text": "Deep Learning is an artificial intelligence function that imitates the mechanisms of the human mind in processing records and developing shapes to be used in selection construction. The objective of the paper is to improve the performance of the deep learning using a proposed algorithm called RFHTMC. This proposed algorithm is a merged version from Random Forest and HTM Cortical Learning Algorithm. The methodology for improving the performance of Deep Learning depends on the concept of minimizing the mean absolute percentage error which is an indication of the high performance of the forecast procedure. In addition to the overlap duty cycle which its high percentage is an indication of the speed of the processing operation of the classifier. The outcomes depict that the proposed set of rules reduces the absolute percent errors by using half of the value. And increase the percentage of the overlap duty cycle with 15%."}
{"_id": "036c70cd07e4a2024eb71b2b1e0c1bc872cff107", "title": "Scientific Paper Summarization Using Citation Summary Networks", "text": "Quickly moving to a new area of research is painful for researchers due to the vast amount of scientific literature in each field of study. One possible way to overcome this problem is to summarize a scientific topic. In this paper, we propose a model of summarizing a single article, which can be further used to summarize an entire topic. Our model is based on analyzing others\u2019 viewpoint of the target article\u2019s contributions and the study of its citation summary network using a clustering approach."}
{"_id": "15ae0badc584a287fc51e5de46d1ef51495a2398", "title": "Finding Deceptive Opinion Spam by Any Stretch of the Imagination", "text": "Consumers increasingly rate, review and research products online (Jansen, 2010; Litvin et al., 2008). Consequently, websites containing consumer reviews are becoming targets of opinion spam. While recent work has focused primarily on manually identifiable instances of opinion spam, in this work we study deceptive opinion spam\u2014fictitious opinions that have been deliberately written to sound authentic. Integrating work from psychology and computational linguistics, we develop and compare three approaches to detecting deceptive opinion spam, and ultimately develop a classifier that is nearly 90% accurate on our gold-standard opinion spam dataset. Based on feature analysis of our learned models, we additionally make several theoretical contributions, including revealing a relationship between deceptive opinions and imaginative writing."}
{"_id": "17accbdd4aa3f9fad6af322bc3d7f4d5b648d9cd", "title": "Transductive Inference for Text Classification using Support Vector Machines", "text": "This paper introduces Transductive Support Vector Machines (TSVMs) for text classi cation. While regular Support Vector Machines (SVMs) try to induce a general decision function for a learning task, Transductive Support Vector Machines take into account a particular test set and try to minimize misclassi cations of just those particular examples. The paper presents an analysis of why TSVMs are well suited for text classi cation. These theoretical ndings are supported by experiments on three test collections. The experiments show substantial improvements over inductive methods, especially for small training sets, cutting the number of labeled training examples down to a twentieth on some tasks. This work also proposes an algorithm for training TSVMs e ciently, handling 10,000 examples and more."}
{"_id": "4f1fe957a29a2e422d4034f4510644714d33fb20", "title": "Thumbs up? Sentiment Classification using Machine Learning Techniques", "text": "We consider the problem of classifying documents not by topic, but by overall sentiment, e.g., determining whether a review is positive or negative. Using movie reviews as data, we find that standard machine learning techniques definitively outperform human-produced baselines. However, the three machine learning methods we employed (Naive Bayes, maximum entropy classification, and support vector machines) do not perform as well on sentiment classification as on traditional topic-based categorization. We conclude by examining factors that make the sentiment classification problem more challenging. Publication info: Proceedings of EMNLP 2002, pp. 79\u201386."}
{"_id": "6fd2a76924cbd0619f60d0fcb9b651c6dc17dbe3", "title": "Judging Grammaticality with Tree Substitution Grammar Derivations", "text": "In this paper, we show that local features computed from the derivations of tree substitution grammars \u2014 such as the identify of particular fragments, and a count of large and small fragments \u2014 are useful in binary grammatical classification tasks. Such features outperform n-gram features and various model scores by a wide margin. Although they fall short of the performance of the hand-crafted feature set of Charniak and Johnson (2005) developed for parse tree reranking, they do so with an order of magnitude fewer features. Furthermore, since the TSGs employed are learned in a Bayesian setting, the use of their derivations can be viewed as the automatic discovery of tree patterns useful for classification. On the BLLIP dataset, we achieve an accuracy of 89.9% in discriminating between grammatical text and samples from an n-gram language model."}
{"_id": "88997c8b7daa74cd822d82bf33581c8fa966a347", "title": "Code Review Quality: How Developers See It", "text": "In a large, long-lived project, an effective code review process is key to ensuring the long-term quality of the code base. In this work, we study code review practices of a large, open source project, and we investigate how the developers themselves perceive code review quality. We present a qualitative study that summarizes the results from a survey of 88 Mozilla core developers. The results provide developer insights into how they define review quality, what factors contribute to how they evaluate submitted code, and what challenges they face when performing review tasks. We found that the review quality is primarily associated with the thoroughness of the feedback, the reviewer's familiarity with the code, and the perceived quality of the code itself. Also, we found that while different factors are perceived to contribute to the review quality, reviewers often find it difficult to keep their technical skills up-to-date, manage personal priorities, and mitigate context switching."}
{"_id": "4b03cee450788d27c5944af9978a94b6ac1858bb", "title": "EPOCHS: a platform for agent-based electric power and communication simulation built from commercial off-the-shelf components", "text": "This paper reports on the development and subsequent use of the electric power and communication synchronizing simulator (EPOCHS), a distributed simulation environment. Existing electric power simulation tools accurately model power systems of the past, which were controlled as large regional power pools without significant communication elements. However, as power systems increasingly turn to protection and control systems that make use of computer networks, these simulators are less and less capable of predicting the likely behavior of the resulting power grids. Similarly, the tools used to evaluate new communication protocols and systems have been developed without attention to the roles they might play in power scenarios. EPOCHS integrates multiple research and commercial off-the-shelf systems to bridge the gap."}
{"_id": "b6b8268fd9f6263d3b2c24059031f383e93e951f", "title": "Closed-form Solution for IMU based LSD-SLAM Point Cloud Conversion into the Scaled 3D World Environment", "text": "SLAM is a very popular research stream in computer vision and robotics nowadays. For more effective SLAM implementation it is necessary to have reliable information about the environment, also the data should be aligned and scaled according to the real world coordinate system. Monocular SLAM research is an attractive sub-stream, because of the low equipment cost, size and weight. In this paper we present a way to build a conversion from LSD-SLAM coordinate space to the real world coordinates using a true metric scale with IMU sensor data implementation. The causes of differences between the real and calculated spaces are explained and the possibility of conversions between the spaces is proved. Additionally, a closed-form solution for inter space transformation calculation is presented. The synthetic method of generating high level accurate and well controlled input data for the LSD-SLAM algorithm is presented. Finally, the reconstructed 3D environment representation is delivered as an output of the implemented conversion."}
{"_id": "3166c2fa13bfe125fc1477bba5769da4f04864ff", "title": "Scalable Iterative Classification for Sanitizing Large-Scale Datasets", "text": "Cheap ubiquitous computing enables the collection of massive amounts of personal data in a wide variety of domains. Many organizations aim to share such data while obscuring features that could disclose personally identifiable information. Much of this data exhibits weak structure (e.g., text), such that machine learning approaches have been developed to detect and remove identifiers from it. While learning is never perfect, and relying on such approaches to sanitize data can leak sensitive information, a small risk is often acceptable. Our goal is to balance the value of published data and the risk of an adversary discovering leaked identifiers. We model data sanitization as a game between 1) a publisher who chooses a set of classifiers to apply to data and publishes only instances predicted as non-sensitive and 2) an attacker who combines machine learning and manual inspection to uncover leaked identifying information. We introduce a fast iterative greedy algorithm for the publisher that ensures a low utility for a resource-limited adversary. Moreover, using five text data sets we illustrate that our algorithm leaves virtually no automatically identifiable sensitive instances for a state-of-the-art learning algorithm, while sharing over 93 percent of the original data, and completes after at most five iterations."}
{"_id": "7b86ca4f90a6f2d62247158b64f670a9df012d7c", "title": "The diverse CB1 and CB2 receptor pharmacology of three plant cannabinoids: delta9-tetrahydrocannabinol, cannabidiol and delta9-tetrahydrocannabivarin.", "text": "Cannabis sativa is the source of a unique set of compounds known collectively as plant cannabinoids or phytocannabinoids. This review focuses on the manner with which three of these compounds, (-)-trans-delta9-tetrahydrocannabinol (delta9-THC), (-)-cannabidiol (CBD) and (-)-trans-delta9-tetrahydrocannabivarin (delta9-THCV), interact with cannabinoid CB1 and CB2 receptors. Delta9-THC, the main psychotropic constituent of cannabis, is a CB1 and CB2 receptor partial agonist and in line with classical pharmacology, the responses it elicits appear to be strongly influenced both by the expression level and signalling efficiency of cannabinoid receptors and by ongoing endogenous cannabinoid release. CBD displays unexpectedly high potency as an antagonist of CB1/CB2 receptor agonists in CB1- and CB2-expressing cells or tissues, the manner with which it interacts with CB2 receptors providing a possible explanation for its ability to inhibit evoked immune cell migration. Delta9-THCV behaves as a potent CB2 receptor partial agonist in vitro. In contrast, it antagonizes cannabinoid receptor agonists in CB1-expressing tissues. This it does with relatively high potency and in a manner that is both tissue and ligand dependent. Delta9-THCV also interacts with CB1 receptors when administered in vivo, behaving either as a CB1 antagonist or, at higher doses, as a CB1 receptor agonist. Brief mention is also made in this review, first of the production by delta9-THC of pharmacodynamic tolerance, second of current knowledge about the extent to which delta9-THC, CBD and delta9-THCV interact with pharmacological targets other than CB1 or CB2 receptors, and third of actual and potential therapeutic applications for each of these cannabinoids."}
{"_id": "1062a70701476fe243b8c66cd2991755e93982d5", "title": "Design of an IGBT-based LCL-Resonant Inverter for High-Frequency Induction Heating", "text": "A power electronic inverter is developed for a high-frequency induction heating application. The application requires up to 160kW of power at a frequency of 1OOkHz. This power-frequency product represents a significant challenge for today's power semiconductor technology. Voltage source and current source inverters both using ZCS or ZVS are analyzed and compared. To attain the level of performance required, an LCL loadresonant topology is selected to enable ZVS close to the zero current crossing of the load. This mode of softswitching is suitable to greatly reduce the IGBT losses. Inverter control is achieved via a Phase Locked Loop (PLL). This paper presents the circuit design, modeling and control considerations."}
{"_id": "e6049b61e7e963a206a3a8e3d6447b672a794370", "title": "Parents with doubts about vaccines: which vaccines and reasons why.", "text": "OBJECTIVES\nThe goals were (1) to obtain national estimates of the proportions of parents with indicators of vaccine doubt, (2) to identify factors associated with those parents, compared with parents reporting no vaccine doubt indicators, (3) to identify the specific vaccines that prompted doubt and the reasons why, and (4) to describe the main reasons parents changed their minds about delaying or refusing a vaccine for their child.\n\n\nMETHODS\nData were from the National Immunization Survey (2003-2004). Groups included parents who ever got a vaccination for their child although they were not sure it was the best thing to do (\"unsure\"), delayed a vaccination for their child (\"delayed\"), or decided not to have their child get a vaccination (\"refused\").\n\n\nRESULTS\nA total of 3924 interviews were completed. Response rates were 57.9% in 2003 and 65.0% in 2004. Twenty-eight percent of parents responded yes to ever experiencing >or=1 of the outcome measures listed above. In separate analyses for each outcome measure, vaccine safety concern was a predictor for unsure, refused, and delayed parents. The largest proportions of unsure and refused parents chose varicella vaccine as the vaccine prompting their concern, whereas delayed parents most often reported \"not a specific vaccine\" as the vaccine prompting their concern. Most parents who delayed vaccines for their child did so for reasons related to their child's illness, unlike the unsure and refused parents. The largest proportion of parents who changed their minds about delaying or not getting a vaccination for their child listed \"information or assurances from health care provider\" as the main reason.\n\n\nCONCLUSIONS\nParents who exhibit doubts about immunizations are not all the same. This research suggests encouraging children's health care providers to solicit questions about vaccines, to establish a trusting relationship, and to provide appropriate educational materials to parents."}
{"_id": "4c5bddad70d30b0a57a5a8560a2406aabc65cfc0", "title": "A Review on Comparative analysis of different clustering and Decision Tree for Synthesized Data Mining Algorithm", "text": "Web mining is the sub category or application of data mining techniques to extract knowledge from Web data. With the advent of new advancements in technology the rapid use of new algorithms has been increased in the market. A data mining is one of the fast growing research field which is used in a wide areas of applications. The data mining consists of classification algorithms, association algorithms and searching algorithms. Different classification and clustering algorithm are used for the synthetic datasets. In this paper various techniques that are based on clustering and decision tree for synthesized data mining are discussed."}
{"_id": "8bec0be257cd03a3cd1aba6627d533cf212dfbba", "title": "Poor Man's Methadone: A Case Report of Loperamide Toxicity.", "text": "Loperamide, a common over-the-counter antidiarrheal drug and opioid derivative, is formulated to act upon intestinal opioid receptors. However, at high doses, loperamide crosses the blood-brain barrier and reaches central opioid receptors in the brain, leading to central opiate effects including euphoria and respiratory depression. We report the case of a young man found dead in his residence with a known history of drug abuse. At autopsy, the only significant findings were a distended bladder and bloody oral purge. Drug screening found nontoxic levels of alprazolam, fluoxetine, and marijuana metabolites. Liquid chromatography time-of-flight mass spectrometry found an unusual set of split isotope peaks consistent with chlorine. On the basis of autopsy and toxicological findings, loperamide toxicity was suspected because of its opioid properties and molecular formula containing chlorine. A sample of loperamide was analyzed by liquid chromatography time-of-flight mass spectrometry, resulting in a matching mass and retention time to the decedent's sample. Subsequently, quantitative testing detected 63 ng/mL of loperamide or more than 6 times of therapeutic peak concentration. Cause of death was determined as \"toxic effects of loperamide with fluoxetine and alprazolam.\" Because of its opioid effects and easy accessibility, loperamide is known as \"poor man's methadone\" and may go undetected at medical and forensic drug screening."}
{"_id": "3e75781a18158f6643115ba825034f9f95b4f13b", "title": "Feature Extraction and Classification for Automatic Speaker Recognition System \u2013 A Review", "text": "Automatic speaker recognition (ASR) has found immense applications in the industries like banking, security, forensics etc. for its advantages such as easy implementation, more secure, more user friendly. To have a good recognition rate is a pre-requisite for any ASR system which can be achieved by making an optimal choice among the available techniques for ASR. In this paper, different techniques for the system have been discussed such as MFCC, LPCC, LPC, Wavelet decomposition for feature extraction and VQ, GMM, SVM, DTW, HMM for feature classification. All these techniques are also compared with each other to find out best suitable candidate among them. On the basis of the comparison done, MFCC has upper edge over other techniques for feature extraction as it is more consistent with human hearing. GMM comes out to be the best among classification models due to its good classification accuracy and less memory usage. Keywords\u2014 Automatic Speaker Recognition, Mel Frequency Cepstral Coefficients (MFCC), Linear Predictive Cepstral Coefficients, Gaussian Mixture Model ( GMM), Vector Quantization ( VQ), Dynamic Time Warping (DTW), Hidden Markov Model ( HMM), Wavelet decomposition"}
{"_id": "e406c7d0bf67ea13dc9553fd4514ceaa3a61f6df", "title": "Learning to Identify Review Spam", "text": "In the past few years, sentiment analysis and opinion mining becomes a popular and important task. These studies all assume that their opinion resources are real and trustful. However, they may encounter the faked opinion or opinion spam problem. In this paper, we study this issue in the context of our product review mining system. On product review site, people may write faked reviews, called review spam, to promote their products, or defame their competitors\u2019 products. It is important to identify and filter out the review spam. Previous work only focuses on some heuristic rules, such as helpfulness voting, or rating deviation, which limits the performance of this task. In this paper, we exploit machine learning methods to identify review spam. Toward the end, we manually build a spam collection from our crawled reviews. We first analyze the effect of various features in spam identification. We also observe that the review spammer consistently writes spam. This provides us another view to identify review spam: we can identify if the author of the review is spammer. Based on this observation, we provide a twoview semi-supervised method, co-training, to exploit the large amount of unlabeled data. The experiment results show that our proposed method is effective. Our designed machine learning methods achieve significant improvements in comparison to the heuristic baselines."}
{"_id": "4d16fb4c3b17f60813757df4daf18aac71f01e8b", "title": "Enhanced cardiac perception is associated with benefits in decision-making.", "text": "In the present study we provide the first empirical evidence that viscero-sensory feedback from an internal organ is associated with decision-making processes. Participants with accurate vs. poor perception of their heart activity were compared with regard to their performance in the Iowa Gambling Task. During this task, participants have to choose between four card decks. Decks A and B yield high gains and high losses, and if played continuously, result in net loss. In contrast, decks C and D yield small gains and also small losses, but result in net profit if they are selected continuously. Accordingly, participants have to learn to avoid the net loss options in favor of the net gain options. In our study, participants with good cardiac perception chose significantly more of the net gain and fewer of the net loss options. Our findings document the substantial role of visceral feedback in decision-making processes in complex situations."}
{"_id": "be5808e41e079ccee7a95c9950796032eee7e8d1", "title": "Novel blood pressure estimation method using single photoplethysmography feature", "text": "Continuous blood pressure (BP) monitoring has a significant meaning to the prevention and early diagnosis of cardiovascular disease. However, existing continuous BP monitoring approaches, especially cuff-less BP monitoring approaches, are all contraptions which complex and huge computation required. For example, for the most sophisticated cuff-less BP monitoring method using pulse transit time (PTT), the simultaneous record of photoplethysmography (PPG) signal and electrocardiography (ECG) are required, and various measurement of characteristic points are needed. These issues hindered widely application of cuff less BP measurement in the wearable devices. In this study, a novel BP estimation method using single PPG signal feature was proposed and its performance in BP estimation was also tested. The results showed that the new approach proposed in this study has a mean error \u22120.91 \u00b1 3.84 mmHg for SBP estimation and \u22120.36 \u00b1 3.36 mmHg for DBP estimation respectively. This approach performed better than the traditional PTT based BP estimation, which mean error for SBP estimation was \u22120.31 \u00b1 4.78 mmHg, and for DBP estimation was \u22120.18 \u00b1 4.32 mmHg. Further investigation revealed that this new BP estimation approach only required measurement of one characteristic point, reducing much computation when implementing. These results demonstrated that this new approach might be more suitable implemented in the wearable BP monitoring devices."}
{"_id": "b903c556c9f55916c53ad7a9446b326176b4cf16", "title": "Wideband printed rectangular monopole antenna for circularly polarization", "text": "In this paper, a printed monopole antenna for circular polarization has been proposed. The relationship between the geometry of the antenna and the axial ratio is investigated. A wideband circularly polarized antenna is designed according to the parametric studies. The simulated bandwidth of 3dB-axial ratio is from 2.06GHz to 6.04GHz (98%)."}
{"_id": "79c8850c8be533c241a61a80883e6ed1d559a229", "title": "An Artificial Neural Network based Intrusion Detection System and Classification of Attacks", "text": "Network security is becoming an issue of paramount importance in the information technology era. Nowadays with the dramatic growth of communication and computer networks, security has become a critical subject for computer system. Intrusion detection is the art of detecting computer abuse and any attempt to break the networks. Intrusion detection system is an effective security tool that helps to prevent unauthorized access to network resources by analyzing the network traffic. Different algorithms, methods and applications are created and implemented to solve the problem of detecting the attacks in intrusion detection systems. Most methods detect attacks and categorize in two groups, normal or threat. One of the most promising areas of research in the area of Intrusion Detection deals with the applications of the Artificial Intelligence (AI) techniques. This proposed system presents a new approach of intrusion detection system based on artificial neural network. Multi Layer Percepron (MLP) architecture is used for Intrusion Detection System. The performance and evaluations are performed by using the set of benchmark data from a KDD (Knowledge discovery in Database) dataset. The proposed system detects the attacks and classifies them in six groups."}
{"_id": "2f056defdb9abf61acd7f8b49e50062cd62092ce", "title": "What can quantum theory bring to information retrieval", "text": "The probabilistic formalism of quantum physics is said to provide a sound basis for building a principled information retrieval framework. Such a framework can be based on the notion of information need vector spaces where events, such as document relevance or observed user interactions, correspond to subspaces. As in quantum theory, a probability distribution over these subspaces is defined through weighted sets of state vectors (density operators), and used to represent the current view of the retrieval system on the user information need. Tensor spaces can be used to capture different aspects of information needs. Our evaluation shows that the framework can lead to acceptable performance in an ad-hoc retrieval task. Going beyond this, we discuss the potential of the framework for three active challenges in information retrieval, namely, interaction, novelty and diversity."}
{"_id": "348720586f503d69f6cd64be06d9191a7da0e137", "title": "The Formulation of Parameters for Type Design of Indian Script Based on Calligraphic Studies", "text": "A number of parameters were formulated for better analysing the anatomy of Indic letterforms. This methodology contributes to the understanding of the intricacies of type design of complex Indian scripts."}
{"_id": "644cda71ed69a42e1e819a753ffa5cb5c6ccbaec", "title": "Using association rules to guide a search for best fitting transfer models of student learning", "text": "We say a transfer model is a mapping between the questions in an intelligent tutoring system and the knowledge components (i.e., skills, strategies, declarative knowledge, etc) needed to answer a question correctly. [JKT00] showed how you could take advantage of 1) the Power Law Of Learning, 2) an existing transfer model, and 3) set of tutorial log files, to learn a function (using logistic regression) that will predict when a student will get a question correct. In the main conference proceeding [CHK2004] give an example of using this technique for transfer model selection. Koedinger and Junker [KJ99] also conceptualized a search space where each state is a new transfer model. The operators in this search space split, add or merge knowledge components based upon factors that are tagged to questions. Koedinger and Junker called this method learning factors analysis emphasizing that this method can be used to study learning. Unfortunately, the search space is huge and searching for good fitting transfer models is exponential. The main goal of this paper is show a technique that will make searching for transfer models more efficient. Our procedure implements a search method using association rules as a means of guiding the search. The association rules are mined from a dataset derived from student-tutor interaction logs. The association rules found in the mining process determine what operations to perform on the current transfer model. We report on the speed up achieved. Being able to find good transfer models quicker will help intelligent tutor system builders as well as cognitive science researchers better assess what makes certain problems hard and other problems easy for students."}
{"_id": "48ceb89d2ccf24dc8774d702bdedee60e3edc935", "title": "A Survey of In-Band Full-Duplex Transmission: From the Perspective of PHY and MAC Layers", "text": "In-band full-duplex (IBFD) transmission represents an attractive option for increasing the throughput of wireless communication systems. A key challenge for IBFD transmission is reducing self-interference. Fortunately, the power associated with residual self-interference can be effectively canceled for feasible IBFD transmission with combinations of various advanced passive, analog, and digital self-interference cancellation schemes. In this survey paper, we first review the basic concepts of IBFD transmission with shared and separated antennas and advanced self-interference cancellation schemes. Furthermore, we also discuss the effects of IBFD transmission on system performance in various networks such as bidirectional, relay, and cellular topology networks. This survey covers a wide array of technologies that have been proposed in the literature as feasible for IBFD transmission and evaluates the performance of the IBFD systems compared to conventional half-duplex transmission in connection with theoretical aspects such as the achievable sum rate, network capacity, system reliability, and so on. We also discuss the research challenges and opportunities associated with the design and analysis of IBFD systems in a variety of network topologies. This work also explores the development of MAC protocols for an IBFD system in both infrastructure-based and ad hoc networks. Finally, we conclude our survey by reviewing the advantages of IBFD transmission when applied for different purposes, such as spectrum sensing, network secrecy, and wireless power transfer."}
{"_id": "32bbef1c8bb76636839f27c016080744f5749317", "title": "KP-Miner: A keyphrase extraction system for English and Arabic documents", "text": "Automatic key phrase extraction has many important applications including but not limited to summarization, cataloging/indexing, feature extraction for clustering and classification, and data mining. This paper presents the KP-Miner system, and demonstrates through experimentation and comparison with existing systems that the it is effective in extracting keyphrases from both English and Arabic documents of varied length. Unlike other existing keyphrase extraction systems, the KP-Miner system has the advantage of being configurable as the rules and heuristics adopted by the system are related to the general nature of documents and keyphrase. This implies that the users of this system can use their understanding of the document(s) being input into the system, to fine tune it to their particular needs."}
{"_id": "313eceae658fb6c132a448c86f3abcc37931df09", "title": "Docker Cluster Management for the Cloud - Survey Results and Own Solution", "text": "Docker provides a good basis to run composite applications in the cloud, especially if those are not cloud-aware, or cloud-native. However, Docker concentrates on managing containers on one host, but SaaS providers need a container management solution for multiple hosts. Therefore, a number of tools emerged that claim to solve the problem. This paper classifies the solutions, maps them to requirements from a case study and identifies gaps and integration requirements. We close some of these gaps with our own integration components and tool enhancements, resulting in the currently most complete management suite."}
{"_id": "a2b7dddd3a980da4e9d7496dec6121d5f36f56e7", "title": "Technology adoption: A conjoint analysis of consumers' preference on future online banking services", "text": "The importance of service delivery technology and online service adoption and usage in the banking industry has received an increased discussion in the literature in recent years. Owing to the fact that Strong online banking services are important drivers for bank performance and customer service delivery; several studies have been carried out on online banking service adoption or acceptance where services are already deployed and on the factors that influence customers' adoption and use or intention to use those services. However, despite the increasing discussion in the literatures, no attempt has been made to look at consumers' preference in terms of future online banking service adoption. This study used conjoint analysis and stated preference methods with discrete choice model to analyze the technology adoption pattern regarding consumers' preference for potential future online banking services in the Nigerian banking industry. The result revealed that to increase efficiency and strengthen competitiveness, banks need to promote smart and practical branded services especially self-services at the same time promote a universal adoption of e-banking system services that add entertainment or extra convenience to customers such as ease of usage including digital wallet, real-time interaction (video banking), ATMs integrated with smart phones, website customization, biometric services, and digital currency. These services can contribute to an increasing adoption of online services. & 2015 Elsevier Ltd. All rights reserved."}
{"_id": "722e2f7894a1b62e0ab09913ce9b98654733d98e", "title": "Information overload and the message dynamics of online interaction spaces: a theoretical model and empirical exploration", "text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles."}
{"_id": "c7cf039d918ceeea88e99be3b522b44d1bc132f0", "title": "Decoding actions at different levels of abstraction.", "text": "Brain regions that mediate action understanding must contain representations that are action specific and at the same time tolerate a wide range of perceptual variance. Whereas progress has been made in understanding such generalization mechanisms in the object domain, the neural mechanisms to conceptualize actions remain unknown. In particular, there is ongoing dissent between motor-centric and cognitive accounts whether premotor cortex or brain regions in closer relation to perceptual systems, i.e., lateral occipitotemporal cortex, contain neural populations with such mapping properties. To date, it is unclear to which degree action-specific representations in these brain regions generalize from concrete action instantiations to abstract action concepts. However, such information would be crucial to differentiate between motor and cognitive theories. Using ROI-based and searchlight-based fMRI multivoxel pattern decoding, we sought brain regions in human cortex that manage the balancing act between specificity and generality. We investigated a concrete level that distinguishes actions based on perceptual features (e.g., opening vs closing a specific bottle), an intermediate level that generalizes across movement kinematics and specific objects involved in the action (e.g., opening different bottles with cork or screw cap), and an abstract level that additionally generalizes across object category (e.g., opening bottles or boxes). We demonstrate that the inferior parietal and occipitotemporal cortex code actions at abstract levels whereas the premotor cortex codes actions at the concrete level only. Hence, occipitotemporal, but not premotor, regions fulfill the necessary criteria for action understanding. This result is compatible with cognitive theories but strongly undermines motor theories of action understanding."}
{"_id": "054bca7f2fa00c3d55f0e028b37513bebb9c4ea5", "title": "TRESOR Runs Encryption Securely Outside RAM", "text": "Current disk encryption techniques store necessary keys in RAM and are therefore susceptible to attacks that target volatile memory, such as Firewire and cold boot attacks. We present TRESOR, a Linux kernel patch that implements the AES encryption algorithm and its key management solely on the microprocessor. Instead of using RAM, TRESOR ensures that all encryption states as well as the secret key and any part of it are only stored in processor registers throughout the operational time of the system, thereby substantially increasing its security. Our solution takes advantage of Intel\u2019s new AES-NI instruction set and exploits the x86 debug registers in a non-standard way, namely as cryptographic key storage. TRESOR is compatible with all modern Linux distributions, and its performance is on a par with that of standard AES implementations."}
{"_id": "0fd2467de521b52805eea902edc9587c87818276", "title": "Discoverer: Automatic Protocol Reverse Engineering from Network Traces", "text": "Application-level protocol specifications are useful for many security applications, including intrusion prevention and detection that performs deep packet inspection and traffic normalization, and penetration testing that generates network inputs to an application to uncover potential vulnerabilities. However, current practice in deriving protocol specifications is mostly manual. In this paper, we present Discoverer, a tool for automatically reverse engineering the protocol message formats of an application from its network trace. A key property of Discoverer is that it operates in a protocol-independent fashion by inferring protocol idioms commonly seen in message formats of many application-level protocols. We evaluated the efficacy of Discoverer over one text protocol (HTTP) and two binary protocols (RPC and CIFS/SMB) by comparing our inferred formats with true formats obtained from Ethereal [5]. For all three protocols, more than 90% of our inferred formats correspond to exactly one true format; one true format is reflected in five inferred formats on average; our inferred formats cover over 95% of messages, which belong to 30-40% of true formats observed in the trace."}
{"_id": "30dd12a894eff2a488c83f565bc287b4dd03c0cc", "title": "Howard: A Dynamic Excavator for Reverse Engineering Data Structures", "text": "Even the most advanced reverse engineering techniques and products are weak in recovering data structures in stripped binaries\u2014binaries without symbol tables. Unfortunately, forensics and reverse engineering without data structures is exceedingly hard. We present a new solution, known as Howard, to extract data structures from C binaries without any need for symbol tables. Our results are significantly more accurate than those of previous methods \u2014 sufficiently so to allow us to generate our own (partial) symbol tables without access to source code. Thus, debugging such binaries becomes feasible and reverse engineering becomes simpler. Also, we show that we can protect existing binaries from popular memory corruption attacks, without access to source code. Unlike most existing tools, our system uses dynamic analysis (on a QEMU-based emulator) and detects data structures by tracking how a program uses memory."}
{"_id": "36f05c011a7fdb74b0380e41ceada8632ba35f24", "title": "Transparent Runtime Shadow Stack : Protection against malicious return address modifications", "text": "Exploitation of buffer overflow vulnerabilities constitutes a significant portion of security attacks in computer systems. One of the most common types of buffer overflow attacks is the hijacking of the program counter by overwriting function return addresses in the process\u2019 stack so as to redirect the program\u2019s control flow to some malicious code injected into the process\u2019 memory. Previous solutions to this problem are based either on hardware or the compiler. The former requires special hardware while the latter requires the source code of the software. In this paper we introduce the use of a T ransparent RUntime Shadow Stack (TRUSS) to protect against function return address modification. Our proposed scheme is built on top of DynamoRIO, a dynamic binary rewriting framework. DynamoRIO is implemented on both Windows and Linux. Hence, our scheme is able to protect applications on both operating systems. We have successfully tested our implementation on the SPECINT 2000 benchmark programs on both Windows and Linux, John Wilander\u2019s \u201cDynamic testbed for twenty buffer overflow attacks\u201d as well as Microsoft Access, Powerpoint and Word 2002. This paper will discuss the implementation details of our scheme as well as provide a performance evaluation. The latter shows that TRUSS is able to operate with an average overhead of about 20% to 50% which we believe is acceptable."}
{"_id": "a21921eb0c5600562c8dad8e2bc40fff1ec8906b", "title": "Automatic Reverse Engineering of Data Structures from Binary Execution", "text": "With only the binary executable of a program, it is useful to discover the program\u2019s data structures and infer their syntactic and semantic definitions. Such knowledge is highly valuable in a variety of security and forensic applications. Although there exist efforts in program data structure inference, the existing solutions are not suitable for our targeted application scenarios. In this paper, we propose a reverse engineering technique to automatically reveal program data structures from binaries. Our technique, called REWARDS, is based on dynamic analysis. More specifically, each memory location accessed by the program is tagged with a timestamped type attribute. Following the program\u2019s runtime data flow, this attribute is propagated to other memory locations and registers that share the same type. During the propagation, a variable\u2019s type gets resolved if it is involved in a type-revealing execution point or \u201ctype sink\u201d. More importantly, besides the forward type propagation, REWARDS involves a backward type resolution procedure where the types of some previously accessed variables get recursively resolved starting from a type sink. This procedure is constrained by the timestamps of relevant memory locations to disambiguate variables reusing the same memory location. In addition, REWARDS is able to reconstruct in-memory data structure layout based on the type information derived. We demonstrate that REWARDS provides unique benefits to two applications: memory image forensics and binary fuzzing for vulnerability discovery."}
{"_id": "e831d694790a2cb74db0d1fb90dc9cf623b4c47c", "title": "Diagnosis and debugging of programmable logic controller control programs by neural networks", "text": "Ladder logic diagram (LLD) as the interfacing programming language of programmable logic controllers (PLCs) is utilized in modern discrete event control systems. However, LLD is hard to debug and maintain in practice. This is due to many factors such as non-structured nature of LLD, the LLD programmers' background, and the huge sizes of real world LLD. In this paper, we introduce a recurrent neural network (RNN) based technique for PLC program diagnosis. A manufacturing control system example has been presented to illustrate the applicability of the proposed algorithm. This method could be very advantageous in reducing the complexity in PLC control programs diagnosis because of the ease of use of the RNN compared to debugging the LLD code."}
{"_id": "34a3e79849443ee7fffac386f8c003040d04180c", "title": "Support Vector Machine-recursive feature elimination for the diagnosis of Parkinson disease based on speech analysis", "text": "Parkinson disease has become a serious problem in the old people. There is no precise method to diagnosis Parkinson disease now. Considering the significance and difficulty of recognizing the Parkinson disease, the measurement of samples' voices is regard as one of the best non-invasive ways to find the real patient. Support Vector Machine is one of the most effective tools to classify in machine learning, and it has been applied successfully in many areas. In this paper, we implement the SVM-recursive feature elimination which has not been used before for selecting the subset including the most important features for classification from the original features. We also implement SVM with PCA for selecting the principle components for diagnosis PD set with 22 features in order to compare. At last, we discuss the relationship between SVM-RFE and SVM with PCA specially in the experiment. The experiment illustrates that the SVM-RFE has the better performance than other methods in general."}
{"_id": "366f70bfa316afc5ae56139cacdbd65563b7eb59", "title": "Towards efficient content-aware search over encrypted outsourced data in cloud", "text": "With the increasing adoption of cloud computing, a growing number of users outsource their datasets into cloud. The datasets usually are encrypted before outsourcing to preserve the privacy. However, the common practice of encryption makes the effective utilization difficult, for example, search the given keywords in the encrypted datasets. Many schemes are proposed to make encrypted data searchable based on keywords. However, keyword-based search schemes ignore the semantic representation information of users retrieval, and cannot completely meet with users search intention. Therefore, how to design a content-based search scheme and make semantic search more effective and context-aware is a difficult challenge. In this paper, we proposed an innovative semantic search scheme based on the concept hierarchy and the semantic relationship between concepts in the encrypted datasets. More specifically, our scheme first indexes the documents and builds trapdoor based on the concept hierarchy. To further improve the search efficiency, we utilize a tree-based index structure to organize all the document index vectors. Our experiment results based on the real world datasets show the scheme is more efficient than previous scheme. We also study the threat model of our approach and prove it does not introduce any security risk."}
{"_id": "09637c11a69ef063124a2518e2262f8ad7a1aa87", "title": "A new multiclass SVM algorithm and its application to crowd density analysis using LBP features", "text": "Crowd density analysis is a crucial component in visual surveillance for security monitoring. In this paper, we propose to estimate crowd density at patch level, where the size of each patch varies in such way to compensate the effects of perspective distortions. The main contribution of this paper is two-fold: First, we propose to learn a discriminant subspace of the high-dimensional Local Binary Pattern (LBP) instead of using raw LBP feature vector. Second, an alternative algorithm for multiclass SVM based on relevance scores is proposed. The effectiveness of the proposed approach is evaluated on PETS dataset, and the results demonstrate the effect of low-dimensional compact representation of LBP on the classification accuracy. Also, the performance of the proposed multiclass SVM algorithm is compared to other frequently used algorithms for multi-classification problem and the proposed algorithm gives good results while reducing the complexity of the classification."}
{"_id": "ace8904509fa60c462632a33d85cabf3a4a0f359", "title": "A geometric view of optimal transportation and generative model", "text": "In this work, we show the intrinsic relations between optimal transportation and convex geometry, especially the variational approach to solve Alexandrov problem: constructing a convex polytope with prescribed face normals and volumes. This leads to a geometric interpretation to generative models, and leads to a novel framework for generative models. By using the optimal transportation view of GAN model, we show that the discriminator computes the Kantorovich potential, the generator calculates the transportation map. For a large class of transportation costs, the Kantorovich potential can give the optimal transportation map by a close-form formula. Therefore, it is sufficient to solely optimize the discriminator. This shows the adversarial competition can be avoided, and the computational architecture can be simplified. Preliminary experimental results show the geometric method outperforms WGAN for approximating probability measures with multiple clusters in low dimensional space."}
{"_id": "54d7ff1b20cb0f0596cc23769e5f6fe3045570a8", "title": "The challenges of SVM optimization using Adaboost on a phoneme recognition problem", "text": "The use of digital technology is growing at a very fast pace which led to the emergence of systems based on the cognitive infocommunications. The expansion of this sector impose the use of combining methods in order to ensure the robustness in cognitive systems."}
{"_id": "be48c3ac5eb233155158a3b8defed0e083cc2381", "title": "Dry electrodes for electrocardiography.", "text": "Patient biopotentials are usually measured with conventional disposable Ag/AgCl electrodes. These electrodes provide excellent signal quality but are irritating for long-term use. Skin preparation is usually required prior to the application of electrodes such as shaving and cleansing with alcohol. To overcome these difficulties, researchers and caregivers seek alternative electrodes that would be acceptable in clinical and research environments. Dry electrodes that operate without gel, adhesive or even skin preparation have been studied for many decades. They are used in research applications, but they have yet to achieve acceptance for medical use. So far, a complete comparison and evaluation of dry electrodes is not well described in the literature. This work compares dry electrodes for biomedical use and physiological research, and reviews some novel systems developed for cardiac monitoring. Lastly, the paper provides suggestions to develop a dry-electrode-based system for mobile and long-term cardiac monitoring applications."}
{"_id": "062ebea1c4861cf90f18c369b3d729b79b076f8f", "title": "Object Category Understanding via Eye Fixations on Freehand Sketches", "text": "The study of eye gaze fixations on photographic images is an active research area. In contrast, the image sub-category of freehand sketches has not received as much attention for such studies. In this paper, we analyze the results of a free-viewing gaze fixation study conducted on 3904 freehand sketches distributed across 160 object categories. Our analysis shows that fixation sequences exhibit marked consistency within a sketch, across sketches of a category and even across suitably grouped sets of categories. This multi-level consistency is remarkable given the variability in depiction and extreme image content sparsity that characterizes hand-drawn object sketches. In this paper, we show that the multi-level consistency in the fixation data can be exploited to 1) predict a test sketch\u2019s category given only its fixation sequence and 2) build a computational model which predicts part-labels underlying fixations on objects. We hope that our findings motivate the community to deem sketch-like representations worthy of gaze-based studies vis-a-vis photographic images."}
{"_id": "2186e013b7cebb5c3efec31d6339198885962e96", "title": "A rare case of mesh infection 3 years after a laparoscopic totally extraperitoneal (TEP) inguinal hernia repair.", "text": "Late complications after a laparoscopic inguinal hernia repair are extremely rare and have only recently entered into the literature. One such late complication is mesh infection, of which there have been a handful of cases reported in the literature. Mesh infections occurring many years after inguinal hernia repairs are not only of significance because they are not well documented in the literature, and the pathogenesis and risk factors contributing to their development are not well understood. This report details a rare case of mesh infection 3 years after a laparoscopic totally extraperitoneal inguinal hernia repair, describes our management of the condition, highlights the current options for management, and attempts to define its pathophysiology."}
{"_id": "d1a80c5b53bf4882090d1e1c31da134f1b1ffd48", "title": "Visual domain-specific modelling : Benefits and experiences of using metaCASE tools", "text": "1 Introduction Jackson (Jackson 95) recognises the vital difference between an application's domain and its code: two different worlds, each with its own language, experts, ways of thinking etc. A finished application forms the intersection between these worlds. The difficult job of the software engineer is to build a bridge between these worlds, at the same time as solving problems in both worlds."}
{"_id": "5726690fc48158f4806ca25b207130544ecee95d", "title": "Digital Image Forgeries and Passive Image Authentication Techniques: A Survey", "text": "Digital images are present everywhere on magazine covers, in newspapers, in courtrooms as evidences, and all over the Internet signifying one of the major ways for communication nowadays. The trustworthiness of digital images has been questioned, because of the ease with which these images can be manipulated in both its origin & content as a result of tremendous growth of digital image manipulation tools. Digital image forensics is the latest research field which intends to authorize the genuineness of images. This survey attempts to provide an overview of various digital image forgeries and the state of art passive methods to authenticate digital images."}
{"_id": "28fadbb09e3bf36e58660b30e626d870de43785a", "title": "A systematic review of neurobiological and clinical features of mindfulness meditations.", "text": "BACKGROUND\nMindfulness meditation (MM) practices constitute an important group of meditative practices that have received growing attention. The aim of the present paper was to systematically review current evidence on the neurobiological changes and clinical benefits related to MM practice in psychiatric disorders, in physical illnesses and in healthy subjects.\n\n\nMETHOD\nA literature search was undertaken using Medline, ISI Web of Knowledge, the Cochrane collaboration database and references of retrieved articles. Controlled and cross-sectional studies with controls published in English up to November 2008 were included.\n\n\nRESULTS\nElectroencephalographic (EEG) studies have revealed a significant increase in alpha and theta activity during meditation. Neuroimaging studies showed that MM practice activates the prefrontal cortex (PFC) and the anterior cingulate cortex (ACC) and that long-term meditation practice is associated with an enhancement of cerebral areas related to attention. From a clinical viewpoint, Mindfulness-Based Stress Reduction (MBSR) has shown efficacy for many psychiatric and physical conditions and also for healthy subjects, Mindfulness-Based Cognitive Therapy (MBCT) is mainly efficacious in reducing relapses of depression in patients with three or more episodes, Zen meditation significantly reduces blood pressure and Vipassana meditation shows efficacy in reducing alcohol and substance abuse in prisoners. However, given the low-quality designs of current studies it is difficult to establish whether clinical outcomes are due to specific or non-specific effects of MM.\n\n\nDISCUSSION\nDespite encouraging findings, several limitations affect current studies. Suggestions are given for future research based on better designed methodology and for future directions of investigation."}
{"_id": "35d6e101e3332aa7defc312db7b083a22ee13f7b", "title": "When D2D meets cloud: Hybrid mobile task offloadings in fog computing", "text": "In this paper we propose HyFog, a novel hybrid task offloading framework in fog computing, where device users have the flexibility of choosing among multiple options for task executions, including local mobile execution, Device-to-Device (D2D) offloaded execution, and Cloud offloaded execution. We further develop a novel three-layer graph matching algorithm for efficient hybrid task offloading among the devices. Specifically, we first construct a three-layer graph to capture the choice space enabled by these three execution approaches, and then the problem of minimizing the total task execution cost is recast as a minimum weight matching problem over the constructed three-layer graph, which can be efficiently solved using the Edmonds's Blossom algorithm. Numerical results demonstrate that the proposed three-layer graph matching solution can achieve superior performance, with more than 50% cost reduction over the case of local task executions by all the devices."}
{"_id": "d38c2006c0817f1ca7f7fb78015fc547004959c6", "title": "Integrated narrow-band vegetation indices for prediction of crop chlorophyll content for application to precision agriculture", "text": "Recent studies have demonstrated the usefulness of optical indices from hyperspectral remote sensing in the assessment of vegetation biophysical variables both in forestry and agriculture. Those indices are, however, the combined response to variations of several vegetation and environmental properties, such as Leaf Area Index (LAI), leaf chlorophyll content, canopy shadows, and background soil reflectance. Of particular significance to precision agriculture is chlorophyll content, an indicator of photosynthesis activity, which is related to the nitrogen concentration in green vegetation and serves as a measure of the crop response to nitrogen application. This paper presents a combined modeling and indices-based approach to predicting the crop chlorophyll content from remote sensing data while minimizing LAI (vegetation parameter) influence and underlying soil (background) effects. This combined method has been developed first using simulated data and followed by evaluation in terms of quantitative predictive capability using real hyperspectral airborne data. Simulations consisted of leaf and canopy reflectance modeling with PROSPECT and SAILH radiative transfer models. In this modeling study, we developed an index that integrates advantages of indices minimizing soil background effects and indices that are sensitive to chlorophyll concentration. Simulated data have shown that the proposed index Transformed Chlorophyll Absorption in Reflectance Index/Optimized Soil-Adjusted Vegetation Index (TCARI/OSAVI) is both very sensitive to chlorophyll content variations and very resistant to the variations of LAI and solar zenith angle. It was therefore possible to generate a predictive equation to estimate leaf chlorophyll content from the combined optical index derived from above-canopy reflectance. This relationship was evaluated by application to hyperspectral CASI imagery collected over corn crops in three experimental farms from Ontario and Quebec, Canada. The results presented here are from the L\u2019Acadie, Quebec, Agriculture and AgriFood Canada research site. Images of predicted leaf chlorophyll content were generated. Evaluation showed chlorophyll variability over crop plots with various levels of nitrogen, and revealed an excellent agreement with ground truth, with a correlation of r = .81 between estimated and field measured chlorophyll content data. D 2002 Elsevier Science Inc. All rights reserved."}
{"_id": "2485c98aa44131d1a2f7d1355b1e372f2bb148ad", "title": "The CAS-PEAL Large-Scale Chinese Face Database and Baseline Evaluations", "text": "In this paper, we describe the acquisition and contents of a large-scale Chinese face database: the CAS-PEAL face database. The goals of creating the CAS-PEAL face database include the following: 1) providing the worldwide researchers of face recognition with different sources of variations, particularly pose, expression, accessories, and lighting (PEAL), and exhaustive ground-truth information in one uniform database; 2) advancing the state-of-the-art face recognition technologies aiming at practical applications by using off-the-shelf imaging equipment and by designing normal face variations in the database; and 3) providing a large-scale face database of Mongolian. Currently, the CAS-PEAL face database contains 99 594 images of 1040 individuals (595 males and 445 females). A total of nine cameras are mounted horizontally on an arc arm to simultaneously capture images across different poses. Each subject is asked to look straight ahead, up, and down to obtain 27 images in three shots. Five facial expressions, six accessories, and 15 lighting changes are also included in the database. A selected subset of the database (CAS-PEAL-R1, containing 30 863 images of the 1040 subjects) is available to other researchers now. We discuss the evaluation protocol based on the CAS-PEAL-R1 database and present the performance of four algorithms as a baseline to do the following: 1) elementarily assess the difficulty of the database for face recognition algorithms; 2) preference evaluation results for researchers using the database; and 3) identify the strengths and weaknesses of the commonly used algorithms."}
{"_id": "3229a96ffe8d305800b311cbdc2a5710e8f442f4", "title": "Recovering free space of indoor scenes from a single image", "text": "In this paper we consider the problem of recovering the free space of an indoor scene from its single image. We show that exploiting the box like geometric structure of furniture and constraints provided by the scene, allows us to recover the extent of major furniture objects in 3D. Our \u201cboxy\u201d detector localizes box shaped objects oriented parallel to the scene across different scales and object types, and thus blocks out the occupied space in the scene. To localize the objects more accurately in 3D we introduce a set of specially designed features that capture the floor contact points of the objects. Image based metrics are not very indicative of performance in 3D. We make the first attempt to evaluate single view based occupancy estimates for 3D errors and propose several task driven performance measures towards it. On our dataset of 592 indoor images marked with full 3D geometry of the scene, we show that: (a) our detector works well using image based metrics; (b) our refinement method produces significant improvements in localization in 3D; and (c) if one evaluates using 3D metrics, our method offers major improvements over other single view based scene geometry estimation methods."}
{"_id": "5af8b070ce47e4933442ebf563d836f118b124c8", "title": "Educational games for improving the teaching-learning process of a CLIL subject: Physics and chemistry in secondary education", "text": "The use of educational games in an academic context seems to be a superb alternative to traditional learning activities, such as drill-type exercises, in order to engage 21st-century students. Consequently, this work tries to raise the following objectives: analyze the effectiveness of game-based learning, characterize game elements that may contribute to create playing experiences, comprehend how different player types interact with games and, finally, design interactive games which may create challenges, set goals and provide feedback on progress while motivating learners to study physics and chemistry in a foreign language in the second cycle of Secondary Education (4th E.S.O. that corresponds to the age of 15-16 years old). Specifically, we have used several Web 2.0 tools (Hot Potatoes, Scratch, What2Learn and SMART Notebook 11) and applications (Microsoft PowerPoint and Microsoft Excel) in order to create the games; and these games are based on the successive contents: laboratory safety, laboratory equipment, stoichiometry, atomic structure, electronic configuration, the periodic table, forces, motion and energy."}
{"_id": "73ce5b57f37858f13ec80df075b5d74164715421", "title": "Classification and Adaptive Novel Class Detection of Feature-Evolving Data Streams", "text": "Data stream classification poses many challenges to the data mining community. In this paper, we address four such major challenges, namely, infinite length, concept-drift, concept-evolution, and feature-evolution. Since a data stream is theoretically infinite in length, it is impractical to store and use all the historical data for training. Concept-drift is a common phenomenon in data streams, which occurs as a result of changes in the underlying concepts. Concept-evolution occurs as a result of new classes evolving in the stream. Feature-evolution is a frequently occurring process in many streams, such as text streams, in which new features (i.e., words or phrases) appear as the stream progresses. Most existing data stream classification techniques address only the first two challenges, and ignore the latter two. In this paper, we propose an ensemble classification framework, where each classifier is equipped with a novel class detector, to address concept-drift and concept-evolution. To address feature-evolution, we propose a feature set homogenization technique. We also enhance the novel class detection module by making it more adaptive to the evolving stream, and enabling it to detect more than one novel class at a time. Comparison with state-of-the-art data stream classification techniques establishes the effectiveness of the proposed approach."}
{"_id": "8d80f739382aa318c369b324b36a8361d7651d38", "title": "A Driving Right Leg Circuit (DgRL) for Improved Common Mode Rejection in Bio-Potential Acquisition Systems", "text": "The paper presents a novel Driving Right Leg (DgRL) circuit designed to mitigate the effect of common mode signals deriving, say, from power line interferences. The DgRL drives the isolated ground of the instrumentation towards a voltage which is fixed with respect to the common mode potential on the subject, therefore minimizing common mode voltage at the input of the front-end. The paper provides an analytical derivation of the common mode rejection performances of DgRL as compared to the usual grounding circuit or Driven Right Leg (DRL) loop. DgRL is integrated in a bio-potential acquisition system to show how it can reduce the common mode signal of more than 70 dB with respect to standard patient grounding. This value is at least 30 dB higher than the reduction achievable with DRL, making DgRL suitable for single-ended front-ends, like those based on active electrodes. EEG signal acquisition is performed to show how the system can successfully cancel power line interference without any need for differential acquisition, signal post-processing or filtering."}
{"_id": "21880e93aa876fa01ab994d2fc05b86b525933e3", "title": "Head Nod Detection from a Full 3D Model", "text": "As a non-verbal communication mean, head gestures play an important role in face-to-face conversation and recognizing them is therefore of high value for social behavior analysis or Human Robotic Interactions (HRI) modelling. Among the various gestures, head nod is the most common one and can convey agreement or emphasis. In this paper, we propose a novel nod detection approach based on a full 3D face centered rotation model. Compared to previous approaches, we make two contributions. Firstly, the head rotation dynamic is computed within the head coordinate instead of the camera coordinate, leading to pose invariant gesture dynamics. Secondly, besides the rotation parameters, a feature related to the head rotation axis is proposed so that nod-like false positives due to body movements could be eliminated. The experiments on two-party and four-party conversations demonstrate the validity of the approach."}
{"_id": "c5eed6bde6e6a68215c0b4c494fe03f037bcd3d2", "title": "Algorithmic Di erentiation in Python with AlgoPy", "text": "Many programs for scienti c computing in Python are based on NumPy and therefore make heavy use of numerical linear algebra (NLA) functions, vectorized operations, slicing and broadcasting. AlgoPy provides the means to compute derivatives of arbitrary order and Taylor approximations of such programs. The approach is based on a combination of univariate Taylor polynomial arithmetic and matrix calculus in the (combined) forward/reverse mode of Algorithmic Di erentiation (AD). In contrast to existing AD tools, vectorized operations and NLA functions are not considered to be a sequence of scalar elementary functions. Instead, dedicated algorithms for the matrix product, matrix inverse and the Cholesky, QR, and symmetric eigenvalue decomposition are implemented in AlgoPy. We discuss the reasons for this alternative approach and explain the underlying idea. Examples illustrate how AlgoPy can be used from a user's point of view."}
{"_id": "4630706e9f864afd5c66cb1bda7d87c2522b4eee", "title": "Energy detection spectrum sensing on DPSK modulation transceiver using GNU radio", "text": "Cognitive radio (CR) can facilitate the issue of range shortage by permitting secondary users to coincide with incumbent clients in authorized range groups, while it causes no interference to occupant interchanges. Spectrum sensing is the essential in enabling innovation of CR to detect the presence of primary users (PUs) or authorized user signals and adventure the spectrum holes. Henceforth, this paper addresses the testing of spectrum sensing method for Differential Phase Shift Keying (DPSK) transceiver by utilizing an open source programming, GNU Radio. Moreover, the effect of various reproduction channels, for example, Dynamic channel demonstrate with Rayleigh fading, dynamic channel display with Rician fading, frequency selective faded channel show with Rayleigh and frequency selective faded channel show with Rician fading are exhibited."}
{"_id": "b25bf94033b726d85b44446551e0b4936be2e4f7", "title": "Nerve root sedimentation sign: evaluation of a new radiological sign in lumbar spinal stenosis.", "text": "STUDY DESIGN\nRetrospective case-referent study.\n\n\nOBJECTIVE\nTo assess whether the new sedimentation sign discriminates between nonspecific low back pain (LBP) and symptomatic lumbar spinal stenosis (LSS).\n\n\nSUMMARY OF BACKGROUND DATA\nIn the diagnosis of LSS, radiologic findings do not always correlate with clinical symptoms, and additional diagnostic signs are needed. In patients without LSS, we observe the sedimentation of lumbar nerve roots to the dorsal part of the dural sac on supine magnetic resonance image scans. In patients with symptomatic and morphologic central LSS, this sedimentation is rarely seen. We named this phenomenon \"sedimentation sign\" and defined the absence of sedimenting nerve roots as positive sedimentation sign for the diagnosis of LSS.\n\n\nMETHODS\nThis study included 200 patients. Patients in the LSS group (n = 100) showed claudication with or without LBP and leg pain, a cross-sectional area <80 mm, and a walking distance <200 m; patients in the LBP group (n = 100) had LBP, no leg pain, no claudication, a cross-sectional area of the dural sac >120 mm, and a walking distance >1000 m. The frequency of a positive sedimentation sign was compared between the 2 groups, and intraobserver and interobserver reliability were assessed in a random subsample (n = 20).\n\n\nRESULTS\nA positive sedimentation sign was identified in 94 patients in the LSS group (94%; 95% confidence interval, 90%-99%) but none in the LBP group (0%; 95% confidence interval, 0%-4%). Reliability was kappa = 1.0 (intraobserver) and kappa = 0.93 (interobserver), respectively. There was no difference in the detection of the sign between segmental levels L1-L5 in the LSS group.\n\n\nCONCLUSION\nA positive sedimentation sign exclusively and reliably occurs in patients with LSS, suggesting its usefulness in clinical practice. Future accuracy studies will address its sensitivity and specificity. If they confirm the sign's high specificity, a positive sedimentation sign can rule in LSS, and, with a high sensitivity, a negative sedimentation sign can rule out LSS."}
{"_id": "e8ad2e8e3aae9edf914b4d890d0b8bc8db47fbed", "title": "Combining Gradient Boosting Machines with Collective Inference to Predict Continuous Values", "text": "Gradient boosting of regression trees is a competitive procedure for learning predictive models of continuous data that fits the data with an additive non-parametric model. The classic version of gradient boosting assumes that the data is independent and identically distributed. However, relational data with interdependent, linked instances is now common and the dependencies in such data can be exploited to improve predictive performance. Collective inference is one approach to exploit relational correlation patterns and significantly reduce classification error. However, much of the work on collective learning and inference has focused on discrete prediction tasks rather than continuous. In this work, we investigate how to combine these two paradigms together to improve regression in relational domains. Specifically, we propose a boosting algorithm for learning a collective inference model that predicts a continuous target variable. In the algorithm, we learn a basic relational model, collectively infer the target values, and then iteratively learn relational models to predict the residuals. We evaluate our proposed algorithm on a real network dataset and show that it outperforms alternative boosting methods. However, our investigation also revealed that the relational features interact together to produce better predictions."}
{"_id": "a0456c27cdd58f197032c1c8b4f304f09d4c9bc5", "title": "Multiple Classifier Systems", "text": "Ensemble methods are learning algorithms that construct a set of classi ers and then classify new data points by taking a weighted vote of their predictions The original ensemble method is Bayesian aver aging but more recent algorithms include error correcting output coding Bagging and boosting This paper reviews these methods and explains why ensembles can often perform better than any single classi er Some previous studies comparing ensemble methods are reviewed and some new experiments are presented to uncover the reasons that Adaboost does not over t rapidly"}
{"_id": "22df3c0d055b7f65871068dfcd83d10f0a4fe2e4", "title": "Optimizing Java Bytecode Using the Soot Framework: Is It Feasible?", "text": "This paper presents Soot, a framework for optimizing Java bytecode. The framework is implemented in Java and supports three intermediate representations for representing Java bytecode: Baf, a streamlined representation of Java's stack-based bytecode; Jimple, a typed three-address intermediate representation suitable for optimization; and Grimp, an aggregated version of Jimple. Our approach to class le optimization is to rst convert the stack-based bytecode into Jimple, a three-address form more amenable to traditional program optimization, and then convert the optimized Jimple back to bytecode. In order to demonstrate that our approach is feasible, we present experimental results showing the e ects of processing class les through our framework. In particular, we study the techniques necessary to effectively translate Jimple back to bytecode, without losing performance. Finally, we demonstrate that class le optimization can be quite e ective by showing the results of some basic optimizations using our framework. Our experiments were done on ten benchmarks, including seven SPECjvm98 benchmarks, and were executed on ve di erent Java virtual machine implementations."}
{"_id": "3ef87e07d6ffc3c58cad602f792f96fe48fb0b8f", "title": "The Java Language Specification", "text": "class RawMembers extends NonGeneric implements Collection { static Collection cng = new ArrayList(); public static void main(String[] args) { RawMembers rw = null; Collection cn = rw.myNumbers(); // OK Iterator is = rw.iterator(); // Unchecked warning Collection cnn = rw.cng; // OK, static member } } In this program (which is not meant to be run), RawMembers inherits the method: Iterator iterator() from the Collection superinterface. The raw type RawMembers inherits iterator() from Collection, the erasure of Collection, which means that the return type of iterator() in RawMembers is Iterator. As a result, the attempt to 4.9 Intersection Types TYPES, VALUES, AND VARIABLES 70 assign rw.iterator() to Iterator requires an unchecked conversion, so a compile-time unchecked warning is issued. In contrast, RawMembers inherits myNumbers() from the NonGeneric class whose erasure is also NonGeneric. Thus, the return type of myNumbers() in RawMembers is not erased, and the attempt to assign rw.myNumbers() to Collection requires no unchecked conversion, so no compile-time unchecked warning is issued. Similarly, the static member cng retains its parameterized type even when accessed through a object of raw type. Note that access to a static member through an instance is considered bad style and is discouraged. This example reveals that certain members of a raw type are not erased, namely static members whose types are parameterized, and members inherited from a non-generic supertype. Raw types are closely related to wildcards. Both are based on existential types. Raw types can be thought of as wildcards whose type rules are deliberately unsound, to accommodate interaction with legacy code. Historically, raw types preceded wildcards; they were first introduced in GJ, and described in the paper Making the future safe for the past: Adding Genericity to the Java Programming Language by Gilad Bracha, Martin Odersky, David Stoutamire, and Philip Wadler, in Proceedings of the ACM Conference on Object-Oriented Programming, Systems, Languages and Applications (OOPSLA 98), October 1998. 4.9 Intersection Types An intersection type takes the form T1 & ... & Tn (n > 0), where Ti (1 \u2264 i \u2264 n) are types. Intersection types can be derived from type parameter bounds (\u00a74.4) and cast expressions (\u00a715.16); they also arise in the processes of capture conversion (\u00a75.1.10) and least upper bound computation (\u00a74.10.4). The values of an intersection type are those objects that are values of all of the types Ti for 1 \u2264 i \u2264 n. Every intersection type T1 & ... & Tn induces a notional class or interface for the purpose of identifying the members of the intersection type, as follows: \u2022 For each Ti (1 \u2264 i \u2264 n), let Ci be the most specific class or array type such that Ti <: Ci. Then there must be some Ck such that Ck <: Ci for any i (1 \u2264 i \u2264 n), or a compile-time error occurs. \u2022 For 1 \u2264 j \u2264 n, if Tj is a type variable, then let Tj' be an interface whose members are the same as the public members of Tj; otherwise, if Tj is an interface, then let Tj' be Tj. TYPES, VALUES, AND VARIABLES Subtyping 4.10 71 \u2022 If Ck is Object, a notional interface is induced; otherwise, a notional class is induced with direct superclass Ck. 
This class or interface has direct superinterfaces T1', ..., Tn' and is declared in the package in which the intersection type appears. The members of an intersection type are the members of the class or interface it induces. It is worth dwelling upon the distinction between intersection types and the bounds of type variables. Every type variable bound induces an intersection type. This intersection type is often trivial, consisting of a single type. The form of a bound is restricted (only the first element may be a class or type variable, and only one type variable may appear in the bound) to preclude certain awkward situations coming into existence. However, capture conversion can lead to the creation of type variables whose bounds are more general, such as array types)."}
{"_id": "53732eff5c29585bc84d0ec280b0923639bf740e", "title": "Reverse Engineering of the Interaction Diagrams from C++ Code", "text": "In object oriented programming, the functionalities of a system result from the interactions (message exchanges) among the objects allocated by the system. While designing object interactions is far more complex than designing the object structure in forward engineering, the problem of understanding object interactions during code evolution is even harder, because the related information is spread across the code. In this paper, a technique for the automatic extraction of UML interaction diagrams from C++ code is proposed. The algorithm is based on a static, conservative flow analysis, that approximates the behavior of the system in any execution and for any possible input. Applicability of the approach to large software is achieved by means of two mechanisms: partial analysis and focusing. Usage of our method on a real world, large C++ system confirmed its viability."}
{"_id": "9a292e0d862debccffa04396cd5bceb5d866de18", "title": "Compilers: Principles, Techniques, and Tools", "text": ""}
{"_id": "e81ee8f90fa0e67d2e40dc794809dd1a942853aa", "title": "A Fast Algorithm for Finding Dominators in a Flowgraph", "text": "A fast algorithm for finding dominators in a flowgraph is presented. The algorithm uses depth-first search and an efficient method of computing functions defined on paths in trees. A simple implementation of the algorithm runs in O(m log n) time, where m is the number of edges and n is the number of vertices in the problem graph. A more sophisticated implementation runs in O(m\u03b1(m, n)) time, where \u03b1(m, n) is a functional inverse of Ackermann's function.\nBoth versions of the algorithm were implemented in Algol W, a Stanford University version of Algol, and tested on an IBM 370/168. The programs were compared with an implementation by Purdom and Moore of a straightforward O(mn)-time algorithm, and with a bit vector algorithm described by Aho and Ullman. The fast algorithm beat the straightforward algorithm and the bit vector algorithm on all but the smallest graphs tested."}
{"_id": "3aa8bccb391e05762f79b6ea0164c24f4fee38bf", "title": "P300 amplitude is determined by target-to-target interval.", "text": "P300 event-related brain potential (ERP) measures are affected by target stimulus probability, the number of nontargets preceding the target in the stimulus sequence structure, and interstimulus interval (ISI). Each of these factors contributes to the target-to-target interval (TTI), which also has been found to affect P300. The present study employed a variant of the oddball paradigm and manipulated the number of preceding nontarget stimuli (0, 1, 2, 3) and ISI (1, 2, 4 s) in order to systematically assess TTI effects on P300 values from auditory and visual stimuli. Number of preceding nontargets generally produced stronger effects than ISI in a manner suggesting that TTI determined P300 measures: Amplitude increased as TTI increased for both auditory and visual stimulus conditions, whereas latency tended to decrease with increased TTI. The finding that TTI is a critical determinant of P300 responsivity is discussed within a resource allocation theoretical framework."}
{"_id": "8d02d62f324c354c899b217e77ab097d1ddb2a9d", "title": "Ultra Low Power Wake-Up Radios: A Hardware and Networking Survey", "text": "In wireless environments, transmission and reception costs dominate system power consumption, motivating research effort on new technologies capable of reducing the footprint of the radio, paving the way for the Internet of Things. The most important challenge is to reduce power consumption when receivers are idle, the so called idle-listening cost. One approach proposes switching off the main receiver, then introduces new wake-up circuitry capable of detecting an incoming transmission, optionally discriminating the packet destination using addressing, then switching on the main radio only when required. This wake-up receiver technology represents the ultimate frontier in low power radio communication. In this paper, we present a comprehensive literature review of the research progress in wake-up radio (WuR) hardware and relevant networking software. First, we present an overview of the WuR system architecture, including challenges to hardware design and a comparison of solutions presented throughout the last decade. Next, we present various medium access control and routing protocols as well as diverse ways to exploit WuRs, both as an extension of pre-existing systems and as a new concept to manage low-power networking."}
{"_id": "7525328002640456f2b641aa48c0d31660cb62f0", "title": "Experimentable Digital Twins\u2014Streamlining Simulation-Based Systems Engineering for Industry 4.0", "text": "Digital twins represent real objects or subjects with their data, functions, and communication capabilities in the digital world. As nodes within the internet of things, they enable networking and thus the automation of complex value-added chains. The application of simulation techniques brings digital twins to life and makes them experimentable; digital twins become experimentable digital twins (EDTs). Initially, these EDTs communicate with each other purely in the virtual world. The resulting networks of interacting EDTs model different application scenarios and are simulated in virtual testbeds, providing new foundations for comprehensive simulation-based systems engineering. Its focus is on EDTs, which become more detailed with every single application. Thus, complete digital representations of the respective real assets and their behaviors are created successively. The networking of EDTs with real assets leads to hybrid application scenarios in which EDTs are used in combination with real hardware, thus realizing complex control algorithms, innovative user interfaces, or mental models for intelligent systems."}
{"_id": "f4a82dd4a4788e43baed10d8dda63545b81b6eb0", "title": "Design, kinematics and dynamics modeling of a lower-limb walking assistant robot", "text": "This paper focuses on a lower limb walking assistant robot. This robot has been designed and modeled for two tasks of facilitating healthy people, and assisting patients for walking. First, basic concepts of the motion mechanism are presented. Next, its kinematics and dynamics equations have been developed. Kinematics analysis has been done using Denavit-Hartenberg scheme to obtain position, velocity and acceleration of center of mass (COM) of all links. Dynamics equations have been developed using recursive Lagrange method and finally inverse dynamics problem has been solved. Obtained simulation results will be discussed which validate the proposed model."}
{"_id": "e1dd6e567c6f476b5d2ce8adb2a11f001c88ee62", "title": "Metrics for text entry research: an evaluation of MSD and KSPC, and a new unified error metric", "text": "We describe and identify shortcomings in two statistics recently introduced to measure accuracy in text entry evaluations: the minimum string distance (MSD) error rate and keystrokes per character (KSPC). To overcome the weaknesses, a new framework for error analysis is developed and demonstrated. It combines the analysis of the presented text, input stream (keystrokes), and transcribed text. New statistics include a unified total error rate, combining two constituent error rates: the corrected error rate (errors committed but corrected) and the not corrected error rate (errors left in the transcribed text). The framework includes other measures including error correction efficiency, participant conscientiousness, utilised bandwidth, and wasted bandwidth. A text entry study demonstrating the new methodology is described."}
{"_id": "6e62da624a263aaff9b5c10ae08f1859a03cfb76", "title": "Lyapunov Approach for the stabilization of the Inverted Spherical Pendulum", "text": "A nonlinear controller is presented for the stabilization of the spherical inverted pendulum system. The control strategy is based on the Lyapunov approach in conjunction with LaSalle's invariance principle. The proposed controller is able to bring the pendulum to the unstable upright equilibrium point with the position of the movable base at the origin. The obtained closed-loop system has a very large domain of attraction, that can be as large as desired, for any initial position of the pendulum which lies above the horizontal plane"}
{"_id": "24927a463e19c4348802f7f286acda31a035715b", "title": "A Data Structure for Dynamic Trees", "text": "We propose a data structure to maintain a collection of vertex-disjoint trees under a sequence of two kinds of operations: a link operation that combines two trees into one by adding an edge, and a cut operation that divides one tree into two by deleting an edge. Our data structure requires O(log n) time per operation when the time is amortized over a sequence of operations. Using our data structure, we obtain new fast algorithms for the following problems:\n (1) Computing deepest common ancestors.\n (2) Solving various network flow problems including finding maximum flows, blocking flows, and acyclic flows.\n (3) Computing certain kinds of constrained minimum spanning trees.\n (4) Implementing the network simplex algorithm for the transshipment problem.\n Our most significant application is (2); we obtain an O(mn log n)-time algorithm to find a maximum flow in a network of n vertices and m edges, beating by a factor of log n the fastest algorithm previously known for sparse graphs."}
{"_id": "7e5fbba6437067da53be67c71ca6022b469a2e94", "title": "A survey of variability modeling in industrial practice", "text": "Over more than two decades, numerous variability modeling techniques have been introduced in academia and industry. However, little is known about the actual use of these techniques. While dozens of experience reports on software product line engineering exist, only very few focus on variability modeling. This lack of empirical data threatens the validity of existing techniques, and hinders their improvement. As part of our effort to improve empirical understanding of variability modeling, we present the results of a survey questionnaire distributed to industrial practitioners. These results provide insights into application scenarios and perceived benefits of variability modeling, the notations and tools used, the scale of industrial models, and experienced challenges and mitigation strategies."}
{"_id": "610bc4ab4fbf7f95656b24330eb004492e63ffdf", "title": "NONCONVEX MODEL FOR FACTORING NONNEGATIVE MATRICES", "text": "We study the Nonnegative Matrix Factorization problem which approximates a nonnegative matrix by a low-rank factorization. This problem is particularly important in Machine Learning, and finds itself in a large number of applications. Unfortunately, the original formulation is ill-posed and NPhard. In this paper, we propose a row sparse model based on Row Entropy Minimization to solve the NMF problem under separable assumption which states that each data point is a convex combination of a few distinct data columns. We utilize the concentration of the entropy function and the `\u221e norm to concentrate the energy on the least number of latent variables. We prove that under the separability assumption, our proposed model robustly recovers data columns that generate the dataset, even when the data is corrupted by noise. We empirically justify the robustness of the proposed model and show that it is significantly more robust than the state-ofthe-art separable NMF algorithms."}
{"_id": "af657f4f268d31e8f1ca8b9702bb72c2f788b161", "title": "Multi-label classification by exploiting label correlations", "text": "a Department of Computer Science and Technology, Tongji University, Shanghai 201804, PR China b Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 2G7, Canada c Key Laboratory of Embedded System and Service Computing, Ministry of Education, Tongji University, Shanghai 201804, PR China d System Research Institute, Polish Academy of Sciences, Warsaw, Poland e School of Software, Jiangxi Agricultural University, Nanchang 330013, PR China"}
{"_id": "f829fa5686895ec831dd157f88949f79976664a7", "title": "Variational Bayesian Inference for Big Data Marketing Models 1", "text": "Hierarchical Bayesian approaches play a central role in empirical marketing as they yield individual-level parameter estimates that can be used for targeting decisions. MCMC methods have been the methods of choice for estimating hierarchical Bayesian models as they are capable of providing accurate individual-level estimates. However, MCMC methods are computationally prohibitive and do not scale well when applied to massive data sets that have become common in the current era of \u201cBig Data\u201d. We introduce to the marketing literature a new class of Bayesian estimation techniques known as variational Bayesian (VB) inference. These methods tackle the scalability challenge via a deterministic optimization approach to approximate the posterior distribution and yield accurate estimates at a fraction of the computational cost associated with simulation-based MCMC methods. We exploit and extend recent developments in variational Bayesian inference and highlight how two VB estimation approaches \u2013 Mean-field VB (that is analogous to Gibbs sampling) for conjugate models and Fixed-form VB (which is analogous to Metropolis-Hasting) for nonconjugate models \u2013 can be effectively combined for estimating complex marketing models. We also show how recent advances in parallel computing and in stochastic optimization can be used to further enhance the speed of these VB methods. Using simulated as well as real data sets, we apply the VB approaches to several commonly used marketing models (e.g. mixed linear, logit, selection, and hierarchical ordinal logit models), and demonstrate how the VB inference is widely applicable for marketing problems."}
{"_id": "ee38164081011364ee9618abcc166b3dfd749740", "title": "Uplift Modeling with Multiple Treatments and General Response Types", "text": "Randomized experiments have been used to assist decisionmaking in many areas. They help people select the optimal treatment for the test population with certain statistical guarantee. However, subjects can show significant heterogeneity in response to treatments. The problem of customizing treatment assignment based on subject characteristics is known as uplift modeling, differential response analysis, or personalized treatment learning in literature. A key feature for uplift modeling is that the data is unlabeled. It is impossible to know whether the chosen treatment is optimal for an individual subject because response under alternative treatments is unobserved. This presents a challenge to both the training and the evaluation of uplift models. In this paper we describe how to obtain an unbiased estimate of the key performance metric of an uplift model, the expected response. We present a new uplift algorithm which creates a forest of randomized trees. The trees are built with a splitting criterion designed to directly optimize their uplift performance based on the proposed evaluation method. Both the evaluation method and the algorithm apply to arbitrary number of treatments and general response types. Experimental results on synthetic data and industry-provided data show that our algorithm leads to significant performance improvement over other applicable methods. Accepted: 2017 SIAM International Conference on Data"}
{"_id": "4328013cae88f6fe413bb4fe302bfd3909d5276e", "title": "Automated Diagnosis of Coronary Artery Disease Based on Data Mining and Fuzzy Modeling", "text": "A fuzzy rule-based decision support system (DSS) is presented for the diagnosis of coronary artery disease (CAD). The system is automatically generated from an initial annotated dataset, using a four stage methodology: 1) induction of a decision tree from the data; 2) extraction of a set of rules from the decision tree, in disjunctive normal form and formulation of a crisp model; 3) transformation of the crisp set of rules into a fuzzy model; and 4) optimization of the parameters of the fuzzy model. The dataset used for the DSS generation and evaluation consists of 199 subjects, each one characterized by 19 features, including demographic and history data, as well as laboratory examinations. Tenfold cross validation is employed, and the average sensitivity and specificity obtained is 62% and 54%, respectively, using the set of rules extracted from the decision tree (first and second stages), while the average sensitivity and specificity increase to 80% and 65%, respectively, when the fuzzification and optimization stages are used. The system offers several advantages since it is automatically generated, it provides CAD diagnosis based on easily and noninvasively acquired features, and is able to provide interpretation for the decisions made."}
{"_id": "45bc88a5c627d5a54e536e3b8440e2ea76cee1c0", "title": "Predicting Source Code Quality with Static Analysis and Machine Learning", "text": "This paper is investigating if it is possible to predict source code quality based on static analysis and machine learning. The proposed approach includes a plugin in Eclipse, uses a combination of peer review/human rating, static code analysis, and classification methods. As training data, public data and student hand-ins in programming are used. Based on this training data, new and uninspected source code can be accurately classified as \u201cwell written\u201d or \u201cbadly written\u201d. This is a step towards feedback in an interactive environment without peer assessment."}
{"_id": "51a5fae9c638c33882b91fcf70dd40d0c3a75fcc", "title": "Traffic Accident Data Mining Using Machine Learning Paradigms", "text": "Engineers and researchers in the automobile industry have tried to design and build safer automobiles, but traffic accidents are unavoidable. Patterns involved in dangerous crashes could be detected if we develop a prediction model that automatically classifies the type of injury severity of various traffic accidents. These behavioral and roadway patterns are useful in the development of traffic safety control policy. We believe that to obtain the greatest possible accident reduction effects with limited budgetary resources, it is important that measures be based on scientific and objective surveys of the causes of accidents and severity of injuries. This paper presents some models to predict the severity of injury that occurred during traffic accidents using three machine-learning approaches. We considered neural networks trained using hybrid learning approaches, decision trees and a concurrent hybrid model involving decision trees and neural networks. Experiment results reveal that among the machine learning paradigms considered the hybrid decision tree-neural network approach outperformed the individual approaches."}
{"_id": "a2ba37db8d15abf65c2286ad2ec526f47437b92f", "title": "Design Procedure of Dual-Stator Spoke-Array Vernier Permanent-Magnet Machines", "text": "The dual-stator spoke-array vernier permanent-magnet (DSSA VPM) machines proposed in the previous papers have been proven to be with high torque density and high power factor. However, the design procedure on the DSSA VPM machines has not been well established, and there is little design experience to be followed, which makes the DSSA VPM machine design quite difficult. This paper presents the detailed DSSA VPM machine design procedure including decision of design parameter initial values, analytical sizing equation, geometric size relationship, and so on. In order to get reasonable design parameter initial values which can reduce the number of design iteration loop, the influence of the key parameters, such as rotor/stator pole combination, slot opening, magnet thickness, etc., on the performances is analyzed based on the finite-element algorithm (FEA) in this paper, and the analysis results can be regarded as design experience during the selection process of the initial values. After that, the analytical sizing equation and geometric relationship formulas are derived and can be used to obtain and optimize the size data of the DSSA VPM machines with little time consumption. The combination of the analytical and FEA methods makes the design procedure time-effective and reliable. Finally, the design procedure is validated by experiments on a DSSA VPM prototype with 2000 N\u00b7m."}
{"_id": "4686b6bdd80a721d052f3da92620c21035aa2c68", "title": "Agent-Based Modeling of the Spread of Influenza-Like Illness in an Emergency Department: A Simulation Study", "text": "The objective of this paper was to develop an agent based modeling framework in order to simulate the spread of influenza virus infection on a layout based on a representative hospital emergency department in Winnipeg, Canada. In doing so, the study complements mathematical modeling techniques for disease spread, as well as modeling applications focused on the spread of antibiotic-resistant nosocomial infections in hospitals. Twenty different emergency department scenarios were simulated, with further simulation of four infection control strategies. The agent based modeling approach represents systems modeling, in which the emergency department was modeled as a collection of agents (patients and healthcare workers) and their individual characteristics, behaviors, and interactions. The framework was coded in C + + using Qt4 libraries running under the Linux operating system. A simple ordinary least squares (OLS) regression was used to analyze the data, in which the percentage of patients that be came infected in one day within the simulation was the dependent variable. The results suggest that within the given instance con text, patient-oriented infection control policies (alternate treatment streams, masking symptomatic patients) tend to have a larger effect than policies that target healthcare workers. The agent-based modeling framework is a flexible tool that can be made to reflect any given environment; it is also a decision support tool for practitioners and policymakers to assess the relative impact of infection control strategies. The framework illuminates scenarios worthy of further investigation, as well as counterintuitive findings."}
{"_id": "900c24c6f783db6f6239fe44989814d4b7deaa9b", "title": "The Sorcerer's Apprentice Guide to Fault Attacks", "text": "The effect of faults on electronic systems has been studied since the 1970s when it was noticed that radioactive particles caused errors in chips. This led to further research on the effect of charged particles on silicon, motivated by the aerospace industry, which was becoming concerned about the effect of faults in airborne electronic systems. Since then various mechanisms for fault creation and propagation have been discovered and researched. This paper covers the various methods that can be used to induce faults in semiconductors and exploit such errors maliciously. Several examples of attacks stemming from the exploiting of faults are explained. Finally a series of countermeasures to thwart these attacks are described."}
{"_id": "8c0062c200cfc4fb740d4c64d310068530dec58b", "title": "Health and well-being benefits of spending time in forests: systematic review", "text": "BACKGROUND\nNumerous studies have reported that spending time in nature is associated with the improvement of various health outcomes and well-being. This review evaluated the physical and psychological benefits of a specific type of exposure to nature, forest therapy.\n\n\nMETHOD\nA literature search was carried out using MEDLINE, PubMed, ScienceDirect, EMBASE, and ProQuest databases and manual searches from inception up to December 2016. Key words: \"Forest\" or \"Shinrin -Yoku\" or \"Forest bath\" AND \"Health\" or \"Wellbeing\". The methodological quality of each randomized controlled trials (RCTs) was assessed according to the Cochrane risk of bias (ROB) tool.\n\n\nRESULTS\nSix RCTs met the inclusion criteria. Participants' ages ranged from 20 to 79\u00a0years. Sample size ranged from 18 to 99. Populations studied varied from young healthy university students to elderly people with chronic disease. Studies reported the positive impact of forest therapy on hypertension (n\u2009=\u20092), cardiac and pulmonary function (n\u2009=\u20091), immune function (n\u2009=\u20092), inflammation (n\u2009=\u20093), oxidative stress (n\u2009=\u20091), stress (n\u2009=\u20091), stress hormone (n\u2009=\u20091), anxiety (n\u2009=\u20091), depression (n\u2009=\u20092), and emotional response (n\u2009=\u20093). The quality of all studies included in this review had a high ROB.\n\n\nCONCLUSION\nForest therapy may play an important role in health promotion and disease prevention. However, the lack of high-quality studies limits the strength of results, rendering the evidence insufficient to establish clinical practice guidelines for its use. More robust RCTs are warranted."}
{"_id": "d8b096030b8f7bd0c39ef9e3217b533e298a8985", "title": "Character-based feature extraction with LSTM networks for POS-tagging task", "text": "In this paper we describe a work in progress on designing the continuous vector space word representations able to map unseen data adequately. We propose a LSTM-based feature extraction layer that reads in a sequence of characters corresponding to a word and outputs a single fixed-length real-valued vector. We then test our model on a POS tagging task on four typologically different languages. The results of the experiments suggest that the model can offer a solution to the out-of-vocabulary words problem, as in a comparable setting its OOV accuracy improves over that of a state of the art tagger."}
{"_id": "5f93df0c298a87e23be75f51fe27bd96d3f9d65b", "title": "Source Localization in Wireless Sensor Networks From Signal Time-of-Arrival Measurements", "text": "Recent advances in wireless sensor networks have led to renewed interests in the problem of source localization. Source localization has broad range of applications such as emergency rescue, asset inventory, and resource management. Among various measurement models, one important and practical source signal measurement is the received signal time of arrival (TOA) at a group of collaborative wireless sensors. Without time-stamp at the transmitter, in traditional approaches, these received TOA measurements are subtracted pairwise to form time-difference of arrival (TDOA) data for source localization, thereby leading to a 3-dB loss in signal-to-noise ratio (SNR). We take a different approach by directly applying the original measurement model without the subtraction preprocessing. We present two new methods that utilize semidefinite programming (SDP) relaxation for direct source localization. We further address the issue of robust estimation given measurement errors and inaccuracy in the locations of receiving sensors. Our results demonstrate some potential advantages of source localization based on the direct TOA data over time-difference preprocessing."}
{"_id": "c1ff5842b61d20ed61c49d1080c2c44e5a0ac015", "title": "Design, modeling and control of an omni-directional aerial vehicle", "text": "In this paper we present the design and control of a novel six degrees-of-freedom aerial vehicle. Based on a static force and torque analysis for generic actuator configurations, we derive an eight-rotor configuration that maximizes the vehicle's agility in any direction. The proposed vehicle design possesses full force and torque authority in all three dimensions. A control strategy that allows for exploiting the vehicle's decoupled translational and rotational dynamics is introduced. A prototype of the proposed vehicle design is built using reversible motor-propeller actuators and capable of flying at any orientation. Preliminary experimental results demonstrate the feasibility of the novel design and the capabilities of the vehicle."}
{"_id": "bf8a0014ac21ba452c38d27bc7d930c265c32c60", "title": "High Level Sensor Data Fusion Approaches For Object Recognition In Road Environment", "text": "Application of high level fusion approaches demonstrate a sequence of significant advantages in multi sensor data fusion and automotive safety fusion systems are no exception to this. High level fusion can be applied to automotive sensor networks with complementary or/and redundant field of views. The advantage of this approach is that it ensures system modularity and allows benchmarking, as it does not permit feedbacks and loops inside the processing. In this paper two specific high level data fusion approaches are described including a brief architectural and algorithmic presentation. These approaches differ mainly in their data association part: (a) track level fusion approach solves it with the point to point association with emphasis on object continuity and multidimensional assignment, and (b) grid based fusion approach that proposes a generic way to model the environment and to perform sensor data fusion. The test case for these approaches is a multi sensor equipped PReVENT/ProFusion2 truck demonstrator vehicle."}
{"_id": "c873b2af7fd26bf6df7196bf077905da9bcb3b4d", "title": "A simple stimulus generator for testing single-channel monopulse processor", "text": "The monopulse antenna tracking system is the preferred choice where higher pointing accuracy is required because of its inherent advantages. There are three possible configurations for realizing monopulse tracking system. One of the configurations is called single channel monopulse tracking system as it requires only one down converter chain. In this configuration, the sum and both (AZ/EL) error signals are combined to reduce the required number of down converters. A single channel monopulse processor is vital subsystem of a single channel monopulse tracking system which extracts the pointing error information from the IF signal. During the development, these processors need to be tested for its functionality in the laboratory which requires a stimulus generator. The stimulus generator generates an IF signal which mimics the real time signal and can be used for debugging and functional verification. This paper presents a simple approach for realizing a stimulus generator for single channel monopulse processor. A stimulus generator has been developed using this approach and has been used for laboratory testing of a single channel monopulse processor. The tested single channel monopulse processor has been successfully integrated with the earth station antenna tracking chain at NRSC, Hyderabad, India and used for tracking the LEO satellites."}
{"_id": "4a0e875a97011e8c3789cf10d04d60a61a19f60a", "title": "Pulsed radiofrequency treatment in interventional pain management: mechanisms and potential indications\u2014a review", "text": "The objective of this review is to evaluate the efficacy of Pulsed Radiofrequency (PRF) treatment in chronic pain management in randomized clinical trials (RCTs) and well-designed observational studies. The physics, mechanisms of action, and biological effects are discussed to provide the scientific basis for this promising modality. We systematically searched for clinical studies on PRF. We searched the MEDLINE (PubMed) and EMBASE database, using the free text terms: pulsed radiofrequency, radio frequency, radiation, isothermal radiofrequency, and combination of these. We classified the information in two tables, one focusing only on RCTs, and another, containing prospective studies. Date of last electronic search was 30 May 2010. The methodological quality of the presented reports was scored using the original criteria proposed by Jadad et al. We found six RCTs that evaluated the efficacy of PRF, one against corticosteroid injection, one against sham intervention, and the rest against conventional RF thermocoagulation. Two trials were conducted in patients with lower back pain due to lumbar zygapophyseal joint pain, one in cervical radicular pain, one in lumbosacral radicular pain, one in trigeminal neuralgia, and another in chronic shoulder pain. From the available evidence, the use of PRF to the dorsal root ganglion in cervical radicular pain is compelling. With regards to its lumbosacral counterpart, the use of PRF cannot be similarly advocated in view of the methodological quality of the included study. PRF application to the supracapular nerve was found to be as efficacious as intra-articular corticosteroid in patients with chronic shoulder pain. The use of PRF in lumbar facet arthropathy and trigeminal neuralgia was found to be less effective than conventional RF thermocoagulation techniques."}
{"_id": "1267fe36b5ece49a9d8f913eb67716a040bbcced", "title": "On the limited memory BFGS method for large scale optimization", "text": "We study the numerical performance of a limited memory quasi Newton method for large scale optimization which we call the L BFGS method We compare its performance with that of the method developed by Buckley and LeNir which combines cyles of BFGS steps and conjugate direction steps Our numerical tests indicate that the L BFGS method is faster than the method of Buckley and LeNir and is better able to use additional storage to accelerate convergence We show that the L BFGS method can be greatly accelerated by means of a simple scaling We then compare the L BFGSmethod with the partitioned quasi Newton method of Griewank and Toint a The results show that for some problems the partitioned quasi Newton method is clearly superior to the L BFGS method However we nd that for other problems the L BFGS method is very competitive due to its low iteration cost We also study the convergence properties of the L BFGS method and prove global convergence on uniformly convex problems"}
{"_id": "146f6f6ed688c905fb6e346ad02332efd5464616", "title": "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention", "text": "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-theart performance on three benchmark datasets: Flickr9k, Flickr30k and MS COCO."}
{"_id": "2abe6b9ea1b13653b7384e9c8ef14b0d87e20cfc", "title": "RCV1: A New Benchmark Collection for Text Categorization Research", "text": "Reuters Corpus Volume I (RCV1) is an archive of over 800,000 manually categorized newswire stories recently made available by Reuters, Ltd. for research purposes. Use of this data for research on text categorization requires a detailed understanding of the real world constraints under which the data was produced. Drawing on interviews with Reuters personnel and access to Reuters documentation, we describe the coding policy and quality control procedures used in producing the RCV1 data, the intended semantics of the hierarchical category taxonomies, and the corrections necessary to remove errorful data. We refer to the original data as RCV1-v1, and the corrected data as RCV1-v2. We benchmark several widely used supervised learning methods on RCV1-v2, illustrating the collection\u2019s properties, suggesting new directions for research, and providing baseline results for future studies. We make available detailed, per-category experimental results, as well as corrected versions of the category assignments and taxonomy structures, via online appendices."}
{"_id": "2cb8497f9214735ffd1bd57db645794459b8ff41", "title": "Teaching Machines to Read and Comprehend", "text": "Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions posed on the contents of documents that they have seen, but until now large scale training and test datasets have been missing for this type of evaluation. In this work we define a new methodology that resolves this bottleneck and provides large scale supervised reading comprehension data. This allows us to develop a class of attention based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure."}
{"_id": "19c60ef563531be3d89010f098e12aa90bea2b57", "title": "SEQUENCE BASED HIERARCHICAL CONFLICT-FREE ROUTING STRATEGY OF BI-DIRECTIONAL AUTOMATED GUIDED VEHICLES", "text": "Conflict-free routing is an important problem to be addressed in AGV Systems, in particular when they are bi-directional. This paper considers the problem of routing AGVs in the presence of contingencies. The importance of this work stems from the realization that path-planning approaches are ill suited to AGVSs subject to contingencies. In this paper, a two stage closed-loop control strategy is proposed. In the first control stage, a pre-planning method proposed by Kim and Tanchoco (1991) is used to establish the fastest conflict-free routes for AGVs. The objective of the second stage is the avoidance of deadlocks in the presence of interruptions while maintaining the planned routes. Copyright \u00a9 2005 IFAC"}
{"_id": "41545565e1efda90e1c993d4e308ef1a301e505b", "title": "Have I really met timing? - validating primetime timing reports with SPICE", "text": "At sign-off everybody is wondering about how good the accuracy of the static timing analysis timing reports generated with PrimeTime\u00af really is. Errors can be introduced by STA setup, interconnect modeling, library characterization etc. The claims that path timingcalculated by PrimeTime usually is within a few percent of Spice don't help to ease your uncertainty.When the Signal Integrity features were introduced to PrimeTime there was also a feature added that was hardly announced: PrimeTime can write out timing paths for simulation with Spice that can be used to validate the timing numbers calculated by PrimeTime. By comparingthe numbers calculated by PrimeTime to a simulation with Spice for selected paths the designers can verify the timing and build up confidence or identify errors.This paper will describe a validation flow for PrimeTime timing reports that is based on extraction of the Spice paths, starting the Spice simulation, parsing the simulation results, and creating a report comparing PrimeTime and Spice timing. All these steps are done inside the TCL environment of PrimeTime. It will describe this flow, what is needed for the Spice simulation, how it can be set up, what can go wrong, and what kind of problems in the STA can be identified."}
{"_id": "81780a68b394ba15112ba0be043a589a2e8563ab", "title": "A comparative review of tone-mapping algorithms for high dynamic range video", "text": "Tone-mapping constitutes a key component within the field of high dynamic range (HDR) imaging. Its importance is manifested in the vast amount of tone-mapping methods that can be found in the literature, which are the result of an active development in the area for more than two decades. Although these can accommodate most requirements for display of HDR images, new challenges arose with the advent of HDR video, calling for additional considerations in the design of tone-mapping operators (TMOs). Today, a range of TMOs exist that do support video material. We are now reaching a point where most camera captured HDR videos can be prepared in high quality without visible artifacts, for the constraints of a standard display device. In this report, we set out to summarize and categorize the research in tone-mapping as of today, distilling the most important trends and characteristics of the tone reproduction pipeline. While this gives a wide overview over the area, we then specifically focus on tone-mapping of HDR video and the problems this medium entails. First, we formulate the major challenges a video TMO needs to address. Then, we provide a description and categorization of each of the existing video TMOs. Finally, by constructing a set of quantitative measures, we evaluate the performance of a number of the operators, in order to give a hint on which can be expected to render the least amount of artifacts. This serves as a comprehensive reference, categorization and comparative assessment of the state-of-the-art in tone-mapping for HDR video."}
{"_id": "a30a45a56911a9bfa845f9c86f3c9b29e11807b6", "title": "Energy Management Based on Frequency Approach for Hybrid Electric Vehicle Applications: Fuel-Cell/Lithium-Battery and Ultracapacitors", "text": "This paper presents the ultracapacitors (U) and fuel-cell/lithium-battery connection with an original energy management method for hybrid electric vehicle (HEV) applications. The proposed method is focused on the frequency approach to meet the load energy requirement. The ultracapacitors are connected to the dc link through a buck-boost converter, and the fuel cell is connected to the dc link via a boost converter for the first topology. In the second topology, the lithium battery is connected to the dc link without a converter to avoid the dc-link voltage control. An asynchronous machine is used like the traction motor; it is related to the dc link through a dc/ac converter (inverter). The main contribution of this paper is focused on HEV energy management according to the dynamics (frequency) of the hybrid sources using polynomial correctors. The performances of the proposed method are evaluated through some simulations and the experimental tests, using the New European Driving Cycle (NEDC). This study is extended to an aggressive test cycle, such as the U.S. driving cycle (USDC), to understand the system response and the control performances."}
{"_id": "83b01b5f8c127a081b74ff96952a3ec9695d9e50", "title": "Randomized clinical trial of cognitive behavioral therapy (CBT) versus acceptance and commitment therapy (ACT) for mixed anxiety disorders.", "text": "OBJECTIVE\nRandomized comparisons of acceptance-based treatments with traditional cognitive behavioral therapy (CBT) for anxiety disorders are lacking. To address this gap, we compared acceptance and commitment therapy (ACT) to CBT for heterogeneous anxiety disorders.\n\n\nMETHOD\nOne hundred twenty-eight individuals (52% female, mean age = 38, 33% minority) with 1 or more DSM-IV anxiety disorders began treatment following randomization to CBT or ACT; both treatments included behavioral exposure. Assessments at pre-treatment, post-treatment, and 6- and 12-month follow-up measured anxiety-specific (principal disorder Clinical Severity Ratings [CSRs], Anxiety Sensitivity Index, Penn State Worry Questionnaire, Fear Questionnaire avoidance) and non-anxiety-specific (Quality of Life Index [QOLI], Acceptance and Action Questionnaire-16 [AAQ]) outcomes. Treatment adherence, therapist competency ratings, treatment credibility, and co-occurring mood and anxiety disorders were investigated.\n\n\nRESULTS\nCBT and ACT improved similarly across all outcomes from pre- to post-treatment. During follow-up, ACT showed steeper linear CSR improvements than CBT (p < .05, d = 1.26), and at 12-month follow-up, ACT showed lower CSRs than CBT among completers (p < .05, d = 1.10). At 12-month follow-up, ACT reported higher AAQ than CBT (p = .08, d = 0.42; completers: p < .05, d = 0.56), whereas CBT reported higher QOLI than ACT (p < .05, d = 0.42). Attrition and comorbidity improvements were similar; ACT used more non-study psychotherapy at 6-month follow-up. Therapist adherence and competency were good; treatment credibility was higher in CBT.\n\n\nCONCLUSIONS\nOverall improvement was similar between ACT and CBT, indicating that ACT is a highly viable treatment for anxiety disorders."}
{"_id": "5c91c11b7fd9847129f36fdc2787c3d6df3a803b", "title": "Phylogeography of Y-chromosome haplogroup I reveals distinct domains of prehistoric gene flow in europe.", "text": "To investigate which aspects of contemporary human Y-chromosome variation in Europe are characteristic of primary colonization, late-glacial expansions from refuge areas, Neolithic dispersals, or more recent events of gene flow, we have analyzed, in detail, haplogroup I (Hg I), the only major clade of the Y phylogeny that is widespread over Europe but virtually absent elsewhere. The analysis of 1,104 Hg I Y chromosomes, which were identified in the survey of 7,574 males from 60 population samples, revealed several subclades with distinct geographic distributions. Subclade I1a accounts for most of Hg I in Scandinavia, with a rapidly decreasing frequency toward both the East European Plain and the Atlantic fringe, but microsatellite diversity reveals that France could be the source region of the early spread of both I1a and the less common I1c. Also, I1b*, which extends from the eastern Adriatic to eastern Europe and declines noticeably toward the southern Balkans and abruptly toward the periphery of northern Italy, probably diffused after the Last Glacial Maximum from a homeland in eastern Europe or the Balkans. In contrast, I1b2 most likely arose in southern France/Iberia. Similarly to the other subclades, it underwent a postglacial expansion and marked the human colonization of Sardinia approximately 9,000 years ago."}
{"_id": "7da961cb039b1a01cad9b78d93bdfe2a69ed3ccf", "title": "Hierarchical Gaussian Descriptors with Application to Person Re-Identification", "text": "Describing the color and textural information of a person image is one of the most crucial aspects of person re-identification (re-id). In this paper, we present novel meta-descriptors based on a hierarchical distribution of pixel features. Although hierarchical covariance descriptors have been successfully applied to image classification, the mean information of pixel features, which is absent from the covariance, tends to be the major discriminative information for person re-id. To solve this problem, we describe a local region in an image via hierarchical Gaussian distribution in which both means and covariances are included in their parameters. More specifically, the region is modeled as a set of multiple Gaussian distributions in which each Gaussian represents the appearance of a local patch. The characteristics of the set of Gaussians are again described by another Gaussian distribution. In both steps, we embed the parameters of the Gaussian into a point of Symmetric Positive Definite (SPD) matrix manifold. By changing the way to handle mean information in this embedding, we develop two hierarchical Gaussian descriptors. Additionally, we develop feature norm normalization methods with the ability to alleviate the biased trends that exist on the descriptors. The experimental results conducted on five public datasets indicate that the proposed descriptors achieve remarkably high performance on person re-id."}
{"_id": "7a012d0eb6e19e2a40529314f58f4a31f4d5f8ff", "title": "On Covert Acoustical Mesh Networks in Air", "text": "Covert channels can be used to circumvent system and network policies by establishing communications that have not been considered in the design of the computing system. We construct a covert channel between different computing systems that utilizes audio modulation/demodulation to exchange data between the computing systems over the air medium. The underlying network stack is based on a communication system that was originally designed for robust underwater communication. We adapt the communication system to implement covert and stealthy communications by utilizing the ultrasonic frequency range. We further demonstrate how the scenario of covert acoustical communication over the air medium can be extended to multi-hop communications and even to wireless mesh networks. A covert acoustical mesh network can be conceived as a meshed botnet or malnet that is accessible via inaudible audio transmissions. Different applications of covert acoustical mesh networks are presented, including the use for remote keylogging over multiple hops. It is shown that the concept of a covert acoustical mesh network renders many conventional security concepts useless, as acoustical communications are usually not considered. Finally, countermeasures against covert acoustical mesh networks are discussed, including the use of lowpass filtering in computing systems and a host-based intrusion detection system for analyzing audio input and output in order to detect any irregularities."}
{"_id": "3e25f645e7806f3184f3572fea764396d1289c49", "title": "A developmental, mentalization-based approach to the understanding and treatment of borderline personality disorder.", "text": "The precise nature and etiopathogenesis of borderline personality disorder (BPD) continues to elude researchers and clinicians. Yet, increasing evidence from various strands of research converges to suggest that affect dysregulation, impulsivity, and unstable relationships constitute the core features of BPD. Over the last two decades, the mentalization-based approach to BPD has attempted to provide a theoretically consistent way of conceptualizing the interrelationship between these core features of BPD, with the aim of providing clinicians with a conceptually sound and empirically supported approach to BPD and its treatment. This paper presents an extended version of this approach to BPD based on recently accumulated data. In particular, we suggest that the core features of BPD reflect impairments in different facets of mentalization, each related to impairments in relatively distinct neural circuits underlying these facets. Hence, we provide a comprehensive account of BPD by showing how its core features are related to each other in theoretically meaningful ways. More specifically, we argue that BPD is primarily associated with a low threshold for the activation of the attachment system and deactivation of controlled mentalization, linked to impairments in the ability to differentiate mental states of self and other, which lead to hypersensitivity and increased susceptibility to contagion by other people's mental states, and poor integration of cognitive and affective aspects of mentalization. The combination of these impairments may explain BPD patients' propensity for vicious interpersonal cycles, and their high levels of affect dysregulation and impulsivity. Finally, the implications of this expanded mentalization-based approach to BPD for mentalization-based treatment and treatment of BPD more generally are discussed."}
{"_id": "c8cc94dd21d78f4f0d07ccb61153bfb798aeef2c", "title": "Statistical Methods for Fighting Financial Crimes", "text": ""}
{"_id": "69d324f27ca4af58737000c11700ddea9fb31508", "title": "Design and Measurement of Reconfigurable Millimeter Wave Reflectarray Cells With Nematic Liquid Crystal", "text": "Numerical simulations are used to study the electromagnetic scattering from phase agile microstrip reflectarray cells which exploit the voltage controlled dielectric anisotropy property of nematic state liquid crystals (LCs). In the computer model two arrays of equal size elements constructed on a 15 mum thick tuneable LC layer were designed to operate at center frequencies of 102 GHz and 130 GHz. Micromachining processes based on the metallization of quartz/silicon wafers and an industry compatible LCD packaging technique were employed to fabricate the grounded periodic structures. The loss and the phase of the reflected signals were measured using a quasi-optical test bench with the reflectarray inserted at the beam waist of the imaged Gaussian beam, thus eliminating some of the major problems associated with traditional free-space characterization at these frequencies. By applying a low frequency AC bias voltage of 10 V, a 165deg phase shift with a loss 4.5-6.4 dB at 102 GHz and 130deg phase shift with a loss variation between 4.3-7 dB at 130 GHz was obtained. The experimental results are shown to be in close agreement with the computer model."}
{"_id": "d4be9c65b86315c0801b56183a95e50766ec9c26", "title": "Reasoning about Categories in Conceptual Spaces", "text": "Understanding the process of categorization is a primary research goal in artificial intelligence. The conceptual space framework provides a flexible approach to modeling context-sensitive categorization via a geometrical representation designed for modeling and managing concepts. In this paper we show how algorithms developed in computational geometry, and the Region Connection Calculus can be used to model important aspects of categorization in conceptual spaces. In particular, we demonstrate the feasibility of using existing geometric algorithms to build and manage categories in conceptual spaces, and we show how the Region Connection Calculus can be used to reason about categories and other conceptual regions."}
{"_id": "c94d52a92b1da191f720cc4471d2f176e1d7ef0b", "title": "Towards the Development of Realistic Botnet Dataset in the Internet of Things for Network Forensic Analytics: Bot-IoT Dataset", "text": "The proliferation of IoT systems, has seen them targeted by malicious third parties. To address this, realistic protection and investigation countermeasures need to be developed. Such countermeasures include network intrusion detection and network forensic systems. For that purpose, a well-structured and representative dataset is paramount for training and validating the credibility of the systems. Although there are several network, in most cases, not much information is given about the Botnet scenarios that were used. This paper, proposes a new dataset, Bot-IoT, which incorporates legitimate and simulated IoT network traffic, along with various types of attacks. We also present a realistic testbed environment for addressing the existing dataset drawbacks of capturing complete network information, accurate labeling, as well as recent and complex attack diversity. Finally, we evaluate the reliability of the BoT-IoT dataset using different statistical and machine learning methods for forensics purposes compared with the existing datasets. This work provides the baseline for allowing botnet identificaiton across IoT-specific networks. The Bot-IoT dataset can be accessed at [1]."}
{"_id": "afd3cba292f60b9a76b58ef731b50c812e4c544f", "title": "Binary Input Layer: Training of CNN models with binary input data", "text": "For the efficient execution of deep convolutional neural networks (CNN) on edge devices, various approaches have been presented which reduce the bit width of the network parameters down to 1 bit. Binarization of the first layer was always excluded, as it leads to a significant error increase. Here, we present the novel concept of binary input layer (BIL), which allows the usage of binary input data by learning bit specific binary weights. The concept is evaluated on three datasets (PAMAP2, SVHN, CIFAR-10). Our results show that this approach is in particular beneficial for multimodal datasets (PAMAP2) where it outperforms networks using full precision weights in the first layer by 1.92 percentage points (pp) while consuming only 2% of the chip area."}
{"_id": "434553e2a9b6048f1eb7780ec2cd828dc2644013", "title": "Leveraging Legacy Code to Deploy Desktop Applications on the Web", "text": "Xax is a browser plugin model that enables developers to leverage existing tools, libraries, and entire programs to deliver feature-rich applications on the web. Xax employs a novel combination of mechanisms that collectively provide security, OS-independence, performance, and support for legacy code. These mechanisms include memory-isolated native code execution behind a narrow syscall interface, an abstraction layer that provides a consistent binary interface across operating systems, system services via hooks to existing browser mechanisms, and lightweight modifications to existing tool chains and code bases. We demonstrate a variety of applications and libraries from existing code bases, in several languages, produced with various tool chains, running in multiple browsers on multiple operating systems. With roughly two person-weeks of effort, we ported 3.3 million lines of code to Xax, including a PDF viewer, a Python interpreter, a speech synthesizer, and an OpenGL pipeline."}
{"_id": "f9023f510ca00a17b678e54af1187d5e5177d784", "title": "Stacked Convolutional and Recurrent Neural Networks for Music Emotion Recognition", "text": "This paper studies the emotion recognition from musical tracks in the 2-dimensional valence-arousal (V-A) emotional space. We propose a method based on convolutional (CNN) and recurrent neural networks (RNN), having significantly fewer parameters compared with state-of-theart method for the same task. We utilize one CNN layer followed by two branches of RNNs trained separately for arousal and valence. The method was evaluated using the \u201cMediaEval2015 emotion in music\u201d dataset. We achieved an RMSE of 0.202 for arousal and 0.268 for valence, which is the best result reported on this dataset."}
{"_id": "7df4f891caa55917469e27a8da23886d66f9ecca", "title": "Anchoring : accessibility as a cause of judgmental assimilation", "text": "Anchoring denotes assimilation of judgment toward a previously considered value \u2014 an anchor. The selective accessibility model argues that anchoring is a result of selective accessibility of information compatible with an anchor. The present review shows the similarities between anchoring and knowledge accessibility effects. Both effects depend on the applicability of the accessible information, which is also used similarly. Furthermore, both knowledge accessibility and anchoring influence the time needed for the judgment and both display temporal robustness. Finally, we offer recent evidence for the selective accessibility model and demonstrate how the model can be applied to reducing the anchoring effect."}
{"_id": "7b500392be1d20bb7217d40c27a4d974430e3e31", "title": "POSTER: Watch Out Your Smart Watch When Paired", "text": "We coin a new term called \\textit{data transfusion} as a phenomenon that a user experiences when pairing a wearable device with the host device. A large amount of data stored in the host device (e.g., a smartphone) is forcibly copied to the wearable device (e.g., a smart watch) due to pairing while the wearable device is usually less attended. To the best of knowledge, there is no previous work that manipulates how sensitive data is transfused even without user's consent and how users perceive and behave regarding such a phenomenon for smart watches. We tackle this problem by conducting an experimental study of data extraction from commodity devices, such as in Android Wear, watchOS, and Tizen platforms, and a following survey study with 205 smart watch users, in two folds. The experimental studies have shown that a large amount of sensitive data was transfused, but there was not enough user notification. The survey results have shown that users have lower perception on smart watches for security and privacy than smartphones, but they tend to set the same passcode on both devices when needed. Based on the results, we perform risk assessment and discuss possible mitigation that involves volatile transfusion."}
{"_id": "b45c69f287a764deda320160e186d3e9b12bcc2c", "title": "ARGoS: a modular, parallel, multi-engine simulator for multi-robot systems", "text": "We present a novel multi-robot simulator named ARGoS. ARGoS is designed to simulate complex experiments involving large swarms of robots of different types. ARGoS is the first multi-robot simulator that is at the same time both efficient (fast performance with many robots) and flexible (highly customizable for specific experiments). Novel design choices in ARGoS have enabled this breakthrough. First, in ARGoS, it is possible to partition the simulated space into multiple sub-spaces, managed by different physics engines running in parallel. Second, ARGoS\u2019 architecture is multi-threaded, thus designed to optimize the usage of modern multi-core CPUs. Finally, the architecture of ARGoS is highly modular, enabling easy addition of custom features and appropriate allocation of computational resources. We assess the efficiency of ARGoS and showcase its flexibility with targeted experiments. Experimental results demonstrate that simulation run-time increases linearly with the number of robots. A 2D-dynamics simulation of 10,000 e-puck robots can be performed in 60\u00a0% of the time taken by the corresponding real-world experiment. We show how ARGoS can be extended to suit the needs of an experiment in which custom functionality is necessary to achieve sufficient simulation accuracy. ARGoS is open source software licensed under GPL3 and is downloadable free of charge."}
{"_id": "73a9e37e6fe9586982d041f2d5999d41affe9398", "title": "Storytelling in Information Visualizations: Does it Engage Users to Explore Data?", "text": "We present the results of three web-based field experiments, in which we evaluate the impact of using initial narrative visualization techniques and storytelling on user-engagement with exploratory information visualizations. We conducted these experiments on a popular news and opinion outlet, and on a popular visualization gallery website. While data-journalism exposes visualizations to a large public, we do not know how effectively this public makes sense of interactive graphics, and in particular if people explore them to gain additional insight to that provided by the journalists. In contrast to our hypotheses, our results indicate that augmenting exploratory visualizations with introductory 'stories' does not seem to increase user-engagement in exploration."}
{"_id": "b933c84d4c3bc27cda48eb47d68f3b1c197f2e5b", "title": "We can learn your #hashtags: Connecting tweets to explicit topics", "text": "In Twitter, users can annotate tweets with hashtags to indicate the ongoing topics. Hashtags provide users a convenient way to categorize tweets. From the system's perspective, hashtags play an important role in tweet retrieval, event detection, topic tracking, and advertising, etc. Annotating tweets with the right hashtags can lead to a better user experience. However, two problems remain unsolved during an annotation: (1) Before the user decides to create a new hashtag, is there any way to help her/him find out whether some related hashtags have already been created and widely used? (2) Different users may have different preferences for categorizing tweets. However, few work has been done to study the personalization issue in hashtag recommendation. To address the above problems, we propose a statistical model for personalized hashtag recommendation in this paper. With millions of <;tweet, hashtag> pairs being published everyday, we are able to learn the complex mappings from tweets to hashtags with the wisdom of the crowd. Two questions are answered in the model: (1) Different from traditional item recommendation data, users and tweets in Twitter have rich auxiliary information like URLs, mentions, locations, social relations, etc. How can we incorporate these features for hashtag recommendation? (2) Different hashtags have different temporal characteristics. Hashtags related to breaking events in the physical world have strong rise-and-fall temporal pattern while some other hashtags remain stable in the system. How can we incorporate hashtag related features to serve for hashtag recommendation? With all the above factors considered, we show that our model successfully outperforms existing methods on real datasets crawled from Twitter."}
{"_id": "24c143f88333bb20bf8b19df97520b21c68829ac", "title": "Environment-Driven Lexicon Induction for High-Level Instructions", "text": "As described in the main paper, we collected a dataset D = (x(n), e(n), a(n), \u03c0(n))500 n=1. Environment Complexity. Our environments are 3D scenarios consisting of complex objects such as fridge, microwave and television with many states. These objects can be in different spatial relations with respect to other objects. For example, \u201cbag of chips\u201d can be found behind the television. Figure 1 shows some sample environments from our dataset. For example, an object of category television consists of 6 channels, volume level and power status. An object can have different values of states in different environment and different environment consists of different set of objects and their placement. For example, television might be powered on in one environment and closed in another, microwave might have an object inside it or not in different environment, etc. Moreover, there are often more than one object of the same category. For example, our environment typically have two books, five couches, four pillows etc. Objects of the same category can have different appearance. For example, a book can have the cover of a math book or a Guinness book of world record; resulting in complex object descriptions such as in \u201cthrow the sticky stuff in the bowl\u201d. They can also have the same appearance making people use spatial relations or other means while describing them such as in \u201cget me the cup next to the microwave\u201d. This dataset is significantly more challenging compared to the 2D navigation dataset or GUI actions in windows dataset considered earlier."}
{"_id": "5c01842e5431bff0a47140c57b3dd5362107afc6", "title": "Early Breast Cancer Detection using Mammogram Images: A Review of Image Processing Techniques", "text": "Breast cancer is one of the most common cancers worldwide among women so that one in eight women is affected by this disease during their lifetime. Mammography is the most effective imaging modality for early detection of breast cancer in early stages. Because of poor contrast and low visibility in the mammographic images, early detection of breast cancer is a significant step to efficient treatment of the disease. Different computeraided detection algorithms have been developed to help radiologists provide an accurate diagnosis. This paper reviews the most common image processing approaches developed for detection of masses and calcifications. The main focus of this review is on image segmentation methods and the variables used for early breast cancer detection. Texture analysis is the crucial step in any image segmentation techniques which are based on a local spatial variation of intensity or color. Therefore, various methods of texture analysis for mass and micro-calcification detection in mammography are discussed in details."}
{"_id": "4189eac6d7104e00323f78a8897167d50c815c80", "title": "Is beautiful really usable? Toward understanding the relation between usability, aesthetics, and affect in HCI", "text": "This paper analyzes the relation between usability and aesthetics. In a laboratory study, 80 participants used one of four different versions of the same online shop, differing in interface-aesthetics (low vs. high) and interface-usability (low vs. high). Participants had to find specific items and rate the shop before and after usage on perceived aesthetics and perceived usability, which were assessed using four validated instruments. Results show that aesthetics does not affect perceived usability. In contrast, usability has an effect on post-use perceived aesthetics. Our findings show that the \u201cwhat is beautiful is usable\u201d notion, which assumes that aesthetics enhances the perception of usability can be reversed under certain conditions (here: strong usability manipulation combined with a medium to large aesthetics manipulation). Furthermore, our results indicate that the user\u2019s affective experience with the usability of the shop might serve as a mediator variable within the aestheticsusability relation: The frustration of poor usability lowers ratings on perceived aesthetics. The significance of the results is discussed in context of the existing research on the relation between aesthetics and usability."}
{"_id": "2512182cf3c4d7b3df549456fbeceee0a77c3954", "title": "Trust-Aware Collaborative Filtering for Recommender Systems", "text": "Recommender Systems allow people to find the resources they need by making use of the experiences and opinions of their nearest neighbours. Costly annotations by experts are replaced by a distributed process where the users take the initiative. While the collaborative approach enables the collection of a vast amount of data, a new issue arises: the quality assessment. The elicitation of trust values among users, termed \u201cweb of trust\u201d, allows a twofold enhancement of Recommender Systems. Firstly, the filtering process can be informed by the reputation of users which can be computed by propagating trust. Secondly, the trust metrics can help to solve a problem associated with the usual method of similarity assessment, its reduced computability. An empirical evaluation on Epinions.com dataset shows that trust propagation allows to increase the coverage of Recommender Systems while preserving the quality of predictions. The greatest improuvements are achieved for new users, who provided few ratings."}
{"_id": "4c521025566e6afceb9adcf27105cd33e4022fb6", "title": "Fake Review Detection : Classification and Analysis of Real and Pseudo Reviews", "text": "In recent years, fake review detection has attracted significant attention from both businesses and the research community. For reviews to reflect genuine user experiences and opinions, detecting fake reviews is an important problem. Supervised learning has been one of the main approaches for solving the problem. However, obtaining labeled fake reviews for training is difficult because it is very hard if not impossible to reliably label fake reviews manually. Existing research has used several types of pseudo fake reviews for training. Perhaps, the most interesting type is the pseudo fake reviews generated using the Amazon Mechanical Turk (AMT) crowdsourcing tool. Using AMT crafted fake reviews, [36] reported an accuracy of 89.6% using only word n-gram features. This high accuracy is quite surprising and very encouraging. However, although fake, the AMT generated reviews are not real fake reviews on a commercial website. The Turkers (AMT authors) are not likely to have the same psychological state of mind while writing such reviews as that of the authors of real fake reviews who have real businesses to promote or to demote. Our experiments attest this hypothesis. Next, it is naturally interesting to compare fake review detection accuracies on pseudo AMT data and real-life data to see whether different states of mind can result in different writings and consequently different classification accuracies. For real review data, we use filtered (fake) and unfiltered (non-fake) reviews from Yelp.com (which are closest to ground truth labels) to perform a comprehensive set of classification experiments also employing only n-gram features. We find that fake review detection on Yelp\u2019s real-life data only gives 67.8% accuracy, but this accuracy still indicates that n-gram features are indeed useful. We then propose a novel and principled method to discover the precise difference between the two types of review data using the information theoretic measure KL-divergence and its asymmetric property. This reveals some very interesting psycholinguistic phenomena about forced and natural fake reviewers. To improve classification on the real Yelp review data, we propose an additional set of behavioral features about reviewers and their reviews for learning, which dramatically improves the classification result on real-life opinion spam data."}
{"_id": "5fbbfbc05c0e57faf2feb02fb471160807dfc000", "title": "Towards More Confident Recommendations : Improving Recommender Systems Using Filtering Approach Based on Rating Variance", "text": "In the present age of information overload, it is becoming increasingly harder to find relevant content. Recommender systems have been introduced to help people deal with these vast amounts of information and have been widely used in research as well as e-commerce applications. In this paper, we propose several new approaches to improve the accuracy of recommender systems by using rating variance to gauge the confidence of recommendations. We then empirically demonstrate how these approaches work with various recommendation techniques. We also show how these approaches can generate more personalized recommendations, as measured by the coverage metric. As a result, users can be given a better control to choose whether to receive recommendations with higher coverage or higher accuracy."}
{"_id": "6aa1c88b810825ee80b8ed4c27d6577429b5d3b2", "title": "Evaluating collaborative filtering recommender systems", "text": "Recommender systems have been evaluated in many, often incomparable, ways. In this article, we review the key decisions in evaluating collaborative filtering recommender systems: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole. In addition to reviewing the evaluation strategies used by prior researchers, we present empirical results from the analysis of various accuracy metrics on one content domain where all the tested metrics collapsed roughly into three equivalence classes. Metrics within each equivalency class were strongly correlated, while metrics from different equivalency classes were uncorrelated."}
{"_id": "8743b9ce070406d8e44e778af93a564bfb68b330", "title": "Identify Online Store Review Spammers via Social Review Graph", "text": "Online shopping reviews provide valuable information for customers to compare the quality of products, store services, and many other aspects of future purchases. However, spammers are joining this community trying to mislead consumers by writing fake or unfair reviews to confuse the consumers. Previous attempts have used reviewers\u2019 behaviors such as text similarity and rating patterns, to detect spammers. These studies are able to identify certain types of spammers, for instance, those who post many similar reviews about one target. However, in reality, there are other kinds of spammers who can manipulate their behaviors to act just like normal reviewers, and thus cannot be detected by the available techniques.\n In this article, we propose a novel concept of review graph to capture the relationships among all reviewers, reviews and stores that the reviewers have reviewed as a heterogeneous graph. We explore how interactions between nodes in this graph could reveal the cause of spam and propose an iterative computation model to identify suspicious reviewers. In the review graph, we have three kinds of nodes, namely, reviewer, review, and store. We capture their relationships by introducing three fundamental concepts, the trustiness of reviewers, the honesty of reviews, and the reliability of stores, and identifying their interrelationships: a reviewer is more trustworthy if the person has written more honesty reviews; a store is more reliable if it has more positive reviews from trustworthy reviewers; and a review is more honest if many other honest reviews support it. This is the first time such intricate relationships have been identified for spam detection and captured in a graph model. We further develop an effective computation method based on the proposed graph model. Different from any existing approaches, we do not use an review text information. Our model is thus complementary to existing approaches and able to find more difficult and subtle spamming activities, which are agreed upon by human judges after they evaluate our results."}
{"_id": "4152070bd6cd28cc44bc9e54ab3e641426382e75", "title": "A Survey of Text Classification Algorithms", "text": "The problem of classification has been widely studied in the data mining, machine learning, database, and information retrieval communities with applications in a number of diverse domains, such as target marketing, medical diagnosis, news group filtering, and document organization. In this paper we will provide a survey of a wide variety of text classification"}
{"_id": "00758573be71bb03267d15e95474e361978e9717", "title": "RMiner: a tool set for role mining", "text": "Recently, there are many approaches proposed for mining roles using automated technologies. However, it lacks a tool set that can be used to aid the application of role mining approaches and update role states. In this demonstration, we introduce a tool set, RMiner, which is based on the core of WEKA, an open source data mining tool. RMiner implements most of the classic and latest role mining algorithms and provides interactive tools for administrator to update role states. The running examples of RMiner are presented to demonstrate the effectiveness of the tool set."}
{"_id": "6ed6ecaaf5e618b11305b6dad415b1b5546c9bb8", "title": "Leading us not unto temptation: momentary allurements elicit overriding goal activation.", "text": "The present research explored the nature of automatic associations formed between short-term motives (temptations) and the overriding goals with which they interfere. Five experimental studies, encompassing several self-regulatory domains, found that temptations tend to activate such higher priority goals, whereas the latter tend to inhibit the temptations. These activation patterns occurred outside of participants' conscious awareness and did not appear to tax their mental resources. Moreover, they varied as a function of subjective goal importance and were more pronounced for successful versus unsuccessful self-regulators in a given domain. Finally, priming by temptation stimuli was found not only to influence the activation of overriding goals but also to affect goal-congruent behavioral choices."}
{"_id": "71715f3f4f7cfbde9f23e04a8f05b4cf87c8ec83", "title": "Hedonism , Achievement , and Power : Universal values that characterize the Dark Triad", "text": "Using a sample of Swedes and Americans (N = 385), we attempted to understand the Dark Triad traits (i.e., Machiavellianism, narcissism, and psychopathy) in terms of universal social values. The Dark Triad traits correlated significantly with all 10 value types, forming a sinusoid pattern corresponding to the value model circumplex. In regression analyses, Machiavellianism and narcissism were positively associated with the values Achievement and Power, while psychopathy was positively associated with the values Hedonism, and Power. In addition, the Dark Triad traits explained significant variance over the Big Five traits in accounting for individual differences in social values. Differences between the Swedish and the US sample in the social value Achievement was mediated by the Dark Triad traits, as well as age. Given the unique complex of values accounted for by the Dark Triad traits compared to the Big Five traits, we argue that the former account for a system of self-enhancing \u2018\u2018dark values\u2019\u2019, often hidden but constantly contributing in evaluations of others. 2015 Elsevier Ltd. All rights reserved."}
{"_id": "4a4e66f72890b0473de1d59f52ad3819bd0413a7", "title": "Parallel Training of Neural Networks for Speech Recognition", "text": "In this paper we describe parallel implementation of ANN training procedure based on block mode back-propagation learning algorithm. Two different approaches to parallelization were implemented. The first is data parallelization using POSIX threads, it is suitable for multi-core computers. The second is node parallelization using high performance SIMD architecture of GPU with CUDA, suitable for CUDA enabled computers. We compare the speed-up of both approaches by learning typically-sized network on the real-world phoneme-state classification task, showing nearly 10 times reduction when using CUDA version, while the 8-core server with multi-thread version gives only 4 times reduction. In both cases we compared to an already BLAS optimized implementation. The training tool will be released as Open-Source software under project name TNet."}
{"_id": "9feed125ff3e8b92510eb979e37ea1bbc35156b6", "title": "Neurologic foundations of spinal cord fusion (GEMINI).", "text": "Cephalosomatic anastomosis has been carried out in both monkeys and mice with preservation of brain function. Nonetheless the spinal cord was not reconstructed, leaving the animals unable to move voluntarily. Here we review the details of the GEMINI spinal cord fusion protocol, which aims at restoring electrophysiologic conduction across an acutely transected spinal cord. The existence of the cortico-truncoreticulo-propriospinal pathway, a little-known anatomic entity, is described, and its importance concerning spinal cord fusion emphasized. The use of fusogens and electrical stimulation as adjuvants for nerve fusion is addressed. The possibility of achieving cephalosomatic anastomosis in humans has become reality in principle."}
{"_id": "45cd2f3575a42f6a611957cec4fa9b9a1359af89", "title": "High speed true random number generator based on open loop structures in FPGAs", "text": "Most hardware \u2018\u2018True\u2019\u2019 Random Number Generators (TRNG) take advantage of the thermal agitation around a flip-flop metastable state. In Field Programmable Gate Arrays (FPGA), the classical TRNG structure uses at least two oscillators, build either from PLL or ring oscillators. This creates good TRNG albeit limited in frequency by the interference rate which cannot exceed a few Mbit/s. This article presents an architecture allowing higher bit rates while maintaining provable unconditional security. This speed requirement becomes stringent for secure communication applications such as the cryptographic quantum key distribution protocols. The proposed architecture is very simple and generic as it is based on an open loop structure with no specific component such as PLL. & 2009 Elsevier Ltd. All rights reserved."}
{"_id": "dd6bb309fd83347026258baeb2cd554ec4fbf5a7", "title": "Sources of Customer Satisfaction and Dissatisfaction with Information Technology Help Desks", "text": "As the use, develophient, and control of information systems continues to become diffused throughout organizations and society, the information technology (IT) help desk function plays an increasingly important role in effective management and utilization of information resources. Variously referred to as information centers, software support centers, software hotlines, and PC help desks, such centers have been established tohelp end users resolve problems and obtain information about new functions in the information systems they use. This study investigates the determinants of customer satisfaction and dissatisfaction with service encounters involving information technologyhelp desks. The IS satisfaction research that has been done to datehas viewed satisfaction as an attitudinal, bipolar evaluative construct (Melone 1990; Heckman 1993). Satisfaction as viewed in this way is a relatively enduring, stable cognitive state. In the marketing literature, however, satisfaction has also been conceptualized as a less enduring post-consumption response. In this study, a conceptualization of the satisfaction construct based on the service encounter and consumer product satisfaction literatures (e.g; Bitner, Boon}s and Tetreault 1990) is adopted as a starting point. After responses to help desk service encounters have been analyzed from this perspective, an attempt is made to integrate these findings with attitudinal satisfaction constructs. The study employs the Critical Incident Technique (CIT), an inductive, qualitative methodology. It consists of a set of specifically defined procedures for collecting observations of human behavior and classifying them in such a way as to make them useful in addressing practical problems (Flanagan 1954). It is a I thod that is comparable to other inductive grouping procedures such as factor analysis, cluster analysis, and multidimensional scaling. Unlike other such procedures, however, CIT uses content analysis of stories rather than quantitative solutions in the data analysis stage of the procedure. The study addressed four research questions: 1. What specific behaviors and events lead to user/customer satisfaction and dissatisfaction with IT help desk service encounters? 2. Are the underlying events and behaviors that lead to satisfactory and dissatisfactory encounters similar? That is, are these events and behaviors opposites or mirror images of each other? 3. Are the underlying events and behaviors in help desk service encounters similar to those found in other contexts? 4. Can an understanding ofuser/customer responses to help desk service encounters shed light on the development and modification ofa tudinal satisfaction constructs such as UIS (Ives, Olson and Baroudi 1983), EUIS (Doll and Torkzadeh 1988), and VMS Satisfaction (Heckman 1993)? Descriptions of approximately500 incidents have been obtained to dateand analyzed. A tentative classification scheme was developed from the preliminary anal>sis. It was modeled after the incident classification scheme developed by Bitner, Booms and Tetreault and uses the same three major categories: core service failure, special customer request, and extraordinary provider behavior. As in the Bitner, Booms and Tetreault analysis, results suggest that a core service failure does not inevitably lead to dissatisfaction. 
Initial analysis also suggests that while the scheme is applicable in some ways, the knowledge-based nature of the ]T help desk service encounter requires several additional constructs to account for various customer responses."}
{"_id": "4840349d684f59640c2431de4f66659818e390c7", "title": "An evolutionary optimization framework for neural networks and neuromorphic architectures", "text": "As new neural network and neuromorphic architectures are being developed, new training methods that operate within the constraints of the new architectures are required. Evolutionary optimization (EO) is a convenient training method for new architectures. In this work, we review a spiking neural network architecture and a neuromorphic architecture, and we describe an EO training framework for these architectures. We present the results of this training framework on four classification data sets and compare those results to other neural network and neuromorphic implementations. We also discuss how this EO framework may be extended to other architectures."}
{"_id": "e050e89d01afffd5b854458fc48c9d6720a8072c", "title": "THE DOMAIN AND CONCEPTUAL FOUNDATIONS OF RELATIONSHIP MARKETING", "text": ""}
{"_id": "e3de767356caa7960303558f7eb560c73fbe42ba", "title": "Development and design of a robotic manta ray featuring flexible pectoral fins", "text": "This paper showed our preliminary work in probing into fish robot with pectoral fin propulsion. The aim of this project is to develop a fish robot mimicking manta rays that swim with flexible pectoral fins in oscillation motion. Based on the analysis of motion feature and skeletal structure of pectoral fin of manta rays, a kind of flexible pectoral fin made of silicone rubber was designed. Experiments on the propulsion of fin prototype were carried out in tank. Experiment results showed that flexible pectoral fin in oscillation motion could generate great thrust and the thrust would increase with fin-beat frequency and amplitude increasing. In the end, a robot fish with flexible pectoral fins was built. The speed of robot fish could reach the maximum of 1.4 times BL (body length) per second. Experiments proved that it was feasible to develop a robot fish featuring flexible pectoral fins in oscillation motion and the method to mimic manta rays was a great idea to improve performance of underwater vehicles."}
{"_id": "712c13fbbabf086df5a6d795377d2b435e132a57", "title": "Video-Based Human Walking Estimation Using Joint Gait and Pose Manifolds", "text": "We study two fundamental issues about video-based human walking estimation, where the goal is to estimate 3D gait kinematics (i.e., joint positions) from 2D gait appearances (i.e., silhouettes). One is how to model the gait kinematics from different walking styles, and the other is how to represent the gait appearances captured under different views and from individuals of distinct walking styles and body shapes. Our research is conducted in three steps. First, we propose the idea of joint gait-pose manifold (JGPM), which represents gait kinematics by coupling two nonlinear variables, pose (a specific walking stage) and gait (a particular walking style) in a unified latent space. We extend the Gaussian process latent variable model (GPLVM) for JGPM learning, where two heuristic topological priors, a torus and a cylinder, are considered and several JGPMs of different degrees of freedom (DoFs) are introduced for comparative analysis. Second, we develop a validation technique and a series of benchmark tests to evaluate multiple JGPMs and recent GPLVMs in terms of their performance for gait motion modeling. It is shown that the toroidal prior is slightly better than the cylindrical one, and the JGPM of 4 DoFs that balances the toroidal prior with the intrinsic data structure achieves the best performance. Third, a JGPM-based visual gait generative model (JGPM-VGGM) is developed, where JGPM plays a central role to bridge the gap between the gait appearances and the gait kinematics. Our proposed JGPM-VGGM is learned from Carnegie Mellon University MoCap data and tested on the HumanEva-I and HumanEva-II data sets. Our experimental results demonstrate the effectiveness and competitiveness of our algorithms compared with existing algorithms."}
{"_id": "8bf72fb4edcb6974d3c4b0b2df63d9fd75c5dc4f", "title": "Semantic Sentiment Analysis Challenge at ESWC2017", "text": "Sentiment Analysis is a widely studied research field in both research and industry, and there are different approaches for addressing sentiment analysis related tasks. Sentiment Analysis engines implement approaches spanning from lexicon-based techniques, to machine learning, or involving syntactical rules analysis. Such systems are already evaluated in international research challenges. However, Semantic Sentiment Analysis approaches, which take into account or rely also on large semantic knowledge bases and implement Semantic Web best practices, are not under specific experimental evaluation and comparison by other international challenges. Such approaches may potentially deliver higher performance, since they are also able to analyze the implicit, semantics features associated with natural language concepts. In this paper, we present the fourth edition of the Semantic Sentiment Analysis Challenge, in which systems implementing or relying on semantic features are evaluated in a competition involving large test sets, and on different sentiment tasks. Systems merely based on syntax/word-count or just lexicon-based approaches have been excluded by the evaluation. Then, we present the results of the evaluation for each task and show the winner of the most innovative approach award, that combines several knowledge bases for addressing the sentiment analysis task."}
{"_id": "35c3a29076b939bfa1b4314e0affc8413446230e", "title": "Beidou combined-FFT acquisition algorithm for resource-constrained embedded platform", "text": "The length of Beidou CB2I code is twice that of the Global Position System (GPS) C/A code, and the resource consumption will be doubled on embedded hardware with a finite resource if it is processed with the traditional algorithm. Hence, this paper proposes an acquisition algorithm based on combined Fast Fourier Transform (FFT), which separates a signal into odd and even sequences and processes them in serial. By doing so, FFT length is halved and memory usage is decreased. The paper then goes on to analyze the space complexity and time complexity of the proposed algorithm and the traditional algorithm on the embedded platform. The results show that the memory usage of the proposed algorithm is reduced by 23.3% and the amount of multiplication by 12% while the acquisition accuracy remains stable as the PTP values of these two algorithms are close. In other words, the proposed algorithm consumes less resource with no acquisition accuracy losses."}
{"_id": "6e669e90a34c4179f9364406d8a7a7f855745086", "title": "Speeding up distributed request-response workflows", "text": "We found that interactive services at Bing have highly variable datacenter-side processing latencies because their processing consists of many sequential stages, parallelization across 10s-1000s of servers and aggregation of responses across the network. To improve the tail latency of such services, we use a few building blocks: reissuing laggards elsewhere in the cluster, new policies to return incomplete results and speeding up laggards by giving them more resources. Combining these building blocks to reduce the overall latency is non-trivial because for the same amount of resource (e.g., number of reissues), different stages improve their latency by different amounts. We present Kwiken, a framework that takes an end-to-end view of latency improvements and costs. It decomposes the problem of minimizing latency over a general processing DAG into a manageable optimization over individual stages. Through simulations with production traces, we show sizable gains; the 99th percentile of latency improves by over 50% when just 0.1% of the responses are allowed to have partial results and by over 40% for 25% of the services when just 5% extra resources are used for reissues."}
{"_id": "ed1369fbd9290cce28886c6fba560aff41913d18", "title": "Beyond Wise et al . : Neuroleptic-induced \u201c anhedonia \u201d in rats : Pimozide blocks reward quality of food", "text": "O all the neurotransmitters in the brain the one that is best known to the lay public, by far, is dopamine (DA). To the public, and much of the media, DA is the brain\u2019s \u201cpleasure transmitter.\u201d By this popular view, it is a burst of DA that produces the pleasure associated with the receipt or anticipation of natural rewards \u2013 a chocolate cake, sex, even social rewards \u2013 as well as the euphoria produced by addictive drugs, such as amphetamine, cocaine or heroin. In the popular media it is often suggested that it is pursuit of this DA \u201crush\u201d that leads to a variety of impulse control disorders, such as over-eating, gambling and addiction. Of all the ideas, in all of neuroscience, this is one of the most widely known, and its origin can be traced to the 1978 paper by Roy Wise, Joan Spindler, Harriet deWit and Gary Gerber, that is the topic of this chapter. This paper is an excellent example of how the interpretation of a seemingly simple experimental finding can profoundly influence the thinking and trajectory of an entire field, and the public imagination, for decades \u2013 even if it is wrong. But before getting to that conclusion, we should step back and consider the time leading up to the publication of this extremely influential paper. It was only a little over a decade earlier that DA was first recognized as a neurotransmitter in its own right (rather than just being a precursor of norepinephrine). This came about from a series of important studies by researchers in Sweden who developed a histofluorescence method to identify and map monoamine neurotransmitters in the brain (including DA). They thus effectively created the field of chemical neuroanatomy, as well as vividly mapping the location of dopamine in the brain. They also produced useful methods to selectively lesion monoamine-containing cells with neurotoxins, such as 6-hydroxydopamine (6-OHDA), which would help reveal dopamine\u2019s psychological functions via behavioral consequences. As a result of these studies it became clear that DA-producing neurons located in the midbrain (the substantia nigra and ventral tegmental area) send dense projections to the dorsal and ventral striatum (the caudate-putamen and nucleus accumbens) in the"}
{"_id": "8473a97a461625781d0a4747a840196b9a972f33", "title": "Benchmarking distributed data warehouse solutions for storing genomic variant information", "text": "Database URL\nhttps://github.com/ZSI-Bio/variantsdwh."}
{"_id": "e413224e24626cf1e846ae14b5a6267a4105b470", "title": "Towards Automated Driving: Unmanned Protective Vehicle for Highway Hard Shoulder Road Works", "text": "Mobile road works on the hard shoulder of German highways bear an increased accident risk for the crew of the protective vehicle which safeguards road works against moving traffic. The project \u201cAutomated Unmanned Protective Vehicle for Highway Hard Shoulder Road Works\u201d aims at the unmanned operation of the protective vehicle in order to reduce this risk. Simultaneously, this means the very first unmanned operation of a vehicle on German roads in public traffic. This contribution introduces the project by pointing out the main objectives and demonstrates the current state of the system design regarding functionality, modes of operation, as well as the functional system architecture. Pivotal for the project, the scientific challenges raised by the unmanned operation - strongly related to the general challenges in the field of automated driving - are presented as well. The results of the project shall serve as a basis to stimulate an advanced discussion about ensuring safety for fully automated vehicles in public traffic operating at higher speeds and in less defined environments. Thus, this contribution aims at contacting the scientific community in an early state of the project."}
{"_id": "f8e534431c743d39b34c326b72c7bf86af3e554d", "title": "Overconfidence as a cause of diagnostic error in medicine.", "text": "The great majority of medical diagnoses are made using automatic, efficient cognitive processes, and these diagnoses are correct most of the time. This analytic review concerns the exceptions: the times when these cognitive processes fail and the final diagnosis is missed or wrong. We argue that physicians in general underappreciate the likelihood that their diagnoses are wrong and that this tendency to overconfidence is related to both intrinsic and systemically reinforced factors. We present a comprehensive review of the available literature and current thinking related to these issues. The review covers the incidence and impact of diagnostic error, data on physician overconfidence as a contributing cause of errors, strategies to improve the accuracy of diagnostic decision making, and recommendations for future research."}
{"_id": "118422012ca38272ac766294f27ca83b5319d3cb", "title": "Forecasting the weather of Nevada: A deep learning approach", "text": "This paper compares two approaches for predicting air temperature from historical pressure, humidity, and temperature data gathered from meteorological sensors in Northwestern Nevada. We describe our data and our representation and compare a standard neural network against a deep learning network. Our empirical results indicate that a deep neural network with Stacked Denoising Auto-Encoders (SDAE) outperforms a standard multilayer feed forward network on this noisy time series prediction task. In addition, predicting air temperature from historical air temperature data alone can be improved by employing related weather variables like barometric pressure, humidity and wind speed data in the training process."}
{"_id": "661ca6c0f4ecbc08961c7f175f8e5446da224749", "title": "Equity price direction prediction for day trading: Ensemble classification using technical analysis indicators with interaction effects", "text": "We investigate the performance of complex trading rules in equity price direction prediction, over and above continuous-valued indicators and simple technical trading rules. Ten of the most popular technical analysis indicators are included in this research. We use Random Forest ensemble classifiers using minute-by-minute stock market data. Results show that our models have predictive power and yield better returns than the buy-and-hold strategy when disregarding transaction costs both in terms of number of stocks with profitable trades as well as overall returns. Moreover, our findings show that two-way and three-way combinations, i.e., complex trading rules, are important to \u201cbeat\u201d the buy-and-hold strategy."}
{"_id": "7ec4dcda2991371142798621ab865bde75edb2b9", "title": "10 Watts UHF broadband GaN based power amplifier for multi-band applications", "text": "The demand of broadband amplifier has been boosted to meet the requirements of multi-mode and multi-band wireless applications. GaN HEMT is the next generation of RF power transistor technology. In this paper, we have designed a 10W UHF broadband class-AB Power Amplifier (PA) based on GaN HEMT. The proposed amplifier has been designed and developed in the frequency range from 200-500 MHz. A maximum drain efficiency of 71% is achieved."}
{"_id": "5198e078fe7a9c5a9ed6ed9548ec1b1c34daf12b", "title": "Experimental evaluation of the schunk 5-Finger gripping hand for grasping tasks", "text": "In order to perform useful tasks, a service robot needs to manipulate objects in its environment. In this paper, we propose a method for experimental evaluation of the suitability of a robotic hand for grasping tasks in service robotics. The method is applied to the Schunk 5-Finger Gripping Hand, which is a mechatronic gripper designed for service robots. During evaluation, it is shown, that it is able to grasp various common household objects and execute the grasps from the well known Cutkosky grasp taxonomy [1]. The result is, that it is a suitable hand for service robot tasks."}
{"_id": "dd30bbd7149678d9be191df4e286842001be6fb0", "title": "Keyphrase Extraction Using Knowledge Graphs", "text": "Extracting keyphrases from documents automatically is an important and interesting task since keyphrases provide a quick summarization for documents. Although lots of efforts have been made on keyphrase extraction, most of the existing methods (the co-occurrence-based methods and the statistic-based methods) do not take semantics into full consideration. The co-occurrence-based methods heavily depend on the co-occurrence relations between two words in the input document, which may ignore many semantic relations. The statistic-based methods exploit the external text corpus to enrich the document, which introduce more unrelated relations inevitably. In this paper, we propose a novel approach to extract keyphrases using knowledge graphs, based on which we could detect the latent relations of two keyterms (i.e., noun words and named entities) without introducing many noises. Extensive experiments over real data show that our method outperforms the state-of-the-art methods including the graph-based co-occurrence methods and statistic-based clustering methods."}
{"_id": "7dbdab80492fe692484f3267e50388ac4b26538b", "title": "The Situated Function - Behaviour - Structure Framework", "text": "This paper extends the Function-Behaviour-Structure (FBS) framework, which proposed eight fundamental processes involved in designing. This framework did not explicitly account for the dynamic character of the context in which designing takes place, described by the notion of situatedness. This paper describes this concept as a recursive interrelationship between different environments, which, together with a model of constructive memory, provides the foundation of a situated FBS framework. The eight fundamental processes are then reconstructed within this new framework to represent designing in a dynamic world."}
{"_id": "15536ea6369d2a3df504605cc92d78a7c7170d65", "title": "Parametric Exponential Linear Unit for Deep Convolutional Neural Networks", "text": "Object recognition is an important task for improving the ability of visual systems to perform complex scene understanding. Recently, the Exponential Linear Unit (ELU) has been proposed as a key component for managing bias shift in Convolutional Neural Networks (CNNs), but defines a parameter that must be set by hand. In this paper, we propose learning a parameterization of ELU in order to learn the proper activation shape at each layer in the CNNs. Our results on the MNIST, CIFAR-10/100 and ImageNet datasets using the NiN, Overfeat, All-CNN and ResNet networks indicate that our proposed Parametric ELU (PELU) has better performances than the non-parametric ELU. We have observed as much as a 7.28% relative error improvement on ImageNet with the NiN network, with only 0.0003% parameter increase. Our visual examination of the non-linear behaviors adopted by Vgg using PELU shows that the network took advantage of the added flexibility by learning different activations at different layers."}
{"_id": "0172cec7fef1815e460678d12eb51fa0d051a677", "title": "Unsupervised feature learning for audio classification using convolutional deep belief networks", "text": "In recent years, deep learning approaches have gained significant interest as a way of building hierarchical representations from unlabeled data. However, to our knowledge, these deep learning approaches have not been extensively studied for auditory data. In this paper, we apply convolutional deep belief networks to audio data and empirically evaluate them on various audio classification tasks. In the case of speech data, we show that the learned features correspond to phones/phonemes. In addition, our feature representations learned from unlabeled audio data show very good performance for multiple audio classification tasks. We hope that this paper will inspire more research on deep learning approaches applied to a wide range of audio recognition tasks."}
{"_id": "0585b80713848a5b54b82265a79f031a4fbd3332", "title": "Bottom-Up Segmentation for Top-Down Detection", "text": "In this paper we are interested in how semantic segmentation can help object detection. Towards this goal, we propose a novel deformable part-based model which exploits region-based segmentation algorithms that compute candidate object regions by bottom-up clustering followed by ranking of those regions. Our approach allows every detection hypothesis to select a segment (including void), and scores each box in the image using both the traditional HOG filters as well as a set of novel segmentation features. Thus our model ``blends'' between the detector and segmentation models. Since our features can be computed very efficiently given the segments, we maintain the same complexity as the original DPM. We demonstrate the effectiveness of our approach in PASCAL VOC 2010, and show that when employing only a root filter our approach outperforms Dalal & Triggs detector on all classes, achieving 13% higher average AP. When employing the parts, we outperform the original DPM in $19$ out of $20$ classes, achieving an improvement of 8% AP. Furthermore, we outperform the previous state-of-the-art on VOC 2010 test by 4%."}
{"_id": "71d17d1a0d9c0b3863e5072bd032edc2e9dc03ce", "title": "Design of planar substrate integrated waveguide (SIW) phase shifter using air holes", "text": "In this paper, a design of Substrate Intergrated Waveguide (SIW) Phase Shifter is designed by placing multiple rows of air holes in a single substrate which modifies effective dielectric constant of the material and as a result, the phase of the propagating wave is modified while maintaining insertion loss less than 0.8 dB. The proposed design operates in X band (8-12 GHz) having reflection coefficient better than -15 dB within a band of 3 GHz (31.5%) and a phase shift of 24.8\u00b0\u00b14.7\u00b0 throughout the operating band."}
{"_id": "21da9ece5587df5a2ef79bf937ea19397abecfa0", "title": "Predictive coding under the free-energy principle.", "text": "This paper considers prediction and perceptual categorization as an inference problem that is solved by the brain. We assume that the brain models the world as a hierarchy or cascade of dynamical systems that encode causal structure in the sensorium. Perception is equated with the optimization or inversion of these internal models, to explain sensory data. Given a model of how sensory data are generated, we can invoke a generic approach to model inversion, based on a free energy bound on the model's evidence. The ensuing free-energy formulation furnishes equations that prescribe the process of recognition, i.e. the dynamics of neuronal activity that represent the causes of sensory input. Here, we focus on a very general model, whose hierarchical and dynamical structure enables simulated brains to recognize and predict trajectories or sequences of sensory states. We first review hierarchical dynamical models and their inversion. We then show that the brain has the necessary infrastructure to implement this inversion and illustrate this point using synthetic birds that can recognize and categorize birdsongs."}
{"_id": "411582b1755a7c5424febdb2ec730a7239c5d6d5", "title": "Self-inflicted explosive death by intra-oral detonation of a firecracker: a case report.", "text": "Self-inflicted explosive deaths due to detonation of fireworks are rare. In this case report, a peculiar case of an elderly male who discharged a firecracker inside his mouth, resulting in fatal blast induced craniofacial injuries, is described. There is paucity of published data describing fireworks-related suicidal and/or non-suicidal deaths. Even scantier data is present specifically describing fireworks-related blast induced neurotrauma and the mechanism(s) of injury involved in such cases. This case report emphasizes the severe damage that a commercially available explosive, the so-called \"Gorilla Bomb\", can cause, and raises questions about the relative ease of its acquisition."}
{"_id": "3e03be1568929940696fb75f67d3209a26a568ca", "title": "COMPARISON OF DIFFERENT METHODS OF ANTIOXIDANT ACTIVITY EVALUATION OF GREEN AND ROAST C . ARABICA AND C . ROBUSTA COFFEE BEANS", "text": "aDepartment of Hygiene and Technology of Vegetable Foodstuff, University of Veterinary and Pharmaceutical Sciences Brno, Palack\u00e9ho t\u0159. 1/3, 612 42 Brno Kr\u00e1lovo Pole. Czech Republic bDepartment of Biochemistry and Biophysics, University of Veterinary and Pharmaceutical Sciences Brno, Palack\u00e9ho t\u0159. 1/3, 612 42 Brno Kr\u00e1lovo Pole. Czech Republic cDepartment of Food Science and Technology, Kaunas University of Technology, Radvil\u0117n\u0173 pl. 19, LT-50254 Kaunas. Lithuania"}
{"_id": "dd0d47a6997a09370fb6007d70827934a509115d", "title": "Clustering algorithm for internet of vehicles (IoV) based on dragonfly optimizer (CAVDO)", "text": "Internet of vehicles (IoV) is a branch of the internet of things (IoT) which is used for communication among vehicles. As vehicular nodes are considered always in motion, hence it causes the frequent changes in the topology. These changes cause major issues in IoV like scalability, dynamic topology changes, and shortest path for routing. Clustering is among one of the solutions for such type of issues. In this paper, the stability of IoV topology in a dynamic environment is focused. The proposed metaheuristic dragonfly-based clustering algorithm CAVDO is used for cluster-based packet route optimization to make stable topology, and mobility aware dynamic transmission range algorithm (MA-DTR) is used with CAVDO for transmission range adaptation on the basis of traffic density. The proposed CAVDO with MA-DTR is compared with the progressive baseline techniques ant colony optimization (ACO) and comprehensive learning particle swarm optimization (CLPSO). Numerous experiments were performed keeping in view the transmission dynamics for stable topology. CAVDO performed better in many cases providing minimum number of clusters according to current channel condition. Considerable important parameters involved in clustering process are: number of un-clustered nodes as a re-clustering criterion, clustering time, re-clustering delay, dynamic transmission range, direction, and speed. According to these parameters, results indicate that CAVDO outperformed ACO-based clustering and CLPSO in various network settings. Additionally, to improve the network availability and to incorporate the functionalities of next-generation network infrastructure, 5G-enabled architecture is also utilized."}
{"_id": "30d6183851366a484d438385c2167a764edbab8a", "title": "Influence of PEEK Coating on Hip Implant Stress Shielding: A Finite Element Analysis", "text": "Stress shielding is a well-known failure factor in hip implants. This work proposes a design concept for hip implants, using a combination of metallic stem with a polymer coating (polyether ether ketone (PEEK)). The proposed design concept is simulated using titanium alloy stems and PEEK coatings with thicknesses varying from 100 to 400\u2009\u03bcm. The Finite Element analysis of the cancellous bone surrounding the implant shows promising results. The effective von Mises stress increases between 81 and 92% for the complete volume of cancellous bone. When focusing on the proximal zone of the implant, the increased stress transmission to the cancellous bone reaches between 47 and 60%. This increment in load transferred to the bone can influence mineral bone loss due to stress shielding, minimizing such effect, and thus prolonging implant lifespan."}
{"_id": "fb619c5a074393efbaa865f24631598350cf1fef", "title": "A Multi-aspect Analysis of Automatic Essay Scoring for Brazilian Portuguese", "text": "While several methods for automatic essay scoring (AES) for the English language have been proposed, systems for other languages are unusual. To this end, we propose in this paper a multi-aspect AES system for Brazilian Portuguese which we apply to a collection of essays, which human experts evaluated according to the five aspects defined by the Brazilian Government for the National High School Exam (ENEM). These aspects are skills that student must master and every skill is assessed separately from one another. In addition to prediction, we also performed feature analysis for each aspect. The proposed AES system employs several features already used by AES systems for the English language. Our results show that predictions for some aspects performed well with the employed features, while predictions for other aspects performed poorly. Furthermore, the detailed feature analysis we performed made it possible to note their independent impacts on each of the five aspects. Finally, aside from these contributions, our work reveals some challenges and directions for future research, related, for instance, to the fact that the ENEM has over eight million yearly enrollments."}
{"_id": "001be2c9a96a82c1e380970e8460827ecce1cb97", "title": "Physician well-being: A powerful way to improve the patient experience.", "text": "Improving the patient experience\u2014or the patientcenteredness of care\u2014is a key focus of health care organizations. With a shift to reimbursement models that reward higher patient experience scores, increased competition for market share, and cost constraints emphasizing the importance of patient loyalty, many health care organizations consider optimizing the patient experience to be a key strategy for sustaining financial viability. A survey conducted by The Beryl Institute, which supports research on the patient experience, found that hospital executives ranked patient experience as a clear top priority, falling a close second behind quality and safety.1 According to the institute, engagement with employees, including physicians, is the most effective way to improve the patient experience, as more engaged, satisfied staffers provide better service and care to patients.2 Data on physician satisfaction support this supposition: Research has shown that physician career satisfaction closely correlates with patient satisfaction within a geographic region.3 Given the importance of physician satisfaction to the patient experience, it is concerning that dissatisfaction and burnout are on the rise among physicians. In a 2012 online poll of more than 24,000 physicians across the country, only 54 percent would choose medicine again as a career, down from 69 percent in 2011.4 Almost half of the more than 7,000 physicians responding to a recent online survey reported at least one symptom of burnout.5 Physician dissatisfaction and burnout have a profound negative effect on the patient experience of care. Physician leaders can take steps within their organizations to foster physician well-being, improving the care experience for physicians and patients while strengthening the sustainability of their organizations."}
{"_id": "38a935e212c8e10460545b74a7888e3966c03e74", "title": "Amodal Detection of 3D Objects: Inferring 3D Bounding Boxes from 2D Ones in RGB-Depth Images", "text": "This paper addresses the problem of amodal perception of 3D object detection. The task is to not only find object localizations in the 3D world, but also estimate their physical sizes and poses, even if only parts of them are visible in the RGB-D image. Recent approaches have attempted to harness point cloud from depth channel to exploit 3D features directly in the 3D space and demonstrated the superiority over traditional 2.5D representation approaches. We revisit the amodal 3D detection problem by sticking to the 2.5D representation framework, and directly relate 2.5D visual appearance to 3D objects. We propose a novel 3D object detection system that simultaneously predicts objects 3D locations, physical sizes, and orientations in indoor scenes. Experiments on the NYUV2 dataset show our algorithm significantly outperforms the state-of-the-art and indicates 2.5D representation is capable of encoding features for 3D amodal object detection. All source code and data is on https://github.com/phoenixnn/Amodal3Det."}
{"_id": "a825fa40f998cff4ca2908f867ba1d3757e355d5", "title": "An architecture of agent-based multi-layer interactive e-learning and e-testing platform", "text": "E-learning is the synthesis of multimedia and social media platforms powered by Internet and mobile technologies. Great popularity of e-learning is encouraging governments and educational institutions to adopt and develop e-learning cultures in societies in general and universities in particular. In traditional e-learning systems, all components (agents and services) are tightly coupled into a single system. In this study, we propose a new architecture for e-learning with two subsystems, namely, e-learning and e-testing. The motivation of the research is to improve the effectiveness of the learning process by extracting relevant features for elastic learning and testing process. We employ a multi-agent system because it contains five-layer architecture, including agents at various levels. We also propose a novel method for updating content through question and answer between e-learners and intelligent agents. To achieve optimization, we design a system that applies various technologies, which M. Arif (B)\u00b7 A. Karim \u00b7 K. A. Alam \u00b7 S. Farid \u00b7 S. Iqbal Faculty of Computer Science and Information Technology, University of Malaya, 50603 Kuala Lumpur, Malaysia e-mail: arifmuhammad36@siswa.um.edu.my M. Arif \u00b7 M. Illahi Departmnet of Computer Science, COMSATS Institute of Information Technology Islamabad, Islamabad, Pakistan S. Shamshirband Faculty of Computer Science and Information Technology, Department of Computer System and Technology, University of Malaya, 50603 Kuala Lumpur, Malaysia e-mail: shamshirband1396@gmail.com S. Shamshirband Department of Computer Science, Chalous Branch, Islamic Azad University (IAU), Chalous, 46615-397 Mazandaran, Iran Z. Buang Universiti Teknikal Malaysia Melaka, Hang Tuah Jaya, 76100 Durian Tunggal, Melaka, Malaysia V. E. Balas Department of Automation and Applied Informatics, University of Arad, 310130 Arad, Romania"}
{"_id": "1f4789a2effea966c8fd10491fe859cfc7607137", "title": "A Dynamic Bayesian Network Model for Autonomous 3D Reconstruction from a Single Indoor Image", "text": "When we look at a picture, our prior knowledge about the world allows us to resolve some of the ambiguities that are inherent to monocular vision, and thereby infer 3d information about the scene. We also recognize different objects, decide on their orientations, and identify how they are connected to their environment. Focusing on the problem of autonomous 3d reconstruction of indoor scenes, in this paper we present a dynamic Bayesian network model capable of resolving some of these ambiguities and recovering 3d information for many images. Our model assumes a \"floorwall\" geometry on the scene and is trained to recognize the floor-wall boundary in each column of the image. When the image is produced under perspective geometry, we show that this model can be used for 3d reconstruction from a single image. To our knowledge, this was the first monocular approach to automatically recover 3d reconstructions from single indoor images."}
{"_id": "3746184fd7b82bbef7baee49daa9c7d79c37ea06", "title": "Forecasting exchange rate with deep belief networks", "text": "Forecasting exchange rates is an important financial problem which has received much attention. Nowadays, neural network has become one of the effective tools in this research field. In this paper, we propose the use of a deep belief network (DBN) to tackle the exchange rate forecasting problem. A DBN is applied to predict both British Pound/US dollar and Indian rupee/US dollar exchange rates in our experiments. We use six evaluation criteria to evaluate its performance. We also compare our method to a feedforward neural network (FFNN), which is the state-of-the-art method for forecasting exchange rate with neural networks. Experiments indicate that deep belief networks (DBNs) are applicable to the prediction of foreign exchange rate, since they achieve better performance than feedforward neural networks (FFNNs)."}
{"_id": "b9689f229371ce83d035d5286875a49bd15b7ca6", "title": "Applying the Clique Percolation Method to analyzing cross-market branch banking network structure: the case of Illinois", "text": "This study applies the Clique Percolation Method (CPM) to an investigation of the changing spatial organization of the Illinois cross-market branch banking network. Nonoverlapping community detection algorithms assign nodes into exclusive communities and, when results are mapped, these techniques may generate spatially disjointed geographical regions, an undesirable characteristic for geographical study. Alternative overlapping community detection algorithms allow overlapping membership where a node can be a member of different communities. Such a structure simultaneously accommodates spatial proximity and spatial separation which occur with respect to a node in relation to other nodes in the system. Applying such a structure in geographical analysis helps preserve well-established principles regarding spatial relationships within the geography discipline. The result can also be mapped for display and correct interpretation. The CPM is chosen in this study due to the complete connection within cliques which simulates the practice by banking institutions of forming highly connected networks through multi-location operations in order to diversify their business and hedge against risks. Applying the CPM helps reveal the spatial pattern of branch banking connections which would otherwise be difficult to see. However, the CPM has been shown to not be among the best performing overlapping community detection algorithms. Future research should explore other possible algorithms for detecting overlapping communities. Detecting communities in a network only reveals certain characteristics of the spatial organization of the network, rather than providing explanation of the spatial-network patterns revealed. Full interpretation of the pattern must rely on the attribute data and additional information. This may illustrate the value of an integrated approach in geographical analysis using both social network analysis and spatial analysis techniques."}
{"_id": "83f6e857033a54c271fe178391ac23044c623b05", "title": "Multi-platform molecular profiling of a large cohort of glioblastomas reveals potential therapeutic strategies", "text": "Glioblastomas (GBM) are the most aggressive and prevalent form of gliomas with abysmal prognosis and limited treatment options. We analyzed clinically relevant molecular aberrations suggestive of response to therapies in 1035 GBM tumors. Our analysis revealed mutations in 39 genes of 48 tested. IHC revealed expression of PD-L1 in 19% and PD-1 in 46%. MGMT-methylation was seen in 43%, EGFRvIII in 19% and 1p19q co-deletion in 2%. TP53 mutation was associated with concurrent mutations, while IDH1 mutation was associated with MGMT-methylation and TP53 mutation and was mutually exclusive of EGFRvIII mutation. Distinct biomarker profiles were seen in GBM compared with WHO grade III astrocytoma, suggesting different biology and potentially different treatment approaches. Analysis of 17 metachronous paired tumors showed frequent biomarker changes, including MGMT-methylation and EGFR aberrations, indicating the need for a re-biopsy for tumor profiling to direct subsequent therapy. MGMT-methylation, PR and TOPO1 appeared as significant prognostic markers in sub-cohorts of GBM defined by age. The current study represents the largest biomarker study on clinical GBM tumors using multiple technologies to detect gene mutation, amplification, protein expression and promoter methylation. These data will inform planning for future personalized biomarker-based clinical trials and identifying effective treatments based on tumor biomarkers."}
{"_id": "83ee9921f6bae094ffaa8ddf20f8e908b3656a88", "title": "Soft Pneumatic Actuators for Rehabilitation", "text": "Pneumatic artificial muscles are pneumatic devices with practical and various applications as common actuators. They, as human muscles, work in agonistic-antagonistic way, giving a traction force only when supplied by compressed air. The state of the art of soft pneumatic actuators is here analyzed: different models of pneumatic muscles are considered and evolution lines are presented. Then, the use of Pneumatic Muscles (PAM) in rehabilitation apparatus is described and the general characteristics required in different applications are considered, analyzing the use of proper soft actuators with various technical properties. Therefore, research activity carried out in the Department of Mechanical and Aerospace Engineering in the field of soft and textile actuators is presented here. In particular, pneumatic textile muscles useful for active suits design are described. These components are made of a tubular structure, with an inner layer of latex coated with a deformable outer fabric sewn along the edge. In order to increase pneumatic muscles forces and contractions Braided Pneumatic Muscles are studied. In this paper, new prototypes are presented, based on a fabric construction and various kinds of geometry. Pressure-force-deformation tests results are carried out and analyzed. These actuators are useful for rehabilitation applications. In order to reproduce the whole upper limb movements, new kind of soft actuators are studied, based on the same principle of planar membranes deformation. As an example, the bellows muscle model and worm muscle model are developed and described. In both cases, wide deformations are expected. Another issue for soft actuators is the pressure therapy. Some textile sleeve prototypes developed for massage therapy on patients suffering of lymph edema are analyzed. Different types of fabric and assembly techniques have been tested. In general, these OPEN ACCESS Actuators 2014, 3 85 Pressure Soft Actuators are useful for upper/lower limbs treatments, according to medical requirements. In particular devices useful for arms massage treatments are considered. Finally some applications are considered."}
{"_id": "9d0b2ececb713e6e65ec1b693d7002a2ea21dece", "title": "Scalable Content-Aware Collaborative Filtering for Location Recommendation", "text": "Location recommendation plays an essential role in helping people find attractive places. Though recent research has studied how to recommend locations with social and geographical information, few of them addressed the cold-start problem of new users. Because mobility records are often shared on social networks, semantic information can be leveraged to tackle this challenge. A typical method is to feed them into explicit-feedback-based content-aware collaborative filtering, but they require drawing negative samples for better learning performance, as users\u2019 negative preference is not observable in human mobility. However, prior studies have empirically shown sampling-based methods do not perform well. To this end, we propose a scalable Implicit-feedback-based Content-aware Collaborative Filtering (ICCF) framework to incorporate semantic content and to steer clear of negative sampling. We then develop an efficient optimization algorithm, scaling linearly with data size and feature size, and quadratically with the dimension of latent space. We further establish its relationship with graph Laplacian regularized matrix factorization. Finally, we evaluate ICCF with a large-scale LBSN dataset in which users have profiles and textual content. The results show that ICCF outperforms several competing baselines, and that user information is not only effective for improving recommendations but also coping with cold-start scenarios."}
{"_id": "82c2a90da14727e171293906cfa927eda8d6e125", "title": "E-commerce web page classification based on automatic content extraction", "text": "Currently, There are many E-commerce websites around the internet world. These E-commerce websites can be categorized into many types which one of them is C2C (Customer to Customer) websites such as eBay and Amazon. The main objective of C2C websites is an online market place that everyone can buy or sell anything at any time. Since, there are a lot of products in the E-commerce websites and each product are classified into its category by human. It is very hard to define their categories in automatic manner when the data is very large. In this paper, we propose the method for classifying E-commerce web pages based on their product types. Firstly, we apply the proposed automatic content extraction to extract the contents of E-commerce web pages. Then, we apply the automatic key word extraction to select words from these extracted contents for generating the feature vectors that represent the E-commerce web pages. Finally, we apply the machine learning technique for classifying the E-commerce web pages based on their product types. The experimental results signify that our proposed method can classify the E-commerce web pages in automatic fashion."}
{"_id": "f70ffda954932ff474fa62f2ab0db4227ad614ef", "title": "MIMO for DVB-NGH, the next generation mobile TV broadcasting [Accepted From Open Call]", "text": "DVB-NGH (Digital Video Broadcasting - Next Generation Handheld) is the next generation technology for mobile TV broadcasting, which has been developed by the DVB project with the most advanced transmission technologies. DVB-NGH is the first broadcasting standard to incorporate multiple-input multiple-output (MIMO) as the key technology to overcome the Shannon limit of single antenna communications. MIMO techniques can be used to improve the robustness of the transmitted signal by exploiting the spatial diversity of the MIMO channel, but also to achieve increased data rates through spatial multiplexing. This article describes the benefits of MIMO that motivated its incorporation in DVB-NGH, reviews the MIMO schemes adopted, and discusses some aspects related to the deployment of MIMO networks in DVB-NGH. The article also provides a feature comparison with the multi-antenna techniques for 3GGP's LTE/LTE-Advanced for cellular networks. Finally, physical layer simulation results calibrated within the DVB-NGH standardization process are provided to illustrate the gain of MIMO for the next generation of mobile TV broadcasting."}
{"_id": "e07b6211d8843575b9ea554ea9e2823387563f7e", "title": "Early sensitivity to arguments: how preschoolers weight circular arguments.", "text": "Observational studies suggest that children as young as 2 years can evaluate some of the arguments people offer them. However, experimental studies of sensitivity to different arguments have not yet targeted children younger than 5 years. The current study aimed at bridging this gap by testing the ability of preschoolers (3-, 4-, and 5-year-olds) to weight arguments. To do so, it focused on a common type of fallacy-circularity-to which 5-year-olds are sensitive. The current experiment asked children-and, as a group control, adults-to choose between the contradictory opinions of two speakers. In the first task, participants of all age groups favored an opinion supported by a strong argument over an opinion supported by a circular argument. In the second task, 4- and 5-year-olds, but not 3-year-olds or adults, favored the opinion supported by a circular argument over an unsupported opinion. We suggest that the results of these tasks in 3- to 5-year-olds are best interpreted as resulting from the combination of two mechanisms: (a) basic skills of argument evaluations that process the content of arguments, allowing children as young as 3 years to favor non-circular arguments over circular arguments, and (b) a heuristic that leads older children (4- and 5-year-olds) to give some weight to circular arguments, possibly by interpreting these arguments as a cue to speaker dominance."}
{"_id": "3605b37545544d057c0756e4ada3a336c93593f0", "title": "StackGuard: Simple Stack Smash Protection for GCC", "text": "Since 1998, StackGuard patches to GCC have been used to protect entire distributions from stack smashing buffer overflows. Performance overhead and software compatibility issues have been minimal. In its history, the parts of GCC that StackGuard has operated in have twice changed enough to require complete overhauls of the StackGuard patch. Since StackGuard is a mature technology, even seeing re-implementations in other compilers, we propose that GCC adopt StackGuard as a standard feature. This paper describes our recent work to bring StackGuard fully up to date with current GCC, introduce architecture independence, and extend the protection of stack data structures, while keeping the StackGuard patch as small, simple, and modular as possible."}
{"_id": "70ded7db743e08d99c5dc8803c5178b0c3ed55de", "title": "Barriers to Physicians\u2019 Adoption of Healthcare Information Technology: An Empirical Study on Multiple Hospitals", "text": "Prior research on technology usage had largely overlooked the issue of user resistance or barriers to technology acceptance. Prior research on the Electronic Medical Records had largely focused on technical issues but rarely on managerial issues. Such oversight prevented a better understanding of users\u2019 resistance to new technologies and the antecedents of technology rejection. Incorporating the enablers and the inhibitors of technology usage intention, this study explores physicians\u2019 reactions towards the electronic medical record. The main focus is on the barriers, perceived threat and perceived inequity. 115 physicians from 6 hospitals participated in the questionnaire survey. Structural Equation Modeling was employed to verify the measurement scale and research hypotheses. According to the results, perceived threat shows a direct and negative effect on perceived usefulness and behavioral intentions, as well as an indirect effect on behavioral intentions via perceived usefulness. Perceived inequity reveals a direct and positive effect on perceived threat, and it also shows a direct and negative effect on perceived usefulness. Besides, perceived inequity reveals an indirect effect on behavioral intentions via perceived usefulness with perceived threat as the inhibitor. The research finding presents a better insight into physicians\u2019 rejection and the antecedents of such outcome. For the healthcare industry understanding the factors contributing to physicians\u2019 technology acceptance is important as to ensure a smooth implementation of any new technology. The results of this study can also provide change managers reference to a smooth IT introduction into an organization. In addition, our proposed measurement scale can be applied as a diagnostic tool for them to better understand the status quo within their organizations and users\u2019 reactions to technology acceptance. By doing so, barriers to physicians\u2019 acceptance can be identified earlier and more effectively before leading to technology rejection."}
{"_id": "36e83ef515ef5d6cdb5db827f11bab22155d57a8", "title": "Learning Low-Dimensional Representations of Medical Concepts", "text": "We show how to learn low-dimensional representations (embeddings) of a wide range of concepts in medicine, including diseases (e.g., ICD9 codes), medications, procedures, and laboratory tests. We expect that these embeddings will be useful across medical informatics for tasks such as cohort selection and patient summarization. These embeddings are learned using a technique called neural language modeling from the natural language processing community. However, rather than learning the embeddings solely from text, we show how to learn the embeddings from claims data, which is widely available both to providers and to payers. We also show that with a simple algorithmic adjustment, it is possible to learn medical concept embeddings in a privacy preserving manner from co-occurrence counts derived from clinical narratives. Finally, we establish a methodological framework, arising from standard medical ontologies such as UMLS, NDF-RT, and CCS, to further investigate the embeddings and precisely characterize their quantitative properties."}
{"_id": "3bf4ef3e38fc5a6b18db2aa6d1263f0373b604f2", "title": "Nanowire dye-sensitized solar cells.", "text": "Excitonic solar cells-including organic, hybrid organic-inorganic and dye-sensitized cells (DSCs)-are promising devices for inexpensive, large-scale solar energy conversion. The DSC is currently the most efficient and stable excitonic photocell. Central to this device is a thick nanoparticle film that provides a large surface area for the adsorption of light-harvesting molecules. However, nanoparticle DSCs rely on trap-limited diffusion for electron transport, a slow mechanism that can limit device efficiency, especially at longer wavelengths. Here we introduce a version of the dye-sensitized cell in which the traditional nanoparticle film is replaced by a dense array of oriented, crystalline ZnO nanowires. The nanowire anode is synthesized by mild aqueous chemistry and features a surface area up to one-fifth as large as a nanoparticle cell. The direct electrical pathways provided by the nanowires ensure the rapid collection of carriers generated throughout the device, and a full Sun efficiency of 1.5% is demonstrated, limited primarily by the surface area of the nanowire array."}
{"_id": "4d7a8836b304a1ecebee19ff297f1850e81903b4", "title": "SECURE COMPUTER SYSTEMS : MATHEMATICAL FOUNDATIONS", "text": ""}
{"_id": "9890ff8405f80042605040015fcdbe8139592c83", "title": "Daily and compulsive internet use and well-being in adolescence: a diathesis-stress model based on big five personality traits.", "text": "This study examined the associations between adolescents' daily Internet use and low well-being (i.e., loneliness, low self-esteem, and depressive moods). We hypothesized that (a) linkages between high levels of daily Internet use and low well-being would be mediated by compulsive Internet use (CIU), and (b) that adolescents with low levels of agreeableness and emotional stability, and high levels of introversion would be more likely to develop CIU and lower well-being. Data were used from a sample of 7888 Dutch adolescents (11-21 years). Results from structural equation modeling analyses showed that daily Internet use was indirectly related to low well-being through CIU. In addition, daily Internet use was found to be more strongly related to CIU in introverted, low-agreeable, and emotionally less-stable adolescents. In turn, again, CIU was more strongly linked to loneliness in introverted, emotionally less-stable, and less agreeable adolescents."}
{"_id": "184676cb50d1c62b249e90a97e4edc8e42a1c024", "title": "Word Alignment Modeling with Context Dependent Deep Neural Network", "text": "In this paper, we explore a novel bilingual word alignment approach based on DNN (Deep Neural Network), which has been proven to be very effective in various machine learning tasks (Collobert et al., 2011). We describe in detail how we adapt and extend the CD-DNNHMM (Dahl et al., 2012) method introduced in speech recognition to the HMMbased word alignment model, in which bilingual word embedding is discriminatively learnt to capture lexical translation information, and surrounding words are leveraged to model context information in bilingual sentences. While being capable to model the rich bilingual correspondence, our method generates a very compact model with much fewer parameters. Experiments on a large scale EnglishChinese word alignment task show that the proposed method outperforms the HMM and IBM model 4 baselines by 2 points in F-score."}
{"_id": "29c34a034f6f35915a141dac98cabf625bea2b3c", "title": "Contrastive Estimation: Training Log-Linear Models on Unlabeled Data", "text": "Conditional random fields (Lafferty et al., 2001) are quite effective at sequence labeling tasks like shallow parsing (Sha and Pereira, 2003) and namedentity extraction (McCallum and Li, 2003). CRFs are log-linear, allowing the incorporation of arbitrary features into the model. To train on u labeled data, we requireunsupervisedestimation methods for log-linear models; few exist. We describe a novel approach,contrastive estimation . We show that the new technique can be intuitively understood as exploiting implicit negative evidence and is computationally efficient. Applied to a sequence labeling problem\u2014POS tagging given a tagging dictionary and unlabeled text\u2014contrastive estimation outperforms EM (with the same feature set), is more robust to degradations of the dictionary, and can largely recover by modeling additional features."}
{"_id": "368f3dea4f12c77dfc9b7203f3ab2b9efaecb635", "title": "Statistical Phrase-Based Translation", "text": "We propose a new phrase-based translation model and decoding algorithm that enables us to evaluate and compare several, previously proposed phrase-based translation models. Within our framework, we carry out a large number of experiments to understand better and explain why phrase-based models outperform word-based models. Our empirical results, which hold for all examined language pairs, suggest that the highest levels of performance can be obtained through relatively simple means: heuristic learning of phrase translations from word-based alignments and lexical weighting of phrase translations. Surprisingly, learning phrases longer than three words and learning phrases from high-accuracy wordlevel alignment models does not have a strong impact on performance. Learning only syntactically motivated phrases degrades the performance of our systems."}
{"_id": "dbca66128d360286c6a5e99214101ae356b93ef2", "title": "HMM-Based Word Alignment in Statistical Translation", "text": "In this paper, we describe a new model for word alignment in statistical translation and present experimental results. The idea of the model is to make the alignment probabilities dependent on the differences in the alignment positions rather than on the absolute positions. To achieve this goal, the approach uses a first-order Hidden Markov model (HMM) for the word alignment problem as they are used successfully in speech recognition for the time alignment problem. The difference to the time alignment HMM is that there is no monotony constraint for the possible word orderings. We describe the details of the model and test the model on several bilingual corpora."}
{"_id": "386dc0445ba9991e6f981ad407bc45f17900daf5", "title": "Background Removal in Image Indexing and Retrieval", "text": "This paper presents our research in image content based indexing and retrieval, a key technology in digital image libraries. In most of the existing image content-based techniques, image features used for indexing and retrieval are global, features are computed over the entire image. The major problem with the global image feature based retrieval methods is that background features can be easily mistakenly taken as object features. When a user attempts to retrieve images using color features, he/she usually means the color feature of an object or objects of interests contained in the image. The approach we describe in this paper utilizes color clusters for image background analysis. Once the background regions are identified, they are removed from the image indexing procedure; therefore, no longer interfering with the meaningful image content during the retrieval process. The algorithm consists of three major steps of computation, fuzzy clustering, color image segmentation, and background analysis."}
{"_id": "3f56b4e313441735f394b37ac74bc9b9acda6b9f", "title": "Induction of fuzzy rules and membership functions from training examples", "text": "Most fuzzy controllers and fuzzy expert systems must predefine membership functions and fuzzy inference rules to map numeric data into linguistic variable terms and to make fuzzy reasoning work. In this paper, we propose a general learning method as a framework for automatically deriving membership functions and fuzzy if-then rules from a set of given training examples to rapidly build a prototype fuzzy expert system. Based on the membership functions and the fuzzy rules derived, a corresponding fuzzy inference procedure to process inputs is also developed."}
{"_id": "461ebcb7a274525b8efecf7990c85994248ab433", "title": "Routing Attacks and Countermeasures in the RPL-Based Internet of Things", "text": "The Routing Protocol for Low-Power and Lossy Networks (RPL) is a novel routing protocol standardized for constrained environments such as 6LoWPAN networks. Providing security in IPv6/RPL connected 6LoWPANs is challenging because the devices are connected to the untrusted Internet and are resource constrained, the communication links are lossy, and the devices use a set of novel IoT technologies such as RPL, 6LoWPAN, and CoAP/CoAPs. In this paper we provide a comprehensive analysis of IoT technologies and their new security capabilities that can be exploited by attackers or IDSs. One of the major contributions in this paper is our implementation and demonstration of well-known routing attacks against 6LoWPAN networks running RPL as a routing protocol. We implement these attacks in the RPL implementation in the Contiki operating system and demonstrate these attacks in the Cooja simulator. Furthermore, we highlight novel security features in the IPv6 protocol and exemplify the use of these features for intrusion detection in the IoT by implementing a lightweight heartbeat protocol."}
{"_id": "cfcf81f0bb3a88ff84a16905af46e9314e6e5fc1", "title": "Speech emotion classification using tree-structured sparse logistic regression", "text": "The extraction and selection of acoustic features are crucial steps in the development of a system for classifying emotions in speech. Most works in the field use some kind of prosodic features, often in combination with spectral and glottal features, and select appropriate features in classifying emotions. In the methods, feature choices are mostly made regardless of existing relationships and structures between features. However, considering them can be beneficial, potentially both for interpretability and to improve classification performance. To this end, a structured sparse logistic regression model incorporated with the hierarchical structure of features derived from prosody, spectral envelope, and glottal information is proposed in this paper. The proposed model simultaneously addresses tree-structured sparse feature selection and emotion classification. Evaluation of the proposed model on Berlin emotional database showed substantial improvement over the conventional sparse logistic regression model."}
{"_id": "7d1b4f1cc97dd9306ce3836420e0440ba8a7d25c", "title": "Quantum clustering algorithms", "text": "By the term \"quantization\", we refer to the process of using quantum mechanics in order to improve a classical algorithm, usually by making it go faster. In this paper, we initiate the idea of quantizing clustering algorithms by using variations on a celebrated quantum algorithm due to Grover. After having introduced this novel approach to unsupervised learning, we illustrate it with a quantized version of three standard algorithms: divisive clustering, k-medians and an algorithm for the construction of a neighbourhood graph. We obtain a significant speedup compared to the classical approach."}
{"_id": "6e2bf58a3e04031aa08e6832cc0e7b845e5d706c", "title": "A Comparative Study of Polar Code Constructions for the AWGN Channel", "text": "We present a comparative study of the performance of various polar code constructions in an additive white Gaussian noise (AWGN) channel. A polar code construction is any algorithm that selects K best among N possible polar bit-channels at the design signal-to-noise-ratio (design-SNR) in terms of bit error rate (BER). Optimal polar code construction is hard and therefore many suboptimal polar code constructions have been proposed at different computational complexities. Polar codes are also non-universal meaning the code changes significantly with the design-SNR. However, it is not known which construction algorithm at what design-SNR constructs the best polar codes. We first present a comprehensive survey of all the well-known polar code constructions along with their full implementations. We then propose a heuristic algorithm to find the best design-SNR for constructing best possible polar codes from a given construction algorithm. The proposed algorithm involves a search among several possible design-SNRs. We finally use our algorithm to perform a comparison of different construction algorithms using extensive simulations. We find that all polar code construction algorithms generate equally good polar codes in an AWGN channel, if the design-SNR is optimized. Keywords\u2014Bhattacharyya bounds, bit-channels, Gaussian approximation, polar codes"}
{"_id": "ce08fc211d6cf1b572abed6f6540e36150daa4b2", "title": "False Data Injection Attacks in Control Systems", "text": "This paper analyzes the effects of false data injection attacks on Control System. We assume that the system, equipped with a Kalman filter and LQG controller, is used to monitor and control a discrete linear time invariant Gaussian system. We further assume that the system is equipped with a failure detector. An attacker wishes to destabilize the system by compromising a subset of sensors and sending corrupted readings to the state estimator. In order to inject fake sensor measurements without being detected the attacker needs to carefully design its inputs to fool the failure detector, since abnormal sensor measurements usually trigger an alarm from the failure detector. We will provide a necessary and sufficient condition under which the attacker could destabilize the system while successfully bypassing the failure detector. A design method for the defender to improve the resilience of the CPS against such kind of false data injection attacks is also provided."}
{"_id": "24c9646926edc4b33f39d85184cbfd46595050ed", "title": "Automatic Text Summarization based on Betweenness Centrality", "text": "Automatic text summary plays an important role in information retrieval. With a large volume of information, presenting the user only a summary greatly facilitates the search work of the most relevant. Therefore, this task can provide a solution to the problem of information overload. Automatic text summary is a process of automatically creating a compressed version of a certain text that provides useful information for users. This article presents an unsupervised extractive approach based on graphs. The method constructs an indirected weighted graph from the original text by adding a vertex for each sentence, and calculates a weighted edge between each pair of sentences that is based on a similarity/dissimilarity criterion. The main contribution of the work is that we do a study of the impact of a known algorithm for the social network analysis, which allows to analyze large graphs efficiently. As a measure to select the most relevant sentences, we use betweenness centrality. The method was evaluated in an open reference data set of DUC2002 with Rouge scores."}
{"_id": "637e37d06965e82e8f2456ce5d59b825a54b0ef7", "title": "The Abandoned Side of the Internet: Hijacking Internet Resources When Domain Names Expire", "text": "The vulnerability of the Internet has been demonstrated by prominent IP prefix hijacking events. Major outages such as the China Telecom incident in 2010 stimulate speculations about malicious intentions behind such anomalies. Surprisingly, almost all discussions in the current literature assume that hijacking incidents are enabled by the lack of security mechanisms in the inter-domain routing protocol BGP. In this paper, we discuss an attacker model that accounts for the hijacking of network ownership information stored in Regional Internet Registry (RIR) databases. We show that such threats emerge from abandoned Internet resources (e.g., IP address blocks, AS numbers). When DNS names expire, attackers gain the opportunity to take resource ownership by re-registering domain names that are referenced by corresponding RIR database objects. We argue that this kind of attack is more attractive than conventional hijacking, since the attacker can act in full anonymity on behalf of a victim. Despite corresponding incidents have been observed in the past, current detection techniques are not qualified to deal with these attacks. We show that they are feasible with very little effort, and analyze the risk potential of abandoned Internet resources for the European service region: our findings reveal that currently 73 /24 IP prefixes and 7 ASes are vulnerable to be attacked. We discuss countermeasures and outline research directions towards preventive solutions."}
{"_id": "8ae14aedfa58b9f1f1161b56e38f1a9e5190ec25", "title": "A Modular IoT Platform for Real-Time Indoor Air Quality Monitoring", "text": "The impact of air quality on health and on life comfort is well established. In many societies, vulnerable elderly and young populations spend most of their time indoors. Therefore, indoor air quality monitoring (IAQM) is of great importance to human health. Engineers and researchers are increasingly focusing their efforts on the design of real-time IAQM systems using wireless sensor networks. This paper presents an end-to-end IAQM system enabling measurement of CO\u2082, CO, SO\u2082, NO\u2082, O\u2083, Cl\u2082, ambient temperature, and relative humidity. In IAQM systems, remote users usually use a local gateway to connect wireless sensor nodes in a given monitoring site to the external world for ubiquitous access of data. In this work, the role of the gateway in processing collected air quality data and its reliable dissemination to end-users through a web-server is emphasized. A mechanism for the backup and the restoration of the collected data in the case of Internet outage is presented. The system is adapted to an open-source Internet-of-Things (IoT) web-server platform, called Emoncms, for live monitoring and long-term storage of the collected IAQM data. A modular IAQM architecture is adopted, which results in a smart scalable system that allows seamless integration of various sensing technologies, wireless sensor networks (WSNs) and smart mobile standards. The paper gives full hardware and software details of the proposed solution. Sample IAQM results collected in various locations are also presented to demonstrate the abilities of the system."}
{"_id": "92f030e3c4c7a24a607054775d37e0c947808f8d", "title": "A Delegation Framework for Task-Role Based Access Control in WFMS", "text": "Access control is important for protecting information integrity in workflow management system (WfMS). Compared to conventional access control technology such as discretionary, mandatory, and role-based access control models, task-role-based access control (TRBAC) model, an access control model based on both tasks and roles, meets more requirements for modern enterprise environments. However, few discussions on delegation mechanisms for TRBAC are made. In this paper, a framework considering temporal constraints to improve delegation and help automatic delegation in TRBAC is presented. In the framework, the methodology for delegations requested from both users and WfMS is discussed. The constraints for delegatee selection such as delegation loop and separation of duty (SOD) are addressed. With the framework, a sequence of algorithms for delegation and revocation of tasks are constructed gradually. Finally, a comparison is made between our approach and the representative related works."}
{"_id": "46a1e47435bcaa949b105edc5ef2b3243225d238", "title": "Application of AHP and Taguchi loss functions in supply chain", "text": "Purpose \u2013 The purpose of this paper is to develop a decision model to help decision makers with selection of the appropriate supplier. Design/methodology/approach \u2013 Supplier selection is a multi-criteria decision-making process encompassing various tangible and intangible factors. Both risks and benefits of using a vendor in supply chain are identified for inclusion in the evaluation process. Since these factors can be objective and subjective, a hybrid approach that applies to both quantitative and qualitative factors is used in the development of the model. Taguchi loss functions are used to measure performance of each supplier candidate with respect to the risks and benefits. Analytical hierarchy process (AHP) is used to determine the relative importance of these factors to the decision maker. The weighted loss scores are then calculated for each supplier by using the relative importance as the weights. The composite weighted loss scores are used for ranking of the suppliers. The supplier with the smallest loss score is recommended for selection. Findings \u2013 Inclusion of both risk and benefit categories in the evaluation process provides a comprehensive decision tool. Practical implications \u2013 The proposed model provides guidelines for supply chain managers to make an informed decision regarding supplier selection. Originality/value \u2013 Combining Taguchi loss function and AHP provides a novel approach for ranking of potential suppliers for outsourcing purposes."}
{"_id": "954b56b09b599c5fc3aa2fc180692c54f9ebeeee", "title": "Oscillator-based assistance of cyclical movements: model-based and model-free approaches", "text": "In this article, we propose a new method for providing assistance during cyclical movements. This method is trajectory-free, in the sense that it provides user assistance irrespective of the performed movement, and requires no other sensing than the assisting robot\u2019s own encoders. The approach is based on adaptive oscillators, i.e., mathematical tools that are capable of learning the high level features (frequency, envelope, etc.) of a periodic input signal. Here we present two experiments that we recently conducted to validate our approach: a simple sinusoidal movement of the elbow, that we designed as a proof-of-concept, and a walking experiment. In both cases, we collected evidence illustrating that our approach indeed assisted healthy subjects during movement execution. Owing to the intrinsic periodicity of daily life movements involving the lower-limbs, we postulate that our approach holds promise for the design of innovative rehabilitation and assistance protocols for the lower-limb, requiring little to no user-specific calibration."}
{"_id": "4881281046f78e57cd4bbc16c517d019e9a2abb5", "title": "Wechsler Intelligence Scale for Children-IV Conceptual and Interpretive Guide", "text": "The Wechsler intelligence scales were developed by Dr. David Wechsler, a clinical psychologist with Bellevue Hospital. His initial test, the Wechsler-Bellevue Intelligence Scale, was published in 1939 and was designed to measure intellectual performance by adults. Wechsler constructed the WBIS based on his observation that, at the time, existing intelligence tests for adults were merely adaptations of tests for children and had little face validity for older age groups."}
{"_id": "d3339336d8c66272321a39e802a96fcb2f717516", "title": "Percutaneous left atrial appendage closure vs warfarin for atrial fibrillation: a randomized clinical trial.", "text": "IMPORTANCE\nWhile effective in preventing stroke in patients with atrial fibrillation (AF), warfarin is limited by a narrow therapeutic profile, a need for lifelong coagulation monitoring, and multiple drug and diet interactions.\n\n\nOBJECTIVE\nTo determine whether a local strategy of mechanical left atrial appendage (LAA) closure was noninferior to warfarin.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nPROTECT AF was a multicenter, randomized (2:1), unblinded, Bayesian-designed study conducted at 59 hospitals of 707 patients with nonvalvular AF and at least 1 additional stroke risk factor (CHADS2 score \u22651). Enrollment occurred between February 2005 and June 2008 and included 4-year follow-up through October 2012. Noninferiority required a posterior probability greater than 97.5% and superiority a probability of 95% or greater; the noninferiority margin was a rate ratio of 2.0 comparing event rates between treatment groups.\n\n\nINTERVENTIONS\nLeft atrial appendage closure with the device (n\u2009=\u2009463) or warfarin (n\u2009=\u2009244; target international normalized ratio, 2-3).\n\n\nMAIN OUTCOMES AND MEASURES\nA composite efficacy end point including stroke, systemic embolism, and cardiovascular/unexplained death, analyzed by intention-to-treat.\n\n\nRESULTS\nAt a mean (SD) follow-up of 3.8 (1.7) years (2621 patient-years), there were 39 events among 463 patients (8.4%) in the device group for a primary event rate of 2.3 events per 100 patient-years, compared with 34 events among 244 patients (13.9%) for a primary event rate of 3.8 events per 100 patient-years with warfarin (rate ratio, 0.60; 95% credible interval, 0.41-1.05), meeting prespecified criteria for both noninferiority (posterior probability, >99.9%) and superiority (posterior probability, 96.0%). Patients in the device group demonstrated lower rates of both cardiovascular mortality (1.0 events per 100 patient-years for the device group [17/463 patients, 3.7%] vs 2.4 events per 100 patient-years with warfarin [22/244 patients, 9.0%]; hazard ratio [HR], 0.40; 95% CI, 0.21-0.75; P\u2009=\u2009.005) and all-cause mortality (3.2 events per 100 patient-years for the device group [57/466 patients, 12.3%] vs 4.8 events per 100 patient-years with warfarin [44/244 patients, 18.0%]; HR, 0.66; 95% CI, 0.45-0.98; P\u2009=\u2009.04).\n\n\nCONCLUSIONS AND RELEVANCE\nAfter 3.8 years of follow-up among patients with nonvalvular AF at elevated risk for stroke, percutaneous LAA closure met criteria for both noninferiority and superiority, compared with warfarin, for preventing the combined outcome of stroke, systemic embolism, and cardiovascular death, as well as superiority for cardiovascular and all-cause mortality.\n\n\nTRIAL REGISTRATION\nclinicaltrials.gov Identifier: NCT00129545."}
{"_id": "bd7c3791a20cba4ae6b1ec16212bd4b049fd1c41", "title": "Gunshot detection and localization using sensor networks", "text": "Acoustic sensors or sensor networks can be efficiently used to spot and observe targets of interest and have the ability to conclude the exact location of an object or event within the sensor network's coverage range. For the detection and spot estimation of shooter, the schemes of gunshot localization require one or more sensing technologies. In current situation the importance of counter sniper systems cannot be denied. An efficient counter sniper system that can provide immediate shooter location will provide many benefits to our security personnel. In this paper, the Time Difference Of Arrival (TDOA) of Shock Wave and Muzzle Blast is used to estimate the shooter location. The proposed design and algorithm was validated and shooter origin was resolute that was very close to theoretical values. The result shows that the scheme worked very fine and is capable of locating sniper fires very efficiently, competently and effectively. The designed algorithm worked very well and results were found to be very accurately."}
{"_id": "5b8869bb7afa5d8d3c183dfac0d0f26c2e218593", "title": "Fast Priority Queues for Cached Memory", "text": "The cache hierarchy prevalent in todays high performance processors has to be taken into account in order to design algorithms that perform well in practice. This paper advocates the adaption of external memory algorithms to this purpose. This idea and the practical issues involved are exemplified by engineering a fast priority queue suited to external memory and cached memory that is based on k-way merging. It improves previous external memory algorithms by constant factors crucial for transferring it to cached memory. Running in the cache hierarchy of a workstation the algorithm is at least two times faster than an optimized implementation of binary heaps and 4-ary heaps for large inputs."}
{"_id": "0e93d05017b495f205fbf5d27188bba7be9be5e4", "title": "Reverse Nearest Neighbors in Unsupervised Distance-Based Outlier Detection", "text": "Outlier detection in high-dimensional data presents various challenges resulting from the \u201ccurse of dimensionality.\u201d A prevailing view is that distance concentration, i.e., the tendency of distances in high-dimensional data to become indiscernible, hinders the detection of outliers by making distance-based methods label all points as almost equally good outliers. In this paper, we provide evidence supporting the opinion that such a view is too simple, by demonstrating that distance-based methods can produce more contrasting outlier scores in high-dimensional settings. Furthermore, we show that high dimensionality can have a different impact, by reexamining the notion of reverse nearest neighbors in the unsupervised outlier-detection context. Namely, it was recently observed that the distribution of points' reverse-neighbor counts becomes skewed in high dimensions, resulting in the phenomenon known as hubness. We provide insight into how some points (antihubs) appear very infrequently in k-NN lists of other points, and explain the connection between antihubs, outliers, and existing unsupervised outlier-detection methods. By evaluating the classic k-NN method, the angle-based technique designed for high-dimensional data, the density-based local outlier factor and influenced outlierness methods, and antihub-based methods on various synthetic and real-world data sets, we offer novel insight into the usefulness of reverse neighbor counts in unsupervised outlier detection."}
{"_id": "76a92112bceaf9c77acb2a77245ee2cfa478bddd", "title": "CONCEPTUAL REQUIREMENTS FOR THE AUTOMATIC RECONSTRUCTION OF BUILDING INFORMATION MODELS FROM UNINTERPRETED 3 D MODELS", "text": "A multitude of new applications is quickly emerging in the field of Building Information Models (BIM). BIM models describe buildings with respect to their spatial and especially semantic and thematic characteristics. Since BIM models are manually created during the planning and construction phase, they are only available for newly planned or recently constructed buildings. In order to apply the new applications to already existing buildings, methods for the acquisition of BIM models for built-up sites are required. Primary data source are 3D geometry models obtained from surveying, CAD, or computer graphics. Automation of this process is highly desirable, but faces a range of specific problems setting the bar very high for a reconstruction process. This paper discusses these problems and identifies consequential requirements on reconstruction methods. Above, a two-step strategy for BIM model reconstruction is proposed which incorporates CityGML as an intermediate layer between 3D graphics models and IFC/BIM models."}
{"_id": "50cc848978797112a27d124af57991321f545069", "title": "Based Emotion Recognition : A Survey", "text": "Human Computer interaction is a very powerful and most current area of research because the human world is getting more digitize. This needs the digital systems to imitate the human behaviour correctly. Emotion is one aspect of human behaviour which plays an important role in human computer interaction, the computer interfaces need to recognize the emotion of the users in order to exhibit a truly intelligent behaviour. Human express the emotion in the form facial expression, speech, and writing text. Every day, massive amount of textual data is gathered into internet such as blogs, social media etc. This comprise a challenging style as it is formed with both plaint text and short messaging language. This paper is mainly focused on an overview of emotion detection from text and describes the emotion detection methods. These methods are divided into the following four main categories: keyword-based, Lexical Affinity method, learning based, and hybrid based approach. Limitations of these emotion recognition methods are presented in this paper and also, addresses the text normalization using different handling techniques for both plaint text and short messaging language."}
{"_id": "2e890b62603b25038a36d01ef6b64a7be6160282", "title": "Efficient Planning in Non-Gaussian Belief Spaces and Its Application to Robot Grasping", "text": "The limited nature of robot sensors make many important robo tics problems partially observable. These problems may require the s yst m to perform complex information-gathering operations. One approach to so lving these problems is to create plans inbelief-space , the space of probability distributions over the underlying state of the system. The belief-space plan encodes a st rategy for performing a task while gaining information as necessary. Most approach es to belief-space planning rely upon representing belief state in a particular way (typically as a Gaussian). Unfortunately, this can lead to large errors between the assumed density representation of belief state and the true belief state. Th is paper proposes a new sample-based approach to belief-space planning that has fix ed computational complexity while allowing arbitrary implementations of Bayes filtering to be used to track belief state. The approach is illustrated in the conte xt of a simple example and compared to a prior approach. Then, we propose an applicatio n of the technique to an instance of the grasp synthesis problem where a robot mu st simultaneously localize and grasp an object given initially uncertain obje ct parameters by planning information-gathering behavior. Experimental results ar e presented that demonstrate the approach to be capable of actively localizing and graspi ng boxes that are presented to the robot in uncertain and hard-to-localize config urations."}
{"_id": "b032fdfcbf657377d875fe9b1633c7b374239a13", "title": "A model transformation from NL to SBVR", "text": "In Requirement Engineering, requirements are usually written in sentences of natural language and natural languages are ambiguous and inconsistent, so the requirements written in natural languages also tend to be ambiguous. To avoid this problem of ambiguity we present an approach of model transformation to generate requirements based on SBVR (Semantics of Business Vocabulary and Business Rules). The information provided in source metamodel (NL) is automatically transformed into target metamodel (SBVR). SBVR metamodel can not only be processed by machine but also provides precise and reliable model for software design. The standard SBVR metamodel is already available but for natural language we proposed our own metamodel because there is no standard metamodel available for natural languages."}
{"_id": "02df3d50dbd1d15c38db62ff58a5601ebf815d59", "title": "NLTK: The Natural Language Toolkit", "text": "NLTK, the Natural Language Toolkit, is a suite of open source program modules, tutorials and problem sets, providing ready-to-use computational linguistics courseware. NLTK covers symbolic and statistical natural language processing, and is interfaced to annotated corpora. Students augment and replace existing components, learn structured programming by example, and manipulate sophisticated models from the outset."}
{"_id": "1f6ba0782862ec12a5ec6d7fb608523d55b0c6ba", "title": "Convolutional Neural Networks for Sentence Classification", "text": "We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification."}
{"_id": "45c56dc268a04c5fc7ca04d7edb985caf2a25093", "title": "Parameter estimation for text analysis", "text": "Presents parameter estimation methods common with discrete probability distributions, which is of particular interest in text modeling. Starting with maximum likelihood, a posteriori and Bayesian estimation, central concepts like conjugate distributions and Bayesian networks are reviewed. As an application, the model of latent Dirichlet allocation (LDA) is explained in detail with a full derivation of an approximate inference algorithm based on Gibbs sampling, including a discussion of Dirichlet hyperparameter estimation. History: version 1: May 2005, version 2.4: August 2008."}
{"_id": "9e463eefadbcd336c69270a299666e4104d50159", "title": "A COEFFICIENT OF AGREEMENT FOR NOMINAL SCALES 1", "text": ""}
{"_id": "ccce1cf96f641b3581fba6f4ce2545f4135a15e3", "title": "Least Squares Support Vector Machine Classifiers", "text": "In this letter we discuss a least squares version for support vector machine (SVM) classifiers. Due to equality type constraints in the formulation, the solution follows from solving a set of linear equations, instead of quadratic programming for classical SVM's. The approach is illustrated on a two-spiral benchmark classification problem."}
{"_id": "5e120701b01f37d7e68dcc261212c4741d143153", "title": "Vehicle Routing with Drones", "text": "We introduce a package service model where trucks as well as drones can deliver packages. Drones can travel on trucks or fly; but while flying, drones can only carry one package at a time and have to return to a truck to charge after each delivery. We present a heuristic algorithm to solve the problem of finding a good schedule for all drones and trucks. The algorithm is based on two nested local searches, thus the definition of suitable neighbourhoods of solutions is crucial for the algorithm. Empirical tests show that our algorithm performs significantly better than a natural Greedy algorithm. Moreover, the savings compared to solutions without drones turn out to be substantial, suggesting that delivery systems might considerably benefit from using drones in addition to trucks."}
{"_id": "0f7be87ca4608975a0a7f5c9e505688649f523c1", "title": "2 REINVENTING GROUNDED THEORY : SOME QUESTIONS ABOUT THEORY , GROUND AND DISCOVERY", "text": "Grounded theory\u2016s popularity persists after three decades of broad-ranging critique. We discuss here three problematic notions \u2013 \u201ctheory,\u201d \u201cground\u201d and \u201cdiscovery\u201d \u2013 which linger in the continuing use and development of grounded theory procedures. We argue that far from providing the epistemic security promised by grounded theory, these notions \u2013 embodied in continuing reinventions of grounded theory \u2013 constrain and distort qualitative inquiry. We argue that what is contrived is not in fact theory in any meaningful sense, that \u201cground\u201d is a misnomer when talking about interpretation and that what ultimately materializes following grounded theory procedures is less like discovery and more akin to invention. The procedures admittedly provide signposts for qualitative inquirers, but educational researchers should be wary, for the significance of interpretation, narrative and reflection can be undermined in the procedures of grounded theory."}
{"_id": "d3e0ba556d1d1108aa823e7234b5bdd81a668e48", "title": "Ways of Asking and Replying in Duplicate Question Detection", "text": "This paper presents the results of systematic experimentation on the impact in duplicate question detection of different types of questions across both a number of established approaches and a novel, superior one used to address this language processing task. This study permits to gain a novel insight on the different levels of robustness of the diverse detection methods with respect to different conditions of their application, including the ones that approximate real usage scenarios."}
{"_id": "c938af3d216d047c90b485530d70699568dc4c49", "title": "Exploitation of Dual-Polarization Diversity for 5G Millimeter-Wave MIMO Beamforming Systems", "text": "For the first time, to the best of our knowledge, this paper reports a complete millimeter wave (mmWave) wireless system, which coherently utilizes polarization diversity, multi-input multioutput (MIMO), and beamforming technologies applicable for upcoming fifth generation wireless mobile terminals. Using an interdisciplinary approach across the entire layers constituting of antennas, RF architecture, and digital modem, the polarization of the presented mmWave beamforming antenna system dynamically adapts to the channel environments and MIMO transmission modes. Empirical investigations, which are taken under real-life environments, ascertain that implementing switchable dual-polarized array antennas can alleviate the complexity of the beam searching procedure and MIMO channelization. In addition, experimental results acquired from intensive system level tests indicate more than 22 dB averaged cross-polarization discrimination under polarized MIMO channel conditions at the 60 GHz band. The beam selection algorithm that can be easily implemented based on power metric is confirmed to feature a 99.5% accuracy in comparison with the optimal selection algorithm derived from the theoretical maximal capacity."}
{"_id": "b1a04ad062d980237523cc3502926130e05d2c75", "title": "The Overview of Research on Microgrid Protection Development", "text": "In general, the microgrid can operate in both grid-connection mode and islanded mode that challenges the traditional over-current protection scheme in the distribution network. The novel research and development of protection, including both the microgrid and distribution network are comprehensively analyzed in this paper, which mainly focuses on two aspects. The first one analyses the impact on the network protection from the fault current feature while microgrid-connected. Secondly, several innovative protection schemes for network and microgrid are discussed. It is essential to improve traditional protection schemes by using communication techniques. In order to protect the microgrid in islanded mode, two innovative voltage protection schemes are discussed. However, the differential current protection scheme is possibly still the most effective solution for both network and microgrid."}
{"_id": "d08022c927cbc8d52d6cb58d8e868d3b7df1c35d", "title": "An improved anomaly detection and diagnosis framework for mobile network operators", "text": "The ever increasing complexity of commercial mobile networks drives the need for methods capable of reducing human workload required for network troubleshooting. In order to address this issue, several attempts have been made to develop automatic anomaly detection and diagnosis frameworks for mobile network operators. In this paper, the latest improvements introduced to one of those frameworks are discussed, including more sophisticated profiling and detection capabilities. The new algorithms further reduce the need for human intervention related to the proper configuration of the profiling and anomaly detection apparatus. The main concepts of the new approach are described and illustrated with an explanatory showcase featuring performance data from a live 3 G network."}
{"_id": "63132196af23969531569d5866fa3cda5b8f7a11", "title": "DFuzzy: a deep learning-based fuzzy clustering model for large graphs", "text": "Graph clustering is successfully applied in various applications for finding similar patterns. Recently, deep learning- based autoencoder has been used efficiently for detecting disjoint clusters. However, in real-world graphs, vertices may belong to multiple clusters. Thus, it is obligatory to analyze the membership of vertices toward clusters. Furthermore, existing approaches are centralized and are inefficient in handling large graphs. In this paper, a deep learning-based model \u2018DFuzzy\u2019 is proposed for finding fuzzy clusters from large graphs in distributed environment. It performs clustering in three phases. In first phase, pre-training is performed by initializing the candidate cluster centers. Then, fine tuning is performed to learn the latent representations by mining the local information and capturing the structure using PageRank. Further, modularity is used to redefine clusters. In last phase, reconstruction error is minimized and final cluster centers are updated. Experiments are performed over real-life graph data, and the performance of DFuzzy is compared with four state-of-the-art clustering algorithms. Results show that DFuzzy scales up linearly to handle large graphs and produces better quality of clusters when compared to state-of-the-art clustering algorithms. It is also observed that deep structures can help in getting better graph representations and provide improved clustering performance."}
{"_id": "cc51a87552b8e34a6562cc2c588fc0744e3aced6", "title": "A recommender system based on historical usage data for web service discovery", "text": "The tremendous growth in the amount of available web services impulses many researchers on proposing recommender systems to help users discover services. Most of the proposed solutions analyzed query strings and web service descriptions to generate recommendations. However, these text-based recommendation approaches depend mainly on user\u2019s perspective, languages, and notations, which easily decrease recommendation\u2019s efficiency. In this paper, we present an approach in which we take into account historical usage data instead of the text-based analysis. We apply collaborative filtering technique on user\u2019s interactions. We propose and implement four algorithms to validate our approach. We also provide evaluation methods based on the precision and recall in order to assert the efficiency of our algorithms."}
{"_id": "430d599e07e94fbae71b698a4098a2d615ab5608", "title": "Detection of leukemia in human blood sample based on microscopic images: A study", "text": "At the moment, identification of blood disorders is through visual inspection of microscopic images of blood cells. From the identification of blood disor de s, it can lead to classification of certain dise ases related to blood. This paper describes a preliminar y study of developing a detection of leukemia types using microscopic blood sample images. Analyzing through images is very important as from images, diseases can be detected and diagnosed at earlier stage. Fro m there, further actions like controlling, monitori ng and prevention of diseases can be done. Images are used as they are cheap and do not require expensive tes ting and lab equipments. The system will focus on white blood cells disease, leukemia. The system will use features in microscopic images and examine changes on texture, geometry, color and statistical analysi s. Changes in these features will be used as a classif ier input. A literature review has been done and Reinforcement Learning is proposed to classify type s of leukemia. A little discussion about issues inv olved by researchers also has been prepared."}
{"_id": "dffc86f4646c71c2c67278ac55aaf67db5a55b21", "title": "A Surgical Palpation Probe With 6-Axis Force/Torque Sensing Capability for Minimally Invasive Surgery", "text": "A novel surgical palpation probe installing a miniature 6-axis force/torque (F/T) sensor is presented for robot-assisted minimally invasive surgery. The 6-axis F/T sensor is developed by using the capacitive transduction principle and a novel capacitance sensing method. The sensor consists of only three parts, namely a sensing printed circuit board, a deformable part, and a base part. The simple configuration leads to simpler manufacturing and assembly processes in conjunction with high durability and low weight. In this study, a surgical instrument installed with a surgical palpation probe is implemented. The 6-axis F/T sensing capability of the probe has been experimentally validated by comparing it with a reference 6-axis F/T sensor. Finally, a vivo tissue palpation task is performed in a simulated surgical environment with an animal organ and a relatively hard simulated cancer buried under the surface of the organ."}
{"_id": "7796de8efb1866ffa45e7f8b6a7305cb8b582144", "title": "A review of essential standards and patent landscapes for the Internet of Things: A key enabler for Industry 4.0", "text": "This paper is a formal overview of standards and patents for Internet of Things (IoT) as a key enabler for the next generation advanced manufacturing, referred as Industry 4.0 (I 4.0). IoT at the fundamental level is a means of connecting physical objects to the Internet as a ubiquitous network that enables objects to collect and exchange information. The manufacturing industry is seeking versatile manufacturing service provisions to overcome shortened product life cycles, increased labor costs, and fluctuating customer needs for competitive marketplaces. This paper depicts a systematic approach to review IoT technology standards and patents. The thorough analysis and overview include the essential standard landscape and the patent landscape based on the governing standards organizations for America, Europe and China where most global manufacturing bases are located. The literature of emerging IoT standards from the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC) and the Guobiao standards (GB), and global patents issued in US, Europe, China and World Intellectual Property Organization (WIPO) are systematically presented in this study. 2016 Elsevier Ltd. All rights reserved."}
{"_id": "0b1942ebdcde63ad10540fd9670661f421b44f26", "title": "TanDEM-X: A Satellite Formation for High-Resolution SAR Interferometry", "text": "TanDEM-X (TerraSAR-X add-on for digital elevation measurements) is an innovative spaceborne radar interferometer that is based on two TerraSAR-X radar satellites flying in close formation. The primary objective of the TanDEM-X mission is the generation of a consistent global digital elevation model (DEM) with an unprecedented accuracy, which is equaling or surpassing the HRTI-3 specification. Beyond that, TanDEM-X provides a highly reconfigurable platform for the demonstration of new radar imaging techniques and applications. This paper gives a detailed overview of the TanDEM-X mission concept which is based on the systematic combination of several innovative technologies. The key elements are the bistatic data acquisition employing an innovative phase synchronization link, a novel satellite formation flying concept allowing for the collection of bistatic data with short along-track baselines, as well as the use of new interferometric modes for system verification and DEM calibration. The interferometric performance is analyzed in detail, taking into account the peculiarities of the bistatic operation. Based on this analysis, an optimized DEM data acquisition plan is derived which employs the combination of multiple data takes with different baselines. Finally, a collection of instructive examples illustrates the capabilities of TanDEM-X for the development and demonstration of new remote sensing applications."}
{"_id": "15ca31fb78fa69f02aaad5d6ad2e5eb61088dddf", "title": "Human-Computer Interaction for Complex Pattern Recognition Problems", "text": "We review some applications of human-computer interaction that alleviate the complexity of visual recognition by partitioning it into human and machine tasks to exploit the differences between human and machine capabilities. Human involvement offers advantages, both in the design of automated pattern classification systems, and at the operational level of some image retrieval and classification tasks. Recent development of interactive systems has benefited from the convergence of computer vision and psychophysics in formulating visual tasks as computational processes. Computer-aided classifier design and exploratory data analysis are already well established in pattern recognition and machine learning, but interfaces and functionality are improving. On the operational side, earlier recognition systems made use of human talent only in preprocessing and in coping with rejects. Most current content-based image retrieval systems make use of relevance feedback without direct image interaction. In contrast, some visual object classification systems can exploit such interaction. They require, however, a domain-specific visible model that makes sense to both human and computer."}
{"_id": "b353e8b6a976422d1200ed29d8c6ab01f0c0cc3d", "title": "Toward an integrated knowledge discovery and data mining process model", "text": "Enterprise decision making is continuously transforming in the wake of ever increasing amounts of data. Organizations are collecting massive amounts of data in their quest for knowledge nuggets in form of novel, interesting, understandable patterns that underlie these data. The search for knowledge is a multi-step process comprising of various phases including development of domain (business) understanding, data understanding, data preparation, modeling, evaluation and ultimately, the deployment of the discovered knowledge. These phases are represented in form of Knowledge Discovery and Data Mining (KDDM) Process Models that are meant to provide explicit support towards execution of the complex and iterative knowledge discovery process. Review of existing KDDM process models reveals that they have certain limitations (fragmented design, only a checklist-type description of tasks, lack of support towards execution of tasks, especially those of the business understanding phase etc) which are likely to affect the efficiency and effectiveness with which KDDM projects are currently carried out. This dissertation addresses the various identified limitations of existing KDDM process models through an improved model (named the Integrated Knowledge Discovery and Data Mining Process Model) which presents an integrated view of the KDDM process and provides explicit support towards execution of each one of the tasks outlined in the model. We also evaluate the effectiveness and efficiency offered by the IKDDM model against CRISP-DM, a leading KDDM process model, in aiding data mining users to execute various tasks of the KDDM process. Results of statistical tests"}
{"_id": "4c747fc7765b543a06f8e880c8a63a1aebf75ecf", "title": "CryptoDL : Towards Deep Learning over Encrypted Data", "text": "With increasing growth of cloud services, machine learning services can be run on cloud providers\u2019 infrastructure where training and deploying machine learning models are performed on cloud servers. However, machine learning solutions require access to the raw data, which can create potential security and privacy risks. In this work, we take the first steps towards developing theoretical foundation for implementing deep learning algorithms in encrypted domain and propose a method to adopt deep neural networks (NN) within the practical limitations of current homomorphic encryption schemes. We first design two methods for approximation of activation functions with low degree polynomials. Then, we train NNs with the generated polynomials and analyze the performance of the trained models. Finally, we run the low degree polynomials over encrypted values to estimate the computation costs and time."}
{"_id": "ab27ddbbed5a7bd221de44355e3a058b5ea1b14f", "title": "Exploiting Topic based Twitter Sentiment for Stock Prediction", "text": "*This work is supported by a grant from National Science Foundation (IIS-1111092) and a strategic research grant from City University of Hong Kong (7002770) *The work was done when the first author was visiting University of Illinois at Chicago. Abstract \u2022 The Web has seen a tremendous rise in social media. \u2022 People\u2019s experiences and opinions expressed in Social Media (e.g., Twitter, Facebook) contain rich social signals (e.g., about consumer sentiment, stock market, economic conditions, etc.). \u2022 The goal of this paper is to exploit public sentiment for stock market prediction. \u2022 Topic based sentiment is an important factor in stock prediction because it reflects public opinions about current issues/topics. \u2022 We employ a Bayesian non-parametric topic-based sentiment time series approach learned from streaming tweets to predict the S&P100 Stock Index."}
{"_id": "f9ad9115dac88872fb25a633fda3d87f547f75b0", "title": "Intelligent Maze Solving Robot Based on Image Processing and Graph Theory Algorithms", "text": "The most important task for maze solving robots is the fast and reliable finding of its shortest path from its initial point to its final destination point. This paper proposes an intelligent maze solving robot that can determine its shortest path on a line maze based on image processing and artificial intelligence algorithms. The image of the line maze is captured by a camera and sent to the computer to be analyzed and processed by a program developed using Visual C++ and OpenCV libraries and based on graph theory algorithms. The developed program solves the captured maze by examining all possible paths exist in the maze that could convey the robot to the required destination point. After that, the best shortest path is determined and then the instructions that guide the car-like robot to reach its desired destination point are sent to the robot through Bluetooth. The robot follows the received guide path to reach its destination. The proposed approach works faster than the traditional methods which push the robot to move through the maze cell by cell in order to find its destination point. Moreover, the proposed method allows the maze solving robot to avoid trapping and falling in infinity loops. Applications of maze solving systems include intelligent traffic control that helps ambulances, fire fighters, or rescuing robots to find their shortest path to their destination."}
{"_id": "ea99f5d06f501f86de266534d420f6fdaa4f2112", "title": "Monte-Carlo Tree Search for the Game of \"7 Wonders\"", "text": "Monte-Carlo Tree Search algorithm, and in particular with the Upper Confidence Bounds formula, provided huge improvements for AI in numerous games, particularly in Go, Hex, Havannah, Amazons and Breakthrough. In this work we study this algorithm on a more complex game, the game of \u201c7 Wonders\u201d. This card game gathers together several known difficult features, such as hidden information, N-player and stochasticity. It also includes an inter-player trading system that induces a combinatorial search to decide which decisions are legal. Moreover it is difficult to hand-craft an efficient evaluation function since the card values are heavily dependent upon the stage of the game and upon the other player decisions. We show that, in spite of the fact that \u201c7 Wonders\u201d is apparently not so related to abstract games, a lot of known results still hold."}
{"_id": "2188276d512c11271ac08bf72994cbec5d7468d3", "title": "Collaboration, creativity and the co-construction of oral and written texts", "text": "The Open University's repository of research publications and other research outputs Collaboration, creativity and the co-construction of oral and written texts Journal Article Copyright and Moral Rights for the articles on this site are retained by the individual authors and/or other copyright owners. For more information on Open Research Online's data policy on reuse of materials please consult the policies page. (To be included in a special issue of Thinking Skills and Creativity, on \" Collaborative Abstract In this paper we explore how primary school children 'learn to collaborate' and 'collaborate to learn' on creative writing projects by using diverse cultural artefacts-including oracy, literacy and ICT. We begin by reviewing some key socio-cultural concepts which serve as a theoretical framework for the research reported. Secondly, we describe the context in which the children talked and worked together to create their projects. This context is a 'learning community' developed as part of an innovative educational programme with the aim of promoting the social construction of knowledge among all participants. We then present microgenetic analyses of the quality of the interaction and dialogues taking place as peers worked together on their projects, and how these collaborative processes and uses of the mediational artefacts were taken up by the children. In order to exemplify these processes, our analyses centre on a selection of examples of dialogues, texts and multimedia products of stories created by groups of 4th grade (9-10 year-old) children. Overall, the work reveals the dynamic functioning in educational settings of some central socio-cultural concepts. These include: co-construction; intertextuality and intercontextuality amongst oracy, literacy and uses of ICT; collaborative creativity; development of dialogical and text production strategies and appropriation of diverse cultural artefacts for knowledge construction."}
{"_id": "1311ccef82e70126bc0f9d69084030f19c79eac2", "title": "A comparative analysis of common YouTube comment spam filtering techniques", "text": "Ever since its development in 2005, YouTube has been providing a vital social media platform for video sharing. Unfortunately, YouTube users may have malicious intentions, such as disseminating malware and profanity. One way to do so is using the comment field for this purpose. Although YouTube provides a built-in tool for spam control, yet it is insufficient for combating malicious and spam contents within the comments. In this paper, a comparative study of the common filtering techniques used for YouTube comment spam is conducted. The study deploys datasets extracted from YouTube using its Data API. According to the obtained results, high filtering accuracy (more than 98%) can be achieved with low-complexity algorithms, implying the possibility of developing a suitable browser extension to alleviate comment spam on YouTube in future."}
{"_id": "1df4dba907cd6afadb172d9c454c9978f5150bf7", "title": "A Novel Method for Comparative Analysis of DNA Sequences by Ramanujan-Fourier Transform", "text": "Alignment-free sequence analysis approaches provide important alternatives over multiple sequence alignment (MSA) in biological sequence analysis because alignment-free approaches have low computation complexity and are not dependent on high level of sequence identity. However, most of the existing alignment-free methods do not employ true full information content of sequences and thus can not accurately reveal similarities and differences among DNA sequences. We present a novel alignment-free computational method for sequence analysis based on Ramanujan-Fourier transform (RFT), in which complete information of DNA sequences is retained. We represent DNA sequences as four binary indicator sequences and apply RFT on the indicator sequences to convert them into frequency domain. The Euclidean distance of the complete RFT coefficients of DNA sequences are used as similarity measures. To address the different lengths of RFT coefficients in Euclidean space, we pad zeros to short DNA binary sequences so that the binary sequences equal the longest length in the comparison sequence data. Thus, the DNA sequences are compared in the same dimensional frequency space without information loss. We demonstrate the usefulness of the proposed method by presenting experimental results on hierarchical clustering of genes and genomes. The proposed method opens a new channel to biological sequence analysis, classification, and structural module identification."}
{"_id": "1bc8a129675fbbb651d0a23f827ab867165088f9", "title": "Safe Exploration of State and Action Spaces in Reinforcement Learning", "text": "In this paper, we consider the important problem of safe exploration in reinforcement learning. While reinforcement learning is well-suited to domains with complex transition dynamics and high-dimensional state-action spaces, an additional challenge is posed by the need for safe and efficient exploration. Traditional exploration techniques are not particularly useful for solving dangerous tasks, where the trial and error process may lead to the selection of actions whose execution in some states may result in damage to the learning system (or any other system). Consequently, when an agent begins an interaction with a dangerous and high-dimensional state-action space, an important question arises; namely, that of how to avoid (or at least minimize) damage caused by the exploration of the state-action space. We introduce the PI-SRL algorithm which safely improves suboptimal albeit robust behaviors for continuous state and action control tasks and which efficiently learns from the experience gained from the environment. We evaluate the proposed method in four complex tasks: automatic car parking, pole-balancing, helicopter hovering, and business management."}
{"_id": "bb3a3836652c8a581e0ce92fdb3a7cd884efec40", "title": "Optical see-through head up displays' effect on depth judgments of real world objects", "text": "Recent research indicates that users consistently underestimate depth judgments to Augmented Reality (AR) graphics when viewed through optical see-through displays. However, to our knowledge, little work has examined how AR graphics may affect depth judgments of real world objects that have been overlaid or annotated with AR graphics. This study begins a preliminary analysis whether AR graphics have directional effects on users' depth perception of real-world objects, as might be experienced in vehicle driving scenarios (e.g., as viewed via an optical see-through head-up display or HUD). Twenty-four participants were asked to judge the depth of a physical pedestrian proxy figure moving towards them at a constant rate of 1 meter/second. Participants were shown an initial target location that varied in distance from 11 to 20 m and were then asked to press a button to indicate when the moving target was perceived to be at the previously specified target location. Each participant experienced three different display conditions: no AR visual display (control), a conformal AR graphic overlaid on the pedestrian via a HUD, and the same graphic presented on a tablet physically located on the pedestrian. Participants completed 10 trials (one for each target distance between 11 and 20 inclusive) per display condition for a total of 30 trials per participant. The judged distance from the correct location was recorded, and after each trial, participants' confidence in determining the correct distance was captured. Across all conditions, participants underestimated the distance of the physical object consistent with existing literature. Greater variability was observed in the accuracy of distance judgments under the AR HUD condition relative to the other two display conditions. In addition, participant confidence levels were considerably lower in the AR HUD condition."}
{"_id": "2f201c77e7ccdf1f37115e16accac3486a65c03d", "title": "Stochastic Activation Pruning for Robust Adversarial Defense", "text": "Neural networks have been found vulnerable to adversarial examples. Carefully chosen perturbations to real images, while imperceptible to humans, induce misclassification and threaten the reliability of deep learning systems in the wild. To guard against adversarial examples, we take inspiration from game theory. We cast the problem as a minimax zero-sum game between the adversary and the model. For such games, it is in general optimal for players to use a stochastic policy, also known as a mixed strategy. In light of this, we propose a mixed strategy for the model: stochastic activation pruning. SAP prunes a random subset of activations (preferentially pruning activations with smaller magnitude) and scales up the survivors to compensate. SAP can be applied to pre-trained neural networks\u2014even adversarially trained models\u2014without fine-tuning, providing robustness against adversarial examples. Experiments demonstrate that SAP confers robustness in the adversarial setting, increasing accuracy whilst preserving calibration."}
{"_id": "0eb329b3291944ee358fcbad6d26a2e111addd6b", "title": "Restructuring of deep neural network acoustic models with singular value decomposition", "text": "Recently proposed deep neural network (DNN) obtains significant accuracy improvements in many large vocabulary continuous speech recognition (LVCSR) tasks. However, DNN requires much more parameters than traditional systems, which brings huge cost during online evaluation, and also limits the application of DNN in a lot of scenarios. In this paper we present our new effort on DNN aiming at reducing the model size while keeping the accuracy improvements. We apply singular value decomposition (SVD) on the weight matrices in DNN, and then restructure the model based on the inherent sparseness of the original matrices. After restructuring we can reduce the DNN model size significantly with negligible accuracy loss. We also fine-tune the restructured model using the regular back-propagation method to get the accuracy back when reducing the DNN model size heavily. The proposed method has been evaluated on two LVCSR tasks, with context-dependent DNN hidden Markov model (CD-DNN-HMM). Experimental results show that the proposed approach dramatically reduces the DNN model size by more than 80% without losing any accuracy."}
{"_id": "40ea042765e037628bb2728e34f9800c24f93600", "title": "Deep neural networks for small footprint text-dependent speaker verification", "text": "In this paper we investigate the use of deep neural networks (DNNs) for a small footprint text-dependent speaker verification task. At development stage, a DNN is trained to classify speakers at the frame-level. During speaker enrollment, the trained DNN is used to extract speaker specific features from the last hidden layer. The average of these speaker features, or d-vector, is taken as the speaker model. At evaluation stage, a d-vector is extracted for each utterance and compared to the enrolled speaker model to make a verification decision. Experimental results show the DNN based speaker verification system achieves good performance compared to a popular i-vector system on a small footprint text-dependent speaker verification task. In addition, the DNN based system is more robust to additive noise and outperforms the i-vector system at low False Rejection operating points. Finally the combined system outperforms the i-vector system by 14% and 25% relative in equal error rate (EER) for clean and noisy conditions respectively."}
{"_id": "5d6fca1c2dc1bb30b2bfcc131ec6e35a16374df8", "title": "An Early Resource Characterization of Deep Learning on Wearables, Smartphones and Internet-of-Things Devices", "text": "Detecting and reacting to user behavior and ambient context are core elements of many emerging mobile sensing and Internet-of-Things (IoT) applications. However, extracting accurate inferences from raw sensor data is challenging within the noisy and complex environments where these systems are deployed. Deep Learning -- is one of the most promising approaches for overcoming this challenge, and achieving more robust and reliable inference. Techniques developed within this rapidly evolving area of machine learning are now state-of-the-art for many inference tasks (such as, audio sensing and computer vision) commonly needed by IoT and wearable applications. But currently deep learning algorithms are seldom used in mobile/IoT class hardware because they often impose debilitating levels of system overhead (e.g., memory, computation and energy). Efforts to address this barrier to deep learning adoption are slowed by our lack of a systematic understanding of how these algorithms behave at inference time on resource constrained hardware. In this paper, we present the first -- albeit preliminary -- measurement study of common deep learning models (such as Convolutional Neural Networks and Deep Neural Networks) on representative mobile and embedded platforms. The aim of this investigation is to begin to build knowledge of the performance characteristics, resource requirements and the execution bottlenecks for deep learning models when being used to recognize categories of behavior and context. The results and insights of this study, lay an empirical foundation for the development of optimization methods and execution environments that enable deep learning to be more readily integrated into next-generation IoT, smartphones and wearable systems."}
{"_id": "5f274d318b1fac66a01a5ed31f4433de5e1fa032", "title": "EmotionSense: a mobile phones based adaptive platform for experimental social psychology research", "text": "Today's mobile phones represent a rich and powerful computing platform, given their sensing, processing and communication capabilities. Phones are also part of the everyday life of billions of people, and therefore represent an exceptionally suitable tool for conducting social and psychological experiments in an unobtrusive way.\n de the ability of sensing individual emotions as well as activities, verbal and proximity interactions among members of social groups. Moreover, the system is programmable by means of a declarative language that can be used to express adaptive rules to improve power saving. We evaluate a system prototype on Nokia Symbian phones by means of several small-scale experiments aimed at testing performance in terms of accuracy and power consumption. Finally, we present the results of real deployment where we study participants emotions and interactions. We cross-validate our measurements with the results obtained through questionnaires filled by the users, and the results presented in social psychological studies using traditional methods. In particular, we show how speakers and participants' emotions can be automatically detected by means of classifiers running locally on off-the-shelf mobile phones, and how speaking and interactions can be correlated with activity and location measures."}
{"_id": "4b93cb61f17f7ee722dac1c68020d97a1699ace3", "title": "Multimodal Deep Learning for Cervical Dysplasia Diagnosis", "text": "To improve the diagnostic accuracy of cervical dysplasia, it is important to fuse multimodal information collected during a patient\u2019s screening visit. However, current multimodal frameworks suffer from low sensitivity at high specificity levels, due to their limitations in learning correlations among highly heterogeneous modalities. In this paper, we design a deep learning framework for cervical dysplasia diagnosis by leveraging multimodal information. We first employ the convolutional neural network (CNN) to convert the low-level image data into a feature vector fusible with other non-image modalities. We then jointly learn the non-linear correlations among all modalities in a deep neural network. Our multimodal framework is an end-to-end deep network which can learn better complementary features from the image and non-image modalities. It automatically gives the final diagnosis for cervical dysplasia with 87.83% sensitivity at 90% specificity on a large dataset, which significantly outperforms methods using any single source of information alone and previous multimodal frameworks."}
{"_id": "8abccc77774967a48c3c0f7fb097297aad011d47", "title": "1 An End-to-End Framework for Evaluating Surface Reconstruction", "text": "We present a benchmark for the evaluation and comparison of algorithms which reconstruct a surface from point cloud data. Although a substantial amount of effort has been dedicated to the problem of surface reconstruction, a comprehensive means of evaluating this class of algorithms is noticeably absent. We propose a simple pipeline for measuring surface reconstruction algorithms, consisting of three main phases: surface modeling, sampling, and evaluation.We employ implicit surfaces for modeling shapes which are expressive enough to contain details of varying size, in addition to preserving sharp features. From these implicit surfaces, we produce point clouds by synthetically generating range scans which resemble realistic scan data. We validate our synthetic sampling scheme by comparing against scan data produced via a commercial optical laser scanner, wherein we scan a 3D-printed version of the original implicit surface. Last, we perform evaluation by comparing the output reconstructed surface to a dense uniformly-distributed sampling of the implicit surface. We decompose our benchmark into two distinct sets of experiments. The first set of experiments measures reconstruction against point clouds of complex shapes sampled under a wide variety of conditions. Although these experiments are quite useful for the comparison of surface reconstruction algorithms, they lack a fine-grain analysis. Hence to complement this, the second set of experiments are designed to measure specific properties of surface reconstruction, both from a sampling and surface modeling viewpoint. Together, these experiments depict a detailed examination of the state of surface reconstruction algorithms. An End-to-End Framework for Evaluating Surface Reconstruction"}
{"_id": "cd14329ef11aed723d528abb2d48f28e4295d695", "title": "Air-Traffic Complexity Resolution in Multi-Sector Planning", "text": "Using constraint programming, we effectively model and efficiently solve the problem of balancing and minimising the traffic complexities of an airspace of adjacent sectors. The traffic complexity of a sector is here defined in terms of the numbers of flights within it, near its border, and on non-level segments within it. The allowed forms of complexity resolution are the changing of the take-off times of not yet airborne flights, the changing of the remaining approach times into the chosen airspace of already airborne flights by slowing down and speeding up within the two layers of feeder sectors around that airspace, as well as the changing of the levels of passage over way-points in that airspace. Experiments with actual European flight profiles obtained from the Central Flow Management Unit (CFMU) show that these forms of complexity resolution can lead to significant complexity reductions and rebalancing."}
{"_id": "968102a9e7329c829e2a5993dc1612ec50a9e4dc", "title": "Underwater image enhancement via dark channel prior and luminance adjustment", "text": "Underwater images are degraded mainly by light scattering and color distortion. Light scattering occurs because the light is reflected and deflected multiple times by the suspended particles in the water before reaching the camera. Color distortion is caused due to the absorption degrees vary according to the light wavelength. At last, the underwater images are dominated by a bluish tone. In this paper, we propose a novel underwater image enhancement approach based on dark channel prior and luminance adjustment. The dehazed underwater image obtained by dark channel prior characterizes with low brightness. Then, we utilize a novel luminance adjustment method to enhance image contrast. The enhanced image shows improved global contrast and better preserved image details and edges. Experimental results demonstrate the effectiveness of our method compared to the state-of-the-art methods."}
{"_id": "8e54c3f24a46be98199a78df6b0ed2c974636fa9", "title": "Real-Time Recognition of Physical Activities and Their Intensities Using Wireless Accelerometers and a Heart Rate Monitor", "text": "In this paper, we present a real-time algorithm for automatic recognition of not only physical activities, but also, in some cases, their intensities, using five triaxial wireless accelerometers and a wireless heart rate monitor. The algorithm has been evaluated using datasets consisting of 30 physical gymnasium activities collected from a total of 21 people at two different labs. On these activities, we have obtained a recognition accuracy performance of 94.6% using subject-dependent training and 56.3% using subject-independent training. The addition of heart rate data improves subject-dependent recognition accuracy only by 1.2% and subject-independent recognition only by 2.1%. When recognizing activity type without differentiating intensity levels, we obtain a subject-independent performance of 80.6%. We discuss why heart rate data has such little discriminatory power."}
{"_id": "229dbc8d379f12d15597b17bb0502667b2ef3fa2", "title": "Brief Announcement: Dynamic Determinacy Race Detection for Task Parallelism with Futures", "text": "Existing dynamic determinacy race detectors for task-parallel programs are limited to programs with strict computation graphs, where a task can only wait for its descendant tasks to complete. In this paper, we present the first known determinacy race detector for non-strict computation graphs with futures. The space and time complexity of our algorithm are similar to those of the classical SP-bags algorithm, when using only structured parallel constructs such as spawn-sync and async-finish. In the presence of point-to-point synchronization using futures, the complexity of the algorithm increases by a factor determined by the number of future operations, which includes future task creation and future get operations. The experimental results show that the slowdown factor observed for our algorithm relative to the sequential version is in the range of 1.00x to 9.92x, which is very much in line with slowdowns experienced for fully strict computation graphs."}
{"_id": "b45a3a8e08b80295fa2913e66c673052b142d158", "title": "Generalized stacking of layerwise-trained Deep Convolutional Neural Networks for document image classification", "text": "This article presents our recent study of a lightweight Deep Convolutional Neural Network (DCNN) architecture for document image classification. Here, we concentrated on training of a committee of generalized, compact and powerful base DCNNs. A support vector machine (SVM) is used to combine the outputs of individual DCNNs. The main novelty of the present study is introduction of supervised layerwise training of DCNN architecture in document classification tasks for better initialization of weights of individual DCNNs. Each DCNN of the committee is trained for a specific part or the whole document. Also, here we used the principle of generalized stacking for combining the normalized outputs of all the members of the DCNN committee. The proposed document classification strategy has been tested on the well-known Tobacco3482 document image dataset. Results of our experimentations show that the proposed strategy involving a considerably smaller network architecture can produce comparable document classification accuracies in competition with the state-of-the-art architectures making it more suitable for use in comparatively low configuration mobile devices."}
{"_id": "fb6f5cb26395608a3cf0e9c6c618293a4278a8ad", "title": "Facial Image Attributes Transformation via Conditional Recycle Generative Adversarial Networks", "text": "This study introduces a novel conditional recycle generative adversarial network for facial attribute transformation, which can transform high-level semantic face attributes without changing the identity. In our approach, we input a source facial image to the conditional generator with target attribute condition to generate a face with the target attribute. Then we recycle the generated face back to the same conditional generator with source attribute condition. A face which should be similar to that of the source face in personal identity and facial attributes is generated. Hence, we introduce a recycle reconstruction loss to enforce the final generated facial image and the source facial image to be identical. Evaluations on the CelebA dataset demonstrate the effectiveness of our approach. Qualitative results show that our approach can learn and generate high-quality identity-preserving facial images with specified attributes."}
{"_id": "76264abe68ea6de6c14944990c5e01da1725c860", "title": "Less is more ? How demographic sample weights can improve public opinion estimates based on Twitter data", "text": "Twitter data is widely acknowledged to hold great promise for the study of political behavior and public opinion. However, a key limitation in previous studies is the lack of information about the sociodemographic characteristics of individual users, which raises concerns about the validity of inferences based on this source of data. This paper addresses this challenge by employing supervised machine learning methods to estimate the age, gender, race, party affiliation, propensity to vote, and income of any Twitter user in the U.S. The training dataset for these classifiers was obtained by matching a large dataset of 1 billion geolocated Twitter messages with voting registration records and estimates of home values across 15 different states, resulting in a sample of nearly 250,000 Twitter users whose sociodemographic traits are known. To illustrate the value of this approach, I offer three applications that use information about the predicted demographic composition of a random sample of 500,000 U.S. Twitter users. First, I explore how attention to politics varies across demographics groups. Then, I apply multilevel regression and postratification methods to recover valid estimate of presidential and candidate approval that can serve as early indicators of public opinion changes and thus complement traditional surveys. Finally, I demonstrate the value of Twitter data to study questions that may suffer from social desirability bias. Working paper. This version: September 21, 2016. I would like to thank John Ahlquist, Drew Dimmery, Pat Egan, Martin Elff, Mik Laver, Jeff Lewis, Thomas Leeper, Jonathan Nagler, Molly Roberts, Arthur Spirling, David Sontag for helpful comments and discussions. I also gratefully acknowledges financial support from the Gordon and Betty Moore Foundation and the Alfred P. Sloan Foundation."}
{"_id": "8e917a621d15dc37c71be1bdc3b43c41e56557c8", "title": "Effectiveness of Blog Advertising: Impact of Communicator Expertise, Advertising Intent, and Product Involvement", "text": "Blog advertising, which refers to the paid sponsorship of bloggers to review, promote, or sell products in their blog writing, is becoming prevalent. This paper investigates the impact of three critical factors on blog advertising effectiveness: communicator expertise, advertising intent, and product involvement. An experiment with a 2\u00d72\u00d72 factorial design was used to test their interaction effects on advertising effectiveness. The results indicate that, for low-involvement products, there is better advertising effectiveness when low-expertise communicators are explicit about the advertising intent or when high-expertise communicators are implicit about the advertising intent. But for high-involvement products, the results show that when low-expertise communicators are explicit about the advertising intent, the outcome is lesser advertising effectiveness. For such products, advertising effectiveness does not differ when high-expertise communicators are implicit or explicit about the advertising intent. Based on these results, some implications for further research and practice are given."}
{"_id": "f103fabe05026d93bc1033e0e03bf34b4da0932d", "title": "Feature Analysis for Fake Review Detection through Supervised Classification", "text": "Nowadays, review sites are more and more confronted with the spread of misinformation, i.e., opinion spam, which aims at promoting or damaging some target businesses, by misleading either human readers, or automated opinion mining and sentiment analysis systems. For this reason, in the last years, several data-driven approaches have been proposed to assess the credibility of user-generated content diffused through social media in the form of on-line reviews. Distinct approaches often consider different subsets of characteristics, i.e., features, connected to both reviews and reviewers, as well as to the network structure linking distinct entities on the review-site in exam. This article aims at providing an analysis of the main review- and reviewer-centric features that have been proposed up to now in the literature to detect fake reviews, in particular from those approaches that employ supervised machine learning techniques. These solutions provide in general better results with respect to purely unsupervised approaches, which are often based on graph-based methods that consider relational ties in review sites. Furthermore, this work proposes and evaluates some additional new features that can be suitable to classify genuine and fake reviews. For this purpose, a supervised classifier based on Random Forests have been implemented, by considering both well-known and new features, and a large-scale labeled dataset from which all these features have been extracted. The good results obtained show the effectiveness of new features to detect in particular singleton fake reviews, and in general the utility of this study."}
{"_id": "e9ed90f57c55e1229b67628930f37f3b5ed5dc12", "title": "Miniature Continuous Coverage Antenna Array for GNSS Receivers", "text": "This letter presents a miniature conformal array that provides continuous coverage and good axial ratio from 1100-1600 MHz. Concurrently, it maintains greater than 1.5 dBic RHCP gain and return loss less than -10 dB. The four element array is comprised of two-arm slot spirals with lightweight polymer substrate dielectric loading and a novel termination resistor topology. Multiple elements provide the capability to suppress interfering signals common in GNSS applications. The array, including feeding network, is 3.5\" times 3.5\" times 0.8\" in size and fits into the FRPA-3 footprint and radome."}
{"_id": "6597eb92249e137c6de9d68013bd683b63234af4", "title": "A Survey on Preprocessing Methods for Web Usage Data", "text": "World Wide Web is a huge repository of web pages and links. It provides abundance of information for the Internet users. The growth of web is tremendous as approximately one million pages are added daily. Users\u2019 accesses are recorded in web logs. Because of the tremendous usage of web, the web log files are growing at a faster rate and the size is becoming huge. Web data mining is the application of data mining techniques in web data. Web Usage Mining applies mining techniques in log data to extract the behavior of users which is used in various applications like personalized services, adaptive web sites, customer profiling, prefetching, creating attractive web sites etc., Web usage mining consists of three phases preprocessing, pattern discovery and pattern analysis. Web log data is usually noisy and ambiguous and preprocessing is an important process before mining. For discovering patterns sessions are to be constructed efficiently. This paper reviews existing work done in the preprocessing stage. A brief overview of various data mining techniques for discovering patterns, and pattern analysis are discussed. Finally a glimpse of various applications of web usage mining is also presented. KeywordsData Cleaning, Path Completion, Session Identification , User Identification, Web Log Mining"}
{"_id": "4dd87fa846dc344dcce5fb12de283a0d51dfe140", "title": "Exploration-exploitation tradeoff using variance estimates in multi-armed bandits", "text": "Algorithms based on upper confidence bounds for balancing exploration and exploitation are gaining popularity since they are easy to implement, efficient and effective. This paper considers a variant of the basic algorithm for the stochastic, multi-armed bandit problem that takes into account the empirical variance of the different arms. In earlier experimental works, such algorithms were found to outperform the competing algorithms. We provide the first analysis of the expected regret for such algorithms. As expected, our results show that the algorithm that uses the variance estimates has a major advantage over its alternatives that do not use such estimates provided that the variances of the payoffs of the suboptimal arms are low. We also prove that the regret concentrates only at a polynomial rate. This holds for all the upper confidence bound based algorithms and for all bandit problems except those special ones where with probability one the payoff obtained by pulling the optimal arm is IThis research was funded in part by the National Science and Engineering Research Council (NSERC), iCore and the Alberta Ingenuity Fund. \u2217Corresponding author. Email addresses: audibert@certis.enpc.fr (Jean Yves Audibert), remi.munos@inria.fr (R\u00e9mi Munos), szepesva@cs.ualberta.ca (Csaba Szepesv\u00e1ri) 1Csaba Szepesv\u00e1ri is on leave from MTA SZTAKI, Budapest, Hungary. Preprint submitted to Theoretical Computer Science October 1, 2008 larger than the expected payoff for the second best arm. Hence, although upper confidence bound bandit algorithms achieve logarithmic expected regret rates, they might not be suitable for a risk-averse decision maker. We illustrate some of the results by computer simulations."}
{"_id": "3342018e8defb402896d2133cda0417e49f1e9aa", "title": "Face Verification Across Age Progression", "text": "Human faces undergo considerable amounts of variations with aging. While face recognition systems have been proven to be sensitive to factors such as illumination and pose, their sensitivity to facial aging effects is yet to be studied. How does age progression affect the similarity between a pair of face images of an individual? What is the confidence associated with establishing the identity between a pair of age separated face images? In this paper, we develop a Bayesian age difference classifier that classifies face images of individuals based on age differences and performs face verification across age progression. Further, we study the similarity of faces across age progression. Since age separated face images invariably differ in illumination and pose, we propose preprocessing methods for minimizing such variations. Experimental results using a database comprising of pairs of face images that were retrieved from the passports of 465 individuals are presented. The verification system for faces separated by as many as nine years, attains an equal error rate of 8.5%"}
{"_id": "f0bc66a21a03190ce93165085ca313e25be7f1af", "title": "Design, Modeling, and Validation of a Soft Magnetic 3-D Force Sensor", "text": "Recent advances in robotics promise a future where robots co-exist and collaborate with humans in unstructured environments, which will require frequent physical interactions where accurate tactile information will be crucial for performance and safety. This article describes the design, fabrication, modeling, and experimental validation of a soft-bodied tactile sensor that accurately measures the complete 3-D force vector for both normal and shear loading conditions. Our research considers the detection of changes in the magnetic field vector due to the motion of a miniature magnet in a soft substrate to measure normal and shear forces with high accuracy and bandwidth. The proposed sensor is a pyramid-shaped tactile unit with a tri-axis Hall element and a magnet embedded in a silicone rubber substrate. The non-linear mapping between the 3-D force vector and the Hall effect voltages is characterized by training a neural network. We validate the proposed soft force sensor over static and dynamic loading experiments and obtain a mean absolute error below 11.7 mN or 2.2% of the force range. These results were obtained for a soft force sensor prototype and loading conditions not included in the training process, indicating strong generalization of the model. To demonstrate its utility, the proposed sensor is used in a force-controlled pick-and-place experiment as a proof-of-concept case study."}
{"_id": "755050d838b9b27d715c4bf1e8317294011fa5fc", "title": "Traffic engineering techniques for data center networks", "text": "Traffic engineering consists in improving the performance of the teleco-munication networks which is evaluated by a large number of criteria. The ultimate objective is to avoid congestion in the network by keeping its links from being overloaded. In large Ethernet networks with thousands of servers, such as data centers, improving the performance of the traditional switching protocols is a crucial but very challenging task due to an exploration in the size of solution space and the complexity. Thus, exact methods are inappropriate even for reasonable size networks. Local Search (LS) is a powerful method for solving computational optimization problems such as the Vertex Cover, Traveling Salesman, or Boolean Satisfiability. The advantage of LS for these problems is its ability to find an intelligent path from a low quality solution to a high quality one in a huge search space. In this thesis, we propose different approximate methods based on Local Search for solving the class of traffic engineering problems in data center networks that implement Spanning Tree Protocol and Multiple Spanning Tree Protocol. First, we tackle the minimization of the maximal link utilization in the Ethernet networks with one spanning tree. Next, we cope with data center networks containing many spanning trees. We then deal with the minimization of service disruption and the worst-case maximal link utilization in data center networks with many spanning trees. Last, we develop a novel design of multi-objective algorithms for solving the traffic engineering problems in large data centers by taking into account three objectives to be minimized: maximal link utilization, network total load and number of used links. Our schemes reduce significantly the size of the search space by releasing the dependence of the solutions from the link cost computation iii iv in order to obtain an intended spanning tree. We propose efficient in-cremental techniques to speed up the computation of objective values. Furthermore, our approaches show good results on the credible data sets and are evaluated by the strong assessment methods. ACKNOWLEDGEMENTS First of all, I would like to thank Yves Deville and Olivier Bonaventure for the four years they spent helping me. I am very lucky to have two great advisors of two different promoting styles. Yves Deville is a very demanding supervisor who wants every smallest detail to be clear and intuitive. His ideas and his criticisms sometimes made me nervous or even angry but I could not deny the truth \u2026"}
{"_id": "080c8650dd8b3107fb6af6d3e0d5a1d4aabf4f4b", "title": "A framework for collaborative sensing and processing of mobile data streams: demo", "text": "Emerging mobile applications involve continuous sensing and complex computations on sensed data streams. Examples include cognitive apps (e.g., speech recognition, natural language translation, as well as face, object, or gesture detection and recognition) and anticipatory apps that proactively track and provide services when needed. Unfortunately, today's mobile devices cannot keep pace with such apps, despite advances in hardware capability. Traditional approaches address this problem by computation offloading. One approach offloads by sending sensed streams to remote cloud servers via cellular networks or to cloudlets via Wi-Fi, where a clone of the app runs [2, 3, 4]. However, cloudlets may not be widely deployed and access to cloud infrastructure may yield high network delays and can be intermittent due to mobility. Morever, users might hesitate to upload private sensing data to the cloud or cloudlet. A second approach offloads to accelerators by rewriting code to use DSP or GPU within mobile devices. However, using accelerators requires substantial programming effort and produces varied benefits for diverse codes on heterogeneous devices."}
{"_id": "6b09fb02ace9507f5ff50e87cc7171a81f5b2e5f", "title": "Optimal Expectations and the Welfare Cost of Climate Variability", "text": "Uncertainty about the future is an important determinant of well-being, especially in developing countries where financial markets and other market failures result in ineffective insurance mechanisms. However, separating the effects of future uncertainty from realised events, and then measuring the impact of uncertainty on utility, presents a number of empirical challenges. This paper aims to address these issues and provides supporting evidence to show that increased climate variability (a proxy for future income uncertainty) reduces farmers\u2019 subjective well-being, consistent with the theory of optimal expectations (Brunnermeier & Parker, 2005 AER), using panel data from rural Ethiopia and a new data set containing daily atmospheric parameters. The magnitude of our result indicates that a one standard deviation (7%) increase in climate variability has an equivalent effect on life satisfaction to a two standard deviation (1-2%) decrease in consumption. This effect is one of the largest determinants of life satisfaction in rural Ethiopia. (JEL: C25, D60, I31.) \u2217Alem \u2013 Department of Economics, University of Gothenburg, Sweden and Department of Agricultural and Resource Economics, UC, Berkeley, USA. E-mail: yonas.alem@economics.gu.se. Colmer \u2013 The Grantham Research Institute and Centre for Economic Performance, London School of Economics, UK. E-mail: j.m.colmer@lse.ac.uk. We are grateful to Allen Blackman, Jeffery Bookwalter, Gharad Bryan, Douglas Dalenberg, Paul Dolan, John Feddersen, Greer Gosnell, Cameron Hepburn, Derek Kellenberg, Peter Martinsson, Kyle Meng, Rob Metcalfe, Eric Neumayer, Jonathan Parker, Hendrik Wolff and seminar participants at the EfD 6th Annual Meeting, EAERE, the University of Oxford, London School of Economics, University of Montana, and Resources for the Future for helpful thoughts, comments and discussions. The first author would like to thank the Swedish International Development Agency (Sida) through the Environment for Development Initiative (EfD) at the University of Gothenburg, and the Swedish Research Council Formas through the Human Cooperation to Manage Natural Resources (COMMONS) programme for financial support. Part of this research was done while Alem was a Visiting Scholar at the Department of Agricultural and Resource Economics, University of California Berkeley. The second author would like to thank the ESRC Centre for Climate Change Economics and Policy and the Grantham Foundation for financial support. The data used in this article were collected by the University of Addis Ababa, the International Food Policy Research Institute (IFPRI), and the Centre for the Study of African Economies (CSAE). Funding for the ERHS survey was provided by the Economic and Social Research Council (ESRC), the Swedish International Development Agency (SIDA) and the United States Agency for International Development (USAID). All errors and omissions are our own."}
{"_id": "b8a3c1c0ebe113ffb61b0066aa91ef03701e5550", "title": "Portable Option Discovery for Automated Learning Transfer in Object-Oriented Markov Decision Processes", "text": "We introduce a novel framework for option discovery and learning transfer in complex domains that are represented as object-oriented Markov decision processes (OO-MDPs) [Diuk et al., 2008]. Our framework, Portable Option Discovery (POD), extends existing option discovery methods, and enables transfer across related but different domains by providing an unsupervised method for finding a mapping between object-oriented domains with different state spaces. The framework also includes heuristic approaches for increasing the efficiency of the mapping process. We present the results of applying POD to Pickett and Barto\u2019s [2002] PolicyBlocks and MacGlashan\u2019s [2013] Option-Based Policy Transfer in two application domains. We show that our approach can discover options effectively, transfer options among different domains, and improve learning performance with low computational overhead."}
{"_id": "8e2548e29b45fda7725605b71a21318d86f9b00e", "title": "Semantic component retrieval in software engineering", "text": "In the early days of programming the concept of subroutines, and through this software reuse, was invented to spare limited hardware resources. Since then software systems have become increasingly complex and developing them would not have been possible without reusable software elements such as standard libraries and frameworks. Furthermore, other approaches commonly subsumed under the umbrella of software reuse such as product lines and design patterns have become very successful in recent years. However, there are still no software component markets available that would make buying software components as simple as buying parts in a do-it-yourself hardware store and millions of software fragments are still lying un(re)used in configuration management repositories all over the world. The literature primarily blames this on the immense effort required so far to set up and maintain searchable component repositories and the weak mechanisms available for retrieving components from them, resulting in a severe usability problem. In order to address these issues within this thesis, we developed a proactive component reuse recommendation system, naturally integrated into test-first development approaches, which is able to propose semantically appropriate, reusable components according to the specification a developer is just working on. We have implemented an appropriate system as a plugin for the well-known Eclipse IDE and demonstrated its usefulness by carrying out a case study from a popular agile development book. Furthermore, we present a precision analysis for our approach and examples of how components can be retrieved based on a simplified semantics description in terms of standard test cases. Zusammenfassung Zu Zeiten der ersten Programmiersprachen wurde die Idee von Unterprogrammen und damit die Idee der Wiederverwendung von Software zur Einsparung knapper Hardware-Ressourcen erdacht. Seit dieser Zeit wurden Software-Systeme immer komplexer und ihre Entwicklung w\u00e4re ohne weitere wiederverwendbare SoftwareElemente wie Bibliotheken und Frameworks schlichtweg nicht mehr handhabbar. Weitere, \u00fcblicherweise unter dem Begriff Software Reuse zusammengefasste Ans\u00e4tze, wie z.B. Produktlinien und Entwurfsmuster waren in den letzten Jahren ebenfalls sehr erfolgreich, gleichzeitig existieren allerdings noch immer keine Marktpl\u00e4tze, die das Kaufen von Software-Komponenten so einfach machen w\u00fcrden, wie den Einkauf von Kleinteilen in einem Heimwerkermarkt. Daher schlummern derzeit Millionen von nicht (wieder) genutzten Software-Fragmenten in Konfigurations-Management-Systemen auf der ganzen Welt. Die Fachliteratur lastet dies prim\u00e4r dem hohen Aufwand, der bisher f\u00fcr Aufbau und Instandhaltung von durchsuchbaren Komponenten-Repositories getrieben werden musste, an. Zusammen mit den ungenauen Algorithmen, wie sie bisher zum Durchsuchen solcher Komponentenspeicher zur Verf\u00fcgung stehen, macht diese Tatsache die Benutzung dieser Systeme zu kompliziert und damit unattraktiv. Um diese H\u00fcrde k\u00fcnftig abzumildern, entwickelten wir in der vorliegenden Arbeit ein proaktives Komponenten-Empfehlungssystem, das eng an testgetriebene Entwicklungsprozesse angelehnt ist und darauf aufbauend wiederverwendbare Komponenten vorschlagen kann, die genau die Funktionalit\u00e4t erbringen, die ein Entwickler gerade ben\u00f6tigt. 
Wir haben das System als Plugin f\u00fcr die bekannte Eclipse IDE entwickelt und seine Nutzbarkeit unter Beweis gestellt, in dem wir ein Beispiel aus einem bekannten Buch \u00fcber agile Entwicklung damit nachimplementiert haben. Weiterhin enth\u00e4lt diese Arbeit eine Analyse der Precision unseres Ansatzes sowie zahlreiche Beispiele, wie gew\u00f6hnliche Testf\u00e4lle als vereinfachte semantische Beschreibung einer Komponente und als Ausgangspunkt f\u00fcr die Suche nach wiederverwendbaren Komponenten genutzt werden k\u00f6nnen."}
{"_id": "9b95747c0220379df9c59c29092f22e8e54a681c", "title": "An efficient automated design to generate UML diagram from Natural Language Specifications", "text": "The foremost problem that arises in the Software Development Cycle is during the Requirements specification and analysis. Errors that are encountered during the first phase of the cycle migrate to other phases too which in turn results in the most costly process than the original specified process. The reason is that the specifications of software requirements are termed in the Nature Language Format. One can easily transform the requirements specified into computer model using UML. To minimize the errors that arise in the existing system, we have proposed a new technique that enhances the generation of UML models through Natural Language requirements, which can easily provide automatic assistance to the developers. The main aim of our paper is to focus on the production of Activity Diagram and Sequence Diagram through Natural Language Specifications. Standard POS tagger and parser analyze the input i.e., requirements in English language given by the users and extract phrases, activities, etc. from the text specifies. The technique is beneficial as it reduces the gap between informal natural language and the formal modeling language. The input is the requirements laid down by the users in English language. Some stages like pre-processing, part of speech (POs), tagging, parsing, phrase identification and designing of UML diagrams occur along with the input. The application and its framework is developed in Java and it is tested on by implementing on a few technical documents."}
{"_id": "94a4a4fdb58a6777f13bb60955470bb10a415d6f", "title": "Classification of Text Documents", "text": "The exponential growth of the internet has led to a great deal of interest in developing useful and efficient tools and software to assist users in searching the Web. Document retrieval, categorization, routing and filtering can all be formulated as classification problems. However, the complexity of natural languages and the extremely high dimensionality of the feature space of documents have made this classification problem very difficult. We investigate four different methods for document classification: the naive Bayes classifier, the nearest neighbour classifier, decision trees and a subspace method. These were applied to seven-class Yahoo news groups (business, entertainment, health, international, politics, sports and technology) individually and in combination. We studied three classifier combination approaches: simple voting, dynamic classifier selection and adaptive classifier combination. Our experimental results indicate that the naive Bayes classifier and the subspace method outperform the other two classifiers on our data sets. Combinations of multiple classifiers did not always improve the classification accuracy compared to the best individual classifier. Among the three different combination approaches, our adaptive classifier combination method introduced here performed the best. The best classification accuracy that we are able to achieve on this seven-class problem is approximately 83 %, which is comparable to the performance of other similar studies. However, the classification problem considered here is more difficult because the pattern classes used in our experiments have a large overlap of words in their corresponding documents."}
{"_id": "1b631abf2977c0efc186483658d7242098c921b8", "title": "Ontologies and Knowledge Bases. Towards a Terminological Clarification", "text": "The word \u201contology\u201d has recently gained a good popularity within the knowledge engineering community. However, its meaning tends to remain a bit vague, as the term is used in very different ways. Limiting our attention to the various proposals made in the current debate in AI, we isolate a number of interpretations, which in our opinion deserve a suitable clarification. We elucidate the implications of such various interpretations, arguing for the need of clear terminological choices regarding the technical use of terms like \u201contology\u201d, \u201cconceptualization\u201d and \u201contological commitment\u201d. After some comments on the use \u201cOntology\u201d (with the capital \u201co\u201d) as a term which denotes a philosophical discipline, we analyse the possible confusion between an ontology intended as a particular conceptual framework at the knowledge level and an ontology intended as a concrete artifact at the symbol level, to be used for a given purpose. A crucial point in this clarification effort is the careful analysis of Gruber\u2019 s definition of an ontology as a specification of a conceptualization."}
{"_id": "47c9c4ea22d4d4a286e74ed1f8b8f62d9bea54fb", "title": "Knowledge Engineering: Principles and Methods", "text": "This paper gives an overview about the development of the field of Knowledge Engineering over the last 15 years. We discuss the paradigm shift from a transfer view to a modeling view and describe two approaches which considerably shaped research in Knowledge Engineering: Role-limiting Methods and Generic Tasks. To illustrate various concepts and methods which evolved in the last years we describe three modeling frameworks: CommonKADS, MIKE, and PROT\u00c9G\u00c9-II. This description is supplemented by discussing some important methodological developments in more detail: specification languages for knowledge-based systems, problem-solving methods, and ontologies. We conclude with outlining the relationship of Knowledge Engineering to Software Engineering, Information Integration and Knowledge Management."}
{"_id": "598909e8871a92a3c8a19eeaefdbe6dd8e271b55", "title": "What Is an Ontology?", "text": "The word \u201contology\u201d is used with different senses in different communities. The most radical difference is perhaps between the philosophical sense, which has of course a well-established tradition, and the computational sense, which emerged in the recent years in the knowledge engineering community, starting from an early informal definition of (computational) ontologies as \u201cexplicit specifications of conceptualizations\u201d. In this paper we shall revisit the previous attempts to clarify and formalize such original definition, providing a detailed account of the notions of conceptualization and explicit specification, while discussing at the same time the importance of shared explicit specifications."}
{"_id": "1c27cb8364a7655b2e4e8aa799970a08f90dea61", "title": "Building a Large-Scale Knowledge Base for Machine Translation", "text": "Introduction Linguistic Resources Knowledge-based machine translation (KBMT) systems have achieved excellent results in constrained domains, but have not yet scaled up to newspaper text. The reason is that knowledge resources (lexicons, grammar rules, world models) must be painstakingly handcrafted from scratch. One of the hypotheses being tested in the PANGLOSS machine translation project is whether or not these resources can be semi-automatically acquired on a very large scale. This paper focuses on the construction of a large ontology (or knowledge base, or world model) for supporting KBMT. It contains representations for some 70,000 commonly encountered objects, processes, qualities, and relations. The ontology was constructed by merging various online dictionaries, semantic networks, and bilingual resources, through semi-automatic methods. Some of these methods (e.g., conceptual matching of semantic taxonomies) are broadly applicable to problems of importing/exporting knowledge from one KB to another. Other methods (e.g., bilingual matching) allow a knowledge engineer to build up an index to a KB in a second language, such as Spanish or Japanese. USC/Information Sciences Institute 4676 Admiralty Way Marina del Rey, CA 90292 knight,luk @isi.edu"}
{"_id": "25663da276c3d0d913ecf31cde5a8e7f6f19151d", "title": "Toward principles for the design of ontologies used for knowledge sharing?", "text": "Recent work in Artificial Intelligence is exploring the use of formal ontologies as a way of specifying content-specific agreements for the sharing and reuse of knowledge among software entities. We take an engineering perspective on the development of such ontologies. Formal ontologies are viewed as designed artifacts, formulated for specific purposes and evaluated against objective design criteria. We describe the role of ontologies in supporting knowledge sharing activities, and then present a set of criteria to guide the development of ontologies for these purposes. We show how these criteria are applied in case studies from the design of ontologies for engineering mathematics and bibliographic data. Selected design decisions are discussed, and alternative representation choices and evaluated against the design criteria."}
{"_id": "9845cad436c22642e6c99d24b020f8ca0b528ceb", "title": "Cloud Computing Security: A Survey", "text": "1 Qatar Computing Research Institute (QCRI), Qatar Foundation, Doha, Qatar; E-Mail: ikhalil@uaeu.ac.ae 2 Department of Electrical and Computer Engineering, Newark College of Engineering, New Jersey Institute of Technology, University Heights, Newark, NJ 07102, USA; E-Mail: abdallah@njit.edu 3 College of Information Technology, United Arab Emirates University, PO Box 15551, Al Ain, United Arab Emirates; E-Mail: azeemsamma@hotmail.com"}
{"_id": "408e8eecc14c5cc60bbdfc486ba7a7fc97031788", "title": "Discriminative Unsupervised Feature Learning with Convolutional Neural Networks", "text": "Current methods for training convolutional neural networks depend on large amounts of labeled samples for supervised training. In this paper we present an approach for training a convolutional neural network using only unlabeled data. We train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled \u2019seed\u2019 image patch. We find that this simple feature learning algorithm is surprisingly successful when applied to visual object recognition. The feature representation learned by our algorithm achieves classification results matching or outperforming the current state-of-the-art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101)."}
{"_id": "9c3d3e64ffead7ea93cd21ab7ab89fc9afcd690c", "title": "Hidden in Plain Sight: Storing and Managing Secrets on a Public Ledger", "text": "Current blockchain systems are incapable of holding sensitive data securely on their public ledger while supporting accountability of data access requests and revocability of data access rights. Instead, they either keep the sensitive data off-chain as a semi-centralized solution or they just publish the data on the ledger ignoring the problem altogether. In this work, we introduce SCARAB the first secure decentralized access control mechanism for blockchain systems that addresses the challenges of accountability, by publicly logging each request before granting data access, and of revocability, by introducing collectively managed data access policies. SCARAB introduces, therefore, on-chain secrets, which utilize verifiable secret sharing to enable collectively managed secrets under a Byzantine adversary, and identity skipchains, which enable the dynamic management of identities and of access control policies. The evaluation of our SCARAB implementation shows that the latency of a single read/write request scales linearly with the number of access-securing trustees and is in the range of 200 ms to 8 seconds for 16 to 128 trustees."}
{"_id": "e6a5ed2a0498d6e58d2fa370063ad3647b11e249", "title": "Controlling negative and positive power at the ankle with a soft exosuit", "text": "The soft exosuit is a new approach for applying assistive forces over the wearer's body through load paths configured by the textile architecture. In this paper, we present a body-worn lower-extremity soft exosuit and a new control approach that can independently control the level of assistance that is provided during negative- and positive-power periods at the ankle. The exosuit was designed to create load paths assisting ankle plantarflexion and hip flexion, and the actuation system transmits forces from the motors to the suit via Bowden cables. A load cell and two gyro sensors per leg are used to measure real-time data, and the controller performs position control of the cable on a step-by-step basis with respect to the power delivered to the wearer's ankle by controlling two force parameters, the pretension and the active force. Human subjects testing results demonstrate that the controller is capable of modulating the amount of power delivered to the ankle joint. Also, significant reductions in metabolic rate (11%-15%) were observed, which indicates the potential of the proposed control approach to provide benefit to the wearer during walking."}
{"_id": "8215133f440fbed91aa319da2256d0b7efe9535d", "title": "The Challenge of Big Data in Public Health: An Opportunity for Visual Analytics", "text": "Public health (PH) data can generally be characterized as big data. The efficient and effective use of this data determines the extent to which PH stakeholders can sufficiently address societal health concerns as they engage in a variety of work activities. As stakeholders interact with data, they engage in various cognitive activities such as analytical reasoning, decision-making, interpreting, and problem solving. Performing these activities with big data is a challenge for the unaided mind as stakeholders encounter obstacles relating to the data's volume, variety, velocity, and veracity. Such being the case, computer-based information tools are needed to support PH stakeholders. Unfortunately, while existing computational tools are beneficial in addressing certain work activities, they fall short in supporting cognitive activities that involve working with large, heterogeneous, and complex bodies of data. This paper presents visual analytics (VA) tools, a nascent category of computational tools that integrate data analytics with interactive visualizations, to facilitate the performance of cognitive activities involving big data. Historically, PH has lagged behind other sectors in embracing new computational technology. In this paper, we discuss the role that VA tools can play in addressing the challenges presented by big data. In doing so, we demonstrate the potential benefit of incorporating VA tools into PH practice, in addition to highlighting the need for further systematic and focused research."}
{"_id": "e1af23ebf650056824b7d779f4ef27be05989aa7", "title": "A Customizable Matrix Multiplication Framework for the Intel HARPv2 Xeon+FPGA Platform: A Deep Learning Case Study", "text": "General Matrix to Matrix multiplication (GEMM) is the cornerstone for a wide gamut of applications in high performance computing (HPC), scientific computing (SC) and more recently, deep learning. In this work, we present a customizable matrix multiplication framework for the Intel HARPv2 CPU+FPGA platform that includes support for both traditional single precision floating point and reduced precision workloads. Our framework supports arbitrary size GEMMs and consists of two parts: (1) a simple application programming interface (API) for easy configuration and integration into existing software and (2) a highly customizable hardware template. The API provides both compile and runtime options for controlling key aspects of the hardware template including dynamic precision switching; interleaving and block size control; and fused deep learning specific operations. The framework currently supports single precision floating point (FP32), 16, 8, 4 and 2 bit Integer and Fixed Point (INT16, INT8, INT4, INT2) and more exotic data types for deep learning workloads: INT16xTernary, INT8xTernary, BinaryxBinary.\n We compare our implementation to the latest NVIDIA Pascal GPU and evaluate the performance benefits provided by optimizations built into the hardware template. Using three neural networks (AlexNet, VGGNet and ResNet) we illustrate that reduced precision representations such as binary achieve the best performance, and that the HARPv2 enables fine-grained partitioning of computations over both the Xeon and FPGA. We observe up to 50x improvement in execution time compared to single precision floating point, and that runtime configuration options can improve the efficiency of certain layers in AlexNet up to 4x, achieving an overall 1.3x improvement over the entire network."}
{"_id": "b2b22dad2ae73278a8349f813f2e864614a04ed4", "title": "Designing inflatable structures", "text": "We propose an interactive, optimization-in-the-loop tool for designing inflatable structures. Given a target shape, the user draws a network of seams defining desired segment boundaries in 3D. Our method computes optimally-shaped flat panels for the segments, such that the inflated structure is as close as possible to the target while satisfying the desired seam positions. Our approach is underpinned by physics-based pattern optimization, accurate coarse-scale simulation using tension field theory, and a specialized constraint-optimization method. Our system is fast enough to warrant interactive exploration of different seam layouts, including internal connections, and their effects on the inflated shape. We demonstrate the resulting design process on a varied set of simulation examples, some of which we have fabricated, demonstrating excellent agreement with the design intent."}
{"_id": "7e9db6bcc86f0e0359e3056c60aa7d96f4d8feed", "title": "Smartphone physics \u2013 a smart approach to practical work in science education ?", "text": "In the form of teacher didactical design research, this work addresses a didactical issue encountered during physics teaching in a Swedish upper secondary school. A need for renewed practical laboratory work related to Newtonian mechanics is met by proposing and designing an activity based on highspeed photography using the nowadays omnipresent smartphone, thus bringing new technology into the classroom. The activity \u2013 video analysis of the collision physics of football kicks \u2013 is designed and evaluated by following a didactical design cycle. The work elaborates on how the proposed laboratory activity relates to the potential and complications of experimental activities in science education, as described in the vast literature on the topic. It is argued that the use of smartphones constitutes an interesting use of new technology for addressing known problems of practical work. Of particular interest is that smartphones offer a way to bridge the gap between the everyday life of students and the world of physics experiments (smartphones are powerful pocket laboratories). The use of smartphones also avoids using unfamiliar laboratory equipment that is known to hinder focus on intended content, while at the same time exploring a powerful tool for data acquisition and analysis. Overall, the use of smartphones (and computers) in this manner can be seen as the result of applying Occam\u2019s razor to didactics: only familiar and readily available instrumentation is used, and skills learned (movie handling and image analysis) are all educationally worthwhile. Although the activity was judged successful, a systematic investigation of learning outcome was out of scope. This means that no strong conclusions can be drawn based on this limited work. Nonetheless, the smartphone activity was well received by the students and should constitute a useful addition to the set of instructional approaches, especially since variation is known to benefit learning. The main failure of the design was an overestimation of student prior knowledge on motion physics (and its application to image data). As a consequence, the activity took required more time and effort than originally anticipated. No severe pitfalls of smartphone usage were identified, but it should be noted that the proposed activity \u2013 with its lack of well-defined results due to variations in kick strength \u2013 requires that the teacher is capable of efficiently analysing multiple student films (avoiding the feedback process to become overwhelmingly time consuming). If not all student films are evaluated, the feedback to the students may become of low quality, and misconceptions may pass under the radar. On the other hand, given that programming from 2018 will become compulsory, an interesting development of the activity would be to include handling of images and videos using a high-level programming language like Python."}
{"_id": "f7c9ee705b857cd093f00a531ba90bcf5ba5b25a", "title": "Mining HEXACO personality traits from Enterprise Social Media", "text": "In this paper we introduce a novel computational technique of extraction of personality traits (HEXACO) of employees from Enterprise Social Media posts. We deal with challenges such as not being able to use existing survey instruments for scoring and not being able to directly use existing psychological studies on written text due to lack of overlapping words between the existing dictionary and words used in Enterprise Social Media. Using our approach we are able to infer personality traits (HEXACO) from posts and find better coverage and usage of the extended dictionary."}
{"_id": "2db631dd019b085a2744e04ed2e3658462ec57a4", "title": "Hawkes Processes with Stochastic Excitations", "text": "We propose an extension to Hawkes processes by treating the levels of self-excitation as a stochastic differential equation. Our new point process allows better approximation in application domains where events and intensities accelerate each other with correlated levels of contagion. We generalize a recent algorithm for simulating draws from Hawkes processes whose levels of excitation are stochastic processes, and propose a hybrid Markov chain Monte Carlo approach for model fitting. Our sampling procedure scales linearly with the number of required events and does not require stationarity of the point process. A modular inference procedure consisting of a combination between Gibbs and Metropolis Hastings steps is put forward. We recover expectation maximization as a special case. Our general approach is illustrated for contagion following geometric Brownian motion and exponential Langevin dynamics."}
{"_id": "8681f43d3e28bbdcc0c36e2d5afc6e1b192e735d", "title": "ICP-based pose-graph SLAM", "text": "Odometry-like localization solutions can be built upon Light Detection And Ranging (LIDAR) sensors, by sequentially registering the point clouds acquired along a robot trajectory. Yet such solutions inherently drift over time: we propose an approach that adds a graphical model layer on top of such LIDAR odometry layer, using the Iterative Closest Points (ICP) algorithm for registration. Reference frames called keyframes are defined along the robot trajectory, and ICP results are used to build a pose graph, that in turn is used to solve an optimization problem that provides updates for the keyframes upon loop closing, enabling the correction of the path of the robot and of the map of the environment. We present in details the configuration used to register data from the Velodyne High Definition LIDAR (HDL), and a strategy to build local maps upon which current frames are registered, either when discovering new areas or revisiting previously mapped areas. Experiments show that it is possible to build the graph using data from ICP and that the loop closings in the graph level reduces the overall drift of the system."}
{"_id": "58f90ba279b804676141359773887cdad366fd0d", "title": "Panelrama : A Framework for Cross-Device Applications", "text": "We introduce Panelrama, a web framework for the construction of distributed user interfaces (DUI). Our implementation provides developers with low migration costs through built-in mechanisms for the synchronization of UI state and minimal changes to existing languages. Additionally, we describe a solution to categorize device characteristics and dynamically change UI allocation to best-fit devices. Finally, we illustrate the use of Panelrama through a sample application which demonstrates multidevice interaction techniques and collaborative potential. Author"}
{"_id": "5099c982707d23f94a352f2eafbd6fc48c7c45b8", "title": "Mitigating Poisoning Attacks on Machine Learning Models: A Data Provenance Based Approach", "text": "The use of machine learning models has become ubiquitous. Their predictions are used to make decisions about healthcare, security, investments and many other critical applications. Given this pervasiveness, it is not surprising that adversaries have an incentive to manipulate machine learning models to their advantage. One way of manipulating a model is through a poisoning or causative attack in which the adversary feeds carefully crafted poisonous data points into the training set. Taking advantage of recently developed tamper-free provenance frameworks, we present a methodology that uses contextual information about the origin and transformation of data points in the training set to identify poisonous data, thereby enabling online and regularly re-trained machine learning applications to consume data sources in potentially adversarial environments. To the best of our knowledge, this is the first approach to incorporate provenance information as part of a filtering algorithm to detect causative attacks. We present two variations of the methodology - one tailored to partially trusted data sets and the other to fully untrusted data sets. Finally, we evaluate our methodology against existing methods to detect poison data and show an improvement in the detection rate."}
{"_id": "e85f578f51a1068934e1d3fa3a832fdf62d5c770", "title": "Terahertz band communications: Applications, research challenges, and standardization activities", "text": "Terahertz frequency band, 0.1\u201310THz, is envisioned as one of the possible resources to be utilized for wireless communications in networks beyond 5G. Communications over this band will be feature a number of attractive properties, including potentially terabit-per-second link capacities, miniature transceivers and, potentially, high energy efficiency. Meanwhile, a number of specific research challenges have to be addressed to convert the theoretical estimations into commercially attractive solutions. Due to the diversity of the challenges, the research on THz communications at its early stages was mostly performed by independent communities from different areas. Therefore, the existing knowledge in the field is substantially fragmented. In this paper, an attempt to address this issue and provide a clear and easy to follow introduction to the THz communications is performed. A review on the state-of-the-art in THz communications research is given by identifying the target applications and major open research challenges as well as the recent achievements by industry, academia, and the standardization bodies. The potential of the THz communications is presented by illustrating the basic tradeoffs in typical use cases. Based on the given summary, certain prospective research directions in the field are identified."}
{"_id": "407e4b401395682b15c431347ec9b0f88ceec04b", "title": "Multi-target tracking by learning local-to-global trajectory models", "text": "The multi-target tracking problem is challenging when there exist occlusions, tracking failures of the detector and severe interferences between detections. In this paper, we propose a novel detection based tracking method that links detections into tracklets and further forms long trajectories. Unlike many previous hierarchical frameworks which split the data association into two separate optimization problems (linking detections locally and linking tracklets globally), we introduce a unified algorithm that can automatically relearn the trajectory models from the local and global information for finding the joint optimal assignment. In each temporal window, the trajectory models are initialized by the local information to link those easy-to-connect detections into a set of tracklets. Then the trajectory models are updated by the reliable tracklets and reused to link separated tracklets into long trajectories. We iteratively update the trajectory models by more information from more frames until the result converges. The iterative process gradually improves the accuracy of the trajectory models, which in turn improves the target ID inferences for all detections by the MRF model. Experiment results revealed that our proposed method achieved state-of-the-art multi-target tracking performance. Crown Copyright & 2014 Published by Elsevier Ltd. All rights reserved."}
{"_id": "473cbc5ec2609175041e1410bc6602b187d03b23", "title": "Semantic Audiovisual Data Fusion for Automatic Emotion Recognition", "text": "The paper describes a novel technique for the recognition of emotions from multimodal data. We focus on the recognition of the six prototypic emotions. The results from the facial expression recognition and from the emotion recognition from speech are combined using a bi-modal multimodal semantic data fusion model that determines the most probable emotion of the subject. Two types of models based on geometric face features for facial expression recognition are being used, depending on the presence or absence of speech. In our approach we define an algorithm that is robust to changes of face shape that occur during regular speech. The influence of phoneme generation on the face shape during speech is removed by using features that are only related to the eyes and the eyebrows. The paper includes results from testing the presented models."}
{"_id": "c70a2de3fb74a351cac7585f3bddbf7db44d6f3b", "title": "Mobile Food Recognition with an Extreme Deep Tree", "text": "Food recognition is an emerging topic in the field of computer vision. The recent interest of the research community in this area is justified by the rise in popularity of food diary applications, where the users take note of their food intake for self-monitoring or to provide useful statistics to dietitians. However, manually annotating food intake can be a tedious task, thus explaining the need of a system that automatically recognizes food, and possibly its amount, from pictures acquired by mobile devices. In this work we propose an approach to food recognition which combines the strengths of different state-of-the-art classifiers, namely Convolutional Neural Networks, Extreme Learning Machines and Neural Trees. We show that the proposed architecture can achieve good results even with low computational power, as in the case of mobile devices."}
{"_id": "263286281bb576d353fecebd6023b05effc15fbf", "title": "Efficient Selection of Vector Instructions Using Dynamic Programming", "text": "Accelerating program performance via SIMD vector units is very common in modern processors, as evidenced by the use of SSE, MMX, VSE, and VSX SIMD instructions in multimedia, scientific, and embedded applications. To take full advantage of the vector capabilities, a compiler needs to generate efficient vector code automatically. However, most commercial and open-source compilers fall short of using the full potential of vector units, and only generate vector code for simple innermost loops. In this paper, we present the design and implementation of anauto-vectorization framework in the back-end of a dynamic compiler that not only generates optimized vector code but is also well integrated with the instruction scheduler and register allocator. The framework includes a novel{\\em compile-time efficient dynamic programming-based} vector instruction selection algorithm for straight-line code that expands opportunities for vectorization in the following ways: (1) {\\em scalar packing} explores opportunities of packing multiple scalar variables into short vectors, (2)judicious use of {\\em shuffle} and {\\em horizontal} vector operations, when possible, and (3) {\\em algebraic reassociation} expands opportunities for vectorization by algebraic simplification. We report performance results on the impact of auto-vectorization on a set of standard numerical benchmarks using the Jikes RVM dynamic compilation environment. Our results show performance improvement of up to 57.71\\% on an Intel Xeon processor, compared tonon-vectorized execution, with a modest increase in compile-time in the range from 0.87\\% to 9.992\\%. An investigation of the SIMD parallelization performed by v11.1 of the Intel Fortran Compiler (IFC) on three benchmarks shows that our system achieves speedup with vectorization in all three cases and IFC does not. Finally, a comparison of our approach with an implementation of the Super word Level Parallelization (SLP) algorithm from~\\cite{larsen00}, shows that our approach yields a performance improvement of up to 13.78\\% relative to SLP."}
{"_id": "391fa13fce3e4c814c61f3384d9979aba56df4bf", "title": "Body Part Detection for Human Pose Estimation and Tracking", "text": "Accurate 3-D human body pose tracking from a monocular video stream is important for a number of applications. We describe a novel hierarchical approach for tracking human pose that uses edge-based features during the coarse stage and later other features for global optimization. At first, humans are detected by motion and tracked by fitting an ellipse in the image. Then, body components are found using edge features and used to estimate the 2D positions of the body joints accurately. This helps to bootstrap the estimation of 3D pose using a sampling-based search method in the last stage. We present experiment results with sequences of different realistic scenes to illustrate the performance of the method."}
{"_id": "53fab2769ef87e8153d905a5cf76ead9f9e46c22", "title": "The American Sign Language Lexicon Video Dataset", "text": "The lack of a written representation for American sign language (ASL) makes it difficult to do something as commonplace as looking up an unknown word in a dictionary. The majority of printed dictionaries organize ASL signs (represented in drawings or pictures) based on their nearest English translation; so unless one already knows the meaning of a sign, dictionary look-up is not a simple proposition. In this paper we introduce the ASL lexicon video dataset, a large and expanding public dataset containing video sequences of thousands of distinct ASL signs, as well as annotations of those sequences, including start/end frames and class label of every sign. This dataset is being created as part of a project to develop a computer vision system that allows users to look up the meaning of an ASL sign. At the same time, the dataset can be useful for benchmarking a variety of computer vision and machine learning methods designed for learning and/or indexing a large number of visual classes, and especially approaches for analyzing gestures and human communication."}
{"_id": "74249aaa95deaa5dcab72ffbd22657f9bde07fca", "title": "MHz-frequency operation of flyback converter with monolithic self-synchronized rectifier (SSR)", "text": "Synchronous rectification offers an effective solution to improve efficiency of low-output-voltage flyback converters. Conventional synchronous flyback topologies using discrete synchronous rectifier face challenges in precise timing control on the secondary side. In this paper, MHz-frequency operation of a flyback DC-DC converter using a new monolithic self-synchronized rectifier (SSR) is investigated and demonstrated. The SSR IC, acting solely on its own drain-source voltage instead of external control signals, considerably simplifies converter design, improves system efficiency, and enables MHz operating frequency. Analysis, modeling, and experimental results are presented."}
{"_id": "efb1591c78b27f908c17fac0fdbbb4a9f8bfbdb4", "title": "Assessing The Feasibility Of Self-organizing Maps For Data Mining Financial Information", "text": "Analyzing financial performance in today\u2019s information-rich society can be a daunting task. With the evolution of the Internet, access to massive amounts of financial data, typically in the form of financial statements, is widespread. Managers and stakeholders are in need of a data-mining tool allowing them to quickly and accurately analyze this data. An emerging technique that may be suited for this application is the self-organizing map. The purpose of this study was to evaluate the performance of self-organizing maps for analyzing financial performance of international pulp and paper companies. For the study, financial data, in the form of seven financial ratios, was collected, using the Internet as the primary source of information. A total of 77 companies, and six regional averages, were included in the study. The time frame of the study was the period 1995-00. An example analysis was performed, and the results analyzed based on information contained in the annual reports. The results of the study indicate that self-organizing maps can be feasible tools for the financial analysis of large amounts of financial data."}
{"_id": "80d492462d76c5a8305739b4339fd7af0277fc7a", "title": "A Self-Cloning Agents Based Model for High-Performance Mobile-Cloud Computing", "text": "The rise of the mobile-cloud computing paradigm in recent years has enabled mobile devices with processing power and battery life limitations to achieve complex tasks in real-time. While mobile-cloud computing is promising to overcome the limitations of mobile devices for real-time computing, the lack of frameworks compatible with standard technologies and techniques for dynamic performance estimation and program component relocation makes it harder to adopt mobile-cloud computing at large. Most of the available frameworks rely on strong assumptions such as the availability of a full clone of the application code and negligible execution time in the cloud. In this paper, we present a dynamic computation offloading model for mobile-cloud computing, based on autonomous agents. Our approach does not impose any requirements on the cloud platform other than providing isolated execution containers, and it alleviates the management burden of offloaded code by the mobile platform using stateful, autonomous application partitions. We also investigate the effects of different cloud runtime environment conditions on the performance of mobile-cloud computing, and present a simple and low-overhead dynamic make span estimation model integrated into autonomous agents to enhance them with self-performance evaluation in addition to self-cloning capabilities. The proposed performance profiling model is used in conjunction with a cloud resource optimization scheme to ensure optimal performance. Experiments with two mobile applications demonstrate the effectiveness of the proposed approach for high-performance mobile-cloud computing."}
{"_id": "1da8ecc3fe3a2d3a8057487dcaaf97a455bda117", "title": "QUBIC: a qualitative biclustering algorithm for analyses of gene expression data", "text": "Biclustering extends the traditional clustering techniques by attempting to find (all) subgroups of genes with similar expression patterns under to-be-identified subsets of experimental conditions when applied to gene expression data. Still the real power of this clustering strategy is yet to be fully realized due to the lack of effective and efficient algorithms for reliably solving the general biclustering problem. We report a QUalitative BIClustering algorithm (QUBIC) that can solve the biclustering problem in a more general form, compared to existing algorithms, through employing a combination of qualitative (or semi-quantitative) measures of gene expression data and a combinatorial optimization technique. One key unique feature of the QUBIC algorithm is that it can identify all statistically significant biclusters including biclusters with the so-called 'scaling patterns', a problem considered to be rather challenging; another key unique feature is that the algorithm solves such general biclustering problems very efficiently, capable of solving biclustering problems with tens of thousands of genes under up to thousands of conditions in a few minutes of the CPU time on a desktop computer. We have demonstrated a considerably improved biclustering performance by our algorithm compared to the existing algorithms on various benchmark sets and data sets of our own. QUBIC was written in ANSI C and tested using GCC (version 4.1.2) on Linux. Its source code is available at: http://csbl.bmb.uga.edu/ approximately maqin/bicluster. A server version of QUBIC is also available upon request."}
{"_id": "0bddc08769e9ff5d42ed36336f69bb3b1f42e716", "title": "Autonomous Inverted Helicopter Flight via Reinforcement Learning", "text": "Helicopters have highly stochastic, nonlinear, dynamics, and autonomous helicopter flight is widely regarded to be a challenging control problem. As helicopters are highly unstable at low speeds, it is particularly difficult to design controllers for low speed aerobatic maneuvers. In this paper, we describe a successful application of reinforcement learning to designing a controller for sustained inverted flight on an autonomous helicopter. Using data collected from the helicopter in flight, we began by learning a stochastic, nonlinear model of the helicopter\u2019s dynamics. Then, a reinforcement learning algorithm was applied to automatically learn a controller for autonomous inverted hovering. Finally, the resulting controller was successfully tested on our autonomous helicopter platform."}
{"_id": "2e268b70c7dcae58de2c8ff7bed1e58a5e58109a", "title": "Dynamic Programming and Optimal Control", "text": "This is an updated version of Chapter 4 of the author\u2019s Dynamic Programming and Optimal Control, Vol. II, 4th Edition, Athena Scientific, 2012. It includes new material, and it is substantially revised and expanded (it has more than doubled in size). The new material aims to provide a unified treatment of several models, all of which lack the contractive structure that is characteristic of the discounted problems of Chapters 1 and 2: positive and negative cost models, deterministic optimal control (including adaptive DP), stochastic shortest path models, and risk-sensitive models. Here is a summary of the new material:"}
{"_id": "7a09464f26e18a25a948baaa736270bfb84b5e12", "title": "On-line Q-learning using connectionist systems", "text": "Reinforcement learning algorithms are a powerful machine learning technique. However, much of the work on these algorithms has been developed with regard to discrete nite-state Markovian problems, which is too restrictive for many real-world environments. Therefore, it is desirable to extend these methods to high dimensional continuous state-spaces, which requires the use of function approximation to generalise the information learnt by the system. In this report, the use of back-propagation neural networks (Rumelhart, Hinton and Williams 1986) is considered in this context. We consider a number of di erent algorithms based around Q-Learning (Watkins 1989) combined with the Temporal Di erence algorithm (Sutton 1988), including a new algorithm (Modi ed Connectionist Q-Learning), and Q( ) (Peng and Williams 1994). In addition, we present algorithms for applying these updates on-line during trials, unlike backward replay used by Lin (1993) that requires waiting until the end of each trial before updating can occur. On-line updating is found to be more robust to the choice of training parameters than backward replay, and also enables the algorithms to be used in continuously operating systems where no end of trial conditions occur. We compare the performance of these algorithms on a realistic robot navigation problem, where a simulated mobile robot is trained to guide itself to a goal position in the presence of obstacles. The robot must rely on limited sensory feedback from its surroundings, and make decisions that can be generalised to arbitrary layouts of obstacles. These simulations show that on-line learning algorithms are less sensitive to the choice of training parameters than backward replay, and that the alternative update rules of MCQ-L and Q( ) are more robust than standard Q-learning updates. 1"}
{"_id": "13e2d5c4d39bc58b42f9004e5b03905f847dfa0f", "title": "Autonomous Helicopter Flight via Reinforcement Learning", "text": "Autonomous helicopter flight represents a challenging cont rol problem, with complex, noisy, dynamics. In this paper, we describe a s uccessful application of reinforcement learning to autonomous helic opter flight. We first fit a stochastic, nonlinear model of the helicopter dy namics. We then use the model to learn to hover in place, and to fly a number of maneuvers taken from an RC helicopter competition."}
{"_id": "6fa33dcf5bd93299443cb46208219e9365f53ab3", "title": "Latent syntactic structure-based sentiment analysis", "text": "People share their opinions about things like products, movies and services using social media channels. The analysis of these textual contents for sentiments is a gold mine for marketing experts, thus automatic sentiment analysis is a popular area of applied artificial intelligence. We propose a latent syntactic structure-based approach for sentiment analysis which requires only sentence-level polarity labels for training. Our experiments on three domains (movie, IT products, restaurant) show that a sentiment analyzer that exploits syntactic parses and has access only to sentence-level polarity annotation for in-domain sentences can outperform state-of-the-art models that were trained on out-domain parse trees with sentiment annotation for each node of the trees. In practice, millions of sentence-level polarity annotations are usually available for a particular domain thus our approach is applicable for training a sentiment analyzer for a new domain while it can exploit the syntactic structure of sentences as well."}
{"_id": "f9825a2d225b549f6c70b2faaeabd8537730a319", "title": "Boosting the Quality of Approximate String Matching by Synonyms", "text": "A string-similarity measure quantifies the similarity between two text strings for approximate string matching or comparison. For example, the strings \u201cSam\u201d and \u201cSamuel\u201d can be considered to be similar. Most existing work that computes the similarity of two strings only considers syntactic similarities, for example, number of common words or q-grams. While this is indeed an indicator of similarity, there are many important cases where syntactically-different strings can represent the same real-world object. For example, \u201cBill\u201d is a short form of \u201cWilliam,\u201d and \u201cDatabase Management Systems\u201d can be abbreviated as \u201cDBMS.\u201d Given a collection of predefined synonyms, the purpose of this article is to explore such existing knowledge to effectively evaluate the similarity between two strings and efficiently perform similarity searches and joins, thereby boosting the quality of approximate string matching.\n In particular, we first present an expansion-based framework to measure string similarities efficiently while considering synonyms. We then study efficient algorithms for similarity searches and joins by proposing two novel indexes, called SI-trees and QP-trees, which combine signature-filtering and length-filtering strategies. In order to improve the efficiency of our algorithms, we develop an estimator to estimate the size of candidates to enable an online selection of signature filters. This estimator provides strong low-error, high-confidence guarantees while requiring only logarithmic space and time costs, thus making our method attractive both in theory and in practice. Finally, the experimental results from a comprehensive study of the algorithms with three real datasets verify the effectiveness and efficiency of our approaches."}
{"_id": "89a75e3ad11cbdc00df88d619422fbe1b75b95d6", "title": "Augmented Reality with Hololens: Experiential Architectures Embedded in the Real World", "text": "Early hands-on experiences with the Microsoft Hololens augmented/mixed reality device are reported and discussed, with a general aim of exploring basic 3D visualization. A range of usage cases are tested, including data visualization and immersive data spaces, in-situ visualization of 3D models and full scale architectural form visualization. Ultimately, the Hololens is found to provide a remarkable tool for moving from traditional visualization of 3D objects on a 2D screen, to fully experiential 3D visualizations embedded in the real world."}
{"_id": "cd84e1e9890c5e10088376b56796718f79fa7e61", "title": "Aspect based sentiment analysis", "text": "In this fast paced and social media frenzy world, decision making has been revolutionized as there are lots of opinions floating on the internet in the form of blogs, social media updates, forums etc. This paper focuses on how banks can utilize these reviews to improve their services. The reviews are scraped from the internet and the sentiment analysis is done on an aspect level to have a much clear idea of where the banks are performing bad according to the customers or users."}
{"_id": "9892ad453d0d7a6f260a0c6ca50c54c614cd2160", "title": "Real time object detection and tracking using the Kalman Filter embedded in single board in a robot", "text": "This paper presents an algorithm implemented in real time in a laboratory robot, designed to trace and detect the color of an object in an indoor environment using the Kalman filter. The image capture is done by a kinect camera installed in the robot. The algorithm is programmed into a single board computer, BeagleBone Black that uses the ROS \u2014 OpenCV system in image processing."}
{"_id": "6d596cb55d99eae216840090b46bc5e49d7aeea5", "title": "A Case for Malleable Thread-Level Linear Algebra Libraries: The LU Factorization With Partial Pivoting", "text": "We propose two novel techniques for overcoming load-imbalance encountered when implementing so-called look-ahead mechanisms in relevant dense matrix factorizations for the solution of linear systems. Both techniques target the scenario where two thread teams are created/activated during the factorization, with each team in charge of performing an independent task/branch of execution. The first technique promotes worker sharing (WS) between the two tasks, allowing the threads of the task that completes first to be reallocated for use by the costlier task. The second technique allows a fast task to alert the slower task of completion, enforcing the early termination (ET) of the second task, and a smooth transition of the factorization procedure into the next iteration. The two mechanisms are instantiated via a new malleable thread-level implementation of the basic linear algebra subprograms, and their benefits are illustrated via an implementation of the LU factorization with partial pivoting enhanced with look-ahead. Concretely, our experimental results on an Intel-Xeon system with 12 cores show the benefits of combining WS+ET, reporting competitive performance in comparison with a task-parallel runtime-based solution."}
{"_id": "7157dda72073ff66cc2de6ec5db056a3e8b326d7", "title": "A generic statistical approach for spam detection in Online Social Networks", "text": "25 26 27 28 29 30 31 32 33 34 35 36 Article history: Received 13 February 2012 Received in revised form 18 March 2013 Accepted 4 April 2013 Available online xxxx"}
{"_id": "8ee60491a7e54f0a347f301e53b5b4521f8714e7", "title": "Fast 3D mapping in highly dynamic environments using normal distributions transform occupancy maps", "text": "Autonomous vehicles operating in real-world industrial environments have to overcome numerous challenges, chief among which is the creation and maintenance of consistent 3D world models. This paper focuses on a particularly important challenge: mapping in dynamic environments. We introduce several improvements to the recently proposed Normal Distributions Transform Occupancy Map (NDT-OM) aimed for efficient mapping in dynamic environments. A careful consistency analysis is given based on convergence and similarity metrics specifically designed for evaluation of NDT maps in dynamic environments. We show that in the context of mapping with known poses the proposed method results in improved consistency and in superior runtime performance, when compared against 3D occupancy grids at the same size and resolution. Additionally, we demonstrate that NDT-OM features real-time performance in a highly dynamic 3D mapping and tracking scenario with centimeter accuracy over a 1.5km trajectory."}
{"_id": "1121fed40a291ab796ee7f2ca1c6e59988087011", "title": "CHAPTER TWO Self-Distancing : Theory , Research , and Current Directions", "text": "When people experience negative events, they often try to understand their feelings to improve the way they feel. Although engaging in this meaning-making process leads people to feel better at times, it frequently breaks down leading people to ruminate and feel worse. This raises the question: What factors determine whether people\u2019s attempts to \u201cwork-through\u201d their negative feelings succeed or fail? In this article, we describe an integrative program of research that has addressed this issue by focusing on the role that self-distancing plays in facilitating adaptive self-reflection. We begin by describing the \u201cself-reflection puzzle\u201d that initially motivated this line of work. Next, we introduce the concept of self-distancing and describe the conceptual framework we developed to explain how this process should facilitate adaptive self-reflection. After describing the early studies that evaluated this framework, we discuss how these findings have been extended to broaden and deepen our understanding of the role that this process plays in self-regulation. We conclude by offering several parting thoughts that integrate the ideas discussed in this chapter. 1. THE SELF-REFLECTION PUZZLE Many people try to understand their feelings when they are upset, under the assumption that doing so will lead them to feel better. Indeed, it would seem that many of us reflexively heed Socrates\u2019 advice to \u201cknow thyself \u201d when we experience emotional pain. But are people\u2019s attempts to work-through their feelings productive? Do they actually lead people to feel better? A great deal of research has addressed these questions over the past 40 years, and the results reveal a puzzle. On the one hand, several studies suggest that it is indeed helpful for people to reflect on their emotions when they experience distress 82 E. Kross and O. Ayduk"}
{"_id": "3e789c93315b6bd9bd2493fc12ecb7381a598509", "title": "An Embedded Support Vector Machine", "text": "In this paper we work on the balance between hardware and software implementation of a machine learning algorithm, which belongs to the area of statistical learning theory. We use system-on-chip technology to demonstrate the potential usefulness of moving the critical sections of an algorithm into HW: the so-called hardware/software balance. Our experiments show that the approach can achieve speedups using a complex machine learning algorithm called a support vector machine. The experiments are conducted on a real-time Java virtual machine named Java optimized processor"}
{"_id": "adf242eae5e6793b5135fa7468479b8a1ad5008e", "title": "Toward an E-orientation platform: Using hybrid recommendation systems", "text": "Choosing the right career can be difficult for students, because they have to take into considerations many elements, in order to be on the best path. Several studies in many disciplines have cooperate to help students making their career decision. The purpose of this work is to invest the subject of school orientation, identifying its background and studying some related works. This paper aims also to define the student's profile, and conceiving basis of a recommendation system that will play the advisor role in students helping to choose their career path."}
{"_id": "0b1447fda2c0c1a55df07ab1d405fffcedd0420f", "title": "Personalized News Recommendation: A Review and an Experimental Investigation", "text": "Online news articles, as a new format of press releases, have sprung up on the Internet. With its convenience and recency, more and more people prefer to read news online instead of reading the paper-format press releases. However, a gigantic amount of news events might be released at a rate of hundreds, even thousands per hour. A challenging problem is how to effciently select specific news articles from a large corpus of newly-published press releases to recommend to individual readers, where the selected news items should match the reader's reading preference as much as possible. This issue refers to personalized news recommendation. Recently, personalized news recommendation has become a promising research direction as the Internet provides fast access to real-time information from multiple sources around the world. Existing personalized news recommendation systems strive to adapt their services to individual users by virtue of both user and news content information. A variety of techniques have been proposed to tackle personalized news recommendation, including content-based, collaborative filtering systems and hybrid versions of these two. In this paper, we provide a comprehensive investigation of existing personalized news recommenders. We discuss several essential issues underlying the problem of personalized news recommendation, and explore possible solutions for performance improvement. Further, we provide an empirical study on a collection of news articles obtained from various news websites, and evaluate the effect of different factors for personalized news recommendation. We hope our discussion and exploration would provide insights for researchers who are interested in personalized news recommendation."}
{"_id": "77185982b410e6441dc1a2c87c6acc3362ac0f01", "title": "Paying for performance: Performance incentives increase desire for the reward object.", "text": "The current research examines how exposure to performance incentives affects one's desire for the reward object. We hypothesized that the flexible nature of performance incentives creates an attentional fixation on the reward object (e.g., money), which leads people to become more desirous of the rewards. Results from 5 laboratory experiments and 1 large-scale field study provide support for this prediction. When performance was incentivized with monetary rewards, participants reported being more desirous of money (Study 1), put in more effort to earn additional money in an ensuing task (Study 2), and were less willing to donate money to charity (Study 4). We replicated the result with nonmonetary rewards (Study 5). We also found that performance incentives increased attention to the reward object during the task, which in part explains the observed effects (Study 6). A large-scale field study replicated these findings in a real-world setting (Study 7). One laboratory experiment failed to replicate (Study 3). (PsycINFO Database Record"}
{"_id": "66903e95f84767a31beef430b2367492ac9cc750", "title": "Childhood sexual abuse and psychiatric disorder in young adulthood: II. Psychiatric outcomes of childhood sexual abuse.", "text": "OBJECTIVE\nThis is the second in a series of articles that describe the prevalence, correlates, and consequences of childhood sexual abuse (CSA) in a birth cohort of more than 1,000 New Zealand children studied to the age of 18 years. This article examines the associations between reports of CSA at age 18 and DSM-IV diagnostic classifications at age 18.\n\n\nMETHOD\nA birth cohort of New Zealand children was studied at annual intervals from birth to age 16 years. At age 18 years retrospective reports of CSA prior to age 16 and concurrently measured psychiatric symptoms were obtained.\n\n\nRESULTS\nThose reporting CSA had higher rates of major depression, anxiety disorder, conduct disorder, substance use disorder, and suicidal behaviors than those not reporting CSA (p < .002). There were consistent relationships between the extent of CSA and risk of disorder, with those reporting CSA involving intercourse having the highest risk of disorder. These results persisted when findings were adjusted for prospectively measured childhood family and related factors. Similar but less marked relationships between CSA and nonconcurrently measured disorders were found.\n\n\nCONCLUSIONS\nThe findings suggest that CSA, and particularly severe CSA, was associated with increased risk of psychiatric disorder in young adults even when due allowance was made for prospectively measured confounding factors."}
{"_id": "c24333af7c45436c9da2121f11bb63444381aaff", "title": "Complementary Skyrmion Racetrack Memory With Voltage Manipulation", "text": "Magnetic skyrmion holds promise as information carriers in the next-generation memory and logic devices, owing to the topological stability, small size, and extremely low current needed to drive it. One of the most potential applications of skyrmion is to design racetrack memory (RM), named Sk-RM, instead of utilizing domain wall. However, current studies face some key design challenges, e.g., skyrmion manipulation, data representation, and synchronization. To address these challenges, we propose here a complementary Sk-RM structure with voltage manipulation. Functionality and performance of the proposed design are investigated with micromagnetic simulations."}
{"_id": "26f6693ade84ea613d4ffa380406db060dafe3ae", "title": "Power Attack on Small RSA Public Exponent", "text": "In this paper, we present a new attack on RSA when the public exponent is short, for instance 3 or 2 +1, and when the classical exponent randomization is used. This attack works even if blinding is used on the messages. From a Simple Power Analysis (SPA) we study the problem of recovering the RSA private key when non consecutive bits of it leak from the implementation. We also show that such information can be gained from sliding window implementations not protected against SPA."}
{"_id": "c07d5ce6c340b3684eeb1d8e70ffcdf6b70652f9", "title": "Understanding and Supporting Cross-Device Web Search for Exploratory Tasks with Mobile Touch Interactions", "text": "Mobile devices enable people to look for information at the moment when their information needs are triggered. While experiencing complex information needs that require multiple search sessions, users may utilize desktop computers to fulfill information needs started on mobile devices. Under the context of mobile-to-desktop web search, this article analyzes users\u2019 behavioral patterns and compares them to the patterns in desktop-to-desktop web search. Then, we examine several approaches of using Mobile Touch Interactions (MTIs) to infer relevant content so that such content can be used for supporting subsequent search queries on desktop computers. The experimental data used in this article was collected through a user study involving 24 participants and six properly designed cross-device web search tasks. Our experimental results show that (1) users\u2019 mobile-to-desktop search behaviors do significantly differ from desktop-to-desktop search behaviors in terms of information exploration, sense-making and repeated behaviors. (2) MTIs can be employed to predict the relevance of click-through documents, but applying document-level relevant content based on the predicted relevance does not improve search performance. (3) MTIs can also be used to identify the relevant text chunks at a fine-grained subdocument level. Such relevant information can achieve better search performance than the document-level relevant content. In addition, such subdocument relevant information can be combined with document-level relevance to further improve the search performance. However, the effectiveness of these methods relies on the sufficiency of click-through documents. (4) MTIs can also be obtained from the Search Engine Results Pages (SERPs). The subdocument feedbacks inferred from this set of MTIs even outperform the MTI-based subdocument feedback from the click-through documents."}
{"_id": "b6450a9b119bece8058e7a43c03b21bd6522c220", "title": "Ethical Considerations in Artificial Intelligence Courses", "text": "The recent surge in interest in ethics in artificial intelligence may leave many educators wondering how to address moral, ethical, and philosophical issues in their AI courses. As instructors we want to develop curriculum that not only prepares students to be artificial intelligence practitioners, but also to understand the moral, ethical, and philosophical impacts that artificial intelligence will have on society. In this article we provide practical case studies and links to resources for use by AI educators. We also provide concrete suggestions on how to integrate AI ethics into a general artificial intelligence course and how to teach a stand-alone artificial intelligence ethics course."}
{"_id": "27eaf1c2047fba8a71540496d59f3fec9b7f46bb", "title": "Monocular Visual\u2013Inertial State Estimation With Online Initialization and Camera\u2013IMU Extrinsic Calibration", "text": "There have been increasing demands for developing microaerial vehicles with vision-based autonomy for search and rescue missions in complex environments. In particular, the monocular visual\u2013inertial system (VINS), which consists of only an inertial measurement unit (IMU) and a camera, forms a great lightweight sensor suite due to its low weight and small footprint. In this paper, we address two challenges for rapid deployment of monocular VINS: 1) the initialization problem and 2) the calibration problem. We propose a methodology that is able to initialize velocity, gravity, visual scale, and camera\u2013IMU extrinsic calibration on the fly. Our approach operates in natural environments and does not use any artificial markers. It also does not require any prior knowledge about the mechanical configuration of the system. It is a significant step toward plug-and-play and highly customizable visual navigation for mobile robots. We show through online experiments that our method leads to accurate calibration of camera\u2013IMU transformation, with errors less than 0.02 m in translation and 1\u00b0 in rotation. We compare out method with a state-of-the-art marker-based offline calibration method and show superior results. We also demonstrate the performance of the proposed approach in large-scale indoor and outdoor experiments."}
{"_id": "f7e7b91480a5036562a6b2276066590c9cc9c4f1", "title": "Quasi-online reinforcement learning for robots", "text": "This paper describes quasi-online reinforcement learning: while a robot is exploring its environment, in the background a probabilistic model of the environment is built on the fly as new experiences arrive; the policy is trained concurrently based on this model using an anytime algorithm. Prioritized sweeping, directed exploration, and transformed reward functions provide additional speed-ups. The robot quickly learns goal-directed policies from scratch, requiring few interactions with the environment and making efficient use of available computation time. From an outside perspective it learns the behavior online and in real time. We describe comparisons with standard methods and show the individual utility of each of the proposed techniques"}
{"_id": "40f5cc97ff200cfa17d7a3ce5a64176841075247", "title": "High speed network traffic analysis with commodity multi-core systems", "text": "Multi-core systems are the current dominant trend in computer processors. However, kernel network layers often do not fully exploit multi-core architectures. This is due to issues such as legacy code, resource competition of the RX-queues in network interfaces, as well as unnecessary memory copies between the OS layers. The result is that packet capture, the core operation in every network monitoring application, may even experience performance penalties when adapted to multi-core architectures. This work presents common pitfalls of network monitoring applications when used with multi-core systems, and presents solutions to these issues. We describe the design and implementation of a novel multi-core aware packet capture kernel module that enables monitoring applications to scale with the number of cores. We showcase that we can achieve high packet capture performance on modern commodity hardware."}
{"_id": "35d6448436d4829c774b2cd62fdf1a2b9c81c376", "title": "RF Localization in Indoor Environment", "text": "In this paper indoor localization system based on the RF power measurements of the Received Signal Strength (RSS) in WLAN environment is presented. Today, the most viable solution for localization is the RSS fingerprinting based approach, where in order to establish a relationship between RSS values and location, different machine learning approaches are used. The advantage of this approach based on WLAN technology is that it does not need new infrastructure (it reuses already and widely deployed equipment), and the RSS measurement is part of the normal operating mode of wireless equipment. We derive the Cram\u00e9r-Rao Lower Bound (CRLB) of localization accuracy for RSS measurements. In analysis of the bound we give insight in localization performance and deployment issues of a localization system, which could help designing an efficient localization system. To compare different machine learning approaches we developed a localization system based on an artificial neural network, k-nearest neighbors, probabilistic method based on the Gaussian kernel and the histogram method. We tested the developed system in real world WLAN indoor environment, where realistic RSS measurements were collected. Experimental comparison of the results has been investigated and average location estimation error of around 2 meters was obtained."}
{"_id": "3890e38fd7018fa6afa80a25e6610a1976a7741c", "title": "Lateral prefrontal cortex and self-control in intertemporal choice", "text": "Disruption of function of left, but not right, lateral prefrontal cortex (LPFC) with low-frequency repetitive transcranial magnetic stimulation (rTMS) increased choices of immediate rewards over larger delayed rewards. rTMS did not change choices involving only delayed rewards or valuation judgments of immediate and delayed rewards, providing causal evidence for a neural lateral-prefrontal cortex\u2013based self-control mechanism in intertemporal choice."}
{"_id": "6ebdb88c39787f4242e92504b6d2c60b8421193a", "title": "The association between psychological distance and construal level: evidence from an implicit association test.", "text": "According to construal level theory (N. Liberman, Y. Trope, & E. Stephan, in press; Y. Trope & N. Liberman, 2003), people use a more abstract, high construal level when judging, perceiving, and predicting more psychologically distal targets, and they judge more abstract targets as being more psychologically distal. The present research demonstrated that associations between more distance and higher level of construal also exist on a pure conceptual level. Eight experiments used the Implicit Association Test (IAT; A. G. Greenwald, D. E. McGhee, & J. L. K. Schwartz, 1998) to demonstrate an association between words related to construal level (low vs. high) and words related to four dimensions of distance (proximal vs. distal): temporal distance, spatial distance, social distance, and hypotheticality. In addition to demonstrating an association between level of construal and psychological distance, these findings also corroborate the assumption that all 4 dimensions of psychological distance are related to level of construal in a similar way and support the notion that they all are forms of psychological distance."}
{"_id": "1f4412f8c0d2e491b2b4bf486d47d448d8f46858", "title": "Implicit Association Test 1 The Implicit Association Test at Age 7 : A Methodological and Conceptual Review", "text": "A mong earthly organisms, humans have a unique propensity to introspect or look inward into the contents of their own minds, and to share those observations with others. With the ability to introspect comes the palpable feeling of \" knowing, \" of being objective or certain, of being mentally in control of one's thoughts, aware of the causes of one's thoughts, feelings, and actions, and of making decisions deliberately and rationally. Among the noteworthy discoveries of 20th century psychology was a challenge posed to this assumption of rationality. From the groundbreaking theorizing of Herbert Simon (1955) and the mind-boggling problems posed by Kahneman, Slovik, and Tversky (1982) to striking demonstrations of illusions of control (Wegner, 2002), the paucity of introspection (Nisbett and Wilson, 1977), and the automaticity of everyday thought (Bargh, 1997), psychologists have shown the frailties of the minds of their species. As psychologists have come to grips with the limits of the mind, there has been an increased interest in measuring aspects of thinking and feeling that may not be easily accessed or available to consciousness. Innovations in measurement have been undertaken with the purpose of bringing under scrutiny new forms of cogni-tion and emotion that were previously undiscovered and especially by asking if traditional concepts such as attitude and preference, belief and stereotype, self-concept and self-esteem can be rethought based on what the new measures reveal. These newer measures do not require introspection on the part of the subject. For many constructs this is considered a valuable, if not essential, feature of measurement; for others, avoiding introspection is greeted with suspicion and skepticism. For example, one approach to measuring math ability would be to ask \" how good are you at math? \" whereas an alternative approach is to infer math ability via a performance on a math skills test. The former requires introspection to assess the relevant construct, the latter does not. And yet, the latter is accepted"}
{"_id": "2698f74468c49b29ac69e193d5aeaa09bb33faea", "title": "Can language restructure cognition? The case for space", "text": "Frames of reference are coordinate systems used to compute and specify the location of objects with respect to other objects. These have long been thought of as innate concepts, built into our neurocognition. However, recent work shows that the use of such frames in language, cognition and gesture varies cross-culturally, and that children can acquire different systems with comparable ease. We argue that language can play a significant role in structuring, or restructuring, a domain as fundamental as spatial cognition. This suggests we need to rethink the relation between the neurocognitive underpinnings of spatial cognition and the concepts we use in everyday thinking, and, more generally, to work out how to account for cross-cultural cognitive diversity in core cognitive domains."}
{"_id": "755b94b766dee3a34536f6b481a60f0d9f68aa0c", "title": "The Role of Feasibility and Desirability Considerations in Near and Distant Future Decisions : A Test of Temporal Construal Theory", "text": "Temporal construal theory states that distant future situations are construed on a higher level (i.e., using more abstract and central features) than near future situations. Accordingly, the theory suggests that the value associated with the high-level construal is enhanced over delay and that the value associated with the low-level construal is discounted over delay. In goal-directed activities, desirability of the activity's end state represents a high-level construal, whereas the feasibility of attaining this end state represents a low-level construal. Study 1 found that distant future activities were construed on a higher level than near future activities. Studies 2 and 3 showed that decisions regarding distant future activities, compared with decisions regarding near future activities, were more influenced by the desirability of the end state and less influenced by the feasibility of attaining the end state. Study 4 presented students with a real-life choice of academic assignments varying in difficulty (feasibility) and interest (desirability). In choosing a distant future assignment, students placed relatively more weight on the assignment's interest, whereas in choosing a near future assignment, they placed relatively more weight on difficulty. Study 5 found that distant future plans, compared with near future plans, were related to desirability of activities rather than to time constraints."}
{"_id": "848e2f107ed6abe3c32c6442bcef4f6215f3f426", "title": "A Capacitor-DAC-Based Technique For Pre-Emphasis-Enabled Multilevel Transmitters", "text": "This brief presents a capacitor digital-to-analog converter (DAC) based technique that is suitable for pre-emphasis-enabled multilevel wireline transmitter design in voltage mode. Detailed comparisons between the proposed technique and conventional direct-coupling-based as well as resistor-DAC-based multilevel transmitter design techniques are given, revealing potential benefits in terms of speed, linearity, implementation complexity, and also power consumption. A PAM-4 transmitter with 2-Tap feed-forward equalization adopting the proposed technique is implemented in 65-nm CMOS technology. It achieves a 25-Gb/s data rate and energy efficiency of 2 mW/Gb/s."}
{"_id": "10b0ec0b2920a5e9d7b6527e5b87d8fde0b11e86", "title": "Toward defining the preclinical stages of Alzheimer\u2019s disease: Recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease", "text": "The pathophysiological process of Alzheimer's disease (AD) is thought to begin many years before the diagnosis of AD dementia. This long \"preclinical\" phase of AD would provide a critical opportunity for therapeutic intervention; however, we need to further elucidate the link between the pathological cascade of AD and the emergence of clinical symptoms. The National Institute on Aging and the Alzheimer's Association convened an international workgroup to review the biomarker, epidemiological, and neuropsychological evidence, and to develop recommendations to determine the factors which best predict the risk of progression from \"normal\" cognition to mild cognitive impairment and AD dementia. We propose a conceptual framework and operational research criteria, based on the prevailing scientific evidence to date, to test and refine these models with longitudinal clinical research studies. These recommendations are solely intended for research purposes and do not have any clinical implications at this time. It is hoped that these recommendations will provide a common rubric to advance the study of preclinical AD, and ultimately, aid the field in moving toward earlier intervention at a stage of AD when some disease-modifying therapies may be most efficacious."}
{"_id": "6d2465be30dbcbf9b76509eed81cd5f32c4f8618", "title": "Fully integrated 54nm STT-RAM with the smallest bit cell dimension for high density memory application", "text": "A compact STT(Spin-Transfer Torque)-RAM with a 14F2 cell was integrated using modified DRAM processes at the 54nm technology node. The basic switching performance (R-H and R-V) of the MTJs and current drivability of the access transistors were characterized at the single bit cell level. Through the direct access capability and normal chip operation in our STT-RAM test blocks, the switching behavior of bit cell arrays was also analyzed statistically. From this data and from the scaling trend of STT-RAM, we estimate that the unit cell dimension below 30nm can be smaller than 8F2."}
{"_id": "f70c724a299025fa58127b4bcd3426c565e1e7be", "title": "Single-Image Noise Level Estimation for Blind Denoising", "text": "Noise level is an important parameter to many image processing applications. For example, the performance of an image denoising algorithm can be much degraded due to the poor noise level estimation. Most existing denoising algorithms simply assume the noise level is known that largely prevents them from practical use. Moreover, even with the given true noise level, these denoising algorithms still cannot achieve the best performance, especially for scenes with rich texture. In this paper, we propose a patch-based noise level estimation algorithm and suggest that the noise level parameter should be tuned according to the scene complexity. Our approach includes the process of selecting low-rank patches without high frequency components from a single noisy image. The selection is based on the gradients of the patches and their statistics. Then, the noise level is estimated from the selected patches using principal component analysis. Because the true noise level does not always provide the best performance for nonblind denoising algorithms, we further tune the noise level parameter for nonblind denoising. Experiments demonstrate that both the accuracy and stability are superior to the state of the art noise level estimation algorithm for various scenes and noise levels."}
{"_id": "16bf6a111ada81d837da3de7227def58baa6b95b", "title": "Causal Structure Learning and Inference : A Selective Review", "text": "In this paper we give a review of recent causal inference methods. First, we discuss methods for causal structure learning from observational data when confounders are not present and have a close look at methods for exact identifiability. We then turn to methods which allow for a mix of observational and interventional data, where we also touch on active learning strategies. We also discuss methods which allow arbitrarily complex structures of hidden variables. Second, we present approaches for estimating the interventional distribution and causal effects given the (true or estimated) causal structure. We close with a note on available software and two examples on real data."}
{"_id": "ec5bfa1d267d0797af46b6cc6f69f91748ae27c1", "title": "Activity recognition using a single accelerometer placed at the wrist or ankle.", "text": "PURPOSE\nLarge physical activity surveillance projects such as the UK Biobank and NHANES are using wrist-worn accelerometer-based activity monitors that collect raw data. The goal is to increase wear time by asking subjects to wear the monitors on the wrist instead of the hip, and then to use information in the raw signal to improve activity type and intensity estimation. The purposes of this work was to obtain an algorithm to process wrist and ankle raw data and to classify behavior into four broad activity classes: ambulation, cycling, sedentary, and other activities.\n\n\nMETHODS\nParticipants (N = 33) wearing accelerometers on the wrist and ankle performed 26 daily activities. The accelerometer data were collected, cleaned, and preprocessed to extract features that characterize 2-, 4-, and 12.8-s data windows. Feature vectors encoding information about frequency and intensity of motion extracted from analysis of the raw signal were used with a support vector machine classifier to identify a subject's activity. Results were compared with categories classified by a human observer. Algorithms were validated using a leave-one-subject-out strategy. The computational complexity of each processing step was also evaluated.\n\n\nRESULTS\nWith 12.8-s windows, the proposed strategy showed high classification accuracies for ankle data (95.0%) that decreased to 84.7% for wrist data. Shorter (4 s) windows only minimally decreased performances of the algorithm on the wrist to 84.2%.\n\n\nCONCLUSIONS\nA classification algorithm using 13 features shows good classification into the four classes given the complexity of the activities in the original data set. The algorithm is computationally efficient and could be implemented in real time on mobile devices with only 4-s latency."}
{"_id": "5cbf2f236ac35434eda20239237ed84db491a28d", "title": "Toward Rapid Development of Multi-Party Virtual Human Negotiation Scenarios", "text": "This paper reports on an ongoing effort to enable the rapid development of multi-party virtual human negotiation scenarios. We present a case study in which a new scenario supporting negotiation between two human role players and two virtual humans was developed over a period of 12 weeks. We discuss the methodology and development process that were employed, from storyline design through role play and iterative development of the virtual humans\u2019 semantic and task representations and natural language processing capabilities. We analyze the effort, expertise, and time required for each development step, and discuss opportunities to further streamline the development process."}
{"_id": "452bc3a5f547ba1c1678b20552e3cd465870a33a", "title": "Deep Convolutional Neural Networks [Lecture Notes]", "text": "Neural networks are a subset of the field of artificial intelligence (AI). The predominant types of neural networks used for multidimensional signal processing are deep convolutional neural networks (CNNs). The term deep refers generically to networks having from a \"few\" to several dozen or more convolution layers, and deep learning refers to methodologies for training these systems to automatically learn their functional parameters using data representative of a specific problem domain of interest. CNNs are currently being used in a broad spectrum of application areas, all of which share the common objective of being able to automatically learn features from (typically massive) data bases and to generalize their responses to circumstances not encountered during the learning phase. Ultimately, the learned features can be used for tasks such as classifying the types of signals the CNN is expected to process. The purpose of this \"Lecture Notes\" article is twofold: 1) to introduce the fundamental architecture of CNNs and 2) to illustrate, via a computational example, how CNNs are trained and used in practice to solve a specific class of problems."}
{"_id": "40385eb9d1464bea8f13fb24ca58aee3e36bd634", "title": "Nearest prototype classifier designs: An experimental study", "text": "We compare eleven methods for finding prototypes upon which to base the nearest \u017d prototype classifier. Four methods for prototype selection are discussed: Wilson Hart a . condensation error-editing method , and three types of combinatorial search random search, genetic algorithm, and tabu search. Seven methods for prototype extraction are discussed: unsupervised vector quantization, supervised learning vector quantization \u017d . with and without training counters , decision surface mapping, a fuzzy version of vector quantization, c-means clustering, and bootstrap editing. These eleven methods can be usefully divided two other ways: by whether they employ preor postsupervision; and by whether the number of prototypes found is user-defined or \u2018\u2018automatic.\u2019\u2019 Generalization error rates of the 11 methods are estimated on two synthetic and two real data sets. Offering the usual disclaimer that these are just a limited set of experiments, we feel confident in asserting that presupervised, extraction methods offer a better chance for success to the casual user than postsupervised, selection schemes. Finally, our calculations do not suggest that methods which find the \u2018\u2018best\u2019\u2019 number of prototypes \u2018\u2018automatically\u2019\u2019 are superior to methods for which the user simply specifies the number of prototypes. 2001 John Wiley & Sons, Inc."}
{"_id": "dbbbddfa44c0a091b7e15b7989fbf8af4fad3a78", "title": "Synthesizing evidence in software engineering research", "text": "Synthesizing the evidence from a set of studies that spans many countries and years, and that incorporates a wide variety of research methods and theoretical perspectives, is probably the single most challenging task of performing a systematic review. In this paper, we perform a tertiary review to assess the types and methods of research synthesis in systematic reviews in software engineering. Almost half of the 31 studies included in our review did not contain any synthesis; of the ones that did, two thirds performed a narrative or a thematic synthesis. The results show that, despite the focus on systematic reviews, there is, currently, limited attention to research synthesis in software engineering. This needs to change and a repertoire of synthesis methods needs to be an integral part of systematic reviews to increase their significance and utility for research and practice."}
{"_id": "b5aa59085ebd6b0a23e0941efc2ab10efb7474bc", "title": "Spectral\u2013Spatial Classification and Shape Features for Urban Road Centerline Extraction", "text": "This letter presents a two-step method for urban main road extraction from high-resolution remotely sensed imagery by integrating spectral-spatial classification and shape features. In the first step, spectral-spatial classification segments the imagery into two classes, i.e., the road class and the nonroad class, using path openings and closings. The local homogeneity of the gray values obtained by local Geary's C is then fused with the road class. In the second step, the road class is refined by using shape features. The experimental results indicated that the proposed method was able to achieve a comparatively good performance in urban main road extraction."}
{"_id": "29500203bfb404e5f2808c0342410a6b91e75e31", "title": "Automatic Optimization Computational Method for Unconventional S . W . A . T . H . Ships Resistance", "text": "The paper illustrates the main theoretical and computational aspects of an automatic computer based procedure for the parametric shape optimization of a particular unconventional hull typology: that for a catamaran S.W.A.T.H. ship. The goal of the integrated computational procedure is to find the best shape of the submerged hulls of a new U.S.V. (Unmanned Surface Vehicle) S.W.A.T.H. (Small Waterplane Area Twin Hull) vessel, in terms of minimum wave pattern resistance. After dealing with the theoretical aspects the papers presents the numerical aspects of the main software module of the automatic procedure, which integrates a parametric generation routine for innovative and unconventional S.W.A.T.H. (Small Waterplane Area Twin Hull) vessel geometry, a multi-objective, globally convergent and constrained, optimization algorithm and a Computational Fluid Dynamic (C.F.D.) solver. The integrated process is able to find the best shape of the submerged hull of the vessel, subject to the total displaced volume constraint. The hydrodynamic computation is carried out by means of a free surface potential flow method and it is addressed to find the value of wave resistance of each hull variant. Results of the application of the described computational procedure are presented for two optimization cases and the obtained best shapes are compared with a conventional one, featuring a typical torpedo-shaped body, proving the effectiveness of the method in reducing the resistance by a considerable extent, in the order of 40 percent. Keywords\u2014S.W.A.T.H., B.E.M., Wave Resistance, Parametric Modeling, Optimization, Genetic Algorithms"}
{"_id": "244fa023adbbff31806ded21d5b2f36afd3ff988", "title": "BRAD 1.0: Book reviews in Arabic dataset", "text": "The availability of rich datasets is a pre-requisite for proposing robust sentiment analysis systems. A variety of such datasets exists in English language. However, it is rare or nonexistent for the Arabic language except for a recent LABR dataset, which consists of a little bit over 63,000 book reviews extracted from. Goodreads. com. We introduce BRAD 1.0, the largest Book Reviews in Arabic Dataset for sentiment analysis and machine language applications. BRAD comprises of almost 510,600 book records. Each record corresponds for a single review and has the review in Arabic language and the reviewer's rating on a scale of 1 to 5 stars. In this paper, we present and describe the properties of BRAD. Further, we provide two versions of BRAD: the complete unbalanced dataset and the balanced version of BRAD. Finally, we implement four sentiment analysis classifiers based on this dataset and report our findings. When training and testing the classifiers on BRAD as opposed to LABR, an improvement rate growth of 46% is reported. The highest accuracy attained is 91%. Our core contribution is to make this benchmark-dataset available and accessible to the research community on Arabic language."}
{"_id": "90a2a7a3d22c58c57e3b1a4248c7420933d7fe2f", "title": "An integrated approach to testing complex systems", "text": "The increasing complexity of today\u2019s testing scenarios for complex systems demands an integrated, open, and flexible approach to support the management of the overall test process. \u201cClassical\u201d model-based testing approaches, where a complete and precise formal specification serves as a reference for automatic test generation, are often impractical. Reasons are, on the one hand, the absence of a suitable formal specification. As complex systems are composed of several components, either hardware or software, often pre-built and third party, it is unrealistic to assume that a formal specification exists a priori. On the other hand, a sophisticated test execution environment is needed that can handle distributed test cases. This is because the test actions and observations can take place on different subsystems of the overall system. This thesis presents a novel approach to the integrated testing of complex systems. Our approach offers a coarse grained test environment, realized in terms of a component-based test design on top of a library of elementary but intuitively understandable test case fragments. The relations between the fragments are treated orthogonally, delivering a test design and execution environment enhanced by means of light-weight formal verification methods. In this way we are able to shift the test design issues from total experts of the system and the used test tools to experts of the system\u2019s logic only. We illustrate the practical usability of our approach by means of industrial case studies in two different application domains: Computer Telephony Integrated solutions and Web-based applications. As an enhancement of our integrated test approach we provide an algorithm for generating approximate models for complex systems a posteriori. This is done by optimizing a standard machine learning algorithm according to domain-specific structural properties, i.e. properties like prefix-closeness, input-determinism, as well as independency and symmetries of events. The resulting models can never be exact, i.e. reflect the complete and correct behaviour of the considered system. Nevertheless they can be useful in practice, to represent the cumulative knowledge of the system in a consistent description."}
{"_id": "8df383aae16ce1003d57184d8e4bf729f265ab40", "title": "Axial-Ratio-Bandwidth Enhancement of a Microstrip-Line-Fed Circularly Polarized Annular-Ring Slot Antenna", "text": "The design of a new microstrip-line-fed wideband circularly polarized (CP) annular-ring slot antenna (ARSA) is proposed. Compared with existing ring slot antennas, the ARSAs designed here possess much larger CP bandwidths. The main features of the proposed design include a wider ring slot, a pair of grounded hat-shaped patches, and a deformed bent feeding microstrip line. The ARSAs designed using FR4 substrates in the L and S bands have 3-dB axial-ratio bandwidths (ARBWs) of as large as 46% and 56%, respectively, whereas the one using an RT5880 substrate in the L band, 65%. In these 3-dB axial-ratio bands, impedance matching with VSWR \u2264 2 is also achieved."}
{"_id": "1b0af2ba6f22b43f9115c03a52e515b5d1e358d2", "title": "A Practical Approach to Differential Private Learning", "text": "Applying differential private learning to real-world data is currently unpractical. Differential privacy (DP) introduces extra hyper-parameters for which no thorough good practices exist, while manually tuning these hyper-parameters on private data results in low privacy guarantees. Furthermore, the exact guarantees provided by differential privacy for machine learning models are not well understood. Current approaches use undesirable post-hoc privacy attacks on models to assess privacy guarantees. To improve this situation, we introduce three tools to make DP machine learning more practical. First, two sanity checks for differential private learning are proposed. These sanity checks can be carried out in a centralized manner before training, do not involve training on the actual data and are easy to implement. Additionally, methods are proposed to reduce the effective number of tuneable privacy parameters by making use of an adaptive clipping bound. Lastly, existing methods regarding large batch training and differential private learning are combined. It is demonstrated that this combination improves model performance within a constant privacy budget."}
{"_id": "d82c363d46e2d49028776da1092674efe4282d39", "title": "Privacy as a fuzzy concept: A new conceptualization of privacy for practitioners", "text": "\u2022 Users may freely distribute the URL that is used to identify this publication. \u2022 Users may download and/or print one copy of the publication from the University of Birmingham research portal for the purpose of private study or non-commercial research. \u2022 User may use extracts from the document in line with the concept of \u2018fair dealing\u2019 under the Copyright, Designs and Patents Act 1988 (?) \u2022 Users may not further distribute the material nor use it for the purposes of commercial gain."}
{"_id": "e65881a89633b8d4955e9314e84b943e155da6a9", "title": "TG13 flowchart for the management of acute cholangitis and cholecystitis.", "text": "We propose a management strategy for acute cholangitis and cholecystitis according to the severity assessment. For Grade I (mild) acute cholangitis, initial medical treatment including the use of antimicrobial agents may be sufficient for most cases. For non-responders to initial medical treatment, biliary drainage should be considered. For Grade II (moderate) acute cholangitis, early biliary drainage should be performed along with the administration of antibiotics. For Grade III (severe) acute cholangitis, appropriate organ support is required. After hemodynamic stabilization has been achieved, urgent endoscopic or percutaneous transhepatic biliary drainage should be performed. In patients with Grade II (moderate) and Grade III (severe) acute cholangitis, treatment for the underlying etiology including endoscopic, percutaneous, or surgical treatment should be performed after the patient's general condition has been improved. In patients with Grade I (mild) acute cholangitis, treatment for etiology such as endoscopic sphincterotomy for choledocholithiasis might be performed simultaneously, if possible, with biliary drainage. Early laparoscopic cholecystectomy is the first-line treatment in patients with Grade I (mild) acute cholecystitis while in patients with Grade II (moderate) acute cholecystitis, delayed/elective laparoscopic cholecystectomy after initial medical treatment with antimicrobial agent is the first-line treatment. In non-responders to initial medical treatment, gallbladder drainage should be considered. In patients with Grade III (severe) acute cholecystitis, appropriate organ support in addition to initial medical treatment is necessary. Urgent or early gallbladder drainage is recommended. Elective cholecystectomy can be performed after the improvement of the acute inflammatory process. Free full-text articles and a mobile application of TG13 are available via http://www.jshbps.jp/en/guideline/tg13.html."}
{"_id": "08e1f479de40d66711f89e0926bc3f6d14f3dbc0", "title": "Disease Trajectory Maps", "text": "Medical researchers are coming to appreciate that many diseases are in fact complex, heterogeneous syndromes composed of subpopulations that express different variants of a related complication. Longitudinal data extracted from individual electronic health records (EHR) offer an exciting new way to study subtle differences in the way these diseases progress over time. In this paper, we focus on answering two questions that can be asked using these databases of longitudinal EHR data. First, we want to understand whether there are individuals with similar disease trajectories and whether there are a small number of degrees of freedom that account for differences in trajectories across the population. Second, we want to understand how important clinical outcomes are associated with disease trajectories. To answer these questions, we propose the Disease Trajectory Map (DTM), a novel probabilistic model that learns low-dimensional representations of sparse and irregularly sampled longitudinal data. We propose a stochastic variational inference algorithm for learning the DTM that allows the model to scale to large modern medical datasets. To demonstrate the DTM, we analyze data collected on patients with the complex autoimmune disease, scleroderma. We find that DTM learns meaningful representations of disease trajectories and that the representations are significantly associated with important clinical outcomes."}
{"_id": "95e873c3f64a9bd8346f5b5da2e4f14774536834", "title": "Wideband H-Plane Horn Antenna Based on Ridge Substrate Integrated Waveguide (RSIW)", "text": "A substrate integrated waveguide (SIW) H-plane sectoral horn antenna, with significantly improved bandwidth, is presented. A tapered ridge, consisting of a simple arrangement of vias on the side flared wall within the multilayer substrate, is introduced to enlarge the operational bandwidth. A simple feed configuration is suggested to provide the propagating wave for the antenna structure. The proposed antenna is simulated by two well-known full-wave packages, Ansoft HFSS and CST Microwave Studio, based on segregate numerical methods. Close agreement between simulation results is reached. The designed antenna shows good radiation characteristics and low VSWR, lower than 2.5, for the whole frequency range of 18-40 GHz."}
{"_id": "8c030a736512456e9fd8d53763cbfcac0c014ab3", "title": "Multiscale Approaches To Music Audio Feature Learning", "text": "Content-based music information retrieval tasks are typically solved with a two-stage approach: features are extracted from music audio signals, and are then used as input to a regressor or classifier. These features can be engineered or learned from data. Although the former approach was dominant in the past, feature learning has started to receive more attention from the MIR community in recent years. Recent results in feature learning indicate that simple algorithms such as K-means can be very effective, sometimes surpassing more complicated approaches based on restricted Boltzmann machines, autoencoders or sparse coding. Furthermore, there has been increased interest in multiscale representations of music audio recently. Such representations are more versatile because music audio exhibits structure on multiple timescales, which are relevant for different MIR tasks to varying degrees. We develop and compare three approaches to multiscale audio feature learning using the spherical K-means algorithm. We evaluate them in an automatic tagging task and a similarity metric learning task on the Magnatagatune dataset."}
{"_id": "a4ac001d1a11df51e05b1651d497c4e56dec3f51", "title": "Training Simplification and Model Simplification for Deep Learning: A Minimal Effort Back Propagation Method", "text": "We propose a simple yet effective technique to simplify the training and the resulting model of neural networks. In back propagation, only a small subset of the full gradient is computed to update the model parameters. The gradient vectors are sparsified in such a way that only the top-k elements (in terms of magnitude) are kept. As a result, only k rows or columns (depending on the layout) of the weight matrix are modified, leading to a linear reduction in the computational cost. Based on the sparsified gradients, we further simplify the model by eliminating the rows or columns that are seldom updated, which will reduce the computational cost both in the training and decoding, and potentially accelerate decoding in real-world applications. Surprisingly, experimental results demonstrate that most of time we only need to update fewer than 5% of the weights at each back propagation pass. More interestingly, the accuracy of the resulting models is actually improved rather than degraded, and a detailed analysis is given. The model simplification results show that we could adaptively simplify the model which could often be reduced by around 9x, without any loss on accuracy or even with improved accuracy."}
{"_id": "5aab22bf1f18e28fbb15af044f4dcf6f60524eb2", "title": "SeemGo: Conditional Random Fields Labeling and Maximum Entropy Classification for Aspect Based Sentiment Analysis", "text": "This paper describes our SeemGo system for the task of Aspect Based Sentiment Analysis in SemEval-2014. The subtask of aspect term extraction is cast as a sequence labeling problem modeled with Conditional Random Fields that obtains the F-score of 0.683 for Laptops and 0.791 for Restaurants by exploiting both word-based features and context features. The other three subtasks are solved by the Maximum Entropy model, with the occurrence counts of unigram and bigram words of each sentence as features. The subtask of aspect category detection obtains the best result when applying the Boosting method on the Maximum Entropy model, with the precision of 0.869 for Restaurants. The Maximum Entropy model also shows good performance in the subtasks of both aspect term and aspect category polarity classification."}
{"_id": "8188d1bac84d020595115a695ba436ceeb5437e0", "title": "Machine learning approach for automated screening of malaria parasite using light microscopic images.", "text": "The aim of this paper is to address the development of computer assisted malaria parasite characterization and classification using machine learning approach based on light microscopic images of peripheral blood smears. In doing this, microscopic image acquisition from stained slides, illumination correction and noise reduction, erythrocyte segmentation, feature extraction, feature selection and finally classification of different stages of malaria (Plasmodium vivax and Plasmodium falciparum) have been investigated. The erythrocytes are segmented using marker controlled watershed transformation and subsequently total ninety six features describing shape-size and texture of erythrocytes are extracted in respect to the parasitemia infected versus non-infected cells. Ninety four features are found to be statistically significant in discriminating six classes. Here a feature selection-cum-classification scheme has been devised by combining F-statistic, statistical learning techniques i.e., Bayesian learning and support vector machine (SVM) in order to provide the higher classification accuracy using best set of discriminating features. Results show that Bayesian approach provides the highest accuracy i.e., 84% for malaria classification by selecting 19 most significant features while SVM provides highest accuracy i.e., 83.5% with 9 most significant features. Finally, the performance of these two classifiers under feature selection framework has been compared toward malaria parasite classification."}
{"_id": "c29920f73686404a580fa2f9bff34548fd73125a", "title": "Multi-factor Authentication Security Framework in Cloud Computing", "text": "Data Security is the most critical issues in a cloud computing environment. Authentication is a key technology for information security, which is a mechanism to establish proof of identities to get access of information in the system. Traditional password authentication does not provide enough security for information in cloud computing environment to the most modern means of attacks. In this paper, we propose a new multi-factor authentication framework for cloud computing. In this paper the features of various access control mechanisms are discussed and a novel framework of access control is proposed for cloud computing, which provides a multi -step and multifactor authentication of a user. The model proposed is well-organized and provably secure solution of access control for externally hosted applications."}
{"_id": "57bb4066753a13f3c0f289941d3557ae144c9b6a", "title": "A Hybrid Model to Detect Phishing-Sites Using Supervised Learning Algorithms", "text": "Since last decades, online technologies have revolutionized the modern computing world. However, as a result, security threats are increasing rapidly. A huge community is using the online services even from chatting to banking is done via online transactions. Customers of web technologies face various security threats and phishing is one of the most important threat that needs to be address. Therefore, the security mechanism must be enhanced. The attacker uses phishing attack to get victims credential information like bank account number, passwords or any other information by mimicking a website of an enterprise, and the victim is unaware of phishing website. In literature, several approaches have been proposed for detection and filtering phishing attack. However, researchers are still searching for such a solution that can provide better results to secure users from phishing attack. Phishing websites have certain characteristics and patterns and to identify those features can help us to detect phishing. To identify such features is a classification task and can be solved using data mining techniques. In this paper, we are presenting a hybrid model for classification to overcome phishing-sites problem. To evaluate this model, we have used the dataset from UCI repository which contains 30 attributes and 11055 instances. The experimental results showed that our proposed hybrid model outperforms in terms of high accuracy and less error rate."}
{"_id": "8462ca8f2459bcf35378d6dbb10dce70a6fba70a", "title": "Top 10 algorithms in data mining", "text": "This paper presents the top 10 data mining algorithms identified by the IEEE International Conference on Data Mining (ICDM) in December 2006: C4.5, k-Means, SVM, Apriori, EM, PageRank, AdaBoost, kNN, Naive Bayes, and CART. These top 10 algorithms are among the most influential data mining algorithms in the research community. With each algorithm, we provide a description of the algorithm, discuss the impact of the algorithm, and review current and further research on the algorithm. These 10 algorithms cover classification, clustering, statistical learning, association analysis, and link mining, which are all among the most important topics in data mining research and development."}
{"_id": "12a376e621d690f3e94bce14cd03c2798a626a38", "title": "Rapid Object Detection using a Boosted Cascade of Simple Features", "text": "This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the \u201cIntegral Image\u201d which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers[6]. The third contribution is a method for combining increasingly more complex classifiers in a \u201ccascade\u201d which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection."}
{"_id": "4d676af094b50974bd702c1e49d2ca363131e30b", "title": "An Associative Classification Data Mining Approach for Detecting Phishing Websites", "text": "Phishing websites are fake websites that are created by dishonest people to mimic webpages of real websites. Victims of phishing attacks may expose their financial sensitive information to the attacker whom might use this information for financial and criminal activities. Various approaches have been proposed to detect phishing websites, among which, approaches that utilize data mining techniques had shown to be more effective. The main goal of data mining is to analyze a large set of data to identify unsuspected relation and extract understandable useful patterns. Associative Classification (AC) is a promising data mining approach that integrates association rule and classification to build classification models (classifiers). This paper, proposes a new AC algorithm called Phishing Associative Classification (PAC), for detecting phishing websites. PAC employed a novel methodology in construction the classifier which results in generating moderate size classifiers. The algorithm improved the effectiveness and efficiency of a known algorithm called MCAR, by introducing a new prediction procedure and adopting a different rule pruning procedure. The conducted experiments compared PAC with 4 well-known data mining algorithms, these are: covering algorithm (Prism), decision tree (C4.5), associative Classification (CBA) and MCAR. Experiments are performed on a dataset that consists of 1010 website. Each Website is represented using 17 features categorized into 4 sets. The features are extracted from the website contents and URL. The results on each features set show that PAC is either equivalent or more effective than the compared algorithms. When all features are considered, PAC outperformed the compared algorithms and correctly identified 99.31% of the tested websites. Furthermore, PAC produced less number of rules than MCAR, and therefore, is more efficient."}
{"_id": "33b53abdf2824b2cb0ee083c284000df4343a33e", "title": "Using Stories to Teach Human Values to Artificial Agents", "text": "Value alignment is a property of an intelligent agent indicating that it can only pursue goals that are beneficial to humans. Successful value alignment should ensure that an artificial general intelligence cannot intentionally or unintentionally perform behaviors that adversely affect humans. This is problematic in practice since it is difficult to exhaustively enumerated by human programmers. In order for successful value alignment, we argue that values should be learned. In this paper, we hypothesize that an artificial intelligence that can read and understand stories can learn the values tacitly held by the culture from which the stories originate. We describe preliminary work on using stories to generate a value-aligned reward signal for reinforcement learning agents that prevents psychotic-appearing behavior."}
{"_id": "af930bef4d27fc83d0cb138b7e3351f26d696284", "title": "Supervised and Unsupervised methods to detect Insider Threat from Enterprise Social and Online Activity Data", "text": "Insider threat is a significant security risk for organizations, and detection of insider threat is of paramount concern to organizations. In this paper, we attempt to discover insider threat by analyzing enterprise social and online activity data of employees. To this end, we process and extract relevant features that are possibly indicative of insider threat behavior. This includes features extracted from social data including email communication patterns and content, and online activity data such as web browsing patterns, email frequency, and file and machine access patterns. Subsequently, we take two approaches to detect insider threat: (i) an unsupervised approach where we identify statistically abnormal behavior with respect to these features using state-of-the-art anomaly detection methods, and (ii) a supervised approach where we use labels indicating when employees quit the company as a proxy for insider threat activity to design a classifier. We test our approach on a real world data set with artificially injected insider threat events. We obtain a ROC score of 0.77 for the unsupervised approach, and a classification accuracy of 73.4% for the supervised approach. These results indicate that our proposed approaches are fairly successful in identifying insider threat events. Finally, we build a visualization dashboard that enables managers and HR personnel to quickly identify employees with high threat risk scores which will enable them to take suitable preventive measures and limit security risk."}
{"_id": "30ebfab0dc11c065397ee22d1ad7c366a7638646", "title": "An analytical framework for data stream mining techniques based on challenges and requirements", "text": "A growing number of applications that generate massive streams of data need intelligent data processing and online analysis. Real-time surveillance systems, telecommunication systems, sensor networks and other dynamic environments are such examples. The imminent need for turning such data into useful information and knowledge augments the development of systems, algorithms and frameworks that address streaming challenges. The storage, querying and mining of such data sets are highly computationally challenging tasks. Mining data streams is concerned with extracting knowledge structures represented in models and patterns in non stopping streams of information. Generally, two main challenges are designing fast mining methods for data streams and need to promptly detect changing concepts and data distribution because of highly dynamic nature of data streams. The goal of this article is to analyze and classify the application of diverse data mining techniques in different challenges of data stream mining. In this paper, we present the theoretical foundations of data stream analysis and propose an analytical framework for data stream mining techniques."}
{"_id": "355e35184d084abc712c5bfcceafc0fdfe78ceef", "title": "BLIS: A Framework for Rapidly Instantiating BLAS Functionality", "text": "The BLAS-like Library Instantiation Software (BLIS) framework is a new infrastructure for rapidly instantiating Basic Linear Algebra Subprograms (BLAS) functionality. Its fundamental innovation is that virtually all computation within level-2 (matrix-vector) and level-3 (matrix-matrix) BLAS operations can be expressed and optimized in terms of very simple kernels. While others have had similar insights, BLIS reduces the necessary kernels to what we believe is the simplest set that still supports the high performance that the computational science community demands. Higher-level framework code is generalized and implemented in ISO C99 so that it can be reused and/or reparameterized for different operations (and different architectures) with little to no modification. Inserting high-performance kernels into the framework facilitates the immediate optimization of any BLAS-like operations which are cast in terms of these kernels, and thus the framework acts as a productivity multiplier. Users of BLAS-dependent applications are given a choice of using the traditional Fortran-77 BLAS interface, a generalized C interface, or any other higher level interface that builds upon this latter API. Preliminary performance of level-2 and level-3 operations is observed to be competitive with two mature open source libraries (OpenBLAS and ATLAS) as well as an established commercial product (Intel MKL).\n Replicated Computational Results Certified: This article was subjected to an optional supplementary review process in which the computational results leading to the main conclusions of the paper were independently replicated. A separate report with the results of this process is published in this issue (see: http://dx.doi.org/10.1145/2738033). The certification process is described in Michael Heroux's editorial: http://dx.doi.org/10.1145/2743015."}
{"_id": "28f0e0d3783659bc9adb2cec56f19b1f90cdd2be", "title": "Video2GIF: Automatic Generation of Animated GIFs from Video", "text": "We introduce the novel problem of automatically generating animated GIFs from video. GIFs are short looping video with no sound, and a perfect combination between image and video that really capture our attention. GIFs tell a story, express emotion, turn events into humorous moments, and are the new wave of photojournalism. We pose the question: Can we automate the entirely manual and elaborate process of GIF creation by leveraging the plethora of user generated GIF content? We propose a Robust Deep RankNet that, given a video, generates a ranked list of its segments according to their suitability as GIF. We train our model to learn what visual content is often selected for GIFs by using over 100K user generated GIFs and their corresponding video sources. We effectively deal with the noisy web data by proposing a novel adaptive Huber loss in the ranking formulation. We show that our approach is robust to outliers and picks up several patterns that are frequently present in popular animated GIFs. On our new large-scale benchmark dataset, we show the advantage of our approach over several state-of-the-art methods."}
{"_id": "8cd3681c146ae525cc7f485f56cf6e432a36da28", "title": "SoilJ : An ImageJ Plugin for the Semiautomatic Processing of Three-Dimensional X-ray Images of Soils", "text": "Noninvasive threeand four-dimensional X-ray imaging approaches have proved to be valuable analysis tools for vadose zone research. One of the main bottlenecks for applying X-ray imaging to data sets with a large number of soil samples is the relatively large amount of time and expertise needed to extract quantitative data from the respective images. SoilJ is a plugin for the free and open imaging software ImageJ that aims at automating the corresponding processing steps for cylindrical soil columns. It includes modules for automatic column outline recognition, correction of image intensity bias, image segmentation, extraction of particulate organic matter and roots, soil surface topography detection, as well as morphology and percolation analyses. In this study, the functionality and precision of some key SoilJ features were demonstrated on five different image data sets of soils. SoilJ has proved to be useful for strongly decreasing the amount of time required for image processing of large image data sets. At the same time, it allows researchers with little experience in image processing to make use of X-ray imaging methods. The SoilJ source code is freely available and may be modified and extended at will by its users. It is intended to stimulate further community-driven development of this software."}
{"_id": "c696f0584a45f56bff31399fb339aa9b6a38baff", "title": "Trade-Offs in PMU Deployment for State Estimation in Active Distribution Grids", "text": "Monitoring systems are expected to play a major role in active distribution grids, and the design of the measurement infrastructure is a critical element for an effective operation. The use of any available and newly installed, though heterogeneous, metering device providing more accurate and real-time measurement data offers a new paradigm for the distribution grid monitoring system. In this paper the authors study the meter placement problem for the measurement infrastructure of an active distribution network, where heterogeneous measurements provided by Phasor Measurement Units (PMUs) and other advanced measurement systems such as Smart Metering systems are used in addition to measurements that are typical of distribution networks, in particular substation measurements and a-priori knowledge. This work aims at defining a design approach for finding the optimal measurement infrastructure for an active distribution grid. The design problem is posed in terms of a stochastic optimization with the goal of bounding the overall uncertainty of the state estimation using heterogeneous measurements while minimizing the investment cost. The proposed method is also designed for computational efficiency so to cover a wide set of scenarios."}
{"_id": "ad93d3b55fb94827c4df45e9fc67c55a7d90d00b", "title": "Relative clustering validity criteria: A comparative overview", "text": "Many different relative clustering validity criteria exist that are very useful in practice as quantitative measures for evaluating the quality of data partitions, and new criteria have still been proposed from time to time. These criteria are endowed with particular features that may make each of them able to outperform others in specific classes of problems. In addition, they may have completely different computational requirements. Then, it is a hard task for the user to choose a specific criterion when he or she faces such a variety of possibilities. For this reason, a relevant issue within the field of clustering analysis consists of comparing the performances of existing validity criteria and, eventually, that of a new criterion to be proposed. In spite of this, the comparison paradigm traditionally adopted in the literature is subject to some conceptual limitations. The present paper describes an alternative, possibly complementary methodology for comparing clustering validity criteria and uses it to make an extensive comparison of the performances of 40 criteria over a collection of 962,928 partitions derived from five well-known clustering algorithms and 1080 different data sets of a given class of interest. A detailed review of the relative criteria under investigation is also provided that includes an original comparative asymptotic analysis of their computational complexities. This work is intended to be a complement of the classic study reported in 1985 by Milligan and Cooper as well as a thorough extension of a preliminary paper by the authors themselves. \uf6d9 2010 Wiley Periodicals, Inc. Statistical Analysis and Data Mining 3:"}
{"_id": "90a762dfd34fe9e389f30318b9b9cb1571d9766e", "title": "A multi-resonant gate driver for Very-High-Frequency (VHF) resonant converters", "text": "This paper presents the design and implementation of a Very-High-Frequency (VHF) multi-resonant gate drive circuit. The design procedure is greatly simplified compared with some VHF self-oscillating multi-resonant gate drivers presented in previous works. The proposed circuit has the potential to reduce long start-up time required in a self-oscillating resonant gate drive circuit and to better utilize fast transient capability of VHF converters. A prototype resonant gate driver is demonstrated and able to reduce 60 % of the gate driving loss of a 20 MHz 32 W Class-E power amplifier with Si MOSFET."}
{"_id": "5619de6b87691e2f2b9ea6665aee766d3eec6669", "title": "Automated segmentation of cell nuclei in PAP smear images", "text": "In this paper an automated method for cell nucleus segmentation in PAP smear images is presented. The method combines the global knowledge about the cells and nuclei appearance and the local characteristics of the area of the nuclei, in order to achieve an accurate nucleus boundary. Filters and morphological operators in all three channels of a color image result in the determination of the locations of nuclei in the image, even in cases where cell overlapping occurs. The nucleus boundary is determined by a deformable model. The results are very promising, even when images with high degree of overlapping are used."}
{"_id": "5c04b3178af0cc5f367c833030c118701c210229", "title": "Two-Stream Neural Networks for Tampered Face Detection", "text": "We propose a two-stream network for face tampering detection. We train GoogLeNet to detect tampering artifacts in a face classification stream, and train a patch based triplet network to leverage features capturing local noise residuals and camera characteristics as a second stream. In addition, we use two different online face swaping applications to create a new dataset that consists of 2010 tampered images, each of which contains a tampered face. We evaluate the proposed two-stream network on our newly collected dataset. Experimental results demonstrate the effectness of our method."}
{"_id": "3a900f71d2e74c91711a590dba4932504b21a3f5", "title": "Orestes: A scalable Database-as-a-Service architecture for low latency", "text": "Today, the applicability of database systems in cloud environments is considerably restricted because of three major problems: I) high network latencies for remote/mobile clients, II) lack of elastic horizontal scalability mechanisms, and III) missing abstraction of storage and data models. In this paper, we propose an architecture, a REST/HTTP protocol and a set of algorithms to solve these problems through a Database-as-a-Service middleware called Orestes (Objects RESTfully Encapsulated in Standard Formats). Orestes exposes cloud-hosted NoSQL database systems through a scalable tier of REST servers. These provide database-independent, object-oriented schema design, a client-independent REST-API for database operations, globally distributed caching, cache consistency mechanisms and optimistic ACID transactions. By comparative evaluations we offer empirical evidence that the proposed Database-as-a-Service architecture indeed solves common latency, scalability and abstraction problems encountered in modern cloud-based applications."}
{"_id": "7d4642da036febe5174da1390521444ef405c864", "title": "Estimating Continuous Distributions in Bayesian Classifiers", "text": "When modeling a probability distribution with a Bayesian network, we are faced with the problem of how to handle continuous vari\u00ad ables. Most previous work has either solved the problem by discretizing, or assumed that the data are generated by a single Gaussian. In this paper we abandon the normality as\u00ad sumption and instead use statistical methods for nonparametric density estimation. For a naive Bayesian classifier, we present experi\u00ad mental results on a variety of natural and ar\u00ad tificial domains, comparing two methods of density estimation: assuming normality and modeling each conditional distribution with a single Gaussian; and using nonparamet\u00ad ric kernel density estimation. We observe large reductions in error on several natural and artificial data sets, which suggests that kernel estimation is a useful tool for learning Bayesian models."}
{"_id": "28440bfbdeb74668ce5ff85568afa198d897394d", "title": "Semantic web-mining and deep vision for lifelong object discovery", "text": "Autonomous robots that are to assist humans in their daily lives must recognize and understand the meaning of objects in their environment. However, the open nature of the world means robots must be able to learn and extend their knowledge about previously unknown objects on-line. In this work we investigate the problem of unknown object hypotheses generation, and employ a semantic web-mining framework along with deep-learning-based object detectors. This allows us to make use of both visual and semantic features in combined hypotheses generation. Experiments on data from mobile robots in real world application deployments show that this combination improves performance over the use of either method in isolation."}
{"_id": "fb4531ce15b5813259bdd140a763361578688013", "title": "TDParse: Multi-target-specific sentiment recognition on Twitter", "text": "Existing target-specific sentiment recognition methods consider only a single target per tweet, and have been shown to miss nearly half of the actual targets mentioned. We present a corpus of UK election tweets, with an average of 3.09 entities per tweet and more than one type of sentiment in half of the tweets. This requires a method for multi-target specific sentiment recognition, which we develop by using the context around a target as well as syntactic dependencies involving the target. We present results of our method on both a benchmark corpus of single targets and the multi-target election corpus, showing state-of-the art performance in both corpora and outperforming previous approaches to multi-target sentiment task as well as deep learning models for singletarget sentiment."}
{"_id": "74574cb26bcec6435170bb3e735e1140ac676942", "title": "Understanding the Factors Driving NFC-Enabled Mobile Payment Adoption: an Empirical Investigation", "text": "Though NFC mobile payment offers both convenience and benefits to consumers, its adoption rate is still very low. This study aims to explore the factors determining consumers\u2019 adoption of NFC mobile payment. In this study, building on TAM, an extended adoption intention model is developed by incorporating three sorts of variables into it, namely mobile payment system factors (compatibility and perceived complementarity), user characteristics (mobile payment knowledge), and a risk-related factor (perceived risk). The model is empirically tested with a sample of 377 validated respondents. Compatibility, perceived ease of use and mobile payment knowledge are found to be the factors determining individuals\u2019 intention to use NFC mobile payment. Against our expectations, perceived usefulness and perceived risk do not affect use intention significantly in adopting NFC mobile payments among consumers. We discuss the theoretical and practical implications of these findings, and point out the limitations and need for future research."}
{"_id": "3cc67f2347310b6b8f21080acb7b2b9cf5f91ca4", "title": "Development of a 4-joint 3-DOF robotic arm with anti-reaction force mechanism for a multicopter", "text": "In this paper, we propose a design method for the mechanism of a robotic arm suitable for a multicopter. The motion of an arm attached to a multicopter can possibly disturb the multicopter attitude. Our aim is to suppress this attitude disturbance by mechanical methods. In the proposed design method, the arm has the following features for suppressing the disturbance of the multicopter attitude. 1. The robotic arm can adjust its center of gravity (COG). 2. In the simplified model, the sum of the angular momentum of the rotating parts constituting the robot arm is equal to zero. These features can compensate for both the displacement of the COG of the arm and the counter torque generated from rotating parts, which cause the disturbance to the multicopter attitude. Since the disturbance of the multicopter attitude can be suppressed mechanically, the robotic arm does not require a special flight control law. In other words, this robotic arm has the advantage of being attached to a multicopter using a common, off-the-shelf, multipurpose flight controller. Furthermore, we discuss the mass distribution, power transmission method, and the moment of inertia of rotating parts, such as the speed reducer, motor, and arm. Additionally, we fabricate a prototype robotic arm with four joints and three degrees of freedom (DOFs), based on the proposed design method. Finally, the experiments showed that the robotic arm suppressed the disturbance of the multicopter attitude caused by the arm motion."}
{"_id": "dc115ea8684f9869040dcd7f72a6f1146c626e9a", "title": "Operations for Learning with Graphical Models", "text": "This paper is a multidisciplinary review of empirical, statistical learning from a graph-ical model perspective. Well-known examples of graphical models include Bayesian networks , directed graphs representing a Markov chain, and undirected networks representing a Markov eld. These graphical models are extended to model data analysis and empirical learning using the notation of plates. Graphical operations for simplifying and manipulating a problem are provided including decomposition, diierentiation, and the manipulation of probability models from the exponential family. Two standard algorithm schemas for learning are reviewed in a graphical framework: Gibbs sampling and the expectation max-imization algorithm. Using these operations and schemas, some popular algorithms can be synthesized from their graphical speciication. This includes versions of linear regression , techniques for feed-forward networks, and learning Gaussian and discrete Bayesian networks from data. The paper concludes by sketching some implications for data analysis and summarizing how some popular algorithms fall within the framework presented. The main original contributions here are the decomposition techniques and the demonstration that graphical models provide a framework for understanding and developing complex learning algorithms."}
{"_id": "a6f312917ad63749e38ca5f2f8120522ad0548c6", "title": "The Q-CHAT (Quantitative CHecklist for Autism in Toddlers): a normally distributed quantitative measure of autistic traits at 18-24 months of age: preliminary report.", "text": "We report a major revision of the CHecklist for Autism in Toddlers (CHAT). This quantitative CHAT (Q-CHAT) contains 25 items, scored on a 5 point scale (0-4). The Q-CHAT was completed by parents of n = 779 unselected toddlers (mean age 21 months) and n = 160 toddlers and preschoolers (mean age 44 months) with an Autism Spectrum Condition (ASC). The ASC group (mean (SD) = 51.8 (14.3)) scored higher on the Q-CHAT than controls (26.7 (7.8)). Boys in the control group (27.5 (7.8)) scored higher than girls (25.8 (7.7)). The intraclass correlation for test-retest reliability was 0.82 (n = 330). The distribution in the control group was close to normal. Full examination of the clinical validity of the Q-CHAT and test properties is underway."}
{"_id": "519f7a6897f380ca9de4d971422f8209a5a1b67e", "title": "Leakage Current Elimination of Four-Leg Inverter for Transformerless Three-Phase PV Systems", "text": "Eliminating the leakage current is one of the most important issues for transformerless three-phase photovoltaic (PV) systems. In this paper, the leakage current elimination of a three-phase four-leg PV inverter is investigated. With the common-mode loop model established, the generation mechanism of the leakage current is clearly identified. Different typical carrier-based modulation methods and their corresponding common-mode voltages are discussed. A new modulation strategy with Boolean logic function is proposed to achieve the constant common-mode voltage for the leakage current reduction. Finally, the different modulation methods are implemented and tested on the TMS320F28335 DSP +XC3S400 FPGA digital control platform. The experimental results verify the effectiveness of the proposed solution."}
{"_id": "f459c04bcb07b47342317f1dade894a9cb492fcb", "title": "Hybrid cryptography mechanism for securing self-organized wireless networks", "text": "Nowadays use of wireless technologies tremendously increased due to the continuous development of these technologies in various applications. One of the development in wireless technologies leads to deployment of the networks in which all mobile nodes will communicate via wireless links directly without use of any base station or predefined infrastructural support. These networks are self-organized networks called mobile ad hoc networks (MANETs) and architecture used for these networks will be either stand alone with one or few nodes communicating with base station or distributed. As communication is done through wireless links, these types of networks are more prone to several kinds of attacks. Authentication and encryption acts as first line of defense while an intrusion detection system (IDS) acts as a second line of defense. One of the intrusion detection system used in MANET is Enhanced Adaptive Acknowledgement (EAACK) which is based on acknowledgement packets for detection of malicious activities. This system increases the overhead of network significantly relative to other system such as watchdog. So, in this paper we adopt the hybrid cryptography to reduce the network overhead and enhance the security of networks significantly."}
{"_id": "8ac4edf2d94a0a872e7ece90d12e718ac9ba3f00", "title": "Amazigh Isolated-Word speech recognition system using Hidden Markov Model toolkit (HTK)", "text": "This paper aims to build a speaker-independent automatic Amazigh Isolated-Word speech recognition system. Hidden Markov Model toolkit (HTK) that uses hidden Markov Models has been used to develop the system. The recognition vocabulary consists on the Amazigh Letters and Digits. The system has been trained to recognize the Amazigh 10 first digits and 33 alphabets. Mel frequency spectral coefficients (MFCCs) have been used to extract the feature. The training data has been collected from 60 speakers including both males and females. The test-data used for evaluating the system-performance has been collected from 20 speakers. The experimental results show that the presented system provides the overall word-accuracy 80%. The initial results obtained are very satisfactory in comparison with the training database's size, this encourages us to increase system performance to achieve a higher recognition rate."}
{"_id": "d9259ccdbfe8b1816400efb8c44f5afb3cae4a3b", "title": "Identification of Flux Linkage Map of Permanent Magnet Synchronous Machines Under Uncertain Circuit Resistance and Inverter Nonlinearity", "text": "This paper proposes a novel scheme for the identification of the whole flux linkage map of permanent magnet synchronous machines, by which the map of dq-axis flux linkages at different load or saturation conditions can be identified by the minimization of a proposed estimation model. The proposed method works on a conventional three-phase inverter based vector control system and the immune clonal based quantum genetic algorithm is employed for the global searching of minimal point. Besides, it is also noteworthy that the influence of uncertain inverter nonlinearity and circuit resistance are cancelled during the modeling process. The proposed method is subsequently tested on two PMSMs and shows quite good performance compared with the finite element prediction results."}
{"_id": "ffd9d08519b6b6518141313250fc9c386e20ba3d", "title": "Simple and strong : Twisted silver painted nylon artificial muscle actuated by Joule heating", "text": "Highly oriented nylon and polyethylene fibres shrink in length when heated and expand in diameter. By twisting and then coiling monofilaments of these materials to form helical springs, the anisotropic thermal expansion has recently been shown to enable tensile actuation of up to 49% upon heating. Joule heating, by passing a current through a conductive coating on the surface of the filament, is a convenient method of controlling actuation. In previously reported work this has been done using highly flexible carbon nanotube sheets or commercially available silver coated fibres. In this work silver paint is used as the Joule heating element at the surface of the muscle. Up to 29% linear actuation is observed with energy and power densities reaching 840 kJ m (528 J kg) and 1.1 kW kg (operating at 0.1 Hz, 4% strain, 1.4 kg load). This simple coating method is readily accessible and can be applied to any polymer filament. Effective use of this technique relies on uniform coating to avoid temperature gradients."}
{"_id": "11dc5064baf01c211d48f1db872657344c961d50", "title": "WHY AGENTS? ON THE VARIED MOTIVATIONS FOR AGENT COMPUTING IN THE SOCIAL SCIENCES", "text": "The many motivations for employing agent-based computation in the social sciences are reviewed. It is argued that there exist three distinct uses of agent modeling techniques. One such use \u2014 the simplest \u2014 is conceptually quite close to traditional simulation in operations research. This use arises when equations can be formulated that completely describe a social process, and these equations are explicitly soluble, either analytically or numerically. In the former case, the agent model is merely a tool for presenting results, while in the latter it is a novel kind of Monte Carlo analysis. A second, more commonplace usage of computational agent models arises when mathematical models can be written down but not completely solved. In this case the agent-based model can shed significant light on the solution structure, illustrate dynamical properties of the model, serve to test the dependence of results on parameters and assumptions, and be a source of counter-examples. Finally, there are important classes of problems for which writing down equations is not a useful activity. In such circumstances, resort to agent-based computational models may be the only way available to explore such processes systematically, and constitute a third distinct usage of such models. \u2217 Corresponding author address: Robert Axtell, Center on Social and Economic Dynamics, The Brookings Institution, 1775 Massachusetts Ave. NW, Washington, DC 20036; e-mail: raxtell@brook.edu; web: http://www.brook.edu/es/dynamics."}
{"_id": "1a6699f52553e095cf5f8ec79d71013db8e0ee16", "title": "Agent-based modeling: methods and techniques for simulating human systems.", "text": "Agent-based modeling is a powerful simulation modeling technique that has seen a number of applications in the last few years, including applications to real-world business problems. After the basic principles of agent-based simulation are briefly introduced, its four areas of application are discussed by using real-world applications: flow simulation, organizational simulation, market simulation, and diffusion simulation. For each category, one or several business applications are described and analyzed."}
{"_id": "26b71580a1a07f8e555ffd26e7ea711e953815c7", "title": "The Swarm Simulation System: A Toolkit for Building Multi-Agent Simulations", "text": "Swarm is a multi agent software platform for the simulation of complex adaptive systems In the Swarm system the basic unit of simulation is the swarm a collec tion of agents executing a schedule of actions Swarm supports hierarchical model ing approaches whereby agents can be composed of swarms of other agents in nested structures Swarm provides object oriented libraries of reusable components for build ing models and analyzing displaying and controlling experiments on those models Swarm is currently available as a beta version in full free source code form It requires the GNU C Compiler Unix and X Windows More information about Swarm can be obtained from our web pages http www santafe edu projects swarm Santa Fe Institute Hyde Park Rd Santa Fe NM nelson santafe edu Deere Company John Deere Road Moline IL roger ci deere com Santa Fe Institute Hyde Park Rd Santa Fe NM cgl santafe edu Santa Fe Institute Hyde Park Rd Santa Fe NM manor santafe edu Swarm has been developed at the Santa Fe Institute with the support of Grant No N from the O ce of Naval Research and Grant No N G from the Naval Research Laboratory both acting in cooperation with the Defense Advanced Research Projects Agency Swarm bene ted from earlier support from The Carol O Donnell Foundation Mr and Mrs Michael Grantham and the National Science Foundation SFI also gratefully acknowledges invaluable support from Deere Company for this"}
{"_id": "51f0e3fe5335e2c3a55e673a6adae646f0ad6e11", "title": "From Factors to Actors : Computational Sociology and Agent-Based Modeling", "text": "\u25a0 Abstract Sociologists often model social processes as interactions among variables. We review an alternative approach that models social life as interactions among adaptive agents who influence one another in response to the influence they receive. These agent-based models (ABMs) show how simple and predictable local interactions can generate familiar but enigmatic global patterns, such as the diffusion of information, emergence of norms, coordination of conventions, or participation in collective action. Emergent social patterns can also appear unexpectedly and then just as dramatically transform or disappear, as happens in revolutions, market crashes, fads, and feeding frenzies. ABMs provide theoretical leverage where the global patterns of interest are more than the aggregation of individual attributes, but at the same time, the emergent pattern cannot be understood without a bottom up dynamical model of the microfoundations at the relational level. We begin with a brief historical sketch of the shift from \u201cfactors\u201d to \u201cactors\u201d in computational sociology that shows how agent-based modeling differs fundamentally from earlier sociological uses of computer simulation. We then review recent contributions focused on the emergence of social structure and social order out of local interaction. Although sociology has lagged behind other social sciences in appreciating this new methodology, a distinctive sociological contribution is evident in the papers we review. First, theoretical interest focuses on dynamic social networks that shape and are shaped by agent interaction. Second, ABMs are used to perform virtual experiments that test macrosociological theories by manipulating structural factors like network topology, social stratification, or spatial mobility. We conclude our review with a series of recommendations for realizing the rich sociological potential of this approach."}
{"_id": "dc63ae3f4fdde0afb3f91b146e276fc663c22e6d", "title": "Complex suicide by self-stabbing with subsequent drowning in the sea.", "text": "The paper presents a unique case of a complex suicide committed by a young man, mostly probably triggered by a disappointment in love. The uniqueness of the suicide lies in the fact that the victim inflicted several deep stab wounds on himself, in the chest and abdomen, while standing partly submerged in the sea and, having done so, he dropped and disappeared in the water. The postmortem examination showed, apart from deep wounds in the trunk, characteristics of drowning that manifested itself in the form of aqueous emphysema of the lungs. Suicide was clearly determined on the basis of the circumstances preceding death, the location, and arrangement of the trunk wounds and the testimony given by a witness of the incident. The circumstances preceding the suicidal act clearly suggest an underlying undiagnosed mental disorder."}
{"_id": "10d21ca7728cb3dd15731accedda9ea711d8a0f4", "title": "An End-to-End Discriminative Approach to Machine Translation", "text": "We present a perceptron-style discriminative approach to machine translation in which large feature sets can be exploited. Unlike discriminative reranking approaches, our system can take advantage of learned features in all stages of decoding. We first discuss several challenges to error-driven discriminative approaches. In particular, we explore different ways of updating parameters given a training example. We find that making frequent but smaller updates is preferable to making fewer but larger updates. Then, we discuss an array of features and show both how they quantitatively increase BLEU score and how they qualitatively interact on specific examples. One particular feature we investigate is a novel way to introduce learning into the initial phrase extraction process, which has previously been entirely heuristic."}
{"_id": "180bdb9ab62b589166f2bb3a854d934d4e12746d", "title": "EMPHASIS: An Emotional Phoneme-based Acoustic Model for Speech Synthesis System", "text": "We present EMPHASIS, an emotional phoneme-based acoustic model for speech synthesis system. EMPHASIS includes a phoneme duration prediction model and an acoustic parameter prediction model. It uses a CBHG-based regression network to model the dependencies between linguistic features and acoustic features. We modify the input and output layer structures of the network to improve the performance. For the linguistic features, we apply a feature grouping strategy to enhance emotional and prosodic features. The acoustic parameters are designed to be suitable for the regression task and waveform reconstruction. EMPHASIS can synthesize speech in real-time and generate expressive interrogative and exclamatory speech with high audio quality. EMPHASIS is designed to be a multi-lingual model and can synthesize Mandarin-English speech for now. In the experiment of emotional speech synthesis, it achieves better subjective results than other real-time speech synthesis systems."}
{"_id": "191b8537a875f0171593a4356f2535d5f1bbceac", "title": "Tensor-Factorized Neural Networks", "text": "The growing interests in multiway data analysis and deep learning have drawn tensor factorization (TF) and neural network (NN) as the crucial topics. Conventionally, the NN model is estimated from a set of one-way observations. Such a vectorized NN is not generalized for learning the representation from multiway observations. The classification performance using vectorized NN is constrained, because the temporal or spatial information in neighboring ways is disregarded. More parameters are required to learn the complicated data structure. This paper presents a new tensor-factorized NN (TFNN), which tightly integrates TF and NN for multiway feature extraction and classification under a unified discriminative objective. This TFNN is seen as a generalized NN, where the affine transformation in an NN is replaced by the multilinear and multiway factorization for tensor-based NN. The multiway information is preserved through layerwise factorization. Tucker decomposition and nonlinear activation are performed in each hidden layer. The tensor-factorized error backpropagation is developed to train TFNN with the limited parameter size and computation time. This TFNN can be further extended to realize the convolutional TFNN (CTFNN) by looking at small subtensors through the factorized convolution. Experiments on real-world classification tasks demonstrate that TFNN and CTFNN attain substantial improvement when compared with an NN and a convolutional NN, respectively."}
{"_id": "30d8e493ae35a64b2bebbe6ec90dc190488f82fa", "title": "Using inaccurate models in reinforcement learning", "text": "In the model-based policy search approach to reinforcement learning (RL), policies are found using a model (or \"simulator\") of the Markov decision process. However, for high-dimensional continuous-state tasks, it can be extremely difficult to build an accurate model, and thus often the algorithm returns a policy that works in simulation but not in real-life. The other extreme, model-free RL, tends to require infeasibly large numbers of real-life trials. In this paper, we present a hybrid algorithm that requires only an approximate model, and only a small number of real-life trials. The key idea is to successively \"ground\" the policy evaluations using real-life trials, but to rely on the approximate model to suggest local changes. Our theoretical results show that this algorithm achieves near-optimal performance in the real system, even when the model is only approximate. Empirical results also demonstrate that---when given only a crude model and a small number of real-life trials---our algorithm can obtain near-optimal performance in the real system."}
{"_id": "b73cdb60b2fe9fb317fca4fb9f5e1106e13c2345", "title": "Distance Metric Learning for Large Margin Nearest Neighbor Classification", "text": ""}
{"_id": "31821f81c091d2deceed17206528223a8a5b8822", "title": "Learning by Example: Training Users with High-quality Query Suggestions", "text": "The queries submitted by users to search engines often poorly describe their information needs and represent a potential bottleneck in the system. In this paper we investigate to what extent it is possible to aid users in learning how to formulate better queries by providing examples of high-quality queries interactively during a number of search sessions. By means of several controlled user studies we collect quantitative and qualitative evidence that shows: (1) study participants are able to identify and abstract qualities of queries that make them highly effective, (2) after seeing high-quality example queries participants are able to themselves create queries that are highly effective, and, (3) those queries look similar to expert queries as defined in the literature. We conclude by discussing what the findings mean in the context of the design of interactive search systems."}
{"_id": "b84b96cecb34c476749527ec2cd828fe0300e7a7", "title": "Monolithic CMOS distributed amplifier and oscillator", "text": "CMOS implementations for RF applications often employ technology modifications to reduce the silicon substrate loss at high frequencies. The most common techniques include the use of a high-resistivity substrate (/spl rho/>10 /spl Omega/-cm) or silicon-on-insulator (SOI) substrate and precise bondwire inductors. However, these techniques are incompatible with low-cost CMOS manufacture. This design demonstrates use of CMOS with a conventional low-resistivity epi-substrate and on-chip inductors for applications above 10 GHz."}
{"_id": "aa0c01e553d0a1ab40c204725d13fe528c514bba", "title": "Anticipating many futures: Online human motion prediction and synthesis for human-robot collaboration", "text": "Fluent and safe interactions of humans and robots require both partners to anticipate the others\u2019 actions. A common approach to human intention inference is to model specific trajectories towards known goals with supervised classifiers. However, these approaches do not take possible future movements into account nor do they make use of kinematic cues, such as legible and predictable motion. The bottleneck of these methods is the lack of an accurate model of general human motion. In this work, we present a conditional variational autoencoder that is trained to predict a window of future human motion given a window of past frames. Using skeletal data obtained from RGB depth images, we show how this unsupervised approach can be used for online motion prediction for up to 1660 ms. Additionally, we demonstrate online target prediction within the first 300-500 ms after motion onset without the use of target specific training data. The advantage of our probabilistic approach is the possibility to draw samples of possible future motions. Finally, we investigate how movements and kinematic cues are represented on the learned low dimensional manifold."}
{"_id": "4b40d6b77b332214eefc7d1e79e15fbc2d86d86a", "title": "Mapping Texts: Combining Text-Mining and Geo-Visualization To Unlock The Research Potential of Historical Newspapers", "text": "In this paper, we explore the task of automatic text processing applied to collections of historical newspapers, with the aim of assisting historical research. In particular, in this first stage of our project, we experiment with the use of topical models as a means to identify potential issues of interest for historians. 1 Newspapers in Historical Research Surviving newspapers are among the richest sources of information available to scholars studying peoples and cultures of the past 250 years, particularly for research on the history of the United States. Throughout the nineteenth and twentieth centuries, newspapers served as the central venues for nearly all substantive discussions and debates in American society. By the mid-nineteenth century, nearly every community (no matter how small) boasted at least one newspaper. Within these pages, Americans argued with one another over politics, advertised and conducted economic business, and published articles and commentary on virtually all aspects of society and daily life. Only here can scholars find editorials from the 1870s on the latest political controversies, advertisements for the latest fashions, articles on the latest sporting events, and languid poetry from a local artist, all within one source. Newspapers, in short, document more completely the full range of the human experience than nearly any other source available to modern scholars, providing windows into the past available nowhere else. Despite their remarkable value, newspapers have long remained among the most underutilized historical resources. The reason for this paradox is quite simple: the sheer volume and breadth of information available in historical newspapers has, ironically, made it extremely difficult for historians to go through them page-by-page for a given research project. A historian, for example, might need to wade through tens of thousands of newspaper pages in order to answer a single research question (with no guarantee of stumbling onto the necessary information). Recently, both the research potential and problem of scale associated with historical newspapers has expanded greatly due to the rapid digitization of these sources. The National Endowment for the Humanities (NEH) and the Library of Congress (LOC), for example, are sponsoring a nationwide historical digitization project, Chronicling America, geared toward digitizing all surviving historical newspapers in the United States, from 1836 to the present. This project recently digitized its one millionth page (and they project to have more than 20 million pages within a few years), opening a vast wealth of historical newspapers in digital form. While projects such as Chronicling America have indeed increased access to these important sources, they have also increased the problem of scale that have long prevent scholars from using these sources in meaningful ways. Indeed, without tools and methods capable of handling such large datasets \u2013 and thus sifting out meaningful patterns embedded within them \u2013 scholars find themselves confined to performing only basic word searches across enormous collections. These simple searches can, indeed, find stray information scattered in unlikely 46 places. 
Such rudimentary search tools, however, become increasingly less useful to researchers as datasets continue to grow in size. If a search for a particular term yields 4,000,000 results, even those search results produce a dataset far too large for any single scholar to analyze in a meaningful way using traditional methods. The age of abundance, it turns out, can simply overwhelm historical scholars, as the sheer volume of available digitized historical newspapers is beginning to do. In this paper, we explore the use of topic modeling, in an attempt to identify the most important and potentially interesting topics over a given period of time. Thus, instead of asking a historian to look through thousands of newspapers to identify what may be interesting topics, we take a reverse approach, where we first automatically cluster the data into topics, and then provide these automatically identified topics to the historian so she can narrow her scope to focus on the individual patterns in the dataset that are most applicable to her research. Of more utility would be where the modeling would reveal unexpected topics that point towards unusual patterns previously unknown, thus help shaping a scholar\u2019s subsequent research. The topic modeling can be done for any periods of time, which can consist of individual years or can cover several years at a time. In this way, we can see the changes in the discussions and topics of interest over the years. Moreover, pre-filters can also be applied to the data prior to the topic modeling. For instance, since research being done in the History department at our institution is concerned with the \u201cU. S. cotton economy,\u201d we can use the same approach to identify the interesting topics mentioned in the news articles that talk about the issue of \u201ccotton.\u201d 2 Topic Modeling Topic models have been used by Newman and Block (2006) and Nelson (2010) on newspaper corpora to discover topics and trends over time. The former used the probabilistic latent semantic analysis (pLSA) model, and the latter used the latent Dirichlet allocation (LDA) model, a method introduced by Blei et al. (2003). LDA has also been used by Griffiths and Steyvers (2004) to find research topic trends by looking at abstracts of scientific papers. Hall et al. (2008) have similarly applied LDA to discover trends in the computational linguistics field. Both pLSA and LDA models are probabilistic models that look at each document as a mixture of multinomials or topics. The models decompose the document collection into groups of words representing the main topics. See for instance Table 1, which shows two topics extracted from our collection. Topic worth price black white goods yard silk made ladies wool lot inch week sale prices pair suits fine quality state states bill united people men general law government party made president today washington war committee country public york Table 1: Example of two topic groups Boyd-Graber et al. (2009) compared several topic models, including LDA, correlated topic model (CTM), and probabilistic latent semantic indexing (pLSI), and found that LDA generally worked comparably well or better than the other two at predicting topics that match topics picked by the human annotators. We therefore chose to use a parallel threaded SparseLDA implementation to conduct the topic modeling, namely UMass Amherst\u2019s MAchine Learning for LanguagE Toolkit (MALLET) (McCallum, 2002). MALLET\u2019s topic modeling toolkit has been used by Walker et al. 
(2010) to test the effects of noisy optical character recognition (OCR) data on LDA. It has been used by Nelson (2010) to mine topics from the Civil War era newspaper Dispatch, and it has also been used by Blevins (2010) to examine general topics and to identify emotional moments from Martha Ballards Diary. 3 Dataset Our sample data comes from a collection of digitized historical newspapers, consisting of newspapers published in Texas from 1829 to 2008. Issues are segmented by pages with continuous text containing articles and advertisements. Table 2 provides more information about the dataset. 2 http://mallet.cs.umass.edu/ 1 http://americanpast.richmond.edu/dispatch/ 3 http://historying.org/2010/04/01/ 47 Property Number of titles Number of years Number of issues Number of pages Number of tokens 114 180 32,745 232,567 816,190,453 Table 2: Properties of the newspaper collection 3.1 Sample Years and Categories From the wide range available, we sampled several historically significant dates in order to evaluate topic modeling. These dates were chosen for their unique characteristics (detailed below), which made it possible for a professional historian to examine and evaluate the relevancy of the results. These are the subcategories we chose as samples: \u2022 Newspapers from 1865-1901: During this period, Texans rebuilt their society in the aftermath of the American Civil War. With the abolition of slavery in 1865, Texans (both black and white) looked to rebuild their post-war economy by investing heavily in cotton production throughout the state. Cotton was considered a safe investment, and so Texans produced enough during this period to make Texas the largest cotton producer in the United States by 1901. Yet overproduction during that same period impoverished Texas farmers by driving down the market price for cotton, and thus a large percentage went bankrupt and lost their lands (over 50 percent by 1900). As a result, angry cotton farmers in Texas during the 1890s joined a new political party, the Populists, whose goal was to use the national government to improve the economic conditions of farmers. This effort failed by 1896, although it represented one of the largest third-party political revolts in American history. This period, then, was dominated by the rise of cotton as the foundation of the Texas economy, the financial failures of Texas farmers, and their unsuccessful political protests of the 1890s as cotton bankrupted people across the state. These are the issues we would expect to emerge as important topics from newspapers in this category. This dataset consists of 52,555 pages over 5,902 issues. \u2022 Newspapers from 1892: This was the year of the formation of the Populist Party, which a large portion of Texas farmers joined for the U. S. presidential election of 1892. The Populists sought to have the U. S. federal government become actively involved in regulating the economy in places like Texas (something never done before) in order to prevent cotton farmers from going further into debt. In the 1892 election, the Populists did surprisingly well (garnering about 10 percent of the vote nationally) and won a full 23 percent of the vote in Texas. This dataset consists of 1,303 pages over 223 issue"}
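To make the topic-modeling workflow concrete, here is a small sketch of fitting LDA on tokenized newspaper pages. The paper used MALLET's parallel SparseLDA; gensim is used here only as a readily scriptable stand-in, and the two toy documents are hypothetical.

```python
from gensim import corpora, models

# Hypothetical cleaned, tokenized newspaper pages.
pages = [["cotton", "price", "market", "farmer", "crop"],
         ["election", "party", "populist", "vote", "government"]]

dictionary = corpora.Dictionary(pages)
bow_corpus = [dictionary.doc2bow(page) for page in pages]

# Fit LDA and print the top words per topic (cf. Table 1 in the text).
lda = models.LdaModel(bow_corpus, num_topics=2, id2word=dictionary,
                      passes=10, random_state=42)
for topic_id, words in lda.print_topics(num_words=5):
    print(topic_id, words)
```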
{"_id": "5350676fae09092b42731448acae3469cba8919c", "title": "Building an Intelligent Assistant for Digital Forensics", "text": "Software tools designed for disk analysis play a critical role today in digital forensics investigations. However, these digital forensics tools are often difficult to use, usually task specific, and generally require professionally trained users with IT backgrounds. The relevant tools are also often open source requiring additional technical knowledge and proper configuration. This makes it difficult for investigators without some computer science background to easily conduct the needed disk analysis. In this dissertation, we present AUDIT, a novel automated disk investigation toolkit that supports investigations conducted by non-expert (in IT and disk technology) and expert investigators. Our system design and implementation of AUDIT intelligently integrates open source tools and guides non-IT professionals while requiring minimal technical knowledge about the disk structures and file systems of the target disk image. We also present a new hierarchical disk investigation model which leads AUDIT to systematically examine the disk in its totality based on its physical and logical structures. AUDIT\u2019s capabilities as an intelligent digital assistant are evaluated through a series of experiments comparing it with a human investigator as well as against standard benchmark disk images."}
{"_id": "03aa3c66aad11069e79d73108a92eeef3a43b40a", "title": "ELDEN: Improved Entity Linking Using Densified Knowledge Graphs", "text": "Entity Linking (EL) systems aim to automatically map mentions of an entity in text to the corresponding entity in a Knowledge Graph (KG). Degree of connectivity of an entity in the KG directly affects an EL system\u2019s ability to correctly link mentions in text to the entity in KG. This causes many EL systems to perform well for entities well connected to other entities in KG, bringing into focus the role of KG density in EL. In this paper, we propose Entity Linking using Densified Knowledge Graphs (ELDEN). ELDEN is an EL system which first densifies the KG with co-occurrence statistics from a large text corpus, and then uses the densified KG to train entity embeddings. Entity similarity measured using these trained entity embeddings result in improved EL. ELDEN outperforms stateof-the-art EL system on benchmark datasets. Due to such densification, ELDEN performs well for sparsely connected entities in the KG too. ELDEN\u2019s approach is simple, yet effective. We have made ELDEN\u2019s code and data publicly available."}
{"_id": "ad5a08146e1b811369cf3f3c629de6157784ee58", "title": "What Do Stroke Patients Look for in Game-Based Rehabilitation", "text": "Stroke is one of the most common causes of physical disability, and early, intensive, and repetitive rehabilitation exercises are crucial to the recovery of stroke survivors. Unfortunately, research shows that only one third of stroke patients actually perform recommended exercises at home, because of the repetitive and mundane nature of conventional rehabilitation exercises. Thus, to motivate stroke survivors to engage in monotonous rehabilitation is a significant issue in the therapy process. Game-based rehabilitation systems have the potential to encourage patients continuing rehabilitation exercises at home. However, these systems are still rarely adopted at patients' places. Discovering and eliminating the obstacles in promoting game-based rehabilitation at home is therefore essential. For this purpose, we conducted a study to collect and analyze the opinions and expectations of stroke patients and clinical therapists. The study is composed of 2 parts: Rehab-preference survey - interviews to both patients and therapists to understand the current practices, challenges, and expectations on game-based rehabilitation systems; and Rehab-compatibility survey - a gaming experiment with therapists to elaborate what commercial games are compatible with rehabilitation. The study is conducted with 30 outpatients with stroke and 19 occupational therapists from 2 rehabilitation centers in Taiwan. Our surveys show that game-based rehabilitation systems can turn the rehabilitation exercises more appealing and provide personalized motivation for various stroke patients. Patients prefer to perform rehabilitation exercises with more diverse and fun games, and need cost-effective rehabilitation systems, which are often built on commodity hardware. Our study also sheds light on incorporating the existing design-for-fun games into rehabilitation system. We envision the results are helpful in developing a platform which enables rehab-compatible (i.e., existing, appropriately selected) games to be operated on commodity hardware and brings cost-effective rehabilitation systems to more and more patients' home for long-term recovery."}
{"_id": "47fdd1579f732dd6389f9342027560e385853180", "title": "Deep Sparse Subspace Clustering", "text": "In this paper, we present a deep extension of Sparse Subspace Clustering, termed Deep Sparse Subspace Clustering (DSSC). Regularized by the unit sphere distribution assumption for the learned deep features, DSSC can infer a new data affinity matrix by simultaneously satisfying the sparsity principle of SSC and the nonlinearity given by neural networks. One of the appealing advantages brought by DSSC is: when original real-world data do not meet the class-specific linear subspace distribution assumption, DSSC can employ neural networks to make the assumption valid with its hierarchical nonlinear transformations. To the best of our knowledge, this is among the first deep learning based subspace clustering methods. Extensive experiments are conducted on four real-world datasets to show the proposed DSSC is significantly superior to 12 existing methods for subspace clustering."}
{"_id": "505234f56be43637a82761b2a7c3bd7c46f1e06c", "title": "Semi-Supervised Learning for Relation Extraction", "text": "This paper proposes a semi-supervised learning method for relation extraction. Given a small amount of labeled data and a large amount of unlabeled data, it first bootstraps a moderate number of weighted support vectors via SVM through a co-training procedure with random feature projection and then applies a label propagation (LP) algorithm via the bootstrapped support vectors. Evaluation on the ACE RDC 2003 corpus shows that our method outperforms the normal LP algorithm via all the available labeled data without SVM bootstrapping. Moreover, our method can largely reduce the computational burden. This suggests that our proposed method can integrate the advantages of both SVM bootstrapping and label propagation."}
{"_id": "3fbc45152f20403266b02c4c2adab26fb367522d", "title": "Sequence-to-Sequence RNNs for Text Summarization", "text": "In this work, we cast text summarization as a sequence-to-sequence problem and apply the attentional encoder-decoder RNN that has been shown to be successful for Machine Translation (Bahdanau et al. (2014)). Our experiments show that the proposed architecture significantly outperforms the state-of-the art model of Rush et al. (2015) on the Gigaword dataset without any additional tuning. We also propose additional extensions to the standard architecture, which we show contribute to further improvement in performance."}
{"_id": "7eb1dca76c46bfbb4bd34c189c98472706ec31eb", "title": "Research on the Forecast of Shared Bicycle Rental Demand Based on Spark Machine Learning Framework", "text": "In recent years, the shared bicycle project has developed rapidly. In use of shared bicycles, a great deal of user riding information is recorded. How to extract effective knowledge from these vast amounts of information, how to use this knowledge to improve the shared bicycle system, and how to improve the user experience, are problems to solve. Citi Bike is selected as the research target. Data on Citi Bike\u2019s user historical behavior, weather information, and holiday information are collected from three different sources, and converted into appropriate formats for model training. Spark MLlib is used to construct three different predictive models, advantages and disadvantages of different forecasting models are compared. Some techniques are used to enhance the accuracy of random forests model. The experimental results show that the root mean square error RMSE of the final model is reduced from 305.458 to 243.346."}
{"_id": "cbbf72d487f5b645d50d7d3d94b264f6a881c96f", "title": "W-Band CMOS on-chip energy harvester and rectenna", "text": "This paper presents the first fully on-chip integrated energy harvester and rectenna at the W-Band in 65nm CMOS technology. The designs are based on a 1-stage Dickson voltage multiplier. The rectenna consists of an on-chip integrated dipole antenna with a reflector underneath the substrate to enhance the directivity and realized gain. The energy harvester and rectenna achieve a power conversion efficiency of 10% and 2% respectively at 94GHz. The stand-alone harvester occupies only 0.0945mm2 including pads, while the fully integrated rectenna occupies a minimal chip area of 0.48mm2."}
{"_id": "915f50f7885dc47fe5a29cc64ee9902a6994b4a4", "title": "Spurious group differences due to head motion in a diffusion MRI study", "text": "Diffusion-weighted MRI (DW-MRI) has become a popular imaging modality for probing the microstructural properties of white matter and comparing them between populations in vivo. However, the contrast in DW-MRI arises from the microscopic random motion of water molecules in brain tissues, which makes it particularly sensitive to macroscopic head motion. Although this has been known since the introduction of DW-MRI, most studies that use this modality for group comparisons do not report measures of head motion for each group and rely on registration-based correction methods that cannot eliminate the full effects of head motion on the DW-MRI contrast. In this work we use data from children with autism and typically developing children to investigate the effects of head motion on differences in anisotropy and diffusivity measures between groups. We show that group differences in head motion can induce group differences in DW-MRI measures, and that this is the case even when comparing groups that include control subjects only, where no anisotropy or diffusivity differences are expected. We also show that such effects can be more prominent in some white-matter pathways than others, and that they can be ameliorated by including motion as a nuisance regressor in the analyses. Our results demonstrate the importance of taking head motion into account in any population study where one group might exhibit more head motion than the other."}
{"_id": "81918e9416c1f9a910c10e6059a909fb81c22000", "title": "Tap water versus sterile saline solution in the colonisation of skin wounds.", "text": "Irrigating wounds with tap water does not increase colonisation, but controlled studies are required for further evidence. Microbial colonisation was assessed in skin wounds, before and after irrigation with tap water, and was compared with irrigation using 0\u00b79% sodium chloride sterile solution. The study included 120 subjects with chronic, traumatic, vascular, pressure or neuropathic wounds. A total of 60 wounds were randomly assigned to be irrigated with tap water (tap water group) and another 60 to be irrigated with 0\u00b79% sodium chloride sterile solution (saline group), at a pressure of 0\u00b746-0\u00b754 PSI. Samples were collected from the centre of each wound using Levine's technique, before and after irrigation, and cultivated in thioglycollate, hypertonic mannitol agar, eosin methylene blue (EMB) agar, blood agar and Sabouraud agar at 37\u00b0C for 72 hours. There was concordance (kappa test) and discordance (McNemar test) regarding the count of positive and/or negative samples before and after irrigation in each group. The proportion of reduction of positive samples was similar for both groups in all cultures. Colony-forming unit count before and after irrigation was similar in both groups and in all cultures, except for the culture in hypertonic mannitol agar from the tap water group, for which the count was lower after irrigation (Wilcoxon z = 2\u00b705, P = 0\u00b7041). It is concluded that skin wound irrigation with tap water leads to further reduction of Gram-positive bacteria compared with 0\u00b79% sodium chloride sterile solution, with no difference in colonisation of haemolytic bacteria, Gram-negative bacteria and fungi."}
{"_id": "99ce379cff98f5d098f3eb0492191291fd712d7b", "title": "Analysis of level design 'push & pull' within 21 games", "text": "This paper investigates the differences between 3D level designs in 21 popular games. We have developed a framework to analyze 3D level designs based on patterns extracted from level designers and game play sessions. We then use these patterns to analyze several game play sessions. Results of this analysis reveal methods by which designers push and pull players through levels. We discuss an analysis of these patterns in terms of three level affordance configurations, combat, environmental resistance, and mixed goals in 21 different games using one walkthrough play session per game. By looking at the variety of games, we can further explore the level similarities and differences between games."}
{"_id": "e9bb3ca7558ea0ce9ca7d5c965e4179668d69d0a", "title": "Spinal Instability Neoplastic Score (SINS): Reliability Among Spine Fellows and Resident Physicians in Orthopedic Surgery and Neurosurgery", "text": "Study Design\nReliability analysis.\n\n\nObjectives\nThe Spinal Instability Neoplastic Score (SINS) was developed for assessing patients with spinal neoplasia. It identifies patients who may benefit from surgical consultation or intervention. It also acts as a prognostic tool for surgical decision making. Reliability of SINS has been established for spine surgeons, radiologists, and radiation oncologists, but not yet among spine surgery trainees. The purpose of our study is to determine the reliability of SINS among spine residents and fellows, and its role as an educational tool.\n\n\nMethods\nTwenty-three residents and 2 spine fellows independently scored 30 de-identified spine tumor cases on 2 occasions, at least 6 weeks apart. Intraclass correlation coefficient (ICC) measured interobserver and intraobserver agreement for total SINS scores. Fleiss's kappa and Cohen's kappa analysis evaluated interobserver and intraobserver agreement of 6 component subscores (location, pain, bone lesion quality, spinal alignment, vertebral body collapse, and posterolateral involvement of spinal elements).\n\n\nResults\nTotal SINS scores showed near perfect interobserver (0.990) and intraobserver (0.907) agreement. Fleiss's kappa statistics revealed near perfect agreement for location; substantial for pain; moderate for alignment, vertebral body collapse, and posterolateral involvement; and fair for bone quality (0.948, 0.739, 0.427, 0.550, 0.435, and 0.382). Cohen's kappa statistics revealed near perfect agreement for location and pain, substantial for alignment and vertebral body collapse, and moderate for bone quality and posterolateral involvement (0.954, 0.814, 0.610, 0.671, 0.576, and 0.561, respectively).\n\n\nConclusions\nThe SINS is a reliable and valuable educational tool for spine fellows and residents learning to judge spinal instability."}
{"_id": "5bf1debebe42befa82fbf51d5f75d4f6a756553d", "title": "MACA: a modified author co-citation analysis method combined with general descriptive metadata of citations", "text": "Author co-citation analysis (ACA) is a well-known and frequently-used method to exhibit the academic researchers and the professional field sketch according to co-citation relationships between authors in an article set. However, visualizing subtle examination is limited because only author co-citation information is required in ACA. The proposed method, called modified author co-citation analysis (MACA), exploits author co-citation relationship, citations published time, citations published carriers, and citations keywords, to construct MACA-based co-citation matrices. According to the results of our experiments: (1) MACA shows a good clustering result with more delicacy and more clearness; (2) more information involved in co-citation analysis performs good visual acuity; (3) in visualization of co-citation network produced by MACA, the points in different categories have far more distance, and the points indicating authors in the same category are closer together. As a result, the proposed MACA is found that more detailed and subtle information of a knowledge domain analyzed can be obtained, compared to ACA."}
{"_id": "0ee392ad467b967c0a32d8ecb19fc20f7c1d62fe", "title": "Probabilistic Inference Using Markov Chain Monte Carlo Methods", "text": "Probabilistic inference is an attractive approach to uncertain reasoning and em pirical learning in arti cial intelligence Computational di culties arise however because probabilistic models with the necessary realism and exibility lead to com plex distributions over high dimensional spaces Related problems in other elds have been tackled using Monte Carlo methods based on sampling using Markov chains providing a rich array of techniques that can be applied to problems in arti cial intelligence The Metropolis algorithm has been used to solve di cult problems in statistical physics for over forty years and in the last few years the related method of Gibbs sampling has been applied to problems of statistical inference Concurrently an alternative method for solving problems in statistical physics by means of dynamical simulation has been developed as well and has recently been uni ed with the Metropolis algorithm to produce the hybrid Monte Carlo method In computer science Markov chain sampling is the basis of the heuristic optimization technique of simulated annealing and has recently been used in randomized algorithms for approximate counting of large sets In this review I outline the role of probabilistic inference in arti cial intelligence present the theory of Markov chains and describe various Markov chain Monte Carlo algorithms along with a number of supporting techniques I try to present a comprehensive picture of the range of methods that have been developed including techniques from the varied literature that have not yet seen wide application in arti cial intelligence but which appear relevant As illustrative examples I use the problems of probabilistic inference in expert systems discovery of latent classes from data and Bayesian learning for neural networks"}
{"_id": "30667550901b9420e02c7d61cdf8fa7d5db207af", "title": "Bayesian Learning for Neural Networks", "text": ""}
{"_id": "f707a81a278d1598cd0a4493ba73f22dcdf90639", "title": "Generalization by Weight-Elimination with Application to Forecasting", "text": "Inspired by the information theoretic idea of minimum description length, we add a term to the back propagation cost function that penalizes network complexity. We give the details of the procedure, called weight-elimination, describe its dynamics, and clarify the meaning of the parameters involved. From a Bayesian perspective, the complexity term can be usefully interpreted as an assumption about prior distribution of the weights. We use this procedure to predict the sunspot time series and the notoriously noisy series of currency exchange rates."}
{"_id": "b23848d181bebd47d4bae1c782534424085bbadf", "title": "A case study of user participation in the information systems development process", "text": "There are many in the information systems discipline who believe that user participation is necessary for successful systems development. However, it has been suggested that this belief is neither grounded in theory nor substantiated by research data. This may indicate that researchers have not addressed fully the underlying complexity of the concept. If so, this is indicative of a deficiency in understanding user participation in information systems development as it occurs in organizations. In order to enhance the extant understanding of participative information systems development, the present study adopts a qualitative, case-based approach to research so as to provide an in-depth description of the complex social nature of the phenomenon as manifested in one organization. The results of the study illustrate that a high degree of direct and indirect user participation did not guarantee the successful implementation and use of information systems in the organization studied. Such participatory development practices did, however, result in the development of systems that adequately captured user requirements and hence satisfied user informational needs. It was clear that despite the perceived negative impact, which the new systems would have on user work-related roles and activities, the existence of an organization-wide participative policy, and associated participative structures, coupled with a favorable organization culture, generated a participatory development climate that was conducive to the successful development of information systems, while not guaranteeing it. That said, the central conclusion of this study was that user dissatisfaction with developed systems centered on the poor management of change in the organization."}
{"_id": "fc3def67ba2c5956f7d4066df9f1d92a9e65bfd7", "title": "Spatial Data Mining Features between General Data Mining", "text": "Data mining is usually defined as searching,analyzing and sifting through large amounts of data to find relationships, patterns, or any significant statistical correlation. Spatial Data Mining (SDM) is the process of discovering interesting, useful, non-trivial patterns information or knowledge from large spatial datasets.Extracting interesting and useful patterns from spatial datasets must be more difficult than extracting the corresponding patterns from traditional numeric or categorical data due to the complexity of spatial data types, spatial relationships, and spatial auto-correlation.Emphasized overviewed the unique features that distinguish spatial data mining from classical Data Mining, and presents major accomplishments of spatial Data Mining research. Extracting interesting patterns and rules from spatial datasets, such as remotely sensed imagery and associated ground data, can be of importance in precision agriculture, community planning,resource discovery and other areas."}
{"_id": "4a7478652c1b4f27d1bbf18ea0f1cf47fc30779e", "title": "Continuous Integration and Its Tools", "text": "Continuous integration has been around for a while now, but the habits it suggests are far from common practice. Automated builds, a thorough test suite, and committing to the mainline branch every day sound simple at first, but they require a responsible team to implement and constant care. What starts with improved tooling can be a catalyst for long-lasting change in your company's shipping culture. Continuous integration is more than a set of practices, it's a mindset that has one thing in mind: increasing customer value. The Web extra at http://youtu.be/tDl_cHfrJZo is an audio podcast of the Tools of the Trade column discusses how continuous integration is more than a set of practices, it's a mindset that has one thing in mind: increasing customer value."}
{"_id": "1d11abe1ec659bf2a0b078f9e45796889e0c0740", "title": "The evaluation of children in the primary care setting when sexual abuse is suspected.", "text": "This clinical report updates a 2005 report from the American Academy of Pediatrics on the evaluation of sexual abuse in children. The medical assessment of suspected child sexual abuse should include obtaining a history, performing a physical examination, and obtaining appropriate laboratory tests. The role of the physician includes determining the need to report suspected sexual abuse; assessing the physical, emotional, and behavioral consequences of sexual abuse; providing information to parents about how to support their child; and coordinating with other professionals to provide comprehensive treatment and follow-up of children exposed to child sexual abuse."}
{"_id": "60ade26a56f8f34d18b6e3ccfd0cd095b72e3013", "title": "Malaria Models with Spatial Effects", "text": "Malaria, a vector-borne infectious disease caused by the Plasmodium parasite, is still endemic in more than 100 countries in Africa, Southeast Asia, the Eastern Mediterranean, Western Pacific, Americas, and Europe. In 2010 there were about 219 million malaria cases, with an estimated 660,000 deaths, mostly children under 5 in sub-Saharan Africa (WHO 2012). The malaria parasite is transmitted to humans via the bites of infected female mosquitoes of the genus Anopheles. Mosquitoes can become infected when they feed on the blood of infected humans. Thus the infection goes back and forth between humans and mosquitoes. Mathematical modeling of malaria transmission has a long history. It has helped us to understand transmission mechanism, design and improve control measures, forecast disease outbreaks, etc. The so-called Ross\u2013Macdonald model"}
{"_id": "2819ca354f01bc60bbf0ca13611e0ad66487450a", "title": "Contextual Video Recommendation by Multimodal Relevance and User Feedback", "text": "With Internet delivery of video content surging to an unprecedented level, video recommendation, which suggests relevant videos to targeted users according to their historical and current viewings or preferences, has become one of most pervasive online video services. This article presents a novel contextual video recommendation system, called VideoReach, based on multimodal content relevance and user feedback. We consider an online video usually consists of different modalities (i.e., visual and audio track, as well as associated texts such as query, keywords, and surrounding text). Therefore, the recommended videos should be relevant to current viewing in terms of multimodal relevance. We also consider that different parts of videos are with different degrees of interest to a user, as well as different features and modalities have different contributions to the overall relevance. As a result, the recommended videos should also be relevant to current users in terms of user feedback (i.e., user click-through). We then design a unified framework for VideoReach which can seamlessly integrate both multimodal relevance and user feedback by relevance feedback and attention fusion. VideoReach represents one of the first attempts toward contextual recommendation driven by video content and user click-through, without assuming a sufficient collection of user profiles available. We conducted experiments over a large-scale real-world video data and reported the effectiveness of VideoReach."}
{"_id": "805115059488fe0e5e1e3376f95ad2944bbff76d", "title": "CONSTRAINTS AND APPROACHES FOR DISTRIBUTED SENSOR NETWORK SECURITY", "text": "Executive Summary Confidentiality, integrity, and authentication services are critical to preventing an adversary from compromising the security of a distributed sensor network. Key management is likewise critical to establishing the keys necessary to provide this protection. However, providing key management is difficult due to the ad hoc nature, intermittent connectivity, and resource limitations of the sensor network environment. As part of the SensIT program, NAI Labs is addressing this problem by identifying and developing cryptographic protocols and mechanisms that efficiently provide key management security support services. This document describes our sensor network constraints and key management approaches research for FY 2000. As a first step, NAI Labs has researched battlefield sensor and sensor network technology and the unique communications environment in which it will be deployed. We have identified the requirements specific to our problem of providing key management for confidentiality and group-level authentication. We have also identified constraints, particularly energy consumption, that render this problem difficult. NAI Labs has developed novel key management protocols specifically designed for the distributed sensor network environment, including Identity-Based Symmetric Keying and Rich Uncle. We have analyzed both existing and NAI Labs-developed keying protocols for their suitability at satisfying identified requirements while overcoming battlefield energy constraints. Our research has focused heavily on key management energy consumption, evaluating protocols based on total system, average sensor node, and individual sensor node energy consumption. We examined a number of secret-key-based protocols, determining some to be suitable for sensor networks but all of the protocols have flexibility limitations. Secret-key-based protocols are generally energy-efficient, using encryption and hashing algorithms that consume relatively little energy. Security of secret-key-based protocols is generally determined by the granularity of established keys, which vary widely for the protocols described herein. During our examination of these protocols we noted that some of these protocols are not sufficiently flexible for use in battlefield sensor network, since they cannot efficiently handle unanticipated additions of sensor nodes to the network. Our Identity-Based Symmetric Keying protocol and the less efficient Symmetric Key Certificate Based Protocol are well suited for certain sensor networks, establishing granular keys while consuming relatively little energy. However, all of the secure secret-key-based protocols use special nodes that operate as Key Distribution Centers (or Translators). The sensor nodes communicate with these centers exchanging information as part of the key establishment process. Since these special nodes are expected to make up less than 1% of the sensor \u2026"}
{"_id": "d7d901a9e311e6c434a8a39566d1ed2845546f5b", "title": "Evidences for the Anti-panic Actions of Cannabidiol", "text": "BACKGROUND\nPanic disorder (PD) is a disabling psychiatry condition that affects approximately 5% of the worldwide population. Currently, long-term selective serotonin reuptake inhibitors (SSRIs) are the first-line treatment for PD; however, the common side-effect profiles and drug interactions may provoke patients to abandon the treatment, leading to PD symptoms relapse. Cannabidiol (CBD) is the major non-psychotomimetic constituent of the Cannabis sativa plant with antianxiety properties that has been suggested as an alternative for treating anxiety disorders. The aim of the present review was to discuss the effects and mechanisms involved in the putative anti-panic effects of CBD.\n\n\nMETHODS\nelectronic database was used as source of the studies selected selected based on the studies found by crossing the following keywords: cannabidiol and panic disorder; canabidiol and anxiety, cannabidiol and 5-HT1A receptor).\n\n\nRESULTS\nIn the present review, we included both experimental laboratory animal and human studies that have investigated the putative anti-panic properties of CBD. Taken together, the studies assessed clearly suggest an anxiolytic-like effect of CBD in both animal models and healthy volunteers.\n\n\nCONCLUSIONS\nCBD seems to be a promising drug for the treatment of PD. However, novel clinical trials involving patients with the PD diagnosis are clearly needed to clarify the specific mechanism of action of CBD and the safe and ideal therapeutic doses of this compound."}
{"_id": "adf29144f925c58d55781307718080f62bc97de3", "title": "Radiation Search Operations using Scene Understanding with Autonomous UAV and UGV", "text": "Autonomously searching for hazardous radiation sources requires the ability of the aerial and ground systems to understand the scene they are scouting. In this paper, we present systems, algorithms, and experiments to perform radiation search using unmanned aerial vehicles (UAV) and unmanned ground vehicles (UGV) by employing semantic scene segmentation. The aerial data is used to identify radiological points of interest, generate an orthophoto along with a digital elevation model (DEM) of the scene, and perform semantic segmentation to assign a category (e.g . road, grass) to each pixel in the orthophoto. We perform semantic segmentation by training a model on a dataset of images we collected and annotated, using the model to perform inference on images of the test area unseen to the model, and then refining the results with the DEM to better reason about category predictions at each pixel. We then use all of these outputs to plan a path for a UGV carrying a LiDAR to map the environment and avoid obstacles not present during the flight, and a radiation detector to collect more precise radiation measurements from the ground. Results of the analysis for each scenario tested favorably. We also note that our approach is general and has the potential to work for a variety of different sensing tasks."}
{"_id": "7ef66d8adc7322218a2420066d27e154535a2460", "title": "Twitter games: how successful spammers pick targets", "text": "Online social networks, such as Twitter, have soared in popularity and in turn have become attractive targets of spam. In fact, spammers have evolved their strategies to stay ahead of Twitter's anti-spam measures in this short period of time. In this paper, we investigate the strategies Twitter spammers employ to reach relevant target audiences. Due to their targeted approaches to send spam, we see evidence of a large number of the spam accounts forming relationships with other Twitter users, thereby becoming deeply embedded in the social network.\n We analyze nearly 20 million tweets from about 7 million Twitter accounts over a period of five days. We identify a set of 14,230 spam accounts that manage to live longer than the other 73% of other spam accounts in our data set. We characterize their behavior, types of tweets they use, and how they target their audience. We find that though spam campaigns changed little from a recent work by Thomas et al., spammer strategies evolved much in the same short time span, causing us to sometimes find contradictory spammer behavior from what was noted in Thomas et al.'s work. Specifically, we identify four major strategies used by 2/3rd of the spammers in our data. The most popular of these was one where spammers targeted their own followers. The availability of various kinds of services that help garner followers only increases the popularity of this strategy. The evolution in spammer strategies we observed in our work suggests that studies like ours should be undertaken frequently to keep up with spammer evolution."}
{"_id": "f9455bf42f61eb8c0ac14e17539366851f64c790", "title": "Structure Based User Identification across Social Networks", "text": "Identification of anonymous identical users of cross-platforms refers to the recognition of the accounts belonging to the same individual among multiple Social Network (SN) platforms. Evidently, cross-platform exploration may help solve many problems in social computing, in both theory and practice. However, it is still an intractable problem due to the fragmentation, inconsistency, and disruption of the accessible information among SNs. Different from the efforts implemented on user profiles and users\u2019 content, many studies have noticed the accessibility and reliability of network structure in most of the SNs for addressing this issue. Although substantial achievements have been made, most of the current network structure-based solutions, requiring prior knowledge of some given identified users, are supervised or semi-supervised. It is laborious to label the prior knowledge manually in some scenarios where prior knowledge is hard to obtain. Noticing that friend relationships are reliable and consistent in different SNs, we proposed an unsupervised scheme, termed Friend Relationship-based User Identification algorithm without Prior knowledge (FRUI-P). The FRUI-P first extracts the friend feature of each user in an SN into friend feature vector, and then calculates the similarities of all the candidate identical users between two SNs. Finally, a one-to-one map scheme is developed to identify the users based on the similarities. Moreover, FRUI-P is proved to be efficient theoretically. Results of extensive experiments demonstrated that FRUI-P performs much better than current state-of-art network structure-based algorithm without prior knowledge. Due to its high precision, FRUI-P can additionally be utilized to generate prior knowledge for supervised and semi-supervised schemes. In applications, the unsupervised anonymous identical user identification method accommodates more scenarios where the seed users are unobtainable."}
{"_id": "1217c950ba6f73c3e4cbbfb4ed4546f8d6bf914b", "title": "libtissue - implementing innate immunity", "text": "In a previous paper the authors argued the case for incorporating ideas from innate immunity into artificial immune systems (AISs) and presented an outline for a conceptual framework for such systems. A number of key general properties observed in the biological innate and adaptive immune systems were highlighted, and how such properties might be instantiated in artificial systems was discussed in detail. The next logical step is to take these ideas and build a software system with which AISs with these properties can be implemented and experimentally evaluated. This paper reports on the results of that step - the libtissue system."}
{"_id": "1797a323f1024ba17389befe19824c8d443a0940", "title": "A multi-centre evaluation of eleven clinically feasible brain PET/MRI attenuation correction techniques using a large cohort of patients", "text": "AIM\nTo accurately quantify the radioactivity concentration measured by PET, emission data need to be corrected for photon attenuation; however, the MRI signal cannot easily be converted into attenuation values, making attenuation correction (AC) in PET/MRI challenging. In order to further improve the current vendor-implemented MR-AC methods for absolute quantification, a number of prototype methods have been proposed in the literature. These can be categorized into three types: template/atlas-based, segmentation-based, and reconstruction-based. These proposed methods in general demonstrated improvements compared to vendor-implemented AC, and many studies report deviations in PET uptake after AC of only a few percent from a gold standard CT-AC. Using a unified quantitative evaluation with identical metrics, subject cohort, and common CT-based reference, the aims of this study were to evaluate a selection of novel methods proposed in the literature, and identify the ones suitable for clinical use.\n\n\nMETHODS\nIn total, 11 AC methods were evaluated: two vendor-implemented (MR-ACDIXON and MR-ACUTE), five based on template/atlas information (MR-ACSEGBONE (Koesters et al., 2016), MR-ACONTARIO (Anazodo et al., 2014), MR-ACBOSTON (Izquierdo-Garcia et al., 2014), MR-ACUCL (Burgos et al., 2014), and MR-ACMAXPROB (Merida et al., 2015)), one based on simultaneous reconstruction of attenuation and emission (MR-ACMLAA (Benoit et al., 2015)), and three based on image-segmentation (MR-ACMUNICH (Cabello et al., 2015), MR-ACCAR-RiDR (Juttukonda et al., 2015), and MR-ACRESOLUTE (Ladefoged et al., 2015)). We selected 359 subjects who were scanned using one of the following radiotracers: [18F]FDG (210), [11C]PiB (51), and [18F]florbetapir (98). The comparison to AC with a gold standard CT was performed both globally and regionally, with a special focus on robustness and outlier analysis.\n\n\nRESULTS\nThe average performance in PET tracer uptake was within \u00b15% of CT for all of the proposed methods, with the average\u00b1SD global percentage bias in PET FDG uptake for each method being: MR-ACDIXON (-11.3\u00b13.5)%, MR-ACUTE (-5.7\u00b12.0)%, MR-ACONTARIO (-4.3\u00b13.6)%, MR-ACMUNICH (3.7\u00b12.1)%, MR-ACMLAA (-1.9\u00b12.6)%, MR-ACSEGBONE (-1.7\u00b13.6)%, MR-ACUCL (0.8\u00b11.2)%, MR-ACCAR-RiDR (-0.4\u00b11.9)%, MR-ACMAXPROB (-0.4\u00b11.6)%, MR-ACBOSTON (-0.3\u00b11.8)%, and MR-ACRESOLUTE (0.3\u00b11.7)%, ordered by average bias. The overall best performing methods (MR-ACBOSTON, MR-ACMAXPROB, MR-ACRESOLUTE and MR-ACUCL, ordered alphabetically) showed regional average errors within \u00b13% of PET with CT-AC in all regions of the brain with FDG, and the same four methods, as well as MR-ACCAR-RiDR, showed that for 95% of the patients, 95% of brain voxels had an uptake that deviated by less than 15% from the reference. Comparable performance was obtained with PiB and florbetapir.\n\n\nCONCLUSIONS\nAll of the proposed novel methods have an average global performance within likely acceptable limits (\u00b15% of CT-based reference), and the main difference among the methods was found in the robustness, outlier analysis, and clinical feasibility. Overall, the best performing methods were MR-ACBOSTON, MR-ACMAXPROB, MR-ACRESOLUTE and MR-ACUCL, ordered alphabetically. 
These methods all minimized the number of outliers, standard deviation, and average global and local error. The methods MR-ACMUNICH and MR-ACCAR-RiDR were both within acceptable quantitative limits, so these methods should be considered if processing time is a factor. The method MR-ACSEGBONE also demonstrates promising results, and performs well within the likely acceptable quantitative limits. For clinical routine scans where processing time can be a key factor, this vendor-provided solution currently outperforms most methods. With the performance of the methods presented here, it may be concluded that the challenge of improving the accuracy of MR-AC in adult brains with normal anatomy has been solved to a quantitatively acceptable degree, which is smaller than the quantification reproducibility in PET imaging."}
{"_id": "0ce9ad941f6da90068759344abf07a4e15cf4ccf", "title": "A System for Video Surveillance and Monitoring", "text": "The Robotics Institute at Carnegie Mellon University (CMU) and the Sarnoff Corporation are developing a system for autonomous Video Surveillance and Monitoring. The technical objective is to use multiple, cooperative video sensors to provide continuous coverage of people and vehicles in cluttered environments. This paper presents an overview of the system and significant results achieved to date."}
{"_id": "b60a86ee106946f74313535c809209a743080f30", "title": "Web Accessibility Evaluation", "text": "Web accessibility evaluation is a broad field that combines different disciplines and skills. It encompasses technical aspects such as the assessment of conformance to standards and guidelines, as well as non-technical aspects such as the involvement of end-users during the evaluation process. Since Web accessibility is a qualitative and experiential measure rather than a quantitative and concrete property, the evaluation approaches need to include different techniques andmaintain flexibility and adaptability toward different situations. At the same time, evaluation approaches need to be robust and reliable so that they can be effective. This chapter explores some of the techniques and strategies to evaluate the accessibility of Web content for people with disabilities. It highlights some of the common approaches to carry out andmanage evaluation processes rather than list out individual steps for evaluating Web content. This chapter also provides an outlook to some of the future directions in which the field seems to be heading, and outlines some opportunities for research and"}
{"_id": "475c278c88fa76db47d78d620863d5d9366ab1ef", "title": "Phish Phinder: A Game Design Approach to Enhance User Confidence in Mitigating Phishing Attacks", "text": "Phishing is an especially challenging cyber security threat as it does not attack computer systems, but targets the user who works on that system by relying on the vulnerability of their decision-making ability. Phishing attacks can be used to gather sensitive information from victims and can have devastating impact if they are successful in deceiving the user. Several anti-phishing tools have been designed and implemented but they have been unable to solve the problem adequately. This failure is often due to security experts overlooking the human element and ignoring their fallibility in making trust decisions online. In this paper, we present Phish Phinder, a serious game designed to enhance the user\u2019s confidence in mitigating phishing attacks by providing them with both conceptual and procedural knowledge about phishing. The user is trained through a series of gamified challenges, designed to educate them about important phishing related concepts, through an interactive user interface. Key elements of the game interface were identified through an empirical study with the aim of enhancing user interaction with the game. We also adopted several persuasive design principles while designing Phish Phinder to enhance phishing avoidance behaviour among users."}
{"_id": "463bec3d0298e96e3702e071e241e3898f76eff2", "title": "Morsel-driven parallelism: a NUMA-aware query evaluation framework for the many-core age", "text": "With modern computer architecture evolving, two problems conspire against the state-of-the-art approaches in parallel query execution: (i) to take advantage of many-cores, all query work must be distributed evenly among (soon) hundreds of threads in order to achieve good speedup, yet (ii) dividing the work evenly is difficult even with accurate data statistics due to the complexity of modern out-of-order cores. As a result, the existing approaches for plan-driven parallelism run into load balancing and context-switching bottlenecks, and therefore no longer scale. A third problem faced by many-core architectures is the decentralization of memory controllers, which leads to Non-Uniform Memory Access (NUMA). In response, we present the morsel-driven query execution framework, where scheduling becomes a fine-grained run-time task that is NUMA-aware. Morsel-driven query processing takes small fragments of input data (morsels) and schedules these to worker threads that run entire operator pipelines until the next pipeline breaker. The degree of parallelism is not baked into the plan but can elastically change during query execution, so the dispatcher can react to execution speed of different morsels but also adjust resources dynamically in response to newly arriving queries in the workload. Further, the dispatcher is aware of data locality of the NUMA-local morsels and operator state, such that the great majority of executions takes place on NUMA-local memory. Our evaluation on the TPC-H and SSB benchmarks shows extremely high absolute performance and an average speedup of over 30 with 32 cores."}
{"_id": "a112d3f6d056cc1d85f4a9902f8660fbeb898f90", "title": "Mean-shift Blob Tracking through Scale Space", "text": "The mean-shift algorithm is an efficient technique for tracking 2D blobs through an image. Although the scale of the mean-shift kernel is a crucial parameter, there is presently no clean mechanism for choosing or updating scale while tracking blobs that are changing in size. We adapt Lindeberg\u2019s theory of feature scale selection based on local maxima of differential scale-space filters to the problem of selecting kernel scale for mean-shift blob tracking. We show that a difference of Gaussian (DOG) mean-shift kernel enables efficient tracking of blobs through scale space. Using this kernel requires generalizing the mean-shift algorithm to handle images that contain negative sample weights."}
{"_id": "e3cfdcaf0eb59af379d5081c1caef9b458cd1fba", "title": "Research on Chinese text classification based on Word2vec", "text": "The set of features which the traditional feature selection algorithm of chi-square selected is not complete. This causes the low performance for the final text classification. Therefore, this paper proposes a method. The method utilizes word2vec to generate word vector to improve feature selection algorithm of the chi square. The algorithm applies the word vector generated by word2vec to the process of the traditional feature selection and uses these words to supplement the set of features as appropriate. Ultimately, the set of features obtained by this method has better discriminatory power. Because, the feature words with the better discriminatory power has the strong ability of distinguishing categories as its semantically similar words. On this base, multiple experiments have been carried out in this paper. The experimental results show that the performance of text classification can increase after extension of feature words."}
{"_id": "b20a5427d79c660fe55282da2533071629bfc533", "title": "Deep Learning Advances on Different 3D Data Representations: A Survey", "text": "3D data is a valuable asset in the field of computer vision as it provides rich information about the full geometry of sensed objects and scenes. With the recent availability of large 3D datasets and the increase in computational power, it is today possible to consider applying deep learning to learn specific tasks on 3D data such as segmentation, recognition and correspondence. Depending on the considered 3D data representation, different challenges may be foreseen in using existent deep learning architectures. In this paper, we provide a comprehensive overview of various 3D data representations highlighting the difference between Euclidean and non-Euclidean ones. We also discuss how deep learning methods are applied on each representation, analyzing the challenges to"}
{"_id": "756822ece7da9c8b78997cf9001b0ec69a9a2112", "title": "INOCULANTS OF PLANT GROWTH-PROMOTING BACTERIA FOR USE IN AGRICULTURE", "text": "An assessment of the current state of bacterial inoculants for c ontemporary agriculture in developed and developing countries is critically evaluated from the point of view of their actual status and future use. Special emphasis is given to two new concepts of inoculation, as yet unavailable commercially: (i) synthetic inoculants under development for plant-growth promoting bacteria (PGPB) (Bashan and Holguin, 1998), and (ii) inoculation by groups of associated bacteria. This review contains: A brief historical overview of bacterial inoculants; the rationale for plant inoculation with emphasis on developing countries and semiarid agriculture, and the concept and application of mixed inoculant; discussion of microbial formulation including optimization of carrier-compound characteristics, types of existing carriers for in oculants, traditional formulations, future trends in formulations using unconventional materials, encapsulated synthetic formulations, macro and micro formulations of alginate, encapsulation of beneficial bacteria using other materials, regulation and contamination of commercial inoculants, and examples of modem commercial bacterial inoculants; and a consideration of time constraints and application methods for bacterial inoculants, commercial production, marketing, and the prospects of inoculants in modern agriculture. \u00ae 1998 Elsevier Science Inc."}
{"_id": "f1737a54451e79f4c9cff1bc789fbfb0c83aee3a", "title": "Categorical Data Analysis: Away from ANOVAs (transformation or not) and towards Logit Mixed Models.", "text": "This paper identifies several serious problems with the widespread use of ANOVAs for the analysis of categorical outcome variables such as forced-choice variables, question-answer accuracy, choice in production (e.g. in syntactic priming research), et cetera. I show that even after applying the arcsine-square-root transformation to proportional data, ANOVA can yield spurious results. I discuss conceptual issues underlying these problems and alternatives provided by modern statistics. Specifically, I introduce ordinary logit models (i.e. logistic regression), which are well-suited to analyze categorical data and offer many advantages over ANOVA. Unfortunately, ordinary logit models do not include random effect modeling. To address this issue, I describe mixed logit models (Generalized Linear Mixed Models for binomially distributed outcomes, Breslow & Clayton, 1993), which combine the advantages of ordinary logit models with the ability to account for random subject and item effects in one step of analysis. Throughout the paper, I use a psycholinguistic data set to compare the different statistical methods."}
{"_id": "70bda6d1377ad5bd4e9c2902f89dc934b856fcf6", "title": "Enterprise Content Management - A Literature Review", "text": "Managing information and content on an enterprise-wide scale is challenging. Enterprise content management (ECM) can be considered as an integrated approach to information management. While this concept received much attention from practitioners, ECM research is still an emerging field of IS research. Most authors that deal with ECM claim that there is little scholarly literature available. After approximately one decade of ECM research, this paper provides an in-depth review of the body of academic research: the ECM domain, its evolution, and main topics are characterized. An established ECM research framework is adopted, refined, and explained with its associated elements and working definitions. On this basis, 68 articles are reviewed, classified, and concepts are derived. Prior research is synthesized and findings are integrated in a conceptcentric way. Further, implications for research and practice, including future trends, are drawn."}
{"_id": "254d59ab9cac1921687d2c0313b8f6fc66541dfb", "title": "A New Mura Defect Inspection Way for TFT-LCD Using Level Set Method", "text": "Mura is a typical vision defect of LCD panel, appearing as local lightness variation with low contrast and blurry contour. This letter presents a new machine vision inspection way for Mura defect based on the level set method. First, a set of real Gabor filters are applied to eliminating the global textured backgrounds. Then, the level set method is employed for image segmentation with a new region-based active contours model, which is an improvement of the Chan-Vese's model so that it more suitable to the segmentation of Mura. Using some results from the level set based segmentation, the defects are quantified based on the SEMU method. Experiments show that the proposed method has better performance for Mura detection and quantification."}
{"_id": "1792b2c06e47399ee2eb1a7905056f02d7b9bf24", "title": "Text Analytics in Social Media", "text": "The rapid growth of online social media in the form of collaborativelycreated content presents new opportunities and challenges to both producers and consumers of information. With the large amount of data produced by various social media services, text analytics provides an effective way to meet usres\u2019 diverse information needs. In this chapter, we first introduce the background of traditional text analytics and the distinct aspects of textual data in social media. We next discuss the research progress of applying text analytics in social media from different perspectives, and show how to improve existing approaches to text representation in social media, using real-world examples."}
{"_id": "538e03892217075a7c2347f088c727725ebc031d", "title": "Project Management Process Maturity \u201e PM ... 2 Model", "text": "This paper presents the project management process maturity (PM) 2 model that determines and positions an organizatio relative project management level with other organizations. The comprehensive model follows a systematic approach to es organization\u2019s current project management level. Each maturity level consists of major project management characteristics, fa processes. The model evolves from functionally driven organizational practices to project driven organization that incorporates co project learning. The (PM) 2 model provides an orderly, disciplined process to achieve higher levels of project management mat DOI: 10.1061/ ~ASCE!0742-597X~2002!18:3~150! CE Database keywords: Project management; Models; Organizations. ve lan ani uiteron\u2019s r th at ctic"}
{"_id": "682c434becc69b9dc70a4c18305f9d733d03f581", "title": "Users of the world , unite ! The challenges and opportunities of Social Media", "text": "As of January 2009, the online social networking application Facebook registered more than 175 million active users. To put that number in perspective, this is only slightly less than the population of Brazil (190 million) and over twice the population of Germany (80 million)! At the same time, every minute, 10 hours of content were uploaded to the video sharing platform YouTube. And, the image hosting site Flickr provided access to over 3 billion photographs, making the world-famous Louvre Museum\u2019s collection of 300,000 objects seem tiny in comparison. According to Forrester Research, 75% of Internet surfers used \u2018\u2018Social Media\u2019\u2019 in the second quarter of 2008 by joining social networks, reading blogs, or contributing reviews to shopping sites; this represents a significant rise from 56% in 2007. The growth is not limited to teenagers, either; members of Generation X, now 35\u201444 years old, increasingly populate the ranks of joiners, spectators, and critics. It is therefore reasonable to say that Social Media represent a revolutionary new trend that should be of interest to companies operating in online space\u2013\u2014or any space, for that matter. Yet, not overly many firms seem to act comfortably in a world where consumers can speak so freely Business Horizons (2010) 53, 59\u201468"}
{"_id": "6cdb6ba83bfaca7b2865a53341106a71e1b3d2dd", "title": "Social Media Metrics \u2014 A Framework and Guidelines for Managing Social Media", "text": "Social media are becoming ubiquitous and need to be managed like all other forms of media that organizations employ to meet their goals. However, social media are fundamentally different from any traditional or other online media because of their social network structure and egalitarian nature. These differences require a distinct measurement approach as a prerequisite for proper analysis and subsequent management. To develop the right social media metrics and subsequently construct appropriate dashboards, we provide a tool kit consisting of three novel components. First, we theoretically derive and propose a holistic framework that covers the major elements of social media, drawing on theories from marketing, psychology, and sociology. We continue to support and detail these elements \u2014 namely \u2018motives,\u2019 \u2018content,\u2019 \u2018network structure,\u2019 and \u2018social roles & interactions\u2019 \u2014 with recent research studies. Second, based on our theoretical framework, the literature review, and practical experience, we suggest nine guidelines that may prove valuable for designing appropriate social media metrics and constructing a sensible social media dashboard. Third, based on the framework and the guidelines we derive managerial implications and suggest an agenda for future research. \u00a9 2013 Direct Marketing Educational Foundation, Inc. Published by Elsevier Inc. All rights reserved."}
{"_id": "535fc722b7e6275ed8101f9805cf06f39edeeb01", "title": "Fast Labeling and Transcription with the Speechalyzer Toolkit", "text": "We describe a software tool named \u201cSpeechalyzer\u201d which is optimized to process large speech data sets with respect to transcription, labeling and annotation. It is implemented as a client server based framework in Java and interfaces software for speech recognition, synthesis, speech classification and quality evaluation. The application is mainly the processing of training data for speech recognition and classification models and performing benchmarking tests on speech to text, text to speech and speech categorization software systems."}
{"_id": "3c9598a2be80a88fccecde80e6f266af7907d7e7", "title": "Vector Space Representations of Documents in Classifying Finnish Social Media Texts", "text": ""}
{"_id": "47c3d413056fe8538cb5ce2d6adc860e70062bf6", "title": "The Selective Labels Problem: Evaluating Algorithmic Predictions in the Presence of Unobservables", "text": "Evaluating whether machines improve on human performance is one of the central questions of machine learning. However, there are many domains where the data is selectively labeled, in the sense that the observed outcomes are themselves a consequence of the existing choices of the human decision-makers. For instance, in the context of judicial bail decisions, we observe the outcome of whether a defendant fails to return for their court appearance only if the human judge decides to release the defendant on bail. This selective labeling makes it harder to evaluate predictive models as the instances for which outcomes are observed do not represent a random sample of the population. Here we propose a novel framework for evaluating the performance of predictive models on selectively labeled data. We develop an approach called contraction which allows us to compare the performance of predictive models and human decision-makers without resorting to counterfactual inference. Our methodology harnesses the heterogeneity of human decision-makers and facilitates effective evaluation of predictive models even in the presence of unmeasured confounders (unobservables) which influence both human decisions and the resulting outcomes. Experimental results on real world datasets spanning diverse domains such as health care, insurance, and criminal justice demonstrate the utility of our evaluation metric in comparing human decisions and machine predictions."}
{"_id": "b927923d991f6e549376496362900b4ddd85b1a4", "title": "Out-of-order transmission enabled congestion and scheduling control for multipath TCP", "text": "With development of wireless communication technologies, mobile devices are commonly equipped with multiple network interfaces and ready to adopt emerging transport layer protocols such as multipath TCP (MPTCP). The protocol is specifically useful for Internet of Things streaming applications with critical latency and bandwidth demands. To achieve full potential of MPTCP, major challenges on congestion control, fairness, and path scheduling are identified and draw considerable research attention. In this paper, we propose a joint congestion control and scheduling algorithm allowing out-of-order transmission as an overall solution. It is achieved by adaptive window coupling, congestion discrimination, and delay-aware packet ordering. The algorithm is implemented in the Linux kernel for real-world experiments. Favorable results are obtained in both shared or distinct bottleneck scenarios."}
{"_id": "2d94165c007865a27e1e4dc76b29a25eef2c26bd", "title": "Neuronal factors determining high intelligence.", "text": "Many attempts have been made to correlate degrees of both animal and human intelligence with brain properties. With respect to mammals, a much-discussed trait concerns absolute and relative brain size, either uncorrected or corrected for body size. However, the correlation of both with degrees of intelligence yields large inconsistencies, because although they are regarded as the most intelligent mammals, monkeys and apes, including humans, have neither the absolutely nor the relatively largest brains. The best fit between brain traits and degrees of intelligence among mammals is reached by a combination of the number of cortical neurons, neuron packing density, interneuronal distance and axonal conduction velocity--factors that determine general information processing capacity (IPC), as reflected by general intelligence. The highest IPC is found in humans, followed by the great apes, Old World and New World monkeys. The IPC of cetaceans and elephants is much lower because of a thin cortex, low neuron packing density and low axonal conduction velocity. By contrast, corvid and psittacid birds have very small and densely packed pallial neurons and relatively many neurons, which, despite very small brain volumes, might explain their high intelligence. The evolution of a syntactical and grammatical language in humans most probably has served as an additional intelligence amplifier, which may have happened in songbirds and psittacids in a convergent manner."}
{"_id": "6ad11aed72ff31c0dbdaf8b9123b28b9bef422b7", "title": "Implications of modified waterfall model to the roles and education of health IT professionals", "text": "Electronic Health Records (EHRs) are believed to have the potential to enhance efficiency and provide better health care. However, the benefits could be easily compromised if EHRs are not used appropriately. This paper applies a modified waterfall life cycle model to evaluate the roles of health IT professionals in the adoption and management of EHRs. We then present our development of a Masters' program in Medical Informatics for the education of health IT professionals. We conclude that health IT professionals serve key roles in addressing the problems and concerns and help fulfill envision of EHRs."}
{"_id": "1efa222a89838ba52fd38469704bd83aa3b4cad8", "title": "A SUMMARY REVIEW OF VIBRATION-BASED DAMAGE IDENTIFICATION METHODS", "text": "This paper provides an overview of methods to detect, locate, and characterize damage in structural and mechanical systems by examining changes in measured vibration response. Research in vibration-based damage identification has been rapidly expanding over the last few years. The basic idea behind this technology is that modal parameters (notably frequencies, mode shapes, and modal damping) are functions of the physical properties of the structure (mass, damping, and stiffness). Therefore, changes in the physical properties will cause detectable changes in the modal properties. The motivation for the development of this technology is presented. The methods are categorized according to various criteria such as the level of damage detection provided, model-based vs. non-model-based methods and linear vs. nonlinear methods. The methods are also described in general terms including difficulties associated with their implementation and their fidelity. Past, current and future-planned applications of this technology to actual engineering systems are summarized. The paper concludes with a discussion of critical issues for future research in the area of vibration-based damage identification."}
{"_id": "364316a7639eacd00bffa5affb73b217524cf4d8", "title": "Context Based Approach for Second Language Acquisition", "text": "SLAM 2018 focuses on predicting a student\u2019s mistake while using the Duolingo application. In this paper, we describe the system we developed for this shared task. Our system uses a logistic regression model to predict the likelihood of a student making a mistake while answering an exercise on Duolingo in all three language tracks English/Spanish (en/es), Spanish/English (es/en) and French/English (fr/en). We conduct an ablation study with several features during the development of this system and discover that context based features play a major role in language acquisition modeling. Our model beats Duolingo\u2019s baseline scores in all three language tracks (AUROC scores for en/es = 0.821, es/en = 0.790 and fr/en = 0.812). Our work makes a case for providing favourable textual context for students while learning second language."}
{"_id": "4f6fdfbb56543110950416be2493dde5dbfa0a49", "title": "Library Anxiety: A Grounded Theory and Its Development", "text": "This qualitative study explored the feelings of students about using the library for research. Personal writing, collected in beginning composition courses over a two-year period, was analyzed for recurrent themes. It was found that 75 to 85 percent of the students in these courses described their initial response to library research in terms of fear. Three concepts emerged from these descriptions: (1) students generally feel that their own library-use skills are inadequate while the skills of other students are adequate, (2) the inadequacy is shameful and should be hidden, and (3) the inadequacy would be revealed by asking questions. A grounded theory of library anxiety was constructed from these data."}
{"_id": "502c7b19cd34f8ddeb9d8ffe4de7797538a442ca", "title": "VEGF-A stimulates lymphangiogenesis and hemangiogenesis in inflammatory neovascularization via macrophage recruitment.", "text": "Lymphangiogenesis, an important initial step in tumor metastasis and transplant sensitization, is mediated by the action of VEGF-C and -D on VEGFR3. In contrast, VEGF-A binds VEGFR1 and VEGFR2 and is an essential hemangiogenic factor. We re-evaluated the potential role of VEGF-A in lymphangiogenesis using a novel model in which both lymphangiogenesis and hemangiogenesis are induced in the normally avascular cornea. Administration of VEGF Trap, a receptor-based fusion protein that binds and neutralizes VEGF-A but not VEGF-C or -D, completely inhibited both hemangiogenesis and the outgrowth of LYVE-1(+) lymphatic vessels following injury. Furthermore, both lymphangiogenesis and hemangiogenesis were significantly reduced in mice transgenic for VEGF-A(164/164) or VEGF-A(188/188) (each of which expresses only one of the three principle VEGF-A isoforms). Because VEGF-A is chemotactic for macrophages and we demonstrate here that macrophages in inflamed corneas release lymphangiogenic VEGF-C/VEGF-D, we evaluated the possibility that macrophage recruitment plays a role in VEGF-A-mediated lymphangiogenesis. Either systemic depletion of all bone marrow-derived cells (by irradiation) or local depletion of macrophages in the cornea (using clodronate liposomes) prior to injury significantly inhibited both hemangiogenesis and lymphangiogenesis. We conclude that VEGF-A recruitment of monocytes/macrophages plays a crucial role in inducing inflammatory neovascularization by supplying/amplifying signals essential for pathological hemangiogenesis and lymphangiogenesis."}
{"_id": "eeea4fa7448cc04bb11dc2241a357e1d6699f460", "title": "Psychological Needs as a Predictor of Cyber bullying : a Preliminary Report on College Students", "text": "Recent surveys show that cyber bullying is a pervasive problem in North America. Many news stories have reported cyber bullying incidents around the world. Reports on the prevalence of cyber bullying and victimization as a result of cyber bullying increase yearly. Although we know what cyber bullying is it is important that we learn more about the psychological eff ects of it. Th erefore, the aim of the current study is to investigate the relationship between psychological needs and cyber bullying. Participants of the study included 666 undergraduate students (231 males and 435 females) from 15 programs in the Faculty of Education at Selcuk University, Turkey. Questions about demographics, engagement in and exposure to cyber bullying, and the Adjective Check List were administered. 22.5% of the students reported engaging in cyber bullying at least one time, and 55.3% of the students reported being victims of cyber bullying at least once in their lifetime. Males reported more cyber bullying behavior than females. Results indicate that aggression and succorance positively predict cyber bullying wheras intraception negatively predict it. In addition, endurance and aff iliation negatively predict cyber victimization. Only the need for change was found as a positive, but weak predictor of cyber victimization. In light of these findings, aggression and intraception should be investigated further in future research on cyber bullying."}
{"_id": "4661794de39ac660a4f37a2ef44b0b1a009d5575", "title": "Machine Reading", "text": "Over the last two decades or so, Natural Language Processing (NLP) has developed powerful methods for low-level syntactic and semantic text processing tasks such as parsing, semantic role labeling, and text categorization. Over the same period, the fields of machine learning and probabilistic reasoning have yielded important breakthroughs as well. It is now time to investigate how to leverage these advances to understand text."}
{"_id": "7d3e8733e7c2df397cfbb895b99448f27d87ca8f", "title": "Real-time detection of students' emotional states in the classroom", "text": "Today, as social media sharing increases, the instant emotional state of the students changes frequently. This situation is greatly affecting the learning process as well as the motivation of the students in the classroom. Sometimes, it is insufficient for the educator to observe the emotional states of students. Therefore, an automatic system is needed that can detect and analyze the emotional states of students in the classroom. In this study, an auxiliary information system, which uses image processing and human-computer interaction, has been developed that would be used in the field of education. In this system, the dataset obtained from the students' faces was tested using various machine learning algorithms. As a result, the accuracy of this system was found as 97.15% using support vector machine. This system is aimed to direct the educator to communicate with the students and to increase their motivation when necessary."}
{"_id": "3e8a2080e4c05706b0ea6759781ba52e00dcda07", "title": "Automatic parking identification and vehicle guidance with road awareness", "text": "Advanced driver assistance systems (ADAS) are becoming more common in safety and convenience applications. The computer vision based ADAS described in this paper, is an add-on system, suitable to variety of cars. It detects vacant legal parking spots, and safely guides the vehicle into the selected parking. Detection can be performed in both indoor parking lots and along roadsides. The system is composed of three standard computer-connected webcams, which are attached to the vehicle. Upon slowing down, the system starts searching automatically for a right hand-side vacant parking spot, while being aware to parking color signs. Once detected, the parking orientation is determined, and the driver is notified. Once a parking is selected by the driver, the relative position between the vehicle and the parking spot is monitored. Vocal and visual parking guidance instructions are presented to the driver. In addition, if during parking, an object is moving on the road towards the car, a safety alert is given. The system is universal in the sense that, as an add-on system, it can be installed on any private 4-wheeled vehicle, and is suited to urban driving environment."}
{"_id": "728c36755e73758f9d2e18126b060099d1f97670", "title": "Deep Learning for Sensorless 3D Freehand Ultrasound Imaging", "text": "3D freehand ultrasound imaging is a very promising imaging modality but its acquisition is often neither portable nor practical because of the required external tracking hardware. Building a sensorless solution that is fully based on image analysis would thus have many potential applications. However, previously proposed approaches rely on physical models whose assumptions only hold on synthetic or phantom datasets, failing to translate to actual clinical acquisitions with sufficient accuracy. In this paper, we investigate the alternative approach of using statistical learning to circumvent this problem. To that end, we are leveraging the unique modeling capabilities of convolutional neural networks in order to build an end-to-end system where we directly predict the ultrasound probe motion from the images themselves. Based on thorough experiments using both phantom acquisitions and a set of 100 in-vivo long ultrasound sweeps for vein mapping, we show that our novel approach significantly outperforms the standard method and has direct clinical applicability, with an average drift error of merely 7% over the whole length of each ultrasound clip."}
{"_id": "11dbb38b7ff8a54ac8387264abd36b017f50f202", "title": "EPBC: Efficient Public Blockchain Client for lightweight users", "text": "Public blockchains provide a decentralized method for storing transaction data and have many applications in different sectors. In order for a user to track transactions, a simple method is that every user keeps a local copy of the entire public ledger. Since the size of a ledger keeps growing, this method becomes increasingly less practical, especially for lightweight users such as IoT devices and smartphones. In order to deal with this problem, there have been some proposals. However, existing solutions either achieve a limited storage reduction (e.g., simple payment verification), or rely on some strong security assumption (e.g., the use of trusted server). We propose EPBC, a novel and efficient transaction verification scheme for public ledgers, which only requires lightweight users to store a small amount of data that is independent of the size of the blockchain. We analyze EPBC's performance and security, and discuss its integration with existing public ledger systems. Experimental results confirm that EPBC is practical for lightweight users."}
{"_id": "16a2899351f589174714a469ccd9f7ee264ecb37", "title": "A unifying computational framework for motor control and social interaction.", "text": "Recent empirical studies have implicated the use of the motor system during action observation, imitation and social interaction. In this paper, we explore the computational parallels between the processes that occur in motor control and in action observation, imitation, social interaction and theory of mind. In particular, we examine the extent to which motor commands acting on the body can be equated with communicative signals acting on other people and suggest that computational solutions for motor control may have been extended to the domain of social interaction."}
{"_id": "965a753fab6e5d0bc271b6bbff473b506a3bb649", "title": "A Review of Wideband Wide-Angle Scanning 2-D Phased Array and Its Applications in Satellite Communication", "text": "In this review, research progress on the wideband wide-angle scanning two-dimensional phased arrays is summarized. The importance of the wideband and the wide-angle scanning characteristics for satellite communication is discussed. Issues like grating lobe avoidance, active reflection coefficient suppression and gain fluctuation reduction are emphasized in this review. Besides, techniques to address these issues and methods to realize the wideband wide-angle scanning phased array are reviewed."}
{"_id": "ffde7a4733f713d85643b2ec53fc258d9ce3b713", "title": "Impact of research on practice in the field of inspections, reviews and walkthroughs: learning from successful industrial uses", "text": "Software inspections, reviews, and walkthroughs have become a standard process component in many software development domains. Maturity level 3 of the CMM-I requires establishment of peer reviews [12] and substantial sustained improvements in quality and productivity have been reported as a result of using reviews ([16], [21], [22], [27]). The NSF Impact project identifies the degree to which these industrial success cases have been instigated and improved by research in software engineering.\n This research identifies that there is widespread adoption of inspections, reviews or walkthroughs but that companies do not generally exploit their full potential. However there exist sustained industrial success cases with respect to the wide-spread and measurably successful application of them. It also identifies research in software engineering that can be credibly documented as having influenced the industrial success cases. Credible documentation may exist in the form of publications or documented reports by witnesses. Due to the semi-formal nature of inspections, reviews, and walkthroughs, a specific focus is given to empirical research results as motivators for adoption. Through the examination of one detailed case study, it is shown that software engineering research has had a significant impact on practice and that the impact can be traced in this case from research to that practice. The case study chosen provides evidence of both success and failure regarding sustained application in practice.\n Thus the analysis of historic impact chains of research reveals a clear impact of software engineering research on sustained industrial success for inspections, reviews and walkthroughs. More importantly, in impact chains where the empirical results have not been established, we conclude that success has not been achieved or has not been sustained.\n The paper closes with (1) lessons learned for creating the sustained use and impact of semi-formal software engineering processes, (2) a request for researchers and practitioners to further consider how their work can improve the effectiveness of research and practice, and (3) a request to contribute additional success cases and impact factors to the authors database for future enhancements of this paper."}
{"_id": "b04175bb99d6beff0f201ed82971aeb91d2c081d", "title": "Exploring Deep Learning Methods for Discovering Features in Speech Signals", "text": "Exploring Deep Learning Methods for discovering features in speech signals. Navdeep Jaitly Doctor of Philosophy Graduate Department of Computer Science University of Toronto 2014 This thesis makes three main contributions to the area of speech recognition with Deep Neural Network Hidden Markov Models (DNN-HMMs). Firstly, we explore the effectiveness of features learnt from speech databases using Deep Learning for speech recognition. This contrasts with prior works that have largely confined themselves to using traditional features such as Mel Cepstral Coefficients and Mel log filter banks for speech recognition. We start by showing that features learnt on raw signals using Gaussian-ReLU Restricted Boltzmann Machines can achieve accuracy close to that achieved with the best traditional features. These features are, however, learnt using a generative model that ignores domain knowledge. We develop methods to discover features that are endowed with meaningful semantics that are relevant to the domain using capsules. To this end, we extend previous work on transforming autoencoders and propose a new autoencoder with a domain-specific decoder to learn capsules from speech databases. We show that capsule instantiation parameters can be combined with Mel log filter banks to produce improvements in phone recognition on TIMIT. On WSJ the word error rate does not improve, even though we get strong gains in classification accuracy. We speculate this may be because of the mismatched objectives of word error rate over an utterance and frame error rate on the sub-phonetic class for a frame. Secondly, we develop a method for data augmentation in speech datasets. Such methods result in strong gains in object recognition, but have largely been ignored in speech recognition. Our data augmentation encourages the learning of invariance to vocal tract length of speakers. The method is shown to improve the phone error rate on TIMIT and the word error rate on a 14 hour subset of WSJ. Lastly, we develop a method for learning and using a longer range model of targets, conditioned on the input. This method predicts the labels for multiple frames together and uses a geometric average of these predictions during decoding. It produces state of the art results on phone recognition with TIMIT and also produces significant gains on WSJ."}
{"_id": "4b23bba6706147da34044e041c2871719d9de1af", "title": "Collaborative Quantization for Cross-Modal Similarity Search", "text": "Cross-modal similarity search is a problem about designing a search system supporting querying across content modalities, e.g., using an image to search for texts or using a text to search for images. This paper presents a compact coding solution for efficient search, with a focus on the quantization approach which has already shown the superior performance over the hashing solutions in the single-modal similarity search. We propose a cross-modal quantization approach, which is among the early attempts to introduce quantization into cross-modal search. The major contribution lies in jointly learning the quantizers for both modalities through aligning the quantized representations for each pair of image and text belonging to a document. In addition, our approach simultaneously learns the common space for both modalities in which quantization is conducted to enable efficient and effective search using the Euclidean distance computed in the common space with fast distance table lookup. Experimental results compared with several competitive algorithms over three benchmark datasets demonstrate that the proposed approach achieves the state-of-the-art performance."}
{"_id": "08f6b1a26715648b7d475e5ca85bda69b2cf86d5", "title": "Centrocestus formosanus (Digenea: Heterophyidae) encysted in the freshwater fish, Puntius brevis, from Lao PDR.", "text": "The metacercariae of Centrocestus formosanus, a minute intestinal trematode of mammals and birds, were detected in the freshwater fish, Puntius brevis, from Vientiane Municipality, Lao PDR. The metacercariae were experimentally fed to mice, and adult flukes were recovered in their small intestines 7 days later. The adult flukes were morphologically characterized by having 32 (rarely 34) circumoral spines arranged in 2 alternative rows, a large bipartite seminal vesicle, an oval-shaped ovary, and an X-shaped excretory bladder. Based on these characters, the adults were identified as Centrocestus formosanus (Nishigori, 1924). The taxonomic significance of C. formosanus, in relation to a closely related species, C. caninus (Leiper, 1913), is briefly discussed. It has been first verified by adult worm recovery that C. formosanus is prevalent in Vientiane areas of Lao PDR, taking the freshwater fish, P. brevis, as a second intermediate host."}
{"_id": "6d604f26cef4cae50bd42d9fab02952cdf93de42", "title": "A Low-THD Class-D Audio Amplifier With Dual-Level Dual-Phase Carrier Pulsewidth Modulation", "text": "In this paper, a class-D audio amplifier which combines the advantages of the phase-shifted carrier pulsewidth modulation (PWM) and the multiple-level carrier PWM is proposed with a dual-level dual-phase carrier (DLDPC) PWM. The proposed closed-loop amplifier includes a second-order integrator and a DLDPC triangular wave generator. Two sets of 180\u00b0 out-of-phase triangular waves are used as carriers, and each set has its respective offset voltage level with nonoverlapping amplitude. By performing the double Fourier analysis, it can be found that the linearity can be enhanced and the distortion can be reduced with the proposed modulation. Experimental results show that the proposed fully differential DLDPC PWM class-D audio amplifier features a total harmonic distortion lower than 0.01% with an output voltage swing of \u00b15 V."}
{"_id": "8ab2d7246cce944b9dbd87b291f60e09b64ad611", "title": "Interactions between working memory, attention and eye movements.", "text": "This paper reviews the recent findings on working memory, attention and eye movements. We discuss the research that shows that many phenomena related to visual attention taking place when selecting relevant information from the environment are similar to processes needed to keep information active in working memory. We discuss new data that show that when retrieving information from working memory, people may allocate visual spatial attention to the empty location in space that used to contain the information that has to be retrieved. Moreover, we show that maintaining a location in working memory not only may involve attention rehearsal, but might also recruit the oculomotor system. Recent findings seem to suggest that remembering a location may involve attention-based rehearsal in higher brain areas, while at the same time there is inhibition of specific motor programs at lower brain areas. We discuss the possibility that working memory functions do not reside at a special area in the brain, but emerge from the selective recruitment of brain areas that are typically involved in spatial attention and motor control."}
{"_id": "6a63f001aec8e5605fe11f11efce0780689a7323", "title": "Use of Design Patterns in PHP-Based Web Application Frameworks", "text": "It is known that design patterns of object-oriented programming are used in the design of Web applications, but there is no sufficient information which data patterns are used, how often they are used, and the level of quality at which they are used. This paper describes the results concerning the use of design patterns in projects which develop PHP-based Web application frameworks. Documentation and source code were analysed for 10 frameworks, finding that design patterns are used in the development of Web applications, but not too much and without much consistency. The results and conclusions can be of use when planning and developing new projects because the existing experience can be taken into account. The paper also offers information which design patterns are not used because they may be artificial or hard-to-use in real projects. Alternatively, developers may simply lack information on the existence of the design patterns."}
{"_id": "b813118f82ddb0ff21e3d3df31cddb815822b103", "title": "PHP FRAMEWORK FOR DATABASE MANAGEMENT BASED ON MVC PATTERN", "text": "PHP is a powerful language to develop dynamic and interactive web applications. One of the defining features of PHP is the ease for developers to connect and manipulate a database. PHP prepares the functions for database manipulation. However, database management is done by the Structure Query Language (SQL). Most novice programmers often have trouble with SQL syntax. In this paper, we present the PHP framework for database management based on the MVC pattern. The MVC pattern is very useful for the architecture of web applications, separating the model, view and controller of a web application. The PHP framework encapsulated, common database operations are INSERT, UPDATE, DELETE and SELECT. Developers will not be required to consider the specific SQL statement syntax, just to call it the method in the model module. In addition, we use White-Box testing for the code verification in the model module. Lastly, a web application example is shown to illustrate the process of the PHP framework."}
{"_id": "7de20692c774798016e06d2eb0ade959c61df350", "title": "Patterns of Enterprise Application Architecture", "text": "protected void doInsert(DomainObject subject, PreparedStatement insertStatement) throws SQLException; class PersonMapper... protected String insertStatement() { return \"INSERT INTO people VALUES (?, ?, ?, ?)\"; } protected void doInsert( DomainObject abstractSubject, PreparedStatement stmt) throws SQLException { Person subject = (Person) abstractSubject; stmt.setString(2, subject.getLastName()); stmt.setString(3, subject.getFirstName()); stmt.setInt(4, subject.getNumberOfDependents()); } Example: Separating the Finders (Java) To allow domain objects to invoke finder behavior I can use Separated Interface (476) to separate the finder interfaces from the mappers (Figure 10.5 ). I can put these finder interfaces in a separate package that's visible to the domain layer, or, as in this case, I can put them in the domain layer itself. Figure 10.5. Defining a finder interface in the domain package."}
{"_id": "98c820611bfd24bc7e5752192182d991540ab939", "title": "The Research of PHP Development Framework Based on MVC Pattern", "text": "PHP is one of the leading web development languages, however, the development model of existing PHP organizes without a structure, which mixes the code of data access, the processing of business logic , and web presentation layer together, as a relult, it brought about many problems in the web applications , meanwhile, it could not meet the rapid development of web apply any more. In this paper, a implementation of PHP based on MVC design patterns - FDF framework was provided for PHP developers, which can offer a framework for web applications, separate the data, view and control of web applications, afford to achieve loose coupling, thereby enhanced the efficiency, reliability, maintainability and scalability of application development."}
{"_id": "15b53584315083345e88f2d3c436197744f72a01", "title": "Co-regularized least square regression for multi-view multi-class classification", "text": "Many classification problems involve instances that are unlabeled, multi-view and multi-class. However, few technique has been benchmarked for this complex scenario, with a notable exception that combines co-trained naive bayes (CoT-NB) with BCH coding. In this paper, we benchmark the performance of co-regularized least square regression (CoR-LS) for semi-supervised multi-view multi-class classification. We find it performed consistently and significantly better than CoT-NB over eight data sets at different scales. We also find for CoR-LS identity coding is optimal on large data sets and BCH coding is optimal on small data sets. Optimal scoring, a data-dependent coding scheme, often provides near-optimal performance."}
{"_id": "ede538be81d0d2f7982500374a329be18438881d", "title": "Introduction to Fillers.", "text": "BACKGROUND\nOver the last few years, injectable soft-tissue fillers have become an integral part of cosmetic therapy, with a wide array of products designed to fill lines and folds and revolumize the face.\n\n\nMETHODS\nThis review describes cosmetic fillers currently approved by the Food and Drug Administration and discusses new agents under investigation for use in the United States.\n\n\nRESULTS\nBecause of product refinements over the last few years-greater ease of use and longevity, the flexibility of multiple formulations within one line of products, and the ability to reverse poor clinical outcomes-practitioners have gravitated toward the use of biodegradable agents that stimulate neocollagenesis for sustained aesthetic improvements lasting up to a year or more with minimal side effects. Permanent implants provide long-lasting results but are associated with greater potential risk of complications and require the skilled hand of the experienced injector.\n\n\nCONCLUSIONS\nA variety of biodegradable and nonbiodegradable filling agents are available or under investigation in the United States. Choice of product depends on injector preference and the area to be filled. Although permanent agents offer significant clinical benefits, modern biodegradable fillers are durable and often reversible in the event of adverse effects."}
{"_id": "ab3d0ea202b2641eeb66f1d6a391a43598ba22b9", "title": "Truncated Importance Sampling for Reinforcement Learning with Experience Replay", "text": "Reinforcement Learning (RL) is considered here as an adaptation technique of neural controllers of machines. The goal is to make Actor-Critic algorithms require less agent-environment interaction to obtain policies of the same quality, at the cost of additional background computations. We propose to achieve this goal in the spirit of experience replay. An estimation method of improvement direction of a changing policy, based on preceding experience, is essential here. We propose one that uses truncated importance sampling. We derive bounds of bias of that type of estimators and prove that this bias asymptotically vanishes. In the experimental study we apply our approach to the classic ActorCritic and obtain 20-fold increase in speed of learning."}
{"_id": "3a2fed81b9ede781a2bb8c8db949a2edcec21aa8", "title": "EmEx, a Tool for Automated Emotive Face Recognition Using Convolutional Neural Networks", "text": "The work described in this paper represents the study and the attempt to make a contribution to one of the most stimulating and promising sectors in the field of emotion recognition, which is health care management. Multidisciplinary studies in artificial intelligence, augmented reality and psychology stressed out the importance of emotions in communication and awareness. The intent is to recognize human emotions, processing images streamed in real-time from a mobile device. The adopted techniques involve the use of open source libraries of visual recognition and machine learning approaches based on convolutional neural networks (CNN)."}
{"_id": "71e258b1aeea7a0e2b2076a4fddb0679ad2ecf9f", "title": "A Novel Secure Architecture for the Internet of Things", "text": "The \"Internet of Things\"(IoT) opens opportunities for devices and softwares to share information on an unprecedented scale. However, such a large interconnected network poses new challenges to system developers and users. In this article, we propose a layered architecture of IoT system. Using this model, we try to identify and assess each layer's challenges. We also discuss several existing technologies that can be used make this architecture secure."}
{"_id": "2d99efd269098d8de9f924076f1946150805aafb", "title": "MIMO Performance of Realistic UE Antennas in LTE Scenarios at 750 MHz", "text": "Multiple-input-multiple-output (MIMO) is a technique to achieve high data rates in mobile communication networks. Simulations are performed at both the antenna level and Long-Term Evolution (LTE) system level to assess the performance of realistic handheld devices with dual antennas at 750 MHz. It is shown that MIMO works very well and gives substantial performance gain in user devices with a quarter-wavelength antenna separation."}
{"_id": "39bc177ddbd0cd1c02f93340e84a6ca4973960d7", "title": "Implementation and verification of a generic universal memory controller based on UVM", "text": "This paper presents a coverage driven constraint random based functional verification method based on the Universal Verification Methodology (UVM) using System Verilog for generic universal memory controller architecture. This universal memory controller is looking forward to improving the performance of the existing memory controllers through a complete integration of the existing memory controllers features in addition of providing novel features. It also reduces the consumed power through providing high power consumption control due to its proposed different power levels supported to fit all power scenarios. While implementing a worthy architecture like the proposed generic universal memory controller, UVM is the best choice to build well-constructed, high controlled and reusable verification environment to efficiently verify it. More than 200 coverage points have been covered to verify the validation of the integrated features which makes the proposed universal memory controller replaces the existing controllers on the scene as it provides all of their powerful features in addition of novel features to control two of the most dominated types of memory; FLASH and DRAM through one memory controller."}
{"_id": "5e671ff1d980aca18cb4859f7bbe38924eb6dd86", "title": "Towards Internet of Things : Survey and Future Vision", "text": "Internet of things is a promising research due to its importance in many commerce, industry, and education applications. Recently, new applications and research challenges in numerous areas of Internet of things are fired. In this paper, we discuss the history of Internet of things, different proposed architectures of Internet of things, research challenges and open problems related to the Internet of things. We also introduce the concept of Internet of things database and discuss about the future vision of Internet of things. These are the manuscript preparation guidelines used as a standard template for all journal submissions. Author must follow these instructions while preparing/modifying these guidelines."}
{"_id": "ea14548bc4ab5e5a8af29696bcd2c1eb7463d02a", "title": "Joint Semantic and Latent Attribute Modelling for Cross-Class Transfer Learning", "text": "A number of vision problems such as zero-shot learning and person re-identification can be considered as cross-class transfer learning problems. As mid-level semantic properties shared cross different object classes, attributes have been studied extensively for knowledge transfer across classes. Most previous attribute learning methods focus only on human-defined/nameable semantic attributes, whilst ignoring the fact there also exist undefined/latent shareable visual properties, or latent attributes. These latent attributes can be either discriminative or non-discriminative parts depending on whether they can contribute to an object recognition task. In this work, we argue that learning the latent attributes jointly with user-defined semantic attributes not only leads to better representation but also helps semantic attribute prediction. A novel dictionary learning model is proposed which decomposes the dictionary space into three parts corresponding to semantic, latent discriminative and latent background attributes respectively. Such a joint attribute learning model is then extended by following a multi-task transfer learning framework to address a more challenging unsupervised domain adaptation problem, where annotations are only available on an auxiliary dataset and the target dataset is completely unlabelled. Extensive experiments show that the proposed models, though being linear and thus extremely efficient to compute, produce state-of-the-art results on both zero-shot learning and person re-identification."}
{"_id": "0e1e19c8f1c6f2e0182a119756d3695900cdc18c", "title": "The Parallel Knowledge Gradient Method for Batch Bayesian Optimization", "text": "In many applications of black-box optimization, one can evaluate multiple points simultaneously, e.g. when evaluating the performances of several different neural networks in a parallel computing environment. In this paper, we develop a novel batch Bayesian optimization algorithm \u2014 the parallel knowledge gradient method. By construction, this method provides the one-step Bayes optimal batch of points to sample. We provide an efficient strategy for computing this Bayes-optimal batch of points, and we demonstrate that the parallel knowledge gradient method finds global optima significantly faster than previous batch Bayesian optimization algorithms on both synthetic test functions and when tuning hyperparameters of practical machine learning algorithms, especially when function evaluations are noisy."}
{"_id": "a1a2878be2f0733878d1d53c362b116d13135c24", "title": "The structure of musical preferences: a five-factor model.", "text": "Music is a cross-cultural universal, a ubiquitous activity found in every known human culture. Individuals demonstrate manifestly different preferences in music, and yet relatively little is known about the underlying structure of those preferences. Here, we introduce a model of musical preferences based on listeners' affective reactions to excerpts of music from a wide variety of musical genres. The findings from 3 independent studies converged to suggest that there exists a latent 5-factor structure underlying music preferences that is genre free and reflects primarily emotional/affective responses to music. We have interpreted and labeled these factors as (a) a Mellow factor comprising smooth and relaxing styles; (b) an Unpretentious factor comprising a variety of different styles of sincere and rootsy music such as is often found in country and singer-songwriter genres; (c) a Sophisticated factor that includes classical, operatic, world, and jazz; (d) an Intense factor defined by loud, forceful, and energetic music; and (e) a Contemporary factor defined largely by rhythmic and percussive music, such as is found in rap, funk, and acid jazz. The findings from a fourth study suggest that preferences for the MUSIC factors are affected by both the social and the auditory characteristics of the music."}
{"_id": "37ca01962475d9867b970d35a133584b3935b9c5", "title": "Coagulation and ablation patterns of high-intensity focused ultrasound on a tissue-mimicking phantom and cadaveric skin", "text": "High-intensity focused ultrasound (HIFU) can be applied noninvasively to create focused zones of tissue coagulation on various skin layers. We performed a comparative study of HIFU, evaluating patterns of focused tissue coagulation and ablation upon application thereof. A tissue-mimicking (TM) phantom was prepared with bovine serum albumin and polyacrylamide hydrogel to evaluate the geometric patterns of HIFU-induced thermal injury zones (TIZs) for five different HIFU devices. Additionally, for each device, we investigated histologic patterns of HIFU-induced coagulation and ablation in serial sections of cadaveric skin of the face and neck. All HIFU devices generated remarkable TIZs in the TM phantom, with different geometric values of coagulation for each device. Most of the TIZs seemed to be separated into two or more tiny parts. In cadaveric skin, characteristic patterns of HIFU-induced ablation and coagulation were noted along the mid to lower dermis at the focal penetration depth of 3\u00a0mm and along subcutaneous fat to the superficial musculoaponeurotic system or the platysma muscle of the neck at 4.5\u00a0mm. Additionally, remarkable pre-focal areas of tissue coagulation were observed in the upper and mid dermis at the focal penetration depth of 3\u00a0mm and mid to lower dermis at 4.5\u00a0mm. For five HIFU devices, we outlined various patterns of HIFU-induced TIZ formation along pre-focal, focal, and post-focal areas of TM phantom and cadaveric skin of the face and neck."}
{"_id": "b6d111606afcd911afd8d9fb70857988f68bcb15", "title": "Revising Immersion: A Conceptual Model for the Analysis of Digital Game Involvement", "text": "Game studies literature has recently seen a renewed interest in game experience with the recent publication of a number of edited collections, dissertations and conferences focusing on the subject. This paper aims to contribute to that growing body of literature by presenting a summary of my doctoral research in digital game involvement and immersion. It outlines a segment of a conceptual model that describes and analyzes the moment by moment involvement with digital games on a variety of experiential dimensions corresponding to six broad categories of game features. The paper ends with a proposal to replace the metaphor of immersion with one of incorporation. Incorporation aims to avoid the binary notion of the player\u2019s plunge into the virtual environment characteristic of \u201cimmersion\u201d while dispelling the vagueness of application that all too often surrounds the term."}
{"_id": "397e30b7a9fef1f210f29ab46eda013efb093fef", "title": "SpatialHadoop: towards flexible and scalable spatial processing using mapreduce", "text": "Recently, MapReduce frameworks, e.g., Hadoop, have been used extensively in different applications that include tera-byte sorting, machine learning, and graph processing. With the huge volumes of spatial data coming from different sources, there is an increasing demand to exploit the efficiency of Hadoop, coupled with the flexibility of the MapReduce framework, in spatial data processing. However, Hadoop falls short in supporting spatial data efficiently as the core is unaware of spatial data properties. This paper describes SpatialHadoop; a full-edged MapReduce framework with native support for spatial data. SpatialHadoop is a comprehensive extension to Hadoop that injects spatial data awareness in each Hadoop layer, namely, the language, storage, MapReduce, and operations layers. In the language layer, SpatialHadoop adds a simple and ex- pressive high level language for spatial data types and operations. In the storage layer, SpatialHadoop adapts traditional spatial index structures, Grid, R-tree and R+-tree, to form a two-level spatial index. SpatialHadoop enriches the MapReduce layer by two new components, SpatialFileSplitter and SpatialRecordReader, for efficient and scalable spatial data processing. In the operations layer, SpatialHadoop is already equipped with a dozen of operations, including range query, kNN, and spatial join. The flexibility and open source nature of SpatialHadoop allows more spatial operations to be implemented efficiently using MapReduce. Extensive experiments on a real system prototype and real datasets show that SpatialHadoop achieves orders of magnitude better performance than Hadoop for spatial data processing."}
{"_id": "5863433b7f1fd3e73aba0b747b530012f3679bc8", "title": "Impact of a workplace stress reduction program on blood pressure and emotional health in hypertensive employees.", "text": "OBJECTIVES\nThis study examined the impact of a workplace-based stress management program on blood pressure (BP), emotional health, and workplace-related measures in hypertensive employees of a global information technology company.\n\n\nDESIGN\nThirty-eight (38) employees with hypertension were randomly assigned to a treatment group that received the stress-reduction intervention or a waiting control group that received no intervention during the study period. The treatment group participated in a 16-hour program, which included instruction in positive emotion refocusing and emotional restructuring techniques intended to reduce sympathetic nervous system arousal, stress, and negative affect, increase positive affect, and improve performance. Learning and practice of the techniques was enhanced by heart rate variability feedback, which helped participants learn to self-generate physiological coherence, a beneficial physiologic mode associated with increased heart rhythm coherence, physiologic entrainment, parasympathetic activity, and vascular resonance. BP, emotional health, and workplace-related measures were assessed before and 3 months after the program.\n\n\nRESULTS\nThree months post-intervention, the treatment group exhibited a mean adjusted reduction of 10.6 mm Hg in systolic BP and of 6.3 mm Hg in diastolic BP. The reduction in systolic BP was significant in relation to the control group. The treatment group also demonstrated improvements in emotional health, including significant reductions in stress symptoms, depression, and global psychological distress and significant increases in peacefulness and positive outlook. Reduced systolic BP was correlated with reduced stress symptoms. Furthermore, the trained employees demonstrated significant increases in the work-related scales of workplace satisfaction and value of contribution.\n\n\nCONCLUSIONS\nResults suggest that a brief workplace stress management intervention can produce clinically significant reductions in BP and improve emotional health among hypertensive employees. Implications are that such interventions may produce a healthier and more productive workforce, enhancing performance and reducing losses to the organization resulting from cognitive decline, illness, and premature mortality."}
{"_id": "7f91c4420181ccad208ddeb625d09ff7510e62df", "title": "IL-17 and Th17 Cells.", "text": "CD4+ T cells, upon activation and expansion, develop into different T helper cell subsets with different cytokine profiles and distinct effector functions. Until recently, T cells were divided into Th1 or Th2 cells, depending on the cytokines they produce. A third subset of IL-17-producing effector T helper cells, called Th17 cells, has now been discovered and characterized. Here, we summarize the current information on the differentiation and effector functions of the Th17 lineage. Th17 cells produce IL-17, IL-17F, and IL-22, thereby inducing a massive tissue reaction owing to the broad distribution of the IL-17 and IL-22 receptors. Th17 cells also secrete IL-21 to communicate with the cells of the immune system. The differentiation factors (TGF-beta plus IL-6 or IL-21), the growth and stabilization factor (IL-23), and the transcription factors (STAT3, RORgammat, and RORalpha) involved in the development of Th17 cells have just been identified. The participation of TGF-beta in the differentiation of Th17 cells places the Th17 lineage in close relationship with CD4+CD25+Foxp3+ regulatory T cells (Tregs), as TGF-beta also induces differentiation of naive T cells into Foxp3+ Tregs in the peripheral immune compartment. The investigation of the differentiation, effector function, and regulation of Th17 cells has opened up a new framework for understanding T cell differentiation. Furthermore, we now appreciate the importance of Th17 cells in clearing pathogens during host defense reactions and in inducing tissue inflammation in autoimmune disease."}
{"_id": "3c964f1810821aecd2a6a2b68aaa0deca4306492", "title": "Variational mesh segmentation via quadric surface fitting", "text": "Wepresent a new variationalmethod formesh segmentation by fitting quadric surfaces. Each component of the resulting segmentation is represented by a general quadric surface (including plane as a special case). A novel energy function is defined to evaluate the quality of the segmentation, which combines both L2 and L2,1 metrics from a triangle to a quadric surface. The Lloyd iteration is used to minimize the energy function, which repeatedly interleaves between mesh partition and quadric surface fitting. We also integrate feature-based and simplification-based techniques in the segmentation framework, which greatly improve the performance. The advantages of our algorithm are demonstrated by comparing with the state-of-the-art methods. \u00a9 2012 Elsevier Ltd. All rights reserved."}
{"_id": "82ffd343c082700895f2a3810e7f52b0a7eb9b92", "title": "Extracting Opinion Expressions with semi-Markov Conditional Random Fields", "text": "Extracting opinion expressions from text is usually formulated as a token-level sequence labeling task tackled using Conditional Random Fields (CRFs). CRFs, however, do not readily model potentially useful segment-level information like syntactic constituent structure. Thus, we propose a semi-CRF-based approach to the task that can perform sequence labeling at the segment level. We extend the original semi-CRF model (Sarawagi and Cohen, 2004) to allow the modeling of arbitrarily long expressions while accounting for their likely syntactic structure when modeling segment boundaries. We evaluate performance on two opinion extraction tasks, and, in contrast to previous sequence labeling approaches to the task, explore the usefulness of segmentlevel syntactic parse features. Experimental results demonstrate that our approach outperforms state-of-the-art methods for both opinion expression tasks."}
{"_id": "178cc2fc3062b1aa789418448d197440349fdaa0", "title": "The GUM corpus: creating multilayer resources in the classroom", "text": "This paper presents the methodology, design principles and detailed evaluation of a new freely available multilayer corpus, collected and edited via classroom annotation using collaborative software. After briefly discussing corpus design for open, extensible corpora, five classroom annotation projects are presented, covering structural markup in TEI XML, multiple part of speech tagging, constituent and dependency parsing, information structural and coreference annotation, and Rhetorical Structure Theory analysis. Layers are inspected for annotation quality and together they coalesce to form a richly annotated corpus that can be used to study the interactions between different levels of linguistic description. The evaluation gives an indication of the expected quality of a corpus created by students with relatively little training. A multifactorial example study on lexical NP coreference likelihood is also presented, which illustrates some applications of the corpus. The results of this project show that high quality, richly annotated resources can be created effectively as part of a linguistics curriculum, opening new possibilities not just for research, but also for corpora in linguistics pedagogy."}
{"_id": "c14acbdb281b54bd597ba7525ec6d24001e26b27", "title": "Situationally Aware In-Car Information Presentation Using Incremental Speech Generation: Safer, and More Effective", "text": "Holding non-co-located conversations while driving is dangerous (Horrey and Wickens, 2006; Strayer et al., 2006), much more so than conversations with physically present, \u201csituated\u201d interlocutors (Drews et al., 2004). In-car dialogue systems typically resemble non-co-located conversations more, and share their negative impact (Strayer et al., 2013). We implemented and tested a simple strategy for making in-car dialogue systems aware of the driving situation, by giving them the capability to interrupt themselves when a dangerous situation is detected, and resume when over. We show that this improves both driving performance and recall of system-presented information, compared to a non-adaptive strategy."}
{"_id": "79ce69b138b893b825f2f164c4cc3af3be6eedad", "title": "Predictive maintenance techniques", "text": "This article discusses the importance of PdM for industrial process applications and investigates a number of emerging technologies that enable this approach, including online energy-efficiency evaluation and continuous condition monitoring. The article gives an overview of existing and future technologies that can be used in these areas. Two methods for bearing fault detection and energy-efficiency estimation are discussed. The article concludes with focus on one pilot installation at Weyerhaeuser's Containerboard Packaging Plant in Manitowoc, Wisconsin, USA, monitoring three critical induction motors: a 75-hrho blower motor, a 50-hrho hydraulic pump motor, and a 200-hp compressor motor. Finally, the field experience gained in this plant is presented as two case studies."}
{"_id": "aedcf0f37d4f9122894c5794545c324f68c17c05", "title": "Spectral analysis of surface electromyography (EMG) of upper esophageal sphincter-opening muscles during head lift exercise.", "text": "Although recent studies have shown enhancement of deglutitive upper esophageal sphincter opening in healthy elderly patients performing an isometric/isotonic head lift exercise (HLE), the muscle groups affected by this process are not known. A shift in the spectral analysis of surface EMG activity seen with muscle fatigue can be used to identify muscles affected by an exercise. The objective of this study was to use spectral analysis to evaluate surface EMG activities in the suprahyoid (SHM), infrahyoid (IHM), and sternocleidomastoid (SCM) muscle groups during the HLE. Surface EMG signals were recorded continuously on a TECA Premiere II during two phases of the HLE protocol in eleven control subjects. In the first phase of the protocol, surface EMG signals were recorded simultaneously from the three muscle groups for a period of 20 s. In the second phase, a 60 s recording was obtained for each of three successive trials with individual muscle groups. The mean frequency (MNF), median frequency (MDF), root mean square (RMS), and average rectified value (ARV) were used as spectral variables to assess the fatigue of the three muscle groups during the exercise. Least squares regression lines were fitted to each variable data set. Our findings suggest that during the HLE the SHM, IHM, and SCM muscle groups all show signs of fatigue; however, the SCM muscle group fatigued faster than the SHM and IHM muscle groups. Because of its higher fatigue rate, the SCM muscle group may play a limiting role in the HLE."}
{"_id": "291c9e85ff878659ed35b607a8177d4ba30bc009", "title": "ZStream: a cost-based query processor for adaptively detecting composite events", "text": "Composite (or Complex) event processing (CEP) systems search sequences of incoming events for occurrences of user-specified event patterns. Recently, they have gained more attention in a variety of areas due to their powerful and expressive query language and performance potential. Sequentiality (temporal ordering) is the primary way in which CEP systems relate events to each other. In this paper, we present a CEP system called ZStream to efficiently process such sequential patterns. Besides simple sequential patterns, ZStream is also able to detect other patterns, including conjunction, disjunction, negation and Kleene closure.\n Unlike most recently proposed CEP systems, which use non-deterministic finite automata (NFA's) to detect patterns, ZStream uses tree-based query plans for both the logical and physical representation of query patterns. By carefully designing the underlying infrastructure and algorithms, ZStream is able to unify the evaluation of sequence, conjunction, disjunction, negation, and Kleene closure as variants of the join operator. Under this framework, a single pattern in ZStream may have several equivalent physical tree plans, with different evaluation costs. We propose a cost model to estimate the computation costs of a plan. We show that our cost model can accurately capture the actual runtime behavior of a plan, and that choosing the optimal plan can result in a factor of four or more speedup versus an NFA based approach. Based on this cost model and using a simple set of statistics about operator selectivity and data rates, ZStream is able to adaptively and seamlessly adjust the order in which it detects patterns on the fly. Finally, we describe a dynamic programming algorithm used in our cost model to efficiently search for an optimal query plan for a given pattern."}
{"_id": "acf68625946d2eb0e88e2da5daca4b3c86b0fd8d", "title": "Environmental implications of plastic debris in marine settings--entanglement, ingestion, smothering, hangers-on, hitch-hiking and alien invasions.", "text": "Over the past five or six decades, contamination and pollution of the world's enclosed seas, coastal waters and the wider open oceans by plastics and other synthetic, non-biodegradable materials (generally known as 'marine debris') has been an ever-increasing phenomenon. The sources of these polluting materials are both land- and marine-based, their origins may be local or distant, and the environmental consequences are many and varied. The more widely recognized problems are typically associated with entanglement, ingestion, suffocation and general debilitation, and are often related to stranding events and public perception. Among the less frequently recognized and recorded problems are global hazards to shipping, fisheries and other maritime activities. Today, there are rapidly developing research interests in the biota attracted to freely floating (i.e. pelagic) marine debris, commonly known as 'hangers-on and hitch-hikers' as well as material sinking to the sea floor despite being buoyant. Dispersal of aggressive alien and invasive species by these mechanisms leads one to reflect on the possibilities that ensuing invasions could endanger sensitive, or at-risk coastal environments (both marine and terrestrial) far from their native habitats."}
{"_id": "23c425f022baa054c68683eaf81f5d482915ce13", "title": "A Fast Quantum Mechanical Algorithm for Database Search", "text": "were proposed in the early 1980\u2019s [Benioff80] and shown to be at least as powerful as classical computers an important but not surprising result, since classical computers, at the deepest level, ultimately follow the laws of quantum mechanics. The description of quantum mechanical computers was formalized in the late 80\u2019s and early 90\u2019s [Deutsch85][BB92] [BV93] [Yao93] and they were shown to be more powerful than classical computers on various specialized problems. In early 1994, [Shor94] demonstrated that a quantum mechanical computer could efficiently solve a well-known problem for which there was no known efficient algorithm using classical computers. This is the problem of integer factorization, i.e. testing whether or not a given integer, N, is prime, in a time which is a finite power of o (logN) . ----------------------------------------------"}
{"_id": "3d98653c152e733fb690ccd3d307ac14bc879569", "title": "Quantum theory , the Church-Turing principle and the universal quantum computer", "text": "It is argued that underlying the Church-Turing hypothesis there is an implicit physical assertion. Here, this assertion is presented explicitly as a physical principle: \u2018every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means\u2019. Classical physics and the universal Turing machine, because the former is continuous and the latter discrete, do not obey the principle, at least in the strong form above. A class of model computing machines that is the quantum generalization of the class of Turing machines is described, and it is shown that quantum theory and the \u2018universal quantum computer\u2019 are compatible with the principle. Computing machines resembling the universal quantum computer could, in principle, be built and would have many remarkable properties not reproducible by any Turing machine. These do not include the computation of non-recursive functions, but they do include \u2018quantum parallelism\u2019, a method by which certain probabilistic tasks can be performed faster by a universal quantum computer than by any classical restriction of it. The intuitive explanation of these properties places an intolerable strain on all interpretations of quantum theory other than Everett\u2019s. Some of the numerous connections between the quantum theory of computation and the rest of physics are explored. Quantum complexity theory allows a physically more reasonable definition of the \u2018complexity\u2019 or \u2018knowledge\u2019 in a physical system than does classical complexity theory. Current address: Centre for Quantum Computation, Clarendon Laboratory, Department of Physics, Parks Road, OX1 3PU Oxford, United Kingdom. Email: david.deutsch@qubit.org This version (Summer 1999) was edited and converted to L TEX by Wim van Dam at the Centre for Quantum Computation. Email:wimvdam@qubit.org 1 Computing machines and the Church-Turing principle The theory of computing machines has been extensively developed during the last few decades. Intuitively, a computing machine is any physical system whose dynamical evolution takes it from one of a set of \u2018input\u2019 states to one of a set of \u2018output\u2019 states. The states are labelled in some canonical way, the machine is prepared in a state with a given input label and then, following some motion, the output state is measured. For a classical deterministic system the measured output label is a definite functionf of the prepared input label; moreover the value of that label can in principle be measured by an outside observer (the \u2018user\u2019) and the machine is said to \u2018compute\u2019the functionf . Two classical deterministic computing machines are \u2018computationally equivalent\u2019 under given labellings of their input and output states if they compute the same function under those labellings. But quantum computing machines, and indeed classical stochastic computing machines, do not \u2018compute functions\u2019 in the above sense: the output state of a stochastic machine is random with only the probability distribution function for the possible outputs depending on the input state. The output state of a quantum machine, although fully determined by the input state is not an observable and so the user cannot in general discover its label. 
Nevertheless, the notion of computational equivalence can be generalized to apply to such machines also. Again we define computational equivalence under given labellings,but it is now necessary to specify more precisely what is to be labelled. As far as the input is concerned, labels must be given for each of the possible ways of preparing the machine, which correspond, by definition, to all the possible input states. This is identical with the classical deterministic case. However, there is an asymmetry between input and output because there is an asymmetry between preparation and measurement: whereas a quantum system can be prepared in any desired permitted input state, measurement cannot in general determine its output state; instead one must measure the value of some observable. (Throughout this paper I shall be using the Schr \u0308 odinger picture, in which the quantum state is a function of time but observables are constant operators.) Thus what must be labelled is the set of ordered pairs consisting of an output observable and a possible measured value of that observable (in quantum theory, a Hermitian operator and one of its eigenvalues). Such an ordered pair contains, in effect, the specification of a possible experiment that could be made on the output, together with a possible result of that experiment. Two computing machines are computationally equivalent under given labellings if in any possible experiment or sequence of experiments in which their inputs were prepared equivalently under the input labellings, and observables corresponding to each other under the output labellings were measured, the measured values of these observables for the two machines would be statistically indistinguishable. That is, the probability distribution functions for the outputs of the two machines would be identical. In the sense just described, a given computing machine M computes at most one function. However, there ought to be no fundamental difference between altering the input state in which M is prepared, and altering systematically the constitution of M so that it becomes a different machine M0 computing a different function. To formalize such operations, it is often useful to consider machines with two inputs, the preparation of one constituting a \u2018program\u2019 determining which function of the other is to be computed. To each such machine M there corresponds a set C (M) of \u2018M-computable functions\u2019. A functionf isM-computable ifM can computef when prepared with some program. The set C(M) can be enlarged by enlarging the set of changes in the constitution of M that are labelled as possible M-programs. Given two machines M andM0 it is possible to construct a composite machine whose set of computable functions contains the union of C (M) and C(M0). There is no purely logical reason why one could not go on ad infinitumbuilding more powerful"}
{"_id": "f311ee943b38f6b0045d0f381aed421919581c15", "title": "Quantum Inspired Genetic Algorithms", "text": "|A novel evolutionary computing method | quantum inspired genetic algorithms | is introduced, where concepts and principles of quantum mechanics are used to inform and inspire more eecient evolutionary computing methods. The basic terminology of quantum mechanics is introduced before a comparison is made between a classical genetic algorithm and a quantum inspired method for the travelling salesperson problem. It is informally shown that the quantum inspired genetic algorithm performs better than the classical counterpart for a small domain. The paper concludes with some speculative comments concerning the relationship between quantum inspired genetic algorithms and various complexity classes."}
{"_id": "6902cb196ec032852ff31cc178ca822a5f67b2f2", "title": "Algorithms for Quantum Computation: Discrete Log and Factoring Extended Abstract", "text": "This paper gives algorithms for the discrete log and the factoring problems that take random polynomial time on a quantum computer (thus giving the rst examples of quantum cryptanalysis)."}
{"_id": "c2f61c998295caef43878fb46c91694a022ea00e", "title": "A Path-Tracking Criterion for an LHD Articulated Vehicle", "text": "A path-tracking criterion for the so-called LHD (load-haul-dump) truck used in underground mining is proposed in this paper. It exploits the particular configuration of this vehicle, composed of two units connected by an actuated articulation. The task is to follow the path represented by the middle of the tunnel, maintaining the whole vehicle at a reduced distance from the path itself, to decrease the risk of crashes against the walls of the tunnel. This is accomplished via feedback through the synthesis of an appropriate path-tracking criterion. The criterion is based on monitoring the distances of the midpoints of both axles of the vehicle from their orthogonal projections on the path, using two different moving frames simultaneously. Local asymptotic stability to paths of constant curvature is achieved by means of linear-state feedback. KEY WORDS\u2014mobile robots, articulated vehicles, mining trucks, path tracking, off-tracking, feedback stabilization"}
{"_id": "b8d04c10b72b0a1de52d99057d2d2cb7a06e0261", "title": "A Multilingual Multimedia Indian Sign Language Dictionary Tool", "text": "This paper presents a cross platform multilingual multimedia Indian Sign Language (ISL) dictionary building tool. ISL is a linguistically under-investigated language with no source of well documented electronic data. Research on ISL linguistics also gets hindered due to a lack of ISL knowledge and the unavailability of any educational tools. Our system can be used to associate signs corresponding to a given text. The current system also facilitates the phonological annotation of Indian signs in the form of HamNoSys structure. The generated HamNoSys string can be given as input to an avatar module to produce an animated sign representation."}
{"_id": "68c95dd9665a3ca3be70f1aa5aea73136a0dc6cc", "title": "Detection and Recognition of Malaysian Special License Plate Based On SIFT Features", "text": "Automated car license plate recognition systems are developed and applied for purpose of facilitating the surveillance, law enforcement, access control and intelligent transportation monitoring with least human intervention. In this paper, an algorithm based on SIFT feature points clustering and matching is proposed to address the issue of recognizing Malaysian special plates. These special plates do not follow the normal standard car plates\u2019 format as they may contain italic, cursive, connected and small letters. The algorithm is tested with 150 Malaysian special plate images under different environment and the promising experimental results demonstrate that the proposed algorithm is relatively robust. Keywords-license plate recognition, scale invariant feature transform (SIFT), feature extraction, homography, special license plate."}
{"_id": "5df0750546ed2b544461c39ae6b17ee2277accd4", "title": "The quality of online social relationships", "text": "Online relationships are less valuable than offline ones. Indeed, their net benefit depends on whether they supplement or substitute for offline social relationships."}
{"_id": "275105509cf1df8344d99bb349513e637346cbbf", "title": "Low power and low voltage VT extractor circuit and MOSFET radiation dosimeter", "text": "This work discusses two fundamental blocks of an in vivo MOS dosimeter, namely the radiation sensor and the VT-extractor circuit. It is shown that threshold extractor circuits based on an all-region MOSFET model are very appropriate for low power design. The accuracy of the extractor circuits allows using the PMOS transistors of the integrated circuit CD4007 as the radiation sensor in a dosimeter for radiotherapy applications."}
{"_id": "1235bc3c1234d78a26779f60c81fe159a072280a", "title": "Control of hybrid AC/DC microgrid involving energy storage, renewable energy and pulsed loads", "text": "This paper proposes the coordinated control of a hybrid AC/DC power system with renewable energy source, energy storages and critical loads. The hybrid microgrid consists of both AC and DC sides. A synchronous generator and a PV farm supply power to the system's AC and DC sides, respectively. A bidirectional fully controlled AC/DC converter with active and reactive power decoupling technique is used to link the AC bus with the DC bus while regulating the system voltage and frequency. A DC/DC boost converter with a maximum power point tracking (MPPT) function is implemented to maximize the energy generation from the PV farm. Current controlled bidirectional DC/DC converters are applied to connect each lithium-ion battery bank to the DC bus. Lithium-ion battery banks act as energy storage devices that serve to increase the system stability by absorbing or injecting power to the grid as ancillary services. The proposed system can function in both grid-connected mode and islanding mode. Power electronic converters with different control strategies are analyzed in both modes in detail. Simulation results in MATLAB Simulink verify that the proposed topology is coordinated for power management in both AC and DC sides under critical loads with high efficiency, reliability, and robustness under both grid-connected and islanding modes."}
{"_id": "18767cf566f3654318814a18f418325047167e06", "title": "Does the Tyndall effect describe the blue hue periodically observed in subdermal hyaluronic acid gel placement?", "text": "BACKGROUND\nThe blue hue of skin overlying injected hyaluronic acid (HA) fillers in certain cases has been hypothesized in the literature as related to the Tyndall effect. This investigation aims to understand the relevant optical concepts and to discuss the plausibility of this assertion.\n\n\nMETHODS\nTheoretic and physical aspects of relevant optical theories including the Tyndall effect, the Raleigh criterion and the Mie Solution are discussed, with simple examples. The physical properties of the system (both HA and subcutaneous tissue) are explored. Alternate concepts of dermal hue generation are discussed.\n\n\nRESULTS\nThe Tyndall effect (and Rayleigh criterion) describe optical phenomenon that occur as light passes through colloidal solutions containing uniform spherical particles of sizes less than the length of a wavelength of visible light. HA fillers are complex, large, non-spherical, cross-linked hydrogels, and thus are not well characterized by these theories.Skin is a complex optical surface in which shorter wavelengths of light are selectively filtered at superficial depths. Light passing through to subdermal HA would have low blue light amplitude, minimizing what light could be preferentially scattered. Further, should blue hues be 'generated' subdermally, the same skin filters work in reverse, making the blue light poorly detectable by an external observer.\n\n\nCONCLUSIONS\nThe Tyndall effect is unlikely to cause dermal hue changes in HA filler instillation. Optical and perceptual processes explaining superficial vein coloration may better describe subdermal HA hue changes. Vein coloration is thought to be related to three processes: the reflective properties of the skin, the absorptive properties of blood and the perceptive properties of an observer's eyes. Subdermal HA may simulate these phenomena by a number of undetermined, yet plausible mechanisms."}
{"_id": "96e51a29148b08ff9d4f40ef93a0522522adf919", "title": "Character-Based Text Classification using Top Down Semantic Model for Sentence Representation", "text": "Despite the success of deep learning on many fronts especially image and speech, its application in text classification often is still not as good as a simple linear SVM on n-gram TF-IDF representation especially for smaller datasets. Deep learning tends to emphasize on sentence level semantics when learning a representation with models like recurrent neural network or recursive neural network, however from the success of TF-IDF representation, it seems a bag-of-words type of representation has its strength. Taking advantage of both representions, we present a model known as TDSM (Top Down Semantic Model) for extracting a sentence representation that considers both the word-level semantics by linearly combining the words with attention weights and the sentence-level semantics with BiLSTM and use it on text classification. We apply the model on characters and our results show that our model is better than all the other characterbased and word-based convolutional neural network models by (Zhang et al., 2015) across seven different datasets with only 1% of their parameters. We also demonstrate that this model beats traditional linear models on TF-IDF vectors on small and polished datasets like news article in which typically deep learning models surrender."}
{"_id": "9e549115438a49e468d2ee129c735058d1cc1623", "title": "Treating the Banana Fold with the Dermotuberal Anchorage Technique: Case Report", "text": "The banana fold, or the infragluteal fold, is a fat deposit on the posterior thigh close to the gluteal crease and parallel to it. A banana fold may form for different reasons, among which an iatrogenic cause is recurrent. Although banana fold is a common problem, unrelished by most women, few procedures are targeted specifically to fight it. This report presents a severe case of iatrogenic banana fold corrected by a modification of the dermotuberal anchorage buttock-lifting technique, as reported by the author for gluteal ptosis. The operation is performed by tucking in part of the banana fold tissue caudal to the gluteal crease, sliding that tissue, after depithelization, under the buttock, and pulling it up toward the ischial tuberosity until the redundant skin on the posterior thigh is tight and the banana fold is reduced. Assessment of the results 1 year after surgery showed that the technique provided a good scar kept within the subgluteal crease, and that it satisfactorily corrected the patient\u2019s major complaint: the banana-shaped fold."}
{"_id": "7e0c812ee6725644404472088debd56e9f756b75", "title": "Sliding spotlight SAR processing for TerraSAR-X using a new formulation of the extended chirp scaling algorithm", "text": "This paper describes the sliding spotlight algorithm and the processing strategy to be applied for TerraSAR-X. The steering spotlight geometry is analysed. Analysis of scene size and resolution demonstrates the particularities of this mode. The Doppler frequencies for individual targets and for a whole sliding spotlight scene are analyzed and the result shows the applicability of azimuth subaperture processing to sliding spotlight data. A description of the Extended Chirp Scaling Algorithm for sliding spotlight is presented."}
{"_id": "1c1b5fd282d3a1fe03a671f7d13092d49cb31139", "title": "Enhancing Knowledge Graph Embedding with Probabilistic Negative Sampling", "text": "Link Prediction using Knowledge graph embedding projects symbolic entities and relations into low dimensional vector space, thereby learning the semantic relations between entities. Among various embedding models, there is a series of translation-based models such as TransE [1], TransH [2], and TransR[3]. This paper proposes modifications in the TransR model to address the issue of skewed data which is common in real-world knowledge graphs. The enhancements enable the model to smartly generate corrupted triplets during negative sampling, which significantly improves the training time and performance of TransR. The proposed approach can be applied to other translationbased models."}
{"_id": "6871999e2800a1d8b0edd7380795f4758fb1b3a5", "title": "A survey of wireless sensor technologies applied to precision agriculture", "text": "This paper gives a state-of-art of wireless sensor network (WSN) technologies and solutions applied to precision agriculture (PA). The paper first considers applications and existing experiences that show how WSN technologies have been introduced in to agricultural applications. Then, a survey in hardware and software solutions is related with special emphasis on technological aspects. Finally, the paper shows how five networking and technological solutions may impact the next generation of sensors. These are (i) scalar wireless sensor networks, (ii) wireless multimedia sensor networks, (iii) Mobility of nodes, (iv) tag-based systems, and (v) smart-phone applications."}
{"_id": "101c0e09d533b738d83a3740f1f6e49ab2984e55", "title": "Neural Clinical Paraphrase Generation with Attention", "text": "Paraphrase generation is important in various applications such as search, summarization, and question answering due to its ability to generate textual alternatives while keeping the overall meaning intact. Clinical paraphrase generation is especially vital in building patient-centric clinical decision support (CDS) applications where users are able to understand complex clinical jargons via easily comprehensible alternative paraphrases. This paper presents Neural Clinical Paraphrase Generation (NCPG), a novel approach that casts the task as a monolingual neural machine translation (NMT) problem. We propose an end-to-end neural network built on an attention-based bidirectional Recurrent Neural Network (RNN) architecture with an encoderdecoder framework to perform the task. Conventional bilingual NMT models mostly rely on word-level modeling and are often limited by out-of-vocabulary (OOV) issues. In contrast, we represent the source and target paraphrase pairs as character sequences to address this limitation. To the best of our knowledge, this is the first work that uses attention-based RNNs for clinical paraphrase generation and also proposes an end-to-end character-level modeling for this task. Extensive experiments on a large curated clinical paraphrase corpus show that the attention-based NCPG models achieve improvements of up to 5.2 BLEU points and 0.5 METEOR points over a non-attention based strong baseline for word-level modeling, whereas further gains of up to 6.1 BLEU points and 1.3 METEOR points are obtained by the character-level NCPG models over their word-level counterparts. Overall, our models demonstrate comparable performance relative to the state-of-the-art phrase-based non-neural models."}
{"_id": "88d946cf47f322c1131efad3b72ff45e372c68f1", "title": "ViBe: A Disruptive Method for Background Subtraction", "text": "The proliferation of the video surveillance cameras, which accounts for the vast majority of cameras worldwide, has resulted in the need to nd methods and algorithms for dealing with the huge amount of information that is gathered every second. This encompasses processing tasks, such as raising an alarm or detouring moving objects, as well as some semantic tasks like event monitoring, trajectory or ow analysis, counting people, etc."}
{"_id": "18d8f0b1ff6c3cf0f110daa2af9c6f91d260d326", "title": "A Large Scale Ranker-Based System for Search Query Spelling Correction", "text": "This paper makes three significant extensions to a noisy channel speller designed for standard written text to target the challenging domain of search queries. First, the noisy channel model is subsumed by a more general ranker, which allows a variety of features to be easily incorporated. Second, a distributed infrastructure is proposed for training and applying Web scale n-gram language models. Third, a new phrase-based error model is presented. This model places a probability distribution over transformations between multi-word phrases, and is estimated using large amounts of query-correction pairs derived from search logs. Experiments show that each of these extensions leads to significant improvements over the stateof-the-art baseline methods."}
{"_id": "8b8ab057fec81fda064b0c315b239646635ca4fb", "title": "Unifying Logical and Statistical AI", "text": "Intelligent agents must be able to handle the complexity and uncertainty of the real world. Logical AI has focused mainly on the former, and statistical AI on the latter. Markov logic combines the two by attaching weights to first-order formulas and viewing them as templates for features of Markov networks. Inference algorithms for Markov logic draw on ideas from satisfiability, Markov chain Monte Carlo and knowledge-based model construction. Learning algorithms are based on the voted perceptron, pseudo-likelihood and inductive logic programming. Markov logic has been successfully applied to a wide variety of problems in natural language understanding, vision, computational biology, social networks and others, and is the basis of the open-source Alchemy system."}
{"_id": "cdefa98415b1f072989214b62e51073c82304e33", "title": "The role of autophagy in cancer development and response to therapy", "text": "Autophagy is a process in which subcellular membranes undergo dynamic morphological changes that lead to the degradation of cellular proteins and cytoplasmic organelles. This process is an important cellular response to stress or starvation. Many studies have shed light on the importance of autophagy in cancer, but it is still unclear whether autophagy suppresses tumorigenesis or provides cancer cells with a rescue mechanism under unfavourable conditions. What is the present state of our knowledge about the role of autophagy in cancer development, and in response to therapy? And how can the autophagic process be manipulated to improve anticancer therapeutics?"}
{"_id": "0166193eba341df8580c22e36b155671dc1eccc5", "title": "Optical Character Recognition for Cursive Handwriting", "text": "\u00d0In this paper, a new analytic scheme, which uses a sequence of segmentation and recognition algorithms, is proposed for offline cursive handwriting recognition problem. First, some global parameters, such as slant angle, baselines, and stroke width and height are estimated. Second, a segmentation method finds character segmentation paths by combining gray scale and binary information. Third, Hidden Markov Model (HMM) is employed for shape recognition to label and rank the character candidates. For this purpose, a string of codes is extracted from each segment to represent the character candidates. The estimation of feature space parameters is embedded in HMM training stage together with the estimation of the HMM model parameters. Finally, the lexicon information and HMM ranks are combined in a graph optimization problem for word-level recognition. This method corrects most of the errors produced by segmentation and HMM ranking stages by maximizing an information measure in an efficient graph search algorithm. The experiments in dicate higher recognition rates compared to the available methods reported in the literature. Index Terms\u00d0Handwritten word recognition, preprocessing, segmentation, optical character recognition, cursive handwriting, hidden Markov model, search, graph, lexicon matching."}
{"_id": "249031d9eedd92cc5fbe509dd6e7bf2faaea5bbb", "title": "IGLOO: Slicing the Features Space to Represent Long Sequences", "text": "Up until recently Recurrent neural networks (RNNs) have been the standard go-to component when processing sequential data with neural networks. Issues relative to vanishing gradient have been partly addressed by Long short-term memory (LSTM) and gated recurrent unit (GRU), but in practice experiments show that very long terms dependencies (beyond 1000 time steps) are difficult to learn. We introduce IGLOO, a new neural network architecture which aims at being faster than both LSTM and GRU but also their respective CuDNN optimized versions when convergence happens and provide an alternative in the case when there is no convergence at all. IGLOO\u2019s core idea is to use the relationships between patches sliced out of the features maps of convolution layers at different levels of granularity to build a representation for the sequence. We show that the model can deal with dependencies of more than 25,000 steps in a reasonable time frame. Beyond the well bench-marked copy-memory and addition problems which show good results, we also achieve best recorded accuracy on permuted MNIST (98.4%). IGLOO is also applied on the IMDB set and on the TUH Abnormal EEG Corpus."}
{"_id": "a65476a4fadd112b64f519a6f51a71f6077ed0ae", "title": "Wafer level fabrication method of hemispherical reflector coupled micro-led array stimulator for optogenetics", "text": "We report a monolithic wafer-level fabrication method for a hemispherical reflector coupled light-emitting-diode (LED) array using isotropic etching of silicon. These neural stimulators collect the backside as well as the front side emission of the \u03bc-LEDs and thus provide higher intensity, which is imperative for opsin expressions in optogenetics experiments. Aluminum was used as the reflective layer and the planarization of polymer on the reflector cavity was done using polydimethylsiloxane (PDMS). The lateral and vertical profiles of silicon etching were measured and the light intensity increase due to the reflector was investigated. It was found that the intensity increases by a minimum of 49% and maximum of 65% when coupling a reflector with the \u03bc-LEDs."}
{"_id": "6f283845b860519af101e0e5c151cfebd528773f", "title": "Improved extraction of vegetable oils under high-intensity ultrasound and/or microwaves.", "text": "Ultrasound-assisted extraction (UAE) and microwave-assisted extraction (MAE) techniques have been employed as complementary techniques to extract oils from vegetable sources, viz, soybean germ and a cultivated marine microalga rich in docosahexaenoic acid (DHA). Ultrasound (US) devices developed by ourselves, working at several frequencies (19, 25, 40 and 300 kHz), were used for US-based protocols, while a multimode microwave (MW) oven (operating with both open and closed vessels) was used for MAE. Combined treatments were also studied, such as simultaneous double sonication (at 19 and 25 kHz) and simultaneous US/MW irradiation, achieved by inserting a non-metallic horn in a MW oven. Extraction times and yields were compared with those resulting from conventional procedures. With soybean germ the best yield was obtained with a 'cavitating tube' prototype (19 kHz, 80 W), featuring a thin titanium cylinder instead of a conventional horn. Double sonication, carried out by inserting an immersion horn (25 kHz) in the same tube, improved the yield only slightly but halved the extraction time. Almost comparable yields were achieved by closed-vessel MAE and simultaneous US/MW irradiation. Compared with conventional methods, extraction times were reduced by up to 10-fold and yields increased by 50-500%. In the case of marine microalgae, UAE worked best, as the disruption by US of the tough algal cell wall considerably improved the extraction yield from 4.8% in soxhlet to 25.9%. Our results indicate that US and MW, either alone or combined, can greatly improve the extraction of bioactive substances, achieving higher efficiency and shorter reaction times at low or moderate costs, with minimal added toxicity."}
{"_id": "b824c38819f67f9a9d2e2e77e2767be340221173", "title": "The Complexity of Temporal Logic Model Checking", "text": "Temporal logic. Logical formalisms for reasoning about time and the timing of events appear in several fields: physics, philosophy, linguistics, etc. Not surprisingly, they also appear in computer science, a field where logic is ubiquitous. Here temporal logics are used in automated reasoning, in planning, in semantics of programming languages, in artificial intelligence, etc. There is one area of computer science where temporal logic has been unusually successful: the specification and verification of programs and systems, an area we shall just call \u201cprogramming\u201d for simplicity. In today\u2019s curricula, thousands of programmers first learn about temporal logic in a course on model checking! Temporal logic and programming. Twenty five years ago, Pnueli identified temporal logic as a very convenient formal language in which to state, and reason about, the behavioral properties of parallel programs and more generally reactive systems [Pnu77, Pnu81]. Indeed, correctness for these systems typically involves reasoning upon related events at different moments of a system execution [OL82]. Furthermore, when it comes to liveness properties, the expected behavior of reactive systems cannot be stated as a static property, or as an invariant one. Finally, temporal logic is well suited to expressing the whole variety of fairness properties that play such a prominent role in distributed systems [Fra86]. For these applications, one usually restricts oneself to propositional temporal logic: on the one hand, this does not appear to be a severe limitation in practice, and on the other hand, this restriction allows decision procedures for validity and entailment, so that, at least in principle, the above-mentioned reasoning can be automated. Model checking. Generally speaking, model checking is the algorithmic verification that a given logic formula holds in a given structure (the model"}
{"_id": "be6789bd46d16afa45c8962560a56a89a9089355", "title": "Stereo-Based Autonomous Navigation and Obstacle Avoidance*", "text": "This paper presents a stereo vision-based autonomous navigation system using a GPS and a modified version of the VFH algorithm. In order to obtain a high-accuracy disparity map and meet the time constraints of the real time navigation system, this work proposes the use of a semi-global stereo method. By not suffering the same issues of the regularly used local stereo methods, the employed stereo technique enables the generation of a highly dense, efficient, and accurate disparity map. Obstacles are detected using a method that checks for relative slopes and heights differences. Experimental tests using an electric vehicle in an urban environment were performed to validate the proposed approach."}
{"_id": "b73201c7da4de20aa3ea2a130f9558e19cbf33b3", "title": "Practical electrical parameter aware methodology for analog designers with emphasis on LDE aware for devices", "text": "The device layout structure has proven to have profound effects to its electrical characteristics for advanced technology nodes, which, if not taken into account during the design cycle, will have devastating impact to the circuit functionality. A new design methodology is presented in this paper, which can help circuit designers identify early in the design stage the performance implication due to shift of critical device instance parameters from its layout."}
{"_id": "61f4f67fc0e73fa3aef8628aae53a4d9b502d381", "title": "Understanding Mutual Information and its Use in InfoGAN", "text": "Interpretable variables are useful in generative models. Generative Adversarial Networks (GANs) are generative models that are flexible in their input. The Information Maximizing GAN (InfoGAN) ties the output of the generator to a component of its input called the latent codes. By forcing the output to be tied to this input component, we can control some properties of the output representation. It is notoriously difficult to find the Nash equilibrium when jointly training the discriminator and generator in a GAN. We uncover some successful and unsuccessful configurations for generating images using InfoGAN."}
{"_id": "1ba779d5a5c9553ee8ecee5cf6bafb4b494ea7bc", "title": "On lightweight mobile phone application certification", "text": "Users have begun downloading an increasingly large number of mobile phone applications in response to advancements in handsets and wireless networks. The increased number of applications results in a greater chance of installing Trojans and similar malware. In this paper, we propose the Kirin security service for Android, which performs lightweight certification of applications to mitigate malware at install time. Kirin certification uses security rules, which are templates designed to conservatively match undesirable properties in security configuration bundled with applications. We use a variant of security requirements engineering techniques to perform an in-depth security analysis of Android to produce a set of rules that match malware characteristics. In a sample of 311 of the most popular applications downloaded from the official Android Market, Kirin and our rules found 5 applications that implement dangerous functionality and therefore should be installed with extreme caution. Upon close inspection, another five applications asserted dangerous rights, but were within the scope of reasonable functional needs. These results indicate that security configuration bundled with Android applications provides practical means of detecting malware."}
{"_id": "41289566ac0176dced2312f813328ad4c0552618", "title": "DroidScope: Seamlessly Reconstructing the OS and Dalvik Semantic Views for Dynamic Android Malware Analysis", "text": "The prevalence of mobile platforms, the large market share of Android, plus the openness of the Android Market makes it a hot target for malware attacks. Once a malware sample has been identified, it is critical to quickly reveal its malicious intent and inner workings. In this paper we present DroidScope, an Android analysis platform that continues the tradition of virtualization-based malware analysis. Unlike current desktop malware analysis platforms, DroidScope reconstructs both the OSlevel and Java-level semantics simultaneously and seamlessly. To facilitate custom analysis, DroidScope exports three tiered APIs that mirror the three levels of an Android device: hardware, OS and Dalvik Virtual Machine. On top of DroidScope, we further developed several analysis tools to collect detailed native and Dalvik instruction traces, profile API-level activity, and track information leakage through both the Java and native components using taint analysis. These tools have proven to be effective in analyzing real world malware samples and incur reasonably low performance overheads."}
{"_id": "9e1bcd6414fc6fdd3b63aab48cc3732dc761f538", "title": "AppIntent: analyzing sensitive data transmission in android for privacy leakage detection", "text": "Android phones often carry personal information, attracting malicious developers to embed code in Android applications to steal sensitive data. With known techniques in the literature, one may easily determine if sensitive data is being transmitted out of an Android phone. However, transmission of sensitive data in itself does not necessarily indicate privacy leakage; a better indicator may be whether the transmission is by user intention or not. When transmission is not intended by the user, it is more likely a privacy leakage. The problem is how to determine if transmission is user intended. As a first solution in this space, we present a new analysis framework called AppIntent. For each data transmission, AppIntent can efficiently provide a sequence of GUI manipulations corresponding to the sequence of events that lead to the data transmission, thus helping an analyst to determine if the data transmission is user intended or not. The basic idea is to use symbolic execution to generate the aforementioned event sequence, but straightforward symbolic execution proves to be too time-consuming to be practical. A major innovation in AppIntent is to leverage the unique Android execution model to reduce the search space without sacrificing code coverage. We also present an evaluation of AppIntent with a set of 750 malicious apps, as well as 1,000 top free apps from Google Play. The results show that AppIntent can effectively help separate the apps that truly leak user privacy from those that do not."}
{"_id": "05ca17ffa777f64991a8da04f2fd03880ac51236", "title": "Towards automatic generation of vulnerability-based signatures", "text": "In this paper we explore the problem of creating vulnerability signatures. A vulnerability signature matches all exploits of a given vulnerability, even polymorphic or metamorphic variants. Our work departs from previous approaches by focusing on the semantics of the program and vulnerability exercised by a sample exploit instead of the semantics or syntax of the exploit itself. We show the semantics of a vulnerability define a language which contains all and only those inputs that exploit the vulnerability. A vulnerability signature is a representation (e.g., a regular expression) of the vulnerability language. Unlike exploit-based signatures whose error rate can only be empirically measured for known test cases, the quality of a vulnerability signature can be formally quantified for all possible inputs. We provide a formal definition of a vulnerability signature and investigate the computational complexity of creating and matching vulnerability signatures. We also systematically explore the design space of vulnerability signatures. We identify three central issues in vulnerability-signature creation: how a vulnerability signature represents the set of inputs that may exercise a vulnerability, the vulnerability coverage (i.e., number of vulnerable program paths) that is subject to our analysis during signature creation, and how a vulnerability signature is then created for a given representation and coverage. We propose new data-flow analysis and novel adoption of existing techniques such as constraint solving for automatically generating vulnerability signatures. We have built a prototype system to test our techniques. Our experiments show that we can automatically generate a vulnerability signature using a single exploit which is of much higher quality than previous exploit-based signatures. In addition, our techniques have several other security applications, and thus may be of independent interest"}
{"_id": "07356c5477c83773bd062b525f45c433e5b044e8", "title": "Computer security technology planning study", "text": "Approved for pubJic reJease; distribution unri mited. When U.S. Government drawings, specifications or other data are used for any purpose other than a definitely related government procurement operation, the government thereby incurs no responsibility nor any obligation whatsoever; and the fact that the government may have formulated, fui\u00b7nished, or in any way sup\u00ad plied the said drawings, specifications, or other data is not to be regarded by implication or otherwise as in any manner licensing the holder or any other person or conveying any rights or permission to manufacture, use, or sell any patented invention that may in any way be related thereto."}
{"_id": "befc5d5391e2f17464f480383fda4672900882ec", "title": "Ghost hunters: ambient light and hand gesture utilization in mobile augmented reality games", "text": "This paper presents the current status of work-in-progress in developing Ghost Hunters that is targeted to explore the possibilities of incorporating gesture control and ambient lighting in a mobile augmented reality game played with smart glasses. We present the design and implementation of the current prototype, together with a small feasibility evaluation of battery consumption and gesture control."}
{"_id": "f4bff7e109e5c213a23b993614ef5d9431dca8ec", "title": "An Expressive (Zero-Knowledge) Set Accumulator", "text": "We present a new construction of an expressive set accumulator. Unlike existing cryptographic accumulators, ours provides succinct proofs for a large collection of operations over accumulated sets, including intersection, union, set difference, SUM, COUNT, MIN, MAX, and RANGE, as well as arbitrary nestings of the above. We also show how to extend our accumulator to be zero-knowledge. The security of our accumulator is based on extractability assumptions and other assumptions that hold in the generic group model. Our construction has asymptotically optimal verification complexity and proof size, constant update complexity, and public verifiability/updatability\u2014namely, any client who knows the public key and the last accumulator value can verify the supported operations and update the accumulator. The expressiveness of our accumulator comes at the cost of quadratic prover time. However, we show that the cryptographic operations involved are cheap compared to those incurred by generic approaches (e.g., SNARKs) that are equally expressive: our prover runs faster for sets of up to 5 million items. Our accumulator serves as a powerful cryptographic tool with many applications. For example, it can be applied to efficiently support verification of a rich collection of SQL queries when used as a drop-in replacement in existing verifiable database systems (e.g., IntegriDB, CCS 2015)."}
{"_id": "71501c2b1d00229fbe7356e5aa0c6a343a1c1c6e", "title": "Effectiveness of fundamental frequency (F0) and strength of excitation (SOE) for spoofed speech detection", "text": "Current countermeasures used in spoof detectors (for speech synthesis (SS) and voice conversion (VC)) are generally phase-based (as vocoders in SS and VC systems lack phase-information). These approaches may possibly fail for non-vocoder or unit-selection-based spoofs. In this work, we explore excitation source-based features, i.e., fundamental frequency (F0) contour and strength of excitation (SoE) at the glottis as discriminative features using GMM-based classification system. We use F0 and SoE1 estimated from speech signal through zero frequency (ZF) filtering method. Further, SoE2 is estimated from negative peaks of derivative of glottal flow waveform (dGFW) at glottal closure instants (GCIs). On the evaluation set of ASVspoof 2015 challenge database, the F0 and SoEs features along with its dynamic variations achieve an Equal Error Rate (EER) of 12.41%. The source features are fused at score-level with MFCC and recently proposed cochlear filter cepstral coefficients and instantaneous frequency (CFCCIF) features. On fusion with MFCC (CFCCIF), the EER decreases from 4.08% to 3.26% (2.07% to 1.72%). The decrease in EER was evident on both known and unknown vocoder-based attacks. When MFCC, CFCCIF and source features are combined, the EER further decreased to 1.61%. Thus, source features captures complementary information than MFCC and CFCCIF used alone."}
{"_id": "5d6c136219b69315e70ba2b6b07aaba30f0f568d", "title": "Quantitative trait loci for glucosinolate accumulation in Brassica rapa leaves.", "text": "Glucosinolates and their breakdown products have been recognized for their effects on plant defense, human health, flavor and taste of cruciferous vegetables. Despite this importance, little is known about the regulation of the biosynthesis and degradation in Brassica rapa. Here, the identification of quantitative trait loci (QTL) for glucosinolate accumulation in B. rapa leaves in two novel segregating double haploid (DH) populations is reported: DH38, derived from a cross between yellow sarson R500 and pak choi variety HK Naibaicai; and DH30, from a cross between yellow sarson R500 and Kairyou Hakata, a Japanese vegetable turnip variety. An integrated map of 1068 cM with 10 linkage groups, assigned to the international agreed nomenclature, is developed based on the two individual DH maps with the common parent using amplified fragment length polymorphism (AFLP) and single sequence repeat (SSR) markers. Eight different glucosinolate compounds were detected in parents and F(1)s of the DH populations and found to segregate quantitatively in the DH populations. QTL analysis identified 16 loci controlling aliphatic glucosinolate accumulation, three loci controlling total indolic glucosinolate concentration and three loci regulating aromatic glucosinolate concentrations. Both comparative genomic analyses based on Arabidopsis-Brassica rapa synteny and mapping of candidate orthologous genes in B. rapa allowed the selection of genes involved in the glucosinolate biosynthesis pathway that may account for the identified QTL."}
{"_id": "e683e46e96ec91c8725b142b9f89c8bb46c68603", "title": "Can Smartwatches Replace Smartphones for Posture Tracking?", "text": "This paper introduces a human posture tracking platform to identify the human postures of sitting, standing or lying down, based on a smartwatch. This work develops such a system as a proof-of-concept study to investigate a smartwatch's ability to be used in future remote health monitoring systems and applications. This work validates the smartwatches' ability to track the posture of users accurately in a laboratory setting while reducing the sampling rate to potentially improve battery life, the first steps in verifying that such a system would work in future clinical settings. The algorithm developed classifies the transitions between three posture states of sitting, standing and lying down, by identifying these transition movements, as well as other movements that might be mistaken for these transitions. The system is trained and developed on a Samsung Galaxy Gear smartwatch, and the algorithm was validated through a leave-one-subject-out cross-validation of 20 subjects. The system can identify the appropriate transitions at only 10 Hz with an F-score of 0.930, indicating its ability to effectively replace smart phones, if needed."}
{"_id": "364fb0677a5d7083e56c0e38629a78cb94836f53", "title": "API design for machine learning software: experiences from the scikit-learn project", "text": "scikit-learn is an increasingly popular machine learning library. Written in Python, it is designed to be simple and efficient, accessible to non-experts, and reusable in various contexts. In this paper, we present and discuss our design choices for the application programming interface (API) of the project. In particular, we describe the simple and elegant interface shared by all learning and processing units in the library and then discuss its advantages in terms of composition and reusability. The paper also comments on implementation details specific to the Python ecosystem and analyzes obstacles faced by users and developers of the library."}
{"_id": "88239ce3e1b37544753b54cb03a92a50072d72fa", "title": "Social support: a conceptual analysis.", "text": "Using the methodology of Walker and Avant, the purpose of this paper was to identify the most frequently used theoretical and operational definitions of social support. A positive relationship between social support and health is generally accepted in the literature. However, the set of dimensions used to define social support is inconsistent. In addition, few measurement tools have established reliability and validity. Findings from this conceptual analysis suggested four of the most frequently used defining attributes of social support: emotional, instrumental, informational, and appraisal. Social network, social embeddedness, and social climate were identified as antecedents of social support. Social support consequences were subsumed under the general rubric of positive health states. Examples were personal competence, health maintenance behaviours, effective coping behaviours, perceived control, sense of stability, recognition of self-worth, positive affect, psychological well-being, and decreased anxiety and depression. Recommendations for future research were made."}
{"_id": "c517f80aa000dd91f721f0ab759ceb703546c355", "title": "Child sexual abuse in sub-Saharan Africa: a literature review.", "text": "OBJECTIVE\nThis article reviews the English-language literature on child sexual abuse in sub-Saharan Africa (SSA). The focus is on the sexual abuse of children in the home/community, as opposed to the commercial sexual exploitation of children.\n\n\nMETHODS\nEnglish language, peer-reviewed papers cited in the Social Sciences Citation Index (SSCI) are examined. Reports from international and local NGOs and UN agencies are also examined.\n\n\nRESULTS\nFew published studies on the sexual abuse of children have been conducted in the region, with the exception of South Africa. Samples are predominantly clinical or University based. A number of studies report that approximately 5% of the sample reported penetrative sexual abuse during their childhood. No national survey of the general population has been conducted. The most frequent explanations for the sexual abuse of children in SSA include rapid social change, AIDS/HIV avoidance strategies and the patriarchal nature of society. Child sexual abuse is most frequently perpetrated by family members, relatives, neighbors or others known to the child.\n\n\nCONCLUSIONS\nThere is nothing to support the widely held view that child sexual abuse is very rare in SSA-prevalence levels are comparable with studies reported from other regions. The high prevalence levels of AIDS/HIV in the region expose sexually abused children to high risks of infection. It is estimated that, approximately.6-1.8% of all children in high HIV-incidence countries in Southern Africa will experience penetrative sexual abuse by an AIDS/HIV infected perpetrator before 18 years of age."}
{"_id": "6fece3ef2da2c2f13a66407615f2c9a5b3737c88", "title": "Stabilizing Dynamic Controllers for Hybrid Systems: A Hybrid Control Lyapunov Function Approach", "text": "This paper proposes a dynamic controller structure and a systematic design procedure for stabilizing discrete-time hybrid systems. The proposed approach is based on the concept of control Lyapunov functions (CLFs), which, when available, can be used to design a stabilizing state-feedback control law. In general, the construction of a CLF for hybrid dynamical systems involving both continuous and discrete states is extremely complicated, especially in the presence of non-trivial discrete dynamics. Therefore, we introduce the novel concept of a hybrid control Lyapunov function, which allows the compositional design of a discrete and a continuous part of the CLF, and we formally prove that the existence of a hybrid CLF guarantees the existence of a classical CLF. A constructive procedure is provided to synthesize a hybrid CLF, by expanding the dynamics of the hybrid system with a specific controller dynamics. We show that this synthesis procedure leads to a dynamic controller that can be implemented by a receding horizon control strategy, and that the associated optimization problem is numerically tractable for a fairly general class of hybrid systems, useful in real world applications. Compared to classical hybrid receding horizon control algorithms, the proposed approach typically requires a shorter prediction horizon to guarantee asymptotic stability of the closed-loop system, which yields a reduction of the computational burden, as illustrated through two examples."}
{"_id": "31605e24e5bf1f89e74fa11bd1f0eb0cbf3d9bde", "title": "Familial hypertrichosis cubiti: hairy elbows syndrome.", "text": "Genetically determined hypertrichosis is uncommon, but several forms of familial hairiness have been described. In some kindreds the unusual hairiness has been generalized, while in others, the hypertrichosis has been confined to a specific site on the body. The purpose of this paper is to report the occurrence of undue hairiness of the elbow regions in members of an Amish family, and to discuss the genetic significance of the condition. No previous descriptions of hypertrichosis of this distribution could be found in the literature, and it is therefore suggested that the condition be termed familial hypertrichosis cubiti or the hairy elbows syndrome."}
{"_id": "3417bd36084fe39af70a2b6e6b2a71feb6218e41", "title": "Instance Based Clustering of Semantic Web Resources", "text": "The original Semantic Web vision was explicit in the need for intelligent autonomous agents that would represent users and help them navigate the Semantic Web. We argue that an essential feature for such agents is the capability to analyse data and learn. In this paper we outline the challenges and issues surrounding the application of clustering algorithms to Semantic Web data. We present several ways to extract instances from a large RDF graph and computing the distance between these. We evaluate our approaches on three different data-sets, one representing a typical relational database to RDF conversion, one based on data from a ontologically rich Semantic Web enabled application, and one consisting of a crawl of FOAF documents; applying both supervised and unsupervised evaluation metrics. Our evaluation did not support choosing a single combination of instance extraction method and similarity metric as superior in all cases, and as expected the behaviour depends greatly on the data being clustered. Instead, we attempt to identify characteristics of data that make particular methods more suitable."}
{"_id": "3cb196322801704ac37fc5c1b78469da88a957f8", "title": "An Evaluation of Fine and Gross Motor Skills in Adolescents with Down Syndromes", "text": "Down syndrome (DS) is the most common genetic cause of intellectual disability. Motor development of children with DS is delayed. The aim of the present study was to evaluate of fine and gross motor skills of adolescents with DS. The study sample of a total of 34 participants aged between 14 to 20 years comprised 16 adolescents with DS and a normally developing group of 18 adolescents without DS. Fine and gross motor skills of the participants were assessed by Bruininks-Oseretsky Test of Motor Proficiency, second edition short form (BOT-2 SF). The highest score of the test is 88 and higher score shows higher performance. The average age of adolescents with DS and without DS who participated in the study were 17.06\u00b12.79 and 16.56\u00b11.09, respectively. All partcipants were male. It was found a significant differences for all BOT-2 SF subtests and total scores when compared between adolescents with DS and adolescents without DS (p<0.05). Adolescents without DS has higher scores than adolescents with DS. In conclusion, both fine and gross motor skill performance of adolescents with DS is lower than normally developing adolescents. This study stresses the importance of interventions facilitating motor skills."}
{"_id": "85d9aff092d860aebf8ea5aa255b06de25a1930e", "title": "Deep Continuous Fusion for Multi-sensor 3D Object Detection", "text": "In this paper, we propose a novel 3D object detector that can exploit both LIDAR as well as cameras to perform very accurate localization. Towards this goal, we design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution. Our proposed continuous fusion layer encode both discrete-state image features as well as continuous geometric information. This enables us to design a novel, reliable and efficient end-to-end learnable 3D object detector based on multiple sensors. Our experimental evaluation on both KITTI as well as a large scale 3D object detection benchmark shows significant improvements over the state of the art."}
{"_id": "b78f5c25520e7212d34219b0c59d39cf62aaf373", "title": "Wideband Excitation Technology of ${\\rm TE}_{20}$ Mode Substrate Integrated Waveguide (SIW) and Its Applications", "text": "The higher order mode substrate integrated waveguide (SIW) has advantages in the applications, which are: simplifying the structure with reduced fabrication cost and enhancing the stability of performance by relaxed fabrication tolerance. In this paper, two wideband TE20 mode excitation structures are presented and investigated for the SIW. A slot along the mid line in the longitudinal direction of the SIW is employed to convert the electromagnetic field pattern between slotline and the TE20 mode SIW. The second structure is based on the slot aperture coupling and can realize wideband direct transition between the TE20 mode SIW and microstrip line. Both transitions have a simple and compact structure, as well as broadband characteristics. The wideband transition performance is demonstrated in this paper. As application examples of the transitions, two broadband substrate integrated baluns are presented and fabricated based on the TE20 mode excitation structures and TE20 mode characteristics. The 180 \u00b0 out-of-phase and equal magnitude of the balun outputs in a wide frequency range can be achieved inherently. The balun based on the slotline excitation has a fractional bandwidth of 50.2%, from 7.3 to 12.2 GHz, with measured return loss, amplitude imbalance, and phase imbalance better than 10 and 0.45 dB and 3.8 \u00b0. The balun based on the aperture coupling excitation shows a fractional bandwidth of 50.3%, from 7 to 11.7 GHz, with measured return loss, insertion loss, amplitude imbalance, and phase imbalance better than 10, 1, and 0.27 dB and 2.4 \u00b0, respectively. Both baluns not only show good performance, but also demonstrate the existence of the TE20 mode in the SIW."}
{"_id": "9097ad840707554c7a8a5d288fcb3ce8993d5ee9", "title": "Intuitionistic fuzzy sets and L-fuzzy sets", "text": "This paper proves that the concepts of intuitionistic fuzzy sets and intuitionistic L-fuzzy sets and the concept of L-fuzzy sets are equivalent. c \u00a9 2000 Elsevier Science B.V. All rights reserved"}
{"_id": "c52591baa825539e897385e14e776cced0341c0b", "title": "Games and Agents: Designing Intelligent Gameplay", "text": "There is an attention shift within the gaming industry toward more natural (long-term) behavior of nonplaying characters (NPCs). Multiagent system research offers a promising technology to implement cognitive intelligent NPCs. However, the technologies used in game engines and multiagent platforms are not readily compatible due to some inherent differences of concerns. Where game engines focus on real-time aspects and thus propagate efficiency and central control, multiagent platforms assume autonomy of the agents. Increased autonomy and intelligence may offer benefits for a more compelling gameplay and may even be necessary for serious games. However, it raises problems when current game design techniques are used to incorporate state-of-the-art multiagent system technology. In this paper, we will focus on three specific problem areas that arise from this difference of view: synchronization, information representation, and communication. We argue that the current attempts for integration still fall short on some of these aspects. We show that to fully integrate intelligent agents in games, one should not only use a technical solution, but also a design methodology that is amenable to agents. The game design should be adjusted to incorporate the possibilities of agents early on in the process."}
{"_id": "0d80a29beb1af75efd75a8ffcde936c4d40054b4", "title": "Towards Enhancing the Security of OAuth Implementations in Smart Phones", "text": "With the roaring growth and wide adoption of smart mobile devices, users are continuously integrating with culture of the mobile applications (apps). These apps are not only gaining access to information on the smartphone but they are also able gain users' authorization to access remote servers on their behalf. The Open standard for Authorization (OAuth) is widely used in mobile apps for gaining access to user's resources on remote service providers. In this paper, we analyze the different OAuth implementations adopted by the SDKs of the popular resource providers on smartphones and demonstrate possible attacks on most OAuth implementations. By analyzing source code of more than 430 popular Android apps we summarized the trends followed by the service providers and by the OAuth development choices made by application developers. In addition, we propose an application-based OAuth Manager framework, that provides a secure OAuth flow in smartphones that is based on the concept of privilege separation and does not require high overhead."}
{"_id": "f1899628c98049a4e33af3539cd844321c572e0f", "title": "Improving Cancer Treatment via Mathematical Modeling: Surmounting the Challenges Is Worth the Effort", "text": "Drug delivery schedules are key factors in the efficacy of cancer therapies, and mathematical modeling of population dynamics and treatment responses can be applied to identify better drug administration regimes as well as provide mechanistic insights. To capitalize on the promise of this approach, the cancer field must meet the challenges of moving this type of work into clinics."}
{"_id": "3b3c153b09495e2f79dd973253f9d2ee763940a5", "title": "Unsupervised learning of feature hierarchies", "text": "The applicability of machine learning methods is often limi ted by the amount of available labeled data, and by the ability (or inability) of the de signer to produce good internal representations and good similarity measures for the input data vectors. The aim of this thesis is to alleviate these two limitations by proposing al gorithms tolearngood internal representations, and invariant feature hierarchies from u nlabeled data. These methods go beyond traditional supervised learning algorithms, and rely on unsupervised, and semi-supervised learning. In particular, this work focuses on \u201cdeep learning\u201d methods , a et of techniques and principles to train hierarchical models. Hierarchical mod els produce feature hierarchies that can capture complex non-linear dependencies among the observed data variables in a concise and efficient manner. After training, these mode ls can be employed in real-time systems because they compute the representation by a very fast forward propagation of the input through a sequence of non-linear transf ormations. When the paucity of labeled data does not allow the use of traditional supervi s d algorithms, each layer of the hierarchy can be trained in sequence starting at the bott om by using unsupervised or semi-supervised algorithms. Once each layer has been train ed, the whole system can be fine-tuned in an end-to-end fashion. We propose several unsu pervised algorithms that can be used as building block to train such feature hierarchi es. We investigate algorithms that produce sparse overcomplete representations a nd fe tures that are invariant to known and learned transformations. These algorithms are designed using the Energy-"}
{"_id": "f75256662baf6da5b6d5e11dbc6ae7992cf6a168", "title": "Deep convolutional neural network for latent fingerprint enhancement", "text": "In this work, we propose a novel latent fingerprint enhancement method based on FingerNet inspired by recent development of Convolutional Neural Network (CNN). Although CNN is achieving superior performance in many computer vision tasks from low-level image processing to high-level semantic understanding, limited attention has been paid in fingerprint community. The proposed FingerNet has three major parts: one common convolution part shared by two different deconvolution parts, which are the enhancement branch and the orientation branch. The convolution part is to extract fingerprint features particularly for enhancement purpose. The enhancement deconvolution branch is employed to remove structured noise and enhance fingerprints as its task. The orientation deconvolution branch performs the task of guiding enhancement through a multi-task learning strategy. The network is trained in the manner of pixels-to-pixels and end-to-end learning, that can directly enhance latent fingerprint as the output. We also study some implementation details such as single-task learning, multi-task learning, and the residual learning. Experimental results of the FingerNet system on latent fingerprint dataset NIST SD27 demonstrate effectiveness and robustness of the proposed method."}
{"_id": "56df27ed921375fd8224c56a8cf764cf08e7b837", "title": "IRIS performer: a high performance multiprocessing toolkit for real-time 3D graphics", "text": "This paper describes the design and implementation of IRIS Performer, a toolkit for visual simulation, virtual reality, and other real-time 3D graphics applications. The principal design goal is to allow application developers to more easily obtain maximal performance from 3D graphics workstations which feature multiple CPUs and support an immediate-mode rendering library. To this end, the toolkit combines a low-level library for high-performance rendering with a high-level library that implements pipelined, parallel traversals of a hierarchical scene graph. While discussing the toolkit architecture, the paper illuminates and addresses performance issues fundamental to immediate-mode graphics and coarse-grained, pipelined multiprocessing. Graphics optimizations focus on efficient data transfer to the graphics subsystem, reduction of mode settings, and restricting state inheritance. The toolkit's multiprocessing features solve the problems of how to partition work among multiple processes, how to synchronize these processes, and how to manage data in a pipelined, multiprocessing environment. The paper also discusses support for intersection detection, fixed-frame rates, run-time profiling and special effects such as geometric morphing."}
{"_id": "6bf187cf239e66767688ed7dd88f6a408bf465f0", "title": "Tversky loss function for image segmentation using 3D fully convolutional deep networks", "text": "Fully convolutional deep neural networks carry out excellent potential for fast and accurate image segmentation. One of the main challenges in training these networks is data imbalance, which is particularly problematic in medical imaging applications such as lesion segmentation where the number of lesion voxels is often much lower than the number of non-lesion voxels. Training with unbalanced data can lead to predictions that are severely biased towards high precision but low recall (sensitivity), which is undesired especially in medical applications where false negatives are much less tolerable than false positives. Several methods have been proposed to deal with this problem including balanced sampling, two step training, sample re-weighting, and similarity loss functions. In this paper, we propose a generalized loss function based on the Tversky index to address the issue of data imbalance and achieve much better trade-off between precision and recall in training 3D fully convolutional deep neural networks. Experimental results in multiple sclerosis lesion segmentation on magnetic resonance images show improved F2 score, Dice coefficient, and the area under the precision-recall curve in test data. Based on these results we suggest Tversky loss function as a generalized framework to effectively train deep neural networks."}
{"_id": "8bab7b810adbd52252e11cbdc10a8781050be0e8", "title": "Implementation of FC-TCR for Reactive Power Control", "text": "This paper deals with the simulation and implementation of fixed capacitor thyristor controlled reactor system(SVC) . The TCR system is simulated using MATLAB and the simulation results are presented. SVC is basically a shunt connected static var generator whose output is adjusted to exchange capacitive or inductive current so as to maintain or control specific power variable. In this paper, simple circuit model of Thyristor Controlled Reactor is modeled and simulated using MATLAB. The simulation results are presented. The current drawn by the TCR varies with the variation in the firing angle. The simulation results are compared with the theoretical results."}
{"_id": "7393c8fc67411ab4101d5e03b4b1e2ef12f99f3c", "title": "Deep Automatic Portrait Matting", "text": "We propose an automatic image matting method for portrait images. This method does not need user interaction, which was however essential in most previous approaches. In order to accomplish this goal, a new end-to-end convolutional neural network (CNN) based framework is proposed taking the input of a portrait image. It outputs the matte result. Our method considers not only image semantic prediction but also pixel-level image matte optimization. A new portrait image dataset is constructed with our labeled matting ground truth. Our automatic method achieves comparable results with state-of-the-art methods that require specified foreground and background regions or pixels. Many applications are enabled given the automatic nature of our system."}
{"_id": "94198eccf0551f7dd787c68b6236a87bddeec236", "title": "Prediction of Primary Pupil Enrollment in Government School Using Data Mining Forecasting Technique", "text": "This research concentrates upon predictive analysis of enrolled pupil using forecasting based on data mining technique. The Microsoft SQL Server Data Mining Add-ins Excel 2007 was employed as a software mining tool for predicting enrollment of pupils. The time series algorithm was used for experiment analysis. U-DISE (Unified District Information System for Education) Dataset of Aurangabad district in Maharashtra (India) was obtained from SSA (Sarva Shiksha Abhiyan) and was used for analysis. Data mining for primary pupil enrollment research provides a feasible way to analyze the trend and to addresses the interest of pupil. The dataset was studied and analyzed to forecast registered pupils in government for upcoming years. We conclude that for upcoming year, pupil strength in government school will be dropped down and number of teachers will be supersizing in the said district. Keywords\u2014 Data mining, Microsoft SQL Server Data Mining Add-ins Excel 2007, Time series algorithm, U-DISE"}
{"_id": "3625936bab70324db3bf39bc5ae57cbbc0971f8d", "title": "60-GHz Circularly Polarized U-Slot Patch Antenna Array on LTCC", "text": "This communication presents a 60-GHz wideband circularly polarized (CP) U-slot patch antenna array of 4 \u00d7 4 elements on low temperature cofired ceramic (LTCC). A CP U-slot patch antenna is used as the array element to enhance the impedance bandwidth and a stripline sequential rotation feeding scheme is applied to achieve wide axial ratio (AR) bandwidth. Meanwhile, a grounded coplanar waveguide (GCPW) to stripline transition is designed for probe station measurement. The fabricated antenna array has a dimension of 14 \u00d716 \u00d7 1.1 mm3 . The simulated and measured impedance bandwidths, AR bandwidths, and radiation patterns are investigated and compared. Measured results show that the proposed antenna array has a wide impedance bandwidth from 50.5 GHz to 67 GHz for |S11| <; -10 dB, and a wide AR bandwidth from 54 GHz to 65.5 GHz for AR <; 3 dB. In addition, it exhibits a peak gain of 16 dBi and a beam-shaped pattern with 3-dB beam width of 20\u00b0. Moreover, its AR keeps below 3 dB within the 3-dB beam width."}
{"_id": "447ce2aecdf742cf96137f8bf7355a7404489178", "title": "Wideband Millimeter-Wave Substrate Integrated Waveguide Cavity-Backed Rectangular Patch Antenna", "text": "In this letter, a new type of wideband substrate integrated waveguide (SIW) cavity-backed patch antenna and array for millimeter wave (mmW) are investigated and implemented. The proposed antenna is composed of a rectangular patch with a backed SIW cavity. In order to enhance the bandwidth and radiation efficiency, the cavity is designed to resonate at its TE210 mode. Based on the proposed antenna, a 4 \u00d7 4 array is also designed. Both the proposed antenna and array are fabricated with standard printed circuit board (PCB) process, which possess the advantage of easy integration with planar circuits. The measured bandwidth (|S11| \u2264 -10 dB) of the antenna element is larger than 15%, and that of the antenna array is about 8.7%. The measured peak gains are 6.5 dBi for the element and 17.8 dBi for the array, and the corresponding simulated radiation efficiencies are 83.9% and 74.9%, respectively. The proposed antenna and array are promising for millimeter-wave applications due to its merits of wide band, high efficiency, low cost, low profile, etc."}
{"_id": "3501023a9f262abbfa0947a04a2403c48329d432", "title": "Millimeter-Wave Microstrip Comb-Line Antenna Using Reflection-Canceling Slit Structure", "text": "A microstrip comb-line antenna is developed in the millimeter-wave band. When the element spacing is one guide wavelength for the broadside beam in the traveling-wave excitation, reflections from all the radiating elements are synthesized in phase. Therefore, the return loss increases significantly. Furthermore, re-radiation from elements due to the reflection wave degrades the design accuracy for the required radiation pattern. We propose the way to improve the reflection characteristic of the antenna with arbitrary beam directions including strictly a broadside direction. To suppress the reflection, we propose a reflection-canceling slit structure installed on the feeding line around each radiating element. A 27-element linear array antenna with a broadside beam is developed at 76.5 GHz. To confirm the feasibility of the simple design procedure, the performance is evaluated through the measurement in the millimeter-wave band."}
{"_id": "429d0dd7192450e2a52a8ae7f658a5d99222946e", "title": "Low Cost Planar Waveguide Technology-Based Dielectric Resonator Antenna (DRA) for Millimeter-Wave Applications: Analysis, Design, and Fabrication", "text": "A compact, low cost and high radiation efficiency antenna structure, planar waveguide, substrate integrated waveguide (SIW), dielectric resonator antennas (DRA) is presented in this paper. Since SIW is a high Q- waveguide and DRA is a low loss radiator, then SIW-DRA forms an excellent antenna system with high radiation efficiency at millimeter-waveband, where the conductor loss dominates. The impact of different antenna parameters on the antenna performance is studied. Experimental data for SIW-DRA, based on two different slot orientations, at millimeter-wave band are introduced and compared to the simulated HFSS results to validate our proposed antenna model. A good agreement is obtained. The measured gain for SIW-DRA single element showed a broadside gain of 5.51 dB,-19 dB maximum cross polarized radiation level, and overall calculated (simulated using HFSS) radiation efficiency of greater than 95%."}
{"_id": "21d054e67fb6d401c0c8d474eda6a2e6a22d4d93", "title": "Context-Aware Bandits", "text": "In this paper, we present a simple and efficient Context-Aware Bandit (CAB) algorithm. With CAB we attempt to craft a bandit algorithm that can capture collaborative effects and that can be easily deployed in a real-world recommendation system, where the multi-armed bandits have been shown to perform well in particular with respect to the cold-start problem. CAB utilizes a context-aware clustering technique augmenting exploration-exploitation strategies. CAB dynamically clusters the users based on the content universe under consideration. We provide a theoretical analysis in the standard stochastic multi-armed bandits setting. We demonstrate the efficiency of our approach on production and real-world datasets, showing the scalability and, more importantly, the significantly increased prediction performance against several existing state-of-theart methods."}
{"_id": "36d759b46c001c28247a03129db81bda2f3c190d", "title": "Occupational accidents aboard merchant ships.", "text": "OBJECTIVES\nTo investigate the frequency, circumstances, and causes of occupational accidents aboard merchant ships in international trade, and to identify risk factors for the occurrence of occupational accidents as well as dangerous working situations where possible preventive measures may be initiated.\n\n\nMETHODS\nThe study is a historical follow up on occupational accidents among crew aboard Danish merchant ships in the period 1993-7. Data were extracted from the Danish Maritime Authority and insurance data. Exact data on time at risk were available.\n\n\nRESULTS\nA total of 1993 accidents were identified during a total of 31 140 years at sea. Among these, 209 accidents resulted in permanent disability of 5% or more, and 27 were fatal. The mean risk of having an occupational accident was 6.4/100 years at sea and the risk of an accident causing a permanent disability of 5% or more was 0.67/100 years aboard. Relative risks for notified accidents and accidents causing permanent disability of 5% or more were calculated in a multivariate analysis including ship type, occupation, age, time on board, change of ship since last employment period, and nationality. Foreigners had a considerably lower recorded rate of accidents than Danish citizens. Age was a major risk factor for accidents causing permanent disability. Change of ship and the first period aboard a particular ship were identified as risk factors. Walking from one place to another aboard the ship caused serious accidents. The most serious accidents happened on deck.\n\n\nCONCLUSIONS\nIt was possible to clearly identify work situations and specific risk factors for accidents aboard merchant ships. Most accidents happened while performing daily routine duties. Preventive measures should focus on workplace instructions for all important functions aboard and also on the prevention of accidents caused by walking around aboard the ship."}
{"_id": "549fc15ee760ceb7569c38888b21cee1c3806148", "title": "Dynamic intra- and interhemispheric interactions during unilateral and bilateral hand movements assessed with fMRI and DCM", "text": "Any motor action results from a dynamic interplay of various brain regions involved in different aspects of movement preparation and execution. Establishing a reliable model of how these areas interact is crucial for a better understanding of the mechanisms underlying motor function in both healthy subjects and patients. We used fMRI and dynamic causal modeling to reveal the specific excitatory and inhibitory influences within the human motor system for the generation of voluntary hand movements. We found an intrinsic balance of excitatory and inhibitory couplings among core motor regions within and across hemispheres. Neural coupling within this network was specifically modulated upon uni- and bimanual movements. During unimanual movements, connectivity towards the contralateral primary motor cortex was enhanced while neural coupling towards ipsilateral motor areas was reduced by both transcallosal inhibition and top-down modulation. Bimanual hand movements were associated with a symmetric facilitation of neural activity mediated by both increased intrahemispheric connectivity and enhanced transcallosal coupling of SMA and M1. The data suggest that especially the supplementary motor area represents a key structure promoting or suppressing activity in the cortical motor network driving uni- and bilateral hand movements. Our data demonstrate that fMRI in combination with DCM allows insights into intrinsic properties of the human motor system and task-dependent modulations thereof."}
{"_id": "e66ba647600b74e05b79ac5de8a6c150e96e8f6b", "title": "Stereo HDR Disparity Map Computation Using Structured Light", "text": "In this paper, we present work in progress towards the generation of a ground truth data set for High Dynamic Range (HDR) stereo matching. The development and evaluation of novel stereo matching algorithms that are tailored to the characteristics of HDR images would greatly benefit from the availability of such reference data. We describe our laboratory setup and processing steps for acquiring multi-exposed stereo images along with a corresponding reference disparity map computed by a structured light approach. We discuss the special requirements and challenges which HDR test scenes impose on the computation of the reference disparities and show some preliminary results."}
{"_id": "5c8b6f2a503ae7179751dbebe3982594e08140ae", "title": "Metamodel for Service Design and Service Innovation: Integrating Service Activities, Service Systems, and Value Constellations", "text": "This paper presents a metamodel that addresses service system design and innovation by traversing and integrating three essential layers, service activities, service systems, and value constellations. The metamodel's approach to service systems as service-inoperation is an alternative to another currently used approach that views service systems as systems of economic exchange. The metamodel addresses service science topics including basic concepts of service science, design thinking for service systems, decomposition within service systems, and integration of IT service architecture with"}
{"_id": "2c4a486acbf10c870629d9b980600c0492e6126e", "title": "Accessing Structured Health Information through English Queries and Automatic Deduction", "text": "While much health data is available online, patients who are not technically astute may be unable to access it because they may not know the relevant resources, they may be reluctant to confront an unfamiliar interface, and they may not know how to compose an answer from information provided by multiple heterogeneous resources. We describe ongoing research in using natural English text queries and automated deduction to obtain answers based on multiple structured data sources in a specific subject domain. Each English query is transformed using natural language technology into an unambiguous logical form; this is submitted to a theorem prover that operates over an axiomatic theory of the subject domain. Symbols in the theory are linked to relations in external databases known to the system. An answer is obtained from the proof, along with an English language explanation of how the answer was obtained. Answers need not be present explicitly in any of the databases, but rather may be deduced or computed from the information they provide. Although English is highly ambiguous, the natural language technology is informed by subject domain knowledge, so that readings of the query that are syntactically plausible but semantically impossible are discarded. When a question is still ambiguous, the system can interrogate the patient to determine what meaning was intended. Additional queries can clarify earlier ones or ask questions referring to previously computed answers. We describe a prototype system, Quadri, which answers questions about HIV treatment using the Stanford HIV Drug Resistance Database and other resources. Natural language processing is provided by PARC\u2019s Bridge, and the deductive mechanism is SRI\u2019s SNARK theorem prover. We discuss some of the problems that must be faced to make this approach work, and some of our solutions."}
{"_id": "cccd96c81fb365bb5eab587c4adb1e40d9235b9b", "title": "Automatic categorization of medical images for content-based retrieval and data mining.", "text": "Categorization of medical images means selecting the appropriate class for a given image out of a set of pre-defined categories. This is an important step for data mining and content-based image retrieval (CBIR). So far, published approaches are capable to distinguish up to 10 categories. In this paper, we evaluate automatic categorization into more than 80 categories describing the imaging modality and direction as well as the body part and biological system examined. Based on 6231 reference images from hospital routine, 85.5% correctness is obtained combining global texture features with scaled images. With a frequency of 97.7%, the correct class is within the best ten matches, which is sufficient for medical CBIR applications."}
{"_id": "0559d43f582548d767663f60ddb874ae3678885c", "title": "Capturing and analyzing low-level events from the code editor", "text": "In this paper, we present FLUORITE, a publicly available event logging plug-in for Eclipse which captures all of the low-level events when using the Eclipse code editor. FLUORITE captures not only what types of events occurred in the code editor, but also more detailed information such as the inserted and deleted text and the specific parameters for each command. This enables the detection of many usage patterns that could otherwise not be recognized, such as \"typo correction\" that requires knowing that the entered text is immediately deleted and replaced. Moreover, the snapshots of each source code file that has been opened during the session can be completely reproduced using the collected information. We also provide analysis and visualization tools which report various statistics about usage patterns, and we provide the logs in an XML format so others can write their own analyzers. FLUORITE can be used for not only evaluating existing tools, but also for discovering issues that motivate new tools."}
{"_id": "fce01bf3245691f81026e5209fab684a9167fce5", "title": "Zero-Day Attack Identification in Streaming Data Using Semantics and Spark", "text": "Intrusion Detection Systems (IDS) have been in existence for many years now, but they fall short in efficiently detecting zero-day attacks. This paper presents an organic combination of Semantic Link Networks (SLN) and dynamic semantic graph generation for the on the fly discovery of zero-day attacks using the Spark Streaming platform for parallel detection. In addition, a minimum redundancy maximum relevance (MRMR) feature selection algorithm is deployed to determine the most discriminating features of the dataset. Compared to previous studies on zero-day attack identification, the described method yields better results due to the semantic learning and reasoning on top of the training data and due to the use of collaborative classification methods. We also verified the scalability of our method in a distributed environment."}
{"_id": "0cd95b98b590f4a5600f0ecca54f4d7409e2817c", "title": "Leveraging social grouping for trust building in foreign electronic commerce firms: An exploratory study", "text": "Internet development has fueled e-commerce firms\u2019 globalization efforts, but many have met with only limited success. This often stems from the foreign firms\u2019 limited understanding of a focal country\u2019s local culture and idiosyncrasies. Foreign firms are usually viewed as out-group entities, which lowers consumers\u2019 trust in them. The extent of such a phenomenon varies. In locations where people are more skeptical of out-groups, a critical question is whether it is possible to transform such foreign out-group firms into in-groups, specifically with the support of popular social networking media. Based on Social Identity Theory and Trust Transference Process, five strategies leveraging social grouping and social ties to build trust for foreign electronic commerce firms were proposed. A survey was conducted to examine their effectiveness. The results suggest that social-grouping strategies are useful for in-grouping foreign out-group entities to build trust, and the effectiveness of strategies is determined by the social similarity and psychological distance between the consumer and the endorser. This has important implications for scholars and practitioners, both local and abroad, to leverage social grouping to boost Internet sales. \u00a9 2013 Elsevier Ltd. All rights reserved."}
{"_id": "0cb2e8605a7b5ddb5f3006f71d19cb9da960db98", "title": "DSD: Dense-Sparse-Dense Training for Deep Neural Networks", "text": "Modern deep neural networks have a large number of parameters, making them very hard to train. We propose DSD, a dense-sparse-dense training flow, for regularizing deep neural networks and achieving better optimization performance. In the first D (Dense) step, we train a dense network to learn connection weights and importance. In the S (Sparse) step, we regularize the network by pruning the unimportant connections with small weights and retraining the network given the sparsity constraint. In the final D (re-Dense) step, we increase the model capacity by removing the sparsity constraint, re-initialize the pruned parameters from zero and retrain the whole dense network. Experiments show that DSD training can improve the performance for a wide range of CNNs, RNNs and LSTMs on the tasks of image classification, caption generation and speech recognition. On ImageNet, DSD improved the Top1 accuracy of GoogLeNet by 1.1%, VGG-16 by 4.3%, ResNet-18 by 1.2% and ResNet-50 by 1.1%, respectively. On the WSJ\u201993 dataset, DSD improved DeepSpeech and DeepSpeech2 WER by 2.0% and 1.1%. On the Flickr-8K dataset, DSD improved the NeuralTalk BLEU score by over 1.7. DSD is easy to use in practice: at training time, DSD incurs only one extra hyper-parameter: the sparsity ratio in the S step. At testing time, DSD doesn\u2019t change the network architecture or incur any inference overhead. The consistent and significant performance gain of DSD experiments shows the inadequacy of the current training methods for finding the best local optimum, while DSD effectively achieves superior optimization performance for finding a better solution. DSD models are available to download at https://songhan.github.io/DSD."}
{"_id": "ae18b6eecef980fb9d88534e98c5a974ecc34865", "title": "High quality information provisioning and data pricing", "text": "This paper presents ideas on how to advance the research on high quality information provisioning and information pricing. To this end, the current state of the art in combining data curation and information provisioning as well as in regard to data pricing is reviewed. Based on that, open issues, such as tailoring data to a user's need and determining a market value of data, are identified. As preliminary solutions, it is proposed to investigate the identified problems in an integrated manner."}
{"_id": "d38fe8c30b17f9e5955e7185ac1fd832bbe1996a", "title": "Chukwa: A large-scale monitoring system", "text": "We describe the design and initial implementation of Chukwa, a data collection system for monitoring and analyzing large distributed systems. Chukwa is built on top of Hadoop, an open source distributed filesystem and MapReduce implementation, and inherits Hadoop\u2019s scalability and robustness. Chukwa also includes a flexible and powerful toolkit for displaying monitoring and analysis results, in order to make the best use of this collected data."}
{"_id": "2556e8a166b24c9dc463fde2ffa43878de6dc018", "title": "Role of Procedural Justice , Organizational Commitment and Job Satisfaction on job Performance : The Mediating Effects of Organizational Citizenship Behavior", "text": "The study examines the impact of procedural justice, organizational commitment, job satisfaction on employee performance, and the potential mediating role played by organization citizenship behaviors in that process. This model was tested using a sample of 70 employees embedded in 2 groups from 15 branches of a large, syariah bank in Malang. The sample is taken using proportional random sampling. Data is collected directly from respondents using questionnaires and technical data analysis using GeSCA. The study results showed that both procedural justice and organizational commitment positively affected Organizational Citizenship Behavior. Organizational commitments do positive influence job performance. Job satisfaction did not positively influence Organizational Citizenship Behavior and job performance. Organizational Citizenship behavior positively influences job performance. Organizational Citizenship behavior acted as a partial mediator between procedural justice, organizational commitment, and job performance. A number of suggestions on managerial theory and implementation were proposed."}
{"_id": "dd04d0106102f3822ad65ae65089821e2c3fc716", "title": "Substrate-Integrated Two-Port Dual-Frequency Antenna", "text": "A two-port dual-frequency substrate-integrated antenna with a large frequency difference is presented. It consists of a differentially fed slot antenna and a substrate-integrated dielectric resonator antenna for low- and high-frequency radiation, respectively. The former is loaded by a hollow patch, whereas the latter is fabricated inside the hollow region of the patch by using air holes and metalized vias. Beneath the antenna substrate is a second substrate on which slot-coupled sources are printed to feed the two antennas. For demonstration, a two-port dual-frequency antenna working at 5.2-GHz WLAN band and 24-GHz ISM band was designed, fabricated, and measured. The S-parameters, radiation patterns, and antenna gains of the two antenna parts are reported. Reasonable agreement between the measured and simulated results is observed. Very good isolation of over 35 dB between the two antenna parts is observed."}
{"_id": "08c766010a11737aa71d9e621fad6697093e4ded", "title": "Balanced Multithreading: Increasing Throughput via a Low Cost Multithreading Hierarchy", "text": "A simultaneous multithreading (SMT) processor can issue instructions from several threads every cycle, allowing it to effectively hide various instruction latencies; this effect increases with the number of simultaneous contexts supported. However, each added context on an SMT processor incurs a cost in complexity, which may lead to an increase in pipeline length or a decrease in the maximum clock rate. This paper presents new designs for multithreaded processors which combine a conservative SMT implementation with a coarse-grained multithreading capability. By presenting more virtual contexts to the operating system and user than are supported in the core pipeline, the new designs can take advantage of the memory parallelism present in workloads with many threads, while avoiding the performance penalties inherent in a many-context SMT processor design. A design with 4 virtual contexts, but which is based on a 2-context SMT processor core, gains an additional 26% throughput when 4 threads are run together."}
{"_id": "76f695955fa7ba6f3edff4a9cec28426f4e75f78", "title": "Can machine learning aid in delivering new use cases and scenarios in 5G?", "text": "5G represents the next generation of communication networks and services, and will bring a new set of use cases and scenarios. These in turn will address a new set of challenges from the network and service management perspective, such as network traffic and resource management, big data management and energy efficiency. Consequently, novel techniques and strategies are required to address these challenges in a smarter way. In this paper, we present the limitations of the current network and service management and describe in detail the challenges that 5G is expected to face from a management perspective. The main contribution of this paper is presenting a set of use cases and scenarios of 5G in which machine learning can aid in addressing their management challenges. It is expected that machine learning can provide a higher and more intelligent level of monitoring and management of networks and applications, improve operational efficiencies and facilitate the requirements of the future 5G network."}
{"_id": "234bca5bb8ee713d168b77f1d7729c23e9623c1d", "title": "Hierarchical Multi-label Conditional Random Fields for Aspect-Oriented Opinion Mining", "text": "A common feature of many online review sites is the use of an overall rating that summarizes the opinions expressed in a review. Unfortunately, these document-level ratings do not provide any information about the opinions contained in the review that concern a specific aspect (e.g., cleanliness) of the product being reviewed (e.g., a hotel). In this paper we study the finer-grained problem of aspect-oriented opinion mining at the sentence level, which consists of predicting, for all sentences in the review, whether the sentence expresses a positive, neutral, or negative opinion (or no opinion at all) about a specific aspect of the product. For this task we propose a set of increasingly powerful models based on conditional random fields (CRFs), including a hierarchical multi-label CRFs scheme that jointly models the overall opinion expressed in the review and the set of aspect-specific opinions expressed in each of its sentences. We evaluate the proposed models against a dataset of hotel reviews (which we here make publicly available) in which the set of aspects and the opinions expressed concerning them are manually annotated at the sentence level. We find that both hierarchical and multi-label factors lead to improved predictions of aspect-oriented opinions."}
{"_id": "ebe1db7281e17f6e353009efc01b00e4eacc3de3", "title": "International Telecommunication Union. Systems Management: Summarization function. Recommendation X.738", "text": "Prepaid mobile phone plans are gaining in importance in the U.S. and are already well established in Europe. Aggregation of a certain number of Micropayments is the preferred business model for wireless carriers to leverage the existing payment infrastructure with a mobile payment solution. There exist various Payment Scenarios that are based on the Wallet metaphor, and in general, they can be used to implement flexible and scalable solutions. Person-to-person transactions are already very popular in the U.S. as a consequence of the widespread use a payment instrument for auctions at eBay. This scenario is most often discussed as a valuable alternative to check payments in the U.S. For several reasons, the U.S. has been slower to adopt wireless technology than Europe. First, landlines in the U.S. are dominant and high personal computer penetration prevents people from using mobile phones to communicate, either by voice calls or text messaging. Second, prices in the U.S. favor landlines, even though prices for mobile phone plans in the competitive market have been cut many times in the past. For mobile payment, the fact that social patterns matter could affect the adoption of this form of payment in Europe, because in several countries, solutions already exist. The factors that form the concept of Diffusion of Innovation Theory eventually can be seen as the critical success factors of mobile payment. ii Whereas some emerging and existing mobile payment systems will come along with their own processing infrastructure, several will be based on existing payment networks that have initially been designed for another purpose. In Europe, credit card operators launch mobile payment solutions to leverage their existing networks while in the U.S., operators of ATM networks and leading credit card processors even try to bypass existing networks. Unlike in Europe, where checks have de facto been banned from the market by charging high prices for the processing of this non-cash payment instrument, the U.S. partly supports innovative check processing solutions to leverage the ACH infrastructure. The European bank approach shows that customer's payment behavior can be influenced by a pricing policy. Meanwhile, European countries and the U.S. have switched to 2.5G networks. With the transition to 3G, interoperability within the U.S. should finally become reality. Commercial launches of 3G services have already started in both regions. Notably, the first U.S. carrier to launch a broadband wireless service seems to be attracting business customers and offering high \u2026"}
{"_id": "d190a3f1dd23dfc980e5081e64620c0b8a076f95", "title": "A Systematic Gap Analysis of Social Engineering Defence Mechanisms Considering Social Psychology", "text": "Social engineering is the acquisition of information about computer systems by methods that deeply include non-technical means. While technical security of most critical systems is high, the systems remain vulnerable to attacks from social engineers. Social engineering is a technique that: (i) does not require any (advanced) technical tools, (ii) can be used by anyone, (iii) is cheap. Traditional penetration testing approaches often focus on vulnerabilities in network or software systems. Few approaches even consider the exploitation of humans via social engineering. While the amount of social engineering attacks and the damage they cause rise every year, the defences against social engineering do not evolve accordingly. Hence, the security awareness of these attacks by employees remains low. We examined the psychological principles of social engineering and which psychological techniques induce resistance to persuasion applicable for social engineering. The techniques examined are an enhancement of persuasion knowledge, attitude bolstering and influencing the decision making. While research exists elaborating on security awareness, the integration of resistance against persuasion has not been done. Therefore, we analysed current defence mechanisms and provide a gap analysis based on research in social psychology. Based on our findings we provide guidelines of how to improve social engineering defence mechanisms such as security awareness programs."}
{"_id": "80fd671b8f887d4e0333d59366b361ff98ecca1f", "title": "An innovative oscillometric blood pressure measurement: Getting rid of the traditional envelope", "text": "Oscillometric blood pressure devices are popular and are considered part of the family's medical chest at home. The popularity of these devices for private use is not shared by physicians mainly due to the fact that the blood pressures are computed instead of measured. The classical way to compute the systolic and diastolic blood pressure is based on the envelope of the oscillometric waveform. The algorithm to compute the blood pressures from the waveform is firm dependent, often patented and lacks scientific foundation. In this paper, we propose a totally new approach. Instead of the envelope of the oscillometric waveform, we use a statistical test to pin-point the time instances where the systolic and diastolic blood pressures are measured in the cuff. This technique has the advantage of being mathematically well-posed instead of the ill-posed problem of envelope fitting. Hence, in order to calibrate the oscillometric blood pressure monitor it is sufficient to make the statistical test unbiased."}
{"_id": "0a64de85c3ae878907e4f0d4f09432d4cdd34eda", "title": "A new ultrawideband printed monopole antenna: the planar inverted cone antenna (PICA)", "text": "A new antenna, the planar inverted cone antenna (PICA), provides ultrawideband (UWB) performance with a radiation pattern similar to monopole disk antennas , but is smaller in size. Extensive simulations and experiments demonstrate that the PICA antenna provides more than a 10:1 impedance bandwidth (for VSWR<2) and supports a monopole type omnidirectional pattern over 4:1 bandwidth. A second version of the PICA with two circular holes changes the current flow on the metal disk and extends the high end of the operating frequency range, improving the pattern bandwidth to 7:1."}
{"_id": "dcecd65af9fce077bc8de294ad806a6439692d2c", "title": "Towards Implicit Content-Introducing for Generative Short-Text Conversation Systems", "text": "The study on human-computer conversation systems is a hot research topic nowadays. One of the prevailing methods to build the system is using the generative Sequence-to-Sequence (Seq2Seq) model through neural networks. However, the standard Seq2Seq model is prone to generate trivial responses. In this paper, we aim to generate a more meaningful and informative reply when answering a given question. We propose an implicit content-introducing method which incorporates additional information into the Seq2Seq model in a flexible way. Specifically, we fuse the general decoding and the auxiliary cue word information through our proposed hierarchical gated fusion unit. Experiments on real-life data demonstrate that our model consistently outperforms a set of competitive baselines in terms of BLEU scores and human evaluation."}
{"_id": "112d7ba8127dd340ed300e3efcb2fccd345f231f", "title": "A 100 watts L-band power amplifier", "text": "In this paper, a 1100\u20131350 MHz power amplifier is introduced. For the realization of the RF power amplifier an LDMOS transistor is used. The amplifier delivers 100W power to a 50 ohm load with 50% efficiency. The transducer power gain of the amplifier is has 16dB at 100W output power. A mixed lumped element and transmission line matching circuit is used to implement matching network. The simulations are performed by the nonlinear model of the transistor, which is provided by the manufacturer. Finally the results of simulations are compared with the measurements."}
{"_id": "583db69db244b8ddda81367bf41bb8c4419ed248", "title": "RF power amplifiers for wireless communications", "text": "A wide variety of semiconductor devices are used in wireless power amplifiers. The RF performance and other attributes of cellphone RF power amplifiers using Si and GaAs based technologies are reviewed and compared."}
{"_id": "128e1d5ef2fe666c6ae88ce5ffe97005a40b157b", "title": "A design methodology for the realization of multi-decade baluns at microwave frequencies", "text": "A new methodology is presented for designing baluns exhibiting multi-decade bandwidths at microwave frequencies. Simulations show that resistors terminating the outer transmission line suppress the half-wavelength resonance and greatly extend the bandwidth. Using linear measurements at microwave frequencies, ferrite beads have been shown to behave as resistors with a small reactance, suitable for terminating the outer transmission line. This paper shows that ferrite beads can perform the dual role of improving magnetic coupling at low frequency and suppressing resonance at higher frequencies. The design methodology was applied to produce a balun that operates between 30MHz and 6GHz, displays less than 1dB of power loss up to 4.4GHz, and delivers an impedance transformation of 2\u22361."}
{"_id": "be25b709e788a86be9ef878e9b60f7a7527e0d24", "title": "100 W GaN HEMT power amplifier module with > 60% efficiency over 100\u20131000 MHz bandwidth", "text": "We have demonstrated a decade bandwidth 100 W GaN HEMT power amplifier module with 15.5\u201318.6 dB gain, 104\u2013121 W CW output power and 61.4\u201376.6 % drain efficiency over the 100\u20131000 MHz band. The 2 \u00d7 2 inch compact power amplifier module combines four 30 W lossy matched broadband GaN HEMT PAs packaged in a ceramic SO8 package. Each of the 4 devices is fully matched to 50 \u2126 and obtains 30.8\u201335.7 W with 68.6\u201379.6 % drain efficiency over the band. The packaged amplifiers contain a GaN on SiC device operating at 48V drain voltage, alongside GaAs integrated passive matching circuitry. The four devices are combined using a broadband low loss coaxial balun. We believe this combination of output power, bandwidth and efficiency is the best reported to date. These amplifiers are targeted for use in multi-band public mobile radios and for instrumentation applications."}
{"_id": "f7363d2ac142bcb66cdb481a70e7c9c5c60771dd", "title": "10W ultra-broadband power amplifier", "text": "We report the design and performance of an ultra-broadband power amplifier. It achieves 10 Watts output power with 21dB \u00b1 1.5dB gain from 20 MHz to 3000 MHz. At lower frequencies from 20 to 1000 MHz the output power is 15 Watts with 22% efficiency. To achieve this performance, we employ a new design concept to control the device impedance and the power combiner impedance to be naturally 50 Ohms, such that no impedance matching is needed. Also, we developed a broadband microwave balun as a push-pull power combiner, which doubles as an impedance transformer. We believe the combination of output power, bandwidth and efficiency is the best reported to date."}
{"_id": "3f3a5303331a2e363ba930688daa113244fa11f0", "title": "Full-Body Compliant Human\u2013Humanoid Interaction: Balancing in the Presence of Unknown External Forces", "text": "This paper proposes an effective framework of human-humanoid robot physical interaction. Its key component is a new control technique for full-body balancing in the presence of external forces, which is presented and then validated empirically. We have adopted an integrated system approach to develop humanoid robots. Herein, we describe the importance of replicating human-like capabilities and responses during human-robot interaction in this context. Our balancing controller provides gravity compensation, making the robot passive and thereby facilitating safe physical interactions. The method operates by setting an appropriate ground reaction force and transforming these forces into full-body joint torques. It handles an arbitrary number of force interaction points on the robot. It does not require force measurement at interested contact points. It requires neither inverse kinematics nor inverse dynamics. It can adapt to uneven ground surfaces. It operates as a force control process, and can therefore, accommodate simultaneous control processes using force-, velocity-, or position-based control. Forces are distributed over supporting contact points in an optimal manner. Joint redundancy is resolved by damping injection in the context of passivity. We present various force interaction experiments using our full-sized bipedal humanoid platform, including compliant balance, even when affected by unknown external forces, which demonstrates the effectiveness of the method."}
{"_id": "5647734e0a7086e7b993cf711f71b24480c2b696", "title": "Neuropsychology and clinical neuroscience of persistent post-concussive syndrome.", "text": "On the mild end of the acquired brain injury spectrum, the terms concussion and mild traumatic brain injury (mTBI) have been used interchangeably, where persistent post-concussive syndrome (PPCS) has been a label given when symptoms persist for more than three months post-concussion. Whereas a brief history of concussion research is overviewed, the focus of this review is on the current status of PPCS as a clinical entity from the perspective of recent advances in the biomechanical modeling of concussion in human and animal studies, particularly directed at a better understanding of the neuropathology associated with concussion. These studies implicate common regions of injury, including the upper brainstem, base of the frontal lobe, hypothalamic-pituitary axis, medial temporal lobe, fornix, and corpus callosum. Limitations of current neuropsychological techniques for the clinical assessment of memory and executive function are explored and recommendations for improved research designs offered, that may enhance the study of long-term neuropsychological sequelae of concussion."}
{"_id": "8a8b5e450ee6e39bbc865c7619e29047640db05b", "title": "Computer-aided generation of multiple-choice tests", "text": "Summary form only given. The paper describes a novel automatic procedure for the generation of multiple-choice tests from electronic documents. In addition to employing various NLP techniques including term extraction and shallow parsing, the system makes use of language resources such as corpora and ontologies. The system operates in a fully automatic mode and also a semiautomatic environment where the user is offered the option to post-edit the generated test items. The results from the conducted evaluation suggest that the new procedure is very effective saving time and labour considerably and that the test items produced with the help of the program are not of inferior quality to those produced manually."}
{"_id": "53d39691197a20f5f1fc2fb2d30d8a73032f9e49", "title": "Model-Driven Reverse Engineering Approaches: A Systematic Literature Review", "text": "This paper explores and describes the state of the art for what concerns the model-driven approaches proposed in the literature to support reverse engineering. We conducted a systematic literature review on this topic with the aim to answer three research questions. We focus on various solutions developed for model-driven reverse engineering, outlining in particular the models they use and the transformations applied to the models. We also consider the tools used for model definition, extraction, and transformation and the level of automation reached by the available tools. The model-driven reverse engineering approaches are also analyzed based on various features such as genericity, extensibility, automation of the reverse engineering process, and coverage of the full or partial source artifacts. We describe in detail and compare fifteen approaches applying model-driven reverse engineering. Based on this analysis, we identify and indicate some hints on choosing a model-driven reverse engineering approach from the available ones, and we outline open issues concerning the model-driven reverse engineering approaches."}
{"_id": "c5212a90571a887f5841ce7aea0ba3fa26237ba2", "title": "Evaluation of Three Methods for MRI Brain Tumor Segmentation", "text": "Imaging plays a central role in the diagnosis and treatment planning of brain tumor. An accurate segmentation is critical, especially when the tumor morphological changes remain subtle, irregular and difficult to assess by clinical examination. Traditionally, segmentation is performed manually in clinical environment that is operator dependent and very tedious and time consuming labor intensive work. However, automated tumor segmentation in MRI brain tumor poses many challenges with regard to characteristics of an image. A comparison of three different semi-automated methods, viz., modified gradient magnitude region growing technique (MGRRGT), level set and a marker controlled watershed method is undertaken here for evaluating their relative performance in the segmentation of tumor. A study on 9 samples using MGRRGT reveals that all the errors are within 6 to 23% in comparison to other two methods."}
{"_id": "864ab6ebe254eeada6fee110da35a2bebcf9fbb3", "title": "Ways to React: Comparing Reactive Languages and Complex Event Processing", "text": "Reactive applications demand for detecting the changes that occur in a domain of interest and for timely reactions. Examples range from simple interactive applications to complex monitoring tasks involving distributed and heterogeneous systems. Over the last years, different programming paradigms and solutions have been proposed to support such applications. In this paper, we focus on two prominent approaches: event-based programming, specifically Complex Event Processing (CEP), and Reactive Languages (RLs). CEP systems enable the definition of high level situations of interest from low level primitive events detected in the external environment. On the other hand, RLs support time-changing values and their composition as dedicated language abstractions. These research fields have been investigated by different communities, belonging respectively to the database and the distributed systems areas and to the programming language area. It is our belief that a deeper understanding of these research fields, including their benefits and limitations, their similarities and differences, could drive further developments in supporting reactive applications. For this reason, we propose a first comparison of the two fields. Despite huge differences, we believe that such a comparison can trigger an interesting discussion across the communities, favor knowledge sharing, and let new ideas emerge."}
{"_id": "af7fe675a8bff39ea6b575860559b10a759cc1fe", "title": "Frontal fibrosing alopecia \u2013 A Case report * Alopecia frontal fibrosante-Relato de caso", "text": "Frontal fibrosing alopecia is a kind of progressive and frequently irreversible cicatricial alopecia marked by a lichenoid infiltrate in histology. Since its first description, in 1994, in Australia, some cases have been documented all over the world. The article reports, for the second time in the medical literature, a Brazilian case and reviews the main aspects of this dermatosis."}
{"_id": "6047e9af00dcffbd2effbfa600735eb111f7de65", "title": "A Discriminative Representation of Convolutional Features for Indoor Scene Recognition", "text": "Indoor scene recognition is a multi-faceted and challenging problem due to the diverse intra-class variations and the confusing inter-class similarities that characterize such scenes. This paper presents a novel approach that exploits rich mid-level convolutional features to categorize indoor scenes. Traditional convolutional features retain the global spatial structure, which is a desirable property for general object recognition. We, however, argue that the structure-preserving property of the convolutional neural network activations is not of substantial help in the presence of large variations in scene layouts, e.g., in indoor scenes. We propose to transform the structured convolutional activations to another highly discriminative feature space. The representation in the transformed space not only incorporates the discriminative aspects of the target data set but also encodes the features in terms of the general object categories that are present in indoor scenes. To this end, we introduce a new large-scale data set of 1300 object categories that are commonly present in indoor scenes. Our proposed approach achieves a significant performance boost over the previous state-of-the-art approaches on five major scene classification data sets."}
{"_id": "a5570096c7d3954d029b2ce6cd90ca9137c43191", "title": "An Artificial Light Source Influences Mating and Oviposition of Black Soldier Flies, Hermetia illucens", "text": "Current methods for mass-rearing black soldier flies, Hermetia illucens (L.) (Diptera: Stratiomyidae), in the laboratory are dependent on sunlight. Quartz-iodine lamps and rare earth lamps were examined as artificial light sources for stimulating H. illucens to mate and lay eggs. Sunlight was used as the control. Adults in the quartz-iodine lamp treatment had a mating rate of 61% of those in the sunlight control. No mating occurred when the rare earth lamp was used as a substitute. Egg hatch for the quartz-iodine lamp and sunlight treatments occurred in approximately 4 days, and the hatch rate was similar between these two treatments. Larval and pupal development under these treatments required approximately 18 and 15 days at 28\u00b0C, respectively. Development of methods for mass rearing of H. illucens using artificial light will enable production of this fly throughout the year without investing in greenhouse space or requiring sunlight."}
{"_id": "614399e28cbc5ec2b3540de8a7acfb9c46c0de5e", "title": "Heart rate variability reflects self-regulatory strength, effort, and fatigue.", "text": "Experimental research reliably demonstrates that self-regulatory deficits are a consequence of prior self-regulatory effort. However, in naturalistic settings, although people know that they are sometimes vulnerable to saying, eating, or doing the wrong thing, they cannot accurately gauge their capacity to self-regulate at any given time. Because self-regulation and autonomic regulation colocalize in the brain, an autonomic measure, heart rate variability (HRV), could provide an index of self-regulatory strength and activity. During an experimental manipulation of self-regulation (eating carrots or cookies), HRV was elevated during high self-regulatory effort (eat carrots, resist cookies) compared with low self-regulatory effort (eat cookies, resist carrots). The experimental manipulation and higher HRV at baseline independently predicted persistence at a subsequent anagram task. HRV appears to index self-regulatory strength and effort, making it possible to study these phenomena in the field as well as the lab."}
{"_id": "d3e1081e173cec3ee08cb123506c4402267345c8", "title": "Predicting Neural Activity Patterns Associated with Sentences Using a Neurobiologically Motivated Model of Semantic Representation.", "text": "We introduce an approach that predicts neural representations of word meanings contained in sentences then superposes these to predict neural representations of new sentences. A neurobiological semantic model based on sensory, motor, social, emotional, and cognitive attributes was used as a foundation to define semantic content. Previous studies have predominantly predicted neural patterns for isolated words, using models that lack neurobiological interpretation. Fourteen participants read 240 sentences describing everyday situations while undergoing fMRI. To connect sentence-level fMRI activation patterns to the word-level semantic model, we devised methods to decompose the fMRI data into individual words. Activation patterns associated with each attribute in the model were then estimated using multiple-regression. This enabled synthesis of activation patterns for trained and new words, which were subsequently averaged to predict new sentences. Region-of-interest analyses revealed that prediction accuracy was highest using voxels in the left temporal and inferior parietal cortex, although a broad range of regions returned statistically significant results, showing that semantic information is widely distributed across the brain. The results show how a neurobiologically motivated semantic model can decompose sentence-level fMRI data into activation features for component words, which can be recombined to predict activation patterns for new sentences."}
{"_id": "e997dcda4a10be86f520e88d6ca845dc5346b14f", "title": "Dynamic Action Repetition for Deep Reinforcement Learning", "text": "One of the long standing goals of Artificial Intelligence (AI) is to build cognitive agents which can perform complex tasks from raw sensory inputs without explicit supervision (Lake et al. 2016). Recent progress in combining Reinforcement Learning objective functions and Deep Learning architectures has achieved promising results for such tasks. An important aspect of such sequential decision making problems, which has largely been neglected, is for the agent to decide on the duration of time for which to commit to actions. Such action repetition is important for computational efficiency, which is necessary for the agent to respond in real-time to events (in applications such as self-driving cars). Action Repetition arises naturally in real life as well as simulated environments. The time scale of executing an action enables an agent (both humans and AI) to decide the granularity of control during task execution. Current state of the art Deep Reinforcement Learning models, whether they are off-policy (Mnih et al. 2015; Wang et al. 2015) or on-policy (Mnih et al. 2016), consist of a framework with a static action repetition paradigm, wherein the action decided by the agent is repeated for a fixed number of time steps regardless of the contextual state while executing the task. In this paper, we propose a new framework Dynamic Action Repetition which changes Action Repetition Rate (the time scale of repeating an action) from a hyper-parameter of an algorithm to a dynamically learnable quantity. At every decision-making step, our models allow the agent to commit to an action and the time scale of executing the action. We show empirically that such a dynamic time scale mechanism improves the performance on relatively harder games in the Atari 2600 domain, independent of the underlying Deep Reinforcement Learning algorithm used."}
{"_id": "e56e27462286f39d518677bf3c930c9b21714778", "title": "Towards Dependable Deep Convolutional Neural Networks (CNNs) with Out-distribution Learning", "text": "Detection and rejection of adversarial examples in security sensitive and safety-critical systems using deep CNNs is essential. In this paper, we propose an approach to augment CNNs with out-distribution learning in order to reduce misclassification rate by rejecting adversarial examples. We empirically show that our augmented CNNs can either reject or classify correctly most adversarial examples generated using well-known methods (> 95% for MNIST and > 75% for CIFAR-10 on average). Furthermore, we achieve this without requiring to train using any specific type of adversarial examples and without sacrificing the accuracy of models on clean samples significantly (< 4%)."}
{"_id": "ee4ae9d87438e4b39eccd2ff3b509a489587c2b2", "title": "Integrated microstrip and rectangular waveguide in planar form", "text": "Usually transitions from microstrip line to rectangular waveguide are made with three-dimensional complex mounting structures. In this paper, a new planar platform is developed in which the microstrip line and rectangular waveguide are fully integrated on the same substrate, and they are interconnected via a simple taper. Our experiments at 28 GHz show that an effective bandwidth of 12% at 20 dB return loss is obtained with an in-band insertion loss better than 0.3 dB. The new transition allows a complete integration of waveguide components on substrate with MICs and MMICs."}
{"_id": "b6ba896ba86b8853fdbe02787fef9ade745a2cfe", "title": "Teaching Computer Organization and Architecture Using Simulation and FPGA Applications", "text": "This paper presents the design concepts and realization of incorporating micro-operation simulation and FPGA implementation into a teaching tool for computer organization and architecture. This teaching tool helps computer engineering and computer science students to be familiarized practically with computer organization and architecture through the development of their own instruction set, computer programming and interfacing experiments. A two-pass assembler has been designed and implemented to write assembly programs in this teaching tool. In addition to the microoperation simulation, the complete configuration can be run on Xilinx Spartan-3 FPGA board. Such implementation offers good code density, easy customization, easily developed software, small area, and high performance at low cost."}
{"_id": "9f2f648e544a3f60b00bc68dfcbdcf08eb851d63", "title": "A versatile wavelet domain noise filtration technique for medical imaging", "text": "We propose a robust wavelet domain method for noise filtering in medical images. The proposed method adapts itself to various types of image noise as well as to the preference of the medical expert; a single parameter can be used to balance the preservation of (expert-dependent) relevant details against the degree of noise reduction. The algorithm exploits generally valid knowledge about the correlation of significant image features across the resolution scales to perform a preliminary coefficient classification. This preliminary coefficient classification is used to empirically estimate the statistical distributions of the coefficients that represent useful image features on the one hand and mainly noise on the other. The adaptation to the spatial context in the image is achieved by using a wavelet domain indicator of the local spatial activity. The proposed method is of low complexity, both in its implementation and execution time. The results demonstrate its usefulness for noise suppression in medical ultrasound and magnetic resonance imaging. In these applications, the proposed method clearly outperforms single-resolution spatially adaptive algorithms, in terms of quantitative performance measures as well as in terms of visual quality of the images."}
{"_id": "023023c547b8b18d3aa8b50a3ad42e01c97e83f8", "title": "Estimating the class prior and posterior from noisy positives and unlabeled data", "text": "We develop a classification algorithm for estimating posterior distributions from positive-unlabeled data, that is robust to noise in the positive labels and effective for high-dimensional data. In recent years, several algorithms have been proposed to learn from positive-unlabeled data; however, many of these contributions remain theoretical, performing poorly on real high-dimensional data that is typically contaminated with noise. We build on this previous work to develop two practical classification algorithms that explicitly model the noise in the positive labels and utilize univariate transforms built on discriminative classifiers. We prove that these univariate transforms preserve the class prior, enabling estimation in the univariate space and avoiding kernel density estimation for high-dimensional data. The theoretical development and parametric and nonparametric algorithms proposed here constitute an important step towards wide-spread use of robust classification algorithms for positive-unlabeled data."}
{"_id": "e3640b779d2ff35aabfb0a529089b1df5457275f", "title": "Charging algorithms of lithium-ion batteries: An overview", "text": "This paper presents the overview of charging algorithms for lithium-ion batteries, which include constant current-constant voltage (CC/CV), variants of the CC/CV, multistage constant current, pulse current and pulse voltage. The CC/CV charging algorithm is well developed and widely adopted in charging lithium-ion batteries. It is used as a benchmark to compare with other charging algorithms in terms of the charging time, the charging efficiency, the influences on battery life and other aspects, which can serve as a convenient reference for future work in developing a charger for lithium-ion battery."}
{"_id": "322293bb0bbd47349c5fd605dce5c63f03efb6a8", "title": "Weighted PageRank algorithm", "text": "With the rapid growth of the Web, users easily get lost in the rich hyper structure. Providing the relevant information to users to cater to their needs is the primary goal of Website owners. Therefore, finding the content of the Web and retrieving the users' interests and needs from their behavior have become increasingly important. Web mining is used to categorize users and pages by analyzing user behavior, the content of the pages, and the order of the URLs that tend to be accessed. Web structure mining plays an important role in this approach. Two page ranking algorithms, HITS and PageRank, are commonly used in Web structure mining. Both algorithms treat all links equally when distributing rank scores. Several algorithms have been developed to improve the performance of these methods. The weighted PageRank algorithm (WPR), an extension to the standard PageRank algorithm, is introduced. WPR takes into account the importance of both the inlinks and the outlinks of the pages and distributes rank scores based on the popularity of the pages. The results of our simulation studies show that WPR performs better than the conventional PageRank algorithm in terms of returning a larger number of relevant pages to a given query."}
{"_id": "6fd329a1e7f513745e5fc462f146aa80c6090a1d", "title": "Signal-Quality Indices for the Electrocardiogram and Photoplethysmogram: Derivation and Applications to Wireless Monitoring", "text": "The identification of invalid data in recordings obtained using wearable sensors is of particular importance since data obtained from mobile patients is, in general, noisier than data obtained from nonmobile patients. In this paper, we present a signal quality index (SQI), which is intended to assess whether reliable heart rates (HRs) can be obtained from electrocardiogram (ECG) and photoplethysmogram (PPG) signals collected using wearable sensors. The algorithms were validated on manually labeled data. Sensitivities and specificities of 94% and 97% were achieved for the ECG and 91% and 95% for the PPG. Additionally, we propose two applications of the SQI. First, we demonstrate that, by using the SQI as a trigger for a power-saving strategy, it is possible to reduce the recording time by up to 94% for the ECG and 93% for the PPG with only minimal loss of valid vital-sign data. Second, we demonstrate how an SQI can be used to reduce the error in the estimation of respiratory rate (RR) from the PPG. The performance of the two applications was assessed on data collected from a clinical study on hospital patients who were able to walk unassisted."}
{"_id": "9c43b59177cb5539ea649c188387fe374663bbb1", "title": "Learning Discriminative Latent Attributes for Zero-Shot Classification", "text": "Zero-shot learning (ZSL) aims to transfer knowledge from observed classes to the unseen classes, based on the assumption that both the seen and unseen classes share a common semantic space, among which attributes enjoy a great popularity. However, few works study whether the human-designed semantic attributes are discriminative enough to recognize different classes. Moreover, attributes are often correlated with each other, which makes it less desirable to learn each attribute independently. In this paper, we propose to learn a latent attribute space, which is not only discriminative but also semantic-preserving, to perform the ZSL task. Specifically, a dictionary learning framework is exploited to connect the latent attribute space with attribute space and similarity space. Extensive experiments on four benchmark datasets show the effectiveness of the proposed approach."}
{"_id": "ef8070a37fb6f0959acfcee9d40f0b3cb912ba9f", "title": "Epistemological perspectives on IS research: a framework for analysing and systematizing epistemological assumptions", "text": "Over the last three decades, a methodological pluralism has developed within information systems (IS) research. Various disciplines and many research communities as well, contribute to this discussion. However, working on the same research topic or studying the same phenomenon does not necessarily ensure mutual understanding. Especially within this multidisciplinary and international context, the epistemological assumptions made by different researchers may vary fundamentally. These assumptions exert a substantial impact on how concepts like validity, reliability, quality and rigour of research are understood. Thus, the extensive publication of epistemological assumptions is, in effect, almost mandatory. Hence, the aim of this paper is to develop an epistemological framework which can be used for systematically analysing the epistemological assumptions in IS research. Rather than attempting to identify and classify IS research paradigms, this research aims at a comprehensive discussion of epistemology within the context of IS. It seeks to contribute to building the basis for identifying similarities as well as differences between distinct IS approaches and methods. In order to demonstrate the epistemological framework, the consensus-oriented interpretivist approach to conceptual modelling is used as an example."}
{"_id": "25e989b45de04c6086364b376d29ec11008360a3", "title": "Learning physical parameters from dynamic scenes", "text": "Humans acquire their most basic physical concepts early in development, and continue to enrich and expand their intuitive physics throughout life as they are exposed to more and varied dynamical environments. We introduce a hierarchical Bayesian framework to explain how people can learn physical parameters at multiple levels. In contrast to previous Bayesian models of theory acquisition (Tenenbaum, Kemp, Griffiths, & Goodman, 2011), we work with more expressive probabilistic program representations suitable for learning the forces and properties that govern how objects interact in dynamic scenes unfolding over time. We compare our model to human learners on a challenging task of estimating multiple physical parameters in novel microworlds given short movies. This task requires people to reason simultaneously about multiple interacting physical laws and properties. People are generally able to learn in this setting and are consistent in their judgments. Yet they also make systematic errors indicative of the approximations people might make in solving this computationally demanding problem with limited computational resources. We propose two approximations that complement the top-down Bayesian approach. One approximation model relies on a more bottom-up feature-based inference scheme. The second approximation combines the strengths of the bottom-up and top-down approaches, by taking the feature-based inference as its point of departure for a search in physical-parameter space."}
{"_id": "215fae37282267974112a5a757a42382211360fa", "title": "Architectures for Natural Language Generation: Problems and Perspectives", "text": "Current research in natural language generation is situated in a computational linguistics tradition that was founded several decades ago. We critically analyse some of the architectural assumptions underlying existing systems and point out some problems in the domains of text planning and lexicalization. Guided by the identification of major generation challenges viewed from the angles of knowledge-based systems and cognitive psychology, we sketch some new directions for future research."}
{"_id": "078b55e2f4899cf95a4c8d65613c340fa190acf8", "title": "The Second Dialog State Tracking Challenge", "text": "A spoken dialog system, while communicating with a user, must keep track of what the user wants from the system at each step. This process, termed dialog state tracking, is essential for a successful dialog system as it directly informs the system\u2019s actions. The first Dialog State Tracking Challenge allowed for evaluation of different dialog state tracking techniques, providing common testbeds and evaluation suites. This paper presents a second challenge, which continues this tradition and introduces some additional features \u2013 a new domain, changing user goals and a richer dialog state. The challenge received 31 entries from 9 research groups. The results suggest that while large improvements on a competitive baseline are possible, trackers are still prone to degradation in mismatched conditions. An investigation into ensemble learning demonstrates the most accurate tracking can be achieved by combining multiple trackers."}
{"_id": "0b0cf7e00e7532e38238a9164f0a8db2574be2ea", "title": "Attention Is All You Need", "text": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 Englishto-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.2 after training for 4.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data."}
{"_id": "5061f591aa8ff224cd20cdcb3b62d156fb187bed", "title": "One Model To Learn Them All", "text": "Deep learning yields great results across many fields, from speech recognition, image classification, to translation. But for each problem, getting a deep model to work well involves research into the architecture and a long period of tuning. We present a single model that yields good results on a number of problems spanning multiple domains. In particular, this single model is trained concurrently on ImageNet, multiple translation tasks, image captioning (COCO dataset), a speech recognition corpus, and an English parsing task. Our model architecture incorporates building blocks from multiple domains. It contains convolutional layers, an attention mechanism, and sparsely-gated layers. Each of these computational blocks is crucial for a subset of the tasks we train on. Interestingly, even if a block is not crucial for a task, we observe that adding it never hurts performance and in most cases improves it on all tasks. We also show that tasks with less data benefit largely from joint training with other tasks, while performance on large tasks degrades only slightly if at all."}
{"_id": "523d3642853ddfb734614f5c6b5941e688d69bb1", "title": "Incremental LSTM-based dialog state tracker", "text": "A dialog state tracker is an important component in modern spoken dialog systems. We present an incremental dialog state tracker, based on LSTM networks. It directly uses automatic speech recognition hypotheses to track the state. We also present the key non-standard aspects of the model that bring its performance close to the state-of-the-art and experimentally analyze their contribution: including the ASR confidence scores, abstracting scarcely represented values, including transcriptions in the training data, and model averaging."}
{"_id": "1b1b36bcbbe3df2bd2b8c4a06f2b970e9b280355", "title": "Application of neural networks to inverse lens distortion modelling", "text": "The accurate and quick modelling of inverse lens distortion to rectify images or predict the image coordinates of real world objects has long been a challenge. This work investigates the use of artificial neural networks to perform this modelling. Several architectures are investigated and multiple methods of training the networks are used to achieve the best results. The error is expressed as a physical displacement on the imaging chip so that a fair comparison can be made between other published results which are based on cameras with different resolutions. It is shown that the novel application of locally optimised neural networks to residual lens calibration data yields an inverse distortion modelling that is highly competitive with prior published results."}
{"_id": "9b4ce2784b9ddc87d1ce669e1f524a58975cc39e", "title": "Quantitative function for community detection.", "text": "We propose a quantitative function for community partition -- i.e., modularity density or D value. We demonstrate that this quantitative function is superior to the widely used modularity Q and also prove its equivalence with the objective function of the kernel k means. Both theoretical and numerical results show that optimizing the new criterion not only can resolve detailed modules that existing approaches cannot achieve, but also can correctly identify the number of communities."}
{"_id": "427edff08dadb04913ad67c2831089e248eff059", "title": "Learning Rapid-Temporal Adaptations", "text": "A hallmark of human intelligence and cognition is its flexibility. One of the longstanding goals in AI research is to replicate this flexibility in a learning machine. In this work we describe a mechanism by which artificial neural networks can learn rapid-temporal adaptation \u2013 the ability to adapt quickly to new environments or tasks \u2013 that we call adaptive neurons. Adaptive neurons modify their activations with task-specific values retrieved from a working memory. On standard metalearning and few-shot learning benchmarks in both vision and language domains, models augmented with adaptive neurons achieve state-of-the-art results."}
{"_id": "d10a558b9caa8bd41c0112434f3b19eb8aab2b5c", "title": "Exemplary Automotive Attack Scenarios : Trojan Horses for Electronic Throttle Control System ( ETC ) and Replay Attacks on the Power Window System", "text": "The consideration of targeted security attacks is not yet common in the automotive domain where primarily safety requirements are considered. Hence this survey addresses the relatively new field of IT-security in automotive technology in order to motivate further research. With the emergence of automotive technologies like car-to-car (c2c) communication, the challenges increase. In order to show the necessity for such research we describe how in theory, a potential attack on the Electronic Throttle Control (ETC) could succeed, if no countermeasures are taken, with potentially drastic consequences on the safety of the vehicle, its occupants and its environment as a result. Also an attack on the power window system is shown using the implementation techniques already employed in the design of automotive components. Our goal is to show, that such exploited security vulnerabilities are likely to have serious consequences with respect to the vehicles safety. Therefore such attacks need to be considered and suitable countermeasures have to be identified. In order to achieve this, the connection of the two disciplines Safety and Security for automotive IT components needs to be discussed. We introduce two exemplary attacks, the Trojan Horse targeting at the Electronic Throttle Control System (ETC) and a replay attack on the electric window lift. The first attack is mainly discussed in theory, the second is elaborated in a practical simulation scenario. Finally we introduce a general investigation of potential ways of intrusion and present some first practical results from tests with current automotive hardware."}
{"_id": "5b2607087d33744edddfa596bf31dbb9ba4ed84e", "title": "Real-time object detection, localization and verification for fast robotic depalletizing", "text": "Depalletizing is a challenging task for manipulation robots. Key to successful application are not only robustness of the approach, but also achievable cycle times in order to keep up with the rest of the process. In this paper, we propose a system for depalletizing and a complete pipeline for detecting and localizing objects as well as verifying that the found object does not deviate from the known object model, e.g., if it is not the object to pick. In order to achieve high robustness (e.g., with respect to different lighting conditions) and generality with respect to the objects to pick, our approach is based on multi-resolution surfel models. All components (both software and hardware) allow operation at high frame rates and, thus, allow for low cycle times. In experiments, we demonstrate depalletizing of automotive and other prefabricated parts with both high reliability (w.r.t. success rates) and efficiency (w.r.t. low cycle times)."}
{"_id": "8eae4d70e2a0246d4a48ec33bc66083f10fd1df5", "title": "Tagging Patient Notes With ICD-9 Codes", "text": "There is substantial growth in the amount of medical data being generated in hospitals. With over 96% adoption rate [1], Electronic Medical/Health Records are used to store most of this medical data. If harnessed correctly, this medium provides a very convenient platform for secondary data analysis of these records to improve medical and patient care. One crucial feature of the information stored in these systems are ICD9-diagnosis codes, which are used for billing purposes and integration to other databases. These codes are assigned to medical text and require expert annotators with experience and training. In this paper we formulate this problem as a multi-label classification problem and propose a deep learning framework to classify the ICD-9 codes a patient is assigned at the end of a visit. We demonstrate that a simple LSTM model with a single layer of non-linearity can learn to classify patient notes with their corresponding ICD-9 labels moderately well."}
{"_id": "7a4741355af401a9b65297960593c645fde364a4", "title": "Visualization of patent analysis for emerging technology", "text": "Many methods have been developed to recognize those progresses of technologies, and one of them is to analyze patent information. And visualization methods are considered to be proper for representing patent information and its analysis results. However, current visualization methods for patent analysis patent maps have some drawbacks. Therefore, we propose an alternative visualization method in this paper. With colleted keywords from patent documents of a target technology field, we cluster patent documents by the k-Means algorithm. With the clustering results, we form a semantic network of keywords without respect of filing dates. And then we build up a patent map by rearranging each keyword node of the semantic network according to its earliest filing date and frequency in patent documents. Our approach contributes to establishing a patent map which considers both structured and unstructured items of a patent document. Besides, differently from previous visualization methods for patent analysis, ours is based on forming a semantic network of keywords from patent documents. And thereby it visualizes a clear overview of patent information in a more comprehensible way. And as a result of those contributions, it enables us to understand advances of emerging technologies and forecast its trend in the future. 2007 Elsevier Ltd. All rights reserved."}
{"_id": "6f0144dc7ba19123ddce8cdd4ad0f6dc36dd4ef2", "title": "Perceptions of Sex, Gender, and Puberty Suppression: A Qualitative Analysis of Transgender Youth", "text": "International guidelines recommend the use of Gonadotropin-Releasing Hormone (GnRH) agonists in adolescents with gender dysphoria (GD) to suppress puberty. Little is known about the way gender dysphoric adolescents themselves think about this early medical intervention. The purpose of the present study was (1) to explicate the considerations of gender dysphoric adolescents in the Netherlands concerning the use of puberty suppression; (2) to explore whether the considerations of gender dysphoric adolescents differ from those of professionals working in treatment teams, and if so in what sense. This was a qualitative study designed to identify considerations of gender dysphoric adolescents regarding early treatment. All 13 adolescents, except for one, were treated with puberty suppression; five adolescents were trans girls and eight were trans boys. Their ages ranged between 13 and 18\u00a0years, with an average age of 16\u00a0years and 11\u00a0months, and a median age of 17\u00a0years and 4\u00a0months. Subsequently, the considerations of the adolescents were compared with views of clinicians treating youth with GD. From the interviews with the gender dysphoric adolescents, three themes emerged: (1) the difficulty of determining what is an appropriate lower age limit for starting puberty suppression. Most adolescents found it difficult to define an appropriate age limit and saw it as a dilemma; (2) the lack of data on the long-term effects of puberty suppression. Most adolescents stated that the lack of long-term data did not and would not stop them from wanting puberty suppression; (3) the role of the social context, for which there were two subthemes: (a) increased media-attention, on television, and on the Internet; (b) an imposed stereotype. Some adolescents were positive about the role of the social context, but others raised doubts about it. Compared to clinicians, adolescents were often more cautious in their treatment views. It is important to give voice to gender dysphoric adolescents when discussing the use of puberty suppression in GD. Otherwise, professionals might act based on assumptions about adolescents' opinions instead of their actual considerations. We encourage gathering more qualitative research data from gender dysphoric adolescents in other countries."}
{"_id": "9742cfb5015952ba4fd12b598d77a5f2b8b4463b", "title": "Good genes, complementary genes and human mate preferences", "text": "The past decade has witnessed a rapidly growing interest in the biological basis of human mate choice. Here we review recent studies that demonstrate preferences for traits which might reveal genetic quality to prospective mates, with potential but still largely unknown influence on offspring fitness. These include studies assessing visual, olfactory and auditory preferences for potential good-gene indicator traits, such as dominance or bilateral symmetry. Individual differences in these robust preferences mainly arise through within and between individual variation in condition and reproductive status. Another set of studies have revealed preferences for traits indicating complementary genes, focussing on discrimination of dissimilarity at genes in the major histocompatibility complex (MHC). As in animal studies, we are only just beginning to understand how preferences for specific traits vary and inter-relate, how consideration of good and compatible genes can lead to substantial variability in individual mate choice decisions and how preferences expressed in one sensory modality may reflect those in another. Humans may be an ideal model species in which to explore these interesting complexities."}
{"_id": "08e7240e59719af7d29dea2683a271a8b525173e", "title": "Anatomy of Forehead, Glabellar, Nasal and Orbital Muscles, and Their Correlation with Distinctive Patterns of Skin Lines on the Upper Third of the Face: Reviewing Concepts", "text": "The purpose of this study is to establish a relationship between the skin lines on the upper third of the face in cadavers, which represent the muscle activity in life and the skin lines achieved by voluntary contraction of the forehead, glabellar, and orbital muscles in patients. Anatomical dissection of fresh cadavers was performed in 20 fresh cadavers, 11 females and 9 males, with ages ranging from 53 to 77\u00a0years. Subcutaneous dissection identified the muscle shape and the continuity of the fibers of the eyebrow elevator and depress muscles. Subgaleal dissection identified the cutaneous insertions of the muscles. They were correlated with skin lines on the upper third of the face of the cadavers that represent the muscle activity in life. Voluntary contraction was performed by 20 voluntary patients, 13 females and 7 males, with ages ranging from 35 to 62\u00a0years. Distinct patterns of skin lines on the forehead, glabellar and orbital areas, and eyebrow displacement were identified. The frontalis exhibited four anatomical shapes with four different patterns of horizontal parallel lines on the forehead skin. The corrugator supercilii showed three shapes of muscles creating six patterns of vertical glabellar lines, three symmetrical and three asymmetrical. The orbicularis oculi and procerus had single patterns. The skin lines exhibited in voluntary contraction of the upper third of the face in patients showed the same patterns of the skin lines achieved in cadavers. Skin lines in cadavers, which are the expression of the muscle activity in life, were similar to those achieved in the voluntary contraction of patients, allowing us to assert that the muscle patterns of patients were similar to those identified in cadavers. This journal requires that authors assign a level of evidence to each submission to which Evidence-Based Medicine rankings are applicable. This excludes Review Articles, Book Reviews, and manuscripts that concern Basic Science, Animal Studies, Cadaver Studies, and Experimental Studies. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors http://www.springer.com/00266 ."}
{"_id": "16827c8aa80f394a5117126e1ec548fd08410860", "title": "Internet addiction: a systematic review of epidemiological research for the last decade.", "text": "In the last decade, Internet usage has grown tremendously on a global scale. The increasing popularity and frequency of Internet use has led to an increasing number of reports highlighting the potential negative consequences of overuse. Over the last decade, research into Internet addiction has proliferated. This paper reviews the existing 68 epidemiological studies of Internet addiction that (i) contain quantitative empirical data, (ii) have been published after 2000, (iii) include an analysis relating to Internet addiction, (iv) include a minimum of 1000 participants, and (v) provide a full-text article published in English using the database Web of Science. Assessment tools and conceptualisations, prevalence, and associated factors in adolescents and adults are scrutinised. The results reveal the following. First, no gold standard of Internet addiction classification exists as 21 different assessment instruments have been identified. They adopt official criteria for substance use disorders or pathological gambling, no or few criteria relevant for an addiction diagnosis, time spent online, or resulting problems. Second, reported prevalence rates differ as a consequence of different assessment tools and cut-offs, ranging from 0.8% in Italy to 26.7% in Hong Kong. Third, Internet addiction is associated with a number of sociodemographic, Internet use, and psychosocial factors, as well as comorbid symptoms and disorder in adolescents and adults. The results indicate that a number of core symptoms (i.e., compulsive use, negative outcomes and salience) appear relevant for diagnosis, which assimilates Internet addiction and other addictive disorders and also differentiates them, implying a conceptualisation as syndrome with similar etiology and components, but different expressions of addictions. Limitations include the exclusion of studies with smaller sample sizes and studies focusing on specific online behaviours. Conclusively, there is a need for nosological precision so that ultimately those in need can be helped by translating the scientific evidence established in the context of Internet addiction into actual clinical practice."}
{"_id": "2cef1dcc6a8ed9b252a095080f25d21a85cd3991", "title": "Analysis and Characterisation of Spoof Surface Plasmon Polaritons based Wideband Bandpass Filter at Microwave Frequency", "text": "This paper presents the wideband bandpass filter (BPF) in the microwave frequency domain. The realisation approach is based on spoof surface plasmon polaritons (SSPPs) phenomenon using plasmonic metamaterial. A novel unit cell is designed for filter design using an LC resonator concept. Then SSPPs BPF is realised using an optimised mode converter and five unit cells. This paper includes a brief design detail of the proposed novel unit cell. The passband of BPF is achieved at approximately 1.20 5.80 GHz, 3dB bandwidth is tentatively 4.60 GHz and the insertion loss is less than 2 dB approximately over the passband. The overall dimension of fabricated filter is (90 x 45) mm. A basic schematic of transmission line representation is also proposed to evaluate the BPF structure."}
{"_id": "03a495d17c3b984594f873715fef5bb2e7eac567", "title": "Hybrid Electric Vehicles: Architecture and Motor Drives", "text": "Electric traction is one of the most promising technologies that can lead to significant improvements in vehicle performance, energy utilization efficiency, and polluting emissions. Among several technologies, hybrid electric vehicle (HEV) traction is the most promising technology that has the advantages of high performance, high fuel efficiency, low emissions, and long operating range. Moreover, the technologies of all the component hardware are technically and markedly available. At present, almost all the major automotive manufacturers are developing hybrid electric vehicles, and some of them have marketed their productions, such as Toyota and Honda. This paper reviews the present technologies of HEVs in the range of drivetrain configuration, electric motor drives, and energy storages"}
{"_id": "046f6abc4f5d738d08fe9b6453951cd5f4efa3bf", "title": "Learning from other subjects helps reducing Brain-Computer Interface calibration time", "text": "A major limitation of Brain-Computer Interfaces (BCI) is their long calibration time, as much data from the user must be collected in order to tune the BCI for this target user. In this paper, we propose a new method to reduce this calibration time by using data from other subjects. More precisely, we propose an algorithm to regularize the Common Spatial Patterns (CSP) and Linear Discriminant Analysis (LDA) algorithms based on the data from a subset of automatically selected subjects. An evaluation of our approach showed that our method significantly outperformed the standard BCI design especially when the amount of data from the target user is small. Thus, our approach helps in reducing the amount of data needed to achieve a given performance level."}
{"_id": "5bdf93e0ce9e75dadb26d0e0c044dc1a588397f5", "title": "Location Segmentation, Inference and Prediction for Anticipatory Computing", "text": "This paper presents an analysis of continuous cellular tower data representing five months of movement from 215 randomly sampled subjects in a major urban city. We demonstrate the potential of existing community detection methodologies to identify salient locations based on the network generated by tower transitions. The tower groupings from these unsupervised clustering techniques are subsequently validated using data from Bluetooth beacons placed in the homes of the subjects. We then use these inferred locations as states within several dynamic Bayesian networks to predict each subject\u2019s subsequent movements with over 90% accuracy. We also introduce the X-Factor model, a DBN with a latent variable corresponding to abnormal behavior. We conclude with a description of extensions for this model, such as incorporating additional contextual and temporal variables already being logged by the"}
{"_id": "c4f565d65dd383f2821d0d60503d9e20bd957e3c", "title": "A Novel Self-Reference Technique for STT-RAM Read and Write Reliability Enhancement", "text": "Spin-transfer torque random access memory (STT-RAM) has demonstrated great potential in embedded and stand-alone applications. However, process variations and thermal fluctuations greatly influence the operation reliability of STT-RAM and limit its scalability. In this paper, we propose a new field-assisted access scheme to improve the read/write reliability and performance of STT-RAM. During read operations, an external magnetic field is applied to a magnetic tunneling junction (MTJ) device, generating a resistive sense signal without referring to other devices. Such a self-reference scheme offers a very promising alternative approach to overcome the severe cell-to-cell variations at highly scaled technology node. Furthermore, the external magnetic field can be used to assist the MTJ switching during write operations without introducing extra hardware overhead."}
{"_id": "8b76d0465143389a735d76422ddfaadd4ce0e625", "title": "Tendinous insertion of semimembranosus muscle into the lateral meniscus", "text": "Forty-two cadaver knees were used for morphologic and MRI observations of the tendinous distal expansions of the semimembranosus m. and the posterior capsular structures of the knee. A tendinous branch of the semimembranosus m. inserting into the posterior horn of the lateral meniscus was found in 43.2% of the knees dissected, besides five already known insertional branches; capsular, direct, anterior and inferior, as well as the oblique popliteal ligament. The tendon had three morphologic types; thin, broad and round. All three types moved the lateral meniscus posteriorly when pulled on. Thus, the semimembranosus m. may also have a protective function for the lateral meniscus as well as the already well established function of protecting the medial meniscus in knee flexion. When a semimembranosus tendon attachment to the posterior horn of the lateral meniscus is present, its normal insertion is difficult to differentiate from a lateral meniscus tear in MRI and this may cause misdiagnosis."}
{"_id": "173c80de279d6bf01418acf2216556f84243f640", "title": "Millisecond Coupling of Local Field Potentials to Synaptic Currents in the Awake Visual Cortex", "text": "The cortical local field potential (LFP) is a common measure of population activity, but its relationship to synaptic activity in individual neurons is not fully established. This relationship has been typically studied during anesthesia and is obscured by shared slow fluctuations. Here, we used patch-clamp recordings in visual cortex of anesthetized and awake mice to measure intracellular activity; we then applied a simple method to reveal its coupling to the simultaneously recorded LFP. LFP predicted membrane potential as accurately as synaptic currents, indicating a major role for synaptic currents in the relationship between cortical LFP and intracellular activity. During anesthesia, cortical LFP predicted excitation far better than inhibition; during wakefulness, it predicted them equally well, and visual stimulation further enhanced predictions of inhibition. These findings reveal a central role for synaptic currents, and especially inhibition, in the relationship between the subthreshold activity of individual neurons and the cortical LFP during wakefulness."}
{"_id": "334d69c508cfc1aa58b896eb1a6a3892f3e848fd", "title": "The underlying structure of visuospatial working memory in children with mathematical learning disability.", "text": "This study examined visual, spatial-sequential, and spatial-simultaneous working memory (WM) performance in children with mathematical learning disability (MLD) and low mathematics achievement (LMA) compared with typically developing (TD) children. Groups were matched on reading decoding performance and verbal intelligence. Besides statistical significance testing, we used bootstrap confidence interval estimation and computed effect sizes. Children were individually tested with six computerized tasks, two for each visuospatial WM subcomponent. We found that both MLD and LMA children had low visuospatial WM function in both spatial-simultaneous and spatial-sequential WM tasks. The WM deficit was most expressed in MLD children and less in LMA children. This suggests that WM scores are distributed along a continuum with TD children achieving top scores and MLD children achieving low scores. The theoretical and practical significance of findings is discussed. Statement of Contribution What is already known on this subject? Working memory plays an important role in mathematical achievement. Children with mathematical learning disability (MLD) usually have low working memory resources. Conflicting results have been reported concerning the role of VSWM in individuals with MLD. What the present study adds? Children with different degree of impairment in math achievement and typically developing children were tested. Visual, spatial-sequential, and spatial-simultaneous working memory tasks were examined. Only spatial-sequential and spatial-simultaneous working memory tasks discriminated the two impairments groups."}
{"_id": "6a9cca9766c7177131881beb4022dfed9df4d2b0", "title": "The neuroscience of social decision-making.", "text": "Given that we live in highly complex social environments, many of our most important decisions are made in the context of social interactions. Simple but sophisticated tasks from a branch of experimental economics known as game theory have been used to study social decision-making in the laboratory setting, and a variety of neuroscience methods have been used to probe the underlying neural systems. This approach is informing our knowledge of the neural mechanisms that support decisions about trust, reciprocity, altruism, fairness, revenge, social punishment, social norm conformity, social learning, and competition. Neural systems involved in reward and reinforcement, pain and punishment, mentalizing, delaying gratification, and emotion regulation are commonly recruited for social decisions. This review also highlights the role of the prefrontal cortex in prudent social decision-making, at least when social environments are relatively stable. In addition, recent progress has been made in understanding the neural bases of individual variation in social decision-making."}
{"_id": "50d6c73ff96076eaa0015c7b5d00969b56692676", "title": "Audit Log Management in MongoDB", "text": "In the past few years, web-based applications and their data management needs have changed dramatically. Relational databases are often being replaced by other viable alternatives, such as NoSQL databases, for reasons of scalability and heterogeneity. MongoDB, a NoSQL database, is an agile database built for scalability, performance and high availability. It can be deployed in single server environment and also on complex multi-site architectures. MongoDB provides high performance for read and write operations by leveraging in-memory computing. Although researchers have motivated the need for MongoDB, not much appears in the area of log management. Efficient log management techniques are needed for various reasons including security, accountability, and improving the performance of the system. Towards this end, we analyze the different logging methods offered by MongoDB and compare them to the NIST standard. Our analysis indicates that profiling and mongosniff are useful for log management and we present a simple model that combines the two techniques."}
{"_id": "4f1db9a1d579bf5906a356801526613eebe464e1", "title": "Bayesian Information Extraction Network", "text": "Dynamic Bayesian networks (DBNs) offer an elegant way to integrate various aspects of language in one model. Many existing algorithms developed for learning and inference in DBNs are applicable to probabilistic language modeling. To demonstrate the potential of DBNs for natural language processing, we employ a DBN in an information extraction task. We show how to assemble wealth of emerging linguistic instruments for shallow parsing, syntactic and semantic tagging, morphological decomposition, named entity recognition etc. in order to incrementally build a robust information extraction system. Our method outperforms previously published results on an established benchmark domain. 1 Information Extraction Information extraction ( IE) is the task of filling in template information from previously unseen text which belongs to a pre-defined domain. The resulting database is suited for formal queries and filtering.IE systems generally work by detecting patterns in the text that help identify significant information. Researchers have shown [Freitag and McCallum, 1999; Ray and Craven, 2001 ] that a probabilistic approach allows the construction of robust and well-performing systems. However, the existing probabilistic systems are generally based on Hidden Markov Models ( HMMs). Due to this relatively impoverished representation, they are unable to take advantage of the wide array of linguistic information used by many non-probabilistic IE systems. In addition, existing HMM -based systems model each target category separately, failing to capture relational information, such as typical target order, or the fact that each element only belongs to a single category. This paper shows how to incorporate a wide array of knowledge into a probabilistic IE system, based on dynamic Bayesian networks ( DBN)\u2014a rich probabilistic representation that generalizes HMMs. Let us illustrateIE by describing seminar announcements which got established as one of the most popular benchmark domains in the field[Califf and Mooney, 1999; Freitag and McCallum, 1999; Soderland, 1999; Roth and Yih, 2001; Ciravegna, 2001 ]. People receive dozens of seminar announcements weekly and need to manually extract information and paste it into personal organizers. The goal of an IE system is to automatically identify target fields such as location and topic of a seminar, date and starting time, ending time and speaker. Announcements come in many formats, but usually follow some pattern. We often find a header with a gist in the form \u201c PostedBy: john@host.domain; Who: Dr. Steals; When: 1 am; \u201d and so forth. Also in the body of the message, the speaker usually precedes both location and starting time, which in turn precedes ending time as in: \u2018\u2018Dr. Steals presents in Dean Hall at one am.\u2019\u2019 The task is complicated since some fields may be missing or may contain multiple values. This kind of data falls into the so-called semi-structured text category. Instances obey certain structure and usually contain information for most of the expected fields in some order. There are two other categories: free text and structured text. Instructured text , the positions of the information fields are fixed and values are limited to pre-defined set. Consequently, theIE systems focus on specifying the delimiters and order associated with each field. 
At the opposite end lies the task of extracting information fromfree textwhich, although unstructured, is assumed to be grammatical. Here IE systems rely more on syntactic, semantic and discourse knowledge in order to assemble relevant information potentially scattered all over a large document. IE algorithms face different challenges depending on the extraction targets and the kind of the text they are embedded in. In some cases, the target is uniquely identifiable (singleslot), while in others, the targets are linked together in multislot association frames. For example, a conference schedule has several slots for related speaker, topic and time of the presentation, while a seminar announcement usually refers to a unique event. Sometimes it is necessary to identify each word in a target slot, while some benefit may be reaped from partial identification of the target, such as labeling the beginning or end of the slot separately. Many applications involve processing of domain-specific jargon like Internetese \u2014a style of writing prevalent in news groups, e-mail messages, bulletin boards and online chat rooms. Such documents do not follow a good grammar, spelling or literary style. Often these are more like a stream-of-consciousness ranting in which asciiart and pseudo-graphic sketches are used and emphasis is provided by all-capitals, or using multiple exclamation signs. As we exemplify below, syntactic analysers easily fail on such Position 1 2 3 4 5 6 7 8 9 10 Tag Phrase Doctor Steals Presents in Dean Hall at 1 am . Lemma Dr. present in hall at am PoS NNP NNP(VB) VB(NNS) IN NNP NN(NNP) IN CD NN(RB) . Syn.segm. NP NP (VP) VP PP NP NP PP NP NP(VP) Semantic Title LstName LstName Location Time Length 3 6 8 2 4 4 2 1 2 1 Case Upper Upper Upper lower Upper Upper lower lower Table 1: Sample phrase and its representation in multiple feature values for ten tokens."}
{"_id": "d2165efcf19b65649f0b2ebbe4af2e7902dca391", "title": "A Framework for Highly-Available Composed Real-Time Internet Services Bhaskaran Raman Qualifying Examination", "text": "Applicationservicesfor theend-userareall importantin today\u2019scommunicationnetworks,andcoulddictatethesuccessor failureof technologyor serviceproviders[39]. It is importantto develop and deploy applicationfunctionality quickly [18]. The ability to compose servicesfrom independentprovidersprovidesa flexible way to quickly build new end-to-endfunctionality. Suchcompositionof services acrossthe network and acrossdifferent serviceproviders becomesespeciallyimportant in the context of growing popularityandheterogeneityin 3G+ accessnetworks and devices[25]. In this work, we provide a framework for constructing such composed services on the Internet. Robustnessandhigh-availability arecrucial for Internet services.While cluster-basedmodelsfor resilienceto failureshave beenbuilt for web-servers[27] aswell asproxy services[11, 4], theseareinadequatein thecontext of composedservices.This is especiallyso whenthe application sessionis long-lived, andfailureshaveto behandledduring asession. In the context of composed services, we address the important and challenging issues of resilience to failures, and adapting to changes in overall performance during longlived sessions. Our framework is basedon a connection-orientedoverlay network of compute-clusterson theInternet.Theoverlay network provides the context for composingservices over the wide-area,and monitoring for livenessand performanceof a session. We have performedinitial analysesof the feasibility of network failure detectionover the wide-areaInternet. And we have a preliminaryevaluation of the overheadassociatedwith suchan overlay network. We presentour plansfor furtherevaluationandrefinement of thearchitecture;andfor examiningissuesrelatedto the creationof theoverlaytopology. 1 Intr oduction Application functionality for the end-useris thedriving force behinddevelopmentof technologyandits adoption. It is importantto be ableto developanddeploy new functionality quickly [18]. Compositionof existing servicesto achieve new functionalityenablessuchquick development throughreuseof alreadydeployed components.Consider for example,thescenarioshown in Figure1. Video-on-demand server Replicated instances Service Provider A"}
{"_id": "0cb8f50580cc69191144bd503e268451ce966fa6", "title": "Neural Message Passing for Quantum Chemistry", "text": "Supervised learning on molecules has incredible potential to be useful in chemistry, drug discovery, and materials science. Luckily, several promising and closely related neural network models invariant to molecular symmetries have already been described in the literature. These models learn a message passing algorithm and aggregation procedure to compute a function of their entire input graph. At this point, the next step is to find a particularly effective variant of this general approach and apply it to chemical prediction benchmarks until we either solve them or reach the limits of the approach. In this paper, we reformulate existing models into a single common framework we call Message Passing Neural Networks (MPNNs) and explore additional novel variations within this framework. Using MPNNs we demonstrate state of the art results on an important molecular property prediction benchmark; these results are strong enough that we believe future work should focus on datasets with larger molecules or more accurate ground truth labels."}
{"_id": "bcbf90dd0d592ded9ef1255ac78d0f9479237207", "title": "Substrate integrated waveguide (SIW) power amplifier using CBCPW-to-SIW transition for matching network", "text": "In this paper, for the first time a novel substrate integrated waveguide (SIW)-based 10W power amplifier (PA), designed with conductor-backed coplanar waveguide (CBCPW)-to-SIW transition matching network (MN), is presented. Transition between CBCPW and SIW is employed for both input and output MN designs, the proposed SIW PA can be easily connected with any microstrip or SIW-based circuits. Asymmetrical and symmetrical types of CBCPW-to-SIW transition MN are proposed. Output SIW-based MN is designed with asymmetrical structure by using one inductive metalized post and input SIW-based MN is designed with symmetrical structure by using two identical inductive metalized posts. One SIW-based 10W PA using GaN HEMT at 3.62 GHz is designed, fabricated, and measured. Measured results show that the maximum power added efficiency (PAE) is 54.24 % with 39.74 dBm output power and the maximum gain is 13.31 dB. At the design frequency of 3.6 GHz, the size of proposed SIW-based PA is comparable with other microstrip-based PAs."}
{"_id": "19a4b6d05019b5e1be16790b9306ed086f5ddfb0", "title": "High Performance Single-Chip FPGA Rijndael Algorithm Implementations", "text": "This paper describes high performance single-chip FPGA implementations of the new Advanced Encryption Standard (AES) algorithm, Rijndael. The designs are implemented on the Virtex-E FPGA family of devices. FPGAs have proven to be very effective in implementing encryption algorithms. They provide more flexibility than ASIC implementations and produce higher data-rates than equivalent software implementations. A novel, generic, parameterisable Rijndael encryptor core capable of supporting varying key sizes is presented. The 192-bit key and 256-bit key designs run at data rates of 5.8 Gbits/sec and 5.1 Gbits/sec respectively. The 128-bit key encryptor core has a throughput of 7 Gbits/sec which is 3.5 times faster than similar existing hardware designs and 21 times faster than known software implementations, making it the fastest single-chip FPGA Rijndael encryptor core reported to date. A fully pipelined single-chip 128-bit key Rijndael encryptor/decryptor core is also presented. This design runs at a data rate of 3.2 Gbits/sec on a Xilinx Virtex-E XCV3200E-8-CG1156 FPGA device. There are no known singlechip FPGA implementations of an encryptor/decryptor Rijndael design."}
{"_id": "446573a346acdbd2eb8f0527c5d73fc707f04527", "title": "AES Proposal : Rijndael", "text": ""}
{"_id": "4d5e15c080dde7e5c50ae2697ee56d29218235b1", "title": "Very Compact FPGA Implementation of the AES Algorithm", "text": "In this paper a compact FPGA architecture for the AES algorithm with 128-bit key targeted for low-cost embedded applications is presented. Encryption, decryption and key schedule are all implemented using small resources of only 222 Slices and 3 Block RAMs. This implementation easily fits in a low-cost Xilinx Spartan II XC2S30 FPGA. This implementation can encrypt and decrypt data streams of 150 Mbps, which satisfies the needs of most embedded applications, including wireless communication. Specific features of Spartan II FPGAs enabling compact logic implementation are explored, and a new way of implementing MixColumns and InvMixColumns transformations using shared logic resources is presented."}
{"_id": "e1f21059a73fc1b37c5e3db047f55fa61b07e6db", "title": "A compact FPGA implementation of the hash function whirlpool", "text": "Recent breakthroughs in cryptanalysis of standard hash functions like SHA-1 and MD5 raise the need for alternatives. A credible alternative to for instance SHA-1 or the SHA-2 family of hash functions is Whirlpool. Whirlpool is a hash function that has been evaluated and approved by NESSIE and is standardized by ISO/IEC. To the best of our knowledge only one FPGA implementation of Whirlpool has been published to date. This implementation is designed for high throughput rates requiring a considerable amount of hardware resources. In this article we present a compact hardware implementation of the hash function Whirlpool. The proposed architecture uses an innovative state representation that makes it possible to reduce the required hardware resources remarkably. The complete implementation requires 1456 CLB-slices and, most notably, no block RAMs."}
{"_id": "a6402417792005d38c99d29f9b32a4b27e32d7c8", "title": "House Price Prediction : Hedonic Price Model vs . Artificial Neural Network", "text": "The objective of this paper is to empirically compare the predictive power of the hedonic model with an artificial neural network model on house price prediction. A sample of 200 houses in Christchurch, New Zealand is randomly selected from the Harcourt website. Factors including house size, house age, house type, number of bedrooms, number of bathrooms, number of garages, amenities around the house and geographical location are considered. Empirical results support the potential of artificial neural network on house price prediction, although previous studies have commented on its black box nature and achieved different conclusions."}
{"_id": "62c3daf1899f6841f7092961193f062cc4fe1103", "title": "Search mode and purchase intention in online shopping behavior", "text": "This study focuses on the effect of website visitors' goal-oriented search mode on purchase intention in online environments. In a study of 874 respondents recruited from 13 online shops representing a diversity of product categories and customer segments, the effect of visitors' goal-oriented search mode on purchase intention is found to be moderated by product involvement and product risk. Furthermore, product involvement, product risk, product knowledge and Internet experience are found to have positive effects on the degree of goaloriented search mode of the visitors. Also, product knowledge and Internet experience are reported to have direct positive effects on purchase intention. The results point to the importance of understanding the characteristics of website visitors, and to customize the support and search services offered on the website to the characteristics and preferences of the individual visitor. Customization of this kind may partly be based on immediate visitor history (referrer) and may be used to increase purchase intention, and eventually online sales."}
{"_id": "e103581df3c31b8a0518fe378920c48db51fe5a3", "title": "Data-driven Personas: Constructing Archetypal Users with Clickstreams and User Telemetry", "text": "User Experience (UX) research teams following a user centered design approach harness personas to better understand a user's workflow by examining that user's behavior, goals, needs, wants, and frustrations. To create target personas these researchers rely on workflow data from surveys, self-reports, interviews, and user observation. However, this data not directly related to user behavior, weakly reflects a user's actual workflow in the product, is costly to collect, is limited to a few hundred responses, and is outdated as soon as a persona's workflows evolve. To address these limitations we present a quantitative bottom-up data-driven approach to create personas. First, we directly incorporate user behavior via clicks gathered automatically from telemetry data related to the actual product use in the field; since the data collection is automatic it is also cost effective. Next, we aggregate 3.5 million clicks from 2400 users into 39,000 clickstreams and then structure them into 10 workflows via hierarchical clustering; we thus base our personas on a large data sample. Finally, we use mixed models, a statistical approach that incorporates these clustered workflows to create five representative personas; updating our mixed model ensures that these personas remain current. We also validated these personas with our product's user behavior experts to ensure that workflows and the persona goals represent actual product use."}
{"_id": "29b377cb5e960c8f0f7b880e00399fc6ae955c0f", "title": "Detecting Faulty Nodes with Data Errors for Wireless Sensor Networks", "text": "Wireless Sensor Networks (WSN) promise researchers a powerful instrument for observing sizable phenomena with fine granularity over long periods. Since the accuracy of data is important to the whole system's performance, detecting nodes with faulty readings is an essential issue in network management. As a complementary solution to detecting nodes with functional faults, this article, proposes FIND, a novel method to detect nodes with data faults that neither assumes a particular sensing model nor requires costly event injections. After the nodes in a network detect a natural event, FIND ranks the nodes based on their sensing readings as well as their physical distances from the event. FIND works for systems where the measured signal attenuates with distance. A node is considered faulty if there is a significant mismatch between the sensor data rank and the distance rank. Theoretically, we show that average ranking difference is a provable indicator of possible data faults. FIND is extensively evaluated in simulations and two test bed experiments with up to 25 MicaZ nodes. Evaluation shows that FIND has a less than 5% miss detection rate and false alarm rate in most noisy environments."}
{"_id": "c78f7a2e573d189c76d91958204cfe96248f97f9", "title": "5.2 Distributed system of digitally controlled microregulators enabling per-core DVFS for the POWER8TM microprocessor", "text": "Integrated voltage regulator modules (iVRMs) [1] provide a cost-effective path to realizing per-core dynamic voltage and frequency scaling (DVFS), which can be used to optimize the performance of a power-constrained multi-core processor. This paper presents an iVRM system developed for the POWER8\u2122 microprocessor, which functions as a very fast, accurate low-dropout regulator (LDO), with 90.5% peak power efficiency (only 3.1% worse than an ideal LDO). At low output voltages, efficiency is reduced but still sufficient to realize beneficial energy savings with DVFS. Each iVRM features a bypass mode so that some of the cores can be operated at maximum performance with no regulator loss. With the iVRM area including the input decoupling capacitance (DCAP) (but not the output DCAP inherent to the cores), the iVRMs achieve a power density of 34.5W/mm2, which exceeds that of inductor-based or SC converters by at least 3.4\u00d7 [2]."}
{"_id": "89b127457cf4080e1592589f78ecbda4c102697f", "title": "New Word Detection in Ancient Chinese Literature", "text": "Mining Ancient Chinese corpus is not as convenient as modern Chinese, because there is no complete dictionary of ancient Chinese words which leads to the bad performance of tokenizers. So finding new words in ancient Chinese texts is significant. In this paper, the Apriori algorithm is improved and used to produce candidate character sequences. And a long short-term memory (LSTM) neural network is used to identify the boundaries of the word. Furthermore, we design word confidence feature to measure the confidence score of new words. The experimental results demonstrate that the improved Apriori-like algorithm can greatly improve the recall rate of valid candidate character sequences, and the average accuracy of our method on new word detection raise to 89.7%."}
{"_id": "be99edb722435e465e469f0b09bddcc4e8bf33c9", "title": "Attacking Deterministic Signature Schemes Using Fault Attacks", "text": "Many digital signature schemes rely on random numbers that are unique and non-predictable per signature. Failures of random number generators may have catastrophic effects such as compromising private signature keys. In recent years, many widely-used cryptographic technologies adopted deterministic signature schemes because they are presumed to be safer to implement. In this paper, we analyze the security of deterministic ECDSA and EdDSA signature schemes and show that the elimination of random number generators in these schemes enables new kinds of fault attacks. We formalize these attacks and introduce practical attack scenarios against EdDSA using the Rowhammer fault attack. EdDSA is used in many widely used protocols such as TLS, SSH, and IPSec, and we show that these protocols are not vulnerable to our attack. We formalize the necessary requirements of protocols using these deterministic signature schemes to be vulnerable, and discuss mitigation strategies and their effect on fault attacks against deterministic signature schemes."}
{"_id": "ff56ba67f29179fe65d99cf929dfec811abf175e", "title": "Development of a FACS-verified set of basic and self-conscious emotion expressions.", "text": "In 2 studies, the authors developed and validated of a new set of standardized emotion expressions, which they referred to as the University of California, Davis, Set of Emotion Expressions (UCDSEE). The precise components of each expression were verified using the Facial Action Coding System (FACS). The UCDSEE is the first FACS-verified set to include the three \"self-conscious\" emotions known to have recognizable expressions (embarrassment, pride, and shame), as well as the 6 previously established \"basic\" emotions (anger, disgust, fear, happiness, sadness, and surprise), all posed by the same 4 expressers (African and White males and females). This new set has numerous potential applications in future research on emotion and related topics."}
{"_id": "eb82febc304260fae4a18e89f8d04bfca8823236", "title": "The use of Social Networking Sites (SNSs) by the faculty members of the School of Library & Information Science, PAAET, Kuwait", "text": "Purpose \u2013 The purpose of this study is to describe the usage of Social Networking Sites (SNSs) by the faculty members of the School of Library and Information Science (SLIS), at the College of Basic Education, the Public Authority for Applied Education and Training (PAAET), Kuwait. Design/methodology/approach \u2013 A survey conducted to collect data from 33 faculty members of whom only 21 members were using SNSs, representing 63.6 per cent of the total sample, and 12 members were not using SNSs, representing 36.4 per cent of the total sample. This study revealed that SNSs are used moderately by the faculty members. Findings \u2013 This study showed that faculty members who were using SNSs tend to be males, aged between 41 and 50 years, PhD holders, ranked as assistant professors, full-time members, specialized in information technologies with a relatively new experience of teaching ranged from one to five years, and most of the faculty members who were not using SNSs tended to be also males, aged between 41 and 60 years, PhD holders, ranked as lecturers, full-time members specialized in organization of information with a teaching experience ranged from 16 to 20 years. More than half of the faculty members were using SNSs for three years to less than six years, and a large number of them were using SNSs several times a week and were accessing these sites more from their school office, home and school laboratory. There are no any statistical significant differences between the demographic data of participants (gender, age and education level) and either their use or non-use of SNSs. There are no significant differences between the academic rank, teaching status and teaching experience of faculty and their use of SNSs. However, there is a significant relation between the faculty\u2019s area of teaching and their use of SNSs. Faculty members were interested in the use of SNSs. YouTube, Twitter, Facebook and blogs respectively were used mostly by faculty members, but Twitter, Facebook and YouTube were the most famous SNSs they have profiles on. Faculty members have adopted SNSs mainly for the purpose of communicating with others, finding and sharing information with peers and students as well. Tasks on SNSs made by faculty members were mostly to make communication, send/receive messages and find general and specific information. Faculty members\u2019 profiles on SNSs were mostly on Twitter, Facebook, YouTube, blogs, wikis and podcasting respectively. Faculty members confirmed that the use of YouTube, Facebook, blogs, Twitter, wikis and podcasting respectively was at least effective and the use of YouTube, Facebook, Twitter, Blogs and Wikis respectively was at least fairly useful fairly easy to them. Faculty members are in general agreement about the effectiveness of SNSs especially for disseminating and sharing information, communication and informal collaboration. The study showed The researcher would like to thank the Public Authority for Applied Education and Training (PAAET), the state of Kuwait, for supporting this paper (BE-11-10). The current issue and full text archive of this journal is available on Emerald Insight at: www.emeraldinsight.com/0264-0473.htm"}
{"_id": "a7f46ae35116f4c0b3aaa1c9b46d6e79e63b56c9", "title": "The Bluetooth radio system", "text": "A few years ago it was recognized that the vision of a truly low-cost, low-power radio-based cable replacement was feasible. Such a ubiquitous link would provide the basis for portable devices to communicate together in an ad hoc fashion by creating personal area networks which have similar advantages to their office environment counterpart, the local area network. Bluetooth/sup TM/ is an effort by a consortium of companies to design a royalty-free technology specification enabling this vision. This article describes the radio system behind the Bluetooth concept. Designing an ad hoc radio system for worldwide usage poses several challenges. The article describes the critical system characteristics and motivates the design choices that have been made."}
{"_id": "7c38d99373d68e8206878aa6f49fce6e0d4cfc6a", "title": "Patch Autocorrelation Features: a translation and rotation invariant approach for image classification", "text": "The autocorrelation is often used in signal processing as a tool for finding repeating patterns in a signal. In image processing, there are various image analysis techniques that use the autocorrelation of an image in a broad range of applications from texture analysis to grain density estimation. This paper provides an extensive review of two recently introduced and related frameworks for image representation based on autocorrelation, namely Patch Autocorrelation Features (PAF) and Translation and Rotation Invariant Patch Autocorrelation Features (TRIPAF). The PAF approach stores a set of features obtained by comparing pairs of patches from an image. More precisely, each feature is the euclidean distance between a particular pair of patches. The proposed approach is successfully evaluated in a series of handwritten digit recognition experiments on the popular MNIST data set. However, the PAF approach has limited applications, because it is not invariant to affine transformations. More recently, the PAF approach was extended to become invariant to image transformations, including (but not limited to) translation and rotation changes. In the TRIPAF framework, several features are extracted from each image patch. Based on these features, a vector of similarity values is computed between each pair of patches. Then, the similarity vectors are clustered together such that the spatial offset between the patches of each pair is roughly the same. Finally, the mean and the standard deviation of each similarity value are computed for each group of similarity vectors. These statistics are concatenated to obtain the TRIPAF feature vector. The TRIPAF vector essentially records information about the repeating patterns within an image at various spatial offsets. After presenting the two approaches, several optical character recognition and texture classification experiments are conducted to evaluate the two approaches. Results are reported on the MNIST (98.93%), the Brodatz (96.51%), and the UIUCTex (98.31%) data sets. Both PAF and TRIPAF are fast to compute and produce compact representations in practice, while reaching accuracy levels similar to other state-of-the-art methods."}
{"_id": "6e6f47c4b2109e7824cd475336c3676faf9b113e", "title": "Baby Talk : Understanding and Generating Image Descriptions", "text": "We posit that visually descriptive language offers computer vision researchers both information about the world, and information about how people describe the world. The potential benefit from this source is made more significant due to the enormous amount of language data easily available today. We present a system to automatically generate natural language descriptions from images that exploits both statistics gleaned from parsing large quantities of text data and recognition algorithms from computer vision. The system is very effective at producing relevant sentences for images. It also generates descriptions that are notably more true to the specific image content than previous work."}
{"_id": "b7954c6886bd953ebf677a96530a4e3923642260", "title": "Single camera lane detection and tracking", "text": "In this paper we present a method to detect and track straight lane boundaries, using a forward-looking single camera mounted on-board. The method proved to be robust, even under varying lighting conditions, and in the presence of heavy shadowing cast by vegetation, vehicles, bridges, etc. Moreover, false positive hardly occur. The lane markings are being continuously detected even when the vehicle is performing maneuvers such as excursion or lane change. The performance is achieved by a novel combination of methods, namely, by first finding vanishing point candidates for the lane boundaries, and later by the careful selection of line segments given a vanishing point, and finally by using a smart heuristics to select the whole set of four lane boundaries. Suppression of false positives is further achieved by tracking the vanishing point, and by constraining the lane width based on recent history."}
{"_id": "219d1747f0afb8b9d2c603d0f3503764d5257796", "title": "CLAMS: Bringing Quality to Data Lakes", "text": "With the increasing incentive of enterprises to ingest as much data as they can in what is commonly referred to as \"data lakes\", and with the recent development of multiple technologies to support this \"load-first\" paradigm, the new environment presents serious data management challenges. Among them, the assessment of data quality and cleaning large volumes of heterogeneous data sources become essential tasks in unveiling the value of big data. The coveted use of unstructured and semi-structured data in large volumes makes current data cleaning tools (primarily designed for relational data) not directly adoptable.\n We present CLAMS, a system to discover and enforce expressive integrity constraints from large amounts of lake data with very limited schema information (e.g., represented as RDF triples). This demonstration shows how CLAMS is able to discover the constraints and the schemas they are defined on simultaneously. CLAMS also introduces a scale-out solution to efficiently detect errors in the raw data. CLAMS interacts with human experts to both validate the discovered constraints and to suggest data repairs.\n CLAMS has been deployed in a real large-scale enterprise data lake and was experimented with a real data set of 1.2 billion triples. It has been able to spot multiple obscure data inconsistencies and errors early in the data processing stack, providing huge value to the enterprise."}
{"_id": "62aa13c3c4cc34cb29edbf60cc30c0138d08e870", "title": "Distributed Detection of Sensor Worms Using Sequential Analysis and Remote Software Attestations", "text": "Recent work has demonstrated that self-propagating worms are a real threat to sensor networks. Since worms can enable an adversary to quickly compromise an entire sensor network, they must be detected and stopped as quickly as possible. To meet this need, we propose a worm propagation detection scheme for sensor networks. The proposed scheme applies a sequential analysis to detect worm propagation by leveraging the intuition that a worm\u2019s communication pattern is different from benign traffic. In particular, a worm in a sensor network requires a long sequence of packets propagating hop-by-hop to each new infected node in turn. We thus have detectors that observe communication patterns in the network, a worm spreading hop-by-hop will quickly create chains of connections that would not be seen in normal traffic. Once detector nodes identify the worm propagation pattern, they initiate remote software attestations to detect infected nodes. Through analysis and simulation, we demonstrate that the proposed scheme effectively and efficiently detects worm propagation. In particular, it blocks worm propagation while restricting the fraction of infected nodes to at most 13.5% with an overhead of at most 0.63 remote attestations per node per time slot."}
{"_id": "288345979bd6b78534cd86228e246a5daed7e700", "title": "\u201cTeach Me\u2013Show Me\u201d\u2014End-User Personalization of a Smart Home and Companion Robot", "text": "Care issues and costs associated with an increasing elderly population are becoming a major concern for many countries. The use of assistive robots in \u201csmart-home\u201d environments has been suggested as a possible partial solution to these concerns. A challenge is the personalization of the robot to meet the changing needs of the elderly person over time. One approach is to allow the elderly person, or their carers or relatives, to make the robot learn activities in the smart home and teach it to carry out behaviors in response to these activities. The overriding premise being that such teaching is both intuitive and \u201cnontechnical.\u201d To evaluate these issues, a commercially available autonomous robot has been deployed in a fully sensorized but otherwise ordinary suburban house. We describe the design approach to the teaching, learning, robot, and smart home systems as an integrated unit and present results from an evaluation of the teaching component with 20 participants and a preliminary evaluation of the learning component with three participants in a human-robot interaction experiment. Participants reported findings using a system usability scale and ad-hoc Likert questionnaires. Results indicated that participants thought that this approach to robot personalization was easy to use, useful, and that they would be capable of using it in real-life situations both for themselves and for others."}
{"_id": "b9e78ee951ad2e4ba32a0861744bc4cea3821d50", "title": "A 20 Gb/s 0.4 pJ/b Energy-Efficient Transmitter Driver Utilizing Constant- ${\\rm G}_{\\rm m}$ Bias", "text": "This paper describes a transmitter driver based on a CMOS inverter with a resistive feedback. By employing the proposed driver topology, the pre-driver can be greatly simplified, resulting in a remarkable reduction of the overall driver power consumption. It also offers another advantage that the implementation of equalization is straightforward, compared with a conventional voltage-mode driver. Furthermore, the output impedance remains relatively constant while the data is being transmitted, resulting in good signal integrity. For evaluation of the driver performance, a fully functional 20 Gb/s transmitter is implemented, including a PRBS generator, a serializer, and a half-rate clock generator. In order to enhance the overall speed of the digital circuits for 20 Gb/s data transmission, the resistive feedback is applied to the time-critical inverters, which enables shorter rise/fall times. The prototype chip is fabricated in a 65 nm CMOS technology. The implemented driver circuit operates up to the data rate of 20 Gb/s, exhibiting an energy efficiency of 0.4 pJ/b for the output swing of 250 mVpp,diff."}
{"_id": "6322fb79165999fb2651b0f03d92ac8e066c11f5", "title": "A Technique for Data Deduplication using Q-Gram Concept with Support Vector Machine", "text": "Several systems that rely on consistent data to offer high quality services, such as digital libraries and e-commerce brokers, may be affected by the existence of duplicates, quasi-replicas, or near-duplicate entries in their repositories. Because of that, there have been significant investments from private and government organizations in developing methods for removing replicas from its data repositories.In this paper, we have proposed accordingly. In the previous work, duplicate record detection was done using three different similarity measures and neural network. In the previous work, we have generated feature vector based on similarity measures and then, neural network was used to find the duplicate records. In this paper, we have developed Q-gram concept with support vector machine for deduplication process. The similarity function, which we are used Dice coefficient,Damerau\u2013Levenshtein distance,Tversky index for similarity measurement. Finally, support vector machine is used for testing whether data record is duplicate or not. A set of data generated from some similarity measures are used as the input to the proposed system. There are two processes which characterize the proposed deduplication technique, the training phase and the testing phase the experimental results showed that the proposed deduplication technique has higher accuracy than the existing method. The accuracy obtained for the proposed deduplication 88%."}
{"_id": "52816eaca8cc8624e497437325ae365cc13d5445", "title": "A New Travelling Wave Antenna in Microstrip", "text": "The radiation characteristics of the first higher order mode of microstrip lines are investigated. As a result, a simple travelling wave antenna element is scribed, having a larger bandwidth compared with resonator antennas. A method to excite the first higher order mode is shown. A single antenna element is treated theoretically and experimentally, and an array of four antenna elements is demonstrated."}
{"_id": "76e02fef750d183623dbbc602033beca8f6ee51a", "title": "The pathogenesis of Campylobacter jejuni-mediated enteritis.", "text": "Campylobacter jejuni, a gram-negative spiral shaped bacterium, is a frequent cause of gastrointestinal food-borne illness in humans throughout the world. Illness with C. jejuni ranges from mild to severe diarrheal disease. This article focuses on Campylobacter virulence determinants and their potential role in the development of C. jejuni-mediated enteritis. A model is presented that diagrams the interactions of C. jejuni with the intestinal epithelium. Additional work to identify and characterize C. jejuni virulence determinants is certain to provide novel insights into the diversity of strategies employed by bacterial pathogens to cause disease."}
{"_id": "96aa80b7c66d7c4b8a4d3d4f7b29beea1be0939f", "title": "Real-Time Systems - Design Principles for Distributed Embedded Applications", "text": "Now, we come to offer you the right catalogues of book to open. real time systems design principles for distributed embedded applications is one of the literary work in this world in suitable to be reading material. That's not only this book gives reference, but also it will show you the amazing benefits of reading a book. Developing your countless minds is needed; moreover you are kind of people with great curiosity. So, the book is very appropriate for you."}
{"_id": "8bacf342c1893a58a0eec0343f9287fa1608a1eb", "title": "Performance evaluation of synchronous rectification in front-end full-bridge rectifiers", "text": "In this paper, performance evaluation of synchronous rectification in front-end full-bridge rectifiers is presented. Specifically, the implementation of the full-bridge rectifier with the two diode rectifiers connected to the negative output terminal replaced by two synchronous rectifiers (SRs) and the implementation with all four diode rectifiers replaced by four SRs are considered. In both implementations, the SRs are N-channel MOSFETs controlled by sensing the line voltage. Two methods of line-voltage sensing are presented. First, direct line-voltage sensing, and second, indirect line-voltage sensing, i.e., sensing the line voltage between an input terminal and the negative output terminal. The proposed implementations also include a protection method of preventing accidental short circuit between the input or output terminals. In addition, SR power-management methods such as turning off the SRs at light loads and at no load are provided. The protection of SRs at large inrush currents is also discussed. Experimental results obtained on a 90-W (19.5-V/4.6-A) laptop adapter for the universal line voltage range (90-264 Vrms) are given. In the experimental circuit with four SRs, the efficiency improvement at 90-Vrms line voltage and full load is 1.31%."}
{"_id": "215aad1520ec1b087ab2ba4043f5e0ecc32e7482", "title": "Reducibility Among Combinatorial Problems", "text": "A large class of computational problems involve the determination of properties of graphs, digraphs, integers, arrays of integers, finite families of finite sets, boolean formulas and elements of other countable domains. Through simple encodings from such domains into the set of words over a finite alphabet these problems can be converted into language recognition problems, and we can inquire into their computational complexity. It is reasonable to consider such a problem satisfactorily solved when an algorithm for its solution is found which terminates within a number of steps bounded by a polynomial in the length of the input. Many problems with wide applicability \u2013 e.g., set cover, knapsack, hitting set, max cut, and satisfiability \u2013 lack a polynomial algorithm for solving them, but also lack a proof that no such polynomial algorithm exists. Hence, they remain \u201copen problems.\u201d This paper references the recent work, \u201cOn the Reducibility of Combinatorial Problems\u201d [1]. BODY A large class of open problems are mutually convertible via poly-time reductions. Hence, either all can be solved in poly-time, or none can. REFERENCES [1] R. Karp. Reducibility Among Combinatorial Problems. In Complexity of Computer Computations, 1972. \u2217With apologies to Professor Richard Karp. Volume X of Tiny Transactions on Computer Science This content is released under the Creative Commons Attribution-NonCommercial ShareAlike License. Permission to make digital or hard copies of all or part of this work is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. CC BY-NC-SA 3.0: http://creativecommons.org/licenses/by-nc-sa/3.0/."}
{"_id": "7c4824cc17bf735a7f80d128cc7ac6c7a8ab8aec", "title": "Grapheme-to-phoneme conversion using Long Short-Term Memory recurrent neural networks", "text": "Grapheme-to-phoneme (G2P) models are key components in speech recognition and text-to-speech systems as they describe how words are pronounced. We propose a G2P model based on a Long Short-Term Memory (LSTM) recurrent neural network (RNN). In contrast to traditional joint-sequence based G2P approaches, LSTMs have the flexibility of taking into consideration the full context of graphemes and transform the problem from a series of grapheme-to-phoneme conversions to a word-to-pronunciation conversion. Training joint-sequence based G2P require explicit grapheme-to-phoneme alignments which are not straightforward since graphemes and phonemes don't correspond one-to-one. The LSTM based approach forgoes the need for such explicit alignments. We experiment with unidirectional LSTM (ULSTM) with different kinds of output delays and deep bidirectional LSTM (DBLSTM) with a connectionist temporal classification (CTC) layer. The DBLSTM-CTC model achieves a word error rate (WER) of 25.8% on the public CMU dataset for US English. Combining the DBLSTM-CTC model with a joint n-gram model results in a WER of 21.3%, which is a 9% relative improvement compared to the previous best WER of 23.4% from a hybrid system."}
{"_id": "9a0fff9611832cd78a82a32f47b8ca917fbd4077", "title": "Text-To-Speech Synthesis", "text": ""}
{"_id": "178631e0f0e624b1607c7a7a2507ed30d4e83a42", "title": "Speech recognition with deep recurrent neural networks", "text": "Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score."}
{"_id": "841cebeab490ad455df3e7f7bf2cdff0076e59ca", "title": "A comparative analysis of data center network architectures", "text": "Advances in data intensive computing and high performance computing facilitate rapid scaling of data center networks, resulting in a growing body of research exploring new network architectures that enhance scalability, cost effectiveness and performance. Understanding the tradeoffs between these different network architectures could not only help data center operators improve deployments, but also assist system designers to optimize applications running on top of them. In this paper, we present a comparative analysis of several well known data center network architectures using important metrics, and present our results on different network topologies. We show the tradeoffs between these topologies and present implications on practical data center implementations."}
{"_id": "39ae3816ab8895c754112554b4a4112bc4b6c2c6", "title": "Tuning metaheuristics: A data mining based approach for particle swarm optimization", "text": "The paper is concerned with practices for tuning the parameters of metaheuristics. Settings such as, e.g., the cooling factor in simulated annealing, may greatly affect a metaheuristic\u2019s efficiency as well as effectiveness in solving a given decision problem. However, procedures for organizing parameter calibration are scarce and commonly limited to particular metaheuristics. We argue that the parameter selection task can appropriately be addressed by means of a data mining based approach. In particular, a hybrid system is devised, which employs regression models to learn suitable parameter values from past moves of a metaheuristic in an online fashion. In order to identify a suitable regression method and, more generally, to demonstrate the feasibility of the proposed approach, a case study of particle swarm optimization is conducted. Empirical results suggest that characteristics of the decision problem as well as search history data indeed embody information that allows suitable parameter values to be determined, and that this type of information can successfully be extracted by means of nonlinear regression models. 2011 Elsevier Ltd. All rights reserved."}
{"_id": "459696bdbd8af247d154cd8008aeacdd05fe59e1", "title": "A Unified Approach to the Change of Resolution: Space and Gray-Level", "text": "Multiple resolution analysis of images is a current trend in computer vision. In most cases, only spatial resolution has been considered. However, image resolution has an additional aspect: gray level, or color, resolution. Color resolution has traditionally been considered in the area of computer graphics. By defining a suitable measure for the comparison of images, changes in resolution can be treated with the same tools as changes in color resolution. A gray tone image, for example, can be compared to a halftone image having only two colors (black and white), but of higher spatial resolution. An important application can be in pyramids, one of the most commonly used multiple (spatial) resolution schemes, where this approach provides a tool to change the color resolution as well. Increasing color resolution while reducing spatial resolution to retain more image details and prevent aliasing is an example of the possibility to find optimal combinations of resolution reduction, spatial and color, to best fit an application."}
{"_id": "31e2f7c599cd5cd621f13932186db352be4d8eea", "title": "PaperVis: Literature Review Made Easy", "text": "Reviewing literatures for a certain research field is always important for academics. One could use Google-like information seeking tools, but oftentimes he/she would end up obtaining too many possibly related papers, as well as the papers in the associated citation network. During such a process, a user may easily get lost after following a few links for searching or cross-referencing. It is also difficult for the user to identify relevant/important papers from the resulting huge collection of papers. Our work, called PaperVis, endeavors to provide a user-friendly interface to help users quickly grasp the intrinsic complex citation-reference structures among a specific group of papers. We modify the existing Radial Space Filling (RSF) and Bullseye View techniques to arrange involved papers as a node-link graph that better depicts the relationships among them while saving the screen space at the same time. PaperVis applies visual cues to present node attributes and their transitions among interactions, and it categorizes papers into semantically meaningful hierarchies to facilitate ensuing literature exploration. We conduct experiments on the InfoVis 2004 Contest Dataset to demonstrate the effectiveness of PaperVis."}
{"_id": "40e8d23231469e6495d3e06086e64df93e9dcfa0", "title": "Front-End Factor Analysis for Speaker Verification", "text": "This paper presents an extension of our previous work which proposes a new speaker representation for speaker verification. In this modeling, a new low-dimensional speaker- and channel-dependent space is defined using a simple factor analysis. This space is named the total variability space because it models both speaker and channel variabilities. Two speaker verification systems are proposed which use this new representation. The first system is a support vector machine-based system that uses the cosine kernel to estimate the similarity between the input data. The second system directly uses the cosine similarity as the final decision score. We tested three channel compensation techniques in the total variability space, which are within-class covariance normalization (WCCN), linear discriminate analysis (LDA), and nuisance attribute projection (NAP). We found that the best results are obtained when LDA is followed by WCCN. We achieved an equal error rate (EER) of 1.12% and MinDCF of 0.0094 using the cosine distance scoring on the male English trials of the core condition of the NIST 2008 Speaker Recognition Evaluation dataset. We also obtained 4% absolute EER improvement for both-gender trials on the 10 s-10 s condition compared to the classical joint factor analysis scoring."}
{"_id": "91bb3680cee8cd37b80e07644f66f9cccf1b1aff", "title": "PASCAL Boundaries: A Semantic Boundary Dataset with a Deep Semantic Boundary Detector", "text": "In this paper, we address the task of instance-level semantic boundary detection. To this end, we generate a large database consisting of more than 10k images (which is 20 bigger than existing edge detection databases) along with ground truth boundaries between 459 semantic classes including instances from both foreground objects and different types of background, and call it the PASCAL Boundaries dataset. This is a timely introduction of the dataset since results on the existing standard dataset for measuring edge detection performance, i.e. BSDS500, have started to saturate. In addition to generating a new dataset, we propose a deep network-based multi-scale semantic boundary detector and name it Multi-scale Deep Semantic Boundary Detector (M-DSBD). We evaluate M-DSBD on PASCAL boundaries and compare it to baselines. We test the transfer capabilities of our model by evaluating on MS-COCO and BSDS500 and show that our model transfers well."}
{"_id": "7ee214c411ca42c323af6e6cdc96058d5140aefe", "title": "Service-Oriented Architecture for Big Data Analytics in Smart Cities", "text": "A smart city has recently become an aspiration for many cities around the world. These cities are looking to apply the smart city concept to improve sustainability, quality of life for residents, and economic development. The smart city concept depends on employing a wide range of advanced technologies to improve the performance of various services and activities such as transportation, energy, healthcare, and education, while at the same time improve the city's resources utilization and initiate new business opportunities. One of the promising technologies to support such efforts is the big data technology. Effective and intelligent use of big data accumulated over time in various sectors can offer many advantages to enhance decision making in smart cities. In this paper we identify the different types of decision making processes involved in smart cities. Then we propose a service-oriented architecture to support big data analytics for decision making in smart cities. This architecture allows for integrating different technologies such as fog and cloud computing to support different types of analytics and decision-making operations needed to effectively utilize available big data. It provides different functions and capabilities to use big data and provide smart capabilities as services that the architecture supports. As a result, different big data applications will be able to access and use these services for varying proposes within the smart city."}
{"_id": "bbe0f0b3e2d60c4f96d9d84f97dc8a9be4f72802", "title": "Automatic C-to-CUDA Code Generation for Affine Programs", "text": "Graphics Processing Units (GPUs) offer tremendous computational power. CUDA (Compute Unified Device Architecture) provides a multi-threaded parallel programming model, facilitating high performance implementations of general-purpose computations. However, the explicitly managed memory hierarchy and multi-level parallel view make manual development of high-performance CUDA code rather complicated. Hence the automatic transformation of sequential input programs into efficient parallel CUDA programs is of considerable interest. This paper describes an automatic code transformation system that generates parallel CUDA code from input sequential C code, for regular (affine) programs. Using and adapting publicly available tools that have made polyhedral compiler optimization practically effective, we develop a C-to-CUDA transformation system that generates two-level parallel CUDA code that is optimized for efficient data access. The performance of automatically generated code is compared with manually optimized CUDA code for a number of benchmarks. The performance of the automatically generated CUDA code is quite close to hand-optimized CUDA code and considerably better than the benchmarks\u2019 performance on a multicore CPU."}
{"_id": "42e0a535049b2183f167235fc79f0deace4f11c3", "title": "An Experimental Evaluation of Apple Siri and Google Speech Recognition", "text": "We perform an experimental evaluation of two popular cloud-based speech recognition systems. Cloudbased speech recognition systems enhances Web surfing, transportation, health care, etc. Using voice commands helps drivers stay connected to the Internet by avoiding traffic safety risks. The performance of these type of applications should be robust under difficult network conditions. User frustration with network traffic problems can affect the usability of these applications. We evaluate the performance of two popular cloud-based speech recognition applications, Apple Siri and Google Speech Recognition (GSR) under various network conditions. We evaluate transcription delay and accuracy of transcription of each application under different packet loss and jitter values. Results of our study show that performance of cloud-based speech recognition systems can be affected by jitter and packet loss; which are commonly occurring over WiFi and cellular network connections. keywords: Cloud Speech Recognition, Quality of Experience, Software Measurement, Streaming Media, Real-time Systems."}
{"_id": "f3f191aa4b27789fef02e2a40e11c33ea408c86c", "title": "Interactive evolution for the procedural generation of tracks in a high-end racing game", "text": "We present a framework for the procedural generation of tracks for a high-end car racing game (TORCS) using interactive evolution. The framework maintains multiple populations and allow users to work both on their own population (in single-user mode) or to collaborate with other users on a shared population. Our architecture comprises a web frontend and an evolutionary backend. The former manages the interaction with users (e.g., logs registered and anonymous users, collects evaluations, provides access to all the evolved populations) and maintains the database server that stores all the present/past populations. The latter runs all the tasks related to evolution (selection, recombination and mutation) and all the tasks related to the target racing game (e.g., the track generation). We performed two sets of experiments involving five human subjects to evolve racing tracks alone (in a single-user mode) or cooperatively. Our preliminary results on five human subjects show that, in all the experiments, there is an increase of users' satisfaction as the evolution proceeds. Users stated that they perceived improvements in the quality of the individuals between subsequent populations and that, at the end, the process produced interesting tracks."}
{"_id": "9e5a13f3bc2580fd16bab15e31dc632148021f5d", "title": "Bandwidth-Enhanced Low-Profile Cavity-Backed Slot Antenna by Using Hybrid SIW Cavity Modes", "text": "A bandwidth enhanced method of a low-profile substrate integrated waveguide (SIW) cavity-backed slot antenna is presented in this paper. Bandwidth enhancement is achieved by simultaneously exciting two hybrid modes in the SIW-backed cavity and merging them within the required frequency range. These two hybrid modes, whose dominant fields are located in different half parts of the SIW cavity, are two different combinations of the and resonances. This design method has been validated by experiments. Compared with those of a previously presented SIW cavity-backed slot antenna, fractional impedance bandwidth of the proposed antenna is enhanced from 1.4% to 6.3%, its gain and radiation efficiency are also slightly improved to 6.0 dBi and 90%, and its SIW cavity size is reduced about 30%. The proposed antenna exhibits low cross polarization level and high front to back ratio. It still retains advantages of low-profile, low fabrication cost, and easy integration with planar circuits."}
{"_id": "88ac3acda24e771a8d4659b48205adb21c6933fa", "title": "Ontology semantic approach to extraction of knowledge from holy quran", "text": "With the continued demand for Islamic knowledge, which is mainly based on the Quran as a source of knowledge and wisdom, systems that facilitate an easy search of the content of the Quran remain a considerable challenge. Although in recent years there have been tools for Quran search, most of these tools are based on keyword search, meaning that the user needs to know the correct keywords before being able to retrieve the content of al-Quran. In this paper, we propose a system that supports the end user in querying and exploring the Quran ontology. The system comprises user query reformulation against the Quran ontology stored and annotated in the knowledge base. The Quran ontology consists of noun concepts identified in al-Quran, and the relationship that exists between these concepts. The user writes a query in the natural language and the proposed system reformulates the query to match the content found in the knowledge base in order to retrieve the relevant answer. The answer is represented by the Quranic verse related to the user query."}
{"_id": "ede93aff6b747938e4ed6cf2fae3daf6b66520f7", "title": "A survey on text mining techniques", "text": "text mining is a technique to find meaningful patterns from the available text documents. The pattern discovery from the text and document organization of document is a well-known problem in data mining. Analysis of text content and categorization of the documents is a complex task of data mining. In order to find an efficient and effective technique for text categorization, various techniques of text categorization and classification is recently developed. Some of them are supervised and some of them unsupervised manner of document arrangement. This presented paper discusses different method of text categorization and cluster analysis for text documents. In addition of that a new text mining technique is proposed for future implementation. Keywords\u2014 text mining, classification, cluster analysis, survey"}
{"_id": "28a500f4422032e42315e40b44bfb1db72d828d2", "title": "Bullying among young adolescents: the strong, the weak, and the troubled.", "text": "OBJECTIVES\nBullying and being bullied have been recognized as health problems for children because of their association with adjustment problems, including poor mental health and more extreme violent behavior. It is therefore important to understand how bullying and being bullied affect the well-being and adaptive functioning of youth. We sought to use multiple data sources to better understand the psychological and social problems exhibited by bullies, victims, and bully-victims.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nAnalysis of data from a community sample of 1985 mostly Latino and black 6th graders from 11 schools in predominantly low socioeconomic status urban communities (with a 79% response rate).\n\n\nMAIN OUTCOME MEASURES\nPeer reports of who bullies and who is victimized, self-reports of psychological distress, and peer and teacher reports of a range of adjustment problems.\n\n\nRESULTS\nTwenty-two percent of the sample was classified as involved in bullying as perpetrators (7%), victims (9%), or both (6%). Compared with other students, these groups displayed school problems and difficulties getting along with classmates. Despite increased conduct problems, bullies were psychologically strongest and enjoyed high social standing among their classmates. In contrast, victims were emotionally distressed and socially marginalized among their classmates. Bully-victims were the most troubled group, displaying the highest level of conduct, school, and peer relationship problems.\n\n\nCONCLUSIONS\nTo be able to intervene with bullying, it is important to recognize the unique problems of bullies, victims, and bully-victims. In addition to addressing these issues directly with their patients, pediatricians can recommend school-wide antibullying approaches that aim to change peer dynamics that support and maintain bullying."}
{"_id": "1d758774416b44ad828ebf4ab35e23da14a273e8", "title": "The MagicBook - Moving Seamlessly between Reality and Virtuality", "text": "or more than a decade researchers have tried to create intuitive computer interfaces by blending reality and virtual reality. The goal is for people to interact with the digital domain as easily as with the real world. Various approaches help us achieve this\u2014in the area of tangible interfaces, we use real objects as interface widgets; in augmented reality, researchers overlay 3D virtual imagery onto the real world; and in VR interfaces , we entirely replace the real world with a computer generated environment. As Milgram pointed out, 1 these types of computer interfaces can be placed along a continuum according to how much of the user's environment is computer generated (Figure 1). Tangible interfaces lie far to the left on this reality\u2013virtuality line, while immersive virtual environments are at the right extremity. Most current user interfaces exist as discrete points along this continuum. However, human activity can't always be broken into discrete components and for many tasks users may prefer to move seamlessly along the reality\u2013virtuality continuum. This proves true when interacting with 3D graphical content, either creating virtual models or viewing them. For example, if people want to experience a virtual scene from different scales, then immer-sive virtual reality may be ideal. If they want to have a face-to-face discussion while viewing the virtual scene, an augmented reality interface may be best. 2 The MagicBook project is an early attempt to explore how we can use a physical object to smoothly transport users between reality and virtuality. Young children often fantasize about flying into the pages of a fairy tale and becoming part of the story. The MagicBook project makes this fantasy a reality using a normal book as the main interface object. People can turn the pages of the book, look at the pictures, and read the text without any additional technology (Figure 2a). However, if a person looks at the pages through an augmented reality display, they see 3D virtual models appearing out of the pages (Figure 2b). The models appear attached to the real page so users can see the augmented reality scene from any perspective by moving themselves or the book. The virtual content can be any size and is animated, so the augmented reality view is an enhanced version of a traditional 3D pop-up book. Users can change the virtual models by turning the book pages. When they see a scene they particularly like, \u2026"}
{"_id": "4c98202e345a55b7b1d8c2347def21cac05935e6", "title": "Three stage 6\u201318 GHz high gain and high power amplifier based on GaN technology", "text": "A monolithic three stage HPA has been developed for wide band applications. This MMIC is fabricated on UMS 0.25 \u00b5m GaN technology based on SiC substrate. At 18GHz, the MMIC achieved in CW mode 10W of output power with 20dB linear gain and 20% power added efficiency. The HPA provided 6 to 10W output power over 6 to 18GHz with minimum small signal gain of 18dB. These obtained performances are very promising and very close to the simulations; this will allow a very short term further improvement. This demonstration is the first MMIC on the UMS 0.25\u00b5m GaN technology."}
{"_id": "912ab353ff9f2baac0ec64f80a05477fba07b4a7", "title": "Emotion regulation for frustrating driving contexts", "text": "Driving is a challenging task because of the physical, attentional, and emotional demands. When drivers become frustrated by events their negative emotional state can escalate dangerously. This study examines behavioral and attitudinal effects of cognitively reframing frustrating events. Participants (N = 36) were asked to navigate a challenging driving course that included frustrating events such as long lights and being cut-off. Drivers were randomly assigned to three conditions. After encountering a frustrating event, drivers in a reappraisal-down condition heard voice prompts that reappraised the event in an effort to deflate negative reactions. Drivers in the second group, reappraisal-up, heard voice prompts that brought attention to the negative actions of vehicles and pedestrians. Drivers in a silent condition drove without hearing any voice prompts. Participants in the reappraisal-down condition had better driving behavior and reported less negative emotions than participants in the other conditions."}
{"_id": "42338a0b160eb10decba0397aa83bda530d4803f", "title": "Facial Age Group Classification", "text": "Estimating human age group automatically via facial image analysis has lots of potential real-world applications, such as human computer interaction and multimedia communication. However, It is still a challenging problem for the existing computer vision systems to automatically and effectively estimate human age group. The aging process is determined by not only the person\u2019s gene, but also many external factors, such as health, living style, living location, and weather conditions. Males and females may also age differently. An age group classification system for facial images is proposed in this paper. Five age groups including babies, children, young adults, middle-aged adults, and old adults, are used in the classification system. The process of the system is divided into threephases: location, feature extraction, and age classification. Geometric features are used to distinguish whether the face is baby or child. Wrinkle features are used to classify the image into one of three adult groupsyoung adults, middle-aged adults, and old adults."}
{"_id": "e35e9dbadc9f38abdf49f620635438e3d6f9e8d2", "title": "Reflections on craft: probing the creative process of everyday knitters", "text": "Crafters today blend age-old techniques such as weaving and pottery with new information and communication technologies such as podcasts, online instructions, and blogs. This intersection of tradition and modernity provides an interesting site for understanding the adoption of new technology. We present a qualitative study of seven knitters introduced to Spyn - a system that enables the association of digitally recorded messages with physical locations on knit fabric. We gave knitters Spyn in order to elicit their reflections on their craft practices and learn from their interactions with material, people, and technology. While creating artifacts for friends and loved ones, knitters expanded the creative and communicative potential of their craftwork: knitters envisioned travel journals in knitted potholders and sung lullabies in knitted hats. We describe how these unusual craft activities provide a useful lens onto contemporary technological appropriation."}
{"_id": "29fbe4a6c55f8eae8ff40841440e4cb198cd9aec", "title": "W-net: Bridged U-net for 2D Medical Image Segmentation", "text": "In this paper, we focus on three problems in deep learning based medical image segmentation. Firstly, U-net, as a popular model for medical image segmentation, is difficult to train when convolutional layers increase even though a deeper network usually has a better generalization ability because of more learnable parameters. Secondly, the exponential ReLU (ELU), as an alternative of ReLU, is not much different from ReLU when the network of interest gets deep. Thirdly, the Dice loss, as one of the pervasive loss functions for medical image segmentation, is not effective when the prediction is close to ground truth and will cause oscillation during training. To address the aforementioned three problems, we propose and validate a deeper network that can fit medical image datasets that are usually small in the sample size. Meanwhile, we propose a new loss function to accelerate the learning process and a combination of different activation functions to improve the network performance. Our experimental results suggest that our network is comparable or superior to state-of-the-art methods."}
{"_id": "13890c2699dd12a4f4d722b9f35a666c98b8de45", "title": "Multimodal imaging of optic disc drusen.", "text": "PURPOSE\nTo evaluate optic disc drusen, extracellular protein deposits known to contain numerous aggregates of mitochondria, using multimodal modalities featuring optical coherence tomography (OCT) and autofluorescence imaging.\n\n\nDESIGN\nRetrospective observational case series.\n\n\nMETHODS\nEyes with optic nerve drusen were examined with enhanced depth imaging (EDI)-OCT, swept source OCT, and fundus autofluorescence using a fundus camera.\n\n\nRESULTS\nTwenty-six eyes of 15 patients with optic disc drusen were evaluated. EDI-OCT and swept source OCT showed multiple optic disc drusen at different levels; most were located immediately anterior to the lamina cribrosa. The drusen were ovoid regions of lower reflectivity that were bordered by hyperreflective material, and in 12 eyes (46.2%) there were internal hyperreflective foci. The mean diameter of the optic disc drusen as measured in OCT images was 686.8 (standard deviation \u00b1 395.2) \u03bcm. There was a significant negative correlation between the diameter of the optic disc drusen and the global retinal nerve fiber layer thickness (r = -0.61, P = .001). There was a significant negative correlation between proportion of the optic disc drusen area occupied by optic nerve drusen as detected by autofluorescence imaging and the global retinal nerve fiber layer thickness (r = -0.63, P = .001).\n\n\nCONCLUSIONS\nDeeper-penetration OCT imaging demonstrated the internal characteristics of optic disc drusen and their relationship with the lamina cribrosa in vivo. This study also showed that both the larger the drusen and the more area of the optic canal occupied by drusen, the greater the associated retinal nerve fiber layer abnormalities."}
{"_id": "91e6a1d594895ac2cbcbfc591b4244ca039d7109", "title": "A Compact Fractal Loop Rectenna for RF Energy Harvesting", "text": "This letter presents a compact fractal loop rectenna for RF energy harvesting at GSM1800 bands. First, a fractal loop antenna with novel in-loop ground-plane impedance matching is proposed for the rectenna design. Also, a high-efficiency rectifier is designed in the loop antenna to form a compact rectenna. Measured results show that an efficiency of 61% and an output dc voltage of 1.8\u00a0V have been achieved over 12-k\u2126 resistor for 10\u00a0\u03bcW/cm2 power density at 1.8\u00a0GHz. The rectenna is able to power up a battery-less LCD watch at a distance of 10\u00a0m from the cell tower. The proposed rectenna is compact, easy to fabricate, and useful for various energy harvesting applications."}
{"_id": "0e4963c7d2c6f0be422dbef0b45473dc43503ceb", "title": "Explore, Exploit or Listen: Combining Human Feedback and Policy Model to Speed up Deep Reinforcement Learning in 3D Worlds", "text": "We describe a method to use discrete human feedback to enhance the performance of deep learning agents in virtual three-dimensional environments by extending deep-reinforcement learning to model the confidence and consistency of human feedback. This enables deep reinforcement learning algorithms to determine the most appropriate time to listen to the human feedback, exploit the current policy model, or explore the agent\u2019s environment. Managing the trade-off between these three strategies allows DRL agents to be robust to inconsistent or intermittent human feedback. Through experimentation using a synthetic oracle, we show that our technique improves the training speed and overall performance of deep reinforcement learning in navigating three-dimensional environments using Minecraft. We further show that our technique is robust to highly innacurate human feedback and can also operate when no human feedback is given."}
{"_id": "4c68e7eff1da14003cc7efbfbd9a0a0a3d5d4968", "title": "Making sense of implementation theories, models and frameworks", "text": "BACKGROUND\nImplementation science has progressed towards increased use of theoretical approaches to provide better understanding and explanation of how and why implementation succeeds or fails. The aim of this article is to propose a taxonomy that distinguishes between different categories of theories, models and frameworks in implementation science, to facilitate appropriate selection and application of relevant approaches in implementation research and practice and to foster cross-disciplinary dialogue among implementation researchers.\n\n\nDISCUSSION\nTheoretical approaches used in implementation science have three overarching aims: describing and/or guiding the process of translating research into practice (process models); understanding and/or explaining what influences implementation outcomes (determinant frameworks, classic theories, implementation theories); and evaluating implementation (evaluation frameworks). This article proposes five categories of theoretical approaches to achieve three overarching aims. These categories are not always recognized as separate types of approaches in the literature. While there is overlap between some of the theories, models and frameworks, awareness of the differences is important to facilitate the selection of relevant approaches. Most determinant frameworks provide limited \"how-to\" support for carrying out implementation endeavours since the determinants usually are too generic to provide sufficient detail for guiding an implementation process. And while the relevance of addressing barriers and enablers to translating research into practice is mentioned in many process models, these models do not identify or systematically structure specific determinants associated with implementation success. Furthermore, process models recognize a temporal sequence of implementation endeavours, whereas determinant frameworks do not explicitly take a process perspective of implementation."}
{"_id": "314a508686906f48d55567694fdf3bff50a4604d", "title": "Example-based video color grading", "text": "In most professional cinema productions, the color palette of the movie is painstakingly adjusted by a team of skilled colorists -- through a process referred to as color grading -- to achieve a certain visual look. The time and expertise required to grade a video makes it difficult for amateurs to manipulate the colors of their own video clips. In this work, we present a method that allows a user to transfer the color palette of a model video clip to their own video sequence. We estimate a per-frame color transform that maps the color distributions in the input video sequence to that of the model video clip. Applying this transformation naively leads to artifacts such as bleeding and flickering. Instead, we propose a novel differential-geometry-based scheme that interpolates these transformations in a manner that minimizes their curvature, similarly to curvature flows. In addition, we automatically determine a set of keyframes that best represent this interpolated transformation curve, and can be used subsequently, to manually refine the color grade. We show how our method can successfully transfer color palettes between videos for a range of visual styles and a number of input video clips."}
{"_id": "563e656203f29f0cbabc5cf0611355ba79ae4320", "title": "High Accuracy Optical Flow Estimation Based on a Theory for Warping", "text": "We study an energy functional for computing optical flow that combines three assumptions: a brightness constancy assumption, a gradient constancy assumption, and a discontinuity-preserving spatio-temporal smoothness constraint. In order to allow for large displacements, linearisations in the two data terms are strictly avoided. We present a consistent numerical scheme based on two nested fixed point iterations. By proving that this scheme implements a coarse-to-fine warping strategy, we give a theoretical foundation for warping which has been used on a mainly experimental basis so far. Our evaluation demonstrates that the novel method gives significantly smaller angular errors than previous techniques for optical flow estimation. We show that it is fairly insensitive to parameter variations, and we demonstrate its excellent robustness under noise. In Proc. 8th European Conference on Computer Vision, Springer LNCS 3024, T. Pajdla and J. Matas (Eds.), vol. 4, pp. 25-36, Prague, Czech Republic, May 2004 c \u00a9 Springer-Verlag Berlin Heidelberg 2004 Received The Longuet-Higgins Best Paper Award."}
{"_id": "8ca53d187f6beb3d1e4fb0d1b68544d578c86c53", "title": "A Naturalistic Open Source Movie for Optical Flow Evaluation", "text": "Ground truth optical flow is difficult to measure in real scenes with natural motion. As a result, optical flow data sets are restricted in terms of size, complexity, and diversity, making optical flow algorithms difficult to train and test on realistic data. We introduce a new optical flow data set derived from the open source 3D animated short film Sintel. This data set has important features not present in the popular Middlebury flow evaluation: long sequences, large motions, specular reflections, motion blur, defocus blur, and atmospheric effects. Because the graphics data that generated the movie is open source, we are able to render scenes under conditions of varying complexity to evaluate where existing flow algorithms fail. We evaluate several recent optical flow algorithms and find that current highly-ranked methods on the Middlebury evaluation have difficulty with this more complex data set suggesting further research on optical flow estimation is needed. To validate the use of synthetic data, we compare the imageand flow-statistics of Sintel to those of real films and videos and show that they are similar. The data set, metrics, and evaluation website are publicly available."}
{"_id": "00a7370518a6174e078df1c22ad366a2188313b5", "title": "Determining Optical Flow", "text": "Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components. A second constraint is needed. A method for finding the optical flow pattern is presented which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. An iterative implementation is shown which successfully computes the optical flow for a number of synthetic image sequences. The algorithm is robust in that it can handle image sequences that are quantized rather coarsely in space and time. It is also insensitive to quantization of brightness levels and additive noise. Examples are included where the assumption of smoothness is violated at singular points or along lines in the image."}
{"_id": "b1ad220e3e7bc095fd02b02a800933258adc5c72", "title": "Biometric-oriented Iris Identification Based on Mathematical Morphology", "text": "A new method for biometric identification of human irises is proposed in this paper. The method is based on morphological image processing for the identification of unique skeletons of iris structures, which are then used for feature extraction. In this approach, local iris features are represented by the most stable nodes, branches and endpoints extracted from the identified skeletons. Assessment of the proposed method was done using subsets of images from the University of Bath Iris Image Database (1000 images) and the CASIA Iris Image Database (500 images). Compelling experimental results demonstrate the viability of using the proposed morphological approach for iris recognition when compared to a state-of-the-art algorithm that uses a global feature extraction approach."}
{"_id": "a6034dba7b2e973eb17676933f472ddfe4ab7ad5", "title": "Genetic Programming + Unfolding Embryology in Automated Layout Planning", "text": "ABSTRACT Automated layout planning aims to the implementation of computational methods for the generation and the optimization of floor plans, considering the spatial configuration and the assignment of activities. Sophisticated strategies such as Genetic Algorithms have been implemented as heuristics of good solutions. However, the generative forces that derive from the social structures have been often neglected. This research aims to illustrate that the data that encode the layout\u2019s social and cultural generative forces, can be implemented within an evolutionary system for the design of residential layouts. For that purpose a co-operative system was created, which is composed of a Genetic Programming algorithm and an agent-based unfolding embryology procedure that assigns activities to the spaces generated by the GP algorithm. The assignment of activities is a recursive process which follows instructions encoded as permeability graphs. Furthermore, the Ranking Sum Fitness evaluation method is proposed and applied for the achievement of multiobjective optimization. Its efficiency is tested against the Weighted-Sum Fitness function. The system\u2019s results, both numerical and spatial, are compared to the results of a conventional evolutionary approach. This comparison showed that, in general, the proposed system can yield better solutions."}
{"_id": "0483a8aa5992185b38f69c87d2f5db15a9de3294", "title": "The Seattle Heart Failure Model: prediction of survival in heart failure.", "text": "BACKGROUND\nHeart failure has an annual mortality rate ranging from 5% to 75%. The purpose of the study was to develop and validate a multivariate risk model to predict 1-, 2-, and 3-year survival in heart failure patients with the use of easily obtainable characteristics relating to clinical status, therapy (pharmacological as well as devices), and laboratory parameters.\n\n\nMETHODS AND RESULTS\nThe Seattle Heart Failure Model was derived in a cohort of 1125 heart failure patients with the use of a multivariate Cox model. For medications and devices not available in the derivation database, hazard ratios were estimated from published literature. The model was prospectively validated in 5 additional cohorts totaling 9942 heart failure patients and 17,307 person-years of follow-up. The accuracy of the model was excellent, with predicted versus actual 1-year survival rates of 73.4% versus 74.3% in the derivation cohort and 90.5% versus 88.5%, 86.5% versus 86.5%, 83.8% versus 83.3%, 90.9% versus 91.0%, and 89.6% versus 86.7% in the 5 validation cohorts. For the lowest score, the 2-year survival was 92.8% compared with 88.7%, 77.8%, 58.1%, 29.5%, and 10.8% for scores of 0, 1, 2, 3, and 4, respectively. The overall receiver operating characteristic area under the curve was 0.729 (95% CI, 0.714 to 0.744). The model also allowed estimation of the benefit of adding medications or devices to an individual patient's therapeutic regimen.\n\n\nCONCLUSIONS\nThe Seattle Heart Failure Model provides an accurate estimate of 1-, 2-, and 3-year survival with the use of easily obtained clinical, pharmacological, device, and laboratory characteristics."}
{"_id": "6159347706d25b51ae35e030e818625a1dbbc09d", "title": "Open-Loop Precision Grasping With Underactuated Hands Inspired by a Human Manipulation Strategy", "text": "In this paper, we demonstrate an underactuated finger design and grasping method for precision grasping and manipulation of small objects. Taking inspiration from the human grasping strategy for picking up objects from a flat surface, we introduce the flip-and-pinch task, in which the hand picks up a thin object by flipping it into a stable configuration between two fingers. Despite the fact that finger motions are not fully constrained by the hand actuators, we demonstrate that the hand and fingers can interact with the table surface to produce a set of constraints that result in a repeatable quasi-static motion trajectory. Even when utilizing only open-loop kinematic playback, this approach is shown to be robust to variation in object size and hand position. Variation of up to 20\u00b0 in orientation and 10 mm in hand height still result in experimental success rates of 80% or higher. These results suggest that the advantages of underactuated, adaptive robot hands can be carried over from basic grasping tasks to more dexterous tasks."}
{"_id": "74643ad2486e432b5ca55eee233913ec17a6269b", "title": "CHAPTER 5 SOFTWARE TESTING", "text": "As more extensively discussed in the Software Quality chapter of the Guide to the SWEBOK, the right attitude towards quality is one of prevention: it is obviously much better to avoid problems, rather than repairing them. Testing must be seen as a means primarily for checking whether the prevention has been effective, but also for identifying anomalies in those cases in which, for some reason, it has been not. It is perhaps obvious, but worth recognizing, that even after successfully completing an extensive testing campaign, the software could still contain faults; nor is defect free code a synonymous for quality product. The remedy to system failures that are experienced after delivery is provided by (corrective) maintenance actions. Maintenance topics are covered into the Software Maintenance chapter of the Guide to the SWEBOK."}
{"_id": "d28b2a49386dfffa320cb835906e0b8dea1ea046", "title": "Warehousing and Protecting Big Data: State-Of-The-Art-Analysis, Methodologies, Future Challenges", "text": "This paper proposes a comprehensive critical survey on the issues of warehousing and protecting big data, which are recognized as critical challenges of emerging big data research. Indeed, both are critical aspects to be considered in order to build truly, high-performance and highly-flexible big data management systems. We report on state-of-the-art approaches, methodologies and trends, and finally conclude by providing open problems and challenging research directions to be considered by future efforts."}
{"_id": "8eaa1a463b87030a72ee7c54d15b2993bc247f0d", "title": "Hierarchical Peer-To-Peer Systems", "text": "Structured peer-to-peer (P2P) lookup services\u2014such as Chord, CAN, Pastry and Tapestry\u2014organize peers into a flat overlay network and offer distributed hash table (DHT) functionality. In these systems, data is associated with keys and each peer is responsible for a subset of the keys. We study hierarchical DHTs, in which peers are organized into groups, and each group has its autonomous intra-group overlay network and lookup service. The groups themselves are organized in a top-level overlay network. To find a peer that is responsible for a key, the top-level overlay first determines the group responsible for the key; the responsible group then uses its intra-group overlay to determine the specific peer that is responsible for the key. After providing a general framework for hierarchical P2P lookup, we consider the specific case of a two-tier hierarchy that uses Chord for the top level. Our analysis shows that by designating the most reliable peers in the groups as superpeers, the hierarchical design can significantly reduce the expected number of hops in Chord. We also propose a scalable design for managing the groups and the superpeers."}
{"_id": "915907859ff9c648c171702289d66454ab4e66e2", "title": "An Optical 6-axis Force Sensor for Brain Function Analysis using fMRI", "text": "This paper presents an 6-axis optical force sensor which can be used in fMRI. Recently, fMRIs are widely used for studying human brain function. Simultaneous measurement of brain activity and peripheral information, such as grip force, enables more precise investigations in studies of motor function. However, conventional force sensors cannot be used in fMRI environment, since metal elements generate noise which severely contaminate the signals of fMRI. An optical 2-axis force sensor has been developed using photo sensors and optical fibers by Tada et. al.[1], that resolved these problems. The developed force sensor removed all magnetic components from the sensing part. It detected minute displacements by measure amount of light and light traveled through the optical fibers. However, there still remain several problems on this optical force sensor. Firstly, the accuracy is not high compared to the conventional force sensors. Secondly, the robustness is not enough against the contact force to the optical fibers. In this paper, the problems concerning to the accuracy and the sensor output stability has been improved by novel methods of fixing fibers and arithmetic circuit. Furthermore, an optical 6-axis force sensor is developed based on these improvements, and usefulness of our sensor for brain function analysis is confirmed in fMRI experimentations."}
{"_id": "23e8f10b3b1c82191a43ea86331ae668d26efb0a", "title": "Pathologies in information bottleneck for deterministic supervised learning", "text": "Information bottleneck (IB) is a method for extracting information from one random variable X that is relevant for predicting another random variable Y . To do so, IB identifies an intermediate \u201cbottleneck\u201d variable T that has low mutual information I(X;T ) and high mutual information I(Y ;T ). The IB curve characterizes the set of bottleneck variables that achieve maximal I(Y ;T ) for a given I(X;T ), and is typically explored by optimizing the IB Lagrangian, I(Y ;T )\u2212 \u03b2I(X;T ). Recently, there has been interest in applying IB to supervised learning, particularly for classification problems that use neural networks. In most classification problems, the output class Y is a deterministic function of the input X , which we refer to as \u201cdeterministic supervised learning\u201d. We demonstrate three pathologies that arise when IB is used in any scenario where Y is a deterministic function of X: (1) the IB curve cannot be recovered by optimizing the IB Lagrangian for different values of \u03b2; (2) there are \u201cuninteresting\u201d solutions at all points of the IB curve; and (3) for classifiers that achieve low error rates, the activity of different hidden layers will not exhibit a strict trade-off between compression and prediction, contrary to a recent proposal. To address problem (1), we propose a functional that, unlike the IB Lagrangian, can recover the IB curve in all cases. We finish by demonstrating these issues on the MNIST dataset."}
{"_id": "60ec519d58ab29184513a1db1692973c40d87050", "title": "Convolution Aware Initialization", "text": "Initialization of parameters in deep neural networks has been shown to have a big impact on the performance of the networks (Mishkin & Matas, 2015). The initialization scheme devised by He et al, allowed convolution activations to carry a constrained mean which allowed deep networks to be trained effectively (He et al., 2015a). Orthogonal initializations and more generally orthogonal matrices in standard recurrent networks have been proved to eradicate the vanishing and exploding gradient problem (Pascanu et al., 2012). Majority of current initialization schemes do not take fully into account the intrinsic structure of the convolution operator. Using the duality of the Fourier transform and the convolution operator, Convolution Aware Initialization builds orthogonal filters in the Fourier space, and using the inverse Fourier transform represents them in the standard space. With Convolution Aware Initialization we noticed not only higher accuracy and lower loss, but faster convergence. We achieve new state of the art on the CIFAR10 dataset, and achieve close to state of the art on various other tasks."}
{"_id": "fdbc19342cdd233dd90d9db9477cf603cf3149b5", "title": "A Comprehensive Survey on Bengali Phoneme Recognition", "text": "\uf0b7 SUST, ICERIE Abstract: Hidden Markov model based various phoneme recognition methods for Bengali language is reviewed. Automatic phoneme recognition for Bengali language using multilayer neural network is reviewed. Usefulness of multilayer neural network over single layer neural network is discussed. Bangla phonetic feature table construction and enhancement for Bengali speech recognition is also discussed. Comparison among these methods is discussed."}
{"_id": "f07233b779ee85f46dd13cf33ece8d89f67ecc2b", "title": "Domain Ontology for Programming Languages", "text": "Ontology have become a relevant representation formalism and many application domains are considering adopting them. This attention claims for methods for reusing domain knowledge resources in the development of domain ontologies. Accordingly, in this paper we discuss a general methodology to create domain ontology for more than one object oriented language (OOP) like Java, PHP and C++. A lot of software development methods specially Web applications have presented most of these methods that are focusing on the structure of distributed systems and security, in which they are connected through networks and the internet; resulting in more valuable business and critical assets stored, searched and manipulated by World Wide Web. The aims of this study building domain ontology for OOP language classes for different OOP languages or different versions of the same language is an exciting opportunity for researchers to access the information required under the constant increase in the volume of information disseminated on the Internet. By creating Ontology domain for OOP, we can Improve methods of viewing and 1 E-mail: iabuhassan@newsoft.ps 2 E-mail: akram.othman@gmail.com Article Info: Received : October 14, 2012. Revised : November 19, 2012 Published online : December 30, 2012 76 Domain Ontology for Programming Languages organizing information, improve the way of processing, in addition to increasing the vocabulary and their relationship to terminology as well as the rules used in natural language with OOP languages. The clear identification of the properties and relations of terms is the starting point to become Ontology domain. The importance of the domain Ontology among object oriented programming languages is that through the synthesis of these relationships or Ontology an OOP can be achieved through web by any junior programmers."}
{"_id": "e7ff2e6be6d379b77111cdaf65da1b597f7fce75", "title": "Cu pillar bumps as a lead-free drop-in replacement for solder-bumped, flip-chip interconnects", "text": "We evaluated Cu pillar bumps with a lead-free SnAg solder cap as 1st-level soldered interconnects in a standard organic BGA (ball grid array) flip chip package. Various pillar heights and solder volumes were investigated at a minimum pad pitch of 150 mum. Flip chip assembly followed the exact process as in the case of collapsible SnAg solder bumps, using an identical material set and identical equipment. Thanks to the properties of Cu pillar bumps, substrate design tolerances could be relaxed compared to solder bump interconnects of the same pitch. This, together with the fact that no solder-on-pad (SOP) is required for Cu pillar bumps, allows for lower substrate cost, which is a major factor in flip chip packaging cost. Cu pillar bumps also offer significantly higher current carrying capability and better thermal performance compared to solder bumps. We found that flip chip assembly with Cu pillar bumps is a robust process with regard to variations in assembly parameters, such as solder cap volume, flux depth, chip placement accuracy and substrate pad size. It is possible to attach the Cu pillar bumps to the metal lines of the non-SOP substrates with line spacing down to 40 mum plusmn15 mum without any solder bridging. It is shown that flip chip packages with suitable 150 mum-pitch Cu pillar bumps provide high mechanical and electrical reliability. No corrosion on the Cu pillars was found after humidity tests (MSL2 level and unbiased HAST for 192 h). Thermal cycling passed 2000 cycles, even at JEDEC JESD22-A104-B condition C (-65 degC to 150 degC). High Temperature Storage passed 2000 h at 150 degC. Cross sections reveal that after 1000 h at 150degC all Sn in the solder is transformed into Cu-Sn intermetallic compounds (IMCs). Preliminary electromigration test results at highly accelerated conditions (Tbump ap 177 degC, I = 0.8 A) show almost 2 orders of magnitude longer lifetime compared to SnAg solder bumps at 200 mum pitch. Cu pillar lifetimes at high current and temperature are expected to highly exceed those of solder bumps, because current crowding in the solder is avoided and solder is transformed into much more stable intermetallics."}
{"_id": "0c2764756299a82659605b132aef9159f61a4171", "title": "Sarcasm Detection on Czech and English Twitter", "text": "This paper presents a machine learning approach to sarcasm detection on Twitter in two languages \u2013 English and Czech. Although there has been some research in sarcasm detection in languages other than English (e.g., Dutch, Italian, and Brazilian Portuguese), our work is the first attempt at sarcasm detection in the Czech language. We created a large Czech Twitter corpus consisting of 7,000 manually-labeled tweets and provide it to the community. We evaluate two classifiers with various combinations of features on both the Czech and English datasets. Furthermore, we tackle the issues of rich Czech morphology by examining different preprocessing techniques. Experiments show that our language-independent approach significantly outperforms adapted state-of-the-art methods in English (F-measure 0.947) and also represents a strong baseline for further research in Czech (F-measure 0.582)."}
{"_id": "74d8eb801c838d1dce814a1e9ce1074bd2c47721", "title": "MIMIC-CXR: A large publicly available database of labeled chest radiographs", "text": "Chest radiography is an extremely powerful imaging modality, allowing for a detailed inspection of a patient\u2019s thorax, but requiring specialized training for proper interpretation. With the advent of high performance general purpose computer vision algorithms, the accurate automated analysis of chest radiographs is becoming increasingly of interest to researchers. However, a key challenge in the development of these techniques is the lack of sufficient data. Here we describe MIMIC-CXR, a large dataset of 371,920 chest x-rays associated with 227,943 imaging studies sourced from the Beth Israel Deaconess Medical Center between 2011 2016. Each imaging study can pertain to one or more images, but most often are associated with two images: a frontal view and a lateral view. Images are provided with 14 labels derived from a natural language processing tool applied to the corresponding free-text radiology reports. All images have been de-identified to protect patient privacy. The dataset is made freely available to facilitate and encourage a wide range of research in medical computer vision."}
{"_id": "b6b3bdfd3fc4036e68ecae7c9700a659255e724a", "title": "Privacy and Security of Big Data: Current Challenges and Future Research Perspectives", "text": "Privacy and security of Big Data is gaining momentum in the research community, also due to emerging technologies like Cloud Computing, analytics engines and social networks. In response of this novel research challenge, several privacy and security of big data models, techniques and algorithms have been proposed recently, mostly adhering to algorithmic paradigms or model-oriented paradigms. Following this major trend, in this paper we provide an overview of state-of-the-art research issues and achievements in the field of privacy and security of big data, by highlighting open problems and actual research trends, and drawing novel research directions in this field."}
{"_id": "95dcdc9f2de1f30a4f0c79c378f98f11aa618f40", "title": "A comparative study on content-based music genre classification", "text": "Content-based music genre classification is a fundamental component of music information retrieval systems and has been gaining importance and enjoying a growing amount of attention with the emergence of digital music on the Internet. Currently little work has been done on automatic music genre classification, and in addition, the reported classification accuracies are relatively low. This paper proposes a new feature extraction method for music genre classification, DWCHs. DWCHs stands for Daubechies Wavelet Coefficient Histograms. DWCHs capture the local and global information of music signals simultaneously by computing histograms on their Daubechies wavelet coefficients. Effectiveness of this new feature and of previously studied features are compared using various machine learning classification algorithms, including Support Vector Machines and Linear Discriminant Analysis. It is demonstrated that the use of DWCHs significantly improves the accuracy of music genre classification."}
{"_id": "23d0ba7c85940553f994e8ed992585da5e2bffdb", "title": "Attention-Fused Deep Matching Network for Natural Language Inference", "text": "Natural language inference aims to predict whether a premise sentence can infer another hypothesis sentence. Recent progress on this task only relies on a shallow interaction between sentence pairs, which is insufficient for modeling complex relations. In this paper, we present an attention-fused deep matching network (AF-DMN) for natural language inference. Unlike existing models, AF-DMN takes two sentences as input and iteratively learns the attention-aware representations for each side by multi-level interactions. Moreover, we add a selfattention mechanism to fully exploit local context information within each sentence. Experiment results show that AF-DMN achieves state-of-the-art performance and outperforms strong baselines on Stanford natural language inference (SNLI), multigenre natural language inference (MultiNLI), and Quora duplicate questions datasets."}
{"_id": "0d98aca44d4e4efc7e0458e6405b9c326137a631", "title": "Turing Computability with Neural Nets", "text": "This paper shows the existence of a finite neural network, made up of sigmoidal nenrons, which simulates a nniversal Turing machine. It is composed of less then lo5 synchronously evolving processors, interconnected linearly. High-order connections are not required."}
{"_id": "c6f142f5011ddb90921f1185b14a147b807119f9", "title": "\u201cExascale Computing and Big Data: The Next Frontier,\u201d", "text": "For scientific and engineering computing, exascale (10 operations per second) is the next proxy in the long trajectory of exponential performance increases that has continued for over half a century. Similarly, large-scale data preservation and sustainability within and across disciplines, metadata creation and multidisciplinary fusion, and digital privacy and security define the frontiers of big data. Solving the myriad technical, political and economic challenges of exascale computing will require coordinated planning across government, industry and academia, commitment to data sharing and sustainability, collaborative research and development, and recognition that both international competition and collaboration will be necessary."}
{"_id": "461d5338d09c6b0c17c9e157a4725225e0bd6972", "title": "An event-based conceptual model for context-aware movement analysis", "text": "Current tracking technologies enable collection of data describing movements of various kinds of objects, including people, animals, icebergs, vehicles, containers with goods, etc. Analysis of movement data is now a hot research topic. However, most of the suggested analysis methods deal with movement data alone. Little has been done to support the analysis of movement in its spatio-temporal context, which includes various spatial and temporal objects as well as diverse properties associated with spatial locations and time moments. Comprehensive analysis of movement requires detection and analysis of relations that occur between moving objects and elements of the context in the process of the movement. We suggest a conceptual model in which movement is considered as a combination of spatial events of diverse types and extents in space and time. Spatial and temporal relations occur between movement events and elements of the spatial and temporal contexts. The model gives a ground to a generic approach based on extraction of interesting events from trajectories and treating the events as independent objects. By means of a prototype implementation, we tested the approach on complex real data about movement of wild animals. The testing showed the validity of the approach."}
{"_id": "34a4b24c054f7e3fbe836a13165bf52838723dfb", "title": "Efficient Discovery of Ontology Functional Dependencies", "text": "Functional Dependencies (FDs) define attribute relationships based on syntactic equality, and, when used in data cleaning, they erroneously label syntactically different but semantically equivalent values as errors. We enhance dependency-based data cleaning with Ontology Functional Dependencies (OFDs), which express semantic attribute relationships such as synonyms and is-a hierarchies defined by an ontology. Our technical contributions are twofold: 1) theoretical foundations for OFDs, including a set of sound and complete axioms and a linear-time inference procedure, and 2) an algorithm for discovering OFDs (exact ones and ones that hold with some exceptions) from data that uses the axioms to prune the exponential search space in the number of attributes. We demonstrate the efficiency of our techniques on real datasets, and we show that OFDs can significantly reduce the number of false positive errors in data cleaning techniques that rely on traditional FDs."}
{"_id": "7679ee792b4bc837d77677a44bf418bb2c73766c", "title": "Attention estimation by simultaneous analysis of viewer and view", "text": "This paper introduces a system for estimating the attention of a driver wearing a first person view camera using salient objects to improve gaze estimation. A challenging data set of pedestrians crossing intersections has been captured using Google Glass worn by a driver. A challenge unique to first person view from cars is that the interior of the car can take up a large part of the image. The proposed system automatically filters out the dashboard of the car, along with other parts of the instrumentation. The remaining area is used as a region of interest for a pedestrian detector. Two cameras looking at the driver are used to determine the direction of the driver's gaze, by examining the eye corners and the center of the iris. This coarse gaze estimation is then linked to the detected pedestrians to determine which pedestrian the driver is focused on at any given time."}
{"_id": "f4e41dd5c158b8b8cef3d6bdd5592e9b7ae910f0", "title": "Distributed Neural Networks for Internet of Things: The Big-Little Approach", "text": "Nowadays deep neural networks are widely used to accurately classify input data. An interesting application area is the Internet of Things (IoT), where a massive amount of sensor data has to be classified. The processing power of the cloud is attractive, however the variable latency imposes a major drawback for neural networks. In order to exploit the apparent trade-off between utilizing the available though limited embedded computing power of the IoT devices at high speed/stable latency and the seemingly unlimited computing power of Cloud computing at the cost of higher and variable latency, we propose a Big-Little architecture for deep neural networks. A small neural network trained to a subset of prioritized output classes can be used to calculate an output on the embedded devices, while a more specific classification can be calculated in the Cloud only when required. We show the applicability of this concept in the IoT domain by evaluating our approach for state of the art neural network classification problems on popular embedded devices such as the Raspberry Pi and Intel Edison."}
{"_id": "4140e7481c2599604b14fcd04625274022583631", "title": "Availability : A Heuristic for Judging Frequency and Probability 122", "text": "This paper explores a judgmental heuristic in which a person evaluates the frequency of classes or the probability of events by availability, i.e., by the ease with which relevant instances come to mind. In general, availability is correlated with ecological frequency, but it is also affected by other factors. Consequently, the reliance on the availability heuristic leads to systematic biases. Such biases are demonstrated in the judged frequency of classes of words, of combinatorial outcomes, and of repeated events. The phenomenon of illusory correlation is explained as an availability bias. The effects of the availability of incidents and scenarios on subjective probability are discussed."}
{"_id": "833a89df16e4b7bea5a2f9dc89333fb6a4dbd739", "title": "Towards value-based pricing \u2014 An integrative framework for decision making", "text": "Despite a recent surge of interest, the subject of pricing in general and value-based pricing in particular has received little academic investigation. Yet, pricing has a huge impact on financial results, both in absolute terms and relative to other instruments of the marketing mix. The objective of this paper is to present a comprehensive framework for pricing decisions which considers all relevant dimensions and elements for profitable and sustainable pricing decisions. The theoretical framework is useful for guiding new product pricing decisions as well as for implementing price-repositioning strategies for existing products. The practical application of this framework is illustrated by a case study involving the pricing decision for a major product launch at a global chemical company. D 2003 Elsevier Inc. All rights reserved."}
{"_id": "8dfa972ab1135505fa2d0e00f4b17df8e49f7557", "title": "Software Engineering Economics", "text": "This paper summarizes the current state of the art and recent trends in software engineering economics. It provides an overview of economic analysis techniques and their applicability to software engineering and management. It surveys the field of software cost estimation, including the major estimation techniques available, the state of the art in algorithmic cost models, and the outstanding research issues in software cost estimation."}
{"_id": "35911388f9542490b4e72f4656ab826fff1d99bb", "title": "Subjective Probability : A Judgment of Representativeness", "text": "This paper explores a heuristic-representativeness-according to which the subjective probability of an event, or a sample, is determined by the degree to which it: (i) is similar in essential characteristics to its parent population; and (ii) reflects the salient features of the process by which it is generated. This heuristic is explicated in a series of empirical examples demonstrating predictable and systematic errors in the evaluation of uncertain events. In particular, since sample size does not represent any property of the population, it is expected to have little or no effect on judgment of likelihood. This prediction is confirmed in studies showing that subjective sampling distributions and posterior probability judgments are determined by the most salient characteristic of the sample (e.g., proportion, mean) without regard to the size of the sample. The present heuristic approach is contrasted with the normative (Bayesian) approach to the analysis of the judgment of uncertainty."}
{"_id": "4ee237d215c41374cb94227a44d1a51af7d3873f", "title": "RECOGNITION AND RETRIEVAL PROCESSES IN FREE RECALL", "text": "A model of free recall is described which identifies two processes in free recall: a retrieval process by which the subject accesses the words, and a recognition process by which the subject decides whether an implicitly retrieved word is a to-be-recalled word. Submodels for the recognition process and the retrieval process are described. The recognition model assumes that during the study phase, the subject associates \"list markers\" to the to-be-recalled words. The establishment of such associates is postulated to be an all-or-none stochastic process. In the test phase, the subject recognizes to-be-recalled words by deciding which words have relevant list markers as associates. A signal detectability model is developed for this decision process. The retrieval model is introduced as a computer program that tags associative paths between list words. In several experiments, subjects studied and were tested on a sequence of overlapping sublists sampled from a master set of common nouns. The twoprocess model predicts that the subject's ability to retrieve the words should increase as more overlapping sublists are studied, but his ability to differentiate the words on the most recent list should deteriorate. Experiments confirmed this predicted dissociation of recognition and retrieval. Further predictions derived from the free recall model were also supported."}
{"_id": "3dae758d2bf62652a93edf9968cc085b822f13c4", "title": "X band septum polarizer as feed for parabolic antenna", "text": "Contemporary RHCP and LHCP wave occurs in several applications of microwave communication and measurement system. From this point of view the septum polarizer can be useful. The septum polarizer is a four-port waveguide device. The square waveguide at one end constitutes two ports because it can support two orthogonal modes. A slopping (or stepped) septum divides the square waveguide into two standard rectangular waveguides sharing a common broad-wall. The size of the septum as well as two versions of the waveguides excitation were analyzed and are described in this paper."}
{"_id": "34f2f927a33ce01b3e5284428a483137cff4feba", "title": "Expanding cellular coverage via cell-edge deployment in heterogeneous networks: spectral efficiency and backhaul power consumption perspectives", "text": "Heterogeneous small-cell networks (HetNets) are considered to be a standard part of future mobile networks where operator/consumer deployed small-cells, such as femto-cells, relays, and distributed antennas (DAs), complement the existing macrocell infrastructure. This article proposes the need-oriented deployment of small-cells and device-to-device (D2D) communication around the edge of the macrocell such that the small-cell base stations (SBSs) and D2D communication serve the cell-edge mobile users, thereby expanding the network coverage and capacity. In this context, we present competitive network configurations, namely, femto-on-edge, DA-one-dge, relay-on-edge, and D2D-communication on- edge, where femto base stations, DA elements, relay base stations, and D2D communication, respectively, are deployed around the edge of the macrocell. The proposed deployments ensure performance gains in the network in terms of spectral efficiency and power consumption by facilitating the cell-edge mobile users with small-cells and D2D communication. In order to calibrate the impact of power consumption on system performance and network topology, this article discusses the detailed breakdown of the end-to-end power consumption, which includes backhaul, access, and aggregation network power consumptions. Several comparative simulation results quantify the improvements in spectral efficiency and power consumption of the D2D-communication-on-edge configuration to establish a greener network over the other competitive configurations."}
{"_id": "dba9539b8d1025406767aa926f956592c416db53", "title": "Chapter 2 Challenges to the Adoption of E-commerce Technology for Supply Chain Management in a Developing Economy : A Focus on Nigerian SMEs", "text": "The evolution of Information Technology has enhanced consumers\u2019 and organisations\u2019 ability to gather information along with purchasing goods and services. Information Technology offers businesses increased competition, lower prices of goods and services, the choice of comparing products from different vendors and easy access to various vendors anywhere, anytime. Therefore, Information Technology facilitates the global nature of commerce. In developing countries, e-commerce dominates their economic activities. E-commerce is one of the leading non-oil sectors in Nigeria, 18.9 % GDP. In contrast with developed nations, e-commerce has not been as successfully adopted in developing countries. This chapter addresses the challenges and benefits of e-commerce technology in relation to SMEs in Nigeria. It presents quantitative evidence of SMEs perceptions of e-commerce technology, benefits, and barriers. A number of hypotheses are presented and assessed. Recommendations to mitigate barriers are suggested."}
{"_id": "8453b9ffa450890653289d3c93f3bc2b2e1c2c63", "title": "The Rise of Bots: A Survey of Conversational Interfaces, Patterns, and Paradigms", "text": "This work documents the recent rise in popularity of messaging bots: chatterbot-like agents with simple, textual interfaces that allow users to access information, make use of services, or provide entertainment through online messaging platforms. Conversational interfaces have been often studied in their many facets, including natural language processing, artificial intelligence, human-computer interaction, and usability. In this work we analyze the recent trends in chatterbots and provide a survey of major messaging platforms, reviewing their support for bots and their distinguishing features. We then argue for what we call \"Botplication\", a bot interface paradigm that makes use of context, history, and structured conversation elements for input and output in order to provide a conversational user experience while overcoming the limitations of text-only interfaces."}
{"_id": "d8ee9ea53d03d3c921b6aaab743c59fd52d2f5e4", "title": "Outpatient approach to palpitations.", "text": "Palpitations are a common problem seen in family medicine; most are of cardiac origin, although an underlying psychiatric disorder, such as anxiety, is also common. Even if a psychiatric comorbidity does exist, it should not be assumed that palpitations are of a noncardiac etiology. Discerning cardiac from noncardiac causes is important given the potential risk of sudden death in those with an underlying cardiac etiology. History and physical examination followed by targeted diagnostic testing are necessary to distinguish a cardiac cause from other causes of palpitations. Standard 12-lead electrocardiography is an essential initial diagnostic test. Cardiac imaging is recommended if history, physical examination, or electrocardiography suggests structural heart disease. An intermittent event (loop) monitor is preferred for documenting cardiac arrhythmias, particularly when they occur infrequently. Ventricular and atrial premature contractions are common cardiac causes of palpitations; prognostic significance is dictated by the extent of underlying structural heart disease. Atrial fibrillation is the most common arrhythmia resulting in hospitalization; such patients are at increased risk of stroke. Patients with supraventricular tachycardia, long QT syndrome, ventricular tachycardia, or palpitations associated with syncope should be referred to a cardiologist."}
{"_id": "c14049c47b99e7e54d1f1c23fb7a5c9a82142748", "title": "Design and implementation of NN5 for Hong Kong stock price forecasting", "text": "A number of published techniques have emerged in the trading community for stock prediction tasks. Among them is neural network (NN). In this paper, the theoretical background of NNs and the backpropagation algorithm is reviewed. Subsequently, an attempt to build a stock buying/selling alert system using a backpropagation NN, NN5, is presented. The system is tested with data from one Hong Kong stock, The Hong Kong and Shanghai Banking Corporation (HSBC) Holdings. The system is shown to achieve an overall hit rate of over 70%. A number of trading strategies are discussed. A best strategy for trading non-volatile stock like HSBC is recommended. r 2006 Elsevier Ltd. All rights reserved."}
{"_id": "5b4796fb14e0d7f65337585c8f2a789cd2d3c4c3", "title": "Rare case of isolated true complete diphallus \u2013 Case report and review of literature", "text": "Penile duplication is very rare anomaly. True complete diphallia is mostly associated with severe anomalies. Isolated complete diphallia is extremely rare. The case is being presented as true penile diphallia without associated anomalies. We are discussing diagnosis and management of such rare cases."}
{"_id": "581cacfa3133513523b7ba8ee4e47177182f1bab", "title": "An ultra-low power capacitor-less LDO for always-on domain in NB-IoT applications", "text": "This paper presents an ultra-low power 55 nm CMOS capacitor-less Low-Dropout (LDO) voltage regulator for power management of an Always-On domain module for implementation of NB-IoT SoC applications. Compared with traditional IoT SoC, the power consumption specification of NB-IoT SoC is lower. One effective way to achieve low power application is to reduce the power of the Always-On domain module in the SoC, where the LDO plays a critical role. According to the tape-out, results are validated with a 0.89V output voltage from 1.8V to 3.3V supply voltage, delivering a load current of 100uA. The peak-peak variance of output voltage is less than 100mV from 1.8V to 3.3V supply voltage. The quiescent current is lower than 61.86nA, including a 2-Transistor reference voltage, which can implement NB-IoT SoC application, with yield as high as 87.88%."}
{"_id": "1ad45747a121055b6fd3be6ff8f3f933f88ab659", "title": "Microscopy cell segmentation via adversarial neural networks", "text": "We present a novel method for cell segmentation in microscopy images which is inspired by the Generative Adversarial Neural Network (GAN) approach. Our framework is built on a pair of two competitive artificial neural networks, with a unique architecture, termed Rib Cage, which are trained simultaneously and together define a min-max game resulting in an accurate segmentation of a given image. Our approach has two main strengths, similar to the GAN, the method does not require a formulation of a loss function for the optimization process. This allows training on a limited amount of annotated data in a weakly supervised manner. Promising segmentation results on real fluorescent microscopy data are presented. The code is freely available at: https://github.com/arbellea/DeepCellSeg.git"}
{"_id": "815aa52cfc02961d82415f080384594639a21984", "title": "Rethinking Spatiotemporal Feature Learning: Speed-Accuracy Trade-offs in Video Classification", "text": "Despite the steady progress in video analysis led by the adoption of convolutional neural networks (CNNs), the relative improvement has been less drastic as that in 2D static image classification. Three main challenges exist including spatial (image) feature representation, temporal information representation, and model/computation complexity. It was recently shown by Carreira and Zisserman that 3D CNNs, inflated from 2D networks and pretrained on ImageNet, could be a promising way for spatial and temporal representation learning. However, as for model/computation complexity, 3D CNNs are much more expensive than 2D CNNs and prone to overfit. We seek a balance between speed and accuracy by building an effective and efficient video classification system through systematic exploration of critical network design choices. In particular, we show that it is possible to replace many of the 3D convolutions by low-cost 2D convolutions. Rather surprisingly, best result (in both speed and accuracy) is achieved when replacing the 3D convolutions at the bottom of the network, suggesting that temporal representation learning on high-level \u201csemantic\u201d features is more useful. Our conclusion generalizes to datasets with very different properties. When combined with several other cost-effective designs including separable spatial/temporal convolution and feature gating, our system results in an effective video classification system that that produces very competitive results on several action classification benchmarks (Kinetics, Something-something, UCF101 and HMDB), as well as two action detection (localization) benchmarks (JHMDB and UCF101-24)."}
{"_id": "526f97ff1a102b18bc331fa701a623980a05a509", "title": "Towards Portable Learning Analytics Dashboards", "text": "This paper proposes a novel approach to build and deploy learning analytics dashboards in multiple learning environments. Existing learning dashboards are barely portable: once deployed on a learning platform, it requires considerable effort to deploy the dashboard elsewhere. We suggest constructing dashboards from lightweight web applications, namely widgets. Our approach allows to port dashboards with no additional cost between learning environments that implement open specifications (Open Social and Activity Streams) for data access and use widget APIs. We propose to facilitate reuse by sharing the dashboards and widgets via a centralized analytics repository."}
{"_id": "298687d98e16ba9c98739fe8cbe125ffbaa20b55", "title": "A comparative study of elasticsearch and CouchDB document oriented databases", "text": "With the advent of large complex datasets, NOSQL databases have gained immense popularity for their efficiency to handle such datasets in comparison to relational databases. There are a number of NOSQL data stores for e.g. Mongo DB, Apache Couch DB etc. Operations in these data stores are executed quickly. In this paper we aim to get familiar with 2 most popular NoSQL databases: Elasticsearch and Apache CouchDB. This paper also aims to analyze the performance of Elasticsearch and CouchDB on image data sets. This analysis is based on the results carried out by instantiate, read, update and delete operations on both document-oriented stores and thus justifying how CouchDB is more efficient than Elasticsearch during insertion, updation and deletion operations but during selection operation Elasticsearch performs much better than CouchDB. The implementation has been done on LINUX platform."}
{"_id": "52fbfa181770cbc8291d7ba0c040a55a81d10a7b", "title": "\"Is there anything else I can help you with?\": Challenges in Deploying an On-Demand Crowd-Powered Conversational Agent", "text": "Intelligent conversational assistants, such as Apple\u2019s Siri, Microsoft\u2019s Cortana, and Amazon\u2019s Echo, have quickly become a part of our digital life. However, these assistants have major limitations, which prevents users from conversing with them as they would with human dialog partners. This limits our ability to observe how users really want to interact with the underlying system. To address this problem, we developed a crowd-powered conversational assistant, Chorus, and deployed it to see how users and workers would interact together when mediated by the system. Chorus sophisticatedly converses with end users over time by recruiting workers on demand, which in turn decide what might be the best response for each user sentence. Up to the first month of our deployment, 59 users have held conversations with Chorus during 320 conversational sessions. In this paper, we present an account of Chorus\u2019 deployment, with a focus on four challenges: (i) identifying when conversations are over, (ii) malicious users and workers, (iii) on-demand recruiting, and (iv) settings in which consensus is not enough. Our observations could assist the deployment of crowd-powered conversation systems and crowd-powered systems in general."}
{"_id": "3d53972fa96c73b6f7c6ca3430b0fc6eef649506", "title": "Kinematic-free position control of a 2-DOF planar robot arm", "text": "This paper challenges the well-established assumption in robotics that in order to control a robot it is necessary to know its kinematic information, that is, the arrangement of links and joints, the link dimensions and the joint positions. We propose a kinematic-free robot control concept that does not require any prior kinematic knowledge. The concept is based on our hypothesis that it is possible to control a robot without explicitly measuring its joint angles, by measuring instead the effects of the actuation on its end-effector. We implement a proof-of-concept encoderless robot controller and apply it for the position control of a physical 2-DOF planar robot arm. The prototype controller is able to successfully control the robot to reach a reference position, as well as to track a continuous reference trajectory. Notably, we demonstrate how this novel controller can cope with something that traditional control approaches fail to do: adapt to drastic kinematic changes such as 100% elongation of a link, 35-degree angular offset of a joint, and even a complete overhaul of the kinematics involving the addition of new joints and links."}
{"_id": "f5ff1d285fb0779cc8541126a023fe35fe2fec35", "title": "Demand Queries with Preprocessing", "text": "Given a set of items and a submodular set-function f that determines the value of every subset of items, a demand query assigns prices to the items, and the desired answer is a set S of items that maximizes the pro t, namely, the value of S minus its price. The use of demand queries is well motivated in the context of combinatorial auctions. However, answering a demand query (even approximately) is NP-hard. We consider the question of whether exponential time preprocessing of f prior to receiving the demand query can help in later answering demand queries in polynomial time. We design a preprocessing algorithm that leads to approximation ratios that are NP-hard to achieve without preprocessing. We also prove that there are limitations to the approximation ratios achievable after preprocessing, unless NP \u2282 P/poly."}
{"_id": "ad106136bf0201105f197d501bd8625d7e9c2562", "title": "BotCensor: Detecting DGA-Based Botnet Using Two-Stage Anomaly Detection", "text": "Nowadays, most botnets utilize domain generation algorithms (DGAs) to build resilient and agile command and control (C&C) channels. Specifically, botmasters employ DGAs to dynamically produce a large number of random domains and only register a small subset for their actual C&C servers with the purpose to defend them from takeovers and blacklisting attempts. While many approaches and models have been developed to detect DGA-based botnets, they suffer from several limitations, such as difficulties of DNS traffic collection, low feasibility and scalability, and so forth. In this paper, we present BotCensor, a new system that can determine if a host is infected with certain DGA malware with two-stage anomaly detection. In the first stage, we preliminarily attempt to identify malicious domains using a Markov model, and in the second stage, we re-examine the hosts that requested aforementioned malicious domains using novelty detection algorithms. Our experimental results show that our approach performs very well on identifying previously unknown DGA-generated domains and detects DGA bots with high efficiency and efficacy. Our approach not only can be regarded as security forensics tools, but also can be used to prevent malware infections and spread."}
{"_id": "2301f3bebd0cebbf161a017bbb70faffbb2c2506", "title": "Deep Learning in Radiology: Does One Size Fit All?", "text": "Deep learning (DL) is a popular method that is used to perform many important tasks in radiology and medical imaging. Some forms of DL are able to accurately segment organs (essentially, trace the boundaries, enabling volume measurements or calculation of other properties). Other DL networks are able to predict important properties from regions of an image-for instance, whether something is malignant, molecular markers for tissue in a region, even prognostic markers. DL is easier to train than traditional machine learning methods, but requires more data and much more care in analyzing results. It will automatically find the features of importance, but understanding what those features are can be a challenge. This article describes the basic concepts of DL systems and some of the traps that exist in building DL systems and how to identify those traps."}
{"_id": "35a198cc4d38bd2db60cda96ea4cb7b12369fd3c", "title": "Knowledge transfer in learning to recognize visual objects classes", "text": "Learning to recognize of object classes is one of the most important functionalities of vision. It is estimated that humans are able to learn tens of thousands of visual categori es in their life. Given the photometric and geometric variabilities displayed by objects as well as the high degree of intra-clas s variabilities, we hypothesize that humans achieve such a fe at by using knowledge and information cumulated throughout the learning process. In recent years, a handful of pioneering p apers have applied various forms of knowledge transfer algorithms to the problem of learning object classes. We first review some o f these papers by loosely grouping them into three categories : transfer through prior parameters, transfer through shared features or parts, and transfer through contextual information. In the second half of the paper, we detail a recent algorithm proposed by the author. This incremental learning scheme us es information from object classes previously learned in the f orm of prior models to train a new object class model. Training images can be presented in an incremental way. We present experimental results tested with this model on a large numbe r of object categories."}
{"_id": "b3957777c94d1133386a1de1bc2892dbea1e01cb", "title": "Designing an annotation scheme for summarizing Japanese judgment documents", "text": "We propose an annotation scheme for the summarization of Japanese judgment documents. This paper reports the details of the development of our annotation scheme for this task. We also conduct a human study where we compare the annotation of independent annotators. The end goal of our work is summarization, and our categories and the link system is a consequence of this. We propose three types of generic summaries which are focused on specific legal issues relevant to a given legal case."}
{"_id": "49acf191aa9f0c3b7ce996e8cfd1a405e21a11a7", "title": "Interpretability and Informativeness of Clustering Methods for Exploratory Analysis of Clinical Data", "text": "Clustering methods are among the most commonly used tools for exploratory data analysis. However, using clustering to perform data analysis can be challenging for modern datasets that contain a large number of dimensions, are complex in nature, and lack a ground-truth labeling. Traditional tools, like summarization and plotting of clusters, are of limited benefit in a high-dimensional setting. On the other hand, while many clustering methods have been studied theoretically, such analysis often has limited instructive value for practical purposes due to unverifiable assumptions, oracle-dependent tuning parameters, or unquantified finite-sample effects."}
{"_id": "7412f2f83beebb352b59ed6fe50e79997e0ac808", "title": "Switched Reluctance Motor Modeling, Design, Simulation, and Analysis: A Comprehensive Review", "text": "Switched reluctance machines have emerged as an important technology in industrial automation; they represent a real alternative to conventional variable speed drives in many applications. This paper reviews the technology status and trends in switched reluctance machines. It covers the various aspects of modeling, design, simulation, analysis, and control. Finally, it discusses the impact of switched reluctance machines technology on intelligent motion control."}
{"_id": "370ded29d3a213d9963c12a48141b2ed6c512bad", "title": "Mining term association patterns from search logs for effective query reformulation", "text": "Search engine logs are an emerging new type of data that offers interesting opportunities for data mining. Existing work on mining such data has mostly attempted to discover knowledge at the level of queries (e.g., query clusters). In this paper, we propose to mine search engine logs for patterns at the level of terms through analyzing the relations of terms inside a query. We define two novel term association patterns (i.e., context-sensitive term substitutions and term additions) and propose new methods for mining such patterns from search engine logs. These two patterns can be used to address the mis-specification and under-specification problems of ineffective queries. Experiment results on real search engine logs show that the mined context-sensitive term substitutions can be used to effectively reword queries and improve their accuracy, while the mined context-sensitive term addition patterns can be used to support query refinement in a more effective way."}
{"_id": "66198fbee049a7cd1b462fa4912aa14cd9227c3e", "title": "Child-based personas: need, ability and experience", "text": "Interactive technologies are becoming ubiquitous in many children\u2019s lives. From school to home, technologies are changing the way children live. However, current methods of designing these technologies do not adequately consider children\u2019s needs and developmental abilities. This paper describes and illustrates a new approach for creating user abstractions of children called the child-persona technique. Child-personas integrate theoretical concepts, empirically generated data and experiential goals. An analysis of the utility of this technique provides insights about how this technique can benefit designers by generating realistic child-user abstractions through a process which supports designers in child-centric design."}
{"_id": "20739c96ed44ccfdc5352ea38e1a2a15137363f4", "title": "A global geometric framework for nonlinear dimensionality reduction.", "text": "Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs-30,000 auditory nerve fibers or 10(6) optic nerve fibers-a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure."}
{"_id": "2315fc6c2c0c4abd2443e26a26e7bb86df8e24cc", "title": "ImageNet Classification with Deep Convolutional Neural Networks", "text": "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry."}
{"_id": "44d26f78bcdee04a14896c8c6f3681313402d854", "title": "Advances in Spectral-Spatial Classification of Hyperspectral Images", "text": "Recent advances in spectral-spatial classification of hyperspectral images are presented in this paper. Several techniques are investigated for combining both spatial and spectral information. Spatial information is extracted at the object (set of pixels) level rather than at the conventional pixel level. Mathematical morphology is first used to derive the morphological profile of the image, which includes characteristics about the size, orientation, and contrast of the spatial structures present in the image. Then, the morphological neighborhood is defined and used to derive additional features for classification. Classification is performed with support vector machines (SVMs) using the available spectral information and the extracted spatial information. Spatial postprocessing is next investigated to build more homogeneous and spatially consistent thematic maps. To that end, three presegmentation techniques are applied to define regions that are used to regularize the preliminary pixel-wise thematic map. Finally, a multiple-classifier (MC) system is defined to produce relevant markers that are exploited to segment the hyperspectral image with the minimum spanning forest algorithm. Experimental results conducted on three real hyperspectral images with different spatial and spectral resolutions and corresponding to various contexts are presented. They highlight the importance of spectral-spatial strategies for the accurate classification of hyperspectral images and validate the proposed methods."}
{"_id": "ef8ae1effca9cd45677086034d8c7b06a69c03e5", "title": "Deep Learning-Based Classification of Hyperspectral Data", "text": "Classification is one of the most popular topics in hyperspectral remote sensing. In the last two decades, a huge number of methods were proposed to deal with the hyperspectral data classification problem. However, most of them do not hierarchically extract deep features. In this paper, the concept of deep learning is introduced into hyperspectral data classification for the first time. First, we verify the eligibility of stacked autoencoders by following classical spectral information-based classification. Second, a new way of classifying with spatial-dominated information is proposed. We then propose a novel deep learning framework to merge the two features, from which we can get the highest classification accuracy. The framework is a hybrid of principle component analysis (PCA), deep learning architecture, and logistic regression. Specifically, as a deep learning architecture, stacked autoencoders are aimed to get useful high-level features. Experimental results with widely-used hyperspectral data indicate that classifiers built in this deep learning-based framework provide competitive performance. In addition, the proposed joint spectral-spatial deep neural network opens a new window for future research, showcasing the deep learning-based methods' huge potential for accurate hyperspectral data classification."}
{"_id": "7513f29210569444dd464c33147d9f824e6fbf08", "title": "How to Work a Crowd: Developing Crowd Capital Through Crowdsourcing", "text": "Traditionally, the term \u2018crowd\u2019 was used almost exclusively in the context of people who selforganized around a common purpose, emotion, or experience. Today, however, firms often refer to crowds in discussions of how collections of individuals can be engaged for organizational purposes. Crowdsourcing\u2013defined here as the use of information technologies to outsource business responsibilities to crowds\u2013can now significantly influence a firm\u2019s ability to leverage previously unattainable resources to build competitive advantage. Nonetheless, many managers are hesitant to consider crowdsourcing because they do not understand how its various types can add value to the firm. In response, we explain what crowdsourcing is, the advantages it offers, and how firms can pursue crowdsourcing. We begin by formulating a crowdsourcing typology and show how its four categories\u2014crowd voting, micro-task, idea, and solution crowdsourcing\u2014can help firms develop \u2018crowd capital,\u2019 an organizational-level resource harnessed from the crowd. We then present a three-step process model for generating crowd capital. Step one includes important considerations that shape how a crowd is to be constructed. Step two outlines the capabilities firms need to develop to acquire and assimilate resources (e.g., knowledge, labor, funds) from the crowd. Step three outlines key decision areas that executives need to address to effectively engage crowds. 1. Crowds and Crowdsourcing Not too long ago, the term \u2018crowd\u2019 was used almost exclusively in the context of people who selforganized around a common purpose, emotion, or experience. Crowds were sometimes seen as a positive occurrence\u2013 for instance, when they formed for political rallies or to support sports teams\u2014 but were more often associated negatively with riots, a mob mentality, or looting. Under today\u2019s lens, they are viewed more positively (Wexler, 2011). Crowds have become useful! It all started in 2006, when crowdsourcing was introduced as \u2018\u2018taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an \u201copen call\u2019\u2019 (Howe, 2006, p. 1). The underlying concept of crowdsourcing, a combination of crowd and outsourcing, is that many hands make light work and that wisdom can be gleaned from crowds (Surowiecki, 2005) to overcome groupthink, leading to superior results (Majchrzak & Malhotra, 2013). Of course, such ambitions are not new, and organizations have long desired to make the most of dispersed knowledge whereby each individual has certain knowledge advantages over every other (Hayek, 1945). Though examples of using crowds to harness what is desired are abundant (for an interesting application, see Table 1), until recently, accessing and harnessing such resources at scale has been nearly impossible for organizations. Due in large part to the proliferation of the Internet, mobile technologies, and the recent explosion of social media (Kietzmann, Hermkens, McCarthy, & Silvestre, 2011), organizations today are in a much better position to engage distributed crowds (Lakhani & Panetta, 2007) of individuals for their innovation and problem-solving needs (Afuah & Tucci, 2012; Boudreau & Lakhani, 2013). 
As a result, more and more executives\u2013\u2014from small startups to Fortune 500 companies alike\u2013\u2014 are trying to figure out what crowdsourcing really is, the benefits it can offer, and the processes they should follow to engage a crowd. In this formative stage of crowdsourcing, multiple streams of academic and practitioner-based literature\u2013\u2014each using their own language\u2013\u2014are developing independently of one another, without a unifying framework to understand the burgeoning phenomenon of crowd engagement. For executives who would like to explore crowdbased opportunities, this presents a multitude of options and possibilities, but also difficulties. One problem entails lack of a clear understanding of crowds , the various forms they can take, and the value they can offer. Another problem entails absence of a well-defined process to engage crowds. As a result, many executives are unable to develop strategies or are hesitant to allocate resources to crowdsourcing, resulting in missed opportunities for new competitive advantages resulting from engaging crowds. To help provide clarity, we submit an overview of the different types of crowdsourcing. Then we introduce the crowd capital framework, supplying a systematic template for executives to recognize the value of information from crowds, therein mapping the steps to acquire and assimilate resources from crowds. Finally, we discuss the unique benefits that can be gained from crowds before concluding with some advice on how to best \u2018work a crowd.\u2019 2. Types of crowdsourcing Crowdsourcing as an online, distributed problemsolving model (Brabham, 2008) suggests that approaching crowds and asking for contributions can help organizations develop solutions to a variety of business challenges. In this context, the crowd is often treated as a single construct: a general collection of people that can be targeted by firms. However, just as organizations and their problems vary, so do the types of crowds and the different kinds of contributions they can offer the firm. The following typology of crowdsourcing suggests that managers can begin by identifying a business problem and then working outward from there, considering (1) what type of contributions are required from members of the crowd and (2) how these contributions will collectively help find a solution to their business problem. First, the types of contributions required from the crowd could either call for specific objective contributions or for subjective content. Specific objective contributions help to achieve an impartial and unbiased result; here, bare facts matter and crowds can help find or create them. Subjective content contributions revolve around the judgments, opinions, perceptions, and beliefs of individuals in a crowd that are sought to collectively help solve a problem that calls for a subjective result. Second, contributions need to be processed collectively to add value. Depending on the problem to be solved, the contributions must either be aggregated or filtered. Under aggregation, contributions collectively yield value when they are simply combined at face value to inform a decision, without requiring any prior validation. For instance, political elections call for people to express their choices via electoral ballots, which are then tallied to calculate the sums and averages of their collective preferences; the reasons for their choices are not important at this stage. 
Other problems, however, are more complex and call for crowd contributions to be qualitatively evaluated and filtered before being considered on their relative merits (e.g., when politicians invite constituents\u2019 opinions before campaigning). Together, these two dimensions help executives distinguish among and understand the variety of crowdsourcing alternatives that exist today (see Figure 1). Two forms of crowdsourcing rely on aggregation as the primary process: crowd voting and microtask crowdsourcing. In crowd voting, organizations pose an issue to a crowd and aggregate the subjective responses derived from crowd participants to make a decision. Consider the popular television show American Idol, which allows viewers to support their preferred contestants by submitting votes online or via telephone or text. These votes are tallied at the end of the show and contestants with the fewest votes are eliminated from the competition. Similarly, so-called prediction markets (Arrow et al., 2008) activate the wisdom of the crowd through crowd voting. But rather than simply adding up votes, these markets arrive at specific predictions that can exceed the accuracy of experts by averaging the independent responses of crowd participants. For instance, Starwood Hotels and Resorts utilized an internal prediction market by asking a crowd of its own employees to select the best choice among a variety of potential marketing campaigns (Barlow, 2008). In micro-task crowdsourcing, organizations engage a crowd to undertake work that is often unachievable through standard procedures due to its sheer size or complexity. An organization may need to assemble a large data set, have numerous photos labeled and tagged, translate documents, or transcribe audio transcripts. Breaking such work into micro-tasks (Gino & Staats, 2012) allows daunting undertakings to be completed more quickly, cheaply, and efficiently. Consider how Google uses reCAPTCHA (von Ahn, Maurer, McMillen, Abraham, & Blum, 2008) and the little\u2013\u2014and admittedly annoying\u2013\u2014dialogue boxes that ask users to enter the text snippets they see of distorted images onscreen. It is commonly believed that this web utility is only for authenticating human users, thus keeping websites from spambots. However, every time the task of entering characters is completed, individuals are actually digitizing what optical character recognition (OCR) software has been unable to read. In this way, micro-task crowdsourcing is helping to digitize the archives of The New York Times and moving old manuscripts into Google Books. Similarly, crowdfunding (Stemler, 2013) endeavors are a form of micro-task crowdsourcing whereby an overall highly ambitious financial goal is broken into smaller funding tasks and contributions consist of objective resources (herein \u2018funds\u2019) that are simply aggregated for each venture. Figure 1. Crowdsourcing alternatives Whether objective or subjective, crowdsourced contributions must be processed to be valuable. In idea crowdsourcing, organizations seek creativity from a crowd, hoping to leverage its diversity to generate unique solutions to problems/issues. An organization may receive many ideas from a crowd, which it will need to filter before one or more can be implemented. For instance, online artist community and e-comm"}
{"_id": "5bf3bf0e7725133ccb3ee3bba6a2df52ec8d4abf", "title": "Comparative study of a new dermal filler Uma Jeunesse\u00ae and Juv\u00e9derm\u00ae.", "text": "INTRODUCTION\nInnovation in technology has resulted in the emergence of better, longer-lasting hyaluronic acid implants with fewer side effects. The new dermal implant Uma Jeunesse\u00ae was compared to Juv\u00e9derm\u00ae in this split-face study.\n\n\nMETHODS\nUma Jeunesse\u00ae is crosslinked with butanediol diglycidyl ether (BDDE) using a new crosslinking technology. Uma Jeunesse\u00ae and Juv\u00e9derm\u00ae Ultra 3 were injected in a split-face study on 17 healthy volunteers, whose ages ranged from 33-58 years. There were 14 women and three men with medium to deep nasolabial folds. All subjects randomly received either Uma Jeunesse\u00ae or Juv\u00e9derm\u00ae) Ultra 3 on one half of their face. Patients were followed up for 9 months.\n\n\nRESULTS\nJuv\u00e9derm\u00ae was easier to inject with lesser injection pain because of lidocaine, but late postinjection pain was much less with Uma Jeunesse\u00ae as compared to Juv\u00e9derm\u00ae. Overall rate of early and late complications as well as adverse events was lower with Uma Jeunesse\u00ae than Juv\u00e9derm(\u00ae) . After 9 months of follow-up, Uma Jeunesse\u00ae lasted in tissues for longer as compared to Juv\u00e9derm(\u00ae) even in patients injected for the first time (P<0.0001). Patient acceptability rate of Uma Jeunesse\u00ae was also much higher. Perception of pain during injection was lesser with Juv\u00e9derm\u00ae probably because of the presence of lidocaine.\n\n\nCONCLUSION\nThe new dermal implant Uma Jeunesse\u00ae is a safe and patient-friendly product which resides in the tissues for longer with maintenance of aesthetic effect over and beyond 6 months, reaching 9 months in over 80% of patients, and Juv\u00e9derm\u00ae injection is less painful."}
{"_id": "c621d3ddc939d8ac88347643fcaf93d0f7f49cca", "title": "Direct path and multipath cancellation with discrete distribution structure in passive radar", "text": "The Direct path and multipath interference (DPI and MPI) Cancellation is a crucial issue in the passive radar. In this paper, two improvements have been done in NLMS algorithm. One is to estimate the values and distribution of DPI and MPI roughly, and then remove them from the echo signal before filtering. The other one is to vary the step size of NLMS according to the cross correlation function of the reference signal and the output signal of the adaptive filter. The performance of the improved algorithm in passive radar DPI and MPI Cancellation is reported. The computer simulation results show that the algorithm has a faster convergence rate and a smaller steady state error."}
{"_id": "2bcb9d3e155f03425087e5797a30bb0ef224a5cc", "title": "Interactive bookshelf surface for in situ book searching and storing support", "text": "We propose an interactive bookshelf surface to augment a human ability for in situ book searching and storing. In book searching support, when a user touches the edge of the bookshelf, the cover image of a stored book located above the touched position is projected directly onto the book spine. As a result, the user can search for a desired book by sliding his (or her) finger across the shelf edge. In book storing support, when a user brings a book close to the bookshelf, the place where the book should be stored is visually highlighted by a projection light. This paper also presents sensing technologies to achieve the above mentioned interactive techniques. In addition, by considering the properties of the human visual system, we propose a simple visual effect to reduce the legibility degradation of the projected image contents by the complex textures and geometric irregularities of the spines. We confirmed the feasibility of the system and the effectiveness of the proposed interaction techniques through user studies."}
{"_id": "e4a9c62427182eab2d4622c3dbb38f8ba247b481", "title": "Experiments with crowdsourced re-annotation of a POS tagging data set", "text": "Crowdsourcing lets us collect multiple annotations for an item from several annotators. Typically, these are annotations for non-sequential classification tasks. While there has been some work on crowdsourcing named entity annotations, researchers have largely assumed that syntactic tasks such as part-of-speech (POS) tagging cannot be crowdsourced. This paper shows that workers can actually annotate sequential data almost as well as experts. Further, we show that the models learned from crowdsourced annotations fare as well as the models learned from expert annotations in downstream tasks."}
{"_id": "1089eb54895338621217ed29e69b5104d56cad67", "title": "Sequence-discriminative training of deep neural networks", "text": "Sequence-discriminative training of deep neural networks (DNNs) is investigated on a standard 300 hour American English conversational telephone speech task. Different sequencediscriminative criteria \u2014 maximum mutual information (MMI), minimum phone error (MPE), state-level minimum Bayes risk (sMBR), and boosted MMI \u2014 are compared. Two different heuristics are investigated to improve the performance of the DNNs trained using sequence-based criteria \u2014 lattices are regenerated after the first iteration of training; and, for MMI and BMMI, the frames where the numerator and denominator hypotheses are disjoint are removed from the gradient computation. Starting from a competitive DNN baseline trained using cross-entropy, different sequence-discriminative criteria are shown to lower word error rates by 7-9% relative, on average. Little difference is noticed between the different sequencebased criteria that are investigated. The experiments are done using the open-source Kaldi toolkit, which makes it possible for the wider community to reproduce these results."}
{"_id": "4691d99c6f08e719de161463c2b9210c6d8beecb", "title": "Diabetes Data Analysis and Prediction Model Discovery Using RapidMiner", "text": "Data mining techniques have been extensively applied in bioinformatics to analyze biomedical data. In this paper, we choose the Rapid-I\u00bfs RapidMiner as our tool to analyze a Pima Indians Diabetes Data Set, which collects the information of patients with and without developing diabetes. The discussion follows the data mining process. The focus will be on the data preprocessing, including attribute identification and selection, outlier removal, data normalization and numerical discretization, visual data analysis, hidden relationships discovery, and a diabetes prediction model construction."}
{"_id": "5550bd0554b93cbea47d629a94810c4b531930fa", "title": "Production and structural elucidation of exopolysaccharide from endophytic Pestalotiopsis sp. BC55.", "text": "There is a little information on exopolysaccharide production by endophytic fungi. In this investigation endophytic Pestalotiopsis sp. BC55 was used for optimization of exopolysaccharide production. One variable at a time method and response surface methodology were adopted to find out the best culture conditions and medium compositions for maximum exopolysaccharide production. The organism produced maximum exopolysaccharide (4.320 \u00b1 0.022 g/l EPS) in 250 ml Erlenmeyer flask containing 75 ml potato dextrose broth supplemented with (g%/l) glucose, 7.66; urea, 0.29; CaCl2, 0.05 with medium pH 6.93; after 3.76 days of incubation at 24\u00b0C. Exopolysaccharide [EPS (EP-I)] produced by this organism have Mw \u223c2\u00d710(5)Da with a melting point range of 122-124\u00b0C. Structural elucidation of the EPS (PS-I) was carried out after a series of experiments. Result indicated the presence of only (1\u21923)-linked \u03b2-d-glucopyranosyl moiety. The structure of the repeating unit was established as - \u21923)-\u03b2-d-Glcp-(1\u2192."}
{"_id": "a43d082f83d92f6affc8e21585a3eb194904c201", "title": "Nanopore sequencing technology and tools for genome assembly: computational analysis of the current state, bottlenecks and future directions.", "text": "Nanopore sequencing technology has the potential to render other sequencing technologies obsolete with its ability to generate long reads and provide portability. However, high error rates of the technology pose a challenge while generating accurate genome assemblies. The tools used for nanopore sequence analysis are of critical importance, as they should overcome the high error rates of the technology. Our goal in this work is to comprehensively analyze current publicly available tools for nanopore sequence analysis to understand their advantages, disadvantages and performance bottlenecks. It is important to understand where the current tools do not perform well to develop better tools. To this end, we (1) analyze the multiple steps and the associated tools in the genome assembly pipeline using nanopore sequence data, and (2) provide guidelines for determining the appropriate tools for each step. Based on our analyses, we make four key observations: (1) the choice of the tool for basecalling plays a critical role in overcoming the high error rates of nanopore sequencing technology. (2) Read-to-read overlap finding tools, GraphMap and Minimap, perform similarly in terms of accuracy. However, Minimap has a lower memory usage, and it is faster than GraphMap. (3) There is a trade-off between accuracy and performance when deciding on the appropriate tool for the assembly step. The fast but less accurate assembler Miniasm can be used for quick initial assembly, and further polishing can be applied on top of it to increase the accuracy, which leads to faster overall assembly. (4) The state-of-the-art polishing tool, Racon, generates high-quality consensus sequences while providing a significant speedup over another polishing tool, Nanopolish. We analyze various combinations of different tools and expose the trade-offs between accuracy, performance, memory usage and scalability. We conclude that our observations can guide researchers and practitioners in making conscious and effective choices for each step of the genome assembly pipeline using nanopore sequence data. Also, with the help of bottlenecks we have found, developers can improve the current tools or build new ones that are both accurate and fast, to overcome the high error rates of the nanopore sequencing technology."}
{"_id": "338651c90dcc60a191d23f499b02365baf9b16d3", "title": "Forensic analysis of video file formats", "text": "Video file format standards define only a limited number of mandatory features and leave room for interpretation. Design decisions of device manufacturers and software vendors are thus a fruitful resource for forensic video authentication. This paper explores AVI and MP4-like video streams of mobile phones and digital cameras in detail. We use customized parsers to extract all file format structures of videos from overall 19 digital camera models, 14 mobile phone models, and 6 video editing toolboxes. We report considerable differences in the choice of container formats, audio and video compression algorithms, acquisition parameters, and internal file structure. In combination, such characteristics can help to authenticate digital video files in forensic settings by distinguishing between original and post-processed videos, verifying the purported source of a file, or identifying the true acquisition device model or the processing software used for video processing. a 2014 The Authors. Published by Elsevier Ltd on behalf of DFRWS. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/)."}
{"_id": "b23d79cdc235c88e9d96c9a239c81f58a08a1374", "title": "A Security approach for Data Migration in Cloud Computing", "text": "Cloud computing is a new paradigm that combines several computing concepts and technologies of the Internet creating a platform for more agile and cost-effective business applications and IT infrastructure. The adoption of Cloud computing has been increasing for some time and the maturity of the market is steadily growing. Security is the question most consistently raised as consumers look to move their data and applications to the cloud. I justify the importance and motivation of security in the migration of legacy systems and I carry out an approach related to security in migration processes to cloud with the aim of finding the needs, concerns, requirements, aspects, opportunities and benefits of security in the migration process of legacy systems."}
{"_id": "1bc49abe5145055f1fa259bd4e700b1eb6b7f08d", "title": "SummaRuNNer: A Recurrent Neural Network based Sequence Model for Extractive Summarization of Documents", "text": "We present SummaRuNNer, a Recurrent Neural Network (RNN) based sequence model for extractive summarization of documents and show that it achieves performance better than or comparable to state-of-the-art. Our model has the additional advantage of being very interpretable, since it allows visualization of its predictions broken up by abstract features such as information content, salience and novelty. Another novel contribution of our work is abstractive training of our extractive model that can train on human generated reference summaries alone, eliminating the need for sentence-level extractive labels."}
{"_id": "5d017ad8e7ec3dc67aca93970004782e6b99eb2c", "title": "Clustering Based URL Normalization Technique for Web Mining", "text": "URL (Uniform Resource Locator) normalization is an important activity in web mining. Web data can be retrieved in smoother way using effective URL normalization technique. URL normalization also reduces lot of calculations in web mining activities. A web mining technique for URL normalization is proposed in this paper. The proposed technique is based on content, structure and semantic similarity and web page redirection and forwarding similarity of the given set of URLs. Web page redirection and forward graphs can be used to measure the similarities between the URL\u2019s and can also be used for URL clusters. The URL clusters can be used for URL normalization. A data structure is also suggested to store the forward and redirect URL information."}
{"_id": "23de293f939580eda037c6527a51ab7389400c4a", "title": "Towards Data-Driven Autonomics in Data Centers", "text": "Continued reliance on human operators for managing data centers is a major impediment for them from ever reaching extreme dimensions. Large computer systems in general, and data centers in particular, will ultimately be managed using predictive computational and executable models obtained through data-science tools, and at that point, the intervention of humans will be limited to setting high-level goals and policies rather than performing low-level operations. Data-driven autonomics, where management and control are based on holistic predictive models that are built and updated using generated data, opens one possible path towards limiting the role of operators in data centers. In this paper, we present a data-science study of a public Google dataset collected in a 12K-node cluster with the goal of building and evaluating a predictive model for node failures. We use BigQuery, the big data SQL platform from the Google Cloud suite, to process massive amounts of data and generate a rich feature set characterizing machine state over time. We describe how an ensemble classifier can be built out of many Random Forest classifiers each trained on these features, to predict if machines will fail in a future 24-hour window. Our evaluation reveals that if we limit false positive rates to 5%, we can achieve true positive rates between 27% and 88% with precision varying between 50% and 72%. We discuss the practicality of including our predictive model as the central component of a data-driven autonomic manager and operating it on-line with live data streams (rather than off-line on data logs). All of the scripts used for BigQuery and classification analyses are publicly available from the authors' website."}
{"_id": "67fdbe15c597de4817c3e0a94d80fbc8963a7038", "title": "PyElph - a software tool for gel images analysis and phylogenetics", "text": "This paper presents PyElph, a software tool which automatically extracts data from gel images, computes the molecular weights of the analyzed molecules or fragments, compares DNA patterns which result from experiments with molecular genetic markers and, also, generates phylogenetic trees computed by five clustering methods, using the information extracted from the analyzed gel image. The software can be successfully used for population genetics, phylogenetics, taxonomic studies and other applications which require gel image analysis. Researchers and students working in molecular biology and genetics would benefit greatly from the proposed software because it is free, open source, easy to use, has a friendly Graphical User Interface and does not depend on specific image acquisition devices like other commercial programs with similar functionalities do. PyElph software tool is entirely implemented in Python which is a very popular programming language among the bioinformatics community. It provides a very friendly Graphical User Interface which was designed in six steps that gradually lead to the results. The user is guided through the following steps: image loading and preparation, lane detection, band detection, molecular weights computation based on a molecular weight marker, band matching and finally, the computation and visualization of phylogenetic trees. A strong point of the software is the visualization component for the processed data. The Graphical User Interface provides operations for image manipulation and highlights lanes, bands and band matching in the analyzed gel image. All the data and images generated in each step can be saved. The software has been tested on several DNA patterns obtained from experiments with different genetic markers. Examples of genetic markers which can be analyzed using PyElph are RFLP (Restriction Fragment Length Polymorphism), AFLP (Amplified Fragment Length Polymorphism), RAPD (Random Amplification of Polymorphic DNA) and STR (Short Tandem Repeat). The similarity between the DNA sequences is computed and used to generate phylogenetic trees which are very useful for population genetics studies and taxonomic classification. PyElph decreases the effort and time spent processing data from gel images by providing an automatic step-by-step gel image analysis system with a friendly Graphical User Interface. The proposed free software tool is suitable for researchers and students which do not have access to expensive commercial software and image acquisition devices."}
{"_id": "79f3559e75ba15371f10b1b8e74ee7f67c97755e", "title": "Explicit Solution for Constrained Stochastic Linear-Quadratic Control with Multiplicative Noise", "text": "We study in this paper a class of constrained linear-quadratic (LQ) optimal control problem formulations for the scalar-state stochastic system with multiplicative noise, which has various applications, especially in the financial risk management. The linear constraint on both the control and state variables considered in our model destroys the elegant structure of the conventional LQ formulation and has blocked the derivation of an explicit control policy so far in the literature. We successfully derive in this paper the analytical control policy for such a class of problems by utilizing the state separation property induced from its structure. We reveal that the optimal control policy is a piece-wise affine function of the state and can be computed off-line efficiently by solving two coupled Riccati equations. Under some mild conditions, we also obtain the stationary control policy for infinite time horizon. We demonstrate the implementation of our method via some illustrative examples and show how to calibrate our model to solve dynamic constrained portfolio optimization problems."}
{"_id": "5025427ebf7c57dd8495a2df46af595298a606e8", "title": "Efficient Multipattern Event Processing Over High-Speed Train Data Streams", "text": "Big data is becoming a key basis for productivity growth, innovation, and consumer surplus, but also bring us great challenges in its volume, velocity, variety, value, and veracity. The notion of event is an important cornerstone to manage big data. High-speed railway is one of the most typical application domains for event-based system, especially for the train onboard system. There are usually numerous complex event patterns subscribed in system sharing the same prefix, suffix, or subpattern; consequently, multipattern complex event detection often results in plenty of redundant detection operations and computations. In this paper, we propose a multipattern complex event detection model, multipattern event processing (MPEP), constructed by three parts: 1) multipattern state transition; 2) failure transition; and 3) state output. Based on MPEP, an intelligent onboard system for high-speed train is preliminarily implemented. The system logic is described using our proposed complex event description model and compiled into a multipattern event detection model. Experimental results show that MPEP can effectively optimize the complex event detection process and improve its throughput by eliminating duplicate automata states and redundant computations. This intelligent onboard system also provides better detection ability than other models when processing real-time events stored in high-speed train Juridical Recording Unit (JRU)."}
{"_id": "b00c089cc70a7d28cb6e6e907f249eb67e092c39", "title": "Genetic Algorithm-Based Motion Estimation Method using Orientations and EMGs for Robot Controls", "text": "Demand for interactive wearable devices is rapidly increasing with the development of smart devices. To accurately utilize wearable devices for remote robot controls, limited data should be analyzed and utilized efficiently. For example, the motions by a wearable device, called Myo device, can be estimated by measuring its orientation, and calculating a Bayesian probability based on these orientation data. Given that Myo device can measure various types of data, the accuracy of its motion estimation can be increased by utilizing these additional types of data. This paper proposes a motion estimation method based on weighted Bayesian probability and concurrently measured data, orientations and electromyograms (EMG). The most probable motion among estimated is treated as a final estimated motion. Thus, recognition accuracy can be improved when compared to the traditional methods that employ only a single type of data. In our experiments, seven subjects perform five predefined motions. When orientation is measured by the traditional methods, the sum of the motion estimation errors is 37.3%; likewise, when only EMG data are used, the error in motion estimation by the proposed method was also 37.3%. The proposed combined method has an error of 25%. Therefore, the proposed method reduces motion estimation errors by 12%."}
{"_id": "9afc33e194fb9670b4538b22138a16abd3d2320b", "title": "A Nonlinear Unified State-Space Model for Ship Maneuvering and Control in a Seaway", "text": "This article presents a unified state-space model for ship maneuvering, station-keeping, and control in a seaway. The frequency-dependent potential and viscous damping terms, which in classic theory results in a convolution integral not suited for real-time simulation, is compactly represented by using a state-space formulation. The separation of the vessel model into a low-frequency model (represented by zero-frequency added mass and damping) and a wave-frequency model (represented by motion transfer functions or RAOs), which is commonly used for simulation, is hence made superfluous."}
{"_id": "15154da78d8c8a464dbc5f8857aad68ff7db8719", "title": "Tasks and scenario-based evaluation of information visualization techniques", "text": "Usability evaluation of an information visualization technique can only be done by the joint evaluation of both the visual representation and the interaction techniques. This work proposes task models as a key element for carrying out such evaluations in a structured way. We base our work on a taxonomy abstracting from rendering functions supported by information visualization techniques. CTTE is used to model these abstract visual tasks as well as to generate scenarios from this model for evaluation purposes. We conclude that the use of task models allows generating test scenarios which are more effective than informal and unstructured evaluations."}
{"_id": "b41ec45c38eb118448b01f457eaad0f71aee02a9", "title": "VMHunt: A Verifiable Approach to Partially-Virtualized Binary Code Simplification", "text": "Code virtualization is a highly sophisticated obfuscation technique adopted by malware authors to stay under the radar. However, the increasing complexity of code virtualization also becomes a \"double-edged sword\" for practical application. Due to its performance limitations and compatibility problems, code virtualization is seldom used on an entire program. Rather, it is mainly used only to safeguard the key parts of code such as security checks and encryption keys. Many techniques have been proposed to reverse engineer the virtualized code, but they share some common limitations. They assume the scope of virtualized code is known in advance and mainly focus on the classic structure of code emulator. Also, few work verifies the correctness of their deobfuscation results. In this paper, with fewer assumptions on the type and scope of code virtualization, we present a verifiable method to address the challenge of partially-virtualized binary code simplification. Our key insight is that code virtualization is a kind of process-level virtual machine (VM), and the context switch patterns when entering and exiting the VM can be used to detect the VM boundaries. Based on the scope of VM boundary, we simplify the virtualized code. We first ignore all the instructions in a given virtualized snippet that do not affect the final result of that snippet. To better revert the data obfuscation effect that encodes a variable through bitwise operations, we then run a new symbolic execution called multiple granularity symbolic execution to further simplify the trace snippet. The generated concise symbolic formulas facilitate the correctness testing of our simplification results. We have implemented our idea as an open source tool, VMHunt, and evaluated it with real-world applications and malware. The encouraging experimental results demonstrate that VMHunt is a significant improvement over the state of the art."}
{"_id": "3e4bd583795875c6550026fc02fb111daee763b4", "title": "Convolutional Sketch Inversion", "text": "In this paper, we use deep neural networks for inverting face sketches to synthesize photorealistic face images. We first construct a semi-simulated dataset containing a very large number of computergenerated face sketches with different styles and corresponding face images by expanding existing unconstrained face data sets. We then train models achieving state-of-the-art results on both computer-generated sketches and hand-drawn sketches by leveraging recent advances in deep learning such as batch normalization, deep residual learning, perceptual losses and stochastic optimization in combination with our new dataset. We finally demonstrate potential applications of our models in fine arts and forensic arts. In contrast to existing patch-based approaches, our deep-neuralnetwork-based approach can be used for synthesizing photorealistic face images by inverting face sketches in the wild."}
{"_id": "d5cd72f923c1c6c3ae21eb705b084c35f3508bd0", "title": "A convolutional neural networks denoising approach for salt and pepper noise", "text": "The salt and pepper noise, especially the one with extremely high percentage of impulses, brings a significant challenge to image denoising. In this paper, we propose a non-local switching filter convolutional neural network denoising algorithm, named NLSF-CNN, for salt and pepper noise. As its name suggested, our NLSF-CNN consists of two steps, i.e., a NLSF processing step and a CNN training step. First, we develop a NLSF pre-processing step for noisy images using non-local information. Then, the pre-processed images are divided into patches and used for CNN training, leading to a CNN denoising model for future noisy images. We conduct a number of experiments to evaluate the effectiveness of NLSF-CNN. Experimental results show that NLSF-CNN outperforms the state-of-the-art denoising algorithms with a few training images."}
{"_id": "b0232b47186aa4fc50cc1d4698f9451764daa660", "title": "A Systematic Literature Review: Code Bad Smells in Java Source Code", "text": "Code smell is an indication of a software designing problem. The presence of code smells can have a severe impact on the software quality. Smells basically refers to the structure of the code which violates few of the design principals and so has negative effect on the quality of the software. Larger the source code, more is its presence. Software needs to be reliable, robust and easily maintainable so that it can minimize the cost of its development as well as maintenance. Smells may increase the chances of failure of the system during maintenance. A SLR has been performed based on the search of digital libraries that includes the publications since 1999 to 2016. 60 research papers are deeply analyzed that are most relevant. The objective of this paper is to provide an extensive overview of existing research in the field of bad smells, identify the detection techniques and correlation between the detection techniques, in addition to find the name of the code smells that need more attention in detection approaches. This SLR identified that code clone (code smell) receives most research attention. Our findings also show that very few papers report on the impact of code bad smells. Most of the papers focused on the detection techniques and tools. A significant correlation between detection techniques has been calculated. There are four code smells that are not yet detected are Primitive Obsession, Inappropriate Intimacy, Incomplete library class and Comments."}
{"_id": "2c0005665472b687cca2d1b527ad3b0254e905c0", "title": "Predicting defect-prone software modules using support vector machines", "text": "Effective prediction of defect-prone software modules can enable software developers to focus quality assurance activities and allocate effort and resources more efficiently. Support vector machines (SVM) have been successfully applied for solving both classification and regression problems in many applications. This paper evaluates the capability of SVM in predicting defect-prone software modules and compares its prediction performance against eight statistical and machine learning models in the context of four NASA datasets. The results indicate that the prediction performance of SVM is generally better than, or at least, is competitive against the compared models. 2007 Elsevier Inc. All rights reserved."}
{"_id": "3759516030dc3b150b6b048fc9d33a044e13d1dc", "title": "Preliminary comparison of techniques for dealing with imbalance in software defect prediction", "text": "Imbalanced data is a common problem in data mining when dealing with classification problems, where samples of a class vastly outnumber other classes. In this situation, many data mining algorithms generate poor models as they try to optimize the overall accuracy and perform badly in classes with very few samples. Software Engineering data in general and defect prediction datasets are not an exception and in this paper, we compare different approaches, namely sampling, cost-sensitive, ensemble and hybrid approaches to the problem of defect prediction with different datasets preprocessed differently. We have used the well-known NASA datasets curated by Shepperd et al. There are differences in the results depending on the characteristics of the dataset and the evaluation metrics, especially if duplicates and inconsistencies are removed as a preprocessing step.\n Further Results and replication package: http://www.cc.uah.es/drg/ease14"}
{"_id": "5fcdd1970bf75e6c00088d0cabd2e19538c97257", "title": "Benchmarking Classification Models for Software Defect Prediction: A Proposed Framework and Novel Findings", "text": "Software defect prediction strives to improve software quality and testing efficiency by constructing predictive classification models from code attributes to enable a timely identification of fault-prone modules. Several classification models have been evaluated for this task. However, due to inconsistent findings regarding the superiority of one classifier over another and the usefulness of metric-based classification in general, more research is needed to improve convergence across studies and further advance confidence in experimental results. We consider three potential sources for bias: comparing classifiers over one or a small number of proprietary data sets, relying on accuracy indicators that are conceptually inappropriate for software defect prediction and cross-study comparisons, and, finally, limited use of statistical testing procedures to secure empirical findings. To remedy these problems, a framework for comparative software defect prediction experiments is proposed and applied in a large-scale empirical comparison of 22 classifiers over 10 public domain data sets from the NASA Metrics Data repository. Overall, an appealing degree of predictive accuracy is observed, which supports the view that metric-based classification is useful. However, our results indicate that the importance of the particular classification algorithm may be less than previously assumed since no significant performance differences could be detected among the top 17 classifiers."}
{"_id": "b0bbea300988928b7c2998e98e52670b2c365327", "title": "Applying machine learning to software fault-proneness prediction", "text": "The importance of software testing to quality assurance cannot be overemphasized. The estimation of a module\u2019s fault-proneness is important for minimizing cost and improving the effectiveness of the software testing process. Unfortunately, no general technique for estimating software fault-proneness is available. The observed correlation between some software metrics and fault-proneness has resulted in a variety of predictive models based on multiple metrics. Much work has concentrated on how to select the software metrics that are most likely to indicate fault-proneness. In this paper, we propose the use of machine learning for this purpose. Specifically, given historical data on software metric values and number of reported errors, an Artificial Neural Network (ANN) is trained. Then, in order to determine the importance of each software metric in predicting fault-proneness, a sensitivity analysis is performed on the trained ANN. The software metrics that are deemed to be the most critical are then used as the basis of an ANN-based predictive model of a continuous measure of fault-proneness. We also view fault-proneness prediction as a binary classification task (i.e., a module can either contain errors or be error-free) and use Support Vector Machines (SVM) as a state-of-the-art classification method. We perform a comparative experimental study of the effectiveness of ANNs and SVMs on a data set obtained from NASA\u2019s Metrics Data Program data repository. 2007 Elsevier Inc. All rights reserved."}
{"_id": "c11d0ba3e581e691f1bee0022dc0807ff4c428f2", "title": "A systematic analysis of performance measures for classification tasks", "text": "Article history: Received 14 February 2008 Received in revised form 21 November 2008 Accepted 6 March 2009 Available online 8 May 2009"}
{"_id": "d76409a011c54a58235af6b409850af7fae8733a", "title": "Using Screen Brightness to Improve Security in Mobile Social Network Access", "text": "In the today's mobile communications scenario, smartphones offer new capabilities to develop sophisticated applications that seem to make daily life easier and more convenient for users. Such applications, which may involve mobile ticketing, identification, access control operations, etc., are often accessible through social network aggregators, that assume a fundamental role in the federated identity management space. While this makes modern smartphones very powerful devices, it also makes them very attractive targets for spyware injection. This kind of malware is able to bypass classic authentication measures and steal user credentials even when a secure element is used, and can, therefore, perform unauthorized mobile access to social network services without the user's consent. Such an event allows stealing sensitive information or even a full identity theft. In this work, we address this issue by introducing BrightPass, a novel authentication mechanism based on screen brightness. BrightPass allows users to authenticate safely with a PIN-based confirmation in the presence of specific operations on sensitive data. We compare BrightPass with existing schemes, in order to show its usability and security within the social network arena. Furthermore, we empirically assess the security of BrightPass through experimentation. Our tests indicate that BrightPass protects the PIN code against automatic submissions carried out by malware while granting fast authentication phases and reduced error rates."}
{"_id": "9d39133f3079ba10f68ca0d9d60dd7bd18bce805", "title": "Using Design Thinking to Differentiate Useful From Misleading Evidence in Observational Research.", "text": "Few issues can be more important to physicians or patients than that treatment decisions are based on reliable information about benefits and harms. While randomized clinical trials (RCTs) are generally regarded as the most valid source of evidence about benefits and some harms, concerns about their generalizability, costs, and heterogeneity of treatment effects have led to the search for other sources of information to augment or possibly replace trials. This is embodied in the recently passed 21st Century Cures Act, which mandates that the US Food and Drug Administration develop rules for the use of \u201creal world evidence\u201d in drug approval, defined as \u201cdata...derived from sources other than randomized clinical trials.\u201d1 A second push toward the use of nontrial evidence is based on the perception that the torrent of electronic health-related data\u2014medical record, genomic, and lifestyle (ie, \u201cBig Data\u201d)\u2014can be transformed into reliable evidence with the use of powerful modern analytic tools. The use of nontrial evidence puts experts who weigh medical evidence, no less a practicing clinician, in a quandary, as the evaluation of such information requires skills well beyond those of basic critical review. The didactic tutorial by Agoritsas et al2 in this issue of JAMA partly addresses that problem, providing a clear, concise, and accessible introduction to several statistical approaches to adjust analyses of observational studies to minimize or eliminate confounding: matching, standard regression, propensity scores, and instrumental variable analyses. While understanding these technical tools is valuable, valid causal inference from nonrandomized studies about treatment effects depends on many factors other than confounding. These include whether the causal question motivating the study is clearly specified, whether the design matches that question and avoids design biases, whether the analysis matches the design, the appropriateness and quality of the data, the fit of adjustment models, and the potential for model searching to find spurious patterns in vast data streams. Some of these issues are recognizable and remediable, whereas others may defy solely analytic solutions, leading to the rejection of some nonrandomized studies to guide treatment. The hallmark of a well-posed causal question is that one can describe an RCT that would answer it. An RCT requires specifying at minimum an eligible population, an intervention, a comparator, and a start time. As posed by Hern\u00e1n and Taubman,3 an example of a noncausal question is \u201cDoes obesity shorten life?\u201d This is not causal because \u201cobesity\u201d is not a randomizable intervention, nor is weight loss. Randomizable interventions\u2014the goal of which might be weight loss\u2014include diet, exercise, gastric bypass surgery, liposuction, or drugs. Once the causal question is specified, the experimental design must mirror it, creating the foundation for a \u201clike vs like\u201d causal contrast on which a valid analysis is based. When observational data not collected for research are analyzed to estimate treatment effects, sometimes they are fed into statistical software without an explicit design and with a causal question that is either imprecise or not stated. 
This sets the stage for invalid results, even when the statistical methods appear to be properly implemented.4,5 The example of postmenopausal hormone therapy (HT) illustrates the criticality, when analyzing observational data, of posing a precise causal question, with a design and analysis to match. Observational studies, including one from the Nurses\u2019 Health Study, using some of the adjustment techniques described by Agoritsas et al, showed a strong cardiovascular protective association with HT. This result was famously contravened by a subsequent RCT, the Women\u2019s Health Initiative, which showed cardiovascular harm from HT. A subsequent reanalysis of the Nurses\u2019 Health Study data conformed with the RCT results.6 The reanalysis was structured to mirror the clinical question asked in the trial; viz, how does initiating HT in a postmenopausal woman affect her cardiovascular risk? The original observational analysis, using standard regression, effectively excluded women with coronary heart disease events that occurred shortly after initiation of HT, a remediable design bias known in pharmacoepidemiology as \u201cdepletion of susceptibles\u201d7 or in other fields as \u201csurvivorship bias.\u201d No amount of adjustment for baseline factors between ongoing HT users and nonusers can overcome that bias, but by refining the question, and then using the appropriately tailored design, an appropriate analysis can be crafted. Following the causal thinking reflected in RCT design, most successful examples of observational research that produce findings similar to those from RCTs have compared new drug users with new users of a comparator agent.8,9 The disproven nonrandomized studies referenced at the end of the article by Agoritsas et al all compared users with nonusers. In an RCT, a nonuser is always someone who was eligible and considered for treatment, but that is not guaranteed in an observational study. If instead 2 active treatments indicated for the same condition are compared, it is likely Related article page 748 Opinion"}
{"_id": "2d1853472ed4953fcca2211344d6a3e2f8e7d0cc", "title": "Classifying Photographic and Photorealistic Computer Graphic Images using Natural Image Statistics", "text": "As computer graphics (CG) is getting more photorealistic, for the purpose of image authentication, it becomes increasingly important to construct a detector for classifying photographic images (PIM) and photorealistic computer graphics (PRCG). To this end, we propose that photographic images contain natural-imaging quality (NIQ) and natural-scene quality (NSQ). NIQ is due to the imaging process, while NSQ is due to the subtle physical light transport in a real-world scene. We explicitly model NSQ of photographic images using natural image statistics (NIS). NIS has been used as an image prior in applications such as image compression and denoising. However, NIS has not been comprehensively and systematically employed for classifying PIM and PRCG. In this work, we study three types of NIS with different statistical order, i.e., NIS derived from the power spectrum, wavelet transform and local patch of images. The experiment shows that the classification is in line with the statistical order of the NIS. The local patch NIS achieves a classification accuracy of 83% which outperforms the features derived from modeling the characteristics of computer graphics."}
{"_id": "28d50600bebe281b055fbbb49a6eb9b3acb2b7cf", "title": "Goal-Oriented Requirements Engineering: A Guided Tour", "text": "Goals capture, at different levels of abstraction, the various objectives the system under consideration should achieve. Goal-oriented requirements engineering is concerned with the use of goals for eliciting, elaborating, structuring, specifying, analyzing, negotiating, documenting, and modifying requirements. This area has received increasing attention over the past few years. The paper reviews various research efforts undertaken along this line of research. The arguments in favor of goal orientation are first briefly discussed. The paper then compares the main approaches to goal modeling, goal specification and goal-based reasoning in the many activities of the requirements engineering process. To make the discussion more concrete, a real case study is used to suggest what a goal-oriented requirements engineering method may look like. Experience with such approaches and tool support are briefly discussed as well."}
{"_id": "27eba11d57f7630b75fb67644ff48087472414f3", "title": "Do RNNs learn human-like abstract word order preferences?", "text": "RNN language models have achieved state-ofthe-art results on various tasks, but what exactly they are representing about syntax is as yet unclear. Here we investigate whether RNN language models learn humanlike word order preferences in syntactic alternations. We collect language model surprisal scores for controlled sentence stimuli exhibiting major syntactic alternations in English: heavy NP shift, particle shift, the dative alternation, and the genitive alternation. We show that RNN language models reproduce human preferences in these alternations based on NP length, animacy, and definiteness. We collect human acceptability ratings for our stimuli, in the first acceptability judgment experiment directly manipulating the predictors of syntactic alternations. We show that the RNNs\u2019 performance is similar to the human acceptability ratings and is not matched by an n-gram baseline model. Our results show that RNNs learn the abstract features of weight, animacy, and definiteness which underlie soft constraints on syntactic alternations. The best-performing models for many natural language processing tasks in recent years have been recurrent neural networks (RNNs) (Elman, 1990; Sutskever et al., 2014; Goldberg, 2017), but the black-box nature of these models makes it hard to know exactly what generalizations they have learned about their linguistic input: Have they learned generalizations stated over hierarchical structures, or only dependencies among relatively local groups of words (Linzen et al., 2016; Gulordava et al., 2018; Futrell et al., 2018)? Do they represent structures analogous to syntactic dependency trees (Williams et al., 2018), and can they represent complex relationships such as filler\u2013gap dependencies (Chowdhury and Zamparelli, 2018; Wilcox et al., 2018)? In order to make progress with RNNs, it is crucial to determine what RNNs actually learn given currently standard practices; then we can design network architectures, objective functions, and training practices to build on strengths and alleviate weaknesses (Linzen, 2018). In this work, we investigate whether RNNs trained on a language modeling objective learn certain syntactic preferences exhibited by humans, especially those involving word order. We draw on a rich literature from quantitative linguistics that has investigated these preferences in corpora and experiments (e.g., McDonald et al., 1993; Stallings et al., 1998; Bresnan et al., 2007; Rosenbach, 2008). Word order preferences are a key aspect of human linguistic knowledge. In many cases, they can be captured using local co-occurrence statistics: for example, the preference for subject\u2013verb\u2013 object word order in English can often be captured directly in short word strings, as in the dramatic preference for I ate apples over I apples ate. However, some word order preferences are more abstract and can only be stated in terms of higherorder linguistic units and abstract features. For example, humans exhibit a general preference for word orders in which words linked in syntactic dependencies are close to each other: such sentences are produced more frequently and comprehended more easily (Hawkins, 1994; Futrell et al., 2015; Temperley and Gildea, 2018). We are interested in whether RNNs learn abstract word order preferences as a way of probing their syntactic knowledge. 
If RNNs exhibit these preferences for appropriately controlled stimuli, then on some level they have learned the abstractions required to state them. Knowing whether RNNs show human-like word order preferences also bears on their suitability as language generation systems. White and Rajkumar (2012) have shown that language gener-"}
{"_id": "d6391e2f688996a75c62869cd4c50711cdeaed9d", "title": "Sparse concept coding for visual analysis", "text": "We consider the problem of image representation for visual analysis. When representing images as vectors, the feature space is of very high dimensionality, which makes it difficult for applying statistical techniques for visual analysis. To tackle this problem, matrix factorization techniques, such as Singular Vector Decomposition (SVD) and Non-negative Matrix Factorization (NMF), received an increasing amount of interest in recent years. Matrix factorization is an unsupervised learning technique, which finds a basis set capturing high-level semantics in the data and learns coordinates in terms of the basis set. However, the representations obtained by them are highly dense and can not capture the intrinsic geometric structure in the data. In this paper, we propose a novel method, called Sparse Concept Coding (SCC), for image representation and analysis. Inspired from the recent developments on manifold learning and sparse coding, SCC provides a sparse representation which can capture the intrinsic geometric structure of the image space. Extensive experimental results on image clustering have shown that the proposed approach provides a better representation with respect to the semantic structure."}
{"_id": "70c1da6627f0d7a93bdc2b88b33bcc22a782cd6c", "title": "Design and Modeling of A Crowdsource-Enabled System for Urban Parcel Relay and Delivery", "text": "This paper proposes a crowdsource-enabled system for urban parcel relay and delivery. We consider cyclists and pedestrians as crowdsources who are close to customers and interested in relaying parcels with a truck carrier and undertaking jobs for the last-leg parcel delivery and the first-leg parcel pickup. The crowdsources express their interests in doing so by submitting bids to the truck carrier. The truck carrier then selects bids and coordinates crowdsources\u2019 last-leg delivery (first-leg pickup) with its truck operations. The truck carrier\u2019s problem is formulated as a mixed integer non-linear program which simultaneously i) selects crowdsources to complete the last-leg delivery (first-leg pickup) between customers and selected points for crowdsource-truck relay; and ii) determines the relay points and truck routes and schedule. To solve the truck carrier problem, we first decompose the problem into a winner determination problem and a simultaneous pickup and delivery problem with soft time windows, and propose a Tabu Search based algorithm to iteratively solve the two subproblems. Numerical results show that this solution approach is able to yield close-to-optimum solutions with much less time than using offthe-shelf solvers. By adopting this new system, truck vehicle miles traveled (VMT) and total cost can be reduced compared to pure-truck delivery. The advantage of the system over pure-truck delivery is sensitive to factors such as penalty for servicing outside customers\u2019 desired time windows, truck unit operating cost, time value of crowdsources, and the crowdsource mode."}
{"_id": "49522df4fab1ebbeb831fc265196c2c129bf6087", "title": "Survey on data science with population-based algorithms", "text": "This paper discusses the relationship between data science and population-based algorithms, which include swarm intelligence and evolutionary algorithms. We reviewed two categories of literature, which include population-based algorithms solving data analysis problem and utilizing data analysis methods in population-based algorithms. With the exponential increment of data, the data science, or more specifically, the big data analytics has gained increasing attention among researchers. New and more efficient algorithms should be designed to handle this massive data problem. Based on the combination of population-based algorithms and data mining techniques, we understand better the insights of data analytics, and design more efficient algorithms to solve real-world big data analytics problems. Also, the weakness and strength of population-based algorithms could be analyzed via the data analytics along the optimization process, a crucial entity in population-based algorithms."}
{"_id": "d1a0b53f50d0e016372313aabfad9b76839df437", "title": "Discovering space - Grounding spatial topology and metric regularity in a naive agent's sensorimotor experience", "text": "In line with the sensorimotor contingency theory, we investigate the problem of the perception of space from a fundamental sensorimotor perspective. Despite its pervasive nature in our perception of the world, the origin of the concept of space remains largely mysterious. For example in the context of artificial perception, this issue is usually circumvented by having engineers pre-define the spatial structure of the problem the agent has to face. We here show that the structure of space can be autonomously discovered by a naive agent in the form of sensorimotor regularities, that correspond to so called compensable sensory experiences: these are experiences that can be generated either by the agent or its environment. By detecting such compensable experiences the agent can infer the topological and metric structure of the external space in which its body is moving. We propose a theoretical description of the nature of these regularities and illustrate the approach on a simulated robotic arm equipped with an eye-like sensor, and which interacts with an object. Finally we show how these regularities can be used to build an internal representation of the sensor's external spatial configuration."}
{"_id": "7fafb1fab4c164ffbbd7e99415cdd1754fe3aded", "title": "Sensory gain control (amplification) as a mechanism of selective attention: electrophysiological and neuroimaging evidence.", "text": "Both physiological and behavioral studies have suggested that stimulus-driven neural activity in the sensory pathways can be modulated in amplitude during selective attention. Recordings of event-related brain potentials indicate that such sensory gain control or amplification processes play an important role in visual-spatial attention. Combined event-related brain potential and neuroimaging experiments provide strong evidence that attentional gain control operates at an early stage of visual processing in extrastriate cortical areas. These data support early selection theories of attention and provide a basis for distinguishing between separate mechanisms of attentional suppression (of unattended inputs) and attentional facilitation (of attended inputs)."}
{"_id": "c1c74829d6430d468a1fe1f75eae217325253baf", "title": "Advanced Data Mining Techniques", "text": "The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. I dedicate this book to my grandchildren. Preface The intent of this book is to describe some recent data mining tools that have proven effective in dealing with data sets which often involve uncertain description or other complexities that cause difficulty for the conventional approaches of logistic regression, neural network models, and decision trees. Among these traditional algorithms, neural network models often have a relative advantage when data is complex. We will discuss methods with simple examples, review applications, and evaluate relative advantages of several contemporary methods. Our intent is to cover the fundamental concepts of data mining, to demonstrate the potential of gathering large sets of data, and analyzing these data sets to gain useful business understanding. We have organized the material into three parts. Part I introduces concepts. Part II contains chapters on a number of different techniques often used in data mining. Part III focuses on business applications of data mining. Not all of these chapters need to be covered, and their sequence could be varied at instructor design. The book will include short vignettes of how specific concepts have been applied in real practice. A series of representative data sets will be generated to demonstrate specific methods and concepts. References to data mining software and sites such as www.kdnuggets.com will be provided. Chapter 1 gives an overview of data mining, and provides a description of the data mining process. An overview of useful business applications is provided. Chapter 2 presents the data mining process in more detail. It demonstrates this process with a typical set of data. Visualization of data through data mining software is addressed. Chapter 3 presents memory-based reasoning methods of data mining. Major real applications are described. Algorithms are demonstrated with prototypical data based on real applications. Chapter 4 discusses association rule methods. Application in the form of market basket analysis is discussed. A real data set is described, and a simplified version used to demonstrate association rule methods. Chapter 5 presents fuzzy data mining approaches. Fuzzy decision tree approaches are described, as well as fuzzy association rule applications. Real data mining applications are described and demonstrated Chapter 6 presents Rough \u2026"}
{"_id": "1b29144f0e63f8d9404a44b15623d8150ea76447", "title": "A cost analysis of RDL-first and mold-first fan-out wafer level packaging", "text": "Industry interest in fan-out wafer level packaging has been increasing steadily. This is partially because of the potential behind panel-based fan-out packaging, and partially because wafer level packaging may be fulfilling needs not met by interposer-based and 3D technologies. As is the case with any technology, there are variations within the fan-out wafer level packaging category. This paper will focus on two of the primary processes: RDL-first and mold-first (also called chip-first). While these process flows have many of the same activities, those activities are carried out in a different order, and there are a few key steps that will differ. Each process has unique challenges and benefits, and these will be explored and analyzed."}
{"_id": "fdec0d4159ceae88377288cb340341f6bf80bb15", "title": "Mostly Exploration-Free Algorithms for Contextual Bandits", "text": "The contextual bandit literature has traditionally focused on algorithms that address the explorationexploitation tradeoff. In particular, greedy algorithms that exploit current estimates without any exploration may be sub-optimal in general. However, exploration-free greedy algorithms are desirable in practical settings where exploration may be costly or unethical (e.g., clinical trials). Surprisingly, we find that a simple greedy algorithm can be rate-optimal (achieves asymptotically optimal regret) if there is sufficient randomness in the observed contexts (covariates). We prove that this is always the case for a two-armed bandit under a general class of context distributions that satisfy a condition we term covariate diversity. Furthermore, even absent this condition, we show that a greedy algorithm can be rate-optimal with nonzero probability. Thus, standard bandit algorithms may unnecessarily explore. Motivated by these results, we introduce Greedy-First, a new algorithm that uses only observed contexts and rewards to determine whether to follow a greedy algorithm or to explore. We prove that this algorithm is rate-optimal without any additional assumptions on the context distribution or the number of arms. Extensive simulations demonstrate that Greedy-First successfully reduces experimentation and outperforms existing (exploration-based) contextual bandit algorithms such as Thompson sampling or UCB."}
{"_id": "f20d2ffd2dcfffb0c1d22c11bf4861f5140b1a28", "title": "Unveiling patterns of international communities in a global city using mobile phone data", "text": "We analyse a large mobile phone activity dataset provided by Telecom Italia for the Telecom Big Data Challenge contest. The dataset reports the international country codes of every call/SMS made and received by mobile phone users in Milan, Italy, between November and December 2013, with a spatial resolution of about 200 meters. We first show that the observed spatial distribution of international codes well matches the distribution of international communities reported by official statistics, confirming the value of mobile phone data for demographic research. Next, we define an entropy function to measure the heterogeneity of the international phone activity in space and time. By comparing the entropy function to empirical data, we show that it can be used to identify the city\u2019s hotspots, defined by the presence of points of interests. Eventually, we use the entropy function to characterize the spatial distribution of international communities in the city. Adopting a topological data analysis approach, we find that international mobile phone users exhibit some robust clustering patterns that correlate with basic socio-economic variables. Our results suggest that mobile phone records can be used in conjunction with topological data analysis tools to study the geography of migrant communities in a global city."}
{"_id": "16048a04665a8cddd179a939d6cbeb4e0cdb6672", "title": "Automatic Radial Distortion Estimation from a Single Image", "text": "Many computer vision algorithms rely on the assumptions of the pinhole camera model, but lens distortion with off-the-shelf cameras is usually significant enough to violate this assumption. Many methods for radial distortion estimation have been proposed, but they all have limitations. Robust automatic radial distortion estimation from a single natural image would be extremely useful for many applications, particularly those in human-made environments containing abundant lines. For example, it could be used in place of an extensive calibration procedure to get a mobile robot or quadrotor experiment up and running quickly in an indoor environment. We propose a new method for automatic radial distortion estimation based on the plumb-line approach. The method works from a single image and does not require a special calibration pattern. It is based on Fitzgibbon\u2019s division model, robust estimation of circular arcs, and robust estimation of distortion parameters. We perform an extensive empirical study of the method on synthetic images. We include a comparative statistical analysis of how different circle fitting methods contribute to accurate distortion parameter estimation. We finally provide qualitative results on a wide variety of challenging real images. The experiments demonstrate the method\u2019s ability to accurately identify distortion parameters and remove distortion from images."}
{"_id": "4194a2ddd9621b6068f4ea39ab0e9c180a7bbca0", "title": "Millimeter-Wave Substrate Integrated Waveguide Multibeam Antenna Based on the Parabolic Reflector Principle", "text": "A novel substrate integrated waveguide (SIW) multibeam antenna based on the parabolic reflector principle is proposed and implemented at 37.5 GHz, which takes the advantages of low loss, low profile, high gain, and mass producible. A prototype of the proposed antenna with seven input ports generating a corresponding number of output beams is fabricated in a standard PCB process. The measurements are in good agreement with the simulations, and the results demonstrate that this type of printed multibeam antenna is a good choice for communications applications where mobility and high gain are simultaneously required."}
{"_id": "84f904a71bee129a1cf00dc97f6cdbe1011657e6", "title": "Fashioning with Networks: Neural Style Transfer to Design Clothes", "text": "Convolutional Neural Networks have been highly successful in performing a host of computer vision tasks such as object recognition, object detection, image segmentation and texture synthesis. In 2015, Gatys et. al [7] show how the style of a painter can be extracted from an image of the painting and applied to another normal photograph, thus recreating the photo in the style of the painter. The method has been successfully applied to a wide range of images and has since spawned multiple applications and mobile apps. In this paper, the neural style transfer algorithm is applied to fashion so as to synthesize new custom clothes. We construct an approach to personalize and generate new custom clothes based on a user\u2019s preference and by learning the user\u2019s fashion choices from a limited set of clothes from their closet. The approach is evaluated by analyzing the generated images of clothes and how well they align with the user\u2019s fashion style."}
{"_id": "e1df33e5915dd610a6f3542b28dca4306ad62cd8", "title": "Affective Deprivation Disorder : Does it Constitute a Relational Disorder ?", "text": "Harriet Simons is Adjunct Associate Professor at Smith College School for Social Work and runs a private social work practice for individuals and couples specializing in infertility and Asperger\u2019s relationships. Among her published works are Wanting Another Child: Coping with Secondary Infertility. Jossey-Bass, 1998; (Contributor), Infertility Counseling: A Handbook for Clinicians. Parthenon Press, 1997; and (Co-editor and Contributor) Infertility: Medical, Emotional and Social Considerations. Human Sciences Press, 1984. Jason Thompson is Community Support Officer at SPARAL Disability Services specializing in rehabilitation and lifestyle support for individuals with intellectual and physical disabilities. He has completed certified studies in Applied Behaviour Analysis and Normative Male Alexithymia. He recently published an essay on the subject of alexithymia Alexithymia: An Imaginative Approach, Psychotherapy Australia Journal, vol 14, No 4, Aug. 2008"}
{"_id": "11b576b37a76833da3fabe2f485529f9210c6c05", "title": "Persistent Homology: An Introduction and a New Text Representation for Natural Language Processing", "text": "Persistent homology is a mathematical tool from topological data analysis. It performs multi-scale analysis on a set of points and identifies clusters, holes, and voids therein. These latter topological structures complement standard feature representations, making persistent homology an attractive feature extractor for artificial intelligence. Research on persistent homology for AI is in its infancy, and is currently hindered by two issues: the lack of an accessible introduction to AI researchers, and the paucity of applications. In response, the first part of this paper presents a tutorial on persistent homology specifically aimed at a broader audience without sacrificing mathematical rigor. The second part contains one of the first applications of persistent homology to natural language processing. Specifically, our Similarity Filtration with Time Skeleton (SIFTS) algorithm identifies holes that can be interpreted as semantic \u201ctie-backs\u201d in a text document, providing a new document structure representation. We illustrate our algorithm on documents ranging from nursery rhymes to novels, and on a corpus with child and adolescent writings."}
{"_id": "ee03a87fceb56f36a94639724d0e7bcb0d5ffb92", "title": "Warpage Tuning Study for Multi-chip Last Fan Out Wafer Level Package", "text": "In recent years, the IoT popularity pushes the package development of 3C products into a more functional and thinner target. For high I/O density and low cost considered package, the promising Fan-out Wafer Level Packaging (FOWLP) provides a solution to match OSAT existing capability, besides, the chip last process in FOWLP can further enhance the total yield by selectable known-good dies (KGDs). However, under processing, the large portion of molding compound induces high warpage to challenge fabrication limitation. The additional leveling process is usually applied to lower the warpage that caused by the mismatch of coefficient of thermal expansion and Young's modulus from carriers, dies, and molding compound. This process results in the increase of package cost and even induce internal damages that affect device reliability. In order to avoid leveling process and improve warpage trend, in this paper, we simulated several models with different design of molding compound and dies, and then developed a multi-chip last FOWLP test vehicle by package dimension of 12x15 mm2 with 8x9 and 4x9 mm2 multiple dies, respectively. The test vehicle performed three redistribution layers (RDLs) including one fine pitch RDL of line width/line spacing 2um/2um, which is also the advantage of multi-chip last FOWLP, and also exhibited ball on trace structure for another low cost option. For the wafer warpage discussion, the results showed that tuning the thickness of molding compound can improve warpage trend, especially in the application of high modulus carrier, which improved wafer warpage within 1mm, for package warpage discussion, the thinner die can lower the warpage of package. Through well warpage controlling, the multi-chip last FOWLP package with ball on trace design was successfully presented in this paper, and also passed the package level reliability of TCB 1000 cycles, HTSL 1000 hrs, and uHAST 96 hrs, and drop test by board level reliability."}
{"_id": "809219e7c5d7623a67baed7389c9d8f502b385c9", "title": "Eccentric training in chronic painful impingement syndrome of the shoulder: results of a pilot study", "text": "Treatment with painful eccentric muscle training has been demonstrated to give good clinical results in patients with chronic Achilles tendinosis. The pain mechanisms in chronic painful shoulder impingement syndrome have not been scientifically clarified, but the histological changes found in the supraspinatus tendon have similarities with the findings in Achilles tendinosis. In this pilot study, nine patients (five females and four males, mean age 54\u00a0years) with a long duration of shoulder pain (mean 41\u00a0months), diagnosed as having shoulder impingement syndrome and on the waiting list for surgical treatment (mean 13\u00a0months), were included. Patients with arthrosis in the acromio-clavicular joint or with large calcifications causing mechanical impingement during horizontal shoulder abduction were not included. We prospectively studied the effects of a specially designed painful eccentric training programme for the supraspintus and deltoideus muscles (3\u00d715 reps, 2\u00a0times/day, 7\u00a0days/week, for 12\u00a0weeks). The patients evaluated the amount of shoulder pain during horizontal shoulder activity on a visual analogue scale (VAS), and satisfaction with treatment. Constant score was assessed. After 12\u00a0weeks of treatment, five patients were satisfied with treatment, their mean VAS had decreased (62\u201318, P<0.05), and their mean Constant score had increased (65\u201380, P<0.05). At 52-week follow-up, the same five patients were still satisfied (had withdrawn from the waiting list for surgery), and their mean VAS and Constant score were 31 and 81, respectively. Among the satisfied patients, two had a partial suprasinatus tendon rupture, and three had a Type 3 shaped acromion. In conclusion, the material in this study is small and the follow-up is short, but it seems that although there is a long duration of pain, together with bone and tendon abnormalities, painful eccentric supraspinatus and deltoideus training might be effective. The findings motivate further studies."}
{"_id": "b6f758be954d34817d4ebaa22b30c63a4b8ddb35", "title": "A Proximity-Aware Hierarchical Clustering of Faces", "text": "In this paper, we propose an unsupervised face clustering algorithm called \u201cProximity-Aware Hierarchical Clustering\u201d (PAHC) that exploits the local structure of deep representations. In the proposed method, a similarity measure between deep features is computed by evaluating linear SVM margins. SVMs are trained using nearest neighbors of sample data, and thus do not require any external training data. Clus- ters are then formed by thresholding the similarity scores. We evaluate the clustering performance using three challenging un- constrained face datasets, including Celebrity in Frontal-Profile (CFP), IARPA JANUS Benchmark A (IJB-A), and JANUS Challenge Set 3 (JANUS CS3) datasets. Experimental results demonstrate that the proposed approach can achieve significant improvements over state-of-the-art methods. Moreover, we also show that the proposed clustering algorithm can be applied to curate a set of large-scale and noisy training dataset while maintaining sufficient amount of images and their variations due to nuisance factors. The face verification performance on JANUS CS3 improves significantly by finetuning a DCNN model with the curated MS-Celeb-1M dataset which contains over three million face images."}
{"_id": "96bab00761b857959f2f8baa86c99d8e0155c6eb", "title": "Touch, Taste, & Smell User Interfaces: The Future of Multisensory HCI", "text": "The senses we call upon when interacting with technology are very restricted. We mostly rely on vision and audition, increasingly harnessing touch, whilst taste and smell remain largely underexploited. In spite of our current knowledge about sensory systems and sensory devices, the biggest stumbling block for progress concerns the need for a deeper understanding of people's multisensory experiences in HCI. It is essential to determine what tactile, gustatory, and olfactory experiences we can design for, and how we can meaningfully stimulate such experiences when interacting with technology. Importantly, we need to determine the contribution of the different senses along with their interactions in order to design more effective and engaging digital multisensory experiences. Finally, it is vital to understand what the limitations are that come into play when users need to monitor more than one sense at a time. The aim of this workshop is to deepen and expand the discussion on touch, taste, and smell within the CHI community and promote the relevance of multisensory experience design and research in HCI."}
{"_id": "5bce8a6b14811ab692cd24286249b1348b26ee75", "title": "Minimum Text Corpus Selection for Limited Domain Speech Synthesis", "text": "This paper concerns limited domain TTS system based on the concatenative method, and presents an algorithm capable to extract the minimal domainoriented text corpus from the real data of the given domain, while still reaching the maximum coverage of the domain. The proposed approach ensures that the least amount of texts are extracted, containing the most common phrases and (possibly) all the words from the domain. At the same time, it ensures that appropriate phrase overlapping is kept, allowing to find smooth concatenation in the overlapped regions to reach high quality synthesized speech. In addition, several recommendations allowing a speaker to record the corpus more fluently and comfortably are presented and discussed. The corpus building is tested and evaluated on several domains differing in size and nature, and the authors present the results of the algorithm and demonstrate the advantages of using the domain oriented corpus for speech synthesis."}
{"_id": "24cebabe2dc3ad558dc95e620dfaad71b11445cc", "title": "BreakDancer: An algorithm for high resolution mapping of genomic structural variation", "text": "Detection and characterization of genomic structural variation are important for understanding the landscape of genetic variation in human populations and in complex diseases such as cancer. Recent studies demonstrate the feasibility of detecting structural variation using next-generation, short-insert, paired-end sequencing reads. However, the utility of these reads is not entirely clear, nor are the analysis methods with which accurate detection can be achieved. The algorithm BreakDancer predicts a wide variety of structural variants including insertion-deletions (indels), inversions and translocations. We examined BreakDancer's performance in simulation, in comparison with other methods and in analyses of a sample from an individual with acute myeloid leukemia and of samples from the 1,000 Genomes trio individuals. BreakDancer sensitively and accurately detected indels ranging from 10 base pairs to 1 megabase pair that are difficult to detect via a single conventional approach."}
{"_id": "2fd9f4d331d144f71baf2c66628b12c8c65d3ffb", "title": "ADDITIVE LOGISTIC REGRESSION : A STATISTICAL VIEW OF BOOSTING", "text": "Boosting is one of the most important recent developments in classification methodology. Boosting works by sequentially applying a classification algorithm to reweighted versions of the training data and then taking a weighted majority vote of the sequence of classifiers thus produced. For many classification algorithms, this simple strategy results in dramatic improvements in performance. We show that this seemingly mysterious phenomenon can be understood in terms of well-known statistical principles, namely additive modeling and maximum likelihood. For the two-class problem, boosting can be viewed as an approximation to additive modeling on the logistic scale using maximum Bernoulli likelihood as a criterion. We develop more direct approximations and show that they exhibit nearly identical results to boosting. Direct multiclass generalizations based on multinomial likelihood are derived that exhibit performance comparable to other recently proposed multiclass generalizations of boosting in most situations, and far superior in some. We suggest a minor modification to boosting that can reduce computation, often by factors of 10 to 50. Finally, we apply these insights to produce an alternative formulation of boosting decision trees. This approach, based on best-first truncated tree induction, often leads to better performance, and can provide interpretable descriptions of the aggregate decision rule. It is also much faster computationally, making it more suitable to large-scale data mining applications."}
{"_id": "573ae3286d050281ffe4f6c973b64df171c9d5a5", "title": "Sharing Visual Features for Multiclass and Multiview Object Detection", "text": "We consider the problem of detecting a large number of different classes of objects in cluttered scenes. Traditional approaches require applying a battery of different classifiers to the image, at multiple locations and scales. This can be slow and can require a lot of training data since each classifier requires the computation of many different image features. In particular, for independently trained detectors, the (runtime) computational complexity and the (training-time) sample complexity scale linearly with the number of classes to be detected. We present a multitask learning procedure, based on boosted decision stumps, that reduces the computational and sample complexity by finding common features that can be shared across the classes (and/or views). The detectors for each class are trained jointly, rather than independently. For a given performance level, the total number of features required and, therefore, the runtime cost of the classifier, is observed to scale approximately logarithmically with the number of classes. The features selected by joint training are generic edge-like features, whereas the features chosen by training each class separately tend to be more object-specific. The generic features generalize better and considerably reduce the computational cost of multiclass object detection"}
{"_id": "8262bd0fb41956f136dd2bc04ff4863584a43ce7", "title": "A Hybrid Discriminative/Generative Approach for Modeling Human Activities", "text": "Accurate recognition and tracking of human activities is an important goal of ubiquitous computing. Recent advances in the development of multi-modal wearable sensors enable us to gather rich datasets of human activities. However, the problem of automatically identifying the most useful features for modeling such activities remains largely unsolved. In this paper we present a hybrid approach to recognizing activities, which combines boosting to discriminatively select useful features and learn an ensemble of static classifiers to recognize different activities, with hidden Markov models (HMMs) to capture the temporal regularities and smoothness of activities. We tested the activity recognition system using over 12 hours of wearable-sensor data collected by volunteers in natural unconstrained environments. The models succeeded in identifying a small set of maximally informative features, and were able identify ten different human activities with an accuracy of 95%."}
{"_id": "345e83dd58f26f51d75e2fef330c02c9aa01e61b", "title": "New Hash Functions and Their Use in Authentication and Set Equality", "text": "In this paper we exhibit several new classes of hash functions with certain desirable properties, and introduce two novel applications for hashing which make use of these functions. One class contains a small number of functions, yet is almost universal,. If the functions hash n-bit long names into m-bit indices, then specifying a member of the class requires only O((m + log, log,(n)) . log,(n)) bits as compared to O(n) bits for earlier techniques. For long names, this is about a factor of m larger than the lower bound of m + log, n -log, m bits. An application of this class is a provably secure authentication technique for sending messages over insecure lines. A second class of functions satisfies a much stronger property than universal,. We present the application of testing sets for equality. The authentication technique allows the receiver to be certain that a message is genuine. An \u201cenemy\u201d-even one with infinite computer resources-cannot forge or modify a message without detection. The set equality technique allows operations including \u201cadd member to set,\u201d \u201cdelete member from set\u201d and \u201ctest two sets for equality\u201d to be performed in expected constant time and with less than a specified probability of error."}
{"_id": "9d3cd20b0a20cb93743a92314f2e92f584041ff0", "title": "The effect of active video games on cognitive functioning in clinical and non-clinical populations: A meta-analysis of randomized controlled trials", "text": "Physically-active video games ('exergames') have recently gained popularity for leisure and entertainment purposes. Using exergames to combine physical activity and cognitively-demanding tasks may offer a novel strategy to improve cognitive functioning. Therefore, this systematic review and meta-analysis was performed to establish effects of exergames on overall cognition and specific cognitive domains in clinical and non-clinical populations. We identified 17 eligible RCTs with cognitive outcome data for 926 participants. Random-effects meta-analyses found exergames significantly improved global cognition (g=0.436, 95% CI=0.18-0.69, p=0.001). Significant effects still existed when excluding waitlist-only controlled studies, and when comparing to physical activity interventions. Furthermore, benefits of exergames where observed for both healthy older adults and clinical populations with conditions associated with neurocognitive impairments (all p<0.05). Domain-specific analyses found exergames improved executive functions, attentional processing and visuospatial skills. The findings present the first meta-analytic evidence for effects of exergames on cognition. Future research must establish which patient/treatment factors influence efficacy of exergames, and explore neurobiological mechanisms of action."}
{"_id": "08877666eb361ad712ed565d0b0f2cf7e7a3490d", "title": "Attitudes -1 The Construction of Attitudes", "text": "Attitudes have long been considered a central concept of social psychology. In fact, early writers have defined social psychology as the scientific study of attitudes (e.g., Thomas & Znaniecki, 1918) and in 1954 Gordon Allport noted, \"This concept is probably the most distinctive and indispensable concept in contemporary American social psychology\" (p. 43). As one may expect of any concept that has received decades of attention, the concept of attitudes has changed over the years (see Allport, 1954, for an early review). The initial definitions were broad and encompassed cognitive, affective, motivational, and behavioral components. For example, Allport (1935) defined an attitude as \"a mental and neural state of readiness, organized through experience, exerting a directive and dynamic influence upon the individual's response to all objects and situations with which it is related\" (p. 810). A decade later, Krech and Crutchfield (1948) wrote, \"An attitude can be defined as an enduring organization of motivational, emotional, perceptual, and cognitive processes with respect to some aspect of the individual's world\" (p. 152). These definitions emphasized the enduring nature of attitudes and their close relationship to individuals' defined attitudes simply in terms of the probability that a person will show a specified behavior in a specified situation. In subsequent decades, the attitude concept lost much of its breadth and was largely reduced to its evaluative component. In the succinct words of Daryl Bem, \"Attitudes are likes and dislikes\" attitudes as \"a psychological tendency that is expressed by evaluating a particular entity with some degree of favor or disfavor\" (p. 1). Along the way, many functions that were initially ascribed to attitudes have been reassigned to other cognitive structures and the accumulating body of empirical findings drew many of the classic assumptions into question. A growing body of literature suggests that attitudes may be much less enduring and stable than has traditionally been assumed. As we review below, self-reports of attitudes are highly context-dependent and can be profoundly influenced by minor changes in question wording, question format or question order. Attitudes-3 their assessment is subject to contextual influences. For other researchers, the same findings indicate that all we assess in attitude measurement are evaluative judgments that respondents construct at the time they are asked, based on whatever information happens to be accessible (e.g., Schwarz & Strack, 1991). From this perspective, the traditional attitude concept may not be particularly useful and we may learn more \u2026"}
{"_id": "5db22877630e0643f9727b0a05dd69cb1eb168d6", "title": "Towards Operational Measures of Computer Security", "text": "Ideally, a measure of the security of a system should capture quantitatively the intuitive notion of \u2018the ability of the system to resist attack\u2019. That is, it should be operational, reflecting the degree to which the system can be expected to remain free of security breaches under particular conditions of operation (including attack). Instead, current security levels at best merely reflect the extensiveness of safeguards introduced during the design and development of a system. Whilst we might expect a system developed to a higher level than another to exhibit \u2018more secure behaviour\u2019 in operation, this cannot be guaranteed; more particularly, we cannot infer what the actual security behaviour will be from knowledge of such a level. In the paper we discuss similarities between reliability and security with the intention of working towards measures of \u2018operational security\u2019 similar to those that we have for reliability of systems. Very informally, these measures could involve expressions such as the rate of occurrence of security breaches (cf rate of occurrence of failures in reliability), or the probability that a specified \u2018mission\u2019 can be accomplished without a security breach (cf reliability function). This new approach is based on the analogy between system failure and security breach. A number of other analogies to support this view are introduced. We examine this duality critically, and have identified a number of important open questions that need to be answered before this quantitative approach can be taken further. The work described here is therefore somewhat tentative, and one of our major intentions is to invite discussion about the plausibility and feasibility of this new approach. Towards Operational Measures of Computer Security 2 _____________________________________________________________________"}
{"_id": "ca9a678185f02807ae1b7d9a0bf89247a130d949", "title": "Estimation of ECG features using LabVIEW", "text": "320 Abstract\u2014Various methods have been proposed and used for ECG feature extraction with considerable percentage of correct detection. Nevertheless, the problem remains open especially with respect to higher detection accuracy in noisy ECGs. In this work we have developed an algorithm based on wavelet transform approach using LabVIEW 8.6. LabVIEW (Laboratory Virtual Instrument Engineering Workbench) is a graphical programming language that uses icons instead of lines of text to create programs. Unlike text based programming language, LabVIEW uses the data flow programming, where the flow of data determines execution. The flexibility, modular nature and ease to use programming possible with labview, makes it less complex. The proposed algorithm is executed in two stages. Firstly, it preprocesses (denoises) the signal to remove the noise from the ECG signal. Then it detects P-wave, T-wave, QRS complex and their onsets and offsets. LabVIEW and the related toolkits, advanced signal processing toolkit and math script are used to build the graphical programme for both the stages. The algorithm is evaluated with ECG data files, with different ECG morphologies and noise content taken from CSE multilead data base which contains 5000 samples per ECG, recorded at a sampling frequency of 500Hz."}
{"_id": "1e75a3bc8bdd942b683cf0b27d1e1ed97fa3b4c3", "title": "Computationally Efficient Bayesian Learning of Gaussian Process State Space Models", "text": "Gaussian processes allow for flexible specification of prior assumptions of unknown dynamics in state space models. We present a procedure for efficient Bayesian learning in Gaussian process state space models, where the representation is formed by projecting the problem onto a set of approximate eigenfunctions derived from the prior covariance structure. Learning under this family of models can be conducted using a carefully crafted particle MCMC algorithm. This scheme is computationally efficient and yet allows for a fully Bayesian treatment of the problem. Compared to conventional system identification tools or existing learning methods, we show competitive performance and reliable quantification of uncertainties in the model."}
{"_id": "2584d7faf2183e4a11f0c8173bc3541a4eda00ee", "title": "Source Code Analysis: A Road Map", "text": "The automated and semi-automated analysis of source code has remained a topic of intense research for more than thirty years. During this period, algorithms and techniques for source-code analysis have changed, sometimes dramatically. The abilities of the tools that implement them have also expanded to meet new and diverse challenges. This paper surveys current work on source-code analysis. It also provides a road map for future work over the next five-year period and speculates on the development of source-code analysis applications, techniques, and challenges over the next 10, 20, and 50 years."}
{"_id": "b8d9e2bb5b517f5b307045efd0cc3a9bf4967419", "title": "Efficient 3D Object Segmentation from Densely Sampled Light Fields with Applications to 3D Reconstruction", "text": "Precise object segmentation in image data is a fundamental problem with various applications, including 3D object reconstruction. We present an efficient algorithm to automatically segment a static foreground object from highly cluttered background in light fields. A key insight and contribution of our article is that a significant increase of the available input data can enable the design of novel, highly efficient approaches. In particular, the central idea of our method is to exploit high spatio-angular sampling on the order of thousands of input frames, for example, captured as a hand-held video, such that new structures are revealed due to the increased coherence in the data. We first show how purely local gradient information contained in slices of such a dense light field can be combined with information about the camera trajectory to make efficient estimates of the foreground and background. These estimates are then propagated to textureless regions using edge-aware filtering in the epipolar volume. Finally, we enforce global consistency in a gathering step to derive a precise object segmentation in both 2D and 3D space, which captures fine geometric details even in very cluttered scenes. The design of each of these steps is motivated by efficiency and scalability, allowing us to handle large, real-world video datasets on a standard desktop computer. We demonstrate how the results of our method can be used for considerably improving the speed and quality of image-based 3D reconstruction algorithms, and we compare our results to state-of-the-art segmentation and multiview stereo methods."}
{"_id": "aaeaa58bedf6fa9b42878bf5914f55f48cf26209", "title": "Odin: Team VictorTango's entry in the DARPA Urban Challenge", "text": "The DARPA Urban Challenge required robotic vehicles to travel more than 90 km through an urban environment without human intervention and included situations such as stop intersections, traffic merges, parking, and roadblocks. Team VictorTango separated the problem into three parts: base vehicle, perception, and planning. A Ford Escape outfitted with a custom drive-by-wire system and computers formed the basis for Odin. Perception used laser scanners, global positioning system, and a priori knowledge to identify obstacles, cars, and roads. Planning relied on a hybrid deliberative/reactive architecture to"}
{"_id": "9af0342e9f6ea8276a4679596cf0e5fbbd7f9996", "title": "The Skiplist-Based LSM Tree", "text": "Log-Structured Merge (LSM) Trees provide a tiered data storage and retrieval paradigm that is a\u008aractive for write-optimized data systems. Maintaining an e\u0081cient bu\u0082er in memory and deferring updates past their initial write-time, the structure provides quick operations over hot data. Because each layer of the structure is logically separate from the others, the structure is also conducive to opportunistic and granular optimization. In this paper, we introduce the SkiplistBased LSM Tree (sLSM), a novel system in which the memory bu\u0082er of the LSM is composed of a sequence of skiplists. We develop theoretical and experimental results that demonstrate that the breadth of tuning parameters inherent to the sLSM allows it broad \u0083exibility for excellent performance across a wide variety of workloads."}
{"_id": "a3f16a77962087f522bfe7efe7d2dfff72d85b34", "title": "Computer vision-based registration techniques for augmented reality", "text": "Augmented reality is a term used to describe systems in which computer-generated information is superimposed on top of the real world; for example, through the use of a see-through head-mounted display. A human user of such a system could still see and interact with the real world, but have valuable additional information, such as descriptions of important features or instructions for performing physical tasks, superimposed on the world. For example, the computer could identify objects and overlay them with graphic outlines, labels, and schematics. The graphics are registered to the real-world objects and appear to be \u201cpainted\u201d onto those objects. Augmented reality systems can be used to make productivity aids for tasks such as inspection, manufacturing, and navigation. One of the most critical requirements for augmented reality is to recognize and locate real-world objects with respect to the person\u2019s head. Accurate registration is necessary in order to overlay graphics accurately on top of the real-world objects. At the Colorado School of Mines, we have developed a prototype augmented reality system that uses head-mounted cameras and computer vision techniques to accurately register the head to the scene. The current system locates and tracks a set of preplaced passive fiducial targets placed on the real-world objects. The system computes the pose of the objects and displays graphics overlays using a see-through head-mounted display. This paper describes the architecture of the system and outlines the computer vision techniques used."}
{"_id": "e9078ffe889c7783af72667c4d002ec9f2edeea1", "title": "Lecture Notes on Network Information Theory", "text": "These lecture notes have been converted to a book titled Network Information Theory published recently by Cambridge University Press. This book provides a significantly expanded exposition of the material in the lecture notes as well as problems and bibliographic notes at the end of each chapter. The authors are currently preparing a set of slides based on the book that will be posted in the second half of 2012. More information about the book can be found at http://www.cambridge.org/9781107008731/. The previous (and obsolete) version of the lecture notes can be found at http://arxiv.org/abs/1001.3404v4/."}
{"_id": "839fcb04fa560ccea8601089046b0d58b3bb3262", "title": "Economic modelling using computational intelligence techniques", "text": "Attempting to successfully and accurately predict the financial market has long attracted the interests and attention of economists, bankers, mathematicians and scientists alike. The financial markets form the bedrock of any economy. There are a large number of factors and parameters that influence the direction, volume, price and flow of traded stocks. This coupled with the markets\u2019 vulnerability to external and non-finance related factors and the resulting intrinsic volatility makes the development of a robust and accurate financial market prediction model an interesting research and engineering problem. In an attempt to solve this engineering problem, the authors of this paper present a rough set theory based predictive model for the financial markets. Rough set theory has, as its base, imperfect data analysis and approximation. The theory is used to extract a set of reducts and a set of trading rules based on trading data of the Johannesburg Stock Exchange (JSE) for the period 1 April 2006 to 1 April 2011. To increase the efficiency of the model four of dicretization algorithms were used on the data set, namely (Equal Frequency Binning (EFB), Boolean Reasoning, Entropy and the Na\u00efve Algorithm. The EFB algorithm gives the least number of rules and highest accuracy. Next, the reducts are extracted using the Genetic Algorithm and finally the set of dependency rules are generated from the set of reducts. A rough set confusion matrix is used to assess the accuracy of the model. The model gave a prediction accuracy of 80.4% using the Standard Voting classifier. Keywords-rough set theory; financial market modelling, neural networks, discretization, classification. INTRODUCTION Around the world, trading in the stock market has gained enormous popularity as a means through which one can reap huge profits. Attempting to successfully and accurately predict the financial market has long attracted the interests and attention of economists, bankers, mathematicians and scientists alike. Thus, stock price movement prediction has long been a cherished desire of investors, speculators and industries [1]. For a long time statistical techniques such as Bayesian models, regression and some econometric techniques have dominated research activities in prediction [2]. The primary approach to financial forecasting has been the identification of a stock price trend and continuation of the investment strategy until evidence suggests that the trend has reversed [3]. One of the biggest problems with the use of regression methods is that they fail to give satisfactory forecasting results for some series\u2019 because of their linear structure and other inherent limitations [8, 9]. However the emergence of computational intelligence techniques as a viable alternative to the \u201ctraditional\u201d statistical models that have dominated this area since the 1930\u2019s [2,3] has given impetus to the increasing usage of these techniques in fields such as economics and finance [3]. Apart from these, there have been many other successful applications of intelligent systems to decision support and complex automation tasks [4 6]. Since the year of its inception in 1982, rough set theory has been extensively used as an effective data mining and knowledge discovery technique in numerous applications in the finance, investment and banking fields [4, 7, 11, 12, 20]. 
Data mining is a discipline in computational intelligence that deals with knowledge discovery, data analysis, and full and semi-autonomous decision making [7]. It entails the analysis of data sets such that unsuspected relationships among data objects are found. Predictive modeling is the practice of deriving future inferences based on these relationships. The financial markets form the bedrock of any economy. There are a large number of factors and parameters that influence the direction, volume, price and flow of traded stocks. This coupled with the markets\u2019 vulnerability to external and nonfinance related factors and the resulting intrinsic volatility makes the development of a robust and accurate financial market prediction model an interesting research and engineering problem. This paper presents a generic stock price prediction model based on rough set theory. The model is derived from data on the daily movements of the Johannesburg Stock Exchange\u2019s All Share Index. The data used was collected over a five year period from 1 April 2006 to 1 April 2011. The methodology used in this paper is as follows: data preprocessing, data splitting, data discretization, redundant attribute elimination, reduct generation, rule generation and prediction. The rest of this paper is organized as follows: Section II covers the theoretical foundations of rough set theory, Section III gives the design of the proposed prediction model while an analysis of the results and conclusions are presented in Sections IV and V respectively. THEORETICAL FOUNDATIONS OF ROUGH SETS Rough set theory (RST) was introduced by Pawlak in 1982. The theory can be regarded as a mathematical tool used for imperfect data analysis [10]. Thus RST has proved useful in applications spanning the engineering, financial and decision support domains to mention but a few. It is based on the assumption that \u201cwith every object in the universe of discourse, some information (data or knowledge) is associated\u201d [10]. In practical applications, the \u201cuniverse of discourse\u201d described in [10] is usually a table called the decision table in which the rows are objects or data elements and the columns are attributes and the entries are called the attribute values [3]. The objects or data elements described by the same attributes are said to be indiscernible (indistinguishable) by the attribute set. Any set of indiscernible data elements forms a granule or atom of knowledge about the entire \u201cuniverse of discourse\u201d (information system framework) [10]. A union of these elementary sets (granules) is said to be a precise or crisp set, otherwise the set is said to be rough [1,2,7,10]. Every rough set will have boundary cases, i.e., data objects which cannot certainly be classified as belonging to the set or its complement when using the available information [10]. Associated with every rough set is a pair of sets called the lower and upper approximation of the rough set. The lower approximation consists of those objects which one can definitively say belong to the target set. The upper approximation consists of those objects which possibly belong to the target set. The difference between the two sets is the boundary region. The decision rule derived specifies an outcome based on certain conditions. Where the derived rule uniquely identifies outcomes based on some conditions the rule is said to be certain; otherwise it is uncertain. 
Every decision rule has a pair of probabilities associated with it, the certainty and coverage coefficients [3]. These conditional probabilities also satisfy Bayes\u2019 theorem [7,10]. The certainty coefficient is the conditional probability that an object belongs to the decision class outlined by the rule, given that it satisfies the conditions of the rule. The coverage coefficient, on the other hand, expresses the conditional probability of reasons given some decision [10]. Clearly RST can be seen to overlap with many other theories in the realm of imperfect knowledge analysis such as evidence theory, Bayesian inference, fuzzy sets etc. [1, 3, 4, 10, 11, 12]. To define rough sets mathematically, we begin by defining an information system S = (U,A), where U and A are finite and non-empty sets that represent the data objects and attributes respectively. Every attribute a has a set of possible values Va, called the domain of a. A subset B of A determines a binary relation I(B) on U, called the indiscernibility relation, defined as follows: (x,y) \u2208 I(B) if and only if a(x) = a(y) for every a in B, where a(x) denotes the value of attribute a for data object x [10]. I(B) is an equivalence relation. The set of all equivalence classes of I(B) is denoted U/I(B). An equivalence class of I(B) containing x is denoted as B(x). If (x,y) belong to I(B) they are said to be indiscernible with respect to B. All equivalence classes of the indiscernibility relation I(B) are referred to as B-granules or B-elementary sets [10]. In the information system defined above, we define, as in [10], two operators that assign to every X \u2286 U two sets, called the B-lower and B-upper approximation of X: B_*(X) = {x \u2208 U : B(x) \u2286 X} (1) and B^*(X) = {x \u2208 U : B(x) \u2229 X \u2260 \u2205} (2). Equivalently, the two sets can be written as unions of elementary sets [10]: B_*(X) = \u22c3 {B(x) : B(x) \u2286 X} (3) and B^*(X) = \u22c3 {B(x) : B(x) \u2229 X \u2260 \u2205} (4). Thus, the lower approximation is the union of all B-elementary sets that are included in the target set, whilst the upper approximation is the union of all B-elementary sets that have a non-empty intersection with the target set. The difference between the two sets is called the boundary region of X."}
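The definitions above translate almost directly into code. The sketch below computes B-elementary sets and the lower/upper approximations plus the boundary region for a toy decision table; the table itself is invented for illustration.

```python
from collections import defaultdict

def elementary_sets(universe, B):
    """Partition U into B-elementary sets: x ~ y iff a(x) == a(y) for all a in B."""
    granules = defaultdict(set)
    for x, attrs in universe.items():
        granules[tuple(attrs[a] for a in B)].add(x)
    return list(granules.values())

def approximations(universe, B, X):
    """B-lower: union of granules inside X; B-upper: granules meeting X."""
    lower, upper = set(), set()
    for g in elementary_sets(universe, B):
        if g <= X:
            lower |= g
        if g & X:
            upper |= g
    return lower, upper, upper - lower   # third set = boundary region

# Toy decision table (objects -> attribute values), invented for illustration:
U = {1: {"trend": "up", "vol": "hi"}, 2: {"trend": "up", "vol": "hi"},
     3: {"trend": "dn", "vol": "lo"}, 4: {"trend": "up", "vol": "lo"}}
X = {1, 4}                               # target set, e.g. "price rose next day"
print(approximations(U, ["trend", "vol"], X))
# ({4}, {1, 2, 4}, {1, 2}): objects 1 and 2 are indiscernible, so they form
# the boundary region and X is a rough set with respect to these attributes.
```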
{"_id": "f8785ba3a56b5ce90fb264e82dacaca1ac641091", "title": "Comparative study of Hough Transform methods for circle finding", "text": "A variety of circle detection methods which are based on variations of the Hough Transform are investigated. The ,five methods considered are the standard Hough Trans,form, the Fast Hough Transform of Li et al.\u2018, a two stage Hough method, and two space saving approaches based on the method devised by Gerig and Klein2. The performance of each of the methods has been compared on synthetic imagery and real images from a metallurgical application. Figures and comments are presented concerning the accuracy, reliability, computational efficiency and storage requirements of each of the methods."}
{"_id": "295ad06bbaca2dc4262dd834d051e12324ef69c4", "title": "User Identity Linkage by Latent User Space Modelling", "text": "User identity linkage across social platforms is an important problem of great research challenge and practical value. In real applications, the task often assumes an extra degree of difficulty by requiring linkage across multiple platforms. While pair-wise user linkage between two platforms, which has been the focus of most existing solutions, provides reasonably convincing linkage, the result depends by nature on the order of platform pairs in execution with no theoretical guarantee on its stability. In this paper, we explore a new concept of ``Latent User Space'' to more naturally model the relationship between the underlying real users and their observed projections onto the varied social platforms, such that the more similar the real users, the closer their profiles in the latent user space. We propose two effective algorithms, a batch model(ULink) and an online model(ULink-On), based on latent user space modelling. Two simple yet effective optimization methods are used for optimizing objective function: the first one based on the constrained concave-convex procedure(CCCP) and the second on accelerated proximal gradient. To our best knowledge, this is the first work to propose a unified framework to address the following two important aspects of the multi-platform user identity linkage problem --- (I) the platform multiplicity and (II) online data generation. We present experimental evaluations on real-world data sets for not only traditional pairwise-platform linkage but also multi-platform linkage. The results demonstrate the superiority of our proposed method over the state-of-the-art ones."}
{"_id": "305f03effa9bb384c386d479babcab09305be081", "title": "Electricity Smart Meters Interfacing the Households", "text": "The recent worldwide measures for energy savings call for a larger awareness of the household energy consumption, given the relevant contribution of domestic load to the national energy balance. On the other hand, electricity smart meters together with gas, heat, and water meters can be interconnected in a large network offering a potential value to implement energy savings and other energy-related services, as long as an efficient interface with the final user is implemented. Unfortunately, so far, the interface of such devices is mostly designed and addressed at the utilities supervising the system, giving them relevant advantages, while the communication with the household is often underestimated. This paper addresses this topic by proposing the definition of a local interface for smart meters, by looking at the actual European Union and international regulations, at the technological solutions available on the market, and at those implemented in different countries, and, finally, by proposing specific architectures for a proper consumer-oriented implementation of a smart meter network."}
{"_id": "1db11bd3e2d0794cbb0fab25508b494e0f0a46ea", "title": "Multi-target tracking by online learning of non-linear motion patterns and robust appearance models", "text": "We describe an online approach to learn non-linear motion patterns and robust appearance models for multi-target tracking in a tracklet association framework. Unlike most previous approaches that use linear motion methods only, we online build a non-linear motion map to better explain direction changes and produce more robust motion affinities between tracklets. Moreover, based on the incremental learned entry/exit map, a multiple instance learning method is devised to produce strong appearance models for tracking; positive sample pairs are collected from different track-lets so that training samples have high diversity. Finally, using online learned moving groups, a tracklet completion process is introduced to deal with tracklets not reaching entry/exit points. We evaluate our approach on three public data sets, and show significant improvements compared with state-of-art methods."}
{"_id": "fd4ceb60d5f14a6cf450b5ff7be9015da1d12091", "title": "Is a Positive Review Always Effective? Advertising Appeal Effect in the Persuasion of Online Customer Reviews", "text": "Despite the expected value of online customer reviews as an emerging advertising medium, the manner of enforcing its effectiveness on customers\u2019 purchase behavior is still a question not well answered. To address the question, we propose a new central route cue called \u201cadvertising appeal\u201d to examine attitude and intention to purchase a product or service based on the reasoning of the elaboration likelihood model. We also propose that consumers\u2019 consumption goal and attitude certainty separately moderate the advertising appeal effect on purchase intention through a degree of favorable attitude. We test these hypotheses by conducting a laboratory experiment with 50 participants. In this experiment, we manipulate two kinds of advertising appeals and consumption goals. This study demonstrates that attribute-based appeal reviews are more influential than emotion-based appeal reviews in the persuasion process, regardless of the individuals\u2019 consumption goals. However, the advertising appeal effect on purchase intention is more pronounced for hedonic consumers than for utilitarian consumers."}
{"_id": "b5f27d534168dde3667efcca5fedd2b61b40fce7", "title": "Model of Customer Churn Prediction on Support Vector Machine", "text": "To improve the prediction abilities of machine learning methods, a support vector machine (SVM) on structural risk minimization was applied to customer churn prediction. Researching customer churn prediction cases both in home and foreign carries, the method was compared with artifical neural network, decision tree, logistic regression, and naive bayesian classifier. It is found that the method enjoys the best accuracy rate, hit rate, covering rate, and lift coefficient, and therefore, provides an effective measurement for customer churn prediction."}
{"_id": "bf0cf3d565c2e97f8832c87a608e74eed0965e91", "title": "Enhancing Windows Firewall Security Using Fuzzy Reasoning", "text": "Firewall is a standard security utility within the Microsoft Windows operating system. Most Windows users adopt it as the default security option due to its free availability. Moreover, Windows Firewall is a widely used security tool because of the large market share of the Microsoft Windows operating system. It can be customised for filtering of network traffic based on user-defined inbound and outbound rules. It is supplied with only basic functionality. As a result it cannot be considered as an effective tool for monitoring and analysing of inbound and outbound traffic. Nonetheless, as a freely available and conventional end user security tool, with some enhancement it could perform as a useful security tool for millions of Windows users. Therefore, this paper presents an enhanced Windows Firewall for a more effective monitoring and analysis of network traffic, based upon an intuitive fuzzy reasoning approach. Consequently, it can be used to prevent a greater range of attacks beyond the simple filtering of inbound and outbound network traffic. In this paper, a simulation of ICMP flooding is demonstrated, where the created firewall inbound and outbound rules are insufficient to monitor ICMP flooding. However, the addition of fuzzy reasoning system monitored it successfully and enhanced the standard Windows Firewall functionality to prevent ICMP flooding. The use of this Windows Fuzzy-Firewall can also be extended to prevent TCP flooding, UDP flooding and some other types of denial of service attacks."}
{"_id": "142a799aac35f3b47df9fbfdc7547ddbebba0a91", "title": "Deep Model-Based 6D Pose Refinement in RGB", "text": "We present a novel approach for model-based 6D pose refinement in color data. Building on the established idea of contour-based pose tracking, we teach a deep neural network to predict a translational and rotational update. At the core, we propose a new visual loss that drives the pose update by aligning object contours, thus avoiding the definition of any explicit appearance model. In contrast to previous work our method is correspondence-free, segmentation-free, can handle occlusion and is agnostic to geometrical symmetry as well as visual ambiguities. Additionally, we observe a strong robustness towards rough initialization. The approach can run in real-time and produces pose accuracies that come close to 3D ICP without the need for depth data. Furthermore, our networks are trained from purely synthetic data and will be published together with the refinement code at http://campar.in.tum. de/Main/FabianManhardt to ensure reproducibility."}
{"_id": "2b4916a48ad3e1f5559700829297a244c54f4e9b", "title": "The Use of Summation to Aggregate Software Metrics Hinders the Performance of Defect Prediction Models", "text": "Defect prediction models help software organizations to anticipate where defects will appear in the future. When training a defect prediction model, historical defect data is often mined from a Version Control System (VCS, e.g., Subversion), which records software changes at the file-level. Software metrics, on the other hand, are often calculated at the class- or method-level (e.g., McCabe's Cyclomatic Complexity). To address the disagreement in granularity, the class- and method-level software metrics are aggregated to file-level, often using summation (i.e., McCabe of a file is the sum of the McCabe of all methods within the file). A recent study shows that summation significantly inflates the correlation between lines of code (Sloc) and cyclomatic complexity (Cc) in Java projects. While there are many other aggregation schemes (e.g., central tendency, dispersion), they have remained unexplored in the scope of defect prediction. In this study, we set out to investigate how different aggregation schemes impact defect prediction models. Through an analysis of 11 aggregation schemes using data collected from 255 open source projects, we find that: (1) aggregation schemes can significantly alter correlations among metrics, as well as the correlations between metrics and the defect count; (2) when constructing models to predict defect proneness, applying only the summation scheme (i.e., the most commonly used aggregation scheme in the literature) only achieves the best performance (the best among the 12 studied configurations) in 11 percent of the studied projects, while applying all of the studied aggregation schemes achieves the best performance in 40 percent of the studied projects; (3) when constructing models to predict defect rank or count, either applying only the summation or applying all of the studied aggregation schemes achieves similar performance, with both achieving the closest to the best performance more often than the other studied aggregation schemes; and (4) when constructing models for effort-aware defect prediction, the mean or median aggregation schemes yield performance values that are significantly closer to the best performance than any of the other studied aggregation schemes. Broadly speaking, the performance of defect prediction models are often underestimated due to our community's tendency to only use the summation aggregation scheme. Given the potential benefit of applying additional aggregation schemes, we advise that future defect prediction models should explore a variety of aggregation schemes."}
{"_id": "45a44500a82918aae981fe7072b8a62e3b1f9a52", "title": "System-level fault-tolerance in large-scale parallel machines with buffered coscheduling", "text": "Summary form only given. As the number of processors for multiteraflop systems grows to tens of thousands, with proposed petaflops systems likely to contain hundreds of thousands of processors, the assumption of fully reliable hardware has been abandoned. Although the mean time between failures for the individual components can be very high, the large total component count will inevitably lead to frequent failures. It is therefore of paramount importance to develop new software solutions to deal with the unavoidable reality of hardware faults. We will first describe the nature of the failures of current large-scale machines, and extrapolate these results to future machines. Based on this preliminary analysis we will present a new technology that we are currently developing, buffered coscheduling, which seeks to implement fault tolerance at the operating system level. Major design goals include dynamic reallocation of resources to allow continuing execution in the presence of hardware failures, very high scalability, high efficiency (low overhead), and transparency - requiring no changes to user applications. Preliminary results show that this is attainable with current hardware."}
{"_id": "794f63b0ddbd78156913272a7f4275366bbd24e4", "title": "A Multi-View Deep Learning Approach for Cross Domain User Modeling in Recommendation Systems", "text": "Recent online services rely heavily on automatic personalization to recommend relevant content to a large number of users. This requires systems to scale promptly to accommodate the stream of new users visiting the online services for the first time. In this work, we propose a content-based recommendation system to address both the recommendation quality and the system scalability. We propose to use a rich feature set to represent users, according to their web browsing history and search queries. We use a Deep Learning approach to map users and items to a latent space where the similarity between users and their preferred items is maximized. We extend the model to jointly learn from features of items from different domains and user features by introducing a multi-view Deep Learning model. We show how to make this rich-feature based user representation scalable by reducing the dimension of the inputs and the amount of training data. The rich user feature representation allows the model to learn relevant user behavior patterns and give useful recommendations for users who do not have any interaction with the service, given that they have adequate search and browsing history. The combination of different domains into a single model for learning helps improve the recommendation quality across all the domains, as well as having a more compact and a semantically richer user latent feature vector. We experiment with our approach on three real-world recommendation systems acquired from different sources of Microsoft products: Windows Apps recommendation, News recommendation, and Movie/TV recommendation. Results indicate that our approach is significantly better than the state-of-the-art algorithms (up to 49% enhancement on existing users and 115% enhancement on new users). In addition, experiments on a publicly open data set also indicate the superiority of our method in comparison with transitional generative topic models, for modeling cross-domain recommender systems. Scalability analysis show that our multi-view DNN model can easily scale to encompass millions of users and billions of item entries. Experimental results also confirm that combining features from all domains produces much better performance than building separate models for each domain."}
{"_id": "890716fb7a205ed9f431b504633e26867f70ac9a", "title": "Millimeter Wave MIMO With Lens Antenna Array: A New Path Division Multiplexing Paradigm", "text": "Millimeter wave (mmWave) communication is a promising technology for future wireless systems, while one practical challenge is to achieve its large-antenna gains with only limited radio frequency (RF) chains for cost-effective implementation. To this end, we study in this paper a new lens antenna array enabled mmWave multiple-input multiple-output (MIMO) communication system. We first show that the array response of lens antenna arrays follows a \u201csinc\u201d function, where the antenna element with the peak response is determined by the angle of arrival (AoA)/departure (AoD) of the received/transmitted signal. By exploiting this unique property along with the multi-path sparsity of mmWave channels, we propose a novel low-cost and capacity-achieving spatial multiplexing scheme for both narrow-band and wide-band mmWave communications, termed path division multiplexing (PDM), where parallel data streams are transmitted over different propagation paths with simple per-path processing. We further propose a simple path grouping technique with group-based small-scale MIMO processing to effectively mitigate the inter-stream interference due to similar AoAs/AoDs. Numerical results are provided to compare the performance of the proposed mmWave lens MIMO against the conventional MIMO with uniform planar arrays (UPAs) and hybrid analog/digital processing. It is shown that the proposed design achieves significant throughput gains as well as complexity and cost reductions, thus leading to a promising new paradigm for mmWave MIMO communications."}
{"_id": "47cc4c948d08a22639770ad3bbf1b41e18c6edfe", "title": "The Mutational Landscape of Lethal Castrate Resistant Prostate Cancer", "text": "Characterization of the prostate cancer transcriptome and genome has identified chromosomal rearrangements and copy number gains and losses, including ETS gene family fusions, PTEN loss and androgen receptor (AR) amplification, which drive prostate cancer development and progression to lethal, metastatic castration-resistant prostate cancer (CRPC). However, less is known about the role of mutations. Here we sequenced the exomes of 50 lethal, heavily pre-treated metastatic CRPCs obtained at rapid autopsy (including three different foci from the same patient) and 11 treatment-naive, high-grade localized prostate cancers. We identified low overall mutation rates even in heavily treated CRPCs (2.00 per megabase) and confirmed the monoclonal origin of lethal CRPC. Integrating exome copy number analysis identified disruptions of CHD1 that define a subtype of ETS gene family fusion-negative prostate cancer. Similarly, we demonstrate that ETS2, which is deleted in approximately one-third of CRPCs (commonly through TMPRSS2:ERG fusions), is also deregulated through mutation. Furthermore, we identified recurrent mutations in multiple chromatin- and histone-modifying genes, including MLL2 (mutated in 8.6% of prostate cancers), and demonstrate interaction of the MLL complex with the AR, which is required for AR-mediated signalling. We also identified novel recurrent mutations in the AR collaborating factor FOXA1, which is mutated in 5 of 147 (3.4%) prostate cancers (both untreated localized prostate cancer and CRPC), and showed that mutated FOXA1 represses androgen signalling and increases tumour growth. Proteins that physically interact with the AR, such as the ERG gene fusion product, FOXA1, MLL2, UTX (also known as KDM6A) and ASXL1 were found to be mutated in CRPC. In summary, we describe the mutational landscape of a heavily treated metastatic cancer, identify novel mechanisms of AR signalling deregulated in prostate cancer, and prioritize candidates for future study."}
{"_id": "6d1caaff7af3c9281e423d198697372de2976154", "title": "Circulating mutant DNA to assess tumor dynamics", "text": "The measurement of circulating nucleic acids has transformed the management of chronic viral infections such as HIV. The development of analogous markers for individuals with cancer could similarly enhance the management of their disease. DNA containing somatic mutations is highly tumor specific and thus, in theory, can provide optimum markers. However, the number of circulating mutant gene fragments is small compared to the number of normal circulating DNA fragments, making it difficult to detect and quantify them with the sensitivity required for meaningful clinical use. In this study, we applied a highly sensitive approach to quantify circulating tumor DNA (ctDNA) in 162 plasma samples from 18 subjects undergoing multimodality therapy for colorectal cancer. We found that ctDNA measurements could be used to reliably monitor tumor dynamics in subjects with cancer who were undergoing surgery or chemotherapy. We suggest that this personalized genetic approach could be generally applied to individuals with other types of cancer (pages 914\u2013915)."}
{"_id": "fd06836db095e897ca83c43820d1d44fb481201a", "title": "ETS gene fusions in prostate cancer: from discovery to daily clinical practice.", "text": "CONTEXT\nIn 2005, fusions between the androgen-regulated transmembrane protease serine 2 gene, TMPRSS2, and E twenty-six (ETS) transcription factors were discovered in prostate cancer.\n\n\nOBJECTIVE\nTo review advances in our understanding of ETS gene fusions, focusing on challenges affecting translation to clinical application.\n\n\nEVIDENCE ACQUISITION\nThe PubMed database was searched for reports on ETS fusions in prostate cancer.\n\n\nEVIDENCE SYNTHESIS\nSince the discovery of ETS fusions, novel 5' and 3' fusion partners and multiple splice isoforms have been reported. The most common fusion, TMPRSS2:ERG, is present in approximately 50% of prostate-specific antigen (PSA)-screened localized prostate cancers and in 15-35% of population-based cohorts. ETS fusions can be detected noninvasively in the urine of men with prostate cancer, with a specificity rate in PSA-screened cohorts of >90%. Reports from untreated population-based cohorts suggest an association between ETS fusions and cancer-specific death and metastatic spread. In retrospective prostatectomy cohorts, conflicting results have been published regarding associations between ETS fusions and cancer aggressiveness. In addition to serving as a potential biomarker, tissue and functional studies suggest a specific role for ETS fusions in the transition to carcinoma. Finally, recent results suggest that the 5' and 3' ends of ETS fusions as well as downstream targets may be targeted therapeutically.\n\n\nCONCLUSIONS\nRecent studies suggest that the first clinical applications of ETS fusions are likely to be in noninvasive detection of prostate cancer and in aiding with difficult diagnostic cases. Additional studies are needed to clarify the association between gene fusions and cancer aggressiveness, particularly those studies that take into account the multifocal and heterogeneous nature of localized prostate cancer. Multiple promising strategies have been identified to potentially target ETS fusions. Together, these results suggest that ETS fusions will affect multiple aspects of prostate cancer diagnosis and management."}
{"_id": "69d6775fe81a04711e68a942fd45e0a72ed34110", "title": "Advanced sequencing technologies: methods and goals", "text": "Nearly three decades have passed since the invention of electrophoretic methods for DNA sequencing. The exponential growth in the cost-effectiveness of sequencing has been driven by automation and by numerous creative refinements of Sanger sequencing, rather than through the invention of entirely new methods. Various novel sequencing technologies are being developed, each aspiring to reduce costs to the point at which the genomes of individual humans could be sequenced as part of routine health care. Here, we review these technologies, and discuss the potential impact of such a 'personal genome project' on both the research community and on society."}
{"_id": "5cf321385fcd87dfb0770b48180b38462038cbbf", "title": "Word sense disambiguation as a traveling salesman problem", "text": "Word sense disambiguation (WSD) is a difficult problem in Computational Linguistics, mostly because of the use of a fixed sense inventory and the deep level of granularity. This paper formulates WSD as a variant of the traveling salesman problem (TSP) to maximize the overall semantic relatedness of the context to be disambiguated. Ant colony optimization, a robust nature-inspired algorithm, was used in a reinforcement learning manner to solve the formulated TSP. We propose a novel measure based on the Lesk algorithm and Vector Space Model to calculate semantic relatedness. Our approach to WSD is comparable to state-of-the-art knowledge-based and unsupervised methods for benchmark datasets. In addition, we show that the combination of knowledge-based methods is superior to the most frequent sense heuristic and significantly reduces the difference between knowledge-based and supervised methods. The proposed approach could be customized for other lexical disambiguation tasks, such as Lexical Substitution or Word Domain Disambiguation."}
{"_id": "3b574e22e2151b63a224e3e4f41387d6542210f4", "title": "An improved k-medoids clustering algorithm", "text": "In this paper, we present an improved k-medoids clustering algorithm based on CF-Tree. The algorithm based on the clustering features of BIRCH algorithm, the concept of k-medoids algorithm has been improved. We preserve all the training sample data in an CF-Tree, then use k-medoids method to cluster the CF in leaf nodes of CF-Tree. Eventually, we can get k clusters from the root of the CF-Tree. This algorithm improves obviously the drawbacks of the k-medoids algorithm, such as the time complexity, scalability on large dataset, and can't find the clusters of sizes different very much and the convex shapes. Experiments show that this algorithm enhances the quality and scalability of clustering."}
{"_id": "82ad99ac9fdeb42bb01248e67d28ee05d9b33ca8", "title": "A Prospective, Open-Label Study of Hyaluronic Acid-Based Filler With Lidocaine (VYC-15L) Treatment for the Correction of Infraorbital Skin Depressions.", "text": "BACKGROUND\nInfraorbital skin depressions are one of the most troublesome facial areas for aesthetically aware patients.\n\n\nOBJECTIVE\nEvaluate effectiveness and safety of Juv\u00e9derm Volbella with Lidocaine (VYC-15L; Allergan plc, Dublin, Ireland) for correction of bilateral infraorbital depressions.\n\n\nMETHODS\nIn this 12-month, prospective, uncontrolled, open-label study, subjects aged \u226518 years with infraorbital depressions rated \u22651 on the Allergan Infra-oRbital Scale (AIRS) received injections of VYC-15L with optional touch-up treatment on Day 14. The primary efficacy measure was \u22651 AIRS grade improvement from baseline at month 1.\n\n\nRESULTS\nOf 80 subjects initially treated with VYC-15L, 75 (94%) completed the study. All injections were intentionally deep, most using multiple microbolus technique. At 1 month, 99.3% of eyes achieved \u22651 AIRS grade improvement. The responder rate (subjects with \u22651 AIRS grade improvement in both eyes) was 99% at month 1, 92% at month 6, and 54% at month 12. Most injection site reactions (e.g., bruising, redness, irregularities/bumps) were mild and resolved by day 14. Late-onset mild to moderate edema was observed in 11% of eyes at month 6% and 4% of eyes at month 12.\n\n\nCONCLUSION\nVYC-15L is effective and safe for the treatment of infraorbital depressions, with effectiveness lasting up to 12 months."}
{"_id": "6a31bef6a2ae5a6440c981cd818271f6eab1b628", "title": "Spreadsheet Validation and Analysis through Content Visualization", "text": "Visualizing spreadsheet content provides analytic insight and visual validation of large amounts of spreadsheet data. Oculus Excel Visualizer is a point and click data visualization experiment which directly visualizes Excel data and re-users the layout and formatting already present in the spreadsheet."}
{"_id": "f32a35f6c0bae011aae40589488645b30dcb971e", "title": "A power-efficient 4-2 Adder Compressor topology", "text": "The addition is the most used arithmetic operation in Digital Signal Processing (DSP) algorithms, such as filters, transforms and predictions. These algorithms are increasingly present in audio and video processing of battery-powered mobile devices having, therefore, energy constraints. In the context of addition operation, the efficient 4-2 adder compressor is capable to performs four additions simultaneously. This higher order of parallelism reduces the critical path and the internal glitching, thus reducing mainly the dynamic power dissipation. This work proposes two CMOS+ gate-based topologies to further reduce the power, area and delay of the 4-2 adder compressor. The proposed CMOS+ 4-2 adder compressor circuits topologies were implemented with Cadence Virtuoso tool at 45 nm technology and simulated in both electric and layout levels. The results show that a proper choice of gates in 4-2 adder compressor realization can reduce the power, delay and area about 22.41%, 32.45% and 7.4%, respectively, when compared with the literature."}
{"_id": "7cb9749578fcd1bdf8f238629da42fe75fcea426", "title": "IoTScanner: Detecting and Classifying Privacy Threats in IoT Neighborhoods", "text": "In the context of the emerging Internet of Things (IoT), a proliferation of wireless connectivity can be expected. That ubiquitous wireless communication will be hard to centrally manage and control, and can be expected to be opaque to end users. As a result, owners and users of physical space are threatened to lose control over their digital environments. In this work, we propose the idea of an IoTScanner. The IoTScanner integrates a range of radios to allow local reconnaissance of existing wireless infrastructure and participating nodes. It enumerates such devices, identifies connection patterns, and provides valuable insights for technical support and home users alike. Using our IoTScanner, we attempt to classify actively streaming IP cameras from other non-camera devices using simple heuristics. We show that our classification approach achieves a high accuracy in an IoT setting consisting of a large number of IoT devices. While related work usually focuses on detecting either the infrastructure, or eavesdropping on traffic from a specific node, we focus on providing a general overview of operations in all observed networks. We do not assume prior knowledge of used SSIDs, preshared passwords, or similar."}
{"_id": "38c74ee4aa8b8069f39ce765ee6063423e3c146b", "title": "A new intra-disk redundancy scheme for high-reliability RAID storage systems in the presence of unrecoverable errors", "text": "Today's data storage systems are increasingly adopting low-cost disk drives that have higher capacity but lower reliability, leading to more frequent rebuilds and to a higher risk of unrecoverable media errors. We propose an efficient intradisk redundancy scheme to enhance the reliability of RAID systems. This scheme introduces an additional level of redundancy inside each disk, on top of the RAID redundancy across multiple disks. The RAID parity provides protection against disk failures, whereas the proposed scheme aims to protect against media-related unrecoverable errors. In particular, we consider an intradisk redundancy architecture that is based on an interleaved parity-check coding scheme, which incurs only negligible I/O performance degradation. A comparison between this coding scheme and schemes based on traditional Reed--Solomon codes and single-parity-check codes is conducted by analytical means. A new model is developed to capture the effect of correlated unrecoverable sector errors. The probability of an unrecoverable failure associated with these schemes is derived for the new correlated model, as well as for the simpler independent error model. We also derive closed-form expressions for the mean time to data loss of RAID-5 and RAID-6 systems in the presence of unrecoverable errors and disk failures. We then combine these results to characterize the reliability of RAID systems that incorporate the intradisk redundancy scheme. Our results show that in the practical case of correlated errors, the interleaved parity-check scheme provides the same reliability as the optimum, albeit more complex, Reed--Solomon coding scheme. Finally, the I/O and throughput performances are evaluated by means of analysis and event-driven simulation."}
{"_id": "164ad83f36c50170351e3c0b58731e53ecd4b82c", "title": "Understanding and improving recurrent networks for human activity recognition by continuous attention", "text": "Deep neural networks, including recurrent networks, have been successfully applied to human activity recognition. Unfortunately, the final representation learned by recurrent networks might encode some noise (irrelevant signal components, unimportant sensor modalities, etc.). Besides, it is difficult to interpret the recurrent networks to gain insight into the models' behavior. To address these issues, we propose two attention models for human activity recognition: temporal attention and sensor attention. These two mechanisms adaptively focus on important signals and sensor modalities. To further improve the understandability and mean Fl score, we add continuity constraints, considering that continuous sensor signals are more robust than discrete ones. We evaluate the approaches on three datasets and obtain state-of-the-art results. Furthermore, qualitative analysis shows that the attention learned by the models agree well with human intuition."}
{"_id": "ef05eeab9d356214d928fadafd5ce2b5c52ef4ff", "title": "WASSA-2017 Shared Task on Emotion Intensity", "text": "We present the first shared task on detecting the intensity of emotion felt by the speaker of a tweet. We create the first datasets of tweets annotated for anger, fear, joy, and sadness intensities using a technique called best\u2013worst scaling (BWS). We show that the annotations lead to reliable fine-grained intensity scores (rankings of tweets by intensity). The data was partitioned into training, development, and test sets for the competition. Twenty-two teams participated in the shared task, with the best system obtaining a Pearson correlation of 0.747 with the gold intensity scores. We summarize the machine learning setups, resources, and tools used by the participating teams, with a focus on the techniques and resources that are particularly useful for the task. The emotion intensity dataset and the shared task are helping improve our understanding of how we convey more or less intense emotions through language."}
{"_id": "2563efea0a5acc4eae586632c94cca490428658d", "title": "The Structure and Dynamics of Co-Citation Clusters: A Multiple-Perspective Co-Citation Analysis", "text": "A multiple-perspective co-citation analysis method is introduced for characterizing and interpreting the structure and dynamics of co-citation clusters. The method facilitates analytic and sense making tasks by integrating network visualization, spectral clustering, automatic cluster labeling, and text summarization. Co-citation networks are decomposed into co-citation clusters. The interpretation of these clusters is augmented by automatic cluster labeling and summarization. The method focuses on the interrelations between a co-citation cluster\u2019s members and their citers. The generic method is applied to a three-part analysis of the field of Information Science as defined by 12 journals published between 1996 and 2008: 1) a comparative author co-citation analysis (ACA), 2) a progressive ACA of a time series of co-citation networks, and 3) a progressive document co-citation analysis (DCA). Results show that the multipleperspective method increases the interpretability and accountability of both ACA and DCA networks."}
{"_id": "6659f09d0b458dda15d6d990f216ba2b19b9fa00", "title": "Global consequences of land use.", "text": "Land use has generally been considered a local environmental issue, but it is becoming a force of global importance. Worldwide changes to forests, farmlands, waterways, and air are being driven by the need to provide food, fiber, water, and shelter to more than six billion people. Global croplands, pastures, plantations, and urban areas have expanded in recent decades, accompanied by large increases in energy, water, and fertilizer consumption, along with considerable losses of biodiversity. Such changes in land use have enabled humans to appropriate an increasing share of the planet's resources, but they also potentially undermine the capacity of ecosystems to sustain food production, maintain freshwater and forest resources, regulate climate and air quality, and ameliorate infectious diseases. We face the challenge of managing trade-offs between immediate human needs and maintaining the capacity of the biosphere to provide goods and services in the long term."}
{"_id": "ac76c7e2624473dfd77fa350adf11b223655a34d", "title": "Plasmacytoid dendritic cells sense self-DNA coupled with antimicrobial peptide", "text": "Plasmacytoid dendritic cells (pDCs) sense viral and microbial DNA through endosomal Toll-like receptors to produce type 1 interferons. pDCs do not normally respond to self-DNA, but this restriction seems to break down in human autoimmune disease by an as yet poorly understood mechanism. Here we identify the antimicrobial peptide LL37 (also known as CAMP) as the key factor that mediates pDC activation in psoriasis, a common autoimmune disease of the skin. LL37 converts inert self-DNA into a potent trigger of interferon production by binding the DNA to form aggregated and condensed structures that are delivered to and retained within early endocytic compartments in pDCs to trigger Toll-like receptor 9. Thus, our data uncover a fundamental role of an endogenous antimicrobial peptide in breaking innate tolerance to self-DNA and suggest that this pathway may drive autoimmunity in psoriasis."}
{"_id": "046bf6fb90438335eaee07594855efbf541a8aba", "title": "Urban Computing: Concepts, Methodologies, and Applications", "text": "Urbanization's rapid progress has modernized many people's lives but also engendered big issues, such as traffic congestion, energy consumption, and pollution. Urban computing aims to tackle these issues by using the data that has been generated in cities (e.g., traffic flow, human mobility, and geographical data). Urban computing connects urban sensing, data management, data analytics, and service providing into a recurrent process for an unobtrusive and continuous improvement of people's lives, city operation systems, and the environment. Urban computing is an interdisciplinary field where computer sciences meet conventional city-related fields, like transportation, civil engineering, environment, economy, ecology, and sociology in the context of urban spaces. This article first introduces the concept of urban computing, discussing its general framework and key challenges from the perspective of computer sciences. Second, we classify the applications of urban computing into seven categories, consisting of urban planning, transportation, the environment, energy, social, economy, and public safety and security, presenting representative scenarios in each category. Third, we summarize the typical technologies that are needed in urban computing into four folds, which are about urban sensing, urban data management, knowledge fusion across heterogeneous data, and urban data visualization. Finally, we give an outlook on the future of urban computing, suggesting a few research topics that are somehow missing in the community."}
{"_id": "44808fd8f2ffd19bb266708b8de835c28f5b8596", "title": "RaceTrack: efficient detection of data race conditions via adaptive tracking", "text": "Bugs due to data races in multithreaded programs often exhibit non-deterministic symptoms and are notoriously difficult to find. This paper describes RaceTrack, a dynamic race detection tool that tracks the actions of a program and reports a warning whenever a suspicious pattern of activity has been observed. RaceTrack uses a novel hybrid detection algorithm and employs an adaptive approach that automatically directs more effort to areas that are more suspicious, thus providing more accurate warnings for much less over-head. A post-processing step correlates warnings and ranks code segments based on how strongly they are implicated in potential data races. We implemented RaceTrack inside the virtual machine of Microsoft's Common Language Runtime (product version v1.1.4322) and monitored several major, real-world applications directly out-of-the-box,without any modification. Adaptive tracking resulted in a slowdown ratio of about 3x on memory-intensive programs and typically much less than 2x on other programs,and a memory ratio of typically less than 1.2x. Several serious data race bugs were revealed, some previously unknown."}
{"_id": "3be38924dd2a4b420338b3634e4eb7d7d22abbad", "title": "ESTIMATION METHODS FOR STOCHASTIC VOLATILITY MODELS : A SURVEY", "text": "Although stochastic volatility (SV) models have an intuitive appeal, their empirical application has been limited mainly due to difficulties involved in their estimation. The main problem is that the likelihood function is hard to evaluate. However, recently, several new estimation methods have been intro duced and the literature on SV models has grown substantially. In this article, we review this literature. We describe the main estimators of the parameters and the underlying volatilities focusing on their advantages and limitations both from the theoretical and empirical point of view. We complete the survey with an application of the most important procedures to the S&P 500 stock price index."}
{"_id": "6e09a291d61f0e26ce3522a1b0fce952fb811090", "title": "Generative Attention Model with Adversarial Self-learning for Visual Question Answering", "text": "Visual question answering (VQA) is arguably one of the most challenging multimodal understanding problems as it requires reasoning and deep understanding of the image, the question, and their semantic relationship. Existing VQA methods heavily rely on attention mechanisms to semantically relate the question words with the image contents for answering the related questions. However, most of the attention models are simplified as a linear transformation, over the multimodal representation, which we argue is insufficient for capturing the complex nature of the multimodal data. In this paper we propose a novel generative attention model obtained by adversarial self-learning. The proposed adversarial attention produces more diverse visual attention maps and it is able to generalize the attention better to new questions. The experiments show the proposed adversarial attention leads to a state-of-the-art VQA model on the two VQA benchmark datasets, VQA v1.0 and v2.0."}
{"_id": "970b4d2ed1249af97cdf2fffdc7b4beae458db89", "title": "HMDB: A large video database for human motion recognition", "text": "With nearly one billion online videos viewed everyday, an emerging new frontier in computer vision research is recognition and search in video. While much effort has been devoted to the collection and annotation of large scalable static image datasets containing thousands of image categories, human action datasets lag far behind. Current action recognition databases contain on the order of ten different action categories collected under fairly controlled conditions. State-of-the-art performance on these datasets is now near ceiling and thus there is a need for the design and creation of new benchmarks. To address this issue we collected the largest action video database to-date with 51 action categories, which in total contain around 7,000 manually annotated clips extracted from a variety of sources ranging from digitized movies to YouTube. We use this database to evaluate the performance of two representative computer vision systems for action recognition and explore the robustness of these methods under various conditions such as camera motion, viewpoint, video quality and occlusion."}
{"_id": "47dd049e44c5e1c0e572284aa0e579900cf092a8", "title": "Automated detection of smuggled high-risk security threats using Deep Learning", "text": "The security infrastructure is ill-equipped to detect and deter the smuggling of non-explosive devices that enable terror attacks such as those recently perpetrated in western Europe. The detection of so-called \u201csmall metallic threats\u201d (SMTs) in cargo containers currently relies on statistical risk analysis, intelligence reports, and visual inspection of X-ray images by security officers. The latter is very slow and unreliable due to the difficulty of the task: objects potentially spanning less than 50 pixels have to be detected in images containing more than 2 million pixels against very complex and cluttered backgrounds. In this contribution, we demonstrate for the first time the use of Convolutional Neural Networks (CNNs), a type of Deep Learning, to automate the detection of SMTs in fullsize X-ray images of cargo containers. Novel approaches for dataset augmentation allowed to train CNNs from-scratch despite the scarcity of data available. We report fewer than 6% false alarms when detecting 90% SMTs synthetically concealed in streamof-commerce images, which corresponds to an improvement of over an order of magnitude over conventional approaches such as Bag-of-Words (BoWs). The proposed scheme offers potentially super-human performance for a fraction of the time it would take for a security officers to carry out visual inspection (processing time is approximately 3.5s per container image)."}
{"_id": "6aa12ca2bdfa18a8eac0c882b10d5c676efa05f7", "title": "Trends in Automotive Communication Systems", "text": "The use of networks for communications between the electronic control units (ECU) of a vehicle in production cars dates from the beginning of the 1990s. The specific requirements of the different car domains have led to the development of a large number of automotive networks such as Local Interconnect Network, J1850, CAN, TTP/C, FlexRay, media-oriented system transport, IDB1394, etc. This paper first introduces the context of in-vehicle embedded systems and, in particular, the requirements imposed on the communication systems. Then, a comprehensive review of the most widely used automotive networks, as well as the emerging ones, is given. Next, the current efforts of the automotive industry on middleware technologies, which may be of great help in mastering the heterogeneity, are reviewed. Finally, we highlight future trends in the development of automotive communication systems."}
{"_id": "fa74238203e293e1a572b206d714729f6efd3444", "title": "Bank Customer Credit Scoring by Using Fuzzy Expert System", "text": "Granting banking facility is one of the most important parts of the financial supplies for each bank. So this activity becomes more valuable economically and always has a degree of risk. These days several various developed Artificial Intelligent systems like Neural Network, Decision Tree, Logistic Regression Analysis, Linear Discriminant Analysis and etc, are used in the field of granting facilities that each of this system owns its advantages and disadvantages. But still studying and working are needed to improve the accuracy and performance of them. In this article among other AI methods, fuzzy expert system is selected. This system is based on data and also extracts rules by using data. Therefore the dependency to experts is omitted and interpretability of rules is obtained. Validity of these rules could be confirmed or rejected by banking affair experts. For investigating the performance of proposed system, this system and some other methods were performed on various datasets. Results show that the proposed algorithm obtained better performance among the others."}
{"_id": "b385debea84e4784f30747c9ec6fbbc7a17f1ec1", "title": "Semantics matters : cognitively plausible delineation of city centres from point of interest data", "text": "We sketch a workflow for cognitively plausible recognition of vague geographical concepts, such as a city centre. Our approach imitates a pedestrian strolling through the streets, and comparing his/her internal cognitive model of a city centre with the stimulus from the external world to decide whether he/she is in the city centre or outside. The cognitive model of a British city centre is elicited through an online questionnaire survey and used to delineate referents of city centre from point of interest data. We first compute a measure of \u0091\u2018city centre-ness\u0092\u2019 at each location within a city, and then merge the area of high city centre-ness to a contiguous region. The process is illustrated on the example of the City of Bristol, and the computed city centre area for Bristol is evaluated by comparison to reference areas derived from alternative sources. The evaluation suggests that our approach performs well and produces a representation of a city centre that is near to people\u0092\u2019s conceptualisation. The benefits of our work are better (and user-driven) descriptions of complex geographical concepts. We see such models as a prerequisite for generalisation over large changes in detail, and for very specific purposes."}
{"_id": "00ad8a79aaba6749789971972879bd591a7b23c5", "title": "Label Information Guided Graph Construction for Semi-Supervised Learning", "text": "In the literature, most existing graph-based semi-supervised learning methods only use the label information of observed samples in the label propagation stage, while ignoring such valuable information when learning the graph. In this paper, we argue that it is beneficial to consider the label information in the graph learning stage. Specifically, by enforcing the weight of edges between labeled samples of different classes to be zero, we explicitly incorporate the label information into the state-of-the-art graph learning methods, such as the low-rank representation (LRR), and propose a novel semi-supervised graph learning method called semi-supervised low-rank representation. This results in a convex optimization problem with linear constraints, which can be solved by the linearized alternating direction method. Though we take LRR as an example, our proposed method is in fact very general and can be applied to any self-representation graph learning methods. Experiment results on both synthetic and real data sets demonstrate that the proposed graph learning method can better capture the global geometric structure of the data, and therefore is more effective for semi-supervised learning tasks."}
{"_id": "afa2696a7bb3ebde1cf3514382433a0b705b43d3", "title": "Flexible sensor for Mckibben pneumatic actuator", "text": "A flexible electro-conductive rubber sensor for measuring the length of a McKibben pneumatic actuator has been developed. Mckibben actuators are flexible, lightweight, and widely used to actuate power assisting devices for which compliance is required for safety. The actuator's length needs to be measured to control the devices accurately. However, the flexibility and lightweight properties might be lost if rigid sensors such as linear potentionmeters or encoders are directly attached to the actuators. To keep the desirable properties of McKibben actuators, compact and flexible sensors are necessary. In this paper, a flexible sensor using electro-conductive flexible rubber is proposed to measure the length of McKibben actuators. Using this sensor, a higher accuracy can be obtained by measuring the circumference displacement than directly measuring the axial displacement. The estimation accuracy is evaluated and the usefulness of the proposed method is verified by adopting to a multiple-link robot arm driven by the McKibben actuators."}
{"_id": "197c43315bdcec6785cb9834638140d9878ec131", "title": "Automated Intelligent Pilots for Combat Flight Simulation", "text": "TacAir-Soar is an intelligent, rule-based system that generates believable \u201chuman-like\u201d behavior for large scale, distributed military simulations. The innovation of the application is primarily a matter of scale and integration. The system is capable of executing most of the airborne missions that the United States military flies in fixed-wing aircraft. It accomplishes this by integrating a wide variety of intelligent capabilities, including real-time hierarchical execution of complex goals and plans, communication and coordination with humans and simulated entities, maintenance of situational awareness, and the ability to accept and respond to new orders while in flight. The system is currently deployed at the Oceana Naval Air Station WISSARD Lab and the Air Force Research Laboratory in Mesa, AZ. Its most dramatic use was in the Synthetic Theater of War 1997, which was an operational training exercise that ran for 48 continuous hours during which TacAir-Soar flew all U.S. fixed-wing aircraft."}
{"_id": "3087289229146fc344560478aac366e4977749c0", "title": "THE INFORMATION CAPACITY OF THE HUMAN MOTOR SYSTEM IN CONTROLLING THE AMPLITUDE OF MOVEMENT", "text": "Information theory has recently been employed to specify more precisely than has hitherto been possible man's capacity in certain sensory, perceptual, and perceptual-motor functions (5, 10, 13, 15, 17, 18). The experiments reported in the present paper extend the theory to the human motor system. The applicability of only the basic concepts, amount of information, noise, channel capacity, and rate of information transmission, will be examined at this time. General familiarity with these concepts as formulated by recent writers (4,11, 20, 22) is assumed. Strictly speaking, we cannot study man's motor system at the behavioral level in isolation from its associated sensory mechanisms. We can only analyze the behavior of the entire receptor-neural-effector system. How-"}
{"_id": "0cd2285d00cc1337cc95ab120e558707b197862a", "title": "The mathematical theory of communication", "text": "T HE recent development of various methods of modulation such as PCM and PPM which exchange bandwidth for signal-to-noise ratio has intensified the interest in a general theory of communication. A basis for such a theory is contained in the important papers of Nyquist 1 and Hartley2 on this subject. In the present paper we will extend the theory to include a number of new factors, in particular the effect of noise in the channel, and the savings possible due to the statistical structure of the original message and due to the nature of the final destination of the information. The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design. If the number of messages in the set is finite then this number or any monotonic function of this number can be regarded as a measure of the information produced when one message is chosen from the set, all choices being equally likely. As was pointed out by Hartley the most natural choice is the logarithmic function. Although this definition must be generalized considerably when we consider the influence of the statistics of the message and when we have a continuous range of messages, we will in all cases use an essentially logarithmic measure. The logarithmic measure is more convenient for various reasons:"}
{"_id": "55ab2b325da10b4d46b69a7673e32823a9706a32", "title": "RoboCup: The Robot World Cup Initiative", "text": "The Robot World Cup Initiative (RoboCup) is an attempt to foster AI and intelligent robotics research by providing a standard problem where wide range of technologies can be integrated and examined. The rst RoboCup competition will be held at IJCAI-97, Nagoya. In order for a robot team to actually perform a soccer game, various technologies must be incorporated including: design principles of autonomous agents, multi-agent collaboration, strategy acquisition, real-time reasoning, robotics, and sensor-fusion. Unlike AAAI robot competition, which is tuned for a single heavyduty slow-moving robot, RoboCup is a task for a team of multiple fast-moving robots under a dynamic environment. Although RoboCup's nal target is a world cup with real robots, RoboCup o ers a software platform for research on the software aspects of RoboCup. This paper describes technical challenges involved in RoboCup, rules, and simulation environment."}
{"_id": "e37f60b230a6e7b6f4949cea85b4113aadaf4e0c", "title": "Controlling Cooperative Problem Solving in Industrial Multi-Agent Systems Using Joint Intentions", "text": "One reason why Distributed AI (DAI) technology has been deployed in relatively few real-size applications is that it lacks a clear and implementable model of cooperative problem solving which specifies how agents should operate and interact in complex, dynamic and unpredictable environments. As a consequence of the experience gained whilst building a number of DAI systems for industrial applications, a new principled model of cooperation has been developed. This model, called Joint Responsibility, has the notion of joint intentions at its core. It specifies pre-conditions which must be attained before collaboration can commence and prescribes how individuals should behave both when joint activity is progressing satisfactorily and also when it runs into difficulty. The theoretical model has been used to guide the implementation of a general-purpose cooperation framework and the qualitative and quantitative benefits of this implementation have been assessed through a series of comparative experiments in the real-world domain of electricity transportation management. Finally, the success of the approach of building a system with an explicit and grounded representation of cooperative problem solving is used to outline a proposal for the next generation of multi-agent systems."}
{"_id": "021472c712d51f88b4175b7e2906ab11b7e28bb8", "title": "Laboratory Experiments for Tsunamis Generated by Underwater Landslides: Comparison with Numerical Modeling", "text": "Three-dimensional experiments and fully nonlinear computations are performed at the University of Rhode Island, to investigate tsunami generation by underwater landslides. Each experiment consists of a solid landslide of idealized smooth shape sliding over a plane slope. Surface elevations are measured using very accurate gages placed at strategic locations. Gage calibration is performed using a newly developed automated system. Landslide acceleration is measured with a micro-accelerometer. The repeatability of experiments is first investigated, and then by varying the initial depth of the landslide, different conditions of wave non-linearity and dispersion are generated and compared. The principle of numerical modeling, using an earlier developed model, is briefly explained. One application is presented and results compared to experiments. The agreement of computations with the latter is quite good. In the model, horizontal velocities are found quite non-uniform over depth above the moving landslide. This would preclude using a long wave model for such landslide tsunami wave generation."}
{"_id": "12de03e2691c11d29a82f1c3fc7e97121c07cb5b", "title": "CopyCatch: stopping group attacks by spotting lockstep behavior in social networks", "text": "How can web services that depend on user generated content discern fraudulent input by spammers from legitimate input? In this paper we focus on the social network Facebook and the problem of discerning ill-gotten Page Likes, made by spammers hoping to turn a profit, from legitimate Page Likes. Our method, which we refer to as CopyCatch, detects lockstep Page Like patterns on Facebook by analyzing only the social graph between users and Pages and the times at which the edges in the graph (the Likes) were created. We offer the following contributions: (1) We give a novel problem formulation, with a simple concrete definition of suspicious behavior in terms of graph structure and edge constraints. (2) We offer two algorithms to find such suspicious lockstep behavior - one provably-convergent iterative algorithm and one approximate, scalable MapReduce implementation. (3) We show that our method severely limits \"greedy attacks\" and analyze the bounds from the application of the Zarankiewicz problem to our setting. Finally, we demonstrate and discuss the effectiveness of CopyCatch at Facebook and on synthetic data, as well as potential extensions to anomaly detection problems in other domains. CopyCatch is actively in use at Facebook, searching for attacks on Facebook's social graph of over a billion users, many millions of Pages, and billions of Page Likes."}
{"_id": "1370b7ec6cb56b0ff25f512bd673acbab214708c", "title": "Automated concolic testing of smartphone apps", "text": "We present an algorithm and a system for generating input events to exercise smartphone apps. Our approach is based on concolic testing and generates sequences of events automatically and systematically. It alleviates the path-explosion problem by checking a condition on program executions that identifies subsumption between different event sequences. We also describe our implementation of the approach for Android, the most popular smartphone app platform, and the results of an evaluation that demonstrates its effectiveness on five Android apps."}
{"_id": "541ff4c3fe4ca6bd9acf6b714e065c38d2584e53", "title": "SHEEP, GOATS, LAMBS and WOLVES: a statistical analysis of speaker performance in the NIST 1998 speaker recognition evaluation", "text": "Performance variabilit y in speech and speaker recognition systems can be attributed to many factors. One major factor, which is often acknowledged but seldom analyzed, is inherent differences in the recognizabilit y of different speakers. In speaker recognition systems such differences are characterized by the use of animal names for different types of speakers, including sheep, goats, lambs and wolves, depending on their behavior with respect to automatic recognition systems. In this paper we propose statistical tests for the existence of these animals and apply these tests to hunt for such animals using results from the 1998 NIST speaker recognition evaluation."}
{"_id": "f7a6be26eff0698df6fcb6fdaad79715699fc8cd", "title": "Salient Band Selection for Hyperspectral Image Classification via Manifold Ranking", "text": "Saliency detection has been a hot topic in recent years, and many efforts have been devoted in this area. Unfortunately, the results of saliency detection can hardly be utilized in general applications. The primary reason, we think, is unspecific definition of salient objects, which makes that the previously published methods cannot extend to practical applications. To solve this problem, we claim that saliency should be defined in a context and the salient band selection in hyperspectral image (HSI) is introduced as an example. Unfortunately, the traditional salient band selection methods suffer from the problem of inappropriate measurement of band difference. To tackle this problem, we propose to eliminate the drawbacks of traditional salient band selection methods by manifold ranking. It puts the band vectors in the more accurate manifold space and treats the saliency problem from a novel ranking perspective, which is considered to be the main contributions of this paper. To justify the effectiveness of the proposed method, experiments are conducted on three HSIs, and our method is compared with the six existing competitors. Results show that the proposed method is very effective and can achieve the best performance among the competitors."}
{"_id": "407de9da58871cae7a6ded2f3a6162b9dc371f38", "title": "TraMNet - Transition Matrix Network for Efficient Action Tube Proposals", "text": "Current state-of-the-art methods solve spatio-temporal action localisation by extending 2D anchors to 3D-cuboid proposals on stacks of frames, to generate sets of temporally connected bounding boxes called action micro-tubes. However, they fail to consider that the underlying anchor proposal hypotheses should also move (transition) from frame to frame, as the actor or the camera do. Assuming we evaluate n 2D anchors in each frame, then the number of possible transitions from each 2D anchor to he next, for a sequence of f consecutive frames, is in the order of O(n ), expensive even for small values of f . To avoid this problem we introduce a Transition-Matrix-based Network (TraMNet) which relies on computing transition probabilities between anchor proposals while maximising their overlap with ground truth bounding boxes across frames, and enforcing sparsity via a transition threshold. As the resulting transition matrix is sparse and stochastic, this reduces the proposal hypothesis search space from O(n ) to the cardinality of the thresholded matrix. At training time, transitions are specific to cell locations of the feature maps, so that a sparse (efficient) transition matrix is used to train the network. At test time, a denser transition matrix can be obtained either by decreasing the threshold or by adding to it all the relative transitions originating from any cell location, allowing the network to handle transitions in the test data that might not have been present in the training data, and making detection translationinvariant. Finally, we show that our network is able to handle sparse annotations such as those available in the DALY dataset, while allowing for both dense (accurate) or sparse (efficient) evaluation within a single model. We report extensive experiments on the DALY, UCF101-24 and Transformed-UCF101-24 datasets to support our claims."}
{"_id": "50e3b09c870dc93920f2e4ad5853e590c7b85ed7", "title": "Robust Scale Estimation in Real-Time Monocular SFM for Autonomous Driving", "text": "Scale drift is a crucial challenge for monocular autonomous driving to emulate the performance of stereo. This paper presents a real-time monocular SFM system that corrects for scale drift using a novel cue combination framework for ground plane estimation, yielding accuracy comparable to stereo over long driving sequences. Our ground plane estimation uses multiple cues like sparse features, dense inter-frame stereo and (when applicable) object detection. A data-driven mechanism is proposed to learn models from training data that relate observation covariances for each cue to error behavior of its underlying variables. During testing, this allows per-frame adaptation of observation covariances based on relative confidences inferred from visual data. Our framework significantly boosts not only the accuracy of monocular self-localization, but also that of applications like object localization that rely on the ground plane. Experiments on the KITTI dataset demonstrate the accuracy of our ground plane estimation, monocular SFM and object localization relative to ground truth, with detailed comparisons to prior art."}
{"_id": "af82c83495c50c1603ff868c5335743fe286d144", "title": "The B2C group-buying model on the Internet", "text": "Internet-based group-buying has become a new growth point for China's e-commerce. By studying B2C group-buying model, this paper has collected data of 20 major Chinese group-buying websites, studied factors that might influence Internet users' group-buying behavior, proposed a new model for evaluating the value of Internet enterprises, and provided a feasible reference model for the evaluation and performance of Internet enterprises."}
{"_id": "dbc7401e3e75c40d3c720e7db3c906d48bd742d7", "title": "Deep Autoencoding Gaussian Mixture Model for Unsupervised Anomaly Detection", "text": "Unsupervised anomaly detection on multior high-dimensional data is of great importance in both fundamental machine learning research and industrial applications, for which density estimation lies at the core. Although previous approaches based on dimensionality reduction followed by density estimation have made fruitful progress, they mainly suffer from decoupled model learning with inconsistent optimization goals and incapability of preserving essential information in the low-dimensional space. In this paper, we present a Deep Autoencoding Gaussian Mixture Model (DAGMM) for unsupervised anomaly detection. Our model utilizes a deep autoencoder to generate a low-dimensional representation and reconstruction error for each input data point, which is further fed into a Gaussian Mixture Model (GMM). Instead of using decoupled two-stage training and the standard Expectation-Maximization (EM) algorithm, DAGMM jointly optimizes the parameters of the deep autoencoder and the mixture model simultaneously in an end-to-end fashion, leveraging a separate estimation network to facilitate the parameter learning of the mixture model. The joint optimization, which well balances autoencoding reconstruction, density estimation of latent representation, and regularization, helps the autoencoder escape from less attractive local optima and further reduce reconstruction errors, avoiding the need of pre-training. Experimental results on several public benchmark datasets show that, DAGMM significantly outperforms state-of-the-art anomaly detection techniques, and achieves up to 14% improvement based on the standard F1 score."}
{"_id": "5b09c1880b6eca96b86a06d6a941a70c36623a23", "title": "Introducing qualitative research in psychology Adventures in theory and method", "text": "All rights reserved. Except for the quotation of short passages for the purpose of criticism and review, no part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the publisher or a licence from the Copyright Licensing Agency Limited. Details of such licences (for reprographic reproduction) may be obtained from the Copyright Introducing qualitative research in psychology: adventures in theory and method / Carla Willig. p. cm. Includes bibliographical references and index."}
{"_id": "97d7a5321ded1563f07d233a874193a3c2e1ed01", "title": "Visual reaction time and high-speed ball games.", "text": "Laboratory measures of visual reaction time suggest that some aspects of high-speed ball games such as cricket are 'impossible' because there is insufficient time for the player to respond to unpredictable movements of the ball. Given the success with which some people perform these supposedly impossible acts, it has been assumed by some commentators that laboratory measures of reaction time are not applicable to skilled performers. An analysis of high-speed film of international cricketers batting on a specially prepared pitch which produced unpredictable movement of the ball is reported, and it is shown that, when batting, highly skilled professional cricketers show reaction times of around 200 ms, times similar to those found in traditional laboratory studies. Furthermore, professional cricketers take roughly as long as casual players to pick up ball flight information from film of bowlers. These two sets of results suggest that the dramatic contrast between the ability of skilled and unskilled sportsmen to act on the basis of visual information does not lie in differences in the speed of operation of the perceptual system. It lies in the organisation of the motor system that uses the output of the perceptual system."}
{"_id": "517fa38cea1da39375b97ed1d6ab2f5398299fbd", "title": "Combining Confocal Imaging and Descattering", "text": "In translucent objects, light paths are affected by multiple scattering, which is polluting any observation. Confocal imaging reduces the influence of such global illumination effects by carefully focusing illumination and viewing rays from a large aperture to a specific location within the object volume. The selected light paths still contain some global scattering contributions, though. Descattering based on high frequency illumination serves the same purpose. It removes the global component from observed light paths. We demonstrate that confocal imaging and descattering are orthogonal and propose a novel descattering protocol that analyzes the light transport in a neighborhood of light transport paths. In combination with confocal imaging, our descattering method achieves optical sectioning in translucent media with higher contrast and better resolution."}
{"_id": "cc0493c758e49c8bebeade01e3e685e2d370d90c", "title": "A Survey on Survey of Migration of Legacy Systems", "text": "Legacy systems are mission critical complex systems which are hard to maintain owing to shortage of skill sets and monolithic code architecture with tightly coupled tiers, all of which are indication of obsoleteness of the system technology. Thus, they have to be migrated to the latest technology(ies). Migration is an offspring research in Software Engineering which is almost three decades old research and numerous publications have emerged in many topics in the migration domain with focus areas of code migration, architecture migration, case study on migration and effort estimation on migration. In addition, various survey works surveying different aspects of migration have also emerged. This paper provides a survey of these survey works in order to provide a consolidation of these survey works. As an outcome of the survey on survey of migration, a road map of migration evolution from the early stages to the recent works on migration comprising of significant milestones in migration research has been presented in this work."}
{"_id": "35e83667966d56389f7d69cf1d434a14d566bf84", "title": "Distance Preserving Dimension Reduction for Manifold Learning", "text": "Manifold learning is an effective methodology for extracting nonlinear structures from high-dimensional data with many applications in image analysis, computer vision, text data analysis and bioinformatics. The focus of this paper is on developing algorithms for reducing the computational complexity of manifold learning algorithms, in particular, we consider the case when the number of features is much larger than the number of data points. To handle the large number of features, we propose a preprocessing method, distance preserving dimension reduction (DPDR). It produces t-dimensional representations of the high-dimensional data, wheret is the rank of the original dataset. It exactly preserves the Euclidean L2-norm distances as well as cosine similarity measures between data points in the original space. With the original data projected to the t-dimensional space, manifold learning algorithms can be executed to obtain lower dimensional parameterizations with substantial reduction in computational cost. Our experimental results illustrate that DPDR significantly reduces computing time of manifold learning algorithms and produces low-dimensional parameterizations as accurate as those obtained from the original datasets."}
{"_id": "f6fec61221604359e65ccacc98030d900c5b3496", "title": "Quantum Field Theory in Curved Spacetime", "text": "We review the mathematically rigorous formulation of the quantum theory of a linear field propagating in a globally hyperbolic spacetime. This formulation is accomplished via the algebraic approach, which, in essence, simultaneously admits all states in all possible (unitarily inequivalent) Hilbert space constructions. The physically nonsingular states are restricted by the requirement that their two-point function satisfy the Hadamard condition, which insures that the ultra-violet behavior of the state be similar to that of the vacuum state in Minkowski spacetime, and that the expected stress-energy tensor in the state be finite. We briefly review the Unruh and Hawking effects from the perspective of the theoretical framework adopted here. A brief discussion also is given of several open issues and questions in quantum field theory in curved spacetime regarding the treatment of \u201cbackreaction\u201d, the validity of some version of the \u201caveraged null energy condition\u201d, and the formulation and properties of quantum field theory in causality violating"}
{"_id": "642c1b4a9da95ea4239708afc5929a5007a1870d", "title": "Tensor2Tensor for Neural Machine Translation", "text": "Tensor2Tensor is a library for deep learning models that is very well-suited for neural machine translation and includes the reference implementation of the state-of-the-art Transformer model. 1 Neural Machine Translation Background Machine translation using deep neural networks achieved great success with sequence-tosequence models Sutskever et al. (2014); Bahdanau et al. (2014); Cho et al. (2014) that used recurrent neural networks (RNNs) with LSTM cells Hochreiter and Schmidhuber (1997). The basic sequence-to-sequence architecture is composed of an RNN encoder which reads the source sentence one token at a time and transforms it into a fixed-sized state vector. This is followed by an RNN decoder, which generates the target sentence, one token at a time, from the state vector. While a pure sequence-to-sequence recurrent neural network can already obtain good translation results Sutskever et al. (2014); Cho et al. (2014), it suffers from the fact that the whole input sentence needs to be encoded into a single fixed-size vector. This clearly manifests itself in the degradation of translation quality on longer sentences and was partially overcome in Bahdanau et al. (2014) by using a neural model of attention. Convolutional architectures have been used to obtain good results in word-level neural machine translation starting from Kalchbrenner and Blunsom (2013) and later in Meng et al. (2015). These early models used a standard RNN on top of the convolution to generate the output, which creates a bottleneck and hurts performance. Fully convolutional neural machine translation without this bottleneck was first achieved in Kaiser and Bengio (2016) and Kalchbrenner et al. (2016). The model in Kaiser and Bengio (2016) (Extended Neural GPU) used a recurrent stack of gated convolutional layers, while the model in Kalchbrenner et al. (2016) (ByteNet) did away with recursion and used left-padded convolutions in the decoder. This idea, introduced in WaveNet van den Oord et al. (2016), significantly improves efficiency of the model. The same technique was improved in a number of neural translation models recently, including Gehring et al. (2017) and Kaiser et al. (2017)."}
{"_id": "108652cc6c3cdfd754ac8622794d85e945996b3c", "title": "Semantics in Visual Information Retrieval", "text": "V isual information retrieval systems have entered a new era. First-generation systems allowed access to images and videos through textual data. 1,2 Typical searches for these systems include, for example, \" all images of paintings of the Florentine school of the 15th century \" or \" all images by Cezanne with landscapes. \" Such systems expressed information through alphanumeric keywords or scripts. They employed representation schemes like relational models, frame models, and object-oriented models. On the other hand, current-generation retrieval systems support full retrieval by visual content. 3,4 Access to visual information is not only performed at a conceptual level, using keywords as in the textu-al domain, but also at a perceptual level, using objective measurements of visual content. In these systems, image processing, pattern recognition , and computer vision constitute an integral part of the system's architecture and operation. They objectively analyze pixel distribution and extract the content descriptors automatically from raw sensory data. Image content descriptors are commonly represented as feature vectors, whose elements correspond to significant parameters that model image attributes. Therefore, visual attributes are regarded as points in a multidimen-sional feature space, where point closeness reflects feature similarity. These advances (for comprehensive reviews of the field, see the \" Further Reading \" sidebar) have paved the way for third-generation systems, featuring full multimedia data management and networking support. Forthcoming standards such as MPEG-4 and MPEG-7 (see the Nack and Lindsay article in this issue) provide the framework for efficient representation, processing, and retrieval of visual information. Yet many problems must still be addressed and solved before these technologies can emerge. An important issue is the design of indexing structures for efficient retrieval from large, possibly distributed , multimedia data repositories. To achieve this goal, image and video content descriptors can be internally organized and accessed through multidimensional index structures. 5 A second key problem is to bridge the semantic gap between the system and users. That is, devise representations capturing visual content at high semantic levels especially relevant for retrieval tasks. Specifically, automatically obtaining a representation of high-level visual content remains an open issue. Virtually all the systems based on automatic storage and retrieval of visual information proposed so far use low-level perceptual representations of pictorial data, which have limited semantics. Building up a representation proves tantamount to defining a model of the world, possibly through a formal description language, whose semantics capture only a few \u2026"}
{"_id": "09d2f9f868a70e59d06fcf7bf721fbc9d62ccd06", "title": "A Dynamic Method to Forecast the Wheel Slip for Antilock Braking System and Its Experimental Evaluation", "text": "The control of an antilock braking system (ABS) is a difficult problem due to its strongly nonlinear and uncertain characteristics. To overcome this difficulty, the integration of gray-system theory and sliding-mode control is proposed in this paper. This way, the prediction capabilities of the former and the robustness of the latter are combined to regulate optimal wheel slip depending on the vehicle forward velocity. The design approach described is novel, considering that a point, rather than a line, is used as the sliding control surface. The control algorithm is derived and subsequently tested on a quarter vehicle model. Encouraged by the simulation results indicating the ability to overcome the stated difficulties with fast convergence, experimental results are carried out on a laboratory setup. The results presented indicate the potential of the approach in handling difficult real-time control problems."}
{"_id": "64305508a53cc99e62e6ff73592016d0b994afd4", "title": "A survey of RDF data management systems", "text": "RDF is increasingly being used to encode data for the semantic web and data exchange. There have been a large number of works that address RDF data management following different approaches. In this paper we provide an overview of these works. This review considers centralized solutions (what are referred to as warehousing approaches), distributed solutions, and the techniques that have been developed for querying linked data. In each category, further classifications are provided that would assist readers in understanding the identifying characteristics of different approaches."}
{"_id": "254ded254065f2d26ca24ec024cefd7604bd74e7", "title": "Efficient Parallel Graph Exploration on Multi-Core CPU and GPU", "text": "Graphs are a fundamental data representation that has been used extensively in various domains. In graph-based applications, a systematic exploration of the graph such as a breadth-first search (BFS) often serves as a key component in the processing of their massive data sets. In this paper, we present a new method for implementing the parallel BFS algorithm on multi-core CPUs which exploits a fundamental property of randomly shaped real-world graph instances. By utilizing memory bandwidth more efficiently, our method shows improved performance over the current state-of-the-art implementation and increases its advantage as the size of the graph increases. We then propose a hybrid method which, for each level of the BFS algorithm, dynamically chooses the best implementation from: a sequential execution, two different methods of multicore execution, and a GPU execution. Such a hybrid approach provides the best performance for each graph size while avoiding poor worst-case performance on high-diameter graphs. Finally, we study the effects of the underlying architecture on BFS performance by comparing multiple CPU and GPU systems, a high-end GPU system performed as well as a quad-socket high-end CPU system."}
{"_id": "30b41155d8c4dc7abc8eb557fd391d2a87640ac5", "title": "Optimising Results from Minimal Access Cranial Suspension Lifting (MACS-Lift)", "text": "Between November 1999 and February 2005, 450 minimal access cranial suspension (MACS) lifts were performed. Starting with the idea of suspension for sagging soft tissues using permanent purse-string sutures, a new comprehensive approach to facial rejuvenation was developed in which the vertical vector appeared to be essential. The neck is corrected by extended submental liposuction and strong vertical traction on the lateral part of the platysma by means of a first vertical purse-string suture. The volume of the jowls and the cheeks is repositioned in a cranial direction with a second, slightly oblique purse-string suture. The descent of the midface is corrected by suspending the malar fat pad in a nearly vertical direction. In 23 cases (5.1%), the result in the neck was unsatisfactory, and additional work had to be done secondarily, or in later cases, primarily. The problem that appeared was unsatisfactory correction of platysmal bands (resolved with an additional anterior cervicoplasty) or vertical skin folds that appeared in the infralobular region (corrected with an additional posterior cervicoplasty). This article describes two ancillary procedures that, although not frequently necessary, can optimise the result of MACS lifting."}
{"_id": "d247b14fef93a2d6e87a555b0b992a41996a387f", "title": "A Survey of Mobile Cloud Computing Applications: Perspectives and Challenges", "text": "As mobile computing has been developed for decades, a new model for mobile computing, namely, mobile cloud computing, emerges resulting from the marriage of powerful yet affordable mobile devices and cloud computing. In this paper we survey existing mobile cloud computing applications, as well as speculate future generation mobile cloud computing applications. We provide insights for the enabling technologies and challenges that lie ahead for us to move forward from mobile computing to mobile cloud computing for building the next generation mobile cloud applications. For each of the challenges, we provide a survey of existing solutions, identify research gaps, and suggest future research areas."}
{"_id": "4a017a4b92d76b1706f156a42fde89a37762a067", "title": "Human activity recognition using inertial sensors with invariance to sensor orientation", "text": "This work deals with the task of human daily activity recognition using miniature inertial sensors. The proposed method reduces sensitivity to the position and orientation of the sensor on the body, which is inherent in traditional methods, by transforming the observed signals to a \u201cvirtual\u201d sensor orientation. By means of this computationally low-cost transform, the inputs to the classification algorithm are made invariant to sensor orientation, despite the signals being recorded from arbitrary sensor placements. Classification results show that improved performance, in terms of both precision and recall, is achieved with the transformed signals, relative to classification using raw sensor signals, and the algorithm performs competitively compared to the state-of-the-art. Activity recognition using data from a sensor with completely unknown orientation is shown to perform very well over a long term recording in a real-life setting."}
{"_id": "721054a766693d567ebebe36258eb7882540fc7e", "title": "A simple circularly polarized loop tag antenna for increased reading distance", "text": "A simple circularly polarized (CP) loop tag antenna is proposed to overcome the polarization mismatch problem between reader antenna and tag antenna. Good CP radiation can be achieved by introducing an open gap onto the square loop. Also, the technique of loading a matching stub across the RFID chip is applied to the proposed loop tag antenna, and desirable impedance matching between the antenna and RFID chip can be realized by tuning the matching stub. The proposed loop tag antenna shows good reading distance performance, and its measured reading distance reaches to 17.6 m."}
{"_id": "5e23ba80d737e3f5aacccd9e32fc3de4887accec", "title": "Business artifacts: An approach to operational specification", "text": "Any business, no matter what physical goods or services it produces, relies on business records. It needs to record details of what it produces in terms of concrete information. Business artifacts are a mechanism to record this information in units that are concrete, identifiable, self-describing, and indivisible. We developed the concept of artifacts, or semantic objects, in the context of a technique for constructing formal yet intuitive operational descriptions of a business. This technique, called OpS (Operational Specification), was developed over the course of many business-transformation and business-process-integration engagements for use in IBM\u2019s internal processes as well as for use with customers. Business artifacts (or business records) are the basis for the factorization of knowledge that enables the OpS technique. In this paper we present a comprehensive discussion of business artifacts\u2014what they are, how they are represented, and the role they play in operational business modeling. Unlike the more familiar and popular concept of business objects, business artifacts are pure instances rather than instances of a taxonomy of types. Consequently, the key operation on business artifacts is recognition rather than classification."}
{"_id": "01f187c3f0390123e70e01f824101bf771e76b8f", "title": "Bitcoin and Beyond: A Technical Survey on Decentralized Digital Currencies", "text": "Besides attracting a billion dollar economy, Bitcoin revolutionized the field of digital currencies and influenced many adjacent areas. This also induced significant scientific interest. In this survey, we unroll and structure the manyfold results and research directions. We start by introducing the Bitcoin protocol and its building blocks. From there we continue to explore the design space by discussing existing contributions and results. In the process, we deduce the fundamental structures and insights at the core of the Bitcoin protocol and its applications. As we show and discuss, many key ideas are likewise applicable in various other fields, so that their impact reaches far beyond Bitcoin itself."}
{"_id": "17458e303096cb883d33c29a1f9dd646b8c319be", "title": "QoS-aware middleware for Web services composition", "text": "The paradigmatic shift from a Web of manual interactions to a Web of programmatic interactions driven by Web services is creating unprecedented opportunities for the formation of online business-to-business (B2B) collaborations. In particular, the creation of value-added services by composition of existing ones is gaining a significant momentum. Since many available Web services provide overlapping or identical functionality, albeit with different quality of service (QoS), a choice needs to be made to determine which services are to participate in a given composite service. This paper presents a middleware platform which addresses the issue of selecting Web services for the purpose of their composition in a way that maximizes user satisfaction expressed as utility functions over QoS attributes, while satisfying the constraints set by the user and by the structure of the composite service. Two selection approaches are described and compared: one based on local (task-level) selection of services and the other based on global allocation of tasks to services using integer programming."}
{"_id": "9cd71e490ea9113750a2512a0596c35a261c79dc", "title": "Cryptocurrencies, smart contracts, and artificial intelligence", "text": "Recent developments in \"cryptocurrencies\" and \"smart contracts\" are creating new opportunities for applying AI techniques. These economic technologies would benefit from greater real world knowledge and reasoning as they become integrated with everyday commerce. Cryptocurrencies and smart contracts may also provide an infrastructure for ensuring that AI systems follow specified legal and safety regulations as they become more integrated into human society."}
{"_id": "a84627299366e60f3fa157ed7cfcf02d52b9e21b", "title": "To See or not to See \u2013 Innovative Display Technologies as Enablers for Ergonomic Cockpit Concepts Ergonomic requirements future mobility future functionality", "text": "Introduction Given the fact that the driving task will remarkably change by the increase of assistance and information systems in the future innovative display technologies play a central role for the ergonomic realization of the driver vehicle interaction. Increasing assistance and even automation of the driving task do not lead to a decrease or disappearance of visual information but instead request new and in some cases revolutionary concepts to close the loop between driver, vehicle and traffic environment. Augmenting information in contact analog head-up-displays for navigation and driver assistance is a very promising approach. Replacement of mirrors via camera monitor systems is a further example. Free programmable cluster instruments in combination with HUD promise to resolve the problem of information density produced by an increasing amount of ADAS and IVIS functionality."}
{"_id": "84ae50626d5315fe5b8bf93548fda4c3a152d885", "title": "An electrostatic gripper for flexible objects", "text": "We demonstrate a flexible, electrostatic adhesive gripper designed to controllably grasp and manipulate soft goods in space. The 8-fingered gripper has 50 cm2 of active electrodes operating at 3 kV. It generates electrostatic adhesion forces up to 3.5 N (0.70 kPa) on Ge-coated polyimide film and 1.2 N on MLI blanket, a film composite used for satellite thermal insulation. Extremely low-force gripper engagement (0.08 N) and release (0.04 N) of films is ideal for micro-gravity. Individual fingers generate shear adhesion forces up to 4.76 N (5.04 kPa) using electrostatic adhesive and 45.0 N (47.6 kPa) with a hybrid electrostatic / gecko adhesive. To simulate a satellite servicing task, the gripper was mounted on a 7-DoF robot arm and performed a supervised grasp, manipulate, and release sequence on a hanging, Al-coated PET film."}
{"_id": "022a98d6cfcb29b803ceb47297a26e673af20434", "title": "The optimum received power levels of uplink non-orthogonal multiple access (NOMA) signals", "text": "Non-orthogonal multiple access (NOMA) has been recently considered as a promising multiple access technique for fifth generation (5G) mobile networks as an enabling technology to meet the demands of low latency, high reliability, massive connectivity, and high throughput. The two dominants types of NOMA are: power-domain and code-domain. The key feature of power-domain NOMA is to allow different users to share the same time, frequency, and code, but with different power levels. In code-domain NOMA, different spread-spectrum codes are assigned to different users and are then multiplexed over the same time-frequency resources. This paper concentrates on power-domain NOMA. In power-domain NOMA, Successive Interference Cancellation (SIC) is employed at the receiver. In this paper, the optimum received uplink power levels using a SIC detector is determined analytically for any number of transmitters. The optimum uplink received power levels using the SIC decoder in NOMA strongly resembles the \u03bc-law encoding used in pulse code modulation (PCM) speech companders."}
{"_id": "453eab87d80e27eedddbf223303a947144655b27", "title": "Analysis, synthesis, and perception of voice quality variations among female and male talkers.", "text": "Voice quality variations include a set of voicing sound source modifications ranging from laryngealized to normal to breathy phonation. Analysis of reiterant imitations of two sentences by ten female and six male talkers has shown that the potential acoustic cues to this type of voice quality variation include: (1) increases to the relative amplitude of the fundamental frequency component as open quotient increases; (2) increases to the amount of aspiration noise that replaces higher frequency harmonics as the arytenoids become more separated; (3) increases to lower formant bandwidths; and (4) introduction of extra pole zeros in the vocal-tract transfer function associated with tracheal coupling. Perceptual validation of the relative importance of these cues for signaling a breathy voice quality has been accomplished using a new voicing source model for synthesis of more natural male and female voices. The new formant synthesizer, KLSYN88, is fully documented here. Results of the perception study indicate that, contrary to previous research which emphasizes the importance of increased amplitude of the fundamental component, aspiration noise is perceptually most important. Without its presence, increases to the fundamental component may induce the sensation of nasality in a high-pitched voice. Further results of the acoustic analysis include the observations that: (1) over the course of a sentence, the acoustic manifestations of breathiness vary considerably--tending to increase for unstressed syllables, in utterance-final syllables, and at the margins of voiceless consonants; (2) on average, females are more breathy than males, but there are very large differences between subjects within each gender; (3) many utterances appear to end in a \"breathy-laryngealized\" type of vibration; and (4) diplophonic irregularities in the timing of glottal periods occur frequently, especially at the end of an utterance. Diplophonia and other deviations from perfect periodicity may be important aspects of naturalness in synthesis."}
{"_id": "75737d3b6cc483edb9f55c452e661c89ae2ffa60", "title": "FanStore: Enabling Efficient and Scalable I/O for Distributed Deep Learning", "text": "Emerging Deep Learning (DL) applications introduce heavy I/O workloads on computer clusters. The inherent long lasting, repeated, and random file access pattern can easily saturate the metadata and data service and negatively impact other users. In this paper, we present FanStore, a transient runtime file system that optimizes DL I/O on existing hardware/software stacks. FanStore distributes datasets to the local storage of compute nodes, and maintains a global namespace. With the techniques of system call interception, distributed metadata management, and generic data compression, FanStore provides a POSIX-compliant interface with native hardware throughput in an efficient and scalable manner. Users do not have to make intrusive code changes to use FanStore and take advantage of the optimized I/O. Our experiments with benchmarks and real applications show that FanStore can scale DL training to 512 compute nodes with over 90% scaling efficiency."}
{"_id": "0302dd22168fb3c22a57a6d5d05ba94ace0c20cf", "title": "BLE-Based Accurate Indoor Location Tracking for Home and Office", "text": "Nowadays the use of smart mobile devices and the accompanying needs for emerging services relying on indoor location-based services (LBS) for mobile devices are rapidly increasing. For more accurate location tracking using Bluetooth Low Energy (BLE), this paper proposes a novel trilateration-based algorithm and presents experimental results that demonstrate its effectiveness."}
{"_id": "d1e8250f3306613d39f2fed435a44b0abb4a0936", "title": "Automated text summarisation and evidence-based medicine: A survey of two domains", "text": "The practice of evidence-based medicine (EBM) urges medical practitioners to utilise the latest research evidence when making clinical decisions. Because of the massive and growing volume of published research on various medical topics, practitioners often find themselves overloaded with information. As such, natural language processing research has recently commenced exploring techniques for performing medical domain-specific automated text summarisation (ATS) techniques-- targeted towards the task of condensing large medical texts. However, the development of effective summarisation techniques for this task requires cross-domain knowledge. We present a survey of EBM, the domain-specific needs for EBM, automated summarisation techniques, and how they have been applied hitherto. We envision that this survey will serve as a first resource for the development of future operational text summarisation techniques for EBM."}
{"_id": "4f28c74c0db152862f9c35a0e16bba88e620f7c7", "title": "Semiautomatic White Blood Cell Segmentation Based on Multiscale Analysis", "text": "This paper approaches novel methods to segment the nucleus and cytoplasm of white blood cells (WBC). This information is the basis to perform higher level tasks such as automatic differential counting, which plays an important role in the diagnosis of different diseases. We explore the image simplification and contour regularization resulting from the application of the selfdual multiscale morphological toggle (SMMT), an operator with scale-space properties. To segment the nucleus, the image preprocessing with SMMT has shown to be essential to ensure the accuracy of two well-known image segmentations techniques, namely, watershed transform and Level-Set methods. To identify the cytoplasm region, we propose two different schemes, based on granulometric analysis and on morphological transformations. The proposed methods have been successfully applied to a large number of images, showing promising segmentation and classification results for varying cell appearance and image quality, encouraging future works."}
{"_id": "5b43b58ae00d6746ccca82c16ca39ca34973c167", "title": "Hemi-cylinder unwrapping algorithm of fish-eye image based on equidistant projection model", "text": "This paper presents a novel fish-eye image unwrapping algorithm based on equidistant projection mode. We discuss a framework to estimate the image distortion center using vanishing points extraction. Then we propose a fish-eye image unwrapping algorithm using hemi-cylinder projection in 3D space. Experimental results show that our algorithm is efficient and effective. In particular, the hemi-cylinder unwrapping results do not reduce the horizontal field of view which is very useful for panoramic surveillance with applications in important sites safety compared with other fish-eye image correction methods."}
{"_id": "0e7122bd7137fd166b278e34db3422d3174aedc7", "title": "III-nitride micro-emitter arrays : development and applications", "text": "III-nitride micro-emitter array technology was developed in the authors\u2019 laboratory around 1999. Since its inception, much progress has been made by several groups and the technology has led to the invention of several novel devices. This paper provides an overview on recent progress in single-chip ac-micro-size light emitting diodes (\u03bcLEDs) that can be plugged directly into standard high ac voltage power outlets, self-emissive microdisplays and interconnected \u03bcLEDs for boosting light emitting diodes\u2019s wall-plug efficiency, all of which were evolved from III-nitride micro-emitter array technology. Finally, potential applications of III-nitride visible micro-emitter arrays as a light source for DNA microarrays and future prospects of III-nitride deep ultraviolet micro-emitter arrays for label-free protein analysis in microarray format by taking advantage of the direct excitation of intrinsic protein fluorescence are discussed. (Some figures in this article are in colour only in the electronic version)"}
{"_id": "131b3fa7f7c1f35fb81e2860523650750d6ff10e", "title": "Collaging on Internal Representations: An Intuitive Approach for Semantic Transfiguration", "text": "We present a novel CNN-based image editing method that allows the user to change the semantic information of an image over a user-specified region. Our method makes this possible by combining the idea of manifold projection with spatial conditional batch normalization (sCBN), a version of conditional batch normalization with userspecifiable spatial weight maps. With sCBN and manifold projection, our method lets the user perform (1) spatial class translation that changes the class of an object over an arbitrary region of user\u2019s choice, and (2) semantic transplantation that transplants semantic information contained in an arbitrary region of the reference image to an arbitrary region in the target image. These two transformations can be used simultaneously, and can realize a complex composite image-editing task like \u201cchange the nose of a beagle to that of a bulldog, and open her mouth\u201d. The user can also use our method with intuitive copy-paste-style manipulations. We demonstrate the power of our method on various images. Code will be available at https://github. com/pfnet-research/neural-collage."}
{"_id": "e0f73e991514450bb0f14f799878d84adc8601f9", "title": "A study of deep convolutional auto-encoders for anomaly detection in videos", "text": "The detection of anomalous behaviors in automated video surveillance is a recurrent topic in recent computer vision research. Depending on the application field, anomalies can present different characteristics and challenges. Convolutional Neural Networks have achieved the state-of-the-art performance for object recognition in recent years, since they learn features automatically during the training process. From the anomaly detection perspective, the Convolutional Autoencoder (CAE) is an interesting choice, since it captures the 2D structure in image sequences during the learning process. This work uses a CAE in the anomaly detection context, by applying the reconstruction error of each frame as an anomaly score. By exploring the CAE architecture, we also propose a method for aggregating high-level spatial and temporal features with the input frames and investigate how they affect the CAE performance. An easy-to-use measure of video spatial complexity was devised and correlated with the classification performance of the CAE. The proposed methods were evaluated by means of several experiments with public-domain datasets. The promising results support further research in this area. \u00a9 2017 Published by Elsevier B.V."}
{"_id": "f3917c8fa5eecb67318bd43c37260560fae531a6", "title": "Novelty Detection via Topic Modeling in Research Articles", "text": "In today\u2019s world redundancy is the most vital problem faced in almost all domains. Novelty detection is the identification of new or unknown data or signal that a machine learning system is not aware of during training. The problem becomes more intense when it comes to \u201cResearch Articles\u201d. A method of identifying novelty at each sections of the article is highly required for determining the novel idea proposed in the research paper. Since research articles are semistructured, detecting novelty of information from them requires more accurate systems. Topic model provides a useful means to process them and provides a simple way to analyze them. This work compares the most predominantly used topic modelLatent Dirichlet Allocation with the hierarchical Pachinko Allocation Model. The results obtained are promising towards hierarchical Pachinko Allocation Model when used for document retrieval."}
{"_id": "6521560a99dd3c4abeec8ad9634e949d5a0e77cd", "title": "Making B+-Trees Cache Conscious in Main Memory", "text": "Previous research has shown that cache behavior is important for main memory index structures. Cache conscious index structures such as Cache Sensitive Search Trees (CSS-Trees) perform lookups much faster than binary search and T-Trees. However, CSS-Trees are designed for decision support workloads with relatively static data. Although B+-Trees are more cache conscious than binary search and T-Trees, their utilization of a cache line is low since half of the space is used to store child pointers. Nevertheless, for applications that require incremental updates, traditional B+-Trees perform well.\nOur goal is to make B+-Trees as cache conscious as CSS-Trees without increasing their update cost too much. We propose a new indexing technique called \u201cCache Sensitive B+-Trees\u201d (CSB+-Trees). It is a variant of B+-Trees that stores all the child nodes of any given node contiguously, and keeps only the address of the first child in each node. The rest of the children can be found by adding an offset to that address. Since only one child pointer is stored explicitly, the utilization of a cache line is high. CSB+-Trees support incremental updates in a way similar to B+-Trees.\nWe also introduce two variants of CSB+-Trees. Segmented CSB+-Trees divide the child nodes into segments. Nodes within the same segment are stored contiguously and only pointers to the beginning of each segment are stored explicitly in each node. Segmented CSB+-Trees can reduce the copying cost when there is a split since only one segment needs to be moved. Full CSB+-Trees preallocate space for the full node group and thus reduce the split cost. Our performance studies show that CSB+-Trees are useful for a wide range of applications."}
{"_id": "c7aa8436c6c9536d53f2fcd24e79795ce8c6ea12", "title": "H-TCP : TCP for high-speed and long-distance networks", "text": "In this paper we present a congestion control protocol that is suitable for deployment in highspeed and long-distance networks. The new protocol, H-TCP, is shown to fair when deployed in homogeneous networks, to be friendly when competing with conventional TCP sources, to rapidly respond to bandwidth as it becomes available, and to utilise link bandwidth in an efficient manner. Further, when deployed in conventional networks, H-TCP behaves as a conventional TCP-variant."}
{"_id": "40bc7c8ff3d188cff3b234191d73098ec3dbb1d1", "title": "A client-side analysis of TLS usage in mobile apps", "text": "As mobile applications become more pervasive, they provide us with a variety of online services that range from social networking to banking and credit card management. Since many of these services involve communicating and handling of private user information \u2013 and also due to increasing security demands from users \u2013 the use of TLS connections has become a necessity for today\u2019s mobile applications. However, an improper use of TLS and failure to adhere to TLS security guidelines by app developers, exposes users to agents performing TLS interception thus giving them a false sense of security. Unfortunately, researchers and users alike lack of information and easy-to-deploy mechanisms to analyze how securely mobile apps implement TLS. Hence, in order to understand and assess the security of mobile app communications, it is crucial to study their use of the TLS protocol. In this poster we present a method to study the use of TLS in mobile apps using the data provided by the ICSI Haystack app [2], a mobile measurement platform that enables on-device analysis of mobile traffic without requiring root access. The unique vantage point provided by the Haystack platform enables a variety of measurements from the edge of the network with real user workload and the added bonus of having contextual information on the device to supplement the data collection."}
{"_id": "b510de361f440c2b3234077d7ad78deb4fefa27a", "title": "The State-of-the-Art in Twitter Sentiment Analysis: A Review and Benchmark Evaluation", "text": "Twitter has emerged as a major social media platform and generated great interest from sentiment analysis researchers. Despite this attention, state-of-the-art Twitter sentiment analysis approaches perform relatively poorly with reported classification accuracies often below 70%, adversely impacting applications of the derived sentiment information. In this research, we investigate the unique challenges presented by Twitter sentiment analysis and review the literature to determine how the devised approaches have addressed these challenges. To assess the state-of-the-art in Twitter sentiment analysis, we conduct a benchmark evaluation of 28 top academic and commercial systems in tweet sentiment classification across five distinctive data sets. We perform an error analysis to uncover the causes of commonly occurring classification errors. To further the evaluation, we apply select systems in an event detection case study. Finally, we summarize the key trends and takeaways from the review and benchmark evaluation and provide suggestions to guide the design of the next generation of approaches."}
{"_id": "b1b99af0353c836ac44cc68c43e3918b0b12c5a2", "title": "A selectional auto-encoder approach for document image binarization", "text": "Binarization plays a key role in the automatic information retrieval from document images. This process is usually performed in the first stages of documents analysis systems, and serves as a basis for subsequent steps. Hence it has to be robust in order to allow the full analysis workflow to be successful. Several methods for document image binarization have been proposed so far, most of which are based on hand-crafted image processing strategies. Recently, Convolutional Neural Networks have shown an amazing performance in many disparate duties related to computer vision. In this paper we discuss the use of convolutional auto-encoders devoted to learning an end-to-end map from an input image to its selectional output, in which activations indicate the likelihood of pixels to be either foreground or background. Once trained, documents can therefore be binarized by parsing them through the model and applying a global threshold. This approach has proven to outperform existing binarization strategies in a number of document types."}
{"_id": "3611b5f7e169f24a9f9c0915ab21a7cc40009ea9", "title": "Converting Pairing-Based Cryptosystems from Composite-Order Groups to Prime-Order Groups", "text": "We develop an abstract framework that encompasses the key properties of bilinear groups of composite order that are required to construct secure pairing-based cryptosystems, and we show how to use prime-order elliptic curve groups to construct bilinear groups with the same properties. In particular, we define a generalized version of the subgroup decision problem and give explicit constructions of bilinear groups in which the generalized subgroup decision assumption follows from the decision DiffieHellman assumption, the decision linear assumption, and/or related assumptions in prime-order groups. We apply our framework and our prime-order group constructions to create more efficient versions of cryptosystems that originally required composite-order groups. Specifically, we consider the Boneh-GohNissim encryption scheme, the Boneh-Sahai-Waters traitor tracing system, and the Katz-Sahai-Waters attribute-based encryption scheme. We give a security theorem for the prime-order group instantiation of each system, using assumptions of comparable complexity to those used in the composite-order setting. Our conversion of the last two systems to prime-order groups answers a problem posed by Groth and Sahai."}
{"_id": "6162ab446003a91fc5d53c3b82739631c2e66d0f", "title": "Enzyme-Enhanced Microbial Fuel Cells", "text": ""}
{"_id": "f87fd4e849775c8b1b1caa8432ca80f09c383923", "title": "Automating Intention Mining", "text": "Developers frequently discuss aspects of the systems they are developing online. The comments they post to discussions form a rich information source about the system. Intention mining, a process introduced by Di Sorbo et al., classifies sentences in developer discussions to enable further analysis. As one example of use, intention mining has been used to help build various recommenders for software developers. The technique introduced by Di Sorbo et al. to categorize sentences is based on linguistic patterns derived from two projects. The limited number of data sources used in this earlier work introduces questions about the comprehensiveness of intention categories and whether the linguistic patterns used to identify the categories are generalizable to developer discussion recorded in other kinds of software artifacts (e.g., issue reports). To assess the comprehensiveness of the previously identified intention categories and the generalizability of the linguistic patterns for category identification, we manually created a new dataset, categorizing 5,408 sentences from issue reports of four projects in GitHub. Based on this manual effort, we refined the previous categories. We assess Di Sorbo et al.\u2019s patterns on this dataset, finding that the accuracy rate achieved is low (0.31). To address the deficiencies of Di Sorbo et al.\u2019s patterns, we propose and investigate a convolution neural network (CNN)-based approach to automatically classify sentences into different categories of intentions. Our approach optimizes CNN by integrating batch normalization to accelerate the training speed, and an automatic hyperparameter tuning approach to tune appropriate hyperparameters of CNN. Our approach achieves an accuracy of 0.84 on the new dataset, improving Di Sorbo et al.\u2019s approach by 171%. We also apply our approach to improve an automated software engineering task, in which we use our proposed approach to rectify misclassified issue reports, thus reducing the bias introduced by such data to other studies. A case study on four open source projects with 2,076 issue reports shows that our approach achieves an average AUC score of 0.687, which improves other baselines by at least 16%."}
{"_id": "463227cf41949ab09747fa890e3b05b375fc0b5b", "title": "Direction matters: hand pose estimation from local surface normals", "text": "We present a hierarchical regression framework for estimating hand joint positions from single depth images based on local surface normals. The hierarchical regression follows the tree structured topology of hand from wrist to finger tips. We propose a conditional regression forest, i.e. the Frame Conditioned Regression Forest (FCRF) which uses a new normal difference feature. At each stage of the regression, the frame of reference is established from either the local surface normal or previously estimated hand joints. By making the regression with respect to the local frame, the pose estimation is more robust to rigid transformations. We also introduce a new efficient approximation to estimate surface normals. We verify the effectiveness of our method by conducting experiments on two challenging real-world datasets and show consistent improvements over previous discriminative pose estimation methods."}
{"_id": "b20001fb32dc5068afdd212c7a19c80a9d50eb3d", "title": "Estimation of induction motor double-cage model parameters from manufacturer data", "text": "This paper presents a numerical method for the estimation of induction motor double-cage model parameters from standard manufacturer data: full load mechanical power (rated power), full load reactive electrical power, breakdown torque, and starting current and torque. A model sensitivity analysis for the various electrical parameters shows that stator resistance is the least significant parameter. The nonlinear equations to be solved for the parameters determination are formulated as a minimization problem with restrictions. The method has been tested with 223 motors from different manufacturers, with an average value of the normalized residual error of 1.39/spl middot/10/sup -2/. The estimated parameters of these motors are graphically represented as a function of the rated power."}
{"_id": "9d3836aaf0c74efd5ec9aa3eda16989026ac9030", "title": "Approximate Inference with Amortised MCMC", "text": "We propose a novel approximate inference algorithm that approximates a target distribution by amortising the dynamics of a user-selected MCMC sampler. The idea is to initialise MCMC using samples from an approximation network, apply the MCMC operator to improve these samples, and finally use the samples to update the approximation network thereby improving its quality. This provides a new generic framework for approximate inference, allowing us to deploy highly complex, or implicitly defined approximation families with intractable densities, including approximations produced by warping a source of randomness through a deep neural network. Experiments consider image modelling with deep generative models as a challenging test for the method. Deep models trained using amortised MCMC are shown to generate realistic looking samples as well as producing diverse imputations for images with regions of missing pixels."}
{"_id": "4cab37444a947e9f592b0d10a01b61b788808ad1", "title": "Standard Compliant Hazard and Threat Analysis for the Automotive Domain", "text": "The automotive industry has successfully collaborated to release the ISO 26262 standard for developing safe software for cars. The standard describes in detail how to conduct hazard analysis and risk assessments to determine the necessary safety measures for each feature. However, the standard does not concern threat analysis for malicious attackers or how to select appropriate security countermeasures. We propose to apply ISO 27001 for this purpose and show how it can be applied together with ISO 26262. We show how ISO 26262 documentation can be re-used and enhanced to satisfy the analysis and documentation demands of the ISO 27001 standard. We illustrate our approach based on an electronic steering column lock system."}
{"_id": "3f200c41618d0c3d75c4cd287b4730aadcf596f7", "title": "PCC: Re-architecting Congestion Control for Consistent High Performance", "text": "TCP and its variants have suffered from surprisingly poor performance for decades. We argue the TCP family has little hope to achieve consistent high performance due to a fundamental architectural deficiency: hardwiring packet-level events to control responses without understanding the real performance result of its actions. We propose Performance-oriented Congestion Control (PCC), a new congestion control architecture in which each sender continuously observes the connection between its actions and empirically experienced performance, enabling it to consistently adopt actions that result in high performance. We prove that PCC converges to a stable and fair equilibrium. Across many real-world and challenging environments, PCC shows consistent and often 10\u00d7 performance improvement, with better fairness and stability than TCP. PCC requires no router hardware support or new packet format."}
{"_id": "47aa3758c0ac35bfb2a3d2bbeff1e0ac28e623c2", "title": "TCP ex machina: computer-generated congestion control", "text": "This paper describes a new approach to end-to-end congestion control on a multi-user network. Rather than manually formulate each endpoint's reaction to congestion signals, as in traditional protocols, we developed a program called Remy that generates congestion-control algorithms to run at the endpoints.\n In this approach, the protocol designer specifies their prior knowledge or assumptions about the network and an objective that the algorithm will try to achieve, e.g., high throughput and low queueing delay. Remy then produces a distributed algorithm---the control rules for the independent endpoints---that tries to achieve this objective.\n In simulations with ns-2, Remy-generated algorithms outperformed human-designed end-to-end techniques, including TCP Cubic, Compound, and Vegas. In many cases, Remy's algorithms also outperformed methods that require intrusive in-network changes, including XCP and Cubic-over-sfqCoDel (stochastic fair queueing with CoDel for active queue management).\n Remy can generate algorithms both for networks where some parameters are known tightly a priori, e.g. datacenters, and for networks where prior knowledge is less precise, such as cellular networks. We characterize the sensitivity of the resulting performance to the specificity of the prior knowledge, and the consequences when real-world conditions contradict the assumptions supplied at design-time."}
{"_id": "f7f72aefdf053df9e7fb2784af66f0e477272b44", "title": "Algorithms for multi-armed bandit problems", "text": "The stochastic multi-armed bandit problem is an important model for studying the explorationexploitation tradeoff in reinforcement learning. Although many algorithms for the problem are well-understood theoretically, empirical confirmation of their effectiveness is generally scarce. This paper presents a thorough empirical study of the most popular multi-armed bandit algorithms. Three important observations can be made from our results. Firstly, simple heuristics such as -greedy and Boltzmann exploration outperform theoretically sound algorithms on most settings by a significant margin. Secondly, the performance of most algorithms varies dramatically with the parameters of the bandit problem. Our study identifies for each algorithm the settings where it performs well, and the settings where it performs poorly. These properties are not described by current theory, even though they can be exploited in practice in the design of heuristics. Thirdly, the algorithms\u2019 performance relative each to other is affected only by the number of bandit arms and the variance of the rewards. This finding may guide the design of subsequent empirical evaluations. In the second part of the paper, we turn our attention to an important area of application of bandit algorithms: clinical trials. Although the design of clinical trials has been one of the principal practical problems motivating research on multi-armed bandits, bandit algorithms have never been evaluated as potential treatment allocation strategies. Using data from a real study, we simulate the outcome that a 2001-2002 clinical trial would have had if bandit algorithms had been used to allocate patients to treatments. We find that an adaptive trial would have successfully treated at least 50% more patients, while significantly reducing the number of adverse effects and increasing patient retention. At the end of the trial, the best treatment could have still been identified with a high level of statistical confidence. Our findings demonstrate that bandit algorithms are attractive alternatives to current adaptive treatment allocation strategies."}
{"_id": "b6e56ec46f0cd27abee15bd3b7d7cf0c13b3b56b", "title": "Dynamic simulation of die pickup from wafer tape by a multi-disc ejector using peel-energy to peel-velocity coupling", "text": "A 2D simulation of thin die peeling from wafer tape is presented for a Multi Disc ejection system. The simulation models the dynamics of peeling, and visualizes time-dependency of peel front propagation and target die stress. It is based on a series of static snapshots, stringed together like a movie. A coupling of peel energy and peel velocity is defined. This allows to calculate the geometry of the actual snapshot by the peel energy of the preceding one. As a result, experimental data of a peel process can be verified. It is shown, why an increase in disc velocity leads to a slow-down of peel front propagation, and thus to a pickup failure."}
{"_id": "4eacf020f7eae7673a746eccdd5819a6a1be9e85", "title": "Historical Document Layout Analysis Competition", "text": "This paper presents an objective comparative evaluation of layout analysis methods for scanned historical documents. It describes the competition (modus operandi, dataset and evaluation methodology) held in the context of ICDAR2011 and the International Workshop on Historical Document Imaging and Processing (HIP2011), presenting the results of the evaluation of four submitted methods. A commercial state-of-the-art system is also evaluated for comparison. Two scenarios are reported in this paper, one evaluating the ability of methods to accurately segment regions and the other evaluating the whole pipeline of segmentation and region classification (with a text extraction goal). The results indicate that there is a convergence to a certain methodology with some variations in the approach. However, there is still a considerable need to develop robust methods that deal with the idiosyncrasies of historical documents."}
{"_id": "88ac44d10f929a7a5d50af2d42c8817cce4f178e", "title": "A compact branch-line coupler using folded microstrip lines", "text": "This paper presents a compact 3 dB Quadrature branch-line coupler using folded microstrip line geometry. Commercially available IE3D software has been used to design and simulate the structure. The proposed structure is simple and takes lesser optimization time. It has been shown that the structure provides about 43.63% compactness compared to the conventional branch line coupler. The structure, therefore, can be incorporated in the microwave circuit design where medium level compactness and faster design are preferred."}
{"_id": "6a2b83c4ae18651f1a3496e48a35b0cd7a2196df", "title": "Top Rank Supervised Binary Coding for Visual Search", "text": "In recent years, binary coding techniques are becoming increasingly popular because of their high efficiency in handling large-scale computer vision applications. It has been demonstrated that supervised binary coding techniques that leverage supervised information can significantly enhance the coding quality, and hence greatly benefit visual search tasks. Typically, a modern binary coding method seeks to learn a group of coding functions which compress data samples into binary codes. However, few methods pursued the coding functions such that the precision at the top of a ranking list according to Hamming distances of the generated binary codes is optimized. In this paper, we propose a novel supervised binary coding approach, namely Top Rank Supervised Binary Coding (Top-RSBC), which explicitly focuses on optimizing the precision of top positions in a Hamming-distance ranking list towards preserving the supervision information. The core idea is to train the disciplined coding functions, by which the mistakes at the top of a Hamming-distance ranking list are penalized more than those at the bottom. To solve such coding functions, we relax the original discrete optimization objective with a continuous surrogate, and derive a stochastic gradient descent to optimize the surrogate objective. To further reduce the training time cost, we also design an online learning algorithm to optimize the surrogate objective more efficiently. Empirical studies based upon three benchmark image datasets demonstrate that the proposed binary coding approach achieves superior image search accuracy over the state-of-the-arts."}
{"_id": "ba2b7fb8730900bace0e73ee17ea216f2bd559bd", "title": "Joint Optimization of Service Function Chaining and Resource Allocation in Network Function Virtualization", "text": "Network function virtualization (NFV) has already been a new paradigm for network architectures. By migrating NFs from dedicated hardware to virtualization platform, NFV can effectively improve the flexibility to deploy and manage service function chains (SFCs). However, resource allocation for requested SFC in NFV-based infrastructures is not trivial as it mainly consists of three phases: virtual network functions (VNFs) chain composition, VNFs forwarding graph embedding, and VNFs scheduling. The decision of these three phases can be mutually dependent, which also makes it a tough task. Therefore, a coordinated approach is studied in this paper to jointly optimize NFV resource allocation in these three phases. We apply a general cost model to consider both network costs and service performance. The coordinate NFV-RA is formulated as a mixed-integer linear programming, and a heuristic-based algorithm (JoraNFV) is proposed to get the near optimal solution. To make the coordinated NFV-RA more tractable, JoraNFV is divided into two sub-algorithms, one-hop optimal traffic scheduling and a multi-path greedy algorithm for VNF chain composition and VNF forwarding graph embedding. Last, extensive simulations are performed to evaluate the performance of JoraNFV, and results have shown that JoraNFV can get a solution within 1.25 times of the optimal solution with reasonable execution time, which indicates that JoraNFV can be used for online NFV planning."}
{"_id": "29f5ecc324e934d21fe8ddde814fca36cfe8eaea", "title": "Using Three Machine Learning Techniques for Predicting Breast Cancer Recurrence", "text": "Introduction Breast cancer (BC) is the most common cancer in women, affecting about 10% of all women at some stages of their life. In recent years, the incidence rate keeps increasing and data show that the survival rate is 88% after five years from diagnosis and 80% after 10 years from diagnosis [1]. Early prediction of breast cancer is one of the most crucial works in the follow-up process. Data mining methods can help to reduce the number of false positive and false negative decisions [2,3]. Consequently, new methods such as knowledge discovery in databases (KDD) has become a popular research tool for medical researchers who try to identify and exploit patterns and relationships among large number of variables, and predict the outcome of a disease using historical cases stored in datasets [4]."}
{"_id": "8fa5f46bfb5869bdda49e44ed821cb2d2f3b2cc2", "title": "Usability evaluation for history educational games", "text": "The potential for integration of digital games and learning becomes ever more significant recently. One of the goals in educational game design is to create engaging and immersive learning experiences for delivering specified learning goals, outcomes and experiences. However, there is limited number of research done on game usability or quality of game user interface. Failure to design usable game interfaces can interfere with the larger goal of creating a compelling experience for users and can have a negative effect on the overall quality and success of a game. In this paper, we review usability problems identified by previous researchers and propose a history educational game design which includes pedagogical and game design component. Some snapshots of our game module are also presented. Finally we present a usability evaluation method for history educational game design. From our critical literature reviews, we also proposed six constructs which are interface, mechanics, gameplay, playability, feedback and immersion for usability evaluation."}
{"_id": "eb95a7e69a34699ac75ad2cdd107a38e322f7f7b", "title": "Characterizing Task Completion Latencies in Fog Computing", "text": "Fog computing, which distributes computing resources to multiple locations between the Internet of Things (IoT) devices and the cloud, is attracting considerable attention from academia and industry. Yet, despite the excitement about the potential of fog computing, few comprehensive quantitative characteristics of the properties of fog computing architectures have been conducted. In this paper we examine the properties of task completion latencies in fog computing. First, we present the results of our empirical benchmarking-based study of task completion latencies. The study covered a range of settings, and uniquely considered both traditional and serverless fog computing execution points. It demonstrated the range of execution point characteristics in different locations and the relative stability of latency characteristics for a given location. It also highlighted properties of serverless execution that are not incorporated in existing fog computing algorithms. Second, we present a framework we developed for co-optimizing task completion quality and latency, which was inspired by the insights of our empirical study. We describe fog computing task assignment problems we formulated under this framework, and present the algorithms we developed for solving them."}
{"_id": "261e841c8e0175586fb193b1a199cefaa8ecf169", "title": "Weighting Finite-State Transductions With Neural Context", "text": "How should one apply deep learning to tasks such as morphological reinflection, which stochastically edit one string to get another? A recent approach to such sequence-to-sequence tasks is to compress the input string into a vector that is then used to generate the output string, using recurrent neural networks. In contrast, we propose to keep the traditional architecture, which uses a finite-state transducer to score all possible output strings, but to augment the scoring function with the help of recurrent networks. A stack of bidirectional LSTMs reads the input string from leftto-right and right-to-left, in order to summarize the input context in which a transducer arc is applied. We combine these learned features with the transducer to define a probability distribution over aligned output strings, in the form of a weighted finite-state automaton. This reduces hand-engineering of features, allows learned features to examine unbounded context in the input string, and still permits exact inference through dynamic programming. We illustrate our method on the tasks of morphological reinflection and lemmatization."}
{"_id": "b3cf089a45f03703ad1ba33872866f40006912e3", "title": "Consciousness and anesthesia.", "text": "When we are anesthetized, we expect consciousness to vanish. But does it always? Although anesthesia undoubtedly induces unresponsiveness and amnesia, the extent to which it causes unconsciousness is harder to establish. For instance, certain anesthetics act on areas of the brain's cortex near the midline and abolish behavioral responsiveness, but not necessarily consciousness. Unconsciousness is likely to ensue when a complex of brain regions in the posterior parietal area is inactivated. Consciousness vanishes when anesthetics produce functional disconnection in this posterior complex, interrupting cortical communication and causing a loss of integration; or when they lead to bistable, stereotypic responses, causing a loss of information capacity. Thus, anesthetics seem to cause unconsciousness when they block the brain's ability to integrate information."}
{"_id": "2bc8b1c38d180cdf11277001d3b3f2f9822a800f", "title": "Lightweight eye capture using a parametric model", "text": "Facial scanning has become ubiquitous in digital media, but so far most efforts have focused on reconstructing the skin. Eye reconstruction, on the other hand, has received only little attention, and the current state-of-the-art method is cumbersome for the actor, time-consuming, and requires carefully setup and calibrated hardware. These constraints currently make eye capture impractical for general use. We present the first approach for high-quality lightweight eye capture, which leverages a database of pre-captured eyes to guide the reconstruction of new eyes from much less constrained inputs, such as traditional single-shot face scanners or even a single photo from the internet. This is accomplished with a new parametric model of the eye built from the database, and a novel image-based model fitting algorithm. Our method provides both automatic reconstructions of real eyes, as well as artistic control over the parameters to generate user-specific eyes."}
{"_id": "2933ec1f51cd30516491f414d424297bd12f2fd2", "title": "OntoVPA - An Ontology-Based Dialogue Management System for Virtual Personal Assistants", "text": "Dialogue management (DM) is a difficult problem. We present OntoVPA, an Ontology-Based Dialogue Management System (DMS) for Virtual Personal Assistants (VPA\u2019s). The features of OntoVPA are offered as potential solutions to core DM problems. We illustrate OntoVPA\u2019s solutions to these problems by means of a running VPA example domain. To the best of our knowledge, OntoVPA is the first commercially available, fully implemented DMS that employs ontologies, reasoning, and ontology-based rules for a) domain model representation and reasoning, b) dialogue representation and state tracking, and c) response generation. OntoVPA is a declarative, knowledge-based system which, consequently, can be customized to a new VPA domain by swapping in and out ontologies and rule bases, with very little to no conventional programming required. OntoVPA relies on its domainindependent (generic), but dialogue-specific upper-level ontologies and DM rules, which are implementing typical, re-occurring (and usually expensive to address) dialogue system core capabilities, such as anaphora (coreference) resolution, slotfilling, inquiring about missing slot values, and so on. We argue that ontologies and ontology-based rules provide a flexible and expressive framework for realizing DMS\u2019s for VPA\u2019s, with a potential to significantly reduce development time."}
{"_id": "1cf424a167f16d8d040cfb0d76b7cbfd87822672", "title": "Deconfounded Lexicon Induction for Interpretable Social Science", "text": "NLP algorithms are increasingly used in computational social science to take linguistic observations and predict outcomes like human preferences or actions. Making these social models transparent and interpretable often requires identifying features in the input that predict outcomes while also controlling for potential confounds. We formalize this need as a new task: inducing a lexicon that is predictive of a set of target variables yet uncorrelated to a set of confounding variables. We introduce two deep learning algorithms for the task. The first uses a bifurcated architecture to separate the explanatory power of the text and confounds. The second uses an adversarial discriminator to force confound-invariant text encodings. Both elicit lexicons from learned weights and attentional scores. We use them to induce lexicons that are predictive of timely responses to consumer complaints (controlling for product), enrollment from course descriptions (controlling for subject), and sales from product descriptions (controlling for seller). In each domain our algorithms pick words that are associated with narrative persuasion; more predictive and less confound-related than those of standard feature weighting and lexicon induction techniques like regression and log odds."}
{"_id": "02c73d24cd48184d53cf20d30682e8e10015848f", "title": "Additively Manufactured Scaffolds for Bone Tissue Engineering and the Prediction of their Mechanical Behavior: A Review", "text": "Additive manufacturing (AM), nowadays commonly known as 3D printing, is a revolutionary materials processing technology, particularly suitable for the production of low-volume parts with high shape complexities and often with multiple functions. As such, it holds great promise for the fabrication of patient-specific implants. In recent years, remarkable progress has been made in implementing AM in the bio-fabrication field. This paper presents an overview on the state-of-the-art AM technology for bone tissue engineering (BTE) scaffolds, with a particular focus on the AM scaffolds made of metallic biomaterials. It starts with a brief description of architecture design strategies to meet the biological and mechanical property requirements of scaffolds. Then, it summarizes the working principles, advantages and limitations of each of AM methods suitable for creating porous structures and manufacturing scaffolds from powdered materials. It elaborates on the finite-element (FE) analysis applied to predict the mechanical behavior of AM scaffolds, as well as the effect of the architectural design of porous structure on its mechanical properties. The review ends up with the authors' view on the current challenges and further research directions."}
{"_id": "7a0e5002fbf02965b30c50540eabcaf6e2117e10", "title": "Discovering evolutionary theme patterns from text: an exploration of temporal text mining", "text": "Temporal Text Mining (TTM) is concerned with discovering temporal patterns in text information collected over time. Since most text information bears some time stamps, TTM has many applications in multiple domains, such as summarizing events in news articles and revealing research trends in scientific literature. In this paper, we study a particular TTM task -- discovering and summarizing the evolutionary patterns of themes in a text stream. We define this new text mining problem and present general probabilistic methods for solving this problem through (1) discovering latent themes from text; (2) constructing an evolution graph of themes; and (3) analyzing life cycles of themes. Evaluation of the proposed methods on two different domains (i.e., news articles and literature) shows that the proposed methods can discover interesting evolutionary theme patterns effectively."}
{"_id": "441d2e918d3471da524619192f734ba8273e3aa2", "title": "SeeNav: Seamless and Energy-Efficient Indoor Navigation using Augmented Reality", "text": "Augmented Reality (AR) based navigation has emerged as an impressive, yet seamless way of guiding users in unknown environments. Its quality of experience depends on many factors, including the accuracy of camera pose estimation, response delay, and energy consumption. In this paper, we present SeeNav - a seamless and energy-efficient AR navigation system for indoor environments. SeeNav combines image-based localization and inertial tracking to provide an accurate and robust camera pose estimation. As vision processing is much more compute intensive than the processing of inertial sensor data, SeeNav offloads the former one from resource-constrained mobile devices to a cloud to improve tracking performance and reduce power consumption. More than that, SeeNav implements a context-aware task scheduling algorithm that further minimizes energy consumption while maintaining the accuracy of camera pose estimation. Our experimental results, including a user study, show that SeeNav provides seamless navigation experience and reduces the overall energy consumption by 21.56% with context-aware task scheduling."}
{"_id": "9f735132119e871da9743ab8af635cd1c9f536af", "title": "Alleviation of Interest flooding attacks using Token bucket with per interface fairness and Countermeasures in Named Data Networking", "text": "Distributed Denial of Service (DDoS) attacks are an ongoing problem in today's Internet. In this paper we focus on DDoS attacks in Named Data Networking (NDN). NDN is a specific candidate for next-generation Internet architecture designs. In NDN locations are named instead of data, So NDN transforms data into a first-class entity and makes itself an attractive and practical approach to meet the needs for many applications. In a Named Data Networking (NDN), end users request data by sending Interest packets, and the network delivers Data packets upon request only, effectively eliminating many existing DDoS attacks. However, an NDN network faces a new type of DDoS attack, namely Interest packet flooding. In this paper we try to alleviate the Interest flooding using token bucket with per interface fairness algorithm and also we try to find countermeasures for Interest flooding attacks"}
{"_id": "003000c999f12997d7cb7d317a13c54b0092da9f", "title": "Metasurfaces for general transformations of electromagnetic fields.", "text": "In this review paper I discuss electrically thin composite layers, designed to perform desired operations on applied electromagnetic fields. Starting from a historical overview and based on a general classification of metasurfaces, I give an overview of possible functionalities of the most general linear metasurfaces. The review is concluded with a short research outlook discussion."}
{"_id": "ece49135b57d3a8b15f374155942ad48056306e7", "title": "Thompson Sampling for Dynamic Pricing", "text": "In this paper we apply active learning algorithms for dynamic pricing in a prominent e-commerce website. Dynamic pricing involves changing the price of items on a regular basis, and uses the feedback from the pricing decisions to update prices of the items. Most popular approaches to dynamic pricing use a passive learning approach, where the algorithm uses historical data to learn various parameters of the pricing problem, and uses the updated parameters to generate a new set of prices. We show that one can use active learning algorithms such as Thompson sampling to more efficiently learn the underlying parameters in a pricing problem. We apply our algorithms to a real e-commerce system and show that the algorithms indeed improve revenue compared to pricing algorithms that use passive learning."}
{"_id": "6befc9fac6846df9f27e91ffbad5de58b77ffef1", "title": "Structured Receptive Fields in CNNs", "text": "Learning powerful feature representations with CNNs is hard when training data are limited. Pre-training is one way to overcome this, but it requires large datasets sufficiently similar to the target domain. Another option is to design priors into the model, which can range from tuned hyperparameters to fully engineered representations like Scattering Networks. We combine these ideas into structured receptive field networks, a model which has a fixed filter basis and yet retains the flexibility of CNNs. This flexibility is achieved by expressing receptive fields in CNNs as a weighted sum over a fixed basis which is similar in spirit to Scattering Networks. The key difference is that we learn arbitrary effective filter sets from the basis rather than modeling the filters. This approach explicitly connects classical multiscale image analysis with general CNNs. With structured receptive field networks, we improve considerably over unstructured CNNs for small and medium dataset scenarios as well as over Scattering for large datasets. We validate our findings on ILSVRC2012, Cifar-10, Cifar-100 and MNIST. As a realistic small dataset example, we show state-of-the-art classification results on popular 3D MRI brain-disease datasets where pre-training is difficult due to a lack of large public datasets in a similar domain."}
{"_id": "7d6a34508b091ba8cde8a403e26ec791325c60d1", "title": "Net!works European Technology Platform Expert Working Group on Smart Cities Applications and Requirements Executive Summary List of Contributors Contributors Company/institute Emcanta, Es Alcatel-lucent/bell Labs, De Gowex, Es List of Acronyms 3d Three Dimensional Api Application Programming Interfa", "text": "Smart Cities gained importance as a means of making ICT enabled services and applications available to the citizens, companies and authorities that are part of a city's system. It aims at increasing citizens' quality of life, and improving the efficiency and quality of the services provided by governing entities and businesses. This perspective requires an integrated vision of a city and of its infrastructures, in all its components. A Smart City can be taken according to six characteristics: All these domains raise new challenges in security and privacy, since users implicitly expect systems to be secure and privacy-preserving. One of the critical elements is which role(s) the city will take up as an actor within an increasingly complex value network. New players enter the market, actors shift their business strategies, roles change, different types of platforms emerge and vie for market dominance, technological developments create new threats and opportunities, etc. An element related to the trend of platformisation is cloud computing, which is increasingly helping the private sector to reduce cost, increase efficiency, and work smarter. One particular challenge relates to open data business models. Activities necessary for Public Sector Information provision can be identified. The development of efficient and effective e-government is a prerequisite. Transnational authentication systems for citizens and businesses, agreed frameworks for data privacy, and the sharing and collection of individual and business data, are key. Smart Cities need to be able to integrate themselves into national, regional and international infrastructures. Although the implementation aspects depend strongly on the authorities of these infrastructures, European wide recommendations and directives will definitely contribute to accelerate the deployment of Smart Cities. Health, inclusion and assisted living will play an essential role, since the demand for related services is rising, because ageing is changing disease composition. Requirements address a number of technologies, beyond the ones related to mobile and fixed networks. An integrated perspective on healthcare solutions for the near-to long-term can be foreseen, bridging a direct gap in between the health area and the technological development of communications (radio and network components). The needs for mobility in urban areas result into a number of problems, such as traffic congestion and energy consumption, which can be alleviated by exploiting Intelligent Transportation Systems and further adoption of vehicle-to-vehicle and vehicle-to-infrastructure communication networks. The information being managed in this area can be relevant to other domains, which increases its potential. An effective deployment \u2026"}
{"_id": "6ada4de6b4c409fe72824bb48020d79aa7e8936a", "title": "Learning to Generate Corrective Patches using Neural Machine Translation", "text": "Bug fixing is generally a manually-intensive task. However, recent work has proposed the idea of automated program repair, which aims to repair (at least a subset of) bugs in different ways such as code mutation, etc. Following in the same line of work as automated bug repair, in this paper we aim to leverage past fixes to propose fixes of current/future bugs. Specifically, we propose Ratchet, a corrective patch generation system using neural machine translation. By learning corresponding pre-correction and post-correction code in past fixes with a neural sequence-to-sequence model, Ratchet is able to generate a fix code for a given bug-prone code query. We perform an empirical study with five open source projects, namely Ambari, Camel, Hadoop, Jetty and Wicket, to evaluate the effectiveness of Ratchet. Our findings show that Ratchet can generate syntactically valid statements 98.7% of the time, and achieve an F1-measure between 0.41-0.83 with respect to the actual fixes adopted in the code base. In addition, we perform a qualitative validation using 20 participants to see whether the generated statements can be helpful in correcting bugs. Our survey showed that Ratchet\u2019s output was considered to be helpful in fixing the bugs on many occasions, even if fix was not 100%"}
{"_id": "4d709d05f3ec6de94601e0642058180ac23dee88", "title": "Colour Dynamic Photometric Stereo for Textured Surfaces", "text": "In this paper we present a novel method to apply photometric stereo on textured dynamic surfaces. We aim at exploiting the high accuracy of photometric stereo and reconstruct local surface orientation from illumination changes. The main difficulty derives from the fact that photometric stereo requires varying illumination while the object remains still, which makes it quite impractical to use for dynamic surfaces. Using coloured lights gives a clear solution to this problem; however, the system of equations is still ill-posed and it is ambiguous whether the change of an observed surface colour is due to the change of the surface gradient or of the surface reflectance. In order to separate surface orientation from reflectance, our method tracks texture changes over time and exploits surface reflectance\u2019s temporal constancy. This additional constraint allows us to reformulate the problem as an energy functional minimisation, solved by a standard quasi-Newton method. Our method is tested both on real and synthetic data, quantitatively evaluated and compared to a state-of-the-art method."}
{"_id": "311e45968b55462ac3dd3fa7466f15857bae5f2c", "title": "All learning is Local: Multi-agent Learning in Global Reward Games", "text": "In large multiagent games, partial observability , coordination, and credit assignmentpersistently plague attempts to design good learning algorithms. We provide a simple and efficient algorithm that in part uses a linear system to model the world fr om a single agent\u2019s limited perspective, and takes advantage of Kalman filtering to allow an agent to construct a good training signal and effectively learn a near-optimal policy in a wide variety of settings.A sequenceof increasinglycomplex empirical tests verifies the efficacy of this technique."}
{"_id": "01ff3819c40d0e962a533a9c924cafbfd7f36775", "title": "Growers versus showers: a meaningful (or useful) distinction?", "text": "Yafi et al. have conducted a study that will be of great interest to the lay community and also of import to practicing urologists who routinely encounter patients with concerns about the appearance of their phallus [1]. In one study 14% of men expressed dissatisfaction with their genitals with flaccid penile length being the most common source of dissatisfaction [2]. The concerns of such men tend to be the perception that their penis is small, or at least smaller than the penises of other men."}
{"_id": "2c3eb8bcfe5f0bf7666a8719d89784a60ffa3c33", "title": "Multi-sensor multi-object tracking of vehicles using high-resolution radars", "text": "Recent advances in automotive radar technology have led to increasing sensor resolution and hence a more detailed image of the environment with multiple measurements per object. This poses several challenges for tracking systems: new algorithms are necessary to fully exploit the additional information and algorithms need to resolve measurement-to-object association ambiguities in cluttered multi-object scenarios. Also, the information has to be fused if multi-sensor setups are used to obtain redundancy and increased fields of view. In this paper, a Labeled Multi-Bernoulli filter for tracking multiple vehicles using multiple high-resolution radars is presented. This finite-set-statistics-based filter tackles all three challenges in a fully probabilistic fashion and is the first Monte Carlo implementation of its kind. The filter performance is evaluated using radar data from an experimental vehicle."}
{"_id": "783a911805ec8fa146ede0f29c4f8d90e42db509", "title": "Efficient occupancy grid computation on the GPU with lidar and radar for road boundary detection", "text": "Accurate maps of the static environment are essential for many advanced driver-assistance systems. A new method for the fast computation of occupancy grid maps with laser range-finders and radar sensors is proposed. The approach utilizes the Graphics Processing Unit to overcome the limitations of classical occupancy grid computation in automotive environments. It is possible to generate highly accurate grid maps in just a few milliseconds without the loss of sensor precision. Moreover, in the case of a lower resolution radar sensor it is shown that it is suitable to apply super-resolution algorithms to achieve the accuracy of a higher resolution laser-scanner. Finally, a novel histogram based approach for road boundary detection with lidar and radar sensors is presented."}
{"_id": "8f69384b197a424dfbd0f60d7c48c110faf2b982", "title": "High resolution maps from wide angle sonar", "text": ""}
{"_id": "9cd1166aa66b0798bf452f48feab44d7082c1bf6", "title": "Detection of parked vehicles from a radar based occupancy grid", "text": "For autonomous parking applications to become possible, knowledge about the parking environment is required. Therefore, a real-time algorithm for detecting parked vehicles from radar data is presented. These data are first accumulated in an occupancy grid from which objects are detected by applying techniques borrowed from the computer vision field. Two random forest classifiers are trained to recognize two categories of objects: parallel-parked vehicles and cross-parked vehicles. Performances of the classifiers are evaluated as well as the capacity of the complete system to detect parked vehicles in real world scenarios."}
{"_id": "014b191f412f8496813d7c358ddd11d8512f2005", "title": "Instantaneous lateral velocity estimation of a vehicle using Doppler radar", "text": "High-resolution image radars open new opportunities for estimating velocity and direction of movement of extended objects from a single observation. Since radar sensors only measure the radial velocity, a tracking system is normally used to determine the velocity vector of the object. A stable velocity is estimated after several frames at the earliest, resulting in a significant loss of time for reacting to certain situations such as cross-traffic. The following paper presents a robust and model-free approach to determine the velocity vector of an extended target. In contrast to the Kalman filter, it does not require data association in time and space. An instant (~ 50 ms) and bias free estimation of its velocity vector is possible. Our approach can handle noise and systematic variations (e.g., micro-Doppler of wheels) in the signal. It is optimized to deal with measurement errors of the radar sensor not only in the radial velocity, but in the azimuth position as well. The accuracy of this method is increased by the fusion of multiple radar sensors."}
{"_id": "dcbbb4509c7256f20ec2ec3c1450cd09290518f5", "title": "Food, livestock production, energy, climate change, and health", "text": "Food provides energy and nutrients, but its acquisition requires energy expenditure. In post-hunter-gatherer societies, extra-somatic energy has greatly expanded and intensified the catching, gathering, and production of food. Modern relations between energy, food, and health are very complex, raising serious, high-level policy challenges. Together with persistent widespread under-nutrition, over-nutrition (and sedentarism) is causing obesity and associated serious health consequences. Worldwide, agricultural activity, especially livestock production, accounts for about a fifth of total greenhouse-gas emissions, thus contributing to climate change and its adverse health consequences, including the threat to food yields in many regions. Particular policy attention should be paid to the health risks posed by the rapid worldwide growth in meat consumption, both by exacerbating climate change and by directly contributing to certain diseases. To prevent increased greenhouse-gas emissions from this production sector, both the average worldwide consumption level of animal products and the intensity of emissions from livestock production must be reduced. An international contraction and convergence strategy offers a feasible route to such a goal. The current global average meat consumption is 100 g per person per day, with about a ten-fold variation between high-consuming and low-consuming populations. 90 g per day is proposed as a working global target, shared more evenly, with not more than 50 g per day coming from red meat from ruminants (ie, cattle, sheep, goats, and other digastric grazers)."}
{"_id": "5e4f7a53314103efb9b1bae9a3189560839afffb", "title": "Message in a Sealed Bottle: Privacy Preserving Friending in Social Networks", "text": "Many proximity-based mobile social networks are developed to facilitate connections between any two people, or to help a user to find people with a matched profile within a certain distance. A challenging task in these applications is to protect the privacy of the participants' profiles and personal interests. In this paper, we design novel mechanisms, when given a preference-profile submitted by a user, that search persons with matching-profile in decentralized multi-hop mobile social networks. Our mechanisms also establish a secure communication channel between the initiator and matching users at the time when the matching user is found. Our rigorous analysis shows that our mechanism is privacy-preserving (no participants' profile and the submitted preference-profile are exposed), verifiable (both the initiator and the unmatched user cannot cheat each other to pretend to be matched), and efficient in both communication and computation. Extensive evaluations using real social network data, and actual system implementation on smart phones show that our mechanisms are significantly more efficient than existing solutions."}
{"_id": "ad6d5e4545c60ec559d27a09fbef13fa538172e1", "title": "A direct scattering model for tracking vehicles with high-resolution radars", "text": "In advanced driver assistance systems and autonomous driving, reliable environment perception and object tracking based on radar is fundamental. High-resolution radar sensors often provide multiple measurements per object. Since in this case traditional point tracking algorithms are not applicable any more, novel approaches for extended object tracking emerged in the last few years. However, they are primarily designed for lidar applications or omit the additional Doppler information of radars. Classical radar based tracking methods using the Doppler information are mostly designed for point tracking of parallel traffic. The measurement model presented in this paper is developed to track vehicles of approximately rectangular shape in arbitrary traffic scenarios including parallel and cross traffic. In addition to the kinematic state, it allows to determine and track the geometric state of the object. Using the Doppler information is an important component in the model. Furthermore, it neither requires measurement preprocessing, data clustering, nor explicit data association. For object tracking, a Rao-Blackwellized particle filter (RBPF) adapted to the measurement model is presented."}
{"_id": "43216287965da7bfbe56b2e50a552cf0ac3144f0", "title": "LIBOL: a library for online learning algorithms", "text": "LIBOL is an open-source library for large-scale online learning, which consists of a large family of efficient and scalable state-of-the-art online learning algorithms for large-scale online classification tasks. We have offered easy-to-use command-line tools and examples for users and developers, and also have made comprehensive documents available for both beginners and advanced users. LIBOL is not only a machine learning toolbox, but also a comprehensive experimental platform for conducting online learning research."}
{"_id": "b6975005a606d58d461eb507cf194c96ad1f05fd", "title": "Basic Concepts of Lexical Resource Semantics", "text": "Semanticists use a range of highly expressive logical langu ges to characterize the meaning of natural language expressions. The logical languages are usually taken from an i nventory of standard mathematical systems, with which generative linguists are familiar. They are, thus, easily a ccessible beyond the borders of a given framework such as Categorial Grammar, Lexical Functional Grammar, or Govern me t and Binding Theory. Linguists working in the HPSG framework, on the other hand, often use rather idiosync ratic and specialized semantic representations. Their choice is sometimes motivated by computational applicatio ns n parsing, generation, or machine translation. Natural ly, the intended areas of application influence the design of sem antic representations. A typical property of semantic representations in HPSG that is concerned with computational a pplications is underspecification, and other properties come from the particular unification or constraint solving a l orithms that are used for processing grammars. While the resulting semantic representations have properties th at are motivated by, and are adequate for, certain practical applications, their relationship to standard languages is sometimes left on an intuitive level. In addition, the theor etical and ontological status of the semantic representations is o ften neglected. This vagueness tends to be unsatisfying to many semanticists, and the idiosyncratic shape of the seman tic representations confines their usage to HPSG. Since their entire architecture is highly dependent on HPSG, hard ly anyone working outside of that framework is interested in studying them. With our work on Lexical Resource Semantics (LRS), we want to contribute to the investigation of a number of important theoretical issues surrounding semantic repres ntations and possible ways of underspecification. While LR S is formulated in a constraint-based grammar environment an d t kes advantage of the tight connection between syntax proper and logical representations that can easily be achie ved in HPSG, the architecture of LRS remains independent from that framework, and combines attractive properties of various semantic systems. We will explore the types of semantic frameworks which can be specified in Relational Spe ciate Re-entrant Language (RSRL), the formalism that we choose to express our grammar principles, and we evaluate the semantic frameworks with respect to their potential for providing empirically satisfactory analyses of typica l problems in the semantics of natural languages. In LRS, we want to synthesize a flexible meta-theory that can be applied to different interesting semantic representation languag es and make computing with them feasible. We will start our investigation with a standard semantic rep resentation language from natural language semantics, Ty2 (Gallin, 1975). We are well aware of the debate about the a ppropriateness of Montagovian-style intensionality for the analysis of natural language semantics, but we belie ve that it is best to start with a semantic representation tha t most generative linguists are familiar with. As will become cl ar in the course of our discussion, the LRS framework is a meta-theory of semantic representations, and we believ e that it is suitable for various representation languages, This paper can be regarded as a snapshot of our work on LRS. 
It w as written as material for the authors\u2019 course Constraint-based Combinatorial Semanticsat the 15th European Summer School in Logic, Language and Inf ormation in Vienna in August 2003. It is meant as background r eading and as a basis of discussion for our class. Its air of a work in progressis deliberate. As we see continued development in LRS, its ap plication to a wider range of languages and empirical phenomena, and espec ially the implementation of an LRS module as a component of th e TRALE grammar development environment; we expect further modifications a d refinements to the theory. The implementation of LRS is rea liz d in collaboration with Gerald Penn of the University of Toronto. We would like t o thank Carmella Payne for proofreading various versions of this paper."}
{"_id": "20cf4cf5bdd8db1ebabe8e6de258deff08d9b855", "title": "Design of a scalable InfiniBand topology service to enable network-topology-aware placement of processes", "text": "Over the last decade, InfiniBand has become an increasingly popular interconnect for deploying modern super-computing systems. However, there exists no detection service that can discover the underlying network topology in a scalable manner and expose this information to runtime libraries and users of the high performance computing systems in a convenient way. In this paper, we design a novel and scalable method to detect the InfiniBand network topology by using Neighbor-Joining techniques (NJ). To the best of our knowledge, this is the first instance where the neighbor joining algorithm has been applied to solve the problem of detecting InfiniBand network topology. We also design a network-topology-aware MPI library that takes advantage of the network topology service. The library places processes taking part in the MPI job in a network-topology-aware manner with the dual aim of increasing intra-node communication and reducing the long distance inter-node communication across the InfiniBand fabric."}
{"_id": "3a09d0f6cd5da2178419d7e6c346ef9f6a82863f", "title": "Techniques to Detect Spammers in Twitter- A Survey", "text": "With the rapid growth of social networking sites for communicating, sharing, storing and managing significant information, it is attracting cybercriminals who misuse the Web to exploit vulnerabilities for their illicit benefits. Forged online accounts crack up every day. Impersonators, phishers, scammers and spammers crop up all the time in Online Social Networks (OSNs), and are harder to identify. Spammers are the users who send unsolicited messages to a large audience with the intention of advertising some product or to lure victims to click on malicious links or infecting user's system just for the purpose of making money. A lot of research has been done to detect spam profiles in OSNs. In this paper we have reviewed the existing techniques for detecting spam users in Twitter social network. Features for the detection of spammers could be user based or content based or both. Current study provides an overview of the methods, features used, detection rate and their limitations (if any) for detecting spam profiles mainly in Twitter."}
{"_id": "c9412e0fe79b51a9289cf1712a52fd4c7e9cd87b", "title": "A New Similarity Measure to Understand Visitor Behavior in a Web Site", "text": "The behavior of visitors browsing in a web site offers a lot of information about their requirements and the way they use the respective site. Analyzing such behavior can provide the necessary information in order to improve the web site\u2019s structure. The literature contains already several suggestions on how to characterize web site usage and to identify the respective visitor requirements based on clustering of visitor sessions. Here we propose to combine visitor behavior with the content of the respective web pages and the similarity between different page sequences in order to define a similarity measure between different visits. This similarity serves as input for clustering of visitor sessions. The application of our approach to a bank\u2019s web site and its visitor sessions shows its potential for internet-based businesses. key words: web mining, browsing behavior, similarity measure, clustering."}
{"_id": "0f899b92b7fb03b609fee887e4b6f3b633eaf30d", "title": "Variational Inference with Normalizing Flows", "text": "The choice of approximate posterior distribution is one of the core problems in variational inference. Most applications of variational inference employ simple families of posterior approximations in order to allow for efficient inference, focusing on mean-field or other simple structured approximations. This restriction has a significant impact on the quality of inferences made using variational methods. We introduce a new approach for specifying flexible, arbitrarily complex and scalable approximate posterior distributions. Our approximations are distributions constructed through a normalizing flow, whereby a simple initial density is transformed into a more complex one by applying a sequence of invertible transformations until a desired level of complexity is attained. We use this view of normalizing flows to develop categories of finite and infinitesimal flows and provide a unified view of approaches for constructing rich posterior approximations. We demonstrate that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provides a clear improvement in performance and applicability of variational inference."}
{"_id": "5028cd9a7003212f307e683252beaf1ce94f72d5", "title": "Testing Django Configurations Using Combinatorial Interaction Testing", "text": "Combinatorial Interaction Testing (CIT) is important because it tests the interactions between the many parameters that make up the configuration space of software systems. We apply this testing paradigm to a Python-based framework for rapid development of webbased applications called Django. In particular, we automatically create a CIT model for Django website configurations and run a state-of-the-art tool for CIT test suite generation to obtain sets of test configurations. Our automatic CIT-based approach is able to efficiently detect invalid configurations."}
{"_id": "41c52fd45b209a1a72f45fe7b99afddd653c5a12", "title": "A Top-Down Compiler for Sentential Decision Diagrams", "text": "The sentential decision diagram (SDD) has been recently proposed as a new tractable representation of Boolean functions that generalizes the influential ordered binary decision diagram (OBDD). Empirically, compiling CNFs into SDDs has yielded significant improvements in both time and space over compiling them into OBDDs, using a bottomup compilation approach. In this work, we present a top-down CNF to SDD compiler that is based on techniques from the SAT literature. We compare the presented compiler empirically to the state-ofthe-art, bottom-up SDD compiler, showing ordersof-magnitude improvements in compilation time."}
{"_id": "5b832f8a5807b66921c151716adfb55fe7fadc15", "title": "Collaboration- and Fairness-Aware Big Data Management in Distributed Clouds", "text": "With the advancement of information and communication technology, data are being generated at an exponential rate via various instruments and collected at an unprecedented scale. Such large volume of data generated is referred to as big data, which now are revolutionizing all aspects of our life ranging from enterprises to individuals, from science communities to governments, as they exhibit great potentials to improve efficiency of enterprises and the quality of life. To obtain nontrivial patterns and derive valuable information from big data, a fundamental problem is how to properly place the collected data by different users to distributed clouds and to efficiently analyze the collected data to save user costs in data storage and processing, particularly the cost savings of users who share data. By doing so, it needs the close collaborations among the users, by sharing and utilizing the big data in distributed clouds due to the complexity and volume of big data. Since computing, storage and bandwidth resources in a distributed cloud usually are limited, and such resource provisioning typically is expensive, the collaborative users require to make use of the resources fairly. In this paper, we study a novel collaboration- and fairness-aware big data management problem in distributed cloud environments that aims to maximize the system throughout, while minimizing the operational cost of service providers to achieve the system throughput, subject to resource capacity and user fairness constraints. We first propose a novel optimization framework for the problem. We then devise a fast yet scalable approximation algorithm based on the built optimization framework. We also analyze the time complexity and approximation ratio of the proposed algorithm. We finally conduct experiments by simulations to evaluate the performance of the proposed algorithm. Experimental results demonstrate that the proposed algorithm is promising, and outperforms other heuristics."}
{"_id": "d6b66236d802a6915077583aa102c03aa170392d", "title": "High-Q 3D RF solenoid inductors in glass", "text": "In this paper, we demonstrate the fabrication and characterization of various 3D solenoid inductors using a glass core substrate. Solenoid inductors were fabricated in glass by drilling through holes in glass and semi-additive copper plating for metallization. This topology is compared to similar solenoid structures in terms of Q-factor performance and inductance density. Inductances of 1.8-4.5nH with Q ~ 60 at 1GHz were demonstrated."}
{"_id": "93f5a28d16e04334fcb71cb62d0fd9b1c68883bb", "title": "Amortized Inference in Probabilistic Reasoning", "text": "Recent studies of probabilistic reasoning have postulated general-purpose inference algorithms that can be used to answer arbitrary queries. These algorithms are memoryless, in the sense that each query is processed independently, without reuse of earlier computation. We argue that the brain operates in the setting of amortized inference, where numerous related queries must be answered (e.g., recognizing a scene from multiple viewpoints); in this setting, memoryless algorithms can be computationally wasteful. We propose a simple form of flexible reuse, according to which shared inferences are cached and composed together to answer new queries. We present experimental evidence that humans exploit this form of reuse: the answer to a complex query can be systematically predicted from a person\u2019s response to a simpler query if the simpler query was presented first and entails a sub-inference (i.e., a sub-component of the more complex query). People are also faster at answering a complex query when it is preceded by a sub-inference. Our results suggest that the astonishing efficiency of human probabilistic reasoning may be supported by interactions between inference and memory."}
{"_id": "acec5113c70016d38605bdbd3e70b1209f5fcecd", "title": "Efficient Phrase-Based Document Similarity for Clustering", "text": "In this paper, we propose a phrase-based document similarity to compute the pair-wise similarities of documents based on the suffix tree document (STD) model. By mapping each node in the suffix tree of STD model into a unique feature term in the vector space document (VSD) model, the phrase-based document similarity naturally inherits the term tf-idf weighting scheme in computing the document similarity with phrases. We apply the phrase-based document similarity to the group-average Hierarchical Agglomerative Clustering (HAC) algorithm and develop a new document clustering approach. Our evaluation experiments indicate that, the new clustering approach is very effective on clustering the documents of two standard document benchmark corpora OHSUMED and RCV1. The quality of the clustering results significantly surpass the results of traditional single-word \\textit{tf-idf} similarity measure in the same HAC algorithm, especially in large document data sets. Furthermore, by studying the property of STD model, we conclude that the feature vector of phrase terms in the STD model can be considered as an expanded feature vector of the traditional single-word terms in the VSD model. This conclusion sufficiently explains why the phrase-based document similarity works much better than the single-word tf-idf similarity measure."}
{"_id": "904229ce8b8d53eb7e7a65dd52df887cf739a4ba", "title": "An Edge Detection Algorithm for Online Image Analysis", "text": "Online image analysis is used in a wide variety of applications. Edge detection is a fundamental tool used to obtain features of objects as a prerequisite step to object segmentation. This paper presents a simple and relatively fast online edge detection algorithm based on second derivative. The proposed edge detector is less sensitive to noise and may be applied on color, gray and binary images without preprocessing requirements. The merits of the algorithm are demonstrated by comparison with Canny\u2019s and Sobel\u2019s edge detectors."}
{"_id": "2966ce6356da4ca4ab9422d9233253dc433b2700", "title": "HermitCore: A Unikernel for Extreme Scale Computing", "text": "We expect that the size and the complexity of future supercomputers will increase on their path to exascale systems and beyond. Therefore, system software has to adapt to the complexity of these systems for a simplification of the development of scalable applications. In this paper, we present a unikernel operating system design for HPC. It extends the multi-kernel approach while providing better programmability and scalability for hierarchical systems, such as HLRS' Hazel Hen, which base on multiple cluster-on-a-chip processors. We prove the scalability of the design via micro benchmarks by taking the example of HermitCore---our prototype implementation of the new design."}
{"_id": "5d37014068ec3113d9b403556c1fdf861bec0162", "title": "Sugar: Secure GPU Acceleration in Web Browsers", "text": "Modern personal computers have embraced increasingly powerful Graphics Processing Units (GPUs). Recently, GPU-based graphics acceleration in web apps (i.e., applications running inside a web browser) has become popular. WebGL is the main effort to provide OpenGL-like graphics for web apps and it is currently used in 53% of the top-100 websites. Unfortunately, WebGL has posed serious security concerns as several attack vectors have been demonstrated through WebGL. Web browsers\u00bb solutions to these attacks have been reactive: discovered vulnerabilities have been patched and new runtime security checks have been added. Unfortunately, this approach leaves the system vulnerable to zero-day vulnerability exploits, especially given the large size of the Trusted Computing Base of the graphics plane. We present Sugar, a novel operating system solution that enhances the security of GPU acceleration for web apps by design. The key idea behind Sugar is using a dedicated virtual graphics plane for a web app by leveraging modern GPU virtualization solutions. A virtual graphics plane consists of a dedicated virtual GPU (or vGPU) as well as all the software graphics stack (including the device driver). Sugar enhances the system security since a virtual graphics plane is fully isolated from the rest of the system. Despite GPU virtualization overhead, we show that Sugar achieves high performance. Moreover, unlike current systems, Sugar is able to use two underlying physical GPUs, when available, to co-render the User Interface (UI): one GPU is used to provide virtual graphics planes for web apps and the other to provide the primary graphics plane for the rest of the system. Such a design not only provides strong security guarantees, it also provides enhanced performance isolation."}
{"_id": "d43d9dfb35e3399b0ef8d842381867a50e7dc349", "title": "3D-stackable crossbar resistive memory based on Field Assisted Superlinear Threshold (FAST) selector", "text": "We report the integration of 3D-stackable 1S1R passive crossbar RRAM arrays utilizing a Field Assisted Superlinear Threshold (FAST) selector. The sneak path issue in crossbar memory integration has been solved using the highest reported selectivity of 1010. Excellent selector performance is presented such as extremely sharp switching slope of <; 5mV/dec., selectivity of 1010, sub-50ns operations, > 100M endurance and processing temperature less than 300\u00b0C. Measurements on the 4Mb 1S1R crossbar array show that the sneak current is suppressed below 0.1nA, while maintaining 102 memory on/off ratio and > 106 selectivity during cycling, enabling high density memory applications."}
{"_id": "e2ad147ca0342795c853ede7491f5c51ebd1d1a3", "title": "Towards detecting fake user accounts in facebook", "text": "People are highly dependent on online social networks (OSNs) which have attracted the interest of cyber criminals for carrying out a number of malicious activities. An entire industry of black-market services has emerged which offers fake accounts based services for sale. We, therefore, in our work, focus on detecting fake accounts on a very popular (and difficult for data collection) online social network, Facebook. Key contributions of our work are as follows. The first contribution has been collection of data related to real and fake accounts on Facebook. Due to strict privacy settings and ever evolving API of Facebook with each version adding more restrictions, collecting user accounts data became a major challenge. Our second contribution is the use of user-feed information on Facebook to understand user profile activity and identifying an extensive set of 17 features which play a key role in discriminating fake users on Facebook with real users. Third contribution is the use these features and identifying the key machine learning based classifiers who perform well in detection task out of a total of 12 classifiers employed. Fourth contribution is the identifying which type of activities (like, comment, tagging, sharing, etc) contribute the most in fake user detection. Results exhibit classification accuracy of 79% among the best performing classifiers. In terms of activities, likes and comments contribute well towards detection task. Although the accuracy is not very high, however, our work forms a baseline for further improvement. Our results indicate that many fake users are classified as real suggesting clearly that fake accounts are mimicking real user behavior to evade detection mechanisms. Our work concludes by enlisting a number of future course of actions that can be undertaken."}
{"_id": "de83aeaea47d40f0838d87637c4843246d1f37c0", "title": "Telemanipulation with force-based display of proximity fields", "text": "In this paper we show and evaluate the design of a novel telemanipulation system that maps proximity values, acquired inside of a gripper, to forces a user can feel through a haptic input device. The command console is complemented by input-devices that give the user an intuitive control over parameters relevant to the system. Furthermore, proximity sensors enable the autonomous alignment/centering of the gripper to objects in user-selected DoFs with the potential of aiding the user and lowering the workload. We evaluate our approach in a user study that shows that the telemanipulation system benefits from the supplementary proximity information and that the workload can indeed be reduced when the system operates with partial autonomy."}
{"_id": "051f689825d4f118a39a286cf72888d2d1a84438", "title": "Learning to Decode Cognitive States from Brain Images", "text": "Over the past decade, functional Magnetic Resonance Imaging (fMRI) has emerged as a powerful new instrument to collect vast quantities of data about activity in the human brain. A typical fMRI experiment can produce a three-dimensional image related to the human subject's brain activity every half second, at a spatial resolution of a few millimeters. As in other modern empirical sciences, this new instrumentation has led to a flood of new data, and a corresponding need for new data analysis methods. We describe recent research applying machine learning methods to the problem of classifying the cognitive state of a human subject based on fRMI data observed over a single time interval. In particular, we present case studies in which we have successfully trained classifiers to distinguish cognitive states such as (1) whether the human subject is looking at a picture or a sentence, (2) whether the subject is reading an ambiguous or non-ambiguous sentence, and (3) whether the word the subject is viewing is a word describing food, people, buildings, etc. This learning problem provides an interesting case study of classifier learning from extremely high dimensional (105 features), extremely sparse (tens of training examples), noisy data. This paper summarizes the results obtained in these three case studies, as well as lessons learned about how to successfully apply machine learning methods to train classifiers in such settings."}
{"_id": "7855705b976b0545067709e5aff54d9d8e2f2c02", "title": "Situation models in language comprehension and memory.", "text": "This article reviews research on the use of situation models in language comprehension and memory retrieval over the past 15 years. Situation models are integrated mental representations of a described state of affairs. Significant progress has been made in the scientific understanding of how situation models are involved in language comprehension and memory retrieval. Much of this research focuses on establishing the existence of situation models, often by using tasks that assess one dimension of a situation model. However, the authors argue that the time has now come for researchers to begin to take the multidimensionality of situation models seriously. The authors offer a theoretical framework and some methodological observations that may help researchers to tackle this issue."}
{"_id": "08c370eb9ba13bfb836349e7f3ea428be4697818", "title": "Factor graphs and the sum-product algorithm", "text": "Algorithms that must deal with complicated global functions of many variables often exploit the manner in which the given functions factor as a product of \u201clocal\u201d functions, each of which depends on a subset of the variables. Such a factorization can be visualized with a bipartite graph that we call afactor graph. In this tutorial paper, we present a generic message-passing algorithm, the sum-product algorithm, that operates in a factor graph. Following a single, simple computational rule, the sum-product algorithm computes\u2014either exactly or approximately\u2014various marginal functions derived from the global function. A wide variety of algorithms developed in artificial intelligence, signal processing, and digital communications can be derived as specific instances of the sum-product algorithm, including the forward/backward algorithm, the Viterbi algorithm, the iterative \u201cturbo\u201d decoding algorithm, Pearl\u2019s belief propagation algorithm for Bayesian networks, the Kalman filter, and certain fast Fourier transform (FFT) algorithms."}
{"_id": "a2ca4a76a7259da6921ab41eae8858513cbb1af1", "title": "Survey propagation: An algorithm for satisfiability", "text": "We study the satisfiability of randomly generated formulas formed by M clauses of exactly K literals over N Boolean variables. For a given value of N the problem is known to be most difficult when \u03b1 = M/N is close to the experimental threshold \u03b1c separating the region where almost all formulas are SAT from the region where all formulas are UNSAT. Recent results from a statistical physics analysis suggest that the difficulty is related to the existence of a clustering phenomenon of the solutions when \u03b1 is close to (but smaller than) \u03b1c. We introduce a new type of message passing algorithm which allows to find efficiently a satisfying assignment of the variables in this difficult region. This algorithm is iterative and composed of two main parts. The first is a message-passing procedure which generalizes the usual methods like Sum-Product or Belief Propagation: It passes messages that may be thought of as surveys over clusters of the ordinary messages. The second part uses the detailed probabilistic information obtained from the surveys in order to fix variables and simplify the problem. Eventually, the simplified problem that remains is solved by a conventional heuristic. \u00a9 2005 Wiley Periodicals, Inc. Random Struct. Alg., 27, 201\u2013226, 2005"}
{"_id": "0e8933300a20f3d799dc9f19e352967f41d8efcc", "title": "The generalized distributive law", "text": "In this semitutorial paper we discuss a general message passing algorithm, which we call the generalized distributive law (GDL). The GDL is a synthesis of the work of many authors in the information theory, digital communications, signal processing, statistics, and artificial intelligence communities. It includes as special cases the Baum\u2013Welch algorithm, the fast Fourier transform (FFT) on any finite Abelian group, the Gallager\u2013Tanner\u2013Wiberg decoding algorithm, Viterbi\u2019s algorithm, the BCJR algorithm, Pearl\u2019s \u201cbelief propagation\u201d algorithm, the Shafer\u2013Shenoy probability propagation algorithm, and the turbo decoding algorithm. Although this algorithm is guaranteed to give exact answers only in certain cases (the \u201cjunction tree\u201d condition), unfortunately not including the cases of GTW with cycles or turbo decoding, there is much experimental evidence, and a few theorems, suggesting that it often works approximately even when it is not supposed to."}
{"_id": "157218bae792b6ef550dfd0f73e688d83d98b3d7", "title": "A recursive approach to low complexity codes", "text": "A bstruciA method is described for constructing long error-correcting codes from one or more shorter error-correcting codes, referred to as subcodes, and a bipartite graph. A graph is shown which specifies carefully chosen subsets of the digits of the new codes that must be codewords in one of the shorter subcodes. Lower bounds to the rate and the minimum distance of the new code are derived in terms of the parameters of the graph and the subcodes. Both the encoders and decoders proposed are shown to take advantage of the code\u2019s explicit decomposition into subcodes to decompose and simplify the associated computational processes. Bounds on the performance of two specific decoding algorithms are established, and the asymptotic growth of the complexity of decoding for two types of codes and decoders is analyzed. The proposed decoders are able to mahe effective use of probabilistic information supplied by the channel receiver, e.g., reliability information, without greatly increasing the number of computations required. It is shown that choosing a transmission order for the digits that is appropriate for the graph and the subcodes can give the code excellent burst-error correction abilities. The construction principles are illustrated by several examples."}
{"_id": "5af12524db0186bcdcafb7a342556d421cf342bf", "title": "Exploiting Passive Dynamics with Variable Stiffness Actuation in Robot Brachiation", "text": "This paper explores a passive control strategy with variable stiffness actuation for swing movements. We consider brachiation as an example of a highly dynamic task which requires exploitation of gravity in an efficient manner for successful task execution. First, we present our passive control strateg y considering a pendulum with variable stiffness actuation. Then, we formulate the problem based an optimal control framework with temporal optimization in order to simultaneously find an appropriate stiffness profile and movement duration such that the resultant movement will be able to exploit the passive dynamics of the robot. Finally, numerical evaluations on a twolink brachiating robot with a variable stiffness actuator (VSA) model are provided to demonstrate the effectiveness of our approach under different task requirements, modelling errors and switching in the robot dynamics. In addition, we discuss the issue of task description in terms of the choice of cost function for successful task execution in optimal control."}
{"_id": "3056721cc51032f84791a2d7469601afb65b12ee", "title": "Arabic Script Recognition : A Survey", "text": "Optical character recognition (OCR) is essential in various real-world applications, such as digitizing learning resources to assist visually impaired people and transforming printed resources into electronic media. However, the development of OCR for printed Arabic script is a challenging task. These challenges are due to the specific characteristics of Arabic script. Therefore, different methods have been proposed for developing Arabic OCR systems, and this paper aims to provide a comprehensive review of these methods. This paper also discusses relevant issues of printed Arabic OCR including the challenges of printed Arabic script and performance evaluation. It concludes with a discussion of the current status of printed Arabic OCR, analyzing the remaining problems in the field of printed Arabic OCR and providing several directions for future research. Keywords\u2014Optical character recognition; arabic printed OCR; arabic text recognition; arabic OCR survey; feature extraction; segmentation; classification"}
{"_id": "2e87c0613077f7faeee45bbf5f9d209017e30771", "title": "SIFT-based local spectrogram image descriptor: a novel feature for robust music identification", "text": "Music identification via audio fingerprinting has been an active research field in recent years. In the real-world environment, music queries are often deformed by various interferences which typically include signal distortions and time-frequency misalignments caused by time stretching, pitch shifting, etc. Therefore, robustness plays a crucial role in music identification technique. In this paper, we propose to use scale invariant feature transform (SIFT) local descriptors computed from a spectrogram image as sub-fingerprints for music identification. Experiments show that these sub-fingerprints exhibit strong robustness against serious time stretching and pitch shifting simultaneously. In addition, a locality sensitive hashing (LSH)-based nearest sub-fingerprint retrieval method and a matching determination mechanism are applied for robust sub-fingerprint matching, which makes the identification efficient and precise. Finally, as an auxiliary function, we demonstrate that by comparing the time-frequency locations of corresponding SIFT keypoints, the factor of time stretching and pitch shifting that music queries might have experienced can be accurately estimated."}
{"_id": "2f5f689cb289878543ae86c78b809ab924983e46", "title": "Improving the Coverage and Spectral Efficiency of Millimeter-Wave Cellular Networks Using Device-to-Device Relays", "text": "The susceptibility of millimeter waveform propagation to blockages limits the coverage of millimeter-wave (mmWave) signals. To overcome blockages, we propose to leverage two-hop device-to-device (D2D) relaying. Using stochastic geometry, we derive expressions for the downlink coverage probability of relay-assisted mmWave cellular networks when the D2D links are implemented in either uplink mmWave or uplink microwave bands. We further investigate the spectral efficiency (SE) improvement in the cellular downlink, and the effect of D2D transmissions on the cellular uplink. For mmWave links, we derive the coverage probability using dominant interferer analysis while accounting for both blockages and beamforming gains. For microwave D2D links, we derive the coverage probability considering both line-of-sight and non-line-of-sight (NLOS) propagation. Numerical results show that downlink coverage and SE can be improved using two-hop D2D relaying. Specifically, microwave D2D relays achieve better coverage because D2D connections can be established under NLOS conditions. However, mmWave D2D relays achieve better coverage when the density of interferers is large because blockages eliminate interference from NLOS interferers. The SE on the downlink depends on the relay mode selection strategy, and mmWave D2D relays use a significantly smaller fraction of uplink resources than microwave D2D relays."}
{"_id": "a0cc11865ca9f456724eacb711550bd49fa507d1", "title": "From Characters to Understanding Natural Language ( C 2 NLU ) : Robust End-to-End Deep Learning for NLP Organized by", "text": "This report documents the program and the outcomes of Dagstuhl Seminar 17042 \u201cFrom Characters to Understanding Natural Language (C2NLU): Robust End-to-End Deep Learning for NLP\u201d. The seminar brought together researchers from different fields, including natural language processing, computational linguistics, deep learning and general machine learning. 31 participants from 22 academic and industrial institutions discussed advantages and challenges of using characters, i.e., \u201craw text\u201d, as input for deep learning models instead of language-specific tokens. Eight talks provided overviews of different topics, approaches and challenges in current natural language processing research. In five working groups, the participants discussed current natural language processing/understanding topics in the context of character-based modeling, namely, morphology, machine translation, representation learning, end-to-end systems and dialogue. In most of the discussions, the need for a more detailed model analysis was pointed out. Especially for character-based input, it is important to analyze what a deep learning model is able to learn about language \u2013 about tokens, morphology or syntax in general. For an efficient and effective understanding of language, it might furthermore be beneficial to share representations learned from multiple objectives to enable the models to focus on their specific understanding task instead of needing to learn syntactic regularities of language first. Therefore, benefits and challenges of transfer learning were an important topic of the working groups as well as of the panel discussion and the final plenary discussion. Seminar January 22\u201327, 2017 \u2013 http://www.dagstuhl.de/17042 1998 ACM Subject Classification I.2 Artificial Intelligence, I.2.7 Natural Language Processing"}
{"_id": "ccb0c0394bca75a59b2a265acdaaf47b8f86c7ce", "title": "Intentional behaviour in dog-human communication: an experimental analysis of \u201cshowing\u201d behaviour in the dog", "text": "Despite earlier scepticism there is now evidence for simple forms of intentional and functionally referential communication in many animal species. Here we investigate whether dogs engage in functional referential communication with their owners. \u201cShowing\u201d is defined as a communicative action consisting of both a directional component related to an external target and an attention-getting component that directs the attention of the perceiver to the informer or sender. In our experimental situation dogs witness the hiding of a piece of food (or a favourite toy) which they cannot get access to. We asked whether dogs would engage in \u201cshowing\u201d in the presence of their owner. To control for the motivational effects of both the owner and the food on the dogs\u2019 behaviour, control observations were also staged where only the food (or the toy) or the owner was present. Dogs\u2019 gazing frequency at both the food (toy) and the owner was greater when only one of these was present. In other words, dogs looked more frequently at their owner when the food (toy) was present, and they looked more at the location of the food (toy) when the owner was present. When both the food (toy) and the owner were present a new behaviour, \u201cgaze alternation\u201d, emerged which was defined as changing the direction of the gaze from the location of the food (toy) to looking at the owner (or vice versa) within 2 s. Vocalisations that occurred in this phase were always associated with gazing at the owner or the location of the food. This behaviour, which was specific to this situation, has also been described in chimpanzees, a gorilla and humans, and has often been interpreted as a form of functionally referential communication. Based on our observations we argue that dogs might be able to engage in functionally referential communication with their owner, and their behaviour could be described as a form of \u201cshowing\u201d. The contribution of domestication and individual learning to the well-developed communicative skills in dogs is discussed and will be the subject of further studies."}
{"_id": "f4c5f7bdf3f7ce924cd42f26d2a9eb97ab8da4a3", "title": "Full-resolution interactive CPU volume rendering with coherent BVH traversal", "text": "We present an efficient method for volume rendering by raycasting on the CPU. We employ coherent packet traversal of an implicit bounding volume hierarchy, heuristically pruned using preintegrated transfer functions, to exploit empty or homogeneous space. We also detail SIMD optimizations for volumetric integration, trilinear interpolation, and gradient lighting. The resulting system performs well on low-end and laptop hardware, and can outperform out-of-core GPU methods by orders of magnitude when rendering large volumes without level-of-detail (LOD) on a workstation. We show that, while slower than GPU methods for low-resolution volumes, an optimized CPU renderer does not require LOD to achieve interactive performance on large data sets."}
{"_id": "6b8966f0e8f368fea84d6c3e135011fc2b050312", "title": "Combinatorial codes in ventral temporal lobe for object recognition: Haxby (2001) revisited: is there a \u201cface\u201d area?", "text": "Haxby et al. [Science 293 (2001) 2425] recently argued that category-related responses in the ventral temporal (VT) lobe during visual object identification were overlapping and distributed in topography. This observation contrasts with prevailing views that object codes are focal and localized to specific areas such as the fusiform and parahippocampal gyri. We provide a critical test of Haxby's hypothesis using a neural network (NN) classifier that can detect more general topographic representations and achieves 83% correct generalization performance on patterns of voxel responses in out-of-sample tests. Using voxel-wise sensitivity analysis we show that substantially the same VT lobe voxels contribute to the classification of all object categories, suggesting the code is combinatorial. Moreover, we found no evidence for local single category representations. The neural network representations of the voxel codes were sensitive to both category and superordinate level features that were only available implicitly in the object categories."}
{"_id": "03acfc428d38e4e6136513658695cfc1956b5945", "title": "Machine-Learning Research Four Current Directions", "text": "Machine Learning research has been making great progress in many directions This article summarizes four of these directions and discusses some current open problems The four directions are a improving classi cation accuracy by learning ensembles of classi ers b methods for scaling up supervised learning algorithms c reinforcement learning and d learning complex stochastic models"}
{"_id": "5b93ae450e65b6ae70c58dad3a9e754161374b50", "title": "A Study of Fonts Designed for Screen Display", "text": "This study examined the readabiity.and subjective preferences of a set of fonts designed for screen display. Two new binary bitmap fonts performed well, suggesting that designers hould consider incorporating similar attributes into default fonts for online type."}
{"_id": "965f8bb9a467ce9538dec6bef57438964976d6d9", "title": "Recognizing human faces under disguise and makeup", "text": "The accuracy of automated human face recognition algorithms can significantly degrade while recognizing same subjects under make-up and disguised appearances. Increasing constraints on enhanced security and surveillance requires enhanced accuracy from face recognition algorithms for faces under disguise and/or makeup. This paper presents a new database for face images under disguised and make-up appearances the development of face recognition algorithms under such covariates. This database has 2460 images from 410 different subjects and is acquired under real environment, focuses on make-up and disguises covariates and also provides ground truth (eye glass, goggle, mustache, beard) for every image. This can enable developed algorithms to automatically quantify their capability for identifying such important disguise attribute during the face recognition We also present comparative experimental results from two popular commercial matchers and from recent publications. Our experimental results suggest significant performance degradation in the capability of these matchers in automatically recognizing these faces. We also analyze face detection accuracy from these matchers. The experimental results underline the challenges in recognizing faces under these covariates. Availability of this new database in public domain will help to advance much needed research and development in recognizing make-up and disguised faces."}
{"_id": "fe8a9985d8b2df7a4d78e02beaa234084e3790b9", "title": "2 Saddles in Deep Networks : Background and Motivation", "text": "Recent years have seen a growing interest in understanding deep neural networks from an optimization perspective. It is understood now that converging to low-cost local minima is sufficient for such models to become effective in practice. However, in this work, we propose a new hypothesis based on recent theoretical findings and empirical studies that deep neural network models actually converge to saddle points with high degeneracy. Our findings from this work are new, and can have a significant impact on the development of gradient descent based methods for training deep networks. We validated our hypotheses using an extensive experimental evaluation on standard datasets such as MNIST and CIFAR-10, and also showed that recent efforts that attempt to escape saddles finally converge to saddles with high degeneracy, which we define as \u2018good saddles\u2019. We also verified the famous Wigner\u2019s Semicircle Law in our experimental results."}
{"_id": "b044d249455e315a9642b2265ede035c4d9b390f", "title": "The happy and unhappy faces of narcissism", "text": "Several theorists have argued in favor of a distinction between overt and covert narcissism, and factor analytic studies have supported this distinction. In this paper I demonstrate that overt narcissists report higher self-esteem and higher satisfaction with life, whereas covert narcissists report lower self-esteem and lower satisfaction with life. I also present mediational models to explain why overt narcissists are relatively happy and covert narcissists are relatively unhappy. In analyses using both partial correlations and structural equation modeling, self-esteem consistently mediated the associations between both types of narcissism and happiness, whereas self-deception did not. These results further demonstrate some of the selfcentered benefits associated with overt narcissism and some of the strong psychological costs associated with covert narcissism. # 2002 Elsevier Science Ltd. All rights reserved."}
{"_id": "f8e32c5707df46bfcd683f723ad27d410e7ff37d", "title": "Some Comments on Cp", "text": ""}
{"_id": "7a87289fb660d729bf0d9c9417ffb58cdea15450", "title": "A Research on Iron Loss of IPMSM With a Fractional Number of Slot Per Pole", "text": "In this paper, we investigated rotor iron loss of interior permanent magnet synchronous machine (IPMSM), which has distributed armature windings. In order to study the iron loss with the effect of slot-pole combination, two machines for high-speed operation such as electric vehicle were designed. One is fractional slot winding and the other is conventional one. In the analysis, we developed a new iron loss model of electrical machines for high-speed operation. The calculated iron loss was compared with the experimental data. It was clarified that the proposed method can estimate iron loss effectively at high-speed operation. Based on this newly proposed method, we analyzed iron loss of the two machines according to their driving conditions. From the analysis results, it was shown that rotor iron loss of machine employing fractional slot-winding is definitely large at load condition. In addition, it is interesting that the ratio (rotor iron loss/stator iron loss) becomes larger as the speed of the machines increase if the number of slot per pole is fractional."}
{"_id": "89fb51f228b2e466cf690028ec96940f3d7b4fb0", "title": "Accelerometer based wireless air mouse using Arduino micro-controller board", "text": "With the day to day advancements in technology, the interaction between the human and the digital world is diminishing. Lot of improvement has taken place in the mobile technology from button keypads to touch screens. However the current PCs still need a pad to operate its mouse and are wired most of the time. The idea is to develop a wireless mouse which works on the hand gestures of the user without the need of a pad making an effortless interaction between the human and the computer. The implementation is done using a sensor named accelerometer to sense the hand gestures. Accelerometer is a motion sensor which sense changes in motion in any of the three axes. In here the accelerometer will be at the users side, attached to the hand to sense the movement and gives the output to a micro-controller to process it. Necessary modifications are done by the microcontroller to these values and are transmitted through a RF module to the PC. At the receiving end a mouse control program which contains functions to control the mouse reads these values and performs the necessary action."}
{"_id": "7ff77bd352aceaea250bab51d0c09e5b4ed7fc25", "title": "Multisensor Fusion and Integration: A Review on Approaches and Its Applications in Mechatronics", "text": "The objective of this paper is to review the theories and approaches of multisensor fusion and integration (MFI) with its application in mechatronics. MFI helps the system perceiving changes of the environment and monitoring the system itself. Since each individual sensor has its own inherent defects and limitations, MFI merges the redundant information acquired by multiple sensors synergistically to provide a more accurate perception and make an optimal decision in further. The wide application spectrum of MFI in mechatronics includes the industrial automation, the development of intelligent robots, military applications, biomedical applications, etc. In this paper, the architecture and algorithms of MFI are reviewed, and some implementation examples in industrial automation and robotic applications are presented. Furthermore, sensor fusion methods at different levels, namely, estimation methods, classification methods and inference methods, the most frequently used algorithms in previous researches with their advantages and limitations are summarized. Applications of MFI in robotics and mechatronics are discussed. Future perspectives of MFI deployment are included in the concluding remarks."}
{"_id": "eb639811df16f49bbe44948a9bf6e937eef47d8b", "title": "E-shopping in a multiple channel environment", "text": "In the present study, the authors propose a segmentation schema based on patterns of e-browsing and e-purchasing. We examine self-reports of browsing and purchasing using five specific non-store channels: the Internet, television infomercials, advertising that accompanies regular television programming, television shopping channels , and print catalogs. Our findings indicate that shoppers who browse and/or purchase on the Internet differ in their use of multi-channel options related to their perceptions of convenience . Some shoppers clearly want to purchase in the store setting and reject multiple forms of non-store shopping. Others like to browse various non-store media and have extended their browsing to the Internet, yet maintain their loyalty to in-store purchases. Retailers who attempt to `\u0300 convert\u2019\u2019 such shoppers to Internet-only purchasing may alienate the shoppers who rely on the Internet solely for information. Introduction The explosive growth of the Internet has revolutionized many aspects of daily life (Fetto, 1999; Rutledge, 2000). Recent statistics tell us that people the world over are using the Internet in ever-increasing numbers, with estimates ranging from 505 million (Global Reach, 2001) to 513.41 million people online throughout the world (NUA Ltd, 2001). There is much to be learned about how the Internet fits in people\u2019s lives, how they use it as part of a set of choices, and what deters them from using it for certain purposes, such as making purchases. Despite increased use of the Web, recent industry studies have documented problems such as an ongoing trend in online shopping cart `\u0300 abandonment\u2019\u2019 in which apparent planned purchases are never completed online (Hurwicz, 1999). In fact, substantial numbers of online shoppers return to physical stores after experiencing problems with slow load times, an inability to locate items, incomplete information, lack of human interaction, and missed or late deliveries (Mardesich, 1999; McCarthy, 2000). Failures with account setups and confusing error messages also caused about 40 percent of shoppers to have problems during checkout (Enos, 2000). While consumers have verified these reasons, we argue that another reason may explain the apparent abandonment that takes place. That is, some Internet browsers may have never intended to complete their purchases online, preferring to shop in a bricks and mortar setting. Perhaps the notion of `\u0300 abandonment\u2019\u2019 is an oversimplification. Some consumers may simply use shopping carts to investigate and tally possible future purchases, with no intent to purchase at the specific time that they are online. The research register for this journal is available at http://www.emeraldinsight.com/researchregisters The current issue and full text archive of this journal is available at http://www.emeraldinsight.com/0736-3761.htm Explosive growth Failure to complete purchases JOURNAL OF CONSUMER MARKETING, VOL. 19 NO. 4 2002, pp. 333-350, # MCB UP LIMITED, 0736-3761, DOI 10.1108/07363760210433645 333 An executive summary for managers and executive readers can be found at the end of this article The purpose of the present study is to empirically investigate the connections among Internet `\u0300 use\u2019\u2019 and other non-store media options in a multi-channel environment. 
In the present study, the authors propose a segmentation schema based on patterns of e-browsing and e-purchasing, including browsing on the Internet with planned purchasing in an offline channel. Using that schema, our study examines the self-reports of browsing and purchasing of 250 Internet users with respect to five specific non-store media: the Internet, television infomercials, advertising that accompanies regular television programming, television shopping channels, and print catalogs. Our findings indicate that shoppers who browse and/or purchase on the Internet differ in their use of multi-channel options related to their perceptions of convenience. Some shoppers clearly want to purchase in the store setting and reject multiple forms of non-store shopping. Others like to browse various non-store media and have extended their browsing to the Internet, yet maintain their loyalty to in-store purchases. Background The growth and potential of the Internet There is a need for a \u201cbetter understanding of the Web user\u201d and e-shopping (Korgaonkar and Wolin, 1999). Current studies vary in predicting the characteristics of Web users, ranging from male, well-educated, middle income and middle-aged or younger (Emmanouilides and Hammond, 2000; Korgaonkar and Wolin, 1999; Wilson, 2000), to substantial increases among women and the elderly (Harris, 1998; Rosen and Howard, 2000; Rosen and Weil, 1995). Recent studies by Media Metrix and Jupiter Communications indicate that more women are online than men, with the greatest increases among teenaged girls and women over 55 (Hamilton, 2000). Many studies have found that typical online buyers have used the Web for several years, and because of their familiarity, they search online for product information and purchase options (Bellman et al., 1999). In many cases, convenience and time-management are the key drivers among e-shoppers (Chang and McFarland, 1999), while some \u201cwired but wary\u201d use the Internet for e-mail, but do not shop online. Still others have no regular Internet access at all. Many studies examine Internet shopping exclusively in determining consumer usage and satisfaction (Szymanski and Hise, 2000; VanTassel and Weitz, 1997). However, it is not likely that shoppers view the Internet as separate and distinct from other more familiar forms of shopping. We argue that it is more realistic to examine e-shopping in the context of multi-channel alternatives that are also available to shoppers (Grant, 2000; Greco, 1996). The interrelationships among various types of non-store methods are not well understood, nor are their impacts on \u201cbricks and mortar\u201d retail outlets (Achenbaum, 1999). We also contend that insights can be gained by organizing the study of e-shopping based on sound theoretical relationships that have been used to describe other components of the multiple channel environment. We turn to the literature on recreational shopping and convenience in order to build a foundation for the study. Browsing and purchasing Some studies have attempted to understand whether browsing on the Internet is correlated with purchasing on the Internet (Lindquist and Kaufman-Scarborough, 2000). It is questionable whether a perfect match is possible, since some shoppers enjoy browsing as a separate activity, while others buy without browsing if their choice is clear and determined in advance. 
Because of problems like security fears, lack of skill with computers, slow response time by e-tailers, and confusing Web sites, studies report that numerous shoppers use non-store methods to search and compare, while going to the \u201cbricks and mortar\u201d setting to make their purchases (Koprowski, 2000; Levy and Nilson, 1999). Earlier pre-Internet studies investigated why people shop, why they go to the store, and why they look but do not buy. For instance, Tauber (1972, 1995) went beyond retail patronage and demonstrated that people have numerous motives for shopping that are unrelated to the actual purchasing of products. The shoppers in his sample reported that their shopping trips included carrying out expected roles, diversion from daily routine, self-gratification and response to moods, learning about new trends, physical activity, sensory stimulation, meeting others with similar interests, interaction with peer groups, and the pleasure of bargaining. Such motives are likely to result in browsing that does not necessarily lead to purchasing. Other studies have identified persons as \u201crecreational shoppers\u201d, who enjoy shopping as a leisure activity and tend to browse in retail outlets \u201cwithout an upcoming purchase in mind\u201d (Bellinger and Korgaonkar, 1980; Ohanian and Tashchian, 1992). Such shoppers report being interested in gaining knowledge about specific product classes and actively seek information about topics such as merchandise, prices, and quality. Such shoppers do not generally gather such information in preparation for an upcoming purchase, but instead appear to enjoy gathering information for its own sake. They engage in word-of-mouth activities more than other shoppers (Bloch and Richins, 1983), and enjoy giving advice and influencing other consumers. These earlier researchers explicitly considered browsing and shopping behaviors in the \u201cbricks and mortar\u201d setting. We suggest that \u201crecreational e-shoppers\u201d are also likely to virtually \u201cstroll\u201d through online shopping sites for learning, social, or diversion-related purposes. Recreational e-shoppers may also enjoy gathering online information and share their knowledge through online chat rooms and buyer forums. If a comparable pattern of information gathering and sharing exists, browsers who do not \u201cconvert\u201d to purchasers may be found to exhibit similar characteristics to in-store recreational shoppers who may be lonely, bored, or simply curious. Browsing convenience versus purchasing convenience Surveys of customers indicate their frustration with the lack of convenience provided by \u201cbricks and mortar\u201d stores. They report problems with crowded store conditions, out of stock merchandise, and poorly-trained salespersons, prompting shoppers to search for more favorable ways to browse and to purchase. In fact, retailers have been criticized for developing in-store strategies based on their own convenience, rather than that of their customers (Seiders et al., 2000). Convenience is a more complex notion than simply providing quick checkouts or locations close to home. In fact, shoppers are thought to clearly differentiate among various dimensions of convenience"}
{"_id": "effaec031a97ae64f965d0f0616438774fc11bc5", "title": "A Neural Network Color Classifier in HSV Color Space", "text": "In this paper, a neural network approach for color classification in HSV color space based on color JND (just noticeable difference) is presented. The HSV color samples are generated using an interactive tool based on JND concept and these samples are used for supervised training. A four layer feed forward neural network is trained for classifying a given color pair as similar or dissimilar pair. An interactive tool for color pixel comparison is developed for testing the color classifier on real life images. Research shows that neural network classifier in HSV color space works better than RGB classifier in terms of efficiency for segmentation purposes. Thus the experimentation results can be used for segmenting real life images."}
{"_id": "29d0c439241e65c51e19f7bd9430f50e900a6e32", "title": "F-DES: Fast and Deep Event Summarization", "text": "In the multimedia era a large volume of video data can be recorded during a certain period of time by multiple cameras. Such a rapid growth of video data requires both effective and efficient multiview video summarization techniques. The users can quickly browse and comprehend a large amount of audiovisual data. It is very difficult in real-time to manage and access the huge amount of video-content-handling issues of interview dependencies significant variations in illumination and presence of many unimportant frames with low activity. In this paper we propose a local-alignment-based FASTA approach to summarize the events in multiview videos as a solution of the aforementioned problems. A deep learning framework is used to extract the features to resolve the problem of variations in illumination and to remove fine texture details and detect the objects in a frame. Interview dependencies among multiple views of video are then captured via the FASTA algorithm through local alignment. Finally object tracking is applied to extract the frames with low activity. Subjective as well as objective evaluations clearly indicate the effectiveness of the proposed approach. Experiments show that the proposed summarization method successfully reduces the video content while keeping momentous information in the form of events. A computing analysis of the system also shows that it meets the requirement of real-time applications."}
{"_id": "c504c88dbea0c1fe9383710646c8180ef44b9bc9", "title": "Image segmentation via adaptive K-mean clustering and knowledge-based morphological operations with biomedical applications", "text": "Image segmentation remains one of the major challenges in image analysis. In medical applications, skilled operators are usually employed to extract the desired regions that may be anatomically separate but statistically indistinguishable. Such manual processing is subject to operator errors and biases, is extremely time consuming, and has poor reproducibility. We propose a robust algorithm for the segmentation of three-dimensional (3-D) image data based on a novel combination of adaptive K-mean clustering and knowledge-based morphological operations. The proposed adaptive K-mean clustering algorithm is capable of segmenting the regions of smoothly varying intensity distributions. Spatial constraints are incorporated in the clustering algorithm through the modeling of the regions by Gibbs random fields. Knowledge-based morphological operations are then applied to the segmented regions to identify the desired regions according to the a priori anatomical knowledge of the region-of-interest. This proposed technique has been successfully applied to a sequence of cardiac CT volumetric images to generate the volumes of left ventricle chambers at 16 consecutive temporal frames. Our final segmentation results compare favorably with the results obtained using manual outlining. Extensions of this approach to other applications can be readily made when a priori knowledge of a given object is available."}
{"_id": "c2912ac3c918f3dbd997e2a3454b6b8b6b17c37f", "title": "UTILIZING A GAME THEORETICAL APPROACH TO PREVENT COLLUSION AND INCENTIVIZE COOPERATION IN", "text": "Author: Arash Golchubian Title: Utilizing a Game Theoretical Approach to Prevent Collusion and Incentivize Cooperation in Cybersecurity Contexts Institution: Florida Atlantic University Thesis Advisor: Dr. Mehrdad Nojoumian Degree: Master of Science Year: 2017 In this research, a new reputation-based model is utilized to disincentivize collusion of defenders and attackers in Software Defined Networks (SDN), and also, to disincentivize dishonest mining strategies in Blockchain. In the context of SDN, the model uses the reputation values assigned to each entity to disincentivize collusion with an attacker. Our analysis shows that not-colluding actions become Nash Equilibrium using the reputationbased model within a repeated game setting. In the context of Blockchain and mining, we illustrate that by using the same socio-rational model, miners not only are incentivized to conduct honest mining but also disincentivized to commit to any malicious activities against other mining pools. We therefore show that honest mining strategies become Nash Equilibrium in our setting. This thesis is laid out in the following manner. In chapter 2 an introduction to game theory is provided followed by a survey of previous works in game theoretic network security, in chapter 3 a new reputation-based model is introduced to be used within the context of a Software Defined Network (SDN), in chapter 4 a reputation-based solution concept is introduced to force cooperation by each mining entity in Blockchain, and finally, in chapter 5, the concluding remarks and future works are presented. vii To: My loving wife, Sanaz. You inspire me to better myself. and My mother, Mina. You showed me that nothing is impossible. UTILIZING A GAME THEORETICAL APPROACH TO PREVENT COLLUSION AND INCENTIVIZE COOPERATION IN CYBERSECURITY CONTEXTS List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii"}
{"_id": "1679710eb9fb5d004cfe2e8e7aa322cc3873647e", "title": "Curing regular expressions matching algorithms from insomnia, amnesia, and acalculia", "text": "The importance of network security has grown tremendously and a collection of devices have been introduced, which can improve the security of a network. Network intrusion detection systems (NIDS) are among the most widely deployed such system; popular NIDS use a collection of signatures of known security threats and viruses, which are used to scan each packet's payload. Today, signatures are often specified as regular expressions; thus the core of the NIDS comprises of a regular expressions parser; such parsers are traditionally implemented as finite automata. Deterministic Finite Automata (DFA) are fast, therefore they are often desirable at high network link rates. DFA for the signatures, which are used in the current security devices, however require prohibitive amounts of memory, which limits their practical use.\n In this paper, we argue that the traditional DFA based NIDS has three main limitations: first they fail to exploit the fact that normal data streams rarely match any virus signature; second, DFAs are extremely inefficient in following multiple partially matching signatures and explodes in size, and third, finite automaton are incapable of efficiently keeping track of counts. We propose mechanisms to solve each of these drawbacks and demonstrate that our solutions can implement a NIDS much more securely and economically, and at the same time substantially improve the packet throughput."}
{"_id": "46e49432fc6c360376cad11367c6d2411fb3e0ea", "title": "On the hardness of approximating minimum vertex cover By", "text": "We prove the Minimum Vertex Cover problem to be NP-hard to approximate to within a factor of 1.3606, extending on previous PCP and hardness of approximation technique. To that end, one needs to develop a new proof framework, and to borrow and extend ideas from several fields."}
{"_id": "0ca2b92a4f992b35683c7fffcd49b4c883772a29", "title": "CAWA: Coordinated warp scheduling and Cache Prioritization for critical warp acceleration of GPGPU workloads", "text": "The ubiquity of graphics processing unit (GPU) architectures has made them efficient alternatives to chip-multiprocessors for parallel workloads. GPUs achieve superior performance by making use of massive multi-threading and fast context-switching to hide pipeline stalls and memory access latency. However, recent characterization results have shown that general purpose GPU (GPGPU) applications commonly encounter long stall latencies that cannot be easily hidden with the large number of concurrent threads/warps. This results in varying execution time disparity between different parallel warps, hurting the overall performance of GPUs -- the warp criticality problem.\n To tackle the warp criticality problem, we propose a coordinated solution, criticality-aware warp acceleration (CAWA), that efficiently manages compute and memory resources to accelerate the critical warp execution. Specifically, we design (1) an instruction-based and stall-based criticality predictor to identify the critical warp in a thread-block, (2) a criticality-aware warp scheduler that preferentially allocates more time resources to the critical warp, and (3) a criticality-aware cache reuse predictor that assists critical warp acceleration by retaining latency-critical and useful cache blocks in the L1 data cache. CAWA targets to remove the significant execution time disparity in order to improve resource utilization for GPGPU workloads. Our evaluation results show that, under the proposed coordinated scheduler and cache prioritization management scheme, the performance of the GPGPU workloads can be improved by 23% while other state-of-the-art schedulers, GTO and 2-level schedulers, improve performance by 16% and -2% respectively."}
{"_id": "180189c3e8b0f783a8df6a1887a94a5e3f82148b", "title": "Value Locality and Load Value Prediction", "text": "Since the introduction of virtual memory demand-paging and cache memories, computer systems have been exploiting spatial and temporal locality to reduce the average latency of a memory reference. In this paper, we introduce the notion of value locality, a third facet of locality that is frequently present in real-world programs, and describe how to effectively capture and exploit it in order to perform load value prediction. Temporal and spatial locality are attributes of storage locations, and describe the future likelihood of references to those locations or their close neighbors. In a similar vein, value locality describes the likelihood of the recurrence of a previously-seen value within a storage location. Modern processors already exploit value locality in a very restricted sense through the use of control speculation (i.e. branch prediction), which seeks to predict the future value of a single condition bit based on previously-seen values. Our work extends this to predict entire 32- and 64-bit register values based on previously-seen values. We find that, just as condition bits are fairly predictable on a per-static-branch basis, full register values being loaded from memory are frequently predictable as well. Furthermore, we show that simple microarchitectural enhancements to two modern microprocessor implementations (based on the PowerPC 620 and Alpha 21164) that enable load value prediction can effectively exploit value locality to collapse true dependencies, reduce average memory latency and bandwidth requirements, and provide measurable performance gains."}
{"_id": "67bf737ceccf387cdd05c379487da8301f55e93d", "title": "Neither more nor less: Optimizing thread-level parallelism for GPGPUs", "text": "General-purpose graphics processing units (GPGPUs) are at their best in accelerating computation by exploiting abundant thread-level parallelism (TLP) offered by many classes of HPC applications. To facilitate such high TLP, emerging programming models like CUDA and OpenCL allow programmers to create work abstractions in terms of smaller work units, called cooperative thread arrays (CTAs). CTAs are groups of threads and can be executed in any order, thereby providing ample opportunities for TLP. The state-of-the-art GPGPU schedulers allocate maximum possible CTAs per-core (limited by available on-chip resources) to enhance performance by exploiting TLP. However, we demonstrate in this paper that executing the maximum possible number of CTAs on a core is not always the optimal choice from the performance perspective. High number of concurrently executing threads might cause more memory requests to be issued, and create contention in the caches, network and memory, leading to long stalls at the cores. To reduce resource contention, we propose a dynamic CTA scheduling mechanism, called DYNCTA, which modulates the TLP by allocating optimal number of CTAs, based on application characteristics. To minimize resource contention, DYNCTA allocates fewer CTAs for applications suffering from high contention in the memory sub-system, compared to applications demonstrating high throughput. Simulation results on a 30-core GPGPU platform with 31 applications show that the proposed CTA scheduler provides 28% average improvement in performance compared to the existing CTA scheduler."}
{"_id": "32c8c7949a6efa2c114e482c830321428ee58d70", "title": "GPUs and the Future of Parallel Computing", "text": "This article discusses the capabilities of state-of-the art GPU-based high-throughput computing systems and considers the challenges to scaling single-chip parallel-computing systems, highlighting high-impact areas that the computing research community can address. Nvidia Research is investigating an architecture for a heterogeneous high-performance computing system that seeks to address these challenges."}
{"_id": "feb263c99a65a94ed1dec7573f288af0f67b3f66", "title": "Mechatronic model of a novel slotless permanent magnet DC-motor with air gap winding design", "text": "This paper presents a mechatronic model of a novel slotless permanent magnet DC-motor with air gap winding. Besides technical advantages of this type of motor like high power density, high torque, very low weight and high efficiency, the motor design allows a very precise and efficient modelling with limited effort. A nonlinear model of magnetic field density can be extracted from a detailed nonlinear FE-model build in ANSYS/Maxwell, approximated by Fourier series and then used to model driving torque and back EMF, representing the coupling between electrical and mechanical subsystems. Analytically founded numerical models for driving torque and back EMF will be given. Real geometry of the phase winding is taken into account to improve model accuracy. The electrical subsystem will be described as coupled three phase system, whose parameters can also be extracted from the nonlinear FE-model with high accuracy. Together with a mechanical model of the rotor a MATLAB/Simulink model is build and extended by models of the hall sensors to detect rotor position and commutation logic to control the HEX-Bridge during operation. Finally, results of a complex simulation model, based on the parameters of the prototype of a wheel-hub motor, implementing the new motor design, are getting shown. Simulation results compare very well to measured data. Simulation time is very short due to the efficient approximation of magnetic flux density."}
{"_id": "6ef99de74e6da9be3b2ece569c7be2a15c1db5db", "title": "Reverse Engineering of Embedded Software Using Syntactic Pattern Recognition", "text": "When a secure component executes sensitive operations, the information carried by the power consumption can be used to recover secret information. Many different techniques have been developped to recover this secret, but only few of them focus on the recovering of the executed code itself. Indeed, the code knowledge acquired through this step of Simple Power Analysis (SPA) can help to identify implementation weaknesses and to improve further kinds of attacks. In this paper we present a new approach improving the SPA based on a pattern recognition methodology, that can be used to automatically identify the processed instructions that leak through power consumption. We firstly process a geometrical classification with chosen instructions to enable the automatic identification of any sequence of instructions. Such an analysis is used to reverse general purpose code executions of a recent secure component."}
{"_id": "5fe18d35bad4238b80a99ec8c4b98aca99a7e389", "title": "Communicating Algorithmic Process in Online Behavioral Advertising", "text": "Advertisers develop algorithms to select the most relevant advertisements for users. However, the opacity of these algorithms, along with their potential for violating user privacy, has decreased user trust and preference in behavioral advertising. To mitigate this, advertisers have started to communicate algorithmic processes in behavioral advertising. However, how revealing parts of the algorithmic process affects users' perceptions towards ads and platforms is still an open question. To investigate this, we exposed 32 users to why an ad is shown to them, what advertising algorithms infer about them, and how advertisers use this information. Users preferred interpretable, non-creepy explanations about why an ad is presented, along with a recognizable link to their identity. We further found that exposing users to their algorithmically-derived attributes led to algorithm disillusionment---users found that advertising algorithms they thought were perfect were far from it. We propose design implications to effectively communicate information about advertising algorithms."}
{"_id": "ea4a2dfa0eab0260a184e5aad0046d62c0dd52bc", "title": "Effective Heart Sound Segmentation and Murmur Classification Using Empirical Wavelet Transform and Instantaneous Phase for Electronic Stethoscope", "text": "Accurate measurement of heart sound and murmur parameters is of great importance in the automated analysis of phonocardiogram (PCG) signals. In this paper, we propose a novel unified PCG signal delineation and murmur classification method without the use of reference signal for automatic detection and classification of heart sounds and murmurs. The major components of the proposed method are the empirical wavelet transform-based PCG signal decomposition for discriminating heart sounds from heart murmurs and suppressing background noises, the Shannon entropy envelope extraction, the instantaneous phase-based boundary determination, heart sound and murmur parameter extraction, the systole/diastole discrimination and the decision rules-based murmur classification. The accuracy and robustness of the proposed method is evaluated using a wide variety of normal and abnormal PCG signals taken from the standard PCG databases, including PASCAL heart sounds challenge database, PhysioNet/CinC challenge heart sound database, and real-time PCG signals. Evaluation results show that the proposed method achieves an average sensitivity (Se) of 94.38%, positive predictivity (Pp) of 97.25%, and overall accuracy (OA) of 91.92% for heart sound segmentation and Se of 97.58%, Pp of 96.46%, and OA of 94.21% in detecting the presence of heart murmurs for SNR of 10 dB. The method yields an average classification accuracy of 95.5% for the PCG signals with SNR of 20 dB. Results show that the proposed method outperforms other existing heart sound segmentation and murmur classification methods."}
{"_id": "eb7ee320f66d62099027461ba16b5d0f898d5e18", "title": "Color categories: Evidence for the cultural relativity hypothesis", "text": "The question of whether language affects our categorization of perceptual continua is of particular interest for the domain of color where constraints on categorization have been proposed both within the visual system and in the visual environment. Recent research (Roberson, Davies, & Davidoff, 2000; Roberson et al., in press) found substantial evidence of cognitive color differences between different language communities, but concerns remained as to how representative might be a tiny, extremely remote community. The present study replicates and extends previous findings using additional paradigms among a larger community in a different visual environment. Adult semi-nomadic tribesmen in Southern Africa carried out similarity judgments, short-term memory and long-term learning tasks. They showed different cognitive organization of color to both English and another language with the five color terms. Moreover, Categorical Perception effects were found to differ even between languages with broadly similar color categories. The results provide further evidence of the tight relationship between language and cognition."}
{"_id": "735a955d25a5ac194c7c998479a57250ec571bd7", "title": "The Airport Gate Assignment Problem: Mathematical Model and a Tabu Search Algorithm", "text": "In this paper, we consider an Airport Gate Assignment Problem that dynamically assigns airport gates to scheduled ights based on passengers' daily origin and destination ow data. The objective of the problem is to minimize the overall connection times that passengers walk to catch their connection ights. We formulate this problem as a mixed 0-1 quadratic integer programming problem and then reformulate it as a mixed 0-1 integer problem with a linear objective function and constraints. We design a simple tabu search meta-heuristic to solve the problem. The algorithm exploits the special properties of di erent types of neighborhood moves, and create highly e ective candidate list strategies. We also address issues of tabu short term memory, dynamic tabu tenure, aspiration rule, and various intensi cation and diversi cation strategies. Preliminary computational experiments are conducted and the results are presented and analyzed."}
{"_id": "b982f80346ca617170c992191905b5c0be2e3db6", "title": "A Continuous Beam Steering Slotted Waveguide Antenna Using Rotating Dielectric Slabs", "text": "The design, simulation and measurement of a beam steerable slotted waveguide antenna operating in X band are presented. The proposed beam steerable antenna consists of a standard rectangular waveguide (RWG) section with longitudinal slots in the broad wall. The beam steering in this configuration is achieved by rotating two dielectric slabs inside the waveguide and consequently changing the phase of the slots excitations. In order to confirm the usefulness of this concept, a non-resonant 20-slot waveguide array antenna with an element spacing of d = 0.58\u03bb0 has been designed, built and measured. A 14o beam scanning from near broadside (\u03b8 = 4o) toward end-fire (\u03b8 = 18o) direction is observed. The gain varies from 18.33 dB to 19.11 dB which corresponds to the radiation efficiencies between 95% and 79%. The side-lobe level is -14 dB at the design frequency of 9.35 GHz. The simulated co-polarized realized gain closely matches the fabricated prototype patterns."}
{"_id": "8890bb44abb89601c950eb5e56172bb58d5beea8", "title": "End-to-End Offline Goal-Oriented Dialog Policy Learning via Policy Gradient", "text": "Learning a goal-oriented dialog policy is generally performed offline with supervised learning algorithms or online with reinforcement learning (RL). Additionally, as companies accumulate massive quantities of dialog transcripts between customers and trained human agents, encoder-decoder methods have gained popularity as agent utterances can be directly treated as supervision without the need for utterance-level annotations. However, one potential drawback of such approaches is that they myopically generate the next agent utterance without regard for dialog-level considerations. To resolve this concern, this paper describes an offline RL method for learning from unannotated corpora that can optimize a goal-oriented policy at both the utterance and dialog level. We introduce a novel reward function and use both on-policy and off-policy policy gradient to learn a policy offline without requiring online user interaction or an explicit state space definition."}
{"_id": "589d84d528d353a382a42e5b58dc48a57d332be8", "title": "Post-Stroke Rehabilitation with the Rutgers Ankle System: A Case Study", "text": "The Rutgers Ankle is a Stewart platform-type haptic interface designed for use in rehabilitation. The system supplies six-degree-of-freedom (DOF) resistive forces on the patient's foot, in response to virtual reality-based exercises. The Rutgers Ankle controller contains an embedded Pentium board, pneumatic solenoid valves, valve controllers, and associated signal conditioning electronics. The rehabilitation exercise used in our case study consists of piloting a virtual airplane through loops. The exercise difficulty can be selected based on the number and placement of loops, the airplane speed in the virtual environment, and the degree of resistance provided by the haptic interface. Exercise data is stored transparently, in real time, in an Oracle database. These data consist of ankle position, forces, and mechanical work during an exercise, and over subsequent rehabilitation sessions. The number of loops completed and the time it took to do that are also stored online. A case study is presented of a patient nine months post-stroke using this system. Results showed that, over six rehabilitation sessions, the patient improved on clinical measures of strength and endurance, which corresponded well with torque and power output increases measured by the Rutgers Ankle. There were also substantial improvements in task accuracy and coordination during the simulation and the patient's walking and stair-climbing ability."}
{"_id": "e56ae9ca0f66897d0bb3cc1219e347054a4d2c17", "title": "Signal Processing Methods for the Automatic Transcription of Music", "text": "Signal processing methods for the automatic transcription of music are developed in this thesis. Music transcription is here understood as the process of analyzing a music signal so as to write down the parameters of the sounds that occur in it. The applied notation can be the traditional musical notation or any symbolic representation which gives sufficient information for performing the piece using the available musical instruments. Recovering the musical notation automatically for a given acoustic signal allows musicians to reproduce and modify the original performance. Another principal application is structured audio coding: a MIDI-like representation is extremely compact yet retains the identifiability and characteristics of a piece of music to an important degree. The scope of this thesis is in the automatic transcription of the harmonic and melodic parts of real-world music signals. Detecting or labeling the sounds of percussive instruments (drums) is not attempted, although the presence of these is allowed in the target signals. Algorithms are proposed that address two distinct subproblems of music transcription. The main part of the thesis is dedicated to multiple fundamental frequency (F0) estimation, that is, estimation of the F0s of several concurrent musical sounds. The other subproblem addressed is musical meter estimation. This has to do with rhythmic aspects of music and refers to the estimation of the regular pattern of strong and weak beats in a piece of music. For multiple-F0 estimation, two different algorithms are proposed. Both methods are based on an iterative approach, where the F0 of the most prominent sound is estimated, the sound is cancelled from the mixture, and the process is repeated for the residual. The first method is derived in a pragmatic manner and is based on the acoustic properties of musical sound mixtures. For the estimation stage, an algorithm is proposed which utilizes the frequency relationships of simultaneous spectral components, without assuming ideal harmonicity. For the cancelling stage, a new processing principle, spectral smoothness, is proposed as an efficient new mechanism for separating the detected sounds from the mixture signal. The other method is derived from known properties of the human auditory system. More specifically, it is assumed that the peripheral parts of hearing can be modelled by a bank of bandpass filters, followed by half-wave rectification and compression of the subband signals. It is shown that this basic structure allows the combined use of time-domain periodicity and frequency-domain periodicity for F0 extraction. In the derived algorithm, the higher-order (unresolved) harmonic partials of a sound are processed collectively, without the need to detect or estimate individual partials. This has the consequence that the method works reasonably accurately for short analysis frames. Computational efficiency of the method is based on calculating a frequency-domain approximation of the summary autocorrelation function, a physiologically-motivated representation of sound. Both of the proposed multiple-F0 estimation methods operate within a single time frame and arrive at approximately the same error rates. However, the auditorily-motivated method is superior in short analysis frames. 
On the other hand, the pragmatically-oriented method is \u201ccomplete\u201d in the sense that it includes mechanisms for suppressing additive noise (drums) and for estimating the number of concurrent sounds in the analyzed signal. In musical interval and chord identification tasks, both algorithms outperformed the average of ten trained musicians."}
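The estimate-and-cancel loop that both proposed multiple-F0 methods share can be sketched compactly. The toy below scores F0 candidates by weighted harmonic summation over a magnitude spectrum, cancels the winner's partials, and repeats on the residual. The frame size, candidate grid, harmonic weighting, and the crude zeroing-based cancellation (instead of the thesis's spectral-smoothness principle) are all simplifying assumptions.

```python
# Iterative F0 estimation by harmonic summation and partial cancellation.
import numpy as np

def iterative_f0(x, fs, n_sounds=2, n_harm=8):
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    harm = np.arange(1, n_harm + 1)
    weights = 1.0 / harm                       # damp subharmonic errors
    f0s = []
    for _ in range(n_sounds):
        cands = np.arange(60.0, 1000.0, 1.0)
        sal = []
        for f0 in cands:                       # salience of each candidate
            idx = np.searchsorted(freqs, np.minimum(f0 * harm, freqs[-1]))
            sal.append((spec[idx] * weights).sum())
        f0 = float(cands[int(np.argmax(sal))])
        f0s.append(f0)
        for h in f0 * harm:                    # crude partial cancellation
            i = int(np.searchsorted(freqs, min(h, freqs[-1])))
            spec[max(i - 2, 0):i + 3] = 0.0
    return f0s

fs = 8000
t = np.arange(4096) / fs
mix = np.sin(2 * np.pi * 220 * t) + 0.7 * np.sin(2 * np.pi * 330 * t)
print(iterative_f0(mix, fs))                   # roughly [220.0, 330.0]
```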
{"_id": "87aae79852c15e1da4479185e3697ac91df844a1", "title": "The Development and Validation of a Measure of Student Attitudes Toward Science , Technology , Engineering , and Math ( S-STEM )", "text": "Using an iterative design along with multiple methodological approaches and a large representative sample, this study presents reliability, validity, and fairness evidence for two surveys measuring student attitudes toward science, technology, engineering, and math (S-STEM) and interest in STEM careers for (a) 4ththrough 5th-grade students (Upper Elementary S-STEM) and (b) 6ththrough 12th-grade students (Middle/High S-STEM). Findings from exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) suggested the use of a four-factor structure to measure student attitudes toward science, math, engineering/technology, and 21st century skills. Subject matter experts and literature reviews provided evidence of content validity. Reliability levels were high for both versions. Furthermore, both the Upper Elementary S-STEM and Middle/High S-STEM Surveys demonstrated evidence of configural, metric, and scalar invariance across grade levels, races/ethnicities, and genders. The findings support the validity of interpretations and inferences made from scores on the instruments\u2019 items and subscales."}
{"_id": "538101a20ea7f125c673b076409b75fa637493a0", "title": "VCDB: A Large-Scale Database for Partial Copy Detection in Videos", "text": "The task of partial copy detection in videos aims at finding if one or more segments of a query video have (transformed) copies in a large dataset. Since collecting and annotating large datasets of real partial copies are extremely time-consuming, previous video copy detection research used either small-scale datasets or large datasets with simulated partial copies by imposing several pre-defined transformations (e.g., photometric or geometric changes). While the simulated datasets were useful for research, it is unknown how well the techniques developed on such data work on real copies, which are often too complex to be simulated. In this paper, we introduce a large-scale video copy database (VCDB) with over 100,000 Web videos, containing more than 9,000 copied segment pairs found through careful manual annotation. We further benchmark a baseline system on VCDB, which has demonstrated state-of-the-art results in recent copy detection research. Our evaluation suggests that existing techniques\u2014which have shown near-perfect results on the simulated benchmarks\u2014are far from satisfactory in detecting complex real copies. We believe that the release of VCDB will largely advance the research around this challenging problem."}
{"_id": "3a463db8048c67b48c7f4f019a4ab3a2f01f25fc", "title": "Applying Winnow to Context-Sensitive Spelling Correction", "text": "Multiplicative weight-updating algorithms such as Winnow have been studied extensively in the COLT literature, but only recently have people started to use them in applications. In this paper, we apply a Winnow-based algorithm to a task in natural language: contextsensitive spelling correction. This is the task of xing spelling errors that happen to result in valid words, such as substituting to for too, casual for causal, and so on. Previous approaches to this problem have been statistics-based; we compare Winnow to one of the more successful such approaches, which uses Bayesian classi ers. We nd that: (1) When the standard (heavily-pruned) set of features is used to describe problem instances, Winnow performs comparably to the Bayesian method; (2) When the full (unpruned) set of features is used, Winnow is able to exploit the new features and convincingly outperform Bayes; and (3) When a test set is encountered that is dissimilar to the training set, Winnow is better than Bayes at adapting to the unfamiliar test set, using a strategy we will present for combining learning on the training set with unsupervised learning on the (noisy) test set. In Machine Learning: Proceedings of the 13th International Conference, Lorenza Saitta, ed., Morgan Kaufmann, San Francisco, CA, 1996, pages 182{190. This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonpro t educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Information Technology Center America; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Information Technology Center America. All rights reserved. Copyright c Mitsubishi Electric Information Technology Center America, 1996 201 Broadway, Cambridge, Massachusetts 02139 Dept. of Appl. Math. & CS, Weizmann Institute of Science, Rehovot 76100, Israel. 1. First printing, TR96-07, April 1996"}
{"_id": "e63d53bc8bb4615dce8bb3e1d2aedb3dbe16c044", "title": "Illuminating cell signalling with optogenetic tools", "text": "The light-based control of ion channels has been transformative for the neurosciences, but the optogenetic toolkit does not stop there. An expanding number of proteins and cellular functions have been shown to be controlled by light, and the practical considerations in deciding between reversible optogenetic systems (such as systems that use light-oxygen-voltage domains, phytochrome proteins, cryptochrome proteins and the fluorescent protein Dronpa) are well defined. The field is moving beyond proof of concept to answering real biological questions, such as how cell signalling is regulated in space and time, that were difficult or impossible to address with previous tools."}
{"_id": "b1ffbbbdefb2484d61a182efc2af4243a60d87a3", "title": "Anytime parallel density-based clustering", "text": "The density-based clustering algorithm DBSCAN is a state-of-the-art data clustering technique with numerous applications in many fields. However, DBSCAN requires neighborhood queries for all objects and propagation of labels from object to object. This scheme is time consuming and thus limits its applicability for large datasets. In this paper, we propose a novel anytime approach to cope with this problem by reducing both the range query and the label propagation time of DBSCAN. Our algorithm, called AnyDBC, compresses the data into smaller density-connected subsets called primitive clusters and labels objects based on connected components of these primitive clusters to reduce the label propagation time. Moreover, instead of passively performing range queries for all objects as in existing techniques, AnyDBC iteratively and actively learns the current cluster structure of the data and selects a few most promising objects for refining clusters at each iteration. Thus, in the end, it performs substantially fewer range queries compared to DBSCAN while still satisfying the cluster definition of DBSCAN. Moreover, by processing queries in block and merging the results into the current cluster structure, AnyDBC can be efficiently parallelized on shared memory architectures to further accelerate the performance, uniquely making it a parallel and anytime technique at the same time. Experiments show speedup factors of orders of magnitude compared to DBSCAN and its fastest variants as well as a high parallel scalability on multicore processors for very large real and synthetic complex datasets."}
{"_id": "2aa66587fa04cdc05e38dc4529f61c18de0f378e", "title": "The Interaction of Architecture and Operating System Design", "text": "Today\u2019s high-performance RISC microprocessors have been highly tuned for integer and floating point application performance. These architectures have paid less attention to operating system requirements. At the same time, new operating system designs often have overlooked modern architectural trends which may unavoidably change the relative cost of certain primitive operations. The result is that operating system performance is well below application code performance on contemporary RISCS. This paper examines recent directions in computer architecture and operating systems, and the implications of changes in each domain for the other. The requirements of three components of operating system design are discussed in detail: interprocess communication, virt ual memory, and thread management. For each component, we relate operating system functional and performance needs to the mechanisms available on commercial RISC architectures such as the MIPS R2000 and R3000, Sun SPARC, IBM RS6000, Motorola 88000, and Intel i860. Our analysis reveals a number of specific reasons why the performance of operating system primitives on RISCS has not scaled with integer performance. In addition, we identify areas in which architectures could better (and cost-effectively) accommodate operating system needs, and areas in which operating system design could accommodate certain necessary characteristics of cost-effective highperformance microprocessors. This work was supported in part by the National Science Foundation under Grants No. CCR-8703049, CCR-8619663, and CCR-8907666, by the Washington Technology Center, by the Digital Equipment Corporation Systems Research Center and External Research Program, and by IBM and AT&T Fellowships. Bershad is now with the School of Computer Science, Carnegie Mellon University. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. 01991 ACM 0-89791 -380 -9/91 /0003 -0108 . ..$1 .50"}
{"_id": "1fe52bbc61aa7e9854d5daad25b12943d97aed25", "title": "Frapp\u00e9: Functional Reactive Programming in Java", "text": "Functional Reactive Programming (FRP) is a declarative programming model for constructing interactive applications based on a continuous model of time. FRP programs are described in terms of behaviors (continuous, timevarying, reactive values), and events (conditions that occur at discrete points in time). This paper presents Frapp\u00e9, an implementation of FRP in the Java progamming language. The primary contribution of Frapp\u00e9 is its integration of the FRP event/behavior model with the Java Beans event/property model. At the interface level, any Java Beans component may be used as a source or sink for the FRP event and behavior combinators. This provides a mechanism for extending Frapp\u00e9 with new kinds of I/O connections and allows FRP to be used as a high-level declarative model for composing applications from Java Beans components. At the implementation level, the Java Beans event model is used internally by Frapp\u00e9 to propagate FRP events and changes to FRP behaviors. This allows Frapp\u00e9 applications to be packaged as Java Beans components for use in other applications, and yields an implementation of FRP well-suited to the requirements of event-driven applications (such as graphical user interfaces)."}
{"_id": "432a96eaa74c972544a75c1eaa6bc5e15ffdbe94", "title": "Colour spaces: perceptual, historical and applicational background", "text": "Absnacr-In this paper we present an overview of colour spaces used In electrical engineering and image processing. We Streis the imparfance of the perceptual, historical and sppllcananal~hackgraund that led to a colour spare. The colour spaces presented are : RGB;opponentsolours rpsces, phenomenal colaur spscer, CMY, CMYK, TV colour s~aces (UW and YIQ), PhotoYCC, CIE XYZ, Lab and Luv colour spaces. Keywordr-colaur spaces, RGB, HSV, CIE"}
{"_id": "283df50be1d1a5fde310a9252ead5af2189f2720", "title": "Autonomous indoor object tracking with the Parrot AR.Drone", "text": "This article presents an image-based visual servoing system for indoor visual tracking of 3D moving objects by an Unmanned Aerial Vehicle. This system autonomously follows a 3D moving target object, maintaining it with a fixed distance and centered on its image plane. The initial setup is tested in a detailed simulation environment. The system is then validated on flights in indoor scenarios using the Parrot AR.Drone and the CMT tracker, demonstrating the robustness of the system to differences in object features, environmental clutter, and target trajectory. The obtained results indicate that the proposed system is suitable for complex controls task, such object surveillance and pursuit."}
{"_id": "fef2c647b30a0ec40a59272444143891558e2e9b", "title": "Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey", "text": "Deep learning is at the heart of the current rise of artificial intelligence. In the field of computer vision, it has become the workhorse for applications ranging from self-driving cars to surveillance and security. Whereas, deep neural networks have demonstrated phenomenal success (often beyond human capabilities) in solving complex problems, recent studies show that they are vulnerable to adversarial attacks in the form of subtle perturbations to inputs that lead a model to predict incorrect outputs. For images, such perturbations are often too small to be perceptible, yet they completely fool the deep learning models. Adversarial attacks pose a serious threat to the success of deep learning in practice. This fact has recently led to a large influx of contributions in this direction. This paper presents the first comprehensive survey on adversarial attacks on deep learning in computer vision. We review the works that design adversarial attacks, analyze the existence of such attacks and propose defenses against them. To emphasize that adversarial attacks are possible in practical conditions, we separately review the contributions that evaluate adversarial attacks in the real-world scenarios. Finally, drawing on the reviewed literature, we provide a broader outlook of this research direction."}
{"_id": "2aaa2de300a29becd569fce826677e491ea0ea70", "title": "Learning Category-Specific Deformable 3D Models for Object Reconstruction", "text": "We address the problem of fully automatic object localization and reconstruction from a single image. This is both a very challenging and very important problem which has, until recently, received limited attention due to difficulties in segmenting objects and predicting their poses. Here we leverage recent advances in learning convolutional networks for object detection and segmentation and introduce a complementary network for the task of camera viewpoint prediction. These predictors are very powerful, but still not perfect given the stringent requirements of shape reconstruction. Our main contribution is a new class of deformable 3D models that can be robustly fitted to images based on noisy pose and silhouette estimates computed upstream and that can be learned directly from 2D annotations available in object detection datasets. Our models capture top-down information about the main global modes of shape variation within a class providing a \u201clow-frequency\u201d shape. In order to capture fine instance-specific shape details, we fuse it with a high-frequency component recovered from shading cues. A comprehensive quantitative analysis and ablation study on the PASCAL 3D+ dataset validates the approach as we show fully automatic reconstructions on PASCAL VOC as well as large improvements on the task of viewpoint prediction."}
{"_id": "5f50b98b88b3a5d6127022391984643833b563a5", "title": "A Prism-Free Method for Silhouette Rendering in Inverse Displacement Mapping", "text": "Silhouette is a key feature that distinguishes displacement mapping from normal mapping. However the silhouette rendering in the GPU implementation of displacement mapping (which is often called inversed displacement mapping) is tricky. Previous approaches rely mostly on construction of additional extruding prism-like geometry, which slows down the rendering significantly. In this paper, we proposed a method for solving the silhouette rendering problem in inverse displace mapping without using any extruding prism-like geometry. At each step of intersection finding, we continuously bends the viewing ray according to the current local tangent space associated with the surface. Thus, it allows mapping a displacement map onto an arbitrary curved surface with more accurate silhouette. While our method is simple, it offers surprisingly good results over Curved Relief Map (CRM) [OP05] in many difficult or degenerated cases."}
{"_id": "432153c41e9f388a7a59d2624f48ffc6d291acc0", "title": "Chemical Process Control Education and Practice", "text": "Chemical process control textbooks and courses differ significantly from their electrical or mechanical-oriented brethren. It is our experience that colleagues in electrical engineering (EE) and mechanical engineering (ME) assume that we teach the same theory in our courses and merely have different application examples. The primary goals of this article are to i) emphasize the distinctly challenging characteristics of chemical processes, ii) present a typical process control curriculum, and iii) discuss how chemical process control courses can be revised to better meet the needs of a typical B.S.-level chemical engineer. In addition to a review of material covered in a standard process control course, we discuss innovative approaches in process control education, including the use of case studies, distributed control systems in laboratories, identification and control simulation packages, and studio-based approaches combining lecture, simulation, and experiments in the same room. We also provide perspectives on needed developments in process control education."}
{"_id": "38cb9546c19be6b94cf760d48dcd9c3c9ae1bed7", "title": "Solving the Search for Source Code", "text": "Programmers frequently search for source code to reuse using keyword searches. The search effectiveness in facilitating reuse, however, depends on the programmer's ability to specify a query that captures how the desired code may have been implemented. Further, the results often include many irrelevant matches that must be filtered manually. More semantic search approaches could address these limitations, yet existing approaches are either not flexible enough to find approximate matches or require the programmer to define complex specifications as queries.\n We propose a novel approach to semantic code search that addresses several of these limitations and is designed for queries that can be described using a concrete input/output example. In this approach, programmers write lightweight specifications as inputs and expected output examples. Unlike existing approaches to semantic search, we use an SMT solver to identify programs or program fragments in a repository, which have been automatically transformed into constraints using symbolic analysis, that match the programmer-provided specification.\n We instantiated and evaluated this approach in subsets of three languages, the Java String library, Yahoo! Pipes mashup language, and SQL select statements, exploring its generality, utility, and trade-offs. The results indicate that this approach is effective at finding relevant code, can be used on its own or to filter results from keyword searches to increase search precision, and is adaptable to find approximate matches and then guide modifications to match the user specifications when exact matches do not already exist. These gains in precision and flexibility come at the cost of performance, for which underlying factors and mitigation strategies are identified."}
{"_id": "ea83697999076d473fae2db48ed4abea38f609b0", "title": "Android Malware Detection Using Feature Fusion and Artificial Data", "text": "For the Android malware detection / classification anti-malware community has relied on traditional malware detection methods as a countermeasure. However, traditional detection methods are developed for detecting the computer malware, which is different from Android malware in structure and characteristics. Thus, they may not be useful for Android malware detection. Moreover, majority of suggested detection approaches may not be generalized and are incapable of detecting zero-day malware due to different reasons such as available data set with specific set of examples. Thus, their detection accuracy may be questionable. To address this problem, this paper presents a malware classification approach with a reliable detection accuracy and evaluate the approach using artificially generated examples. The suggested approach generates the signature profiles and behavior profiles of each application in the data set, which are further used as input for the classification task. For improving the detection accuracy, feature fusion of features from filter methods and wrapper method and algorithm fusion is investigated. Without affecting the detection accuracy, the optimal balance between real world examples and synthetic examples is also investigated. The experimental results suggest that both AUC and F1 can be obtained up to 0.94 for both known and unknown malware using original examples and synthetic examples."}
{"_id": "160c5acd01876a95f7f067c800aba2a1b2e6c84c", "title": "Service oriented architectures: approaches, technologies and research issues", "text": "Service-oriented architectures (SOA) is an emerging approach that addresses the requirements of loosely coupled, standards-based, and protocol- independent distributed computing. Typically business operations running in an SOA comprise a number of invocations of these different components, often in an event-driven or asynchronous fashion that reflects the underlying business process needs. To build an SOA a highly distributable communications and integration backbone is required. This functionality is provided by the Enterprise Service Bus (ESB) that is an integration platform that utilizes Web services standards to support a wide variety of communications patterns over multiple transport protocols and deliver value-added capabilities for SOA applications. This paper reviews technologies and approaches that unify the principles and concepts of SOA with those of event-based programing. The paper also focuses on the ESB and describes a range of functions that are designed to offer a manageable, standards-based SOA backbone that extends middleware functionality throughout by connecting heterogeneous components and systems and offers integration services. Finally, the paper proposes an approach to extend the conventional SOA to cater for essential ESB requirements that include capabilities such as service orchestration, \u201cintelligent\u201d routing, provisioning, integrity and security of message as well as service management. The layers in this extended SOA, in short xSOA, are used to classify research issues and current research activities."}
{"_id": "1d26137926a698f02dad8a87df2953b3bf9a339c", "title": "A survey of trust and reputation systems for online service provision", "text": "Trust and reputation systems represent a significant trend in decision support for Internet mediated service provision. The basic idea is to let parties rate each other, for example after the completion of a transaction, and use the aggregated ratings about a given party to derive a trust or reputation score, which can assist other parties in deciding whether or not to transact with that party in the future. A natural side effect is that it also provides an incentive for good behaviour, and therefore tends to have a positive effect on market quality. Reputation systems can be called collaborative sanctioning systems to reflect their collaborative nature, and are related to collaborative filtering systems. Reputation systems are already being used in successful commercial online applications. There is also a rapidly growing literature around trust and reputation systems, but unfortunately this activity is not very coherent. The purpose of this article is to give an overview of existing and proposed systems that can be used to derive measures of trust and reputation for Internet transactions, to analyze the current trends and developments in this area, and to propose a research agenda for trust and reputation systems. Josang, A., R. Ismal, and C. Boyd (2007). A Survey of Trust and Reputation Systems for Online Service Provision, Journal of Decision Support Systems, 43(2):618-644 Citation: Outline of Presentation \u2022 Section 1: Introduction \u2022 Section 2: Define trust and reputation \u2022 Section 3: Trust and Reputation relationship as security mechanisms \u2022 Section 4: Collaborative filtering and reputation \u2022 Section 5: Trust Classes \u2022 Section 6: Four categories of reputation and trust semantics \u2022 Section 7: Centralized and distributed architectures for reputation Outline of Presentation Continued \u2022 Section 8: Reputation computation methods (Day 1 Goal) \u2022 Section 9: Reputation systems used in commercial applications \u2022 Section 10: Description of main problems in reputation systems \u2022 Section 11: Ending discussion Section 1: Introduction \u2022 Online transactions differ from those of in person transactions because of the inherent asymmetry in the transaction, the seller has all the power so to say. \u2022 The nature of online transactions obscure the traditional metrics used to establish if a brick and mortar store is trustworthy. Example: a brick and mortar store takes time to establish a web site takes very little time to set up. \u2013 These reasons make it hard to determine rather or not a particular online venue is trustworthy or not and is why this trust issue is receiving so much attention from an academic point of view. \u2022 The authors of this paper wrote it in part because of the rapidly growing interest in this topic and because they felt that the prior overviews used inconsistent terminology. Section 2: Define trust and reputation \u2022 Two kinds of trust: reliability and decision trust \u2022 Reliability Trust: Trust is the subjective probability by which an individual, A, expects that another individual, B, performs a given action on which its welfare depends. \u2022 Decision Trust: Trust is the extent to which one party is willing to depend on something or somebody in a given situation with a feeling of relative security, even though negative consequences are possible. 
\u2022 The authors mention that the prior mentioned definitions are not as simple as they seem. \u2013 For example: Trust in an individual is not necessarily enough to enter into a state of dependence with a person. In other words, that danger might seem to the agent an intolerable risk. \u2022 The authors mentions that only a few papers deal with trust and that in economic circles there are some who reject trust as a computational model. \u2022 Someone by the name of Williamson argues that the notion of trust should be avoided when modeling economic interactions, because it adds nothing new, and that well known notions such as reliability, utility and risk are adequate ad sufficient for that purpose. \u2022 Williamson argues however that personal trust is still important for modeling, and that non-computation models for trust can be meaningful for studying certain relationships. \u2022 Concerning reputation, the authors mention two aspects: trust because of good reputation and trust despite of bad reputation. \u2022 These two statements shed light on the fact that trust is often made with outside information, knowledge about the relationship that is not general know, instincts and feelings, etc. \u2022 Reputation can also be considered as a collective measure of trustworthiness based on referrals from the community. \u2022 Research in Trust and Reputation Systems should have two foci: \u2013 Finding adequate online substitutes for traditional cues in the physical world and identifying new elements specific to the online applications which are suitable for measurements. \u2013 Taking advantage of IT and Internet to create efficient systems for collecting information and deriving measurements of trust and reputation in order to aid decision making and improve online markets. \u2022 These simple principles invite rigorous research in order to answer some fundamental questions: What information elements are most suitable for deriving measures of trust and reputation in a given application? How can these information elements be captured and collected? What are the best principles for designing such systems from a theoretic and from a usability point of view? Can they be made resistant to attacks of manipulation by strategic agents? How should users include the information provided by such systems into their decision process? What role can these systems play in the business model of commercial companies? Do these systems truly improve the quality of online trade and interactions? These are important questions that need good answers in order to determine the potential for trust and reputation systems in online environments. \u2022 According to a cited reference in the paper, Resnick, a reputation system must have the following: 1. Entities must be long lived, so that with every interaction there is always an expectation of future interactions. 2. Ratings about current interactions are captured and distributed. 3. Ratings about past interactions must guide decisions about current interactions. Example of how trust is derived. (Fig. 1 from paper) Section 3: Trust and Reputation relationship as security mechanisms \u2022 In a general sense, the purpose of security mechanisms is to provide protection against malicious parties. \u2022 In many situations we have to protect ourselves from those who offer resources so that the problem in fact is reversed. 
Information providers can for example act deceitfully by providing false or misleading information, and traditional security mechanisms are unable to protect against this type of threat. Trust and reputation systems on the other hand can provide protection against such threats. \u2022 To summaries this section the author basically says that a computer system that appears to have robust security appears more trust worthy to the user. Listing known security vulnerabilities and using encryption techniques make the system appear to me more trust worthy. Section 4: Collaborative filtering and reputation \u2022 Collaborative filtering systems are a mechanism that shares traits with a reputation system but they are different at the same time. \u2022 Collaborative filtering systems (henceforth CF) are a mechanism that attempts to take into consideration that different people have different tastes. \u2022 If two separate people rate two items similarly then they are called neighbours in CF terminology. \u2022 This new fact can be used to recommend to one something that the other liked, a technique called a recommender system. \u2022 This takes the opposite assumption of reputation systems which assume that all people will judge the same performance or transaction consistently. \u2022 The example provided by the article is that in CF systems users might rate a video or music file differently based on tastes but one containing a virus would be universally rated poorly. \u2022 Another caveat about CF vs reputation systems is that CF systems assume an optimistic world view and reputation systems assume a pessimistic world view. \u2022 In specifics, CF systems assume all participants are trustworthy and sincere, meaning that all participants report their genuine opinion. \u2022 Conversely, reputation system assume that participants will try to misrepresent the quality of services in order to make more profit and will lie to achieve said goals. \u2022 This duel opposing nature of these type systems can make it very advantageous to combine them as will be explored in the study of Amazon in section 9 which does this to some extent. Section 5: Trust Classes \u2022 Types of Trust classes: \u2013 Provision \u2013 Access \u2013 Delegation \u2013 Identity \u2013 Context \u2022 Paper mentions them in order to get specific about trust semantics. \u2022 Paper focuses on provision trust so it is emphasized. \u2022 Provision trust describes the relying party\u2019s trust in a service or resource provider. It is relevant when the relying party is a user seeking protection from malicious or unreliable service providers. I extrapolated from the paper that this is the type of trust that would be studied in business through subjects like contract law. \u2022 Access trust describes trust in principals for the purpose of accessing resources owned by or under the responsibility of the relying party. This relates to the access control paradigm which is a central element in computer security. \u2022 Delegation trust describes trust in an agent (the delegate) that acts and makes decision on behalf of the relying party. \u2022 Identity trust describes the belief that an agent identity is as claimed. Identity trust systems have been discussed mostly in the information security community. An example mentioned in the paper is PGP encryption technology. \u2022 Context trust describes the extent to which the relying party believes that the necessary systems and institutions ar"}
{"_id": "6b67a2eb179cad467d4433c153a4b83fdca6cee8", "title": "Survey on Energy Consumption Entities on the Smartphone Platform", "text": "The full degree of freedom in mobile systems heavily depends on the energy provided by the mobile phone's batteries. Their capacity is in general limited and for sure not keeping pace as the mobile devices are crammed up with new functionalities. The discrepancy of Moore's law, offering twice the processing power at least each second year, and the development in batteries, which did not even double over the last decade, makes a shift in researchers' way of designing networks, protocols, and the mobile device itself. The bottleneck to take care of in the design process of mobile systems is not only the wireless data rate, but even more the energy limitation as the customers ask for new energy-hungry services, e.g., requiring faster connections or even multiple air interfaces, and longer standby or operational times of their mobile devices at the same time. In this survey, the energy consuming entities of a mobile device such as wireless air interfaces, display, mp3 player and others are measured and compared. The presented measurement results allow the reader to understand what the energy hungry parts of a mobile device are and use those findings for the design of future mobile protocols and applications. All results presented in this work and further results are made public on our web page [2]."}
{"_id": "2efd3f1cfc20fc17771612630dc92582ae5afe53", "title": "Immunizing online reputation reporting systems against unfair ratings and discriminatory behavior", "text": "Reputation reporting systems have emerged as an important risk management mechanism in online trading communities. However, the predictive value of these systems can be compromised in situations where conspiring buyers intentionally give unfair ratings to sellers or, where sellers discriminate on the quality of service they provide to different buyers. This paper proposes and evaluates a set of mechanisms, which eliminate, or significantly reduce the negative effects of such fraudulent behavior. The proposed mechanisms can be easily integrated into existing online reputation systems in order to safeguard their reliability in the presence of potentially deceitful buyers and sellers."}
{"_id": "604f7fe4a26986a37bba52388cb9379bf1758969", "title": "Web services and business process management", "text": "Web services based on the service-oriented architecture framework provide a suitable technical foundation for making business processes accessible within enterprises and across enterprises. But to appropriately support dynamic business processes and their management, more is needed, namely, the ability to prescribe how Web services are used to implement activities within a business process, how business processes are represented as Web services, and also which business partners perform what parts of the actual business process. In this paper, the relationship between Web services and the management of business processes is worked out and presented in a tutorial-like manner."}
{"_id": "a46eb7ff025ec24eb643baf98f8ce911b9986b9c", "title": "An implementation of cloud-based platform with R packages for spatiotemporal analysis of air pollution", "text": "Recently, the R package has become a popular tool for big data analysis due to its several matured software packages for the data analysis and visualization, including the analysis of air pollution. The air pollution problem is of increasing global concern as it has greatly impacts on the environment and human health. With the rapid development of IoT and the increase in the accuracy of geographical information collected by sensors, a huge amount of air pollution data were generated. Thus, it is difficult to analyze the air pollution data in a single machine environment effectively and reliably due to its inherent characteristic of memory design. In this work, we construct a distributed computing environment based on both the softwares of RHadoop and SparkR for performing the analysis and visualization of air pollution with the R more reliably and effectively. In the work, we firstly use the sensors, called EdiGreen AirBox to collect the air pollution data in Taichung, Taiwan. Then, we adopt the Inverse Distance Weighting method to transform the sensors\u2019 data into the density map. Finally, the experimental results show the accuracy of the short-term prediction results of PM2.5 by using the ARIMA model. In addition, the verification with respect to the prediction accuracy with the MAPE method is also presented in the experimental results."}
{"_id": "e6af72425185b9b7f9854b4ea818ac6a065a4777", "title": "Contrast-limited adaptive histogram equalization: speed and effectiveness", "text": "The Contrast-Limited Adaptive Histogram Equalization (CLARE) method for assigning displayed intensity levels in medical images, is supported by anecdotal evidence and evidence from detection experiments. Despite that, the method requires clinical evaluation and implementation achieving few-second application before it can be clinically adopted. Experiments attempting to produce this evaluation and a machine providing the required performance are"}
{"_id": "4cd7c7c626eed9bd342f5c916d92a0ec3aaae095", "title": "Miniature Dual-Mode Microstrip Filters", "text": "In order to reduce the size of dual-mode microstrip filters by keeping excellent performance, novel filter geometry is proposed. The novel filter includes a dual-mode resonator based on a meander structure. The effect of input/output feed lines located along a straight line on filter response is investigated for the dual-mode filter. The coupling between degenerate modes of the proposed dual-mode microstrip resonator is discussed depending on the perturbation size. The two dual-mode microstrip bandpass filters with real axis and imaginary axis transmission zeros (TZs) are designed, fabricated, and measured to document the validity of the description of the positive and negative coupling coefficient"}
{"_id": "7ee5d9c87a470391f3b027eb44a7859a7cb6090f", "title": "EVS 25 Shenzhen , China , Nov . 5-9 , 2010 Control of A high-Performance Z-Source Inverter for Fuel Cell / Supercapacitor Hybrid Electric Vehicles", "text": "This paper presents a supercapacitor (SC) module connected in parallel with fuel cell (FC) stack to supply a high-performance Z-Source Inverter (HP-ZSI) feeding a three phase induction motor for hybrid electric vehicles applications. The supercapacitor is connected between the input diode and the bidirectional switch of the highperformance ZSI topology. The indirect field-oriented control (IFOC) method is used to control an induction motor speed during motoring and regenerative braking operations to produce the modulation index and a dual loop controller is used to control the Z-network capacitor voltage to produce the shoot-through duty ratio. MATLAB simulation results verified the validity of the proposed control strategy during motoring and regenerative braking operations."}
{"_id": "405e880a3c632255481c59d6fc04206a0a6c2fcb", "title": "How do Retailers Adjust Prices?: Evidence from Store-Level Data", "text": "Recent theoretical work on retail pricing dynamics suggests that retailers periodically hold sales periodic, temporary reductions in price, -even when their costs are unchanged. In this paper we extend existing theory to predict which items will go on sale, and use a new data set from the BLS to document the frequency of sales across a wide range of goods and geographic areas. We find a number of pricing regularities for the 20 categories of goods we examine. First, retailers seem to have a \u201cregular\u201d price, and most deviations from that price are downward. Second, there is considerable heterogeneity in sale behavior across goods within a category (e.g. cereal); the same items are regularly put on sale, while other items rarely are on sale. Third, items are more likely to go on sale when demand is highest. Fourth, for a limited number of items for which we know market shares, products with larger market shares go on sale more often. These final three observations are consistent with our theoretical result that popular products are most likely to be placed on sale."}
{"_id": "a51795160e07aabf7bce59e79502507c60d06a5b", "title": "Conditional Dynamic Mutual Information-Based Feature Selection", "text": "With emergence of new techniques, data in many fields are getting larger and larger, especially in dimensionality aspect. The high dimensionality of data may pose great challenges to traditional learning algorithms. In fact, many of features in large volume of data are redundant and noisy. Their presence not only degrades the performance of learning algorithms, but also confuses end-users in the post-analysis process. Thus, it is necessary to eliminate irrelevant features from data before being fed into learning algorithms. Currently, many endeavors have been attempted in this field and many outstanding feature selection methods have been developed. Among different evaluation criteria, mutual information has also been widely used in feature selection because of its good capability of quantifying uncertainty of features in classification tasks. However, the mutual information estimated on the whole dataset can not exactly represent the correlation between features. To cope with this issue, in this paper we firstly re-estimate mutual information on identified instances dynamically, and then introduce a new feature selection method based on 1194 H. Liu, Y. Mo, J. Zhao conditional mutual information. Performance evaluations on sixteen UCI datasets show that our proposed method achieves comparable performance to other wellestablished feature selection algorithms in most cases."}
{"_id": "099d85f25e9336f48ff64287a4b53ee5fb64ab51", "title": "Learning Sparse Feature Representations for Music Annotation and Retrieval", "text": "We present a data-processing pipeline based on sparse feature learning and describe its applications to music annotation and retrieval. Content-based music annotation and retrieval systems process audio starting with features. While commonly used features, such as MFCC, are handcrafted to extract characteristics of the audio in a succinct way, there is increasing interest in learning features automatically from data using unsupervised algorithms. We describe a systemic approach applying feature-learning algorithms to music data, in particular, focusing on a highdimensional sparse-feature representation. Our experiments show that, using only a linear classifier, the newly learned features produce results on the CAL500 dataset comparable to state-of-the-art music annotation and retrieval systems."}
{"_id": "19c3fcffda8e6e5870b3a533c483bca024501ab5", "title": "Cost-Aware WWW Proxy Caching Algorithms", "text": "Difference between Web Caching & Conventional Paging Problems: Web caching is variable-size caching (web documents vary dramatically in size depending on the information they carry (text, image, video, etc.). Web pages take different amounts of time to download, even if they are of the same size (download latency). Access streams seen by the proxy cache are the union of web access streams from tens to thousands of users (instead of coming from a few programmed sources as in the case of virtual memory paging)"}
{"_id": "c2c9d4ec874b3ef4e84c3ac9b4f6c66a6697daaa", "title": "Deep into the Brain: Artificial Intelligence in Stroke Imaging", "text": "Artificial intelligence (AI), a computer system aiming to mimic human intelligence, is gaining increasing interest and is being incorporated into many fields, including medicine. Stroke medicine is one such area of application of AI, for improving the accuracy of diagnosis and the quality of patient care. For stroke management, adequate analysis of stroke imaging is crucial. Recently, AI techniques have been applied to decipher the data from stroke imaging and have demonstrated some promising results. In the very near future, such AI techniques may play a pivotal role in determining the therapeutic methods and predicting the prognosis for stroke patients in an individualized manner. In this review, we offer a glimpse at the use of AI in stroke imaging, specifically focusing on its technical principles, clinical application, and future perspectives."}
{"_id": "a7e194c7f18f2c4f83993207f737733ff31f03b1", "title": "Syntactic Stylometry: Using Sentence Structure for Authorship Attribution", "text": "Most approaches to statistical stylometry have concentrated on lexical features, such as relative word frequencies or type-token ratios. Syntactic features have been largely ignored. This work attempts to fill that void by introducing a technique for authorship attribution based on dependency grammar. Syntactic features are extracted from texts using a common dependency parser, and those features are used to train a classifier to identify texts by author. While the method described does not outperform existing methods on most tasks, it does demonstrate that purely syntactic features carry information which could be useful for stylometric analysis. Index words: stylometry, authorship attribution, dependency grammar, machine learning Syntactic Stylometry: Using Sentence Structure for Authorship Attribution"}
{"_id": "67161d331d496ad5255ad8982759a1c853856932", "title": "Cooperative flood detection using GSMD via SMS", "text": "This paper proposes architecture for an early warning floods system to alert public against flood disasters. An effective early warning system must be developed with linkages between four elements, which are accurate data collection to undertake risk assessments, development of hazard monitoring services, communication on risk related information and existence of community response capabilities. This project focuses on monitoring water level remotely using wireless sensor network. The project also utilizes Global System for Mobile communication (GSM) and short message service (SMS) to relay data from sensors to computers or directly alert the respective victim's through their mobile phone. It is hope that the proposed architecture can be further develop into a functioning system, which would be beneficial to the community and act as a precautionary action to save lives in the case of flood disaster."}
{"_id": "e78294368171f473a8a2b9bcbf230dd35573de65", "title": "Energy-Efficient Hierarchical Routing for Wireless Sensor Networks: A Swarm Intelligence Approach", "text": "Energy efficient routing in wireless sensor networks (WSNs) require non-conventional paradigm for design and development of power aware protocols. Swarm intelligence (SI) based metaheuristic can be applied for optimal routing of data, in an energy constraint WSNs environment. In this paper, we present BeeSwarm, a SI based energyefficient hierarchical routing protocol for WSNs. Our protocol consists of three phases: (1) Set-up phase-BeeCluster, (2) Route discovery phase-BeeSearch and (3) Data transmission phase-BeeCarrier. Integration of three phases for clustering, data routing and transmission, is the key aspect of our proposed protocol, which ultimately contributes to its robustness. Evaluation of simulation results show that BeeSwarm perform better in terms of packet delivery, energy consumption and throughput with increased network life compared to other SI based hierarchical routing protocols."}
{"_id": "97ddadb162ba02ddb57f4aa799041d88826634b7", "title": "Children's engagement with educational iPad apps: Insights from a Spanish classroom", "text": "This study investigates the effects of a story-making app called Our Story and a selection of other educational apps on the learning engagement of forty-one Spanish 4\u20135-year-olds. Children were observed interacting in small groups with the story-making app and this was compared to their engagement with a selection of construction and drawing apps. Children\u2019s engagement was analysed in two ways: it was categorised using Bangert-Drowns and Pyke\u2019s taxonomy for individual hands-on engagement with educational software, and using the concept of exploratory talk as developed by Mercer et al. to analyse peer engagement. For both approaches, quantitative and qualitative indices of children\u2019s engagement were considered. The overall findings suggested that in terms of the BangertDrowns and Pyke taxonomy, the quality of children\u2019s individual engagement was higher with the OS app in contrast to their engagement with other app software. The frequency of children\u2019s use of exploratory talk was similar with the OS and colouring and drawing apps, and a detailed qualitative analysis of the interaction transcripts revealed several instances of the OS and drawing apps supporting joint problem-solving and collaborative engagement. We suggest that critical indices of an app\u2019s educational value are the extent to which the app supports opportunities for open-ended content and children\u2019s independent use of increasingly difficult features. 2013 Elsevier Ltd. All rights reserved."}
{"_id": "6a013bf5f2e90eee5a98d276e65679ca4e622787", "title": "Integrating the Quality Attribute Workshop ( QAW ) and the Attribute-Driven Design ( ADD ) Method", "text": "............................................................................................................vii"}
{"_id": "3e84d803ed9fbfc4beb76005a33eeee1691c2db7", "title": "Action-Reaction: Forecasting the Dynamics of Human Interaction", "text": "Forecasting human activities from visual evidence is an emerging area of research which aims to allow computational systems to make predictions about unseen human actions. We explore the task of activity forecasting in the context of dual-agent interactions to understand how the actions of one person can be used to predict the actions of another. We model dual-agent interactions as an optimal control problem, where the actions of the initiating agent induce a cost topology over the space of reactive poses \u2013 a space in which the reactive agent plans an optimal pose trajectory. The technique developed in this work employs a kernel-based reinforcement learning approximation of the soft maximum value function to deal with the high-dimensional nature of human motion and applies a mean-shift procedure over a continuous cost function to infer a smooth reaction sequence. Experimental results show that our proposed method is able to properly model human interactions in a high dimensional space of human poses. When compared to several baseline models, results show that our method is able to generate highly plausible simulations of human interaction."}
{"_id": "44cf7f45ddf24c178c5523a64a9aaed76cbf8f0f", "title": "Modeling Human Education Data: From Equation-Based Modeling to Agent-Based Modeling", "text": "Agent-based simulation is increasingly used to analyze the performance of complex systems. In this paper we describe results of our work on one specific agent-based model, showing how it can be validated against the equation-based model from which it was derived, and demonstrating the extent to which it can be used to derive additional results over and above those that the equation-based model can provide."}
{"_id": "665bb05b43dec97c905c387c267302a27599f324", "title": "LITMUS: Landslide detection by integrating multiple sources", "text": "Disasters often lead to other kinds of disasters, forming multi-hazards such as landslides, which may be caused by earthquakes, rainfalls, water erosion, among other reasons. Effective detection and management of multihazards cannot rely only on one information source. In this paper, we evaluate a landslide detection system LITMUS, which combines multiple physical sensors and social media to handle the inherent varied origins and composition of multi-hazards. LITMUS integrates near real-time data from USGS seismic network, NASA TRMM rainfall network, Twitter, YouTube, and Instagram. The landslide detection process consists of several stages of social media filtering and integration with physical sensor data, with a final ranking of relevance by integrated signal strength. Applying LITMUS to data collected in October 2013, we analyzed and filtered 34.5k tweets, 2.5k video descriptions and 1.6k image captions containing landslide keywords followed by integration with physical sources based on a Bayesian model strategy. It resulted in detection of all 11 landslides reported by USGS and 31 more landslides unreported by USGS. An illustrative example is provided to demonstrate how LITMUS\u2019 functionality can be used to determine landslides related to the recent Typhoon Haiyan."}
{"_id": "25239ec7fb6159166dfe15adf229fc2415f071df", "title": "An ontology-based model to determine the automation level of an automated vehicle for co-driving", "text": "Full autonomy of ground vehicles is a major goal of the ITS (Intelligent Transportation Systems) community. However, reaching such highest autonomy level in all situations (weather, traffic, ...) may seem difficult in practice, despite recent results regarding driverless cars (e.g., Google Cars). In addition, an automated vehicle should also self-assess its own perception abilities, and not only perceive its environment. In this paper, we propose an intermediate approach towards full automation, by defining a spectrum of automation layers, from fully manual (the car is driven by a driver) to fully automated (the car is driven by a computer), based on an ontological model for representing knowledge. We also propose a second ontology for situation assessment (what does the automated car perceive?), including the sensors/actuators state, environmental conditions and driver's state. Finally, we also define inference rules to link the situation assessment ontology to the automation level one. Both ontological models have been built and first results are presented."}
{"_id": "ac912f1c708ea00e07d9d55ab076582436a33ceb", "title": "Optimization for allocating BEV recharging stations in urban areas by using hierarchical clustering", "text": "The Battery Electric Vehicle (BEV) prototypes are evolving into a reality in a foreseeable future. Before BEV can embark on a mass adoption by road commuters, a new infrastructure for battery charging will have to be ready in place. By 2015, access to BEV charging will be available at nearly one million charge points in the United States, as claimed by CleanMPG. Early BEV adopters will primarily charge their vehicles at home, in their private garages. However, for many Asia-Pacific regions where people live in concrete ghettos, public charging will play a more central role due to reduced access to convenient home charging. This research focuses on planning BEV charging locations to be installed in urbanized areas where they are usually characterized by dense traffic concentrations, restricted street spaces and other complex factors such as the distribution of power grids. We proposed a two-steps model that first quantified the road information into data points, and subsequently they are converged into 'demand clusters' over an urbanized area by hierarchical clustering analysis. Optimization techniques then applied on the demand clusters with the aim of meeting the supplies and demands, while at the same time certain constraints and cost factors are considered. The model is believed to be an important decision-support tool for city planning and BEV charging stations allocation."}
{"_id": "37babcee68e1a4ba8ab79f9072a5b0cb971472cd", "title": "Relevance and lexical pragmatics *", "text": "The goal of lexical pragmatics is to explain how linguistically specified (\u2018literal\u2019) word meanings are modified in use. While lexical-pragmatic processes such as narrowing, broadening and metaphorical extension are generally studied in isolation from each other, relevance theorists (Carston 2002, Wilson & Sperber 2002) have been arguing for a unified approach. I will continue this work by underlining some of the problems with more standard treatments, and show how a variety of lexicalpragmatic processes may be analysed as special cases of a general pragmatic adjustment process which applies spontaneously, automatically and unconsciously to fine-tune the interpretation of virtually every word."}
{"_id": "670229acc298aa3db2dc24f2871b8a05cee158c8", "title": "Underwater sensor networks: applications, advances and challenges.", "text": "This paper examines the main approaches and challenges in the design and implementation of underwater wireless sensor networks. We summarize key applications and the main phenomena related to acoustic propagation, and discuss how they affect the design and operation of communication systems and networking protocols at various layers. We also provide an overview of communications hardware, testbeds and simulation tools available to the research community."}
{"_id": "c5a8e3ff3c60440ac5e5a4573f76851a1061e33e", "title": "An improvement on the Euler number computing algorithm used in MATLAB", "text": "Computation of the Euler number of a binary image is often necessary in image matching, image database retrieval, image analysis, pattern recognition, and computer vision. This paper proposes an improvement on the Euler number computing algorithm used in the famous image processing tool MATLAB. By use of the information obtained during processing the previous pixel, the number of times of checking the neighbor pixels for processing a pixel decrease from 4 to 2. Our method is very simple in principle, and easily implemented. The experimental results demonstrated that our method outperforms significantly conventional Euler number computing algorithms."}
{"_id": "977fe5853db16e320917a43fb00f334456625a1e", "title": "DIP: the Database of Interacting Proteins", "text": "The Database of Interacting Proteins (DIP; http://dip.doe-mbi.ucla.edu) is a database that documents experimentally determined protein-protein interactions. This database is intended to provide the scientific community with a comprehensive and integrated tool for browsing and efficiently extracting information about protein interactions and interaction networks in biological processes. Beyond cataloging details of protein-protein interactions, the DIP is useful for understanding protein function and protein-protein relationships, studying the properties of networks of interacting proteins, benchmarking predictions of protein-protein interactions, and studying the evolution of protein-protein interactions."}
{"_id": "da86e24280d596ecda32cbed4f4a73d0f3695b67", "title": "Effects of wine, alcohol and polyphenols on cardiovascular disease risk factors: evidences from human studies.", "text": "AIMS\nThe aim of this review was to focus on the knowledge of the cardiovascular benefits of moderate alcohol consumption, as well as to analyze the effects of the different types of alcoholic beverages.\n\n\nMETHODS\nSystematic revision of human clinical studies and meta-analyses related to moderate alcohol consumption and cardiovascular disease (CVD) from 2000 to 2012.\n\n\nRESULTS\nHeavy or binge alcohol consumption unquestionably leads to increased morbidity and mortality. Nevertheless, moderate alcohol consumption, especially alcoholic beverages rich in polyphenols, such as wine and beer, seems to confer cardiovascular protective effects in patients with documented CVD and even in healthy subjects.\n\n\nCONCLUSIONS\nIn conclusion, wine and beer (but especially red wine) seem to confer greater cardiovascular protection than spirits because of their polyphenolic content. However, caution should be taken when making recommendations related to alcohol consumption."}
{"_id": "a34c334d4cc8dfce03ec48fa7a88ab0c3817ace7", "title": "Powering MEMS portable devices \u2014 a review of non-regenerative and regenerative power supply systems with special emphasis on piezoelectric energy harvesting systems", "text": "Power consumption is forecast by the International Technology Roadmap of Semiconductors (ITRS) to pose long-term technical challenges for the semiconductor industry. The purpose of this paper is threefold: (1) to provide an overview of strategies for powering MEMS via non-regenerative and regenerative power supplies; (2) to review the fundamentals of piezoelectric energy harvesting, along with recent advancements, and (3) to discuss future trends and applications for piezoelectric energy harvesting technology. The paper concludes with a discussion of research needs that are critical for the enhancement of piezoelectric energy harvesting devices. (Some figures in this article are in colour only in the electronic version)"}
{"_id": "06f25b058b85486ec1ba185dd545c8b74e7e594f", "title": "Cyber-Physical Systems in the SmartGrid", "text": "Radical changes are expected to occur in the next years in the electricity domain and the grid itself which has been almost unchanged the last 100 years. Value is created when interactions exist and this is the main thrust of the emerging SmartGrid which will heavily rely on IT technologies at several layers for monitoring and control. The basic building blocks are the existing efforts in the domain of the Internet of Things and Internet of Services, that come together with cooperation as the key enabler. The SmartGrid is a complex ecosystem of heterogeneous (cooperating) entities that interact in order to provide the envisioned functionality. Advanced business services will take advantage of the near real-time information flows among all participants. In order to realize the SmartGrid promise we will have to heavily depend on Cyber-Physical Systems (CPS) that will be able to monitor, share and manage information and actions on the business as well as the real world. CPS is seen as an integral part of the SmartGrid, hence several open issues will need to be effectively addressed."}
{"_id": "5125b385447ad58b19960a2652b555373f640331", "title": "The New Frontier of Smart Grids", "text": "The power grid is a massive interconnected network used to deliver electricity from suppliers to consumers and has been a vital energy supply. To minimize the impact of climate change while at the same time maintaining social prosperity, smart energy must be embraced to ensure a balanced economical growth and environmental sustainability. There fore, in the last few years, the new concept of a smart grid (SG) became a critical enabler in the contempo rary world and has attracted increas ing attention of policy makers and engineers. This article introduces the main concepts and technological challenges of SGs and presents the authors' views on some required challenges and opportunities pre sented to the IEEE Industrial Electronics Society (IES) in this new and exciting frontier."}
{"_id": "7b9ab27ad78899b6b284a17c38aa75fb0e1d1765", "title": "An information-centric energy infrastructure: The Berkeley view", "text": "We describe an approach for how to design an essentially more scalable, flexible and resilient electric power infrastructure \u2013 one that encourages efficient use, integrates local generation, and manages demand through omnipresent awareness of energy availability and use over time. We are inspired by how the Internet has revolutionized communications infrastructure, by pushing intelligence to the edges while hiding the diversity of underlying technologies through well-defined interfaces. Any end device is a traffic source or sink and intelligent endpoints adapt their traffic to what the infrastructure can"}
{"_id": "1be0ccc5fbb9c574c3f99b382cb8171ead2e6f68", "title": "Power and energy management for server systems", "text": "This survey shows that heterogeneous server clusters can be made more efficient by conserving power and energy while exploiting information from the service level, such as request priorities established by service-level agreements."}
{"_id": "24ab79f38cb17dc0a62b98ed23122a4062f8f682", "title": "A Multilevel Inverter for Photovoltaic Systems With Fuzzy Logic Control", "text": "Converters for photovoltaic (PV) systems usually consist of two stages: a dc/dc booster and a pulsewidth modulated (PWM) inverter. This cascade of converters presents efficiency issues, interactions between its stages, and problems with the maximum power point tracking. Therefore, only part of the produced electrical energy is utilized. In this paper, the authors propose a single-phase H-bridge multilevel converter for PV systems governed by a new integrated fuzzy logic controller (FLC)/modulator. The novelties of the proposed system are the use of a fully FLC (not requiring any optimal PWM switching-angle generator and proportional-integral controller) and the use of an H-bridge power-sharing algorithm. Most of the required signal processing is performed by a mixed-mode field-programmable gate array, resulting in a fully integrated System-on-Chip controller. The general architecture of the system and its main performance in a large spectrum of practical situations are presented and discussed. The proposed system offers improved performance over two-level inverters, particularly at low-medium power."}
{"_id": "0154a72df1b9929743145794dd2b1ee4d8b23a60", "title": "Robot Programming by Demonstration with Crowdsourced Action Fixes", "text": "Programming by Demonstration (PbD) can allow endusers to teach robots new actions simply by demonstrating them. However, learning generalizable actions requires a large number of demonstrations that is unreasonable to expect from end-users. In this paper, we explore the idea of using crowdsourcing to collect action demonstrations from the crowd. We propose a PbD framework in which the end-user provides an initial seed demonstration, and then the robot searches for scenarios in which the action will not work and requests the crowd to fix the action for these scenarios. We use instance-based learning with a simple yet powerful action representation that allows an intuitive visualization of the action. Crowd workers directly interact with these visualizations to fix them. We demonstrate the utility of our approach with a user study involving local crowd workers (N=31) and analyze the collected data and the impact of alternative design parameters so as to inform a real-world deployment of our system."}
{"_id": "726c76d5c13b2ad763c7c1ba52e08ef5e7078bfc", "title": "Efficient online structured output learning for keypoint-based object tracking", "text": "Efficient keypoint-based object detection methods are used in many real-time computer vision applications. These approaches often model an object as a collection of keypoints and associated descriptors, and detection then involves first constructing a set of correspondences between object and image keypoints via descriptor matching, and subsequently using these correspondences as input to a robust geometric estimation algorithm such as RANSAC to find the transformation of the object in the image. In such approaches, the object model is generally constructed offline, and does not adapt to a given environment at runtime. Furthermore, the feature matching and transformation estimation stages are treated entirely separately. In this paper, we introduce a new approach to address these problems by combining the overall pipeline of correspondence generation and transformation estimation into a single structured output learning framework. Following the recent trend of using efficient binary descriptors for feature matching, we also introduce an approach to approximate the learned object model as a collection of binary basis functions which can be evaluated very efficiently at runtime. Experiments on challenging video sequences show that our algorithm significantly improves over state-of-the-art descriptor matching techniques using a range of descriptors, as well as recent online learning based approaches."}
{"_id": "e3454ea07e4e618989b9d6c658c3653c4577b81b", "title": "Basics of dermal filler rheology.", "text": "BACKGROUND\nHyaluronic acid injectable fillers are the most widely used dermal fillers to treat facial volume deficits, providing long-term facial aesthetic enhancement outcomes for the signs of aging and/or facial contouring.\n\n\nOBJECTIVES\nThe purpose of this article was to explain how rheology, the study of the flow of matter, can be used to help physicians differentiate between dermal fillers targeted to certain areas of the face.\n\n\nMETHODS\nThis article describes how rheological properties affect performance when filler is used in various parts of the face and exposed to mechanical stress (shear deformation and compression/stretching forces) associated with daily facial animation and other commonly occurring external forces.\n\n\nRESULTS\nImproving facial volume deficits with filler is linked mainly to gel viscoelasticity and cohesivity. These 2 properties set the level of resistance to lateral and vertical deformations of the filler and influence filler tissue integration through control of gel spreading.\n\n\nCONCLUSION\nSelection of dermal filler with the right rheological properties is a key factor in achieving a natural-looking long-lasting desired aesthetic outcome."}
{"_id": "56fe16922b87f258ea511344d29a48d74adde182", "title": "Smart agent based prepaid wireless energy meter", "text": "Prepaid meter (PM) is getting very popular especially in developing countries. There are many advantages to use prepaid meter as opposed to postpaid meter both to the utility provider and to the consumer. Brunei Darussalam has adopted PM but it is not intelligent and not wireless enabled. Reading meters and topping up balance are still done manually. Utility provider does not have information on the usage statistics and has only limited functionalities in the grid control. So accordingly a novel software agent based wireless prepaid energy meter was developed using Java Agent Development Environment (JADE-LEAP) allowing agent from utility provider to query wireless energy meter for energy values of every household. These statistics can be used for statistical computation of the power consumed and for policy and future planning."}
{"_id": "498f3f655009f47981a1a48a94720e77f7f2608b", "title": "Online Keyword Spotting with a Character-Level Recurrent Neural Network", "text": "In this paper, we propose a context-aware keyword spotting m odel employing a character-level recurrent neural network (RNN) for spoken term detection in co tinuous speech. The RNN is end-toend trained with connectionist temporal classification (CT C) to generate the probabilities of character and word-boundary labels. There is no need for the phonetic t ranscription, senone modeling, or system dictionary in training and testing. Also, keywords can easi ly be added and modified by editing the text based keyword list without retraining the RNN. Moreover, th e unidirectional RNN processes an infinitely long input audio streams without pre-segmentation and keyw ords are detected with low-latency before the utterance is finished. Experimental results show that th e proposed keyword spotter significantly outperforms the deep neural network (DNN) and hidden Markov m del (HMM) based keyword-filler model even with less computations."}
{"_id": "71f5afa6711410e53b14e4b7f10bbd74067ae0b4", "title": "RACOG and wRACOG: Two Probabilistic Oversampling Techniques", "text": "As machine learning techniques mature and are used to tackle complex scientific problems, challenges arise such as the imbalanced class distribution problem, where one of the target class labels is under-represented in comparison with other classes. Existing oversampling approaches for addressing this problem typically do not consider the probability distribution of the minority class while synthetically generating new samples. As a result, the minority class is not represented well which leads to high misclassification error. We introduce two probabilistic oversampling approaches, namely RACOG and wRACOG, to synthetically generating and strategically selecting new minority class samples. The proposed approaches use the joint probability distribution of data attributes and Gibbs sampling to generate new minority class samples. While RACOG selects samples produced by the Gibbs sampler based on a predefined lag, wRACOG selects those samples that have the highest probability of being misclassified by the existing learning model. We validate our approach using nine UCI data sets that were carefully modified to exhibit class imbalance and one new application domain data set with inherent extreme class imbalance. In addition, we compare the classification performance of the proposed methods with three other existing resampling techniques."}
{"_id": "b32e2a4d4894ac81d66211349320cd1f79b22942", "title": "Enhanced SAR ADC energy efficiency from the early reset merged capacitor switching algorithm", "text": "The early reset merged capacitor switching algorithm (EMCS) is proposed as an energy reducing switching technique for a binary weighted, capacitive successive approximation (SAR) analog to digital converter (ADC). The method uses the merged capacitor switching (MCS) architecture and optimizes the use of the VCM level during the SAR conversion. This algorithm can reduce switching power by over 12% with no additional DAC driver activity when compared to the MCS scheme. The MCS and EMCS approaches are analyzed mathematically and the EMCS energy consumption is shown to be lower than or equal to that of the MCS technique for every digital code. Static linearity improvements for this structure are also shown with the integral non-linearity (INL) reducing by a factor of two due to the utilization of the MCS three level DAC. The EMCS implementation methodology is also described."}
{"_id": "0c6fa98b7b99d807df7c027e8e97751f1bbb9140", "title": "Data programming with DDLite: putting humans in a different part of the loop", "text": "Populating large-scale structured databases from unstructured sources is a critical and challenging task in data analytics. As automated feature engineering methods grow increasingly prevalent, constructing sufficiently large labeled training sets has become the primary hurdle in building machine learning information extraction systems. In light of this, we have taken a new approach called data programming [7]. Rather than hand-labeling data, in the data programming paradigm, users generate large amounts of noisy training labels by programmatically encoding domain heuristics as simple rules. Using this approach over more traditional distant supervision methods and fully supervised approaches using labeled data, we have been able to construct knowledge base systems more rapidly and with higher quality. Since the ability to quickly prototype, evaluate, and debug these rules is a key component of this paradigm, we introduce DDLite, an interactive development framework for data programming. This paper reports feedback collected from DDLite users across a diverse set of entity extraction tasks. We share observations from several DDLite hackathons in which 10 biomedical researchers prototyped information extraction pipelines for chemicals, diseases, and anatomical named entities. Initial results were promising, with the disease tagging team obtaining an F1 score within 10 points of the state-of-the-art in only a single day-long hackathon's work. Our key insights concern the challenges of writing diverse rule sets for generating labels, and exploring training data. These findings motivate several areas of active data programming research."}
{"_id": "a5de09243b4b12fc4bcf4db56c8e38fc3beddf4f", "title": "Governance of an Enterprise Social Intranet Implementation: The Statkraft Case", "text": "Recent studies demonstrate that the implementation of enterprise social systems (ESSs) will transfer organizations into new paradigm of social business which results in enormous economic returns and competitive advantage. Social business creates a completely new way of working and organizing characterised by social collaboration, intrinsic knowledge sharing, voluntarily mass participation, just name a few. Thus, implementation of ESSs should tackle the uniqueness of the new way of working and organizing. However, there is a shortage of knowledge about implementation of these large enterprise systems. The purpose of this paper is to study governance model of ESSs implementation. A case study is conducted to investigate the implementation of the social intranet called the \u2018Stream\u2019 at Statkraft, which is a world-leading energy company in Norway. The governance model of \u2018Stream\u2019 emphasizes the close cooperation and accountability between corporate communication, human resources and IT, which implies paradigm shift in governance of implementing ESSs. Benefits and challenges in the implementation are also identified. Based on the knowledge and insights gained in the study, recommendations are proposed to assist the company in improving governance of ESSs implementation. The study contributes knowledge/know-how on governance of ESSs implementation."}
{"_id": "26dda18412365d6c59866cf8cbc867a911727141", "title": "Do Better ImageNet Models Transfer Better?", "text": "Transfer learning is a cornerstone of computer vision, yet little work has been done to evaluate the relationship between architecture and transfer. An implicit hypothesis in modern computer vision research is that models that perform better on ImageNet necessarily perform better on other vision tasks. However, this hypothesis has never been systematically tested. Here, we compare the performance of 16 classification networks on 12 image classification datasets. We find that, when networks are used as fixed feature extractors or fine-tuned, there is a strong correlation between ImageNet accuracy and transfer accuracy (r = 0.99 and 0.96, respectively). In the former setting, we find that this relationship is very sensitive to the way in which networks are trained on ImageNet; many common forms of regularization slightly improve ImageNet accuracy but yield penultimate layer features that are much worse for transfer learning. Additionally, we find that, on two small fine-grained image classification datasets, pretraining on ImageNet provides minimal benefits, indicating the learned features from ImageNet do not transfer well to fine-grained tasks. Together, our results show that ImageNet architectures generalize well across datasets, but ImageNet features are less general than previously suggested."}
{"_id": "e9487dfc2fc9ff2dfdebab250560d567e6800f57", "title": "A randomized, efficient, and distributed protocol for the detection of node replication attacks in wireless sensor networks", "text": "Wireless sensor networks are often deployed in hostile environments, where anadversary can physically capture some of the nodes. Once a node is captured, the attackercan re-program it and replicate the node in a large number of clones, thus easily taking over the network. The detection of node replication attacks in a wireless sensor network is therefore a fundamental problem. A few distributed solutions have recently been proposed. However, these solutions are not satisfactory. First, they are energy and memory demanding: A serious drawback for any protocol that is to be used in resource constrained environment such as a sensor network. Further, they are vulnerable to specific adversary models introduced in this paper.\n The contributions of this work are threefold. First, we analyze the desirable properties of a distributed mechanism for the detection of node replication attacks. Second, we show that the known solutions for this problem do not completely meet our requirements. Third, we propose a new Randomized, Efficient, and Distributed (RED) protocol for the detection of node replication attacks and we show that it is completely satisfactory with respect to the requirements. Extensive simulations also show that our protocol is highly efficient in communication, memory, and computation, that it sets out an improved attack detection probability compared to the best solutions in the literature, and that it is resistant to the new kind of attacks we introduce in this paper, while other solutions are not."}
{"_id": "6f7745843ca4207567fc55f1023211f3fdde3ac2", "title": "TechWare: Financial Data and Analytic Resources [Best of the Web]", "text": "In this issue, \u201cBest of the Web\u201d focuses on data resources and analytic tools for quantitative analysis of financial markets. An abundance of financial data is reshaping trading, investment research, and risk management. The broad availability of this information creates opportunities to introduce analysis techniques that are new to the financial industry. The financial industry is currently dominated by a handful of workhorse models, such as the capital asset pricing model and the Black-Scholes options pricing model."}
{"_id": "391122d9722b376612e22a6f102ddc5940b80b6b", "title": "Web Mining Techniques in E-Commerce Applications", "text": "Today web is the best medium of communication in modern business. Many companies are redefining their business strategies to improve the business output. Business over internet provides the opportunity to customers and partners where their products and specific business can be found. Nowadays online business breaks the barrier of time and space as compared to the physical office. Big companies around the world are realizing that e-commerce is not just buying and selling over Internet, rather it improves the efficiency to compete with other giants in the market. For this purpose data mining sometimes called as knowledge discovery is used. Web mining is data mining technique that is applied to the WWW. There are vast quantities of information available over the Internet."}
{"_id": "bbfb87e5695e84e36a55719301f0d70ca16e1cc6", "title": "You Are What You Post: What the Content of Instagram Pictures Tells About Users' Personality", "text": "Instagram is a popular social networking application that allows users to express themselves through the uploaded content and the different filters they can apply. In this study we look at the relationship between the content of the uploaded Instagram pictures and the personality traits of users. To collect data, we conducted an online survey where we asked participants to fill in a personality questionnaire, and grant us access to their Instagram account through the Instagram API. We gathered 54,962 pictures of 193 Instagram users. Through the Google Vision API, we analyzed the pictures on their content and clustered the returned labels with the k-means clustering approach. With a total of 17 clusters, we analyzed the relationship with users\u2019 personality traits. Our findings suggest a relationship between personality traits and picture content. This allow for new ways to extract personality traits from social media trails, and new ways to facilitate personalized systems. Author"}
{"_id": "d333d23a0c178cce0132a7f2cc28b809115e3446", "title": "The stressed hippocampus, synaptic plasticity and lost memories", "text": "Stress is a biologically significant factor that, by altering brain cell properties, can disturb cognitive processes such as learning and memory, and consequently limit the quality of human life. Extensive rodent and human research has shown that the hippocampus is not only crucially involved in memory formation, but is also highly sensitive to stress. So, the study of stress-induced cognitive and neurobiological sequelae in animal models might provide valuable insight into the mnemonic mechanisms that are vulnerable to stress. Here, we provide an overview of the neurobiology of stress\u2013memory interactions, and present a neural\u2013endocrine model to explain how stress modifies hippocampal functioning."}
{"_id": "099b097ecbadf722489f2ff9c1a1fcfc28ac7dbb", "title": "Physics 101: Learning Physical Object Properties from Unlabeled Videos", "text": "We study the problem of learning physical properties of objects from unlabeled videos. Humans can learn basic physical laws when they are very young, which suggests that such tasks may be important goals for computational vision systems. We consider various scenarios: objects sliding down an inclined surface and colliding; objects attached to a spring; objects falling onto various surfaces, etc. Many physical properties like mass, density, and coefficient of restitution influence the outcome of these scenarios, and our goal is to recover them automatically. We have collected 17,408 video clips containing 101 objects of various materials and appearances (shapes, colors, and sizes). Together, they form a dataset, named Physics 101, for studying object-centered physical properties. We propose an unsupervised representation learning model, which explicitly encodes basic physical laws into the structure and use them, with automatically discovered observations from videos, as supervision. Experiments demonstrate that our model can learn physical properties of objects from video. We also illustrate how its generative nature enables solving other tasks such as outcome prediction."}
{"_id": "1c18f02b5247c6de4b319f2638707d63b11d5cd7", "title": "A tutorial on training recurrent neural networks , covering BPPT , RTRL , EKF and the \" echo state network \" approach - Semantic Scholar", "text": "This tutorial is a worked-out version of a 5-hour course originally held at AIS in September/October 2002. It has two distinct components. First, it contains a mathematically-oriented crash course on traditional training methods for recurrent neural networks, covering back-propagation through time (BPTT), real-time recurrent learning (RTRL), and extended Kalman filtering approaches (EKF). This material is covered in Sections 2 \u2013 5. The remaining sections 1 and 6 \u2013 9 are much more gentle, more detailed, and illustrated with simple examples. They are intended to be useful as a stand-alone tutorial for the echo state network (ESN) approach to recurrent neural network training."}
{"_id": "9fca534df83ada8e7ddeb68919f12126d0098082", "title": "Advanced Features for Enterprise-Wide Role-Based Access Control", "text": "The administration of users and access rights in large enterprises is a complex and challenging task. Roles are a powerful concept for simplifying access control, but their implementation is normally restricted to single systems and applications. In this article we define Enterprise Roles capable of spanning all IT systems in an organisation. We show how the Enterprise Role-Based Access Control (ERBAC) model exploits the RBAC model outlined in the NIST standard draft[5] and describe its extensions. We have implemented ERBAC as a basic concept of SAM Jupiter, a commercial security administration tool. Based on practical experience with the deployment of Enterprise Roles during SAM implementation projects in large organisations, we have enhanced the ERBAC model by including different ways of parametrising the roles. We show that using parameters can significantly reduce the number of roles needed in an enterprise and simplify the role structure, thereby reducing the administration effort considerably. The enhanced ERBAC features are illustrated by reallife examples."}
{"_id": "ff60d4601adabe04214c67e12253ea3359f4e082", "title": "Video-based emotion recognition in the wild using deep transfer learning and score fusion", "text": "Multimodal recognition of affective states is a difficult problem, unless the recording conditions are carefully controlled. For recognition \u201cin the wild\u201d, large variances in face pose and illumination, cluttered backgrounds, occlusions, audio and video noise, as well as issues with subtle cues of expression are some of the issues to target. In this paper, we describe a multimodal approach for video-based emotion recognition in the wild. We propose using summarizing functionals of complementary visual descriptors for video modeling. These features include deep convolutional neural network (CNN) based features obtained via transfer learning, for which we illustrate the importance of flexible registration and fine-tuning. Our approach combines audio and visual features with least squares regression based classifiers and weighted score level fusion. We report state-of-the-art results on the EmotiW Challenge for \u201cin the wild\u201d facial expression recognition. Our approach scales to other problems, and ranked top in the ChaLearn-LAP First Impressions Challenge 2016 from video clips collected in the wild."}
{"_id": "53275aea89844c503daf8e4d0864764201def8f3", "title": "Learning deep representations by mutual information estimation and maximization", "text": "This work investigates unsupervised learning of representations by maximizing mutual information between an input and the output of a deep neural network encoder. Importantly, we show that structure matters: incorporating knowledge about locality in the input into the objective can significantly improve a representation\u2019s suitability for downstream tasks. We further control characteristics of the representation by matching to a prior distribution adversarially. Our method, which we call Deep InfoMax (DIM), outperforms a number of popular unsupervised learning methods and compares favorably with fully-supervised learning on several classification tasks in with some standard architectures. DIM opens new avenues for unsupervised learning of representations and is an important step towards flexible formulations of representation learning objectives for specific end-goals."}
{"_id": "4af39befa8fa3d1e4d8f40e0a2460937df3dac4d", "title": "Clinical practice guidelines for the management of pain, agitation, and delirium in adult patients in the intensive care unit.", "text": "OBJECTIVE\nTo revise the \"Clinical Practice Guidelines for the Sustained Use of Sedatives and Analgesics in the Critically Ill Adult\" published in Critical Care Medicine in 2002.\n\n\nMETHODS\nThe American College of Critical Care Medicine assembled a 20-person, multidisciplinary, multi-institutional task force with expertise in guideline development, pain, agitation and sedation, delirium management, and associated outcomes in adult critically ill patients. The task force, divided into four subcommittees, collaborated over 6 yr in person, via teleconferences, and via electronic communication. Subcommittees were responsible for developing relevant clinical questions, using the Grading of Recommendations Assessment, Development and Evaluation method (http://www.gradeworkinggroup.org) to review, evaluate, and summarize the literature, and to develop clinical statements (descriptive) and recommendations (actionable). With the help of a professional librarian and Refworks database software, they developed a Web-based electronic database of over 19,000 references extracted from eight clinical search engines, related to pain and analgesia, agitation and sedation, delirium, and related clinical outcomes in adult ICU patients. The group also used psychometric analyses to evaluate and compare pain, agitation/sedation, and delirium assessment tools. All task force members were allowed to review the literature supporting each statement and recommendation and provided feedback to the subcommittees. Group consensus was achieved for all statements and recommendations using the nominal group technique and the modified Delphi method, with anonymous voting by all task force members using E-Survey (http://www.esurvey.com). All voting was completed in December 2010. Relevant studies published after this date and prior to publication of these guidelines were referenced in the text. The quality of evidence for each statement and recommendation was ranked as high (A), moderate (B), or low/very low (C). The strength of recommendations was ranked as strong (1) or weak (2), and either in favor of (+) or against (-) an intervention. A strong recommendation (either for or against) indicated that the intervention's desirable effects either clearly outweighed its undesirable effects (risks, burdens, and costs) or it did not. For all strong recommendations, the phrase \"We recommend \u2026\" is used throughout. A weak recommendation, either for or against an intervention, indicated that the trade-off between desirable and undesirable effects was less clear. For all weak recommendations, the phrase \"We suggest \u2026\" is used throughout. In the absence of sufficient evidence, or when group consensus could not be achieved, no recommendation (0) was made. Consensus based on expert opinion was not used as a substitute for a lack of evidence. A consistent method for addressing potential conflict of interest was followed if task force members were coauthors of related research. The development of this guideline was independent of any industry funding.\n\n\nCONCLUSION\nThese guidelines provide a roadmap for developing integrated, evidence-based, and patient-centered protocols for preventing and treating pain, agitation, and delirium in critically ill patients."}
{"_id": "5772ebf6fd60e5e86081799934891815f02173fd", "title": "Emotional Facial Expression Classification for Multimodal User Interfaces", "text": "We present a simple and computationally feasible method to perform automatic emotional classification of facial expressions. We propose the use of 10 characteristic points (that are part of the MPEG4 feature points) to extract relevant emotional information (basically five distances, presence of wrinkles and mouth shape). The method defines and detects the six basic emotions (plus the neutral one) in terms of this information and has been fine-tuned with a data-base of 399 images. For the moment, the method is applied to static images. Application to sequences is being now developed. The extraction of such information about the user is of great interest for the development of new multimodal user interfaces."}
{"_id": "07283e28e324149afb0e2ed36c67c55a640d4a4f", "title": "A framework for evaluating multimodal music mood classification", "text": "This research proposes a framework of music mood classification that utilizes multiple and complementary information sources, namely, music audio, lyric text and social tags associated to music pieces. This article presents the framework and a thorough evaluation on each of its components. Experiment results on a large dataset of 18 mood categories show that combining lyrics and audio significantly outperformed systems using audio-only features. Automatic feature selection techniques were further proved to have reduced feature space. In addition, the examination of learning curves shows that the hybrid systems using lyrics and audio needed fewer training samples and shorter audio clips to achieve the same or better classification accuracies than systems using lyrics or audio singularly. Last but not least, performance comparisons reveal relative importance of audio and lyric features across mood categories. Introduction Music is an essential information type in people\u2019s everyday life. Nowadays, there have been a large number of music collections, repositories, and websites that strive to provide convenient access to music for various users, from musicians to the general public. These repositories and users often use different types of metadata to describe music such as genre, artist, country of source and music mood 1 . (Vignoli, 2004; Hu & Downie, 2007). Many of these music repositories have been relying on manual supply of music metadata, but the increasing amount of music data calls for tools that can automatically classify music pieces. Music mood classification has thus been attracting researchers\u2019 attention in the last decade, but many existing classification systems are solely based on information extracted from the audio recordings of music and have achieved suboptimal performances or reached a \u201cglass ceiling\u201d of performance (Lu, Liu, & Zhang, 2006; Trohidis, Tsoumakas, Kalliris, & Vlahavas, 2008; Hu, Downie, Laurier, Bay, & Ehmann, 2008, Yang & Chen, 2012, Barthet, Fazekas, & Sandler, 2013). At roughly the same time, studies have reported that lyrics and social tags associated with music have important values in Music Information Retrieval (MIR) research. For example, Cunningham, Downie, and Bainbridge (2005) reported lyrics as the most mentioned feature by respondents in answering why they hated a song. Geleijnse, Schedl, and Knees (2007) proposed an effective method of measuring artists similarity using social tags associated with the artists. As lyrics often bear with semantics of human language, they have been exploited in music classification as well (e.g., He et al, 2008; Hu, Chen, & Yang, 2009; Van Zaanen & Kanters, 2010; Dakshina & Sridhar, 2014). Furthermore, based on the hypothesis that lyrics and music audio 2 are different enough and thus may complement each other, researchers have started to combine lyrics and audio for improved classification performances (Laurier, Grivolla, & Herrera, 2008; Yang et al., 2008; Bj\u00f6rn, Johannes, & Gerhard, 2010; Brilis et al., 2012). Such approach of combining multiple information sources in solving classification problems is commonly called multimodal classification (Kim, et al., 2010; Yang & Chen, 2012; Barthet et al., 2013). Multimodal classification approaches in general are reported to have improved classification performances over those based on a single source. 
However, there are many options and decisions involved in a multimodal classification approach, and to date, there has not been any general guidance on how to make these decisions in order to achieve more effective classifications. This study proposes a framework of multimodal music mood classification where research questions on each specific stage or component of the classification process could be answered. This is one of the first studies presenting a comprehensive experiment on a multimodal dataset of 5,296 unique songs, which exemplifies every stage of the framework and evidences how the performances of music mood classification can be improved by a multimodal approach. Specifically, novelty and contributions of this study can be summarized as follows: 1. Conceptualize a framework for the entire process of automatic music mood classification using multiple information sources. The framework is flexible in that each component can be easily extended by adding new methods, algorithms and tools. Under the framework, this study systematically answers questions often involved in multimodal classification: feature extraction, feature selection, ensemble methods, etc. 2. Following a previous study evaluating a wide range of lyric features and their combinations (Hu & Downie, 2010), this study further explores feature selection and effect of dimension reduction of feature spaces. Thus, it pushes forward the state-of-the-art on sentiment analysis for music lyrics; 3. Examine the reduction of training data brought by the multimodal approach. This aspect of improvement has rarely been addressed by previous studies on multimodal music classification. Both the effect on the number of training examples and that on the length of audio clips are evaluated in this study. 4. Compare relative advantages of lyrics and audio across different mood categories. To date, there is little evidence on which information source works better for which mood category(ies). Gaining insight on this question can contribute to deeper understanding of sources and components of music mood. 5. Build a large ground truth dataset for the task of multimodal music mood classification. The dataset contains 5,296 unique songs in 18 mood categories. This is one of the largest experimental datasets in music mood classification with both audio and lyrics available (Kim, et al., 2010; Yang & Chen, 2012; Barthet et al., 2013). Results from a large dataset with realistic and representative mood categories are more generalizable and of higher practical values. The rest of the paper is organized as follows. Related work is reviewed and research questions are stated. After that, a framework for multimodal music mood classification is proposed. We then report an experiment with ternary information and conclude by discussing the findings and pointing out future work on enriching the proposed framework."}
{"_id": "5ca6217b3e8353778d05fe58bcc5a9ea79707287", "title": "Malaysian E-government : Issues and Challenges in Public Administration", "text": "E-government has become part and parcel of every government\u2019s agenda. Many governments have embraced its significant impacts and influences on governmental operations. As the technology mantra has become more ubiquitous, so government have decided to inaugurate e-government policy in its agencies and departments in order to enhance the quality of services, better transparency and greater accountability. As for Malaysia, the government is inspired by the wave of the e-government, as its establishment can improve the quality of public service delivery, and also its internal operations. This qualitative study will explore the status implementation of e-government initiatives as a case study, and will also provide a comparative evaluation of these findings, using the South Korean government as a benchmark study, given its outstanding performance in e-government. The findings of this study will highlight potential areas for improvement in relation to the public administration perspective and from this comparative approach too, Malaysia can learn some lessons from South Korea\u2019s practices to ensure the success of e-government projects."}
{"_id": "3af3f7b4f48e4aa6b9d1c1748b746ed2d8457b74", "title": "Affect in Human-Robot Interaction", "text": "More and more, robots are expected to interact with humans in a social, easily understandable manner, which presupposes effective use of robot affect. This chapter provides a brief overview of research advances into this important aspect of human-robot interaction. Keywords: human-robot interaction, affective robotics, robot behavior I. Introduction\t\r and\t\r Motivation Humans possess an amazing capability of attributing life and affect to inanimate objects (Reeves and Nass 96, Melson et al 09). Robots take this to the next level, even beyond that of virtual characters due to their embodiment and situatedness. They offer the opportunity for people to bond with them by maintaining a physical presence in their world, in some ways comparable to other beings, such as fellow humans and pets. This raises a broad range of questions in terms of the role of affect in human-robot interaction (HRI), which will be discussed in this article: \u2022 What is the role of affect for a robot and in what ways can it add value and risk to humanrobot relationships? Can robots be companions, friends, even intimates to people? \u2022 Is it necessary for a robot to actually experience emotion in order to convey its internal state to a person? Is emotion important in enhancing HRI and if so when and where? \u2022 What approaches, theories, representations, and experimental methods inform affective HRI research? Report Documentation Page Form Approved OMB No. 0704-0188 Public reporting burden for the collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington VA 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to a penalty for failing to comply with a collection of information if it does not display a currently valid OMB control number. 1. REPORT DATE 2014 2. REPORT TYPE 3. DATES COVERED 00-00-2014 to 00-00-2014 4. TITLE AND SUBTITLE Affect in Human-Robot Interaction 5a. CONTRACT NUMBER"}
{"_id": "12f7b71324ee8e1796a9ef07af05b66674fe6af0", "title": "Collective annotation of Wikipedia entities in web text", "text": "To take the first step beyond keyword-based search toward entity-based search, suitable token spans (\"spots\") on documents must be identified as references to real-world entities from an entity catalog. Several systems have been proposed to link spots on Web pages to entities in Wikipedia. They are largely based on local compatibility between the text around the spot and textual metadata associated with the entity. Two recent systems exploit inter-label dependencies, but in limited ways. We propose a general collective disambiguation approach. Our premise is that coherent documents refer to entities from one or a few related topics or domains. We give formulations for the trade-off between local spot-to-entity compatibility and measures of global coherence between entities. Optimizing the overall entity assignment is NP-hard. We investigate practical solutions based on local hill-climbing, rounding integer linear programs, and pre-clustering entities followed by local optimization within clusters. In experiments involving over a hundred manually-annotated Web pages and tens of thousands of spots, our approaches significantly outperform recently-proposed algorithms."}
{"_id": "77d2698e8efadda698b0edb457cd8de75224bfa0", "title": "Knowledge Base Population: Successful Approaches and Challenges", "text": "In this paper we give an overview of the Knowledge Base Population (KBP) track at the 2010 Text Analysis Conference. The main goal of KBP is to promote research in discovering facts about entities and augmenting a knowledge base (KB) with these facts. This is done through two tasks, Entity Linking \u2013 linking names in context to entities in the KB \u2013 and Slot Filling \u2013 adding information about an entity to the KB. A large source collection of newswire and web documents is provided from which systems are to discover information. Attributes (\u201cslots\u201d) derived from Wikipedia infoboxes are used to create the reference KB. In this paper we provide an overview of the techniques which can serve as a basis for a good KBP system, lay out the remaining challenges by comparison with traditional Information Extraction (IE) and Question Answering (QA) tasks, and provide some suggestions to address these challenges."}
{"_id": "0638d1f7d37f6bda49f6ec951de37aca0e53b98a", "title": "Mixed Initiative in Dialogue: An Investigation into Discourse Segmentation", "text": "Conversation between two people is usually of Mixed-Initiative, with Control over the conversation being transferred from one person to another. We apply a set of rules for the transfer of control to 4 sets of dialogues consisting of a total of 1862 turns. The application of the control rules lets us derive domain-independent discourse structures. The derived structures indicate that initiative plays a role in the structuring of discourse. In order to explore the relationship of control and initiative to discourse processes like centering, we analyze the distribution of four different classes of anaphora for two data sets. This distribution indicates that some control segments are hierarchically related to others. The analysis suggests that discourse participants often mutually agree to a change of topic. We also compared initiative in Task Oriented and Advice Giving dialogues and found that both allocation of control and the manner in which control is transferred is radically different for the two dialogue types. These differences can be explained in terms of collaborative planning principles."}
{"_id": "1c909ac1c331c0c246a88da047cbdcca9ec9b7e7", "title": "Large-Scale Named Entity Disambiguation Based on Wikipedia Data", "text": "This paper presents a large-scale system for the recognition and semantic disambiguation of named entities based on information extracted from a large encyclopedic collection and Web search results. It describes in detail the disambiguation paradigm employed and the information extraction process from Wikipedia. Through a process of maximizing the agreement between the contextual information extracted from Wikipedia and the context of a document, as well as the agreement among the category tags associated with the candidate entities, the implemented system shows high disambiguation accuracy on both news stories and Wikipedia articles."}
{"_id": "2b2c30dfd3968c5d9418bb2c14b2382d3ccc64b2", "title": "DBpedia: A Nucleus for a Web of Open Data", "text": "DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for humanand machineconsumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data."}
{"_id": "557e71f87073be437908c1d033500acbe7670712", "title": "On error management: lessons from aviation.", "text": "Copies of the full protocol and details of training programmes are available from Association of Litigation and Risk Management (ALARM), Royal Society of Medicine, 1 Wimpole Street, London W1. Contributors: CV and ST-A carried out the research on which the original protocol was based. All authors participated equally in the development of the protocol, in which successive versions were tested in clinical practice and refined in the light of experience. The writing of the original protocol and present paper was primarily carried out by CV, ST-A, EJC, and DH, but all authors contributed to the final version. CV and DH are the guarantors. Competing interests: CV received funding from Healthcare Risk Resources International to support the work of ST-A during the development of the protocol."}
{"_id": "3f007c43a7b4bb5a052b7a16d4e33c4842a8244a", "title": "Self-assembly of neural networks viewed as swarm intelligence", "text": "While self-assembly is a fairly active area of research in swarm intelligence, relatively little attention has been paid to the issues surrounding the construction of network structures. In this paper we extend methods developed previously for controlling collective movements of agent teams to serve as the basis for self-assembly or \u201cgrowth\u201d of networks, using neural networks as a concrete application to evaluate our approach. Our central innovation is having network connections arise as persistent \u201ctrails\u201d left behind moving agents, trails that are reminiscent of pheromone deposits made by agents in ant colony optimization models. The resulting network connections are thus essentially a record of agent movements. We demonstrate our model\u2019s effectiveness by using it to produce two large networks that support subsequent learning of topographic and feature maps. Improvements produced by the incorporation of collective movements are also examined through computational experiments. These results indicate that methods for directing collective movements can be adopted to facilitate network self-assembly."}
{"_id": "92930f4279b48f7e4e8ec2edc24e8aa65c5954fd", "title": "Client Profiling for an Anti-Money Laundering System", "text": "We present a data mining approach for profiling bank clients in order to support the process of detection of antimoney laundering operations. We first present the overall system architecture, and then focus on the relevant component for this paper. We detail the experiments performed on real world data from a financial institution, which allowed us to group clients in clusters and then generate a set of classification rules. We discuss the relevance of the founded client profiles and of the generated classification rules. According to the defined overall agent-based architecture, these rules will be incorporated in the knowledge base of the intelligent agents responsible for the signaling of suspicious transactions."}
{"_id": "f64d11fa836bf5bde5dc730f4311e899aea1047c", "title": "Strength Training for Endurance Athletes: Theory to Practice", "text": "THE PURPOSE OF THIS REVIEW IS TWOFOLD: TO ELUCIDATE THE UTILITY OF RESISTANCE TRAINING FOR ENDURANCE ATHLETES, AND PROVIDE THE PRACTITIONER WITH EVIDENCED-BASED PERIODIZATION STRATEGIES FOR CONCURRENT STRENGTH AND ENDURANCE TRAINING IN ATHLETIC POPULATIONS. BOTH LOW-INTENSITY EXERCISE ENDURANCE (LIEE) AND HIGH-INTENSITY EXERCISE ENDURANCE (HIEE) HAVE BEEN SHOWN TO IMPROVE AS A RESULT OF MAXIMAL, HIGH FORCE, LOW VELOCITY (HFLV) AND EXPLOSIVE, LOW-FORCE, HIGH-VELOCITY STRENGTH TRAINING. HFLV STRENGTH TRAINING IS RECOMMENDED INITIALLY TO DEVELOP A NEUROMUSCULAR BASE FOR ENDURANCE ATHLETES WITH LIMITED STRENGTH TRAINING EXPERIENCE. A SEQUENCED APPROACH TOSTRENGTHTRAINING INVOLVING PHASES OF STRENGTHENDURANCE, BASIC STRENGTH, STRENGTH, AND POWER WILL PROVIDE FURTHER ENHANCEMENTS IN LIEE AND HIEE FOR HIGHLEVEL ENDURANCE ATHLETES."}
{"_id": "526fb409521b6e7ef1afa771b30afe30e71e8c2b", "title": "Dual Switches DC/DC Converter With Three-Winding-Coupled Inductor and Charge Pump", "text": "In order to obtain a high step-up voltage gain, high-efficiency converter, this paper proposed a dual switches dc/dc converter with three-winding-coupled inductor and charge pump. The proposed converter composed of dual switches structure, three-winding-coupled inductor, and charge pump. This combination facilitates realization of high step-up voltage gain with a low voltage/current stress on the power switches. Meanwhile, the voltage across the diodes is low and the diode reverse-recovery problem is alleviated by the leakage inductance of the three-winding-coupled inductor. Taking all these into consideration, the efficiency can be high. This paper illustrated the operation principle of the proposed converter; discussed the effect of leakage inductance on voltage gain; the conditions of zero current shutting off of the diodes are illustrated; the voltage and current stress of the power devices are shown; a comparison between the performance of the proposed converter and previous high step-up converters was conducted. Finally, a prototype rated at 500 W has been established, and the experimental results verify the correctness of the analysis."}
{"_id": "5b7addfb161b6e43937c9b8db3c85f10de671d0c", "title": "Learning from positive and unlabeled examples", "text": "In many machine learning settings, labeled examples are difficult to collect while unlabeled data are abundant. Also, for some binary classification problems, positive examples which are elements of the target concept are available. Can these additional data be used to improve accuracy of supervised learning algorithms? We investigate in this paper the design of learning algorithms from positive and unlabeled data only. Many machine learning and data mining algorithms, such as decision tree induction algorithms and naive Bayes algorithms, use examples only to evaluate statistical queries (SQ-like algorithms). Kearns designed the statistical query learning model in order to describe these algorithms. Here, we design an algorithm scheme which transforms any SQ-like algorithm into an algorithm based on positive statistical queries (estimate for probabilities over the set of positive instances) and instance statistical queries (estimate for probabilities over the instance space). We prove that any class learnable in the statistical query learning model is learnable from positive statistical queries and instance statistical queries only if a lower bound on the weight of any target concept f can be estimated in polynomial time. Then, we design a decision tree induction algorithm POSC4.5, based on C4.5, that uses only positive and unlabeled examples and we give experimental results for this algorithm. In the case of imbalanced classes in the sense that one of the two classes (say the positive class) is heavily underrepresented compared to the other class, the learning problem remains open. This problem is challenging because it is encountered in many real-world applications. \u00a9 2005 Elsevier B.V. All rights reserved."}
{"_id": "c2bd37348784b4e6c16c7ab5ca8317987f3a73dd", "title": "Specificity of genetic and environmental risk factors for use and abuse/dependence of cannabis, cocaine, hallucinogens, sedatives, stimulants, and opiates in male twins.", "text": "OBJECTIVE\nData on use and misuse of six classes of illicit substances by male twin pairs were used to examine whether genetic and shared environmental risk factors for substance use disorders are substance-specific or -nonspecific in their effect.\n\n\nMETHOD\nLifetime history of use and abuse/dependence of cannabis, cocaine, hallucinogens, sedatives, stimulants, and opiates was assessed at personal interview in both members of 1,196 male-male twin pairs ascertained by the Virginia Twin Registry. Multivariate twin modeling of substance-nonspecific (common) and substance-specific genetic, shared environmental, and unique environmental risk factors was performed by using the program Mx.\n\n\nRESULTS\nHigh levels of comorbidity involving the different substance categories were observed for both use and abuse/dependence. One common genetic factor was found to have a strong influence on risk for illicit use and abuse/dependence for all six substance classes. A modest influence of substance-specific genetic factors was seen for use but not for abuse/dependence. Shared environmental factors were more important for use than for abuse/dependence and were mediated entirely through a single common factor.\n\n\nCONCLUSIONS\nIn an adult population-based sample of male twins, both the genetic and the shared environmental effects on risk for the use and misuse of six classes of illicit substances were largely or entirely nonspecific in their effect. Environmental experiences unique to the person largely determine whether predisposed individuals will use or misuse one class of psychoactive substances rather than another."}
{"_id": "cae8f2af7a25480479811453f27b4189ba5cc801", "title": "Question Answering on SQuAD", "text": "In this project, we exploit several deep learning architectures in Question Answering field, based on the newly released Stanford Question Answering dataset (SQuAD)[7]. We introduce a multi-stage process that encodes context paragraphs at different levels of granularity, uses co-attention mechanism to fuse representations of questions and context paragraphs, and finally decodes the co-attention vectors to get the answers. Our best model gets 62.23% F1 score and 48.72% EM score on the test set."}
{"_id": "c8ea1664d0cf4823b5ffb61a0d2f6dbda2441c49", "title": "Genetic Evidence That Carbohydrate-Stimulated Insulin Secretion Leads to Obesity.", "text": "BACKGROUND\nA fundamental precept of the carbohydrate-insulin model of obesity is that insulin secretion drives weight gain. However, fasting hyperinsulinemia can also be driven by obesity-induced insulin resistance. We used genetic variation to isolate and estimate the potentially causal effect of insulin secretion on body weight.\n\n\nMETHODS\nGenetic instruments of variation of insulin secretion [assessed as insulin concentration 30 min after oral glucose (insulin-30)] were used to estimate the causal relationship between increased insulin secretion and body mass index (BMI), using bidirectional Mendelian randomization analysis of genome-wide association studies. Data sources included summary results from the largest published metaanalyses of predominantly European ancestry for insulin secretion (n = 26037) and BMI (n = 322154), as well as individual-level data from the UK Biobank (n = 138541). Data from the Cardiology and Metabolic Patient Cohort study at Massachusetts General Hospital (n = 1675) were used to validate genetic associations with insulin secretion and to test the observational association of insulin secretion and BMI.\n\n\nRESULTS\nHigher genetically determined insulin-30 was strongly associated with higher BMI (\u03b2 = 0.098, P = 2.2 \u00d7 10-21), consistent with a causal role in obesity. Similar positive associations were noted in sensitivity analyses using other genetic variants as instrumental variables. By contrast, higher genetically determined BMI was not associated with insulin-30.\n\n\nCONCLUSIONS\nMendelian randomization analyses provide evidence for a causal relationship of glucose-stimulated insulin secretion on body weight, consistent with the carbohydrate-insulin model of obesity."}
{"_id": "a52d4736bb9728e6993cd7b3190271f6728706fd", "title": "Slip-aware Model Predictive optimal control for Path following", "text": "Traditional control and planning algorithms for wheeled mobile robots (WMR) either totally ignore or make simplifying assumptions about the effects of wheel slip on the motion. While this approach works reasonably well in practice on benign terrain, it fails very quickly when the WMR is deployed in terrain that induces significant wheel slip. We contribute a novel control framework that predictively corrects for the wheel slip to effectively minimize path following errors. Our framework, the Receding Horizon Model Predictive Path Follower (RHMPPF), specifically addresses the problem of path following in challenging environments where the wheel slip substantially affects the vehicle mobility. We formulate the solution to the problem as an optimal controller that utilizes a slip-aware model predictive component to effectively correct the controls generated by a strictly geometric pure-pursuit path follower. We present extensive experimental validation of our approach using a simulated 6-wheel skid-steered robot in a high-fidelity data-driven simulator, and on a real 4-wheel skid-steered robot. Our results show substantial improvement in the path following performance in both simulation and real world experiments."}
{"_id": "c7b8ad27e2ddbabcc5d785b51b967f4ccb824bc0", "title": "Datum: Managing Data Purchasing and Data Placement in a Geo-Distributed Data Market", "text": "This paper studies two design tasks faced by a geo-distributed cloud data market: which data to purchase data purchasing and where to place/replicate the data for delivery data placement. We show that the joint problem of data purchasing and data placement within a cloud data market can be viewed as a facility location problem and is thus NP-hard. However, we give a provably optimal algorithm for the case of a data market made up of a single data center and then generalize the structure from the single data center setting in order to develop a near-optimal, polynomial-time algorithm for a geo-distributed data market. The resulting design, $\\mathsf {Datum}$ , decomposes the joint purchasing and placement problem into two subproblems, one for data purchasing and one for data placement, using a transformation of the underlying bandwidth costs. We show, via a case study, that $\\mathsf {Datum}$ is near optimal within 1.6% in practical settings."}
{"_id": "0db7dcb8f91604a6b9f74bd789e70188377984ea", "title": "Reducing Length of Stay Using a Robotic-assisted Approach for Retromuscular Ventral Hernia Repair: A Comparative Analysis From the Americas Hernia Society Quality Collaborative.", "text": "OBJECTIVE\nThe aim of this study was to compare length of stay (LOS) after robotic-assisted and open retromuscular ventral hernia repair (RVHR).\n\n\nBACKGROUND\nRVHR has traditionally been performed by open techniques. Robotic-assisted surgery enables surgeons to perform minimally invasive RVHR, but with unknown benefit. Using real-world evidence, this study compared LOS after open (o-RVHR) and robotic-assisted (r-RVHR) approach.\n\n\nMETHODS\nMulti-institutional data from patients undergoing elective RVHR in the Americas Hernia Society Quality Collaborative between 2013 and 2016 were analyzed. Propensity score matching was used to compare median LOS between o-RVHR and r-RVHR groups. This work was supported by an unrestricted grant from Intuitive Surgical, and all clinical authors have declared direct or indirect relationships with Intuitive Surgical.\n\n\nRESULTS\nIn all, 333 patients met inclusion criteria for a 2:1 match performed on 111 r-RVHR patients using propensity scores, with 222 o-RVHR patients having similar characteristics as the robotic-assisted group. Median LOS [interquartile range (IQR)] was significantly decreased for r-RVHR patients [2 days (IQR 2)] compared with o-RVHR patients [3 days (IQR 3), P < 0.001]. No differences in 30-day readmissions or surgical site infections were observed. Higher surgical site occurrences were noted with r-RVHR, consisting mostly of seromas not requiring intervention.\n\n\nCONCLUSIONS\nUsing real-world evidence, a robotic-assisted approach to RVHR offers the clinical benefit of reduced postoperative LOS. Ongoing monitoring of this technique should be employed through continuous quality improvement to determine the long-term effect on hernia recurrence, complications, patient satisfaction, and overall cost."}
{"_id": "d9718e8745bf9bd70727dd0aec48b007593e00bc", "title": "Possibilistic interest discovery from uncertain information in social networks", "text": "User generated content on the microblogging social network Twitter continues to grow with significant amount of information. The semantic analysis offers the opportunity to discover and model latent interests\u2019 in the users\u2019 publications. This article focuses on the problem of uncertainty in the users\u2019 publications that has not been previously treated. It proposes a new approach for users\u2019 interest discovery from uncertain information that augments traditional methods using possibilistic logic. The possibility theory provides a solid theoretical base for the treatment of incomplete and imprecise information and inferring the reliable expressions from a knowledge base. More precisely, this approach used the product-based possibilistic network to model knowledge base and discovering possibilistic interests. DBpedia ontology is integrated into the interests\u2019 discovery process for selecting the significant topics. The empirical analysis and the comparison with the most known methods proves the significance of this approach."}
{"_id": "6b27f7ccbc68e6bf9bc1538f7ed8d1ca9d8e563a", "title": "Mechanisms of emotional arousal and lasting declarative memory", "text": "Neuroscience is witnessing growing interest in understanding brain mechanisms of memory formation for emotionally arousing events, a development closely related to renewed interest in the concept of memory consolidation. Extensive research in animals implicates stress hormones and the amygdaloid complex as key, interacting modulators of memory consolidation for emotional events. Considerable evidence suggests that the amygdala is not a site of long-term explicit or declarative memory storage, but serves to influence memory-storage processes in other brain regions, such as the hippocampus, striatum and neocortex. Human-subject studies confirm the prediction of animal work that the amygdala is involved with the formation of enhanced declarative memory for emotionally arousing events."}
{"_id": "8985000860dbb88a80736cac8efe30516e69ee3f", "title": "Human Activity Recognition Using Recurrent Neural Networks", "text": "Human activity recognition using smart home sensors is one of the bases of ubiquitous computing in smart environments and a topic undergoing intense research in the field of ambient assisted living. The increasingly large amount of data sets calls for machine learning methods. In this paper, we introduce a deep learning model that learns to classify human activities without using any prior knowledge. For this purpose, a Long Short Term Memory (LSTM) Recurrent Neural Network was applied to three real world smart home datasets. The results of these experiments show that the proposed approach outperforms the existing ones in terms of accuracy and performance."}
{"_id": "0658264c335017587a906ceb202da417b5521a92", "title": "An Introductory Study on Time Series Modeling and Forecasting", "text": "ACKNOWLEDGEMENT The timely and successful completion of the book could hardly be possible without the helps and supports from a lot of individuals. I will take this opportunity to thank all of them who helped me either directly or indirectly during this important work. First of all I wish to express my sincere gratitude and due respect to my supervisor Dr. R.K. Agrawal, Associate Professor SC & SS, JNU. I am immensely grateful to him for his valuable guidance, continuous encouragements and positive supports which helped me a lot during the period of my work. I would like to appreciate him for always showing keen interest in my queries and providing important suggestions. I also express whole hearted thanks to my friends and classmates for their care and moral supports. The moments, I enjoyed with them during my M.Tech course will always remain as a happy memory throughout my life. I owe a lot to my mother for her constant love and support. She always encouraged me to have positive and independent thinking, which really matter in my life. I would like to thank her very much and share this moment of happiness with her. Last but not the least I am also thankful to entire faculty and staff of SC & SS for their unselfish help, I got whenever needed during the course of my work.ABSTRACT Time series modeling and forecasting has fundamental importance to various practical domains. Thus a lot of active research works is going on in this subject during several years. Many important models have been proposed in literature for improving the accuracy and effeciency of time series modeling and forecasting. The aim of this book is to present a concise description of some popular time series forecasting models used in practice, with their salient features. In this book, we have described three important classes of time series models, viz. the stochastic, neural networks and SVM based models, together with their inherent forecasting strengths and weaknesses. We have also discussed about the basic issues related to time series modeling, such as stationarity, parsimony, overfitting, etc. Our discussion about different time series models is supported by giving the experimental forecast results, performed on six real time series datasets. While fitting a model to a dataset, special care is taken to select the most parsimonious one. To evaluate forecast accuracy as well as to compare among different models \u2026"}
{"_id": "3db9b57efd0e0c64e3fdc57afb1d8e018b30b0b6", "title": "On the development of name search techniques for Arabic", "text": "The need for effective identity matching systems has led to extensive research in the area of name search. For the most part, such work has been limited to English and other Latin-based languages. Consequently, algorithms such as Soundex and n-gram matching are of limited utility for languages such as Arabic, which has a vastly different morphology that relies heavily on phonetic information. The dearth of work in this field is partly due to the lack of standardized test data. Consequently, we built a collection of 7,939 Arabic names, along with 50 training queries and 111 test queries. We use this collection to evaluate a variety of algorithms, including a derivative of Soundex tailored to Arabic (ASOUNDEX), measuring effectiveness using standard information retrieval measures. Our results show an improvement of 70% over existing approaches. Introduction Identity matching systems frequently employ name search algorithms to effectively locate relevant information about a given person. Such systems are used for applications as diverse as tax fraud detection and immigration control. Using names to retrieve information makes such systems susceptible to problems arising from typographical errors. That is, exact match search approaches will not find instances of misspelled names or those names that have more than one accepted spelling. An example of the severity of the problem is noted in an NCR (1998) report that estimates that the state of Texas saved $43 million over 18 months in the field of tax compliance using an improved name search approach. Thus, the importance of such name-based search applications has resulted in improved name matching algorithms for English that make use of phonetic information, but these language-dependent techniques have not been extended to Arabic."}
{"_id": "becb5fbd24881dd78793686bbe30b153b5745fb8", "title": "Tax Fraud Detection for Under-Reporting Declarations Using an Unsupervised Machine Learning Approach", "text": "Tax fraud is the intentional act of lying on a tax return form with intent to lower one's tax liability. Under-reporting is one of the most common types of tax fraud, it consists in filling a tax return form with a lesser tax base. As a result of this act, fiscal revenues are reduced, undermining public investment.\n Detecting tax fraud is one of the main priorities of local tax authorities which are required to develop cost-efficient strategies to tackle this problem. Most of the recent works in tax fraud detection are based on supervised machine learning techniques that make use of labeled or audit-assisted data. Regrettably, auditing tax declarations is a slow and costly process, therefore access to labeled historical information is extremely limited. For this reason, the applicability of supervised machine learning techniques for tax fraud detection is severely hindered.\n Such limitations motivate the contribution of this work. We present a novel approach for the detection of potential fraudulent tax payers using only unsupervised learning techniques and allowing the future use of supervised learning techniques. We demonstrate the ability of our model to identify under-reporting taxpayers on real tax payment declarations, reducing the number of potential fraudulent tax payers to audit. The obtained results demonstrate that our model doesn't miss on marking declarations as suspicious and labels previously undetected tax declarations as suspicious, increasing the operational efficiency in the tax supervision process without needing historic labeled data."}
{"_id": "b31f0085b7dd24bdde1e5cec003589ce4bf4238c", "title": "Discriminative Label Consistent Domain Adaptation", "text": "Domain adaptation (DA) is transfer learning which aims to learn an effective predictor on target data from source data despite data distribution mismatch between source and target. We present in this paper a novel unsupervised DA method for cross-domain visual recognition which simultaneously optimizes the three terms of a theoretically established error bound. Specifically, the proposed DA method iteratively searches a latent shared feature subspace where not only the divergence of data distributions between the source domain and the target domain is decreased as most state-of-the-art DA methods do, but also the inter-class distances are increased to facilitate discriminative learning. Moreover, the proposed DA method sparsely regresses class labels from the features achieved in the shared subspace while minimizing the prediction errors on the source data and ensuring label consistency between source and target. Data outliers are also accounted for to further avoid negative knowledge transfer. Comprehensive experiments and in-depth analysis verify the effectiveness of the proposed DA method which consistently outperforms the state-of-the-art DA methods on standard DA benchmarks, i.e., 12 cross-domain image classification tasks."}
{"_id": "9bfb04bb15f7cc414108945571bd1d6d1f77b4ad", "title": "Feature Selection by Joint Graph Sparse Coding", "text": "This paper takes manifold learning and regression simultaneously into account to perform unsupervised spectral feature selection. We first extract the bases of the data, and then represent the data sparsely using the extracted bases by proposing a novel joint graph sparse coding model, JGSC for short. We design a new algorithm TOSC to compute the resulting objective function of JGSC, and then theoretically prove that the proposed objective function converges to its global optimum via the proposed TOSC algorithm. We repeat the extraction and the TOSC calculation until the value of the objective function of JGSC satisfies pre-defined conditions. Eventually the derived new representation of the data may only have a few non-zero rows, and we delete the zero rows (a.k.a. zero-valued features) to conduct feature selection on the new representation of the data. Our empirical studies demonstrate that the proposed method outperforms several state-of-the-art algorithms on real datasets in term of the kNN classification performance."}
{"_id": "0f03074cc5ef0e2bd0f16de62a1e170531474eae", "title": "Discovering Available Drinks Through Natural , Robot-Led , Human-Robot Interaction Between a Waiter and a Bartender", "text": "This research focuses on natural, robot-led, human-robot interaction that enables a robot to discover what drinks a barman can prepare through continuous application of speech recognition, understanding and generation. Speech was recognised using Google Cloud\u2019s speech to text API, understood by matching either the object or main verb of a sentence against a list of key words and, finally, generated using templates with variable parts. The difficulty lies in the large quantity of key words, as they are based on the properties of the ordered drinks. The results show that having the aforementioned interaction works well to some extent, i.e. the naturalness of the interaction was ranked 5.5 on average. Furthermore, the obtained precision when identifying the unavailable drinks was 0.625 and the obtained recall was 1, resulting in an F1 measure of 0.769."}
{"_id": "2d8d089d368f2982748fde93a959cf5944873673", "title": "Visually Guided Spatial Relation Extraction from Text", "text": "Extraction of spatial relations from sentences with complex/nesting relationships is very challenging as often needs resolving inherent semantic ambiguities. We seek help from visual modality to fill the information gap in the text modality and resolve spatial semantic ambiguities. We use various recent vision and language datasets and techniques to train inter-modality alignment models, visual relationship classifiers and propose a novel global inference model to integrate these components into our structured output prediction model for spatial role and relation extraction. Our global inference model enables us to utilize the visual and geometric relationships between objects and improves the state-of-art results of spatial information extraction from text."}
{"_id": "852203bfc1694fc2ea4bbe0eea4c2f830df85d31", "title": "Does Microfinance Really Help the Poor ? New Evidence from Flagship Programs in Bangladesh", "text": "The microfinance movement has built on innovations in financial intermediation that reduce the costs and risks of lending to poor households. Replications of the movement\u2019s flagship, the Grameen Bank of Bangladesh, have now spread around the world. While programs aim to bring social and economic benefits to clients, few attempts have been made to quantify benefits rigorously. This paper draws on a new cross-sectional survey of nearly 1800 households, some of which are served by the Grameen Bank and two similar programs, and some of which have no access to programs. Households that are eligible to borrow and have access to the programs do not have notably higher consumption levels than control households, and, for the most part, their children are no more likely to be in school. Men also tend to work harder, and women less. More favorably, relative to controls, households eligible for programs have substantially (and significantly) lower variation in consumption and labor supply across seasons. The most important potential impacts are thus associated with the reduction of vulnerability, not of poverty per se. The consumption-smoothing appears to be driven largely by income-smoothing, not by borrowing and lending. The evaluation holds lessons for studies of other programs in low-income countries. While it is common to use fixed effects estimators to control for unobservable variables correlated with the placement of programs, using fixed effects estimators can exacerbate biases when, as here, programs target their programs to specific populations within larger communities."}
{"_id": "712115f791de02aafc675ee84090fdf8e4ed88a5", "title": "Elastic Bands: Connecting Path Planning and Control", "text": "Elastic bands are proposed as the basis for a new framework to close the gap between global path planning and real-time sensor-based robot control. An elastic band is a deformable collision-free path. The initial shape of the elastic is the free path generated by a planner. Subjected to artificial forces, the elastic band deforms in real time to a short and smooth path that maintains clearance from the obstacles. The elastic continues to deform as changes in the environment are detected by sensors, enabling the robot to accommodate uncertainties and react to unexpected and moving obstacles. While providing a tight connection between the robot and its environment, the elastic band preserves the global nature of the planned path. This paper outlines the framework and discusses an efficient implementation based on bubbles."}
{"_id": "b9bc9a32791dba1fc85bb9d4bfb9c52e6f052d2e", "title": "RRT-Connect: An Efficient Approach to Single-Query Path Planning", "text": "A simple and efficient randomized algorithm is presented for solving single-query path planning problems in high-dimensional configuration spaces. The method works by incrementally building two Rapidly-exploring Random Trees (RRTs) rooted at the start and the goal configurations. The trees each explore space around them and also advance towards each other through the use of a simple greedy heuristic. Although originally designed to plan motions for a human arm (modeled as a 7-DOF kinematic chain) for the automatic graphic animation of collision-free grasping and manipulation tasks, the algorithm has been successfully applied to a variety of path planning problems. Computed examples include generating collision-free motions for rigid objects in 2D and 3D, and collision-free manipulation motions for a 6-DOF PUMA arm in a 3D workspace. Some basic theoretical analysis is also presented."}
{"_id": "d967d9550f831a8b3f5cb00f8835a4c866da60ad", "title": "Rapidly-Exploring Random Trees: A New Tool for Path Planning", "text": ""}
{"_id": "09194121d81be3783fa1719afe51a7056f04e309", "title": "A Random Sampling Scheme for Path Planning", "text": "Several randomized path planners have been proposed during the last few years. Their attractiveness stems from their applicability to virtually any type of robots, and their empirically observed success. In this paper we attempt to present a unifying view of these planners and to theoretically explain their success. First, we introduce a general planning scheme that consists of randomly sampling the robot\u2019s configuration space. We then describe two previously developed planners as instances of planners based on this scheme, but applying very different sampling strategies. These planners are probabilistically complete: if a path exists, they will find one with high probability, if we let them run long enough. Next, for one of the planners, we analyze the relation between the probability of failure and the running time. Under assumptions characterizing the \u201cgoodness\u201d of the robot\u2019s free space, we show that the running time only grows as the absolute value of the logarithm of the probability of failure that we are willing to tolerate. We also show that it increases at a reasonable rate as the space goodness degrades. In the last section we suggest directions for future research."}
{"_id": "0e6c774dd0d11e4a32fc94459ee5f4a9f906fcda", "title": "V-Clip: Fast and Robust Polyhedral Collision Detection", "text": "This article presents the Voronoi-clip, or V-Clip, collision detection alogrithm for polyhedral objects specified by a boundary representation. V-Clip tracks the closest pair of features between convex polyhedra, using an approach reminiscent of the Lin-Canny closest features algorithm. V-Clip is an improvement over the latter in several respects. Coding complexity is reduced, and robustness is significantly improved; the implementation has no numerical tolerances and does not exhibit cycling problems. The algorithm also handles penetrating polyhedra, and can therefore be used to detect collisions between nonvconvex polyhedra described as hierarchies of convex pieces. The article presents the theoretical principles of V-Clip, and gives a pseudocode description of the algorithm. It also documents various test that compare V-Clip, Lin-Canny, and the Enhanced GJK algorithm, a simplex-based algorithm that is widely used for the same application. The results show V-Clip to be a strong contender in this field, comparing favorably with the other algorithms in most of the tests, in term of both performance and robustness."}
{"_id": "03f3197f5f2c0415f972111173ed452a39d436d2", "title": "Compiling Bayesian Networks Using Variable Elimination", "text": "Compiling Bayesian networks has proven an effective approach for inference that can utilize both global and local network structure. In this paper, we define a new method of compiling based on variable elimination (VE) and Algebraic Decision Diagrams (ADDs). The approach is important for the following reasons. First, it exploits local structure much more effectively than previous techniques based on VE. Second, the approach allows any of the many VE variants to compute answers to multiple queries simultaneously. Third, the approach makes a large body of research into more structured representations of factors relevant in many more circumstances than it has been previously. Finally, experimental results demonstrate that VE can exploit local structure as effectively as state\u2013of\u2013the\u2013art algorithms based on conditioning on the networks considered, and can sometimes lead to much faster compilation times."}
{"_id": "6d48640f9f9a1702c13f30a662b115b16ca8ab07", "title": "An Examination of Undergraduate Student's Perceptions and Predilections of the Use of YouTube in the Teaching and Learning Process", "text": "Pervasive social networking and media sharing technologies have augmented perceptual understanding and information gathering and, while text-based resources have remained the standard for centuries, they do not appeal to the hyper-stimulated visual learners of today. In particular, the research suggests that targeted YouTube videos enhance student engagement, depth of understanding, and overall satisfaction in higher education courses. In order to investigate student perceptions and preferences regarding the implications of YouTube, a study was conducted at a Mid-Atlantic minority serving institution that examined student opinions regarding the usage of YouTube videos to augment instruction in online and classroombased courses. According to the findings, use of YouTube in the teaching and learning process enhances instruction with students most likely to visit video sharing services from mobile devices. Further, length has an impact on student decisions whether or not to watch a video, and course delivery format impacts length and audio preferences. Finally, there is no relationship between personal use of social media and the perceived value of the use of YouTube in the instructional process."}
{"_id": "cd43f34d4ce48d4a77b84f1a62d469e8596078d0", "title": "Plant-growth-promoting compounds produced by two agronomically important strains of Azospirillum brasilense, and implications for inoculant formulation", "text": "We evaluated phytohormone and polyamine biosynthesis, siderophore production, and phosphate solubilization in two strains (Cd and Az39) of Azospirillum brasilense used for inoculant formulation in Argentina during the last 20\u00a0years. Siderophore production and phosphate solubilization were evaluated in a chemically defined medium, with negative results. Indole 3-acetic acid (IAA), gibberellic acid (GA3), and abscisic acid (ABA) production were analyzed by gas chromatography-mass spectrometry. Ethylene, polyamine, and zeatin (Z) biosynthesis were determined by gas chromatography-flame ionization detector and high performance liquid chromatography (HPLC-fluorescence and -UV), respectively. Phytohormones IAA, Z, GA3, ABA, ethylene, and growth regulators putrescine, spermine, spermidine, and cadaverine (CAD) were found in culture supernatant of both strains. IAA, Z, and GA3 were found in all two strains; however, their levels were significantly higher (p\u2009<\u20090.01) in Cd (10.8, 2.32, 0.66\u00a0\u03bcg ml\u22121). ABA biosynthesis was significantly higher (p\u2009<\u20090.01) in Az39 (0.077\u00a0\u03bcg ml\u22121). Ethylene and polyamine CAD were found in all two strains, with highest production in Cd cultured in NFb plus l-methionine (3.94\u00a0ng ml\u22121 h\u22121) and Az39 cultured in NFb plus l-lysine (36.55\u00a0ng ml\u22121 h\u22121). This is the first report on the evaluation of important bioactive molecules in strains of A. brasilense as potentially capable of direct plant growth promotion or agronomic yield increase. Az39 and Cd showed differential capability to produce the five major phytohormones and CAD in chemically defined medium. This fact has important technological implications for inoculant formulation as different concentrations of growth regulators are produced by different strains or culture conditions."}
{"_id": "598f98fefa56afe0073d58fd36cc86017f0a4c31", "title": "A 1.2\u20136.6GHz LNA using transformer feedback for wideband input matching and noise cancellation in 0.13\u00b5m CMOS", "text": "A novel transformer feedback featuring wideband input matching and noise cancellation is proposed and demonstrated in a wideband differential LNA for software-defined-radio (SDR) applications. Implemented in 0.13\u03bcm CMOS with an area of 0.32mm2, the LNA prototype measures a wideband input matching S11 of less than -10dB from 1.2GHz to 6.6GHz and minimum NF of 1.8dB while consuming 11mA at 1.2V supply."}
{"_id": "fc10f1ccd2396c1adb4652c807a0c4f6e7534624", "title": "Hamming Clustering: A New Approach to Rule Extraction", "text": "A new algorithm, called Hamming Clustering (HC), is proposed to extract a set of rules underlying a given classification problem. It is able to reconstruct the and-or expression associated with any Boolean function from a training set of"}
{"_id": "231be9ffe942d000891197ee45f41ae62adfe9a8", "title": "Extracting and Analyzing Hidden Graphs from Relational Databases", "text": "Analyzing interconnection structures among underlying entities or objects in a dataset through the use of graph analytics can provide tremendous value in many application domains. However, graphs are not the primary representation choice for storing most data today, and in order to have access to these analyses, users are forced to manually extract data from their data stores, construct the requisite graphs, and then load them into some graph engine in order to execute their graph analysis task. Moreover, in many cases (especially when the graphs are dense), these graphs can be significantly larger than the initial input stored in the database, making it infeasible to construct or analyze such graphs in memory. In this paper we address both of these challenges by building a system that enables users to declaratively specify graph extraction tasks over a relational database schema and then execute graph algorithms on the extracted graphs. We propose a declarative domain specific language for this purpose, and pair it up with a novel condensed, in-memory representation that significantly reduces the memory footprint of these graphs, permitting analysis of larger-than-memory graphs. We present a general algorithm for creating such a condensed representation for a large class of graph extraction queries against arbitrary schemas. We observe that the condensed representation suffers from a duplication issue, that results in inaccuracies for most graph algorithms. We then present a suite of in-memory representations that handle this duplication in different ways and allow trading off the memory required and the computational cost for executing different graph algorithms. We also introduce several novel deduplication algorithms for removing this duplication in the graph, which are of independent interest for graph compression, and provide a comprehensive experimental evaluation over several real-world and synthetic datasets illustrating these trade-offs."}
{"_id": "9a870af1c356f14c51c0af75bd9fec5f2e20f8f7", "title": "Debugging Machine Learning Tasks", "text": "Unlike traditional programs (such as operating systems or word processors) which have large amounts of code, machine learning tasks use programs with relatively small amounts of code (written in machine learning libraries), but voluminous amounts of data. Just like developers of traditional programs debug errors in their code, developers of machine learning tasks debug and fix errors in their data. However, algorithms and tools for debugging and fixing errors in data are less common, when compared to their counterparts for detecting and fixing errors in code. In this paper, we consider classification tasks where errors in training data lead to misclassifications in test points, and propose an automated method to find the root causes of such misclassifications. Our root cause analysis is based on Pearl\u2019s theory of causation, and uses Pearl\u2019s PS (Probability of Sufficiency) as a scoring metric. Our implementation, Psi, encodes the computation of PS as a probabilistic program, and uses recent work on probabilistic programs and transformations on probabilistic programs (along with gray-box models of machine learning algorithms) to efficiently compute PS. Psi is able to identify root causes of data errors in interesting data sets."}
{"_id": "7dd2936b73377bf30c91f9a85e17f602e7b53ec5", "title": "Interactive cloth rendering of microcylinder appearance model under environment lighting", "text": "This paper proposes an interactive rendering method of cloth fabrics under environment lighting. The outgoing radiance from cloth fabrics in the microcylinder model is calculated by integrating the product of the distant environment lighting, the visibility function, the weighting function that includes shadowing/masking effects of threads, and the light scattering function of threads. The radiance calculation at each shading point of the cloth fabrics is simplified to a linear combination of triple product integrals of two circular Gaussians and the visibility function, multiplied by precomputed spherical Gaussian convolutions of the weighting function. We propose an efficient calculation method of the triple product of two circular Gaussians and the visibility function by using the gradient of signed distance function to the visibility boundary where the binary visibility changes in the angular domain of the hemisphere. Our GPU implementation enables interactive rendering of static cloth fabrics with dynamic viewpoints and lighting. In addition, interactive editing of parameters for the scattering function (e.g. thread\u2019s albedo) that controls the visual appearances of cloth fabrics can be achieved."}
{"_id": "a095f249e45856953d3e7878ccf172dd83aefdcd", "title": "GPU Accelerated Sub-Sampled Newton's Method", "text": "First order methods, which solely rely on gradient information, are commonly used in diverse machine learning (ML) and data analysis (DA) applications. This is attributed to the simplicity of their implementations, as well as low per-iteration computational/storage costs. However, they suffer from significant disadvantages; most notably, their performance degrades with increasing problem ill-conditioning. Furthermore, they often involve a large number of hyper-parameters, and are notoriously sensitive to parameters such as the step-size. By incorporating additional information from the Hessian, second-order methods, have been shown to be resilient to many such adversarial effects. However, these advantages of using curvature information come at the cost of higher per-iteration costs, which in \u201cbig data\u201d regimes, can be computationally prohibitive. In this paper, we show that, contrary to conventional belief, second-order methods, when implemented appropriately, can be more efficient than first-order alternatives in many largescale ML/ DA applications. In particular, in convex settings, we consider variants of classical Newton\u2019s method in which the Hessian and/or the gradient are randomly sub-sampled. We show that by effectively leveraging the power of GPUs, such randomized Newton-type algorithms can be significantly accelerated, and can easily outperform state of the art implementations of existing techniques in popular ML/ DA software packages such as TensorFlow. Additionally these randomized methods incur a small memory overhead compared to first-order methods. In particular, we show that for million-dimensional problems, our GPU accelerated sub-sampled Newton\u2019s method achieves a higher test accuracy in milliseconds as compared with tens of seconds for first order alternatives."}
{"_id": "c8f4b2695333abafde9ea253afc5b6a651d1fb54", "title": "Nobody \u2019 s watching ? Subtle cues affect generosity in an anonymous economic game", "text": "Models indicate that opportunities for reputation formation can play an important role in sustaining cooperation and prosocial behavior. Results from experimental economic games support this conclusion, as manipulating reputational opportunities affects prosocial behavior. Noting that some prosocial behavior remains even in anonymous noniterated games, some investigators argue that humans possess a propensity for prosociality independent of reputation management. However, decision-making processes often employ both explicit propositional knowledge and intuitive or affective judgments elicited by tacit cues. Manipulating game parameters alters explicit information employed in overt strategizing but leaves intact cues that may affect intuitive judgments relevant to reputation formation. To explore how subtle cues of observability impact prosocial behavior, we conducted five dictator games, manipulating both auditory cues of the presence of others (via the use of sound-deadening earmuffs) and visual cues (via the presentation of stylized eyespots). Although earmuffs appeared to reduce generosity, this effect was not significant. However, as predicted, eyespots substantially increased generosity, despite no differences in actual anonymity; when using a computer displaying eyespots, almost twice as many participants gave money to their partners compared with the"}
{"_id": "739f26126145cbbf1f3d484be429a81e812fd40a", "title": "An Anomaly Detection Method for Spacecraft Using Relevance Vector Learning", "text": "This paper proposes a novel anomaly detection system for spacecrafts based on data mining techniques. It constructs a nonlinear probabilistic model w.r.t. behavior of a spacecraft by applying the relevance vector regression and autoregression to massive telemetry data, and then monitors the on-line telemetry data using the model and detects anomalies. A major advantage over conventional anomaly detection methods is that this approach requires little a priori knowledge on the system."}
{"_id": "e7030ed729ac8250d022394ec1df3ed216144f9f", "title": "Positron-Emission Tomography and Personality Disorders", "text": "This study used positron-emission tomography to examine cerebral metabolic rates of glucose (CMRG) in 17 patients with DSM III-R diagnoses of personality disorder. Within the group of 17 personality disorder patients, there was a significant inverse correlation between a life history of aggressive impulse difficulties and regional CMRG in the frontal cortex of the transaxial plane approximately 40 mm above the canthomeatal line (CML) (r = \u2212.56, p = 0.17). Diagnostic groups included antisocial (n = 6), borderline (n = 6), dependent (n = 2), and narcissistic (n = 3). Regional CMRG in the six antisocial patients and in the six borderline patients was compared to a control group of 43 subjects using an analysis of covariance with age and sex as covariates. In the borderline personality disorder group, there was a significant decrease in frontal cortex metabolism in the transaxial plane approximately 81 mm above the CML and a significant increase in the transaxial plane approximately 53 mm above the CML (F[1,45] = 8.65, p = .005; and F[1,45] = 7.68, p = .008, respectively)."}
{"_id": "1721e4529c5e222dc7070ff318f7e1d815bfb27b", "title": "Benchmarking personal cloud storage", "text": "Personal cloud storage services are data-intensive applications already producing a significant share of Internet traffic. Several solutions offered by different companies attract more and more people. However, little is known about each service capabilities, architecture and -- most of all -- performance implications of design choices. This paper presents a methodology to study cloud storage services. We apply our methodology to compare 5 popular offers, revealing different system architectures and capabilities. The implications on performance of different designs are assessed executing a series of benchmarks. Our results show no clear winner, with all services suffering from some limitations or having potential for improvement. In some scenarios, the upload of the same file set can take seven times more, wasting twice as much capacity. Our methodology and results are useful thus as both benchmark and guideline for system design."}
{"_id": "7855c55eae35e81ba84519aa60b97c51bb6281d5", "title": "On-line Fault Detection of Sensor Measurements", "text": "On-line fault detection in sensor networks is of paramount importance due to the convergence of a variety of challenging technological, application, conceptual, and safety related factors. We introduce a taxonomy for classi\u00a3cation of faults in sensor networks and the \u00a3rst on-line model-based testing technique. The approach is generic in the sense that it can be applied on an arbitrary system of heterogeneous sensors with an arbitrary type of fault model, while it provides a \u00a4exible tradeoff between accuracy and latency. The key idea is to formulate on-line testing as a set of instances of a non-linear function minimization and consequently apply nonparametric statistical methods to identify the sensors that have the highest probability to be faulty. The optimization is conducted using the Powell nonlinear function minimization method. The effectiveness of the approach is evaluated in the presence of random noise using a system of light sensors."}
{"_id": "7d3808cc062d5e25eafa92256c72f490be2bff7f", "title": "Development of support system for forward tilting of the upper body", "text": "We have been developing a wearable robot that directly and physically supports human movement. A ldquomuscle suitrdquo for the arm that will provide muscular support for the paralyzed or those otherwise unable to move unaided is one example. Whereas low back pain is one of the most serious issue for the manual worker and in this paper, we focus on developing the support system for forward tilting of the upper body. In this paper, the concept and mechanical structure of the support system is shown. From the experimental result to keep forward tilting posture with load indicates our system is very effective for keeping forward tilting posture and it becomes clear that more than 60% reduction of muscular power is possible."}
{"_id": "4f3d6fdb74ebb2a64369269d402210c83060a8d7", "title": "Randomized Controlled Caregiver Mediated Joint Engagement Intervention for Toddlers with Autism", "text": "This study aimed to determine if a joint attention intervention would result in greater joint engagement between caregivers and toddlers with autism. The intervention consisted of 24 caregiver-mediated sessions with follow-up 1 year later. Compared to caregivers and toddlers randomized to the waitlist control group the immediate treatment (IT) group made significant improvements in targeted areas of joint engagement. The IT group demonstrated significant improvements with medium to large effect sizes in their responsiveness to joint attention and their diversity of functional play acts after the intervention with maintenance of these skills 1 year post-intervention. These are among the first randomized controlled data to suggest that short-term parent-mediated interventions can have important effects on core impairments in toddlers with autism. Clinical Trials #: NCT00065910."}
{"_id": "7e1f274a73803c97db5284da0eb295b2cab12486", "title": "Developmental dyslexia: genetic dissection of a complex cognitive trait", "text": "Developmental dyslexia, a specific impairment of reading ability despite adequate intelligence and educational opportunity, is one of the most frequent childhood disorders. Since the first documented cases at the beginning of the last century, it has become increasingly apparent that the reading problems of people with dyslexia form part of a heritable neurobiological syndrome. As for most cognitive and behavioural traits, phenotypic definition is fraught with difficulties and the genetic basis is complex, making the isolation of genetic risk factors a formidable challenge. Against such a background, it is notable that several recent studies have reported the localization of genes that influence dyslexia and other language-related traits. These investigations exploit novel research approaches that are relevant to many areas of human neurogenetics."}
{"_id": "6243769976e9a2e8c33ee9fc28ac0308cb052ebf", "title": "Neural Probabilistic Model for Non-projective MST Parsing", "text": "In this paper, we propose a probabilistic parsing model that defines a proper conditional probability distribution over nonprojective dependency trees for a given sentence, using neural representations as inputs. The neural network architecture is based on bi-directional LSTMCNNs, which automatically benefits from both wordand character-level representations, by using a combination of bidirectional LSTMs and CNNs. On top of the neural network, we introduce a probabilistic structured layer, defining a conditional log-linear model over nonprojective trees. By exploiting Kirchhoff\u2019s Matrix-Tree Theorem (Tutte, 1984), the partition functions and marginals can be computed efficiently, leading to a straightforward end-to-end model training procedure via back-propagation. We evaluate our model on 17 different datasets, across 14 different languages. Our parser achieves state-of-the-art parsing performance on nine datasets."}
{"_id": "042810cdcfb8f8af2f46e13543ccc9a3c9476f69", "title": "Private memoirs of a smart meter", "text": "Household smart meters that measure power consumption in real-time at fine granularities are the foundation of a future smart electricity grid. However, the widespread deployment of smart meters has serious privacy implications since they inadvertently leak detailed information about household activities. In this paper, we show that even without a priori knowledge of household activities or prior training, it is possible to extract complex usage patterns from smart meter data using off-the-shelf statistical methods. Our analysis uses two months of data from three homes, which we instrumented to log aggregate household power consumption every second. With the data from our small-scale deployment, we demonstrate the potential for power consumption patterns to reveal a range of information, such as how many people are in the home, sleeping routines, eating routines, etc. We then sketch out the design of a privacy-enhancing smart meter architecture that allows an electric utility to achieve its net metering goals without compromising the privacy of its customers."}
{"_id": "9a0c3991537f41238237bd6e78747a6b4512f2b3", "title": "Audio-Visual Speech Enhancement based on Multimodal Deep Convolutional Neural Network", "text": "Speech enhancement (SE) aims to reduce noise in speech signals. Most SE techniques focus on addressing audio information only. In this work, inspired by multimodal learning, which utilizes data from different modalities, and the recent success of convolutional neural networks (CNNs) in SE, we propose an audio-visual deep CNN (AVDCNN) SE model, which incorporates audio and visual streams into a unified network model. In the proposed AVDCNN SE model, audio and visual features are first processed using individual CNNs, and then, fused into a joint network to generate enhanced speech at an output layer. The AVDCNN model is trained in an end-toend manner, and parameters are jointly learned through backpropagation. We evaluate enhanced speech using five objective criteria. Results show that the AVDCNN yields notably better performance as compared to an audio-only CNN-based SE model, confirming the effectiveness of integrating visual information into the SE process."}
{"_id": "39df57cefdf38c2c9a76b85d7f67505959cfd6e7", "title": "Multi-Agent Reinforcement Learning: A Report on Challenges and Approaches", "text": "Reinforcement Learning (RL) is a learning paradigm concerned with learning to control a system so as to maximize an objective over the long term. This approach to learning has received immense interest in recent times and success manifests itself in the form of human-level performance on games like Go. While RL is emerging as a practical component in reallife systems, most successes have been in Single Agent domains. This report will instead specifically focus on challenges that are unique to Multi-Agent Systems interacting in mixed cooperative and competitive environments. The report concludes with advances in the paradigm of training Multi-Agent Systems called Decentralized Actor, Centralized Critic, based on an extension of MDPs called Decentralized Partially Observable MDPs, which has seen a renewed interest lately. 1 ar X iv :1 80 7. 09 42 7v 1 [ cs .A I] 2 5 Ju l 2 01 8"}
{"_id": "1cc920998208f988a873dbbfa0315274d0b51b57", "title": "Introduction to Robotics: Mechanics and Control", "text": "How can you change your mind to be more open? There many sources that can help you to improve your thoughts. It can be from the other experiences and also story from some people. Book is one of the trusted sources to get. You can find so many books that we share here in this website. And now, we show you one of the best, the introduction to robotics mechanics and control john j craig solution manual."}
{"_id": "2c935d9e04583f9fb29fdc8b570e30996cbceed9", "title": "Development of humanoid robot platform KHR-2 (KAIST humanoid robot-2)", "text": "We are presenting the mechanical and electrical system design and system integration of controllers including sensory devices of the humanoid, KHR-2 in this paper. The concept and the objective of the design will be described. We have developed KHR-2 since 2003 which has 41 DOF (degree of freedom). Each arm including a hand and a wrist has 11 DOF (5+2 DOF/hand (finger + wrist), 4 DOF/arm) and each leg has 6 DOF. Head and trunk have 6 DOF (2 DOF/eye and 2 DOF/neck) and 1 DOF respectively. The mechanical part of the robot is designed to have human friendly appearance and wide movable angle range. Joint actuators are designed to have negligible uncertainties such as backlash. To control all axes, distributed control architecture is adopted to reduce the computation burden of the main controller (PC) and to expand the devices easily. We developed a sub-controller which is a servo motor controller and sensor interfacing devices using microprocessor. The main controller (PC) attached on the back of the robot communicates with sub-controllers in real-time by using CAN (controller area network). Windows XP is used for OS (operating system) for fast development of main control program. RTX (real time extension) HAL extension software is used to realize the real-time control in Windows XP environment. KHR-2 has several sensor types, which are 3-axis F/T (force/torque) sensors at foot and wrist, inertia sensor system (accelerometer and rate gyro) and CCD camera. The F/T sensor at the foot is the most fundamental sensor for stable walking. The inertia sensor system is essential for determining the inclination between the ground and the robot. Finally, we used the CCD camera for navigation and stabilization of the robot in the future. We described the details of the KHR-2 in this paper."}
{"_id": "943bcaa387edf521cec61f7029eb139772873bba", "title": "Development of a biped walking robot compensating for three-axis moment by trunk motion", "text": "The authors have been using the ZMP (@ro Mfoment&oint) as a criterion to distinguish the stability of walking for a biped walking robot which has a trunk. T6 the authors introduce a control method of dynamic biped walking for a biped walking robot to compensate for the three-axis (pitch, roll and yaw-axis) moment on an arbitraryplanned ZMP by trunk motion. The authors developed a biped walking robot and performed a walking experiment with the robot using the control method. The result was a fast dynamic biped walking a t the walking speed of 0.54 slstep with a 0.3 m step on a flat floor. This walking speed is about 50 percent faster than that with the robot which compensates for only the twoaxis (pitch and roll-axis) moment by t runk motion. -z In 1986, the authors developed a biped walking robot WL12 (Waseda Leg No. 12) which has a trunk for the stabilization of walking. At the same time the authors proposed a control method of dynamic biped walking for a biped walking robot to compensate for the two-axis (pitch and roll-axis) moment on an arbitrary planned ZMP by the trunk[l]-[3). In 1990, by using this walking control method, the authors achieved dynamic biped walking at the fastest walking speed of 0.8 s/step with 0.3 m step length on a flat floor[5]. The method was based on the assumption that the contact point between robot foot and the floor would not slide. That is to say, only the pitch-axis and roll-axis moments were compensated for by trunk motion, and the yaw-axis moment was not taken into account in the stabilization of walking. In the walking experiment, however, as the robot walked faster, the yaw-axis moment began to give the robot a spin on the yaw-axis, which became a considerable problem that affected the stability of walking. In contrast, i r was considered that human beings compensate not only for the pitch and roll-axis moment but also for the yaw-axis moment by swinging both arms and rotating the waist to maintain the total stability of walking. 0-7803-0823-9p33/$3.00 (C) 1993 IEEE __ _I------\" I -__-.*-Fig. 1 Biped walking robot WL-12RV Therefore, the objective of this study was to develop a biped walking robot which has an ability to compensate for the three-axis moment by trunk motion, to work out a control method of dynamic biped walking for the robot and to realize faster walking than before."}
{"_id": "d4319e6c668b97bcd3d2698ce1bf91a5a8d3e340", "title": "The Development of Honda Humanoid Robot", "text": "I n this paper. we present the mechanism, system cor$guration, basic control algorithm and integrated ,functions of the Nonda humanoid robot. Like its human counterpart, this robot lzas the ability to move forward and backward, sideways to the right or the left, as well as diagonally. I n addition, the robot can turn in any direction, walk up and down stairs continuously. Furthermore, due to its unique posture stability control, the robot is able to maintain its balance despite unexpected complications such as uneven ground su faces As a part of its integrated functions, this robot is able to inove on a planned path autonomously and to perform simple operations via wireless tele-operation."}
{"_id": "da0d6c069a4032294b5acf6c0c84ff206378a070", "title": "Design of prototype humanoid robotics platform for HRP", "text": "This paper presents a prototype humanoid robotics platform developed for HRP-2. HRP-2 is a new humanoid robotics platform, which we have been developing in phase two of HRP. HRP is a humanoid robotics project, which has been lunched by Ministry of Economy, Trade and Industry (METI) of Japan from 1998FY to 2002FY for five years. The ability of the biped locomotion of HRP-2 is improved so that HRP-2 can cope with rough terrain in the open air and can prevent the possible damages to a humanoid robot\u2019s own self in the event of tipping over. The ability of whole body motion of HRP-2 is also improved so that HRP-2 can get up by a humanoid robot\u2019s own self even tough HRP-2 tips over. In this paper, the mechanisms and specifications of developed prototype humanoid robotics platform, and its electrical system are introduced."}
{"_id": "c8f1975a08a42bf5b4c804a9e93ff97b18b50220", "title": "Un-Crafting: De-Constructive Engagements with Interactive Artifacts", "text": "Crafting interactive artifacts is typically associated with synthesizing, making, constructing, putting things together. In this paper, we discuss how these activities can be contrasted with de-synthesizing activities that revolve around decomposition, dissection, and taking things apart. Based on framings that emphasize how related practices are valuable in engineering processes, we aim to unlock the potential of de-constructive engagements with interactive technologies as a material driven (design) practice. We propose un-crafting as a framework of four modes of taking interactive artifacts apart (i.e., un-crafting for material exposition, material inspiration, material inquiry, and material exploration) aiming to pinpoint de-constructive episodes inherent to design processes, as well as to encourage the refinement of respective techniques and methods."}
{"_id": "9bfd728d0d804d5d8b71239113460eff3e95e2dc", "title": "Heart rate variability and autonomic activity at rest and during exercise in various physiological conditions", "text": "The rhythmic components of heart rate variability (HRV) can be separated and quantitatively assessed by means of power spectral analysis. The powers of high frequency (HF) and low frequency (LF) components of HRV have been shown to estimate cardiac vagal and sympathetic activities. The reliability of these spectral indices, as well as that of LF/HF ratio as a marker of autonomic interaction at rest and during exercise, is briefly reviewed. Modifications in autonomic activities induced by different physiological conditions, e.g. hypoxia exposure, training, and water immersion, have been found in HRV power spectra at rest. The changes in HF and LF powers and in LF/HF ratio observed during exercise have been shown not to reflect the decrease in vagal activity and the activation of sympathetic system occurring at increasing loads. HF peak was recognised in power spectra in the entire range of relative intensity, being responsible for the most part of HR variability at maximal load. LF power did not change during low intensity exercise and decreased to negligible values at medium\u2013high intensity, where sympathetic activity was enhanced. There was no influence from factors such as fitness level, age, hypoxia, and blood distribution. In contrast, a dramatic effect of body position has been suggested by the observation that LF power increased at medium\u2013high intensities when exercising in the supine position. The increased respiratory activity due to exercise would be responsible of HF modulation of HR via a direct mechanical effect. The changes in LF power observed at medium\u2013high intensity might be the expression of the modifications in arterial pressure control mechanisms occurring with exercise. The finding of opposite trends for LF rhythm in supine and sitting exercises suggests that different readjustments might have occurred in relation to different muscular inputs in the two positions."}
{"_id": "c62beef9b5e68e0af8b9c77d8558f8f53ac47aa3", "title": "Signal compensation and extraction of high resolution position for sinusoidal magnetic encoders", "text": "MEs (magnetic encoders) are widely used in industrial control systems because of its significant advantages, such as high resolution, high speed, operation in harsh environments, and low cost. This paper describes a method to extract the position information with high resolution for sinusoidal MEs. A code compensator based on optimization theories is applied to correct non-ideal signals of the encoder outputs. This algorithm can effectively eliminate the DC offset, phase-shift, amplitude difference and sinusoidal deformation from these outputs. Then, a size-reduced look-up table is built-up off-line to generate the high resolution quadrature pulses suitable for standard incremental encoder signals. These LUTs (Look-up tables) rely on the approximate linear property of the amplitude of sinusoids in section [-sin(pi/4) ; sin(pi/4)] and is decreased by converting the values of LUTs into binary values (0 and 1). The firmware is implemented on a 16-bit DSP (digital signal processor) TMS320F2812 hardware platform with minimum computation. The experimental results are also presented in this paper."}
{"_id": "ba6f87a867f915d43e67bbc1e3230a221b9645d2", "title": "Love, hate, arousal and engagement: exploring audience responses to performing arts", "text": "Understanding audience responses to art and performance is a challenge. New sensors are promising for measurement of implicit and explicit audience engagement. However, the meaning of biometric data, and its relationship to engagement, is unclear. We conceptually explore the audience engagement domain to uncover opportunities and challenges in the assessment and interpretation of audience engagement data. We developed a display that linked performance videos with audience biometric data and presented it to 7 performing arts experts, to explore the measurement, interpretation and application of biometric data. Experts were intrigued by the response data and reflective in interpreting it. We deepened our inquiry with an empirical study with 49 participants who watched a video of a dance performance. We related temporal galvanic skin response (GSR) data to two self-report scales, which provided insights on interpreting this measure. Our findings, which include strong correlations, support the interpretation of GSR as a valid representation of audience engagement."}
{"_id": "a20efead95e227d405bafb39f14474902104ab79", "title": "Micro-Doppler effect in radar: phenomenon, model, and simulation study", "text": "When, in addition to the constant Doppler frequency shift induced by the bulk motion of a radar target, the target or any structure on the target undergoes micro-motion dynamics, such as mechanical vibrations or rotations, the micro-motion dynamics induce Doppler modulations on the returned signal, referred to as the micro-Doppler effect. We introduce the micro-Doppler phenomenon in radar, develop a model of Doppler modulations, derive formulas of micro-Doppler induced by targets with vibration, rotation, tumbling and coning motions, and verify them by simulation studies, analyze time-varying micro-Doppler features using high-resolution time-frequency transforms, and demonstrate the micro-Doppler effect observed in real radar data."}
{"_id": "6a686b525a84a87ca3e4d90a6704da8588e84344", "title": "A Wideband Sequential-Phase Fed Circularly Polarized Patch Array", "text": "This communication presents a wideband circularly polarized (CP) 2 \u00d7 2 patch array using a sequential-phase feeding network. By combining three operating modes, both axial ratio (AR) and impedance bandwidths are enhanced and wider than those of previous published sequential-fed single-layer patch arrays. These three CP operating modes are tuned and matched by optimizing the truncated corners of patch elements and the sequential-phase feeding network. A prototype of the proposed patch array is built to validate the design experimentally. The measured -10-dB impedance bandwidth is 1.03 GHz (5.20-6.23 GHz), and the measured 3-dB AR bandwidth is 0.7 GHz (5.25-5.95 GHz), or 12.7% corresponding to the center frequency of 5.5 GHz. The measured peak gain is about 12 dBic and the gain variation is less than 3 dB within the AR bandwidth."}
{"_id": "d97e3655f50ee9b679ac395b2637f6fa66af98c7", "title": "The effects of feedback on energy conservation: A meta-analysis.", "text": "Feedback has been studied as a strategy for promoting energy conservation for more than 30 years, with studies reporting widely varying results. Literature reviews have suggested that the effectiveness of feedback depends on both how and to whom it is provided; yet variations in both the type of feedback provided and the study methodology have made it difficult for conclusions to be drawn. The current article analyzes past theoretical and empirical research on both feedback and proenvironmental behavior to identify unresolved issues, and utilizes a meta-analysis of 42 feedback studies published between 1976 and 2010 to test a set of hypotheses about when and how feedback about energy usage is most effective. Results indicate that feedback is effective overall, r = .071, p < .001, but with significant variation in effects (r varied from -.080 to .480). Several treatment variables were found to moderate this relationship, including frequency, medium, comparison message, duration, and combination with other interventions (e.g., goal, incentive). Overall, results provide further evidence of feedback as a promising strategy to promote energy conservation and suggest areas in which future research should focus to explore how and for whom feedback is most effective."}
{"_id": "a1a4bcd65489cbe982e2948fa129775d7dd1393f", "title": "Modeling Cellular-to-UAV Path-Loss for Suburban Environments", "text": "Operating unmanned aerial vehicle (UAV) over cellular networks would open the barriers of remote navigation and far-flung flying by combining the benefits of UAVs and the ubiquitous availability of cellular networks. In this letter, we provide an initial insight on the radio propagation characteristics of cellular-to-UAV (CtU) channel. In particular, we model the statistical behavior of the path-loss from a cellular base station toward a flying UAV. Where we report the value of the path-loss as a function of the depression angle and the terrestrial coverage beneath the UAV. The provided model is derived based on extensive experimental data measurements conducted in a typical suburban environment for both terrestrial (by drive test) and aerial coverage (using a UAV). The model provides simple and accurate prediction of CtU path-loss that can be useful for both researchers and network operators alike."}
{"_id": "8e8ddf3c38d6773202556900bd3c0553e2021dd2", "title": "Behavioral Game Theory on Online Social Networks : Colonel Blotto is on Facebook", "text": "We show how online social networks such as Facebook can be used in Behavioral Game Theory research. We report the deployment of a Facebook application \u2018Project Waterloo\u2019 that allows users to play the Colonel Blotto game against their friends and strangers. Unlike conventional studies performed in the laboratory environment, which rely on monetary incentives to attract human subjects to play games, our framework does not use money and instead relies on reputation and entertainment incentives. We describe the Facebook application we created for conducting this experiment, and perform a preliminary analysis of the data collected in the game. We conclude by discussing the advantages of our approach and list some ideas for future work."}
{"_id": "b9bab0bc4a189fd5f05b93fd22046ae3ac06d7d9", "title": "Historical vignettes of the thyroid gland.", "text": "Although \"glands\" in the neck corresponding to the thyroid were known for thousands of years, they were mainly considered pathological when encountered. Recognition of the thyroid gland as an anatomical and physiological entity required human dissection, which began in earnest in the 16th century. Leonardo Da Vinci is generally credited as the first to draw the thyroid gland as an anatomical organ. The drawings were subsequently \"lost\" to medicine for nearly 260 years. The drawings were probably of a nonhuman specimen. Da Vinci vowed to produce an anatomical atlas, but it was never completed. Michelangelo Buonarroti promised to complete drawings for the anatomical work of Realdus Columbus, De Re Anatomica, but these were also never completed. Andreas Vesalius established the thyroid gland as an anatomical organ with his description and drawings in the Fabrica. The thyroid was still depicted in a nonhuman form during this time. The copper etchings of Bartholomew Eustachius made in the 1560s were obviously of humans, but were not actually published until 1714 with a description by Johannes Maria Lancisius. These etchings also depicted some interesting anatomy, which we describe. The Adenographia by Thomas Wharton in 1656 named the thyroid gland for the first time and more fully described it. The book also attempted to assign a function to the gland. The thyroid gland's interesting history thus touches a number of famous men from diverse backgrounds."}
{"_id": "392e2848788f596a0f5426fc2b52f1ca3b79e20f", "title": "Database updating through user feedback in fingerprint-based Wi-Fi location systems", "text": "Wi-Fi fingerprinting is a technique which can provide location in GPS-denied environments, relying exclusively on Wi-Fi signals. It first requires the construction of a database of \u201cfingerprints\u201d, i.e. signal strengths from different access points (APs) at different reference points in the desired coverage area. The location of the device is then obtained by measuring the signal strengths at its location, and comparing it with the different reference fingerprints in the database. The main disadvantage of this technique is the labour required to build and maintain the fingerprints database, which has to be rebuilt every time a significant change in the wireless environment occurs, such as installation or removal of new APs, changes in the layout of a building, etc. This paper investigates a new method to utilise user feedback as a way of monitoring changes in the wireless environment. It is based on a system of \u201cpoints\u201d given to each AP in the database. When an AP is switched off, the number of points associated with that AP will gradually reduce as the users give feedback, until it is eventually deleted from the database. If a new AP is installed, the system will detect it and update the database with new fingerprints. Our proposed system has two main advantages. First it can be used as a tool to monitor the wireless environment in a given place, detecting faulty APs or unauthorised installation of new ones. Second, it regulates the size of the database, unlike other systems where feedback is only used to insert new fingerprints in the database."}
{"_id": "f960f58ae420db5e18a4c1a11730dd0d0a05e3c9", "title": "Interleaved-Boost Converter With High Voltage Gain", "text": "This paper presents an interleaved-boost converter, magnetically coupled to a voltage-doubler circuit, which provides a voltage gain far higher than that of the conventional boost topology. Besides, this converter has low-voltage stress across the switches, natural-voltage balancing between output capacitors, low-input current ripple, and magnetic components operating with the double of switching frequency. These features make this converter suitable to applications where a large voltage step-up is demanded, such as grid-connected systems based on battery storage, renewable energies, and uninterruptible power system applications. Operation principle, main equations, theoretical waveforms, control strategy, dynamic modeling, and digital implementation are provided. Experimental results are also presented validating the proposed topology."}
{"_id": "f55f5914f856be46030219ddc4d1fecd915be7c1", "title": "Internal Coupled-Fed Dual-Loop Antenna Integrated With a USB Connector for WWAN/LTE Mobile Handset", "text": "A coupled-fed dual-loop antenna capable of providing eight-band WWAN/LTE operation and suitable to integrate with a USB connector in the mobile handset is presented. The antenna integrates with a protruded ground, which is extended from the main ground plane of the mobile handset to accommodate a USB connector functioning as a data port of the handset. To consider the presence of the integrated protruded ground, the antenna uses two separate shorted strips and a T-shape monopole encircled therein as a coupling feed and a radiator as well. The shorted strips are short-circuited through a common shorting strip to the protruded ground and coupled-fed by the T-shape monopole to generate two separate quarter-wavelength loop resonant modes to form a wide lower band to cover the LTE700/GSM850/900 operation (704-960 MHz). An additional higher-order loop resonant mode is also generated to combine with two wideband resonant modes contributed by the T-shape monopole to form a very wide upper band of larger than 1 GHz to cover the GSM1800/1900/UMTS/LTE2300/2500 operation (1710-2690 MHz). Details of the proposed antenna are presented. For the SAR (specific absorption rate) requirement in practical mobile handsets to meet the limit of 1.6 W/kg for 1-g human tissue, the SAR values of the antenna are also analyzed."}
{"_id": "7b93119414a02b332b1a6a51cf68b7e6ae3c42be", "title": "Optimal spatial filtering of single trial EEG during imagined hand movement.", "text": "The development of an electroencephalograph (EEG)-based brain-computer interface (BCI) requires rapid and reliable discrimination of EEG patterns, e.g., associated with imaginary movement. One-sided hand movement imagination results in EEG changes located at contra- and ipsilateral central areas. We demonstrate that spatial filters for multichannel EEG effectively extract discriminatory information from two populations of single-trial EEG, recorded during left- and right-hand movement imagery. The best classification results for three subjects are 90.8%, 92.7%, and 99.7%. The spatial filters are estimated from a set of data by the method of common spatial patterns and reflect the specific activation of cortical areas. The method performs a weighting of the electrodes according to their importance for the classification task. The high recognition rates and computational simplicity make it a promising method for an EEG-based brain-computer interface."}
{"_id": "f4e3854fb10fcd29c9f382b7934c012fdeb8fc27", "title": "A Semantic Web Methodology for Situation-Aware Curative Food Service Recommendation System", "text": "Recently curative food is becoming more popular, as more people realize its benefits. Based on the theory of Chinese medicine, food itself is medicine. The Curative food which is an ideal nutritious food can help to loss weight, increase immunity and is also good for curative effects in patients. In this paper, we propose a new curative food service (CFS) recommendation system, the designed Situation-aware Curative Food Service Recommendation System (SCFSRS), which provides the cooperation web-based platform for all related mobile users (MUs) and curative food service providers (CFSPs), could strengthen the ability of CFS suggestion. SCFSRS is a five-tier system composed of the MUs, UDDI registries (UDDIRs), CFSPs, curative food services server (CFSS), and database server (DS)."}
{"_id": "a3ecf44d904d404bd1368ffb56e7bf4439656fd9", "title": "Pattern-based synonym and antonym extraction", "text": "Many research studies adopt manually selected patterns for semantic relation extraction. However, manually identifying and discovering patterns is time consuming and it is difficult to discover all potential candidates. Instead, we propose an automatic pattern construction approach to extract verb synonyms and antonyms from English newspapers. Instead of relying on a single pattern, we combine results indicated by multiple patterns to maximize the recall."}
{"_id": "4c0179f164fbc250a22b5dd449c408aa9dbd6727", "title": "Improving Back-Propagation by Adding an Adversarial Gradient", "text": "The back-propagation algorithm is widely used for learning in artificial neural networks. A challenge in machine learning is to create models that generalize to new data samples not seen in the training data. Recently, a common flaw in several machine learning algorithms was discovered: small perturbations added to the input data lead to consistent misclassification of data samples. Samples that easily mislead the model are called adversarial examples. Training a \u201dmaxout\u201d network on adversarial examples has shown to decrease this vulnerability, but also increase classification performance. This paper shows that adversarial training has a regularizing effect also in networks with logistic, hyperbolic tangent and rectified linear units. A simple extension to the back-propagation method is proposed, that adds an adversarial gradient to the training. The extension requires an additional forward and backward pass to calculate a modified input sample, or mini batch, used as input for standard back-propagation learning. The first experimental results on MNIST show that the \u201dadversarial back-propagation\u201d method increases the resistance to adversarial examples and boosts the classification performance. The extension reduces the classification error on the permutation invariant MNIST from 1.60% to 0.95% in a logistic network, and from 1.40% to 0.78% in a network with rectified linear units. Based on these promising results, adversarial back-propagation is proposed as a stand-alone regularizing method that should be further investigated."}
{"_id": "5fb5caca9bb186321bde420529a5e77b4e019dea", "title": "EVOLUTION OF SOLID WASTE MANAGEMENT IN MALAYSIA", "text": "This paper seeks to examine the policy evolution of solid waste management in Malaysia and to determine its challenges and opportunities by assessing policy gaps, trends and stakeholders perception of solid waste management in Malaysia. Malaysian solid waste generation has been increasing drastically where solid waste generation was projected to increase from about 9.0 million tonnes in 2000 to about 10.9 million tonnes in 2010, to about 12.8 million tonnes in 2015 and finally to about 15.6 million tonnes in 2020 though national recycling rate is only about 3-5 %. This projected increasing rate of solid waste generation is expected to burden the country\u2019s resources and environment in managing these wastes in a sustainable manner. Solid waste management policies in Malaysia has evolved from simple informal policies to supplementary provision in legislation such as the Local Government Act, 1976 and the Environmental Quality Act, 1974 to formal policies such as the National Strategic Plan for Solid Waste Management (NSP) 2005, Master Plan on National Waste Minimization (MWM) in 2006, National Solid Waste Management Policy 2006 and the Solid Waste and Public Cleansing Management Act (SWMA) 2007. Policy gap analysis indicates challenges in the area of policy implementation potentially due to lack of political will, weak stakeholder acceptance and policy impracticality due to direct adoption of policy practices from developed countries while potential opportunities are in the area of legislation for mandatory recycling and source separation as well as government green procurement initiatives. In conclusion, policy evolution of solid waste management in Malaysia may be shifting from a focus on basic solid waste management issues of proper collection, disposal and infrastructure requirements towards sustainable waste management."}
{"_id": "58a93e9cd60ce331606d31ebed62599a2b7db805", "title": "The SWISS-PROT protein sequence database and its supplement TrEMBL in 2000", "text": "SWISS-PROT is a curated protein sequence database which strives to provide a high level of annotation (such as the description of the function of a protein, its domains structure, post-translational modifications, variants, etc.), a minimal level of redundancy and high level of integration with other databases. Recent developments of the database include format and content enhancements, cross-references to additional databases, new documentation files and improvements to TrEMBL, a computer-annotated supplement to SWISS-PROT. TrEMBL consists of entries in SWISS-PROT-like format derived from the translation of all coding sequences (CDSs) in the EMBL Nucleotide Sequence Database, except the CDSs already included in SWISS-PROT. We also describe the Human Proteomics Initiative (HPI), a major project to annotate all known human sequences according to the quality standards of SWISS-PROT. SWISS-PROT is available at: http://www.expasy.ch/sprot/ and http://www.ebi.ac.uk/swissprot/"}
{"_id": "63116ca22d65d15a178091ed5b20d89482d7b3db", "title": "MAFFT: a novel method for rapid multiple sequence alignment based on fast Fourier transform.", "text": "A multiple sequence alignment program, MAFFT, has been developed. The CPU time is drastically reduced as compared with existing methods. MAFFT includes two novel techniques. (i) Homo logous regions are rapidly identified by the fast Fourier transform (FFT), in which an amino acid sequence is converted to a sequence composed of volume and polarity values of each amino acid residue. (ii) We propose a simplified scoring system that performs well for reducing CPU time and increasing the accuracy of alignments even for sequences having large insertions or extensions as well as distantly related sequences of similar length. Two different heuristics, the progressive method (FFT-NS-2) and the iterative refinement method (FFT-NS-i), are implemented in MAFFT. The performances of FFT-NS-2 and FFT-NS-i were compared with other methods by computer simulations and benchmark tests; the CPU time of FFT-NS-2 is drastically reduced as compared with CLUSTALW with comparable accuracy. FFT-NS-i is over 100 times faster than T-COFFEE, when the number of input sequences exceeds 60, without sacrificing the accuracy."}
{"_id": "9350c8bd303102f51235673c8283088e5e0972b0", "title": "WorkerRank: Using Employer Implicit Judgements to Infer Worker Reputation", "text": "In online labor marketplaces two parties are involved; employers and workers. An employer posts a job in the marketplace to receive applications from interested workers. After evaluating the match to the job, the employer hires one (or more workers) to accomplish the job via an online contract. At the end of the contract, the employer can provide his worker with some rating that becomes visible in the worker online profile. This form of explicit feedback guides future hiring decisions, since it is indicative of worker true ability. In this paper, first we discuss some of the shortcomings of the existing reputation systems that are based on the end-of-contract ratings. Then we propose a new reputation mechanism that uses Bayesian updates to combine employer implicit feedback signals in a link-analysis approach. The new system addresses the shortcomings of existing approaches, while yielding better signal for the worker quality towards hiring decision."}
{"_id": "1196b2ae45f29c03db542601bee42161c48b6c36", "title": "Implementing deep neural networks for financial market prediction on the Intel Xeon Phi", "text": "Deep neural networks (DNNs) are powerful types of artificial neural networks (ANNs) that use several hidden layers. They have recently gained considerable attention in the speech transcription and image recognition community (Krizhevsky et al., 2012) for their superior predictive properties including robustness to overfitting. However their application to financial market prediction has not been previously researched, partly because of their computational complexity. This paper describes the application of DNNs to predicting financial market movement directions. A critical step in the viability of the approach in practice is the ability to effectively deploy the algorithm on general purpose high performance computing infrastructure. Using an Intel Xeon Phi co-processor with 61 cores, we describe the process for efficient implementation of the batched stochastic gradient descent algorithm and demonstrate a 11.4x speedup on the Intel Xeon Phi over a serial implementation on the Intel Xeon."}
{"_id": "1dedf2986b7b34e6541d2f08d4bd77d67a92fabe", "title": "Twitter Geolocation and Regional Classification via Sparse Coding", "text": "Experimental Methodology ! Three types of embedding schemes (binary bagof-words, word-counts, word-sequence) ! For a fair comparison, we follow the identical experimental methodology as Eisenstein et al. (2010) ! Twitter\u2019s location service enables users to add location information to their Tweets ! Location information of message is useful (e.g., consumer marketing) \u00d8\uf0d8 However, < 3% of messages are geotagged\u2020 ! Machine learning can geolocate based on message content \u00d8\uf0d8 Best results are from supervised modelbased approaches ! We focus on unsupervised data-driven approach, namely sparse coding, to take advantage of abundance of unlabeled messages to geolocate Twitter"}
{"_id": "235783b8d52893b0dc93ce977d41544cbf9b5d18", "title": "The Golden Beauty: Brain Response to Classical and Renaissance Sculptures", "text": "Is there an objective, biological basis for the experience of beauty in art? Or is aesthetic experience entirely subjective? Using fMRI technique, we addressed this question by presenting viewers, na\u00efve to art criticism, with images of masterpieces of Classical and Renaissance sculpture. Employing proportion as the independent variable, we produced two sets of stimuli: one composed of images of original sculptures; the other of a modified version of the same images. The stimuli were presented in three conditions: observation, aesthetic judgment, and proportion judgment. In the observation condition, the viewers were required to observe the images with the same mind-set as if they were in a museum. In the other two conditions they were required to give an aesthetic or proportion judgment on the same images. Two types of analyses were carried out: one which contrasted brain response to the canonical and the modified sculptures, and one which contrasted beautiful vs. ugly sculptures as judged by each volunteer. The most striking result was that the observation of original sculptures, relative to the modified ones, produced activation of the right insula as well as of some lateral and medial cortical areas (lateral occipital gyrus, precuneus and prefrontal areas). The activation of the insula was particularly strong during the observation condition. Most interestingly, when volunteers were required to give an overt aesthetic judgment, the images judged as beautiful selectively activated the right amygdala, relative to those judged as ugly. We conclude that, in observers na\u00efve to art criticism, the sense of beauty is mediated by two non-mutually exclusive processes: one based on a joint activation of sets of cortical neurons, triggered by parameters intrinsic to the stimuli, and the insula (objective beauty); the other based on the activation of the amygdala, driven by one's own emotional experiences (subjective beauty)."}
{"_id": "198b711915429fa55162e749a0b964755b36a62e", "title": "Fine Grained Classification of Named Entities", "text": "While Named Entity extraction is useful in many natural language applications, the coarse categories that most NE extractors work with prove insufficient for complex applications such as Question Answering and Ontology generation. We examine one coarse category of named entities, persons, and describe a method for automatically classifying person instances into eight finergrained subcategories. We present a supervised learning method that considers the local context surrounding the entity as well as more global semantic information derived from topic signatures and WordNet. We reinforce this method with an algorithm that takes advantage of the presence of entities in multiple contexts."}
{"_id": "4cb97088ff4c9adfb40559d23c20af49fdb6bc8b", "title": "User Association for Load Balancing in Heterogeneous Cellular Networks", "text": "For small cell technology to significantly increase the capacity of tower-based cellular networks, mobile users will need to be actively pushed onto the more lightly loaded tiers (corresponding to, e.g., pico and femtocells), even if they offer a lower instantaneous SINR than the macrocell base station (BS). Optimizing a function of the long-term rate for each user requires (in general) a massive utility maximization problem over all the SINRs and BS loads. On the other hand, an actual implementation will likely resort to a simple biasing approach where a BS in tier j is treated as having its SINR multiplied by a factor Aj \u2265 1, which makes it appear more attractive than the heavily-loaded macrocell. This paper bridges the gap between these approaches through several physical relaxations of the network-wide association problem, whose solution is NP hard. We provide a low-complexity distributed algorithm that converges to a near-optimal solution with a theoretical performance guarantee, and we observe that simple per-tier biasing loses surprisingly little, if the bias values Aj are chosen carefully. Numerical results show a large (3.5x) throughput gain for cell-edge users and a 2x rate gain for median users relative to a maximizing received power association."}
{"_id": "d7b2cf873d0c4841887e82a2be0b54013b34080d", "title": "A practical traffic management system for integrated LTE-WiFi networks", "text": "Mobile operators are leveraging WiFi to relieve the pressure posed on their networks by the surging bandwidth demand of applications. However, operators often lack intelligent mechanisms to control the way users access their WiFi networks. This lack of sophisticated control creates poor network utilization, which in turn degrades the quality of experience (QoE). To meet user traffic demands, it is evident that operators need solutions that optimally balance user traffic across cellular and WiFi networks. Motivated by the lack of practical solutions in this space, we design and implement ATOM - an end-to-end system for adaptive traffic offloading for WiFi-LTE deployments. ATOM has two novel components: (i) A network interface selection algorithm that maps user traffic across WiFi and LTE to optimize user QoE and (ii) an interface switching service that seamlessly re-directs ongoing user sessions in a cost-effective and standards-compatible manner. Our evaluations on a real LTE-WiFi testbed using YouTube traffic reveals that ATOM reduces video stalls by 3-4 times compared to naive solutions."}
{"_id": "6140528bd1c4c8637f9d826a11368c0704819019", "title": "Distributed Antennas for Indoor Radio Communications", "text": "The idea of implementing an indoor radio communications system serving an entire building from a single central antenna appears to be an attractive proposition. However, based on various indoor propagation measurements of the signal attenuation and the multipath delay spread, such a centralized approach appears to be limited to small buildings and to narrow-band FDMA-type systems with limited reliability and flexibility. In this paper, we present the results of indoor radio propagation measurements of two signal distribution approaches that improve the picture dramatically. In the first, the building is divided into many small cells, each served from an antenna located in its own center, and with adjacent cells operating in different frequency bands. In the second approach, the building is divided into one or more large cells, each served from a distributed antenna system or a \u201cleaky feeder\u201d that winds its way through the hallways. This approach eliminates the frequency cell handoff problem that is bound to exist in the first approach, while still preserving the dramatic reductions in multipath delay spread and signal attenuation compared to a centralized system. For example, the measurements show that, with either approach, the signal attenuation can be reduced by as much as a few tens of decibels and the rms delay spread becomes limited to 20 to 50 ns, even in large buildings. This can make possible the implementation of sophisticated broad-band TDMA-type systems that are flexible, robust, and virtually building-independent."}
{"_id": "83a1a478c68e21ee83dd0225091bf9d3444a8120", "title": "Design, Implementation and Evaluation of Congestion Control for Multipath TCP", "text": "Multipath TCP, as proposed by the IETF working group mptcp, allows a single data stream to be split across multiple paths. This has obvious benefits for reliability, and it can also lead to more efficient use of networked resources. We describe the design of a multipath congestion control algorithm, we implement it in Linux, and we evaluate it for multihomed servers, data centers and mobile clients. We show that some \u2018obvious\u2019 solutions for multipath congestion control can be harmful, but that our algorithm improves throughput and fairness compared to single-path TCP. Our algorithm is a drop-in replacement for TCP, and we believe it is safe to deploy."}
{"_id": "9482abb8261440ce4d4c235e79eadc94561da2f4", "title": "Rate control for communication networks: shadow prices, proportional fairness and stability", "text": "Rate Control for Communication Networks: Shadow Prices, Proportional Fairness and Stability Author(s): F. P. Kelly, A. K. Maulloo and D. K. H. Tan Source: The Journal of the Operational Research Society, Vol. 49, No. 3 (Mar., 1998), pp. 237-252 Published by: on behalf of the Palgrave Macmillan Journals Operational Research Society Stable URL: http://www.jstor.org/stable/3010473 Accessed: 07-10-2015 21:10 UTC"}
{"_id": "b974d0999a8d3dc6153e005687b307df55615fa2", "title": "Internet of Things and LoRa\u2122 Low-Power Wide-Area Networks: A survey", "text": "Nowadays there is a lot of effort on the study, analysis and finding of new solutions related to high density sensor networks used as part of the IoT (Internet of Things) concept. LoRa (Long Range) is a modulation technique that enables the long-range transfer of information with a low transfer rate. This paper presents a review of the challenges and the obstacles of IoT concept with emphasis on the LoRa technology. A LoRaWAN network (Long Range Network Protocol) is of the Low Power Wide Area Network (LPWAN) type and encompasses battery powered devices that ensure bidirectional communication. The main contribution of the paper is the evaluation of the LoRa technology considering the requirements of IoT. After introduction in Section II the main obstacles of IoT development are discussed, section III addresses the challenges and entails the need for solutions to different problems in WSN research. Section IV presents the LoRaWAN communication protocol architecture requirements while in Section V the LoRa modulation performance is evaluated and discussed. In conclusion LoRa can be considered a good candidate in addressing the IoT challenges."}
{"_id": "eaef0ce6a0ba2999943e99a5c46d624f50948edf", "title": "The Impact of Performance-Contingent Rewards on Perceived Autonomy and Competence 1", "text": "Two studies examined the impact of performance-contingent rewards on perceived autonomy, competence, and intrinsic motivation. Autonomy was measured in terms of both decisional and affective reports. The first study revealed an undermining effect of performance-contingent rewards on affective reports of autonomy among university students, and an increase in reports of competence. Decisional autonomy judgements were unaffected by rewards. The second study replicated this pattern of findings among elementary school children. These results help resolve Cognitive Evaluation Theory\u2019s (E. L. Deci & R. M. Ryan, 1985; R. M. Ryan, V. Mims, & R. Koestner, 1983) and Eisenberger, Rhoades, et al.\u2019s (R. Eisenberger, L. Rhoades, & J. Cameron, 1999) divergent positions on the impact of performance-contingent rewards on autonomy. The studies also included measures of intrinsic motivation."}
{"_id": "a824f364c427da49499ab9d3133782e0221f2e1f", "title": "A Multi-tenant Web Application Framework for SaaS", "text": "Software as a Service (SaaS) is a software delivery model in which software resources are accessed remotely by users. Enterprises find SaaS attractive because of its low cost. SaaS requires sharing of application servers among multiple tenants for low operational costs. Besides the sharing of application servers, customizations are needed to meet requirements of each tenant. Supporting various levels of configuration and customization is desirable for SaaS frameworks. This paper describes a multi-tenant web application framework for SaaS. The proposed framework supports runtime customizations of user interfaces and business logics by use of file-level namespaces, inheritance, and polymorphism. It supports various client-side web application technologies."}
{"_id": "fb41177076327c40dee612f30996739e20cf1bd7", "title": "Comparing deep neural networks against humans: object recognition when the signal gets weaker", "text": "Human visual object recognition is typically rapid and seemingly effortless, as well as largely independent of viewpoint and object orientation. Until very recently, animate visual systems were the only ones capable of this remarkable computational feat. This has changed with the rise of a class of computer vision algorithms called deep neural networks (DNNs) that achieve human-level classification performance on object recognition tasks. Furthermore, a growing number of studies report similarities in the way DNNs and the human visual system process objects, suggesting that current DNNs may be good models of human visual object recognition. Yet there clearly exist important architectural and processing differences between stateof-the-art DNNs and the primate visual system. The potential behavioural consequences of these differences are not well understood. We aim to address this issue by comparing human and DNN generalisation abilities towards image degradations. We find the human visual system to be more robust to image manipulations like contrast reduction, additive noise or novel eidolon-distortions. In addition, we find progressively diverging classification error-patterns between man and DNNs when the signal gets weaker, indicating that there may still be marked differences in the way humans and current DNNs perform visual object recognition. We envision that our findings as well as our carefully measured and freely available behavioural datasets1 provide a new useful benchmark for the computer vision community to improve the robustness of DNNs and a motivation for neuroscientists to search for mechanisms in the brain that could facilitate this robustness. Data and materials available at https://github.com/rgeirhos/object-recognition 1 ar X iv :1 70 6. 06 96 9v 1 [ cs .C V ] 2 1 Ju n 20 17"}
{"_id": "85959eb06cdc1cd22b074e9173e9afa2126f06f1", "title": "Retrograde amnesia and remote memory impairment", "text": "-The status of remote memory in case N.A., patients receiving bilateral ECT, and patients with KorsakoBsyndrome was assessed using seven different tests. The extent and pattern of remote memory impairment depended upon the etiology of amnesia, appearing as a relatively brief retrograde amnesia for N.A. and patients receiving ECT and as an extensive impairment covering many decades for patients with KorsakoB syndrome. Differences in remote memory impairment could not be explained by differences in the severity ofanterograde amnesia, indicating that amnesia is not a unitary disorder. We suggest that brief retrograde amnesia is typically present whenever amnesia occurs and that extensive remote memory impairment is a distinct entity related to superimposed cognitive deficits. These ideas and a review of previous studies lead to a comprehensive proposal for understanding retrograde amnesia and remote memory impairment. THE AMNESIC syndrome has been the subject of much interest in neuropsychology because of what it might reveal about the organization and neurological foundations of normal memory. The phenomenon of retrograde amnesia, that loss of memory can occur for the period before the precipitating incident, has had a particularly large impact on ideas about normal memory function [l-5]. However, conflicting views exist regarding both the extent and pattern of retrograde amnesia and these have been used to support different ideas about the nature of amnesia. Specifically, clinical assessment of memory loss in patients with medial temporal lobectomy has suggested that a relatively brief retrograde amnesia extends at least to several months and at most to a few years prior to surgery [6, 71. Formal testing of patients receiving bilateral electroconvulsive therapy (ECT) has revealed a brief retrograde amnesia affecting memory for public events and former television programs that occurred during the few years prior to treatment [8-lo]. These findings, together with studies of anterograde amnesia, are consistent with theories of amnesia that emphasize deficiency in the storage or consolidation of new information [l, 11, 121. By contrast, formal testing of patients with Korsakoff syndrome has revealed an extensive impairment of remote memory for public events [13-l 51, famous faces [2,13,15], and famous voices [16] that covers several decades. These *Now at Department of Psychology, Massachusetts Institute of Technology, Cambridge, MA 02139. ICorrespondence to Larry R. Squire, Veterans Administration Medical Center, 3350 La Jolla Village Drive, V-l 16A, San Diego, CA 92161, U.S.A."}
{"_id": "00bfa802025014fc9c55e316e82da7c227c246bd", "title": "Multiview Photometric Stereo", "text": "This paper addresses the problem of obtaining complete, detailed reconstructions of textureless shiny objects. We present an algorithm which uses silhouettes of the object, as well as images obtained under changing illumination conditions. In contrast with previous photometric stereo techniques, ours is not limited to a single viewpoint but produces accurate reconstructions in full 3D. A number of images of the object are obtained from multiple viewpoints, under varying lighting conditions. Starting from the silhouettes, the algorithm recovers camera motion and constructs the object's visual hull. This is then used to recover the illumination and initialize a multiview photometric stereo scheme to obtain a closed surface reconstruction. There are two main contributions in this paper: First, we describe a robust technique to estimate light directions and intensities and, second, we introduce a novel formulation of photometric stereo which combines multiple viewpoints and, hence, allows closed surface reconstructions. The algorithm has been implemented as a practical model acquisition system. Here, a quantitative evaluation of the algorithm on synthetic data is presented together with complete reconstructions of challenging real objects. Finally, we show experimentally how, even in the case of highly textured objects, this technique can greatly improve on correspondence-based multiview stereo results."}
{"_id": "83f2fb84f526ebbd6dbf1ceba6843634e5e99951", "title": "Bicycle-Sharing System Analysis and Trip Prediction", "text": "Bicycle-sharing systems, which can provide shared bike usage services for the public, have been launched in many big cities. In bicycle-sharing systems, people can borrow and return bikes at any stations in the service region very conveniently. Therefore, bicycle-sharing systems are normally used as a short distance trip supplement for private vehicles as well as regular public transportation. Meanwhile, for stations located at different places in the service region, the bike usages can be quite skewed and imbalanced. Some stations have too many incoming bikes and get jammed without enough docks for upcoming bikes, while some other stations get empty quickly and lack enough bikes for people to check out. Therefore, inferring the potential destinations and arriving time of each individual trip beforehand can effectively help the service providers schedule manual bike re-dispatch in advance. In this paper, we will study the individual trip prediction problem for bicycle-sharing systems. To address the problem, we study a real-world bicycle-sharing system and analyze individuals' bike usage behaviors first. Based on the analysis results, a new trip destination prediction and trip duration inference model will be introduced. Experiments conducted on a real-world bicycle-sharing system demonstrate the effectiveness of the proposed model."}
{"_id": "a066c755059ec6fe1d9def55fb4c554474eb343f", "title": "Wavelet-Based Neural Network for Power Disturbance Recognition and Classification", "text": "In this paper, a prototype wavelet-based neuralnetwork classifier for recognizing power-quality disturbances is implemented and tested under various transient events. The discrete wavelet transform (DWT) technique is integrated with the probabilistic neural-network (PNN) model to construct the classifier. First, the multiresolution-analysis technique of DWT and the Parseval\u2019s theorem are employed to extract the energy distribution features of the distorted signal at different resolution levels. Then, the PNN classifies these extracted features to identify the disturbance type according to the transient duration and the energy features. Since the proposed methodology can reduce a great quantity of the distorted signal features without losing its original property, less memory space and computing time are required. Various transient events tested, such as momentary interruption, capacitor switching, voltage sag/swell, harmonic distortion, and flicker show that the classifier can detect and classify different power disturbance types efficiently."}
{"_id": "9d0870dfbf2b06da226403430dc77b9c82b05457", "title": "A time window neural network based framework for Remaining Useful Life estimation", "text": "This paper develops a framework for determining the Remaining Useful Life (RUL) of aero-engines. The framework includes the following modular components: creating a moving time window, a suitable feature extraction method and a multi-layer neural network as the main machine learning algorithm. The proposed framework is evaluated on the publicly available C-MAPSS dataset. The prognostic accuracy of the proposed algorithm is also compared against other state-of-the-art methods available in the literature and it has been shown that the proposed framework has the best overall performance."}
{"_id": "4f298d6d0c8870acdbf94fe473ebf6814681bd1f", "title": "Going Deeper into Action Recognition: A Survey", "text": "We provide a detailed review of the work on human action recognition over the past decade. We refer to \u201cactions\u201d as meaningful human motions. Starting with methods that are based on handcrafted representations, we review the impact of revamped deep neural networks on action recognition. We follow a systematic taxonomy of action recognition approaches to present a coherent discussion over their improvements and fall-backs."}
{"_id": "0bca0ca7bb642b747797c17a4899206116fb0b25", "title": "Color attributes for object detection", "text": "State-of-the-art object detectors typically use shape information as a low level feature representation to capture the local structure of an object. This paper shows that early fusion of shape and color, as is popular in image classification, leads to a significant drop in performance for object detection. Moreover, such approaches also yields suboptimal results for object categories with varying importance of color and shape. In this paper we propose the use of color attributes as an explicit color representation for object detection. Color attributes are compact, computationally efficient, and when combined with traditional shape features provide state-of-the-art results for object detection. Our method is tested on the PASCAL VOC 2007 and 2009 datasets and results clearly show that our method improves over state-of-the-art techniques despite its simplicity. We also introduce a new dataset consisting of cartoon character images in which color plays a pivotal role. On this dataset, our approach yields a significant gain of 14% in mean AP over conventional state-of-the-art methods."}
{"_id": "8f02f8d04675bb91866b5c2c4a39975b01b4aeef", "title": "Cyber security for home users: A new way of protection through awareness enforcement", "text": "We are currently living in an age, where the use of the Internet has become second nature to millions of people. Not only do businesses depend on the Internet for all types of electronic transactions, but more and more home users are experiencing the immense benefit of the Internet. However, this dependence and use of the Internet bring new and dangerous risks. This is due to increasing attempts from unauthorised third parties to compromise private information for their own benefit \u2013 the whole wide area of cyber crime. It is therefore essential that all users understand the risks of using the Internet, the importance of securing their personal information and the consequences if this is not done properly. It is well known that home users are specifically vulnerable, and that cyber criminals have such users squarely in their target. This vulnerability of home users are due to many factors, but one of the most important ones is the fact that such home users are in many cases not aware of the risks of using the Internet, and often venture into cyber space without any awareness preparation for this journey. This paper specifically investigates the position of the home user, and proposes a new model, The E-Awareness Model (E-AM), in which home users can be forced to acquaint themselves with the risks involved in venturing into cyber space. The E-AM consists of two components : the awareness component housed in the E-Awareness Portal, and the enforcement component. This model proposes a way to improve information security awareness amongst home users by presenting some information security content and enforcing the absorption of this content. The main difference between the presented model and other existing information security awareness models, is that in the presented model the acquiring/absorption of the awareness content is compulsory the user is forced to proceed via the E-Awareness Portal without the option of bypassing it."}
{"_id": "bec987bf3dfbb158900cfad30418e24356d7abf3", "title": "A RSSI-based and calibrated centralized localization technique for wireless sensor networks", "text": "This paper presents a multi-hop localization technique for WSNs exploiting acquired received signal strength indications. The proposed system aims at providing an effective solution for the self localization of nodes in static/semi-static wireless sensor networks without requiring previous deployment information"}
{"_id": "7d7499183128fd032319cf4cbdac6eeab4c0362b", "title": "Loosely-Coupled Benchmark Framework Automates Performance Modeling on IaaS Clouds", "text": "Cloud computing is under rapid development, which brings the urgent need of evaluation and comparison of cloud systems. The performance testers often struggle in tedious manual operations, when carrying out lots of similar experiments on the cloud systems. However, few of current benchmark tools provide both flexible workflow controlling methodology and extensible workload abstraction at the same time. We present a modeling methodology to compare the performance from multiple aspects based on a loosely coupled benchmark framework, which automates experiments under agile workflow controlling and achieves broad cloud supports, as well as good workload extensibility. With several built-in workloads and scenario templates, we performed a series of tests on Amazon EC2 services and our private Open Stack-based cloud, and analyze the elasticity and scalability based on the performance models. Experiments show the robustness and compatibility of our framework, which makes remarkable guarantee that it can be leveraged in practice for researchers and testers to perform their further study."}
{"_id": "4220868688ecf6d252d17f6d784923d0c5f66366", "title": "An automatic irrigation system using ZigBee in wireless sensor network", "text": "Wireless Sensing Technology is widely used everywhere in the current scientific world. As the technology is growing and changing rapidly, Wireless sensing Network (WSN) helps to upgrade the technology. In the research field of wireless sensor networks the power efficient time is a major issue. This problem can be overcome by using the ZigBee technology. The main idea of this is to understand how data travels through a wireless medium transmission using wireless sensor network and monitoring system. This paper design an irrigation system which is automated by using controllable parameter such as temperature, soil moisture and air humidity because they are the important factors to be controlled in PA(Precision Agricultures)."}
{"_id": "4064696e69b0268003879c0bcae6527d3b786b85", "title": "Winner-Take-All Autoencoders", "text": "In this paper, we propose a winner-take-all method for learning hierarchical sparse representations in an unsupervised fashion. We first introduce fully-connected winner-take-all autoencoders which use mini-batch statistics to directly enforce a lifetime sparsity in the activations of the hidden units. We then propose the convolutional winner-take-all autoencoder which combines the benefits of convolutional architectures and autoencoders for learning shift-invariant sparse representations. We describe a way to train convolutional autoencoders layer by layer, where in addition to lifetime sparsity, a spatial sparsity within each feature map is achieved using winner-take-all activation functions. We will show that winner-take-all autoencoders can be used to to learn deep sparse representations from the MNIST, CIFAR-10, ImageNet, Street View House Numbers and Toronto Face datasets, and achieve competitive classification performance."}
{"_id": "7553667a0e392e93a9bfecae3e34201a0c1a1266", "title": "Joint optimization of LCMV beamforming and acoustic echo cancellation", "text": "Full-duplex hands-free acoustic human/machine interfaces often require the combination of acoustic echo cancellation and speech enhancement in order to suppress acoustic echoes, local interference, and noise. In order to optimally exploit positive synergies between acoustic echo cancellation and speech enhancement, we present in this contribution a combined least-squares (LS) optimization criterion for the integration of acoustic echo cancellation and adaptive linearly-constrained minimum variance (LCMV) beamforming. Based on this optimization criterion, we derive a computationally efficient system based on the generalized sidelobe canceller (GSC), which effectively deals with scenarioes with time-varying acoustic echo paths and simultaneous presence of double-talk of acoustic echoes, local interference, and desired speakers."}
{"_id": "a3822dcdc70c4ef2a2b91ceba0902f8cf75041d1", "title": "Nonstationary Function Optimization Using Genetic Algorithms with Dominance and Diploidy", "text": "Specifically, we apply genetic algorithms that include diploid genotypes and dominance operators to a simple nonstationary problem in function optimization: an oscillating, blind knapsack problem. In doing this, we find that diploidy and dominance induce a form of long term distributed memory that stores and occasionally remembers good partial solutions that were once desirable. This memory permits faster adaptation to drastic environmental shifts than is possible without the added structures and operators. This paper investigates the use of diploid representations and dominance operators in genetic algorithms (GAs) to improve performance in environments that vary with time. The mechanics of diploidy and dominance in natural genetics are briefly discussed, and the usage of these structures and operators in other GA investigations is reviewed. An extension of the schema theorem is developed which illustrates the ability of diploid GAs with dominance to hold alternative alleles in abeyance. Both haploid and diploid GAs are applied to a simple time varying problem: an oscillating, blind knapsack problem. Simulation results show that a diploid GA with an evolving dominance map adapts more quickly to the sudden changes in this problem environment than either a haploid GA or a diploid GA with a fixed dominance map. These proof-of-principle results indicate that diploidy and dominance can be used to induce a form of long term distributed memory within a population of structures. In the remainder of this paper, we explore the mechanism, theory, and implementation of dominance and diploidy in artificial genetic search. We start by examining the role of diploidy and dominance in natural genetics, and we briefly review examples of their usage in genetic algorithm circles. We extend the schema theorem to analyze the effect of these structures and mechanisms. We present results from computational experiments on a 17-object, oscillating, blind 0-1 knapsack problem. Simulations with adaptive dominance maps and diploidy are able to adapt more quickly to sudden environmental shifts than either a haploid genetic algorithm or a diploid genetic algorithm with fixed dominance map. These results are encouraging and suggest the investigation of dominance and diploidy in other GA applications in search and machine learning. INTRODUCTION Real world problems are seldom independent of time. If you don't like the weather, wait five minutes and it will change. If this week gasoline costs $1.30 a gallon, next week it may cost $0.89 a gallon or perhaps $2.53 a gallon. In these and many more complex ways, real world environments are both nonstationary and noisy. Searching for good solutions or good behavior under such conditions is a difficult task; yet, despite the perpetual change and uncertainty, all is not lost. History does repeat itself, and what goes around does come around. The horrors of Malthusian extrapolation rarely come to pass, and solutions that worked well yesterday are at least somewhat likely to be useful when circumstances are somewhat similar tomorrow or the day after. The temporal regularity implied in these observations places a premium on search augmented by selective memory. In other words, a system which does not learn the lessons of its history is doomed to repeat its mistakes. 
THE MECHANICS OF NATURAL DOMINANCE AND DIPLOIDY It is surprising to some genetic algorithm newcomers that the most commonly used GA is modeled after the mechanics of haploid genetics. After all, don't most elementary genetics textbooks start off with a discussion of Mendel's pea plants and some mention of diploidy and dominance? The reason for this disparity between genetic algorithm practice and genetics textbook coverage is due to the success achieved by early GA investigators (Hollstien, 1971; De Jong, 1975) using haploid chromosome models on stationary problems. It was found that surprising efficacy and efficiency could be obtained using single stranded (haploid) chromosomes under the action of reproduction and crossover. As a result, later investigators of artificial genetic search have tended to ignore diploidy and dominance. In this section we examine the mechanics of diploidy and dominance to understand their roles in shielding alternate alleles. In this paper, we investigate the behavior of a genetic algorithm augmented by structures and operators capable of exploiting the regularity and repeatability of many nonstationary environments."}
{"_id": "6c73f9567eefbfdea469365bea9faa579b4b9499", "title": "Spin Transfer Torques", "text": "This tutorial article introduces the physics of spin transfer torques in magnetic devices. We provide an elementary discussion of the mechanism of spin transfer torque, and review the theoretical and experimental progress in this field. Our intention is to be accessible to beginning graduate students. This is the introductory paper for a cluster of \u201cCurrent Perspectives\u201d articles on spin transfer torques published in volume 320 of the Journal of Magnetism and Magnetic Materials. This article is meant to set the stage for the others which follow it in this cluster; they focus in more depth on particularly interesting aspects of spin-torque physics and highlight unanswered questions that might be productive topics for future research."}
{"_id": "6ee2bf4f9647e110fa7788c932a9592cf8eaeee0", "title": "Group-sensitive multiple kernel learning for object categorization", "text": "In this paper, we propose a group-sensitive multiple kernel learning (GS-MKL) method to accommodate the intra-class diversity and the inter-class correlation for object categorization. By introducing an intermediate representation \u201cgroup\u201d between images and object categories, GS-MKL attempts to find appropriate kernel combination for each group to get a finer depiction of object categories. For each category, images within a group share a set of kernel weights while images from different groups may employ distinct sets of kernel weights. In GS-MKL, such group-sensitive kernel combinations together with the multi-kernels based classifier are optimized in a joint manner to seek a trade-off between capturing the diversity and keeping the invariance for each category. Extensive experiments show that our proposed GS-MKL method has achieved encouraging performance over three challenging datasets."}
{"_id": "c38f93b80b41a1a95662d0734e67339d05e8f2c7", "title": "MicroRNA Expression Changes during Interferon-Beta Treatment in the Peripheral Blood of Multiple Sclerosis Patients", "text": "MicroRNAs (miRNAs) are small non-coding RNA molecules acting as post-transcriptional regulators of gene expression. They are involved in many biological processes, and their dysregulation is implicated in various diseases, including multiple sclerosis (MS). Interferon-beta (IFN-beta) is widely used as a first-line immunomodulatory treatment of MS patients. Here, we present the first longitudinal study on the miRNA expression changes in response to IFN-beta therapy. Peripheral blood mononuclear cells (PBMC) were obtained before treatment initiation as well as after two days, four days, and one month, from patients with clinically isolated syndrome (CIS) and patients with relapsing-remitting MS (RRMS). We measured the expression of 651 mature miRNAs and about 19,000 mRNAs in parallel using real-time PCR arrays and Affymetrix microarrays. We observed that the up-regulation of IFN-beta-responsive genes is accompanied by a down-regulation of several miRNAs, including members of the mir-29 family. These differentially expressed miRNAs were found to be associated with apoptotic processes and IFN feedback loops. A network of miRNA-mRNA target interactions was constructed by integrating the information from different databases. Our results suggest that miRNA-mediated regulation plays an important role in the mechanisms of action of IFN-beta, not only in the treatment of MS but also in normal immune responses. miRNA expression levels in the blood may serve as a biomarker of the biological effects of IFN-beta therapy that may predict individual disease activity and progression."}
{"_id": "4917a802cab040fc3f6914c145fb59c1d738aa84", "title": "The potential environmental impact of engineered nanomaterials", "text": "With the increased presence of nanomaterials in commercial products, a growing public debate is emerging on whether the environmental and social costs of nanotechnology outweigh its many benefits. To date, few studies have investigated the toxicological and environmental effects of direct and indirect exposure to nanomaterials and no clear guidelines exist to quantify these effects."}
{"_id": "bd483eefea0e352cdc86288a9b0c27bad45143f7", "title": "SDN-based solutions for Moving Target Defense network protection", "text": "Software-Defined Networking (SDN) allows network capabilities and services to be managed through a central control point. Moving Target Defense (MTD) on the other hand, introduces a constantly adapting environment in order to delay or prevent attacks on a system. MTD is a use case where SDN can be leveraged in order to provide attack surface obfuscation. In this paper, we investigate how SDN can be used in some network-based MTD techniques. We first describe the advantages and disadvantages of these techniques, the potential countermeasures attackers could take to circumvent them, and the overhead of implementing MTD using SDN. Subsequently, we study the performance of the SDN-based MTD methods using Cisco's One Platform Kit and we show that they significantly increase the attacker's overheads."}
{"_id": "5238190eb598fb3352c51ee07b7ee8ec714f3c38", "title": "OPEM: A Static-Dynamic Approach for Machine-Learning-Based Malware Detection", "text": "Malware is any computer software potentially harmful to both computers and networks. The amount of malware is growing every year and poses a serious global security threat. Signature-based detection is the most extended method in commercial antivirus software, however, it consistently fails to detect new malware. Supervised machine learning has been adopted to solve this issue. There are two types of features that supervised malware detectors use: (i) static features and (ii) dynamic features. Static features are extracted without executing the sample whereas dynamic ones requires an execution. Both approaches have their advantages and disadvantages. In this paper, we propose for the first time, OPEM, an hybrid unknown malware detector which combines the frequency of occurrence of operational codes (statically obtained) with the information of the execution trace of an executable (dynamically obtained). We show that this hybrid approach enhances the performance of both approaches when run separately."}
{"_id": "e5acdb0246b33d33c2a34a4a23faaf21e2f9b924", "title": "An Efficient Non-Negative Matrix-Factorization-Based Approach to Collaborative Filtering for Recommender Systems", "text": "Matrix-factorization (MF)-based approaches prove to be highly accurate and scalable in addressing collaborative filtering (CF) problems. During the MF process, the non-negativity, which ensures good representativeness of the learnt model, is critically important. However, current non-negative MF (NMF) models are mostly designed for problems in computer vision, while CF problems differ from them due to their extreme sparsity of the target rating-matrix. Currently available NMF-based CF models are based on matrix manipulation and lack practicability for industrial use. In this work, we focus on developing an NMF-based CF model with a single-element-based approach. The idea is to investigate the non-negative update process depending on each involved feature rather than on the whole feature matrices. With the non-negative single-element-based update rules, we subsequently integrate the Tikhonov regularizing terms, and propose the regularized single-element-based NMF (RSNMF) model. RSNMF is especially suitable for solving CF problems subject to the constraint of non-negativity. The experiments on large industrial datasets show high accuracy and low-computational complexity achieved by RSNMF."}
{"_id": "11efa6998c2cfd3de59cf0ec0321a9e17418915d", "title": "Toward Automated Dynamic Malware Analysis Using CWSandbox", "text": "Malware is notoriously difficult to combat because it appears and spreads so quickly. In this article, we describe the design and implementation of CWSandbox, a malware analysis tool that fulfills our three design criteria of automation, effectiveness, and correctness for the Win32 family of operating systems"}
{"_id": "129ed742b496b23efdf745aaf0c48958ef64d2c6", "title": "Exploring Multiple Execution Paths for Malware Analysis", "text": "Malicious code (or Malware) is defined as software that fulfills the deliberately harmful intent of an attacker. Malware analysis is the process of determining the behavior and purpose of a given Malware sample (such as a virus, worm, or Trojan horse). This process is a necessary step to be able to develop effective detection techniques and removal tools. Currently, Malware analysis is mostly a manual process that is tedious and time-intensive. To mitigate this problem, a number of analysis tools have been proposed that automatically extract the behavior of an unknown program by executing it in a restricted environment and recording the operating system calls that are invoked. The problem of dynamic analysis tools is that only a single program execution is observed. Unfortunately, however, it is possible that certain malicious actions are only triggered under specific circumstances (e.g., on a particular day, when a certain file is present, or when a certain command is received). In this paper, we propose a system that allows us to explore multiple execution paths and identify malicious actions that are executed only when certain conditions are met. This enables us to automatically extract a more complete view of the program under analysis and identify under which circumstances suspicious actions are carried out. Our experimental results demonstrate that many Malware samples show different behavior depending on input read from the environment. Thus, by exploring multiple execution paths, we can obtain a more complete picture of their actions."}
{"_id": "66651bb47e7d16b49f677decfb60cbe1939e7a91", "title": "Optimizing number of hidden neurons in neural networks", "text": "In this paper, a novel and effective criterion based on the estimation of the signal-to-noise-ratio figure (SNRF) is proposed to optimize the number of hidden neurons in neural networks to avoid overfitting in the function approximation. SNRF can quantitatively measure the useful information left unlearned so that overfitting can be automatically detected from the training error only without use of a separate validation set. It is illustrated by optimizing the number of hidden neurons in a multi-layer perceptron (MLP) using benchmark datasets. The criterion can be further utilized in the optimization of other parameters of neural networks when overfitting needs to be considered."}
{"_id": "03bb84352b89691110f112b5515766c55bcc5720", "title": "Neural Relation Extraction via Inner-Sentence Noise Reduction and Transfer Learning", "text": "Extracting relations is critical for knowledge base completion and construction in which distant supervised methods are widely used to extract relational facts automatically with the existing knowledge bases. However, the automatically constructed datasets comprise amounts of low-quality sentences containing noisy words, which is neglected by current distant supervised methods resulting in unacceptable precisions. To mitigate this problem, we propose a novel word-level distant supervised approach for relation extraction. We first build Sub-Tree Parse (STP) to remove noisy words that are irrelevant to relations. Then we construct a neural network inputting the subtree while applying the entity-wise attention to identify the important semantic features of relational words in each instance. To make our model more robust against noisy words, we initialize our network with a priori knowledge learned from the relevant task of entity classification by transfer learning. We conduct extensive experiments using the corpora of New York Times (NYT) and Freebase. Experiments show that our approach is effective and improves the area of Precision/Recall (PR) from 0.35 to 0.39 over the state-of-the-art work."}
{"_id": "5c7f700d28c9e5b5ac115c1409262ecfe89812db", "title": "An Evaluation of Aggregation Techniques in Crowdsourcing", "text": "As the volumes of AI problems involving human knowledge are likely to soar, crowdsourcing has become essential in a wide range of world-wide-web applications. One of the biggest challenges of crowdsourcing is aggregating the answers collected from the crowd since the workers might have wide-ranging levels of expertise. In order to tackle this challenge, many aggregation techniques have been proposed. These techniques, however, have never been compared and analyzed under the same setting, rendering a \u2018right\u2019 choice for a particular application very difficult. Addressing this problem, this paper presents a benchmark that offers a comprehensive empirical study on the performance comparison of the aggregation techniques. Specifically, we integrated several stateof-the-art methods in a comparable manner, and measured various performance metrics with our benchmark, including computation time, accuracy, robustness to spammers, and adaptivity to multi-labeling. We then provide in-depth analysis of benchmarking results, obtained by simulating the crowdsourcing process with different types of workers. We believe that the findings from the benchmark will be able to serve as a practical guideline for crowdsourcing applications."}
{"_id": "6ebd2b2822bcd32b58af033a26fd08a39ba777f7", "title": "ONTOLOGY ALIGNMENT USING MACHINE LEARNING TECHNIQUES", "text": "In semantic web, ontology plays an important role to provide formal definitions of concepts and relationships. Therefore, communicating similar ontologies becomes essential to provide ontologies interpretability and extendibility. Thus, it is inevitable to have similar but not the same ontologies in a particular domain, since there might be several definitions for a given concept. This paper presents a method to combine similarity measures of different categories without having ontology instances or any user feedbacks towards aligning two given ontologies. To align different ontologies efficiently, K Nearest Neighbor (KNN), Support Vector Machine (SVM), Decision Tree (DT) and AdaBoost classifiers are investigated. Each classifier is optimized based on the lower cost and better classification rate. Experimental results demonstrate that the F-measure criterion improves to 99% using feature selection and combination of AdaBoost and DT classifiers, which is highly comparable, and outperforms the previous reported F-measures. Computer Engineering Department, Shahid Chamran University, Ahvaz, Iran."}
{"_id": "d09b70d5029f60bef888a8e73f9a18e2fc2db2db", "title": "Pictogrammar: an AAC device based on a semantic grammar", "text": "As many as two-thirds of individuals with an Autism Spectrum Disorder (ASD) also have language impairments, which can range from mild limitations to complete non-verbal behavior. For such cases, there are several Augmentative and Alternative Communication (AAC) devices available. These are computer-designed tools in order to help people with ASD to palliate or overcome such limitations, at least partially. Some of the most popular AAC devices are based on pictograms, so that a pictogram is the graphical representation of a simple concept and sentences are composed by concatenating a number of such pictograms. Usually, these tools have to manage a vocabulary made up of hundreds of pictograms/concepts, with no or very poor knowledge of the language at semantic and pragmatic level. In this paper we present Pictogrammar, an AAC system which takes advantage of SUpO and PictOntology. SUpO (Simple Upper Ontology) is a formal semantic ontology which is made up of detailed knowledge of facts of everyday life such as simple words, with special interest in linguistic issues, allowing automated grammatical supervision. PictOntology is an ontology developed to manage sets of pictograms, linked to SUpO. Both ontologies make possible the development of tools which are able to take advantage of a formal semantics."}
{"_id": "b0a1f562a55aae189d6a5cb826582b2e7fb06d3c", "title": "Multi-modal Mean-Fields via Cardinality-Based Clamping", "text": "Mean Field inference is central to statistical physics. It has attracted much interest in the Computer Vision community to efficiently solve problems expressible in terms of large Conditional Random Fields. However, since it models the posterior probability distribution as a product of marginal probabilities, it may fail to properly account for important dependencies between variables. We therefore replace the fully factorized distribution of Mean Field by a weighted mixture of such distributions, that similarly minimizes the KL-Divergence to the true posterior. By introducing two new ideas, namely, conditioning on groups of variables instead of single ones and using a parameter of the conditional random field potentials, that we identify to the temperature in the sense of statistical physics to select such groups, we can perform this minimization efficiently. Our extension of the clamping method proposed in previous works allows us to both produce a more descriptive approximation of the true posterior and, inspired by the diverse MAP paradigms, fit a mixture of Mean Field approximations. We demonstrate that this positively impacts real-world algorithms that initially relied on mean fields."}
{"_id": "aded779fe2ec65b9d723435182e093f9f18ad80f", "title": "Design and Implementation of a Reliable Wireless Real-Time Home Automation System Based on Arduino Uno Single-Board Microcontroller", "text": "\u2014This paper presents design and implementation concepts for a wireless real-time home automation system based on Arduino Uno microcontroller as central controllers. The proposed system has two operational modes. The first one is denoted as a manually\u2013automated mode in which the user can monitor and control the home appliances from anywhere over the world using the cellular phone through Wi-Fi communication technology. The second one is referred to a self-automated mode that makes the controllers to be capable of monitoring and controlling different appliances in the home automatically in response to the signals comes from the related sensors. To support the usefulness of the proposed technique, a hardware implementation with Matlab-GUI platform for the proposed system is carried out and the reliability of the system is introduced. The proposed system is shown to be a simple, cost effective and flexible that making it a suitable and a good candidate for the smart home future. I. INTRODUCTION ecently, man's work and life are increasingly tight with the rapid growth in communications and information technology. The informationized society has changed human being's way of life as well as challenged the traditional residence. Followed by the rapid economic expansion, living standard keeps raising up day by day that people have a higher requirement for dwelling functions. The intellectualized society brings diversified information where safe, economic, comfortable and convenient life has become the ideal for every modern family [1]."}
{"_id": "b6741f7862be64a5435d2625ea46f0508b9d3fee", "title": "Security Keys: Practical Cryptographic Second Factors for the Modern Web", "text": "\u201cSecurity Keys\u201d are second-factor devices that protect users against phishing and man-in-the-middle attacks. Users carry a single device and can self-register it with any online service that supports the protocol. The devices are simple to implement and deploy, simple to use, privacy preserving, and secure against strong attackers. We have shipped support for Security Keys in the Chrome web browser and in Google\u2019s online services. We show that Security Keys lead to both an increased level of security and user satisfaction by analyzing a two year deployment which began within Google and has extended to our consumer-facing web applications. The Security Key design has been standardized by the FIDO Alliance, an organization with more than 250 member companies spanning the industry. Currently, Security Keys have been deployed by Google, Dropbox, and GitHub. An updated and extended tech report is available at https://github.com/google/u2fref-code/docs/SecurityKeys_TechReport.pdf."}
{"_id": "f98d91492335e74621837c01c860cbc801a2acbb", "title": "A web-based bayesian intelligent tutoring system for computer programming", "text": "In this paper, we present a Web-based intelligent tutoring system, called BITS. The decision making process conducted in our intelligent system is guided by a Bayesian network approach to support students in learning computer programming. Our system takes full advantage of Bayesian networks, which are a formal framework for uncertainty management in Artificial Intelligence based on probability theory. We discuss how to employ Bayesian networks as an inference engine to guide the students\u2019 learning processes . In addition, we describe the architecture of BITS and the role of each module in the system. Whereas many tutoring systems are static HTML Web pages of a class textbook or lecture notes, our intelligent system can help a student nav igate through the online course materials, recommend learning goals, and generate appropriate reading sequences."}
{"_id": "0204dee746e1f55d87409c5f482d1e7c74c48baa", "title": "Adjustable Autonomy : From Theory to Implementation", "text": "Recent exciting, ambitious applications in agent technology involve agents acting individually or in teams in support of critical activities of individual humans or entire human organizations. Applications range from intelligent homes [13], to \"routine\" organizational coordination[16], to electronic commerce[4] to long-term space missions[12, 6]. These new applications have brought forth an increasing interest in agents\u2019 adjustable autonomy (AA), i.e., in agents\u2019 dynamically adjusting their own level of autonomy based on the situation[8]. In fact, many of these applications will not be deployed, unless reliable AA reasoning is a central component. At the heart of AA is the question of whether and when agents should make autonomous decisions and when they should transfer decision-making control to other entities (e.g., human users). Unfortunately, previous work in adjustable autonomy has focused on individual agent-human interactions and tile techniques developed fail to scale-up to complex heterogeneous organizations. Indeed, as a first step, we focused on a smallscale, but real-world agent-human organization called Electric Elves, where an individual agent and human worked together within a larger multiagent context. Although\u2019the application limits the interactions among entities, key weaknesses of previous approaches to adjustable autonomy are readily apparent. In particular, previous approaches to transferof-control are seen to be too rigid, employing one-shot transfersof-control that can result in unacceptable coordination failures. Furthermore, the previous approaches ignore potential costs (e.g., from delays) to an agent\u2019s team due to such transfers of control. To remedy such problems, we propose a novel approach to AA, based on the notion of a transfer-of-control strategy. A transfer-of-control strategy consists of a conditional sequence of two types of actions: (i) actions to transfer decision-making control (e.g., from the agent to the user or vice versa) and (ii) actions to change an agent\u2019s pre-specified coordination constraints with team members, aimed at minimizing miscoordination costs. The goal is for high quality individual decisions to be made with minimal disruption to the coordination of the team. We operationalize such strategies via Markov decision processes (MDPs) which select the optimal strategy given an uncertain environment and costs to individuals and teams. We have developed a general reward function and state representation for such an MDP, to facilitate application of the approach to different domains. We present results from a careful evaluation of this approach, including via its use in our real-world, deployed Electric Elves system."}
{"_id": "697754f7e62236f6a2a069134cbc62e3138ac89f", "title": "The Granular Tabu Search and Its Application to the Vehicle-Routing Problem", "text": ""}
{"_id": "8d3c794bf910f9630e9516f3ebb1fa4995a5c0c1", "title": "An Item-based Collaborative Filtering Recommendation Algorithm Using Slope One Scheme Smoothing", "text": "Collaborative filtering is one of the most important technologies in electronic commerce. With the development of recommender systems, the magnitudes of users and items grow rapidly, resulted in the extreme sparsity of user rating data set. Traditional similarity measure methods work poor in this situation, make the quality of recommendation system decreased dramatically. Poor quality is one major challenge in collaborative filtering recommender systems. Sparsity of users\u2019 ratings is the major reason causing the poor quality. To address this issue, an item-based collaborative filtering recommendation algorithm using slope one scheme smoothing is presented. This approach predicts item ratings that users have not rated by the employ of slope one scheme, and then uses Pearson correlation similarity measurement to find the target items\u2019 neighbors, lastly produces the recommendations. The experiments are made on a common data set using different recommender algorithms. The results show that the proposed approach can improve the accuracy of the collaborative filtering recommender system."}
{"_id": "2f56548a5a8d849e17e4186393c698e5a735aa91", "title": "Design methodology using inversion coefficient for low-voltage low-power CMOS voltage reference", "text": "This paper presents an analog design methodology, using the selection of inversion coefficient of MOS devices, to design low voltage and low-power (LVLP) CMOS voltage references. These circuits often work under subthreshold operation. Hence, there is a demand for analog design methods that optimize the sizing process of transistors working in weak and moderate inversion. The advantage of the presented method -- compared with the traditional approach to design circuits -- is the reduction of design cycle time and minimization of trial-and-error simulations, if the proposed equations are used. As a case study, a LVLP voltage reference based on subthreshold MOSFETs with supply voltage of 0.7 V was designed for 0.18-\u00bcm CMOS technology."}
{"_id": "42dcf7c039bf8bb36b8fa3a658bd66e834128c1b", "title": "Exploring commonalities across participants in the neural representation of objects.", "text": "The question of whether the neural encodings of objects are similar across different people is one of the key questions in cognitive neuroscience. This article examines the commonalities in the internal representation of objects, as measured with fMRI, across individuals in two complementary ways. First, we examine the commonalities in the internal representation of objects across people at the level of interobject distances, derived from whole brain fMRI data, and second, at the level of spatially localized anatomical brain regions that contain sufficient information for identification of object categories, without making the assumption that their voxel patterns are spatially matched in a common space. We examine the commonalities in internal representation of objects on 3T fMRI data collected while participants viewed line drawings depicting various tools and dwellings. This exploratory study revealed the extent to which the representation of individual concepts, and their mutual similarity, is shared across participants."}
{"_id": "ee654db227dcb7b39d26bec7cc06e2b43b525826", "title": "Hierarchy of fibrillar organization levels in the polytene interphase chromosomes of Chironomus.", "text": ""}
{"_id": "579b2962ac567a39742601cafe3fc43cf7a7109c", "title": "Video Paragraph Captioning Using Hierarchical Recurrent Neural Networks", "text": "We present an approach that exploits hierarchical Recurrent Neural Networks (RNNs) to tackle the video captioning problem, i.e., generating one or multiple sentences to describe a realistic video. Our hierarchical framework contains a sentence generator and a paragraph generator. The sentence generator produces one simple short sentence that describes a specific short video interval. It exploits both temporal-and spatial-attention mechanisms to selectively focus on visual elements during generation. The paragraph generator captures the inter-sentence dependency by taking as input the sentential embedding produced by the sentence generator, combining it with the paragraph history, and outputting the new initial state for the sentence generator. We evaluate our approach on two large-scale benchmark datasets: YouTubeClips and TACoS-MultiLevel. The experiments demonstrate that our approach significantly outperforms the current state-of-the-art methods with BLEU@4 scores 0.499 and 0.305 respectively."}
{"_id": "8772e0057d0adc7b1e8d9f09473fdc1de05a8148", "title": "Autonomous Self-Assembly in Swarm-Bots", "text": "In this paper, we discuss the self-assembling capabilities of the swarm-bot, a distributed robotics concept that lies at the intersection between collective and self-reconfigurable robotics. A swarm-bot is comprised of autonomous mobile robots called s-bots. S-bots can either act independently or self-assemble into a swarm-bot by using their grippers. We report on experiments in which we study the process that leads a group of s-bots to self-assemble. In particular, we present results of experiments in which we vary the number of s-bots (up to 16 physical robots), their starting configurations, and the properties of the terrain on which self-assembly takes place. In view of the very successful experimental results, swarm-bot qualifies as the current state of the art in autonomous self-assembly"}
{"_id": "926e97d5ce2a6e070f8ec07c5aa7f91d3df90ba0", "title": "Facial Expression Recognition Using Enhanced Deep 3D Convolutional Neural Networks", "text": "Deep Neural Networks (DNNs) have shown to outperform traditional methods in various visual recognition tasks including Facial Expression Recognition (FER). In spite of efforts made to improve the accuracy of FER systems using DNN, existing methods still are not generalizable enough in practical applications. This paper proposes a 3D Convolutional Neural Network method for FER in videos. This new network architecture consists of 3D Inception-ResNet layers followed by an LSTM unit that together extracts the spatial relations within facial images as well as the temporal relations between different frames in the video. Facial landmark points are also used as inputs to our network which emphasize on the importance of facial components rather than the facial regions that may not contribute significantly to generating facial expressions. Our proposed method is evaluated using four publicly available databases in subject-independent and cross-database tasks and outperforms state-of-the-art methods."}
{"_id": "41c419e663aa9b301a951bab245a7a6543a3e5f6", "title": "A Java API and Web Service Gateway for wireless M-Bus", "text": "M-Bus Systems are used to support the remote data exchange with meter units through a network. They provide an easy extend able and cost effective method to connect many meter devices to one coherent network. Master applications operate as concentrated data collectors and are used to process measured values further. As modern approaches facilitate the possibility of a Smart Grid, M-Bus can be seen as the foundation of this technology. With the current focus on a more effective power grid, Smart Meters and Smart Grids are an important research topic. This bachelor thesis first gives an overview of the M-Bus standard and then presents a Java library and API to access M-Bus devices remotely in a standardized way through Web Services. Integration into common IT applications requires interoperable interfaces which also facilitate automated machine-to-machine communication. The oBIX (Open Building Information Exchange) standard provides such standardized objects and thus is used to create a Web Service gateway for the Java API."}
{"_id": "97398e974e6965161ad9b9f85e00fd7a8e7b3d83", "title": "AWEsome: An open-source test platform for airborne wind energy systems", "text": "In this paper we present AWEsome (Airborne Wind Energy Standardized Open-source Model Environment), a test platform for airborne wind energy systems that consists of low-cost hardware and is entirely based on open-source software. It can hence be used without the need of large financial investments, in particular by research groups and startups to acquire first experiences in their flight operations, to test novel control strategies or technical designs, or for usage in public relations. Our system consists of a modified off-the-shelf model aircraft that is controlled by the pixhawk autopilot hardware and the ardupilot software for fixed wing aircraft. The aircraft is attached to the ground by a tether. We have implemented new flight modes for the autonomous tethered flight of the aircraft along periodic patterns. We present the principal functionality of our algorithms. We report on first successful tests of these modes in real flights. \u2217Author names are in alphabetical order. a Universit\u00e4t Bonn, bechtle@physik.uni-bonn.de b Universit\u00e4t Bonn, thomasgehrmann@gmx.net c Humboldt-Universit\u00e4t zu Berlin, csieg@physik.hu-berlin.de d Daidalos Capital, zillmann@daidalos-capital.com awesome.physik.uni-bonn.de 1 ar X iv :1 70 4. 08 69 5v 1 [ cs .S Y ] 2 7 A pr 2 01 7 Section"}
{"_id": "e93d2e8066087b9756a585b28244e7335229f1e4", "title": "Optimization of Belt-Type Electrostatic Separation of Triboaerodynamically Charged Granular Plastic Mixtures", "text": "Electrostatic separation is frequently employed for the selective sorting of conducting and insulating materials for waste electric and electronic equipment. In a series of recent papers, the authors have presented several novel triboelectrostatic separation installations for the recycling of the main categories of plastics contained in information technology wastes. The aim of the present work is to optimize the design and the operation of such an installation, composed of a belt-type electrostatic separator associated to a propeller-type tribocharging device. The experimental design method is employed for modeling and optimizing the separation of a mixture of high-impact polystyrene and acrylonitrile butadiene styrene granules originating from shredded out-of-use computer cases. A distinct experiment is carried out on three synthetic mixtures of virgin plastic granules: 75% polyamide (PA)+ 25% polycarbonate (PC), 50% PA+ 50% PC, and 25% PA + 75% PC. The best results are obtained for the mixtures containing equal quantities of the two constituents."}
{"_id": "aeddb6d9568171cc9ca0297b2adfd6c8e23fe797", "title": "Transformerless micro-inverter for grid-connected photovoltaic systems", "text": "The leakage currents caused by high-frequency common-mode (CM) voltage have become a major concern in transformerless photovoltaic (PV) inverters. This paper addresses to a review on dc-ac converters applied to PV systems that can avoid the circulation of leakage currents. Looking for a lower cost and higher reliability solution, a 250 W PV transformerless micro-inverter prototype based on the bipolar full-bridge topology was built and tested. As it is confirmed by experimental results, this topology is able to maintain the CM voltage constant and prevent the circulation of CM currents through the circuit."}
{"_id": "08f991c35f2b3d67cfe219f19e4e791b049f398d", "title": "GQ ( \u03bb ) : A general gradient algorithm for temporal-difference prediction learning with eligibility traces", "text": "A new family of gradient temporal-difference learning algorithms have recently been introduced by Sutton, Maei and others in which function approximation is much more straightforward. In this paper, we introduce the GQ(\u03bb) algorithm which can be seen as extension of that work to a more general setting including eligibility traces and off-policy learning of temporally abstract predictions. These extensions bring us closer to the ultimate goal of this work\u2014the development of a universal prediction learning algorithm suitable for learning experientially grounded knowledge of the world. Eligibility traces are essential to this goal because they bridge the temporal gaps in cause and effect when experience is processed at a temporally fine resolution. Temporally abstract predictions are also essential as the means for representing abstract, higher-level knowledge about courses of action, or options. GQ(\u03bb) can be thought of as an extension of Q-learning. We extend existing convergence results for policy evaluation to this setting and carry out a forward-view/backwardview analysis to derive and prove the validity of the new algorithm. Introduction One of the main challenges in artificial intelligence (AI) is to connect the low-level experience to high-level representations (grounded world knowledge). Low-level experience refers to rich signals received back and forth between the agent and the world. Recent theoretical developments in temporal-difference learning combined with mathematical ideas developed for temporally abstract options, known as intra-option learning, can be used to address this challenge (Sutton, 2009). Intra-option learning (Sutton, Precup, and Singh, 1998) is seen as a potential method for temporalabstraction in reinforcement learning. Intra-option learning is a type of off-policy learning. Off-policy learning refers to learning about a target policy while following another policy, known as behavior policy. Offpolicy learning arises in Q-learning where the target policy is a greedy optimal policy while the behavior policy is exploratory. It is also needed for intra-option learning. Intra-option methods look inside options and allow AI agent to learn about multiple different options simultaneously from a single stream of received data. Option refers to a temporally course of actions with a termination condition. Options are ubiquitous in our everyday life. For example, to go for hiking, we need to consider and evaluate multiple options such as transportation options to the hiking trail. Each option includes a course of primitive actions and only is excited in particular states. The main feature of intra-option learning is its ability to predict the consequences of each option policy without executing it while data is received from a different policy. Temporal difference (TD) methods in reinforcement learning are considered as powerful techniques for prediction problems. In this paper, we consider predictions always in the form of answers to the questions. Questions are like \u201cIf of follow this trail, would I see a creek?\u201d The answers to such questions are in the form of a single scalar (value function) that tells us about the expected future consequences given the current state. 
In general, due to the large number of states, it is not feasible to compute the exact value of each state entry. One of the key features of TD methods is their ability to generalize predictions to states that may not have been visited; this is known as function approximation. Recently, Sutton et al. (2009b) and Maei et al. (2009) introduced a new family of gradient TD methods in which function approximation is much more straightforward than in conventional methods. Prior to their work, the existing classical TD algorithms (e.g., TD(\u03bb) and Q-learning) with function approximation could become unstable and diverge (Baird, 1995; Tsitsiklis and Van Roy, 1997). In this paper, we extend their work to a more general setting that includes off-policy learning of temporally abstract predictions and eligibility traces. Temporally abstract predictions are essential for representing higher-level knowledge about courses of action, or options (Sutton et al., 1998). Eligibility traces bridge the temporal gaps when experience is processed at a temporally fine resolution. In this paper, we introduce the GQ(\u03bb) algorithm, which can be thought of as an extension to Q-learning (Watkins and Dayan, 1989), one of the most popular off-policy learning algorithms in reinforcement learning. Our algorithm incorporates gradient-descent ideas originally developed by Sutton et al. (2009a,b) for option-conditional predictions with varying eligibility traces. We extend existing convergence results for policy evaluation to this setting, carry out a forward-view/backward-view analysis, and prove the validity of the new algorithm. The organization of the paper is as follows: First, we describe the problem setting and define our notation. Then we introduce the GQ(\u03bb) algorithm and describe how to use it. In the next sections we provide the derivation of the algorithm and carry out an analytical analysis of the equivalence of the TD forward view and backward view. We finish the paper with a convergence proof and a conclusion section. Notation and background We consider the problem of policy evaluation in finite state-action Markov Decision Processes (MDPs). Under standard conditions, however, our results can be extended to MDPs with infinitely many state\u2013action pairs. We use a standard reinforcement learning (RL) framework. In this setting, data is obtained from a continually evolving MDP with states st \u2208 S, actions at \u2208 A, and rewards rt \u2208 \u211d, for t = 1, 2, . . ., with each state and reward a function of the preceding state and action. Actions are chosen according to the behavior policy b, which is assumed fixed and exciting, b(s, a) > 0, \u2200s, a. We consider the transition probabilities between state\u2013action pairs, and for simplicity we assume there is a finite number N of state\u2013action pairs. Suppose the agent finds itself at time t in a state\u2013action pair st, at. The agent would like its answer at that time to tell it something about the future sequence st+1, at+1, . . . , st+k if actions from t + 1 on were taken according to the option until it terminated at time t + k. The option policy is denoted \u03c0 : S \u00d7 A \u2192 [0, 1], and its termination condition is denoted \u03b2 : S \u2192 [0, 1]. The answer is always in the form of a single number, and of course we have to be more specific about what we are trying to predict. 
There are two common cases: 1) we are trying to predict the outcome of the option; we want to know the expected value of some function of the state at the time the option terminates. We call this function the outcome target function, and denote it z : S \u2192 \u211d; 2) we are trying to predict the transient, that is, what happens during the option rather than at its end. The most common thing to predict about the transient is the total or discounted reward during the option. We denote the reward function r : S \u00d7 A \u2192 \u211d. Finally, the answer could conceivably be a mixture of both a transient and an outcome. Here we will present the algorithm for answering questions with both an outcome part z and a transient part r, with the two added together. In the common case where one wants only one of the two, the other is set to zero. Now we can start to state the goal of learning more precisely. In particular, we would like our answer to be equal to the expected value of the outcome target function at termination plus the cumulative sum of the transient reward function along the way: Q^\u03c0(st, at) \u2261 E[ rt+1 + \u03b3 rt+2 + \u00b7\u00b7\u00b7 + \u03b3^{k-1} rt+k + \u03b3^{k-1} zt+k | \u03c0, \u03b2 ],  (1) where \u03b3 \u2208 (0, 1] is the discount factor and Q^\u03c0(s, a) denotes the action value function that evaluates policy \u03c0 given the state-action pair s, a. To simplify the notation, from now on we drop the superscript \u03c0 on action values. In many problems the number of state-action pairs is large and therefore it is not feasible to compute the action values for each state-action entry. Therefore, we need to approximate the action values through generalization techniques. Here, we use linear function approximation; that is, the answer to a question is always formed linearly as Q\u03b8(s, a) = \u03b8\u22a4\u03c6(s, a) \u2248 Q(s, a) for all s \u2208 S and a \u2208 A, where \u03b8 \u2208 \u211d^n is a learned weight vector and \u03c6(s, a) \u2208 \u211d^n indicates a state\u2013action feature vector. The goal is to learn the parameter vector \u03b8 through a learning method such as TD learning. The above (1) describes the target in a Monte Carlo sense, but of course we want to include the possibility of temporal-difference learning, one of the most widely used techniques in reinforcement learning. To do this, we provide an eligibility-trace function \u03bb : S \u2192 [0, 1] as described in Sutton and Barto (1998). We let the eligibility-trace function \u03bb vary over different states. In the next section we first introduce GQ(\u03bb), a general temporal-difference learning algorithm that is stable under off-policy training, and show how to use it. Then in later sections we provide the derivation of the algorithm and a convergence proof. The GQ(\u03bb) algorithm In this section we introduce the GQ(\u03bb) algorithm for off-policy learning about the outcomes and transients of options, in other words, intra-option GQ(\u03bb) for learning the answer to a question chosen from a wide (possibly universal) class of option-conditional predictive questions. To specify the question one provides four functions: \u03c0 and \u03b2, for the option, and z and r, for the target functions. To specify how the answers will be formed one provides their functional form (here in linear form), the feature vectors \u03c6(s, a) for all state\u2013action pairs, and the eligibility-trace function \u03bb. The discount factor \u03b3 can be taken to be 1, and thus ignored; the same effect as discounting can be achieved through the choice of \u03b2. 
Now, we specify the GQ(\u03bb) algorithm as follows: The weight vector \u03b8 \u2208 \u211d^n is initialized arbitrarily. The secondary weight vector w \u2208 \u211d^n is initialized."}
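A simplified numpy sketch of a gradient-TD update with eligibility traces in the spirit of GQ(\u03bb). This is illustrative only: the placement of the importance-sampling ratio in the trace and the handling of the option termination \u03b2 follow one common presentation and may differ from the paper's exact updates.

```python
import numpy as np

def gq_lambda_step(theta, w, e, phi, phi_bar_next, reward, gamma, lam, rho,
                   alpha=0.01, beta=0.005):
    """One gradient-TD update with traces (simplified, illustrative).
    phi: features of (s_t, a_t); phi_bar_next: expected next feature vector
    under the target/option policy; rho: importance ratio pi/b for a_t."""
    delta = reward + gamma * (theta @ phi_bar_next) - theta @ phi  # TD error
    e = rho * (phi + gamma * lam * e)                # decay/accumulate trace
    theta = theta + alpha * (delta * e
                             - gamma * (1 - lam) * (w @ e) * phi_bar_next)
    w = w + beta * (delta * e - (w @ phi) * phi)     # secondary weights
    return theta, w, e
```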
{"_id": "1721801db3a467adf7a10c69f21c21896a80c6dd", "title": "Efficient Program Analyses Using Deductive and Semantic Methodologies", "text": "Program analysis is the process of gathering deeper insights about a source code and analysing them to resolve software problems of arbitrary complexity. The key challenge in program analysis is to keep it fast, precise and straightforward. This research focuses on three key objectives to achieve an efficient program analysis: (i) expressive data representation, (ii) optimised data structure and (iii) fast data processing mechanisms. State of the art technologies such as Resource Description Framework (RDF) as data representation format, triplestores as the storage & processing layer, and datalog to represent program analysis rules are considered in our research. diagram(BDD) to be embedded in the triplestore. Additionally, an ontology is being designed to standardise the definitions of concepts and representation of the knowledge in the program analysis domain."}
{"_id": "483d6d11a8602166ae33898c8f00f44d6ddf7bf6", "title": "Neuroimaging: Decoding mental states from brain activity in humans", "text": "Recent advances in human neuroimaging have shown that it is possible to accurately decode a person's conscious experience based only on non-invasive measurements of their brain activity. Such 'brain reading' has mostly been studied in the domain of visual perception, where it helps reveal the way in which individual experiences are encoded in the human brain. The same approach can also be extended to other types of mental state, such as covert attitudes and lie detection. Such applications raise important ethical issues concerning the privacy of personal thought."}
{"_id": "54e7e6348fc8eb27dd6c34e0afbe8881eeb0debd", "title": "Content-based multimedia information retrieval: State of the art and challenges", "text": "Extending beyond the boundaries of science, art, and culture, content-based multimedia information retrieval provides new paradigms and methods for searching through the myriad variety of media all over the world. This survey reviews 100+ recent articles on content-based multimedia information retrieval and discusses their role in current research directions which include browsing and search paradigms, user studies, affective computing, learning, semantic queries, new features and media types, high performance indexing, and evaluation techniques. Based on the current state of the art, we discuss the major challenges for the future."}
{"_id": "1064641f13e2309ad54dd05ed9ca5d0c09e82376", "title": "Machine learning classifiers and fMRI: A tutorial overview", "text": "Interpreting brain image experiments requires analysis of complex, multivariate data. In recent years, one analysis approach that has grown in popularity is the use of machine learning algorithms to train classifiers to decode stimuli, mental states, behaviours and other variables of interest from fMRI data and thereby show the data contain information about them. In this tutorial overview we review some of the key choices faced in using this approach as well as how to derive statistically significant results, illustrating each point from a case study. Furthermore, we show how, in addition to answering the question of 'is there information about a variable of interest' (pattern discrimination), classifiers can be used to tackle other classes of question, namely 'where is the information' (pattern localization) and 'how is that information encoded' (pattern characterization)."}
{"_id": "fbe829a9ee818b9fe919bad521f0502c6770df89", "title": "Evidence in Management and Organizational Science : Assembling the Field \u2019 s Full Weight of Scientific Knowledge Through Syntheses", "text": "This chapter advocates the good scientific practice of systematic research syntheses in Management and Organizational Science (MOS). A research synthesis is the systematic accumulation, analysis and reflective interpretation of the full body of relevant empirical evidence related to a question. It is the * Corresponding author. Email: rousseau@andrew.cmu.edu 476 \u2022 The Academy of Management Annals critical first step in effective use of scientific evidence. Synthesis is not a conventional literature review. Literature reviews are often position papers, cherrypicking studies to advocate a point of view. Instead, syntheses systematically identify where research findings are clear (and where they aren\u2019t), a key first step to establishing the conclusions science supports. Syntheses are also important for identifying contested findings and productive lines for future research. Uses of MOS evidence, that is, the motives for undertaking a research synthesis include scientific discovery and explanation, improved management practice guidelines, and formulating public policy. We identify six criteria for establishing the evidentiary value of a body of primary studies in MOS. We then pinpoint the stumbling blocks currently keeping the field from making effective use of its ever-expanding base of empirical studies. Finally, this chapter outlines (a) an approach to research synthesis suitable to the domain of MOS; and (b) supporting practices to make synthesis a collective MOS project. Evidence in Management and Organizational Science It is the nature of the object that determines the form of its possible science. (Bhaskar, 1998, p. 3) Uncertain knowledge is better than ignorance. (Mitchell, 2000, p. 9) This chapter is motivated by the failure of Management and Organizational Science (MOS) to date to make full effective use of its available research evidence. Failure to make effective use of scientific evidence is a problem both management scholars and practitioners face. Effective use of evidence, as we employ the term here, means to assemble and interpret a body of primary studies relevant to a question of fact, and then take appropriate action based upon the conclusions drawn. For science, appropriate action might be to direct subsequent research efforts elsewhere if the science is clear, or to recommend a new tact if findings are inconclusive. For practice, appropriate action might begin with a summary of key findings to share with educators, thought leaders, consultants, and the broader practice community. Unfortunately, bodies of evidence in MOS are seldom assembled or interpreted in the systematic fashion needed to permit their confident use. A systematic review of the full body of evidence is the key first step in formulating a science-based conclusion. As a consequence at present, neither the MOS scholar nor the practitioner can readily claim to be well-informed. This lapse has many causes. Two are central to our failure to use MOS evidence well: (1) overvaluing novelty to the detriment of accumulating convergent findings; and (2) the general absence of systematic research syntheses. These two causes are intertwined in that, as we shall show, use of research syntheses ties closely with how a field gauges the value of its research. 
This chapter\u2019s subject, the systematic research synthesis, is not to be confused Evidence in Management and Organizational Science \u2022 477 with a conventional literature review, its less systematic, non-representative counterpart. Systematic research syntheses assemble, analyze and interpret a comprehensive body of evidence in a highly reflective fashion according to six evidentiary criteria we describe. The why, what, and how of research synthesis in MOS is this chapter\u2019s focus. The explosion of management research since World War II has created knowledge products at a rate far outpacing our current capacity for recall, sense-making, and use. In all likelihood, MOS\u2019s knowledge base will continue to expand. We estimate over 200 peer-reviewed journals currently publish MOS research. These diverse outlets reflect the fact that MOS is not a discipline; it is an area of inter-related research activities cutting across numerous disciplines and subfields. The area\u2019s expansion translates into a body of knowledge that is increasingly fragmented (Rousseau, 1997), transdisciplinary (Whitley, 2000), and interdependent with advancements in other social sciences (Tranfield, Denyer & Smart, 2003). The complicated state of MOS research makes it tough to know what we know, especially as specialization spawns research communities that often don\u2019t and sometimes can\u2019t talk with each other."}
{"_id": "2902e0a4b12cf8269bb32ef6a4ebb3f054cd087e", "title": "A Bridging Framework for Model Optimization and Deep Propagation", "text": "Optimizing task-related mathematical model is one of the most fundamental methodologies in statistic and learning areas. However, generally designed schematic iterations may hard to investigate complex data distributions in real-world applications. Recently, training deep propagations (i.e., networks) has gained promising performance in some particular tasks. Unfortunately, existing networks are often built in heuristic manners, thus lack of principled interpretations and solid theoretical supports. In this work, we provide a new paradigm, named Propagation and Optimization based Deep Model (PODM), to bridge the gaps between these different mechanisms (i.e., model optimization and deep propagation). On the one hand, we utilize PODM as a deeply trained solver for model optimization. Different from these existing network based iterations, which often lack theoretical investigations, we provide strict convergence analysis for PODM in the challenging nonconvex and nonsmooth scenarios. On the other hand, by relaxing the model constraints and performing end-to-end training, we also develop a PODM based strategy to integrate domain knowledge (formulated as models) and real data distributions (learned by networks), resulting in a generic ensemble framework for challenging real-world applications. Extensive experiments verify our theoretical results and demonstrate the superiority of PODM against these state-of-the-art approaches."}
{"_id": "6f6f1714af8551f5f18d419f00c8d3411802ee7a", "title": "Fast Alternating LS Algorithms for High Order CANDECOMP/PARAFAC Tensor Factorizations", "text": "CANDECOMP/PARAFAC (CP) has found numerous applications in wide variety of areas such as in chemometrics, telecommunication, data mining, neuroscience, separated representations. For an order- N tensor, most CP algorithms can be computationally demanding due to computation of gradients which are related to products between tensor unfoldings and Khatri-Rao products of all factor matrices except one. These products have the largest workload in most CP algorithms. In this paper, we propose a fast method to deal with this issue. The method also reduces the extra memory requirements of CP algorithms. As a result, we can accelerate the standard alternating CP algorithms 20-30 times for order-5 and order-6 tensors, and even higher ratios can be obtained for higher order tensors (e.g., N \u2265 10). The proposed method is more efficient than the state-of-the-art ALS algorithm which operates two modes at a time (ALSo2) in the Eigenvector PLS toolbox, especially for tensors with order N \u2265 5 and high rank."}
{"_id": "ca8cc908698f1e215f73cf892bd2dd7ad420e787", "title": "Planning With Agents : An Efficient Approach Using Hierarchical Dynamic Decision Networks", "text": "To be useful in solving real world problems, agents need to be able to act in environments in which it may not be possible to be completely aware of the current state and where actions do not always work as planned. Additional complexity is added to the problem when one considers groups of agents working together. By casting the agent planning problem as a partially observable Markov decision problem (POMDP), optimal policies can be generated for partially observable and stochastic environments. Exact solutions, however, are notoriously difficult to find for problems of a realistic nature. We introduce a hierarchical decision network-based planning algorithm that can generate high quality plans during execution while demonstrating significant time savings. We also discuss how this approach is particularly applicable to planning in a multiagent environment as compared to other POMDP-based planning algorithms. We present experimental results comparing our algorithm with results obtained by current POMDP and hierarchical POMDP (HPOMDP) methods."}
{"_id": "318a81acdd15a0ab2f706b5f53ee9d4d5d86237f", "title": "Multi-label learning: a review of the state of the art and ongoing research", "text": "Multi-label learning is quite a recent supervised learning paradigm. Owing to its capabilities to improve performance in problems where a pattern may have more than one associated class, it has attracted the attention of researchers, producing an increasing number of publications. The paper presents an up-todate overview about multi-label learning with the aim of sorting and describing the main approaches developed till now. The formal definition of the paradigm, the analysis of its impact on the literature, its main applications, works developed and ongoing research are presented."}
{"_id": "edd83d028b2ac0a6dce2182a2d9edaac2f573c5c", "title": "Occlusion-capable multiview volumetric three-dimensional display.", "text": "Volumetric 3D displays are frequently purported to lack the ability to reconstruct scenes with viewer-position-dependent effects such as occlusion. To counter these claims, a swept-screen 198-view horizontal-parallax-only 3D display is reported here that is capable of viewer-position-dependent effects. A digital projector illuminates a rotating vertical diffuser with a series of multiperspective 768 x 768 pixel renderings of a 3D scene. Evidence of near-far object occlusion is reported. The aggregate virtual screen surface for a stationary observer is described, as are guidelines to construct a full-parallax system and the theoretical ability of the present system to project imagery outside of the volume swept by the screen."}
{"_id": "9adf319c9aadfbd282244cfda59a140ce0bb7df6", "title": "Ranking Attacks Based on Vulnerability Analysis", "text": "Now that multiple-known attacks can affect one software product at the same time, it is necessary to rank and prioritize those attacks in order to establish a better defense. The purpose of this paper is to provide a set of security metrics to rank attacks based on vulnerability analysis. The vulnerability information is retrieved from a vulnerability management ontology, which integrates commonly used standards like CVE, CWE, CVSS, and CAPEC. Among the benefits of ranking attacks through the method proposed here are: a more effective mitigation or prevention of attack patterns against systems, a better foundation to test software products, and a better understanding of vulnerabilities and attacks."}
{"_id": "d64a414d730e5effd9f3d211bf2ee4aad2fb6001", "title": "Stability analysis of negative impedance converter", "text": "Negative Impedance Converter (NIC) have received much attention to improve the restriction of antenna gain-bandwidth tradeoff relationship. However, a significant problem with NIC is the potential instability of unwanted oscillation due to the presence of positive feedback. For solving this problem, we propose a NIC circuit technique which is stable and has wideband characteristics of negative impedance."}
{"_id": "459ef4fbe6b83247515e1eb9593fb30c4e33a7cc", "title": "Medical robotics\u2014Regulatory, ethical, and legal considerations for increasing levels of autonomy", "text": "The regulatory, ethical, and legal barriers imposed on medical robots necessitate careful consideration of different levels of autonomy, as well as the context for use."}
{"_id": "81d1b5584befd407111958a605c5fa88948900cb", "title": "Activity recognition from acceleration data based on discrete consine transform and SVM", "text": "This paper developed a high-accuracy human activity recognition system based on single tri-axis accelerometer for use in a naturalistic environment. This system exploits the discrete cosine transform (DCT), the Principal Component Analysis (PCA) and Support Vector Machine (SVM) for classification human different activity. First, the effective features are extracted from accelerometer data using DCT. Next, feature dimension is reduced by PCA in DCT domain. After implementing the PCA, the most invariant and discriminating information for recognition is maintained. As a consequence, Multi-class Support Vector Machines is adopted to distinguish different human activities. Experiment results show that the proposed system achieves the best accuracy is 97.51%, which is better than other approaches."}
{"_id": "1932ebba4012827bf6172bc03a698ec86f38cfe9", "title": "Language Problems and ADHD Symptoms: How Specific Are the Links?", "text": "Symptoms of inattention and hyperactivity frequently co-occur with language difficulties in both clinical and community samples. We explore the specificity and strength of these associations in a heterogeneous sample of 254 children aged 5 to 15 years identified by education and health professionals as having problems with attention, learning and/or memory. Parents/carers rated pragmatic and structural communication skills and behaviour, and children completed standardised assessments of reading, spelling, vocabulary, and phonological awareness. A single dimension of behavioural difficulties including both hyperactivity and inattention captured behaviour problems. This was strongly and negatively associated with pragmatic communication skills. There was less evidence for a relationship between behaviour and language structure: behaviour ratings were more weakly associated with the use of structural language in communication, and there were no links with direct measures of literacy. These behaviour problems and pragmatic communication difficulties co-occur in this sample, but impairments in the more formal use of language that impact on literacy and structural communication skills are tied less strongly to behavioural difficulties. One interpretation is that impairments in executive function give rise to both behavioural and social communication problems, and additional or alternative deficits in other cognitive abilities impact on the development of structural language skills."}
{"_id": "d6eb9e15aa32938675ea42c76f8c818261449577", "title": "Soft tissue cephalometric analysis: diagnosis and treatment planning of dentofacial deformity.", "text": "This article will present a new soft tissue cephalometric analysis tool. This analysis may be used by the orthodontist and surgeon as an aid in diagnosis and treatment planning. The analysis is a radiographic instrument that was developed directly from the philosophy expressed in Arnett and Bergman \"Facial keys to orthodontic diagnosis and treatment planning, Parts I and II\" (Am J Orthop Dentofacial Orthod 1993; 103:299-312 and 395-411). The novelty of this approach, as with the \"Facial Keys\" articles, is an emphasis on soft tissue facial measurement."}
{"_id": "8bde3d5f265759e3cdcff80d55e5a0a6bb1b74d3", "title": "A DTN-Based Sensor Data Gathering for Agricultural Applications", "text": "This paper presents our field experience in data collection from remote sensors. By letting tractors, farmers, and sensors have short-range radio communication devices with delay-disruption tolerant networking (DTN), we can collect data from those sensors to our central database. Although, several implementations have been made with cellular phones or mesh networks in the past, DTN-based systems for such applications are still under explored. The main objective of this paper is to present our practical implementation and experiences in DTN-based data collection from remote sensors. The software, which we have developed for this research, has about 50 kbyte footprint, which is much smaller than any other DTN implementation. We carried out an experiment with 39 DTN nodes at the University of Tokyo assuming an agricultural scenario. They achieved 99.8% success rate for data gathering with moderate latency, showing sufficient usefulness in data granularity."}
{"_id": "e16f74df073ae7c8a29e07ed824a438580ea8534", "title": "Detection of Unhealthy Region of plant leaves using Neural Network", "text": "A leaf is an organ of vascular plant and is the principal lateral appendage of the stem. Each leaf has a set of features that differentiate it from the other leaves, such as margin and shape. This work proposes a comparison of supervised plant leaves classification using different approaches, based on different representations of these leaves, and the chosen algorithm. Beginning with the representation of leaves, we presented leaves by a fine-scale margin feature histogram, by a Centroid Contour Distance Curve shape signature, or by an interior texture feature histogram in 64 element vector for each one, after we tried different combination among these features to optimize results. We classified the obtained vectors. Then we evaluate the classification using cross validation. The obtained results are very interesting and show the importance of each feature. The classification of plant leaf images with biometric features. Traditionally, the trained taxonomic perform this process by following various tasks. The taxonomic usually classify the plants based on flowering and associative phenomenon. It was found that this process was time consuming and difficult. The biometric features of plants leaf like venation make this classification easy. Leaf biometric feature are analyzed using computer based method like morphological feature analysis and artificial neural network and Naive bayes based classifier. KNN classification model take input as the leaf venation morphological feature and classify them into four different species. The result of this classification based on leaf venation is achieved 96.53% accuracy in the training of the model for classification of leaves provide the 91% accuracy in testing to classify the leaf images. Keyword: Artificial neural network, K-NN Classification, Leaf venation pattern, Morphological Features"}
{"_id": "65616754cbd08935c0f17c2f34258670b61a5fc9", "title": "Earlier Identification of Children with Autism Spectrum Disorder: An Automatic Vocalisation-Based Approach", "text": "Autism spectrum disorder (ASD) is a neurodevelopmental disorder usually diagnosed in or beyond toddlerhood. ASD is defined by repetitive and restricted behaviours, and deficits in social communication. The early speech-language development of individuals with ASD has been characterised as delayed. However, little is known about ASD-related characteristics of pre-linguistic vocalisations at the feature level. In this study, we examined pre-linguistic vocalisations of 10-month-old individuals later diagnosed with ASD and a matched control group of typically developing individuals (N = 20). We segmented 684 vocalisations from parent-child interaction recordings. All vocalisations were annotated and signal-analytically decomposed. We analysed ASD-related vocalisation specificities on the basis of a standardised set (eGeMAPS) of 88 acoustic features selected for clinical speech analysis applications. 54 features showed evidence for a differentiation between vocalisations of individuals later diagnosed with ASD and controls. In addition, we evaluated the feasibility of automated, vocalisation-based identification of individuals later diagnosed with ASD. We compared linear kernel support vector machines and a 1-layer bidirectional long short-term memory neural network. Both classification approaches achieved an accuracy of 75% for subject-wise identification in a subject-independent 3-fold cross-validation scheme. Our promising results may be an important contribution en-route to facilitate earlier identification of ASD."}
{"_id": "d0f22d6e7b03d7fa6d958c71bb14438ba3af52be", "title": "High sigma measurement of random threshold voltage variation in 14nm Logic FinFET technology", "text": "Random variation of threshold voltage (Vt) in MOSFETs plays a central role in determining the minimum operating voltage of products in a given process technology. Properly characterizing Vt variation requires a large volume of measurements of minimum size devices to understand the high sigma behavior. At the same time, a rapid measurement approach is required to keep the total measurement time practical. Here we describe a new test structure and measurement approach that enables practical characterization of Vt distributions to high sigma and its application to 14nm Logic FinFET technology. We show that both NMOS and PMOS single fin devices have very low random Vt variation of 19mV and 24mV respectively, normally distributed out to +/-5\u03c3."}
{"_id": "480d0ca73e0a7acd5f48e8e0139776ad96e9f57b", "title": "Medicinal and pharmaceutical uses of seaweed natural products: A review", "text": "In the last three decades the discovery of metabolites with biological activities from macroalgae has increased significantly. However, despite the intense research effort by academic and corporate institutions, very few products with real potential have been identified or developed. Based on Silverplatter MEDLINE and Aquatic Biology, Aquaculture & Fisheries Resources databases, the literature was searched for natural products from marine macroalgae in the Rhodophyta, Phaeophyta and Chlorophyta with biological and pharmacological activity. Substances that currently receive most attention from pharmaceutical companies for use in drug development, or from researchers in the field of medicine-related research include: sulphated polysaccharides as antiviral substances, halogenated furanones from Delisea pulchra as antifouling compounds, and kahalalide F from a species of Bryopsis as a possible treatment of lung cancer, tumours and AIDS. Other substances such as macroalgal lectins, fucoidans, kainoids and aplysiatoxins are routinely used in biomedical research and a multitude of other substances have known biological activities. The potential pharmaceutical, medicinal and research applications of these compounds are discussed."}
{"_id": "b4b44a6bca2cc9d4c5efb41b14a6585fc4872637", "title": "Black cattle body shape and temperature measurement using thermography and KINECT sensor", "text": "Black cattle body shape and temperature measurement system is introduced. It is important to evaluate the quality of Japanese black cattle periodically during their growth process. Not only the weight and size of cattle, but also the posture, shape, and temperature need to be tracked as primary evaluation criteria. In the present study, KINECT sensor and thermal camera obtains the body shape and its temperature. The whole system is calibrated to operate in a common coordinate system. Point cloud data are obtained from different angles and reconstructed in a computer. The thermal data are captured too. Both point cloud data and thermal information are combined by considering the orientation of the cow. The collected information is used to evaluate and estimate cattle conditions."}
{"_id": "5dca5aa024f513801a53d9738161b8a01730d395", "title": "Adaptive Mobile Robot Navigation and Mapping", "text": "The task of building a map of an unknown environment and concurrently using that map to navigate is a central problem in mobile robotics research. This paper addresses the problem of how to perform concurrent mapping and localization (CML) adaptively using sonar. Stochastic mapping is a feature-based approach to CML that generalizes the extended Kalman lter to incorporate vehicle localization and environmental mapping. We describe an implementation of stochastic mapping that uses a delayed nearest neighbor data association strategy to initialize new features into the map, match measurements to map features, and delete out-of-date features. We introduce a metric for adaptive sensing which is de ned in terms of Fisher information and represents the sum of the areas of the error ellipses of the vehicle and feature estimates in the map. Predicted sensor readings and expected dead-reckoning errors are used to estimate the metric for each potential action of the robot, and the action which yields the lowest cost (i.e., the maximum information) is selected. This technique is demonstrated via simulations, in-air sonar experiments, and underwater sonar experiments. Results are shown for 1) adaptive control of motion and 2) adaptive control of motion and scanning. The vehicle tends to explore selectively di erent objects in the environment. The performance of this adaptive algorithm is shown to be superior to straight-line motion and random motion."}
{"_id": "79bb020994c837be3c0ea5746a8e7a7c864c9170", "title": "A comparison of track-to-track fusion algorithms for automotive sensor fusion", "text": "In exteroceptive automotive sensor fusion, sensor data are usually only available as processed, tracked object data and not as raw sensor data. Applying a Kalman filter to such data leads to additional delays and generally underestimates the fused objectspsila covariance due to temporal correlations of individual sensor data as well as inter-sensor correlations. We compare the performance of a standard asynchronous Kalman filter applied to tracked sensor data to several algorithms for the track-to-track fusion of sensor objects of unknown correlation, namely covariance union, covariance intersection, and use of cross-covariance. For the simulation setup used in this paper, covariance intersection and use of cross-covariance turn out to yield significantly lower errors than a Kalman filter at a comparable computational load."}
{"_id": "add441844e5720d85058baf77c845a831d9d2a47", "title": "A Reverse Approach to Named Entity Extraction and Linking in Microposts", "text": "In this paper, we present a pipeline for named entity extraction and linking that is designed specifically for noisy, grammatically inconsistent domains where traditional named entity techniques perform poorly. Our approach leverages a large knowledge base to improve entity recognition, while maintaining the use of traditional NER to identify mentions that are not co-referent with any entities in the knowledge base."}
{"_id": "a053423b39a990c61a4805d189c75a89f8986976", "title": "The Study of a Dual-Mode Ring Oscillator", "text": "An analytical investigation of a dual-mode ring oscillator is presented. The ring oscillator is designed using a CMOS 0.18-\u03bcm technology with differential eight-stage delay cells employing auxiliary input devices. With a proper startup control, the oscillator operates in two different modes covering two different frequency bands. A nonlinear model, along with the linearization method, is used to obtain the transient and steady-state behaviors of the dual-mode ring oscillator. The analytical derivations are verified through HSPICE simulation. The oscillator operates at the frequency bands from 2-5 GHz and from 0.1-2 GHz, respectively."}
{"_id": "87094ebab924f893160716021a8a5bc645b3ff1f", "title": "Deep Learning for Wind Speed Forecasting in Northeastern Region of Brazil", "text": "Deep Learning is one of the latest approaches in the field of artificial neural networks. Since they were first proposed in mid-2006, Deep Learning models have obtained state-of-art results in some problems with classification and pattern recognition. However, such models have been little used in time series forecasting. This work aims to investigate the use of some of these architectures in this kind of problem, specifically in predicting the hourly average speed of winds in the Northeastern region of Brazil. The results showed that Deep Learning offers a good alternative for performing this task, overcoming some results of previous works."}
{"_id": "3699fbf9468dadd3ebb1069bbeb5a359cd725f3e", "title": "Characterizing large-scale use of a direct manipulation application in the wild", "text": "Examining large-scale, long-term application use is critical to understanding how an application meets the needs of its user community. However, there have been few published analyses of long-term use of desktop applications, and none that have examined applications that support creating and modifying content using direct manipulation. In this paper, we present an analysis of 2 years of usage data from an instrumented version of the GNU Image Manipulation Program, including data from over 200 users. In the course of our analysis, we show that previous findings concerning the sparseness of command use and idiosyncrasy of users\u2019 command vocabularies extend to a new domain and interaction style. These findings motivate continued research in adaptive and mixed-initiative interfaces. We also describe the novel application of a clustering technique to characterize a user community\u2019s higher-level tasks from low-level logging data."}
{"_id": "5eb1e4bb87b0d99d62f171f1eede90c98bf266ab", "title": "On traveling path and related problems for a mobile station in a rechargeable sensor network", "text": "Wireless power transfer is a promising technology to fundamentally address energy problems in a wireless sensor network. To make such a technology work effectively, a vehicle is needed to carry a charger to travel inside the network. On the other hand, it has been well recognized that a mobile base station offers significant advantages over a fixed one. In this paper, we investigate an interesting problem of co-locating the mobile base station on the wireless charging vehicle. We study an optimization problem that jointly optimizes traveling path, stopping points, charging schedule, and flow routing. Our study is carried out in two steps. First, we study an idealized problem that assumes zero traveling time, and develop a provably near-optimal solution to this idealized problem. In the second step, we show how to develop a practical solution with non-zero traveling time and quantify the performance gap between this solution and the unknown optimal solution to the original problem."}
{"_id": "505ddb0a2a4d9742726bf4c40a7662e58799b945", "title": "An LLCL Power Filter for Single-Phase Grid-Tied Inverter", "text": "This paper presents a new topology of higher order power filter for grid-tied voltage-source inverters, named the LLCL filter, which inserts a small inductor in the branch loop of the capacitor in the traditional LCL filter to compose a series resonant circuit at the switching frequency. Particularly, it can attenuate the switching-frequency current ripple components much better than an LCL filter, leading to a decrease in the total inductance and volume. Furthermore, by decreasing the inductance of a grid-side inductor, it raises the characteristic resonance frequency, which is beneficial to the inverter system control. The parameter design criteria of the proposed LLCL filter is also introduced. The comparative analysis and discussions regarding the traditional LCL filter and the proposed LLCL filter have been presented and evaluated through experiment on a 1.8-kW-single-phase grid-tied inverter prototype."}
{"_id": "c5e78e8a975e31fb6200bf33275152ccdca2b1dc", "title": "Development of a SiC JFET-Based Six-Pack Power Module for a Fully Integrated Inverter", "text": "In this paper, a fully integrated silicon carbide (SiC)-based six-pack power module is designed and developed. With 1200-V, 100-A module rating, each switching element is composed of four paralleled SiC junction gate field-effect transistors (JFETs) with two antiparallel SiC Schottky barrier diodes. The stability of the module assembly processes is confirmed with 1000 cycles of -40\u00b0C to +200\u00b0C thermal shock tests with 1.3\u00b0C/s temperature change. The static characteristics of the module are evaluated and the results show 55 m\u03a9 on-state resistance of the phase leg at 200\u00b0C junction temperature. For switching performances, the experiments demonstrate that while utilizing a 650-V voltage and 60-A current, the module switching loss decreases as the junction temperature increases up to 150\u00b0C. The test setup over a large temperature range is also described. Meanwhile, the shoot-through influenced by the SiC JFET internal capacitance as well as package parasitic inductances are discussed. Additionally, a liquid cooled three-phase inverter with 22.9 cm \u00d7 22.4 cm \u00d7 7.1 cm volume and 3.53-kg weight, based on this power module, is designed and developed for electric vehicle and hybrid electric vehicle applications. A conversion efficiency of 98.5% is achieved at 10 kHz switching frequency at 5 kW output power. The inverter is evaluated with coolant temperature up to 95\u00b0C successfully."}
{"_id": "390daa205c45d196d62f0b89c30e6b8164fb13a8", "title": "Stability of photovoltaic and wind turbine grid-connected inverters for a large set of grid impedance values", "text": "The aim of this paper is to analyze the stability problems of grid connected inverters used in distributed generation. Complex controllers (e.g., multiple rotating dq-frames or resonant-based) are often required to compensate low frequency grid voltage background distortion and an LCL-filter is usually adopted for the high frequency one. The possible wide range of grid impedance values (distributed generation is suited for remote areas with radial distribution plants) challenge the stability and the effectiveness of the LCL-filter-based current controlled system. It has been found out and it will be demonstrated in this paper that the use of active damping helps to stabilise the system in respect to many different kinds of resonances. The use of active damping results in an easy plug-in feature of the generation system in a vast range of grid conditions and in a more flexible operation of the overall system able to manage sudden grid changes. In the paper, a vast measurement campaign made on a single-phase system and on a three-phase system used as scale prototypes for photovoltaic and wind turbines, respectively, validate the analysis."}
{"_id": "7108df07165ae5faca7122fe11f3a62c95a4e919", "title": "10 kV/120 A SiC DMOSFET half H-bridge power modules for 1 MVA solid state power substation", "text": "In this paper, the extension of SiC power technology to higher voltage 10 kV/10 A SiC DMOSFETs and SiC JBS diodes is discussed. A new 10 kV/120 A SiC power module using these 10 kV SiC devices is also described which enables a compact 13.8 kV to 465/\u221a3 solid state power substation (SSPS) rated at 1 MVA."}
{"_id": "7278b8287c5ccd3334c4fa67e82d368c94a5af21", "title": "Overview of Control and Grid Synchronization for Distributed Power Generation Systems", "text": "Renewable energy sources like wind, sun, and hydro are seen as a reliable alternative to the traditional energy sources such as oil, natural gas, or coal. Distributed power generation systems (DPGSs) based on renewable energy sources experience a large development worldwide, with Germany, Denmark, Japan, and USA as leaders in the development in this field. Due to the increasing number of DPGSs connected to the utility network, new and stricter standards in respect to power quality, safe running, and islanding protection are issued. As a consequence, the control of distributed generation systems should be improved to meet the requirements for grid interconnection. This paper gives an overview of the structures for the DPGS based on fuel cell, photovoltaic, and wind turbines. In addition, control structures of the grid-side converter are presented, and the possibility of compensation for low-order harmonics is also discussed. Moreover, control strategies when running on grid faults are treated. This paper ends up with an overview of synchronization methods and a discussion about their importance in the control"}
{"_id": "c7a9a6f421c7ee9390150aa0b48e6a35761e1cd6", "title": "Facial recognition using histogram of gradients and support vector machines", "text": "Face recognition is widely used in computer vision and in many other biometric applications where security is a major concern. The most common problem in recognizing a face arises due to pose variations, different illumination conditions and so on. The main focus of this paper is to recognize whether a given face input corresponds to a registered person in the database. Face recognition is done using Histogram of Oriented Gradients (HOG) technique in AT & T database with an inclusion of a real time subject to evaluate the performance of the algorithm. The feature vectors generated by HOG descriptor are used to train Support Vector Machines (SVM) and results are verified against a given test input. The proposed method checks whether a test image in different pose and lighting conditions is matched correctly with trained images of the facial database. The results of the proposed approach show minimal false positives and improved detection accuracy."}
{"_id": "97d6b836cf64aa83c0178448b4e634826e3cc4c4", "title": "Straight to the Tree: Constituency Parsing with Neural Syntactic Distance", "text": "In this work, we propose a novel constituency parsing scheme. The model predicts a vector of real-valued scalars, named syntactic distances, for each split position in the input sentence. The syntactic distances specify the order in which the split points will be selected, recursively partitioning the input, in a top-down fashion. Compared to traditional shiftreduce parsing schemes, our approach is free from the potential problem of compounding errors, while being faster and easier to parallelize. Our model achieves competitive performance amongst single model, discriminative parsers in the PTB dataset and outperforms previous models"}
{"_id": "47f81a0fe08310cc732fbd6ad16ab9da323f395c", "title": "Performance Analysis of Software-Defined Networking (SDN)", "text": "Software-Defined Networking (SDN) approaches were introduced as early as the mid-1990s, but just recently became a well-established industry standard. Many network architectures and systems adopted SDN, and vendors are choosing SDN as an alternative to the fixed, predefined, and inflexible protocol stack. SDN offers flexible, dynamic, and programmable functionality of network systems, as well as many other advantages such as centralized control, reduced complexity, better user experience, and a dramatic decrease in network systems and equipment costs. However, SDN characterization and capabilities, as well as workload of the network traffic that the SDN-based systems handle, determine the level of these advantages. Moreover, the enabled flexibility of SDN-based systems comes with a performance penalty. The design and capabilities of the underlying SDN infrastructure influence the performance of common network tasks, compared to a dedicated solution. In this paper we analyze two issues: a) the impact of SDN on raw performance (in terms of throughput and latency) under various workloads, and b) whether there is an inherent performance penalty for a complex, more functional, SDN infrastructure. Our results indicate that SDN does have a performance penalty, however, it is not necessarily related to the complexity level of the underlying SDN infrastructure."}
{"_id": "30be3f9d1ffcf94f2a7f8819a3dfe93e7f161050", "title": "Applications of Lucid Dreaming in Sports", "text": "The following article has above all a practical orientation. The various possibilities for the application of lucid dreaming in sports training are presented and briefly illustrated. These theses are based on findings from experiments with experienced lucid dreamers who were instructed to carry out various routine and sport-related actions while lucid-dreaming, with the object of observing the effects on both dreaming and waking states (Tholey, 1981a). Additionally, I report here on numerous spontaneous accounts of athletes who have mastered lucid dreaming, as well as my own years of experience as a lucid dreamer and as an active competitor in different types of sports."}
{"_id": "140e049649229cc317799b512de18857ce09a505", "title": "A Review of Heuristic Global Optimization Based Artificial Neural Network Training Approahes", "text": "Received Nov 5, 2016 Revised Jan 9, 2017 Accepted Feb 18, 2017 Artificial Neural Networks have earned popularity in recent years because of their ability to approximate nonlinear functions. Training a neural network involves minimizing the mean square error between the target and network output. The error surface is nonconvex and highly multimodal. Finding the minimum of a multimodal function is a NP complete problem and cannot be solved completely. Thus application of heuristic global optimization algorithms that computes a good global minimum to neural network training is of interest. This paper reviews the various heuristic global optimization algorithms used for training feedforward neural networks and recurrent neural networks. The training algorithms are compared in terms of the learning rate, convergence speed and accuracy of the output produced by the neural network. The paper concludes by suggesting directions for novel ANN training algorithms based on recent advances in global optimization. Keyword:"}
{"_id": "a6507332b400340cce408e0be948c118d41f440a", "title": "Nursing bedside clinical handover - an integrated review of issues and tools.", "text": "AIMS AND OBJECTIVES\nThis article reviews the available literature that supports implementing bedside clinical handover in nursing clinical practice and then seeks to identify key issues if any.\n\n\nBACKGROUND\nClinical handover practices are recognised as being an essential component in the effective transfer of clinical care between health practitioners. It is recognised that the point where a patient is 'handed over' from one clinician to another is significant in maintaining continuity of care and that doing this poorly can have significant safety issues for the patient.\n\n\nDESIGN\nAn integrated literature review.\n\n\nMETHOD\nA literature review of 45 articles was undertaken to understand bedside clinical handover and the issues related to the implementation of this process.\n\n\nRESULTS\nIt was identified that there are a number of clinical handover mnemonics available that provide structure to the process and that areas such as confidentiality, inclusion of the patient/carer and involving the multidisciplinary team remain topical issues for practitioners in implementing good clinical handover practices.\n\n\nCONCLUSIONS\nThis literature review identified a lack of literature available about the transfer of responsibility and accountability during clinical handover and auditing practices of the clinical handover process. The nurses were more concerned about confidentiality issues than were patients. The use of a structured tool was strongly supported; however, no one singular tool was considered suitable for all clinical areas.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nNursing clinicians seeking to implement best practice within their professional speciality should consider some of the issues raised within this article and seek to address these issues by developing strategies to overcome them."}
{"_id": "d77ea9a5082081ab0e69f1dfda40546ea332ec3d", "title": "Forward Error Correction for DNA Data Storage", "text": "We report on a strong capacity boost in storing digital data in synthetic DNA. In principle, synthetic DNA is an ideal media to archive digital data for very long times because the achievable data density and longevity outperforms today\u2019s digital data storage media by far. On the other hand, neither the synthesis, nor the amplification and the sequencing of DNA strands can be performed error-free today and in the foreseeable future. In order to make synthetic DNA available as digital data storage media, specifically tailored forward error correction schemes have to be applied. For the purpose of realizing a DNA data storage, we have developed an efficient and robust forwarderror-correcting scheme adapted to the DNA channel. We based the design of the needed DNA channel model on data from a proof-of-concept conducted 2012 by a team from the Harvard Medical School [1]. Our forward error correction scheme is able to cope with all error types of today\u2019s DNA synthesis, amplification and sequencing processes, e.g. insertion, deletion, and swap errors. In a successful experiment, we were able to store and retrieve error-free 22 MByte of digital data in synthetic DNA recently. The found residual error probability is already in the same order as it is in hard disk drives and can be easily improved further. This proves the feasibility to use synthetic DNA as longterm digital data storage media."}
{"_id": "f07744725049c566f2f6916b79dc9b7ed8a8c6e1", "title": "CLIP: Continuous Location Integrity and Provenance for Mobile Phones", "text": "Many location-based services require a mobile user to continuously prove his location. In absence of a secure mechanism, malicious users may lie about their locations to get these services. Mobility trace, a sequence of past mobility points, provides evidence for the user's locations. In this paper, we propose a Continuous Location Integrity and Provenance (CLIP) Scheme to provide authentication for mobility trace, and protect users' privacy. CLIP uses low-power inertial accelerometer sensor with a light-weight entropy-based commitment mechanism and is able to authenticate the user's mobility trace without any cost of trusted hardware. CLIP maintains the user's privacy, allowing the user to submit a portion of his mobility trace with which the commitment can be also verified. Wireless Access Points (APs) or colocated mobile devices are used to generate the location proofs. We also propose a light-weight spatial-temporal trust model to detect fake location proofs from collusion attacks. The prototype implementation on Android demonstrates that CLIP requires low computational and storage resources. Our extensive simulations show that the spatial-temporal trust model can achieve high (> 0.9) detection accuracy against collusion attacks."}
{"_id": "1398f5aaaa5abfeef9dc1dd67d323b8004b0e951", "title": "Security and privacy in mobile crowdsourcing networks: challenges and opportunities", "text": "The mobile crowdsourcing network (MCN) is a promising network architecture that applies the principles of crowdsourcing to perform tasks with human involvement and powerful mobile devices. However, it also raises some critical security and privacy issues that impede the application of MCNs. In this article, in order to better understand these critical security and privacy challenges, we first propose a general architecture for a mobile crowdsourcing network comprising both crowdsourcing sensing and crowdsourcing computing. After that, we set forth several critical security and privacy challenges that essentially capture the characteristics of MCNs. We also formulate some research problems leading to possible research directions. We expect this work will bring more attention to further investigation on security and privacy solutions for mobile crowdsourcing networks."}
{"_id": "4bb507d3d74354ecab6dad25f211bbb856bbb392", "title": "DAC-MACS: Effective Data Access Control for Multiauthority Cloud Storage Systems", "text": "Data access control is an effective way to ensure data security in the cloud. However, due to data outsourcing and untrusted cloud servers, the data access control becomes a challenging issue in cloud storage systems. Existing access control schemes are no longer applicable to cloud storage systems, because they either produce multiple encrypted copies of the same data or require a fully trusted cloud server. Ciphertext-policy attribute-based encryption (CP-ABE) is a promising technique for access control of encrypted data. However, due to the inefficiency of decryption and revocation, existing CP-ABE schemes cannot be directly applied to construct a data access control scheme for multiauthority cloud storage systems, where users may hold attributes from multiple authorities. In this paper, we propose data access control for multiauthority cloud storage (DAC-MACS), an effective and secure data access control scheme with efficient decryption and revocation. Specifically, we construct a new multiauthority CP-ABE scheme with efficient decryption, and also design an efficient attribute revocation method that can achieve both forward security and backward security. We further propose an extensive data access control scheme (EDAC-MACS), which is secure under weaker security assumptions."}
{"_id": "b56d065e56866dce8ceaca4af78a8f058aecd6a7", "title": "Learning Long Term Dependencies via Fourier Recurrent Units", "text": "It is a known fact that training recurrent neural networks for tasks that have long term dependencies is challenging. One of the main reasons is the vanishing or exploding gradient problem, which prevents gradient information from propagating to early layers. In this paper we propose a simple recurrent architecture, the Fourier Recurrent Unit (FRU), that stabilizes the gradients that arise in its training while giving us stronger expressive power. Specifically, FRU summarizes the hidden states h along the temporal dimension with Fourier basis functions. This allows gradients to easily reach any layer due to FRU\u2019s residual learning structure and the global support of trigonometric functions. We show that FRU has gradient lower and upper bounds independent of temporal dimension. We also show the strong expressivity of sparse Fourier basis, from which FRU obtains its strong expressive power. Our experimental study also demonstrates that with fewer parameters the proposed architecture outperforms other recurrent architectures on many tasks."}
{"_id": "2230e4f5abfe76608fd257baca9eea043e3f1750", "title": "The Role of Classification in Knowledge Representation and Discovery", "text": "The link between classification and knowledge is explored. Classification schemes have properties that enable the representation of entities and relationships in structures that reflect knowledge of the domain being classified. The strengths and limitations of four classificatory approaches are described in terms of their ability to reflect, discover, and create new knowledge. These approaches are hierarchies, trees, paradigms, and faceted analysis. Examples are provided of the way in which knowledge and the classification process affect each other. INTRODUCTION Developments in our ability to store and retrieve large amounts of' information have stimulated an interest in new ways to exploit this information for advancing human knowledge. This article describes the relationship between knowledge representation (as manifested in classifications) and the processes of knowledge discovery and creation. How does the classification process enable or constrain knowing something or discovering new knowledge about something? In what ways might we develop classifications that will enhance our ability to discover meaningful information in our data stores? The first part of the article describes several representative classificatory structures-hierarchies, trees, paradigms, and faceted analysis with the aim of identifying how these structures serve as knowledge representations and in what ways they can be used for knowledge discovery and creation. The second part of the discussion includes examples from existing classification schemes and discusses how the schemes reflect or fail to reflect knowledge. KNOWLEDGE, THEORY, AND CLASSIFICATION Scholars in many fields, from philosophy to cybernetics, have long discussed the concept of knowledge and the problems of representing knowledge in information systems. The distinction is drawn between merely observing, perceiving, or even describing things and truly knowing them. To know implies a process of integration of facts about objects and the context in which the objects and processes exist. Even in colloquial usage, knowledge about someone or something is always expressed in terms of deep relationships and meanings as well as its place in time and space. To know cars means not only understanding car mechanics but also knowledge of the interplay of the mechanical processes and perhaps even factors such as aesthetics, economics, and psychology. The process of knowledge discovery and creation in science has traditionally followed the path of systematic exploration, observation, description, analysis, and synthesis and testing of phenomena and facts, all conducted within the communication framework of a particular research community with its accepted methodology and set of techniques. We know the process is not entirely rational but often is sparked and then fueled by insight, hunches, and leaps of faith (Bronowski, 1978). Moreover, research is always conducted within a particular political and cultural reality (Olson, 1998). Each researcher and, on a larger scale, each research community at various points must gather up the disparate pieces and in some way communicate what is known, expressing it in such a way as to be useful for further discovery and understanding. A variety of formats exist for the expression of knowledge e.g., theories, models, formulas, descriptive reportage of many sorts, and polemical essays. 
Of these formats, science particularly values theories and models because they are a \u201csymbolic dimension of experience as opposed to the apprehension of brute fact\u201d (Kaplan, 1963, p. 294) and can therefore be symbolically extended to cover new experiences. A theory thus explains a particular fact by abstracting the relationship of that fact to other facts. Grand, or covering, theories explain facts in an especially eloquent way and in a very wide (some would say, universal) set of situations. Thus, Darwinian, Marxist, or Freudian theories, for example, attempt to explain processes and behaviors in many contexts, but they do so at a high level of abstraction. There are relatively few grand theories, however, and we rely on the explanatory and descriptive usefulness of more \u201clocal\u201d theoriestheories that explain a more limited domain but with greater specificity. CLASSIFICATION AS KNOWLEDGE REPRESENTATION How are theories built? How does knowledge accumulate and then get shaped into a powerful representation? There are, of course, many processes involved, but often one of them is the process of classification. Classification is the meaningful clustering of experience. The process of classification can be used in a formative way and is thus useful during the preliminary stages of inquiry as a heuristic tool in discovery, analysis, and theorizing (Davies, 1989). Once concepts gel and the relationships among concepts become understood, a classification can be used as a rich representation of what is known and is thus useful in communication and in generating a fresh cycle of exploration, comparison, and theorizing. Kaplan (1963) states that \u201ctheory is not the aggregate of the new laws but their connectedness, as a bridge consists of girders only in that the girders are joined together in a particular way\u201d (p. 297). A good classification functions in much the same way that a theory does, connecting concepts in a useful structure. If successful, it is, like a theory, descriptive, explanatory, heuristic, fruitful, and perhaps also elegant, parsimonious, and robust (Kwasnik, l992b). There are many approaches to the process of classification and to the construction of the foundation of classification schemes. Each kind of classification process has different goals, and each type of classification scheme has different structural properties as well as different strengths and weaknesses in terms of knowledge representation and knowledge discovery. The following is a representative sample of some common approaches and structures. HIERARCHIES We have inherited our understanding of hierarchical classifications from Aristotle (Ackrill, 1963), who posited that all nature comprised a unified whole. The whole could be subdivided, like a chicken leg at the joint, into \u201cnatural\u201d classes, and each class further into subclasses, and so on-this process following an orderly and systematic set of rules of association and distinction. How do we know what a natural dividing place is, and how do we arrive at the rules for division and subdivision? According to Aristotle, only exhaustive observation can reveal each entity\u2019s true (essential) attributes, and only philosophy can guide us in determining the necessary and sufficient attributes for membership in any given class. In fact, according to Aristotle\u2019s philosophy, it is only when an entity is properly classed, and its essential properties identified, that we can say we truly know it. 
This is the aim of science, he claims-i.e., to unambiguously classify all phenomena by their essential (true) qualities. While Aristotle\u2019s legacy is alive in spirit in modern applications of classification, most practitioners recognize that a pure and complete hierarchy is essentially possible only in the ideal. Nevertheless, in knowledge domains that have theoretical foundations (such as germ theory in medicine and the theory of evolution in biology), hierarchies are the preferred structures for knowledge representation (see, for example, the excerpt from the Medical Subject Headings [MeSH] in Figure 1. Based on the MeSH excerpt in Figure 1, note that hierarchies have strict structural requirements: Inclusiveness: The top class (in this case, EYEDISEASES) is the most inclusive class and describes the domain of the classification. The top class includes all its subclasses and sub-subclasses. Put another way, all the classes in the example are included in the top class: EYEDISEASES. Species/differentia: A true hierarchy has only one type of relationship between its superand subclasses and this is the generic relationship, also known as species/differentia, or more colloquially as the is-a relationship. In a generic relationship, ALLERGIC CONJUNCTIVITIS is a kind of CONJUNCTIVITIS, which in turn is a kind of CONJUNCTIVAL DISEASE, which in turn is a kind of EYEDISEASE. Inheritance: This requirement of strict class inclusion ensures that everything that is true for entities in any given class is also true for entities in its subclasses and sub-subclasses. Thus whatever is true of EYE DISEASES (as a whole) is also true of CONJUNCTIVAL Whatever is true of CONJUNCTIVAL DISEASES (as a whole) is also true of CONJUNCTIVITIS, and so on. This property is called inheritance, that is, attributes are inherited by a subclass from its superclass. Transitivity: Since attributes are inherited, all sub-subclasses are members of not only their immediate superclass but of every superclass above that one. Thus if BACTERIAL CONJUNCTIVITIS is a kind Of CONJUNCTIVITIS, and CONJUNCTIVITIS is a kind Of CONJUNCTIVAL DISEASE, then, by the rules of transitivity, BACTERIAL CONJUNCTIVITIS is also a kind of CONJUNCTIVAL DISEASE, and so on. This property is called transitivity. Systematic and predictable rules for association and distinction: The rules for grouping entities in a class (i.e., creating a species) are determined beforehand, as are the rules for creating distinct subclasses (differentia). Thus all entities in a given class are like each other in some predictable (and predetermined) way, and these entities differ from entities in sibling classes in some predictable (and predetermined) way. In the example above, CONJUNCTIVAL and CORNEAL DISEASES are alike in that they are both kinds of EYEDISEASES. They are differentiated from each other along some predictable and systematic criterion of distinction (in this case \u201cpart of the eye affected\u201d) Mutual exclusivity: A given entity can belong to only one class. This property is called mutual exclusivity. Necessary and sufficient criteria: In a pure"}
{"_id": "d1d120bc98e536dd33e37c876aaba57e584d252e", "title": "A soft, bistable valve for autonomous control of soft actuators", "text": "Almost all pneumatic and hydraulic actuators useful for mesoscale functions rely on hard valves for control. This article describes a soft, elastomeric valve that contains a bistable membrane, which acts as a mechanical \u201cswitch\u201d to control air flow. A structural instability\u2014often called \u201csnap-through\u201d\u2014enables rapid transition between two stable states of the membrane. The snap-upward pressure, \u0394P1 (kilopascals), of the membrane differs from the snap-downward pressure, \u0394P2 (kilopascals). The values \u0394P1 and \u0394P2 can be designed by changing the geometry and the material of the membrane. The valve does not require power to remain in either \u201copen\u201d or \u201cclosed\u201d states (although switching does require energy), can be designed to be bistable, and can remain in either state without further applied pressure. When integrated in a feedback pneumatic circuit, the valve functions as a pneumatic oscillator (between the pressures \u0394P1 and \u0394P2), generating periodic motion using air from a single source of constant pressure. The valve, as a component of pneumatic circuits, enables (i) a gripper to grasp a ball autonomously and (ii) autonomous earthworm-like locomotion using an air source of constant pressure. These valves are fabricated using straightforward molding and offer a way of integrating simple control and logic functions directly into soft actuators and robots."}
{"_id": "181092c651f540045042e7006f4837d01aac4122", "title": "Malware detection by text and data mining", "text": "Cyber frauds are a major security threat to the banking industry worldwide. Malware is one of the manifestations of cyber frauds. Malware authors use Application Programming Interface (API) calls to perpetrate these crimes. In this paper, we propose a static analysis method to detect Malware based on API call sequences using text and data mining in tandem. We analyzed the dataset available at CSMINING group. First, we employed text mining to extract features from the dataset consisting a series of API calls. Further, mutual information is invoked for feature selection. Then, we resorted to over-sampling to balance the data set. Finally, we employed various data mining techniques such as Decision Tree (DT), Multi Layer Perceptron (MLP), Support Vector Machine (SVM), Probabilistic Neural Network (PNN) and Group Method for Data Handling (GMDH). We also applied One Class SVM (OCSVM). Throughout the paper, we used 10-fold cross validation technique for testing the techniques. We observed that SVM and OCSVM achieved 100% sensitivity after balancing the dataset."}
{"_id": "6ff87b9a9ecec007fdf46ee81ed98571f881796f", "title": "The changing paradigm of air pollution monitoring.", "text": "The air pollution monitoring paradigm is rapidly changing due to recent advances in (1) the development of portable, lower-cost air pollution sensors reporting data in near-real time at a high-time resolution, (2) increased computational and visualization capabilities, and (3) wireless communication/infrastructure. It is possible that these advances can support traditional air quality monitoring by supplementing ambient air monitoring and enhancing compliance monitoring. Sensors are beginning to provide individuals and communities the tools needed to understand their environmental exposures with these data individual and community-based strategies can be developed to reduce pollution exposure as well as understand linkages to health indicators. Each of these areas as well as corresponding challenges (e.g., quality of data) and potential opportunities associated with development and implementation of air pollution sensors are discussed."}
{"_id": "8faecc97853db2f9a098adb82ffbc842178dfc5b", "title": "Quantitative analysis of small-plastic debris on beaches in the Hawaiian Archipelago.", "text": "Small-plastic beach debris from nine coastal locations throughout the Hawaiian Archipelago was analyzed. At each beach, replicate 20 l samples of sediment were collected, sieved for debris between 1 and 15 mm in size, sorted by type, counted and weighed. Small-plastic debris occurred on all of the beaches, but the greatest quantity was found at three of the most remote beaches on Midway Atoll and Moloka'i. Of the debris analyzed, 72% by weight was plastic. A total of 19100 pieces of plastic were collected from the nine beaches, 11% of which was pre-production plastic pellets. This study documents for the first time the presence of small-plastic debris on Hawaiian beaches and corroborates estimates of the abundance of plastics in the marine environment in the North Pacific."}
{"_id": "a55c57bd59691d0ac17106c757853eb5d546c84e", "title": "Insights on representational similarity in neural networks with canonical correlation", "text": "Comparing different neural network representations and determining how representations evolve over time remain challenging open questions in our understanding of the function of neural networks. Comparing representations in neural networks is fundamentally difficult as the structure of representations varies greatly, even across groups of networks trained on identical tasks, and over the course of training. Here, we develop projection weighted CCA (Canonical Correlation Analysis) as a tool for understanding neural networks, building off of SVCCA, a recently proposed method [22]. We first improve the core method, showing how to differentiate between signal and noise, and then apply this technique to compare across a group of CNNs, demonstrating that networks which generalize converge to more similar representations than networks which memorize, that wider networks converge to more similar solutions than narrow networks, and that trained networks with identical topology but different learning rates converge to distinct clusters with diverse representations. We also investigate the representational dynamics of RNNs, across both training and sequential timesteps, finding that RNNs converge in a bottom-up pattern over the course of training and that the hidden state is highly variable over the course of a sequence, even when accounting for linear transforms. Together, these results provide new insights into the function of CNNs and RNNs, and demonstrate the utility of using CCA to understand representations."}
{"_id": "f4136bd7f3948be30c4c11876a1bf933e3cc8549", "title": "Zara The Supergirl: An Empathetic Personality Recognition System", "text": "Zara the Supergirl is an interactive system that, while having a conversation with a user, uses its built in sentiment analysis, emotion recognition, facial and speech recognition modules, to exhibit the human-like response of sharing emotions. In addition, at the end of a 5-10 minute conversation with the user, it can give a comprehensive personality analysis based on the user\u2019s interaction with Zara. This is a first prototype that has incorporated a full empathy module, the recognition and response of human emotions, into a spoken language interactive system that enhances human-robot understanding. Zara was shown at the World Economic Forum in Dalian in September 2015."}
{"_id": "3b1faf9c554e857eb1e35572215d59d96810b09a", "title": "A CNN-Based Supermarket Auto-Counting System", "text": "Deep learning has made significant breakthrough in the past decade. In certain application domain, its detection accuracy has surpassed human being in the same task, e.g., voice recognition and object detection. Various novel applications has been developed and achieved good performance by leveraging the latest advances in deep learning. In this paper, we propose to utilize deep learning based technique, specifically, Convolutional Neural Network (CNN), to develop an auto-counting system for supermarket scenario. Given a picture, the system can automatically detect the specified categories of goods (e.g., Head & Shoulders bottles) and their respective numbers. To improve detection accuracy of the system, we propose to combine hard example mining and multi-scale feature extraction to the Faster R-CNN framework. Experimental results demonstrate the efficacy of the proposed system. Specifically, our system achieves an mAP of 92.1%, which is better than the state-of-the-art, and the response time is about 250 ms per image, including all steps on a GTX 1080 GPU."}
{"_id": "2b692907212a8f6eaf3d187b8370d6285dbbfaa5", "title": "A unified API gateway for high availability clusters", "text": "High-availability (HA) clusters are widely used to provide high availability services. Recently, many HA cluster solutions and products have been proposed or developed by different organizations. However, each HA cluster system has its own administrative tool and application programming interface (API). Besides, vendor lock-in makes customers dependent on the specific vendors' own high availability clusters. Therefore, it is very complicated to simultaneously manage various HA clusters. To solve this problem, a novel SOA-based Unified API Gateway for high-availability Clusters (UAGC) is proposed in this paper. Under UAGC, cluster services are conveniently managed in a unified way, which is independent of platforms or programming languages. Six web service interfaces are implemented in UAGC to cover most cluster functions. A UAGC-based web service (UAGCService) is implemented with WCF. The experimental results show that UAGCService has good performances."}
{"_id": "52b3bd5cc6e1838a894b63dc378807dab25d2b38", "title": "Concurrent multi-target localization, data association, and navigation for a swarm of flying sensors", "text": "We are developing a probabilistic technique for performing multiple target detection and localization based on data from a swarm of flying sensors, for example to be mounted on a group of micro-UAVs (unmanned aerial vehicles). Swarms of sensors can facilitate detecting and discriminating low signal-to-clutter targets by allowing correlation between different sensor types and/or different aspect angles. However, for deployment of swarms to be feasible, UAVs must operate more autonomously. The current approach is designed to reduce the load on humans controlling UAVs by providing computerized interpretation of a set of images from multiple sensors. We consider a complex case in which target detection and localization are performed concurrently with sensor fusion, multi-target signature association, and improved UAV navigation. This method yields the bonus feature of estimating precise tracks for UAVs, which may be applicable for automatic collision avoidance. We cast the problem in a probabilistic framework known as modeling field theory (MFT), in which the pdf of the data is composed of a mixture of components, each conditional upon parameters including target positions as well as sensor kinematics. The most likely set of parameters is found by maximizing the log-likelihood function using an iterative approach related to expectation-maximization. In terms of computational complexity, this approach scales linearly with number of targets and sensors, which represents an improvement over most existing methods. Also, since data association is treated probabilistically, this method is not prone to catastrophic failure if data association is incorrect. Results from computer simulations are described which quantitatively show the advantages of increasing the number of sensors in the swarm, both in terms of clutter suppression and more accurate target localization. 2005 Elsevier B.V. All rights reserved."}
{"_id": "229547ed3312ee6195104cdec7ce47578f92c2c6", "title": "DYNAMIC CAPABILITIES AND THE EMERGENCE OF INTRA-INDUSTRY DIFFERENTIAL FIRM PERFORMANCE: INSIGHTS FROM A SIMULATION STUDY\u2217 By", "text": "This paper explores how the dynamic capabilities of firms may account for the emergence of differential firm performance within an industry. Synthesizing insights from both strategic and organizational theory, four performance-relevant attributes of dynamic capabilities are proposed: timing of dynamic capability deployment, imitation as part of the search for alternative resource configurations, cost of dynamic capability deployment, and learning to deploy dynamic capabilities. Theoretical propositions are developed suggesting how these attributes contribute to the emergence of differential firm performance. A formal model is presented in which dynamic capability is modeled as a set of routines guiding a firm\u2019s evolutionary processes of change. Simulation of the model yields insights into the process of change through dynamic capability deployment, and permits refinement of the theoretical propositions. One of the interesting findings of this study is that even if dynamic capabilities are equifinal across firms, robust performance differences may arise across firms if the costs and timing of dynamic capability deployment differ across firms."}
{"_id": "fdf72e14c2c22960cdd0c3d1109b71b725fc0d0b", "title": "Vehicular Networking: A Survey and Tutorial on Requirements, Architectures, Challenges, Standards and Solutions", "text": "Vehicular networking has significant potential to enable diverse applications associated with traffic safety, traffic efficiency and infotainment. In this survey and tutorial paper we introduce the basic characteristics of vehicular networks, provide an overview of applications and associated requirements, along with challenges and their proposed solutions. In addition, we provide an overview of the current and past major ITS programs and projects in the USA, Japan and Europe. Moreover, vehicular networking architectures and protocol suites employed in such programs and projects in USA, Japan and Europe are discussed."}
{"_id": "a0e4730c864e7ca035dde6ba7b19207d1e7daa6a", "title": "Force-mode control of rotary series elastic actuators in a lower extremity exoskeleton using model-inverse time delay control (MiTDC)", "text": "For physical human-robot interaction (pHRI), it has been an important issue to control the output force of actuators. The aim of this study was to apply a new control strategy, named model-inverse time delay control (MiTDC), to series elastic actuators (SEA) in a lower extremity exoskeleton, even in the presence of uncertainties from pHRI. The law for time delay control (TDC) is derived and implementation issues are discussed, including the design of the state observer and the selection of the nominal value of the control distribution coefficient. Additionally, a new concept, a new reference position using the inverse of model dynamics, is introduced to realize satisfactory tracking performance without delay. Experimental results showed that the suggested controller achieved satisfactory performance without accurate information on system parameters, requiring only a nominal value of the control distribution coefficient."}
{"_id": "144c1e6fc6439038d709be70aa5484506765b2c2", "title": "Toward an SDN-enabled NFV architecture", "text": "This article presents the progressive evolution of NFV from the initial SDN-agnostic initiative to a fully SDN-enabled NFV solution, where SDN is not only used as infrastructure support but also influences how virtual network functions (VNFs) are designed. In the latest approach, when possible, stateless processing in the VNF shifts from the computing element to the networking element. To support these claims, the article presents the implementation of a flow-based network access control solution, with an SDN-enabled VNF built on IEEE 802.1x, which establishes services as sets of flow definitions that are authorized as the result of an end user authentication process. Enforcing the access to the network is done at the network element, while the authentication and authorization state is maintained at the compute element. The application of this proposal allows the performance to be enhanced, while traffic in the control channel is reduced to a minimum. The SDN-enabled NFV approach sets the foundation to increase the areas of application of NFV, in particular in those areas where massive stateless processing of packets is expected."}
{"_id": "c5153bc17fe1ab323c41325ba010192030d512f0", "title": "Software-defined network virtualization: an architectural framework for integrating SDN and NFV for service provisioning in future networks", "text": "SDN and NFV are two significant innovations in networking. The evolution of both SDN and NFV has shown strong synergy between these two paradigms. Recent research efforts have been made toward combining SDN and NFV to fully exploit the advantages of both technologies. However, integrating SDN and NFV is challenging due to the variety of intertwined network elements involved and the complex interaction among them. In this article, we attempt to tackle this challenging problem by presenting an architectural framework called SDNV. This framework offers a clear holistic vision of integrating key principles of both SDN and NFV into unified network architecture, and provides guidelines for synthesizing research efforts toward combining SDN and NFV in future networks. Based on this framework, we also discuss key technical challenges to realizing SDN-NFV integration and identify some important topics for future research, with a hope to arouse the research community's interest in this emerging area."}
{"_id": "1f2b28dc48c8f2c0349dce728d7b6a0681f58aea", "title": "A Dataset for Lane Instance Segmentation in Urban Environments", "text": "Autonomous vehicles require knowledge of the surrounding road layout, which can be predicted by state-of-the-art CNNs. This work addresses the current lack of data for determining lane instances, which are needed for various driving manoeuvres. The main issue is the timeconsuming manual labelling process, typically applied per image. We notice that driving the car is itself a form of annotation. Therefore, we propose a semi-automated method that allows for efficient labelling of image sequences by utilising an estimated road plane in 3D based on where the car has driven and projecting labels from this plane into all images of the sequence. The average labelling time per image is reduced to 5 seconds and only an inexpensive dash-cam is required for data capture. We are releasing a dataset of 24,000 images and additionally show experimental semantic segmentation and instance segmentation results."}
{"_id": "b533b13910cc0de21054116715988783fbea87cc", "title": "Intrusion Detection Using Data Mining Techniques Intrusion Detection Using Data Mining Techniques", "text": "In these days an increasing number of public and commercial services are used through the Internet, so that security of information becomes more important issue in the society information Intrusion Detection System (IDS) used against attacks for protected to the Computer networks. On another way, some data mining techniques also contribute to intrusion detection. Some data mining techniques used for intrusion detection can be classified into two classes: misuse intrusion detection and anomaly intrusion detection. Misuse always refers to known attacks and harmful activities that exploit the known sensitivity of the system. Anomaly generally means a generally activity that is able to indicate an intrusion. In this paper, comparison made between 23 related papers of using data mining techniques for intrusion detection. Our work provide an overview on data mining and soft computing techniques such as Artificial Neural Network (ANN), Support Vector Machine (SVM) and Multivariate Adaptive Regression Spline (MARS), etc. In this paper comparison shown between IDS data mining techniques and tuples used for intrusion detection. In those 23 related papers, 7 research papers use ANN and 4 ones use SVM, because of ANN and SVM are more reliable than other models and structures. In addition, 8 researches use the DARPA1998 tuples and 13 researches use the KDDCup1999, because the standard tuples are much more credible than others. There is no best intrusion detection model in present time. However, future research directions for intrusion detection should be explored in this paper. Keywords\u2014 intrusion detection, data mining, ANN"}
{"_id": "a69fd2ad66791ad9fa8722a3b2916092d0f37967", "title": "Interactive example-based urban layout synthesis", "text": "We present an interactive system for synthesizing urban layouts by example. Our method simultaneously performs both a structure-based synthesis and an image-based synthesis to generate a complete urban layout with a plausible street network and with aerial-view imagery. Our approach uses the structure and image data of real-world urban areas and a synthesis algorithm to provide several high-level operations to easily and interactively generate complex layouts by example. The user can create new urban layouts by a sequence of operations such as join, expand, and blend without being concerned about low-level structural details. Further, the ability to blend example urban layout fragments provides a powerful way to generate new synthetic content. We demonstrate our system by creating urban layouts using example fragments from several real-world cities, each ranging from hundreds to thousands of city blocks and parcels."}
{"_id": "4be6deba004cd99fa5434cd3c297747be9d45bb5", "title": "Machine Ethics and Automated Vehicles", "text": "Road vehicle travel at a reasonable speed involves some risk, even when using computer-controlled driving with failure-free hardware and perfect sensing. A fully-automated vehicle must continuously decide how to allocate this risk without a human driver\u2019s oversight. These are ethical decisions, particularly in instances where an automated vehicle cannot avoid crashing. In this chapter, I introduce the concept of moral behavior for an automated vehicle, argue the need for research in this area through responses to anticipated critiques, and discuss relevant applications from machine ethics and moral modeling research. 1 Ethical Decision Making for Automated Vehicles Vehicle automation has progressed rapidly this millennium, mirroring improvements in machine learning, sensing, and processing. Media coverage often focuses on the anticipated safety benefits from automation, as computers are expected to be more attentive, precise, and predictable than human drivers. Mentioned less often are the novel problems from automated vehicle crash. The first problem is liability, as it is currently unclear who would be at fault if a vehicle crashed while self-driving. The second problem is the ability of an automated vehicle to make ethically-complex decisions when driving, particularly prior to a crash. This chapter focuses on the second problem, and the application of machine ethics to vehicle automation. Driving at any significant speed can never be completely safe. A loaded tractor trailer at 100 km/hr requires eight seconds to come to a complete stop, and a passenger car requires three seconds [1]. Truly safe travel requires accurate predictions of other vehicle behavior over this time frame, something that is simply not possible given the close proximities of road vehicles. To ensure its own safety, an automated vehicle must continually assess risk: the risk of traveling a certain speed on a certain curve, of crossing the centerline to"}
{"_id": "e5b302ee968ddc72263bc9a403a744bc477c91b3", "title": "Daylight Design of Office Buildings : Optimisation of External Solar Shadings by Using Combined Simulation Methods", "text": "Integrating daylight and energy performance with optimization into the design process has always been a challenge for designers. Most of the building environmental performance simulation tools require a considerable amount of time and iterations for achieving accurate results. Moreover the combination of daylight and energy performances has always been an issue, as different software packages are needed to perform detailed calculations. A simplified method to overcome both issues using recent advances in software integration is explored here. As a case study; the optimization of external shadings in a typical office space in Australia is presented. Results are compared against common solutions adopted as industry standard practices. Visual comfort and energy efficiency are analysed in an integrated approach. The DIVA (Design, Iterate, Validate and Adapt) plug-in for Rhinoceros/Grasshopper software is used as the main tool, given its ability to effectively calculate daylight metrics (using the Radiance/Daysim engine) and energy consumption (using the EnergyPlus engine). The optimization process is carried out parametrically controlling the shadings\u2019 geometries. Genetic Algorithms (GA) embedded in the evolutionary solver Galapagos are adopted in order to achieve close to optimum results by controlling iteration parameters. The optimized result, in comparison with conventional design techniques, reveals significant enhancement of comfort levels and energy efficiency. Benefits and drawbacks of the proposed strategy are then discussed. OPEN ACCESS Buildings 2015, 5 561"}
{"_id": "2e939ed3bb378ea966bf9f710fc1138f4e16ef38", "title": "Optimizing the CVaR via Sampling", "text": "Conditional Value at Risk (CVaR) is a prominent risk measure that is being used extensively in various domains. We develop a new formula for the gradient of the CVaR in the form of a conditional expectation. Based on this formula, we propose a novel sampling-based estimator for the gradient of the CVaR, in the spirit of the likelihood-ratio method. We analyze the bias of the estimator, and prove the convergence of a corresponding stochastic gradient descent algorithm to a local CVaR optimum. Our method allows to consider CVaR optimization in new domains. As an example, we consider a reinforcement learning application, and learn a risksensitive controller for the game of Tetris."}
{"_id": "7003d7252358bf82c8767d6416ef70cb422a82d1", "title": "Multidisciplinary Instruction with the Natural Language Toolkit", "text": "The Natural Language Toolkit ( NLTK ) is widely used for teaching natural language processing to students majoring in linguistics or computer science. This paper describes the design ofNLTK , and reports on how it has been used effectively in classes that involve different mixes of linguistics and computer science students. We focus on three key issues: getting started with a course, delivering interactive demonstrations in the classroom, and organizing assignments and projects. In each case, we report on practical experience and make recommendations on how to useNLTK to maximum effect."}
{"_id": "bee18c795cb6299f2f83636dd90a5914c66096f6", "title": "Barbed sutures in facial rejuvenation.", "text": "Self-retaining barbed sutures, innovations for nonsurgical facial and neck rejuvenation, are currently available as short APTOS threads or long WOFFLES threads. The author uses APTOS threads for malar rounding, facial tightening and firming, and uses WOFFLES threads as a sling, suspending ptotic facial tissues to the firm, dense tissues of the temporal scalp."}
{"_id": "a2aa272b32c356ec9933b32ca5809c09f2d21b9f", "title": "Clockwork Convnets for Video Semantic Segmentation", "text": "Recent years have seen tremendous progress in still-image segmentation; however the na\u0131\u0308ve application of these state-of-the-art algorithms to every video frame requires considerable computation and ignores the temporal continuity inherent in video. We propose a video recognition framework that relies on two key observations: 1) while pixels may change rapidly from frame to frame, the semantic content of a scene evolves more slowly, and 2) execution can be viewed as an aspect of architecture, yielding purpose-fit computation schedules for networks. We define a novel family of \u201cclockwork\u201d convnets driven by fixed or adaptive clock signals that schedule the processing of different layers at different update rates according to their semantic stability. We design a pipeline schedule to reduce latency for real-time recognition and a fixed-rate schedule to reduce overall computation. Finally, we extend clockwork scheduling to adaptive video processing by incorporating data-driven clocks that can be tuned on unlabeled video. The accuracy and efficiency of clockwork convnets are evaluated on the Youtube-Objects, NYUD, and Cityscapes video datasets."}
{"_id": "c7b3f5bccb19f1a224eb87c6924f244b1511e680", "title": "Nonlinear Extensions of Reconstruction ICA", "text": "In a recent paper [1] it was observed that unsupervised feature learning with overcomplete features could be achieved using linear autoencoders (named Reconstruction Independent Component Analysis). This algorithm has been shown to outperform other well-known algorithms by penalizing the lack of diversity (or orthogonality) amongst features. In our project, we wish to extend and improve this algorithm to include other non-linearities. In this project we have considered three unsupervised learning algorithms: (a) Sparse Autoencoder (b) Reconstruction ICA (RICA), a linear autoencoder proposed in [1], and (c) Nonlinear RICA, a proposed extension of RICA for capturing nonlinearities in feature detection. Our research indicates that exploring non-linear extensions of RICA holds good promise; preliminary results with hyperbolic tangent function on the MNIST dataset showed impressive accuracy (comparable with sparse autoencoders), robustness, and required a fraction of the computational effort."}
{"_id": "8c1ee9c9653b9e711000641b905971f65432ab95", "title": "Accelerating cine phase-contrast flow measurements using k-t BLAST and k-t SENSE.", "text": "Conventional phase-contrast velocity mapping in the ascending aorta was combined with k-t BLAST and k-t SENSE. Up to 5.3-fold net acceleration was achieved, enabling single breath-hold acquisitions. A standard phase-contrast (PC) sequence with interleaved acquisition of the velocity-encoded segments was modified to collect data in 2 stages, a high-resolution under sampled and a low-resolution fully sampled training stage. In addition, a modification of the k-t reconstruction strategy was tested. This strategy, denoted as \"plug-in,\" incorporates data acquired in the training stage into the final reconstruction for improved data consistency, similar to conventional keyhole. \"k-t SENSE plug-in\" was found to provide best image quality and most accurate flow quantification. For this strategy, at least 10 training profiles are required to yield accurate stroke volumes (relative deviation <5%) and good image quality. In vivo 2D cine velocity mapping was performed in 6 healthy volunteers with 30-32 cardiac phases (spatial resolution 1.3 x 1.3 x 8-10 mm(3), temporal resolution of 18-38 ms), yielding relative stroke volumes of 106 +/- 18% (mean +/- 2*SD) and 112 +/- 15% for 3.8 x and 5.3 x net accelerations, respectively. In summary, k-t BLAST and k-t SENSE are promising approaches that permit significant scan-time reduction in PC velocity mapping, thus making high-resolution breath-held flow quantification possible."}
{"_id": "97c862ac9f0b2df304a2a6d0af697c02d30da6a6", "title": "Cost-effective low-loss flexible optical engine with microlens-imprinted film for high-speed on-board optical interconnection", "text": "There is a strong demand for optical interconnection technology to overcome bandwidth bottlenecks in high-end server systems. The interconnection speed in present systems is approaching 10 Gb/s, and higher-speed interconnections over 25 Gb/s are being discussed. To achieve such optical interconnections in commercial production, it is necessary to develop lower-cost and higher-speed optical transceiver modules. We propose a flexible printed circuit optical engine (FPC-OE) with a microlens-imprinted film and a polymer waveguide to achieve low-cost and high-speed operation. The microlens-imprinted film can be produced at low cost by using nanoimprint technology and can drastically reduce the optical loss of the FPC-OE with polymer waveguide. We successfully demonstrated error-free operation at 25 Gb/s with the fabricated optical transceiver that contains an FPC-OE, microlens-imprinted film, and a polymer waveguide."}
{"_id": "680c730a9a184c8b54fa036cc19c980afe192274", "title": "Pulsewidth modulation for electronic power conversion", "text": "The efficient and fast control of electric power forms part of the key technologies of modern automated production. It is performed using electronic power converters. The converters transfer energy from a source to a controlled process in a quantized fashion, using semiconductor switches which are turned on and off at fast repetition rates. The algorithms which generate the switching functions \u2013 pulsewidth modulation techniques \u2013 are manifold. They range from simple averaging schemes to involved methods of real-time optimization. This paper gives an overview."}
{"_id": "a59dda7562eda2cb57eeb95158d75c1da3e73e2e", "title": "Fuzzy Cognitive Map to model project management problems", "text": "Project management is a complex process impacted by numerous factors either from the external environment and/or internal factors completely or partially under the project manager's control. Managing projects successfully involves a complex amalgamation of comprehensive, informed planning, dynamic assessment and analysis of changes in external and internal factors, and the development and communication of updated strategies over the life of the project. Project management involves the interaction and analysis of many systems and requires the continuous integration and evaluation of large amounts of information. Fuzzy Cognitive Maps (FCM) allow us to encode project management knowledge and experiential results to create a useful model of the interacting systems. This paper covers the representation and development of a construction project management FCM that provides an integrated view of the most important concepts affecting construction project management and risk management. This paper then presents the soft computing approach of FCM to project management (PM) modeling and analysis. The resulting PM-FCM models the interaction of internal and external factors and offers an abstract conceptual model of interacting concepts for construction project management application."}
{"_id": "b8b309631aff6f9e3ff9c4ab57221ff51353e04d", "title": "Design of an endoluminal NOTES robotic system", "text": "Natural orifice transluminal endoscopic surgery, or NOTES, allows for exceedingly minimally invasive surgery but has high requirements for the dexterity and force capabilities of the tools. An overview of the ViaCath System is presented. This system is a first generation teleoperated robot for endoluminal surgery and consists of a master console with haptic interfaces, slave drive mechanisms, and 6 degree-of-freedom, long-shafted flexible instruments that run alongside a standard gastroscope or colonoscope. The system was validated through animal studies. It was discovered that the devices were difficult to introduce into the GI tract and manipulation forces were insufficient. The design of a second generation system is outlined with improvements to the instrument articulation section and a steerable overtube. Results of basic evaluation tests performed on the tools are also presented."}
{"_id": "060a42937eed7ad1388c165feba0ebc0511952b2", "title": "Block-wise construction of tree-like relational features with monotone reducibility and redundancy", "text": "We describe an algorithm for constructing a set of tree-like conjunctive relational features by combining smaller conjunctive blocks. Unlike traditional level-wise approaches which preserve the monotonicity of frequency, our block-wise approach preserves monotonicity of feature reducibility and redundancy, which are important in propositionalization employed in the context of classification learning. With pruning based on these properties, our block-wise approach efficiently scales to features including tens of first-order atoms, far beyond the reach of state-of-the art propositionalization or inductive logic programming systems."}
{"_id": "443418d497a45a197a1a1a96d84ae54078ce3d8f", "title": "Bijective parameterization with free boundaries", "text": "We present a fully automatic method for generating guaranteed bijective surface parameterizations from triangulated 3D surfaces partitioned into charts. We do so by using a distortion metric that prevents local folds of triangles in the parameterization and a barrier function that prevents intersection of the chart boundaries. In addition, we show how to modify the line search of an interior point method to directly compute the singularities of the distortion metric and barrier functions to maintain a bijective map. By using an isometric metric that is efficient to compute and a spatial hash to accelerate the evaluation and gradient of the barrier function for the boundary, we achieve fast optimization times. Unlike previous methods, we do not require the boundary be constrained by the user to a non-intersecting shape to guarantee a bijection, and the boundary of the parameterization is free to change shape during the optimization to minimize distortion."}
{"_id": "110caa791362b26dfeac76060e052c9ccc5c2356", "title": "Adaptive Parser-Centric Text Normalization", "text": "Text normalization is an important first step towards enabling many Natural Language Processing (NLP) tasks over informal text. While many of these tasks, such as parsing, perform the best over fully grammatically correct text, most existing text normalization approaches narrowly define the task in the word-to-word sense; that is, the task is seen as that of mapping all out-of-vocabulary non-standard words to their in-vocabulary standard forms. In this paper, we take a parser-centric view of normalization that aims to convert raw informal text into grammatically correct text. To understand the real effect of normalization on the parser, we tie normalization performance directly to parser performance. Additionally, we design a customizable framework to address the often overlooked concept of domain adaptability, and illustrate that the system allows for transfer to new domains with a minimal amount of data and effort. Our experimental study over datasets from three domains demonstrates that our approach outperforms not only the state-of-the-art wordto-word normalization techniques, but also manual word-to-word annotations."}
{"_id": "f6ed1d32be1ba0681ee460cfed202c7303e224c4", "title": "Path-guided artificial potential fields with stochastic reachable sets for motion planning in highly dynamic environments", "text": "Highly dynamic environments pose a particular challenge for motion planning due to the need for constant evaluation or validation of plans. However, due to the wide range of applications, an algorithm to safely plan in the presence of moving obstacles is required. In this paper, we propose a novel technique that provides computationally efficient planning solutions in environments with static obstacles and several dynamic obstacles with stochastic motions. Path-Guided APF-SR works by first applying a sampling-based technique to identify a valid, collision-free path in the presence of static obstacles. Then, an artificial potential field planning method is used to safely navigate through the moving obstacles using the path as an attractive intermediate goal bias. In order to improve the safety of the artificial potential field, repulsive potential fields around moving obstacles are calculated with stochastic reachable sets, a method previously shown to significantly improve planning success in highly dynamic environments. We show that Path-Guided APF-SR outperforms other methods that have high planning success in environments with 300 stochastically moving obstacles. Furthermore, planning is achievable in environments in which previously developed methods have failed."}
{"_id": "37c998ede7ec9eeef7016a206308081bce0fc414", "title": "ITGovA: Proposition of an IT governance Approach", "text": "I To cope with issues related to optimization, rationalization, risk management, economic value of technology and information assets, the implementation of appropriate IT governance seems an important need in public and private organizations. It\u2019s one of these concepts that suddenly emerged and became an important issue in the information technology area. Many organizations started with the implementation of IT governance to achieve a better alignment between business and IT, however, there is no method to apply the IT governance principles in companies. This paper proposes a new approach to implement IT governance based on five iterative phases. This approach is a critical business process that ensures that the business meets its strategic objectives and depending on IT resources for execution. Keywords\u2014Information technology, IT governance, lifecycle, strategic alignment, value"}
{"_id": "148753559db7b59462dd24d4c9af60b2a73a66bf", "title": "Chemical Similarity Searching", "text": "This paper reviews the use of similarity searching in chemical databases. It begins by introducing the concept of similarity searching, differentiating it from the more common substructure searching, and then discusses the current generation of fragment-based measures that are used for searching chemical structure databases. The next sections focus upon two of the principal characteristics of a similarity measure: the coefficient that is used to quantify the degree of structural resemblance between pairs of molecules and the structural representations that are used to characterize molecules that are being compared in a similarity calculation. New types of similarity measure are then compared with current approaches, and examples are given of several applications that are related to similarity searching."}
{"_id": "34e609610872e6ee8b5a46f6cf5ddc928d785385", "title": "Neural network model for time series prediction by reinforcement learning", "text": "Two important issues when constructing a neural network (NN) for time series prediction: proper selection of (1) the input dimension and (2) the time delay between the inputs. These two parameters determine the structure, computing complexity and accuracy of the NN. This paper is to formulate an autonomous data-driven approach to identify a parsimonious structure for the NN so as to reduce the prediction error and enhance the modeling accuracy. The reinforcement learning based dimension and delay estimator (RLDDE) is proposed. It involves a trial-error learning process to formulate a selection policy for designating the above-mentioned two parameters. The proposed method is evaluated by the prediction of the benchmark sunspot time series."}
{"_id": "545e074f8aa21153f7b5d298aa5ba4fc7c89bd36", "title": "Violent Scenes Detection Using Mid-Level Violence Clustering", "text": "This work proposes a novel system for Violent Scenes Detection, which is based on the combination of visual and audio features with machine learning at segment-level. Multiple Kernel Learning is applied so that multimodality of videos can be maximized. In particular, Mid-level Violence Clustering is proposed in order for mid-level concepts to be implicitly learned, without using manually tagged annotations. Finally a violence-score for each shot is calculated. The whole system is trained ona dataset from MediaEval 2013 Affect Task and evaluated by its official metric. The obtained results outperformed its best score."}
{"_id": "0a1e664b66aae97d2f57b45d86dd7ac152e8fd92", "title": "End-to-End Waveform Utterance Enhancement for Direct Evaluation Metrics Optimization by Fully Convolutional Neural Networks", "text": "Speech enhancement model is used to map a noisy speech to a clean speech. In the training stage, an objective function is often adopted to optimize the model parameters. However, in the existing literature, there is an inconsistency between the model optimization criterion and the evaluation criterion for the enhanced speech. For example, in measuring speech intelligibility, most of the evaluation metric is based on a short-time objective intelligibility STOI measure, while the frame based mean square error MSE between estimated and clean speech is widely used in optimizing the model. Due to the inconsistency, there is no guarantee that the trained model can provide optimal performance in applications. In this study, we propose an end-to-end utterance-based speech enhancement framework using fully convolutional neural networks FCN to reduce the gap between the model optimization and the evaluation criterion. Because of the utterance-based optimization, temporal correlation information of long speech segments, or even at the entire utterance level, can be considered to directly optimize perception-based objective functions. As an example, we implemented the proposed FCN enhancement framework to optimize the STOI measure. Experimental results show that the STOI of a test speech processed by the proposed approach is better than conventional MSE-optimized speech due to the consistency between the training and the evaluation targets. Moreover, by integrating the STOI into model optimization, the intelligibility of human subjects and automatic speech recognition system on the enhanced speech is also substantially improved compared to those generated based on the minimum MSE criterion."}
{"_id": "9af4ef7b5b075e9dc88ed0294891c2a9146adca6", "title": "Impact of User Pairing on 5G Non-Orthogonal Multiple Access", "text": "Non-orthogonal multiple access (NOMA) represents a paradigm shift from conventional orthogonal multiple access (MA) concepts, and has been recognized as one of the key enabling technologies for 5G systems. In this paper, the imp act of user pairing on the performance of two NOMA systems, NOMA with fixed power allocation (F-NOMA) and cognitive radio inspired NOMA (CR-NOMA), is characterized. For FNOMA, both analytical and numerical results are provided to demonstrate that F-NOMA can offer a larger sum rate than orthogonal MA, and the performance gain of F-NOMA over conventional MA can be further enlarged by selecting users whose channel conditions are more distinctive. For CR-NOMA, the quality of service (QoS) for users with the poorer channe l condition can be guaranteed since the transmit power alloca ted to other users is constrained following the concept of cogniti ve radio networks. Because of this constraint, CR-NOMA has different behavior compared to F-NOMA. For example, for the user with the best channel condition, CR-NOMA prefers to pair it with the user with the second best channel condition, whereas the use r with the worst channel condition is preferred by F-NOMA. I. I NTRODUCTION Multiple access in 5G mobile networks is an emerging research topic, since it is key for the next generation netwo rk to keep pace with the exponential growth of mobile data and multimedia traffic [1] and [2]. Non-orthogonal multiple acc ess (NOMA) has recently received considerable attention as a promising candidate for 5G multiple access [3]\u2013[6]. Partic ularly, NOMA uses the power domain for multiple access, where different users are served at different power levels. The users with better channel conditions employ successive int erference cancellation (SIC) to remove the messages intended for other users before decoding their own [7]. The benefit of usin g NOMA can be illustrated by the following example. Consider that there is a user close to the edge of its cell, denoted by A, whose channel condition is very poor. For conventional MA, an orthogonal bandwidth channel, e.g., a time slot, will be allocated to this user, and the other users cannot use this time slot. The key idea of NOMA is to squeeze another user with better channel condition, denoted by B, into this time slot. SinceA\u2019s channel condition is very poor, the interference fromB will not cause much performance degradation to A, but the overall system throughput can be significantly improved since additional information can be delivered between the base station (BS) andB. The design of NOMA for uplink transmissions has been proposed in [4], and the performance of NOMA with randomly deployed mobile stations has been characterized in [5]. The combination of cooperative diver sity with NOMA has been considered in [8]. Z. Ding and H. V. Poor are with the Department of Electrical En gineering, Princeton University, Princeton, NJ 08544, USA. Z. Ding is a l o with the School of Computing and Communications, Lancaster Univers ity, LA1 4WA, UK. Pingzhi Fan is with the Institute of Mobile Communicatio ns, Southwest Jiaotong University, Chengdu, China. Since multiple users are admitted at the same time, frequency and spreading code, co-channel interference will be strong in NOMA systems, i.e., a NOMA system is interference limited. 
As a result, it may not be realistic to ask all the users in the system to perform NOMA jointly. A promising alternative is to build a hybrid MA system, in which NOMA is combined with conventional MA. In particular, the users in the system can be divided into multiple groups, where NOMA is implemented within each group and different groups are allocated with orthogonal bandwidth resources. Obviously the performance of this hybrid MA scheme is very dependent on which users are grouped together, and the aim of this paper is to investigate the effect of this grouping. Particularly, in this paper, we focus on a downlink communication scenario with one BS and multiple users, where the users are ordered according to their connections to the BS, i.e., the m-th user has the m-th worst connection to the BS. Consider that two users, the m-th user and the n-th user, are selected for performing NOMA jointly, where m < n. The impact of user pairing on the performance of NOMA will be characterized in this paper, where two types of NOMA will be considered. One is based on fixed power allocation, termed F-NOMA, and the other is cognitive radio inspired NOMA, termed CR-NOMA. For the F-NOMA scheme, the probability that F-NOMA can achieve a larger sum rate than conventional MA is first studied, where an exact expression for this probability as well as its high signal-to-noise ratio (SNR) approximation are obtained. These developed analytical results demonstrate that it is almost certain for F-NOMA to outperform conventional MA, and the channel quality of the n-th user is critical to this probability. In addition, the gap between the sum rates achieved by F-NOMA and conventional MA is also studied, and it is shown that this gap is determined by how different the two users\u2019 channel conditions are, as initially reported in [8]. For example, if n = M, it is preferable to choose m = 1, i.e., pairing the user with the best channel condition with the user with the worst channel condition. The reason for this phenomenon can be explained as follows. When m is small, the m-th user\u2019s channel condition is poor, and the data rate supported by this user\u2019s channel is also small. Therefore the spectral efficiency of conventional MA is low, since the bandwidth allocated to this user cannot be accessed by other users. The use of F-NOMA ensures that the n-th user will have access to the resource allocated to the m-th user. If (n\u2212m) is small, the n-th user\u2019s channel quality is similar to the m-th user\u2019s, and the benefit of using NOMA is limited. But if n >> m, the n-th user can use the bandwidth resource much more efficiently than the m-th user, i.e., a larger (n\u2212m) will result in a larger performance gap between F-NOMA and conventional MA. The key idea of CR-NOMA is to opportunistically serve the n-th user on the condition that the m-th user\u2019s quality of service (QoS) is guaranteed. Particularly the transmit power"}
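The pairing argument sketched above is easy to check numerically. The snippet below compares the two-user downlink sum rate of fixed-power NOMA against orthogonal MA for one weak and one strong user; the power split and channel gains are toy values of ours, not the paper's.

```python
# F-NOMA vs. orthogonal MA sum rate for one user pair (toy numbers, ours).
import numpy as np

P = 10.0                          # total transmit SNR budget
a_weak, a_strong = 0.8, 0.2       # fixed power split (more power to weak user)
g_weak, g_strong = 0.1, 2.0       # channel gains |h|^2: distinctive conditions

# NOMA: the weak user treats the strong user's signal as noise; the strong
# user first removes the weak user's signal by SIC.
r_weak = np.log2(1 + a_weak * P * g_weak / (a_strong * P * g_weak + 1))
r_strong = np.log2(1 + a_strong * P * g_strong)

# Orthogonal MA: each user gets half the bandwidth at full power.
r_weak_oma = 0.5 * np.log2(1 + P * g_weak)
r_strong_oma = 0.5 * np.log2(1 + P * g_strong)

print("NOMA sum rate:", r_weak + r_strong)          # ~3.06 bit/s/Hz
print("OMA sum rate:", r_weak_oma + r_strong_oma)   # ~2.70 bit/s/Hz
```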
{"_id": "229b7759e2ee9e03712836a9504d50bd6c66a973", "title": "Local Maxima in the Likelihood of Gaussian Mixture Models: Structural Results and Algorithmic Consequences", "text": "We provide two fundamental results on the population (infinite-sample) likelihood function of Gaussian mixture models with M \u2265 3 components. Our first main result shows that the population likelihood function has bad local maxima even in the special case of equally-weighted mixtures of well-separated and spherical Gaussians. We prove that the log-likelihood value of these bad local maxima can be arbitrarily worse than that of any global optimum, thereby resolving an open question of Srebro [21]. Our second main result shows that the EM algorithm (or a first-order variant of it) with random initialization will converge to bad critical points with probability at least 1 \u2212 e. We further establish that a first-order variant of EM will not converge to strict saddle points almost surely, indicating that the poor performance of the first-order method can be attributed to the existence of bad local maxima rather than bad saddle points. Overall, our results highlight the necessity of careful initialization when using the EM algorithm in practice, even when applied in highly favorable settings."}
{"_id": "5c905b298c074d17be3fdbc47705fe5cd0797c2e", "title": "Decision Theoretic Generalizations of the PAC Model for Neural Net and Other Learning Applications", "text": "We describe a generalization of the PAC learning model that is based on statistical decision theory. In this model the learner receives randomly drawn examples, each example consisting of an instance x E X and an outcome J E Y, and tries to find a decision rule h: X + A, where h E X, that specifies the appropriate action a E A to take for each instance x in order to minimize the expectation of a loss I( y, a). Here X, Y, and A are arbitrary sets. I is a real-valued function, and examples are generated according to an arbitrary joint distribution on Xx Y. Special cases include the problem of learning a function from X into Y, the problem of learning the conditional probability distribution on Y given X (regression), and the problem of learning a distribution on X (density estimation). We give theorems on the uniform convergence of empirical loss estimates to true expected loss rates for certain decision rule spaces 2, and show how this implies learnability with bounded sample size, disregarding computational complexity. As an application, we give distribution-independent upper bounds on the sample size needed for learning with feedforward neural networks, Our theorems use a generalized notion of VC dimension that applies to classes of real-valued functions, adapted from Vapnik and Pollard\u2019s work, and a notion of capacity and metric dimension for classes of functions that map into a bounded metric space. \u2018(\u20181 1992 Academic Press, Inc."}
{"_id": "70cd98a5710179eb20b7987afed80a044af0523e", "title": "Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima", "text": "We establish theoretical results concerning all local optima of various regularized M-estimators, where both loss and penalty functions are allowed to be nonconvex. Our results show that as long as the loss function satisfies restricted strong convexity and the penalty function satisfies suitable regularity conditions, any local optimum of the composite objective function lies within statistical precision of the true parameter vector. Our theory covers a broad class of nonconvex objective functions, including corrected versions of the Lasso for errors-in-variables linear models; regression in generalized linear models using nonconvex regularizers such as SCAD and MCP; and graph and inverse covariance matrix estimation. On the optimization side, we show that a simple adaptation of composite gradient descent may be used to compute a global optimum up to the statistical precision \u03b5stat in log(1/\u03b5stat) iterations, which is the fastest possible rate of any first-order method. We provide a variety of simulations to illustrate the sharpness of our theoretical predictions. Disciplines Statistics and Probability This conference paper is available at ScholarlyCommons: http://repository.upenn.edu/statistics_papers/222 Regularized M -estimators with nonconvexity: Statistical and algorithmic theory for local optima Po-Ling Loh Department of Statistics University of California, Berkeley Berkeley, CA 94720 ploh@berkeley.edu Martin J. Wainwright Departments of Statistics and EECS University of California, Berkeley Berkeley, CA 94720 wainwrig@stat.berkeley.edu"}
{"_id": "7f0125544c265cd97545d3ed6f9f7a58d7e93f6d", "title": "On the Quality of the Initial Basin in Overspecified Neural Networks", "text": "Deep learning, in the form of artificial neural networks, has achieved remarkable practical success in recent years, for a variety of difficult machine learning applications. However, a theoretical explanation for this remains a major open problem, since training neural networks involves optimizing a highly non-convex objective function, and is known to be computationally hard in the worst case. In this work, we study the geometric structure of the associated non-convex objective function, in the context of ReLU networks and starting from a random initialization of the network parameters. We identify some conditions under which it becomes more favorable to optimization, in the sense of (i) High probability of initializing at a point from which there is a monotonically decreasing path to a global minimum; and (ii) High probability of initializing at a basin (suitably defined) with a small minimal objective value. A common theme in our results is that such properties are more likely to hold for larger (\u201coverspecified\u201d) networks, which accords with some recent empirical and theoretical observations."}
{"_id": "9b8be6c3ebd7a79975067214e5eaea05d4ac2384", "title": "Gradient Descent Converges to Minimizers", "text": "We show that gradient descent converges to a local minimizer , almost surely with random initialization. This is proved by applying the Stable Manifold Theorem from dynamical systems theory."}
{"_id": "ff457fd8330129f8069c5c427c8be3228dc17884", "title": "2D electronics - opportunities and limitations", "text": "2D materials (2Ds), in particular the 2Ds beyond graphene, have attracted considerable attention in the transistor community. While rapid progress on 2D transistors has been achieved recently, the prospects of 2D electronics is still under debate. In the present paper we discuss our view of the potential of 2D transistors for digital CMOS and for flexible electronics. We show that 2D MOSFETs show promise for ultimately scaled CMOS due their excellent electrostatic integrity and their ability to suppress source-drain tunneling. Moreover, the 2Ds represent a promising option for radio frequency (RF) flexible electronics since they offer an attractive combination of mechanical (bendability, stretchability), and electrical (reasonable mobility) properties."}
{"_id": "ea236e24b5b938ae7d7d382d4378f3b311c011a0", "title": "Advancing Land-Sea Conservation Planning: Integrating Modelling of Catchments, Land-Use Change, and River Plumes to Prioritise Catchment Management and Protection", "text": "Human-induced changes to river loads of nutrients and sediments pose a significant threat to marine ecosystems. Ongoing land-use change can further increase these loads, and amplify the impacts of land-based threats on vulnerable marine ecosystems. Consequently, there is a need to assess these threats and prioritise actions to mitigate their impacts. A key question regarding prioritisation is whether actions in catchments to maintain coastal-marine water quality can be spatially congruent with actions for other management objectives, such as conserving terrestrial biodiversity. In selected catchments draining into the Gulf of California, Mexico, we employed Land Change Modeller to assess the vulnerability of areas with native vegetation to conversion into crops, pasture, and urban areas. We then used SedNet, a catchment modelling tool, to map the sources and estimate pollutant loads delivered to the Gulf by these catchments. Following these analyses, we used modelled river plumes to identify marine areas likely influenced by land-based pollutants. Finally, we prioritised areas for catchment management based on objectives for conservation of terrestrial biodiversity and objectives for water quality that recognised links between pollutant sources and affected marine areas. Our objectives for coastal-marine water quality were to reduce sediment and nutrient discharges from anthropic areas, and minimise future increases in coastal sedimentation and eutrophication. Our objectives for protection of terrestrial biodiversity covered species of vertebrates. We used Marxan, a conservation planning tool, to prioritise interventions and explore spatial differences in priorities for both objectives. Notable differences in the distributions of land values for terrestrial biodiversity and coastal-marine water quality indicated the likely need for trade-offs between catchment management objectives. However, there were priority areas that contributed to both sets of objectives. Our study demonstrates a practical approach to integrating models of catchments, land-use change, and river plumes with conservation planning software to inform prioritisation of catchment management."}
{"_id": "e12fd1721516d0f90de39d63a4e273b9d13d0a0c", "title": "Machine Learning in Medical Applications", "text": "Research in Machine Learning methods to-date remains centered on technological issues and is mostly application driven. This letter summarizes successful applications of machine learning methods that were presented at the Workshop on Machine Learning in Medical Applications. The goals of the workshop were to foster fundamental and applied research in the application of machine learning methods to medical problem solving and to medical research, to provide a forum for reporting significant results, to determine whether Machine Learning methods are able to underpin the research and development on intelligent systems for medical applications, and to identify those areas where increased research is likely to yield advances. A number of recommendations for a research agenda were produced, including both technical and human-centered issues."}
{"_id": "87ce789dbddfebd296993597d72e1950b846e99f", "title": "Multi-view Clustering via Multi-manifold Regularized Nonnegative Matrix Factorization", "text": "Multi-view clustering integrates complementary information from multiple views to gain better clustering performance rather than relying on a single view. NMF based multi-view clustering algorithms have shown their competitiveness among different multi-view clustering algorithms. However, NMF fails to preserve the locally geometrical structure of the data space. In this paper, we propose a multi-manifold regularized nonnegative matrix factorization framework (MMNMF) which can preserve the locally geometrical structure of the manifolds for multi-view clustering. MMNMF regards that the intrinsic manifold of the dataset is embedded in a convex hull of all the views' manifolds, and incorporates such an intrinsic manifold and an intrinsic (consistent) coefficient matrix with a multi-manifold regularizer to preserve the locally geometrical structure of the multi-view data space. We use linear combination to construct the intrinsic manifold, and propose two strategies to find the intrinsic coefficient matrix, which lead to two instances of the framework. Experimental results show that the proposed algorithms outperform existing NMF based algorithms for multi-view clustering."}
{"_id": "6cfdc673b5e0708eedf3fad9fb550a28e672b8aa", "title": "Extracting 3D Vascular Structures from Microscopy Images using Convolutional Recurrent Networks", "text": "Vasculature is known to be of key biological significance, especially in the study of cancer. As such, considerable effort has been focused on the automated measurement and analysis of vasculature in medical and pre-clinical images. In tumors in particular, the vascular networks may be extremely irregular and the appearance of the individual vessels may not conform to classical descriptions of vascular appearance. Typically, vessels are extracted by either a segmentation and thinning pipeline, or by direct tracking. Neither of these methods are well suited to microscopy images of tumor vasculature. In order to address this we propose a method to directly extract a medial representation of the vessels using Convolutional Neural Networks. We then show that these two-dimensional centerlines can be meaningfully extended into 3D in anisotropic and complex microscopy images using the recently popularized Convolutional Long Short-Term Memory units (ConvLSTM). We demonstrate the effectiveness of this hybrid convolutional-recurrent architecture over both 2D and 3D convolutional comparators."}
{"_id": "07528274309b357651919c59bea8fdafa1116277", "title": "Building fast and compact convolutional neural networks for offline handwritten Chinese character recognition", "text": "Like other problems in computer vision, offline handwritten Chinese character recognition (HCCR) has achieved impressive results using convolutional neural network (CNN)-based methods. However, larger and deeper networks are needed to deliver state-of-the-art results in this domain. Such networks intuitively appear to incur high computational cost, and require the storage of a large number of parameters, which render them unfeasible for deployment in portable devices. To solve this problem, we propose a Global Supervised Low-rank Expansion (GSLRE) method and an Adaptive Drop-weight (ADW) technique to solve the problems of speed and storage capacity. We design a nine-layer CNN for HCCR consisting of 3,755 classes, and devise an algorithm that can reduce the network\u2019s computational cost by nine times and compress the network to 1/18 of the original size of the baseline model, with only a 0.21% drop in accuracy. In tests, the proposed algorithm can still surpass the best single-network performance reported thus far in the literature while requiring only 2.3 MB for storage. Furthermore, when integrated with our effective forward implementation, the recognition of an offline character image takes only 9.7 ms on a CPU. Compared with the state-of-the-art CNN model for HCCR, \u2217Corresponding author Email addresses: xiaoxuefengchina@gmail.com (Xuefeng Xiao), lianwen.jin@gmail.com (Lianwen Jin) Preprint submitted to Pattern Recognition June 30, 2017"}
{"_id": "d9ef20252f9d90295460953e8ab78667b66919ad", "title": "Circuit Fingerprinting Attacks: Passive Deanonymization of Tor Hidden Services", "text": "This paper sheds light on crucial weaknesses in the design of hidden services that allow us to break the anonymity of hidden service clients and operators passively. In particular, we show that the circuits, paths established through the Tor network, used to communicate with hidden services exhibit a very different behavior compared to a general circuit. We propose two attacks, under two slightly different threat models, that could identify a hidden service client or operator using these weaknesses. We found that we can identify the users\u2019 involvement with hidden services with more than 98% true positive rate and less than 0.1% false positive rate with the first attack, and 99% true positive rate and 0.07% false positive rate with the second. We then revisit the threat model of previous website fingerprinting attacks, and show that previous results are directly applicable, with greater efficiency, in the realm of hidden services. Indeed, we show that we can correctly determine which of the 50 monitored pages the client is visiting with 88% true positive rate and false positive rate as low as 2.9%, and correctly deanonymize 50 monitored hidden service servers with true positive rate of 88% and false positive rate of 7.8% in an open world setting."}
{"_id": "e23222907f95c1fcdc87dc3d3cd93edeaa56fa66", "title": "MarrNet: 3D Shape Reconstruction via 2.5D Sketches", "text": "3D object reconstruction from a single image is a highly under-determined problem, requiring strong prior knowledge of plausible 3D shapes. This introduces challenges for learning-based approaches, as 3D object annotations are scarce in real images. Previous work chose to train on synthetic data with ground truth 3D information, but suffered from domain adaptation when tested on real data. In this work, we propose MarrNet, an end-to-end trainable model that sequentially estimates 2.5D sketches and 3D object shape. Our disentangled, two-step formulation has three advantages. First, compared to full 3D shape, 2.5D sketches are much easier to be recovered from a 2D image; models that recover 2.5D sketches are also more likely to transfer from synthetic to real data. Second, for 3D reconstruction from 2.5D sketches, systems can learn purely from synthetic data. This is because we can easily render realistic 2.5D sketches without modeling object appearance variations in real images, including lighting, texture, etc. This further relieves the domain adaptation problem. Third, we derive differentiable projective functions from 3D shape to 2.5D sketches; the framework is therefore end-to-end trainable on real images, requiring no human annotations. Our model achieves state-of-the-art performance on 3D shape reconstruction."}
{"_id": "8eea2b1ca6f62ffaaa96f6d25990ad433eb52588", "title": "Exploring the influential factors in continuance usage of mobile social Apps: Satisfaction, habit, and customer value perspectives", "text": "The emergence of mobile application software (App) has explosively grown in conjunction with the worldwide use of smartphones in recent years. Among numerous categories of mobile Apps, social Apps were one of those with the greatest growth in 2013. Despite abundant research on users\u2019 behavior intention of mobile App usage, few studies have focused on investigating key determinants of users\u2019 continuance intention regarding social Apps. To fill this gap, we integrated customer value perspectives to explore the influential factors in the continuance intention of social App use. Moreover, users\u2019 satisfaction and habit from both the marketing and psychology literature were also incorporated into the research model. A total of 378 valid questionnaires were collected by survey method, and structural equation modeling was employed in the subsequent data analysis. The results indicate that the continuance usage of social Apps is driven by users\u2019 satisfaction, tight connection with others, and hedonic motivation to use the Apps. In addition, full mediation effects of satisfaction and habit were found between perceived usefulness and intention to continue use. These findings extend our understanding of users\u2019 continuance intention in the context of social Apps. Discussion and implications are provided. 2015 Elsevier Ltd. All rights reserved."}
{"_id": "eecf83a2ed7765fc2729bc9e987a56a3d848dbb2", "title": "Perceivable Light Fields: Matching the Requirements Between the Human Visual System and Autostereoscopic 3-D Displays", "text": "Recently, there has been a substantial increase in efforts to develop 3-D visualization technologies that can provide the viewers with a realistic 3-D visual experience. Various terms such as \u201creality communication\u201d have been used to categorize these efforts. In order to provide the viewers with a complete and realistic visual sensation, the display or visualization system and the displayed content need to match the physiological 3-D information sensing capabilities of the human visual system which can be quite complex. These may include spatial and temporal resolutions, depth perception, dynamic range, spectral contents, nonlinear effects, and vergence accommodation effects. In this paper, first we present an overview of some of the 3-D display research efforts which have been extensively pursued in Asia, Europe, and North America among other areas. Based on the limitations and comfort-based requirements of the human visual system when viewing a nonnatural visual input from 3-D displays, we present an analytical framework that combines main perception and human visual requirements with analytical tools and principles used in related disciplines such as optics, computer graphics, computational imaging, and signal processing. Building on the widely used notion of light fields, we define a notion of perceivable light fields to account for the human visual system physiological requirements, and propagate it back to the display device to determine the display device specifications. This helps us clarify the fundamental and practical requirements of the 3-D display devices for reality viewing communication. In view of the proposed analytical framework, we overview various methods that can be applied to overcome the extensive information needed to be displayed in order to meet the requirements imposed by the human visual system."}
{"_id": "75235e03ac0ec643e8a784f432e6d1567eea81b7", "title": "Advances in data stream mining", "text": "Mining data streams has been a focal point of research interest over the past decade. Hardware and software advances have contributed to the significance of this area of research by introducing faster than ever data generation. This rapidly generated data has been termed as data streams. Credit card transactions, Google searches, phone calls in a city, and many others\\are typical data streams. In many important applications, it is inevitable to analyze this streaming data in real time. Traditional data mining techniques have fallen short in addressing the needs of data stream mining. Randomization, approximation, and adaptation have been used extensively in developing new techniques or adopting exiting ones to enable them to operate in a streaming environment. This paper reviews key milestones and state of the art in the data stream mining area. Future insights are also be presented. C \u00a9 2011 Wiley Periodicals, Inc."}
{"_id": "29232c81c51b961ead3d38e6838fe0fb9c279e01", "title": "Visual tracking with online Multiple Instance Learning", "text": "In this paper, we address the problem of learning an adaptive appearance model for object tracking. In particular, a class of tracking techniques called \u201ctracking by detection\u201d have been shown to give promising results at real-time speeds. These methods train a discriminative classifier in an online manner to separate the object from the background. This classifier bootstraps itself by using the current tracker state to extract positive and negative examples from the current frame. Slight inaccuracies in the tracker can therefore lead to incorrectly labeled training examples, which degrades the classifier and can cause further drift. In this paper we show that using Multiple Instance Learning (MIL) instead of traditional supervised learning avoids these problems, and can therefore lead to a more robust tracker with fewer parameter tweaks. We present a novel online MIL algorithm for object tracking that achieves superior results with real-time performance."}
{"_id": "e7965fe4d25e59bc39870c760ef0d62024c9e2a3", "title": "Frontal terminations for the inferior fronto-occipital fascicle: anatomical dissection, DTI study and functional considerations on a multi-component bundle", "text": "The anatomy and functional role of the inferior fronto-occipital fascicle (IFOF) remain poorly known. We accurately analyze its course and the anatomical distribution of its frontal terminations. We propose a classification of the IFOF in different subcomponents. Ten hemispheres (5 left, 5 right) were dissected with Klingler\u2019s technique. In addition to the IFOF dissection, we performed a 4-T diffusion tensor imaging study on a single healthy subject. We identified two layers of IFOF. The first one is superficial and antero-superiorly directed, terminating in the inferior frontal gyrus. The second is deeper and consists of three portions: posterior, middle and anterior. The posterior component terminates in the middle frontal gyrus (MFG) and dorso-lateral prefrontal cortex. The middle component terminates in the MFG and lateral orbito-frontal cortex. The anterior one is directed to the orbito-frontal cortex and frontal pole. In vivo tractography study confirmed these anatomical findings. We suggest that the distribution of IFOF fibers within the frontal lobe corresponds to a fine functional segmentation. IFOF can be considered as a \u201cmulti-function\u201d bundle, with each anatomical subcomponent subserving different brain processing. The superficial layer and the posterior component of the deep layer, which connects the occipital extrastriate, temporo-basal and inferior frontal cortices, might subserve semantic processing. The middle component of the deep layer could play a role in a multimodal sensory\u2013motor integration. Finally, the anterior component of the deep layer might be involved in emotional and behavioral aspects."}
{"_id": "c8052beb3f7c41323d4de5930ef602511dbd553b", "title": "Digital Marketing Maturity Models : Overview and Comparison", "text": "The variety of available digital tools, strategies and activities might confuse and disorient even an experienced marketer. This applies in particular to B2B companies, which are usually less flexible in uptaking of digital technology than B2C companies. B2B companies are lacking a framework that corresponds to the specifics of the B2B business, and which helps to evaluate a company\u2019s capabilities and to choose an appropriate path. A B2B digital marketing maturity model helps to fill this gap. However, modern marketing offers no widely approved digital marketing maturity model, and thus, some marketing institutions provide their own tools. The purpose of this paper is building an optimized B2B digital marketing maturity model based on a SWOT (strengths, weaknesses, opportunities, and threats) analysis of existing models. The current study provides an analytical review of the existing digital marketing maturity models with open access. The results of the research are twofold. First, the provided SWOT analysis outlines the main advantages and disadvantages of existing models. Secondly, the strengths of existing digital marketing maturity models, helps to identify the main characteristics and the structure of an optimized B2B digital marketing maturity model. The research findings indicate that only one out of three analyzed models could be used as a separate tool. This study is among the first examining the use of maturity models in digital marketing. It helps businesses to choose between the existing digital marketing models, the most effective one. Moreover, it creates a base for future research on digital marketing maturity models. This study contributes to the emerging B2B digital marketing literature by providing a SWOT analysis of the existing digital marketing maturity models and suggesting a structure and main characteristics of an optimized B2B digital marketing maturity model. Keywords\u2014B2B digital marketing strategy, digital marketing, digital marketing maturity model, SWOT analysis."}
{"_id": "2aa5c1340d7f54e38c707ff3b2275e4e3052150f", "title": "Watershed Cuts: Minimum Spanning Forests and the Drop of Water Principle", "text": "We study the watersheds in edge-weighted graphs. We define the watershed cuts following the intuitive idea of drops of water flowing on a topographic surface. We first establish the consistency of these watersheds: they can be equivalently defined by their \"catchment basinsrdquo (through a steepest descent property) or by the \"dividing linesrdquo separating these catchment basins (through the drop of water principle). Then, we prove, through an equivalence theorem, their optimality in terms of minimum spanning forests. Afterward, we introduce a linear-time algorithm to compute them. To the best of our knowledge, similar properties are not verified in other frameworks and the proposed algorithm is the most efficient existing algorithm, both in theory and in practice. Finally, the defined concepts are illustrated in image segmentation, leading to the conclusion that the proposed approach improves, on the tested images, the quality of watershed-based segmentations."}
{"_id": "7097d025d2ac36154de75f65b3c33e213a53f037", "title": "Eating when bored: revision of the emotional eating scale with a focus on boredom.", "text": "OBJECTIVE\nThe current study explored whether eating when bored is a distinct construct from other negative emotions by revising the emotional eating scale (EES) to include a separate boredom factor. Additionally, the relative endorsement of eating when bored compared to eating in response to other negative emotions was examined.\n\n\nMETHOD\nA convenience sample of 139 undergraduates completed open-ended questions regarding their behaviors when experiencing different levels of emotions. Participants were then given the 25-item EES with 6 additional items designed to measure boredom.\n\n\nRESULTS\nOn the open-ended items, participants more often reported eating in response to boredom than the other emotions. Exploratory factor analysis showed that boredom is a separate construct from other negative emotions. Additionally, the most frequently endorsed item on the EES was \"eating when bored\".\n\n\nCONCLUSIONS\nThese results suggest that boredom is an important construct, and that it should be considered a separate dimension of emotional eating."}
{"_id": "f4d65af751f0a6448f40087267dfd505e885405c", "title": "Classification of seizure based on the time-frequency image of EEG signals using HHT and SVM", "text": "The detection of seizure activity in electroencephalogram (EEG) signals is crucial for the classification of epileptic seizures. However, epileptic seizures occur irregularly and unpredictably, automatic seizure detection in EEG recordings is highly required. In this work, we present a new technique for seizure classification of EEG signals using Hilbert\u2013Huang transform (HHT) and support vector machine (SVM). In our method, the HHT based time-frequency representation (TFR) has been considered as time-frequency image (TFI), the segmentation of TFI has been implemented based on the frequency-bands of the rhythms of EEG signals, the histogram of grayscale sub-images has been represented. Statistical features such as ime-frequency image upport vector machine eizure classification mean, variance, skewness and kurtosis of pixel intensity in the histogram have been extracted. The SVM with radial basis function (RBF) kernel has been employed for classification of seizure and nonseizure EEG signals. The classification accuracy and receiver operating characteristics (ROC) curve have been used for evaluating the performance of the classifier. Experimental results show that the best average classification accuracy of this algorithm can reach 99.125% with the theta rhythm of EEG signals."}
{"_id": "210824e7e4c8fb59afa4e2533498e84889409e5d", "title": "MagicFuzzer: Scalable deadlock detection for large-scale applications", "text": "We present MagicFuzzer, a novel dynamic deadlock detection technique. Unlike existing techniques to locate potential deadlock cycles from an execution, it iteratively prunes lock dependencies that each has no incoming or outgoing edge. Combining with a novel thread-specific strategy, it dramatically shrinks the size of lock dependency set for cycle detection, improving the efficiency and scalability of such a detection significantly. In the real deadlock confirmation phase, it uses a new strategy to actively schedule threads of an execution against the whole set of potential deadlock cycles. We have implemented a prototype and evaluated it on large-scale C/C++ programs. The experimental results confirm that our technique is significantly more effective and efficient than existing techniques."}
{"_id": "7ae45875848ded54d99077e73038c482ea87934f", "title": "Positive Definite Kernels in Machine Learning", "text": "This survey is an introduction to positive definite kernels and the set of methods they have inspired in the machine learning literature, namely kernel methods. We first discuss some properties of positive definite kernels as well as reproducing kernel Hibert spaces, the natural extension of the set of functions {k(x, \u00b7), x \u2208 X} associated with a kernel k defined on a space X . We discuss at length the construction of kernel functions that take advantage of well-known statistical models. We provide an overview of numerous data-analysis methods which take advantage of reproducing kernel Hilbert spaces and discuss the idea of combining several kernels to improve the performance on certain tasks. We also provide a short cookbook of different kernels which are particularly useful for certain datatypes such as images, graphs or speech segments. Remark: This report is a draft. I apologize in advance for the numerous mistakes, typos, and not always well written parts it contains. Comments and suggestions will be highly appreciated."}
{"_id": "481ca924dcf4e0c758534467621f2477adfb6379", "title": "Single imputation with multilayer perceptron and multiple imputation combining multilayer perceptron and k-nearest neighbours for monotone patterns", "text": "The knowledge discovery process is supported by data files information gathered from collected data sets, which often contain errors in the form of missing values. Data imputation is the activity aimed at estimating values for missing data items. This study focuses on the development of automated data imputation models, based on artificial neural networks for monotone patterns of missing values. The present work proposes a single imputation approach relying on a multilayer perceptron whose training is conducted with different learning rules, and a multiple imputation approach based on the combination of multilayer perceptron and k-nearest neighbours. Eighteen real and simulated databases were exposed to a perturbation experiment with random generation of monotone missing data pattern. An empirical test ean/mode model ultilayer perceptron egression model was accomplished on these data sets, including both approaches (single and multiple imputations), and three classical single imputation procedures \u2013 mean/mode imputation, regression and hot-deck \u2013 were also considered. Therefore, the experiments involved five imputation methods. The results, considering different performance measures, demonstrated that, in comparison with traditional tools, both proposals improve the automation level and data quality offering a satisfactory performance."}
{"_id": "8d137ca166879949120568810c15d422abec4a1e", "title": "Who Uses Bitcoin? An exploration of the Bitcoin community", "text": "Many cryptocurrencies have come into existence in recent years, with Bitcoin the most prominent among them. Although its short history has been volatile, the virtual currency maintains a core group of committed users. This paper presents an exploratory analysis of Bitcoin users. As a virtual currency and peer-to-peer payment system, Bitcoin may signal future challenges to state oversight and financial powers through its decentralized structure and offer of instantaneous transactions with relative anonymity. Very little is known about the users of Bitcoin, however. Utilizing publicly available survey data of Bitcoin users, this analysis explores the structure of the Bitcoin community in terms of wealth accumulation, optimism about the future of Bitcoin, and themes that attract users to the cryptocurrency. Results indicate that age, time of initial use, geographic location, mining status, engaging online discourse, and political orientation are all relevant factors that help explain various aspects of Bitcoin wealth, optimism, and attraction."}
{"_id": "2b1ceea2ce803ebe58c448daea0f8d1571cdd3ba", "title": "3-D Reciprocal Collision Avoidance on Physical Quadrotor Helicopters with On-Board Sensing for Relative Positioning", "text": "In this paper, we present an implementation of 3D reciprocal collision avoidance on real quadrotor helicopters where each quadrotor senses the relative position and veloc ity of other quadrotors using an on-board camera. We show that using our approach, quadrotors are able to successfully avo id pairwise collisions GPS and motion-capture denied environ ments, without communication between the quadrotors, and even when human operators deliberately attempt to induce collision. To our knowledge, this is the first time that reciprocal collision avoidance has been successfully implemented on r eal robots where each agent independently observes the others using on-board sensors. We theoretically analyze the respo nse of the collision-avoidance algorithm to the violated assumptions by the use of real robots. We quantitatively analyze our experimental results. A particularly striking observation is that at times the quadrotors exhibit \u201creciprocal dance\u201d behavior, which is also observed when humans move past each other in constrained environments. This seems to be the result of sensing uncertainty, which causes both robots involved to h ave a different belief about the relative positions and velocit ies and, as a result, choose the same side on which to pass."}
{"_id": "2327ad6f237b37150e84f0d745a05565ebf0b24d", "title": "Zerocash: Decentralized Anonymous Payments from Bitcoin", "text": "Bit coin is the first digital currency to see widespread adoption. While payments are conducted between pseudonyms, Bit coin cannot offer strong privacy guarantees: payment transactions are recorded in a public decentralized ledger, from which much information can be deduced. Zero coin (Miers et al., IEEE S&P 2013) tackles some of these privacy issues by unlinking transactions from the payment's origin. Yet, it still reveals payments' destinations and amounts, and is limited in functionality. In this paper, we construct a full-fledged ledger-based digital currency with strong privacy guarantees. Our results leverage recent advances in zero-knowledge Succinct Non-interactive Arguments of Knowledge (zk-SNARKs). First, we formulate and construct decentralized anonymous payment schemes (DAP schemes). A DAP scheme enables users to directly pay each other privately: the corresponding transaction hides the payment's origin, destination, and transferred amount. We provide formal definitions and proofs of the construction's security. Second, we build Zero cash, a practical instantiation of our DAP scheme construction. In Zero cash, transactions are less than 1 kB and take under 6 ms to verify - orders of magnitude more efficient than the less-anonymous Zero coin and competitive with plain Bit coin."}
{"_id": "3d08280ae82c2044c8dcc66d2be5a72c738e9cf9", "title": "Metadata Embeddings for User and Item Cold-start Recommendations", "text": "I present a hybrid matrix factorisation model representing users and items as linear combinations of their content features\u2019 latent factors. The model outperforms both collaborative and content-based models in cold-start or sparse interaction data scenarios (using both user and item metadata), and performs at least as well as a pure collaborative matrix factorisation model where interaction data is abundant. Additionally, feature embeddings produced by the model encode semantic information in a way reminiscent of word embedding approaches, making them useful for a range of related tasks such as tag recommendations."}
{"_id": "6659c47db31129a60df2b113b46f45614197d5d8", "title": "Extremely Low-Profile, Single-Arm, Wideband Spiral Antenna Radiating a Circularly Polarized Wave", "text": "The antenna characteristics are described of a single-arm spiral (SAS) antenna. A balun circuit, required for a conventional two-arm balanced-mode spiral (TAS), is not necessary for feeding this SAS. First, the radiation pattern of an SAS having a disc/ground-plane is investigated. When the radius of the disc is smaller than the radius of the first-mode active region on the spiral and the spacing between the disc and spiral plane is small, the SAS is found to radiate a circularly polarized (CP) bidirectional beam, whose radiation pattern is almost symmetric with respect to the antenna axis normal to the spiral plane. Second, from a practical standpoint, the CP bidirectional beam is transformed into a CP unidirectional beam by placing a conducting cavity behind an SAS with a disc. It is revealed that the SAS has a good VSWR (less than 2) and a good axial ratio (less than 3 dB) over a design frequency range of 3 GHz to 10 GHz, where a cavity/antenna height of 7 mm (0.07 wavelength at the lowest design frequency) is chosen. The frequency response of the gain for the SAS is found to be similar to that for a conventional TAS, and the radiation efficiency for the SAS is slightly larger than that for the TAS. It is concluded that the SAS with a small disc backed by a cavity realizes a circularly polarized, low-profile, wideband antenna with a simple feed system that does not require a balun circuit."}
{"_id": "3692d1c5e36145a5783135f3f077f6486e263d23", "title": "Vocabulary acquisition from extensive reading : A case study", "text": "A number of studies have shown that second language learners acquire vocabulary through reading, but only relatively small amounts. However, most of these studies used only short texts, measured only the acquisition of meaning, and did not credit partial learning of words. This case study of a learner of French explores whether an extensive reading program can enhance lexical knowledge. The study assessed a relatively large number of words (133), and examined whether one month of extensive reading enhanced knowledge of these target words' spelling, meaning, and grammatical characteristics. The measurement procedure was a one-on-one interview that allowed a very good indication of whether learning occurred. The study also explores how vocabulary acquisition varies according to how often words are encountered in the texts. The results showed that knowledge of 65% of the target words was enhanced in some way, for a pickup rate of about 1 of every 1.5 words tested. Spelling was strongly enhanced, even from a small number of exposures. Meaning and grammatical knowledge were also enhanced, but not to the same extent. Overall, the study indicates that more vocabulary acquisition is possible from extensive reading than previous studies have suggested."}
{"_id": "46977c2e7a812e37f32eb05ba6ad16e03ee52906", "title": "Gated End-to-End Memory Networks", "text": "Machine reading using differentiable reasoning models has recently shown remarkable progress. In this context, End-to-End trainable Memory Networks (MemN2N) have demonstrated promising performance on simple natural language based reasoning tasks such as factual reasoning and basic deduction. However, other tasks, namely multi-fact questionanswering, positional reasoning or dialog related tasks, remain challenging particularly due to the necessity of more complex interactions between the memory and controller modules composing this family of models. In this paper, we introduce a novel end-to-end memory access regulation mechanism inspired by the current progress on the connection short-cutting principle in the field of computer vision. Concretely, we develop a Gated End-toEnd trainable Memory Network architecture (GMemN2N). From the machine learning perspective, this new capability is learned in an end-to-end fashion without the use of any additional supervision signal which is, as far as our knowledge goes, the first of its kind. Our experiments show significant improvements on the most challenging tasks in the 20 bAbI dataset, without the use of any domain knowledge. Then, we show improvements on the Dialog bAbI tasks including the real human-bot conversion-based Dialog State Tracking Challenge (DSTC-2) dataset. On these two datasets, our model sets the new state of the art. \u2217work done as an Intern at XRCE"}
{"_id": "a3348ee4cc93e68485df4d06e205e064f12c0239", "title": "Feature Selection for Maximizing the Area Under the ROC Curve", "text": "Feature selection is an important pre-processing step for solving classification problems. A good feature selection method may not only improve the performance of the final classifier, but also reduce the computational complexity of it. Traditionally, feature selection methods were developed to maximize the classification accuracy of a classifier. Recently, both theoretical and experimental studies revealed that a classifier with the highest accuracy might not be ideal in real-world problems. Instead, the Area Under the ROC Curve (AUC) has been suggested as the alternative metric, and many existing learning algorithms have been modified in order to seek the classifier with maximum AUC. However, little work was done to develop new feature selection methods to suit the requirement of AUC maximization. To fill this gap in the literature, we propose in this paper a novel algorithm, called AUC and Rank Correlation coefficient Optimization (ARCO) algorithm. ARCO adopts the general framework of a well-known method, namely minimal redundancy- maximal-relevance (mRMR) criterion, but defines the terms \u201drelevance\u201d and \u201dredundancy\u201d in totally different ways. Such a modification looks trivial from the perspective of algorithmic design. Nevertheless, experimental study on four gene expression data sets showed that feature subsets obtained by ARCO resulted in classifiers with significantly larger AUC than the feature subsets obtained by mRMR. Moreover, ARCO also outperformed the Feature Assessment by Sliding Thresholds algorithm, which was recently proposed for AUC maximization, and thus the efficacy of ARCO was validated."}
{"_id": "49f5671cdf3520e04104891265c74b34c22ebccc", "title": "Overconfidence and Trading Volume", "text": "Theoretical models predict that overconfident investors will trade more than rational investors. We directly test this hypothesis by correlating individual overconfidence scores with several measures of trading volume of individual investors. Approximately 3,000 online broker investors were asked to answer an internet questionnaire which was designed to measure various facets of overconfidence (miscalibration, volatility estimates, better than average effect). The measures of trading volume were calculated by the trades of 215 individual investors who answered the questionnaire. We find that investors who think that they are above average in terms of investment skills or past performance (but who did not have above average performance in the past) trade more. Measures of miscalibration are, contrary to theory, unrelated to measures of trading volume. This result is striking as theoretical models that incorporate overconfident investors mainly motivate this assumption by the calibration literature and model overconfidence as underestimation of the variance of signals. In connection with other recent findings, we conclude that the usual way of motivating and modeling overconfidence which is mainly based on the calibration literature has to be treated with caution. Moreover, our way of empirically evaluating behavioral finance models the correlation of economic and psychological variables and the combination of psychometric measures of judgment biases (such as overconfidence scores) and field data seems to be a promising way to better understand which psychological phenomena actually drive economic behavior."}
{"_id": "5703617b9d9d40e90b6c8ffa21a52734d9822d60", "title": "Defining Computational Thinking for Mathematics and Science Classrooms", "text": "Science and mathematics are becoming computational endeavors. This fact is reflected in the recently released Next Generation Science Standards and the decision to include \u2018\u2018computational thinking\u2019\u2019 as a core scientific practice. With this addition, and the increased presence of computation in mathematics and scientific contexts, a new urgency has come to the challenge of defining computational thinking and providing a theoretical grounding for what form it should take in school science and mathematics classrooms. This paper presents a response to this challenge by proposing a definition of computational thinking for mathematics and science in the form of a taxonomy consisting of four main categories: data practices, modeling and simulation practices, computational problem solving practices, and systems thinking practices. In formulating this taxonomy, we draw on the existing computational thinking literature, interviews with mathematicians and scientists, and exemplary computational thinking instructional materials. This work was undertaken as part of a larger effort to infuse computational thinking into high school science and mathematics curricular materials. In this paper, we argue for the approach of embedding computational thinking in mathematics and science contexts, present the taxonomy, and discuss how we envision the taxonomy being used to bring current educational efforts in line with the increasingly computational nature of modern science and mathematics."}
{"_id": "5fee30aa381e5382ac844c48cdc7bf65f316b3cb", "title": "Flexible Learning on the Sphere Via Adaptive Needlet Shrinkage and Selection", "text": "This paper introduces an approach for flexible, robust Bayesian modeling of structure in spherical data sets. Our method is based upon a recent construction called the needlet, which is a particular form of spherical wavelet with many favorable statistical and computational properties. We perform shrinkage and selection of needlet coefficients, focusing on two main alternatives: empirical-Bayes thresholding for selection, and the horseshoe prior for shrinkage. We study the performance of the proposed methodology both on simulated data and on a real data set involving the cosmic microwave background radiation. Horseshoe shrinkage of needlet coefficients is shown to yield the best overall performance against some common benchmarks."}
{"_id": "129298d1dd8a9ec398d7f14a706e5d06d3aead01", "title": "A privacy-compliant fingerprint recognition system based on homomorphic encryption and Fingercode templates", "text": "The privacy protection of the biometric data is an important research topic, especially in the case of distributed biometric systems. In this scenario, it is very important to guarantee that biometric data cannot be steeled by anyone, and that the biometric clients are unable to gather any information different from the single user verification/identification. In a biom\u00e9trie system with high level of privacy compliance, also the server that processes the biom\u00e9trie matching should not learn anything on the database and it should be impossible for the server to exploit the resulting matching values in order to extract any knowledge about the user presence or behavior. Within this conceptual framework, in this paper we propose a novel complete demonstrator based on a distributed biom\u00e9trie system that is capable to protect the privacy of the individuals by exploiting cryptosystems. The implemented system computes the matching task in the encrypted domain by exploiting homomorphic encryption and using Fingercode templates. The paper describes the design methodology of the demonstrator and the obtained results. The demonstrator has been fully implemented and tested in real applicative conditions. Experimental results show that this method is feasible in the cases where the privacy of the data is more important than the accuracy of the system and the obtained computational time is satisfactory."}
{"_id": "69852901bb1d411717d59f7647b1c332008393c1", "title": "Extended Kalman filter based grid synchronization in the presence of voltage unbalance for smart grid", "text": "In this paper, grid synchronization for gird-connected power generation systems in the presence of voltage unbalance and frequency variation is considered. A new extended Kalman filter based synchronization algorithm is proposed to track the phase angle of the utility network. Instead of processing the three-phase voltage signal in the abc natural reference frame and resorting to the symmetrical component transformation as in the traditional way, the proposed algorithm separates the positive and negative sequences in the transformed \u03b1\u03b2 stationary reference frame. Based on the obtained expressions in \u03b1\u03b2 domain, an EKF is developed to track both the in-phase and quadrature sinusoidal signals together with the unknown frequency. An estimate of the phase angle of the positive sequence is then obtained. As a by-product, estimates of phase angle of negative sequence and grid frequency are also computed. Compared to the commonly used scheme, the proposed algorithm has a simplified structure. The good performance is supported by computer simulations."}
{"_id": "54efb068debeea58fd05951f86db797b5b5e4788", "title": "Frequency split metal artifact reduction (FSMAR) in computed tomography.", "text": "PURPOSE\nThe problem of metal artifact reduction (MAR) is almost as old as the clinical use of computed tomography itself. When metal implants are present in the field of measurement, severe artifacts degrade the image quality and the diagnostic value of CT images. Up to now, no generally accepted solution to this issue has been found. In this work, a method based on a new MAR concept is presented: frequency split metal artifact reduction (FSMAR). It ensures efficient reduction of metal artifacts at high image quality with enhanced preservation of details close to metal implants.\n\n\nMETHODS\nFSMAR combines a raw data inpainting-based MAR method with an image-based frequency split approach. Many typical methods for metal artifact reduction are inpainting-based MAR methods and simply replace unreliable parts of the projection data, for example, by linear interpolation. Frequency split approaches were used in CT, for example, by combining two reconstruction methods in order to reduce cone-beam artifacts. FSMAR combines the high frequencies of an uncorrected image, where all available data were used for the reconstruction with the more reliable low frequencies of an image which was corrected with an inpainting-based MAR method. The algorithm is tested in combination with normalized metal artifact reduction (NMAR) and with a standard inpainting-based MAR approach. NMAR is a more sophisticated inpainting-based MAR method, which introduces less new artifacts which may result from interpolation errors. A quantitative evaluation was performed using the examples of a simulation of the XCAT phantom and a scan of a spine phantom. Further evaluation includes patients with different types of metal implants: hip prostheses, dental fillings, neurocoil, and spine fixation, which were scanned with a modern clinical dual source CT scanner.\n\n\nRESULTS\nFSMAR ensures sharp edges and a preservation of anatomical details which is in many cases better than after applying an inpainting-based MAR method only. In contrast to other MAR methods, FSMAR yields images without the usual blurring close to implants.\n\n\nCONCLUSIONS\nFSMAR should be used together with NMAR, a combination which ensures an accurate correction of both high and low frequencies. The algorithm is computationally inexpensive compared to iterative methods and methods with complex inpainting schemes. No parameters were chosen manually; it is ready for an application in clinical routine."}
{"_id": "7a3f1ea18bf3e8223890b122bc31fb79db758c6e", "title": "Tagging Urdu Sentences from English POS Taggers", "text": "Being a global language, English has attracted a majority of researchers and academia to work on several Natural Language Processing (NLP) applications. The rest of the languages are not focused as much as English. Part-of-speech (POS) Tagging is a necessary component for several NLP applications. An accurate POS Tagger for a particular language is not easy to construct due to the diversity of that language. The global language English, POS Taggers are more focused and widely used by the researchers and academia for NLP processing. In this paper, an idea of reusing English POS Taggers for tagging non-English sentences is proposed. On exemplary basis, Urdu sentences are processed to tagged from 11 famous English POS Taggers. State-of-the-art English POS Taggers were explored from the literature, however, 11 famous POS Taggers were being input to Urdu sentences for tagging. A famous Google translator is used to translate the sentences across the languages. Data from twitter.com is extracted for evaluation perspective. Confusion matrix with kappa statistic is used to measure the accuracy of actual Vs predicted tagging. The two best English POS Taggers which tagged Urdu sentences were Stanford POS Tagger and MBSP POS Tagger with an accuracy of 96.4% and 95.7%, respectively. The system can be generalized for multilingual sentence tagging. Keywords\u2014Standford part-of-speech (POS) tagger; Google translator; Urdu POS tagging; kappa statistic"}
{"_id": "21677e1dda607f649cd63b2311795ce6e2653b33", "title": "IR sensitivity enhancement of CMOS Image Sensor with diffractive light trapping pixels", "text": "We report on the IR sensitivity enhancement of back-illuminated CMOS Image Sensor (BI-CIS) with 2-dimensional diffractive inverted pyramid array structure (IPA) on crystalline silicon (c-Si) and deep trench isolation (DTI). FDTD simulations of semi-infinite thick c-Si having 2D IPAs on its surface whose pitches over 400\u2009nm shows more than 30% improvement of light absorption at \u03bb\u2009=\u2009850\u2009nm and the maximum enhancement of 43% with the 540\u2009nm pitch at the wavelength is confirmed. A prototype BI-CIS sample with pixel size of 1.2\u2009\u03bcm square containing 400\u2009nm pitch IPAs shows 80% sensitivity enhancement at \u03bb\u2009=\u2009850\u2009nm compared to the reference sample with flat surface. This is due to diffraction with the IPA and total reflection at the pixel boundary. The NIR images taken by the demo camera equip with a C-mount lens show 75% sensitivity enhancement in the \u03bb\u2009=\u2009700\u20131200\u2009nm wavelength range with negligible spatial resolution degradation. Light trapping CIS pixel technology promises to improve NIR sensitivity and appears to be applicable to many different image sensor applications including security camera, personal authentication, and range finding Time-of-Flight camera with IR illuminations."}
{"_id": "45a1a5dd0b9186d6476b56ab10bc582f003da2c5", "title": "The Gambler's Ruin Problem, Genetic Algorithms, and the Sizing of Populations", "text": "This paper presents a model to predict the convergence quality of genetic algorithms based on the size of the population. The model is based on an analogy between selection in GAs and one-dimensional random walks. Using the solution to a classic random walk problemthe gambler's ruinthe model naturally incorporates previous knowledge about the initial supply of building blocks (BBs) and correct selection of the best BB over its competitors. The result is an equation that relates the size of the population with the desired quality of the solution, as well as the problem size and difficulty. The accuracy of the model is verified with experiments using additively decomposable functions of varying difficulty. The paper demonstrates how to adjust the model to account for noise present in the fitness evaluation and for different tournament sizes."}
{"_id": "79ce9533944cdee059232495fc9f94f1d47eb900", "title": "Deep Learning Approaches to Semantic Relevance Modeling for Chinese Question-Answer Pairs", "text": "The human-generated question-answer pairs in the Web social communities are of great value for the research of automatic question-answering technique. Due to the large amount of noise information involved in such corpora, it is still a problem to detect the answers even though the questions are exactly located. Quantifying the semantic relevance between questions and their candidate answers is essential to answer detection in social media corpora. Since both the questions and their answers usually contain a small number of sentences, the relevance modeling methods have to overcome the problem of word feature sparsity. In this article, the deep learning principle is introduced to address the semantic relevance modeling task. Two deep belief networks with different architectures are proposed by us to model the semantic relevance for the question-answer pairs. According to the investigation of the textual similarity between the community-driven question-answering (cQA) dataset and the forum dataset, a learning strategy is adopted to promote our models\u2019 performance on the social community corpora without hand-annotating work. The experimental results show that our method outperforms the traditional approaches on both the cQA and the forum corpora."}
{"_id": "bf4aa4acdfa83586bbdbd6351b1a96dbb9672c4c", "title": "JINS MEME algorithm for estimation and tracking of concentration of users", "text": "Activity tracking using a wearable device is an emerging research field. Large-scale studies on activity tracking performed with eyewear-type wearable devices remains a challenging area owing to the negative effect such devices have on users' looks. To cope with this challenge, JINS Inc., an eyewear retailer in Japan, has developed a state-of-the-art smart eyewear called JINS MEME. The design of JINS MEME is almost the same as that of Wellington-type eyeglasses so that people can use it in their daily lives. JINS MEME is equipped with sensors to detect a user's eye movement and head motion. In addition to these functions of JINS MEME, JINS developed an application to measure concentration levels of users. In this demonstration, users will experience wearing the JINS MEME glasses and their concentration will be measured while they perform a certain task at our booth."}
{"_id": "75a4944af32cd53aa30bf3af533556401b9f3759", "title": "A 5GS/s voltage-to-time converter in 90nm CMOS", "text": "A voltage-to-time converter (VTC) is presented for use in a time-based analog-to-digital converter (ADC). The converter runs with a 5 GHz input clock to provide a maximum conversion rate of 5 GS/s. A novel architecture enables the VTC to provide an adjustable linear delay versus input voltage characteristic. The circuit is realized in a 90nm CMOS process. After calibration, the converter achieves better than 3.7 effective bits for input frequencies up to 1.75 GHz, making it suitable for use in a time-based ADC with up to 4-bit resolution."}
{"_id": "03809a85789f7aeb39002fdcd7c3cdf33cc7370f", "title": "A Client-Driven Approach for Channel Management in Wireless LANs", "text": "We propose an efficient client-based approach for channel management (channel assignment and load balancing) in 802.11-based WLANs that lead to better usage of the wireless spectrum. This approach is based on a \u201cconflict set coloring\u201d formulation that jointly performs load balancing along with channel assignment. Such a formulation has a number of advantages. First, it explicitly captures interference effects at clients. Next, it intrinsically exposes opportunities for better channel re-use. Finally, algorithms based on this formulation do not depend on specific physical RF models and hence can be applied efficiently to a wide-range of inbuilding as well as outdoor scenarios. We have performed extensive packet-level simulations and measurements on a deployed wireless testbed of 70 APs to validate the performance of our proposed algorithms. We show that in addition to single network scenarios, the conflict set coloring formulation is well suited for channel assignment where multiple wireless networks share and contend for spectrum in the same physical space. Our results over a wide range of both simulated topologies and in-building testbed experiments indicate that our approach improves application level performance at the clients by upto three times (and atleast 50%) in comparison to current best-known techniques."}
{"_id": "cfb3d339a6b369144c356a93e3b519f22928e238", "title": "Meta-Learning with Hessian Free Approach in Deep Neural Nets Training", "text": "Meta-learning is a promising method to achieve efficient training method towards deep neural net and has been attracting increases interests in recent years. But most of the current methods are still not capable to train complex neuron net model with long-time training process. In this paper, a novel second-order meta-optimizer, named Meta-learning with Hessian-Free(MLHF) approach, is proposed based on the Hessian-Free approach. Two recurrent neural networks are established to generate the damping and the precondition matrix of this Hessian-Free framework. A series of techniques to meta-train the MLHF towards stable and reinforce the meta-training of this optimizer, including the gradient calculation of H . Numerical experiments on deep convolution neural nets, including CUDA-convnet and ResNet18(v2), with datasets of CIFAR10 and ILSVRC2012, indicate that the MLHF shows good and continuous training performance during the whole long-time training process, i.e., both the rapiddecreasing early stage and the steadily-deceasing later stage, and so is a promising meta-learning framework towards elevating the training efficiency in real-world deep neural nets."}
{"_id": "2449a067370ca24353ee8e9fd5e8187cf08ca8f7", "title": "Thoth: Comprehensive Policy Compliance in Data Retrieval Systems", "text": "Data retrieval systems process data from many sources, each subject to its own data use policy. Ensuring compliance with these policies despite bugs, misconfiguration, or operator error in a large, complex, and fast evolving system is a major challenge. Thoth provides an efficient, kernel-level compliance layer for data use policies. Declarative policies are attached to the systems\u2019 input and output files, key-value tuples, and network connections, and specify the data\u2019s integrity and confidentiality requirements. Thoth tracks the flow of data through the system, and enforces policy regardless of bugs, misconfigurations, compromises in application code, or actions by unprivileged operators. Thoth requires minimal changes to an existing system and has modest overhead, as we show using a prototype Thoth-enabled data retrieval system based on the popular Apache Lucene."}
{"_id": "7112dd4ab26fe7d0f8908f1352e4ba279b2f521d", "title": "A multiparadigm intelligent tutoring system for robotic arm training", "text": "To assist learners during problem-solving activities, an intelligent tutoring system (ITS) has to be equipped with domain knowledge that can support appropriate tutoring services. Providing domain knowledge is usually done by adopting one of the following paradigms: building a cognitive model, specifying constraints, integrating an expert system, and using data mining algorithms to learn domain knowledge. However, for some ill-defined domains, each single paradigm may present some advantages and limitations in terms of the required resources for deploying it, and tutoring support that can be offered. To address this issue, we propose using a multiparadigm approach. In this paper, we explain how we have applied this idea in CanadarmTutor, an ITS for learning to operate the Canadarm2 robotic arm. To support tutoring services in this ill-defined domain, we have developed a multiparadigm model combining: 1) a cognitive model to cover well-defined parts of the task and spatial reasoning, 2) a data mining approach for automatically building a task model from user solutions for ill-defined parts of the task, and 3) a 3D path-planner to cover other parts of the task for which no user data are available. The multiparadigm version of CanadarmTutor allows providing a richer set of tutoring services than what could be offered with previous single paradigm versions of CanadarmTutor."}
{"_id": "014004e84fbc6e4c7c4aced2e69fc4a5d28daabf", "title": "Intellectual Capital (IC) Measurement in the Mass Media Context", "text": "Mass media is the key in\u00b0uencer of public opinion. The in\u00b0uence is not only limited to political and social, but also relates to organisational and economical reputation and brands. Within public opinion, organisations must manage how they are represented competitively within mass media so that they can develop their brand strategically to grow and compete in the current global knowledge economy. This is where the link to Intellectual Capital (IC) Measurement is signi \u0304cant. IC, as the sum of all an organisation's intangible assets drives a company's presence and value within the media, albeit related to human, structural or relational capital attributes. The measurement, therefore, of IC in the mass media context is invaluable to understand how a company is placed strategically and competitively in the external space, and how this links to internal activities, goals and outcomes. This paper is an attempt to address some of the issues related to IC measurements in the mass media context by suggesting a framework that provides a multidisciplinary and holistic approach to the understanding and contextualising of the organisation's presence in the public space."}
{"_id": "25d1a2c364b05e0db056846ec397fbf0eacdca5c", "title": "Orthogonal nonnegative matrix tri-factorization for co-clustering: Multiplicative updates on Stiefel manifolds", "text": "Matrix factorization-based methods become popular in dyadic data analysis, where a fundamental problem, for example, is to perform document clustering or co-clustering words and documents given a term-document matrix. Nonnegative matrix tri-factorization (NMTF) emerges as a promising tool for co-clustering, seeking a 3-factor decomposition X USV with all factor matrices restricted to be nonnegative, i.e., U P 0; S P 0;V P 0: In this paper we develop multiplicative updates for orthogonal NMTF where X USV is pursued with orthogonality constraints, UU 1\u20444 I; and VV 1\u20444 I, exploiting true gradients on Stiefel manifolds. Experiments on various document data sets demonstrate that our method works well for document clustering and is useful in revealing polysemous words via co-clustering words and documents. 2010 Elsevier Ltd. All rights reserved."}
{"_id": "019615181a80b6dca5d79a8e72e5eec21eb5cfde", "title": "Metal Additive Manufacturing : A Review", "text": "This paper reviews the state-of-the-art of an important, rapidly emerging, manufacturing technology that is alternatively called additive manufacturing (AM), direct digital manufacturing, free form fabrication, or 3D printing, etc. A broad contextual overview of metallic AM is provided. AM has the potential to revolutionize the global parts manufacturing and logistics landscape. It enables distributed manufacturing and the productions of parts-on-demand while offering the potential to reduce cost, energy consumption, and carbon footprint. This paper explores the material science, processes, and business consideration associated with achieving these performance gains. It is concluded that a paradigm shift is required in order to fully exploit AM potential."}
{"_id": "3ca4cbd7bf47ad1d20224b8ebc19980df337697e", "title": "Measurement Science Needs for Real-time Control of Additive Manufacturing Powder Bed Fusion Processes", "text": "Additive Manufacturing is increasingly used in the development of new products: from conceptual design to functional parts and tooling. However, today, variability in part quality due to inadequate dimensional tolerances, surface roughness, and defects, limits its broader acceptance for high-value or mission-critical applications. While process control in general can limit this variability, it is impeded by a lack of adequate process measurement methods. Process control today is based on heuristics and experimental data, yielding limited improvement in part quality. The overall goal is to develop the measurement science 1 necessary to make in-process measurement and real-time control possible in additive manufacturing. Traceable dimensional and thermal metrology methods must be developed for real-time closed-loop control of additive manufacturing processes. As a precursor, this report presents a review on the additive manufacturing control schemes, process measurements, and modeling and simulation methods as it applies to the powder bed fusion process, though results from other processes are reviewed where applicable. The aim of the review is to identify and summarize the measurement science needs that are critical to real-time process control. We organize our research findings to identify the correlations between process parameters, process signatures, and product quality. The intention of this report is to serve as a background reference and a go-to place for our work to identify the most suitable measurement methods and corresponding measurands for real-time control. 1 Measurement science broadly includes: development of performance metrics, measurement and testing methods, predictive modeling and simulation tools, knowledge modeling, protocols, technical data, and reference materials and artifacts; conduct of inter-comparison studies and calibrations; evaluation of technologies, systems, and practices, including uncertainty analysis; development of the technical basis for standards, codes, and practices in many instances via test-beds, consortia, standards and codes development organizations, and/or other partnerships with industry and academia."}
{"_id": "dafdf1c7dec045bee37d4b97aa897717f590e47a", "title": "A Tlreshold Selection Method from Gray-Level Histograms", "text": "A nonparametric and unsupervised method ofautomatic threshold selection for picture segmentation is presented. An optimal threshold is selected by the discriminant criterion, namely, so as to maximize the separability of the resultant classes in gray levels. The procedure is very simple, utilizing only the zerothand the first-order cumulative moments of the gray-level histogram. It is straightforward to extend the method to multithreshold problems. Several experimental results are also presented to support the validity of the method."}
{"_id": "ea389cc6a91d365d003562df896fa5931fb444c6", "title": "Additive manufacturing of polymer-derived ceramics", "text": "The extremely high melting point of many ceramics adds challenges to additive manufacturing as compared with metals and polymers. Because ceramics cannot be cast or machined easily, three-dimensional (3D) printing enables a big leap in geometrical flexibility. We report preceramic monomers that are cured with ultraviolet light in a stereolithography 3D printer or through a patterned mask, forming 3D polymer structures that can have complex shape and cellular architecture. These polymer structures can be pyrolyzed to a ceramic with uniform shrinkage and virtually no porosity. Silicon oxycarbide microlattice and honeycomb cellular materials fabricated with this approach exhibit higher strength than ceramic foams of similar density. Additive manufacturing of such materials is of interest for propulsion components, thermal protection systems, porous burners, microelectromechanical systems, and electronic device packaging."}
{"_id": "156b0bcef5d03cdf0a35947030c1c0729ca923d3", "title": "Additive Manufacturing of Metals: A Review", "text": "Over the past 20 years, additive manufacturing (AM) has evolved from 3D printers used for rapid prototyping to sophisticated rapid manufacturing that can create parts directly without the use of tooling. AM technologies build near net shape components layer by layer using 3D model data. AM could revolutionize many sectors of manufacturing by reducing component lead time, material waste, energy usage, and carbon footprint. Furthermore, AM has the potential to enable novel product designs that could not be fabricated using conventional processes. This proceedings is a review that assesses available AM technologies for direct metal fabrication. Included is an outline of the definition of AM, a review of the commercially available and under development technologies for direct metal fabrication, possibilities, and an overall assessment of the state of the art. Perspective on future research opportunities is also be presented."}
{"_id": "a1746d4e1535564e02a7a4d5e4cdd1fa7bedc571", "title": "Feature engineering in Context-Dependent Deep Neural Networks for conversational speech transcription", "text": "We investigate the potential of Context-Dependent Deep-Neural-Network HMMs, or CD-DNN-HMMs, from a feature-engineering perspective. Recently, we had shown that for speaker-independent transcription of phone calls (NIST RT03S Fisher data), CD-DNN-HMMs reduced the word error rate by as much as one third\u2014from 27.4%, obtained by discriminatively trained Gaussian-mixture HMMs with HLDA features, to 18.5%\u2014using 300+ hours of training data (Switchboard), 9000+ tied triphone states, and up to 9 hidden network layers."}
{"_id": "2760ee22dcea47ac49381ade8edb01d1094d3a33", "title": "An Exploratory Review of Design Principles in Constructivist Gaming Learning Environments.", "text": "Creating a design theory for Constructivist Gaming Learning Environment necessitates, among other things, the establishment of design principles. These principles have the potential to help designers produce games, where users achieve higher levels of learning. This paper focuses on twelve design principles: Probing, Distributed, Multiple Routes, Practice, Psychosocial Moratorium, Regime of Competence, Self-Knowledge, Collective Knowledge, Engaging, User Interface Ease of Use, On Demand and Just-in-Time Tutorial, and Achievement. We report on two pilot studies of a qualitative nature in which we test our design principles. Game play testing and observations were carried out on five Massively Multiplayer Online Games (MMOGs): RuneScape, GuildWars, Ragnarok, World of WarCraft, and Final Fantasy XI. Two educational games, Carabella Goes to College and Outbreak at WatersEdge were also observed. Our findings indicate that not all of the popular MMOGs and educational games support all of these principles."}
{"_id": "9ca32800987b1f6092c2cacf028dee7298e73084", "title": "Evaluation of machine learning techniques for network intrusion detection", "text": "Network traffic anomaly may indicate a possible intrusion in the network and therefore anomaly detection is important to detect and prevent the security attacks. The early research work in this area and commercially available Intrusion Detection Systems (IDS) are mostly signature-based. The problem of signature based method is that the database signature needs to be updated as new attack signatures become available and therefore it is not suitable for the real-time network anomaly detection. The recent trend in anomaly detection is based on machine learning classification techniques. We apply seven different machine learning techniques with information entropy calculation to Kyoto 2006+ data set and evaluate the performance of these techniques. Our findings show that, for this particular data set, most machine learning techniques provide higher than 90% precision, recall and accuracy. However, using area under the Receiver Operating Curve (ROC) metric, we find that Radial Basis Function (RBF) performs the best among the seven algorithms studied in this work."}
{"_id": "3ced8df3b0a63c845cabc7971a7464c4905ec8ab", "title": "Visible light communications: Challenges and possibilities", "text": "Solid-state lighting is a rapidly developing field. White-light and other visible LEDs are becoming more efficient, have high reliability and can be incorporated into many lighting applications. Recent examples include car head-lights based on white LEDs, and LED illumination as an architectural feature. The prediction that general illumination will use white LEDs in the future has been made, due to the increased energy efficiency that such an approach may have. Such sources can also be modulated at high-speed, offering the possibility of using sources for simultaneous illumination and data communications. Such visible light communications (VLC) was pioneered in Japan, and there is now growing interest worldwide, including within bodies such as the Visible Light Communications Consortium (VLCC) and the Wireless World Research Forum (WWRF). In this paper we outline the basic components in these systems, review the state of the art and discuss some of the challenges and possibilities for this new wireless transmission technique."}
{"_id": "57f9aa83737f2062defaf80afe11448ea5a33ea6", "title": "A conceptual framework to develop Green IT \u2013 going beyond the idea of environmental sustainability", "text": "This paper presents a conceptual framework aiming to better research, understand and develop Green IT within organizations. Based on a literature review on Green IT, regarding the concepts of responsibility and sustainability, we propose an initial framework with five dimensions: ethical, technological, economical, social and environmental. These dimensions compose Green IT strategies and practices. Additionally, the framework considers that environmental changing requirements, strategic requirements and dynamic capabilities are the forces which move organizations toward green practices and foster innovation. This framework is part of a five-year horizon project that began in 2009 and aims to contribute to the theory in the field, having as initial goal to identify the constructs associated with Green IT."}
{"_id": "e6a5a2d440446c1d6e1883cdcb3a855b94b708e7", "title": "Epidermal electronics: Skin sweat patch", "text": "An ultrathin, stretchable, and conformal sensor system for skin-mounted sweat measurement is characterized and demonstrated in this paper. As an epidermal device, the sweat sensor is mechanically designed for comfortable wear on the skin by employing interdigitated electrodes connected via stretchable serpentine-shaped conductors. Experimental results show that the sensor is sensitive to measuring frequency, sweat level and stretching deformation. It was found that 20kHz signals provide the most sensitive performance: electrical impedance changes 50% while sweat level increases from 20 to 80. In addition, sensor elongation from 15 up to 50% affected the measurement sensitivity of both electrical impedance and capacitance."}
{"_id": "e435ccffa5ae89d22b5062c1e8e3dcd2a0908ee8", "title": "A Survey of Utility-Oriented Pattern Mining", "text": "The main purpose of data mining and analytics is to find novel, potentially useful patterns that can be utilized in real-world applications to derive beneficial knowledge. For identifying and evaluating the usefulness of different kinds of patterns, many techniques/constraints have been proposed, such as support, confidence, sequence order, and utility parameters (e.g., weight, price, profit, quantity, etc.). In recent years, there has been an increasing demand for utility-oriented pattern mining (UPM). UPM is a vital task, with numerous high-impact applications, including cross-marketing, e-commerce, finance, medical, and biomedical applications. This survey aims to provide a general, comprehensive, and structured overview of the state-of-the-art methods of UPM. First, we introduce an in-depth understanding of UPM, including concepts, examples, and comparisons with related concepts. A taxonomy of the most common and state-of-the-art approaches for mining different kinds of high-utility patterns is presented, including Apriori-based, tree-based, projection-based, vertical-/horizontal-data-format-based, and other hybrid approaches. A comprehensive review of advanced topics of existing high-utility pattern mining techniques is offered, with a discussion of their pros and cons. Finally, we present several well-known open-source software packages for UPM. We conclude our survey with a discussion on open and practical challenges in this field."}
{"_id": "0de16a3505e587589fdf35d7f99d26848f362f55", "title": "Revising the Newman-Girvan Algorithm", "text": "Abstract: One of the common approaches for the community detection in complex networks is the Girvan-Newman algorithm [5] which is based on repeated deletion of edges having the maximum edge betweenness centrality. Although widely used, it may result in different dendrograms of hierarchies of communities if there are several edges eligible for removal (see [6]) thus leaving an ambiguous formation of network subgroups. We will present possible ways to overcome these issues using, instead of edge betweenness computation for single edges, the group edge betweenness for subsets of edges to be subjects of removal."}
{"_id": "13def1f1f6598d4d0c7529e8ef6a7999165fe955", "title": "Ontology-based deep learning for human behavior prediction with explanations in health social networks", "text": "Human behavior modeling is a key component in application domains such as healthcare and social behavior research. In addition to accurate prediction, having the capacity to understand the roles of human behavior determinants and to provide explanations for the predicted behaviors is also important. Having this capacity increases trust in the systems and the likelihood that the systems actually will be adopted, thus driving engagement and loyalty. However, most prediction models do not provide explanations for the behaviors they predict. In this paper, we study the research problem, human behavior prediction with explanations, for healthcare intervention systems in health social networks. We propose an ontology-based deep learning model (ORBM+) for human behavior prediction over undirected and nodes-attributed graphs. We first propose a bottom-up algorithm to learn the user representation from health ontologies. Then the user representation is utilized to incorporate self-motivation, social influences, and environmental events together in a human behavior prediction model, which extends a well-known deep learning method, the Restricted Boltzmann Machine. ORBM+ not only predicts human behaviors accurately, but also, it generates explanations for each predicted behavior. Experiments conducted on both real and synthetic health social networks have shown the tremendous effectiveness of our approach compared with conventional methods."}
{"_id": "8be203797e78aa8351c560f067190ce596feec6b", "title": "Edge Agreement: Graph-Theoretic Performance Bounds and Passivity Analysis", "text": "This work explores the properties of the edge variant of the graph Laplacian in the context of the edge agreement problem. We show that the edge Laplacian, and its corresponding agreement protocol, provides a useful perspective on the well-known node agreement, or the consensus algorithm. Specifically, the dynamics induced by the edge Laplacian facilitates a better understanding of the role of certain subgraphs, e.g., cycles and spanning trees, in the original agreement problem. Using the edge Laplacian, we proceed to examine graph-theoretic characterizations of the H2 and H\u221e performance for the agreement protocol. These results are subsequently applied in the contexts of optimal sensor placement for consensus-based applications. Finally, the edge Laplacian is employed to provide new insights into the nonlinear extension of linear agreement to agents with passive dynamics."}
{"_id": "37a26f4601f59ac02136e8f25f94f5f44ca8173a", "title": "Tracking-based interaction for object creation in mobile augmented reality", "text": "In this work, we evaluate the feasibility of tracking-based interaction using a mobile phone's or tablet's camera in order to create and edit 3D objects in augmented reality applications. We present a feasibility study investigating if and how gestures made with your finger can be used to create such objects. A revised interface design is evaluated in a user study with 24 subjects that reveals a high usability and entertainment value, but also identifies issues such as ergonomic discomfort and imprecise input for complex tasks. Hence, our results suggest huge potential for this type of interaction in the entertainment, edutainment, and leisure domain, but limited usefulness for serious applications."}
{"_id": "aeaa7a8b4ef97bbf2b0ea2ad2ee88b13fcb2b797", "title": "LAN security: problems and solutions for Ethernet networks", "text": "Despite many research and development efforts in the area of data communications security, the importance of internal \u017d . local area network LAN security is still underestimated. This paper discusses why many traditional approaches to network \u017d . security e.g. firewalls, the modern IPSec or various application level protocols are not so effective in local networks and proposes a prospective solution for building of secure encrypted Ethernet LANs. The architecture presented allows for employment of the existing office network infrastructure, does not require changes to workstations\u2019 software, and provides high level of protection. The core idea is to apply security measures in the network interface devices connecting individual \u017d \u017d .. computers to the network i.e. network interface cards NICs . This eliminates the physical possibility of sending unprotected information through the network. Implementation details are given for data and key exchange, network management, and interoperability issues. An in-depth security analysis of the proposed architecture is presented and some conclusions are drawn. q 2000 Elsevier Science B.V. All rights reserved."}
{"_id": "f8ccf1f4c8002ff35ea675122e7c04b35f2000b4", "title": "Fault tolerant three-phase AC motor drive topologies: a comparison of features, cost, and limitations", "text": "This paper compares the many fault tolerant three-phase ac motor drive topologies that have been proposed to provide output capacity for the inverter faults of switch short or open-circuits, phase-leg short-circuits, and single-phase open-circuits. Also included is a review of the respective control methods for fault tolerant inverters including two-phase and unipolar control methods. The output voltage and current space in terms of dq components is identified for each topology and fault. These quantities are then used to normalize the power capacity of each system during a fault to a standard inverter during normal operation. A silicon overrating cost factor is adopted as a metric to compare the relative switching device costs of the topologies compared to a standard three-phase inverter."}
{"_id": "fabc3f0f51d2bdd1484e36b9b08221b52be6826f", "title": "Interactive Plan Explicability in Human-Robot Teaming", "text": "Human-robot teaming is one of the most important applications of artificial intelligence in the fast-growing field of robotics. For effective teaming, a robot must not only maintain a behavioral model of its human teammates to project the team status, but also be aware of its human teammates' expectation of itself. Being aware of the human teammates' expectation leads to robot behaviors that better align with the human expectation, thus facilitating more efficient and potentially safer teams. Our work addresses the problem of human-robot interaction with the consideration of such teammate models in sequential domains by leveraging the concept of plan explicability. In plan explicability, however, the human is considered solely as an observer. In this paper, we extend plan explicability to consider interactive settings where the human and robot's behaviors can influence each other. We term this new measure Interactive Plan Explicability (IPE). We compare the joint plan generated by our approach with the consideration of this measure using the fast forward (FF) planner, with the plan generated by FF without such consideration, as well as with the plan created with human subjects interacting with a robot running an FF planner. Since the human subject is expected to adapt to the robot's behavior dynamically when it deviates from her expectation, the plan created with human subjects is expected to be more explicable than the FF plan, and comparable to the explicable plan generated by our approach. Results indicate that the explicability score of plans generated by our algorithm is indeed closer to the human interactive plan than the plan generated by FF, implying that the plans generated by our algorithms align better with the expected plans of the human during execution. This can lead to more efficient collaboration in practice."}
{"_id": "604ec6924397f793c3286f4e7e2b4ba17a27ee24", "title": "Neighboring gray level dependence matrix for texture classification", "text": "A new approach, neighboring gray level dependence matrix (NGLDM), for texture classification is presented. The major properties of this approach are as follows: (a) texture features can be easily computed; (b) they are essentially invariant under spatial rotation; (c) they are invariant under linear gray level transformation and can be made insensitive to monotonic gray level transformation. These properties have enhanced the practical applications of the texture features. The accuracies of the classification are comparable with those found in the literature."}
{"_id": "2b664988a390637f53ade45a8a1c0cd9cc5f0628", "title": "A Fast Calibration Method for Triaxial Magnetometers", "text": "This paper presents a novel iterative calibration algorithm for triaxial magnetometers. The proposed algorithm estimates and compensates the effects of deterministic interference parameters using only nine distinct measurements. The results of our extensive simulations and empirical evaluations confirm that the proposed method outperforms conventional ellipsoid fitting based models both in terms of accuracy and reliability even in the presence of a moderate wideband noise. The algorithm also achieves considerably faster convergence, which makes it suitable for real-time applications. The algorithm performance is also shown to be independent of the initial guesses of the interference parameters."}
{"_id": "2c1bfdd311742ec077cdc6e5367e34cfe0acc4c0", "title": "An integrated approach for warehouse analysis and optimization: A case study", "text": "The paper focuses on the analysis and optimization of production warehouses, proposing a novel approach to reduce inefficiencies which employs three lean manufacturing tools in an integrated and iterative framework. The proposed approach integrates the Unified Modeling Language (UML) \u2013 providing a detailed description of the warehouse logistics \u2013 the Value Stream Mapping (VSM) tool \u2013 identifying non-value adding activities \u2013 and a mathematical formulation of the so-called Genba Shikumi philosophy \u2013 ranking such system anomalies and assessing how they affect the warehouse. The subsequent reapplication of the VSM produces a complete picture of the reengineered warehouse, and using the UML tool allows describing in detail the updated system. By applying the presented methodology to the warehouse of an Italian interior design producer, we show that it represents a useful tool to systematically and dynamically improve the warehouse management. Indeed, the application of the approach to the company leads to an innovative proposal for the warehouse analysis and optimization: a warehouse management system that leads to increased profitability and quality as well as to reduced errors. 2014 Elsevier B.V. All rights reserved."}
{"_id": "49e90a0110123d5644fb6840683e0a05d2f54371", "title": "Supervised Hash Coding With Deep Neural Network for Environment Perception of Intelligent Vehicles", "text": "Image content analysis is an important surround perception modality of intelligent vehicles. In order to efficiently recognize the on-road environment based on image content analysis from the large-scale scene database, relevant images retrieval becomes one of the fundamental problems. To improve the efficiency of calculating similarities between images, hashing techniques have received increasing attentions. For most existing hash methods, the suboptimal binary codes are generated, as the hand-crafted feature representation is not optimally compatible with the binary codes. In this paper, a one-stage supervised deep hashing framework (SDHP) is proposed to learn high-quality binary codes. A deep convolutional neural network is implemented, and we enforce the learned codes to meet the following criterions: 1) similar images should be encoded into similar binary codes, and vice versa; 2) the quantization loss from Euclidean space to Hamming space should be minimized; and 3) the learned codes should be evenly distributed. The method is further extended into SDHP+ to improve the discriminative power of binary codes. Extensive experimental comparisons with state-of-the-art hashing algorithms are conducted on CIFAR-10 and NUS-WIDE, the MAP of SDHP reaches to 87.67% and 77.48% with 48 b, respectively, and the MAP of SDHP+ reaches to 91.16%, 81.08% with 12 b, 48 b on CIFAR-10 and NUS-WIDE, respectively. It illustrates that the proposed method can obviously improve the search accuracy."}
{"_id": "10146dfeffa2be4267525ae74fda5c644205f497", "title": "Evidence and solution of over-RESET problem for HfOX based resistive memory with sub-ns switching speed and high endurance", "text": "The memory performances of the HfOX based bipolar resistive memory, including switching speed and memory reliability, are greatly improved in this work. Record high switching speed down to 300 ps is achieved. The cycling test shed a clear light on the wearing behavior of resistance states, and the correlation between over-RESET phenomenon and the worn low resistance state in the devices is discussed. The modified bottom electrode is proposed for the memory device to maintain the memory window and to endure resistive switching up to 1010 cycles."}
{"_id": "b83cdc50b0006066d7dbfd4f8bc1aeecf81ddf1c", "title": "Media and children's aggression, fear, and altruism.", "text": "Noting that the social and emotional experiences of American children today often heavily involve electronic media, Barbara Wilson takes a close look at how exposure to screen media affects children's well-being and development. She concludes that media influence on children depends more on the type of content that children find attractive than on the sheer amount of time they spend in front of the screen. Wilson begins by reviewing evidence on the link between media and children's emotions. She points out that children can learn about the nature and causes of different emotions from watching the emotional experiences of media characters and that they often experience empathy with those characters. Although research on the long-term effects of media exposure on children's emotional skill development is limited, a good deal of evidence shows that media exposure can contribute to children's fears and anxieties. Both fictional and news programming can cause lasting emotional upset, though the themes that upset children differ according to a child's age. Wilson also explores how media exposure affects children's social development. Strong evidence shows that violent television programming contributes to children's aggressive behavior. And a growing body of work indicates that playing violent video games can have the same harmful effect. Yet if children spend time with educational programs and situation comedies targeted to youth, media exposure can have more prosocial effects by increasing children's altruism, cooperation, and even tolerance for others. Wilson also shows that children's susceptibility to media influence can vary according to their gender, their age, how realistic they perceive the media to be, and how much they identify with characters and people on the screen. She concludes with guidelines to help parents enhance the positive effects of the media while minimizing the risks associated with certain types of content."}
{"_id": "8a53c7efcf3b0b0cee5e6dfcea0a3eb0b4b81b01", "title": "Curvature preserving fingerprint ridge orientation smoothing using Legendre polynomials", "text": "Smoothing fingerprint ridge orientation involves a principal discrepancy. Too little smoothing can result in noisy orientation fields (OF), too much smoothing will harm high curvature areas, especially singular points (SP). In this paper we present a fingerprint ridge orientation model based on Legendre polynomials. The motivation for the proposed method can be found by analysing the almost exclusively used method in literature for orientation smoothing, proposed by Witkin and Kass (1987) more than two decades ago. The main contribution of this paper argues that the vectorial data (sine and cosine data) should be smoothed in a coupled way and the approximation error should not be evaluated employing vectorial data. For evaluating the proposed method we use a Poincare-Index based SP detection algorithm. The experiments show, that in comparison to competing methods the proposed method has improved orientation smoothing capabilities, especially in high curvature areas."}
{"_id": "68102f37d6da41530b63dfd232984f93d8685693", "title": "On the Generalization Ability of Online Strongly Convex Programming Algorithms", "text": "This paper examines the generalization properties of online convex programming algorithms when the loss function is Lipschitz and strongly convex. Our main result is a sharp bound, that holds with high probability, on the excess risk of the output of an online algorithm in terms of the average regret. This allows one to use recent algorithms with logarithmic cumulative regret guarantees to achieve fast convergence rates for the excess risk with high probability. As a corollary, we characterize the convergence rate of P EGASOS(with high probability), a recently proposed method for solving the SVM optimization problem."}
{"_id": "0329e02fc187d9d479e6310fa85e3c948b09565d", "title": "A Tri-Space Visualization Interface for Analyzing Time-Varying Multivariate Volume Data", "text": "The dataset generated by a large-scale numerical simulation may include thousands of timesteps and hundreds of variables describing different aspects of the modeled physical phenomena. In order to analyze and understand such data, scientists need the capability to explore simultaneously in the temporal, spatial, and variable domains of the data. Such capability, however, is not generally provided by conventional visualization tools. This paper presents a new visualization interface addressing this problem. The interface consists of three components which abstracts the complexity of exploring in temporal, variable, and spatial domain, respectively. The first component displays time histograms of the data, helps the user identify timesteps of interest, and also helps specify time-varying features. The second component displays correlations between variables in parallel coordinates and enables the user to verify those correlations and possibly identity unanticipated ones. The third component allows the user to more closely explore and validate the data in spatial domain while rendering multiple variables into a single visualization in a user controllable fashion. Each of these three components is not only an interface but is also the visualization itself, thus enabling efficient screen-space usage. The three components are tightly linked to facilitate tri-space data exploration, which offers scientists new power to study their time-varying, multivariate volume data."}
{"_id": "2abf2c3e7ebed04e8c09e478157372dda5cb8bc5", "title": "Real-time rigid-body visual tracking in a scanning electron microscope", "text": "Robotics continues to provide researchers with an increasing ability to interact with objects at the nano scale. As micro- and nanorobotic technologies mature, more interest is given to computer-assisted or automated approaches to manipulation at these scales. Although actuators are currently available that enable displacements resolutions in the subnanometer range, improvements in feedback technologies have not kept pace. Thus, many actuators that are capable of performing nanometer displacements are limited in automated tasks by the lack of suitable feedback mechanisms. This paper proposes the use of a rigid-model-based method for end effector tracking in a scanning electron microscope to aid in enabling more precise automated manipulations and measurements. These models allow the system to leverage domain-specific knowledge to increase performance in a challenging tracking environment."}
{"_id": "2c6c6d3c94322e9ff75ff2143f7028bfab2b3c5f", "title": "Extension of phase correlation to subpixel registration", "text": "In this paper, we have derived analytic expressions for the phase correlation of downsampled images. We have shown that for downsampled images the signal power in the phase correlation is not concentrated in a single peak, but rather in several coherent peaks mostly adjacent to each other. These coherent peaks correspond to the polyphase transform of a filtered unit impulse centered at the point of registration. The analytic results provide a closed-form solution to subpixel translation estimation, and are used for detailed error analysis. Excellent results have been obtained for subpixel translation estimation of images of different nature and across different spectral bands."}
{"_id": "42d60f7faaa2f6fdd2b928c352d65eb57b4791aa", "title": "Improving resolution by image registration", "text": "Image resolution can be improved when the relative displacements in image sequences are known accurately, and some knowledge of the imaging process is available. The proposed approach is similar to back-projection used in tomography. Examples of improved image resolution are given for gray-level and color images, when the unknown image displacements are computed from the image sequence."}
{"_id": "90614cea8c2ab2bff0343231a26d6d0c9315d6c7", "title": "A Comparison of Affine Region Detectors", "text": "The paper gives a snapshot of the state of the art in affine covariant region detectors, and compares their performance on a set of test images under varying imaging conditions. Six types of detectors are included: detectors based on affine normalization around Harris\u00a0 (Mikolajczyk and \u00a0Schmid, 2002; Schaffalitzky and \u00a0Zisserman, 2002) and Hessian points\u00a0 (Mikolajczyk and \u00a0Schmid, 2002), a detector of \u2018maximally stable extremal regions', proposed by Matas et al.\u00a0(2002); an edge-based region detector\u00a0 (Tuytelaars and Van\u00a0Gool, 1999) and a detector based on intensity extrema (Tuytelaars and Van\u00a0Gool, 2000), and a detector of \u2018salient regions', proposed by Kadir, Zisserman and Brady\u00a0(2004). The performance is measured against changes in viewpoint, scale, illumination, defocus and image compression. The objective of this paper is also to establish a reference test set of images and performance software, so that future detectors can be evaluated in the same framework."}
{"_id": "db0851926cabc588e1cecb879ed7ce0ce4b9a298", "title": "PROCEDURAL GENERATION OF IMAGINATIVE TREES USING A SPACE COLONIZATION ALGORITHM", "text": "The modeling of trees is challenging due to their complex branching structures. Three different ways to generate trees are using real world data for reconstruction, interactive modeling methods and modeling with procedural or rule-based systems. Procedural content generation is the idea of using algorithms to automate content creation processes, and it is useful in plant modeling since it can generate a wide variety of plants that can adapt and react to the environment and changing conditions. This thesis focuses on and extends a procedural tree generation technique that uses a space colonization algorithm to model the tree branches\u2019 competition for space, and shifts the previous works\u2019 focus from realism to fantasy. The technique satisfied the idea of using interaction between the tree\u2019s internal and external factors to determine its final shape, by letting the designer control the where and the how of the tree\u2019s growth process. The implementation resulted in a tree generation application where the user\u2019s imagination decides the limit of what can be produced, and if that limit is reached can the application be used to randomly generate a wide variety of trees and tree-like structures. A motivation for many researchers in the procedural content generation area is how it can be used to augment human imagination. The result of this thesis can be used for that, by stepping away from the restrictions of realism, and with ease let the user generate widely diverse trees, that are not necessarily realistic but, in most cases, adapts to the idea of a tree."}
{"_id": "a6f72b3dabe64d1ede9e1f68d1bb11750bcc5166", "title": "Thin dual-resonant stacked shorted patch antenna for mobile communications", "text": "Introduction: Short-circuited microstrip patch antennas, or PIFAs, can be used as directive internal handset antennas in various communication systems. They provide advantages over traditional external whip and helix antennas in terms of increased total efficiency (when the handset is near the user\u2019s head), decreased radiation towards the user, and increased mechanical reliability. The main disadvantage is their narrow impedance bandwidth compared to the volume occupied by the antenna structure. Typical requirements for directive internal antennas designed for cellular handsets (with a thickness \u2264 20mm) are antenna thickness \u2264 5mm and bandwidth \u2265 10%. It is difficult to design such a small, directive, low-profile handset antenna with high radiation efficiency and an impedance bandwidth greater than 10%. Small antenna design is always a compromise between size, bandwidth, and efficiency. Often, the price for improved performance is increased complexity. If the total dimensions of a patch antenna element are fixed, there are two effective ways to enhance its bandwidth: either dissipative loading [1], which reduces the radiation efficiency, or the addition of more resonators into the antenna structure (matching networks or parasitic elements). The latter method may increase the manufacturing complexity of the antenna. In this Letter, a thin dual-resonant stacked shorted patch antenna is presented. Previously, stacked short-circuited patch antenna elements have been reported in [2, 3]. More recently, similar elements have also been discussed in [4, 5]. The presented antenna is dual-resonant and has a very low profile. The thickness is only one fifth of that of the antenna reported in [5], whereas the surface area reserved for the patches is approximately equal. The small size of the antenna presented in this Letter makes it relatively easy to fit inside a cellular handset. The bandwidth is sufficient for many communication systems. With a common short circuit element which connects both patches to the ground plane, the presented antenna is the simplest example of a stacked shorted microstrip patch antenna. No extra tuning or shorting posts inside the patch area are needed, which makes its manufacturing easier."}
{"_id": "8cc7f8891cf1ed5d22e74ff6fc57d1c23faf9048", "title": "Mapping continued brain growth and gray matter density reduction in dorsal frontal cortex: Inverse relationships during postadolescent brain maturation.", "text": "Recent in vivo structural imaging studies have shown spatial and temporal patterns of brain maturation between childhood, adolescence, and young adulthood that are generally consistent with postmortem studies of cellular maturational events such as increased myelination and synaptic pruning. In this study, we conducted detailed spatial and temporal analyses of growth and gray matter density at the cortical surface of the brain in a group of 35 normally developing children, adolescents, and young adults. To accomplish this, we used high-resolution magnetic resonance imaging and novel computational image analysis techniques. For the first time, in this report we have mapped the continued postadolescent brain growth that occurs primarily in the dorsal aspects of the frontal lobe bilaterally and in the posterior temporo-occipital junction bilaterally. Notably, maps of the spatial distribution of postadolescent cortical gray matter density reduction are highly consistent with maps of the spatial distribution of postadolescent brain growth, showing an inverse relationship between cortical gray matter density reduction and brain growth primarily in the superior frontal regions that control executive cognitive functioning. Inverse relationships are not as robust in the posterior temporo-occipital junction where gray matter density reduction is much less prominent despite late brain growth in these regions between adolescence and adulthood. Overall brain growth is not significant between childhood and adolescence, but close spatial relationships between gray matter density reduction and brain growth are observed in the dorsal parietal and frontal cortex. These results suggest that progressive cellular maturational events, such as increased myelination, may play as prominent a role during the postadolescent years as regressive events, such as synaptic pruning, in determining the ultimate density of mature frontal lobe cortical gray matter."}
{"_id": "4b8b23c8b2364e25f2bc4e2ad7c8737797f59d19", "title": "Broadband antenna design using different 3D printing technologies and metallization processes", "text": "The purpose of this paper is to provide a comprehensive evaluation of how plastic materials, different metallization process, and thickness of the metal can influence the performance of 3D printed broadband antenna structures. A set of antennas were manufactured using Fused Deposition Modeling technology and Polyjet technology. Three different plastic materials ABS, PLA and Vero were employed in the tests. The antenna structures were metallized using three common metallization processes: vacuum metallization, electroplating, and conductive paint. In this project, the broadband performances of the metallized plastic antennas are compared to an original metal antenna by measuring VSWR, radiation patterns (H- and E-Planes), and gain. Measurements show that the performances of these plastic structures, regardless of production method, are very similar to the original metal antenna."}
{"_id": "99ad0533f84c110da2d0713d5798e6e14080b159", "title": "Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences", "text": "We present a reading comprehension challenge in which questions can only be answered by taking into account information from multiple sentences. We solicit and verify questions and answers for this challenge through a 4-step crowdsourcing experiment. Our challenge dataset contains \u223c6k questions for +800 paragraphs across 7 different domains (elementary school science, news, travel guides, fiction stories, etc) bringing in linguistic diversity to the texts and to the questions wordings. On a subset of our dataset, we found human solvers to achieve an F1-score of 86.4%. We analyze a range of baselines, including a recent state-of-art reading comprehension system, and demonstrate the difficulty of this challenge, despite a high human performance. The dataset is the first to study multi-sentence inference at scale, with an open-ended set of question types that requires reasoning skills."}
{"_id": "25eb08e6985ded20ae723ec668014a2bad789e0f", "title": "Kd-Jump: a Path-Preserving Stackless Traversal for Faster Isosurface Raytracing on GPUs", "text": "Stackless traversal techniques are often used to circumvent memory bottlenecks by avoiding a stack and replacing return traversal with extra computation. This paper addresses whether the stackless traversal approaches are useful on newer hardware and technology (such as CUDA). To this end, we present a novel stackless approach for implicit kd-trees, which exploits the benefits of index-based node traversal, without incurring extra node visitation. This approach, which we term Kd-Jump, enables the traversal to immediately return to the next valid node, like a stack, without incurring extra node visitation (kd-restart). Also, Kd-Jump does not require global memory (stack) at all and only requires a small matrix in fast constant-memory. We report that Kd-Jump outperforms a stack by 10 to 20% and kd-restar t by 100%. We also present a Hybrid Kd-Jump, which utilizes a volume stepper for leaf testing and a run-time depth threshold to define where kd-tree traversal stops and volume-stepping occurs. By using both methods, we gain the benefits of empty space removal, fast texture-caching and realtime ability to determine the best threshold for current isosurface and view direction."}
{"_id": "03bf26d72d8cc0cf401c31e31c242e1894bd0890", "title": "Focused Trajectory Planning for autonomous on-road driving", "text": "On-road motion planning for autonomous vehicles is in general a challenging problem. Past efforts have proposed solutions for urban and highway environments individually. We identify the key advantages/shortcomings of prior solutions, and propose a novel two-step motion planning system that addresses both urban and highway driving in a single framework. Reference Trajectory Planning (I) makes use of dense lattice sampling and optimization techniques to generate an easy-to-tune and human-like reference trajectory accounting for road geometry, obstacles and high-level directives. By focused sampling around the reference trajectory, Tracking Trajectory Planning (II) generates, evaluates and selects parametric trajectories that further satisfy kinodynamic constraints for execution. The described method retains most of the performance advantages of an exhaustive spatiotemporal planner while significantly reducing computation."}
{"_id": "86b164233e22e9edbe56ba32c48e20e53c9a71b6", "title": "Pattern Recognition for Downhole Dynamometer Card in Oil Rod Pump System using Artificial Neural Networks", "text": "This paper presents the development of an Artificial Neural Network system for Dynamometer Card pattern recognition in oil well rod pump systems. It covers the establishment of pattern classes and a set of standards for training and validation, the study of descriptors which allow the design and the implementation of features extractor, training, analysis and finally the validation and performance test with"}
{"_id": "3bb0a4630a7055858d669b9be194c643a95af3e2", "title": "Relations such as Hypernymy: Identifying and Exploiting Hearst Patterns in Distributional Vectors for Lexical Entailment", "text": "We consider the task of predicting lexical entailment using distributional vectors. We focus experiments on one previous classifier which was shown to only learn to detect prototypicality of a word pair. Analysis shows that the model single-mindedly learns to detect Hearst Patterns, which are well known to be predictive of lexical relations. We present a new model which exploits this Hearst Detector functionality, matching or outperforming prior work on multiple data sets."}
{"_id": "9006a6b2544162c4bb66cc7f041208e0fe4c0359", "title": "A Subsequence Interleaving Model for Sequential Pattern Mining", "text": "Recent sequential pattern mining methods have used the minimum description length (MDL) principle to define an encoding scheme which describes an algorithm for mining the most compressing patterns in a database. We present a novel subsequence interleaving model based on a probabilistic model of the sequence database, which allows us to search for the most compressing set of patterns without designing a specific encoding scheme. Our proposed algorithm is able to efficiently mine the most relevant sequential patterns and rank them using an associated measure of interestingness. The efficient inference in our model is a direct result of our use of a structural expectation-maximization framework, in which the expectation-step takes the form of a submodular optimization problem subject to a coverage constraint. We show on both synthetic and real world datasets that our model mines a set of sequential patterns with low spuriousness and redundancy, high interpretability and usefulness in real-world applications. Furthermore, we demonstrate that the quality of the patterns from our approach is comparable to, if not better than, existing state of the art sequential pattern mining algorithms."}
{"_id": "58a28a12d9cf44bf7a0504b6a9c3194ca210659a", "title": "Towards a theory of supply chain management : the constructs and measurements", "text": "Rising international cooperation, vertical disintegration, along with a focus on core activities have led to the notion that firms are links in a networked supply chain. This novel perspective has created the challenge of designing and managing a network of interdependent relationships developed and fostered through strategic collaboration. Although research interests in supply chain management (SCM) are growing, no research has been directed towards a systematic development of SCM instruments. This study identifies and consolidates various supply chain initiatives and factors to develop key SCM constructs conducive to advancing the field. To this end, we analyzed over 400 articles and synthesized the large, fragmented body of work dispersed across many disciplines. The result of this study, through successive stages of measurement analysis and refinement, is a set of reliable, valid, and unidimensional measurements that can be subsequently used in different contexts to refine or extend conceptualization and measurements or to test various theoretical models, paving the way for theory building in SCM. \u00a9 2004 Elsevier B.V. All rights reserved."}
{"_id": "14b6b458a931888c7629d47d3e158acd0da13f02", "title": "Sieve: Cryptographically Enforced Access Control for User Data in Untrusted Clouds", "text": "Modern web services rob users of low-level control over cloud storage; a user\u2019s single logical data set is scattered across multiple storage silos whose access controls are set by the web services, not users. The result is that users lack the ultimate authority to determine how their data is shared with other web services. In this thesis, we introduce Sieve, a new architecture for selectively exposing user data to third party web services in a provably secure manner. Sieve starts with a user-centric storage model: each user uploads encrypted data to a single cloud store, and by default, only the user knows the decryption keys. Given this storage model, Sieve defines an infrastructure to support rich, legacy web applications. Using attribute-based encryption, Sieve allows users to define intuitive, understandable access policies that are cryptographically enforceable. Using key homomorphism, Sieve can re-encrypt user data on storage providers in situ, revoking decryption keys from web services without revealing new ones to the storage provider. Using secret sharing and two-factor authentication, Sieve protects against the loss of user devices like smartphones and laptops. The result is that users can enjoy rich, legacy web applications, while benefitting from cryptographically strong controls over what data the services can access. Thesis Supervisor: Nickolai Zeldovich Title: Associate Professor Thesis Supervisor: James Mickens Title: Associate Professor"}
{"_id": "a6bff7b89af697508d23353148530ea43c6c36e1", "title": "Segmentation of Urban Areas Using Road Networks", "text": "Region-based analysis is fundamental and crucial in many geospatialrelated applications and research themes, such as trajectory analysis, human mobility study and urban planning. In this paper, we report on an image-processing-based approach to segment urban areas into regions by road networks. Here, each segmented region is bounded by the high-level road segments, covering some neighborhoods and low-level streets. Typically, road segments are classified into different levels (e.g., highways and expressways are usually high-level roads), providing us with a more natural and semantic segmentation of urban spaces than the grid-based partition method. We show that through simple morphological operators, an urban road network can be efficiently segmented into regions. In addition, we present a case study in trajectory mining to demonstrate the usability of the proposed segmentation method."}
{"_id": "00ae3f736b28e2050e23acc65fcac1a516635425", "title": "Collaborative Deep Learning Across Multiple Data Centers", "text": "Valuable training data is often owned by independent organizations and located in multiple data centers. Most deep learning approaches require to centralize the multi-datacenter data for performance purpose. In practice, however, it is often infeasible to transfer all data to a centralized data center due to not only bandwidth limitation but also the constraints of privacy regulations. Model averaging is a conventional choice for data parallelized training, but its ineffectiveness is claimed by previous studies as deep neural networks are often non-convex. In this paper, we argue that model averaging can be effective in the decentralized environment by using two strategies, namely, the cyclical learning rate and the increased number of epochs for local model training. With the two strategies, we show that model averaging can provide competitive performance in the decentralized mode compared to the data-centralized one. In a practical environment with multiple data centers, we conduct extensive experiments using state-of-the-art deep network architectures on different types of data. Results demonstrate the effectiveness and robustness of the proposed method."}
{"_id": "2abcd3b82f51722a1ef25d7f408c06f43210c809", "title": "An improved box-counting method for image fractal dimension estimation", "text": "Article history: Received 6 September 2007 Received in revised form 14 January 2009 Accepted 2 March 2009"}
{"_id": "d05e42fa59d1f20f0d60ae89225b826204ef1216", "title": "BanditSum: Extractive Summarization as a Contextual Bandit", "text": "In this work, we propose a novel method for training neural networks to perform singledocument extractive summarization without heuristically-generated extractive labels. We call our approach BANDITSUM as it treats extractive summarization as a contextual bandit (CB) problem, where the model receives a document to summarize (the context), and chooses a sequence of sentences to include in the summary (the action). A policy gradient reinforcement learning algorithm is used to train the model to select sequences of sentences that maximize ROUGE score. We perform a series of experiments demonstrating that BANDITSUM is able to achieve ROUGE scores that are better than or comparable to the state-of-the-art for extractive summarization, and converges using significantly fewer update steps than competing approaches. In addition, we show empirically that BANDITSUM performs significantly better than competing approaches when good summary sentences appear late in the source document."}
{"_id": "5dc612abb0b26c734047af8ae8a764819a6f579e", "title": "Introduction to the special section on educational data mining", "text": "Educational Data Mining (EDM) is an emerging multidisciplinary research area, in which methods and techniques for exploring data originating from various educational information systems have been developed. EDM is both a learning science, as well as a rich application area for data mining, due to the growing availability of educational data. EDM contributes to the study of how students learn, and the settings in which they learn. It enables data-driven decision making for improving the current educational practice and learning material. We present a brief overview of EDM and introduce four selected EDM papers representing a crosscut of different application areas for data mining in education."}
{"_id": "34dadeab830f902edb1213f81336f5cfbf753dcf", "title": "Polynomial method for Procedural Terrain Generation", "text": "A systematic fractal brownian motion approach is proposed for generating coherent noise, aiming at procedurally generating realistic terrain and textures. Two models are tested and compared to Perlin noise method for two-dimensional height map generation. A fractal analysis is performed in order to compare fractal behaviour of generated data to real terrain coastlines from the point of view of fractal dimension. Performance analysis show that one of the described schemes requires half as many primitive operations than Perlin noise while producing data of equivalent quality."}
{"_id": "9b678aa28facf4f90081d41c2c484c6addddb86d", "title": "Fully Convolutional Attention Networks for Fine-Grained Recognition", "text": "Fine-grained recognition is challenging due to its subtle local inter-class differences versus large intra-class variations such as poses. A key to address this problem is to localize discriminative parts to extract pose-invariant features. However, ground-truth part annotations can be expensive to acquire. Moreover, it is hard to define parts for many fine-grained classes. This work introduces Fully Convolutional Attention Networks (FCANs), a reinforcement learning framework to optimally glimpse local discriminative regions adaptive to different fine-grained domains. Compared to previous methods, our approach enjoys three advantages: 1) the weakly-supervised reinforcement learning procedure requires no expensive part annotations; 2) the fully-convolutional architecture speeds up both training and testing; 3) the greedy reward strategy accelerates the convergence of the learning. We demonstrate the effectiveness of our method with extensive experiments on four challenging fine-grained benchmark datasets, including CUB-200-2011, Stanford Dogs, Stanford Cars and Food-101."}
{"_id": "18163ea60e83cd3266150bd4b7d7fd3a849ee2db", "title": "Consumption of ultra-processed foods and likely impact on human health. Evidence from Canada.", "text": "OBJECTIVE\nTo investigate consumption of ultra-processed products in Canada and to assess their association with dietary quality.\n\n\nDESIGN\nApplication of a classification of foodstuffs based on the nature, extent and purpose of food processing to data from a national household food budget survey. Foods are classified as unprocessed/minimally processed foods (Group 1), processed culinary ingredients (Group 2) or ultra-processed products (Group 3).\n\n\nSETTING\nAll provinces and territories of Canada, 2001.\n\n\nSUBJECTS\nHouseholds (n 5643).\n\n\nRESULTS\nFood purchases provided a mean per capita energy availability of 8908 (se 81) kJ/d (2129 (se 19) kcal/d). Over 61\u00b77 % of dietary energy came from ultra-processed products (Group 3), 25\u00b76 % from Group 1 and 12\u00b77 % from Group 2. The overall diet exceeded WHO upper limits for fat, saturated fat, free sugars and Na density, with less fibre than recommended. It also exceeded the average energy density target of the World Cancer Research Fund/American Institute for Cancer Research. Group 3 products taken together are more fatty, sugary, salty and energy-dense than a combination of Group 1 and Group 2 items. Only the 20 % lowest consumers of ultra-processed products (who consumed 33\u00b72 % of energy from these products) were anywhere near reaching all nutrient goals for the prevention of obesity and chronic non-communicable diseases.\n\n\nCONCLUSIONS\nThe 2001 Canadian diet was dominated by ultra-processed products. As a group, these products are unhealthy. The present analysis indicates that any substantial improvement of the diet would involve much lower consumption of ultra-processed products and much higher consumption of meals and dishes prepared from minimally processed foods and processed culinary ingredients."}
{"_id": "4157e45f616233a0874f54a59c3df001b9646cd7", "title": "Diagnostically relevant facial gestalt information from ordinary photos", "text": "Craniofacial characteristics are highly informative for clinical geneticists when diagnosing genetic diseases. As a first step towards the high-throughput diagnosis of ultra-rare developmental diseases we introduce an automatic approach that implements recent developments in computer vision. This algorithm extracts phenotypic information from ordinary non-clinical photographs and, using machine learning, models human facial dysmorphisms in a multidimensional 'Clinical Face Phenotype Space'. The space locates patients in the context of known syndromes and thereby facilitates the generation of diagnostic hypotheses. Consequently, the approach will aid clinicians by greatly narrowing (by 27.6-fold) the search space of potential diagnoses for patients with suspected developmental disorders. Furthermore, this Clinical Face Phenotype Space allows the clustering of patients by phenotype even when no known syndrome diagnosis exists, thereby aiding disease identification. We demonstrate that this approach provides a novel method for inferring causative genetic variants from clinical sequencing data through functional genetic pathway comparisons.DOI: http://dx.doi.org/10.7554/eLife.02020.001."}
{"_id": "fef80706b29fc2345c8b7aa53af1076a6afff5f1", "title": "Effect of Control-Loops Interactions on Power Stability Limits of VSC Integrated to AC System", "text": "This paper investigates the effect of control-loops interactions on power stability limits of the voltage-source converter (VSC) as connected to an ac system. The focus is put on the physical mechanism of the control-loops interactions in the VSC, revealing that interactions among the control loops result in the production of an additional loop. The open-loop gain of the additional loop is employed to quantify the severity of the control-loop interactions. Furthermore, the power current sensitivity, closely related to control-loops interactions, is applied to estimate the maximum transferrable power of the VSC connected to an ac grid. On that basis, stability analysis results show that interactions between dc-link voltage control and phase-locked loop restrict the power angle to about 51\u00b0 for stable operation with no dynamic reactive power supported. Conversely, the system is capable of approaching the ac-side maximum power transfer limits with alternating voltage control included. Simulations in MATLAB/Simulink are conducted to validate the stability analysis."}
{"_id": "461ac81b6ce10d48a6c342e64c59f86d7566fa68", "title": "Social network sites: definition, history, and scholarship", "text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles."}
{"_id": "c03fb606432af6637d9d7d31f447e62a855b77a0", "title": "Facebook in higher education promotes social but not academic engagement", "text": "Although there is evidence that academically successful students are engaged with their studies, it has proved difficult to define student engagement clearly. Student engagement is commonly construed as having two dimensions, social and academic. The rapid adoption of social media and digital technologies has ensured increasing interest in using them for improving student engagement. This paper examines Facebook usage among a first year psychology student cohort and reports that although the majority of students (94%) had Facebook accounts and spent an average of one hour per day on Facebook, usage was found to be predominantly social. Personality factors influenced usage patterns, with more conscientious students tending to use Facebook less than less conscientious students. This paper argues that, rather than promoting social engagement in a way that might increase academic engagement, it appears that Facebook is more likely to operate as a distracting influence."}
{"_id": "171071069cb3b58cfe8e38232c25bfa99f1fbdf5", "title": "Self-Presentation 2.0: Narcissism and Self-Esteem on Facebook", "text": "Online social networking sites have revealed an entirely new method of self-presentation. This cyber social tool provides a new site of analysis to examine personality and identity. The current study examines how narcissism and self-esteem are manifested on the social networking Web site Facebook.com . Self-esteem and narcissistic personality self-reports were collected from 100 Facebook users at York University. Participant Web pages were also coded based on self-promotional content features. Correlation analyses revealed that individuals higher in narcissism and lower in self-esteem were related to greater online activity as well as some self-promotional content. Gender differences were found to influence the type of self-promotional content presented by individual Facebook users. Implications and future research directions of narcissism and self-esteem on social networking Web sites are discussed."}
{"_id": "5e30227914559ce088a750885761adbb7d2edbbf", "title": "A privacy paradox: Social networking in the United States", "text": "Teenagers will freely give up personal information to join social networks on the Internet. Afterwards, they are surprised when their parents read their journals. Communities are outraged by the personal information posted by young people online and colleges keep track of student activities on and off campus. The posting of personal information by teens and students has consequences. This article will discuss the uproar over privacy issues in social networks by describing a privacy paradox; private versus public space; and, social networking privacy issues. It will finally discuss proposed privacy solutions and steps that can be taken to help resolve the privacy paradox."}
{"_id": "95cbfbae52245f7c737f345daa774e6c2a5b7cc5", "title": "The effects of modern mathematics computer games on mathematics achievement and class motivation", "text": "This study examined the effects of a computer game on students' mathematics achievement and motivation, and the role of prior mathematics knowledge, computer skill, and English language skill on their achievement and motivation as they played the game. A total of 193 students and 10 teachers participated in this study. The teachers were randomly assigned to experimental and control groups. A mixed method of quantitative and interviews were used with Multivariate Analysis of Co-Variance to analyze the data. The results indicated significant improvement of the achievement of the experimental versus control group. No significant improvement was found in the motivation of the groups. Students who played the games in their classrooms and school labs reported greater motivation compared to the ones who played the games only in the school labs. Prior knowledge, computer and English language skill did not play significant roles in achievement and motivation of the experimental group. 2010 Elsevier Ltd. All rights reserved."}
{"_id": "50733aebb9fa0397f6fda33f39d4827feb74a0a6", "title": "Improving Matching Models with Contextualized Word Representations for Multi-turn Response Selection in Retrieval-based Chatbots", "text": "We consider matching with pre-trained contextualized word vectors for multi-turn response selection in retrieval-based chatbots. When directly applied to the task, state-ofthe-art models, such as CoVe and ELMo, do not work as well as they do on other tasks, due to the hierarchical nature, casual language, and domain-specific word use of conversations. To tackle the challenges, we propose pre-training a sentence-level and a sessionlevel contextualized word vectors by learning a dialogue generation model from large-scale human-human conversations with a hierarchical encoder-decoder architecture. The two levels of vectors are then integrated into the input layer and the output layer of a matching model respectively. Experimental results on two benchmark datasets indicate that the proposed contextualized word vectors can significantly and consistently improve the performance of existing matching models for response selection."}
{"_id": "40d50a0cb39012bb1aae8b2a8358d41e1786df87", "title": "Low-rank matrix completion using alternating minimization", "text": "Alternating minimization represents a widely applicable and empirically successful approach for finding low-rank matrices that best fit the given data. For example, for the problem of low-rank matrix completion, this method is believed to be one of the most accurate and efficient, and formed a major component of the winning entry in the Netflix Challenge [17].\n In the alternating minimization approach, the low-rank target matrix is written in a bi-linear form, i.e. X = UV\u2020; the algorithm then alternates between finding the best U and the best V. Typically, each alternating step in isolation is convex and tractable. However the overall problem becomes non-convex and is prone to local minima. In fact, there has been almost no theoretical understanding of when this approach yields a good result.\n In this paper we present one of the first theoretical analyses of the performance of alternating minimization for matrix completion, and the related problem of matrix sensing. For both these problems, celebrated recent results have shown that they become well-posed and tractable once certain (now standard) conditions are imposed on the problem. We show that alternating minimization also succeeds under similar conditions. Moreover, compared to existing results, our paper shows that alternating minimization guarantees faster (in particular, geometric) convergence to the true matrix, while allowing a significantly simpler analysis."}
{"_id": "6c394f5eecc0371b43331b54ed118c8637b8b60d", "title": "A general design formula of multi-section power divider based on singly terminated filter design theory", "text": "A novel design formula of multi-section power divider is derived to obtain wide isolation performance. The derived design formula is based on the singly terminated filter design theory. This paper presents several simulation and experimental results of multi section power divider to show validity of the proposed design formula. Experiments show excellent performance of multi section power divider with multi-octave isolation characteristic."}
{"_id": "2de68ac42d7b54b3ef0596c18e3d4a2b3a274f72", "title": "ASAP: Prioritizing Attention via Time Series Smoothing", "text": "Time series visualization of streaming telemetry (i.e., charting of key metrics such as server load over time) is increasingly prevalent in modern data platforms and applications. However, many existing systems simply plot the raw data streams as they arrive, often obscuring large-scale trends due to small-scale noise. We propose an alternative: to better prioritize end users\u2019 attention, smooth time series visualizations as much as possible to remove noise, while retaining large-scale structure to highlight significant deviations. We develop a new analytics operator called ASAP that automatically smooths streaming time series by adaptively optimizing the trade-off between noise reduction (i.e., variance) and trend retention (i.e., kurtosis). We introduce metrics to quantitatively assess the quality of smoothed plots and provide an efficient search strategy for optimizing these metrics that combines techniques from stream processing, user interface design, and signal processing via autocorrelation-based pruning, pixel-aware preaggregation, and ondemand refresh. We demonstrate that ASAP can improve users\u2019 accuracy in identifying long-term deviations in time series by up to 38.4% while reducing response times by up to 44.3%. Moreover, ASAP delivers these results several orders of magnitude faster than alternative search strategies."}
{"_id": "506dd23fd3d39fb3f9e674a9c95caa9b0440f92f", "title": "LD-BSCA: A local-density based spatial clustering algorithm", "text": "Density-based clustering algorithms are very powerful to discover arbitrary-shaped clusters in large spatial databases. However, in many cases, varied local-density clusters exist in different regions of data space. In this paper, a new algorithm LD-BSCA is proposed with introducing the concept of local MinPts (a minimum number of points) and the new cluster expanding condition: ExpandConClId (Expanding Condition of ClId-th Cluster). We minimize the algorithm input down to only one parameter and let the local MinPts diversified as clusters change from one to another simultaneously. Experiments show LD-BSCA algorithm is powerful to discover all clusters in gradient distributing databases. In addition, we introduce an efficient searching method to reduce the runtime of our algorithm. Using several databases, we demonstrate the high quality of the proposed algorithm in clustering the implicit knowledge in asymmetric distribution databases."}
{"_id": "74cbdce4e28ea42d0b7168e91effe450460a2d09", "title": "An Optimal Algorithm for Finding the Kernel of a Polygon", "text": "The kernel K(P) of a simple polygon P wah n verUces is the locus of the points internal to P from which all verUces of P are wslble Equwalently, K(P) is the mtersectmn of appropriate half-planes determined by the polygon's edges Although it is known that to find the intersection of n generic half-planes requires time O(n log n), we show that one can exploit the ordering of the half-planes corresponding to the sequence of the polygon's edges to obtain a kernel finding algorithm which runs m time O(n) and is therefore optimal"}
{"_id": "4aa638ebc72c47841fbe55b0a57efedd7d70cbc1", "title": "Teachers' perceptions of the barriers to technology integration and practices with technology under situated professional development", "text": "This case study examines 18 elementary school teachers\u2019 perceptions of the barriers to technology integration (access, vision, professional development, time, and beliefs) and instructional practices with technology after two years of situated professional development. Months after transitioning from mentoring to teacher-led communities of practice, teachers continued to report positive perceptions of several barriers and were observed engaging in desirable instructional practices. Interviews suggest that the situated professional development activities helped create an environment that supported teachers\u2019 decisions to integrate technology. Implications for teacher professional development and technology integration are discussed in conjunction with the strengths and limitations of the study. 2012 Elsevier Ltd. All rights reserved."}
{"_id": "c8ac5e08d8e2dfb66982de431f429b900248a8da", "title": "Looking at Vehicles in the Night : Detection & Dynamics of Rear Lights", "text": "Existing nighttime vehicle detection methods use color as the primary cue for detecting vehicles. However, complex road and ambient lighting conditions, and camera configurations can influence the effectiveness of such explicit rule and threshold based methods. In this paper, there are three main contributions. First, we present a novel method to detect vehicles during nighttime involving both learned classifiers and explicit rules, which can operate in the presence of varying ambient lighting conditions. The proposed method which is titled as VeDANt (Vehicle Detection using Active-learning during Nighttime) employs a modified form of active learning for training Adaboost classifiers with Haar-like features using gray-level input images. The hypothesis windows are then further verified using the proposed techniques involving perspective geometries and color information of the taillights. Second, VeDANt is extended to analyze the dynamics of the vehicles during nighttime by detecting three taillight activities braking, turning left and turning right. Third, we release three new and fully annotated LISA-Night datasets with over 5000 frames for evaluation and benchmarking, which capture a variety of complex traffic and lighting conditions. Such comprehensively annotated and complex public datasets is a first in the area of nighttime vehicle detection. We show that VeDANt is able to detect vehicles during nighttime with over 98% accuracy and less than 1% false detections."}
{"_id": "8f77c1df86d0f62653405f0cce85e94bce76ad7c", "title": "RASSH - Reinforced adaptive SSH honeypot", "text": "The wide spread of cyber-attacks made the need of gathering as much information as possible about them, a real demand in nowadays global context. The honeypot systems have become a powerful tool on the way to accomplish that. Researchers have already focused on the development of various honeypot systems but the fact that their administration is time consuming made clear the need of self-adaptive honeypot system capable of learning from their interaction with attackers. This paper presents a self-adaptive honeypot system we are developing that tries to overlap some of the disadvantaged that existing systems have. The proposed honeypot is a medium interaction system developed using Python and it emulates a SSH (Secure Shell) server. The system is capable of interacting with the attackers by means of reinforcement learning algorithms."}
{"_id": "485b014bb5f1cfa09f30b7e1e0a553ad2889de46", "title": "Digital Facial Engraving", "text": "This contribution introduces the basic techniques for digital facial engraving, which imitates traditional copperplate engraving. Inspired by traditional techniques, we first establish a set of basic rules thanks to which separate engraving layers are built on the top of the original photo. Separate layers are merged according to simple merging rules and according to range shift/scale masks specially introduced for this purpose. We illustrate the introduced technique by a set of black/white and color engravings, showing different features such as engraving-specific image enhancements, mixing different regular engraving lines with mezzotint, irregular perturbations of engraving lines etc. We introduce the notion of engraving style which comprises a set of separate engraving layers together with a set of associated range shift/scale masks. The engraving style helps to port the look and feel of one engraving to another. Once different libraries of pre-defined mappable engraving styles and an appropriate user interface are added to the basic system, producing a decent gravure starting from a simple digital photo will be a matter of seconds. The engraving technique described in this contribution opens new perspectives for digital art, adding unprecedented power and precision to the engraver's work. 1. Rationale Engraving is among the most important traditional graphical techniques. It first appeared in the fifteenth century as an illustrative support for budding book-printing, but very quickly became an art in its own right, thanks to its specific expressive power. Actually, four main classes of engraving are used by artists: letterpress or relief printing, intaglio or in-hollow printing, silk screen process and lithography, with several different techniques in each class. The history of printmaking was punctuated by prosperous periods of techniques which later declined for various reasons. Facial engraving is one such example. Extremely popular in seventeenth and eighteenth centuries, when photography did not exist, this wonderful art became almost unused, due to the extreme technical demands that it made on the engraver. Professional copperplate engravers are rare today, and the cost of true engravings is simply too prohibitive to be used in everyday printing. At the same time, traditional facial engraving has no doubt very specific appeal: its neat, sharp appearance distinguishes it advantageously from photos. To appreciate the graphical impact of engravings it's enough to compare the engraved portraits in the Wall Street Journal with portraits in other newspapers produced with traditional impersonal screening. Does it mean that this enjoyable art is condemned to disappear \u2026"}
{"_id": "48c298565b7906dcc6f8555848201202660d3428", "title": "Correcting a metacognitive error: feedback increases retention of low-confidence correct responses.", "text": "Previous studies investigating posttest feedback have generally conceptualized feedback as a method for correcting erroneous responses, giving virtually no consideration to how feedback might promote learning of correct responses. Here, the authors show that when correct responses are made with low confidence, feedback serves to correct this initial metacognitive error, enhancing retention of low-confidence correct responses. In 2 experiments, subjects took an initial multiple-choice test on general knowledge facts and made a confidence judgment after each response. Feedback was provided for half of the questions, and retention was assessed by a final cued-recall test. Taking the initial test improved retention relative to not testing, and feedback further enhanced performance. Consistent with prior research, feedback improved retention by allowing subjects to correct initially erroneous responses. Of more importance, feedback also doubled the retention of correct low-confidence responses, relative to providing no feedback. The function of feedback is to correct both memory errors and metacognitive errors."}
{"_id": "561a28d8640394879e4cea6fee9ece53a2480832", "title": "A multi-structural framework for adaptive supply chain planning and operations control with structure dynamics considerations", "text": "A trend in up-to-date developments in supply chain management (SCM) is to make supply chains more agile, flexible, and responsive. In supply chains, different structures (functional, organizational, informational, financial etc.) are (re)formed. These structures interrelate with each other and change in dynamics. The paper introduces a new conceptual framework for multi-structural planning and operations of adaptive supply chains with structure dynamics considerations. We elaborate a vision of adaptive supply chain management (A-SCM), a new dynamic model and tools for the planning and control of adaptive supply chains. SCM is addressed from perspectives of execution dynamics under uncertainty. Supply chains are modelled in terms of dynamic multi-structural macro-states, based on simultaneous consideration of the management as a function of both states and structures. The research approach is theoretically based on the combined application of control theory, operations research, and agent-based modelling. The findings suggest constructive ways to implement multi-structural supply chain management and to transit from a \" one-way \" partial optimization to the feedback-based, closed-loop adaptive supply chain optimization and execution management for value chain adaptability, stability and crisis-resistance. The proposed methodology enhances managerial insight into advanced supply chain management."}
{"_id": "0d060f1edd7e79865f1a48a9c874439c4b10caea", "title": "Artificial Neural Networks Applied to Taxi Destination Prediction", "text": "We describe our first-place solution to the ECML/PKDD discovery challenge on taxi destination prediction. The task consisted in predicting the destination of a taxi based on the beginning of its trajectory, represented as a variable-length sequence of GPS points, and diverse associated meta-information, such as the departure time, the driver id and client information. Contrary to most published competitor approaches, we used an almost fully automated approach based on neural networks and we ranked first out of 381 teams. The architectures we tried use multi-layer perceptrons, bidirectional recurrent neural networks and models inspired from recently introduced memory networks. Our approach could easily be adapted to other applications in which the goal is to predict a fixed-length output from a variable-length sequence. Random order, this does not reflect the weights of contributions. ar X iv :1 50 8. 00 02 1v 2 [ cs .L G ] 2 1 Se p 20 15 2 Artificial Neural Networks Applied to Taxi Destination Prediction"}
{"_id": "740ab1887f02b4986beb99cd39591480372e9199", "title": "Data Compression Techniques in Wireless Sensor Networks", "text": "Wireless sensor networks (WSNs) open a new research field for pervasive computing and context-aware monitoring of the physical environments. Many WSN applications aim at long-term environmental monitoring. In these applications, energy consumption is a principal concern because sensor nodes have to regularly report their sensing data to the remote sink(s) for a very long time. Since data transmission is one primary factor of the energy consumption of sensor nodes, many research efforts focus on reducing the amount of data transmissions through data compression techniques. In this chapter, we discuss the data compression techniques in WSNs, which can be classified into five categories: 1) The string-based compression techniques treat sensing data as a sequence of characters and then adopt the text data compression schemes to compress them. 2) The image-based compression techniques hierarchically organize WSNs and then borrow the idea from the image compression solutions to handle sensing data. 3) The distributed source coding techniques extend the Slepian-Wolf theorem to encode multiple correlated data streams independently at sensor nodes and then jointly decode them at the sink. 4) The compressed sensing techniques adopt a small number of nonadaptive and randomized linear projection samples to compress sensing data. 5) The data aggregation techniques select a subset of sensor nodes in the network to be responsible for fusing the sensing data from other sensor nodes to reduce the amount of data transmissions. A comparison of these data compression techniques is also given in this chapter."}
{"_id": "ddf69c7126022bf99f6d0d5f8b4f272915e232cc", "title": "A meta-analysis of human-system interfaces in unmanned aerial vehicle (UAV) swarm management.", "text": "A meta-analysis was conducted to systematically evaluate the current state of research on human-system interfaces for users controlling semi-autonomous swarms composed of groups of drones or unmanned aerial vehicles (UAVs). UAV swarms pose several human factors challenges, such as high cognitive demands, non-intuitive behavior, and serious consequences for errors. This article presents findings from a meta-analysis of 27 UAV swarm management papers focused on the human-system interface and human factors concerns, providing an overview of the advantages, challenges, and limitations of current UAV management interfaces, as well as information on how these interfaces are currently evaluated. In general allowing user and mission-specific customization to user interfaces and raising the swarm's level of autonomy to reduce operator cognitive workload are beneficial and improve situation awareness (SA). It is clear more research is needed in this rapidly evolving field."}
{"_id": "edf5861705647e0670f841cc6abcd2371823b13e", "title": "On HMM static hand gesture recognition", "text": "A real time static isolated gesture recognition application using a Hidden Markov Model approach with features extracted from gestures' silhouettes is presented. Nine different hand poses with various degrees of rotation are considered. The system, both simple and effective, uses color images of the hands to be recognized directly from the camera and is capable of processing 23 frames per second on a Quad Core Intel Processor."}
{"_id": "95347596715641683678283b06f6b5ce3a09cba3", "title": "Teaching Programming in Secondary Education Through Embodied Computing Platforms: Robotics and Wearables", "text": "Pedagogy has emphasized that physical representations and tangible interactive objects benefit learning especially for young students. There are many tangible hardware platforms for introducing computer programming to children, but there is limited comparative evaluation of them in the context of a formal classroom. In this work, we explore the benefits of learning to code for tangible computers, such as robots and wearable computers, in comparison to programming for the desktop computer. For this purpose, 36 students participated in a within-groups study that involved three types of target computer platform tangibility: (1) desktop, (2) wearable, and (3) robotic. We employed similar blocks-based visual programming environments, and we measured emotional engagement, attitudes, and computer programming performance. We found that students were more engaged by and had a higher intention of learning programming with the robotic rather than the desktop computer. Furthermore, tangible computing platforms, either robot or wearable, did not affect the students\u2019 performance in learning basic computational concepts (e.g., sequence, repeat, and decision). Our findings suggest that computer programming should be introduced through multiple target platforms (e.g., robots, smartphones, wearables) to engage children."}
{"_id": "508095dabb20caf72cfbf342c206161b54c97ecd", "title": "A Dynamic Programming Algorithm for Computing N-gram Posteriors from Lattices", "text": "Efficient computation of n-gram posterior probabilities from lattices has applications in lattice-based minimum Bayes-risk decoding in statistical machine translation and the estimation of expected document frequencies from spoken corpora. In this paper, we present an algorithm for computing the posterior probabilities of all ngrams in a lattice and constructing a minimal deterministic weighted finite-state automaton associating each n-gram with its posterior for efficient storage and retrieval. Our algorithm builds upon the best known algorithm in literature for computing ngram posteriors from lattices and leverages the following observations to significantly improve the time and space requirements: i) the n-grams for which the posteriors will be computed typically comprises all n-grams in the lattice up to a certain length, ii) posterior is equivalent to expected count for an n-gram that do not repeat on any path, iii) there are efficient algorithms for computing n-gram expected counts from lattices. We present experimental results comparing our algorithm with the best known algorithm in literature as well as a baseline algorithm based on weighted finite-state automata operations."}
{"_id": "9e245f05f47a818f6321a0c2f67174f32308bc04", "title": "Magnet design based on transient behavior of an IPMSM in event of malfunction", "text": "This paper deals with the approach of analyzing the magnet design based on transient behavior of an IPMSM during switching processes in the event of malfunction for automotive applications. Depending on the maximal current increase during the transient process the needed percentage of Dysprosium for the magnet design is conducted. Switching off strategy is introduced for both Voltage-Source- and Current-Source-Inverter for automotive application. Both inverters are compared concerning the transient current increase and respectively the Dy-content for the magnet design."}
{"_id": "545067e6fcbbcf1d2b5557002cac9948893b97c8", "title": "NAIDS design using ChiMIC-KGS", "text": "The tasks of IDS is to detect activities in the form of intrusion by unauthorized parties which attempt to gain access to information unlawfully in a network environment. Changes in intrusion strategies affect the behavior of networks where the representation of network traffic data changes over time. The nature of the network data has high volume and dimension and contains redundancy. This study proposes the use of ChiMIC as a method to reduce redundancy features and select the relevant features to be used in detecting anomalies on the network, where KGS is used as an anomaly detection engine. Both approach are used to build NAIDS that has capabilities to detect the emergence of new attacks over time."}
{"_id": "122e4dae668618118e76d3e905b2317fb0dec9a4", "title": "Supply Chain Management ( SCM ) and Organizational Key Factors for Successful Implementation of Enterprise Resource Planning ( ERP ) Systems", "text": "This paper reports some findings of an on-going research into ERP implementation issues. A part of the research, which this paper reports, consists of reviewing several cases of successful implementations of ERP systems. The analysis of all these cases revealed that the critical factors for successful implementation of ERP systems fall into two main categories: technological and organizational factors. However, the organizational factors seem to be more important than the technological ones as far as the successful implementation of ERP systems under SCM is concerned."}
{"_id": "69418ff5d4eac106c72130e152b807004e2b979c", "title": "Semantically Smooth Knowledge Graph Embedding", "text": "This paper considers the problem of embedding Knowledge Graphs (KGs) consisting of entities and relations into lowdimensional vector spaces. Most of the existing methods perform this task based solely on observed facts. The only requirement is that the learned embeddings should be compatible within each individual fact. In this paper, aiming at further discovering the intrinsic geometric structure of the embedding space, we propose Semantically Smooth Embedding (SSE). The key idea of SSE is to take full advantage of additional semantic information and enforce the embedding space to be semantically smooth, i.e., entities belonging to the same semantic category will lie close to each other in the embedding space. Two manifold learning algorithms Laplacian Eigenmaps and Locally Linear Embedding are used to model the smoothness assumption. Both are formulated as geometrically based regularization terms to constrain the embedding task. We empirically evaluate SSE in two benchmark tasks of link prediction and triple classification, and achieve significant and consistent improvements over state-of-the-art methods. Furthermore, SSE is a general framework. The smoothness assumption can be imposed to a wide variety of embedding models, and it can also be constructed using other information besides entities\u2019 semantic categories."}
{"_id": "90434160915f6a0279aeab96a1fcf8927f221ce6", "title": "Automatic Chinese food identification and quantity estimation", "text": "Computer-aided food identification and quantity estimation have caught more attention in recent years because of the growing concern of our health. The identification problem is usually defined as an image categorization or classification problem and several researches have been proposed. In this paper, we address the issues of feature descriptors in the food identification problem and introduce a preliminary approach for the quantity estimation using depth information. Sparse coding is utilized in the SIFT and Local binary pattern feature descriptors, and these features combined with gabor and color features are used to represent food items. A multi-label SVM classifier is trained for each feature, and these classifiers are combined with multi-class Adaboost algorithm. For evaluation, 50 categories of worldwide food are used, and each category contains 100 photographs from different sources, such as manually taken or from Internet web albums. An overall accuracy of 68.3% is achieved, and success at top-N candidates achieved 80.6%, 84.8%, and 90.9% accuracy accordingly when N equals 2, 3, and 5, thus making mobile application practical. The experimental results show that the proposed methods greatly improve the performance of original SIFT and LBP feature descriptors. On the other hand, for quantity estimation using depth information, a straight forward method is proposed for certain food, while transparent food ingredients such as pure water and cooked rice are temporarily excluded."}
{"_id": "9bb14f91b269e72a98d9063d22da2092dc5f552d", "title": "Implementation and Performance Analysis of Firewall on Open vSwitch", "text": "Software Defined Networking (SDN) is a current research trend that follows the ideology of physical separation of the control and data plane of the forwarding devices. SDN mainly advocates with two types of devices: (1) Controllers, that implement the control plane and (2) Switches, that perform the data plane operations. OpenFlow protocol (OFP) is the current standard through which controllers and switches can communicate with each other. Using OpenFlow, SDN controllers can manage forwarding behaviors of SDN switches by managing Flow Table entries. Switches use these low-level Flow Table entries to forward packets to appropriate hosts. Firewalls are integral part of today\u2019s networks. We can\u2019t imagine our network without a Firewall which protects our network from potential threats. As SDN is getting pace in replacing traditional architecture, it would be very interesting to see how much security features can be provided by OpenFlow-enabled switches. Hence, it will be very important to see if SDN, on the top of OpenFlow, can efficiently implement Firewalls and provides support for an advanced feature like connection tracking. The task is straightforward: Controller will add Flow Table entries on switches based upon Firewall rules. Such way, we can enhance packet-processing by providing security. In this Document, one strategy for implementing Firewall on SDN is presented. We can write some controller applications that work as Firewall and inspect incoming packets against the Firewall rules. These applications are also able to implement connection tracking mechanism. As SDN devices for the experiments, we selected Ryu controller and Open vSwitch. Initially, such applications are tested on local machine with small Firewall ruleset. Later, they are tested with real-world traffic and comparatively large Firewall ruleset. The testing results present that such strategy can be used as a first step in implementing security features (including connection tracking) in SDN environment."}
{"_id": "d12e3606d94050d382306761eb43b58c042ac390", "title": "Predicting and analyzing secondary education placement-test scores: A data mining approach", "text": "Understanding the factors that lead to success (or failure) of students at placement tests is an interesting and challenging problem. Since the centralized placement tests and future academic achievements are considered to be related concepts, analysis of the success factors behind placement tests may help understand and potentially improve academic achievement. In this study using a large and feature rich dataset from Secondary Education Transition System in Turkey we developed models to predict secondary education placement test results, and using sensitivity analysis on those prediction models we identified the most important predictors. The results showed that C5 decision tree algorithm is the best predictor with 95% accuracy on hold-out sample, followed by support vector machines (with an accuracy of 91%) and artificial neural networks (with an accuracy of 89%). Logistic regression models came out to be the least accurate of the four with and overall accuracy of 82%. The sensitivity analysis revealed that previous test experience, whether a student has a scholarship, student\u2019s number of siblings, previous years\u2019 grade point average are among the most important predictors of the placement test scores. 2012 Elsevier Ltd. All rights reserved."}
{"_id": "0a9621c60889c34e2ccc4848a58cbbcef924cf4d", "title": "Introduction to Data Mining", "text": "an introduction to data mining san jose state university introduction to data mining university of florida introduction to data mining exinfm introduction to data mining introduction to data mining and knowledge discovery lecture notes for chapter 3 introduction to data mining introduction to data mining introduction to data mining pennsylvania state university introduction to data mining tu/e introduction to data mining sas support session"}
{"_id": "440deffc89b338fcf1662d540a8c4cfc56aa4ccb", "title": "A Data Mining view on Class Room Teaching Language", "text": "From ancient period in India, educational institution embarked to use class room teaching. Where a teacher explains the material and students understand and learn the lesson. There is no absolute scale for measuring knowledge but examination score is one scale which shows the performance indicator of students. So it is important that appropriate material is taught but it is vital that while teaching which language is chosen, class notes must be prepared and attendance. This study analyses the impact of language on the presence of students in class room. The main idea is to find out the support, confidence and interestingness level for appropriate language and attendance in the classroom. For this purpose association rule is used."}
{"_id": "7231c66eea6699456ff434311a79996084e257a1", "title": "Gray level image processing using contrast enhancement and watershed segmentation with quantitative evaluation", "text": "Both image enhancement and image segmentation are most practical approaches among virtually all automated image recognition systems. Feature extraction and recognition have numerous applications on telecommunication, weather forecasting, environment exploration and medical diagnosis. The adaptive image contrast stretching is a typical image enhancement approach and watershed segmentation is a typical image segmentation approach. Under conditions of an improper or disturbed illumination, the adaptive contrast stretching should be conducted, which adapts to intensity distributions. Watershed segmentation is a feasible approach to separate different objects automatically, where watershed lines separate the catchment basins. The erosion and dilation operations are essential procedures involved in watershed segmentation. To avoid over-segmentation, the markers for foreground and background can be selected accordingly. Quantitative measures (gray level energy, discrete entropy, relative entropy and mutual information) are proposed to evaluate the actual improvement via two techniques. These methodologies can be easily expanded to many other image processing approaches."}
{"_id": "ec9c4bb179abcf019255a33357c4fa2e66b09873", "title": "Grassmannian beamforming for multiple-input multiple-output wireless systems", "text": "Transmit and receive beamforming is an attractive, low-complexity technique for exploiting the significant diversity that is available in multiple-input and multiple-output (MIMO) wireless systems. Unfortunately, optimal performance requires either complete channel knowledge or knowledge of the optimal beamforming vector which is difficult to realize in practice. In this paper, we propose a quantized maximum signal-to-noise ratio beamforming technique where the receiver only sends the label of the best weight vector in a predetermined codebook to the transmitter. We develop optimal codebooks for i.i.d. Rayleigh fading matrix channels. We derive the distribution of the optimal transmit weight vector and use this result to bound the SNR degradation as a function of the quantization. A codebook design criterion is proposed by exploiting the connections between the quantization problem and Grassmannian line packing. The design criterion is flexible enough to allow for side constraints on the codebook vectors. Bounds on the maximum distortion with the resulting Grassmannian codebooks follow naturally from the nature of the code. A proof is given that any system using an overcomplete codebook for transmit diversity and maximum ratio combining obtains a diversity on the order of the product of the number of transmit and receive antennas. Bounds on the loss in capacity due to quantization are derived. Monte Carlo simulations are presented that compare the symbol error probability for different quantization strategies."}
{"_id": "2eff823bdecb1a506ba88e1127fa3cdb1a263682", "title": "Efficient Memory Disaggregation with Infiniswap", "text": "Memory-intensive applications suffer large performance loss when their working sets do not fully fit in memory. Yet, they cannot leverage otherwise unused remote memory when paging out to disks even in the presence of large imbalance in memory utilizations across a cluster. Existing proposals for memory disaggregation call for new architectures, new hardware designs, and/or new programming models, making them infeasible. This paper describes the design and implementation of INFINISWAP, a remote memory paging system designed specifically for an RDMA network. INFINISWAP opportunistically harvests and transparently exposes unused memory to unmodified applications by dividing the swap space of each machine into many slabs and distributing them across many machines\u2019 remote memory. Because one-sided RDMA operations bypass remote CPUs, INFINISWAP leverages the power of many choices to perform decentralized slab placements and evictions. We have implemented and deployed INFINISWAP on an RDMA cluster without any modifications to user applications or the OS and evaluated its effectiveness using multiple workloads running on unmodified VoltDB, Memcached, PowerGraph, GraphX, and Apache Spark. Using INFINISWAP, throughputs of these applications improve between 4\u00d7 (0.94\u00d7) to 15.4\u00d7 (7.8\u00d7) over disk (Mellanox nbdX), and median and tail latencies between 5.4\u00d7 (2\u00d7) and 61\u00d7 (2.3\u00d7). INFINISWAP achieves these with negligible remote CPU usage, whereas nbdX becomes CPU-bound. INFINISWAP increases the overall memory utilization of a cluster and works well at scale."}
{"_id": "4076cf9e5c31446f56748352c23c3e5d3d6d69cb", "title": "Hate Source : White Supremacist Hate Groups and Hate Crime", "text": "The relationship between hate group activity and hate crime is theoretically ambiguous. Hate groups may incite criminal behavior in support of their beliefs. On the other hand, hate groups may reduce hate crime by serving as a forum for members to verbally vent their frustrations or as protection from future biased violence. I \u0085nd that the presence of an active white supremacist hate group chapter is associated with an 18.7 percent higher hate crime rate. White supremacist groups are not associated with the level of anti-white hate crimes committed by non-whites, nor do they form in expectation of future hate crimes by non-whites. JEL Codes: K14, J15, D71"}
{"_id": "581685e210b2b4d157ec5cc6196e35d4af4d3a2c", "title": "Relating Newcomer Personality to Survival and Activity in Recommender Systems", "text": "In this work, we explore the degree to which personality information can be used to model newcomer retention, investment, intensity of engagement, and distribution of activity in a recommender community. Prior work shows that Big-Five Personality traits can explain variation in user behavior in other contexts. Building on this, we carry out and report on an analysis of 1008 MovieLens users with identified personality profiles. We find that Introverts and low Agreeableness users are more likely to survive into the second and subsequent sessions compared to their respective counterparts; Introverts and low Conscientiousness users are a significantly more active population compared to their respective counterparts; High Openness and High Neuroticism users contribute (tag) significantly more compared to their counterparts, but their counterparts consume (browse and bookmark) more; and low Agreeableness users are more likely to rate whereas high Agreeableness users are more likely to tag. These results show how modeling newcomer behavior from user personality can be useful for recommender systems designers as they customize the system to guide people towards tasks that need to be done or tasks the users will find rewarding and also decide which users to invest retention efforts in."}
{"_id": "8a5e5aaf26db3acdcafbc2cd5f44680d4f84d4fc", "title": "Stealthy Denial of Service Strategy in Cloud Computing", "text": "The success of the cloud computing paradigm is due to its on-demand, self-service, and pay-by-use nature. According to this paradigm, the effects of Denial of Service (DoS) attacks involve not only the quality of the delivered service, but also the service maintenance costs in terms of resource consumption. Specifically, the longer the detection delay is, the higher the costs to be incurred. Therefore, a particular attention has to be paid for stealthy DoS attacks. They aim at minimizing their visibility, and at the same time, they can be as harmful as the brute-force attacks. They are sophisticated attacks tailored to leverage the worst-case performance of the target system through specific periodic, pulsing, and low-rate traffic patterns. In this paper, we propose a strategy to orchestrate stealthy attack patterns, which exhibit a slowly-increasing-intensity trend designed to inflict the maximum financial cost to the cloud customer, while respecting the job size and the service arrival rate imposed by the detection mechanisms. We describe both how to apply the proposed strategy, and its effects on the target system deployed in the cloud."}
{"_id": "75859ac30f5444f0d9acfeff618444ae280d661d", "title": "Multibiometric Cryptosystems Based on Feature-Level Fusion", "text": "Multibiometric systems are being increasingly de- ployed in many large-scale biometric applications (e.g., FBI-IAFIS, UIDAI system in India) because they have several advantages such as lower error rates and larger population coverage compared to unibiometric systems. However, multibiometric systems require storage of multiple biometric templates (e.g., fingerprint, iris, and face) for each user, which results in increased risk to user privacy and system security. One method to protect individual templates is to store only the secure sketch generated from the corresponding template using a biometric cryptosystem. This requires storage of multiple sketches. In this paper, we propose a feature-level fusion framework to simultaneously protect multiple templates of a user as a single secure sketch. Our main contributions include: (1) practical implementation of the proposed feature-level fusion framework using two well-known biometric cryptosystems, namery,fuzzy vault and fuzzy commitment, and (2) detailed analysis of the trade-off between matching accuracy and security in the proposed multibiometric cryptosystems based on two different databases (one real and one virtual multimodal database), each containing the three most popular biometric modalities, namely, fingerprint, iris, and face. Experimental results show that both the multibiometric cryptosystems proposed here have higher security and matching performance compared to their unibiometric counterparts."}
{"_id": "81961a5a9dbfc08b1e0ead1886790e9b8700af60", "title": "A virtual translation technique to improve current regulator for salient-pole AC machines", "text": "This paper presents a synchronous frame current regulator for salient-pole AC machines, which is insensitive to both parameters and synchronous frequencies. The transformation from stationary to synchronous frame is known to produce a frame-dependent cross-coupling term, which results in degradation in current response at high speeds and is represented as an asymmetric pole in the complex vector form of such systems. The asymmetric pole close to right half plane (RHP) in the complex vector s-domain cannot be perfectly canceled by parameter-based design of PI controllers due to parameter variation. Complex vector design synchronous frame PI current regulator with a virtually translated asymmetric pole is shown in this paper to reduce the parameter sensitivity and to improve disturbance rejection. The current regulator with the virtually translated pole have been analyzed in complex vector form for nonsalient-pole AC machine and in scalar form for salient-pole AC machines. It has been shown to yield comparable performances in both cases. This paper includes experimental verification of the performance of the proposed current regulator for an IPMSM."}
{"_id": "bceea31282c53bc18822c0974f51a08b98e3d4d9", "title": "News-based trading strategies", "text": "The marvel of markets lies in the fact that dispersed information is instantaneously processed and used to adjust the price of goods, services and assets. Financial markets are particularly efficient when it comes to processing information; such information is typically embedded in textual news that is then interpreted by investors. Quite recently, researchers have started to automatically determine news sentiment in order to explain stock price movements. Interestingly, this so-called news sentiment works fairly well in explaining stock returns. In this paper, we design trading strategies that utilize textual news in order to obtain profits on the basis of novel information entering the market. We thus propose approaches for automated decisionmaking based on supervised and reinforcement learning. Altogether, we demonstrate how news-based data can be incorporated into an investment system."}
{"_id": "b3dbbae9257f7a1b0b5c2138aee7701899c219a9", "title": "Handcrafted features with convolutional neural networks for detection of tumor cells in histology images", "text": "Detection of tumor nuclei in cancer histology images requires sophisticated techniques due to the irregular shape, size and chromatin texture of the tumor nuclei. Some very recently proposed methods employ deep convolutional neural networks (CNNs) to detect cells in H&E stained images. However, all such methods use some form of raw pixel intensities as input and rely on the CNN to learn the deep features. In this work, we extend a recently proposed spatially constrained CNN (SC-CNN) by proposing features that capture texture characteristics and show that although CNN produces good results on automatically learned features, it can perform better if the input consists of a combination of handcrafted features and the raw data. The handcrafted features are computed through the scattering transform which gives non-linear invariant texture features. The combination of handcrafted features with raw data produces sharp proximity maps and better detection results than the results of raw intensities with a similar kind of CNN architecture."}
{"_id": "9a46168d008e5b456beba05ad23b8bb3446b5548", "title": "Inverse procedural modeling of 3D models for virtual worlds", "text": "This course presents a collection of state-of-the-art approaches for modeling and editing of 3D models for virtual worlds, simulations, and entertainment, in addition to real-world applications. The first contribution of this course is a coherent review of inverse procedural modeling (IPM) (i.e., proceduralization of provided 3D content). We describe different formulations of the problem as well as solutions based on those formulations. We show that although the IPM framework seems under-constrained, the state-of-the-art solutions actually use simple analogies to convert the problem into a set of fundamental computer science problems, which are then solved by corresponding algorithms or optimizations. The second contribution includes a description and categorization of results and applications of the IPM frameworks. Moreover, a substantial part of the course is devoted to summarizing different domain IPM frameworks for practical content generation in modeling and animation."}
{"_id": "38a7b87d89605d244e26bdeb6d69a61d7263b7e8", "title": "Lack of sleep affects the evaluation of emotional stimuli", "text": "Sleep deprivation (SD) negatively affects various cognitive performances, but surprisingly evidence about a specific impact of sleep loss on subjective evaluation of emotional stimuli remains sparse. In the present study, we assessed the effect of SD on the emotional rating of standardized visual stimuli selected from the International Affective Picture System. Forty university students were assigned to the sleep group (n=20), tested before and after one night of undisturbed sleep at home, or to the deprivation group, tested before and after one night of total SD. One-hundred and eighty pictures (90 test, 90 retest) were selected and categorized as pleasant, neutral and unpleasant. Participants were asked to judge their emotional reactions while viewing pictures by means of the Self-Assessment Manikin. Subjective mood ratings were also obtained by means of Visual Analog Scales. No significant effect of SD was observed on the evaluation of pleasant and unpleasant stimuli. On the contrary, SD subjects perceived the neutral pictures more negatively and showed an increase of negative mood and a decrease of subjective alertness compared to non-deprived subjects. Finally, an analysis of covariance on mean valence ratings of neutral pictures using negative mood as covariate confirmed the effect of SD. Our results indicate that sleep is involved in regulating emotional evaluation. The emotional labeling of neutral stimuli biased toward negative responses was not mediated by the increase of negative mood. This effect can be interpreted as an adaptive reaction supporting the \"better safe than sorry\" principle. It may also have applied implications for healthcare workers, military and law-enforcement personnel."}
{"_id": "8099e1ec5ab75736566c64588690a2fe3abd934d", "title": "A MODERN TAKE", "text": "We revisit the bias-variance tradeoff for neural networks in light of modern empirical findings. The traditional bias-variance tradeoff in machine learning suggests that as model complexity grows, variance increases. Classical bounds in statistical learning theory point to the number of parameters in a model as a measure of model complexity, which means the tradeoff would indicate that variance increases with the size of neural networks. However, we empirically find that variance due to training set sampling is roughly constant (with both width and depth) in practice. Variance caused by the non-convexity of the loss landscape is different. We find that it decreases with width and increases with depth, in our setting. We provide theoretical analysis, in a simplified setting inspired by linear models, that is consistent with our empirical findings for width. We view bias-variance as a useful lens to study generalization through and encourage further theoretical explanation from this perspective."}
{"_id": "6d2c46c04ab0d41088866e6758b1bf838035a9a4", "title": "A Foundation for the Study of IT Effects: A New Look at DeSanctis and Poole's Concepts of Structural Features and Spirit", "text": "Special Issue"}
{"_id": "c1e9c4c5637c2d67863ee53eef3aa2df20a6e56d", "title": "Visibility estimation and joint inpainting of lidar depth maps", "text": "This paper presents a novel variational image inpainting method to solve the problem of generating, from 3-D lidar measures, a dense depth map coherent with a given color image, tackling visibility issues. When projecting the lidar point cloud onto the image plane, we generally obtain a sparse depth map, due to undersampling. Moreover, lidar and image sensor positions generally differ during acquisition, such that depth values referring to objects that are hidden from the image view point might appear with a naive projection. The proposed algorithm estimates the complete depth map, while simultaneously detecting and excluding those hidden points. It consists in a primal-dual optimization method, where a coupled total variation regularization term is included to match the depth and image gradients and a visibility indicator handles the selection of visible points. Tests with real data prove the effectiveness of the proposed strategy."}
{"_id": "e32663c27ff19fbd84898d2a5e41d7fc985c95fe", "title": "Review of Electromagnetic Techniques for Breast Cancer Detection", "text": "Breast cancer is anticipated to be responsible for almost 40,000 deaths in the USA in 2011. The current clinical detection techniques suffer from limitations which motivated researchers to investigate alternative modalities for the early detection of breast cancer. This paper focuses on reviewing the main electromagnetic techniques for breast cancer detection. More specifically, this work reviews the cutting edge research in microwave imaging, electrical impedance tomography, diffuse optical tomography, microwave radiometry, biomagnetic detection, biopotential detection, and magnetic resonance imaging (MRI). The goal of this paper is to provide biomedical researchers with an in-depth review that includes all main electromagnetic techniques in the literature and the latest progress in each of these techniques."}
{"_id": "69544f17534db93e48d06b84e8f7cb611bd88c69", "title": "Mapping CMMI Project Management Process Areas to SCRUM Practices", "text": "Over the past years, the capability maturity model (CMM) and capability maturity model integration (CMMI) have been broadly used for assessing organizational maturity and process capability throughout the world. However, the rapid pace of change in information technology has caused increasing frustration to the heavyweight plans, specifications, and other documentation imposed by contractual inertia and maturity model compliance criteria. In light of that, agile methodologies have been adopted to tackle this challenge. The aim of our paper is to present mapping between CMMI and one of these methodologies, Scrum. It shows how Scrum addresses the Project Management Process Areas of CMMI. This is useful for organizations that have their plan-driven process based on the CMMI model and are planning to improve its processes toward agility or to help organizations to define a new project management framework based on both CMMI and Scrum practices."}
{"_id": "134021cfb9f082f4e8b58f31bcbb41eb990ab874", "title": "DepSky: Dependable and Secure Storage in a Cloud-of-Clouds", "text": "The increasing popularity of cloud storage services has lead companies that handle critical data to think about using these services for their storage needs. Medical record databases, large biomedical datasets, historical information about power systems and financial data are some examples of critical data that could be moved to the cloud. However, the reliability and security of data stored in the cloud still remain major concerns. In this work we present DepSky, a system that improves the availability, integrity, and confidentiality of information stored in the cloud through the encryption, encoding, and replication of the data on diverse clouds that form a cloud-of-clouds. We deployed our system using four commercial clouds and used PlanetLab to run clients accessing the service from different countries. We observed that our protocols improved the perceived availability, and in most cases, the access latency, when compared with cloud providers individually. Moreover, the monetary costs of using DepSky in this scenario is at most twice the cost of using a single cloud, which is optimal and seems to be a reasonable cost, given the benefits."}
{"_id": "be32610fb5cdc850538ace1781fbfd5b96698618", "title": "A descent modified", "text": "In this paper, we propose a modified Polak\u2013Ribi\u00e8re\u2013Polyak (PRP) conjugate gradient method. An attractive property of the proposed method is that the direction generated by the method is always a descent direction for the objective function. This property is independent of the line search used. Moreover, if exact line search is used, the method reduces to the ordinary PRP method. Under appropriate conditions, we show that the modified PRP method with Armijo-type line search is globally convergent. We also present extensive preliminary numerical experiments to show the efficiency of the proposed method."}
{"_id": "66e66467ae444fabea0d1d82dca77f524b9aa797", "title": "Secure Border Gateway Protocol (Secure-BGP)", "text": "The Border Gateway Protocol (BGP), which is used to distribute routing information between autonomous systems (ASes), is a critical component of the Internet\u2019s routing infrastructure. It is highly vulnerable to a variety of malicious attacks, due to the lack of a secure means of verifying the authenticity and legitimacy of BGP control traffic. This document describes a secure, scalable, deployable architecture (S-BGP) for an authorization and authentication system that addresses most of the security problems associated with BGP. The paper discusses the vulnerabilities and security requirements associated with BGP, describes the S-BGP countermeasures, and explains how they address these vulnerabilities and requirements. In addition, this paper provides a comparison of this architecture with other approaches that have been proposed, analyzes the performance implications of the proposed countermeasures, and addresses operational issues."}
{"_id": "3885e1cf675f1fbf32823c981b3f7417c56c5412", "title": "Empowering personalized medicine with big data and semantic web technology: Promises, challenges, and use cases", "text": "In healthcare, big data tools and technologies have the potential to create significant value by improving outcomes while lowering costs for each individual patient. Diagnostic images, genetic test results and biometric information are increasingly generated and stored in electronic health records presenting us with challenges in data that is by nature high volume, variety and velocity, thereby necessitating novel ways to store, manage and process big data. This presents an urgent need to develop new, scalable and expandable big data infrastructure and analytical methods that can enable healthcare providers access knowledge for the individual patient, yielding better decisions and outcomes. In this paper, we briefly discuss the nature of big data and the role of semantic web and data analysis for generating \u201csmart data\u201d which offer actionable information that supports better decision for personalized medicine. In our view, the biggest challenge is to create a system that makes big data robust and smart for healthcare providers and patients that can lead to more effective clinical decision-making, improved health outcomes, and ultimately, managing the healthcare costs. We highlight some of the challenges in using big data and propose the need for a semantic data-driven environment to address them. We illustrate our vision with practical use cases, and discuss a path for empowering personalized medicine using big data and semantic web technology."}
{"_id": "98e03d35857f66c34fa79f3ea0dd2b4e3b670044", "title": "Series No . 16 Profiting from Innovation in the Digital Economy : Standards , Complementary Assets , and Business Models In the Wireless World", "text": ""}
{"_id": "b6c6fda47921d7b6bf76c8cb28c316d9ee0c2b64", "title": "10 user interface elements for mobile learning application development", "text": "Mobile learning application is a new fashionable trend. More than a thousand learning applications have been designed and developed. Therefore, the focus on mobile learning applications needs to be refined and improved in order to facilitate users' interaction and response towards the learning process. To develop an effective mobile learning application, one should start from the User Interface (UI) because an effective UI can play a role in constantly making the user focus on an object and subject. The purpose of this study is to investigate and identify the best UI elements to use in mobile learning application development. Four existing UI guidelines and 12 selected learning applications were analysed, and we identified 10 UI Elements in order to develop the next mobile learning applications. All the 10 elements are described accordingly and they have implications for those designing and implementing UI in mobile learning applications."}
{"_id": "c0ad4773aea744a70aaf995e4768b23d5deb7963", "title": "Automated Accessibility Testing of Mobile Apps", "text": "It is important to make mobile apps accessible, so as not to exclude users with common disabilities such as blindness, low vision, or color blindness. Even when developers are aware of these accessibility needs, the lack of tool support makes the development and assessment of accessible apps challenging. Some accessibility properties can be checked statically, but user interface widgets are often created dynamically and are not amenable to static checking. Some accessibility checking frameworks analyze accessibility properties at runtime, but have to rely on existing thorough test suites. In this paper, we introduce the idea of using automated test generation to explore the accessibility of mobile apps. We present the MATE tool (Mobile Accessibility Testing), which automatically explores apps while applying different checks for accessibility issues related to visual impairment. For each issue, MATE generates a detailed report that supports the developer in fixing the issue. Experiments on a sample of 73 apps demonstrate that MATE detects more basic accessibility problems than static analysis, and many additional types of accessibility problems that cannot be detected statically at all. Comparison with existing accessibility testing frameworks demonstrates that the independence of an existing test suite leads to the identification of many more accessibility problems. Even when enabling Android's assistive features like contrast enhancement, MATE can still find many accessibility issues."}
{"_id": "26aaaf780c09b831c55b38e21ee04c2463cb1406", "title": "Examining the Testing Effect with Open-and Closed-Book Tests", "text": "Two experiments examined the testing effect with open-book tests, in which students view notes and textbooks while taking the test, and closed-book tests, in which students take the test without viewing notes or textbooks. Subjects studied prose passages and then restudied or took an openor closed-book test. Taking either kind of test, with feedback, enhanced long-term retention relative to conditions in which subjects restudied material or took a test without feedback. Open-book testing led to better initial performance than closed-book testing, but this benefit did not persist and both types of testing produced equivalent retention on a delayed test. Subjects predicted they would recall more after repeated studying, even though testing enhanced long-term retention more than restudying. These experiments demonstrate that the testing effect occurs with both openand closedbook tests, and that subjects fail to predict the effectiveness of testing relative to studying in enhancing later recall. Copyright # 2007 John Wiley & Sons, Ltd."}
{"_id": "408cddda023aa02d29783b7ccf99806c7ec8b765", "title": "The Power of Testing Memory: Basic Research and Implications for Educational Practice.", "text": "A powerful way of improving one's memory for material is to be tested on that material. Tests enhance later retention more than additional study of the material, even when tests are given without feedback. This surprising phenomenon is called the testing effect, and although it has been studied by cognitive psychologists sporadically over the years, today there is a renewed effort to learn why testing is effective and to apply testing in educational settings. In this article, we selectively review laboratory studies that reveal the power of testing in improving retention and then turn to studies that demonstrate the basic effects in educational settings. We also consider the related concepts of dynamic testing and formative assessment as other means of using tests to improve learning. Finally, we consider some negative consequences of testing that may occur in certain circumstances, though these negative effects are often small and do not cancel out the large positive effects of testing. Frequent testing in the classroom may boost educational achievement at all levels of education."}
{"_id": "ae843e607257efc4a106343a774e2927da974c6a", "title": "METAMEMORY : A THEORETICAL FRAMEWORK AND NEW FINDINGS", "text": "Although there has been excellent research by many investigators on the topic of metamemory, here we will focus on our own research program. This article will begin with a description of a theoretical framework that has evolved out of metamemory research, followed by a few remarks about our methodology, and will end with a review of our previously unpublished findings. (Our published findings will not be systematically reviewed here; instead, they will be mentioned only when necessary for continuity .)"}
{"_id": "33efb18854d6acc05fab5c23293a218a03f30a51", "title": "Remembering can cause forgetting: retrieval dynamics in long-term memory.", "text": "Three studies show that the retrieval process itself causes long-lasting forgetting. Ss studied 8 categories (e.g., Fruit). Half the members of half the categories were then repeatedly practiced through retrieval tests (e.g., Fruit Or_____). Category-cued recall of unpracticed members of practiced categories was impaired on a delayed test. Experiments 2 and 3 identified 2 significant features of this retrieval-induced forgetting: The impairment remains when output interference is controlled, suggesting a retrieval-based suppression that endures for 20 min or more, and the impairment appears restricted to high-frequency members. Low-frequency members show little impairment, even in the presence of strong, practiced competitors that might be expected to block access to those items. These findings suggest a critical role for suppression in models of retrieval inhibition and implicate the retrieval process itself in everyday forgetting."}
{"_id": "8ee7f9c4cdb93a2c04fc9c0121158e7243b489b1", "title": "Distributed practice in verbal recall tasks: A review and quantitative synthesis.", "text": "The authors performed a meta-analysis of the distributed practice effect to illuminate the effects of temporal variables that have been neglected in previous reviews. This review found 839 assessments of distributed practice in 317 experiments located in 184 articles. Effects of spacing (consecutive massed presentations vs. spaced learning episodes) and lag (less spaced vs. more spaced learning episodes) were examined, as were expanding interstudy interval (ISI) effects. Analyses suggest that ISI and retention interval operate jointly to affect final-test retention; specifically, the ISI producing maximal retention increased as retention interval increased. Areas needing future research and theoretical implications are discussed."}
{"_id": "73623d7b97157358f279fef53ba37b8d2c9908f6", "title": "An ontology for clinical questions about the contents of patient notes", "text": "OBJECTIVE\nMany studies have been completed on question classification in the open domain, however only limited work focuses on the medical domain. As well, to the best of our knowledge, most of these medical question classifications were designed for literature based question and answering systems. This paper focuses on a new direction, which is to design a novel question processing and classification model for answering clinical questions applied to electronic patient notes.\n\n\nMETHODS\nThere are four main steps in the work. Firstly, a relatively large set of clinical questions was collected from staff in an Intensive Care Unit. Then, a clinical question taxonomy was designed for question and answering purposes. Subsequently an annotation guideline was created and used to annotate the question set. Finally, a multilayer classification model was built to classify the clinical questions.\n\n\nRESULTS\nThrough the initial classification experiments, we realized that the general features cannot contribute to high performance of a minimum classifier (a small data set with multiple classes). Thus, an automatic knowledge discovery and knowledge reuse process was designed to boost the performance by extracting and expanding the specific features of the questions. In the evaluation, the results show around 90% accuracy can be achieved in the answerable subclass classification and generic question templates classification. On the other hand, the machine learning method does not perform well at identifying the category of unanswerable questions, due to the asymmetric distribution.\n\n\nCONCLUSIONS\nIn this paper, a comprehensive study on clinical questions has been completed. A major outcome of this work is the multilayer classification model. It serves as a major component of a patient records based clinical question and answering system as our studies continue. As well, the question collections can be reused by the research community to improve the efficiency of their own question and answering systems."}
{"_id": "a4fa9754b555f9c2c2d1e10aecfb3153aea46bf6", "title": "Deep Learning for Intelligent Video Analysis", "text": "Analyzing videos is one of the fundamental problems of computer vision and multimedia content analysis for decades. The task is very challenging as video is an information-intensive media with large variations and complexities. Thanks to the recent development of deep learning techniques, researchers in both computer vision and multimedia communities are now able to boost the performance of video analysis significantly and initiate new research directions to analyze video content. This tutorial will present recent advances under the umbrella of video understanding, which start from a unified deep learning toolkit--Microsoft Cognitive Toolkit (CNTK) that supports popular model types such as convolutional nets and recurrent networks, to fundamental challenges of video representation learning and video classification, recognition, and finally to an emerging area of video and language."}
{"_id": "3c79c967c2cb2e5e69f4b20688d0102a3bb28be3", "title": "Glance: rapidly coding behavioral video with the crowd", "text": "Behavioral researchers spend considerable amount of time coding video data to systematically extract meaning from subtle human actions and emotions. In this paper, we present Glance, a tool that allows researchers to rapidly query, sample, and analyze large video datasets for behavioral events that are hard to detect automatically. Glance takes advantage of the parallelism available in paid online crowds to interpret natural language queries and then aggregates responses in a summary view of the video data. Glance provides analysts with rapid responses when initially exploring a dataset, and reliable codings when refining an analysis. Our experiments show that Glance can code nearly 50 minutes of video in 5 minutes by recruiting over 60 workers simultaneously, and can get initial feedback to analysts in under 10 seconds for most clips. We present and compare new methods for accurately aggregating the input of multiple workers marking the spans of events in video data, and for measuring the quality of their coding in real-time before a baseline is established by measuring the variance between workers. Glance's rapid responses to natural language queries, feedback regarding question ambiguity and anomalies in the data, and ability to build on prior context in followup queries allow users to have a conversation-like interaction with their data - opening up new possibilities for naturally exploring video data."}
{"_id": "7476b749b07a750224c1fe775c4bc3927bc09bbf", "title": "An Unsupervised User Behavior Prediction Algorithm Based on Machine Learning and Neural Network For Smart Home", "text": "The user operates the smart home devices year in year out, have produced mass operation data, but these data do not be utilized well in past. Nowadays, these data can be used to predict user\u2019s behavior custom with the development of big data and machine learning technologies, and then the prediction results can be employed to enhance the intelligence of a smart home system. In view of this, this paper proposes a novel unsupervised user behavior prediction (UUBP) algorithm, which employs an artificial neural network and proposes a forgetting factor to overcome the shortcomings of the previous prediction algorithm. This algorithm has a high-level of autonomous and self-organizing learning ability while does not require too much human intervention. Furthermore, the algorithm can better avoid the influence of user\u2019s infrequent and out-of-date operation records, because of the forgetting factor. Finally, the use of real end user\u2019s operation records to demonstrate that UUBP algorithm has a better level of performance than other algorithms from effectiveness."}
{"_id": "97ead728b450275127a4b599ce7bebad29a17b37", "title": "Zero-Shot Question Generation from Knowledge Graphs for Unseen Predicates and Entity Types", "text": "We present a neural model for question generation from knowledge base triples in a \u201cZeroShot\u201d setup, that is generating questions for triples containing predicates, subject types or object types that were not seen at training time. Our model leverages triples occurrences in the natural language corpus in an encoderdecoder architecture, paired with an original part-of-speech copy action mechanism to generate questions. Benchmark and human evaluation show that our model sets a new state-ofthe-art for zero-shot QG."}
{"_id": "e303ef5a877490b948aea8aead9f22bf3c5131fe", "title": "Bulk current injection test modeling using an equivalent circuit for 1.8V mobile ICs", "text": "This paper shows a novel simulation method for bulk current injection (BCI) tests of I/O buffer circuits of mobile system memory. The simulation model consists of BCI probe, directional coupler, PCB, PKG, and IC. The proposed method is based on a behavioural I/O buffer model using a pulse generator as an input. A detailed simulation flow is introduced and validated through simulations performed on several injection probe loading conditions using a power decoupling capacitor and an on-chip decoupling capacitor."}
{"_id": "f852f6d52948cc94f04225b8268f09a543727684", "title": "HyperUAS - Imaging Spectroscopy from a Multirotor Unmanned Aircraft System", "text": "One of the key advantages of a low-flying unmanned aircraft system (UAS) is its ability to acquire digital images at an ultrahigh spatial resolution of a few centimeters. Remote sensing of quantitative biochemical and biophysical characteristics of small-sized spatially fragmented vegetation canopies requires, however, not only high spatial, but also high spectral (i.e., hyperspectral) resolution. In this paper, we describe the design, development, airborne operations, calibration, processing, and interpretation of image data collected with a new hyperspectral unmanned aircraft system (HyperUAS). HyperUAS is a remotely controlled multirotor prototype carrying onboard a lightweight pushbroom spectroradiometer coupled with a dual frequency GPS and an inertial movement unit. The prototype was built to remotely acquire imaging spectroscopy data of 324 spectral bands (162 bands in a spectrally binned mode) with bandwidths between 4 and 5 nm at an ultrahigh spatial resolution of 2\u20135 cm. Three field airborne experiments, conducted over agricultural crops and over natural ecosystems of Antarctic mosses, proved operability of the system in standard field conditions, but also in a remote and harsh, low-temperature environment of East Antarctica. Experimental results demonstrate that HyperUAS is capable of delivering georeferenced maps of quantitative biochemical and biophysical variables of vegetation and of actual vegetation health state at an unprecedented spatial resolution of 5 cm. C \u00a9 2014 Wiley Periodicals, Inc."}
{"_id": "790db62e0f4ac1e0141d1cb207655a7f6f1460a7", "title": "Sparse Dueling Bandits", "text": "The dueling bandit problem is a variation of the classical multi-armed bandit in which the allowable actions are noisy comparisons between pairs of arms. This paper focuses on a new approach for finding the \u201cbest\u201d arm according to the Borda criterion using noisy comparisons. We prove that in the absence of structural assumptions, the sample complexity of this problem is proportional to the sum of the inverse squared gaps between the Borda scores of each suboptimal arm and the best arm. We explore this dependence further and consider structural constraints on the pairwise comparison matrix (a particular form of sparsity natural to this problem) that can significantly reduce the sample complexity. This motivates a new algorithm called Successive Elimination with Comparison Sparsity (SECS) that exploits sparsity to find the Borda winner using fewer samples than standard algorithms. We also evaluate the new algorithm experimentally with synthetic and real data. The results show that the sparsity model and the new algorithm can provide significant improvements over standard approaches."}
{"_id": "b51c0ccd471bdaa138e8afe1cc201c1ebb7a51e1", "title": "Focus plus context screens: combining display technology with visualization techniques", "text": "Computer users working with large visual documents, such as large layouts, blueprints, or maps perform tasks that require them to simultaneously access overview information while working on details. To avoid the need for zooming, users currently have to choose between using a sufficiently large screen or applying appropriate visualization techniques. Currently available hi-res \"wall-size\" screens, however, are cost-intensive, space-intensive, or both. Visualization techniques allow the user to more efficiently use the given screen space, but in exchange they either require the user to switch between multiple views or they introduce distortion.In this paper, we present a novel approach to simultaneously display focus and context information. Focus plus context screens consist of a hi-res display and a larger low-res display. Image content is displayed such that the scaling of the display content is preserved, while its resolution may vary according to which display region it is displayed in. Focus plus context screens are applicable to practically all tasks that currently use overviews or fisheye views, but unlike these visualization techniques, focus plus context screens provide a single, non-distorted view. We present a prototype that seamlessly integrates an LCD with a projection screen and demonstrate four applications that we have adapted so far."}
{"_id": "9d9eddf405b1aa4eaf6c35b0d16e0b96422f0666", "title": "Treatment of posterolateral tibial plateau fractures with modified Frosch approach : a cadaveric study and case series", "text": "This study aimed to investigate the surgical techniques and the clinical efficacy of a modified Frosch approach in the treatment of posterolateral tibial plateau fractures. The standard Frosch approach was performed on 5 fresh-frozen cadavers. The mean bony surface area was measured upon adequate exposure of the proximal tibial cortex. Lateral proximal tibial plate and posterolateral T-plates were placed and the ease of the procedure was noted. In the study, 12 clinical cases of posterolateral tibial plateau fractures were treated via modified or standard Frosch approaches. The outcome was assessed over short to medium follow-up period. Cadaver studies allowed the inspection of the posterolateral joint surface of all specimens from the lateral side. The mean bony surface areas of the exposed lateral and posterolateral tibial plateau were (6.78 \u00b1 1.13) cm2 and (3.59 \u00b1 0.65) cm2, respectively. Lateral and posterolateral plates were implanted successfully. Lateral proximal tibial plates were fixed in 10 patients via a modified Frosch approach while posterolateral plates were fixed in 2 patients via a standard Frosch approach. Patients were followed up for 10 to 24 months (average: 15.7 months) and no complications were observed during this period. Based on the Rasmussen knee function score system, the results were recorded as excellent, good, and fair in 6, 4, and 2 patients, respectively. In conclusion, the modified Frosch approach has offers advantages of clear clarity in exposure, convenient for reduction and internal fixation of a fracture, and good positive clinical results over the normal approach."}
{"_id": "a1bf3e73cb6a1e1430c60dcc2aacb45a405223ba", "title": "Superficial dorsal penile vein thrombosis (penile Mondor's disease)", "text": "In our center between 1992 and 1994 penile Mondor's disease (superficial dorsal penile vein thrombosis) was diagnosed in 5 patients aged 20\u201339 years. In all patients the thromboses were noted 24\u201348 hours after a prolonged sexual act with or without an intercourse. The main symptom was a cord-like thickening of the superficial veins, which were painless or slightly painful. Doppler examination of the superficial dorsal vein revealed obstruction of the vessels. In 2 patients the retroglandular plexus was also involved. Patients were treated with anti-inflammatory medications (Tenoxicam or Ibuprofen). The resolution of the thrombosis occurred uneventfully within 4\u20136 weeks. No recurrence or erectile dysfunction was noted in any of the patients. Penile Mondor's disease is a benign pathology of the superficial dorsal penile vein and should be taken into account in the differential diagnosis of penile pathologies."}
{"_id": "d499b63fba609f918aa1d9098510bd0ac11418d8", "title": "Opinion-aware Knowledge Graph for Political Ideology Detection", "text": "Identifying individual\u2019s political ideology from their speeches and written texts is important for analyzing political opinions and user behavior on social media. Traditional opinion mining methods rely on bag-of-words representations to classify texts into different ideology categories. Such methods are too coarse for understanding political ideologies. The key to identify different ideologies is to recognize different opinions expressed toward a specific topic. To model this insight, we classify ideologies based on the distribution of opinions expressed towards real-world entities or topics. Specifically, we propose a novel approach to political ideology detection that makes predictions based on an opinion-aware knowledge graph. We show how to construct such graph by integrating the opinions and targeted entities extracted from text into an existing structured knowledge base, and show how to perform ideology inference by information propagation on the graph. Experimental results demonstrate that our method achieves high accuracy in detecting ideologies compared to baselines including LR, SVM and RNN."}
{"_id": "78b1dc7995f9f9b7ee7ef6e193374b9fe3487d30", "title": "Deep Convolution Neural Networks for Twitter Sentiment Analysis", "text": "Twitter sentiment analysis technology provides the methods to survey public emotion about the events or products related to them. Most of the current researches are focusing on obtaining sentiment features by analyzing lexical and syntactic features. These features are expressed explicitly through sentiment words, emoticons, exclamation marks, and so on. In this paper, we introduce a word embeddings method obtained by unsupervised learning based on large twitter corpora, this method using latent contextual semantic relationships and co-occurrence statistical characteristics between words in tweets. These word embeddings are combined with n-grams features and word sentiment polarity score features to form a sentiment feature set of tweets. The feature set is integrated into a deep convolution neural network for training and predicting sentiment classification labels. We experimentally compare the performance of our model with the baseline model that is a word n-grams model on five Twitter data sets, the results indicate that our model performs better on the accuracy and F1-measure for twitter sentiment classification."}
{"_id": "40f979c38176a1c0ccb6f1e8584e52bd7f581273", "title": "Arm pose copying for humanoid robots", "text": "Learning by imitation is becoming increasingly important for teaching humanoid robots new skills. The simplest form of imitation is behavior copying in which the robot is minimizing the difference between its perceived motion and that of the imitated agent. One problem that must be solved even in this simplest of all imitation tasks is calculating the learner's pose corresponding to the perceived pose of the agent it is imitating. This paper presents a general framework for solving this problem in closed form for the arms of a generalized humanoid robot of which most available humanoids are special cases. The paper also reports the evaluation of the proposed system for real and simulated robots."}
{"_id": "515b4bbd7a423b0ee8eb1da0eddc8f6ee4592742", "title": "In-vehicle information system as a driver's secondary activity: Case study", "text": "A robust methodology for detecting and evaluating driver distraction induced by in-vehicle information system using artificial neural network and fuzzy logic is introduced in this paper. An artificial neural network is used to predict driver's performance on a specific road segment. The predicted performance-based measures are compared to the driving with secondary task accomplishment. Fuzzy logic is applied to fuse the variables into a single output, which constitutes a level of driver distraction in percentage. The technique was tested on a vehicle simulator by ten drivers that exploited in-vehicle information system as a secondary activity. The driver-in-the-loop experiment outcomes are discussed."}
{"_id": "9283e274236af381cfb20e7dda79f249936b02ab", "title": "Short-Interval Detailed Production Scheduling in 300mm Semiconductor Manufacturing using Mixed Integer and Constraint Programming", "text": "Fully automated 300mm manufacturing requires the adoption of a real-time lot dispatching paradigm. Automated dispatching has provided significant improvements over manual dispatching by removing variability from the thousands of dispatching decisions made every day in a fab. Real-time resolution of tool queues, with consideration of changing equipment states, process restrictions, physical and logical location of WIP, supply chain objectives and a myriad of other parameters, is required to ensure successful dispatching in the dynamic fab environment. However, the real-time dispatching decision in semiconductor manufacturing generally remains a reactive, heuristic response in existing applications, limited to the current queue of each tool. The shortcomings of this method of assigning WIP to tools, aptly named \"opportunistic scavenging\" as stated in G. Sullivan (1987), have become more apparent in lean manufacturing environments where lower WIP levels present fewer obvious opportunities for beneficial lot sequencing or batching. Recent advancements in mixed integer programming (MIP) and constraint programming (CP) have raised the possibility of integrating optimization software, commonly used outside of the fab environment to compute optimal solutions for scheduling scenarios ranging from order fulfillment systems to crew-shift-equipment assignments, with a real-time dispatcher to create a short-interval scheduler. The goal of such a scheduler is to optimize WIP flow through various sectors of the fab by expanding the analysis beyond the current WIP queue to consider upstream and downstream flow across the entire tool group or sector. This article describes the production implementation of a short-interval local area scheduler in IBM's leading-edge 300mm fab located in East Fishkill, New York, including motivation, approach, and initial results"}
{"_id": "9dcc45ca582e506994e62d255d006435967608a5", "title": "AODVSEC: A Novel Approach to Secure Ad Hoc on-Demand Distance Vector (AODV) Routing Protocol from Insider Attacks in MANETs", "text": "Mobile Ad hoc Network (MANET) is a collection of mobile nodes that can communicate with each other using multihop wireless links without requiring any fixed based-station infrastructure and centralized management. Each node in the network acts as both a host and a router. In such scenario, designing of an efficient, reliable and secure routing protocol has been a major challenging issue over the last many years. Numerous schemes have been proposed for secure routing protocols and most of the research work has so far focused on providing security for routing using cryptography. In this paper, we propose a novel approach to secure Ad hoc On-demand Distance Vector (AODV) routing protocol from the insider attacks launched through active forging of its Route Reply (RREP) control message. AODV routing protocol does not have any security provision that makes it less reliable in publicly open ad hoc network. To deal with the concerned security attacks, we have proposed AODV Security Extension (AODVSEC) which enhances the scope of AODV for the security provision. We have compared AODVSEC with AODV and Secure AODV (SAODV) in normal situation as well as in presence of the three concerned attacks viz. Resource Consumption (RC) attack, Route Disturb (RD) attack, Route Invasion (RI) attack and Blackhole (BH) attack. To evaluate the performances, we have considered Packet Delivery Fraction (PDF), Average End-to-End Delay (AED), Average Throughput (AT), Normalized Routing Load (NRL) and Average Jitter and Accumulated Average Processing Time."}
{"_id": "5a355396369385dbd28434f80f4b9c3ca3aff645", "title": "What's basic about basic emotions?", "text": "A widespread assumption in theories of emotion is that there exists a small set of basic emotions. From a biological perspective, this idea is manifested in the belief that there might be neurophysiological and anatomical substrates corresponding to the basic emotions. From a psychological perspective, basic emotions are often held to be the primitive building blocks of other, nonbasic emotions. The content of such claims is examined, and the results suggest that there is no coherent nontrivial notion of basic emotions as the elementary psychological primitives in terms of which other emotions can be explained. Thus, the view that there exist basic emotions out of which all other emotions are built, and in terms of which they can be explained, is questioned, raising the possibility that this position is an article of faith rather than an empirically or theoretically defensible basis for the conduct of emotion research. This suggests that perhaps the notion of basic emotions will not lead to significant progress in the field. An alternative approach to explaining the phenomena that appear to motivate the postulation of basic emotions is presented."}
{"_id": "3186970b4723ba456b18d4edf600b635f30a1dfd", "title": "Domestic violence and psychological well-being of survivor women in Punjab , Pakistan", "text": "Violence against women is becoming a very critical issue, especially domestic violence. Public health practitioners of developing countries over the years had been deeply concern with trying to study and mitigate the impact of this on female survivors. It is defined as a gender based violence that results in, or is likely to result in, physical, sexual or mental harm /suffering to women, including threat of such acts, coercion or arbitrary deprivation of liberty, whether occurring in public or in private life.1,2 Domestic violence is a global issue and commonly used to describe the \u201cwomen abuse\u201d those who suffer at the hands of their male partner, because wife abused by husband is the most common form of violence against women (e.g. wife beating, burning of women, acid throwing)3\u22126 and according to the American Medical Association domestic violence is perceived as a pattern of physical, sexual and/or psychological abuse by a person with whom the survivor had an intimate relationship.7\u221210 Domestic violence is also considered as an important cause of mortality and morbidity of women in every country where these associations have been studied.2 It is considered as an important cause of intentional injuries in women, which consequently reduce women\u2019s confidence by decreasing their desire to participate fully in life.11\u221214 It occurs in almost all social and economic classes, races and religions in multiple patterns/trends however, not a single society could be declared free of this notion.15"}
{"_id": "1432c7d273275d3bda997fcafb194720fb98fd12", "title": "Constraint-based Round Robin Tournament Planning", "text": "Sport tournament planning becomes a complex task in the presence of heterogeneous requirements from teams, media, fans and other parties. Existing approaches to sport tournament planning often rely on precomputed tournament schemes which may be too rigid to cater for these requirements. Existing work on sport tournaments suggests a separation of the planning process into three phases. In this work, it is shown that all three phases can be solved using nite-domain constraint programming. The design of Friar Tuck, a generic constraint-based round robin planning tool, is outlined. New numerical results on round robin tournaments obtained with Friar Tuck underline the potential of constraints over nite domains in this area."}
{"_id": "82ae74db79fad1916c5c1c5d2a6fb7ae7c59c8e1", "title": "Authentication of FPGA Bitstreams: Why and How", "text": "Encryption of volatile FPGA bitstreams provides confidentiality to the design but does not ensure its authenticity. This paper motivates the need for adding authentication to the configuration process by providing application examples where this functionality is useful. An examination of possible solutions is followed by suggesting a practical one in consideration of the FPGA\u2019s configuration environment constraints. The solution presented here involves two symmetric-key encryption cores running in parallel to provide both authentication and confidentiality while sharing resources for efficient implementation."}
{"_id": "899b8bac810d3fc50e59425a3b6d7faf96470895", "title": "Iterative Procedures for Nonlinear Integral Equations", "text": "The numerical solution of nonlinear integral equations involves the iterative soIutioon of finite systems of nonlinear algebraic or transcendental equations. Certain corwent i o n a l techniqucs for treating such systems are reviewed in the context of a particular class of n o n l i n e a r equations. A procedure is synthesized to offset some of the disadvantages of these t e c h n i q u e s in this context; however, the procedure is not restricted to this pt~rticular class of s y s t e m s of nonlinear equations."}
{"_id": "b0bf2484f2ec40a8c3eaa8bfcaa5ce83797f8e71", "title": "Beam selection for performance-complexity optimization in high-dimensional MIMO systems", "text": "Millimeter-wave (mm-wave) communications systems offer a promising solution to meeting the increasing data demands on wireless networks. Not only do mm-wave systems allow orders of magnitude larger bandwidths, they also create a high-dimensional spatial signal space due to the small wavelengths, which can be exploited for beamforming and multiplexing gains. However, the complexity of digitally processing the entire high-dimensional signal is prohibitive. By exploiting the inherent channel sparsity in beamspace due to highly directional propagation at mm-wave, it is possible to design near-optimal transceivers with dramatically lower complexity. In such beamspace MIMO systems, it is first necessary to determine the set of beams which define the low-dimensional communication subspace. In this paper, we address this beam selection problem and introduce a simple power-based classifier for determining the beamspace sparsity pattern that characterizes the communication subspace. We first introduce a physical model for a small cell which will serve as the setting for our analysis. We then develop a classifier for the physical model, and show its optimality for a class of ideal signals. Finally, we present illustrative numerical results and show the feasibility of the classifier in mobile settings."}
{"_id": "05252b795f0f1238ac7e0d7af7fc2372c34a181d", "title": "Authoritative Sources in a Hyperlinked Environment", "text": "The network structure of a hyperlinked environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. We develop a set of algorithmic tools for extracting information from the link structures of such environments, and report on experiments that demonstrate their effectiveness in a variety of context on the World Wide Web. The central issue we address within our framework is the distillation of broad search topics, through the discovery of \u201cauthorative\u201d information sources on such topics. We propose and test an algorithmic formulation of the notion of authority, based on the relationship between a set of relevant authoritative pages and the set of \u201chub pages\u201d that join them together in the link structure. Our formulation has connections to the eigenvectors of certain matrices associated with the link graph; these connections in turn motivate additional heuristrics for link-based analysis."}
{"_id": "3b1953ff2c0c9dd045afe6766afb91599522052b", "title": "Efficient Computation of PageRank", "text": "This paper discusses efficient techniques for computing PageRank, a ranking metric for hypertext documents. We show that PageRank can be computed for very large subgraphs of the web (up to hundreds of millions of nodes) on machines with limited main memory. Running-time measurements on various memory configurations are presented for PageRank computation over the 24-million-page Stanford WebBase archive. We discuss several methods for analyzing the convergence of PageRank based on the induced ordering of the pages. We present convergence results helpful for determining the number of iterations necessary to achieve a useful PageRank assignment, both in the absence and presence of search queries."}
{"_id": "4ba18b2f35515f7f3ad3bc38100730c5808a52af", "title": "The Missing Link - A Probabilistic Model of Document Content and Hypertext Connectivity", "text": "We describe a joint probabilistic model for modeling the contents and inter-connectivity of document collections such as sets of web pages or research paper archives. The model is based on a probabilistic factor decomposition and allows identifying principal topics of the collection as well as authoritative documents within those topics. Furthermore, the relationships between topics is mapped out in order to build a predictive model of link content. Among the many applications of this approach are information retrieval and search, topic identification, query disambiguation, focused web crawling, web authoring, and bibliometric analysis."}
{"_id": "65227ddbbd12015ba8a45a81122b1fa540e79890", "title": "The PageRank Citation Ranking: Bringing Order to the Web.", "text": "The importance of a Web page is an inherently subjective matter, which depends on the readers interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a method for rating Web pages objectively and mechanically, e ectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to e ciently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation."}
{"_id": "9ac34c7040d08a27e7dc75cfa46eb0144de3a284", "title": "The Anatomy of a Large-Scale Hypertextual Web Search Engine", "text": "In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want."}
{"_id": "9bd362017e2592eec65da61d758d2c5d55237706", "title": "Threat , Authoritarianism , and Selective Exposure to Information", "text": "We examined the hypothesis that threat alters the cognitive strategies used by high authoritarians in seeking out new political information from the environment. In a laboratory experiment, threat was manipulated through a \u201cmortality salience\u201d manipulation used in research on terror management theory (Pyszczynski, Solomon, & Greenberg, 2003). Subjects (N = 92) were then invited to read one of three editorial articles on the topic of capital punishment. We found that in the absence of threat, both low and high authoritarians were responsive to salient norms of evenhandedness in information selection, preferring exposure to a two-sided article that presents the merits of both sides of an issue to an article that selectively touts the benefits of the pro or con side of the issue. However, in the presence of threat, high but not low authoritarians became significantly more interested in exposure to an article containing uniformly pro-attitudinal arguments, and significantly less interested in a balanced, two-sided article. Finally, a path analysis indicated that selective exposure to attitude-congruent information led to more internally consistent policy attitudes and inhibited attitude change. Discussion focuses on the role of threat in conditioning the cognitive and attitudinal effects of authoritarianism."}
{"_id": "19b758d14b6ff537593108615c609dd821d0efc8", "title": "Adaptive Learning Hybrid Model for Solar Intensity Forecasting", "text": "Energy management is indispensable in the smart grid, which integrates more renewable energy resources, such as solar and wind. Because of the intermittent power generation from these resources, precise power forecasting has become crucial to achieve efficient energy management. In this paper, we propose a novel adaptive learning hybrid model (ALHM) for precise solar intensity forecasting based on meteorological data. We first present a time-varying multiple linear model (TMLM) to capture the linear and dynamic property of the data. We then construct simultaneous confidence bands for variable selection. Next, we apply the genetic algorithm back propagation neural network (GABP) to learn the nonlinear relationships in the data. We further propose ALHM by integrating TMLM, GABP, and the adaptive learning online hybrid algorithm. The proposed ALHM captures the linear, temporal, and nonlinear relationships in the data, and keeps improving the predicting performance adaptively online as more data are collected. Simulation results show that ALHM outperforms several benchmarks in both short-term and long-term solar intensity forecasting."}
{"_id": "af777f8b1c694e353a57d81c3c1b4620e2ae61b1", "title": "A Survey and Typology of Human Computation Games", "text": "Human computation games (HCGs) can be seen as a paradigm that makes use of human brain power to solve computational problems as a byproduct of game play. In recent years, HCGs are increasing in popularity within various application areas which aim to harvest human problem solving skills by seamlessly integrating computational tasks into games. Thus, a proper system of describing such games is necessary in order to obtain a better understanding of the current body of work and identify new opportunities for future research directions. As a starting point, in this paper, we review existing literature on HCGs and present a typology consisting of 12 dimensions based on the key features of HCGs. With this typology, all HCGs could be classified and compared in a systematic manner."}
{"_id": "c2da441587e7cc11497d89d40bb4ad3ac080686f", "title": "Gossip along the way : Order-Optimal Consensus through Randomized Path Averaging", "text": "Gossip algorithms have recently received significant attention, mainly because they constitute simple and robust algorithms for distributed information processing over networks. However for many topologies that are realistic for wireless ad-hoc and sensor networks (like grids and random geometric graphs), the standard nearest-neighbor gossip converges very slowly. A recently proposed algorithm called geographic gossip improves gossip efficiency by a p n/ log n factor for random geometric graphs, by exploiting geographic information of node locations. In this paper we prove that a variation of geographic gossip that averages along routed paths, improves efficiency by an additional p n/ log n factor and is order optimal for grids and random geometric graphs. Our analysis provides some general techniques and can be used to provide bounds on the performance of randomized message passing algorithms operating over various graph topologies."}
{"_id": "2673a3614b4239f1a62fc577922f0db7ec2b8c27", "title": "Evaluation of games for teaching computer science", "text": "Well-designed educational games (both online and offline) can be a valuable way to engage students with a topic. However, games can also be a time-consuming distraction, so it is useful to be able to identify which are suitable. This paper provides guidelines for evaluating games, and applies them to identify promising games. We begin by defining what is meant by a game, ruling out simple animations or quizzes, and focusing on a more playful approach to education. Based on this definition, we have identified games that are available for teaching topics in computer science, and classified them. We have developed criteria for evaluating games based on existing research on educational games, combined with educational theory and considering them in the context of CS. The criteria developed cover the topics taught, how easy it is to install and run, how engaging the game is, and how much time students might invest to use the game. We find that although there are a number of worth-while games in the field of CS and most are free, the coverage of topics is very patchy, with most focused on programming concepts and binary numbers."}
{"_id": "e1a23a0c7c4b82b9c9a72934da04b4e743116554", "title": "Detecting and Tracking The Real-time Hot Topics: A Study on Computational Neuroscience", "text": "In this study, following the idea of our previous paper (Wang, et al., 2013a), we improve the method to detect and track hot topics in a specific field by using the real-time article usage data. With the \u201cusage count\u201d data provided by Web of Science, we take the field of computational neuroscience as an example to make analysis. About 10 thousand articles in the field of Computational Neuroscience are queried in Web of Science, when the records, including the usage count data of each paper, have been harvested and updated weekly from October 19, 2015 to March 21, 2016. The hot topics are defined by the most frequently used keywords aggregated from the articles. The analysis reveals that hot topics in Computational Neuroscience are related to the key technologies, like \u201cfmri\u201d, \u201ceeg\u201d, \u201cerp\u201d, etc. Furthermore, using the weekly updated data, we track the dynamical changes of the topics. The characteristic of immediacy of usage data makes it possible to track the \u201cheat\u201d of hot topics timely and dynamically."}
{"_id": "f9ea177533fe3755f62737842c203bea2b2a8adc", "title": "A probabilistic rating inference framework for mining user preferences from reviews", "text": "We propose a novel Probabilistic Rating infErence Framework, known as Pref, for mining user preferences from reviews and then mapping such preferences onto numerical rating scales. Pref applies existing linguistic processing techniques to extract opinion words and product features from reviews. It then estimates the sentimental orientations (SO) and strength of the opinion words using our proposed relative-frequency-based method. This method allows semantically similar words to have different SO, thereby addresses a major limitation of existing methods. Pref takes the intuitive relationships between class labels, which are scalar ratings, into consideration when assigning ratings to reviews. Empirical results validated the effectiveness of Pref against several related algorithms, and suggest that Pref can produce reasonably good results using a small training corpus. We also describe a useful application of Pref as a rating inference framework. Rating inference transforms user preferences described as natural language texts into numerical rating scales. This allows Collaborative Filtering (CF) algorithms, which operate mostly on databases of scalar ratings, to utilize textual reviews as an additional source of user preferences. We integrated Pref with a classical CF algorithm, and empirically demonstrated the advantages of using rating inference to augment ratings for CF."}
{"_id": "2e03c444e7854188dc7badec33bc9a52cac91885", "title": "Building Information Modeling and real world knowledge: A methodological approach to accurate semantic documentation for the built environment", "text": "Building Information Modeling is considered by the scientific literature as an emerging trend in the architectural documentation scenario, as it is basically a digital representation of physical and functional features of facilities, serving as a shared knowledge resource during their whole life cycle. BIM is actually a process (not a software, as someone indicated), in which different players act sharing data through digital models in a coordinated, consistent and always up to date workflow, in order to reach reliability and higher quality all over the construction process. This way BIM tools were originally meant to ease the design of new architectures, generated by parametric geometries connected through hierarchical relationships of \u201csmart objects\u201d (components self-aware of their identity and conscious of their interactions with each other). However, this approach can also be successfully applied to what already exists: TLS (Terrestrial Laser Scanning) or digital photogrammetry are supposed to be the first abstraction step in a methodology proposal intended as a scientific strategy in which BIM, relying on its own semantic splitting attitude and its topological structure, is explicitly used in representation of existing buildings belonging to the Cultural Heritage. Presenting some progresses in the development of a specific free Autodesk Revit plug-in, nicknamed GreenSpider after its capability to layout points in the digital domain as if they were nodes of an ideal cobweb, this paper examines how point clouds collected during high definition surveys can be processed with accuracy in a BIM environment, highlighting critical aspects and advantages deriving from the application of parametric techniques to the real world domain representation."}
{"_id": "af0cf9da53bf3e1e6d349faed14cb68ad71aa2a4", "title": "Graph Convolutional Networks With Argument-Aware Pooling for Event Detection", "text": "The current neural network models for event detection have only considered the sequential representation of sentences. Syntactic representations have not been explored in this area although they provide an effective mechanism to directly link words to their informative context for event detection in the sentences. In this work, we investigate a convolutional neural network based on dependency trees to perform event detection. We propose a novel pooling method that relies on entity mentions to aggregate the convolution vectors. The extensive experiments demonstrate the benefits of the dependencybased convolutional neural networks and the entity mentionbased pooling method for event detection. We achieve the state-of-the-art performance on widely used datasets with both perfect and predicted entity mentions."}
{"_id": "6fb0452f63918c1daab8f515c6f52444054fd078", "title": "Exploring the comparative salience of restaurant attributes: A conjoint analysis approach", "text": "This study explores how travelers select a restaurant for dining out, given that restaurant customers consider diverse attributes in making a selection. By applying a conjoint analysis, an exploratory multiplecase study is conducted for three restaurants in New York City. Findings from Study 1 (an overall travelers group) and Study 2 (two different country-of-residence groups: foreign and domestic travelers) show that food, value, atmosphere, and service are considered as substantially important criteria in selecting restaurants, in that order. However, results from Study 3 examining different restaurant types (lowpriced food stand, low-priced indoor, and high-priced indoor) reveal that the food attribute is the most important factor, regardless of restaurant types, whereas the other attributes\u2019 rankings vary. Results from ypes of restaurants and travelers"}
{"_id": "6c5a1e4692c07f4069969c423e0767cbb5da6f8e", "title": "Integrality and Separability of Input Devices", "text": "Current input device taxonomies and other frameworks typically emphasize the mechanical structure of input devices. We suggest that selecting an appropriate input device for an interactive task requires looking beyond the physical structure of devices to the deeper perceptual structure of the task, the device, and the interrelationship between the perceptual structure of the task and the control properties of the device. We affirm that perception is key to understanding performance of multidimensional input devices on multidimensional tasks. We have therefore extended the theory of processing of percetual structure to graphical interactive tasks and to the control structure of input devices. This allows us to predict task and device combinations that lead to better performance and hypothesize that performance is improved when the perceptual structure of the task matches the control structure of the device. We conducted an experiment in which subjects performed two tasks with different perceptual structures, using two input devices with correspondingly different control structures, a three-dimensional tracker and a mouse. We analyzed both speed and accuracy, as well as the trajectories generated by subjects as they used the unconstrained three-dimensional tracker to perform each task. The result support our hypothesis and confirm the importance of matching the perceptual structure of the task and the control structure of the input device."}
{"_id": "ef529a661104ebe3c8a68f52d40a2cda81685794", "title": "7 Toward a Universal Underspecified Semantic Representation", "text": "We define Canonical Form Minimal Recursion Semantics (CF-MRS) and prove that all the well-formed MRS structures generated by the MRS semantic composition algorithm are in this form. We prove that the qeq relationships are equivalent to outscoping relations when MRS structures are in this form. This result fills the gap between some underspecification formalisms and motivates defining a Canonical Form Underspecified Representation (CF-UR) which brings those underspecification formalisms together."}
{"_id": "60ae024362ce54b8c587f5d6ff1de25cf6616297", "title": "MIMO radar: an idea whose time has come", "text": "It has recently been shown that multiple-input multiple-output (MIMO) antenna systems have the potential to improve dramatically the performance of communication systems over single antenna systems. Unlike beamforming, which presumes a high correlation between signals either transmitted or received by an array, the MIMO concept exploits the independence between signals at the array elements. In conventional radar, target scintillations are regarded as a nuisance parameter that degrades radar performance. The novelty of MIMO radar is that it takes the opposite view; namely, it capitalizes on target scintillations to improve the radar's performance. We introduce the MIMO concept for radar. The MIMO radar system under consideration consists of a transmit array with widely-spaced elements such that each views a different aspect of the target. The array at the receiver is a conventional array used for direction finding (DF). The system performance analysis is carried out in terms of the Cramer-Rao bound of the mean-square error in estimating the target direction. It is shown that MIMO radar leads to significant performance improvement in DF accuracy."}
{"_id": "04442c483964e14f14a811c46b5772f0f0b79bd7", "title": "Implementation of multiport dc-dc converter-based Solid State Transformer in smart grid system", "text": "A solid-state transformer (SST) would be at least as efficient as a conventional version but would provide other benefits as well, particularly as renewable power sources become more widely used. Among its more notable strong points are on-demand reactive power support for the grid, better power quality, current limiting, management of distributed storage devices and a dc bus. Most of the nation's power grid currently operates one way - power flows from the utility to the consumer - and traditional transformers simply change voltage from one level to another. But smart transformers, based on power semiconductor switches, are more versatile. Not only can they change voltage levels, but also can effectively control the power flow in both directions. The development of a Solid State Transformer (SST) that incorporates a DC-DC multiport converter to integrate both photovoltaic (PV) power generation and battery energy storage is presented in this paper. The DC-DC stage is based on a quad active-bridge (QAB) converter which not only provides isolation for the load, but also for the PV and storage. The AC-DC stage is implemented with a pulse-width-modulated (PWM) single phase rectifier. A novel technique that complements the SISO controller by taking into account the cross coupling characteristics of the QAB converter is also presented herein. Cascaded SISO controllers are designed for the AC-DC stage. The QAB demanded power is calculated at the QAB controls and then fed into the rectifier controls in order to minimize the effect of the interaction between the two SST stages. The dynamic performance of the designed control loops based on the proposed control strategies are verified through extensive simulation of the SST average and switching models."}
{"_id": "6089240c5ccf5e5e740f7c9a8ff82c8878d947bc", "title": "Compact Printed Wide-Slot UWB Antenna With 3.5/5.5-GHz Dual Band-Notched Characteristics", "text": "A novel compact wide-slot ultrawideband (UWB) antenna with dual band-notched characteristics is presented. The antenna consists of an inverted U-shaped slot on the ground plane and a radiation patch similar to the slot that is fed by a 50-\u03a9 microstrip line. By etching a C-shaped slot on the radiation patch and extruding an L-shaped stub from the ground plane, dual band-notched properties in the WiMAX (3.4-3.69 GHz) and WLAN (5.15-5.825 GHz) are achieved. The proposed antenna has a compact size of 20\u00d727 mm2 and operates from 2.89 to 11.52 GHz. Furthermore, nearly omnidirectional radiation patterns and constant gain are obtained in the working band."}
{"_id": "563b03dea20a3c5a5488917a5a26d0377e6d4862", "title": "Connecting and separating mind-sets: culture as situated cognition.", "text": "UNLABELLED\nPeople perceive meaningful wholes and later separate out constituent parts (D. Navon, 1977). Yet there are cross-national differences in whether a focal target or integrated whole is first perceived. Rather than construe these differences as fixed, the proposed culture-as-situated-cognition model explains these differences as due to whether a collective or individual mind-set is cued at the moment of observation. Eight studies demonstrated that when cultural mind-set and task demands are congruent, easier tasks are accomplished more quickly and more difficult or time-constrained tasks are accomplished more accurately (\n\n\nSTUDY 1\nKoreans, Korean Americans; STUDY 2: Hong Kong Chinese; STUDY 3: European- and Asian-heritage Americans; STUDY 4: Americans;\n\n\nSTUDY\n5 Hong Kong Chinese; STUDY 6: Americans; STUDY 7: Norwegians; STUDY 8: African-, European-, and Asian-heritage Americans). Meta-analyses (d = .34) demonstrated homogeneous effects across geographic place (East-West), racial-ethnic group, task, and sensory mode-differences are cued in the moment. Contrast and separation are salient individual mind-set procedures, resulting in focus on a single target or main point. Assimilation and connection are salient collective mind-set procedures, resulting in focus on multiplicity and integration."}
{"_id": "23c035073654412a6a9ec486739dd9bd16dd663d", "title": "Multiview Image Completion with Space Structure Propagation", "text": "We present a multiview image completion method that provides geometric consistency among different views by propagating space structures. Since a user specifies the region to be completed in one of multiview photographs casually taken in a scene, the proposed method enables us to complete the set of photographs with geometric consistency by creating or removing structures on the specified region. The proposed method incorporates photographs to estimate dense depth maps. We initially complete color as well as depth from a view, and then facilitate two stages of structure propagation and structure-guided completion. Structure propagation optimizes space topology in the scene across photographs, while structure-guide completion enhances, and completes local image structure of both depth and color in multiple photographs with structural coherence by searching nearest neighbor fields in relevant views. We demonstrate the effectiveness of the proposed method in completing multiview images."}
{"_id": "497d41eb37484345332e7200438f51ed923bcd2c", "title": "Effectiveness and Adoption of a Drawing-to-Learn Study Tool for Recall and Problem Solving: Minute Sketches with Folded Lists", "text": "Drawing by learners can be an effective way to develop memory and generate visual models for higher-order skills in biology, but students are often reluctant to adopt drawing as a study method. We designed a nonclassroom intervention that instructed introductory biology college students in a drawing method, minute sketches in folded lists (MSFL), and allowed them to self-assess their recall and problem solving, first in a simple recall task involving non-European alphabets and later using unfamiliar biology content. In two preliminary ex situ experiments, students had greater recall on the simple learning task, non-European alphabets with associated phonetic sounds, using MSFL in comparison with a preferred method, visual review (VR). In the intervention, students studying using MSFL and VR had \u223c50-80% greater recall of content studied with MSFL and, in a subset of trials, better performance on problem-solving tasks on biology content. Eight months after beginning the intervention, participants had shifted self-reported use of drawing from 2% to 20% of study time. For a small subset of participants, MSFL had become a preferred study method, and 70% of participants reported continued use of MSFL. This brief, low-cost intervention resulted in enduring changes in study behavior."}
{"_id": "dc5d04d34b278b944097b8925a9147773bbb80cc", "title": "A Temporal Sequence Learning for Action Recognition and Prediction", "text": "In this work1, we present a method to represent a video with a sequence of words, and learn the temporal sequencing of such words as the key information for predicting and recognizing human actions. We leverage core concepts from the Natural Language Processing (NLP) literature used in sentence classification to solve the problems of action prediction and action recognition. Each frame is converted into a word that is represented as a vector using the Bag of Visual Words (BoW) encoding method. The words are then combined into a sentence to represent the video, as a sentence. The sequence of words in different actions are learned with a simple but effective Temporal Convolutional Neural Network (T-CNN) that captures the temporal sequencing of information in a video sentence. We demonstrate that a key characteristic of the proposed method is its low-latency, i.e. its ability to predict an action accurately with a partial sequence (sentence). Experiments on two datasets, UCF101 and HMDB51 show that the method on average reaches 95% of its accuracy within half the video frames. Results, also demonstrate that our method achieves compatible state-of-the-art performance in action recognition (i.e. at the completion of the sentence) in addition to action prediction."}
{"_id": "04850809e4e31437039833753226b440a4fc8864", "title": "Deep Memory Networks for Attitude Identification", "text": "We consider the task of identifying attitudes towards a given set of entities from text. Conventionally, this task is decomposed into two separate subtasks: target detection that identifies whether each entity is mentioned in the text, either explicitly or implicitly, and polarity classification that classifies the exact sentiment towards an identified entity (the target) into positive, negative, or neutral.\n Instead, we show that attitude identification can be solved with an end-to-end machine learning architecture, in which the two subtasks are interleaved by a deep memory network. In this way, signals produced in target detection provide clues for polarity classification, and reversely, the predicted polarity provides feedback to the identification of targets. Moreover, the treatments for the set of targets also influence each other -- the learned representations may share the same semantics for some targets but vary for others. The proposed deep memory network, the AttNet, outperforms methods that do not consider the interactions between the subtasks or those among the targets, including conventional machine learning methods and the state-of-the-art deep learning models."}
{"_id": "a76b92f17593358d21c8dd9d1c058b7658086123", "title": "Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks", "text": "Deep learning classifiers are known to be inherently vulnerable to manipulation by intentionally perturbed inputs, named adversarial examples. In this work, we establish that reinforcement learning techniques based on Deep Q-Networks (DQNs) are also vulnerable to adversarial input perturbations, and verify the transferability of adversarial examples across different DQN models. Furthermore, we present a novel class of attacks based on this vulnerability that enable policy manipulation and induction in the learning process of DQNs. We propose an attack mechanism that exploits the transferability of adversarial examples to implement policy induction attacks on DQNs, and demonstrate its efficacy and impact through experimental study of a game-learning scenario."}
{"_id": "0a202f1dfc6991a6a204eaa5e6b46d6223a4d98a", "title": "Good features to track", "text": "No feature-based vision system can work unless good features can be identi ed and tracked from frame to frame. Although tracking itself is by and large a solved problem, selecting features that can be tracked well and correspond to physical points in the world is still hard. We propose a feature selection criterion that is optimal by construction because it is based on how the tracker works, and a feature monitoring method that can detect occlusions, disocclusions, and features that do not correspond to points in the world. These methods are based on a new tracking algorithm that extends previous Newton-Raphson style search methods to work under a ne image transformations. We test performance with several simulations and experiments."}
{"_id": "116d7798c1123cf7fad4176e98f58fd49de4f8f1", "title": "Planning and Acting in Partially Observable Stochastic Domains", "text": "In this paper we bring techniques from operations research to bear on the problem of choosing optimal actions in partially observable stochastic domains We begin by introducing the theory of Markov decision processes mdps and partially observable mdps pomdps We then outline a novel algorithm for solving pomdps o line and show how in some cases a nite memory controller can be extracted from the solution to a pomdp We conclude with a discussion of how our approach relates to previous work the complexity of nding exact solutions to pomdps and of some possibilities for nding approximate solutions"}
{"_id": "12099545a31155585a813b840ed711de9d83cace", "title": "Simultaneous Mosaicing and Tracking with an Event Camera", "text": "An event camera is a silicon retina which outputs not a sequence of video frames like a standard camera, but a stream of asynchronous spikes, each with pixel location, sign and precise timing, indicating when individual pixels record a threshold log intensity change. By encoding only image change, it offers the potential to transmit the information in a standard video but at vastly reduced bitrate, and with huge added advantages of very high dynamic range and temporal resolution. However, event data calls for new algorithms, and in particular we believe that algorithms which incrementally estimate global scene models are best placed to take full advantages of its properties. Here, we show for the first time that an event stream, with no additional sensing, can be used to track accurate camera rotation while building a persistent and high quality mosaic of a scene which is super-resolution accurate and has high dynamic range. Our method involves parallel camera rotation tracking and template reconstruction from estimated gradients, both operating on an event-by-event basis and based on probabilistic filtering."}
{"_id": "3d57515445c00c635e15222767fc0430069ed200", "title": "Depth Transfer: Depth Extraction from Video Using Non-Parametric Sampling", "text": "We describe a technique that automatically generates plausible depth maps from videos using non-parametric depth sampling. We demonstrate our technique in cases where past methods fail (non-translating cameras and dynamic scenes). Our technique is applicable to single images as well as videos. For videos, we use local motion cues to improve the inferred depth maps, while optical flow is used to ensure temporal depth consistency. For training and evaluation, we use a Kinect-based system to collect a large data set containing stereoscopic videos with known depths. We show that our depth estimation technique outperforms the state-of-the-art on benchmark databases. Our technique can be used to automatically convert a monoscopic video into stereo for 3D visualization, and we demonstrate this through a variety of visually pleasing results for indoor and outdoor scenes, including results from the feature film Charade."}
{"_id": "139abeefba9ab62418ba69bc74326d3199d33824", "title": "Aerial Lidar Data Classification using AdaBoost", "text": "We use the AdaBoost algorithm to classify 3D aerial lidar scattered height data into four categories: road, grass, buildings, and trees. To do so we use five features: height, height variation, normal variation, lidar return intensity, and image intensity. We also use only lidar-derived features to organize the data into three classes (the road and grass classes are merged). We apply and test our results using ten regions taken from lidar data collected over an area of approximately eight square miles, obtaining higher than 92% accuracy. We also apply our classifier to our entire dataset, and present visual classification results both with and without uncertainty. We implement and experiment with several variations within the AdaBoost family of algorithms. We observe that our results are robust and stable over all the various tests and algorithmic variations. We also investigate features and values that are most critical in distinguishing between the classes. This insight is important in extending the results from one geographic region to another."}
{"_id": "3e42557a8a8120acbe7ab9c1f43c05a529254e04", "title": "Bio-inspired design and dynamic maneuverability of a minimally actuated six-legged robot", "text": "Rapidly running arthropods like cockroaches make use of passive dynamics to achieve remarkable locomotion performance with regard to stability, speed, and maneuverability. In this work, we take inspiration from these organisms to design, fabricate, and control a 10cm, 24 gram underactuated hexapedal robot capable of running at 14 body lengths per second and performing dynamic turning maneuvers. Our design relies on parallel kinematic mechanisms fabricated using the scaled smart composite microstructures (SCM) process and viscoelastic polymer legs with tunable stiffness. In addition to the novel robot design, we present experimental validation of the lateral leg spring (LLS) locomotion model's prediction that dynamic turning can be achieved by modulating leg stiffness. Finally, we present and validate a leg design for active stiffness control using shape memory alloy and demonstrate the ability of the robot to execute near-gymnastic 90\u00b0 turns in the span of five strides."}
{"_id": "7a66b85f5c843ff1b5788ad651f700313993e305", "title": "C\n1-Almost Periodic Solutions of BAM Neural Networks with Time-Varying Delays on Time Scales", "text": "On a new type of almost periodic time scales, a class of BAM neural networks is considered. By employing a fixed point theorem and differential inequality techniques, some sufficient conditions ensuring the existence and global exponential stability of C1-almost periodic solutions for this class of networks with time-varying delays are established. Two examples are given to show the effectiveness of the proposed method and results."}
{"_id": "a6774549061a800427d5f54fe606e80fab0364a3", "title": "Understanding IEEE 1451-Networked smart transducer interface standard - What is a smart transducer?", "text": "This article introduces the IEEE 1451 standard for networked smart transducers. It discusses the concepts of smart transducers, IEEE 1451 smart transducers, the architecture of the IEEE 1451 family of standards, application of IEEE 1451, and example implementations of the IEEE 1451 standards. In conclusion, the IEEE 1451 suite of standards provides a set of standard interfaces for networked smart transducers, helping to achieve sensor plug and play and interoperability for industry and government."}
{"_id": "9488372708f0b6dbf055c2a73ddb1c082934afb6", "title": "The Past, Present, and Future of Silicon Photonics", "text": "The pace of the development of silicon photonics has quickened since 2004 due to investment by industry and government. Commercial state-of-the-art CMOS silicon-on-insulator (SOI) foundries are now being utilized in a crucial test of 1.55-mum monolithic optoelectronic (OE) integration, a test sponsored by the Defense Advanced Research Projects Agency (DARPA). The preliminary results indicate that the silicon photonics are truly CMOS compatible. R&D groups have now developed 10-100-Gb/s electro-optic modulators, ultrafast Ge-on-Si photodetectors, efficient fiber-to-waveguide couplers, and Si Raman lasers. Electrically pumped silicon lasers are under intense investigation, with several approaches being tried; however, lasing has not yet been attained. The new paradigm for the Si-based photonic and optoelectric integrated circuits is that these chip-scale networks, when suitably designed, will operate at a wavelength anywhere within the broad spectral range of 1.2-100 mum, with cryocooling needed in some cases"}
{"_id": "22f656d0f8426c84a33a267977f511f127bfd7f3", "title": "From Facial Expression Recognition to Interpersonal Relation Prediction", "text": "Interpersonal relation defines the association, e.g., warm, friendliness, and dominance, between two or more people. We investigate if such fine-grained and high-level relation traits can be characterized and quantified from face images in the wild. We address this challenging problem by first studying a deep network architecture for robust recognition of facial expressions. Unlike existing models that typically learn from facial expression labels alone, we devise an effective multitask network that is capable of learning from rich auxiliary attributes such as gender, age, and head pose, beyond just facial expression data. While conventional supervised training requires datasets with complete labels (e.g., all samples must be labeled with gender, age, and expression), we show that this requirement can be relaxed via a novel attribute propagation method. The approach further allows us to leverage the inherent correspondences between heterogeneous attribute sources despite the disparate distributions of different datasets. With the network we demonstrate state-of-the-art results on existing facial expression recognition benchmarks. To predict inter-personal relation, we use the expression recognition network as branches for a Siamese model. Extensive experiments show that our model is capable of mining mutual context of faces for accurate fine-grained interpersonal prediction."}
{"_id": "c9a48b155943f05c25da8cc957a7bb63c63a458c", "title": "The subepithelial connective tissue graft palatal donor site: anatomic considerations for surgeons.", "text": "Surgeons must become completely familiar with the anatomy of the palatal donor site to feel confident in providing the subepithelial connective tissue graft procedure. Variations in the size and shape of the hard palate affect the dimensions of donor tissue harvested, as well as the location of the greater palatine neurovascular bundle. This article classifies palatal vaults according to height as high, average, and shallow. Illustrations and cadaver dissection are utilized to demonstrate that surgeons can gain substantial donor tissue specimens without encountering the neurovascular bundle. Actions to be followed in the unlikely event that the neurovasculature is encountered are reviewed."}
{"_id": "47ad62463adc468554da06c836133e23e4e0dce7", "title": "The biology of facial fillers.", "text": "The biologic behavior of a facial filler determines its advantages and disadvantages. The purpose of this article is to look at the relevant biology as part of a logical basis for making treatment decisions. Historical perspectives and biologic characteristics such as local tissue reaction (including phagocytosis and granulomatous inflammation) cross-linking, particle concentration, immunogenicity, biofilm formation, gel hardness, and collagen neogenesis are considered. Bovine collagen is the most immunogenic facial filler. Porcine and bioengineered human collagen implants have very low immunogenicity, but allergic reactions and elevations of IgG are possible. Cross-linking and concentration affect the longevity of collagen and hyaluronic acid fillers. Gel hardness affects how a hyaluronic acid filler flows through the syringe and needle. Calcium hydroxylapatite, poly-L-lactic acid, and polymethylmethacrylate fillers have been shown to stimulate collagen neogenesis. It appears that any facial filler can form a granuloma. Bacterial biofilms may play a role in the activation of quiescent granulomas. Various authors interpret the definition and significance of a granuloma differently."}
{"_id": "34bbe565d9538ffdf4c8ef4e891411edf8d29447", "title": "Robust entropy-based endpoint detection for speech recognition in noisy environments", "text": "This paper presents an entropy-based algorithm for accurate and robust endpoint detection for speech recognition under noisy environments. Instead of using the conventional energy-based features, the spectral entropy is developed to identify the speech segments accurately. Experimental results show that this algorithm outperforms the energy-based algorithms in both detection accuracy and recognition performance under noisy environments, with an average error rate reduction of more than 16%."}
{"_id": "76e04f695a3f5593858766841b284175ac72e5d3", "title": "Customer-Centric Strategic Planning: Integrating CRM in Online Business Systems", "text": "Customer Relationship Management (CRM) is increasingly found at the top of corporate agendas. Online companies in particular are embracing CRM as a major element of corporate strategy, because online technological applications permit a precise segmentation, profiling and targeting of customers, and the competitive pressures of the digital markets require a customer-centric corporate culture. The implementation of CRM systems in online organisation determines a complex restructuring of all organisational elements and processes. The strategic planning process will have to adapt to new customer-centric procedures. The present paper analyses the implementation process of a CRM system in online retail businesses and develops a model of the strategic planning function in a customer-centric context."}
{"_id": "5720a5015ca67400fadd0ff6863519f4b030e731", "title": "A Generic Coordinate Descent Framework for Learning from Implicit Feedback", "text": "In recent years, interest in recommender research has shifted from explicit feedback towards implicit feedback data. A diversity of complex models has been proposed for a wide variety of applications. Despite this, learning from implicit feedback is still computationally challenging. So far, most work relies on stochastic gradient descent (SGD) solvers which are easy to derive, but in practice challenging to apply, especially for tasks with many items. For the simple matrix factorization model, an efficient coordinate descent (CD) solver has been previously proposed. However, efficient CD approaches have not been derived for more complex models. In this paper, we provide a new framework for deriving efficient CD algorithms for complex recommender models. We identify and introduce the property of k-separable models. We show that k-separability is a sufficient property to allow efficient optimization of implicit recommender problems with CD. We illustrate this framework on a variety of state-of-the-art models including factorization machines and Tucker decomposition. To summarize, our work provides the theory and building blocks to derive efficient implicit CD algorithms for complex recommender models."}
{"_id": "5a8399a28aa322ed6b27b6408d34f44abdbf7b46", "title": "Object-Extraction and Question-Parsing using CCG", "text": "Accurate dependency recovery has recently been reported for a number of wide-coverage statistical parsers using Combinatory Categorial Grammar (CCG). However, overall figures give no indication of a parser\u2019s performance on specific constructions, nor how suitable a parser is for specific applications. In this paper we give a detailed evaluation of a CCG parser on object extraction dependencies found in WSJ text. We also show how the parser can be used to parse questions for Question Answering. The accuracy of the original parser on questions is very poor, and we propose a novel technique for porting the parser to a new domain, by creating new labelled data at the lexical category level only. Using a supertagger to assign categories to words, trained on the new data, leads to a dramatic increase in question parsing accuracy."}
{"_id": "4f640c1338840f3740187352531dfeca9381b5c3", "title": "Mining Sequential Patterns: Generalizations and Performance Improvements", "text": "The problem of mining sequential patterns was recently introduced in [AS95]. We are given a database of sequences, where each sequence is a list of transactions ordered by transaction-time, and each transaction is a set of items. The problem is to discover all sequential patterns with a user-speci ed minimum support, where the support of a pattern is the number of data-sequences that contain the pattern. An example of a sequential pattern is \\5% of customers bought `Foundation' and `Ringworld' in one transaction, followed by `Second Foundation' in a later transaction\". We generalize the problem as follows. First, we add time constraints that specify a minimum and/or maximum time period between adjacent elements in a pattern. Second, we relax the restriction that the items in an element of a sequential pattern must come from the same transaction, instead allowing the items to be present in a set of transactions whose transaction-times are within a user-speci ed time window. Third, given a user-de ned taxonomy (is-a hierarchy) on items, we allow sequential patterns to include items across all levels of the taxonomy. We present GSP, a new algorithm that discovers these generalized sequential patterns. Empirical evaluation using synthetic and real-life data indicates that GSP is much faster than the AprioriAll algorithm presented in [AS95]. GSP scales linearly with the number of data-sequences, and has very good scale-up properties with respect to the average datasequence size. Also, Department of Computer Science, University of Wisconsin, Madison."}
{"_id": "024006d4c2a89f7acacc6e4438d156525b60a98f", "title": "Continuous control with deep reinforcement learning", "text": "We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies \u201cend-to-end\u201d: directly from raw pixel inputs."}
{"_id": "9afcff746d84b9a71c2ea7e9950c53a8757fcfc2", "title": "MCMCTS PCG 4 SMB : Monte Carlo Tree Search to Guide Platformer Level Generation", "text": "Markov chains are an enticing option for machine learned generation of platformer levels, but offer poor control for designers and are likely to produce unplayable levels. In this paper we present a method for guiding Markov chain generation using Monte Carlo Tree Search that we call Markov Chain Monte Carlo Tree Search (MCMCTS). We demonstrate an example use for this technique by creating levels trained on a corpus of levels from Super Mario Bros. We then present a player modeling study that was run with the hopes of using the data to better inform the generation of levels in future work."}
{"_id": "60fd71bf5c6a799aba9742a6f8176189884a1196", "title": "Peak-End Effects on Player Experience in Casual Games", "text": "The peak-end rule is a psychological heuristic observing that people's retrospective assessment of an experience is strongly influenced by the intensity of the peak and final moments of that experience. We examine how aspects of game player experience are influenced by peak-end manipulations to the sequence of events in games that are otherwise objectively identical. A first experiment examines players' retrospective assessments of two games (a pattern matching game based on Bejeweled and a point-and-click reaction game) when the sequence of difficulty is manipulated to induce positive, negative and neutral peak-end effects. A second experiment examines assessments of a shootout game in which the balance between challenge and skill is similarly manipulated. Results across the games show that recollection of challenge was strongly influenced by peak-end effects; however, results for fun, enjoyment, and preference to repeat were varied -- sometimes significantly in favour of the hypothesized effects, sometimes insignificant, but never against the hypothesis."}
{"_id": "14748beb5255da1256873f45a4ce71052d2ba40a", "title": "Efficient Spectral Feature Selection with Minimum Redundancy", "text": "Spectral feature selection identifies relevant features by measuring their capability of preserving sample similarity. It provides a powerful framework for both supervised and unsupervised feature selection, and has been proven to be effective in many real-world applications. One common drawback associated with most existing spectral feature selection algorithms is that they evaluate features individually and cannot identify redundant features. Since redundant features can have significant adverse effect on learning performance, it is necessary to address this limitation for spectral feature selection. To this end, we propose a novel spectral feature selection algorithm to handle feature redundancy, adopting an embedded model. The algorithm is derived from a formulation based on a sparse multi-output regression with a L2;1-norm constraint. We conduct theoretical analysis on the properties of its optimal solutions, paving the way for designing an efficient path-following solver. Extensive experiments show that the proposed algorithm can do well in both selecting relevant features and removing redundancy."}
{"_id": "cce60aed492a1107aec818bdb07b04cb066680ed", "title": "Reduction of Fuzzy Rules and Membership Functions and Its Application to Fuzzy PI and PD Type Controllers", "text": "Fuzzy controller\u2019s design depends mainly on the rule base and membership functions over the controller\u2019s input and output ranges. This paper presents two different approaches to deal with these design issues. A simple and efficient approach; namely, Fuzzy Subtractive Clustering is used to identify the rule base needed to realize Fuzzy PI and PD type controllers. This technique provides a mechanism to obtain the reduced rule set covering the whole input/output space as well as membership functions for each input variable. But it is found that some membership functions projected from different clusters have high degree of similarity. The number of membership functions of each input variable is then reduced using a similarity measure. In this paper, the fuzzy subtractive clustering approach is shown to reduce 49 rules to 8 rules and number of membership functions to 4 and 6 for input variables (error and change in error) maintaining almost the same level of performance. Simulation on a wide range of linear and nonlinear processes is carried out and results are compared with fuzzy PI and PD type controllers without clustering in terms of several performance measures such as peak overshoot, settling time, rise time, integral absolute error (IAE) and integral-of-time multiplied absolute error (ITAE) and in each case the proposed schemes shows an identical performance."}
{"_id": "690aa390ee29fb7fe2822c5465db65303f2e37ac", "title": "Removal of fixed impulse noise from digital images using Bezier curve based interpolation", "text": "In this paper, we propose a Bezier curve based interpolation technique to eliminate the fixed impulse noise from digital images as well as maintaining the edges of the image. To eliminate the noise, we make the noisy image to pass through two steps, where in the first step we found out the pixels affected by the impulse noise, and in second step, edge protecting process is done using Bezier interpolation technique. Promising results were obtained for images having more than 80 percent of the image pixels are affected. Our proposed algorithm is producing better results in comparison to existing algorithms."}
{"_id": "e0de791e1d6540a25e2e31ebae882c3b628ece0c", "title": "12 Artificial Intelligence for Space Applications", "text": "The ambitious short-term and long-term goals set down by the various national space agencies call for radical advances in several of the main space engineering areas, the design of intelligent space agents certainly being one of them. In recent years, this has led to an increasing interest in artificial intelligence by the entire aerospace community. However, in the current state of the art, several open issues and showstoppers can be identified. In this chapter, we review applications of artificial intelligence in the field of space engineering and space technology and identify open research questions and challenges. In particular, the following topics are identified and discussed: distributed artificial intelligence, enhanced situation self-awareness, and decision support for spacecraft system design."}
{"_id": "f5a15079faaa34fb0b9775c17fcbcc0f10245725", "title": "Knowledge , Strategy , and the Theory of the Firm Author ( s ) :", "text": "JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact support@jstor.org.. Wiley-Blackwell and John Wiley & Sons are collaborating with JSTOR to digitize, preserve and extend access to Strategic Management Journal. This paper argues that firms have particular institutional capabilities that allow them to protect knowledge from expropriation and imitation more effectively than market contracting. I argue that it is these generalized institutional capabilities that allow firms to generate and protect the unique resources and capabilities that are central to the strategic theory of the firm."}
{"_id": "bae5575284776cabed101750ac41848c700af431", "title": "The social structure of free and open source software development", "text": "Metaphors, such as the Cathedral and Bazaar, used to describe the organization of FLOSS projects typically place them in sharp contrast to proprietary development by emphasizing FLOSS\u2019s distinctive social and communications structures. But what do we really know about the communication patterns of FLOSS projects? How generalizable are the projects that have been studied? Is there consistency across FLOSS projects? Questioning the assumption of distinctiveness is important because practitioner-advocates from within the FLOSS community rely on features of social structure to describe and account for some of the advantages of FLOSS production. To address this question, we examined 120 project teams from SourceForge, representing a wide range of FLOSS project types, for their communications centralization as revealed in the interactions in the bug tracking system. We found that FLOSS development teams vary widely in their communications centralization, from projects completely centered on one developer to projects that are highly decentralized and exhibit a distributed pattern of conversation between developers and active users. We suggest, therefore, that it is wrong to assume that FLOSS projects are distinguished by a particular social structure merely because they are FLOSS. Our findings suggest that FLOSS projects might have to work hard to achieve the expected development advantages which have been assumed to flow from \u201cgoing open.\u201d In addition, the variation in communications structure across projects means that communications centralization is useful for comparisons between FLOSS teams. We"}
{"_id": "231878207a8641e605dc255f2c557fa4e8bb99bf", "title": "The Cathedral and the Bazaar", "text": "Permission is granted to copy, distribute and/or modify this document under the terms of the Open Publication License, version 2.0. $Date: 2002/08/02 09:02:14 $ Revision History Revision 1.57 11 September 2000 esr New major section \u201cHow Many Eyeballs Tame Complexity\u201d. Revision 1.52 28 August 2000 esr MATLAB is a reinforcing parallel to Emacs. Corbato\u00f3 & Vyssotsky got it in 1965. Revision 1.51 24 August 2000 esr First DocBook version. Minor updates to Fall 2000 on the time-sensitive material. Revision 1.49 5 May 2000 esr Added the HBS note on deadlines and scheduling. Revision 1.51 31 August 1999 esr This the version that O\u2019Reilly printed in the first edition of the book. Revision 1.45 8 August 1999 esr Added the endnotes on the Snafu Principle, (pre)historical examples of bazaar development, and originality in the bazaar. Revision 1.44 29 July 1999 esr Added the \u201cOn Management and the Maginot Line\u201d section, some insights about the usefulness of bazaars for exploring design space, and substantially improved the Epilog. Revision 1.40 20 Nov 1998 esr Added a correction of Brooks based on the Halloween Documents. Revision 1.39 28 July 1998 esr I removed Paul Eggert\u2019s \u2019graph on GPL vs. bazaar in response to cogent aguments from RMS on Revision 1.31 February 1"}
{"_id": "2469fd136aaf16c49bbe6814d6153da1dc6c7c23", "title": "Social translucence: an approach to designing systems that support social processes", "text": "We are interested in desiging systems that support communication and collaboration among large groups of people over computing networks. We begin by asking what properties of the physical world support graceful human-human communication in face-to-face situations, and argue that it is possible to design digital systems that support coherent behavior by making participants and their activites visible to one another. We call such systems \u201csocially translucent systems\u201d and suggest that they have three characteristics\u2014visbility, awareness, and accountability\u2014which enable people to draw upon their experience and expertise to structure their interactions with one another. To motivate and focus our ideas we develop a vision of knowledge communities, conversationally based systems that support the creation, management and reuse of knowledge in a social context. We describe our experience in designing and deploying one layer of functionality for knowledge communities, embodied in a working system called \u201cBarbie\u201d and discuss research issues raised by a socially translucent approach to design."}
{"_id": "2fc0516f700b490b7e13db0f0d73d05afa5e346c", "title": "Cave or Community? An Empirical Examination of 100 Mature Open Source Projects", "text": "Starting with Eric Raymond\u2019s groundbreaking work, The Cathedral and the Bazaar, open-source software (OSS) has commonly been regarded as work produced by a community of developers. Yet, given the nature of software programs, one also hears of developers with no lives that work very hard to achieve great product results. In this paper, I sought empirical evidence that would help us understand which is more commonthe cave (i.e., lone producer) or the community. Based on a study of the top 100 mature products on Sourceforge, I find a few surprising things. First, most OSS programs are developed by individuals, rather than communities. The median number of developers in the 100 projects I looked at was 4 and the mode was 1numbers much lower than previous ones reported for highly successful projects! Second, most OSS programs do not generate a lot of discussion. Third, products with more developers tend to be viewed and downloaded more often. Fourth, the number of developers associated with a project is unrelated to the age of the project. Fifth, the larger the project, the smaller the percent of project administrators."}
{"_id": "4282abe7e08bcfb2d282c063428fb187b2802e9c", "title": "Case Reports of Adipose-derived Stem Cell Therapy for Nasal Skin Necrosis after Filler Injection", "text": "With the gradual increase of cases using fillers, cases of patients treated by non-medical professionals or inexperienced physicians resulting in complications are also increasing. We herein report 2 patients who experienced acute complications after receiving filler injections and were successfully treated with adipose-derived stem cell (ADSCs) therapy. Case 1 was a 23-year-old female patient who received a filler (Restylane) injection in her forehead, glabella, and nose by a non-medical professional. The day after her injection, inflammation was observed with a 3\u00d73 cm skin necrosis. Case 2 was a 30-year-old woman who received a filler injection of hyaluronic acid gel (Juvederm) on her nasal dorsum and tip at a private clinic. She developed erythema and swelling in the filler-injected area A solution containing ADSCs harvested from each patient's abdominal subcutaneous tissue was injected into the lesion at the subcutaneous and dermis levels. The wounds healed without additional treatment. With continuous follow-up, both patients experienced only fine linear scars 6 months postoperatively. By using adipose-derived stem cells, we successfully treated the acute complications of skin necrosis after the filler injection, resulting in much less scarring, and more satisfactory results were achieved not only in wound healing, but also in esthetics."}
{"_id": "c160ae4b1eed860e96250df2d7ecd86a0120c0a2", "title": "Peer support services for individuals with serious mental illnesses: assessing the evidence.", "text": "OBJECTIVE\nThis review assessed the level of evidence and effectiveness of peer support services delivered by individuals in recovery to those with serious mental illnesses or co-occurring mental and substance use disorders.\n\n\nMETHODS\nAuthors searched PubMed, PsycINFO, Applied Social Sciences Index and Abstracts, Sociological Abstracts, Social Services Abstracts, Published International Literature on Traumatic Stress, the Educational Resources Information Center, and the Cumulative Index to Nursing and Allied Health Literature for outcome studies of peer support services from 1995 through 2012. They found 20 studies across three service types: peers added to traditional services, peers in existing clinical roles, and peers delivering structured curricula. Authors judged the methodological quality of the studies using three levels of evidence (high, moderate, and low). They also described the evidence of service effectiveness.\n\n\nRESULTS\nThe level of evidence for each type of peer support service was moderate. Many studies had methodological shortcomings, and outcome measures varied. The effectiveness varied by service type. Across the range of methodological rigor, a majority of studies of two service types--peers added and peers delivering curricula--showed some improvement favoring peers. Compared with professional staff, peers were better able to reduce inpatient use and improve a range of recovery outcomes, although one study found a negative impact. Effectiveness of peers in existing clinical roles was mixed.\n\n\nCONCLUSIONS\nPeer support services have demonstrated many notable outcomes. However, studies that better differentiate the contributions of the peer role and are conducted with greater specificity, consistency, and rigor would strengthen the evidence."}
{"_id": "33224ad0cdf6e2dc4893194dd587309c7887f0ba", "title": "Formal Theory of Creativity, Fun, and Intrinsic Motivation (1990\u20132010)", "text": "The simple, but general formal theory of fun and intrinsic motivation and creativity (1990-2010) is based on the concept of maximizing intrinsic reward for the active creation or discovery of novel, surprising patterns allowing for improved prediction or data compression. It generalizes the traditional field of active learning, and is related to old, but less formal ideas in aesthetics theory and developmental psychology. It has been argued that the theory explains many essential aspects of intelligence including autonomous development, science, art, music, and humor. This overview first describes theoretically optimal (but not necessarily practical) ways of implementing the basic computational principles on exploratory, intrinsically motivated agents or robots, encouraging them to provoke event sequences exhibiting previously unknown, but learnable algorithmic regularities. Emphasis is put on the importance of limited computational resources for online prediction and compression. Discrete and continuous time formulations are given. Previous practical, but nonoptimal implementations (1991, 1995, and 1997-2002) are reviewed, as well as several recent variants by others (2005-2010). A simplified typology addresses current confusion concerning the precise nature of intrinsic motivation."}
{"_id": "e88929f3b08e8c64f81aa9475bfab5429b7cb051", "title": "Deep neural network for manufacturing quality prediction", "text": "Expected product quality is affected by multi-parameter in complex manufacturing processes. Product quality prediction can offer the possibility of designing better system parameters at the early production stage. Many existing approaches fail at providing favorable results duo to shallow architecture in prediction model that can not learn multi-parameter's features insufficiently. To address this issue, a deep neural network (DNN), consisting of a deep belief network (DBN) in the bottom and a regression layer on the top, is proposed in this paper. The DBN uses a greedy algorithm for unsupervised feature learning. It could learn effective features for manufacturing quality prediction in an unsupervised pattern which has been proven to be effective for many fields. Then the learned features are inputted into the regression tool, and the quality predictions are obtained. One type of manufacturing system with multi-parameter is investigated by the proposed DNN model. The experiments show that the DNN has good performance of the deep architecture, and overwhelms the peer shallow models. It is recommended from this study that the deep learning technique is more promising in manufacturing quality prediction."}
{"_id": "946dabbc13f06070f7618cd4ca6733a95b4b03c3", "title": "A linguistic ontology of space for natural language processing", "text": "a r t i c l e i n f o a b s t r a c t We present a detailed semantics for linguistic spatial expressions supportive of computational processing that draws substantially on the principles and tools of ontological engineering and formal ontology. We cover language concerned with space, actions in space and spatial relationships and develop an ontological organization that relates such expressions to general classes of fixed semantic import. The result is given as an extension of a linguistic ontology, the Generalized Upper Model, an organization which has been used for over a decade in natural language processing applications. We describe the general nature and features of this ontology and show how we have extended it for working particularly with space. Treaitng the semantics of natural language expressions concerning space in this way offers a substantial simplification of the general problem of relating natural spatial language to its contextualized interpretation. Example specifications based on natural language examples are presented, as well as an evaluation of the ontology's coverage, consistency, predictive power, and applicability."}
{"_id": "d95a8c90256a4f3008e3b1b8d089e5fbf46eb5e8", "title": "A Survey of Intrusion Detection Systems for Mobile Ad Hoc Networks", "text": "Tactical Mobile Ad-hoc Networks (MANETs) are widely deployed in military organizations that have critical needs of communication even with the absence of fixed infrastructure. The lack of communication infrastructure and dynamic topology makes MANETs vulnerable to a wide range attacks. Tactical military operations are among the prime users of ad-hoc networks. The military holds higher standards of security requirements and thus require special intrusion detection applications. Conventional Intrusion Detection System (IDS) require central control and monitoring entities and thus cannot be applied to MANETs. Solutions to secure these networks are based on distributed and cooperative security. This paper presents a survey of IDS specifically for MANETs and also highlights the strengths and weakness of each model. We present a unique evaluation matrix for measuring the effectiveness of IDS for MANETs in an emergency response scenario."}
{"_id": "e3b2990079f630e0821f38714dbc1bfd1f3e9c87", "title": "Enabling agricultural automation to optimize utilization of water, fertilizer and insecticides by implementing Internet of Things (IoT)", "text": "With the proliferation of smart devices, Internet can be extended into the physical realm of Internet-of-Things (IoT) by deploying them into a communicating-actuating network. In Ion, sensors and actuators blend seamlessly with the environment; collaborate globally with each other through internet to accomplish a specific task. Wireless Sensor Network (WSN) can be integrated into Ion to meet the challenges of seamless communication between any things (e.g., humans or objects). The potentialities of IoT can be brought to the benefit of society by developing novel applications in transportation and logistics, healthcare, agriculture, smart environment (home, office or plant). This research gives a framework of optimizing resources (water, fertilizers, insecticides and manual labour) in agriculture through the use of IoT. The issues involved in the implementation of applications are also investigated in the paper. This frame work is named as AgriTech."}
{"_id": "56c9c6b4e7bc658e065f80617e4e0278f40d6b26", "title": "A review on stress inducement stimuli for assessing human stress using physiological signals", "text": "Assessing human stress in real-time is more difficult and challenging today. The present review deals about the measurement of stress in laboratory environment using different stress inducement stimuli by the help of physiological signals. Previous researchers have been used different stress inducement stimuli such as stroop colour word test (CWT), mental arithmetic test, public speaking task, cold pressor test, computer games and works used to induce the stress. Most of the researchers have been analyzed stress using questionnaire based approach and physiological signals. The several physiological signals like Electrocardiogram (ECG), Electromyogram (EMG), Galvanic Skin Response (GSR), Blood Pressure (BP), Skin Temperature (ST), Blood Volume Pulse (BVP), respiration rate (RIP) and Electroencephalogram (EEG) were briefly investigated to identify the stress. Different statistical methods like Analysis of variance (ANOVA), two-way ANOVA, Multivariate analysis of variance (MANOVA), t-test, paired t-tests and student t-tests have used to describe the correlation between stress inducement stimuli, subjective parameters (age, gender and etc.,) and physiological signals. This present works aims to find the most appropriate stress inducement stimuli, physiological signals and statistical method to efficiently asses the human stress."}
{"_id": "9a700c7a7e7468e436f00c34551fbe3e0f70e42f", "title": "Towards Principled Methods for Training Generative Adversarial Networks", "text": "The goal of this paper is not to introduce a single algorithm or method, but to make theoretical steps towards fully understanding the training dynamics of generative adversarial networks. In order to substantiate our theoretical analysis, we perform targeted experiments to verify our assumptions, illustrate our claims, and quantify the phenomena. This paper is divided into three sections. The first section introduces the problem at hand. The second section is dedicated to studying and proving rigorously the problems including instability and saturation that arize when training generative adversarial networks. The third section examines a practical and theoretically grounded direction towards solving these problems, while introducing new tools to study them."}
{"_id": "0cf61f46f76e24beec8a1fe38fc4ab9dd6ac5abd", "title": "Design Exploration of Hybrid CMOS and Memristor Circuit by New Modified Nodal Analysis", "text": "Design of hybrid circuits and systems based on CMOS and nano-device requires rethinking of fundamental circuit analysis to aid design exploration. Conventional circuit analysis with modified nodal analysis (MNA) cannot consider new nano-devices such as memristor together with the traditional CMOS devices. This paper has introduced a new MNA method with magnetic flux (\u03a6) as new state variable. New SPICE-like circuit simulator is thereby developed for the design of hybrid CMOS and memristor circuits. A number of CMOS and memristor-based designs are explored, such as oscillator, chaotic circuit, programmable logic, analog-learning circuit, and crossbar memory, where their functionality, performance, reliability and power can be efficiently verified by the newly developed simulator. Specifically, one new 3-D-crossbar architecture with diode-added memristor is also proposed to improve integration density and to avoid sneak path during read-write operation."}
{"_id": "947b9250c2f6c41203468506255c3257d3decef3", "title": "Facial Expression Recognition with Convolutional Neural Networks", "text": "Facial expression recognition systems have attracted much research interest within the field of artificial intelligence. Many established facial expression recognition (FER) systems apply standard machine learning to extracted image features, and these methods generalize poorly to previously unseen data. This project builds upon recent research to classify images of human faces into discrete emotion categories using convolutional neural networks (CNNs). We experimented with different architectures and methods such as fractional max-pooling and finetuning, ultimately achieving an accuracy of 0.48 in a sevenclass classification task."}
{"_id": "5941bf3d86ebb8347dfa0b40af028e7f61339501", "title": "Object Constraint Language (OCL): A Definitive Guide", "text": "The Object Constraint Language (OCL) started as a complement of the UML notation with the goal to overcome the limitations of UML (and in general, any graphical notation) in terms of precisely specifying detailed aspects of a system design. Since then, OCL has become a key component of any model-driven engineering (MDE) technique as the default language for expressing all kinds of (meta)model query, manipulation and specification requirements. Among many other applications, OCL is frequently used to express model transformations (as part of the source and target patterns of transformation rules), well-formedness rules (as part of the definition of new domain-specific languages), or code-generation templates (as a way to express the generation patterns and rules). This chapter pretends to provide a comprehensive view of this language, its many applications and available tool support as well as the latest research developments and open challenges around it."}
{"_id": "3198e5de8eb9edfd92e5f9c2cb325846e25f22aa", "title": "Topic models in information retrieval", "text": ""}
{"_id": "3c8f916264e8d15ba1bc618c6adf395e86dd7b40", "title": "Generating Descriptions with Grounded and Co-referenced People", "text": "Learning how to generate descriptions of images or videos received major interest both in the Computer Vision and Natural Language Processing communities. While a few works have proposed to learn a grounding during the generation process in an unsupervised way (via an attention mechanism), it remains unclear how good the quality of the grounding is and whether it benefits the description quality. In this work we propose a movie description model which learns to generate description and jointly ground (localize) the mentioned characters as well as do visual co-reference resolution between pairs of consecutive sentences/clips. We also propose to use weak localization supervision through character mentions provided in movie descriptions to learn the character grounding. At training time, we first learn how to localize characters by relating their visual appearance to mentions in the descriptions via a semi-supervised approach. We then provide this (noisy) supervision into our description model which greatly improves its performance. Our proposed description model improves over prior work w.r.t. generated description quality and additionally provides grounding and local co-reference resolution. We evaluate it on the MPII Movie Description dataset using automatic and human evaluation measures and using our newly collected grounding and co-reference data for characters."}
{"_id": "96dc6e4960eaa41b0089a0315fe598afacc52470", "title": "Water content of latent fingerprints - Dispelling the myth.", "text": "Changing procedures in the handling of rare and precious documents in museums and elsewhere, based on assumptions about constituents of latent fingerprints, have led the author to an examination of available data. These changes appear to have been triggered by one paper using general biological data regarding eccrine sweat production to infer that deposited fingerprints are mostly water. Searching the fingerprint literature has revealed a number of reference works similarly quoting figures for average water content of deposited fingerprints of 98% or more. Whilst accurate estimation is difficult there is no evidence that the residue on fingers could be anything like 98% water, even if there were no contamination from sebaceous glands. Consideration of published analytical data of real fingerprints, and several theoretical considerations regarding evaporation and replenishment rates, indicates a probable initial average water content of a fingerprint, soon after deposition, of 20% or less."}
{"_id": "9f22a67fa2d6272cec590d4e8ec2ba75cf41df9a", "title": "A multiscale measure for mixing", "text": "We present a multiscale measure for mixing that is based on the concept of weak convergence and averages the \u201cmixedness\u201d of an advected scalar field at various scales. This new measure, referred to as the Mix-Norm, resolves the inability of the L2 variance of the scalar density field to capture small-scale variations when advected by chaotic maps or flows. In addition, the Mix-Norm succeeds in capturing the efficiency of a mixing protocol in the context of a particular initial scalar field, wherein Lyapunov-exponent based measures fail to do so. We relate the Mix-Norm to the classical ergodic theoretic notion of mixing and present its formulation in terms of the power spectrum of the scalar field. We demonstrate the utility of the Mix-Norm by showing how it measures the efficiency of mixing due to various discrete dynamical systems and to diffusion. In particular, we show that the Mix-Norm can capture known exponential and algebraic mixing properties of certain maps. We also analyze numerically the behaviour of scalar fields evolved by the Standard Map using the Mix-Norm. \u00a9 2005 Elsevier B.V. All rights reserved."}
{"_id": "943d17f36d320ad9fcc3ae82c78914c0111cef1d", "title": "Artificial cooperative search algorithm for numerical optimization problems", "text": "In this paper, a new two-population based global search algorithm, the Artificial Cooperative Search Algorithm (ACS), is introduced. ACS algorithm has been developed to be used in solving real-valued numerical optimization problems. For purposes of examining the success of ACS algorithm in solving numerical optimization problems, 91 benchmark problems that have different specifications were used in the detailed tests. The success of ACS algorithm in solving the related benchmark problems was compared to the successes obtained by PSO, SADE, CLPSO, BBO, CMA-ES, CK and DSA algorithms in solving the related benchmark problems by using Wilcoxon Signed-Rank Statistical Test with Bonferroni-Holm correction. The results obtained in the statistical analysis demonstrate that the success achieved by ACS algorithm in solving numerical optimization problems is better in comparison to the other computational intelligence algorithms used in this paper. 2012 Elsevier Inc. All rights reserved."}
{"_id": "a9166e3223daed5655f4f57e911a8e5f91c6ec37", "title": "Abstraction Refinement for Probabilistic Software", "text": "ion Refinement for Probabilistic Software Mark Kattenbelt, Marta Kwiatkowska, Gethin Norman, and David Parker Oxford University Computing Laboratory, Parks Road, Oxford, OX1 3QD Abstract. We present a methodology and implementation for verifying ANSI-C programs that exhibit probabilistic behaviour, such as failures or randomisation. We use abstraction-refinement techniques that represent We present a methodology and implementation for verifying ANSI-C programs that exhibit probabilistic behaviour, such as failures or randomisation. We use abstraction-refinement techniques that represent probabilistic programs as Markov decision processes and their abstractions as stochastic two-player games. Our techniques target quantitative properties of software such as \u201cthe maximum probability of file-transfer failure\u201d or \u201cthe minimum expected number of loop iterations\u201d and the abstractions we construct yield lower and upper bounds on these properties, which then guide the refinement process. We build upon stateof-the-art techniques and tools, using SAT-based predicate abstraction, symbolic implementations of probabilistic model checking and components from GOTO-CC, SATABS and PRISM. Experimental results show that our approach performs very well in practice, successfully verifying actual networking software whose complexity is significantly beyond the scope of existing probabilistic verification tools."}
{"_id": "a70bbc4c6c3ac0c77526a64bf11073bc8f45bd48", "title": "A study of SSL Proxy attacks on Android and iOS mobile applications", "text": "According to recent articles in popular technology websites, some mobile applications function in an insecure manner when presented with untrusted SSL certificates. These non-browser based applications seem to, in the absence of a standard way of alerting a user of an SSL error, accept any certificate presented to it. This paper intends to research these claims and show whether or not an invisible proxy based SSL attack can indeed steal user's credentials from mobile applications, and which types applications are most likely to be vulnerable to this attack vector. To ensure coverage of the most popular platforms, applications on both Android 4.2 and iOS 6 are tested. The results of our study showed that stealing credentials is indeed possible using invisible proxy man in the middle attacks."}
{"_id": "a1221b0fd74212382c7387e6f6fd957918576dda", "title": "Transient effects in application of PWM inverters to induction motors", "text": "Standard squirrel cage induction motors are subjected to nonsinusoidal wave shapes, when supplied from adjustable frequency inverters. In addition to causing increased heating, these wave patterns can be destructive to the insulation. Pulse width modulated (PWM) inverter output amplitudes and risetimes are investigated, and motor insulation capabilities are discussed. Voltage reflections are simulated for various cable lengths and risetimes and are presented graphically. Simulations confirm potential problems with long cables and short risetimes. Application precautions are also discussed.<>"}
{"_id": "bdf434f475654ee0a99fe11fd63405b038244f69", "title": "Achieving Fairness through Adversarial Learning: an Application to Recidivism Prediction", "text": "Recidivism prediction scores are used across the USA to determine sentencing and supervision for hundreds of thousands of inmates. One such generator of recidivism prediction scores is Northpointe's Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) score, used in states like California and Florida, which past research has shown to be biased against black inmates according to certain measures of fairness. To counteract this racial bias, we present an adversarially-trained neural network that predicts recidivism and is trained to remove racial bias. When comparing the results of our model to COMPAS, we gain predictive accuracy and get closer to achieving two out of three measures of fairness: parity and equality of odds. Our model can be generalized to any prediction and demographic. This piece of research contributes an example of scientific replication and simplification in a high-stakes real-world application like recidivism prediction."}
{"_id": "eb7cf446e983e98c1400c8181949f038caf0c8a8", "title": "Perpetual development: A model of the Linux kernel life cycle", "text": "Software evolution is widely recognized as an important and common phenomenon, whereby the system follows an ever-extending development trajectory w ith intermittent releases. Nevertheless there have been only few lifecycle models that attempt to por tray such evolution. We use the evolution of the Linux kernel as the basis for the formulation of such a model, integrating the progress in time with growth of the codebase, and differentiating bet ween development of new functionality and maintenance of production versions. A unique elem nt of the model is the sequence of activities involved in releasing new production versions, and how this has changed with the growth of Linux. In particular, the release follow-up phase before th forking of a new development version, which was prominent in early releases of production ve rsions, has been eliminated in favor of a concurrent merge window in the release of 2.6.x versions . We also show that a piecewise linear model with increasing slopes provides the best descr iption of the growth of Linux. The perpetual development model is used as a framework in which c ommonly recognized benefits of incremental and evolutionary development may be demonstra ted, nd to comment on issues such as architecture, conservation of familiarity, and failed p rojects. We suggest that this model and variants thereof may apply to many other projects in additio n to Linux."}
{"_id": "0672cf615e621624cb4820ea1b4c8c6997d6093b", "title": "Robust Monte Carlo localization for mobile robots", "text": "Mobile robot localization is the problem of determining a robot's pose from sensor data. Monte Carlo Localization is a family of algorithms for localization based on particle filters, which are approximate Bayes filters that use random samples for posterior estimation. Recently, they have been applied with great success for robot localization. Unfortunately, regular particle filters perform poorly in certain situations. Mixture-MCL, the algorithm described here, overcomes these problems by using a \"dual\" sampler, integrating two complimentary ways of generating samples in the estimation. To apply this algorithm for mobile robot localization, a kd-tree is learned from data that permits fast dual sampling. Systematic empirical results obtained using data collected in crowded public places illustrate superior performance, robustness, and efficiency, when compared to other state-of-the-art localization algorithms."}
{"_id": "09f2af091f6bf5dfe25700c5a8c82f220fac5631", "title": "Beyond Bags of Features: Spatial Pyramid Matching for Recognizing Natural Scene Categories", "text": "This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The resulting \"spatial pyramid\" is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralba\u2019s \"gist\" and Lowe\u2019s SIFT descriptors."}
{"_id": "22e594d4606c94192743ca6f51ebf15b32a3b72a", "title": "What, where and who? Classifying events by scene and object recognition", "text": "We propose a first attempt to classify events in static images by integrating scene and object categorizations. We define an event in a static image as a human activity taking place in a specific environment. In this paper, we use a number of sport games such as snow boarding, rock climbing or badminton to demonstrate event classification. Our goal is to classify the event in the image as well as to provide a number of semantic labels to the objects and scene environment within the image. For example, given a rowing scene, our algorithm recognizes the event as rowing by classifying the environment as a lake and recognizing the critical objects in the image as athletes, rowing boat, water, etc. We achieve this integrative and holistic recognition through a generative graphical model. We have assembled a highly challenging database of 8 widely varied sport events. We show that our system is capable of classifying these event classes at 73.4% accuracy. While each component of the model contributes to the final recognition, using scene or objects alone cannot achieve this performance."}
{"_id": "33fad977a6b317cfd6ecd43d978687e0df8a7338", "title": "Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns", "text": "\u00d0This paper presents a theoretically very simple, yet efficient, multiresolution approach to gray-scale and rotation invariant texture classification based on local binary patterns and nonparametric discrimination of sample and prototype distributions. The method is based on recognizing that certain local binary patterns, termed auniform,o are fundamental properties of local image texture and their occurrence histogram is proven to be a very powerful texture feature. We derive a generalized gray-scale and rotation invariant operator presentation that allows for detecting the auniformo patterns for any quantization of the angular space and for any spatial resolution and presents a method for combining multiple operators for multiresolution analysis. The proposed approach is very robust in terms of gray-scale variations since the operator is, by definition, invariant against any monotonic transformation of the gray scale. Another advantage is computational simplicity as the operator can be realized with a few operations in a small neighborhood and a lookup table. Excellent experimental results obtained in true problems of rotation invariance, where the classifier is trained at one particular rotation angle and tested with samples from other rotation angles, demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation invariant local binary patterns. These operators characterize the spatial configuration of local image texture and the performance can be further improved by combining them with rotation invariant variance measures that characterize the contrast of local image texture. The joint distributions of these orthogonal measures are shown to be very powerful tools for rotation invariant texture analysis. Index Terms\u00d0Nonparametric, texture analysis, Outex, Brodatz, distribution, histogram, contrast."}
{"_id": "8ade5d29ae9eac7b0980bc6bc1b873d0dd12a486", "title": "Robust Real-Time Face Detection", "text": ""}
{"_id": "83f3dc086cb5cafd5b4fbce6062c679b4373e845", "title": "Emotion recognition system using brain and peripheral signals: Using correlation dimension to improve the results of EEG", "text": "This paper proposed a multimodal fusion between brain and peripheral signals for emotion detection. The input signals were electroencephalogram, galvanic skin resistance, temperature, blood pressure and respiration, which can reflect the influence of emotion on the central nervous system and autonomic nervous system respectively. The acquisition protocol is based on a subset of pictures which correspond to three specific areas of valance-arousal emotional space (positively excited, negatively excited, and calm). The features extracted from input signals, and to improve the results, correlation dimension as a strong nonlinear feature is used for brain signals. The performance of the Quadratic Discriminant Classifier has been evaluated on different feature sets: peripheral signals, EEG's, and both. In comparison among the results of different feature sets, EEG signals seem to perform better than other physiological signals, and the results confirm the interest of using brain signals as peripherals in emotion assessment. According to the improvement in EEG results compare in each raw of the table, it seems that nonlinear features would lead to better understanding of how emotional activities work."}
{"_id": "1b7189be0287321814c86405181e6951f815e802", "title": "A Geospatial Decision Support System for Drought Risk Management", "text": "We are developing an advanced Geospatial Decision Support System (GDSS) to improve the quality and accessibility of drought related data for drought risk management. This is part of a Digital Government project aimed at developing and integrating new information technologies for improved government services in the USDA Risk Management Agency (RMA) and the Natural Resources Conservation Service (NRCS). Our overall goal is to substantially improve RMA's delivery of risk management services in the near-term and provide a foundation and directions for the future. We integrate spatio-temporal knowledge discovery techniques into our GDSS using a combination of data mining techniques applied to rich, geospatial, time-series data. Our data mining objectives are to: 1) find relationships between user-specified target episodes and other climatic events and 2) predict the target episodes. Understanding relationships between changes in soil moisture regimes and global climatic events such as El Ni\u00f1o could provide a reasonable drought mitigation strategy for farmers to adjust planting dates, hybrid selection, plant populations, tillage practices or crop rotations. This work highlights the innovative data mining approaches integral to our project's success and provides preliminary results that indicate our system\u2019s potential to substantially improve RMA's delivery of drought risk management services."}
{"_id": "beaed5ff6cae9f8311c2158ba2badb7581710ca7", "title": "A Secure Microservice Framework for IoT", "text": "The Internet of Things (IoT) has connected an incredible diversity of devices in novel ways, which has enabled exciting new services and opportunities. Unfortunately, IoT systems also present several important challenges to developers. This paper proposes a vision for how we may build IoT systems in the future by reconceiving IoT's fundamental unit of construction not as a \"thing\", but rather as a widely and finely distributed \"microservice\" already familiar to web service engineering circles. Since IoT systems are quite different from more established uses of microservice architectures, success of the approach depends on adaptations that enable them to met the key challenges that IoT systems present. We argue that a microservice approach to building IoT systems can combine in a mutually enforcing way with patterns for microservices, API gateways, distribution of services, uniform service discovery, containers, and access control. The approach is illustrated using two case studies of IoT systems in personal health management and connected autonomous vehicles. Our hope is that the vision of a microservices approach will help focus research that can fill in current gaps preventing more effective, interoperable, and secure IoT services and solutions in a wide variety of contexts."}
{"_id": "c1e0763f6ce8b8e3464a3bfc16ce3c94f530197b", "title": "Doubly Fed Induction Generator Systems For Variable Speed Wind Turbine", "text": "This paper presents results of a study concerning the dynamic behaviour of a wind energy system powered by a doubly fed induction generator with the rotor connected to the electric network through an AC-AC converter. A tendency to put up more and more wind turbines can be observed all over the world. Also, there is awareness in taking into account the requirements of a clean environment, due to the need to preserve our habitat. Renewable energy sources not contributing to the enhanced greenhouse effect, especially wind power, are becoming an important component of the total generation. Hence, research concerning the dynamic behaviour of wind energy systems is important to achieve a better knowledge."}
{"_id": "2c0a634a71ade1bb8458db124dc1cc9f7e452627", "title": "Local Monotonic Attention Mechanism for End-to-End Speech And Language Processing", "text": "Recently, encoder-decoder neural networks have shown impressive performance on many sequence-related tasks. The architecture commonly uses an attentional mechanism which allows the model to learn alignments between the source and the target sequence. Most attentional mechanisms used today is based on a global attention property which requires a computation of a weighted summarization of the whole input sequence generated by encoder states. However, it is computationally expensive and often produces misalignment on the longer input sequence. Furthermore, it does not fit with monotonous or left-to-right nature in several tasks, such as automatic speech recognition (ASR), grapheme-to-phoneme (G2P), etc. In this paper, we propose a novel attention mechanism that has local and monotonic properties. Various ways to control those properties are also explored. Experimental results on ASR, G2P and machine translation between two languages with similar sentence structures, demonstrate that the proposed encoderdecoder model with local monotonic attention could achieve significant performance improvements and reduce the computational complexity in comparison with the one that used the standard global attention architecture."}
{"_id": "f77c9bf5beec7c975584e8087aae8d679664a1eb", "title": "Local Deep Neural Networks for Age and Gender Classification", "text": "Local deep neural networks have been recently introduced for gender recognition. Although, they achieve very good performance they are very computationally expensive to train. In this work, we introduce a simplified version of local deep neural networks which significantly reduces the training time. Instead of using hundreds of patches per image, as suggested by the original method, we propose to use 9 overlapping patches per image which cover the entire face region. This results in a much reduced training time, since just 9 patches are extracted per image instead of hundreds, at the expense of a slightly reduced performance. We tested the proposed modified local deep neural networks approach on the LFW and Adience databases for the task of gender and age classification. For both tasks and both databases the performance is up to 1% lower compared to the original version of the algorithm. We have also investigated which patches are more discriminative for age and gender classification. It turns out that the mouth and eyes regions are useful for age classification, whereas just the eye region is useful for gender classification."}
{"_id": "4f564267ba850f58a310b74b228b0f59409f2202", "title": "Neuroprotective effect of Bacopa monnieri on beta-amyloid-induced cell death in primary cortical culture.", "text": "AIM OF THE STUDY\nBacopa monnieri (Brahmi) is extensively used in traditional Indian medicine as a nerve tonic and thought to improve memory. To examine the neuroprotective effects of Brahmi extract, we tested its protection against the beta-amyloid protein (25-35) and glutamate-induced neurotoxicity in primary cortical cultured neurons.\n\n\nMATERIALS AND METHODS\nNeuroprotective effects were determined by measuring neuronal cell viability following beta-amyloid and glutamate treatment with and without Brahmi extract. Mechanisms of neuroprotection were evaluated by monitoring cellular oxidative stress and acetylcholinesterase activity.\n\n\nRESULTS\nOur result demonstrated that Brahmi extract protected neurons from beta-amyloid-induced cell death, but not glutamate-induced excitotoxicity. This neuroprotection was possibly due to its ability to suppress cellular acetylcholinesterase activity but not the inhibition of glutamate-mediated toxicity. In addition, culture medium containing Brahmi extract appeared to promote cell survival compared to neuronal cells growing in regular culture medium. Further study showed that Brahmi-treated neurons expressed lower level of reactive oxygen species suggesting that Brahmi restrained intracellular oxidative stress which in turn prolonged the lifespan of the culture neurons. Brahmi extract also exhibited both reducing and lipid peroxidation inhibitory activities.\n\n\nCONCLUSIONS\nFrom this study, the mode of action of neuroprotective effects of Brahmi appeared to be the results of its antioxidant to suppress neuronal oxidative stress and the acetylcholinesterase inhibitory activities. Therefore, treating patients with Brahmi extract may be an alternative direction for ameliorating neurodegenerative disorders associated with the overwhelming oxidative stress as well as Alzheimer's disease."}
{"_id": "15f6a7e3359f470f0a0f6d5b9144f70ef130ca56", "title": "On the control of planar cable-driven parallel robot via classic controllers and tuning with intelligent algorithms", "text": "This paper presents different classical control approaches for planar cable-driven parallel robots. For the proposed robot, PD and PID controllers are designed based on the concept of pole placement method. In order to optimize and tune the controller parameters of planar cable-driven parallel robot, Differential Evaluation, Particle Swarm Optimization and Genetic algorithms are applied as optimization techniques. The simulation results of Genetic algorithm, Particle Swarm Optimization and Differential Evaluation algorithms reveal that the output results of tunes controllers with Particle Swarm Optimization and Differential Evaluation algorithms are more similar than Genetic algorithm and the processing time of Differential Evaluation is less than Genetic algorithm and Particle Swarm Optimization. Moreover, performance of the Particle Swarm Optimization and Differential Evaluation algorithms are better than Genetic algorithm for tuning the controllers parameters."}
{"_id": "12a97799334e3a455e278f2a995a93a6e0c034bf", "title": "Accurate Linear-Time Chinese Word Segmentation via Embedding Matching", "text": "This paper proposes an embedding matching approach to Chinese word segmentation, which generalizes the traditional sequence labeling framework and takes advantage of distributed representations. The training and prediction algorithms have linear-time complexity. Based on the proposed model, a greedy segmenter is developed and evaluated on benchmark corpora. Experiments show that our greedy segmenter achieves improved results over previous neural network-based word segmenters, and its performance is competitive with state-of-the-art methods, despite its simple feature set and the absence of external resources for training."}
{"_id": "5c1ab86362dec6f9892a5e4055a256fa5c1772af", "title": "Handover Performance in 3GPP Long Term Evolution (LTE) Systems", "text": "The specification of the long term evolution (LTE) of 3G systems is currently ongoing in 3GPP with a target date of ready specification at the end of 2007. The evolved radio access network (RAN) involves a new radio interface based on OFDM technology and a radically different RAN architecture, where radio functionality is distributed into the base stations. The distributed nature of the RAN architecture calls for new radio control algorithms and procedures that operate in a distributed manner, including a distributed handover scheme as well. The most important aspects of the handover procedure in LTE has been already settled in 3GPP except a few details. In this paper we give an overview of the LTE intra-access handover procedure and evaluate its performance focusing on the user perceived performance aspects of it. We investigate the necessity of packet forwarding from a TCP throughput point of view, we analyse the problem of out of order packet delivery during handover and propose a simple solution for it. Finally, we investigate the impact of HARQ/ARQ state discard at handover on the radio efficiency. The results show that neither the user perceived performance nor the radio efficiency are compromised by the relocation based handover procedure of LTE."}
{"_id": "51039a6ff686d74a4dec88b3b0e4b85f07222912", "title": "A bandwidth-tunable bioamplifier with voltage-controlled symmetric pseudo-resistors", "text": "This paper describes a bioamplifier that employs a voltage-controlled-pseudo-resistor to achieve tunable bandwidth and wide operating voltage range for biomedical applications. The versatile pseudo-resistor employed provides ultra-high resistance for ac coupling to cancel the dc offset from electrode\u2013tissue interface. The voltage-controlled-pseudo-resistor consists of serial-connected PMOS transistors working at the subthreshold region and an auto-tuning circuit that makes sure the constant (time-invariant) control-voltage of the pseudo-resistor. This bandwidth-tunable bioamplifier is designed in a 0.18-\u03bcm standard CMOS process, achieving a gain of 40.2 dB with 10.35-\u03bcW power consumption. The designed chip was also used to develop the proof-of-concept prototype. An operation bandwidth of 9.5 kHz, inputreferred noise of 5.2 \u03bcVrms from 6.3 Hz to 9.5 kHz and 5.54 \u03bcVrms from 250 Hz to 9.5 kHz, and a tunable cutoff-frequency from 6.3\u2013600 Hz were demonstrated to prove our design. & 2015 Elsevier Ltd. All rights reserved."}
{"_id": "5342ed6e2faec80d85aaaccd5ab4db0abcf355ea", "title": "Broadband Planar Fully Integrated 8 $\\, \\times \\,$8 Butler Matrix Using Coupled-Line Directional Couplers", "text": "An innovative approach that allows for realization of broadband planar fully integrated 8 \u00d7 8 Butler matrices is presented. Coupled-line directional couplers have been utilized, which enable broad operational band. A novel arrangement of the network has been proposed that allows the creation of an entirely planar design having two metallization layers and no interconnections between these layers. Four selected crossovers have been realized as a tandem connection of two 3-dB/90\u00b0 coupled-line directional couplers, which, together with reference lines having appropriate electrical lengths, perform simultaneously crossovers of signal lines, and all needed broadband constant value phase shifters. Moreover, two of the needed 3-dB/90\u00b0 directional couplers have been designed as tandem connections of two 8.34-dB directional couplers, acting as 3-dB/90\u00b0 directional couplers having crossed outputs. With such a modification, a fully planar design with no inter-layer connections is possible. The proposed network arrangement has been experimentally tested with the design of an 8 \u00d7 8 Butler matrix operating at the center frequency of 3 GHz. The obtained measurement results fully confirm the validity of the proposed approach."}
{"_id": "c19fc9aa2d5d57139ac75869aa12d113f3e288a2", "title": "Design and Development of GPS-GSM Based Tracking System with Google Map Based Monitoring", "text": "GPS is one of the technologies that are used in a huge number of applications today. One of the applications is tracking your vehicle and keeps regular monitoring on them. This tracking system can inform you the location and route travelled by vehicle, and that information can be observed from any other remote location. It also includes the web application that provides you exact location of target. This system enables us to track target in any weather conditions. This system uses GPS and GSM technologies. The paper includes the hardware part which comprises of GPS, GSM, Atmega microcontroller MAX 232, 16x2 LCD and software part is used for interfacing all the required modules and a web application is also developed at the client side. Main objective is to design a system that can be easily installed and to provide platform for further enhancement."}
{"_id": "342ebf89fb87bd271ae3abc4512129f8db4e258d", "title": "The DRAM Latency PUF: Quickly Evaluating Physical Unclonable Functions by Exploiting the Latency-Reliability Tradeoff in Modern Commodity DRAM Devices", "text": "Physically Unclonable Functions (PUFs) are commonly used in cryptography to identify devices based on the uniqueness of their physical microstructures. DRAM-based PUFs have numerous advantages over PUF designs that exploit alternative substrates: DRAM is a major component of many modern systems, and a DRAM-based PUF can generate many unique identiers. However, none of the prior DRAM PUF proposals provide implementations suitable for runtime-accessible PUF evaluation on commodity DRAM devices. Prior DRAM PUFs exhibit unacceptably high latencies, especially at low temperatures (e.g., >125.8s on average for a 64KiB memory segment below 55C), and they cause high system interference by keeping part of DRAM unavailable during PUF evaluation. In this paper, we introduce the DRAM latency PUF, a new class of fast, reliable DRAM PUFs. The key idea is to reduce DRAM read access latency below the reliable datasheet specications using software-only system calls. Doing so results in error patterns that reect the compound eects of manufacturing variations in various DRAM structures (e.g., capacitors, wires, sense ampli- ers). Based on a rigorous experimental characterization of 223 modern LPDDR4 DRAM chips, we demonstrate that these error patterns 1) satisfy runtime-accessible PUF requirements, and 2) are quickly generated (i.e., at 88.2ms) irrespective of operating temperature using a real system with no additional hardware modications. We show that, for a constant DRAM capacity overhead of 64KiB, our implementation of the DRAM latency PUF enables an average (minimum, maximum) PUF evaluation time speedup of 152x (109x, 181x) at 70C and 1426x (868x, 1783x) at 55C when compared to a DRAM retention PUF and achieves greater speedups at even lower temperatures."}
{"_id": "b9e4469ef36e3ce41e1052563e8dbce1d4762368", "title": "CrackIT \u2014 An image processing toolbox for crack detection and characterization", "text": "This paper presents a comprehensive set of image processing algorithms for detection and characterization of road pavement surface crack distresses, which is being made available to the research community. The toolbox, in the Matlab environment, includes algorithms to preprocess images, to detect cracks and characterize them into types, based on image processing and pattern recognition techniques, as well as modules devoted to the performance evaluation of crack detection and characterization solutions. A sample database of 84 pavement surface images taken during a traditional road survey is provided with the toolbox, since no pavement image databases are publicly available for crack detection and characterization evaluation purposes. Results achieved applying the proposed toolbox to the sample database are discussed, illustrating the potential of the available algorithms."}
{"_id": "adfc2b7dc3eb7bd9811d17fe4abd5d6151e97018", "title": "Local-Learning-Based Feature Selection for High-Dimensional Data Analysis", "text": "This paper considers feature selection for data classification in the presence of a huge number of irrelevant features. We propose a new feature-selection algorithm that addresses several major issues with prior work, including problems with algorithm implementation, computational complexity, and solution accuracy. The key idea is to decompose an arbitrarily complex nonlinear problem into a set of locally linear ones through local learning, and then learn feature relevance globally within the large margin framework. The proposed algorithm is based on well-established machine learning and numerical analysis techniques, without making any assumptions about the underlying data distribution. It is capable of processing many thousands of features within minutes on a personal computer while maintaining a very high accuracy that is nearly insensitive to a growing number of irrelevant features. Theoretical analyses of the algorithm's sample complexity suggest that the algorithm has a logarithmical sample complexity with respect to the number of features. Experiments on 11 synthetic and real-world data sets demonstrate the viability of our formulation of the feature-selection problem for supervised learning and the effectiveness of our algorithm."}
{"_id": "7e8064a35914d301eb41d2211e770b1d1ad5f9b8", "title": "Nasal bone length throughout gestation: normal ranges based on 3537 fetal ultrasound measurements.", "text": "OBJECTIVE\nTo establish normal ranges for nasal bone length measurements throughout gestation and to compare measurements in two subsets of patients of different race (African-American vs. Caucasian) to determine whether a different normal range should be used in these populations.\n\n\nMETHOD\nNormal nasal bone length reference ranges were generated using prenatal measurements by a standardized technique in 3537 fetuses.\n\n\nRESULTS\nThe nasal bone lengths were found to correlate positively with advancing gestation (R(2) = 0.77, second-order polynomial). No statistical difference was found between African-American and Caucasian subjects.\n\n\nCONCLUSION\nThese reference ranges may prove to be useful in prenatal screening and diagnosis of syndromes known to be associated with nasal hypoplasia. Different normal ranges for African-American and Caucasian women are not required."}
{"_id": "fe6ade6920b212bd32af011bc6efa94b66107e1f", "title": "A Laminated Waveguide Magic-T With Bandpass Filter Response in Multilayer LTCC", "text": "A laminated waveguide magic-T with imbedded Chebyshev filter response is developed in a multilayer low-temperature co-fired ceramic technology by vertically stacked rectangular cavity resonators with highly symmetric coupling structures. The cavities provide even- and odd-symmetric field distributions by simultaneously resonating TE102 and TE201 modes at the center frequency of the magic-T. Thus, the in-phase and out-of-phase responses of the magic-T function are accomplished according to the induced mode. Meanwhile, the filter frequency response is realized by the cascaded cavities with proper coupling strengths according to the filter specification. With the degenerate, but orthogonal cavity modes, a fewer number of cavities are required and the circuit size is further reduced. A third-order bandpass magic-T is designed and fabricated to operate at 24 GHz with 6% fractional bandwidth. The highly symmetric structure of the implemented magic-T provides a low in-band magnitude imbalance ( \u00b10.25 dB) and phase imbalance (0\u00b0-6\u00b0). The sum and difference ports also provide an isolation greater than 30 dB in the operation frequency range. Measurement and simulated results are in good agreement, both validating the design concept."}
{"_id": "bd428c0e44a0743d10e4c0ec5bfe243e2a64f889", "title": "Demand Side Management in Smart Grids", "text": "Smart grids are considered to be the next generation electric grids. Drivers for this development are among others: an increased electricity demand, an ageing network infrastructure and an increased share of renewable energy sources. Sustainability and the European 20/20/20 climate and energy targets are also main drivers for this development. Some of the important aspects of smart grids are that they allow two-way communication between the actors in the grid and enable optimisation of electricity use and production and customer participation. Demand side management is associated to smart grids and means adapting the electricity demand to the electricity production and the available electricity in the grid. It is considered that smart grids and demand side management hold potential to facilitate an increased share of renewable energy sources and to reduce the need for the current power reserve, which is based on fossil energy sources. The aim of this study was to compile and analyse some of the work that has been done and is being implemented around the world concerning demand side management in smart grids with a focus on market rules and business models in relation to Swedish conditions. The study includes a review of selected research and demonstration projects. Success factors and knowledge gaps have been identified through an analysis of the project findings and recommendations. A description of the current Nordic market conditions is also given. The conclusions regarding the conditions on the Swedish electricity market is that it appears to be relatively well adjusted to demand side management with the possibility for hourly meter readings and a full deployment of smart meters. The Nord Pool electricity market allows both day-ahead trading and intra-day trading on the elspot and the elbas markets. The review of the projects show that there is a potential for achieving flexible load from private customers. Hourly metering is seen as a prerequisite for demand side management and active customers is also considered crucial. Visualisation of electricity consumption and current costs is shown to be an important tool for encouraging customers to take an active part in their electricity consumption. Some of the other methods for achieving flexibility are for example: different types of pricing models, a signal for renewable energy sources and direct and indirect control of smart appliances and heating systems. The aggregation concept is an example of direct control that have been used and show potential for achieving flexibility in the grid. The concept means that a so called aggregator, which for example could be the supplier or the distribution system operator, compiles and controls the electrical load for several customers. A key challenge that has been identified is the issue of standardisation. Exchange of experiences from various research and demonstration projects is also considered desirable to support the development of smart grid and demand response."}
{"_id": "5f0157e8a852fc2b1b548342102405aa53c39eb9", "title": "Usability Engineering for Augmented Reality: Employing User-Based Studies to Inform Design", "text": "A major challenge, and thus opportunity, in the field of human-computer interaction and specifically usability engineering (UE) is designing effective user interfaces for emerging technologies that have no established design guidelines or interaction metaphors or introduce completely new ways for users to perceive and interact with technology and the world around them. Clearly, augmented reality (AR) is one such emerging technology. We propose a UE approach that employs user-based studies to inform design by iteratively inserting a series of user-based studies into a traditional usability-engineering life cycle to better inform initial user interface designs. We present an exemplar user-based study conducted to gain insight into how users perceive text in outdoor AR settings and to derive implications for design in outdoor AR. We also describe \"lessons learned\" from our experiences, conducting user-based studies as part of the design process."}
{"_id": "877d81886b57db2980bd87a0527508232f1f2a24", "title": "Dataless Text Classification: A Topic Modeling Approach with Document Manifold", "text": "Recently, dataless text classification has attracted increasing attention. It trains a classifier using seed words of categories, rather than labeled documents that are expensive to obtain. However, a small set of seed words may provide very limited and noisy supervision information, because many documents contain no seed words or only irrelevant seed words. In this paper, we address these issues using document manifold, assuming that neighboring documents tend to be assigned to a same category label. Following this idea, we propose a novel Laplacian seed word topic model (LapSWTM). In LapSWTM, we model each document as a mixture of hidden category topics, each of which corresponds to a distinctive category. Also, we assume that neighboring documents tend to have similar category topic distributions. This is achieved by incorporating a manifold regularizer into the log-likelihood function of the model, and then maximizing this regularized objective. Experimental results show that our LapSWTM significantly outperforms the existing dataless text classification algorithms and is even competitive with supervised algorithms to some extent. More importantly, it performs extremely well when the seed words are scarce."}
{"_id": "e8d45eee001ef838c7f8f4eef41fae98de5246f4", "title": "The Cambridge Face Memory Test: Results for neurologically intact individuals and an investigation of its validity using inverted face stimuli and prosopagnosic participants", "text": "The two standardized tests of face recognition that are widely used suffer from serious shortcomings [Duchaine, B. & Weidenfeld, A. (2003). An evaluation of two commonly used tests of unfamiliar face recognition. Neuropsychologia, 41, 713-720; Duchaine, B. & Nakayama, K. (2004). Developmental prosopagnosia and the Benton Facial Recognition Test. Neurology, 62, 1219-1220]. Images in the Warrington Recognition Memory for Faces test include substantial non-facial information, and the simultaneous presentation of faces in the Benton Facial Recognition Test allows feature matching. Here, we present results from a new test, the Cambridge Face Memory Test, which builds on the strengths of the previous tests. In the test, participants are introduced to six target faces, and then they are tested with forced choice items consisting of three faces, one of which is a target. For each target face, three test items contain views identical to those studied in the introduction, five present novel views, and four present novel views with noise. There are a total of 72 items, and 50 controls averaged 58. To determine whether the test requires the special mechanisms used to recognize upright faces, we conducted two experiments. We predicted that controls would perform much more poorly when the face images are inverted, and as predicted, inverted performance was much worse with a mean of 42. Next we assessed whether eight prosopagnosics would perform poorly on the upright version. The prosopagnosic mean was 37, and six prosopagnosics scored outside the normal range. In contrast, the Warrington test and the Benton test failed to classify a majority of the prosopagnosics as impaired. These results indicate that the new test effectively assesses face recognition across a wide range of abilities."}
{"_id": "38ba24eb2242e073a48caa5b1d34a04046e09a83", "title": "(s|qu)eries: Visual Regular Expressions for Querying and Exploring Event Sequences", "text": "Many different domains collect event sequence data and rely on finding and analyzing patterns within it to gain meaningful insights. Current systems that support such queries either provide limited expressiveness, hinder exploratory workflows or present interaction and visualization models which do not scale well to large and multi-faceted data sets. In this paper we present (s|qu)eries (pronounced \"Squeries\"), a visual query interface for creating queries on sequences (series) of data, based on regular expressions. (s|qu)eries is a touch-based system that exposes the full expressive power of regular expressions in an approachable way and interleaves query specification with result visualizations. Being able to visually investigate the results of different query-parts supports debugging and encourages iterative query-building as well as exploratory work-flows. We validate our design and implementation through a set of informal interviews with data scientists that analyze event sequences on a daily basis."}
{"_id": "45f2a3c6faf4ea158121c9a1ea336400d0776dc5", "title": "SnipSuggest: Context-Aware Autocompletion for SQL", "text": "In this paper, we present SnipSuggest, a system that provides onthe-go, context-aware assistance in the SQL composition process. SnipSuggest aims to help the increasing population of non-expert database users, who need to perform complex analysis on their large-scale datasets, but have difficulty writing SQL queries. As a user types a query, SnipSuggest recommends possible additions to various clauses in the query using relevant snippets collected from a log of past queries. SnipSuggest\u2019s current capabilities include suggesting tables, views, and table-valued functions in the FROM clause, columns in the SELECT clause, predicates in the WHERE clause, columns in the GROUP BY clause, aggregates, and some support for sub-queries. SnipSuggest adjusts its recommendations according to the context: as the user writes more of the query, it is able to provide more accurate suggestions. We evaluate SnipSuggest over two query logs: one from an undergraduate database class and another from the Sloan Digital Sky Survey database. We show that SnipSuggest is able to recommend useful snippets with up to 93.7% average precision, at interactive speed. We also show that SnipSuggest outperforms na\u0131\u0308ve approaches, such as recommending popular snippets."}
{"_id": "4724a2274b957939e960e1ea4cdfb5a319ff3d63", "title": "Profiler: integrated statistical analysis and visualization for data quality assessment", "text": "Data quality issues such as missing, erroneous, extreme and duplicate values undermine analysis and are time-consuming to find and fix. Automated methods can help identify anomalies, but determining what constitutes an error is context-dependent and so requires human judgment. While visualization tools can facilitate this process, analysts must often manually construct the necessary views, requiring significant expertise. We present Profiler, a visual analysis tool for assessing quality issues in tabular data. Profiler applies data mining methods to automatically flag problematic data and suggests coordinated summary visualizations for assessing the data in context. The system contributes novel methods for integrated statistical and visual analysis, automatic view suggestion, and scalable visual summaries that support real-time interaction with millions of data points. We present Profiler's architecture --- including modular components for custom data types, anomaly detection routines and summary visualizations --- and describe its application to motion picture, natural disaster and water quality data sets."}
{"_id": "09d9e89983b07c589e35196f4a0c161987042670", "title": "Wired for Speech: How Voice Activates and Advances the Human-Computer Relationship", "text": "Spoken dialogue systems have received increased interest because they are potentially much more natural and powerful methods of communicating with machines than are current graphics-based interfaces. Wired for Speech presents basic research in the psychological and sociological aspects of voice synthesis and recognition. Its major lesson is that people attribute human characteristics to spoken dialogue systems for reasons related to human evolution. But although it contains interesting basic research, the book is mainly aimed at giving technological or marketing advice to those seeking to use voice interfaces when creating commercial applications. The book is oriented around a series of simple experiments designed to show just how pervasive psychological and social influences can be on the opinions and behaviors of people confronted with voice interfaces. Each chapter describes a basic research hypothesis, introduces an experiment to test it, and discusses its implications for designing voice interfaces: gender, personality, accent, ethnicity, emotion, number of distinct voices, use of \" I \" by the system, voices in concert with faces, mixed synthetic and recorded voices, context, and the effects of errors in human\u2013computer cooperation. Although Wired for Speech is very accessible , especially to the non-scientist, it is written with an unusual bibliography style for an academic book: All references and details are given in a notes section at the end of the book, making up one third of the content. This narrative exposition style will probably not satisfy either type of reader: Scientists will be frustrated at the imprecision in argumen-tation, lack of detail in the book itself, and continually having to refer to the notes. The lack of detail also prevents the book from serving as a reference work. Meanwhile those needing advice when implementing voice interfaces will be puzzled at references to Chomsky, Grice, and Zemlin. Thus the book seems to suffer from not knowing its audience well, which is odd as this is precisely the lesson that the book tries to impart. Another complaint from the scientific point of view is the obsession this particular book has with casting every experimental result as advice for the business and marketing side of voice interfaces, typically concentrating on Web-based e-marketing examples such as buying books on line. Most of the experiments have sample sizes of between 40 and 50, but the authors seem ready to invite multi-billion-dollar businesses to immediately base their deployed systems on these results. Finally, and fundamentally, this \u2026"}
{"_id": "1a41157538ac14718d9ec74f52050891330e6639", "title": "Characterizing the usability of interactive applications through query log analysis", "text": "People routinely rely on Internet search engines to support their use of interactive systems: they issue queries to learn how to accomplish tasks, troubleshoot problems, and otherwise educate themselves on products. Given this common behavior, we argue that search query logs can usefully augment traditional usability methods by revealing the primary tasks and needs of a product's user population. We term this use of search query logs CUTS - characterizing usability through search. In this paper, we introduce CUTS and describe an automated process for harvesting, ordering, labeling, filtering, and grouping search queries related to a given product. Importantly, this data set can be assembled in minutes, is timely, has a high degree of ecological validity, and is arguably less prone to self-selection bias than data gathered via traditional usability methods. We demonstrate the utility of this approach by applying it to a number of popular software and hardware systems."}
{"_id": "6be9447feb1bf824d6a1c827e89e6db3f59566de", "title": "Multithreaded Sliding Window Approach to Improve Exact Pattern Matching Algorithms", "text": "In this paper an efficient pattern matching approach, based on a multithreading sliding window technique, is proposed to improve the efficiency of the common sequential exact pattern matching algorithms including: (i) Brute Force, (ii) Knuth-Morris-Pratt and (iii) Boyer-Moore. The idea is to divide the text under-search into blocks, each block is allocated one or two threads running concurrently. Reported experimental results indicated that the proposed approach improves the performance of the well-known pattern matching algorithms, in terms of search time, especially when the searched patterns are located at the middle or at the end of the text. Keywords\u2014pattern matching; multithreading; sliding window; Brute Force; Knuth-Morris-Pratt; Boyer-Moore"}
{"_id": "3fb91bbffa86733fc68d4145e7f081353eb3dcd8", "title": "Techniques of EMG signal analysis: detection, processing, classification and applications", "text": "Electromyography (EMG) signals can be used for clinical/biomedical applications, Evolvable Hardware Chip (EHW) development, and modern human computer interaction. EMG signals acquired from muscles require advanced methods for detection, decomposition, processing, and classification. The purpose of this paper is to illustrate the various methodologies and algorithms for EMG signal analysis to provide efficient and effective ways of understanding the signal and its nature. We further point up some of the hardware implementations using EMG focusing on applications related to prosthetic hand control, grasp recognition, and human computer interaction. A comparison study is also given to show performance of various EMG signal analysis methods. This paper provides researchers a good understanding of EMG signal and its analysis procedures. This knowledge will help them develop more powerful, flexible, and efficient applications."}
{"_id": "6d976d3171e1bf655c0347dc17ac29c55d2afb6c", "title": "A particle swarm optimization algorithm for flexible job shop scheduling problem", "text": "The classical job shop scheduling problem (JSP) is the most popular machine scheduling model in practice and is well known as NP-hard. The formulation of the JSP is based on the assumption that for each part type or job there is only one process plan that prescribes the sequence of operations and the machine on which each operation has to be performed. Flexible job shop scheduling problem (FJSP) is an extension of the JSP, which allows an operation to be processed by any machine from a given set. Since FJSP requires an additional decision of machine allocation during scheduling, therefore it is much more complex problem than JSP. To solve such NP-hard problems, heuristic approaches are commonly preferred over the traditional mathematical techniques. This paper proposes a particle swarm optimization (PSO) based heuristic for solving the FJSP for minimum makespan time criterion. The performance of the proposed PSO is evaluated by comparing its results with the results obtained using ILOG Solver, a constraint-programming tool. The comparison of the results proves the effectiveness of the proposed PSO for solving FJSP instances."}
{"_id": "605a105608bb7aff888360879c53ed02ac116577", "title": "Successfully Managing Impending Skin Necrosis following Hyaluronic Acid Filler Injection, using High-Dose Pulsed Hyaluronidase", "text": "Facial fillers are becoming increasingly popular as aesthetic procedures to temporarily reduce the depth of wrinkles or to contour faces. However, even in the hands of very experienced injectors, there is always a small possibility of vascular complications like intra-arterial injection of filler substance. We present a case report of a patient who developed features of vascular obstruction in right infraorbital artery and tell-tale signs of impending skin necrosis, after hyaluronic acid filler injection by an experienced injector. The diagnosis of a vascular complication was made quickly with the help of clinical features like blanching, livedo reticularis, and poor capillary refill. Patient was treated promptly with \"high-dose pulsed hyaluronidase protocol\" comprising three 1,000-unit pulses of hyaluronidase, administered hourly. There was no further increase in size of the involved area after the first dose of hyaluronidase. All of the involved area, along with 1\u2009cm overlapping in uninvolved skin area, was injected during each injection pulse, using a combination of cannula and needle. Complete reperfusion and good capillary filling were achieved after completion of 3 pulses, and these were taken as the end-point of high-dose pulsed hyaluronidase treatment. Immediate skin changes after filler injections, as well as after hyaluronidase injections and during the 3-week recovery period, were documented with photographs and clinical notes. Involved skin was found to have been fully recovered from this vascular episode, thus indicating that complete recovery of the ischemic skin changes secondary to possible intra-arterial injection could be achieved using high-dose pulsed hyaluronidase protocol."}
{"_id": "c2b4835746ba788b81a2906ac228f86ac6c8162b", "title": "Characterizing and visualizing physical world accessibility at scale using crowdsourcing, computer vision, and machine learning", "text": "Poorly maintained sidewalks and street intersections pose considerable accessibility challenges for people with mobility-impairments [13,14]. According to the most recent U.S. Census (2010), roughly 30.6 million adults have physical disabilities that affect their ambulatory activities [22]. Of these, nearly half report using an assistive aid such as a wheelchair (3.6 million) or a cane, crutches, or walker (11.6 million) [22]. Despite comprehensive civil rights legislation for Americans with Disabilities (e.g., [25,26]), many city streets, sidewalks, and businesses in the U.S. remain inaccessible. The problem is not just that street-level accessibility fundamentally affects where and how people travel in cities, but also that there are few, if any, mechanisms to determine accessible areas of a city a priori. Indeed, in a recent report, the National Council on Disability noted that they could not find comprehensive information on the \"degree to which sidewalks are accessible\" across the US [15]. This lack of information can have a significant negative impact on the independence and mobility of citizens [13,16] For example, in our own initial formative interviews with wheelchair users, we uncovered a prevailing view about navigating to new areas of a city: \"I usually don't go where I don't know [about accessible routes]\" (Interviewee 3, congenital polyneuropathy). Our overarching research vision is to transform the way in which street-level accessibility information is collected and used to support new types of assistive map-based tools."}
{"_id": "f13c1df14bb289807325f4431e81c2ea10be952f", "title": "Capacitance and Force Computation Due to Direct and Fringing Effects in MEMS/NEMS Arrays", "text": "An accurate computation of electrical force is significant in analyzing the performance of microelectromechanical systems and nanoelectromechanical systems. Many analytical and empirical models are available for computing the forces, especially, for a single set of parallel plates. In general, these forces are computed based on the direct electric field between the overlapping areas of the plates and the fringing field effects. Most of the models, which are based on direct electric field effect, consider only the trivial cases of the fringing field effects. In this paper, we propose different models which are obtained from the numerical simulations. It is found to be useful in computing capacitance as well force in simple and complex configurations consisting of an array of beams and electrodes. For the given configurations, the analytical models are compared with the available models and numerical results. While the percentage error of the proposed model is found to be under 6% with respect to the numerical results, the error associated with the analytical model without the fringing field effects is ~50 %. The proposed model can be applied to the devices in which the fringing field effects are dominant."}
{"_id": "8bbc92a1b8a60c8a3460fa20a45e8063fbf88324", "title": "Self-Supervision for Reinforcement Learning by Parsa Mahmoudieh", "text": "Reinforcement learning optimizes policies for expected cumulative reward. Need the supervision be so narrow? Reward is delayed and sparse for many tasks, making it a difficult and impoverished signal for end-to-end optimization. To augment reward, we consider a range of self-supervised tasks that incorporate states, actions, and successors to provide auxiliary losses. These losses offer ubiquitous and instantaneous supervision for representation learning even in the absence of reward. While current results show that learning from reward alone is feasible, pure reinforcement learning methods are constrained by computational and data efficiency issues that can be remedied by auxiliary losses. Self-supervised pretraining and joint optimization improve the data efficiency and policy returns of end-to-end reinforcement learning."}
{"_id": "04bd349fd466ddd8224cff34cba200b5184f969d", "title": "Vertical and horizontal elasticity for dynamic virtual machine reconfiguration", "text": "Today, cloud computing applications are rapidly constructed by services belonging to different cloud providers and service owners. This work presents the inter-cloud elasticity framework, which focuses on cloud load balancing based on dynamic virtual machine reconfiguration when variations on load or on user requests volume are observed. We design a dynamic reconfiguration system, called inter-cloud load balancer (ICLB), that allows scaling up or down the virtual resources (thus providing automatized elasticity), by eliminating service downtimes and communication failures. It includes an inter-cloud load balancer for distributing incoming user HTTP traffic across multiple instances of inter-cloud applications and services and we perform dynamic reconfiguration of resources according to the real time requirements. The experimental analysis includes different topologies by showing how real-time traffic variation (using real world workloads) affects resource utilization and by achieving better resource usage in inter-cloud."}
{"_id": "77ccf604ca460ac65d2bd14792c901879c4a0153", "title": "Harmony Search Algorithm for Solving Sudoku", "text": "Harmony search (HS) algorithm was applied to solving Sudoku puzzle. The HS is an evolutionary algorithm which mimics musicians\u2019 behaviors such as random play, memory-based play, and pitch-adjusted play when they perform improvisation. Sudoku puzzles in this study were formulated as an optimization problem with number-uniqueness penalties. HS could successfully solve the optimization problem after 285 function evaluations, taking 9 seconds. Also, sensitivity analysis of HS parameters was performed to obtain a better idea of algorithm parameter values."}
{"_id": "5248faae83b3215e0a6f666bf06a325b265633cb", "title": "Introduction Advances in Text Comprehension : Model , Process and Development", "text": "To a very large extent, children learn in and out of school from written text. Information Communications Technologies (ICT) offers many possibilities to facilitate learning by confronting children with multimodal texts. In order to be able to implement learning environments that optimally facilitate children\u2019s learning, insight is needed into the cognitive processes underlying text comprehension. In this light, the aim of this special issue is to report on new advances in text comprehension research in perspective of educational implications. Starting from recent theoretical frameworks on the cognitive processes underlying text comprehension, the online processing of text will be discussed in adults and in school children. Copyright # 2008 John Wiley & Sons, Ltd."}
{"_id": "9241288953ded43ea7c8f984ecfaef3fdcc942aa", "title": "Food Image Recognition Using Very Deep Convolutional Networks", "text": "We evaluated the effectiveness in classifying food images of a deep-learning approach based on the specifications of Google's image recognition architecture Inception. The architecture is a deep convolutional neural network (DCNN) having a depth of 54 layers. In this study, we fine-tuned this architecture for classifying food images from three well-known food image datasets: ETH Food-101, UEC FOOD 100, and UEC FOOD 256. On these datasets we achieved, respectively, 88.28%, 81.45%, and 76.17% as top-1 accuracy and 96.88%, 97.27%, and 92.58% as top-5 accuracy. To the best of our knowledge, these results significantly improve the best published results obtained on the same datasets, while requiring less computation power, since the number of parameters and the computational complexity are much smaller than the competitors?. Because of this, even if it is still rather large, the deep network based on this architecture appears to be at least closer to the requirements for mobile systems."}
{"_id": "f609da5448e8dd5cd04095010518dcb4dca22bc0", "title": "Patterns of HCI Design and HCI Design of Patterns", "text": "This chapter introduces the concept of human\u2013computer interaction (HCI) design pattern\u2014also called UI design pattern, interaction design patterns, HCI patterns, user experience pattern and usability engineering pattern. In this book, we mainly use the term HCI Design pattern, but we also use all these terms interchangeably to refer to HCI design pattern. HCI design patterns\u2014have been introduced as a medium to capture and represent users\u2019 experiences as well as to disseminate it as design knowledge. Patterns have the potential of transferring the design expertise of HCI designers to software engineers, who are usually unfamiliar with UI design and usability principles. What is the difference between design patterns and HCI design patterns? Why are they important among HCI designers and SE practitioners? Why design patterns have been considered as a HCI design tool? Why and how HCI design patterns can make a difference? This chapter provides a first answer to these questions that are the key objectives of this book. 1.1 Original Ideas About Design Pattern Among the early attempts to capture and use design knowledge in the format of design patterns, the first major milestone is often attributed to the architect Christopher Alexander, in the late 1970s. In his two books, A Pattern Language (Alexander 1977) and A Timeless Way of Building (Alexander 1979), Alexander, the father of patterns, discusses the avenues to capture and use design knowledge, and presents a large collection of pattern examples to help architects and engineers with the design of buildings, towns, and other urban entities. To illustrate the concept of pattern, Alexander proposes an architectural pattern called Wings of Light (Alexander 1977), where the problem is: Modern buildings are often shaped with no concern for natural light\u2014they depend almost entirely on artificial light. But, buildings which displace natural light as the major source of illumination are not fit places to spend the day. Amongst other information such as design rationale, examples, and links to related patterns, the solution statement to this problem is: 1 The Patterns of HCI Design 2 Arrange each building so that it breaks down into wings which correspond, approximately, to the most important natural social groups within the building. Make each wing long and as narrow as you can\u2014never more than 25 ft wide. According to Alexander, every pattern has three essential elements illustrated in Fig. 1.1, which are: a context, a problem, and a solution. The context describes a recurring set of situations in which the pattern can be applied. The problem refers to a set of forces, i.e., goals and constraints, which occur in the context. Generally, the problem describes when to apply the pattern. The solution refers to a design form or a design rule that can be applied to resolve the forces. It also describes the elements that constitute a pattern, relationships among these elements, as well as responsibilities and collaboration. All of Alexander\u2019s patterns address recurrent problems that designers face by providing a possible solution within a specific context. They follow a similar structure, and the presented information is organized into pattern attributes, such as Problem and Design Rationale. 
Most noteworthy, the presented solution statement is abstract enough to capture only the invariant properties of good design. The specific pattern implementation is dependent on the design details and the designer\u2019s creativity (Dix et al. 2003). In the example above, there is no mention of specific details such as the corresponding positions of wings to one another, or even the number of wings. These implementation details are left to the designer, allowing different instances of the same pattern solution. In addition, Alexander (1977) recognized that the design and construction of buildings required all stakeholders to make use of a common language for facilitating the implementation of the project from its very beginnings to its completion. If organized properly, patterns could achieve this for all the participants of a design project, acting as a communication tool for design. The pattern concept was not well known until 1987 when patterns appeared again at Object-Oriented Programming, Systems, Languages & Applications (OOPSLA), the object orientation conference in Orlando. There Kent Beck and Ward Cunningham (1987) introduced pattern languages for object-oriented program construction in a seminal paper. Since then many papers and presentations have appeared, authored by renowned software design practitioners such as Grady Booch, Richard Helm, Erich Gamma, and Kent Beck. In 1993, the formation of Hildside Group (1993) by Beck, Cunningham, Coplien, Booch, Johnson and others was the first step forward to forming a design patterns community in the field of software engineering."}
{"_id": "cc51aa9963bbb8cbcf26f4033e707e5d04052186", "title": "Construct validity of the Self-Compassion Scale-Short Form among psychotherapy clients", "text": "Construct validity of the Self-Compassion ScaleShort Form among psychotherapy clients Jeffrey A. Hayes, Allison J. Lockard, Rebecca A. Janis & Benjamin D. Locke To cite this article: Jeffrey A. Hayes, Allison J. Lockard, Rebecca A. Janis & Benjamin D. Locke (2016): Construct validity of the Self-Compassion Scale-Short Form among psychotherapy clients, Counselling Psychology Quarterly, DOI: 10.1080/09515070.2016.1138397 To link to this article: http://dx.doi.org/10.1080/09515070.2016.1138397"}
{"_id": "2bde586f6bbf1de3526f08f6c06976f9e535d4d3", "title": "An evolutionary game-theoretic modeling for heterogeneous information diffusion", "text": "In this paper, we model and analyze the information diffusion in heterogeneous social networks from an evolutionary game perspective. Users interact with each other according to their individual fitness, which are heterogeneous among different user types. We first study a model where in each social interaction the payoff of a user is independent of the type of the interacted user. In such a case, we derive the information diffusion dynamics of each type of users as well as that of the overall network. The evolutionarily stable states (ESSs) of the dynamics are determined accordingly. Afterwards, we investigate a more general model where in each interaction the payoff of a user depends on the type of the interacted user. We show that the local influence dynamics change much more quickly than the global strategy population dynamics and the former keeps track of the latter throughout the information diffusion process. Based on this observation, the global strategy population dynamics are derived. Finally, simulations are conducted to verify the theoretical results."}
{"_id": "cbb866b10674bf7461b768ec154d4b9478e32e82", "title": "Comparative study on power conversion methods for wireless battery charging platform", "text": "In this paper, four different power conversion methods (voltage control, duty-cycle control, frequency control and phase-shift control) are compared for wireless power transfer applications by considering the energy transfer efficiency, electromagnetic interference, stability, and implementation complexity. The phase-shift control is found to be the optimal scheme with good efficiency and low electromagnetic interference. Its constant frequency feature is also within the framework of the new international wireless charging standard called \u2018Qi\u2019. A high system efficiency of 72% for 5W wireless charging applications has been achieved practically."}
{"_id": "7b3f48d53e5203b76df5997926522276c91658c9", "title": "A general procedure for the construction of Gorges polygons for multi-phase windings of electrical machines", "text": "This paper presents a simple and effective procedure for the determination of the Gorges polygon, suitable for all possible winding configurations in electrical machines. This methodology takes into account the determination of a Winding Distribution Table (WDT), in which all the information about the distribution of the currents along the stator periphery is computed and from which the G\u00f6rges polygon are easily derived. The proposed method can be applied to both symmetrical and asymmetrical multi-phase windings, including concentrated, fractional, reduced and dead-coil ones. The examples provided in this paper demonstrate the versatility of the proposed method."}
{"_id": "25549a1678eb8a5b95790b6d72a54970d7aa697d", "title": "Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction", "text": "We introduce a multi-task setup of identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SCIERC, a dataset that includes annotations for all three tasks and develop a unified framework called Scientific Information Extractor (SCIIE) for with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.1"}
{"_id": "0b951037b7b6d94a4f35ba096d83ee43e4721c66", "title": "Topic discovery and future trend forecasting for texts", "text": "Finding topics from a collection of documents, such as research publications, patents, and technical reports, is helpful for summarizing large scale text collections and the world wide web. It can also help forecast topic trends in the future. This can be beneficial for many applications, such as modeling the evolution of the direction of research and forecasting future trends of the IT industry. In this paper, we propose using association analysis and ensemble forecasting to automatically discover topics from a set of text documents and forecast their evolving trend in a near future. In order to discover meaningful topics, we collect publications from a particular research area, data mining and machine learning, as our data domain. An association analysis process is applied to the collected data to first identify a set of topics, followed by a temporal correlation analysis to help discover correlations between topics, and identify a network of topics and communities. After that, an ensemble forecasting approach is proposed to predict the popularity of research topics in the future. Our experiments and validations on data with 9\u00a0years of publication records validate the effectiveness of the proposed design."}
{"_id": "dc0d979f7c352b40c5076039a16e1002bd8b38c3", "title": "Fake News in Social Networks", "text": "We model the spread of news as a social learning game on a network. Agents can either endorse or oppose a claim made in a piece of news, which itself may be either true or false. Agents base their decision on a private signal and their neighbors\u2019 past actions. Given these inputs, agents follow strategies derived via multi-agent deep reinforcement learning and receive utility from acting in accordance with the veracity of claims. Our framework yields strategies with agent utility close to a theoretical, Bayes optimal benchmark, while remaining flexible to model re-specification. Optimized strategies allow agents to correctly identify most false claims, when all agents receive unbiased private signals. However, an adversary\u2019s attempt to spread fake news by targeting a subset of agents with a biased private signal can be successful. Even more so when the adversary has information about agents\u2019 network position or private signal. When agents are aware of the presence of an adversary they re-optimize their strategies in the training stage and the adversary\u2019s attack is less effective. Hence, exposing agents to the possibility of fake news can be an effective way to curtail the spread of fake news in social networks. Our results also highlight that information about the users\u2019 private beliefs and their social network structure can be extremely valuable to adversaries and should be well protected."}
{"_id": "4290a4451c87281c55b79a8aaf79c8261a55589f", "title": "Comparative Study on Vision Based Rice Seed Varieties Identification", "text": "This paper presents a system for automated classification of rice variety for rice seed production using computer vision and image processing techniques. Rice seeds of different varieties are visually very similar in color, shape and texture that make the classification of rice seed varieties at high accuracy challenging. We investigated various feature extraction techniques for efficient rice seed image representation. We analyzed the performance of powerful classifiers on the extracted features for finding the robust one. Images of six different rice seed varieties in northern Vietnam were acquired and analyzed. Our experiments have demonstrated that the average accuracy of our classification system can reach 90.54% using Random Forest method with a simple feature extraction technique. This result can be used for developing a computer-aided machine vision system for automated assessment of rice seeds purity."}
{"_id": "162875171336ed0f0dbf63926ee144372a47dc2e", "title": "Anatomical Variants in Prostate Artery Embolization: A Pictorial Essay", "text": "Prostate artery embolization (PAE) has emerged as a new treatment option for patients with symptomatic benign prostatic hyperplasia. The main challenges related to this procedure are navigating arteries with atherosclerosis and anatomical variations, and the potential risk of non-target embolization to pelvic structures due to the presence of collateral shunts and reflux of microparticles. Knowledge of classical vascular anatomy and the most common variations is essential for safe embolization, good clinical practice, and optimal outcomes. The aim of this pictorial essay is to illustrate the pelvic vascular anatomy relevant to PAE in order to provide a practical guide that includes the most common anatomical variants as well as to discuss the technical details related to each."}
{"_id": "327cbb1da2652b430a52171d510cf72235b890b6", "title": "Lock-free linked lists and skip lists", "text": "Lock-free shared data structures implement distributed objects without the use of mutual exclusion, thus providing robustness and reliability. We present a new lock-free implementation of singly-linked lists. We prove that the worst-case amortized cost of the operations on our linked lists is linear in the length of the list plus the contention, which is better than in previous lock-free implementations of this data structure. Our implementation uses backlinks that are set when a node is deleted so that concurrent operations visiting the deleted node can recover. To avoid performance problems that would arise from traversing long chains of backlink pointers, we introduce flag bits, which indicate that a deletion of the next node is underway. We then give a lock-free implementation of a skip list dictionary data structure that uses the new linked list algorithms to implement individual levels. Our algorithms use the single-word C&S synchronization primitive."}
{"_id": "65a8e97f46a44ce5c574747fffc35f9058aa364f", "title": "A study into the believability of animated characters in the context of bullying intervention", "text": "The VICTEC (Virtual ICT with Empathic Characters) project provides an opportunity to explore the use of animated characters in a virtual environment for educational issues such as bullying behaviour. Our research aim was to evaluate whether an early prototype of the VICTEC demonstrator could provide useful information about character and story believability, physical aspects of the characters and story comprehensibility. Results from an evaluation with 76 participants revealed high levels of bullying story believability and character conversation was rated as convincing and interesting. In contrast, character movement was poorly rated. Overall the results imply that poor physical aspects of characters do not have detrimental effects on story believability and interest levels with the demonstrator. It is concluded that even at this early design phase, the demonstrator provides a suitable means to explore how virtual environments in terms of character and storyline believability may assist in the development of systems to deal with the educational issue of bullying."}
{"_id": "280cd4a04cf7ecc36f84a7172483916a41403f5e", "title": "Multi-class AdaBoost \u2217", "text": "Boosting has been a very successful technique for solving the two-class classification problem. In going from two-class to multi-class classification, most algorithms have been restricted to reducing the multi-class classification problem to multiple two-class problems. In this paper, we develop a new algorithm that directly extends the AdaBoost algorithm to the multi-class case without reducing it to multiple two-class problems. We show that the proposed multi-class AdaBoost algorithm is equivalent to a forward stagewise additive modeling algorithm that minimizes a novel exponential loss for multi-class classification. Furthermore, we show that the exponential loss is a member of a class of Fisher-consistent loss functions for multi-class classification. As shown in the paper, the new algorithm is extremely easy to implement and is highly competitive in terms of misclassification error rate."}
{"_id": "3215a900e4fb9499c8904bfe662c59de042da67d", "title": "Predicting Movie Sales from Blogger Sentiment", "text": "The volume of discussion about a product in weblogs has recently been shown to correlate with the product\u2019s financial performance. In this paper, we study whether applying sentiment analysis methods to weblog data results in better correlation than volume only, in the domain of movies. Our main finding is that positive sentiment is indeed a better predictor for movie success when applied to a limited context around references to the movie in weblogs, posted prior to its release. If my film makes one more person miserable, I\u2019ve done my job."}
{"_id": "46a7464a8926241c8ed78b243ca0bf24253f8786", "title": "Early Prediction of Movie Box Office Success Based on Wikipedia Activity Big Data", "text": "Use of socially generated \"big data\" to access information about collective states of the minds in human societies has become a new paradigm in the emerging field of computational social science. A natural application of this would be the prediction of the society's reaction to a new product in the sense of popularity and adoption rate. However, bridging the gap between \"real time monitoring\" and \"early predicting\" remains a big challenge. Here we report on an endeavor to build a minimalistic predictive model for the financial success of movies based on collective activity data of online users. We show that the popularity of a movie can be predicted much before its release by measuring and analyzing the activity level of editors and viewers of the corresponding entry to the movie in Wikipedia, the well-known online encyclopedia."}
{"_id": "38d76b64705d193ee8017993f7fc4c6e8d4bdbc8", "title": "A Live-User Study of Opinionated Explanations for Recommender Systems", "text": "This paper describes an approach for generating rich and compelling explanations in recommender systems, based on opinions mined from user-generated reviews. The explanations highlight the features of a recommended item that matter most to the user and also relate them to other recommendation alternatives and the user's past activities to provide a context."}
{"_id": "2d81296e2894baaade499d9f1ed163f339943ddc", "title": "MO-SLAM: Multi object SLAM with run-time object discovery through duplicates", "text": "In this paper, we present MO-SLAM, a novel visual SLAM system that is capable of detecting duplicate objects in the scene during run-time without requiring an offline training stage to pre-populate a database of objects. Instead, we propose a novel method to detect landmarks that belong to duplicate objects. Further, we show how landmarks belonging to duplicate objects can be converted to first-order entities which generate additional constraints for optimizing the map. We evaluate the performance of MO-SLAM with extensive experiments on both synthetic and real data, where the experimental results verify the capabilities of MO-SLAM in detecting duplicate objects and using these constraints to improve the accuracy of the map."}
{"_id": "2172134ed38d28fb910879b39a13832c5fb7998b", "title": "Speed planning for solar-powered electric vehicles", "text": "Electric vehicles (EVs) are the trend for future transportation. The major obstacle is range anxiety due to poor availability of charging stations and long charging time. Solar-powered EVs, which mostly rely on solar energy, are free of charging limitations. However, the range anxiety problem is more severe due to the availability of sun light. For example, shadings of buildings or trees may cause a solar-powered EV to stop halfway in a trip. In this paper, we show that by optimally planning the speed on different road segments and thus balancing energy harvesting and consumption, we can enable a solar-powered EV to successfully reach the destination using the shortest travel time. The speed planning problem is essentially a constrained non-linear programming problem, which is generally difficult to solve. We have identified an optimality property that allows us to compute an optimal speed assignment for a partition of the path; then, a dynamic programming method is developed to efficiently compute the optimal speed assignment for the whole trip with significantly low computation overhead compared to the state-of-the-art non-linear programming solver. To evaluate the usability of the proposed method, we have also developed a solar-powered EV prototype. Experiments show that the predictions by the proposed technique match well with the data collected from the physical EV. Issues on practical implementation are also discussed."}
{"_id": "f3396763e2c3ec1e0c80fddad8b08177960ce34d", "title": "Increase Physical Fitness and Create Health Awareness through Exergames and Gamification - The Role of Individual Factors, Motivation and Acceptance", "text": "Demographic change and the aging population push health and welfare system to its limits. Increased physical fitness and increased awareness for health issues will help elderly to live independently for longer and will thereby reduce the costs in the health care system. Exergames seem to be a promising solution for promoting physical fitness. Still, there is little evidence under what conditions Exergames will be accepted and used by elderly. To investigate promoting and hindering factors we conducted a user study with a prototype of an Exergame. We contrasted young vs. elderly players and investigated the role of gamer types, personality factors and technical expertise on the performance within the game and changes in the attitude towards individual health after the game. Surprisingly, performance within the game is not affected by performance motivation but by gamer type. More importantly, a universal positive effect on perceived pain is detected after the Exergame"}
{"_id": "4e71af17f3b0ec59aa3a1c7ea3f2680a1c9d9f6f", "title": "A CNN-based segmentation model for segmenting foreground by a probability map", "text": "This paper proposes a CNN-based segmentation model to segment foreground from an image and a prior probability map. Our model is constructed based on the FCN model that we simply replace the original RGB-based three channel input layer by a four channel, i.e., RGB and prior probability map. We then train the model by constructing various image, prior probability maps and the groundtruths from the PASCAL VOC dataset, and finally obtain a CNN-based foreground segmentation model that is suitable for general images. Our proposed method is motivated by the observation that the classical graphcut algorithm using GMM for modeling the priors can not capture the semantic segmentation from the prior probability, and thus leads to low segmentation performance. Furthermore, the efficient FCN segmentation model is for specific objects rather than general objects. We therefore improve the graph-cut like foreground segmentation by extending FCN segmentation model. We verify the proposed model by various prior probability maps such as artifical maps, saliency maps, and discriminative maps. The ICoseg dataset that is different from the PASCAL Voc dataset is used for the verification. Experimental results demonstrates the fact that our method obviously outperforms the graphcut algorithms and FCN models."}
{"_id": "5906297bd4108376a032cb4c610d3e2926750d47", "title": "Clothes Co-Parsing Via Joint Image Segmentation and Labeling With Application to Clothing Retrieval", "text": "This paper aims at developing an integrated system for clothing co-parsing (CCP), in order to jointly parse a set of clothing images (unsegmented but annotated with tags) into semantic configurations. A novel data-driven system consisting of two phases of inference is proposed. The first phase, referred as \u201cimage cosegmentation,\u201d iterates to extract consistent regions on images and jointly refines the regions over all images by employing the exemplar-SVM technique [1]. In the second phase (i.e., \u201cregion colabeling\u201d), we construct a multiimage graphical model by taking the segmented regions as vertices, and incorporating several contexts of clothing configuration (e.g., item locations and mutual interactions). The joint label assignment can be solved using the efficient Graph Cuts algorithm. In addition to evaluate our framework on the Fashionista dataset [2], we construct a dataset called the SYSU-Clothes dataset consisting of 2098 high-resolution street fashion photos to demonstrate the performance of our system. We achieve 90.29%/88.23% segmentation accuracy and 65.52%/63.89% recognition rate on the Fashionista and the SYSU-Clothes datasets, respectively, which are superior compared with the previous methods. Furthermore, we apply our method on a challenging task, i.e., cross-domain clothing retrieval: given user photo depicting a clothing image, retrieving the same clothing items from online shopping stores based on the fine-grained parsing results."}
{"_id": "ea5fdb7f6c2e94d1a67bff622c566ddc66be19ab", "title": "Face recognition based on convolutional neural network and support vector machine", "text": "Face recognition is an important embodiment of human-computer interaction, which has been widely used in access control system, monitoring system and identity verification. However, since face images vary with expressions, ages, as well as poses of people and illumination conditions, the face images of the same sample might be different, which makes face recognition difficult. There are two main requirements in face recognition, the high recognition rate and less training time. In this paper, we combine Convolutional Neural Network (CNN) and Support Vector Machine (SVM) to recognize face images. CNN is used as a feature extractor to acquire remarkable features automatically. We first pre-train our CNN by ancillary data to get the updated weights, and then train the CNN by the target dataset to extract more hidden facial features. Finally we use SVM as our classifier instead of CNN to recognize all the classes. With the input of facial features extracted from CNN, SVM will recognize face images more accurately. In our experiments, some face images in the Casia-Webfaces database are used for pre-training, and FERET database is used for training and testing. The results in experiments demonstrate the efficiency with high recognition rate and less training time."}
{"_id": "45e3d69e23bf3b3583a8a71022a8e161a37e9571", "title": "Progress on a cognitive-motivational-relational theory of emotion.", "text": "The 2 main tasks of this article are 1st, to examine what a theory of emotion must do and basic issues that it must address. These include definitional issues, whether or not physiological activity should be a defining attribute, categorical versus dimensional strategies, the reconciliation of biological universals with sociocultural sources of variability, and a classification of the emotions. The 2nd main task is to apply an analysis of appraisal patterns and the core relational themes that they produce to a number of commonly identified emotions. Anger, anxiety, sadness, and pride (to include 1 positive emotion) are used as illustrations. The purpose is to show the capability of a cognitive-motivational-relational theory to explain and predict the emotions. The role of coping in emotion is also discussed, and the article ends with a response to criticisms of a phenomenological, folk-theory outlook."}
{"_id": "6ee9fcef938389c4cc48fd3b1875d680161ffe0a", "title": "A Survey of Logical Models for OLAP Databases", "text": "In this paper, we present different proposals for multidimensional data cubes, which are the basic logical model for OLAP applications. We have grouped the work in the field in two categories: commercial tools (presented along with terminology and standards) and academic efforts. We further divide the academic efforts in two subcategories: the relational model extensions and the cube-oriented approaches. Finally, we attempt a comparative analysis of the various efforts."}
{"_id": "a9f1838de25c17a38fae9738d137d7c7644b3be1", "title": "Who Is Going to Get Hurt? Predicting Injuries in Professional Soccer", "text": "Injury prevention has a fundamental role in professional soccer due to the high cost of recovery for players and the strong influence of injuries on a club\u2019s performance. In this paper we provide a predictive model to prevent injuries of soccer players using a multidimensional approach based on GPS measurements and machine learning. In an evolutive scenario, where a soccer club starts collecting the data for the first time and updates the predictive model as the season goes by, our approach can detect around half of the injuries, allowing the soccer club to save 70% of a season\u2019s economic costs related to injuries. The proposed approach can be a valuable support for coaches, helping the soccer club to reduce injury incidence, save money and increase team performance."}
{"_id": "cbf4040cb14a019ff3556fad5c455e99737f169f", "title": "Answering Schr\u00f6dinger's question: A free-energy formulation", "text": "The free-energy principle (FEP) is a formal model of neuronal processes that is widely recognised in neuroscience as a unifying theory of the brain and biobehaviour. More recently, however, it has been extended beyond the brain to explain the dynamics of living systems, and their unique capacity to avoid decay. The aim of this review is to synthesise these advances with a meta-theoretical ontology of biological systems called variational neuroethology, which integrates the FEP with Tinbergen's four research questions to explain biological systems across spatial and temporal scales. We exemplify this framework by applying it to Homo sapiens, before translating variational neuroethology into a systematic research heuristic that supplies the biological, cognitive, and social sciences with a computationally tractable guide to discovery."}
{"_id": "ab71da348979c50d33700bc2f6ddcf25b4c8cfd0", "title": "Reconnaissance with ultra wideband UHF synthetic aperture radar", "text": ""}
{"_id": "ecb4d6621662f6c8eecfe9aab366d36146f0a6da", "title": "Unseen Noise Estimation Using Separable Deep Auto Encoder for Speech Enhancement", "text": "Unseen noise estimation is a key yet challenging step to make a speech enhancement algorithm work in adverse environments. At worst, the only prior knowledge we know about the encountered noise is that it is different from the involved speech. Therefore, by subtracting the components which cannot be adequately represented by a well defined speech model, the noises can be estimated and removed. Given the good performance of deep learning in signal representation, a deep auto encoder (DAE) is employed in this work for accurately modeling the clean speech spectrum. In the subsequent stage of speech enhancement, an extra DAE is introduced to represent the residual part obtained by subtracting the estimated clean speech spectrum (by using the pre-trained DAE) from the noisy speech spectrum. By adjusting the estimated clean speech spectrum and the unknown parameters of the noise DAE, one can reach a stationary point to minimize the total reconstruction error of the noisy speech spectrum. The enhanced speech signal is thus obtained by transforming the estimated clean speech spectrum back into time domain. The above proposed technique is called separable deep auto encoder (SDAE). Given the under-determined nature of the above optimization problem, the clean speech reconstruction is confined in the convex hull spanned by a pre-trained speech dictionary. New learning algorithms are investigated to respect the non-negativity of the parameters in the SDAE. Experimental results on TIMIT with 20 noise types at various noise levels demonstrate the superiority of the proposed method over the conventional baselines."}
{"_id": "488fc01e8663d67de2cb76b33167a60906b81eba", "title": "A variable splitting augmented Lagrangian approach to linear spectral unmixing", "text": "This paper presents a new linear hyperspectral unmixing method of the minimum volume class, termed simplex identification via split augmented Lagrangian (SISAL). Following Craig's seminal ideas, hyperspectral linear unmixing amounts to finding the minimum volume simplex containing the hyperspectral vectors. This is a nonconvex optimization problem with convex constraints. In the proposed approach, the positivity constraints, forcing the spectral vectors to belong to the convex hull of the endmember signatures, are replaced by soft constraints. The obtained problem is solved by a sequence of augmented Lagrangian optimizations. The resulting algorithm is very fast and able so solve problems far beyond the reach of the current state-of-the art algorithms. The effectiveness of SISAL is illustrated with simulated data."}
{"_id": "0d21449cba8735032af2f6dfc46e18641e6fa3d3", "title": "Deep 3D Convolutional Encoder Networks With Shortcuts for Multiscale Feature Integration Applied to Multiple Sclerosis Lesion Segmentation", "text": "We propose a novel segmentation approach based on deep 3D convolutional encoder networks with shortcut connections and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network that consists of two interconnected pathways, a convolutional pathway, which learns increasingly more abstract and higher-level image features, and a deconvolutional pathway, which predicts the final segmentation at the voxel level. The joint training of the feature extraction and prediction pathways allows for the automatic learning of features at different scales that are optimized for accuracy for any given combination of image types and segmentation task. In addition, shortcut connections between the two pathways allow high- and low-level features to be integrated, which enables the segmentation of lesions across a wide range of sizes. We have evaluated our method on two publicly available data sets (MICCAI 2008 and ISBI 2015 challenges) with the results showing that our method performs comparably to the top-ranked state-of-the-art methods, even when only relatively small data sets are available for training. In addition, we have compared our method with five freely available and widely used MS lesion segmentation methods (EMS, LST-LPA, LST-LGA, Lesion-TOADS, and SLS) on a large data set from an MS clinical trial. The results show that our method consistently outperforms these other methods across a wide range of lesion sizes."}
{"_id": "5fecf8411f1063a08af028effae9d3d208ef1101", "title": "How to put probabilities on homographies", "text": "We present a family of \"normal\" distributions over a matrix group together with a simple method for estimating its parameters. In particular, the mean of a set of elements can be calculated. The approach is applied to planar projective homographies, showing that using priors defined in this way improves object recognition."}
{"_id": "6d74c216d8246c2a356b00426af715102af2a172", "title": "From Skimming to Reading: A Two-stage Examination Model for Web Search", "text": "User's examination of search results is a key concept involved in all the click models. However, most studies assumed that eye fixation means examination and no further study has been carried out to better understand user's examination behavior. In this study, we design an experimental search engine to collect both the user's feedback on their examinations and the eye-tracking/click-through data. To our surprise, a large proportion (45.8%) of the results fixated by users are not recognized as being \"read\". Looking into the tracking data, we found that before the user actually \"reads\" the result, there is often a \"skimming\" step in which the user quickly looks at the result without reading it. We thus propose a two-stage examination model which composes of a first \"from skimming to reading\" stage (Stage 1) and a second \"from reading to clicking\" stage (Stage 2). We found that the biases (e.g. position bias, domain bias, attractiveness bias) considered in many studies impact in different ways in Stage 1 and Stage 2, which suggests that users make judgments according to different signals in different stages. We also show that the two-stage examination behaviors can be predicted with mouse movement behavior, which can be collected at large scale. Relevance estimation with the two-stage examination model also outperforms that with a single-stage examination model. This study shows that the user's examination of search results is a complex cognitive process that needs to be investigated in greater depth and this may have a significant impact on Web search."}
{"_id": "638867fba638d4088b83e58215ac7682f1c55699", "title": "Artificial Intelligence Technique Applied to Intrusion Detection", "text": "Communication network is facilitated with different protocol .Each protocol supported to increase the network performance in a secured manner. In communication process, user\u2019s connectivity, violations of policy on access of information are handles through intrusion. Intrusion prevention is the process of performing intrusion detection and attempting to stop detected possible incidents. It focused on identifying possible incidents, logging information about them, attempting to stop them, and reporting them to security administrators. However, organizations use Intrusion detection and prevention system (IDPS) for other purposes, such as identifying problems with security policies, documenting existing threats, and determining individuals from violating security policies. Communication architecture is built up on IP.Internet Control Message Protocol (ICMP ) protocol is tightly integrated with IP. ICMP messages, delivered in IP packets, are used for out-of-band messages related to network operation or mis-operation.In this paper describes about HDLC , ICMP protocol sequences which is used to detect the intrusion on hybrid network and its attributes are recommend the standardized protocol for the intrusion detection process. This standardization protocol compare the previous part of the research of HDLC and ICMP protocols."}
{"_id": "ab81be51a3a96237ea24375d7b4fd0c244e5ffa8", "title": "Enhanced LSB technique for audio steganography", "text": "The idea of this paper is to invent a new strategy in Steganography to get the minimum effect in audio which is used to hide data into it. \u201cProgress always involves risk\u201d Fredrick Wilcox observed that technological progress of computer science and the Internet altered the way we lived, and will continue to cast our life[1] . In this paper we have presented a Steganography method of embedding text data in an audio file. The basic approach behind this paper is to provide a good, well-organized method for hiding the data and sent to the destination in safer manner. In the proposed technique first the audio file is sampled and then appropriate bit is modified. In selected sample one bit is modified at least significant bit .The remaining bits may be used but it may be cause noise. We have attempted to provide an overview, theoretical framework about audio Steganography techniques and a novel approach to hide data in an audio using least significant bit (LSB)."}
{"_id": "85d31ba076c2620fbea3c86595fb0ff6c44b1efa", "title": "A stable-isotope dilution GC-MS approach for the analysis of DFRC (derivatization followed by reductive cleavage) monomers from low-lignin plant materials.", "text": "The derivatization followed by reductive cleavage (DFRC) method is a well-established tool to characterize the lignin composition of plant materials. However, the application of the original procedure, especially the chromatographic determination of the DFRC monomers, is problematic for low-lignin foods. To overcome these problems a modified sample cleanup and a stable-isotope dilution approach were developed and validated. To quantitate the diacetylated DFRC monomers, their corresponding hexadeuterated analogs were synthesized and used as internal standards. By using the selected-ion monitoring mode, matrix-associated interferences can be minimized resulting in higher selectivity and sensitivity. The modified method was applied to four low-lignin samples. Lignin from carrot fibers was classified as guaiacyl-rich whereas the lignins from radish, pear, and asparagus fibers where classified as balanced lignins (guaiacyl/syringyl ratio=1-2)."}
{"_id": "8eb62f0e3528f39e93f4073226c044235b45dce8", "title": "Action Design Research and Visualization Design", "text": "In applied visualization research, artifacts are shaped by a series of small design decisions, many of which are evaluated quickly and informally via methods that often go unreported and unverified. Such design decisions are influenced not only by visualization theory, but also by the people and context of the research. While existing applied visualization models support a level of reliability throughout the design process, they fail to explicitly account for the influence of the research context in shaping the resulting design artifacts. In this work, we look to action design research (ADR) for insight into addressing this issue. In particular, ADR offers a framework along with a set of guiding principles for navigating and capitalizing on the disruptive, subjective, human-centered nature of applied design work, while aiming to ensure reliability of the process and design, and emphasizing opportunities for conducting research. We explore the utility of ADR in increasing the reliability of applied visualization design research by: describing ADR in the language and constructs developed within the visualization community; comparing ADR to existing visualization methodologies; and analyzing a recent design study retrospectively through the lens of ADR's framework and principles."}
{"_id": "6cc4a3d0d8a278d30e05418afeaf6b8e5d04d3d0", "title": "Econometric Modelling of Markov-Switching Vector Autoregressions using MSVAR for Ox", "text": ""}
{"_id": "2050e3ecf3919b05aacf53ab6bed8c004c2b6872", "title": "An X-band to Ka-band SPDT switch using 200 nm SiGe HBTs", "text": "This paper presents the design and measured performance of an X-band to Ka-band SiGe HBT SPDT switch. The proposed SPDT switch was fabricated using a 200 nm, 150 GHz peak fT silicon-germanium (SiGe) heterojunction bipolar transistor (HBT) BiCMOS technology. The SPDT switch design uses diode-connected SiGe HBTs in a series-shunt configuration to improve the switch bandwidth and isolation. Between 8 and 40 GHz, this SPDT switch achieves an insertion loss of less than 4.3 dB, an isolation of more than 20.3 dB, and a return loss of more than 9 dB."}
{"_id": "a2eb1abd58e1554dc6bac5a8ea2f9876e9f4d36b", "title": "Measuring dimensions of intergenerational contact: factor analysis of the Queen's University Scale.", "text": "OBJECTIVES\nIntergenerational contact has been linked to a range of health outcomes, including greater engagement and lower depression. Measures of contact are limited. Informed by Allport's contact theory, the Queen's University Scale consists of items rating contact with elders. We administered the survey to a young adult sample (N = 606) to identify factors that may optimize intervention programming and enhance young persons' health as they age.\n\n\nMETHODS\nWe conducted exploratory factor analysis (EFA) in the structural equation modeling framework and then confirmatory factor analysis with items pertaining to the general elder population.\n\n\nRESULTS\nEFAs did not yield an adequate factor structure. We tested two alternative confirmatory models based on findings from the EFA. Neither a second-order model nor a first-order model allowing double loadings and correlated errors proved adequate.\n\n\nCONCLUSION\nDifficulty finding an adequate factor solution reflects challenges to measuring intergenerational contact with this scale. Items reflect relevant topics but subscale models are limited in interpretability. Knox and colleagues' analyses led them to recommend a brief, global scale, but we did not find empirical support for such a measure. Next steps include development and testing of a reliable, valid scale measuring dimensions of contact as perceived by both youth and elders."}
{"_id": "28215a7294a621b3ff0098b3c7b7a4760f9e3e59", "title": "End of Moore \u2019 s law : thermal ( noise ) death of integration in micro and nano electronics", "text": "The exponential growth of memory size and clock frequency in computers has a great impact on everyday life. The growth is empirically described by Moore\u2019s law of miniaturization. Physical limitations of this growth would have a serious impact on technology and economy. A thermodynamical effect, the increasing thermal noise voltage (Johnson\u2013Nyquist noise) on decreasing characteristic capacitances, together with the constrain of using lower supply voltages to keep power dissipation manageable on the contrary of increasing clock frequency, has the potential to break abruptly Moore\u2019s law within 6\u20138 years, or earlier. \uf6d9 2002 Elsevier Science B.V. All rights reserved."}
{"_id": "5b63a03a937507838d4d85623d72fc1d1a852cc5", "title": "Low-power 4-2 and 5-2 compressors", "text": "This paper explores various low power higher order compressors such as 4-2 and 5-2 compressor units. These compressors are building blocks for binary multipliers. Various circuit architectures for 4-2 compressors are compared with respect to their delay and power consumption. The different circuits are simulated using HSPICE. A new circuit for a 5-2 compressor is then presented which is 12% faster and consumes 37% less power."}
{"_id": "617d424beef3c633d55de45f9009bbc649969877", "title": "Analysis of jitter due to power-supply noise in phase-locked loops", "text": "Phase-locked loops (PLL) in RF and mixed signal VLSI circuits experience supply noise which translates to a timing jitter. In this paper an analysis of the timing jitter due to the noise on the power supply rails is presented. Stochastic models of the power supply noise in VLSI circuits for different values of on-chip decoupling capacitances are presented first. This is followed by calculation of the phase noise of the voltage-controlled oscillator (VCO) in terms of the statistical properties of supply noise. Finally the timing jitter of PLL is predicted in response to the VCO phase noise. A PLL circuit has been designed in 0.35\u03bc CMOS process, and our mathematical model was applied to determine the timing jitter. Experimental results prove the accuracy of the predicted model."}
{"_id": "86b732702d3db71c7865d4e761c8aaa1d6930caa", "title": "Performance analysis of low-power 1-bit CMOS full adder cells", "text": "A performance analysis of 1-bit full-adder cell is presented. The adder cell is anatomized into smaller modules. The modules are studied and evaluated extensively. Several designs of each of them are developed, prototyped, simulated and analyzed. Twenty different 1-bit full-adder cells are constructed (most of them are novel circuits) by connecting combinations of different designs of these modules. Each of these cells exhibits different power consumption, speed, area, and driving capability figures. Two realistic circuit structures that include adder cells are used for simulation. A library of full-adder cells is developed and presented to the circuit designers to pick the full-adder cell that satisfies their specific applications."}
{"_id": "ef153bd9092f142b3845568937145fdb203fd978", "title": "Harvesting Big Data to Enhance Supply Chain Innovation Capabilities : An Analytic Infrastructure Based on Deduction Graph", "text": "Today, firms can access to big data (tweets, videos, click streams, and other unstructured sources) to extract new ideas or understanding about their products, customers, and markets. Thus, managers increasingly view data as an important driver of innovation and a significant source of value creation and competitive advantage. To get the most out of the big data (in combination with a firm\u2019s existing data), a more sophisticated way of handling, managing, analysing and interpreting data is necessary. However, there is a lack of data analytics techniques to assist firms to capture the potential of innovation afforded by data and to gain competitive advantage. This research aims to address this gap by developing and testing an analytic infrastructure based on the deduction graph technique. The proposed approach provides an analytic infrastructure for firms to incorporate their own competence sets with other firms. Case studies results indicate that the proposed data analytic approach enable firms to utilise big data to gain competitive advantage by enhancing their supply chain innovation capabilities."}
{"_id": "0bd6a7a179f1c6c2b59bf1cc9227703a137767d9", "title": "Capturing the mood: facebook and face-to-face encounters in the workplace", "text": "What makes people feel happy, engaged and challenged at work? We conducted an in situ study of Facebook and face-to-face interactions examining how they influence people's mood in the workplace. Thirty-two participants in an organization were each observed for five days in their natural work environment using automated data capture and experience sampling. Our results show that online and offline social interactions are associated with different moods, suggesting that they serve different purposes at work. Face-to-face interactions are associated with a positive mood throughout the day whereas Facebook use and engagement in work contribute to a positive feeling at the end of the day. Email use is associated with negative affect and along with multitasking, is associated with a feeling of engagement and challenge throughout the day. Our findings provide initial evidence of how online and offline interactions affect workplace mood, and could inform practices to improve employee morale."}
{"_id": "c3b1c33848e93ad6a853a62412494bba36aa7f2a", "title": "A Survey of Non-conventional Techniques for Low-voltage Low-power Analog Circuit Design", "text": "Designing integrated circuits able to work under low-voltage (LV) low-power (LP) condition is currently undergoing a very considerable boom. Reducing voltage supply and power consumption of integrated circuits is crucial factor since in general it ensures the device reliability, prevents overheating of the circuits and in particular prolongs the operation period for battery powered devices. Recently, non-conventional techniques i.e. bulkdriven (BD), floating-gate (FG) and quasi-floating-gate (QFG) techniques have been proposed as powerful ways to reduce the design complexity and push the voltage supply towards threshold voltage of the MOS transistors (MOST). Therefore, this paper presents the operation principle, the advantages and disadvantages of each of these techniques, enabling circuit designers to choose the proper design technique based on application requirements. As an example of application three operational transconductance amplifiers (OTA) based on these non-conventional techniques are presented, the voltage supply is only \u00b10.4 V and the power consumption is 23.5 \u03bcW. PSpice simulation results using the 0.18 \u03bcm CMOS technology from TSMC are included to verify the design functionality and correspondence with theory."}
{"_id": "fff5273db715623150986d66ec370188a630b4a5", "title": "Listening to music and physiological and psychological functioning: the mediating role of emotion regulation and stress reactivity.", "text": "Music listening has been suggested to have short-term beneficial effects. The aim of this study was to investigate the association and potential mediating mechanisms between various aspects of habitual music-listening behaviour and physiological and psychological functioning. An internet-based survey was conducted in university students, measuring habitual music-listening behaviour, emotion regulation, stress reactivity, as well as physiological and psychological functioning. A total of 1230 individuals (mean\u2009=\u200924.89\u2009\u00b1\u20095.34 years, 55.3% women) completed the questionnaire. Quantitative aspects of habitual music-listening behaviour, i.e. average duration of music listening and subjective relevance of music, were not associated with physiological and psychological functioning. In contrast, qualitative aspects, i.e. reasons for listening (especially 'reducing loneliness and aggression', and 'arousing or intensifying specific emotions') were significantly related to physiological and psychological functioning (all p\u2009=\u20090.001). These direct effects were mediated by distress-augmenting emotion regulation and individual stress reactivity. The habitual music-listening behaviour appears to be a multifaceted behaviour that is further influenced by dispositions that are usually not related to music listening. Consequently, habitual music-listening behaviour is not obviously linked to physiological and psychological functioning."}
{"_id": "5b37a24a31e4f83d056fa1da4048ea0d4dd6c5e1", "title": "Product type and consumers' perception of online consumer reviews", "text": "Consumers hesitate to buy experience products online because it is hard to get enough information about experience products via the Internet. Online consumer reviews may change that, as they offer consumers indirect experiences about dominant attributes of experience products, transforming them into search products. When consumers are exposed to an online consumer review, it should be noted that there are different kinds of review sources. This study investigates the effects of review source and product type on consumers\u2019 perception of a review. The result of the online experiment suggests that product type canmoderate consumers\u2019 perceived credibility of a review from different review sources, and the major findings are: (1) consumers are more influenced by a review for an experience product than for a search product when the review comes from consumer-developed review sites, and (2) a review from an online community is perceived to be the most credible for consumers seeking information about an experience product. The findings provide managerial implications for marketers as to how they can better manage online consumer reviews."}
{"_id": "9ae85a3588fce6e500ddef14313e29268b6ce775", "title": "High-Throughput and Language-Agnostic Entity Disambiguation and Linking on User Generated Data", "text": "The Entity Disambiguation and Linking (EDL) task matches entity mentions in text to a unique Knowledge Base (KB) identifier such as a Wikipedia or Freebase id. It plays a critical role in the construction of a high quality information network, and can be further leveraged for a variety of information retrieval and NLP tasks such as text categorization and document tagging. EDL is a complex and challenging problem due to ambiguity of the mentions and real world text being multi-lingual. Moreover, EDL systems need to have high throughput and should be lightweight in order to scale to large datasets and run on off-the-shelf machines. More importantly, these systems need to be able to extract and disambiguate dense annotations from the data in order to enable an Information Retrieval or Extraction task running on the data to be more efficient and accurate. In order to address all these challenges, we present the Lithium EDL system and algorithm a high-throughput, lightweight, language-agnostic EDL system that extracts and correctly disambiguates 75% more entities than state-of-the-art EDL systems and is significantly faster than them."}
{"_id": "a7141592d15578bff7bf3b694900c929d4e6a5d8", "title": "A hybrid factor analysis and probabilistic PCA-based system for dictionary learning and encoding for robust speaker recognition", "text": "Probabilistic Principal Component Analysis (PPCA) based low dimensional representation of speech utterances is found to be useful for speaker recognition. Although, performance of the FA (Factor Analysis)-based total variability space model is found to be superior, hyperparameter estimation procedure in PPCA is computationally efficient. In this work, recent insight on the FA-based approach as a combination of dictionary learning and encoding is explored to use its encoding procedure in the PPCA framework. With the use of an alternate encoding technique on dictionaries learnt using PPCA, performance of state-of-the-art FA-based i-vector approach is matched by using the proposed procedure. A speed up of 4x is obtained while estimating the hyperparameter at the cost of 0.51% deterioration in performance in terms of the Equal Error Rate (EER) in the worst case. Compared to the conventional PPCA model, absolute improvements of 2.1% and 2.8% are observed on two telephone conditions of NIST 2008 SRE database. Using Canonical Correlational Analysis, it is shown that the i-vectors extracted from the conventional FA model and the proposed approach are highly correlated."}
{"_id": "70e8634a46013c2eb4140ed3917743910f90657d", "title": "Remarkable archaeal diversity detected in a Yellowstone National Park hot spring environment.", "text": "Of the three primary phylogenetic domains--Archaea (archaebacteria), Bacteria (eubacteria), and Eucarya (eukaryotes)--Archaea is the least understood in terms of its diversity, physiologies, and ecological panorama. Although many species of Crenarchaeota (one of the two recognized archaeal kingdoms sensu Woese [Woese, C. R., Kandler, O. & Wheelis, M. L. (1990) Proc. Natl. Acad. Sci. USA 87, 4576-4579]) have been isolated, they constitute a relatively tight-knit cluster of lineages in phylogenetic analyses of rRNA sequences. It seemed possible that this limited diversity is merely apparent and reflects only a failure to culture organisms, not their absence. We report here phylogenetic characterization of many archaeal small subunit rRNA gene sequences obtained by polymerase chain reaction amplification of mixed population DNA extracted directly from sediment of a hot spring in Yellowstone National Park. This approach obviates the need for cultivation to identify organisms. The analyses document the existence not only of species belonging to well-characterized crenarchaeal genera or families but also of crenarchaeal species for which no close relatives have so far been found. The large number of distinct archaeal sequence types retrieved from this single hot spring was unexpected and demonstrates that Crenarchaeota is a much more diverse group than was previously suspected. The results have impact on our concepts of the phylogenetic organization of Archaea."}
{"_id": "2539fad11485821d2bfd573dbabe2f6802d24e07", "title": "Interfacing industrial robots using Realtime Primitives", "text": "Today, most industrial robots are interfaced using text-based programming languages. These languages offer the possibility to declare robotic-specific data types, to specify simple motions, and to interact with tools and sensors via I/O operations. While tailored to the underlying robot controller, they usually only offer a fixed and controller-specific set of possible instructions. The specification of complex motions, the synchronization of cooperating robots and the advanced use of sensors is often very difficult or not even feasible. To overcome these limitations, this paper presents a generic and extensible interface for industrial robots, the Realtime Primitives Interface, as part of a larger software architecture. It allows a flexible specification of complex control instructions and can facilitate the development of sustainable robot controllers. The advantages of this approach are illustrated with several examples."}
{"_id": "b4702ebf71b74023d9769677aff2b2f30900a6cf", "title": "Adaptive Quantization for Deep Neural Network", "text": "In recent years Deep Neural Networks (DNNs) have been rapidly developed in various applications, together with increasingly complex architectures. The performance gain of these DNNs generally comes with high computational costs and large memory consumption, which may not be affordable for mobile platforms. Deep model quantization can be used for reducing the computation and memory costs of DNNs, and deploying complex DNNs on mobile equipment. In this work, we propose an optimization framework for deep model quantization. First, we propose a measurement to estimate the effect of parameter quantization errors in individual layers on the overall model prediction accuracy. Then, we propose an optimization process based on this measurement for finding optimal quantization bit-width for each layer. This is the first work that theoretically analyse the relationship between parameter quantization errors of individual layers and model accuracy. Our new quantization algorithm outperforms previous quantization optimization methods, and achieves 20-40% higher compression rate compared to equal bit-width quantization at the same model prediction accuracy."}
{"_id": "a494042b43e6a15b23da1b6d3e422b5a932f0aa4", "title": "Toroidal skin drive for snake robot locomotion", "text": "Small robots have the potential to access confined spaces where humans cannot go. However, the mobility of wheeled and tracked systems is severely limited in cluttered environments. Snake robots using biologically inspired gaits for locomotion can provide better access in many situations, but are slow and can easily snag. This paper introduces an alternative approach to snake robot locomotion, in which the entire surface of the robot provides continuous propulsive force to significantly improve speed and mobility in many environments."}
{"_id": "fa543fa269762e795082f51625b91d7860793d90", "title": "X-band 200W AlGaN/GaN HEMT for high power application", "text": "A 200 Watts GaN high electron mobility transistor (HEMT) has been developed for X-band applications. The device consists of four dies of 0.35 um-gate GaN HEMT of 16 mm gate periphery together with input and output 2-stage impedance transformers assembled into a low thermal resistance package. The developed GaN HEMT provides 204W output power and 12dB small signal gain at 9.3 GHz with power added efficiency of 32% under pulse condition at a duty of 10% with a pulse width of 100 usec."}
{"_id": "a8c88986ef5e223261e4689040975a2ccff24017", "title": "Implementation of Sugeno FIS in model reference adaptive system adaptation scheme for speed sensorless control of PMSM", "text": "Model reference adaptive system (MRAS) is one of the popular methods to observe the speed and the angle information of the Permanent magnet synchronous motor (PMSM). This paper proposes a new adaptation scheme for MRAS to replace the conventional PI controller based on Popov Hyperstability theorem. In this project, the speed of PMSM is controlled by using field oriented control (FOC) method and the MRAS is used to observe the speed and the rotor position of the motor. The reference model for the MRAS is the PMSM itself while the adjustable model is the current model in the rotor reference frame. The Sugeno Fuzzy Logic Inference System (FIS) is implemented in the adaptation scheme in order to tune the error between the reference and the adjustable model. The new proposed method shows it is capable to track the motor speed efficiently at wide range of operating speeds."}
{"_id": "49472d1bd25a3cf8426ea86aac29d8159a825728", "title": "Analysis of Oblique Aerial Images for Land Cover and Point Cloud Classification in an Urban Environment", "text": "In addition to aerial imagery, point clouds are important remote sensing data in urban environment studies. It is essential to extract semantic information from both images and point clouds for such purposes; thus, this study aims to automatically classify 3-D point clouds generated using oblique aerial imagery (OAI)/vertical aerial imagery (VAI) into various urban object classes, such as roof, facade, road, tree, and grass. A multicamera airborne imaging system that can simultaneously acquire VAI and OAI is suggested. The acquired small-format images contain only three RGB spectral bands and are used to generate photogrammetric point clouds through a multiview-stereo dense matching technique. To assign each 3-D point cloud to a corresponding urban object class, we first analyzed the original OAI through object-based image analyses. A rule-based hierarchical semantic classification scheme that utilizes spectral information and geometry- and topology-related features was developed, in which the object height and gradient features were derived from the photogrammetric point clouds to assist in the detection of elevated objects, particularly for the roof and facade. Finally, the photogrammetric point clouds were classified into the aforementioned five classes. The classification accuracy was assessed on the image space, and four experimental results showed that the overall accuracy is between 82.47% and 91.8%. In addition, visual and consistency analyses were performed to demonstrate the proposed classification scheme's feasibility, transferability, and reliability, particularly for distinguishing elevated objects from OAI, which has a severe occlusion effect, image-scale variation, and ambiguous spectral characteristics."}
{"_id": "cfb353785d6eb6049e772655553593d95fa090a3", "title": "Exploring the Placement and Design of Word-Scale Visualizations", "text": "We present an exploration and a design space that characterize the usage and placement of word-scale visualizations within text documents. Word-scale visualizations are a more general version of sparklines-small, word-sized data graphics that allow meta-information to be visually presented in-line with document text. In accordance with Edward Tufte's definition, sparklines are traditionally placed directly before or after words in the text. We describe alternative placements that permit a wider range of word-scale graphics and more flexible integration with text layouts. These alternative placements include positioning visualizations between lines, within additional vertical and horizontal space in the document, and as interactive overlays on top of the text. Each strategy changes the dimensions of the space available to display the visualizations, as well as the degree to which the text must be adjusted or reflowed to accommodate them. We provide an illustrated design space of placement options for word-scale visualizations and identify six important variables that control the placement of the graphics and the level of disruption of the source text. We also contribute a quantitative analysis that highlights the effect of different placements on readability and text disruption. Finally, we use this analysis to propose guidelines to support the design and placement of word-scale visualizations."}
{"_id": "ff206b8619bc9fa1587c431b329d2fcfb94eb47c", "title": "A deep learning-based method for relative location prediction in CT scan images", "text": "Relative location prediction in computed tomography (CT) scan images is a challenging problem. In this paper, a regression model based on one-dimensional convolutional neural networks is proposed to determine the relative location of a CT scan image both robustly and precisely. A public dataset is employed to validate the performance of the study\u2019s proposed method using a 5-fold cross validation. Experimental results demonstrate an excellent performance of the proposed model when compared with the state-of-the-art techniques, achieving a median absolute error of 1.04 cm and mean absolute error of 1.69 cm."}
{"_id": "efc9001f54de9c3e984d62fa7e0e911f43eda266", "title": "Parametric Shape-from-Shading by Radial Basis Functions", "text": "In this paper, we present a new method of shape from shading by using radial basis functions to parameterize the object depth. The radial basis functions are deformed by adjusting their centers, widths, and weights such that the intensity errors are minimized. The initial centers and widths are arranged hierarchically to speed up convergence and to stabilize the solution. Although the smoothness constraint is used, it can be eventually dropped out without causing instabilities in the solution. An important feature of our parametric shape-from-shading method is that it offers a unified framework for integration of multiple sensory information. We show that knowledge about surface depth and/or surface normals anywhere in the image can be easily incorporated into the shape from shading process. It is further demonstrated that even qualitative knowledge can be used in shape from shading to improve 3D reconstruction. Experimental comparisons of our method with several existing ones are made by using both synthetic and real images. Results show that our solution is more accurate than the others."}
{"_id": "a9ad85feef3ee89492631245a29621454e3d798a", "title": "Practical Deep Stereo (PDS): Toward applications-friendly deep stereo matching", "text": "End-to-end deep-learning networks recently demonstrated extremely good performance for stereo matching. However, existing networks are difficult to use for practical applications since (1) they are memory-hungry and unable to process even modest-size images, (2) they have to be trained for a given disparity range. The Practical Deep Stereo (PDS) network that we propose addresses both issues: First, its architecture relies on novel bottleneck modules that drastically reduce the memory footprint in inference, and additional design choices allow to handle greater image size during training. This results in a model that leverages large image context to resolve matching ambiguities. Second, a novel sub-pixel crossentropy loss combined with a MAP estimator make this network less sensitive to ambiguous matches, and applicable to any disparity range without re-training. We compare PDS to state-of-the-art methods published over the recent months, and demonstrate its superior performance on FlyingThings3D and KITTI sets."}
{"_id": "6180a8a082c3d0e85dcb9cec3677923ff7633bb9", "title": "Digital Infrastructures : The Missing IS Research Agenda", "text": "S the inauguration of information systems research (ISR) two decades ago, the information systems (IS) field\u2019s attention has moved beyond administrative systems and individual tools. Millions of users log onto Facebook, download iPhone applications, and use mobile services to create decentralized work organizations. Understanding these new dynamics will necessitate the field paying attention to digital infrastructures as a category of IT artifacts. A state-of-the-art review of the literature reveals a growing interest in digital infrastructures but also confirms that the field has yet to put infrastructure at the centre of its research endeavor. To assist this shift we propose three new directions for IS research: (1) theories of the nature of digital infrastructure as a separate type of IT artifact, sui generis; (2) digital infrastructures as relational constructs shaping all traditional IS research areas; (3) paradoxes of change and control as salient IS phenomena. We conclude with suggestions for how to study longitudinal, large-scale sociotechnical phenomena while striving to remain attentive to the limitations of the traditional categories that have guided IS research."}
{"_id": "247da68527908544124a1f35b3be527ee0e473ac", "title": "Running head: Working memory in L2 input processing The role of working memory in processing L2 input: Insights from eye-tracking", "text": "Our study investigated how attention paid to a target syntactic construction causative had is related the storage capacity and attention regulation function of working memory (WM) and how these WM abilities moderate the change of knowledge of the target construction in different input conditions. 80 Sri Lankan learners of English were exposed to examples of the target construction in explicit and implicit learning conditions and their eye movements were tracked as they read the input. Correlational and multiple regression analyses indicated a very strong relationship between WM abilities and gains in the knowledge of the target construction. WM scores were closely associated with gains in receptive knowledge in all input conditions, but they had a weaker link to the improvement of productive knowledge in the implicit learning conditions. The amount of attention paid to input was also strongly related to WM abilities."}
{"_id": "c65945c08b7fd77ffd2c53369e8928699c3993e7", "title": "Comparing Alzheimer\u2019s and Parkinson\u2019s diseases networks using graph communities structure", "text": "Recent advances in large datasets analysis offer new insights to modern biology allowing system-level investigation of pathologies. Here we describe a novel computational method that exploits the ever-growing amount of \u201comics\u201d data to shed light on Alzheimer\u2019s and Parkinson\u2019s diseases. Neurological disorders exhibit a huge number of molecular alterations due to a complex interplay between genetic and environmental factors. Classical reductionist approaches are focused on a few elements, providing a narrow overview of the etiopathogenic complexity of multifactorial diseases. On the other hand, high-throughput technologies allow the evaluation of many components of biological systems and their behaviors. Analysis of Parkinson\u2019s Disease (PD) and Alzheimer\u2019s Disease (AD) from a network perspective can highlight proteins or pathways common but differently represented that can be discriminating between the two pathological conditions, thus highlight similarities and differences. In this work we propose a strategy that exploits network community structure identified with a state-of-the-art network community discovery algorithm called InfoMap, which takes advantage of information theory principles. We used two similarity measurements to quantify functional and topological similarities between the two pathologies. We built a Similarity Matrix to highlight similar communities and we analyzed statistically significant GO terms found in clustered areas of the matrix and in network communities. Our strategy allowed us to identify common known and unknown processes including DNA repair, RNA metabolism and glucose metabolism not detected with simple GO enrichment analysis. In particular, we were able to capture the connection between mitochondrial dysfunction and metabolism (glucose and glutamate/glutamine). This approach allows the identification of communities present in both pathologies which highlight common biological processes. Conversely, the identification of communities without any counterpart can be used to investigate processes that are characteristic of only one of the two pathologies. In general, the same strategy can be applied to compare any pair of biological networks."}
{"_id": "7dcb74aa3a2ac4285481dafdbfe2b52197cdd707", "title": "Design of Multimode Net-Type Resonators and Their Applications to Filters and Multiplexers", "text": "Net-type resonators with dual- and tri-mode electrical behaviors have been presented and analyzed theoretically in this paper. The proposed net-type resonator is constructed by a short-ended and several open-ended transmission-line sections, which has the advantages of small size and more flexible resonant frequency allocation and is therefore particularly suitable for applications to the design of microwave devices. To verify the usefulness of the proposed resonators, three experimental examples, including a dual-mode bandpass filter, a dual-passband filter, and a triplexer, have been designed and fabricated with microstrip technology. Each of the designed circuits occupies a very small size and has a good upper stop and performance. All measured results are in good agreement with the full-wave simulation results."}
{"_id": "1d4cbc24ab1b3056d6acd9e097e4f6b24ba64267", "title": "The coming paradigm shift in forensic identification science.", "text": "Converging legal and scientific forces are pushing the traditional forensic identification sciences toward fundamental change. The assumption of discernible uniqueness that resides at the core of these fields is weakened by evidence of errors in proficiency testing and in actual cases. Changes in the law pertaining to the admissibility of expert evidence in court, together with the emergence of DNA typing as a model for a scientifically defensible approach to questions of shared identity, are driving the older forensic sciences toward a new scientific paradigm."}
{"_id": "6a649276903f2d59891e95009b7426dc95eb84cf", "title": "Combining minutiae descriptors for fingerprint matching", "text": "A novel minutiae-based fingerprint matching algorithm is proposed. A minutiae matching algorithm has to solve two problems: correspondence and similarity computation. For the correspondence problem, we assign each minutia two descriptors: texture-based and minutiae-based descriptors, and use an alignment-based greedy matching algorithm to establish the correspondences between minutiae. For the similarity computation, we extract a 17-D feature vector from the matching result, and convert the feature vector into a matching score using support vector classifier. The proposed algorithm is tested on FVC2002 databases and compared to all participators in FVC2002. According to equal error rate, the proposed algorithm ranks 1st on DB3, the most difficult database in FVC2002, and on the average ranks 2nd on all 4 databases. 2007 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved."}
{"_id": "e83a2fa459ba921fb176beacba96038a502ff64d", "title": "Handbook of Fingerprint Recognition", "text": "label(s) Label Class c) b) a) 7.3 Integration Strategies 241 Combination strategy A combination strategy (also called combination scheme) is a technique used to combine the output of the individual classifiers. The most popular combination strategies at the abstract level are based on majority vote rules, which simply assign an input pattern to the most voted class (see Section 7.2). When two classifiers are combined, either a logical AND or a logical OR operator is typically used. When more than two classifiers are integrated, the AND/OR rules can be combined. For example, a biometric system may work on \u201cfingerprint OR (face AND hand geometry)\u201d; that is, it requires a user to present either a fingerprint or both face and hand geometry for recognition. Class set reduction, logistic regression, and Borda counts are the most commonly used approaches in combining classifiers based on the rank labels (Ho, Hull, and Srihari, 1994). In class set reduction, a subset of the classes is selected with the aim that the subset be as small as possible and still contain the true class. Multiple subsets from multiple modalities are typically combined using either a union or an intersection of the subsets. The logistic regression and Borda count methods are collectively called the class set reordering methods. The objective here is to derive a consensus ranking of the given classes such that the true class is ranked at the top. Rank labels are very useful for integration in an indexing/retrieval system. A biometric retrieval system typically outputs an ordered list of candidates (most likely matches). The top element of this ordered list is the most likely to be a correct match and the bottom of the list is the least likely match. The most popular combination schemes for combining confidence values from multiple modalities are sum, mean, median, product, minimum, and maximum rules. Kittler et al. (1998) have developed a theoretical framework in an attempt to understand the underlying mathematical basis of these popular schemes. Their experiments demonstrated that the sum or mean scheme typically performs very well in practice. A problem in using the sum rule is that the confidences (or scores) from different modalities should be normalized. This normalization typically involves mapping the confidence measures from different modalities into a common domain. For example, a biometric system may output a distance score (the lower the score, the more similar the patterns) whereas another may output a similarity score (the higher the score, the more similar the patterns) and thus the scores cannot be directly combined using the sum rule. In its simplest form, this normalization may only include inverting the sign of distance scores such that a higher score corresponds to a higher similarity. In a more complex form, the normalization may be non-linear which can be learned from training data by estimating distributions of confidence values from each modality. The scores are then translated and scaled to have zero mean, unit variance, and then remapped to a fixed interval of (0,1) using a hyperbolic tangent function. Note that it is tempting to parameterize the estimated distributions for normalization. 
However, such parameterization of distributions should be used with care, because the error rates of biometric systems are typically very small and a small error in estimating the tails of the distributions may result in a significant change in the error estimates (see Figure 7.3). Another common practice is to compute different scaling factors (weights) for each modality from training data, such that the accuracy of the combined classifier is maxi7 Multimodal Biometric Systems 242 mized. This weighted sum rule is expected to work better than the simple sum rule when the component classifiers have different strengths (i.e., different error rates). Figure 7.3. a) Genuine and impostor distributions for a fingerprint verification system (Jain et al., 2000) and a Normal approximation for the impostor distribution. Visually, the Normal approximation seems to be good, but causes significant decrease in performance compared to the non-parametric estimate as shown in the ROCs in b), where FMR is referred to as FAR (False Acceptance Rate) and (1-FNMR) as Genuine Acceptance Rate. \u00a9Elsevier. Some schemes to combine multiple modalities in biometric systems have also been studied from a theoretical point of view. Through a theoretical analysis, Daugman (1999b) showed that if a strong biometric and a weak biometric are combined with an abstract level combination using either the AND or the OR voting rules, the performance of the combination will be worse than the better of the two individual biometrics. Hong, Jain, and Pankanti\u2019s (1999) theoretical analysis that AND/OR voting strategies can improve performance only when certain conditions are satisfied confirmed Daugman\u2019s findings. Their analysis further showed that a confidence level fusion is expected to significantly improve overall performance even in the case of combining a weak and a strong biometric. Kittler et al. (1998) introduced a sensitivity analysis to explain why the sum (or average) rule outperforms the other rules. They showed that the sum rule is less sensitive than the other similar rules (such as the \u201cproduct\u201d rule) to the error rates of individual classifiers in estimating posterior probabilities (confidence values). They claim that the sum rule is the most appropriate for combining different estimates of the same posterior probabilities (e.g., resulting from different classifier initializations). Prabhakar and Jain (2002) compared the sum and the product rules with the Neyman\u2212Pearson combination scheme and showed that the product rule is worse than the sum rule when combining correlated features and both the sum rule and the product rules are inferior to the Neyman\u2212 Pearson combination scheme when combining weak and strong classifiers. 0 20 40 60 80 100 0 1 2 3 4 5 6 7 Normalized Matching Score Pe rc en ta ge ( % ) Imposter Genuine Nonparametric Imposter Distribution Normal Imposter Distribution Genuine Distribution 0 1 2 3 4 5 50 55 60 65 70 75 80 85 90 95 100 False Acceptance Rate (%) G en ui ne A cc ep ta nc e R at e (% ) Using Nonparametric Imposter Distribution Using Normal Imposter Distribution"}
{"_id": "f1672c6b9dc9e8e05d2432efa2a8ea5439b5db8b", "title": "A Real-Time Matching System for Large Fingerprint Databases", "text": "With the current rapid growth in multimedia technology, there is an imminent need for efficient techniclues to search and query large image databases. Because of their unique and peculiar needs, image databases cannot be treated iri a similar fashion to other types of digital libraries. The contextual dependencies present in images, and the complex nature of two-dimensional image data make the representation issues more difficult for image databases. An invariant representation of an image is still an open research issue. For these reasons, it is difficult to find a universal content-based retrieval technique. Current approaches based on shape, texture, and color for indexing image databases have met with limited success. Further, these techniques have not been adequately tested in the presence of noise and distortions. A given application domain offers stronger constraints for improving the retrieval performance. Fingerprint databases are characterized by their large size as well as noisy and distorted query images. Distortions are very common in fingerprint images due to elasticity of the skin. In this paper, a method of indexincl large fingerprint image databases is presented. The approach integrates a number of domain-specific high-level features such as pattern class and ridge density at higher levels of the search. At the lowest level, it incorporates elastic structural feature-based matching for indexing the database. With a multilevel indexing approach, we have been able to reduce the search space. The search engine has also been implemented on Splash 2-a field programmable gate array (FPGA)-based array processor to obtain near-,4SIC level speed of matching. Our approach has been tested on a locally collected test data and on NIST-9, a large fingerprint database available in the"}
{"_id": "6d1582de1c59823954988b4202e874a2d9042fd9", "title": "Review of Farm Management Information Systems ( FMIS )", "text": "There have been considerable advancements in the field of MIS over the years, and it continues to grow and develop in response to the changing needs of the business and marketing environment. Professionals and academicians are contributing and serving the field of MIS through the dissemination of their knowledge and ideas in professional journals. Thus, changes and trends that likely have an impact on MIS concepts, processes, and implementation can be determined by reviewing the articles published in journals. Content of the articles published in journals can also give us an idea about the types of research and themes that are popular during a given period. To see the evolutionary change in the field of MIS, the present study examined the content of articles published in business and marketing journals. [New York Science Journal 2010;3(5):87-95]. (ISSN 1554 \u2013 0200)."}
{"_id": "32f96626bd0aefad6417125c8d32428273756b4c", "title": "Is it really about me?: message content in social awareness streams", "text": "In this work we examine the characteristics of social activity and patterns of communication on Twitter, a prominent example of the emerging class of communication systems we call \"social awareness streams.\" We use system data and message content from over 350 Twitter users, applying human coding and quantitative analysis to provide a deeper understanding of the activity of individuals on the Twitter network. In particular, we develop a content-based categorization of the type of messages posted by Twitter users, based on which we examine users' activity. Our analysis shows two common types of user behavior in terms of the content of the posted messages, and exposes differences between users in respect to these activities."}
{"_id": "6ebe87159e496b6ac04f5e320e471fc4d3a4af8a", "title": "DC Motor Speed Control using PID Controllers", "text": "An Implementation of PID controllers for the speed control of a DC motor is given in this report. The motor is modeled as a first order system and its response is studied. The speed control using PI and PID control modes is explained and an implementation of the controller using OP-AMPs is given. The response of the controller to load variations is looked at."}
{"_id": "a3e0e71b9718d958f620655b3000103845e0a395", "title": "Echo: An Edge-Centric Code Offloading System With Quality of Service Guarantee", "text": "Code offloading is a promising way to accelerate mobile applications and reduce the energy consumption of mobile devices by shifting some computation to the cloud. However, existing code offloading systems suffer from a long communication delay between mobile devices and the cloud. To address this challenge, in this paper, we consider to deploy edge nodes in close proximity to mobile devices and study how they benefit code offloading. We design an edge-centric code offloading system, called Echo, over a three-layer computing hierarchy consisting of mobile devices, the edge, and the cloud. A critical problem needs to be addressed by Echo is to decide which methods should be offloaded to which computing platform (the edge or the cloud). Different from existing offloading systems that let mobile devices individually make offloading decisions, Echo implements a centralized decision engine at the edge. This edge-centric design can fully exploit limited hardware resources at the edge to provide offloading services with the quality-of-service guarantee. Furthermore, we propose some novel mechanisms, e.g., lazy object transmission and differential object update, to further improve system performance. The results of a small-scale real deployment and trace-driven simulations show that Echo significantly outperforms existing code offloading systems at both execution time and energy consumption."}
{"_id": "4b68a9b4301a37c8b4509d2652c99ac770dd7c03", "title": "A Load Balancing Model based for Cloud Computing using Public Cloud", "text": "It is difficult to maintain all the resources and services on single system. So by using Load balancing strategy we distribute load on single system to multiple system providing efficient response to user, resulting in improve user satisfaction. Load balancing in the cloud computing has an important impact on the performance. To improve user satisfaction there is a need of efficient cloud computing for that we need a good load balancing algorithm. This article shows"}
{"_id": "699dab450bd2a34594c17a08dd01149fa2610e60", "title": "An international qualitative study of ability and disability in ADHD using the WHO-ICF framework", "text": "This is the third in a series of four cross-cultural empirical studies designed to develop International Classification of Functioning, Disability and Health (ICF, and Children and Youth version, ICF(-CY) Core Sets for Attention-Deficit Hyperactivity Disorder (ADHD). To explore the perspectives of individuals diagnosed with ADHD, self-advocates, immediate family members and professional caregivers on relevant areas of impairment and functional abilities typical for ADHD across the lifespan as operationalized by the ICF(-CY). A qualitative study using focus group discussions or semi-structured interviews of 76 participants, divided into 16 stakeholder groups. Participants from five countries (Brazil, India, Saudi Arabia, South Africa and Sweden) were included. A deductive qualitative content analysis was conducted to extract meaningful functioning and disability concepts from verbatim material. Extracted concepts were then linked to ICF(-CY) categories by independent researchers using a standardized linking procedure. In total, 82 ICF(-CY) categories were identified, of which 32 were related to activities and participation, 25 to environmental factors, 23 to body functions and 2 to body structures. Participants also provided opinions on experienced positive sides to ADHD. A high level of energy and drive, creativity, hyper-focus, agreeableness, empathy, and willingness to assist others were the most consistently reported strengths associated with ADHD. Stakeholder perspectives highlighted the need to appraise ADHD in a broader context, extending beyond diagnostic criteria into many areas of ability and disability as well as environmental facilitators and barriers. This qualitative study, along with three other studies (comprehensive scoping review, expert survey and clinical study), will provide the scientific basis to define ICF(-CY) Core Sets for ADHD, from which assessment tools can be derived for use in clinical and research setting, as well as in health care administration."}
{"_id": "38b982afdab1b4810b954b4eec913308b2adcfb6", "title": "Channel Hardening-Exploiting Message Passing (CHEMP) Receiver in Large-Scale MIMO Systems", "text": "In this paper, we propose a multiple-input multiple-output (MIMO) receiver algorithm that exploits channel hardening that occurs in large MIMO channels. Channel hardening refers to the phenomenon where the off-diagonal terms of the HHH matrix become increasingly weaker compared to the diagonal terms as the size of the channel gain matrix H increases. Specifically, we propose a message passing detection (MPD) algorithm which works with the real-valued matched filtered received vector (whose signal term becomes HTHx, where x is the transmitted vector), and uses a Gaussian approximation on the off-diagonal terms of the HTH matrix. We also propose a simple estimation scheme which directly obtains an estimate of HTH (instead of an estimate of H), which is used as an effective channel estimate in the MPD algorithm. We refer to this receiver as the channel hardening-exploiting message passing (CHEMP) receiver. The proposed CHEMP receiver achieves very good performance in large-scale MIMO systems (e.g., in systems with 16 to 128 uplink users and 128 base station antennas). For the considered large MIMO settings, the complexity of the proposed MPD algorithm is almost the same as or less than that of the minimum mean square error (MMSE) detection. This is because the MPD algorithm does not need a matrix inversion. It also achieves a significantly better performance compared to MMSE and other message passing detection algorithms using MMSE estimate of H. Further, we design optimized irregular low density parity check (LDPC) codes specific to the considered large MIMO channel and the CHEMP receiver through EXIT chart matching. The LDPC codes thus obtained achieve improved coded bit error rate performance compared to off-the-shelf irregular LDPC codes."}
{"_id": "fac41331cff5fa4d91438efee41581e5154886d9", "title": "A unified distance measurement for orientation coding in palmprint verification", "text": "Orientation coding based palmprint verification methods, such as competitive code, palmprint orientation code and robust line orientation code, are state-of-the-art verification algorithms with fast matching speeds. Orientation code makes use of two types of distance measure, SUM_XOR (angular distance) and OR_XOR (Hamming distance), yet little is known about the similarities and differences both SUM_XOR and OR_XOR can be regarded as special cases, and provide some principles for determining the parameters of the unified distance. Experimental results show that, using the same feature extraction and coding methods, the unified distance measure gets lower equal error rates than the original distance measures. & 2009 Elsevier B.V. All rights reserved."}
{"_id": "029fc5ba929316c5209903b5afd38df3d635e2fc", "title": "Group recommendations with rank aggregation and collaborative filtering", "text": "The majority of recommender systems are designed to make recommendations for individual users. However, in some circumstances the items to be selected are not intended for personal usage but for a group; e.g., a DVD could be watched by a group of friends. In order to generate effective recommendations for a group the system must satisfy, as much as possible, the individual preferences of the group's members.\n This paper analyzes the effectiveness of group recommendations obtained aggregating the individual lists of recommendations produced by a collaborative filtering system. We compare the effectiveness of individual and group recommendation lists using normalized discounted cumulative gain. It is observed that the effectiveness of a group recommendation does not necessarily decrease when the group size grows. Moreover, when individual recommendations are not effective a user could obtain better suggestions looking at the group recommendations. Finally, it is shown that the more alike the users in the group are, the more effective the group recommendations are."}
{"_id": "f0883912e7b5fd18b474da7dba08e68f512bc7db", "title": "Lightweight Monitoring of Distributed Streams", "text": "As data becomes dynamic, large, and distributed, there is increasing demand for what have become known as distributed stream algorithms. Since continuously collecting the data to a central server and processing it there is infeasible, a common approach is to define local conditions at the distributed nodes, such that\u2014as long as they are maintained\u2014some desirable global condition holds.\n Previous methods derived local conditions focusing on communication efficiency. While proving very useful for reducing the communication volume, these local conditions often suffer from heavy computational burden at the nodes. The computational complexity of the local conditions affects both the runtime and the energy consumption. These are especially critical for resource-limited devices like smartphones and sensor nodes. Such devices are becoming more ubiquitous due to the recent trend toward smart cities and the Internet of Things. To accommodate for high data rates and limited resources of these devices, it is crucial that the local conditions be quickly and efficiently evaluated.\n Here we propose a novel approach, designated CB (for Convex/Concave Bounds). CB defines local conditions using suitably chosen convex and concave functions. Lightweight and simple, these local conditions can be rapidly checked on the fly. CB\u2019s superiority over the state-of-the-art is demonstrated in its reduced runtime and power consumption, by up to six orders of magnitude in some cases. As an added bonus, CB also reduced communication overhead in all the tested application scenarios."}
{"_id": "b4894f7d6264b94ded94181d54c7a0c773e3662b", "title": "Comparison of vision-based and sensor-based systems for joint angle gait analysis", "text": "Gait analysis has become recently a popular research field and been widely applied to clinical diagnosis of neurodegenerative diseases. Various low-cost sensor-based and vision-based systems are developed for capturing the hip and knee joint angles. However, the performances of these systems have not been validated and compared between each other. The purpose of this study is to set up an experiment and compare the performances of a sensor-based system with multiple inertial measurement units (IMUs), a vision-based gait analysis system with marker detection, and a markerless vision-based system on capturing the hip and knee joint angles during normal walking. The obtained measurements were validated with the data acquired from goniometers as ground truth measurement. The results indicate that the IMUs-based sensor system gives excellent performance with small errors, while vision systems produce acceptable results with slightly larger errors."}
{"_id": "ec59569fdee17844ae071be1536a08f937f08c57", "title": "Speed, Data, and Ecosystems: The Future of Software Engineering", "text": "An evaluation of recent industrial and societal trends revealed three key factors driving software engineering's future: speed, data, and ecosystems. These factors' implications have led to guidelines for companies to evolve their software engineering practices. This article is part of a special issue on the Future of Software Engineering."}
{"_id": "96b128e61be883c3740e2376a82908181563a753", "title": "Convolutional Neural Network for Brain MR Imaging Extraction Using Silver-Standards Masks", "text": "Convolutional neural networks (CNN) for medical imaging are constrained by the number of annotated data required for the training stage. Usually, manual annotation is considered to be the \u201cgold-standard\u2019. However, medical imaging datasets with expert manual segmentation are scarce as this step is time-consuming and expensive. Moreover, the single-rater manual annotation is often used in data-driven approaches making the algorithm optimal for one guideline only. In this work, we propose a convolutional neural network (CNN) for brain magnetic resonance (MR) imaging extraction, fully trained with what we refer to as silver-standard masks. To the best of our knowledge, our approach is the first deep learning approach that is fully trained using silver-standards masks, having an optimal generalization. Furthermore, regarding the Dice coefficient, we outperform the current skull-stripping (SS) state-of-the-art method in three datasets. Our method consists of 1) a dataset with silver-standard masks as input, 2) a tri-planar method using parallel 2D U-Net-based CNNs, which we refer to as CONSNet, and 3) an auto-context implementation of CONSNet. CONSNet refers to our integrated approach, i.e., training with silver-standard masks and us\u2217Corresponding author Email address: oeslle@dca.fee.unicamp.br (Oeslle Lucena) URL: miclab.fee.unicamp.br (Oeslle Lucena) Preprint submitted to Medical Image Analysis April 13, 2018 ar X iv :1 80 4. 04 98 8v 1 [ cs .C V ] 1 3 A pr 2 01 8 ing a CNN architecture. Masks are generated by forming a consensus from a set of publicly available automatic segmentation methods using the simultaneous truth and performance level estimation (STAPLE) algorithm. The CNN architecture is robust, with dropout and batch normalization as regularizers. It was inspired by the 2D U-Net model published by the RECOD group. We conducted our analysis using three publicly available datasets: the Calgary-Campinas-359 (CC-359 ), the LONI Probabilistic Brain Atlas (LPBA40), and the Open Access Series of Imaging Studies (OASIS). Five performance metrics were used in our experiments: Dice coefficient, sensitivity, specificity, Hausdorff distance, and symmetric surface-to-surface mean distance. Our results also demonstrate that we have comparable overall performance versus current methods, including recently proposed deep learningbased strategies that were trained using manually annotated masks. Our usage of silver-standard masks, however, reduced the cost of manual annotation, diminished inter-/intra-rater variability, and avoided CNN segmentation super-specialization towards one specific SS method found when mask generation is done through an agreement among many automatic methods. Moreover, usage of silver-standard masks enlarges the volume of input annotated data since we can generate annotation from unlabelled data. Also, our method has the advantage that, once trained, it takes only a few seconds to process a typical brain image volume using a modern graphics processing unit (GPU). In contrast, many of the other competitive consensus-building methods have processing times on the order of minutes."}
{"_id": "4f7886b0a905d9c5c76327044482e60b5799584e", "title": "Evidence for sensory prediction deficits in schizophrenia.", "text": "OBJECTIVE\nPatients with schizophrenia experiencing delusions and hallucinations can misattribute their own actions to an external source. The authors test the hypothesis that patients with schizophrenia have defects in their ability to predict the sensory consequences of their actions.\n\n\nMETHOD\nThe authors measured sensory attenuation of self-produced stimuli by patients with schizophrenia and by healthy subjects.\n\n\nRESULTS\nPatients with schizophrenia demonstrated significantly less sensory attenuation than healthy subjects.\n\n\nCONCLUSIONS\nPatients with a diagnosis of schizophrenia have a dysfunction in their predictive mechanisms."}
{"_id": "c69e481c33138db0a5514b2e0d9cb0142b52f9de", "title": "Implementing an Effective Test Automation Framework", "text": "Testing automation tools enable developers and/or testers to easily automate the entire process of testing in software development. Nevertheless, adopting automated testing is not easy and unsuccessful due to a lack of key information and skills. In order to help solve such problems, we have designed a new framework to support a clear overview of the test design and/or plan for automating tests in distributed environments. Those that are new to testing do not need to delve into complex automation tools or test scripts. This framework allows a programmer or tester to graphically specify the granularity of execution control and aids communication between various stakeholders and software developers. It also enables them to perform the automated Continuous Integration environment. In this paper, we describe details on experiences and usage of the proposed framework."}
{"_id": "0e1c072b035756104151484e6548cac0517cc5f2", "title": "Synonym set extraction from the biomedical literature by lexical pattern discovery", "text": "Although there are a large number of thesauri for the biomedical domain many of them lack coverage in terms and their variant forms. Automatic thesaurus construction based on patterns was first suggested by Hearst [1], but it is still not clear how to automatically construct such patterns for different semantic relations and domains. In particular it is not certain which patterns are useful for capturing synonymy. The assumption of extant resources such as parsers is also a limiting factor for many languages, so it is desirable to find patterns that do not use syntactical analysis. Finally to give a more consistent and applicable result it is desirable to use these patterns to form synonym sets in a sound way. We present a method that automatically generates regular expression patterns by expanding seed patterns in a heuristic search and then develops a feature vector based on the occurrence of term pairs in each developed pattern. This allows for a binary classifications of term pairs as synonymous or non-synonymous. We then model this result as a probability graph to find synonym sets, which is equivalent to the well-studied problem of finding an optimal set cover. We achieved 73.2% precision and 29.7% recall by our method, out-performing hand-made resources such as MeSH and Wikipedia. We conclude that automatic methods can play a practical role in developing new thesauri or expanding on existing ones, and this can be done with only a small amount of training data and no need for resources such as parsers. We also concluded that the accuracy can be improved by grouping into synonym sets."}
{"_id": "5807d13c51281393e1ab06a33eeee83ff2b476fb", "title": "Age differences in the neural systems supporting human allocentric spatial navigation", "text": "Age-related declines in spatial navigation are well-known in human and non-human species. Studies in non-human species suggest that alteration in hippocampal and other neural circuitry may underlie behavioral deficits associated with aging but little is known about the neural mechanisms of human age-related decline in spatial navigation. The purpose of the present study was to examine age differences in functional brain activation during virtual environment navigation. Voxel-based analysis of activation patterns in young subjects identified activation in the hippocampus and parahippocampal gyrus, retrosplenial cortex, right and left lateral parietal cortex, medial parietal lobe and cerebellum. In comparison to younger subjects, elderly participants showed reduced activation in the hippocampus and parahippocampal gyrus, medial parietal lobe and retrosplenial cortex. Relative to younger participants elderly subjects showed increased activation in anterior cingulate gyrus and medial frontal lobe. These results provide evidence of age specific neural networks supporting spatial navigation and identify a putative neural substrate for age-related differences in spatial memory and navigational skill."}
{"_id": "58df2b3cd8b89f6620e43d322c17928d08a197ba", "title": "Advanced LSTM: A Study About Better Time Dependency Modeling in Emotion Recognition", "text": "Long short-term memory (LSTM) is normally used in recurrent neural network (RNN) as basic recurrent unit. However, conventional LSTM assumes that the state at current time step depends on previous time step. This assumption constraints the time dependency modeling capability. In this study, we propose a new variation of LSTM, advanced LSTM (A-LSTM), for better temporal context modeling. We employ A-LSTM in weighted pooling RNN for emotion recognition. The A-LSTM outperforms the conventional LSTM by 5.5% relatively. The A-LSTM based weighted pooling RNN can also complement the state-of-the-art emotion classification framework. This shows the advantage of A-LSTM."}
{"_id": "131b53ba2c5d20ead6796c306a949cccbef95c1d", "title": "Carpenter: finding closed patterns in long biological datasets", "text": "The growth of bioinformatics has resulted in datasets with new characteristics. These datasets typically contain a large number of columns and a small number of rows. For example, many gene expression datasets may contain 10,000-100,000 columns but only 100-1000 rows.Such datasets pose a great challenge for existing (closed) frequent pattern discovery algorithms, since they have an exponential dependence on the average row length. In this paper, we describe a new algorithm called CARPENTER that is specially designed to handle datasets having a large number of attributes and relatively small number of rows. Several experiments on real bioinformatics datasets show that CARPENTER is orders of magnitude better than previous closed pattern mining algorithms like CLOSET and CHARM."}
{"_id": "f3e92100903dc9a8e3adb6013115df418c41484e", "title": "Incorporating intra-class variance to fine-grained visual recognition", "text": "Fine-grained visual recognition aims to capture discriminative characteristics amongst visually similar categories. The state-of-the-art research work has significantly improved the fine-grained recognition performance by deep metric learning using triplet network. However, the impact of intra-category variance on the performance of recognition and robust feature representation has not been well studied. In this paper, we propose to leverage intra-class variance in metric learning of triplet network to improve the performance of fine-grained recognition. Through partitioning training images within each category into a few groups, we form the triplet samples across different categories as well as different groups, which is called Group Sensitive TRiplet Sampling (GS-TRS). Accordingly, the triplet loss function is strengthened by incorporating intra-class variance with GS-TRS, which may contribute to the optimization objective of triplet network. Extensive experiments over benchmark datasets CompCar and VehicleID show that the proposed GS-TRS has significantly outperformed state-of-the-art approaches in both classification and retrieval tasks."}
{"_id": "6f046750a4fea7f404ca65002338129dbf6ab66d", "title": "Social-information-processing factors in reactive and proactive aggression in children's peer groups.", "text": "We examined social-information-processing mechanisms (e.g., hostile attributional biases and intention-cue detection deficits) in chronic reactive and proactive aggressive behavior in children's peer groups. In Study 1, a teacher-rating instrument was developed to assess these behaviors in elementary school children (N = 259). Reactive and proactive scales were found to be internally consistent, and factor analyses partially supported convergent and discriminant validities. In Study 2, behavioral correlates of these forms of aggression were examined through assessments by peers (N = 339). Both types of aggression related to social rejection, but only proactively aggressive boys were also viewed as leaders and as having a sense of humor. In Study 3, we hypothesized that reactive aggression (but not proactive aggression) would occur as a function of hostile attributional biases and intention-cue detection deficits. Four groups of socially rejected boys (reactive aggressive, proactive aggressive, reactive-proactive aggressive, and nonaggressive) and a group of average boys were presented with a series of hypothetical videorecorded vignettes depicting provocations by peers and were asked to interpret the intentions of the provocateur (N = 117). Only the two reactive-aggressive groups displayed biases and deficits in interpretations. In Study 4, attributional biases and deficits were found to be positively correlated with the rate of reactive aggression (but not proactive aggression) displayed in free play with peers (N = 127). These studies supported the hypothesis that attributional biases and deficits are related to reactive aggression but not to proactive aggression."}
{"_id": "7613af8292d342d8cfcf323d463b414333811fda", "title": "n ensemble learning framework for anomaly detection in building nergy consumption", "text": "During building operation, a significant amount of energy is wasted due to equipment and humanrelated faults. To reduce waste, today\u2019s smart buildings monitor energy usage with the aim of identifying abnormal consumption behaviour and notifying the building manager to implement appropriate energysaving procedures. To this end, this research proposes a new pattern-based anomaly classifier, the collective contextual anomaly detection using sliding window (CCAD-SW) framework. The CCAD-SW framework identifies anomalous consumption patterns using overlapping sliding windows. To enhance the anomaly detection capacity of the CCAD-SW, this research also proposes the ensemble anomaly detection (EAD) framework. The EAD is a generic framework that combines several anomaly detection classifiers"}
{"_id": "3ab3bb02a137ddfc419e7d160f8d1ff470080911", "title": "The Other Side of the Coin: A Framework for Detecting and Analyzing Web-based Cryptocurrency Mining Campaigns", "text": "Mining for crypto currencies is usually performed on high-performance single purpose hardware or GPUs. However, mining can be easily parallelized and distributed over many less powerful systems. Cryptojacking is a new threat on the Internet and describes code included in websites that uses a visitor's CPU to mine for crypto currencies without the their consent. This paper introduces MiningHunter, a novel web crawling framework which is able to detect mining scripts even if they obfuscate their malicious activities. We scanned the Alexa Top 1 million websites for cryptojacking, collected more than 13,400,000 unique JavaScript files with a total size of 246 GB and found that 3,178 websites perform cryptocurrency mining without their visitors' consent. Furthermore, MiningHunter can be used to provide an in-depth analysis of cryptojacking campaigns. To show the feasibility of the proposed framework, three of such campaigns are examined in detail. Our results provide the most comprehensive analysis to date of the spread of cryptojacking on the Internet."}
{"_id": "0e78b20b27d27261f9ae088eb13201f2d5b185bd", "title": "Correlation-based Feature Selection for Discrete and Numeric Class Machine Learning", "text": "Algorithms for feature selection fall into two broad categories: wrappers that use the learning algorithm itself to evaluate the usefulness of features and filters that evaluate features according to heuristics based on general characteristics of the data. For application to large databases, filters have proven to be more practical than wrappers because they are much faster. However, most existing filter algorithms only work with discrete classification problems. This paper describes a fast, correlation-based filter algorithm that can be applied to continuous and discrete problems. The algorithm often outperforms the well-known ReliefF attribute estimator when used as a preprocessing step for naive Bayes, instance-based learning, decision trees, locally weighted regression, and model trees. It performs more feature selection than ReliefF does\u2014reducing the data dimensionality by fifty percent in most cases. Also, decision and model trees built from the preprocessed data are often significantly smaller."}
{"_id": "1b65af0b2847cf6edb1461eda659f08be27bc76d", "title": "Regression Shrinkage and Selection via the Lasso", "text": "We propose a new method for estimation in linear models. The 'lasso' minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients hat are exactly 0 and hence gives interpretable models. Our simulation studies suggest that the lasso enjoys some of the favourable properties of both subset selection and ridge regression. It produces interpretable models like subset selection and exhibits the stability of ridge regression. There is also an interesting relationship with recent work in adaptive function estimation by Donoho and Johnstone. The lasso idea is quite general and can be applied in a variety of statistical models: extensions to generalized regression models and tree-based models are briefly described."}
{"_id": "59fc3d8a65d23466858a5238939a40bfce1a5942", "title": "Algorithms for Infinitely Many-Armed Bandits", "text": "We consider multi-armed bandit problems where the number of arms is larger than the possible number of experiments. We make a stochastic assumption on the mean-reward of a new selected arm which characterizes its probability of being a near-optimal arm. Our assumption is weaker than in previous works. We describe algorithms based on upper-confidence-bounds applied to a restricted set of randomly selected arms and provide upper-bounds on the resulting expected regret. We also derive a lower-bound which matches (up to a logarithmic factor) the upper-bound in some cases."}
{"_id": "004888621a4e4cee56b6633338a89aa036cf5ae5", "title": "Wrappers for Feature Subset Selection", "text": "In the feature subset selection problem, a learning algorithm is faced with the problem of selecting a relevant subset of features upon which to focus its attention, while ignoring the rest. To achieve the best possible performance with a particular learning algorithm on a particular training set, a feature subset selection method should consider how the algorithm and the training set interact. We explore the relation between optimal feature subset selection and relevance. Our wrapper method searches for an optimal feature subset tailored to a particular algorithm and a domain. We study the strengths and weaknesses of the wrapper approach and show a series of improved designs. We compare the wrapper approach to induction without feature subset selection and to Relief, a filter approach to feature subset selection. Significant improvement in accuracy is achieved for some datasets for the two families of induction algorithms used: decision trees and Naive-Bayes. @ 1997 Elsevier Science B.V."}
{"_id": "d70a334b4af474038264e59c227f946cd908a391", "title": "Evaluation on State of Charge Estimation of Batteries With Adaptive Extended Kalman Filter by Experiment Approach", "text": "An accurate State-of-Charge (SoC) estimation plays a significant role in battery systems used in electric vehicles due to the arduous operation environments and the requirement of ensuring safe and reliable operations of batteries. Among the conventional methods to estimate SoC, the Coulomb counting method is widely used, but its accuracy is limited due to the accumulated error. Another commonly used method is model-based online iterative estimation with the Kalman filters, which improves the estimation accuracy in some extent. To improve the performance of Kalman filters in SoC estimation, the adaptive extended Kalman filter (AEKF), which employs the covariance matching approach, is applied in this paper. First, we built an implementation flowchart of the AEKF for a general system. Second, we built an online open-circuit voltage (OCV) estimation approach with the AEKF algorithm so that we can then get the SoC estimate by looking up the OCV-SoC table. Third, we proposed a robust online model-based SoC estimation approach with the AEKF algorithm. Finally, an evaluation on the SoC estimation approaches is performed by the experiment approach from the aspects of SoC estimation accuracy and robustness. The results indicate that the proposed online SoC estimation with the AEKF algorithm performs optimally, and for different error initial values, the maximum SoC estimation error is less than 2% with close-loop state estimation characteristics."}
{"_id": "51182c66098d97061b9ac2e4d8ccf86cef22f76d", "title": "Movie recommendations based in explicit and implicit features extracted from the Filmtipset dataset", "text": "In this paper, we describe the experiments conducted by the Information Retrieval Group at the Universidad Aut\u00f3noma de Madrid (Spain) in order to better recommend movies for the 2010 CAMRa Challenge edition. Experiments were carried out on the dataset corresponding to social Filmtipset track. To obtain the movies recommendations we have used different algorithms based on Random Walks, which are well documented in the literature of collaborative recommendation. We have also included a new proposal in one of the algorithms in order to get better results. The results obtained have been computed by means of the trec_eval standard NIST evaluation procedure."}
{"_id": "c12f1c55f8b8cca2dec1cf7cca2ae4408eda2b32", "title": "Classification and Prediction of Future Weather by using Back Propagation Algorithm-An Approach", "text": "Weather forecasting is the process of recording the parameters of weather like wind direction, wind speed, humidity, rainfall, temperature etc. Back Propagation Algorithm can be applied on these parameters in order to predict the future weather. In \u201cClassification and Prediction of Future Weather by using Back Propagation Algorithm\u201d technique, one parameter say temperature varies by some unit and the variation on other parameters say humidity, temperature and rainfall will be predicted with respect to temperature. In this paper, different models which were used in the past for weather forecasting are discussed. This paper is focusing on three parts, first is different models which were used in weather forecasting, second part is introducing a new wireless kit use for weather forecasting and third part includes the Back Propagation Algorithm can be applied on different parameters of weather forecast. Keywords--Back Propagation Algorithm, Artificial Neural Network, Wireless kit, Classification and Prediction"}
{"_id": "a2bfc30fd47584ba6cb1743d51c0c00b7430dfae", "title": "Review of Video and Image Defogging Algorithms and Related Studies on Image Restoration and Enhancement", "text": "Video and images acquired by a visual system are seriously degraded under hazy and foggy weather, which will affect the detection, tracking, and recognition of targets. Thus, restoring the true scene from such a foggy video or image is of significance. The main goal of this paper was to summarize current video and image defogging algorithms. We first presented a review of the detection and classification method of a foggy image. Then, we summarized existing image defogging algorithms, including image restoration algorithms, image contrast enhancement algorithms, and fusion-based defogging algorithms. We also presented current video defogging algorithms. We summarized objective image quality assessment methods that have been widely used for the comparison of different defogging algorithms, followed by an experimental comparison of various classical image defogging algorithms. Finally, we presented the problems of video and image defogging which need to be further studied. The code of all algorithms will be available at <;uri xlink:href=\"http://www.yongxu.org/lunwen.html\" xlink:type=\"simple\">http://www.yongxu.org/lunwen.html<;/uri>."}
{"_id": "6494c8b6281f44d0c893e40a61f1da8e5206353a", "title": "An Efficient Oblivious Database for the Public Cloud", "text": "Hardware enclaves such as Intel SGX are a promising technology to increase the security of databases outsourced to the cloud. These enclaves provide an execution environment isolated from the hypervisor/OS, and encryption of data in memory. However, for applications that use large amounts of memory\u2014including most realistic databases\u2014enclaves do not protect against access pattern leaks, where an attacker observes which locations in memory are accessed. The n\u00e4\u0131ve way to address this issue, using Oblivious RAM (ORAM) primitives, adds prohibitive overhead. In this paper, we propose ObliDB, a database that co-designs both its data structures (e.g., oblivious B+ trees) and physical operators to accelerate oblivious relational queries, giving up to 329\u00d7 speedup over n\u00e4\u0131ve ORAM. On analytics workloads, ObliDB ranges from competitive to 19\u00d7 faster than previous oblivious systems designed only for analytics, such as Opaque, and comes within 2.6\u00d7 of Spark SQL. Moreover, ObliDB also supports point queries, insertions, and deletions with latencies of 3\u201310ms, which is 7\u201322\u00d7 faster than previously published oblivious data structures, and makes ObliDB suitable for transactional workloads too. To our knowledge, ObliDB is the first general-purpose oblivious database to approach practical performance. PVLDB Reference Format: Saba Eskandarian and Matei Zaharia. An Efficient Oblivious Database for the Public Cloud. PVLDB, 11 (4): xxxx-yyyy, 2017. DOI: https://doi.org/TBD"}
{"_id": "c89e5e22704c6df3f516b5483edaf667aae0e718", "title": "Colors and emotions: preferences and combinations.", "text": "Within three age groups (7-year-old children, 11-year-old children, and adults), preferences for colors and emotions were established by means of two distinct paired-comparison tasks. In a subsequent task, participants were asked to link colors to emotions by selecting an appropriate color. It was hypothesized that the number of times that each color was tied to a specific emotion would be predictable from the separate preferences for colors and emotions. Within age groups, participants had consistent preferences for colors and emotions, but preferences differed from one age group to another. Especially in the youngest group, the pattern of combinations between colors and emotions appeared to be meaningfully related to the preference order for colors and emotions."}
{"_id": "f9f5cb17cf072874e438dc8e1d5f03f7b505a356", "title": "Spark: A Big Data Processing Platform Based on Memory Computing", "text": "Spark is a memory-based computing framework which has a better ability of computing and fault tolerance, supports batch, interactive, iterative and flow calculations. In this paper, we analyze the Spark's primary framework, core technologies, and point out the advantages and disadvantages of the Spark. In the end, we make a discussion for the future trends of the Spark technologies."}
{"_id": "8bcd9f8c4e75a64576d8592713c2aa6b5f1ad2d9", "title": "Towards an automatic early stress recognition system for office environments based on multimodal measurements: A review", "text": "Stress is a major problem of our society, as it is the cause of many health problems and huge economic losses in companies. Continuous high mental workloads and non-stop technological development, which leads to constant change and need for adaptation, makes the problem increasingly serious for office workers. To prevent stress from becoming chronic and provoking irreversible damages, it is necessary to detect it in its early stages. Unfortunately, an automatic, continuous and unobtrusive early stress detection method does not exist yet. The multimodal nature of stress and the research conducted in this area suggest that the developed method will depend on several modalities. Thus, this work reviews and brings together the recent works carried out in the automatic stress detection looking over the measurements executed along the three main modalities, namely, psychological, physiological and behavioural modalities, along with contextual measurements, in order to give hints about the most appropriate techniques to be used and thereby, to facilitate the development of such a holistic system."}
{"_id": "c2a730c06522395378b655038bab293042b4435d", "title": "An Overview of Recent Progress in the Study of Distributed Multi-Agent Coordination", "text": "This paper reviews some main results and progress in distributed multi-agent coordination, focusing on papers published in major control systems and robotics journals since 2006. Distributed coordination of multiple vehicles, including unmanned aerial vehicles, unmanned ground vehicles, and unmanned underwater vehicles, has been a very active research subject studied extensively by the systems and control community. The recent results in this area are categorized into several directions, such as consensus, formation control, optimization, and estimation. After the review, a short discussion section is included to summarize the existing research and to propose several promising research directions along with some open problems that are deemed important for further investigations."}
{"_id": "d4528c8ebd5d4b9dab591b8e2288a0a8784b9949", "title": "Music Cognition and the Cognitive Sciences", "text": "Why should music be of interest to cognitive scientists, and what role does it play in human cognition? We review three factors that make music an important topic for cognitive scientific research. First, music is a universal human trait fulfilling crucial roles in everyday life. Second, music has an important part to play in ontogenetic development and human evolution. Third, appreciating and producing music simultaneously engage many complex perceptual, cognitive, and emotional processes, rendering music an ideal object for studying the mind. We propose an integrated status for music cognition in the Cognitive Sciences and conclude by reviewing challenges and big questions in the field and the way in which these reflect recent developments."}
{"_id": "2dbe5601ef0e74a51fa3eab18d68ea753501a0df", "title": "Competency based management: a review of systems and approaches", "text": "Purpose \u2013 Aims to review the key concepts of competency management (CM) and to propose method for developing competency method. Design/methodology/approach \u2013 Examines the CM features of 22 CM systems and 18 learning management systems. Findings \u2013 Finds that the areas of open standard (XML, web services, RDF), semantic technologies (ontologies and the semantic web) and portals with self-service technologies are going to play a significant part in the evolution of CM systems. Originality/value \u2013 Emphasizes the beneficial attributes of CM for private and public organizations."}
{"_id": "2e3cde02fad19f48269c3cc3ba7ee7483d98bc1e", "title": "Nonoutsourceable Scratch-Off Puzzles to Discourage Bitcoin Mining Coalitions", "text": "An implicit goal of Bitcoin's reward structure is to diffuse network influence over a diverse, decentralized population of individual participants. Indeed, Bitcoin's security claims rely on no single entity wielding a sufficiently large portion of the network's overall computational power. Unfortunately, rather than participating independently, most Bitcoin miners join coalitions called mining pools in which a central pool administrator largely directs the pool's activity, leading to a consolidation of power. Recently, the largest mining pool has accounted for more than half of network's total mining capacity. Relatedly, \"hosted mining\" service providers offer their clients the benefit of economies-of-scale, tempting them away from independent participation. We argue that the prevalence of mining coalitions is due to a limitation of the Bitcoin proof-of-work puzzle -- specifically, that it affords an effective mechanism for enforcing cooperation in a coalition. We present several definitions and constructions for \"nonoutsourceable\" puzzles that thwart such enforcement mechanisms, thereby deterring coalitions. We also provide an implementation and benchmark results for our schemes to show they are practical."}
{"_id": "7a6f802c089c339256dcbff9d3039abbe60ea076", "title": "LTE-V for Sidelink 5G V2X Vehicular Communications: A New 5G Technology for Short-Range Vehicle-to-Everything Communications", "text": "This article provides an overview of the long-term evolution-vehicle (LTE-V) standard supporting sidelink or vehicle-to-vehicle (V2V) communications using LTE's direct interface named PC5 in LTE. We review the physical layer changes introduced under Release 14 for LTE-V, its communication modes 3 and 4, and the LTE-V evolutions under discussion in Release 15 to support fifth-generation (5G) vehicle-to-everything (V2X) communications and autonomous vehicles' applications. Modes 3 and 4 support direct V2V communications but differ on how they allocate the radio resources. Resources are allocated by the cellular network under mode 3. Mode 4 does not require cellular coverage, and vehicles autonomously select their radio resources using a distributed scheduling scheme supported by congestion control mechanisms. Mode 4 is considered the baseline mode and represents an alternative to 802.11p or dedicated shortrange communications (DSRC). In this context, this article also presents a detailed analysis of the performance of LTE-V sidelink mode 4, and proposes a modification to its distributed scheduling."}
{"_id": "3a498f33da76cc9eccca1463528b0a3f8c4b25b7", "title": "MatrixWave: Visual Comparison of Event Sequence Data", "text": "Event sequence data analysis is common in many domains, including web and software development, transportation, and medical care. Few have investigated visualization techniques for comparative analysis of multiple event sequence datasets. Grounded in the real-world characteristics of web clickstream data, we explore visualization techniques for comparison of two clickstream datasets collected on different days or from users with different demographics. Through iterative design with web analysts, we designed MatrixWave, a matrix-based representation that allows analysts to get an overview of differences in traffic patterns and interactively explore paths through the website. We use color to encode differences and size to offer context over traffic volume. User feedback on MatrixWave is positive. Our study participants made fewer errors with MatrixWave and preferred it over the more familiar Sankey diagram."}
{"_id": "549867419710842621c14779307b1c7a5debddd6", "title": "Wireless Power Transfer for Vehicular Applications: Overview and Challenges", "text": "More than a century-old gasoline internal combustion engine is a major contributor to greenhouse gases. Electric vehicles (EVs) have the potential to achieve eco-friendly transportation. However, the major limitation in achieving this vision is the battery technology. It suffers from drawbacks such as high cost, rare material, low energy density, and large weight. The problems related to battery technology can be addressed by dynamically charging the EV while on the move. In-motion charging can reduce the battery storage requirement, which could significantly extend the driving range of an EV. This paper reviews recent advances in stationary and dynamic wireless charging of EVs. A comprehensive review of charging pad, power electronics configurations, compensation networks, controls, and standards is presented."}
{"_id": "45061dddc7973a6a78cb2b555dd05953f38e06d3", "title": "Using NLP techniques for file fragment classification", "text": "The classification of file fragments is an important problem in digital forensics. The literature does not include comprehensive work on applying machine learning techniques to this problem. In this work, we explore the use of techniques from natural language processing to classify file fragments. We take a supervised learning approach, based on the use of support vector machines combined with the bag-of-words model, where text documents are represented as unordered bags of words. This technique has been repeatedly shown to be effective and robust in classifying text documents (e.g., in distinguishing positive movie reviews from negative ones). In our approach, we represent file fragments as \u201cbags of bytes\u201d with feature vectors consisting of unigram and bigram counts, as well as other statistical measurements (including entropy and others). We made use of the publicly available Garfinkel data corpus to generate file fragments for training and testing. We ran a series of experiments, and found that this approach is effective in this domain as well. a 2012 O. Zhulyn, S. Fitzgerald & G. Mathews. Published by Elsevier Ltd. All rights reserved."}
{"_id": "508eb5a6156b8fa1b4547b611e85969438116fa2", "title": "Perceptual Generative Adversarial Networks for Small Object Detection", "text": "Detecting small objects is notoriously challenging due to their low resolution and noisy representation. Existing object detection pipelines usually detect small objects through learning representations of all the objects at multiple scales. However, the performance gain of such ad hoc architectures is usually limited to pay off the computational cost. In this work, we address the small object detection problem by developing a single architecture that internally lifts representations of small objects to super-resolved ones, achieving similar characteristics as large objects and thus more discriminative for detection. For this purpose, we propose a new Perceptual Generative Adversarial Network (Perceptual GAN) model that improves small object detection through narrowing representation difference of small objects from the large ones. Specifically, its generator learns to transfer perceived poor representations of the small objects to super-resolved ones that are similar enough to real large objects to fool a competing discriminator. Meanwhile its discriminator competes with the generator to identify the generated representation and imposes an additional perceptual requirement - generated representations of small objects must be beneficial for detection purpose - on the generator. Extensive evaluations on the challenging Tsinghua-Tencent 100K [45] and the Caltech [9] benchmark well demonstrate the superiority of Perceptual GAN in detecting small objects, including traffic signs and pedestrians, over well-established state-of-the-arts."}
{"_id": "8d3fe0ba28864ac905fcdc58ed268bc21a91582c", "title": "Techniques for Machine Learning based Spatial Data Analysis: Research Directions", "text": "Today, machine learning techniques play a significant role in data analysis, predictive modeling and visualization. The main aim of the machine learning algorithms is that they first learn from the empirical data and can be utilized in cases for which the modeled phenomenon is hidden or not yet described. Several significant techniques like Artificial Neural Network, Support Vector Regression, k-Nearest Neighbour, Bayesian Classifiers and Decision Trees are developed in past years to acheive this task. The first and most complicated problem concerns is the amount of available data. This paper reviews state-of-the-art in the domain of spatial data analysis by employing machine learning approaches. First various methods have been summarized, which exist in the literature. Further, the current research scenarios which are going on in this area are described. Based on the research done in past years, identification of the problem in the existing system is also presented in this paper and have given future research directions."}
{"_id": "6b2456d4570cf8f8344c46c9315229669578fc3b", "title": "DRCD: a Chinese Machine Reading Comprehension Dataset", "text": "\u91cd\u8996\u786b\u9178\u7684\u7a2e\u985e\u4ee5\u53ca\u5b83\u5011\u5728\u91ab\u5b78\u4e0a\u7684\u50f9\u503c...... \u5728 1746\u5e74,John Roebuck\u5247\u904b\u7528\u9019\u500b\u539f\u5247,\u958b \u5275\u925b\u5ba4\u6cd5,\u4ee5\u66f4\u4f4e\u6210\u672c\u6709\u6548\u5730\u5927\u91cf\u751f\u7522\u786b \u9178\u3002...... ... Others, such as Ibn S\u012bn\u0101, are more concerned about the type of sulfuric acid and their medical value. In 1746, John Roebuck applied this method to create lead chamber process to efficiently produce large quantities of sulfuric acid at a lower cost. Question: \u91cd\u8996\u786b\u9178\u7684\u7a2e\u985e\u4ee5\u53ca\u5b83\u5011\u5728\u91ab\u5b78\u4e0a\u7684 \u50f9\u503c\u5730\u70ba\u54ea\u4f4d\u91ab\u5e2b? Who value the type of sulfuric acid and their medical value? Answer: \u4f0a\u672c\u00b7\u897f\u90a3/ Ibn S\u012bn\u0101 Question: \u925b\u5ba4\u6cd5\u65bc\u897f\u5143\u5e7e\u5e74\u958b\u5275? / When did lead chamber process being invented? Answer: 1746\u5e74 / 1746"}
{"_id": "3895b68df599bda93c3dc2e3aca3205bf6ab0783", "title": "Parental delay or refusal of vaccine doses, childhood vaccination coverage at 24 months of age, and the Health Belief Model.", "text": "OBJECTIVE\nWe evaluated the association between parents' beliefs about vaccines, their decision to delay or refuse vaccines for their children, and vaccination coverage of children at aged 24 months.\n\n\nMETHODS\nWe used data from 11,206 parents of children aged 24-35 months at the time of the 2009 National Immunization Survey interview and determined their vaccination status at aged 24 months. Data included parents' reports of delay and/or refusal of vaccine doses, psychosocial factors suggested by the Health Belief Model, and provider-reported up-to-date vaccination status.\n\n\nRESULTS\nIn 2009, approximately 60.2% of parents with children aged 24-35 months neither delayed nor refused vaccines, 25.8% only delayed, 8.2% only refused, and 5.8% both delayed and refused vaccines. Compared with parents who neither delayed nor refused vaccines, parents who delayed and refused vaccines were significantly less likely to believe that vaccines are necessary to protect the health of children (70.1% vs. 96.2%), that their child might get a disease if they aren't vaccinated (71.0% vs. 90.0%), and that vaccines are safe (50.4% vs. 84.9%). Children of parents who delayed and refused also had significantly lower vaccination coverage for nine of the 10 recommended childhood vaccines including diphtheria-tetanus-acellular pertussis (65.3% vs. 85.2%), polio (76.9% vs. 93.8%), and measles-mumps-rubella (68.4% vs. 92.5%). After adjusting for sociodemographic differences, we found that parents who were less likely to agree that vaccines are necessary to protect the health of children, to believe that their child might get a disease if they aren't vaccinated, or to believe that vaccines are safe had significantly lower coverage for all 10 childhood vaccines.\n\n\nCONCLUSIONS\nParents who delayed and refused vaccine doses were more likely to have vaccine safety concerns and perceive fewer benefits associated with vaccines. Guidelines published by the American Academy of Pediatrics may assist providers in responding to parents who may delay or refuse vaccines."}
{"_id": "808f852f6101c674e10cae264df4bc6ef55fe7dc", "title": "An Improved Bank Credit Scoring Model: A Na\u00efve Bayesian Approach", "text": "Credit scoring is a decision tool used by organizations to grant or reject credit requests from their customers. Series of artificial intelligent and traditional approaches have been used to building credit scoring model and credit risk evaluation. Despite being ranked amongst the top 10 algorithm in Data mining, Na\u00efve Bayesian algorithm has not been extensively used in building credit score cards. Using demographic and material indicators as input variables, this paper investigate the ability of Bayesian classifier towards building credit scoring model in banking sector."}
{"_id": "641ed032fc5b69eb14cdd462410ebe05f9dcbfc7", "title": "Political Issue Extraction Model: A Novel Hierarchical Topic Model That Uses Tweets By Political And Non-Political Authors", "text": "People often use social media to discuss opinions, including political ones. We refer to relevant topics in these discussions as political issues, and the alternate stands towards these topics as political positions. We present a Political Issue Extraction (PIE) model that is capable of discovering political issues and positions from an unlabeled dataset of tweets. A strength of this model is that it uses twitter timelines of political and non-political authors, and affiliation information of only political authors. The model estimates word-specific distributions (that denote political issues and positions) and hierarchical author/group-specific distributions (that show how these issues divide people). Our experiments using a dataset of 2.4 million tweets from the US show that this model effectively captures the desired properties (with respect to words and groups) of political discussions. We also evaluate the two components of the model by experimenting with: (a) Use to alternate strategies to classify words, and (b) Value addition due to incorporation of group membership information. Estimated distributions are then used to predict political affiliation with 68% accuracy."}
{"_id": "550677a4ce026963d8513f45b6d7ee784e52332b", "title": "Unrealistic optimism about susceptibility to health problems: Conclusions from a community-wide sample", "text": "A mailed questionnaire was used to obtain comparative risk judgments for 32 different hazards from a random sample of 296 individuals living in central New Jersey. The results demonstrate that an optimistic bias about susceptibility to harm-a tendency to claim that one is less at risk than one's peers\u2014is not limited to any particular age, sex, educational, or occupational group. It was found that an optimistic bias is often introduced when people extrapolate from their past experience to estimate their future vulnerability. Thus, the hazards most likely to elicit unrealistic optimism are those associated with the belief (often incorrect) that if the problem has not yet appeared, it is unlikely to occur in the future. Optimistic biases also increase with the perceived preventability of a hazard and decrease with perceived frequency and personal experience. Other data presented illustrate the inconsistent relationships between personal risk judgments and objective risk factors."}
{"_id": "75959c002c7c8f1d6a677b0f09d31931a0499d9d", "title": "The cultural construction of self-enhancement: an examination of group-serving biases.", "text": "Self-serving biases, found routinely in Western samples, have not been observed in Asian samples. Yet given the orientation toward individualism and collectivism in these 2 cultures, respectively, it is imperative to examine whether parallel differences emerge when the target of evaluation is the group. It may be that Asians show a group-serving bias parallel to the Western self-serving bias. In 2 studies, group-serving biases were compared across European Canadian, Asian Canadian, and Japanese students. Study 1 revealed that Japanese students evaluated a family member less positively than did both groups of Canadian students. Study 2 replicated this pattern with students' evaluations of their universities. The data suggest that cultural differences in enhancement biases are robust, generalizing to individuals' evaluations of their groups."}
{"_id": "75a3e2cc31457039106ceae2b3ee66e9243d4696", "title": "Unrealistic optimism about susceptibility to health problems", "text": "In this study, 100 college students compared their own chances of experiencing 45 different health- and life-threatening problems with the chances of their peers. They showed a significant optimistic bias for 34 of these hazards, consistently considering their own chances to be below average. Attempts to account for the amount of bias evoked by different hazards identified perceived controllability, lack of previous experience, and the belief that the problem appears during childhood as factors that tend to increase unrealistic optimism. The investigation also examined the importance of beliefs and emotions as determinants of self-reported interest in adopting precautions to reduce one's risk. It found that: (a) beliefs about risk likelihood, beliefs about risk severity, and worry about the risk all made independent contributions to interest in risk reduction; (b) unrealistic optimism undermined interest in risk reduction indirectly by decreasing worry; and (c) beliefs about risk likelihood and severity were not sufficient to explain the amount of worry expressed about different hazards."}
{"_id": "bed359015324e4e105e95cce895cc79cae2bc2e7", "title": "Flawed Self-Assessment: Implications for Health, Education, and the Workplace.", "text": "Research from numerous corners of psychological inquiry suggests that self-assessments of skill and character are often flawed in substantive and systematic ways. We review empirical findings on the imperfect nature of self-assessment and discuss implications for three real-world domains: health, education, and the workplace. In general, people's self-views hold only a tenuous to modest relationship with their actual behavior and performance. The correlation between self-ratings of skill and actual performance in many domains is moderate to meager-indeed, at times, other people's predictions of a person's outcomes prove more accurate than that person's self-predictions. In addition, people overrate themselves. On average, people say that they are \"above average\" in skill (a conclusion that defies statistical possibility), overestimate the likelihood that they will engage in desirable behaviors and achieve favorable outcomes, furnish overly optimistic estimates of when they will complete future projects, and reach judgments with too much confidence. Several psychological processes conspire to produce flawed self-assessments. Research focusing on health echoes these findings. People are unrealistically optimistic about their own health risks compared with those of other people. They also overestimate how distinctive their opinions and preferences (e.g., discomfort with alcohol) are among their peers-a misperception that can have a deleterious impact on their health. Unable to anticipate how they would respond to emotion-laden situations, they mispredict the preferences of patients when asked to step in and make treatment decisions for them. Guided by mistaken but seemingly plausible theories of health and disease, people misdiagnose themselves-a phenomenon that can have severe consequences for their health and longevity. Similarly, research in education finds that students' assessments of their performance tend to agree only moderately with those of their teachers and mentors. Students seem largely unable to assess how well or poorly they have comprehended material they have just read. They also tend to be overconfident in newly learned skills, at times because the common educational practice of massed training appears to promote rapid acquisition of skill-as well as self-confidence-but not necessarily the retention of skill. Several interventions, however, can be introduced to prompt students to evaluate their skill and learning more accurately. In the workplace, flawed self-assessments arise all the way up the corporate ladder. Employees tend to overestimate their skill, making it difficult to give meaningful feedback. CEOs also display overconfidence in their judgments, particularly when stepping into new markets or novel projects-for example, proposing acquisitions that hurt, rather then help, the price of their company's stock. We discuss several interventions aimed at circumventing the consequences of such flawed assessments; these include training people to routinely make cognitive repairs correcting for biased self-assessments and requiring people to justify their decisions in front of their peers. The act of self-assessment is an intrinsically difficult task, and we enumerate several obstacles that prevent people from reaching truthful self-impressions. 
We also propose that researchers and practitioners should recognize self-assessment as a coherent and unified area of study spanning many subdisciplines of psychology and beyond. Finally, we suggest that policymakers and other people who makes real-world assessments should be wary of self-assessments of skill, expertise, and knowledge, and should consider ways of repairing self-assessments that may be flawed."}
{"_id": "08aeae7f9899a161db6a78e9566ed8b0df7a97fc", "title": "Judgment under Uncertainty: Heuristics and Biases.", "text": "This article described three heuristics that are employed in making judgements under uncertainty: (i) representativeness, which is usually employed when people are asked to judge the probability that an object or event A belongs to class or process B; (ii) availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development; and (iii) adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available. These heuristics are highly economical and usually effective, but they lead to systematic and predictable errors. A better understanding of these heuristics and of the biases to which they lead could improve judgements and decisions in situations of uncertainty."}
{"_id": "08a2dbd8932d5060a36c93ad18b2857f4bbbe684", "title": "Fault tolerant parallel data-intensive algorithms", "text": "Fault-tolerance is rapidly becoming a crucial issue in high-end and distributed computing, as increasing number of cores are decreasing the mean-time to failure of the systems. In this work, we present an algorithm-based fault tolerance solution that handles fail-stop failures for a class of iterative data intensive algorithms. We intelligently replicate the data to minimize data loss in multiple failures and decrease re-execution in recovery by little modifications in the algorithms. We evaluate our approach by using two data mining algorithms, kmeans and Apriori. We show that our approach has negligible overhead and allows us to gracefully handle different number of failures. In addition, our approach outperforms Hadoop both in absence and presence of failures."}
{"_id": "2ec2038f229c40bd37552c26545743f02fe1715d", "title": "Confidence sets for persistence diagrams", "text": "Persistent homology is a method for probing topological properties of point clouds and functions. The method involves tracking the birth and death of topological features as one varies a tuning parameter. Features with short lifetimes are informally considered to be \u201ctopological noise,\u201d and those with a long lifetime are considered to be \u201ctopological signal.\u201d In this paper, we bring some statistical ideas to persistent homology. In particular, we derive confidence sets that allow us to separate topological signal from topological noise."}
{"_id": "3e27ee7eb45dc2269fe2364f31f88b35fcd16cfc", "title": "Word Semantic Representations using Bayesian Probabilistic Tensor Factorization", "text": "Many forms of word relatedness have been developed, providing different perspectives on word similarity. We introduce a Bayesian probabilistic tensor factorization model for synthesizing a single word vector representation and per-perspective linear transformations from any number of word similarity matrices. The resulting word vectors, when combined with the per-perspective linear transformation, approximately recreate while also regularizing and generalizing, each word similarity perspective. Our method can combine manually created semantic resources with neural word embeddings to separate synonyms and antonyms, and is capable of generalizing to words outside the vocabulary of any particular perspective. We evaluated the word embeddings with GRE antonym questions, the result achieves the state-ofthe-art performance."}
{"_id": "247d8d94500b9f22125504794169468e1b4a639f", "title": "Facial performance enhancement using dynamic shape space analysis", "text": "The facial performance of an individual is inherently rich in subtle deformation and timing details. Although these subtleties make the performance realistic and compelling, they often elude both motion capture and hand animation. We present a technique for adding fine-scale details and expressiveness to low-resolution art-directed facial performances, such as those created manually using a rig, via marker-based capture, by fitting a morphable model to a video, or through Kinect reconstruction using recent faceshift technology. We employ a high-resolution facial performance capture system to acquire a representative performance of an individual in which he or she explores the full range of facial expressiveness. From the captured data, our system extracts an expressiveness model that encodes subtle spatial and temporal deformation details specific to that particular individual. Once this model has been built, these details can be transferred to low-resolution art-directed performances. We demonstrate results on various forms of input; after our enhancement, the resulting animations exhibit the same nuances and fine spatial details as the captured performance, with optional temporal enhancement to match the dynamics of the actor. Finally, we show that our technique outperforms the current state-of-the-art in example-based facial animation."}
{"_id": "3a8aa4cc6142d433ff55bea8a0cb980103ea15e9", "title": "A fast image dehazing algorithm based on negative correction", "text": ""}
{"_id": "63405b75344e62004c654de86b937f3c32f16d41", "title": "Improving Distant Supervision with Maxpooled Attention and Sentence-Level Supervision", "text": "We propose an effective multitask learning setup for reducing distant supervision noise by leveraging sentence-level supervision. We show how sentence-level supervision can be used to improve the encoding of individual sentences, and to learn which input sentences are more likely to express the relationship between a pair of entities. We also introduce a novel neural architecture for collecting signals from multiple input sentences, which combines the benefits of attention and maxpooling. The proposed method increases AUC by 10% (from 0.261 to 0.284), and outperforms recently published results on the FBNYT dataset."}
{"_id": "c19e3165c8bbffcb08ad23747debd66eccb225e6", "title": "Enhanced Handover Decision Algorithm in Heterogeneous Wireless Network", "text": "Transferring a huge amount of data between different network locations over the network links depends on the network's traffic capacity and data rate. Traditionally, a mobile device may be moved to achieve the operations of vertical handover, considering only one criterion, that is the Received Signal Strength (RSS). The use of a single criterion may cause service interruption, an unbalanced network load and an inefficient vertical handover. In this paper, we propose an enhanced vertical handover decision algorithm based on multiple criteria in the heterogeneous wireless network. The algorithm consists of three technology interfaces: Long-Term Evolution (LTE), Worldwide interoperability for Microwave Access (WiMAX) and Wireless Local Area Network (WLAN). It also employs three types of vertical handover decision algorithms: equal priority, mobile priority and network priority. The simulation results illustrate that the three types of decision algorithms outperform the traditional network decision algorithm in terms of handover number probability and the handover failure probability. In addition, it is noticed that the network priority handover decision algorithm produces better results compared to the equal priority and the mobile priority handover decision algorithm. Finally, the simulation results are validated by the analytical model."}
{"_id": "eee3f5c7848e296f9b4cf05e7d2ccc8a0a9ec2f0", "title": "A FRAMEWORK FOR DESIGNING MOBILE QURANIC MEMORIZATION TOOL USING MULTIMEDIA INTERACTIVE LEARNING METHOD FOR CHILDREN", "text": "Quran is the fundamental holy book of Islam. Among the most important concerns is to learn it by heart. However, current method in Quranic schools is becoming less effective towards young generation. Thus, there is a need to find alternative solutions to memorize Quran for young Muslims. Mobile learning is an alternative to conventional learning that will support the existing method. Mobile devices have made an immediate impact on teaching and learning practices. However, for mobile learning to be effective for children in memorizing Quran, it is necessary to find specific design guidelines and pedagogy to this learning method. This paper aims at providing a unifying framework for developing Quran memorizer application using multimedia interactive method and learning theories for mobile learning."}
{"_id": "d74e38cfe23305d7ccf06899164836d59da0bbb1", "title": "The WHO public health approach to HIV treatment and care: looking back and looking ahead.", "text": "In 2006, WHO set forth its vision for a public health approach to delivering antiretroviral therapy. This approach has been broadly adopted in resource-poor settings and has provided the foundation for scaling up treatment to over 19\u00b75 million people. There is a global commitment to end the AIDS epidemic as a public health threat by 2030 and, to support this goal, there are opportunities to adapt the public health approach to meet the ensuing challenges. These challenges include the need to improve identification of people with HIV infection through expanded approaches to testing; further simplify and improve treatment and laboratory monitoring; adapt the public health approach to concentrated epidemics; and link HIV testing, treatment, and care to HIV prevention. Implementation of these key public health principles will bring countries closer to the goals of controlling the HIV epidemic and providing universal health coverage."}
{"_id": "3f51995bcde8e4e508027e6da7dd780c5645470b", "title": "The effects of students' motivation, cognitive load and learning anxiety in gamification software engineering education: a structural equation modeling study", "text": "Past research has proven the significant effects of game-based learning on learning motivation and academic performance, and described the key factors in game-based design. Nonetheless, research on the correlations among learning motivation, cognitive load, learning anxiety and academic performance in gamified learning environments has been minimal. This study, therefore, aims to develop a Gamification Software Engineering Education Learning System (GSEELS) and evaluate the effects of gamification, learning motivation, cognitive load and learning anxiety on academic performance. By applying Structural Equation Modeling (SEM) to the empirical research, the questionnaire contains: 1. a Gamification Learning Scale; 2. a Learning Motivation Scale; 3. a Cognitive Load Scale; 4. a Learning Anxiety Scale; and 5. an Academic Performance Scale. A total of 107 undergraduates in two classes participated in this study. The Structural Equation Modeling (SEM) analysis includes the path directions and relationship between descriptive statistics, measurement model, structural model evaluation and five variables. The research results support all nine hypotheses, and the research findings also show the effects of cognitive load on learning anxiety, with strong learning motivation resulting from a low learning anxiety. As a result, it is further proven in this study that a well-designed GSEELS would affect student learning motivation and academic performance. Finally, the relationship model between gamification learning, learning motivation, cognitive load, learning anxiety and academic performance is elucidated, and four suggestions are proffered for instructors of software engineering education courses and for further research, so as to assist instructors in the application of favorable gamification teaching strategies."}
{"_id": "37b3100d4a069513f2639f07fe7f99f96567b675", "title": "Towards a global participatory platform : democratising open data , complexity science and collective intelligence", "text": "The FuturICT project seeks to use the power of big data, analytic models grounded in complexity science, and the collective intelligence they yield for societal benefit. Accordingly, this paper argues that these new tools should not remain the preserve of restricted 110 The European Physical Journal Special Topics government, scientific or corporate \u00e9lites, but be opened up for societal engagement and critique. To democratise such assets as a public good, requires a sustainable ecosystem enabling different kinds of stakeholder in society, including but not limited to, citizens and advocacy groups, school and university students, policy analysts, scientists, software developers, journalists and politicians. Our working name for envisioning a sociotechnical infrastructure capable of engaging such a wide constituency is the Global Participatory Platform (GPP). We consider what it means to develop a GPP at the different levels of data, models and deliberation, motivating a framework for different stakeholders to find their ecological niches at different levels within the system, serving the functions of (i) sensing the environment in order to pool data, (ii) mining the resulting data for patterns in order to model the past/present/future, and (iii) sharing and contesting possible interpretations of what those models might mean, and in a policy context, possible decisions. A research objective is also to apply the concepts and tools of complexity science and social science to the project\u2019s own work. We therefore conceive the global participatory platform as a resilient, epistemic ecosystem, whose design will make it capable of selforganization and adaptation to a dynamic environment, and whose structure and contributions are themselves networks of stakeholders, challenges, issues, ideas and arguments whose structure and dynamics can be modelled and analysed."}
{"_id": "5e1eabca76e84a0913dabaacc63b1d03400abcb4", "title": "Comparison of supervised learning methods for spike time coding in spiking neural networks", "text": "In this review we focus our attention on the supervised learning methods for spike time coding in Spiking Neural Networks (SNN). This study is motivated by the recent experimental results on information coding in the biological neural systems which suggest that precise timing of individual spikes may be essential for efficient computation in the brain. We pose a fundamental question, what paradigms of neural temporal coding can be implemented with the recent learning methods. In order to answer this question we discuss various approaches to the considered learning task. We shortly describe the particular learning algorithms and report the results of experiments. Finally we discuss properties, assumptions and limitations of each method. We complete this review with a comprehensive list of pointers to the literature."}
{"_id": "a112270a43307179ac0f97bb34bc459e3a3d0300", "title": "Enhanced parallel cat swarm optimization based on the Taguchi method", "text": "In this paper, we present an enhanced parallel cat swarm optimization (EPCSO) method for solving numerical optimization problems. The parallel cat swarm optimization (PCSO) method is an optimization algorithm designed to solve numerical optimization problems under the conditions of a small population size and a few iteration numbers. The Taguchi method is widely used in the industry for optimizing the product and the process conditions. By adopting the Taguchi method into the tracing mode process of the PCSO method, we propose the EPCSO method with better accuracy and less computational time. In this paper, five test functions are used to evaluate the accuracy of the proposed EPCSO method. The experimental results show that the proposed EPCSO method gets higher accuracies than the existing PSO-based methods and requires less computational time than the PCSO method. We also apply the proposed method to solve the aircraft schedule recovery problem. The experimental results show that the proposed EPCSO method can provide the optimum recovered aircraft schedule in a very short time. The proposed EPCSO method gets the same recovery schedule having the same total delay time, the same delayed flight numbers and the same number of long delay flights as the Liu, Chen, and Chou method (2009). The optimal solutions can be found by the proposed EPCSO method in a very short time. 2011 Elsevier Ltd. All rights reserved."}
{"_id": "7ff528c57aeaf0afadcd03d4c7cbcb3d8c7eb5d0", "title": "Conducting a meta-ethnography of qualitative literature: Lessons learnt", "text": "BACKGROUND\nQualitative synthesis has become more commonplace in recent years. Meta-ethnography is one of several methods for synthesising qualitative research and is being used increasingly within health care research. However, many aspects of the steps in the process remain ill-defined.\n\n\nDISCUSSION\nWe utilized the seven stages of the synthesis process to synthesise qualitative research on adherence to tuberculosis treatment. In this paper we discuss the methodological and practical challenges faced; of particular note are the methods used in our synthesis, the additional steps that we found useful in clarifying the process, and the key methodological challenges encountered in implementing the meta-ethnographic approach. The challenges included shaping an appropriate question for the synthesis; identifying relevant studies; assessing the quality of the studies; and synthesising findings across a very large number of primary studies from different contexts and research traditions. We offer suggestions that may assist in undertaking meta-ethnographies in the future.\n\n\nSUMMARY\nMeta-ethnography is a useful method for synthesising qualitative research and for developing models that interpret findings across multiple studies. Despite its growing use in health research, further research is needed to address the wide range of methodological and epistemological questions raised by the approach."}
{"_id": "f76738a73ec664d5c251a1bfe38b6921700aa542", "title": "Linking CRM Strategy , Customer Performance Measures and Performance in the Hotel Industry", "text": "Customer relationship management (CRM) has been increasingly adopted because of its benefits of greater customer satisfaction and loyalty, which in turn, leads to enhanced financial and competitive performance. This paper reports on a study that examines the relationship between CRM strategy and performance and determines whether the use of customer performance measures plays a mediating role in the relationship between CRM strategy and performance. This study contributes to the limited literature on CRM strategy since little is known about the use of CRM strategy and customer performance measures and their relation with performance in the hotel industry in Malaysia. Data were collected through a questionnaire survey of hotels in Malaysia. Hierarchical regression analyses on a sample of 95 hotels revealed that only the information technology dimension of CRM strategy has a significant and positive effect on performance. In addition, the hypothesis concerning the role of customer performance measures as a mediator was supported."}
{"_id": "3f5876c3e7c5472d86eb94dda660e9e2c40d0474", "title": "Predicting NDUM Student's Academic Performance Using Data Mining Techniques", "text": "The ability to predict the students\u2019 academic performance is very important in institution educational system. Recently some researchers have been proposed data mining techniques for higher education. In this paper, we compare two data mining techniques which are: Artificial Neural Network (ANN) and the combination of clustering and decision tree classification techniques for predicting and classifying students\u2019 academic performance. The data set used in this research is the student data of Computer Science Department, Faculty of Science and Defence Technology, National Defence University of Malaysia (NDUM)."}
{"_id": "4ce5925c5f1b1ae1b0666623ffd8da39431e52f0", "title": "Speeding up k-means Clustering by Bootstrap Averaging", "text": "K-means clustering is one of the most popular clustering algorithms used in data mining. However, clustering is a time consuming task, particularly w ith the large data sets found in data mining. In this p aper we show how bootstrap averaging with k-means can produce results comparable to clustering all of the data but in much less time. The approach of bootstr ap (sampling with replacement) averaging consists of running k-means clustering to convergence on small bootstrap samples of the training data and averagin g similar cluster centroids to obtain a single model. We show why our approach should take less computation time and empirically illustrate its benefits. We sh ow that the performance of our approach is a monotonic function of the size of the bootstrap sample. Howev er, knowing the size of the bootstrap sample that yield s as good results as clustering the entire data set rema ins an open and important question."}
{"_id": "33b98253407829fae510a74ca1b4b100b4178b39", "title": "An Inoperability Input-Output Model (IIM) for disruption propagation analysis", "text": "Today's Supply Chains (SCs) are global and getting more complex, due to the exigencies of the world economy, making them hard to manage. This is exacerbated by the increasingly frequent occurrence of natural disasters and other high impact disruptions. To stay competitive, companies are therefore seeking ways to better understand the impact of such disruptions to their SCs. In addressing this need, this paper proposes an approach for disruption propagation analysis. We develop a method based on a variation of an Inoperability Input-Output Model (adapted from a Leontief I-O model) to quantify the impact of disruptions across the entire SC. We then analyse the factors that have the most influential impacts on the SCs during disruptions. Initial results show that trading volume handled by a company/ node is an important factor in determining the disruption impact to SCs, besides the number of SC partners (connections) as implied from previous work."}
{"_id": "7ceade21aef40866cdbf6207e232b32f72b4e6dd", "title": "CryptVMI: Encrypted Virtual Machine Introspection in the Cloud", "text": "Virtualization techniques are the key in both public and private cloud computing environments. In such environments, multiple virtual instances are running on the same physical machine. The logical isolation between systems makes security assurance weaker than physically isolated systems. Thus, Virtual Machine Introspection techniques become essential to prevent the virtual system from being vulnerable to attacks. However, this technique breaks down the borders of the segregation between multiple tenants, which should be avoided in a public cloud computing environment. In this paper, we focus on building an encrypted Virtual Machine Introspection system, CryptVMI, to address the above concern, especially in a public cloud system. Our approach maintains a query handler on the management node to handle encrypted queries from user clients. We pass the query to the corresponding compute node that holds the virtual instance queried. The introspection application deployed on the compute node processes the query and acquires the encrypted results from the virtual instance for the user. This work shows our design and preliminary implementation of this system."}
{"_id": "d0b179ecf80d487cbf7342fbd1c0072b1b5c2c23", "title": "Acute Ischemic Stroke Therapy Overview.", "text": "The treatment of acute ischemic stroke has undergone dramatic changes recently subsequent to the demonstrated efficacy of intra-arterial (IA) device-based therapy in multiple trials. The selection of patients for both intravenous and IA therapy is based on timely imaging with either computed tomography or magnetic resonance imaging, and if IA therapy is considered noninvasive, angiography with one of these modalities is necessary to document a large-vessel occlusion amenable for intervention. More advanced computed tomography and magnetic resonance imaging studies are available that can be used to identify a small ischemic core and ischemic penumbra, and this information will contribute increasingly in treatment decisions as the therapeutic time window is lengthened. Intravenous thrombolysis with tissue-type plasminogen activator remains the mainstay of acute stroke therapy within the initial 4.5 hours after stroke onset, despite the lack of Food and Drug Administration approval in the 3- to 4.5-hour time window. In patients with proximal, large-vessel occlusions, IA device-based treatment should be initiated in patients with small/moderate-sized ischemic cores who can be treated within 6 hours of stroke onset. The organization and implementation of regional stroke care systems will be needed to treat as many eligible patients as expeditiously as possible. Novel treatment paradigms can be envisioned combining neuroprotection with IA device treatment to potentially increase the number of patients who can be treated despite long transport times and to ameliorate the consequences of reperfusion injury. Acute stroke treatment has entered a golden age, and many additional advances can be anticipated."}
{"_id": "d483efc550245d222da86f0da529ed6ccaa440d4", "title": "Automated Non-Gaussian Clustering of Polarimetric Synthetic Aperture Radar Images", "text": "This paper presents an automatic image segmentation method for polarimetric synthetic aperture radar data. It utilizes the full polarimetric information and incorporates texture by modeling with a non-Gaussian distribution for the complex scattering coefficients. The modeling is based upon the well-known product model, with a Gamma-distributed texture parameter leading to the K-Wishart model for the covariance matrix. The automatic clustering is achieved through a finite mixture model estimated with a modified expectation maximization algorithm. We include an additional goodness-of-fit test stage that allows for splitting and merging of clusters. This not only improves the model fit of the clusters, but also dynamically selects the appropriate number of clusters. The resulting image segmentation depicts the statistically significant clusters within the image. A key feature is that the degree of sub-sampling of the input image will affect the detail level of the clustering, revealing only the major classes or a variable level of detail. Real-world examples are shown to demonstrate the technique."}
{"_id": "4e99f23bc4e6f510750ab9035cd7bf7273053066", "title": "Mining Vessel Tracking Data for Maritime Domain Applications", "text": "The growing number of remote sensing systems and ship reporting technologies (e.g. Automatic Identification System, Long Range Identification and Tracking, radar tracking, Earth Observation) are generating an overwhelming amount of spatio-temporal and geographically distributed data related to vessels and their movements. Research on reliable data mining techniques has proven essential to the discovery of knowledge from such increasingly available information on ship traffic at sea. Data driven knowledge discovery has very recently demonstrated its value in fields that go beyond the original maritime safety and security remits of such data. They include, but are not limited to, fisheries management, maritime spatial planning, gridding ship emissions, mapping activities at sea, risk assessment of offshore platforms, and trade indicators. The extraction of useful information from maritime Big Data is thus a key element in providing operational authorities, policy-makers and scientists with supporting tools to understand what is happening at sea and improve maritime knowledge. This work will provide a survey of the recent JRC research activities relevant to automatic anomaly detection and knowledge discovery in the maritime domain. Data mining, data analytics and predictive analysis examples are introduced using real data. In addition, this paper presents approaches to detect anomalies in reporting messages and unexpected behaviours at sea."}
{"_id": "c7bedb7ab92fc89d1035bd288a075d10acbdf7de", "title": "Requirements Engineering with Use Cases - a Basis for Software Development", "text": "Successful development of software systems depends on the quality of the requirements engineering process. Use cases and scenarios are promising vehicles for eliciting, specifying and validating requirements. This thesis investigates the role of use case modelling in requirements engineering and its relation to system verification and validation. The thesis includes studies of concepts and representations in use case modelling. Semantic issues are discussed and notations based on natural and graphical languages are provided, which allow a hierarchical structure and enable representation at different abstraction levels. Two different strategies for integrating use case modelling with system testing are presented and evaluated, showing how use cases can be a basis for test case generation and reliability assessment. An experiment on requirements validation using inspections with perspective-based reading is also reported, where one of the perspectives applies use case modelling. The results of the experiment indicate that a combination of multiple perspectives may not give higher defect detection rate compared to single perspective reading. Pilot studies of the transition from use case based requirements to high-level design are described, where use cases are successfully applied for documenting how functional requirements are distributed on architectural elements. The investigation of an industrial requirements engineering process improvement programme is also reported, where the introduction of a release-driven prioritisation method contributed to a measurable improvement in delivery precision and product quality. The results presented in the thesis provide further support for how to successfully apply requirements engineering with use cases as an important basis for software development."}
{"_id": "4e845b9780595ff9f18e0ae1d99459253ae3d2b7", "title": "Skyline: an open source document editor for creating and analyzing targeted proteomics experiments", "text": "SUMMARY\nSkyline is a Windows client application for targeted proteomics method creation and quantitative data analysis. It is open source and freely available for academic and commercial use. The Skyline user interface simplifies the development of mass spectrometer methods and the analysis of data from targeted proteomics experiments performed using selected reaction monitoring (SRM). Skyline supports using and creating MS/MS spectral libraries from a wide variety of sources to choose SRM filters and verify results based on previously observed ion trap data. Skyline exports transition lists to and imports the native output files from Agilent, Applied Biosystems, Thermo Fisher Scientific and Waters triple quadrupole instruments, seamlessly connecting mass spectrometer output back to the experimental design document. The fast and compact Skyline file format is easily shared, even for experiments requiring many sample injections. A rich array of graphs displays results and provides powerful tools for inspecting data integrity as data are acquired, helping instrument operators to identify problems early. The Skyline dynamic report designer exports tabular data from the Skyline document model for in-depth analysis with common statistical tools.\n\n\nAVAILABILITY\nSingle-click, self-updating web installation is available at http://proteome.gs.washington.edu/software/skyline. This web site also provides access to instructional videos, a support board, an issues list and a link to the source code project."}
{"_id": "30419576d2ed1a2d699683ac24a4b5ec5b93f093", "title": "Real time action recognition using histograms of depth gradients and random decision forests", "text": "We propose an algorithm which combines the discriminative information from depth images as well as from 3D joint positions to achieve high action recognition accuracy. To avoid the suppression of subtle discriminative information and also to handle local occlusions, we compute a vector of many independent local features. Each feature encodes spatiotemporal variations of depth and depth gradients at a specific space-time location in the action volume. Moreover, we encode the dominant skeleton movements by computing a local 3D joint position difference histogram. For each joint, we compute a 3D space-time motion volume which we use as an importance indicator and incorporate in the feature vector for improved action discrimination. To retain only the discriminant features, we train a random decision forest (RDF). The proposed algorithm is evaluated on three standard datasets and compared with nine state-of-the-art algorithms. Experimental results show that, on the average, the proposed algorithm outperform all other algorithms in accuracy and have a processing speed of over 112 frames/second."}
{"_id": "50e983fd06143cad9d4ac75bffc2ef67024584f2", "title": "LIBLINEAR: A Library for Large Linear Classification", "text": "LIBLINEAR is an open source library for large-scale linear classification. It supports logistic regression and linear support vector machines. We provide easy-to-use command-line tools and library calls for users and developers. Comprehensive documents are available for both beginners and advanced users. Experiments demonstrate that LIBLINEAR is very efficient on large sparse data sets."}
{"_id": "75cbc0eec23375df69de6c64e2f48689dde417c5", "title": "Enhanced Computer Vision With Microsoft Kinect Sensor: A Review", "text": "With the invention of the low-cost Microsoft Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use. The complementary nature of the depth and visual information provided by the Kinect sensor opens up new opportunities to solve fundamental problems in computer vision. This paper presents a comprehensive review of recent Kinect-based computer vision algorithms and applications. The reviewed approaches are classified according to the type of vision problems that can be addressed or enhanced by means of the Kinect sensor. The covered topics include preprocessing, object tracking and recognition, human activity analysis, hand gesture analysis, and indoor 3-D mapping. For each category of methods, we outline their main algorithmic contributions and summarize their advantages/differences compared to their RGB counterparts. Finally, we give an overview of the challenges in this field and future research trends. This paper is expected to serve as a tutorial and source of references for Kinect-based computer vision researchers."}
{"_id": "aa358f4a0578234e301a305d8c5de8d859083a4c", "title": "Discriminative Orderlet Mining for Real-Time Recognition of Human-Object Interaction", "text": "This paper presents a novel visual representation, called orderlets, for real-time human action recognition with depth sensors. An orderlet is a middle level feature that captures the ordinal pattern among a group of low level features. For skeletons, an orderlet captures specific spatial relationship among a group of joints. For a depth map, an orderlet characterizes a comparative relationship of the shape information among a group of subregions. The orderlet representation has two nice properties. First, it is insensitive to small noise since an orderlet only depends on the comparative relationship among individual features. Second, it is a frame-level representation thus suitable for real-time online action recognition. Experimental results demonstrate its superior performance on online action recognition and cross-environment action recognition."}
{"_id": "9a76cf35c74eb3d17f03973d32415c9bab816acb", "title": "Pedestrian Motion Tracking and Crowd Abnormal Behavior Detection Based on Intelligent Video Surveillance", "text": "Pedestrian tracking and detection of crowd abnormal activity under dynamic and complex background using Intelligent Video Surveillance (IVS) system are beneficial for security in public places. This paper presents a pedestrian tracking method combing Histogram of Oriented Gradients (HOG) detection and particle filter. This method regards the particle filter as the tracking framework, identifies the target area according to the result of HOG detection and modifies particle sampling constantly. Our method can track pedestrians in dynamic backgrounds more accurately compared with the traditional particle filter algorithms. Meanwhile, a method to detect crowd abnormal activity is also proposed based on a model of crowd features using Mixture of Gaussian (MOG). This method calculates features of crowd-interest points, then establishes the crowd features model using MOG, conducts self-adaptive updating and detects abnormal activity by matching the input feature with model distribution. Experiments show our algorithm can efficiently detect abnormal velocity and escape panic in crowds with a high detection rate and a relatively low false alarm rate."}
{"_id": "5501e5c600a13eafc1fe0f75b6e09a2b40947844", "title": "Harnessing Context Sensing to Develop a Mobile Intervention for Depression", "text": "BACKGROUND\nMobile phone sensors can be used to develop context-aware systems that automatically detect when patients require assistance. Mobile phones can also provide ecological momentary interventions that deliver tailored assistance during problematic situations. However, such approaches have not yet been used to treat major depressive disorder.\n\n\nOBJECTIVE\nThe purpose of this study was to investigate the technical feasibility, functional reliability, and patient satisfaction with Mobilyze!, a mobile phone- and Internet-based intervention including ecological momentary intervention and context sensing.\n\n\nMETHODS\nWe developed a mobile phone application and supporting architecture, in which machine learning models (ie, learners) predicted patients' mood, emotions, cognitive/motivational states, activities, environmental context, and social context based on at least 38 concurrent phone sensor values (eg, global positioning system, ambient light, recent calls). The website included feedback graphs illustrating correlations between patients' self-reported states, as well as didactics and tools teaching patients behavioral activation concepts. Brief telephone calls and emails with a clinician were used to promote adherence. We enrolled 8 adults with major depressive disorder in a single-arm pilot study to receive Mobilyze! and complete clinical assessments for 8 weeks.\n\n\nRESULTS\nPromising accuracy rates (60% to 91%) were achieved by learners predicting categorical contextual states (eg, location). For states rated on scales (eg, mood), predictive capability was poor. Participants were satisfied with the phone application and improved significantly on self-reported depressive symptoms (beta(week) = -.82, P < .001, per-protocol Cohen d = 3.43) and interview measures of depressive symptoms (beta(week) = -.81, P < .001, per-protocol Cohen d = 3.55). Participants also became less likely to meet criteria for major depressive disorder diagnosis (b(week) = -.65, P = .03, per-protocol remission rate = 85.71%). Comorbid anxiety symptoms also decreased (beta(week) = -.71, P < .001, per-protocol Cohen d = 2.58).\n\n\nCONCLUSIONS\nMobilyze! is a scalable, feasible intervention with preliminary evidence of efficacy. To our knowledge, it is the first ecological momentary intervention for unipolar depression, as well as one of the first attempts to use context sensing to identify mental health-related states. Several lessons learned regarding technical functionality, data mining, and software development process are discussed.\n\n\nTRIAL REGISTRATION\nClinicaltrials.gov NCT01107041; http://clinicaltrials.gov/ct2/show/NCT01107041 (Archived by WebCite at http://www.webcitation.org/60CVjPH0n)."}
{"_id": "7563ca67a1b53f8d0e252f90431f226849f7e57c", "title": "Color and Texture Based Identification and Classification of food Grains using different Color Models and Haralick features", "text": "This paper presents the study on identification and classification of food grains using different color models such as L*a*b, HSV, HSI and YCbCr by combining color and texture features without performing preprocessing. The K-NN and minimum distance classifier are used to identify and classify the different types of food grains using local and global features. Texture and color features are the important features used in the classification of different objects. The local features like Haralick features are computed from co-occurrence matrix as texture features and global features from cumulative histogram are computed along with color features. The experiment was carried out on different food grains classes. The non-uniformity of RGB color space is eliminated by L*a*b, HSV, HSI and YCbCr color space. The correct classification result achieved for different color models is quite good. KeywordsFeature Extraction; co-occurrence matrix; texture information; Global Features; cumulative histogram; RGB, L*a*b, HSV, HSI and YCbCr color model."}
{"_id": "2c5fead7074fc2115fe1e15af7b5cd7f875c2aed", "title": "ZIGZAG: An Efficient Peer-to-Peer Scheme for Media Streaming", "text": "We design a peer-to-peer technique called ZIGZAG for single-source media streaming. ZIGZAG allows the media server to distribute content to many clients by organizing them into an appropriate tree rooted at the server. This applicationlayer multicast tree has a height logarithmic with the number of clients and a node degree bounded by a constant. This helps reduce the number of processing hops on the delivery path to a client while avoiding network bottleneck. Consequently, the end-to-end delay is kept small. Although one could build a tree satisfying such properties easily, an efficient control protocol between the nodes must be in place to maintain the tree under the effects of network dynamics and unpredictable client behaviors. ZIGZAG handles such situations gracefully requiring a constant amortized control overhead. Especially, failure recovery can be done regionally with little impact on the existing clients and mostly no burden on the server."}
{"_id": "5fc370f6bd36390b73d82e23c10547708e6d7421", "title": "GPSP: Graph Partition and Space Projection based Approach for Heterogeneous Network Embedding", "text": "In this paper, we proposeGPSP, a novel Graph Partition and Space Projection based approach, to learn the representation of a heterogeneous network that consists of multiple types of nodes and links. Concretely, we first partition the heterogeneous network into homogeneous and bipartite subnetworks. Then, the projective relations hidden in bipartite subnetworks are extracted by learning the projective embedding vectors. Finally, we concatenate the projective vectors from bipartite subnetworks with the ones learned from homogeneous subnetworks to form the final representation of the heterogeneous network. Extensive experiments are conducted on a real-life dataset. The results demonstrate that GPSP outperforms the state-of-the-art baselines in two key network mining tasks: node classification and clustering2."}
{"_id": "abaabb822fa7834c95fd7d4632a64e97d28f0710", "title": "A dexterous gripper for in-hand manipulation", "text": "During the last few decades, robotic grippers are developed by research community to solve grasping complexities of several objects as their primary objective. Due to the increasing demands of industries, many issues are rising and remain unsolved such as in-hand manipulation, placing object with appropriate posture. Operations like twisting, altering orientation of object, in a hand, requires significant dexterity of the gripper that must be achieved from a compact mechanical design at the first place. In this paper, a newly designed gripper is proposed whose primary goal is to solve in-hand manipulation. The gripper is derived from four identical finger modules; each module has four DOFs, consists of a five bar linkage to grasp and additional two DOFs mechanism similar to ball spline is conceived to manipulate object. Hence the easily constructible gripper is capable to grasp and manipulate plurality of object. As a preliminary inspection, an optimized platform that represents the central concept of the proposed gripper is developed, to the aim of evaluating kinematic feasibilities of manipulation of different objects."}
{"_id": "930a6ea926d1f39dc6a0d90799d18d7995110862", "title": "Privacy-preserving photo sharing based on a secure JPEG", "text": "Sharing photos online is a common activity on social networks and photo hosting platforms, such as Facebook, Pinterest, Instagram, or Flickr. However, after reports of citizens surveillance by governmental agencies and the scandalous leakage of celebrities private photos online, people have become concerned about their online privacy and are looking for ways to protect it. Popular social networks typically offer privacy protection solutions only in response to the public demand and therefore are often rudimental, complex to use, and provide limited degree of control and protection. Most solutions either allow users to control who can access the shared photos or for how long they can be accessed. In contrast, in this paper, we take a structured privacy by design approach to the problem of online photo privacy protection. We propose a privacy-preserving photo sharing architecture that takes into account content and context of a photo with privacy protection integrated inside the JPEG file itself in a secure way. We demonstrate the proposed architecture with a prototype mobile iOS application called ProShare that offers scrambling as the privacy protection tool for a selected region in a photo, secure access to the protected images, and secure photo sharing on Facebook."}
{"_id": "30f1ea3b4194dba7f957fd6bf81bcaf12dca6ff8", "title": "Dynamic Programming for Linear-Time Incremental Parsing", "text": "Incremental parsing techniques such as shift-reduce have gained popularity thanks to their efficiency, but there remains a major problem: the search is greedyand only explores a tiny fraction of the whole space (even with beam search) as opposed to dynamic programming. We show that, surprisingly, dynamic programming is in fact possible for many shift-reduce parsers, by merging \u201cequivalent\u201d stacks based on feature values. Empirically, our algorithm yields up to a five-fold speedup over a state-of-the-art shift-reduce dependency parser with no loss in accuracy. Better search also leads to better learning, and our final parser outperforms all previously reported dependency parsers for English and Chinese, yet is much faster."}
{"_id": "24bb660aaf163802c1f0cdee39046f2aa42d2d47", "title": "Photorealistic Video Super Resolution", "text": "With the advent of perceptual loss functions, new possibilities in super-resolution have emerged, and we currently have models that successfully generate near-photorealistic high-resolution images from their low-resolution observations. Up to now, however, such approaches have been exclusively limited to single image super-resolution. The application of perceptual loss functions on video processing still entails several challenges, mostly related to the lack of temporal consistency of the generated images, i.e., flickering artifacts. In this work, we present a novel adversarial recurrent network for video upscaling that is able to produce realistic textures in a temporally consistent way. The proposed architecture naturally leverages information from previous frames due to its recurrent architecture, i.e. the input to the generator is composed of the low-resolution image and, additionally, the warped output of the network at the previous step. We also propose an additional loss function to further reinforce temporal consistency in the generated sequences. The experimental validation of our algorithm shows the effectiveness of our approach which obtains competitive samples in terms of perceptual quality with improved temporal consistency."}
{"_id": "b057d645bef6ab000a6f708e1b4e452437b02b82", "title": "Automatic Conversational Helpdesk Solution using Seq2Seq and Slot-filling Models", "text": "Helpdesk is a key component of any large IT organization, where users can log a ticket about any issue they face related to IT infrastructure, administrative services, human resource services, etc. Normally, users have to assign appropriate set of labels to a ticket so that it could be routed to right domain expert who can help resolve the issue. In practice, the number of labels are very large and organized in form of a tree. It is non-trivial to describe the issue completely and attach appropriate labels unless one knows the cause of the problem and the related labels. Sometimes domain experts discuss the issue with the users and change the ticket labels accordingly, without modifying the ticket description. This results in inconsistent and badly labeled data, making it hard for supervised algorithms to learn from. In this paper, we propose a novel approach of creating a conversational helpdesk system, which will ask relevant questions to the user, for identification of the right category and will then raise a ticket on users' behalf. We use attention based seq2seq model to assign the hierarchical categories to tickets. We use a slot filling model to help us decide what questions to ask to the user, if the top-k model predictions are not consistent. We also present a novel approach to generate training data for the slot filling model automatically based on attention in the hierarchical classification model. We demonstrate via a simulated user that the proposed approach can give us a significant gain in accuracy on ticket-data without asking too many questions to users. Finally, we also show that our seq2seq model is as versatile as other approaches on publicly available datasets, as state of the art approaches."}
{"_id": "a656bbbd46eee2b489c84f8b74b14b54c0a15066", "title": "Unequal Wilkinson Power Divider With Reduced Arm Length For Size Miniaturization", "text": "This paper presents a compact unequal Wilkinson power divider (WPD) with a reduced arm length. The unequal power divider consists of four arms and the isolation elements, including a compensating capacitor and isolation resistor. Compared with the conventional WPD, the electric length \u03b8 of the arms between the input port and the isolation elements is much shorter, and thus, compact size can be obtained. A theoretical analysis is carried out, and design equations are obtained. In addition, a study is carried out to characterize the isolation bandwidth and output port return loss bandwidth with different values of \u03b8. For validation, two power dividers operating at 1.5 GHz are implemented with various size and isolation bandwidths. Around 50% size reduction and good performance are obtained."}
{"_id": "b2a453196b6b490a658a947e4eef9ca6436de000", "title": "Analysis of a laboratory scale three-phase FC-TCR-based static VAr compensator", "text": "This paper presents the design and steady-state analysis of a Fixed Capacitor-Thyristor Controlled Reactor (FC-TCR)-based Static VAr Compensator (SVC). Reactive power compensation is demonstrated through the fundamental frequency analysis of the samples acquired from the designed system. The performance of the SVC in the presence of line reactance is also discussed. National Instrument (NI) based data acquisition system is used to perform the steady-state analysis. Besides, a few transient responses are also captured using the data acquisition system."}
{"_id": "cf5495cc7d7b361ca6577df62db063b0ba449c37", "title": "New clock-gating techniques for low-power flip-flops", "text": "Two novel low power flip-flops are presented in the paper. Proposed flip-flops use new gating techniques that reduce power dissipation deactivating the clock signal. Presented circuits overcome the clock duty-cycle limitation of previously reported gated flip-flops.\nCircuit simulations with the inclusion of parasitics show that sensible power dissipation reduction is possible if input signal has reduced switching activity. A 16-bit counter is presented as a simple low power application."}
{"_id": "fb3967f480c825e9028edc346d2905dc0accce09", "title": "Bone-Conduction-Based Brain Computer Interface Paradigm -- EEG Signal Processing, Feature Extraction and Classification", "text": "The paper presents a novel bone-conduction based brain-computer interface paradigm. Four sub-threshold acoustic frequency stimulus patterns are presented to the subjects in an oddball paradigm allowing for \"aha-responses\" generation to the attended targets. This allows for successful implementation of the bone-conduction based brain-computer interface (BCI) paradigm. The concept is confirmed with seven subjects in online bone-conducted auditory Morse-code patterns spelling BCI paradigm. We report also brain electrophysiological signal processing and classification steps taken to achieve the successful BCI paradigm. We also present a finding of the response latency variability in a function of stimulus difficulty."}
{"_id": "a72933fda89820c319a5b872c062ad2670d2db4a", "title": "Smart Car Parking System Solution for the Internet of Things in Smart Cities", "text": "The Internet of Things (IoT) is able to connect billions of devices and services at anytime in any place, with various applications. Recently, the IoT became an emerging technology. One of the most significant current research discussion topics on the IoT is about the smart car parking. A modern urban city has over a million of cars on its roads but it does not have enough parking space. Moreover, most of the contemporary researchers propose management of the data on cloud. However, this method may be considered as an issue since the raw data is sent promptly from distributed sensors to the parking area via cloud and then received back after it is processed. This is considered as an expensive technique in terms of the data transmission as well as the energy cost and consumption. While the majority of proposed solutions address the problem of finding unoccupied parking space and ignore some other critical issues such as information about the nearest car parking and the roads traffic congestion, this paper goes beyond and proposes the alternative method. The paper proposes a smart car parking system that will assist users to solve the issue of finding a parking space and to minimise the time spent in searching for the nearest available car park. In addition, it provides users with roads traffic congestion status. Moreover, the proposed system collects the raw data locally and extracts features by applying data filtering and fusion techniques to reduce the transmitted data over the network. After that, the transformed data is sent to the cloud for processing and evaluating by using machine learning algorithms."}
{"_id": "422d9b1a05bc33fcca4b9aa9381f46804c6132fd", "title": "CrowdDB: answering queries with crowdsourcing", "text": "Some queries cannot be answered by machines only. Processing such queries requires human input for providing information that is missing from the database, for performing computationally difficult functions, and for matching, ranking, or aggregating results based on fuzzy criteria. CrowdDB uses human input via crowdsourcing to process queries that neither database systems nor search engines can adequately answer. It uses SQL both as a language for posing complex queries and as a way to model data. While CrowdDB leverages many aspects of traditional database systems, there are also important differences. Conceptually, a major change is that the traditional closed-world assumption for query processing does not hold for human input. From an implementation perspective, human-oriented query operators are needed to solicit, integrate and cleanse crowdsourced data. Furthermore, performance and cost depend on a number of new factors including worker affinity, training, fatigue, motivation and location. We describe the design of CrowdDB, report on an initial set of experiments using Amazon Mechanical Turk, and outline important avenues for future work in the development of crowdsourced query processing systems."}
{"_id": "edc2e4e6308d7dfce586cb8a4441c704f8f8d41b", "title": "AIDE: Fast and Communication Efficient Distributed Optimization", "text": "In this paper, we present two new communication-efficient methods for distributed minimization of an average of functions. The first algorithm is an inexact variant of the DANE algorithm [20] that allows any local algorithm to return an approximate solution to a local subproblem. We show that such a strategy does not affect the theoretical guarantees of DANE significantly. In fact, our approach can be viewed as a robustification strategy since the method is substantially better behaved than DANE on data partition arising in practice. It is well known that DANE algorithm does not match the communication complexity lower bounds. To bridge this gap, we propose an accelerated variant of the first method, called AIDE, that not only matches the communication lower bounds but can also be implemented using a purely first-order oracle. Our empirical results show that AIDE is superior to other communication efficient algorithms in settings that naturally arise in machine learning applications."}
{"_id": "757b27a3ceb2293b8284fc24a7084a0c3fc2ae21", "title": "Data Distillation: Towards Omni-Supervised Learning", "text": "We investigate omni-supervised learning, a special regime of semi-supervised learning in which the learner exploits all available labeled data plus internet-scale sources of unlabeled data. Omni-supervised learning is lower-bounded by performance on existing labeled datasets, offering the potential to surpass state-of-the-art fully supervised methods. To exploit the omni-supervised setting, we propose data distillation, a method that ensembles predictions from multiple transformations of unlabeled data, using a single model, to automatically generate new training annotations. We argue that visual recognition models have recently become accurate enough that it is now possible to apply classic ideas about self-training to challenging real-world data. Our experimental results show that in the cases of human keypoint detection and general object detection, state-of-the-art models trained with data distillation surpass the performance of using labeled data from the COCO dataset alone."}
{"_id": "bb6ded212cb2767e95f02ecfc9a1d9c10dc14f2e", "title": "Autonomous lift of a cable-suspended load by an unmanned aerial robot", "text": "In this paper, we address the problem of lifting from the ground a cable-suspended load by a quadrotor aerial vehicle. Furthermore, we consider that the mass of the load is unknown. The lift maneuver is a critical step before proceeding with the transportation of a given cargo. However, it has received little attention in the literature so far. To deal with this problem, we break down the lift maneuver into simpler modes which represent the dynamics of the quadrotor-load system at particular operating regimes. From this decomposition, we obtain a series of waypoints that the aerial vehicle has to reach to accomplish the task. We combine geometric control with a least-squares estimation method to design an adaptive controller that follows a prescribed trajectory planned based on the waypoints. The effectiveness of the proposed control scheme is demonstrated by numerical simulations."}
{"_id": "ad10d6dec0b5855c3c3e71d334161647d6d4ed9e", "title": "Few-Shot and Zero-Shot Multi-Label Learning for Structured Label Spaces", "text": "Large multi-label datasets contain labels that occur thousands of times (frequent group), those that occur only a few times (few-shot group), and labels that never appear in the training dataset (zero-shot group). Multi-label few- and zero-shot label prediction is mostly unexplored on datasets with large label spaces, especially for text classification. In this paper, we perform a fine-grained evaluation to understand how state-of-the-art methods perform on infrequent labels. Furthermore, we develop few- and zero-shot methods for multi-label text classification when there is a known structure over the label space, and evaluate them on two publicly available medical text datasets: MIMIC II and MIMIC III. For few-shot labels we achieve improvements of 6.2% and 4.8% in R@10 for MIMIC II and MIMIC III, respectively, over prior efforts; the corresponding R@10 improvements for zero-shot labels are 17.3% and 19%."}
{"_id": "c677166592b505b80a487fb88ac5a6996fc47d71", "title": "Decentralized control: An overview", "text": "The paper reviews the past and present results in the area of decentralized control of large-scale complex systems. An emphasis is laid on decentralization, decomposition, and robustness. These methodologies serve as effective tools to overcome specific difficulties arising in largescale complex systems such as high dimensionality, information structure constraints, uncertainty, and delays. Several prospective topics for future research are introduced in this contents. The overview is focused on recent decomposition approaches in interconnected dynamic systems due to their potential in providing the extension of decentralized control into networked control systems. # 2008 Elsevier Ltd. All rights reserved."}
{"_id": "6b72812c298cf3985c67a5fefff6d125175439c5", "title": "LexIt: A Computational Resource on Italian Argument Structure", "text": "The aim of this paper is to introduce LexIt, a computational framework for the automatic acquisition and exploration of distributional information about Italian verbs, nouns and adjectives, freely available through a web interface at the address http://sesia.humnet.unipi.it/lexit. LexIt is the first large-scale resource for Italian in which subcategorization and semantic selection properties are characterized fully on distributional ground: in the paper we describe both the process of data extraction and the evaluation of the subcategorization frames extracted with LexIt."}
{"_id": "13a375a84a6c414b85477a401541d3e28db1e11a", "title": "A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise", "text": "Spatial Databases Require to detect knowledge from great amount of data Need to handle with arbitrary shape Requirements of Clustering in Data Mining \u2022 Scalability \u2022 Dealing with different types of attributes \u2022 Discovery of Clusters with arbitrary shape \u2022 Minimal requirements for domain knowledge to determine input parameters \u2022 Able to deal with noise and outliers \u2022 Insensitive to the order of input data \u2022 High dimensionality of data \u2022 Interpretability and usability Introduction(cont..)"}
{"_id": "6eb72b8f2c5fb1b996e5502001131b72825f91a4", "title": "Using grid for accelerating density-based clustering", "text": "Clustering analysis is a primary method for data mining. The ever increasing volumes of data in different applications forces clustering algorithms to cope with it. DBSCAN is a well-known algorithm for density-based clustering. It is both effective so it can detect arbitrary shaped clusters of dense regions and efficient especially in existence of spatial indexes to perform the neighborhood queries efficiently. In this paper we introduce a new algorithm GriDBSCAN to enhance the performance of DBSCAN using grid partitioning and merging, yielding a high performance with the advantage of high degree of parallelism. We verified the correctness of the algorithm theoretically and experimentally, studied the performance theoretically and using experiments on both real and synthetic data. It proved to run much faster than original DBSCAN. We compared the algorithm with a similar algorithm, EnhancedDBSCAN, which is also an enhancement to DBSCAN using partitioning. Experiments showed the new algorithm's superiority in performance and degree of parallelism."}
{"_id": "818826f356444f3daa3447755bf63f171f39ec47", "title": "Active Learning Literature Survey", "text": "The key idea behind active learning is that a machine learning algorithm can achieve greater accuracy with fewer labeled training instances if it is allowed to choose the data from which is learns. An active learner may ask queries in the form of unlabeled instances to be labeled by an oracle (e.g., a human annotator). Active learning is well-motivated in many modern machine learning problems, where unlabeled data may be abundant but labels are difficult, time-consuming, or expensive to obtain. This report provides a general introduction to active learning and a survey of the literature. This includes a discussion of the scenarios in which queries can be formulated, and an overview of the query strategy frameworks proposed in the literature to date. An analysis of the empirical and theoretical evidence for active learning, a summary of several problem setting variants, and a discussion of related topics in machine learning research are also presented."}
{"_id": "dfd209e1a6def6edc2db9da45cfd884e9ffacc97", "title": "An improved sampling-based DBSCAN for large spatial databases", "text": "Spatial data clustering is one of the important data mining techniques for extracting knowledge from large amount of spatial data collected in various applications, such as remote sensing, GIS, computer cartography, environmental assessment and planning, etc. Several useful and popular spatial data clustering algorithms have been proposed in the past decade. DBSCAN is one of them, which can discover clusters of any arbitrary shape and can handle the noise points effectively. However, DBSCAN requires large volume of memory support because it operates on the entire database. This paper presents an improved sampling-based DBSCAN which can cluster large-scale spatial databases effectively. Experimental results included to establish that the proposed sampling-based DBSCAN outperforms DBSCAN as well as its other counterparts, in terms of execution time, without losing the quality of clustering."}
{"_id": "10e1e88f9c137d8e350bfc6c9f60242a8f3d22b4", "title": "Automatic Subspace Clustering of High Dimensional Data for Data Mining Applications", "text": "Data mining applications place special requirements on clustering algorithms including: the ability to find clusters embedded in subspaces of high dimensional data, scalability, end-user comprehensibility of the results, non-presumption of any canonical data distribution, and insensitivity to the order of input records. We present CLIQUE, a clustering algorithm that satisfies each of these requirements. CLIQUE identifies dense clusters in subspaces of maximum dimensionality. It generates cluster descriptions in the form of DNF expressions that are minimized for ease of comprehension. It produces identical results irrespective of the order in which input records are presented and does not presume any specific mathematical form for data distribution. Through experiments, we show that CLIQUE efficiently finds accurate cluster in large high dimensional datasets."}
{"_id": "0debd1c0b73fc79dc7a64431b8b6a1fe21dcd9f7", "title": "An improved NSGA-III algorithm for feature selection used in intrusion detection", "text": "Feature selection can improve classification accuracy and decrease the computational complexity of classification. Data features in intrusion detection systems (IDS) always present the problem of imbalanced classification in which some classifications only have a few instances while others have many instances. This imbalance can obviously limit classification efficiency, but few effort s have been made to address it. In this paper, a scheme for the many-objective problem is proposed for feature selection in IDS, which uses two strategies, namely, a special domination method and a predefined multiple targeted search, for population evolution. It can differentiate traffic not only between normal and abnormal but also by abnormality type. Based on our scheme, NSGA-III is used to obtain an adequate feature subset with good performance. An improved many-objective optimization algorithm (I-NSGA-III) is further proposed using a novel niche preservation procedure. It consists of a bias-selection process that selects the individual with the fewest selected features and a fit-selection process that selects the individual with the maximum sum weight of its objectives. Experimental results show that I-NSGA-III can alleviate the imbalance problem with higher classification accuracy for classes having fewer instances. Moreover, it can achieve both higher classification accuracy and lower computational complexity. \u00a9 2016 Published by Elsevier B.V."}
{"_id": "2aa40d3bb71b4e16bb0a63eb4dab586f7c15622f", "title": "A Low Radar Cross Section and Low Profile Antenna Co-Designed With Absorbent Frequency Selective Radome", "text": "A low radar cross section (RCS) and low profile antenna co-designed with absorbent frequency selective radome (AFSR) is investigated. A pair of circular slot resonators is embedded on surface of the AFSR to realize a transmission window in the vertical polarization, while the wide absorption band is still maintained in the horizontal polarization. When a patch antenna is etched within the AFSR, where the metal grounds of the patch antenna and AFSR are co-used, the co-designed antenna with low RCS and low profile is thus realized. For demonstration, an AFSR is designed with its transmission window has a minimal insertion loss of 0.45 dB at 8.9 GHz, and two separate absorption bands (a lower absorption band from 4.8 to 7.5 GHz and an upper absorption band from 10 to 13 GHz) in the vertical polarization, a wide absorption band (from 4.5 to 12.5 GHz) in the horizontal polarization. A patch antenna etched within the AFSR is optimized to operate at 8.9 GHz, then it is simulated and fabricated. The measured results demonstrate that the proposed antenna not only has good radiation patterns, but also obtains significant RCS reduction."}
{"_id": "5a4a53339068eebd1544b9f430098f2f132f641b", "title": "Hierarchical Disentangled Representations", "text": "Deep latent-variable models learn representations of high-dimensional data in an unsupervised manner. A number of recent efforts have focused on learning representations that disentangle statistically independent axes of variation, often by introducing suitable modifications of the objective function. We synthesize this growing body of literature by formulating a generalization of the evidence lower bound that explicitly represents the trade-offs between sparsity of the latent code, bijectivity of representations, and coverage of the support of the empirical data distribution. Our objective is also suitable to learning hierarchical representations that disentangle blocks of variables whilst allowing for some degree of correlations within blocks. Experiments on a range of datasets demonstrate that learned representations contain interpretable features, are able to learn discrete attributes, and generalize to unseen combinations of factors."}
{"_id": "4b19be501b279b7d80d94b2d9d986bf4f8ab4ede", "title": "Stochastic Models , Estimation and Control", "text": ""}
{"_id": "8cea85a615bbf1fc9329e8a542a049b721667a61", "title": "A lightweight method for virtual machine introspection", "text": "A method for the introspection of virtual machines is proposed. The main distinctive feature of this method is that it makes it possible to obtain information about the system operation using the minimum knowledge about its internal organization. The proposed approach uses rarely changing parts of the application binary interface, such as identifiers and parameters of system calls, calling conventions, and the formats of executable files. The lightweight property of the introspection method is due to the minimization of the knowledge about the system and by its high performance. The introspection infrastructure is based on the QEMU emulator, version 2.8. Presently, monitoring the file operations, processes, and API function calls are implemented. The available introspection tools (RTKDSM, Panda, and DECAF) get data for the analysis using kernel structures. All the data obtained (addresses of structures, etc.) is written to special profiles. Since the addresses and offsets strongly depend not only on the version of the operating system but also on the parameters of its assembly, these tools have to store a large number of profiles. We propose to use parts of the application binary interface because they are rarely modified and it is often possible to use one profile for a family of OSs. The main idea underlying the proposed method is to intercept the system and library function calls and read parameters and returned values. The processor provides special instructions for calling system and user defined functions. The capabilities of QEMU are extended by an instrumentation mechanism to enable one to track each executed instruction and find the instructions of interest among them. When a system call occurs, the control is passed to the system call detector that checks the number of the call and decides to which module the job should be forwarded. In the case of an API function call, the situation is similar, but the API function detector checks the function address. An introspection tool consisting of a set of modules is developed. These modules are dynamic libraries that are plugged in QEMU. The modules can interact by exchanging data."}
{"_id": "cebb7ddfc3664e1f7ebaf40ebbc5712b4e1ecce7", "title": "International experiences with the Hospital Anxiety and Depression Scale--a review of validation data and clinical results.", "text": "More than 200 published studies from most medical settings worldwide have reported experiences with the Hospital Anxiety and Depression Scale (HADS) which was specifically developed by Zigmond and Snaith for use with physically ill patients. Although introduced in 1983, there is still no comprehensive documentation of its psychometric properties. The present review summarizes available data on reliability and validity and gives an overview of clinical studies conducted with this instrument and their most important findings. The HADS gives clinically meaningful results as a psychological screening tool, in clinical group comparisons and in correlational studies with several aspects of disease and quality of life. It is sensitive to changes both during the course of diseases and in response to psychotherapeutic and psychopharmacological intervention. Finally, HADS scores predict psychosocial and possibly also physical outcome."}
{"_id": "e893b706c3d9e68fc978ec41fb17d757ec85ee1e", "title": "Addressing the Winograd Schema Challenge as a Sequence Ranking Task", "text": "The Winograd Schema Challenge targets pronominal anaphora resolution problems which require the application of cognitive inference in combination with world knowledge. These problems are easy to solve for humans but most difficult to solve for machines. Computational models that previously addressed this task rely on syntactic preprocessing and incorporation of external knowledge by manually crafted features. We address the Winograd Schema Challenge from a new perspective as a sequence ranking task, and design a Siamese neural sequence ranking model which performs significantly better than a random baseline, even when solely trained on sequences of words. We evaluate against a baseline and a state-of-the-art system on two data sets and show that anonymization of noun phrase candidates strongly helps our model to generalize."}
{"_id": "8d1b4a8968630d29046285eca24475518e34eee2", "title": "Inter-AS traffic engineering with SDN", "text": "Egress selection in an Internet Service Provider (ISP) is the process of selecting an egress router to route interdomain traffic across the ISP such that a traffic engineering objective is achieved. In traditional ISP networks, traffic through the ISP is carried through the network and exits via an egress that is closest to the source in an attempt to minimize network resources for transit traffic. This exit strategy is known as Hot Potato Routing (HPR). The emerging field of Software-Defined Networking (SDN) has opened up many possibilities and promised to bring new flexibility to the rigid traditional paradigm of networking. In an ISP network, however, completely replacing legacy network devices with SDN nodes is neither simple nor straightforward. This has led to the idea of incremental and selective deployment of SDN nodes in an ISP network. Such a hybrid network gives us control over traffic flows that pass through the SDN nodes without requiring extensive changes to an existing ISP network. In this paper, we look at the problem of choosing an optimal set of egress routers to route inter-domain transit traffic in a hybrid SDN network such that the maximum link utilization of the egress links is minimized. We formulate the optimization problem, show that it is related to the makespan scheduling problem of unrelated parallel machines, and propose heuristics to solve it. We perform simulations to evaluate our heuristic on a real ISP topology and show that even with a small number of SDN nodes in the network, the maximum link utilization on the egress links can be lower than that of traditional HPR."}
{"_id": "7d3eb4456b16c41c56490dcddc3bee3ac700662b", "title": "Graption: A graph-based P2P traffic classification framework for the internet backbone", "text": "1 The authors thank CAIDA for providing this set of traffic traces. Additional information for these traces can be found in the DatCat, Internet Measurement Data Catalog [8], indexed under the label \u2018\u2018PAIX\u2019\u2019. 1910 M. Iliofotou et al. / Computer Networks 55 (2011) 1909\u20131920 behavior. Flow-level and payload-based classification methods require per application training and will thus not detect P2P traffic from emerging protocols. Behavioral-host-based approaches such as BLINC [25] can detect traffic from new protocols [25], but have weak performance when applied at the backbone [26]. In addition, most tools including BLINC [25] require fine-tuning and careful selection of parameters [26]. We discuss the limitations of previous methods in more detail in Section 4. In this paper, we use the network-wide behavior of an application to assist in classifying its traffic. To model this behavior, we use graphs where each node is an IP address, and each edge represents a type of interaction between two nodes. We use the term Traffic Dispersion Graph or TDG to refer to such a graph [19]. Intuitively, with TDGs we enable the detection of network-wide behavior (e.g., highly connected graphs) that is common among P2P applications and different from other traffic (e.g., Web). While we recognize that some previous efforts [6,9] have used graphs to detect worm activity, they have not explored the full capabilities of TDGs for application classification. This paper is an extension of a workshop paper [18] and the differences will be clarified in the related work section (Section 4). We propose a classification framework, dubbed Graption (Graph-based classification), as a systematic way to combine network-wide behavior and flow-level characteristics of network applications. Graption first groups flows using flow-level features, in an unsupervised and agnostic way, i.e., without using application-specific knowledge. It then uses TDGs to classify each group of flows. As a proof of concept, we instantiate our framework and develop a P2P detection method, which we call Graption-P2P. Compared to other methods (e.g., BLINC [25]), Graption-P2P is easier to configure and requires fewer parameters. The highlights of our work can be summarized in the following points: Distinguishing between P2P and client\u2013server TDGs. We use real-world backbone traces and derive graph theoretic metrics that can distinguish between the TDGs formed by client\u2013server (e.g., Web) and P2P (e.g., eDonkey) applications (Section 2.2). Practical considerations for TDGs. We show that even a single backbone link contains enough information to generate TDGs that can be used to classify traffic. In addition, TDGs of the same application seem fairly consistent across time (Section 2.3). High P2P classification accuracy. Our framework instantiation (Graption-P2P) classifies 90% of P2P traffic with 95% accuracy when applied at the backbone. Such traces are particularly challenging for other methods (Section 3.2.2). Comparison with a behavioral-host-based method. Graption-P2P performs better than BLINC [25] in P2P identification at the backbone. For example, Graption-P2P identifies 95% of BitTorrent traffic while BLINC identifies only 25% (Section 3.3). Identifying the unknown. Using Graption, we identified a P2P overlay of the Slapper worm. 
The TDG of Slapper was never used to train our classifier. This is a promising result showing that our approach can be used to detect both known and unknown P2P applications (Section 3.4). The rest of the paper is organized as follows. In Section 2 we define TDGs, and identify TDG-based metrics that differentiate between applications. In Section 3 we present the Graption framework and our instantiation, GraptionP2P. In Section 5 we discuss various practical issues. In Section 4 we discuss related work. Finally, in Section 6 we conclude the paper. 2. Studying the TDGs of P2P applications 2.1. Traffic dispersion graphs (TDGs) Definition. Throughout this paper, we assume that packets can be grouped into flows using the standard 5-tuple {srcIP, srcPort, dstIP, dstPort, protocol}. Given a group of flows S, collected over a fixed-length time interval, we define the corresponding TDG to be a directed graph G(V,E), where the set of nodes V corresponds to the set of IP addresses in S, and there is a link (u,v) 2 E from u to v if there is a flow f 2 S between them. In this paper, we consider bidirectional flows. We define a TCP flow to start on the first packet with the SYN-flag set and the ACK-flag not set, so that the initiator and the recipient of the flow are defined for the purposes of direction. For UDP flows, direction is decided upon the first packet of the flow. Visualization examples. In Fig. 1, we show TDG examples from two different applications. In order to motivate the discussion in the rest of the paper, we show the contrast between a P2P and a client\u2013server TDG. From the figure we see that P2P traffic forms more connected and more dense graphs compared to client\u2013server TDGs. In Section 2.2, we show how we can translate the visual intuition of Fig. 1 into quantitative measures that can be used to classify TDGs that correspond to different applications. Data set. To study TDGs, we use three backbone traces from a Tier-1 ISP and the Abilene (Internet2) network. These traces are summarized in Table 1. All data are IP anonymized and contain traffic from both directions of the link. The TR-PAY1 and TR-PAY2 traces were collected from an OC48 link of a commercial US Tier-1 ISP at the Palo Alto Internet eXchange (PAIX). To the best of our knowledge, these are the most recent backbone traces with payload that are available to researchers by CAIDA [5]. The TR-ABIL trace is a publicly available data set collected from the Abilene (Internet2) academic network connecting Indianapolis with Kansas City. The Abilene trace consists of five randomly selected five-minute samples taken every day for one month, and covers both day and night hours as well as weekdays and weekends. Extractingground truth. 
We used a Payload-based Classifier (PC) to establish the ground truth of flows for the 1 2 2 1 0 3 1 9 4 3 3 4 5 4 8 1 4 8 2 9 3 4 1 4 6 1 1 0 7 1 1 5 5 5 6 6 3 4 7 4 5 1 0 5 8 1 1 6 9 7 8 9 1 0 1 1 1 2 1 3 1 4 1 5 1 6 8 6 3 1 0 3 2 4 4 1 8 1 7 1 8 1 9 2 0 2 1 2 2 2 3 2 4 7 7 1 1 0 9 6 2 5 2 6 4 1 2 4 6 3 2 7 2 8 2 9 3 0 3 1 3 2 3 3 3 4 3 5 3 6 1 0 1 3 7 3 8 7 3 1 6 6 2 0 6 2 3 2 3 4 2 3 4 5 4 0 9 4 8 6 5 3 0 6 3 0 6 3 5 7 3 4 7 4 8 7 4 9 7 9 4 7 9 6 8 5 2 8 5 5 9 1 6 1 0 7 0 1 1 3 4 1 2 4 2 3 9 4 0 8 1 8 5 1 1 6 1 2 3 1 3 0 1 3 2 1 3 5 1 3 9 1 4 8 1 6 1 1 6 4 1 6 5 1 8 6 1 9 1 2 2 5 2 2 7 2 3 5 2 4 3 2 5 7 2 6 8 2 9 2 3 0 3 3 1 3 3 1 6 3 4 9 3 5 0 3 6 2 3 7 6 3 8 3 4 0 3 4 1 3 4 1 4 4 4 5 4 5 5 4 6 1 4 7 7 4 9 3 5 1 7 5 2 4 5 3 7 5 7 0 5 9 1 6 1 4 6 1 9 6 4 0 6 5 7 6 6 6 6 6 8 6 7 4 7 3 1 7 3 6 7 3 7 8 0 0 8 0 7 8 0 8 8 3 1 8 3 9 8 4 1 8 4 4 8 5 6 8 7 0 8 7 6 8 8 8 8 9 2 8 9 8 9 0 0 9 0 8 9 1 4 9 1 5 9 2 5 9 7 4 1 0 0 0 1 0 0 5 1 0 0 9 1 0 1 8 1 0 2 2 1 0 2 7 1 0 3 5 1 0 5 6 1 0 9 2 1 0 9 9 1 1 0 3 1 1 0 5 1 1 0 6 1 1 2 8 1 1 3 3 1 1 6 5 1 1 6 6 1 1 7 0 1 1 8 3 1 1 8 6 1 2 0 9 1 2 5 6 1 2 6 7 1 2 7 2 1 2 8 8 1 2 8 9 1 2 9 1 1 3 3 4 1 3 5 2 1 3 7 2 4 1 4 2 3 0 9 3 2 0 3 5 6 6 0 2 6 3 2 7 1 4 8 7 2 1 0 6 5 4 3 4 4 4 5 4 6 1 1 5 4 7 4 8 2 5 6 7 7 2 4 9 5 0 1 3 7 1 6 0 1 8 1 5 4 5 7 1 5 7 4 1 1 0 0 4 5 1 5 2 5 3 5 4 1 4 2 1 0 0 7 5 5 5 6 5 7 2 9 1 7 6 0 8 8 3 5 8 5 9 6 0 6 1 2 7 7 4 3 2 6 2 6 3 2 9 9 5 7 2 7 3 5 1 3 7 0 6 4 6 5 9 0 2 6 1 2 8 2 3 4 1 5 8 2 6 3 3 1 3 6 4 6 6 6 7 9 2 9 1 0 8 0 1 0 8 2 6 8 7 5 5 1 1 3 5 1 2 2 8 6 9 7 0 7 1 2 7 0 7 2 7 4 7 5 7 6 7 7 7 8 7 9 8 0 8 2 8 3 7 2 3 8 4 8 7 3 6 1 8 8 8 9 1 7 5 9 1 9 2"}
{"_id": "8ba965f138c1178aef09da3781765e300c325f3d", "title": "Design and development of a low cost EMG signal acquisition system using surface EMG electrode", "text": "Electromyogram or EMG signal is a very small signal; it requires a system to enhance for display purpose or for further analysis process. This paper presents the development of low cost physiotherapy EMG signal acquisition system with two channel input. In the acquisition system, both input signals are amplified with a differential amplifier and undergo signal pre-processing to obtain the linear envelope of EMG signal. Obtained EMG signal is then digitalized and sent to the computer to be plotted."}
{"_id": "0fe5990652d47a4de58500203fd6f00ada7de0ae", "title": "Security and Privacy in the Internet of Vehicles", "text": "Internet of Vehicles (IoV) is a typical application of Internet of things in the field of transportation, which aims at achieving an integrated intelligent transportation system to enhance traffics, to avoid accidents, to ensure road safety, and to improve driving experiences. Due to its characteristics of dynamic topological structures, huge network scale, non-uniform distribution of nodes, and mobile limitation, IoV systems face various types of attacks, such as authentication and identification attacks, availability attacks, confidentiality attacks, routing attacks, data authenticity attacks and etc., which result in several challenging requirements in security and privacy. Many security scientists made numerous efforts to ensure the security and privacy for the Internet of Vehicles in recent years. This paper aims to review the advances on issues of security and privacy in IoV, including security and privacy requirements, attack types, and the relevant solutions, and discuss challenges and future trends in this area."}
{"_id": "fffc47e080dbbd4d450b6e6dbee4de7e66324e21", "title": "The meaning of compassion fatigue to student nurses: an interpretive phenomenological study", "text": "Background: Compassion fatigue is a form of occupational stress which occurs when individuals are exposed to suffering and trauma on an ongoing basis. The purpose of this study was to explore the experiences of compassion fatigue among student nurses following their first clinical placement in a UK health care setting during 2015. Methods: The aim of this study was to explore students\u2019 thoughts and feelings about compassion fatigue using reflective poems as a source of data. An interpretive phenomenological approach was taken using a purposeful sampling strategy which aimed to explore in depth meaning of the concept as experienced by the students. Results: From this study it is clear that students experience compassion fatigue and this has a psychological effect on their wellbeing and ability to learn in the clinical practice setting. Reflective poetry writing enabled articulation of feelings which were at times negative and linked to the student\u2019s status as a novice nurse. Conclusions: Students experience compassion fatigue and educators need to find ways to provide support in both clinical and university settings. Positive practices such as shared reflection and the use of creative teaching methods might be beneficial, to support exploration of feelings, build resilience and effective ways of coping."}
{"_id": "01413e1fc981a8c041dc236dcee64790e2239a36", "title": "A New Framework for Distributed Submodular Maximization", "text": "A wide variety of problems in machine learning, including exemplar clustering, document summarization, and sensor placement, can be cast as constrained submodular maximization problems. A lot of recent effort has been devoted to developing distributed algorithms for these problems. However, these results suffer from high number of rounds, suboptimal approximation ratios, or both. We develop a framework for bringing existing algorithms in the sequential setting to the distributed setting, achieving near optimal approximation ratios for many settings in only a constant number of MapReduce rounds. Our techniques also give a fast sequential algorithm for non-monotone maximization subject to a matroid constraint."}
{"_id": "e89244816864ee8e000b72923b1983af6c65adb8", "title": "Image Retrieval using Harris Corners and Histogram of Oriented Gradients", "text": "Content based image retrieval is the technique to retrieve similar images from a database that are visually similar to a given query image. It is an active and emerging research field in computer vision. In our proposed system, the Interest points based Histogram of Oriented Gradients (HOG) feature descriptor is used to retrieve the relevant images from the database. The dimensionality of the HOG feature vector is reduced by Principle Component analysis (PCA). To improve the retrieval accuracy of the system the Colour Moments along with HOG feature descriptor are used in this system. The Interest points are detected using the Harris-corner detector in order to extract the image features. The KD-tree is used for matching and indexing the features of the query image with the database images."}
{"_id": "c8b813d447481d2af75a623ef117d45b6d610901", "title": "HanGuard: SDN-driven protection of smart home WiFi devices from malicious mobile apps", "text": "A new development of smart-home systems is to use mobile apps to control IoT devices across a Home Area Network (HAN). As verified in our study, those systems tend to rely on the Wi-Fi router to authenticate other devices. This treatment exposes them to the attack from malicious apps, particularly those running on authorized phones, which the router does not have information to control. Mitigating this threat cannot solely rely on IoT manufacturers, which may need to change the hardware on the devices to support encryption, increasing the cost of the device, or software developers who we need to trust to implement security correctly. In this work, we present a new technique to control the communication between the IoT devices and their apps in a unified, backward-compatible way. Our approach, called HanGuard, does not require any changes to the IoT devices themselves, the IoT apps or the OS of the participating phones. HanGuard uses an SDN-like approach to offer fine-grained protection: each phone runs a non-system userspace Monitor app to identify the party that attempts to access the protected IoT device and inform the router through a control plane of its access decision; the router enforces the decision on the data plane after verifying whether the phone should be allowed to talk to the device. We implemented our design over both Android and iOS (> 95% of mobile OS market share) and a popular router. Our study shows that HanGuard is both efficient and effective in practice."}
{"_id": "012ad0c26d108dda9b035faa6e9545842c14d305", "title": "Digital Beamforming on Receive: Techniques and Optimization Strategies for High-Resolution Wide-Swath SAR Imaging", "text": "Synthetic Aperture Radar (SAR) is a well-proven imaging technique for remote sensing of the Earth. However, conventional SAR systems are not capable of fulfilling the increasing demands for improved spatial resolution and wider swath coverage. To overcome these inherent limitations, several innovative techniques have been suggested which employ multiple receive-apertures to gather additional information along the synthetic aperture. These digital beamforming (DBF) on receive techniques are reviewed with particular emphasis on the multi-aperture signal processing in azimuth and a multi-aperture reconstruction algorithm is presented that allows for the unambiguous recovery of the Doppler spectrum. The impact of Doppler aliasing is investigated and an analytic expression for the residual azimuth ambiguities is derived. Further, the influence of the processing on the signal-to-noise ratio (SNR) is analyzed, resulting in a pulse repetition frequency (PRF) dependent factor describing the SNR scaling of the multi-aperture beamforming network. The focus is then turned to a complete high-resolution wide-swath SAR system design example which demonstrates the intricate connection between multi-aperture azimuth processing and the system architecture. In this regard, alternative processing approaches are compared with the multi-aperture reconstruction algorithm. In a next step, optimization strategies are discussed as pattern tapering, prebeamshaping-on-receive, and modified processing algorithms. In this context, the analytic expressions for both the residual ambiguities and the SNR scaling factor are generalized to cascaded beamforming networks. The suggested techniques can moreover be extended in many ways. Examples discussed are a combination with ScanSAR burst mode operation and the transfer to multistatic sparse array configurations."}
{"_id": "4e23301ba855b5651bd0b152d6a118753fc5465d", "title": "Effects of domain on measures of semantic relatedness", "text": "Measures of semantic relatedness have been used in a variety of applications in information retrieval and language technology, such as measuring document similarity and cohesion of text. Definitions of such measures have ranged from using distance-based calculations over WordNet or other taxonomies to statistical distributional metrics over document collections such as Wikipedia or the Web. Existing measures do not explicitly consider the domain associations of terms when calculating relatedness: This article demonstrates that domain matters. We construct a data set of pairs of terms with associated domain information and extract pairs that are scored nearly identical by a sample of existing semantic-relatedness measures. We show that human judgments reliably score those pairs containing terms from the same domain as significantly more related than cross-domain pairs, even though the semantic-relatedness measures assign the pairs similar scores. We provide further evidence for this result using a machine learning setting by demonstrating that domain is an informative feature when learning a metric. We conclude that existing relatedness measures do not account for domain in the same way or to the same extent as do human judges."}
{"_id": "1ae470266136fee5e98e0e62ba888167615a296a", "title": "A Lossy Image Codec Based on Index Coding", "text": "In this paper we propose a new lossy image codec based on index coding. Both J. Shapiro\u2019s embedded zerotree wavelet algorithm, and A. Said and W. A. Pearlman\u2019s codetree algorithm (which is the state-of-the-art method today) use spatial orientation tree structures to implicitly locate the significant wavelet transform coefficients. Here a direct approach to find the positions of these significant coefficients is presented. The new algorithm combines the discrete wavelet transform, differential coding, variable-length coding of integers, ordered bit plane transmission, and adaptive arithmetic coding. The encoding can be stopped at any point, which allows a target rate or distortion metric to be met exactly. The bits in the bit stream are generated in the order of importance, yielding a fully embedded code to successively approximate the original image source; thus it\u2019s well suited for progressive image transmission. The decoder can also terminate the decoding at any point, and produce a lower bit rate reconstruction image. Our algorithm is very simple in its form (which will make the encoding and decoding very fast), requires no training of any kind or prior knowledge of image sources, and has a clear geometric structure. The image coding results of it are quite competitive with almost all previous reported image compression algorithms on standard test images. 456 1068.0314/96$5.00"}
{"_id": "1b300a7858ab7870d36622a51b0549b1936572d4", "title": "Dynamic Facial Expression Recognition With Atlas Construction and Sparse Representation", "text": "In this paper, a new dynamic facial expression recognition method is proposed. Dynamic facial expression recognition is formulated as a longitudinal groupwise registration problem. The main contributions of this method lie in the following aspects: 1) subject-specific facial feature movements of different expressions are described by a diffeomorphic growth model; 2) salient longitudinal facial expression atlas is built for each expression by a sparse groupwise image registration method, which can describe the overall facial feature changes among the whole population and can suppress the bias due to large intersubject facial variations; and 3) both the image appearance information in spatial domain and topological evolution information in temporal domain are used to guide recognition by a sparse representation method. The proposed framework has been extensively evaluated on five databases for different applications: the extended Cohn-Kanade, MMI, FERA, and AFEW databases for dynamic facial expression recognition, and UNBC-McMaster database for spontaneous pain expression monitoring. This framework is also compared with several state-of-the-art dynamic facial expression recognition methods. The experimental results demonstrate that the recognition rates of the new method are consistently higher than other methods under comparison."}
{"_id": "aeaf9157ecd47684f1e3569323a4b16040023cef", "title": "Contextual feature based one-class classifier approach for detecting video response spam on YouTube", "text": "YouTube is one of the largest video sharing websites (with social networking features) on the Internet. The immense popularity of YouTube, anonymity and low publication barrier has resulted in several forms of misuse and video pollution such as uploading of malicious, copyright violated and spam video or content. YouTube has a popular and commonly used feature called as video response which allows users to post a video response to an uploaded or existing video. Some of the popular videos on YouTube receive thousands of video responses. We observe presence of opportunistic users posting unrelated, promotional, pornographic videos (spam videos posted manually or using automated scripts) as video responses to existing videos. We present a method of mining YouTube to automatically detect video response spam. We formulate the problem of video response spam detection as a one-class classification problem (a recognition task) and divide the problem into three sub-problems: promotional video recognition, pornographic or dirty video recognition and automated script or botnet uploader recognition. We create a sample dataset of target class videos for each of the three sub-problems and identify contextual features (meta-data based or non-content based features) characterizing the target class. Our empirical analysis reveals that certain linguistic features (presence of certain terms in the title or description of the YouTube video), temporal features, popularity based features, time based features can be used to predict the video type. We identify features with discriminatory powers and use it within a one-class classification framework to recognize video response spam. We conduct a series of experiments to validate the proposed approach and present evidences to demonstrate the effectiveness of the proposed solution with more than 80% accuracy."}
{"_id": "f6f7ab33c9112dd1e70e965d67dda84f35212862", "title": "A Generic Toolkit for the Successful Management of Delphi Studies", "text": "This paper presents the case of a non-traditional use of the Delphi method for theory evaluation. On the basis of experience gained through secondary and primary research, a generic decision toolkit for Delphi studies is proposed, comprising of taxonomy of Delphi design choices, a stage model and critical methodological decisions. These research tools will help to increase confidence when adopting the Delphi alternative and allow for a wider and more comprehensive recognition of the method within both scientific and interpretivist studies."}
{"_id": "62a3db8c7ba9d91800f4db689e592e763ecb2291", "title": "A Developmental Model of Critical Thinking", "text": "The critical thinking movement, it is suggested, has much to gain from conceptualizing its subject matter in a developmental framework. Most instructional programs designed to teach critical thinking do not draw on contemporary empirical research in cognitive development as a potential resource. The developmental model of critical thinking outlined here derives from contemporary empirical research on directions and processes of intellectual development in children and adolescents. It identifies three forms of second-order cognition (meta-knowing)--metacognitive, metastrategic, and epistemological--that constitute an essential part of what develops cognitively to make critical thinking possible."}
{"_id": "b40a16a78d916eec89bdb51ac7f6b9296d9f7107", "title": "Extension of Soft-Switching Region of Dual-Active-Bridge Converter by a Tunable Resonant Tank", "text": "Hard-switching-induced switching loss can contribute significantly to the power loss of an isolated bidirectional dual-active-bridge (DAB) dc\u2013dc converter operating at high frequency. An $LC$-type series resonant DAB converter based on a switch-controlled-inductor (SCI) is proposed to mitigate the loss arising from hard switching under wide-range variations in output voltage and current. Zero-voltage switching is achieved at the primary side (high voltage), while at the secondary side (low voltage), zero-current switching is preferred to reduce excessive ringing due to circulating current and switching loss. In order to achieve reduced conduction loss, a nominal operating point is chosen where the root-mean-square resonant tank current is the minimum. To validate the proposed topology and modulation scheme, an $LC$-type series resonant DAB converter based on SCI operating at 100 kHz is designed to interface a 400-V dc bus to a supercapacitor-based energy storage. Simulation and experimental results validate the effectiveness of the proposed topology for charging/discharging a supercapacitor with an output voltage variation of between 10 and 48 V and a maximum rated power of 480 W. A maximum efficiency of 94.6% is achieved using the proposed topology and modulation scheme."}
{"_id": "3a1c06e9157d0709ad477e39101b16c727d485a2", "title": "WordNet Atlas : a web application for visualizing WordNet as a zoomable map", "text": "The English WordNet is a lexical database containing more than 206,900 wordconcept pairs and more than 377,500 semantic and lexical links. Trying to visualize them all at once as a node-link diagram results in a representation which can be too big and complex for the user to grasp, and which cannot be easily processed by current web technologies. We propose a visualization technique based on the concept of semantic zooming, usually found in popular web map applications. This technique makes it feasible to handle large and complex graphs, and display them in an interactive representation. By zooming, it is possible to switch from an overview visualization, quickly and clearly presenting the global characteristics of the structure, to a detailed one, displaying a classic node-link diagram. WordNet Atlas is a web application that leverages this method to aid the user in the exploration of the WordNet data set, from its global taxonomy structure to the detail of words and synonym sets."}
{"_id": "0b5ce6a35b0c7e19c77a4b93cd317e3d3a3e2fa4", "title": "Reduced-Order Model and Stability Analysis of Low-Voltage DC Microgrid", "text": "Depleting fossil fuels, increasing energy demand, and need for high-reliability power supply motivate the use of dc microgrids. This paper analyzes the stability of low-voltage dc microgrid systems. Sources are controlled using a droop-based decentralized controller. Various components of the system have been modeled. A linearized system model is derived using small-signal approximation. The stability of the system is analyzed by identifying the eigenvalues of the system matrix. The sufficiency condition for stable operation of the system is derived. It provides upper bound on droop constants and is useful during planning and designing of dc microgrids. Furthermore, the sensitivity of system poles to variation in cable resistance and inductance is identified. It is proved that the poles move further inside the negative real plane with a decrease in inductance or an increase in resistance. The method proposed in this paper is applicable to any interconnecting structure of sources and loads. The results obtained by analysis are verified by detailed simulation study. Root locus plots are included to confirm the movement of system poles. The viability of the model is confirmed by experimental results from a scaled-down laboratory prototype of a dc microgrid developed for the purpose."}
{"_id": "bbebb8f7023ebfb2d2aff69a2dbad13e49b6f84d", "title": "Hierarchical Control of Droop-Controlled AC and DC Microgrids\u2014A General Approach Toward Standardization", "text": "AC and dc microgrids (MGs) are key elements for integrating renewable and distributed energy resources as well as distributed energy-storage systems. In the last several years, efforts toward the standardization of these MGs have been made. In this sense, this paper presents the hierarchical control derived from ISA-95 and electrical dispatching standards to endow smartness and flexibility to MGs. The hierarchical control proposed consists of three levels: 1) The primary control is based on the droop method, including an output-impedance virtual loop; 2) the secondary control allows the restoration of the deviations produced by the primary control; and 3) the tertiary control manages the power flow between the MG and the external electrical distribution system. Results from a hierarchical-controlled MG are provided to show the feasibility of the proposed approach."}
{"_id": "688ef6ec1f26125d84eb14e253ab815e2def5710", "title": "Secondary Load-Frequency Control for MicroGrids in Islanded Operation", "text": "The objective of this paper is to present novel control strategies for MicroGrid operation, especially in islanded mode. The control strategies involve mainly the coordination of secondary load-frequency control by a MicroGrid Central Controller that heads a hierarchical control system able to assure stable and secure operation when the islanding of the MicroGrid occurs and in load-following situations in islanded mode."}
{"_id": "791c408f97de64fbae0f2223cf433f2ad258f7d7", "title": "Decentralized Control for Parallel Operation of Distributed Generation Inverters Using Resistive Output Impedance", "text": "In this paper, a novel wireless load-sharing controller for islanding parallel inverters in an ac-distributed system is proposed. This paper explores the resistive output impedance of the parallel-connected inverters in an island microgrid. The control loops are devised and analyzed, taking into account the special nature of a low-voltage microgrid, in which the line impedance is mainly resistive and the distance between the inverters makes the control intercommunication between them difficult. In contrast with the conventional droop-control method, the proposed controller uses resistive output impedance, and as a result, a different control law is obtained. The controller is implemented by using a digital signal processor board, which only uses local measurements of the unit, thus increasing the modularity, reliability, and flexibility of the distributed system. Experimental results are provided from two 6-kVA inverters connected in parallel, showing the features of the proposed wireless control"}
{"_id": "39d56e05caa642bdb292832fa5a01c5c597a0203", "title": "Eval all, trust a few, do wrong to none: Comparing sentence generation models", "text": "In this paper, we study recent neural generative models for text generation related to variational autoencoders. These models employ various techniques to match the posterior and prior distributions, which is important to ensure a high sample quality and a low reconstruction error. In our study, we follow a rigorous evaluation protocol using a large set of previously used and novel automatic metrics and human evaluation of both generated samples and reconstructions. We hope that it will become the new evaluation standard when comparing neural generative models for text."}
{"_id": "8d89c3789e91f0b0d3ffd46bff50dd07618db636", "title": "DEEP LEARNING AND INTELLIGENT AUDIO MIXING", "text": "Mixing multitrack audio is a crucial part of music production. With recent advances in machine learning techniques such as deep learning, it is of great importance to conduct research on the applications of these methods in the field of automatic mixing. In this paper, we present a survey of intelligent audio mixing systems and their recent incorporation of deep neural networks. We propose to the community a research trajectory in the field of deep learning applied to intelligent music production systems. We conclude with a proof of concept based on stem audio mixing as a contentbased transformation using a deep autoencoder."}
{"_id": "0451c923703472b6c20ff11185001f24b76c48e3", "title": "Coordination and Geometric Optimization via Distributed Dynamical Systems", "text": "Emerging applications for networked and cooperative robots motivate the study of motion coordination for groups of agents. For example, it is envisioned that groups of agents will perform a variety of useful tasks including surveillance, exploration, and environmental monitoring. This paper deals with basic interactions among mobile agents such as \u201cmove away from the closest other agent\u201d or \u201cmove toward the furthest vertex of your own Voronoi polygon.\u201d These simple interactions amount to distributed dynamical systems because their implementation requires only minimal information about neighboring agents. We characterize the close relationship between these distributed dynamical systems and the disk-covering and sphere-packing cost functions from geometric optimization. Our main results are: (i) we characterize the smoothness properties of these geometric cost functions, (ii) we show that the interaction laws are variations of the nonsmooth gradient of the cost functions, and (iii) we establish various asymptotic convergence properties of the laws. The technical approach relies on concepts from computational geometry, nonsmooth analysis, and nonsmooth stability theory."}
{"_id": "a26267a6b8dd93104c2efb8cebe5a728112f6bbf", "title": "Online fuzzy c means", "text": "Clustering streaming data presents the problem of not having all the data available at one time. Further, the total size of the data may be larger than will fit in the available memory of a typical computer. If the data is very large, it is a challenge to apply fuzzy clustering algorithms to get a partition in a timely manner. In this paper, we present an online fuzzy clustering algorithm which can be used to cluster streaming data, as well as very large data sets which might be treated as streaming data. Results on several large volumes of magnetic resonance images show that the new algorithm produces partitions which are very close to what you could get if you clustered all the data at one time. So, the algorithm is an accurate approach for online clustering."}
{"_id": "00d39f57d9c2f430f1aec24171e2f73d46150e2c", "title": "The architecture of subterranean ant nests: beauty and mystery underfoot", "text": "Over the 100 million years of their evolution, ants have constructed or occupied nests in a wide range of materials and situations. A large number of ant species excavate nests in the soil, and these subterranean nests have evolved into a wide range of sizes and architectures. On the basis of casts made of such nests, this variation and the patterns that govern it are described. The possible functions of architectural features are discussed, as are the behavioral \u201crules\u201d through which the nests are created by worker ants."}
{"_id": "450a4c4840ecf5380dc18ca1dd55618ca6ce4940", "title": "A Reorder Buffer Design for High Performance Processors", "text": "Modern reorder buffers (ROBs) were conceived to improve processor performance by allowing instruction execution out of the original program order and run ahead of sequential instruction code exploiting existing instruction level parallelism (ILP). The ROB is a functional structure of a processor execution engine that supports speculative execution, physical register recycling, and precise exception recovering. Traditionally, the ROB is considered as a monolithic circular buffer with incoming instructions at the tail pointer after the decoding stage and completing instructions at the head pointer after the commitment stage. The latter stage verifies instructions that have been dispatched, issued, executed, and are not completed speculatively. This paper presents a design of distributed reorder buffer microarchitecture by using small structures near building blocks which work together, using the same tail and head pointer values on all structures for synchronization. The reduction of area, and therefore, the reduction of power and delay make this design suitable for both embedded and high performance microprocessors."}
{"_id": "a069865afb026979afc15913ac39266c013fd1da", "title": "Achieving ICS Resilience and Security through Granular Data Flow Management", "text": "Modern Industrial Control Systems (ICS) rely on enterprise to plant floor connectivity. Where the size, diversity, and therefore complexity of ICS increase, operational requirements, goals, and challenges defined by users across various sub-systems follow. Recent trends in Information Technology (IT) and Operational Technology (OT) convergence may cause operators to lose a comprehensive understanding of end-to-end data flow requirements. This presents a risk to system security and resilience. Sensors were once solely applied for operational process use, but now act as inputs supporting a diverse set of organisational requirements. If these are not fully understood, incomplete risk assessment, and inappropriate implementation of security controls could occur. In search of a solution, operators may turn to standards and guidelines. This paper reviews popular standards and guidelines, prior to the presentation of a case study and conceptual tool, highlighting the importance of data flows, critical data processing points, and system-to-user relationships. The proposed approach forms a basis for risk assessment and security control implementation, aiding the evolution of ICS security and resilience."}
{"_id": "87e064d1f7351c7dab9809fd0248297dcb841774", "title": "Adversarial Frontier Stitching for Remote Neural Network Watermarking", "text": "The state of the art performance of deep learning models comes at a high cost for companies and institutions, due to the tedious data collection and the heavy processing requirements. Recently, Uchida et al. (2017) proposed to watermark convolutional neural networks by embedding information into their weights. While this is a clear progress towards model protection, this technique solely allows for extracting the watermark from a network that one accesses locally and entirely. This is a clear impediment, as leaked models can be re-used privately, and thus not released publicly for ownership inspection. Instead, we aim at allowing the extraction of the watermark from a neural network (or any other machine learning model) that is operated remotely, and available through a service API. To this end, we propose to operate on the model\u2019s action itself, tweaking slightly its decision frontiers so that a set of specific queries convey the desired information. In present paper, we formally introduce the problem and propose a novel zerobit watermarking algorithm that makes use of adversarial model examples (called adversaries for short). While limiting the loss of performance of the protected model, this algorithm allows subsequent extraction of the watermark using only few remote queries. We experiment this approach on the MNIST dataset with three types of neural networks, demonstrating that e.g., watermarking with 100 images incurs a slight accuracy degradation, while being resilient to most removal attacks."}
{"_id": "0cf4105ec11fb5846e5ea1b9dea11f8ba16e391f", "title": "Strokelets: A Learned Multi-scale Representation for Scene Text Recognition", "text": "Driven by the wide range of applications, scene text detection and recognition have become active research topics in computer vision. Though extensively studied, localizing and reading text in uncontrolled environments remain extremely challenging, due to various interference factors. In this paper, we propose a novel multi-scale representation for scene text recognition. This representation consists of a set of detectable primitives, termed as strokelets, which capture the essential substructures of characters at different granularities. Strokelets possess four distinctive advantages: (1) Usability: automatically learned from bounding box labels, (2) Robustness: insensitive to interference factors, (3) Generality: applicable to variant languages, and (4) Expressivity: effective at describing characters. Extensive experiments on standard benchmarks verify the advantages of strokelets and demonstrate the effectiveness of the proposed algorithm for text recognition."}
{"_id": "11c94b5b20c0347f4e5e9f70749018b6ca318afc", "title": "Unsupervised Footwear Impression Analysis and Retrieval from Crime Scene Data", "text": "Footwear impressions are one of the most frequently secured types of evidence at crime scenes. For the investigation of crime series they are among the major investigative notes. In this paper, we introduce an unsupervised footwear retrieval algorithm that is able to cope with unconstrained noise conditions and is invariant to rigid transformations. A main challenge for the automated impression analysis is the separation of the actual shoe sole information from the structured background noise. We approach this issue by the analysis of periodic patterns. Given unconstrained noise conditions, the redundancy within periodic patterns makes them the most reliable information source in the image. In this work, we present four main contributions: First, we robustly measure local periodicity by fitting a periodic pattern model to the image. Second, based on the model, we normalize the orientation of the image and compute the window size for a local Fourier transformation. In this way, we avoid distortions of the frequency spectrum through other structures or boundary artefacts. Third, we segment the pattern through robust point-wise classification, making use of the property that the amplitudes of the frequency spectrum are constant for each position in a periodic pattern. Finally, the similarity between footwear impressions is measured by comparing the Fourier representations of the periodic patterns. We demonstrate robustness against severe noise distortions as well as rigid transformations on a database with real crime scene impressions. Moreover, we make our database available to the public, thus enabling standardized benchmarking for the first time."}
{"_id": "18b4db8a705a65c277912fc13647b844c1de54d0", "title": "An approach to online identification of Takagi-Sugeno fuzzy models", "text": "An approach to the online learning of Takagi-Sugeno (TS) type models is proposed in the paper. It is based on a novel learning algorithm that recursively updates TS model structure and parameters by combining supervised and unsupervised learning. The rule-base and parameters of the TS model continually evolve by adding new rules with more summarization power and by modifying existing rules and parameters. In this way, the rule-base structure is inherited and up-dated when new data become available. By applying this learning concept to the TS model we arrive at a new type adaptive model called the Evolving Takagi-Sugeno model (ETS). The adaptive nature of these evolving TS models in combination with the highly transparent and compact form of fuzzy rules makes them a promising candidate for online modeling and control of complex processes, competitive to neural networks. The approach has been tested on data from an air-conditioning installation serving a real building. The results illustrate the viability and efficiency of the approach. The proposed concept, however, has significantly wider implications in a number of fields, including adaptive nonlinear control, fault detection and diagnostics, performance analysis, forecasting, knowledge extraction, robotics, behavior modeling."}
{"_id": "38731066f2e444c69818c8533b219b3db2826f18", "title": "Automatic speech recognition and speech variability: A review", "text": "Major progress is being recorded regularly on both the technology and exploitation of automatic speech recognition (ASR) and spoken language systems. However, there are still technological barriers to flexible solutions and user satisfaction under some circumstances. This is related to several factors, such as the sensitivity to the environment (background noise), or the weak representation of grammatical and semantic knowledge. Current research is also emphasizing deficiencies in dealing with variation naturally present in speech. For instance, the lack of robustness to foreign accents precludes the use by specific populations. Also, some applications, like directory assistance, particularly stress the core recognition technology due to the very high active vocabulary (application perplexity). There are actually many factors affecting the speech realization: regional, sociolinguistic, or related to the environment or the speaker herself. These create a wide range of variations that may not be modeled correctly (speaker, gender, speaking rate, vocal effort, regional accent, speaking style, non-stationarity, etc.), especially when resources for system training are scarce. This paper outlines current advances related to these topics. 2007 Elsevier B.V. All rights reserved."}
{"_id": "dabbe2b9310c03999668ee6dbffb9d710fb3a621", "title": "Accurate Pulmonary Nodule Detection in Computed Tomography Images Using Deep Convolutional Neural Networks", "text": "Early detection of pulmonary cancer is the most promising way to enhance a patient\u2019s chance for survival. Accurate pulmonary nodule detection in computed tomography (CT) images is a crucial step in diagnosing pulmonary cancer. In this paper, inspired by the successful use of deep convolutional neural networks (DCNNs) in natural image recognition, we propose a novel pulmonary nodule detection approach based on DCNNs. We first introduce a deconvolutional structure to Faster Region-based Convolutional Neural Network (Faster R-CNN) for candidate detection on axial slices. Then, a three-dimensional DCNN is presented for the subsequent false positive reduction. Experimental results of the LUng Nodule Analysis 2016 (LUNA16) Challenge demonstrate the superior detection performance of the proposed approach on nodule detection(average FROC-score of 0.891, ranking the 1st place over all submitted results)."}
{"_id": "9915efb95d14d3fb45bd9834982b15dda5d4abc8", "title": "Kalman Particle Filter for lane recognition on rural roads", "text": "Despite the availability of lane departure and lane keeping systems for highway assistance, unmarked and winding rural roads still pose challenges to lane recognition systems. To detect an upcoming curve as soon as possible, the viewing range of image-based lane recognition systems has to be extended. This is done by evaluating 3D information obtained from stereo vision or imaging radar in this paper. Both sensors deliver evidence grids as the basis for road course estimation. Besides known Kalman Filter approaches, Particle Filters have recently gained interest since they offer the possibility to employ cues of a road, which can not be described as measurements needed for a Kalman Filter approach. We propose to combine both principles and their benefits in a Kalman Particle Filter. The comparison between the results gained from this recently published filter scheme and the classical approaches using real world data proves the advantages of the Kalman Particle Filter."}
{"_id": "10b4dd334893c12d81d61cff7cf2e3b2e6b1ef21", "title": "A risk taxonomy proposal for software maintenance", "text": "There can be no doubt that risk management is an important activity in the software engineering area. One proof of this is the large body of work existing in this area. However, when one takes a closer look at it, one perceives that almost all this work is concerned with risk management for software development projects. The literature on risk management for software maintenance is much scarcer. On the other hand, software maintenance projects do present specificities that imply they offer different risks than development. This suggests that maintenance projects could greatly benefit from better risk management tools. One step in this direction would be to help identifying potential risk factors at the beginning of a maintenance project. For this, we propose taxonomy of possible risks for software management projects. The ontology was created from: i) an extensive survey of risk management literature, to list known risk factors for software development; and, ii) an extensive survey of maintenance literature, to list known problems that may occur during maintenance."}
{"_id": "a6eee06341987faeb8b6135d00d578d0d8893162", "title": "An industrial study on the risk of software changes", "text": "Modelling and understanding bugs has been the focus of much of the Software Engineering research today. However, organizations are interested in more than just bugs. In particular, they are more concerned about managing risk, i.e., the likelihood that a code or design change will cause a negative impact on their products and processes, regardless of whether or not it introduces a bug. In this paper, we conduct a year-long study involving more than 450 developers of a large enterprise, spanning more than 60 teams, to better understand risky changes, i.e., changes for which developers believe that additional attention is needed in the form of careful code or design reviewing and/or more testing. Our findings show that different developers and different teams have their own criteria for determining risky changes. Using factors extracted from the changes and the history of the files modified by the changes, we are able to accurately identify risky changes with a recall of more than 67%, and a precision improvement of 87% (using developer specific models) and 37% (using team specific models), over a random model. We find that the number of lines and chunks of code added by the change, the bugginess of the files being changed, the number of bug reports linked to a change and the developer experience are the best indicators of change risk. In addition, we find that when a change has many related changes, the reliability of developers in marking risky changes is negatively affected. Our findings and models are being used today in practice to manage the risk of software projects."}
{"_id": "c92ceb6b20df814252a2d0afa601ce88ffb73cc8", "title": "Generative Modeling with Conditional Autoencoders: Building an Integrated Cell", "text": "We present a conditional generative model to learn variation in cell and nuclear morphology and the location of subcellular structures from microscopy images. Our model generalizes to a wide range of subcellular localization and allows for a probabilistic interpretation of cell and nuclear morphology and structure localization from fluorescence images. We demonstrate the effectiveness of our approach by producing photorealistic cell images using our generative model. The conditional nature of the model provides the ability to predict the localization of unobserved structures given cell and nuclear morphology."}
{"_id": "2011736601756486b7a7b0f6b151222ac65121b4", "title": "Stock Direction Forecasting Techniques: An Empirical Study Combining Machine Learning System with Market Indicators in the Indian Context", "text": "Stock price movement prediction has been one of the most challenging issues in finance since the time immemorial. Many researchers in past have carried out extensive studies with the intention of investigating the approaches that uncover the hidden information in stock market data. As a result of which, Artificial Intelligence and data mining techniques have come to the forefront because of their ability to map nonlinear data. The study encapsulates market indicators with AI techniques to generate useful extracts to improve decisions under conditions of uncertainty. Three approaches (fundamental model, technical indicators model and hybrid model) have been tested using the standalone and integrated machine learning algorithms viz. SVM, ANN, GA-SVM, and GA-ANN and the results of all the three approaches have been compared in the four above mentioned methods. The core objective of this paper is to identify an approach from the above mentioned algorithms that best predicts the Indian stocks price movement. It is observed from the results that the use of GA significantly increases the accuracy of ANN and that the use of technical analysis with SVM and ANN is well suited for Indian stocks and can help investors and traders maximize their quarterly profits."}
{"_id": "b2c4bfd31c9ff0aded9040ac22db71a32ce8d58b", "title": "Computation of the characteristics of a claw pole alternator through the finite element method", "text": "The paper presents the analysis of a 3-D numerical model developed for a claw pole alternator. The complex structure of the claw-pole magnetic circuit required a 3D FEM model and a double scalar potential magnetic \u03c6-\u03c6red formulation, in order to reduce computing time and memory. The no-load and magnetization characteristics and the e.m.f. time dependence have been calculated. Working characteristics and the induced voltage in the static winding can be calculated by knowing the 3D distribution of the field in stationary magnetic regime for successive positions of the rotor considering the stator."}
{"_id": "a80caf1996a7ad2ce43abe6a7a78c7ded4adea8c", "title": "Deep Reinforcement Learning for Optimal Control of Space Heating", "text": "Classical methods to control heating systems are often marred by suboptimal performance, inability to adapt to dynamic conditions and unreasonable assumptions e.g. existence of building models. This paper presents a novel deep reinforcement learning algorithm which can control space heating in buildings in a computationally efficient manner, and benchmarks it against other known techniques. The proposed algorithm outperforms rule based control by between 5-10% in a simulation environment for a number of price signals. We conclude that, while not optimal, the proposed algorithm offers additional practical advantages such as faster computation times and increased robustness to non-stationarities in building dynamics."}
{"_id": "a16775abc45bfc9fca92f375ccc5032289d893b5", "title": "Star Ratings versus Sentiment Analysis -- A Comparison of Explicit and Implicit Measures of Opinions", "text": "A typical trade-off in decision making is between the cost of acquiring information and the decline in decision quality caused by insufficient information. Consumers regularly face this trade-off in purchase decisions. Online product/service reviews serve as sources of product/service related information. Meanwhile, modern technology has led to an abundance of such content, which makes it prohibitively costly (if possible at all) to exhaust all available information. Consumers need to decide what subset of available information to use. Star ratings are excellent cues for this decision as they provide a quick indication of the tone of a review. However there are cases where such ratings are not available or detailed enough. Sentiment analysis -text analytic techniques that automatically detect the polarity of text- can help in these situations with more refined analysis. In this study, we compare sentiment analysis results with star ratings in three different domains to explore the promise of this technique."}
{"_id": "2110fe9907b873be02e4a26a01a3b08ca66035a6", "title": "Design and Field Test of a WSN Platform Prototype for Long-Term Environmental Monitoring", "text": "Long-term wildfire monitoring using distributed in situ temperature sensors is an accurate, yet demanding environmental monitoring application, which requires long-life, low-maintenance, low-cost sensors and a simple, fast, error-proof deployment procedure. We present in this paper the most important design considerations and optimizations of all elements of a low-cost WSN platform prototype for long-term, low-maintenance pervasive wildfire monitoring, its preparation for a nearly three-month field test, the analysis of the causes of failure during the test and the lessons learned for platform improvement. The main components of the total cost of the platform (nodes, deployment and maintenance) are carefully analyzed and optimized for this application. The gateways are designed to operate with resources that are generally used for sensor nodes, while the requirements and cost of the sensor nodes are significantly lower. We define and test in simulation and in the field experiment a simple, but effective communication protocol for this application. It helps to lower the cost of the nodes and field deployment procedure, while extending the theoretical lifetime of the sensor nodes to over 16 years on a single 1 Ah lithium battery."}
{"_id": "585bf9bf946b4cd4571b2fe6f73f5a7ba9a3d601", "title": "Building Natural Language Interfaces to Web APIs", "text": "As the Web evolves towards a service-oriented architecture, application program interfaces (APIs) are becoming an increasingly important way to provide access to data, services, and devices. We study the problem of natural language interface to APIs (NL2APIs), with a focus on web APIs for web services. Such NL2APIs have many potential benefits, for example, facilitating the integration of web services into virtual assistants.\n We propose the first end-to-end framework to build an NL2API for a given web API. A key challenge is to collect training data, i.e., NL command-API call pairs, from which an NL2API can learn the semantic mapping from ambiguous, informal NL commands to formal API calls. We propose a novel approach to collect training data for NL2API via crowdsourcing, where crowd workers are employed to generate diversified NL commands. We optimize the crowdsourcing process to further reduce the cost. More specifically, we propose a novel hierarchical probabilistic model for the crowdsourcing process, which guides us to allocate budget to those API calls that have a high value for training NL2APIs. We apply our framework to real-world APIs, and show that it can collect high-quality training data at a low cost, and build NL2APIs with good performance from scratch. We also show that our modeling of the crowdsourcing process can improve its effectiveness, such that the training data collected via our approach leads to better performance of NL2APIs than a strong baseline."}
{"_id": "9c375b82db7c42addc406adbed8a796a6ad7fb15", "title": "Contrast-Enhanced Black and White Images", "text": "This paper investigates contrast enhancement as an approach to tone reduction, aiming to convert a photograph to black and white. Using a filter-based approach to strengthen contrast, we avoid making a hard decision about how to assign tones to segmented regions. Our method is inspired by sticks filtering, used to enhance medical images but not previously used in non-photorealistic rendering. We amplify contrast of pixels along the direction of greatest local difference from the mean, strengthening even weak features if they are most prominent. A final thresholding step converts the contrast-enhanced image to black and white. Local smoothing and contrast enhancement balances abstraction and structure preservation; the main advantage of our method is its faithful depiction of image detail. Our method can create a set of effects: line drawing, hatching, and black and white, all having superior details to previous black and white methods."}
{"_id": "409ff05931b5f252935930ecd8de4e62bc0c7d80", "title": "The City Browser : Utilizing Massive Call Data to Infer City Mobility Dynamics", "text": "This paper presents the City Browser, a tool developed to analyze the complexities underlying human mobility at the city scale. The tool uses data generated from mobile phones as a proxy to provide several insights with regards to the commuting patterns of the population within the bounds of a city. The three major components of the browser are the data warehouse, modules and algorithm, and the visualization interface. The modules and algorithm component utilizes Call Detail Records (CDRs) stored within the data warehouse to infer mobility patterns that are then communicated through the visualization interface. The modules and algorithm component consists of four modules: the spatial-temporal decomposition module, the home/work capturing module, the community detection module, and the flow estimation module. The visualization interface manages the output of each module to provide a comprehensive view of a city\u2019s mobility dynamics over varying time scales. A case study is presented on the city of Riyadh in Saudi Arabia, where the browser was developed to better understand city mobility patterns."}
{"_id": "1a5214cdb88ca0c4f276d8c4e5797d19c662b8a4", "title": "A comprehensive framework for testing graphical user interfaces", "text": "The widespread recognition of the usefulness of graphical user interfaces GUIs has established their importance as critical components of today's software. Although the use of GUIs continues to grow, GUI testing has remained a neglected research area. Since GUIs have c haracteristics that are diierent from those of conventional software, such as user events for input and graphical output, techniques developed to test conventional software cannot bedirectly applied to test GUIs. This thesis develops a uniied solution to the GUI testing problem with the particular goals of automation and integration of tools and techniques used in various phases of GUI testing. These goals are accomplished by developing a GUI testing framework with a GUI model as its central component. For eeciency and scalability, a GUI is represented as a hierarchy of components, each used as a basic unit of testing. The framework also includes a test coverage evaluator, test case generator, test oracle, test executor, and regression tester. The test coverage evaluator employs hierarchical, event-based coverage criteria to automatically specify what to test in a GUI and to determine whether the test suite has adequately tested the GUI. The test case generator employs plan generation techniques from artiicial intelligence to automatically generate a test suite. A test executor automatically executes all the test cases on the GUI. As test cases are being executed, a test oracle automatically determines the correctness of the GUI. The test oracle employs a model of the expected state of the GUI in terms of its constituent objects and their properties. After changes are made to a GUI, a regression tester partitions the original GUI test suite into valid test cases that represent correct inputtoutput for the modiied GUI and invalid test cases that no longer represent correct inputtoutput. The regression tester employs a new technique to reuse some of the invalid test cases by repairing them. A cursory exploration of extending the framework to handle the new testing requirements of web-user interfaces WUIs is also done. The framework iv has been implemented and experiments have demonstrated that the developed techniques are both practical and useful. v Acknowledgements I w ould like to thank my parents whose constant eeorts, encouragement and hard work made achieving the goal of obtaining a Ph.D. possible. I thank all my teachers in schools, colleges, and universities whose dedication and hard work helped lay the foundation for this work. Special thanks \u2026"}
{"_id": "2d4abf7523cda78e39029c46b19cbae74e7ee31b", "title": "A Safe, Efficient Regression Test Selection Technique", "text": "Regression testing is an expensive but necessary maintenance activity performed on modified software to provide confidence that changes are correct and do not adversely affect other portions of the softwore. A regression test selection technique choses, from an existing test set, thests that are deemed necessary to validate modified software. We present a new technique for regression test selection. Our algorithms construct control flow graphs for a precedure or program and its modified version and use these graphs to select tests that execute changed code from the original test suite. We prove that, under certain conditions, the set of tests our technique selects includes every test from the original test suite that con expose faults in the modified procedfdure or program. Under these conditions our algorithms are safe. Moreover, although our algorithms may select some tests that cannot expose faults, they are at lease as precise as other safe regression test selection algorithms. Unlike many other regression test selection algorithms, our algorithms handle all language constructs and all types of program modifications. We have implemented our algorithms; initial empirical studies indicate that our technique can significantly reduce the cost of regression testing modified software."}
{"_id": "6f098bda64fbd59215a3e9686306b4dfb7ed3ac7", "title": "Coverage criteria for GUI testing", "text": "A widespread recognition of the usefulness of graphical user interfaces (GUIs) has established their importance as critical components of today's software. GUIs have characteristics different from traditional software, and conventional testing techniques do not directly apply to GUIs. This paper's focus is on coverage critieria for GUIs, important rules that provide an objective measure of test quality. We present new coverage criteria to help determine whether a GUI has been adequately tested. These coverage criteria use events and event sequences to specify a measure of test adequacy. Since the total number of permutations of event sequences in any non-trivial GUI is extremely large, the GUI's hierarchical structure is exploited to identify the important event sequences to be tested. A GUI is decomposed into GUI components, each of which is used as a basic unit of testing. A representation of a GUI component, called an event-flow graph, identifies the interaction of events within a component and intra-component criteria are used to evaluate the adequacy of tests on these events. The hierarchical relationship among components is represented by an integration tree, and inter-component coverage criteria are used to evaluate the adequacy of test sequences that cross components. Algorithms are given to construct event-flow graphs and an integration tree for a given GUI, and to evaluate the coverage of a given test suite with respect to the new coverage criteria. A case study illustrates the usefulness of the coverage report to guide further testing and an important correlation between event-based coverage of a GUI and statement coverage of its software's underlying code."}
{"_id": "833723cbc2d1930d7e002acd882fda73152b213c", "title": "A Safe, Efficient Algorithm for Regression Test Selection", "text": "Regression testing is a necessary but costly maintenance activity aimed at demonstrating that code has not been adversely aaected by changes. A selective approach to regression testing selects tests for a modi-ed program from an existing test suite. We present a new technique for selective regression testing. Our algorithm constructs control dependence graphs for program versions, and uses these graphs to determine which tests from the existing test suite may exhibit changed behavior on the new version. Unlike most previous techniques for selective retest, our algorithm selects every test from the original test suite that might expose errors in the modiied program, and does this without prior knowledge of program modiications. Our algorithm handles all language constructs and program modiications, and is easily automated."}
{"_id": "0a37a647a2f8464379a1fe327f93561c90d91405", "title": "An Introduction to Least Commitment Planning", "text": "Recent developments have clari ed the process of generating partially ordered, partially speci ed sequences of actions whose execution will achive an agent's goal. This paper summarizes a progression of least commitment planners, starting with one that handles the simple strips representation, and ending with one that manages actions with disjunctive precondition, conditional e ects and universal quanti cation over dynamic universes. Along the way we explain how Chapman's formulation of the Modal Truth Criterion is misleading and why his NP-completeness result for reasoning about plans with conditional e ects does not apply to our planner. 1 I thank Franz Amador, Tony Barrett, Darren Cronquist, Denise Draper, Ernie Davis, Oren Etzioni, Nort Fowler, Rao Kambhampati, Craig Knoblock, Nick Kushmerick, Neal Lesh, Karen Lochbaum, Drew McDermott, Ramesh Patil, Kari Pulli, Ying Sun, Austin Tate and Mike Williamson for helpful comments, but retain sole responsibility for errors. This research was funded in part by O ce of Naval Research Grant 90-J-1904 and by National Science Foundation Grant IRI-8957302"}
{"_id": "b366fce866deceddb9acc46a87481bc6b36b0850", "title": "Polygenic Influence on Educational Attainment: New evidence from The National Longitudinal Study of Adolescent to Adult Health.", "text": "Recent studies have begun to uncover the genetic architecture of educational attainment. We build on this work using genome-wide data from siblings in the National Longitudinal Study of Adolescent to Adult Health (Add Health). We measure the genetic predisposition of siblings to educational attainment using polygenic scores. We then test how polygenic scores are related to social environments and educational outcomes. In Add Health, genetic predisposition to educational attainment is patterned across the social environment. Participants with higher polygenic scores were more likely to grow up in socially advantaged families. Even so, the previously published genetic associations appear to be causal. Among pairs of siblings, the sibling with the higher polygenic score typically went on to complete more years of schooling as compared to their lower-scored co-sibling. We found subtle differences between sibling fixed effect estimates of the genetic effect versus those based on unrelated individuals."}
{"_id": "3889a2a0b3a178136aea3c5b91905bd20e765b4f", "title": "A 4-\u00b5W 0.8-V rail-to-rail input/output CMOS fully differential OpAmp", "text": "This paper presents an ultra low power rail-to-rail input/output operational amplifier (OpAmp) designed in a low cost 0.18 \u00b5m CMOS technology. In this OpAmp, rail-to-rail input operation is enabled by using complementary input pairs with gm control. To maximize the output swing a rail-to-rail output stage is employed. For low-voltage low-power operation, the operating transistors in the input and output stage are biased in the sub-threshold region. The simulated DC open loop gain is 51 dB, and the slew-rate is 0.04 V/\u00b5s with a 10 pF capacitive load connected to each of the amplifier outputs. For the same load, the simulated unity gain frequency is 131 kHz with a 64\u00b0 phase margin. A common-mode feed-forward circuit (CMFF) increases CMRR, reducing drastically the variations in the output common mode voltage and keeping the DC gain almost constant. In fact, their relative error remains below 1.2 % for a (\u221220\u00b0C, +120\u00b0C) temperature span. In addition, the proposed OpAmp is very simple and consumes only 4 \u00b5W at 0.8 V supply."}
{"_id": "4b18303edf701e41a288da36f8f1ba129da67eb7", "title": "An embarrassingly simple approach to zero-shot learning", "text": "Zero-shot learning consists in learning how to recognise new concepts by just having a description of them. Many sophisticated approaches have been proposed to address the challenges this problem comprises. In this paper we describe a zero-shot learning approach that can be implemented in just one line of code, yet it is able to outperform state of the art approaches on standard datasets. The approach is based on a more general framework which models the relationships between features, attributes, and classes as a two linear layers network, where the weights of the top layer are not learned but are given by the environment. We further provide a learning bound on the generalisation error of this kind of approaches, by casting them as domain adaptation methods. In experiments carried out on three standard real datasets, we found that our approach is able to perform significantly better than the state of art on all of them, obtaining a ratio of improvement up to 17%."}
{"_id": "516f842467cc89f0d6551823d6aa0af2c3233c75", "title": "Mosaic: quantifying privacy leakage in mobile networks", "text": "With the proliferation of online social networking (OSN) and mobile devices, preserving user privacy has become a great challenge. While prior studies have directly focused on OSN services, we call attention to the privacy leakage in mobile network data. This concern is motivated by two factors. First, the prevalence of OSN usage leaves identifiable digital footprints that can be traced back to users in the real-world. Second, the association between users and their mobile devices makes it easier to associate traffic to its owners. These pose a serious threat to user privacy as they enable an adversary to attribute significant portions of data traffic including the ones with NO identity leaks to network users' true identities. To demonstrate its feasibility, we develop the Tessellation methodology. By applying Tessellation on traffic from a cellular service provider (CSP), we show that up to 50% of the traffic can be attributed to the names of users. In addition to revealing the user identity, the reconstructed profile, dubbed as \"mosaic,\" associates personal information such as political views, browsing habits, and favorite apps to the users. We conclude by discussing approaches for preventing and mitigating the alarming leakage of sensitive user information."}
{"_id": "d3d631baf08f6df03bdbadcfdc8938206ea96c5f", "title": "A similarity-based prognostics approach for Remaining Useful Life estimation of engineered systems", "text": "This paper presents a similarity-based approach for estimating the Remaining Useful Life (RUL) in prognostics. The approach is especially suitable for situations in which abundant run-to-failure data for an engineered system are available. Data from multiple units of the same system are used to create a library of degradation patterns. When estimating the RUL of a test unit, the data from it will be matched to those patterns in the library and the actual life of those matched units will be used as the basis of estimation. This approach is used to tackle the data challenge problem defined by the 2008 PHM Data Challenge Competition, in which, run-to-failure data of an unspecified engineered system are provided and the RUL of a set of test units will be estimated. Results show that the similarity-based approach is very effective in performing RUL estimation."}
{"_id": "7f6061c83dc36633911e4d726a497cdc1f31e58a", "title": "YouTube-8M: A Large-Scale Video Classification Benchmark", "text": "Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of\u223c8 million videos\u2014500K hours of video\u2014annotated with a vocabulary of 4800 visual entities. To get the videos and their (multiple) labels, we used a YouTube video annotation system, which labels videos with the main topics in them. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals, so they represent an excellent target for content-based annotation approaches. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pretrained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. The dataset contains frame-level features for over 1.9 billion video frames and 8 million videos, making it the largest public multi-label video dataset. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using the publicly-available TensorFlow framework. We plan to release code for training a basic TensorFlow model and for computing metrics. We show that pre-training on large data generalizes to other datasets like Sports-1M and ActivityNet. We achieve state-of-the-art on ActivityNet, improving mAP from 53.8% to 77.6%. We hope that the unprecedented scale and diversity of YouTube-8M will lead to advances in video understanding and representation learning."}
{"_id": "63f97f3b4808baeb3b16b68fcfdb0c786868baba", "title": "Double-ridged horn antenna operating in 18\u201340 GHz range", "text": "The whole design cycle, including design, fabrication and characterization of a broadband double-ridged horn antenna equipped with a parabolic lens and a waveguide adapter is presented in this paper. A major goal of the presented work was to obtain high directivity with a flat phase characteristic within the main radiation lobe at an 18\u201340 GHz frequency range, so that the antenna can be applicable in a freespace material characterization setup."}
{"_id": "27dc1f8252c94877c680634ee9119547429ab9b1", "title": "Population Cost Prediction on Public Healthcare Datasets", "text": "The increasing availability of digital health records should ideally improve accountability in healthcare. In this context, the study of predictive modeling of healthcare costs forms a foundation for accountable care, at both population and individual patient-level care. In this research we use machine learning algorithms for accurate predictions of healthcare costs on publicly available claims and survey data. Specifically, we investigate the use of the regression trees, M5 model trees and random forest, to predict healthcare costs of individual patients given their prior medical (and cost) history.\n Overall, three observations showcase the utility of our research: (a) prior healthcare cost alone can be a good indicator for future healthcare cost, (b) M5 model tree technique led to very accurate future healthcare cost prediction, and (c) although state-of-the-art machine learning algorithms are also limited by skewed cost distributions in healthcare, for a large fraction (75%) of population, we were able to predict with higher accuracy using these algorithms. In particular, using M5 model trees we were able to accurately predict costs within less than $125 for 75% of the population when compared to prior techniques. Since models for predicting healthcare costs are often used to ascertain overall population health, our work is useful to evaluate future costs for large segments of disease populations with reasonably low error as demonstrated in our results on real-world publicly available datasets."}
{"_id": "f5d40bb7a636a042f5c005273ecdae72ee29216c", "title": "Characterization of RF Noise in UTBB FD-SOI MOSFET", "text": "In this paper, we report the noise measurements in the RF frequency range for ultrathin body and thin buried oxide fully depleted silicon on insulator (FD-SOI) transistors. We analyze the impact of back and front gate biases on the various noise parameters; along with discussions on the secondary effects in FD-SOI transistors which contribute to the thermal noise. Using calibrated TCAD simulations, we show that the noise figure changes with the substrate doping and buried oxide thickness."}
{"_id": "213cb7593934bc675c336f53dd6c61a3c799be80", "title": "Duplicate Record Detection: A Survey", "text": "Often, in the real world, entities have two or more representations in databases. Duplicate records do not share a common key and/or they contain errors that make duplicate matching a difficult task. Errors are introduced as the result of transcription errors, incomplete information, lack of standard formats, or any combination of these factors. In this paper, we present a thorough analysis of the literature on duplicate record detection. We cover similarity metrics that are commonly used to detect similar field entries, and we present an extensive set of duplicate detection algorithms that can detect approximately duplicate records in a database. We also cover multiple techniques for improving the efficiency and scalability of approximate duplicate detection algorithms. We conclude with coverage of existing tools and with a brief discussion of the big open problems in the area"}
{"_id": "f59e2d2cf33c11bd9b58e157298ece51e28bfbba", "title": "Enhancing bank security system using Face Recognition, Iris Scanner and Palm Vein Technology", "text": "The objective of this paper is to design a bank locker security system which is using Face Recognition, Iris Scanner and Palm Vein Technology (PVR) for securing valuable belongings. A face recognition system is a system which identifies and authenticates the image of the authorized user by using MATLAB software. The images of a person entering a unrestricted zone are taken by the camera and software compares the image with an existing database of valid users. Iris Recognition system uses generous characteristics present in human body. This technology is employed for biometric authentication in ATM\u2019s, Immigration & border control, public safety, hospitality and tourism etc. This paper serves techniques so that capability of palm vein recognition system can be improved using modified vascular pattern thinning algorithm. Palm Vein Recognition (PVR) is a technology which recognizes palm vein pattern of an individual and matches it with the data stored in database for authentication. This is a very reliable technique and has a very good accuracy and is considered as the most secure and better technique for security purposes."}
{"_id": "c5151f18c2499f1d95522536f167f2fcf75f647f", "title": "Handover decision using fuzzy MADM in heterogeneous networks", "text": "In the next generation heterogeneous wireless networks, a user with a multi-interface terminal may have a network access from different service providers using various technologies. It is believed that handover decision is based on multiple criteria as well as user preference. Various approaches have been proposed to solve the handover decision problem, but the choice of decision method appears to be arbitrary and some of the methods even give disputable results. In this paper, a new handover criteria is introduced along with a new handover decision strategy. In addition, handover decision is identified us a fuzzy multiple attribute decision making (MADM) problem, and fuzzy logic is applied to deal with the imprecise information of some criteria and user preference. After a systematic analysis of various fuzzy MADM methods, a feasible approach is presented. In the end, examples are provided illustrating the proposed methods and the sensitivity of the methods is also analysed."}
{"_id": "e321ab5d7a98e18253ed7874946a229a10e40f26", "title": "A Hybrid Feature Selection Algorithm For Classification Unbalanced Data Processsing", "text": "The performance and accuracy of classifier are affected by the result of feature selection directly. Based on the one-class F-Score feature selection and the improved F-Score feature selection and genetic algorithm, combined with machine learning methods like the K nearest neighbor, support vector machine, random forest, naive Bayes, a hybrid feature selection algorithm is proposed to process the two classification unbalanced data problem and multi classification problem. Compared with the traditional machine learning algorithm, it can search in wider feature space and promote classifier to deal with the characteristics of unbalanced data sets according to heuristic rules, which can handle the problem of unbalanced classification better. The experiment results show that the area under receiver operating characteristic curve for two classifications and the accuracy rate for multi classification problem have been improved compared with other models"}
{"_id": "a81548c643ffb6d5f5e935880bb9db6290e5a386", "title": "Modular bases for fluid dynamics", "text": "We present a new approach to fluid simulation that balances the speed of model reduction with the flexibility of grid-based methods. We construct a set of composable reduced models, or tiles, which capture spatially localized fluid behavior. We then precompute coupling terms so that these models can be rearranged at runtime. To enforce consistency between tiles, we introduce constraint reduction. This technique modifies a reduced model so that a given set of linear constraints can be fulfilled. Because dynamics and constraints can be solved entirely in the reduced space, our method is extremely fast and scales to large domains."}
{"_id": "a0c8b78146a19bfee1296c06eca79dd6c53dd40a", "title": "Conceptual Impact-Based Recommender System for CiteSeerx", "text": "CiteSeer is a digital library for scientific publications written by Computer Science researchers. Users are able to retrieve relevant documents from the database by searching by author name and/or keyword queries. Users may also receive recommendations of papers they might want to read provided by an existing conceptual recommender system. This system recommends documents based on an automaticallyconstructed user profile. Unlike traditional content-based recommender systems, the documents and the user profile are represented as concepts vectors rather than keyword vectors and papers are recommended based on conceptual matches rather than keyword matches between the profile and the documents. Although the current system provides recommendations that are on-topic, they are not necessarily high quality papers. In this work, we introduce the Conceptual Impact-Based Recommender (CIBR), a hybrid recommender system that extends the existing conceptual recommender system in CiteSeer by including an explicit quality factor as part of the recommendation criteria. To measure quality, our system considers the impact factor of each paper\u2019s authors as measured by the authors\u2019 h-index. Experiments to evaluate the effectiveness of our hybrid system show that the CIBR system recommends more relevant papers as compared to the conceptual recommender system."}
{"_id": "4c4e26c482da08a749b3e6585cd1cbb51089e8f4", "title": "Design & construction of a Vertical Axis Wind Turbine", "text": "A wind turbine is a device that converts kinetic energy from the wind into electrical power. Vertical Axis Wind Turbine (VAWT) is one type of wind turbine where the main rotor shaft is set vertically and it can capture wind from any direction. The aim of this work is to develop a theoretical model for the design and performance of Darrieus type vertical axis wind turbine for small scale energy applications. A small 3 bladed turbine (prototype) is constructed and investigated the performance for low wind velocity. The model is based on NACA 0018 airfoil & light wood is used as blade material. The full scale Vertical Axis Wind Turbine is made for 36 inch height, 24 inch diameter, blade cord length is 3.937 inch & blade height is 24 inch. A 100 watt 24 volt brushless DC motor is used to measure output power. The whirling speed of blade & electric power output for the corresponding speed is measured through Tachometer & Wattmeter. The power curves show the relation between the rotational wind speed of the turbine and the power produced for a range of wind speeds. This approach indicates to develop vertical axis wind turbine with better performance to meet the increasing power demand."}
{"_id": "3a60d77d4bbc7561b011d004adbcb47b17080fbc", "title": "Learning Hierarchical Semantic Image Manipulation through Structured Representations", "text": "Understanding, reasoning, and manipulating semantic concepts of images have been a fundamental research problem for decades. Previous work mainly focused on direct manipulation on natural image manifold through color strokes, keypoints, textures, and holes-to-fill. In this work, we present a novel hierarchical framework for semantic image manipulation. Key to our hierarchical framework is that we employ structured semantic layout as our intermediate representation for manipulation. Initialized with coarse-level bounding boxes, our structure generator first creates pixel-wise semantic layout capturing the object shape, object-object interactions, and object-scene relations. Then our image generator fills in the pixel-level textures guided by the semantic layout. Such framework allows a user to manipulate images at object-level by adding, removing, and moving one bounding box at a time. Experimental evaluations demonstrate the advantages of the hierarchical manipulation framework over existing image generation and context hole-filing models, both qualitatively and quantitatively. Benefits of the hierarchical framework are further demonstrated in applications such as semantic object manipulation, interactive image editing, and data-driven image manipulation."}
{"_id": "405b92b42423fb011f5a26a6808471a60040d80a", "title": "A computationally efficient limited memory CMA-ES for large scale optimization", "text": "We propose a computationally efficient limited memory Covariance Matrix Adaptation Evolution Strategy for large scale optimization, which we call the LM-CMA-ES. The LM-CMA-ES is a stochastic, derivative-free algorithm for numerical optimization of non-linear, non-convex optimization problems in continuous domain. Inspired by the limited memory BFGS method of Liu and Nocedal (1989), the LM-CMA-ES samples candidate solutions according to a covariance matrix reproduced from m direction vectors selected during the optimization process. The decomposition of the covariance matrix into Cholesky factors allows to reduce the time and memory complexity of the sampling to O(mn), where $n$ is the number of decision variables. When $n$ is large (e.g., n > 1000), even relatively small values of $m$ (e.g., m=20,30) are sufficient to efficiently solve fully non-separable problems and to reduce the overall run-time."}
{"_id": "baea5b38ef79158a6f942497b06443ae24f15331", "title": "A Locality Aware Convolutional Neural Networks Accelerator", "text": "The advantages of Convolutional Neural Networks (CNNs) with respect to traditional methods for visual pattern recognition have changed the field of machine vision. The main issue that hinders broad adoption of this technique is the massive computing workload in CNN that prevents real-time implementation on low-power embedded platforms. Recently, several dedicated solutions have been proposed to improve the energy efficiency and throughput, nevertheless the huge amount of data transfer involved in the processing is still a challenging issue. This work proposes a new CNN accelerator exploiting a novel memory access scheme which significantly improves data locality in CNN related processing. With this scheme, external memory access is reduced by 50% while achieving similar or even better throughput. The accelerator is implemented using 28nm CMOS technology. Implementation results show that the accelerator achieves a performance of 102GOp/s @800MHz while consuming 0.303mm2 in silicon area. Power simulation shows that the dynamic power of the accelerator is 68mW. Its flexibility is demonstrated by running various different CNN benchmarks."}
{"_id": "d083aab9b01cbd5ff74970c24cd55dcabf2067f1", "title": "New algorithms for euclidean distance transformation of an n-dimensional digitized picture with applications", "text": "Al~traet--In this paper, we propose a new method to obtain the Euclidean distance transformation and the Voronoi diagram based on the exact Euclidean metric for an n-dimensional picture. We present four algorithms to perform the transformation which are constructed by the serial composition of n-dimensional filters. When performed by a general purpose computer, they are faster than the method by H. Yamada for a two-dimensional picture. Those algorithms require only one n-dimensional array for storing input/output pictures and a single one-dimensional array for a work area, if an input picture needs not be preserved."}
{"_id": "7a9b632319a9c02abda36ed9665809b2e70c78b0", "title": "A Robust Deep Model for Improved Classification of AD/MCI Patients", "text": "Accurate classification of Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI), plays a critical role in possibly preventing progression of memory impairment and improving quality of life for AD patients. Among many research tasks, it is of a particular interest to identify noninvasive imaging biomarkers for AD diagnosis. In this paper, we present a robust deep learning system to identify different progression stages of AD patients based on MRI and PET scans. We utilized the dropout technique to improve classical deep learning by preventing its weight coadaptation, which is a typical cause of overfitting in deep learning. In addition, we incorporated stability selection, an adaptive learning factor, and a multitask learning strategy into the deep learning framework. We applied the proposed method to the ADNI dataset, and conducted experiments for AD and MCI conversion diagnosis. Experimental results showed that the dropout technique is very effective in AD diagnosis, improving the classification accuracies by 5.9% on average as compared to the classical deep learning methods."}
{"_id": "3e1c07970fe976269ac5c9904b7b5651e8786c51", "title": "Simulation Model Verification and Validation: Increasing the Users' Confidence", "text": "This paper sets simulation model verification and validation (V&V) in the context of the process of performing a simulation study. Various different forms of V&V need to take place depending on the stage that has been reached. Since the phases of a study are performed in an iterative manner, so too are the various forms of V&V. A number of difficulties with verifying and validating models are discussed, after which a series of V&V methods are described. V&V is seen as a process of increasing confidence in a model, and not one of demonstrating absolute accuracy."}
{"_id": "6a8c5d33968230c9c4bdb207b93da7c71d302ed6", "title": "Reduce The Wastage of Data During Movement in Data Warehouse", "text": "In this research paper so as to handle Data in warehousing as well as reduce the wastage of data and provide a better results which takes more and more turn into a focal point of the data source business. Data warehousing and on-line analytical processing (OLAP) are vital fundamentals of resolution hold, which has more and more become a focal point of the database manufacturing. Lots of marketable yield and services be at the present accessible, and the entire primary database management organization vendor nowadays have contributions in the area assessment hold up spaces some quite dissimilar necessities on record technology compare to conventional on-line transaction giving out application. This article gives a general idea of data warehousing and OLAP technologies, with the highlighting on top of their latest necessities. So tools which is used for extract, clean-up and load information into back end of a information warehouse; multidimensional data model usual of OLAP; front end client tools for querying and data analysis; server extension for proficient query processing; and tools for data managing and for administration the warehouse. In adding to survey the circumstances of the art, this article also identify a number of capable research issue, a few which are interrelated to"}
{"_id": "04ce064505b1635583fa0d9cc07cac7e9ea993cc", "title": "A Comparison of Event Models for Naive Bayes Text Classification", "text": "Recent approaches to text classification have used two different first-order probabilistic models for classification, both of which make the naive Bayes assumption. Some use a multi-variate Bernoulli model, that is, a Bayesian Network with no dependencies between words and binary word features (e.g. Larkey and Croft 1996; Koller and Sahami 1997). Others use a multinomial model, that is, a uni-gram language model with integer word counts (e.g. Lewis and Gale 1994; Mitchell 1997). This paper aims to clarify the confusion by describing the differences and details of these two models, and by empirically comparing their classification performance on five text corpora. We find that the multi-variate Bernoulli performs well with small vocabulary sizes, but that the multinomial performs usually performs even better at larger vocabulary sizes\u2014providing on average a 27% reduction in error over the multi-variate Bernoulli model at any vocabulary size."}
{"_id": "08fddf1865e48a1adc21d4875396a754711f0a28", "title": "An Extensive Empirical Study of Feature Selection Metrics for Text Classification", "text": "Machine learning for text classification is the cor nerstone of document categorization, news filtering, document routing, and personalization. In text domains, effective feature selection is essential to make the learning task efficient and m ore accurate. This paper presents an empirical comparison of twelve feature selection methods (e.g . Information Gain) evaluated on a benchmark of 229 text classification problem instances that w ere gathered from Reuters, TREC, OHSUMED, etc. The results are analyzed from multiple goal p ers ectives\u2014accuracy, F-measure, precision, and recall\u2014since each is appropriate in different si tuat ons. The results reveal that a new feature selection me tric we call \u2018Bi-Normal Separation\u2019 (BNS), outperformed the others by a substantial margin in most situations. This margin widened in tasks with high class skew, which is rampant in text clas sification problems and is particularly challenging for induction algorithms. A new evaluation methodology is offered that focus es on the needs of the data mining practitioner faced with a single dataset who seeks to choose one (or a pair of) metrics that are most likely to yield the best performance. From this perspect iv , BNS was the top single choice for all goals except precision, for which Information Gain yielded the best result most often. This analysis also revealed, for example, that Informati on Gain and Chi-Squared have correlated failures, and so they work poorly together. When c hoosing optimal pairs of metrics for each of the four performance goals, BNS is consistently a membe r of the pair\u2014e.g., for greatest recall, the pair BNS + F1-measure yielded the best performance on the greatest number of tasks by a considerable margin."}
{"_id": "20cc59e8879305cbe18409c77464eff272e1cf55", "title": "Language Identification in the Limit", "text": "Language learnabi l i ty has been inves t igated . This refers to the following s i tua t ion: A class of possible languages is specified, t oge the r wi th a method of present ing informat ion to the learner about an unknown language, which is to be chosen from the class. The quest ion is now asked, \" I s the informat ion sufficient to determine which of the possible languages is the unknown language?\" M a n y definitions of l ea rnab i l i ty are possible, bu t only the following is considered here: Time is quant ized and has a finite s t a r t ing time. At each t ime the learner receives a un i t of in format ion and is to make a guess as to the i den t i t y of the unknown language on the basis of the informat ion received so far. This process continues forever. The class of languages will be considered lea~nabIe with respect to the specified me thod of in format ion p resen ta t ion if there is an a lgor i thm t h a t the learner can use to make his guesses, the a lgor i thm hav ing the following p roper ty : Given any language of the class, there is some finite t ime af te r which the guesses will all be the same and they will be correct . In this pre l iminary invest igat ion, a language is t aken to be a set of s tr ings on some finite a lphabet . The a lphabe t is the same for all languages of tile class. Several var ia t ions of each of the following two basic methods of informat ion presenta t ion are inves t iga ted: A text for a language generates the s t r ings of the language in any order such t h a t every s t r ing of the language occurs a t least once. An informant for a language tells whe the r a s t r ing is in the language, and chooses the s t r ings in some order such t h a t every s t r ing occurs at least once. I t was found t h a t the class of context-sensi t ive languages is learnable from an informant, but that not even the class of regular languages is learnable from a text."}
{"_id": "32352a889360e365fa242ad3040ccd6c54131d47", "title": "Introduction to Information Retrieval: Index", "text": ""}
{"_id": "47f5682448cdc0b650b54e7f59d22d72f4976c2d", "title": "Domain Adaptation for Statistical Classifiers", "text": "The most basic assumption used in statistical learning theory is that training data and test data are drawn from the same underlying distribution. Unfortunately, in many applications, the \u201cin-domain\u201d test data is drawn from a distribution that is related, but not identical, to the \u201cout-of-domain\u201d distribution of the training data. We consider the common case in which labeled out-of-domain data is plentiful, but labeled in-domain data is scarce. We introduce a statistical formulation of this problem in terms of a simple mixture model and present an instantiation of this framework to maximum entropy classifiers and their linear chain counterparts. We present efficient inference algorithms for this special case based on the technique of conditional expectation maximization. Our experimental results show that our approach leads to improved performance on three real world tasks on four different data sets from the natural language processing domain."}
{"_id": "782546908241748b0529e1a451f15567b31f411d", "title": "Augmenting paper-based reading activity with direct access to digital materials and scaffolded questioning", "text": "Comprehension is the goal of reading. However, students often encounter reading difficulties due to the lack of background knowledge and proper reading strategy. Unfortunately, print text provides very limited assistance to one\u2019s reading comprehension through its static knowledge representations such as symbols, charts, and graphs. Integrating digital materials and reading strategy into paper-based reading activities may bring opportunities for learners to make meaning of the print material. In this study, QR codes were adopted in association with mobile technology to deliver supplementary materials and questions to support students\u2019 reading. QR codes were printed on paper prints to provide direct access to digital materials and scaffolded questions. Smartphones were used to scan the printed QR codes to fetch predesigned digital resources and scaffolded questions over the Internet. A quasi-experiment was conducted to evaluate the effectiveness of direct access to the digital materials prepared by the instructor using QR codes and that of scaffolded questioning in improving students\u2019 reading comprehension. The results suggested that direct access to digital resources using QR codes does not significantly influence students\u2019 reading comprehension; however, the reading strategy of scaffolded questioning significantly improves students\u2019 understanding about the text. The survey showed that most students agreed that the integrated print-and-digital-materialbased learning system benefits English reading comprehension but may not be as efficient as expected. The implications of the findings shed light on future improvement of the system. 2011 Elsevier Ltd. All rights reserved."}
{"_id": "8762619d4be605e21524eeded2cd946dac47fb2a", "title": "Effective Identification of Similar Patients Through Sequential Matching over ICD Code Embedding", "text": "Evidence-based medicine often involves the identification of patients with similar conditions, which are often captured in ICD (International Classification of Diseases (World Health Organization 2013)) code sequences. With no satisfying prior solutions for matching ICD-10 code sequences, this paper presents a method which effectively captures the clinical similarity among routine patients who have multiple comorbidities and complex care needs. Our method leverages the recent progress in representation learning of individual ICD-10 codes, and it explicitly uses the sequential order of codes for matching. Empirical evaluation on a state-wide cancer data collection shows that our proposed method achieves significantly higher matching performance compared with state-of-the-art methods ignoring the sequential order. Our method better identifies similar patients in a number of clinical outcomes including readmission and mortality outlook. Although this paper focuses on ICD-10 diagnosis code sequences, our method can be adapted to work with other codified sequence data."}
{"_id": "65d115a49e84ce1de41ba5c116acd1821c21dc4b", "title": "Anger detection in call center dialogues", "text": "We present a method to classify fixed-duration windows of speech as expressing anger or not, which does not require speech recognition, utterance segmentation, or separating the utterances of different speakers and can, thus, be easily applied to real-world recordings. We also introduce the task of ranking a set of spoken dialogues by decreasing percentage of anger duration, as a step towards helping call center supervisors and analysts identify conversations requiring further action. Our work is among the very few attempts to detect emotions in spontaneous human-human dialogues recorded in call centers, as opposed to acted studio recordings or human-machine dialogues. We show that despite the non-perfect performance (approx. 70% accuracy) of the window-level classifier, its decisions help produce a ranking of entire conversations by decreasing percentage of anger duration that is clearly better than a random ranking, which represents the case where supervisors and analysts randomly select conversations to inspect."}
{"_id": "ca7cc812a2fbe60550f18c4033a75296432fa28f", "title": "Kathar\u00e1: A container-based framework for implementing network function virtualization and software defined networks", "text": "Network Function Virtualization (NFV) and Software-Defined Networking (SDN) are deeply changing the networking field by introducing software at any level, aiming at decoupling the logic from the hardware. Together, they bring several benefits, mostly in terms of scalability and flexibility. Up to now, SDN has been used to support NFV from the routing and the architectural point of view. In this paper we present Kathar\u00e1, a framework based on con\u00adtainers, that allows network operators to deploy Virtual Network Functions (VNFs) through the adoption of emerging data-plane programmable capabilities, such as P4-compliant switches. It also supports the coexistence of SDN and traditional routing protocols in order to set up arbitrarily complex networks. As a side effect, thanks to Kathar\u00e1, we demonstrate that implementing NFV by means of specific-purpose equipment is feasible and it provides a gain in performance while preserving the benefits of NFV. We measure the resource consumption of Kathar\u00e1 and we show that it performs better than frameworks that implement virtual networks using virtual machines by several orders of magnitude."}
{"_id": "8863dd8efc047ee2c8060dd74c24006d7204a9d5", "title": "Joint Learning of CNN and LSTM for Image Captioning", "text": "In this paper, we describe the details of our methods for the participation in the subtask of the ImageCLEF 2016 Scalable Image Annotation task: Natural Language Caption Generation. The model we used is the combination of a procedure of encoding and a procedure of decoding, which includes a Convolutional neural network(CNN) and a Long Short-Term Memory(LSTM) based Recurrent Neural Network. We first train a model on the MSCOCO dataset and then fine tune the model on different target datasets collected by us to get a more suitable model for the natural language caption generation task. Both of the parameters of CNN and LSTM are learned together."}
{"_id": "5280846253fd842e97bbe955ccf8caa1e1f8d6c8", "title": "The Stable Model Semantics for Logic Programming", "text": "This paper studies the stable model semantics of logic progr ams with (abstract) constraint atoms and their properties. We introduce a succinct abstract represe ntation of these constraint atoms in which a constraint atom is represented compactly. We show two appli cations. First, under this representation of constraint atoms, we generalize the Gelfond-Lifschitz t ransformation and apply it to define stable models (also called answer sets) for logic programs with arb itr y constraint atoms. The resulting semantics turns out to coincide with the one defined by Son et a l., which is based on a fixpoint approach. One advantage of our approach is that it can be appl ied, in a natural way, to define stable models for disjunctive logic programs with constraint atom s, which may appear in the disjunctive head as well as in the body of a rule. As a result, our approach t o the stable model semantics for logic programs with constraint atoms generalizes a number of prev ious approaches. Second, we show that our abstract representation of constraint atoms provides a me ns to characterize dependencies of atoms in a program with constraint atoms, so that some standa rd characterizations and properties relying on these dependencies in the past for logic programs with ordinary atoms can be extended to logic programs with constraint atoms."}
{"_id": "3eefbcf161b5b95b34980d17c042c7cb4fd76864", "title": "Improving late life depression and cognitive control through the use of therapeutic video game technology: A proof-of-concept randomized trial.", "text": "BACKGROUND\nExisting treatments for depression are known to have only modest effects, are insufficiently targeted, and are inconsistently utilized, particularly in older adults. Indeed, older adults with impaired cognitive control networks tend to demonstrate poor response to a majority of existing depression interventions. Cognitive control interventions delivered using entertainment software have the potential to not only target the underlying cerebral dysfunction associated with depression, but to do so in a manner that is engaging and engenders adherence to treatment protocol.\n\n\nMETHODS\nIn this proof-of-concept trial (Clinicaltrials.gov #: NCT02229188), individuals with late life depression (LLD) (22; 60+ years old) were randomized to either problem solving therapy (PST, n = 10) or a neurobiologically inspired digital platform designed to enhance cognitive control faculties (Project: EVO\u2122, n = 12). Given the overlapping functional neuroanatomy of mood disturbances and executive dysfunction, we explored the impact of an intervention targeting cognitive control abilities, functional disability, and mood in older adults suffering from LLD, and how those outcomes compare to a therapeutic gold standard.\n\n\nRESULTS\nEVO participants demonstrated similar improvements in mood and self-reported function after 4 weeks of treatment to PST participants. The EVO participants also showed generalization to untrained measures of working memory and attention, as well as negativity bias, a finding not evident in the PST condition. Individuals assigned to EVO demonstrated 100% adherence.\n\n\nCONCLUSIONS\nThis study provides preliminary findings that this therapeutic video game targeting cognitive control deficits may be an efficacious LLD intervention. Future research is needed to confirm these findings."}
{"_id": "25e5b745ce0d3518bf16fe28d788f7d4fac9d838", "title": "XYZ Indoor Navigation through Augmented Reality: A Research in Progress", "text": "We present an overall framework of services for indoor navigation, which includes Indoor Mapping, Indoor Positioning, Path Planning, and En-route Assistance. Within such framework we focus on an augmented reality (AR) solution for en-route assistance. AR assists the user walking in a multi-floor building by displaying a directional arrow under a camera view, thus freeing the user from knowing his/her position. Our AR solution relies on geomagnetic positioning and north-oriented space coordinates transformation. Therefore, it can work without infrastructure and without relying on GPS. The AR visual interface and the integration with magnetic positioning is the main novelty of our solution, which has been validated by experiments and shows a good performance."}
{"_id": "676600ed722d4739d669715c16a1ed2fc117b3d4", "title": "Weakly supervised detection with decoupled attention-based deep representation", "text": "Training object detectors with only image-level annotations is an important problem with a variety of applications. However, due to the deformable nature of objects, a target object delineated by a bounding box always includes irrelevant context and occlusions, which causes large intra-class object variations and ambiguity in object-background distinction. For this reason, identifying the object of interest from a substantial amount of cluttered backgrounds is very challenging. In this paper, we propose a decoupled attention-based deep model to optimize region-based object representation. Different from existing approaches posing object representation in a single-tower model, our proposed network decouples object representation into two separate modules, i.e., image representation and attention localization. The image representation module captures content-based semantic representation, while the attention localization module regresses an attention map which simultaneously highlights the locations of the discriminative object parts and down weights the irrelevant backgrounds presented in the image. The combined representation alleviates the impact from the noisy context and occlusions inside an object bounding box. As a result, object-background ambiguity can be largely reduced and background regions can be suppressed effectively. In addition, the proposed object representation model can be seamlessly integrated into a state-of-the-art weakly supervised detection framework, and the entire model can be trained end-to-end. We extensively evaluate the detection performance on the PASCAL VOC 2007, VOC 2010 and VOC2012 datasets. Experimental results demonstrate that our approach effectively improves weakly supervised object detection."}
{"_id": "3c5dd0defca5232b638f7fc86e2bf7f1e0da4d7d", "title": "The mass appraisal of the real estate by computational intelligence", "text": "Mass appraisal is the systematic appraisal of groups of properties as of a given date using standardized procedures and statistical testing. Mass appraisal is commonly used to compute real estate tax. There are three traditional real estate valuation methods: the sales comparison approach, income approach, and the cost approach. Mass appraisal models are commonly based on the sales comparison approach. The ordinary least squares (OLS) linear regression is the classical method used to build models in this approach. The method is compared with computational intelligence approaches \u2013 support vector machine (SVM) regression, multilayer perceptron (MLP), and a committee of predictors in this paper. All the three predictors are used to build a weighted data-depended committee. A self-organizing map (SOM) generating clusters of value zones is used to obtain the data-dependent aggregation weights. The experimental investigations performed using data cordially provided by the Register center of Lithuania have shown very promising results. The performance of the computational intelligence-based techniques was considerably higher than that obtained using the official real estate models of the Register center. The performance of the committee using the weights based on zones obtained from the SOM was also higher than of that exploiting the real estate value zones provided by the Register center. \u00a9 2009 Elsevier B.V. All rights reserved."}
{"_id": "d1fb73dc083c916cf8a965434fc684b8f0b8762d", "title": "Beyond the Turing Test", "text": "SPRING 2016 3 Alan Turing\u2019s renowned test on intelligence, commonly known as the Turing test, is an inescapable signpost in AI. To people outside the field, the test \u2014 which hinges on the ability of machines to fool people into thinking that they (the machines) are people \u2014 is practically synonymous with the quest to create machine intelligence. Within the field, the test is widely recognized as a pioneering landmark, but also is now seen as a distraction, designed over half a century ago, and too crude to really measure intelligence. Intelligence is, after all, a multidimensional variable, and no one test could possibly ever be definitive truly to measure it. Moreover, the original test, at least in its standard implementations, has turned out to be highly gameable, arguably an exercise in deception rather than a true measure of anything especially correlated with intelligence. The much ballyhooed 2015 Turing test winner Eugene Goostman, for instance, pretends to be a thirteen-year-old foreigner and proceeds mainly by ducking questions and returning canned one-liners; it cannot see, it cannot think, and it is certainly a long way from genuine artificial general intelligence. Our hope is to see a new suite of tests, part of what we have"}
{"_id": "3c5b2338f0169d25ba07e3bfcf7ebb2a8c1edcea", "title": "Extending the Business Model Canvas: A Dynamic Perspective", "text": "When designing and assessing a business model, a more visual and practical ontology and framework is necessary. We show how an academic theory such as Business Model Ontology has evolved into the Business Model Canvas (BMC) that is used by practitioners around the world today. We draw lessons from usage and define three maturity level. We propose new concepts to help design the dynamic aspect of a business model. On the first level, the BMC supports novice users as they elicit their models; it also helps novices to build coherent models. On the second level, the BMC allows expert users to evaluate the interaction of business model elements by outlining the key threads in the business models\u2019 story. On the third level, master users are empowered to create multiple versions of their business models, allowing them to evaluate alternatives and retain the history of the business model\u2019s evolution. These new concepts for the BMC which can be supported by Computer-Aided Design tools provide a clearer picture of the business model as a strategic planning tool and are the basis for further research."}
{"_id": "7dacc2905c536e17ea24bdd07c9e87d699a12975", "title": "Communicating with Cost-based Implicature : a Game-Theoretic Approach to Ambiguity", "text": "A game-theoretic approach to linguistic communication predicts that speakers can meaningfully use ambiguous forms in a discourse context in which only one of several available referents has a costly unambiguous form and in which rational interlocutors share knowledge of production costs. If a speaker produces a low-cost ambiguous form to avoid using the high-cost unambiguous form, a rational listener will infer that the high-cost entity was the intended entity, or else the speaker would not have risked ambiguity. We report data from two studies in which pairs of speakers show alignment of their use of ambiguous forms based on this kind of shared knowledge. These results extend the analysis of cost-based pragmatic inferencing beyond that previously associated only with fixed lexical hosts."}
{"_id": "fe2781c806bb6049604b66721fa21814f3711ff0", "title": "Smoothing LUT classifiers for robust face detection", "text": "Look-up table (LUT) classifiers are often used to construct concise classifiers for rapid object detection due to their favorable convergent ability. However, their poor generalization ability imposes restrictions on their applications. A novel improvement to LUT classifiers is proposed in this paper where the new confident of each partition is recalculated by smoothing the old ones within its neighbor partitions. The new confidents are more generalizable than the old ones because each of the new predicts is supported by more training samples and the high frequency components in the old predict sequences are suppressed greatly through smoothing operation. Both weight sum smoothing method and confident smoothing method are introduced here, which all bring negligible extra computation cost in training time and no extra cost in test time. Experimental results in the domain of upright frontal face detection using smoothed LUT classifiers with identical smoothing width and smoothing factor for all partitions based on Haar-like rectangle features show that smoothed LUT classifiers generalize much better and also converge more or less worse than unsmooth LUT classifiers. Specifically, smoothed LUT classifiers can delicately balance between generalization ability and convergent ability through carefully set smoothing parameters."}
{"_id": "668c7c978615143545eed2eca8ae343e027a02ca", "title": "Multiple-output ZCS resonant inverter for multi-coil induction heating appliances", "text": "This paper presents a multiple-output resonant inverter for multi-coil systems featuring high efficiency and flexible output power control for modern induction heating appliances. By adopting a matrix structure, the number of controlled devices can be reduced to the square root of the number of controlled induction heating loads. Therefore, an elevated component count reduction can be achieved compared to classical multi-outputs solutions. The proposed converter is analyzed and verified experimentally, proving the feasibility of this proposal and benefits compared with previous alternatives."}
{"_id": "13a39916d0459879703ce20f960726c5fb7269f1", "title": "ExpNet: Landmark-Free, Deep, 3D Facial Expressions", "text": "We describe a deep learning based method for estimating 3D facial expression coefficients. Unlike previous work, our process does not relay on facial landmark detection methods as a proxy step. Recent methods have shown that a CNN can be trained to regress accurate and discriminative 3D morphable model (3DMM) representations, directly from image intensities. By foregoing landmark detection, these methods were able to estimate shapes for occluded faces appearing in unprecedented viewing conditions. We build on those methods by showing that facial expressions can also be estimated by a robust, deep, landmark-free approach. Our ExpNet CNN is applied directly to the intensities of a face image and regresses a 29D vector of 3D expression coefficients. We propose a unique method for collecting data to train our network, leveraging on the robustness of deep networks to training label noise. We further offer a novel means of evaluating the accuracy of estimated expression coefficients: by measuring how well they capture facial emotions on the CK+ and EmotiW-17 emotion recognition benchmarks. We show that our ExpNet produces expression coefficients which better discriminate between facial emotions than those obtained using state of the art, facial landmark detectors. Moreover, this advantage grows as image scales drop, demonstrating that our ExpNet is more robust to scale changes than landmark detectors. Finally, our ExpNet is orders of magnitude faster than its alternatives."}
{"_id": "bd8c21578c44aeaae394f9414974ca81fb412ec1", "title": "A First Look at Android Malware Traffic in First Few Minutes", "text": "With the advent of mobile era, mobile terminals are going through a trend of surpassing PC to become the most popular computing device. Meanwhile, the hackers and viruswriters are paying close attention to the mobile terminals, especially the Android platform. The growing of malwares on the Android system has drawn attentions from both the academia and security industry. Recently, mobile network traffic analysis has been used to identify the malware. But due to the lack of a large-scale malware repository and a systematic analysis of network traffic features, the existing research mostly remain in theory. In this paper, we design an Android malware traffic behavior monitoring scheme to capture traffic data generated by malware samples in a real Internet environment. We capture the network traffic from 5560 malware samples in the first 5 minutes, and analyze the major compositions of the traffic data. We discover that HTTP and DNS traffic are accounted for more than 99% on the application layer traffic. We then present an analysis of related network features: DNS query, HTTP packet length, ratio of downlink to uplink traffic amount, HTTP request and Ad traffic feature. Our statistical results illustrate that: (1) more than 70% malwares generate malicious traffic in the first 5 minutes, (2) DNS query and HTTP request can be used to identify the malware, and the detection rate reaches 69.55% and 40.89% respectively, (3) Ad traffic can greatly affect the malware detection. We believe our research provides an in-depth analysis into mobile malwares' network behaviors."}
{"_id": "acb64f17bdb0eacf1daf45a5f33c0e7a8893dbf6", "title": "Continuous Representation of Location for Geolocation and Lexical Dialectology using Mixture Density Networks", "text": "We propose a method for embedding twodimensional locations in a continuous vector space using a neural network-based model incorporating mixtures of Gaussian distributions, presenting two model variants for text-based geolocation and lexical dialectology. Evaluated over Twitter data, the proposed model outperforms conventional regression-based geolocation and provides a better estimate of uncertainty. We also show the effectiveness of the representation for predicting words from location in lexical dialectology, and evaluate it using the DARE dataset."}
{"_id": "f1f20213f5e412e5c88dd3216703ff8181e073b5", "title": "Restoration of Endodontically Treated Molars Using All Ceramic Endocrowns", "text": "Clinical success of endodontically treated posterior teeth is determined by the postendodontic restoration. Several options have been proposed to restore endodontically treated teeth. Endocrowns represent a conservative and esthetic restorative alternative to full coverage crowns. The preparation consists of a circular equigingival butt-joint margin and central retention cavity into the entire pulp chamber constructing both the crown and the core as a single unit. The case reports discussed here are moderately damaged endodontically treated molars restored using all ceramic endocrowns fabricated using two different systems, namely, CAD/CAM and pressed ceramic."}
{"_id": "dd34e51f2069c033df9b904bb8063b420649ed76", "title": "Efficient lineage tracking for scientific workflows", "text": "Data lineage and data provenance are key to the management of scientific data. Not knowing the exact provenance and processing pipeline used to produce a derived data set often renders the data set useless from a scientific point of view. On the positive side, capturing provenance information is facilitated by the widespread use of workflow tools for processing scientific data. The workflow process describes all the steps involved in producing a given data set and, hence, captures its lineage. On the negative side, efficiently storing and querying workflow based data lineage is not trivial. All existing solutions use recursive queries and even recursive tables to represent the workflows. Such solutions do not scale and are rather inefficient. In this paper we propose an alternative approach to storing lineage information captured as a workflow process. We use a space and query efficient interval representation for dependency graphs and show how to transform arbitrary workflow processes into graphs that can be stored using such representation. We also characterize the problem in terms of its overall complexity and provide a comprehensive performance evaluation of the approach."}
{"_id": "7857cdf46d312af4bb8854bd127e5c0b4268f90c", "title": "A novel tri-state boost PFC converter with fast dynamic performance", "text": "Dynamic response of boost power factor correction (PFC) converter operating in continuous-conduction mode (CCM) is heavily influenced by low-bandwidth of voltage control loop. A novel tri-state boost PFC converter operating in pseudo-continuous-conduction mode (PCCM) is proposed in this paper. An additional degree of control-freedom introduced by a freewheeling switching control interval helps to achieve PFC control. A simple and fast voltage control loop can be used to maintain a constant output voltage. Furthermore, compared with boost PFC converter operating in conventional discontinuous-conduction mode (DCM), boost PFC converter operating in PCCM demonstrates greatly improved current handling capability with reduced current and voltage ripples. Analytical and simulation results of the tri-state boost PFC converter have been presented and compared with those of the boost PFC converter operating in conventional CCM and DCM. Simulation results show excellent dynamic performance of the tri-state boost PFC converter."}
{"_id": "d904d00d84f5ed130c66c5fe0d641a9022197244", "title": "A NoSQL Data Model for Scalable Big Data Workflow Execution", "text": "While big data workflows haven been proposed recently as the next-generation data-centric workflow paradigm to process and analyze data of ever increasing in scale, complexity, and rate of acquisition, a scalable distributed data model is still missing that abstracts and automates data distribution, parallelism, and scalable processing. In the meanwhile, although NoSQL has emerged as a new category of data models, they are optimized for storing and querying of large datasets, not for ad-hoc data analysis where data placement and data movement are necessary for optimized workflow execution. In this paper, we propose a NoSQL data model that: 1) supports high-performance MapReduce-style workflows that automate data partitioning and data-parallelism execution. In contrast to the traditional MapReduce framework, our MapReduce-style workflows are fully composable with other workflows enabling dataflow applications with a richer structure, 2) automates virtual machine provisioning and deprovisioning on demand according to the sizes of input datasets, 3) enables a flexible framework for workflow executors that take advantage of the proposed NoSQL data model to improve the performance of workflow execution. Our case studies and experiments show the competitive advantages of our proposed data model. The proposed NoSQL data model is implemented in a new release of DATAVIEW, one of the most usable big data workflow systems in the community."}
{"_id": "12d40aeb4f31d4352264114841ebfb8651729908", "title": "Torque ripple improvement for synchronous reluctance motor using an asymmetric flux barrier arrangement", "text": "An interior permanent-magnet synchronous motor (IPMSM) is a highly efficient motor and operates in a wide speed range; therefore, it is used in many industrial and home appliance applications. However, the torque ripple of synchronous motors such as the IPMSM and synchronous reluctance motor is very large. The variation of magnetic resistance between the flux barriers and teeth causes the torque ripple. In this paper, flux barriers are asymmetrically designed so that the relative positions between the outer edges of the flux barriers and the teeth do not correspond. As a result, torque ripple can be reduced dramatically."}
{"_id": "ada4c48f083f2f98ee9a345e4bc534425d0a9bfd", "title": "Analysis of Rotor Core Eddy-Current Losses in Interior Permanent-Magnet Synchronous Machines", "text": "This paper presents the results of an investigation focused on the rotor core eddy-current losses of interior permanent-magnet (IPM) synchronous machines. First, analytical insight into the rotor core eddy-current losses of IPM machines is developed. Next, major design parameters that have the most significant impact on the rotor core eddy-current losses of IPM machines are identified. Finite-element analysis results are then presented to compare the predicted eddy-current losses in the machine core of IPM machines with one- and two-layer rotors coupled with concentrated- and distributed-winding stators. It is shown that the lowest total eddy-current losses in the machine core are achieved using a combination of distributed stator windings and two magnet layers per rotor pole, whereas minimizing only the rotor core eddy-current losses favors replacement of the rotor with a single-layer configuration."}
{"_id": "29a073c933aaf22dcb2914be06adae6e189736b5", "title": "Optimization of average and cogging torque in 3-phase IPM motor drives", "text": "In this paper, an interior permanent magnet (IPM) brushless DC motor for traction applications is analyzed. The effect of magnetization direction, number of stator slots and current waveform on torque pulsation are examined. A three-phase, four-pole IPM motor is considered for the study. The finite element method is used to calculate the torque, reluctance torque, back iron flux density, tooth flux density, detent torque and back-EMF of the motor. It is shown that because of the reluctance torque resulting from rotor saliency, the peak point in the torque-angle curve is shifted to the left. Therefore, it is not possible to find the switching instants just by considering the direction of stator and rotor MMF and keeping the right angle between them. A Matlab program has been developed to rind the switching intervals, which will produce the maximum average torque and minimum cogging torque. Experimental results to support the simulation findings are included in the paper."}
{"_id": "c37456766064c1e1432679454b6ca446b3f53068", "title": "Iron loss analysis of interior permanent-magnet synchronous motors-variation of main loss factors due to driving condition", "text": "In this paper, the authors investigate the iron loss of interior permanent magnet motors driven by pulsewidth modulation (PWM) inverters from both results of the experiments and the finite-element analysis. In the analysis, the iron loss of the motor is decomposed into several components due to their origins, for instance, the fundamental field, carrier of the PWM inverter, slot ripples, and harmonic magnetomotive forces of the permanent magnet in order to clarify the main loss factors. The Fourier transformation and the finite-element method considering the carrier harmonics are applied to this calculation. The calculated iron loss is compared with the measurement at each driving condition. The measured and the calculated results agree well. It is clarified that the iron loss caused by the carrier of the PWM inverter is the largest component at low-speed condition under the maximum torque control, whereas the loss caused by the harmonic magnetomotive forces of the permanent magnet remarkably increase at high-speed condition under the flux-weakening control"}
{"_id": "badedcabfbb8e47eb33238aa44cf2b9c90c3d83a", "title": "A novel wavelet transform technique for on-line partial discharge measurements. 1. WT de-noising algorithm", "text": "Medium and high voltage power cables are widely used in the electrical industry with substantial growth over the last 20-30 years ago, particular in the use of XLPE insulated systems. Ageing of the cable insulation is becoming an increasing problem that requires development of reliable methods for on-line condition assessment. For insulation condition assessment of MV and HV cables, partial discharge (PD) monitoring is one of the most effective techniques. However on-site and on-line PD measurements are affected by electromagnetic interference (EMI) that makes sensitive PD detection very difficult, if not impossible. This paper describes implementation of wavelet transform techniques to reject noise from on-line partial discharge measurements on cables. A new wavelet threshold determination method is proposed with the technique. With implementation of this novel de-noising method, PD measurement sensitivity has been greatly improved. In addition, a full AC cycle data recovery can be achieved instead of focusing only on recovering individual PD pulses. Other wavelet threshold de-noising methods are discussed and examined under a noisy environment to compare their performance with the new method proposed here. The method described here has been found to be superior to the other wavelet-based methods"}
{"_id": "52c5882dc62c319c67a37cf40e56e0905dc4acb5", "title": "Articulatory and spectrum features integration using generalized distillation framework", "text": "It has been shown that by combining the acoustic and articulatory information significant performance improvements in automatic speech recognition (ASR) task can be achieved. In practice, however, articulatory information is not available during recognition and the general approach is to estimate it from the acoustic signal. In this paper, we propose a different approach based on the generalized distillation framework, where acoustic-articulatory inversion is not necessary. We trained two DNN models: one called \u201cteacher\u201d learns from both acoustic and articulatory features and the other one called \u201cstudent\u201d is trained on acoustic features only, but its training process is guided by the \u201cteacher\u201d model and can reach a better performance that can't be obtained by regular training even without articulatory feature inputs during test time. The paper is organized as follows: Section 1 gives the introduction and briefly discusses some related works. Section 2 describes the distillation training process, Section 3 describes ASR system used in this paper. Section 4 presents the experiments and the paper is concluded by Section 5."}
{"_id": "40a63746a710baf4a694fd5a4dd8b5a3d9fc2846", "title": "Invertible Conditional GANs for image editing", "text": "Generative Adversarial Networks (GANs) have recently demonstrated to successfully approximate complex data distributions. A relevant extension of this model is conditional GANs (cGANs), where the introduction of external information allows to determine specific representations of the generated images. In this work, we evaluate encoders to inverse the mapping of a cGAN, i.e., mapping a real image into a latent space and a conditional representation. This allows, for example, to reconstruct and modify real images of faces conditioning on arbitrary attributes. Additionally, we evaluate the design of cGANs. The combination of an encoder with a cGAN, which we call Invertible cGAN (IcGAN), enables to re-generate real images with deterministic complex modifications."}
{"_id": "f7025aed8f41a0affb25acf3340a06a7c8993511", "title": "Machine Learning Approaches for Mood Classification of Songs toward Music Search Engine", "text": "Human often wants to listen to music that fits best his current emotion. A grasp of emotions in songs might be a great help for us to effectively discover music. In this paper, we aimed at automatically classifying moods of songs based on lyrics and metadata, and proposed several methods for supervised learning of classifiers. In future, we plan to use automatically identified moods of songs as metadata in our music search engine. Mood categories in a famous contest about Audio Music Mood Classification (MIREX 2007) are applied for our system. The training data is collected from a LiveJournal blog site in which each blog entry is tagged with a mood and a song. Then three kinds of machine learning algorithms are applied for training classifiers: SVM, Naive Bayes and Graph-based methods. The experiments showed that artist, sentiment words, putting more weight for words in chorus and title parts are effective for mood classification. Graph-based method promises a good improvement if we have rich relationship information among songs."}
{"_id": "651e8215e93fd94318997f3be5190f965f147c79", "title": "A Study on the Customer-Based Brand Equity of Online Retailers", "text": "Different from traditional firms, online retailers do not often produce common goods on their own. On the contrary, they sell merchandises that traditional firms produce. Furthermore, online retailers are also different from traditional retailers because the former do not often have frontline employees, physical outlets, or displaying layouts. Therefore, when building a strong online retailer brand, it is not enough to only consider the traditional dimensions of customer-based brand equity, namely brand awareness, perceived quality, brand associations, and brand loyalty. Some dimensions, i.e. trust associations and emotional connection need to be distinguished or included in the context of online business. Besides, the willingness to pay a price premium is always being viewed as a good indicator to measure brand equity, but for online retailers, it is conditional on some premises. By structural equation modeling, it was verified that for online retailers, the importance of brand awareness and trust associations decreased but perceived quality remained its strong influence. Brand loyalty was the better indicator for measuring brand equity, and emotional connection became so critical that if without it, customers would lack willingness to pay a price premium."}
{"_id": "8629cebb7c574adf40d71d41389f340804c8c81f", "title": "Automatic Fingerprint Recognition Systems", "text": "This article summarizes the major developments in the history of efforts to use fingerprint patterns to identify individuals, from the earliest fingerprint classification systems of Vucetich and Henry in the 1890s through the advent of automated fingerprint identification. By chronicling the history of \u201cmanual\u201d systems for recording storing, matching, and retrieving fingerprints, the article puts advances in automatic fingerprint recognition in historical context and highlights their historical and social significance."}
{"_id": "7d9089cbe958da21cbd943bdbcb996f4499e701b", "title": "Document Modeling with Gated Recurrent Neural Network for Sentiment Classification", "text": "Document level sentiment classification remains a challenge: encoding the intrinsic relations between sentences in the semantic meaning of a document. To address this, we introduce a neural network model to learn vector-based document representation in a unified, bottom-up fashion. The model first learns sentence representation with convolutional neural network or long short-term memory. Afterwards, semantics of sentences and their relations are adaptively encoded in document representation with gated recurrent neural network. We conduct document level sentiment classification on four large-scale review datasets from IMDB and Yelp Dataset Challenge. Experimental results show that: (1) our neural model shows superior performances over several state-of-the-art algorithms; (2) gated recurrent neural network dramatically outperforms standard recurrent neural network in document modeling for sentiment classification.1"}
{"_id": "b294b61f0b755383072ab332061f45305e0c12a1", "title": "Re-embedding words", "text": "We present a fast method for re-purposing existing semantic word vectors to improve performance in a supervised task. Recently, with an increase in computing resources, it became possible to learn rich word embeddings from massive amounts of unlabeled data. However, some methods take days or weeks to learn good embeddings, and some are notoriously difficult to train. We propose a method that takes as input an existing embedding, some labeled data, and produces an embedding in the same space, but with a better predictive performance in the supervised task. We show improvement on the task of sentiment classification with respect to several baselines, and observe that the approach is most useful when the training set is sufficiently small."}
{"_id": "74bcf18831294adb82c1ba8028ef314db1327549", "title": "Reference table based k-anonymous private blocking", "text": "Privacy Preserving Record Linkage is an emerging field of research which attempts to deal with the classical linkage problem from a privacy preserving point of view. In this paper we propose a novel approach for performing Privacy Preserving Blocking in order to minimize the computational cost of Privacy Preserving Record Linkage. We achieve this without compromising privacy by using Nearest Neighbors clustering, a well-known clustering algorithm and by using a reference table. A reference table is a publicly known table the contents of which are used as intermediate references. The combination of Nearest Neighbors and a reference table offers our approach k-anonymity characteristics."}
{"_id": "87cc66a7eae45f9df7fa10c14c77f4178abd2563", "title": "Pedestrian Behaviour Monitoring: Methods and Experiences", "text": "The investigation of pedestrian spatio-temporal behaviour is of particular interest in many different research fields. Disciplines like travel behaviour research and tourism research, social sciences, artificial intelligence, geoinformation and many others have approached this subject from different perspectives. Depending on the particular research questions, various methods of data collection and analysis have been developed and applied in order to gain insight into specific aspects of human motion behaviour and the determinants influencing spatial activities. In this contribution, we provide a general overview about most commonly used methods for monitoring and analysing human spatio-temporal behaviour. After discussing frequently used empirical methods of data collection and emphasising related advantages and limitations, we present seven case studies concerning the collection and analysis of human motion behaviour following different purposes."}
{"_id": "0d779e029fd5fb3271f174c05019b4144ffa46c0", "title": "ANALYSIS OF SEGMENTATION PARAMETERS IN ECOGNITION SOFTWARE USING HIGH RESOLUTION QUICKBIRD MS IMAGERY", "text": "For objectoriented classification approaches, main step is the segmentation part of the imagery. In eCognition v.4.0.6 software, with the segmentation process, meaningful objects can be created for following steps. In the experimental imagery with 2.4m ground sampling distance (GSD) has been used and several different parameters e.g. scale, color/shape and smoothness/compactness parameters have been tested accordingly. Additionally, segmentation parameters were set to low and high values and thus, dissimilarity of segmentation results were examined."}
{"_id": "645e505f107a470f347d9521c492f88b1a2e6670", "title": "Towards Query Efficient Black-box Attacks: An Input-free Perspective", "text": "Recent studies have highlighted that deep neural networks (DNNs) are vulnerable to adversarial attacks, even in a black-box scenario. However, most of the existing black-box attack algorithms need to make a huge amount of queries to perform attacks, which is not practical in the real world. We note one of the main reasons for the massive queries is that the adversarial example is required to be visually similar to the original image, but in many cases, how adversarial examples look like does not matter much. It inspires us to introduce a new attack called input-free attack, under which an adversary can choose an arbitrary image to start with and is allowed to add perceptible perturbations on it. Following this approach, we propose two techniques to significantly reduce the query complexity. First, we initialize an adversarial example with a gray color image on which every pixel has roughly the same importance for the target model. Then we shrink the dimension of the attack space by perturbing a small region and tiling it to cover the input image. To make our algorithm more effective, we stabilize a projected gradient ascent algorithm with momentum, and also propose a heuristic approach for region size selection. Recent studies have highlighted that deep neural networks (DNNs) are vulnerable to adversarial attacks, even in a black-box scenario. However, most of the existing black-box attack algorithms need to make a huge amount of queries to perform attacks, which is not practical in the real world. We note one of the main reasons for the massive queries is that the adversarial example is required to be visually similar to the original image, but in many cases, how adversarial examples look like does not matter much. It inspires us to introduce a new attack called input-free attack, under which an adversary can choose an arbitrary image to start with and is allowed to add perceptible perturbations on it. Following this approach, we propose two techniques to significantly reduce the query complexity. First, we initialize an adversarial example with a gray color image on which every pixel has roughly the same importance for the target model. Then we shrink the dimension of the attack space by perturbing a small region and tiling it to cover the input image. To make our algorithm more effective, we stabilize a projected gradient ascent algorithm with momentum, and also propose a heuristic approach for region size selection. Through extensive experiments, we show that with only 1,701 queries on average, we can perturb a gray image to any target class of ImageNet with a 100% success rate on InceptionV3. Besides, our algorithm has successfully defeated two real-world systems, the Clarifai food detection API and the Baidu Animal Identification API."}
{"_id": "029341c7f1ce11696a6bc7e7a716b7876010ebc7", "title": "Model Learning for Look-ahead Exploration in Continuous Control", "text": "We propose an exploration method that incorporates look-ahead search over basic learnt skills and their dynamics, and use it for reinforcement learning (RL) of manipulation policies . Our skills are multi-goal policies learned in isolation in simpler environments using existing multigoal RL formulations, analogous to options or macroactions. Coarse skill dynamics, i.e., the state transition caused by a (complete) skill execution, are learnt and are unrolled forward during lookahead search. Policy search benefits from temporal abstraction during exploration, though itself operates over low-level primitive actions, and thus the resulting policies does not suffer from suboptimality and inflexibility caused by coarse skill chaining. We show that the proposed exploration strategy results in effective learning of complex manipulation policies faster than current state-of-the-art RL methods, and converges to better policies than methods that use options or parametrized skills as building blocks of the policy itself, as opposed to guiding exploration. We show that the proposed exploration strategy results in effective learning of complex manipulation policies faster than current state-of-the-art RL methods, and converges to better policies than methods that use options or parameterized skills as building blocks of the policy itself, as opposed to guiding exploration."}
{"_id": "2374a65e335b26c2ae9692b6c2f1408401d86f5b", "title": "A Consolidated Perspective on Multimicrophone Speech Enhancement and Source Separation", "text": "Speech enhancement and separation are core problems in audio signal processing, with commercial applications in devices as diverse as mobile phones, conference call systems, hands-free systems, or hearing aids. In addition, they are crucial preprocessing steps for noise-robust automatic speech and speaker recognition. Many devices now have two to eight microphones. The enhancement and separation capabilities offered by these multichannel interfaces are usually greater than those of single-channel interfaces. Research in speech enhancement and separation has followed two convergent paths, starting with microphone array processing and blind source separation, respectively. These communities are now strongly interrelated and routinely borrow ideas from each other. Yet, a comprehensive overview of the common foundations and the differences between these approaches is lacking at present. In this paper, we propose to fill this gap by analyzing a large number of established and recent techniques according to four transverse axes: 1 the acoustic impulse response model, 2 the spatial filter design criterion, 3 the parameter estimation algorithm, and 4 optional postfiltering. We conclude this overview paper by providing a list of software and data resources and by discussing perspectives and future trends in the field."}
{"_id": "df12dcd87b6b98d3e3dc7504c9e4e606ab960d04", "title": "Real-time 3D scene reconstruction with dynamically moving object using a single depth camera", "text": "Online 3D reconstruction of real-world scenes has been attracting increasing interests from both the academia and industry, especially with the consumer-level depth cameras becoming widely available. Recent most online reconstruction systems take live depth data from a moving Kinect camera and incrementally fuse them to a single high-quality 3D model in real time. Although most real-world scenes have static environment, the daily objects in a scene often move dynamically, which are non-trivial to reconstruct especially when the camera is also not still. To solve this problem, we propose a single depth camera-based real-time approach for simultaneous reconstruction of dynamic object and static environment, and provide solutions for its key issues. In particular, we first introduce a robust optimization scheme which takes advantage of raycasted maps to segment moving object and background from the live depth map. The corresponding depth data are then fused to the volumes, respectively. These volumes are raycasted to extract views of the implicit surface which can be used as a consistent reference frame for the next iteration of segmentation and tracking. Particularly, in order to handle fast motion of dynamic object and handheld camera in the fusion stage, we propose a sequential 6D pose prediction method which largely increases the registration robustness and avoids registration failures occurred in conventional methods. Experimental results show that our approach can reconstruct moving object as well as static environment with rich details, and outperform conventional methods in multiple aspects."}
{"_id": "320473d70ea898ce099ca0dda92af17a34b99599", "title": "Monadic Parsing in Haskell", "text": "This paper is a tutorial on defining recursive descent parsers in Haskell. In the spirit of one-stop shopping , the paper combines material from three areas into a single source. The three areas are functional parsers (Burge, 1975; Wadler, 1985; Hutton, 1992; Fokker, 1995), the use of monads to structure functional programs (Wadler, 1990, 1992a, 1992b), and the use of special syntax for monadic programs in Haskell (Jones, 1995; Peterson et al., 1996). More specifically, the paper shows how to define monadic parsers using do notation in Haskell. Of course, recursive descent parsers defined by hand lack the efficiency of bottomup parsers generated by machine (Aho et al., 1986; Mogensen, 1993; Gill and Marlow, 1995). However, for many research applications, a simple recursive descent parser is perfectly sufficient. Moreover, while parser generators typically offer a fixed set of combinators for describing grammars, the method described here is completely extensible: parsers are first-class values, and we have the full power of Haskell available to define new combinators for special applications. The method is also an excellent illustration of the elegance of functional programming. The paper is targeted at the level of a good undergraduate student who is familiar with Haskell, and has completed a grammars and parsing course. Some knowledge of functional parsers would be useful, but no experience with monads is assumed. A Haskell library derived from the paper is available on the web from:"}
{"_id": "fd47145321e4b34e043104c9eb21c9bc28dfd680", "title": "On the Design of Adaptive Automation for Complex Systems", "text": "This article presents a constrained review of human factors issues relevant to adaptive automation (AA), including designing complex system interfaces to support AA, facilitating human\u2013computer interaction and crew interactions in adaptive system operations, and considering workload associated with AA management in the design of human roles in adaptive systems. Unfortunately, these issues have received limited attention in earlier reviews of AA. This work is aimed at supporting a general theory of human-centered automation advocating humans as active information processors in complex system control loops to support situation awareness and effective performance. The review demonstrates the need for research into user-centered design of dynamic displays in adaptive systems. It also points to the need for discretion in designing transparent interfaces to facilitate human awareness of modes of automated systems. Finally, the review identifies the need to consider critical human\u2013human interactions in designing adaptive systems. This work describes important branches of a developing framework of AA research and contributes to the general theory of human-centered automation."}
{"_id": "0cb087738f1e88043e13f0744d91b52aaa47d5f0", "title": "Android based Portable Hand Sign Recognition System", "text": "These days mobile devices like phones or tablets are very common among people of all age. They are connected with network and provide seamless communications through internet or cellular services. These devices can be a big help for the people who are not able to communicatie properly and even in emergency conditions. A disabled person who is not able to speak or a person who speak a different language, these devices can be a boon for them as understanding, translating and speaking systems for these people. This chapter discusses a portable android based hand sign recognition system which can be used by disabled people. This chapter shows a part of on-going project. Computer Vision based techniques were used for image analysis and PCA was used after image tokenizer for recognition. This method was tested with webcam results to make system more robust."}
{"_id": "d987bc0133f3331ef89ea1e50f5c57acfe17db9b", "title": "Tool Detection and Operative Skill Assessment in Surgical Videos Using Region-Based Convolutional Neural Networks", "text": "Five billion people in the world lack access to quality surgical care. Surgeon skill varies dramatically, and many surgical patients suffer complications and avoidable harm. Improving surgical training and feedback would help to reduce the rate of complications\u2014half of which have been shown to be preventable. To do this, it is essential to assess operative skill, a process that currently requires experts and is manual, time consuming, and subjective. In this work, we introduce an approach to automatically assess surgeon performance by tracking and analyzing tool movements in surgical videos, leveraging region-based convolutional neural networks. In order to study this problem, we also introduce a new dataset, m2cai16-tool-locations, which extends the m2cai16-tool dataset with spatial bounds of tools. While previous methods have addressed tool presence detection, ours is the first to not only detect presence but also spatially localize surgical tools in real-world laparoscopic surgical videos. We show that our method both effectively detects the spatial bounds of tools as well as significantly outperforms existing methods on tool presence detection. We further demonstrate the ability of our method to assess surgical quality through analysis of tool usage patterns, movement range, and economy of motion."}
{"_id": "4a27709545cfa225d8983fb4df8061fb205b9116", "title": "A data-driven approach to predict the success of bank telemarketing", "text": "We propose a data mining (DM) approach to predict the success of telemarketing calls for selling bank long-term deposits. A Portuguese retail bank was addressed, with data collected from 2008 to 2013, thus including the effects of the recent financial crisis. We analyzed a large set of 150 features related with bank client, product and social-economic attributes. A semi-automatic feature selection was explored in the modeling phase, performed with the data prior to July 2012 and that allowed to select a reduced set of 22 features. We also compared four DM models: logistic regression, decision trees (DT), neural network (NN) and support vector machine. Using two metrics, area of the receiver operating characteristic curve (AUC) and area of the LIFT cumulative curve (ALIFT), the four models were tested on an evaluation phase, using the most recent data (after July 2012) and a rolling windows scheme. The NN presented the best results (AUC=0.8 and ALIFT=0.7), allowing to reach 79% of the subscribers by selecting the half better classified clients. Also, two knowledge extraction methods, a sensitivity analysis and a DT, were applied to the NN model and revealed several key attributes (e.g., Euribor rate, direction of the call and bank agent experience). Such knowledge extraction confirmed the obtained model as credible and valuable for telemarketing campaign managers. Preprint submitted to Elsevier 19 February 2014"}
{"_id": "1b02659436090ae5888eafa25f5726a23b87bd64", "title": "Enhancing metacognitive reinforcement learning using reward structures and feedback", "text": "How do we learn to think better, and what can we do to promote such metacognitive learning? Here, we propose that cognitive growth proceeds through metacognitive reinforcement learning. We apply this theory to model how people learn how far to plan ahead and test its predictions about the speed of metacognitive learning in two experiments. In the first experiment, we find that our model can discern a reward structure that promotes metacognitive reinforcement learning from one that hinders it. In the second experiment, we show that our model can be used to design a feedback mechanism that enhances metacognitive reinforcement learning in an environment that hinders learning. Our results suggest that modeling metacognitive learning is a promising step towards promoting cognitive growth."}
{"_id": "d51c099b2ef1b8fa131825b582ef23c6c50acc17", "title": "An experimental comparison of ensemble of classifiers for bankruptcy prediction and credit scoring", "text": "In this paper, we investigate the performance of several systems based on ensemble of classifiers for bankruptcy prediction and credit scoring. The obtained results are very encouraging, our results improved the performance obtained using the stand-alone classifiers. We show that the method \u2018\u2018Random Subspace\u201d outperforms the other ensemble methods tested in this paper. Moreover, the best stand-alone method is the multi-layer perceptron neural net, while the best method tested in this work is the Random Subspace of Levenberg\u2013Marquardt neural net. In this work, three financial datasets are chosen for the experiments: Australian credit, German credit, and Japanese credit. 2008 Elsevier Ltd. All rights reserved."}
{"_id": "529d9215a7d8dd32bdbca018dab3e839569241a0", "title": "PyThinSearch: A Simple Web Search Engine", "text": "We describe a simple, functioning web search engine for indexing and searching online documents using python programming language. Python was chosen because it is an elegant language with simple syntax, is easy to learn and debug, and supports main operating systems almost evenly. The remarkable characteristics of this program are an adjustable search function that allows users to rank documents with several combinations of score functions and the focus on anchor text analysis as we provide four additional schemes to calculate scores based on anchor text. We also provide an additional ranking algorithm based on link addition process in network motivated by PageRank and HITS as an experimental tool. This algorithm is the original contribution of this paper."}
{"_id": "25d7da85858a4d89b7de84fd94f0c0a51a9fc67a", "title": "Selective Search for Object Recognition", "text": "This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99\u00a0% recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http://disi.unitn.it/~uijlings/SelectiveSearch.html )."}
{"_id": "0cc22d1dab50d9bab9501008e9b359cd9e51872a", "title": "SuperParsing: Scalable Nonparametric Image Parsing with Superpixels", "text": "This paper presents a simple and effective nonparametric approach to the problem of image parsing, or labeling image regions (in our case, superpixels produced by bottom-up segmentation) with their categories. This approach requires no training, and it can easily scale to datasets with tens of thousands of images and hundreds of labels. It works by scene-level matching with global image descriptors, followed by superpixel-level matching with local features and efficient Markov random field (MRF) optimization for incorporating neighborhood context. Our MRF setup can also compute a simultaneous labeling of image regions into semantic classes (e.g., tree, building, car) and geometric classes (sky, vertical, ground). Our system outperforms the state-of-the-art nonparametric method based on SIFT Flow on a dataset of 2,688 images and 33 labels. In addition, we report per-pixel rates on a larger dataset of 15,150 images and 170 labels. To our knowledge, this is the first complete evaluation of image parsing on a dataset of this size, and it establishes a new benchmark for the problem."}
{"_id": "f2eced4d8317b8e64482d58cdae5ca95f490eb25", "title": "Model based control of fiber reinforced elastofluidic enclosures", "text": "Fiber-Reinforced Elastofluidic Enclosures (FREEs), are a subset of pneumatic soft robots with an asymmetric continuously deformable skin that are able to generate a wide range of deformations and forces, including rotation and screw motions. Though these soft robots are able to generate a variety of motions, simultaneously controlling their end effector rotation and position has remained challenging due to the lack of a simple model. This paper presents a model that establishes a relationship between the pressure, torque due to axial loading, and axial rotation to enable a model-driven open-loop control for FREEs. The modeling technique relies on describing force equilibrium between the fiber, fluid, and an elastomer model which is computed via system identification. The model is experimentally tested as these variables are changed, illustrating that it provides good agreement with the real system. To further illustrate the potential of the model, a precision open-loop control experiment of opening a rotational combination lock is presented2."}
{"_id": "a55eb76988e0692525259a7bcbaeba36638e3600", "title": "Theory and implementation of dielectric resonator antenna excited by a waveguide slot", "text": "Excitation of dielectric resonator antennas (DRAs) by waveguide slots is proposed as an alternative to traditionally used excitation mechanisms in order to enhance the frequency bandwidth of slotted waveguide radiators and to control the power coupled to the DRA. The analysis is based on the numerical solution of coupled integral equations discretized using the method of moments (MoM). The dielectric resonator (DR) is modeled as a body-of-revolution based on the integral equation formulation for the equivalent electric and magnetic surface current densities. The analysis of an infinite or a semi-infinite waveguide containing longitudinal or transverse narrow slots uses the appropriate dyadic Green's function resulting in closed-form analytical expressions of the MoM matrix. The scattering parameters for a slotted waveguide loaded with a dielectric resonator antenna disk are calculated and compared with finite-difference time-domain results. Bandwidth enhancement is achieved by the proper selection for the antenna parameters."}
{"_id": "672221c45146dbe1c0e2d07132cc17eb20ff965c", "title": "Eight Key Elements of a Business Model", "text": "The growth of e-commerce and mobile services has been creating business opportunities. Among these, mobile marketing is predicted to be a new trend of marketing. As mobile devices are personal tools such that the services ought to be unique, in the last few years, researches have been conducted studies related to the user acceptance of mobile services and produced results. This research aims to develop a model of mobile e-commerce mobile marketing system, in the form of computer-based information system (CBIS) that addresses the recommendations or criteria formulated by researchers. In this paper, the criteria formulated by researches are presented then each of the criteria is resolved and translated into mobile services. A model of CBIS, which is an integration of a website and a mobile application, is designed to materialize the mobile services. The model is presented in the form of business model, system procedures, network topology, software model of the website and mobile application, and database models."}
{"_id": "afed1b1e121ebc62566c1479ed5102b8c61538ac", "title": "The LOCATA Challenge Data Corpus for Acoustic Source Localization and Tracking", "text": "Algorithms for acoustic source localization and tracking are essential for a wide range of applications such as personal assistants, smart homes, tele-conferencing systems, hearing aids, or autonomous systems. Numerous algorithms have been proposed for this purpose which, however, are not evaluated and compared against each other by using a common database so far. The IEEE-AASP Challenge on sound source localization and tracking (LOCATA) provides a novel, comprehensive data corpus for the objective benchmarking of state-of-the-art algorithms on sound source localization and tracking. The data corpus comprises six tasks ranging from the localization of a single static sound source with a static microphone array to the tracking of multiple moving speakers with a moving microphone array. It contains real-world multichannel audio recordings, obtained by hearing aids, microphones integrated in a robot head, a planar and a spherical microphone array in an enclosed acoustic environment, as well as positional information about the involved arrays and sound sources represented by moving human talkers or static loudspeakers."}
{"_id": "f5325f71d6ea59b8b120483ef882da3c0ea8a8cd", "title": "Thermal Analysis of the Permanent-Magnet Spherical Motor", "text": "There are many kinds of permanent-magnet (PM) motors, which are widely used in modern industry, but the problem of temperature rise is likely to affect the operation performance and even the life of the motor. The semiclosed structure of the PM spherical motor (PMSM) tends to increase the temperature when it works, so the thermal analysis of the PMSM is considered necessary. Based on double Fourier series decomposition, this paper establishes a three-dimensional (3-D) analytical model of PM eddy current loss in the PMSM, which considers the effect of time and space harmonics. Meanwhile, the heat sources such as hysteresis loss and copper loss are discussed. With those heat sources, a 3-D equivalent thermal network model is established. The thermal resistances are calculated. By applying the thermal network model, the influence of different speed and load on temperature rise is analyzed, and a steady-state thermal analysis of the motor is performed by using finite element analysis. The structure of stator is improved and good heat dissipation effect is achieved. This paper provides the theoretical basis for the design of ventilation and cooling system of the PMSM."}
{"_id": "8ca87d3b45d66d8c3712d532e05b163d1c6c2121", "title": "Different Feature Selection for Sentiment Classification", "text": "Sentiment Analysis (SA) research has increased tremendously in recent times .Sentiment analysis means to extract opinion of users from review documents. Sentiment classification using Machine learning (ML ) methods faces the problem of high dimensionality of feature vector. Therefore, a feature selection method is required to eliminate the irrelevant and noisy features from the feature vector for efficient working of ML algorithms Rough set theory provides an important concept for feature reduction called reduct. The cost of reduct set computation is highly influenced by the attribute size of the dataset where the problem of finding reducts has been proven as NPhard problems.Different feature selection are applied on different data set, Experimental results show that mRMR is better compared to IG for sentiment classification, Hybrid feature selection method based on the RST and Information Gain (IG) is better compared to the previous methods. Proposed methods are evaluated on four standard datasets viz. Movie review, Product (book, DVD, and electronics) reviewed datasets, and Experimental results show that hybrid feature selection method outperforms than feature selection methods for sentimental classification."}
{"_id": "98f1fa77d71bc92e283c8911d314ee3817cc3fce", "title": "Introduction to the Special Issue: The Literature Review in Information Systems", "text": "There has been a flowering of scholarly interest in the literature review as a research method in the information systems discipline. We feel privileged to contribute to this conversation and introduce the work of the authors represented in this special issue. Some of the highlights include three new methods for conducting literature analysis and guidelines, tutorials, and approaches for coping with some of the challenges involved in carrying out a literature review. Of the three \u201cnew method\u201d papers, one (ontological meta-analysis and synthesis) is entirely new, and two (stylized facts and critical discourse analysis) are novel in the information systems context. The other four paper address more general issues: the challenges of effective search strategies when confronted with the burgeoning volume of research available, a detailed tool-supported approach for conducting a rigorous review, a detailed tutorial for conducting a qualitative literature review, and a discussion of quality issues. Collectively, the papers place emphasis beyond the traditional \u201cnarrative synthesis\u201d on the importance of selecting the appropriate approach for the research context and the importance of attention to quality and transparency at all stages of the process, regardless of which approach is adopted."}
{"_id": "45a9c00b07129b63515945b60d778134ffc3773f", "title": "Internet Addiction and Relationships with Insomnia, Anxiety, Depression, Stress and Self-Esteem in University Students: A Cross-Sectional Designed Study", "text": "BACKGROUND AND AIMS\nInternet addiction (IA) could be a major concern in university medical students aiming to develop into health professionals. The implications of this addiction as well as its association with sleep, mood disorders and self-esteem can hinder their studies, impact their long-term career goals and have wide and detrimental consequences for society as a whole. The objectives of this study were to: 1) Assess potential IA in university medical students, as well as factors associated with it; 2) Assess the relationships between potential IA, insomnia, depression, anxiety, stress and self-esteem.\n\n\nMETHODS\nOur study was a cross-sectional questionnaire-based survey conducted among 600 students of three faculties: medicine, dentistry and pharmacy at Saint-Joseph University. Four validated and reliable questionnaires were used: the Young Internet Addiction Test, the Insomnia Severity Index, the Depression Anxiety Stress Scales (DASS 21), and the Rosenberg Self Esteem Scale (RSES).\n\n\nRESULTS\nThe average YIAT score was 30 \u00b1 18.474; Potential IA prevalence rate was 16.8% (95% confidence interval: 13.81-19.79%) and it was significantly different between males and females (p-value = 0.003), with a higher prevalence in males (23.6% versus 13.9%). Significant correlations were found between potential IA and insomnia, stress, anxiety, depression and self-esteem (p-value < 0.001); ISI and DASS sub-scores were higher and self-esteem lower in students with potential IA.\n\n\nCONCLUSIONS\nIdentifying students with potential IA is important because this addiction often coexists with other psychological problems. Therefore, interventions should include not only IA management but also associated psychosocial stressors such as insomnia, anxiety, depression, stress, and self-esteem."}
{"_id": "c08e84cfc979c5e4534797fb83b82419514e30f6", "title": "Sensorless current estimation and sharing in multiphase input-parallel output-parallel DC-DC converters", "text": "This paper introduces a sensorless current-sharing strategy for multiphase input-parallel output-parallel (IPOP) DC-DC converters. A dual active bridge (DAB) DC-DC converter is chosen as the basic DC-DC converter. With this strategy, by perturbing the duty cycles in (n-1) out of n phases in turn and measuring the changes of duty cycles respectively, the parameters mismatches among phases are estimated. According to the mismatches, a set of variables, which are proportional to the per-phase output currents, are calculated. Then with a current-sharing regulator, parameter mismatches are compensated, thus achieving current sharing without current sensing. The strategy is verified through both simulation and experimental implementation with a 30V-to-70V, 272W, 20kHz, three-phase IPOP DC-DC converter."}
{"_id": "41a1d968174234a6bc991f7f5ed29ecb49681216", "title": "Neural mechanisms of selective visual attention.", "text": "The two basic phenomena that define the problem of visual attention can be illustrated in a simple example. Consider the arrays shown in each panel of Figure 1. In a typical experiment, before the arrays were presented, subjects would be asked to report letters appearing in one color (targets, here black letters), and to disregard letters in the other color (nontargets, here white letters). The array would then be briefly flashed, and the subjects, without any opportunity for eye movements, would give their report. The display mimics our. usual cluttered visual environment: It contains one or more objects that are relevant to current behavior, along with others that are irrelevant. The first basic phenomenon is limited capacity for processing information. At any given time, only a small amount of the information available on the retina can be processed and used in the control of behavior. Subjectively, giving attention to any one target leaves less available for others. In Figure 1, the probability of reporting the target letter N is much lower with two accompa\u00ad nying targets (Figure la) than with none (Figure Ib). The second basic phenomenon is selectivity-the ability to filter out un\u00ad wanted information. Subjectively, one is aware of attended stimuli and largely unaware of unattended ones. Correspondingly, accuracy in identifying an attended stimulus may be independent of the number of nontargets in a display (Figure la vs Ie) (see Bundesen 1990, Duncan 1980)."}
{"_id": "e5627ed3d4d07b73355cbfd7f54f5e6b696909bd", "title": "Discrete Delaunay: Boundary extraction from voxel objects", "text": "We present a discrete approach for boundary extraction from 3D image data. The proposed technique is based on the duality between the Voronoi graph computed accross the digital boundary and the Delaunay triangulation. The originality of the approach is that algorithms perform only integer arithmetic and the method does not suffer from standard round problems and numerical instabilities in the case of floating point computations. This method has been applied both on segmented anatomical structures and on manufactured objects presenting corners and edges. The experimental results show that the method allows to produce a polygonal boundary representation which is guaranteed to be a 2-manifold. This representation is successfully transformed into a triangular quality mesh which meets all topological and geometrical requirements of applications such as augmented reality or simulation."}
{"_id": "3d3c80f0ef6cbbf6e35367b028feacace033affe", "title": "Fundamental Limits of Wideband Localization\u2014 Part I: A General Framework", "text": "The availability of position information is of great importance in many commercial, public safety, and military applications. The coming years will see the emergence of location-aware networks with submeter accuracy, relying on accurate range measurements provided by wide bandwidth transmissions. In this two-part paper, we determine the fundamental limits of localization accuracy of wideband wireless networks in harsh multipath environments. We first develop a general framework to characterize the localization accuracy of a given node here and then extend our analysis to cooperative location-aware networks in Part II. In this paper, we characterize localization accuracy in terms of a performance measure called the squared position error bound (SPEB), and introduce the notion of equivalent Fisher information (EFI) to derive the SPEB in a succinct expression. This methodology provides insights into the essence of the localization problem by unifying localization information from individual anchors and that from a priori knowledge of the agent's position in a canonical form. Our analysis begins with the received waveforms themselves rather than utilizing only the signal metrics extracted from these waveforms, such as time-of-arrival and received signal strength. Hence, our framework exploits all the information inherent in the received waveforms, and the resulting SPEB serves as a fundamental limit of localization accuracy."}
{"_id": "40d2e4e51903c7c6868acdde4335b3c9245a7002", "title": "Coupled Bayesian Sets Algorithm for Semi-supervised Learning and Information Extraction", "text": "Our inspiration comes from Nell (Never Ending Language Learning), a computer program running at Carnegie Mellon University to extract structured information from unstructured web pages. We consider the problem of semi-supervised learning approach to extract category instances (e.g. country(USA), city(New York)) from web pages, starting with a handful of labeled training examples of each category or relation, plus hundreds of millions of unlabeled web documents. Semisupervised approaches using a small number of labeled examples together with many unlabeled examples are often unreliable as they frequently produce an internally consistent, but nevertheless, incorrect set of extractions. We believe that this problem can be overcome by simultaneously learning independent classifiers in a new approach named Coupled Bayesian Sets algorithm, based on Bayesian Sets, for many different categories and relations (in the presence of an ontology defining constraints that couple the training of these classifiers). Experimental results show that simultaneously learning a coupled collection of classifiers for random 11 categories resulted in much more accurate extractions than training classifiers through original Bayesian Sets algorithm, Naive Bayes, BaS-all and Coupled Pattern Learner (the category extractor used in NELL)."}
{"_id": "21496f06aaf599cd76397cf7be3987079e23c93d", "title": "Polarity Lexicon Building: to what Extent Is the Manual Effort Worth?", "text": "Polarity lexicons are a basic resource for analyzing the sentiments and opinions expressed in texts in an automated way. This paper explores three methods to construct polarity lexicons: translating existing lexicons from other languages, extracting polarity lexicons from corpora, and annotating sentiments Lexical Knowledge Bases. Each of these methods require a different degree of human effort. We evaluate how much manual effort is needed and to what extent that effort pays in terms of performance improvement. Experiment setup includes generating lexicons for Basque, and evaluating them against gold standard datasets in different domains. Results show that extracting polarity lexicons from corpora is the best solution for achieving a good performance with reasonable human effort."}
{"_id": "91a9a10e4b4067dd9f7135129727c7530702763d", "title": "Exploring specialized near-memory processing for data intensive operations", "text": "Emerging 3D stacked memory systems provide significantly more bandwidth than current DDR modules. However, general purpose processors do not take full advantage of these resources offered by the memory modules. Taking advantage of the increased bandwidth requires the use of specialized processing units. In this paper, we evaluate the benefits of placing hardware accelerators at the bottom layer of a 3D stacked memory system compared to accelerators that are placed external to the memory stack. Our evaluation of the design using cycle-accurate simulation and RTL synthesis shows that, for important data intensive kernels, near-memory accelerators inside a single 3D memory package provide 3x-13x speedup over a Quad-core Xeon processor. Most of the benefits are from the application of accelerators, as the near-memory configurations provide marginal benefits compared to the same number of accelerators placed on a die external to the memory package. This comparable performance for external accelerators is due to the high bandwidth afforded by the high-speed off-chip links. On the other hand, near-memory accelerators consume 7%-39% less energy than the external accelerators."}
{"_id": "5e880d3bd1c4c4635ea7684df47109a33448b4c2", "title": "Towards an Architecture for Knowledge Representation and Reasoning in Robotics", "text": "This paper describes an architecture that combines the complementary strengths of probabilistic graphical models and declarative programming to enable robots to represent and reason with qualitative and quantitative descriptions of uncertainty and domain knowledge. An action language is used for the architecture\u2019s low-level (LL) and high-level (HL) system descriptions, and the HL definition of recorded history is expanded to allow prioritized defaults. For any given objective, tentative plans created in the HL using commonsense reasoning are implemented in the LL using probabilistic algorithms, and the corresponding observations are added to the HL history. Tight coupling between the levels helps automate the selection of relevant variables and the generation of policies in the LL for each HL action, and supports reasoning with violation of defaults, noisy observations and unreliable actions in complex domains. The architecture is evaluated in simulation and on robots moving objects in indoor domains."}
{"_id": "4c422d3b8df9b140b04d436d704b062c4f304dec", "title": "A framework to build bespoke auto-tuners with structured Bayesian optimisation", "text": "Due to their complexity, modern computer systems expose many configuration parameters which users must manually tune to maximise the performance of their applications. To relieve users of this burden, auto-tuning has emerged as an alternative in which a black-box optimiser iteratively evaluates configurations to find efficient ones. A popular auto-tuning technique is Bayesian optimisation, which uses the results to incrementally build a probabilistic model of the impact of the parameters on performance. This allows the optimisation to quickly focus on efficient regions of the configuration space. Unfortunately, for many computer systems, either the configuration space is too large to develop a good model, or the time to evaluate performance is too long to be executed many times. In this dissertation, I argue that by extracting a small amount of domain specific knowledge about a system, it is possible to build a bespoke auto-tuner with significantly better performance than its off-the-shelf counterparts. This could be performed, for example, by a system engineer who has a good understanding of the underlying system behaviour and wants to provide performance portability. This dissertation presents BOAT, a framework to build BespOke Auto-Tuners. BOAT offers a novel set of abstractions designed to make the exposition of domain knowledge easy and intuitive. First, I introduce Structured Bayesian Optimisation (SBO), an extension of the Bayesian optimisation algorithm. SBO can leverage a bespoke probabilistic model of the system's behaviour, provided by the system engineer, to rapidly converge to high performance configurations. The model can benefit from observing many runtime measurements per evaluation of the system, akin to the use of profilers. Second, I present Probabilistic-C++ a lightweight, high performance probabilistic programming library. It allows users to declare a probabilistic models of their system's behaviour and expose it to an SBO. Probabilistic programming is a recent tool from the Machine Learning community making the declaration of structured probabilistic models intuitive. Third, I present a new optimisation scheduling abstraction which offers a structured way to express optimisations which themselves execute other optimisations. For example, this is useful to express Bayesian optimisations, which each iteration execute a numerical optimisation. Furthermore, it allows users to easily implement decompositions which exploit the loose coupling of subsets of the configuration parameters to optimise them almost independently."}
{"_id": "9c4e2205821c519cf72ae3e88837ba0a678f6086", "title": "A Novel Technique for English Font Recognition Using Support Vector Machines", "text": "Font Recognition is one of the Challenging tasks in Optical Character Recognition. Most of the existing methods for font recognition make use of local typographical features and connected component analysis. In this paper, English font recognition is done based on global texture analysis. The main objective of this proposal is to employ support vector machines (SVM) in identifying various fonts. The feature vectors are extracted by making use of Gabor filters and the proposed SVM is trained using these features. The method is found to give superior performance over neural networks by avoiding local minima points. The SVM model is formulated tested and the results are presented in this paper. It is observed that this method is content independent and the SVM classifier shows an average accuracy of 93.54%."}
{"_id": "25190bd8bc97c78626f5ca0b6f59cf0360c71b58", "title": "Mobile ad hoc networking: imperatives and challenges", "text": "Mobile ad hoc networks (MANETs) represent complex distributed systems that comprise wireless mobile nodes that can freely and dynamically self-organize into arbitrary and temporary, \u2018\u2018ad-hoc\u2019\u2019 network topologies, allowing people and devices to seamlessly internetwork in areas with no pre-existing communication infrastructure, e.g., disaster recovery environments. Ad hoc networking concept is not a new one, having been around in various forms for over 20 years. Traditionally, tactical networks have been the only communication networking application that followed the ad hoc paradigm. Recently, the introduction of new technologies such as the Bluetooth, IEEE 802.11 and Hyperlan are helping enable eventual commercial MANET deployments outside the military domain. These recent evolutions have been generating a renewed and growing interest in the research and development of MANET. This paper attempts to provide a comprehensive overview of this dynamic field. It first explains the important role that mobile ad hoc networks play in the evolution of future wireless technologies. Then, it reviews the latest research activities in these areas, including a summary of MANET s characteristics, capabilities, applications, and design constraints. The paper concludes by presenting a set of challenges and problems requiring further research in the future. 2003 Elsevier B.V. All rights reserved."}
{"_id": "cc0439a45f37cb1a5edbb1a9ded69a75a7249597", "title": "Single-Phase Seven-Level Grid-Connected Inverter for Photovoltaic System", "text": "This paper proposes a single-phase seven-level inverter for grid-connected photovoltaic systems, with a novel pulsewidth-modulated (PWM) control scheme. Three reference signals that are identical to each other with an offset that is equivalent to the amplitude of the triangular carrier signal were used to generate the PWM signals. The inverter is capable of producing seven levels of output-voltage levels (Vdc, 2Vdc/3, Vdc/3, 0, -Vdc, -2Vdc/3, -Vdc/3) from the dc supply voltage. A digital proportional-integral current-control algorithm was implemented in a TMS320F2812 DSP to keep the current injected into the grid sinusoidal. The proposed system was verified through simulation and implemented in a prototype."}
{"_id": "9ca9f28676ad788d04ba24a51141a9a0a0df4d67", "title": "A new model for learning in graph domains", "text": "In several applications the information is naturally represented by graphs. Traditional approaches cope with graphical data structures using a preprocessing phase which transforms the graphs into a set of flat vectors. However, in this way, important topological information may be lost and the achieved results may heavily depend on the preprocessing stage. This paper presents a new neural model, called graph neural network (GNN), capable of directly processing graphs. GNNs extends recursive neural networks and can be applied on most of the practically useful kinds of graphs, including directed, undirected, labelled and cyclic graphs. A learning algorithm for GNNs is proposed and some experiments are discussed which assess the properties of the model."}
{"_id": "6b75c572673d1f2af76332dd251ef1ac7bcb59f0", "title": "Wavelet Feature Based Confusion Character Sets for Gujarati Script", "text": "Indic script recognition is a difficult task due to the large number of symbols that result from concatenation of vowel modifiers to basic consonants and the conjunction of consonants with modifiers etc. Recognition of Gujarati script is a less studied area and no attempt is made so far to constitute confusion sets of Gujarati glyphs. In this paper, we present confusion sets of glyphs in printed Gujarati. Feature vector made up of Daubechies D4 wavelet coefficients were subjected to two different classifiers, giving more than 96% accuracy for a larger set of symbols. Novel application of GR neural-net architecture allows for fast building of a classifier for the large character data set. The combined approach of wavelet feature extraction and GRNN classification has given the highest recognition accuracy reported on this script."}
{"_id": "f407c09ae8d886fc373d3f471c97c22d3ca50580", "title": "Intelligent fuzzy controller of a quadrotor", "text": "The aim of this work is to describe an intelligent system based on fuzzy logic that is developed to control a quadrotor. A quadrotor is a helicopter with four rotors, that make the vehicle more stable but more complex to model and to control. The quadrotor has been used as a testing platform in the last years for various universities and research centres. A quadrotor has six degrees of freedom, three of them regarding the position: height, horizontal and vertical motions; and the other three are related to the orientation: pitch, roll and yaw. A fuzzy control is designed and implemented to control a simulation model of the quadrotor. The inputs are the desired values of the height, roll, pitch and yaw. The outputs are the power of each of the four rotors that is necessary to reach the specifications. Simulation results prove the efficiency of this intelligent control strategy."}
{"_id": "714b68efe5f81e8ec24701fc222393ed038137ac", "title": "Clearing algorithms for barter exchange markets: enabling nationwide kidney exchanges", "text": "In barter-exchange markets, agents seek to swap their items with one another, in order to improve their own utilities. These swaps consist of cycles of agents, with each agent receiving the item of the next agent in the cycle. We focus mainly on the upcoming national kidney-exchange market, where patients with kidney disease can obtain compatible donors by swapping their own willing but incompatible donors. With over 70,000 patients already waiting for a cadaver kidney in the US, this market is seen as the only ethical way to significantly reduce the 4,000 deaths per year attributed to kidney diseas.\n The clearing problem involves finding a social welfare maximizing exchange when the maximum length of a cycle is fixed. Long cycles are forbidden, since, for incentive reasons, all transplants in a cycle must be performed simultaneously. Also, in barter-exchanges generally, more agents are affected if one drops out of a longer cycle. We prove that the clearing problem with this cycle-length constraint is NP-hard. Solving it exactly is one of the main challenges in establishing a national kidney exchange.\n We present the first algorithm capable of clearing these markets on a nationwide scale. The key is incremental problem formulation. We adapt two paradigms for the task: constraint generation and column generation. For each, we develop techniques that dramatically improve both runtime and memory usage. We conclude that column generation scales drastically better than constraint generation. Our algorithm also supports several generalizations, as demanded by real-world kidney exchanges.\n Our algorithm replaced CPLEX as the clearing algorithm of the Alliance for Paired Donation, one of the leading kidney exchanges. The match runs are conducted every two weeks and transplants based on our optimizations have already been conducted."}
{"_id": "0a7eb04e7100161893e9a81f89445924439c5964", "title": "PeerSpace - An Online Collaborative Learning Environment for Computer Science Students", "text": "The aim of Peer Space is to promote peer support and peer learning in introductory Computer Science (CS) courses by providing the students with online collaborative tools for convenient synchronous and asynchronous interactions on course related topics and social matters. This paper presents the development of various social and learning components in Peer Space that are unique in promoting collaborative learning. Analysis of preliminary results is presented."}
{"_id": "138c86b9283e4f26ff1583acdf4e51a5f88ccad1", "title": "Observing Human-Object Interactions: Using Spatial and Functional Compatibility for Recognition", "text": "Interpretation of images and videos containing humans interacting with different objects is a daunting task. It involves understanding scene or event, analyzing human movements, recognizing manipulable objects, and observing the effect of the human movement on those objects. While each of these perceptual tasks can be conducted independently, recognition rate improves when interactions between them are considered. Motivated by psychological studies of human perception, we present a Bayesian approach which integrates various perceptual tasks involved in understanding human-object interactions. Previous approaches to object and action recognition rely on static shape or appearance feature matching and motion analysis, respectively. Our approach goes beyond these traditional approaches and applies spatial and functional constraints on each of the perceptual elements for coherent semantic interpretation. Such constraints allow us to recognize objects and actions when the appearances are not discriminative enough. We also demonstrate the use of such constraints in recognition of actions from static images without using any motion information."}
{"_id": "321f14b35975b3800de5e66da64dee96071603d9", "title": "Efficient and Robust Feature Selection via Joint \u21132, 1-Norms Minimization", "text": "Feature selection is an important component of many machine learning applications. Especially in many bioinformatics tasks, efficient and robust feature selection methods are desired to extract meaningful features and eliminate noisy ones. In this paper, we propose a new robust feature selection method with emphasizing joint `2,1-norm minimization on both loss function and regularization. The `2,1-norm based loss function is robust to outliers in data points and the `2,1norm regularization selects features across all data points with joint sparsity. An efficient algorithm is introduced with proved convergence. Our regression based objective makes the feature selection process more efficient. Our method has been applied into both genomic and proteomic biomarkers discovery. Extensive empirical studies are performed on six data sets to demonstrate the performance of our feature selection method."}
{"_id": "9b505dd5459fb28f0136d3c63793b600042e6a94", "title": "A Multimedia Retrieval Framework Based on Semi-Supervised Ranking and Relevance Feedback", "text": "We present a new framework for multimedia content analysis and retrieval which consists of two independent algorithms. First, we propose a new semi-supervised algorithm called ranking with Local Regression and Global Alignment (LRGA) to learn a robust Laplacian matrix for data ranking. In LRGA, for each data point, a local linear regression model is used to predict the ranking scores of its neighboring points. A unified objective function is then proposed to globally align the local models from all the data points so that an optimal ranking score can be assigned to each data point. Second, we propose a semi-supervised long-term Relevance Feedback (RF) algorithm to refine the multimedia data representation. The proposed long-term RF algorithm utilizes both the multimedia data distribution in multimedia feature space and the history RF information provided by users. A trace ratio optimization problem is then formulated and solved by an efficient algorithm. The algorithms have been applied to several content-based multimedia retrieval applications, including cross-media retrieval, image retrieval, and 3D motion/pose data retrieval. Comprehensive experiments on four data sets have demonstrated its advantages in precision, robustness, scalability, and computational efficiency."}
{"_id": "0ef550dacb89fb655f252e5b17dbd5d643eb5ac1", "title": "Action recognition in the premotor cortex.", "text": "We recorded electrical activity from 532 neurons in the rostral part of inferior area 6 (area F5) of two macaque monkeys. Previous data had shown that neurons of this area discharge during goal-directed hand and mouth movements. We describe here the properties of a newly discovered set of F5 neurons (\"mirror neurons', n = 92) all of which became active both when the monkey performed a given action and when it observed a similar action performed by the experimenter. Mirror neurons, in order to be visually triggered, required an interaction between the agent of the action and the object of it. The sight of the agent alone or of the object alone (three-dimensional objects, food) were ineffective. Hand and the mouth were by far the most effective agents. The actions most represented among those activating mirror neurons were grasping, manipulating and placing. In most mirror neurons (92%) there was a clear relation between the visual action they responded to and the motor response they coded. In approximately 30% of mirror neurons the congruence was very strict and the effective observed and executed actions corresponded both in terms of general action (e.g. grasping) and in terms of the way in which that action was executed (e.g. precision grip). We conclude by proposing that mirror neurons form a system for matching observation and execution of motor actions. We discuss the possible role of this system in action recognition and, given the proposed homology between F5 and human Brocca's region, we posit that a matching system, similar to that of mirror neurons exists in humans and could be involved in recognition of actions as well as phonetic gestures."}
{"_id": "15b2c44b3868a1055850846161aaca59083e0529", "title": "Learning with Local and Global Consistency", "text": "We consider the general problem of learning from labeled and unlabeled data, which is often called semi-supervised learning or transductive inference. A principled approach to semi-supervised learning is to design a classifying function which is sufficiently smooth with respect to the intrinsic structure collectively revealed by known labeled and unlabeled points. We present a simple algorithm to obtain such a smooth solution. Our method yields encouraging experimental results on a number of classification problems and demonstrates effective use of unlabeled data."}
{"_id": "ebeb3dc542db09dfca3d5697f6897058eaa3a1f1", "title": "DC and RF breakdown voltage characteristics of SiGe HBTs for WiFi PA applications", "text": "Breakdown voltage and RF characteristics relevant for RF power amplifiers (PA) are presented in this paper. Typically, DC collector-to-emitter breakdown voltage with base open (BVCEO) or DC collector-to-base breakdown with emitter open (BVCBO) has been presented as the metric for voltage limit of PA devices. In practical PA circuits, the RF envelope voltage can swing well beyond BVCEO without causing a failure. An analysis of output power swing limitations and DC breakdown is presented with attention to biasing and temperature."}
{"_id": "6fdade400c600be1247ea41f1ab9ce2e9196d835", "title": "Cognitive Reduced Prefrontal Connectivity in Psychopathy", "text": "Julian C. Motzkin,1 Joseph P. Newman,2 Kent A. Kiehl,3,4 and Michael Koenigs1 1Department of Psychiatry, University of Wisconsin-Madison, Madison, Wisconsin 53719, 2Department of Psychology, University of Wisconsin-Madison, Madison, Wisconsin 53706, 3The MIND Research Network, Albuquerque, New Mexico 87131, and 4Departments of Psychology and Neuroscience, University of New Mexico, Albuquerque, New Mexico 87131"}
{"_id": "5174f38080629993bb258497269277b4d2e865f6", "title": "BVFG An Unbiased Detector of Curvilinear Structures", "text": "The extraction of curvilinear structures is an important low-level operation in computer vision that has many applications. Most existing operators use a simple model for the line that is to be extracted, i.e., they do not take into account the surroundings of a line. This leads to the undesired consequence that the line will be extracted in the wrong position whenever a line with different lateral contrast is extracted. In contrast, the algorithm proposed in this paper uses an explicit model for lines and their surroundings. By analyzing the scale-space behaviour of a model line profile, it is shown how the bias that is induced by asymmetrical lines can be removed. Furthermore, the algorithm not only returns the precise sub-pixel line position, but also the width of the line for each line point, also with sub-pixel accuracy."}
{"_id": "a8718f42e55720b942e11fd1eda73f7c26ad7ef9", "title": "Design and implementation of an Android host-based intrusion prevention system", "text": "Android has a dominating share in the mobile market and there is a significant rise of mobile malware targeting Android devices. Android malware accounted for 97% of all mobile threats in 2013 [26]. To protect smartphones and prevent privacy leakage, companies have implemented various host-based intrusion prevention systems (HIPS) on their Android devices. In this paper, we first analyze the implementations, strengths and weaknesses of three popular HIPS architectures. We demonstrate a severe loophole and weakness of an existing popular HIPS product in which hackers can readily exploit. Then we present a design and implementation of a secure and extensible HIPS platform---\"Patronus.\" Patronus not only provides intrusion prevention without the need to modify the Android system, it can also dynamically detect existing malware based on runtime information. We propose a two-phase dynamic detection algorithm for detecting running malware. Our experiments show that Patronus can prevent the intrusive behaviors efficiently and detect malware accurately with a very low performance overhead and power consumption."}
{"_id": "50886d25ddd5d0d1982ed94f90caa67639fcf1a1", "title": "Open Source Integrated Planner for Autonomous Navigation in Highly Dynamic Environments", "text": ""}
{"_id": "4c2bbcb3e897e927cd390517b2036b0b9123953c", "title": "BeAware! - Situation awareness, the ontology-driven way", "text": "Available online 18 July 2010 Information overload is a severe problem for human operators of large-scale control systems as, for example, encountered in the domain of road traffic management. Operators of such systems are at risk to lack situation awareness, because existing systems focus on themere presentation of the available information on graphical user interfaces\u2014thus endangering the timely and correct identification, resolution, and prevention of critical situations. In recent years, ontology-based approaches to situation awareness featuring a semantically richer knowledge model have emerged. However, current approaches are either highly domain-specific or have, in case they are domain-independent, shortcomings regarding their reusability. In this paper, we present our experience gained from the development of BeAware!, a framework for ontology-driven information systems aiming at increasing anoperator's situation awareness. In contrast to existing domain-independent approaches, BeAware!'s ontology introduces the concept of spatio-temporal primitive relations betweenobserved real-world objects thereby improving the reusability of the framework. To show its applicability, a prototype of BeAware! has been implemented in thedomain of road trafficmanagement. An overviewof this prototype and lessons learned for the development of ontology-driven information systems complete our contribution. \u00a9 2010 Elsevier B.V. All rights reserved."}
{"_id": "f8e3eb310101eac0e5175c61bb7bba2b89fcf45e", "title": "Using sonification", "text": "The idea behind sonification is that synthetic non-verbal sounds can represent numerical data and provide support for information processing activities of many different kinds. This article describes some of the ways that sonification has been used in assistive technologies, remote collaboration, engineering analyses, scientific visualisations, emergency services and aircraft cockpits. Approaches for designing sonifications are surveyed, and issues raised by the existing approaches and applications are outlined. Relations are drawn to other areas of knowledge where similar issues have also arisen, such as human-computer interaction, scientific visualisation, and computer music. At the end is a list of resources that will help you delve further into the topic."}
{"_id": "73a061bc6fe1afca540f0451dbd07065a1cf8429", "title": "Comparison of Different Classification Techniques on PIMA Indian Diabetes Data", "text": "The development of data-mining applications such as classification and clustering has been applied to large scale data. In this research, we present comparative study of different classification techniques using three data mining tools named WEKA, TANAGRA and MATLAB. The aim of this paper is to analyze the performance of different classification techniques for a set of large data. The algorithm or classifiers tested are Multilayer Perceptron, Bayes Network, J48graft (c4.5), Fuzzy Lattice Reasoning (FLR), NaiveBayes, JRip (RIPPER), Fuzzy Inference System (FIS), Adaptive Neuro-Fuzzy Inference Systems(ANFIS). A fundamental review on the selected technique is presented for introduction purposes. The diabetes data with a total instance of 768 and 9 attributes (8 for input and 1 for output) will be used to test and justify the differences between the classification methods or algorithms. Subsequently, the classification technique that has the potential to significantly improve the common or conventional methods will be suggested for use in large scale data, bioinformatics or other general applications."}
{"_id": "d5621f8c4aa7bf231624489d3525fe4c373b7ddc", "title": "SSD-Assisted Backup and Recovery for Database Systems", "text": "Backup and recovery is an important feature of database systems since it protects data against unexpected hardware and software failures. Database systems can provide data safety and reliability by creating a backup and restoring the backup from a failure. Database administrators can use backup/recovery tools that are provided with database systems or backup/recovery methods with operating systems. However, the existing tools perform time-consuming jobs and the existing methods may negatively affect run-time performance during normal operation even though high-performance SSDs are used. In this paper, we present an SSD-assisted backup/recovery scheme for database systems. In our scheme, we extend the out-of-place update characteristics of flash-based SSDs for backup/recovery operations. To this end, we exploit the resources (e.g., flash translation layer and DRAM cache with supercapacitors) inside SSDs, and we call our SSD with new backup/recovery features BR-SSD. We design and implement the backup/recovery functionality in the Samsung enterprise-class SSD (i.e., SM843Tn) for more realistic systems. Furthermore, we conduct a case study of BR-SSDs in replicated database systems and modify MySQL with replication to integrate BR-SSDs. The experimental result demonstrates that our scheme provides fast recovery while it does not negatively affect the run-time performance during normal operation."}
{"_id": "9effb687054446445d1f827f28b94ef886553c82", "title": "Multimodal analysis of the implicit affective channel in computer-mediated textual communication", "text": "Computer-mediated textual communication has become ubiquitous in recent years. Compared to face-to-face interactions, there is decreased bandwidth in affective information, yet studies show that interactions in this medium still produce rich and fulfilling affective outcomes. While overt communication (e.g., emoticons or explicit discussion of emotion) can explain some aspects of affect conveyed through textual dialogue, there may also be an underlying implicit affective channel through which participants perceive additional emotional information. To investigate this phenomenon, computer-mediated tutoring sessions were recorded with Kinect video and depth images and processed with novel tracking techniques for posture and hand-to-face gestures. Analyses demonstrated that tutors implicitly perceived students' focused attention, physical demand, and frustration. Additionally, bodily expressions of posture and gesture correlated with student cognitive-affective states that were perceived by tutors through the implicit affective channel. Finally, posture and gesture complement each other in multimodal predictive models of student cognitive-affective states, explaining greater variance than either modality alone. This approach of empirically studying the implicit affective channel may identify details of human behavior that can inform the design of future textual dialogue systems modeled on naturalistic interaction."}
{"_id": "f74219d8448b4f3a05ff82a8acf2e78e5c141c41", "title": "phishGILLNET \u2014 phishing detection methodology using probabilistic latent semantic analysis , AdaBoost , and co-training", "text": "Identity theft is one of the most profitable crimes committed by felons. In the cyber space, this is commonly achieved using phishing. We propose here robust server side methodology to detect phishing attacks, called phishGILLNET, which incorporates the power of natural language processing and machine learning techniques. phishGILLNET is a multi-layered approach to detect phishing attacks. The first layer (phishGILLNET1) employs Probabilistic Latent Semantic Analysis (PLSA) to build a topic model. The topic model handles synonym (multiple words with similar meaning), polysemy (words with multiple meanings), and other linguistic variations found in phishing. Intentional misspelled words found in phishing are handled using Levenshtein editing and Google APIs for correction. Based on term document frequency matrix as input PLSA finds phishing and non-phishing topics using tempered expectation maximization. The performance of phishGILLNET1 is evaluated using PLSA fold in technique and the classification is achieved using Fisher similarity. The second layer of phishGILLNET (phishGILLNET2) employs AdaBoost to build a robust classifier. Using probability distributions of the best PLSA topics as features the classifier is built using AdaBoost. The third layer (phishGILLNET3) further expands phishGILLNET2 by building a classifier from labeled and unlabeled examples by employing Co-Training. Experiments were conducted using one of the largest public corpus of email data containing 400,000 emails. Results show that phishGILLNET3 outperforms state of the art phishing detection methods and achieves F-measure of 100%. Moreover, phishGILLNET3 requires only a small percentage (10%) of data be annotated thus saving significant time, labor, and avoiding errors incurred in human annotation."}
{"_id": "9903b6801d6cc7687d484f8ec7d8496093cdd24b", "title": "Circle grid fractal pattern for calibration at different camera zoom levels", "text": "Camera calibration patterns and fiducial markers are basic technique for 3D measurement to create Computer Graphics (CG) and Augmented Reality (AR). A checkerboard is widely used as a calibration pattern and a circle or ring grid provides higher precision [Datta et al. 2009], and matrix codes are common as AR markers."}
{"_id": "551212c42795f2613cb09138d594ec3e654042b7", "title": "Biomedical Image Segmentation Using Fully Convolutional Networks on TrueNorth", "text": "With the rapid growth of medical and biomedical image data, energy-efficient solutions for analyzing such image data that can be processed fast and accurately on platforms with low power budget are highly desirable. This paper uses segmenting glial cells in brain microscopy images as a case study to demonstrate how to achieve biomedical image segmentation with significant energy saving and minimal comprise in accuracy. Specifically, we design, train, implement, and evaluate Fully Convolutional Networks (FCNs) for biomedical image segmentation on IBM's neurosynaptic DNN processor \u2014 TrueNorth (TN). Comparisons in terms of accuracy and energy dissipation of TN with that of a low power NVIDIA TX2 mobile GPU platform have been conducted. Experimental results show that TN can offer at least two orders of magnitude improvement in energy efficiency when compared to TX2 GPU for the same workload."}
{"_id": "6756d3e0669430fa6e006754aecb46084818d6b6", "title": "McRT-STM: a high performance software transactional memory system for a multi-core runtime", "text": "Applications need to become more concurrent to take advantage of the increased computational power provided by chip level multiprocessing. Programmers have traditionally managed this concurrency using locks (mutex based synchronization). Unfortunately, lock based synchronization often leads to deadlocks, makes fine-grained synchronization difficult, hinders composition of atomic primitives, and provides no support for error recovery. Transactions avoid many of these problems, and therefore, promise to ease concurrent programming.We describe a software transactional memory (STM) system that is part of McRT, an experimental Multi-Core RunTime. The McRT-STM implementation uses a number of novel algorithms, and supports advanced features such as nested transactions with partial aborts, conditional signaling within a transaction, and object based conflict detection for C/C++ applications. The McRT-STM exports interfaces that can be used from C/C++ programs directly or as a target for compilers translating higher level linguistic constructs.We present a detailed performance analysis of various STM design tradeoffs such as pessimistic versus optimistic concurrency, undo logging versus write buffering, and cache line based versus object based conflict detection. We also show a MCAS implementation that works on arbitrary values, coexists with the STM, and can be used as a more efficient form of transactional memory. To provide a baseline we compare the performance of the STM with that of fine-grained and coarse-grained locking using a number of concurrent data structures on a 16-processor SMP system. We also show our STM performance on a non-synthetic workload -- the Linux sendmail application."}
{"_id": "9a0b250656df8fd2e5fecb78110668195f60f11c", "title": "Modal analysis and transcription of strokes of the mridangam using non-negative matrix factorization", "text": "In this paper we use a Non-negative Matrix Factorization (NMF) based approach to analyze the strokes of the mridangam, a South Indian hand drum, in terms of the normal modes of the instrument. Using NMF, a dictionary of spectral basis vectors are first created for each of the modes of the mridangam. The composition of the strokes are then studied by projecting them along the direction of the modes using NMF. We then extend this knowledge of each stroke in terms of its basic modes to transcribe audio recordings. Hidden Markov Models are adopted to learn the modal activations for each of the strokes of the mridangam, yielding up to 88.40% accuracy during transcription."}
{"_id": "44b7f70a79734c73c8f231f8eb91f724ef97c371", "title": "Blind image quality assessment by relative gradient statistics and adaboosting neural network", "text": "The image gradient is a commonly computed image feature and a potentially predictive factor for image quality assessment (IQA). Indeed, it has been successfully used for both fulland noreference image quality prediction. However, the gradient orientation has not been deeply explored as a predictive source of information for image quality assessment. Here we seek to amend this by studying the quality relevance of the relative gradient orientation, viz., the gradient orientation relative to the surround. We also deploy a relative gradient magnitude feature which accounts for perceptual masking and utilize an AdaBoosting back-propagation (BP) neural network to map the image features to image quality. The generalization of the AdaBoosting BP neural network results in an effective and robust quality prediction model. The new model, called Oriented Gradients Image Quality Assessment (OG-IQA), is shown to deliver highly competitive image quality prediction performance as compared with the most popular IQA approaches. Furthermore, we show that OG-IQA has good database independence properties and a low complexity. & 2015 Elsevier B.V. All rights reserved."}
{"_id": "6424add0f4f99cb582ecc50c4a33ae18d9236021", "title": "Unconstrained Monocular 3D Human Pose Estimation by Action Detection and Cross-Modality Regression Forest", "text": "This work addresses the challenging problem of unconstrained 3D human pose estimation (HPE) from a novel perspective. Existing approaches struggle to operate in realistic applications, mainly due to their scene-dependent priors, such as background segmentation and multi-camera network, which restrict their use in unconstrained environments. We therfore present a framework which applies action detection and 2D pose estimation techniques to infer 3D poses in an unconstrained video. Action detection offers spatiotemporal priors to 3D human pose estimation by both recognising and localising actions in space-time. Instead of holistic features, e.g. silhouettes, we leverage the flexibility of deformable part model to detect 2D body parts as a feature to estimate 3D poses. A new unconstrained pose dataset has been collected to justify the feasibility of our method, which demonstrated promising results, significantly outperforming the relevant state-of-the-arts."}
{"_id": "1c0d35e024dbb8a1db0f326fb243a67d158d5f24", "title": "0-Day Vulnerabilities and Cybercrime", "text": "This study analyzes 0-day vulnerabilities in the broader context of cybercrime and economic markets. The work is based on the interviews of several leading experts and on a field research of the authors. In particular, cybercrime is considered when involving traditional criminal activities or when military operations are involved. A description of different 0-day vulnerability markets - White, Black and Government markets - is provided, as well as the characteristics of malware factories and their major customers are discussed."}
{"_id": "d6e2b45820dfee9ac48926884d19a30ebf33820b", "title": "PreFix: Switch Failure Prediction in Datacenter Networks", "text": "In modern datacenter networks (DCNs), failures of network devices are the norm rather than the exception, and many research efforts have focused on dealing with failures after they happen. In this paper, we take a different approach by predicting failures, thus the operators can intervene and \"fix\" the potential failures before they happen. Specifically, in our proposed system, named PreFix, we aim to determine during runtime whether a switch failure will happen in the near future. The prediction is based on the measurements of the current switch system status and historical switch hardware failure cases that have been carefully labelled by network operators. Our key observation is that failures of the same switch model share some common syslog patterns before failures occur, and we can apply machine learning methods to extract the common patterns for predicting switch failures. Our novel set of features (message template sequence, frequency, seasonality and surge) for machine learning can efficiently deal with the challenges of noises, sample imbalance, and computation overhead. We evaluated PreFix on a data set collected from 9397 switches (3 different switch models) deployed in more than 20 datacenters owned by a top global search engine in a 2-year period. PreFix achieved an average of 61.81% recall and 1.84 * 10^-5 false positive ratio. It outperforms the other failure prediction methods for computers and ISP devices."}
{"_id": "cfcac1ef666fc173eb80f93c9eec220da63c4b5e", "title": "Semantic Feature Selection for Text with Application to Phishing Email Detection", "text": "In a phishing attack, an unsuspecting victim is lured, typically via an email, to a web site designed to steal sensitive information such as bank/credit card account numbers, login information for accounts, etc. Each year Internet users lose billions of dollars to this scourge. In this paper, we present a general semantic feature selection method for text problems based on the statistical t-test and WordNet, and we show its effectiveness on phishing email detection by designing classifiers that combine semantics and statistics in analyzing the text in the email. Our feature selection method is general and useful for other applications involving text-based analysis as well. Our email body-text-only classifier achieves more than 95% accuracy on detecting phishing emails with a false positive rate of 2.24%. Due to its use of semantics, our feature selection method is robust against adaptive attacks and avoids the problem of frequent retraining needed by machine learning classifiers."}
{"_id": "951128a02e03a28358aabf1e6df053899ab118ab", "title": "Mining Parallel Corpora from Sina Weibo and Twitter", "text": "Microblogs such as Twitter, Facebook, and Sina Weibo (China's equivalent of Twitter) are a remarkable linguistic resource. In contrast to content from edited genres such as newswire, microblogs contain discussions of virtually every topic by numerous individuals in different languages and dialects and in different styles. In this work, we show that some microblog users post \u201cself-translated\u201d messages targeting audiences who speak different languages, either by writing the same message in multiple languages or by retweeting translations of their original posts in a second language. We introduce a method for finding and extracting this naturally occurring parallel data. Identifying the parallel content requires solving an alignment problem, and we give an optimally efficient dynamic programming algorithm for this. Using our method, we extract nearly 3M Chinese\u2013English parallel segments from Sina Weibo using a targeted crawl of Weibo users who post in multiple languages. Additionally, from a random sample of Twitter, we obtain substantial amounts of parallel data in multiple language pairs. Evaluation is performed by assessing the accuracy of our extraction approach relative to a manual annotation as well as in terms of utility as training data for a Chinese\u2013English machine translation system. Relative to traditional parallel data resources, the automatically extracted parallel data yield substantial translation quality improvements in translating microblog text and modest improvements in translating edited news content."}
{"_id": "abf9ee52b29f109f5dbf6423fbc0d898df802971", "title": "Synthetic aperture radar interferometry-Invited paper", "text": ""}
{"_id": "1d710f29ce139296956fe31d6e78ded479fb08bb", "title": "10-kV SiC MOSFET-Based Boost Converter", "text": "10-kV silicon carbide (SiC) MOSFETs are currently being developed by a number of organizations in the U.S. with prospective applications in high-voltage and high-frequency power-electronic systems. The aim of this paper is to demonstrate the high-frequency and high-temperature capability of 10-kV SiC MOSFETs in the application of a dc/dc boost converter. In this study, 10-kV SiC MOSFET and junction barrier Schottky (JBS) diode were characterized and modeled in SPICE. Following this, a dc/dc boost converter based on a 10-kV 10-A MOSFET and a 10-kV 5-A JBS diode was designed and tested under continuous operation for frequencies up to 25 kHz. The boost converter had an output voltage of 4 kV, an output power of 4 kW, and operated with a junction temperature of 174degC for the SiC MOSFET. The fast-switching speed, low losses, and high-temperature operation capability of 10-kV SiC MOSFETs demonstrated in the dc/dc boost converter make them attractive for high-frequency and high-voltage power-conversion applications."}
{"_id": "69bd231dfa8f4fb88090ce5b0f1d3701b765bf72", "title": "Analytical loss model of power MOSFET", "text": "An accurate analytical model is proposed in this paper to calculate the power loss of a metal-oxide semiconductor field-effect transistor. The nonlinearity of the capacitors of the devices and the parasitic inductance in the circuit, such as the source inductor shared by the power stage and driver loop, the drain inductor, etc., are considered in the model. In addition, the ringing is always observed in the switching power supply, which is ignored in the traditional loss model. In this paper, the ringing loss is analyzed in a simple way with a clear physical meaning. Based on this model, the circuit power loss could be accurately predicted. Experimental results are provided to verify the model. The simulation results match the experimental results very well, even at 2-MHz switching frequency."}
{"_id": "8a69693a73f96a64a4148893fb2b50b219176370", "title": "Design And Application Guide For High Speed MOSFET Gate Drive Circuits By", "text": "The main purpose of this paper is to demonstrate a systematic approach to design high performance gate drive circuits for high speed switching applications. It is an informative collection of topics offering a \u201cone-stop-shopping\u201d to solve the most common design challenges. Thus it should be of interest to power electronics engineers at all levels of experience. The most popular circuit solutions and their performance are analyzed, including the effect of parasitic components, transient and extreme operating conditions. The discussion builds from simple to more complex problems starting with an overview of MOSFET technology and switching operation. Design procedure for ground referenced and high side gate drive circuits, AC coupled and transformer isolated solutions are described in great details. A special chapter deals with the gate drive requirements of the MOSFETs in synchronous rectifier applications. Several, step-by-step numerical design examples complement the paper."}
{"_id": "aaa376cbd21095763093b6a629b41443ed3c0ed8", "title": "A High Efficiency Synchronous Buck VRM with Current Source Gate Driver", "text": "In this paper, a new current source gate drive circuit is proposed for high efficiency synchronous buck VRMs. The proposed circuit achieves quick turn-on and turn-off transition times to reduce switching loss and conduction loss in power MOSFETS. The driver circuit consists of two sets of four control switches and two very small inductors (typically 50 nH-300 nH each at 1 MHz). It drives both the control MOSFET and synchronous MOSFET in synchronous buck VRMs. An analysis, design procedure, optimization procedure and experimental results are presented for the proposed circuit. Experimental results demonstrate an efficiency of 86.6% at 15 A load and 81.9% at 30 A load for 12 V input and 1.3 V output at 1 MHz."}
{"_id": "b51134239fb5d52ca70c5f3aadf84d3ee62ee2d1", "title": "Tapped-inductor buck converter for high-step-down DC-DC conversion", "text": "The narrow duty cycle in the buck converter limits its application for high-step-down dc-dc conversion. With a simple structure, the tapped-inductor buck converter shows promise for extending the duty cycle. However, the leakage inductance causes a huge turn-off voltage spike across the top switch. Also, the gate drive for the top switch is not simple due to its floating source connection. This paper solves all these problems by modifying the tapped-inductor structure. A simple lossless clamp circuit can effectively clamp the switch turn-off voltage spike and totally recover the leakage energy. Experimental results for 12V-to-1.5V and 48V-to-6V dc-dc conversions show significant improvements in efficiency."}
{"_id": "44c18fc9582862185a96f7a79f63a788abc856bc", "title": "Robust and fast visual tracking via spatial kernel phase correlation filter", "text": "In this paper, we present a novel robust and fast object tracker called spatial kernel phase correlation based Tracker (SPC). Compared with classical correlation tracking which occupies all spectrums (including both phase spectrum and magnitude spectrum) in frequency domain, our SPC tracker only adopts the phase spectrum by implementing using phase correlation filter to estimate the object's translation. Thanks to circulant structure and kernel trick, we can implement dense sampling in order to train a high-quality phase correlation filter. Meanwhile, SPC learns the object's spatial context model by using new spatial response distribution, achieving superior performance. Given all these elaborate configurations, SPC is more robust to noise and cluster, and achieves more competitive performance in visual tracking. The framework of SPC can be briefly summarized as: firstly, phase correlation filter is well trained with all subwindows and is convoluted with a new image patch; then, the object's translation is calculated by maximizing spatial response; finally, to adapt to changing object, phase correlation filter is updated by reliable image patches. Tracking performance is evaluated by Peak-to-Sidelobe Ratio (PSR), aiming to resolve drifting problem by adaptive model updating. Owing to Fast Fourier Transform (FFT), the proposed tracker can track the object at about 50 frames/s. Numerical experiments demonstrate the proposed algorithm performs favorably against several state-of-the-art trackers in speed, accuracy and robustness. & 2016 Elsevier B.V. All rights reserved."}
{"_id": "66a7cfaca67cc69b6b08397a884e10ff374d710c", "title": "Menu-Match: Restaurant-Specific Food Logging from Images", "text": "Logging food and calorie intake has been shown to facilitate weight management. Unfortunately, current food logging methods are time-consuming and cumbersome, which limits their effectiveness. To address this limitation, we present an automated computer vision system for logging food and calorie intake using images. We focus on the \"restaurant\" scenario, which is often a challenging aspect of diet management. We introduce a key insight that addresses this problem specifically: restaurant plates are often both nutritionally and visually consistent across many servings. This insight provides a path to robust calorie estimation from a single RGB photograph: using a database of known food items together with restaurant-specific classifiers, calorie estimation can be achieved through identification followed by calorie lookup. As demonstrated on a challenging Menu-Match dataset and an existing third party dataset, our approach outperforms previous computer vision methods and a commercial calorie estimation app. Our Menu-Match dataset of realistic restaurant meals is made publicly available."}
{"_id": "a31e3b340f448fe0a276b659a951e39160a350dd", "title": "Modelling User Satisfaction with an Employee Portal", "text": "User satisfaction with general information system (IS) and certain types of information technology (IT) applications has been thoroughly studied in IS research. With the widespread and increasing use of portal technology, however, there is a need to conduct a user satisfaction study on portal use -in particular, the business-to-employee (b2e) portal. In this paper, we propose a conceptual model for determining b2e portal user satisfaction, which has been derived from an extensive literature review of user satisfaction scales and the b2e portal. Nine dimensions of b2e portal user satisfaction are identified and modeled: information content, ease of use, convenience of access, timeliness, efficiency, security, confidentiality, communication, and layout."}
{"_id": "7bb529166fac40451bfe0f52f31807231d4ebc8d", "title": "Indoor smartphone localization via fingerprint crowdsourcing: challenges and approaches", "text": "Nowadays, smartphones have become indispensable to everyone, with more and more built-in location-based applications to enrich our daily life. In the last decade, fingerprinting based on RSS has become a research focus in indoor localization, due to its minimum hardware requirement and satisfiable positioning accuracy. However, its time-consuming and labor-intensive site survey is a big hurdle for practical deployments. Fingerprint crowdsourcing has recently been promoted to relieve the burden of site survey by allowing common users to contribute to fingerprint collection in a participatory sensing manner. For its promising commitment, new challenges arise to practice fingerprint crowdsourcing. This article first identifies two main challenging issues, fingerprint annotation and device diversity, and then reviews the state of the art of fingerprint crowdsourcing-based indoor localization systems, comparing their approaches to cope with the two challenges. We then propose a new indoor subarea localization scheme via fingerprint crowdsourcing, clustering, and matching, which first constructs subarea fingerprints from crowdsourced RSS measurements and relates them to indoor layouts. We also propose a new online localization algorithm to deal with the device diversity issue. Our experiment results show that in a typical indoor scenario, the proposed scheme can achieve a 95 percent hit rate to correctly locate a smartphone in its subarea."}
{"_id": "2e054a07a2731da83081c7069f0950bb07ee7490", "title": "Optimizing Soft Real-Time Scheduling Performance for Virtual Machines with SRT-Xen", "text": "Multimedia applications are an important part of today's Internet. However, currently most virtualization solutions, including Xen, lack adequate support for soft real-time tasks. Soft real-time applications, e.g. media workloads, are impeded by components of virtualization, such as the increase of scheduling latency. This paper focuses on improving scheduling scheme to support soft real-time workloads in virtualization systems. In this paper, we present an enhanced scheduler SRT-Xen. SRT-Xen can promote the soft real-time domain's performance compared with Xen's existing scheduling. It focuses on not only bringing a new realtime-friendly scheduling framework with corresponding strategies but also improving the management of the virtual CPUs' queuing in order to implement a fair scheduling mechanism for both real-time and non-real-time tasks. Finally, we use PESQ (Perceptual Evaluation of Speech Quality) and other benchmarks to evaluate and compare SRT-Xen with some other works. The results show that SRT-Xen supports soft real-time domains well without penalizing non-real-time ones."}
{"_id": "a39f15d74a578692e65050381d318fecac27e2a4", "title": "Scalable Edge Computing for Low Latency Data Dissemination in Topic-Based Publish/Subscribe", "text": "Advances in Internet of Things (IoT) give rise to a variety of latency-sensitive, closed-loop applications that reside at the edge. These applications often involve a large number of sensors that generate volumes of data, which must be processed and disseminated in real-time to potentially a large number of entities for actuation, thereby forming a closed-loop, publish-process-subscribe system. To meet the response time requirements of such applications, this paper presents techniques to realize a scalable, fog/edge-based broker architecture that balances data publication and processing loads for topic-based, publish-process-subscribe systems operating at the edge, and assures the Quality-of-Service (QoS), specified as the 90th percentile latency, on a per-topic basis. The key contributions include: (a) a sensitivity analysis to understand the impact of features such as publishing rate, number of subscribers, per-sample processing interval and background load on a topic's performance; (b) a latency prediction model for a set of co-located topics, which is then used for the latency-aware placement of topics on brokers; and (c) an optimization problem formulation for k-topic co-location to minimize the number of brokers while meeting each topic's QoS requirement. Here, k denotes the maximum number of topics that can be placed on a broker. We show that the problem is NP-hard for k >=3 and present three load balancing heuristics. Empirical results are presented to validate the latency prediction model and to evaluate the performance of the proposed heuristics."}
{"_id": "c97774191be232678a45d343a25fcc0c96c065e7", "title": "Co-Training of Audio and Video Representations from Self-Supervised Temporal Synchronization", "text": "There is a natural correlation between the visual and auditive elements of a video. In this work, we use this correlation in order to learn strong and general features via cross-modal self-supervision with carefully chosen neural network architectures and calibrated curriculum learning. We suggest that this type of training is an effective way of pretraining models for further pursuits in video understanding, as they achieve on average 14.8% improvement over models trained from scratch. Furthermore, we demonstrate that these general features can be used for audio classification and perform on par with state-of-the-art results. Lastly, our work shows that using cross-modal self-supervision for pretraining is a good starting point for the development of multi-sensory models."}
{"_id": "4b733a188198dbff57cb8bd1ec996044fe272ce5", "title": "Machine Learning in Genomic Medicine: A Review of Computational Problems and Data Sets", "text": "In this paper, we provide an introduction to machine learning tasks that address important problems in genomic medicine. One of the goals of genomic medicine is to determine how variations in the DNA of individuals can affect the risk of different diseases, and to find causal explanations so that targeted therapies can be designed. Here we focus on how machine learning can help to model the relationship between DNA and the quantities of key molecules in the cell, with the premise that these quantities, which we refer to as cell variables, may be associated with disease risks. Modern biology allows high-throughput measurement of many such cell variables, including gene expression, splicing, and proteins binding to nucleic acids, which can all be treated as training targets for predictive models. With the growing availability of large-scale data sets and advanced computational techniques such as deep learning, researchers can help to usher in a new era of effective genomic medicine."}
{"_id": "0b0609478cf9882aac77af83e0c3e0b3838edbe8", "title": "Measuring epistemic curiosity and its diversive and specific components.", "text": "A questionnaire constructed to assess epistemic curiosity (EC) and perceptual curiosity (PC) curiosity was administered to 739 undergraduates (546 women, 193 men) ranging in age from 18 to 65. The study participants also responded to the trait anxiety, anger, depression, and curiosity scales of the State-Trait Personality Inventory (STPI; Spielberger et al., 1979) and selected subscales of the Sensation Seeking (SSS; Zuckerman, Kolin, Price, & Zoob, 1964) and Novelty Experiencing (NES; Pearson, 1970) scales. Factor analyses of the curiosity items with oblique rotation identified EC and PC factors with clear simple structure. Subsequent analyses of the EC items provided the basis for developing an EC scale, with Diversive and Specific Curiosity subscales. Moderately high correlations of the EC scale and subscales with other measures of curiosity provided strong evidence of convergent validity. Divergent validity was demonstrated by minimal correlations with trait anxiety and the sensation-seeking measures, and essentially zero correlations with the STPI trait anger and depression scales. Male participants had significantly higher scores on the EC scale and the NES External Cognition subscale (effect sizes of r =.16 and.21, respectively), indicating that they were more interested than female participants in solving problems and discovering how things work. Male participants also scored significantly higher than female participants on the SSS Thrill-and-Adventure and NES External Sensation subscales (r =.14 and.22, respectively), suggesting that they were more likely to engage in sensation-seeking activities."}
{"_id": "9659f4455ec2779400b0ad761b6997c6271da651", "title": "Gait planning and behavior-based control for statically-stable walking robots", "text": "A substantial portion of the Earth is inaccessible to any sort of wheeled mechanism\u2014 natural obstacles like large rocks, loose soil, deep ravines, and steep slopes conspire to render rolling locomotion ineffective. Hills, mountains, shores, seabeds, as well as the moon and other planets present similar terrain challenges. In many of these natural terrains, legs are well-suited. They can avoid small obstacles by making discrete contacts and passing up undesirable footholds. Legged mechanisms can climb over obstacles and step across ditches, surmounting terrain discontinuities of body-scale while staying level and stable. To achieve their potential, legged robots must coordinate their leg motions to climb over, step across and walk in natural terrain. These coordinated motions, which support and propel the robot, are called a gait. This thesis develops a new method of gait planning and control that enables statically-stable walking robots to produce a gait that is robust and productive in natural terrain. Independent task-achieving processes, called gait behaviors, establish a nominal gait, adapt it to the terrain, and react to disturbances like bumps and slips. Gait controlled in this way enabled the robot Dante II to walk autonomously in natural terrain, including the volcanic crater of Mount Spurr. This method extends to other walking robots as demonstrated by a generalized hexapod that performs the variety of gaits seen in six-legged insects, as well as aperiodic free gaits. The ability to change gait patterns on-the-fly with continuous, stable motion is a new development that enables robots to behave more like animals in adapting their gait to terrain. Finally, this thesis describes why walking robots need predictive plans as well as reflexive behaviors to walk effectively in the real world. It presents a method of guiding the behavior of a walking robot by planning distinct attributes of the desired gait. This partitioning of gait planning avoids the complexity of high degree-of-freedom motion planning. The ability to plan and foresee changes in gait improves performance while maintaining robust safety and stability. Abstract ii iii Acknowledgment"}
{"_id": "3f4d50cfb6cf6ad39600998ed295494a1f4f156b", "title": "Developing a Taxonomy of Dark Triad Triggers at Work \u2013 A Grounded Theory Study Protocol", "text": "In past years, research and corporate scandals have evidenced the destructive effects of the dark triad at work, consisting of narcissism (extreme self-centeredness), psychopathy (lack of empathy and remorse) and Machiavellianism (a sense of duplicity and manipulativeness). The dark triad dimensions have typically been conceptualized as stable personality traits, ignoring the accumulating evidence that momentary personality expressions - personality states - may change due to the characteristics of the situation. The present research protocol describes a qualitative study that aims to identify triggers of dark triad states at work by following a grounded theory approach using semi-structured interviews. By building a comprehensive categorization of dark triad triggers at work scholars may study these triggers in a parsimonious and structured way and organizations may derive more effective interventions to buffer or prevent the detrimental effects of dark personality at work."}
{"_id": "1e18291a151806c75518d7466d43264cac96864b", "title": "Web Engineering: A New Discipline for Development of Web-Based Systems", "text": "In most cases, development of Web-based systems has been ad hoc, lacking systematic approach and quality control and assurance procedures. Hence, there is now legitimate and growing concern about the manner in which Web-based systems are developed and their long-term quality and integrity. Web Engineering, an emerging new discipline, advocates a process and a systematic approach to development of high quality Web-based systems. It promotes the establishment and use of sound scientific, engineering and management principles, and disciplined and systematic approaches to development, deployment and maintenance of Web-based systems. This paper gives an introductory overview on Web Engineering. It presents the principles and roles of Web Engineering, assesses the similarities and differences between development of traditional software and Web-based systems, identifies key Web engineering activities and reviews some of the ongoing work in this area. It also highlights the prospects of Web engineering and the areas that need further study."}
{"_id": "6f9f4312876fb26175837c829ff5eb0b4fab6089", "title": "Negotiation and Cooperation in Multi-Agent Environments", "text": "Automated intelligent agents inhabiting a shared environment must coordinate their activities Cooperation not merely coordination may improve the performance of the individual agents or the overall behavior of the system they form Research in Distributed Arti cial Intelligence DAI addresses the problem of designing automated intelligent systems which interact e ectively DAI is not the only eld to take on the challenge of understanding cooperation and coordination There are a variety of other multi entity environments in which the entities coordinate their activity and cooperate Among them are groups of people animals particles and computers We argue that in order to address the challenge of building coordinated and collaborated intelligent agents it is bene cial to combine AI techniques with methods and techniques from a range of multi entity elds such as game theory operations research physics and philosophy To support this claim we describe some of our projects where we have successfully taken an interdisciplinary approach We demonstrate the bene ts in applying multi entity methodologies and show the adaptations modi cations and extensions necessary for solving the DAI problems This is an extended version of a lecture presented upon receipt of the Computers and Thought Award at the th International Joint Conference on Arti cial Intelligence in Montreal Canada August"}
{"_id": "aef8cbfbf08ed04ba242d15f515d610d315f9904", "title": "Wikipedia Chemical Structure Explorer: substructure and similarity searching of molecules from Wikipedia", "text": "BACKGROUND\nWikipedia, the world's largest and most popular encyclopedia is an indispensable source of chemistry information. It contains among others also entries for over 15,000 chemicals including metabolites, drugs, agrochemicals and industrial chemicals. To provide an easy access to this wealth of information we decided to develop a substructure and similarity search tool for chemical structures referenced in Wikipedia.\n\n\nRESULTS\nWe extracted chemical structures from entries in Wikipedia and implemented a web system allowing structure and similarity searching on these data. The whole search as well as visualization system is written in JavaScript and therefore can run locally within a web page and does not require a central server. The Wikipedia Chemical Structure Explorer is accessible on-line at www.cheminfo.org/wikipedia and is available also as an open source project from GitHub for local installation.\n\n\nCONCLUSIONS\nThe web-based Wikipedia Chemical Structure Explorer provides a useful resource for research as well as for chemical education enabling both researchers and students easy and user friendly chemistry searching and identification of relevant information in Wikipedia. The tool can also help to improve quality of chemical entries in Wikipedia by providing potential contributors regularly updated list of entries with problematic structures. And last but not least this search system is a nice example of how the modern web technology can be applied in the field of cheminformatics. Graphical abstractWikipedia Chemical Structure Explorer allows substructure and similarity searches on molecules referenced in Wikipedia."}
{"_id": "75ff20c21d4ab56917286b429643db4c216f51b5", "title": "Word Embeddings and Their Use In Sentence Classification Tasks", "text": "This paper has two parts. In the first part we discuss word embeddings. We discuss the need for them, some of the methods to create them, and some of their interesting properties. We also compare them to image embeddings and see how word embedding and image embedding can be combined to perform different tasks. In the second part we implement a convolutional neural network trained on top of pre-trained word vectors. The network is used for several sentence-level classification tasks, and achieves state-of-art (or comparable) results, demonstrating the great power of pre-trainted word embeddings over random ones."}
{"_id": "c4f78541eff05e539927d17ece67f239603b18a1", "title": "A critical review of blockchain and its current applications", "text": "Blockchain technology has been known as a digital currency platform since the emergence of Bitcoin, the first and the largest of the cryptocurrencies. Hitherto, it is used for the decentralization of markets more generally, not exclusively for the decentralization of money and payments. The decentralized transaction ledger of blockchain could be employed to register, confirm, and send all kinds of contracts to other parties in the network. In this paper, we thoroughly review state-of-the-art blockchain-related applications emerged in the literature. A number of published works were carefully included based on their contributions to the blockchain's body of knowledge. Several remarks are explored and discussed in the last section of the paper."}
{"_id": "cfb06ca51d03b7e625678d97d4661db69e2ee534", "title": "Hidden Voice Commands", "text": "Voice interfaces are becoming more ubiquitous and are now the primary input method for many devices. We explore in this paper how they can be attacked with hidden voice commands that are unintelligible to human listeners but which are interpreted as commands by devices. We evaluate these attacks under two different threat models. In the black-box model, an attacker uses the speech recognition system as an opaque oracle. We show that the adversary can produce difficult to understand commands that are effective against existing systems in the black-box model. Under the white-box model, the attacker has full knowledge of the internals of the speech recognition system and uses it to create attack commands that we demonstrate through user testing are not understandable by humans. We then evaluate several defenses, including notifying the user when a voice command is accepted; a verbal challenge-response protocol; and a machine learning approach that can detect our attacks with 99.8% accuracy."}
{"_id": "65658c28dfdec0268b4f46c16bb1973581b2ad95", "title": "Supply Chain Sourcing Under Asymmetric Information", "text": "We study a supply chain with two suppliers competing over a contract to supply components to a manufacturer. One of the suppliers is a big company for whom the manufacturer\u2019s business constitutes a small part of his business. The other supplier is a small company for whom the manufacturer\u2019s business constitutes a large portion of his business. We analyze the problem from the perspective of the big supplier and address the following questions: What is the optimal contracting strategy that the big supplier should follow? How does the information about the small supplier\u2019s production cost affect the profits and contracting decision? How does the existence of the small supplier affect profits? By studying various information scenarios regarding the small supplier\u2019s and the manufacturer\u2019s production cost, we show, for example, that the big supplier benefits when the small supplier keeps its production cost private. We quantify the value of information for the big supplier and the manufacturer. We also quantify the cost (value) of the alternative-sourcing option for the big supplier (the manufacturer). We determine when an alternative-sourcing option has more impact on profits than information. We conclude with extensions and numerical examples to shed light on how system parameters affect this supply chain."}
{"_id": "f183f06d55a149a74e0d10b8dd8253383f7c9c7b", "title": "A Dependency Graph Approach for Fault Detection and Localization Towards Secure Smart Grid", "text": "Fault diagnosis in power grids is known to be challenging, due to the massive scale and spatial coupling therein. In this study, we explore multiscale network inference for fault detection and localization. Specifically, we model the phasor angles across the buses as a Markov random field (MRF), where the conditional correlation coefficients of the MRF are quantified in terms of the physical parameters of power systems. Based on the MRF model, we then study decentralized network inference for fault diagnosis, through change detection and localization in the conditional correlation matrix of the MRF. Particularly, based on the hierarchical topology of practical power systems, we devise a multiscale network inference algorithm that carries out fault detection and localization in a decentralized manner. Simulation results are used to demonstrate the effectiveness of the proposed approach."}
{"_id": "0848827ba30956e29d7d126d0a05e51660094ebe", "title": "Secure and reconfigurable network design for critical information dissemination in the Internet of battlefield things (IoBT)", "text": "The Internet of things (IoT) is revolutionizing the management and control of automated systems leading to a paradigm shift in areas such as smart homes, smart cities, health care, transportation, etc. The IoT technology is also envisioned to play an important role in improving the effectiveness of military operations in battlefields. The interconnection of combat equipment and other battlefield resources for coordinated automated decisions is referred to as the Internet of battlefield things (IoBT). IoBT networks are significantly different from traditional IoT networks due to the battlefield specific challenges such as the absence of communication infrastructure, and the susceptibility of devices to cyber and physical attacks. The combat efficiency and coordinated decision-making in war scenarios depends highly on real-time data collection, which in turn relies on the connectivity of the network and the information dissemination in the presence of adversaries. This work aims to build the theoretical foundations of designing secure and reconfigurable IoBT networks. Leveraging the theories of stochastic geometry and mathematical epidemiology, we develop an integrated framework to study the communication of mission-critical data among different types of network devices and consequently design the network in a cost effective manner."}
{"_id": "82bd131de322e70b8211b18718f58a4c3a5e3ebe", "title": "Multi-task learning of dish detection and calorie estimation", "text": "In recent years, a rise in healthy eating has led to various food management applications, which have image recognition to automatically record meals. However, most image recognition functions in existing applications are not directly useful for multiple-dish food photos and cannot automatically estimate food calories. Meanwhile, methodologies on image recognition have advanced greatly because of the advent of Convolutional Neural Network, which has improved accuracies of various kinds of image recognition tasks, such as classification and object detection. Therefore, we propose CNN-based food calorie estimation for multiple-dish food photos. Our method estimates food calories while simultaneously detecting dishes by multi-task learning of food calorie estimation and food dish detection with a single CNN. It is expected to achieve high speed and save memory by simultaneous estimation in a single network. Currently, there is no dataset of multiple-dish food photos annotated with both bounding boxes and food calories, so in this work, we use two types of datasets alternately for training a single CNN. For the two types of datasets, we use multiple-dish food photos with bounding-boxes attached and single-dish food photos with food calories. Our results show that our multi-task method achieved higher speed and a smaller network size than a sequential model of food detection and food calorie estimation."}
{"_id": "884b6944cdf806d11147a8254f16180050133377", "title": "Chernoff-Hoeffding Inequality and Applications", "text": "When dealing with modern big data sets, a very common theme is reducing the set through a random process. These generally work by making \u201cmany simple estimates\u201d of the full data set, and then judging them as a whole. Perhaps magically, these \u201cmany simple estimates\u201d can provide a very accurate and small representation of the large data set. The key tool in showing how many of these simple estimates are needed for a fixed accuracy trade-off is the Chernoff-Hoeffding inequality [2, 6]. This document provides a simple form of this bound, and two examples of its use."}
{"_id": "0eb5b71df161fcf77024bdb4608337eedc874b98", "title": "The Unbearable Automaticity of Being", "text": "What was noted by E. J. hanger (1978) remains true today: that much of contemporary psychological research is based on the assumption that people are consciously and systematically processing incoming information in order to construe and interpret their world and to plan and engage in courses of action. As did E. J. hanger, the authors question this assumption. First, they review evidence that the ability to exercise such conscious, intentional control is actually quite limited, so that most of moment-to-moment psychological life must occur through nonconscious means if it is to occur at all. The authors then describe the different possible mechanisms that produce automatic, environmental control over these various phenomena and review evidence establishing both the existence of these mechanisms as well as their consequences for judgments, emotions, and behavior. Three major forms of automatic self-regulation are identified: an automatic effect of perception on action, automatic goal pursuit, and a continual automatic evaluation of one's experience. From the accumulating evidence, the authors conclude that these various nonconscious mental systems perform the lion's share of the self-regulatory burden, beneficently keeping the individual grounded in his or her current environment."}
{"_id": "6ae1dd01e89d54e18bae39039b09c8f57338c4e6", "title": "Intrinsically Motivated Learning of Hierarchical Collections of Skills", "text": "Humans and other animals often engage in activities for their own sakes rather than as steps toward solving practical problems. Psychologists call these intrinsically motivated behaviors. What we learn during intrinsically motivated behavior is essential for our development as competent autonomous entities able to efficiently solve a wide range of practical problems as they arise. In this paper we present initial results from a computational study of intrinsically motivated learning aimed at allowing artificial agents to construct and extend hierarchies of reusable skills that are needed for competent autonomy. At the core of the model are recent theoretical and algorithmic advances in computational reinforcement learning, specifically, new concepts related to skills and new learning algorithms for learning with skill hierarchies."}
{"_id": "3021e3f8b35aa34c6c14481e4bd9e0756cf130b3", "title": "An examination of the determinants of customer loyalty in mobile commerce contexts", "text": "While the importance of customer loyalty has been recognized in marketing literature for at least three decades, the development and empirical validation of a customer loyalty model in a mobile commerce (m-commerce) context had not been addressed. The purpose of our study was to develop and validate such a customer loyalty model. Based on IS and marketing literature, a comprehensive set of constructs and hypotheses were compiled with a methodology for testing them. A questionnaire was constructed and data were collected from 255 users of m-commerce systems in Taiwan. Structural modeling techniques were then applied to analyze the data. The results indicated that customer loyalty was affected by perceived value, trust, habit, and customer satisfaction, with customer satisfaction playing a crucial intervening role in the relationship of perceived value and trust to loyalty. Based on the findings, its implications and limitations are discussed. # 2005 Elsevier B.V. All rights reserved."}
{"_id": "70598a94915e4931679593db91d792d3d58b5037", "title": "Principles and practice in reporting structural equation analyses.", "text": "Principles for reporting analyses using structural equation modeling are reviewed, with the goal of supplying readers with complete and accurate information. It is recommended that every report give a detailed justification of the model used, along with plausible alternatives and an account of identifiability. Nonnormality and missing data problems should also be addressed. A complete set of parameters and their standard errors is desirable, and it will often be convenient to supply the correlation matrix and discrepancies, as well as goodness-of-fit indices, so that readers can exercise independent critical judgment. A survey of fairly representative studies compares recent practice with the principles of reporting recommended here."}
{"_id": "f38e9c51e2736b382938131f50757be414af4e35", "title": "154-2008: Understanding Your Customer: Segmentation Techniques for Gaining Customer Insight and Predicting Risk in the Telecom Industry", "text": "The explosion of customer data in the last twenty years has increased the need for data mining aimed at customer relationship management (CRM) and understanding the customer. It is well known that the telecom sector consists of customers with a wide array of customer behaviors. These customers pose different risks, making it imperative to implement different treatment strategies to maximize shareholder profit and improve revenue generation. Segmentation is the process of developing meaningful customer groups that are similar based on individual account characteristics and behaviors. The goal of segmentation is to know your customer better and to apply that knowledge to increase profitability, reduce operational cost, and enhance customer service. Segmentation can provide a multidimensional view of the customer for better treatment targeting. An improved understanding of customer risk and behaviors enables more effective portfolio management and the proactive application of targeted treatments to lift account profitability. In this paper we outline several segmentation techniques using SAS Enterprise MinerTM. INTRODUCTION Rapid advances in computer technology and an explosion of data collection systems over the last thirty years make it more critical for business to understand their customers. Companies employing data driven analytical strategies often enjoy a competitive advantage. Many organizations across several industries widely employ analytical models to gain a better understanding of their customers. They use these models to predict a wide array of events such as behavioral risk, fraud, or the likelihood of response. Regardless of the predictive variable, a single model may not perform optimally across the target population because there may be distinct segments with different characteristics inherent in the population. Segmentation may be done judgmentally based on experience, but such segmentation schema is limited to the use of only a few variables at best. True multivariate segmentation with the goal of identifying the different segments in your population is best achieved through the use of cluster analysis. Clustering and profiling of the customer base can answer the following questions: \u2666 Who are my customers? \u2666 How profitable are my customers? \u2666 Who are my least profitable customers? \u2666 Why are my customers leaving? \u2666 What do my best customers look like? This paper discusses the use of SAS Enterprise Miner to segment a population of customers using cluster analysis and decision trees. Focus is placed on the methodology rather than the results to ensure the integrity and confidentiality of customer data. Other statistical strategies are presented in this paper which could be employed in the pursuit of further customer intelligence. It has been said that statistical cluster analysis is as much art as it is science because it is performed without the benefit of well established statistical criteria. Data Mining and Predictive Modeling SAS Global Forum 2008"}
{"_id": "32e876a9420f7c58a3c55ec703416c7f57a54f4c", "title": "Causality : Models , Reasoning , and Inference", "text": "For most researchers in the ever growing fields of probabilistic graphical models, belief networks, causal influence and probabilistic inference, ACM Turing award winner Dr. Pearl and his seminary papers on causality are well-known and acknowledged. Representation and determination of Causality, the relationship between an event (the cause) and a second event (the effect), where the second event is understood as a consequence of the first, is a challenging problem. Over the years, Dr. pearl has written significantly on both Art and Science of Cause and Effect. In this book on \"Causality: Models, Reasoning and Inference\", the inventor of Bayesian belief networks discusses and elaborates on his earlier workings including but not limited to Reasoning with Cause and Effect, Causal inference in statistics, Simpson's paradox, Causal Diagrams for Empirical Research, Robustness of Causal Claims, Causes and explanations, and Probabilities of causation Bounds and identification."}
{"_id": "a04145f1ca06c61f5985ab22a2346b788f343392", "title": "Information Systems Success: The Quest for the Dependent Variable", "text": "A large number of studies have been conducted during the last decade and a half attempting to identify those factors that contribute to information systems success. However, the dependent variable in these studies\u2014I/S success \u2014has been an elusive one to define. Different researchers have addressed different aspects of success, making comparisons difficult and the prospect of building a cumulative tradition for I/S research similarly elusive. To organize this diverse research, as well as to present a more integrated view of the concept of I/S success, a comprehensive taxonomy is introduced. This taxonomy posits six major dimensions or categories of I/S success\u2014SYSTEM OUALITY, INFORMATION QUALITY, USE, USER SATISFACTION, INDIVIDUAL IMPACT, and ORGANIZATIONAL IMPACT. Using these dimensions, both conceptual and empirical studies are then reviewed (a total of 180 articles are cited) and organized according to the dimensions of the taxonomy. Finally, the many aspects of I/S success are drawn together into a descriptive model and its implications for future I/S research are discussed."}
{"_id": "989bbec65334a18201ec19b814b6ba84fbce8790", "title": "Just Feels Good : Customers \u2019 Affective Response to Touch and Its Influence on Persuasion", "text": "Prior research has assumed that touch has a persuasive effect only if it provides attribute or structural information about a product. Under this view, the role of touch as a persuasive tool is limited. The main purpose of this research is to investigate the persuasive influence of touch as an affective tool in the absence of useful product-related information. The authors find that for people who are motivated to touch because it is fun or interesting, a communication that incorporates touch leads to increased affective response and increased persuasion, particularly when the touch provides neutral or positive sensory feedback. People who are not motivated to touch for fun will also be persuaded by a communication that incorporates touch when they are able to make sense of how the touch is related to the message. The authors explore the effectiveness of different types of touch in generating an affective response, and they replicate the effects on attitudes and behavior in a real-world setting. This research suggests that the marketing implications of touch are more substantial than previously believed. The authors present research implications for direct marketing, product packaging, point-of-purchase displays, and print advertising."}
{"_id": "8d79900092c807aa563ad8471908114166225e8d", "title": "Portfolio Choice and the Bayesian Kelly Criterion", "text": "We derive optimal gambling and investment policies for cases in which the underlying stochastic process has parameter values that are unobserved random variables. For the objective of maximizing logarithmic utility when the underlying stochastic process is a simple random walk in a random environment, we show that a state-dependent control is optimal, which is a generalization of the celebrated Kelly strategy: The optimal strategy is to bet a fraction of current wealth equal to a linear function of the posterior mean increment. To approximate more general stochastic processes, we consider a continuous-time analog involving Brownian motion. To analyze the continuous-time problem, we study the diffusion limit of random walks in a random environment. We prove that they converge weakly to a Kiefer process, or tied-down Brownian sheet. We then find conditions under which the discrete-time process converges to a diffusion, and analyze the resulting process. We analyze in detail the case of the natural conjugate prior, where the success probability has a beta distribution, and show that the resulting limiting diffusion can be viewed as a rescaled Brownian motion. These results allow explicit computation of the optimal control policies for the continuoustime gambling and investment problems without resorting to continuous-time stochastic-control procedures. Moreover they also allow an explicit quantitative evaluation of the financial value of randomness, the financial gain of perfect information and the financial cost of learning in the Bayesian problem."}
{"_id": "7a7db2ccf49a909921711d9e88dbfac66c776167", "title": "Vibration parameter estimation using FMCW radar", "text": "Vibration sensing is essential in many applications. Traditional vibration sensors are contact based. With the advance of low-cost and highly integrated CMOS radars, another class of non-contact vibration sensors is emerging. In this paper, we present detailed analysis on obtaining vibration parameters using frequency modulated continuous wave (FMCW) radars. We establish the Cramer Rao lower bounds (CRLB) of the parameter estimation problem and propose an estimation algorithm that achieves the bounds in simulations. These analysis show that vibration sensing using FMCW radars can easily achieve sub-Hertz frequency accuracy and micrometer level amplitude accuracy."}
{"_id": "9e832cceb07aa740c720fbdfae2f6b17c286bab8", "title": "Using Multi-Locators to Increase the Robustness of Web Test Cases", "text": "The main reason for the fragility of web test cases is the inability of web element locators to work correctly when the web page DOM evolves. Web elements locators are used in web test cases to identify all the GUI objects to operate upon and eventually to retrieve web page content that is compared against some oracle in order to decide whether the test case has passed or not. Hence, web element locators play an extremely important role in web testing and when a web element locator gets broken developers have to spend substantial time and effort to repair it. While algorithms exist to produce robust web element locators to be used in web test scripts, no algorithm is perfect and different algorithms are exposed to different fragilities when the software evolves. Based on such observation, we propose a new type of locator, named multi-locator, which selects the best locator among a candidate set of locators produced by different algorithms. Such selection is based on a voting procedure that assigns different voting weights to different locator generation algorithms. Experimental results obtained on six web applications, for which a subsequent release was available, show that the multi-locator is more robust than the single locators (about -30% of broken locators w.r.t. the most robust kind of single locator) and that the execution overhead required by the multiple queries done with different locators is negligible (2-3% at most)."}
{"_id": "e64d199cb8d4e053ffb0e28475df2fda140ba5de", "title": "lncDML: Identification of long non-coding RNAs by Deep Metric Learning", "text": "The next-generation sequencing technologies provide a great deal of transcripts for bioinformatics research. Specially, because of the regulation of long non-coding RNAs (lncRNAs) in various cellular processes, the research on IncRNAs is in full swing. And the solution of IncRNAs identification is the basis for the in-depth study of its functions. In this study, we present an approach to identify the IncRNAs from large scale transcripts, named IncDML which is completely different from previous identification methods. In our model, we extract signal to noise ratio (SNR) and k-mer from transcripts sequences as features. Firstly, we just use the SNR to cluster the original dataset to three parts. In this process, we achieve preliminary identification effect to some extent. Then abandoning traditional feature selection, we directly measure the relationship between each pair of samples by deep metric learning for each part of data. Finally, a novel classifier based on complex network is applied to achieve the final identification. The experiment results show that IncDML is a very effective method for identifying IncRNAs."}
{"_id": "26f0cb59a35a6c83e0375ab19ea710e741f907ad", "title": "ERP system implementation in SMEs: exploring the influences of the SME context", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the \u201cContent\u201d) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content."}
{"_id": "f063d23d23ac325a0bc3979f037a197948fa04f3", "title": "Job Resources Boost Work Engagement , Particularly When Job Demands Are High", "text": "This study of 805 Finnish teachers working in elementary, secondary, and vocational schools tested 2 interaction hypotheses. On the basis of the job demands\u2013resources model, the authors predicted that job resources act as buffers and diminish the negative relationship between pupil misbehavior and work engagement. In addition, using conservation of resources theory, the authors hypothesized that job resources particularly influence work engagement when teachers are confronted with high levels of pupil misconduct. In line with these hypotheses, moderated structural equation modeling analyses resulted in 14 out of 18 possible 2-way interaction effects. In particular, supervisor support, innovativeness, appreciation, and organizational climate were important job resources that helped teachers cope with demanding interactions with students."}
{"_id": "4729943ac1bb2eaf39d3d734d94b21b55f0920ec", "title": "Superior thermal conductivity of single-layer graphene.", "text": "We report the measurement of the thermal conductivity of a suspended single-layer graphene. The room temperature values of the thermal conductivity in the range approximately (4.84+/-0.44)x10(3) to (5.30+/-0.48)x10(3) W/mK were extracted for a single-layer graphene from the dependence of the Raman G peak frequency on the excitation laser power and independently measured G peak temperature coefficient. The extremely high value of the thermal conductivity suggests that graphene can outperform carbon nanotubes in heat conduction. The superb thermal conduction property of graphene is beneficial for the proposed electronic applications and establishes graphene as an excellent material for thermal management."}
{"_id": "f673d75df44a3126898634cb96344d5fd31b3504", "title": "Asymmetric Cryptography for Mobile Devices", "text": "This paper is meant to give the reader a general overview about the application of asymmetric cryptography in communication, particular in mobile devices. The basics principles of a cryptosystem are addressed, as well as the idea of symmetric and asymmetric cryptography. The functional principles of RSA encryption and the DiffieHellman key exchange scheme, as well as the general idea of digital signatures are shortly described. Furthermore some challenges and solution approaches for the application of asymmetric encryption in low power mobile devices are named. On completing, some the future developments in cryptography are shortly described."}
{"_id": "497ce53f8f12f2117d36b5a61b3fc142f0cb05ee", "title": "Interpreting Semantic Relations in Noun Compounds via Verb Semantics", "text": "We propose a novel method for automatically interpreting compound nouns based on a predefined set of semantic relations. First we map verb tokens in sentential contexts to a fixed set of seed verbs using WordNet::Similarity and Moby\u2019s Thesaurus. We then match the sentences with semantic relations based on the semantics of the seed verbs and grammatical roles of the head noun and modifier. Based on the semantics of the matched sentences, we then build a classifier using TiMBL. The performance of our final system at interpreting NCs is 52.6%."}
{"_id": "42ce0afbe27913f3da97d4d08730b2fcaf15e18d", "title": "An Empirical Study on Android-Related Vulnerabilities", "text": "Mobile devices are used more and more in everyday life. They are our cameras, wallets, and keys. Basically, they embed most of our private information in our pocket. For this and other reasons, mobile devices, and in particular the software that runs on them, are considered first-class citizens in the software-vulnerabilities landscape. Several studies investigated the software-vulnerabilities phenomenon in the context of mobile apps and, more in general, mobile devices. Most of these studies focused on vulnerabilities that could affect mobile apps, while just few investigated vulnerabilities affecting the underlying platform on which mobile apps run: the Operating System (OS). Also, these studies have been run on a very limited set of vulnerabilities. In this paper we present the largest study at date investigating Android-related vulnerabilities, with a specific focus on the ones affecting the Android OS. In particular, we (i) define a detailed taxonomy of the types of Android-related vulnerability, (ii) investigate the layers and subsystems from the Android OS affected by vulnerabilities, and (iii) study the survivability of vulnerabilities (i.e., the number of days between the vulnerability introduction and its fixing). Our findings could help OS and apps developers in focusing their verification & validation activities, and researchers in building vulnerability detection tools tailored for the mobile world."}
{"_id": "74289572067a8ba3dbe1abf84d4a352b8bb4740f", "title": "Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness", "text": "We introduce a new family of fairness definitions that interpolate between statistical and individual notions of fairness, obtaining some of the best properties of each. We show that checking whether these notions are satisfied is computationally hard in the worst case, but give practical oracle-efficient algorithms for learning subject to these constraints, and confirm our findings with experiments."}
{"_id": "0f161ac594fe478e4192b2c2a8a5fed7d0d1837e", "title": "A Simple Introduction to Maximum Entropy Models for Natural Language Processing", "text": "Many problems in natural language processing can be viewed as lin guistic classi cation problems in which linguistic contexts are used to pre dict linguistic classes Maximum entropy models o er a clean way to com bine diverse pieces of contextual evidence in order to estimate the proba bility of a certain linguistic class occurring with a certain linguistic con text This report demonstrates the use of a particular maximum entropy model on an example problem and then proves some relevant mathemat ical facts about the model in a simple and accessible manner This report also describes an existing procedure called Generalized Iterative Scaling which estimates the parameters of this particular model The goal of this report is to provide enough detail to re implement the maximum entropy models described in Ratnaparkhi Reynar and Ratnaparkhi Ratnaparkhi and also to provide a simple explanation of the max imum entropy formalism Introduction Many problems in natural language processing NLP can be re formulated as statistical classi cation problems in which the task is to estimate the probability of class a occurring with context b or p a b Contexts in NLP tasks usually include words and the exact context depends on the nature of the task for some tasks the context b may consist of just a single word while for others b may consist of several words and their associated syntactic labels Large text corpora usually contain some information about the cooccurrence of a s and b s but never enough to completely specify p a b for all possible a b pairs since the words in b are typically sparse The problem is then to nd a method for using the sparse evidence about the a s and b s to reliably estimate a probability model p a b Consider the Principle of Maximum Entropy Jaynes Good which states that the correct distribution p a b is that which maximizes en tropy or uncertainty subject to the constraints which represent evidence i e the facts known to the experimenter Jaynes discusses its advan tages in making inferences on the basis of partial information we must use that probability distribution which has maximum entropy sub ject to whatever is known This is the only unbiased assignment we can make to use any other would amount to arbitrary assumption of information which by hypothesis we do not have More explicitly if A denotes the set of possible classes and B denotes the set of possible contexts p should maximize the entropy H p X"}
{"_id": "78a11b7d2d7e1b19d92d2afd51bd3624eca86c3c", "title": "Improved Deep Metric Learning with Multi-class N-pair Loss Objective", "text": "Deep metric learning has gained much popularity in recent years, following the success of deep learning. However, existing frameworks of deep metric learning based on contrastive loss and triplet loss often suffer from slow convergence, partially because they employ only one negative example while not interacting with the other negative classes in each update. In this paper, we propose to address this problem with a new metric learning objective called multi-class N -pair loss. The proposed objective function firstly generalizes triplet loss by allowing joint comparison among more than one negative examples \u2013 more specifically, N -1 negative examples \u2013 and secondly reduces the computational burden of evaluating deep embedding vectors via an efficient batch construction strategy using only N pairs of examples, instead of (N+1)\u00d7N . We demonstrate the superiority of our proposed loss to the triplet loss as well as other competing loss functions for a variety of tasks on several visual recognition benchmark, including fine-grained object recognition and verification, image clustering and retrieval, and face verification and identification."}
{"_id": "85f9204814b3dd7d2c9207b25297f90d463d4810", "title": "A second order accurate projection method for the incompressible Navier-Stokes equations on non-graded adaptive grids", "text": "We present an unconditionally stable second order accurate projection method for the incompressible Navier\u2013Stokes equations on non-graded adaptive Cartesian grids. We employ quadtree and octree data structures as an efficient means to represent the grid. We use the supra-convergent Poisson solver of [C.-H. Min, F. Gibou, H. Ceniceros, A supra-convergent finite difference scheme for the variable coefficient Poisson equation on fully adaptive grids, CAM report 05-29, J. Comput. Phys. (in press)], a second order accurate semi-Lagrangian method to update the momentum equation, an unconditionally stable backward difference scheme to treat the diffusion term and a new method that guarantees the stability of the projection step on highly non-graded grids. We sample all the variables at the grid nodes, producing a scheme that is straightforward to implement. We propose two and three-dimensional examples to demonstrate second order accuracy for the velocity field and the divergence free condition in the L and L1 norms. 2006 Elsevier Inc. All rights reserved."}
{"_id": "7fe85d798f7c5ef9be79426ac9878b78a0cd0e83", "title": "A Monte Carlo Model of Light Propagation in Tissue", "text": "The Monte Carlo method is rapidly becoming the model of choice for simulating light transport in tissue. This paper provides all the details necessary for implementation of a Monte Carlo program. Variance reduction schemes that improve the efficiency of the Monte Carlo method are discussed. Analytic expressions facilitating convolution calculations for finite flat and Gaussian beams are included. Useful validation benchmarks are presented."}
{"_id": "ea573d7a9bb99d769427e83b196c0122a39cafc9", "title": "A design and development of an intelligent jammer and jamming detection methodologies using machine learning approach", "text": "Nowadays, the utilization of mobile phones has increased rapidly. Due to this evolution schools, mosques, prisons, etc., require silence and security. This is achieved by using the mobile phone jammers. An intelligent mobile jammer is designed for allowing only the emergency calls by utilizing the microcontroller. Here, the jammer utilizes the successive approximation for reducing the transmission power. However, it requires few modifications. Therefore in this paper, an improved successive approximation based on divide and conquer algorithm is proposed in order to design the improved intelligent mobile jammer and reduce transmission power. Subsequently, the proposed jammer is analysed based on the different scenarios and frequency bands to illustrate their performance effectiveness. Furthermore, the normal activities are distinguished from jamming by using machine learning-based detection system according to the various parameters. Finally, the proposed algorithm is compared with conventional algorithms to demonstrate its performance efficiency in terms of detection accuracy."}
{"_id": "23b7d6a9fce5732ca5c5e11a3f42e17860ef05ad", "title": "Compressing Recurrent Neural Networks with Tensor Ring for Action Recognition", "text": "Recurrent Neural Networks (RNNs) and their variants, such as Long-Short Term Memory (LSTM) networks, and Gated Recurrent Unit (GRU) networks, have achieved promising performance in sequential data modeling. The hidden layers in RNNs can be regarded as the memory units, which are helpful in storing information in sequential contexts. However, when dealing with high dimensional input data, such as video and text, the input-to-hidden linear transformation in RNNs brings high memory usage and huge computational cost. This makes the training of RNNs very difficult. To address this challenge, we propose a novel compact LSTM model, named as TR-LSTM, by utilizing the low-rank tensor ring decomposition (TRD) to reformulate the input-to-hidden transformation. Compared with other tensor decomposition methods, TRLSTM is more stable. In addition, TR-LSTM can complete an end-to-end training and also provide a fundamental building block for RNNs in handling large input data. Experiments on real-world action recognition datasets have demonstrated the promising performance of the proposed TR-LSTM compared with the tensortrain LSTM and other state-of-the-art competitors."}
{"_id": "a99f1f749481e44abab0ba9a8b7c1d3572a2e465", "title": "Quo Vadis , Atlas-Based Segmentation?", "text": ""}
{"_id": "36091ff6b5d5a53d9641f5c3388b8c31b9ad4b49", "title": "Temporal Modular Networks for Retrieving Complex Compositional Activities in Videos", "text": "A major challenge in computer vision is scaling activity understanding to the long tail of complex activities without requiring collecting large quantities of data for new actions. The task of video retrieval using natural language descriptions seeks to address this through rich, unconstrained supervision about complex activities. However, while this formulation offers hope of leveraging underlying compositional structure in activity descriptions, existing approaches typically do not explicitly model compositional reasoning. In this work, we introduce an approach for explicitly and dynamically reasoning about compositional natural language descriptions of activity in videos. We take a modular neural network approach that, given a natural language query, extracts the semantic structure to assemble a compositional neural network layout and corresponding network modules. We show that this approach is able to achieve state-of-the-art results on the DiDeMo video retrieval dataset."}
{"_id": "b3d7371522a7a68137df2cb005ca9683f3436bd7", "title": "Multivalued logics: a uniform approach to reasoning in artificial intelligence", "text": "This paper describes a uniform formalization of much of the current work in artificial intelligence on inference systems. We show that many of these systems, including first-order theorem provers, assumption-based truth maintenance systems (ATMSS), and unimplemented formal systems such as default logic or circumscription, can be subsumed under a single general framework. We begin by defining this framework, which is based on a mathematical structure known as a bilattice. We present a formal definition of inference using this structure and show that this definition generalizes work involving ATMSS and some simple nonmonotonic logics. Following the theoretical description, we describe a constructive approach to inference in this setting; the resulting generalization of both conventional inference and ATMSS is achieved without incurring any substantial computational overhead. We show that our approach can also be used to implement a default reasoner, and discuss a combination of default and ATMS methods that enables us to formally describe an \u201cincremental\u201d default reasoning system. This incremental system does not need to perform consistency checks before drawing tentative conclusions, but can instead adjust its beliefs when a default premise or conclusion is overturned in the face of convincing contradictory evidence. The system is therefore much more computationally viable than earlier approaches. Finally, we discuss the implementation of our ideas. We begin by considering general issues that need to be addressed when implementing a multivalued approach such as that we are proposing, and then turn to specific examples showing the results of an existing implementation. This single implementation is used to solve a digital simulation task using first-order logic, a diagnostic task using ATMSS as suggested by de Kleer and Williams, a problem in default reasoning as in Reiter\u2019s default logic or McCarthy\u2019s circumscription, and to solve the same problem more efficiently by combining default methods with justification information. All of these applications use the same general-purpose bilattice theorem prover and differ only in the choice of bilattice being considered."}
{"_id": "13cb5d5f0b04de165ef47b5117fc3a4b74d12b89", "title": "Automics: souvenir generating photoware for theme parks", "text": "Automics is a photo-souvenir service which utilises mobile devices to support the capture, sharing and annotation of digital images amongst groups of visitors to theme parks. The prototype service mixes individual and group photo-capture with existing in-park, on-ride photo services, to allow users to create printed photo-stories. Herein we discuss initial fieldwork in theme parks that grounded the design of Automics, our development of the service prototype, and its real-world evaluation with theme park visitors. We relate our findings on user experience of the service to a literature on mobile photoware, finding implications for the design of souvenir services."}
{"_id": "5704912d313452373d7ce329253ef398c0e4d6de", "title": "A DIRT-T A PPROACH TO U NSUPERVISED D OMAIN", "text": "Domain adaptation refers to the problem of leveraging labeled data in a source domain to learn an accurate model in a target domain where labels are scarce or unavailable. A recent approach for finding a common representation of the two domains is via domain adversarial training (Ganin & Lempitsky, 2015), which attempts to induce a feature extractor that matches the source and target feature distributions in some feature space. However, domain adversarial training faces two critical limitations: 1) if the feature extraction function has high-capacity, then feature distribution matching is a weak constraint, 2) in non-conservative domain adaptation (where no single classifier can perform well in both the source and target domains), training the model to do well on the source domain hurts performance on the target domain. In this paper, we address these issues through the lens of the cluster assumption, i.e., decision boundaries should not cross high-density data regions. We propose two novel and related models: 1) the Virtual Adversarial Domain Adaptation (VADA) model, which combines domain adversarial training with a penalty term that punishes violation of the cluster assumption; 2) the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T)1 model, which takes the VADA model as initialization and employs natural gradient steps to further minimize the cluster assumption violation. Extensive empirical results demonstrate that the combination of these two models significantly improve the state-of-the-art performance on the digit, traffic sign, and Wi-Fi recognition domain adaptation benchmarks."}
{"_id": "8658dbfb4bc0f8474a513adf0b51b1cfc2419a02", "title": "SCRUM: An extension pattern language for hyper productive software development", "text": "The patterns of the SCRUM development method are presented as an extension pattern language to the existing organizational pattern languages. In the last few years, the SCRUM development method has rapidly gained recognition as an effective tool to hyper-productive software development. However, when SCRUM patterns are combined with other existing organizational patterns, they lead to highly adaptive, yet well-structured software development organizations. Also, decomposing SCRUM into patterns can guide adoption of only those parts of SCRUM that are applicable to a specific situation."}
{"_id": "4fbf7c49ae0fddd9fdb4c94d381d36afd2ab4637", "title": "Goal-based trajectory analysis for unusual behaviour detection in intelligent surveillance", "text": "In a typical surveillance installation, a human operator has to constantly monitor a large array of video feeds for suspicious behaviour. As the number of cameras increases, information overload makes manual surveillance increasingly difficult, adding to other confounding factors such as human fatigue and boredom. The objective of an intelligent vision-based surveillance system is to automate the monitoring and event detection components of surveillance, alerting the operator only when unusual behaviour or other events of interest are detected. While most traditional methods for trajectory-based unusual behaviour detection rely on low-level trajectory features such as flow vectors or control points, this paper builds upon a recently introduced approach that makes use of higher-level features of intentionality. Individuals in the scene are modelled as intentional agents, and unusual behaviour is detected by evaluating the explicability of the agent\u2019s trajectory with respect to known spatial goals. The proposed method extends the original goal-based approach in three ways: first, the spatial scene structure is learned in a training phase; second, a region transition model is learned to describe normal movement patterns between spatial regions; and third, classification of trajectories in progress is performed in a probabilistic framework using particle filtering. Experimental validation on three published third-party datasets demonstrates the validity of the proposed approach."}
{"_id": "abe0bd94e134c7ac0b1c78922ed17cf3bec08d5e", "title": "A stochastic model of human-machine interaction for learning dialog strategies", "text": "In this paper, we propose a quantitative model for dialog systems that can be used for learning the dialog strategy. We claim that the problem of dialog design can be formalized as an optimization problem with an objective function reflecting different dialog dimensions relevant for a given application. We also show that any dialog system can be formally described as a sequential decision process in terms of its state space, action set, and strategy. With additional assumptions about the state transition probabilities and cost assignment, a dialog system can be mapped to a stochastic model known as Markov decision process(MDP). A variety of data driven algorithms for finding the optimal strategy (i.e., the one that optimizes the criterion) is available within the MDP framework, based on reinforcement learning. For an effective use of the available training data we propose a combination of supervised and reinforcement learning: the supervised learning is used to estimate a model of the user, i.e., the MDP parameters that quantify the user\u2019s behavior. Then a reinforcement learning algorithm is used to estimate the optimal strategy while the system interacts with the simulated user. This approach is tested for learning the strategy in an air travel information system (ATIS) task. The experimental results we present in this paper show that it is indeed possible to find a simple criterion, a state space representation, and a simulated user parameterization in order to automatically learn a relatively complex dialog behavior, similar to one that was heuristically designed by several research groups."}
{"_id": "22ca497e24466737981f9ca1690d6c712b7e1276", "title": "Reinforcement Learning for Spoken Dialogue Systems", "text": "Recently, a number of authors have proposed treating dialog ue systems as Markov decision processes (MDPs). However, the practical applica tion of MDP algorithms to dialogue systems faces a number of severe technical chall enges. We have built a general software tool (RLDS, for Reinforcement Learning fo r Dialogue Systems) based on the MDP framework, and have applied it to dialogue co rpora gathered from two dialogue systems built at AT&T Labs. Our experiment s demonstrate that RLDS holds promise as a tool for \u201cbrowsing\u201d and understandin g correlations in complex, temporally dependent dialogue corpora."}
{"_id": "5c8bb027eb65b6d250a22e9b6db22853a552ac81", "title": "Learning from delayed rewards", "text": ""}
{"_id": "f44a610e28e174f48220cac09579a3aa337e672a", "title": "An Efficient Algorithm for Fractal Analysis of Textures", "text": "In this paper we propose a new and efficient texture feature extraction method: the Segmentation-based Fractal Texture Analysis, or SFTA. The extraction algorithm consists in decomposing the input image into a set of binary images from which the fractal dimensions of the resulting regions are computed in order to describe segmented texture patterns. The decomposition of the input image is achieved by the Two-Threshold Binary Decomposition (TTBD) algorithm, which we also propose in this work. We evaluated SFTA for the tasks of content-based image retrieval (CBIR) and image classification, comparing its performance to that of other widely employed feature extraction methods such as Haralick and Gabor filter banks. SFTA achieved higher precision and accuracy for CBIR and image classification. Additionally, SFTA was at least 3.7 times faster than Gabor and 1.6 times faster than Haralick with respect to feature extraction time."}
{"_id": "2152e1fda8b19f5480ca094cb08f1ebac1f5ff9b", "title": "Voice-based assessments of trustworthiness, competence, and warmth in blind and sighted adults", "text": "The study of voice perception in congenitally blind individuals allows researchers rare insight into how a lifetime of visual deprivation affects the development of voice perception. Previous studies have suggested that blind adults outperform their sighted counterparts in low-level auditory tasks testing spatial localization and pitch discrimination, as well as in verbal speech processing; however, blind persons generally show no advantage in nonverbal voice recognition or discrimination tasks. The present study is the first to examine whether visual experience influences the development of social stereotypes that are formed on the basis of nonverbal vocal characteristics (i.e., voice pitch). Groups of 27 congenitally or early-blind adults and 23 sighted controls assessed the trustworthiness, competence, and warmth of men and women speaking a series of vowels, whose voice pitches had been experimentally raised or lowered. Blind and sighted listeners judged both men's and women's voices with lowered pitch as being more competent and trustworthy than voices with raised pitch. In contrast, raised-pitch voices were judged as being warmer than were lowered-pitch voices, but only for women's voices. Crucially, blind and sighted persons did not differ in their voice-based assessments of competence or warmth, or in their certainty of these assessments, whereas the association between low pitch and trustworthiness in women's voices was weaker among blind than sighted participants. This latter result suggests that blind persons may rely less heavily on nonverbal cues to trustworthiness compared to sighted persons. Ultimately, our findings suggest that robust perceptual associations that systematically link voice pitch to the social and personal dimensions of a speaker can develop without visual input."}
{"_id": "1028b3f1808b5bcc72b018d157db952bb8282205", "title": "Place navigation impaired in rats with hippocampal lesions", "text": "Electrophysiological studies have shown that single cells in the hippocampus respond during spatial learning and exploration1\u20134, some firing only when animals enter specific and restricted areas of a familiar environment. Deficits in spatial learning and memory are found after lesions of the hippocampus and its extrinsic fibre connections5,6 following damage to the medial septal nucleus which successfully disrupts the hippocampal theta rhythm7, and in senescent rats which also show a correlated reduction in synaptic enhancement on the perforant path input to the hippocampus8. We now report, using a novel behavioural procedure requiring search for a hidden goal, that, in addition to a spatial discrimination impairment, total hippocampal lesions also cause a profound and lasting placenavigational impairment that can be dissociated from correlated motor, motivational and reinforcement aspects of the procedure."}
{"_id": "1045aca116f8830e364147de75285e86f9a24474", "title": "Performance anomaly of 802.11b", "text": "We analyze the performance of the IEEE 802.11b wireless local area networks. We have observed that when some mobile hosts use a lower bit rate than the others, the performance of all hosts is considerably degraded. Such a situation is a common case in wireless local area networks in which a host far away from an Access Point is subject to important signal fading and interference. To cope with this problem, the host changes its modulation type, which degrades its bit rate to some lower value. Typically, 802.11b products degrade the bit rate from 11 Mb/s to 5.5, 2, or 1 Mb/s when repeated unsuccessful frame transmissions are detected. In such a case, a host transmitting for example at 1 Mb/s reduces the throughput of all other hosts transmitting at 11 Mb/s to a low value below 1 Mb/s. The basic CSMA/CA channel access method is at the root of this anomaly: it guarantees an equal long term channel access probability to all hosts. When one host captures the channel for a long time because its bit rate is low, it penalizes other hosts that use the higher rate. We analyze the anomaly theoretically by deriving simple expressions for the useful throughput, validate them by means of simulation, and compare with several performance measurements."}
{"_id": "3bf64462fc3558ab7e9329d084a1af4cf0c87ebf", "title": "Deadline Guaranteed Service for Multi-Tenant Cloud Storage", "text": "It is imperative for cloud storage systems to be able to provide deadline guaranteed services according to service level agreements (SLAs) for online services. In spite of many previous works on deadline aware solutions, most of them focus on scheduling work flows or resource reservation in datacenter networks but neglect the server overload problem in cloud storage systems that prevents providing the deadline guaranteed services. In this paper, we introduce a new form of SLAs, which enables each tenant to specify a percentage of its requests it wishes to serve within a specified deadline. We first identify the multiple objectives (i.e., traffic and latency minimization, resource utilization maximization) in developing schemes to satisfy the SLAs. To satisfy the SLAs while achieving the multi-objectives, we propose a Parallel Deadline Guaranteed (PDG) scheme, which schedules data reallocation (through load re-assignment and data replication) using a tree-based bottom-up parallel process. The observation from our model also motivates our deadline strictness clustered data allocation algorithm that maps tenants with the similar SLA strictness into the same server to enhance SLA guarantees. We further enhance PDG in supplying SLA guaranteed services through two algorithms: i) a prioritized data reallocation algorithm that deals with request arrival rate variation, and ii) an adaptive request retransmission algorithm that deals with SLA requirement variation. Our trace-driven experiments on a simulator and Amazon EC2 show the effectiveness of our schemes for guaranteeing the SLAs while achieving the multi-objectives."}
{"_id": "1284e72c31b94a6a1936430a9aaeb84edbc445ed", "title": "High Quality Uniform Random Number Generation Using LUT Optimised State-transition Matrices", "text": "This paper presents a family of uniform random number generators designed for efficient implementation in Lookup table (LUT) based FPGA architectures. A generator with a period of 2j1 can be implemented using k flip-flops and k LUTs, and provides k random output bits each cycle. Each generator is based on a binary linear recurrence, with a state-transition matrix designed to make best use of all available LUT inputs in a given FPGA architecture, and to ensure that the critical path between all registers is a single LUT. This class of generator provides a higher sample rate per area than LFSR and Combined Tausworthe generators, and operates at similar or higher clock-rates. The statistical quality of the generators increases with k, and can be used to pass all common empirical tests such as Diehard, Crush and the NIST cryptographic test suite. Theoretical properties such as global equidistribution can also be calculated, and best and average case statistics shown. Due to the large number of random bits generated per cycle these generators can be used as a basis for generators with even higher statistical quality, and an example involving combination through addition is demonstrated."}
{"_id": "3735317a7435296a01ff4b7571d6b08fca98b298", "title": "Adversarial Text Generation Without Reinforcement Learning", "text": "Generative Adversarial Networks (GANs) have experienced a recent surge in popularity, performing competitively in a variety of tasks, especially in computer vision. However, GAN training has shown limited success in natural language processing. This is largely because sequences of text are discrete, and thus gradients cannot propagate from the discriminator to the generator. Recent solutions use reinforcement learning to propagate approximate gradients to the generator, but this is inefficient to train. We propose to utilize an autoencoder to learn a low-dimensional representation of sentences. A GAN is then trained to generate its own vectors in this space, which decode to realistic utterances. We report both random and interpolated samples from the generator. Visualization of sentence vectors indicate our model correctly learns the latent space of the autoencoder. Both human ratings and BLEU scores show that our model generates realistic text against competitive baselines."}
{"_id": "3cc515487ff15e7ce57840f87243e7c0748f5d89", "title": "Fast bayesian matching pursuit", "text": "A low-complexity recursive procedure is presented for minimum mean squared error (MMSE) estimation in linear regression models. A Gaussian mixture is chosen as the prior on the unknown parameter vector. The algorithm returns both an approximate MMSE estimate of the parameter vector and a set of high posterior probability mixing parameters. Emphasis is given to the case of a sparse parameter vector. Numerical simulations demonstrate estimation performance and illustrate the distinctions between MMSE estimation and MAP model selection. The set of high probability mixing parameters not only provides MAP basis selection, but also yields relative probabilities that reveal potential ambiguity in the sparse model."}
{"_id": "4a2d7bf9937793a648a43c93029353ade10e64da", "title": "Halide: a language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines", "text": "Image processing pipelines combine the challenges of stencil computations and stream programs. They are composed of large graphs of different stencil stages, as well as complex reductions, and stages with global or data-dependent access patterns. Because of their complex structure, the performance difference between a naive implementation of a pipeline and an optimized one is often an order of magnitude. Efficient implementations require optimization of both parallelism and locality, but due to the nature of stencils, there is a fundamental tension between parallelism, locality, and introducing redundant recomputation of shared values.\n We present a systematic model of the tradeoff space fundamental to stencil pipelines, a schedule representation which describes concrete points in this space for each stage in an image processing pipeline, and an optimizing compiler for the Halide image processing language that synthesizes high performance implementations from a Halide algorithm and a schedule. Combining this compiler with stochastic search over the space of schedules enables terse, composable programs to achieve state-of-the-art performance on a wide range of real image processing pipelines, and across different hardware architectures, including multicores with SIMD, and heterogeneous CPU+GPU execution. From simple Halide programs written in a few hours, we demonstrate performance up to 5x faster than hand-tuned C, intrinsics, and CUDA implementations optimized by experts over weeks or months, for image processing applications beyond the reach of past automatic compilers."}
{"_id": "041f167030571f5f156c8407bab2eab3006842e0", "title": "Biomedical Event Extraction using Abstract Meaning Representation", "text": "We propose a novel, Abstract Meaning Representation (AMR) based approach to identifying molecular events/interactions in biomedical text. Our key contributions are: (1) an empirical validation of our hypothesis that an event is a subgraph of the AMR graph, (2) a neural network-based model that identifies such an event subgraph given an AMR, and (3) a distant supervision based approach to gather additional training data. We evaluate our approach on the 2013 Genia Event Extraction dataset1 (Kim et al., 2013) and show promising results."}
{"_id": "64a1e4d18c2fab4904f85b3ee1a9af4263dae348", "title": "Deep generative models: Survey", "text": "Generative models have found their way to the forefront of deep learning the last decade and so far, it seems that the hype will not fade away any time soon. In this paper, we give an overview of the most important building blocks of most recent revolutionary deep generative models such as RBM, DBM, DBN, VAE and GAN. We will also take a look at three of state-of-the-art generative models, namely PixelRNN, DRAW and NADE. We will delve into their unique architectures, the learning procedures and their potential and limitations. We will also review some of the known issues that arise when trying to design and train deep generative architectures using shallow ones and how different models deal with these issues. This paper is not meant to be a comprehensive study of these models, but rather a starting point for those who bear an interest in the field."}
{"_id": "0e27539f23b8ebb0179fcecfeb11167d3f38eeaf", "title": "Critical event prediction for proactive management in large-scale computer clusters", "text": "As the complexity of distributed computing systems increases, systems management tasks require significantly higher levels of automation; examples include diagnosis and prediction based on real-time streams of computer events, setting alarms, and performing continuous monitoring. The core of autonomic computing, a recently proposed initiative towards next-generation IT-systems capable of 'self-healing', is the ability to analyze data in real-time and to predict potential problems. The goal is to avoid catastrophic failures through prompt execution of remedial actions.This paper describes an attempt to build a proactive prediction and control system for large clusters. We collected event logs containing various system reliability, availability and serviceability (RAS) events, and system activity reports (SARs) from a 350-node cluster system for a period of one year. The 'raw' system health measurements contain a great deal of redundant event data, which is either repetitive in nature or misaligned with respect to time. We applied a filtering technique and modeled the data into a set of primary and derived variables. These variables used probabilistic networks for establishing event correlations through prediction algorithms. We also evaluated the role of time-series methods, rule-based classification algorithms and Bayesian network models in event prediction.Based on historical data, our results suggest that it is feasible to predict system performance parameters (SARs) with a high degree of accuracy using time-series models. Rule-based classification techniques can be used to extract machine-event signatures to predict critical events with up to 70% accuracy."}
{"_id": "4b0e6f6c63ba66b21ad92e8b14139a8b59e9877e", "title": "Small business credit scoring: a comparison of logistic regression, neural network, and decision tree models", "text": "The paper compares the models for small business credit scoring developed by logistic regression, neural networks, and CART decision trees on a Croatian bank dataset. The models obtained by all three methodologies were estimated; then validated on the same hold-out sample, and their performance is compared. There is an evident significant difference among the best neural network model, decision tree model, and logistic regression model. The most successful neural network model was obtained by the probabilistic algorithm. The best model extracted the most important features for small business credit scoring from the observed data"}
{"_id": "ac3306493b0b314b5e355bbc1ac44289bcaf1acd", "title": "SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine", "text": "We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering. That is, we start not from an existing article and generate a question-answer pair, but start from an existing question-answer pair, crawled from J! Archive, and augment it with text snippets retrieved by Google. Following this approach, we built SearchQA, which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average. Each question-answer-context tuple of the SearchQA comes with additional meta-data such as the snippet\u2019s URL, which we believe will be valuable resources for future research. We conduct human evaluation as well as test two baseline methods, one simple word selection and the other deep learning based, on the SearchQA. We show that there is a meaningful gap between the human and machine performances. This suggests that the proposed dataset could well serve as a benchmark for question-answering."}
{"_id": "3913d2e0a51657a5fe11305b1bcc8bf3624471c0", "title": "Learning Structured Representation for Text Classification via Reinforcement Learning", "text": "Representation learning is a fundamental problem in natural language processing. This paper studies how to learn a structured representation for text classification. Unlike most existing representation models that either use no structure or rely on pre-specified structures, we propose a reinforcement learning (RL) method to learn sentence representation by discovering optimized structures automatically. We demonstrate two attempts to build structured representation: Information Distilled LSTM (ID-LSTM) and Hierarchically Structured LSTM (HS-LSTM). ID-LSTM selects only important, taskrelevant words, and HS-LSTM discovers phrase structures in a sentence. Structure discovery in the two representation models is formulated as a sequential decision problem: current decision of structure discovery affects following decisions, which can be addressed by policy gradient RL. Results show that our method can learn task-friendly representations by identifying important words or task-relevant structures without explicit structure annotations, and thus yields competitive performance."}
{"_id": "1605b94e1c2d8a019674c9ee05d88893c7038b27", "title": "Privacy Paradox Revised: Pre-Existing Attitudes, Psychological Ownership, and Actual Disclosure", "text": "Prior research has pointed to discrepancies between users' privacy concerns and disclosure behaviors, denoted as the privacy paradox, and repeatedly highlighted the importance to find explanations for this dichotomy. In this regard, three approaches have been proposed by prior literature: (1) use of actual disclosure behavior rather than behavioral intentions, (2) systematic distinction between pre-existing attitudes and situation-specific privacy considerations, and (3) limited and irrational cognitive processes during decision-making. The current research proposes an experiment capable to test these three assumptions simultaneously. More precisely, the authors aim to explore the contextual nature of privacy-related decisions by systematically manipulating (1) individuals\u2019 psychological ownership with regard to own private information, and (2) individuals\u2019 affective states, while measuring (3) pre-existing attitudes as well as situation-specific risk and benefit perceptions, and (4) intentions as well as actual disclosure. Thus, the proposed study strives to uniquely add to the understanding of the privacy paradox."}
{"_id": "f924aae98bc6d0119035712d3a37f388975a55a3", "title": "Anomaly Detection in Noisy Images", "text": "Title of dissertation: Anomaly Detection in Noisy Images Xavier Gibert Serra, Ph.D. Examination, Fall 2015 Dissertation directed by: Professor Rama Chellappa Department of Electrical and Computer Engineering Finding rare events in multidimensional data is an important detection problem that has applications in many fields, such as risk estimation in insurance industry, finance, flood prediction, medical diagnosis, quality assurance, security, or safety in transportation. The occurrence of such anomalies is so infrequent that there is usually not enough training data to learn an accurate statistical model of the anomaly class. In some cases, such events may have never been observed, so the only information that is available is a set of normal samples and an assumed pairwise similarity function. Such metric may only be known up to a certain number of unspecified parameters, which would either need to be learned from training data, or fixed by a domain expert. Sometimes, the anomalous condition may be formulated algebraically, such as a measure exceeding a predefined threshold, but nuisance variables may complicate the estimation of such a measure. Change detection methods used in time series analysis are not easily extendable to the multidimensional case, where discontinuities are not localized to a single point. On the other hand, in higher dimensions, data exhibits more complex interdependencies, and there is redundancy that could be exploited to adaptively model the normal data. In the first part of this dissertation, we review the theoretical framework for anomaly detection in images and previous anomaly detection work done in the context of crack detection and detection of anomalous components in railway tracks. In the second part, we propose new anomaly detection algorithms. The fact that curvilinear discontinuities in images are sparse with respect to the frame of shearlets, allows us to pose this anomaly detection problem as basis pursuit optimization. Therefore, we pose the problem of detecting curvilinear anomalies in noisy textured images as a blind source separation problem under sparsity constraints, and propose an iterative shrinkage algorithm to solve it. Taking advantage of the parallel nature of this algorithm, we describe how this method can be accelerated using graphical processing units (GPU). Then, we propose a new method for finding defective components on railway tracks using cameras mounted on a train. We describe how to extract features and use a combination of classifiers to solve this problem. Then, we scale anomaly detection to bigger datasets with complex interdependencies. We show that the anomaly detection problem naturally fits in the multitask learning framework. The first task consists of learning a compact representation of the good samples, while the second task consists of learning the anomaly detector. Using deep convolutional neural networks, we show that it is possible to train a deep model with a limited number of anomalous examples. In sequential detection problems, the presence of time-variant nuisance parameters affect the detection performance. In the last part of this dissertation, we present a method for adaptively estimating the threshold of sequential detectors using Extreme Value Theory on a Bayesian framework. Finally, conclusions on the results obtained are provided, followed by a discussion of possible future work. 
Anomaly Detection in Noisy Images by Xavier Gibert Serra Dissertation submitted to the Faculty of the Graduate School of the University of Maryland, College Park in partial fulfillment of the requirements for the degree of Doctor of Philosophy 2015 Advisory Committee: Professor Rama Chellappa, Chair/Advisor Professor Piya Pal Professor Shuvra Bhattacharyya Professor Vishal M. Patel Professor Amitabh Varshney, Dean\u2019s Representative"}
{"_id": "68914d04bf225449408d86536fcbae7f285a0f63", "title": "Building population mapping with aerial imagery and GIS data", "text": "Geospatial distribution of population at a scale of individual buildings is needed for analysis of people\u2019s interaction with their local socio-economic and physical environments. High resolution aerial images are capable of capturing urban complexities and considered as a potential source for mapping urban features at this fine scale. This paper studies population mapping for individual buildings by using aerial imagery and other geographic data. Building footprints and heights are first determined from aerial images, digital terrain and surface models. City zoning maps allow the classification of the buildings as residential and non-residential. The use of additional ancillary geographic data further filters residential utility buildings out of the residential area and identifies houses and apartments. In the final step, census block population, which is publicly available from the U.S. Census, is disaggregated and mapped to individual residential buildings. This paper proposes a modified building population mapping model that takes into account the effects of different types of residential buildings. Detailed steps are described that lead to the identification of residential buildings from imagery and other GIS data layers. Estimated building populations are evaluated per census block with reference to the known census records. This paper presents and evaluates the results of building population mapping in areas of West Lafayette, Lafayette, and Wea Township, all in the state of Indiana, USA. \u00a9 2011 Published by Elsevier B.V."}
{"_id": "bb3f4ad0d4392689fa1a7e4696579173c443dc58", "title": "Toward a Universal Cortical Algorithm: Examining Hierarchical Temporal Memory in Light of Frontal Cortical Function", "text": "Representations One aspect of frontal cortical function that has been given less attention than it is due, both in this review and in the literature in general, is the biological basis of the generation and manipulation of abstract frontal representations. Response properties of some PFC cells have been shown to correspond with abstract rules (Wallis et al, 2001), and models such as those of Norman & Shallice (1980, 1986; Shallice, 1982; Shallice & Burgess, 1991, 1996), Grafman (2002) and Fuster (1997) have emphasized the requirement that frontal cortex dynamically generate and manipulate abstract"}
{"_id": "dd5b30cbb7c07cbc9643f9dfe124c344b26a03bd", "title": "An Adaption of BIOASQ Question Answering dataset for Machine Reading systems by Manual Annotations of Answer Spans", "text": "BIOASQ Task B Phase B challenge focuses on extracting answers from snippets for a given question. The dataset provided by the organizers contains answers, but not all their variants. Henceforth a manual annotation was performed to extract all forms of correct answers. This article shows the impact of using all occurrences of correct answers for training on the evaluation scores which are improved significantly."}
{"_id": "7c331f755de54f5d37d699b6192ca2f2468c8d9b", "title": "QoE in Video Transmission: A User Experience-Driven Strategy", "text": "The increasing popularity of video (i.e., audio-visual) applications or services over both wired and wireless links has prompted recent growing interests in the investigations of quality of experience (QoE) in online video transmission. Conventional video quality metrics, such as peak-signal-to-noise-ratio and quality of service, only focus on the reception quality from the systematic perspective. As a result, they cannot represent the true visual experience of an individual user. Instead, the QoE introduces a user experience-driven strategy which puts special emphasis on the contextual and human factors in addition to the transmission system. This advantage has raised the popularity and widespread usage of QoE in video transmission. In this paper, we present an overview of selected issues pertaining to QoE and its recent applications in video transmission, with consideration of the compelling features of QoE (i.e., context and human factors). The selected issues include QoE modeling with influence factors in the end-to-end chain of video transmission, QoE assessment (including subjective test and objective QoE monitoring) and QoE management of video transmission over different types of networks. Through the literature review, we observe that the context and human factors in QoE-aware video transmission have attracted significant attentions since the past two to three years. A vast number of high quality works were published in this area, and will be highlighted in this survey. In addition to a thorough summary of recent progresses, we also present an outlook of future developments on QoE assessment and management in video transmission, especially focusing on the context and human factors that have not been addressed yet and the technical challenges that have not been completely solved so far. We believe that our overview and findings can provide a timely perspective on the related issues and the future research directions in QoE-oriented services over video communications."}
{"_id": "6182186784d792ed09c4129924a46f9e88869407", "title": "Grounded Theory", "text": "Grounded theory was originally expounded by Barney Glaser and Anselm Strauss in their 1967 book The Discovery of Grounded Theory (Glaser and Strauss 1967). Reacting against what they saw as the dominance of hypothetico-deductive, theory-testing approaches, Glaser and Strauss proposed grounded theory as a way of building theory systematically using data obtained from social research. Since its first appearance, grounded theory has gone on to become \u2018currently the most widely used and popular qualitative research method across a wide range of disciplines and subject areas\u2019 (Bryant and Charmaz 2007: 1)."}
{"_id": "931bf857a5d9dbf13ff8da107f5d3075d63a925d", "title": "Optimal assay design for determining the in vitro sensitivity of ring stage Plasmodium falciparum to artemisinins.", "text": "Recent reports demonstrate that failure of artemisinin-based antimalarial therapies is associated with an altered response of early blood stage Plasmodium falciparum. This has led to increased interest in the use of pulse assays that mimic clinical drug exposure for analysing artemisinin sensitivity of highly synchronised ring stage parasites. We report a methodology for the reliable execution of drug pulse assays and detail a synchronisation strategy that produces well-defined tightly synchronised ring stage cultures in a convenient time-frame."}
{"_id": "2d1cfc9e81fb159967c2be8446a8e3e7b50fe36b", "title": "An MDP-based Recommender System", "text": "Typical Recommender systems adopt a static view of the recommendation process and treat it as a prediction problem. We argue that it is more appropriate to view the problem of generating recommendations as a sequential decision problem and, consequently, that Markov decision processes (MDP) provide a more appropriate model for Recommender systems. MDPs introduce two benefits: they take into account the long-term effects of each recommendation, and they take into account the expected value of each recommendation. To succeed in practice, an MDP-based Recommender system must employ a strong initial model; and the bulk of this paper is concerned with the generation of such a model. In particular, we suggest the use of an n-gram predictive model for generating the initial MDP. Ourn-gram model induces a Markovchain model of user behavior whose predictive accuracy is greater than that of existing predictive models. We describe our predictive model in detail and evaluate its performance on real data. In addition, we show how the model can be used in an MDP-based Recommender system."}
{"_id": "3107cb3f3f39eb6fcf6435daaef636db35950e4f", "title": "From amateurs to connoisseurs: modeling the evolution of user expertise through online reviews", "text": "Recommending products to consumers means not only understanding their tastes, but also understanding their level of experience. For example, it would be a mistake to recommend the iconic film Seven Samurai simply because a user enjoys other action movies; rather, we might conclude that they will eventually enjoy it---once they are ready. The same is true for beers, wines, gourmet foods---or any products where users have acquired tastes: the `best' products may not be the most 'accessible'. Thus our goal in this paper is to recommend products that a user will enjoy now, while acknowledging that their tastes may have changed over time, and may change again in the future. We model how tastes change due to the very act of consuming more products---in other words, as users become more experienced. We develop a latent factor recommendation system that explicitly accounts for each user's level of experience. We find that such a model not only leads to better recommendations, but also allows us to study the role of user experience and expertise on a novel dataset of fifteen million beer, wine, food, and movie reviews."}
{"_id": "599ebeef9c9d92224bc5969f3e8e8c45bff3b072", "title": "Item-based top-N recommendation algorithms", "text": "The explosive growth of the world-wide-web and the emergence of e-commerce has led to the development of recommender systems---a personalized information filtering technology used to identify a set of items that will be of interest to a certain user. User-based collaborative filtering is the most successful technology for building recommender systems to date and is extensively used in many commercial recommender systems. Unfortunately, the computational complexity of these methods grows linearly with the number of customers, which in typical commercial applications can be several millions. To address these scalability concerns model-based recommendation techniques have been developed. These techniques analyze the user--item matrix to discover relations between the different items and use these relations to compute the list of recommendations.In this article, we present one such class of model-based recommendation algorithms that first determines the similarities between the various items and then uses them to identify the set of items to be recommended. The key steps in this class of algorithms are (i) the method used to compute the similarity between the items, and (ii) the method used to combine these similarities in order to compute the similarity between a basket of items and a candidate recommender item. Our experimental evaluation on eight real datasets shows that these item-based algorithms are up to two orders of magnitude faster than the traditional user-neighborhood based recommender systems and provide recommendations with comparable or better quality."}
{"_id": "6bd87d13b5633495f93699e80220492035a92716", "title": "Dynamic Conversion Behavior at E-Commerce Sites", "text": "T paper develops a model of conversion behavior (i.e., converting store visits into purchases) that predicts each customer\u2019s probability of purchasing based on an observed history of visits and purchases. We offer an individual-level probability model that allows for different forms of customer heterogeneity in a very flexible manner. Specifically, we decompose an individual\u2019s conversion behavior into two components: one for accumulating visit effects and another for purchasing threshold effects. Each component is allowed to vary across households as well as over time. Visit effects capture the notion that store visits can play different roles in the purchasing process. For example, some visits are motivated by planned purchases, while others are associated with hedonic browsing (akin to window shopping); our model is able to accommodate these (and several other) types of visit-purchase relationships in a logical, parsimonious manner. The purchasing threshold captures the psychological resistance to online purchasing that may grow or shrink as a customer gains more experience with the purchasing process at a given website. We test different versions of the model that vary in the complexity of these two key components and also compare our general framework with popular alternatives such as logistic regression. We find that the proposed model offers excellent statistical properties, including its performance in a holdout validation sample, and also provides useful managerial diagnostics about the patterns underlying online buyer behavior."}
{"_id": "92cc12f272ff55795c29cd97dc8ee17a5554308e", "title": "Content-Based, Collaborative Recommendation", "text": "The problem of recommending items from some fixed database has been studied extensively, and two main paradigms have emerged. In content-based recommendation one tries to recommend items similar to those a given user has liked in the past, whereas in collaborative recommendation one identifies users whose tastes are similar to those of the given user and recommends items they have liked. Our approach in Fab has been to combine these two methods. Here, we explain how a hybrid system can incorporate the advantages of both methods while inheriting the disadvantages of neither. In addition to what one might call the \u201cgeneric advantages\u201d inherent in any hybrid system, the particular design of the Fab architecture brings two additional benefits. First, two scaling problems common to all Web services are addressed\u2014an increasing number of users and an increasing number of documents. Second, the system automatically identifies emergent communities of interest in the user population, enabling enhanced group awareness and communications. Here we describe the two approaches for contentbased and collaborative recommendation, explain how a hybrid system can be created, and then describe Fab, an implementation of such a system. For more details on both the implemented architecture and the experimental design the reader is referred to [1]. The content-based approach to recommendation has its roots in the information retrieval (IR) community, and employs many of the same techniques. Text documents are recommended based on a comparison between their content and a user profile. Data"}
{"_id": "a93bee60173411d5cf5d917ecd7355ab7cfee40e", "title": "Splitting approaches for context-aware recommendation: an empirical study", "text": "User and item splitting are well-known approaches to context-aware recommendation. To perform item splitting, multiple copies of an item are created based on the contexts in which it has been rated. User splitting performs a similar treatment with respect to users. The combination of user and item splitting: UI splitting, splits both users and items in the data set to boost context-aware recommendations. In this paper, we perform an empirical comparison of these three context-aware splitting approaches (CASA) on multiple data sets, and we also compare them with other popular context-aware collaborative filtering (CACF) algorithms. To evaluate those algorithms, we propose new evaluation metrics specific to contextual recommendation. The experiments reveal that CASA typically outperform other popular CACF algorithms, but there is no clear winner among the three splitting approaches. However, we do find some underlying patterns or clues for the application of CASA."}
{"_id": "3828a3f60ca9477d3e130f1fd7dfc9d600ef72c8", "title": "Asynchronous Distributed Semi-Stochastic Gradient Optimization", "text": "Lemma 2 At a specific stage s and for a worker p, let g\u2217 i = \u2207fi(w)\u2212\u2207fi(w\u0303) +\u2207F (w\u0303), i \u2208 Dp, then the following inequality holds, Ep [ Ei\u2016g i \u2016 ] \u2264 2L [F (w\u0303)\u2212 F (w\u2217)] , Proof 2 Ei\u2016g i \u2016 =Ei\u2016\u2207fi(w\u0303)\u2212\u2207fi(w)\u2212\u2207F (w\u0303)\u2016 =Ei\u2016\u2207fi(w\u0303)\u2212\u2207fi(w)\u2016 \u2212 2Ei\u3008\u2207F (w\u0303),\u2207fi(w\u0303)\u2212\u2207fi(w)\u3009+ \u2016\u2207F (w\u0303)\u2016 \u22642L (Fp(w\u0303)\u2212 Fp(w)\u2212 \u3008\u2207Fp(w), w\u0303 \u2212 w\u2217\u3009) \u2212 2\u3008\u2207F (w\u0303),\u2207Fp(w\u0303)\u2212\u2207Fp(w)\u3009+ \u2016\u2207F (w\u0303)\u2016, where the inequality is the result of applying Lemma 1. Further, taking expectation on both sides w.r.t. worker p, we get"}
{"_id": "986094a13766cfe7d751d5a47553dfe3ff196186", "title": "Teaching-to-Learn and Learning-to-Teach for Multi-label Propagation", "text": "Multi-label propagation aims to transmit the multi-label information from labeled examples to unlabeled examples based on a weighted graph. Existing methods ignore the specific propagation difficulty of different unlabeled examples and conduct the propagation in an imperfect sequence, leading to the error-prone classification of some difficult examples with uncertain labels. To address this problem, this paper associates each possible label with a \u201cteacher\u201d, and proposes a \u201cMulti-Label Teaching-to-Learn and Learning-toTeach\u201d (ML-TLLT) algorithm, so that the entire propagation process is guided by the teachers and manipulated from simple examples to more difficult ones. In the teaching-to-learn step, the teachers select the simplest examples for the current propagation by investigating both the definitiveness of each possible label of the unlabeled examples, and the dependencies between labels revealed by the labeled examples. In the learning-to-teach step, the teachers reversely learn from the learner\u2019s feedback to properly select the simplest examples for the next propagation. Thorough empirical studies show that due to the optimized propagation sequence designed by the teachers, ML-TLLT yields generally better performance than seven state-of-the-art methods on the typical multi-label benchmark datasets."}
{"_id": "d47f5143e566db54100a2546c8869465c57251f4", "title": "A 1-V, 16.9 ppm/$^{\\circ}$C, 250 nA Switched-Capacitor CMOS Voltage Reference", "text": "An ultra low-power, precise voltage reference using a switched-capacitor technique in 0.35-\u03bcm CMOS is presented in this paper. The temperature dependence of the carrier mobility and channel length modulation effect can be effectively minimized by using 3.3 and 5 V N-type transistors to operate in the saturation and subthreshold regions, respectively. In place of resistors, a precise reference voltage with flexible trimming capability is achieved by using capacitors. When the supply voltage is 1 V and the temperature is 80\u00b0C, the supply current is 250 nA. The line sensitivity is 0.76%/V; the PSRR is -41 dB at 100 Hz and -17 dB at 10 MHz. Moreover, the occupied die area is 0.049 mm2."}
{"_id": "9f458230b385d2fb0124e59663059d41e10686ac", "title": "Morphological Segmentation with Window LSTM Neural Networks", "text": "Morphological segmentation, which aims to break words into meaning-bearing morphemes, is an important task in natural language processing. Most previous work relies heavily on linguistic preprocessing. In this paper, we instead propose novel neural network architectures that learn the structure of input sequences directly from raw input words and are subsequently able to predict morphological boundaries. Our architectures rely on Long Short Term Memory (LSTM) units to accomplish this, but exploit windows of characters to capture more contextual information. Experiments on multiple languages confirm the effectiveness of our models on this task."}
{"_id": "01a8909330cb5d4cc37ef50d03467b1974d6c9cf", "title": "An overview of 3D object grasp synthesis algorithms", "text": "This overview presents computational algorithms for generating 3D object grasps with autonomous multi-fingered robotic hands. Robotic grasping has been an active research subject for decades, and a great deal of effort has been spent on grasp synthesis algorithms. Existing papers focus on reviewing the mechanics of grasping and the finger-object contact interactions [7] or robot hand design and their control [1]. Robot grasp synthesis algorithms have been reviewed in [63], but since then an important progress has been made toward applying learning techniques to the grasping problem. This overview focuses on analytical as well as empirical grasp synthesis approaches."}
{"_id": "c200b0b6a80ad58f8b0fd7b461ed75d54fa0ae6d", "title": "Assessing the effects of service quality and justice on customer satisfaction and the continuance intention of mobile value-added services: An empirical test of a multidimensional model", "text": "a r t i c l e i n f o Understanding the antecedents and consequences of customer satisfaction in the mobile communications market is important. This study explores the effects of service quality and justice on customer satisfaction, which, in turn, affects continuance intention of mobile services. Service quality, justice and customer satisfaction were measured by multiple dimensions. A research model was developed based on this multidimen-sional approach and was empirically examined with data collected from about one thousand users of mobile value-added services in China. Results show that all three dimensions of service quality (interaction quality, environment quality and outcome quality) have significant and positive effects on cumulative satisfaction while only one dimension of service quality (interaction quality) has a significant and positive effect on transaction-specific satisfaction. Besides procedural justice, the other two dimensions of justice (distribu-tive justice and interactional justice) significantly influence both transaction-specific satisfaction and cumulative satisfaction. Furthermore, both types of customer satisfaction have significant and positive effects on continuance intention. Implications for research and practice are discussed. With the rapid advancements of mobile network technologies, provision of various kinds of value-added services by mobile service providers is on the rise around the world. As the market becomes more and more mature, value-added services become more homogeneous and the competition for acquiring new customers and retaining existing customers becomes more intense. In this environment, customer satisfaction is a critical factor for mobile service providers to maintain or improve their market share and profitability. Prior studies have found that customer satisfaction contributes to a firm's profitability and customer retention [33,35]. In a reorganization of the communications industry in China between 2008 and 2009, the original six mobile network operators were reduced to three. Meanwhile, the availability of third-generation telecommunications technologies suggested that more mobile value-added services would be provided to the customers. A recent value-added services survey report on mobile communications conducted by Analysys in 2010 predicted that, the competition among existing mobile network operators would become fierce after the reorganization of the industry and the introduction of third-generation services. Thus, for these mobile network operators, in order to retain customers, enhancing customer satisfaction is an urgent task to tackle with. Moreover, as new mobile value-added services are released, service providers need to focus on if these new services appeal to customers and on the willingness of customers to continue to use the services. Therefore, understanding the \u2026"}
{"_id": "84a60e1b5ec7aa08059bb0f67e81591a0c5753fc", "title": "A Low-Noise Amplifier With Tunable Interference Rejection for 3.1- to 10.6-GHz UWB Systems", "text": "An ultrawideband common-gate low noise amplifier with tunable interference rejection is presented. The proposed LNA embeds a tunable active notch filter to eliminate interferer at 5-GHz WLAN and employs a common-gate input stage and dual-resonant loads for wideband implementation. This LNA has been fabricated in a 0.18-\u00bfm CMOS process. The measured maximum power gain is 13.2 dB and noise figure is 4.5-6.2 dB with bandwidth of 3.1-10.6 GHz. The interferer rejection is 8.2 dB compared to the maximum gain and 7.6 dB noise figure at 5.2 GHz , respectively. The measured input P1dB is -11 dBm at 10.3 GHz. It consumes 12.8 mA from 1.8-V supply voltage."}
{"_id": "b2232ccfec21669834f420f106aab6fea3988d69", "title": "The effect of small disjuncts and class distribution on decision tree learning", "text": ".............................................................................................................................ii Acknowledgements........................................................................................................... iv Dedication .........................................................................................................................vi List of Tables....................................................................................................................xii List of Figures .................................................................................................................xiv"}
{"_id": "857d94ececa3cf5bbb792097212b1fc13b6a52e4", "title": "A strategic framework for good governance through e-governance optimization: A case study of Punjab in India", "text": "Purpose \u2013 The purpose of this paper is to attempt to find out whether the new information and communication technologies can make a significant contribution to the achievement of the objective of good governance. The study identifies the factors responsible for creating a conducive environment for effective and successful implementation of e-governance for achieving good governance and the possible barriers in the implementation of e governance applications. Based on the comprehensive analysis it proposes a strategic policy framework for good governance in Punjab in India. Punjab is a developed state ranked amongst some of the top states of India in terms of per capita income and infrastructure. Design/methodology/approach \u2013 The study designs a framework for good governance by getting the shared vision of all stakeholders about providing good quality administration and governance in the Indian context through \u201cParticipatory Stakeholder Assessment\u201d. The study uses descriptive statistics, perception gap, ANOVA and factor analysis to identify the key factors for good governance, the priorities of public regarding e-services, the policy makers\u2019 perspectives regarding good governance to be achieved through e-governance. Findings \u2013 The study captures the good governance factors mainly contributing to the shared vision. The study further highlights that most Indian citizens in Punjab today believe in the power of information and communication technology (ICT) and want to access e-governance services. Major factors causing pain and harassment to the citizens in getting the services from various government departments include: unreasonable delay, multiple visits even for small services; poor public infrastructure and its maintenance in government offices. In the understanding of citizens the most important factors for the success of e-governance services are: overall convenience and experience of the citizens; reduction in the corruption levels by improvement in the transparency of government functioning and awareness about the availability of service amongst general masses. Originality/value \u2013 The present study has evolved a shared vision of all stakeholders on good governance in the Indian context. It has opened up many new possibilities for the governments, not only to use ICTs and help them in prioritizing the governance areas for focused attention, but also help to understand the mindset of the modern citizenry, their priorities and what they consider as good governance. The study will help policy makers focus on these factors for enhancing speedy delivery of prioritized services and promote good governance in developing countries similar to India."}
{"_id": "046c6c8e15d9b9ecd73b5d2ce125db20bbcdec4b", "title": "Deanonymizing mobility traces: using social network as a side-channel", "text": "Location-based services, which employ data from smartphones, vehicles, etc., are growing in popularity. To reduce the threat that shared location data poses to a user's privacy, some services anonymize or obfuscate this data. In this paper, we show these methods can be effectively defeated: a set of location traces can be deanonymized given an easily obtained social network graph. The key idea of our approach is that a user may be identified by those she meets: a \"contact graph\" identifying meetings between anonymized users in a set of traces can be structurally correlated with a social network graph, thereby identifying anonymized users. We demonstrate the effectiveness of our approach using three real world datasets: University of St Andrews mobility trace and social network (27 nodes each), SmallBlue contact trace and Facebook social network (125 nodes), and Infocom 2006 bluetooth contact traces and conference attendees' DBLP social network (78 nodes). Our experiments show that 80% of users are identified precisely, while only 8% are identified incorrectly, with the remainder mapped to a small set of users."}
{"_id": "4bbc651a3f7debf39c8e9fa1d7877c5898761b4a", "title": "A topology control approach for utilizing multiple channels in multi-radio wireless mesh networks", "text": "We consider the channel assignment problem in a multi-radio wireless mesh network that involves assigning channels to radio interfaces for achieving efficient channel utilization. We propose the notion of a traffic-independent base channel assignment to ease coordination and enable dynamic, efficient and flexible channel assignment. We present a novel formulation of the base channel assignment as a topology control problem, and show that the resulting optimization problem is NP-complete. We then develop a new greedy heuristic channel assignment algorithm (termed CLICA) for finding connected, low interference topologies by utilizing multiple channels. Our extensive simulation studies show that the proposed CLICA algorithm can provide large reduction in interference (even with a small number of radios per node), which in turn leads to significant gains in both link layer and multihop performance in 802.11-based multi-radio mesh networks."}
{"_id": "3c6e10d69ae189a824d9f61bc27066228e97cc1a", "title": "Pattern frequency representation for time series classification", "text": "The paper presents a new method for data transformation. The obtained data format enhances the efficiency of the time series classification. The transformation is realized in four steps, namely: time series segmentation, segment's feature representation, segments' binning and pattern frequency representation. The method is independent of the nature of the data, and it works well with temporal data from diverse areas. Its ability is proved by classification of different real datasets and is also compared with the results of other classification methods."}
{"_id": "12bb048034a2842b8197580606ea51021f154962", "title": "Unsupervised document classification using sequential information maximization", "text": "We present a novel sequential clustering algorithm which is motivated by the Information Bottleneck (IB) method. In contrast to the agglomerative IB algorithm, the new sequential (sIB) approach is guaranteed to converge to a local maximum of the information with time and space complexity typically linear in the data size. information, as required by the original IB principle. Moreover, the time and space complexity are significantly improved. We apply this algorithm to unsupervised document classification. In our evaluation, on small and medium size corpora, the sIB is found to be consistently superior to all the other clustering methods we examine, typically by a significant margin. Moreover, the sIB results are comparable to those obtained by a supervised Naive Bayes classifier. Finally, we propose a simple procedure for trading cluster's recall to gain higher precision, and show how this approach can extract clusters which match the existing topics of the corpus almost perfectly."}
{"_id": "4e03cadf3095f8779eaf878f0594e56ad88788e2", "title": "The Optimal Control of Partially Observable Markov Processes over a Finite Horizon", "text": ""}
{"_id": "76295bf84f26477457bd78250d0d9f6f9bb3de12", "title": "Contextual RNN-GANs for Abstract Reasoning Diagram Generation", "text": "Education University of California, Berkeley (2008-2013) Ph.D. in Computer Science Thesis: Surface Web Semantics for Structured Natural Language Processing Advisor: Dan Klein. Committee members: Dan Klein, Marti Hearst, Line Mikkelsen, Nelson Morgan University of California, Berkeley (2012) Master of Science (M.S.) in Computer Science Thesis: An All-Fragments Grammar for Simple and Accurate Parsing Advisor: Dan Klein Indian Institute of Technology, Kanpur (2004-2008) Bachelor of Technology (B.Tech.) in Computer Science and Engineering GPA: 3.96/4.00 (Institute and Department Rank 2) Cornell University (Summer 2007) CS490 (Independent Research and Reading) GPA: 4.00/4.00 Advisors: Lillian Lee, Claire Cardie"}
{"_id": "79d3792a6aea9934bb46c11dfec287d1775d6c9b", "title": "An Energy-Efficient and Wide-Range Voltage Level Shifter With Dual Current Mirror", "text": "This brief presents an energy-efficient level shifter (LS) to convert a subthreshold input signal to an above-threshold output signal. In order to achieve a wide range of conversion, a dual current mirror (CM) structure consisting of a virtual CM and an auxiliary CM is proposed. The circuit has been implemented and optimized in SMIC 40-nm technology. The postlayout simulation demonstrates that the new LS can achieve voltage conversion from 0.2 to 1.1 V. Moreover, at the target voltage of 0.3 V, the proposed LS exhibits an average propagation delay of 66.48 ns, a total energy per transition of 72.31 fJ, and a static power consumption of 88.4 pW, demonstrating an improvement of $6.0\\times $ , $13.1\\times $ , and $89.0\\times $ , respectively, when compared with Wilson CM-based LS."}
{"_id": "4d392448890a55d19c4212a9dd137ec2c2335a79", "title": "A 60 V Auto-Zero and Chopper Operational Amplifier With 800 kHz Interleaved Clocks and Input Bias Current Trimming", "text": "An auto-zero and chopper operational amplifier with a 4.5-60 V supply voltage range is realized, using a 0.18 \u03bcm CMOS process augmented by 5 V CMOS and 60 V DMOS transistors. It achieves a maximum offset voltage drift of 0.02 \u03bcV/\u00b0C, a minimum CMRR of 145 dB, a noise PSD of 6.8 nV/\u221aHz, and a 3.1 MHz unity gain bandwidth, while dissipating 840 \u03bcA of current. Up-modulated chopper ripple is suppressed by auto- zeroing. Furthermore, glitches from the charge injection of the input switches are mitigated by employing six parallel input stages with 800 kHz interleaved clocks. This moves the majority of the glitch energy up to 4.8 MHz, while leaving little energy at 800 kHz. As a result, the requirements on an external low-pass glitch filter is relaxed, and a wider usable signal bandwidth can be obtained. Maximum input bias current due to charge injection mismatch is reduced from 1.5 nA to 150 pA by post production trimming with an on-chip charge mismatch compensation circuit."}
{"_id": "95b55614898b172b58087d63f0d52bee463948a0", "title": "Laser hair removal.", "text": "The extended theory of selective photothermolysis enables the laser surgeon to target and destroy hair follicles, thereby leading to hair removal. Today, laser hair removal (LHR) is the most commonly requested cosmetic procedure in the world and is routinely performed by dermatologists, other physicians, and non-physician personnel with variable efficacy. The ideal candidate for LHR is fair skinned with dark terminal hair; however, LHR can today be successfully performed in all skin types. Knowledge of hair follicle anatomy and physiology, proper patient selection and preoperative preparation, principles of laser safety, familiarity with the various laser/light devices, and a thorough understanding of laser-tissue interactions are vital to optimizing treatment efficacy while minimizing complications and side effects."}
{"_id": "dd7d87105c49a3a93b863d82e6d4a5ab3eb024f3", "title": "Psychological Well-Being in Adult Life", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "b069df19dec08644b9534a6229cec4199d69828e", "title": "ENN: Extended Nearest Neighbor Method for Pattern Recognition [Research Frontier]", "text": "This article introduces a new supervised classification method - the extended nearest neighbor (ENN) - that predicts input patterns according to the maximum gain of intra-class coherence. Unlike the classic k-nearest neighbor (KNN) method, in which only the nearest neighbors of a test sample are used to estimate a group membership, the ENN method makes a prediction in a \"two-way communication\" style: it considers not only who are the nearest neighbors of the test sample, but also who consider the test sample as their nearest neighbors. By exploiting the generalized class-wise statistics from all training data by iteratively assuming all the possible class memberships of a test sample, the ENN is able to learn from the global distribution, therefore improving pattern recognition performance and providing a powerful technique for a wide range of data analysis applications."}
{"_id": "50ed709761f57895b50346a8249814a6f66f6c89", "title": "Fast and Accurate Template Matching Using Pixel Rearrangement on the GPU", "text": "A GPU (Graphics Processing Unit) is a specialized processor for graphics processing. GPUs have the ability to perform high-speed parallel processing using its many processing cores. To utilize the powerful computing ability, GPUs are widely used for general purpose processing. The main contribution of this paper is to show a new template matching algorithm using pixel rearrangement. Template Matching is a technique for finding small parts of an image which match a template image. The feature of our proposed algorithm is that using pixel rearrangement, multiple low-resolution images are generated and template matching for the low-resolution images is performed to reduce the computing time. Also, we implemented our algorithm on a GPU system. The experimental results show that, for an input image with size of 4096 $\\times$ 4096 and a template image with size of 256 $\\times$ 256, our implementation can achieve a speedup factor of approximately 78 times over the conventional sequential implementation."}
{"_id": "5676987f4b421f6ef9380d889d32b36e1e2179b6", "title": "Visual Digital Signature Scheme: A New Approach", "text": "\u2014A digital signature is an important public-key primitive that performs the function of conventional handwritten signatures for entity authentication, data integrity, and non-repudiation, especially within the electronic commerce environment. Currently, most conventional digital signature schemes are based on mathematical hard problems. These mathematical algorithms require computers to perform the heavy and complex computations to generate and verify the keys and signatures. In 1995, Naor and Shamir proposed a visual cryptography (VC) for binary images. VC has high security and requires simple computations. The purpose of this paper is to provide an alternative to the current digital signature technology. In this paper, we introduce a new digital signature scheme based on the concept of a non-expansion visual cryptography. A visual digital signature scheme is a method to enable visual verification of the authenticity of an image in an insecure environment without the need to perform any complex computations. Our proposed scheme generates visual shares and manipulates them using the simple Boolean operations OR rather than generating and computing large and long random integer values as in the conventional digital signature schemes currently in use."}
{"_id": "05df100ebcf58826324641a20fd14eb838439a6c", "title": "Analyzing Stability in Wide-Area Network Performance", "text": "The Internet is a very large scale, complex, dynamical system that is hard to model and analyze. In this paper, we develop and analyze statistical models for the observed end-to-end network performance based on extensive packet-level traces (consisting of approximately 1.5 billion packets) collected from the primary Web site for the Atlanta Summer Olympic Games in 1996. We find that observed mean throughputs for these transfers measured over 60 million complete connections vary widely as a function of end-host location and time of day, confirming that the Internet is characterized by a large degree of heterogeneity. Despite this heterogeneity, we find (using best-fit linear regression techniques) that we can express the throughput for Web transfers to most hosts as a random variable with a log-normal distribution. Then, using observed throughput as the control parameter, we attempt to quantify the spatial (statistical similarity across neighboring hosts) and temporal (persistence over time) stability of network performance. We find that Internet hosts that are close to each other often have almost identically distributed probability distributions of throughput. We also find that throughputs to individual hosts often do not change appreciably for several minutes. Overall, these results indicate that there is promise in protocol mechanisms that cache and share network characteristics both within a single host and amongst nearby hosts."}
{"_id": "070096ce36bba240b39b5ddb7bc6071311478843", "title": "Learning to Grade Short Answer Questions using Semantic Similarity Measures and Dependency Graph Alignments", "text": "In this work we address the task of computerassisted assessment of short student answers. We combine several graph alignment features with lexical semantic similarity measures using machine learning techniques and show that the student answers can be more accurately graded than if the semantic measures were used in isolation. We also present a first attempt to align the dependency graphs of the student and the instructor answers in order to make use of a structural component in the automatic grading of student answers."}
{"_id": "21dd2790b76a57b42191b19a54505837f3969141", "title": "Tuned Models of Peer Assessment in MOOCs", "text": "In massive open-access online courses (MOOCs), peer grading serves as a critical tool for scaling the grading of complex, open-ended assignments to courses with tens or hundreds of thousands of students. But despite promising initial trials, it does not always deliver accurate results compared to human experts. In this paper, we develop algorithms for estimating and correcting for grader biases and reliabilities, showing significant improvement in peer grading accuracy on real data with 63,199 peer grades from Coursera\u2019s HCI course offerings \u2014 the largest peer grading networks analysed to date. We relate grader biases and reliabilities to other student factors such as engagement, performance as well as commenting style. We also show that our model can lead to more intelligent assignment of graders to gradees."}
{"_id": "3b073bf632aa91628d134a828911ff82706b8a32", "title": "The critical importance of retrieval for learning.", "text": "Learning is often considered complete when a student can produce the correct answer to a question. In our research, students in one condition learned foreign language vocabulary words in the standard paradigm of repeated study-test trials. In three other conditions, once a student had correctly produced the vocabulary item, it was repeatedly studied but dropped from further testing, repeatedly tested but dropped from further study, or dropped from both study and test. Repeated studying after learning had no effect on delayed recall, but repeated testing produced a large positive effect. In addition, students' predictions of their performance were uncorrelated with actual performance. The results demonstrate the critical role of retrieval practice in consolidating learning and show that even university students seem unaware of this fact."}
{"_id": "50fcb0e5f921357b2ec96be9a75bfd3169e8f8da", "title": "Personalized Online Education - A Crowdsourcing Challenge", "text": "Interest in online education is surging, as dramatized by the success of Khan Academy and recent Stanford online courses, but the technology for online education is in its infancy. Crowdsourcing mechanisms will likely be essential in order to reach the full potential of this medium. This paper sketches some of the challenges and directions we hope HCOMP researchers will ad-"}
{"_id": "7545f90299a10dae1968681f6bd268b9b5ab2c37", "title": "Powergrading: a Clustering Approach to Amplify Human Effort for Short Answer Grading", "text": "We introduce a new approach to the machine-assisted grading of short answer questions. We follow past work in automated grading by first training a similarity metric between student responses, but then go on to use this metric to group responses into clusters and subclusters. The resulting groupings allow teachers to grade multiple responses with a single action, provide rich feedback to groups of similar answers, and discover modalities of misunderstanding among students; we refer to this amplification of grader effort as \u201cpowergrading.\u201d We develop the means to further reduce teacher effort by automatically performing actions when an answer key is available. We show results in terms of grading progress with a small \u201cbudget\u201d of human actions, both from our method and an LDA-based approach, on a test corpus of 10 questions answered by 698 respondents."}
{"_id": "e638bad00cbe2467dacc1b69876c21b776ad8d3b", "title": "EARLY DEVELOPMENTS OF A PARALLELLY ACTUATED HUMANOID , SAFFIR", "text": "This paper presents the design of our new 33 degree of freedom full size humanoid robot, SAFFiR (Shipboard Autonomous Fire Fighting Robot). The goal of this research project is to realize a high performance mixed force and position controlled robot with parallel actuation. The robot has two 6 DOF legs and arms, a waist, neck, and 3 DOF hands/fingers. The design is characterized by a central lightweight skeleton actuated with modular ballscrew driven force controllable linear actuators arranged in a parallel fashion around the joints. Sensory feedback on board the robot includes an inertial measurement unit, force and position output of each actuator, as well as 6 axis force/torque measurements from the feet. The lower body of the robot has been fabricated and a rudimentary walking algorithm implemented while the upper body fabrication is completed. Preliminary walking experiments show that parallel actuation successfully minimizes the loads through individual actuators."}
{"_id": "526f6208e5d0c9ef3eaa491b6a650110876ab574", "title": "QuizRDF: search technology for the semantic Web", "text": "An information-seeking system is described which combines traditional keyword querying of WWW resources with the ability to browse and query against RDF annotations of those resources. RDF(S) and RDF are used to specify and populate an ontology and the resultant RDF annotations are then indexed along with the full text of the annotated resources. The resultant index allows both keyword querying against the full text of the document and the literal values occurring in the RDF annotations, along with the ability to browse and query the ontology. We motivate our approach as a key enabler for fully exploiting the semantic Web in the area of knowledge management and argue that the ability to combine searching and browsing behaviours more fully supports a typical information-seeking task. The approach is characterised as \"low threshold, high ceiling\" in the sense that where RDF annotations exist they are exploited for an improved information-seeking experience but where they do not yet exist, a search capability is still available."}
{"_id": "cb615377f990c0b43cfa6fae3755d5a0da418b2f", "title": "Squadron: Incentivizing Quality-Aware Mission-Driven Crowd Sensing", "text": "Recent years have witnessed the success of mobile crowd sensing systems, which outsource sensory data collection to the public crowd equipped with various mobile devices in a wide spectrum of civilian applications. We envision that crowd sensing could as well be very useful in a whole host of mission-driven scenarios, such as peacekeeping operations, non-combatant evacuations, and humanitarian missions. However, the power of crowd sensing could not be fully unleashed in mission-driven crowd sensing (MiCS) systems, unless workers are effectively incentivized to participate. Therefore, in this paper, taking into consideration workers' diverse quality of information (QoI), we propose Squadron, a quality-aware incentive mechanism for MiCS systems. Squadron adopts the reverse auction framework. It approximately minimizes the platform's total payment for worker recruiting in a computationally efficient manner, and recruits workers who potentially could provide high quality data. Furthermore, it also satisfies the desirable properties of truth-fulness and individual rationality. Through rigorous theoretical analysis, as well as extensive simulations, we validate the various aforementioned desirable properties held by Squadron."}
{"_id": "a7b9af6fe95f0c17f85a940b1a71d1e3cdfa2109", "title": "Where to Add Actions in Human-in-the-Loop Reinforcement Learning", "text": "In order for reinforcement learning systems to learn quickly in vast action spaces such as the space of all possible pieces of text or the space of all images, leveraging human intuition and creativity is key. However, a human-designed action space is likely to be initially imperfect and limited; furthermore, humans may improve at creating useful actions with practice or new information. Therefore, we propose a framework in which a human adds actions to a reinforcement learning system over time to boost performance. In this setting, however, it is key that we use human effort as efficiently as possible, and one significant danger is that humans waste effort adding actions at places (states) that aren\u2019t very important. Therefore, we propose Expected Local Improvement (ELI), an automated method which selects states at which to query humans for a new action. We evaluate ELI on a variety of simulated domains adapted from the literature, including domains with over a million actions and domains where the simulated experts change over time. We find ELI demonstrates excellent empirical performance, even in settings where the synthetic \u201cexperts\u201d are quite poor."}
{"_id": "f6bb1c45e63783785e97ef99b4fe718847d10261", "title": "Bad Subsequences of Well-Known Linear Congruential Pseudo-Random Number Generators", "text": "We present a spectral test analysis of full-period subsequences with small step sizes generated by well-known linear congruential pseudorandom number generators. Subsequences may occur in certain simulation problems or as a method to get parallel streams of pseudorandom numbers. Applying the spectral test, it is possible to find bad subsequences with small step sizes for almost all linear pseudorandom number generators currently in use."}
{"_id": "fd4f24af30d64ca6375016249dc145b1f114ddc9", "title": "An Introduction to Temporal Graphs: An Algorithmic Perspective", "text": "A temporal graph is, informally speaking, a graph that changes with time. When time is discrete and only the relationships between the participating entities may change and not the entities themselves, a temporal graph may be viewed as a sequence G1, G2 . . . , Gl of static graphs over the same (static) set of nodes V . Though static graphs have been extensively studied, for their temporal generalization we are still far from having a concrete set of structural and algorithmic principles. Recent research shows that many graph properties and problems become radically different and usually substantially more difficult when an extra time dimension is added to them. Moreover, there is already a rich and rapidly growing set of modern systems and applications that can be naturally modeled and studied via temporal graphs. This, further motivates the need for the development of a temporal extension of graph theory. We survey here recent results on temporal graphs and temporal graph problems that have appeared in the Computer Science community."}
{"_id": "c4622d4a8d582c887904a9d0f2714a1dae794c1b", "title": "Analytical Solution of Air-Gap Field in Permanent-Magnet Motors Taking Into Account the Effect of Pole Transition Over Slots", "text": "We present an analytical method to study magnetic fields in permanent-magnet brushless motors, taking into consideration the effect of stator slotting. Our attention concentrates particularly on the instantaneous field distribution in the slot regions where the magnet pole transition passes over the slot opening. The accuracy in the flux density vector distribution in such regions plays a critical role in the prediction of the magnetic forces, i.e., the cogging torque and unbalanced magnetic pull. However, the currently available analytical solutions for calculating air-gap fields in permanent magnet motors can estimate only the distribution of the flux density component in the radial direction. Magnetic field and forces computed by the new analytical method agree well with those obtained by the finite-element method. The analytical method provides a useful tool for design and optimization of permanent-magnet motors."}
{"_id": "1e627fb686eaacaf66452f9ecad066a4311abfb4", "title": "Learning to learn with the informative vector machine", "text": "This paper describes an efficient method for learning the parameters of a Gaussian process (GP). The parameters are learned from multiple tasks which are assumed to have been drawn independently from the same GP prior. An efficient algorithm is obtained by extending the informative vector machine (IVM) algorithm to handle the multi-task learning case. The multi-task IVM (MTIVM) saves computation by greedily selecting the most informative examples from the separate tasks. The MT-IVM is also shown to be more efficient than random sub-sampling on an artificial data-set and more effective than the traditional IVM in a speaker dependent phoneme recognition task."}
{"_id": "7a36bcc25b605394c8a61d39b6e4187653aacf98", "title": "Test setup for multi-finger gripper control based on robot operating system (ROS)", "text": "This paper presents the concept for a test setup to prototype control algorithms for a multi-finger gripper. The human-robot interface has to provide enough degrees of freedom (DOF) to intuitively control advanced gripper and to accomplish this task a simple sensor glove equipped with flex and force sensors has been prepared for this project. The software architecture has to support both real hardware and simulation, as well as flexible communication standards, therefore, ROS architecture was employed in this matter. Paper presents some preliminary results for using sensor glove and simulated model of the three finger gripper."}
{"_id": "a63c3f53584fd50e27ac0f2dcbe28c7361b5adff", "title": "Integrated Phased Array Systems in Silicon", "text": "Silicon offers a new set of possibilities and challenges for RF, microwave, and millimeter-wave applications. While the high cutoff frequencies of the SiGe heterojunction bipolar transistors and the ever-shrinking feature sizes of MOSFETs hold a lot of promise, new design techniques need to be devised to deal with the realities of these technologies, such as low breakdown voltages, lossy substrates, low-Q passives, long interconnect parasitics, and high-frequency coupling issues. As an example of complete system integration in silicon, this paper presents the first fully integrated 24-GHz eight-element phased array receiver in 0.18-/spl mu/m silicon-germanium and the first fully integrated 24-GHz four-element phased array transmitter with integrated power amplifiers in 0.18-/spl mu/m CMOS. The transmitter and receiver are capable of beam forming and can be used for communication, ranging, positioning, and sensing applications."}
{"_id": "64da24aad2e99514ab26d093c19cebec07350099", "title": "Low cost Ka-band transmitter for CubeSat systems", "text": "CubeSat platforms grow increasingly popular in commercial ventures as alternative solutions for global Internet networks deep space exploration, and aerospace research endeavors. Many technology companies and system engineers plan to implement small satellite systems as part of global Low Earth Orbit (LEO) inter-satellite constellations. High performing low cost hardware is of key importance in driving these efforts. This paper presents the heterodyne architecture and performance of Ka-Band Integrated Transmitter Assembly (ITA) Module, which could be implemented in nano/microsatellite or other satellite systems as a low-cost solution for high data rate space communication systems. The module converts a 0.9 to 1.1 GHz IF input signal to deliver linear transmission of +29 dBm at 26.7 to 26.9 GHz frequency range with a built-in phase locked oscillator, integrated transmitter, polarizer, and lens corrected antenna."}
{"_id": "ed202833d8c1f5c432c1e24abf3945c2e0ef91c5", "title": "Identifying Harm Events in Clinical Care through Medical Narratives", "text": "Preventable medical errors are estimated to be among the leading causes of injury and death in the United States. To prevent such errors, healthcare systems have implemented patient safety and incident reporting systems. These systems enable clinicians to report unsafe conditions and cases where patients have been harmed due to errors in medical care. These reports are narratives in natural language and while they provide detailed information about the situation, it is non-trivial to perform large scale analysis for identifying common causes of errors and harm to the patients. In this work, we present a method for identifying harm events in patient care and categorize the harm event types based on their severity level. We show that our method which is based on convolutional and recurrent networks with an attention mechanism is able to significantly improve over the existing methods on two large scale datasets of patient reports."}
{"_id": "891df42f3b1284e93128e5de23bcdf0b329700f4", "title": "Quality of life in the anxiety disorders: a meta-analytic review.", "text": "There has been significant interest in the impact of anxiety disorders on quality of life. In this meta-analytic review, we empirically evaluate differences in quality of life between patients with anxiety disorders and nonclinical controls. Thirty-two patient samples from 23 separate studies (N=2892) were included in the analysis. The results yielded a large effect size indicating poorer quality of life among anxiety disorder patients vs. controls and this effect was observed across all anxiety disorders. Compared to control samples, no anxiety disorder diagnosis was associated with significantly poorer overall quality of life than was any other anxiety disorder diagnosis. Examination of specific domains of QOL suggests that impairments may be particularly prominent among patients with post-traumatic stress disorder. QOL domains of mental health and social functioning were associated with the highest levels of impairment among anxiety disorder patients. These findings are discussed in the context of future research on the assessment of quality of life in the anxiety disorders."}
{"_id": "56ca21e44120ee31dcc2c7fd963a3567a037ca6f", "title": "An Investigation into the Use of Common Libraries in Android Apps", "text": "The packaging model of Android apps requires the entire code necessary for the execution of an app to be shipped into one single apk file. Thus, an analysis of Android apps often visits code which is not part of the functionality delivered by the app. Such code is often contributed by the common libraries which are used pervasively by all apps. Unfortunately, Android analyses, e.g., for piggybacking detection and malware detection, can produce inaccurate results if they do not take into account the case of library code, which constitute noise in app features. Despite some efforts on investigating Android libraries, the momentum of Android research has not yet produced a complete set of common libraries to further support in-depth analysis of Android apps. In this paper, we leverage a dataset of about 1.5 million apps from Google Play to harvest potential common libraries, including advertisement libraries. With several steps of refinements, we finally collect by far the largest set of 1,113 libraries supporting common functionality and 240 libraries for advertisement. We use the dataset to investigates several aspects of Android libraries, including their popularity and their proportion in Android app code. Based on these datasets, we have further performed several empirical investigations to confirm the motivations behind our work."}
{"_id": "6b13a159caebd098aa0f448ae24e541e58319a64", "title": "' s personal copy Core , animal reminder , and contamination disgust : Three kinds of disgust with distinct personality , behavioral , physiological , and clinical correlates", "text": "We examined the relationships between sensitivity to three kinds of disgust (core, animalreminder, and contamination) and personality traits, behavioral avoidance, physiological responding, and anxiety disorder symptoms. Study 1 revealed that these disgusts are particularly associated with neuroticism and behavioral inhibition. Moreover, the three disgusts showed a theoretically consistent pattern of relations on four disgust-relevant behavioral avoidance tasks in Study 2. Similar results were found in Study 3 such that core disgust was significantly related to increased physiological responding during exposure to vomit, while animal-reminder disgust was specifically related to physiological responding during exposure to blood. Lastly, Study 4 revealed that each of the three disgusts showed a different pattern of relations with fear of contamination, fear of animals, and fear of blood\u2013 injury relevant stimuli. These findings provide support for the convergent and divergent validity of core, animal-reminder, and contamination disgust. These findings also highlight the possibility that the three kinds of disgust may manifest as a function of different psychological mechanisms (i.e., oral incorporation, mortality defense, disease avoidance) that may give rise to different clinical conditions. However, empirical examination of the mechanisms that underlie the three disgusts will require further refinement of the psychometric properties of the disgust scale. 2008 Elsevier Inc. All rights reserved."}
{"_id": "38732356b452e098d30026d036622461c6e8a3f5", "title": "A primer on spatial modeling and analysis in wireless networks", "text": "The performance of wireless networks depends critically on their spatial configuration, because received signal power and interference depend critically on the distances between numerous transmitters and receivers. This is particularly true in emerging network paradigms that may include femtocells, hotspots, relays, white space harvesters, and meshing approaches, which are often overlaid with traditional cellular networks. These heterogeneous approaches to providing high-capacity network access are characterized by randomly located nodes, irregularly deployed infrastructure, and uncertain spatial configurations due to factors like mobility and unplanned user-installed access points. This major shift is just beginning, and it requires new design approaches that are robust to spatial randomness, just as wireless links have long been designed to be robust to fading. The objective of this article is to illustrate the power of spatial models and analytical techniques in the design of wireless networks, and to provide an entry-level tutorial."}
{"_id": "72d078429890dcf213d5e959d21fb84adc99d4df", "title": "Digital Literacy: A Conceptual Framework for Survival Skills in the Digital Era", "text": "Digital literacy involves more than the mere ability to use software or operate a digital device; it includes a large variety of complex cognitive, motor, sociological, and emotional skills, which users need in order to function effectively in digital environments. The tasks required in this context include, for example, \u201creading\u201d instructions from graphical displays in user interfaces; using digital reproduction to create new, meaningful materials from existing ones; constructing knowledge from a nonlinear, hypertextual navigation; evaluating the quality and validity of information; and have a mature and realistic understanding of the \u201crules\u201d that prevail in the cyberspace. This newly emerging concept of digital literacy may be used as a measure of the quality of learners\u2019 work in digital environments, and provide scholars and developers with a more effective means of communication in designing better user-oriented environments. This article proposes a holistic, refined conceptual framework for digital literacy, which includes photo-visual literacy; reproduction literacy; branching literacy; information literacy; and socioemotional literacy."}
{"_id": "5ed7b57af3976e635a08f4dc13bb2a2aca760dbf", "title": "A 0.003 mm$^{2}$ 10 b 240 MS/s 0.7 mW SAR ADC in 28 nm CMOS With Digital Error Correction and Correlated-Reversed Switching", "text": "This paper describes a single-channel, calibration-free Successive-Approximation-Register (SAR) ADC with a resolution of 10 bits at 240 MS/s. A DAC switching technique and an addition-only digital error correction technique based on the non-binary search are proposed to tackle the static and dynamic non-idealities attributed to capacitor mismatch and insufficient DAC settling. The conversion speed is enhanced, and the power and area of the DAC are also reduced by 40% as a result. In addition, a switching scheme lifting the input common mode of the comparator is proposed to further enhance the speed. Moreover, the comparator employs multiple feedback paths for an enhanced regeneration strength to alleviate the metastable problem. Occupying an active area of 0.003 mm 2 and dissipating 0.68 mW from 1 V supply at 240 MS/s in 28 nm CMOS, the proposed design achieves an SNDR of 57 dB with low-frequency inputs and 53 dB at the Nyquist input. This corresponds to a conversion efficiency of 4.8 fJ/c.-s. and 7.8 fJ/c.-s. respectively. The DAC switching technique improves the INL and DNL from +1.15/-1.01 LSB and +0.92/-0.28 LSB to within +0.55/-0.45 LSB and +0.45/-0.23 LSB, respectively. This ADC is at least 80% smaller and 32% more power efficient than reported state-of-the-art ADCs of similar resolutions and Nyquist bandwidths larger than 75 MHz."}
{"_id": "39339e14ae221cd154354ec1d30d23c8681348c5", "title": "Adhesive capsulitis: review of imaging findings, pathophysiology, clinical presentation, and treatment options", "text": "Adhesive capsulitis, commonly referred to as \u201cfrozen shoulder,\u201d is a debilitating condition characterized by progressive pain and limited range of motion about the glenohumeral joint. It is a condition that typically affects middle-aged women, with some evidence for an association with endocrinological, rheumatological, and autoimmune disease states. Management tends to be conservative, as most cases resolve spontaneously, although a subset of patients progress to permanent disability. Conventional arthrographic findings include decreased capsular distension and volume of the axillary recess when compared with the normal glenohumeral joint, in spite of the fact that fluoroscopic visualization alone is rarely carried out today in favor of magnetic resonance imaging (MRI). MRI and MR arthrography (MRA) have, in recent years, allowed for the visualization of several characteristic signs seen with this condition, including thickening of the coracohumeral ligament, axillary pouch and rotator interval joint capsule, in addition to the obliteration of the subcoracoid fat triangle. Additional findings include T2 signal hyperintensity and post-contrast enhancement of the joint capsule. Similar changes are observable on ultrasound. However, the use of ultrasound is most clearly established for image-guided injection therapy. More aggressive therapies, including arthroscopic release and open capsulotomy, may be indicated for refractory disease, with arthroscopic procedures favored because of their less invasive nature and relatively high success rate."}
{"_id": "21d6baca37dbcb35cf263dd93ce306f6013a1ff5", "title": "Issues in building general letter to sound rules", "text": "In generaltext-to-speechsystems, it is notpossibleto guaranteethat a lexiconwill containall wordsfoundin a text, thereforesomesystemfor predictingpronunciationfrom theword itself is necessary . Herewe presenta generalframework for building letter to sound (LTS) rulesfrom a word list in a language.The techniquecanbe fully automatic,thougha small amountof handseedingcangive betterresults.We have appliedthis techniqueto English(UK and US), Frenchand German. The generatedmodelsachieve, 75%, 58%, 93% and89%, respecti vely, wordscorrectfor held out data from theword lists. To testour modelson moretypical datawe alsoanalyzedgeneral text, to find which wordsdo not appearin our lexicon. Theseunknown wordswereusedasamorerealistictestcorpusfor ourmodels. We also discussthe distribution and type of suchunknown words."}
{"_id": "d7e58d05232ed02d4bffba943e4523706971913b", "title": "Telecommunication Fraud Detection Using Data Mining techniques", "text": "This document presents the final report of the thesis \u201cTelecommunication Fraud Detection Using Data Mining Techniques\u201d, were a study is made over the effect of the unbalanced data, generated by the Telecommunications Industry, in the construction and performance of classifiers that allows the detection and prevention of frauds. In this subject, an unbalanced data set is characterized by an uneven class distribution where the amount of fraudulent instances (positive) is substantially smaller than the amount normal instances (negative). This will result in a classifier which is most likely to classify data has belonging to the normal class then to the fraud class. At first, an overall inspection is made over the data characteristics and the Naive Bayes model, which is the classifier selected to do the anomaly detection on these experiments. After the characteristics are presented, a feature engineering stage is done with the intent to extend the information contained in the data creating a depper relation with the data itself and model characteristics. A previously proposed solution that consists on undersampling the most abundant class (normal) before building the model, is presented and tested. In the end, the new proposals are presented. The first proposal is to study the effects of changing the intrinsic class distribution parameter in the Naive Bayes model and evaluate its performance. The second proposal consists in estimating margin values that when applied to the model output, attempt to bring more positive instances from previous negative classification. All of these suggested models are validated over a monte-carlo experiment, using data with and without the engineered features."}
{"_id": "8124c8f871c400dcbdba87aeb16938c86b068688", "title": "An Analysis of Buck Converter Efficiency in PWM / PFM Mode with Simulink", "text": "This technical paper takes a study into efficiency comparison between PWM and PFM control modes in DC-DC buck converters. Matlab Simulink Models are built to facilitate the analysis of various effects on power loss and converting efficiency, including different load conditions, gate switching frequency, setting of voltage and current thresholds, etc. From efficiency vs. load graph, a best switching frequency is found to achieve a good efficiency throughout the wide load range. This simulation point is then compared to theoretical predictions, justifying the effectiveness of computer based simulation. Efficiencies at two different control modes are compared to verify the improvement of PFM scheme."}
{"_id": "25523cc0bbe43885f0247398dcbf3aecf9538ce2", "title": "Robust Unit Commitment Problem with Demand Response and Wind Energy", "text": "To improve the efficiency in power generation and to reduce the greenhouse gas emission, both Demand Response (DR) strategy and intermittent renewable energy have been proposed or applied in electric power systems. However, the uncertainty and the generation pattern in wind farms and the complexity of demand side management pose huge challenges in power system operations. In this paper, we analytically investigate how to integrate DR and wind energy with fossil fuel generators to (i) minimize power generation cost; (2) fully take advantage wind energy with managed demand to reduce greenhouse emission. We first build a two-stage robust unit commitment model to obtain day-ahead generator schedules where wind uncertainty is captured by a polyhedron. Then, we extend our model to include DR strategy such that both price levels and generator schedule will be derived for the next day. For these two NP-hard problems, we derive their mathematical properties and develop a novel and analytical solution method. Our computational study on a IEEE 118 system with 36 units shows that (i) the robust unit commitment model can significantly reduce total cost and fully make use of wind energy; (ii) the cutting plane method is computationally superior to known algorithms."}
{"_id": "2e478ae969db8ff0b23e309a9ce46bcaf20c36b3", "title": "Learner Modeling for Integration Skills", "text": "Complex skill mastery requires not only acquiring individual basic component skills, but also practicing integrating such basic skills. However, traditional approaches to knowledge modeling, such as Bayesian knowledge tracing, only trace knowledge of each decomposed basic component skill. This risks early assertion of mastery or ineffective remediation failing to address skill integration. We introduce a novel integration-level approach to model learners' knowledge and provide fine-grained diagnosis: a Bayesian network based on a new kind of knowledge graph with progressive integration skills. We assess the value of such a model from multifaceted aspects: performance prediction, parameter plausibility, expected instructional effectiveness, and real-world recommendation helpfulness. Our experiments based on a Java programming tutor show that proposed model significantly improves two popular multiple-skill knowledge tracing models on all these four aspects."}
{"_id": "0b44fcbeea9415d400c5f5789d6b892b6f98daff", "title": "Building a Large Annotated Corpus of English: The Penn Treebank", "text": "In this paper, we review our experience with constructing one such large annotated corpus--the Penn Treebank, a corpus consisting of over 4.5 million words of American English. During the first three-year phase of the Penn Treebank Project (1989-1992), this corpus has been annotated for part-of-speech (POS) information. In addition, over half of it has been annotated for skeletal syntactic structure. Comments University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-93-87. This technical report is available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/237 Building A Large Annotated Corpus of English: The Penn Treebank MS-CIS-93-87 LINC LAB 260 Mitchell P. Marcus Beatrice Santorini Mary Ann Marcinkiewicz University of Pennsylvania School of Engineering and Applied Science Computer and Information Science Department Philadelphia, PA 19104-6389"}
{"_id": "21ed4912935c2ce77515791acbccce527e7266ff", "title": "INDUCING THE MORPHOLOGICAL LEXICON OF A NATURAL LANGUAGE FROM UNANNOTATED TEXT", "text": "This work presents an algorithm for the unsupervised learning, or induction, of a simple morphology of a natural language. A probabilistic maximum a posteriori model is utilized, which builds hierarchical representations for a set of morphs, which are morpheme-like units discovered from unannotated text corpora. The induced morph lexicon stores parameters related to both the \u201cmeaning\u201d and \u201cform\u201d of the morphs it contains. These parameters affect the role of the morphs in words. The model is implemented in a task of unsupervised morpheme segmentation of Finnish and English words. Very good results are obtained for Finnish and almost as good results are obtained in the English task."}
{"_id": "5c2e8ef6835ca92476fa6d703e7cc8b1955f108f", "title": "Unsupervised Learning of the Morphology of a Natural Language", "text": "This study reports the results of using minimum description length (MDL) analysis to model unsupervised learning of the morphological segmentation of European languages, using corpora ranging in size from 5,000 words to 500,000 words. We develop a set of heuristics that rapidly develop a probabilistic morphological grammar, and use MDL as our primary tool to determine whether the modifications proposed by the heuristics will be adopted or not. The resulting grammar matches well the analysis that would be developed by a human morphologist. In the final section, we discuss the relationship of this style of MDL grammatical analysis to the notion of evaluation metric in early generative grammar."}
{"_id": "0c5043108eda7d2fa467fe91e3c47d4ba08e0b48", "title": "Unsupervised Discovery of Morphemes", "text": "We present two methods for unsupervised segmentation of words into morphemelike units. The model utilized is especially suited for languages with a rich morphology, such as Finnish. The first method is based on the Minimum Description Length (MDL) principle and works online. In the second method, Maximum Likelihood (ML) optimization is used. The quality of the segmentations is measured using an evaluation method that compares the segmentations produced to an existing morphological analysis. Experiments on both Finnish and English corpora show that the presented methods perform well compared to a current stateof-the-art system."}
{"_id": "16524adee515692a50dd67a170b8e605e4b00b29", "title": "Conceptual spaces - the geometry of thought", "text": "In his book \u201cConceptual Spaces The Geometry of Thought\u201d, Peter G\u00e4rdenfors [1] presents a pioneering theory for representing conceptual knowledge, the basic construct of human thinking and reasoning [4]. The conceptual level is not seen as an alternative to traditional approaches of knowledge representation in artificial intelligence, namely symbolic or subsymbolic methods. Instead, it is meant to complement both approaches. The book is highly recommendable and worth reading, as it does not only tackle many fundamental problems of knowledge representation such as grounding [3], concept formation and similarity comparisons [2], but also outlines novel and enlightening ideas how to overcome these. The book introduces the notion of a conceptual space as a framework for representing knowledge at the conceptual level. It is motivated by contrasting it to other levels of knowledge representation: The highest level, the symbolic level, conceptualizes the human brain as a computing device. Knowledge is represented based on a language consisting of a set of symbols. Logical operations can be performed on these symbols to infer new knowledge. Human reasoning is modeled as a symbol manipulation process. Classical, symbolic artificial intelligence does not very well support central cognitive processes such as the acquisition or formation of new concepts and similarity comparisons. The lowest level, the subsymbolic knowledge representation, is oriented towards the neuro-biological structure of the human brain. Concepts are implicitly represented via activation patterns within the neural network. Learning is modeled by modifying the activation of neurons. Explicit representation of knowledge and concepts is not possible. At the intersection between the symbolic and the subsymbolic level, G\u00e4rdenfors introduces the conceptual level. The theory of conceptual spaces is based on semantic spaces with a geometric structure: A conceptual space is formed by a set of quality dimensions. One or several quality dimensions model one domain. An important example used throughout the book is the color domain represented by the quality dimensions hue, saturation and brightness. Conceptual spaces have a cognitive foundation because domains can be grounded in qualities perceivable by the human sensory apparatus. Concepts are represented as conceptual regions described by their properties on the quality dimensions. The geometric structure of conceptual spaces makes it possible to determine distances and therefore provides an inherent similarity measure by taking the distance in the conceptual space as indicator of the semantic similarity. The notion of similarity is an important construct for modeling categorization and concept formation. Using similarity for reasoning can also reflect well the vagueness typical for human reasoning. The strong focus on the cognitive foundation makes the book particularly valuable. It contains many challenging claims which are related to various disciplines by giving evidence from a wide range of literature. This shows the huge and highly interdisciplinary background of the author. Unfortunately, G\u00e4rdenfors describes his theory only at a very abstract level and forbears from describing algorithms for the formalization of his theory. 
The realization of a computational model for conceptual spaces bears many practical problems which still have to be solved. Moreover, no empirical evidence is given for his pioneering, sometimes revolutionary ideas. However, these shortcomings should be considered as challenges to solve in the future. The target audience of the book is highly interdisciplinary: since G\u00e4rdenfors tackles the problem of cognitive knowledge representation from a psychologic and computer science perspective as well as from a philosophic, neuroscience and linguistic point of view, this book is worth reading for researchers from many different areas. It is required reading for researchers in cognitive science or artificial intelligence interested in knowledge representation. The book has a clear structure and is very well written. The convincing examples throughout the book illustrate the findings very well and make it easy to understand. Therefore I would also deem G\u00e4rdenfors\u2019 book to be suitable for students as introducing literature to various problem fields in cognitive science. It gives readers from related areas the chance to look beyond one\u2019s own nose and get to know an interdisciplinary way of thinking. The book certainly meets the expectations of the highly interdisciplinary research area cognitive science."}
{"_id": "4964f9a7437070858337393a1111032efc1c2039", "title": "Radio Interface Technologies for Cooperative Transmission in 3GPP LTE-Advanced", "text": "This paper presents an overview of radio interface technologies for cooperative transmission in 3GPP LTE-Advanced, i.e., coordinated multi-point (CoMP) transmission, enhanced inter-cell interference coordination (eICIC) for heterogeneous deployments, and relay transmission techniques. This paper covers not only the technical components in the 3GPP specifications that have already been released, but also those that were discussed in the Study Item phase of LTE-Advanced, and those that are currently being discussed in 3GPP for potential specification in future LTE releases. key words: LTE-Advanced, CoMP, eICIC, relay"}
{"_id": "0e9a674280b2dabe36e540c20ce5a7a9e10361f7", "title": "Enhancing network performance in Distributed Cognitive Radio Networks using single-agent and multi-agent Reinforcement Learning", "text": "Cognitive Radio (CR) is a next-generation wireless communication system that enables unlicensed users to exploit underutilized licensed spectrum to optimize the utilization of the overall radio spectrum. A Distributed Cognitive Radio Network (DCRN) is a distributed wireless network established by a number of unlicensed users in the absence of fixed network infrastructure such as a base station. Context awareness and intelligence are the capabilities that enable each unlicensed user to observe and carry out its own action as part of the joint action on its operating environment for network-wide performance enhancement. These capabilities can be applied in various application schemes in CR networks such as Dynamic Channel Selection (DCS), congestion control, and scheduling. In this paper, we apply Reinforcement Learning (RL), including single-agent and multi-agent approaches, to achieve context awareness and intelligence. Firstly, we show that the RL approach achieves a joint action that provides better network-wide performance in respect to DCS in DCRNs. The multi-agent approach is shown to provide higher levels of stability compared to the single-agent approach. Secondly, we show that RL achieves high level of fairness. Thirdly, we show the effects of network density and various essential parameters in RL on the network-wide performance."}
{"_id": "abbb5854f5583703fc41112d40d7fe13742de81d", "title": "The integration of the internal and external milieu in the insula during dynamic emotional experiences", "text": "Whilst external events trigger emotional responses, interoception (the perception of internal physiological states) is fundamental to core emotional experience. By combining high resolution functional neuroimaging with concurrent physiological recordings, we investigated the neural mechanisms of interoceptive integration during free listening to an emotionally salient audio film. We found that cardiac activity, a key interoceptive signal, was robustly synchronised across participants and centrally represented in the posterior insula. Effective connectivity analysis revealed that the anterior insula, specifically tuned to the emotionally salient moments of the audio stream, serves as an integration hub of interoceptive processing: interoceptive states represented in the posterior insula are integrated with exteroceptive representations by the anterior insula to highlight these emotionally salient moments. Our study for the first time demonstrates the insular hierarchy for interoceptive processing during natural emotional experience. These findings provide an ecologically-valid framework for elucidating the neural underpinnings of emotional deficits in neuropsychiatric disorders."}
{"_id": "bdb6d9baefb2a972f4ed094ae40f04e55a850f2b", "title": "User Experience Design and Agile Development : From Theory to Practice", "text": "We used existing studies on the integration of user experience design and agile methods as a basis to develop a framework for integrating UX and Agile. We performed a field study in an ongoing project of a medium-sized company in order to check if the proposed framework fits in the real world, and how some aspects of the integration of UX and Agile work in a real project. This led us to some conclusions situating contributions from practice to theory and back again. The framework is briefly described in this paper and consists of a set of practices, artifacts, and techniques derived from the literature. By combining theory and practice we were able to confirm some thoughts and identify some gaps\u2014both in the company process and in our proposed framework\u2014and drive our attention to new issues that need to be addressed. We believe that the most important issues in our case study are: UX designers cannot collaborate closely with developers because UX designers are working on multiple projects and that UX designers cannot work up front because they are too busy with too many projects at the same time."}
{"_id": "a0495616ea12b3306e3e6d25093d6dc7642e7410", "title": "Turbo-Like Beamforming Based on Tabu Search Algorithm for Millimeter-Wave Massive MIMO Systems", "text": "For millimeter-wave (mmWave) massive multiple-input-multiple-output (MIMO) systems, codebook-based analog beamforming (including transmit precoding and receive combining) is usually used to compensate the severe attenuation of mmWave signals. However, conventional beamforming schemes involve complicated search among predefined codebooks to find out the optimal pair of analog precoder and analog combiner. To solve this problem, by exploring the idea of turbo equalizer together with the tabu search (TS) algorithm, we propose a Turbo-like beamforming scheme based on TS, which is called Turbo-TS beamforming in this paper, to achieve near-optimal performance with low complexity. Specifically, the proposed Turbo-TS beamforming scheme is composed of the following two key components: 1) Based on the iterative information exchange between the base station (BS) and the user, we design a Turbo-like joint search scheme to find out the near-optimal pair of analog precoder and analog combiner; and 2) inspired by the idea of the TS algorithm developed in artificial intelligence, we propose a TS-based precoding/combining scheme to intelligently search the best precoder/combiner in each iteration of Turbo-like joint search with low complexity. Analysis shows that the proposed Turbo-TS beamforming can considerably reduce the searching complexity, and simulation results verify that it can achieve near-optimal performance."}
{"_id": "37ac5eaad66955ded22bbb50603f9d1a4f15f3d6", "title": "Multilingual representations for low resource speech recognition and keyword search", "text": "This paper examines the impact of multilingual (ML) acoustic representations on Automatic Speech Recognition (ASR) and keyword search (KWS) for low resource languages in the context of the OpenKWS15 evaluation of the IARPA Babel program. The task is to develop Swahili ASR and KWS systems within two weeks using as little as 3 hours of transcribed data. Multilingual acoustic representations proved to be crucial for building these systems under strict time constraints. The paper discusses several key insights on how these representations are derived and used. First, we present a data sampling strategy that can speed up the training of multilingual representations without appreciable loss in ASR performance. Second, we show that fusion of diverse multilingual representations developed at different LORELEI sites yields substantial ASR and KWS gains. Speaker adaptation and data augmentation of these representations improves both ASR and KWS performance (up to 8.7% relative). Third, incorporating un-transcribed data through semi-supervised learning, improves WER and KWS performance. Finally, we show that these multilingual representations significantly improve ASR and KWS performance (relative 9% for WER and 5% for MTWV) even when forty hours of transcribed audio in the target language is available. Multilingual representations significantly contributed to the LORELEI KWS systems winning the OpenKWS15 evaluation."}
{"_id": "1aa0522b561b2732e5a206dfa37ccdd62c3890c5", "title": "A geometry-based soft shadow volume algorithm using graphics hardware", "text": "Most previous soft shadow algorithms have either suffered from aliasing, been too slow, or could only use a limited set of shadow casters and/or receivers. Therefore, we present a strengthened soft shadow volume algorithm that deals with these problems. Our critical improvements include robust penumbra wedge construction, geometry-based visibility computation, and also simplified computation through a four-dimensional texture lookup. This enables us to implement the algorithm using programmable graphics hardware, and it results in images that most often are indistinguishable from images created as the average of 1024 hard shadow images. Furthermore, our algorithm can use both arbitrary shadow casters and receivers. Also, one version of our algorithm completely avoids sampling artifacts which is rare for soft shadow algorithms. As a bonus, the four-dimensional texture lookup allows for small textured light sources, and, even video textures can be used as light sources. Our algorithm has been implemented in pure software, and also using the GeForce FX emulator with pixel shaders. Our software implementation renders soft shadows at 0.5--5 frames per second for the images in this paper. With actual hardware, we expect that our algorithm will render soft shadows in real time. An important performance measure is bandwidth usage. For the same image quality, an algorithm using the accumulated hard shadow images uses almost two orders of magnitude more bandwidth than our algorithm."}
{"_id": "7c70c29644ff1d6dd75a8a4dd0556fb8cb13549b", "title": "Polymer\u2013Ceramic Composites for Microwave Applications: Fabrication and Performance Assessment", "text": "We present a novel technique to fabricate conformal and pliable substrates for microwave applications including systems-on-package. The produced materials are fabricated by combining ceramic powders with polymers to generate a high-contrast substrate that is concurrently pliable (bendable). Several such polymer-ceramic substrates are fabricated and used to examine the performance of a patch antenna and a coupled line filter. This paper presents the substrate mixing method while measurements are given to evaluate the loss performance of the substrates. Overall, the fabricated composites lead to flexible substrates with a permittivity of up to epsivr=20 and sufficiently low loss"}
{"_id": "90541ff36e8e9592614224276b69e057c2c03c22", "title": "Body Image and Personality in Aesthetic Plastic Surgery : A Case-Control Study", "text": "Introduction: The amount of research on the relationship between plastic surgery and psychological features, such as personality disorders and indexes of body image dissatisfaction, has increased significantly in the last years. Aim: The aim of the present study was to examine these psychological features among Italian patients who underwent aesthetic plastic surgery, testing the mediating role of the mass media influence on the relationship between them and the choice of aesthetic plastic surgery. The Personality Diagnostic Questionnaire 4+ (PDQ-4+) and the Body Uneasiness Test (BUT) were administered to patients who underwent aesthetic plastic surgery (N = 111) and participants who had no history of aesthetic plastic surgical procedures (N = 149). Results: Results showed that aesthetic patients reported higher indexes of body image disturbance than controls. No significant differences between aesthetic participants and controls were found in all three cluster B personality disorders. Moreover, the effect of body image dissatisfaction on the choice to undergo aesthetic plastic surgery was partially mediated by the influence of mass media. Conclusions: In conclusion, the present study confirmed the importance of body dissatisfaction as a predictor of the choice to undergo aesthetic surgery and highlighted the influence of media messages regarding physical appearance."}
{"_id": "225c570344c8230b9d3e8be3a5d8fd3d3a9ba267", "title": "A DTC-Based Subsampling PLL Capable of Self-Calibrated Fractional Synthesis and Two-Point Modulation", "text": "We present an analog subsampling PLL based on a digital-to-time converter (DTC), which operates with almost no performance gap (176/198 fs RMS jitter) between the integer and the worst case fractional operation, achieving -246.6 dB FOM in the worst case fractional mode. The PLL is capable of two-point, 10 Mbit/s GMSK modulation with -40.5 dB EVM around a 10.24 GHz fractional carrier. The analog nonidealities-DTC gain, DTC nonlinearity, modulating VCO bank gain, and nonlinearity-are calibrated in the background while the system operates normally. This results in ~15 dB fractional spur improvement (from -41 dBc to -56.5 dBc) during synthesis and ~15 dB EVM improvement (from -25 dB to -40.5 dB) during modulation. The paper provides an overview of the mechanisms that contribute to performance degradation in DTC-based PLL/phase modulators and presents ways to mitigate them. We demonstrate state-of-the-art performance in nanoscale CMOS for fractional-N synthesis and phase modulation."}
{"_id": "c4df7d84eb47e416af043dc20d053d3cb45e9571", "title": "Broadcast Encryption", "text": "We introduce new theoretical measures for the qualitative and quantitative assessment of encryption schemes designed for broadcast transmissions. The goal is to allow a central broadcast site to broadcast secure transmissions to an arbitrary set of recipients while minimizing key management related transmissions. We present several schemes that allow a center to broadcast a secret to any subset of privileged users out of a universe of size so that coalitions of users not in the privileged set cannot learn the secret. The most interesting scheme requires every user to store keys and the center to broadcast messages regardless of the size of the privileged set. This scheme is resilient to any coalition of users. We also present a scheme that is resilient with probability against a random subset of users. This scheme requires every user to store keys and the center to broadcast messages. Preliminary version appeared in Advances in Cryptology CRYPTO \u201993 Proceedings, Lecture Notes in Computer Science, Vol. 773, 1994, pp. 480\u2013491. Dept. of Computer Science, School of Mathematics, Tel Aviv University, Tel Aviv, Israel, and Algorithmic Research Ltd. E-mail fiat@math.tau.ac.il. Incumbent of the Morris and Rose Goldman Career Development Chair, Dept. of Applied Mathematics and Computer Science, Weizmann Institute of Science, Rehovot 76100, Israel. Research supported by an Alon Fellowship and a grant from the Israel Science Foundation administered by the Israeli Academy of Sciences. E-mail: naor@wisdom.weizmann.ac.il."}
{"_id": "751f61473d563f9018fbfac57cba5619cd39de4c", "title": "A 45nm 4Gb 3-Dimensional Double-Stacked Multi-Level NAND Flash Memory with Shared Bitline Structure", "text": "Recently, 3-dimensional (3D) memories have regained attention as a potential future memory solution featuring low cost, high density and high performance. We present a 3D double stacked 4Gb MLC NAND flash memory device with shared bitline structure, with a cell size of 0.0021mum2/bit per unit feature area. The device is designed to support 3D stacking and fabricated by S3 and 45nm floating-gate CMOS technologies."}
{"_id": "7f4773a3edd05f27e543b89986f8bfd34ed29bfa", "title": "Alobar holoprosencephaly, mobile proboscis and trisomy 13 in a fetus with maternal gestational diabetes mellitus: a 2D ultrasound diagnosis and review of the literature", "text": "Alobar holoprosencephaly is a rare and severe brain malformation due to early arrest in brain cleavage and rotation. We report a congenital anomalous fetus with alobar holoprosencephaly, prenatally diagnosed by two-dimensional (2D) sonography at the 40\u00a0weeks of gestation. The mother was affected by gestational diabetes mellitus and was obese (BMI\u00a0>\u00a030\u00a0kg/m2). 2D Ultrasound depicted the cerebral malformation, cyclopy, proboscis, cardiac defects (atrial septal defect, hypoplastic left heart, anomalous communication between right ventricle and aorta) and extremities defects. The newborn died just after delivery. External examination confirmed a mobile proboscis-like nose on the normal nose position. The fetus had both claw hands. The right and left foots showed to be equine. Autopsy confirmed the ultrasound diagnosis and chromosome analysis revealed trisomy 13 (47,XY,+13). Fetopathologic examination showed cyclopy, proboscis and alobar holoprosencephaly of the fetus, which was consistent with Patau syndrome. The teratogenic effect of diabetes on fetus has been described, but no previous clinical case of a congenital anomalous fetus with trisomy 13 and maternal gestational diabetes has been reported. This case report is the first to describe 2D ultrasound diagnosis of alobar holoprosencephaly and trisomy 13 with maternal gestational diabetes mellitus."}
{"_id": "4b4edd81ae68c42c1a570bee7fd1a1956532320a", "title": "Cloud-assisted humanoid robotics for affective interaction", "text": "In recent years, the humanoid robot is received great attention, and gradually develop to households and personal service field. The prominent progress of cloud computing, big data, and machine learning fields provides a strong support for the research of the robot. With affective interaction ability of robot has a broad market space and research value. In this paper, we propose a cloud-assisted humanoid robotics for affective interaction system architecture, and introduce the essential composition, design and implementation of related components. Finally, through an actual robot emotional interaction test platform, validating the feasibility and extendibility of proposed architecture."}
{"_id": "384ac22ddf645108d085f6f9ec6d359813776a80", "title": "Paragraph: Thwarting Signature Learning by Training Maliciously", "text": "Defending a server against Internet worms and defending a user\u2019s email inbox against spam bear certain similarities. In both cases, a stream of samples arrives, and a classifier must automatically determine whether each sample falls into a malicious target class (e.g., worm network traffic, or spam email). A learner typically generates a classifier automatically by analyzing two labeled training pools: one of innocuous samples, and one of samples that fall in the malicious target class. Learning techniques have previously found success in settings where the content of the labeled samples used in training is either random, or even constructed by a helpful teacher, who aims to speed learning of an accurate classifier. In the case of learning classifiers for worms and spam, however, an adversary controls the content of the labeled samples to a great extent. In this paper, we describe practical attacks against learning, in which an adversary constructs labeled samples that, when used to train a learner, prevent or severely delay generation of an accurate classifier. We show that even a delusive adversary, whose samples are all correctly labeled, can obstruct learning. We simulate and implement highly effective instances of these attacks against the Polygraph [15] automatic polymorphic worm signature generation algorithms."}
{"_id": "f62b9c6ef565e820d21dfa64e4aed00323a50417", "title": "Diagnostic Performance of a Smartphone\u2010Based Photoplethysmographic Application for Atrial Fibrillation Screening in a Primary Care Setting", "text": "BACKGROUND\nDiagnosing atrial fibrillation (AF) before ischemic stroke occurs is a priority for stroke prevention in AF. Smartphone camera-based photoplethysmographic (PPG) pulse waveform measurement discriminates between different heart rhythms, but its ability to diagnose AF in real-world situations has not been adequately investigated. We sought to assess the diagnostic performance of a standalone smartphone PPG application, Cardiio Rhythm, for AF screening in primary care setting.\n\n\nMETHODS AND RESULTS\nPatients with hypertension, with diabetes mellitus, and/or aged \u226565\u00a0years were recruited. A single-lead ECG was recorded by using the AliveCor heart monitor with tracings reviewed subsequently by 2 cardiologists to provide the reference standard. PPG measurements were performed by using the Cardiio Rhythm smartphone application. AF was diagnosed in 28 (2.76%) of 1013 participants. The diagnostic sensitivity of the Cardiio Rhythm for AF detection was 92.9% (95% CI] 77-99%) and was higher than that of the AliveCor automated algorithm (71.4% [95% CI 51-87%]). The specificities of Cardiio Rhythm and the AliveCor automated algorithm were comparable (97.7% [95% CI: 97-99%] versus 99.4% [95% CI 99-100%]). The positive predictive value of the Cardiio Rhythm was lower than that of the AliveCor automated algorithm (53.1% [95% CI 38-67%] versus 76.9% [95% CI 56-91%]); both had a very high negative predictive value (99.8% [95% CI 99-100%] versus 99.2% [95% CI 98-100%]).\n\n\nCONCLUSIONS\nThe Cardiio Rhythm smartphone PPG application provides an accurate and reliable means to detect AF in patients at risk of developing AF and has the potential to enable population-based screening for AF."}
{"_id": "aafef584b6c5b3e3275e42902c75ee6dfdf20942", "title": "An indicator of research front activity: Measuring intellectual organization as uncertainty reduction in document sets", "text": "When using scientific literature to model scholarly discourse, a research specialty can be operationalized as an evolving set of related documents. Each publication can be expected to contribute to the further development of the specialty at the research front. The specific combinations of title words and cited references in a paper can then be considered as a signature of the knowledge claim in the paper: new words and combinations of words can be expected to represent variation, while each paper is at the same time selectively positioned into the intellectual organization of a field using context-relevant references. Can the mutual information among these three dimensions\u2014 title words, cited references, and sequence numbers\u2014be used as an indicator of the extent to which intellectual organization structures the uncertainty prevailing at a research front? The effect of the discovery of nanotubes (1991) on the previously existing field of fullerenes is used as a test case. Thereafter, this method is applied to science studies with a focus on scientometrics using various sample delineations. An emerging research front about citation analysis can be indicated."}
{"_id": "8563b6545a8ff8d17a74da1f70f57c4a7d9a38bc", "title": "DPP-Net: Device-Aware Progressive Search for Pareto-Optimal Neural Architectures", "text": "Recent breakthroughs in Neural Architectural Search (NAS) have achieved state-of-the-art performances in applications such as image classification and language modeling. However, these techniques typically ignore device-related objectives such as inference time, memory usage, and power consumption. Optimizing neural architecture for devicerelated objectives is immensely crucial for deploying deep networks on portable devices with limited computing resources. We propose DPPNet: Device-aware Progressive Search for Pareto-optimal Neural Architectures, optimizing for both device-related (e.g., inference time and memory usage) and device-agnostic (e.g., accuracy and model size) objectives. DPP-Net employs a compact search space inspired by current state-of-the-art mobile CNNs, and further improves search efficiency by adopting progressive search (Liu et al. 2017). Experimental results on CIFAR-10 are poised to demonstrate the effectiveness of Pareto-optimal networks found by DPP-Net, for three different devices: (1) a workstation with Titan X GPU, (2) NVIDIA Jetson TX1 embedded system, and (3) mobile phone with ARM Cortex-A53. Compared to CondenseNet and NASNet (Mobile), DPP-Net achieves better performances: higher accuracy & shorter inference time on various devices. Additional experimental results show that models found by DPP-Net also achieve considerablygood performance on ImageNet as well."}
{"_id": "85947d646623ef7ed96dfa8b0eb705d53ccb4efe", "title": "Network forensic frameworks: Survey and research challenges", "text": "Network forensics is the science that deals with capture, recording, and analysis of network traffic for detecting intrusions and investigating them. This paper makes an exhaustive survey of various network forensic frameworks proposed till date. A generic process model for network forensics is proposed which is built on various existing models of digital forensics. Definition, categorization and motivation for network forensics are clearly stated. The functionality of various Network Forensic Analysis Tools (NFATs) and network security monitoring tools, available for forensics examiners is discussed. The specific research gaps existing in implementation frameworks, process models and analysis tools are identified and major challenges are highlighted. The significance of this work is that it presents an overview on network forensics covering tools, process models and framework implementations, which will be very much useful for security practitioners and researchers in exploring this upcoming and young discipline. a 2010 Elsevier Ltd. All rights reserved."}
{"_id": "4c18d626c7200957a36df64a5baefb8c9f8cda5a", "title": "What does media use reveal about personality and mental health? An exploratory investigation among German students", "text": "The present study aimed to investigate the relationship between personality traits, mental health variables and media use among German students. The data of 633 participants were collected. Results indicate a positive association between general Internet use, general use of social platforms and Facebook use, on the one hand, and self-esteem, extraversion, narcissism, life satisfaction, social support and resilience, on the other hand. Use of computer games was found to be negatively related to these personality and mental health variables. The use of platforms that focus more on written interaction (Twitter, Tumblr) was assumed to be negatively associated with positive mental health variables and significantly positively with depression, anxiety, and stress symptoms. In contrast, Instagram use, which focuses more on photo-sharing, correlated positively with positive mental health variables. Possible practical implications of the present results for mental health, as well as the limitations of the present work are discussed."}
{"_id": "2c7bfc8d75dab44aeab34b1bf5243b192112f502", "title": "Failure as a Service ( FaaS ) : A Cloud Service for Large-Scale , Online Failure Drills", "text": "Cloud computing is pervasive, but cloud service outages still take place. One might say that the computing forecast for tomorrow is \u201ccloudy with a chance of failure.\u201d One main reason why major outages still occur is that there are many unknown large-scale failure scenarios in which recovery might fail. We propose a new type of cloud service, Failure as a Service (FaaS), which allows cloud services to routinely perform large-scale failure drills in real deployments."}
{"_id": "3832b8446bf3256148868ce62d84546e48c0a7be", "title": "Young adults' experiences of seeking online information about diabetes and mental health in the age of social media", "text": "BACKGROUND\nThe Internet is a primary source of health information for many. Since the widespread adoption of social media, user-generated health-related content has proliferated, particularly around long-term health issues such as diabetes and common mental health disorders (CMHDs).\n\n\nOBJECTIVE\nTo explore perceptions and experiences of engaging with health information online in a sample of young adults familiar with social media environments and variously engaged in consuming user-generated content.\n\n\nMETHODS\nForty semi-structured interviews were conducted with young adults, aged 18-30, with experience of diabetes or CMHDs. Data were analysed following a thematic networks approach to explore key themes around online information-seeking and content consumption practices.\n\n\nRESULTS\nAlthough participants primarily discussed well-rehearsed approaches to health information-seeking online, particularly reliance on search engines, their accounts also reflected active engagement with health-related content on social media sites. Navigating between professionally produced websites and user-generated content, many of the young adults seemed to appreciate different forms of health knowledge emanating from varied sources. Participants described negotiating health content based on social media practices and features and assessing content heuristically. Some also discussed habitual consumption of content related to their condition as integrated into their everyday social media use.\n\n\nCONCLUSION\nTechnologies such as Facebook, Twitter and YouTube offer opportunities to consume and assess content which users deem relevant and useful. As users and organizations continue to colonize social media platforms, opportunities are increasing for health communication and intervention. However, how such innovations are adopted is dependent on their alignment with users' expectations and consumption practices."}
{"_id": "0a45ca7d9c6ac32eeec03ce9a6d8344d4d9aaf1c", "title": "Web-Based Virtual Learning Environments: A Research Framework and a Preliminary Assessment of Effectiveness in Basic IT Skills Training", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "391dd4ca2ee3b2995260425d09dd97d4d9aaac29", "title": "Computer Self-Efficacy: Development of a Measure and Initial Test", "text": "This paper discusses the role of individuals\u2019 beliefs about their abilities to competently use computers (computer self-efficacy) in the determination of computer use. A survey of Canadian managers and professionals was conducted to develop and validate a measure of computer self-efficacy and to assess both its impacts and antecedents. Computer selfefficacy was found to exert a significant influence on individuals\u2019 expectations of the outcomes of using computers, their emotional reactions to computers (affect and anxiety), well as their actual computer use. An individual\u2019s self-efficacy and outcome expectations were found to be positively influenced by the encouragement of others in their work group, as well as others\u2019 use of computers. Thus, self-efficacy represents an important individual trait, which moderates organizational influences (such as encouragement and support) on an individual\u2019s decision to use computers. Understanding self-efficacy, then, is important o the successful implementation of systems in Introduction Understanding the factors that influence an individual\u2019s use of information technology has been a goal of MIS research since the mid-1970s, when organizations and researchers began to find that adoption of new technology was not living up to expectations. Lucas (1975, 1978) provides some of the earliest evidence of the individual or behavioral factors that influenced IT adoption. The first theoretical perspective to gain widespread acceptance in this research was the Theory of Reasoned Action (Fishbein and Ajzen, 1975). This theory maintains that individuals would use computers if they could see that there would be positive benefits (outcomes) associated with using them. This theory is still widely used today in the IS literature and has demonstrated validity. However, there is also a growing recognition that additional explanatory variables are needed (e.g., Thompson, et al., 1991; Webster and Martocchio, 1992). One such variable, examined in this research, comes from the writings of Albert Bandura and his work on Social Cognitive Theory (Bandura, 1986). Self-efficacy, the belief that one has the capability to perform a particular behavior, is an important construct in social psychology. Self-efficacy perceptions have been found to influence decisions about what behaviors to undertake (e.g., Bandura, et al., 1977; Betz and Hackett, 1981), the effort exerted and persistence in attempting those behaviors (e.g., Barling and Beattie, 1983; Brown and Inouye, 1978), the emotional responses (including stress and anxiety) of the individual performing the behaviors (e.g., Bandura, et al., 1977; Stumpf, et a1.,1987), and the actual performance attainments of the individual with respect to the behavior (e.g., Barling M/S Quarterly/June 1995 189 Computer Self-Efficacy---Measurement and Beattie, 1983; Collins, 1985; Locke, et al., 1984; Schunk, 1981; Wood and Bandura, 1989). These effects have been shown for a wide variety of behaviors in both clinical and managerial settings. Within a management context, self-efficacy has been found to be related to attendance (Frayne and Latham, 1987; Latham and Frayne, 1989), career choice and dev,elopment (Betz and Hackett, 1981; Jones, 1986), research productivity (Taylor, et al., 1984), and sales performance (Barling and Beattie, 1983). 
Several more recent studies (Burkhardt and Brass, 1990; Gist, et a1.,1989; Hill, et al., 1986; 1987; Webster and Martocohio, 1992; 1993) have examined the relationship between self-efficacy with respect to using computers and a variety of computer behaviors. These studies found evidence of a relationship between selfefficacy and registration in computer courses at universities (Hill, et al., 1987), adoption of high technology products (Hill, et al., 1986) and innovations (Burkhardt and Brass, 1990), as well performance in software training (Gist, et al., 1989; Webster and Martocchio, 1992; 1993). All of the studies argue the need for further research to explore fully the role of self-efficacy in computing behavior. This paper describes the first study in a program of research aimed at understanding the impact of self-efficacy on individual reactions to computing technology. The study involves the development of a measure for computer self-efficacy and a test of its reliability and validity. The measure was evaluated by examining its performance in a nomological network, through structural equations modeling. A research model for this purpose was developed with reference to literature from social psychology, as well as prior IS research. The paper is organized as follows. The next section presents the theoretical foundation for this research. The third section discusses the development of the self-efficacy measure. The research model is described and the hypotheses are presented in the following section. Then, the research methodology is outlined, and the results of the analyses are presented. The paper concludes with a discussion of the implications of these findings and the strengths and limitations of the research. Theoretical Background Social Cognitive Theory Social Cognitive Theory (Bandura, 1977; 1978; 1982; 1986) is a widely accepted, empirically validated model of individual behavior. It is based on the premise that environmental influences such as social pressures or unique situational characteristics, cognitive and other personal factors including personality as well as demographic haracteristics, and behavior are reciprocally determined. Thus, individuals choose the environments in which they exist in"}
{"_id": "53393cbe2147c22e111dc3a4667bb1a1dbf537f6", "title": "Judo strategy. The competitive dynamics of Internet time.", "text": "Competition on the Internet is creating fierce battles between industry giants and small-scale start-ups. Smart start-ups can avoid those conflicts by moving quickly to uncontested ground and, when that's no longer possible, turning dominant players' strengths against them. The authors call this competitive approach judo strategy. They use the Netscape-Microsoft battles to illustrate the three main principles of judo strategy: rapid movement, flexibility, and leverage. In the early part of the browser wars, for instance, Netscape applied the principle of rapid movement by being the first company to offer a free stand-alone browser. This allowed Netscape to build market share fast and to set the market standard. Flexibility became a critical factor later in the browser wars. In December 1995, when Microsoft announced that it would \"embrace and extend\" competitors' Internet successes, Netscape failed to give way in the face of superior strength. Instead it squared off against Microsoft and even turned down numerous opportunities to craft deep partnerships with other companies. The result was that Netscape lost deal after deal when competing with Microsoft for common distribution channels. Netscape applied the principle of leverage by using Microsoft's strengths against it. Taking advantage of Microsoft's determination to convert the world to Windows or Windows NT, Netscape made its software compatible with existing UNIX systems. While it is true that these principles can't replace basic execution, say the authors, without speed, flexibility, and leverage, very few companies can compete successfully on Internet time."}
{"_id": "54f35b4edba6ddee8ce2eac489bde78308e3e708", "title": "Forward Reasoning and Dependency-Directed Backtracking in a System for Computer-Aided Circuit Analysis", "text": "We present a rule-based system for computer-aided circuit analysis. The set of rules, called EL, is written in a rule language called ARS. Rules are implemented by ARS as pattern-directed invocation demons monitoring an associative data base. Deductions are performed in an antecedent manner, giving EL's analysis a catch-as-catch-can flavour suggestive of the behavior of expert circuit analyzers. We call this style of circuit analysis propagation of constraints. The system threads deduced fact~ with justifications which mention the antecedent facts attd the rule used. These justifications may be examined by the user to gain in right into the operation of the set of rules as they apply to a problem. ~q~e same justifications are used by the system to determine the currently active dat~-base context Jor reasoning in hypothetical situations. They are also used by the system in the at~alysis of failures to reduce the search space. This leads to effective control of combinatorial search which we call dependency-directed backtracking."}
{"_id": "7d98dce77cce2d0963a3b6566f5c733ad4343ce4", "title": "Gender Differences in the Perception and Use of E-Mail: An Extension to the Technology Acceptance Model", "text": "This study extends Davis' (1989) TAM model and Straub's (1994) SPIR addendum by adding gender to an IT diffusion model. The Technology Acceptance Model (TAM) has been widely studied in IS research as an explanation of the use of information systems across IS-types and nationalities. While this line of research has found significant cross-cultural differences, it has ignored the effects of gender, even though in socio-linguistic research, gender is a fundamental aspect of culture. Indeed, sociolinguistic research that has shown that men tend to focus discourse on hierarchy and independence while women focus on intimacy and solidarity. This literature provides a solid grounding for conceptual extensions to the IT diffusion research and the Technology Acceptance Model. Testing gender differences that might relate to beliefs and use of computer-based media, the present study sampled 392 female and male responses via a cross-sectional survey instrument. The sample drew from comparable groups of knowledge workers using E-mail systems in the airline industry in North America, Asia, and Europe. Study findings indicate that women and men differ in their perceptions but not use of E-mail. These findings suggest that researchers should include gender in IT diffusion models along with other cultural effects. Managers and co-workers, moreover, need to realize that the same mode of communication may be perceived differently by the sexes, suggesting that more favorable communications environments might be created, environments that take into account not only organizational contextual factors, but also the gender of users. The creation of these environments involves not only the actual deployment of communication media, but also organizational training on communications media."}
{"_id": "032c741edc5acd8fdb7b22e02a39a83c3882211a", "title": "EDUCATIONAL DATA MINING", "text": "Computer-based learning systems can now keep detailed logs of user-system interactions, including key clicks, eye-tracking, and video data, opening up new opportunities to study how students learn with technology. Educational Data Mining (EDM; Romero, Ventura, Pechenizkiy, & Baker, 2010) is concerned with developing, researching, and applying computerized methods to detect patterns in large collections of educational data \u2013 patterns that would otherwise be hard or impossible to analyze due to the enormous volume of data they exist within. Data of interest is not restricted to interactions of individual students with an educational system (e.g., navigation behavior, input to quizzes and interactive exercises) but might also include data from collaborating students (e.g., text chat), administrative data (e.g., school, school district, teacher), and demographic data (e.g., gender, age, school grades). Data on student affect (e.g., motivation, emotional states) has also been a focus, which can be inferred from physiological sensors (e.g., facial expression, seat posture and perspiration). EDM uses methods and tools from the broader field of Data Mining (Witten & Frank, 2005), a sub-field of Computer Science and Artificial Intelligence that has been used for purposes as diverse as credit card fraud detection, analysis of gene sequences in bioinformatics, or the analysis of purchasing behaviors of customers. Distinguishing EDM features are its particular focus on educational data and problems, both theoretical (e.g., investigating a learning hypothesis) and practical (e.g., improving a learning tool). Furthermore, EDM makes a methodological contribution by developing and researching data mining techniques for educational applications. Typical steps in an EDM project include data acquisition, data preprocessing (e.g., data \u201ccleaning\u201d), data mining, and validation of results."}
{"_id": "d172ef0dafee17f08d9ef7867337878ed5cd2971", "title": "Why Do Men Get More Attention? Exploring Factors Behind Success in an Online Design Community", "text": "Online platforms are an increasingly popular tool for people to produce, promote or sell their work. However recent studies indicate that social disparities and biases present in the real world might transfer to online platforms and could be exacerbated by seemingly harmless design choices on the site (e.g., recommendation systems or publicly visible success measures). In this paper we analyze an exclusive online community of teams of design professionals called Dribbble and investigate apparent differences in outcomes by gender. Overall, we find that men produce more work, and are able to show it to a larger audience thus receiving more likes. Some of this effect can be explained by the fact that women have different skills and design different images. Most importantly however, women and men position themselves differently in the Dribbble community. Our investigation of users\u2019 position in the social network shows that women have more clustered and gender homophilous following relations, which leads them to have smaller and more closely knit social networks. Overall, our study demonstrates that looking behind the apparent patterns of gender inequalities in online markets with the help of social networks and product differentiation helps us to better understand gender differences in success and failure."}
{"_id": "6e30fa90d8c628dd10083eac76cfa5aff209d0ed", "title": "Removing ECG noise from surface EMG signals using adaptive filtering", "text": "Surface electromyograms (EMGs) are valuable in the pathophysiological study and clinical treatment for dystonia. These recordings are critically often contaminated by cardiac artefact. Our objective of this study was to evaluate the performance of an adaptive noise cancellation filter in removing electrocardiogram (ECG) interference from surface EMGs recorded from the trapezius muscles of patients with cervical dystonia. Performance of the proposed recursive-least-square adaptive filter was first quantified by coherence and signal-to-noise ratio measures in simulated noisy EMG signals. The influence of parameters such as the signal-to-noise ratio, forgetting factor, filter order and regularization factor were assessed. Fast convergence of the recursive-least-square algorithm enabled the filter to track complex dystonic EMGs and effectively remove ECG noise. This adaptive filter procedure proved a reliable and efficient tool to remove ECG artefact from surface EMGs with mixed and varied patterns of transient, short and long lasting dystonic contractions."}
{"_id": "bd4308ebc656cee9cd50ba3347f6662d0192f0a8", "title": "Forecasting the behavior of multivariate time series using neural networks", "text": "Abstract--This pal~er presents a neural network approach to multivariate time-series anal.lwis. Real world observations ~?/flottr /.'ices in three cities have been used as a benchmark in our ev~eriments, l\"eed/orward connectionist networks have bt,t,tl ~k,.signed to model flottr l , ices over the period/iom ..lltgttsl 1972 to Novt,mher 1980./or the cities ~?[Blt[]ith~. :lliltneapoli.s, and Kansas Cit): Remarkable sttccess has heen achieved ill training the lletworks to leatvt the price CllrlV [i,\" each t?/these cities and in m a k i l l g acc l t ra le price predictions. Our r\u00a2,~lt/l.s show that the it\u00a2ltral network al~proat'h is a leading contender with the statistical modeling apl,oaches."}
{"_id": "479fee2481047995353df4ba11a163c77d2eb57b", "title": "Dose-response relationship in music therapy for people with serious mental disorders: systematic review and meta-analysis.", "text": "Serious mental disorders have considerable individual and societal impact, and traditional treatments may show limited effects. Music therapy may be beneficial in psychosis and depression, including treatment-resistant cases. The aim of this review was to examine the benefits of music therapy for people with serious mental disorders. All existing prospective studies were combined using mixed-effects meta-analysis models, allowing to examine the influence of study design (RCT vs. CCT vs. pre-post study), type of disorder (psychotic vs. non-psychotic), and number of sessions. Results showed that music therapy, when added to standard care, has strong and significant effects on global state, general symptoms, negative symptoms, depression, anxiety, functioning, and musical engagement. Significant dose-effect relationships were identified for general, negative, and depressive symptoms, as well as functioning, with explained variance ranging from 73% to 78%. Small effect sizes for these outcomes are achieved after 3 to 10, large effects after 16 to 51 sessions. The findings suggest that music therapy is an effective treatment which helps people with psychotic and non-psychotic severe mental disorders to improve global state, symptoms, and functioning. Slight improvements can be seen with a few therapy sessions, but longer courses or more frequent sessions are needed to achieve more substantial benefits."}
{"_id": "192ece7d63279a6ec150ca426ff461eff53ff7db", "title": "Javalanche: efficient mutation testing for Java", "text": "To assess the quality of a test suite, one can use mutation testing - seeding artificial defects (mutations) into the program and checking whether the test suite finds them. Javalanche is an open source framework for mutation testing Java programs with a special focus on automation, efficiency, and effectiveness. In particular, Javalanche assesses the impact of individual mutations to effectively weed out equivalent mutants; it has been demonstrated to work on programs with up to 100,000 lines of code."}
{"_id": "a9c2a6b2ada15fa042c61e65277272ad746d7bb5", "title": "Embedded Trench Redistribution Layers (RDL) by Excimer Laser Ablation and Surface Planer Processes", "text": "This paper reports the demonstration of 2-5 \u00b5m embedded trench formation in dry film polymer dielectrics such as Ajinomoto build-up film (ABF) and Polyimide without using chemical mechanical polishing (CMP) process. The trenches in these dielectrics were formed by excimer laser ablation, followed by metallization of trenches by copper plating processes and overburden removal with surface planer tool. The materials and processes integrated in this work are scalable to large panel fabrication at much higher throughput, for interposers and high density fan-out packaging at lower cost and higher performance than silicon interposers."}
{"_id": "831ed2a5f40861866b4ebfe60257b997701e38e2", "title": "ESPRIT-estimation of signal parameters via rotational invariance techniques", "text": "High-resolution signal parameter estimation is a problem of significance in many signal processing applications. Such applications include direction-of-arrival (DOA) estimation, system identification, and time series analysis. A novel approach to the general problem of signal parameter estimation is described. Although discussed in the context of direction-of-arrival estimation, ESPRIT can be applied to a wide variety of problems including accurate detection and estimation of sinusoids in noise. It exploits an underlying rotational invariance among signal subspaces induced by an array of sensors with a translational invariance structure. The technique, when applicable, manifests significant performance and computational advantages over previous algorithms such as MEM, Capon's MLM, and MUSIC."}
{"_id": "e1d0880dbb789448c14f9d656a4c88ec3da8f8c4", "title": "Assessment of Knowledge , Perception and Practice of Maternal Nutrition Among Pregnant Mother Attending Antenatal Care in Selected Health", "text": "Nutrition during preconception as well as throughout pregnancy has a major impact on the outcome of pregnancy. Therefore, the aim of this study was designed to assess knowledge, perception and practices of maternal nutrition among pregnant women during antenatal care in selected health center of Horo Guduru Wollega zone. A facility-based cross-sectional study design was conducted on total 405 pregnant mothers from January to June, 2017. A semi structured interview and questionnaires was used to collect information in the areas of socio-demographic, knowledge, perception, and practices towards maternal nutrition among pregnant mothers. Statistical Package for Social Sciences (SPSS) version 20.0 was used to perform descriptive statistics. The result obtained during study indicates that 63.5%, 70.6% and74.6% of pregnant mother good knowledge, perception and practices respectively while 36.5%, 29.4% and 25.4% was poor knowledge, perception and practice respectively. This study clearly indicated that less than half of pregnant mother\u2019s attending antenatal care in the study area had poor knowledge, perception and practices. Therefore, nutrition education should be intensified to improve the overall knowledge, perception, and practices of pregnant mothers towards maternal nutrition in different villages, health centers, health posts and hospitals."}
{"_id": "4ec6bd672aaaa075b42a751099eb9317857e6e0c", "title": "Novelty and diversity metrics for recommender systems: choice, discovery and relevance", "text": "There is an increasing realization in the Recommender Systems (RS) field that novelty and diversity are fundamental qualities of recommendation effectiveness and added-value. We identify however a gap in the formalization of novelty and diversity metrics \u2013and a consensus around them\u2013 comparable to the recent proposals in IR diversity. We study a formal characterization of different angles that RS novelty and diversity may take from the end-user viewpoint, aiming to contribute to a formal definition and understanding of different views and meanings of these magnitudes under common groundings. Building upon this, we derive metric schemes that take item position and relevance into account, two aspects not generally addressed in the novelty and diversity metrics reported in the RS literature."}
{"_id": "915dca0156f3eb8d1ee6e66796da770f02cd3b88", "title": "Comparative evaluation of soft-switching concepts for bi-directional buck+boost dc-dc converters", "text": "Soft-switching techniques are an enabling technology to further reduce the losses and the volume of automotive dc-dc converters, utilized to interconnect the high voltage battery or ultra-capacitor to the dc-link of a Hybrid Electrical Vehicle (HEV) or a Fuel Cell Vehicle (FCV). However, as the performance indices of a power electronics converter, such as efficiency and power density, are competing and moreover dependent on the underlying specifications and technology node, a comparison of different converter topologies naturally demands detailed analytical models. Therefore, to investigate the performance of the ARCP, CF-ZVS-M, SAZZ and ZCT-QZVT soft-switching converters, the paper discusses in detail the advantages and drawbacks of each concept, and the impact of the utilized semiconductor technology and silicon area on the converter efficiency. The proposed analytical models that correlate semiconductor, capacitor and inductor losses with the component volume furthermore allow for a comparison of power density and to find the \u03b7-\u03c1-Pareto-Front of the CF-ZVS-M converter."}
{"_id": "beb0ad490e55296b7b79ecb029c7b59c01a0e524", "title": "Development of a robotic finger with an active dual-mode twisting actuation and a miniature tendon tension sensor", "text": "A robot finger is developed with enhanced grasping force and speed using active dual-mode twisting actuation, which is a type of twisted string actuation. This actuation system has two twisting modes (Speed Mode and Force Mode) with different radii of the twisted string, and the twisting mode can be changed automatically. Therefore, the actuator operates like an automatic transmission having two transmission ratios, and the robot finger can generate a large grasping force or fast motion depending on the twisting mode. In addition, a miniature tendon tension sensor is developed and embedded in the fingertip for automatic mode change and grasping force control. A relation between the fingertip force and the tension of the tendon is derived by kinematic and kinetic analyses of the robot finger. The performance of the robotic finger and the tension sensor are verified by experimental results."}
{"_id": "30c8d01077ab942802320899eb0b49c4567591d3", "title": "Twitter Sentiment Analysis via Bi-sense Emoji Embedding and Attention-based LSTM", "text": "Sentiment analysis on large-scale social media data is important to bridge the gaps between social media contents and real world activities including political election prediction, individual and public emotional status monitoring and analysis, and so on. Although textual sentiment analysis has been well studied based on platforms such as Twitter and Instagram, analysis of the role of extensive emoji uses in sentiment analysis remains light. In this paper, we propose a novel scheme for Twitter sentiment analysis with extra attention on emojis. We first learn bi-sense emoji embeddings under positive and negative sentimental tweets individually, and then train a sentiment classifier by attending on these bi-sense emoji embeddings with an attention-based long short-term memory network (LSTM). Our experiments show that the bi-sense embedding is effective for extracting sentiment-aware embeddings of emojis and outperforms the state-of-the-art models. We also visualize the attentions to show that the bi-sense emoji embedding provides better guidance on the attention mechanism to obtain a more robust understanding of the semantics and sentiments."}
{"_id": "65f55691cc3bad6ca224e2144083c9deb2b2cd1d", "title": "A Look into 30 Years of Malware Development from a Software Metrics Perspective", "text": "During the last decades, the problem of malicious and unwanted software (malware) has surged in numbers and sophistication. Malware plays a key role in most of today\u2019s cyber attacks and has consolidated as a commodity in the underground economy. In this work, we analyze the evolution of malware since the early 1980s to date from a software engineering perspective. We analyze the source code of 151 malware samples and obtain measures of their size, code quality, and estimates of the development costs (effort, time, and number of people). Our results suggest an exponential increment of nearly one order of magnitude per decade in aspects such as size and estimated effort, with code quality metrics similar to those of regular software. Overall, this supports otherwise confirmed claims about the increasing complexity of malware and its production progressively becoming an industry."}
{"_id": "7f9f42fb6166b66ae0c26a3d373edc91b369796d", "title": "Maturity Models for Systems Thinking", "text": "Recent decades have seen a rapid increase in the complexity of goods, products, and services that society has come to demand. This has necessitated a corresponding growth in the requirements demanded of organizational systems and the people who work in them. The competence a person requires to be effective in working in such systems has become an area of increased interest to scholars and practitioners in many disciplines. How can we assess the degree to which a person is executing the competencies required to do good systems work? Several industries now utilize maturity models in the attempt to evaluate and cultivate people\u2019s ability to effectively execute complex tasks. This paper will examine current thought regarding the value and pitfalls of maturity models. It will identify principles and exemplars that could guide the development of a Maturity Model of Systems Thinking Competence (MMSTC) for the varied roles people inhabit in systems contexts."}
{"_id": "50b5a0a6ceb1c8d1dbdaba6e843b4c6947f02d62", "title": "Learning Attractor Dynamics for Generative Memory", "text": "A central challenge faced by memory systems is the robust retrieval of a stored pattern in the presence of interference due to other stored patterns and noise. A theoretically well-founded solution to robust retrieval is given by attractor dynamics, which iteratively clean up patterns during recall. However, incorporating attractor dynamics into modern deep learning systems poses difficulties: attractor basins are characterised by vanishing gradients, which are known to make training neural networks difficult. In this work, we avoid the vanishing gradient problem by training a generative distributed memory without simulating the attractor dynamics. Based on the idea of memory writing as inference, as proposed in the Kanerva Machine, we show that a likelihood-based Lyapunov function emerges from maximising the variational lower-bound of a generative memory. Experiments shows it converges to correct patterns upon iterative retrieval and achieves competitive performance as both a memory model and a generative model."}
{"_id": "962db795fbd438100f602bcc4e566ec08d868d55", "title": "Artificial intelligence theory (Basic concepts)", "text": "Taking the bionic approach as a basis, the article discusses the main concepts of the theory of artificial intelligence as a field of knowledge, which studies the principles of creation and functioning of intelligent systems based on multidimensional neural-like growing networks. The general theory of artificial intelligence includes the study of neural-like elements and multidimensional neural-like growing networks, temporary and long term memory, study of the functional organization of the \u201cbrain\u201d of the artificial intelligent systems, of the sensor system, modulating system, motor system, conditioned and unconditioned reflexes, reflexes arc (ring), motivation, purposeful behavior, of \u201cthinking\u201d, \u201cconsciousness\u201d, \u201csubconscious and artificial personality developed as a result of training and education\u201d."}
{"_id": "64ccd7ef913372962a2dc04a0fb18957fd347f74", "title": "Dual-band bandpass filter using helical resonators", "text": "This paper demonstrates non-uniform pitch helical resonators for realization of dual-band bandpass filters. By modifying the pitch and coil diameter of conventional uniform pitch helical resonators, the desired dual-band frequencies can be obtained. The external coupling and inter-resonator coupling structure are also illustrated. A non-uniform pitch helical resonator was designed and fabricated with unloaded quality-factor (>900) for both passbands. A 2nd order Butterworth filter that has passbands at 840 MHz and 2590 MHz has been designed. The simulation results show that the filter has a good selectivity with 6% and 4.3% fractional bandwidth at the passbands, respectively. The good power handling capability also suggests that the filter is applicable for microcell/picocell LTE base stations."}
{"_id": "1c8e97e9b8dce97b164be9379461fed9eb23cf3b", "title": "Automation of Attendances in Classrooms using RFID", "text": "This paper automates the design and implementation of students\u2019 attendance management system, taken into consideration easy access and time saving. The system will be used in faculty of computing and information system (FCIT) at King Abdulaziz University (KAU). Radio Frequency Identification (RFID) and wireless technology are two technologies which will be applied as infrastructure in the indoor environment. University Based Services (UBS) and tag IDs of the RFID are being used in this paper, in order to determine the attendance list for academic staff. Once the academic staff enters a classroom, he will be able to register student\u2019s presence. Therefore, an academic system for identifying and tracking attendance at computing college at KAU will be described, and hybrid RFID and Wireless LAN (WLAN) technologies will be implemented in FCIT academic environment."}
{"_id": "94638a5fdc084b2d7a9b3500b6e42cd30226ab50", "title": "Got Flow?: Using Machine Learning on Physiological Data to Classify Flow", "text": "As information technologies (IT) are both, drivers of highly engaging experiences and sources of disruptions at work, the phenomenon of flow - defined as \"the holistic sensation that people feel when they act with total involvement\" [5, p. 36] - has been suggested as promising vehicle to understand and enhance user behavior. Despite the growing relevance of flow at work, contemporary measurement approaches of flow are of subjective and retrospective nature, limiting our possibilities to investigate and support flow in a reliable and timely manner. Hence, we require objective and real-time classification of flow. To address this issue, this article combines recent theoretical considerations from psychology and experimental research on the physiology of flow with machine learning (ML). The overall aim is to build classifiers to distinguish flow states (i.e., low and high flow). Our results indicate that flow-classifiers can be derived from physiological signals. Cardiac features seem to play an important role in this process resulting in an accuracy of 72.3%. Hereby, our findings may serve as foundation for future work aiming to build flow-aware IT-systems."}
{"_id": "5204a0c2464bb64971dfb045e833bb0ca4f118fd", "title": "Measuring Universal Intelligence in Agent-Based Systems Using the Anytime Intelligence Test", "text": "This paper aims to quantify and analyze the intelligence of artificial agent collectives. A universal metric is proposed and used to empirically measure intelligence for several different agent decision controllers. Accordingly, the effectiveness of various algorithms is evaluated on a per-agent basis over a selection of abstracted, canonical tasks of different algorithmic complexities. Results reflect the different settings over which cooperative multiagent systems can be significantly more intelligent per agent than others. We identify and discuss some of the factors influencing the collective performance of these systems."}
{"_id": "754e5f3c10fca0f55916be86e0c545504b06590f", "title": "Creating Constructivist Learning Environments on the Web: The Challenge in Higher Education", "text": "Australian universities have traditionally relied on government funding to support undergraduate teaching. As the government has adopted the \u2018user-pays\u2019 principle, universities have been forced to look outside their traditional market to expand the undergraduate, post-graduate and international offerings. Alternate delivery methods in many universities have utilised web-based instruction as a basis for this move because of three perceptions: access by the target market is reasonably significant, it is a cost-effective method of delivery, and it provides global access. Since the mid sixties, the trend for both on-campus teaching and teaching at a distance has been to use behaviourist instructional strategies for subject development, which rely on the development of a set of instructional sequences with predetermined outcomes. These models, whilst applicable in a behaviourist environment, are not serving instructional designers well when the theoretical foundation for the subject outcomes is based on a constructivist approach to learning, since the constructivist group of theories places less emphasis on the sequence of instruction and more emphasis on the design of the learning environment. (Jonassen, 1994. p 35). In a web-based environment this proves to be even more challenging. This paper will review current research in design goals for web-based constructivist learning environments, and a move towards the development of models. The design of two web-based subjects will be explored in the context of the design goals developed by Duffy and Cunningham (1996 p 177) who have produced some basic assumptions that they call \u201cmetaphors we teach by\u201d. The author seeks to examine the seven goals for their relevance to the instructional designer through the examination of their relevance to the web-based subjects, both of which were framed in constructivist theory."}
{"_id": "92e99b0a460c731a5ac4aeb6dba5d9234b797795", "title": "Robust road marking detection using convex grouping method in around-view monitoring system", "text": "As the around-view monitoring (AVM) system becomes one of the essential components for advanced driver assistance systems (ADAS), many applications using AVM such as parking guidance system are actively being developed. As a key step for such applications, detecting road markings robustly is a very important issue to be solved. However, compared to the lane marking detection methods, detection of non-lane markings, such as text marks painted on the road, has been less studied so far. While some of methods for detecting non-lane markings exist, many of them are restricted to roadways only, or work poorly on AVM images. In this paper, we propose an algorithm which can robustly detect non-lane road markings on AVM images. We first propose a difference-of-Gaussian based method for extracting a connected component set, followed by a novel grouping method for grouping connected components based on convexity condition. For a classification task, we exploit the Random Forest classifier. We demonstrate the robustness and detection accuracy of our methods through various experiments by using the dataset collected from various environments."}
{"_id": "23ebda99aa7020e703be88b82ad255376c6d8878", "title": "Focused Crawling: A New Approach to Topic-Specific Web Resource Discovery", "text": "The rapid growth of the World-Wide Web poses unprecedented scaling challenges for general-purpose crawlers and search engines. In this paper we describe a new hypertext resource discovery system called a Focused Crawler. The goal of a focused crawler is to selectively seek out pages that are relevant to a pre-defined set of topics. The topics are specified not using keywords, but using exemplary documents. Rather than collecting and indexing all accessible Web documents to be able to answer all possible ad-hoc queries, a focused crawler analyzes its crawl boundary to find the links that are likely to be most relevant for the crawl, and avoids irrelevant regions of the Web. This leads to significant savings in hardware and network resources, and helps keep the crawl more up-to-date. To achieve such goal-directed crawling, we designed two hypertext mining programs that guide our crawler: a classifier that evaluates the relevance of a hypertext document with respect to the focus topics, and a distiller that identifies hypertext nodes that are great access points to many relevant pages within a few links. We report on extensive focused-crawling experiments using several topics at different levels of specificity. Focused crawling acquires relevant pages steadily while standard crawling quickly loses its way, even though they are started from the same root set. Focused crawling is robust against large perturbations in the starting set of URLs. It discovers largely overlapping sets of resources in spite of these perturbations. It is also capable of exploring out and discovering valuable resources that are dozens of links away from the start set, while carefully pruning the millions of pages that may lie within this same radius. Our anecdotes suggest that focused crawling is very effective for building high-quality collections of Web documents on specific topics, using modest desktop hardware. \uf6d9 1999 Published by Elsevier Science B.V. All rights reserved."}
{"_id": "2599131a4bc2fa957338732a37c744cfe3e17b24", "title": "A Training Algorithm for Optimal Margin Classifiers", "text": "A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of the classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms."}
{"_id": "04ada80ffb2d8a90b6366ef225a75edd6a2be0e0", "title": "Searching the world wide Web", "text": "The coverage and recency of the major World Wide Web search engines was analyzed, yielding some surprising results. The coverage of any one engine is significantly limited: No single engine indexes more than about one-third of the \"indexable Web,\" the coverage of the six engines investigated varies by an order of magnitude, and combining the results of the six engines yields about 3.5 times as many documents on average as compared with the results from only one engine. Analysis of the overlap between pairs of engines gives an estimated lower bound on the size of the indexable Web of 320 million pages."}
{"_id": "3022cfe4decc9ea7dd95af935764204924c2f9a1", "title": "WebWatcher : A Tour Guide for the World Wide Web", "text": "1 We explore the notion of a tour guide software agent for assisting users browsing the World Wide Web. A Web tour guide agent provides assistance similar to that provided by a human tour guide in a museum { it guides the user along an appropriate path through the collection, based on its knowledge of the user's interests, of the location and relevance of various items in the collection, and of the way in which others have interacted with the collection in the past. This paper describes a simple but operational tour guide, called WebWatcher, which has given over 5000 tours to people browsing CMU's School of Computer Science Web pages. WebWatcher accompanies users from page to page, suggests appropriate hyperlinks, and learns from experience to improve its advice-giving skills. We describe the learning algorithms used by WebWatcher, experimental results showing their e ectiveness, and lessons learned from this case study in Web tour guide agents."}
{"_id": "aa618ec4197c1ee34c0cc42b0624c499e119d28a", "title": "Application-specific slicing for MVNO and traffic characterization [Invited]", "text": "In this paper, we apply the concept of a software-defined data plane to defining new services for mobile virtual network operators (MVNOs) to empower them with the capability of tailoring fine-grained subscription plans that can meet users' demands. For this purpose, we have recently proposed the concept of application-and/ or device-specific slicing that classifies application-and/or device-specific traffic into slices and introduced various applications of our proposed system [Optical Fiber Communication Conf. (OFC), 2016, paper Th3I.4]. This paper reports the prototype implementation of the proposal into a real MVNO connecting customized smartphones so we can identify applications/devices from the given traffic with 100% accuracy. We first characterize the traffic patterns of popular applications to build a detailed understanding of how network resources are utilized for real users and applications and what their demands on different network resources are. Then, we classify traffic according to the devices and find that the flow characteristics of LTE RAN slices are very similar to those of 3G. We discover that most of the bandwidth is consumed by those flows with the duration of several hundreds of seconds, and the flow size is larger than 5 Mbytes in both LTE and 3G MVNO networks."}
{"_id": "fc595b4410d0e1ecdedfb8f1c68bbd5dfd2d06be", "title": "A Solution to Single Point of Failure Using Voter Replication and Disagreement Detection", "text": "This paper suggests a method, called distributed voting, to overcome the problem of the single point of failure in a TMR system used in robotics and industrial control applications. It uses time redundancy and is based on TMR with disagreement detector feature. This method masks faults occurring in the voter where the TMR system can continue its function properly. The method has been evaluated by injecting faults into Vertex2Pro and Vertex4 Xilinx FPGAs An analytical evolution is also performed. The results of both evaluation approaches show that the proposed method can improve the reliability and the mean time to failure (MTTF) of a TMR system by at least a factor of (2-RV (t)) where RV(t) is the reliability of the voter"}
{"_id": "813375e15680be8741907064ee740ce58cb94b53", "title": "Error detecting and error correcting codes", "text": "When a message is transmitted, it has the potential to get scrambled by noise. This is certainly true of voice messages, and is also true of the digital messages that are sent to and from computers. Now even sound and video are being transmitted in this manner. A digital message is a sequence of 0\u2019s and 1\u2019s which encodes a given message. More data will be added to a given binary message that will help to detect if an error has been made in the transmission of the message; adding such data is called an error-detecting code. More data may also be added to the original message so that errors made in transmission may be detected, and also to figure out what the original message was from the possibly corrupt message that was received. This type of code is an error-correcting code."}
{"_id": "52d6080eec86f20553e1264a4adb63630808773c", "title": "Millibot Trains for Enhanced Mobility", "text": "The objective of this work is to enhance the mobility of small mobile robots by enabling them to link into a train configuration capable of crossing relatively large obstacles. In particular, we are building on Millibots, semiautonomous, tracked mobile sensing/communication platforms at the 5-cm scale previously developed at Carnegie Mellon University. TheMillibot Train concept provides couplers that allow the Millibot modules to engage/disengage under computer control and joint actuators that allow lifting of one module by another and control of the whole train shape in two dimensions. A manually configurable train prototype demonstrated the ability to climb standard stairs and vertical steps nearly half the train length. A fully functional module with powered joints has been developed and several have been built and tested. Construction of a set of six modules is well underway and will allow testing of the complete train in the near future. This paper focuses on the development, design, and construction of the electromechanical hardware for the Millibot Train."}
{"_id": "d6a8b32bc767dff6919c6c9aa4b758a8af75c31c", "title": "Bach 10 Dataset---- A Versatile Polyphonic Music Dataset", "text": "This is a polyphonic music dataset which can be used for versatile research problems, such as Multi-pitch Estimation and Tracking, Audio-score Alignment, Source Separation, etc. This dataset consists of the audio recordings of each part and the ensemble of ten pieces of four-part J.S. Bach chorales, as well as their MIDI scores, the ground-truth alignment between the audio and the score, the ground-truth pitch values of each part and the ground-truth notes of each piece. The audio recordings of the four parts (Soprano, Alto, Tenor and Bass) of each piece are performed by violin, clarinet, saxophone and bassoon, respectively."}
{"_id": "9346a6fc28681eaa56fcf71816bc2070d786062e", "title": "Variational Sequential Monte Carlo", "text": "Many recent advances in large scale probabilistic inference rely on variational methods. The success of variational approaches depends on (i) formulating a flexible parametric family of distributions, and (ii) optimizing the parameters to find the member of this family that most closely approximates the exact posterior. In this paper we present a new approximating family of distributions, the variational sequential Monte Carlo (VSMC) family, and show how to optimize it in variational inference. VSMC melds variational inference (VI) and sequential Monte Carlo (SMC), providing practitioners with flexible, accurate, and powerful Bayesian inference. The VSMC family is a variational family that can approximate the posterior arbitrarily well, while still allowing for efficient optimization of its parameters. We demonstrate its utility on state space models, stochastic volatility models for financial data, and deep Markov models of brain neural circuits."}
{"_id": "68cc677342506ef6c0928994e7ece63d432c34fb", "title": "LoRa-based localization systems for noisy outdoor environment", "text": "LoRa-based (Long Range Communication based) localization systems make use of RSSI (Received Signal Strength Indicator) to locate an object properly in an outdoor environment for IoT (Internet of Things) applications in smart environment and urban networking. The accuracy of localization is highly degraded, however, by noisy environments (e.g., electronic interference, blocking). To address this issue, this paper proposes two new localization algorithms to improve localization accuracy. Furthermore, the paper develops a winner selection strategy to select a winner from the traditional localization algorithm and the two new algorithms to minimize localization errors. Our simulations and real experiments found that the two new algorithms can significantly improve localization accuracy. Additionally, the winner selection strategy effectively selects a winner from the three algorithms to improve localization accuracy further. Using our proposed algorithms in some real experiments, the localization error is only a few meters in a real measurement field longer than 100 m."}
{"_id": "51c2940efef5ea99ac6dee9ae9841eebffe0dc7d", "title": "Document-level Sentiment Inference with Social, Faction, and Discourse Context", "text": "We present a new approach for documentlevel sentiment inference, where the goal is to predict directed opinions (who feels positively or negatively towards whom) for all entities mentioned in a text. To encourage more complete and consistent predictions, we introduce an ILP that jointly models (1) sentenceand discourse-level sentiment cues, (2) factual evidence about entity factions, and (3) global constraints based on social science theories such as homophily, social balance, and reciprocity. Together, these cues allow for rich inference across groups of entities, including for example that CEOs and the companies they lead are likely to have similar sentiment towards others. We evaluate performance on new, densely labeled data that provides supervision for all pairs, complementing previous work that only labeled pairs mentioned in the same sentence. Experiments demonstrate that the global model outperforms sentence-level baselines, by providing more coherent predictions across sets of related entities."}
{"_id": "68c29b7bf1811f941040bba6c611753b8d756310", "title": "Frequency-based anomaly detection for the automotive CAN bus", "text": "The modern automobile is controlled by networked computers. The security of these networks was historically of little concern, but researchers have in recent years demonstrated their many vulnerabilities to attack. As part of a defence against these attacks, we evaluate an anomaly detector for the automotive controller area network (CAN) bus. The majority of attacks are based on inserting extra packets onto the network. But most normal packets arrive at a strict frequency. This motivates an anomaly detector that compares current and historical packet timing. We present an algorithm that measures inter-packet timing over a sliding window. The average times are compared to historical averages to yield an anomaly signal. We evaluate this approach over a range of insertion frequencies and demonstrate the limits of its effectiveness. We also show how a similar measure of the data contents of packets is not effective for identifying anomalies. Finally we show how a one-class support vector machine can use the same information to detect anomalies with high confidence."}
{"_id": "0299e59b518fcc29f08c88701e64a3e1621172fe", "title": "Graph2Seq: Scalable Learning Dynamics for Graphs", "text": "Neural networks have been shown to be an effective tool for learning algorithms over graph-structured data. However, graph representation techniques\u2014that convert graphs to real-valued vectors for use with neural networks\u2014are still in their infancy. Recent works have proposed several approaches (e.g., graph convolutional networks), but these methods have difficulty scaling and generalizing to graphs with different sizes and shapes. We present GRAPH2SEQ, a new technique that represents vertices of graphs as infinite time-series. By not limiting the representation to a fixed dimension, GRAPH2SEQ scales naturally to graphs of arbitrary sizes and shapes. GRAPH2SEQ is also reversible, allowing full recovery of the graph structure from the sequences. By analyzing a formal computational model for graph representation, we show that an unbounded sequence is necessary for scalability. Our experimental results with GRAPH2SEQ show strong generalization and new state-of-the-art performance on a variety of graph combinatorial optimization problems."}
{"_id": "c43d8a3d36973e3b830684e80a035bbb6856bcf7", "title": "Image Super-Resolution Using Very Deep Residual Channel Attention Networks", "text": "Convolutional neural network (CNN) depth is of crucial importance for image super-resolution (SR). However, we observe that deeper networks for image SR are more difficult to train. The lowresolution inputs and features contain abundant low-frequency information, which is treated equally across channels, hence hindering the representational ability of CNNs. To solve these problems, we propose the very deep residual channel attention networks (RCAN). Specifically, we propose a residual in residual (RIR) structure to form very deep network, which consists of several residual groups with long skip connections. Each residual group contains some residual blocks with short skip connections. Meanwhile, RIR allows abundant low-frequency information to be bypassed through multiple skip connections, making the main network focus on learning high-frequency information. Furthermore, we propose a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels. Extensive experiments show that our RCAN achieves better accuracy and visual improvements against state-of-the-art methods."}
{"_id": "dd7993922ac2accbb5de67c8fe1dbc6f53d632c1", "title": "Riderless bicycle with gyroscopic balancer controlled by FSMC and AFSMC", "text": "A Riderless bicycle has been developed with a gyroscopic balancer controller by a Fuzzy Sliding Mode Controller (FSMC) and an Adaptive Fuzzy Sliding Mode Controller (AFSMC). The FSMC controller was first implemented because it has better performance on controlling nonlinear systems than the one with PID control. The FSMC can also reduce the chattering phenomenon caused by SMC and the effect of linearizing a nonlinear system. Compared with other balancers, the gyroscopic balancer has a couple of advantages, such as faster system response, lower mass ratio of balancer to bicycle and relatively larger moment. To demonstrate the attributes stated above, we designed and conducted experiments, including the balancing of unmoving bicycle, unmoving bicycle with external impacts, as well as the bicycle moving forward and turning. The experimental results show that the bicycle can overcome jolts, uneven terrain and external disturbances. Furthermore, since the results of experiments are consistent with the ones of the simulation, it validates the derived bicycle dynamics model with the gyroscopic balancer and proves its robustness. However, the system's ability to resist continuous disturbance is not strong enough because of the limitation on the tilt angle of the gyroscopic balancer. Hence, we modified the control strategy by using AFSMC despite the fact that the FSMC performed better than PID control. From the simulations in Section IV, it shows that the AFSMC has better performance at resisting continuous disturbances than FSMC does. Furthermore, the abilities to balance the unmoving bicycle or moving bicycle are no less than FSMC. Thus, the AFSMC is employed to replace the FSMC. The designs of adaptive law and estimation law of AFSMC are based on the Lyapunov function to ensure the stability of the system. Experiments of the bicycle controlled by AFSMC are currently being conducted."}
{"_id": "9c0e62bec042abcdc8bce4449c2c085b03d6a567", "title": "Can visual fixation patterns improve image fidelity assessment?", "text": "This paper presents the results of a computational experiment designed to investigate the extent to which metrics of image fidelity can be improved through knowledge of where humans tend to fixate in images. Five common metrics of image fidelity were augmented using two sets of fixation data, one set obtained under task-free viewing conditions and another set obtained when viewers were asked to judge image quality. The augmented metrics were then compared to subjective ratings of the images. The results show that most metrics can be improved using eye fixation data, but a greater improvement was found using fixations obtained in the task-free condition (task-free viewing)."}
{"_id": "6a1da83440c7685f5a03e7bda17be9025e0892e3", "title": "Semantic Match Consistency for Long-Term Visual Localization", "text": "Robust and accurate visual localization across large appearance variations due to changes in time of day, seasons, or changes of the environment is a challenging problem which is of importance to application areas such as navigation of autonomous robots. Traditional featurebased methods often struggle in these conditions due to the significant number of erroneous matches between the image and the 3D model. In this paper, we present a method for scoring the individual correspondences by exploiting semantic information about the query image and the scene. In this way, erroneous correspondences tend to get a low semantic consistency score, whereas correct correspondences tend to get a high score. By incorporating this information in a standard localization pipeline, we show that the localization performance can be significantly improved compared to the state-of-the-art, as evaluated on two challenging long-term localization benchmarks."}
{"_id": "665cf8ff3a221a3468c6d4fc34c903cd42f236ae", "title": "Stabilization and Path Following of a Spherical Robot", "text": "In this paper, we present a spherical mobile robot BYQ_III, for planetary surface exploration and security tasks. The driving torque for the rolling robot is generated by a new type of mechanism equipped with a counter-pendulum. This robot is nonholonomic in nature, and underactuated. In this paper, the three-dimensional (3-D) nonlinear dynamic model is developed, then decoupled to the longitudinal and lateral motions by linearization. Two sliding-mode controllers are proposed to asymptotically stabilize the tracking errors in lean angle and spinning angular velocity, respectively, and indirectly to stabilize the desired path curvature, because the robot steers only by leaning itself to a predefined angle. For the task of path following, a path curvature controller, based on a geometrical notion, is employed. The stability and performance analyses are performed, and also the effectiveness of the controllers is shown by numerical simulations. To the best of author's knowledge, similar results could not be obtained in the previous spherical robot control system based on the dynamics. The work is of significance in understanding and developing this type of planning and controlling motions of nonholonomic systems."}
{"_id": "a3f4b702a3b273a3b9e77b286f8ce32091271201", "title": "Dynamics simulation toolbox for industrial robot manipulators", "text": "A new robot toolbox for dynamics simulation based on MATLAB Graphical User Interface (GUI) is developed for educational purposes. It is built on the previous version named as ROBOLAB performing only the kinematics analysis of robot manipulators. The toolbox presented in this paper provides interactive real-time simulation and visualization of the industrial robot manipulator\u2019s dynamics based on Langrange Euler and Newton Euler formulations. Since most of the industrial robot manipulators are produced with six degrees of freedom (DOF) such as the PUMA 560, the Fanuc ArcMate 120iB and Stanford Arm, the library of the toolbox includes sixteen fundamental 6-DOF robot manipulators with Euler wrist. The software can be used to interactively perform robotics analysis and off-line programming of robot dynamics such as forward and inverse dynamics as well as to interactively teach and simulate the basic principles of robot dynamics in a very realistic way. In order to demonstrate the user-friendly features of the toolbox much better, simulation of the NS robot manipulator (Stanford Arm) is provided as an example. 2009 Wiley Periodicals, Inc. Comput Appl Eng Educ 18: 319 330, 2010; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20262"}
{"_id": "670f313246b21c6f5fa9558b6b559f7fb086aa30", "title": "Energy-Efficient Data Collection in UAV Enabled Wireless Sensor Network", "text": "In wireless sensor networks, utilizing the unmanned aerial vehicle (UAV) as a mobile data collector for the sensor nodes (SNs) is an energy-efficient technique to prolong the network lifetime. In this letter, considering a general fading channel model for the SN-UAV links, we jointly optimize the SNs\u2019 wake-up schedule and UAV\u2019s trajectory to minimize the maximum energy consumption of all SNs, while ensuring that the required amount of data is collected reliably from each SN. We formulate our design as a mixed-integer non-convex optimization problem. By applying the successive convex optimization technique, an efficient iterative algorithm is proposed to find a sub-optimal solution. Numerical results show that the proposed scheme achieves significant network energy saving as compared to benchmark schemes."}
{"_id": "29cb1e90d1e08f0990925c8d4e8d00fa5fa49100", "title": "Towards Reading Hidden Emotions: A Comparative Study of Spontaneous Micro-Expression Spotting and Recognition Methods", "text": "Micro-expressions (MEs) are rapid, involuntary facial expressions which reveal emotions that people do not intend to show. Studying MEs is valuable as recognizing them has many important applications, particularly in forensic science and psychotherapy. However, analyzing spontaneous MEs is very challenging due to their short duration and low intensity. Automatic ME analysis includes two tasks: ME spotting and ME recognition. For ME spotting, previous studies have focused on posed rather than spontaneous videos. For ME recognition, the performance of previous studies is low. To address these challenges, we make the following contributions: (i)\u00a0We propose the first method for spotting spontaneous MEs in long videos (by exploiting feature difference contrast). This method is training free and works on arbitrary unseen videos. (ii)\u00a0We present an advanced ME recognition framework, which outperforms previous work by a large margin on two challenging spontaneous ME databases (SMIC and CASMEII). (iii)\u00a0We propose the first automatic ME analysis system (MESR), which can spot and recognize MEs from spontaneous video data. Finally, we show our method outperforms humans in the ME recognition task by a large margin, and achieves comparable performance to humans at the very challenging task of spotting and then recognizing spontaneous MEs."}
{"_id": "fa7bf12a18250018c6b38dcd5ef879a7a05a4c09", "title": "Is endoscopic polypectomy an adequate therapy for malignant colorectal adenomas? Presentation of 114 patients and review of the literature.", "text": "PURPOSE\nThis study was designed to evaluate the outcome of endoscopic polypectomy of malignant polyps with and without subsequent surgery based on histologic criteria.\n\n\nMETHODS\nConsecutive patients with invasive carcinoma in colorectal polyps endoscopically removed between 1985 and 1996 were retrospectively studied. Patients with complete resection, grading G1 or G2, and absence of vascular invasion were classified as \"low risk.\" The other patients were classified \"high risk.\" Available literature was reviewed by applying similar classification criteria.\n\n\nRESULTS\nA total of 114 patients (59 males; median age, 70 (range, 20-92) years) were included. Median polyp size was 2.5 (0.4-10) cm. After polypectomy, of 54 patients with low-risk malignant polyps, 13 died of unrelated causes after a median of 76 months, 5 had no residual tumor at surgery, and 33 were alive and well during a median follow-up of 69 (range, 9-169) months. Of 60 patients with high-risk malignant polyps, 52 had surgery (residual carcinoma 27 percent). Five of eight patients not operated had an uneventful follow-up of median 57 (range, 47-129) months. Patients in the high-risk group were significantly more likely to have an adverse outcome than those in the low-risk group (P < 0.0001). Review of 20 studies including 1,220 patients with malignant polyps revealed no patient with low-risk criteria with an adverse outcome.\n\n\nCONCLUSIONS\nFor patients with low-risk malignant polyps, endoscopic polypectomy alone seems to be adequate. In high-risk patients, the risk of adverse outcome should be weighed against the risk of surgery."}
{"_id": "77768638f4f400272b6e5970596b127663471538", "title": "A scoping review of scoping reviews: advancing the approach and enhancing the consistency", "text": "BACKGROUND\nThe scoping review has become an increasingly popular approach for synthesizing research evidence. It is a relatively new approach for which a universal study definition or definitive procedure has not been established. The purpose of this scoping review was to provide an overview of scoping reviews in the literature.\n\n\nMETHODS\nA scoping review was conducted using the Arksey and O'Malley framework. A search was conducted in four bibliographic databases and the gray literature to identify scoping review studies. Review selection and characterization were performed by two independent reviewers using pretested forms.\n\n\nRESULTS\nThe search identified 344 scoping reviews published from 1999 to October 2012. The reviews varied in terms of purpose, methodology, and detail of reporting. Nearly three-quarter of reviews (74.1%) addressed a health topic. Study completion times varied from 2 weeks to 20 months, and 51% utilized a published methodological framework. Quality assessment of included studies was infrequently performed (22.38%).\n\n\nCONCLUSIONS\nScoping reviews are a relatively new but increasingly common approach for mapping broad topics. Because of variability in their conduct, there is a need for their methodological standardization to ensure the utility and strength of evidence."}
{"_id": "d6fa2444818889afc4ab22d2e868b3c52de8ec38", "title": "A short literature review on reward-based crowdfunding", "text": "In this short article, we discuss about a popular online fundraising method, named crowdfunding. One of the most useful types is reward-based crowdfunding, where the return is tangible products. We review related researches from three streams, the conceptual research, the empirical research, and the modelling research. Some possible research directions are also discussed in the paper."}
{"_id": "f852227c81240ae7af03a7716053b08ae45e0bdf", "title": "Brain2Object: Printing Your Mind from Brain Signals with Spatial Correlation Embedding", "text": "Electroencephalography (EEG) signals are known to manifest differential patterns when individuals visually concentrate on different objects (e.g., a car). In this work, we present an end-to-end digital fabrication system , Brain2Object, to print the 3D object that an individual is observing by solely decoding visually-evoked EEG brain signal streams. We propose a unified training framework which combines multi-class Common Spatial Pattern and deep Convolutional Neural Networks to support the backend computation. Specially, a Dynamical Graph Representation of EEG signals is learned for accurately capturing the structured spatial correlations of EEG channels in an adaptive manner. A user friendly interface is developed as the system front end. Brain2Object presents a streamlined end-to-end workflow which can serve as a template for deeper integration of BCI technologies to assist with our routine activities. The proposed system is evaluated extensively using offline experiments and through an online demonstrator. For the former, we use a rich widely used public dataset and a limited but locally collected dataset. The experiment results show that our approach consistently outperforms a wide range of baseline and state-of-the-art approaches. The proof-of-concept corroborates the practicality of our approach and illustrates the ease with which such a system could be deployed."}
{"_id": "cce2dc7712e0b1fe8ae1fd0ea3148a5ab3cc3c07", "title": "Analyzing hidden populations online: topic, emotion, and social network of HIV-related users in the largest Chinese online community", "text": "BACKGROUND\nTraditional survey methods are limited in the study of hidden populations due to the hard to access properties, including lack of a sampling frame, sensitivity issue, reporting error, small sample size, etc. The rapid increase of online communities, of which members interact with others via the Internet, have generated large amounts of data, offering new opportunities for understanding hidden populations with unprecedented sample sizes and richness of information. In this study, we try to understand the multidimensional characteristics of a hidden population by analyzing the massive data generated in the online community.\n\n\nMETHODS\nBy elaborately designing crawlers, we retrieved a complete dataset from the \"HIV bar,\" the largest bar related to HIV on the Baidu Tieba platform, for all records from January 2005 to August 2016. Through natural language processing and social network analysis, we explored the psychology, behavior and demand of online HIV population and examined the network community structure.\n\n\nRESULTS\nIn HIV communities, the average topic similarity among members is positively correlated to network efficiency (r\u2009=\u20090.70, p\u2009<\u20090.001), indicating that the closer the social distance between members of the community, the more similar their topics. The proportion of negative users in each community is around 60%, weakly correlated with community size (r\u2009=\u20090.25, p\u2009=\u20090.002). It is found that users suspecting initial HIV infection or first in contact with high-risk behaviors tend to seek help and advice on the social networking platform, rather than immediately going to a hospital for blood tests.\n\n\nCONCLUSIONS\nOnline communities have generated copious amounts of data offering new opportunities for understanding hidden populations with unprecedented sample sizes and richness of information. It is recommended that support through online services for HIV/AIDS consultation and diagnosis be improved to avoid privacy concerns and social discrimination in China."}
{"_id": "1b9d022273780c5b0b7522555bd0e2c626a38e77", "title": "Locally Adaptive Color Correction for Underwater Image Dehazing and Matching", "text": "Underwater images are known to be strongly deteriorated by a combination of wavelength-dependent light attenuation and scattering. This results in complex color casts that depend both on the scene depth map and on the light spectrum. Color transfer, which is a technique of choice to counterbalance color casts, assumes stationary casts, defined by global parameters, and is therefore not directly applicable to the locally variable color casts encountered in underwater scenarios. To fill this gap, this paper introduces an original fusion-based strategy to exploit color transfer while tuning the color correction locally, as a function of the light attenuation level estimated from the red channel. The Dark Channel Prior (DCP) is then used to restore the color compensated image, by inverting the simplified Koschmieder light transmission model, as for outdoor dehazing. Our technique enhances image contrast in a quite effective manner and also supports accurate transmission map estimation. Our extensive experiments also show that our color correction strongly improves the effectiveness of local keypoints matching."}
{"_id": "20f4a023782606b96d295f466e18485b0582cb5f", "title": "Overview of the Stereo and Multiview Video Coding Extensions of the H.264/MPEG-4 AVC Standard", "text": "Significant improvements in video compression capability have been demonstrated with the introduction of the H.264/MPEG-4 advanced video coding (AVC) standard. Since developing this standard, the Joint Video Team of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) has also standardized an extension of that technology that is referred to as multiview video coding (MVC). MVC provides a compact representation for multiple views of a video scene, such as multiple synchronized video cameras. Stereo-paired video for 3-D viewing is an important special case of MVC. The standard enables inter-view prediction to improve compression capability, as well as supporting ordinary temporal and spatial prediction. It also supports backward compatibility with existing legacy systems by structuring the MVC bitstream to include a compatible \u201cbase view.\u201d Each other view is encoded at the same picture resolution as the base view. In recognition of its high-quality encoding capability and support for backward compatibility, the stereo high profile of the MVC extension was selected by the Blu-Ray Disc Association as the coding format for 3-D video with high-definition resolution. This paper provides an overview of the algorithmic design used for extending H.264/MPEG-4 AVC towards MVC. The basic approach of MVC for enabling inter-view prediction and view scalability in the context of H.264/MPEG-4 AVC is reviewed. Related supplemental enhancement information (SEI) metadata is also described. Various \u201cframe compatible\u201d approaches for support of stereo-view video as an alternative to MVC are also discussed. A summary of the coding performance achieved by MVC for both stereo- and multiview video is also provided. Future directions and challenges related to 3-D video are also briefly discussed."}
{"_id": "3b13533495ec04b7b263d9bdf82372959c9d87e6", "title": "Overview of the High Efficiency Video Coding (HEVC) Standard", "text": "High Efficiency Video Coding (HEVC) is currently being prepared as the newest video coding standard of the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group. The main goal of the HEVC standardization effort is to enable significantly improved compression performance relative to existing standards-in the range of 50% bit-rate reduction for equal perceptual video quality. This paper provides an overview of the technical features and characteristics of the HEVC standard."}
{"_id": "5c1e5e87a7cb833b046222bf631f9063c9926680", "title": "H.264/AVC over IP", "text": "H.264 is the ITU-T\u2019s new, nonbackward compatible video compression Recommendation that significantly outperforms all previous video compression standards. It consists of a video coding layer (VCL) which performs all the classic signal processing tasks and generates bit strings containing coded macroblocks, and a network adaptation layer (NAL) which adapts those bit strings in a network friendly way. The paper describes the use of H.264 coded video over best-effort IP networks, using RTP as the real-time transport protocol. After the description of the environment, the error-resilience tools of H.264 and the draft specification of the RTP payload format are introduced. Next the performance of several possible VCLand NAL-based error-resilience tools of H.264 are verified in simulations."}
{"_id": "0c8cac66c4ce059493b7b9de0abee703e44d5563", "title": "Comparison of the Coding Efficiency of Video Coding Standards\u2014Including High Efficiency Video Coding (HEVC)", "text": "The compression capability of several generations of video coding standards is compared by means of peak signal-to-noise ratio (PSNR) and subjective testing results. A unified approach is applied to the analysis of designs, including H.262/MPEG-2 Video, H.263, MPEG-4 Visual, H.264/MPEG-4 Advanced Video Coding (AVC), and High Efficiency Video Coding (HEVC). The results of subjective tests for WVGA and HD sequences indicate that HEVC encoders can achieve equivalent subjective reproduction quality as encoders that conform to H.264/MPEG-4 AVC when using approximately 50% less bit rate on average. The HEVC design is shown to be especially effective for low bit rates, high-resolution video content, and low-delay communication applications. The measured subjective improvement somewhat exceeds the improvement measured by the PSNR metric."}
{"_id": "2d22229794c2bd70c5bd1b1e4f004eb5864627a9", "title": "The Quadtree and Related Hierarchical Data Structures", "text": "A tutorial survey is presented of the quadtree and related hierarchical data structures. They are based on the principle of recursive decomposition. The emphasis is on the representation of data used in applications in image processing, computer graphics, geographic information systems, and robotics. There is a greater emphasis on region data (i.e., two-dimensional shapes) and to a lesser extent on point, curvilinear, and threedimensional data. A number of operations in which such data structures find use are examined in greater detail."}
{"_id": "08ae3f221339feb8230ae15647df51e7a5b5a13a", "title": "Software developers' perceptions of productivity", "text": "The better the software development community becomes at creating software, the more software the world seems to demand. Although there is a large body of research about measuring and investigating productivity from an organizational point of view, there is a paucity of research about how software developers, those at the front-line of software construction, think about, assess and try to improve their productivity. To investigate software developers' perceptions of software development productivity, we conducted two studies: a survey with 379 professional software developers to help elicit themes and an observational study with 11 professional software developers to investigate emergent themes in more detail. In both studies, we found that developers perceive their days as productive when they complete many or big tasks without significant interruptions or context switches. Yet, the observational data we collected shows our participants performed significant task and activity switching while still feeling productive. We analyze such apparent contradictions in our findings and use the analysis to propose ways to better support software developers in a retrospection and improvement of their productivity through the development of new tools and the sharing of best practices."}
{"_id": "485f15282478c0f72b359d3e1eb72aaaaea540d2", "title": "\"The part of me that you bring out\": ideal similarity and the Michelangelo phenomenon.", "text": "This work examines the Michelangelo phenomenon, an interpersonal model of the means by which people move closer to (vs. further from) their ideal selves. The authors propose that partner similarity--similarity to the ideal self, in particular--plays an important role in this process. Across 4 studies employing diverse designs and measurement techniques, they observed consistent evidence that when partners possess key elements of one another's ideal selves, each person affirms the other by eliciting important aspects of the other's ideals, each person moves closer to his or her ideal self, and couple well-being is enhanced. Partner similarity to the actual self also accounts for unique variance in key elements of this model. The associations of ideal similarity and actual similarity with couple well-being are fully attributable to the Michelangelo process, to partner affirmation and target movement toward the ideal self. The authors also performed auxiliary analyses to rule out several alternative interpretations of these findings."}
{"_id": "0ef948b3139a41d3e291e20d67ed2bdbba290f65", "title": "Wrong, but useful: regional species distribution models may not be improved by range\u2010wide data under biased sampling", "text": "Species distribution modeling (SDM) is an essential method in ecology and conservation. SDMs are often calibrated within one country's borders, typically along a limited environmental gradient with biased and incomplete data, making the quality of these models questionable. In this study, we evaluated how adequate are national presence-only data for calibrating regional SDMs. We trained SDMs for Egyptian bat species at two different scales: only within Egypt and at a species-specific global extent. We used two modeling algorithms: Maxent and elastic net, both under the point-process modeling framework. For each modeling algorithm, we measured the congruence of the predictions of global and regional models for Egypt, assuming that the lower the congruence, the lower the appropriateness of the Egyptian dataset to describe the species' niche. We inspected the effect of incorporating predictions from global models as additional predictor (\"prior\") to regional models, and quantified the improvement in terms of AUC and the congruence between regional models run with and without priors. Moreover, we analyzed predictive performance improvements after correction for sampling bias at both scales. On average, predictions from global and regional models in Egypt only weakly concur. Collectively, the use of priors did not lead to much improvement: similar AUC and high congruence between regional models calibrated with and without priors. Correction for sampling bias led to higher model performance, whatever prior used, making the use of priors less pronounced. Under biased and incomplete sampling, the use of global bats data did not improve regional model performance. Without enough bias-free regional data, we cannot objectively identify the actual improvement of regional models after incorporating information from the global niche. However, we still believe in great potential for global model predictions to guide future surveys and improve regional sampling in data-poor regions."}
{"_id": "03f723759694daf290c723154fbe627cf90fb8f2", "title": "Autonomous Flight for Detection, Localization, and Tracking of Moving Targets With a Small Quadrotor", "text": "In this letter, we address the autonomous flight of a small quadrotor, enabling tracking of a moving object. The 15-cm diameter, 250-g robot relies only on onboard sensors (a single camera and an inertial measurement unit) and computers, and can detect, localize, and track moving objects. Our key contributions include the relative pose estimate of a spherical target as well as the planning algorithm, which considers the dynamics of the underactuated robot, the actuator limitations, and the field of view constraints. We show simulation and experimental results to demonstrate feasibility and performance, as well as robustness to abrupt variations in target motion."}
{"_id": "2d76d34ee18340eeced16f7735405b97d3980c5f", "title": "Inferring Dockless Shared Bike Distribution in New Cities", "text": "Recently, dockless shared bike services have achieved great success and reinvented bike sharing business in China. When expanding bike sharing business into a new city, most start-ups always wish to find out how to cover the whole city with a suitable bike distribution. In this paper, we study the problem of inferring bike distribution in new cities, which is challenging. As no dockless bikes are deployed in the new city, we propose to learn insights on bike distribution from cities populated with dockless bikes. We exploit multi-source data to identify important features that affect bike distributions and develop a novel inference model combining Factor Analysis and Convolutional Neural Network techniques. The extensive experiments on real-life datasets show that the proposed solution provides significantly more accurate inference results compared with competitive prediction methods."}
{"_id": "ca5e8a9f1a3447b735af9fc71c489b0b62a50501", "title": "Design and Analysis of a 21\u201329-GHz Ultra-Wideband Receiver Front-End in 0.18-$\\mu$ m CMOS Technology", "text": "This paper reports the design and analysis of 21-29-GHz CMOS low-noise amplifier (LNA), balun and mixer in a standard 0.18-\u03bcm CMOS process for ultra-wideband automotive radar systems. To verify the proposed LNA, balun, and mixer architectures, a simplified receiver front-end comprising an LNA, a double-balanced Gilbert-cell-based mixer, and two Marchand baluns was implemented. The wideband Marchand baluns can convert the single RF and local oscillator (LO) signals to nearly perfect differential signals over the 21-29-GHz band. The performance of the mixer is improved with the current-bleeding technique and a parallel resonant inductor at the differential outputs of the RF transconductance stage. Over the 21-29-GHz band, the receiver front-end exhibits excellent noise figure of 4.6\u00b10.5 dB, conversion gain of 23.7\u00b11.4 dB, RF port reflection coefficient lower than -8.8 dB, LO-IF isolation lower than -47 dB, LO-RF isolation lower than -55 dB, and RF-IF isolation lower than -35.5 dB. The circuit occupies a chip area of 1.25\u00d71.06 mm2, including the test pads. The dc power dissipation is only 39.2 mW."}
{"_id": "ec094cc1b52188a80d16d6466444a498ca6c9161", "title": "Load-Adaptive Modulation of a Series-Resonant Inverter for All-Metal Induction Heating Applications", "text": "A conventional induction heating (IH) system has been developed to heat the pot of ferromagnetic materials because the small resistance of nonferromagnetic materials induces large resonant current to the power switch of a series-resonant IH inverter. However, the heating capability for various materials is the most important to improve the functionality and the usability of IH products. In this paper, a load-adaptive modulation (LAM) method is proposed to heat the pot made from nonferromagnetic and ferromagnetic materials. The LAM method can change the magnitude of the input voltage of the IH coil and the operating frequency of the series-resonant IH inverter according to the resistance of the pot. The operational principle and the design method are analyzed to implement the proposed LAM method and its power control. The validity of the LAM method is experimentally verified using a 2-kW prototype series-resonant IH inverter."}
{"_id": "c91081bb16bfe2ed30a05761e01eccd84b05c537", "title": "BIM: Enabling Sustainability and Asset Management through Knowledge Management", "text": "Building Information Modeling (BIM) is the use of virtual building information models to develop building design solutions and design documentation and to analyse construction processes. Recent advances in IT have enabled advanced knowledge management, which in turn facilitates sustainability and improves asset management in the civil construction industry. There are several important qualifiers and some disadvantages of the current suite of technologies. This paper outlines the benefits, enablers, and barriers associated with BIM and makes suggestions about how these issues may be addressed. The paper highlights the advantages of BIM, particularly the increased utility and speed, enhanced fault finding in all construction phases, and enhanced collaborations and visualisation of data. The paper additionally identifies a range of issues concerning the implementation of BIM as follows: IP, liability, risks, and contracts and the authenticity of users. Implementing BIM requires investment in new technology, skills training, and development of new ways of collaboration and Trade Practices concerns. However, when these challenges are overcome, BIM as a new information technology promises a new level of collaborative engineering knowledge management, designed to facilitate sustainability and asset management issues in design, construction, asset management practices, and eventually decommissioning for the civil engineering industry."}
{"_id": "4c5815796c29d44c940830118339e276f741d34a", "title": "Robot Collisions: A Survey on Detection, Isolation, and Identification", "text": "Robot assistants and professional coworkers are becoming a commodity in domestic and industrial settings. In order to enable robots to share their workspace with humans and physically interact with them, fast and reliable handling of possible collisions on the entire robot structure is needed, along with control strategies for safe robot reaction. The primary motivation is the prevention or limitation of possible human injury due to physical contacts. In this survey paper, based on our early work on the subject, we review, extend, compare, and evaluate experimentally model-based algorithms for real-time collision detection, isolation, and identification that use only proprioceptive sensors. This covers the context-independent phases of the collision event pipeline for robots interacting with the environment, as in physical human\u2013robot interaction or manipulation tasks. The problem is addressed for rigid robots first and then extended to the presence of joint/transmission flexibility. The basic physically motivated solution has already been applied to numerous robotic systems worldwide, ranging from manipulators and humanoids to flying robots, and even to commercial products."}
{"_id": "6b30fce40a8a5baae028c50914093d32ba37a60c", "title": "Empathy and social functioning in late adulthood.", "text": "OBJECTIVES\nBoth cognitive and affective empathy are regarded as essential prerequisites for successful social functioning, and recent studies have suggested that cognitive, but not affective, empathy may be adversely affected as a consequence of normal adult aging. This decline in cognitive empathy is of concern, as older adults are particularly susceptible to the negative physical and mental health consequences of loneliness and social isolation.\n\n\nMETHOD\nThe present study compared younger (N = 80) and older (N = 49) adults on measures of cognitive empathy, affective empathy, and social functioning.\n\n\nRESULTS\nWhilst older adults' self-reported and performance-based cognitive empathy was significantly reduced relative to younger adults, there were no age-related differences in affective empathy. Older adults also reported involvement in significantly fewer social activities than younger adults, and cognitive empathy functioned as a partial mediator of this relationship.\n\n\nCONCLUSION\nThese findings are consistent with theoretical models that regard cognitive empathy as an essential prerequisite for good interpersonal functioning. However, the cross-sectional nature of the study leaves open the question of causality for future studies."}
{"_id": "59a50612fbdfa3d91eb15f344f726c5b96783803", "title": "Study of the therapeutic effects of a hippotherapy simulator in children with cerebral palsy: a stratified single-blind randomized controlled trial.", "text": "OBJECTIVE\nTo investigate whether hippotherapy (when applied by a simulator) improves postural control and balance in children with cerebral palsy.\n\n\nDESIGN\nStratified single-blind randomized controlled trial with an independent assessor. Stratification was made by gross motor function classification system levels, and allocation was concealed.\n\n\nSUBJECTS\nChildren between 4 and 18 years old with cerebral palsy.\n\n\nINTERVENTIONS\nParticipants were randomized to an intervention (simulator ON) or control (simulator OFF) group after getting informed consent. Treatment was provided once a week (15 minutes) for 10 weeks.\n\n\nMAIN MEASURES\nGross Motor Function Measure (dimension B for balance and the Total Score) and Sitting Assessment Scale were carried out at baseline (prior to randomization), end of intervention and 12 weeks after completing the intervention.\n\n\nRESULTS\nThirty-eight children participated. The groups were balanced at baseline. Sitting balance (measured by dimension B of the Gross Motor Function Measure) improved significantly in the treatment group (effect size = 0.36; 95% CI 0.01-0.71) and the effect size was greater in the severely disabled group (effect size = 0.80; 95% CI 0.13-1.47). The improvements in sitting balance were not maintained over the follow-up period. Changes in the total score of the Gross Motor Function Measure and the Sitting Assessment Scale were not significant.\n\n\nCONCLUSION\nHippotherapy with a simulator can improve sitting balance in cerebral palsy children who have higher levels of disability. However, this did not lead to a change in the overall function of these children (Gross Motor Function Classification System level V)."}
{"_id": "44032090cbe8fe945da0d605c5ffe85059692dc5", "title": "Pricing in Agent Economies Using Multi-Agent Q-Learning", "text": "This paper investigates how adaptive software agents may utilize reinforcement learning algorithms such as Q-learning to make economic decisions such as setting prices in a competitive marketplace. For a single adaptive agent facing fixed-strategy opponents, ordinary Q-learning is guaranteed to find the optimal policy. However, for a population of agents each trying to adapt in the presence of other adaptive agents, the problem becomes non-stationary and history dependent, and it is not known whether any global convergence will be obtained, and if so, whether such solutions will be optimal. In this paper, we study simultaneous Q-learning by two competing seller agents in three moderately realistic economic models. This is the simplest case in which interesting multi-agent phenomena can occur, and the state space is small enough so that lookup tables can be used to represent the Q-functions. We find that, despite the lack of theoretical guarantees, simultaneous convergence to self-consistent optimal solutions is obtained in each model, at least for small values of the discount parameter. In some cases, exact or approximate convergence is also found even at large discount parameters. We show how the Q-derived policies increase profitability and damp out or eliminate cyclic price \u201cwars\u201d compared to simpler policies based on zero lookahead or short-term lookahead. In one of the models (the \u201cShopbot\u201d model) where the sellers' profit functions are symmetric, we find that Q-learning can produce either symmetric or broken-symmetry policies, depending on the discount parameter and on initial conditions."}
{"_id": "560bc93783070c023a5406f6b64e9963db7a76a8", "title": "Generating Synthetic Missing Data: A Review by Missing Mechanism", "text": "The performance evaluation of imputation algorithms often involves the generation of missing values. Missing values can be inserted in only one feature (univariate configuration) or in several features (multivariate configuration) at different percentages (missing rates) and according to distinct missing mechanisms, namely, missing completely at random, missing at random, and missing not at random. Since the missing data generation process defines the basis for the imputation experiments (configuration, missing rate, and missing mechanism), it is essential that it is appropriately applied; otherwise, conclusions derived from ill-defined setups may be invalid. The goal of this paper is to review the different approaches to synthetic missing data generation found in the literature and discuss their practical details, elaborating on their strengths and weaknesses. Our analysis revealed that creating missing at random and missing not at random scenarios in datasets comprising qualitative features is the most challenging issue in the related work and, therefore, should be the focus of future work in the field."}
{"_id": "63138eee4572ffee1d3395866da69c61be453a26", "title": "Driver fatigue detection based on eye tracking and dynamk, template matching", "text": "A vision-based real-time driver fatigue detection system is proposed for driving safely. The driver's face is located, from color images captured in a car, by using the characteristic of skin colors. Then, edge detection is used to locate the regions of eyes. In addition to being used as the dynamic templates for eye tracking in the next frame, the obtained eyes' images are also used for fatigue detection in order to generate some warning alarms for driving safety. The system is tested on a Pentium III 550 CPU with 128 MB RAM. The experiment results seem quite encouraging andpromising. The system can reach 20 frames per second for eye tracking, and the average correct rate for eye location and tracking can achieve 99.1% on four test videos. The correct rate for fatigue detection is l00%, but the average precision rate is 88.9% on the test videos."}
{"_id": "47b8c91964f4f4f0c8b63a718cb834ae7fbd200f", "title": "Effectiveness of Credit Management System on Loan Performance : Empirical Evidence from Micro Finance Sector in Kenya", "text": "Microfinance institutions in Kenya experience high levels of non-performing loans. This trend threatens viability and sustainability of MFIs and hinders the achievement of their goals. This study was aimed at assessing the effectiveness of credit management systems on loan performance of microfinance institutions. Specifically we sought to establish the effect of credit terms, client appraisal, credit risk control measures and credit collection policies on loan performance. We adopted a descriptive research design. The respondents were the credit officers of the MFIs in Meru town. Collection policy was found to have a higher effect on loan repayment with =12.74, P=0.000 at 5% significance level. Further research is recommended on the effectiveness of credit referencing on loan performance of MFIs. This study is informative in terms of public policy adjustments and firm level competences required for better operation of MFIs and it also contributes to credit management literature."}
{"_id": "89286c76c6f17fd50921584467fe44f793de3d8a", "title": "Fast Cryptographic Computation on Intel \u00ae Architecture Processors Via Function Stitching", "text": "Cryptographic applications often run more than one independent algorithm such as encryption and authentication. This fact provides a high level of parallelism which can be exploited by software and converted into instruction level parallelism to improve overall performance on modern super-scalar processors. We present fast and efficient methods of computing such pairs of functions on IA processors using a method called \" function stitching \". Instead of computing pairs of functions sequentially as is done today in applications/libraries, we replace the function calls by a single call to a composite function that implements both algorithms. The execution time of this composite function can be made significantly shorter than the sums of the execution times for the individual functions and, in many cases, close to the execution time of the slower function. Function stitching is best done at a very fine grain, interleaving the code for the individual algorithms at an instruction-level granularity. This results in excellent utilization of the execution resources in the processor core with a single thread. We show how stitching pairs of functions together in a fine-grained manner results in excellent performance on IA processors. Currently, applications perform the functions sequentially. We demonstrate performance gains of 1.4X-1.9X with stitching over the best sequential function performance. We show performance results achieved by this method on the Intel \u00ae processors based on the Westmere architecture."}
{"_id": "150b61c6111ae16526ae19b6288b53d1373822ae", "title": "Dual Quaternion Variational Integrator for Rigid Body Dynamic Simulation", "text": "We introduce a symplectic dual quaternion variational integrator(DQVI) for simulating single rigid body motion in all six degrees of freedom. Dual quaternion is used to represent rigid body kinematics and one-step Lie group variational integrator is used to conserve the geometric structure, energy and momentum of the system during the simulation. The combination of these two becomes the first Lie group variational integrator for rigid body simulation without decoupling translations and rotations. Newton-Raphson method is used to solve the recursive dynamic equation. This method is suitable for real-time rigid body simulations with high precision under large time step. DQVI respects the symplectic structure of the system with excellent long-term conservation of geometry structure, momentum and energy. It also allows the reference point and 6-by-6 inertia matrix to be arbitrarily defined, which is very convenient for a variety of engineering problems."}
{"_id": "19f5d33e6814ddab2d17a97a77bb6525db784d35", "title": "Artificial error generation for translation-based grammatical error correction", "text": "Automated grammatical error correction for language learners has attracted a lot of attention in recent years, especially after a number of shared tasks that have encouraged research in the area. Treating the problem as a translation task from 'incorrect' into 'correct' English using statistical machine translation has emerged as a state-of-the-art approach but it requires vast amounts of corrected parallel data to produce useful results. Because manual annotation of incorrect text is laborious and expensive, we can generate artificial error-annotated data by injecting errors deliberately into correct text and thus produce larger amounts of parallel data with much less effort. In this work, we review previous work on artificial error generation and investigate new approaches using random and probabilistic methods for constrained and general error correction. Our methods use error statistics from a reference corpus of learner writing to generate errors in native text that look realistic and plausible in context. We investigate a number of aspects that can play a part in the error generation process, such as the origin of the native texts, the amount of context used to find suitable insertion points, the type of information encoded by the error patterns and the output error distribution. In addition, we explore the use of linguistic information for characterising errors and train systems using different combinations of real and artificial data. Results of our experiments show that the use of artificial errors can improve system performance when they are used in combination with real learner errors, in line with previous research. These improvements are observed for both constrained and general correction, for which probabilistic methods produce the best results. We also demonstrate that systems trained on a combination of real and artificial errors can beat other highly-engineered systems and be more robust, showing that performance can be improved by focusing on the data rather than tuning system parameters. Part of our work is also devoted to the proposal of the I-measure, a new evaluation scheme that scores corrections in terms of improvement on the original text and solves known issues with existing evaluation measures. To my family and L.A. In memory of Kika."}
{"_id": "aa8b058674bc100899be204364e5a2505afba126", "title": "Amulet: Aggregating Multi-level Convolutional Features for Salient Object Detection", "text": "Fully convolutional neural networks (FCNs) have shown outstanding performance in many dense labeling problems. One key pillar of these successes is mining relevant information from features in convolutional layers. However, how to better aggregate multi-level convolutional feature maps for salient object detection is underexplored. In this work, we present Amulet, a generic aggregating multi-level convolutional feature framework for salient object detection. Our framework first integrates multi-level feature maps into multiple resolutions, which simultaneously incorporate coarse semantics and fine details. Then it adaptively learns to combine these feature maps at each resolution and predict saliency maps with the combined features. Finally, the predicted results are efficiently fused to generate the final saliency map. In addition, to achieve accurate boundary inference and semantic enhancement, edge-aware feature maps in low-level layers and the predicted results of low resolution features are recursively embedded into the learning framework. By aggregating multi-level convolutional features in this efficient and flexible manner, the proposed saliency model provides accurate salient object labeling. Comprehensive experiments demonstrate that our method performs favorably against state-of-the-art approaches in terms of near all compared evaluation metrics."}
{"_id": "3f9f0d50e4addcc3b7f9ecf177f1f902308697a2", "title": "2-MHz GaN PWM isolated SEPIC converters", "text": "The eGaN HEMTs with ZVS technique have emerged in high frequency converters to reduce the switching loss. This paper investigates an eGaN HEMT ZVS isolated SEPIC converter running up to 2 MHz. Furthermore, an inductor optimization method is proposed to ensure ZVS and reduce reverse conduction loss of the SR. The efficiency curve is optimized based on the trade-off method between the switching and conduction losses. Moreover, the embedded inductor is proposed to reduce the conduction loss of the inductor. Finally, two 2 MHz eGaN HEMT prototypes with the commercial and embedded inductor with 18\u201336 V input and 5 V/2 A output are built respectively to verify the effectiveness of the proposed methods. The converters achieve a peak efficiency of over 88.2% and power density of 52.9 W/in3, outperforming the state-of-art power module products."}
{"_id": "1a91432b291e06ff70dfb4e1295e93fd0d190472", "title": "Vehicle Tracking, Monitoring and Alerting System: A Review", "text": "The goal of this paper is to review the past work of vehicle tracking, monitoring and alerting system, to categorize various methodologies and identify new trends. Vehicle tracking, monitoring and alerting system is challenging problem. There are various challenges encounter in vehicle tracking, monitoring and alerting due to deficiency in proper real time vehicle location and problem of alerting system. GPS (Global Positioning System) is most widely used technology for vehicle tracking and keep regular monitoring of vehicle. The objective of tracking system is to manage and control the transport using GPS transreceiver to know the current location of vehicle. In number of system, RFID (Radio Frequency Identification) is chosen as one of technology implemented for bus monitoring system. GSM (Global System for Mobile Communication) is most widely used for alerting system. Alerting system is essential for providing the location and information about vehicle to passenger, owner or user."}
{"_id": "0f90fc86d84ad8757d9d5f007255917c86723bd3", "title": "Recognition of Emotional Face Expressions and Amygdala Pathology *", "text": "The amygdala is often damaged in patients with temporal lobe epilepsy, either because of the primary epileptogenic disease (e.g. sclerosis or encephalitis) or because of secondary effects of surgical interventions (e.g. lobectomy). In humans, the amygdala has been associated with a range of important emotional and social functions, in particular with deficits in emotion recognition from faces. Here we review data from recent neuropsychological research illustrating the amygdala role in processing facial expressions. We describe behavioural findings subsequent to focal lesions and possible factors that may influence the nature and severity of deficits in patients. Both bilateral and unilateral amygdala damage can impair the recog nition of facial emotions, especially fear, but such deficits are not always present and not always specific. Moreover, not all components of emotion or fear processing are impaired after amygdala damage. Dissociations have been reported between deficits in the recognition of emotions from the face of other people, and intact ability in the production of emotions in one's own facial expression [...] CRISTINZIO PERRIN, Chiara, SANDER, David, VUILLEUMIER, Patrik. Recognition of emotional face expressions and amygdala pathology. Epileptologie, 2007, vol. 3, p. 130-138"}
{"_id": "6023fc75489419de1d7c15a9b8cf01f27bf31efc", "title": "FPGA Implementations of Neural Networks - A Survey of a Decade of Progress", "text": "The first successful FPGA implementation [1] of artificial neural networks (ANNs) was published a little over a decade ago. It is timely to review the progress that has been made in this research area. This brief survey provides a taxonomy for classifying FPGA implementations of ANNs. Different implementation techniques and design issues are discussed. Future research trends are also presented."}
{"_id": "9a972b5919264016faf248b6e14ac51194ff45b2", "title": "Pedestrian detection in crowded scenes", "text": "In this paper, we address the problem of detecting pedestrians in crowded real-world scenes with severe overlaps. Our basic premise is that this problem is too difficult for any type of model or feature alone. Instead, we present an algorithm that integrates evidence in multiple iterations and from different sources. The core part of our method is the combination of local and global cues via probabilistic top-down segmentation. Altogether, this approach allows examining and comparing object hypotheses with high precision down to the pixel level. Qualitative and quantitative results on a large data set confirm that our method is able to reliably detect pedestrians in crowded scenes, even when they overlap and partially occlude each other. In addition, the flexible nature of our approach allows it to operate on very small training sets."}
{"_id": "36cd88ed2c17a596001e9c7d89533ac46c28dec0", "title": "Object Detection Using the Statistics of Parts", "text": "In this paper we describe a trainable object detector and its instantiations for detecting faces and cars at any size, location, and pose. To cope with variation in object orientation, the detector uses multiple classifiers, each spanning a different range of orientation. Each of these classifiers determines whether the object is present at a specified size within a fixed-size image window. To find the object at any location and size, these classifiers scan the image exhaustively. Each classifier is based on the statistics of localized parts. Each part is a transform from a subset of wavelet coefficients to a discrete set of values. Such parts are designed to capture various combinations of locality in space, frequency, and orientation. In building each classifier, we gathered the class-conditional statistics of these part values from representative samples of object and non-object images. We trained each classifier to minimize classification error on the training set by using Adaboost with Confidence-Weighted Predictions (Shapire and Singer, 1999). In detection, each classifier computes the part values within the image window and looks up their associated class-conditional probabilities. The classifier then makes a decision by applying a likelihood ratio test. For efficiency, the classifier evaluates this likelihood ratio in stages. At each stage, the classifier compares the partial likelihood ratio to a threshold and makes a decision about whether to cease evaluation\u2014labeling the input as non-object\u2014or to continue further evaluation. The detector orders these stages of evaluation from a low-resolution to a high-resolution search of the image. Our trainable object detector achieves reliable and efficient detection of human faces and passenger cars with out-of-plane rotation."}
{"_id": "469f5b07c8927438b79a081efacea82449b338f8", "title": "Real-Time Object Detection for \"Smart\" Vehicles", "text": "This paper presents an e cient shape-based object detection method based on Distance Transforms and describes its use for real-time vision on-board vehicles. The method uses a template hierarchy to capture the variety of object shapes; e cient hierarchies can be generated o ine for given shape distributions using stochastic optimization techniques (i.e. simulated annealing). Online, matching involves a simultaneous coarse-tone approach over the shape hierarchy and over the transformation parameters. Very large speedup factors are typically obtained when comparing this approach with the equivalent brute-force formulation; we have measured gains of several orders of magnitudes. We present experimental results on the real-time detection of tra c signs and pedestrians from a moving vehicle. Because of the highly time sensitive nature of these vision tasks, we also discuss some hardwarespeci c implementations of the proposed method as far as SIMD parallelism is concerned."}
{"_id": "4e3a22ed94c260b9143eee9fdf6d5d6e892ecd8f", "title": "A Performance Evaluation of Local Descriptors", "text": ""}
{"_id": "ac3f8372b9d893dbdb7e4b9cd3df5ed825ffb548", "title": "Twitter Sentiment Analysis: Lexicon Method, Machine Learning Method and Their Combination", "text": "This paper covers the two approaches for sentiment analysis: i) lexicon based method; ii) machine learning method. We describe several techniques to implement these approaches and discuss how they can be adopted for sentiment classification of Twitter messages. We present a comparative study of different lexicon combinations and show that enhancing sentiment lexicons with emoticons, abbreviations and social-media slang expressions increases the accuracy of lexicon-based classification for Twitter. We discuss the importance of feature generation and feature selection processes for machine learning sentiment classification. To quantify the performance of the main sentiment analysis methods over Twitter we run these algorithms on a benchmark Twitter dataset from the SemEval-2013 competition, task 2-B. The results show that machine learning method based on SVM and Naive Bayes classifiers outperforms the lexicon method. We present a new ensemble method that uses a lexicon based sentiment score as input feature for the machine learning approach. The combined method proved to produce more precise classifications. We also show that employing a cost-sensitive classifier for highly unbalanced datasets yields an improvement of sentiment classification performance up to 7%."}
{"_id": "36203a748889758656f2f8bbdcd1c2cc236f410f", "title": "A Balancing Control Strategy for a One-Wheel Pendulum Robot Based on Dynamic Model Decomposition: Simulations and Experiments", "text": "A dynamics-based posture-balancing control strategy for a new one-wheel pendulum robot (OWPR) is proposed and verified. The OWPR model includes a rolling wheel, a robot body with a steering axis, and a pendulum for lateral balancing. In constructing the dynamic model, three elements are generalized in comparison with existing robotic systems: the mass and inertia of the robot body, the \u201cI\u201d-type pendulum, and the steering motion. The dynamics of the robot are derived using a Lagrangian formulation to represent the torques of the wheel during the rolling, yawing, and pitching of the robot body, in terms of the control inputs. The OWPR dynamics are decomposed into state-space models for lateral balancing and steering, and the corresponding controller is designed to be adaptive to changes in the state variables. Simulations and experimental studies are presented that demonstrate and verify the efficiency of the proposed models and the control algorithm."}
{"_id": "d2c25b7c43abe6ab9162a01d63ab3bde7572024b", "title": "Implementation of the State of Charge Estimation with Adaptive Extended Kalman Filter for Lithium-Ion Batteries by Arduino", "text": "This study considers the use of Arduino to achieve state of charge (SOC) estimation of lithium-ion batteries by adaptive extended Kalman filter (AEKF). To implement a SOC estimator for the lithium-ion battery, we adopt a first-order RC equivalent circuit as the equivalent circuit model (ECM) of the battery. The parameters of the ECM will be identified through the designed experiments, and they will be approximated by the piecewise linear functions and then will be built into Arduino. The AEKF algorithm will also be programed into Arduino to estimate the SOC. To verify the accuracy of the SOC estimation, some lithium-ion batteries are tested at room temperature. Experimental results show that the absolute value of the steady-state SOC estimation error is small."}
{"_id": "5a4b1c95e098da013cbcd4149f177174765f4881", "title": "Multi-view Scene Flow Estimation: A View Centered Variational Approach", "text": "We present a novel method for recovering the 3D structure and scene flow from calibrated multi-view sequences. We propose a 3D point cloud parametrization of the 3D structure and scene flow that allows us to directly estimate the desired unknowns. A unified global energy functional is proposed to incorporate the information from the available sequences and simultaneously recover both depth and scene flow. The functional enforces multi-view geometric consistency and imposes brightness constancy and piecewise smoothness assumptions directly on the 3D unknowns. It inherently handles the challenges of discontinuities, occlusions, and large displacements. The main contribution of this work is the fusion of a 3D representation and an advanced variational framework that directly uses the available multi-view information. This formulation allows us to advantageously bind the 3D unknowns in time and space. Different from optical flow and disparity, the proposed method results in a nonlinear mapping between the images\u2019 coordinates, thus giving rise to additional challenges in the optimization process. Our experiments on real and synthetic data demonstrate that the proposed method successfully recovers the 3D structure and scene flow despite the complicated nonconvex optimization problem."}
{"_id": "4a28c3f80d341f9de6a8f2030d70cf11d5f531f2", "title": "B-ROC Curves for the Assessment of Classifiers over Imbalanced Data Sets", "text": "The class imbalance problem appears to be ubiquitous to a large portion of the machine learning and data mining communities. One of the key questions in this setting is how to evaluate the learning algorithms in the case of class imbalances. In this paper we introduce the Bayesian Receiver Operating Characteristic (B-ROC) curves, as a set of tradeoff curves that combine in an intuitive way, the variables that are more relevant to the evaluation of classifiers over imbalanced data sets. This presentation is based on section 4 of (C\u00e1rdenas, Baras, & Seamon 2006). Introduction The term class imbalance refers to the case when in a classification task, there are many more instances of some classes than others. The problem is that under this setting, classifiers in general perform poorly because they tend to concentrate on the large classes and disregard the ones with few examples. Given that this problem is prevalent in a wide range of practical classification problems, there has been recent interest in trying to design and evaluate classifiers faced with imbalanced data sets (Japkowicz 2000; Chawla, Japkowicz, & Ko\u0142cz 2003; Chawla, Japkowicz, & Ko\u0142z 2004). A number of approaches on how to address these issues have been proposed in the literature. Ideas such as data sampling methods, one-class learning (i.e. recognitionbased learning), and feature selection algorithms, appear to be the most active research directions for learning classifiers. On the other hand the issue of how to evaluate binary classifiers in the case of class imbalances appears to be dominated by the use of ROC curves (Ferri et al. 2004; 2005) (and to a lesser extent, by error curves (Drummond & Holte 2001)). The class imbalance problem is of particular importance in intrusion detection systems (IDSs). In this paper we present and expand some of the ideas introduced in our research for the evaluation of IDSs (C\u00e1rdenas, Baras, & Seamon 2006). In particular we claim that for heavily imbalanced data sets, ROC curves cannot provide the necessary Copyright c \u00a9 2006, American Association for Artificial Intelligence (www.aaai.org). All rights reserved. intuition for the choice of the operational point of the classifier and therefore we introduce the Bayesian-ROCs (BROCs). Furthermore we demonstrate how B-ROCs can deal with the uncertainty of class distributions by displaying the performance of the classifier under different conditions. Finally, we also show how B-ROCs can be used for comparing classifiers without any assumptions of misclassification costs. Performance Tradeoffs Before we present our formulation we need to introduce some notation and definitions. Assume that the input to the classifier is a feature-vector x. Let C be an indicator random variable denoting whether x belongs to class zero: C = 0 (the majority class) or class one: C = 1 (the minority class). The output of the classifier is denoted by A = 1 if the classifier assigns x to class one, and A = 0 if the classifier assigns x to class zero. Finally, the class imbalance problem is quantified by the probability of a positive example p = Pr[C = 1]. Most classifiers subject to the class imbalance problem are evaluated with the help of ROC curves. 
ROC curves are a tool to visualize the tradeoff between the probability of false alarm PFA \u2261 Pr[A = 1|C = 0] and the probability of detection PD \u2261 Pr[A = 1|C = 1]. Of interest to us in the intrusion detection community, is that classifiers with ROC curves achieving traditionally \u201cgood\u201d operating points such as (PFA = 0.01,PD = 1) would still generate a huge amount of false alarms in realistic scenarios. This effect is due in part to the class imbalance problem, since one of the causes for the large amount of false alarms that IDSs generate, is the enormous difference between the large amount of normal activity compared to the small amount of intrusion events. The reasoning is that because the likelihood of an attack is very small, even if an IDS fires an alarm, the likelihood of having an intrusion remains relatively small. That is, when we compute the posterior probability of intrusion given that the IDS fired an alarm, (a quantity known as the Bayesian detection rate, or the positive predictive value (PPV)), we obtain: PPV \u2261 Pr[C = 1|A = 1] = pPD pPD +(1\u2212 p)PFA (1) Therefore, if the rate of incidence of an attack is very small, for example on average only 1 out of 105 events is an attack"}
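Equation (1) is easy to evaluate for the attack example in the text; the worked numbers below show why a 1% false-alarm rate is not a "good" operating point under heavy imbalance:

```python
def ppv(p, p_d, p_fa):
    """Equation (1): positive predictive value / Bayesian detection rate."""
    return (p * p_d) / (p * p_d + (1 - p) * p_fa)

# A classically "good" ROC point (P_FA = 0.01, P_D = 1) at base rate p = 1e-5:
print(ppv(p=1e-5, p_d=1.0, p_fa=0.01))   # ~0.001: roughly 999 of 1000 alarms are false
```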
{"_id": "5c7efb061bd14cd8e41ed7314967b4e13347bbad", "title": "Data-driven models for monthly streamflow time series prediction", "text": "C. L. Wu and K. W. Chau* 2 Dept. of Civil and Structural Engineering, Hong Kong Polytechnic University, 3 Hung Hom, Kowloon, Hong Kong, People\u2019s Republic of China 4 5 *Email: cekwchau@polyu.edu.hk 6 ABSTRACT 7 Data-driven techniques such as Auto-Regressive Moving Average (ARMA), K-Nearest-Neighbors (KNN), and 8 Artificial Neural Networks (ANN), are widely applied to hydrologic time series prediction. This paper investigates 9 different data-driven models to determine the optimal approach of predicting monthly streamflow time series. Four 10 sets of data from different locations of People\u2019s Republic of China (Xiangjiaba, Cuntan, Manwan, and 11 Danjiangkou) are applied for the investigation process. Correlation integral and False Nearest Neighbors (FNN) 12 are first employed for Phase Space Reconstruction (PSR). Four models, ARMA, ANN, KNN, and Phase Space 13 Reconstruction-based Artificial Neural Networks (ANN-PSR) are then compared by one-month-ahead forecast 14 using Cuntan and Danjiangkou data. The KNN model performs the best among the four models, but only exhibits 15 weak superiority to ARMA. Further analysis demonstrates that a low correlation between model inputs and 16 outputs could be the main reason to restrict the power of ANN. A Moving Average Artificial Neural Networks 17 (MA-ANN), using the moving average of streamflow series as inputs, is also proposed in this study. The results 18 show that the MA-ANN has a significant improvement on the forecast accuracy compared with the original four 19 models. This is mainly due to the improvement of correlation between inputs and outputs depending on the 20 moving average operation. The optimal memory lengths of the moving average were three and six for Cuntan and 21 Danjiangkou respectively when the optimal model inputs are recognized as the previous twelve months. 22 23"}
{"_id": "f8c90c6549b97934da4fcdafe0012cea95cc443c", "title": "State-of-the-Art Predictive Maintenance Techniques*", "text": "This paper discusses the limitations of time-based equipment maintenance methods and the advantages of predictive or online maintenance techniques in identifying the onset of equipment failure. The three major predictive maintenance techniques, defined in terms of their source of data, are described as follows: 1) the existing sensor-based technique; 2) the test-sensor-based technique (including wireless sensors); and 3) the test-signal-based technique (including the loop current step response method, the time-domain reflectrometry test, and the inductance-capacitance-resistance test). Examples of detecting blockages in pressure sensing lines using existing sensor-based techniques and of verifying calibration using existing-sensor direct current output are given. Three Department of Energy (DOE)-sponsored projects, whose aim is to develop online and wireless hardware and software systems for performing predictive maintenance on critical equipment in nuclear power plants, DOE research reactors, and general industrial applications, are described."}
{"_id": "601e0049eccf0d5e69c9d246b051f4c290d1e26b", "title": "Towards Accurate Duplicate Bug Retrieval Using Deep Learning Techniques", "text": "Duplicate Bug Detection is the problem of identifying whether a newly reported bug is a duplicate of an existing bug in the system and retrieving the original or similar bugs from the past. This is required to avoid costly rediscovery and redundant work. In typical software projects, the number of duplicate bugs reported may run into the order of thousands, making it expensive in terms of cost and time for manual intervention. This makes the problem of duplicate or similar bug detection an important one in Software Engineering domain. However, an automated solution for the same is not quite accurate yet in practice, in spite of many reported approaches using various machine learning techniques. In this work, we propose a retrieval and classification model using Siamese Convolutional Neural Networks (CNN) and Long Short Term Memory (LSTM) for accurate detection and retrieval of duplicate and similar bugs. We report an accuracy close to 90% and recall rate close to 80%, which makes possible the practical use of such a system. We describe our model in detail along with related discussions from the Deep Learning domain. By presenting the detailed experimental results, we illustrate the effectiveness of the model in practical systems, including for repositories for which supervised training data is not available."}
{"_id": "19bd90c739c93ebb5186f6718e0a70dc8f3b01af", "title": "Insights from Price's equation into evolutionary epidemiology", "text": "The basic reproduction number, denoted by R0, is one of the most important quantities in epidemiological theory ([11], [23]). It is defined as the expected number of new infections generated by an infected individual in an otherwise wholly susceptible population ([2], [12], [23]). Part of the reason why R0 plays such a central role in this body of theory undoubtedly stems from its relatively simple and intuitively sensible interpretation as a measure of pathogen reproduction. If R0 is less than unity then we expect the pathogen to die out since each infected individual fails to generate at least one other infection during the lifetime of the infection. Given that R0 is a measure of pathogen reproductive success, it is not surprising that this quantity has also come to form the basis of most evolutionary considerations of host-pathogen interactions ([1], [18]). For example, mathematical models for numerous epidemiological settings have been used to demonstrate that natural selection is often expected to favour the pathogen strain that results in the largest value of R0 ([6], [18]). In more complex epidemiological settings such optimization criteria typically cannot be derived and instead a game-theoretic approach is taken ([5]). In this context a measure of the fitness of a rare mutant pathogen strain is used to characterize the evolutionarily stable strain (i.e., the strain that, if present within the population in sufficient numbers, cannot be displaced by any mutant strain that arises). Typically R0 again plays a central role as the measure of mutant fitness in such invasion analyses ([10], [18], [30]). In this chapter we consider an alternative approach for developing theory in evolutionary epidemiology. Rather than using the total number of new infections generated by an infected individual (i.e., R0) as a measure of pathogen fitness we use the instantaneous rate of change of the number of infected hosts instead (see also [3], [18]). This shifts the focus from a consideration of pathogen reproductive success per generation to pathogen reproductive success per unit time. One very useful result of this change in focus is that we can then model the time dynamics of evolutionary change in the pathogen population simultaneously with the epidemiological dynamics, rather than simply characterizing the evolutionary equilibria that are expected. Even more importantly, however, this seemingly slight change"}
{"_id": "fb9339830e2d471e8b4c46caf465b39027c73164", "title": "Context Prediction for Unsupervised Deep Learning on Point Clouds", "text": "Point clouds provide a flexible and natural representation usable in countless applications such as robotics or self-driving cars. Recently, deep neural networks operating on raw point cloud data have shown promising results on supervised learning tasks such as object classification and semantic segmentation. While massive point cloud datasets can be captured using modern scanning technology, manually labelling such large 3D point clouds for supervised learning tasks is a cumbersome process. This necessitates effective unsupervised learning methods that can produce representations such that downstream tasks require significantly fewer annotated samples. We propose a novel method for unsupervised learning on raw point cloud data in which a neural network is trained to predict the spatial relationship between two point cloud segments. While solving this task, representations that capture semantic properties of the point cloud are learned. Our method outperforms previous unsupervised learning approaches in downstream object classification and segmentation tasks and performs on par with fully supervised methods."}
{"_id": "548aa59905693f84e95515c6134cd5352a97ca76", "title": "Scatterplots: Tasks, Data, and Designs", "text": "Traditional scatterplots fail to scale as the complexity and amount of data increases. In response, there exist many design options that modify or expand the traditional scatterplot design to meet these larger scales. This breadth of design options creates challenges for designers and practitioners who must select appropriate designs for particular analysis goals. In this paper, we help designers in making design choices for scatterplot visualizations. We survey the literature to catalog scatterplot-specific analysis tasks. We look at how data characteristics influence design decisions. We then survey scatterplot-like designs to understand the range of design options. Building upon these three organizations, we connect data characteristics, analysis tasks, and design choices in order to generate challenges, open questions, and example best practices for the effective design of scatterplots."}
{"_id": "5d2548457d10e5e5072f28fc8b7a6f1638fdef14", "title": "The design of scaffolds for use in tissue engineering. Part I. Traditional factors.", "text": "In tissue engineering, a highly porous artificial extracellular matrix or scaffold is required to accommodate mammalian cells and guide their growth and tissue regeneration in three dimensions. However, existing three-dimensional scaffolds for tissue engineering proved less than ideal for actual applications, not only because they lack mechanical strength, but they also do not guarantee interconnected channels. In this paper, the authors analyze the factors necessary to enhance the design and manufacture of scaffolds for use in tissue engineering in terms of materials, structure, and mechanical properties and review the traditional scaffold fabrication methods. Advantages and limitations of these traditional methods are also discussed."}
{"_id": "a4803dd805a60c7c161eb7ed60e52ed9a425b65f", "title": "From Polynesian healers to health food stores: changing perspectives of Morinda citrifolia (Rubiaceae).", "text": "Morinda citrifolia L (noni) is one of the most important traditional Polynesian medicinal plants. Remedies from isolated Polynesian cultures, such as that of Rotuma, illustrate traditional indications that focus upon leaves, roots, bark, and green fruit, primarily for topical ailments. Anecdotally collected Hawaiian remedies that employ noni fruit illustrate changing usage patterns with shifts in recent times to preparation of juice made of ripe or decaying fruit. Ralph M. Heinicke promoted a wide range of claims about noni, and these seem to have fueled much of the current commercial interest in the plant. Recent studies of the proliferation of commercial products have shown that noni product manufacturers are promoting a range of therapeutic claims. These claims are based upon traditional Polynesian uses, Heinicke's ideas, and fragments of recent scientific studies including the activity of noni in the treatment of cancer. A review is provided of recent studies of potential anticancer activity of noni fruit. While noni's anticancer potential is still being explored, it continues to be widely used by Polynesians and non-Polynesians alike for both traditional and newly hypothesized indications."}
{"_id": "8e27d3a78ea387200405ec026619201dbd8bd5b3", "title": "How Alluring Are Dark Personalities ? The Dark Triad and Attractiveness in Speed Dating", "text": "Copy Abstract: Dark Triad traits (narcissism, psychopathy, and Machiavellianism) are linked to the pursuit of short-term mating strategies, but they may have differential effects on actual mating success in naturalistic scenarios: Narcissism may be a facilitator for men\u2019s short-term mating success, while Machiavellianism and psychopathy may be detrimental. To date, little is known about the attractiveness of Dark Triad traits in women. In a speed-dating study, we assessed participants\u2019 Dark Triad traits, Big Five personality traits, and physical attractiveness in N=90 heterosexual individuals (46 women and 44 men). Each participant rated each partner\u2019s mate appeal for shortand long-term relationships. Across both sexes, narcissism was positively associated with mate appeal for shortand long-term relationships. Further analyses indicated that these associations were due to the shared variance among narcissism and extraversion in men and narcissism and physical attractiveness in women, respectively. In women, psychopathy was also positively associated with mate appeal for short-term relationships. Regarding mating preferences, narcissism was found to involve greater choosiness in the rating of others\u2019 mate appeal (but not actual choices) in men, while psychopathy was associated with greater openness towards short-term relationships in women. Copyright \u00a9 2016 European Association of Personality Psychology"}
{"_id": "e18fa8c8f402c483b2c3eaaa89192fe99e80abd5", "title": "Evaluating Sentiment Analysis in the Context of Securities Trading", "text": "There are numerous studies suggesting that published news stories have an important effect on the direction of the stock market, its volatility, the volume of trades, and the value of individual stocks mentioned in the news. There is even some published research suggesting that automated sentiment analysis of news documents, quarterly reports, blogs and/or twitter data can be productively used as part of a trading strategy. This paper presents just such a family of trading strategies, and then uses this application to re-examine some of the tacit assumptions behind how sentiment analyzers are generally evaluated, in spite of the contexts of their application. This discrepancy comes at a cost."}
{"_id": "124a9d3fa4569b2ebda1ca8726166a635f31a36d", "title": "Feature selection methods for conversational recommender systems", "text": "This paper focuses on question selection methods for conversational recommender systems. We consider a scenario, where given an initial user query, the recommender system may ask the user to provide additional features describing the searched products. The objective is to generate questions/features that a user would likely reply, and if replied, would effectively reduce the result size of the initial query. Classical entropy-based feature selection methods are effective in term of result size reduction, but they select questions uncorrelated with user needs and therefore unlikely to be replied. We propose two feature-selection methods that combine feature entropy with an appropriate measure of feature relevance. We evaluated these methods in a set of simulated interactions where a probabilistic model of user behavior is exploited. The results show that these methods outperform entropy-based feature selection."}
{"_id": "38b0877fa6ac3ebfbb29d74f761fea394ee190f3", "title": "Lazy Decision Trees", "text": "Lazy learning algorithms, exemplified by nearestneighbor algorithms, do not induce a concise hypothesis from a given training set; the inductive process is delayed until a test instance is given. Algorithms for constructing decision trees, such as C4.5, ID3, and CART create a single \u201cbest\u201d decision tree during the training phase, and this tree is then used to classify test instances. The tests at the nodes of the constructed tree are good on average, but there may be better tests for classifying a specific instance. We propose a lazy decision tree algorithm-LAzuDT-that conceptually constructs the \u201cbest\u201d decision tree for each test instance. In practice, only a path needs to be constructed, and a caching scheme makes the algorithm fast. The algorithm is robust with respect to missing values without resorting to the complicated methods usually seen in induction of decision trees. Experiments on real and artificial problems are presented."}
{"_id": "4e6cad4d8616c88856792688228a4c52cec9bace", "title": "A consensus-based approach for platooning with inter-vehicular communications", "text": "Automated and coordinated vehicles' driving (platooning) is gaining more and more attention today and it represents a challenging scenario heavily relying on wireless Inter-Vehicular Communication (IVC). In this paper, we propose a novel controller for vehicle platooning based on consensus. Opposed to current approaches where the logical control topology is fixed a priori and the control law designed consequently, we design a system whose control topology can be reconfigured depending on the actual network status. Moreover, the controller does not require the vehicles to be radar equipped and automatically compensates outdated information caused by network delays. We define the control law and analyze it in both analytical and simulative way, showing its robustness in different network scenarios. We consider three different wireless network settings: uncorrelated Bernoullian losses, correlated losses using a Gilbert-Elliott channel, and a realistic traffic scenario with interferences caused by other vehicles. Finally, we compare our strategy with another state of the art controller. The results show the ability of the proposed approach to maintain a stable string of vehicles even in the presence of strong interference, delays, and fading conditions, providing higher comfort and safety for platoon drivers."}
{"_id": "3c6bf8d1a83e7c78dc5ff43e0b9e776753ec96ae", "title": "Thinking Positively - Explanatory Feedback for Conversational Recommender Systems", "text": "When it comes to buying expensive goods people expect to be skillfully steered through the options by well-informed sales assistants that are capable of balancing the user\u2019s many and varied requirements. In addition users often need to be educated about the product-space, especially if they are to come to understand what is available and why certain options are being recommended by the sales-assistant. The same issues arise in interactive recommender systems, our online equivalent of a sales assistant and explanation in recommender systems, as a means to educate users and justify recommendations, is now well accepted. In this paper we focus on a novel approach to explanation. Instead of attempting to justify a particular recommendation we focus on how explanations can help users to understand the recommendation opportunities that remain if the current recommendation should not meet their requirements. We describe how this approach to explanation is tightly coupled with the generation of compound critiques, which act as a form of feedback for the users. And we argue that these explanation-rich critiques have the potential to dramatically improve recommender performance and usability."}
{"_id": "9a423eb7c6f3b8082bfc99ce5811a10416a1daff", "title": "A Survey of Definitions and Models of Exploratory Search", "text": "Exploratory search has an unclear and open-ended definition. The complexity of the task and the difficulty of defining this activity are reflected in the limits of existing evaluation methods for exploratory search systems. In order to improve them, we intend to design an evaluation method based on a user-centered model of exploratory search. In this work, we identified and defined the characteristics of exploratory search and used them as an information seeking model evaluation grid. We tested this analytic grid on two information seeking models: Ellis' and Marchionini's models. The results show that Marchonini's model does not match our evaluation method's requirements whereas on the other hand Ellis' model could be adapted to better suit exploratory search."}
{"_id": "9cd69fa7f185ab9be4b452dadd44cbce1e8fe579", "title": "Efficient FIR Filter Design Using Modified Carry Select Adder & Wallace Tree Multiplier", "text": "An area-power-delay efficient design of FIR filter is described in this paper. In proposed multiplier unit high speed is achieved using XOR-XNOR column by column reduction compressors instead of compressors using full adder. The carry propagation delay and area of carry select adder is reduced by splitting carry select adder into equal bit groups. The proposed carry select adder unit consumes less power than conventional carry select adder unit. With proposed multiplier unit and carry select adder unit, the designed FIR Filter consumes 55% power less than the conventional filter without significant increase in area. The power & delay comparison is performed for both existing and proposed method of FIR filter. The design is implemented using 0.18\u03bcm technology. Index Terms FIR filter design, FDA Tool, Low Power VLSI, Wallace tree multiplication."}
{"_id": "2d4204a473d03ca1cd87e894e3d5772ae0364052", "title": "An efficient and generic reversible debugger using the virtual machine based approach", "text": "The reverse execution of programs is a function where programs are executed backward in time. A reversible debugger is a debugger that provides such a functionality. In this paper, we propose a novel reversible debugger that enables reverse execution of programs written in the C language. Our approach takes the virtual machine based approach. In this approach, the target program is executed on a special virtual machine. Our contribution in this paper is two-fold. First, we propose an approach that can address problems of (1) compatibility and (2) efficiency that exist in previous works. By compatibility, we mean that previous debuggers are not generic, i.e., they support only a special language or special intermediate code. Second, our approach provides two execution modes: the native mode, where the debuggee is directly executed on a real CPU, and the virtual machine mode, where the debuggee is executed on a virtual machine. Currently, our debugger provides four types of trade-off settings (designated by unit and optimization) to consider trade-offs between granularity, accuracy, overhead and memory requirement. The user can choose the appropriate setting flexibly during debugging without finishing and restarting the debuggee."}
{"_id": "623dc089f4736e1ddeee35b50f0ae3f6efa15c78", "title": "Real-time Empirical Mode Decomposition for EEG signal enhancement", "text": "Electroencephalography (EEG) recordings are used for brain research. However, in most cases, the recordings not only contain brain waves, but also artifacts of physiological or technical origins. A recent approach used for signal enhancement is Empirical Mode Decomposition (EMD), an adaptive data-driven technique which decomposes non-stationary data into so-called Intrinsic Mode Functions (IMFs). Once the IMFs are obtained, they can be used for denoising and detrending purposes. This paper presents a real-time implementation of an EMD-based signal enhancement scheme. The proposed implementation is used for removing noise, for suppressing muscle artifacts, and for detrending EEG signals in an automatic manner and in real-time. The proposed algorithm is demonstrated by application to a simulated and a real EEG data set from an epilepsy patient. Moreover, by visual inspection and in a quantitative manner, it is shown that after the EMD in real-time, the EEG signals are enhanced."}
{"_id": "904179d733b8ff792586631d50b3bd64f42d6b7d", "title": "Span, CRUNCH, and Beyond: Working Memory Capacity and the Aging Brain", "text": "Neuroimaging data emphasize that older adults often show greater extent of brain activation than younger adults for similar objective levels of difficulty. A possible interpretation of this finding is that older adults need to recruit neuronal resources at lower loads than younger adults, leaving no resources for higher loads, and thus leading to performance decrements [Compensation-Related Utilization of Neural Circuits Hypothesis; e.g., Reuter-Lorenz, P. A., & Cappell, K. A. Neurocognitive aging and the compensation hypothesis. Current Directions in Psychological Science, 17, 177\u2013182, 2008]. The Compensation-Related Utilization of Neural Circuits Hypothesis leads to the prediction that activation differences between younger and older adults should disappear when task difficulty is made subjectively comparable. In a Sternberg memory search task, this can be achieved by assessing brain activity as a function of load relative to the individual's memory span, which declines with age. Specifically, we hypothesized a nonlinear relationship between load and both performance and brain activity and predicted that asymptotes in the brain activation function should correlate with performance asymptotes (corresponding to working memory span). The results suggest that age differences in brain activation can be largely attributed to individual variations in working memory span. Interestingly, the brain activation data show a sigmoid relationship with load. Results are discussed in terms of Cowan's [Cowan, N. The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24, 87\u2013114, 2001] model of working memory and theories of impaired inhibitory processes in aging."}
{"_id": "10254c472248b5831352eaa2b542fbba455a4f0e", "title": "Dry and Noncontact EEG Sensors for Mobile Brain\u2013Computer Interfaces", "text": "Dry and noncontact electroencephalographic (EEG) electrodes, which do not require gel or even direct scalp coupling, have been considered as an enabler of practical, real-world, brain-computer interface (BCI) platforms. This study compares wet electrodes to dry and through hair, noncontact electrodes within a steady state visual evoked potential (SSVEP) BCI paradigm. The construction of a dry contact electrode, featuring fingered contact posts and active buffering circuitry is presented. Additionally, the development of a new, noncontact, capacitive electrode that utilizes a custom integrated, high-impedance analog front-end is introduced. Offline tests on 10 subjects characterize the signal quality from the different electrodes and demonstrate that acquisition of small amplitude, SSVEP signals is possible, even through hair using the new integrated noncontact sensor. Online BCI experiments demonstrate that the information transfer rate (ITR) with the dry electrodes is comparable to that of wet electrodes, completely without the need for gel or other conductive media. In addition, data from the noncontact electrode, operating on the top of hair, show a maximum ITR in excess of 19 bits/min at 100% accuracy (versus 29.2 bits/min for wet electrodes and 34.4 bits/min for dry electrodes), a level that has never been demonstrated before. The results of these experiments show that both dry and noncontact electrodes, with further development, may become a viable tool for both future mobile BCI and general EEG applications."}
{"_id": "8353ee33dc6aec01813543c87dc8e5191c5c89fb", "title": "ECG measurement on a chair without conductive contact", "text": "For the purpose of long-term, everyday electrocardiogram (ECG) monitoring, we present a convenient method of ECG measurement without direct conductive contact with the skin while subjects sat on a chair wearing normal clothes. Measurements were made using electrodes attached to the back of a chair, high-input-impedance amplifiers mounted on the electrodes, and a large ground-plane placed on the chair seat. ECGs were obtained by the presented method for several types of clothing and compared to ECGs obtained from conventional measurement using Ag-AgCl electrodes. Motion artifacts caused by usual desk works were investigated. This study shows the feasibility of the method for long-term, convenient, everyday use"}
{"_id": "069443f2bbeb2deedefb600c82ed59cde2137e60", "title": "Analysis of emotion recognition using facial expressions, speech and multimodal information", "text": "The interaction between human beings and computers will be more natural if computers are able to perceive and respond to human non-verbal communication such as emotions. Although several approaches have been proposed to recognize human emotions based on facial expressions or speech, relatively limited work has been done to fuse these two, and other, modalities to improve the accuracy and robustness of the emotion recognition system. This paper analyzes the strengths and the limitations of systems based only on facial expressions or acoustic information. It also discusses two approaches used to fuse these two modalities: decision level and feature level integration. Using a database recorded from an actress, four emotions were classified: sadness, anger, happiness, and neutral state. By the use of markers on her face, detailed facial motions were captured with motion capture, in conjunction with simultaneous speech recordings. The results reveal that the system based on facial expression gave better performance than the system based on just acoustic information for the emotions considered. Results also show the complementarily of the two modalities and that when these two modalities are fused, the performance and the robustness of the emotion recognition system improve measurably."}
{"_id": "0bc46478051356455facc79f216a00b896c2dc5f", "title": "ORTHONORMAL BASES OF COMPACTLY SUPPORTED WAVELETS", "text": "We construct orthonormal bases of compactly supported wavelets, with arbitrarily high regularity. The order of regularity increases linearly with the support width. We start by reviewing the concept of multiresolution analysis as well as several algorithms in vision decomposition and reconstruction. The construction then follows from a synthesis of these different approaches."}
{"_id": "969c055f67efe1296bd9816c4effb5d4dfd83948", "title": "Emotion Detection from Speech to Enrich Multimedia Content", "text": "The paper describes an experimental study on the detection of emotion from speech. As computer based characters such as avatars and virtual chat faces become more common, the use of emotion to drive the expression of the virtual characters become more important. The study utilizes a corpus containing emotional speech with 721 short utterances expressing four emotions: anger, happiness, sadness, and the neutral (unemotional) state, which were captured manually from movies and teleplays. We introduce a new concept to evaluate emotions in speech. Emotions are so complex that most speech sentences cannot be precisely assigned into a particular emotion category; however, most emotional states nevertheless can be described as a mixture of multiple emotions. Based on this concept we have trained SVMs (support vector machines) to recognize utterances within these four categories and developed an agent that can recognize and express emotions."}
{"_id": "f2e808509437b0474a6c9a258cd06c6f3b42754b", "title": "The APACHE III prognostic system. Risk prediction of hospital mortality for critically ill hospitalized adults.", "text": "The objective of this study was to refine the APACHE (Acute Physiology, Age, Chronic Health Evaluation) methodology in order to more accurately predict hospital mortality risk for critically ill hospitalized adults. We prospectively collected data on 17,440 unselected adult medical/surgical intensive care unit (ICU) admissions at 40 US hospitals (14 volunteer tertiary-care institutions and 26 hospitals randomly chosen to represent intensive care services nationwide). We analyzed the relationship between the patient's likelihood of surviving to hospital discharge and the following predictive variables: major medical and surgical disease categories, acute physiologic abnormalities, age, preexisting functional limitations, major comorbidities, and treatment location immediately prior to ICU admission. The APACHE III prognostic system consists of two options: (1) an APACHE III score, which can provide initial risk stratification for severely ill hospitalized patients within independently defined patient groups; and (2) an APACHE III predictive equation, which uses APACHE III score and reference data on major disease categories and treatment location immediately prior to ICU admission to provide risk estimates for hospital mortality for individual ICU patients. A five-point increase in APACHE III score (range, 0 to 299) is independently associated with a statistically significant increase in the relative risk of hospital death (odds ratio, 1.10 to 1.78) within each of 78 major medical and surgical disease categories. The overall predictive accuracy of the first-day APACHE III equation was such that, within 24 h of ICU admission, 95 percent of ICU admissions could be given a risk estimate for hospital death that was within 3 percent of that actually observed (r2 = 0.41; receiver operating characteristic = 0.90). Recording changes in the APACHE III score on each subsequent day of ICU therapy provided daily updates in these risk estimates. When applied across the individual ICUs, the first-day APACHE III equation accounted for the majority of variation in observed death rates (r2 = 0.90, p less than 0.0001)."}
{"_id": "25573459892caddfbe8d7478f3db11dc5e537f3a", "title": "Recent advances on active noise control : open issues and innovative applications", "text": "Yoshinobu Kajikawa, Woon\u00adSeng Gan and Sen M. Kuo APSIPA Transactions on Signal and Information Processing / Volume 1 / August 2012 / e3 DOI: 10.1017/ATSIP.2012.4, Published online: Link to this article: http://journals.cambridge.org/abstract_S2048770312000042 How to cite this article: Yoshinobu Kajikawa, Woon\u00adSeng Gan and Sen M. Kuo (2012). Recent advances on active noise control: open issues and innovative applications. APSIPA Transactions on Signal and Information Processing,1, e3 doi:10.1017/ATSIP.2012.4 Request Permissions : Click here"}
{"_id": "8e3bfff28337c4249c1e98973b4df8f95a205dce", "title": "Software Defined Networking Meets Information Centric Networking: A Survey", "text": "Information centric networking (ICN) and software-defined networking (SDN) are two emerging networking paradigms that promise to solve different aspects of networking problems. ICN is a clean-slate design for accommodating the ever increasing growth of the Internet traffic by regarding content as the network primitive, adopting in-network caching, and name-based routing, while SDN focuses on agile and flexible network management by decoupling network control logic from data forwarding. ICN and SDN have gained significant research attention separately in the most of the previous work. However, the features of ICN have profound impacts on the design and operation of SDN, such as in-network caching and data-centric security. Conversely, SDN provides a powerful tool for experimenting and deploying ICN architectures and can greatly facilitate ICN functionality and management. In this paper, we point out the necessity of surveying the scattered works on integrating SDN and ICN (SD-ICN) for improving operational networks. Specifically, we analyze the points of SD-ICN strengths and opportunities, and discuss the SDN enablers for deploying ICN architectures. In addition, we review and classify the recent work on improving the network management by SD-ICN and discuss the potential security benefits of SD-ICN. Finally, a number of open issues and future trends are highlighted."}
{"_id": "050c6fa2ee4b3e0a076ef456b82b2a8121506060", "title": "Data-driven 3D Voxel Patterns for object category recognition", "text": "Despite the great progress achieved in recognizing objects as 2D bounding boxes in images, it is still very challenging to detect occluded objects and estimate the 3D properties of multiple objects from a single image. In this paper, we propose a novel object representation, 3D Voxel Pattern (3DVP), that jointly encodes the key properties of objects including appearance, 3D shape, viewpoint, occlusion and truncation. We discover 3DVPs in a data-driven way, and train a bank of specialized detectors for a dictionary of 3DVPs. The 3DVP detectors are capable of detecting objects with specific visibility patterns and transferring the meta-data from the 3DVPs to the detected objects, such as 2D segmentation mask, 3D pose as well as occlusion or truncation boundaries. The transferred meta-data allows us to infer the occlusion relationship among objects, which in turn provides improved object recognition results. Experiments are conducted on the KITTI detection benchmark [17] and the outdoor-scene dataset [41]. We improve state-of-the-art results on car detection and pose estimation with notable margins (6% in difficult data of KITTI). We also verify the ability of our method in accurately segmenting objects from the background and localizing them in 3D."}
{"_id": "0a24f049590c014d5b40660449503368bcedc921", "title": "Belief space planning assuming maximum likelihood observations", "text": "We cast the partially observable control problem as a fully observable underactuated stochastic control problem in belief space and apply standard planning and control techniques. One of the difficulties of belief space planning is modeling the stochastic dynamics resulting from unknown future observations . The core of our proposal is to define deterministic beliefsystem dynamics based on an assumption that the maximum likelihood observation (calculated just prior to the observation) is always obtained. The stochastic effects of future observation s are modeled as Gaussian noise. Given this model of the dynamics, two planning and control methods are applied. In the first, linear quadratic regulation (LQR) is applied to generate policies in the belief space. This approach is shown to be optimal for linearGaussian systems. In the second, a planner is used to find locally optimal plans in the belief space. We propose a replanning approach that is shown to converge to the belief space goal in a finite number of replanning steps. These approaches are characterized in the context of a simple nonlinear manipulation problem where a planar robot simultaneously locates and grasps an object."}
{"_id": "d13aa46e8069f67d48cf773bf70d241ead6ee7e6", "title": "Very High Resolution Spaceborne SAR Tomography in Urban Environment", "text": "Synthetic aperture radar tomography (TomoSAR) extends the synthetic aperture principle into the elevation direction for 3-D imaging. It uses stacks of several acquisitions from slightly different viewing angles (the elevation aperture) to reconstruct the reflectivity function along the elevation direction by means of spectral analysis for every azimuth-range pixel. The new class of meter-resolution spaceborne SAR systems (TerraSAR-X and COSMO-Skymed) offers a tremendous improvement in tomographic reconstruction of urban areas and man-made infrastructure. The high resolution fits well to the inherent scale of buildings (floor height, distance of windows, etc.). This paper demonstrates the tomographic potential of these SARs and the achievable quality on the basis of TerraSAR-X spotlight data of urban environment. A new Wiener-type regularization to the singular-value decomposition method-equivalent to a maximum a posteriori estimator-for TomoSAR is introduced and is extended to the differential case (4-D, i.e., space-time). Different model selection schemes for the estimation of the number of scatterers in a resolution cell are compared and proven to be applicable in practice. Two parametric estimation algorithms of the scatterers' elevation and their velocities are evaluated. First 3-D and 4-D reconstructions of an entire building complex (including its radar reflectivity) with very high level of detail from spaceborne SAR data by pixelwise TomoSAR are presented."}
{"_id": "0adeeb60165f8ac498cf52840dc2c436c11a14c2", "title": "Softmax Discriminant Classifier", "text": "A simple but effective classifier, which is called soft max discriminant classifier or SDC for short, is presented. Based on the soft max discriminant function, SDC assigns the label information to a new testing sample by nonlinear transformation of the distance between the testing sample and training samples. Experimental results on some well-known data sets demonstrate the feasibility and effectiveness of the proposed algorithm."}
{"_id": "cc2be11675137e4ea60bf6c89a108518c53b48bb", "title": "An improved power quality induction heater using zeta converter", "text": "This paper presents an induction heater (IH) with power factor correction (PFC) for domestic induction heating applications. A half bridge voltage fed series resonant inverter (VFSRI) based IH is fed from the front end PFC corrected zeta converter. The PFC-zeta converter operating in continuous inductor conduction mode (CICM) is used to regulate the DC link voltage. The switch current stress of PFC-zeta converter is controlled by operating zeta converter in CICM mode using a well-established internal current loop and outer voltage loop approach. The PFC-zeta converter based voltage fed IH converts intermediate DC bus voltage to high frequency (30 kHz) voltage suitable for domestic induction heating. A 2 kW PFC-zeta converter based IH is designed and its performance is simulated in MATLAB with power quality indices within an IEC 61000-3-2 standard."}
{"_id": "6c234f7838cc57e71646ff927a55ae5afefa4a98", "title": "Semi-Supervised Nonnegative Matrix Factorization", "text": "Nonnegative matrix factorization (NMF) is a popular method for low-rank approximation of nonnegative matrix, providing a useful tool for representation learning that is valuable for clustering and classification. When a portion of data are labeled, the performance of clustering or classification is improved if the information on class labels is incorporated into NMF. To this end, we present semi-supervised NMF (SSNMF), where we jointly incorporate the data matrix and the (partial) class label matrix into NMF. We develop multiplicative updates for SSNMF to minimize a sum of weighted residuals, each of which involves the nonnegative 2-factor decomposition of the data matrix or the label matrix, sharing a common factor matrix. Experiments on document datasets and EEG datasets in BCI competition confirm that our method improves clustering as well as classification performance, compared to the standard NMF, stressing that semi-supervised NMF yields semi-supervised feature extraction."}
{"_id": "289b34d437ccb17fe2543b33ad7243a9be644898", "title": "The role of debriefing in simulation-based learning.", "text": "The aim of this paper is to critically review what is felt to be important about the role of debriefing in the field of simulation-based learning, how it has come about and developed over time, and the different styles or approaches that are used and how effective the process is. A recent systematic review of high fidelity simulation literature identified feedback (including debriefing) as the most important feature of simulation-based medical education.1 Despite this, there are surprisingly few papers in the peer-reviewed literature to illustrate how to debrief, how to teach or learn to debrief, what methods of debriefing exist and how effective they are at achieving learning objectives and goals. This review is by no means a systematic review of all the literature available on debriefing, and contains information from both peer and nonpeer reviewed sources such as meeting abstracts and presentations from within the medical field and other disciplines versed in the practice of debriefing such as military, psychology, and business. It also contains many examples of what expert facilitators have learned over years of practice in the area. We feel this would be of interest to novices in the field as an introduction to debriefing, and to experts to illustrate the gaps that currently exist, which might be addressed in further research within the medical simulation community and in collaborative ventures between other disciplines experienced in the art of debriefing."}
{"_id": "bafc689320ef11455b8df7cdefe3694bfaac9072", "title": "The use of the interactive whiteboard for creative teaching and learning in literacy and mathematics: a case study", "text": "This paper considers the ways in which the interactive whiteboard may support and enhance pedagogic practice through whole-class teaching within literacy and numeracy. Data collected from observations of whole-class lessons, alongside individual interviews and focus group discussions with class teachers and Initial Teacher Education students, has provided opportunities to consider the potential of such technology to facilitate a more creative approach to whole-class teaching. The data suggests that, in the first instance, the special features of information and communications technology such as interactivity, \u2018provisionality,\u2019 speed, capacity and range enhance the delivery and pace of the session. This research seems to indicate that it is the skill and the professional knowledge of the teacher who mediates the interaction, and facilitates the development of pupils\u2019 creative responses at the interface of technology, which is critical to the enhancement of the whole-class teaching and learning processes. Introduction The globalising phenomenon of information and communication technologies (ICT) is a distinct characteristic of modern times. The speed and immediacy of ICT, coupled with opportunities for increased information flow through multiple routes of communication, suggest that we are living in a time of unprecedented change, with ICT affecting the way we live and function as individuals and as a society (Castells, 2004). Within the context of education there are some technologies that appear to have attracted more interest than others; however, the degree to which they have been British Journal of Educational Technology Vol 39 No 1 2008 84\u201396 doi:10.1111/j.1467-8535.2007.00703.x \u00a9 2007 The Authors. Journal compilation \u00a9 2007 British Educational Communications and Technology Agency. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. successfully integrated into the classroom environment has been varied. In recent years, there has been a growing level of interest in the electronic or interactive whiteboard (IWB), well documented by the educational press. Such technology is generally comprised of a triangulation between data projector, computer and an electronic screen. This allows an individual to interact with software at the front of a class rather than from the computer. Effectively, the computer screen is projected onto the electronic whiteboard and presented to the class with the teacher, or perhaps student, selecting, activating and interacting with the programs. At a time in England when the government has promoted whole-class interactive teaching, particularly within Literacy and Numeracy, access to IWB technology through targeted government funding is also increasing, and the IWB is steadily becoming a feature of most numeracy and literacy lessons. In January 2004, Charles Clarke, the Secretary of State for Education in England, announced that, in addition to the \u00a325 million previously made available to schools in September 2003; a further \u00a325 million would be released for the purchase of IWBs. This technology is therefore likely to become a key resource in most schools. 
Introduction of new technologies such as this within the classroom context raises questions regarding the ways in which pedagogic practice may be supported and enhanced; this being the focus of this study, specifically, the links between three areas; whole-class direct teaching, creativity and the integration of technology. As IWBs are becoming more familiar within the educational market, additional challenges, which imply new priorities, have arisen from recent demands for \u2018a much stronger emphasis on creative and cultural education and a new balance in teaching and in the curriculum.\u2019 (The National Advisory Committee on Creative and Cultural Education [NACCCE], 1999) The teacher educator is therefore faced with a complex set of demands that require resolution in terms of pedagogic practice. The aim of this study is to investigate how IWBs can provide opportunities for creativity in teaching and learning, particularly within whole-class lessons, by drawing upon observations of, and discussions with, classroom practitioners and ITE students. In so doing, the study will contribute to debates within educational fields (Bleakley, 2004; Craft, 2005; NACCCE, 1999) regarding the notion of creativity in primary teaching and learning. Current developments in creativity, interactivity and whole-class direct teaching The current focus on direct whole-class teaching, particularly in mathematics, developed in response to concerns in England about the level of children\u2019s performance in English and mathematics compared to those in other countries. In particular, in the Third International Mathematics and Science Study of 1999 (Mullis et al, 2000) England was placed low in basic number skills although as Muijs and Reynolds (2001) note, faring better in geometry and problem solving. Research conducted by Professor David Reynolds, chair of the newly established Numeracy Task Force, singled out a substantial amount of direct whole-class teaching as a key feature of mathematics sessions in the top-attaining countries (Reynolds & Farrell, 1996), these findings The use of the interactive whiteboard in literacy and maths: a case study 85 \u00a9 2007 The Authors. Journal compilation \u00a9 2007 British Educational Communications and Technology Agency. echoing those of previous studies, (Galton & Croll, 1980; Rosenshine, 1979), and more recently, Bierhoff (1996), Bierhoff and Prais (1995) and Burghes (1995). In response to these concerns about attainment and the related research findings, the National Literacy Strategy (NLS; DfEE, 1998) and National Numeracy Strategy (NNS) were introduced into most state-funded primary schools; the NLS in 1998 and the NNS in 1999, both providing a clear framework for the content of the curriculum and recommended teaching approaches. In both Literacy and Numeracy, sessions were expected to consist of a substantial amount of direct, whole-class teaching. A wholeclass approach, it was claimed, enables the teacher to interact more with each pupil; adapt activities quickly in response to pupils\u2019 responses; use errors and misconceptions as a teaching point for the whole class and keep pupils on task for longer periods of time (Muijs & Reynolds, 2001). However, Muijs and Reynolds also noted that this approach was not necessarily the best to use in all circumstances. Brophy and Good (1986) are cited as finding this method more suited for teaching rules, procedures and basic skills, especially to younger pupils. 
Less structured and teacher-directed approaches, Muijs and Reynolds (2001) suggest, would be more appropriate when the aims of the lesson are more complex or open-ended (eg, developing students\u2019 thinking skills) With the IWB featuring widely in whole-class teaching, there is a concern that its full interactive potential may not be explored through this structured, teacher-directed approach as the teaching and modelling of rules, procedures and basic skills is likely to take precedence over more complex and cognitively demanding activities. Where whole-class interactive teaching in mathematics lessons is concerned, the Numeracy Task Force stressed that \u2018it is the quality of the whole class teaching which determines its success\u2019 (Department for Education and Employment [DfEE], 1999) rather than the whole-class approach per se. They note that whole-class teaching should not be viewed as a purely transmission-based approach, with children given instruction regarding knowledge and understanding in a product-driven mode of teaching. Rather, it should be seen as an interactive, process-oriented approach to learning with the quality of the interaction within the classroom being of prime importance, maximised effectively and efficiently through good whole-class teaching. The NNS goes on to state that \u2018interaction is a two-way process in which pupils are expected to play an active part\u2019 and that \u2018high quality interactive teaching is oral, interactive and lively\u2019, which is fundamental to a social constructivist view of learning (Vygotsky, 1978). This may be a laudable claim, however; Muijs and Reynolds (2001) caution that with direct whole-class teaching, pupils may find it easy to adopt a more passive role, becoming too dependent on the teacher and failing to develop independent learning skills. Nevertheless, it may be within a more social, interactive and lively learning environment that the IWB could be seen to make a valuable contribution. At present, there is a limited amount of research available that focuses specifically upon the IWB and associated pedagogy, however, Smith, Hardman and Higgins (2006) have undertaken a substantial study involving observations made in primary schools of 184 lessons in literacy and numeracy conducted over a 2-year period. The study suggests that, although the use of IWBs engages the pupils and sessions are generally faster in 86 British Journal of Educational Technology Vol 39 No 1 2008 \u00a9 2007 The Authors. Journal compilation \u00a9 2007 British Educational Communications and Technology Agency. their pace of delivery, the underlying pedagogy of whole-class teaching appears to remain unaffected, with teacher-led recitation and emphasis upon recall dominating proceedings. Previous research has highlighted the motivational impact of IWBs upon pupils, with the large screen, the multimedia capability and the element of fun enhancing the presentational aspects of a lesson (Glover & Miller, 2001; Levy, 2002). Essentially, there appears to be the potential for enhancements in whole-class teaching and learning through the use of IWBs if pedagogic practice were to adapt and change through creative and innovative use of the particular features of this new technology. Creativity Creativity has been described as \u2018the word of the moment\u2019 (Bruce, 2004, p. vi"}
{"_id": "dbf38bd3fb7ae7134e740e36b028eff71f03ec00", "title": "Multi Objective Segmentation for Vehicle License Plate Detection with Immune-based Classifier: A General Framework", "text": "Vehicle License Plate Recognition (VLPR) is an important system for harmonious traffic. Moreover this system is helpful in many fields and places as private and public entrances, parking lots, border control and theft control. This paper presents a new framework for Sudanese VLPR system. The proposed framework uses Multi Objective Particle Swarm Optimization (MOPSO) and Connected Component Analysis (CCA) to extract the license plate. Horizontal and vertical projection will be used for character segmentation and the final recognition stage is based on the Artificial Immune System (AIS). A new dataset that contains samples for the current shape of Sudanese license plates will be used for training and testing the proposes framework."}
{"_id": "8ae6dfc9226d49d9eab49dd7e21e04b660c4569f", "title": "Simultaneously Color-Depth Super-Resolution with Conditional Generative Adversarial Network", "text": "Recently, Generative Adversarial Network (GAN) has been found wide applications in style transfer, image-to-image translation and image super-resolution. In this paper, a colordepth conditional GAN is proposed to concurrently resolve the problems of depth super-resolution and color super-resolution in 3D videos. Firstly, given the low-resolution depth image and low-resolution color image, a generative network is proposed to leverage mutual information of color image and depth image to enhance each other in consideration of the geometry structural dependency of color-depth image in the same scene. Secondly, three loss functions, including data loss, total variation loss, and 8-connected gradient difference loss are introduced to train this generative network in order to keep generated images close to the real ones, in addition to the adversarial loss. Experimental results demonstrate that the proposed approach produces highquality color image and depth image from low-quality image pair, and it is superior to several other leading methods. Besides, the applications of the proposed method in other tasks are image smoothing and edge detection at the same time."}
{"_id": "19517a10e0f26729fbc2b1a858e56f8af860eac3", "title": "Aerial Mapping of Forests Affected by Pathogens Using UAVs, Hyperspectral Sensors, and Artificial Intelligence", "text": "The environmental and economic impacts of exotic fungal species on natural and plantation forests have been historically catastrophic. Recorded surveillance and control actions are challenging because they are costly, time-consuming, and hazardous in remote areas. Prolonged periods of testing and observation of site-based tests have limitations in verifying the rapid proliferation of exotic pathogens and deterioration rates in hosts. Recent remote sensing approaches have offered fast, broad-scale, and affordable surveys as well as additional indicators that can complement on-ground tests. This paper proposes a framework that consolidates site-based insights and remote sensing capabilities to detect and segment deteriorations by fungal pathogens in natural and plantation forests. This approach is illustrated with an experimentation case of myrtle rust (Austropuccinia psidii) on paperbark tea trees (Melaleuca quinquenervia) in New South Wales (NSW), Australia. The method integrates unmanned aerial vehicles (UAVs), hyperspectral image sensors, and data processing algorithms using machine learning. Imagery is acquired using a Headwall Nano-Hyperspec \u00ae camera, orthorectified in Headwall SpectralView \u00ae , and processed in Python programming language using eXtreme Gradient Boosting (XGBoost), Geospatial Data Abstraction Library (GDAL), and Scikit-learn third-party libraries. In total, 11,385 samples were extracted and labelled into five classes: two classes for deterioration status and three classes for background objects. Insights reveal individual detection rates of 95% for healthy trees, 97% for deteriorated trees, and a global multiclass detection rate of 97%. The methodology is versatile to be applied to additional datasets taken with different image sensors, and the processing of large datasets with freeware tools."}
{"_id": "f950f8af6437540bf301c91a0c26b8015ac6c439", "title": "Classification and Regression using SAS", "text": "K-Nearest Neighbor (KNN) classification and regression are two widely used analytic methods in predictive modeling and data mining fields. They provide a way to model highly nonlinear decision boundaries, and to fulfill many other analytical tasks such as missing value imputation, local smoothing, etc. In this paper, we discuss ways in SAS R \u00a9 to conduct KNN classification and KNN Regression. Specifically, PROC DISCRIM is used to build multi-class KNN classification and PROC KRIGE2D is used for KNN regression tasks. Technical details such as tuning parameter selection, etc are discussed. We also discuss tips and tricks in using these two procedures for KNN classification and regression. Examples are presented to demonstrate full process flow in applying KNN classification and regression in real world business projects."}
{"_id": "7114f7bb38914958f07cfa98fb29afcd94c626ad", "title": "Cardiac pulse detection in BCG signals implemented on a regular classroom chair integrated to an emotional and learning model for personalization of learning resources", "text": "Emotions are related with learning processes, and physiological signals can be used to detect them. In this work, it is presented a model for the dynamic personalization of a learning environment. In the model, a specific combination of emotions and cognition processes are connected and integrated with the concept of \u2018flow\u2019, since it may provide a way to improve learning abilities. Physiological signals can be used to relate temporal emotions and subject's learning processes; the cardiac pulse is a reliable signal that carries useful information about the subject's emotional condition, which is detected using a classroom chair adapted with non invasive Electro-Mechanical Film (EMFi) sensors and an acquisition system that generates a ballistocardiogram (BCG), which is analyzed by a developed algorithm to obtain cardiac pulse statistics. A study including different data obtained from the chair's sensors was carried out and interpreted in terms of emotions, which together with a cognitive model is used for the personalization of content in a learning scenario. Such model establishes a relation between the learner's knowledge, the content of a course or a learning activity and the emotional condition of each student."}
{"_id": "877091fdec0c58d28b7d81094bc73e135a63fe60", "title": "A quantitative analysis of the speedup factors of FPGAs over processors", "text": "The speedup over a microprocessor that can be achieved by implementing some programs on an FPGA has been extensively reported. This paper presents an analysis, both quantitative and qualitative, at the architecture level of the components of this speedup. Obviously, the spatial parallelism that can be exploited on the FPGA is a big component. By itself, however, it does not account for the whole speedup.In this paper we experimentally analyze the remaining components of the speedup. We compare the performance of image processing application programs executing in hardware on a Xilinx Virtex E2000 FPGA to that on three general-purpose processor platforms: MIPS, Pentium III and VLIW. The question we set out to answer is what is the inherent advantage of a hardware implementation over a von Neumann platform. On the one hand, the clock frequency of general-purpose processors is about 20 times that of typical FPGA implementations. On the other hand, the iteration level parallelism on the FPGA is one to two orders of magnitude that on the CPUs. In addition to these two factors, we identify the efficiency advantage of FPGAs as an important factor and show that it ranges from 6 to 47 on our test benchmarks. We also identify some of the components of this factor: the streaming of data from memory, the overlap of control and data flow and the elimination of some instruction on the FPGA. The results provide a deeper understanding of the tradeoff between system complexity and performance when designing Configurable SoC as well as designing software for CSoC. They also help understand the one to two orders of magnitude in speedup of FPGAs over CPU after accounting for clock frequencies."}
{"_id": "e960842f887e5abdf1a29cf0ef3d3f7c86991adc", "title": "Computational Complexity of Linear Large Margin Classification With Ramp Loss", "text": "Minimizing the binary classification error with a linear model leads to an NP-hard problem. In practice, surrogate loss functions are used, in particular loss functions leading to large margin classification such as the hinge loss and the ramp loss. The intuitive large margin concept is theoretically supported by generalization bounds linking the expected classification error to the empirical margin error and the complexity of the considered hypotheses class. This article addresses the fundamental question about the computational complexity of determining whether there is a hypotheses class with a hypothesis such that the upper bound on the generalization error is below a certain value. Results of this type are important for model comparison and selection. This paper takes a first step and proves that minimizing a basic margin-bound is NP-hard when considering linear hypotheses and the \u03c1margin loss function, which generalizes the ramp loss. This result directly implies the hardness of ramp loss minimization."}
{"_id": "648b03cfa3fa899019ebc418923fa7afdcd64828", "title": "Effectiveness of Chinese massage therapy (Tui Na) for chronic low back pain: study protocol for a randomized controlled trial", "text": "BACKGROUND\nLow back pain is a common, disabling musculoskeletal disorder in both developing and developed countries. Although often recommended, the potential efficacy of massage therapy in general, and Chinese massage (tuina) in particular, for relief of chronic low back pain (CLBP) has not been fully established due to inadequate sample sizes, low methodological quality, and subclinical dosing regimens of trials to date. Thus, the purpose of this randomized controlled trial (RCT) is to evaluate the comparative effectiveness of tuina massage therapy versus conventional analgesics for CLBP.\n\n\nMETHODS/DESIGN\nThe present study is a single center, two-arm, open-label RCT. A total of 150 eligible CLBP patients will be randomly assigned to either a tuina treatment group or a conventional drug control group in a 1:1 ratio. Patients in the tuina group receive a 20 minutes, 4-step treatment protocol which includes both structural and relaxation massage, administered in 20 sessions over a period of 4 weeks. Patients in the conventional drug control group are instructed to take a specific daily dose of ibuprofen. The primary outcome measure is the change from baseline back pain and function, measured by Roland-Morris Disability Questionnaire, at two months. Secondary outcome measures include the visual analogue scale, Japanese orthopedic association score (JOAS), and McGill pain questionnaire.\n\n\nDISCUSSION\nThe design and methodological rigor of this trial will allow for collection of valuable data to evaluate the efficacy of a specific tuina protocol for treating CLBP. This trial will therefore contribute to providing a solid foundation for clinical treatment of CLBP, as well as future research in massage therapy.\n\n\nTRIAL REGISTRATION\nThis trial was registered with ClinicalTrials.gov of the National Institute of Health on 22 October 2013 (http://NCT01973010)."}
{"_id": "7826bc81ffed9c1f342616df264c92d6f732f4dd", "title": "Autonomic Communications in Software-Driven Networks", "text": "Autonomic communications aim to provide the quality-of-service in networks using self-management mechanisms. It inherits many characteristics from autonomic computing, in particular, when communication systems are running as specialized applications in software-defined networking (SDN) and network function virtualization (NFV)-enabled cloud environments. This paper surveys autonomic computing and communications in the context of software-driven networks, i.e., networks based on SDN/NFV concepts. Autonomic communications create new challenges in terms of security, operations, and business support. We discuss several goals, research challenges, and development issues on self-management mechanisms and architectures in software-driven networks. This paper covers multiple perspectives of autonomic communications in software-driven networks, such as automatic testing, integration, and deployment of network functions. We also focus on self-management and optimization, which make use of machine learning techniques."}
{"_id": "377fa49c9ead7b9604993e984d5ab14888188b1d", "title": "Coordinated Control of the Energy Router-Based Smart Home Energy Management System", "text": "Home area energy networks will be an essential part of the future Energy Internet in terms of energy saving, demand-side management and stability improvement of the distribution network, while an energy router will be the perfect choice to serve as an intelligent and multi-functional energy interface between the home area energy network and power grid. This paper elaborates on the design, analysis and implementation of coordinated control of the low-voltage energy router-based smart home energy management system (HEMS). The main contribution of this paper is to develop a novel solution to make the energy router technically feasible and practical for the HEMS to make full use of the renewable energy sources (RESs), while maintaining \u201coperational friendly and beneficial\u201d to the power grid. The behaviors of the energy router-based HEMS in correlation with the power grid are investigated, then the coordinated control scheme composed of a reference voltage and current compensation strategy and a fuzzy logic control-based power management strategy is developed. The system model is built on the MATLAB/Simulink platform, simulation results have demonstrated that the presented control scheme is a strong performer in making full use of the RES generations for the HEMS while maintaining the operational stability of the whole system, as well as in collaboration with the power grid to suppress the impact of RES output fluctuations and load consumption variations."}
{"_id": "e027edb12295a264dee7370a57c8a22e3106474a", "title": "The brain's conversation with itself: neural substrates of dialogic inner speech.", "text": "Inner speech has been implicated in important aspects of normal and atypical cognition, including the development of auditory hallucinations. Studies to date have focused on covert speech elicited by simple word or sentence repetition, while ignoring richer and arguably more psychologically significant varieties of inner speech. This study compared neural activation for inner speech involving conversations ('dialogic inner speech') with single-speaker scenarios ('monologic inner speech'). Inner speech-related activation differences were then compared with activations relating to Theory-of-Mind (ToM) reasoning and visual perspective-taking in a conjunction design. Generation of dialogic (compared with monologic) scenarios was associated with a widespread bilateral network including left and right superior temporal gyri, precuneus, posterior cingulate and left inferior and medial frontal gyri. Activation associated with dialogic scenarios and ToM reasoning overlapped in areas of right posterior temporal cortex previously linked to mental state representation. Implications for understanding verbal cognition in typical and atypical populations are discussed."}
{"_id": "9fb5d6f97ba6ffa35a7bb8a013474f991249a813", "title": "Judgments of genuine, suppressed, and faked facial expressions of pain.", "text": "The process of discriminating among genuine, suppressed, and faked expressions of pain was examined. Untrained judges estimated the severity of pain being experienced when viewing videotaped facial expressions of chronic pain patients undergoing a painful diagnostic test or dissimulating reactions. Verbal feedback as to whether pain was experienced also was provided, so as to be either consistent or inconsistent with the facial expression. Judges were able to distinguish genuine pain faces from baseline expressions but, relative to genuine pain faces, attributed more pain to faked faces and less pain to suppressed ones. Advance warning of deception did not improve discrimination but led to a more conservative or nonempathic judging style. Verbal feedback increased or decreased judgments, as appropriate, but facial information consistently was assigned greater weight. An augmenting model of the judgment process that attaches considerable importance to the context in which information is provided was supported."}
{"_id": "adf0d7a1967a14be8f881ab71449015aff720755", "title": "SRFeat: Single Image Super-Resolution with Feature Discrimination", "text": "Generative adversarial networks (GANs) have recently been adopted to single image super-resolution (SISR) and showed impressive results with realistically synthesized high-frequency textures. However, the results of such GAN-based approaches tend to include less meaningful high-frequency noise that is irrelevant to the input image. In this paper, we propose a novel GAN-based SISR method that overcomes the limitation and produces more realistic results by attaching an additional discriminator that works in the feature domain. Our additional discriminator encourages the generator to produce structural high-frequency features rather than noisy artifacts as it distinguishes synthetic and real images in terms of features. We also design a new generator that utilizes long-range skip connections so that information between distant layers can be transferred more effectively. Experiments show that our method achieves the state-of-the-art performance in terms of both PSNR and perceptual quality compared to recent GAN-based methods."}
{"_id": "103f1b70ccc04372da643f3ae16acbfd975ee5d3", "title": "Time Series Featurization via Topological Data Analysis: an Application to Cryptocurrency Trend Forecasting", "text": "We propose a novel methodology for feature extraction from time series data based on topological data analysis. The proposed procedure applies a dimensionality reduction technique via principal component analysis to the point cloud of the Takens\u2019 embedding from the observed time series and then evaluates the persistence landscape and silhouettes based on the corresponding Rips complex. We define a new notion of Rips distance function that is especially suited for persistence homologies built on Rips complexes and prove stability theorems for it. We use these results to demonstrate in turn some stability properties of the topological features extracted using our procedure with respect to additive noise and sampling. We further apply our method to the problem of trend forecasting for cryptocurrency prices, where we manage to achieve significantly lower error rates than more standard, non TDA-based methodologies in complex pattern classification tasks. We expect our method to provide a new insight on feature engineering for granular, noisy time series data."}
{"_id": "2621b8f63247ea5af03f4ea0e83c3b528238c4a1", "title": "Evaluating STT-RAM as an energy-efficient main memory alternative", "text": "In this paper, we explore the possibility of using STT-RAM technology to completely replace DRAM in main memory. Our goal is to make STT-RAM performance comparable to DRAM while providing substantial power savings. Towards this goal, we first analyze the performance and energy of STT-RAM, and then identify key optimizations that can be employed to improve its characteristics. Specifically, using partial write and row buffer write bypass, we show that STT-RAM main memory performance and energy can be significantly improved. Our experiments indicate that an optimized, equal capacity STT-RAM main memory can provide performance comparable to DRAM main memory, with an average 60% reduction in main memory energy."}
{"_id": "a95436fb5417f16497d90cd2aeb11a0e2873f55f", "title": "Architecting phase change memory as a scalable dram alternative", "text": "Memory scaling is in jeopardy as charge storage and sensing mechanisms become less reliable for prevalent memory technologies, such as DRAM. In contrast, phase change memory (PCM) storage relies on scalable current and thermal mechanisms. To exploit PCM's scalability as a DRAM alternative, PCM must be architected to address relatively long latencies, high energy writes, and finite endurance.\n We propose, crafted from a fundamental understanding of PCM technology parameters, area-neutral architectural enhancements that address these limitations and make PCM competitive with DRAM. A baseline PCM system is 1.6x slower and requires 2.2x more energy than a DRAM system. Buffer reorganizations reduce this delay and energy gap to 1.2x and 1.0x, using narrow rows to mitigate write energy and multiple rows to improve locality and write coalescing. Partial writes enhance memory endurance, providing 5.6 years of lifetime. Process scaling will further reduce PCM energy costs and improve endurance."}
{"_id": "71c2deb5c3b4b0fd1ed68bdda534ec7ea76e845b", "title": "A novel nonvolatile memory with spin torque transfer magnetization switching: spin-ram", "text": "A novel nonvolatile memory utilizing spin torque transfer magnetization switching (STS), abbreviated spin-RAM hereafter, is presented for the first time. The spin-RAM is programmed by magnetization reversal through an interaction of a spin momentum-torque-transferred current and a magnetic moment of memory layers in magnetic tunnel junctions (MTJs), and therefore an external magnetic field is unnecessary as that for a conventional MRAM. This new programming mode has been accomplished owing to our tailored MTJ, which has an oval shape of 100 times 150 nm. The memory cell is based on a 1-transistor and a 1-MTJ (ITU) structure. The 4kbit spin-RAM was fabricated on a 4 level metal, 0.18 mum CMOS process. In this work, writing speed as high as 2 ns, and a write current as low as 200 muA were successfully demonstrated. It has been proved that spin-RAM possesses outstanding characteristics such as high speed, low power and high scalability for the next generation universal memory"}
{"_id": "f03cb33bac64a0d23008c8970d1ced993354ebc6", "title": "Ultra-Thin Phase-Change Bridge Memory Device Using GeSb", "text": "An ultra-thin phase-change bridge (PCB) memory cell, implemented with doped GeSb, is shown with < 100muA RESET current. The device concept provides for simplified scaling to small cross-sectional area (60nm2) through ultra-thin (3nm) films; the doped GeSb phase-change material offers the potential for both fast crystallization and good data retention"}
{"_id": "fae8a785260ac5c34be82fca92a4abef4c30d655", "title": "Phase-change random access memory: A scalable technology", "text": "random access memory: A scalable technology S. Raoux G. W. Burr M. J. Breitwisch C. T. Rettner Y.-C. Chen R. M. Shelby M. Salinga D. Krebs S.-H. Chen H.-L. Lung C. H. Lam Nonvolatile RAM using resistance contrast in phase-change materials [or phase-change RAM (PCRAM)] is a promising technology for future storage-class memory. However, such a technology can succeed only if it can scale smaller in size, given the increasingly tiny memory cells that are projected for future technology nodes (i.e., generations). We first discuss the critical aspects that may affect the scaling of PCRAM, including materials properties, power consumption during programming and read operations, thermal cross-talk between memory cells, and failure mechanisms. We then discuss experiments that directly address the scaling properties of the phase-change materials themselves, including studies of phase transitions in both nanoparticles and ultrathin films as a function of particle size and film thickness. This work in materials directly motivated the successful creation of a series of prototype PCRAM devices, which have been fabricated and tested at phase-change material cross-sections with extremely small dimensions as low as 3 nm \u00b7 20 nm. These device measurements provide a clear demonstration of the excellent scaling potential offered by this technology, and they are also consistent with the scaling behavior predicted by extensive device simulations. Finally, we discuss issues of device integration and cell design, manufacturability, and reliability."}
{"_id": "1a124ed5d7c739727ca60cf11008edafa9e3ecf2", "title": "SamzaSQL: Scalable Fast Data Management with Streaming SQL", "text": "As the data-driven economy evolves, enterprises have come to realize a competitive advantage in being able to act on high volume, high velocity streams of data. Technologies such as distributed message queues and streaming processing platforms that can scale to thousands of data stream partitions on commodity hardware are a response. However, the programming API provided by these systems is often low-level, requiring substantial custom code that adds to the programmer learning curve and maintenance overhead. Additionally, these systems often lack SQL querying capabilities that have proven popular on Big Data systems like Hive, Impala or Presto. We define a minimal set of extensions to standard SQL for data stream querying and manipulation. These extensions are prototyped in SamzaSQL, a new tool for streaming SQL that compiles streaming SQL into physical plans that are executed on Samza, an open-source distributed stream processing framework. We compare the performance of streaming SQL queries against native Samza applications and discuss usability improvements. SamzaSQL is a part of the open source Apache Samza project and will be available for general use."}
{"_id": "8619ce028bd112548f83d3f36290c1b05a8a694e", "title": "Neuro-fuzzy rule generation: survey in soft computing framework", "text": "The present article is a novel attempt in providing an exhaustive survey of neuro-fuzzy rule generation algorithms. Rule generation from artificial neural networks is gaining in popularity in recent times due to its capability of providing some insight to the user about the symbolic knowledge embedded within the network. Fuzzy sets are an aid in providing this information in a more human comprehensible or natural form, and can handle uncertainties at various levels. The neuro-fuzzy approach, symbiotically combining the merits of connectionist and fuzzy approaches, constitutes a key component of soft computing at this stage. To date, there has been no detailed and integrated categorization of the various neuro-fuzzy models used for rule generation. We propose to bring these together under a unified soft computing framework. Moreover, we include both rule extraction and rule refinement in the broader perspective of rule generation. Rules learned and generated for fuzzy reasoning and fuzzy control are also considered from this wider viewpoint. Models are grouped on the basis of their level of neuro-fuzzy synthesis. Use of other soft computing tools like genetic algorithms and rough sets are emphasized. Rule generation from fuzzy knowledge-based networks, which initially encode some crude domain knowledge, are found to result in more refined rules. Finally, real-life application to medical diagnosis is provided."}
{"_id": "436c3119d16ce2e3c243ffe7a4a1a5dc40b128aa", "title": "Dialog Act Modeling for Conversational Speech", "text": "We describe an integrated approach for statistical modeling of discourse structure for natural conversational speech. Our model is based on 42`dialog acts' which were hand-labeled in 1155 conversations from the Switchboard corpus of spontaneous human-to-human telephone speech. We developed several models and algorithms to automatically detect dialog acts from transcribed or automatically recognized words and from prosodic properties of the speech signal, and by using a statistical discourse grammar. All of these components were probabilistic in nature and estimated from data, employing a variety of techniques (hidden Markov models, N-gram language models, maximum entropy estimation, decision tree classiiers, and neural networks). In preliminary studies, we achieved a dialog act labeling accuracy of 65% based on recognized words and prosody, and an accuracy of 72% based on word transcripts. Since humans achieve 84% on this task (with chance performance at 35%) we nd these results encouraging."}
{"_id": "b8ec319b1f5223508267b1d5b677c0796d25ac13", "title": "Learning by Association \u2014 A Versatile Semi-Supervised Training Method for Neural Networks", "text": "In many real-world scenarios, labeled data for a specific machine learning task is costly to obtain. Semi-supervised training methods make use of abundantly available unlabeled data and a smaller number of labeled examples. We propose a new framework for semi-supervised training of deep neural networks inspired by learning in humans. Associations are made from embeddings of labeled samples to those of unlabeled ones and back. The optimization schedule encourages correct association cycles that end up at the same class from which the association was started and penalizes wrong associations ending at a different class. The implementation is easy to use and can be added to any existing end-to-end training setup. We demonstrate the capabilities of learning by association on several data sets and show that it can improve performance on classification tasks tremendously by making use of additionally available unlabeled data. In particular, for cases with few labeled data, our training scheme outperforms the current state of the art on SVHN."}
{"_id": "c26e4b8cd5b8f99718e637fa42d1458275bea0a2", "title": "Plead or Pitch ? The Role of Language in Kickstarter Project Success", "text": "We present an analysis of over 26000 projects from Kickstarter, a popular crowdfunding platform. Specifically, we focus on the language used in project pitches, and how it impacts project success/failure. We train a series of discriminative models for binary success/failure prediction of projects with increasing complexity of linguistic features, and show that successful project pitches are, on average, more emotive, thoughtful and colloquial. In addition, we also present an analysis of pledge rewards in Kickstarter, and how these differ across categories and for successful versus unsuccessful projects."}
{"_id": "238e8b5689d0b49c218e235a3e38d59c57ae6503", "title": "Mobility Management for Femtocells in LTE-Advanced: Key Aspects and Survey of Handover Decision Algorithms", "text": "Support of femtocells is an integral part of the Long Term Evolution - Advanced (LTE-A) system and a key enabler for its wide adoption in a broad scale. Femtocells are short-range, low-power and low-cost cellular stations which are installed by the consumers in an unplanned manner. Even though current literature includes various studies towards understanding the main challenges of interference management in the presence of femtocells, little light has been shed on the open issues of mobility management (MM) in the two-tier macrocell-femtocell network. In this paper, we provide a comprehensive discussion on the key aspects and research challenges of MM support in the presence of femtocells, with the emphasis given on the phases of a) cell identification, b) access control, c) cell search, d) cell selection/reselection, e) handover (HO) decision, and f) HO execution. A detailed overview of the respective MM procedures in the LTE-A system is also provided to better comprehend the solutions and open issues posed in real-life systems. Based on the discussion for the HO decision phase, we subsequently survey and classify existing HO decision algorithms for the two-tier macrocell-femtocell network, depending on the primary HO decision criterion used. For each class, we overview up to three representative algorithms and provide detailed flowcharts to describe their fundamental operation. A comparative summary of the main decision parameters and key features of selected HO decision algorithms concludes this work, providing insights for future algorithmic design and standardization activities."}
{"_id": "2cf9714cb82974c85c99a5f3bfe5cd79de52bd69", "title": "Directing exploratory search: reinforcement learning from user interactions with keywords", "text": "Techniques for both exploratory and known item search tend to direct only to more specific subtopics or individual documents, as opposed to allowing directing the exploration of the information space. We present an interactive information retrieval system that combines Reinforcement Learning techniques along with a novel user interface design to allow active engagement of users in directing the search. Users can directly manipulate document features (keywords) to indicate their interests and Reinforcement Learning is used to model the user by allowing the system to trade off between exploration and exploitation. This gives users the opportunity to more effectively direct their search nearer, further and following a direction. A task-based user study conducted with 20 participants comparing our system to a traditional query-based baseline indicates that our system significantly improves the effectiveness of information retrieval by providing access to more relevant and novel information without having to spend more time acquiring the information."}
{"_id": "be4e6bbf2935d999ce4def488ed537dfd50a2ee3", "title": "Investigations on speaker adaptation of LSTM RNN models for speech recognition", "text": "Recently Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNN) acoustic models have demonstrated superior performance over deep neural networks (DNN) models in speech recognition and many other tasks. Although a lot of work have been reported on DNN model adaptation, very little has been done on LSTM model adaptation. In this paper we present our extensive studies of speaker adaptation of LSTM-RNN models for speech recognition. We investigated different adaptation methods combined with KL-divergence based regularization, where and which network component to adapt, supervised versus unsupervised adaptation and asymptotic analysis. We made a few distinct and important observations. In a large vocabulary speech recognition task, by adapting only 2.5% of the LSTM model parameters using 50 utterances per speaker, we obtained 12.6% WERR on the dev set and 9.1% WERR on the evaluation set over a strong LSTM baseline model."}
{"_id": "32f9fad37daf596563bcf0f5251e7f43c1f0ca58", "title": "A 3D printed low profile magnetic dipole antenna", "text": "An electrically small, low profile magnetic dipole antenna is presented. The proposed antenna is composed of multiple small twisted loops connected in parallel. The simulated radiation efficiency is 87 % at the resonant frequency of 222.5 MHz. The corresponding electrical size ka is 0.276 with the height of 0.0016\u03bb. The prototype is built using a selective laser sintering technology and silver paste painting. The measured result is discussed."}
{"_id": "910c864568bd83719ee5050a9d734be9ba439cd5", "title": "Applying connectionist modal logics to distributed knowledge representation problems", "text": "Neural-Symbolic Systems concern the integration of the symbolic and connectionist paradigms of Artificial Intelligence. Distributed knowledge representation is traditionally seen under a symbolic perspective. In this paper, we show how neural networks can represent distributed symbolic knowledge, acting as multi-agent systems with learning capability (a key feature of neural networks). We apply the framework of Connectionist Modal Logics to well-known testbeds for distributed knowledge representation formalisms, namely the muddy children and the wise men puzzles. Finally, we sketch a full solution to these problems by extending our approach to deal with knowledge evolution over time."}
{"_id": "45864034eba454b115a0cd91e175104010cd14ad", "title": "Epigenetics and Signaling Pathways in Glaucoma", "text": "Glaucoma is the most common cause of irreversible blindness worldwide. This neurodegenerative disease becomes more prevalent with aging, but predisposing genetic and environmental factors also contribute to increased risk. Emerging evidence now suggests that epigenetics may also be involved, which provides potential new therapeutic targets. These three factors work through several pathways, including TGF-\u03b2, MAP kinase, Rho kinase, BDNF, JNK, PI-3/Akt, PTEN, Bcl-2, Caspase, and Calcium-Calpain signaling. Together, these pathways result in the upregulation of proapoptotic gene expression, the downregulation of neuroprotective and prosurvival factors, and the generation of fibrosis at the trabecular meshwork, which may block aqueous humor drainage. Novel therapeutic agents targeting these pathway members have shown preliminary success in animal models and even human trials, demonstrating that they may eventually be used to preserve retinal neurons and vision."}
{"_id": "24d66ec9dd202a6ea02b8723ae9d2fd7ffd32a4a", "title": "BING: Binarized Normed Gradients for Objectness Estimation at 300fps", "text": "Training a generic objectness measure to produce a small set of candidate object windows, has been shown to speed up the classical sliding window object detection paradigm. We observe that generic objects with well-defined closed boundary can be discriminated by looking at the norm of gradients, with a suitable resizing of their corresponding image windows in to a small fixed size. Based on this observation and computational reasons, we propose to resize the window to 8 \u00d7 8 and use the norm of the gradients as a simple 64D feature to describe it, for explicitly training a generic objectness measure. We further show how the binarized version of this feature, namely binarized normed gradients (BING), can be used for efficient objectness estimation, which requires only a few atomic operations (e.g. ADD, BITWISE SHIFT, etc.). Experiments on the challenging PASCAL VOC 2007 dataset show that our method efficiently (300fps on a single laptop CPU) generates a small set of category-independent, high quality object windows, yielding 96.2% object detection rate (DR) with 1, 000 proposals. Increasing the numbers of proposals and color spaces for computing BING features, our performance can be further improved to 99.5% DR."}
{"_id": "9cf389e03a5a23c1db4b14b849552777f238ee50", "title": "A controlled clinical comparison of attention performance in children with ADHD in a virtual reality classroom compared to standard neuropsychological methods.", "text": "In this initial pilot study, a controlled clinical comparison was made of attention perforance in children with attention deficit-hyperactivity disorder (ADHD) in a virtual reality (VR) classroom. Ten boys diagnosed with ADHD and ten normal control boys participated in the study. Groups did not significantly differ in mean age, grade level, ethnicity, or handedness. No participants reported simulator sickness following VR exposure. Children with ADHD exhibited more omission errors, commission errors, and overall body movement than normal control children in the VR classroom. Children with ADHD were more impacted by distraction in the VR classroom. VR classroom measures were correlated with traditional ADHD assessment tools and the flatscreen CPT. Of note, the small sample size incorporated in each group and higher WISC-III scores of normal controls might have some bearing on the overall interpretation of results. These data suggested that the Virtual Classroom had good potential for controlled performance assessment within an ecologically valid environment and appeared to parse out significant effects due to the presence of distraction stimuli."}
{"_id": "6ae8321f2863b79b37e2f0e0d131dc1a80829cf7", "title": "Does Twitter language reliably predict heart disease? A commentary on Eichstaedt et al. (2015a)", "text": "We comment on Eichstaedt et\u00a0al.'s (2015a) claim to have shown that language patterns among Twitter users, aggregated at the level of US counties, predicted county-level mortality rates from atherosclerotic heart disease (AHD), with \"negative\" language being associated with higher rates of death from AHD and \"positive\" language associated with lower rates. First, we examine some of Eichstaedt et al.'s apparent assumptions about the nature of AHD, as well as some issues related to the secondary analysis of online data and to considering counties as communities. Next, using the data files supplied by Eichstaedt et al., we reproduce their regression- and correlation-based models, substituting mortality from an alternative cause of death-namely, suicide-as the outcome variable, and observe that the purported associations between \"negative\" and \"positive\" language and mortality are reversed when suicide is used as the outcome variable. We identify numerous other conceptual and methodological limitations that call into question the robustness and generalizability of Eichstaedt et al.'s claims, even when these are based on the results of their ridge regression/machine learning model. We conclude that there is no good evidence that analyzing Twitter data in bulk in this way can add anything useful to our ability to understand geographical variation in AHD mortality rates."}
{"_id": "458dd9078a518a859e9ae1051b28a19f8dfa72de", "title": "Cascading Bandits for Large-Scale Recommendation Problems", "text": "Most recommender systems recommend a list of items. The user examines the list, from the first item to the last, and often chooses the first attractive item and does not examine the rest. This type of user behavior can be modeled by the cascade model. In this work, we study cascading bandits, an online learning variant of the cascade model where the goal is to recommend K most attractive items from a large set of L candidate items. We propose two algorithms for solving this problem, which are based on the idea of linear generalization. The key idea in our solutions is that we learn a predictor of the attraction probabilities of items from their features, as opposing to learning the attraction probability of each item independently as in the existing work. This results in practical learning algorithms whose regret does not depend on the number of items L. We bound the regret of one algorithm and comprehensively evaluate the other on a range of recommendation problems. The algorithm performs well and outperforms all baselines."}
{"_id": "2e9de0de9aa8ab46a9d3e20fe21472104f42cbbe", "title": "SGD-QN: Careful Quasi-Newton Stochastic Gradient Descent", "text": "TheSGD-QN algorithm is a stochastic gradient descent algorithm that m akes careful use of secondorder information and splits the parameter update into inde pendently scheduled components. Thanks to this design, SGD-QN iterates nearly as fast as a first-order stochastic gradient escent but requires less iterations to achieve the same accuracy. This algorith m won the \u201cWild Track\u201d of the first PASCAL Large Scale Learning Challenge (Sonnenburg et al., 2008 )."}
{"_id": "33ffb3959534d798c12f69391b380b22e7492e3e", "title": "Autonomous human-robot proxemics: socially aware navigation based on interaction potential", "text": "To enable situated human\u2013robot interaction (HRI), an autonomous robot must both understand and control proxemics\u2014the social use of space\u2014to employ natural communication mechanisms analogous to those used by humans. This work presents a computational framework of proxemics based on data-driven probabilistic models of how social signals (speech and gesture) are produced (by a human) and perceived (by a robot). The framework and modelswere implemented as autonomous proxemic behavior systems for sociable robots, including: (1) a sampling-based method for robot proxemic goal state estimation with respect to human\u2013robot distance and orientation parameters, (2) a reactive proxemic controller for goal state realization, and (3) a cost-based trajectory planner for maximizing automated robot speech and gesture recognition rates along a path to the goal state. Evaluation results indicate that the goal state estimation and realization significantly improve upon past work in human\u2013robot proxemics with respect to \u201cinteraction potential\u201d\u2014predicted automated speech and gesture recognition rates as the robot enters into and engages in faceto-face social encounters with a human user\u2014illustrating their efficacy to support richer robot perception and autonomy in HRI. Electronic supplementary material The online version of this article (doi:10.1007/s10514-016-9572-2) contains supplementary material, which is available to authorized users. B Ross Mead rossmead@usc.edu 1 3710 McClintock Avenue, RTH 423, Los Angeles, CA 90089-0781, USA 2 3710 McClintock Avenue, RTH 407, Los Angeles, CA 90089-0781, USA"}
{"_id": "6716a4e9ebb330a08e3cb19b0b574b0e7b429166", "title": "Advanced Maintenance Simulation by Means of Hand-Based Haptic Interfaces", "text": "Aerospace industry has been involved in virtual simulation for design and testing since the birth of virtual reality. Today this industry is showing a growing interest in the development of haptic-based maintenance training applications, which represent the most advanced way to simulate maintenance and repair tasks within a virtual environment by means of a visual-haptic approach. The goal is to allow the trainee to experiment the service procedures not only as a workflow reproduced at a visual level but also in terms of the kinaesthetic feedback involved with the manipulation of tools and components. This study, conducted in collaboration with aerospace industry specialists, is aimed to the development of an immersive virtual capable of immerging the trainees into a virtual environment where mechanics and technicians can perform maintenance simulation or training tasks by directly manipulating 3D virtual models of aircraft parts while perceiving force feedback through the haptic interface. The proposed system is based on ViRstperson, a virtual reality engine under development at the Italian Center for Aerospace Research (CIRA) to support engineering and technical activities such as design-time maintenance procedure validation, and maintenance training. This engine has been extended to support haptic-based interaction, enabling a more complete level of interaction, also in terms of impedance control, and thus fostering the development of haptic knowledge in the user. The user\u2019s \u201csense of touch\u201d within the immersive virtual environment is simulated through an Immersion CyberForce\u00ae hand-based force-feedback device. Preliminary testing of the proposed system seems encouraging."}
{"_id": "274c938a70de1afb9ef1489cab2186c1d699725e", "title": "Electricity Price Forecasting With Extreme Learning Machine and Bootstrapping", "text": "Artificial neural networks (ANNs) have been widely applied in electricity price forecasts due to their nonlinear modeling capabilities. However, it is well known that in general, traditional training methods for ANNs such as back-propagation (BP) approach are normally slow and it could be trapped into local optima. In this paper, a fast electricity market price forecast method is proposed based on a recently emerged learning method for single hidden layer feed-forward neural networks, the extreme learning machine (ELM), to overcome these drawbacks. The new approach also has improved price intervals forecast accuracy by incorporating bootstrapping method for uncertainty estimations. Case studies based on chaos time series and Australian National Electricity Market price series show that the proposed method can effectively capture the nonlinearity from the highly volatile price data series with much less computation time compared with other methods. The results show the great potential of this proposed approach for online accurate price forecasting for the spot market prices analysis."}
{"_id": "99e2a37d09479b29f14b34c81f4f493f40920b7f", "title": "E-TOURISM: AN INNOVATIVE APPROACH FOR THE SMALL AND MEDIUM-SIZED TOURISM ENTERPRISES (SMTES) IN KOREA", "text": "This paper deals with e-tourism, innovation and growth. The Internet is revolutionising the distribution of tourism information and sales. The Korean small and medium-sized tourism enterprises (SMTEs) with well-developed and innovative Web sites can now have \u201cequal Internet access\u201d to international tourism markets. This paper examines problems and solutions related to electronic commerce in the tourism industry and suggests recommendations for successful e-commerce strategies in tourism to be applied by the industry and the government in Korea. Introduction The definitions of tourism innovation (e.g. product, service and technological innovations) remains unclear, with the exception maybe of the Internet. New technologies can produce an essential contribution to tourism development. For tourism businesses, the Internet offers the potential to make information and booking facilities available to large numbers of tourists at relatively low costs. It also provides a tool for communication between tourism suppliers, intermediaries, as well as end-consumers. OECD (2000) revealed that the advent of Internet-based electronic commerce offers considerable opportunities for firms to expand their customer base, enter new product markets and rationalise their business. WTO (2001) also indicated that electronic business offers SMEs the opportunity to undertake their business in new and more cost-effective ways. According to WTO, the Internet is revolutionising the distribution of tourism information and sales. An increasing proportion of Internet users are buying on\u2013line and tourism will gain a larger and larger share of the online commerce market. Obviously, the Internet is having a major impact as a source of information for tourism. However, the SMTEs are facing more stringent impediments to the adoption of new information technology, in particular, e-business. Part of the problem relates to the scale and affordability of information technology, as well as the facility of implementation within rapidly growing and changing organisations. In addition, new solutions configured for large, stable, and internationally-oriented firms do not fit well for small, dynamic, and locally-based tourism firms. Despite these challenges, SMTEs with well-developed and innovative Web sites can now have \u201cequal Internet access\u201d to international tourism markets. This implies equal access to telecom infrastructure, as well as to marketing management and education. According to a UN report (2001), \u201cit is not the cost of being there, on the on-line market place, which must be reckoned with, but the cost of not being there.\u201d It is certain that embracing digital communication and information technology is no longer an option, but a necessity. Thus, one of the most important characteristics of"}
{"_id": "c78537e92ccc4b14939a97ef5345cd7dbacc1c25", "title": "The Effectiveness of Intrinsic and Extrinsic Motivations : A Study of Malaysian Amway Company \u2019 s Direct Sales Forces", "text": "This research utilizes the survey questionnaire data collected in May/June 2010 from 200 Amway Company\u2019s direct sales forces in Klang Valley areas (Malaysia) to analyze the effectiveness of intrinsic and extrinsic motivation in influencing job satisfaction. The research will be analyzed using the well-established correlation analysis, regression analysis, independent sample t-test, and one-way ANOVA. There are four major findings. First, there is a relationship between intrinsic and extrinsic motivations with job satisfaction. According to the correlation value, intrinsic motivation compared to extrinsic motivation tends to contribute more in job satisfaction. Second, there are significant and positive relationship between intrinsic and extrinsic motivations and job satisfaction. Both of the intrinsic and extrinsic motivation is identified as the predictor for job satisfaction. In other words, they are both significantly contribute in better job satisfaction. Third, there is no difference between gender and intrinsic and extrinsic motivations. Hence, gender is not a factor that affects both of the intrinsic and extrinsic motivations. Lastly, the result indicates that there is a difference between age and intrinsic and extrinsic motivations. Therefore, age is the factor that influences on both intrinsic and extrinsic motivations. Last but not least, the results have demonstrated the effectiveness of intrinsic and extrinsic motivations in influencing job satisfaction among the Amway Company\u2019s direct sales forces, as well as establishing appropriate intrinsic motivations may promote higher job satisfaction."}
{"_id": "59d312a4ef11d1fa2b001650decb664a64fcb0a7", "title": "A robust adaptive mixing control for improved forward flight of a tilt-rotor UAV", "text": "This work presents the modeling and control of a tilt-rotor UAV, with tail controlled surfaces, for path tracking with improved forward flight. The dynamic model is obtained using the Euler-Lagrange formulation considering the aerodynamic forces and torques exerted on the horizontal and vertical stabilizers, and fuselage. For control design purposes, the equations of motion are linearized around different operation points to cover a large range of forward velocity. Based on these linearized dynamic models, a mixed H2/H\u221e robust controller is designed for each operation point. Therefore, an adaptive mixing scheme is used to perform an on-line smooth gain-scheduling between them. Simulation results show the control strategy efficiency when the UAV is designated to have a forward acceleration and perform a circular trajectory subject to a wind disturbance."}
{"_id": "7e9100e625ec05c95adc8f05927bfca43258eee7", "title": "Fatigue in soccer: a brief review.", "text": "This review describes when fatigue may develop during soccer games and the potential physiological mechanisms that cause fatigue in soccer. According to time-motion analyses and performance measures during match-play, fatigue or reduced performance seems to occur at three different stages in the game: (1) after short-term intense periods in both halves; (2) in the initial phase of the second half; and (3) towards the end of the game. Temporary fatigue after periods of intense exercise in the game does not appear to be linked directly to muscle glycogen concentration, lactate accumulation, acidity or the breakdown of creatine phosphate. Instead, it may be related to disturbances in muscle ion homeostasis and an impaired excitation of the sarcolemma. Soccer players' ability to perform maximally is inhibited in the initial phase of the second half, which may be due to lower muscle temperatures compared with the end of the first half. Thus, when players perform low-intensity activities in the interval between the two halves, both muscle temperature and performance are preserved. Several studies have shown that fatigue sets in towards the end of a game, which may be caused by low glycogen concentrations in a considerable number of individual muscle fibres. In a hot and humid environment, dehydration and a reduced cerebral function may also contribute to the deterioration in performance. In conclusion, fatigue or impaired performance in soccer occurs during various phases in a game, and different physiological mechanisms appear to operate in different periods of a game."}
{"_id": "16326dd240c9a3ffd02237497ac868b6edb3147b", "title": "Combining Gradient and Albedo Data for Rotation Invariant Classification of 3D Surface Texture", "text": "We present a new texture classification scheme which is invariant to surface-rotation. Many texture classification approaches have been presented in the past that are image-rotation invariant, However, image rotation is not necessarily the same as surface rotation. We have therefore developed a classifier that uses invariants that are derived from surface properties rather than image properties. Previously we developed a scheme that used surface gradient (normal) fields estimated using photometric stereo. In this paper we augment these data with albedo information and an also employ an additional feature set: the radial spectrum. We used 30 real textures to test the new classifier. A classification accuracy of 91% was achieved when albedo and gradient 1D polar and radial features were combined. The best performance was also achieved by using 2D albedo and gradient spectra. The classification accuracy is 99%."}
{"_id": "318344b8015a92f23e508d6476f8243c74ff02ee", "title": "A Software-Based Sonar Ranging Sensor for Smart Phones", "text": "We live in a 3-D world. However, the smart phones that we use every day are incapable of sensing depth, without the use of custom hardware. By creating new depth sensors, we can provide developers with the tools that they need to create immersive mobile applications that take advantage of the 3-D nature of our world. In this paper, we propose a new sonar sensor for smart phones. This sonar sensor does not require any additional hardware, and utilizes the phone's microphone and rear speaker. The sonar sensor calculates distances by measuring the elapsed time between the initial pulse and its reflection. We evaluate the accuracy of the sonar sensor by using it to measure the distance from the phone to an object. We found that we were able to measure the distances of objects accurately with an error bound of 12 cm."}
{"_id": "1adc43eb3cb3ee8b4f2289ed3533b6f550a3fefa", "title": "Recent Advances in Augmented Reality", "text": "The field of Augmented Reality (AR) has existed for just over one decade, but the growth and progress in the past few years has been remarkable. In 1997, the first author published a survey [3] (based on a 1995 SIGGRAPH course lecture) that defined the field, described many problems, and summarized the developments up to that point. Since then, the field has grown rapidly. In the late 1990\u0092s, several conferences specializing in this area were started, including the International Workshop and Symposium on Augmented Reality [29], the International Symposium on Mixed Reality [30], and the Designing Augmented Reality Environments workshop. Some wellfunded interdisciplinary consortia were formed that focused on AR, notably the Mixed Reality Systems Laboratory [50] in Japan and Project ARVIKA [61] in Germany. A freely-available software toolkit (the ARToolkit) for rapidly building AR applications is now available [2]. Because of this wealth of new developments, an updated survey is needed to guide and encourage further research in this exciting area."}
{"_id": "3244243bd0ab1790dfda1128390fd56674c24389", "title": "Exploring the Benefits of Augmented Reality Documentation for Maintenance and Repair", "text": "We explore the development of an experimental augmented reality application that provides benefits to professional mechanics performing maintenance and repair tasks in a field setting. We developed a prototype that supports military mechanics conducting routine maintenance tasks inside an armored vehicle turret, and evaluated it with a user study. Our prototype uses a tracked headworn display to augment a mechanic's natural view with text, labels, arrows, and animated sequences designed to facilitate task comprehension, localization, and execution. A within-subject controlled user study examined professional military mechanics using our system to complete 18 common tasks under field conditions. These tasks included installing and removing fasteners and indicator lights, and connecting cables, all within the cramped interior of an armored personnel carrier turret. An augmented reality condition was tested against two baseline conditions: the same headworn display providing untracked text and graphics and a fixed flat panel display representing an improved version of the laptop-based documentation currently employed in practice. The augmented reality condition allowed mechanics to locate tasks more quickly than when using either baseline, and in some instances, resulted in less overall head movement. A qualitative survey showed that mechanics found the augmented reality condition intuitive and satisfying for the tested sequence of tasks."}
{"_id": "56695dbdf9c49a4d44961bd97b16e3153144f906", "title": "ENHANCING THE TOURISM EXPERIENCE THROUGH MOBILE AUGMENTED REALITY: CHALLENGES AND PROSPECTS", "text": "The paper discusses the use of Augmented Reality (AR) applications for the needs of tourism. It describes the technology\u2019s evolution from pilot applications into commercial mobile applications. We address the technical aspects of mobile AR applications development, emphasizing on the technologies that render the delivery of augmented reality content possible and experientially superior. We examine the state of the art, providing an analysis concerning the development and the objectives of each application. Acknowledging the various technological limitations hindering AR\u2019s substantial end-user adoption, the paper proposes a model for developing AR mobile applications for the field of tourism, aiming to release AR\u2019s full potential within the field."}
{"_id": "87806d658ee71aff6c595e412d0a96187a2dffa6", "title": "A head-mounted three dimensional display", "text": "The fundamental idea behind the three-dimensional display is to present the user with a perspective image which changes as he moves. The retinal image of the real objects which we see is, after all, only two-dimensional. Thus if we can place suitable two-dimensional images on the observer's retinas, we can create the illusion that he is seeing a three-dimensional object. Although stereo presentation is important to the three-dimensional illusion, it is less important than the change that takes place in the image when the observer moves his head. The image presented by the three-dimensional display must change in exactly the way that the image of a real object would change for similar motions of the user's head. Psychologists have long known that moving perspective images appear strikingly three-dimensional even without stereo presentation; the three-dimensional display described in this paper depends heavily on this \"kinetic depth effect.\""}
{"_id": "d1baa7bf1dd81422c86954447b8ad570539f93be", "title": "Trends in augmented reality tracking, interaction and display: A review of ten years of ISMAR", "text": "Although Augmented Reality technology was first developed over forty years ago, there has been little survey work giving an overview of recent research in the field. This paper reviews the ten-year development of the work presented at the ISMAR conference and its predecessors with a particular focus on tracking, interaction and display research. It provides a roadmap for future augmented reality research which will be of great value to this relatively young field, and also for helping researchers decide which topics should be explored when they are beginning their own studies in the area."}
{"_id": "11385b86db4e49d85abaf59940c536aa17ff44b0", "title": "Dynamic program slicing methods", "text": "A dynamic program slice is that part of a program that \u2018\u2018affects\u2019\u2019 the computation of a variable of interest during program execution on a specific program input. Dynamic program slicing refers to a collection of program slicing methods that are based on program execution and may significantly reduce the size of a program slice because run-time information, collected during program execution, is used to compute program slices. Dynamic program slicing was originally proposed only for program debugging, but its application has been extended to program comprehension, software testing, and software maintenance. Different types of dynamic program slices, together with algorithms to compute them, have been proposed in the literature. In this paper we present a classification of existing dynamic slicing methods and discuss the algorithms to compute dynamic slices. In the second part of the paper, we compare the existing methods of dynamic slice computation. \uf6d9 1998 Elsevier Science B.V. All rights reserved."}
{"_id": "63ad6c788f36e38e615abaadf29279c8e42f0742", "title": "Group Linguistic Bias Aware Neural Response Generation", "text": "For practical chatbots, one of the essential factor for improving user experience is the capability of customizing the talking style of the agents, that is, to make chatbots provide responses meeting users\u2019 preference on language styles, topics, etc. To address this issue, this paper proposes to incorporate linguistic biases, which implicitly involved in the conversation corpora generated by human groups in the Social Network Services (SNS), into the encoderdecoder based response generator. By attaching a specially designed neural component to dynamically control the impact of linguistic biases in response generation, a Group Linguistic Bias Aware Neural Response Generation (GLBA-NRG) model is eventually presented. The experimental results on the dataset from the Chinese SNS show that the proposed architecture outperforms the current response generating models by producing both meaningful and vivid responses with customized styles."}
{"_id": "ce1c2fa34fa42b1e6f2daa47b6accb5f78c2706b", "title": "Entity resolution using inferred relationships and behavior", "text": "We present a method for entity resolution that infers relationships between observed identities and uses those relationships to aid in mapping identities to underlying entities. We also introduce the idea of using graphlets for entity resolution. Graphlets are collections of small graphs that can be used to characterize the \u201crole\u201d of a node in a graph. The idea is that graphlets can provide a richer set of features to characterize identities. We validate our method on standard author datasets, and we further evaluate our method using data collected from Twitter. We find that inferred relationships and graphlets are useful for entity resolution."}
{"_id": "a934b9519dd6df9378b4f0c8fff29baebf36d6e5", "title": "A Hidden Semi-Markov Model-Based Speech Synthesis System", "text": "Recently, a statistical speech synthesis system based on the hidden Markov model (HMM) has been proposed. In this system, spectrum, excitation, and duration of human speech are modeled simultaneously by context-dependent HMMs and speech parameter vector sequences are generated from the HMMs themselves. This system defines a speech synthesis problem in a generative model framework and solves it using the maximum likelihood (ML) criterion. However, there is an inconsistency: although state duration models are explicitly used in the synthesis part of the system, they have not been incorporated in its training part. This inconsistency may degrade the naturalness of synthesized speech. In the present paper, a statistical speech synthesis system based on a hidden semi-Markov model (HSMM), which can be viewed as an HMM with explicit state duration models, is developed and evaluated. The use of HSMMs allows us to incorporate the state duration models explicitly not only in the synthesis part but also in the training part of the system and resolves the inconsistency in the HMM-based speech synthesis system. Subjective listening test results show that the use of HSMMs improves the reported naturalness of synthesized speech. key words: hidden Markov model, hidden semi-Markov model, HMMbased speech synthesis"}
{"_id": "30ffc7c6aab3bbd1f5af69fb97a7d151509d0a52", "title": "Estimating mobile application energy consumption using program analysis", "text": "Optimizing the energy efficiency of mobile applications can greatly increase user satisfaction. However, developers lack viable techniques for estimating the energy consumption of their applications. This paper proposes a new approach that is both lightweight in terms of its developer requirements and provides fine-grained estimates of energy consumption at the code level. It achieves this using a novel combination of program analysis and per-instruction energy modeling. In evaluation, our approach is able to estimate energy consumption to within 10% of the ground truth for a set of mobile applications from the Google Play store. Additionally, it provides useful and meaningful feedback to developers that helps them to understand application energy consumption behavior."}
{"_id": "855972b98b09ffb4ada4c3b933d2c848e8e72d6d", "title": "Market-Oriented Cloud Computing: Vision, Hype, and Reality for Delivering IT Services as Computing Utilities", "text": "This keynote paper: presents a 21st century vision of computing; identifies various computing paradigms promising to deliver the vision of computing utilities; defines Cloud computing and provides the architecture for creating market-oriented Clouds by leveraging technologies such as VMs; provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; presents some representative Cloud platforms especially those developed in industries along with our current work towards realising market-oriented resource allocation of Clouds by leveraging the 3rd generation Aneka enterprise Grid technology; reveals our early thoughts on interconnecting Clouds for dynamically creating an atmospheric computing environment along with pointers to future community research; and concludes with the need for convergence of competing IT paradigms for delivering our 21st century vision."}
{"_id": "1f377bd3499defddad2a1b039febcb6d034a1168", "title": "Cogeneration of mechanical, electrical, and software designs for printable robots from structural specifications", "text": "Designing and fabricating new robotic systems is typically limited to experts, requiring engineering background, expensive tools, and considerable time. In contrast, to facilitate everyday users developing custom robots for personal use, this work presents a new system to easily create printable foldable robots from high-level structural specifications. A user merely needs to select electromechanical components from a library of basic building blocks and pre-designed mechanisms, then connect them to define custom robot assemblies. The system then generates complete mechanical drawings suitable for fabrication, instructions for the assembly of electronics, and software to control and drive the final robot. Several robots designed in this manner demonstrate the ability and versatility of this process."}
{"_id": "30445e21ab927234034306acb15e1cf3a4142e0c", "title": "Characterization of silicone rubber based soft pneumatic actuators", "text": "Conventional pneumatic actuators have been a popular choice due to their decent force/torque output. Nowadays, new generation of pneumatic actuator made out of highly compliant elastomers, which we call soft pneumatic actuators (SPA), are drawing increasing attention due to their ease of fabrication, high customizability and innately softness. However, there is no effective method presented to characterize and understand these actuators, such as to measure the force and torque output, range of motion and the speed of actuation. In this work, we present two types of SPAs: bending and rotary actuators. In addition, we have developed two measurement setups to characterize actuators of different geometries. The measured force/torque outputs of different actuators are presented and analyzed. Step responses to certain pressure input are presented and discussed. A simple model is presented to provide physical insight to the observed behavior of the soft actuators. This work provides the basis for designing customized SPAs with application-specific requirements."}
{"_id": "cb4c33639dbfdee7254dfeb486b7764b2e9aa358", "title": "Numerical Recipes 3rd Edition: The Art of Scientific Computing", "text": "numerical recipes 3rd edition the art of scientific computing numerical recipes 3rd edition the art of scientific numerical recipes 3rd edition the art of scientific by william h press numerical recipes 3rd edition the art numerical recipes source code cd rom 3rd edition the art numerical recipes 3rd edition the art of scientific numerical recipes with source code cd rom 3rd edition the numerical recipes with source code cd rom 3rd edition the numerical recipes with source code cd rom 3rd edition the numerical recipes assets numerical recipes 3rd edition the art of scientific computing free ebooks numerical recipes 3rd edition the art of numerical recipes 3rd edition the art of scientific numerical recipes with source code cd rom 3rd edition the numerical recipes 3rd edition the art of scientific numerical recipes with source code cd rom 3rd edition the numerical recipes source code cd rom 3rd edition the art numerical recipes source code cd rom 3rd edition the art numerical recipes 3rd edition the art of scientific free ebooks numerical recipes with source code cd rom 3rd numerical recipes the art of scientific computing numerical recipesthe art of scientific computing 3th third numerical recipesthe art of scientific computing 3th third numerical recipes in c the art of scientific computing by william h press numerical recipes 3rd edition the art numerical recipesthe art of scientific computing 3th third numerical recipes in c example book the art of scientific numerical recipes 3rd edition the art of scientific by william h press numerical recipes 3rd edition the art numerical recipes in c the art of scientific computing numerical recipes the art of scientific computing fortran"}
{"_id": "19b6edcfd9b82d5d3538d8950092fe87061bf35a", "title": "A new approach to attribute reduction of consistent and inconsistent covering decision systems with covering rough sets", "text": "Traditional rough set theory is mainly used to extract rules from and reduce attributes in databases in which attributes are characterized by partitions, while the covering rough set theory, a generalization of traditional rough set theory, does the same yet characterizes attributes by covers. In this paper, we propose a way to reduce the attributes of covering decision systems, which are databases characterized by covers. First, we define consistent and inconsistent covering decision systems and their attribute reductions. Then, we state the sufficient and the necessary conditions for reduction. Finally, we use a discernibility matrix to design algorithms that compute all the reducts of consistent and inconsistent covering decision systems. Numerical tests on four public data sets show that the proposed attribute reductions of covering decision systems accomplish better classification performance than those of traditional rough sets. 2007 Elsevier Inc. All rights reserved."}
{"_id": "c1972f1c721f122185cf893ed8530f2425fe32b1", "title": "Recognizing tactic patterns in broadcast basketball video using player trajectory", "text": "The explosive growth of the sports fandom inspires much research on manifold sports video analyses and applications. The audience, sports fans, and even professionals require more than traditional highlight extraction or semantic summarization. Computer-assisted sports tactic analysis is inevitably in urging demand. Recognizing tactic patterns in broadcast basketball video is a challenging task due to its complicated scenes, varied camera motion, frequently occlusions between players, etc. In basketball games, the action screen means that an offensive player perform a blocking move via standing beside or behind a defender for freeing a teammate to shoot, to receive a pass, or to drive in for scoring. In this paper, we propose a screen-strategy recognition system capable of detecting and classifying screen patterns in basketball video. The proposed system automatically detects the court lines for camera calibration, tracks players, and discriminates the offensive/defensive team. Player trajectories are calibrated to the realworld court model for screen pattern recognition. Our experiments on broadcast basketball videos show promising results. Furthermore, the extracted player trajectories and the recognized screen patterns visualized on a court model indeed assist the coach/players or the fans in comprehending the tactics executed in basketball games informatively and efficiently. 2012 Elsevier Inc. All rights reserved."}
{"_id": "13dd9ab6da60e3fff3a75e2b22017da771c80da1", "title": "Integration of it governance and security risk management: A systematic literature review", "text": "GRC is an umbrella acronym covering the three disciplines of governance, risk management and compliance. In this context, IT GRC is the subset of GRC dealing with IT aspects of GRC. The main challenge of GRC is to have an approach as integrated as possible of the three domains. The objective of our paper is to study one facet of IT GRC: the links and integration between IT governance and risk management that we consider today as the least integrated. To do so, the method followed in this paper is a systematic literature review, in order to identify the existing research works in this field. The resulting contribution of the paper is a set of recommendations established for practitioners and for researchers on how better deal with the integration between IT governance and risk management."}
{"_id": "83e351642cc1ff946af9caed3bb33f65953a6156", "title": "Emotional intelligence: the most potent factor in the success equation.", "text": "Star performers can be differentiated from average ones by emotional intelligence. For jobs of all kinds, emotional intelligence is twice as important as a person's intelligence quotient and technical skills combined. Excellent performance by top-level managers adds directly to a company's \"hard\" results, such as increased profitability, lower costs, and improved customer retention. Those with high emotional intelligence enhance \"softer\" results by contributing to increased morale and motivation, greater cooperation, and lower turnover. The author discusses the five components of emotional intelligence, its role in facilitating organizational change, and ways to increase an organization's emotional intelligence."}
{"_id": "67aafb27e6b0d970f1d1bab9b523b1d6569609d9", "title": "A comparative analysis of biclustering algorithms for gene expression data", "text": "The need to analyze high-dimension biological data is driving the development of new data mining methods. Biclustering algorithms have been successfully applied to gene expression data to discover local patterns, in which a subset of genes exhibit similar expression levels over a subset of conditions. However, it is not clear which algorithms are best suited for this task. Many algorithms have been published in the past decade, most of which have been compared only to a small number of algorithms. Surveys and comparisons exist in the literature, but because of the large number and variety of biclustering algorithms, they are quickly outdated. In this article we partially address this problem of evaluating the strengths and weaknesses of existing biclustering methods. We used the BiBench package to compare 12 algorithms, many of which were recently published or have not been extensively studied. The algorithms were tested on a suite of synthetic data sets to measure their performance on data with varying conditions, such as different bicluster models, varying noise, varying numbers of biclusters and overlapping biclusters. The algorithms were also tested on eight large gene expression data sets obtained from the Gene Expression Omnibus. Gene Ontology enrichment analysis was performed on the resulting biclusters, and the best enrichment terms are reported. Our analyses show that the biclustering method and its parameters should be selected based on the desired model, whether that model allows overlapping biclusters, and its robustness to noise. In addition, we observe that the biclustering algorithms capable of finding more than one model are more successful at capturing biologically relevant clusters."}
{"_id": "8f8c19ed9cf6ae02e3dbf1b7dadfd8f6660a6119", "title": "Digital signature based on PlayGamal algorithm", "text": "To keep the security and integrity of digital messages, can be applied cryptography method. Digital Signature is one of the vital digital data, so the secrecy must be maintained at the time of distribution and storage of data. This study aims to combine modern cryptographic methods with ElGamal traditional algorithm method is called PlayGamal Cipher Algorithm. Moreover. the method will be implement as application for digital signature. The proposed method will be compare with the existing El-Gamal to know the performance during encryption and decryption process."}
{"_id": "ea5cf5a6d9dcde96999e215f3f0196cd468f2bf0", "title": "Chiron: a parallel engine for algebraic scientific workflows", "text": "Large-scale scientific experiments based on computer simulations are typically modeled as scientific workflows, which eases the chaining of different programs. These scientific workflows are defined, executed, and monitored by scientific workflowmanagement systems (SWfMS). As these experiments manage large amounts of data, it becomes critical to execute them in high-performance computing environments, such as clusters, grids, and clouds. However, few SWfMS provide parallel support. The ones that do so are usually labor-intensive for workflow developers and have limited primitives to optimize workflow execution. To address these issues, we developed workflow algebra to specify and enable the optimization of parallel execution of scientific workflows. In this paper, we show how the workflow algebra is efficiently implemented in Chiron, an algebraic based parallel scientific workflow engine. Chiron has a unique native distributed provenance mechanism that enables runtime queries in a relational database. We developed two studies to evaluate the performance of our algebraic approach implemented in Chiron; the first study compares Chiron with different approaches, whereas the second one evaluates the scalability of Chiron. By analyzing the results, we conclude that Chiron is efficient in executing scientific workflows, with the benefits of declarative specification and runtime provenance support. Copyright \u00a9 2013 John Wiley & Sons, Ltd."}
{"_id": "934e0abba8b3cff110d6272615909440f5b92763", "title": "Change from baseline and analysis of covariance revisited.", "text": "The case for preferring analysis of covariance (ANCOVA) to the simple analysis of change scores (SACS) has often been made. Nevertheless, claims continue to be made that analysis of covariance is biased if the groups are not equal at baseline. If the required equality were in expectation only, this would permit the use of ANCOVA in randomized clinical trials but not in observational studies. The discussion is related to Lord's paradox. In this note, it is shown, however that it is not a necessary condition for groups to be equal at baseline, not even in expectation, for ANCOVA to provide unbiased estimates of treatment effects. It is also shown that although many situations can be envisaged where ANCOVA is biased it is very difficult to imagine circumstances under which SACS would then be unbiased and a causal interpretation could be made."}
{"_id": "fac9e24c9dc285b71439c483698e423202bfeb43", "title": "Determinants of Behavioral Intention to Mobile Banking Case From Yemen", "text": "Nowadays, new tools and technologies are emerging rapidly. They are often used cross-culturally before being tested for suitability and validity. However, they must be validated to ensure that they work with all users, not just part of them. Mobile banking (as a new technology tool) has been introduced assuming that it performs well concerning authentication, among all members of the society. Our research aimed to evaluate authentication mobile banking user acceptance, through Technology Acceptance Model (TAM), in Arabic countries, namely Yemen. The results confirm the previous studies that have shown the importance of perceived ease of use and perceived usefulness. Furthermore, perceived ease of use plays a determinant role. KeywordsTechnology acceptance models; Mobile Banking;"}
{"_id": "d29ef3a422b589c33e123d37d66c97316f2c1c92", "title": "Reliability and validity of the Evaluation Tool of Children's Handwriting-Cursive (ETCH-C) using the general scoring criteria.", "text": "OBJECTIVES\nTo determine the reliability and aspects of validity of the Evaluation Tool of Children's Handwriting-Cursive (ETCH-C; Amundson, 1995), using the general scoring criteria, when assessing children who use alternative writing scripts.\n\n\nMETHOD\nChildren in Years 5 and 6 with handwriting problems and a group of matched control participants from their respective classrooms were assessed with the ETCH-C twice, 4 weeks apart.\n\n\nRESULTS\nTotal Letter scores were most reliable; more variability should be expected for Total Word scores. Total Numeral scores showed unacceptable reliability levels and are not recommended. We found good discriminant validity for Letter and Word scores and established cutoff scores to distinguish children with and without handwriting dysfunction (Total Letter <90%, Total Word <85%).\n\n\nCONCLUSION\nThe ETCH-C, using the general scoring criteria, is a reliable and valid test of handwriting for children using alternative scripts."}
{"_id": "76d4f741a0321bad1f080a6c4d41996a381d3c80", "title": "Design, Analysis, and Optimization of Ironless Stator Permanent Magnet Machines", "text": "This paper presents a methodology for the design, analysis, and graphical optimization of ironless brushless permanent magnet machines primarily for generator applications. Magnetic flux in this class of electromagnetic machine tends to be 3-D due to the lack of conventional iron structures and the absence of a constrained magnetic flux path. The proposed methodology includes comprehensive geometric, magnetic and electrical dimensioning followed by detailed 3-D finite element (FE) modeling of a base machine for which parameters are determined. These parameters are then graphically optimized within sensible volumetric and electromagnetic constraints to arrive at improved design solutions. This paper considers an ironless machine design to validate the 3-D FE model to optimize power conversion for the case of a low-speed, ironless stator generator. The machine configuration investigated in this paper has concentric arrangement of the rotor and the stator, solenoid-shaped coils, and a simple mechanical design considered for ease of manufacture and maintenance. Using performance and material effectiveness as the overriding optimization criteria, this paper suggests optimal designs configurations featuring two different winding arrangements, i.e., radial and circumferentially mounted. Performance and material effectiveness of the studied ironless stator designs are compared to published ironless machine configurations."}
{"_id": "852c633882927affd1a951e81e6e30251bb40867", "title": "Radiation Efficiency Measurement Method for Passive UHF RFID Dipole Tag Antennas", "text": "Concurrently with the continuously developing radio frequency identification (RFID) technology, new types of tag antenna materials and structures are emerging to fulfill the requirements encountered within the new application areas. In this work, a radiation efficiency measurement method is developed and verified for passive ultra-high frequency (UHF) RFID dipole tag antennas. In addition, the measurement method is applied to measure the radiation efficiency of sewed dipole tag antennas for wearable body-centric wireless communication applications. The acquired information from measurements can be used to characterize tag antenna material structures losses and to further both improve and optimize tag antenna performance and reliability."}
{"_id": "1886edb4e771c1c0aa7bae360d7f3de23ac4ac8e", "title": "Failure Trends in a Large Disk Drive Population", "text": "It is estimated that over 90% of all new information produced in the world is being stored on magnetic media, most of it on hard disk drives. Despite their importance, there is relatively little published work on the failure patterns of disk drives, and the key factors that affect their lifetime. Most available data are either based on extrapolation from accelerated aging experiments or from relatively modest sized field studies. Moreover, larger population studies rarely have the infrastructure in place to collect health signals from components in operation, which is critical information for detailed failure analysis. We present data collected from detailed observations of a large disk drive population in a production Internet services deployment. The population observed is many times larger than that of previous studies. In addition to presenting failure statistics, we analyze the correlation between failures and several parameters generally believed to impact longevity. Our analysis identifies several parameters from the drive\u2019s self monitoring facility (SMART) that correlate highly with failures. Despite this high correlation, we conclude that models based on SMART parameters alone are unlikely to be useful for predicting individual drive failures. Surprisingly, we found that temperature and activity levels were much less correlated with drive failures than previously reported."}
{"_id": "517c5cd1dbafb3cfa0eea4fc78d0b5cd085209b2", "title": "DRAM errors in the wild: a large-scale field study", "text": "Errors in dynamic random access memory (DRAM) are a common form of hardware failure in modern compute clusters. Failures are costly both in terms of hardware replacement costs and service disruption. While a large body of work exists on DRAM in laboratory conditions, little has been reported on real DRAM failures in large production clusters. In this paper, we analyze measurements of memory errors in a large fleet of commodity servers over a period of 2.5 years. The collected data covers multiple vendors, DRAM capacities and technologies, and comprises many millions of DIMM days.\n The goal of this paper is to answer questions such as the following: How common are memory errors in practice? What are their statistical properties? How are they affected by external factors, such as temperature and utilization, and by chip-specific factors, such as chip density, memory technology and DIMM age?\n We find that DRAM error behavior in the field differs in many key aspects from commonly held assumptions. For example, we observe DRAM error rates that are orders of magnitude higher than previously reported, with 25,000 to 70,000 errors per billion device hours per Mbit and more than 8% of DIMMs affected by errors per year. We provide strong evidence that memory errors are dominated by hard errors, rather than soft errors, which previous work suspects to be the dominant error mode. We find that temperature, known to strongly impact DIMM error rates in lab conditions, has a surprisingly small effect on error behavior in the field, when taking all other factors into account. Finally, unlike commonly feared, we don't observe any indication that newer generations of DIMMs have worse error behavior."}
{"_id": "663e064469ad91e6bda345d216504b4c868f537b", "title": "A scalable, commodity data center network architecture", "text": "Today's data centers may contain tens of thousands of computers with significant aggregate bandwidth requirements. The network architecture typically consists of a tree of routing and switching elements with progressively more specialized and expensive equipment moving up the network hierarchy. Unfortunately, even when deploying the highest-end IP switches/routers, resulting topologies may only support 50% of the aggregate bandwidth available at the edge of the network, while still incurring tremendous cost. Non-uniform bandwidth among data center nodes complicates application design and limits overall system performance.\n In this paper, we show how to leverage largely commodity Ethernet switches to support the full aggregate bandwidth of clusters consisting of tens of thousands of elements. Similar to how clusters of commodity computers have largely replaced more specialized SMPs and MPPs, we argue that appropriately architected and interconnected commodity switches may deliver more performance at less cost than available from today's higher-end solutions. Our approach requires no modifications to the end host network interface, operating system, or applications; critically, it is fully backward compatible with Ethernet, IP, and TCP."}
{"_id": "07add9c98a979e732cfa215c901adb1975f3f43a", "title": "The case for RAMClouds: scalable high-performance storage entirely in DRAM", "text": "Disk-oriented approaches to online storage are becoming increasingly problematic: they do not scale gracefully to meet the needs of large-scale Web applications, and improvements in disk capacity have far outstripped improvements in access latency and bandwidth. This paper argues for a new approach to datacenter storage called RAMCloud, where information is kept entirely in DRAM and large-scale systems are created by aggregating the main memories of thousands of commodity servers. We believe that RAMClouds can provide durable and available storage with 100-1000x the throughput of disk-based systems and 100-1000x lower access latency. The combination of low latency and large scale will enable a new breed of dataintensive applications."}
{"_id": "26b730317a906882754a1e25c263c12eb2613132", "title": "Sketching in circuits: designing and building electronics on paper", "text": "The field of new methods and techniques for building electronics is quickly growing - from research in new materials for circuit building, to modular toolkits, and more recently to untoolkits, which aim to incorporate more off-the-shelf parts. However, the standard mediums for circuit design and construction remain the breadboard, protoboard, and printed circuit board (PCB). As an alternative, we introduce a method in which circuits are hand-made on ordinary paper substrates, connected with conductive foil tape and off-the-shelf circuit components with the aim of supporting the durability, scalability, and accessibility needs of novice and expert circuit builders alike. We also used electrified notebooks to investigate how the circuit design and build process would be affected by the constraints and affordances of the bound book. Our ideas and techniques were evaluated through a series of workshops, through which we found our methods supported a wide variety of approaches and results - both technical and expressive - to electronics design and construction."}
{"_id": "f285a3075faa90ce6c1a76719cb5867406d3e07a", "title": "Entailment-based Fully Automatic Technique for Evaluation of Summaries", "text": "We propose a fully automatic technique for evaluating text summaries without the need to prepare the gold standard summaries manually. A standard and popular summary evaluation techniques or tools are not fully automatic; they all need some manual process or manual reference summary. Using recognizing textual entailment (TE), automatically generated summaries can be evaluated completely automatically without any manual preparation process. We use a TE system based on a combination of lexical entailment module, lexical distance module, Chunk module, Named Entity module and syntactic text entailment (TE) module. The documents are used as text (T) and summary of these documents are taken as hypothesis (H). Therefore, the more information of the document is entailed by its summary the better the summary. Comparing with the ROUGE 1.5.5 evaluation scores over TAC 2008 (formerly DUC, conducted by NIST) dataset, the proposed evaluation technique predicts the ROUGE scores with a accuracy of 98.25% with respect to ROUGE-2 and 95.65% with respect to ROUGE-SU4."}
{"_id": "f2561f0f08ed82921a0d6bb9537adb47b67b8ba5", "title": "Hemispheric asymmetry reduction in older adults: the HAROLD model.", "text": "A model of the effects of aging on brain activity during cognitive performance is introduced. The model is called HAROLD (hemispheric asymmetry reduction in older adults), and it states that, under similar circumstances, prefrontal activity during cognitive performances tends to be less lateralized in older adults than in younger adults. The model is supported by functional neuroimaging and other evidence in the domains of episodic memory, semantic memory, working memory, perception, and inhibitory control. Age-related hemispheric asymmetry reductions may have a compensatory function or they may reflect a dedifferentiation process. They may have a cognitive or neural origin, and they may reflect regional or network mechanisms. The HAROLD model is a cognitive neuroscience model that integrates ideas and findings from psychology and neuroscience of aging."}
{"_id": "361367838ee5d9d5c9a77c69c1c56b1c309ab236", "title": "Salient Object Detection: A Survey", "text": "Detecting and segmenting salient objects in natural scenes, often referred to as salient object detection, has attracted a lot of interest in computer vision. While many models have been proposed and several applications have emerged, yet a deep understanding of achievements and issues is lacking. We aim to provide a comprehensive review of the recent progress in salient object detection and situate this field among other closely related areas such as generic scene segmentation, object proposal generation, and saliency for fixation prediction. Covering 228 publications, we survey i) roots, key concepts, and tasks, ii) core techniques and main modeling trends, and iii) datasets and evaluation metrics in salient object detection. We also discuss open problems such as evaluation metrics and dataset bias in model performance and suggest future research directions."}
{"_id": "208a59ad8612c7ac0ee76b7eb55d6b17a237ce32", "title": "How to stop spread of misinformation on social media: Facebook plans vs. right-click authenticate approach", "text": "One of the key features of social networks is that users are able to share information, and through cascades of sharing information, this information may reach a large number of individuals. The high availability of user-provided contents on online social media facilitates people aggregation around shared beliefs, interests, worldviews and narratives. With lack of means to verify information, social media has been accused of becoming a hot bed for sharing of misinformation. Facebook, as one of the largest social networking services, has been facing widespread criticism on how its newsfeed algorithm is designed thus amplifying dissemination of misinformation. In late 2016, Facebook revealed plans to address fake news on Facebook newsfeeds. In this work, we study the methods Facebook has proposed to combat the spread of misinformation and compare it with our previously proposed approach called \u2018Right-click Authenticate\u2019. By analyzing the Business Process Modeling and Notation of both approaches, this paper suggests some key weaknesses and improvements social media companies need to consider when tackling the spread of misinformation online."}
{"_id": "833de6c09b38a679ed870ad3a7ccfafc8de010e1", "title": "Instantaneous ego-motion estimation using multiple Doppler radars", "text": "The estimation of the ego-vehicle's motion is a key capability for advanced driving assistant systems and mobile robot localization. The following paper presents a robust algorithm using radar sensors to instantly determine the complete 2D motion state of the ego-vehicle (longitudinal, lateral velocity and yaw rate). It evaluates the relative motion between at least two Doppler radar sensors and their received stationary reflections (targets). Based on the distribution of their radial velocities across the azimuth angle, non-stationary targets and clutter are excluded. The ego-motion and its corresponding covariance matrix are estimated. The algorithm does not require any preprocessing steps such as clustering or clutter suppression and does not contain any model assumptions. The sensors can be mounted at any position on the vehicle. A common field of view is not required, avoiding target association in space. As an additional benefit, all targets are instantly labeled as stationary or non-stationary."}
{"_id": "4b2c2633246ba1fafe8dedace9b168ed27f062f3", "title": "The capacity of low-density parity-check codes under message-passing decoding", "text": "In this paper we present a general method for determining the capacity of low-density paritycheck codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chosen element of the given ensemble will achieve an arbitrarily small target probability of error with a probability that approaches one exponentially fast in the length of the code. (By concatenating with an appropriate outer code one can achieve a probability of error that approaches zero exponentially fast in the length of the code with arbitrarily small loss in rate.) Conversely, transmitting at rates above this capacity the probability of error is bounded away from zero by a strictly positive constant which is independent of the length of the code and of the number of iterations performed. Our results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et. al. [1] in the case of a binary symmetric channel and a binary message passing algorithm, is a general phenomenon. For the particularly important case of belief-propagation decoders, we provide an effective algorithm to determine the corresponding capacity to any desired degree of accuracy. The ideas presented in this paper are broadly applicable and extensions of the general method to low-density parity-check codes over larger alphabets, turbo codes, and other concatenated coding schemes are outlined. Index Terms \u2013 low-density parity-check codes, turbo codes, message-passing decoders, iterative decoding, belief-propagation, turbo decoding"}
{"_id": "67861b521b2d3eaf70e8e3ba1cb7b7d66b7fdacd", "title": "Model-Based Wheel Slip Detection for Outdoor Mobile Robots", "text": "This paper introduces a model-based approach to estimating longitudinal wheel slip and detecting immobilized conditions of autonomous mobile robots operating on outdoor terrain. A novel tire traction/braking model is presented and used to calculate vehicle dynamic forces in an extended Kalman filter framework. Estimates of external forces and robot velocity are derived using measurements from wheel encoders, IMU, and GPS. Weak constraints are used to constrain the evolution of the resistive force estimate based upon physical reasoning. Experimental results show the technique accurately and rapidly detects robot immobilization conditions while providing estimates of the robot's velocity during normal driving. Immobilization detection is shown to be robust to uncertainty in tire model parameters. Accurate immobilization detection is demonstrated in the absence of GPS, indicating the algorithm is applicable for both terrestrial applications and space robotics."}
{"_id": "b8e70e21db2918c9932f0ca27c98ee7da168223f", "title": "Robot Modeling and Control [Book Review]", "text": "The field of robotics began in the 1960s and 1970s with the design and control of general purpose robotic manipulators. During the 1980s the expense of integrating robotics into manufacturing lines and the difficulty of performing seemingly easy tasks, such as manipulating flexible objects or washing a window, led to disenchantment among many researchers and funding agencies. Since the 1990s the popularity of robotics has resurged with many new exciting areas of application particularly in the medical, defense, and service sectors. Robotics has grown from a field that is primarily concerned with problems such as the manipulation of rigid bodies in well-structured environments to a field that has tackled challenging problems such as extraterrestrial exploration and remote minimally invasive surgery. I began my involvement in the field of robotics as a graduate student studying control theory. My project was to design a control algorithm to induce stable walking in a biped robot. As with many control problems, before formulating a solution I had to determine a suitable model for the plant. Since I was an electrical engineering student with little background in the kinematics and dynamics of rigid-body mechanical systems, this task was daunting. With some guidance from a fellow student, I fumbled my way through Lagrange\u2019s method to derive the robot\u2019s equations of motion. I had little idea at the time why forming the Lagrangian and taking certain derivatives resulted in the right equations. For insight I was pointed to various texts on classical mechanics. While these texts provided the desired insight, I was still left with questions: Given an arbitrary system, how do I systematically construct the Lagrangian so that I can derive the system\u2019s equation of motion? What are the most effective techniques for robot control? What special aspects of the robot\u2019s dynamics should be considered in developing controllers? Five years after my introduction to the field of robotics, I now know that research in the field goes well beyond the dynamics and control of traditional manipulators. The concepts required to analyze and control manipulators, however, are still needed in modern robotics research. These fundamental concepts, which are addressed by many introductory texts on robotics, are the following: 1) Forward kinematics: Given joint displacements, determine the position and orientation of the endeffector with respect to a specified reference frame. 2) Inverse kinematics: Given the position and orientation of the end-effector frame relative to a specified reference frame, determine the joint variables that give rise to this configuration. 3) Differential kinematics: Given the manipulator\u2019s joint rates, determine the linear and angular velocities of the end-effector frame relative to a fixed reference frame. 4) Statics: Given the forces and moments applied to the manipulator, determine the corresponding joint forces and torques required to maintain equilibrium in the presence of applied forces and moments. 5) Dynamics: Determine the equations of motion of the system. Specifically, determine the differential equations relating the applied forces and torques to the joint variables, joint rates, and joint accelerations. 
6) Trajectory generation: Given initial and final configurations of the manipulator, find a trajectory of the joint variables connecting these two configurations. 7) Control: Given desired joint trajectories or forces between the end-effector and the environment determine joint torques and forces that effect these joint trajectories and forces."}
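Item 1) in the list above is the easiest to make concrete: for a planar two-link arm, the forward kinematics map from joint displacements to end-effector pose is a direct composition of the link geometry. A minimal sketch, with illustrative link lengths that are not from the book:

```python
# Forward kinematics of a planar 2-link arm: joint angles -> end-effector pose.
import numpy as np

def fk_planar_2link(q1, q2, l1=0.5, l2=0.3):
    """Returns end-effector position (x, y) and orientation for joint
    displacements q1, q2 (rad) and link lengths l1, l2 (m)."""
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return x, y, q1 + q2

print(fk_planar_2link(np.pi / 4, np.pi / 4))  # (~0.354, ~0.654, pi/2)
```

Inverse kinematics, item 2), is the harder direction precisely because this map is nonlinear and generally admits multiple (or no) solutions.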
{"_id": "217a6a618e87c6a709bff0698bc15cf4bfa789a4", "title": "K-modestream algorithm for clustering categorical data streams", "text": "Clustering categorical data streams is a challenging problem because new data points are continuously adding to the already existing database at rapid pace and there exists no natural order among the categorical values. Recently, some algorithms have been discussed to tackle the problem of clustering the categorical data streams. However, in all these schemes the user needs to pre-specify the number of clusters, which is not trivial, and it renders to inefficient in the data stream environment. In this paper, we propose a new clustering algorithm, named it as k-modestream, which follows the k-modes algorithm paradigm to dynamically cluster the categorical data streams. It automatically computes the number of clusters and their initial modes simultaneously at regular time intervals. We analyse the time complexity of our scheme and perform various experiments using the synthetic and real world datasets to evaluate its efficacy."}
{"_id": "46b7a7fe4ad9aa4a734c827409a940ff050e3940", "title": "A 3.5\u20136.8GHz wide-bandwidth DTC-assisted fractional-N all-digital PLL with a MASH \u0394\u03a3 TDC for low in-band phase noise", "text": "We present a digital-to-time converter (DTC)-assisted fractional-N wide-bandwidth all-digital PLL (ADPLL). It employs a MASH \u0394\u03a3 time-to-digital converter (TDC) to achieve low in-band phase noise, and a wide-tuning range digitally-controlled oscillator (DCO). Fabricated in 40nm CMOS, the ADPLL consumes 10.7 mW while outputting 1.73 to 3.38 GHz (after a \u00f72 division) and achieves better than -109 dBc/Hz in-band phase noise and 420fsrms integrated jitter."}
{"_id": "e8e77ec157375343667e002c78d3d3bf73e5fd8a", "title": "Measuring Semantic Similarity in the Taxonomy of WordNet", "text": "This paper presents a new model to measure semantic similarity in the taxonomy of WordNet, using edgecounting techniques. We weigh up our model against a benchmark set by human similarity judgment, and achieve a much improved result compared with other methods: the correlation with average human judgment on a standard 28 word pair dataset is 0.921, which is better than anything reported in the literature and also significantly better than average individual human judgments. As this set has been effectively used for algorithm selection and tuning, we also cross-validate an independent 37 word pair test set (0.876) and present results for the full 65 word pair superset (0.897)."}
{"_id": "1e83848317597969dd07906fe7c3dceddd1737f3", "title": "A Hardware-Assisted Realtime Attack on A5/2 Without Precomputations", "text": "A5/2 is a synchronous stream cipher that is used for protecting GSM communication. Recently, some powerful attacks [2,10] on A5/2 have been proposed. In this contribution we enhance the ciphertext-only attack [2] by Barkan, Biham, and Keller by designing special-purpose hardware for generating and solving the required systems of linear equations. For realizing the LSE solver component, we use an approach recently introduced in [5,6] describing a parallelized hardware implementation of the Gauss-Jordan algorithm. Our hardware-only attacker immediately recovers the initial secret state of A5/2 which is sufficient for decrypting all frames of a session using a few ciphertext frames without any precomputations and memory. More precisely, in contrast to [2] our hardware architecture directly attacks the GSM speech channel (TCH/FS and TCH/EFS). It requires 16 ciphertext frames and completes the attack in about 1 second. With minor changes also input from other GSM channels (e.g., SDCCH/8) can be used to mount the attack."}
{"_id": "703b5e7a9a7f4b567cbaec329adce0df504c98fe", "title": "Patterns of Internet and Traditional News Media Use in a Networked Community", "text": "This article may be used for research, teaching and private study purposes. Any substantial or systematic reproduction, redistribution , reselling , loan or sub-licensing, systematic supply or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material. The growing popularity of the World Wide Web as a source of news raises questions about the future of traditional news media. Is the Web likely to become a supplement to newspapers and television news, or a substitute for these media? Among people who have access to newspapers, television, and the World Wide Web, why do some prefer to use the Web as a source of news, while others prefer traditional news media? Drawing from a survey of 520 undergraduate students at a large public university where Internet use is woven into the fabric of daily life, this study suggests that use of the Web as a news source is positively related with reading newspapers but has no relationship with viewing television news. Members of this community use the Web mainly as a source of entertainment. Patterns of Web and traditional media exposure are examined in light of computer anxiety, desire for control, and political knowledge. This study suggests that even when computer skills and Internet access become more widespread in the general population, use of the World Wide Web as a news source seems unlikely to diminish substantially use of traditional news media. Will the World Wide Web become a supplement or substitute for traditional news media? As the World Wide Web became popularly accessible only with the advent of Mosaic browser software in 1993, it is still unclear at this early stage of the Web's development how dominant it is likely to become as a provider of daily news to the general public. Yet the rapid spread of on-line news outlets raises profound questions about the future of traditional newspapers and television news programs. The number of newspapers around the world being published \u2026"}
{"_id": "46ed0e6077fe5d2a571c99f1dc50f2cd697a6815", "title": "Smart home automation system using Bluetooth technology", "text": "In this paper a low cost and user friendly remote controlled home automation system is presented using Arduino board, Bluetooth module, smartphone, ultrasonic sensor and moisture sensor. A smartphone application is used in the suggested system which allows the users to control up to 18 devices including home appliances and sensors using Bluetooth technology. Nowadays, most of conventional home automation systems are designed for special purposes while proposed system is a general purpose home automation system. Which can easily be implement in existing home. The suggested system has more features than conventional home automation systems such as an ultrasonic sensor is used for water level detection and soil moisture sensor is use for automatic plant irrigation system. This paper also describes the hardware and software architecture of system, future work and scope. The proposed prototype of home automation system is implemented and tested on hardware and it gave the exact and expected results."}
{"_id": "dab8b00e5619ceec615b179265cd6d315a97911d", "title": "A two-stage training deep neural network for small pedestrian detection", "text": "In the present paper, we propose a deep network architecture in order to improve the accuracy of pedestrian detection. The proposed method contains a proposal network and a classification network that are trained separately. We use a single shot multibox detector (SSD) as a proposal network to generate the set of pedestrian proposals. The proposal network is fine-tuned from a pre-trained network by several pedestrian data sets of large input size (512 \u00d7 512 pixels) in order to improve detection accuracy of small pedestrians. Then, we use a classification network to classify pedestrian proposals. We then combine the scores from the proposal network and the classification network to obtain better final detection scores. Experiments were evaluated using the Caltech test set, and, compared to other state-of-the-art methods of pedestrian detection task, the proposed method obtains better results for small pedestrians (30 to 50 pixels in height) with an average miss rate of 42%."}
{"_id": "27630637ca68ce01400c8c2407f731ab761ac7da", "title": "Why Have Child Maltreatment and Child Victimization Declined ?", "text": "Various forms of child maltreatment and child victimization declined as much as 40\u201370% from 1993 until 2004, including sexual abuse, physical abuse, sexual assault, homicide, aggravated assault, robbery, and larceny. Other child welfare indicators also improved during the same period, including teen pregnancy, teen suicide, and children living in poverty. This article reviews a wide variety of possible explanations for these changes: demography, fertility and abortion legalization, economic prosperity, increased incarceration of offenders, increased agents of social intervention, changing social norms and practices, the dissipation of the social changes from the 1960s, and psychiatric pharmacology. Multiple factors probably contributed. In particular, economic prosperity, increasing agents of social intervention, and psychiatric pharmacology have advantages over some of the other explanations in accounting for the breadth and timing of the improvements."}
{"_id": "31918003360c352fb0750040d163f287894ab547", "title": "Development Process for AUTOSAR-based Embedded System", "text": "Recently automotive embedded system has developed highly since the advent of smart car, electric card and so on. They have various value-added system for example IPA (Intelligent Parking Assistance), BSW (Blind Spot Warning), LDWS (Lane Departure warning System), LKS(Lane Keeping System)-these are ADAS (Advanced Driver Assistance Systems). AUTOSAR (AUTomotive Open System Architecture) is the most notable industrial standard for developing automotive embedded software. AUTOSAR is a partnership of automotive manufacturers and suppliers working together to develop and establish an open industry standard for automotive E/E architectures. In this paper, we will introduce AUTOSAR briefly and demonstrate the result of automotive software LDWS (Lane Detection & Warning System) development."}
{"_id": "bfb9ec14d76fa64dab58ba8ee8e7271d0c587d47", "title": "Physical and psychological factors predict outcome following whiplash injury", "text": "Predictors of outcome following whiplash injury are limited to socio-demographic and symptomatic factors, which are not readily amenable to secondary and tertiary intervention. This prospective study investigated the predictive capacity of early measures of physical and psychological impairment on pain and disability 6 months following whiplash injury. Motor function (ROM; kinaesthetic sense; activity of the superficial neck flexors (EMG) during cranio-cervical flexion), quantitative sensory testing (pressure, thermal pain thresholds, brachial plexus provocation test), sympathetic vasoconstrictor responses and psychological distress (GHQ-28, TSK, IES) were measured in 76 acute whiplash participants. The outcome measure was Neck Disability Index scores at 6 months. Stepwise regression analysis was used to predict the final NDI score. Logistic regression analyses predicted membership to one of the three groups based on final NDI scores (<8 recovered, 10-28 mild pain and disability, >30 moderate/severe pain and disability). Higher initial NDI score (1.007-1.12), older age (1.03-1.23), cold hyperalgesia (1.05-1.58), and acute post-traumatic stress (1.03-1.2) predicted membership to the moderate/severe group. Additional variables associated with higher NDI scores at 6 months on stepwise regression analysis were: ROM loss and diminished sympathetic reactivity. Higher initial NDI score (1.03-1.28), greater psychological distress (GHQ-28) (1.04-1.28) and decreased ROM (1.03-1.25) predicted subjects with persistent milder symptoms from those who fully recovered. These results demonstrate that both physical and psychological factors play a role in recovery or non-recovery from whiplash injury. This may assist in the development of more relevant treatment methods for acute whiplash."}
{"_id": "1642e4592f53b2bd50c8dae0428f703d15a09c12", "title": "Differential privacy for location pattern mining", "text": "One main concern for individuals to participate in the data collection of personal location history records is the disclosure of their location and related information when a user queries for statistical or pattern mining results derived from these records. In this paper, we investigate how the privacy goal that the inclusion of one's location history in a statistical database with location pattern mining capabilities does not substantially increase one's privacy risk. In particular, we propose a differentially private pattern mining algorithm for interesting geographic location discovery using a region quadtree spatial decomposition to preprocess the location points followed by applying a density-based clustering algorithm. A differentially private region quadtree is used for both de-noising the spatial domain and identifying the likely geographic regions containing the interesting locations. Then, a differential privacy mechanism is applied to the algorithm outputs, namely: the interesting regions and their corresponding stay point counts. The quadtree spatial decomposition enables one to obtain a localized reduced sensitivity to achieve the differential privacy goal and accurate outputs. Experimental results on synthetic datasets are used to show the feasibility of the proposed privacy preserving location pattern mining algorithm."}
{"_id": "08791420bc8894d1762a44188bc4727479a8de2c", "title": "From categories to subcategories: Large-scale image classification with partial class label refinement", "text": "The number of digital images is growing extremely rapidly, and so is the need for their classification. But, as more images of pre-defined categories become available, they also become more diverse and cover finer semantic differences. Ultimately, the categories themselves need to be divided into subcategories to account for that semantic refinement. Image classification in general has improved significantly over the last few years, but it still requires a massive amount of manually annotated data. Subdividing categories into subcategories multiples the number of labels, aggravating the annotation problem. Hence, we can expect the annotations to be refined only for a subset of the already labeled data, and exploit coarser labeled data to improve classification. In this work, we investigate how coarse category labels can be used to improve the classification of subcategories. To this end, we adopt the framework of Random Forests and propose a regularized objective function that takes into account relations between categories and subcategories. Compared to approaches that disregard the extra coarse labeled data, we achieve a relative improvement in subcategory classification accuracy of up to 22% in our large-scale image classification experiments."}
{"_id": "a7c1534f09943be868088a6ac5854830898347c6", "title": "Hemisphere lens-loaded Vivaldi antenna for time domain microwave imaging of concealed objects", "text": "The hemisphere lens-loaded Vivaldi antenna for the microwave imaging applications is designed and tested in this paper. The proposed antenna is designed to work in the wide frequency band of 1\u201314 GHz, and is fabricated on the FR-4 substrate. The directivity of the proposed Vivaldi antenna is enhanced using a hemispherical shape dielectric lens, which is fixed on the end-fire direction of the antenna. The proposed antenna is well suited for the microwave imaging applications because of the wide frequency range and high directivity. The design of the antenna is carried out using the CST microwave studio, and various parameters such as the return loss, the radiation pattern, the directivity, and input impedance are optimized. The maximum improvement of 4.19 dB in the directivity is observed with the designed hemisphere lens. The antenna design is validated by fabricating and testing it in an anechoic environment. Finally, the designed antenna is utilized to establish a setup for measuring the scattering coefficients of various objects and structures in the frequency band of 1\u201314 GHz. The two-dimensional (2D) microwave images of these objects are successfully obtained in terms of the measured wide band scattering data using a novel time domain inverse scattering approach, which shows the applicability of the proposed antenna."}
{"_id": "48947a9ce5e37003008a38fbfcdb2a317421b7e4", "title": "ELM-ART: An Adaptive Versatile System for Web-based Instruction", "text": "This paper discusses the problems of developing versatile adaptive and intelligent learning systems that can be used in the context of practical Web-based education. We argue that versatility is an important feature of successful Web-based education systems. We introduce ELM-ART, an intelligent interactive educational system to support learning programming in LISP. ELM-ART provides all learning material online in the form of an adaptive interactive textbook. Using a combination of an overlay model and an episodic student model, ELM-ART provides adaptive navigation support, course sequencing, individualized diagnosis of student solutions, and example-based problem-solving support. Results of an empirical study show different effects of these techniques on different types of users during the first lessons of the programming course. ELM-ART demonstrates how some interactive and adaptive educational components can be implemented in WWW context and how multiple components can be naturally integrated together in a single system."}
{"_id": "34329c7ec1cc159ed5efa84cb38ea9bbec335d19", "title": "A probabilistic model for retrospective news event detection", "text": "Retrospective news event detection (RED) is defined as the discovery of previously unidentified events in historical news corpus. Although both the contents and time information of news articles are helpful to RED, most researches focus on the utilization of the contents of news articles. Few research works have been carried out on finding better usages of time information. In this paper, we do some explorations on both directions based on the following two characteristics of news articles. On the one hand, news articles are always aroused by events; on the other hand, similar articles reporting the same event often redundantly appear on many news sources. The former hints a generative model of news articles, and the latter provides data enriched environments to perform RED. With consideration of these characteristics, we propose a probabilistic model to incorporate both content and time information in a unified framework. This model gives new representations of both news articles and news events. Furthermore, based on this approach, we build an interactive RED system, HISCOVERY, which provides additional functions to present events, Photo Story and Chronicle."}
{"_id": "0621213a012d169cb7c2930354c6489d6a89baf8", "title": "A cross-collection mixture model for comparative text mining", "text": "In this paper, we define and study a novel text mining problem, which we refer to as Comparative Text Mining (CTM). Given a set of comparable text collections, the task of comparative text mining is to discover any latent common themes across all collections as well as summarize the similarity and differences of these collections along each common theme. This general problem subsumes many interesting applications, including business intelligence and opinion summarization. We propose a generative probabilistic mixture model for comparative text mining. The model simultaneously performs cross-collection clustering and within-collection clustering, and can be applied to an arbitrary set of comparable text collections. The model can be estimated efficiently using the Expectation-Maximization (EM) algorithm. We evaluate the model on two different text data sets (i.e., a news article data set and a laptop review data set), and compare it with a baseline clustering method also based on a mixture model. Experiment results show that the model is quite effective in discovering the latent common themes across collections and performs significantly better than our baseline mixture model."}
{"_id": "3bb6fa7b7f22de6ccc6cca3036a480a5bc839a4d", "title": "Analysis of first-order anti-aliasing integration sampler", "text": "Performance of the first-order anti-aliasing integration sampler used in software-defined radio (SDR) receivers is analyzed versus all practical nonidealities. The nonidealities that are considered in this paper are transconductor finite output resistance, switch resistance, nonzero rise and fall times of the sampling clock, charge injection, clock jitter, and noise. It is proved that the filter is quite robust to all of these nonidealities except for transconductor finite output resistance. Furthermore, linearity and noise performances are all limited to design of a low-noise and highly linear transconductor."}
{"_id": "746706121f51103c52350e9efa347db858ee78b7", "title": "If You Measure It, Can You Improve It?: Exploring The Value of Energy Disaggregation", "text": "Over the past few years, dozens of new techniques have been proposed for more accurate energy disaggregation, but the jury is still out on whether these techniques can actually save energy and, if so, whether higher accuracy translates into higher energy savings. In this paper, we explore both of these questions. First, we develop new techniques that use disaggregated power data to provide actionable feedback to residential users. We evaluate these techniques using power traces from 240 homes and find that they can detect homes that need feedback with as much as 84% accuracy. Second, we evaluate whether existing energy disaggregation techniques provide power traces with sufficient fidelity to support the feedback techniques that we created and whether more accurate disaggregation results translate into more energy savings for the users. Results show that feedback accuracy is very low even while disaggregation accuracy is high. These results indicate a need to revisit the metrics by which disaggregation is evaluated."}
{"_id": "8cafd353218e8dbd3e2d485bc0079f7d1b3dc39a", "title": "Supervised Word Sense Disambiguation using Python", "text": "In this paper, we discuss the problem of Word Sense Disambigu ation (WSD) and one approach to solving the lexical sample problem. We use training and te st data fromSENSEVAL-3 and implement methods based on Na\u0131\u0308ve Bayes calculations, cosi ne comparison of word-frequency vectors, decision lists, and Latent Semantic Analysis. We a lso implement a simple classifier combination system that combines these classifiers into one WSD module. We then prove the effectiveness of our WSD module by participating in the M ultilingual Chinese-English Lexical Sample Task from SemEval-2007."}
{"_id": "a4bd8fb3e27e41a5afc13b7c19f783bf17fcd7b9", "title": "Replicability Analysis for Natural Language Processing: Testing Significance with Multiple Datasets", "text": "With the ever growing amount of textual data from a large variety of languages, domains, and genres, it has become standard to evaluate NLP algorithms on multiple datasets in order to ensure a consistent performance across heterogeneous setups. However, such multiple comparisons pose significant challenges to traditional statistical analysis methods in NLP and can lead to erroneous conclusions. In this paper we propose a Replicability Analysis framework for a statistically sound analysis of multiple comparisons between algorithms for NLP tasks. We discuss the theoretical advantages of this framework over the current, statistically unjustified, practice in the NLP literature, and demonstrate its empirical value across four applications: multi-domain dependency parsing, multilingual POS tagging, cross-domain sentiment classification and word similarity prediction."}
{"_id": "420bf47ca523f4fe0772d9d13e1bc7f93d187186", "title": "Speech Recognition Engineering Issues in Speech to Speech Translation System Design for Low Resource Languages and Domains", "text": "Engineering automatic speech recognition (ASR) for speech to speech (S2S) translation systems, especially targeting languages and domains that do not have readily available spoken language resources, is immensely challenging due to a number of reasons. In addition to contending with the conventional data-hungry speech acoustic and language modeling needs, these designs have to accommodate varying requirements imposed by the domain needs and characteristics, target device and usage modality (such as phrase-based, or spontaneous free form interactions, with or without visual feedback) and huge spoken language variability arising due to socio-linguistic and cultural differences of the users. This paper, using case studies of creating speech translation systems between English and languages such as Pashto and Farsi, describes some of the practical issues and the solutions that were developed for multilingual ASR development. These include novel acoustic and language modeling strategies such as language adaptive recognition, active-learning based language modeling, class-based language models that can better exploit resource poor language data, efficient search strategies, including N-best and confidence generation to aid multiple hypotheses translation, use of dialog information and clever interface choices to facilitate ASR, and audio interface design for meeting both usability and robustness requirements"}
{"_id": "0d636d8241b3c578513f0f52448198c035bb717e", "title": "Effects of prior hamstring strain injury on strength, flexibility, and running mechanics.", "text": "BACKGROUND\nPrevious studies have shown evidence of residual scar tissue at the musculotendon junction following a hamstring strain injury, which could influence re-injury risk. The purpose of this study was to investigate whether bilateral differences in strength, neuromuscular patterns, and musculotendon kinematics during sprinting are present in individuals with a history of unilateral hamstring injury, and whether such differences are linked to the presence of scar tissue.\n\n\nMETHODS\nEighteen subjects with a previous hamstring injury (>5 months prior) participated in a magnetic resonance (MR) imaging exam, isokinetic strength testing, and a biomechanical assessment of treadmill sprinting. Bilateral comparisons were made for peak knee flexion torque, angle of peak torque, and the hamstrings:quadriceps strength ratio, as well as muscle activations and peak hamstring stretch during sprinting. MR images were used to measure the volumes of the proximal tendon/aponeurosis of the biceps femoris, with asymmetries considered indicative of scar tissue.\n\n\nFINDINGS\nA significantly enlarged proximal biceps femoris tendon volume was measured on the side of prior injury. However, no significant differences between the previously injured and uninjured limbs were found in strength measures, peak hamstring stretch, or muscle activation patterns. Further, the degree of asymmetry in tendon volume was not correlated to any of the functional measures.\n\n\nINTERPRETATION\nInjury-induced changes in morphology do not seem discernable from strength measures, running kinematics, or muscle activation patterns. Further research is warranted to ascertain whether residual scarring alters localized musculotendon mechanics in a way that may contribute to the high rates of muscle re-injury that are observed clinically."}
{"_id": "125afe2892d2fe9960bdcf9469d7ee504af26d18", "title": "Gray-level grouping (GLG): an automatic method for optimized image contrast enhancement - part II: the variations", "text": "This is Part II of the paper, \"Gray-Level Grouping (GLG): an Automatic Method for Optimized Image Contrast Enhancement\". Part I of this paper introduced a new automatic contrast enhancement technique: gray-level grouping (GLG). GLG is a general and powerful technique, which can be conveniently applied to a broad variety of low-contrast images and outperforms conventional contrast enhancement techniques. However, the basic GLG method still has limitations and cannot enhance certain classes of low-contrast images well, e.g., images with a noisy background. The basic GLG also cannot fulfill certain special application purposes, e.g., enhancing only part of an image which corresponds to a certain segment of the image histogram. In order to break through these limitations, this paper introduces an extension of the basic GLG algorithm, selective gray-level grouping (SGLG), which groups the histogram components in different segments of the grayscale using different criteria and, hence, is able to enhance different parts of the histogram to various extents. This paper also introduces two new preprocessing methods to eliminate background noise in noisy low-contrast images so that such images can be properly enhanced by the (S)GLG technique. The extension of (S)GLG to color images is also discussed in this paper. SGLG and its variations extend the capability of the basic GLG to a larger variety of low-contrast images, and can fulfill special application requirements. SGLG and its variations not only produce results superior to conventional contrast enhancement techniques, but are also fully automatic under most circumstances, and are applicable to a broad variety of images."}
{"_id": "c2c5206f6a539b02f5d5a19bdb3a90584f7e6ba4", "title": "Affective Computing: A Review", "text": "Affective computing is currently one of the most active research topics, furthermore, having increasingly intensive attention. This strong interest is driven by a wide spectrum of promising applications in many areas such as virtual reality, smart surveillance, perceptual interface, etc. Affective computing concerns multidisciplinary knowledge background such as psychology, cognitive, physiology and computer sciences. The paper is emphasized on the several issues involved implicitly in the whole interactive feedback loop. Various methods for each issue are discussed in order to examine the state of the art. Finally, some research challenges and future directions are also discussed."}
{"_id": "7083777989197f87e3933d26d614d15ea05c3fe6", "title": "Advances in Instance Selection for Instance-Based Learning Algorithms", "text": "The basic nearest neighbour classifier suffers from the indiscriminate storage of all presented training instances. With a large database of instances classification response time can be slow. When noisy instances are present classification accuracy can suffer. Drawing on the large body of relevant work carried out in the past 30 years, we review the principle approaches to solving these problems. By deleting instances, both problems can be alleviated, but the criterion used is typically assumed to be all encompassing and effective over many domains. We argue against this position and introduce an algorithm that rivals the most successful existing algorithm. When evaluated on 30 different problems, neither algorithm consistently outperforms the other: consistency is very hard. To achieve the best results, we need to develop mechanisms that provide insights into the structure of class definitions. We discuss the possibility of these mechanisms and propose some initial measures that could be useful for the data miner."}
{"_id": "ff5c3b48e4a2c46de00839c86b3322735d42a907", "title": "Effects of sleep deprivation on performance: a meta-analysis.", "text": "To quantitatively describe the effects of sleep loss, we used meta-analysis, a technique relatively new to the sleep research field, to mathematically summarize data from 19 original research studies. Results of our analysis of 143 study coefficients and a total sample size of 1.932 suggest that overall sleep deprivation strongly impairs human functioning. Moreover, we found that mood is more affected by sleep deprivation than either cognitive or motor performance and that partial sleep deprivation has a more profound effect on functioning than either long-term or short-term sleep deprivation. In general, these results indicate that the effects of sleep deprivation may be underestimated in some narrative reviews, particularly those concerning the effects of partial sleep deprivation."}
{"_id": "2bd233eb65e0a5383c9f2fc92bd08fab9a847c97", "title": "An investigation of the impacts of some external contextual factors on ERP systems success assessment: a case of firms in Baltic-Nordic region", "text": "Enterprise Resource Planning (ERP) systems are among the largest information systems (IS) investments made by firms, and the use of such systems is spreading globally. Many researchers have discussed their adoption and implementation, few have investigated the impact of external contextual factors on the success of such technologies in adopting firms. This study aims to fill this gap in research by examining the effects of three external contextual factors, i.e., industry type, industry climate, and national economic climate on ERP success assessment. We obtained data from Estonia and Finland and our analysis shows that industry and national economic climates have significant relationships with ERP success."}
{"_id": "c876a24ffe1dac77397218275cfda7cc3dce6ac9", "title": "An Open Smart City IoT Test Bed: Street Light Poles as Smart City Spines: Poster Abstract", "text": "Street light poles will be a key enabler for a smart city's hardware infrastructure, thanks to their ubiquity throughout the city as well as access to power. We propose an IoT test bed around light poles for the city, with a modular hardware and software architecture to enable experimentation with various technologies."}
{"_id": "6115c7df49cdd3a4e6dd86ebd8b6e75b26d27f26", "title": "RFBoost: An improved multi-label boosting algorithm and its application to text categorisation", "text": "The AdaBoost.MH boosting algorithm is considered to be one of the most accurate algorithms for multilabel classification. AdaBoost.MH works by iteratively building a committee of weak hypotheses of decision stumps. In each round of AdaBoost.MH learning, all features are examined, but only one feature is used to build a new weak hypothesis. This learning mechanism may entail a high degree of computational time complexity, particularly in the case of a large-scale dataset. This paper describes a way to manage the learning complexity and improve the classification performance of AdaBoost.MH. We propose an improved version of AdaBoost.MH, called RFBoost . The weak learning in RFBoost is based on filtering a small fixed number of ranked features in each boosting round rather than using all features, as AdaBoost.MH does. We propose two methods for ranking the features: One Boosting Round and Labeled Latent Dirichlet Allocation (LLDA), a supervised topic model based on Gibbs sampling. Additionally, we investigate the use of LLDA as a feature selection method for reducing the feature space based on the maximal conditional probabilities of words across labels. Our experimental results on eight well-known benchmarks for multi-label text categorisation show that RFBoost is significantly more efficient and effective than the baseline algorithms. Moreover, the LLDA-based feature ranking yields the best performance for RFBoost. \u00a9 2016 Elsevier B.V. All rights reserved."}
{"_id": "f61dd368316f45bc45c740aba9570b93ea48c40a", "title": "Novel use of electronic whiteboard in the operating room increases surgical team compliance with pre-incision safety practices.", "text": "BACKGROUND\nDespite evidence that use of a checklist during the pre-incision time out improves patient morbidity and mortality, compliance with performing the required elements of the checklist has been low. In an effort to improve compliance, a standardized time out interactive Electronic Checklist System [iECS] was implemented in all hospital operating room (OR) suites at 1 institution. The purpose of this 12-month prospective observational study was to assess whether an iECS in the OR improves and sustains improved surgical team compliance with the pre-incision time out.\n\n\nMETHODS\nDirect observational analyses of preprocedural time outs were performed on 80 cases 1 month before, and 1 and 9 months after implementation of the iECS, for a total of 240 observed cases. Three observers, who achieved high interrater reliability (kappa = 0.83), recorded a compliance score (yes, 1; no, 0) on each element of the time out. An element was scored as compliant if it was clearly verbalized by the surgical team.\n\n\nRESULTS\nPre-intervention observations indicated that surgical staff verbally communicated the core elements of the time out procedure 49.7 \u00b1 12.9% of the time. After implementation of the iECS, direct observation of 80 surgical cases at 1 and 9 months indicated that surgical staff verbally communicated the core elements of the time out procedure 81.6 \u00b1 11.4% and 85.8 \u00b1 6.8% of the time, respectively, resulting in a statistically significant (P < .0001) increase in time out procedural compliance.\n\n\nCONCLUSION\nImplementation of a standardized, iECS can dramatically increase compliance with preprocedural time outs in the OR, an important and necessary step in improving patient outcomes and reducing preventable complications and deaths."}
{"_id": "3a326b2a558501a64aab85d51fcaf52e70b86019", "title": "Self-Adaptive Skin Segmentation in Color Images", "text": "In this paper, we present a new method for skin detection and segmentation, relying on spatial analysis of skin-tone pixels. Our contribution lies in introducing self-adaptive seeds, from which the skin probability is propagated using the distance transform. The seeds are determined from a local skin color model that is learned on-line from a presented image, without requiring any additional information. This is in contrast to the existing methods that need a skin sample for the adaptation, e.g., acquired using a face detector. In our experimental study, we obtained F-score of over 0.85 for the ECU benchmark, and this is highly competitive compared with several state-of-the-art methods."}
{"_id": "c4cf7f57600ef62cbd1179986f4ddea822c7b157", "title": "Mobile Robot Control on a Reference Path", "text": "In this paper a control design of a nonholonomic mobile robot with a differential drive is presented. On the basis of robot kinematics equations a robot control is designed where the robot is controlled to follow the arbitrary path reference with a predefined velocity profile. The designed control algorithm proved stable and robust to the errors in robot initial positions, to input and output noises and to other disturbances. The obtained control law is demonstrated on a simple trajectory example, however, for a more general applicability a time-optimal motion planning algorithm considering acceleration constraints is presented as well"}
{"_id": "528dd75a9b3f56a8fb03072d41565056a7e1c1e0", "title": "An investigation of factors associated with the health and well-being of HIV-infected or HIV-affected older people in rural South Africa", "text": "BACKGROUND\nDespite the severe impact of HIV in sub-Saharan Africa, the health of older people aged 50+ is often overlooked owing to the dearth of data on the direct and indirect effects of HIV on older people's health status and well-being. The aim of this study was to examine correlates of health and well-being of HIV-infected older people relative to HIV-affected people in rural South Africa, defined as participants with an HIV-infected or death of an adult child due to HIV-related cause.\n\n\nMETHODS\nData were collected within the Africa Centre surveillance area using instruments adapted from the World Health Organization (WHO) Study on global AGEing and adult health (SAGE). A stratified random sample of 422 people aged 50+ participated. We compared the health correlates of HIV-infected to HIV-affected participants using ordered logistic regressions. Health status was measured using three instruments: disability index, quality of life and composite health score.\n\n\nRESULTS\nMedian age of the sample was 60 years (range 50-94). Women HIV-infected (aOR 0.15, 95% confidence interval (CI) 0.08-0.29) and HIV-affected (aOR 0.20, 95% CI 0.08-0.50), were significantly less likely than men to be in good functional ability. Women's adjusted odds of being in good overall health state were similarly lower than men's; while income and household wealth status were stronger correlates of quality of life. HIV-infected participants reported better functional ability, quality of life and overall health state than HIV-affected participants.\n\n\nDISCUSSION AND CONCLUSIONS\nThe enhanced healthcare received as part of anti-retroviral treatment as well as the considerable resources devoted to HIV care appear to benefit the overall well-being of HIV-infected older people; whereas similar resources have not been devoted to the general health needs of HIV uninfected older people. Given increasing numbers of older people, policy and programme interventions are urgently needed to holistically meet the health and well-being needs of older people beyond the HIV-related care system."}
{"_id": "ce1889c543f2a85e7a98c020efa265cdad8a7647", "title": "A Machine Learning Approach to Anomaly Detection", "text": "Much of the intrusion detection research focuses on signature (misuse) detection, where models are built to recognize known attacks. However, signature detection, by its nature, cannot detect novel attacks. Anomaly detection focuses on modeling the normal behavior and identifying significant deviations, which could be novel attacks. In this paper we explore two machine learning methods that can construct anomaly detection models from past behavior. The first method is a rule learning algorithm that characterizes normal behavior in the absence of labeled attack data. The second method uses a clustering algorithm to identify outliers."}
{"_id": "d6b2180dd2a401d252573883a5ab6880b13a3031", "title": "Construction and Evaluation of a User Experience Questionnaire", "text": "An end-user questionnaire to measure user experience quickly in a simple and immediate way while covering a preferably comprehensive impression of the product user experience was the goal of the reported construction process. An empirical approach for the item selection was used to ensure practical relevance of items. Usability experts collected terms and statements on user experience and usability, including \u2018hard\u2019 as well as \u2018soft\u2019 aspects. These statements were consolidated and transformed into a first questionnaire version containing 80 bipolar items. It was used to measure the user experience of software products in several empirical studies. Data were subjected to a factor analysis which resulted in the construction of a 26 item questionnaire including the six factors Attractiveness, Perspicuity, Efficiency, Dependability, Stimulation, and Novelty. Studies conducted for the original German questionnaire and an English version indicate a satisfactory level of reliability and construct validity."}
{"_id": "817a6ce83b610ca538d84b96f40328e8a98946f9", "title": "Transformational change and business process reengineering (BPR): Lessons from the British and Dutch public sector", "text": "Available online 18 May 2011"}
{"_id": "0c1edc0fb1a2bbed050403f465c1bfb5fdd507e5", "title": "Clustering of multivariate time series data using particle swarm optimization", "text": "Particle swarm optimization (PSO) is a practical and effective optimization approach that has been applied recently for data clustering in many applications. While various non-evolutionary optimization and clustering algorithms have been applied for clustering multivariate time series in some applications such as customer segmentation, they usually provide poor results due to their dependency on the initial values and their poor performance in manipulating multiple objectives. In this paper, a particle swarm optimization algorithm is proposed for clustering multivariate time series data. Since the time series data sometimes do not have the same length and they usually have missing data, the regular Euclidean distance and dynamic time warping can not be applied for such data to measure the similarity. Therefore, a hybrid similarity measure based on principal component analysis and Mahalanobis distance is applied in order to handle such limitations. The comparison between the results of the proposed method with the similar ones in the literature shows the superiority of the proposed method."}
{"_id": "5cb52b24aa172d7a043f618068e2dbd9c5038df9", "title": "Nonlinear Dynamics of Spring Softening and Hardening in Folded-MEMS Comb Drive Resonators", "text": "This paper studies analytically and numerically the spring softening and hardening phenomena that occur in electrostatically actuated microelectromechanical systems comb drive resonators utilizing folded suspension beams. An analytical expression for the electrostatic force generated between the combs of the rotor and the stator is derived and takes into account both the transverse and longitudinal capacitances present. After formulating the problem, the resulting stiff differential equations are solved analytically using the method of multiple scales, and a closed-form solution is obtained. Furthermore, the nonlinear boundary value problem that describes the dynamics of inextensional spring beams is solved using straightforward perturbation to obtain the linear and nonlinear spring constants of the beam. The analytical solution is verified numerically using a Matlab/Simulink environment, and the results from both analyses exhibit excellent agreement. Stability analysis based on phase plane trajectory is also presented and fully explains previously reported empirical results that lacked sufficient theoretical description. Finally, the proposed solutions are, once again, verified with previously published measurement results. The closed-form solutions provided are easy to apply and enable predicting the actual behavior of resonators and gyroscopes with similar structures."}
{"_id": "11338e3919ae988af93c7dd4729db856c23173b1", "title": "Automatic Detection and Classification of Brain Hemorrhages", "text": "Computer-aided diagnosis systems have been the focus of many research endeavors. They are based on the idea of processing and analyzing images of different parts of the human body for a quick and accurate diagnosis. In this paper, the aforementioned approach is followed to detect whether a brain hemorrhage exists or not in a Computed Topography (CT) scans of the brain. Moreover, the type of the hemorrhage is identified. The implemented system consists of several stages that include image preprocessing, image segmentation, feature extraction, and classification. The results of the conducted experiments are very promising. A recognition rate of 100% is attained for detecting whether a brain hemorrhage exists or not. For the hemorrhage type classification, more than 92% accuracy is achieved. Key\u2013Words:brain hemorrhage, brain ct scans, machine learning, image processing, image segmentation"}
{"_id": "ad7d3f6817b6cc96781d3785c7e45e0843a9d39d", "title": "Analysis of a linear series-fed rectangular microstrip antenna array", "text": "A new method is proposed to analyze a linear series-fed microstrip antenna array which maintains accuracy while not requiring a full-wave analysis of the entire structure. The method accounts for mutual coupling between patches as well as the patch excitations by the feed lines."}
{"_id": "090d25f94cb021bdd3400a2f547f989a6a5e07ec", "title": "Direct least squares fitting of ellipses", "text": "This work presents a new efficient method for fitting ellipses to scattered data. Previous algorithms either fitted general conics or were computationally expensive. By minimizing the algebraic distance subject to the constraint 4 2 1 the new method incorporates the ellipticity constraint into the normalization factor. The new method combines several advantages: (i) It is ellipse-specific so that even bad data will always return an ellipse; (ii) It can be solved naturally by a generalized eigensystem and (iii) it is extremely robust, efficient and easy to implement. We compare the proposed method to other approaches and show its robustness on several examples in which other non-ellipse-specific approaches would fail or require computationally expensive iterative refinements. Source code for the algorithm is supplied and a demonstration is available on ! \" ! $#% '& () \"*) & +, ./10 0 32, . 4) \"*) \"*) % 5* 0"}
{"_id": "4c4387afaeadda64d8183d7aba19574a9b757a6a", "title": "SUITOR: an attentive information system", "text": "Attentive systems pay attention to what users do so that they can attend to what users need. Such systems track user behavior, model user interests, and anticipate user desires and actions. Because the general class of attentive systems is broad \u2014 ranging from human butlers to web sites that profile users \u2014 we have focused specifically on attentive information systems, which observe user actions with information resources, model user information states, and suggest information that might be helpful to users. In particular, we describe an implemented system, Simple User Interest Tracker (Suitor), that tracks computer users through multiple channels \u2014 gaze, web browsing, application focus \u2014 to determine their interests and to satisfy their information needs. By observing behavior and modeling users, Suitor finds and displays potentially relevant information that is both timely and non-disruptive to the users' ongoing activities."}
{"_id": "002aaf4412f91d0828b79511f35c0863a1a32c47", "title": "A real-time face tracker", "text": "We present a real-time face tracker in this paper The system has achieved a rate of 30% frameshecond using an HP-9000 workstation with a framegrabber and a Canon VC-CI camera. It can track a person 'sface while the person moves freely (e.g., walks, jumps, sits down and stands up) in a room. Three types of models have been employed in developing the system. First, we present a stochastic model to characterize skin-color distributions of human faces. The information provided by the model is sufJicient for tracking a human face in various poses and views. This model is adaptable to different people and different lighting conditions in real-time. Second, a motion model e's used to estimate image motion and to predict search window. Third, a camera model is used toprediet and to compensate for camera motion. The system can be applied to tele-conferencing and many HCI applications including lip-reading and gaze tracking. The principle in developing this system can be extended to other tracking problems such as tracking the human hand."}
{"_id": "36bb4352891209ba0a7df150c74cd4db6d603ca5", "title": "Single Image Super-Resolution via Multiple Mixture Prior Models", "text": "Example learning-based single image super-resolution (SR) is a promising method for reconstructing a high-resolution (HR) image from a single-input low-resolution (LR) image. Lots of popular SR approaches are more likely either time-or space-intensive, which limit their practical applications. Hence, some research has focused on a subspace view and delivered state-of-the-art results. In this paper, we utilize an effective way with mixture prior models to transform the large nonlinear feature space of LR images into a group of linear subspaces in the training phase. In particular, we first partition image patches into several groups by a novel selective patch processing method based on difference curvature of LR patches, and then learning the mixture prior models in each group. Moreover, different prior distributions have various effectiveness in SR, and in this case, we find that student-t prior shows stronger performance than the well-known Gaussian prior. In the testing phase, we adopt the learned multiple mixture prior models to map the input LR features into the appropriate subspace, and finally reconstruct the corresponding HR image in a novel mixed matching way. Experimental results indicate that the proposed approach is both quantitatively and qualitatively superior to some state-of-the-art SR methods."}
{"_id": "7cecad0af237dcd7760364e1ae0d03dea362d7cd", "title": "Pharmacogenetics and adverse drug reactions", "text": "Polymorphisms in the genes that code for drug-metabolising enzymes, drug transporters, drug receptors, and ion channels can affect an individual's risk of having an adverse drug reaction, or can alter the efficacy of drug treatment in that individual. Mutant alleles at a single gene locus are the best studied individual risk factors for adverse drug reactions, and include many genes coding for drug-metabolising enzymes. These genetic polymorphisms of drug metabolism produce the phenotypes of \"poor metabolisers\" or \"ultrarapid metabolisers\" of numerous drugs. Together, such phenotypes make up a substantial proportion of the population. Pharmacogenomic techniques allow efficient analysis of these risk factors, and genotyping tests have the potential to optimise drug therapy in the future."}
{"_id": "794435fe025ac480dbdb6218866e1d30d5a786c8", "title": "High performance work systems: the gap between policy and practice in health care reform.", "text": "PURPOSE\nStudies of high-performing organisations have consistently reported a positive relationship between high performance work systems (HPWS) and performance outcomes. Although many of these studies have been conducted in manufacturing, similar findings of a positive correlation between aspects of HPWS and improved care delivery and patient outcomes have been reported in international health care studies. The purpose of this paper is to bring together the results from a series of studies conducted within Australian health care organisations. First, the authors seek to demonstrate the link found between high performance work systems and organisational performance, including the perceived quality of patient care. Second, the paper aims to show that the hospitals studied do not have the necessary aspects of HPWS in place and that there has been little consideration of HPWS in health system reform.\n\n\nDESIGN/METHODOLOGY/APPROACH\nThe paper draws on a series of correlation studies using survey data from hospitals in Australia, supplemented by qualitative data collection and analysis. To demonstrate the link between HPWS and perceived quality of care delivery the authors conducted regression analysis with tests of mediation and moderation to analyse survey responses of 201 nurses in a large regional Australian health service and explored HRM and HPWS in detail in three casestudy organisations. To achieve the second aim, the authors surveyed human resource and other senior managers in all Victorian health sector organisations and reviewed policy documents related to health system reform planned for Australia.\n\n\nFINDINGS\nThe findings suggest that there is a relationship between HPWS and the perceived quality of care that is mediated by human resource management (HRM) outcomes, such as psychological empowerment. It is also found that health care organisations in Australia generally do not have the necessary aspects of HPWS in place, creating a policy and practice gap. Although the chief executive officers of health service organisations reported high levels of strategic HRM, the human resource and other managers reported a distinct lack of HPWS from their perspectives. The authors discuss why health care organisations may have difficulty in achieving HPWS.\n\n\nORIGINALITY/VALUE\nLeaders in health care organisations should focus on ensuring human resource management systems, structures and processes that support HPWS. Policy makers need to consider HPWS as a necessary component of health system reform. There is a strong need to reorient organisational human resource management policies and procedures in public health care organisations towards high performing work systems."}
{"_id": "da2a956676b59b5237bde62a308fa604215d6a55", "title": "Analysis and design of HBT Cherry-Hooper amplifiers with emitter-follower feedback for optical communications", "text": "In this article, the large-signal, small-signal, and noise performance of the Cherry-Hooper amplifier with emitter-follower feedback are analyzed from a design perspective. A method for choosing the component values to obtain a low group delay distortion or Bessel transfer function is given. The design theory is illustrated with an implementation of the circuit in a 47-GHz SiGe process. The amplifier has 19.7-dB gain, 13.7-GHz bandwidth, and /spl plusmn/10-ps group delay distortion. The amplifier core consumes 34 mW from a -3.3-V supply."}
{"_id": "d5673c53b3643372dd8d35136769ecd73a6dede3", "title": "A Deep Learning Framework for Smart Street Cleaning", "text": "Conventional street cleaning methods include street sweepers going to various spots in the city and manually verifying if the street needs cleaning and taking action if required. However, this method is not optimized and demands a huge investment in terms of time and money. This paper introduces an automated framework which addresses street cleaning problem in a better way by making use of modern equipment with cameras and computational techniques to analyze, find and efficiently schedule clean-up crews for the areas requiring more attention. Deep learning-based neural network techniques can be used to achieve better accuracy and performance for object detection and classification than conventional machine learning algorithms for large volume of images. The proposed framework for street cleaning leverages the deep learning algorithm pipeline to analyze the street photographs and determines if the streets are dirty by detecting litter objects. The pipeline further determines the the degree to which the streets are littered by classifying the litter objects detected in earlier stages. The framework also provides information on the cleanliness status of the streets on a dashboard updated in real-time. Such framework can prove effective in reducing resource consumption and overall operational cost involved in street cleaning."}
{"_id": "189a391b217387514bfe599a0b6c1bbc1ccc94bb", "title": "A New Paradigm for Collision-free Hashing: Incrementality at Reduced Cost", "text": "We present a simple, new paradigm for the design of collision-free hash functions. Any function emanating from this paradigm is incremental. (This means that if a message x which I have previously hashed is modi ed to x0 then rather than having to re-compute the hash of x 0 from scratch, I can quickly \\update\" the old hash value to the new one, in time proportional to the amount of modi cation made in x to get x.) Also any function emanating from this paradigm is parallelizable, useful for hardware implementation. We derive several speci c functions from our paradigm. All use a standard hash function, assumed ideal, and some algebraic operations. The rst function, MuHASH, uses one modular multiplication per block of the message, making it reasonably e cient, and signi cantly faster than previous incremental hash functions. Its security is proven, based on the hardness of the discrete logarithm problem. A second function, AdHASH, is even faster, using additions instead of multiplications, with security proven given either that approximation of the length of shortest lattice vectors is hard or that the weighted subset sum problem is hard. A third function, LtHASH, is a practical variant of recent lattice based functions, with security proven based, again on the hardness of shortest lattice vector approximation. Dept. of Computer Science & Engineering, University of California at San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA. E-Mail: mihir@cs.ucsd.edu. URL: http://www-cse.ucsd.edu/users/mihir. Supported in part by NSF CAREER Award CCR-9624439 and a Packard Foundation Fellowship in Science and Engineering. yMIT Laboratory for Computer Science, 545 Technology Square, Cambridge, MA 02139, USA. E-Mail: miccianc@theory.lcs.mit.edu. Supported in part by DARPA contract DABT63-96-C-0018."}
{"_id": "185aa5c29a1bce3cd5c806cd68c6518b65ba3d75", "title": "Bayesian Rose Trees", "text": "Hierarchical structure is ubiquitous in data across many domains. There are many hierarchical clustering methods, frequently used by domain experts, which strive to discover this structure. However, most of these methods limit discoverable hierarchies to those with binary branching structure. This limitation, while computationally convenient, is often undesirable. In this paper we explore a Bayesian hierarchical clustering algorithm that can produce trees with arbitrary branching structure at each node, known as rose trees. We interpret these trees as mixtures over partitions of a data set, and use a computationally efficient, greedy agglomerative algorithm to find the rose trees which have high marginal likelihood given the data. Lastly, we perform experiments which demonstrate that rose trees are better models of data than the typical binary trees returned by other hierarchical clustering algorithms."}
{"_id": "940173d9b880defde6c5f171579cddbba8288dc6", "title": "Digital Business Strategy and Value Creation: Framing the Dynamic Cycle of Control Points", "text": "Within changing value networks, the profits and competitive advantages of participation reside dynamically at control points that are the positions of greatest value and/or power. The enterprises that hold these positions have a great deal of control over how the network operates, how the benefits are redistributed, and how this influences the execution of a digital business strategy. This article is based on a field study that provides preliminary, yet promising, empirical evidence that sheds light on the dynamic cycle of value creation and value capture points in digitally enabled networks in response to triggers related to technology and business strategy. We use the context of the European and U.S. broadcasting industry. Specifically, we illustrate how incremental innovations may shift value networks from static, vertically integrated networks to more loosely coupled networks, and how cross-boundary industry disruptions may then, in turn, shift those to two-sided markets. Based on our analysis we provide insights and implications for digital business strategy research and practice."}
{"_id": "f39d45aeaf8a5ace793fb8c8495eb0c7598f5e23", "title": "Attitude Detection for One-Round Conversation: Jointly Extracting Target-Polarity Pairs", "text": "We tackle Attitude Detection, which we define as the task of extracting the replier's attitude, i.e., a target-polarity pair, from a given one-round conversation. While previous studies considered Target Extraction and Polarity Classification separately, we regard them as subtasks of Attitude Detection. Our experimental results show that treating the two subtasks independently is not the optimal solution for Attitude Detection, as achieving high performance in each subtask is not sufficient for obtaining correct target-polarity pairs. Our jointly trained model AD-NET substantially outperforms the separately trained models by alleviating the target-polarity mismatch problem. Moreover, we proposed a method utilising the attitude detection model to improve retrieval-based chatbots by re-ranking the response candidates with attitude features. Human evaluation indicates that with attitude detection integrated, the new responses to the sampled queries from are statistically significantly more consistent, coherent, engaging and informative than the original ones obtained from a commercial chatbot."}
{"_id": "b81ef3e2185b79843fce53bc682a6f9fa44f2485", "title": "Robust misinterpretation of confidence intervals.", "text": "Null hypothesis significance testing (NHST) is undoubtedly the most common inferential technique used to justify claims in the social sciences. However, even staunch defenders of NHST agree that its outcomes are often misinterpreted. Confidence intervals (CIs) have frequently been proposed as a more useful alternative to NHST, and their use is strongly encouraged in the APA Manual. Nevertheless, little is known about how researchers interpret CIs. In this study, 120 researchers and 442 students-all in the field of psychology-were asked to assess the truth value of six particular statements involving different interpretations of a CI. Although all six statements were false, both researchers and students endorsed, on average, more than three statements, indicating a gross misunderstanding of CIs. Self-declared experience with statistics was not related to researchers' performance, and, even more surprisingly, researchers hardly outperformed the students, even though the students had not received any education on statistical inference whatsoever. Our findings suggest that many researchers do not know the correct interpretation of a CI. The misunderstandings surrounding p-values and CIs are particularly unfortunate because they constitute the main tools by which psychologists draw conclusions from data."}
{"_id": "a9267dd3407e346aa08cd4018fbde92018230eb3", "title": "Low-complexity image denoising based on statistical modeling of wavelet coefficients", "text": "We introduce a simple spatially adaptive statistical model for wavelet image coefficients and apply it to image denoising. Our model is inspired by a recent wavelet image compression algorithm, the estimation-quantization (EQ) coder. We model wavelet image coefficients as zero-mean Gaussian random variables with high local correlation. We assume a marginal prior distribution on wavelet coefficients variances and estimate them using an approximate maximum a posteriori probability rule. Then we apply an approximate minimum mean squared error estimation procedure to restore the noisy wavelet image coefficients. Despite the simplicity of our method, both in its concept and implementation, our denoising results are among the best reported in the literature."}
{"_id": "aead2b2d26ab62f6229048b912b7ad313187008c", "title": "Crowdlet: Optimal worker recruitment for self-organized mobile crowdsourcing", "text": "In this paper, we advocate Crowdlet, a novel self-organized mobile crowdsourcing paradigm, in which a mobile task requester can proactively exploit a massive crowd of encountered mobile workers at real-time for quick and high-quality results. We present a comprehensive system model of Crowdlet that defines task, worker arrival and worker ability models. Further, we introduce a service quality concept to indicate the expected service gain that a requester can enjoy when he recruits an encountered worker, by jointly taking into account worker ability, real-timeness and task reward. Based on the models, we formulate an online worker recruitment problem to maximize the expected sum of service quality. We derive an optimal worker recruitment policy through the dynamic programming principle, and show that it exhibits a nice threshold based structure. We conduct extensive performance evaluation based on real traces, and numerical results demonstrate that our policy can achieve superior performance and improve more than 30% performance gain over classic policies. Besides, our Android prototype shows that Crowdlet is cost-efficient, requiring less than 7 seconds and 6 Joule in terms of time and energy cost for policy computation in most cases."}
{"_id": "f39322a314bf96dbaf482b7f86fdf99853fca468", "title": "The population genetics of the Jewish people", "text": "Adherents to the Jewish faith have resided in numerous geographic locations over the course of three millennia. Progressively more detailed population genetic analysis carried out independently by multiple research groups over the past two decades has revealed a pattern for the population genetic architecture of contemporary Jews descendant from globally dispersed Diaspora communities. This pattern is consistent with a major, but variable component of shared Near East ancestry, together with variable degrees of admixture and introgression from the corresponding host Diaspora populations. By combining analysis of monoallelic markers with recent genome-wide variation analysis of simple tandem repeats, copy number variations, and single-nucleotide polymorphisms at high density, it has been possible to determine the relative contribution of sex-specific migration and introgression to map founder events and to suggest demographic histories corresponding to western and eastern Diaspora migrations, as well as subsequent microevolutionary events. These patterns have been congruous with the inferences of many, but not of all historians using more traditional tools such as archeology, archival records, linguistics, comparative analysis of religious narrative, liturgy and practices. Importantly, the population genetic architecture of Jews helps to explain the observed patterns of health and disease-relevant mutations and phenotypes which continue to be carefully studied and catalogued, and represent an important resource for human medical genetics research. The current review attempts to provide a succinct update of the more recent developments in a historical and human health context."}
{"_id": "2ab66c0f02521f4e5d42a6b29b08eb815504818b", "title": "Learning deep physiological models of affect", "text": "More than 15 years after the early studies in Affective Computing (AC), [1] the problem of detecting and modeling emotions in the context of human-computer interaction (HCI) remains complex and largely unexplored. The detection and modeling of emotion is, primarily, the study and use of artificial intelligence (AI) techniques for the construction of computational models of emotion. The key challenges one faces when attempting to model emotion [2] are inherent in the vague definitions and fuzzy boundaries of emotion, and in the modeling methodology followed. In this context, open research questions are still present in all key components of the modeling process. These include, first, the appropriateness of the modeling tool employed to map emotional manifestations and responses to annotated affective states; second, the processing of signals that express these manifestations (i.e., model input); and third, the way affective annotation (i.e., model output) is handled. This paper touches upon all three key components of an affective model (i.e., input, model, output) and introduces the use of deep learning (DL) [3], [4], [5] methodologies for affective modeling from multiple physiological signals."}
{"_id": "b3e6312fb37a99336f7780b1c69badd899858b41", "title": "Three Implementations of SquishQL, a Simple RDF Query Language", "text": "RDF provides a basic way to represent data for the Semantic Web. We have been experimenting with the query paradigm for working with RDF data in semantic web applications. Query of RDF data provides a declarative access mechanism that is suitable for application usage and remote access. We describe work on a conceptual model for querying RDF data that refines ideas first presented in at the W3C workshop on Query Languages [14] and the design of one possible syntax, derived from [7], that is suitable for application programmers. Further, we present experience gained in three implementations of the query language."}
{"_id": "060d1b412dc3fb7d5f5d952adc8a4a8ecc4bd3fa", "title": "Introduction to Statistical Learning Theory", "text": "The goal of statistical learning theory is to study, in a statistical framework, the properties of learning algorithms. In particular, most results take the form of so-called error bounds. This tutorial introduces the techniques that are used to obtain such results."}
{"_id": "9b9c9cc72ebc16596a618d5b78972437c9c569f6", "title": "Fast Texture Transfer", "text": ""}
{"_id": "5de1dec847e812b7d7f2776a756ee17b2d1791cd", "title": "Social Media: The Good, the Bad, and the Ugly", "text": "Companies when strategizing are looking for innovative ways to have a competitive advantage over their competitors. One way in which they compete is by the adoption of social media. Social media has evolved over the years and as a result, new concepts and applications are developed which promises to provide business value to a company. However, despite the usefulness of social media, many businesses fail to reap its full benefits. The current literature shows evidence of lack of strategically designed process for companies to successfully implement social media. The purpose of this study is to suggest a framework which provides the necessary alignment between social media goals with business objectives. From the literature review, a social media strategy framework was derived to offer an effective step by step approach to the development and imple\u2010 mentation of social media goals aligned with a company\u2019s business objectives. The contribution to this study is the development of a social media strategy framework that can be used by organisations for business value."}
{"_id": "a70ba6daebb31461480fe3a369af6c7658ccdec0", "title": "Web Mining for Information Retrieval", "text": "The present era is engulfed in data and it is quite difficult to churn the emergence of unstructured data in order to mine relevant information. In this paper we present a basic outline of web mining techniques which changed the scenario of today world and help in retrieval of information. Web Mining actually referred as mining of interesting pattern by using set of tools and techniques from the vast pool of web. It focuses on different aspects of web mining referred as Web Content Mining, Web Structure Mining and Web Usage Mining. The overview of these mining techniques which help in retrieving desired information are covered along with the approaches used and algorithm employed."}
{"_id": "6de1fbfe86ddfef72d5fcfdb6e7e140be0ab195e", "title": "Argumentation Mining: Where are we now, where do we want to be and how do we get there?", "text": "This paper gives a short overview of the state-of-the-art and goals of argumentation mining and it provides ideas for further research. Its content is based on two invited lectures on argumentation mining respectively at the FIRE 2013 conference at the India International Center in New Delhi, India and a lecture given as SICSA distinguished visitor at the University of Dundee, UK in the summer of 2014."}
{"_id": "9abba0edb3d631b6cca51a346d494b911e5342fa", "title": "1 Unsupervised Feature Learning in Time Series 2 Prediction Using Continuous Deep Belief Network 3", "text": "A continuous Deep Belief Network (cDBN) with two hidden layers is proposed in this 18 paper, focusing on the problem of weak feature learning ability when dealing with continuous data. 19 In cDBN, the input data is trained in an unsupervised way by using continuous version of transfer 20 functions, the contrastive divergence is designed in hidden layer training process to raise 21 convergence speed, an improved dropout strategy is then implemented in unsupervised training to 22 realize features learning by de-cooperating between the units, and then the network is fine-tuned 23 using back propagation algorithm. Besides, hyper-parameters are analysed through stability 24 analysis to assure the network can find the optimal. Finally, the experiments on Lorenz chaos series, 25 CATS benchmark and other real world like CO2 and waste water parameters forecasting show that 26 cDBN has the advantage of higher accuracy, simpler structure and faster convergence speed than 27 other methods. 28"}
{"_id": "63cffc8d517259e5b4669e122190dfe515ee5508", "title": "Influence of Some Parameters on the Effectiveness of Induction Heating", "text": "We have investigated the effectiveness of heating conductive plates by induction in low frequencies analytically and numerically, in relation to the shape of the plate, the area of the region exposed to the magnetic flux, and the position of the exposed region with respect to the center of the plate. We considered both uniform and nonuniform magnetic fields. For plates with equivalent area exposed to the same uniform magnetic flux, the one with the most symmetrical shape is heated most effectively. If a coil is employed for the excitation, the results depend on the shape of the plate, the shape of the coil section, and the coil lift-off. When only the central region of a plate is exposed to a variable magnetic flux, there is a specific value of the exposed area for which the power dissipated in the plate material reaches a maximum. The most effective heating of a plate partially exposed occurs when the axis of the exciting coil is at the plate center."}
{"_id": "4ad7f58bde3358082cc922da3e726571fab453e7", "title": "A Model of a Localized Cross-Border E-Commerce", "text": "By the explosive growth of B2B e-commerce transactions in international supply chains and the rapid increase of business documents in Iran\u2019s cross-border trading, effective management of trade processes over borders is vital in B2B e-commerce systems. Structure of the localized model in this paper is based on three major layers of a B2B e-commerce infrastructure, which are messaging layer, business process layer and content layer. For each of these layers proper standards and solutions are chosen due to Iran\u2019s e-commerce requirements. As it is needed to move smoothly towards electronic documents in Iran, UNedocs standard is suggested to support the contents of both paper and electronic documents. The verification of the suggested model is done by presenting a four phase scenario through case study method. The localized model in this paper tries to make a strategic view of business documents exchange in trade processes, and getting closer to the key target of regional single windows establishment in global trade e-supply chains."}
{"_id": "3ffce42ed3d7ac5963e03d4b6e32460ef5b29ff7", "title": "Object modelling by registration of multiple range images", "text": "We study the problem of creating a complete model of a physical object. Although this may be possible using intensity images, we use here range images which directly provide access t o three dimensional information. T h e first problem that we need t o solve is t o find the transformation between the different views. Previous approaches have either assumed this transformation t o be known (which is extremely difficult for a complete model), or computed it with feature matching (which is not accurate enough for integration). In this paper, we propose a new approach which works on range d a t a directly, and registers successive views with enough overlapping area t o get an accurate transformation between views. This is performed by minimizing afunctional which does not require point t o point matches. We give the details of the registration method and modeling procedure, and illustrate them on real range images of complex objects. 1 Introduction Creating models of physical objects is a necessary component machine of biological vision modules. Such models can then be used in object recognition, pose estimation or inspection tasks. If the object of interest has been precisely designed, then such a model exists in the form of a CAD model. In many applications, however, it is either not possible or not practical to have access to such CAD models, and we need to build models from the physical object. Some researchers bypass the problem by using a model which consist,s of multiple views ([4], [a]), but t,liis is not, always enough. If one needs a complete model of an object, the following steps are necessary: 1. data acquisition, 2. registration between views, 3. integration of views. By view we mean the 3D surface information of the object from specific point of view. While the integration process is very dependent on the representation scheme used, the precondition for performing integration consists of knowing the transformation between the data from different views. The goal of registrat,ion is to find such a transformat.ion, which is also known as t,he covrespon,den,ce problem. This problem has been a t the core of many previous research efforts: Bhanu [a] developed an object modeling system for object recognition by rotating object through known angles to acquire multiple views. Chien et al. [3] and Ahuja and Veen-stra [l] used orthogonal views to construct octree object ' models. With these methods, \u2026"}
{"_id": "883b2b981dc04139800f30b23a91b8d27be85b65", "title": "Rigid 3D geometry matching for grasping of known objects in cluttered scenes", "text": "In this paper, we present an efficient 3D object recognition and pose estimation approach for grasping procedures in cluttered and occluded environments. In contrast to common appearance-based approaches, we rely solely on 3D geometry information. Our method is based on a robust geometric descriptor, a hashing technique and an efficient, localized RANSAC-like sampling strategy. We assume that each object is represented by a model consisting of a set of points with corresponding surface normals. Our method simultaneously recognizes multiple model instances and estimates their pose in the scene. A variety of tests shows that the proposed method performs well on noisy, cluttered and unsegmented range scans in which only small parts of the objects are visible. The main procedure of the algorithm has a linear time complexity resulting in a high recognition speed which allows a direct integration of the method into a continuous manipulation task. The experimental validation with a 7-degrees-of-freedom Cartesian impedance controlled robot shows how the method can be used for grasping objects from a complex random stack. This application demonstrates how the integration of computer vision and softrobotics leads to a robotic system capable of acting in unstructured and occluded environments."}
{"_id": "9bc8aaaf23e2578c47d5d297d1e1cbb5b067ca3a", "title": "Combining Scale-Space and Similarity-Based Aspect Graphs for Fast 3D Object Recognition", "text": "This paper describes an approach for recognizing instances of a 3D object in a single camera image and for determining their 3D poses. A hierarchical model is generated solely based on the geometry information of a 3D CAD model of the object. The approach does not rely on texture or reflectance information of the object's surface, making it useful for a wide range of industrial and robotic applications, e.g., bin-picking. A hierarchical view-based approach that addresses typical problems of previous methods is applied: It handles true perspective, is robust to noise, occlusions, and clutter to an extent that is sufficient for many practical applications, and is invariant to contrast changes. For the generation of this hierarchical model, a new model image generation technique by which scale-space effects can be taken into account is presented. The necessary object views are derived using a similarity-based aspect graph. The high robustness of an exhaustive search is combined with an efficient hierarchical search. The 3D pose is refined by using a least-squares adjustment that minimizes geometric distances in the image, yielding a position accuracy of up to 0.12 percent with respect to the object distance, and an orientation accuracy of up to 0.35 degree in our tests. The recognition time is largely independent of the complexity of the object, but depends mainly on the range of poses within which the object may appear in front of the camera. For efficiency reasons, the approach allows the restriction of the pose range depending on the application. Typical runtimes are in the range of a few hundred ms."}
{"_id": "dbd66f601b325404ff3cdd7b9a1a282b2da26445", "title": "T-LESS: An RGB-D Dataset for 6D Pose Estimation of Texture-Less Objects", "text": "We introduce T-LESS, a new public dataset for estimating the 6D pose, i.e. translation and rotation, of texture-less rigid objects. The dataset features thirty industry-relevant objects with no significant texture and no discriminative color or reflectance properties. The objects exhibit symmetries and mutual similarities in shape and/or size. Compared to other datasets, a unique property is that some of the objects are parts of others. The dataset includes training and test images that were captured with three synchronized sensors, specifically a structured-light and a time-of-flight RGB-D sensor and a high-resolution RGB camera. There are approximately 39K training and 10K test images from each sensor. Additionally, two types of 3D models are provided for each object, i.e. a manually created CAD model and a semi-automatically reconstructed one. Training images depict individual objects against a black background. Test images originate from twenty test scenes having varying complexity, which increases from simple scenes with several isolated objects to very challenging ones with multiple instances of several objects and with a high amount of clutter and occlusion. The images were captured from a systematically sampled view sphere around the object/scene, and are annotated with accurate ground truth 6D poses of all modeled objects. Initial evaluation results indicate that the state of the art in 6D object pose estimation has ample room for improvement, especially in difficult cases with significant occlusion. The T-LESS dataset is available online at cmp:felk:cvut:cz/t-less."}
{"_id": "a8e5c255bcc474194777f884ee4caec841269200", "title": "Secure LSB steganography for colored images using character-color mapping", "text": "Steganography is the science of embedding the secret messages inside other medium files in a way that hides the existence of the secret message at all. Steganography can be applied to text, audio, image, and video file types. In this study, we propose a new steganography approach for digital images in which the RGB coloring model was used. The efficiency of the proposed approach has been tested and evaluated. The experimental results show that the proposed approach produce high-quality stego images that resist against visual and statistical attacks."}
{"_id": "ffa25b893ea72ffb077158eb750df827d154b5b6", "title": "TicTac: Accelerating Distributed Deep Learning with Communication Scheduling.", "text": "State-of-the-art deep learning systems rely on iterative distributed training to tackle the increasing complexity of models and input data. The iteration time in these communication-heavy systems depends on the computation time, communication time and the extent of overlap of computation and communication. In this work, we identify a shortcoming in systems with graph representation for computation, such as TensorFlow and PyTorch, that result in high variance in iteration time \u2014 random order of received parameters across workers. We develop a system, TicTac, to improve the iteration time by fixing this issue in distributed deep learning with Parameter Servers while guaranteeing near-optimal overlap of communication and computation. TicTac identifies and enforces an order of network transfers which improves the iteration time using prioritization. Our system is implemented over TensorFlow and requires no changes to the model or developer inputs. TicTac improves the throughput by up to 37.7% in inference and 19.2% in training, while also reducing straggler effect by up to 2.3\u00d7. Our code is publicly available."}
{"_id": "73f35cb89d62a1ee7ace46d047956a2a0353719a", "title": "An ultra-low power (ULP) zero-current-detector (ZCD) circuit for switching inductor converter applied in energy harvesting system", "text": "In this paper, we present an ultra-low power (ULP) zero-current-detector (ZCD) circuit by using sub-threshold design method and discontinued operating time to decrease power consumption, which can be widely used in switching inductor converters such as boost, buck or buck-boost converters. The designed ZCD circuit consumes only 40 nA quiescent current to realize nearly 70 dB DC gain and 650 kHz gain-bandwidth (GBW) when it operates. The main comparator of the ZCD circuit has high sensitivity and very low delay time to turn-off the synchronous switches of the DC-DC converter on time. The average quiescent current consumption is less than 1.6 nA because the operating time is only 40 \u03bcs under the minimum switching period of 1 ms. Finally, the designed ZCD circuit is used in a boost converter for vibrational energy harvesting applications."}
{"_id": "6925dc5d9aed7a3d9640179aca0355c6259db260", "title": "Text Classification Using the N-Gram Graph Representation Model Over High Frequency Data Streams", "text": "A prominent challenge in our information age is the classification over high frequency data streams. In this research, we propose an innovative and high-accurate text stream classificationmodel that is designed in an elastic distributed way and is capable to service text load with fluctuated frequency. In this classification model, text is represented as N-Gram Graphs and the classification process takes place using text pre-processing, graph similarity and feature classification techniques following the supervised machine learning approach. The work involves the analysis of many variations of the proposed model and its parameters, such as various representations of text as N-Gram Graphs, graph comparisons metrics and classification methods in order to conclude to the most accurate setup. To deal with the scalability, the availability and the timely response in case of high frequency text we employ the Beam programming model. Using the Beam programmingmodel the classification process occurs as a sequence of distinct tasks and facilitates the distributed implementation of the most computational demanding tasks of the inference stage. The proposed model and the various parameters that constitute it are evaluated experimentally and the high frequency stream emulated using two public datasets (20NewsGroup and Reuters-21578) that are commonly used in the literature for text classification."}
{"_id": "33fcc94696ac3ba51e21d61647ff66b0c104e759", "title": "PIP Distance: A Unitary-invariant Metric for Understanding Functionality and Dimensionality of Vector Embeddings", "text": "In this paper, we present a theoretical framework for understanding vector embedding, a fundamental building block of many deep learning models, especially in NLP. We discover a natural unitary-invariance in vector embeddings, which is required by the distributional hypothesis. This unitary-invariance states the fact that two embeddings are essentially equivalent if one can be obtained from the other by performing a relative-geometry preserving transformation, for example a rotation. This idea leads to the Pairwise Inner Product (PIP) loss, a natural unitary-invariant metric for the distance between two embeddings. We demonstrate that the PIP loss captures the difference in functionality between embeddings. By formulating the embedding training process as matrix factorization under noise, we reveal a fundamental bias-variance tradeoff in dimensionality selection. With tools from perturbation and stability theory, we provide an upper bound on the PIP loss using the signal spectrum and noise variance, both of which can be readily inferred from data. Our framework sheds light on many empirical phenomena, including the existence of an optimal dimension, and the robustness of embeddings against over-parametrization. The bias-variance tradeoff of PIP loss explicitly answers the fundamental open problem of dimensionality selection for vector embeddings."}
{"_id": "8405b58be29ddc4f4b57557ecf72b4acd374c9f4", "title": "Haptic Feedback in Room-Scale VR", "text": "Virtual reality (VR) is now becoming a mainstream medium. Current systems like the HTC Vive offer accurate tracking of the HMD and controllers, which allows for highly immersive interactions with the virtual environment. The interactions can be further enhanced by adding feedback. As an example, a controller can vibrate when it is close to a grabbable ball. As such interactions are not exhaustingly researched, we conducted a user study. Specifically, we examine: 1. grabbing and throwing with controllers in a simple basketball game. 2. the influence of haptic and optical feedback on performance, presence, task load, and usability. 3. the advantages of VR over desktop for point-cloud editing. Several new techniques emerged from the point-cloud editor for VR. The bi-manual pinch gesture, which extends the handlebar metaphor, is a novel viewing method used to translate, rotate, and scale the point-cloud. Our new rendering technique uses the geometry shader to draw sparse point clouds quickly. The selection volumes at the controllers are our new technique to efficiently select points in point clouds. The resulting selection is visualized in real time. The results of the user study show that: 1. grabbing with a controller button is intuitive but throwing is not. Releasing a button is a bad metaphor for releasing a grabbed virtual object in order to throw it. 2. any feedback is better than none. Adding haptic, optical, or both feedback types to the grabbing improves the user performance and presence. However, only sub-scores like accuracy and predictability are significantly improved. Usability and task load are mostly unaffected by feedback. 3. the point-cloud editing is significantly better in VR with the bi-manual pinch gesture and selection volumes than on the desktop with the orbiting camera and lasso selections."}
{"_id": "d3c86e4f26b379a091cb915c73480d8db5d8fd9e", "title": "Strong intuitionistic fuzzy graphs", "text": "We introduce the notion of strong intuitionistic fuzzy graphs and investigate some of their properties. We discuss some propositions of self complementary and self weak complementary strong intuitionistic fuzzy graphs. We introduce the concept of intuitionistic fuzzy line graphs."}
{"_id": "4a13a1e30069148bf24bdd697ecc139a82f3e19a", "title": "Analysis of packet loss and delay variation on QoE for H.264 andWebM/VP8 Codecs", "text": "The popularity of multimedia services over Internet has increased in the recent years. These services include Video on Demand (VoD) and mobile TV which are predominantly growing, and the user expectations towards the quality of videos are gradually increasing. Different video codecs are used for encoding and decoding. Recently Google has introduced the VP8 codec which is an open source compression format. It is introduced to compete with existing popular codec namely H.264/AVC developed by ITU-T Video Coding Expert Group (VCEG), as by 2016 there will be a license fee for H.264. In this work we compare the performance of H.264/AVC and WebM/VP8 in an emulated environment. NetEm is used as an emulator to introduce delay/delay variation and packet loss. We have evaluated the user perception of impaired videos using Mean Opinion Score (MOS) by following the International Telecommunication Union (ITU) Recommendations Absolute Category Rating (ACR) and analyzed the results using statistical methods. It was found that both video codecs exhibit similar performance in packet loss, But in case of delay variation H.264 codec shows better results when compared to WebM/VP8. Moreover along with the MOS ratings we also studied the effect of user feelings and online video watching experience impacts on their perception."}
{"_id": "d54672065689c9128354d54aac722bbdf450e406", "title": "3D depth estimation from a holoscopic 3D image", "text": "This paper presents an innovative technique for 3D depth estimation from a single holoscopic 3D image (H3D). The image is captured with a single aperture holoscopic 3D camera, which mimics a fly's eye technique to acquire an optical 3D model of a true 3D scene. The proposed method works by extracting of optimum viewpoints images from a H3D image and it uses the shift and integration function in up-sampling the extracted viewpoints and then it performs the match block functions to match correspondence features between the stereoscopic 3D images. Finally, the 3D depth is estimated through a smoothing and optimizing process to produce a final 3D depth map. The proposed method estimates the full 3D depth information from a single H3D image, which makes it reliable and suitable for trend autonomous robotic applications."}
{"_id": "e3ab8a2d775e74641b4612fac4f33b5b1895d8ea", "title": "User practice in password security: An empirical study of real-life passwords in the wild", "text": "Due to increasing security awareness of password from the public and little attention on the characteristics of real-life passwords, it is thus natural to understand the current state of characteristics of real-life passwords, and to explore how password characteristics change over time and how earlier password practice is understood in current context. In this work, we attempt to present an in-depth and comprehensive understanding of user practice in real-life passwords, and to see whether the previous observations can be confirmed or reversed, based on large-scale measurements rather than anecdotal experiences or user surveys. Specifically, we measure password characteristics on over 6 million passwords, in terms of password length, password composition, and password selection. We then make informed comparisons of the findings between our investigation and previously reported results. Our general findings include: (1) average password length is at least 12% longer than previous results, and 85% of our passwords have the length between 8 and 11 characters;"}
{"_id": "df45ccf1189bc212cacb7e12c796f9d16bbeb4bf", "title": "WAVELET BASED WATERMARKING ON DIGITAL IMAGE", "text": "Safeguarding creative content and intellectual property in a digital form has become increasingly difficult as technologies, such as the internet, broadband availability and mobile access advance. It has grown to be progressively easier to copy, modify and redistribute digital media, resulting in great declines in business profits. Digital watermarking is a technique which has been proposed as a possible solution to this problem. Digital Watermarking is a technology which is used to identify the creator, owner, distributor of a given video or image by embedding copyright marks into the digital content, hence digital watermarking is a powerful tool used to check the copy right violation. In this paper a robust watermarking technique based on DWT (Discrete Wavelet Transform) is presented. In this technique the insertion and extraction of the watermark in the grayscale image is found to be simpler than other transform techniques. This paper successfully explains the digital watermarking technique on digital images based on discrete wavelet transform by analyzing various values of PSNR\u2019s and MSE\u2019s.."}
{"_id": "095836eda402aae1cf22e515fef662dc5a1bb2cb", "title": "On the robustness of power systems: Optimal load-capacity distributions and hardness of attacking", "text": "We consider a power system with N transmission lines whose initial loads (i.e., power flows) L1,\u2026, LN and capacities C1,\u2026, CN are independent and identically distributed with the joint distribution PLC(x, y) = P [L \u2264 x, C \u2264 y]; the capacity Ci defines the maximum flow allowed on line i. We survey some results on the robustness of this power system against random attacks (or, failures) that target a p-fraction of the lines, under a democratic fiber bundle-like model. Namely, when a line fails, the load it was carrying is redistributed equally among the remaining lines. We then consider the case where an adversary can launch a targeted attack, and present several results on the hardness of attacking optimally."}
{"_id": "44638b2ab0c1f926b0d79dda8d3508f00b979be9", "title": "Process mining: a two-step approach to balance between underfitting and overfitting", "text": "Process mining includes the automated discovery of processes from event logs. Based on observed events (e.g., activities being executed or messages being exchanged) a process model is constructed. One of the essential problems in process mining is that one cannot assume to have seen all possible behavior. At best, one has seen a representative subset. Therefore, classical synthesis techniques are not suitable as they aim at finding a model that is able to exactly reproduce the log. Existing process mining techniques try to avoid such \u201coverfitting\u201d by generalizing the model to allow for more behavior. This generalization is often driven by the representation language and very crude assumptions about completeness. As a result, parts of the model are \u201coverfitting\u201d (allow only for what has actually been observed) while other parts may be \u201cunderfitting\u201d (allow for much more behavior without strong support for it). None of the existing techniques enables the user to control the balance between \u201coverfitting\u201d and \u201cunderfitting\u201d. To address this, we propose a two-step approach. First, using a configurable approach, a transition system is constructed. Then, using the \u201ctheory of regions\u201d, the model is synthesized. The approach has been implemented in the context of ProM and overcomes many of the limitations of traditional approaches."}
{"_id": "55b0e1c5e1ef6060b9fdcb5f644b01d89afe5b27", "title": "A Framework for Clustering Massive Text and Categorical Data Streams", "text": "Many applications such as news group filtering, text crawling, and document organization require real time clustering and segmentation of text data records. The categorical data stream clustering problem also has a number of applications to the problems of customer segmentation and real time trend analysis. We will present an online approach for clustering massive text and categorical data streams with the use of a statistical summarization methodology. We present results illustrating the effectiveness of the technique."}
{"_id": "7adc7cbdaf862d23b34e973b05dcbae8ede47bb1", "title": "Build watson: An overview of DeepQA for the Jeopardy! Challenge", "text": "Computer systems that can directly and accurately answer peoples' questions over a broad domain of human knowledge have been envisioned by scientists and writers since the advent of computers themselves. Open domain question answering holds tremendous promise for facilitating informed decision making over vast volumes of natural language content. Applications in business intelligence, healthcare, customer support, enterprise knowledge management, social computing, science and government would all benefit from deep language processing. The DeepQA project (www.ibm.com/deepqa) is aimed at illustrating how the advancement and integration of Natural Language Processing (NLP), Information Retrieval (IR), Machine Learning (ML), massively parallel computation and Knowledge Representation and Reasoning (KR&R) can greatly advance open-domain automatic Question Answering. An exciting proof-point in this challenge is to develop a computer system that can successfully compete against top human players at the Jeopardy! quiz show (www.jeopardy.com). Attaining champion-level performance Jeopardy! requires a computer to rapidly answer rich open-domain questions, and to predict its own performance on any given category/question. The system must deliver high degrees of precision and confidence over a very broad range of knowledge and natural language content and with a 3-second response time. To do this DeepQA generates, evidences and evaluates many competing hypotheses. A key to success is automatically learning and combining accurate confidences across an array of complex algorithms and over different dimensions of evidence. Accurate confidences are needed to know when to \"buzz in\" against your competitors and how much to bet. Critical for winning at Jeopardy!, High precision and accurate confidence computations are just as critical for providing real value in business settings where helping users focus on the right content sooner and with greater confidence can make all the difference. The need for speed and high precision demands a massively parallel compute platform capable of generating, evaluating and combing 1000's of hypotheses and their associated evidence. In this talk I will introduce the audience to the Jeopardy! Challenge and describe our technical approach and our progress on this grand-challenge problem."}
{"_id": "61f064c24cb9776a408b8bf4cfbcb5a5105ac31a", "title": "Multi-type attributes driven multi-camera person re-identification", "text": "One of the major challenges in person Re-Identification (ReID) is the inconsistent visual appearance of a person. Current works on visual feature and distance metric learning have achieved significant achievements, but still suffer from the limited robustness to pose variations, viewpoint changes, etc., and the high computational complexity. This makes person ReID among multiple cameras still challenging. This work is motivated to learn mid-level human attributes which are robust to visual appearance variations and could be used as efficient features for person matching. We propose a weakly supervised multi-type attribute learning framework which considers the contextual cues among attributes and progressively boosts the accuracy of attributes only using a limited number of labeled data. Specifically, this framework involves a threestage training. A deep Convolutional Neural Network (dCNN) is first trained on an independent dataset labeled with attributes. Then it is fine-tuned on another dataset only labeled with person IDs using our defined triplet loss. Finally, the updated dCNN predicts attribute labels for the target dataset, which is combined with the independent dataset for the final round of fine-tuning. The predicted attributes, namely deep attributes exhibit promising generalization ability across different datasets. By directly using the deep attributes with simple Cosine distance, we have obtained competitive accuracy on four person ReID datasets. Experiments also show that a simple distance metric learning modular further boosts our method, making it outperform many recent \u2217Corresponding author Email address: slzhang.jdl@pku.edu.cn (Shiliang Zhang) Preprint submitted to Pattern Recognition September 11, 2017"}
{"_id": "49c5b67d22a3d8929913efeb3621fb7f5c9160ef", "title": "Fractional Order Fuzzy Control of Hybrid Power System with Renewable Generation Using Chaotic PSO", "text": "This paper investigates the operation of a hybrid power system through a novel fuzzy control scheme. The hybrid power system employs various autonomous generation systems like wind turbine, solar photovoltaic, diesel engine, fuel-cell, aqua electrolyzer etc. Other energy storage devices like the battery, flywheel and ultra-capacitor are also present in the network. A novel fractional order (FO) fuzzy control scheme is employed and its parameters are tuned with a particle swarm optimization (PSO) algorithm augmented with two chaotic maps for achieving an improved performance. This FO fuzzy controller shows better performance over the classical PID, and the integer order fuzzy PID controller in both linear and nonlinear operating regimes. The FO fuzzy controller also shows stronger robustness properties against system parameter variation and rate constraint nonlinearity, than that with the other controller structures. The robustness is a highly desirable property in such a scenario since many components of the hybrid power system may be switched on/off or may run at lower/higher power output, at different time instants."}
{"_id": "740ffcfc4607b60d55a0d6108ca6ee3b4b5b0f16", "title": "Image Enhancement Network Trained by Using HDR images", "text": "In this paper, a novel image enhancement network is proposed, where HDR images are used for generating training data for our network. Most of conventional image enhancement methods, including Retinex based methods, do not take into account restoring lost pixel values caused by clipping and quantizing. In addition, recently proposed CNN based methods still have a limited scope of application or a limited performance, due to network architectures. In contrast, the proposed method have a higher performance and a simpler network architecture than existing CNN based methods. Moreover, the proposed method enables us to restore lost pixel values. Experimental results show that the proposed method can provides higher-quality images than conventional image enhancement methods including a CNN based method, in terms of TMQI and NIQE."}
{"_id": "87822f10ef2f797c65eba3185a81cd541f8dad6b", "title": "Topographical and temporal diversity of the human skin microbiome.", "text": "Human skin is a large, heterogeneous organ that protects the body from pathogens while sustaining microorganisms that influence human health and disease. Our analysis of 16S ribosomal RNA gene sequences obtained from 20 distinct skin sites of healthy humans revealed that physiologically comparable sites harbor similar bacterial communities. The complexity and stability of the microbial community are dependent on the specific characteristics of the skin site. This topographical and temporal survey provides a baseline for studies that examine the role of bacterial communities in disease states and the microbial interdependencies required to maintain healthy skin."}
{"_id": "e11086c453c6be433084a45d1cc2aeb403d73f23", "title": "A Formal Analysis of the ISO 9241-210 Definition of User Experience", "text": "User Experience (UX) is a major concept in HCI and a variety of different UX definitions have been suggested within the scientific community. An ISO UX definition has been presented to standardize the term from an industry perspective. We introduce methods from formal logic in order to formalize and analyze the ISO UX definition with regard to consistency and ambiguities and present recommendations for an improved version. Although this kind of formalization is not common within the CHI community, we show that quasi-formal methods provide an alternative way of analyzing widely discussed HCI terms, such as UX, to deepen its understanding."}
{"_id": "b88678762b44ea53db95365fd62e6f0426b195a8", "title": "THE IMPACT OF CHANGE MANAGEMENT IN ERP SYSTEM : A CASE STUDY OF MADAR", "text": "This paper discusses Enterprise Resource Planning (ERP) change management. A review of the research literature has been presented that focuses on the ERP change management factors. The literature is further classified and the major outcomes of each study have been addressed in this paper. The discussion is supported by a practical ERP project called Madar. The paper is targeted to investigate and identify the reasons for resistance to diffusion and why individuals within an organization resist the changes. This paper also suggests strategies to minimize the resistance if not overcome completely."}
{"_id": "bd7eb1e0dfb280ffcccbb07c9c5f6bfc3b79f6c3", "title": "Walking to public transit: steps to help meet physical activity recommendations.", "text": "BACKGROUND\nNearly half of Americans do not meet the Surgeon General's recommendation of > or =30 minutes of physical activity daily. Some transit users may achieve 30 minutes of physical activity daily solely by walking to and from transit. This study estimates the total daily time spent walking to and from transit and the predictors of achieving 30 minutes of physical activity daily by doing so.\n\n\nMETHODS\nTransit-associated walking times for 3312 transit users were examined among the 105,942 adult respondents to the 2001 National Household Travel Survey, a telephone-based survey sponsored by the U.S. Department of Transportation to assess American travel behavior.\n\n\nRESULTS\nAmericans who use transit spend a median of 19 minutes daily walking to and from transit; 29% achieve > or =30 minutes of physical activity a day solely by walking to and from transit. In multivariate analysis, rail users, minorities, people in households earning <$15,000 a year, and people in high-density urban areas were more likely to spend > or =30 minutes walking to and from transit daily.\n\n\nCONCLUSIONS\nWalking to and from public transportation can help physically inactive populations, especially low-income and minority groups, attain the recommended level of daily physical activity. Increased access to public transit may help promote and maintain active lifestyles. Results from this study may contribute to health impact assessment studies (HIA) that evaluate the impact of proposed public transit systems on physical activity levels, and thereby may influence choices made by transportation planners."}
{"_id": "47165ad181b3da3a615e6e8bc8509fcfc53f49c4", "title": "Cross-Modal Correlation Learning by Adaptive Hierarchical Semantic Aggregation", "text": "With the explosive growth of web data, effective and efficient technologies are in urgent need for retrieving semantically relevant contents of heterogeneous modalities. Previous studies devote efforts to modeling simple cross-modal statistical dependencies, and globally projecting the heterogeneous modalities into a measurable subspace. However, global projections cannot appropriately adapt to diverse contents, and the naturally existing multilevel semantic relation in web data is ignored. We study the problem of semantic coherent retrieval, where documents from different modalities should be ranked by the semantic relevance to the query. Accordingly, we propose TINA, a correlation learning method by adaptive hierarchical semantic aggregation. First, by joint modeling of content and ontology similarities, we build a semantic hierarchy to measure multilevel semantic relevance. Second, with a set of local linear projections and probabilistic membership functions, we propose two paradigms for local expert aggregation, i.e., local projection aggregation and local distance aggregation. To learn the cross-modal projections, we optimize the structure risk objective function that involves semantic coherence measurement, local projection consistency, and the complexity penalty of local projections. Compared to existing approaches, a better bias-variance tradeoff is achieved by TINA in real-world cross-modal correlation learning tasks. Extensive experiments on widely used NUS-WIDE and ICML-Challenge for image-text retrieval demonstrate that TINA better adapts to the multilevel semantic relation and content divergence, and, thus, outperforms state of the art with better semantic coherence."}
{"_id": "74257c2a5c9633565c3becdb9139789bcf14b478", "title": "IT Control in the Australian Public Sector: An International Comparison", "text": "Despite widespread adoption of IT control frameworks, little academic empirical research has been undertaken to investigate their use. This paper reports upon research to benchmark the maturity levels of 15 key IT control processes from the Control Objectives for Information and Related Technology (COBIT) in public sector organisations across Australia. It also makes a comparison against a similar benchmark for a mixed sector group from a range of nations, a mixed sector group from Asian-Oceanic nations, and for public sector organisations for all geographic areas. The Australian data were collected in a mail survey of the 387 non-financial public sector organisations identified as having more than 50 employees, which returned a 27% response rate. Patterns seen in the original international survey undertaken by the IS Audit and Control Association in 2002 were also seen in the Australian data. However, the Australian public sector performed better than sectors in all the international benchmarks for the 15 most important IT processes."}
{"_id": "9583ac53a19cdf0db81fef6eb0b63e66adbe2324", "title": "Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent", "text": "We study the resilience to Byzantine failures of distributed implementations of Stochastic Gradient Descent (SGD). So far, distributed machine learning frameworks have largely ignored the possibility of failures, especially arbitrary (i.e., Byzantine) ones. Causes of failures include software bugs, network asynchrony, biases in local datasets, as well as attackers trying to compromise the entire system. Assuming a set of n workers, up to f being Byzantine, we ask how resilient can SGD be, without limiting the dimension, nor the size of the parameter space. We first show that no gradient aggregation rule based on a linear combination of the vectors proposed by the workers (i.e, current approaches) tolerates a single Byzantine failure. We then formulate a resilience property of the aggregation rule capturing the basic requirements to guarantee convergence despite f Byzantine workers. We propose Krum, an aggregation rule that satisfies our resilience property, which we argue is the first provably Byzantine-resilient algorithm for distributed SGD. We also report on experimental evaluations of Krum."}
{"_id": "0760550d3830230a05191766c635cec80a676b7e", "title": "Deep learning with Elastic Averaging SGD", "text": "We study the problem of stochastic optimization for deep learning in the parallel computing environment under communication constraints. A new algorithm is proposed in this setting where the communication and coordination of work among concurrent processes (local workers), is based on an elastic force which links the parameters they compute with a center variable stored by the parameter server (master). The algorithm enables the local workers to perform more exploration, i.e. the algorithm allows the local variables to fluctuate further from the center variable by reducing the amount of communication between local workers and the master. We empirically demonstrate that in the deep learning setting, due to the existence of many local optima, allowing more exploration can lead to the improved performance. We propose synchronous and asynchronous variants of the new algorithm. We provide the stability analysis of the asynchronous variant in the round-robin scheme and compare it with the more common parallelized method ADMM. We show that the stability of EASGD is guaranteed when a simple stability condition is satisfied, which is not the case for ADMM. We additionally propose the momentum-based version of our algorithm that can be applied in both synchronous and asynchronous settings. Asynchronous variant of the algorithm is applied to train convolutional neural networks for image classification on the CIFAR and ImageNet datasets. Experiments demonstrate that the new algorithm accelerates the training of deep architectures compared to DOWNPOUR and other common baseline approaches and furthermore is very communication efficient."}
{"_id": "0e9bac6a2b51e93e73f7f5045d4252972db10b5a", "title": "Large-scale matrix factorization with distributed stochastic gradient descent", "text": "We provide a novel algorithm to approximately factor large matrices with millions of rows, millions of columns, and billions of nonzero elements. Our approach rests on stochastic gradient descent (SGD), an iterative stochastic optimization algorithm. We first develop a novel \"stratified\" SGD variant (SSGD) that applies to general loss-minimization problems in which the loss function can be expressed as a weighted sum of \"stratum losses.\" We establish sufficient conditions for convergence of SSGD using results from stochastic approximation theory and regenerative process theory. We then specialize SSGD to obtain a new matrix-factorization algorithm, called DSGD, that can be fully distributed and run on web-scale datasets using, e.g., MapReduce. DSGD can handle a wide variety of matrix factorizations. We describe the practical techniques used to optimize performance in our DSGD implementation. Experiments suggest that DSGD converges significantly faster and has better scalability properties than alternative algorithms."}
{"_id": "1109b663453e78a59e4f66446d71720ac58cec25", "title": "OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks", "text": "We present an integrated framework for using Convolutional Networks for classification , localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat."}
{"_id": "50629d7d6afd7577ccfd92b35c7e15f79ad4b180", "title": "ON MILLIMETER-WAVE IMAGING OF CONCEALED OBJECTS : APPLICATION USING BACK-PROJECTION ALGORITHM", "text": "Millimeter-wave (MMW) imaging is a powerful tool for the detection of objects concealed under clothing. Several factors including different kinds of objects, variety of covering materials and their thickness, accurate imaging of near-field scattered data affect the success of detection. To practice with such considerations, this paper presents the two-dimensional (2D) images of different targets hidden under various fabric sheets. The W-band inverse synthetic aperture radar (ISAR) data of various target-covering situations are acquired and imaged by applying both the focusing operator based inversion algorithm and the spherical back-projection algorithm. Results of these algorithms are demonstrated and compared to each other to assess the performance of the MMW imaging in detecting the concealed objects of both metallic and dielectric types."}
{"_id": "34e4016ad6362dc1f85c4a755b2b47c4795fbe4d", "title": "A study of prefix hijacking and interception in the internet", "text": "There have been many incidents of prefix hijacking in the Internet. The hijacking AS can blackhole the hijacked traffic. Alternatively, it can transparently intercept the hijacked traffic by forwarding it onto the owner. This paper presents a study of such prefix hijacking and interception with the following contributions: (1). We present a methodology for prefix interception, (2). We estimate the fraction of traffic to any prefix that can be hijacked and intercepted in the Internet today, (3). The interception methodology is implemented and used to intercept real traffic to our prefix, (4). We conduct a detailed study to detect ongoing prefix interception.\n We find that: Our hijacking estimates are in line with the impact of past hijacking incidents and show that ASes higher up in the routing hierarchy can hijack a significant amount of traffic to any prefix, including popular prefixes. A less apparent result is that the same holds for prefix interception too. Further, our implementation shows that intercepting traffic to a prefix in the Internet is almost as simple as hijacking it. Finally, while we fail to detect ongoing prefix interception, the detection exercise highlights some of the challenges posed by the prefix interception problem."}
{"_id": "5ef3fe3b0e2c8e5f06c0bf45e503d18371b31946", "title": "Development of a Dynamic Simulation Tool for the ExoMars Rover", "text": "Future planetary missions, including the 2011 European Space Agency (ESA) ExoMars mission, will require rovers to travel further, faster, and over more demanding terrain than has been encountered to date. To improve overall mobility, advances need to be made in autonomous navigation, power collection, and locomotion. In this paper we focus on the locomotion problem and discuss the development of a planetary rover chassis simulation tool that allows us to study key locomotion test cases such as slope climbing in loose soil. We have also constructed rover wheels and obtained experimental data to validate the wheel-soil interaction module. The main conclusion is that to fully validate such a complex simulation, experimental data from a full rover chassis is required. This is a first step in an on-going effort to validate the simulation with experimental data obtained from a full rover prototype."}
{"_id": "5cbed8c666ab7cb0836c753877b867d9ee0b14dd", "title": "Accurate Angle Estimator for High-Frame-Rate 2-D Vector Flow Imaging", "text": "This paper presents a novel approach for estimating 2-D flow angles using a high-frame-rate ultrasound method. The angle estimator features high accuracy and low standard deviation (SD) over the full 360\u00b0 range. The method is validated on Field II simulations and phantom measurements using the experimental ultrasound scanner SARUS and a flow rig before being tested in vivo. An 8-MHz linear array transducer is used with defocused beam emissions. In the simulations of a spinning disk phantom, a 360\u00b0 uniform behavior on the angle estimation is observed with a median angle bias of 1.01\u00b0 and a median angle SD of 1.8\u00b0. Similar results are obtained on a straight vessel for both simulations and measurements, where the obtained angle biases are below 1.5\u00b0 with SDs around 1\u00b0. Estimated velocity magnitudes are also kept under 10% bias and 5% relative SD in both simulations and measurements. An in vivo measurement is performed on a carotid bifurcation of a healthy individual. A 3-s acquisition during three heart cycles is captured. A consistent and repetitive vortex is observed in the carotid bulb during systoles."}
{"_id": "5254a6bc467b49f27c00f4654d03dc5d69d9d38d", "title": "Predictive Data Mining for Diagnosis of Thyroid Disease using Neural Network", "text": "This paper presents a systematic approach for earlier diagnosis of Thyroid disease using back propagation algorithm used in neural network. Back propagation algorithm is a widely used algorithm in this field. ANN has been developed based on back propagation of error used for earlier prediction of disease. ANN was subsequently trained with experimental data and testing is carried out using data that was not used during training process. Results show that outcome of ANN is in good agreement with experimental data; this indicates that developed neural network can be used as an alternative for earlier prediction of a disease. Keywords\u2014Back propagation, decision tree, gradient descent, prediction, Supervised Learning"}
{"_id": "62b45eb6deedc89f3d3ef428aa6be2c4bba4eeb4", "title": "A Robust Single-Phase PLL System With Stable and Fast Tracking", "text": "Phase, frequency, and amplitude of single-phase voltages are the most important and basic information required for single-phase grid-connected applications. This paper proposes a method for instantly and robustly estimating the phase, frequency, and amplitude of frequency-varying single-phase signals for such applications, which is a phase-locked loop (PLL) method based on a structure. The proposed method has the following attractive features: 1) the estimation system results in a nonlinear system, but it can be stabilized; 2) all subsystems constructing the system can be easily designed; 3) \"two-phase signal generator\"; can autotune a single system parameter in response to varying frequency of the injected signal; 4) high-order \"PLL-Controllers\" allowing fast tracking can be stably used; and 5) even in hostile envelopments, favorable instant estimates can be obtained. This paper proposes the estimation method and verifies its usefulness by extensive numerical experiments."}
{"_id": "c7d05ca5bddd91d1d929aa95b45c0b6b29ec3c11", "title": "MixedFusion: Real-Time Reconstruction of an Indoor Scene with Dynamic Objects", "text": "Real-time indoor scene reconstruction aims to recover the 3D geometry of an indoor scene in real time with a sensor scanning the scene. Previous works of this topic consider pure static scenes, but in this paper, we focus on more challenging cases that the scene contains dynamic objects, for example, moving people and floating curtains, which are quite common in reality and thus are eagerly required to be handled. We develop an end-to-end system using a depth sensor to scan a scene on the fly. By proposing a Sigmoid-based Iterative Closest Point (S-ICP) method, we decouple the camera motion and the scene motion from the input sequence and segment the scene into static and dynamic parts accordingly. The static part is used to estimate the camera rigid motion, while for the dynamic part, graph node-based motion representation and model-to-depth fitting are applied to reconstruct the scene motions. With the camera and scene motions reconstructed, we further propose a novel mixed voxel allocation scheme to handle static and dynamic scene parts with different mechanisms, which helps to gradually fuse a large scene with both static and dynamic objects. Experiments show that our technique successfully fuses the geometry of both the static and dynamic objects in a scene in real time, which extends the usage of the current techniques for indoor scene reconstruction."}
{"_id": "ab577cd7b229744d6a153895cbe73e9f0480e631", "title": "Faceted Search", "text": "We live in an information age that requires us, more than ever, to represent, access, and use information. Over the last several decades, we have developed a modern science and technology for information retrieval, relentlessly pursuing the vision of a \u201cmemex\u201d that Vannevar Bush proposed in his seminal article, \u201cAs We May Think.\u201d Faceted search plays a key role in this program. Faceted search addresses weaknesses of conventional search approaches and has emerged as a foundation for interactive information retrieval. User studies demonstrate that faceted search provides more effective information-seeking support to users than best-first search. Indeed, faceted search has become increasingly prevalent in online information access systems, particularly for e-commerce and site search. In this lecture, we explore the history, theory, and practice of faceted search. Although we cannot hope to be exhaustive, our aim is to provide sufficient depth and breadth to offer a useful resource to both researchers and practitioners. Because faceted search is an area of interest to computer scientists, information scientists, interface designers, and usability researchers, we do not assume that the reader is a specialist in any of these fields. Rather, we offer a self-contained treatment of the topic, with an extensive bibliography for those who would like to pursue particular aspects in more depth."}
{"_id": "b3f1d0a8b2ea51baa44461bddd627cf316628b7e", "title": "Classifying distinct basal cell carcinoma subtype by\u00a0means of dermatoscopy and reflectance confocal microscopy.", "text": "BACKGROUND\nThe current guidelines for the management of basal cell carcinoma (BCC) suggest a different therapeutic approach according to histopathologic subtype. Although dermatoscopic and confocal criteria of BCC have been investigated, no specific studies were performed to evaluate the distinct reflectance confocal microscopy (RCM) aspects of BCC subtypes.\n\n\nOBJECTIVES\nTo define the specific dermatoscopic and confocal criteria for delineating different BCC subtypes.\n\n\nMETHODS\nDermatoscopic and confocal images of histopathologically confirmed BCCs were retrospectively evaluated for the presence of predefined criteria. Frequencies of dermatoscopic and confocal parameters are provided. Univariate and adjusted odds ratios were calculated. Discriminant analyses were performed to define the independent confocal criteria for distinct BCC subtypes.\n\n\nRESULTS\nEighty-eight BCCs were included. Dermatoscopically, superficial BCCs (n=44) were primarily typified by the presence of fine telangiectasia, multiple erosions, leaf-like structures, and revealed cords connected to the epidermis and epidermal streaming upon RCM. Nodular BCCs (n=22) featured the classic dermatoscopic features and well outlined large basaloid islands upon RCM. Infiltrative BCCs (n=22) featured structureless, shiny red areas, fine telangiectasia, and arborizing vessels on dermatoscopy and dark silhouettes upon RCM.\n\n\nLIMITATIONS\nThe retrospective design.\n\n\nCONCLUSION\nDermatoscopy and confocal microscopy can reliably classify different BCC subtypes."}
{"_id": "062c1c1b3e280353242dd2fb3c46178b87cb5e46", "title": "Fitted Natural Actor-Critic: A New Algorithm for Continuous State-Action MDPs", "text": "In this paper we address reinforcement learning problems with continuous state-action spaces. We propose a new algorithm, tted natural actor-critic (FNAC), that extends the work in [1] to allow for general function approximation and data reuse. We combine the natural actor-critic architecture [1] with a variant of tted value iteration using importance sampling. The method thus obtained combines the appealing features of both approaches while overcoming their main weaknesses: the use of a gradient-based actor readily overcomes the di culties found in regression methods with policy optimization in continuous action-spaces; in turn, the use of a regression-based critic allows for e cient use of data and avoids convergence problems that TD-based critics often exhibit. We establish the convergence of our algorithm and illustrate its application in a simple continuous space, continuous action problem."}
{"_id": "087350c680e791093624be7d1094c4cef73c2214", "title": "Learning Discriminative Affine Regions via", "text": "We present an accurate method for estimation of the affine shape of local features. The method is trained in a novel way, exploiting the recently proposed HardNet triplet loss. The loss function is driven by patch descriptor differences, avoiding problems with symmetries. Moreover, such training process does not require precisely geometrically aligned patches. The affine shape is represented in a way amenable to learning by stochastic gradient descent. When plugged into a state-of-the-art wide baseline matching algorithm, the performance on standard datasets improves in both the number of challenging pairs matched and the number of inliers. Finally, AffNet with combination of Hessian detector and HardNet descriptor improves bag-of-visual-words based state of the art on Oxford5k and Paris6k by large margin, 4.5 and 4.2 mAP points respectively. The source code and trained networks are available at https://github.com/ducha-aiki/affnet"}
{"_id": "81f40dec4eb9ba43fe0d55b9accb75626c9745e4", "title": "Evaluating the Effectiveness of Serious Games for Cultural Awareness: The Icura User Study", "text": "There is an increasing awareness about the potential of serious games for education and training in many disciplines. However, research still witnesses a lack of methodologies, guidelines and best practices on how to develop effective serious games and how to integrate them in the actual learning and training processes. This process of integration heavily depends on providing and spreading evidence of the effectiveness of serious games This paper reports a user study to evaluate the effectiveness of Icura, a serious game about Japanese culture and etiquette. The evaluation methodology extends the set of instruments used in previous studies by evaluating the effects of the game on raising awareness, by avoiding the selective attention bias and by assessing the medium-term retention. . With this research we aim to provide a handy toolkit for evaluating the effectiveness a serious games for cultural awareness and heritage."}
{"_id": "f97f0902698abff8a2bc3488e8cca223e5c357a1", "title": "Feature selection via sensitivity analysis of SVM probabilistic outputs", "text": "Feature selection is an important aspect of solving data-mining and machine-learning problems. This paper proposes a feature-selection method for the Support Vector Machine (SVM) learning. Like most feature-selection methods, the proposed method ranks all features in decreasing order of importance so that more relevant features can be identified. It uses a novel criterion based on the probabilistic outputs of SVM. This criterion, termed Feature-based Sensitivity of Posterior Probabilities (FSPP), evaluates the importance of a specific feature by computing the aggregate value, over the feature space, of the absolute difference of the probabilistic outputs of SVM with and without the feature. The exact form of this criterion is not easily computable and approximation is needed. Four approximations, FSPP1-FSPP4, are proposed for this purpose. The first two approximations evaluate the criterion by randomly permuting the values of the feature among samples of the training data. They differ in their choices of the mapping function from standard SVM output to its probabilistic output: FSPP1 uses a simple threshold function while FSPP2 uses a sigmoid function. The second two directly approximate the criterion but differ in the smoothness assumptions of criterion with respect to the features. The performance of these approximations, used in an overall feature-selection scheme, is then evaluated on various artificial problems and real-world problems, including datasets from the recent Neural Information Processing Systems (NIPS) feature selection competition. FSPP1-3 show good performance consistently with FSPP2 being the best overall by a slight margin. The performance of FSPP2 is competitive with some of the best performing feature-selection methods in the literature on the datasets that we have tested. Its associated computations are modest and hence it is suitable as a feature-selection method for SVM applications."}
{"_id": "f424355ca1351cadc4d0f84d362933da6cf1eea7", "title": "BzTree: A High-Performance Latch-free Range Index for Non-Volatile Memory", "text": "Storing a database (rows and indexes) entirely in non-volatile memory (NVM) potentially enables both high performance and fast recovery. To fully exploit parallelism on modern CPUs, modern main-memory databases use latch-free (lock-free) index structures, e.g. Bw-tree or skip lists. To achieve high performance NVMresident indexes also need to be latch-free. This paper describes the design of the BzTree, a latch-free B-tree index designed for NVM. The BzTree uses a persistent multi-word compare-and-swap operation (PMwCAS) as a core building block, enabling an index design that has several important advantages compared with competing index structures such as the Bw-tree. First, the BzTree is latch-free yet simple to implement. Second, the BzTree is fast showing up to 2x higher throughput than the Bw-tree in our experiments. Third, the BzTree does not require any special-purpose recovery code. Recovery is near-instantaneous and only involves rolling back (or forward) any PMwCAS operations that were in-flight during failure. Our end-to-end recovery experiments of BzTree report an average recovery time of 145 \u03bcs. Finally, the same BzTree implementation runs seamlessly on both volatile RAM and NVM, which greatly reduces the cost of code maintenance. PVLDB Reference Format: Joy Arulraj, Justin Levandoski, Umar Farooq Minhas,Per-Ake Larson. BzTree: A High-Performance Latch-free Range Index for Non-Volatile Memory. PVLDB, 11(4): xxxx-yyyy, 2018. DOI: https://doi.org/10.1145/3164135.3164147"}
{"_id": "3206fe6bbad88896b8608fafe7c9295f2504b745", "title": "DynamicFusion: Reconstruction and tracking of non-rigid scenes in real-time", "text": "We present the first dense SLAM system capable of reconstructing non-rigidly deforming scenes in real-time, by fusing together RGBD scans captured from commodity sensors. Our DynamicFusion approach reconstructs scene geometry whilst simultaneously estimating a dense volumetric 6D motion field that warps the estimated geometry into a live frame. Like KinectFusion, our system produces increasingly denoised, detailed, and complete reconstructions as more measurements are fused, and displays the updated model in real time. Because we do not require a template or other prior scene model, the approach is applicable to a wide range of moving objects and scenes."}
{"_id": "1d13f96a076cb0e3590f13209c66ad26aa33792f", "title": "Learning Heterogeneous Knowledge Base Embeddings for Explainable Recommendation", "text": "Providing model-generated explanations in recommender systems is important to user experience. State-of-the-art recommendation algorithms \u2014 especially the collaborative filtering (CF)-based approaches with shallow or deep models \u2014 usually work with various unstructured information sources for recommendation, such as textual reviews, visual images, and various implicit or explicit feedbacks. Though structured knowledge bases were considered in content-based approaches, they have been largely ignored recently due to the availability of vast amounts of data and the learning power of many complex models. However, structured knowledge bases exhibit unique advantages in personalized recommendation systems. When the explicit knowledge about users and items is considered for recommendation, the system could provide highly customized recommendations based on users\u2019 historical behaviors and the knowledge is helpful for providing informed explanations regarding the recommended items. A great challenge for using knowledge bases for recommendation is how to integrate large-scale structured and unstructured data, while taking advantage of collaborative filtering for highly accurate performance. Recent achievements in knowledge-base embedding (KBE) sheds light on this problem, which makes it possible to learn user and item representations while preserving the structure of their relationship with external knowledge for explanation. In this work, we propose to explain knowledge-base embeddings for explainable recommendation. Specifically, we propose a knowledge-base representation learning framework to embed heterogeneous entities for recommendation, and based on the embedded knowledge base, a soft matching algorithm is proposed to generate personalized explanations for the recommended items. Experimental results on real-world e-commerce datasets verified the superior recommendation performance and the explainability power of our approach compared with state-of-the-art baselines."}
{"_id": "896191c053a5c98f2478cb20246ae447ebcd6e38", "title": "Optimal Mass Transport: Signal processing and machine-learning applications", "text": "Transport-based techniques for signal and data analysis have recently received increased interest. Given their ability to provide accurate generative models for signal intensities and other data distributions, they have been used in a variety of applications, including content-based retrieval, cancer detection, image superresolution, and statistical machine learning, to name a few, and they have been shown to produce state-of-the-art results. Moreover, the geometric characteristics of transport-related metrics have inspired new kinds of algorithms for interpreting the meaning of data distributions. Here, we provide a practical overview of the mathematical underpinnings of mass transport-related methods, including numerical implementation, as well as a review, with demonstrations, of several applications. Software accompanying this article is available from [43]."}
{"_id": "ef8d41dace32a312ac7e29688df2702089f4ba2d", "title": "Design and Experiment of a Conformal Monopulse Antenna for Passive Radar Applications", "text": "This paper proposed a design scheme of wide band monopulse antenna system for the passive radar seeker (PRS) of anti-radiation missile (ARM). To save the installation space of the PRS, the conical conformal log periodic antennas (LPDA) are employed to form the monopulse antenna system. The specific implementation scheme of PRS based on the conically conformal monopulse antennas was designed and analyzed. A practical monopulse antenna system composed of two log periodic antennas was designed and fabricated over the operating frequency range of 1GHz to 8GHz. A loaded resistor with impedance equal to 50\u03a9 is used to reduce the return loss at low frequency. The experimental results demonstrate that acceptable wide band impedance matching and port isolation for each fabricated antenna was achieved. Furthermore, the wide band sum and difference radiation patterns were formed, which validates the proposed conformal monopulse antenna design scheme for the PRS application."}
{"_id": "234904ac1dbbeb507f8c63abf327fe47d023810c", "title": "Prediction of Web Users Browsing Behavior : A Review", "text": "In Web Usage Mining (WUM), Classification technique is useful in web user prediction, where user\u2019s browsing behavior is predicted. In Web prediction, classification is done first to predict the next Web pages set that a user may visit based on history of previously visited pages. Predicting user\u2019s behavior effectively is important in various applications like recommendation systems, smart phones etc. Currently various prediction systems are available based on Markov model, Association Rule Mining (ARM), classification techniques, etc. In Markov prediction model it cannot predict for a session that was not observed previously in the training set and prediction time is also constant. In ARM, Apriori algorithm requires more time to generate item sets. This paper presents review of various prediction models used to predict web users browsing behavior. Key Words\u2014 WUM, classification, Browsing behavior, prediction, ARM, session"}
{"_id": "5c785d21421bfed0beba8de06a13fae6fe318e50", "title": "Testing and evaluation of a wearable augmented reality system for natural outdoor environments", "text": "This paper describes performance evaluation of a wearable augmented reality system for natural outdoor environments. Applied Research Associates (ARA), as prime integrator on the DARPA ULTRA-Vis (Urban Leader Tactical, Response, Awareness, and Visualization) program, is developing a soldier-worn system to provide intuitive \u2018heads-up\u2019 visualization of tactically-relevant geo-registered icons. Our system combines a novel pose estimation capability, a helmet-mounted see-through display, and a wearable processing unit to accurately overlay geo-registered iconography (e.g., navigation waypoints, sensor points of interest, blue forces, aircraft) on the soldier\u2019s view of reality. We achieve accurate pose estimation through fusion of inertial, magnetic, GPS, terrain data, and computer-vision inputs. We leverage a helmet-mounted camera and custom computer vision algorithms to provide terrain-based measurements of absolute orientation (i.e., orientation of the helmet with respect to the earth). These orientation measurements, which leverage mountainous terrain horizon geometry and mission planning landmarks, enable our system to operate robustly in the presence of external and body-worn magnetic disturbances. Current field testing activities across a variety of mountainous environments indicate that we can achieve high icon geo-registration accuracy (<10mrad) using these vision-based methods."}
{"_id": "a3721e874b0d80c21ac0db5034a4917f8ef94e5e", "title": "Position Paper: Physical Unclonable Functions for IoT Security", "text": "Devices in the Internet of Things (IoT) introduce unique security challenges due to their operating conditions and device limitations. Existing security solutions based on classical cryptography have significant drawbacks in IoT devices, primarily due to the possibility of physical and side channel attacks. As an alternative approach, this position paper advocates the use of physical unclonable functions (PUFs) as a security primitive for developing security solutions for IoT devices. Preliminary work on developing a PUF based mutual authentication protocol is also presented."}
{"_id": "7e561868b2da2575b09ffb453ace9590cdcf4f9a", "title": "Predicting at-risk novice Java programmers through the analysis of online protocols", "text": "In this study, we attempted to quantify indicators of novice programmer progress in the task of writing programs, and we evaluated the use of these indicators for identifying academically at-risk students. Over the course of nine weeks, students completed five different graded programming exercises in a computer lab. Using an instrumented version of BlueJ, an integrated development environment for Java, we collected novice compilations and explored the errors novices encountered, the locations of these errors, and the frequency with which novices compiled their programs. We identified which frequently encountered errors and which compilation behaviors were characteristic of at-risk students. Based on these findings, we developed linear regression models that allowed prediction of students' scores on a midterm exam. However, the models derived could not accurately predict the at-risk students. Although our goal of identifying at-risk students was not attained, we have gained insights regarding the compilation behavior of our students, which may help us identify students who are in need of intervention."}
{"_id": "234a741a8bcdf25f5924101edc1f67f164f5277b", "title": "Cues to deception.", "text": "Do people behave differently when they are lying compared with when they are telling the truth? The combined results of 1,338 estimates of 158 cues to deception are reported. Results show that in some ways, liars are less forthcoming than truth tellers, and they tell less compelling tales. They also make a more negative impression and are more tense. Their stories include fewer ordinary imperfections and unusual contents. However, many behaviors showed no discernible links, or only weak links, to deceit. Cues to deception were more pronounced when people were motivated to succeed, especially when the motivations were identity relevant rather than monetary or material. Cues to deception were also stronger when lies were about transgressions."}
{"_id": "7ac64e54d377d2e765158cb545df5013e92905da", "title": "Deception and design: the impact of communication technology on lying behavior", "text": "Social psychology has demonstrated that lying is an important, and frequent, part of everyday social interactions. As communication technologies become more ubiquitous in our daily interactions, an important question for developers is to determine how the design of these technologies affects lying behavior. The present research reports the results of a diary study, in which participants recorded all of their social interactions and lies for seven days. The data reveal that participants lied most on the telephone and least in email, and that lying rates in face-to-face and instant messaging interactions were approximately equal. This pattern of results suggests that the design features of communication technologies (e.g., synchronicity, recordability, and copresence) affect lying behavior in important ways, and that these features must be considered by both designers and users when issues of deception and trust arise. The implications for designing applications that increase, decrease or detect deception are discussed."}
{"_id": "c3245983b40b3c9b90f62f50bbe99372c04e973d", "title": "Psychological aspects of natural language. use: our words, our selves.", "text": "The words people use in their daily lives can reveal important aspects of their social and psychological worlds. With advances in computer technology, text analysis allows researchers to reliably and quickly assess features of what people say as well as subtleties in their linguistic styles. Following a brief review of several text analysis programs, we summarize some of the evidence that links natural word use to personality, social and situational fluctuations, and psychological interventions. Of particular interest are findings that point to the psychological value of studying particles-parts of speech that include pronouns, articles, prepositions, conjunctives, and auxiliary verbs. Particles, which serve as the glue that holds nouns and regular verbs together, can serve as markers of emotional state, social identity, and cognitive styles."}
{"_id": "f8a5ada24bb625ce5bbcdb9004da911d7fa845b2", "title": "Testing the Interactivity Model: Communication Processes, Partner Assessments, and the Quality of Collaborative Work", "text": "EE K. BuRGOON IS Professor of Communication and Director of the Center for the Management of Information at the University of Arizona. She is the author or coauthor of seven books and nearly two hundred articles and chapters on topics related to interpersonal and nonverbal communication, deception and credibility, mass media, and new communication technologies. Her current research focuses on interactivity, adaptation, and attunement in communication."}
{"_id": "0d202e5e05600222cc0c6f53504e046996fdd3c8", "title": "Male and Female Spoken Language Differences : Stereotypes and Evidence", "text": "Male speech and female speech have been observed to differ in their form, topic, content, and use. Early writers were largely introspective in their analyses; more recent work has begun to provide empirical evidence. Men may be more loquacious and directive; they use more nonstandard forms, talk more about sports, money, and business, and more frequently refer to time, space, quantity, destructive action, perceptual attributes, physical movements, and objects. Women are often more supportive, polite, and expressive, talk more about home and family, and use more words implying feeling, evaluation, interpretation, and psychological state. A comprehensive theory of \"genderlect\" must include information about linguistic features under a multiplicity of conditions."}
{"_id": "416f27b7ed8764a5d609965ba9a08b67affdd1b8", "title": "Study of Deep Learning Techniques for Side-Channel Analysis and Introduction to ASCAD Database", "text": "To provide insurance on the resistance of a system against side-channel analysis, several national or private schemes are today promoting an evaluation strategy, common in classical cryptography, which is focussing on the most powerful adversary who may train to learn about the dependency between the device behaviour and the sensitive data values. Several works have shown that this kind of analysis, known as Template Attacks in the side-channel domain, can be rephrased as a classical Machine Learning classification problem with learning phase. Following the current trend in the latter area, recent works have demonstrated that deep learning algorithms were very efficient to conduct security evaluations of embedded systems and had many advantages compared to the other methods. Unfortunately, their hyper-parametrization has often been kept secret by the authors who only discussed on the main design principles and on the attack efficiencies. This is clearly an important limitation of previous works since (1) the latter parametrization is known to be a challenging question in Machine Learning and (2) it does not allow for the reproducibility of the presented results. This paper aims to address these limitations in several ways. First, completing recent works, we propose a comprehensive study of deep learning algorithms when applied in the context of side-channel analysis and we discuss the links with the classical template attacks. Secondly, we address the question of the choice of the hyper-parameters for the class of multi-layer perceptron networks and convolutional neural networks. Several benchmarks and rationales are given in the context of the analysis of a masked implementation of the AES algorithm. To enable perfect reproducibility of our tests, this work also introduces an open platform including all the sources of the target implementation together with the campaign of electromagnetic measurements exploited in our benchmarks. This open database, named ASCAD, has been specified to serve as a common basis for further works on this subject. Our work confirms the conclusions made by Cagli et al. at CHES 2017 about the high potential of convolutional neural networks. Interestingly, it shows that the approach followed to design the algorithm VGG-16 used for image recognition seems also to be sound when it comes to fix an architecture for side-channel analysis."}
{"_id": "193debca0be1c38dabc42dc772513e6653fd91d8", "title": "Mnemonic Descent Method: A Recurrent Process Applied for End-to-End Face Alignment", "text": "Cascaded regression has recently become the method of choice for solving non-linear least squares problems such as deformable image alignment. Given a sizeable training set, cascaded regression learns a set of generic rules that are sequentially applied to minimise the least squares problem. Despite the success of cascaded regression for problems such as face alignment and head pose estimation, there are several shortcomings arising in the strategies proposed thus far. Specifically, (a) the regressors are learnt independently, (b) the descent directions may cancel one another out and (c) handcrafted features (e.g., HoGs, SIFT etc.) are mainly used to drive the cascade, which may be sub-optimal for the task at hand. In this paper, we propose a combined and jointly trained convolutional recurrent neural network architecture that allows the training of an end-to-end to system that attempts to alleviate the aforementioned drawbacks. The recurrent module facilitates the joint optimisation of the regressors by assuming the cascades form a nonlinear dynamical system, in effect fully utilising the information between all cascade levels by introducing a memory unit that shares information across all levels. The convolutional module allows the network to extract features that are specialised for the task at hand and are experimentally shown to outperform hand-crafted features. We show that the application of the proposed architecture for the problem of face alignment results in a strong improvement over the current state-of-the-art."}
{"_id": "33ef870128a5a7ea3c98274f0753324083b955aa", "title": "BigDataScript: a scripting language for data pipelines", "text": "MOTIVATION\nThe analysis of large biological datasets often requires complex processing pipelines that run for a long time on large computational infrastructures. We designed and implemented a simple script-like programming language with a clean and minimalist syntax to develop and manage pipeline execution and provide robustness to various types of software and hardware failures as well as portability.\n\n\nRESULTS\nWe introduce the BigDataScript (BDS) programming language for data processing pipelines, which improves abstraction from hardware resources and assists with robustness. Hardware abstraction allows BDS pipelines to run without modification on a wide range of computer architectures, from a small laptop to multi-core servers, server farms, clusters and clouds. BDS achieves robustness by incorporating the concepts of absolute serialization and lazy processing, thus allowing pipelines to recover from errors. By abstracting pipeline concepts at programming language level, BDS simplifies implementation, execution and management of complex bioinformatics pipelines, resulting in reduced development and debugging cycles as well as cleaner code.\n\n\nAVAILABILITY AND IMPLEMENTATION\nBigDataScript is available under open-source license at http://pcingola.github.io/BigDataScript."}
{"_id": "d174b88552cf2531e10af49e0d182802f0c8d604", "title": "Implicit and Explicit Stereotyping of Adolescents", "text": "Although adolescents are commonly assumed to be rebellious, risky and moody, two experiments demonstrate for the first time that these beliefs operate both explicitly and implicitly as stereotypes. In Experiment 1, participants (a) explicitly endorsed adolescent stereotypes and (b) implicitly associated adolescent stereotyped words more rapidly with the adolescent than the adult social category. Individual differences in the explicit endorsement of adolescent stereotypes predicted explicit perceptions of the rebelliousness of a 17-year-old but not a 71-year-old, although individual differences in implicit stereotyping did not. Identification with adults was associated with greater implicit stereotyping but not explicit stereotyping. In Experiment 2, subliminal exposure to adolescent stereotyped words increased subsequent perceptions of the rebelliousness of a 17-year-old but not a 71-year-old. Although individual differences in implicit adolescent stereotyping did not predict explicit evaluations of adolescents, stereotypes of adolescents nevertheless influenced explicit evaluations unconsciously and unintentionally."}
{"_id": "7c17025c540b88df14da35229618b5e896ab9528", "title": "Rain Streak Removal Using Layer Priors", "text": "This paper addresses the problem of rain streak removal from a single image. Rain streaks impair visibility of an image and introduce undesirable interference that can severely affect the performance of computer vision algorithms. Rain streak removal can be formulated as a layer decomposition problem, with a rain streak layer superimposed on a background layer containing the true scene content. Existing decomposition methods that address this problem employ either dictionary learning methods or impose a low rank structure on the appearance of the rain streaks. While these methods can improve the overall visibility, they tend to leave too many rain streaks in the background image or over-smooth the background image. In this paper, we propose an effective method that uses simple patch-based priors for both the background and rain layers. These priors are based on Gaussian mixture models and can accommodate multiple orientations and scales of the rain streaks. This simple approach removes rain streaks better than the existing methods qualitatively and quantitatively. We overview our method and demonstrate its effectiveness over prior work on a number of examples."}
{"_id": "6a584f3fbc4c5333825ce0a8e8a30776097c81c5", "title": "The Microfinance Promise", "text": "Contributions to this research made by a member of The Financial Access Initiative."}
{"_id": "4279602fdbe037b935b63bcadbbfbe503a7c30ee", "title": "You are what you like! Information leakage through users' Interests", "text": "Suppose that a Facebook user, whose age is hidden or missing, likes Britney Spears. Can you guess his/her age? Knowing that most Britney fans are teenagers, it is fairly easy for humans to answer this question. Interests (or \u201clikes\u201d) of users is one of the highly-available on-line information. In this paper, we show how these seemingly harmless interests (e.g., music interests) can leak privacysensitive information about users. In particular, we infer their undisclosed (private) attributes using the public attributes of other users sharing similar interests. In order to compare user-defined interest names, we extract their semantics using an ontologized version of Wikipedia and measure their similarity by applying a statistical learning method. Besides self-declared interests in music, our technique does not rely on any further information about users such as friend relationships or group belongings. Our experiments, based on more than 104K public profiles collected from Facebook and more than 2000 private profiles provided by volunteers, show that our inference technique efficiently predicts attributes that are very often hidden by users. To the best of our knowledge, this is the first time that user interests are used for profiling, and more generally, semantics-driven inference of private data is addressed."}
{"_id": "47b4d1e152efebb00ac9f948d610e6c6a27d34ea", "title": "Automated Method for Discrimination of Arrhythmias Using Time, Frequency, and Nonlinear Features of Electrocardiogram Signals", "text": "We developed an automated approach to differentiate between different types of arrhythmic episodes in electrocardiogram (ECG) signals, because, in real-life scenarios, a software application does not know in advance the type of arrhythmia a patient experiences. Our approach has four main stages: (1) Classification of ventricular fibrillation (VF) versus non-VF segments\u2014including atrial fibrillation (AF), ventricular tachycardia (VT), normal sinus rhythm (NSR), and sinus arrhythmias, such as bigeminy, trigeminy, quadrigeminy, couplet, triplet\u2014using four image-based phase plot features, one frequency domain feature, and the Shannon entropy index. (2) Classification of AF versus non-AF segments. (3) Premature ventricular contraction (PVC) detection on every non-AF segment, using a time domain feature, a frequency domain feature, and two features that characterize the nonlinearity of the data. (4) Determination of the PVC patterns, if present, to categorize distinct types of sinus arrhythmias and NSR. We used the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database, Creighton University\u2019s VT arrhythmia database, the MIT-BIH atrial fibrillation database, and the MIT-BIH malignant ventricular arrhythmia database to test our algorithm. Binary decision tree (BDT) and support vector machine (SVM) classifiers were used in both stage 1 and stage 3. We also compared our proposed algorithm\u2019s performance to other published algorithms. Our VF detection algorithm was accurate, as in balanced datasets (and unbalanced, in parentheses) it provided an accuracy of 95.1% (97.1%), sensitivity of 94.5% (91.1%), and specificity of 94.2% (98.2%). The AF detection was accurate, as the sensitivity and specificity in balanced datasets (and unbalanced, in parentheses) were found to be 97.8% (98.6%) and 97.21% (97.1%), respectively. Our PVC detection algorithm was also robust, as the accuracy, sensitivity, and specificity were found to be 99% (98.1%), 98.0% (96.2%), and 98.4% (99.4%), respectively, for balanced and (unbalanced) datasets."}
{"_id": "95a8087f81f7c60544c4e9cfec67f4c1513ff789", "title": "An Algebra and Equivalences to Transform Graph Patterns in Neo4j", "text": "Modern query optimizers of relational database systems embody more than three decades of research and practice in the area of data management and processing. Key advances include algebraic query transformation, intelligent search space pruning, and modular optimizer architectures. Surprisingly, many of these contributions seem to have been overlooked in the emerging field of graph databases so far. In particular, we believe that query optimization based on a general graph algebra and its equivalences can greatly improve on the current state of the art. Although some graph algebras have already been proposed, they have often been developed in a context, in which a relational database system is used as a backend to process graph data. As a consequence, these algebras are typically tightly coupled to the relational algebra, making them unsuitable for native graph databases. While we support the approach of extending the relational algebra, we argue that graph-specific operations should be defined at a higher level, independent of the database backend. In this paper, we introduce such a general graph algebra and corresponding equivalences. We demonstrate how it can be used to optimize Cypher queries in the setting of the Neo4j native graph database."}
{"_id": "3c203e4687fab9bacc2adcff2cbcab2b6d7937dd", "title": "Information theory and neural coding", "text": "Information theory quantifies how much information a neural response carries about the stimulus. This can be compared to the information transferred in particular models of the stimulus\u2013response function and to maximum possible information transfer. Such comparisons are crucial because they validate assumptions present in any neurophysiological analysis. Here we review information-theory basics before demonstrating its use in neural coding. We show how to use information theory to validate simple stimulus\u2013response models of neural coding of dynamic stimuli. Because these models require specification of spike timing precision, they can reveal which time scales contain information in neural coding. This approach shows that dynamic stimuli can be encoded efficiently by single neurons and that each spike contributes to information transmission. We argue, however, that the data obtained so far do not suggest a temporal code, in which the placement of spikes relative to each other yields additional information."}
{"_id": "e835b786e552f45da8c8fa7af0dfc3c571f4150c", "title": "ANE: Network Embedding via Adversarial Autoencoders", "text": "Network embedding is an important method to learn low-dimensional representations of vertexes in network, whose goal is to capture and preserve the highly non-linear network structures. Here, we propose an Adversarial autoencoders based Network Embedding method (ANE for short), which utilizes the rencently proposed adversarial autoencoders to perform variational inference by matching the aggregated posterior of low-dimensional representations of vertexes with an arbitraray prior distribution. This framework introduces adversarial regularization to autoencoders. And it is able to attaches the latent representations of similar vertexes to each other and thus prevents the manifold fracturing problem that is typically encountered in the embeddings learnt by the autoencoders. Experiments demonstrate the effictiveness of ANE on link prediction and multi-label classification on three real-world information networks."}
{"_id": "6be966504d72b77886af302cf7e28a7d5c14e624", "title": "Multiview projectors/cameras system for 3D reconstruction of dynamic scenes", "text": "Active vision systems are usually limited to either partial or static scene reconstructions. In this paper, we propose to acquire the entire 3D shape of a dynamic scene. This is performed using a multiple projectors and cameras system, that allows to recover the entire shape of the object within a single scan at each frame. Like previous approaches, a static and simple pattern is used to avoid interferences of multiple patterns projected on the same object. In this paper, we extend the technique to capture a dense entire shape of a moving object with accuracy and high video frame rate. To achieve this, we mainly propose two additional steps; one is checking the consistency between the multiple cameras and projectors, and the other is an algorithm for light sectioning based on a plane parameter optimization. In addition, we also propose efficient noise reduction and mesh generation algorithm which are necessary for practical applications. In the experiments, we show that we can successfully reconstruct dense entire shapes of moving objects. Results are illustrated on real data from a system composed of six projectors and six cameras that was actually built."}
{"_id": "65c0d042d2ee7e4b71992e97f8bb42f028facac6", "title": "Using Machine Learning to Break Visual Human Interaction Proofs (HIPs)", "text": "Machine learning is often used to automatically solve human tasks. In this paper, we look for tasks where machine learning algorithms are not as good as humans with the hope of gaining insight into their current limitations. We studied various Human Interactive Proofs (HIPs) on the market, because they are systems designed to tell computers and humans apart by posing challenges presumably too hard for computers. We found that most HIPs are pure recognition tasks which can easily be broken using machine learning. The harder HIPs use a combination of segmentation and recognition tasks. From this observation, we found that building segmentation tasks is the most effective way to confuse machine learning algorithms. This has enabled us to build effective HIPs (which we deployed in MSN Passport), as well as design challenging segmentation tasks for machine learning algorithms."}
{"_id": "29d583c9ed11377d02158150b61f2c4ce9ad5fb1", "title": "Area-efficient cross-coupled charge pump for on-chip solar cell", "text": "In this paper, an area-efficient cross-coupled charge pump for standalone chip using on-chip solar cell is proposed. The proposed cross-coupled charge pump outputs higher voltage than conventional cross-coupled charge pump on the same area by adding capacitors to drive MOS transistors. The proposed circuit is fabricated in 0.18um standard CMOS process, and outputs 80mV higher than the conventional circuit with 500mV input and 100kHz clock frequency."}
{"_id": "745782902e97be8fbacd1e05d283f11104e2fec6", "title": "An introduction to ROC analysis", "text": "Receiver operating characteristics (ROC) graphs are useful for organizing classifiers and visualizing their performance. ROC graphs are commonly used in medical decision making, and in recent years have been used increasingly in machine learning and data mining research. Although ROC graphs are apparently simple, there are some common misconceptions and pitfalls when using them in practice. The purpose of this article is to serve as an introduction to ROC graphs and as a guide for using them in research. 2005 Elsevier B.V. All rights reserved."}
{"_id": "509d6923c3d18a78b49a8573189ef9a1a5d56158", "title": "Sentiment Intensity Ranking among Adjectives Using Sentiment Bearing Word Embeddings", "text": "Identification of intensity ordering among polar (positive or negative) words which have the same semantics can lead to a finegrained sentiment analysis. For example, master, seasoned and familiar point to different intensity levels, though they all convey the same meaning (semantics), i.e., expertise: having a good knowledge of. In this paper, we propose a semisupervised technique that uses sentiment bearing word embeddings to produce a continuous ranking among adjectives that share common semantics. Our system demonstrates a strong Spearman\u2019s rank correlation of 0.83 with the gold standard ranking. We show that sentiment bearing word embeddings facilitate a more accurate intensity ranking system than other standard word embeddings (word2vec and GloVe). Word2vec is the state-of-the-art for intensity ordering task."}
{"_id": "4e8160b341556d11ea11a636f4e85f221e80df92", "title": "Does the Fragrance of Essential Oils Alleviate the Fatigue Induced by Exercise? A Biochemical Indicator Test in Rats", "text": "Objective\nTo study the effect of the essential oils of Citrus sinensis L., Mentha piperita L., Syzygium aromaticum L., and Rosmarinus officinalis L. on physical exhaustion in rats.\n\n\nMethods\nForty-eight male Wistar rats were randomly divided into a control group, a fatigue group, an essential oil mixture (EOM) group, and a peppermint essential oil (PEO) group. Loaded swimming to exhaustion was used as the rat fatigue model. Two groups were nebulized with EOM and PEO after swimming, and the others were nebulized with distilled water. After continuous inhalation for 3 days, the swimming time, blood glucose, blood lactic acid (BLA), blood urea nitrogen (BUN), superoxide dismutase (SOD), glutathione peroxidase (GSH-PX), and malondialdehyde (MDA) in blood were determined.\n\n\nResults\nWhile an increased time to exhaustion and SOD activity were apparent in both the EOM and PEO groups, the BLA and MDA were lower in both groups, in comparison with the fatigue group, and the changes in the EOM group were more dramatic. Additionally, the EOM group also showed marked changes of the rise of blood glucose and the decrease of BUN and GSH-PX.\n\n\nConclusion\nThe results suggested that the inhalation of an essential oil mixture could powerfully relieve exercise-induced fatigue."}
{"_id": "bdd46459102967fed8c9ce41f81ce4d18b33c38e", "title": "Addressing Function Approximation Error in Actor-Critic Methods", "text": "In value-based reinforcement learning methods such as deep Q-learning, function approximation errors are known to lead to overestimated value estimates and suboptimal policies. We show that this problem persists in an actor-critic setting and propose novel mechanisms to minimize its effects on both the actor and the critic. Our algorithm builds on Double Q-learning, by taking the minimum value between a pair of critics to limit overestimation. We draw the connection between target networks and overestimation bias, and suggest delaying policy updates to reduce per-update error and further improve performance. We evaluate our method on the suite of OpenAI gym tasks, outperforming the state of the art in every environment tested."}
{"_id": "3b8e1b8b85639079dbad489b23e28437f4c565ad", "title": "Design and Optimization of OpenFOAM-based CFD Applications for Modern Hybrid and Heterogeneous HPC Platforms", "text": "Design and Optimization of OpenFOAM-based CFD Applications for Modern Hybrid and Heterogeneous HPC Platforms Amani AlOnazi The progress of high performance computing platforms is dramatic, and most of the simulations carried out on these platforms result in improvements on one level, yet expose shortcomings of current CFD packages capabilities. Therefore, hardwareaware design and optimizations are crucial towards exploiting modern computing resources. This thesis proposes optimizations aimed at accelerating numerical simulations, which are illustrated in OpenFOAM solvers. A hybrid MPI and GPGPU parallel conjugate gradient linear solver has been designed and implemented to solve the sparse linear algebraic kernel that derives from two CFD solver: icoFoam, which is an incompressible flow solver, and laplacianFoam, which solves the Poisson equation, for e.g., thermal diffusion. A load-balancing step is applied using heterogeneous decomposition, which decomposes the computations taking into account the performance of each computing device and seeking to minimize communication. In addition, we implemented the recently developed pipeline conjugate gradient as an algorithmic improvement, and parallelized it using MPI, GPGPU, and a hybrid technique. While many questions of ultimately attainable per node performance and multi-node scaling remain, the experimental results show that the hybrid implementation of both solvers significantly outperforms state-of-the-art implementations of a widely used open source package."}
{"_id": "52b102620fff029b80b3193bec147fe6afd6f42e", "title": "Benchmark of a large scale database for facial beauty prediction", "text": "Due to its indefinite evaluation criterion, facial beauty analysis has shown its challenges in pattern recognition and biometric recognition. Plenty of methods have been designed for solving this problem, but there are still various limitations. Most of the available research results in this field are achieved on a small-scale facial beauty database which is difficult in modeling the structure information for facial beauty. And majority of the presented facial beauty prediction algorithm requires burden landmark or expensive optimization procedure. In this paper, we have established a large scale facial beauty database named LSFBD, which is a new database for facial beauty analysis. The LSFBD database contains 20000 labeled images, which has 10000 unconstrained male subjects and 10000 unconstrained female subjects separately, and all images are verified with well-designed rating with average scores and standard deviation. In this database, manually adding the extreme beauty images make the distribution more reasonable and the result more accurate. CRBM or Eigenfaces results are presented in the paper as benchmark for evaluating the task of facial beauty prediction."}
{"_id": "4d9583991eeae36366cd61e6011df7ada7413f0d", "title": "The Distributed Computing Paradigms: P2P, Grid, Cluster, Cloud, and Jungle", "text": "The distributed computing is done on many systems to solve a large scale problem. The growing of high-speed broadband networks in developed and developing countries, the continual increase in computing power, and the rapid growth of the Internet have changed the way. In it the society manages information and information services. Historically, the state of computing has gone through a series of platform and environmental changes. Distributed computing holds great assurance for using computer systems effectively. As a result, supercomputer sites and datacenters have changed from providing high performance floating point computing capabilities to concurrently servicing huge number of requests from billions of users. The distributed computing system uses multiple computers to solve large-scale problems over the Internet. It becomes data-intensive and network-centric. The applications of distributed computing have become increasingly wide-spread. In distributed computing, the main stress is on the large scale resource sharing and always goes for the best performance. In this article, we have reviewed the work done in the area of distributed computing paradigms. The main stress is on the evolving area of cloud computing."}
{"_id": "d2331d22cab942a5f15c907effe1eaedd5d77305", "title": "Guide to Health Informatics", "text": "From the very earliest moments in the modern history of the computer, scientists have dreamed of creating an \" electronic brain \". Of all the modern technological quests, this search to create artificially intelligent (AI) computer systems has been one of the most ambitious and, not surprisingly, controversial. It also seems that very early on, scientists and doctors alike were captivated by the potential such a technology might have in medicine (e.g. Ledley and Lusted, 1959). With intelligent computers able to store and process vast stores of knowledge, the hope was that they would become perfect 'doctors in a box', assisting or surpassing clinicians with tasks like diagnosis. With such motivations, a small but talented community of computer scientists and healthcare professionals set about shaping a research program for a new discipline called Artificial Intelligence in Medicine (AIM). These researchers had a bold vision of the way AIM would revolutionise medicine, and push forward the frontiers of technology. AI in medicine at that time was a largely US-based research community. Work originated out of a number of campuses, including MIT-Tufts, Pittsburgh, Stanford and Rutgers (e. The field attracted many of the best computer scientists and by any measure their output in the first decade of the field remains a remarkable achievement. In reviewing this new field in 1984, Clancey and Shortliffe provided the following definition: 'Medical artificial intelligence is primarily concerned with the construction of AI programs that perform diagnosis and make therapy recommendations. Unlike medical applications based on other programming methods, such as purely statistical and probabilistic methods, medical AI programs are based on symbolic models of disease entities and their relationship to patient factors and clinical manifestations.' Much has changed since then, and today the importance of diagnosis as a task requiring computer support in routine clinical situations receives much less emphasis (Durinck et al., 1994). The strict focus on the medical setting has now broadened across the healthcare spectrum, and instead of AIM systems, it is more typical to describe them as clinical decision support systems (CDSS). Intelligent systems today are thus found supporting medication prescribing, in clinical laboratories and educational settings, for clinical surveillance, or in data-rich areas like the intensive care setting. While there certainly have been ongoing challenges in developing such systems, they actually have proven their reliability and accuracy on repeated occasions (Shortliffe, 1987). Much of the difficulty experienced in introducing them has \u2026"}
{"_id": "76456aac10e7ed56f1188cc28ea5f525c959896e", "title": "A sock puppet detection algorithm on virtual spaces", "text": "On virtual spaces, some individuals use multiple usernames or copycat/forge other users (usually called \u2018\u2018sock puppet\u2019\u2019) to communicate with others. Those sock puppets are fake identities through which members of Internet community praise or create the illusion of support for the product or one\u2019s work, pretending to be a different person. A fundamental problem is how to identify these sock puppets. In this paper, we propose a sock puppet detection algorithm which combines authorship-identification techniques and link analysis. Firstly, we propose an interesting social network model in which links between two IDs are built if they have similar attitudes to most topics that both of them participate in; then, the edges are pruned according a hypothesis test, which consider the impact of their writing styles; finally, the link-based community detection for pruned network is performed. Compared to traditional methods, our approach has three advantages: (1) it conforms to the practical meanings of sock puppet community; (2) it can be applied in online situation; (3) it increases the efficiency of link analysis. In the experimental work, we evaluate our method using real datasets and compared our approach with several previous methods; the results have proved above advantages. 2012 Elsevier B.V. All rights reserved."}
{"_id": "97728f1466f435b7145637acb4893c6acaa2291a", "title": "EdgeChain: An Edge-IoT Framework and Prototype Based on Blockchain and Smart Contracts", "text": "The emerging Internet of Things (IoT) is facing significant scalability and security challenges. On the one hand, IoT devices are \u201cweak\u201d and need external assistance. Edge computing provides a promising direction addressing the deficiency of centralized cloud computing in scaling massive number of devices. On the other hand, IoT devices are also relatively \u201cvulnerable\u201d facing malicious hackers due to resource constraints. The emerging blockchain and smart contracts technologies bring a series of new security features for IoT and edge computing. In this paper, to address the challenges, we design and prototype an edge-IoT framework named \u201cEdgeChain\u201d based on blockchain and smart contracts. The core idea is to integrate a permissioned blockchain and the internal currency or \u201ccoin\u201d system to link the edge cloud resource pool with each IoT device\u2019 account and resource usage, and hence behavior of the IoT devices. EdgeChain uses a credit-based resource management system to control how much resource IoT devices can obtain from edge servers, based on pre-defined rules on priority, application types and past behaviors. Smart contracts are used to enforce the rules and policies to regulate the IoT device behavior in a non-deniable and automated manner. All the IoT activities and transactions are recorded into blockchain for secure data logging and auditing. We implement an EdgeChain prototype and conduct extensive experiments to evaluate the ideas. The results show that while gaining the security benefits of blockchain and smart contracts, the cost of integrating them into EdgeChain is within a reasonable and acceptable range."}
{"_id": "16463c13e27f35d326056bd84364e02182a978a4", "title": "\"Now, i have a body\": uses and social norms for mobile remote presence in the workplace", "text": "As geographically distributed teams become increasingly common, there are more pressing demands for communication work practices and technologies that support distributed collaboration. One set of technologies that are emerging on the commercial market is mobile remote presence (MRP) systems, physically embodied videoconferencing systems that remote workers use to drive through a workplace, communicating with locals there. Our interviews, observations, and survey results from people, who had 2-18 months of MRP use, showed how remotely-controlled mobility enabled remote workers to live and work with local coworkers almost as if they were physically there. The MRP supported informal communications and connections between distributed coworkers. We also found that the mobile embodiment of the remote worker evoked orientations toward the MRP both as a person and as a machine, leading to formation of new usage norms among remote and local coworkers."}
{"_id": "4a861d29f36d2e4f03477c5df2730c579d8394d3", "title": "Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting", "text": "The goal of precipitation nowcasting is to predict the future rainfall intensity in a local region over a relatively short period of time. Very few previous studies have examined this crucial and challenging weather forecasting problem from the machine learning perspective. In this paper, we formulate precipitation nowcasting as a spatiotemporal sequence forecasting problem in which both the input and the prediction target are spatiotemporal sequences. By extending the fully connected LSTM (FC-LSTM) to have convolutional structures in both the input-to-state and state-to-state transitions, we propose the convolutional LSTM (ConvLSTM) and use it to build an end-to-end trainable model for the precipitation nowcasting problem. Experiments show that our ConvLSTM network captures spatiotemporal correlations better and consistently outperforms FC-LSTM and the state-of-theart operational ROVER algorithm for precipitation nowcasting."}
{"_id": "4f9e6b13e22ae8f3d77b1f5d1c946179e3abfd64", "title": "Combining Instance-Based and Model-Based Learning", "text": "This paper concerns learning tasks that require the prediction of a continuous value rather than a discrete class. A general method is presented that allows predictions to use both instance-based and model-based learning. Results with three approaches to constructing models and with eight datasets demonstrate improvements due to the composite method."}
{"_id": "6e820cf11712b9041bb625634612a535476f0960", "title": "An Analysis of Transformations", "text": "In the analysis of data it is often assumed that observations y,, y,, ...,y, are independently normally distributed with constant variance and with expectations specified by a model linear in a set of parameters 0. In this paper we make the less restrictive assumption that such a normal, homoscedastic, linear model is appropriate after some suitable transformation has been applied to the y's. Inferences about the transformation and about the parameters of the linear model are made by computing the likelihood function and the relevant posterior distribution. The contributions of normality, homoscedasticity and additivity to the transformation are separated. The relation of the present methods to earlier procedures for finding transformations is discussed. The methods are illustrated with examples."}
{"_id": "a241a7e26d6baf2c068601813216d3cc09e845ff", "title": "Deep Learning for Time Series Modeling CS 229 Final Project Report", "text": "Demand forecasting is crucial to electricity providers because their ability to produce energy exceeds their ability to store it. Excess demand can cause \u201cbrown outs,\u201d while excess supply ends in waste. In an industry worth over $1 trillion in the U.S. alone [1], almost 9% of GDP [2], even marginal improvements can have a huge impact. Any plan toward energy efficiency should include enhanced utilization of existing production. Energy loads provide an interesting topic for Machine Learning techniques due to the availability of large datasets that exhibit fundamental nonlinear patterns. Using data from the Kaggle competition \u201cGlobal Energy Forecasting Competition 2012 Load Forecasting\u201d [3] we sought to use deep learning architectures to predict energy loads across different network grid areas, using only time and temperature data. Data included hourly demand for four and a half years from 20 different geographic regions, and similar hourly temperature readings from 11 zones. For most of our analysis we focused on short term load forecasting because this will aide online operational scheduling."}
{"_id": "e9e46148e6f9b115f22767703ca88b55ef10d24e", "title": "When moderation is mediated and mediation is moderated.", "text": "Procedures for examining whether treatment effects on an outcome are mediated and/or moderated have been well developed and are routinely applied. The mediation question focuses on the intervening mechanism that produces the treatment effect. The moderation question focuses on factors that affect the magnitude of the treatment effect. It is important to note that these two processes may be combined in informative ways, such that moderation is mediated or mediation is moderated. Although some prior literature has discussed these possibilities, their exact definitions and analytic procedures have not been completely articulated. The purpose of this article is to define precisely both mediated moderation and moderated mediation and provide analytic strategies for assessing each."}
{"_id": "a1c5a6438d3591819e730d8aecb776a52130c33d", "title": "Compact ultra-wide stopband lowpass filter using transformed stepped impedance hairpin resonator", "text": "A compact microstrip lowpass filter (LPF) with ultra-wide stopband using transformed stepped impedance hairpin resonator is proposed. The transformed resonator consists of a stepped impedance hairpin resonator and an embedded hexagon stub loaded coupled-line structure. Without enlarging the size, the embedded structure is introduced to get a broad stopband. A prototype LPF has been simulated, fabricated and measured, and the measurements are in good agreement with simulations. The implemented lowpass filter exhibits an ultra-wide stopband up to 12.01fc with a rejection level of 14 dB. In addition, the proposed filter features a size of 0.071\u03bbg\u00d7 0.103\u03bbg, where \u03bbg is the waveguide length at the cutoff frequency 1.45 GHz."}
{"_id": "63c30ec269e7e02f97e2a20a9d68e268b4405a5d", "title": "Are Coherence Protocol States Vulnerable to Information Leakage?", "text": "Most commercial multi-core processors incorporate hardware coherence protocols to support efficient data transfers and updates between their constituent cores. While hardware coherence protocols provide immense benefits for application performance by removing the burden of software-based coherence, we note that understanding the security vulnerabilities posed by such oft-used, widely-adopted processor features is critical for secure processor designs in the future. In this paper, we demonstrate a new vulnerability exposed by cache coherence protocol states. We present novel insights into how adversaries could cleverly manipulate the coherence states on shared cache blocks, and construct covert timing channels to illegitimately communicate secrets to the spy. We demonstrate 6 different practical scenarios for covert timing channel construction. In contrast to prior works, we assume a broader adversary model where the trojan and spy can either exploit explicitly shared read-only physical pages (e.g., shared library code), or use memory deduplication feature to implicitly force create shared physical pages. We demonstrate how adversaries can manipulate combinations of coherence states and data placement in different caches to construct timing channels. We also explore how adversaries could exploit multiple caches and their associated coherence states to improve transmission bandwidth with symbols encoding multiple bits. Our experimental results on commercial systems show that the peak transmission bandwidths of these covert timing channels can vary between 700 to 1100 Kbits/sec. To the best of our knowledge, our study is the first to highlight the vulnerability of hardware cache coherence protocols to timing channels that can help computer architects to craft effective defenses against exploits on such critical processor features."}
{"_id": "218f9e89f2f585b184d928b08636d300e1307c7f", "title": "Seahawk: Stack Overflow in the IDE", "text": "Services, such as Stack Overflow, offer a web platform to programmers for discussing technical issues, in form of Question and Answers (Q&A). Since Q&A services store the discussions, the generated crowd knowledge can be accessed and consumed by a large audience for a long time. Nevertheless, Q&A services are detached from the development environments used by programmers: Developers have to tap into this crowd knowledge through web browsers and cannot smoothly integrate it into their workflow. This situation hinders part of the benefits of Q&A services. To better leverage the crowd knowledge of Q&A services, we created Seahawk, an Eclipse plugin that supports an integrated and largely automated approach to assist programmers using Stack Overflow. Seahawk formulates queries automatically from the active context in the IDE, presents a ranked and interactive list of results, lets users import code samples in discussions through drag & drop and link Stack Overflow discussions and source code persistently as a support for team work. Video Demo URL: http://youtu.be/DkqhiU9FYPI"}
{"_id": "9b75d440a545a915aac025029cab9c7352173021", "title": "Cooperative Control for A Hybrid Rehabilitation System Combining Functional Electrical Stimulation and Robotic Exoskeleton", "text": "Functional electrical stimulation (FES) and robotic exoskeletons are two important technologies widely used for physical rehabilitation of paraplegic patients. We developed a hybrid rehabilitation system (FEXO Knee) that combined FES and an exoskeleton for swinging movement control of human knee joints. This study proposed a novel cooperative control strategy, which could realize arbitrary distribution of torque generated by FES and exoskeleton, and guarantee harmonic movements. The cooperative control adopted feedfoward control for FES and feedback control for exoskeleton. A parameter regulator was designed to update key parameters in real time to coordinate FES controller and exoskeleton controller. Two muscle groups (quadriceps and hamstrings) were stimulated to generate active torque for knee joint in synchronization with torque compensation from exoskeleton. The knee joint angle and the interactive torque between exoskeleton and shank were used as feedback signals for the control system. Central pattern generator (CPG) was adopted that acted as a phase predictor to deal with phase confliction of motor patterns, and realized synchronization between the two different bodies (shank and exoskeleton). Experimental evaluation of the hybrid FES-exoskeleton system was conducted on five healthy subjects and four paraplegic patients. Experimental results and statistical analysis showed good control performance of the cooperative control on torque distribution, trajectory tracking, and phase synchronization."}
{"_id": "859e592b643384133bdb6cffb0990c87ecf06d93", "title": "Uniform circular array with integrated microstrip tapered baluns", "text": "This paper presents the design, simulation, and measurement results of a 4-element uniform circular array (UCA) using dipole antenna elements with integrated microstrip tapered baluns to convert between balanced and unbalanced signals. These baluns are compact, integrated into the array support structure, and can provide a balanced signal over a relatively wide bandwidth. Simulation results are shown for the amplitude and phase balance of the balun from 1.6 GHz to 2.2 GHz. The array was fabricated and its manifold was measured in an anechoic chamber at a frequency of 1.7 GHz. The gain and phase responses of the array are presented and are compared to a simulated ideal dipole UCA in free-space."}
{"_id": "883b30ddaceb39b8b7c2e2e4ab1b6a92adaeb8c0", "title": "N-Party Encrypted Diffie-Hellman Key Exchange Using Different Passwords", "text": "We consider the problem of password-authenticated group Diffie-Hellman key exchange among N parties, N\u22121 clients and a singleserver, using different passwords. Most password-authenticated key exchange schemes in the literature have focused on an authenticated key exchange using a shared password between a client and a server. With a rapid change in modern communication environment such as ad-hoc networks and ubiquitous computing, it is necessary to construct a secure end-to-end channel between clients, which is a quite different paradigm from the existing ones. To achieve this end-to-end security, only a few schemes of three-party setting have been presented where two clients exchange a key using their own passwords with the help of a server. However, up until now, no formally treated and round efficient protocols which enable group members to generate a common session key with clients\u2019 distinct passwords have been suggested. In this paper we securely and efficiently extend three-party case to Nparty case with a formal proof of security. Two provably secure N-party EKE protocols are suggested; N-party EKE-U in the unicast network and N-party EKE-M in the multicast network. The proposed N-party EKE-M is provable secure and provides forward secrecy. Especially, the scheme is of constant-round, hence scalable and practical."}
{"_id": "e48e480c16f419ec3a3c7dd7a9be7089e3dd1675", "title": "Iterative unified clustering in big data", "text": "We propose a novel iterative unified clustering algorithm for data with both continuous and categorical variables, in the big data environment. Clustering is a well-studied problem and finds several applications. However, none of the big data clustering works discuss the challenge of mixed attribute datasets, with both categorical and continuous attributes. We study an application in the health care domain namely Case Based Reasoning (CBR), which refers to solving new problems based on solutions to similar past problems. This is particularly useful when there is a large set of clinical records with several types of attributes, from which similar patients need to be identified. We go one step further and include the genomic components of patient records to enhance the CBR discovery. Thus, our contributions in this paper spans across the big data algorithmic research and a key contribution to the domain of heath care information technology research. First, our clustering algorithm deals with both continuous and categorical variables in the data; second, our clustering algorithm is iterative where it finds the clusters which are not well formed and iteratively drills down to form well defined clusters at the end of the process; third we provide a novel approach to CBR across clinical and genomic data. Our research has implications for clinical trials and facilitating precision diagnostics in large and heterogeneous patient records. We present extensive experimental results to show the efficacy or our approach."}
{"_id": "5f7526b8e47fbb274ae6b662c5eeb96e89f4b33f", "title": "PLEdestrians: A Least-Effort Approach to Crowd Simulation", "text": "We present a new algorithm for simulating large-scale crowds at interactive rates based on the Principle of Least Effort. Our approach uses an optimization method to compute a biomechanically energy-efficient, collision-free trajectory that minimizes the amount of effort for each heterogeneous agent in a large crowd. Moreover, the algorithm can automatically generate many emergent phenomena such as lane formation, crowd compression, edge and wake effects ant others. We compare the results from our simulations to data collected from prior studies in pedestrian and crowd dynamics, and provide visual comparisons with real-world video. In practice, our approach can interactively simulate large crowds with thousands of agents on a desktop PC and naturally generates a diverse set of emergent behaviors."}
{"_id": "29be05f17c8906d70659fe1110758a59d39d2a08", "title": "Android taint flow analysis for app sets", "text": "One approach to defending against malicious Android applications has been to analyze them to detect potential information leaks. This paper describes a new static taint analysis for Android that combines and augments the FlowDroid and Epicc analyses to precisely track both inter-component and intra-component data flow in a set of Android applications. The analysis takes place in two phases: given a set of applications, we first determine the data flows enabled individually by each application, and the conditions under which these are possible; we then build on these results to enumerate the potentially dangerous data flows enabled by the set of applications as a whole. This paper describes our analysis method, implementation, and experimental results."}
{"_id": "de33698f7b2264bf7313f43d1de8c2d19e2a2f7a", "title": "An Optimization Approach for Utilizing Cloud Services for Mobile Devices in Cloud Environment", "text": "Abstract. Mobile cloud computing has emerged aiming at assisting mobile devices in processing computationally or data intensive tasks using cloud resources. This paper presents an optimization approach for utilizing cloud services for mobile client in mobile cloud, which considers the benefit of both mobile device users and cloud datacenters. The mobile cloud service provisioning optimization is conducted in parallel under the deadline, budget and energy expenditure constraint. Mobile cloud provider runs multiple VMs to execute the jobs for mobile device users, the cloud providers want to maximize the revenue and minimize the electrical cost. The mobile device user gives the suitable payment to the cloud datacenter provider for available cloud resources for optimize the benefit. The paper proposes a distributed optimization algorithm for utilizing cloud services for mobile devices. The experiment is to test convergence of the proposed algorithm and also compare it with other related work. The experiments study the impacts of job arrival rate, deadline and mobility speeds on energy consumption ratio, execution success ratio, resource allocation efficiency and cost. The experiment shows that the proposed algorithm outperforms other related work in terms of some performance metrics such as allocation efficiency."}
{"_id": "39e8237f35361b64ac6de9d4f3c93f409f73dac0", "title": "Inner Product Similarity Search using Compositional Codes", "text": "This paper addresses the nearest neighbor search problem under inner product similarity and introduces a compact code-based approach. The idea is to approximate a vector using the composition of several elements selected from a source dictionary and to represent this vector by a short code composed of the indices of the selected elements. The inner product between a query vector and a database vector is efficiently estimated from the query vector and the short code of the database vector. We show the superior performance of the proposed group M -selection algorithm that selects M elements from M source dictionaries for vector approximation in terms of search accuracy and efficiency for compact codes of the same length via theoretical and empirical analysis. Experimental results on large-scale datasets (1M and 1B SIFT features, 1M linear models and Netflix) demonstrate the superiority of the proposed approach."}
{"_id": "ae6c24aa50eb2b5d6387a761655091b88d414359", "title": "Lexical input as related to children's vocabulary acquisition: effects of sophisticated exposure and support for meaning.", "text": "A corpus of nearly 150,000 maternal word-tokens used by 53 low-income mothers in 263 mother-child conversations in 5 settings (e.g., play, mealtime, and book readings) was studied. Ninety-nine percent of maternal lexical input consisted of the 3,000 most frequent words. Children's vocabulary performance in kindergarten and later in 2nd grade related more to the occurrence of sophisticated lexical items than to quantity of lexical input overall. Density of sophisticated words heard and the density with which such words were embedded in helpful or instructive interactions, at age 5 at home, independently predicted over a third of the variance in children's vocabulary performance in both kindergarten and 2nd grade. These two variables, with controls for maternal education, child nonverbal IQ, and amount of child's talk produced during the interactive settings, at age 5, predicted 50% of the variance in children's 2nd-grade vocabulary."}
{"_id": "c90fc38e3c38b7feb40e33a23d643be2d7e9fdaa", "title": "Automotive radar target characterization from 22 to 29 GHz and 76 to 81 GHz", "text": "The radar signatures of automotive targets were measured from 22 to 29 GHz and 76 to 81 GHz inside an anechoic chamber using a vector network analyzer. Radar cross section maps of a sedan, truck, van, pedestrian, motorcycle, and bicycle as a function of effective radar sensor bandwidth, center frequency, and polarization are presented."}
{"_id": "70d2d4b07b5c65ef4866c7fd61f9620bffa01e29", "title": "A model for smart agriculture using IoT", "text": "Climate changes and rainfall has been erratic over the past decade. Due to this in recent era, climate-smart methods called as smart agriculture is adopted by many Indian farmers. Smart agriculture is an automated and directed information technology implemented with the IOT (Internet of Things). IOT is developing rapidly and widely applied in all wireless environments. In this paper, sensor technology and wireless networks integration of IOT technology has been studied and reviewed based on the actual situation of agricultural system. A combined approach with internet and wireless communications, Remote Monitoring System (RMS) is proposed. Major objective is to collect real time data of agriculture production environment that provides easy access for agricultural facilities such as alerts through Short Massaging Service (SMS) and advices on weather pattern, crops etc."}
{"_id": "ea88b58158395aefbb27f4706a18dfa2fd7daa89", "title": "\"It Won't Happen To Me!\": Self-Disclosure in Online Social Networks", "text": "Despite the considerable amount of self-disclosure in Online Social Networks (OSN), the motivation behind this phenomenon is still little understood. Building on the Privacy Calculus theory, this study fills this gap by taking a closer look at the factors behind individual self-disclosure decisions. In a Structural Equation Model with 237 subjects we find Perceived Enjoyment and Privacy Concerns to be significant determinants of information revelation. We confirm that the privacy concerns of OSN users are primarily determined by the perceived likelihood of a privacy violation and much less by the expected damage. These insights provide a solid basis for OSN providers and policy-makers in their effort to ensure healthy disclosure levels that are based on objective rationale rather than subjective misconceptions."}
{"_id": "7427a00058b9925fc9c23379217cba637cb15f99", "title": "Finite-time posture control of a unicycle robot", "text": "This paper deals with the problem of posture control for a unicycle (single-wheel) robot by designing and analyzing a finite-time posture control strategy. The unicycle robot consists of a lower rotating wheel, an upper rotating disk, and a robot body. The rotating wheel enables the unicycle robot to move forward and backward to obtain pitch (longitudinal) balance. The rotating disk is used for obtaining roll (lateral) balance. Therefore, the unicycle robot can be viewed as a mobile inverted pendulum for the pitch axis and a reaction-wheel pendulum for the roll axis. The dynamics models for unicycle robots at roll and pitch axes are derived using the Lagrange equation. According to the dynamics models, two finite-time controller are designed, which includes the finite-time roll controller and the finite-time pitch controller. Moreover, the stability on the unicycle robot with the proposed finite-time posture control strategy is also analyzed. Finally, simulations are worked out to illustrate the effectiveness of the finite-time posture control strategy for the unicycle robot at roll and pitch axes."}
{"_id": "7e8bc9da7071d6a2e4f2a5437d7071868913f8b2", "title": "Road Traffic Accidents Analysis in Mexico City through Crowdsourcing Data and Data Mining Techniques", "text": "Road traffic accidents are among the principal causes of traffic congestion, causing human losses, damages to health and the environment, economic losses and material damages. Studies about traditional road traffic accidents in urban zones represents very high inversion of time and money, additionally, the result are not current. However, nowadays in many countries, the crowdsourced GPS based traffic and navigation apps have emerged as an important source of information to low cost to studies of road traffic accidents and urban congestion caused by them. In this article we identified the zones, roads and specific time in the CDMX in which the largest number of road traffic accidents are concentrated during 2016. We built a database compiling information obtained from the social network known as Waze. The methodology employed was Discovery of knowledge in the database (KDD) for the discovery of patterns in the accidents reports. Furthermore, using data mining techniques with the help of Weka. The selected algorithms was the Maximization of Expectations (EM) to obtain the number ideal of clusters for the data and k-means as a grouping method. Finally, the results were visualized with the Geographic Information System QGIS. Keywords\u2014Data mining, K-means, road traffic accidents, Waze, Weka."}
{"_id": "9dbfcf610da740396b2b9fd75c7032f0b94896d7", "title": "Using Program Analysis to Improve Database Applications", "text": "Applications that interact with database management systems (DBMSs) are ubiquitous. Such database applications are usually hosted on an application server and perform many small accesses over the network to a DBMS hosted on the database server to retrieve data for processing. For decades, the database and programming systems research communities have worked on optimizing such applications from different perspectives: database researchers have built highly efficient DBMSs, and programming systems researchers have developed specialized compilers and runtime systems for hosting applications. However, there has been relatively little work that optimizes database applications by considering these specialized systems in combination and looking for optimization opportunities that span across them. In this article, we highlight three projects that optimize database applications by looking at both the programming system and the DBMS in a holistic manner. By carefully revisiting the interface between the DBMS and the application, and by applying a mix of declarative database optimization and modern program analysis techniques, we show that a speedup of multiple orders of magnitude is possible in real-world applications."}
{"_id": "7cc21bcce77dacd1ea7646244043261167f2dcd0", "title": "Exploring Models and Data for Remote Sensing Image Caption Generation", "text": "Inspired by recent development of artificial satellite, remote sensing images have attracted extensive attention. Recently, notable progress has been made in scene classification and target detection. However, it is still not clear how to describe the remote sensing image content with accurate and concise sentences. In this paper, we investigate to describe the remote sensing images with accurate and flexible sentences. First, some annotated instructions are presented to better describe the remote sensing images considering the special characteristics of remote sensing images. Second, in order to exhaustively exploit the contents of remote sensing images, a large-scale aerial image data set is constructed for remote sensing image caption. Finally, a comprehensive review is presented on the proposed data set to fully advance the task of remote sensing caption. Extensive experiments on the proposed data set demonstrate that the content of the remote sensing image can be completely described by generating language descriptions. The data set is available at https://github.com/201528014227051/RSICD_optimal."}
{"_id": "fdc3948f5fec24eb7cd4178aee9732ab284f1f1c", "title": "Hybrid Multi-Mode Narrow-Frame Antenna for WWAN/LTE Metal-Rimmed Smartphone Applications", "text": "A hybrid multi-mode narrow-frame antenna for WWAN/LTE metal-rimmed smartphone applications is proposed in this paper. The ground clearance is only 5 mm \u00d7 45 mm, which is promising for narrow-frame smartphones. The metal rim with a small gap is connected to the system ground by three grounded patches. This proposed antenna can excite three coupled-loop modes and one slot mode. By incorporating these four modes, the proposed antenna can provide coverage for GSM850/900, DCS/PCS/UMTS2100, and LTE2300/2500 operations. Detailed design considerations of the proposed antenna are described, and both experimental and simulated results are also presented."}
{"_id": "b7706677ff24a509651bdcd43076aadcad6e378c", "title": "An integrated fuzzy MCDM approach for supplier evaluation and selection", "text": "A fuzzy multi-criteria group decision making approach that makes use of quality function deployment (QFD), fusion of fuzzy information and 2-tuple linguistic representation model is developed for supplier selection. The proposed methodology seeks to establish the relevant supplier assessment criteria while also considering the impacts of inner dependence among them. Two interrelated house of quality matrices are constructed, and fusion of fuzzy information and 2-tuple linguistic representation model are employed to compute the weights of supplier selection criteria and subsequently the ratings of suppliers. The proposed method is apt to manage non-homogeneous information in a decision setting with multiple information sources. The decision framework presented in this paper employs ordered weighted averaging (OWA) operator, and the aggregation process is based on combining information by means of fuzzy sets on a basic linguistic term set. The proposed framework is illustrated through a case study conducted in a private hospital in Istanbul."}
{"_id": "8bbbdff11e88327816cad3c565f4ab1bb3ee20db", "title": "Automatic Semantic Face Recognition Nawaf", "text": "Recent expansion in surveillance systems has motivated research in soft biometrics that enable the unconstrained recognition of human faces. Comparative soft biometrics show superior recognition performance than categorical soft biometrics and have been the focus of several studies which have highlighted their ability for recognition and retrieval in constrained and unconstrained environments. These studies, however, only addressed face recognition for retrieval using human generated attributes, posing a question about the feasibility of automatically generating comparative labels from facial images. In this paper, we propose an approach for the automatic comparative labelling of facial soft biometrics. Furthermore, we investigate unconstrained human face recognition using these comparative soft biometrics in a human labelled gallery (and vice versa). Using a subset from the LFW dataset, our experiments show the efficacy of the automatic generation of comparative facial labels, highlighting the potential extensibility of the approach to other face recognition scenarios and larger ranges of attributes."}
{"_id": "f23dfa84771615d4b4ea733e325a44922d3c9711", "title": "Emergency situation awareness from twitter for crisis management", "text": "This paper describes ongoing work with the Australian Government to detect, assess, summarise, and report messages of interest for crisis coordination published by Twitter. The developed platform and client tools, collectively termed the Emergency Situation Awareness - Automated Web Text Mining (ESA-AWTM) system, demonstrate how relevant Twitter messages can be identified and utilised to inform the situation awareness of an emergency incident as it unfolds.\n A description of the ESA-AWTM platform is presented detailing how it may be used for real life emergency management scenarios. These scenarios are focused on general use cases to provide: evidence of pre-incident activity; near-real-time notification of an incident occurring; first-hand reports of incident impacts; and gauging the community response to an emergency warning. Our tools have recently been deployed in a trial for use by crisis coordinators."}
{"_id": "04cf655bfc5da8ae164626034734e1d409adf5ed", "title": "Avoiding and Treating Blindness From Fillers: A Review of the World Literature.", "text": "BACKGROUND\nAs the popularity of soft tissue fillers increases, so do the reports of adverse events. The most serious complications are vascular in nature and include blindness.\n\n\nOBJECTIVE\nTo review the cases of blindness after filler injection, to highlight key aspects of the vascular anatomy, and to discuss prevention and management strategies.\n\n\nMETHODS\nA literature review was performed to identify all the cases of vision changes from filler in the world literature.\n\n\nRESULTS\nNinety-eight cases of vision changes from filler were identified. The sites that were high risk for complications were the glabella (38.8%), nasal region (25.5%), nasolabial fold (13.3%), and forehead (12.2%). Autologous fat (47.9%) was the most common filler type to cause this complication, followed by hyaluronic acid (23.5%). The most common symptoms were immediate vision loss and pain. Most cases of vision loss did not recover. Central nervous system complications were seen in 23.5% of the cases. No treatments were found to be consistently successful in treating blindness.\n\n\nCONCLUSION\nAlthough the risk of blindness from fillers is rare, it is critical for injecting physicians to have a firm knowledge of the vascular anatomy and to understand key prevention and management strategies."}
{"_id": "b65f3cd5431f91cca97469996f08d2c139d8ef6a", "title": "Filler Rhinoplasty Evaluated by Anthropometric Analysis.", "text": "BACKGROUND\nThere are no reports of objectively evaluating the efficacy of filler rhinoplasty by anthropometric techniques.\n\n\nOBJECTIVE\nTo objectively demonstrate the effectiveness of filler rhinoplasty by anthropometric analysis.\n\n\nMATERIALS AND METHODS\nA total of 242 patients who revisited the clinic within 2 months of undergoing hyaluronic acid filler rhinoplasty were analyzed based on the injection site, injected volume, and the change in anthropometry.\n\n\nRESULTS\nAmong the 242 patients, 112 (46.3%) were in the nasal dorsum augmentation group, 8 (3.3%) were in the tip rotation group, and 122 (50.4%) were in the whole nose augmentation group. Average injection volume was 1 \u00b1 0.4 mL for nasal dorsum and 0.9 \u00b1 0.3 mL for tip rotation, whereas 1.6 \u00b1 0.5 mL was used for whole nose augmentation. On follow-up, the radix height, nasofrontal angle, and nasolabial angle (NLA) had increased by 78.3%, 5.7 \u00b1 4.1\u00b0, and 9.4 \u00b1 4.5\u00b0, respectively, whereas the modified nasofacial angle had decreased by 1.9 \u00b1 2.9\u00b0. Three cases (1.2%) of vascular complications were encountered.\n\n\nCONCLUSION\nFiller rhinoplasty is a simple and effective treatment modality producing outcomes comparable with surgical augmentation rhinoplasty. Among various anthropometric measurements, the nasal radix height was the most useful for evaluating dorsum augmentation, whereas the NLA was the best for nasal tip rotation."}
{"_id": "407ee40d3f6168411e4b66eec003e416f6195466", "title": "The anatomical origin and course of the angular artery regarding its clinical implications.", "text": "BACKGROUND\nThe purposes of this study were to determine the morphological features and conceptualize the anatomical definition of the angular artery (AA) as an aid to practical operations in the clinical field.\n\n\nMATERIALS AND METHODS\nThirty-one hemifaces from 17 Korean cadavers and 26 hemifaces from 13 Thai cadavers were dissected.\n\n\nRESULTS\nThe topography of the AA was classified into 4 types according to its course: Type I (persistent pattern), in which the AA traverses the lateral side of the nose (11%); Type II (detouring pattern), in which the AA traverses the cheek and tear trough area (18%); Type III (alternative pattern), in which the AA traverses the medial canthal area through a branch of the ophthalmic artery (22.8%); and Type IV (latent pattern), in which the AA is absent (26.3%).\n\n\nCONCLUSION\nThe findings of this study will contribute toward improved outcomes for cosmetic surgery involving the injection of facial filler by enhancing the understanding of AA anatomy."}
{"_id": "766b55aa9c8915a9f786f0fd9f0e79f2e9bf57dc", "title": "Ischemic oculomotor nerve palsy and skin necrosis caused by vascular embolization after hyaluronic acid filler injection: a case report.", "text": "Hyaluronic acid filler injection is widely used for soft tissue augmentation. However, there can be disastrous complications by direct vascular embolization. We present a case of ischemic oculomotor nerve palsy and skin necrosis after hyaluronic acid filler injection on glabellar.blepharoptosis, exotropia and diplopia developed suddenly after the injection, and skin necrosis gradually occurred. Symptoms and signs of oculomotor nerve palsy continuously improved with steroid therapy. Skin defects healed with minimal scars through intensive wound care.Percutaneous filler injection of periorbital areas should be performed carefully by experienced surgeons, and the possibility of embolization should be considered promptly if symptoms develop."}
{"_id": "efa97923a4a9174c0f1c6f5b16c4019e1fcb3e21", "title": "Complications of injectable fillers, part I.", "text": "Dermal filling has rapidly become one of the most common procedures performed by clinicians worldwide. The vast majority of treatments are successful and patient satisfaction is high. However, complications, both mild and severe, have been reported and result from injection of many different types of dermal fillers. In this Continuing Medical Education review article, the author describes common technical errors, the signs and symptoms of both common and rare complications, and management of sequelae in clear, easily adaptable treatment algorithms."}
{"_id": "2f725ea2e93ea4814d8b83ce6f9a82141cedbd4e", "title": "Developing a Research Agenda for Human-Centered Data Science", "text": "The study and analysis of large and complex data sets offer a wealth of insights in a variety of applications. Computational approaches provide researchers access to broad assemblages of data, but the insights extracted may lack the rich detail that qualitative approaches have brought to the understanding of sociotechnical phenomena. How do we preserve the richness associated with traditional qualitative methods while utilizing the power of large data sets? How do we uncover social nuances or consider ethics and values in data use? These and other questions are explored by human-centered data science, an emerging field at the intersection of human-computer interaction (HCI), computer-supported cooperative work (CSCW), human computation, and the statistical and computational techniques of data science. This workshop, the first of its kind at CSCW, seeks to bring together researchers interested in human-centered approaches to data science to collaborate, define a research agenda, and form a community."}
{"_id": "03e6e9198bffe536fe6ccd1edb91ccf0a68b254d", "title": "MEASURING THE SIMILARITY OF TWO IMAGE SEQUENCES", "text": "We propose a novel similarity measure of two image sequences based on shapeme histograms. The idea of shapeme histogram has been used for single image/texture recognition, but is used here to solve the sequence-tosequence matching problem. We develop techniques to represent each sequence as a set of shapeme histograms, which captures different variations of the object appearances within the sequence. These shapeme histograms are computed from the set of 2D invariant features that are stable across multiple images in the sequence, and therefore minimizes the effect of both background clutter, and 2D pose variations. We define sequence similarity measure as the similarity of the most similar pair of images from both sequences. This definition maximizes the chance of matching between two sequences of the same object, because it requires only part of the sequences being similar. We also introduce a weighting scheme to conduct an implicit feature selection process during the matching of two shapeme histograms. Experiments on clustering image sequences of tracked objects demonstrate the efficacy of the proposed method."}
{"_id": "6ad2b6ceb9d4102ddbad7f853748add299c5d58b", "title": "Design and implementation of a 1.3 kW, 7-level flying capacitor multilevel AC-DC converter with power factor correction", "text": "This work presents a 1.3 kW, single phase, AC-DC converter with power factor correction based on a 7-level flying capacitor multilevel converter. The topology features a tradeoff of active circuit complexity for dramatic reduction on the magnetic component size, while maintaining a high efficiency. In this work, we demonstrate these features through theoretical analysis as well as a hardware prototype. It has been experimentally demonstrated that the prototype can operate at an universal AC input from 90\u2013230 VRMS and frequencies from 47\u201363 Hz with an output voltage of 400 V, achieving a box power density of 1.21 W/cm3 (19.8 W/in3) and a peak efficiency of 97.6%. This prototype is the first successful demonstration of a 7-level flying capacitor multilevel boost topology as an AC-DC converter with fully implemented digital PFC control and self-powered start-up from a universal AC input."}
{"_id": "68302d9c362fd66d6f3e3aca0d1a0303cd241598", "title": "Gender Identity and Adjustment in Middle Childhood", "text": "Gender identity is a central construct in many accounts of psychosocial development, yet it has been defined in diverse ways. Kohlberg (1966) and Zucker et al. (1993) viewed gender identity as knowing that one is a member of one sex rather than the other; Kagan (1964) regarded gender identity as the degree to which one perceives the self as conforming to cultural stereotypes for one's gender; Bem (1981) saw gender identity as the degree to which one internalizes societal pressures for gender conformity; Green (1974) and Spence (1985) viewed gender identity as a fundamental sense of acceptance of, and of belonging to, one's gender. It is conceivable that all of the foregoing (and still other) conceptualizations of gender identity have merit but that different varieties or facets of gender identity serve different psychological functions or affect adjustment in different ways. Thus, it may be fruitful to regard gender identity as a multidimensional construct and to define gender identity as the collection of thoughts and feelings one has about one's gender category and one's membership in it. A recent study by Egan and Perry (2001) was built on this premise. Egan and Perry proposed that gender identity is composed of five major components: (a) membership knowledge (knowledge of membership in a gender category); (b) gender typicality (the degree to which one feels one is a typical member of one's gender category); (c) gender contentedness (the degree to which one is happy with one's gender assignment); (d) felt pressure for gender conformity (the degree to which one feels pressure from parents, peers, and self for conformity to gender stereotypes); and (e) intergroup bias (the extent to which one believes one's own sex is superior to the other). Egan and Perry (2001) measured the last four of these components of gender identity in preadolescent children and found the components to be relatively independent, to be fairly stable over a school year, and to relate to adjustment (i.e., self-esteem and peer acceptance) in different ways. Gender typicality and gender contentedness were favorably related to adjustment, whereas felt pressure and intergroup bias were negatively associated with adjustment. Links between the gender identity constructs and the adjustment indexes remained significant when children's perceptions of self-efficacy for a wide variety of sex-typed activities were statistically controlled. This suggests that the gender identity constructs carry implications for adjustment beyond self-perceptions of specific sex-linked competencies. The purposes of the \u2026"}
{"_id": "4b631afce2f2a8b9b4c16c5e13c3765e75a38e54", "title": "Load Forecasting via Deep Neural Networks", "text": "Nowadays, electricity plays a vital role in national economic and social development. Accurate load forecasting can help power companies to secure electricity supply and scheduling and reduce wastes since electricity is difficult to store. In this paper, we propose a novel Deep Neural Network architecture for short term load forecasting. We integrate multiple types of input features by using appropriate neural network components to process each of them. We use Convolutional Neural Network components to extract rich features from historical load sequence and use Recurrent Components to model the implicit dynamics. In addition, we use Dense layers to transform other types of features. Experimental results on a large data set containing hourly loads of a North China city show the superiority of our method. Moreover, the proposed method is quite flexible and can be applied to other time series prediction tasks. c \u00a9 2017 The Authors. Published by Elsevier B.V. Selectio and/or peer-review under responsibility of ITQM2017."}
{"_id": "fb996b851971db84c56bcb8d0d96a2e04b21417f", "title": "3D-Printing of Meso-structurally Ordered Carbon Fiber/Polymer Composites with Unprecedented Orthotropic Physical Properties", "text": "Here we report the first example of a class of additively manufactured carbon fiber reinforced composite (AMCFRC) materials which have been achieved through the use of a latent thermal cured aromatic thermoset resin system, through an adaptation of direct ink writing (DIW) 3D-printing technology. We have developed a means of printing high performance thermoset carbon fiber composites, which allow the fiber component of a resin and carbon fiber fluid to be aligned in three dimensions via controlled micro-extrusion and subsequently cured into complex geometries. Characterization of our composite systems clearly show that we achieved a high order of fiber alignment within the composite microstructure, which in turn allows these materials to outperform equivalently filled randomly oriented carbon fiber and polymer composites. Furthermore, our AM carbon fiber composite systems exhibit highly orthotropic mechanical and electrical responses as a direct result of the alignment of carbon fiber bundles in the microscale which we predict will ultimately lead to the design of truly tailorable carbon fiber/polymer hybrid materials having locally programmable complex electrical, thermal and mechanical response."}
{"_id": "754a984815c7fab512e651fc6c5d6aa4864f559e", "title": "Ten-year research update review: child sexual abuse.", "text": "OBJECTIVE To provide clinicians with current information on prevalence, risk factors, outcomes, treatment, and prevention of child sexual abuse (CSA). To examine the best-documented examples of psychopathology attributable to CSA. METHOD Computer literature searches of and for key words. All English-language articles published after 1989 containing empirical data pertaining to CSA were reviewed. RESULTS CSA constitutes approximately 10% of officially substantiated child maltreatment cases, numbering approximately 88,000 in 2000. Adjusted prevalence rates are 16.8% and 7.9% for adult women and men, respectively. Risk factors include gender, age, disabilities, and parental dysfunction. A range of symptoms and disorders has been associated with CSA, but depression in adults and sexualized behaviors in children are the best-documented outcomes. To date, cognitive-behavioral therapy (CBT) of the child and a nonoffending parent is the most effective treatment. Prevention efforts have focused on child education to increase awareness and home visitation to decrease risk factors. CONCLUSIONS CSA is a significant risk factor for psychopathology, especially depression and substance abuse. Preliminary research indicates that CBT is effective for some symptoms, but longitudinal follow-up and large-scale \"effectiveness\" studies are needed. Prevention programs have promise, but evaluations to date are limited."}
{"_id": "ea741073b635819de3aec7db648b84eb4abe976a", "title": "Appropriateness of plantar pressure measurement devices: a comparative technical assessment.", "text": "Accurate plantar pressure measurements are mandatory in both clinical and research contexts. Differences in accuracy, precision and reliability of the available devices have prevented so far the onset of standardization processes or the definition of reliable reference datasets. In order to comparatively assess the appropriateness of the most used pressure measurement devices (PMD) on-the-market, in 2006 the Institute the author is working for approved a two-year scientific project aimed to design, validate and implement dedicated testing methods for both in-factory and on-the field assessment. A first testing phase was also performed which finished in December 2008. Five commercial PMDs using different technologies-resistive, elastomer-based capacitive, air-based capacitive-were assessed and compared with respect to absolute pressure measurements, hysteresis, creep and COP estimation. The static and dynamic pressure tests showed very high accuracy of capacitive, elastomer-based technology (RMSE<0.5%), and quite a good performance of capacitive, air-based technology (RMSE<5%). High accuracy was also found for the resistive technology by TEKSCAN (RMSE<2.5%), even though a complex ad hoc calibration was necessary."}
{"_id": "4b119dc8a4e38824d73a9f88179935a96e001aaf", "title": "Finite Precision Error Analysis of Neural Network Hardware Implementations", "text": "29 computation. On the other hand, for network learning, at least 14-16 bits of precision must be used for the weights to avoid having the training process divert too much from the trajectory of the high precision computation. References [1] D. Hammerstrom. A VLSI architecture for high-performance, low cost, on-chip learning. Figure 10: The average squared dierences between the desired and actual outputs of the XOR problem after the network converges. The paper is devoted to the derivation of the nite precision error analysis techniques for neural network implementations, especially analysis of the back-propagation learning of MLP's. This analysis technique is proposed to be more versatile and to prepare the ground for a wider variety of neural network algorithms: recurrent neural networks, competitive learning networks, and etc. All these networks share similar computational mechanisms as those used in back-propagation learning. For the forward retrieving operations, it is shown that 8-bit weights are sucient to maintain the same performance as using high precision 27 ture of the XOR problem, a soft convergence is good enough for the termination of training. Therefore, at the predicted point of 12-13 bits of weights, the squared dierence curve dives. Another interesting observation is worthwhile to mention: the total nite precision error in a single iteration of weight updating is mainly generated in the nal jamming operators in the computation of the output delta, hidden delta, and weight update. Therefore, even though it is required to have at least 13 to 16 bits assigned to the computation of the weight update and stored as the total weight value, the number of weight bits in the computation of forward retrieving and hidden delta steps of learning can be as low as 8 bits without excessive degradation of learning convergence and accuracy."}
{"_id": "662089d17a0f4518b08b76274aa842134990134f", "title": "Rotation Invariant Multi-View Color Face Detection Based on Skin Color and Adaboost Algorithm", "text": "As the training of Adaboost is complicated or does not work well in the multi-view face detection with large plane-rotated angle, this paper proposes a rotation invariant multi-view color face detection method combining skin color segmentation and multi-view Adaboost algorithm. First the possible face region is fast detected by skin color table, and the skin-color background adhesion of face is separated by color clustering. After region merging, the candidate face region is received. Then the face direction in plane is calculated by K-L transform, and finally the candidate face region corrected by rotating is scanned by multi-view Adaboost classifier to locate the face accurately. The experiments show that the method can effectively detect the plane large-angle multi-view face image which the conventional Adaboost can not do. It can be effectively applied to the cases of multi-view and multi-face image with complex background."}
{"_id": "021f37e9da69ea46fba9d2bf4e7ca3e8ba7b3448", "title": "Amorphous Silicon Solar Vivaldi Antenna", "text": "An ultrawideband solar Vivaldi antenna is proposed. Cut from amorphous silicon cells, it maintains a peak power at 4.25 V, which overcomes a need for lossy power management components. The wireless communications device can yield solar energy or function as a rectenna for dual-source energy harvesting. The solar Vivaldi performs with 0.5-2.8 dBi gain from 0.95-2.45 GHz , and in rectenna mode, it covers three bands for wireless energy scavenging."}
{"_id": "505869faa8fa30b4d767a5de452fa0057d364c5d", "title": "Automating metadata generation: the simple indexing interface", "text": "In this paper, we focus on the development of a framework for automatic metadata generation. The first step towards this framework is the definition of an Application Programmer Interface (API), which we call the Simple Indexing Interface (SII). The second step is the definition of a framework for implementation of the SII. Both steps are presented in some detail in this paper. We also report on empirical evaluation of the metadata that the SII and supporting framework generated in a real-life context."}
{"_id": "b17d5cf76dc8a51b44de13f4ad02e801c489775f", "title": "Four common anatomic variants that predispose to unfavorable rhinoplasty results: a study based on 150 consecutive secondary rhinoplasties.", "text": "A retrospective study was conducted of 150 consecutive secondary rhinoplasty patients operated on by the author before February of 1999, to test the hypothesis that four anatomic variants (low radix/low dorsum, narrow middle vault, inadequate tip projection, and alar cartilage malposition) strongly predispose to unfavorable rhinoplasty results. The incidences of each variant were compared with those in 50 consecutive primary rhinoplasty patients. Photographs before any surgery were available in 61 percent of the secondary patients; diagnosis in the remaining individuals was made from operative reports, physical diagnosis, or patient history. Low radix/low dorsum was present in 93 percent of the secondary patients and 32 percent of the primary patients; narrow middle vault was present in 87 percent of the secondary patients and 38 percent of the primary patients; inadequate tip projection was present in 80 percent of the secondary patients and 31 percent of the primary patients; and alar cartilage malposition was present in 42 percent of the secondary patients and 18 percent of the primary patients. In the 150-patient secondary group, the most common combination was the triad of low radix, narrow middle vault, and inadequate tip projection (40 percent of patients). The second largest group (27 percent) had shared all four anatomic points before their primary rhinoplasties. Seventy-eight percent of the secondary patients had three or all four anatomic variants in some combination; each secondary patient had at least one of the four traits; 99 percent had two or more. Seventy-eight percent of the primary patients had at least two variants, and 58 percent had three or more. Twenty-two percent of the primary patients had none of the variants and therefore would presumably not be predisposed to unfavorable results following traditional reduction rhinoplasty. This study supports the contention that four common anatomic variants, if unrecognized, are strongly associated with unfavorable results following primary rhinoplasty. It is important for all surgeons performing rhinoplasty to recognize these anatomic variants to avoid the unsatisfactory functional and aesthetic sequelae that they may produce by making their correction a deliberate part of each preoperative surgical plan."}
{"_id": "732dfd5dd20af6c5d78878c841e49322e1c4d607", "title": "Categorization of Anomalies in Smart Manufacturing Systems to Support the Selection of Detection Mechanisms", "text": "An important issue in anomaly detection in smart manufacturing systems is the lack of consistency in the formal definitions of anomalies, faults, and attacks. The term anomaly is used to cover a wide range of situations that are addressed by different types of solutions. In this letter, we categorize anomalies in machines, controllers, and networks along with their detection mechanisms, and unify them under a common framework to aid in the identification of potential solutions. The main contribution of the proposed categorization is that it allows the identification of gaps in anomaly detection in smart manufacturing systems."}
{"_id": "6bc4b1376ec2812b6d752c4f6bc8d8fd0512db91", "title": "Multimodal Machine Learning: A Survey and Taxonomy", "text": "Our experience of the world is multimodal - we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced and a research problem is characterized as multimodal when it includes multiple such modalities. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research."}
{"_id": "2a3e10b0ff84f36ae642b9431c9be408703533c7", "title": "NCDawareRank: a novel ranking method that exploits the decomposable structure of the web", "text": "Research about the topological characteristics of the hyperlink graph has shown that Web possesses a nested block structure, indicative of its innate hierarchical organization. This crucial observation opens the way for new approaches that can usefully regard Web as a Nearly Completely Decomposable(NCD) system; In recent years, such approaches gave birth to various efficient methods and algorithms that exploit NCD from a computational point of view and manage to considerably accelerate the extraction of the PageRank vector. However, very little have been done towards the qualitative exploitation of NCD.\n In this paper we propose NCDawareRank, a novel ranking method that uses the intuition behind NCD to generalize and refine PageRank. NCDawareRank considers both the link structure and the hierarchical nature of the Web in a way that preserves the mathematically attractive characteristics of PageRank and at the same time manages to successfully resolve many of its known problems, including Web Spamming Susceptibility and Biased Ranking of Newly Emerging Pages. Experimental results show that NCDawareRank is more resistant to direct manipulation, alleviates the problems caused by the sparseness of the link graph and assigns more reasonable ranking scores to newly added pages, while maintaining the ability to be easily implemented on a large-scale and in a computationally efficient manner."}
{"_id": "de1eac60ef1c6af41746430469fe69f6e8a3d258", "title": "Motion Detection Techniques Using Optical Flow", "text": "Motion detection is very important in image processing. One way of detecting motion is using optical flow. Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components. A second constraint is needed. The method used for finding the optical flow in this project is assuming that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. This technique is later used in developing software for motion detection which has the capability to carry out four types of motion detection. The motion detection software presented in this project also can highlight motion region, count motion level as well as counting object numbers. Many objects such as vehicles and human from video streams can be recognized by applying optical flow technique. Keywords\u2014Background modeling, Motion detection, Optical flow, Velocity smoothness constant, motion trajectories."}
{"_id": "33871c9019e44864e79e27255a676f48eb4c4f8f", "title": "What \u2019 s good for the goose is good for the GANder Comparing Generative Adversarial Networks for NLP", "text": "Generative Adversarial Nets (GANs), which use discriminators to help train a generative model, have been successful particularly in computer vision for generating images. However, there are many restrictions in its applications to natural language tasks\u2013mainly, it is difficult to back-propagate through discrete-value random variables. Yet recent publications have applied GAN with promising results. Sequence GAN (Yu et al. 2017) introduces a solution by modeling the data generator as a reinforcement learning (RL) policy to overcome the generator differentiation problem, with the RL reward signals produced by the discriminator after it judges complete sequences. However, problems with this model persist, as the GAN training objective is inherently unstable, producing a large variation of results that make it difficult to fool the discriminator. Maximum-Likelihood Augmented Discrete GAN (Che at al. 2017) suggests a new low-variance objective for the generator, using a normalized reward signal from the discriminator that corresponds to log-likelihood. Our project explores both proposed implementations: we produce experimental results on both synthetic and real-world discrete datasets to explore the effectiveness of GAN over strong baselines."}
{"_id": "a84b3a4a97b5af474fe81af7f8e98048f59f5d2f", "title": "Design and implementation of intrusion detection system using convolutional neural network for DoS detection", "text": "Nowadays, network is one of the essential parts of life, and lots of primary activities are performed by using the network. Also, network security plays an important role in the administrator and monitors the operation of the system. The intrusion detection system (IDS) is a crucial module to detect and defend against the malicious traffics before the system is affected. This system can extract the information from the network system and quickly indicate the reaction which provides real-time protection for the protected system. However, detecting malicious traffics is very complicating because of their large quantity and variants. Also, the accuracy of detection and execution time are the challenges of some detection methods. In this paper, we propose an IDS platform based on convolutional neural network (CNN) called IDS-CNN to detect DoS attack. Experimental results show that our CNN based DoS detection obtains high accuracy at most 99.87%. Moreover, comparisons with other machine learning techniques including KNN, SVM, and Na\u00efve Bayes demonstrate that our proposed method outperforms traditional ones."}
{"_id": "44a81a98cb3d25aafc1fa96372f0a68486d648a2", "title": "Is multiagent deep reinforcement learning the answer or the question? A brief survey", "text": "Deep reinforcement learning (DRL) has achieved outstanding results in recent years. This has led to a dramatic increase in the number of applications and methods. Recent works have explored learning beyond single-agent scenarios and have considered multiagent scenarios. Initial results report successes in complex multiagent domains, although there are several challenges to be addressed. In this context, first, this article provides a clear overview of current multiagent deep reinforcement learning (MDRL) literature. Second, it provides guidelines to complement this emerging area by (i) showcasing examples on how methods and algorithms from DRL and multiagent learning (MAL) have helped solve problems in MDRL and (ii) providing general lessons learned from these works. We expect this article will help unify and motivate future research to take advantage of the abundant literature that exists in both areas (DRL and MAL) in a joint effort to promote fruitful research in the multiagent community."}
{"_id": "460054a0540e6ad59711d16f2323743a055f7be5", "title": "Identifiable Phenotyping using Constrained Non-Negative Matrix Factorization", "text": "This work proposes a new algorithm for automated and simultaneous phenotyping of multiple co\u2013occurring medical conditions, also referred as comorbidities, using clinical notes from the electronic health records (EHRs). A basic latent factor estimation technique of non-negative matrix factorization (NMF) is augmented with domain specific constraints to obtain sparse latent factors that are anchored to a fixed set of chronic conditions. The proposed anchoring mechanism ensures a one-to-one identifiable and interpretable mapping between the latent factors and the target comorbidities. Qualitative assessment of the empirical results by clinical experts suggests that the proposed model learns clinically interpretable phenotypes while being predictive of 30 day mortality. The proposed method can be readily adapted to any non-negative EHR data across various healthcare institutions."}
{"_id": "69296a15df81fd853e648d160a329cbd9c0050c8", "title": "Integrating perceived playfulness into expectation-confirmation model for web portal context", "text": "This paper investigated the value of including \"playfulness\" in expectation-confirmation theory (ECT) when studying continued use of a web site. Original models examined cognitive beliefs and effects that influence a person's intention to continue to use an information system. Here, an extended ECT model (with an additional relationship between perceived playfulness and satisfaction) was shown to provide a better fit than a simple path from perceived usefulness to satisfaction. The results indicated that perceived playfulness, confirmation to satisfaction, and perceived usefulness all contributed significantly to the users' intent to reuse a web site. Thus, we believe that the extended ECT model is an appropriate tool for the study of web site effects."}
{"_id": "e481de52378f366d75fa78cff438d1f37842f0aa", "title": "Survey of review spam detection using machine learning techniques", "text": "Online reviews are often the primary factor in a customer\u2019s decision to purchase a product or service, and are a valuable source of information that can be used to determine public opinion on these products or services. Because of their impact, manufacturers and retailers are highly concerned with customer feedback and reviews. Reliance on online reviews gives rise to the potential concern that wrongdoers may create false reviews to artificially promote or devalue products and services. This practice is known as Opinion (Review) Spam, where spammers manipulate and poison reviews (i.e., making fake, untruthful, or deceptive reviews) for profit or gain. Since not all online reviews are truthful and trustworthy, it is important to develop techniques for detecting review spam. By extracting meaningful features from the text using Natural Language Processing (NLP), it is possible to conduct review spam detection using various machine learning techniques. Additionally, reviewer information, apart from the text itself, can be used to aid in this process. In this paper, we survey the prominent machine learning techniques that have been proposed to solve the problem of review spam detection and the performance of different approaches for classification and detection of review spam. The majority of current research has focused on supervised learning methods, which require labeled data, a scarcity when it comes to online review spam. Research on methods for Big Data are of interest, since there are millions of online reviews, with many more being generated daily. To date, we have not found any papers that study the effects of Big Data analytics for review spam detection. The primary goal of this paper is to provide a strong and comprehensive comparative study of current research on detecting review spam using various machine learning techniques and to devise methodology for conducting further investigation."}
{"_id": "592a6d781309423ceb95502e92e577ef5656de0d", "title": "Incorporating Structural Alignment Biases into an Attentional Neural Translation Model", "text": "Neural encoder-decoder models of machine translation have achieved impressive results, rivalling traditional translation models. However their modelling formulation is overly simplistic, and omits several key inductive biases built into traditional models. In this paper we extend the attentional neural translation model to include structural biases from word based alignment models, including positional bias, Markov conditioning, fertility and agreement over translation directions. We show improvements over a baseline attentional model and standard phrase-based model over several language pairs, evaluating on difficult languages in a low resource setting."}
{"_id": "6eedf0a4fe861335f7f7664c14de7f71c00b7932", "title": "Neural Turing Machines", "text": "We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-toend, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples."}
{"_id": "74fdaeb2678aba886c3d899f66b4197b901483d7", "title": "Deep Neural Networks in Machine Translation: An Overview", "text": "Deep neural networks (DNNs) are widely used in machine translation (MT). This article gives an overview of DNN applications in various aspects of MT."}
{"_id": "9999de91aae516680e2364b6d5bfeeec4c748616", "title": "Syntactically Guided Neural Machine Translation", "text": "We investigate the use of hierarchical phrase-based SMT lattices in end-to-end neural machine translation (NMT). Weight pushing transforms the Hiero scores for complete translation hypotheses, with the full translation grammar score and full ngram language model score, into posteriors compatible with NMT predictive probabilities. With a slightly modified NMT beam-search decoder we find gains over both Hiero and NMT decoding alone, with practical advantages in extending NMT to very large input and output vocabularies."}
{"_id": "e810ddd9642db98492bd6a28b08a8655396c1555", "title": "Facing facts: neuronal mechanisms of face perception.", "text": "The face is one of the most important stimuli carrying social meaning. Thanks to the fast analysis of faces, we are able to judge physical attractiveness and features of their owners' personality, intentions, and mood. From one's facial expression we can gain information about danger present in the environment. It is obvious that the ability to process efficiently one's face is crucial for survival. Therefore, it seems natural that in the human brain there exist structures specialized for face processing. In this article, we present recent findings from studies on the neuronal mechanisms of face perception and recognition in the light of current theoretical models. Results from brain imaging (fMRI, PET) and electrophysiology (ERP, MEG) show that in face perception particular regions (i.e. FFA, STS, IOA, AMTG, prefrontal and orbitofrontal cortex) are involved. These results are confirmed by behavioral data and clinical observations as well as by animal studies. The developmental findings reviewed in this article lead us to suppose that the ability to analyze face-like stimuli is hard-wired and improves during development. Still, experience with faces is not sufficient for an individual to become an expert in face perception. This thesis is supported by the investigation of individuals with developmental disabilities, especially with autistic spectrum disorders (ASD)."}
{"_id": "6ceea52b14f26eb39dc7b9e9da3be434f9c3c173", "title": "Differential neuronal plasticity in mouse hippocampus associated with various periods of enriched environment during postnatal development", "text": "Enriched environment (EE) is characterized by improved conditions for enhanced exploration, cognitive activity, social interaction and physical exercise. It has been shown that EE positively regulates the remodeling of neural circuits, memory consolidation, long-term changes in synaptic strength and neurogenesis. However, the fine mechanisms by which environment shapes the brain at different postnatal developmental stages and the duration required to induce such changes are still a matter of debate. In EE, large groups of mice were housed in bigger cages and were given toys, nesting materials and other equipment that promote physical activity to provide a stimulating environment. Weaned mice were housed in EE for 4, 6 or 8\u00a0weeks and compared with matched control mice that were raised in a standard environment. To investigate the differential effects of EE on immature and mature brains, we also housed young adult mice (8\u00a0weeks old) for 4\u00a0weeks in EE. We studied the influence of onset and duration of EE housing on the structure and function of hippocampal neurons. We found that: (1) EE enhances neurogenesis in juvenile, but not young adult mice; (2) EE increases the number of synaptic contacts at every stage; (3) long-term potentiation (LTP) and spontaneous and miniature activity at the glutamatergic synapses are affected differently by EE depending on its onset and duration. Our study provides an integrative view of the role of EE during postnatal development in various mechanisms of plasticity in the hippocampus including neurogenesis, synaptic morphology and electrophysiological parameters of synaptic connectivity. This work provides an explanation for discrepancies found in the literature about the effects of EE on LTP and emphasizes the importance of environment on hippocampal plasticity."}
{"_id": "49f39786b87dddbf088ae202f2bdad46668387e3", "title": "Flash memory cells-an overview", "text": "The aim of this paper is to give a thorough overview of Flash memory cells. Basic operations and charge-injection mechanisms that are most commonly used in actual Flash memory cells are reviewed to provide an understanding of the underlying physics and principles in order to appreciate the large number of device structures, processing technologies, and circuit designs presented in the literature. New cell structures and architectural solutions have been surveyed to highlight the evolution of the Flash memory technology, oriented to both reducing cell size and upgrading product functions. The subject is of extreme interest: new concepts involving new materials, structures, principles, or applications are being continuously introduced. The worldwide semiconductor memory market seems ready to accept many new applications in fields that are not specific to traditional nonvolatile memories."}
{"_id": "a20882df2f267c12e541d441281249dc2aee5fd0", "title": "On the Global Convergence of Majorization Minimization Algorithms for Nonconvex Optimization Problems", "text": "In this paper, we study the global convergence of majorization minimization (MM) algorithms for solving nonconvex regularized optimization problems. MM algorithms have received great attention in machine learning. However, when applied to nonconvex optimization problems, the convergence of MM algorithms is a challenging issue. We introduce theory of the KurdykaLojasiewicz inequality to address this issue. In particular, we show that many nonconvex problems enjoy the KurdykaLojasiewicz property and establish the global convergence result of the corresponding MM procedure. We also extend our result to a well known method that called CCCP (concave-convex procedure)."}
{"_id": "2ffe2450366f9d6ff704964297ea624ce30b0f4d", "title": "An Overview on Mobile Data Mining", "text": "In early days the mobile phones are considered to perform only telecommunication operation. This scenario of mobile phones as communication devices changed with the emergence of a new class of mobile devices called the smart phones. These smart phones in addition to being used as a communication device are capable of doing things that a computer does. In recent times the smart phone are becoming more and more powerful in both computing and storage aspects. The data generated by the smart phone provide a means to get new knowledge about various aspects like usage, movement of the user etc. This paper provides an introduction to Mobile Data Mining and its types."}
{"_id": "1f8560da84454fe3128c76d043d84e0a9a749dcd", "title": "Cognitive Neuroscience : implications for education ?", "text": "Research into the functioning of the human brain, particularly during the past decade, has greatly enhanced our understanding of cognitive behaviours which are fundamental to education: learning, memory, intelligence, emotion. Here, we argue the case that research findings from cognitive neuroscience hold implications for educational practice. In doing so we advance a bio-psycho-social position that welcomes multi-disciplinary perspectives on current educational challenges. We provide some examples of research implications which support conventional pedagogic wisdom, and others which are novel and perhaps counter-intuitive. As an example, we take a model of adaptive plasticity that relies on stimulus reinforcement and examine possible implications for pedagogy and curriculum depth. In doing so, we reject some popular but over-simplistic applications of neuroscience to education. In sum, the education profession could benefit from embracing rather than ignoring cognitive neuroscience. Moreover, educationists should be actively contributing to the research agenda of future brain research."}
{"_id": "9ebe089caca6d78ff525856c7a828884724b9039", "title": "Bayesian Reinforcement Learning in Factored POMDPs", "text": "Bayesian approaches provide a principled solution to the explorationexploitation trade-off in Reinforcement Learning. Typical approaches, however, either assume a fully observable environment or scale poorly. This work introduces the Factored Bayes-Adaptive POMDP model, a framework that is able to exploit the underlying structure while learning the dynamics in partially observable systems. We also present a belief tracking method to approximate the joint posterior over state and model variables, and an adaptation of the Monte-Carlo Tree Search solution method, which together are capable of solving the underlying problem near-optimally. Our method is able to learn efficiently given a known factorization or also learn the factorization and the model parameters at the same time. We demonstrate that this approach is able to outperform current methods and tackle problems that were previously infeasible."}
{"_id": "0f7129cb21af9e7a894b1e32a4b68dd3b80c80f8", "title": "Training techniques to improve endurance exercise performances.", "text": "In previously untrained individuals, endurance training improves peak oxygen uptake (VO2peak), increases capillary density of working muscle, raises blood volume and decreases heart rate during exercise at the same absolute intensity. In contrast, sprint training has a greater effect on muscle glyco(geno)lytic capacity than on muscle mitochondrial content. Sprint training invariably raises the activity of one or more of the muscle glyco(geno)lytic or related enzymes and enhances sarcolemmal lactate transport capacity. Some groups have also reported that sprint training transforms muscle fibre types, but these data are conflicting and not supported by any consistent alteration in sarcoplasmic reticulum Ca2+ ATPase activity or muscle physicochemical H+ buffering capacity. While the adaptations to training have been studied extensively in previously sedentary individuals, far less is known about the responses to high-intensity interval training (HIT) in already highly trained athletes. Only one group has systematically studied the reported benefits of HIT before competition. They found that >or=6 HIT sessions, was sufficient to maximally increase peak work rate (W(peak)) values and simulated 40 km time-trial (TT(40)) speeds of competitive cyclists by 4 to 5% and 3.0 to 3.5%, respectively. Maximum 3.0 to 3.5% improvements in TT(40) cycle rides at 75 to 80% of W(peak) after HIT consisting of 4- to 5-minute rides at 80 to 85% of W(peak) supported the idea that athletes should train for competition at exercise intensities specific to their event. The optimum reduction or 'taper' in intense training to recover from exhaustive exercise before a competition is poorly understood. Most studies have shown that 20 to 80% single-step reductions in training volume over 1 to 4 weeks have little effect on exercise performance, and that it is more important to maintain training intensity than training volume. Progressive 30 to 75% reductions in pool training volume over 2 to 4 weeks have been shown to improve swimming performances by 2 to 3%. Equally rapid exponential tapers improved 5 km running times by up to 6%. We found that a 50% single-step reduction in HIT at 70% of W(peak) produced peak approximately 6% improvements in simulated 100 km time-trial performances after 2 weeks. It is possible that the optimum taper depends on the intensity of the athletes' preceding training and their need to recover from exhaustive exercise to compete. How the optimum duration of a taper is influenced by preceding training intensity and percentage reduction in training volume warrants investigation."}
{"_id": "254149281ea88ee3cbd8d629993cd18ab077dada", "title": "Multi-View Inpainting for Image-Based Scene Editing and Rendering", "text": "We propose a method to remove objects such as people and cars from multi-view urban image datasets, enabling free-viewpoint IBR in the edited scenes. Our method combines information from multi-view 3D reconstruction with image inpainting techniques, by formulating the problem as an optimization of a global patch-based objective function. We use Image-Based Rendering (IBR) techniques to reproject information from neighboring views, and 3D multi-view stereo reconstruction to perform multiview coherent initialization for inpainting of pixels not filled by reprojection. Our algorithm performs multi-view consistent inpainting for color and 3D by blending reprojections with patch-based image inpainting. We run our algorithm on casually captured datasets, and Google StreetViewdata, removing objects cars, people and pillars, showing that our approach produces results of sufficient quality for free-viewpoint IBR on \"cleaned up\" scenes, as well as IBR scene editing, such as limited motion of real objects."}
{"_id": "8368d2fc947cf6ac46a1d251d1895f2f87c7d498", "title": "Habanero-Java: the new adventures of old X10", "text": "In this paper, we present the Habanero-Java (HJ) language developed at Rice University as an extension to the original Java-based definition of the X10 language. HJ includes a powerful set of task-parallel programming constructs that can be added as simple extensions to standard Java programs to take advantage of today's multi-core and heterogeneous architectures. The language puts a particular emphasis on the usability and safety of parallel constructs. For example, no HJ program using async, finish, isolated, and phaser constructs can create a logical deadlock cycle. In addition, the future and data-driven task variants of the async construct facilitate a functional approach to parallel programming. Finally, any HJ program written with async, finish, and phaser constructs that is data-race free is guaranteed to also be deterministic.\n HJ also features two key enhancements that address well known limitations in the use of Java in scientific computing --- the inclusion of complex numbers as a primitive data type, and the inclusion of array-views that support multidimensional views of one-dimensional arrays. The HJ compiler generates standard Java class-files that can run on any JVM for Java 5 or higher. The HJ runtime is responsible for orchestrating the creation, execution, and termination of HJ tasks, and features both work-sharing and work-stealing schedulers. HJ is used at Rice University as an introductory parallel programming language for second-year undergraduate students. A wide variety of benchmarks have been ported to HJ, including a full application that was originally written in Fortran 90. HJ has a rich development and runtime environment that includes integration with DrJava, the addition of a data race detection tool, and service as a target platform for the Intel Concurrent Collections coordination language"}
{"_id": "0d2c4723e9e5925cde74bd879611fda6f6e3980b", "title": "BigBench: towards an industry standard benchmark for big data analytics", "text": "There is a tremendous interest in big data by academia, industry and a large user base. Several commercial and open source providers unleashed a variety of products to support big data storage and processing. As these products mature, there is a need to evaluate and compare the performance of these systems.\n In this paper, we present BigBench, an end-to-end big data benchmark proposal. The underlying business model of BigBench is a product retailer. The proposal covers a data model and synthetic data generator that addresses the variety, velocity and volume aspects of big data systems containing structured, semi-structured and unstructured data. The structured part of the BigBench data model is adopted from the TPC-DS benchmark, which is enriched with semi-structured and unstructured data components. The semi-structured part captures registered and guest user clicks on the retailer's website. The unstructured data captures product reviews submitted online. The data generator designed for BigBench provides scalable volumes of raw data based on a scale factor. The BigBench workload is designed around a set of queries against the data model. From a business prospective, the queries cover the different categories of big data analytics proposed by McKinsey. From a technical prospective, the queries are designed to span three different dimensions based on data sources, query processing types and analytic techniques.\n We illustrate the feasibility of BigBench by implementing it on the Teradata Aster Database. The test includes generating and loading a 200 Gigabyte BigBench data set and testing the workload by executing the BigBench queries (written using Teradata Aster SQL-MR) and reporting their response times."}
{"_id": "be410f931d5369a5bdadff8f94ddea7fb25869ce", "title": "Colostrum avoidance, prelacteal feeding and late breast-feeding initiation in rural Northern Ethiopia.", "text": "OBJECTIVE\nTo identify specific cultural and behavioural factors that might be influenced to increase colostrum feeding in a rural village in Northern Ethiopia to improve infant health.\n\n\nDESIGN\nBackground interviews were conducted with six community health workers and two traditional birth attendants. A semi-structured tape-recorded interview was conducted with twenty mothers, most with children under the age of 5 years. Variables were: parental age and education; mother's ethnicity; number of live births and children's age; breast-feeding from birth through to weaning; availability and use of formula; and descriptions of colostrum v. other stages of breast milk. Participant interviews were conducted in Amharic and translated into English.\n\n\nSETTING\nKossoye, a rural Amhara village with high prevalence rates of stunting: inappropriate neonatal feeding is thought to be a factor.\n\n\nSUBJECTS\nWomen (20-60 years of age) reporting at least one live birth (range: 1-8, mean: \u223c4).\n\n\nRESULTS\nColostrum (inger) and breast milk (yetut wotet) were seen as different substances. Colostrum was said to cause abdominal problems, but discarding a portion was sufficient to mitigate this effect. Almost all (nineteen of twenty) women breast-fed and twelve (63 %) reported ritual prelacteal feeding. A majority (fifteen of nineteen, 79 %) reported discarding colostrum and breast-feeding within 24 h of birth. Prelacteal feeding emerged as an additional factor to be targeted through educational intervention.\n\n\nCONCLUSIONS\nTo maximize neonatal health and growth, we recommend culturally tailored education delivered by community health advocates and traditional health practitioners that promotes immediate colostrum feeding and discourages prelacteal feeding."}
{"_id": "450a6b3e27869595df328a4c2df8f0c37610a669", "title": "High gain antipodal tapered slot antenna With sine-shaped corrugation and fermi profile substrate slotted cut-out for MMW 5G", "text": "A high gain, high-efficiency antipodal tapered slot antenna with sine-shaped corrugation and a Fermi profile substrate cut-out has been developed for 5G millimeter wave (MMW) communications. A parametric study of a new substrate cutout with Fermi profile is demonstrated to reduce the sidelobe level at the E-plane and H-plane as well as to increase antenna efficiency by an average of 88% over a band of 20-40 GHz. A low-cost printed circuit board (PCB) is processed simply with a CNC Milling machine to fabricate the proposed antenna with Fermi profile substrate cut-out. The measured reflection coefficient is found to be less than -14 dB over a frequency range of 20-40 GHz. Furthermore, the measured gain of the proposed antenna is 17 dB at 30 GHz and the measured radiation pattern and gain is almost constant within the wide bandwidth from 30-40 GHz. Therefore, this antenna is proposed for use in an H-plane array structure such as for point-to-point communication systems, a switched-beam system."}
{"_id": "48f3552e17bcebb890e4b1f19c9a2c1fa362800f", "title": "The staircase to terrorism: a psychological exploration.", "text": "To foster a more in-depth understanding of the psychological processes leading to terrorism, the author conceptualizes the terrorist act as the final step on a narrowing staircase. Although the vast majority of people, even when feeling deprived and unfairly treated, remain on the ground floor, some individuals climb up and are eventually recruited into terrorist organizations. These individuals believe they have no effective voice in society, are encouraged by leaders to displace aggression onto out-groups, and become socialized to see terrorist organizations as legitimate and out-group members as evil. The current policy of focusing on individuals already at the top of the staircase brings only short-term gains. The best long-term policy against terrorism is prevention, which is made possible by nourishing contextualized democracy on the ground floor."}
{"_id": "b3a18280f63844e2178d8f82bc369fcf3ae6d161", "title": "Reducing gender bias in word embeddings", "text": "Word embedding is a popular framework that represents text data as vectors of real numbers. These vectors capture semantics in language, and are used in a variety of natural language processing and machine learning applications. Despite these useful properties, word embeddings derived from ordinary language corpora necessarily exhibit human biases [6]. We measure direct and indirect gender bias for occupation word vectors produced by the GloVe word embedding algorithm [9], then modify this algorithm to produce an embedding with less bias to mitigate amplifying the bias in downstream applications utilizing this embedding."}
{"_id": "79ccfb918bf0b6ccbe83b552e1b8266eb05e8f53", "title": "Text Mining of Tweet for Sentiment Classification and Association with Stock Prices", "text": "In present days, the social media and networking act as one of the key platforms for sharing information and opinions. Many people share ideas, express their view points and opinions on various topic of their interest. Social media text has rich information about the companies, their products and various services offered by them. In this research we focus exploring the association of sentiments of social media text and stock prices of a company. The tweets of several company has been extracted and performed sentiment classification using Na\u00efve Bayes classifier and SVM classifier. To perform the classification, N-gram based feature vectors are constructed using important words of tweets. Further, the pattern of association between number of tweets which are positive or negative and stock prices has been explored. Motivated by such an association, the features related to tweets such as number of positive, negative, neutral tweets and total number of tweets are used to predict the stock market status using Support Vector Machine classifier."}
{"_id": "45f6e665105fe0b3c311ab4f0bf5f6bf8738f242", "title": "History and State of the Art in Commercial Electric Ship Propulsion, Integrated Power Systems, and Future Trends", "text": "Electric propulsion has emerged as one of the most efficient propulsion arrangements for several vessel types over the last decades. Even though examples can be found in the history at the end of 19th century, and further into the 20th century, the modern use of electric propulsion started in the 1980s along with the development of semiconductor switching devices to be used in high power drives (dc drives and later ac-to-ac drives). This development opened up for full rpm control of propellers and thrusters, and thereby enabling a simplification of the mechanical structure. However, the main reason for using electric propulsion in commercial ship applications is the potential for fuel savings compared to equivalent mechanical alternatives, except for icebreakers where the performance of an electric powered propeller is superior to a combustion engine powered propeller. The fuel saving potential lies within the fact that the applicable vessels have a highly varying operation profile and are seldom run at full power. This favors the power plant principle in which electric power can be produced at any time with optimum running of prime movers, e.g., diesel engines, by turning on and off units depending on the power demand for propulsion and other vessel loads. Icebreakers were among the first vessels to take advantage of this technology later followed by cruise vessel, and the offshore drilling vessels operating with dynamic positioning (DP). The converter technology was rapidly developing and soon the dc drives were replaced with ac drives. In the same period electric propulsion emerged as basic standard for large cruise liners, and DP operated drilling vessels, but also found its way into other segments as shuttle tankers, ferries, and other special vessels. At the same time podded propulsion were introduced, where the electric motor was mounted directly on the propeller shaft in a submerged 360 $^{\\circ}$ steerable pod, adding better efficiency, improved maneuvering, and reduced installation space/cost to the benefits of electric propulsion. The future trends are now focusing on further optimization of efficiency by allowing multiple energy sources, independent operation of individual power producers, and energy storage for various applications, such as power back up, peak shaving, or emission free operation (short voyages)."}
{"_id": "80bb5c9119ff4fc2374103b4f3d6a8f614b3c2ed", "title": "Shining the Floodlights on Mobile Web Tracking \u2014 A Privacy Survey", "text": "We present the first published large-scale study of mobile web tracking. We compare tracking across five physical and emulated mobile devices with one desktop device as a benchmark. Our crawler is based on FourthParty; however, our architecture avoids clearing state which has the benefit of continual observation of (and by) third-parties. We confirm many intuitive predictions and report a few surprises. The lists of top third-party domains across different categories devices are substantially similar; we found surprisingly few mobile-specific ad networks. The use of JavaScript by tracking domains increases gradually as we consider more powerful devices. We also analyze cookie longevity by device. Finally, we analyze a curious phenomenon of cookies that are used to store information about the user\u2019s browsing history on the client. Mobile tracking appears to be an under-researched area, and this paper is only a first step. We have made our code and data available at http://webtransparency.org/ for others to build on."}
{"_id": "c1b5a34aa3a8052378b2a4a33c98c02eed24ff1a", "title": "Automated Trajectory Planner of Industrial Robot for Pick-and-Place Task", "text": "Industrial robots, due to their great speed, precision and cost\u2010effectiveness in repetitive tasks, now tend to be used in place of human workers in automated manufacturing systems. In particular, they perform the pick\u2010and\u2010place operation, a non\u2010value\u2010added activity which at the same time cannot be eliminated. Hence, minimum time is an important consideration for economic reasons in the trajectory planning system of the manipulator. The trajectory should also be smooth to handle parts precisely in applications such as semiconductor manufacturing, processing and handling of chemicals and medicines, and fluid and aerosol deposition. In this paper, an automated trajectory planner is proposed to determine a smooth, minimum\u2010time and collision\u2010free trajectory for the pick\u2010and\u2010place operations of a 6\u2010DOF robotic manipulator in the presence of an obstacle. Subsequently, it also proposes an algorithm for the jerk\u2010 bounded Synchronized Trigonometric S\u2010curve Trajectory (STST) and the \u2018forbidden\u2010sphere\u2019 technique to avoid the obstacle. The proposed planner is demonstrated with suitable examples and comparisons. The experiments show that the proposed planner is capable of providing a smoother trajectory than the cubic spline based trajectory."}
{"_id": "4811ccadaf63fa92ab04b0080403084098fa4c02", "title": "Progressive muscle relaxation reduces migraine frequency and normalizes amplitudes of contingent negative variation (CNV)", "text": "Central information processing, visible in evoked potentials like the contingent negative variation (CNV) is altered in migraine patients who exhibit higher CNV amplitudes and a reduced habituation. Both characteristics were shown to be normalized under different prophylactic migraine treatment options whereas Progressive Muscle Relaxation (PMR) has not yet been examined. We investigated the effect of PMR on clinical course and CNV in migraineurs in a quasi-randomized, controlled trial. Thirty-five migraine patients and 46 healthy controls were examined. Sixteen migraineurs and 21 healthy participants conducted a 6-week PMR-training with CNV-measures before and after as well as three months after PMR-training completion. The remaining participants served as controls. The clinical course was analyzed with two-way analyses of variance (ANOVA) with repeated measures. Pre-treatment CNV differences between migraine patients and healthy controls were examined with t-tests for independent measures. The course of the CNV-parameters was examined with three-way ANOVAs with repeated measures. After PMR-training, migraine patients showed a significant reduction of migraine frequency. Preliminary to the PMR-training, migraine patients exhibited higher amplitudes in the early component of the CNV (iCNV) and the overall CNV (oCNV) than healthy controls, but no differences regarding habituation. After completion of the PMR-training, migraineurs showed a normalization of the iCNV amplitude, but neither of the oCNV nor of the habituation coefficient. The results confirm clinical efficacy of PMR for migraine prophylaxis. The pre-treatment measure confirms altered cortical information processing in migraine patients. Regarding the changes in the iCNV after PMR-training, central nervous mechanisms of the PMR-effect are supposed which may be mediated by the serotonin metabolism."}
{"_id": "10eb7bfa7687f498268bdf74b2f60020a151bdc6", "title": "Visualizing Data using t-SNE", "text": "We present a new technique called \u201ct-SNE\u201d that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets."}
{"_id": "666b639aadcd2a8a11d24b36bae6a4f07e802b34", "title": "Tailoring Continuous Word Representations for Dependency Parsing", "text": "Word representations have proven useful for many NLP tasks, e.g., Brown clusters as features in dependency parsing (Koo et al., 2008). In this paper, we investigate the use of continuous word representations as features for dependency parsing. We compare several popular embeddings to Brown clusters, via multiple types of features, in both news and web domains. We find that all embeddings yield significant parsing gains, including some recent ones that can be trained in a fraction of the time of others. Explicitly tailoring the representations for the task leads to further improvements. Moreover, an ensemble of all representations achieves the best results, suggesting their complementarity."}
{"_id": "7f0eb2cb332a8ce5fafaa7c280b5c5ab9c7ca95a", "title": "A Universal Part-of-Speech Tagset", "text": "To facilitate future research in unsupervised induction of syntactic structure and to standardize best-practices, we propose a tagset that consists of twelve universal part-of-speech categories. In addition to the tagset, we develop a mapping from 25 different treebank tagsets to this universal set. As a result, when combined with the original treebank data, this universal tagset and mapping produce a dataset consisting of common parts-of-speech for 22 different languages. We highlight the use of this resource via three experiments, that (1) compare tagging accuracies across languages, (2) present an unsupervised grammar induction approach that does not use gold standard part-of-speech tags, and (3) use the universal tags to transfer dependency parsers between languages, achieving state-of-the-art results."}
{"_id": "013cd20c0eaffb9cab80875a43086e0c3224fe20", "title": "Representation Learning: A Review and New Perspectives", "text": "The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning."}
{"_id": "0183b3e9d84c15c7048e6c2149ed86257ccdc6cb", "title": "Dependency-Based Word Embeddings", "text": "While continuous word embeddings are gaining popularity, current models are based solely on linear contexts. In this work, we generalize the skip-gram model with negative sampling introduced by Mikolov et al. to include arbitrary contexts. In particular, we perform experiments with dependency-based contexts, and show that they produce markedly different embeddings. The dependencybased embeddings are less topical and exhibit more functional similarity than the original skip-gram embeddings."}
{"_id": "d0399694b31c933d66cd2a6896afbd5aa7bc097d", "title": "Intentional rounding: facilitators, benefits and barriers.", "text": "AIMS AND OBJECTIVES\nTo describe the implementation, practice and sustainability of Intentional Rounding (IR) within two diverse settings (aged care and maternity).\n\n\nBACKGROUND\nThe profile of patients in hospitals has changed over time, generally being more severe, placing heavy demands on nurses' time. Routine non-urgent care is often provided only when there is time. IR has been found to increase both patient and staff satisfaction, also resulting in improved patient outcomes such as reduced falls and call bell use. IR is also used as a time management tool for safe and reliable provision of routine care.\n\n\nMETHODS\nThis descriptive qualitative research study comprised of three focus groups in a metropolitan hospital.\n\n\nRESULTS\nFifteen nurses participated in three focus groups. Seven main themes emerged from the thematic analysis of the verbatim transcripts: implementation and maintenance, how IR works, roles and responsibilities, context and environment, benefits, barriers and legal issues.\n\n\nCONCLUSION\nIR was quickly incorporated into normal practice, with clinicians being able to describe the main concepts and practices. IR was seen as a management tool, facilitating accountability and continuity of management support being essential for sustainability. Clinicians reported increases in patient and staff satisfaction, and the opportunity to provide patient education. While patient type and acuity, ward layout and staff experience affected the practice of IR, the principles of IR are robust enough to allow for differences in the ward specialty and patient type. However, care must be taken when implementing IR to reduce the risk of alienating experienced staff. Incorporation of IR charts into the patient health care record is recommended.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nEngaging all staff, encouraging ownership and stability of management are key factors in the successful implementation and maintenance of IR. IR is flexible and robust enough to accommodate different patient types and acuity."}
{"_id": "d424f718d4a56318106f7c935aedd5a66bb7112a", "title": "Large-scale genome-wide association analysis of bipolar disorder identifies a new susceptibility locus near ODZ4", "text": "We conducted a combined genome-wide association study (GWAS) of 7,481 individuals with bipolar disorder (cases) and 9,250 controls as part of the Psychiatric GWAS Consortium. Our replication study tested 34 SNPs in 4,496 independent cases with bipolar disorder and 42,422 independent controls and found that 18 of 34 SNPs had P < 0.05, with 31 of 34 SNPs having signals with the same direction of effect (P = 3.8 \u00d7 10\u22127). An analysis of all 11,974 bipolar disorder cases and 51,792 controls confirmed genome-wide significant evidence of association for CACNA1C and identified a new intronic variant in ODZ4. We identified a pathway comprised of subunits of calcium channels enriched in bipolar disorder association intervals. Finally, a combined GWAS analysis of schizophrenia and bipolar disorder yielded strong association evidence for SNPs in CACNA1C and in the region of NEK4-ITIH1-ITIH3-ITIH4. Our replication results imply that increasing sample sizes in bipolar disorder will confirm many additional loci."}
{"_id": "503f347333b09c13f32fd8db7296d21aad032c86", "title": "Realization of a Filter with Helical Components *", "text": "In the VHF range, high-quality narrow-band filters with reasonable physical dimensions are extremely d8icult to realize. Excessive pass-band insertion loss accompanies filters employing lumped constant elements, and unreasonable size is a natural consequence of coaxial-resonator filters. Harmonic crystal filters are inadequate because of the unpredictable amount of spurious modes above the harmonic frequency; and hence they can only be used for very low percentage bandwidths. A solution to the above problems is provided by the use of helical resonators for high-quality filters. This paper describes the helical resonator, measurement of its Q, and coupling to the resonator. A procedure for the design and construction of a filter using helical circuits is then presented. Finally an example is given illustrating the design method, and several photographs of helical-resonator filters are shown."}
{"_id": "3c30b496bc5f680d46094ea108feb4368ade4d07", "title": "Accuracy in Random Number Generation", "text": "The generation of continuous random variables on a digital computer encounters a problem of accuracy caused by approximations and discretization error. These in turn impose a bias on the simulation results. An ideal discrete approximation of a continuous distribution and a measure of error are proposed. Heuristic analysis of common methods for transforming uniform deviates to other continuous random variables is discussed. Comments and recommendations are made for the design of algorithms to reduce the bias and avoid overflow problems."}
{"_id": "0c2c405ed2e9fa48c4b782ab8a4da1c06c1b132e", "title": "Low power 3\u20132 and 4\u20132 adder compressors implemented using ASTRAN", "text": "This paper presents two adder compressors architectures addressing high-speed and low power. Adder compressors are used to implement arithmetic circuits such as multipliers and digital signal processing units like the Fast Fourier Transform (FTT). To address the objective of high-speed and low power, it is well known that optimization efforts should be applied in all abstraction levels. In this paper are combined optimizations at logic, electrical and physical level. At the logic level, the circuit is optimized by using multiplexers instead of XOR gates to reduce delay, power and area. At the electrical level, this work presents an architecture that generate the XOR and XNOR signals simultaneously, this reduce internal glitches hence dynamic power as well. And finally at the physical level, and automatic layout generation tool (ASTRAN) is used to make the adder compressors layouts. This tool has proved to reduce power consumption and delay due to the smaller input capacitances of the complex gates generated compared to manual-designed layouts."}
{"_id": "12b7cc398d67b2a17ace0b0b79363e9a646f8bcb", "title": "Review and Comparative Study of Clustering Techniques", "text": "Clustering is an automatic learning technique which aims at grouping a set of objects into clusters so that objects in the same clusters should be similar as possible, whereas objects in one cluster should be as dissimilar as possible from objects in other clusters. Document clustering aims to group in an unsupervised way, a given document set into clusters such that documents within each clusters are more similar between each other than those in different clusters. Cluster analysis aims to organize a collection of patterns into clusters based on similarity. This paper focuses on survey of various clustering techniques. These techniques can be divided into several categories: Partitional algorithms, Hierarchical algorithms, Density based, and comparison of various algorithm is surveyed and shows how Hierarchical Clustering can be better than other techniques."}
{"_id": "25fdcb2969d0a6ee660f2c98391725ff98867f3f", "title": "Defect detection and identification in textile fabrics using Multi Resolution Combined Statistical and Spatial Frequency Method", "text": "In textile industry, reliable and accurate quality control and inspection becomes an important element. Presently, this is still accomplished by human experience, which is more time consuming and is also prone to errors. Hence automated visual inspection systems become mandatory in textile industries. This Paper presents a novel algorithm of fabric defect detection by making use of Multi Resolution Combined Statistical and Spatial Frequency Method. Defect detection consists of two phases, first is the training and next is the testing phase. In the training phase, the reference fabric images are cropped into non-overlapping sub-windows. By applying MRCSF the features of the textile fabrics are extracted and stored in the database. During the testing phase the same procedure is applied for test fabric and the features are compared with database information. Based on the comparison results, each sub-window is categorized as defective or non-defective. The classification rate obtained by the process of simulation using MATLAB was found to be 99%."}
{"_id": "b6dc8b5e100482058323194c82aa30a1cf0f97fc", "title": "Architecture for the Internet of Things (IoT): API and Interconnect", "text": "The proposed architecture is structured into a secure API, a backbone, and separate device networks with standard interface to the backbone. The API decouples innovation of services and service logic from protocols and network elements. It also enables service portability between systems, i.e. a service may be allocated to end-systems or servers, with possible relocation and replication throughout its lifecycle. Ubiquitous service provisioning depends on interoperability, not only for provisioning of a standard QoS controlled IP bearer, but also for cross domain naming, security, mobility, multicast, location, routing and management, including fair compensation for utility provisioning. The proposed architecture not only includes these critical elements but also caters for multi-homing, mobile networks with dynamic membership, and third party persistent storage based on indirection."}
{"_id": "ef8f45bcfbc98dbd7b434a2342adbd66c3182b54", "title": "Organophosphate Pesticide Exposure and Attention in Young Mexican-American Children: The CHAMACOS Study", "text": "BACKGROUND\nExposure to organophosphate (OP) pesticides, well-known neurotoxicants, has been associated with neurobehavioral deficits in children.\n\n\nOBJECTIVES\nWe investigated whether OP exposure, as measured by urinary dialkyl phosphate (DAP) metabolites in pregnant women and their children, was associated with attention-related outcomes among Mexican-American children living in an agricultural region of California.\n\n\nMETHODS\nChildren were assessed at ages 3.5 years (n = 331) and 5 years (n = 323). Mothers completed the Child Behavior Checklist (CBCL). We administered the NEPSY-II visual attention subtest to children at 3.5 years and Conners' Kiddie Continuous Performance Test (K-CPT) at 5 years. The K-CPT yielded a standardized attention deficit/hyperactivity disorder (ADHD) Confidence Index score. Psychometricians scored behavior of the 5-year-olds during testing using the Hillside Behavior Rating Scale.\n\n\nRESULTS\nPrenatal DAPs (nanomoles per liter) were nonsignificantly associated with maternal report of attention problems and ADHD at age 3.5 years but were significantly related at age 5 years [CBCL attention problems: \u03b2 = 0.7 points; 95% confidence interval (CI), 0.2-1.2; ADHD: \u03b2 = 1.3; 95% CI, 0.4-2.1]. Prenatal DAPs were associated with scores on the K-CPT ADHD Confidence Index > 70th percentile [odds ratio (OR) = 5.1; 95% CI, 1.7-15.7] and with a composite ADHD indicator of the various measures (OR = 3.5; 95% CI, 1.1-10.7). Some outcomes exhibited evidence of effect modification by sex, with associations found only among boys. There was also limited evidence of associations between child DAPs and attention.\n\n\nCONCLUSIONS\nIn utero DAPs and, to a lesser extent, postnatal DAPs were associated adversely with attention as assessed by maternal report, psychometrician observation, and direct assessment. These associations were somewhat stronger at 5 years than at 3.5 years and were stronger in boys."}
{"_id": "921cefa2e8363b3be6d7522ad82a506a2a835f01", "title": "A Survey on Query Processing and Optimization in Relational Database Management System", "text": "The query processer and optimizer is an important component in today\u2019s relational database management system. This component is responsible for translating a user query, usually written in a non-procedural language like SQL \u2013 into an efficient query evaluation program that can be executed against database. In this paper, we identify many of the common issues, themes, and approaches that extend this work and the settings in which each piece of work is most appropriate. Our goal with this paper is to be a \u201cvalue-add\u201d over the existing papers on the material, providing not only a brief overview of each technique, but also a basic framework for understating the field of query processing and optimization in general."}
{"_id": "2a361cc98ff78de97fcd2f86e9ababb95c922eae", "title": "A randomized controlled trial of postcrisis suicide prevention.", "text": "OBJECTIVE\nThis study tested the hypothesis that professionals' maintenance of long-term contact with persons who are at risk of suicide can exert a suicide-prevention influence. This influence was hypothesized to result from the development of a feeling of connectedness and to be most pertinent to high-risk individuals who refuse to remain in the health care system.\n\n\nMETHODS\nA total of 3,005 persons hospitalized because of a depressive or suicidal state, populations known to be at risk of subsequent suicide, were contacted 30 days after discharge about follow-up treatment. A total of 843 patients who had refused ongoing care were randomly divided into two groups; persons in one group were contacted by letter at least four times a year for five years. The other group-the control group-received no further contact. A follow-up procedure identified patients who died during the five-year contact period and during the subsequent ten years. Suicide rates in the contact and no-contact groups were compared.\n\n\nRESULTS\nPatients in the contact group had a lower suicide rate in all five years of the study. Formal survival analyses revealed a significantly lower rate in the contact group (p=.04) for the first two years; differences in the rates gradually diminished, and by year 14 no differences between groups were observed.\n\n\nCONCLUSIONS\nA systematic program of contact with persons who are at risk of suicide and who refuse to remain in the health care system appears to exert a significant preventive influence for at least two years. Diminution of the frequency of contact and discontinuation of contact appear to reduce and eventually eliminate this preventive influence."}
{"_id": "3c5a8d0742e9bc24f53e3da8ae52659d3fe2c5fd", "title": "An AAL system based on IoT technologies and linked open data for elderly monitoring in smart cities", "text": "The average age growing of the urban population, with an increasing number of 65+ citizens, is calling for the cities to provide global services specifically geared to elderly people. In this context, collecting data from the elderly's environment and his/her habits and making them available in a structured way to third parties for analysis, is the first step towards the realization of innovative user-centric services. This paper presents a city-wide general IoT-based sensing infrastructure and a data management layer providing some REST and Linked Open Data Application Programming Interfaces (APIs) that collect and present data related to elderly people. In particular, this architecture is used by the H2020 City4Age project to help geriatricians in identifying the onset of Mild Cognitive Impairment (MCI) disease."}
{"_id": "08a6e999532544e83618c16a96f6d4c7356bc140", "title": "Bio fl oc technology in aquaculture : Bene fi cial effects and future challenges", "text": ""}
{"_id": "3b0cfc0cc3fc3bee835147786c749de5ab1f8ed2", "title": "Streaming Big Data Processing in Datacenter Clouds", "text": "Today, we live in a digital universe in which information and technology are not only around us but also play important roles in dictating the quality of our lives. As we delve deeper into this digital universe, we're witnessing explosive growth in the variety, velocity, and volume of data1,2 being transmitted over the Internet. A zetabyte of data passed through the Internet in the past year; IDC predicts that this digital universe will explode to an unimaginable eight Zbytes by 2015. These data are and will be generated mainly from Internet search, social media, mobile devices, the Internet of Things, business transactions, next-generation radio astronomy telescopes, high-energy physics synchrotron, and content distribution. Government and business organizations are now overflowing with data, easily aggregating to terabytes or even petabytes of information."}
{"_id": "ec04f476a416172988ba822de313eecd72f2e223", "title": "Integration of alternative sources of energy as current sources", "text": "This work proposes an integration of alternative energy sources having the different generating modules connected as current sources. These sources were configured to inject current into a common DC bus. This feature avoids current circulation among sources due to the voltage differences, reducing the stress in the individual generators. A battery bank is a system part, used as an energy accumulator for periods when there is primary energy depletion. The bus voltage is controlled by modulation of a secondary load in parallel with the main load. A DC-DC Boost converter connects each primary source to a common bus, except for the storage system that uses a DC-DC Buck-Boost converter. Finally, it is presented a mathematical analysis through simulated results to show the sources behavior and overall performance when turned into current sources."}
{"_id": "91e82ab0f5d0b8d2629084641de607578057c98e", "title": "Isolation and characterization of microsatellite loci in the black pepper, Piper nigrum L. (piperaceae)", "text": "The black pepper, Piper nigrum L., which originated in \u00cdndia, is the World\u2019s most important commercial spice. Brazil has a germplasm collection of this species preserved at the Brazilian Agricultural Research Corporation (Embrapa\u2014Eastern Amazonia) where efforts are being made to generation information on the patterns of genetic variation and develop strategies for conservation and management of black pepper. Molecular markers of the SSR type are powerful tools for the description of material preserved in genetic resources banks, due to characteristics such as high levels of polymorphism, codominance and Mendelian segregation. Given this, we developed nine microsatellite markers from an enriched library of Piper nigrum L. Twenty varieties clonal from the Brazilian germplasm collection were analyzed, and observed and expected heterozygosity values ranged over 0.11\u20131.00 and 0.47\u20130.87, respectively. The nine microsatellite loci characterized here will contribute to studies of genetic diversity and conservation of Piper nigrum L."}
{"_id": "3215cd14ee1a559eec1513b97ab1c7e6318bd5af", "title": "Global Linear Neighborhoods for Efficient Label Propagation", "text": "Graph-based semi-supervised learning improves classification by combining labeled and unlabeled data through label propagation. It was shown that the sparse representation of graph by weighted local neighbors provides a better similarity measure between data points for label propagation. However, selecting local neighbors can lead to disjoint components and incorrect neighbors in graph, and thus, fail to capture the underlying global structure. In this paper, we propose to learn a nonnegative low-rank graph to capture global linear neighborhoods, under the assumption that each data point can be linearly reconstructed from weighted combinations of its direct neighbors and reachable indirect neighbors. The global linear neighborhoods utilize information from both direct and indirect neighbors to preserve the global cluster structures, while the low-rank property retains a compressed representation of the graph. An efficient algorithm based on a multiplicative update rule is designed to learn a nonnegative low-rank factorization matrix minimizing the neighborhood reconstruction error. Large scale simulations and experiments on UCI datasets and high-dimensional gene expression datasets showed that label propagation based on global linear neighborhoods captures the global cluster structures better and achieved more accurate classification results."}
{"_id": "1e88f725cdcf0ca38678267b88b41ab55fb968f8", "title": "The standard operating procedure of the DOE-JGI Microbial Genome Annotation Pipeline (MGAP v.4)", "text": "The DOE-JGI Microbial Genome Annotation Pipeline performs structural and functional annotation of microbial genomes that are further included into the Integrated Microbial Genome comparative analysis system. MGAP is applied to assembled nucleotide sequence datasets that are provided via the IMG submission site. Dataset submission for annotation first requires project and associated metadata description in GOLD. The MGAP sequence data processing consists of feature prediction including identification of protein-coding genes, non-coding RNAs and regulatory RNA features, as well as CRISPR elements. Structural annotation is followed by assignment of protein product names and functions."}
{"_id": "9beb7cea71ef7da73852465734f8303df1e1c970", "title": "Feature-level fusion of palmprint and palm vein for person identification based on a \u201cJunction Point\u201d representation", "text": "The issue of how to represent the palm features for effective classification is still an open problem. In this paper, we propose a novel palm representation, the \"junction points\" (JP) set, which is formed by the two set of line segments extracted from the registered palmprint and palm vein images respectively. Unlike the existing approaches, the JP set, containing position and orientation information, is a more compact feature that significantly reduces the storage requirement. We compare the proposed JP approach with the line-based methods on a large dataset. Experimental results show that the proposed JP approach provides a better representation and achieves lower error rate in palm verification."}
{"_id": "75578b7c5f015ca57f4a7d6ecfef902e993b1c7d", "title": "Multilevel Analysis Techniques and Applications", "text": "The purpose of this series is to present methodological techniques to investigators and students from all functional areas of business, although individuals from other disciplines will also find the series useful. Each volume in the series will focus on a specific method The goal is to provide an understanding and working knowledge of each method with a minimum of mathematical derivations. Proposals are invited from all interested authors. Each proposal should consist of the following: (i) a brief description of the volume's focus and intended market, (ii) a table of contents with an outline of each chapter, and (iii) a curriculum vita. Materials may be sent to Dr. Contents Preface ix 1. Introduction to multilevel analysis 1 1.1. Why do we need special multilevel analysis techniques? 5 1.2. Multilevel theories 7 1.3. Models described in this book 8"}
{"_id": "7e49a6f11a8843b2ff5bdbf7cf95617c6219f757", "title": "Multi-Modal Fusion for Moment in Time Video Classification", "text": "Action recognition in videos remains a challenging problem in the machine learning community. Particularly challenging is the differing degree of intra-class variation between actions: While background information is enough to distinguish certain classes, many others are abstract and require fine-grained knowledge for discrimination. To approach this problem, in this work we evaluate different modalities on the recently published Moments in Time dataset, a collection of one million videos of short length."}
{"_id": "bc7bcec88a3b5aeb7e3ecb0e1e584f6b29b9fc09", "title": "Band-Notched Small Square-Ring Antenna With a Pair of T-Shaped Strips Protruded Inside the Square Ring for UWB Applications", "text": "A novel printed monopole antenna with constant gain over a wide bandwidth for ultra wideband applications with desired notch-band characteristic is presented. The proposed antenna consists of a square-ring radiating patch with a pair of T-shaped strips protruded inside the square ring and a coupled T-shaped strip and a ground plane with a protruded strip, which provides a wide us able fractional bandwidth of more than 130% (3.07-14.6 GHz). By using the square-ring radiating patch with a pair of T-shaped strips protruded inside it, the frequency bandstop performance is generated, and we can control its characteristics such as band-notch frequency and its bandwidth by electromagnetically adjusting coupling between a pair of T-shaped strips protruded inside the square ring. The designed antenna has a small size of 12 \u00d7 18 mm2, or about 0.15\u03bb \u00d7 0.25\u03bb at 4.2 GHz, while showing the band-rejection performance in the frequency band of 5.05-5.95 GHz."}
{"_id": "56d986c576d37c2f9599cabb8ba5a59660971045", "title": "P300 brain computer interface: current challenges and emerging trends", "text": "A brain-computer interface (BCI) enables communication without movement based on brain signals measured with electroencephalography (EEG). BCIs usually rely on one of three types of signals: the P300 and other components of the event-related potential (ERP), steady state visual evoked potential (SSVEP), or event related desynchronization (ERD). Although P300 BCIs were introduced over twenty years ago, the past few years have seen a strong increase in P300 BCI research. This closed-loop BCI approach relies on the P300 and other components of the ERP, based on an oddball paradigm presented to the subject. In this paper, we overview the current status of P300 BCI technology, and then discuss new directions: paradigms for eliciting P300s; signal processing methods; applications; and hybrid BCIs. We conclude that P300 BCIs are quite promising, as several emerging directions have not yet been fully explored and could lead to improvements in bit rate, reliability, usability, and flexibility."}
{"_id": "79b7d4de7b1d6c41ad3cf0a7996ce79acf98c8dd", "title": "Event Recognition in Videos by Learning from Heterogeneous Web Sources", "text": "In this work, we propose to leverage a large number of loosely labeled web videos (e.g., from YouTube) and web images (e.g., from Google/Bing image search) for visual event recognition in consumer videos without requiring any labeled consumer videos. We formulate this task as a new multi-domain adaptation problem with heterogeneous sources, in which the samples from different source domains can be represented by different types of features with different dimensions (e.g., the SIFT features from web images and space-time (ST) features from web videos) while the target domain samples have all types of features. To effectively cope with the heterogeneous sources where some source domains are more relevant to the target domain, we propose a new method called Multi-domain Adaptation with Heterogeneous Sources (MDA-HS) to learn an optimal target classifier, in which we simultaneously seek the optimal weights for different source domains with different types of features as well as infer the labels of unlabeled target domain data based on multiple types of features. We solve our optimization problem by using the cutting-plane algorithm based on group based multiple kernel learning. Comprehensive experiments on two datasets demonstrate the effectiveness of MDA-HS for event recognition in consumer videos."}
{"_id": "067f9352c60a9e4cf98c58a4ddf0984d31cf9b46", "title": "Planning optimal paths for multiple robots on graphs", "text": "In this paper, we study the problem of optimal multi-robot path planning (MPP) on graphs. We propose two multiflow based integer linear programming (ILP) models that compute minimum last arrival time and minimum total distance solutions for our MPP formulation, respectively. The resulting algorithms from these ILP models are complete and guaranteed to yield true optimal solutions. In addition, our flexible framework can easily accommodate other variants of the MPP problem. Focusing on the time optimal algorithm, we evaluate its performance, both as a stand alone algorithm and as a generic heuristic for quickly solving large problem instances. Computational results confirm the effectiveness of our method."}
{"_id": "34df85f4db9d1389c63da17f3ffbb7af1ed2ea0c", "title": "Coordinating Hundreds of Cooperative, Autonomous Vehicles in Warehouses", "text": "The Kiva warehouse management system creates a new paradigm for pick-pack-and-ship warehouses that significantly improves worker productivity. The Kiva system uses movable storage shelves that can be lifted by small, autonomous robots. By bringing the product to the worker, productivity is increased by a factor of two or more, while simultaneously improving accountability and flexibility. A Kiva installation for a large distribution center may require 500 or more vehicles. As such, the Kiva system represents the first commercially available, large-scale autonomous robot system. The first permanent installation of a Kiva system was deployed in the summer of 2006."}
{"_id": "679efbf29c911f786c84dc210f7cc5a8fc166b70", "title": "Multi-agent Path Planning and Network Flow", "text": "This paper connects multi-agent path planning on graphs (roadmaps) to network flow problems, showing that the former can be reduced to the latter, therefore enabling the application of combinatorial network flow algorithms, as well as general linear program techniques, to multi-agent path planning problems on graphs. Exploiting this connecti on, we show that when the goals are permutation invariant, the problem always has a feasible solution path set with a longes t finish time of no more than n+V \u2212 1 steps, in which n is the number of agents andV is the number of vertices of the underlying graph. We then give a complete algorithm that finds such a solution in O(nVE) time, with E being the number of edges of the graph. Taking a further step, we study time and distance optimality of the feasible solutions, show that th ey have a pairwise Pareto optimal structure, and again provide efficient algorithms for optimizing two of these practical objectives."}
{"_id": "0c35a65a99af8202fe966c5e7bee00dea7cfcbf8", "title": "Experiences with an Interactive Museum Tour-Guide Robot", "text": "This article describes the software architecture of an auto nomous, interactive tour-guide robot. It presents a modular and distributed software archi te ture, which integrates localization, mapping, collision avoidance, planning, and vari ous modules concerned with user interaction and Web-based telepresence. At its heart, the s oftware approach relies on probabilistic computation, on-line learning, and any-time alg orithms. It enables robots to operate safely, reliably, and at high speeds in highly dynamic environments, and does not require any modifications of the environment to aid the robot \u2019s peration. Special emphasis is placed on the design of interactive capabilities that appeal to people\u2019s intuition. The interface provides new means for human-robot interaction w ith crowds of people in public places, and it also provides people all around the world with the ability to establish a \u201cvirtual telepresence\u201d using the Web. To illustrate our approac h, results are reported obtained in mid-1997, when our robot \u201cRHINO\u201d was deployed for a period of six days in a densely populated museum. The empirical results demonstrate relia bl operation in public environments. The robot successfully raised the museum\u2019s atten dance by more than 50%. In addition, thousands of people all over the world controlled the robot through the Web. We conjecture that these innovations transcend to a much large r range of application domains for service robots."}
{"_id": "13dcb5efadfc4575abb8ee5ef6f52d565e08de1d", "title": "Principles of Robot Motion: Theory, Algorithms, and Implementations [Book Review]", "text": "introduction to autonomous mobile robots intelligent robotics and autonomous agents series PDF disaster robotics intelligent robotics and autonomous agents series PDF probabilistic robotics intelligent robotics and autonomous agents series PDF self-reconfigurable robots an introduction intelligent robotics and autonomous agents series PDF mechanics of robotic manipulation intelligent robotics and autonomous agents PDF strategic negotiation in multiagent environments intelligent robotics and autonomous agents PDF random finite sets for robot mapping & slam new concepts in autonomous robotic map representations springer tracts in advanced robotics PDF"}
{"_id": "82c0086755479360935ec73add346854df4d1304", "title": "What If You Can't Trust Your Network Card?", "text": "In the last few years, many different attacks against computing platform targeting hardware or low level firmware have been published. Such attacks are generally quite hard to detect and to defend against as they target components that are out of the scope of the operating system and may not have been taken into account in the security policy enforced on the platform. In this paper, we study the case of remote attacks against network adapters. In our case study, we assume that the target adapter is running a flawed firmware that an attacker may subvert remotely by sending packets on the network to the adapter. We study possible detection techniques and their efficiency. We show that, depending on the architecture of the adapter and the interface provided by the NIC to the host operating system, building an efficient detection framework is possible. We explain the choices we made when designing such a framework that we called NAVIS and give details on our proof of concept implementation."}
{"_id": "07ca378eacd8241cc956478d2619279895dae925", "title": "A Scalable Multiphysics Network Simulation Using PETSc DMNetwork", "text": "A scientific framework for simulations of large-scale networks, such as is required for the analysis of critical infrastructure interaction and interdependencies, is needed for applications on exascale computers. Such a framework must be able to manage heterogeneous physics and unstructured topology, and must be reusable. To this end we have developed DMNetwork, a class in PETSc that provides data and topology management and migration for network problems, along with multiphysics solvers to exploit the problem structure. It eases the application development cycle by providing the necessary infrastructure through simple abstractions to define and query the network. This paper presents the design of the DMNetwork, illustrates its user interface, and demonstrates its ability to solve large network problems through the numerical simulation of a water pipe network with more than 2 billion variables on extreme-scale computers using up to 30,000 processor cores."}
{"_id": "f26e5e0e58b4b05d4632b4a3418c3f9305d091bf", "title": "3D Integration technology: Status and application development", "text": "As predicted by the ITRS roadmap, semiconductor industry development dominated by shrinking transistor gate dimensions alone will not be able to overcome the performance and cost problems of future IC fabrication. Today 3D integration based on through silicon vias (TSV) is a well-accepted approach to overcome the performance bottleneck and simultaneously shrink the form factor. Several full 3D process flows have been demonstrated, however there are still no microelectronic products based on 3D TSV technologies in the market \u2014 except CMOS image sensors. 3D chip stacking of memory and logic devices without TSVs is already widely introduced in the market. Applying TSV technology for memory on logic will increase the performance of these advanced products and simultaneously shrink the form factor. In addition to the enabling of further improvement of transistor integration densities, 3D integration is a key technology for integration of heterogeneous technologies. Miniaturized MEMS/IC products represent a typical example for such heterogeneous systems demanding for smart system integration rather than extremely high transistor integration densities. The European 3D technology platform that has been established within the EC funded e-CUBES project is focusing on the requirements coming from heterogeneous systems. The selected 3D integration technologies are optimized concerning the availability of devices (packaged dies, bare dies or wafers) and the requirements of performance and form factor. There are specific technology requirements for the integration of MEMS/NEMS devices which differ from 3D integrated ICs (3D-IC). While 3D-ICs typically show a need for high interconnect densities and conductivities, TSV technologies for the integration of MEMS to ICs may result in lower electrical performance but have to fulfill other requirements, e. g. mechanical stability issues. 3D integration of multiple MEMS/IC stacks was successfully demonstrated for the fabrication of miniaturized sensor systems (e-CUBES), as for automotive, health & fitness and aeronautic applications."}
{"_id": "44562e355ef846d094aea9218e1b88fb56196f9e", "title": "Taxonomy of attacks on industrial control protocols", "text": "Industrial control systems (ICS) are highly distributed information systems used to control and monitor critical infrastructures such as nuclear plants, power generation and distribution plants, Oil and Gas and many other facilities. The main architecture principles of ICS are; real time response, high availability and reliability. For these specific purposes, several protocols has been designed to ensure the control and supervision operations. Modbus and DNP3 are the most used protocols in the ICS world due to their compliance with real time needs. With the increasing of the connectivity to the internet world for business reasons, ICS adopted Internet based technologies and most of communication protocols are redesigned to work over IP. This openness exposed the ICS components as well as communication protocols to cyber-attacks with a higher risk than attacks on traditional IT systems. In order to facilitate the risk assessment of cyber-attacks on ICS protocols we propose a taxonomy model of different identified attacks on Modbus and DNP3.the model is based on the threat origin, threat type, attack type, attack scenario, vulnerability type and the impact of the attack. We populate this Taxonomy model with identified attacks on Modbus and DNP3 from previous academic and industrial works."}
{"_id": "ee0db22a39d309330afdf03bd6b8e5426a1f1504", "title": "Neural-Network-Based Adaptive Leader-Following Control for Multiagent Systems With Uncertainties", "text": "A neural-network-based adaptive approach is proposed for the leader-following control of multiagent systems. The neural network is used to approximate the agent's uncertain dynamics, and the approximation error and external disturbances are counteracted by employing the robust signal. When there is no control input constraint, it can be proved that all the following agents can track the leader's time-varying state with the tracking error as small as desired. Compared with the related work in the literature, the uncertainty in the agent's dynamics is taken into account; the leader's state could be time-varying; and the proposed algorithm for each following agent is only dependent on the information of its neighbor agents. Finally, the satisfactory performance of the proposed method is illustrated by simulation examples."}
{"_id": "66479c2251088dae51c228341c26164f21250593", "title": "Some mathematical notes on three-mode factor analysis.", "text": ""}
{"_id": "5a94ab41593ed7e01fddabc6375f4a78862e1444", "title": "Ee364a: Convex Optimization I Final Exam", "text": "This is a 24 hour take-home final. Please turn it in at Bytes Cafe in the Packard building, 24 hours after you pick it up. You may use any books, notes, or computer programs, but you may not discuss the exam with anyone until March 16, after everyone has taken the exam. The only exception is that you can ask us for clarification, via the course staff email address. We've tried pretty hard to make the exam unambiguous and clear, so we're unlikely to say much. Please make a copy of your exam, or scan it, before handing it in. Please attach the cover page to the front of your exam. Assemble your solutions in order (problem 1, problem 2, problem 3,. . .), starting a new page for each problem. Put everything associated with each problem (e.g., text, code, plots) together; do not attach code or plots at the end of the final. We will deduct points from long needlessly complex solutions, even if they are correct. Our solutions are not long, so if you find that your solution to a problem goes on and on for many pages, you should try to figure out a simpler one. We expect neat, legible exams from everyone, including those enrolled Cr/N. When a problem involves computation you must give all of the following: a clear discussion and justification of exactly what you did, the source code that produces the result, and the final numerical results or plots. Files containing problem data can be found in the usual place, Please respect the honor code. Although we allow you to work on homework assignments in small groups, you cannot discuss the final with anyone, at least until everyone has taken it. All problems have equal weight. Some are easy. Others, not so much. Be sure you are using the most recent version of CVX, CVXPY, or Convex,jl. Check your email often during the exam, just in case we need to send out an important announcement. Some problems involve applications. But you do not need to know anything about the problem area to solve the problem; the problem statement contains everything you need."}
{"_id": "9e56528c948645c366044c05d5a17354745a6d52", "title": "Classical Statistics and Statistical Learning in Imaging Neuroscience", "text": "Brain-imaging research has predominantly generated insight by means of classical statistics, including regression-type analyses and null-hypothesis testing using t-test and ANOVA. Throughout recent years, statistical learning methods enjoy increasing popularity especially for applications in rich and complex data, including cross-validated out-of-sample prediction using pattern classification and sparsity-inducing regression. This concept paper discusses the implications of inferential justifications and algorithmic methodologies in common data analysis scenarios in neuroimaging. It is retraced how classical statistics and statistical learning originated from different historical contexts, build on different theoretical foundations, make different assumptions, and evaluate different outcome metrics to permit differently nuanced conclusions. The present considerations should help reduce current confusion between model-driven classical hypothesis testing and data-driven learning algorithms for investigating the brain with imaging techniques."}
{"_id": "6537b42e6f3ce6eb6ef7d775e22c0817bf74be15", "title": "A Mining and Visualizing System for Large-Scale Chinese Technical Standards", "text": "A technical standard is an established norm or requirement in regard to technical systems. It is usually a formal document that establishes uniform engineering or technical criteria, methods, processes and practices. Every year, more than 10K standards are released by standardization organizations, such as International Standardization Organization (ISO), American National Standards Institute (ANSI), and China National Institute of Standardization (CNIS). Thereinto, China publishes more than 4K national and industrial standards per year. Confronted with the large amount of standards, how to manage and make full use of these standards efficiently and effectively has become a major issue to be addressed. In this paper, we introduce a mining and visualizing system for large-scale Chinese technical standards from spatial and temporal perspectives. Specifically, we propose an s-index metric to measure the contribution of a drafting unit. Moreover, we develop a multiple dimensional mining and visualizing analysis system for standardization experts and users to explore regional differences and temporal trends of standards in China. Our system has run on 95K Chinese standards spanning from 2001 to 2016 and provided online service. The extensive experiments and case study demonstrate the effectiveness of our proposed system."}
{"_id": "c8d3dd12a002cd0bc21e39ae41d7c7b5f99981b0", "title": "Foot pressure distribution and contact duration pattern during walking at self-selected speed in young adults", "text": "Foot load observed as pressure distribution is examined in relation to a foot and ankle functions within the gait cycle. This load defines the modes of healthy and pathological gait. Determination of the patterns of healthy, i.e. \u201cnormal\u201d walk is the basis for classification of pathological modes. Eleven healthy participants were examined in this initial study. Participants walked over pressure plate barefoot at their self - selected speed. Maximal values of the pressure were recorded in the heel, the first, the second and the third metatarsal joints and in the hallux region. Largest contact duration is recorded in the metatarsal region."}
{"_id": "632a9af82186d130712086ff9b019b5dafaf02e5", "title": "SPROUT: Lazy vs. Eager Query Plans for Tuple-Independent Probabilistic Databases", "text": "A paramount challenge in probabilistic databases is the scalable computation of confidences of tuples in query results. This paper introduces an efficient secondary-storage operator for exact computation of queries on tuple-independent probabilistic databases. We consider the conjunctive queries without self-joins that are known to be tractable on any tuple-independent database, and queries that are not tractable in general but become tractable on probabilistic databases restricted by functional dependencies. Our operator is semantically equivalent to a sequence of aggregations and can be naturally integrated into existing relational query plans. As a proof of concept, we developed an extension of the PostgreSQL 8.3.3 query engine called SPROUT. We study optimizations that push or pull our operator or parts thereof past joins. The operator employs static information, such as the query structure and functional dependencies, to decide which constituent aggregations can be evaluated together in one scan and how many scans are needed for the overall confidence computation task. A case study on the TPC-H benchmark reveals that most TPC-H queries obtained by removing aggregations can be evaluated efficiently using our operator. Experimental evaluation on probabilistic TPC-H data shows substantial efficiency improvements when compared to the state of the art."}
{"_id": "ba32fded0293e79885f5cee8cbca79c345474d8b", "title": "R-ADMAD: high reliability provision for large-scale de-duplication archival storage systems", "text": "Data de-duplication has become a commodity component in data-intensive systems and it is required that these systems provide high reliability comparable to others. Unfortunately, by storing duplicate data chunks just once, de-duped system improves storage utilization at cost of error resilience or reliability. In this paper, R-ADMAD, a high reliability provision mechanism is proposed. It packs variable-length data chunks into fixed sized objects, and exploits ECC codes to encode the objects and distributes them among the storage nodes in a redundancy group, which is dynamically generated according to current status and actual failure domains. Upon failures, R-ADMAD proposes a distributed and dynamic recovery process. Experimental results show that R-ADMAD can provide the same storage utilization as RAID-like schemes, but comparable reliability to replication based schemes with much more redundancy. The average recovery time of R-ADMAD based configurations is about 2-6 times less than RAID-like schemes. Moreover, R-ADMAD can provide dynamic load balancing even without the involvement of the overloaded storage nodes."}
{"_id": "ddcd8c6b235d3d3e1f00e628b76ccc736254effb", "title": "Fast hyperparameter selection for graph kernels via subsampling and multiple kernel learning", "text": "Model selection is one of the most computationally expensive tasks in a machine learning application. When dealing with kernel methods for structures, the choice with the largest impact on the overall performance is the selection of the feature bias, i.e. the choice of the concrete kernel for structures. Each kernel in turn exposes several hyper-parameters which also need to be fine tuned. Multiple Kernel Learning offers a way to approach this computational bottleneck by generating a combination of different kernels under different parametric settings. However, this solution still requires the computation of many large kernel matrices. In this paper we propose a method to efficiently select a small number of kernels on a subset of the original data, gaining a dramatic reduction in the runtime without a significant loss of predictive performance."}
{"_id": "faeba02f45e7be56ad4ada7687fbd55657b9cc95", "title": "Houston, we have a problem...: a survey of actual problems in computer games development", "text": "This paper presents a survey of problems found in the development process of electronic games. These problems were collected mainly from game postmortems and specialized litterature on game development, allowing a comparison with respect to well-known problems in the traditional software industry."}
{"_id": "c57f871287d75f50225f51d9313af30c748dcd65", "title": "Deep Kernelized Autoencoders", "text": "In this paper we introduce the deep kernelized autoencoder, a neural network model that allows an explicit approximation of (i) the mapping from an input space to an arbitrary, user-specified kernel space and (ii) the back-projection from such a kernel space to input space. The proposed method is based on traditional autoencoders and is trained through a new unsupervised loss function. During training, we optimize both the reconstruction accuracy of input samples and the alignment between a kernel matrix given as prior and the inner products of the hidden representations computed by the autoencoder. Kernel alignment provides control over the hidden representation learned by the autoencoder. Experiments have been performed to evaluate both reconstruction and kernel alignment performance. Additionally, we applied our method to emulate kPCA on a denoising task obtaining promising results."}
{"_id": "2c521847f2c6801d8219a1a2e9f4e196798dd07d", "title": "An Efficient Algorithm for Media-based Surveillance System (EAMSuS) in IoT Smart City Framework", "text": ""}
{"_id": "d55b63e1b7ad70e3b37d5089585a7423cd245fde", "title": "An Innovative Approach to Investigate Various Software Testing Techniques and Strategies", "text": "Software testing is a way of finding errors from the system. It helps us to identify and debug mistakes, errors, faults and failures of a system. There are many techniques and strategies emerged since the concept of software development emerged. The aim of testing is to make the quality of software as efficient as possible.in this paper we discuss most widely used techniques and strategies. Where they can be used and how they can be used. How they work and how they differ (from each other).They are the following. Techniques: Black Box Testing, White Box Testing, And Grey Box Testing. Strategies: Unit Testing, System Testing, And Acceptance Testing."}
{"_id": "2dccec3c1a8a17883cece784e8f0fc0af413eb83", "title": "Online Dominant and Anomalous Behavior Detection in Videos", "text": "We present a novel approach for video parsing and simultaneous online learning of dominant and anomalous behaviors in surveillance videos. Dominant behaviors are those occurring frequently in videos and hence, usually do not attract much attention. They can be characterized by different complexities in space and time, ranging from a scene background to human activities. In contrast, an anomalous behavior is defined as having a low likelihood of occurrence. We do not employ any models of the entities in the scene in order to detect these two kinds of behaviors. In this paper, video events are learnt at each pixel without supervision using densely constructed spatio-temporal video volumes. Furthermore, the volumes are organized into large contextual graphs. These compositions are employed to construct a hierarchical codebook model for the dominant behaviors. By decomposing spatio-temporal contextual information into unique spatial and temporal contexts, the proposed framework learns the models of the dominant spatial and temporal events. Thus, it is ultimately capable of simultaneously modeling high-level behaviors as well as low-level spatial, temporal and spatio-temporal pixel level changes."}
{"_id": "699c6a8a8220f77423a881528f907ee5399ced1f", "title": "Intelligent Condition Based Monitoring Using Acoustic Signals for Air Compressors", "text": "Intelligent fault diagnosis of machines for early recognition of faults saves industry from heavy losses occurring due to machine breakdowns. This paper proposes a process with a generic data mining model that can be used for developing acoustic signal-based fault diagnosis systems for reciprocating air compressors. The process includes details of data acquisition, sensitive position analysis for deciding suitable sensor locations, signal pre-processing, feature extraction, feature selection, and a classification approach. This process was validated by developing a real time fault diagnosis system on a reciprocating type air compressor having 8 designated states, including one healthy state, and 7 faulty states. The system was able to accurately detect all the faults by analyzing acoustic recordings taken from just a single position. Additionally, thorough analysis has been presented where performance of the system is compared while varying feature selection techniques, the number of selected features, and multiclass decomposition algorithms meant for binary classifiers."}
{"_id": "c0e97ca70fe29db4ceb834464576b699ef8874b1", "title": "Recurrent-OctoMap: Learning State-Based Map Refinement for Long-Term Semantic Mapping With 3-D-Lidar Data", "text": "This letter presents a novel semantic mapping approach, Recurrent-OctoMap, learned from long-term three-dimensional (3-D) Lidar data. Most existing semantic mapping approaches focus on improving semantic understanding of single frames, rather than 3-D refinement of semantic maps (i.e. fusing semantic observations). The most widely used approach for the 3-D semantic map refinement is \u201cBayes update,\u201d which fuses the consecutive predictive probabilities following a Markov-chain model. Instead, we propose a learning approach to fuse the semantic features, rather than simply fusing predictions from a classifier. In our approach, we represent and maintain our 3-D map as an OctoMap, and model each cell as a recurrent neural network, to obtain a Recurrent-OctoMap. In this case, the semantic mapping process can be formulated as a sequence-to-sequence encoding\u2013decoding problem. Moreover, in order to extend the duration of observations in our Recurrent-OctoMap, we developed a robust 3-D localization and mapping system for successively mapping a dynamic environment using more than two weeks of data, and the system can be trained and deployed with arbitrary memory length. We validate our approach on the ETH long-term 3-D Lidar dataset. The experimental results show that our proposed approach outperforms the conventional \u201cBayes update\u201d approach."}
{"_id": "4622265755b2d4683e57c32d638bab841a4d5b45", "title": "Learning to generate chairs with convolutional neural networks", "text": "We train a generative convolutional neural network which is able to generate images of objects given object type, viewpoint, and color. We train the network in a supervised manner on a dataset of rendered 3D chair models. Our experiments show that the network does not merely learn all images by heart, but rather finds a meaningful representation of a 3D chair model allowing it to assess the similarity of different chairs, interpolate between given viewpoints to generate the missing ones, or invent new chair styles by interpolating between chairs from the training set. We show that the network can be used to find correspondences between different chairs from the dataset, outperforming existing approaches on this task."}
{"_id": "5a01793b222acb13543d48b74cb8799b08e8f581", "title": "Designing augmented reality interfaces", "text": "Most interactive computer graphics appear on a screen separate from the real world and the user's surroundings. However this does not always have to be the case. In augmented reality (AR) interfaces, three-dimensional virtual images appear superimposed over real objects. AR applications typically use head-mounted or handheld displays to make computer graphics appear in the user's environment."}
{"_id": "13f3045805cbd0a58f3abf6c9ad52515d6c10aeb", "title": "Comics on the Brain: Structure and Meaning in Sequential Image Comprehension", "text": "Just as syntax differentiates coherent sentences from scrambled word strings, sequential images must also use a cognitive system to distinguish coherent narratives from random strings of images. We conducted experiments analogous to two classic psycholinguistic studies to examine structure and semantics in sequential images. We compared Normal comic strips with both structure and meaning to sequences with Semantics Only, Structure Only, or randomly Scrambled panels. Experiment 1 used a target-monitoring paradigm, and found that RTs were slowest to target panels in Scrambled sequences, intermediate in Structural Only and Semantic Only sequences, and fastest in Normal sequences. Experiment 2 measured ERPs to the same strips. The largest N400 appeared in Scrambled and Structural Only sequences, intermediate in Semantic Only sequences and smallest in Normal sequences. Together, these findings suggest that sequential image comprehension is guided by an interaction between a structure and meaning, broadly analogous to syntax and semantics in language."}
{"_id": "1d3ddcefe4d5fefca04fe730ca73312e2c588b3b", "title": "A comparative analysis of machine learning techniques for student retention management", "text": "Student retention is an essential part of many enrollment management systems. It affects university rankings, school reputation, and financial wellbeing. Student retention has become one of the most important priorities for decision makers in higher education institutions. Improving student retention starts with a thorough understanding of the reasons behind the attrition. Such an understanding is the basis for accurately predicting at-risk students and appropriately intervening to retain them. In this study, using five years of institutional data along with several data mining techniques (both individuals as well as ensembles), we developed analytical models to predict and to explain the reasons behind freshmen student attrition. The comparative analyses results showed that the ensembles performed better than individual models, while the balanced dataset produced better prediction results than the unbalanced dataset. The sensitivity analysis of the Purchase Export Previous article Next article Check if you have access through your login credentials or your institution."}
{"_id": "60cae7e13210bbba9efc8ad42b75d491b7cc8c84", "title": "A hybrid data analytic approach to predict college graduation status and its determinative factors", "text": "Purpose \u2013 The prediction of graduation rates of college students has become increasingly important to colleges and universities across the USA and the world. Graduation rates, also referred to as completion rates, directly impact university rankings and represent a measurement of institutional performance and student success. In recent years, there has been a concerted effort by federal and state governments to increase the transparency and accountability of institutions, making \u201cgraduation rates\u201d an important and challenging university goal. In line with this, the main purpose of this paper is to propose a hybrid data analytic approach which can be flexibly implemented not only in the USA but also at various colleges across the world which would help predict the graduation status of undergraduate students due to its generic nature. It is also aimed at providing a means of determining and ranking the critical factors of graduation status. Design/methodology/approach \u2013 This study focuses on developing a novel hybrid data analytic approach to predict the degree completion of undergraduate students at a four-year public university in the USA. Via the deployment of the proposed methodology, the data were analyzed using three popular data mining classifications methods (i.e. decision trees, artificial neural networks, and support vector machines) to develop predictive degree completion models. Finally, a sensitivity analysis is performed to identify the relative importance of each predictor factor driving the graduation. Findings \u2013 The sensitivity analysis of the most critical factors in predicting graduation rates is determined to be fall-term grade-point average, housing status (on campus or commuter), and which high school the student attended. The least influential factors of graduation status are ethnicity, whether or not a student had work study, and whether or not a student applied for financial aid. All three data analytic models yielded high accuracies ranging from 71.56 to 77.61 percent, which validates the proposed model. Originality/value \u2013 This study presents uniqueness in that it presents an unbiased means of determining the driving factors of college graduation status with a flexible and powerful hybrid methodology to be implemented at other similar decision-making settings."}
{"_id": "1b3b22b95ab55853aff3ea980a5b4a76b7537980", "title": "Improved Use of Continuous Attributes in C4.5", "text": "A reported weakness of C4.5 in domains with continuous attributes is addressed by modifying the formation and evaluation of tests on continuous attributes. An MDL-inspired penalty is applied to such tests, eliminating some of them from consideration and altering the relative desirability of all tests. Empirical trials show that the modi cations lead to smaller decision trees with higher predictive accuracies. Results also con rm that a new version of C4.5 incorporating these changes is superior to recent approaches that use global discretization and that construct small trees with multi-interval splits."}
{"_id": "9fa05d7a03de28f94596f0fa5e8f107cfe4d38d7", "title": "Self-organized computation with unreliable , memristive nanodevices", "text": "Nanodevices have terrible properties for building Boolean logic systems: high defect rates, high variability, high death rates, drift, and (for the most part) only two terminals. Economical assembly requires that they be dynamical. We argue that strategies aimed at mitigating these limitations, such as defect avoidance/reconfiguration, or applying coding theory to circuit design, present severe scalability and reliability challenges. We instead propose to mitigate device shortcomings and exploit their dynamical character by building self-organizing, self-healing networks that implement massively parallel computations. The key idea is to exploit memristive nanodevice behavior to cheaply implement adaptive, recurrent networks, useful for complex pattern recognition problems. Pulse-based communication allows the designer to make trade-offs between power consumption and processing speed. Self-organization sidesteps the scalability issues of characterization, compilation and configuration. Network dynamics supplies a graceful response to device death. We present simulation results of such a network\u2014a self-organized spatial filter array\u2014that demonstrate its performance as a function of defects and device variation. (Some figures in this article are in colour only in the electronic version) 1. Nanoelectronics and computing paradigms Nanodevices are crummy1. High defect rates, high device variability, device ageing, and limitations on device complexity (e.g., two-terminal devices are much easier to build) are to be expected if we intend to mass produce nanoelectronic systems economically. Not only that, it is almost axiomatic among many researchers that such systems will be built from simple structures, such as crossbars, composed of nanodevices that must be configured to implement the desired functionality (Heath et al 1998, Williams and Kuekes 2000, Kuekes and Williams 2002, DeHon 2003, Stan et al 2003, Ziegler and Stan 2003, Snider et al 2004, DeHon 2005, Ma et al 2005, Snider 2005, Snider et al 2005, Strukov and Likharev 2005). So we are faced with the challenge of computing with devices that are not only crummy, but dynamical as well. Can reliable Boolean logic systems be built from such crummy devices? Yes, but we argue that at some point as 1 \u2018Crummy\u2019 was introduced into the technical lexicon by Moore and Shannon (1954). device dimensions scale down, the overhead and complexity become so costly that performance and density improvements will hit a barrier. In the next section we discuss two frequently proposed strategies for implementing Boolean logic with crummy, dynamical devices\u2014reconfiguration and coding theory\u2014and argue that each has severe scalability problems. This is not suggesting that logic at the nanoscale is not worth pursuing. It clearly is, and semiconductor manufacturers have the economic motivation to continue scaling down as aggressively as their profits permit. Rather we are suggesting that the \u2018killer app\u2019 for nanoelectronics lies elsewhere. An alternative computational paradigm\u2014adaptive, recurrent networks\u2014is computationally powerful and requires only two types of components, which we call \u2018edges\u2019 and \u2018nodes\u2019. Edge to node ratios are typically high, hundreds to thousands, and edges are, unfortunately, difficult to implement efficiently. 
This difficulty has made these networks extremely unpopular; software implementations are impossibly slow, and hardware implementations require far too much area. In this paper we propose using memristive nanodevices to implement edges, conventional analog and digital electronics to implement nodes, and pairs of bipolar pulses, called \u2018spikes\u2019, to implement communication. The tiny size of the nanodevices implementing edges would allow, for the first time, a practical hardware implementation of adaptive, recurrent networks. We suggest that such networks are a better architectural fit to nanoelectronics than Boolean logic circuits. They are robust in the presence of device variations and defects; they degrade gracefully as devices become defective over time, and can even \u2018self-heal\u2019 in response to internal change; they can be implemented with simple, crossbar-like structures. Just as importantly, they can be structured to self-organize their computations, sidestepping scalability problems with device characterization, compilation and configuration. Such systems can contain large numbers of defective components, but we will not need to discover where they are\u2014in fact, we will not care where they are. The system can adapt and \u2018rewire\u2019 itself around them. 2. Boolean logic is hard with crummy, dynamical devices 2.1. Reconfiguration to the rescue? Several researchers have proposed using a combination of device characterization, defect avoidance, and configuration to handle initial static defects (Stan et al 2003, DeHon 2003, Snider et al 2004, Strukov and Likharev 2005, Snider and Williams 2007). The strategy is a three-pass algorithm: (1) Characterization. Analyze every nanowire and nanodevice in the system and compile a list of resources which are defective (stuck open or stuck closed, shorted, broken, out-of-spec, etc). Such analysis algorithms were used in the Teramac system (Culbertson et al 1997). (2) Defect avoidance. Give the list of defects from pass 1 to a compiler which maps a desired circuit onto the defective fabric, routing around defective components (Culbertson et al 1997, Snider et al 2004, Strukov and Likharev 2005, Snider and Williams 2007). (3) Configuration. Give the mapping determined in pass 2 to a controller that electrically configures each of the mapped components. Since every chip will have a unique set of defects, the above process must be applied to each and every chip. This presents some interesting challenges for manufacturing, since the time required to perform the above steps will contribute to production cost. Characterization (pass 1) is problematic due to device variability\u2014the functional state of a device (or wire) is not necessarily discrete (working versus nonworking) but can lie on a continuum. And characterizing, say, 10^12 nanodevices (DeHon et al 2005) in a reasonable amount of time is not likely to be trivial, especially given the bottleneck of pins on the chip. It is not clear if existing characterization algorithms could be applied to systems like this, or how well they would scale. Compilation (pass 2) also presents considerable risk. Compiling circuits onto configurable chips (such as FPGAs) today is a time-consuming process, due to the NP-hard placement and routing problems that lie at the compiler\u2019s core.
Even circuits comprising only a few tens of thousands of gates can require several minutes to compile, depending on the degree of optimization needed\u2014and that\u2019s assuming a defect-free target, where a single compilation can be used to manufacture thousands of parts. One proposal for minimizing this problem requires an initial \u2018ideal\u2019 compilation onto a hypothetical defect-free fabric, laying out components a little more sparsely than optimal. This would be done only once for each (circuit type, chip type) combination, so one could afford to spend enormous amounts of compute time on this to arrive at this ideal configuration. The configuration of an individual chip on the production line would then be viewed as a \u2018perturbation\u2019 of the ideal configuration, with resource allocation shifted as necessary to avoid defects. One might even combine the characterization pass with this pass for further speed improvements. This strategy might be workable. But it is not clear how well this would scale, or how robust this would be in the presence of defect clustering. Configuration (pass 3) is the most straightforward, with the most significant risk being configuration time restrictions due to the pin bottleneck and power dissipation. Note that the above approach of characterize, compile, configure does not handle device death. What happens when a nanodevice stops working and the chip starts computing nonsense? If the nanodevices are reconfigurable, the system can be stopped and reconfigured to work around the newly formed defects. But that assumes additional circuitry to detect the malfunction (e.g. self-checking circuits (Wakerly 1978)), and a companion host processor capable of implementing the three passes in order to reconfigure the chip. Such a processor would have to be fast, reliable (which probably means it would not be implemented with nanodevices), and likely would require a significant amount of memory. Implementing such a coprocessor would seem to negate the benefits that one was presumably getting from the nanoscale circuit in the first place. 2.2. Coding theory to the rescue? Coding theory has been used for decades to robustly transmit information over noisy channels by adding a small amount of redundant information. Can coding theory do the same for logic circuits by adding a small number of redundant gates or components in order to achieve reliable operation? von Neumann (1956), looking forward to nanoscale computation, was perhaps the first to address this question. His approach used a code that replicated each logic gate, and combined the replicated gate outputs in a clever way so that the entire system achieved a desired level of reliability. Although his primary concern was correcting errors induced by transient faults, the approach could also compensate for initially defective devices or \u2018device deaths\u2019 as long as the failing devices were randomly distributed and the number did not exceed a threshold (the trade-off being that device deaths would reduce the system\u2019s tolerance of transient faults). The overhead for his scheme could be enormous, though. Replication factors for Boolean logic went to infinity as fault rates approached about 1%. This bound has been improved by later researchers, with"}
{"_id": "ad6f202e6129ba1e2d743dd7500da6bc9e15bf44", "title": "Controllability of complex networks via pinning.", "text": "We study the problem of controlling a general complex network toward an assigned synchronous evolution by means of a pinning control strategy. We define the pinning controllability of the network in terms of the spectral properties of an extended network topology. The roles of the control and coupling gains, as well as of the number of pinned nodes, are also discussed."}
{"_id": "84875ee58006260d3956f1bac933f1bf7851fdd6", "title": "Feature Extraction with Convolutional Neural Networks for Handwritten Word Recognition", "text": "In this paper, we show that learning features with convolutional neural networks is better than using hand-crafted features for handwritten word recognition. We consider two kinds of systems: a grapheme based segmentation and a sliding window segmentation. In both cases, the combination of a convolutional neural network with a HMM outperform a state-of-the art HMM system based on explicit feature extraction. The experiments are conducted on the Rimes database. The systems obtained with the two kinds of segmentation are complementary: when they are combined, they outperform the systems in isolation. The system based on grapheme segmentation yields lower recognition rate but is very fast, which is suitable for specific applications such as document classification."}
{"_id": "c36eb6887c13a3fbb9ec01cd5a7501c4e063de73", "title": "Design and implementation of 0.7 to 7 GHz broadband double-ridged horn antenna", "text": "In this paper, simulation and measurement results of 0.7-7 GHz double-ridged guide horn (DRGH) antenna with coaxial input feed section is presented. This antenna, due to the large frequency band required by standards, is appropriate to be used in electromagnetic compatibility (EMC) testing and antenna measurement as a transmitter. A step by step method for designing DRGH antenna is given. A suitable taper for the ridges in the horn is designed and its impedance variations along the horn are shown. In addition, a new structure for the electrical field probe in the feed section is introduced by which a shift-down of lower frequency to 0.7 GHz is achieved. A sensitivity analysis is done on the parameters of the proposed structure. Other parameters of the feed section are also investigated and optimized values are obtained. Finally, the proposed antenna has been fabricated and measurement results show good agreement with the simulation."}
{"_id": "b446e79a70cb0dcbecdfd2f8c11d2ac1bec4c2f7", "title": "An empirical study of the mechanisms of mindfulness in a mindfulness-based stress reduction program.", "text": "S. L. Shapiro and colleagues (2006) have described a testable theory of the mechanisms of mindfulness and how it affects positive change. They describe a model in which mindfulness training leads to a fundamental change in relationship to experience (reperceiving), which leads to changes in self-regulation, values clarification, cognitive and behavioral flexibility, and exposure. These four variables, in turn, result in salutogenic outcomes. Analyses of responses from participants in a mindfulness-based stress-reduction program did not support the mediating effect of changes in reperceiving on the relationship of mindfulness with those four variables. However, when mindfulness and reperceiving scores were combined, partial support was found for the mediating effect of the four variables on measures of psychological distress. Issues arising in attempts to test the proposed theory are discussed, including the description of the model variables and the challenges to their assessment."}
{"_id": "a01c855b24fea6fe8ecd229e9a3b536760c689e4", "title": "Second screen applications and tablet users: constellation, awareness, experience, and interest", "text": "This study investigates how tablet users incorporate multiple media in their television viewing experience. Three patterns are found: (a) only focusing on television, (b) confounding television viewing with other screen media (e.g. laptop, tablet) and (c) confounding television viewing with various media, including print and screen media. Furthermore, we question how the incorporation of screen media in this experience affects the practice of engaging in digital commentary on television content. Also, we inquire the uptake and interest in so-called 'second screen applications'. These applications allow extensions of the primary screen experience on secondary screens (e.g. tablet). The results, based on a sample of 260 tablet users, indicate that there is only a modest uptake and interest in using secondary screens to digitally share opinions. However, the use of second screen interaction with television content is not discarded: although there is still little awareness and experience, we notice a moderate interest in these apps."}
{"_id": "0b41a405b329198428b1ce947b86a0797b59289e", "title": "Muscle simulation for facial animation in Kong: Skull Island", "text": "For Kong: Skull Island, Industrial Light & Magic created an anatomically motivated facial simulation model for Kong that includes the facial skeleton and musculature. We applied a muscle simulation framework that allowed us to target facial shapes while maintaining desirable physical properties to ensure that the simulations stayed on-model. This allowed muscle simulations to be used as a powerful tool for adding physical detail to and improving the anatomical validity of both blendshapes and blendshape animations in order to achieve more realistic facial animation with less hand sculpting."}
{"_id": "a668a6ca3fc8a83436fc28a65a890daf6cd59762", "title": "Autonomous semantic mapping for robots performing everyday manipulation tasks in kitchen environments", "text": "In this work we report about our efforts to equip service robots with the capability to acquire 3D semantic maps. The robot autonomously explores indoor environments through the calculation of next best view poses, from which it assembles point clouds containing spatial and registered visual information. We apply various segmentation methods in order to generate initial hypotheses for furniture drawers and doors. The acquisition of the final semantic map makes use of the robot's proprioceptive capabilities and is carried out through the robot's interaction with the environment. We evaluated the proposed integrated approach in the real kitchen in our laboratory by measuring the quality of the generated map in terms of the map's applicability for the task at hand (e.g. resolving counter candidates by our knowledge processing system)."}
{"_id": "b1dd6f03a5d7a09850542453695b42665b445fa1", "title": "Conscious intention and brain activity", "text": "The problem of free will lies at the heart of modern scientific studies of consciousness. An influential series of experiments by Libet has suggested that conscious intentions arise as a result of brain activity. This contrasts with traditional concepts of free will, in which the mind controls the body. A more recent study by Haggard and Eimer has further examined the relation between intention and brain processes, concluding that conscious awareness of intention is linked to the choice or selection of a specific action, and not to the earliest initiation of action processes. The exchange of views in this paper further explores the relation between conscious intention and brain activity."}
{"_id": "275445087a85bf6d3b0f101a60b0be7ab4ed520f", "title": "Identifying emotional states using keystroke dynamics", "text": "The ability to recognize emotions is an important part of building intelligent computers. Emotionally-aware systems would have a rich context from which to make appropriate decisions about how to interact with the user or adapt their system response. There are two main problems with current system approaches for identifying emotions that limit their applicability: they can be invasive and can require costly equipment. Our solution is to determine user emotion by analyzing the rhythm of their typing patterns on a standard keyboard. We conducted a field study where we collected participants' keystrokes and their emotional states via self-reports. From this data, we extracted keystroke features, and created classifiers for 15 emotional states. Our top results include 2-level classifiers for confidence, hesitance, nervousness, relaxation, sadness, and tiredness with accuracies ranging from 77 to 88%. In addition, we show promise for anger and excitement, with accuracies of 84%."}
{"_id": "e97736b0920af2b5729bf62172a2f20a80dd5666", "title": "Efficient deep learning for stereo matching with larger image patches", "text": "Stereo matching plays an important role in many applications, such as Advanced Driver Assistance Systems, 3D reconstruction, navigation, etc. However it is still an open problem with many difficult. Most difficult are often occlusions, object boundaries, and low or repetitive textures. In this paper, we propose a method for processing the stereo matching problem. We propose an efficient convolutional neural network to measure how likely the two patches matched or not and use the similarity as their stereo matching cost. Then the cost is refined by stereo methods, such as semiglobal maching, subpixel interpolation, median filter, etc. Our architecture uses large image patches which makes the results more robust to texture-less or repetitive textures areas. We experiment our approach on the KITTI2015 dataset which obtain an error rate of 4.42% and only needs 0.8 second for each image pairs."}
{"_id": "537fcac9d4b94cd54d989cbc690d18a4a02898fb", "title": "Loading of the knee joint during activities of daily living measured in vivo in five subjects.", "text": "Detailed knowledge about loading of the knee joint is essential for preclinical testing of implants, validation of musculoskeletal models and biomechanical understanding of the knee joint. The contact forces and moments acting on the tibial component were therefore measured in 5 subjects in vivo by an instrumented knee implant during various activities of daily living. Average peak resultant forces, in percent of body weight, were highest during stair descending (346% BW), followed by stair ascending (316% BW), level walking (261% BW), one legged stance (259% BW), knee bending (253% BW), standing up (246% BW), sitting down (225% BW) and two legged stance (107% BW). Peak shear forces were about 10-20 times smaller than the axial force. Resultant forces acted almost vertically on the tibial plateau even during high flexion. Highest moments acted in the frontal plane with a typical peak to peak range -2.91% BWm (adduction moment) to 1.61% BWm (abduction moment) throughout all activities. Peak flexion/extension moments ranged between -0.44% BWm (extension moment) and 3.16% BWm (flexion moment). Peak external/internal torques lay between -1.1% BWm (internal torque) and 0.53% BWm (external torque). The knee joint is highly loaded during daily life. In general, resultant contact forces during dynamic activities were lower than the ones predicted by many mathematical models, but lay in a similar range as measured in vivo by others. Some of the observed load components were much higher than those currently applied when testing knee implants."}
{"_id": "46fb1214b2303c61d1684c167b1add1996e15313", "title": "TrendLearner: Early Prediction of Popularity Trends of User Generated Content", "text": "Accurately predicting the popularity of user generated content (UGC) is very valuable to content providers, online advertisers, as well as social media and social network researchers. However, it is also a challenging task due to the plethora of factors that affect content popularity in social systems. We here focus on the problem of predicting the popularity trend of a piece of UGC (object) as early as possible, as a step towards building more accurate popularity prediction methods. Unlike previous work, we explicitly address the inherent tradeoff between prediction accuracy and remaining interest in the object after prediction, since, to be useful, accurate predictions should be made before interest has exhausted. Moreover, given the heterogeneity in popularity dynamics across objects in most UGC applications, this tradeoff has to be solved on a per-object basis, which makes the prediction task harder. We propose to tackle this problem with a novel two-step learning approach in which we: (1) extract popularity trends from previously uploaded objects, and then (2) predict trends for newly uploaded content. The first step exploits a time series clustering algorithm to represent each trend by a time series centroid. We propose to treat the second step as a classification problem. First, we extract a set of features of the target object corresponding to the distances of its early popularity curve to the previously identified centroids. We then combine these features with content features (e.g., incoming links, category), using them to train classifiers for prediction. Our experimental results for YouTube datasets show that we can achieve Micro and Macro F1 scores between 0.61 and 0.71 (a gain of up to 38% when compared to alternative approaches), with up to 68% of the views still remaining for 50% or 21% of the videos, depending on the dataset. We also show that our approach can be applied to produce predictions of content popularity at a future date that are much more accurate than recently proposed regression-based and state-space based models, with accuracy improvements of at least 33% and 59%, on average."}
{"_id": "f2c6c4bc71db8f2f18f81373c65f48d80720d95e", "title": "Blur detection for digital images using wavelet transform", "text": "With the prevalence of digital cameras, the number of digital images increases quickly, which raises the demand for image quality assessment in terms of blur. Based on the edge type and sharpness analysis, using the Harr wavelet transform, a new blur detection scheme is proposed in this paper, which can determine whether an image is blurred or not and to what extent an image is blurred. Experimental results demonstrate the effectiveness of the proposed scheme."}
{"_id": "3b02ec4f4b77368f7e34f86f9a49a0d5902c45e0", "title": "Time-driven activity-based costing.", "text": "In the classroom, activity-based costing (ABC) looks like a great way to manage a company's limited resources. But executives who have tried to implement ABC in their organizations on any significant scale have often abandoned the attempt in the face of rising costs and employee irritation. They should try again, because a new approach sidesteps the difficulties associated with large-scale ABC implementation. In the revised model, managers estimate the resource demands imposed by each transaction, product, or customer, rather than relying on time-consuming and costly employee surveys. This method is simpler since it requires, for each group of resources, estimates of only two parameters: how much it costs per time unit to supply resources to the business's activities (the total overhead expenditure of a department divided by the total number of minutes of employee time available) and how much time it takes to carry out one unit of each kind of activity (as estimated or observed by the manager). This approach also overcomes a serious technical problem associated with employee surveys: the fact that, when asked to estimate time spent on activities, employees invariably report percentages that add up to 100. Under the new system, managers take into account time that is idle or unused. Armed with the data, managers then construct time equations, a new feature that enables the model to reflect the complexity of real-world operations by showing how specific order, customer, and activity characteristics cause processing times to vary. This Tool Kit uses concrete examples to demonstrate how managers can obtain meaningful cost and profitability information, quickly and inexpensively. Rather than endlessly updating and maintaining ABC data,they can now spend their time addressing the deficiencies the model reveals: inefficient processes, unprofitable products and customers, and excess capacity."}
{"_id": "023c3af96af31d7c2a97c9b39028984f6bea8423", "title": "Visualization in Meteorology\u2014A Survey of Techniques and Tools for Data Analysis Tasks", "text": "This article surveys the history and current state of the art of visualization in meteorology, focusing on visualization techniques and tools used for meteorological data analysis. We examine characteristics of meteorological data and analysis tasks, describe the development of computer graphics methods for visualization in meteorology from the 1960s to today, and visit the state of the art of visualization techniques and tools in operational weather forecasting and atmospheric research. We approach the topic from both the visualization and the meteorological side, showing visualization techniques commonly used in meteorological practice, and surveying recent studies in visualization research aimed at meteorological applications. Our overview covers visualization techniques from the fields of display design, 3D visualization, flow dynamics, feature-based visualization, comparative visualization and data fusion, uncertainty and ensemble visualization, interactive visual analysis, efficient rendering, and scalability and reproducibility. We discuss demands and challenges for visualization research targeting meteorological data analysis, highlighting aspects in demonstration of benefit, interactive visual analysis, seamless visualization, ensemble visualization, 3D visualization, and technical issues."}
{"_id": "8e38ef4ab1097b38e3a5ac1c2b20a5044b1d2e90", "title": "Combining perspiration- and morphology-based static features for fingerprint liveness detection", "text": "0167-8655/$ see front matter 2012 Elsevier B.V. A doi:10.1016/j.patrec.2012.01.009 \u21d1 Corresponding author at: Dipartimento di Informa of Naples Federico II, via Claudio 21, 80125 Naples, It E-mail addresses: emanuela.marasco@unina.it (E. (C. Sansone). It has been showed that, by employing fake fingers, the existing fingerprint recognition systems may be easily deceived. So, there is an urgent need for improving their security. Software-based liveness detection algorithms typically exploit morphological and perspiration-based characteristics separately to measure the vitality. Both such features provide discriminant information about live and fake fingers, then, it is reasonable to investigate also their joint contribution. In this paper, we combine a set of the most robust morphological and perspiration-based measures. The effectiveness of the proposed approach has been assessed through a comparison with several state-ofthe-art techniques for liveness detection. Experiments have been carried out, for the first time, by adopting standard databases. They have been taken from the Liveness Detection Competition 2009 whose data have been acquired by using three different optical sensors. Further, we have analyzed how the performance of our algorithm changes when the material employed for the spoof attack is not available during the training of the system. 2012 Elsevier B.V. All rights reserved."}
{"_id": "c145ac5130bfce3de23de25d6618ea282c67c075", "title": "Recognition of Pollen-Bearing Bees from Video Using Convolutional Neural Network", "text": "In this paper, the recognition of pollen bearing honey bees from videos of the entrance of the hive is presented. This computer vision task is a key component for the automatic monitoring of honeybees in order to obtain large scale data of their foraging behavior and task specialization. Several approaches are considered for this task, including baseline classifiers, shallow Convolutional Neural Networks, and deeper networks from the literature. The experimental comparison is based on a new dataset of images of honeybees that was manually annotated for the presence of pollen. The proposed approach, based on Convolutional Neural Networks is shown to outperform the other approaches in terms of accuracy. Detailed analysis of the results and the influence of the architectural parameters, such as the impact of dedicated color based data augmentation, provide insights into how to apply the approach to the target application."}
{"_id": "2591d2d773f38b5c8d17829a3938c85edf2009a6", "title": "Psychophysiological contributions to phantom limbs.", "text": "Recent studies of amputees reveal a remarkable diversity in the qualities of experiences that define the phantom limb, whether painless or painful. This paper selectively reviews evidence of peripheral, central and psychological processes that trigger or modulate a variety of phantom limb experiences. The data show that pain experienced prior to amputation may persist in the form of a somatosensory memory in the phantom limb. It is suggested that the length and size of the phantom limb may be a perceptual marker of the extent to which sensory input from the amputation stump have re-occupied deprived cortical regions originally subserving the amputated limb. A peripheral mechanism involving a sympathetic-efferent somatic-afferent cycle is presented to explain fluctuations in the intensity of paresthesias referred to the phantom limb. While phantom pain and other sensations are frequently triggered by thoughts and feelings, there is no evidence that the painful or painless phantom limb is a symptom of a psychological disorder. It is concluded that the experience of a phantom limb is determined by a complex interaction of inputs from the periphery and widespread regions of the brain subserving sensory, cognitive, and emotional processes."}
{"_id": "346e6803d8adff413a9def0768637db533fe71b1", "title": "Experimental personality designs: analyzing categorical by continuous variable interactions.", "text": "Theories hypothesizing interactions between a categorical and one or more continuous variables are common in personality research. Traditionally, such hypotheses have been tested using nonoptimal adaptations of analysis of variance (ANOVA). This article describes an alternative multiple regression-based approach that has greater power and protects against spurious conclusions concerning the impact of individual predictors on the outcome in the presence of interactions. We discuss the structuring of the regression equation, the selection of a coding system for the categorical variable, and the importance of centering the continuous variable. We present in detail the interpretation of the effects of both individual predictors and their interactions as a function of the coding system selected for the categorical variable. We illustrate two- and three-dimensional graphical displays of the results and present methods for conducting post hoc tests following a significant interaction. The application of multiple regression techniques is illustrated through the analysis of two data sets. We show how multiple regression can produce all of the information provided by traditional but less optimal ANOVA procedures."}
{"_id": "08fe9658c086b842980e86c66bde3cef95bb6bec", "title": "Deformable part models are convolutional neural networks", "text": "Deformable part models (DPMs) and convolutional neural networks (CNNs) are two widely used tools for visual recognition. They are typically viewed as distinct approaches: DPMs are graphical models (Markov random fields), while CNNs are \u201cblack-box\u201d non-linear classifiers. In this paper, we show that a DPM can be formulated as a CNN, thus providing a synthesis of the two ideas. Our construction involves unrolling the DPM inference algorithm and mapping each step to an equivalent CNN layer. From this perspective, it is natural to replace the standard image features used in DPMs with a learned feature extractor. We call the resulting model a DeepPyramid DPM and experimentally validate it on PASCAL VOC object detection. We find that DeepPyramid DPMs significantly outperform DPMs based on histograms of oriented gradients features (HOG) and slightly outperforms a comparable version of the recently introduced R-CNN detection system, while running significantly faster."}
{"_id": "1060ff9852dc12e05ec44bee7268efdc76f7535d", "title": "Collection flow", "text": "Computing optical flow between any pair of Internet face photos is challenging for most current state of the art flow estimation methods due to differences in illumination, pose, and geometry. We show that flow estimation can be dramatically improved by leveraging a large photo collection of the same (or similar) object. In particular, consider the case of photos of a celebrity from Google Image Search. Any two such photos may have different facial expression, lighting and face orientation. The key idea is that instead of computing flow directly between the input pair (I, J), we compute versions of the images (I', J') in which facial expressions and pose are normalized while lighting is preserved. This is achieved by iteratively projecting each photo onto an appearance subspace formed from the full photo collection. The desired flow is obtained through concatenation of flows (I \u2192 I') o (J' \u2192 J). Our approach can be used with any two-frame optical flow algorithm, and significantly boosts the performance of the algorithm by providing invariance to lighting and shape changes."}
{"_id": "205f65b295c80131dbf8f17874dd362413e5d7fe", "title": "Learning Dense Correspondence via 3D-Guided Cycle Consistency", "text": "Discriminative deep learning approaches have shown impressive results for problems where human-labeled ground truth is plentiful, but what about tasks where labels are difficult or impossible to obtain? This paper tackles one such problem: establishing dense visual correspondence across different object instances. For this task, although we do not know what the ground-truth is, we know it should be consistent across instances of that category. We exploit this consistency as a supervisory signal to train a convolutional neural network to predict cross-instance correspondences between pairs of images depicting objects of the same category. For each pair of training images we find an appropriate 3D CAD model and render two synthetic views to link in with the pair, establishing a correspondence flow 4-cycle. We use ground-truth synthetic-to-synthetic correspondences, provided by the rendering engine, to train a ConvNet to predict synthetic-to-real, real-to-real and real-to-synthetic correspondences that are cycle-consistent with the ground-truth. At test time, no CAD models are required. We demonstrate that our end-to-end trained ConvNet supervised by cycle-consistency outperforms state-of-the-art pairwise matching methods in correspondence-related tasks."}
{"_id": "54a1b59a47f38bef6ad4cd2d56ea4553746b0b22", "title": "Descriptor Matching with Convolutional Neural Networks: a Comparison to SIFT", "text": "Latest results indicate that features learned via convolutional neural networks outperform previous descriptors on classification tasks by a large margin. It has been shown that these networks still work well when they are applied to datasets or recognition tasks different from those they were trained on. However, descriptors like SIFT are not only used in recognition but also for many correspondence problems that rely on descriptor matching. In this paper we compare features from various layers of convolutional neural nets to standard SIFT descriptors. We consider a network that was trained on ImageNet and another one that was trained without supervision. Surprisingly, convolutional neural networks clearly outperform SIFT on descriptor matching."}
{"_id": "7b1e5e9f85b6d9735ccd63a7bacd4e1bcfa589bb", "title": "Mirrored Light Field Video Camera Adapter", "text": "This paper proposes the design of a custom mirror-based light field camera adapter that is cheap, simple in construction, and accessible. Mirrors of different shape and orientation reflect the scene into an upwards-facing camera to create an array of virtual cameras with overlapping field of view at specified depths, and deliver video frame rate light fields. We describe the design, construction, decoding and calibration processes of our mirrorbased light field camera adapter in preparation for an open-source release to benefit the robotic vision community. The latest report, computer-aided design models, diagrams and code can be obtained from the following repository: https://bitbucket.org/acrv/mirrorcam."}
{"_id": "635268baf38ac5058e0886b8b235e3f98c9f93c1", "title": "UPTIME: Ubiquitous pedestrian tracking using mobile phones", "text": "The mission of tracking a pedestrian is valuable for many applications including walking distance estimation for the purpose of pervasive healthcare, museum and shopping mall guides, and locating emergency responders. In this paper, we show how accurate and ubiquitous tracking of a pedestrian can be performed using only the inertial sensors embedded in his/her mobile phone. Our work depends on performing dead reckoning to track the user's movement. The main challenge that needs to be addressed is handling the noise of the low cost low quality inertial sensors in cell phones. Our proposed system combines two novel contributions: a novel step count estimation technique and a gait-based accurate variable step size detection algorithm. The step count estimation technique is based on a lightweight finite state machine approach that leverages orientation-independent features. In order to capture the varying stride length of the user, based on his changing gait, we employ a multi-class hierarchical Support Vector Machine classifier. Combining the estimated number of steps with the an accurate estimate of the individual stride length, we achieve ubiquitous and accurate tracking of a person in indoor environments. We implement our system on different Android-based phones and compare it to the state-of-the-art techniques in indoor and outdoor testbeds with arbitrary phone orientation. Our results in two different testbeds show that we can provide an accurate step count estimation with an error of 5.72%. In addition, our gait type classifier has an accuracy of 97.74%. This leads to a combined tracking error of 6.9% while depending only on the inertial sensors and turning off the GPS sensor completely. This highlights the ability of the system to provide ubiquitous, accurate, and energy efficient tracking."}
{"_id": "7dbca41518638170b0be90e85b7ac3365b337d44", "title": "A Noise\u2010aware Filter for Real\u2010time Depth Upsampling", "text": "A new generation of active 3D range sensors, such as time-of-flight cameras, enables recording of full-frame depth maps at video frame rate. Unfortunately, the captured data are typically starkly contaminated by noise and the sensors feature only a rather limited image resolution. We therefore present a pipeline to enhance the quality and increase the spatial resolution of range data in real-time by upsampling the range information with the data from a high resolution video camera. Our algorithm is an adaptive multi-lateral upsampling filter that takes into account the inherent noisy nature of real-time depth data. Thus, we can greatly improve reconstruction quality, boost the resolution of the data to that of the video sensor, and prevent unwanted artifacts like texture copy into geometry. Our technique has been crafted to achieve improvement in depth map quality while maintaining high computational efficiency for a real-time application. By implementing our approach on the GPU, the creation of a real-time 3D camera with video camera resolution is feasible."}
{"_id": "38d309e86c2bcb00bfe9b7f9c6740f061f4ef27c", "title": "Analyzing of two types water cooling electric motors using computational fluid dynamics", "text": "The focus of this work consists in analyzing water flow inside a water cooling electric motor frame. Aiming of work is in the comparison of load losses and the avoidance of hot spots in two types water cooled frames. Total losses of electrical machine were considered as thermal load. Electrical motor is new designed electrically excited synchronous machine for automotive industry. For the development of this work computational fluid dynamics (CFD) was used."}
{"_id": "2b905881fe991ce7b6f59b00872d696902906db2", "title": "Imperative Functional Programming", "text": "We present a new model, based on monads, for performing input/output in a non-strict, purely functional language. It is composable, extensible, efficient, requires no extensions to the type system, and extends smoothly to incorporate mixed-language working and in-place array updates."}
{"_id": "98adc92ea6225c3a0ba6f204c531f408fedbc281", "title": "Cyber Security Analysis of Substation Automation System", "text": "The automation of substation is increasing in the modern world. The implementation of SCADA system is necessary for Substation Automation (SA) systems. Generally Substation Automation Systems uses Intelligent Electronic devices (IED) for monitoring, control and protection of substation. Standard protocols used by the Substation Automation systems are IEC 60870-5-104, DNP, IEC 61850, IEC 60870-5-101. In this paper, Modbus protocol is used as communication protocol. Cyber attack is critical issue in SCADA systems. This paper deals with the monitoring of substation and cyber security analysis of SCADA systems."}
{"_id": "823964b144009f7c395cd09de9a70fe06542cc84", "title": "Overview of Current Development in Electrical Energy Storage Technologies and the Application Potential in Power System Operation", "text": "Electrical power generation is changing dramatically across the world because of the need to reduce greenhouse gas emissions and to introduce mixed energy sources. The power network faces great challenges in transmission and distribution to meet demand with unpredictable daily and seasonal variations. Electrical Energy Storage (EES) is recognized as underpinning technologies to have great potential in meeting these challenges, whereby energy is stored in a certain state, according to the technology used, and is converted to electrical energy when needed. However, the wide variety of options and complex characteristic matrices make it difficult to appraise a specific EES technology for a particular application. This paper intends to mitigate this problem by providing a comprehensive and clear picture of the state-of-the-art technologies available, and where they would be suited for integration into a power generation and distribution system. The paper starts with an overview of the operation principles, technical and economic performance features and the current research and development of important EES technologies, sorted into six main categories based on the types of energy stored. Following this, a comprehensive comparison and an application potential analysis of the reviewed technologies are presented. 2014 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/3.0/)."}
{"_id": "7ad3675c38070d41b5e4c96fef1a6c80ca481f51", "title": "Xface: MPEG-4 based open source toolkit for 3D Facial Animation", "text": "In this paper, we present our open source, platform independent toolkit for developing 3D talking agents, namely Xface. It relies on MPEG-4 Face Animation (FA) standard. The toolkit currently incorporates three pieces of software. The core Xface library is for developers who want to embed 3D facial animation to their software as well as researchers who want to focus on related topics without the hassle of implementing a full framework from scratch. XfaceEd editor provides an easy to use interface to generate MPEG-4 ready meshes from static 3D models. Last, XfacePlayer is a sample application that demonstrates the toolkit in action. All the pieces are implemented in C++ programming language and rely on only operating system independent libraries. The main design principles for Xface are ease of use and extendibility."}
{"_id": "49ba47eaa12cecca72bf7729d3bbdb9d5c38db24", "title": "Classifying imbalanced data using a bagging ensemble variation (BEV)", "text": "In many applications, data collected are highly skewed where data of one class clearly dominates data from the other classes. Most existing classification systems that perform well on balanced data give very poor performance on imbalanced data, especially for the minority class data. Existing work on improving the quality of classification on imbalanced data include over-sampling, under-sampling, and methods that make modifications to the existing classification systems. This paper discusses the BEV system for classifying imbalanced data. The system is developed based on the ideas from the \"Bagging\" classification ensemble. The motivation behind the scheme is to maximally use the minority class data without creating synthetic data or making changes to the existing classification systems. Experimental results using real world imbalanced data show the effectiveness of the system."}
{"_id": "20d16d229ed5fddcb9ac1c3a7925582c286d3927", "title": "Defining clusters from a hierarchical cluster tree: the Dynamic Tree Cut package for R", "text": "SUMMARY\nHierarchical clustering is a widely used method for detecting clusters in genomic data. Clusters are defined by cutting branches off the dendrogram. A common but inflexible method uses a constant height cutoff value; this method exhibits suboptimal performance on complicated dendrograms. We present the Dynamic Tree Cut R package that implements novel dynamic branch cutting methods for detecting clusters in a dendrogram depending on their shape. Compared to the constant height cutoff method, our techniques offer the following advantages: (1) they are capable of identifying nested clusters; (2) they are flexible-cluster shape parameters can be tuned to suit the application at hand; (3) they are suitable for automation; and (4) they can optionally combine the advantages of hierarchical clustering and partitioning around medoids, giving better detection of outliers. We illustrate the use of these methods by applying them to protein-protein interaction network data and to a simulated gene expression data set.\n\n\nAVAILABILITY\nThe Dynamic Tree Cut method is implemented in an R package available at http://www.genetics.ucla.edu/labs/horvath/CoexpressionNetwork/BranchCutting."}
{"_id": "1742c355b5b17f5e147661bbcafcd204f253cf0e", "title": "Automatic Stance Detection Using End-to-End Memory Networks", "text": "We present an effective end-to-end memory network model that jointly (i) predicts whether a given document can be considered as relevant evidence for a given claim, and (ii) extracts snippets of evidence that can be used to reason about the factuality of the target claim. Our model combines the advantages of convolutional and recurrent neural networks as part of a memory network. We further introduce a similarity matrix at the inference level of the memory network in order to extract snippets of evidence for input claims more accurately. Our experiments on a public benchmark dataset, FakeNewsChallenge, demonstrate the effectiveness of our approach."}
{"_id": "3088a18108834988640a7176d2fd50e711d40146", "title": "Transfer learning from multiple source domains via consensus regularization", "text": "Recent years have witnessed an increased interest in transfer learning. Despite the vast amount of research performed in this field, there are remaining challenges in applying the knowledge learnt from multiple source domains to a target domain. First, data from multiple source domains can be semantically related, but have different distributions. It is not clear how to exploit the distribution differences among multiple source domains to boost the learning performance in a target domain. Second, many real-world applications demand this transfer learning to be performed in a distributed manner. To meet these challenges, we propose a consensus regularization framework for transfer learning from multiple source domains to a target domain. In this framework, a local classifier is trained by considering both local data available in a source domain and the prediction consensus with the classifiers from other source domains. In addition, the training algorithm can be implemented in a distributed manner, in which all the source-domains are treated as slave nodes and the target domain is used as the master node. To combine the training results from multiple source domains, it only needs share some statistical data rather than the full contents of their labeled data. This can modestly relieve the privacy concerns and avoid the need to upload all data to a central location. Finally, our experimental results show the effectiveness of our consensus regularization learning."}
{"_id": "06eda0aa078454c99eb6be43d3a301c9b8d5d1fa", "title": "Image Similarity Using Mutual Information of Regions", "text": "Mutual information (MI) has emerged in recent years as an effective similarity measure for comparing images. One drawback of MI, however, is that it is calculated on a pixel by pixel basis, meaning that it takes into account only the relationships between corresponding individual pixels and not those of each pixel\u2019s respective neighborhood. As a result, much of the spatial information inherent in images is not utilized. In this paper, we propose a novel extension to MI called regional mutual information (RMI). This extension efficiently takes neighborhood regions of corresponding pixels into account. We demonstrate the usefulness of RMI by applying it to a real-world problem in the medical domain\u2014 intensity-based 2D-3D registration of X-ray projection images (2D) to a CT image (3D). Using a gold-standard spine image data set, we show that RMI is a more robust similarity meaure for image registration than MI."}
{"_id": "d1f711ab0fc172f9d2cbb430fdc814be3abd8832", "title": "A novel data glove for fingers motion capture using inertial and magnetic measurement units", "text": "A novel data glove embedded low cost MEMS inertial and magnetic measurement units, is proposed for fingers motion capture. Each unit consists of a tri-axial gyroscope, a tri-axial accelerometer and a tri-axial magnetometer. The sensor board and processor board are compactly designed, which are small enough to fit the size of our fingers. The data glove is equipped with fifteen units to measure each joint angle of the fingers. Then the calibration approach is put up to improve the accuracy of measurements by both offline and online procedures, and a fast estimation method is used to determine the three orientations of fifteen units simultaneously. The proposed algorithm is easy to be implemented, and more precise and efficient measurements can be obtained as compared with existing methods. The fingers motion capture experiments are implemented to acquire the characteristics of the fingers and teleoperate the robotic hands, which prove the effectiveness of the data glove."}
{"_id": "6ac2443b7fa7de3b779137a5cbbf1a023d6b2a68", "title": "Interpretation and trust: designing model-driven visualizations for text analysis", "text": "Statistical topic models can help analysts discover patterns in large text corpora by identifying recurring sets of words and enabling exploration by topical concepts. However, understanding and validating the output of these models can itself be a challenging analysis task. In this paper, we offer two design considerations - interpretation and trust - for designing visualizations based on data-driven models. Interpretation refers to the facility with which an analyst makes inferences about the data through the lens of a model abstraction. Trust refers to the actual and perceived accuracy of an analyst's inferences. These considerations derive from our experiences developing the Stanford Dissertation Browser, a tool for exploring over 9,000 Ph.D. theses by topical similarity, and a subsequent review of existing literature. We contribute a novel similarity measure for text collections based on a notion of \"word-borrowing\" that arose from an iterative design process. Based on our experiences and a literature review, we distill a set of design recommendations and describe how they promote interpretable and trustworthy visual analysis tools."}
{"_id": "806a8739d8cb68c3e4695d37ae6757d29e317c23", "title": "Exploring simultaneous keyword and key sentence extraction: improve graph-based ranking using wikipedia", "text": "Summarization and Keyword Selection are two important tasks in NLP community. Although both aim to summarize the source articles, they are usually treated separately by using sentences or words. In this paper, we propose a two-level graph based ranking algorithm to generate summarization and extract keywords at the same time. Previous works have reached a consensus that important sentence is composed by important keywords. In this paper, we further study the mutual impact between them through context analysis. We use Wikipedia to build a two-level concept-based graph, instead of traditional term-based graph, to express their homogenous relationship and heterogeneous relationship. We run PageRank and HITS rank on the graph to adjust both homogenous and heterogeneous relationships. A more reasonable relatedness value will be got for key sentence selection and keyword selection. We evaluate our algorithm on TAC 2011 data set. Traditional term-based approach achieves a score of 0.255 in ROUGE-1 and a score of 0.037 and ROUGE-2 and our approach can improve them to 0.323 and 0.048 separately."}
{"_id": "3d92f8806d45727ad476751748bbb7ceda65c685", "title": "English, Devnagari and Urdu Text Identification", "text": "In a multi-lingual multi-script country like India, a single text line of a document page may contain words of two or more scripts. For the Optical Character Recognition of such a document page it is necessary to identify different scripts from the document. In this paper, an automatic technique for word -wise identification of English, Devnagari and Urdu scripts from a single document is proposed. Here, at first, the document is segmented into lines and then the lines are segmented into possible words. Using characteristics of different scripts, the identification scheme is developed."}
{"_id": "04afd5f18d3080c57d4b304dfbd1818da9a02e8e", "title": "Models and Issues in Data Stream Systems", "text": "In this overview paper we motivate the need for and research issues arising from a new model of data processing. In this model, data does not take the form of persistent relations, but rather arrives in multiple, continuous, rapid, time-varying data streams. In addition to reviewing past work relevant to data stream systems and current projects in the area, the paper explores topics in stream query languages, new requirements and challenges in query processing, and algorithmic issues."}
{"_id": "e988f7ad852cc0918b521431d0ec9e0e792bde2a", "title": "Accuracy analysis of kinect depth data", "text": "This paper presents an investigation of the geometric quality of depth data obtained by the Kinect sensor. Based on the mathematical model of depth measurement by the sensor a theoretical error analysis is presented, which provides an insight into the factors influencing the accuracy of the data. Experimental results show that the random error of depth measurement increases with increasing distance to the sensor, and ranges from a few millimetres up to about 4 cm at the maximum range of the sensor. The accuracy of the data is also found to be influenced by the low resolution of the depth measurements."}
{"_id": "d61f07f93bc6ecd70f2fc37c9f43748d714b3dfb", "title": "Automatic Mapping Clinical Notes to Medical Terminologies", "text": "Automatic mapping of key concepts from clinical notes to a terminology is an important task to achieve for extraction of the clinical information locked in clinical notes and patient reports. The present paper describes a system that automatically maps free text into a medical reference terminology. The algorithm utilises Natural Language Processing (NLP) techniques to enhance a lexical token matcher. In addition, this algorithm is able to identify negative concepts as well as performing term qualification. The algorithm has been implemented as a web based service running at a hospital to process real-time data and demonstrated that it worked within acceptable time limits and accuracy limits for them. However broader acceptability of the algorithm will require comprehensive evaluations."}
{"_id": "064fb3a6f2666e17f6d411c0a731d56aae0a785e", "title": "Ups and Downs: Modeling the Visual Evolution of Fashion Trends with One-Class Collaborative Filtering", "text": "Building a successful recommender system depends on understanding both the dimensions of people\u2019s preferences as well as their dynamics. In certain domains, such as fashion, modeling such preferences can be incredibly difficult, due to the need to simultaneously model the visual appearance of products as well as their evolution over time. The subtle semantics and non-linear dynamics of fashion evolution raise unique challenges especially considering the sparsity and large scale of the underlying datasets. In this paper we build novel models for the One-Class Collaborative Filtering setting, where our goal is to estimate users\u2019 fashion-aware personalized ranking functions based on their past feedback. To uncover the complex and evolving visual factors that people consider when evaluating products, our method combines high-level visual features extracted from a deep convolutional neural network, users\u2019 past feedback, as well as evolving trends within the community. Experimentally we evaluate our method on two large real-world datasets from Amazon.com, where we show it to outperform stateof-the-art personalized ranking measures, and also use it to visualize the high-level fashion trends across the 11-year span of our dataset."}
{"_id": "0842636e2efd5a0c0f34ae88785af29612814e17", "title": "Joint Deep Modeling of Users and Items Using Reviews for Recommendation", "text": "A large amount of information exists in reviews written by users. This source of information has been ignored by most of the current recommender systems while it can potentially alleviate the sparsity problem and improve the quality of recommendations. In this paper, we present a deep model to learn item properties and user behaviors jointly from review text. The proposed model, named Deep Cooperative Neural Networks (DeepCoNN), consists of two parallel neural networks coupled in the last layers. One of the networks focuses on learning user behaviors exploiting reviews written by the user, and the other one learns item properties from the reviews written for the item. A shared layer is introduced on the top to couple these two networks together. The shared layer enables latent factors learned for users and items to interact with each other in a manner similar to factorization machine techniques. Experimental results demonstrate that DeepCoNN significantly outperforms all baseline recommender systems on a variety of datasets."}
{"_id": "4c3103164ae3d2e79c9e1d943d77b7dfdf609307", "title": "Ratings meet reviews, a combined approach to recommend", "text": "Most existing recommender systems focus on modeling the ratings while ignoring the abundant information embedded in the review text. In this paper, we propose a unified model that combines content-based filtering with collaborative filtering, harnessing the information of both ratings and reviews. We apply topic modeling techniques on the review text and align the topics with rating dimensions to improve prediction accuracy. With the information embedded in the review text, we can alleviate the cold-start problem. Furthermore, our model is able to learn latent topics that are interpretable. With these interpretable topics, we can explore the prior knowledge on items or users and recommend completely \"cold\"' items. Empirical study on 27 classes of real-life datasets show that our proposed model lead to significant improvement compared with strong baseline methods, especially for datasets which are extremely sparse where rating-only methods cannot make accurate predictions."}
{"_id": "6dedbf5566c1e12f61ec7731aa5f7635ab28dc74", "title": "Nonlinear model predictive control for aerial manipulation", "text": "This paper presents a nonlinear model predictive controller to follow desired 3D trajectories with the end effector of an unmanned aerial manipulator (i.e., a multirotor with a serial arm attached). To the knowledge of the authors, this is the first time that such controller runs online and on board a limited computational unit to drive a kinematically augmented aerial vehicle. Besides the trajectory following target, we explore the possibility of accomplishing other tasks during flight by taking advantage of the system redundancy. We define several tasks designed for aerial manipulators and show in simulation case studies how they can be achieved by either a weighting strategy, within a main optimization process, or a hierarchical approach consisting on nested optimizations. Moreover, experiments are presented to demonstrate the performance of such controller in a real robot."}
{"_id": "5cb265c890c8ddf5a5bf06762b87f7074f4ff9ea", "title": "The generic model query language GMQL - Conceptual specification, implementation, and runtime evaluation", "text": "The generic model query language GMQL is designed to query collections of conceptual models created in arbitrary graph-based modelling languages. Querying conceptual models means searching for particular model subgraphs that comply with a predefined pattern query. Such a query specifies the structural and semantic properties of the model language from the literature and formally specify the language\u2019s syntax and semantics. We conduct an analysis of GMQL's theoretical and practical runtime performance concluding that it returns query results within satisfactory time. Given its generic nature, GMQL contributes to a broad range of different model analysis scenarios ranging from business process compliance management to model translation and business process weakness detection. As GMQL returns results with acceptable runtime performance, it can be used to query large collections of hundreds or thousands of conceptual models containing not only process models, but also data models or organizational charts. In this paper, we furthermore evaluate GMQL against the backdrop of existing query approaches thereby carving out its advantages and limitations as well as pointing toward future research. & 2014 Elsevier Ltd. All rights reserved."}
{"_id": "09661a6bb7578979e42c75d6ce382baba64d4981", "title": "Proving Theorems about LISP Functions", "text": "We d e s c r i b e some s i m p l e h e u r i s t i c s comb in ing e v a l u a t i o n and ma themat i ca l i n d u c t i o n wh ich we have implemented in a program t h a t a u t o m a t i c a l l y p roves a wide v a r i e t y o f theorems about r e c u r s i v e LISP f u n c t i o n s . The method the program uses t o gene ra te i n d u c t i o n f o r m u l a s i s d e s c r i b e d a t l e n g t h . The theorems proved by t h e p r o \u00ad gram i n c l u d e t h a t REVERSE i s i t s own i n v e r s e and t h a t a p a r t i c u l a r SORT program is c o r r e c t . Append ix B c o n t a i n s a l i s t o f the theorems p roved by t h e p r o g r a m ."}
{"_id": "f59a6c57fe1735d0c36b24437d6110f7ef21a27d", "title": "Ferrite-magnet spoke-type IPMSM with W-shaped magnet placement", "text": "Rare earth magnets that have a large energy product are generally used in high-efficiency interior permanent-magnet synchronous motors (IPMSMs). However, efficient rare-earth-free motors are needed because of problems such as the high prices of and export restrictions on rare earth materials. This paper introduces a ferrite-magnet spoke-type IPMSM that employs a W-shaped magnet placement in order to achieve high efficiency. The effectiveness of the W-shaped magnet placement is verified from the results of two-dimensional finite element analysis and experiments on a prototype."}
{"_id": "79890a7d61b082947e1300f60231336a53cc285c", "title": "Deep Joint Rain Detection and Removal from a Single Image", "text": "In this paper, we address a rain removal problem from a single image, even in the presence of heavy rain and rain streak accumulation. Our core ideas lie in our new rain image model and new deep learning architecture. We add a binary map that provides rain streak locations to an existing model, which comprises a rain streak layer and a background layer. We create a model consisting of a component representing rain streak accumulation (where individual streaks cannot be seen, and thus visually similar to mist or fog), and another component representing various shapes and directions of overlapping rain streaks, which usually happen in heavy rain. Based on the model, we develop a multi-task deep learning architecture that learns the binary rain streak map, the appearance of rain streaks, and the clean background, which is our ultimate output. The additional binary map is critically beneficial, since its loss function can provide additional strong information to the network. To handle rain streak accumulation (again, a phenomenon visually similar to mist or fog) and various shapes and directions of overlapping rain streaks, we propose a recurrent rain detection and removal network that removes rain streaks and clears up the rain accumulation iteratively and progressively. In each recurrence of our method, a new contextualized dilated network is developed to exploit regional contextual information and to produce better representations for rain detection. The evaluation on real images, particularly on heavy rain, shows the effectiveness of our models and architecture."}
{"_id": "6a6502cc5aef5510122c7b7b9e00e489ef06c2ca", "title": "5G Spectrum: is china ready?", "text": "With a considerable ratio of the world's mobile users, China has been actively promoting research on 5G, in which the spectrum issue is of great interest. New 5G characteristics put forward a lot of requirements for spectrum in terms of total amount, candidate bands, as well as new challenges for spectrum usage methods and management. Based on China's current situation, this article first discusses the 5G vision, spectrum demands, and potential candidate bands. Furthermore, it is indicated that spectrum sharing will bring many benefits for 5G systems, and different sharing scenarios are summarized. Finally, based on the current framework of spectrum management in China, potential services classification and spectrum assessment are proposed to accommodate new 5G requirements."}
{"_id": "16bae0f9f6a2823b2a65f6296494eea41f5c9859", "title": "Scalable Semantic Web Data Management Using Vertical Partitioning", "text": "Efficient management of RDF data is an important factor in realizing the Semantic Web vision. Performance and scalability issues are becoming increasingly pressing as Semantic Web technology is applied to real-world applications. In this paper, we examine the reasons why current data management solutions for RDF data scale poorly, and explore the fundamental scalability limitations of these approaches. We review the state of the art for improving performance for RDF databases and consider a recent suggestion, \u201cproperty tables.\u201d We then discuss practically and empirically why this solution has undesirable features. As an improvement, we propose an alternative solution: vertically partitioning the RDF data. We compare the performance of vertical partitioning with prior art on queries generated by a Web-based RDF browser over a large-scale (more than 50 million triples) catalog of library data. Our results show that a vertical partitioned schema achieves similar performance to the property table technique while being much simpler to design. Further, if a column-oriented DBMS (a database architected specially for the vertically partitioned case) is used instead of a row-oriented DBMS, another order of magnitude performance improvement is observed, with query times dropping from minutes to several seconds."}
{"_id": "64b0625cec2913ca30a059ffb5f06c750b86c881", "title": "A robust fault modeling and prediction technique for heterogeneous network", "text": "This paper proposes a modelling technique for fault identification in a heterogeneous network. Heterogeneous network is the integration of wired network, cellular network, wireless local area network (WLAN), and Adhoc network, etc. There may be several numbers of causes and sub-causes of fault in each type of network. These causes of fault must be mitigated in order to have a secure and reliable system. The paper introduces a model based approach that starts with the enumeration of possible causes of each fault over the heterogeneous network. A framework is then created by such causes which diagrammatically describe the causes and sub-causes responsible for the occurrence of a fault. This paper suggests a conceptual fault cause tree model based on the probabilistic framework for ranking the faults and their possible causes. An effective mitigation strategy is required to mitigate the causes and sub-causes. Once mitigation of the cause creating a fault is achieved, the system is analyzed and tested for accuracy and reliability. The Proposed technique assures that all sub-causes even at the lowest level of abstraction is taken into consideration in making the system more robust against the particular fault. The proposed model is also used to analyze the faulty probability of a heterogeneous network."}
{"_id": "118737f1a4429535e22dd438402cc50f4e9d1644", "title": "One-against-all multi-class SVM classification using reliability measures", "text": "Support vector machines (SVM) is originally designed for binary classification. To extend it to multi-class scenario, a typical conventional way is to decompose an M-class problem into a series of two-class problems, for which one-against-all is the earliest and one of the most widely used implementations. However, certain theoretical analysis reveals a drawback, i.e., the competence of each classifier is totally neglected when the results of classification from the multiple classifiers are combined for the final decision. To overcome this limitation, this paper introduces reliability measures into the multi-class framework. Two measures are designed: static reliability measure (SRM) and dynamic reliability measure (DRM). SRM works on a collective basis and yields a constant value regardless of the location of the test sample. DRM, on the other hand, accounts for the spatial variation of the classifier's performance. Based on these two reliability measures, a new decision strategy for the one-against-all method is proposed, which is tested on benchmark data sets and demonstrates its effectiveness."}
{"_id": "a44afaa688d50effa9968cceb292c81b5bb2616d", "title": "Modeling a DC Power System in Hardware-inthe-Loop Environment", "text": "ii Abstract (in Finnish) iiiin Finnish) iii"}
{"_id": "ebe24a632c42c36dc87d49c66164cc6097168101", "title": "Motion Capture and Retargeting of Fish by Monocular Camera", "text": "Accurate motion capture and flexible retargeting of underwater creatures such as fish remain to be difficult due to the long-lasting challenges of marker attachment and feature description for soft bodies in the underwater environment. Despite limited new research progresses appeared in recent years, the fish motion retargeting with a desirable motion pattern in real-time remains elusive. Strongly motivated by our ambitious goal of achieving high-quality data-driven fish animation with a light-weight, mobile device, this paper develops a novel framework of motion capturing and retargeting for a fish. We capture the motion of actual fish by a monocular camera without the utility of any marker. The elliptical Fourier coefficients are then integrated into the contour-based feature extraction process to analyze the fish swimming patterns. This novel approach can obtain the motion information in a robust way, with smooth medial axis as the descriptor for a soft fish body. For motion retargeting, we propose a two-level scheme to properly transfer the captured motion into new models, such as 2D meshes (with texture) generated from pictures or 3D models designed by artists, regardless of different body geometry and fin proportions among various species. Both motion capture and retargeting processes are functioning in real time. Hence, the system can simultaneously create fish animation with variation, while obtaining video sequences of real fish by a monocular camera."}
{"_id": "613b6abf4e3557cb3e17eb96c137ce9b7788907c", "title": "Joint shot boundary detection and key frame extraction", "text": "Representing a video by a set of key frames is useful for efficient video browsing and retrieving. But key frame extraction keeps a challenge in the computer vision field. In this paper, we propose a joint framework to integrate both shot boundary detection and key frame extraction, wherein three probabilistic components are taken into account, i.e. the prior of the key frames, the conditional probability of shot boundaries and the conditional probability of each video frame. Thus the key frame extraction is treated as a Maximum A Posteriori which can be solved by adopting alternate strategy. Experimental results show that the proposed method preserves the scene level structure and extracts key frames that are representative and discriminative."}
{"_id": "218f2a4d1174f5d9aed2383ea600869c186cfd08", "title": "Design and analysis of cross-fed rectangular array antenna; an X-band microstrip array antenna, operating at 11 GHz", "text": "In this paper, an X-Band Microstrip Patch antenna termed as the Cross fed rectangular array antenna has been presented with an enhanced radiation efficiency. The proposed antenna is designed and simulated using FEKO 5.5 suite. It is then fabricated on a 40 \u00d7 40mm2 Rogers RT-Duroid 5880LZ dielectric material board. This antenna is composed of a rectangular patch array in a cross fashion with four patches, each with a dimension of 12.3 mm \u00d7 9.85 mm excited using a wire-port. The antenna operates at a frequency of 11.458 GHz in the X-Band (8-12 GHz). It has achieved stable radiation efficiency of 80.71% with a gain of 7.4 dB at the operating frequency. It is thus inferred that this antenna can be used for X-Band applications such as in maritime navigation and airborne radars."}
{"_id": "1a86eb42952412ee02e3f6da06f874f1946eff6b", "title": "Deep Cross-Modal Projection Learning for Image-Text Matching", "text": "The key point of image-text matching is how to accurately measure the similarity between visual and textual inputs. Despite the great progress of associating the deep cross-modal embeddings with the bi-directional ranking loss, developing the strategies for mining useful triplets and selecting appropriate margins remains a challenge in real applications. In this paper, we propose a cross-modal projection matching (CMPM) loss and a cross-modal projection classification (CMPC) loss for learning discriminative image-text embeddings. The CMPM loss minimizes the KL divergence between the projection compatibility distributions and the normalized matching distributions defined with all the positive and negative samples in a mini-batch. The CMPC loss attempts to categorize the vector projection of representations from one modality onto another with the improved norm-softmax loss, for further enhancing the feature compactness of each class. Extensive analysis and experiments on multiple datasets demonstrate the superiority of the proposed approach."}
{"_id": "a3f550d415313eee87764fcc5c44b94cc82313c4", "title": "Efficient Ordered Combinatorial Semi-Bandits for Whole-Page Recommendation", "text": "Multi-Armed Bandit (MAB) framework has been successfully applied in many web applications. However, many complex real-world applications that involve multiple content recommendations cannot fit into the traditional MAB setting. To address this issue, we consider an ordered combinatorial semi-bandit problem where the learner recommends S actions from a base set of K actions, and displays the results in S (out of M ) different positions. The aim is to maximize the cumulative reward with respect to the best possible subset and positions in hindsight. By the adaptation of a minimumcost maximum-flow network, a practical algorithm based on Thompson sampling is derived for the (contextual) combinatorial problem, thus resolving the problem of computational intractability. With its potential to work with whole-page recommendation and any probabilistic models, to illustrate the effectiveness of our method, we focus on Gaussian process optimization and a contextual setting where click-throughrate is predicted using logistic regression. We demonstrate the algorithms\u2019 performance on synthetic Gaussian process problems and on large-scale news article recommendation datasets from Yahoo! Front Page Today Module."}
{"_id": "d79ec6fd2beba6fa8a884e8b3e48bd0691ce186f", "title": "Overweight among primary school-age children in Malaysia.", "text": "This study is a secondary data analysis from the National Health Morbidity Survey III, a population-based study conducted in 2006. A total of 7,749 children between 7 and 12 years old were recruited into the study. This study seeks to report the prevalence of overweight (including obesity) children in Malaysia using international cut-off point and identify its associated key social determinants. The results show that the overall prevalence of overweight children in Malaysia was 19.9%. The urban residents, males, Chinese, those who are wealthy, have overweight or educated guardians showed higher prevalence of overweight. In multivariable analysis, higher likelihood of being overweight was observed among those with advancing age (OR=1.15), urban residents (OR=1.16, 95% CI: 1.01-1.36), the Chinese (OR=1.45, 95% CI: 1.19-1.77), boys (OR=1.23, 95% CI: 1.08-1.41), and those who came from higher income family. In conclusion, one out of five of 7-12 year-old-children in Malaysia were overweight. Locality of residence, ethnicity, gender, guardian education, and overweight guardian were likely to be the predictors of this alarming issue. Societal and public health efforts are needed in order to reduce the burden of disease associated with obesity."}
{"_id": "dbe39712e69a62f94cb309513d85e71930c5c293", "title": "Mesh Location in Open Ventral Hernia Repair: A Systematic Review and Network Meta-analysis", "text": "There is no consensus on the ideal location for mesh placement in open ventral hernia repair (OVHR). We aim to identify the mesh location associated with the lowest rate of recurrence following OVHR using a systematic review and meta-analysis. A search was performed for studies comparing at least two of four locations for mesh placement during OVHR (onlay, inlay, sublay, and underlay). Outcomes assessed were hernia recurrence and surgical site infection (SSI). Pairwise meta-analysis was performed to compare all direct treatment of mesh locations. A multiple treatment meta-analysis was performed to compare all mesh locations in the Bayesian framework. Sensitivity analyses were planned for the following: studies with a low risk of bias, incisional hernias, by hernia size, and by mesh type (synthetic or biologic). Twenty-one studies were identified (n\u00a0=\u00a05,891). Sublay placement of mesh was associated with the lowest risk for recurrence [OR 0.218 (95\u00a0% CI 0.06\u20130.47)] and was the best of the four treatment modalities assessed [Prob (best)\u00a0=\u00a094.2\u00a0%]. Sublay was also associated with the lowest risk for SSI [OR 0.449 (95\u00a0% CI 0.12\u20131.16)] and was the best of the 4 treatment modalities assessed [Prob (best)\u00a0=\u00a077.3\u00a0%]. When only assessing studies at low risk of bias, of incisional hernias, and using synthetic mesh, the probability that sublay had the lowest rate of recurrence and SSI was high. Sublay mesh location has lower complication rates than other mesh locations. While additional randomized controlled trials are needed to validate these findings, this network meta-analysis suggests the probability of sublay being the best location for mesh placement is high."}
{"_id": "dc160709bbe528b506a37ead334f60d258413357", "title": "Learned Step Size Quantization", "text": "We present here Learned Step Size Quantization, a method for training deep networks such that they can run at inference time using low precision integer matrix multipliers, which offer power and space advantages over high precision alternatives. The essence of our approach is to learn the step size parameter of a uniform quantizer by backpropagation of the training loss, applying a scaling factor to its learning rate, and computing its associated loss gradient by ignoring the discontinuity present in the quantizer. This quantization approach can be applied to activations or weights, using different levels of precision as needed for a given system, and requiring only a simple modification of existing training code. As demonstrated on the ImageNet dataset, our approach achieves better accuracy than all previous published methods for creating quantized networks on several ResNet network architectures at 2-, 3and 4-bits of precision."}
{"_id": "aabe608eef164a4940d962aeffebafb96ebbeb81", "title": "Design of EMG Acquisition Circuit to Control an Antagonistic Mechanism Actuated by Pneumatic Artificial Muscles PAMs", "text": "-A pneumatically actuated antagonistic pair of muscles with joint mechanism (APMM) is supported and developed to be essential for bionic and biomimetic applications to emulate the biological muscles by realizing various kinds of locomotion based on normal electrical activity of biological muscles. This Paper aims to compare the response of antagonistic pairs of muscles mechanism (APMM) based on the pneumatic artificial muscles (PAMs) to an EMG signal that was acquired throw a designed circuit and an EMG Laboratory acquisition kit. The response is represented as a joint rotary displacement generated by the contraction and extension of the pneumatic artificial muscles. A statistical study was done to prove the efficiency of the designed circuit the response of antagonistic pairs of muscles mechanism. The statistical result showed that there is no significant difference of voltage data in both EMG acquired signal between reference kit and designed circuit. An excellent correlation behavior between the EMG control signal and the response of APMM as an angular displacement has been discussed and statistically analyzed. Index Term-Pneumatic Artificial Muscles, Biomechatronics, Electromyogram EMG, Pneumatic Proportional Directional Control Valve, Signal Processing, Bionic."}
{"_id": "798957b4bbe99fcf9283027d30e19eb03ce6b4d5", "title": "Dependency Parsing and Domain Adaptation with LR Models and Parser Ensembles", "text": "We present a data-driven variant of the LR algorithm for dependency parsing, and extend it with a best-first search for probabilistic generalized LR dependency parsing. Parser actions are determined by a classifier, based on features that represent the current state of the parser. We apply this parsing framework to both tracks of the CoNLL 2007 shared task, in each case taking advantage of multiple models trained with different learners. In the multilingual track, we train three LR models for each of the ten languages, and combine the analyses obtained with each individual model with a maximum spanning tree voting scheme. In the domain adaptation track, we use two models to parse unlabeled data in the target domain to supplement the labeled out-ofdomain training set, in a scheme similar to one iteration of co-training."}
{"_id": "6c4c0ac83373d18267779dc461ddc769ad8d0133", "title": "Analysis of Blockage Effects on Urban Cellular Networks", "text": "Large-scale blockages such as buildings affect the performance of urban cellular networks, especially at higher frequencies. Unfortunately, such blockage effects are either neglected or characterized by oversimplified models in the analysis of cellular networks. Leveraging concepts from random shape theory, this paper proposes a mathematical framework to model random blockages and analyze their impact on cellular network performance. Random buildings are modeled as a process of rectangles with random sizes and orientations whose centers form a Poisson point process on the plane. The distribution of the number of blockages in a link is proven to be a Poisson random variable with parameter dependent on the length of the link. Our analysis shows that the probability that a link is not intersected by any blockages decays exponentially with the link length. A path loss model that incorporates the blockage effects is also proposed, which matches experimental trends observed in prior work. The model is applied to analyze the performance of cellular networks in urban areas with the presence of buildings, in terms of connectivity, coverage probability, and average rate. Our results show that the base station density should scale superlinearly with the blockage density to maintain the network connectivity. Our analyses also show that while buildings may block the desired signal, they may still have a positive impact on the SIR coverage probability and achievable rate since they can block significantly more interference."}
{"_id": "e4f5ea444c790ed90ecaba667c7ed6db21549b61", "title": "Depth enhanced visual-inertial odometry based on Multi-State Constraint Kalman Filter", "text": "There have been increasing demands for developing robotic system combining camera and inertial measurement unit in navigation task, due to their low-cost, lightweight and complementary properties. In this paper, we present a Visual Inertial Odometry (VIO) system which can utilize sparse depth to estimate 6D pose in GPS-denied and unstructured environments. The system is based on Multi-State Constraint Kalman Filter (MSCKF), which benefits from low computation load when compared to optimization-based method, especially on resource-constrained platform. Features are enhanced with depth information forming 3D landmark position measurements in space, which reduces uncertainty of position estimate. And we derivate measurement model to access compatibility with both 2D and 3D measurements. In experiments, we evaluate the performance of the system in different in-flight scenarios, both cluttered room and industry environment. The results suggest that the estimator is consistent, substantially improves the accuracy compared with original monocular-based MSKCF and achieves competitive accuracy with other research."}
{"_id": "b4951eb36bb0a408b02fadc12c0a1d8e680b589f", "title": "Web scraping technologies in an API world", "text": "Web services are the de facto standard in biomedical data integration. However, there are data integration scenarios that cannot be fully covered by Web services. A number of Web databases and tools do not support Web services, and existing Web services do not cover for all possible user data demands. As a consequence, Web data scraping, one of the oldest techniques for extracting Web contents, is still in position to offer a valid and valuable service to a wide range of bioinformatics applications, ranging from simple extraction robots to online meta-servers. This article reviews existing scraping frameworks and tools, identifying their strengths and limitations in terms of extraction capabilities. The main focus is set on showing how straightforward it is today to set up a data scraping pipeline, with minimal programming effort, and answer a number of practical needs. For exemplification purposes, we introduce a biomedical data extraction scenario where the desired data sources, well-known in clinical microbiology and similar domains, do not offer programmatic interfaces yet. Moreover, we describe the operation of WhichGenes and PathJam, two bioinformatics meta-servers that use scraping as means to cope with gene set enrichment analysis."}
{"_id": "0823b293d13a5efaf9c3f37109a4a6018d05d074", "title": "Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks", "text": "To describe the log-likelihood computation in our model, let us consider a two scale pyramid for the moment. Given a (vectorized) j \u00d7 j image I , denote by l = d(I) the coarsened image, and h = I \u2212 u(d(I)) to be the high pass. In this section, to simplify the computations, we use a slightly different u operator than the one used to generate the images displayed in Figure 3 of the paper. Namely, here we take d(I) to be the mean over each disjoint block of 2\u00d7 2 pixels, and take u to be the operator that removes the mean from each 2\u00d7 2 block. Since u has rank 3d/4, in this section, we write h in an orthonormal basis of the range of u, then the (linear) mapping from I to (l, h) is unitary. We now build a probability density p on Rd2 by p(I) = q0(l, h)q1(l) = q0(d(I), h(I))q1(d(I)); in a moment we will carefully define the functions qi. For now, suppose that qi \u2265 0, \u222b q1(l) dl = 1, and for each fixed l, \u222b q0(l, h) dh = 1. Then we can check that p has unit integral: \u222b"}
{"_id": "11da2d589485685f792a8ac79d4c2e589e5f77bd", "title": "Show and tell: A neural image caption generator", "text": "Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art."}
{"_id": "2b329183e93cb8c1c20c911c765d9a94f34b5ed5", "title": "Generative Adversarial Networks", "text": "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1 2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples."}
{"_id": "3da5e164744eb111739aa6cf08bbdadaf5d53703", "title": "Rules and Ontologies for the Semantic Web", "text": "Rules and ontologies play a key role in the layered architecture of the Semantic Web, as they are used to ascribe meaning to, and to reason about, data on the Web. While the Ontology Layer of the Semantic Web is quite developed, and the Web Ontology Language (OWL) is a W3C recommendation since a couple of years already, the rules layer is far less developed and an active area of research; a number of initiatives and proposals have been made so far, but no standard as been released yet. Many implementations of rule engines are around which deal with Semantic Web data in one or another way. This article gives a comprehensive, although not exhaustive, overview of such systems, describes their supported languages, and sets them in relation with theoretical approaches for combining rules and ontologies as foreseen in the Semantic Web architecture. In the course of this, we identify desired properties and common features of rule languages and evaluate existing systems against their support. Furthermore, we review technical problems underlying the integration of rules and ontologies, and classify representative proposals for theoretical integration approaches into different categories."}
{"_id": "aed69092966e15473d8ebacb217335b49391d172", "title": "Process Mining in IT Service Management: A Case Study", "text": "The explosion of process-related data in nowadays organizations has raised the interest to exploit the data in order to know in deep how the business processes are being carried out. To face this need, process mining has emerged as a way to analyze the behavior of an organization by extracting knowledge from process related data. In this paper, we present a case study of process mining in a real IT service management scenario. We describe an exploratory analysis of real life event logs generated between 2012 and 2015, in four different processes designed within an IT platform. More specifically, we analyze the way of handling the different requests and incidents registered in an organization."}
{"_id": "d7c3c875f2f0c8ff1e8361802eca52c7b1d481c5", "title": "Learning to Detect Anomalies in Surveillance Video", "text": "Detecting anomalies in surveillance videos, that is, finding events or objects with low probability of occurrence, is a practical and challenging research topic in computer vision community. In this paper, we put forward a novel unsupervised learning framework for anomaly detection. At feature level, we propose a Sparse Semi-nonnegative Matrix Factorization (SSMF) to learn local patterns at each pixel, and a Histogram of Nonnegative Coefficients (HNC) can be constructed as local feature which is more expressive than previously used features like Histogram of Oriented Gradients (HOG). At model level, we learn a probability model which takes the spatial and temporal contextual information into consideration. Our framework is totally unsupervised requiring no human-labeled training data. With more expressive features and more complicated model, our framework can accurately detect and localize anomalies in surveillance video. We carried out extensive experiments on several benchmark video datasets for anomaly detection, and the results demonstrate the superiority of our framework to state-of-the-art approaches, validating the effectiveness of our framework."}
{"_id": "7e403d160f3db4a5d631ac450abcba190268c0e6", "title": "SAIL: single access point-based indoor localization", "text": "This paper presents SAIL, a Single Access Point Based Indoor Localization system. Although there have been advances in WiFi-based positioning techniques, we find that existing solutions either require a dense deployment of access points (APs), manual fingerprinting, energy hungry WiFi scanning, or sophisticated AP hardware. We design SAIL using a single commodity WiFi AP to avoid these restrictions. SAIL computes the distance between the client and an AP using the propagation delay of the signal traversing between the two, combines the distance with smartphone dead-reckoning techniques, and employs geometric methods to ultimately yield the client's location using a single AP. SAIL combines physical layer (PHY) information and human motion to compute the propagation delay of the direct path by itself, eliminating the adverse effect of multipath and yielding sub-meter distance estimation accuracy. Furthermore, SAIL systematically addresses some of the common challenges towards dead-reckoning using smartphone sensors and achieves 2-5x accuracy improvements over existing techniques. We have implemented SAIL on commodity wireless APs and smartphones. Evaluation in a large-scale enterprise environment with 10 mobile users demonstrates that SAIL can capture the user's location with a mean error of 2.3m using just a single AP."}
{"_id": "83d323a5bb26b706d4f6d24eb27411a7e7ff57e6", "title": "Protective action of green tea catechins in neuronal mitochondria during aging.", "text": "Mitochondria are central players in the regulation of cell homeostasis. They are essential for energy production but at the same time, reactive oxygen species accumulate as byproducts of the electron transport chain causing mitochondrial damage. In the central nervous system, senescence and neurodegeneration occur as a consequence of mitochondrial oxidative insults and impaired electron transfer. The accumulation of several oxidation products in neurons during aging prompts the idea that consumption of antioxidant compounds may delay neurodegenerative processes. Tea, one of the most consumed beverages in the world, presents benefits to human health that have been associated to its abundance in polyphenols, mainly catechins, that possess powerful antioxidant properties in vivo and in vitro. In this review, the focus will be placed on the effects of green tea catechins in neuronal mitochondria. Although these compounds reach the brain in small quantities, there are several possible targets, signaling pathways and molecular machinery impinging in the mitochondria that will be highlighted. Accumulated evidence thus far seems to indicate that catechins help prevent neurodegeneration and delay brain function decline."}
{"_id": "3fefeab49d4d387eefabe170413748a028a66347", "title": "Inferring Unusual Crowd Events From Mobile Phone Call Detail Records", "text": "The pervasiveness and availability of mobile phone data offer the opportunity of discovering usable knowledge about crowd behaviors in urban environments. Cities can leverage such knowledge in order to provide better services (e.g., public transport planning, optimized resource allocation) and safer cities. Call Detail Record (CDR) data represents a practical data source to detect and monitor unusual events considering the high level of mobile phone penetration, compared with GPS equipped and open devices. In this paper, we provide a methodology that is able to detect unusual events from CDR data that typically has low accuracy in terms of space and time resolution. Moreover, we introduce a concept of unusual event that involves a large amount of people who expose an unusual mobility behavior. Our careful consideration of the issues that come from coarse-grained CDR data ultimately leads to a completely general framework that can detect unusual crowd events from CDR data effectively and efficiently. Through extensive experiments on real-world CDR data for a large city in Africa, we demonstrate that our method can detect unusual events with 16% higher recall and over 10 times higher precision, compared to stateof-the-art methods. We implement a visual analytics prototype system to help end users analyze detected unusual crowd events to best suit different application scenarios. To the best of our knowledge, this is the first work on the detection of unusual events from CDR data with considerations of its temporal and spatial sparseness and distinction between user unusual activities and daily routines."}
{"_id": "bf09dc10f2d36313a8025e09832c6a7812c2f4f8", "title": "Mining Cluster-Based Temporal Mobile Sequential Patterns in Location-Based Service Environments", "text": "Researches on Location-Based Service (LBS) have been emerging in recent years due to a wide range of potential applications. One of the active topics is the mining and prediction of mobile movements and associated transactions. Most of existing studies focus on discovering mobile patterns from the whole logs. However, this kind of patterns may not be precise enough for predictions since the differentiated mobile behaviors among users and temporal periods are not considered. In this paper, we propose a novel algorithm, namely, Cluster-based Temporal Mobile Sequential Pattern Mine (CTMSP-Mine), to discover the Cluster-based Temporal Mobile Sequential Patterns (CTMSPs). Moreover, a prediction strategy is proposed to predict the subsequent mobile behaviors. In CTMSP-Mine, user clusters are constructed by a novel algorithm named Cluster-Object-based Smart Cluster Affinity Search Technique (CO-Smart-CAST) and similarities between users are evaluated by the proposed measure, Location-Based Service Alignment (LBS-Alignment). Meanwhile, a time segmentation approach is presented to find segmenting time intervals where similar mobile characteristics exist. To our best knowledge, this is the first work on mining and prediction of mobile behaviors with considerations of user relations and temporal property simultaneously. Through experimental evaluation under various simulated conditions, the proposed methods are shown to deliver excellent performance."}
{"_id": "635ce4a260d78b0539eecd45dc582db23835205d", "title": "The dependency paradox in close relationships: accepting dependence promotes independence.", "text": "Using multiple methods, this investigation tested the hypothesis that a close relationship partner's acceptance of dependence when needed (e.g., sensitive responsiveness to distress cues) is associated with less dependence, more autonomous functioning, and more self-sufficiency (as opposed to more dependence) on the part of the supported individual. In two studies, measures of acceptance of dependency needs and independent functioning were obtained through couple member reports, by observing couple members' behaviors during laboratory interactions, by observing responses to experimentally manipulated partner assistance provided during an individual laboratory task, and by following couples over a period of 6 months to examine independent goal striving as a function of prior assessments of dependency acceptance. Results provided converging evidence in support of the proposed hypothesis. Implications of the importance of close relationships for optimal individual functioning are discussed."}
{"_id": "5f5de869662a075da3b9998e7ff9206b3e502860", "title": "Semantic Segmentation of Mixed Crops using Deep Convolutional Neural Network", "text": "Estimation of in-field biomass and crop composition is important for both farmers and researchers. Using close-up high resolution images of the crops, crop species can be distinguished using image processing. In the current study, deep convolutional neural networks for semantic segmentation (or pixel-wise classification) of cluttered classes in RGB images was explored in case of catch crops and volunteer barley cereal. The dataset consisted of RGB images from a plot trial using oil radish as catch crops in barley. The images were captured using a high-end consumer camera mounted on a tractor. The images were manually annotated in 7 classes: oil radish, barley, weed, stump, soil, equipment and unknown. Data argumentation was used to artificially increase the dataset by transposing and flipping the images. A modified version of VGG-16 deep neural network was used. First, the last fully-connected layers were converted to convolutional layer and the depth was modified to cope with our number of classes. Secondly, a deconvolutional layer with a 32 stride was added between the last fully-connected layer and the softmax classification layer to ensure that the output layer has the same size as the input. Preliminary results using this network show a pixel accuracy of 79% and a frequency weighted intersection over union of 66%. These preliminary results indicate great potential in deep convolutional networks for segmentation of plant species in cluttered RGB images."}
{"_id": "149f3dfe7542baffa670ea0351e14499e5d189c0", "title": "Recent Research in Cooperative Control of Multivehicle", "text": "This paper presents a survey of recent research in cooperative control of multivehicle systems, using a common mathematical framework to allow different methods to be described in a unified way. The survey has three primary parts: an overview of current applications of cooperative control, a summary of some of the key technical approaches that have been explored, and a description of some possible future directions for research. Specific technical areas that are discussed include formation control, cooperative tasking, spatiotemporal planning, and consensus. DOI: 10.1115/1.2766721"}
{"_id": "649f417531ac7b1408b80fb35125319f86d00f79", "title": "\"Green\" electronics: biodegradable and biocompatible materials and devices for sustainable future.", "text": "\"Green\" electronics represents not only a novel scientific term but also an emerging area of research aimed at identifying compounds of natural origin and establishing economically efficient routes for the production of synthetic materials that have applicability in environmentally safe (biodegradable) and/or biocompatible devices. The ultimate goal of this research is to create paths for the production of human- and environmentally friendly electronics in general and the integration of such electronic circuits with living tissue in particular. Researching into the emerging class of \"green\" electronics may help fulfill not only the original promise of organic electronics that is to deliver low-cost and energy efficient materials and devices but also achieve unimaginable functionalities for electronics, for example benign integration into life and environment. This Review will highlight recent research advancements in this emerging group of materials and their integration in unconventional organic electronic devices."}
{"_id": "2451ea08eb78e24371de8128c14a6680b2a628fa", "title": "Stability of Causal Inference", "text": "We consider the sensitivity of causal identi cation to small perturbations in the input. A long line of work culminating in papers by Shpitser and Pearl (2006) and Huang and Valtorta (2008) led to a complete procedure for the causal identi cation problem. In our main result in this paper, we show that the identi cation function computed by these procedures is in some cases extremely unstable numerically. Speci cally, the \u201ccondition number\u201d of causal identi cation can be of the order of \u03a9(exp(n0.49)) on an identi able semi-Markovian model with n visible nodes. That is, in order to give an output accurate to d bits, the empirical probabilities of the observable events need to be obtained to accuracy d +\u03a9(n0.49) bits."}
{"_id": "40bda71f8adc847c40b89e06febd6b037c765077", "title": "A Fuzzy Local Search Classifier for Intrusion Detection", "text": "In this paper, we propose a fuzzy local search (FLS) method for intrusion detection. The FLS system is a fuzzy classifier, whose knowledge base is modeled as a fuzzy rule such as \"if-then\" and improved by a local search metaheuristic. The proposed method is implemented and tested on the benchmark KDD'99 intrusion dataset. The results are encouraging and demonstrate the benefits of our approach."}
{"_id": "61c4c83ed02ed2477190611f69cb86f89e8ff0ab", "title": "A Study on Watermarking Schemes for Image Authentication", "text": "The digital revolution in digital image processing has made it possible to create, manipulate and transmit digital images in a simple and fast manner. The adverse affect of this is that the same image processing techniques can be used by hackers to tamper with any image and use it illegally. This has made digital image safety and integrity the top prioritized issue in today\u2019s information explosion. Watermarking is a popular technique that is used for copyright protection and authentication. This paper presents an overview of the various concepts and research works in the field of image watermark authentication. In particular, the concept of content-based image watermarking is reviewed in details."}
{"_id": "c8e424defb590f6b3eee659eb097ac978bf49348", "title": "The role of academic emotions in the relationship between perceived academic control and self-regulated learning in online learning", "text": "Self-regulated learning is recognized as a critical factor for successful online learning, and students\u2019 perceived academic control and academic emotions are important antecedents of self-regulated learning. Because emotions and cognition are interrelated, investigating the joint relationship between perceived academic control and academic emotions on self-regulated learning would be valuable to understanding the process of self-regulated learning. Therefore, this study examined the role of academic emotions (enjoyment, anxiety, and boredom) in the relationship between perceived academic control and selfregulated learning in online learning. The path model was proposed to test the mediating and moderating effects of academic emotions. Data were collected from 426 Korean college students registered in online courses, and a path analysis was conducted. The results demonstrated that enjoyment mediated the relationship between perceived academic control and self-regulated learning, but the moderating effect of enjoyment was not significant. Boredom and anxiety did not have significant mediating effects on self-regulated learning, whereas they showed significant moderating effects in the relationship between perceived academic control and self-regulated learning. The role of academic emotions in learning and their implications for facilitating students\u2019 self-regulated learning in online learning were discussed based on the findings. 2014 Elsevier Ltd. All rights reserved."}
{"_id": "82b2c431035e5c0faa20895fe9f002327c0994bd", "title": "IoT based smart emergency response system for fire hazards", "text": "The Internet of Things pertains to connecting currently unconnected things and people. It is the new era in transforming the existed systems to amend the cost effective quality of services for the society. To support Smart city vision, Urban IoT design plans exploit added value services for citizens as well as administration of the city with the most advanced communication technologies. To make emergency response real time, IoT enhances the way first responders and provides emergency managers with the necessary up-to-date information and communication to make use those assets. IoT mitigates many of the challenges to emergency response including present problems like a weak communication network and information lag. In this paper it is proposed that an emergency response system for fire hazards is designed by using IoT standardized structure. To implement this proposed scheme a low-cost Espressif wi-fi module ESP-32, Flame detection sensor, Smoke detection sensor (MQ-5), Flammable gas detection sensor and one GPS module are used. The sensors detects the hazard and alerts the local emergency rescue organizations like fire departments and police by sending the hazard location to the cloud-service through which all are connected. The overall network utilizes a light weighted data oriented publish-subscribe message protocol MQTT services for fast and reliable communication. Thus, an intelligent integrated system is designed with the help of IoT."}
{"_id": "2f32f535c8a0acf1d55247c49564d4d3f8a2c5c5", "title": "Machinery equipment early fault detection using Artificial Neural Network based Autoencoder", "text": "Machinery equipment early fault detection is still in an open challenge. The objective of this paper is to introduce a parametric method Artificial Neural Network based Autoencoder implemented to perform early fault detection of a machinery equipment. The performance of this method is then compared to one of the industry state of the art nonparametric methods called Similarity Based Modeling. The comparison is done by analyzing the implementation result on both artificial and real case dataset. Root Mean Square Error (RMSE) is applied to measure the performance. Based on the result of the research, both of these methods are effective to do pattern recognition and able to identify data anomaly or in this case is fault identification."}
{"_id": "93788e92e5a411f3e5ed16b09400dcdbcec3b4f0", "title": "Anomaly Detection in Video Surveillance via Gaussian Process", "text": "In this paper, we propose a new approach for anomaly detection in video surveillance. This approach is based on a nonparametric Bayesian regression model built upon Gaussian process priors. It establishes a set of basic vectors describing motion patterns from low-level features via online clustering, and then constructs a Gaussian process regression model to approximate the distribution of motion patterns in kernel space. We analyze di\u00aeerent anomaly measure criterions derived from Gaussian process regression model and compare their performances. To reduce"}
{"_id": "2be9abe785b6159df65063dd80a6a72e29fa6d23", "title": "Forensic Analysis and Anonymisation of Printed Documents", "text": "Contrary to popular belief, the paperless office has not yet established itself. Printer forensics is therefore still an important field today to protect the reliability of printed documents or to track criminals. An important task of this is to identify the source device of a printed document. There are many forensic approaches that try to determine the source device automatically and with commercially available recording devices. However, it is difficult to find intrinsic signatures that are robust against a variety of influences of the printing process and at the same time can identify the specific source device. In most cases, the identification rate only reaches up to the printer model. For this reason we reviewed document colour tracking dots, an extrinsic signature embedded in nearly all modern colour laser printers. We developed a refined and generic extraction algorithm, found a new tracking dot pattern and decoded pattern information. Through out we propose to reuse document colour tracking dots, in combination with passive printer forensic methods. From privacy perspective we additional investigated anonymization approaches to defeat arbitrary tracking. Finally we propose our toolkitdeda which implements the entire workflow of extracting, analysing and anonymisation of a tracking dot pattern."}
{"_id": "c18172a7d11d1fd3467040bea0bc9a38c1b26a6d", "title": "AtDELFI: automatically designing legible, full instructions for games", "text": "This paper introduces a fully automatic method for generating video game tutorials. The AtDELFI system (Automatically DEsigning Legible, Full Instructions for games) was created to investigate procedural generation of instructions that teach players how to play video games. We present a representation of game rules and mechanics using a graph system as well as a tutorial generation method that uses said graph representation. We demonstrate the concept by testing it on games within the General Video Game Artificial Intelligence (GVG-AI) framework; the paper discusses tutorials generated for eight different games. Our findings suggest that a graph representation scheme works well for simple arcade style games such as Space Invaders and Pacman, but it appears that tutorials for more complex games might require higher-level understanding of the game than just single mechanics."}
{"_id": "25b9dd71c8247820ccc51f238abd215880b973e5", "title": "Text Categorization Using Weight Adjusted k-Nearest Neighbor Classification", "text": "Categorization of documents is challenging, as the number o f discriminating words can be very large. We present a nearest neighbor classification scheme for text categoriz ation in which the importance of discriminating words is learned using mutual information and weight adjustment tec hniques. The nearest neighbors for a particular document are then computed based on the matching words and their weigh ts. We evaluate our scheme on both synthetic and real world documents. Our experiments with synthetic data sets s how that this scheme is robust under different emulated conditions. Empirical results on real world documents demo nstrate that this scheme outperforms state of the art classification algorithms such as C4.5, RIPPER, Rainbow, an d PEBLS."}
{"_id": "95d2a3c89bd97436aac9c72affcd0edc5c7d2e58", "title": "Multiple HOG templates for gait recognition", "text": "In gait recognition field, template-based approaches such as Gait Energy Image (GEI) and Chrono-Gait Image (CGI) can achieve good recognition performance with low computational cost. Meanwhile, CGI can preserve temporal information better than GEI. However, they pay less attention to the local shape features. To preserve temporal information and generate more abundant local shape features, we generate multiple HOG templates by extracting Histogram of Oriented Gradients (HOG) of GEI and CGI templates. Experiments show that compared with several published approaches, our proposed multiple HOG templates achieve better performance for gait recognition."}
{"_id": "235723a15c86c369c99a42e7b666dfe156ad2cba", "title": "Improving predictive inference under covariate shift by weighting the log-likelihood function", "text": "A class of predictive densities is derived by weighting the observed samples in maximizing the log-likelihood function. This approach is e ective in cases such as sample surveys or design of experiments, where the observed covariate follows a di erent distribution than that in the whole population. Under misspeci cation of the parametric model, the optimal choice of the weight function is asymptotically shown to be the ratio of the density function of the covariate in the population to that in the observations. This is the pseudo-maximum likelihood estimation of sample surveys. The optimality is de ned by the expected Kullback\u2013Leibler loss, and the optimal weight is obtained by considering the importance sampling identity. Under correct speci cation of the model, however, the ordinary maximum likelihood estimate (i.e. the uniform weight) is shown to be optimal asymptotically. For moderate sample size, the situation is in between the two extreme cases, and the weight function is selected by minimizing a variant of the information criterion derived as an estimate of the expected loss. The method is also applied to a weighted version of the Bayesian predictive density. Numerical examples as well as Monte-Carlo simulations are shown for polynomial regression. A connection with the robust parametric estimation is discussed. c \u00a9 2000 Elsevier Science B.V. All rights reserved."}
{"_id": "2b3818c141da414cf9e783c8b2d4928019cb70fd", "title": "Discriminative learning for differing training and test distributions", "text": "We address classification problems for which the training instances are governed by a distribution that is allowed to differ arbitrarily from the test distribution---problems also referred to as classification under covariate shift. We derive a solution that is purely discriminative: neither training nor test distribution are modeled explicitly. We formulate the general problem of learning under covariate shift as an integrated optimization problem. We derive a kernel logistic regression classifier for differing training and test distributions."}
{"_id": "d5a55a548fc7cd703c1dd8d867ca1eb6b0c0764c", "title": "Can Movies and Books Collaborate? Cross-Domain Collaborative Filtering for Sparsity Reduction", "text": "The sparsity problem in collaborative filtering (CF) is a major bottleneck for most CF methods. In this paper, we consider a novel approach for alleviating the sparsity problem in CF by transferring useritem rating patterns from a dense auxiliary rating matrix in other domains (e.g., a popular movie rating website) to a sparse rating matrix in a target domain (e.g., a new book rating website). We do not require that the users and items in the two domains be identical or even overlap. Based on the limited ratings in the target matrix, we establish a bridge between the two rating matrices at a clusterlevel of user-item rating patterns in order to transfer more useful knowledge from the auxiliary task domain. We first compress the ratings in the auxiliary rating matrix into an informative and yet compact cluster-level rating pattern representation referred to as a codebook. Then, we propose an efficient algorithm for reconstructing the target rating matrix by expanding the codebook. We perform extensive empirical tests to show that our method is effective in addressing the data sparsity problem by transferring the useful knowledge from the auxiliary tasks, as compared to many state-of-the-art CF methods."}
{"_id": "5efbd02d154fb76506caa9cf3e9333c89f24ccc9", "title": "A new approach to identify power transformer criticality and asset management decision based on dissolved gas-in-oil analysis", "text": "Dissolved gas analysis (DGA) of transformer oil is one of the most effective power transformer condition monitoring tools. There are many interpretation techniques for DGA results. However, all of these techniques rely on personnel experience more than standard mathematical formulation and significant number of DGA results fall outside the proposed codes of the current methods and cannot be diagnosed by these methods. To overcome these limitations, this paper introduces a novel approach using Gene Expression Programming (GEP) to help in standardizing DGA interpretation techniques, identify transformer critical ranking based on DGA results and propose a proper maintenance action. DGA has been performed on 338 oil samples that have been collected from different transformers of different rating and different life span. Traditional DGA interpretation techniques are used in analyzing the results to measure its consistency. These data are then used to develop the new GEP model. Results show that all current traditional techniques do not necessarily lead to the same conclusion for the same oil sample. The new approach using GEP is easy to implement and it does not call for any expert personnel to interpret the DGA results and to provide a proper asset management decision on the transformer based on DGA analysis."}
{"_id": "e1e146ddf8214fcab965dabeb9908d097912a77f", "title": "Geolocation and Assisted GPS", "text": "C urrently in development, numerous geolocation technologies can pinpoint a person's or ob-ject's position on the Earth. Knowledge of the spatial distribution of wireless callers will facilitate the planning, design, and operation of next-generation broadband wireless networks. Mobile users will gain the ability to get local traffic information and detailed directions to gas stations, restaurants, hotels, and other services. Police and rescue teams will be able to quickly and precisely locate people who are lost or injured but cannot give their precise location. Companies will use geolocation-based applications to track personnel, vehicles, and other assets. The driving force behind the development of this technology is a US Federal Communications Commission (FCC) mandate stating that by 1 October 2001 all wireless carriers must provide the geolocation of an emergency 911 caller to the appropriate public safety answering point (see http://www.fcc.gov/e911/). Location technologies requiring new, modified, or upgraded mobile stations must determine the caller's longitude and latitude within 50 meters for 67 percent of emergency calls, and within 150 meters for 95 percent of the calls. Otherwise, they must do so within 100 meters and 300 meters, respectively, for the same percentage of calls. Currently deployed wireless technology can locate 911 calls within an area no smaller than 10 to 15 square kilometers. An obvious way to satisfy the FCC requirement is to incorporate Global Positioning System (GPS) receivers into mobile phones. GPS consists of a constellation of 24 satellites, equally spaced in six orbital planes 20,200 kilometers above the Earth, that transmit two specially coded carrier signals: L1 frequency for civilian use, and L2 for military and government use. GPS receivers process the signals to compute position in 3D\u2014latitude, longitude , and altitude\u2014within a radius of 10 meters or better. Accuracy has increased substantially since the US government turned off Selective Availability, the intentional degradation of GPS signals , in May 2000. Because no return channel links GPS receivers to satellites, any number of users can get their positions simultaneously. GPS signals also resist interference and jamming. To operate properly, however, conventional GPS receivers need a clear view of the skies and signals from at least four satellites, requirements that exclude operation in buildings or other RF-shadowed environments. Further, it takes a GPS receiver starting \" cold \" \u2014without any knowledge about the GPS constellation's state\u2014as long as several minutes to achieve the mobile station location fix, a considerable delay for \u2026"}
{"_id": "e91ac1c4539b7c9162d9907bc6f655a9bd43e705", "title": "DASCo: dynamic adaptive streaming over CoAP", "text": "This paper presents the Dynamic Adaptive Streaming over CoAP (DASCo), a solution for adaptive media streaming in the Internet of Things (IoT) environment. DASCo combines DASH (Dynamic Adaptive Streaming over HTTP), the widespread open standard for HTTP-compliant streaming, with Constrained Application Protocol (CoAP), the vendor-independent web transfer protocol designed for resource-constrained devices. The proposed solution uses DASH formats and mechanisms to make media segments available for consumers, and exploits CoAP to deliver media segments to consumers\u2019 applications. Consequently, the DASCo player offers native interoperability with IoT devices that are accessed via CoAP, thus it allows easy access to data collected by different sensors in order to enrich the multimedia services. In the paper we present an overview of constraints of default CoAP implementation with respect to media streaming, and propose guidelines for development of adaptive streaming service over CoAP. Moreover, we discuss the features of CoAP that can be investigated when designing an efficient adaptive algorithm for DASCo. Presented experimental results show poor performance of DASCo when default values of CoAP transmission parameters have been used. However, adjusting the parameters according to the network conditions considerably improves DASCo efficiency. Furthermore, in bad network conditions the enhanced DASCo is characterized by a more stable download rate compared to DASH. This feature is important in the context of dynamic media adaptation, since it allows an adaptation algorithm to better fit media bit rate with download conditions."}
{"_id": "fe46824b50fba1f9b7b16c8e71c00a72933353c8", "title": "Dynamic time warping for gesture-based user identification and authentication with Kinect", "text": "The Kinect has primarily been used as a gesture-driven device for motion-based controls. To date, Kinect-based research has predominantly focused on improving tracking and gesture recognition across a wide base of users. In this paper, we propose to use the Kinect for biometrics; rather than accommodating a wide range of users we exploit each user's uniqueness in terms of gestures. Unlike pure biometrics, such as iris scanners, face detectors, and fingerprint recognition which depend on irrevocable biometric data, the Kinect can provide additional revocable gesture information. We propose a dynamic time-warping (DTW) based framework applied to the Kinect's skeletal information for user access control. Our approach is validated in two scenarios: user identification, and user authentication on a dataset of 20 individuals performing 8 unique gestures. We obtain an overall 4.14%, and 1.89% Equal Error Rate (EER) in user identification, and user authentication, respectively, for a gesture and consistently outperform related work on this dataset. Given the natural noise present in the real-time depth sensor this yields promising results."}
{"_id": "71e2307d30ea55c3a70e4220c65a29826f78dccc", "title": "A Multi-disciplinary Approach to Marketing Leveraging Advances in", "text": "For decades, neuroscience has greatly contributed to our foundational understanding of human behavior. More recently, the findings and methods of neuroscience have been applied to study the process of decision-making in order to offer advanced insights into the neural mechanisms that influence economic and consumer choices. In this thesis, I will address how customized marketing strategies can be enriched through the integration of consumer neuroscience, an integrative field anchored in the biological, cognitive and affective mechanisms of consumer behavior. By recognizing and utilizing these multidisciplinary interdependencies, marketers can enhance their advertising and promotional mix to elicit desired neural and affective consumer responses and measure these reactions in order to enhance purchasing decisions. The principal objective of this thesis is to present a comprehensive review of consumer neuroscience and to elucidate why it is an increasingly important area of study within the framework of human behavior. I will also describe how the insights gained from this emerging field can be leveraged to optimize marketing activities. Finally, I propose an experiment that illuminates key research questions, which may have considerable impact on the discipline of consumer neuroscience as well as the marketing industry."}
{"_id": "127f0fd91bc7d5cee164beb3ec89e0f5071a3e8d", "title": "AutoMoDe: A novel approach to the automatic design of control software for robot swarms", "text": "We introduce AutoMoDe: a novel approach to the automatic design of control software for robot swarms. The core idea in AutoMoDe recalls the approach commonly adopted in machine learning for dealing with the bias\u2013variance tradedoff: to obtain suitably general solutions with low variance, an appropriate design bias is injected. AutoMoDe produces robot control software by selecting, instantiating, and combining preexisting parametric modules\u2014the injected bias. The resulting control software is a probabilistic finite state machine in which the topology, the transition rules and the values of the parameters are obtained automatically via an optimization process that maximizes a task-specific objective function. As a proof of concept, we define AutoMoDe-Vanilla, which is a specialization of AutoMoDe for the e-puck robot. We use AutoMoDe-Vanilla to design the robot control software for two different tasks: aggregation and foraging. The results show that the control software produced by AutoMoDe-Vanilla (i) yields good results, (ii) appears to be robust to the so called reality gap, and (iii) is naturally human-readable."}
{"_id": "103476735a92fced52ac096f301b07fda3ee42b7", "title": "BiDirectional optical communication with AquaOptical II", "text": "This paper describes AquaOptical II, a bidirectional, high data-rate, long-range, underwater optical communication system. The system uses the software radio principle. Each AquaOptical II modem can be programmed to transmit user defined waveforms and record the received waveforms for detailed analysis. This allows for the use of many different modulation schemes. We describe the hardware and software architecture we developed for these goals. We demonstrate bidirectional communication between two AquaOptical II modems in a pool experiment. During the experiment AquaOptical II achieved a signal to noise ration of 5.1 over a transmission distance of 50 m at pulse widths of 1 \u00b5sec, 500 ns, and 250 ns. When using discrete pulse interval modulation (DPIM) this corresponds to a bit-rate of 0.57 Mbit/s, 1.14 Mbit/s, and 2.28 Mbit/s."}
{"_id": "0eb463bc1079db2ba081f20af675cb055ca16d6f", "title": "Semantic Data Models", "text": "Semantic data models have emerged from a requirement for more expressive conceptual data models. Current generation data models lack direct support for relationships, data abstraction, inheritance, constraints, unstructured objects, and the dynamic properties of an application. Although the need for data models with richer semantics is widely recognized, no single approach has won general acceptance. This paper describes the generic properties of semantic data models and presents a representative selection of models that have been proposed since the mid-1970s. In addition to explaining the features of the individual models, guidelines are offered for the comparison of models. The paper concludes with a discussion of future directions in the area of conceptual data modeling."}
{"_id": "ab7f7d3af0989135cd42a3d48348b0fdaceba070", "title": "External localization system for mobile robotics", "text": "We present a fast and precise vision-based software intended for multiple robot localization. The core component of the proposed localization system is an efficient method for black and white circular pattern detection. The method is robust to variable lighting conditions, achieves sub-pixel precision, and its computational complexity is independent of the processed image size. With off-the-shelf computational equipment and low-cost camera, its core algorithm is able to process hundreds of images per second while tracking hundreds of objects with millimeter precision. We propose a mathematical model of the method that allows to calculate its precision, area of coverage, and processing speed from the camera's intrinsic parameters and hardware's processing capacity. The correctness of the presented model and performance of the algorithm in real-world conditions are verified in several experiments. Apart from the method description, we also publish its source code; so, it can be used as an enabling technology for various mobile robotics problems."}
{"_id": "e19e6fdec87e5cdec2116fd44cd458b6ecece408", "title": "Online Product Quantization", "text": "Approximate nearest neighbor (ANN) search has achieved great success in many tasks. However, existing popular methods for ANN search, such as hashing and quantization methods, are designed for static databases only. They cannot handle well the database with data distribution evolving dynamically, due to the high computational effort for retraining the model based on the new database. In this paper, we address the problem by developing an online product quantization (online PQ) model and incrementally updating the quantization codebook that accommodates to the incoming streaming data. Moreover, to further alleviate the issue of large scale computation for the online PQ update, we design two budget constraints for the model to update partial PQ codebook instead of all. We derive a loss bound which guarantees the performance of our online PQ model. Furthermore, we develop an online PQ model over a sliding window with both data insertion and deletion supported, to reflect the real-time behavior of the data. The experiments demonstrate that our online PQ model is both time-efficient and effective for ANN search in dynamic large scale databases compared with baseline methods and the idea of partial PQ codebook update further reduces the update cost."}
{"_id": "51f478339cc86f20fff222baf17be5bab7f29bc3", "title": "Survey on Collaborative Filtering, Content-based Filtering and Hybrid Recommendation System", "text": "Recommender systems or recommendation systems are a subset of information filtering system that used to anticipate the 'evaluation' or 'preference' that user would feed to an item. In recent years E-commerce applications are widely using Recommender system. Generally the most popular E-commerce sites are probably music, news, books, research articles, and products. Recommender systems are also available for business experts, jokes, restaurants, financial services, life insurance and twitter followers. Recommender systems have formulated in parallel with the web. Initially Recommender systems were based on demographic, content-based filtering and collaborative filtering. Currently, these systems are incorporating social information for enhancing a quality of recommendation process. For betterment of recommendation process in the future, Recommender systems will use personal, implicit and local information from the Internet. This paper provides an overview of recommender systems that include collaborative filtering, content-based filtering and hybrid approach of recommender system."}
{"_id": "6e6eccaabebdaf463e118d9798251d6ce86c0d8f", "title": "Lidar-histogram for fast road and obstacle detection", "text": "Detection of traversable road regions, positive and negative obstacles, and water hazards is a fundamental task for autonomous driving vehicles, especially in off-road environment. This paper proposes an efficient method, called Lidar-histogram. It can be used to integrate the detection of traversable road regions, obstacles and water hazards into one single framework. The weak assumption of the Lidar-histogram is that a decent-sized area in front of the vehicle is flat. The Lidar-histogram is derived from an efficient organized map of Lidar point cloud, called Lidar-imagery, to index, describe and store Lidar data. The organized point-cloud map can be easily obtained by indexing the original unordered 3D point cloud to a Lidar-specific 2D coordinate system. In the Lidar-histogram representation, the 3D traversable road plane in front of vehicle can be projected as a straight line segment, and the positive and negative obstacles are projected above and below the line segment, respectively. In this way, the problem of detecting traversable road and obstacles is converted into a simple linear classification task in 2D space. Experiments have been conducted in different kinds of off-road and urban scenes, and we have obtained very promising results."}
{"_id": "c5d34128e995bd0039f74eba2255c82950ca4c46", "title": "Automated personalized feedback for physical activity and dietary behavior change with mobile phones: a randomized controlled trial on adults.", "text": "BACKGROUND\nA dramatic rise in health-tracking apps for mobile phones has occurred recently. Rich user interfaces make manual logging of users' behaviors easier and more pleasant, and sensors make tracking effortless. To date, however, feedback technologies have been limited to providing overall statistics, attractive visualization of tracked data, or simple tailoring based on age, gender, and overall calorie or activity information. There are a lack of systems that can perform automated translation of behavioral data into specific actionable suggestions that promote healthier lifestyle without any human involvement.\n\n\nOBJECTIVE\nMyBehavior, a mobile phone app, was designed to process tracked physical activity and eating behavior data in order to provide personalized, actionable, low-effort suggestions that are contextualized to the user's environment and previous behavior. This study investigated the technical feasibility of implementing an automated feedback system, the impact of the suggestions on user physical activity and eating behavior, and user perceptions of the automatically generated suggestions.\n\n\nMETHODS\nMyBehavior was designed to (1) use a combination of automatic and manual logging to track physical activity (eg, walking, running, gym), user location, and food, (2) automatically analyze activity and food logs to identify frequent and nonfrequent behaviors, and (3) use a standard machine-learning, decision-making algorithm, called multi-armed bandit (MAB), to generate personalized suggestions that ask users to either continue, avoid, or make small changes to existing behaviors to help users reach behavioral goals. We enrolled 17 participants, all motivated to self-monitor and improve their fitness, in a pilot study of MyBehavior. In a randomized two-group trial, investigators randomly assigned participants to receive either MyBehavior's personalized suggestions (n=9) or nonpersonalized suggestions (n=8), created by professionals, from a mobile phone app over 3 weeks. Daily activity level and dietary intake was monitored from logged data. At the end of the study, an in-person survey was conducted that asked users to subjectively rate their intention to follow MyBehavior suggestions.\n\n\nRESULTS\nIn qualitative daily diary, interview, and survey data, users reported MyBehavior suggestions to be highly actionable and stated that they intended to follow the suggestions. MyBehavior users walked significantly more than the control group over the 3 weeks of the study (P=.05). Although some MyBehavior users chose lower-calorie foods, the between-group difference was not significant (P=.15). In a poststudy survey, users rated MyBehavior's personalized suggestions more positively than the nonpersonalized, generic suggestions created by professionals (P<.001).\n\n\nCONCLUSIONS\nMyBehavior is a simple-to-use mobile phone app with preliminary evidence of efficacy. To the best of our knowledge, MyBehavior represents the first attempt to create personalized, contextualized, actionable suggestions automatically from self-tracked information (ie, manual food logging and automatic tracking of activity). 
Lessons learned about the difficulty of manual logging and usability concerns, as well as future directions, are discussed.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT02359981; https://clinicaltrials.gov/ct2/show/NCT02359981 (Archived by WebCite at http://www.webcitation.org/6YCeoN8nv)."}
{"_id": "5e5fa1de11a715d65008074a24060368b221f376", "title": "Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study.", "text": "Qualitative content analysis and thematic analysis are two commonly used approaches in data analysis of nursing research, but boundaries between the two have not been clearly specified. In other words, they are being used interchangeably and it seems difficult for the researcher to choose between them. In this respect, this paper describes and discusses the boundaries between qualitative content analysis and thematic analysis and presents implications to improve the consistency between the purpose of related studies and the method of data analyses. This is a discussion paper, comprising an analytical overview and discussion of the definitions, aims, philosophical background, data gathering, and analysis of content analysis and thematic analysis, and addressing their methodological subtleties. It is concluded that in spite of many similarities between the approaches, including cutting across data and searching for patterns and themes, their main difference lies in the opportunity for quantification of data. It means that measuring the frequency of different categories and themes is possible in content analysis with caution as a proxy for significance."}
{"_id": "28bd4f88a9f2830d97fbdd1120ab3e9a810adf18", "title": "A hybrid bipartite graph based recommendation algorithm for mobile games", "text": "With the rapid development of the mobile games, mobile game recommendation has become a core technique to mobile game marketplaces. This paper proposes a bipartite graph based recommendation algorithm PKBBR (Prior Knowledge Based for Bipartite Graph Rank). We model the user's interest in mobile game based on bipartite graph structure and use the users' mobile game behavior to describe the edge weights of the graph, then incorporate users' prior knowledge into the projection procedure of the bipartite graph to enrich the information among the nodes. Because the popular games have a great influence on mobile game marketplace, we design a hybrid recommendation algorithm to incorporate popularity recommendation based on users' behaviors. The experiment results show that this hybrid method could achieve a better performance than other approaches."}
{"_id": "1b52a22614d1dd0c95072bd6bb7bab3bee75f550", "title": "Active Object Categorization on a Humanoid Robot", "text": "We present a Bag of Words-based active object categorization technique implemented and tested on a humanoid robot. The robot is trained to categorize objects that are handed to it by a human operator. The robot uses hand and head motions to actively acquire a number of different views. A view planning scheme using entropy minimization reduces the number of views needed to achieve a valid decision. Categorization results are significantly improved by active elimination of background features using robot arm motion. Our experiments cover both, categorization when the object is handed to the robot in a fixed pose at training and testing, and object pose independent categorization. Results on a 4-class object database demonstrate the classification efficiency, a significant gain from multi-view compared to single-view classification, and the advantage of view planning. We conclude that humanoid robotic systems can be successfully applied to actively categorize objects a task with many potential applications ranging from edutainment to active surveillance."}
{"_id": "d8adb97dbc276c4135601bb7ffcd03b7271f5a6e", "title": "Real Time Text Mining on Twitter Data", "text": "Social media constitute a challenging new source of information for intelligence gathering and decision making. Twitter is one of the most popular social media sites and often becomes the primary source of information. Twitter messages are short and well suited for knowledge discovery. Twitter provides both researchers and practitioners a free Application Programming Interface (API) which allows them to gather and analyse large data sets of tweets. Twitter information aren't solely tweet texts, as Twitter\u2019s API provides a lot of info to perform attention-grabbing analysis studies. The paper concisely describes method of knowledge gathering and therefore the main areas of knowledge mining, information discovery and information visual image from Twitter information. In this paper we can create a twitter app from which we can fetch the real time twitter tweets on a particular topic and stored it into R and then we can apply several text mining steps on the tweets to pre-process the tweets text and than we"}
{"_id": "e02010ed24b9a1aea265ac84c3e5d103c071dfe2", "title": "A Low-Power Speech Recognizer and Voice Activity Detector Using Deep Neural Networks", "text": "This paper describes digital circuit architectures for automatic speech recognition (ASR) and voice activity detection (VAD) with improved accuracy, programmability, and scalability. Our ASR architecture is designed to minimize off-chip memory bandwidth, which is the main driver of system power consumption. A SIMD processor with 32 parallel execution units efficiently evaluates feed-forward deep neural networks (NNs) for ASR, limiting memory usage with a sparse quantized weight matrix format. We argue that VADs should prioritize accuracy over area and power, and introduce a VAD circuit that uses an NN to classify modulation frequency features with 22.3- $\\mu \\text{W}$ power consumption. The 65-nm test chip is shown to perform a variety of ASR tasks in real time, with vocabularies ranging from 11 words to 145 000 words and full-chip power consumption ranging from 172 $\\mu \\text{W}$ to 7.78 mW."}
{"_id": "72b169c5539cfca8ac315638ff36eb04997f8350", "title": "A federated Edge Cloud-IoT architecture", "text": "The provisioning of powerful decentralized services in the Internet of Things (IoT), as well as the integration of innovative IoT platforms, requires ubiquity, reliability, high-performance, efficiency, and scalability. Towards the accomplishment of the above requirements, the contemporary technological trends leaded to the IoT and Cloud convergence. This combination created the approach of federated Cloud-IoT architectures. The Cloud-IoT architectural vision is paving the way to the vital step, beyond the IoT cognitive capabilities, that refers to the IoT world leveraging cloud principles. For the success of the deployment of such innovative and high-performance solutions, the introduction and the exploitation of 5G technological principles will empower the way forward. Towards this direction, this paper presents a federated architecture combined by distributed Edge Cloud-IoT platforms, that enables the semantic-based service combination and provisioning. The 5G technological features, such as low-latency, zero-losses, high-bitrates are studied and proposed for the empowerment of the federation of the Edge Cloud-IoT architectures. A smart health use case is presented for the evaluation of the proposed solution."}
{"_id": "130dab15d243e5569925aa8d2eafb080078baf79", "title": "GPCAD: a tool for CMOS op-amp synthesis", "text": "We present a method for optimizing and automating component and transistor sizing for CMOS operational amplifiers. We observe that a wide variety of performance measures can be formulated as posynomial functions of the design variables. As a result, amplifier design problems can be formulated as a geometric program, a special type of convex optimization problem for which very efficient global optimization methods have recently been developed. The synthesis method is therefore fast, and determines the globally optimal design; in particular the final solution is completely independent of the starting point (which can even be infeasible), and infeasible specifications are unambiguously detected. After briefly introducing the method, which is described in more detail by M. Hershenson et al., we show how the method can be applied to six common op-amp architectures, and give several example designs."}
{"_id": "ccb2b479b2b430e284e1c3afb1f9362cd1c95119", "title": "Applying graph-based anomaly detection approaches to the discovery of insider threats", "text": "The ability to mine data represented as a graph has become important in several domains for detecting various structural patterns. One important area of data mining is anomaly detection, but little work has been done in terms of detecting anomalies in graph-based data. In this paper we present graph-based approaches to uncovering anomalies in applications containing information representing possible insider threat activity: e-mail, cell-phone calls, and order processing."}
{"_id": "b81c9a3f774f839a79c0c6e40449ae5d5f49966a", "title": "Talla at SemEval-2017 Task 3: Identifying Similar Questions Through Paraphrase Detection", "text": "This paper describes our approach to the SemEval-2017 shared task of determining question-question similarity in a community question-answering setting (Task 3B). We extracted both syntactic and semantic similarity features between candidate questions, performed pairwise-preference learning to optimize for ranking order, and then trained a random forest classifier to predict whether the candidate questions were paraphrases of each other. This approach achieved a MAP of 45.7% out of max achievable 67.0% on the test set."}
{"_id": "789156726070dd03a8d2175157161c35c5a5b290", "title": "DDD: A New Ensemble Approach for Dealing with Concept Drift", "text": "Online learning algorithms often have to operate in the presence of concept drifts. A recent study revealed that different diversity levels in an ensemble of learning machines are required in order to maintain high generalization on both old and new concepts. Inspired by this study and based on a further study of diversity with different strategies to deal with drifts, we propose a new online ensemble learning approach called Diversity for Dealing with Drifts (DDD). DDD maintains ensembles with different diversity levels and is able to attain better accuracy than other approaches. Furthermore, it is very robust, outperforming other drift handling approaches in terms of accuracy when there are false positive drift detections. In all the experimental comparisons we have carried out, DDD always performed at least as well as other drift handling approaches under various conditions, with very few exceptions."}
{"_id": "4aa9f5150b46320f534de4747a2dd0cd7f3fe292", "title": "Semi-supervised Sequence Learning", "text": "We present two approaches that use unlabeled data to improve sequ nce learning with recurrent networks. The first approach is to predict wha t comes next in a sequence, which is a conventional language model in natural language processing. The second approach is to use a sequence autoencoder, which r eads the input sequence into a vector and predicts the input sequence again. T hese two algorithms can be used as a \u201cpretraining\u201d step for a later supervised seq uence learning algorithm. In other words, the parameters obtained from the unsu pervised step can be used as a starting point for other supervised training model s. In our experiments, we find that long short term memory recurrent networks after b eing pretrained with the two approaches are more stable and generalize bette r. With pretraining, we are able to train long short term memory recurrent network s up to a few hundred timesteps, thereby achieving strong performance in ma ny text classification tasks, such as IMDB, DBpedia and 20 Newsgroups."}
{"_id": "e833ac2e377ab0bc4a6a7f236c4b6c32f3d2b8c1", "title": "Crime Scene Reconstruction: Online Gold Farming Network Analysis", "text": "Many online games have their own ecosystems, where players can purchase in-game assets using game money. Players can obtain game money through active participation or \u201creal money trading\u201d through official channels: converting real money into game money. The unofficial market for real money trading gave rise to gold farming groups (GFGs), a phenomenon with serious impact in the cyber and real worlds. GFGs in massively multiplayer online role-playing games (MMORPGs) are some of the most interesting underground cyber economies because of the massive nature of the game. To detect GFGs, there have been various studies using behavioral traits. However, they can only detect gold farmers, not entire GFGs with internal hierarchies. Even worse, GFGs continuously develop techniques to hide, such as forming front organizations, concealing cyber-money, and changing trade patterns when online game service providers ban GFGs. In this paper, we analyze the characteristics of the ecosystem of a large-scale MMORPG, and devise a method for detecting GFGs. We build a graph that characterizes virtual economy transactions, and trace abnormal trades and activities. We derive features from the trading graph and physical networks used by GFGs to identify them in their entirety. Using their structure, we provide recommendations to defend effectively against GFGs while not affecting the existing virtual ecosystem."}
{"_id": "096e07ced8d32fc9a3617ff1f725efe45507ede8", "title": "Learning methods for generic object recognition with invariance to pose and lighting", "text": "We assess the applicability of several popular learning methods for the problem of recognizing generic visual categories with invariance to pose, lighting, and surrounding clutter. A large dataset comprising stereo image pairs of 50 uniform-colored toys under 36 azimuths, 9 elevations, and 6 lighting conditions was collected (for a total of 194,400 individual images). The objects were 10 instances of 5 generic categories: four-legged animals, human figures, airplanes, trucks, and cars. Five instances of each category were used for training, and the other five for testing. Low-resolution grayscale images of the objects with various amounts of variability and surrounding clutter were used for training and testing. Nearest neighbor methods, support vector machines, and convolutional networks, operating on raw pixels or on PCA-derived features were tested. Test error rates for unseen object instances placed on uniform backgrounds were around 13% for SVM and 7% for convolutional nets. On a segmentation/recognition task with highly cluttered images, SVM proved impractical, while convolutional nets yielded 16/7% error. A real-time version of the system was implemented that can detect and classify objects in natural scenes at around 10 frames per second."}
{"_id": "0addfc35fc8f4419f9e1adeccd19c07f26d35cac", "title": "A discriminatively trained, multiscale, deformable part model", "text": "This paper describes a discriminatively trained, multiscale, deformable part model for object detection. Our system achieves a two-fold improvement in average precision over the best performance in the 2006 PASCAL person detection challenge. It also outperforms the best results in the 2007 challenge in ten out of twenty categories. The system relies heavily on deformable parts. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL challenge. Our system also relies heavily on new methods for discriminative training. We combine a margin-sensitive approach for data mining hard negative examples with a formalism we call latent SVM. A latent SVM, like a hidden CRF, leads to a non-convex training problem. However, a latent SVM is semi-convex and the training problem becomes convex once latent information is specified for the positive examples. We believe that our training methods will eventually make possible the effective use of more latent information such as hierarchical (grammar) models and models involving latent three dimensional pose."}
{"_id": "11540131eae85b2e11d53df7f1360eeb6476e7f4", "title": "Learning to Forget: Continual Prediction with LSTM", "text": "Long short-term memory (LSTM; Hochreiter & Schmidhuber, 1997) can solve numerous tasks not solvable by previous learning algorithms for recurrent neural networks (RNNs). We identify a weakness of LSTM networks processing continual input streams that are not a priori segmented into subsequences with explicitly marked ends at which the network's internal state could be reset. Without resets, the state may grow indefinitely and eventually cause the network to break down. Our remedy is a novel, adaptive forget gate that enables an LSTM cell to learn to reset itself at appropriate times, thus releasing internal resources. We review illustrative benchmark problems on which standard LSTM outperforms other RNN algorithms. All algorithms (including LSTM) fail to solve continual versions of these problems. LSTM with forget gates, however, easily solves them, and in an elegant way."}
{"_id": "a44ebe793947d04d9026a42f9a28c962ff66c5c6", "title": "The anatomy of the posterior aspect of the knee. An anatomic study.", "text": "BACKGROUND\nThe orthopaedic literature contains relatively little quantitative information regarding the anatomy of the posterior aspect of the knee. The purpose of the present study was to provide a detailed description of, and to propose a standard nomenclature for, the anatomy of the posterior aspect of the knee.\n\n\nMETHODS\nDetailed dissection of twenty nonpaired, fresh-frozen knees was performed. Posterior knee structures were measured according to length, width, and/or distance to reproducible osseous landmarks.\n\n\nRESULTS\nThe semimembranosus tendon had eight attachments distal to the main common tendon. The main components were a lateral expansion to the oblique popliteal ligament; a direct arm, which attached to the tibia; and an anterior arm. The oblique popliteal ligament, the largest posterior knee structure, formed a broad fascial sheath over the posterior aspect of the knee and measured 48.0 mm in length and 9.5 mm wide at its medial origin and 16.4 mm wide at its lateral attachment. It had two lateral attachments, one to the meniscofemoral portion of the posterolateral joint capsule and one to the tibia, along the lateral border of the posterior cruciate ligament facet. The semimembranosus also had a distal tibial expansion, which formed a posterior fascial layer over the popliteus muscle. A thickening of the posterior joint capsule, the proximal popliteus capsular expansion, which in this study averaged 40.5 mm in length, connected the posteromedial knee capsule at its attachment at the intercondylar notch to the medial border of the popliteus musculotendinous junction. The plantaris muscle, popliteofibular ligament, fabellofibular ligament, and semimembranosus bursa were present in all specimens.\n\n\nCONCLUSIONS\nThe anatomy of the posterior aspect of the knee is quite complex. This study provides information that can lead to further biomechanical, radiographic imaging, and clinical studies of the importance of these posterior knee structures."}
{"_id": "b3eea1328c10455faa9b49c1f4aec7cd5a0b2d1a", "title": "The discrete wavelet transform: wedding the a trous and Mallat algorithms", "text": ""}
{"_id": "29b6452a3787022942da7670c197ff088c64562b", "title": "Benefit of Direct Charge Measurement (DCM) on Interconnect Capacitance Measurement", "text": "This paper discusses application of direct charge measurement (DCM) on characterizing on-chip interconnect capacitance. Measurement equipment and techniques are leveraged from Flat Panel Display testing. On-chip active device is not an essential necessity for DCM test structure and it is easy to implement parallel measurements. Femto-Farad measurement sensitivity is achieved without having on-chip active device. Measurement results of silicon and glass substrates, including parallel measurements, are presented."}
{"_id": "07b4bd19a4450cda062d88a2fd0447fbc0875fd0", "title": "Adaptive Sharing for Online Social Networks: A Trade-off Between Privacy Risk and Social Benefit", "text": "Online social networks such as Facebook allow users to control which friend sees what information, but it can be a laborious process for users to specify every receiver for each piece of information they share. Therefore, users usually group their friends into social circles, and select the most appropriate social circle to share particular information with. However, social circles are not formed for setting privacy policies, and even the most appropriate social circle still cannot adapt to the changes of users' privacy requirements influenced by the changes in context. This problem drives the need for better privacy control which can adaptively filter the members in a selected social circle to satisfy users' requirements while maintaining users' social needs. To enable such adaptive sharing, this paper proposes a utility-based trade-off framework that models users' concerns (i.e. Potential privacy risks) and incentives of sharing (i.e. Potential social benefits), and quantifies users' requirements as a trade-off between these two types of utilities. By balancing these two metrics, our framework suggests a subset of a selected circle that aims to maximise users' overall utility of sharing. Numerical simulation results compare the outcome of three sharing strategies in randomly changing contexts."}
{"_id": "28133b781cfa109f9943f821ed31736f01d84afe", "title": "Evaluating Reinforcement Learning Algorithms in Observational Health Settings", "text": "Omer Gottesman, Fredrik Johansson, Joshua Meier, Jack Dent, Donghun Lee, Srivatsan Srinivasan, Linying Zhang, Yi Ding, David Wihl, Xuefeng Peng, Jiayu Yao, Isaac Lage, Christopher Mosch, Li-wei H. Lehman, Matthieu Komorowski, Aldo Faisal, Leo Anthony Celi, David Sontag, and Finale Doshi-Velez Paulson School of Engineering and Applied Sciences, Harvard University Institute for Medical Engineering and Science, MIT T.H. Chan School of Public Health, Harvard University Department of Statistics, Harvard University Laboratory for Computational Physiology, Harvard-MIT Health Sciences & Technology, MIT Department of Surgery and Cancer, Faculty of Medicine, Imperial College London Department of Bioengineering, Imperial College London Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center MIT Critical Data"}
{"_id": "35e500a7392bcff30045f5b21c1255c2aa2e7647", "title": "Personality traits and group-based information behaviour: an exploratory study", "text": "Although the importance of individual characteristics and psychological factors has been conceptualized in many models of information seeking behaviour (e.g., Case 2007; Wilson 1999) research into personality issues has only recently attracted attention in information science. This may be explained by the work by Heinstr\u00f6m (2002; 2003; 2006a; 2006b; 2006c), but may also be explained by the emerging interest and research into affective dimensions of information behaviour (Nahl and Bilal 2007). Hitherto, the research focus in information science VOL. 14 NO. 2, JUNE, 2009"}
{"_id": "92c7e2171731d0a768e900150ef46402284830fd", "title": "Design of Neuro-Fuzzy System Controller for DC ServomotorBased Satellite Tracking System", "text": "A parabolic dish antenna positioning system is a control unit which directs the antenna to desired target automatically. In all its aspects, automation has become widespread and promises to represent the future for satellite tracking antenna systems. The term tracking is used to mean that the control system should utilize suitable algorithms to automate the process of pointing the antenna to a selected satellite thereby establishing the desired line of sight (LOS). This paper aims to present and discuss the results obtained from the development of a DC servomotor-based Neuro-Fuzzy System Controller (NFSC) algorithm which can be applied in the positioning of the parabolic reflector antennas. The advantage of using NFSC method is that it has a high degree of nonlinearity tolerance, learning ability and solves problems that are difficult to address with the conventional techniques such as Proportional Integral Derivative (PID) strategy. The architecture of the proposed antenna control system employs Adaptive Neuro-Fuzzy Inference System (ANFIS) design environment of MATLAB/SIMULINK package. The results obtained indicated that the proposed NFSC was able to achieve desired output DC motor position with reduced rise time and overshoot. Future work is to apply the controller in real time to achieve automatic satellite tracking with parabolic antenna using data derived from the signal strength."}
{"_id": "52efd857b500531e040a9366b3eb5bb0ea543979", "title": "Google effects on memory: cognitive consequences of having information at our fingertips.", "text": "The advent of the Internet, with sophisticated algorithmic search engines, has made accessing information as easy as lifting a finger. No longer do we have to make costly efforts to find the things we want. We can \"Google\" the old classmate, find articles online, or look up the actor who was on the tip of our tongue. The results of four studies suggest that when faced with difficult questions, people are primed to think about computers and that when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it. The Internet has become a primary form of external or transactive memory, where information is stored collectively outside ourselves."}
{"_id": "38bef5fc907299a1a9b91346eec334b259d5b3eb", "title": "Rigid scene flow for 3D LiDAR scans", "text": "The perception of the dynamic aspects of the environment is a highly relevant precondition for the realization of autonomous robot system acting in the real world. In this paper, we propose a novel method for estimating dense rigid scene flow in 3D LiDAR scans. We formulate the problem as an energy minimization problem, where we assume local geometric constancy and incorporate regularization for smooth motion fields. Analyzing the dynamics at point level helps in inferring the fine-grained details of motion. We show results on multiple sequences of the KITTI odometry dataset, where we seamlessly estimate multiple motions pertaining to different dynamic objects. Furthermore, we test our approach on a dataset with pedestrians to show how our method adapts to a case with non-rigid motion. For comparison we use the ground truth from KITTI and show how our method outperforms different ICP-based methods."}
{"_id": "351e1e61e38a127f1dd68d74a6dfb0398e8cb84e", "title": "Dialogue-act-driven Conversation Model : An Experimental Study", "text": "The utility of additional semantic information for the task of next utterance selection in an automated dialogue system is the focus of study in this paper. In particular, we show that additional information available in the form of dialogue acts \u2013when used along with context given in the form of dialogue history\u2013 improves the performance irrespective of the underlying model being generative or discriminative. In order to show the model agnostic behavior of dialogue acts, we experiment with several well-known models such as sequence-to-sequence encoder-decoder model, hierarchical encoder-decoder model, and Siamese-based models with and without hierarchy; and show that in all models, incorporating dialogue acts improves the performance by a significant margin. We, furthermore, propose a novel way of encoding dialogue act information, and use it along with hierarchical encoder to build a model that can use the sequential dialogue act information in a natural way. Our proposed model achieves an MRR of about 84.8% for the task of next utterance selection on a newly introduced DailyDialog dataset, and outperform the baseline models. We also provide a detailed analysis of results including key insights that explain the improvement in MRR because of dialogue act information."}
{"_id": "8ecd08b194529ff6b8e8fc80b5c7a72504a059d3", "title": "Evaluation of Objective Quality Measures for Speech Enhancement", "text": "In this paper, we evaluate the performance of several objective measures in terms of predicting the quality of noisy speech enhanced by noise suppression algorithms. The objective measures considered a wide range of distortions introduced by four types of real-world noise at two signal-to-noise ratio levels by four classes of speech enhancement algorithms: spectral subtractive, subspace, statistical-model based, and Wiener algorithms. The subjective quality ratings were obtained using the ITU-T P.835 methodology designed to evaluate the quality of enhanced speech along three dimensions: signal distortion, noise distortion, and overall quality. This paper reports on the evaluation of correlations of several objective measures with these three subjective rating scales. Several new composite objective measures are also proposed by combining the individual objective measures using nonparametric and parametric regression analysis techniques."}
{"_id": "a46c5572506e5b015174332445cded7f888d137d", "title": "A Proposed Digital Forensics Business Model to Support Cybercrime Investigation in Indonesia", "text": "Digital forensics will always include at least human as the one who performs activities, digital evidence as the main object, and process as a reference for the activities followed. The existing framework has not provided a description of the interaction between human, interaction between human and digital evidence, as well as interaction between human and the process itself. A business model approach can be done to provide the idea regarding the interaction in question. In this case, what has been generated by the author in the previous study through a business model of the digital chain of custody becomes the first step in constructing a business model of a digital forensics. In principle, the proposed business model already accommodates major components of digital forensics (human, digital evidence, process) and also considers the interactions among the components. The business model suggested has contained several basic principles as described in The Regulation of Chief of Indonesian National Police (Perkap) No 10/2010. This will give support to law enforcement to deal with cybercrime cases that are more frequent and more sophisticated, and can be a reference for each institution and organization to implement digital forensics activities."}
{"_id": "e3ff5de6f800fde3368eb78c61463c6cb3302f59", "title": "Compact slow-wave millimeter-wave bandpass filter (BPF) using open-loop resonator", "text": "A compact slow-wave millimeter-wave bandpass filter for wireless communication systems is developed and proposed in this paper. The filter is based on an open-loop resonator that is composed of a microstrip line with both ends loaded with folded open stubs. The folded arms of the open stubs are utilized to increase the loading capacitance to ground. In addition, the folded arms of the open stubs are also used to introduce cross couplings. The cross couplings can create transmission zeros at the edges of the desired passband. As a result, the filter can exhibit a passband with sharp rejection skirts. The filter is designed to have a fractional bandwidth of about 3.3% at a center frequency of 30.0 GHz. The filter design is realized using Rogers RT6002 substrate with a dielectric constant of 2.94 and a thickness of 0.127 mm. The filter design is successfully fabricated using the printed circuit board (PCB) technology. The fabricated filter is measured and an excellent agreement between the simulated and measured results is achieved. The fabricated filter has the advantages of compact size and excellent performance."}
{"_id": "c896eb2c37475a147d31cd6f0c19fce60e8e6b43", "title": "RDF-4X: a scalable solution for RDF quads store in the cloud", "text": "Resource Description Framework (RDF) represents a flexible and concise model for representing the metadata of resources on the web. Over the past years, with the increasing amount of RDF data, efficient and scalable RDF data management has become a fundamental challenge to achieve the Semantic Web vision. However, multiple approaches for RDF storage have been suggested, ranging from simple triple stores to more advanced techniques like vertical partitioning on the predicates or centralized approaches. Unfortunately, it is still a challenge to store a huge quantity of RDF quads due, in part, to the query processing for RDF data. This paper proposes a scalable solution for RDF data management that uses Apache Accumulo. We focus on introducing storage methods and indexing techniques that scale to billions of quads across multiple nodes, while providing fast and easy access to the data through conventional query mechanisms such as SPARQL. Our performance evaluation shows that in most cases our approach works well against large RDF datasets."}
{"_id": "30725559faf4f883c8bc9ed0af4aad6f6e9c9b10", "title": "Global, Dense Multiscale Reconstruction for a Billion Points", "text": "We present a variational approach for surface reconstruction from a set of oriented points with scale information. We focus particularly on scenarios with nonuniform point densities due to images taken from different distances. In contrast to previous methods, we integrate the scale information in the objective and globally optimize the signed distance function of the surface on a balanced octree grid. We use a finite element discretization on the dual structure of the octree minimizing the number of variables. The tetrahedral mesh is generated efficiently with a lookup table which allows to map octree cells to the nodes of the finite elements. We optimize memory efficiency by data aggregation, such that robust data terms can be used even on very large scenes. The surface normals are explicitly optimized and used for surface extraction to improve the reconstruction at edges and corners."}
{"_id": "41e0436a617bda62ef6d895d58f1c4347a90dc89", "title": "Teachers\u2019 Performance Evaluation in Higher Educational Institution using Data Mining Technique", "text": "Educational Data Mining (EDM) is an evolving field exploring pedagogical data by applying different machine learning techniques/tools. It can be considered as interdisciplinary research field which provides intrinsic knowledge of teaching and learning process for effective education. The main objective of any educational institution is to provide quality education to its students. One way to achieve highest level of quality in higher education system is by discovering knowledge that predicts teachers\u2019 performance. This study presents an efficient system model for evaluation and prediction of teachers\u2019 performance in higher institutions of learning using data mining technologies. To achieve the objectives of this work, a two-layered classifier system was designed; it consists of an Artificial Neural Network (ANN) and Decision Tree. The classifier system was tested successfully using case study data from a Nigerian University in the South West of Nigeria. The data consists of academic qualifications for teachers as well as their experiences and grades of students in courses they taught among others. The attribute selected were evaluated using two feature selection methods in order to get a subset of the attributes that would make for a compact and accurate predictive model. The WEKA machine learning tool was used for the mining. The results show that, among the six attributes used, Working Experience, and Rank are rated the best two attributes that contributed mostly to the performance of teachers in this study. Also, considering the time taken to build the models and performance accuracy level, C4.5 decision tree outperformed the other two algorithms (ID3 and MLP) with good performance of 83.5% accuracy level and acceptable kappa statistics of 0.743. It does mean that C4.5 decision tree is best algorithm suitable for predicting teachers\u2019 performance in relation to the other two algorithms in this work. General Terms Data Mining"}
{"_id": "2564e203c97df4e62348fa116ffa3b18f5151278", "title": "Parsimonious module inference in large networks.", "text": "We investigate the detectability of modules in large networks when the number of modules is not known in advance. We employ the minimum description length principle which seeks to minimize the total amount of information required to describe the network, and avoid overfitting. According to this criterion, we obtain general bounds on the detectability of any prescribed block structure, given the number of nodes and edges in the sampled network. We also obtain that the maximum number of detectable blocks scales as sqrt[N], where N is the number of nodes in the network, for a fixed average degree \u27e8k\u27e9. We also show that the simplicity of the minimum description length approach yields an efficient multilevel Monte Carlo inference algorithm with a complexity of O(\u03c4NlogN), if the number of blocks is unknown, and O(\u03c4N) if it is known, where \u03c4 is the mixing time of the Markov chain. We illustrate the application of the method on a large network of actors and films with over 10(6) edges, and a dissortative, bipartite block structure."}
{"_id": "ea5b1f3c719cd4ddd4c78a0da1501e36d87d9782", "title": "Discriminative Optimization: Theory and Applications to Computer Vision", "text": "Many computer vision problems are formulated as the optimization of a cost function. This approach faces two main challenges: designing a cost function with a local optimum at an acceptable solution, and developing an efficient numerical method to search for this optimum. While designing such functions is feasible in the noiseless case, the stability and location of local optima are mostly unknown under noise, occlusion, or missing data. In practice, this can result in undesirable local optima or not having a local optimum in the expected place. On the other hand, numerical optimization algorithms in high-dimensional spaces are typically local and often rely on expensive first or second order information to guide the search. To overcome these limitations, we propose Discriminative Optimization (DO), a method that learns search directions from data without the need of a cost function. DO explicitly learns a sequence of updates in the search space that leads to stationary points that correspond to the desired solutions. We provide a formal analysis of DO and illustrate its benefits in the problem of 3D registration, camera pose estimation, and image denoising. We show that DO outperformed or matched state-of-the-art algorithms in terms of accuracy, robustness, and computational efficiency."}
{"_id": "93b8a3e1771b42b46eaa802a85ddcfba3cdf7c1e", "title": "Pervasive Motion Tracking and Muscle Activity Monitor", "text": "This paper introduces a novel human activity monitoring system combining Inertial Measurement Units (IMU) and Mechanomyographic (MMG) muscle sensing technology. While other work has recognised and implemented systems for combined muscle activity and motion recording, they have focused on muscle activity through EMG sensors, which have limited use outside controlled environments. MMG is a low frequency vibration emitted by skeletal muscle whose measurement does not require gel or adhesive electrodes, potentially offering much more efficient implementation for pervasive use. We have developed a combined IMU and MMG sensor the fusion of which provides a new dimension in human activity monitoring and bridges the gap between human dynamics and muscle activity. Results show that synergy between motion and muscle detection is not only viable, but straightforward, inexpensive, and can be packaged in a lightweight wearable package."}
{"_id": "d3bed9e9e7cb89236a5908ed902d006589e5f07b", "title": "Recognition for Handwritten English Letters : A Review", "text": "Character recognition is one of the most interesting and challenging research areas in the field of Image processing. English character recognition has been extensively studied in the last half century. Nowadays different methodologies are in widespread use for character recognition. Document verification, digital library, reading bank deposit slips, reading postal addresses, extracting information from cheques, data entry, applications for credit cards, health insurance, loans, tax forms etc. are application areas of digital document processing. This paper gives an overview of research work carried out for recognition of hand written English letters. In Hand written text there is no constraint on the writing style. Hand written letters are difficult to recognize due to diverse human handwriting style, variation in angle, size and shape of letters. Various approaches of hand written character recognition are discussed here along with their performance."}
{"_id": "9313629756a2f76b6f43c6fbc8c8e452a2fc6588", "title": "Anomaly Detection for Astronomical Data", "text": "Modern astronomical observatories can produce massive amount of data that are beyond the capability of the researchers to even take a glance. These scientific observations present both great opportunities and challenges for astronomers and machine learning researchers. In this project we address the problem of detecting anomalies/novelties in these large-scale astronomical data sets. Two types of anomalies, the point anomalies and the group anomalies, are considered. The point anomalies include individual anomalous objects, such as single stars or galaxies that present unique characteristics. The group anomalies include anomalous groups of objects, such as unusual clusters of the galaxies that are close together. They both have great values for astronomical studies, and our goal is to detect them automatically in un-supervised ways. For point anomalies, we adopt the subspace-based detection strategy and proposed a robust low-rank matrix decomposition algorithm for more reliable results. For group anomalies, we use hierarchical probabilistic models to capture the generative mechanism of the data, and then score the data groups using various probability measures. Experimental evaluation on both synthetic and real world data sets shows the effectiveness of the proposed methods. On a real astronomical data sets, we obtained several interesting anecdotal results. Initial inspections by the astronomers confirm the usefulness of these machine learning methods in astronomical research."}
{"_id": "14956e13069de62612d63790860f41e9fcc9bf70", "title": "Graph SLAM based mapping for AGV localization in large-scale warehouses", "text": "The operation of industrial Automated Guided Vehicles (AGV) today requires designated infrastructure and readily available maps for their localization. In logistics, high effort and investment is necessary to enable the introduction of AGVs. Within the SICK AG coordinated EU-funded research project PAN-Robots we aim to reduce the installation time and costs dramatically by semi-automated plant exploration and localization based on natural landmarks. In this paper, we present our current mapping and localization results based on measurement data acquired at the site of our project partner Coca-Cola Iberian Partners in Bilbao, Spain. We evaluate our solution in terms of accuracy of the map, i.e. comparing landmark position estimates with a ground truth map of millimeter accuracy. The localization results are shown based on artificial landmarks as well as natural landmarks (gridmaps) based on the same graph based optimization solution."}
{"_id": "03184ac97ebf0724c45a29ab49f2a8ce59ac2de3", "title": "Evaluation of output embeddings for fine-grained image classification", "text": "Image classification has advanced significantly in recent years with the availability of large-scale image sets. However, fine-grained classification remains a major challenge due to the annotation cost of large numbers of fine-grained categories. This project shows that compelling classification performance can be achieved on such categories even without labeled training data. Given image and class embeddings, we learn a compatibility function such that matching embeddings are assigned a higher score than mismatching ones; zero-shot classification of an image proceeds by finding the label yielding the highest joint compatibility score. We use state-of-the-art image features and focus on different supervised attributes and unsupervised output embeddings either derived from hierarchies or learned from unlabeled text corpora. We establish a substantially improved state-of-the-art on the Animals with Attributes and Caltech-UCSD Birds datasets. Most encouragingly, we demonstrate that purely unsupervised output embeddings (learned from Wikipedia and improved with finegrained text) achieve compelling results, even outperforming the previous supervised state-of-the-art. By combining different output embeddings, we further improve results."}
{"_id": "453b92fb760b2b6865e96670003061097953b5b3", "title": "Advice weaving in AspectJ", "text": "This paper describes the implementation of advice weaving in AspectJ. The AspectJ language picks out dynamic join points in a program's execution with pointcuts and uses advice to change the behavior at those join points. The core task of AspectJ's advice weaver is to statically transform a program so that at runtime it will behave according to the AspeetJ language semantics. This paper describes the 1.1 implementation which is based on transforming bytecode. We describe how AspectJ's join points are mapped to regions of bytecode, how these regions are efficiently matched by AspectJ's pointcuts, and how pieces of advice are efficiently implemented at these regions. We also discuss both run-time and compile-time performance of this implementation."}
{"_id": "7081388f372231e13080f129423f6605d0c5168f", "title": "Scalable production of large quantities of defect-free few-layer graphene by shear exfoliation in liquids.", "text": "To progress from the laboratory to commercial applications, it will be necessary to develop industrially scalable methods to produce large quantities of defect-free graphene. Here we show that high-shear mixing of graphite in suitable stabilizing liquids results in large-scale exfoliation to give dispersions of graphene nanosheets. X-ray photoelectron spectroscopy and Raman spectroscopy show the exfoliated flakes to be unoxidized and free of basal-plane defects. We have developed a simple model that shows exfoliation to occur once the local shear rate exceeds 10(4) s(-1). By fully characterizing the scaling behaviour of the graphene production rate, we show that exfoliation can be achieved in liquid volumes from hundreds of millilitres up to hundreds of litres and beyond. The graphene produced by this method performs well in applications from composites to conductive coatings. This method can be applied to exfoliate BN, MoS2 and a range of other layered crystals."}
{"_id": "75412b7dbc418aa56710e1b6e4b61da794d5adc7", "title": "Large-area synthesis of high-quality and uniform graphene films on copper foils.", "text": "Graphene has been attracting great interest because of its distinctive band structure and physical properties. Today, graphene is limited to small sizes because it is produced mostly by exfoliating graphite. We grew large-area graphene films of the order of centimeters on copper substrates by chemical vapor deposition using methane. The films are predominantly single-layer graphene, with a small percentage (less than 5%) of the area having few layers, and are continuous across copper surface steps and grain boundaries. The low solubility of carbon in copper appears to help make this growth process self-limiting. We also developed graphene film transfer processes to arbitrary substrates, and dual-gated field-effect transistors fabricated on silicon/silicon dioxide substrates showed electron mobilities as high as 4050 square centimeters per volt per second at room temperature."}
{"_id": "7ad70a564c4c8898f61eddbcecf9e90f71e9d4b6", "title": "Measurement of the elastic properties and intrinsic strength of monolayer graphene.", "text": "We measured the elastic properties and intrinsic breaking strength of free-standing monolayer graphene membranes by nanoindentation in an atomic force microscope. The force-displacement behavior is interpreted within a framework of nonlinear elastic stress-strain response, and yields second- and third-order elastic stiffnesses of 340 newtons per meter (N m(-1)) and -690 Nm(-1), respectively. The breaking strength is 42 N m(-1) and represents the intrinsic strength of a defect-free sheet. These quantities correspond to a Young's modulus of E = 1.0 terapascals, third-order elastic stiffness of D = -2.0 terapascals, and intrinsic strength of sigma(int) = 130 gigapascals for bulk graphite. These experiments establish graphene as the strongest material ever measured, and show that atomically perfect nanoscale materials can be mechanically tested to deformations well beyond the linear regime."}
{"_id": "ddeb1c116cecd14f1065c76599509de094e18420", "title": "Raman spectrum of graphene and graphene layers.", "text": "Graphene is the two-dimensional building block for carbon allotropes of every other dimensionality. We show that its electronic structure is captured in its Raman spectrum that clearly evolves with the number of layers. The D peak second order changes in shape, width, and position for an increasing number of layers, reflecting the change in the electron bands via a double resonant Raman process. The G peak slightly down-shifts. This allows unambiguous, high-throughput, nondestructive identification of graphene layers, which is critically lacking in this emerging research area."}
{"_id": "85d3432db02840f07f7ce8603507cb6125ac1334", "title": "Synthesis of graphene-based nanosheets via chemical reduction of exfoliated graphite oxide", "text": "Reduction of a colloidal suspension of exfoliated graphene oxide sheets in water with hydrazine hydrate results in their aggregation and subsequent formation of a high-surface-area carbon material which consists of thin graphene-based sheets. The reduced material was characterized by elemental analysis, thermo-gravimetric analysis, scanning electron microscopy, X-ray photoelectron spectroscopy, NMR spectroscopy, Raman spectroscopy, and by electrical conductivity measurements. 2007 Elsevier Ltd. All rights reserved."}
{"_id": "3364a3131285f309398372ebd2831f2fddb59f26", "title": "Microphone Array Processing for Distant Speech Recognition: From Close-Talking Microphones to Far-Field Sensors", "text": "Distant speech recognition (DSR) holds the promise of the most natural human computer interface because it enables man-machine interactions through speech, without the necessity of donning intrusive body- or head-mounted microphones. Recognizing distant speech robustly, however, remains a challenge. This contribution provides a tutorial overview of DSR systems based on microphone arrays. In particular, we present recent work on acoustic beam forming for DSR, along with experimental results verifying the effectiveness of the various algorithms described here; beginning from a word error rate (WER) of 14.3% with a single microphone of a linear array, our state-of-the-art DSR system achieved a WER of 5.3%, which was comparable to that of 4.2% obtained with a lapel microphone. Moreover, we present an emerging technology in the area of far-field audio and speech processing based on spherical microphone arrays. Performance comparisons of spherical and linear arrays reveal that a spherical array with a diameter of 8.4 cm can provide recognition accuracy comparable or better than that obtained with a large linear array with an aperture length of 126 cm."}
{"_id": "c47b829a6a19f405e941e32ce19ea5e662967519", "title": "The Impact on Family Functioning of Social Media Use by Depressed Adolescents: A Qualitative Analysis of the Family Options Study", "text": "BACKGROUND\nAdolescent depression is a prevalent mental health problem, which can have a major impact on family cohesion. In such circumstances, excessive use of the Internet by adolescents may exacerbate family conflict and lack of cohesion. The current study aims to explore these patterns within an intervention study for depressed adolescents.\n\n\nMETHOD\nThe current study draws upon data collected from parents within the family options randomized controlled trial that examined family based interventions for adolescent depression (12-18\u2009years old) in Melbourne, Australia (2012-2014). Inclusion in the trial required adolescents to meet diagnostic criteria for a major depressive disorder via the Structured Clinical Interview for DSM-IV Childhood Disorders. The transcripts of sessions were examined using qualitative thematic analysis. The transcribed sessions consisted of 56\u2009h of recordings in total from 39 parents who took part in the interventions.\n\n\nRESULTS\nThe thematic analysis explored parental perceptions of their adolescent's use of social media (SM) and access to Internet content, focusing on the possible relationship between adolescent Internet use and the adolescent's depressive disorder. Two overarching themes emerged as follows: the sense of loss of parental control over the family environment and parents' perceived inability to protect their adolescent from material encountered on the Internet and social interactions via SM.\n\n\nCONCLUSION\nParents within the context of family based treatments felt that prolonged exposure to SM exposed their already vulnerable child to additional stressors and risks. The thematic analysis uncovered a sense of parental despair and lack of control, which is consistent with their perception of SM and the Internet as relentless and threatening to their parental authority and family cohesion."}
{"_id": "1eab69b7a85f87eb265945bebddd6ac0e1e08be3", "title": "A Type-Based Compiler for Standard ML", "text": "Compile-time type information should be valuable in efficient compilation of statically typed functional languages such as Standard ML. But how should type-directed compilation work in real compilers, and how much performance gain will type-based optimizations yield? In order to support more efficient data representations and gain more experience about type-directed compilation, we have implemented a new type-based middle end and back end for the Standard ML of New Jersey compiler. We describe the basic design of the new compiler, identify a number of practical issues, and then compare the performance of our new compiler with the old non-type-based compiler. Our measurement shows that a combination of several simple type-based optimizations reduces heap allocation by 36%; and improves the already-efficient code generated by the old non-type-based compiler by about 19% on a DECstation 500."}
{"_id": "c4108ffeed1fb1766578f60c10e9ebfc37ea13c0", "title": "Using Neural Networks to Generate Inferential Roles for Natural Language", "text": "Neural networks have long been used to study linguistic phenomena spanning the domains of phonology, morphology, syntax, and semantics. Of these domains, semantics is somewhat unique in that there is little clarity concerning what a model needs to be able to do in order to provide an account of how the meanings of complex linguistic expressions, such as sentences, are understood. We argue that one thing such models need to be able to do is generate predictions about which further sentences are likely to follow from a given sentence; these define the sentence's \"inferential role.\" We then show that it is possible to train a tree-structured neural network model to generate very simple examples of such inferential roles using the recently released Stanford Natural Language Inference (SNLI) dataset. On an empirical front, we evaluate the performance of this model by reporting entailment prediction accuracies on a set of test sentences not present in the training data. We also report the results of a simple study that compares human plausibility ratings for both human-generated and model-generated entailments for a random selection of sentences in this test set. On a more theoretical front, we argue in favor of a revision to some common assumptions about semantics: understanding a linguistic expression is not only a matter of mapping it onto a representation that somehow constitutes its meaning; rather, understanding a linguistic expression is mainly a matter of being able to draw certain inferences. Inference should accordingly be at the core of any model of semantic cognition."}
{"_id": "32805881c575cd7a55a42b3ad0b00a112d6620eb", "title": "Double Thompson Sampling for Dueling Bandits", "text": "In this paper, we propose a Double Thompson Sampling (D-TS) algorithm for dueling bandit problems. As its name suggests, D-TS selects both the first and the second candidates according to Thompson Sampling. Specifically, D-TS maintains a posterior distribution for the preference matrix, and chooses the pair of arms for comparison according to two sets of samples independently drawn from the posterior distribution. This simple algorithm applies to general Copeland dueling bandits, including Condorcet dueling bandits as a special case. For general Copeland dueling bandits, we show that D-TS achieves O(K log T ) regret. Moreover, using a back substitution argument, we refine the regret to O(K log T +K log log T ) in Condorcet dueling bandits and most practical Copeland dueling bandits. In addition, we propose an enhancement of D-TS, referred to as D-TS, to reduce the regret in practice by carefully breaking ties. Experiments based on both synthetic and real-world data demonstrate that D-TS and D-TS significantly improve the overall performance, in terms of regret and robustness."}
{"_id": "5c8a746146dba2dacb1741b8d9362d37c09f61ce", "title": "A Proposal for a Set of Level 3 Basic Linear Algebra Subprograms", "text": "This paper describes a proposal for Level 3 Basic Linear Algebra Subprograms (Level 3 BLAS). The Level 3 BLAS are targeted at matrix-matrix operations with the aim of providing more efficient, but portable, implementations of algorithms on high-performance computers, especially those with hierarchical memory and parallel processing capability."}
{"_id": "3a8d076b5baa28e28160a1229ac3cae0b9e192f3", "title": "Accurate vertical road profile estimation using v-disparity map and dynamic programming", "text": "Detecting obstacles on the road is crucial for the advanced driver assistance systems. Obstacle detection on the road can be greatly facilitated if we have a vertical road profile. Therefore, in this paper, we present a novel method that can estimate an accurate vertical road profile of the scene from the stereo images. Unlike conventional stereo-based road profile estimation methods that heavily rely on a parametric model of the road surface, our method can obtain a road profile for an arbitrary complicated road. To this end, an energy function that includes the stereo matching fidelity and spatio-temporal smoothness of the road profile is presented, and thus the road profile is extracted by maximizing the energy function via dynamic programming. The experimental results demonstrate the effectiveness of the proposed method."}
{"_id": "9b9c8419c237ceb2cabeb7778dd425317fc0095f", "title": "Position control of Robotino mobile robot using fuzzy logic", "text": "The appearance of fuzzy logic theory led to new trends in robotics field and solves various typical problems. In fact, fuzzy logic is a good alternative for increasing the capabilities of autonomous mobile robots in an unknown dynamic environment by integrating human experience. The paper presents the design and implementation of the position control using fuzzy logic for an omni directional mobile robot. The position control of the mobile robot was tested remotely using the Festo Robotino platform and Matlab environment. The obstacle avoidance and mobile robot odometry were also studied in this work."}
{"_id": "e911518c14dffe2ce9d25a5bdd8d0bf20df1069f", "title": "Phase III clinical trial of thalidomide plus dexamethasone compared with dexamethasone alone in newly diagnosed multiple myeloma: a clinical trial coordinated by the Eastern Cooperative Oncology Group.", "text": "PURPOSE\nTo determine if thalidomide plus dexamethasone yields superior response rates compared with dexamethasone alone as induction therapy for newly diagnosed multiple myeloma.\n\n\nPATIENTS AND METHODS\nPatients were randomly assigned to receive thalidomide plus dexamethasone or dexamethasone alone. Patients in arm A received thalidomide 200 mg orally for 4 weeks; dexamethasone was administered at a dose of 40 mg orally on days 1 to 4, 9 to 12, and 17 to 20. Cycles were repeated every 4 weeks. Patients in arm B received dexamethasone alone at the same schedule as in arm A.\n\n\nRESULTS\nTwo hundred seven patients were enrolled: 103 were randomly assigned to thalidomide plus dexamethasone and 104 were randomly assigned to dexamethasone alone; eight patients were ineligible. The response rate with thalidomide plus dexamethasone was significantly higher than with dexamethasone alone (63% v 41%, respectively; P = .0017). The response rate allowing for use of serum monoclonal protein levels when a measurable urine monoclonal protein was unavailable at follow-up was 72% v 50%, respectively. The incidence rates of grade 3 or higher deep vein thrombosis (DVT), rash, bradycardia, neuropathy, and any grade 4 to 5 toxicity in the first 4 months were significantly higher with thalidomide plus dexamethasone compared with dexamethasone alone (45% v 21%, respectively; P < .001). DVT was more frequent in arm A than in arm B (17% v 3%); grade 3 or higher peripheral neuropathy was also more frequent (7% v 4%, respectively).\n\n\nCONCLUSION\nThalidomide plus dexamethasone demonstrates significantly superior response rates in newly diagnosed myeloma compared with dexamethasone alone. However, this must be balanced against the greater toxicity seen with the combination."}
{"_id": "6ceacd889559cfcf0009e914d47f915167231846", "title": "The impact of visual attributes on online image diffusion", "text": "Little is known on how visual content affects the popularity on social networks, despite images being now ubiquitous on the Web, and currently accounting for a considerable fraction of all content shared. Existing art on image sharing focuses mainly on non-visual attributes. In this work we take a complementary approach, and investigate resharing from a mainly visual perspective. Two sets of visual features are proposed, encoding both aesthetical properties (brightness, contrast, sharpness, etc.), and semantical content (concepts represented by the images). We collected data from a large image-sharing service (Pinterest) and evaluated the predictive power of different features on popularity (number of reshares). We found that visual properties have low predictive power compared that of social cues. However, after factoring-out social influence, visual features show considerable predictive power, especially for images with higher exposure, with over 3:1 accuracy odds when classifying highly exposed images between very popular and unpopular."}
{"_id": "5b5ee84ac8ab484c33b3b152c9af5a0715217e53", "title": "Finding Structural Similarity in Time Series Data Using Bag-of-Patterns Representation", "text": "For more than one decade, time series similarity search has been given a great deal of attention by data mining researchers. As a result, many time series representations and distance measures have been proposed. However, most existing work on time series similarity search focuses on finding shape-based similarity. While some of the existing approaches work well for short time series data, they typically fail to produce satisfactory results when the sequence is long. For long sequences, it is more appropriate to consider the similarity based on the higher-level structures. In this work, we present a histogram-based representation for time series data, similar to the \u201cbag of words\u201d approach that is widely accepted by the text mining and information retrieval communities. We show that our approach outperforms the existing methods in clustering, classification, and anomaly detection on several real datasets."}
{"_id": "2343c9765be95ae7a156a75825321ba00940ff83", "title": "An Evaluation of Techniques for Grabbing and Manipulating Remote Objects in Immersive Virtual Environments", "text": "Grabbing and manipulating virtual objects is an important user interaction for immersive virtual environments. We present implementations and discussion of six techniques which allow manipulation of remote objects. A user study of these techniques was performed which revealed their characteristics and deficiencies, and led to the development of a new class of techniques. These hybrid techniques provide istinct advantages in terms of ease of use and efficiency because they consider the tasks of grabbing and manipulation separately. CR Categories and Subject Descr iptors: I.3.7 [Computer Graphics]:Three-Dimensional Graphics and Realism Virtual Reality; I.3.6 [Computer Graphics]:Methodology and Techniques Interaction Techniques."}
{"_id": "24be6d5e698e32a72b709cb3eb50f9da0f616027", "title": "Maladaptive and adaptive emotion regulation through music: a behavioral and neuroimaging study of males and females", "text": "Music therapists use guided affect regulation in the treatment of mood disorders. However, self-directed uses of music in affect regulation are not fully understood. Some uses of music may have negative effects on mental health, as can non-music regulation strategies, such as rumination. Psychological testing and functional magnetic resonance imaging (fMRI) were used explore music listening strategies in relation to mental health. Participants (n = 123) were assessed for depression, anxiety and Neuroticism, and uses of Music in Mood Regulation (MMR). Neural responses to music were measured in the medial prefrontal cortex (mPFC) in a subset of participants (n = 56). Discharge, using music to express negative emotions, related to increased anxiety and Neuroticism in all participants and particularly in males. Males high in Discharge showed decreased activity of mPFC during music listening compared with those using less Discharge. Females high in Diversion, using music to distract from negative emotions, showed more mPFC activity than females using less Diversion. These results suggest that the use of Discharge strategy can be associated with maladaptive patterns of emotional regulation, and may even have long-term negative effects on mental health. This finding has real-world applications in psychotherapy and particularly in clinical music therapy."}
{"_id": "4c648fe9b7bfd25236164333beb51ed364a73253", "title": "Presentation Attack Detection Methods for Face Recognition Systems: A Comprehensive Survey", "text": "The vulnerability of face recognition systems to presentation attacks (also known as direct attacks or spoof attacks) has received a great deal of interest from the biometric community. The rapid evolution of face recognition systems into real-time applications has raised new concerns about their ability to resist presentation attacks, particularly in unattended application scenarios such as automated border control. The goal of a presentation attack is to subvert the face recognition system by presenting a facial biometric artifact. Popular face biometric artifacts include a printed photo, the electronic display of a facial photo, replaying video using an electronic display, and 3D face masks. These have demonstrated a high security risk for state-of-the-art face recognition systems. However, several presentation attack detection (PAD) algorithms (also known as countermeasures or antispoofing methods) have been proposed that can automatically detect and mitigate such targeted attacks. The goal of this survey is to present a systematic overview of the existing work on face presentation attack detection that has been carried out. This paper describes the various aspects of face presentation attacks, including different types of face artifacts, state-of-the-art PAD algorithms and an overview of the respective research labs working in this domain, vulnerability assessments and performance evaluation metrics, the outcomes of competitions, the availability of public databases for benchmarking new PAD algorithms in a reproducible manner, and finally a summary of the relevant international standardization in this field. Furthermore, we discuss the open challenges and future work that need to be addressed in this evolving field of biometrics."}
{"_id": "990c5f709007ffc72234fc9f491065c891bf888f", "title": "Deep Fusion Net for Multi-atlas Segmentation: Application to Cardiac MR Images", "text": "Atlas selection and label fusion are two major challenges in multi-atlas segmentation. In this paper, we propose a novel deep fusion net for better solving these challenges. Deep fusion net is a deep architecture by concatenating a feature extraction subnet and a non-local patchbased label fusion (NL-PLF) subnet in a single network. This network is trained end-to-end for automatically learning deep features achieving optimal performance in a NL-PLF framework. The learned deep features are further utilized in defining a similarity measure for atlas selection. Experimental results on Cardiac MR images for left ventricular segmentation demonstrate that our approach is effective both in atlas selection and multi-atlas label fusion, and achieves state of the art in performance."}
{"_id": "2788637dafcdc6c440b1ab91756c601523ee45ca", "title": "A New Distance Measure for Face Recognition System", "text": "This paper proposes a new powerful distance measure called Normalized Unmatched Points (NUP). This measure can be used in a face recognition system to discriminate facial images. It works by counting the number of unmatched pixels between query and database images. A face recognition system has been proposed which makes use of this proposed distance measure for taking the decision on matching. This system has been tested on four publicly available databases, viz. ORL, YALE, BERN and CALTECH databases. Experimental results show that the proposed measure achieves recognition rates more than 98.66% for the first five likely matched faces. It is observed that the NUP distance measure performs better than other existing similar variants on these databases."}
{"_id": "a38e54b0425a6074c53f0d0d365158613b4edea2", "title": "Fusing Iris, Palmprint and Fingerprint in a Multi-biometric Recognition System", "text": "This paper presents a trimodal biometric recognition system based on iris, palmprint and fingerprint. Wavelet transform and Gabor filter are used to extract features in different scales and orientations from iris and palmprint. Minutiae extraction and alignment is used for fingerprint matching. Six different fusion algorithms including score-based, rank-based and decision-based methods are used to combine the results of three modules. We also propose a new rank-based fusion algorithm Maximum Inverse Rank (MIR) which is robust with respect to variations in scores and also bad ranking from a module. CASIA datasets for iris, palmprint and fingerprint are used in this study. The experiments show the effectiveness of our fusion method and our trimodal biometric recognition system in comparison to existing multi-modal recognition systems."}
{"_id": "f3f050cee9bc594ac12801b9fb700330918bf42a", "title": "Dermoscopy in differentiating palmar syphiloderm from palmar papular psoriasis.", "text": "Palmar syphiloderm is one of the most common presentations of secondary syphilis and its recognition is of utmost importance in order to promptly identify such a disease and initiate appropriate workup/management. However, the differential diagnosis with palmar papular psoriasis often poses some difficulties, with consequent possible diagnostic errors/delays and prescription of improper therapies. In this report, we underline the role of dermoscopy as a supportive tool to facilitate the non-invasive recognition of palmar syphiloderm and its distinction from palmar papular psoriasis."}
{"_id": "5ec89f73a8d1e817ebf654f91318d28c9cfebead", "title": "Semantically Guided Depth Upsampling", "text": "We present a novel method for accurate and efficient upsampling of sparse depth data, guided by high-resolution imagery. Our approach goes beyond the use of intensity cues only and additionally exploits object boundary cues through structured edge detection and semantic scene labeling for guidance. Both cues are combined within a geodesic distance measure that allows for boundary-preserving depth interpolation while utilizing local context. We model the observed scene structure by locally planar elements and formulate the upsampling task as a global energy minimization problem. Our method determines globally consistent solutions and preserves fine details and sharp depth boundaries. In our experiments on several public datasets at different levels of application, we demonstrate superior performance of our approach over the state-of-the-art, even for very sparse measurements."}
{"_id": "20ba6183545930316f0db323ff40de6f86381c0e", "title": "A master-slave blockchain paradigm and application in digital rights management", "text": "Upon flaws of current blockchain platforms of heavyweight, large capacity of ledger, and time-consuming of synchronization of data, in this paper, we proposed a new paradigm of master-slave blockchain scheme (MSB) for pervasive computing that suitable for general PC, mobile device such as smart phones or PADs to participants in the working of mining and verification, in which we separated traditional blockchain model in 2 layer defined as master node layer and a series of slavery agents layer, then we proposed 2 approaches for partially computing model(P-CM) and non-computing of model(NCM) in the MSB blockchain, Finally large amounts of simulations manifest the proposed master-slave blockchain scheme is feasible, extendible and suitable for pervasive computing especially in the 5G generation environment, and can apply in the DRM-related applications."}
{"_id": "c35d770942f22b4a1595a225c518335e3acdd494", "title": "Analysis of GDI Technique for Digital Circuit Design", "text": "Power Dissipation of Digital circuits can be reduced by 15% 25% by using appropriate logic restructuring and also it can be reduced by 40% 60% by lowering switching activity. Here, Gate Diffusion Input Technique which is based on a Shannon expansion is analyzed for minimizing the power consumption and delay of static digital circuits. This technique as compare to other currently used logic design style, allows less power consumption and reduced propagation delay for low-power design of combinatorial digital circuits with minimum number of transistors. In this paper, basic building blocks of digital system and few combinational circuits are analyzed using GDI and other CMOS techniques. All circuits are designed at 180nm technology in CADENCE and simulate using VIRTUOSO SPECTRE simulator at 100 MHz frequency. Comparative analysis has been done among GDI and other parallel design styles for designing ripple adder, CLA adder and bit magnitude comparator. Simulation result shows GDI technique saves 53. 3%, 55. 6% and 75. 6% power in ripple adder, CLA adder and bit magnitude comparator respectively as compare to CMOS. Also delay is reduced with 25. 2%, 3. 4% and 6. 9% as compare to CMOS. Analysis conclude that GDI is revolutionary high speed and low power consumption technique."}
{"_id": "115e07fd6a7910895e7a3d81ed3121d89262b40b", "title": "Cooperative Inverse Reinforcement Learning", "text": "For an autonomous system to be helpful to humans and to pose no unwarranted risks, it needs to align its values with those of the humans in its environment in such a way that its actions contribute to the maximization of value for the humans. We propose a formal definition of the value alignment problem as cooperative inverse reinforcement learning (CIRL). A CIRL problem is a cooperative, partialinformation game with two agents, human and robot; both are rewarded according to the human\u2019s reward function, but the robot does not initially know what this is. In contrast to classical IRL, where the human is assumed to act optimally in isolation, optimal CIRL solutions produce behaviors such as active teaching, active learning, and communicative actions that are more effective in achieving value alignment. We show that computing optimal joint policies in CIRL games can be reduced to solving a POMDP, prove that optimality in isolation is suboptimal in CIRL, and derive an approximate CIRL algorithm."}
{"_id": "925cc228f463ef493b3ab036b659df0d292bb087", "title": "On the Folly of Rewarding A, While Hoping for B", "text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles."}
{"_id": "5be817083fbefe6bc087ec5b5069da92d5f98b88", "title": "Network Intrusion Detection System using attack behavior classification", "text": "Intrusion Detection Systems (IDS) have become a necessity in computer security systems because of the increase in unauthorized accesses and attacks. Intrusion Detection is a major component in computer security systems that can be classified as Host-based Intrusion Detection System (HIDS), which protects a certain host or system and Network-based Intrusion detection system (NIDS), which protects a network of hosts and systems. This paper addresses Probes attacks or reconnaissance attacks, which try to collect any possible relevant information in the network. Network probe attacks have two types: Host Sweep and Port Scan attacks. Host Sweep attacks determine the hosts that exist in the network, while port scan attacks determine the available services that exist in the network. This paper uses an intelligent system to maximize the recognition rate of network attacks by embedding the temporal behavior of the attacks into a TDNN neural network structure. The proposed system consists of five modules: packet capture engine, preprocessor, pattern recognition, classification, and monitoring and alert module. We have tested the system in a real environment where it has shown good capability in detecting attacks. In addition, the system has been tested using DARPA 1998 dataset with 100% recognition rate. In fact, our system can recognize attacks in a constant time."}
{"_id": "13c0c418df650ad94ac368c81e2133ec9e166381", "title": "Mid-level deep pattern mining", "text": "Mid-level visual element discovery aims to find clusters of image patches that are both representative and discriminative. In this work, we study this problem from the prospective of pattern mining while relying on the recently popularized Convolutional Neural Networks (CNNs). Specifically, we find that for an image patch, activation extracted from the first fully-connected layer of a CNN have two appealing properties which enable its seamless integration with pattern mining. Patterns are then discovered from a large number of CNN activations of image patches through the well-known association rule mining. When we retrieve and visualize image patches with the same pattern (See Fig. 1), surprisingly, they are not only visually similar but also semantically consistent. We apply our approach to scene and object classification tasks, and demonstrate that our approach outperforms all previous works on mid-level visual element discovery by a sizeable margin with far fewer elements being used. Our approach also outperforms or matches recent works using CNN for these tasks. Source code of the complete system is available online."}
{"_id": "d98fa0e725ebc6e69a16219de9ccf90c07175998", "title": "Broadband Reflectarray Antenna Using Subwavelength Elements Based on Double Square Meander-Line Rings", "text": "A linearly polarized broadband reflectarray is presented employing a novel single layer subwavelength phase shifting element. The size of the element is a fifth of a wavelength at the center frequency of 10 GHz and the element consists of double concentric square rings of meander lines. By changing the length of the meander line, a 420\u00b0 phase variation range is achieved at the center frequency. This characteristic makes the proposed configuration unique, as most of the reported subwavelength reflectarray elements can only realize a phase range far less than 360\u00b0. In addition, the slope of the phase response remains almost constant from 9 to 11 GHz, demonstrating a broadband property. A $48 \\times 48$-element reflectarray antenna is simulated, fabricated, and measured. Good agreement is obtained between simulated and measured results. A measured 1.5-dB gain bandwidth of 18% and 56.5% aperture efficiency is achieved."}
{"_id": "37d0f68ed0d6a1293d260cc2a43be462b553c0f6", "title": "Does Computer-Assisted Learning Improve Learning Outcomes ? Evidence from a Randomized Experiment in Migrant Schools in Beijing", "text": "The\t\r education\t\r of\t\r the\t\r poor\t\r and\t\r disadvantaged\t\r population\t\r has\t\r been\t\r a\t\r long-\u00ad\u2010standing challenge\t\r to\t\r the\t\r education\t\r system\t\r in\t\r both\t\r developed\t\r and\t\r developing\t\r countries. Although\t\r computer-\u00ad\u2010assisted\t\r learning\t\r (CAL)\t\r has\t\r been\t\r considered\t\r one\t\r alternative\t\r to improve\t\r learning\t\r outcomes\t\r in\t\r a\t\r cost-\u00ad\u2010effective\t\r way,\t\r the\t\r empirical\t\r evidence\t\r of\t\r its\t\r impacts on\t\r improving\t\r learning\t\r outcomes\t\r is\t\r mixed.\t\r This\t\r paper\t\r intends\t\r to\t\r explore\t\r the\t\r nature\t\r of the\t\r effects\t\r of\t\r CAL\t\r on\t\r student\t\r academic\t\r and\t\r non-\u00ad\u2010academic\t\r outcomes\t\r for\t\r underserved populations\t\r in\t\r a\t\r developing\t\r country.\t\r To\t\r meet\t\r this\t\r goal,\t\r we\t\r exploit\t\r a\t\r randomized\t\r field experiment\t\r of\t\r a\t\r CAL\t\r program\t\r involving\t\r over\t\r 4000\t\r third-\u00ad\u2010grade\t\r students,\t\r mostly\t\r from poor\t\r migrant\t\r families,\t\r in\t\r 43\t\r migrant\t\r schools\t\r in\t\r Beijing.\t\r The\t\r main\t\r intervention\t\r is\t\r a\t\r math CAL\t\r program\t\r that\t\r is\t\r held\t\r out\t\r of\t\r regular\t\r school\t\r hours.\t\r The\t\r program\t\r is\t\r tailored\t\r to\t\r the regular\t\r school\t\r math\t\r curriculum\t\r and\t\r is\t\r remedial\t\r in\t\r nature.\t\r Our\t\r results\t\r show\t\r that,\t\r the\t\r CAL program\t\r improved\t\r the\t\r student\t\r standardized\t\r math\t\r scores\t\r by\t\r 0.14\t\r standard\t\r deviations and\t\r most\t\r of\t\r the\t\r program\t\r effect\t\r took\t\r place\t\r within\t\r two\t\r months\t\r after\t\r the\t\r start\t\r of\t\r the program.\t\r Low-\u00ad\u2010performing\t\r students\t\r and\t\r those\t\r with\t\r less-\u00ad\u2010educated\t\r parents\t\r benefited more\t\r from\t\r the\t\r program.\t\r Moreover,\t\r CAL\t\r also\t\r significantly\t\r increased\t\r the\t\r levels\t\r of\t\r self-\u00ad\u2010 efficacy\t\r of\t\r the\t\r students\t\r and\t\r their\t\r interest\t\r in\t\r learning.\t\r We\t\r observed\t\r at\t\r most\t\r a\t\r moderate program\t\r spillover\t\r in\t\r Chinese\t\r test\t\r scores.\t\r Our\t\r findings\t\r are\t\r robust\t\r to\t\r the\t\r Hawthorne effect\t\r and\t\r CAL\t\r program\t\r spillovers\t\r that\t\r might\t\r potentially\t\r bias\t\r the\t\r estimates\t\r of\t\r the program\t\r effects. Key\t\r Words:\t\r \t\r Education;\t\r Development;\t\r Computer\t\r Assisted\t\r Learning;\t\r Random Assignment;\t\r Test\t\r Scores;\t\r China;\t\r Migration JEL\t\r Codes:\t\r I20;\t\r I21;\t\r I28;\t\r O15"}
{"_id": "218acaf6ea938025ee99676606a2ac5f8dd888ed", "title": "A network forensics tool for precise data packet capture and replay in cyber-physical systems", "text": "Network data packet capture and replay capabilities are basic requirements for forensic analysis of faults and securityrelated anomalies, as well as for testing and development. Cyber-physical networks, in which data packets are used to monitor and control physical devices, must operate within strict timing constraints, in order to match the hardware devices' characteristics. Standard network monitoring tools are unsuitable for such systems because they cannot guarantee to capture all data packets, may introduce their own traffic into the network, and cannot reliably reproduce the original timing of data packets. Here we present a high-speed network forensics tool specifically designed for capturing and replaying data traffic in Supervisory Control and Data Acquisition systems. Unlike general-purpose \"packet capture\" tools it does not affect the observed network's data traffic and guarantees that the original packet ordering is preserved. Most importantly, it allows replay of network traffic precisely matching its original timing. The tool was implemented by developing novel user interface and back-end software for a special-purpose network interface card. Experimental results show a clear improvement in data capture and replay capabilities over standard network monitoring methods and general-purpose forensics solutions."}
{"_id": "3230af0ced496c3eb59e4abc4df15f2572b50d03", "title": "Meta-learning with backpropagation", "text": "This paper introduces gradient descent methods applied to meta-leaming (leaming how to leam) in Neural Networks. Meta-leaning has been of interest in the machine leaming field for decades because of its appealing applications to intelligent agents, non-stationary time series, autonomous robots, and improved leaming algorithms. Many previous neural network-based approaches toward meta-leaming have been based on evolutionary methods. We show how to use gradient descent for meta-leaming in recurrent neural networks. Based on previous work on FixedWeight Leaming Neural Networks, we hypothesize that any recurrent network topology and its corresponding leaming algorithm(s) is a potential meta-leaming system. We tested several recurrent neural network topologies and their corresponding forms of Backpropagation for their ability to meta-leam. One of our systems, based on the Long Short-Term Memory neural network developed a leaming algorithm that could leam any two-dimensional quadratic function (from a set of such functions} after only 30 training examples."}
{"_id": "fcaefc4836a276009f85a527f866b59611cbd9f1", "title": "Swarm-Based Spatial Sorting", "text": "Purpose \u2013 To present an algorithm for spatially sorting objects into an annular structure. Design/Methodology/Approach \u2013 A swarm-based model that requires only stochastic agent behaviour coupled with a pheromone-inspired \u201cattraction-repulsion\u201d mechanism. Findings \u2013 The algorithm consistently generates high-quality annular structures, and is particularly powerful in situations where the initial configuration of objects is similar to those observed in nature. Research limitations/implications \u2013 Experimental evidence supports previous theoretical arguments about the nature and mechanism of spatial sorting by insects. Practical implications \u2013 The algorithm may find applications in distributed robotics. Originality/value \u2013 The model offers a powerful minimal algorithmic framework, and also sheds further light on the nature of attraction-repulsion algorithms and underlying natural processes."}
{"_id": "5d263170dd6f4132da066645ac4c352e783f6471", "title": "A High Efficiency MAC Protocol for WLANs: Providing Fairness in Dense Scenarios", "text": "Collisions are a main cause of throughput degradation in wireless local area networks. The current contention mechanism used in the IEEE 802.11 networks is called carrier sense multiple access with collision avoidance CSMA/CA. It uses a binary exponential backoff technique to randomize each contender attempt of transmitting, effectively reducing the collision probability. Nevertheless, CSMA/CA relies on a random backoff that while effective and fully decentralized, in principle is unable to completely eliminate collisions, therefore degrading the network throughput as more contenders attempt to share the channel. To overcome these situations, carrier sense multiple access with enhanced collision avoidance CSMA/ECA is able to create a collision-free schedule in a fully decentralized manner using a deterministic backoff after successful transmissions. Hysteresis and fair share are two extensions of CSMA/ECA to support a large number of contenders in a collision-free schedule. CSMA/ECA offers better throughput than CSMA/CA and short-term throughput fairness. This paper describes CSMA/ECA and its extensions. In addition, it provides the first evaluation results of CSMA/ECA with non-saturated traffic, channel errors, and its performance when coexisting with CSMA/CA nodes. Furthermore, it describes the effects of imperfect clocks over CSMA/ECA and presents a mechanism to leverage the impact of channel errors and the addition/withdrawal of nodes over collision-free schedules. Finally, the experimental results on throughput and lost frames from a CSMA/ECA implementation using commercial hardware and open-source firmware are presented."}
{"_id": "0f00a04c4a8c92e070b50ab411df4cd31d2cbe97", "title": "Face recognition with one training image per person", "text": "At present there are many methods that could deal well with frontal view face recognition. However, most of them cannot work well when there is only one training image per person. In this paper, an extension of the eigenface technique, i.e. (PC)A, is proposed. (PC)A combines the original face image with its horizontal and vertical projections and then performs principal component analysis on the enriched version of the image. It requires less computational cost than the standard eigenface technique and experimental results show that on a gray-level frontal view face database where each person has only one training image, (PC)A achieves 3%-5% higher accuracy than the standard eigenface technique through using 10%-15% fewer eigenfaces."}
{"_id": "44d811b066187865cce71d9acecdda8c3f9c8eee", "title": "Travel time estimation of a path using sparse trajectories", "text": "In this paper, we propose a citywide and real-time model for estimating the travel time of any path (represented as a sequence of connected road segments) in real time in a city, based on the GPS trajectories of vehicles received in current time slots and over a period of history as well as map data sources. Though this is a strategically important task in many traffic monitoring and routing systems, the problem has not been well solved yet given the following three challenges. The first is the data sparsity problem, i.e., many road segments may not be traveled by any GPS-equipped vehicles in present time slot. In most cases, we cannot find a trajectory exactly traversing a query path either. Second, for the fragment of a path with trajectories, they are multiple ways of using (or combining) the trajectories to estimate the corresponding travel time. Finding an optimal combination is a challenging problem, subject to a tradeoff between the length of a path and the number of trajectories traversing the path (i.e., support). Third, we need to instantly answer users' queries which may occur in any part of a given city. This calls for an efficient, scalable and effective solution that can enable a citywide and real-time travel time estimation. To address these challenges, we model different drivers' travel times on different road segments in different time slots with a three dimension tensor. Combined with geospatial, temporal and historical contexts learned from trajectories and map data, we fill in the tensor's missing values through a context-aware tensor decomposition approach. We then devise and prove an object function to model the aforementioned tradeoff, with which we find the most optimal concatenation of trajectories for an estimate through a dynamic programming solution. In addition, we propose using frequent trajectory patterns (mined from historical trajectories) to scale down the candidates of concatenation and a suffix-tree-based index to manage the trajectories received in the present time slot. We evaluate our method based on extensive experiments, using GPS trajectories generated by more than 32,000 taxis over a period of two months. The results demonstrate the effectiveness, efficiency and scalability of our method beyond baseline approaches."}
{"_id": "b8084d5e193633462e56f897f3d81b2832b72dff", "title": "DeepID3: Face Recognition with Very Deep Neural Networks", "text": "The state-of-the-art of face recognition has been significantly advanced by the emergence of deep learning. Very deep neural networks recently achieved great success on general object recognition because of their superb learning capacity. This motivates us to investigate their effectiveness on face recognition. This paper proposes two very deep neural network architectures, referred to as DeepID3, for face recognition. These two architectures are rebuilt from stacked convolution and inception layers proposed in VGG net [10] and GoogLeNet [16] to make them suitable to face recognition. Joint face identification-verification supervisory signals are added to both intermediate and final feature extraction layers during training. An ensemble of the proposed two architectures achieves 99.53% LFW face verification accuracy and 96.0% LFW rank-1 face identification accuracy, respectively. A further discussion of LFW face verification result is given in the end."}
{"_id": "7b93d0791cdf8342be6c398db831321759180d86", "title": "Incredible high performance SAW resonator on novel multi-layerd substrate", "text": "High Q and low TCF are elemental characteristics for SAW resonator, in order to realize high performance filters and duplexers coping with new LTE frequency bands. Authors focus on the acoustic energy confinement in SAW propagation surface and electrical characteristics have been investigated theoretically and experimentally. A new multi-layered structure substrate have realized incredible high performances such as both three times Q and one fifth TCF compared with conventional structure. Band25 duplexer using the new structure has been developed successfully with low loss and good temperature stability."}
{"_id": "36604c12e02c8146f8cb486f02f7ba9545004669", "title": "Reverse Classification Accuracy: Predicting Segmentation Performance in the Absence of Ground Truth", "text": "When integrating computational tools, such as automatic segmentation, into clinical practice, it is of utmost importance to be able to assess the level of accuracy on new data and, in particular, to detect when an automatic method fails. However, this is difficult to achieve due to the absence of ground truth. Segmentation accuracy on clinical data might be different from what is found through cross validation, because validation data are often used during incremental method development, which can lead to overfitting and unrealistic performance expectations. Before deployment, performance is quantified using different metrics, for which the predicted segmentation is compared with a reference segmentation, often obtained manually by an expert. But little is known about the real performance after deployment when a reference is unavailable. In this paper, we introduce the concept of reverse classification accuracy (RCA) as a framework for predicting the performance of a segmentation method on new data. In RCA, we take the predicted segmentation from a new image to train a reverse classifier, which is evaluated on a set of reference images with available ground truth. The hypothesis is that if the predicted segmentation is of good quality, then the reverse classifier will perform well on at least some of the reference images. We validate our approach on multi-organ segmentation with different classifiers and segmentation methods. Our results indicate that it is indeed possible to predict the quality of individual segmentations, in the absence of ground truth. Thus, RCA is ideal for integration into automatic processing pipelines in clinical routine and as a part of large-scale image analysis studies."}
{"_id": "fcdf8fee148179f7bf26a8254cb82c86321811d2", "title": "Understanding Individual Neuron Importance Using Information Theory", "text": "In this work, we characterize the outputs of individual neurons in a trained feed-forward neural network by entropy, mutual information with the class variable, and a class selectivity measure based on Kullback-Leibler divergence. By cumulatively ablating neurons in the network, we connect these information-theoretic measures to the impact their removal has on classification performance on the test set. We observe that, looking at the neural network as a whole, none of these measures is a good indicator for classification performance, thus confirming recent results by Morcos et al. However, looking at specific layers separately, both mutual information and class selectivity are positively correlated with classification performance. We thus conclude that it is ill-advised to compare these measures across layers, and that different layers may be most appropriately characterized by different measures. We then discuss pruning neurons from neural networks to reduce computational complexity of inference. Drawing from our results, we perform pruning based on information-theoretic measures on a fully connected feed-forward neural network with two hidden layers trained on MNIST dataset and compare the results to a recently proposed pruning method. We furthermore show that the common practice of re-training after pruning can partly be obviated by a surgery step called bias balancing, without incurring significant performance degradation."}
{"_id": "3b2e70869975c3c27c2338788b67371e758cc221", "title": "FLEXIBLE MANUFACTURING SYSTEMS MODELLING AND PERFORMANCE EVALUATION USING AUTOMOD", "text": "In recent times flexible manufacturing systems emerged as a powerful technology to meet the continuous changing customer demands. Increase in the performance of flexible manufacturing systems is expected as a result of integration of the shop floor activities such as machine and vehicle scheduling. The authors made an attempt to integrate machine and vehicle scheduling with an objective to minimize the makespan using Automod. Automod is a discrete event simulation package used to model and simulate a wide variety of issues in automated manufacturing systems. The key issues related to the design and operation of automated guided vehicles such as flow path layout, number of vehicles and traffic control problems are considered in the study. The performance measures like throughput, machine and vehicle utilization are studied for different job dispatching and vehicle assignment rules in different flexible manufacturing system configurations. (Received in August 2010, accepted in April 2011. This paper was with the authors 1 month for 2 revisions.)"}
{"_id": "2ab9b5fb21b3e4f1de78494380135df535dd75e0", "title": "Fusing Social Media Cues: Personality Prediction from Twitter and Instagram", "text": "Incorporating users\u2019 personality traits has shown to be instrumental in many personalized retrieval and recommender systems. Analysis of users\u2019 digital traces has become an important resource for inferring personality traits. To date, the analysis of users\u2019 explicit and latent characteristics is typically restricted to a single social networking site (SNS). In this work, we propose a novel method that integrates text, image, and users\u2019 meta features from two different SNSs: Twitter and Instagram. Our preliminary results indicate that the joint analysis of users\u2019 simultaneous activities in two popular SNSs seems to lead to a consistent decrease of the prediction errors for each personality trait."}
{"_id": "ada07d84bf881daa4a7e692670e61ad766f692f3", "title": "Current sharing in three-phase LLC interleaved resonant converter", "text": "In this paper, a novel approach for multi-phase interleaved LLC resonant converter is presented. The proposed solution, based on the use of three LLC modules with star connection of transformer primary windings, allows a drastic reduction of the output current ripple and consequently of the output filter capacitor size. Differently from other multi-phase solutions, that are greatly susceptible to resonant components' tolerance causing current imbalance, the proposed topology exhibits an inherent current sharing capability. Moreover, a closed-loop phase-shift control is introduced to additionally compensate for current mismatch and completely balance the current supplied by each module. The benefit of such solution on the reduction of output current ripple and the phase-shift control interaction and effect on load-step variations are also investigated. Measurements on a prototype are added to simulations as validation of the assertions and proposals."}
{"_id": "8914b7cba355d22425f22a2fa5b109ba82806f45", "title": "Ratiometric Artifact Reduction in Low Power Reflective Photoplethysmography", "text": "This paper presents effective signal-processing techniques for the compensation of motion artifacts and ambient light offsets in a reflective photoplethysmography sensor suitable for wearable applications. A ratiometric comparison of infrared (IR) and red absorption characteristics cancels out noise that is multiplicative in nature and amplitude modulation of pulsatile absorption signals enables rejection of additive noise. A low-power, discrete-time pulse-oximeter platform is used to capture IR and red photoplethysmograms so that the data used for analysis have noise levels representative of what a true body sensor network device would experience. The proposed artifact rejection algorithm is designed for real-time implementation with a low-power microcontroller while being robust enough to compensate for varying levels in ambient light as well as reducing the effects of motion-induced artifacts. The performance of the system is illustrated by its ability to extract a typical plethysmogram heart-rate waveform since the sensor is subjected to a range of physical disturbances."}
{"_id": "d5083647572808fcea765d4721c59d3a42e3980c", "title": "A learning environment for augmented reality mobile learning", "text": "There are many tools that enable the development of the augmented reality (AR) activities, some of them can even create and generate an AR experience where the incorporation of 3D objects is simple. However AR tools used in education are different from the general tools limited to the reproduction of virtual content. The purpose of this paper is to present a learning environment based on augmented reality, which can be used both by teachers to develop quality AR educational resources and by students to acquire knowledge in an area. Common problems teachers have in applying AR have been taken into account by producing an authoring tool for education which includes the following characteristics: (1) the ability to incorporate diverse multimedia resources such as video, sound, images and 3D objects in an easy manner, 2) the ability to incorporate tutored descriptions into elements which are being displayed (thus, the student is provided with an additional information, description or narrative about the resource, while the content gets adapted and personalized), (3) the possibility for the teacher to incorporate multiple choice questions (MCQ) into the virtual resource (useful for instant feedback to students on their understanding, it can be useful for the student to know what are the most important points of that issue and for the teacher to assess whether the student distinguishes different concepts) and (4) a library of virtual content where all resources are available in a simple and transparent way for any user. In this study ARLE is used to add AR technologies into notes or books created by the teacher, thereby supplementing both the theoretical and practical content without any programming skills needed on the designers' behalf. In addition to presenting system architecture and the examples of its educational use, a survey concerning use of AR amongst teachers in Spain has been conducted."}
{"_id": "0a802c57c83655a1f2221d135c7bd6a1cbdd06fe", "title": "Qualitative Simulation", "text": "Qualitative simulation is a key inference process in qualitative causal reasoning. However, the precise meaning of the different proposals and their relation with differential equations is often unclear. In this paper, we present a precise definition of qualitative structure and behavior descriptions as abstractions of differential equations and continuously differentiable functions. We present a new algorithm for qualitative simulation that generalizes the best features of existing algorithms, and allows direct comparisons among alternate approaches. Starting with a set of constraints abstracted from a differential equation, we prove that the OSIM algorithm is guaranteed to produce a qualitative behavior corresponding to any solution to the original equation. We also show that any qualitative simulation algorithm will sometimes produce spurious qualitative behaviors: ones which do not correspond to any mechanism satisfying the given constraints. These observations suggest specific types of care that must be taken in designing applications of qualitative causal reasoning systems, and in constructing and validating a knowledge base of mechanism descriptions."}
{"_id": "4491bc44446ed65d19bfd41320ecf2aed39d190b", "title": "Anticipatory affect: neural correlates and consequences for choice.", "text": "'Anticipatory affect' refers to emotional states that people experience while anticipating significant outcomes. Historically, technical limitations have made it difficult to determine whether anticipatory affect influences subsequent choice. Recent advances in the spatio-temporal resolution of functional magnetic resonance imaging, however, now allow researchers to visualize changes in neural activity seconds before choice occurs. We review evidence that activation in specific brain circuits changes during anticipation of monetary incentives, that this activation correlates with affective experience and that activity in these circuits may influence subsequent choice. Specifically, an activation likelihood estimate meta-analysis of cued response studies indicates that nucleus accumbens (NAcc) activation increases during gain anticipation relative to loss anticipation, while anterior insula activation increases during both loss and gain anticipation. Additionally, anticipatory NAcc activation correlates with self-reported positive arousal, whereas anterior insula activation correlates with both self-reported negative and positive arousal. Finally, NAcc activation precedes the purchase of desirable products and choice of high-risk gambles, whereas anterior insula activation precedes the rejection of overpriced products and choice of low-risk gambles. Together, these findings support a neurally plausible framework for understanding how anticipatory affect can influence choice."}
{"_id": "9da1d06e9afe37b3692a102022f561e2b6b25eaf", "title": "Ernest: Efficient Performance Prediction for Large-Scale Advanced Analytics", "text": "Recent workload trends indicate rapid growth in the deployment of machine learning, genomics and scientific workloads on cloud computing infrastructure. However, efficiently running these applications on shared infrastructure is challenging and we find that choosing the right hardware configuration can significantly improve performance and cost. The key to address the above challenge is having the ability to predict performance of applications under various resource configurations so that we can automatically choose the optimal configuration. Our insight is that a number of jobs have predictable structure in terms of computation and communication. Thus we can build performance models based on the behavior of the job on small samples of data and then predict its performance on larger datasets and cluster sizes. To minimize the time and resources spent in building a model, we use optimal experiment design, a statistical technique that allows us to collect as few training points as required. We have built Ernest, a performance prediction framework for large scale analytics and our evaluation on Amazon EC2 using several workloads shows that our prediction error is low while having a training overhead of less than 5% for long-running jobs."}
{"_id": "914cfd6fb61d75608592164193f90776bef58a7e", "title": "Bootstrapping the Linked Data Web", "text": "Most knowledge sources on the Data Web were extracted from structured or semi-structured data. Thus, they encompass solely a small fraction of the information available on the document-oriented Web. In this paper, we present BOA, an iterative bootstrapping strategy for extracting RDF from unstructured data. The idea behind BOA is to use the Data Web as background knowledge for the extraction of natural language patterns that represent predicates found on the Data Web. These patterns are used to extract instance knowledge from natural language text. This knowledge is nally fed back into the Data Web, therewith closing the loop. We evaluate our approach on two data sets using DBpedia as background knowledge. Our results show that we can extract several thousand new facts in one iteration with very high accuracy. Moreover, we provide the rst repository of natural language representations of predicates found on the Data Web."}
{"_id": "befb418f58475b7ebafd1826ba2e60d354eecc80", "title": "Combined Spectral and Spatial Modeling of Corn Yield Based on Aerial Images and Crop Surface Models Acquired with an Unmanned Aircraft System", "text": "Precision Farming (PF) management strategies are commonly based on estimations of within-field yield potential, often derived from remotely-sensed products, e.g., Vegetation Index (VI) maps. These well-established means, however, lack important information, like crop height. Combinations of VI-maps and detailed 3D Crop Surface Models (CSMs) enable advanced methods for crop yield prediction. This work utilizes an Unmanned Aircraft System (UAS) to capture standard RGB imagery datasets for corn grain yield prediction at three earlyto mid-season growth stages. The imagery is processed into simple VI-orthoimages for crop/non-crop classification and 3D CSMs for crop height determination at different spatial resolutions. Three linear regression models are tested on their prediction ability using site-specific (i) unclassified mean heights, (ii) crop-classified mean heights and (iii) a combination of crop-classified mean heights with according crop coverages. The models show determination coefficients R of up to 0.74, whereas model (iii) performs best with imagery captured at the end of stem elongation and intermediate spatial resolution (0.04 m\u00b7px\u22121). Following these results, combined spectral and spatial modeling, based on aerial images and CSMs, proves to be a suitable method for mid-season corn yield prediction. Remote Sens. 2014, 11 10336"}
{"_id": "dc6e656ebdaf65c07f09ba93e6b1308f564978d5", "title": "You are what you eat? Vegetarianism, health and identity.", "text": "This paper examines the views of 'health vegetarians' through a qualitative study of an online vegetarian message board. The researcher participated in discussions on the board, gathered responses to questions from 33 participants, and conducted follow-up e-mail interviews with 18 of these participants. Respondents were predominantly from the United States, Canada and the UK. Seventy per cent were female, and ages ranged from 14 to 53 years, with a median of 26 years. These data are interrogated within a theoretical framework that asks, 'what can a vegetarian body do?' and explores the physical, psychic, social and conceptual relations of participants. This provides insights into the identities of participants, and how diet and identity interact. It is concluded that vegetarianism is both a diet and a bodily practice with consequences for identity formation and stabilisation."}
{"_id": "6b85beb00b43e6ea6203a0afa66f15f3cefb48c6", "title": "Digital Material Fabrication Using Mask-Image-Projection-based Stereolithography", "text": "Purpose \u2013 The purpose of this paper is to present a mask-image-projection-based Stereolithography process that can combine two base materials with various concentrations and structures to produce a solid object with desired material characteristics. Stereolithography is an additive manufacturing process in which liquid photopolymer resin is cross-linked and converted to solid. The fabrication of digital material requires frequent resin changes during the building process. The process presented in this paper attempts to address the related challenges in achieving such fabrication capability. Design/methodology/approach \u2013 A two-channel system design is presented for the multi-material mask-imageprojection-based Stereolithography process. In such a design, a coated thick film and linear motions in two axes are used to reduce the separation force of a cured layer. The material cleaning approach to thoroughly remove resin residue on built surfaces is presented for the developed process. Based on a developed testbed, experimental studies were conducted to verify the effectiveness of the presented process on digital material fabrication. Findings \u2013 The proposed two-channel system can reduce the separation force of a cured layer by an order of magnitude in the bottom-up projection system. The developed two-stage cleaning approach can effectively remove resin residue on built surfaces. Several multi-material designs have been fabricated to highlight the capability of the developed multi-material mask-image-projection-based Stereolithography process. Research limitations/implications \u2013 A proof-of-concept testbed has been developed. Its building speed and accuracy can be further improved. Our tests were limited to the same type of liquid resins. In addition, the removal of trapped air is a challenge in the presented process. Originality/value \u2013 This paper presents a novel and a pioneering approach towards digital material fabrication based on the Stereolithography process. This research contributes to the additive manufacturing development by significantly expanding the selection of base materials in fabricating solid objects with desired material characteristics."}
{"_id": "9425c683ce13c482201ab23dab29bc34e7046229", "title": "Calibrating Large-area Mask Projection Stereolithography for Its Accuracy and Resolution Improvements", "text": "Solid freeform fabrication (SFF) processes based on mask image projection such as digital micro-mirror devices (DMD) have the potential to be fast and inexpensive. More and more research and commercial systems have been developed based on such digital devices. However, a digital light processing (DLP) projector based on DMD has limited resolution and certain image blurring. In order to use a DLP projector in the large-area mask projection stereolithography, it is critical to plan mask images in order to achieve high accuracy and resolution. Based on our previous work on optimized pixel blending, we present a calibration method for capturing the non-uniformity of a projection image by a low cost off-the-shelf DLP projector. Our method is based on two calibration systems, a geometric calibration system that can calibrate the position, shape, size, and orientation of a pixel and an energy calibration system that can calibrate the light intensity of a pixel. Based on both results, the light intensity at various grayscale levels can be approximated for each pixel. Developing a library of such approximation functions is critical for the optimized pixel blending to generate a better mask image plan. Experimental results verify our calibration results."}
{"_id": "58a64cc6a1dd8269ab19b9de271e202ab3e6de92", "title": "Projection micro-stereolithography using digital micro-mirror dynamic mask", "text": "We present in this paper the development of a high-resolution projection micro-stereolithography (P SL) process by using the Digital Micromirror Device (DMDTM, Texas Instruments) as a dynamic mask. This unique technology provides a parallel fabrication of complex three-dimensional (3D) microstructures used for micro electro-mechanical systems (MEMS). Based on the understanding of underlying mechanisms, a process model has been developed with all critical parameters obtained from the experimental measurement. By coupling the experimental measurement and the process model, the photon-induced curing behavior of the resin has been quantitatively studied. The role o erty of the r n d \u00a9"}
{"_id": "9e0c729e1596cd3567945db84e440fbfdeb75d9a", "title": "A Layerless Additive Manufacturing Process based on CNC Accumulation", "text": "Most current additive manufacturing processes are layer-based, that is building a physical model layer-by-layer. By converting 3-dimensional geometry into 2-dimensional contours, the layer-based approach can dramatically simplify the process planning steps. However, there are also drawbacks associated with the layer-based approach such as inconsistent material properties between various directions. In a recent NSF workshop on additive manufacturing, it is suggested to investigate alternative non-layer based approaches. In this paper, we present an additive manufacturing process without planar layers. In the developed testbed, an additive tool based on a fiber optics cable and a UV-LED has been developed. By merging such tools inside a liquid resin tank, we demonstrate its capability of building various 2D and 3D structures. The technical challenges related to the development of such a process are discussed. Some potential applications including part repairing and building around inserts have also been demonstrated."}
{"_id": "0ab4c2b3a97de19f37df76662029ae5886e4eb22", "title": "Single\u2010cell sequencing maps gene expression to mutational phylogenies in PDGF\u2010 and EGF\u2010driven gliomas", "text": "Glioblastoma multiforme (GBM) is the most common and aggressive type of primary brain tumor. Epidermal growth factor (EGF) and platelet-derived growth factor (PDGF) receptors are frequently amplified and/or possess gain-of-function mutations in GBM However, clinical trials of tyrosine-kinase inhibitors have shown disappointing efficacy, in part due to intra-tumor heterogeneity. To assess the effect of clonal heterogeneity on gene expression, we\u00a0derived an approach to map single-cell expression profiles to\u00a0sequentially acquired mutations identified from exome sequencing. Using 288 single cells, we constructed high-resolution phylogenies of EGF-driven and PDGF-driven GBMs, modeling transcriptional kinetics during tumor evolution. Descending the phylogenetic tree of a PDGF-driven tumor corresponded to a progressive induction of an oligodendrocyte progenitor-like cell type, expressing pro-angiogenic factors. In contrast, phylogenetic analysis of an EGFR-amplified tumor showed an up-regulation of pro-invasive genes. An in-frame deletion in a specific dimerization domain of PDGF receptor correlates with an up-regulation of growth pathways in a proneural GBM and enhances proliferation when ectopically expressed in glioma cell lines. In-frame deletions in this domain are frequent in public GBM data."}
{"_id": "af8f8e292ab7b3a80cca31a9e12ef01d2608a9ca", "title": "A very-high output impedance current mirror for very-low voltage biomedical analog circuits", "text": "In this paper, we present the design of a new very-high output impedance CMOS current mirror with enhanced output voltage compliance. The proposed current mirror uses MOS current dividers to sample the output current and a feedback action is used to force it to be equal to the input current; yielding very high impedance with a very large output voltage range. The proposed implementation yields an increase of the output impedance by a factor of about gmro compared with that of the super-Wilson current mirror, thus offering a potential solution to mitigate the effect of the low output impedance of ultra-deep submicron CMOS transistors used in sub 1-V current mirrors and current sources. A NMOS version of the proposed current mirror circuit was implemented using STMicroelectronics 1-V 90-nm CMOS process and simulated using Spectre to validate its performance. The output current is mirrored with a transfer error lower than 1% down to an output voltage as low as 80 mV for an input current of 5 muA, and 111 mV when the input current is increased to 50 muA."}
{"_id": "2a9d09d8e2390c92cdaa5c8b98d6dd4cb394f638", "title": "Dune: Safe User-level Access to Privileged CPU Features", "text": "Dune is a system that provides applications with direct but safe access to hardware features such as ring protection, page tables, and tagged TLBs, while preserving the existing OS interfaces for processes. Dune uses the virtualization hardware in modern processors to provide a process, rather than a machine abstraction. It consists of a small kernel module that initializes virtualization hardware and mediates interactions with the kernel, and a user-level library that helps applications manage privileged hardware features. We present the implementation of Dune for 64bit x86 Linux. We use Dune to implement three userlevel applications that can benefit from access to privileged hardware: a sandbox for untrusted code, a privilege separation facility, and a garbage collector. The use of Dune greatly simplifies the implementation of these applications and provides significant performance advantages."}
{"_id": "015d1aabae32efe2c30dfa32d8ce71d01bcac9c5", "title": "Applications of approximation algorithms to cooperative games", "text": "The Internet, which is intrinsically a common playground for a large number of players with varying degrees of collaborative and sel sh motives, naturally gives rise to numerous new game theoretic issues. Computational problems underlying solutions to these issues, achieving desirable economic criteria, often turn out to be NP-hard. It is therefore natural to apply notions from the area of approximation algorithms to these problems. The connection is made more meaningful by the fact that the two areas of game theory and approximation algorithms share common methodology { both heavily use machinery from the theory of linear programming. Various aspects of this connection have been explored recently by researchers [8, 10, 15, 20, 21, 26, 27, 29]. In this paper we will consider the problem of sharing the cost of a jointly utilized facility in a \\fair\" manner. Consider a service providing company whose set of possible customers, also called users, is U . For each set S U C(S) denotes the cost incurred by the company to serve the users in S. The function C is known as the cost function. For concreteness, assume that the company broadcasts news of common interest, such as nancial news, on the net. Each user, i, has a utility, u0i, for receiving the news. This utility u 0 i is known only to user i. User i enjoys a bene t of u0i xi if she gets the news at the price xi. If she does not get the news then her bene t is 0. Each user is assumed to be sel sh, and hence in order to maximize bene t, may misreport her utility as some other number, say ui. For the rest of the discussion, the utility of user i will mean the number ui. A cost sharing mechanism determines which users receive the broadcast and at what price. The mechanism is strategyproof if the dominant strategy of each user is to reveal the"}
{"_id": "182b53b6823605f2a9b6fa6135227a303493b4c4", "title": "Automatic generation of destination maps", "text": "Destination maps are navigational aids designed to show anyone within a region how to reach a location (the destination). Hand-designed destination maps include only the most important roads in the region and are non-uniformly scaled to ensure that all of the important roads from the highways to the residential streets are visible. We present the first automated system for creating such destination maps based on the design principles used by mapmakers. Our system includes novel algorithms for selecting the important roads based on mental representations of road networks, and for laying out the roads based on a non-linear optimization procedure. The final layouts are labeled and rendered in a variety of styles ranging from informal to more formal map styles. The system has been used to generate over 57,000 destination maps by thousands of users. We report feedback from both a formal and informal user study, as well as provide quantitative measures of success."}
{"_id": "08f4d8e7626e55b7c4ffe1fd12eb034bc8022a43", "title": "Aspect Extraction with Automated Prior Knowledge Learning", "text": "Aspect extraction is an important task in sentiment analysis. Topic modeling is a popular method for the task. However, unsupervised topic models often generate incoherent aspects. To address the issue, several knowledge-based models have been proposed to incorporate prior knowledge provided by the user to guide modeling. In this paper, we take a major step forward and show that in the big data era, without any user input, it is possible to learn prior knowledge automatically from a large amount of review data available on the Web. Such knowledge can then be used by a topic model to discover more coherent aspects. There are two key challenges: (1) learning quality knowledge from reviews of diverse domains, and (2) making the model fault-tolerant to handle possibly wrong knowledge. A novel approach is proposed to solve these problems. Experimental results using reviews from 36 domains show that the proposed approach achieves significant improvements over state-of-the-art baselines."}
{"_id": "e340a3b93204c030a8b4db2c24a9ef2628cde140", "title": "Application Protocols and Wireless Communication for IoT: A Simulation Case Study Proposal", "text": "The current Internet of Things (IoT) solutions require support at different network layers, from higher level applications to lower level media-based support. This paper presents some of the main application requirements for IoT, characterizing architecture, Quality of Service (QoS) features, security mechanisms, discovery service resources and web integration options and the protocols that can be used to provide them (e.g. CoAP, XMPP, DDS, MQTT-SN, AMQP). As examples of lower-level requirements and protocols, several wireless network characteristics (e.g. ZigBee, Z-Wave, BLE, LoRaWAN, SigFox, IEEE 802.11af, NB-IoT) are presented. The variety of possible applications scenarios and the heterogeneity of enabling technologies combined with a large number of sensors and devices, suggests the need for simulation and modeling tactics to describe how the previous requirements can be met. As a potential solution, the creation of simulation models and the usage of the OMNET++ simulation tool to enable meaningful IoT simulation is discussed. The analysis of the behavior of IoT applications is proposed for two use cases: Wireless Sensor Networks (WSN) for home and industrial automation, and Low Power Wide Area (LPWA) networks for smart meters, smart buildings, and smart cities."}
{"_id": "405872b6c6c1a53c2ede41ccb7c9de6d4207d9be", "title": "Integrating Surface and Abstract Features for Robust Cross-Domain Chinese Word Segmentation", "text": "Current character-based approaches are not robust for cross domain Chin ese word segmentation. In this paper, we alleviate this problem by deriving a novel enhanced ch aracterbased generative model with a new abstract aggregate candidate-feature, which indicates if th given candidate prefers the corresponding position-tag of the longest dictionary matching wo rd. Since the distribution of the proposed feature is invariant across domains, our m del thus possesses better generalization ability. Open tests on CIPS-SIGHAN-2010 show that the enhance d gen rative model achieves robust cross-domain performance for various OOV cov erage rates and obtains the best performance on three out of four domains. The enhanced gen erative model is then further integrated with a discriminative model which also utilizes dictionary information . This integrated model is shown to be either superior or comparable to all other models repo rted in the literatur e on every domain of this task."}
{"_id": "724e3c4e98bc9ac3281d5aef7d53ecfd4233c3fc", "title": "Neural Networks and Graph Algorithms with Next-Generation Processors", "text": "The use of graphical processors for distributed computation revolutionized the field of high performance scientific computing. As the Moore's Law era of computing draws to a close, the development of non-Von Neumann systems: neuromorphic processing units, and quantum annealers; again are redefining new territory for computational methods. While these technologies are still in their nascent stages, we discuss their potential to advance computing in two domains: machine learning, and solving constraint satisfaction problems. Each of these processors utilize fundamentally different theoretical models of computation. This raises questions about how to best use them in the design and implementation of applications. While many processors are being developed with a specific domain target, the ubiquity of spin-glass models and neural networks provides an avenue for multi-functional applications. This provides hints at the future infrastructure needed to integrate many next-generation processing units into conventional high-performance computing systems."}
{"_id": "ce484d7a889fb3b0fa455086af0bfa453c5ec3db", "title": "Identification of extract method refactoring opportunities for the decomposition of methods", "text": "The extraction of a code fragment into a separate method is one of the most widely performed refactoring activities, since it allows the decomposition of large and complex methods and can be used in combination with other code transformations for fixing a variety of design problems. Despite the significance of Extract Method refactoring towards code quality improvement, there is limited support for the identification of code fragments with distinct functionality that could be extracted into new methods. The goal of our approach is to automatically identify Extract Method refactoring opportunities which are related with the complete computation of a given variable (complete computation slice) and the statements affecting the state of a given object (object state slice). Moreover, a set of rules regarding the preservation of existing odule decomposition dependences is proposed that exclude refactoring opportunities corresponding to slices whose extraction could possibly cause a change in program behavior. The proposed approach has been evaluated regarding its ability to capture slices of code implementing a distinct functionality, its ability to resolve existing design flaws, its impact on the cohesion of the decomposed and extracted methods, and its ability to preserve program behavior. Moreover, precision and recall have been computed employing the refactoring depe opportunities found by in"}
{"_id": "e7922d53216a4c234f601049ec3326a6ea5d5c7c", "title": "WHY PEOPLE STAY : USING JOB EMBEDDEDNESS TO PREDICT VOLUNTARY TURNOVER", "text": "A new construct, entitled job embeddedness, is introduced. Assessing factors from on and off the job, it includes an individual\u2019s (a) links to other people, teams and groups, (b) perception of their fit with their job, organization and community and (c) what they say they would have to sacrifice if they left their job. A measure of job embeddedness is developed with two samples. The results show that job embeddedness predicts the key outcomes of both intent to leave and voluntary turnover, and explains significant incremental variance over and above job satisfaction, organizational commitment, job alternatives and job search. Implications for theory and practice are discussed."}
{"_id": "6c93f95929fa900bb2eafbd915d417199bc0a9dc", "title": "Real-Time Biologically Inspired Action Recognition from Key Poses Using a Neuromorphic Architecture", "text": "Intelligent agents, such as robots, have to serve a multitude of autonomous functions. Examples are, e.g., collision avoidance, navigation and route planning, active sensing of its environment, or the interaction and non-verbal communication with people in the extended reach space. Here, we focus on the recognition of the action of a human agent based on a biologically inspired visual architecture of analyzing articulated movements. The proposed processing architecture builds upon coarsely segregated streams of sensory processing along different pathways which separately process form and motion information (Layher et al., 2014). Action recognition is performed in an event-based scheme by identifying representations of characteristic pose configurations (key poses) in an image sequence. In line with perceptual studies, key poses are selected unsupervised utilizing a feature-driven criterion which combines extrema in the motion energy with the horizontal and the vertical extendedness of a body shape. Per class representations of key pose frames are learned using a deep convolutional neural network consisting of 15 convolutional layers. The network is trained using the energy-efficient deep neuromorphic networks (Eedn) framework (Esser et al., 2016), which realizes the mapping of the trained synaptic weights onto the IBM Neurosynaptic System platform (Merolla et al., 2014). After the mapping, the trained network achieves real-time capabilities for processing input streams and classify input images at about 1,000 frames per second while the computational stages only consume about 70 mW of energy (without spike transduction). Particularly regarding mobile robotic systems, a low energy profile might be crucial in a variety of application scenarios. Cross-validation results are reported for two different datasets and compared to state-of-the-art action recognition approaches. The results demonstrate, that (I) the presented approach is on par with other key pose based methods described in the literature, which select key pose frames by optimizing classification accuracy, (II) compared to the training on the full set of frames, representations trained on key pose frames result in a higher confidence in class assignments, and (III) key pose representations show promising generalization capabilities in a cross-dataset evaluation."}
{"_id": "22b65eebec33b89ec912054b4c4ec3d963960ab0", "title": "Hopelessness Depression : A Theory-Based Subtype of Depression", "text": "We present a revision of the 1978 reformulated theory of helplessness and depression and call it the hopelessness theory of depression. Although the 1978 reformulation has generated a vast amount of empirical work on depression over the past 10 years and recently has been evaluated as a model of depression, we do not think that it presents a clearly articulated theory of depression. We build on the skeletal logic of the 1978 statement and (a) propose a hypothesized subtype of depression\u2014 hopelessness depression, (b) introduce hopelessness as a proximal sufficient cause of the symptoms of hopelessness depression, (c) deemphasize causal attributions because inferred negative consequences and inferred negative characteristics about the self are also postulated to contribute to the formation of hopelessness and, in turn, the symptoms of hopelessness depression, and (d) clarify the diathesisstress and causal mediation components implied, but not explicitly articulated, in the 1978 statement. We report promising findings for the hopelessness theory and outline the aspects that still need to be tested."}
{"_id": "9b8d134417c5db4a6c3dce5784a0407c2d97495d", "title": "SSS: a hybrid architecture applied to robot navigation", "text": "This paper describes a new three layer architecture, SSS, for robot control. It combines a servo-control layer, a \u00d2subsumption\" layer, and a symbolic layer in a way that allows the advantages of each technique to be fully exploited. The key to this synergy is the interface between the individual subsystems. We describe how to build situation recognizers that bridge the gap between the servo and subsumption layers, and event detectors that link the subsumption and symbolic layers. The development of such a combined system is illustrated by a fully implemented indoor navigation example. The resulting real robot, called \u00d2TJ\", is able automatically map office building environments and smoothly navigate through them at the rapid speed of 2.6 feet per second."}
{"_id": "8de1c724a42d204c0050fe4c4b4e81a675d7f57c", "title": "Deep Face Recognition: A Survey", "text": "Face recognition made tremendous leaps in the last five years with a myriad of systems proposing novel techniques substantially backed by deep convolutional neural networks (DCNN). Although face recognition performance sky-rocketed using deep-learning in classic datasets like LFW, leading to the belief that this technique reached human performance, it still remains an open problem in unconstrained environments as demonstrated by the newly released IJB datasets. This survey aims to summarize the main advances in deep face recognition and, more in general, in learning face representations for verification and identification. The survey provides a clear, structured presentation of the principal, state-of-the-art (SOTA) face recognition techniques appearing within the past five years in top computer vision venues. The survey is broken down into multiple parts that follow a standard face recognition pipeline: (a) how SOTA systems are trained and which public data sets have they used; (b) face preprocessing part (detection, alignment, etc.); (c) architecture and loss functions used for transfer learning (d) face recognition for verification and identification. The survey concludes with an overview of the SOTA results at a glance along with some open issues currently overlooked by the community."}
{"_id": "3371575d11fbf83577b5adf2a0994c1306aebd09", "title": "Beyond the right hemisphere: brain mechanisms mediating vocal emotional processing", "text": "Vocal perception is particularly important for understanding a speaker's emotional state and intentions because, unlike facial perception, it is relatively independent of speaker distance and viewing conditions. The idea, derived from brain lesion studies, that vocal emotional comprehension is a special domain of the right hemisphere has failed to receive consistent support from neuroimaging. This conflict can be reconciled if vocal emotional comprehension is viewed as a multi-step process with individual neural representations. This view reveals a processing chain that proceeds from the ventral auditory pathway to brain structures implicated in cognition and emotion. Thus, vocal emotional comprehension appears to be mediated by bilateral mechanisms anchored within sensory, cognitive and emotional processing systems."}
{"_id": "289bdc364e2b8b03d0e52609dc6665a5f9d056c4", "title": "Generating English from Abstract Meaning Representations", "text": "We present a method for generating English sentences from Abstract Meaning Representation (AMR) graphs, exploiting a parallel corpus of AMRs and English sentences. We treat AMR-to-English generation as phrase-based machine translation (PBMT). We introduce a method that learns to linearize tokens of AMR graphs into an English-like order. Our linearization reduces the amount of distortion in PBMT and increases generation quality. We report a Bleu score of 26.8 on the standard AMR/English test set."}
{"_id": "5a778a2a32f35a96f4dfb0f22d1415eff321e7ad", "title": "A Novel Distributed SDN-Secured Architecture for the IoT", "text": "Due to their rapid evolution, mobile devices demand for more dynamic and flexible networking services. A major challenges of future mobile networks is the increased mobile traffic. With the recent upcoming technologies of network programmability like Software-Defined Network (SDN), it may be integrated to create a new communication platform for Internet of Things (IoT). In this work, we present how to determine the effectiveness of an approach to build a new secured network architecture based on SDN and clusters. Our proposed scheme is a starting point for some experiments providing perspective over SDN deployment in a cluster environment. With this aim in mind, we suggest a routing protocol that manages routing tasks over Cluster-SDN. By using network virtualization and OpenFlow technologies to generate virtual nodes, we simulate a prototype system controlled by SDN. With our testbed, we are able to manage 500 things. We can analyze every OpenFlow messages and we have discovered that with a particular flow, the things can exchange information unlike the routing principle."}
{"_id": "eb60bcac9bf7668cc4318995fc8b9b7ada46c090", "title": "Logistic Regression and Collaborative Filtering for Sponsored Search Term Recommendation", "text": "Sponsored search advertising is largely based on bidding on individual terms. The richness of natural languages permits web searchers to express their information needs in myriad ways. Advertisers have difficulty discovering all the terms that are relevant to their products or services. We examine the performance of logistic regression and collaborative filtering models on two different data sources to predict terms relevant to a set of seed terms describing an advertiser\u2019s product or service."}
{"_id": "a54eae23ac844c3833b872edd40a571f2fc1b2f3", "title": "Removal of High-Density Salt-and-Pepper Noise in Images With an Iterative Adaptive Fuzzy Filter Using Alpha-Trimmed Mean", "text": "Suppression of impulse noise in images is an important problem in image processing. In this paper, we propose a novel adaptive iterative fuzzy filter for denoising images corrupted by impulse noise. It operates in two stages-detection of noisy pixels with an adaptive fuzzy detector followed by denoising using a weighted mean filter on the \u201cgood\u201d pixels in the filter window. Experimental results demonstrate the algorithm to be superior to state-of-the-art filters. The filter is also shown to be robust to very high levels of noise, retrieving meaningful detail at noise levels as high as 97%."}
{"_id": "f77f1d1d274042105cceebf17a57e680dfe1ce03", "title": "Ant colony optimization for routing and load-balancing: survey and new directions", "text": "Although an ant is a simple creature, collectively a colony of ants performs useful tasks such as finding the shortest path to a food source and sharing this information with other ants by depositing pheromone. In the field ofant colony optimization (ACO), models ofcollective intelligenceof ants are transformed into useful optimization techniques that find applications in computer networking. In this survey, the problem-solving paradigm of ACO is explicated and compared to traditional routing algorithms along the issues of routing information, routing overhead and adaptivity. The contributions of this survey include 1) providing a comparison and critique of the state-of-the-art approaches for mitigatingstagnation (a major problem in many ACO algorithms), 2) surveying and comparing three major research in applying ACO in routing and load-balancing, and 3) discussing new directions and identifying open problems. The approaches for mitigating stagnation discussed include:evaporation, aging, pheromone smoothingand limiting, privileged pheromone layingandpheromone-heuristic control . The survey on ACO in routing/load-balancing includes comparison and critique of ant-based control and its ramifications, AntNetand its extensions, as well as ASGAand SynthECA. Discussions on new directions include an ongoing work of the authors in applyingmultiple ant colony optimizationin load-balancing."}
{"_id": "9931b8ea6594b97c7dfca93936a2d95a38167046", "title": "It Makes Sense: A Wide-Coverage Word Sense Disambiguation System for Free Text", "text": "Word sense disambiguation (WSD) systems based on supervised learning achieved the best performance in SensEval and SemEval workshops. However, there are few publicly available open source WSD systems. This limits the use of WSD in other applications, especially for researchers whose research interests are not in WSD. In this paper, we present IMS, a supervised English all-words WSD system. The flexible framework of IMS allows users to integrate different preprocessing tools, additional features, and different classifiers. By default, we use linear support vector machines as the classifier with multiple knowledge-based features. In our implementation, IMS achieves state-of-the-art results on several SensEval and SemEval tasks."}
{"_id": "b4f943584acf7694cc369a42a75ba804f8493ed3", "title": "NASARI: a Novel Approach to a Semantically-Aware Representation of Items", "text": "The semantic representation of individual word senses and concepts is of fundamental importance to several applications in Natural Language Processing. To date, concept modeling techniques have in the main based their representation either on lexicographic resources, such as WordNet, or on encyclopedic resources, such as Wikipedia. We propose a vector representation technique that combines the complementary knowledge of both these types of resource. Thanks to its use of explicit semantics combined with a novel cluster-based dimensionality reduction and an effective weighting scheme, our representation attains state-of-the-art performance on multiple datasets in two standard benchmarks: word similarity and sense clustering. We are releasing our vector representations at http://lcl.uniroma1.it/nasari/."}
{"_id": "b54315a22b825e9ca1b59aa1d3fac98ea4925941", "title": "De-Conflated Semantic Representations", "text": "One major deficiency of most semantic representation techniques is that they usually model a word type as a single point in the semantic space, hence conflating all the meanings that the word can have. Addressing this issue by learning distinct representations for individual meanings of words has been the subject of several research studies in the past few years. However, the generated sense representations are either not linked to any sense inventory or are unreliable for infrequent word senses. We propose a technique that tackles these problems by de-conflating the representations of words based on the deep knowledge that can be derived from a semantic network. Our approach provides multiple advantages in comparison to the previous approaches, including its high coverage and the ability to generate accurate representations even for infrequent word senses. We carry out evaluations on six datasets across two semantic similarity tasks and report state-of-the-art results on most of them."}
{"_id": "fb166f1e77428a492ea869a8b79df275dd9669c2", "title": "Neural Sequence Learning Models for Word Sense Disambiguation", "text": "Word Sense Disambiguation models exist in many flavors. Even though supervised ones tend to perform best in terms of accuracy, they often lose ground to more flexible knowledge-based solutions, which do not require training by a word expert for every disambiguation target. To bridge this gap we adopt a different perspective and rely on sequence learning to frame the disambiguation problem: we propose and study in depth a series of end-to-end neural architectures directly tailored to the task, from bidirectional Long Short-Term Memory to encoder-decoder models. Our extensive evaluation over standard benchmarks and in multiple languages shows that sequence learning enables more versatile all-words models that consistently lead to state-of-the-art results, even against word experts with engineered features."}
{"_id": "0aa9b26b407a36ed62d19c8c1c1c6a26d75991af", "title": "Your Exploit is Mine: Automatic Shellcode Transplant for Remote Exploits", "text": "Developing a remote exploit is not easy. It requires a comprehensive understanding of a vulnerability and delicate techniques to bypass defense mechanisms. As a result, attackers may prefer to reuse an existing exploit and make necessary changes over developing a new exploit from scratch. One such adaptation is the replacement of the original shellcode (i.e., the attacker-injected code that is executed as the final step of the exploit) in the original exploit with a replacement shellcode, resulting in a modified exploit that carries out the actions desired by the attacker as opposed to the original exploit author. We call this a shellcode transplant. Current automated shellcode placement methods are insufficient because they over-constrain the replacement shellcode, and so cannot be used to achieve shellcode transplant. For example, these systems consider the shellcode as an integrated memory chunk and require that the execution path of the modified exploit must be same as the original one. To resolve these issues, we present ShellSwap, a system that uses symbolic tracing, with a combination of shellcode layout remediation and path kneading to achieve shellcode transplant. We evaluated the ShellSwap system on a combination of 20 exploits and 5 pieces of shellcode that are independently developed and different from the original exploit. Among the 100 test cases, our system successfully generated 88% of the exploits."}
{"_id": "8a1ff425abea99ca21bfdf9f6b7b7254e36e3242", "title": "Manufacturing Problem Solving in a Job Shop \u2014 Research in Layout Planning", "text": "For ensuring efficient operation of a job shop, it is important to minimize waste, which has no value addition to the final product. For a job shop, minimizing movement is considered as the highest priority for waste prevention. For this reason, the layout for a job shop should be designed in such a way to ensure the lowest possible cost for production by reducing nonvalue added activities, such as movement of work-in-process. An effective and efficient way of layout planning for a job shop is a key for solving movement inefficiencies and facilitating communication and interaction between workers and supervisors. This involves relocation of equipment and machinery to streamline materials flow. The primary objective of relocation is to avoid flow conflicts, reduce process time, and increase efficiency of labor usage. Proximity of the most frequently used machines minimizes the movement cost significantly, which eventually minimizes the cost of production. This paper describes the research done in process flow improvements in a job shop manufacturing steel components. The literature focused mainly on mathematical modeling with assumptions that are not applicable for a typical small-scale job shop operation. However, this was overcome by collecting material movement data over three months and analyzing the information using a From-To chart. By analyzing the chart, the actual loads between departments for the operation period were tabulated in available plant space. From this information, the inter-departmental flow was shown by a model. This provides the basic layout pattern, which was improved. A second step was to determine the cost of this layout by multiplying the material handling cost by the number of loads moved between each pair of departments. As a recommendation for solving the problem, two layout models have been developed for ensuring the lowest movement cost. Introduction Transportation is considered as one of the seven wastes for lean manufacturing, and effective layout planning is considered as a key to overcome this kind of waste. It is stated, \u201cDouble handling and excessive movements are likely to cause damage and deterioration with the distance of communication between processes\u201d [1]. Therefore, layout planning has clear impact with the quality and quantity of the final products by reducing waste and improving efficiency."}
{"_id": "11d063b26b99d63581bec6566a77f7751b4be2b3", "title": "High-power GaN MMIC PA Over 40\u20134000MHz", "text": "We report a high-performance GaN MMIC power amplifier operating from 40MHz to 4,000MHz. The MMIC achieved 80W pulsed (100us pulse width and 10% duty cycle) output power (P5dB) with 54% efficiency at 40MHz, 50W with about 30% efficiency across most of the mid band, and gradually decreases to 30W with 22% efficiency at 4000MHz. Power gain is 25dB across the 40-4000MHz band. This ultra wideband performance is achieved by both tailoring the device output impedance, and using a unique wide-band, circuit-matching topology. Detailed design techniques of both the device and the matching circuit will be presented."}
{"_id": "5a6eec6f2b3bdb918b400214fb9c41645eb81e0a", "title": "Team assembly mechanisms determine collaboration network structure and team performance.", "text": "Agents in creative enterprises are embedded in networks that inspire, support, and evaluate their work. Here, we investigate how the mechanisms by which creative teams self-assemble determine the structure of these collaboration networks. We propose a model for the self-assembly of creative teams that has its basis in three parameters: team size, the fraction of newcomers in new productions, and the tendency of incumbents to repeat previous collaborations. The model suggests that the emergence of a large connected community of practitioners can be described as a phase transition. We find that team assembly mechanisms determine both the structure of the collaboration network and team performance for teams derived from both artistic and scientific fields."}
{"_id": "1f6235cedd6b0023b0eec7e11c226e89b4515bd2", "title": "Internet of Things Business Models", "text": "Almost all businesses are aware of the potential gains that the Internet of Things (IoT) has to offer, they are unsure of how to approach it. This article proposes a business model that builds on Holler et al., (2014) [1]. The model consists of three dimensions: \u201cWho, Where, and Why\u201d. \u201cWho\u201d describes collaborating partners, which builds the \u201cValue Network\u201d. \u201cWhere\u201d describes sources of value co-creation rooted in the layer model of digitized objects, and \u201cWhy\u201d describes how partners benefit from collaborating within the value network. With the intention of addressing \u201cHow\u201d, the proposed framework has integrated the IoT strategy category, tactics, and value chain elements. The framework is then validated through the case studies of some successful players who are either the Awardees of the IoT Award 2014 or the ICT Award 2015 of Hong Kong."}
{"_id": "077492a77812a68c86b970557e97a452a6689427", "title": "Automatic 3D face reconstruction from single images or video", "text": "This paper presents a fully automated algorithm for reconstructing a textured 3D model of a face from a single photograph or a raw video stream. The algorithm is based on a combination of Support Vector Machines (SVMs) and a Morphable Model of 3D faces. After SVM face detection, individual facial features are detected using a novel regression- and classification-based approach, and probabilistically plausible configurations of features are selected to produce a list of candidates for several facial feature positions. In the next step, the configurations of feature points are evaluated using a novel criterion that is based on a Morphable Model and a combination of linear projections. To make the algorithm robust with respect to head orientation, this process is iterated while the estimate of pose is refined. Finally, the feature points initialize a model-fitting procedure of the Morphable Model. The result is a high resolution 3D surface model."}
{"_id": "d77d179169dc0354a31208cfa461267469215045", "title": "Polymer Nanoparticles for Smart Drug Delivery", "text": "In the recent decades, polymers are widely used as biomaterials due to their favorable properties such as good biocompatibility, easy design and preparation, a variety of structures and interesting bio-mimetic character. Especially in the field of smart drug delivery, polymer played a significant role because it can deliver therapeutic agents directly into the intended site of action, with superior efficacy. The ideal requirements for designing nano-particulate delivery system are to effectively be controlled particle size, surface character; enhance permeation, flexibility, solubility and release of therapeutically active agents in order to attain the target and specific activity at a predetermined rate and time. The smart drug delivery systems have been successfully made by the advances in polymer science in the bio-nano\u2010 technology field. Recently, these advances have been found in various medical applications for nano-scale structures in smart drug delivery. The smart drug delivery systems should possess some important feature such as pre-scheduled rate, self controlled, targeted, predetermined time and monitor the delivery. The smart drug delivery system enhances the polymer nanoparticle better stage to their therapy regimen. They are drug carriers of natural, semi-synthetic, and synthetic polymeric nature at the nano-scale to micro-scale range. The polymeric particles are collectively named as spheres and capsules. The most of the polymeric nanoparticles with surfactants offer stability of various forms of active drugs and have useful to smart release properties. There are numerous biological applications have been reported for the nano-scale to micro-scale sized particles, such as site-targeted, controlled, and enhanced bioavailability of hydrophobic drugs [1-4]. Due to the nanoparticles size the drugs have been targeting into various applications, such as, various cancers targeting has been shown to be promising [5]. Moreover, polymeric particles proved their effectiveness in stabilizing and protecting the drug molecules such as proteins, peptides, or DNA molecules from various environmental hazards degradation [2-4, 6, 7]. So these polymers are affording the potential for various protein and gene delivery. Numerous methods had been available to fabricate"}
{"_id": "20ff5fc3d1628db26a1d4936eaa7b0cdad8eeae8", "title": "Deep neural networks for recognizing online handwritten mathematical symbols", "text": "This paper presents application of deep learning to recognize online handwritten mathematical symbols. Recently various deep learning architectures such as Convolution neural network (CNN), Deep neural network (DNN) and Long short term memory (LSTM) RNN have been applied to fields such as computer vision, speech recognition and natural language processing where they have been shown to produce state-of-the-art results on various tasks. In this paper, we apply max-out-based CNN and BLSTM to image patterns created from online patterns and to the original online patterns, respectively and combine them. We also compare them with traditional recognition methods which are MRF and MQDF by carrying out some experiments on CROHME database."}
{"_id": "0955315509ac15bb4f825dbcd1e51423c3781ce4", "title": "The HAWKwood Database", "text": "We present a database consisting of wood pile images, which can be used as a benchmark to evaluate the performance of wood pile detection and surveying algorithms. We distinguish six database categories which can be used for different types of algorithms. Images of real and synthetic scenes are provided, which consist of 7655 images divided into 354 data sets. Depending on the category the data sets either include ground truth data or forestry specific measurements with which algorithms may be compared."}
{"_id": "6681c5cecb6efb15f170786e04e05fc77820be50", "title": "Temperature-aware microarchitecture: Modeling and implementation", "text": "With cooling costs rising exponentially, designing cooling solutions for worst-case power dissipation is prohibitively expensive. Chips that can autonomously modify their execution and power-dissipation characteristics permit the use of lower-cost cooling solutions while still guaranteeing safe temperature regulation. Evaluating techniques for this dynamic thermal management (DTM), however, requires a thermal model that is practical for architectural studies.This paper describes HotSpot, an accurate yet fast and practical model based on an equivalent circuit of thermal resistances and capacitances that correspond to microarchitecture blocks and essential aspects of the thermal package. Validation was performed using finite-element simulation. The paper also introduces several effective methods for DTM: \"temperature-tracking\" frequency scaling, \"migrating computation\" to spare hardware units, and a \"hybrid\" policy that combines fetch gating with dynamic voltage scaling. The latter two achieve their performance advantage by exploiting instruction-level parallelism, showing the importance of microarchitecture research in helping control the growth of cooling costs.Modeling temperature at the microarchitecture level also shows that power metrics are poor predictors of temperature, that sensor imprecision has a substantial impact on the performance of DTM, and that the inclusion of lateral resistances for thermal diffusion is important for accuracy."}
{"_id": "80898c89f32975f33e42a680a4d675df63a8a3e5", "title": "A 3 ppm 1.5 \u00d7 0.8 mm 2 1.0 \u00b5A 32.768 kHz MEMS-Based Oscillator", "text": "This paper describes the first 32 kHz low-power MEMS-based oscillator in production. The primary goal is to provide a small form-factor oscillator (1.5 \u00d7 0.8 mm 2 ) for use as a crystal replacement in space-constrained mobile devices. The oscillator generates an output frequency of 32.768 kHz and its binary divisors down to 1 Hz. The frequency stability over the industrial temperature range (-40 \u00b0C to 85 \u00b0C) is \u00b1100 ppm as an oscillator (XO) or \u00b13 ppm with optional calibration as a temperature compensated oscillator (TCXO). Supply currents are 0.9 \u03bcA for the XO and 1.0 \u03bcA for the TCXO at supply voltages from 1.4 V to 4.5 V. The MEMS resonator is a capacitively-transduced tuning fork at 524 kHz. The circuitry is fabricated in 180 nm CMOS and includes low power sustaining circuit, fractional-N PLL, temperature sensor, digital control, and low swing driver."}
{"_id": "982ffaf04a681c98c2d314a1100dd705a950850e", "title": "ITIL in small to medium-sized enterprises software companies: towards an implementation sequence", "text": "Information Technology Infrastructure Library, ITIL framework, is a set of comprehensive publications providing descriptive guidance on the management of IT processes, functions, roles and responsibilities related to Information Technology Service Management. However, and in spite of its repercussion and popularity, the ITIL framework does not suggest an implementation order for their processes. This decision constitutes the first challenge that an organization must overcome when starting an ITIL implementation, the enterprise size being one of the leading factors to be considered in the decision making process. In the scenario of Small and Medium Enterprises dedicated to producing software, this paper is devoted to investigating which processes are the most selected to start the implementation of ITIL in these organizations. This is done by means of two different instruments. Firstly, a systematic literature review on the topic and secondly, a survey conducted among experts and practitioners. Results show in both cases that Incident Management Process should be the first process when implementing ITIL framework."}
{"_id": "24a9e0ac9bbc708efd14512becc5c4514d8f042d", "title": "Efficiently inferring community structure in bipartite networks", "text": "Bipartite networks are a common type of network data in which there are two types of vertices, and only vertices of different types can be connected. While bipartite networks exhibit community structure like their unipartite counterparts, existing approaches to bipartite community detection have drawbacks, including implicit parameter choices, loss of information through one-mode projections, and lack of interpretability. Here we solve the community detection problem for bipartite networks by formulating a bipartite stochastic block model, which explicitly includes vertex type information and may be trivially extended to k-partite networks. This bipartite stochastic block model yields a projection-free and statistically principled method for community detection that makes clear assumptions and parameter choices and yields interpretable results. We demonstrate this model's ability to efficiently and accurately find community structure in synthetic bipartite networks with known structure and in real-world bipartite networks with unknown structure, and we characterize its performance in practical contexts."}
{"_id": "e5e4c3a891997f93f539c421cae73dab078cdb79", "title": "Fast Fourier Transforms for Nonequispaced Data", "text": "Fast Fourier Transforms for Nonequispaced Data Alok Dutt Yale University 1993 Two groups of algorithms are presented generalizing the fast Fourier transform (FFT) to the case of non-integer frequencies and nonequispaced nodes on the interval [-7r, 7r]. These schemes are based on combinations of certain analytical considerations with the classical fast Fourier transform, and generalize both the forward and backward FFTs. The number of arithmetic operations required by each of the algorithms is proportional to Nlog N + Nlog(I/e), where 6 is the desired precision of computations and N is the number of nodes. Several related algorithms are also presented, each of which utilizes a similar set of techniques from analysis and linear algebra. These include an efficient version of the Fast Multipole Method in one dimension and fast algorithms for the evaluation, integration and differentiation of Lagrange polynomial interpolants. Several numerical examples are used to illustrate the efficiency of the approach, and to compare the performances of the two sets of nonuniform FFT algorithms. The work of this author was supported in part by the Office of Naval Research under Grant N00014-89-J-1527 and in part by the National Science Foundation under Grant DMS9012751. Approved for public release: distribution is unlimited."}
{"_id": "4f1b7c199a02b8efc5e77b9a5b6283095fa038b5", "title": "Value of Information Systems and Products : Understanding the Users \u2019 Perspective and Values", "text": "Developers aim at providing value through their systems and products. However, value is not financial only, but depends on usage and users\u2019 perceptions of value. In this paper, we clarify the concept of value from the users\u2019 perspective and the role of user involvement in providing value. First, theories and approaches of psychology, marketing and human-computer interaction are reviewed. Secondly, the concept of \u2018user values\u2019 is suggested to clarify the concept of value from the user\u2019s point of view and a category framework of user values is presented to make them more concrete and easier to identify. Thirdly, the activities and methods for adopting user values in development work are discussed. The analysis of the literature shows that value has been considered in multiple ways in development. However, users\u2019 perspectives have received less attention. As a conclusion, we draw future research directions for value-centered design and propose that user involvement is essential in identifying user values, interpreting the practical meaning of the values and implementing them in development work."}
{"_id": "e80f6871f78da0e5c4ae7c07235db27cbadaffe0", "title": "Quadtree Convolutional Neural Networks", "text": "This paper presents a Quadtree Convolutional Neural Network (QCNN) for efficiently learning from image datasets representing sparse data such as handwriting, pen strokes, freehand sketches, etc. Instead of storing the sparse sketches in regular dense tensors, our method decomposes and represents the image as a linear quadtree that is only refined in the non-empty portions of the image. The actual image data corresponding to non-zero pixels is stored in the finest nodes of the quadtree. Convolution and pooling operations are restricted to the sparse pixels, leading to better efficiency in computation time as well as memory usage. Specifically, the computational and memory costs in QCNN grow linearly in the number of non-zero pixels, as opposed to traditional CNNs where the costs are quadratic in the number of pixels. This enables QCNN to learn from sparse images much faster and process high resolution images without the memory constraints faced by traditional CNNs. We study QCNN on four sparse image datasets for sketch classification and simplification tasks. The results show that QCNN can obtain comparable accuracy with large reduction in computational and memory costs."}
{"_id": "9a2b2bfa324041c1fc1e84598d6f14956c67c825", "title": "A 0 . 3-\u03bc m CMOS 8Gb / s 4-PAM Serial Link Transceiver", "text": "An 8-Gb/s 0.3-\u00b5m CMOS transceiver uses multilevel signaling (4-PAM) and transmit pre-shaping in combination with receive equalization to reduce ISI due to channel low-pass effects. High on-chip frequencies are avoided by multi-plexing and demultiplexing the data directly at the pads. Timing recovery takes advantage of a novel frequency acquisition scheme and a linear PLL with a loop bandwidth >30MHz, phase margin >48 and capture range of 20MHz without a frequency acquisition aid. The transmitted 8-Gbps data is successfully detected by the receiver after a 10-m coaxial cable. The 2mm x 2mm chip consumes 1.1W at 8Gbps with a 3-V supply."}
{"_id": "bbf70ffe55676b34c43b585e480e8343943aa328", "title": "SDN/NFV-enabled satellite communications networks: Opportunities, scenarios and challenges", "text": "In the context of next generation 5G networks, the satellite industry is clearly committed to revisit and revamp the role of satellite communications. As major drivers in the evolution of (terrestrial) fixed and mobile networks, Software Defined Networking (SDN) and Network Function Virtualisation (NFV) technologies are also being positioned as central technology enablers towards improved and more flexible integration of satellite and terrestrial segments, providing satellite network further service innovation and business agility by advanced network resources management techniques. Through the analysis of scenarios and use cases, this paper provides a description of the benefits that SDN/NFV technologies can bring into satellite communications towards 5G. Three scenarios are presented and analysed to delineate different potential improvement areas pursued through the introduction of SDN/NFV technologies in the satellite ground segment domain. Within each scenario, a number of use cases are developed to gain further insight into specific capabilities and to identify the technical challenges stemming from them."}
{"_id": "2b354f4ad32a03914dd432658150ab2419d4ff0f", "title": "Multi-Sensor Conflict Measurement and Information Fusion", "text": "In sensing applications where multiple sensors observe the same scene, fusing sensor outputs can provide improved results. However, if some of the sensors are providing lower quality outputs, e.g. when one or more sensors has a poor signal-tonoise ratio (SNR) and therefore provides very noisy data, the fused results can be degraded. In this work, a multi-sensor conflict measure is proposed which estimates multi-sensor conflict by representing each sensor output as interval-valued information and examines the sensor output overlaps on all possible n-tuple sensor combinations. The conflict is based on the sizes of the intervals and how many sensors output values lie in these intervals. In this work, conflict is defined in terms of how little the output from multiple sensors overlap. That is, high degrees of overlap mean low sensor conflict, while low degrees of overlap mean high conflict. This work is a preliminary step towards a robust conflict and sensor fusion framework. In addition, a sensor fusion algorithm is proposed based on a weighted sum of sensor outputs, where the weights for each sensor diminish as the conflict measure increases. The proposed methods can be utilized to (1) assess a measure of multi-sensor conflict, and (2) improve sensor output fusion by lessening weighting for sensors with high conflict. Using this measure, a simulated example is given to explain the mechanics of calculating the conflict measure, and stereo camera 3D outputs are analyzed and fused. In the stereo camera case, the sensor output is corrupted by additive impulse noise, DC offset, and Gaussian noise. Impulse noise is common in sensors due to intermittent interference, a DC offset a sensor bias or registration error, and Gaussian noise represents a sensor output with low SNR. The results show that sensor output fusion based on the conflict measure shows improved accuracy over a simple averaging fusion strategy."}
{"_id": "521c5092f7ff4e1fdbf607e2f54e8ce26b51bd27", "title": "Active Imitation Learning via Reduction to I.I.D. Active Learning", "text": "In standard passive imitation learning, the goal is to learn a target policy by passively observing full execution trajectories of it. Unfortunately, generating such trajectories can require substantial expert effort and be impractical in some cases. In this paper, we consider active imitation learning with the goal of reducing this effort by querying the expert about the desired action at individual states, which are selected based on answers to past queries and the learner\u2019s interactions with an environment simulator. We introduce a new approach based on reducing active imitation learning to i.i.d. active learning, which can leverage progress in the i.i.d. setting. Our first contribution, is to analyze reductions for both non-stationary and stationary policies, showing that the label complexity (number of queries) of active imitation learning can be substantially less than passive learning. Our second contribution, is to introduce a practical algorithm inspired by the reductions, which is shown to be highly effective in four test domains compared to a number of alternatives."}
{"_id": "3fe3343d1f908270f067acebc0463590d284abf3", "title": "Judgment under emotional certainty and uncertainty: the effects of specific emotions on information processing.", "text": "The authors argued that emotions characterized by certainty appraisals promote heuristic processing, whereas emotions characterized by uncertainty appraisals result in systematic processing. The 1st experiment demonstrated that the certainty associated with an emotion affects the certainty experienced in subsequent situations. The next 3 experiments investigated effects on processing of emotions associated with certainty and uncertainty. Compared with emotions associated with uncertainty, emotions associated with certainty resulted in greater reliance on the expertise of a source of a persuasive message in Experiment 2, more stereotyping in Experiment 3, and less attention to argument quality in Experiment 4. In contrast to previous theories linking valence and processing, these findings suggest that the certainty appraisal content of emotions is also important in determining whether people engage in systematic or heuristic processing."}
{"_id": "f26a8dcfbaf9f46c021c41a3545fcfa845660c47", "title": "Human Pose Regression by Combining Indirect Part Detection and Contextual Information", "text": "In this paper, we propose an end-to-end trainable regression approach for human pose estimation from still images. We use the proposed Soft-argmax function to convert feature maps directly to joint coordinates, resulting in a fully differentiable framework. Our method is able to learn heat maps representations indirectly, without additional steps of artificial ground truth generation. Consequently, contextual information can be included to the pose predictions in a seamless way. We evaluated our method on two very challenging datasets, the Leeds Sports Poses (LSP) and the MPII Human Pose datasets, reaching the best performance among all the existing regression methods and comparable results to the state-of-the-art detection based approaches."}
{"_id": "50e1460abd160b92b38f206553f7917cf6470324", "title": "Tachyon : Memory Throughput I / O for Cluster Computing Frameworks", "text": "As ever more big data computations start to be in-memory, I/O throughput dominates the running times of many workloads. For distributed storage, the read throughput can be improved using caching, however, the write throughput is limited by both disk and network bandwidth due to data replication for fault-tolerance. This paper proposes a new file system architecture to enable frameworks to both read and write reliably at memory speed, by avoiding synchronous data replication on writes."}
{"_id": "20ad0ba7e187e6f335a08c59a4e53da4e4b027ec", "title": "Automatic Acquisition of Hyponyms from Large Text Corpora", "text": "We describe a method for the automatic acquisition of the hyponymy lexical relation from unrestricted text. Two goals motivate the approach: (i) avoidance of the need for pre-encoded knowledge and (ii) applicability across a wide range of text. We identify a set of lexicosyntactic patterns that are easily recognizable, that occur frequently and across text genre boundaries, and that indisputably indicate the lexical relation of interest. We describe a method for discovering these patterns and suggest that other lexical relations will also be acquirable in this way. A subset of the acquisition algorithm is implemented and the results are used to augment and critique the structure of a large hand-built thesaurus. Extensions and applications to areas such as information retrieval are suggested."}
{"_id": "3d378bdbf3564e341bec66c1974899b01e82507f", "title": "Open domain event extraction from twitter", "text": "Tweets are the most up-to-date and inclusive stream of in- formation and commentary on current events, but they are also fragmented and noisy, motivating the need for systems that can extract, aggregate and categorize important events. Previous work on extracting structured representations of events has focused largely on newswire text; Twitter's unique characteristics present new challenges and opportunities for open-domain event extraction. This paper describes TwiCal-- the first open-domain event-extraction and categorization system for Twitter. We demonstrate that accurately extracting an open-domain calendar of significant events from Twitter is indeed feasible. In addition, we present a novel approach for discovering important event categories and classifying extracted events based on latent variable models. By leveraging large volumes of unlabeled data, our approach achieves a 14% increase in maximum F1 over a supervised baseline. A continuously updating demonstration of our system can be viewed at http://statuscalendar.com; Our NLP tools are available at http://github.com/aritter/ twitter_nlp."}
{"_id": "87086ba81c59a04d657107b7895d8ab38e4b6464", "title": "TwiNER: named entity recognition in targeted twitter stream", "text": "Many private and/or public organizations have been reported to create and monitor targeted Twitter streams to collect and understand users' opinions about the organizations. Targeted Twitter stream is usually constructed by filtering tweets with user-defined selection criteria e.g. tweets published by users from a selected region, or tweets that match one or more predefined keywords. Targeted Twitter stream is then monitored to collect and understand users' opinions about the organizations. There is an emerging need for early crisis detection and response with such target stream. Such applications require a good named entity recognition (NER) system for Twitter, which is able to automatically discover emerging named entities that is potentially linked to the crisis. In this paper, we present a novel 2-step unsupervised NER system for targeted Twitter stream, called TwiNER. In the first step, it leverages on the global context obtained from Wikipedia and Web N-Gram corpus to partition tweets into valid segments (phrases) using a dynamic programming algorithm. Each such tweet segment is a candidate named entity. It is observed that the named entities in the targeted stream usually exhibit a gregarious property, due to the way the targeted stream is constructed. In the second step, TwiNER constructs a random walk model to exploit the gregarious property in the local context derived from the Twitter stream. The highly-ranked segments have a higher chance of being true named entities. We evaluated TwiNER on two sets of real-life tweets simulating two targeted streams. Evaluated using labeled ground truth, TwiNER achieves comparable performance as with conventional approaches in both streams. Various settings of TwiNER have also been examined to verify our global context + local context combo idea."}
{"_id": "b62c81269e1d13bcae9c15db335887db990e5860", "title": "Generalized Expectation Criteria for Semi-Supervised Learning with Weakly Labeled Data", "text": "In this paper, we present an overview of generalized expectation criteria (GE), a simple, robust, scalable method for semi-supervised training using weakly -labeled data. GE fits model parameters by favoring models that match certain expectation constrai nts, such as marginal label distributions, on the unlabeled data. This paper shows how to apply generali z d expectation criteria to two classes of parametric models: maximum entropy models and condition al random fields. Experimental results demonstrate accuracy improvements over supervise d training and a number of other stateof-the-art semi-supervised learning methods for these mod els."}
{"_id": "351743198513dedd4f7d59b8d694fd15caa9a6c2", "title": "Towards Instrument Segmentation for Music Content Description: a Critical Review of Instrument Classification Techniques", "text": "A system capable of describing the musical content of any kind of soundfile or soundstream, as it is supposed to be done in MPEG7-compliant applications, should provide an account of the different moments where a certain instrument can be listened to. This segmentation according to instrument taxonomies must be solved with different strategies than segmentation according to perceptual features. In this paper we concentrate on reviewing the different techniques that have been so far proposed for automatic classification of musical instruments. Although the ultimate goal should be the segmentation of complex sonic mixtures, it is still far from being solved. Therefore, the practical approach is to reduce the scope of the classification systems to only deal with isolated, and out-of-context, sounds. There is an obvious tradeoff in endorsing this strategy: we gain simplicity and tractability, but we lose contextual and time-dependent cues that can be exploited as relevant features for classifying the sounds."}
{"_id": "e6c3c1ab62e14c5e23eca9c8db08a2c2e06e2469", "title": "Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems", "text": "Particle swarm optimization (PSO) is a population-based swarm intelligence algorithm that shares many similarities with evolutionary computation techniques. However, the PSO is driven by the simulation of a social psychological metaphor motivated by collective behaviors of bird and other social organisms instead of the survival of the fittest individual. Inspired by the classical PSO method and quantum mechanics theories, this work presents novel quantum-behaved PSO (QPSO) approaches using mutation operator with Gaussian probability distribution. The application of Gaussian mutation operator instead of random sequences in QPSO is a powerful strategy to improve the QPSO performance in preventing premature convergence to local optima. In this paper, new combinations of QPSO and Gaussian probability distribution are employed in well-studied continuous optimization problems of engineering design. Two case studies are described and evaluated in this work. Our results indicate that Gaussian QPSO approaches handle such problems efficiently in terms of precision and convergence and, in most cases, they outperform the results presented in the literature. 2009 Elsevier Ltd. All rights reserved."}
{"_id": "9f8bea863cdf2bb4c0fb118aa0d8b58c73c3fa54", "title": "Positive Youth Development , Participation in Community Youth Development Programs , and Community Contributions of Fifth-Grade Adolescents : Findings From the First Wave Of the 4-H Study of Positive Youth Development", "text": "The 4-H Study of Positive Youth Development (PYD), a longitudinal investigation of a diverse sample of 1,700 fifth graders and 1,117 of their parents, tests developmental contextual ideas linking PYD, youth contributions, and participation in community youth development (YD) programs, representing a key ecological asset. Using data from Wave 1 of the study, structural equation modeling procedures provided evidence for five first-order latent factors representing the \u201cFive Cs\u201d of PYD (competence, confidence, connection, character, and caring), and for their convergence on a second-order, PYD latent construct. A theoretical construct, youth \u201ccontribution,\u201d was also created and examined. Both PYD and YD program participation independently related to contribution. The importance of longitudinal analyses for extending the present results is discussed."}
{"_id": "e70f05f9a8b13e2933eee86a3bcebbc8403c672b", "title": "A Flexible AC Distribution System Device for a Microgrid", "text": "This paper presents a flexible ac distribution system device for microgrid applications. The device aims to improve the power quality and reliability of the overall power distribution system that the microgrid is connected to. The control design employs a new model predictive control algorithm which allows faster computational time for large power systems by optimizing the steady-state and the transient control problems separately. Extended Kalman filters are also employed for frequency tracking and to extract the harmonic spectra of the grid voltage and the load currents in the microgrid. The design concept is verified through different test case scenarios to demonstrate the capability of the proposed device and the results obtained are discussed."}
{"_id": "e08332bc8f664e3d55363b50d1b932fbfa717986", "title": "A Hierarchical Game Framework for Resource Management in Fog Computing", "text": "Supporting real-time and mobile data services, fog computing has been considered as a promising technology to overcome long and unpredicted delay in cloud computing. However, as resources in FNs are owned by independent users or infrastructure providers, the ADSSs cannot connect and access data services from the FNs directly, but can only request data service from the DSOs in the cloud. Accordingly, in fog computing, the DSOs are required to communicate with FNs and allocate resources from the FNs to the ADSSs. The DSOs provide virtualized data services to the ADSSs, and the FNs, motivated by the DSOs, provide data services in the physical network. Nevertheless, with fog computing added as the intermediate layer between the cloud and users, there are challenges such as the resource allocation in the virtualized network between the DSOs and ADSSs, the asymmetric information problem between DSOs and ADSSs, and the resource matching from the FNs to the ADSSs in the physical network. In this article, we propose a three-layer hierarchical game framework to solve the challenges in fog computing networks. In the proposed framework, we apply the Stackelberg sub-game for the interaction between DSOs and ADSSs, moral hazard modeling for the interaction between DSOs and FNs, and the student project allocation matching sub-game for the interaction between FNs and ADSSs. The purpose is to obtain stable and optimal utilities for each DSO, FN, and ADSS in a distributed fashion."}
{"_id": "b919bdba5cbafcb555b65ccbd5874b77f36c10e1", "title": "Social Learning Theory and Developmental Psychology : The Legacies of", "text": "Social learning theory began as an attempt by Robert Sears and others to meld psychoanalytic and stimulus-response learning theory into a comprehensive explanation of human behavior, drawing on the clinical richness of the former and the rigor of the latter. Albert Bandura abandoned the psychoanalytic and drive features of the approach, emphasizing instead cognitive and informationprocessing capacities that mediate social behavior. Both theories were intended as a general framework for the understanding of human behavior, and their developmental aspects remain to be worked out in detail. Nevertheless, Bandura has provided a strong theoretical beginning: The theory appears to be capable of accounting well for existing developmental data as well as guiding new investigation."}
{"_id": "a2e905bcb84a1f38523f0c95f29be05e2179019b", "title": "Compact Half Diamond Dual-Band Textile HMSIW On-Body Antenna", "text": "A novel wearable dual-band textile antenna, designed for optimal on-body performance in the 2.4 and 5.8 GHz Industrial, Scientific and Medical bands, is proposed. By using brass eye-lets and a combination of conducting and non-conductive textile materials, a half-mode substrate integrated waveguide cavity with ground plane is realized that is very compact and flexible, while still directing radiation away from the wearer. Additional miniaturization is achieved by adding a row of shorting vias and slots. Beside excellent free space performance in the 2.4 and 5.8 GHz bands, respectively, with measured impedance bandwidth of 4.9% and 5.1%, maximal measured free-space gain of 4.1 and 5.8 dBi, and efficiency of 72.8% and 85.6%, very stable on-body performance is obtained, with minimal frequency detuning when deploying the antenna on the human body and when bent around cylinders with radii of 75 and 40 mm. At 2.45 and 5.8 GHz, respectively, the measured on-body gain is 4.4 and 5.7 dBi, with sufficiently small calculated SAR values of 0.55 and 0.90 W/kg. These properties make the proposed antenna excellently suited for wearable on-body systems."}
{"_id": "f2217c96f4a628aac892b89bd1f0625b09b1c577", "title": "A machine learning approach for medication adherence monitoring using body-worn sensors", "text": "One of the most important challenges in chronic disease self-management is medication non-adherence, which has irrevocable outcomes. Although many technologies have been developed for medication adherence monitoring, the reliability and cost-effectiveness of these approaches are not well understood to date. This paper presents a medication adherence monitoring system by user-activity tracking based on wrist-band wearable sensors. We develop machine learning algorithms that track wrist motions in real-time and identify medication intake activities. We propose a novel data analysis pipeline to reliably detect medication adherence by examining single-wrist motions. Our system achieves an accuracy of 78.3% in adherence detection without need for medication pillboxes and with only one sensor worn on either of the wrists. The accuracy of our algorithm is only 7.9% lower than a system with two sensors that track motions of both wrists."}
{"_id": "1ce0260db2f3278c910c9e0eb518309f43885a91", "title": "Cost Effective Genetic Algorithm for Workflow Scheduling in Cloud Under Deadline Constraint", "text": "Cloud computing is becoming an increasingly admired paradigm that delivers high-performance computing resources over the Internet to solve the large-scale scientific problems, but still it has various challenges that need to be addressed to execute scientific workflows. The existing research mainly focused on minimizing finishing time (makespan) or minimization of cost while meeting the quality of service requirements. However, most of them do not consider essential characteristic of cloud and major issues, such as virtual machines (VMs) performance variation and acquisition delay. In this paper, we propose a meta-heuristic cost effective genetic algorithm that minimizes the execution cost of the workflow while meeting the deadline in cloud computing environment. We develop novel schemes for encoding, population initialization, crossover, and mutations operators of genetic algorithm. Our proposal considers all the essential characteristics of the cloud as well as VM performance variation and acquisition delay. Performance evaluation on some well-known scientific workflows, such as Montage, LIGO, CyberShake, and Epigenomics of different size exhibits that our proposed algorithm performs better than the current state-of-the-art algorithms."}
{"_id": "959536fe0fe4c278136777aa54d3c0032279ece3", "title": "Measuring #GamerGate: A Tale of Hate, Sexism, and Bullying", "text": "2009-2014 PhD in Computer Science, University of California Santa Barbara, Santa Barbara, CA. Dissertation title: \u201cStepping Up the Cybersecurity Game: Protecting Online Services from Malicious Activity\u201d 2014 MSc in Computer Science, University of California Santa Barbara, Santa Barbara, CA. 2006-2009 Laurea Specialistica in Computer Engineering (MSc equivalent), Universit\u00e0 degli Studi di Genova, Genova, Italy. Thesis title: \u201cA Distributed System for Intrusion Prevention\u201d 2003-2006 Laurea Triennale in Computer Engineering (BSc equivalent), Universit\u00e0 degli Studi di Genova, Genova, Italy. Thesis title: \u201cComputer Security in a Linux System\u201d (in Italian) 1998-2003 Liceo Classico A. D\u2019Oria, High School, Genova, Italy. Focus on humanities"}
{"_id": "b62309d46935fa19e604e7b8b3c03eb9c93defc8", "title": "A fast-transient LDO based on buffered flipped voltage follower", "text": "In this work, the analysis of the flipped voltage follower (FVF) based single-transistor-control (STC) LDO is given. Two evolved versions of FVF, cascaded FVF (CAFVF) and level shifted FVF (LSFVF), are studied. Then, a buffered FVF (BFVF) for LDO application is proposed, combining the virtues of both CAFVF and LSFVF structures. It alleviates the minimum loading requirement of FVF and the simulation result shows that it has faster transient response and better load regulation."}
{"_id": "be8a69da1ee8c63b567df197bf0afa1e2d46ffdc", "title": "Double hierarchy hesitant fuzzy linguistic term set and MULTIMOORA method: A case of study to evaluate the implementation status of haze controlling measures", "text": "In recent years, hesitant fuzzy linguistic term sets (HFLTSs) have been studied by many scholars and are becoming gradually mature. However, some shortcomings of HFLTS also emerged. To describe the complex linguistic terms or linguistic term sets more accurately and reasonably, in this paper, we introduce the novel concepts named double hierarchy linguistic term set (DHLTS) and double hierarchy hesitant fuzzy linguistic term set (DHHFLTS). The operational laws and properties of the DHHFLTSs are developed as well. Afterwards, we investigate the multiple criteria decision making model with double hierarchy hesitant fuzzy linguistic information. We develop a double hierarchy hesitant fuzzy linguistic MULTIMOORA (DHHFL-MULTIMOORA) method to solve it. Furthermore, we apply the DHHFL-MULTIMOORA method to deal with a practical case about selecting the optimal city in China by evaluating the implementation status of haze controlling measures. Some comparisons between the DHHFL-MULTIMOORA method and the hesitant fuzzy linguistic TOPSIS method are provided to show the advantages of the proposed method. \u00a9 2017 Elsevier B.V. All rights reserved. l p v t t s c e P t t g"}
{"_id": "53e3e04251b4b9d54b6a2f6cee4e7c89e4a978f3", "title": "Behavioral activation and inhibition systems and the severity and course of depression.", "text": "Theorists have proposed that depression is associated with abnormalities in the behavioral activation (BAS) and behavioral inhibition (BIS) systems. In particular, depressed individuals are hypothesized to exhibit deficient BAS and overactive BIS functioning. Self-reported levels of BAS and BIS were examined in 62 depressed participants and 27 nondepressed controls. Clinical functioning was assessed at intake and at 8-month follow-up. Relative to nondepressed controls, depressed participants reported lower BAS levels and higher BIS levels. Within the depressed group, lower BAS levels were associated with greater concurrent depression severity and predicted worse 8-month outcome. Levels of both BIS and BAS showed considerable stability over time and clinical state. Overall, results suggest that BAS dysregulation exacerbates the presentation and course of depressive illness."}
{"_id": "7d9b03aae8a4efb22f94488866bbba8448c631ef", "title": "Direct voltage control of DC-DC boost converters using model predictive control based on enumeration", "text": "This paper presents a model predictive control (MPC) approach for the dc-dc boost converter. Based on a hybrid model of the converter suitable for both continuous and discontinuous conduction mode an objective function is formulated, which is to be minimized. The proposed MPC scheme, utilized as a voltage-mode controller, achieves regulation of the output voltage to its reference, without requiring a subsequent current control loop. Simulation and experimental results are provided to demonstrate the merits of the proposed control methodology, which include fast transient response and robustness."}
{"_id": "4ad67ec5310f6149df6be2af686d85933539a063", "title": "A Logit Model of Brand Choice Calibrated on Scanner Data", "text": "Optical scanning of the Universal Product Code in supermarkets provides a new level of detail and completeness in household panel data and makes possible the construction of more comprehensive brand choice models than hitherto possible. A multinomial logit model calibrated on 32 weeks of purchases of regular ground coffee by 100 households shows high statistical significance for the explanatory variables of brand loyalty, size loyalty, presence/absence of store promotion, regular shelf price and promotional price cut. The model is quite parsimonious in that the coefficients of these variables are modeled to be the same for all coffee brand-sizes. Considering its parsimony, the calibrated model predicts surprisingly well the share of purchases by brand-size in a hold-out sample of 100 households over the 32 week calibration period and a subsequent 20 week forecast period. Discrepencies in prediction are conjectured to be due in part to missing variables. Three short term market response measures are calculated from the model: regular shelf price elasticity of share, percent share increase from a promotion with a median price cut, and promotional price cut elasticity of share. Responsiveness varies across brand-sizes in a systematic way with large share brand-sizes showing less responsiveness in percentage terms. On the basis of the model a quantitative picture emerges of groups of loyal customers who are relatively insensitive to marketing actions and a pool of switchers who are quite responsive."}
{"_id": "314aeefaf90a3322e9ca538c6b5f8a02fbb256bc", "title": "Global H\u221e Consensus of Multi-Agent Systems with Lipschitz Nonlinear Dynamics", "text": "Abstract: This paper addresses the global consensus problems of a class of nonlinear multi-agent systems with Lipschitz nonlinearity and directed communication graphs, by using a distributed consensus protocol based on the relative states of neighboring agents. A two-step algorithm is presented to construct a protocol, under which a Lipschitz multi-agent system without disturbances can reach global consensus for a strongly connected directed communication graph. Another algorithm is then given to design a protocol which can achieve global consensus with a guaranteed H\u221e performance for a Lipschitz multiagent system subject to external disturbances. The case with a leader-follower communication graph is also discussed. Finally, the effectiveness of the theoretical results is demonstrated through a network of single-link manipulators."}
{"_id": "3e08a3912ebe494242f6bcd772929cc65307129c", "title": "Few-Shot Image Recognition by Predicting Parameters from Activations", "text": "In this paper, we are interested in the few-shot learning problem. In particular, we focus on a challenging scenario where the number of categories is large and the number of examples per novel category is very limited, e.g. 1, 2, or 3. Motivated by the close relationship between the parameters and the activations in a neural network associated with the same category, we propose a novel method that can adapt a pre-trained neural network to novel categories by directly predicting the parameters from the activations. Zero training is required in adaptation to novel categories, and fast inference is realized by a single forward pass. We evaluate our method by doing few-shot image recognition on the ImageNet dataset, which achieves the state-of-the-art classification accuracy on novel categories by a significant margin while keeping comparable performance on the large-scale categories. We also test our method on the MiniImageNet dataset and it strongly outperforms the previous state-of-the-art methods."}
{"_id": "5c782aafeecd658558b64acacad18cbefba86f2e", "title": "Field test of autonomous loading operation by wheel loader", "text": "The authors have been conducting research on an autonomous system for loading operation by wheel loader. Experimental results at a field test site using full-size model (length: 6.1m) will be described in this paper. Basic structure of system consists of three sub systems: measuring and modeling of environment, task planning and motion control. The experimental operation includes four cycles of scooping and loading to dump truck. The experimental results prove that the developed system performed the autonomous operation smoothly and completed the mission."}
{"_id": "a14f69985e19456681bc874310e7166528637bed", "title": "Feature-Oriented Software Product Lines", "text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance."}
{"_id": "58fa283251f2f1fb5932cd9f5619b7299c29afba", "title": "DaTuM: Dynamic tone mapping technique for OLED display power saving based on video classification", "text": "The adoption of the latest OLED (organic light emitting diode) technology does not change the fact that screen is still one of the most energy-consuming modules in modern smartphones. In this work, we found that video streams from the same video category share many common power consumption features on OLED screens. Therefore, we are able to build a Hidden Markov Model (HMM) classifier to categorize videos based on OLED screen power characteristics. Using this HMM classifier, we propose a video classification based dynamic tone mapping (DTM) scheme, namely, DaTuM, to remap output color range and minimize the power-hungry color compositions on OLED screens for power saving. Experiment shows that DaTuM scheme averagely reduces OLED screen power by 17.8% with minimum display quality degradation. Compared to DTM scheme based on official category info provided by the video sources and one state-of-the-art scheme, DaTuM substantially enhances OLED screens' power efficiency and display quality controllability."}
{"_id": "bd5756b67109fd5c172b885921952b8f5fd5a944", "title": "Coordinate Noun Phrase Disambiguation in a Generative Parsing Model", "text": "In this paper we present methods for improving the disambiguation of noun phrase (NP) coordination within the framework of a lexicalised history-based parsing model. As well as reducing noise in the data, we look at modelling two main sources of information for disambiguation: symmetry in conjunct structure, and the dependency between conjunct lexical heads. Our changes to the baseline model result in an increase in NP coordination dependency f-score from 69.9% to 73.8%, which represents a relative reduction in f-score error of 13%."}
{"_id": "40414d71c9706806960f6551a1ff53cc87488899", "title": "The max-min hill-climbing Bayesian network structure learning algorithm", "text": "We present a new algorithm for Bayesian network structure learning, called Max-Min Hill-Climbing (MMHC). The algorithm combines ideas from local learning, constraint-based, and search-and-score techniques in a principled and effective way. It first reconstructs the skeleton of a Bayesian network and then performs a Bayesian-scoring greedy hill-climbing search to orient the edges. In our extensive empirical evaluation MMHC outperforms on average and in terms of various metrics several prototypical and state-of-the-art algorithms, namely the PC, Sparse Candidate, Three Phase Dependency Analysis, Optimal Reinsertion, Greedy Equivalence Search, and Greedy Search. These are the first empirical results simultaneously comparing most of the major Bayesian network algorithms against each other. MMHC offers certain theoretical advantages, specifically over the Sparse Candidate algorithm, corroborated by our experiments. MMHC and detailed results of our study are publicly available at http://www.dsl-lab.org/supplements/mmhc_paper/mmhc_index.html."}
{"_id": "e17511e875e49ddd6b7ed14428771513a4c6155a", "title": "Ordering-Based Search: A Simple and Effective Algorithm for Learning Bayesian Networks", "text": "One of the basic tasks for Bayesian networks (BNs) is that of learning a network structure from data. The BN-learning problem is NPhard, so the standard solution is heuristic search. Many approaches have been proposed for this task, but only a very small number outperform the baseline of greedy hill-climbing with tabu lists; moreover, many of the proposed algorithms are quite complex and hard to implement. In this paper, we propose a very simple and easy-toimplement method for addressing this task. Our approach is based on the well-known fact that the best network (of bounded in-degree) consistent with a given node ordering can be found very efficiently. We therefore propose a search not over the space of structures, but over the space of orderings, selecting for each ordering the best network consistent with it. This search space is much smaller, makes more global search steps, has a lower branching factor, and avoids costly acyclicity checks. We present results for this algorithm on both synthetic and real data sets, evaluating both the score of the network found and in the running time. We show that orderingbased search outperforms the standard baseline, and is competitive with recent algorithms that are much harder to implement."}
{"_id": "1e42647ecb5c88266361c2e6ef785eeadf8dc9c3", "title": "Learning Bayesian Networks: The Combination of Knowledge and Statistical Data", "text": ""}
{"_id": "6a3d5f585c64fc1903ea573232d82240b0d11e62", "title": "Game private networks performance: analytical models for very-large scale simulation", "text": "The WTFast's Gamers Private Network (GPN\u00ae) is a client/server solution that makes online games faster. GPN\u00ae connects online video-game players with a common game service across a wide-area network. Online games are interactive competitions by individual players who compete in a virtual environment. Response time, latency and its predictability are keys to GPN\u00ae success and runs against the vast complexity of internet-wide systems. We have built an experimental network of virtualized GPN\u00ae components so as to carefully measure the statistics of latency for distributed Minecraft games and to do so in a controlled laboratory environment. This has led to a better understanding of the coupling between parameters such as: the number of players, the subset of players that are idle or active, the volume of packets exchanged, the size of packets, latency to and from the game servers, and time-series for most of those parameters. In this paper we present a mathematical model of those system game network parameters and show how it leads to: (1) realistic simulation of each of those network or game parameters, without relying on the experimental setup; (2) very large-scale numerical simulation of the game setup so as to explore various internet-wide performance scenarios that: (a) are impossible to isolate from internet \u201cnoise\u201d in their real environment and; (b) would require vast supercomputing resources if they were to be simulated exhaustively. We motivate all elements of our mathematical model and estimate the savings in computational costs they will bring for very large-scale simulation of the GPN\u00ae. Such simulations will improve quality of service for GPN\u00ae systems and their reliability."}
{"_id": "dc4fe639012048e98eb14500e41b04275a43b192", "title": "Recursive circle packing problems", "text": "This paper presents a class of packing problems where circles may be placed either inside or outside other circles, the whole set being packed in a rectangle. This corresponds to a practical problem of packing tubes in a container; before being inserted in the container, tubes may be put inside other tubes in a recursive fashion. A variant of the greedy randomized adaptive search procedure is proposed for tackling this problem, and its performance assessed in a set of benchmark instances."}
{"_id": "8d701bc4b2853739de4e752d879296608119a65c", "title": "A Hybrid Fragmentation Approach for Distributed Deductive Database Systems", "text": "Fragmentation of base relations in distributed database management systems increases the level of concurrency and therefore system throughput for query processing. Algorithms for horizontal and vertical fragmentation of relations in relational, object-oriented and deductive databases exist; however, hybrid fragmentation techniques based on variable bindings appearing in user queries and query-access-rule dependency are lacking for deductive database systems. In this paper, we propose a hybrid fragmentation approach for distributed deductive database systems. Our approach first considers the horizontal partition of base relations according to the bindings imposed on user queries, and then generates vertical fragments of the horizontally partitioned relations and clusters rules using affinity of attributes and access frequency of queries and rules. The proposed fragmentation technique facilitates the design of distributed deductive database systems."}
{"_id": "b7975962664a04a39a24f1f068f5aaadfb5c208d", "title": "Risks and benefits of social media for children and adolescents.", "text": "Resources \u2022 AAP Internet safety resources site, http:// safetynet.aap.org \u2022 CDC Social Media Tools Guidelines & Best Practices site, http://www.cdc.gov/ SocialMedia/Tools/ guidelines/ \u2022 Common Sense Media (2009). Is technology changing childhood? A national poll on teens and social networking. R e t r i e v e d f r om http://www.commonsensemedia.org/teensocial-media. FACEBOOK, TWITTER, AND YouTube bring benefits to children and teenagers, including enhancing communication, broadening social connections, and learning technical skills, but can also expose them to risks, such as cyberbullying, \u201cFacebook depression,\u201d and \u201csexting,\u201d according to a new report by the American Academy of Pediatrics (AAP; O'Keeffe, Clarke-Pearson, & Council on Communications and Media, 2011). The new report outlines the latest research on one of the most common activities among today's children and teenagers and describes how health care providers can help families understand these sites and encourage their healthy use. The report considers any site that allows social interaction as a social media site, including social networking sites such as Facebook, MySpace, and Twitter; gaming sites and virtual worlds such as Club Penguin, Second Life, and the Sims; video sites such as YouTube; and blogs. The abundance of these sites has grown exponentially in recent years. According to a poll cited in the report, more than half of American teenagers log onto their favorite social media site at least once a day, whereas 22% do so at least 10 times a day; 75% of teenagers now own cell phones, with 54% of them using them for texting, 24% for instant messaging, and 25% for social media access. According to the authors of the new report, increase in social media has been so rapid and their presence in children's everyday life is now so pervasive that, for some teens, social media is the primary way they interact socially, and a large part of this generation's social and emotional development is occurring while on the Internet and on cell phones. Because of their limited capacity for self-regulation and susceptibility to peer pressure,"}
{"_id": "de5ad10b0513a2641a6c13e3fe777f4f42c21bbe", "title": "Raspberry Pi based interactive home automation system through E-mail", "text": "Home automation is becoming more and more popular day by day due to its numerous advantages. This can be achieved by local networking or by remote control. This paper aims at designing a basic home automation application on Raspberry Pi through reading the subject of E-mail and the algorithm for the same has been developed in python environment which is the default programming environment provided by Raspberry Pi. Results show the efficient implementation of proposed algorithm for home automation. LEDs were used to indicate the switching action."}
{"_id": "8da810635b63657edac71799bdb5fdb981fffd6e", "title": "On the privacy-utility tradeoff in participatory sensing systems", "text": "The ubiquity of sensors-equipped mobile devices has enabled citizens to contribute data via participatory sensing systems. This emergent paradigm comes with various applications to improve users' quality of life. However, the data collection process may compromise the participants' privacy when reporting data tagged or correlated with their sensitive information. Therefore, anonymization and location cloaking techniques have been designed to provide privacy protection, yet to some cost of data utility which is a major concern for queriers. Different from past works, we assess simultaneously the two competing goals of ensuring the queriers' required data utility and protecting the participants' privacy. First, we introduce a trust worthy entity to the participatory sensing traditional system. Also, we propose a general privacy-preserving mechanism that runs on this entity to release a distorted version of the sensed data in order to minimize the information leakage with its associated private information. We demonstrate how to identify a near-optimal solution to the privacy-utility tradeoff by maximizing a privacy score while considering a utility metric set by data queriers (service providers). Furthermore, we tackle the challenge of data with large size alphabets by investigating quantization techniques. Finally, we evaluate the proposed model on three different real datasets while varying the prior knowledge and the obfuscation type. The obtained results demonstrate that, for different applications, a limited distortion may ensure the participants' privacy while maintaining about 98% of the required data utility."}
{"_id": "11b76a652c367ae549bcd5b82ab4a89629f937f4", "title": "On the statistics of natural stochastic textures and their application in image processing", "text": "Statistics of natural images has become an important subject of research in recent years. The highly kurtotic, non-Gaussian, statistics known to be characteristic of many natural images are exploited in various image processing tasks. In this paper, we focus on natural stochastic textures (NST) and substantiate our finding that NST have Gaussian statistics. Using the well-known statistical self-similarity property of natural images, exhibited even more profoundly in NST, we exploit a Gaussian self-similar process known as the fractional Brownian motion, to derive a fBm-PDE-based singleimage superresolution scheme for textured images. Using the same process as a prior, we also apply it in denoising of NST."}
{"_id": "9fdb60d4b2db75838513f3409e83ebc25cc02e3d", "title": "JSpIRIT: a flexible tool for the analysis of code smells", "text": "Code smells are a popular mechanism to identify structural design problems in software systems. Since it is generally not feasible to fix all the smells arising in the code, some of them are often postponed by developers to be resolved in the future. One reason for this decision is that the improvement of the code structure, to achieve modifability goals, requires extra effort from developers. Therefore, they might not always spend this additional effort, particularly when they are focused on delivering customer-visible features. This postponement of code smells are seen as a source of technical debt. Furthermore, not all the code smells may be urgent to fix in the context of the system's modifability and business goals. While there are a number of tools to detect smells, they do not allow developers to discover the most urgent smells according to their goals. In this article, we present a fexible tool to prioritize technical debt in the form of code smells. The tool is flexible to allow developer s to add new smell detection strategies and to prioritize smells, and groups of smells, based on the confguration of their manifold criteria. To illustrate this flexibility, we present an application example of our tool. The results suggest that our tool can be easily extended to be aligned with the developer's goals."}
{"_id": "074c2c9a0cd64fe036748bb0e2bb2a32d1e612e4", "title": "Asymmetrical duty cycle permits zero switching loss in PWM circuits with no conduction loss penalty", "text": "Operating a bridge-type PWM (pulse width modulation) switch mode power converter with asymmetrical duty ratios can eliminate switching losses with no increase in conduction loss. This circuit topology combines the best features of resonant (zero switching loss) and switch mode (low conclusion loss) circuits. Design equations are presented for just such a circuit.<>"}
{"_id": "503035e205654af479d5be4c4b700dcae5a66913", "title": "Reproducibility of quantitative tractography methods applied to cerebral white matter", "text": "Tractography based on diffusion tensor imaging (DTI) allows visualization of white matter tracts. In this study, protocols to reconstruct eleven major white matter tracts are described. The protocols were refined by several iterations of intra- and inter-rater measurements and identification of sources of variability. Reproducibility of the established protocols was then tested by raters who did not have previous experience in tractography. The protocols were applied to a DTI database of adult normal subjects to study size, fractional anisotropy (FA), and T2 of individual white matter tracts. Distinctive features in FA and T2 were found for the corticospinal tract and callosal fibers. Hemispheric asymmetry was observed for the size of white matter tracts projecting to the temporal lobe. This protocol provides guidelines for reproducible DTI-based tract-specific quantification."}
{"_id": "771a1d5d3375b1b04cb22161193e0b72b98c6088", "title": "Threshold-Dependent Camouflaged Cells to Secure Circuits Against Reverse Engineering Attacks", "text": "With current tools and technology, someone who has physical access to a chip can extract the detailed layout of the integrated circuit (IC). By using advanced visual imaging techniques, reverse engineering can reveal details that are meant to be kept secret, such as a secure protocol or novel implementation that offers a competitive advantage. A promising solution to defend against reverse engineering attacks is IC camouflaging. In this work, we propose a new camouflaging technique based on the threshold voltage of the transistors. We refer to these cells as threshold dependent camouflaged cells. Our work differs from current commercial solutions in that the latter use look-alike cells, with the assumption that it is difficult for the reverse engineer to identify the cell's functionality. Yet, if a structural distinction between cells exists, then these are still vulnerable, especially as reverse engineers use more advanced and precise techniques. On the other hand, the proposed threshold dependent standard cells are structurally identical regardless of the cells' functionality. Detailed circuit simulations of our proposed threshold dependent camouflaged cells demonstrate that they can be used to cost-effectively and robustly camouflage large netlists. Corner analysis of process, temperature, and supply voltage (PVT) variations show that our cells operate as expected over all PVT corners simulated."}
{"_id": "5b59feb5ac67ae6f852b84337179da51202764dc", "title": "Yacc is dead", "text": "We present two novel approaches to parsing context-free languages. The first approach is based on an extension of Brzozowski\u2019s derivative from regular expressions to context-free grammars. The second approach is based on a generalization of the derivative to parser combinators. The payoff of these techniques is a small (less than 250 lines of code), easy-to-implement parsing library capable of parsing arbitrary context-free grammars into lazy parse forests. Implementations for both Scala and Haskell are provided. Preliminary experiments with S-Expressions parsed millions of tokens per second, which suggests this technique is efficient enough for use in practice. 1 Top-down motivation: End cargo cult parsing \u201cCargo cult parsing\u201d is a plague upon computing. Cargo cult parsing refers to the use of \u201cmagic\u201d regular expressions\u2014often cut and pasted directly from Google search results\u2014to parse languages which ought to be parsed with contextfree grammars. Such parsing has two outcomes. In the first case, the programmer produces a parser that works \u201cmost of the time\u201d because the underlying language is fundamentally irregular. In the second case, some domain-specific language ends up with a mulish syntax, because the programmer has squeezed the language through the regular bottleneck. There are two reasons why regular expressions are so abused while context-free languages remain foresaken: (1) regular expression libraries are available in almost every language, while parsing libraries and toolkits are not, and (2) regular expressions are \u201cWYSIWYG\u201d\u2014the language described is the language that gets matched\u2014whereas parser-generators are WYSIWYGIYULR(k)\u2014\u201cwhat you see is what you get if you understand LR(k).\u201d To end cargo-cult parsing, we need a new approach to parsing that: 1. handles arbitrary context-free grammars; 2. parses efficiently on average; and 3. can be implemented as a library with little effort. The end goals of the three conditions are simplicity, feasibility and ubiquity. The \u201carbitrary context-free grammar\u201d condition is necessary because programmers will be less inclined to use a tool that forces them to learn or think about LL/LR arcana. It is hard for compiler experts to imagine, but the constraints on LL/LR 1 The term \u201ccargo cult parsing\u201d is due to Larry Wall, 19 June 2008, Google Tech Talk. grammars are (far) beyond the grasp of the average programmer. [Of course, arbitrary context-free grammars bring ambiguity, which means the parser must be prepared to return a parse forest rather than a parse tree.] The \u201cefficient parsing\u201d condition is necessary because programmers will avoid tools branded as inefficient (however justly or unjustly this label has been applied). Specifically, a parser needs to have roughly linear behavior on average. Because ambiguous grammars may yield an exponential number of parse trees, parse trees must be produced lazily, and each parse tree should be paid for only if it is actually produced. The \u201ceasily implemented\u201d condition is perhaps most critical. It must be the case that a programmer could construct a general parsing toolkit if their language of choice doesn\u2019t yet have one. 
If this condition is met, it is reasonable to expect that proper parsing toolkits will eventually be available for every language.When proper parsing toolkits and libraries remain unavailable for a language, cargo cult parsing prevails. This work introduces parsers based on the derivative of context-free languages and upon the derivative of parser combinators. Parsers based on derivatives meet all of the aforementioned requirements: they accept arbitrary grammars, they produce parse forests efficiently (and lazily), and they are easy to implement (less than 250 lines of Scala code for the complete library). Derivative-based parsers also avoid the precompilation overhead of traditional parser generators; this cost is amortized (and memoised) across the parse itself. In addition, derivative-based parsers can be modified mid-parse, which makes it conceivable that a language could to modify its own syntax at compileor run-time. 2 Bottom-up motivation: Generalizing the derivative Brzozowski defined the derivative of regular expressions in 1964 [1]. This technique was lost to the \u201csands of time\u201d until Owens, Reppy and Turon recently revived it to show that derivative-based lexer-generators were easier to write, more efficient and more flexible than the classical regex-to-NFA-to-DFA generators [15]. (Derivative-based lexers allow, for instance, both complement and intersection.) Given the payoff for regular languages, it is natural to ask whether the derivative, and its benefits, extend to context-free languages, and transitively, to parsing. As it turns out, they do. We will show that context-free languages are closed under the derivative\u2014they critical property needed for parsing. We will then show that context-free parser combinators are also closed under a generalization of the derivative. The net impact is that we will be able to write a derivative-based parser combinator library in under 250 lines of Scala, capable of producing a lazy parse forest for any context-free grammar. 2 The second author on this paper, an undergraduate student, completed the implementation for Haskell in less than a week."}
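The regular-language base case that this paper generalizes is compact enough to sketch. Below is a hedged Python rendering of Brzozowski's derivative for plain regular expressions; the context-free and combinator generalizations in the paper additionally require laziness, memoisation, and fixed points for nullability, none of which are shown here.

```python
# A hedged sketch of Brzozowski's derivative for *regular* expressions,
# the base case that the paper generalizes to context-free grammars.
# Regexes are tuples: ('empty',), ('eps',), ('char', c),
# ('alt', r, s), ('seq', r, s), ('star', r).

def nullable(r):
    """Does r accept the empty string?"""
    tag = r[0]
    if tag in ('empty', 'char'):
        return False
    if tag in ('eps', 'star'):
        return True
    if tag == 'alt':
        return nullable(r[1]) or nullable(r[2])
    if tag == 'seq':
        return nullable(r[1]) and nullable(r[2])

def derive(r, c):
    """Derivative of r with respect to character c."""
    tag = r[0]
    if tag in ('empty', 'eps'):
        return ('empty',)
    if tag == 'char':
        return ('eps',) if r[1] == c else ('empty',)
    if tag == 'alt':
        return ('alt', derive(r[1], c), derive(r[2], c))
    if tag == 'seq':
        left = ('seq', derive(r[1], c), r[2])
        if nullable(r[1]):
            return ('alt', left, derive(r[2], c))
        return left
    if tag == 'star':
        return ('seq', derive(r[1], c), r)

def matches(r, s):
    """Match by repeated derivation: r matches s iff D_s(r) is nullable."""
    for c in s:
        r = derive(r, c)
    return nullable(r)

# (ab)*
r = ('star', ('seq', ('char', 'a'), ('char', 'b')))
assert matches(r, 'abab') and not matches(r, 'aba')
```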
{"_id": "5118a15588e670031f40b72f418e21a8f47d8727", "title": "Sketches by Paul the Robot", "text": "In this paper we describe Paul, a robotic installation that produces observational sketches of people. The sketches produced have been considered of interest by fine art professionals in recent art fairs and exhibitions, as well as by the public at large. We identify factors that may account for the perceived qualities of the produced sketches. A technical overview of the system is also presented."}
{"_id": "2c8bde8c34468774f2a070ac3e14a6b97f877d13", "title": "A Hierarchical Neural Model for Learning Sequences of Dialogue Acts", "text": "We propose a novel hierarchical Recurrent Neural Network (RNN) for learning sequences of Dialogue Acts (DAs). The input in this task is a sequence of utterances (i.e., conversational contributions) comprising a sequence of tokens, and the output is a sequence of DA labels (one label per utterance). Our model leverages the hierarchical nature of dialogue data by using two nested RNNs that capture long-range dependencies at the dialogue level and the utterance level. This model is combined with an attention mechanism that focuses on salient tokens in utterances. Our experimental results show that our model outperforms strong baselines on two popular datasets, Switchboard and MapTask; and our detailed empirical analysis highlights the impact of each aspect of our model."}
{"_id": "3ebc2b0b4168b20f920074262577a3b8977dce83", "title": "Delta-Sigma FDC Based Fractional-N PLLs", "text": "Fractional-N phase-locked loop frequency synthesizers based on time-to-digital converters (TDC-PLLs) have been proposed to reduce the area and linearity requirements of conventional PLLs based on delta-sigma modulation and charge pumps (\u0394\u03a3-PLLs). Although TDC-PLLs with good performance have been demonstrated, TDC quantization noise has so far kept their phase noise and spurious tone performance below that of the best comparable \u0394\u03a3-PLLs. An alternative approach is to use a delta-sigma frequency-to-digital converter (\u0394\u03a3 FDC) in place of a TDC to retain the benefits of TDC-PLLs and \u0394\u03a3-PLLs. This paper proposes a practical \u0394\u03a3 FDC based PLL in which the quantization noise is equivalent to that of a \u0394\u03a3-PLL. It presents a linearized model of the PLL, design criteria to avoid spurious tones in the \u0394\u03a3FDC quantization noise, and a design methodology for choosing the loop parameters in terms of standard PLL target specifications."}
{"_id": "3d2626090650f79d62b6e17f1972f3abe9c4bdcc", "title": "Distributed generation intelligent islanding detection using governor signal clustering", "text": "One of the major protection concerns with distribution networks comprising distributed generation is unintentional islanding phenomenon. Expert diagnosis system is needed to distinguish network cut off from normal occurrences. An important part of synchronous generator is automatic load-frequency controller (ALFC). In this paper, a new approach based on clustering of input signal to governor is introduced. Self-organizing map (SOM) neural network is used to identify and classify islanding and non-islanding phenomena. Simulation results show that input signal to governor has different characteristics concern with islanding conditions and other disturbances. In addition, the SOM is able to identify and classify phenomena satisfactorily. Using proposed method, islanding can be detected after 200 ms."}
{"_id": "669dbee1fd07c8193c3c0c7b4c00767e3d311ab5", "title": "Function-Hiding Inner Product Encryption Is Practical", "text": "In a functional encryption scheme, secret keys are associated with functions and ciphertexts are associated with messages. Given a secret key for a function f , and a ciphertext for a message x, a decryptor learns f(x) and nothing else about x. Inner product encryption is a special case of functional encryption where both secret keys and ciphertext are associated with vectors. The combination of a secret key for a vector x and a ciphertext for a vector y reveal \u3008x,y\u3009 and nothing more about y. An inner product encryption scheme is function-hiding if the keys and ciphertexts reveal no additional information about both x and y beyond their inner product. In the last few years, there has been a flurry of works on the construction of function-hiding inner product encryption, starting with the work of Bishop, Jain, and Kowalczyk (Asiacrypt 2015) to the more recent work of Tomida, Abe, and Okamoto (ISC 2016). In this work, we focus on the practical applications of this primitive. First, we show that the parameter sizes and the run-time complexity of the state-of-the-art construction can be further reduced by another factor of 2, though we compromise by proving security in the generic group model. We then show that function privacy enables a number of applications in biometric authentication, nearest-neighbor search on encrypted data, and single-key two-input functional encryption for functions over small message spaces. Finally, we evaluate the practicality of our encryption scheme by implementing our function-hiding inner product encryption scheme. Using our construction, encryption and decryption operations for vectors of length 50 complete in a tenth of a second in a standard desktop environment."}
{"_id": "16207aded3bbeb6ad28aceb89e46c5921372fb13", "title": "Characterising the Behaviour of IEEE 802.11 Broadcast Transmissions in Ad Hoc Wireless LANs", "text": "This paper evaluates the performance of the IEEE 802.11 broadcast traffic under both saturation and nonsaturation conditions. The evaluation highlights some important characteristics of IEEE 802.11 broadcast traffic as compared to corresponding unicast traffic. Moreover, it underlines the inaccuracy of the broadcast saturation model proposed by Ma and Chen due to the absence of backoff counter freeze process when channel is busy. Computer simulations are used to validate the accuracy of the new model and demonstrate the importance of capturing the freezing of backoff counter in the analytical study of IEEE 802.11 broadcast."}
{"_id": "538578f986cf3a5d888b4a1d650f875d4710b0a7", "title": "Brain development during childhood and adolescence: a longitudinal MRI study", "text": "Pediatric neuroimaging studies, up to now exclusively cross sectional, identify linear decreases in cortical gray matter and increases in white matter across ages 4 to 20. In this large-scale longitudinal pediatric neuroimaging study, we confirmed linear increases in white matter, but demonstrated nonlinear changes in cortical gray matter, with a preadolescent increase followed by a postadolescent decrease. These changes in cortical gray matter were regionally specific, with developmental curves for the frontal and parietal lobe peaking at about age 12 and for the temporal lobe at about age 16, whereas cortical gray matter continued to increase in the occipital lobe through age 20."}
{"_id": "dd7019c0444ce0765cb7f4af6a5a5023b0c7ba12", "title": "Risk Factors in Adolescence: The Case of Gambling, Videogame Playing, and the Internet", "text": "It has been noted that adolescents may be more susceptible to pathological gambling. Not only is it usually illegal, but it appears to be related to high levels of problem gambling and other delinquent activities such as illicit drug taking and alcohol abuse. This paper examines risk factors not only in adolescent gambling but also in videogame playing (which shares many similarities with gambling). There appear to be three main forms of adolescent gambling that have been widely researched. Adolescent gambling activities and general risk factors in adolescent gambling are provided. As well, the influence of technology on adolescents in the form of both videogames and the Internet are examined. It is argued that technologically advanced forms of gambling may be highly appealing to adolescents."}
{"_id": "4775dff7528d0952a75af40ffe9900ed3c8874d8", "title": "Automated Data Structure Generation: Refuting Common Wisdom", "text": "Common wisdom in the automated data structure generation community states that declarative techniques have better usability than imperative techniques, while imperative techniques have better performance. We show that this reasoning is fundamentally flawed: if we go to the declarative limit and employ constraint logic programming (CLP), the CLP data structure generation has orders of magnitude better performance than comparable imperative techniques. Conversely, we observe and argue that when it comes to realistically complex data structures and properties, the CLP specifications become more obscure, indirect, and difficult to implement and understand than their imperative counterparts. We empirically evaluate three competing generation techniques, CLP, Korat, and UDITA, to validate these observations on more complex and interesting data structures than any prior work in this area. We explain why these observations are true, and discuss possible techniques for attaining the best of both worlds."}
{"_id": "77e10f17c7eeebe37f7ca046dfb616c8ac994995", "title": "Simple Bayesian Algorithms for Best Arm Identification", "text": "This paper considers the optimal adaptive allocation of measurement effort for identifying the best among a finite set of options or designs. An experimenter sequentially chooses designs to measure and observes noisy signals of their quality with the goal of confidently identifying the best design after a small number of measurements. I propose three simple Bayesian algorithms for adaptively allocating measurement effort. One is Top-Two Probability sampling, which computes the two designs with the highest posterior probability of being optimal, and then randomizes to select among these two. One is a variant a top-two sampling which considers not only the probability a design is optimal, but the expected amount by which its quality exceeds that of other designs. The final algorithm is a modified version of Thompson sampling that is tailored for identifying the best design. I prove that these simple algorithms satisfy a strong optimality property. In a frequestist setting where the true quality of the designs is fixed, one hopes the posterior definitively identifies the optimal design, in the sense that that the posterior probability assigned to the event that some other design is optimal converges to zero as measurements are collected. I show that under the proposed algorithms this convergence occurs at an exponential rate, and the corresponding exponent is the best possible among all allocation rules."}
{"_id": "eb65824356478ad59c927a565de2e467a3f0c67a", "title": "Cloudlet-based just-in-time indexing of IoT video", "text": "As video cameras proliferate, the ability to scalably capture and search their data becomes important. Scalability is improved by performing video analytics on cloudlets at the edge of the Internet, and only shipping extracted index information and meta-data to the cloud. In this setting, we describe interactive data exploration (IDE), which refers to human-in-the-loop content-based retrospective search using predicates that may not have been part of any prior indexing. We also describe a new technique called just-in-time indexing (JITI) that improves response times in IDE."}
{"_id": "537abf102ed05a373eb227151b7423306c9e5d3c", "title": "Intrinsically Motivated Affordance Discovery and Modeling", "text": "In this chapter, we argue that a single intrinsic motivation function for affordance discovery can guide long-term learning in robot systems. To these ends, we provide a novel definition of \u201caffordance\u201d as the latent potential for the closed-loop control of environmental stimuli perceived by sensors. Specifically, the proposed intrinsic motivation function rewards the discovery of such control affordances. We will demonstrate how this function has been used by a humanoid robot to learn a number of general purpose control skills that address many different tasks. These skills, for example, include strategies for finding, grasping, and placing simple objects. We further show how this same intrinsic reward function is used to direct the robot to build stable models of when the environment affords these skills."}
{"_id": "1ca0911ee19bade27650860ebd904a7dc32c13cb", "title": "An affective computational model for machine consciousness", "text": "In the past, several models of consciousness have become popular and have led to the development of models for machine consciousness with varying degrees of success and challenges for simulation and implementations. Moreover, affective computing attributes that involve emotions, behavior and personality have not been the focus of models of consciousness as they lacked motivation for deployment in software applications and robots. The affective attributes are important factors for the future of machine consciousness with the rise of technologies that can assist humans. Personality and affection hence can give an additional flavor for the computational model of consciousness in humanoid robotics. Recent advances in areas of machine learning with a focus on deep learning can further help in developing aspects of machine consciousness in areas that can better replicate human sensory perceptions such as speech recognition and vision. With such advancements, one encounters further challenges in developing models that can synchronize different aspects of affective computing. In this paper, we review some existing models of consciousnesses and present an affective computational model that would enable the human touch and feel for robotic systems."}
{"_id": "1678a55524be096519b3ea71c9680ba8041a761e", "title": "\u2013 \u201c Mixture Densities , Maximum Likelihood and the EM Algorithm", "text": "The problem of estimating the parameters which determine a mixture density has been the subject of a large, diverse body of literature spanning nearly ninety years. During the last two decades, the method of maximum likelihood has become the most widely followed approach to this problem, thanks primarily to the advent of high speed electronic computers. Here, we first offer a brief survey of the literature directed toward this problem and review maximum-likelihood estimation for it. We then turn to the subject of ultimate interest, which is a particular iterative procedure for numerically approximating maximum-likelihood estimates for mixture density problems. This procedure, known as the EM algorithm, is a specialization to the mixture density context of a general algorithm of the same name used to approximate maximum-likelihood estimates for incomplete data problems. We discuss the formulation and theoretical and practical properties of the EM algorithm for mixture densities, focussing in particular on mixtures of densities from exponential families."}
{"_id": "8a1a3f4dcb4ae461c3c0063820811d9c37d8ec75", "title": "Embedded image coding using zerotrees of wavelet coefficients", "text": "The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish a n image from the \u201cnull\u201d image. Using a n embedded coding algorithm, a n encoder can terminate the encoding a t any point thereby allowing a target rate or target distortion metric to be met exactly. Also, given a bit stream, the decoder can cease decoding a t any point in the bit stream and still produce exactly the same image that would have been encoded a t the bit rate corresponding to the truncated bit stream. In addition to producing a fully embedded bit stream, EZW consistently produces compression results that a re competitive with virtually all known compression algorithms on standard test images. Yet this performance is achieved with a technique that requires absolutely no training, no pre-stored tables or codebooks, and requires no prior knowledge of the image source. The EZW algorithm is based on four key concepts: 1) a discrete wavelet transform or hierarchical subband decomposition, 2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, 3) entropy-coded successive-approximation quantization, and 4) universal lossless data compression which is achieved via adaptive arithmetic coding."}
{"_id": "b82ab6b6885bdd1b06cafd2b3ca892b75352a96b", "title": "A 600MS/s 30mW 0.13\u00b5m CMOS ADC array achieving over 60dB SFDR with adaptive digital equalization", "text": "At high conversion speed, time interleaving provides a viable way of achieving analog-to-digital conversion with low power consumption, especially when combined with the successive-approximation-register (SAR) architecture that is known to scale well in CMOS technology. In this work, we showcase a digital background-equalization technique to treat the path-mismatch problem as well as individual ADC nonlinearities in time-interleaved SAR ADC arrays. This approach was first introduced to calibrate the linearity errors in pipelined ADCs [1], and subsequently extended to treat SAR ADCs [2]. In this prototype, we demonstrate the effectiveness of this technique in a compact SAR ADC array, which achieves 7.5 ENOB and a 65dB SFDR at 600MS/s while dissipating 23.6mW excluding the on-chip DLL, and exhibiting one of the best conversion FOMs among ADCs with similar sample rates and resolutions [3]."}
{"_id": "ac22f98cd13926ed25619c6620990c5c875b3948", "title": "LateralPaD: A surface-haptic device that produces lateral forces on a bare finger", "text": "The LateralPaD is a surface haptic device that generates lateral (shear) force on a bare finger, by vibrating the touch surface simultaneously in both out-of-plane (normal) and in-plane (lateral) directions. A prototype LateralPaD has been developed in which the touch surface is glass, and piezoelectric actuators drive normal and lateral resonances at the same ultrasonic frequency (~22.3 KHz). The force that develops on the finger can be controlled by modulating the relative phase of the two resonances. A 2DOF load cell setup is used to characterize the dependence of induced lateral force on vibration amplitude, relative phase, and applied normal force. A Laser Doppler Vibrometer (LDV) is used to measure the motion of glass surface as well as of the fingertip. Together, these measurements yield insight into the mechanism of lateral force generation. We show evidence for a mechanism dependent on tilted impacts between the LateralPaD and fingertip."}
{"_id": "ca5766b91da4903ad6f6d40a5b31a3ead1f7f6de", "title": "A+: Adjusted Anchored Neighborhood Regression for Fast Super-Resolution", "text": "We address the problem of image upscaling in the form of single image super-resolution based on a dictionary of lowand highresolution exemplars. Two recently proposed methods, Anchored Neighborhood Regression (ANR) and Simple Functions (SF), provide state-ofthe-art quality performance. Moreover, ANR is among the fastest known super-resolution methods. ANR learns sparse dictionaries and regressors anchored to the dictionary atoms. SF relies on clusters and corresponding learned functions. We propose A+, an improved variant of ANR, which combines the best qualities of ANR and SF. A+ builds on the features and anchored regressors from ANR but instead of learning the regressors on the dictionary it uses the full training material, similar to SF. We validate our method on standard images and compare with state-of-the-art methods. We obtain improved quality (i.e. 0.2-0.7dB PSNR better than ANR) and excellent time complexity, rendering A+ the most efficient dictionary-based super-resolution method to date."}
{"_id": "b118b38c180925f9e5bfda98cc12d03f536c12d4", "title": "Experience Prototyping", "text": "In this paper, we describe \"Experience Prototyping\" as a form of prototyping that enables design team members, users and clients to gain first-hand appreciation of existing or future conditions through active engagement with prototypes. We use examples from commercial design projects to illustrate the value of such prototypes in three critical design activities: understanding existing experiences, exploring design ideas and in communicating design concepts."}
{"_id": "211efdc4db415ad42c7c5eb9b0457c84ad23ef30", "title": "Supernumerary nostril: a case report", "text": "BACKGROUND\nSupernumerary nostril is a congenital anomaly that contains additional nostril with or without accessory cartilage. These rare congenital nasal deformities result from embryological defects. Since 1906, Lindsay (Trans Pathol Soc Lond. 57:329-330, 1906) has published the first research of bilateral supernumerary nostrils, and only 34 cases have been reported so far in the English literature.\n\n\nCASE PRESENTATION\nA 1-year-old female baby was brought to our department group for the treatment of an accessory opening above the left nostril which had been presented since her birth. Medical history was non-specific and her birth was normal. The size of a supernumerary nostril was about 0.2\u00a0cm diameter and connected to the left nostril. The right one was normal. Minimal procedure was operated for the anomaly. After 1\u00a0year, rhinoplasty was performed for the nostril asymmetry.\n\n\nCONCLUSIONS\nAt 1\u00a0year follow-up, the functional and cosmetic result was satisfactory. In this case, it is important that we have early preoperative diagnosis. Also, it is desirable that we should perform a corrective surgery as soon as possible for the patient's psychosocial growth."}
{"_id": "4424a9788e70ef4a1d447f6e28c1fc0c7cbdefc0", "title": "ADvanced IMage Algebra (ADIMA): a novel method for depicting multiple\nsclerosis lesion heterogeneity, as demonstrated by quantitative MRI", "text": "BACKGROUND\nThere are modest correlations between multiple sclerosis (MS) disability and white matter lesion (WML) volumes, as measured by T2-weighted (T2w) magnetic resonance imaging (MRI) scans (T2-WML). This may partly reflect pathological heterogeneity in WMLs, which is not apparent on T2w scans.\n\n\nOBJECTIVE\nTo determine if ADvanced IMage Algebra (ADIMA), a novel MRI post-processing method, can reveal WML heterogeneity from proton-density weighted (PDw) and T2w images.\n\n\nMETHODS\nWe obtained conventional PDw and T2w images from 10 patients with relapsing-remitting MS (RRMS) and ADIMA images were calculated from these. We classified all WML into bright (ADIMA-b) and dark (ADIMA-d) sub-regions, which were segmented. We obtained conventional T2-WML and T1-WML volumes for comparison, as well as the following quantitative magnetic resonance parameters: magnetisation transfer ratio (MTR), T1 and T2. Also, we assessed the reproducibility of the segmentation for ADIMA-b, ADIMA-d and T2-WML.\n\n\nRESULTS\nOur study's ADIMA-derived volumes correlated with conventional lesion volumes (p < 0.05). ADIMA-b exhibited higher T1 and T2, and lower MTR than the T2-WML (p < 0.001). Despite the similarity in T1 values between ADIMA-b and T1-WML, these regions were only partly overlapping with each other. ADIMA-d exhibited quantitative characteristics similar to T2-WML; however, they were only partly overlapping. Mean intra- and inter-observer coefficients of variation for ADIMA-b, ADIMA-d and T2-WML volumes were all < 6 % and < 10 %, respectively.\n\n\nCONCLUSION\nADIMA enabled the simple classification of WML into two groups having different quantitative magnetic resonance properties, which can be reproducibly distinguished."}
{"_id": "5807664af8e63d5207f59fb263c9e7bd3673be79", "title": "Hybrid speech recognition with Deep Bidirectional LSTM", "text": "Deep Bidirectional LSTM (DBLSTM) recurrent neural networks have recently been shown to give state-of-the-art performance on the TIMIT speech database. However, the results in that work relied on recurrent-neural-network-specific objective functions, which are difficult to integrate with existing large vocabulary speech recognition systems. This paper investigates the use of DBLSTM as an acoustic model in a standard neural network-HMM hybrid system. We find that a DBLSTM-HMM hybrid gives equally good results on TIMIT as the previous work. It also outperforms both GMM and deep network benchmarks on a subset of the Wall Street Journal corpus. However the improvement in word error rate over the deep network is modest, despite a great increase in framelevel accuracy. We conclude that the hybrid approach with DBLSTM appears to be well suited for tasks where acoustic modelling predominates. Further investigation needs to be conducted to understand how to better leverage the improvements in frame-level accuracy towards better word error rates."}
{"_id": "240cc2dbe027400957ed1f8cf8fb092a533c406e", "title": "Design of Multiple-Level Hybrid Classifier for Intrusion Detection System", "text": "As the number of networked computers grows, intrusion detection is an essential component in keeping networks secure. However, constructing and maintaining a misuse detection system is very labor-intensive since attack scenarios and patterns need to be analyzed and categorized, and the corresponding rules and patterns need to be carefully hand-coded. Thus, data mining can be used to ease this inconvenience. This paper proposes a multiple-level hybrid classifier, an intrusion detection system that uses a combination of tree classifiers and clustering algorithms to detect intrusions. Performance of this new algorithm is compared to other popular approaches such as MADAM ID and 3-level tree classifiers, and significant improvement has been achieved from the viewpoint of both high intrusion detection rate and reasonably low false alarm rate"}
{"_id": "379c379e40e6bc5d0f7b8b884a6153585d55b9ca", "title": "Project Lachesis: Parsing and Modeling Location Histories", "text": "A datatype with increasing importance in GIS is what we call the location history\u2013a record of an entity\u2019s location in geographical space over an interval of time. This paper proposes a number of rigorously defined data structures and algorithms for analyzing and generating location histories. Stays are instances where a subject has spent some time at a single location, and destinations are clusters of stays. Using stays and destinations, we then propose two methods for modeling location histories probabilistically. Experiments show the value of these data structures, as well as the possible applications of probabilistic models of location histories."}
{"_id": "351a37370f395c3254242a4d8395a89b7072c179", "title": "Extending feature usage: a study of the post-adoption of electronic medical records", "text": "In order to study the factors that influence how professionals use complex systems, we create a tentative research model that builds on PAM and TAM. Specifically we include PEOU and the construct \u2018Professional Association Guidance\u2019. We postulate that feature usage is enhanced when professional associations influence PU by highlighting additional benefits. We explore the theory in the context of post-adoption use of Electronic Medical Records (EMRs) by primary care physicians in Ontario. The methodology can be extended to other professional environments and we suggest directions for future research."}
{"_id": "08e8ee6fd84dd21028e7f42aa68b333ce4bf13c0", "title": "Information theory-based shot cut/fade detection and video summarization", "text": "New methods for detecting shot boundaries in video sequences and for extracting key frames using metrics based on information theory are proposed. The method for shot boundary detection relies on the mutual information (MI) and the joint entropy (JE) between the frames. It can detect cuts, fade-ins and fade-outs. The detection technique was tested on the TRECVID2003 video test set having different types of shots and containing significant object and camera motion inside the shots. It is demonstrated that the method detects both fades and abrupt cuts with high accuracy. The information theory measure provides us with better results because it exploits the inter-frame information in a more compact way than frame subtraction. It was also successfully compared to other methods published in literature. The method for key frame extraction uses MI as well. We show that it captures satisfactorily the visual content of the shot."}
{"_id": "3e15fb211f94b2e85d07e60f0aa641797b062e01", "title": "Small Discrete Fourier Transforms on GPUs", "text": "Efficient implementations of the Discrete Fourier Transform (DFT) for GPUs provide good performance with large data sizes, but are not competitive with CPU code for small data sizes. On the other hand, several applications perform multiple DFTs on small data sizes. In fact, even algorithms for large data sizes use a divide-and-conquer approach, where eventually small DFTs need to be performed. We discuss our DFT implementation, which is efficient for multiple small DFTs. One feature of our implementation is the use of the asymptotically slow matrix multiplication approach for small data sizes, which improves performance on the GPU due to its regular memory access and computational patterns. We combine this algorithm with the mixed radix algorithm for 1-D, 2-D, and 3-D complex DFTs. We also demonstrate the effect of different optimization techniques. When GPUs are used to accelerate a component of an application running on the host, it is important that decisions taken to optimize the GPU performance not affect the performance of the rest of the application on the host. One feature of our implementation is that we use a data layout that is not optimal for the GPU so that the overall effect on the application is better. Our implementation performs up to two orders of magnitude faster than cuFFT on an NVIDIA GeForce 9800 GTX GPU and up to one to two orders of magnitude faster than FFTW on a CPU for multiple small DFTs. Furthermore, we show that our implementation can accelerate the performance of a Quantum Monte Carlo application for which cuFFT is not effective. The primary contributions of this work lie in demonstrating the utility of the matrix multiplication approach and also in providing an implementation that is efficient for small DFTs when a GPU is used to accelerate an application running on the host."}
{"_id": "175963e3634457ee55f6d82cb4fafe7bd7469ee2", "title": "Groundnut leaf disease detection and classification by using back probagation algorithm", "text": "Many studies shows that quality of agricultural products may be reduced from many causes. One of the most important factors contributing to low yield is disease attack. The plant disease such as fungi, bacteria and viruses. The leaf disease completely destroys the quality of the leaf. Common ground nut disease is cercospora. It is one of the type of disease in early stage of ground nut leaf. The upgraded processing pattern comprises of four leading steps. Initially a color renovation constitute intended for the input RGB image is formed, This RGB is converted into HSV because RGB is for Color generation and color descriptor. The next step is plane separation. Next performed the color features. Then using back propagation algorithm detection of leaf disease is done."}
{"_id": "4a7c79583ffe7a695385d2c4c55f27b645a397ba", "title": "Development of Robotic Gloves for Hand Rehabilitation Post-Stroke", "text": "This paper presents the hardware and software developments that have been achieved and implemented in the robotic gloves for hand rehabilitation post-stroke. The practical results are shown that prove the functionalities of the robotic gloves in common operating conditions. This work focused on development of a lightweight and low-cost robotic glove that patients can wear and use to recover hand functionality. We developed some structures for the robotic glove, as exoskeleton and as soft robotic glove. The paper presents a comparison of these structures."}
{"_id": "15b9bb158ef57a73c94cabcf4a95679f7d2f818e", "title": "Customised Visualisations of Linked Open Data", "text": "This paper aims to tackle on Linked Open Data (LOD) customised visualisations. The work is part of an ongoing research on interfaces and tools for helping non-programmers to easily present, analyse and support sense making over large semantic dataset. The customisation is a key aspect of our work. Producing effective customised visualisations is still difficult due to the complexity of the existing tools and the limited set of options they offer, especially to those with little knowledge in LOD and semantic data models. How can we give users full control on the primitives of the visualisation and their properties, without requiring them to master Semantic Web technologies or programming languages? The paper presents a conceptual model that can be used as a reference to build tools for generating customisable infoviews. We used it to conduct a survey on existing tools in terms of customisation capabilities. Starting from the feedback collected in this phase, we will implement these customisation features into some prototypes we are currently experimenting on."}
{"_id": "8dbb9432e7e7e5c11bbe7bdbbe8eb9a14811c970", "title": "Enhancing representation learning with tensor decompositions for knowledge graphs and high dimensional sequence modeling", "text": "The capability of processing and digesting raw data is one of the key features of a humanlike artificial intelligence system. For instance, real-time machine translation should be able to process and understand spoken natural language, and autonomous driving relies on the comprehension of visual inputs. Representation learning is a class of machine learning techniques that autonomously learn to derive latent features from raw data. These new features are expected to represent the data instances in a vector space that facilitates the machine learning task. This thesis studies two specific data situations that require efficient representation learning: knowledge graph data and high dimensional sequences. In the first part of this thesis, we first review multiple relational learning models based on tensor decomposition for knowledge graphs. We point out that relational learning is in fact a means of learning representations through one-hot mapping of entities. Furthermore, we generalize this mapping function to consume a feature vector that encodes all known facts about each entity. It enables the relational model to derive the latent representation instantly for a new entity, without having to re-train the tensor decomposition. In the second part, we focus on learning representations from high dimensional sequential data. Sequential data often pose the challenge that they are of variable lengths. Electronic health records, for instance, could consist of clinical event data that have been collected at subsequent time steps. But each patient may have a medical history of variable length. We apply recurrent neural networks to produce fixed-size latent representations from the raw feature sequences of various lengths. By exposing a prediction model to these learned representations instead of the raw features, we can predict the therapy prescriptions more accurately as a means of clinical decision support. We further propose Tensor-Train recurrent neural networks. We give a detailed introduction to the technique of tensorizing and decomposing large weight matrices into a few smaller tensors. We demonstrate the specific algorithms to perform the forward-pass and the back-propagation in this setting. Then we apply this approach to the input-to-hidden weight matrix in recurrent neural networks. This novel architecture can process extremely high dimensional sequential features such as video data. The model also provides a promising solution to processing sequential features with high sparsity. This is, for instance, the case with electronic health records, since they are often of categorical nature and have to be binary-coded. We incorporate a statistical survival model with this representation learning model, which shows superior prediction quality."}
{"_id": "6754f7a897bd44364d181fa6946014c9a9165fe4", "title": "Analyzing and Managing Role-Based Access Control Policies", "text": "Today more and more security-relevant data is stored on computer systems; security-critical business processes are mapped to their digital counterparts. This situation applies to various domains such as health care industry, digital government, and financial service institutes requiring that different security requirements must be fulfilled. Authorisation constraints can help the policy architect design and express higher-level organisational rules. Although the importance of authorisation constraints has been addressed in the literature, there does not exist a systematic way to verify and validate authorisation constraints. In this paper, we specify both non-temporal and history-based authorisation constraints in the Object Constraint Language (OCL) and first-order linear temporal logic (LTL). Based upon these specifications, we attempt to formally verify role-based access control policies with the help of a theorem prover and to validate policies with the USE system, a validation tool for OCL constraints. We also describe an authorisation engine, which supports the enforcement of authorisation constraints."}
{"_id": "fafe0f7a8f25a06f21962ca5f45361ec75373051", "title": "Greedy Layerwise Learning Can Scale to ImageNet", "text": "Shallow supervised 1-hidden layer neural networks have a number of favorable properties that make them easier to interpret, analyze, and optimize than their deep counterparts, but lack their representational power. Here we use 1-hidden layer learning problems to sequentially build deep networks layer by layer, which can inherit properties from shallow networks. Contrary to previous approaches using shallow networks, we focus on problems where deep learning is reported as critical for success. We thus study CNNs on image classification tasks using the large-scale ImageNet dataset and the CIFAR-10 dataset. Using a simple set of ideas for architecture and training we find that solving sequential 1-hidden-layer auxiliary problems lead to a CNN that exceeds AlexNet performance on ImageNet. Extending this training methodology to construct individual layers by solving 2-and-3-hidden layer auxiliary problems, we obtain an 11-layer network that exceeds several members of the VGG model family on ImageNet, and can train a VGG-11 model to the same accuracy as end-to-end learning. To our knowledge, this is the first competitive alternative to end-to-end training of CNNs that can scale to ImageNet. We illustrate several interesting properties of these models theoretically and conduct a range of experiments to study the properties this training induces on the intermediate representations."}
{"_id": "cdb25e4df6913bb94edcd1174d00baf2d21c9a6d", "title": "Rethinking the Value of Network Pruning", "text": "Network pruning is widely used for reducing the heavy computational cost of 1 deep models. A typical pruning algorithm is a three-stage pipeline, i.e., training (a 2 large model), pruning and fine-tuning. In this work, we make a rather surprising 3 observation: fine-tuning a pruned model only gives comparable or even worse 4 performance than training that model with randomly initialized weights. Our 5 results have several implications: 1) training a large, over-parameterized model is 6 not necessary to obtain an efficient final model, 2) learned \u201cimportant\u201d weights 7 of the large model are not necessarily useful for the small pruned model, 3) the 8 pruned architecture itself, rather than a set of inherited weights, is what leads to the 9 efficiency benefit in the final model, which suggests that some pruning algorithms 10 could be seen as performing network architecture search. 11"}
{"_id": "c4d5b0995a97a3a7824e51e9f71aa1fd700d27e1", "title": "Compact wideband directional antenna with three-dimensional structure for microwave-based head imaging systems", "text": "Microwave-based head imaging systems demand wideband antennas with compact size. To ensure high scattered signals from the intended target point, the antennas need to have directional radiation patterns. Designing an antenna by addressing all these issues is challenging as the head imaging system utilizes low microwave frequencies (around 1-3 GHz) to maintain high signal penetration in the lossy human head. This paper reports a coplanar waveguide-fed design that overcomes all these limitations. The antenna is constructed in a three-dimensional manner by combining a dipole antenna with a folded parasitic structure printed on two substrate blocks. The antenna mechanism is explained with the description of its design parameters. The antenna covers 101% fractional bandwidth with a stable gain, good radiation efficiency and well directional radiation characteristics. The overall volume of the antenna is 0.24 \u00d7 0.1 \u00d7 0.05 \u03bbm where \u03bbm is the wavelength at the lowest frequency of operation."}
{"_id": "7b4bb79c57bba1bc722a589a43fae2d488bf7e8b", "title": "A spiral antenna over a high-impedance surface consisting of fan-shaped patch cells", "text": "Radiation characteristics of a spiral antenna over a high-impedance surface (HIS) are analyzed. The HIS consists of fan-shaped patch cells. The fan-shaped patches are arranged homogeneously in the circumferential direction but are set non-homogeneously in the radial direction. The analysis is performed using method of moment. Radiation characteristics of a spiral antenna with a perfect electric conductor (PEC) reflector are analyzed. It is reaffirmed that wideband radiation characteristics, including input impedance and axial ratio are deteriorate. Subsequently, Radiation characteristics of a spiral antenna with a fan-shaped HIS reflector are analyzed. It is revealed that input impedance and axial ratio are mitigated by replacing the PEC reflector with the fan-shaped HIS reflector."}
{"_id": "bb1d6215f0cfd84b5efc7173247b016ade4c976e", "title": "Autoencoders, Unsupervised Learning, and Deep Architectures", "text": "To better understand deep architectures and unsupervised learning, uncluttered by hardware details, we develop a general autoencoder framework for the comparative study of autoencoders, including Boolean autoencoders. We derive several results regarding autoencoders and autoencoder learning, including results on learning complexity, vertical and horizontal composition, and fundamental connections between critical points and clustering. Possible implications for the theory of deep architectures are discussed."}
{"_id": "071a6cd442706e424ea09bc8852eaa2e901c72f3", "title": "Accelerating Differential Evolution Using an Adaptive Local Search", "text": "We propose a crossover-based adaptive local search (LS) operation for enhancing the performance of standard differential evolution (DE) algorithm. Incorporating LS heuristics is often very useful in designing an effective evolutionary algorithm for global optimization. However, determining a single LS length that can serve for a wide range of problems is a critical issue. We present a LS technique to solve this problem by adaptively adjusting the length of the search, using a hill-climbing heuristic. The emphasis of this paper is to demonstrate how this LS scheme can improve the performance of DE. Experimenting with a wide range of benchmark functions, we show that the proposed new version of DE, with the adaptive LS, performs better, or at least comparably, to classic DE algorithm. Performance comparisons with other LS heuristics and with some other well-known evolutionary algorithms from literature are also presented."}
{"_id": "5880b9bc3f75f4649b8ec819c3f983a14fca9927", "title": "Hybrid Recommender Systems: Survey and Experiments", "text": "Recommender systems represent user preferences for the purpose of suggesting items to purchase or examine. They have become fundamental applications in electronic commerce and information access, providing suggestions that effectively prune large information spaces so that users are directed toward those items that best meet their needs and preferences. A variety of techniques have been proposed for performing recommendation, including content-based, collaborative, knowledge-based and other techniques. To improve performance, these methods have sometimes been combined in hybrid recommenders. This paper surveys the landscape of actual and possible hybrid recommenders, and introduces a novel hybrid, EntreeC, a system that combines knowledge-based recommendation and collaborative filtering to recommend restaurants. Further, we show that semantic ratings obtained from the knowledge-based part of the system enhance the effectiveness of collaborative filtering."}
{"_id": "69a5f151f68f04a5c471c31f8f0209126002b479", "title": "Recommender Systems Research: A Connection-Centric Survey", "text": "Recommender systems attempt to reduce information overload and retain customers by selecting a subset of items from a universal set based on user preferences. While research in recommender systems grew out of information retrieval and filtering, the topic has steadily advanced into a legitimate and challenging research area of its own. Recommender systems have traditionally been studied from a content-based filtering vs. collaborative design perspective. Recommendations, however, are not delivered within a vacuum, but rather cast within an informal community of users and social context. Therefore, ultimately all recommender systems make connections among people and thus should be surveyed from such a perspective. This viewpoint is under-emphasized in the recommender systems literature. We therefore take a connection-oriented perspective toward recommender systems research. We posit that recommendation has an inherently social element and is ultimately intended to connect people either directly as a result of explicit user modeling or indirectly through the discovery of relationships implicit in extant data. Thus, recommender systems are characterized by how they model users to bring people together: explicitly or implicitly. Finally, user modeling and the connection-centric viewpoint raise broadening and social issues\u2014such as evaluation, targeting, and privacy and trust\u2014which we also briefly address."}
{"_id": "6c49508db853e9b167b6d894518c034076993953", "title": "Method to find community structures based on information centrality.", "text": "Community structures are an important feature of many social, biological, and technological networks. Here we study a variation on the method for detecting such communities proposed by Girvan and Newman and based on the idea of using centrality measures to define the community boundaries [M. Girvan and M. E. J. Newman, Proc. Natl. Acad. Sci. U.S.A. 99, 7821 (2002)]. We develop an algorithm of hierarchical clustering that consists in finding and removing iteratively the edge with the highest information centrality. We test the algorithm on computer generated and real-world networks whose community structure is already known or has been studied by means of other methods. We show that our algorithm, although it runs to completion in a time O(n4) , is very effective especially when the communities are very mixed and hardly detectable by the other methods."}
{"_id": "719542dc73ea92a52a6add63f873811b69773875", "title": "Collaborative Recommendation via Adaptive Association Rule Mining", "text": "Collaborative recommender systems allow personalization for e-commerce by exploiting similarities and dissimilarities among users' preferences. We investigate the use of association rule mining as an underlying technology for collaborative recommender systems. Association rules have been used with success in other domains. However, most currently existing association rule mining algorithms were designed with market basket analysis in mind. Such algorithms are ine cient for collaborative recommendation because they mine many rules that are not relevant to a given user. Also, it is necessary to specify the minimum support of the mined rules in advance, often leading to either too many or too few rules; this negatively impacts the performance of the overall system. We describe a collaborative recommendation technique based on a new algorithm speci cally designed to mine association rules for this purpose. Our algorithm does not require the minimum support to be speci ed in advance. Rather, a target range is given for the number of rules, and the algorithm adjusts the minimum support for each user in order to obtain a ruleset whose size is in the desired range. Rules are mined for a speci c target user, reducing the time required for the mining process. We employ associations between users as well as associations between items in making recommendations. Experimental evaluation of a system based on our algorithm reveals performance that is signi cantly better than that of traditional correlationbased approaches. Corresponding author. Present a liation: Department of Computer Science, Wellesley College, 106 Central Street, Wellesley, MA 02481 USA, e-mail: salvarez@wellesley.edu"}
{"_id": "17f221aff2b4f2ddf1886274fbfa7bf6610c2039", "title": "Information rules - a strategic guide to the network economy", "text": "Whatever our proffesion, information rules a strategic guide to the network economy can be good source for reading. Locate the existing reports of word, txt, kindle, ppt, zip, pdf, as well as rar in this website. You could completely review online or download this book by here. Currently, never ever miss it. Our goal is always to offer you an assortment of cost-free ebooks too as aid resolve your troubles. We have got a considerable collection of totally free of expense Book for people from every single stroll of life. We have got tried our finest to gather a sizable library of preferred cost-free as well as paid files. Have leisure times? Read information rules a strategic guide to the network economy writer by Why? A best seller book in the world with great value and also content is combined with appealing words. Where? Simply below, in this site you could review online. Want download? Certainly offered, download them also here. Available files are as word, ppt, txt, kindle, pdf, rar, and zip. GO TO THE TECHNICAL WRITING FOR AN EXPANDED TYPE OF THIS INFORMATION RULES A STRATEGIC GUIDE TO THE NETWORK ECONOMY, ALONG WITH A CORRECTLY FORMATTED VERSION OF THE INSTANCE MANUAL PAGE ABOVE."}
{"_id": "a47b437c7d349cf529c86719625e0ca65696ab66", "title": "Fuzzy Cognitive Map Learning Based on Nonlinear Hebbian Rule", "text": "Fuzzy Cognitive Map (FCM) is a soft computing technique for modeling systems. It combines synergistically the theories of neural networks and fuzzy logic. The methodology of developing FCMs is easily adaptable but relies on human experience and knowledge, and thus FCMs exhibit weaknesses and dependence on human experts. The critical dependence on the expert's opinion and knowledge, and the potential convergence to undesired steady states are deficiencies of FCMs. In order to overcome these deficiencies and improve the efficiency and robustness of FCM a possible solution is the utilization of learning methods. This research work proposes the utilization of the unsupervised Hebbian algorithm to nonlinear units for training FCMs. Using the proposed learning procedure, the FCM modifies its fuzzy causal web as causal patterns change and as experts update their causal knowledge."}
{"_id": "666ff37c3339236f0a7f206955ffde382ecc2893", "title": "Emergence of 3D Printed Fashion: Navigating the Ambiguity of Materiality Through Collective Design", "text": "The emergence of 3D printing technology is being embraced by an increasing number of fashion designers. Due to the nascent and evolving nature of the technology, however, there is significant ambiguity around this technology\u2019s implications for the practices of fashion design. Based on the theoretical perspective of sociomateriality and the concept of translation, and drawing on archives, interviews, and other forms of qualitative data, this study\u2019s preliminary findings show that fashion designers navigate this ambiguity by pursuing a collective design process with diverse stakeholders to actively perceive and leverage the affordances and constraints of the technology. The ongoing interaction among a network of heterogeneous actors gives rise to innovative perceptions, practices, and products, which collectively shape the emergence of the field of 3D printed fashion."}
{"_id": "cc92d8f5e97c5df3dd03df0f437d5061c709f65c", "title": "Addressing cold start in recommender systems: a semi-supervised co-training algorithm", "text": "Cold start is one of the most challenging problems in recommender systems. In this paper we tackle the cold-start problem by proposing a context-aware semi-supervised co-training method named CSEL. Specifically, we use a factorization model to capture fine-grained user-item context. Then, in order to build a model that is able to boost the recommendation performance by leveraging the context, we propose a semi-supervised ensemble learning algorithm. The algorithm constructs different (weak) prediction models using examples with different contexts and then employs the co-training strategy to allow each (weak) prediction model to learn from the other prediction models. The method has several distinguished advantages over the standard recommendation methods for addressing the cold-start problem. First, it defines a fine-grained context that is more accurate for modeling the user-item preference. Second, the method can naturally support supervised learning and semi-supervised learning, which provides a flexible way to incorporate the unlabeled data.\n The proposed algorithms are evaluated on two real-world datasets. The experimental results show that with our method the recommendation accuracy is significantly improved compared to the standard algorithms and the cold-start problem is largely alleviated."}
{"_id": "049f4f88fdb42c91796ab93598fc4e23f3cbdebe", "title": "Learning Extractors from Unlabeled Text using Relevant Databases", "text": "Supervised machine learning algorithms for information extraction generally require large amounts of training data. In many cases where labeling training data is burdensome, there may, however, already exist an incomplete database relevant to the task at hand. Records from this database can be used to label text strings that express the same information. For tasks where text strings do not follow the same format or layout, and additionally may contain extra information, labeling the strings completely may be problematic. This paper presents a method for training extractors which fill in missing labels of a text sequence that is partially labeled using simple high-precision heuristics. Furthermore, we improve the algorithm by utilizing labeled fields from the database. In experiments with BibTeX records and research paper citation strings, we show a significant improvement in extraction accuracy over a baseline that only relies on the database for training data."}
{"_id": "72abc9fe3b65a753153916a39c72b91d7328f42b", "title": "Cloaker: Hardware Supported Rootkit Concealment", "text": "Rootkits are used by malicious attackers who desire to run software on a compromised machine without being detected. They have become stealthier over the years as a consequence of the ongoing struggle between attackers and system defenders. In order to explore the next step in rootkit evolution and to build strong defenses, we look at this issue from the point of view of an attacker. We construct Cloaker, a proof-of-concept rootkit for the ARM platform that is non- persistent and only relies on hardware state modifications for concealment and operation. A primary goal in the design of Cloaker is to not alter any part of the host operating system (OS) code or data, thereby achieving immunity to all existing rootkit detection techniques which perform integrity, behavior and signature checks of the host OS. Cloaker also demonstrates that a self-contained execution environment for malicious code can be provided without relying on the host OS for any services. Integrity checks of hardware state in each of the machine's devices are required in order to detect rootkits such as Cloaker. We present a framework for the Linux kernel that incorporates integrity checks of hardware state performed by device drivers in order to counter the threat posed by rootkits such as Cloaker."}
{"_id": "54eef481dffdb85a141e29146a17aaae44ef39ec", "title": "Detecting Driver Drowsiness Based on Sensors: A Review", "text": "In recent years, driver drowsiness has been one of the major causes of road accidents and can lead to severe physical injuries, deaths and significant economic losses. Statistics indicate the need of a reliable driver drowsiness detection system which could alert the driver before a mishap happens. Researchers have attempted to determine driver drowsiness using the following measures: (1) vehicle-based measures; (2) behavioral measures and (3) physiological measures. A detailed review on these measures will provide insight on the present systems, issues associated with them and the enhancements that need to be done to make a robust system. In this paper, we review these three measures as to the sensors used and discuss the advantages and limitations of each. The various ways through which drowsiness has been experimentally manipulated is also discussed. We conclude that by designing a hybrid drowsiness detection system that combines non-intrusive physiological measures with other measures one would accurately determine the drowsiness level of a driver. A number of road accidents might then be avoided if an alert is sent to a driver that is deemed drowsy."}
{"_id": "6e56d2fbf2e6c3f3151a9946fd03c1b36ae0b007", "title": "Comparative study of different methods of social network analysis and visualization", "text": "A Social Network is a social structure made of individuals or organizations that are linked by one or more specific types of interdependency, such as friends, kinship, terrorist relation, conflict, financial exchange, disease transmission (epidemiology), airline routes etc. Social Network Analysis is an approach to the study of human or organization social interactions. It can be used to investigate kinship patterns, community structure or the organization of other formal and informal social networks. Recently, email networks have been popular source of data for both analysis and visualization of social network of a person or organization. In this paper, three types of visualization technique to analyze social networks have been considered - Force directed layout, Spherical layout and Clustered layout. Each method was evaluated with various data sets from an organization. Force directed layout is used to view the total network structure (Overview). Spherical layout is a 3D method to reveal communication patterns and relationships between different groups. Clustered graph layout visualizes a large amount of data in an efficient and compact form. It gives a hierarchical view of the network. Among the three methods, Clustered layout is the best to handle large network and group relationship. Finally, the comparative study of these three methods has been given."}
{"_id": "08ff9a747e1b8c851403461f3e5c811a97eda4f8", "title": "A Systematic Analysis of XSS Sanitization in Web Application Frameworks", "text": "While most research on XSS defense has focused on techniques for securing existing applications and re-architecting browser mechanisms, sanitization remains the industry-standard defense mechanism. By streamlining and automating XSS sanitization, web application frameworks stand in a good position to stop XSS but have received little research attention. In order to drive research on web frameworks, we systematically study the security of the XSS sanitization abstractions frameworks provide. We develop a novel model of the web browserions frameworks provide. We develop a novel model of the web browser and characterize the challenges of XSS sanitization. Based on the model, we systematically evaluate the XSS abstractions in 14 major commercially-used web frameworks. We find that frameworks often do not address critical parts of the XSS conundrum. We perform an empirical analysis of 8 large web applications to extract the requirements of sanitization primitives from the perspective of realworld applications. Our study shows that there is a wide gap between the abstractions provided by frameworks and the requirements of applications."}
{"_id": "42141ca5134cca10e129e587a0061c819b67e69b", "title": "An integrated approach of diet and exercise recommendations for diabetes patients", "text": "Diabetes is among one of the fastest growing disease all over the world. Controlled diet and proper exercise are considered as a treatment to control diabetes. However, food and exercise suggestions in existing solutions do not consider integrated knowledge from personal profile, preferences, current vital signs, diabetes domain, food domain and exercise domain. Furthermore, there is a strong correlation of diet and exercise. We have implemented an ontology based integrated approach to combine knowledge from various domains to generate diet and exercise suggestions for diabetics. The solution is developed as a Semantic Healthcare Assistant for Diet and Exercise (SHADE). For each domain (person, diabetes, food and exercise) we have defined separate ontology along with rules and then an integrated ontology combines these individual ontologies. Finally, diet recommendations are presented in the form of various alternative menus such that each menu is a healthy and balanced diet."}
{"_id": "7e9bfb62ba48bbd8d9c13ef1dc7b93fcc58efea8", "title": "Low-profile dual-band circularly polarized microstrip antenna for GNSS applications", "text": "This paper presents a design of a micro-strip circularly polarized antenna intended for the Global Navigation Satellite Systems (GNSS). The presented device is composed of a micro-strip slotted patch antenna printed on a Rogers RO3006 substrate, a foam layer of 2 mm thick and a wideband commercial 3-dB SMT coupler. The combined fullwave antenna results with the measured S-Parameters of the coupler shows very good performances in terms of antenna matching and axial ratio on larger bandwidths."}
{"_id": "cc2fd4427572cc0ce8d172286277a88272c96d03", "title": "The PLEX Cards and its techniques as sources of inspiration when designing for playfulness", "text": "Playfulness can be observed in all areas of human activity. It is an attitude of making activities more enjoyable. Designing for playfulness involves creating objects that elicit a playful approach and provide enjoyable experiences. In this paper, we introduce the design and evaluation of the PLEX Cards and its two related idea generation techniques. The cards were created to communicate the 22 categories of a playful experiences framework to designers and other stakeholders who wish to design for playfulness. We have evaluated the helpfulness of both the cards and their associated techniques in two studies. The results show that the PLEX Cards and its associated techniques are valuable sources of inspiration when designing for playfulness."}
{"_id": "684250ab1e1459197f4e37304e4103cd6848a826", "title": "Cloud Federation: Effects of Federated Compute Resources on Quality of Service and Cost*", "text": "Cloud Federation is one concept to confront challenges that still persist in Cloud Computing, such as vendor lock-in or compliance requirements. The lack of a standardized meaning for the term Cloud Federation has led to multiple conflicting definitions and an unclear prospect of its possible benefits. Taking a client-side perspective on federated compute services, we analyse how choosing a certain federation strategy affects Quality of Service and cost of the resulting service or application. Based on a use case, we experimentally prove our analysis to be correct and describe the different trade-offs that exist within each of the strategies."}
{"_id": "267411fb2e73d4f9cbef8bebc3bb6140ea7dd43c", "title": "The nucleotide sequence of a HMW glutenin subunit gene located on chromosome 1A of wheat (Triticum aestivum L.).", "text": "A cloned 8.2 kb EcoRI fragment has been isolated from a genomic library of DNA derived from Triticum aestivum L. cv. Cheyenne. This fragment contains sequences related to the high molecular weight (HMW) subunits of glutenin, proteins considered to be important in determining the elastic properties of gluten. The cloned HMW subunit gene appears to be derived from chromosome 1A. The nucleotide sequence of this gene has provided new information on the structure and evolution of the HMW subunits. However, hybrid-selection translation experiments suggest that this gene is silent."}
{"_id": "213a25383e5b9c4db9dc6908e1be53d66ead82ca", "title": "dyngraph2vec: Capturing Network Dynamics using Dynamic Graph Representation Learning", "text": "Learning graph representations is a fundamental task aimed at capturing various properties of graphs in vector space. The most recent methods learn such representations for static networks. However, real world networks evolve over time and have varying dynamics. Capturing such evolution is key to predicting the properties of unseen networks. To understand how the network dynamics affect the prediction performance, we propose an embedding approach which learns the structure of evolution in dynamic graphs and can predict unseen links with higher precision. Our model, dyngraph2vec, learns the temporal transitions in the network using a deep architecture composed of dense and recurrent layers. We motivate the need of capturing dynamics for prediction on a toy data set created using stochastic block models. We then demonstrate the efficacy of dyngraph2vec over existing state-of-the-art methods on two real world data sets. We observe that learning dynamics can improve the quality of embedding and yield better performance in link prediction."}
{"_id": "e0f89f02f6de65dd84da556f4e2db4ced9713f6f", "title": "Operational capabilities development in mediated offshore software services models", "text": "The paper expands theoretical and empirical understanding of capabilities development in the mediated offshore outsourcing model whereby a small or a medium-sized firm delivers offshore software services to a larger information technology firm that in turn contracts and interfaces with the actual end-client onshore firms. Such a mediated model has received little prior research attention, although it is common particularly among Chinese firms exporting services to Japan, the largest export market for Chinese software services. We conducted case studies in four China-based software companies to understand the mechanisms used to develop their operational capabilities. We focused on client-specific, process, and human resources capabilities that have been previously associated with vendor success. We found a range of learning mechanisms to build the capabilities in offshore firms. Results show that the development of human resources capabilities was most challenging in the mediated model; yet foundational for the development of the other capabilities. This paper contributes to the information systems literature by improving our understanding of the development of operational capabilities in smalland medium-sized Chinese firms that deploy the mediated model of offshore"}
{"_id": "c557a7079a885c1bbc0b21057a0e16f927179ac7", "title": "Unsupervised detection of surface defects: A two-step approach", "text": "In this paper, we focus on the problem of finding anomalies in surface images. Despite enormous research efforts and advances, it still remains a big challenge to be solved. This paper proposes a unified approach for defect detection. Our proposed method consists of two phases: (1) global estimation and (2) local refinement. First, we roughly estimate defects by applying a spectral-based approach in a global manner. We then locally refine the estimated region based on the distributions of pixel intensities derived from defect and defect-free regions. Experimental results show that the proposed method outperforms the previous defect detection methods and gives robust results even in noisy surface defect images."}
{"_id": "cc0530e3c7463d51ad51eb90e25ccbd7dda11814", "title": "The Ideology of Interactivity (or Video Games and Taylorization of Leisure)", "text": "Interactivity is one of the key conceptual apparatuses through which video games have been theorized thus far. As many writers have noted, video games are distinct from other forms of media because player actions seem to have direct, immediate consequences in the world depicted onscreen. But in many ways, this \u201cinteractive\u201d feature of video games tends to manifest itself as a relentless series of demands, or a way of disciplining player behavior. In this sense, it seems more accurate to describe the human-machine interface made possible by gaming as an aggressive form of \u201cinterpellation\u201d or hailing. Drawing primarily upon the work of Louis Althusser, I argue that traditional theories of interactivity fail to acknowledge the work of video games\u2014in other words, the extent to which video games define and reconstitute players as subjects of ideology."}
{"_id": "0407d76a4f3a4cb35a63734eaea41a31f8008aef", "title": "Generalized Methods for the Design of Quasi-Ideal Symmetric and Asymmetric Coupled-Line Sections and Directional Couplers", "text": "Generalized methods for the design of quasi-ideal symmetric and asymmetric coupled-line sections have been proposed. Two cases of coupled-line sections, in which the inductive coupling coefficient is either greater or smaller than the capacitive coupling coefficient, have been separately considered. To compensate for the coupling coefficients' inequality, a combination of lumped capacitors and short coupled-line sections has been introduced into the coupled-line section. The proposed methods allow for designing quasi-ideal coupled-line sections in nearly arbitrarily chosen dielectric media and for arbitrarily chosen coupling. The theoretical analyses have been verified by the design and measurement of three compensated coupled-line directional couplers illustrating application of the proposed methods in different dielectric structures."}
{"_id": "946716c6759498d1ee953ad6d13b052a5052c6a8", "title": "Bullying, cyberbullying, and mental health in young people.", "text": "OBJECTIVE\nTo investigate the factors associated with exposure to in-real-life (IRL) bullying, cyberbullying, and both IRL and cyberbullying and to explore the relationship between these types of bullying and mental health among 13-16-year-old Swedish boys and girls.\n\n\nMETHODS\nData was derived from a cross-sectional web-based study of 13-16-year-old students in northern Sweden (n=1214, response rate 81.9%).\n\n\nRESULTS\nThe combination of IRL- and cyberbullying was the most common type of bullying. A non-supportive school environment and poor body image were related to exposure to bullying for both genders but the relationship was more distinct in girls. All types of bullying were associated with depressive symptoms in both boys and girls and all forms of bullying increased the likelihood of psychosomatic problems in girls.\n\n\nCONCLUSIONS\nCyberbullying can be seen as an extension of IRL bullying. A combination of IRL- and cyberbullying seems to be particularly negative for mental health. Interventions should focus on improved school environment and body image as well as anti-violence programmes. Gender aspects of bullying need to be acknowledged."}
{"_id": "173443511c450bd8f61e3d1122982f74c94147ae", "title": "Pivoted Document Length Normalization", "text": "Automatic information retrieval systems have to deal with documents of varying lengths in a text collection. Document length normalization is used to fairly retrieve documents of all lengths. In this study, we ohserve that a normalization scheme that retrieves documents of all lengths with similar chances as their likelihood of relevance will outperform another scheme which retrieves documents with chances very different from their likelihood of relevance. We show that the retrievaf probabilities for a particular normalization method deviate systematically from the relevance probabilities across different collections. We present pivoted normalization, a technique that can be used to modify any normalization function thereby reducing the gap between the relevance and the retrieval probabilities. Training pivoted normalization on one collection, we can successfully use it on other (new) text collections, yielding a robust, collectzorz independent normalization technique. We use the idea of pivoting with the well known cosine normalization function. We point out some shortcomings of the cosine function andpresent two new normalization functions--pivoted unique normalization and piuotert byte size normalization."}
{"_id": "27d9d71afb06ea5d4295fbdcc7cd67e491d50cc4", "title": "Nearest neighbor pattern classification", "text": "The case of n unity-variance random variables x1, XZ,. * *, x, governed by the joint probability density w(xl, xz, * * * x,) is considered, where the density depends on the (normalized) cross-covariances pii = E[(xi jzi)(xi li)]. It is shown that the condition holds for an \u201carbitrary\u201d function f(xl, x2, * * * , x,) of n variables if and only if the underlying density w(xl, XZ, * * * , x,) is the usual n-dimensional Gaussian density for correlated random variables. This result establishes a generalized form of Price\u2019s theorem in which: 1) the relevant condition (*) subsumes Price\u2019s original condition; 2) the proof is accomplished without appeal to Laplace integral expansions; and 3) conditions referring to derivatives with respect to diagonal terms pii are avoided, so that the unity variance assumption can be retained. Manuscript received February 10, 1966; revised May 2, 1966. The author is with the Ordnance Research Laboratory, Pennsylvania State University, State College, Pa. RICE\u2019S THEOREM and its various extensions ([l]-[4]) have had great utility in the determination of output correlations between zero-memory nonlinearities subjected to jointly Gaussian inputs. In its original form, the theorem considered n jointly normal random variables, x1, x2, . . . x,, with respective means 21, LE2, . . . ) Z, and nth-order joint probability density, P(z 1, .x2, . 1 . , :r,,) = (27p ply,, y . exp { -a F F ;;;, ~ (2,. 2,)(:r, 5,) I , (1) where IM,l is the determinant\u2019 of J1,, = [p,,], Pr-. = E[(sr 5$.)(x, ,2J] = xvx, &IL:, is the correlation coefficient of x, and x,, and Ail,, is the cofactor of p.? in ilf,. From [l], the theorem statement is as follows: \u201cLet there be n zero-memory nonlinear devices specified by the input-output relationship f<(x), i = 1, 2, . 1 . , n. Let each xi be the single input to a corresponding fi(x) Authorized licensed use limited to: West Virginia University. Downloaded on February 26, 2009 at 13:41 from IEEE Xplore. Restrictions apply."}
{"_id": "64b3435826a94ddd269b330e6254579f3244f214", "title": "Matrix computations (3. ed.)", "text": ""}
{"_id": "ba6419a7a4404174ba95a53858632617c47cfff0", "title": "Statistical learning theory", "text": ""}
{"_id": "37357230347ed6ed8912a09c7ba4c35260e84e4c", "title": "A flexible coupling approach to multi-agent planning under incomplete information", "text": "Multi-agent planning (MAP) approaches are typically oriented at solving loosely coupled problems, being ineffective to deal with more complex, strongly related problems. In most cases, agents work under complete information, building complete knowledge bases. The present article introduces a general-purpose MAP framework designed to tackle problems of any coupling levels under incomplete information. Agents in our MAP model are partially unaware of the information managed by the rest of agents and share only the critical information that affects other agents, thus maintaining a distributed vision of the task. Agents solve MAP tasks through the adoption of an iterative refinement planning procedure that uses single-agent planning technology. In particular, agents will devise refinements through the partial-order planning paradigm, a flexible framework to build refinement plans leaving unsolved details that will be gradually completed by means of new refinements. Our proposal is supported with the implementation of a fully operative MAP system and we show various experiments when running our system over different types of MAP problems, from the most strongly related to the most loosely coupled."}
{"_id": "9f0f1f373cabdf5f232d490990b5cf03e110349a", "title": "Image Processing Operations Identification via Convolutional Neural Network", "text": "In recent years, image forensics has attracted more and more attention, and many forensic methods have been proposed for identifying image processing operations. Up to now, most existing methods are based on hand crafted features, and just one specific operation is considered in their methods. In many forensic scenarios, however, multiple classification for various image processing operations is more practical. Besides, it is difficult to obtain effective features by hand for some image processing operations. In this paper, therefore, we propose a new convolutional neural network (CNN) based method to adaptively learn discriminative features for identifying typical image processing operations. We carefully design the high pass filter bank to get the image residuals of the input image, the channel expansion layer to mix up the resulting residuals, the pooling layers, and the activation functions employed in our method. The extensive results show that the proposed method can outperform the currently best method based on hand crafted features and three related methods based on CNN for image steganalysis and/or forensics, achieving the state-of-the-art results. Furthermore, we provide more supplementary results to show the rationality and robustness of the proposed model."}
{"_id": "7606f448d1ad155bb6d61aed0aa2e1b22723f4db", "title": "Real Time Identification and Prevention of Doziness using Face Recognition System", "text": "A large number of road accidents in the world occur due to driver\u2019s not taking proper breaks while driving long distances. This System is used to detect fatigue behaviour of the driver well and before any accident occurs. In this system driver\u2019s facial appearance is extracted via high speed camera which is placed in car in front of driver. This camera will capture the frames at every instant of time. Each of this frame will be processed using image processing algorithm. The mouth and eye region is separated from each of the frames and analyzed for concluding whether the driver is dozy or not. The system draws conclusion if the eyes are found closed for five consecutive frame. If doziness is detected, a signal alert is triggered. In case accidents occurs, message is sent to the desired number."}
{"_id": "9a6258dd41d81db3320cfe0b88911489d1ee23f0", "title": "A Comprehensive Review of Smart Wheelchairs: Past, Present, and Future", "text": "A smart wheelchair (SW) is a power wheelchair (PW) to which computers, sensors, and assistive technology are attached. In the past decade, there has been little effort to provide a systematic review of SW research. This paper aims to provide a complete state-of-the-art overview of SW research trends. We expect that the information gathered in this study will enhance awareness of the status of contemporary PW as well as SW technology and increase the functional mobility of people who use PWs. We systematically present the international SW research effort, starting with an introduction to PWs and the communities they serve. Then, we discuss in detail the SW and associated technological innovations with an emphasis on the most researched areas, generating the most interest for future research and development. We conclude with our vision for the future of SW research and how to best serve people with all types of disabilities."}
{"_id": "be52a712b18acf0d33c561c48691b9fe007b05e2", "title": "Electrochemical Sensors for Soil Nutrient Detection: Opportunity and Challenge", "text": "Soil testing is the basis for nutrient recommendation and formulated fertilization. This study presented a brief overview of potentiometric electrochemical sensors (ISE and ISFET) for soil NPK detection. The opportunities and challenges for electrochemical sensors in soil testing were"}
{"_id": "1bfce3f5a992223f7d0d0149e401d4b7fb35702d", "title": "Adaptive resource allocation for wildlife protection against illegal poachers", "text": "Illegal poaching is an international problem that leads to the extinction of species and the destruction of ecosystems. As evidenced by dangerously dwindling populations of endangered species, existing anti-poaching mechanisms are insufficient. This paper introduces the Protection Assistant for Wildlife Security (PAWS) application a joint deployment effort done with researchers at Uganda\u2019s Queen Elizabeth National Park (QENP) with the goal of improving wildlife ranger patrols. While previous works have deployed applications with a game-theoretic approach (specifically Stackelberg Games) for counter-terrorism, wildlife crime is an important domain that promotes a wide range of new deployments. Additionally, this domain presents new research challenges and opportunities related to learning behavioral models from collected poaching data. In addressing these challenges, our first contribution is a behavioral model extension that captures the heterogeneity of poachers\u2019 decision making processes. Second, we provide a novel framework, PAWS-Learn, that incrementally improves the behavioral model of the poacher population with more data. Third, we develop a new algorithm, PAWS-Adapt, that adaptively improves the resource allocation strategy against the learned model of poachers. Fourth, we demonstrate PAWS\u2019s potential effectiveness when applied to patrols in QENP, where PAWS will be deployed."}
{"_id": "144de7e34f091e9e622114f1cf59c09e26cb32ac", "title": "Automatic knowledge extraction from OCR documents using hierarchical document analysis", "text": "Industries can improve their business efficiency by analyzing and extracting relevant knowledge from large numbers of documents. Knowledge extraction manually from large volume of documents is labor intensive, unscalable and challenging. Consequently there have been a number of attempts to develop intelligent systems to automatically extract relevant knowledge from OCR documents. Moreover, the automatic system can improve the capability of search engine by providing application-specific domain knowledge. However, extracting the efficient information from OCR documents is challenging due to highly unstructured format [1, 11, 18, 26]. In this paper, we propose an efficient framework for a knowledge extraction system that takes keywords based queries and automatically extracts their most relevant knowledge from OCR documents by using text mining techniques. The framework can provide relevance ranking of knowledge to a given query. We tested the proposed framework on corpus of documents at GE Power where document consists of more than hundred pages in PDF."}
{"_id": "9b84fa56b2eb41a67d32adf06942fcb50665f326", "title": "What is principal component analysis?", "text": "Several measurement techniques used in the life sciences gather data for many more variables per sample than the typical number of samples assayed. For instance, DNA microarrays and mass spectrometers can measure levels of thousands of mRNAs or proteins in hundreds of samples. Such high-dimensionality makes visualization of samples difficult and limits simple exploration of the data. Principal component analysis (PCA) is a mathematical algorithm that reduces the dimensionality of the data while retaining most of the variation in the data set1. It accomplishes this reduction by identifying directions, called principal components, along which the variation in the data is maximal. By using a few components, each sample can be represented by relatively few numbers instead of by values for thousands of variables. Samples can then be plotted, making it possible to visually assess similarities and differences between samples and determine whether samples can be grouped. Saal et al.2 used microarrays to measure the expression of 27,648 genes in 105 breast tumor samples. I will use this gene expression data set, which is available through the Gene Expression Omnibus database (accession no. GSE5325), to illustrate how PCA can be used to represent samples with a smaller number of variables, visualize samples and genes, and detect dominant patterns of gene expression. My aim with this example is to leave you with an idea of how PCA can be used to explore data sets in which thousands of variables have been measured."}
{"_id": "b9bb5f4dec8e7cb9974a2eb6ad558d919d0ebe9c", "title": "DsUniPi: An SVM-based Approach for Sentiment Analysis of Figurative Language on Twitter", "text": "The DsUniPi team participated in the SemEval 2015 Task#11: Sentiment Analysis of Figurative Language in Twitter. The proposed approach employs syntactical and morphological features, which indicate sentiment polarity in both figurative and non-figurative tweets. These features were combined with others that indicate presence of figurative language in order to predict a fine-grained sentiment score. The method is supervised and makes use of structured knowledge resources, such as SentiWordNet sentiment lexicon for assigning sentiment score to words and WordNet for calculating word similarity. We have experimented with different classification algorithms (Na\u00efve Bayes, Decision trees, and SVM), and the best results were achieved by an SVM classifier with linear kernel."}
{"_id": "35c867252b0a105339acced5cdde4d2b9fb49b54", "title": "A Critical Review of Marketing Research on Diffusion of New Products", "text": "We critically examine alternate models of the diffusion of new products and the turning points of the diffusion curve. On each of these topics, we focus on the drivers, specifications, and estimation methods researched in the literature. We discover important generalizations about the shape, parameters, and turning points of the diffusion curve and the characteristics of diffusion across early stages of the product life cycle. We point out directions for future research. Because new products affect every aspect of the life of individuals, communities, countries, and economies, the study of the diffusion of innovations is of vital importance. Researchers have studied this topic in various disciplines, including marketing, economics, medicine, agriculture, sociology, anthropology, geography, and technology management. We present a critical review of research on the diffusion of new products primarily in the marketing literature, but also in the economics and geography literature. We use the word product broadly to cover any good, service, idea, or person. We distinguish the term new product from the broader term innovation, which refers to both new product and new method, practice, institution, or social entity. Even though we restrict our review to the marketing literature, which focuses on the diffusion of new products, the implications of our review may hold as well as for the study of the diffusion of innovations in other disciplines. The marketing literature on this topic is vast, dating back at least as early as the publication by Fourt and Woodlock (1960). The term diffusion has been used differently in two groups of literatures. Within economics and most nonmarketing disciplines, diffusion is defined as the spread of an innovation across social groups over time (Brown, 1981; Stoneman, 2002). As such, the phenomenon is separate from the drivers, which can be consumer income, the product\u2019s price, word-of-mouth communication, and so on. In marketing and communication, diffusion typically has come to mean the communication of an innovation through the population (Golder and Tellis, 1998; Mahajan, Muller, and Wind, 2000a; Mahajan, Muller, and Bass, 1990; Rogers 1995). In this sense, the phenomenon (spread of a product) is synonymous with its underlying driver (communication). The Webster (2004) definition of the noun \u201cdiffusion\u201d is \u201cthe spread of a cultural or technological practice or innovation from one region or people to another, as by trade or conquest\u201d and the verb \u201cdiffusing\u201d is \u201cpour, spread out or disperse in every direction; spread or scatter widely.\u201d This latter interpretation is synonymous with the term\u2019s use in economics and most other disciplines. In addition, some researchers in marketing have subscribed to the definition used in economics (Bemmaor, 1994; Dekimpe, Parker, and Sarvary, 2000a; Van den Bulte and 40 DEEPA CHANDRASEKARAN AND GERARD J. TELLIS Stremersch, 2004). Hence, in this review, we define diffusion as the spread of an innovation across markets over time. Researchers commonly measure diffusion using the sales and especially the market penetration of a new product during the early stages of its life cycle. To characterize this phenomenon carefully, we adopt the definitions of the stages and turning points of the product\u2019s life cycle by Golder and Tellis (2004): 1. 
Commercialization is the date a new product is first sold. 2. Takeoff is the first dramatic and sustained increase in a new product\u2019s sales. 3. Introduction is the period from a new product\u2019s commercialization until its takeoff. 4. Slowdown is the beginning of a period of level, slowly increasing, or temporarily decreasing product sales after takeoff. 5. Growth is the period from a new product\u2019s takeoff until its slowdown. 6. Maturity is the period from a product\u2019s slowdown until sales begin a steady decline. Hence, there are two key turning points in the diffusion curve: takeoff and slowdown. Prior reviews address various aspects of the marketing literature on the diffusion of new products. For example, Mahajan, Muller, and Bass (1990) provide an excellent overview of the Bass model, its extensions, and some directions for further research. Parker (1994) provides an overview of the Bass model and evaluates the various estimation techniques, forecasting abilities, and specification improvements of the model. Mahajan, Muller, and Bass (1995) summarize the generalizations from applications of the Bass model. An edited volume by Mahajan, Muller, and Wind (2000b) covers in depth various topics in diffusion models, such as specification, estimation, and applications. Sultan, Farley, and Lehmann (1990) and Van den Bulte and Stremersch (2004) meta-analyze the diffusion parameters of the Bass model. The current review differs from prior reviews in two important aspects. First, the prior reviews focus on the S-curve of cumulative sales of a new product, mostly covering growth. This review focuses on phenomena besides the S-curve, such as takeoff and slowdown. Second, the above reviews focus mainly on the Bass model. This review considers the Bass model as well as other models of diffusion and drivers of new product diffusion other than communication. Our key findings and the most useful part of our study is the discovery of potential generalizations from past research. For the benefit of readers who are familiar with this topic, we present these generalizations before details of the measures, models, and methods used in past research. (Readers who are unfamiliar with the topic may want to read the Potential Generalizations section last). Therefore, we organize the rest of the chapter as follows. In the next section, we summarize potential generalizations from prior research. In the third section, we point out limitations of past research and directions for future research. In the fourth section, we evaluate key models and drivers of the diffusion curve. In the fifth section, we evaluate models of the key turning points in diffusion: takeoff and slowdown. Potential Generalizations We use the term potential generalizations or regularities to describe empirical findings with substantial support. By substantial, we mean that support comes from reviews or meta-analyses of the literature or individual studies with a large sample of over ten categories or ten countries. Table 2.1 lists the studies on which the potential generalizations are based. This section covers important findings about the shape of the diffusion curve, parameters of the Bass models, the turning points of diffusion, and findings across stages of the diffusion curve. MARKETING RESEARCH ON DIFFUSION OF NEW PRODUCTS 41 Shape of the Diffusion Curve The most important and most widely reported finding about new product diffusion relates to the shape of the diffusion curve (see Figure 2.1). 
Numerous studies in a variety of disciplines suggest that (with the exception of entertainment products) the plot of cumulative sales of new products against time is an S-shaped curve (e.g., Mahajan, Muller, and Bass 1990; Mahajan, Muller, and Wind, 2000a). Parameters of the Bass Model Most of the marketing studies use the Bass diffusion model to capture the S-shaped curve of new products sales (see later section for explanation). This model has three key parameters: the coefficient of innovation or external influence (p), the coefficient of imitation or internal influence (q), and the market potential (\u03b1 or m). Table 2.1 Studies Included for Assessing Potential Generalizations Authors Categories Countries Gatignon, Eliashberg, and 6 consumer durables 14 European countries Robertson (1989) Mahajan, Muller, and Bass (1990) Numerous studies Sultan, Farley, and Lehmann (1990) 213 applications United States, European countries Helsen, Jedidi, and DeSarbo (1993) 3 consumer durables 11 European countries and United States Ganesh and Kumar (1996) 1 industrial product 10 European countries, United States, and Japan Ganesh, Kumar, Subramaniam (1997) 4 consumer durables 16 European countries Golder and Tellis (1997) 31 consumer durables United States Putsis et al., (1997) 4 consumer durables 10 European countries Dekimpe, Parker, and Sarvary (1998) 1 service 74 countries Kumar, Ganesh, and Echambadi (1998) 5 consumer durables 14 European countries Golder and Tellis (1998) 10 consumer durables United States Kohli, Lehmann, and Pae (1999) 32 appliances, house United States wares and electronics Dekimpe, Parker, and Sarvary (2000) 1 innovation More than 160 countries Mahajan, Muller, and Wind (2000) Numerous studies Van den Bulte (2000) 31 consumer durables United States Talukdar, Sudhir, and Ainslie (2002) 6 consumer durables 31 countries Agarwal and Bayus (2002) 30 innovations United States Goldenberg, Libai, and Muller (2002) 32 innovations United States Tellis, Stremersch, and Yin (2003) 10 consumer durables 16 European countries Golder and Tellis (2004) 30 consumer durables United States Stremersch and Tellis (2004) 10 consumer durables 16 European countries Van den Bulte and Stremersch (2004) 293 applications 28 countries <> <> 42 DEEPA CHANDRASEKARAN AND GERARD J. TELLIS Coefficient of Innovation \u2022 The mean value of the coefficient of innovation for a new product lies between 0.0007 and .03 (Sultan, Farley, and Lehmann, 1990; Talukdar, Sudhir, and Ainslie, 2002; Van den Bulte and Stremersch, 2004). \u2022 The mean value of the coefficient of innovation for a new product is 0.001 for developed countries and 0.0003 for developing countries (Talukdar, Sudhir, and Ainslie, 2002). \u2022 The coefficient of innovation is higher for European countries than for the United States (Sultan, Farley, and Lehmann, 1990). Coefficient of Imitation \u2022 The mean value of the coefficient of imitation for a new product lies between 0.38 and 0.53 (Sultan, Farley, and Lehmann, 1990; Talukdar, Sudhir, and Ainslie, 2002; Van den Bulte and Stremersch, 2004). \u2022 Indu"}
{"_id": "b770794840edef3dc6a36fd9d55f0cf491bec42b", "title": "GPU-based cone beam computed tomography", "text": "The use of cone beam computed tomography (CBCT) is growing in the clinical arena due to its ability to provide 3D information during interventions, its high diagnostic quality (sub-millimeter resolution), and its short scanning times (60 s). In many situations, the short scanning time of CBCT is followed by a time-consuming 3D reconstruction. The standard reconstruction algorithm for CBCT data is the filtered backprojection, which for a volume of size 256(3) takes up to 25 min on a standard system. Recent developments in the area of Graphic Processing Units (GPUs) make it possible to have access to high-performance computing solutions at a low cost, allowing their use in many scientific problems. We have implemented an algorithm for 3D reconstruction of CBCT data using the Compute Unified Device Architecture (CUDA) provided by NVIDIA (NVIDIA Corporation, Santa Clara, California), which was executed on a NVIDIA GeForce GTX 280. Our implementation results in improved reconstruction times from minutes, and perhaps hours, to a matter of seconds, while also giving the clinician the ability to view 3D volumetric data at higher resolutions. We evaluated our implementation on ten clinical data sets and one phantom data set to observe if differences occur between CPU and GPU-based reconstructions. By using our approach, the computation time for 256(3) is reduced from 25 min on the CPU to 3.2 s on the GPU. The GPU reconstruction time for 512(3) volumes is 8.5 s."}
{"_id": "e62622831d968d7bf9440c0ea04252e46a649da6", "title": "Achievement Goals in the Classroom : Students ' Learning Strategies and Motivation Processes", "text": "We studied how specific motivational processes are related to the salience of mastery and performance goals in actual classroom settings. One hundred seventy-six students attending a junior high/high school for academically advanced students were randomly selected from one of their classes and responded to a questionnaire on their perceptions of the classroom goal orientation, use of effective learning strategies, task choices, attitudes, and causal attributions. Students who perceived an emphasis on mastery goals in the classroom reported using more effective strategies, preferred challenging tasks, had a more positive attitude toward the class, and had a stronger belief that success follows from one's effort. Students who perceived performance goals as salient tended to focus on their ability, evaluating their ability negatively and attributing failure to lack of ability. The pattern and strength of the findings suggest that the classroom goal orientation may facilitate the maintenance of adaptive motivation patterns when mastery goals are salient and are adopted by students."}
{"_id": "104f1e76b653386c38b4a0bd535ad08c4a651832", "title": "Calibrating the COCOMO II Post-Architecture Model", "text": "The COCOMO II model was created to meet the need for a cost model that accounted for future software development practices. This resulted in the formulation of three submodels for cost estimation, one for composing applications, one for early lifecycle estimation and one for detailed estimation when the architecture of the product is understood. This paper describes the calibration procedures for the last model, Post-Architecture COCOMO II model, from eighty-three observations. The results of the multiple regression analysis and their implications are discussed. Future work includes further analysis of the PostArchitecture model, calibration of the other models, derivation of maintenance parameters, and refining the effort distribution for the model output."}
{"_id": "10b37a3018286dc787fd2cf25caae72f83035f10", "title": "Meditation states and traits: EEG, ERP, and neuroimaging studies.", "text": "Neuroelectric and imaging studies of meditation are reviewed. Electroencephalographic measures indicate an overall slowing subsequent to meditation, with theta and alpha activation related to proficiency of practice. Sensory evoked potential assessment of concentrative meditation yields amplitude and latency changes for some components and practices. Cognitive event-related potential evaluation of meditation implies that practice changes attentional allocation. Neuroimaging studies indicate increased regional cerebral blood flow measures during meditation. Taken together, meditation appears to reflect changes in anterior cingulate cortex and dorsolateral prefrontal areas. Neurophysiological meditative state and trait effects are variable but are beginning to demonstrate consistent outcomes for research and clinical applications. Psychological and clinical effects of meditation are summarized, integrated, and discussed with respect to neuroimaging data."}
{"_id": "197ef20f1c652589da145a625093e8b31082c470", "title": "Weighted Association Rule Mining using weighted support and significance framework", "text": "We address the issues of discovering significant binary relationships in transaction datasets in a weighted setting. Traditional model of association rule mining is adapted to handle weighted association rule mining problems where each item is allowed to have a weight. The goal is to steer the mining focus to those significant relationships involving items with significant weights rather than being flooded in the combinatornal explosion of insignificant relationships. We identify the challenge of using weights in the iterative process of generating large itemsets. The problem of invalidation of the \"downward closure property\" in the weighted setting is solved by using an improved model of weighted support measurements and exploiting a \"weighted downward closure property\". A new algorithm called WARM (Weighted Association Rule Mining) is developed based on the improved model. The algorithm is both scalable and efficient in discovering significant relationships in weighted settings as illustrated by experiments performed on simulated datasets."}
{"_id": "9467791cf63d3c8bf6e92c1786c91c9cf86ba256", "title": "A low-power Adaboost-based object detection processor using Haar-like features", "text": "This paper presents an architecture of a low-power real-time object detection processor using Adaboost with Haar-Like features. We employ a register array based architecture, and introduce two architectural-level power optimization techniques; signal gating domain for integral image extraction, and low-power integral image update. The power efficiency of our proposed architecture including nine classifiers is estimated to be 0.64mW/fps when handling VGA(640 \u00d7 480) 70fps video."}
{"_id": "4a96fbbfba14bb9237948286b9bfacf97f059576", "title": "A study on evolution of email spam over fifteen years", "text": "Email spam is a persistent problem, especially today, with the increasing dedication and sophistication of spammers. Even popular social media sites such as Facebook, Twitter, and Google Plus are not exempt from email spam as they all interface with email systems. With an \u201carms-race\u201d between spammers and spam filter developers, spam has been continually changing over the years. In this paper, we analyze email spam trends on a dataset collected by the Spam Archive, which contains 5.1 million spam emails spread over 15 years (1998-2013). We use statistical analysis techniques on different headers in email messages (e.g. content type and length) and embedded items in message body (e.g. URL links and HTML attachments). Also, we investigate topic drift by applying topic modeling on the content of email spam. Moreover, we extract sender-to-receiver IP routing networks from email spam and perform network analysis on it. Our results show the dynamic nature of email spam over one and a half decades and demonstrate that the email spam business is not dying but changing to be more capricious."}
{"_id": "19db9eb3a43dfbe5d45a70f65ef1fe39b1c1688c", "title": "Applying persuasive design in a diabetes mellitus application", "text": "This paper describes persuasive design methods and compares this to an application currently under development for diabetes mellitus patients. Various elements of persuasion and a categorization of persuasion types are mentioned. Also discussed are principles of how successful persuasion should be designed, as well as the practical applications and ethics of persuasive design. This paper is not striving for completeness of theories on the topic, but uses the theories to compare it to an application intended for diabetes mellitus patients. The results of this comparison can be used for improvements of the application."}
{"_id": "e8e6ef5ad06082d6e57112e5ff8ac0a44ea94527", "title": "Analysis of modulation schemes for Bluetooth-LE module for Internet-of-Things (IoT) applications", "text": "Bluetooth transceivers have been in the active area of recent research being the key component of physical layer in the Bluetooth technology. The low power consumption of Bluetooth low energy (LE) devices compared to the conventional Bluetooth devices has enhanced its importance in Internet-of-Things (IoT) applications. Therefore, Bluetooth low energy device based solution needs expansion in the IoT network infrastructure. The transceivers in the literature and modulation schemes are compared and summarized. Energy consumption of modulation schemes in Bluetooth communication are analyzed and compared using the model presented in this work. In this approach considering both circuit and signal power consumption, optimum modulation order for minimum energy consumption has been found using numerical calculation and relation between signal to noise ratio (SNR) and channel capacity. Battery life for IoT sensors using Bluetooth LE technology as a wireless link has been analyzed considering multiple transaction times for transmitters having different power consumption. MFSK and more bandwidth-efficient GFSK are identified as low energy solution for all smart devices."}
{"_id": "2d6a600e03e2ac7ae18fe2623c770878394814d6", "title": "Wheat glutenin subunits and dough elasticity: findings of the EUROWHEAT project", "text": "*IACR-Long Ashton Research Station, Department of Agricultural Sciences, University of Bristol, Long Ashton BS41 9AF, UK (tel: +1275-392181; fax: +1275-394299; e-mail: peter.shewry@bbsrc.ac.uk) {Institut National de la Recherche Agronomique, Centre de Recherches de Nantes, Laboratoire de BiochimieetdeTechnologiedesProt\u00e9ines,B.P.71627, Ruede laG\u00e9raudi\u00e8re,Nantes 44316,Cedex03, France xDipartimento di Agrobiologia ed Agrochimica, Via San Camillo de Lellis, Viterbo 01100, Lazio, Italy {Institute of Food Research, Norwich Laboratory, Norwich Research Park, Colney Lane, Norwich NR4 7UA, UK"}
{"_id": "97ceffb21ea6e2028eafe797d813fa643f255ee5", "title": "Insights into Human Behavior from Lesions to the Prefrontal Cortex", "text": "The prefrontal cortex (PFC), a cortical region that was once thought to be functionally insignificant, is now known to play an essential role in the organization and control of goal-directed thought and behavior. Neuroimaging, neurophysiological, and modeling techniques have led to tremendous advances in our understanding of PFC functions over the last few decades. It should be noted, however, that neurological, neuropathological, and neuropsychological studies have contributed some of the most essential, historical, and often prescient conclusions regarding the functions of this region. Importantly, examination of patients with brain damage allows one to draw conclusions about whether a brain area is necessary for a particular function. Here, we provide a broad overview of PFC functions based on behavioral and neural changes resulting from damage to PFC in both human patients and nonhuman primates."}
{"_id": "b2bad87690428fc5479d2d63ed928ab68449c672", "title": "A lane detection algorithm based on reliable lane markings", "text": "This paper proposes a robust and effective vision-based lane detection approach. First, two binary images are obtained from the region of interest of gray-scale images. The obtained binary images are merged by a novel neighborhood AND operator and then transformed to a bird's eye view (BEV) via inverse perspective mapping. Then, gaussian probability density functions are fit to the left and right regions of a histogram image acquired from the BEV. Finally, a polynomial lane model is estimated from the identified regions. Experimental results show that the proposed method accurately detects lanes in complex situations including worn-out and curved lanes."}
{"_id": "f1aaee5f263dd132f7efbdcb6988ec6dbf8de165", "title": "Socio-cognitive gamification: general framework for educational games", "text": "Gamification of learning material has received much interest from researchers in the past years. This paper aims to further improve such learning experience by applying socio-cognitive gamification to educational games. Dynamic difficulty adjustment (DDA) is a well-known tool in optimizing gaming experience. It is a process to control the parameters in a video game automatically based on user experience in real-time. This method can be extended by using a biofeedback-approach, where certain aspects of the player\u2019s ability is estimated based on physiological measurement (e.g. eye tracking, ECG, EEG). Here, we outline the design of a biofeedback-based framework that supports dynamic difficulty adjustment in educational games. It has a universal architecture, so the concept can be employed to engage users in non-game contexts as well. The framework accepts input from the games, from the physiological sensors and from the so-called supervisor unit. This special unit empowers a new social aspect by enabling another user to observe or intervene during the interaction. To explain the game-user interaction itself in educational games we propose a hybrid model."}
{"_id": "6143217ceebc10506fd5a8073434cd6f83cf9a33", "title": "EPOpt: Learning Robust Neural Network Policies Using Model Ensembles", "text": "Sample complexity and safety are major challenges when learning policies with reinforcement learning for real-world tasks \u2013 especially when the policies are represented using rich function approximators like deep neural networks. Modelbased methods where the real-world target domain is approximated using a simulated source domain provide an avenue to tackle the above challenges by augmenting real data with simulated data. However, discrepancies between the simulated source domain and the target domain pose a challenge for simulated training. We introduce the EPOpt algorithm, which uses an ensemble of simulated source domains and a form of adversarial training to learn policies that are robust and generalize to a broad range of possible target domains, including to unmodeled effects. Further, the probability distribution over source domains in the ensemble can be adapted using data from target domain and approximate Bayesian methods, to progressively make it a better approximation. Thus, learning on a model ensemble, along with source domain adaptation, provides the benefit of both robustness and learning/adaptation."}
{"_id": "9917363277c783a01bff32af1c27fc9b373ad55d", "title": "DeepLoco: dynamic locomotion skills using hierarchical deep reinforcement learning", "text": "Learning physics-based locomotion skills is a difficult problem, leading to solutions that typically exploit prior knowledge of various forms. In this paper we aim to learn a variety of environment-aware locomotion skills with a limited amount of prior knowledge. We adopt a two-level hierarchical control framework. First, low-level controllers are learned that operate at a fine timescale and which achieve robust walking gaits that satisfy stepping-target and style objectives. Second, high-level controllers are then learned which plan at the timescale of steps by invoking desired step targets for the low-level controller. The high-level controller makes decisions directly based on high-dimensional inputs, including terrain maps or other suitable representations of the surroundings. Both levels of the control policy are trained using deep reinforcement learning. Results are demonstrated on a simulated 3D biped. Low-level controllers are learned for a variety of motion styles and demonstrate robustness with respect to force-based disturbances, terrain variations, and style interpolation. High-level controllers are demonstrated that are capable of following trails through terrains, dribbling a soccer ball towards a target location, and navigating through static or dynamic obstacles."}
{"_id": "de9b0f81505e536446bfb6a11281d4af7aa1d904", "title": "Terrain-adaptive locomotion skills using deep reinforcement learning", "text": "Reinforcement learning offers a promising methodology for developing skills for simulated characters, but typically requires working with sparse hand-crafted features. Building on recent progress in deep reinforcement learning (DeepRL), we introduce a mixture of actor-critic experts (MACE) approach that learns terrain-adaptive dynamic locomotion skills using high-dimensional state and terrain descriptions as input, and parameterized leaps or steps as output actions. MACE learns more quickly than a single actor-critic approach and results in actor-critic experts that exhibit specialization. Additional elements of our solution that contribute towards efficient learning include Boltzmann exploration and the use of initial actor biases to encourage specialization. Results are demonstrated for multiple planar characters and terrain classes."}
{"_id": "0ecd4fdce541317b38124967b5c2a259d8f43c91", "title": "The Arcade Learning Environment: An Evaluation Platform for General Agents", "text": "In this article we introduce the Arcade Learning Environment (ALE): both a challenge problem and a platform and methodology for evaluating the development of general, domain-independent AI technology. ALE provides an interface to hundreds of Atari 2600 game environments, each one different, interesting, and designed to be a challenge for human players. ALE presents significant research challenges for reinforcement learning, model learning, model-based planning, imitation learning, transfer learning, and intrinsic motivation. Most importantly, it provides a rigorous testbed for evaluating and comparing approaches to these problems. We illustrate the promise of ALE by developing and benchmarking domain-independent agents designed using well-established AI techniques for both reinforcement learning and planning. In doing so, we also propose an evaluation methodology made possible by ALE, reporting empirical results on over 55 different games. All of the software, including the benchmark agents, is publicly available."}
{"_id": "8618c07a15ee34e8f09fda73cea28a83e5f06804", "title": "Software development governance: A meta-management perspective", "text": "Software development governance is a nascent field of research. Establishing how it is framed early, can significantly affect the progress of contributions. This position paper considers the nature and role of governance in organizations and in the software development domain in particular. In contrast to the dominant functional and structural perspectives, an integrated view of governance is proposed as managing the management of a particular domain (that is, a meta-management perspective). Principles are developed and applied to software development governance and illustrated by a case study."}
{"_id": "a65047dde59feb81790c304e4d5a9a7c25aa2acb", "title": "Automated extraction of product comparison matrices from informal product descriptions", "text": "Domain analysts, product managers, or customers aim to capture the important features and differences among a set of related products. A case-by-case reviewing of each product description is a laborious and time-consuming task that fails to deliver a condense view of a family of product. In this article, we investigate the use of automated techniques for synthesizing a product comparison matrix (PCM) from a set of product descriptions written in natural language. We describe a tool-supported process, based on term recognition, information extraction, clustering, and similarities, capable of identifying and organizing features and values in a PCM \u2013 despite the informality and absence of structure in the textual descriptions of products. We evaluate our proposal against numerous categories of products mined from BestBuy. Our empirical results show that the synthesized PCMs exhibit numerous quantitative, comparable information that can potentially complement or even refine technical descriptions of products. The user study shows that our automatic approach is capable of extracting a significant portion of correct features and correct values. This approach has been implemented in MatrixMiner a web environment with an interactive support for automatically synthesizing PCMs from informal product descriptions. MatrixMiner also maintains traceability with the original descriptions and the technical specifications for further refinement or maintenance by users. Preprint submitted to JSS January 5, 2017"}
{"_id": "b83396caf4762c906530c9219a9e4dd0658232b0", "title": "A General Lower Bound on the Number of Examples Needed for Learning", "text": "We prove a lower bound of ( ln + VCdim(C) ) on the number of random examples required for distribution-free learning of a concept class C, where VCdim(C) is the Vapnik-Chervonenkis dimension and and are the accuracy and con dence parameters. This improves the previous best lower bound of ( ln + VCdim(C)), and comes close to the known general upper bound of O( ln + VCdim(C) ln ) for consistent algorithms. We show that for many interesting concept classes, including kCNF and kDNF, our bound is actually tight to within a constant factor. A. Ehrenfeucht was supported by NSF grant MCS-8305245, D. Haussler by ONR grant N00014-86-K-0454, M. Kearns by ONR grant N00014-86-K-0454 and an A.T. & T. Bell Laboratories Scholarship, and L. Valiant by grants ONR-N00014-85-K-0445, NSF-DCR-8600379, DAAL03-86-K-0171 and by the SERC of the U.K. Part of this research was done while M. Kearns was visiting U.C. Santa Cruz."}
{"_id": "d5974504d7dadca9aa78df800a924f2ac18f24d6", "title": "On-demand virtualization for live migration in bare metal cloud", "text": "The level of demand for bare-metal cloud services has increased rapidly because such services are cost-effective for several types of workloads, and some cloud clients prefer a single-tenant environment due to the lower security vulnerability of such enviornments. However, as the bare-metal cloud does not utilize a virtualization layer, it cannot use live migration. Thus, there is a lack of manageability with the bare-metal cloud. Live migration support can improve the manageability of bare-metal cloud services significantly.\n This paper suggests an on-demand virtualization technique to improve the manageability of bare-metal cloud services. A thin virtualization layer is inserted into the bare-metal cloud when live migration is requested. After the completion of the live migration process, the thin virtualization layer is removed from the host. We modified BitVisor [19] to implement on-demand virtualization and live migration on the x86 architecture.\n The elapsed time of on-demand virtualization was negligible. It takes about 20 ms to insert the virtualization layer and 30 ms to remove the one. After removing the virtualization layer, the host machine works with bare-metal performance."}
{"_id": "4b278c79a7ba22a0eb9f5f967010bf57d6667c37", "title": "Gender identity and sexual orientation in women with borderline personality disorder.", "text": "INTRODUCTION\nIn the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, text revision (DSM-IV-TR) (and earlier editions), a disturbance in \"identity\" is one of the defining features of borderline personality disorder (BPD). Gender identity, a person's sense of self as a male or a female, constitutes an important aspect of identity formation, but this construct has rarely been examined in patients with BPD.\n\n\nAIMS\nIn the present study, the presence of gender identity disorder or confusion was examined in women diagnosed with BPD.\n\n\nMAIN OUTCOME MEASURES\nWe used a validated dimensional measure of gender dysphoria. Recalled gender identity and gender role behavior from childhood was also assessed with a validated dimensional measure, and current sexual orientation was assessed by two self-report measures.\n\n\nMETHODS\nA consecutive series of 100 clinic-referred women (mean age, 34 years) with BPD participated in the study. The women were diagnosed with BPD using the International Personality Disorder Exam-BPD Section.\n\n\nRESULTS\nNone of the women with BPD met the criterion for caseness on the dimensional measure of gender dysphoria. Women who self-reported either a bisexual or a homosexual sexual orientation had a significantly higher score on the dimensional measure of gender dysphoria than the women who self-reported a heterosexual sexual orientation, and they also recalled significantly more cross-gender behavior during childhood. Results were compared with a previous study on a diagnostically heterogeneous group of women with other clinical problems.\n\n\nCONCLUSION\nThe importance of psychosexual assessment in the clinical evaluation of patients with BPD is discussed."}
{"_id": "f4ed7a2c1ce10b08ad338217743d737de64b056b", "title": "Enhanced Classification Model for Cervical Cancer Dataset based on Cost Sensitive Classifier Hayder", "text": "Cervical cancer threatens the lives of many women in our world today. In 2014, the number of women infected with this disease in the United States was 12,578, of which 4,115 died, with a death rate of nearly 32%. Cancer data, including cervical cancer datasets, represent a significant challenge data mining techniques because absence of different costs for error cases. The proposed model present a cost sensitive classifiers that has three main stages; the first stage is prepressing the original data to prepare it for classification model which is build based on decision tree classifier with cost selectivity and finallyevaluation the proposed model based on many metrics in addition to apply a cross validation.The proposed model provides more accurate result in both binary class and multi class classification. It has a TP rate (0.429) comparing with (0.160) for typical decision tree in binary class task."}
{"_id": "279beb332fa6e158f32742b7dfafe83f12a97110", "title": "Sequencing technologies \u2014 the next generation", "text": "Demand has never been greater for revolutionary technologies that deliver fast, inexpensive and accurate genome information. This challenge has catalysed the development of next-generation sequencing (NGS) technologies. The inexpensive production of large volumes of sequence data is the primary advantage over conventional methods. Here, I present a technical review of template preparation, sequencing and imaging, genome alignment and assembly approaches, and recent advances in current and near-term commercially available NGS instruments. I also outline the broad range of applications for NGS technologies, in addition to providing guidelines for platform selection to address biological questions of interest."}
{"_id": "4d843ef246fa08f39d1b1edbc5002dbae35b9334", "title": "Multi-Bias Non-linear Activation in Deep Neural Networks", "text": "As a widely used non-linear activation, Rectified Linear Unit (ReLU) separates noise and signal in a feature map by learning a threshold or bias. However, we argue that the classification of noise and signal not only depends on the magnitude of responses, but also the context of how the feature responses would be used to detect more abstract patterns in higher layers. In order to output multiple response maps with magnitude in different ranges for a particular visual pattern, existing networks employing ReLU and its variants have to learn a large number of redundant filters. In this paper, we propose a multi-bias non-linear activation (MBA) layer to explore the information hidden in the magnitudes of responses. It is placed after the convolution layer to decouple the responses to a convolution kernel into multiple maps by multi-thresholding magnitudes, thus generating more patterns in the feature space at a low computational cost. It provides great flexibility of selecting responses to different visual patterns in different magnitude ranges to form rich representations in higher layers. Such a simple and yet effective scheme achieves the stateof-the-art performance on several benchmarks."}
{"_id": "5f2a4982a8adef2d1a6d589a155143291d440c0a", "title": "Constraints in the IoT: The World in 2020 and Beyond", "text": "The Internet of Things (IoT), often referred as the future Internet; is a collection of interconnected devices integrated into the world-wide network that covers almost everything and could be available anywhere. IoT is an emerging technology and aims to play an important role in saving money, conserving energy, eliminating gap and better monitoring for intensive management on a routine basis. On the other hand, it is also facing certain design constraints such as technical challenges, social challenges, compromising privacy and performance tradeoffs. This paper surveys major technical limitations that are hindering the successful deployment of the IoT such as standardization, interoperability, networking issues, addressing and sensing issues, power and storage restrictions, privacy and security, etc. This paper categorizes the existing research on the technical constraints that have been published in the recent years. With this categorization, we aim to provide an easy and concise view of the technical aspects of the IoT. Furthermore, we forecast the changes influenced by the IoT. This paper predicts the future and provides an estimation of the world in year 2020 and beyond. Keywords\u2014Internet of Things; Future Internet; Next generation network issues; World-wide network; 2020"}
{"_id": "28f7f43774bce41023f9912a24219e33612a3842", "title": "Don't Get Caught in the Cold, Warm-up Your JVM: Understand and Eliminate JVM Warm-up Overhead in Data-Parallel Systems", "text": "Many widely used, latency sensitive, data-parallel distributed systems, such as HDFS, Hive, and Spark choose to use the Java Virtual Machine (JVM), despite debate on the overhead of doing so. This paper analyzes the extent and causes of the JVM performance overhead in the above mentioned systems. Surprisingly, we find that the warm-up overhead, i.e., class loading and interpretation of bytecode, is frequently the bottleneck. For example, even an I/O intensive, 1GB read on HDFS spends 33% of its execution time in JVM warm-up, and Spark queries spend an average of 21 seconds in warm-up. The findings on JVM warm-up overhead reveal a contradiction between the principle of parallelization, i.e., speeding up long running jobs by parallelizing them into short tasks, and amortizing JVM warm-up overhead through long tasks. We solve this problem by designing HotTub, a new JVM that amortizes the warm-up overhead over the lifetime of a cluster node instead of over a single job by reusing a pool of already warm JVMs across multiple applications. The speed-up is significant. For example, using HotTub results in up to 1.8X speedups for Spark queries, despite not adhering to the JVM specification in edge cases."}
{"_id": "2fb5c1fdfdf999631a30c09a3602956c9de084db", "title": "HellRank: a Hellinger-based centrality measure for bipartite social networks", "text": "Measuring centrality in a social network, especially in bipartite mode, poses many challenges, for example, the requirement of full knowledge of the network topology, and the lack of properly detecting top-k behavioral representative users. To overcome the above mentioned challenges, we propose HellRank, an accurate centrality measure for identifying central nodes in bipartite social networks. HellRank is based on the Hellinger distance between two nodes on the same side of a bipartite network. We theoretically analyze the impact of this distance on a bipartite network and find upper and lower bounds for it. The computation of the HellRank centrality measure can be distributed, by letting each node uses local information only on its immediate neighbors. Consequently, one does not need a central entity that has full knowledge of the network topological structure. We experimentally evaluate the performance of the HellRank measure in correlation with other centrality measures on real-world networks. The results show partial ranking similarity between the HellRank and the other conventional metrics according to the Kendall and Spearman rank correlation coefficient."}
{"_id": "b65ec8eec4933b3a9425d7dc981fdc27e4260077", "title": "The biology of VEGF and its receptors", "text": "Vascular endothelial growth factor (VEGF) is a key regulator of physiological angiogenesis during embryogenesis, skeletal growth and reproductive functions. VEGF has also been implicated in pathological angiogenesis associated with tumors, intraocular neovascular disorders and other conditions. The biological effects of VEGF are mediated by two receptor tyrosine kinases (RTKs), VEGFR-1 and VEGFR-2, which differ considerably in signaling properties. Non-signaling co-receptors also modulate VEGF RTK signaling. Currently, several VEGF inhibitors are undergoing clinical testing in several malignancies. VEGF inhibition is also being tested as a strategy for the prevention of angiogenesis, vascular leakage and visual loss in age-related macular degeneration."}
{"_id": "45d5d9f461793425dff466556acd87b934a6fa0c", "title": "Work in Progress: K-Nearest Neighbors Techniques for ABAC Policies Clustering", "text": "In this paper, we present an approach based on the K-Nearest Neighbors algorithms for policies clustering that aims to reduce the ABAC policies dimensionality for high scale systems. Since ABAC considers a very large set of attributes for access decisions, it turns out that using such model for large scale systems might be very complicated. To date, researchers have proposed to use data mining techniques to discover roles for RBAC system construction. In this work in progress, we consider the usage of KNN-based techniques for the classification of ABAC policies based on similarity computations of rules in order to enhance the ABAC flexibility and to reduce the number of policy rules."}
{"_id": "5cacce78a013fd2a5dcf4457787feff42ddaf68e", "title": "Online 3D acquisition and model integration", "text": "This paper presents a system which yields complete 3D models in a fast and inexpensive way. The major building blocks are a high-speed structured light range scanner and a registration module. The former generates \u2019raw\u2019 range data whereas the latter performs a fast registration (ICP) and renders the partially integrated model as a preview of the final result. As the scanner uses only a single image to make a reconstruction, it is possible to scan while the object is moved. The use of an adaptive projection pattern gives a more robust behaviour. This allows the system to deal more easily with complicated geometry and texture than most other systems. During the scanning process the model is built up incrementally and rendered on the screen. This real-time visual feedback allows the user to check the current state of the 3D model. Holes or other flaws can be detected during the acquisition process itself. This speeds up the model building process, and solves indirectly the problem of view planning. The whole real-time pipeline comprising acquisition, merging and visualization only uses off-the-shel f hardware. A regular desktop PC connected to a camera and LCD projector is turned into a high-speed scanner and modeler. The current implementation has a throughput of approx. 5 fps."}
{"_id": "9f8f7e6cc18205b92ecf9e792bfcdb4b6f19cae3", "title": "Opinion Mining in Online Reviews About Distance Education Programs", "text": "The popularity of distance education programs is increasing at a fast pace. En par with this development, online communication in fora, social media and reviewing platforms between students is increasing as well. Exploiting this information to support fellow students or institutions requires to extract the relevant opinions in order to automatically generate reports providing an overview of pros and cons of different distance education programs. We report on an experiment involving distance education experts with the goal to develop a dataset of reviews annotated with relevant categories and aspects in each category discussed in the specific review together with an indication of the sentiment. Based on this experiment, we present an approach to extract general categories and specific aspects under discussion in a review together with their sentiment. We frame this task as a multi-label hierarchical text classification problem and empirically investigate the performance of different classification architectures to couple the prediction of a category with the prediction of particular aspects in this category. We evaluate different architectures and show that a hierarchical approach leads to superior results in comparison to a flat model which makes decisions independently. This work has been performed while the first and last authors were at Bielefeld University."}
{"_id": "539a06ab025005ff2ad5d8435515faa058c73b07", "title": "Real-time large-scale dense RGB-D SLAM with volumetric fusion", "text": "We present a new SLAM system capable of producing high quality globally consistent surface reconstructions over hundreds of metres in real-time with only a low-cost commodity RGB-D sensor. By using a fused volumetric surface reconstruction we achieve a much higher quality map over what would be achieved using raw RGB-D point clouds. In this paper we highlight three key techniques associated with applying a volumetric fusion-based mapping system to the SLAM problem in real-time. First, the use of a GPU-based 3D cyclical buffer trick to efficiently extend dense every frame volumetric fusion of depth maps to function over an unbounded spatial region. Second, overcoming camera pose estimation limitations in a wide variety of environments by combining both dense geometric and photometric camera pose constraints. Third, efficiently updating the dense map according to place recognition and subsequent loop closure constraints by the use of an \u201cas-rigid-as-possible\u201d space deformation. We present results on a wide variety of aspects of the system and show through evaluation on de facto standard RGB-D benchmarks that our system performs strongly in terms of trajectory estimation, map quality and computational performance in comparison to other state-of-the-art systems."}
{"_id": "af1745e54e256351f55da4a4a4bf61f594e7e3a7", "title": "The six determinants of gait and the inverted pendulum analogy: A dynamic walking perspective.", "text": "We examine two prevailing, yet surprisingly contradictory, theories of human walking. The six determinants of gait are kinematic features of gait proposed to minimize the energetic cost of locomotion by reducing the vertical displacement of the body center of mass (COM). The inverted pendulum analogy proposes that it is beneficial for the stance leg to behave like a pendulum, prescribing a more circular arc, rather than a horizontal path, for the COM. Recent literature presents evidence against the six determinants theory, and a simple mathematical analysis shows that a flattened COM trajectory in fact increases muscle work and force requirements. A similar analysis shows that the inverted pendulum fares better, but paradoxically predicts no work or force requirements. The paradox may be resolved through the dynamic walking approach, which refers to periodic gaits produced almost entirely by the dynamics of the limbs alone. Demonstrations include passive dynamic walking machines that descend a gentle slope, and active dynamic walking robots that walk on level ground. Dynamic walking takes advantage of the inverted pendulum mechanism, but requires mechanical work to transition from one pendular stance leg to the next. We show how the step-to-step transition is an unavoidable energetic consequence of the inverted pendulum gait, and gives rise to predictions that are experimentally testable on humans and machines. The dynamic walking approach provides a new perspective, focusing on mechanical work rather than the kinematics or forces of gait. It is helpful for explaining human gait features in a constructive rather than interpretive manner."}
{"_id": "3e2dcf63a9f6fac3ac17aaefe2be2572349cc6f5", "title": "Comparing CORBA and Web-Services in view of a Service Oriented Architecture", "text": "The concept of Service Oriented Architecture revolves around registering services as tasks. These tasks are accomplished collectively by various disparate components seamlessly connected to one another. The task of interlinking these components may be considered amongst the most convoluted and difficult tasks currently faced by software practitioners. This paper attempts to show that although middleware technologies can be solely utilized to develop service oriented architecture, however such architecture would severely lack quality, interoperability and ease of implementation. In order to resolve these complexities and complications this paper proposes Web Services as an alternative to Middleware, for the realization of a fully functional interoperable and an automated SOA which conforms to the characteristics of a SOA. This paper provides an abstract implementation model of a SOA using both middleware and web services. It then attempts to point out the implementation and accepted benefits of the latter, especially when legacy applications are involved. Emphasize is laid out on the significance of interoperability since it assists in mobility and other corporate benefits. The paper concludes that when interoperability along with its benefits of mobility, expansion, costs, simplicity and enterprise integration are required in the construction of a SOA then web services should be the definite integration choice. The paper also highlights the importance of object oriented middleware, along with situations in which it might be preferred over web services."}
{"_id": "38e83f4fcfca63dfa2db4f48ad75cbbdda948a84", "title": "Gentle Adaboost algorithm for weld defect classification", "text": "In this paper, we present a new strategy for automatic classification of weld defects in radiographs based on Gentle Adaboost algorithm. Radiographic images were segmented and moment-based features were extracted and given as input to Gentle Adaboost classifier. The performance of our classification system is evaluated using hundreds of radiographic images. The classifier is trained to classify each defect pattern into one of four classes: Crack, Lack of penetration, Porosity, and Solid inclusion. The experimental results show that the Gentle Adaboost classifier is an efficient automatic weld defect classification algorithm and can achieve high accuracy and is faster than support vector machine (SVM) algorithm, for the tested data."}
{"_id": "8bc68ff091ee873c797b8b2979139b024527cb59", "title": "Artificial Neural Networks for Misuse Detection", "text": "Misuse detection is the process of attempting to identify instances of network attacks by comparing current activity against the expected actions of an intruder. Most current approaches to misuse detection involve the use of rule-based expert systems to identify indications of known attacks. However, these techniques are less successful in identifying attacks which vary from expected patterns. Artificial neural networks provide the potential to identify and classify network activity based on limited, incomplete, and nonlinear data sources. We present an approach to the process of misuse detection that utilizes the analytical strengths of neural networks, and we provide the results from our preliminary analysis of this approach."}
{"_id": "0722af4b124785f40861b527fa494a4d76ac6d70", "title": "A General Greedy Approximation Algorithm with Applications", "text": "Greedy approximation algorithms have been frequently used to obtain sparse solutions to learning problems. In this paper, we present a general greedy algorithm for solving a class of convex optimization problems. We derive a bound on the rate of approximation for this algorithm, and show that our algorithm includes a number of earlier studies as special cases."}
{"_id": "d02e1139a93a91ee83becf5fdbdf83b467799e20", "title": "Traffic sign detection and recognition based on random forests", "text": "In this paper we present a new traffic sign detection and recognition (TSDR) method, which is achieved in three main steps. The first step segments the image based on thresholding of HSI color space components. The second step detects traffic signs by processing the blobs extracted by the first step. The last one performs the recognition of the detected traffic signs. The main contributions of the paper are as follows. First, we propose, in the second step, to use invariant geometric moments to classify shapes instead of machine learning algorithms. Second, inspired by the existing features, new ones have been proposed for the recognition. The histogram of oriented gradients (HOG) features has been extended to the HSI raffic sign detection raffic sign recognition olor segmentation andom forests upport vector machines (SVMs) istogram of oriented gradients (HOG) color space and combined with the local self-similarity (LSS) features to get the descriptor we use in our algorithm. As a classifier, random forest and support vector machine (SVM) classifiers have been tested together with the new descriptor. The proposed method has been tested on both the German Traffic Sign Detection and Recognition Benchmark and the Swedish Traffic Signs Data sets. The results obtained are satisfactory when compared to the state-of-the-art methods. \u00a9 2016 Elsevier B.V. All rights reserved. 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 ocal self-similarity (LSS)"}
{"_id": "c8ec9df5c699b110ed04d4b29d2de20adc5283db", "title": "Few-shot Learning Using a Small-Sized Dataset of High-Resolution FUNDUS Images for Glaucoma Diagnosis", "text": "Deep learning has recently attracted a lot of attention, mainly thanks to substantial gains in terms of effectiveness. However, there is still room for significant improvement, especially when dealing with use cases that come with a limited availability of data, as is often the case in the area of medical image analysis. In this paper, we introduce a novel approach for early diagnosis of glaucoma in high-resolution FUNDUS images, only requiring a small number of training samples. In particular, we developed a predictive model based on a matching neural network architecture, integrating a high-resolution deep convolutional network that allows preserving the high-fidelity nature of the medical images. Our experimental results show that our predictive model is able to obtain higher levels of effectiveness than vanilla deep convolutional neural networks."}
{"_id": "0afcbfcd5f30eb680f7c6868fbddb2f034093918", "title": "NLANGP: Supervised Machine Learning System for Aspect Category Classification and Opinion Target Extraction", "text": "This paper describes our system used in the Aspect Based Sentiment Analysis Task 12 of SemEval-2015. Our system is based on two supervised machine learning algorithms: sigmoidal feedforward network to train binary classifiers for aspect category classification (Slot 1), and Conditional Random Fields to train classifiers for opinion target extraction (Slot 2). We extract a variety of lexicon and syntactic features, as well as cluster features induced from unlabeled data. Our system achieves state-of-the-art performances, ranking 1st for three of the evaluations (Slot 1 for both restaurant and laptop domains, and Slot 1 & 2) and 2nd for Slot 2 evaluation."}
{"_id": "120ce727d1754bdd663199fa03b110fd6ceee4bf", "title": "SemEval-2016 Task 5: Aspect Based Sentiment Analysis", "text": "This paper describes the SemEval 2016 shared task on Aspect Based Sentiment Analysis (ABSA), a continuation of the respective tasks of 2014 and 2015. In its third year, the task provided 19 training and 20 testing datasets for 8 languages and 7 domains, as well as a common evaluation procedure. From these datasets, 25 were for sentence-level and 14 for text-level ABSA; the latter was introduced for the first time as a subtask in SemEval. The task attracted 245 submissions from 29 teams."}
{"_id": "19d748f959a24888f0239facf194085a4bf59f20", "title": "A Logic Programming Approach to Aspect Extraction in Opinion Mining", "text": "Aspect extraction aims to extract fine-grained opinion targets from opinion texts. Recent work has shown that the syntactical approach performs well. In this paper, we show that Logic Programming, particularly Answer Set Programming (ASP), can be used to elegantly and efficiently implement the key components of syntax based aspect extraction. Specifically, the well known double propagation (DP) method is implemented using 8 ASP rules that naturally model all key ideas in the DP method. Our experiment on a widely used data set also shows that the ASP implementation is much faster than a Java-based implementation. Syntactical approach has its limitation too. To further improve the performance of syntactical approach, we identify a set of general words from Word Net that have little chance to be an aspect and prune them when extracting aspects. The concept of general words and their pruning are concisely captured by 10 new ASP rules, and a natural extension of the 8 rules for the original DP method. Experimental results show a major improvement in precision with almost no drop in recall compared with those reported in the existing work on a typical benchmark data set. Logic Programming provides a convenient and effective tool to encode and thus test knowledge needed to improve the aspect extraction methods so that the researchers can focus on the identification and discovery of new knowledge to improve aspect extraction."}
{"_id": "4947198a6f4b0e3f9a514bfb4c9d5410427b9e84", "title": "Cortical demyelination and diffuse white matter injury in multiple sclerosis.", "text": "Focal demyelinated plaques in white matter, which are the hallmark of multiple sclerosis pathology, only partially explain the patient's clinical deficits. We thus analysed global brain pathology in multiple sclerosis, focusing on the normal-appearing white matter (NAWM) and the cortex. Autopsy tissue from 52 multiple sclerosis patients (acute, relapsing-remitting, primary and secondary progressive multiple sclerosis) and from 30 controls was analysed using quantitative morphological techniques. New and active focal inflammatory demyelinating lesions in the white matter were mainly present in patients with acute and relapsing multiple sclerosis, while diffuse injury of the NAWM and cortical demyelination were characteristic hallmarks of primary and secondary progressive multiple sclerosis. Cortical demyelination and injury of the NAWM, reflected by diffuse axonal injury with profound microglia activation, occurred on the background of a global inflammatory response in the whole brain and meninges. There was only a marginal correlation between focal lesion load in the white matter and diffuse white matter injury, or cortical pathology, respectively. Our data suggest that multiple sclerosis starts as a focal inflammatory disease of the CNS, which gives rise to circumscribed demyelinated plaques in the white matter. With chronicity, diffuse inflammation accumulates throughout the whole brain, and is associated with slowly progressive axonal injury in the NAWM and cortical demyelination."}
{"_id": "12f1075dba87030dd82b1254c2f160f0ab861c2a", "title": "Essential communication practices for Extreme Programming in a global software development team", "text": "We conducted an industrial case study of a distributed team in the USA and the Czech Republic that used Extreme Programming. Our goal was to understand how this globally-distributed team created a successful project in a new problem domain using a methodology that is dependent on informal, face-to-face communication. We collected quantitative and qualitative data and used grounded theory to identify four key factors for communication in globally-distributed XP teams working within a new problem domain. Our study suggests that, if these critical enabling factors are addressed, methodologies dependent on informal communication can be used on global software development projects."}
{"_id": "11dd5ff0c75630de85d040ed1bc42eacf168f578", "title": "The application of virtual reality to (chemical engineering) education", "text": "Virtual reality, VR, offers many benefits to technical education, including the delivery of information through multiple active channels, the addressing of different learning styles, and experiential-based learning. This poster presents work performed by the authors to apply VR to engineering education, in three broad project areas: virtual chemical plants, virtual laboratory accidents, and a virtual UIC campus. The first area provides guided exploration of domains otherwise inaccessible, such as the interior of operating reactors and microscopic reaction mechanisms. The second promotes safety by demonstrating the consequences of not following proper lab safety procedures. And the third provides valuable guidance for (foreign) visitors. All programs developed are available on the Web, for free download to any interested parties."}
{"_id": "c4f05354ce6776dd1a3a076c9cc60614ee38476e", "title": "Deep EndoVO: A Recurrent Convolutional Neural Network (RCNN) based Visual Odometry Approach for Endoscopic Capsule Robots", "text": "Ingestible wireless capsule endoscopy is an emerging minimally invasive diagnostic technology for inspection of the GI tract and diagnosis of a wide range of diseases and pathologies. Medical device companies and many research groups have recently made substantial progresses in converting passive capsule endoscopes to active capsule robots, enabling more accurate, precise, and intuitive detection of the location and size of the diseased areas. Since a reliable real time pose estimation functionality is crucial for actively controlled endoscopic capsule robots, in this study, we propose a monocular visual odometry (VO) method for endoscopic capsule robot operations. Our method lies on the application of the deep Recurrent Convolutional Neural Networks (RCNNs) for the visual odometry task, where Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are used for the feature extraction and inference of dynamics across the frames, respectively. Detailed analyses and evaluations made on a real pig stomach dataset proves that our system achieves high translational and rotational accuracies for different types of endoscopic capsule robot trajectories."}
{"_id": "76082a00ed6f3e3828f78436723d7fde29faeed4", "title": "Pondering the Concept of Abstraction in (Illustrative) Visualization", "text": "We explore the concept of abstraction as it is used in visualization, with the ultimate goal of understanding and formally defining it. Researchers so far have used the concept of abstraction largely by intuition without a precise meaning. This lack of specificity left questions on the characteristics of abstraction, its variants, its control, or its ultimate potential for visualization and, in particular, illustrative visualization mostly unanswered. In this paper we thus provide a first formalization of the abstraction concept and discuss how this formalization affects the application of abstraction in a variety of visualization scenarios. Based on this discussion, we derive a number of open questions still waiting to be answered, thus formulating a research agenda for the use of abstraction for the visual representation and exploration of data. This paper, therefore, is intended to provide a contribution to the discussion of the theoretical foundations of our field, rather than attempting to provide a completed and final theory."}
{"_id": "357a89799c968d5a363197026561b04c5a496b66", "title": "Haraka - Efficient Short-Input Hashing for Post-Quantum Applications", "text": "Recently, many efficient cryptographic hash function design strategies have been explored, not least because of the SHA-3 competition. These designs are, almost exclusively, geared towards high performance on long inputs. However, various applications exist where the performance on short (fixed length) inputs matters more. Such hash functions are the bottleneck in hash-based signature schemes like SPHINCS or XMSS, which is currently under standardization. Secure functions specifically designed for such applications are scarce. We attend to this gap by proposing two short-input hash functions (or rather simply compression functions). By utilizing AES instructions on modern CPUs, our proposals are the fastest on such platforms, reaching throughputs below one cycle per hashed byte even for short inputs, while still having a very low latency of less than 60 cycles. Under the hood, this results comes with several innovations. First, we study whether the number of rounds for our hash functions can be reduced, if only second-preimage resistance (and not collision resistance) is required. The conclusion is: only a little. Second, since their inception, AES-like designs allow for supportive security arguments by means of counting and bounding the number of active S-boxes. However, this ignores powerful attack vectors using truncated differentials, including the powerful rebound attacks. We develop a general tool-based method to include arguments against attack vectors using truncated differentials."}
{"_id": "4d58f886f5150b2d5e48fd1b5a49e09799bf895d", "title": "Texas 3D Face Recognition Database", "text": "We make the Texas 3D Face Recognition Database available to researchers in three dimensional (3D) face recognition and other related areas. This database contains 1149 pairs of high resolution, pose normalized, preprocessed, and perfectly aligned color and range images of 118 adult human subjects acquired using a stereo camera. The images are accompanied with information about the subjects' gender, ethnicity, facial expression, and the locations of 25 manually located anthropometric facial fiducial points. Specific partitions of the data for developing and evaluating 3D face recognition algorithms are also included."}
{"_id": "270af733bcf18d9c14230bcffc77d6ae57e2667d", "title": "Maximum Entropy Deep Inverse Reinforcement Learning", "text": "This paper presents a general framework for employing deep architectures in particular neural networks to solve the inverse reinforcement learning (IRL) problem. Specifically, we propose to exploit the representational capacity and favourable computational complexity of deep networks to approximate complex, nonlinear reward functions. We show that the Maximum Entropy paradigm for IRL lends itself naturally to the efficient training of deep architectures. At test time, the approach leads to a computational complexity independent of the number of demonstrations. This makes it especially well-suited for applications in life-long learning scenarios commonly encountered in robotics. We demonstrate that our approach achieves performance commensurate to the state-of-the-art on existing benchmarks already with simple, comparatively shallow network architectures while significantly outperforming the stateof-the-art on an alternative benchmark based on more complex, highly varying reward structures representing strong interactions between features. Furthermore, we extend the approach to include convolutional layers in order to eliminate the dependency on precomputed features of current algorithms and to underline the substantial gain in flexibility in framing IRL in the context of deep learning."}
{"_id": "2dfcb6a65c8c97e7cadd2742f1671d773ab596c1", "title": "Distributed Secret Sharing Approach With Cheater Prevention Based on QR Code", "text": "QR barcodes are used extensively due to their beneficial properties, including small tag, large data capacity, reliability, and high-speed scanning. However, the private data of the QR barcode lacks adequate security protection. In this article, we design a secret QR sharing approach to protect the private QR data with a secure and reliable distributed system. The proposed approach differs from related QR code schemes in which it uses the QR characteristics to achieve secret sharing and can resist the print-and-scan operation. The secret can be split and conveyed with QR tags in the distribution application, and the system can retrieve the lossless secret when authorized participants cooperate. General browsers can read the original data from the marked QR tag via a barcode reader, and this helps reduce the security risk of the secret. Based on our experiments, the new approach is feasible and provides content readability, cheater detectability, and an adjustable secret payload of the QR barcode."}
{"_id": "5f6b7fca82ff3947f6cc571073c18c687eaedd0d", "title": "Big Data analytics. Three use cases with R, Python and Spark", "text": "Management and analysis of big data are systematically associated with a data distributed architecture in the Hadoop and now Spark frameworks. This article offers an introduction for statisticians to these technologies by comparing the performance obtained by the direct use of three reference environments : R, Python Scikit-learn, \u2217Universit\u00e9 de Toulouse \u2013 INSA, Institut de Math\u00e9matiques, UMR CNRS 5219 \u2020Institut de Math\u00e9matiques, UMR CNRS 5219 \u2021Universit\u00e9 de Toulouse \u2013 UT3, Institut de Math\u00e9matiques, UMR CNRS 5219 1 ar X iv :1 60 9. 09 61 9v 1 [ st at .A P] 3 0 Se p 20 16 Spark MLlib on three public use cases : character recognition, recommending films, categorizing products. As main result, it appears that, if Spark is very efficient for data munging and recommendation by collaborative filtering (non-negative factorization), current implementations of conventional learning methods (logistic regression, random forests) in MLlib or SparkML do not ou poorly compete habitual use of these methods (R, Python Scikit-learn) in an integrated or undistributed architecture."}
{"_id": "b58f395fd0afa3e309ff108247e03ab6c3f15719", "title": "Effectiveness Evaluation of Rule Based Classifiers for the Classification of Iris Data Set", "text": "ISSN 2250 \u2013 1061 | \u00a9 2011 Bonfring Abstract--In machine learning, classification refers to a step by step procedure for designating a given piece of input data into any one of the given categories. There are many classification problem occurs and need to be solved. Different types are classification algorithms like tree-based, rule-based, etc are widely used. This work studies the effectiveness of Rule-Based classifiers for classification by taking a sample data set from UCI machine learning repository using the open source machine learning tool. A comparison of different rulebased classifiers used in Data Mining and a practical guideline for selecting the most suited algorithm for a classification is presented and some empirical criteria for describing and evaluating the classifiers are given."}
{"_id": "53c5c2da782debbaa150cbce1ff4909d1d323b1f", "title": "Comparison of high-speed switched reluctance machines with conventional and toroidal windings", "text": "This paper presents designs of 50,000 rpm 6/4 switched reluctance motors (SRM's) with a focus of the comparative study on conventional and toroidal windings. There are four different machines compared in this paper, while the first is a conventional SRM, and the other three are toroidal winding machines. The first toroidal SRM (TSRM1) employs the conventional asymmetric convert and the same switching sequence as conventional SRM (CSRM). Therefore, an equivalent magnetic performance is observed. The second toroidal SRM (TSRM2) introduces a 12-switch converter topology. With a proper coil connection and switching sequence, all the coils are active and contribute to the flux and torque generation at the same time. The analysis shows that for the same amount of copper losses, TSRM2 yields a 50% higher output torque and power at rated speed than CSRM, while TSRM1 only generates half the torque of CSRM. The third toroidal SRM is a resized TSRM2, which is presented with the same envelope dimension of CSRM (same volumetric comparison). The comparison shows it's competitive to CSRM, especially as toroidal-winding can achieve higher filling factor during the manufacture process of winding."}
{"_id": "1250ef2f19d0ac751ec7d0e2a22e741ecb40ea92", "title": "Constructing long short-term memory based deep recurrent neural networks for large vocabulary speech recognition", "text": "Long short-term memory (LSTM) based acoustic modeling methods have recently been shown to give state-of-the-art performance on some speech recognition tasks. To achieve a further performance improvement, in this research, deep extensions on LSTM are investigated considering that deep hierarchical model has turned out to be more efficient than a shallow one. Motivated by previous research on constructing deep recurrent neural networks (RNNs), alternative deep LSTM architectures are proposed and empirically evaluated on a large vocabulary conversational telephone speech recognition task. Meanwhile, regarding to multi-GPU devices, the training process for LSTM networks is introduced and discussed. Experimental results demonstrate that the deep LSTM networks benefit from the depth and yield the state-of-the-art performance on this task."}
{"_id": "84e65a5bdb735d62eef4f72c2f01af354b2285ba", "title": "Efficient Architecture Search by Network Transformation", "text": "Techniques for automatically designing deep neural network architectures such as reinforcement learning based approaches have recently shown promising results. However, their success is based on vast computational resources (e.g. hundreds of GPUs), making them difficult to be widely used. A noticeable limitation is that they still design and train each network from scratch during the exploration of the architecture space, which is highly inefficient. In this paper, we propose a new framework toward efficient architecture search by exploring the architecture space based on the current network and reusing its weights. We employ a reinforcement learning agent as the meta-controller, whose action is to grow the network depth or layer width with function-preserving transformations. As such, the previously validated networks can be reused for further exploration, thus saves a large amount of computational cost. We apply our method to explore the architecture space of the plain convolutional neural networks (no skip-connections, branching etc.) on image benchmark datasets (CIFAR-10, SVHN) with restricted computational resources (5 GPUs). Our method can design highly competitive networks that outperform existing networks using the same design scheme. On CIFAR-10, our model without skip-connections achieves 4.23% test error rate, exceeding a vast majority of modern architectures and approaching DenseNet. Furthermore, by applying our method to explore the DenseNet architecture space, we are able to achieve more accurate networks with fewer parameters."}
{"_id": "63af189a7f392ea29cca5b18a71b9a680aef071b", "title": "Iron Loss Reduction in an Interior PM Automotive Alternator", "text": "This paper examines the iron loss characteristics of a high-flux interior permanent-magnet machine. The machine was designed as a concept demonstrator for a 6-kW automotive alternator and has a wide field-weakening range. Initial experimental results revealed a high iron loss during field-weakening operation. Finite-element analysis was used to investigate the cause of the high iron losses and to predict their magnitude as a function of speed. The effects of changes in the machine design were examined in order to reduce iron losses and hence improve the machine performance"}
{"_id": "bbe13b72314fffcc2f35b0660195f2f6607c00a0", "title": "Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions", "text": "Deep autoregressive models have shown state-of-the-art performance in density estimation for natural images on large-scale datasets such as ImageNet. However, such models require many thousands of gradient-based weight updates and unique image examples for training. Ideally, the models would rapidly learn visual concepts from only a handful of examples, similar to the manner in which humans learns across many vision tasks. In this paper, we show how 1) neural attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation. Our proposed modifications to PixelCNN result in state-of-the art few-shot density estimation on the Omniglot dataset. Furthermore, we visualize the learned attention policy and find that it learns intuitive algorithms for simple tasks such as image mirroring on ImageNet and handwriting on Omniglot without supervision. Finally, we extend the model to natural images and demonstrate few-shot image generation on the Stanford Online Products dataset."}
{"_id": "ee61d5dbb2ff64995f1aeb81d94c0b55d562b4c9", "title": "A Strategic Analysis of Electronic Marketplaces", "text": ""}
{"_id": "9b9bac085208271dfd33fd333dcb76dcde8332b8", "title": "On the Convergence of Adam and Beyond", "text": ""}
{"_id": "db4bfae21d57a1583effad1b2952f78daece2454", "title": "New Fast and Accurate Jacobi SVD Algorithm. I", "text": "This paper is the result of contrived efforts to break the barrier between numerical accuracy and run time efficiency in computing the fundamental decomposition of numerical linear algebra \u2013 the singular value decomposition (SVD) of a general dense matrix. It is an unfortunate fact that the numerically most accurate one\u2013sided Jacobi SVD algorithm is several times slower than generally less accurate bidiagonalization based methods such as the QR or the divide and conquer algorithm. Despite its sound numerical qualities, the Jacobi SVD is not included in the state of the art matrix computation libraries and it is even considered obsolete by some leading researches. Our quest for a highly accurate and efficient SVD algorithm has led us to a new, superior variant of the Jacobi algorithm. The new algorithm has inherited all good high accuracy properties, and it outperforms not only the best implementations of the one\u2013sided Jacobi algorithm but also the QR algorithm. Moreover, it seems that the potential of the new approach is yet to be fully exploited."}
{"_id": "54daa2279dd7c6a44b406b8007086474db7f8359", "title": "Predictive Control in Power Electronics and Drives", "text": "Predictive control is a very wide class of controllers that have found rather recent application in the control of power converters. Research on this topic has been increased in the last years due to the possibilities of today's microprocessors used for the control. This paper presents the application of different predictive control methods to power electronics and drives. A simple classification of the most important types of predictive control is introduced, and each one of them is explained including some application examples. Predictive control presents several advantages that make it suitable for the control of power converters and drives. The different control schemes and applications presented in this paper illustrate the effectiveness and flexibility of predictive control."}
{"_id": "da1fa5958aac40af3991eb4bda2ebe4a221be897", "title": "Anomaly Detection using Autoencoders in High Performance Computing Systems", "text": "Anomaly detection in supercomputers is a very difficult problem due to the big scale of the systems and the high number of components. The current state of the art for automated anomaly detection employs Machine Learning methods or statistical regression models in a supervised fashion, meaning that the detection tool is trained to distinguish among a fixed set of behaviour classes (healthy and unhealthy states). We propose a novel approach for anomaly detection in High Performance Computing systems based on a Machine (Deep) Learning technique, namely a type of neural network called autoencoder. The key idea is to train a set of autoencoders to learn the normal (healthy) behaviour of the supercomputer nodes and, after training, use them to identify abnormal conditions. This is different from previous approaches which where based on learning the abnormal condition, for which there are much smaller datasets (since it is very hard to identify them to begin with). We test our approach on a real supercomputer equipped with a fine-grained, scalable monitoring infrastructure that can provide large amount of data to characterize the system behaviour. The results are extremely promising: after the training phase to learn the normal system behaviour, our method is capable of detecting anomalies that have never been seen before with a very good accuracy (values ranging between 88% and 96%)."}
{"_id": "b1b215e30753f6eaa74409819bd229ec4851ae78", "title": "Real-Time Standard Scan Plane Detection and Localisation in Fetal Ultrasound Using Fully Convolutional Neural Networks", "text": "Fetal mid-pregnancy scans are typically carried out according to fixed protocols. Accurate detection of abnormalities and correct biometric measurements hinge on the correct acquisition of clearly defined standard scan planes. Locating these standard planes requires a high level of expertise. However, there is a worldwide shortage of expert sonographers. In this paper, we consider a fully automated system based on convolutional neural networks which can detect twelve standard scan planes as defined by the UK fetal abnormality screening programme. The network design allows real-time inference and can be naturally extended to provide an approximate localisation of the fetal anatomy in the image. Such a framework can be used to automate or assist with scan plane selection, or for the retrospective retrieval of scan planes from recorded videos. The method is evaluated on a large database of 1003 volunteer mid-pregnancy scans. We show that standard planes acquired in a clinical scenario are robustly detected with a precision and recall of 69% and 80%, which is superior to the current state-of-the-art. Furthermore, we show that it can retrospectively retrieve correct scan planes with an accuracy of 71% for cardiac views and 81% for non-cardiac views."}
{"_id": "58f713f4c3c2804bebd4792fd7fb9b69336c471f", "title": "Data Security and Privacy Protection Issues in Cloud Computing", "text": "It is well-known that cloud computing has many potential advantages and many enterprise applications and data are migrating to public or hybrid cloud. But regarding some business-critical applications, the organizations, especially large enterprises, still wouldn't move them to cloud. The market size the cloud computing shared is still far behind the one expected. From the consumers' perspective, cloud computing security concerns, especially data security and privacy protection issues, remain the primary inhibitor for adoption of cloud computing services. This paper provides a concise but all-round analysis on data security and privacy protection issues associated with cloud computing across all stages of data life cycle. Then this paper discusses some current solutions. Finally, this paper describes future research work about data security and privacy protection issues in cloud."}
{"_id": "559e2ae733f4d231e2739dbc6d8d528af6feddf3", "title": "INTEGRATING MOOCs IN BLENDED COURSES", "text": "Recent studies appreciate that \"MOOCs bring an impetus of reform, research and innovation to the Academy\" [9]. Even though MOOCs are usually developed and delivered as independent online courses, experiments to wrap formal university courses around existing MOOCs are reported by teachers and researchers in different articles [3], [4], [8], [17]. This paper describes a new approach, in which the participation of students in different MOOCs was integrated in a blended course run on a social mobile LMS. The topics of MOOCs delivered on specific platforms and having particular characteristics were connected with the Fall 2013 undergraduate course of Web Programming, at University Politehnica Timisoara, Romania, facilitated by the first author, the co-authors providing a shadow/peer facilitation, for tailoring the course scenario. The main parts of this study deal with: a) The reasons to integrate MOOCs in the university course. b) How the course was designed, how the students\u2019 activities on different MOOC platforms were assessed and integrated in the course scenario. c) The results of a survey that evaluates students\u2019 experiences related to MOOCs: a number of MOOC features were assessed [14]; answers to a few problems are also analysed: Did the participation in MOOCs support students to clarify and expand the course issues? What are students' suggestions for a more active participation in MOOCs? By comparing the learning scenarios of MOOCs with the Web Programming blended course, how can the course and its virtual space be improved? Do students consider that the MOOCs phenomenon is important for professional and personal development? The conclusions of the paper can be used by other teachers/instructors for integrating MOOCs in the courses they deliver/facilitate."}
{"_id": "07c8dc37b1061784f3b55cf3ca5d2bc735e1693c", "title": "SQLrand: Preventing SQL Injection Attacks", "text": "We present a practical protection mechanism against SQL injection attacks. Such attacks target databases that are accessible through a web frontend, and take advantage of flaws in the input validation logic of Web components such as CGI scripts. We apply the concept of instruction-set randomization to SQL, creating instances of the language that are unpredictable to the attacker. Queries injected by the attacker will be caught and terminated by the database parser. We show how to use this technique with the MySQL database using an intermediary proxy that translates the random SQL to its standard language. Our mechanism imposes negligible performance overhead to query processing and can be easily retrofitted to existing systems."}
{"_id": "0d1f5807a26286f8a486d7b535d5fc16bd37d86d", "title": "Finding application errors and security flaws using PQL: a program query language", "text": "A number of effective error detection tools have been built in recent years to check if a program conforms to certain design rules. An important class of design rules deals with sequences of events asso-ciated with a set of related objects. This paper presents a language called PQL (Program Query Language) that allows programmers to express such questions easily in an application-specific context. A query looks like a code excerpt corresponding to the shortest amount of code that would violate a design rule. Details of the tar-get application's precise implementation are abstracted away. The programmer may also specify actions to perform when a match is found, such as recording relevant information or even correcting an erroneous execution on the fly.We have developed both static and dynamic techniques to find solutions to PQL queries. Our static analyzer finds all potential matches conservatively using a context-sensitive, flow-insensitive, inclusion-based pointer alias analysis. Static results are also use-ful in reducing the number of instrumentation points for dynamic analysis. Our dynamic analyzer instruments the source program to catch all violations precisely as the program runs and to optionally perform user-specified actions.We have implemented the techniques described in this paper and found 206 errors in 6 large real-world open-source Java applica-tions containing a total of nearly 60,000 classes. These errors are important security flaws, resource leaks, and violations of consis-tency invariants. The combination of static and dynamic analysis proves effective at addressing a wide range of debugging and pro-gram comprehension queries. We have found that dynamic analysis is especially suitable for preventing errors such as security vulner-abilities at runtime."}
{"_id": "02d0ce2e95891570f11bbcfee607587f3fac9a02", "title": "A First Step Towards Automated Detection of Buffer Overrun Vulnerabilities", "text": "We describe a new technique for finding potential buffer overrun vulnerabilities in security-critical C code. The key to success is to use static analysis: we formulate detection of buffer overruns as an integer range analysis problem. One major advantage of static analysis is that security bugs can be eliminated before code is deployed. We have implemented our design and used our prototype to find new remotely-exploitable vulnerabilities in a large, widely deployed software package. An earlier hand audit missed"}
{"_id": "188847872834a63fb435cf3a51eef72046464317", "title": "StackGuard: Automatic Adaptive Detection and Prevention of Buffer-Overflow Attacks", "text": "This paper presents a systematic solution to the persistent problem of buffer overflow attacks. Buffer overflow attacks gained notoriety in 1988 as part of the Morris Worm incident on the Internet. While it is fairly simple to fix individual buffer overflow vulnerabilities, buffer overflow attacks continue to this day. Hundreds of attacks have been discovered, and while most of the obvious vulnerabilities have now been patched, more sophisticated buffer overflow attacks continue to emerge. We describe StackGuard: a simple compiler technique that virtually eliminates buffer overflow vulnerabilities with only modest performance penalties. Privileged programs that are recompiled with the StackGuard compiler extension no longer yield control to the attacker, but rather enter a fail-safe state. These programs require no source code changes at all, and are binary-compatible with existing operating systems and libraries. We describe the compiler technique (a simple patch to gcc), as well as a set of variations on the technique that tradeoff between penetration resistance and performance. We present experimental results of both the penetration resistance and the performance impact of this technique. This research is partially supported by DARPA contracts F3060296-1-0331 and F30602-96-1-0302. Ryerson Polytechnic University"}
{"_id": "3a14ea1fc798843bec6722e6f7997d1ef9714922", "title": "Securing web application code by static analysis and runtime protection", "text": "Security remains a major roadblock to universal acceptance of the Web for many kinds of transactions, especially since the recent sharp increase in remotely exploitable vulnerabilities have been attributed to Web application bugs. Many verification tools are discovering previously unknown vulnerabilities in legacy C programs, raising hopes that the same success can be achieved with Web applications. In this paper, we describe a sound and holistic approach to ensuring Web application security. Viewing Web application vulnerabilities as a secure information flow problem, we created a lattice-based static analysis algorithm derived from type systems and typestate, and addressed its soundness. During the analysis, sections of code considered vulnerable are instrumented with runtime guards, thus securing Web applications in the absence of user intervention. With sufficient annotations, runtime overhead can be reduced to zero. We also created a tool named.WebSSARI (Web application Security by Static Analysis and Runtime Inspection) to test our algorithm, and used it to verify 230 open-source Web application projects on SourceForge.net, which were selected to represent projects of different maturity, popularity, and scale. 69 contained vulnerabilities. After notifying the developers, 38 acknowledged our findings and stated their plans to provide patches. Our statistics also show that static analysis reduced potential runtime overhead by 98.4%."}
{"_id": "8a3a53c768c652b76689f9dde500a07f36d740cb", "title": "An investigation into language model data augmentation for low-resourced STT and KWS", "text": "This paper reports on investigations using two techniques for language model text data augmentation for low-resourced automatic speech recognition and keyword search. Lowresourced languages are characterized by limited training materials, which typically results in high out-of-vocabulary (OOV) rates and poor language model estimates. One technique makes use of recurrent neural networks (RNNs) using word or subword units. Word-based RNNs keep the same system vocabulary, so they cannot reduce the OOV, whereas subword units can reduce the OOV but generate many false combinations. A complementary technique is based on automatic machine translation, which requires parallel texts and is able to add words to the vocabulary. These methods were assessed on 10 languages in the context of the Babel program and NIST OpenKWS evaluation. Although improvements vary across languages with both methods, small gains were generally observed in terms of word error rate reduction and improved keyword search performance."}
{"_id": "d908528685ce3c64b570c21758ce2d1aae30b4db", "title": "Supervised Dynamic and Adaptive Discretization for Rule Mining", "text": "Association rule mining is a well-researched topic in data mining. However, a common limitation with existing algorithms is that they mainly deal with categorical data. In this work we propose a methodology that allows adaptive discretization and quantitative rule discovery in large mixed databases. More specifically, we propose a top-down, recursive approach to find ranges of values for continuous attributes that result in high confidence rules. Our approach allows any given continuous attribute to be discretized in multiple ways. Compared to a global discretization scheme, our approach makes it possible to capture different intervariable interactions. We applied our algorithm to various synthetic and real datasets, including Intel manufacturing data that motivated our research. The experimental results and analysis indicate that our algorithm is capable of finding more meaningful rules for multivariate data, in addition to being more efficient than the state-of-the-art techniques."}
{"_id": "33f57f2f632d89950909b31c75fae7317e6ea0cb", "title": "Modeling social influence through network autocorrelation: constructing the weight matrix", "text": "Many physical and social phenomena are embedded within networks of interdependencies, the so-called \u2018context\u2019 of these phenomena. In network analysis, this type of process is typically modeled as a network autocorrelation model. Parameter estimates and inferences based on autocorrelation models, hinge upon the chosen specification of weight matrix W, the elements of which represent the influence pattern present in the network. In this paper I discuss how social influence processes can be incorporated in the specification of W. Theories of social influence center around \u2018communication\u2019 and \u2018comparison\u2019; it is discussed how these can be operationalized in a network analysis context. Starting from that, a series of operationalizations of W is discussed. Finally, statistical tests are presented that allow an analyst to test various specifications against one another or pick the best fitting model from a set of models. \u00a9 2002 Elsevier Science B.V. All rights reserved."}
{"_id": "eb9ab73e669195e1d3e73addc0028ffc08aa8da7", "title": "Application of generalized Bagley-polygon four-port power dividers to designing microwave dual-band bandpass planar filters", "text": "A new type of microwave dual-band bandpass planar filter based on signal-interference techniques is reported. The described filter approach consists of transversal filtering sections made up of generalized Bagley-polygon four-port power dividers. This transversal section, by exploiting feedforward signal-interaction concepts, enables dual-band bandpass filtering transfer functions with several transmission zeros to be obtained. A set of closed formulas and guidelines for the analytical synthesis of the dual-passband transversal filtering section are derived. Moreover, its practical usefulness is proven with the development and testing of a 2.75/3.25-GHz dual-band microstrip prototype."}
{"_id": "7f270d66e0e82040b82dfcef6ad90a1e78e13f04", "title": "Measuring user acceptance of emerging information technologies: an assessment of possible method biases", "text": "The measurement scales for the perceived usefulness and perceived ease of we constructs introduced by Davis [12/ have become widely used for forecasting user acceptance of emerging information technologies. An experiment was conducted to examine whether grouping of items caused artifactual inflation of reliability and validity measures. We found support for our hypothesis that the reliability and validity stemmed notfrom item grouping butfrom the constructs of perceived usefulness and perceived ease of use being clearly defined, and the items used co measure each of these consn-ucts clearly capturing the essence of the conscrucI."}
{"_id": "8cd7d1461a6a0dad8dc01868e1948cce6ef89273", "title": "Effect of gas type and flow rate on Cu free air ball formation in thermosonic wire bonding", "text": "0026-2714/$ see front matter 2010 Elsevier Ltd. A doi:10.1016/j.microrel.2010.02.023 * Corresponding author. E-mail address: apequegn@uwaterloo.ca (A. Peque The development of novel Cu wires for thermosonic wire bonding is time consuming and the effects of shielding gas on the electrical flame off (EFO) process is not fully understood. An online method is used in this study for characterizing Cu free air balls (FABs) formed with different shielding gas types and flow rates. The ball heights before (HFAB) and after deformation (Hdef) are responses of the online method and measured as functions of gas flow rate. Sudden changes in the slopes of these functions, a non-parallelity of the two functions, and a large standard deviation of the HFAB measurements all identify FAB defects. Using scanning electron microscope (SEM) images in parallel with the online measurements golf-club shaped and pointed shaped FABs are found and the conditions at which they occur are identified. In general FAB defects are thought to be caused by changes in surface tension of the molten metal during EFO due to inhomogeneous cooling or oxidation. It is found that the convective cooling effect of the shielding gas increases with flow rate up to 0.65 l/min where the bulk temperature of a thermocouple at the EFO site decreases by 19 C. Flow rates above 0.7 l/min yield an undesirable EFO process due to an increase in oxidation which can be explained by a change in flow from laminar to turbulent. The addition of H2 to the shielding gas reduces the oxidation of the FAB as well as providing additional thermal energy during EFO. Different Cu wire materials yield different results where some perform better than others. 2010 Elsevier Ltd. All rights reserved."}
{"_id": "f3a67725d61ae77afaa2dd5e3763b78800c78321", "title": "Enclosed: a component-centric interface for designing prototype enclosures", "text": "This paper explores the problem of designing enclosures (or physical cases) that are needed for prototyping electronic devices. We present a novel interface that uses electronic components as handles for designing the 3D shape of the enclosure. We use the .NET Gadgeteer platform as a case study of this problem, and implemented a proof-of-concept system for designing enclosures for Gadgeteer components. We show examples of enclosures designed and fabricated with our system."}
{"_id": "4a58e3066f12bb86d7aef2776e9d8a2a4e4daf3e", "title": "Evaluation Techniques for Storage Hierarchies", "text": "This paper introduces an efficient technique called \u201cstack processing\u201d that can be used in the cost-performance evaluation of a large class of storage hierarchies. The technique depends on a classification of page replacement algorithms as \u201cstack algorithms\u201d for which various properties are derived. These properties may be of use in the general areas of program modeling and system analysis, as well as in the evaluation of storage hierarchies. For a better understanding of storage hierarchies, we briefly review some basic concepts of their design."}
{"_id": "671697cf84dfbe53a1cb0bed29b9f649c653bbc5", "title": "Multispectral Deep Neural Networks for Pedestrian Detection", "text": "Multispectral pedestrian detection is essential for around-the-clock applications, e.g., surveillance and autonomous driving. We deeply analyze Faster R-CNN for multispectral pedestrian detection task and then model it into a convolutional network (ConvNet) fusion problem. Further, we discover that ConvNet-based pedestrian detectors trained by color or thermal images separately provide complementary information in discriminating human instances. Thus there is a large potential to improve pedestrian detection by using color and thermal images in DNNs simultaneously. We carefully design four ConvNet fusion architectures that integrate two-branch ConvNets on different DNNs stages, all of which yield better performance compared with the baseline detector. Our experimental results on KAIST pedestrian benchmark show that the Halfway Fusion model that performs fusion on the middle-level convolutional features outperforms the baseline method by 11% and yields a missing rate 3.5% lower than the other proposed architectures."}
{"_id": "35e23d6461ac90e0fd9cc7e199096ee380908b61", "title": "Traffic Analysis of Encrypted Messaging Services: Apple iMessage and Beyond", "text": "Instant messaging services are quickly becoming the most dominant form of communication among consumers around the world. Apple iMessage, for example, handles over 2 billion messages each day, while WhatsApp claims 16 billion messages from 400 million international users. To protect user privacy, many of these services typically implement end-to-end and transport layer encryption, which are meant to make eavesdropping infeasible even for the service providers themselves. In this paper, however, we show that it is possible for an eavesdropper to learn information about user actions, the language of messages, and even the length of those messages with greater than 96% accuracy despite the use of state-of-the-art encryption technologies simply by observing the sizes of encrypted packets. While our evaluation focuses on Apple iMessage, the attacks are completely generic and we show how they can be applied to many popular messaging services, including WhatsApp, Viber, and Telegram."}
{"_id": "f1d71d041616f9ea2952258da3db522d53163f60", "title": "DroidCat: Effective Android Malware Detection and Categorization via App-Level Profiling", "text": "Most existing Android malware detection and categorization techniques are static approaches, which suffer from evasion attacks, such as obfuscation. By analyzing program behaviors, dynamic approaches are potentially more resilient against these attacks. Yet existing dynamic approaches mostly rely on characterizing system calls which are subject to system-call obfuscation. This paper presents DroidCat, a novel dynamic app classification technique, to complement existing approaches. By using a diverse set of dynamic features based on method calls and inter-component communication (ICC) Intents without involving permission, app resources, or system calls while fully handling reflection, DroidCat achieves superior robustness than static approaches as well as dynamic approaches relying on system calls. The features were distilled from a behavioral characterization study of benign versus malicious apps. Through three complementary evaluation studies with 34 343 apps from various sources and spanning the past nine years, we demonstrated the stability of DroidCat in achieving high classification performance and superior accuracy compared with the two state-of-the-art peer techniques that represent both static and dynamic approaches. Overall, DroidCat achieved 97% F1-measure accuracy consistently for classifying apps evolving over the nine years, detecting or categorizing malware, 16%\u201327% higher than any of the two baselines compared. Furthermore, our experiments with obfuscated benchmarks confirmed higher robustness of DroidCat over these baseline techniques. We also investigated the effects of various design decisions on DroidCat\u2019s effectiveness and the most important features for our dynamic classification. We found that features capturing app execution structure such as the distribution of method calls over user code and libraries are much more important than typical security features such as sensitive flows."}
{"_id": "98e01bbd62e39e2c4bb81d690cb24523ae9c1e9d", "title": "How We Transmit Memories to Other Brains: Constructing Shared Neural Representations Via Communication", "text": "Humans are able to mentally construct an episode when listening to another person's recollection, even though they themselves did not experience the events. However, it is unknown how strongly the neural patterns elicited by mental construction resemble those found in the brain of the individual who experienced the original events. Using fMRI and a verbal communication task, we traced how neural patterns associated with viewing specific scenes in a movie are encoded, recalled, and then transferred to a group of na\u00efve listeners. By comparing neural patterns across the 3 conditions, we report, for the first time, that event-specific neural patterns observed in the default mode network are shared across the encoding, recall, and construction of the same real-life episode. This study uncovers the intimate correspondences between memory encoding and event construction, and highlights the essential role our common language plays in the process of transmitting one's memories to other brains."}
{"_id": "7a4a716d8675311985b281f5aa339a27b9055c0f", "title": "Development of a new hybrid ANN for solving a geotechnical problem related to tunnel boring machine performance", "text": "Prediction of tunnel boring machine (TBM) performance parameters can be caused to reduce the risks associated with tunneling projects. This study is aimed to introduce a new hybrid model namely Firefly algorithm (FA) combined by artificial neural network (ANN) for solving problems in the field of geotechnical engineering particularly for estimation of penetration rate (PR) of TBM. For this purpose, the results obtained from the field observations and laboratory tests were considered as model inputs to estimate PR of TBMs operated in a water transfer tunnel in Malaysia. Five rock mass and material properties (rock strength, tensile strength of rock, rock quality designation, rock mass rating and weathering zone) and two machine factors (trust force and revolution per minute) were used in the new model for predicting PA. FA algorithm was used to optimize weight and bias of ANN to obtain a higher level of accuracy. A series of hybrid FA-ANN models using the most influential parameters on FA were constructed to estimate PR. For comparison, a simple ANN model was built to predict PR of TBM. This ANN model was improved on the basis of new ways. By doing this, the best ANN model was chosen for comparison purposes. After implementing the best models for two methods, the data were divided into five separate categories. This will minimize the chance of randomness. Then the best models were applied for these new categories. The results demonstrated that new hybrid intelligent model is able to provide higher performance capacity for predicting. Based on the coefficient of determination 0.948 and 0.936 and 0.885 and 0.889 for training and testing datasets of FA-ANN and ANN models, respectively, it was found that the new hybrid model can be introduced as a superior model for solving geotechnical engineering problems."}
{"_id": "b9e7cd6f5a330a3466198fa5d20aa134b876f221", "title": "A whole-body control framework for humanoids operating in human environments", "text": "Tomorrow's humanoids will operate in human environments, where efficient manipulation and locomotion skills, and safe contact interactions are critical design factors. We report here our recent efforts into these issues, materialized into a whole-body control framework. This framework integrates task-oriented dynamic control and control prioritization allowing to control multiple task primitives while complying with physical and movement-related constraints. Prioritization establishes a hierarchy between control spaces, assigning top priority to constraint-handling tasks, while projecting operational tasks in the null space of the constraints, and controlling the posture within the residual redundancy. This hierarchy is directly integrated at the kinematic level, allowing the program to monitor behavior feasibility at runtime. In addition, prioritization allows us to characterize the dynamic behavior of the individual control primitives subject to the constraints, and to synthesize operational space controllers at multiple levels. To complete this framework, we have developed free-floating models of the humanoid and incorporate the associated dynamics and the effects of the resulting support contacts into the control hierarchy. As part of a long term collaboration with Honda, we are currently implementing this framework into the humanoid robot Asimo"}
{"_id": "edeab73e5868ab3a5bb63971ee7329aa5c9da90b", "title": "Path diversification for future internet end-to-end resilience and survivability", "text": "Path Diversification is a new mechanism that can be used to select multiple paths between a given ingress and egress node pair using a quantified diversity measure to achieve maximum flow reliability. The path diversification mechanism is targeted at the end-to-end layer, but can be applied at any level for which a path discovery service is available. Path diversification also takes into account service requirements for low-latency or maximal reliability in selecting appropriate paths. Using this mechanism will allow future internetworking architectures to exploit naturally rich physical topologies to a far greater extent than is possible with shortest-path routing or equal-cost load balancing. We describe the path diversity metric and its application at various aggregation levels, and apply the path diversification process to 13 real-world network graphs as well as 4 synthetic topologies to asses the gain in flow reliability. Based on the analysis of flow reliability across a range of networks, we then extend our path diversity metric to create a composJ. P. Rohrer* E-mail: jprohrer@nps.edu Tel.: +1 831 656 3196 Computer Science Department, Naval Postgraduate School Monterey, California, USA A. Jabbar* E-mail: jabbar@ge.com Advanced Communication Systems Lab, GE Global Research Niskayuna, New York, USA J. P. G. Sterbenz E-mail: jpgs@{ittc.ku.edu|comp.lancs.ac.uk} Tel.: +1 785 864 7890 Information and Telecommunication Technology Center, The University of Kansas Lawrence, Kansas, USA and Lancaster University, Lancaster, UK * Work performed while at The University of Kansas ite compensated total graph diversity metric that is representative of a particular topology\u2019s survivability with respect to distributed simultaneous link and node failures. We tune the accuracy of this metric having simulated the performance of each topology under a range of failure severities, and present the results. The topologies used are from nationalscale backbone networks with a variety of characteristics, which we characterize using standard graph-theoretic metrics. The end result is a compensated total graph diversity metric that accurately predicts the survivability of a given network topology."}
{"_id": "20a531fb5b8b7d978f8f24c18c51ff58c949b60d", "title": "Hyperbolic Graph Generator", "text": "Networks representing many complex systems in nature and society share some common structural properties like heterogeneous degree distributions and strong clustering. Recent research on network geometry has shown that those real networks can be adequately modeled as random geometric graphs in hyperbolic spaces. In this paper, we present a computer program to generate such graphs. Besides real-world-like networks, the program can generate random graphs from other well-known graph ensembles, such as the soft configuration model, random geometric graphs on a circle, or Erd\u0151s-R\u00e9nyi random graphs. The simulations show a good match between the expected values of different network structural properties and the corresponding empirical values measured in generated graphs, confirming the accurate behavior of the program."}
{"_id": "8fb7639148d92779962be1b3b27761e9fe0a15ee", "title": "Sentiment analysis on Twitter posts: An analysis of positive or negative opinion on GoJek", "text": "Online transportation, such as GoJek, is preferred by many users especially in areas where public transport is difficult to access or when there is a traffic jam. Twitter is a popular social networking site in Indonesia that can generate information from users' tweets. In this study, we proposed a system that detect public sentiments based on Twitter post about online transportation services especially GoJek. The system will collect tweets, analyze the tweets sentiments using SVM, and group them into positive and negative sentiment."}
{"_id": "5a37e085fd1ce6d8c49609ad5688292b5939d059", "title": "Click Through Rate Prediction for Contextual Advertisment Using Linear Regression", "text": "This research presents an innovative and unique way of solving the advertisement prediction problem which is considered as a learning problem over the past several years. Online advertising is a multi-billion-dollar industry and is growing every year with a rapid pace. The goal of this research is to enhance click through rate of the contextual advertisements using Linear Regression. In order to address this problem, a new technique propose in this paper to predict the CTR which will increase the overall revenue of the system by serving the advertisements more suitable to the viewers with the help of feature extraction and displaying the advertisements based on context of the publishers. The important steps include the data collection, feature extraction, CTR prediction and advertisement serving. The statistical results obtained from the dynamically used technique show an efficient outcome by fitting the data close to perfection for the LR technique using optimized feature selection. Keywords-Click Through Rate(CTR), Contextual Advertisements, Machine Learning, Web advertisements, Regression Problem."}
{"_id": "62566f0b005f9bf10b3ac6487dcacd21f97265fe", "title": "ICrafter: A Service Framework for Ubiquitous Computing Environments", "text": "In this paper, we propose ICrafter, a framework for services and their user interfaces in a class of ubiquitous computing environments. The chief objective of ICrafter is to let users flexibly interact with the services in their environment using a variety of modalities and input devices. We extend existing service frameworks in three ways. First, to offload services and user input devices, ICrafter provides infrastructure support for UI selection, generation, and adaptation. Second, ICrafter allows UIs to be associated with service patterns for on-the-fly aggregation of services. Finally, ICrafter facilitates the design of service UIs that are portable but still reflect the context of the local environment. In addition, we also focus on the system properties such as incremental deployability and robustness that are critical for ubiquitous computing environments. We describe the goals and architecture of ICrafter, a prototype implementation that validates its design, and the key lessons learnt from our"}
{"_id": "61d530578b8b91157cda18c5097ea97ac2f6910e", "title": "A signal detection method for temporal variation of adverse effect with vaccine adverse event reporting system data", "text": "BACKGROUND\nTo identify safety signals by manual review of individual report in large surveillance databases is time consuming; such an approach is very unlikely to reveal complex relationships between medications and adverse events. Since the late 1990s, efforts have been made to develop data mining tools to systematically and automatically search for safety signals in surveillance databases. Influenza vaccines present special challenges to safety surveillance because the vaccine changes every year in response to the influenza strains predicted to be prevalent that year. Therefore, it may be expected that reporting rates of adverse events following flu vaccines (number of reports for a specific vaccine-event combination/number of reports for all vaccine-event combinations) may vary substantially across reporting years. Current surveillance methods seldom consider these variations in signal detection, and reports from different years are typically collapsed together to conduct safety analyses. However, merging reports from different years ignores the potential heterogeneity of reporting rates across years and may miss important safety signals.\n\n\nMETHOD\nReports of adverse events between years 1990 to 2013 were extracted from the Vaccine Adverse Event Reporting System (VAERS) database and formatted into a three-dimensional data array with types of vaccine, groups of adverse events and reporting time as the three dimensions. We propose a random effects model to test the heterogeneity of reporting rates for a given vaccine-event combination across reporting years. The proposed method provides a rigorous statistical procedure to detect differences of reporting rates among years. We also introduce a new visualization tool to summarize the result of the proposed method when applied to multiple vaccine-adverse event combinations.\n\n\nRESULT\nWe applied the proposed method to detect safety signals of FLU3, an influenza vaccine containing three flu strains, in the VAERS database. We showed that it had high statistical power to detect the variation in reporting rates across years. The identified vaccine-event combinations with significant different reporting rates over years suggested potential safety issues due to changes in vaccines which require further investigation.\n\n\nCONCLUSION\nWe developed a statistical model to detect safety signals arising from heterogeneity of reporting rates of a given vaccine-event combinations across reporting years. This method detects variation in reporting rates over years with high power. The temporal trend of reporting rate across years may reveal the impact of vaccine update on occurrence of adverse events and provide evidence for further investigations."}
{"_id": "0cbda5365adc971b0d0ed51c0cb4bfcf0013959d", "title": "Measuring effects of music, noise, and healing energy using a seed germination bioassay.", "text": "OBJECTIVE\nTo measure biologic effects of music, noise, and healing energy without human preferences or placebo effects using seed germination as an objective biomarker.\n\n\nMETHODS\nA series of five experiments were performed utilizing okra and zucchini seeds germinated in acoustically shielded, thermally insulated, dark, humid growth chambers. Conditions compared were an untreated control, musical sound, pink noise, and healing energy. Healing energy was administered for 15-20 minutes every 12 hours with the intention that the treated seeds would germinate faster than the untreated seeds. The objective marker was the number of seeds sprouted out of groups of 25 seeds counted at 12-hour intervals over a 72-hour growing period. Temperature and relative humidity were monitored every 15 minutes inside the seed germination containers. A total of 14 trials were run testing a total of 4600 seeds.\n\n\nRESULTS\nMusical sound had a highly statistically significant effect on the number of seeds sprouted compared to the untreated control over all five experiments for the main condition (p < 0.002) and over time (p < 0.000002). This effect was independent of temperature, seed type, position in room, specific petri dish, and person doing the scoring. Musical sound had a significant effect compared to noise and an untreated control as a function of time (p < 0.03) while there was no significant difference between seeds exposed to noise and an untreated control. Healing energy also had a significant effect compared to an untreated control (main condition, p < 0.0006) and over time (p < 0.0001) with a magnitude of effect comparable to that of musical sound.\n\n\nCONCLUSION\nThis study suggests that sound vibrations (music and noise) as well as biofields (bioelectromagnetic and healing intention) both directly affect living biologic systems, and that a seed germination bioassay has the sensitivity to enable detection of effects caused by various applied energetic conditions."}
{"_id": "c55511ba441f4cbbe8ed68d93bedb79c915023f3", "title": "Indoor Localization Algorithm based on Fingerprint Using a Single Fifth Generation Wi-Fi Access Point", "text": "This paper proposes an indoor positioning system (IPS) based on WLAN using a single fifth-generation (5G) Wi-Fi access point. The proposed method uses fingerprint and classification models based on KNN (K-nearest neighbor) and Bayes rule. The fingerprint is formed by beam RSS (Received Signal Strength) samples, collected in some 2D locations of the indoor environment. Numerical simulations shown that using the best beam samples, it is possible to locate the stationary user's mobile device with average error less than 2.5 m."}
{"_id": "6e5e76268f292929ccba794ea4dcbb4c68899df7", "title": "Decision Trees for Mining Data Streams Based on the Gaussian Approximation", "text": "Since the Hoeffding tree algorithm was proposed in the literature, decision trees became one of the most popular tools for mining data streams. The key point of constructing the decision tree is to determine the best attribute to split the considered node. Several methods to solve this problem were presented so far. However, they are either wrongly mathematically justified (e.g., in the Hoeffding tree algorithm) or time-consuming (e.g., in the McDiarmid tree algorithm). In this paper, we propose a new method which significantly outperforms the McDiarmid tree algorithm and has a solid mathematical basis. Our method ensures, with a high probability set by the user, that the best attribute chosen in the considered node using a finite data sample is the same as it would be in the case of the whole data stream."}
{"_id": "b22b4817757778bdca5b792277128a7db8206d08", "title": "SCAN: Learning Hierarchical Compositional Visual Concepts", "text": "The seemingly infinite diversity of the natural world arises from a relatively small set of coherent rules, such as the laws of physics or chemistry. We conjecture that these rules give rise to regularities that can be discovered through primarily unsupervised experiences and represented as abstract concepts. If such representations are compositional and hierarchical, they can be recombined into an exponentially large set of new concepts. This paper describes SCAN (Symbol-Concept Association Network), a new framework for learning such abstractions in the visual domain. SCAN learns concepts through fast symbol association, grounding them in disentangled visual primitives that are discovered in an unsupervised manner. Unlike state of the art multimodal generative model baselines, our approach requires very few pairings between symbols and images and makes no assumptions about the form of symbol representations. Once trained, SCAN is capable of multimodal bi-directional inference, generating a diverse set of image samples from symbolic descriptions and vice versa. It also allows for traversal and manipulation of the implicit hierarchy of visual concepts through symbolic instructions and learnt logical recombination operations. Such manipulations enable SCAN to break away from its training data distribution and imagine novel visual concepts through symbolically instructed recombination of previously learnt concepts."}
{"_id": "2c9bd66e9782af58a61196884a512ebbeef16859", "title": "Data fusion schemes for cooperative spectrum sensing in cognitive radio networks", "text": "Cooperative spectrum sensing has proven its efficiency to detect spectrum holes in cognitive radio network (CRN) by combining sensing information of multiple cognitive radio users. In this paper, we study different fusion schemes that can be implemented in fusion center. Simulation comparison between these schemes based on hard, quantized and soft fusion rules are conducted. It is shown through computer simulation that the soft combination scheme outperforms the hard one at the cost of more complexity; the quantized combination scheme provides a good tradeoff between detection performance and complexity. In the paper, we also analyze a quantized combination scheme based on a tree-bit quantization and compare its performance with some hard and soft combination schemes."}
{"_id": "2ccbdf9e9546633ee58009e0c0f3eaee75e6f576", "title": "The Meteor metric for automatic evaluation of machine translation", "text": "The Meteor Automatic Metric for Machine Translation evaluation, originally developed and released in 2004, was designed with the explicit goal of producing sentence-level scores which correlate well with human judgments of translation quality. Several key design decisions were incorporated into Meteor in support of this goal. In contrast with IBM\u2019s Bleu, which uses only precision-based features, Meteor uses and emphasizes recall in addition to precision, a property that has been confirmed by several metrics as being critical for high correlation with human judgments. Meteor also addresses the problem of reference translation variability by utilizing flexible word matching, allowing for morphological variants and synonyms to be taken into account as legitimate correspondences. Furthermore, the feature ingredients within Meteor are parameterized, allowing for the tuning of the metric\u2019s free parameters in search of values that result in optimal correlation with human judgments. Optimal parameters can be separately tuned for different types of human judgments and for different languages. We discuss the initial design of the Meteor metric, subsequent improvements, and performance in several independent evaluations in recent years."}
{"_id": "34ddd8865569c2c32dec9bf7ffc817ff42faaa01", "title": "A Stochastic Approximation Method", "text": "Let M(x) denote the expected value at level x of the response to a certain experiment. M(x) is assumed to be a monotone function of x but is unknown tot he experiment, and it is desire to find the solution x=0 of the equation M(x) = a, where x is a given constant. we give a method for making successive experiments at levels x1, x2,... in such a way that x, will tend to 0 in probability."}
{"_id": "3d07b5087e53c6f7c228b3c7e769494527be228e", "title": "A Study of Translation Edit Rate with Targeted Human Annotation", "text": "We examine a new, intuitive measure for evaluating machine-translation output that avoids the knowledge intensiveness of more meaning-based approaches, and the labor-intensiveness of human judgments. Translation Edit Rate (TER) measures the amount of editing that a human would have to perform to change a system output so it exactly matches a reference translation. We show that the single-reference variant of TER correlates as well with human judgments of MT quality as the four-reference variant of BLEU. We also define a human-targeted TER (or HTER) and show that it yields higher correlations with human judgments than BLEU\u2014even when BLEU is given human-targeted references. Our results indicate that HTER correlates with human judgments better than HMETEOR and that the four-reference variants of TER and HTER correlate with human judgments as well as\u2014or better than\u2014a second human judgment does."}
{"_id": "7c18e6cad2f9e23036ab28689e27d9f1176d65a3", "title": "A process study of computer-aided translation", "text": "We investigate novel types of assistance for human translators, based on statistical machine translation methods. We developed the computer-aided tool Caitra that makes suggestions for sentence completion, shows word and phrase translation options, and allows postediting of machine translation output. We carried out a study of the translation process that involved non-professional translators that were native in either French or English and recorded their interaction with the tool. Users translated 192 sentences from French news stories into English. Most translators were faster and better when assisted by our tool. A detailed examination of the logs also provides insight into the human translation process, such as time spent on different activities and length of pauses."}
{"_id": "85b03c6b6c921c674b4c0e463de0d3f8c27ef491", "title": "Miniaturized UWB Log-Periodic Square Fractal Antenna", "text": "In this letter, the log-periodic square fractal geometry is presented for the design of a miniaturized patch antenna for the ultra-wideband (UWB) services (3.1-10.6 GHz). A miniaturization factor of 23% is achieved with a constant and stable gain in the desired band. The radiation pattern is broadside, which finds suitable applications in the UWB radars and medical imaging. Furthermore, the time-domain performance of the proposed antenna is investigated. A prototype model of the proposed antenna is fabricated and measured as a proof of concept."}
{"_id": "6373298f14c7472dbdecc3d77439853e39ec216f", "title": "Analysis, design, and performance evaluation of asymmetrical half-bridge flyback converter for universal-line-voltage-range applications", "text": "The asymmetrical half-bridge (AHB) flyback converter is an attractive topology for operation at higher switching frequencies because it can operate with zero-voltage switching of the primary-side switches and zero-current switching of the secondary-side rectifier. In this paper, a detailed analysis and design procedure of the AHB flyback converter for the universal-line-voltage-range applications is presented. The performance of the AHB flyback converter is evaluated by loss analysis based on the simulation waveforms obtained in Simplis and experimentally verified on a laboratory prototype of a 65-W (19.5-V, 3.33-A) universal-line-voltage-range adapter."}
{"_id": "5f8b4c3cd03421e04f20d3cb98aee745c4d58216", "title": "Resistance to Change : The Moderating Effects of Leader-Member Exchange and Role Breadth Self-Efficacy", "text": "The prevalence of resistance during change initiatives is well recognized in the change management literature. The implementation of the lean production system is no exception. It often requires substantial changes to processes and the way people work. As such, understanding how to manage this resistance is important. One view argues that the extent of resistance during change depends on the characteristics of the change process. This view posits that resistance can be reduced if organizations manage information flow, create room for participation and develop trust in management. In addition, this paper also proposes that is Leader-Member Exchange (LMX) and Role Breadth Self-Efficacy (RBSE) moderate the effect on the employees\u2019 resistance to change. \uf020"}
{"_id": "97225355a9cbc284939fb0193fddfd41ac45ee71", "title": "Real-time wireless vibration monitoring system using LabVIEW", "text": "Vibration analysis provides relevant information about abnormal working condition of machine parts. Vibration measurement is prerequisite for vibration analysis which is used for condition monitoring of machinery. Also, wireless vibration monitoring has many advantages over wired monitoring. This Paper presents, implementation of a reliable and low cost wireless vibration monitoring system. Vibration measurement has been done using 3-Axis digital output MEMS Accelerometer sensor. This sensor can sense vibrations in the range 0.0156g to 8g where, 1g is 9.81m/s2. Accelerometer Sensor is interfaced with Arduino-derived microcontroller board having Atmel's AT-mega328p microcontroller. The implemented system uses ZigBee communication protocol i.e. standard IEEE 802.15.4, for wireless communication between Sensor Unit and Vibration Monitoring Unit. The wireless communication has been done using XBee RF modules. National Instruments's LabVIEW software has been used for development of graphical user interface, data-logging and alarm indication on the PC. Experimental results show continuous real-time monitoring of machine's vibrations on charts. These results, along with data-log file have been used for vibration analysis. This analysis is used to ensure safe working condition of machinery and used in predictive maintenance."}
{"_id": "5badb3cf19a724327f62991e8616772b0222858b", "title": "Big Data Cognitive Storage: An Optimum Way of handling the Educational Data in Indian Scenario", "text": "The concept of big data has been incorporated in majority of areas. The educational sector has plethora of data especially in online education which plays a vital in modern education. Moreover digital learning which comprises of data and analytics contributes significantly to enhance teaching and learning. The key challenge for handling such data can be a costly affair. IBM has introduced the technology \"Cognitive Storage\" which ensures that the most relevant information is always on hand. This technology governs the incoming data, stores the data in definite media, application of levels of data protection, policies for the lifecycle and retention of different classes of data. This technology can be very beneficial for online learning in Indian scenario. This technology will be very beneficial in Indian society so as to store more information for the upliftment of the students\u2019 knowledge."}
{"_id": "b521b6c9983b3e74355be32ac3082beb9f2a291d", "title": "Development of an Assessment Model for Industry 4.0: Industry 4.0-MM", "text": "The application of new technologies in the manufacturing environment is ushering a new era referred to as the 4th industrial revolution, and this digital transformation appeals to companies due to various competitive advantages it provides. Accordingly, there is a fundamental need for assisting companies in the transition to Industry 4.0 technologies/practices, and guiding them for improving their capabilities in a standardized, objective, and repeatable way. Maturity Models (MMs) aim to assist organizations by providing comprehensive guidance. Therefore, the literature is reviewed systematically with the aim of identifying existing studies related to MMs proposed in the context of Industry 4.0. Seven identified MMs are analyzed by comparing their characteristics of scope, purpose, completeness, clearness, and objectivity. It is concluded that none of them satisfies all expected criteria. In order to satisfy the need for a structured Industry 4.0 assessment/maturity model, SPICE-based Industry 4.0-MM is proposed in this study. Industry 4.0-MM has a holistic approach consisting of the assessment of process transformation, application management, data governance, asset management, and organizational alignment areas. The aim is to create a common base for performing an assessment of the establishment of Industry 4.0 technologies, and to guide companies towards achieving a higher maturity stage in order to maximize the economic benefits of Industry 4.0. Hence, Industry 4.0-MM provides standardization in continuous benchmarking and improvement of businesses in the manufacturing industry."}
{"_id": "f5fb1d2992900d68ddad8ebb7fa3e1b941b985dd", "title": "Exosomes in tumor microenvironment: novel transporters and biomarkers", "text": "Tumor microenvironment (TME) plays an integral part in the biology of cancer, participating in tumor initiation, progression, and response to therapy. Exosome is an important part of TME. Exosomes are small vesicles formed in vesicular bodies with a diameter of 30\u2013100\u00a0nm and a classic \u201ccup\u201d or \u201cdish\u201d morphology. They can contain microRNAs, mRNAs, DNA fragments and proteins, which are shuttled from a donor cell to recipient cells. Exosomes secreted from tumor cells are called tumor-derived (TD) exosomes. There is emerging evidence that TD exosomes can construct a fertile environment to support tumor proliferation, angiogenesis, invasion and premetastatic niche preparation. TD exosomes also may facilitate tumor growth and metastasis by inhibiting immune surveillance and by increasing chemoresistance via removal of chemotherapeutic drugs. Therefore, TD-exosomes might be potential targets for therapeutic interventions via their modification or removal. For example, exosomes can serve as specific delivery vehicles to tumors of drugs, small molecules, or agents of prevention and gene therapy. Furthermore, the biomarkers detected in exosomes of biological fluids imply a potential for exosomes in the early detection and diagnosis, prediction of therapeutic efficacy, and determining prognosis of cancer. Although exosomes may serve as cancer biomarkers and aid in the treatment of cancer, we have a long way to go before we can further enhance the anti-tumor therapy of exosomes and develop exosome-based cancer diagnostic and therapeutic strategies."}
{"_id": "5b47d47a6b9da9bc77da3ab15257f5d30ff3f489", "title": "Weakly supervised named entity classification", "text": "In this paper, we describe a new method for the problem of named entity classification for specialized or technical domains, using distant supervision. Our approach relies on a simple observation: in some specialized domains, named entities are almost unambiguous. Thus, given a seed list of names of entities, it is cheap and easy to obtain positive examples from unlabeled texts using a simple string match. Those positive examples can then be used to train a named entity classifier, by using the PU learning paradigm, which is learning from positive and unlabeled examples. We introduce a new convex formulation to solve this problem, and apply our technique in order to extract named entities from financial reports corresponding to healthcare companies."}
{"_id": "c986bb0fbb101a36dbec97e2b49475897298d295", "title": "An Efficient Method of Number Plate Extraction from Indian Vehicles Image", "text": "Automatic Number Plate Recognition (ANPR) is an image-processing technology that identifies vehicles by their number plates without direct human intervention. It is an application of computer vision and important area of research due to its many applications. The main process of ANPR is divided into four stages. This paper presents a simple and efficient method for the extraction of number plate from the vehicle image based on morphological operations, thresholding and sobel edge detection, and the connected component analysis."}
{"_id": "16c9604d0fc53dc7f21fb31cbc7fab6bd9bdddd6", "title": "Information fusion for wireless sensor networks: Methods, models, and classifications", "text": "Wireless sensor networks produce a large amount of data that needs to be processed, delivered, and assessed according to the application objectives. The way these data are manipulated by the sensor nodes is a fundamental issue. Information fusion arises as a response to process data gathered by sensor nodes and benefits from their processing capability. By exploiting the synergy among the available data, information fusion techniques can reduce the amount of data traffic, filter noisy measurements, and make predictions and inferences about a monitored entity. In this work, we survey the current state-of-the-art of information fusion by presenting the known methods, algorithms, architectures, and models of information fusion, and discuss their applicability in the context of wireless sensor networks."}
{"_id": "b1ec21386d1573b9a9ad791c434aca576378ee9b", "title": "Leadership style and patient safety: implications for nurse managers.", "text": "OBJECTIVE\nThe purpose of this study was to explore the relationship between nurse manager (NM) leadership style and safety climate.\n\n\nBACKGROUND\nNursing leaders are needed who will change the environment and increase patient safety. Hospital NMs are positioned to impact day-to-day operations. Therefore, it is essential to inform nurse executives regarding the impact of leadership style on patient safety.\n\n\nMETHODS\nA descriptive correlational study was conducted in 41 nursing departments across 9 hospitals. The hospital unit safety climate survey and multifactorial leadership questionnaire were completed by 466 staff nurses. Bivariate and regression analyses were conducted to determine how well leadership style predicted safety climate.\n\n\nRESULTS\nTransformational leadership style was demonstrated as a positive contributor to safety climate, whereas laissez-faire leadership style was shown to negatively contribute to unit socialization and a culture of blame.\n\n\nCONCLUSIONS\nNursing leaders must concentrate on developing transformational leadership skills while also diminishing negative leadership styles."}
{"_id": "e66e7dbbd10861d346ea00594c0df7a8605c7cf5", "title": "Iterative Attention Mining for Weakly Supervised Thoracic Disease Pattern Localization in Chest X-Rays", "text": "Given image labels as the only supervisory signal, we focus on harvesting, or mining, thoracic disease localizations from chest X-ray images. Harvesting such localizations from existing datasets allows for the creation of improved data sources for computer-aided diagnosis and retrospective analyses. We train a convolutional neural network (CNN) for image classification and propose an attention mining (AM) strategy to improve the model\u2019s sensitivity or saliency to disease patterns. The intuition of AM is that once the most salient disease area is blocked or hidden from the CNN model, it will pay attention to alternative image regions, while still attempting to make correct predictions. However, the model requires to be properly constrained during AM, otherwise, it may overfit to uncorrelated image parts and forget the valuable knowledge that it has learned from the original image classification task. To alleviate such side effects, we then design a knowledge preservation (KP) loss, which minimizes the discrepancy between responses for X-ray images from the original and the updated networks. Furthermore, we modify the CNN model to include multi-scale aggregation (MSA), improving its localization ability on small-scale disease findings, e.g., lung nodules. We experimentally validate our method on the publicly-available ChestXray14 dataset, outperforming a class activation map (CAM)-based approach, and demonstrating the value of our novel framework for mining disease locations."}
{"_id": "bcdd2458633e6f9955a2b18846e5c85c7b047e08", "title": "Content-Based Image Retrieval Using Multiple Features", "text": "Algorithms of Content-Based Image Retrieval (CBIR) have been well developed along with the explosion of information. These algorithms are mainly distinguished based on feature used to describe the image content. In this paper, the algorithms that are based on color feature and texture feature for image retrieval will be presented. Color Coherence Vector based image retrieval algorithm is also attempted during the implementation process, but the best result is generated from the algorithms that weights color and texture. 80% satisfying rate is achieved."}
{"_id": "587f6b97f6c75d7bfaf2c04be8d9b4ad28ee1b0a", "title": "DIFusion: Fast Skip-Scan with Zero Space Overhead", "text": "Scan is a crucial operation in main-memory column-stores. It scans a column and returns a result bit vector indicating which records satisfy a filter predicate. ByteSlice is an in-memory data layout that chops data into multiple bytes and exploits early-stop capability by high-order bytes comparisons. As column widths are usually not multiples of byte, the last-byte of ByteSlice is padded with 0's, wasting memory bandwidth and computation power. To fully leverage the resources, we propose to weave a secondary index into the vacant bits (i.e., bits originally padded with 0's), forming our new layout coined DIFusion (Data Index Fusion). DIFusion enables skip-scan, a new fast scan that inherits the early-stopping capability from ByteSlice and at the same time possesses the data-skipping ability of index with zero space overhead. Empirical results show that skip-scan on DIFusion outperforms scan on ByteSlice."}
{"_id": "6c1671a8163f7a2ce0cd68424a142df0bae40c2e", "title": "Monocyte emigration from bone marrow during bacterial infection requires signals mediated by chemokine receptor CCR2", "text": "Monocytes recruited to tissues mediate defense against microbes or contribute to inflammatory diseases. Regulation of the number of circulating monocytes thus has implications for disease pathogenesis. However, the mechanisms controlling monocyte emigration from the bone marrow niche where they are generated remain undefined. We demonstrate here that the chemokine receptor CCR2 was required for emigration of Ly6Chi monocytes from bone marrow. Ccr2\u2212/\u2212 mice had fewer circulating Ly6Chi monocytes and, after infection with Listeria monocytogenes, accumulated activated monocytes in bone marrow. In blood, Ccr2\u2212/\u2212 monocytes could traffic to sites of infection, demonstrating that CCR2 is not required for migration from the circulation into tissues. Thus, CCR2-mediated signals in bone marrow determine the frequency of Ly6Chi monocytes in the circulation."}
{"_id": "7acbb31671647dd86f451ba3a1b895d949b70ff9", "title": "Incremental learning for \u03bd-Support Vector Regression", "text": "The \u03bd-Support Vector Regression (\u03bd-SVR) is an effective regression learning algorithm, which has the advantage of using a parameter \u03bd on controlling the number of support vectors and adjusting the width of the tube automatically. However, compared to \u03bd-Support Vector Classification (\u03bd-SVC) (Sch\u00f6lkopf et\u00a0al., 2000), \u03bd-SVR introduces an additional linear term into its objective function. Thus, directly applying the accurate on-line \u03bd-SVC algorithm (AONSVM) to \u03bd-SVR will not generate an effective initial solution. It is the main challenge to design an incremental \u03bd-SVR learning algorithm. To overcome this challenge, we propose a special procedure called initial adjustments in this paper. This procedure adjusts the weights of \u03bd-SVC based on the Karush-Kuhn-Tucker (KKT) conditions to prepare an initial solution for the incremental learning. Combining the initial adjustments with the two steps of AONSVM produces an exact and effective incremental \u03bd-SVR learning algorithm (INSVR). Theoretical analysis has proven the existence of the three key inverse matrices, which are the cornerstones of the three steps of INSVR (including the initial adjustments), respectively. The experiments on benchmark datasets demonstrate that INSVR can avoid the infeasible updating paths as far as possible, and successfully converges to the optimal solution. The results also show that INSVR is faster than batch \u03bd-SVR algorithms with both cold and warm starts."}
{"_id": "d574fe2948052948f8506e2c369cf0558827e260", "title": "Feature combination strategies for saliency-based visual attention systems", "text": "acin ral eais a000 Abstract. Bottom-up or saliency-based visual attention allows primates to detect nonspecific conspicuous targets in cluttered scenes. A classical metaphor, derived from electrophysiological and psychophysical studies, describes attention as a rapidly shiftable \u2018\u2018spotlight.\u2019\u2019 We use a model that reproduces the attentional scan paths of this spotlight. Simple multi-scale \u2018\u2018feature maps\u2019\u2019 detect local spatial discontinuities in intensity, color, and orientation, and are combined into a unique \u2018\u2018master\u2019\u2019 or \u2018\u2018saliency\u2019\u2019 map. The saliency map is sequentially scanned, in order of decreasing saliency, by the focus of attention. We here study the problem of combining feature maps, from different visual modalities (such as color and orientation), into a unique saliency map. Four combination strategies are compared using three databases of natural color images: (1) Simple normalized summation, (2) linear combination with learned weights, (3) global nonlinear normalization followed by summation, and (4) local nonlinear competition between salient locations followed by summation. Performance was measured as the number of false detections before the most salient target was found. Strategy (1) always yielded poorest performance and (2) best performance, with a threefold to eightfold improvement in time to find a salient target. However, (2) yielded specialized systems with poor generalization. Interestingly, strategy (4) and its simplified, computationally efficient approximation (3) yielded significantly better performance than (1), with up to fourfold improvement, while preserving generality. \u00a9 2001 SPIE and IS&T. [DOI: 10.1117/1.1333677]"}
{"_id": "b77e603fdfce2e2f5ae40592d07f0cbd1ad91ac1", "title": "Counter-Forensics: Attacking Image Forensics", "text": "This chapter discusses counter-forensics, the art and science of impeding or misleading forensic analyses of digital images. Research on counter-forensics is motivated by the need to assess and improve the reliability of forensic methods in situations where intelligent adversaries make efforts to induce a certain outcome of forensic analyses. Counter-forensics is first defined in a formal decision-theoretic framework. This framework is then interpreted and extended to encompass the requirements to forensic analyses in practice, including a discussion of the notion of authenticity in the presence of legitimate processing, and the role of image models with regard to the epistemic underpinning of the forensic decision problem. A terminology is developed that distinguishes security from robustness properties, integrated from post-processing attacks, and targeted from universal attacks. This terminology is directly applied in a self-contained technical survey of counter-forensics against image forensics, notably techniques that suppress traces of image processing and techniques that synthesize traces of authenticity, including examples and brief evaluations. A discussion of relations to other domains of multimedia security and an overview of open research questions concludes the chapter. 1 Definition of Counter-Forensics This final chapter changes the perspective. It is devoted to digital image counterforensics, the art and science of impeding and misleading forensic analyses of digital images. Rainer B\u00f6hme Westf\u00e4lische Wilhelms-Universit\u00e4t M\u00fcnster, Dept. of Information Systems, Leonardo-Campus 3, 48149 M\u00fcnster, Germany e-mail: rainer.boehme@wi.uni-muenster.de Matthias Kirchner International Computer Science Institute, 1947 Center Street, Berkeley, CA 94704 e-mail: kirchner@icsi.berkeley.edu"}
{"_id": "eb2c8a159a41ed86e70aad96d2d7f833633eddac", "title": "Band-pass shielding enclosure for 2.4/5 GHz WLAN applications", "text": "This paper presents a design of band-pass shielding enclosure for two frequency bands of WLAN. The proposed design employs a compact frequency selective surface to highlight the advantages of these miniaturized structures for the selective shielding of such enclosures that are electrically small. The results indicate that the proposed shielding enclosure has high transmittance in both the pass bands for horizontal and vertical polarizations, while offers a good amount of electromagnetic shielding outside these frequency bands."}
{"_id": "7d973f3c1634dadd4543d1132dcaf0160ad7d01f", "title": "Big Data For Development: Applications and Techniques", "text": "With the explosion of social media sites and proliferation of digital computing devices and Internet access, massive amounts of public data is being generated on a daily basis. Efficient techniques/ algorithms to analyze this massive amount of data can provide near real-time information about emerging trends and provide early warning in case of an imminent emergency (such as the outbreak of a viral disease). In addition, careful mining of these data can reveal many useful indicators of socioeconomic and political events, which can help in establishing effective public policies. The focus of this study is to review the application of big data analytics for the purpose of human development. The emerging ability to use big data techniques for development (BD4D) promises to revolutionalize healthcare, education, and agriculture; facilitate the alleviation of poverty; and help to deal with humanitarian crises and violent conflicts. Besides all the benefits, the large-scale deployment of BD4D is beset with several challenges due to the massive size, fast-changing and diverse nature of big data. The most pressing concerns relate to efficient data acquisition and sharing, establishing of context (e.g., geolocation and time) and veracity of a dataset, and ensuring appropriate privacy. In this study, we provide a review of existing BD4D work to study the impact of big data on the development of society. In addition to reviewing the important works, we also highlight important challenges and open issues."}
{"_id": "78b7a8acf5da38537f78fba56aefadab0bf3e7e1", "title": "On-Chip High-Voltage Generation in Integrated Circuits Using an Improved Multiplier Technique", "text": "An improved oped for generating +40 voltage multiplier technique has been develV internally in p-channel MNOS integrated circuits to enrtble them to be operated from standard +5and : 12-V supply rails. With this technique, the multiplication efficiency and current driving capabtilty are both independent of the number of multiplier stages. A mathematical model and simple equivalent circuit have been developed for the multiplier and the predicted performance agrees well with measured results. A multiplier has already been incorporated into a TTL compatible nonvolatile quad-latch, in which it occupies a chip area of 600 ~m X 240 pm. It is operated with a clock frequency of 1 MHz and can supManuscript received December 9, 1975; revised February 18, 1976. This paper is based on part of a presentation entitled \u201cA non-volatile MNOS quad-latch,\u201d which was presented at the First European SolidState Circuits Conference, Canterbury, England, September 2-5, 1975. The author is with the Allen Clark Research Centre, The Plessey Company Ltd., CasweJl, Towcester, Northants., England. MNOS Voltage ply a maximum load current of about 10 NA. The output impedance is 3.2 Mfl. INTRODUCTION ALTHOUGH MNOS technology is now well established for fabricating nonvolatile memory circuits, the relatively high potentials necessary to write or erase information, typically 30-40 V, are an obvious disadvantage. In many applications, the need to generate these voltages has prevented the use of MNOS devices being economically viable, especially when only a few bits of nonvolatile data are required. To overcome this problem, a method of on-chip high-voltage generation using a new voltage multiplier technique has been developed, enabling MNOS circuits to be operated with standard DICKSON: ON-CHIP HIGH-VOLTAGEGENERATION 375"}
{"_id": "ca2b173e1d7a7b906feed1f1da611d883733b61c", "title": "Using Fully Homomorphic Encryption for Statistical Analysis of Categorical, Ordinal and Numerical Data", "text": "In recent years, there has been a growing trend towards outsourcing of computational tasks with the development of cloud services. The Gentry\u2019s pioneering work of fully homomorphic encryption (FHE) and successive works have opened a new vista for secure and practical cloud computing. In this paper, we consider performing statistical analysis on encrypted data. To improve the efficiency of the computations, we take advantage of the batched computation based on the ChineseRemainder-Theorem. We propose two building blocks that work with FHE: a novel batch greater-than primitive, and matrix primitive for encrypted matrices. With these building blocks, we construct secure procedures and protocols for different types of statistics including the histogram (count), contingency table (with cell suppression) for categorical data; k-percentile for ordinal data; and principal component analysis and linear regression for numerical data. To demonstrate the effectiveness of our methods, we ran experiments in five real datasets. For instance, we can compute a contingency table with more than 50 cells from 4000 of data in just 5 minutes, and we can train a linear regression model with more than 40k of data and dimension as high as 6 within 15 minutes. We show that the FHE is not as slow as commonly believed and it becomes feasible to perform a broad range of statistical analysis on thousands of encrypted data."}
{"_id": "a052348595c07b7da2fc9ef5a0e883f03b6a02b1", "title": "Anomalous Magnetic Field Activity During a Bioenergy Healing Experiment", "text": "A few studies have reported magnetic fi eld changes during bioenergy healing. In a pilot experiment, we examined magnetic fi eld activity during hands-on healing and distant healing of mice with experimentally induced tumors. During healing sessions, we observed distinct magnetic fi eld oscillations adjacent to the mice cages, which were similar in appearance to those reported by Zimmerman (1985). The magnetic fi eld oscillations began as 20\u201330 Hz oscillations, slowing to 8\u20139 Hz, and then to less than 1 Hz, at which point the oscillations reversed and increased in frequency, with an overall symmetrical appearance resembling a \u201cchirp wave.\u201d The waves ranged from 1\u20138 milliGauss peak-to-peak in strength and 60\u2013120 sec in duration. Evidence to date suggests that bioenergy healing may be detectable with DC gaussmeters, although we have not ruled out an artifactual basis for the oscillations reported here."}
{"_id": "0e9741bc1e0c80520a8181970cd4f61caa00055a", "title": "Algorithms implementing distributed shared memory", "text": "Four basic algorithms for implementing distributed shared memory are compared. Conceptually, these algorithms extend local virtual address spaces to span multiple hosts connected by a local area network, and some of them can easily be integrated with the hosts' virtual memory systems. The merits of distributed shared memory and the assumptions made with respect to the environment in which the shared memory algorithms are executed are described. The algorithms are then described, and a comparative analysis of their performance in relation to application-level access behavior is presented. It is shown that the correct choice of algorithm is determined largely by the memory access behavior of the applications. Two particularly interesting extensions of the basic algorithms are described, and some limitations of distributed shared memory are noted.<>"}
{"_id": "0a482d33dc0c7ca08cc333158c5c0e82d4c04560", "title": "Adapting the Tesseract open source OCR engine for multilingual OCR", "text": "We describe efforts to adapt the Tesseract open source OCR engine for multiple scripts and languages. Effort has been concentrated on enabling generic multi-lingual operation such that negligible customization is required for a new language beyond providing a corpus of text. Although change was required to various modules, including physical layout analysis, and linguistic post-processing, no change was required to the character classifier beyond changing a few limits. The Tesseract classifier has adapted easily to Simplified Chinese. Test results on English, a mixture of European languages, and Russian, taken from a random sample of books, show a reasonably consistent word error rate between 3.72% and 5.78%, and Simplified Chinese has a character error rate of only 3.77%."}
{"_id": "2be13dc3d2e27bea5b5008a164e8ddac5ff4dcdd", "title": "Multilingual OCR (MOCR): An Approach to Classify Words to Languages", "text": "There are immense efforts to design a complete OCR for most of the world\u2019s leading languages, however, multilingual documents either of handwritten or of printed form. As a united attempt, Unicode based OCRs were studied mostly with some positive outcomes, despite the fact that a large character set slows down the recognition significantly. In this paper, we come out with a method to classify words to a language as the word segmentation is complete. For the purpose, we identified the characteristics of writings of several languages and utilized projecting method combined with some other feature extraction methods. In addition, this paper intends a modified statistical approach to correct the skewness before processing a segmented document. The proposed procedure, evaluated for a collection of both handwritten and printed documents, came with excellent outcomes in assigning words to languages. General Terms Pattern Recognition, Document Processing, Optical Character Recognition."}
{"_id": "84073be75980a6a8aabc11a00b4017f681362c08", "title": "The OCRopus open source OCR system", "text": "OCRopus is a new, open source OCR system emphasizing modularity, easy extensibility, and reuse, aimed at both the research community and large scale commercial document conversions. This paper describes the current status of the system, its general architecture, as well as the major algorithms currently being used for layout analysis and text line recognition."}
{"_id": "ecc5492d00b1c53ed30d830136c300d381ca2770", "title": "Automatic code generation of convolutional neural networks in FPGA implementation", "text": "Convolutional neural networks (CNNs) have gained great success in various computer vision applications. However, state-of-the-art CNN models are computation-intensive and hence are mainly processed on high performance processors like server CPUs and GPUs. Owing to the advantages of high performance, energy efficiency and reconfigurability, Field-Programmable Gate Arrays (FPGAs) have been widely explored as CNN accelerators. In this paper, we propose parallel structures to exploit the inherent parallelism and efficient computation units to perform operations in convolutional and fully-connected layers. Further, an automatic generator is proposed to generate Verilog HDL source code automatically according to high-level hardware description language. Execution time, DSP consumption and performance are analytically modeled based on some critical design variables. We demonstrate the automatic methodology by implementing two representative CNNs (LeNet and AlexNet) and evaluate the execution time models by comparing estimated and measured values. Our results show that the proposed automatic methodology yields hardware design with good performance and saves much developing round time."}
{"_id": "46cf123a28b30921ac904672fbba9073885fa9dc", "title": "Extending drift-diffusion paradigm into the era of FinFETs and nanowires", "text": "This paper presents a feasibility study that the drift-diffusion model can capture the ballistic transport of FinFETs and nanowires with a simple model extension. For FinFETs, Monte Carlo simulation is performed and the ballistic mobility is calibrated to linear & saturation currents. It is validated that the calibrated model works over a wide range of channel length and channel stress. The ballistic mobility model is then applied to a nanowire with 5nm design rules. Finally, the technology scaling trend of the ballistic ratio is explored."}
{"_id": "b418c47489e0e51504540f6ac46e530822e759a8", "title": "Secure Communication Method Based on Encryption and Steganography", "text": "Encryption is a traditional method of scrambling data beyond recognition. For years, encryption has been widely used in various areas and domains where secrecy and confidentiality were required. However, there are certain encryption techniques that are not allowed in some countries. In these cases, steganography may come as a solution for hiding data using an apparently inoffensive and innocent carrier. The purpose of steganography is to deliver secret information from a sender to a receiver by scrambling the communication channel not the information itself. This way, it is very unlikely for someone to identify and extract the secret message without knowing exactly how it was embedded into the carrier. Moreover, this task is more difficult since most steganographic methods rely on file or packet header redundancy, lossy multimedia algorithms or insignificant content alteration. All of these are very hard to detect. In this paper, a combination of encryption with image based steganography is presented in order to offer similar robustness as offered by advanced encryption algorithms such as RSA, AES or DES but with lower resource needs. The most original contribution of this paper consists in how a secret message is embedded into a image carrier generating 3 different but very similar images. These images are sent to the destination using different channels and then used to recover the original (secret) message."}
{"_id": "084dd226a724608d29a70101f793ed4e81dddfd7", "title": "Jigsaw: indoor floor plan reconstruction via mobile crowdsensing", "text": "The lack of floor plans is a critical reason behind the current sporadic availability of indoor localization service. Service providers have to go through effort-intensive and time-consuming business negotiations with building operators, or hire dedicated personnel to gather such data. In this paper, we propose Jigsaw, a floor plan reconstruction system that leverages crowdsensed data from mobile users. It extracts the position, size and orientation information of individual landmark objects from images taken by users. It also obtains the spatial relation between adjacent landmark objects from inertial sensor data, then computes the coordinates and orientations of these objects on an initial floor plan. By combining user mobility traces and locations where images are taken, it produces complete floor plans with hallway connectivity, room sizes and shapes. Our experiments on 3 stories of 2 large shopping malls show that the 90-percentile errors of positions and orientations of landmark objects are about 1~2m and 5~9\u00b0, while the hallway connectivity is 100% correct."}
{"_id": "cdd34f54a5a719c9474a7c167642fac89bef9893", "title": "Dielectric properties of human normal, malignant and cirrhotic liver tissue: in vivo and ex vivo measurements from 0.5 to 20 GHz using a precision open-ended coaxial probe.", "text": "Hepatic malignancies have historically been treated with surgical resection. Due to the shortcomings of this technique, there is interest in other, less invasive, treatment modalities, such as microwave hepatic ablation. Crucial to the development of this technique is the accurate knowledge of the dielectric properties of human liver tissue at microwave frequencies. To this end, we characterized the dielectric properties of in vivo and ex vivo normal, malignant and cirrhotic human liver tissues from 0.5 to 20 GHz. Analysis of our data at 915 MHz and 2.45 GHz indicates that the dielectric properties of ex vivo malignant liver tissue are 19 to 30% higher than normal tissue. The differences in the dielectric properties of in vivo malignant and normal liver tissue are not statistically significant (with the exception of effective conductivity at 915 MHz, where malignant tissue properties are 16% higher than normal). Also, the dielectric properties of in vivo normal liver tissue at 915 MHz and 2.45 GHz are 16 to 43% higher than ex vivo. No statistically significant differences were found between the dielectric properties of in vivo and ex vivo malignant tissue (with the exception of effective conductivity at 915 MHz, where malignant tissue properties are 28% higher than normal). We report the one-pole Cole-Cole parameters for ex vivo normal, malignant and cirrhotic liver tissue in this frequency range. We observe that wideband dielectric properties of in vivo liver tissue are different from the wideband dielectric properties of ex vivo liver tissue, and that the in vivo data cannot be represented in terms of a Cole-Cole model. Further work is needed to uncover the mechanisms responsible for the observed wideband trends in the in vivo liver data."}
{"_id": "55686516b40a94165aa9bda3aa0d0d4cda8992fb", "title": "Camera parameters auto-adjusting technique for robust robot vision", "text": "How to make vision system work robustly under dynamic light conditions is still a challenging research focus in computer/robot vision community. In this paper, a novel camera parameters auto-adjusting technique based on image entropy is proposed. Firstly image entropy is defined and its relationship with camera parameters is verified by experiments. Then how to optimize the camera parameters based on image entropy is proposed to make robot vision adaptive to the different light conditions. The algorithm is tested by using the omnidirectional vision in indoor RoboCup Middle Size League environment and the perspective camera in outdoor ordinary environment, and the results show that the method is effective and color constancy to some extent can be achieved."}
{"_id": "4e7abdcafcfb593c2db9a69d65b479333cf22961", "title": "Image Processing And Pattern Recognition Fundamentals And Techniques", "text": "It's coming again, the new collection that this site has. To complete your curiosity, we offer the favorite image processing and pattern recognition fundamentals and techniques book as the choice today. This is a book that will show you even new to old thing. Forget it; it will be right for you. Well, when you are really dying of image processing and pattern recognition fundamentals and techniques, just pick it. You know, this book is always making the fans to be dizzy if not to find."}
{"_id": "da3e09593812d0d4022a05737f870625dd13e661", "title": "Rules for an Ontology-based Approach to Adaptation", "text": "Adaptation addresses the complexity and overload caused by an increasing amount of available resources by the identification and presentation of relevant resources. Drawbacks of existing approaches to this end include the limited degree of customization, difficulties in the acquisition of model information and the lack of control and transparency of the system's adaptive behavior. This paper proposes an approach which addresses these drawbacks to perform meaningful and effective adaptation by the application of semantic Web technologies. This comprises the use of an ontology and adaptation rules for knowledge representation and inference engines for reasoning. The focus of this paper lies in the presentation of adaptation rules"}
{"_id": "454c99488333b2720299e1c49c6ded14e1ceb108", "title": "RTL implementation and analysis of fixed priority, round robin, and matrix arbiters for the NoC's routers", "text": "Networks-on-Chip (NoC) is an emerging on-chip interconnection centric platform that influences modern high speed communication infrastructure to improve the performance of many-core System-on-Chip (SoCs) designs. The core of each NoCs router involves arbiter and multiplier pairs that need to be carefully co-optimized in order to achieve an overall efficient implementation. Low transmission latency design is one of the most important parameters of NoC design. This paper uses parametric Verilog HDL to implement the designs and compares the performance in terms of power, area, and delay of different types of arbiters using for NoCs routers. The RTL implementation is performed using parametric Verilog HDL and analysis in term of power, area and delay is performed using Xilinx ISE 14.7 and Xpower Analyzer (XPA) with Xpower Estimator (XPE). The target device uses for these implementation is Vertex 6."}
{"_id": "a90d4559e7d4fb07b1ae495f5062aa83ce84ef4f", "title": "[Translation and validation for Brazil of the body image scale for adolescents--Offer Self-Image Questionnaire (OSIQ)].", "text": "OBJECTIVE\nTo evaluate the semantic and measure equivalence of the body image sub-scale of the Offer Self Image Questionnaire (OSIQ).\n\n\nMETHODS\nParticipants were 386 teenagers, 10 to 18 years old, both sexes, enrolled in a private school (junior and high school age). Translation, back-translation, technique revision and evaluation were conducted. The Portuguese instrument was evaluated for internal consistency, discriminate and concurrent validity.\n\n\nRESULTS\nInternal consistency showed values from 0.43 to 0.54 and was able to discriminate all groups studied--the whole population, boys and girls, and boys in early adolescence, by nutritional status (p<0.001; p<0.009; p=0.030; p=0.043, respectively). Concurrent analyses showed significant correlation with anthropometric measures only for girls (r=-0.16 and p=0.021; r=-0.19 and p=0.007), early adolescence (r=-0.23 and p=0.008; r=-0.26 and p=0.003) and intermediate adolescence (r=-0.29 and p=0.010) and the retest confirmed reliability by the coefficient of interclass correlation. Although the instrument has proven its ability to discriminate between the groups studied by nutritional state, other results were less satisfactory. More studies are necessary for full transcultural adaptation, including the application of other comparative scales.\n\n\nCONCLUSION\nThe body image sub-scale of the OSIQ was translated, but the results are not promising and require more studies."}
{"_id": "07c00639d498de8b3aa0c3ee50dd939cd03fce7d", "title": "Inflammatory reaction to hyaluronic acid: A newly described complication in vocal fold augmentation.", "text": "OBJECTIVES/HYPOTHESIS\nTo establish the rate of inflammatory reaction to hyaluronic acid (HA) in vocal fold injection augmentation, determine the most common presenting signs and symptoms, and propose an etiology.\n\n\nSTUDY DESIGN\nRetrospective chart review.\n\n\nMETHODS\nPatients injected with HA over a 5-year period were reviewed to identify those who had a postoperative inflammatory reaction. Medical records were reviewed for patient demographic information, subjective complaints, Voice Handicap Index-10 (VHI-10) scores, medical intervention, and resolution time. Videolaryngostroboscopy examinations were also evaluated.\n\n\nRESULTS\nA total of 186 patients (245 vocal folds) were injected with HA over a 5-year period, with a postoperative inflammatory reaction rate of 3.8%. The most common complaints in these patients were odynophagia, dysphonia, and dyspnea with vocal fold erythema, edema, and loss of pliability on videolaryngostroboscopy. All patients were treated with corticosteroids. Return of vocal fold vibration ranged from 3 weeks to 26 months, with VHI-10 scores normalizing in 50% of patients.\n\n\nCONCLUSIONS\nThis reaction may be a form of hypersensitivity related to small amounts of protein linked to HA. Alternatively, extravascular compression from the HA could lead to venous congestion of the vocal fold. The possibility of equipment contamination is also being investigated. Further studies are needed to determine the etiology and best treatment.\n\n\nLEVEL OF EVIDENCE\n4 Laryngoscope, 2016 127:445-449, 2017."}
{"_id": "83fbcac3e5d959923397aaa07317a14c852b4948", "title": "Global effects of smoking, of quitting, and of taxing tobacco.", "text": "From the Center for Global Health Research, St. Michael\u2019s Hospital and Dalla Lana School of Public Health, University of Toronto, Toronto (P.J.); and the Clinical Trial Service Unit and Epidemiological Studies Unit, Nuffield Department of Population Health, Richard Doll Building, University of Oxford, Oxford, United Kingdom (R.P.). Address reprint requests to Dr. Jha at prabhat.jha@utoronto.ca."}
{"_id": "95a6d057b441396420ee46eca84dea47e4bf11e7", "title": "User-centered security: stepping up to the grand challenge", "text": "User-centered security has been identified as a grand challenge in information security and assurance. It is on the brink of becoming an established subdomain of both security and human/computer interface (HCI) research, and an influence on the product development lifecycle. Both security and HCI rely on the reality of interactions with users to prove the utility and validity of their work. As practitioners and researchers in those areas, we still face major issues when applying even the most foundational tools used in either of these fields across both of them. This essay discusses the systemic roadblocks at the social, technical, and pragmatic levels that user-centered security must overcome to make substantial breakthroughs. Expert evaluation and user testing are producing effective usable security today. Principles such as safe staging, enumerating usability failure risks, integrated security, transparent security and reliance on trustworthy authorities can also form the basis of improved systems"}
{"_id": "74d369ac9d945959b6afe5e7cb7147ece2f3aceb", "title": "Static and Moving Object Detection Using Flux Tensor with Split Gaussian Models", "text": "In this paper, we present a moving object detection system named Flux Tensor with Split Gaussian models (FTSG) that exploits the benefits of fusing a motion computation method based on spatio-temporal tensor formulation, a novel foreground and background modeling scheme, and a multi-cue appearance comparison. This hybrid system can handle challenges such as shadows, illumination changes, dynamic background, stopped and removed objects. Extensive testing performed on the CVPR 2014 Change Detection benchmark dataset shows that FTSG outperforms state-of-the-art methods."}
{"_id": "9c3b6fe16fcf417ee8f87bd2810fa9e0a9896bd4", "title": "A Quick Tour of BabelNet 1.1", "text": "In this paper we present BabelNet 1.1, a brand-new release of the largest \u201cencyclopedic dictionary\u201d, obtained from the automatic integration of the most popular computational lexicon of English, i.e. WordNet, and the largest multilingual Web encyclopedia, i.e. Wikipedia. BabelNet 1.1 covers 6 languages and comes with a renewed Web interface, graph explorer and programmatic API. BabelNet is available online at http://www.babelnet.org."}
{"_id": "42b9487dc0ce33f9ebf83930d709ed73f45399ad", "title": "Cloud Computing: The impact on digital forensic investigations", "text": "Cloud Computing (CC) as a concept and business opportunity is likely to see many organisations experiencing the 'credit crunch', embrace the relatively low cost option of CC to ensure continued business viability and sustainability. The pay-as-you-go structure of the CC business model is typically suited to SME's who do not have the resources to completely fulfil their IT requirements. Private end users will also look to utilise the colossal pool of resources that CC offers in an attempt to provide mobile freedom of information. However, as with many opportunities that offer legitimate users enormous benefits, unscrupulous and criminal users will also look to use CC to exploit the loopholes that may exist within this new concept, design and business model. This paper will outline the tasks that the authors undertook for the CLOIDIFIN project and highlight where the Impact of CC will diversely effect digital forensic investigations."}
{"_id": "c144214851af20f95642f394195d4671fed258a7", "title": "An Improvement to Feature Selection of Random Forests on Spark", "text": "The Random Forests algorithm belongs to the class of ensemble learning methods, which are common used in classification problem. In this paper, we studied the problem of adopting the Random Forests algorithm to learn raw data from real usage scenario. An improvement, which is stable, strict, high efficient, data-driven, problem independent and has no impact on algorithm performance, is proposed to investigate 2 actual issues of feature selection of the Random Forests algorithm. The first one is to eliminate noisy features, which are irrelevant to the classification. And the second one is to eliminate redundant features, which are highly relevant with other features, but useless. We implemented our improvement approach on Spark. Experiments are performed to evaluate our improvement and the results show that our approach has an ideal performance."}
{"_id": "099089dd8cc7a2cba74ac56664e43313b47c7bf3", "title": "Yield Trends Are Insufficient to Double Global Crop Production by 2050", "text": "Several studies have shown that global crop production needs to double by 2050 to meet the projected demands from rising population, diet shifts, and increasing biofuels consumption. Boosting crop yields to meet these rising demands, rather than clearing more land for agriculture has been highlighted as a preferred solution to meet this goal. However, we first need to understand how crop yields are changing globally, and whether we are on track to double production by 2050. Using \u223c2.5 million agricultural statistics, collected for \u223c13,500 political units across the world, we track four key global crops-maize, rice, wheat, and soybean-that currently produce nearly two-thirds of global agricultural calories. We find that yields in these top four crops are increasing at 1.6%, 1.0%, 0.9%, and 1.3% per year, non-compounding rates, respectively, which is less than the 2.4% per year rate required to double global production by 2050. At these rates global production in these crops would increase by \u223c67%, \u223c42%, \u223c38%, and \u223c55%, respectively, which is far below what is needed to meet projected demands in 2050. We present detailed maps to identify where rates must be increased to boost crop production and meet rising demands."}
{"_id": "a17a069a093dba710582e6e3a25abe67e7d74da7", "title": "Characterizing affective instability in borderline personality disorder.", "text": "OBJECTIVE\nThis study sought to understand affective instability among patients with borderline personality disorder by examining the degree of instability in six affective domains. The authors also examined the subjective intensity with which moods are experienced and the association between instability and intensity of affect.\n\n\nMETHOD\nIn a group of 152 patients with personality disorders, subjective affective intensity and six dimensions of affective instability were measured. The mean scores for lability and intensity for each affective domain for patients with borderline personality disorder were compared with those of patients with other personality disorders through analyses that controlled for other axis I affective disorders, age, and sex.\n\n\nRESULTS\nGreater lability in terms of anger and anxiety and oscillation between depression and anxiety, but not in terms of oscillation between depression and elation, was associated with borderline personality disorder. Contrary to expectation, the experience of an increase in subjective affective intensity was not more prominent in patients with borderline personality disorder than in those with other personality disorders.\n\n\nCONCLUSIONS\nBy applying a finer-grained perspective on affective instability than those of previous personality disorder studies, this study points to patterns of affective experience characteristic of patients with borderline personality disorder."}
{"_id": "65bf178effbf3abe806926899c79924a80502f8c", "title": "Proximal Algorithms in Statistics and Machine Learning", "text": "In this paper we develop proximal methods for statistical learning. Proximal point algorithms are useful in statistics and machine learning for obtaining optimization solutions for composite functions. Our approach exploits closedform solutions of proximal operators and envelope representations based on the Moreau, Forward-Backward, Douglas-Rachford and Half-Quadratic envelopes. Envelope representations lead to novel proximal algorithms for statistical optimisation of composite objective functions which include both non-smooth and non-convex objectives. We illustrate our methodology with regularized Logistic and Poisson regression and non-convex bridge penalties with a fused lasso norm. We provide a discussion of convergence of non-descent algorithms with acceleration and for non-convex functions. Finally, we provide directions for future research."}
{"_id": "0f9c08653e801764fc431f3d5f02c1f1286cc1ab", "title": "Chaotic bat algorithm", "text": "Bat algorithm (BA) is a recent metaheuristic optimization algorithm proposed by Yang. In the present study, we have introduced chaos into BA so as to increase its global search mobility for robust global optimization. Detailed studies have been carried out on benchmark problems with different chaotic maps. Here, four different variants of chaotic BA are introduced and thirteen different chaotic maps are vailable online xxx eywords: at algorithm haos etaheuristic lobal optimization utilized for validating each of these four variants. The results show that some variants of chaotic BAs can clearly outperform the standard BA for these benchmarks. \u00a9 2013 Elsevier B.V. All rights reserved. . Introduction Many design optimization problems are often highly nonlinar, which can typically have multiple modal optima, and it is thus ery challenging to solve such multimodal problems. To cope with his issue, global optimization algorithms are widely attempted, owever, traditional algorithms may not produce good results, and atest trends are to use new metaheuristic algorithms [1]. Metaeuristic techniques are well-known global optimization methods hat have been successfully applied in many real-world and comlex optimization problems [2,3]. These techniques attempt to imic natural phenomena or social behavior so as to generate etter solutions for optimization problem by using iterations and tochasticity [4]. They also try to use both intensification and iversification to achieve better search performance. Intensificaion typically searches around the current best solutions and selects he best candidate designs, while the diversification process allows he optimizer to explore the search space more efficiently, mostly y randomization [1]. In recent years, several novel metaheuristic algorithms have een proposed for global search. Such algorithms can increase he computational efficiency, solve larger problems, and implePlease cite this article in press as: A.H. Gandomi, X.-S. http://dx.doi.org/10.1016/j.jocs.2013.10.002 ent robust optimization codes [5]. For example, Xin-She Yang [6] ecently developed a promising metaheuristic algorithm, called bat lgorithm (BA). Preliminary studies suggest that the BA can have \u2217 Corresponding author. Tel.: +44 2084112351. E-mail addresses: a.h.gandomi@gmail.com, ag72@uakron.edu (A.H. Gandomi), .yang@mdx.ac.uk (X.-S. Yang). 877-7503/$ \u2013 see front matter \u00a9 2013 Elsevier B.V. All rights reserved. ttp://dx.doi.org/10.1016/j.jocs.2013.10.002 superior performance over genetic algorithms and particle swarm optimization [6], and it can solve real world and engineering optimization problems [7\u201310]. On the other hand, recent advances in theories and applications of nonlinear dynamics, especially chaos, have drawn more attention in many fields [10]. One of these fields is the applications of chaos in optimization algorithms to replace certain algorithm-dependent parameters [11]. Previously, chaotic sequences have been used to tune parameters in metaheuristic optimization algorithms such as genetic algorithms [12], particle swarm optimization [13], harmony search [14], ant and bee colony optimization [15,16], imperialist competitive algorithm [17], firefly algorithm [18], and simulated annealing [19]. Such a combination of chaos with metaheuristics has shown some promise once the right set of chaotic maps are used. 
It is still not clear why the use of chaos in an algorithm to replace certain parameters may change the performance, however, empirical studies indeed indicate that chaos can have high-level of mixing capability, and thus it can be expected that when a fixed parameter is replaced by a chaotic map, the solutions generated may have higher mobility and diversity. For this reason, it may be useful to carry out more studies by introducing chaos to other, especially newer, metaheuristic algorithms. Therefore, one of the aims of this paper is to introduce chaos into the standard bat algorithm, and as a result, we propose a chaosbased bat algorithm (CBA). As different chaotic maps may lead to different behavior of the algorithm, we then have a set of chaosYang, Chaotic bat algorithm, J. Comput. Sci. (2013), based bat algorithms. In these algorithms, we use different chaotic systems to replace the parameters in BA. Thus different methods that use chaotic maps as potentially efficient alternatives to pseudorandom sequences have been proposed. In order to evaluate the"}
{"_id": "207bcf6dbd11bd6c8ab252750244bf051efd2576", "title": "A novel bat algorithm with habitat selection and Doppler effect in echoes for optimization", "text": "A novel bat algorithm (NBA) is proposed for optimization in this paper, which focuses on further mimicking the bats\u2019 behaviors and improving bat algorithm (BA) in view of biology. The proposed algorithm incorporates the bats\u2019 habitat selection and their self-adaptive compensation for Doppler effect in echoes into the basic BA. The bats\u2019 habitat selection is modeled as the selection between their quantum behaviors and mechanical behaviors. Having considered the bats\u2019 self-adaptive compensation for Doppler effect in echoes and the individual\u2019s difference in the compensation rate, the echolocation characteristics of bats can be further simulated in NBA. A self-adaptive local search strategy is also embedded into NBA. Simulations and comparisons based on twenty benchmark problems and four real-world engineering designs demonstrate the effectiveness, efficiency and stability of NBA compared with the basic BA and some well-known algorithms, and suggest that to improve algorithm based on biological basis should be very efficient. Further research topics are also discussed. 2015 Elsevier Ltd. All rights reserved."}
{"_id": "745b88eb437eb59e2a58fe378d287702a6b0d985", "title": "Use of genetic algorithms to solve production and operations management problems: A review", "text": "Operations managers and scholars in their search for fast and good solutions to real-world problems have applied genetic algorithms to many problems. While genetic algorithms are promising tools for problem solving, future research will benefit from a review of the problems that have been solved and the designs of the genetic algorithms used to solve them. This paper provides a review of the use of genetic algorithms to solve operations problems. Reviewed papers are classified according to the problems they solve. The basic design of each genetic algorithm is described, the shortcomings of the current research are discussed and directions for future research are suggested."}
{"_id": "1447fe8dc5e08f4a6bad99d2730347dd5a134354", "title": "A swarm optimization algorithm inspired in the behavior of the social-spider", "text": "Swarm intelligence is a research field that models the collective behavior in swarms of insects or animals. Several algorithms arising from such models have been proposed to solve a wide range of complex optimization problems. In this paper, a novel swarm algorithm called the Social Spider Optimization (SSO) is proposed for solving optimization tasks. The SSO algorithm is based on the simulation of cooperative behavior of social-spiders. In the proposed algorithm, individuals emulate a group of spiders which interact to each other based on the biological laws of the cooperative colony. The algorithm considers two different search agents (spiders): males and females. Depending on gender, each individual is conducted by a set of different evolutionary operators which mimic different cooperative behaviors that are typically found in the colony. In order to illustrate the proficiency and robustness of the proposed approach, it is compared to other well-known evolutionary methods. The comparison examines several standard benchmark functions that are commonly considered within the literature of evolutionary algorithms. The outcome shows a high performance of the proposed method for searching a global optimum with several benchmark functions."}
{"_id": "90cbb19ad3e69de3907633fc55123153f980c843", "title": "Bat-Inspired Optimization Approach for the Brushless DC Wheel Motor Problem", "text": "This paper presents a metaheuristic algorithm inspired in evolutionary computation and swarm intelligence concepts and fundamentals of echolocation of micro bats. The aim is to optimize the mono and multiobjective optimization problems related to the brushless DC wheel motor problems, which has 5 design parameters and 6 constraints for the mono-objective problem and 2 objectives, 5 design parameters, and 5 constraints for multiobjective version. Furthermore, results are compared with other optimization approaches proposed in the recent literature, showing the feasibility of this newly introduced technique to high nonlinear problems in electromagnetics."}
{"_id": "0fe96806c009e8d095205e8f954d41b2b9fd5dcf", "title": "On-the-Job Learning with Bayesian Decision Theory", "text": "Our goal is to deploy a high-accuracy system starting with zero training examples. We consider an on-the-job setting, where as inputs arrive, we use real-time crowdsourcing to resolve uncertainty where needed and output our prediction when confident. As the model improves over time, the reliance on crowdsourcing queries decreases. We cast our setting as a stochastic game based on Bayesian decision theory, which allows us to balance latency, cost, and accuracy objectives in a principled way. Computing the optimal policy is intractable, so we develop an approximation based on Monte Carlo Tree Search. We tested our approach on three datasets\u2014named-entity recognition, sentiment classification, and image classification. On the NER task we obtained more than an order of magnitude reduction in cost compared to full human annotation, while boosting performance relative to the expert provided labels. We also achieve a 8% F1 improvement over having a single human label the whole set, and a 28% F1 improvement over online learning. \u201cPoor is the pupil who does not surpass his master.\u201d \u2013 Leonardo da Vinci"}
{"_id": "09ecdb904eb7ae8a12d0c6c04ae531617a30eafa", "title": "Rethinking serializable multiversion concurrency control", "text": "Multi-versioned database systems have the potential to significantly increase the amount of concurrency in transaction processing because they can avoid read-write conflicts. Unfortunately, the increase in concurrency usually comes at the cost of transaction serializability. If a database user requests full serializability, modern multi-versioned systems significantly constrain read-write concurrency among conflicting transactions and employ expensive synchronization patterns in their design. In main-memory multi-core settings, these additional constraints are so burdensome that multiversioned systems are often significantly outperformed by singleversion systems. We propose BOHM, a new concurrency control protocol for mainmemory multi-versioned database systems. BOHM guarantees serializable execution while ensuring that reads never block writes. In addition, BOHM does not require reads to perform any bookkeeping whatsoever, thereby avoiding the overhead of tracking reads via contended writes to shared memory. This leads to excellent scalability and performance in multi-core settings. BOHM has all the above characteristics without performing validation based concurrency control. Instead, it is pessimistic, and is therefore not prone to excessive aborts in the presence of contention. An experimental evaluation shows that BOHM performs well in both high contention and low contention settings, and is able to dramatically outperform state-of-the-art multi-versioned systems despite maintaining the full set of serializability guarantees."}
{"_id": "99d2b42c731269751431393d20f9b9225787cc5e", "title": "Overview of rotary steerable system and its control methods", "text": "Rotary steerable system (RSS) is a new technique of drilling technology adopted in directional drilling. Compared with conventional directional drilling, RSS has the ability to improve the drilling efficiency, reduce the drilling risk and decrease the drilling cost at the same time. This paper summaries the fundamental structure, classification, directional principle and development process of RSS; and gives detail analysis of the control methods in RSS. According to these summaries and analyses, the advantages and disadvantages of RSS at present are given, and some key suggestions about this technique and the related control methods are also proposed in this paper."}
{"_id": "8bf73b47f5d3ac82187971311faa371d719fa08f", "title": "Commonsense Reasoning Based on Betweenness and Direction in Distributional Models", "text": "Several recent approaches use distributional similarity for making symbolic reasoning more flexible. While an important step in the right direction, the use of similarity has a number of inherent limitations. We argue that similarity-based reasoning should be complemented with commonsense reasoning patterns such as interpolation and a fortiori inference. We show how the required background knowledge for these inference patterns can be obtained from distributional models."}
{"_id": "416377d4a06e73b87f0ff9fbc86340f1228dbf8d", "title": "A Survey on Mobile Banking Applications and the Adopted Models", "text": "Many banks around the world are starting to offer banking services through mobile phones. Therefore, many studies have been proposed to help bankers to increase the number of customers. Despite the effort that has been founded in the state of the art, the adoption rate of mobile banking application has not reached the expected amount yet. Therefore, the aim of this study is to analyse the most well-known and accepted models to provide a comprehensive understanding of their impacts toward the adoption of mobile banking applications. Furthermore, this study aims also at exploring the most effected factors that have been used to influence the adoption behaviour intention of mobile banking applications. As a result of this survey study, some critical recommendations were stated"}
{"_id": "b924e60dd39e9795fb610d644df908a9a36fe75d", "title": "A review of Augmented Reality and its application in context aware library system", "text": "Augmented Reality has revolutionized the way of looking at actuality. The colossal developments in this area have led to visualizations beyond desktop. Unreal objects can be augmented on real surfaces. Augmented Reality has converged the knowledge of computer vision, virtual reality, image-processing, human-to-computer interaction and more such areas. It has found applications in various expanses like Games, Entertainment, Military, Navigation and many more. One such application could be in Library Management System. Earlier systems for book tracking involved immobile computers that only had the shelf numbers nested with book id to locate a book. A borrower needed to ask the Librarian/support staff to look up the shelf number from the system so as to find a book or simply do it manually by himself by searching each shelf. The application of Augmented Reality in Library Administration is useful in automating the proximity location tracking of a book and providing alternatives to the borrower in the books context. The borrower is navigated through the library till he/she reaches the shelf where the book is kept. The spine of the book serves as the index of the required book and contextual information is augmented once the book is located. Thus Augmented Reality can prove immensely helpful for Library Administration."}
{"_id": "35d6119d3bad46fe9905af2d3f6263dd201eeebd", "title": "A new force-feedback arm exoskeleton for haptic interaction in virtual environments", "text": "The paper presents the mechanical design of the L-EXOS, a new exoskeleton for the human arm. The exoskeleton is a tendon driven wearable haptic interface with 5 dof 4 actuated ones, and is characterized by a workspace very close to the one of the human arm. The design has been optimized to obtain a solution with reduced mass and high stiffness, by employing special mechanical components and carbon fiber structural parts. The devised exoskeleton is very effective for simulating the touch by hand of large objects or the manipulation within the whole workspace of the arm. The main features of the first prototype that has been developed at PERCRO are presented, together with an indication of the achieved and tested performance."}
{"_id": "fa9bf738ee1e6b631b670d3473d801d368e8bf5a", "title": "Issues and Challenges in Scrum Implementation", "text": "The aim of this research paper is to bring the challenges and issues in Scrum implementation to light and proposing solutions for these. For this, a survey is conducted in two companies named Digital Prodigy Limited (DPL) and Bentley Systems Pakistan. Participants include experienced and inexperienced scrum personals from both companies. The analysis of the survey results exposed several issues that affect scrum implementation directly or indirectly and resulting in violation of Scrum rules. Quality items pileup, module integration issues, code quality, disruption in team work, mature vs. immature scrum, sprint duration, lack of Scrum training , release process, backlog management, no technical practices, multiple teams, metrics, risk management, documentation and over idealistic are some examples of these issues. During this study, it has been observed that proper training of Scrum can eliminate half of the issues such as, disruption in team work, immature Scrum, sprint duration and backlog management. At the end, suggestions to address these issues are also provided."}
{"_id": "fd48c4bfa8f336ad6879f16710d980988ca35b67", "title": "Investigating the global trend of RF power amplifiers with the arrival of 5G", "text": "To satisfy the continuously increasing demand for high data rates and mobility required by new wireless applications, the 5G has gained much attention recently. Radio frequency power amplifiers (RF-PAs), as one of critical components of 5G transmitter system, are becoming a hot issue. In this paper, the statistical analysis on RF PA papers shows the research on RF-PAs in Asia-Pacific and cooperation between different affiliations and countries are gradually becoming more prominent, showing the globalization trend of RF PA research and the 5G technologies. The decreased research cycle of RF PA shows the processes of research on PA and 5G technologies are speeding up. Some promising RF-PA technologies for 5G wireless communication system are also been discussed."}
{"_id": "c78138102d9ad484bc78b6129e3b80ed950fb333", "title": "Grade Prediction with Temporal Course-wise Influence", "text": "There is a critical need to develop new educational technology applications that analyze the data collected by universities to ensure that students graduate in a timely fashion (4 to 6 years); and they are well prepared for jobs in their respective fields of study. In this paper, we present a novel approach for analyzing historical educational records from a large, public university to perform next-term grade prediction; i.e., to estimate the grades that a student will get in a course that he/she will enroll in the next term. Accurate next-term grade prediction holds the promise for better student degree planning, personalized advising and automated interventions to ensure that students stay on track in their chosen degree program and graduate on time. We present a factorization-based approach called Matrix Factorization with Temporal Course-wise Influence that incorporates course-wise influence effects and temporal effects for grade prediction. In this model, students and courses are represented in a latent \u201cknowledge\u201d space. The grade of a student on a course is modeled as the similarity of their latent representation in the \u201cknowledge\u201d space. Course-wise influence is considered as an additional factor in the grade prediction. Our experimental results show that the proposed method outperforms several baseline approaches and infer meaningful patterns between pairs of courses within academic programs."}
{"_id": "e15ad6b7fc692aaa5855691599e263a600d85325", "title": "The fragment assembly string graph", "text": "We present a concept and formalism, the string graph, which represents all that is inferable about a DNA sequence from a collection of shotgun sequencing reads collected from it. We give time and space efficient algorithms for constructing a string graph given the collection of overlaps between the reads and, in particular, present a novel linear expected time algorithm for transitive reduction in this context. The result demonstrates that the decomposition of reads into kmers employed in the de Bruijn graph approach described earlier is not essential, and exposes its close connection to the unitig approach we developed at Celera. This paper is a preliminary piece giving the basic algorithm and results that demonstrate the efficiency and scalability of the method. These ideas are being used to build a next-generation whole genome assembler called BOA (Berkeley Open Assembler) that will easily scale to mammalian genomes."}
{"_id": "aa43eeb7511a5d349af0cc27b1a594b4e9f2927c", "title": "Fusion of laser and monocular camera data in object grid maps for vehicle environment perception", "text": "Occupancy grid maps provide a reliable vehicle environmental model and usually process data from range finding sensors. Object grid maps additionally contain information about the classes of objects, which is crucial for applications like autonomous driving. Unfortunately, they lack the precision of occupancy grid maps, since they mostly process classification results from camera data by projecting the corresponding images onto the ground plane. This paper proposes a modular framework to create precise object grid maps. The presented algorithm creates classical occupancy grid maps and object grid maps. In a combination step, it transforms both maps into the same frame of discernment based on the Dempster-Shafer theory of evidence. This allows fusing the maps to one object grid map, which contains valuable object information and at the same time benefits from the precision of the occupancy grid map."}
{"_id": "4dfa7b0c07fcff203306ecf7f9f496d60369e45e", "title": "Semantic Representation", "text": "In recent years, there has been renewed interest in the NLP community in genuine language understanding and dialogue. Thus the long-standing issue of how the semantic content of language should be represented is reentering the communal discussion. This paper provides a brief \u201copinionated survey\u201d of broadcoverage semantic representation (SR). It suggests multiple desiderata for such representations, and then outlines more than a dozen approaches to SR\u2013some longstanding, and some more recent, providing quick characterizations, pros, cons, and some comments on imple-"}
{"_id": "a7f47084be0c7c4fac60f7523858c90abf4d0b51", "title": "A 0.002-mm$^{2}$ 6.4-mW 10-Gb/s Full-Rate Direct DFE Receiver With 59.6% Horizontal Eye Opening Under 23.3-dB Channel Loss at Nyquist Frequency", "text": "This paper reports a full-rate direct decision-feedback-equalization (DFE) receiver with circuit techniques to widen the data eye opening with competitive power and area efficiencies. Specifically, a current-reuse active-inductor (AI) linear equalizer is merged into a clocked-one-tap DFE core for joint-elimination of pre-cursor and long-tail post-cursors. Unlike the passive-inductor designs that are bulky and untunable, the AI linear equalizer offers orthogonally tunable low- and high-frequency de-emphasis. The clocked-one-tap DFE resolves the first post-cursor via return-to-zero feedback data patterns for sharper data transition (i.e., horizontal eye opening), and is followed by a D-flip-flop slicer to maximize the data height (i.e., vertical eye opening). A 10-Gb/s DFE receiver was fabricated in 65-nm CMOS. Measured over an 84-cm printed circuit board differential trace with 23.3-dB channel loss at Nyquist frequency (5 GHz), the achieved figure-of-merit is 0.027 pJ/bit/dB (power consumption/date rate/channel loss). At 10-12 bit error rate under 27-1 pseudorandom binary sequence, the horizontal and vertical eye opening are 59.6% and 189.3 mV, respectively. The die size is 0.002 mm2."}
{"_id": "05991f4ea3bde9f0f993b304b681ecfc78b4cdff", "title": "A Survey on TCP Congestion Control Schemes in Guided Media and Unguided Media Communication", "text": "Transmission Control Protocol (TCP) is a widely used end-to-end transport protocol in the Internet. This End to End delivery in wired (Guided) as well as wireless (Unguided) network improves the performance of transport layer or Transmission control Protocol (TCP) characterized by negligible random packet losses. This paper represents tentative study of TCP congestion control principles and mechanisms. Modern implementations of TCP hold four intertwined algorithms: slow start, congestion avoidance, fast retransmits, and fast recovery in addition to the standard algorithms used in common implementations of TCP. This paper describes the performance characteristics of four representative TCP schemes, namely TCP Tahoe, Reno, New Reno and Vegas under the condition of congested link capacities for wired network as well as wireless network."}
{"_id": "951fbb632fd02fd57fb1d864bbd183ebb93172e0", "title": "Computer-aided detection/diagnosis of breast cancer in mammography and ultrasound: a review.", "text": "Breast cancer is the most common form of cancer among women worldwide. Early detection of breast cancer can increase treatment options and patients' survivability. Mammography is the gold standard for breast imaging and cancer detection. However, due to some limitations of this modality such as low sensitivity especially in dense breasts, other modalities like ultrasound and magnetic resonance imaging are often suggested to achieve additional information. Recently, computer-aided detection or diagnosis (CAD) systems have been developed to help radiologists in order to increase diagnosis accuracy. Generally, a CAD system consists of four stages: (a) preprocessing, (b) segmentation of regions of interest, (c) feature extraction and selection, and finally (d) classification. This paper presents the approaches which are applied to develop CAD systems on mammography and ultrasound images. The performance evaluation metrics of CAD systems are also reviewed."}
{"_id": "77105d6df4ccd7454bd805575abe05428f36893a", "title": "ULA-OP: an advanced open platform for ultrasound research", "text": "The experimental test of novel ultrasound (US) investigation methods can be made difficult by the lack of flexibility of commercial US machines. In the best options, these only provide beamformed radiofrequency or demodulated echo-signals for acquisition by an external PC. More flexibility is achieved in high-level research platforms, but these are typically characterized by high cost and large size. This paper presents a powerful but portable US system, specifically developed for research purposes. The system design has been based on high-level commercial integrated circuits to obtain the maximum flexibility and wide data access with minimum of electronics. Preliminary applications involving nonstandard imaging transmit/receive strategies and simultaneous B-mode and multigate spectral Doppler mode are discussed."}
{"_id": "3e76919e1a27105e76ca9105a9a94f262729eade", "title": "Improved Winding Proposal for Wound Rotor Resolver Using Genetic Algorithm and Winding Function Approach", "text": "Among position sensors, resolvers are superior from reliability point of view. However, obtaining lower output voltage harmonics and simple manufacturing process is a challenge in the design and optimization of resolvers. In this paper, a metaheuristic optimization algorithm is used to minimize total harmonic distortion of the output signals, and consequently the estimated position error in concentrated coil wound field resolvers. Meanwhile, to minimize total coil numbers, manufacturing costs, and complexity of the winding process, modified objective function and constraints are proposed. In this way, a modified winding function method is employed for performance analysis of the axial flux resolver. Then, the merits of the optimized winding configuration with respect to fractional slot concentrated windings are shown. The results of the proposed configurations are verified with a three-dimensional time-stepping finite-element method. Finally, the prototype of the studied axial flux resolver is constructed and tested. Good agreement is obtained between simulation and experimental results, confirming the effectiveness of the optimization process."}
{"_id": "6662dbe13b725180d7f9ae114b709f8bc7d0ced9", "title": "Lens design as multi-objective optimisation", "text": "This paper demonstrates the computational advantages of a multiobjective framework that can overcome the generic and domain-related challenges in optical system design and optimisation. Non-dominated sorting genetic algorithms-II (Deb, 2003) is employed in this study. The optical systems studied in this paper are Cooke triplets, Petzval lens systems and achromatic doublets. We report the results of four studies. In the first study, we optimise the optical systems using computationally efficient image quality objective functions. Our approach uses only two paraxial rays to estimate the objective functions and thus improves the computational efficiency. This timesaving measure can partially compensate for the typically enormous number of fitness function evaluations required in evolutionary algorithms. The reduction in reliability due to the computations from a single ray pair is compensated by the availability of multiple objective functions that help us to navigate to the optima. In the second study, hybridisation of evolutionary and gradient-based approaches and scaling techniques are employed to speed up convergence and enforce the constraints. The third study shows how recent developments in optical system design research can be better integrated in a multi-objective framework. The fourth study optimises an achromatic doublet with suitable constraints applied to the thicknesses and image distance."}
{"_id": "5976d405e242dac207fcd32f37fb1e9044b6f8de", "title": "The Relationship Between Health and Growth : When Lucas", "text": "This paper revisits the relationship between health and growth in light of modern endogenous growth theory. We propose a unified framework that encompasses the growth effects of both the rate of improvement of health and the level of health. Based on cross-country regressions over the period 1960-2000, where we instrument for both variables, we find that a higher initial level and a higher rate of improvement in life expectancy both have a significantly positive impact on per capita GDP growth. Then, restricting attention to OECD countries, we find supportive evidence that only the reduction in mortality below age forty generates productivity gains, which in turn may explain why the positive correlation between health and growth in cross-OECD country regressions appears to have weakened since 1960. JEL classification: E0, I10, O10, O15"}
{"_id": "fd94e80dd0f8a05ccf0c7912da2a5b90ed75d210", "title": "Tool for 3D Gazebo Map Construction from Arbitrary Images and Laser Scans", "text": "Algorithms for mobile robots, such as navigation, mapping, and SLAM, require proper modelling of environment. Our paper presents an automatic tool that allows creating a realistic 3D-landscape in Gazebo simulation, which is based on the real sensor-based experimental results. The tool provides an occupancy grid map automatic filtering and import to Gazebo framework as a heightmap and enables to configure the settings for created simulation environment. In addition, the tool is capable to create a 3D Gazebo map from any arbitrary image."}
{"_id": "9a83b57a240e950f8080c1ce185de325858d585b", "title": "Multi-View Stereo Revisited", "text": "We present an extremely simple yet robust multi-view stereo algorithm and analyze its properties. The algorithm first computes individual depth maps using a window-based voting approach that returns only good matches. The depth maps are then merged into a single mesh using a straightforward volumetric approach. We show results for several datasets, showing accuracy comparable to the best of the current state of the art techniques and rivaling more complex algorithms."}
{"_id": "4e088d1c5bc436f1f84997906223e5f24e1df28c", "title": "SPARSKIT: a basic tool kit for sparse matrix computations - Version 2", "text": "This paper presents the main features of a tool package for manipulating and working with sparse matrices. One of the goals of the package is to provide basic tools to facilitate exchange of software and data between researchers in sparse matrix computations. Our starting point is the Harwell/Boeing collection of matrices for which we provide a number of tools. Among other things the package provides programs for converting data structures, printing simple statistics on a matrix, plotting a matrix profile, performing basic linear algebra operations with sparse matrices and so on. *RIACS, Mail Stop 230-5, NASA Ames Research Center, Moffett Field, CA 94035. This work was supported in part by the NAS Systems Division, via Cooperative Agreement NCC 2-387 between NASA and the University Space Research Association (USRA) and in part by the Department of Energy under grant DE-FG02-85ER25001."}
{"_id": "1c9aca60f7ac5edcceb73d612806704a7d662643", "title": "Capturing Semantic Similarity for Entity Linking with Convolutional Neural Networks", "text": "A key challenge in entity linking is making effective use of contextual information to disambiguate mentions that might refer to different entities in different contexts. We present a model that uses convolutional neural networks to capture semantic correspondence between a mention\u2019s context and a proposed target entity. These convolutional networks operate at multiple granularities to exploit various kinds of topic information, and their rich parameterization gives them the capacity to learn which n-grams characterize different topics. We combine these networks with a sparse linear model to achieve state-of-the-art performance on multiple entity linking datasets, outperforming the prior systems of Durrett and Klein (2014) and Nguyen et al. (2014).1"}
{"_id": "4f5cd4c2d81db5c52f952589a8d52bba16962707", "title": "Connecting Language and Knowledge Bases with Embedding Models for Relation Extraction", "text": "This paper proposes a novel approach for relation extraction from free text which is trained to jointly use information from the text and from existing knowledge. Our model is based on two scoring functions that operate by learning low-dimensional embeddings of words and of entities and relationships from a knowledge base. We empirically show on New York Times articles aligned with Freebase relations that our approach is able to efficiently use the extra information provided by a large subset of Freebase data (4M entities, 23k relationships) to improve over existing methods that rely on text features alone."}
{"_id": "00ac5fa58d024af5a681e520f49ea0a2dfc6c078", "title": "VisKE: Visual knowledge extraction and question answering by visual verification of relation phrases", "text": "How can we know whether a statement about our world is valid. For example, given a relationship between a pair of entities e.g., `eat(horse, hay)', how can we know whether this relationship is true or false in general. Gathering such knowledge about entities and their relationships is one of the fundamental challenges in knowledge extraction. Most previous works on knowledge extraction have focused purely on text-driven reasoning for verifying relation phrases. In this work, we introduce the problem of visual verification of relation phrases and developed a Visual Knowledge Extraction system called VisKE. Given a verb-based relation phrase between common nouns, our approach assess its validity by jointly analyzing over text and images and reasoning about the spatial consistency of the relative configurations of the entities and the relation involved. Our approach involves no explicit human supervision thereby enabling large-scale analysis. Using our approach, we have already verified over 12000 relation phrases. Our approach has been used to not only enrich existing textual knowledge bases by improving their recall, but also augment open-domain question-answer reasoning."}
{"_id": "057ac29c84084a576da56247bdfd63bf17b5a891", "title": "Learning Structured Embeddings of Knowledge Bases", "text": "Many Knowledge Bases (KBs) are now readily available and encompass colossal quantities of information thanks to either a long-term funding effort (e.g. WordNet, OpenCyc) or a collaborative process (e.g. Freebase, DBpedia). However, each of them is based on a different rigid symbolic framework which makes it hard to use their data in other systems. It is unfortunate because such rich structured knowledge might lead to a huge leap forward in many other areas of AI like natural language processing (word-sense disambiguation, natural language understanding, ...), vision (scene classification, image semantic annotation, ...) or collaborative filtering. In this paper, we present a learning process based on an innovative neural network architecture designed to embed any of these symbolic representations into a more flexible continuous vector space in which the original knowledge is kept and enhanced. These learnt embeddings would allow data from any KB to be easily used in recent machine learning methods for prediction and information retrieval. We illustrate our method on WordNet and Freebase and also present a way to adapt it to knowledge extraction from raw text."}
{"_id": "c4ed93ddc4be73a902e936cd44ad3f3c0e1f87f3", "title": "Safety of Machine Learning Systems in Autonomous Driving", "text": "Machine Learning, and in particular Deep Learning, are extremely capable tools for solving problems which are difficult, or intractable to tackle analytically. Application areas include pattern recognition, computer vision, speech and natural language processing. With the automotive industry aiming for increasing amount of automation in driving, the problems to solve become increasingly complex, which appeals to the use of supervised learning methods from Machine Learning and Deep Learning. With this approach, solutions to the problems are learned implicitly from training data, and inspecting their correctness is not possible directly. This presents concerns when the resulting systems are used to support safety-critical functions, as is the case with autonomous driving of automotive vehicles. This thesis studies the safety concerns related to learning systems within autonomous driving and applies a safety monitoring approach to a collision avoidance scenario. Experiments are performed using a simulated environment, with a deep learning system supporting perception for vehicle control, and a safety monitor for collision avoidance. The related operational situations and safety constraints are studied for an autonomous driving function, with potential faults in the learning system introduced and examined. Also, an example is considered for a measure that indicates trustworthiness of the learning system during operation."}
{"_id": "610b86da495e69a27484287eac6e79285513884f", "title": "A Broadband U-Slot Coupled Microstrip-to-Waveguide Transition", "text": "A novel planar broadband microstrip-to-waveguide transition is proposed in this paper. The referred waveguide can be either rectangular waveguide or ridged waveguide. The transition consists of an open-circuited microstrip quarter-wavelength resonator and a resonant U-shaped slot on the upper broadside wall of a short-circuited waveguide. A physics-based equivalent-circuit model is also developed for interpreting the working mechanism and providing a coarse model for engineering design. The broadband transition can be regarded as a stacked two-pole resonator filter. Each coupling circuit can be approximately designed separately using the group-delay information at the center frequency. In addition to its broadband attribute, the transition is compact in size, vialess, and is highly compatible with planar circuits. These good features make the new transition very attractive for the system architecture where waveguide devices need to be surface mounted on a multilayered planer circuit. Two design examples are given to demonstrate the usefulness of the transition: one is a broadband ridged-waveguide bandpass filter and the other is a surface-mountable broadband low-temperature co-fired ceramic laminated waveguide cavity filter. Both filters are with the proposed transition for interfacing with microstrip lines, showing promising potentials in practical applications."}
{"_id": "cd33ec99c3bcba977356bc44efab5cb3ab837938", "title": "Optimized, Direct Sale of Privacy in Personal-Data Marketplaces", "text": "Very recently, we are witnessing the emergence of a number of start-ups that enables individuals to sell their private data directly to brokers and businesses. While this new paradigm may shift the balance of power between individuals and companies that harvest data, it raises some practical, fundamental questions for users of these services: how they should decide which data must be vended and which data protected, and what a good deal is. In this work, we investigate a mechanism that aims at helping users address these questions. The investigated mechanism relies on a hard-privacy model and allows users to share partial or complete profile data with broker companies in exchange for an economic reward. The theoretical analysis of the trade-off between privacy and money posed by such mechanism is the object of this work. We adopt a generic measure of privacy although part of our analysis focuses on some important examples of Bregman divergences. We find a parametric solution to the problem of optimal exchange of privacy for money, and obtain a closed-form expression and characterize the trade-off between profile-disclosure risk and economic reward for several interesting cases."}
{"_id": "91f09f9009d5d7df61d7da0bcc590959f5794dde", "title": "Performance analysis of MapReduce programs on Hadoop cluster", "text": "This paper discusses various MapReduce applications like pi, wordcount, grep, Terasort. We have shown experimental results of these applications on a Hadoop cluster. In this paper, performance of above application has been shown with respect to execution time and number of nodes. We find that as the number of nodes increases the execution time decreases. This paper is basically a research study of above MapReduce applications."}
{"_id": "1e02f85c5560f6f57b0a6ddffcd89b83ab6cc01c", "title": "Ten key considerations for the successful implementation and adoption of large-scale health information technology.", "text": "The implementation of health information technology interventions is at the forefront of most policy agendas internationally. However, such undertakings are often far from straightforward as they require complex strategic planning accompanying the systemic organizational changes associated with such programs. Building on our experiences of designing and evaluating the implementation of large-scale health information technology interventions in the USA and the UK, we highlight key lessons learned in the hope of informing the on-going international efforts of policymakers, health directorates, healthcare management, and senior clinicians."}
{"_id": "3cf000b5886f1feb1d8cd272deb9a71dd6db2111", "title": "Classification of Age Groups Based on Facial Features", "text": "An age group classification system for gray-scale facial images is proposed in this paper. Four age groups, including babies, young adults, middle-aged adults, and old adults, are used in the classification system. The process of the system is divided into three phases: location, feature extraction, and age classification. Based on the symmetry of human faces and the variation of gray levels, the positions of eyes, noses, and mouths could be located by applying the Sobel edge operator and region labeling. Two geometric features and three wrinkle features from a facial image are then obtained. Finally, two back-propagation neural networks are constructed for classification. The first one employs the geometric features to distinguish whether the facial image is a baby. If it is not, then the second network uses the wrinkle features to classify the image into one of three adult groups. The proposed system is experimented with 230 facial images on a Pentium II 350 processor with 128 MB RAM. One half of the images are used for training and the other half for test. It takes 0.235 second to classify an image on an average. The identification rate achieves 90.52% for the training images and 81.58% for the test images, which is roughly close to human\u2019s subjective justification."}
{"_id": "47ae71f741c635905b9ebcd63e20255d31a9ae0f", "title": "Observing Tutorial Dialogues Collaboratively: Insights About Human Tutoring Effectiveness From Vicarious Learning", "text": "The goals of this study are to evaluate a relatively novel learning environment, as well as to seek greater understanding of why human tutoring is so effective. This alternative learning environment consists of pairs of students collaboratively observing a videotape of another student being tutored. Comparing this collaboratively observing environment to four other instructional methods-one-on-one human tutoring, observing tutoring individually, collaborating without observing, and studying alone-the results showed that students learned to solve physics problems just as effectively from observing tutoring collaboratively as the tutees who were being tutored individually. We explain the effectiveness of this learning environment by postulating that such a situation encourages learners to become active and constructive observers through interactions with a peer. In essence, collaboratively observing combines the benefit of tutoring with the benefit of collaborating. The learning outcomes of the tutees and the collaborative observers, along with the tutoring dialogues, were used to further evaluate three hypotheses explaining why human tutoring is an effective learning method. Detailed analyses of the protocols at several grain sizes suggest that tutoring is effective when tutees are independently or jointly constructing knowledge: with the tutor, but not when the tutor independently conveys knowledge."}
{"_id": "1e8c283cedbbceb2a56bf962bc0a86fd40f1cea6", "title": "Asynchronous Large-Scale Graph Processing Made Easy", "text": "Scaling large iterative graph processing applications through parallel computing is a very important problem. Several graph processing frameworks have been proposed that insulate developers from low-level details of parallel programming. Most of these frameworks are based on the bulk synchronous parallel (BSP) model in order to simplify application development. However, in the BSP model, vertices are processed in fixed rounds, which often leads to slow convergence. Asynchronous executions can significantly accelerate convergence by intelligently ordering vertex updates and incorporating the most recent updates. Unfortunately, asynchronous models do not provide the programming simplicity and scalability advantages of the BSP model. In this paper, we combine the easy programmability of the BSP model with the high performance of asynchronous execution. We have designed GRACE, a new graph programming platform that separates application logic from execution policies. GRACE provides a synchronous iterative graph programming model for users to easily implement, test, and debug their applications. It also contains a carefully designed and implemented parallel execution engine for both synchronous and user-specified built-in asynchronous execution policies. Our experiments show that asynchronous execution in GRACE can yield convergence rates comparable to fully asynchronous executions, while still achieving the near-linear scalability of a synchronous BSP system."}
{"_id": "d4baf2e2f2715ef65223d5642f408ecd9d0d5321", "title": "Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees", "text": "Developing classification algorithms that are fair with respect to sensitive attributes of the data is an important problem due to the increased deployment of classification algorithms in societal contexts. Several recent works have focused on studying classification with respect to specific fairness metrics, modeled the corresponding fair classification problem as constrained optimization problems, and developed tailored algorithms to solve them. Despite this, there still remain important metrics for which there are no fair classifiers with theoretical guarantees; primarily because the resulting optimization problem is non-convex. The main contribution of this paper is a meta-algorithm for classification that can take as input a general class of fairness constraints with respect to multiple non-disjoint and multi-valued sensitive attributes, and which comes with provable guarantees. In particular, our algorithm can handle non-convex \"linear fractional\" constraints (which includes fairness constraints such as predictive parity) for which no prior algorithm was known. Key to our results is an algorithm for a family of classification problems with convex constraints along with a reduction from classification problems with linear fractional constraints to this family. Empirically, we observe that our algorithm is fast, can achieve near-perfect fairness with respect to various fairness metrics, and the loss in accuracy due to the imposed fairness constraints is often small."}
{"_id": "41aa1e97416a01ddcbb26e6fdd5402bfe0c6c1ce", "title": "Bridge Management System with Integrated Life Cycle Cost Optimization", "text": ".................................................................................................................................. iii Table of"}
{"_id": "0a80cfdcca92f0b6c5f3fd1569c92f3b2d08b4db", "title": "Understanding Instructions on Large Scale for Human-Robot Interaction", "text": "Correctly interpreting human instructions is the first step to human-robot interaction. Previous approaches to semantically parsing the instructions relied on large numbers of training examples with annotation to widely cover all words in a domain. Annotating large enough instructions with semantic forms needs exhaustive engineering efforts. Hence, we propose propagating the semantic lexicon to learn a semantic parser from limited annotations, whereas the parser still has the ability of interpreting instructions on a large scale. We assume that the semantically-close words have the same semantic form based on the fact that human usually uses different words to refer to a same object or task. Our approach softly maps the unobserved words/phrases to the semantic forms learned from the annotated copurs through a metric for knowledge-based lexical similarity. Experiments on the collected instructions showed that the semantic parser learned with lexicon propagation outperformed the baseline. Our approach provides an opportunity for the robots to understand the human instructions on a large scale."}
{"_id": "b3d4e09f161207eef2f00160479b75878ff9fbd1", "title": "Evolutionary Bayesian Rose Trees", "text": "We present an evolutionary multi-branch tree clustering method to model hierarchical topics and their evolutionary patterns overtime. The method builds evolutionary trees in a Bayesian online filtering framework. The tree construction is formulated as an online posterior estimation problem, which well balances both the fitness of the current tree and the smoothness between trees. The state-of-the-art multi-branch clustering method, Bayesian rose trees, is employed to generate a topic tree with a high fitness value. A constraint model is also introduced to preserve the smoothness between trees. A set of comprehensive experiments on real world news data demonstrates that the proposed method better incorporates historical tree information and is more efficient and effective than the traditional evolutionary hierarchical clustering algorithm. In contrast to our previous method [31], we implement two additional baseline algorithms to compare them with our algorithm. We also evaluate the performance of the clustering algorithm based on multiple constraint trees. Furthermore, two case studies are conducted to demonstrate the effectiveness and usefulness of our algorithm in helping users understand the major hierarchical topic evolutionary patterns in text data."}
{"_id": "75bbd7d888899337f55663f28ced0ad7372d0dd1", "title": "Capacitive Touch Systems With Styli for Touch Sensors: A Review", "text": "This paper presents the latest progress in development and applications of the capacitive touch systems (CTSs) with styli. The CTSs, which include the touch sensor, analog front-end (AFE) integrated circuit (IC), and micro-controller unit, are reviewed along with the passive and active styli. The architecture of the CTS is explained first, followed by an exploration of the touch sensors: (1) types of touch sensors, such as in-cell, on-cell, and add-on types according to the size and material, (2) AFE IC with the driving and sensing methods, and (3) passive and active styli for the CTS. Finally, the future perspectives of the CTS are given from the viewpoint of the technical developments."}
{"_id": "6514d7eeb27a47f8b75e157aca98b177c38de4e9", "title": "Linear feedback control : analysis and design with MATLAB", "text": ""}
{"_id": "f4702d55b49ba075edc34309423880d091eac2b3", "title": "A Multi-Scale CNN and Curriculum Learning Strategy for Mammogram Classification", "text": "Screening mammography is an important front-line tool for the early detection of breast cancer, and some 39 million exams are conducted each year in the United States alone. Here, we describe a multi-scale convolutional neural network (CNN) trained with a curriculum learning strategy that achieves high levels of accuracy in classifying mammograms. Specifically, we first train CNN-based patch classifiers on segmentation masks of lesions in mammograms, and then use the learned features to initialize a scanning-based model that renders a decision on the whole image, trained end-to-end on outcome data. We demonstrate that our approach effectively handles the \u201cneedle in a haystack\u201d nature of full-image mammogram classification, achieving 0.92 AUROC on the DDSM dataset."}
{"_id": "7367b866d1e08a2b4c1143822e076ad0291d2cc5", "title": "QRPp1-4: Characterizing Quality of Time and Topology in a Time Synchronization Network", "text": "As Internet computing gains speed, complexity, and becomes ubiquitous, the need for precise and accurate time synchronization increases. In this paper, we present a characterization of a clock synchronization network managed by Network Time Protocol (NTP), composed by thousands of nodes, including hundreds of Stratum 1 servers, based on data collected recently by a robot. NTP is the most common protocol for time synchronization in the Internet. Many aspects that define the quality of timekeeping are analyzed, as well as topological characteristics of the network. The results are compared to previous characterizations of the NTP network, showing the evolution of clock synchronization in the last fifteen years."}
{"_id": "57344d036f349600e918be3c0676bde7f47ad7bf", "title": "Exact Inference Techniques for the Dynamic Analysis of Bayesian Attack Graphs", "text": "Attack graphs are a powerful tool for security risk assessment by analysing network vulnerabilities and the paths attackers can use to compromise valuable network resources. The uncertainty about the attacker\u2019s behaviour and capabilities make Bayesian networks suitable to model attack graphs to perform static and dynamic analysis. Previous approaches have focused on the formalization of traditional attack graphs into a Bayesian model rather than proposing mechanisms for their analysis. In this paper we propose to use efficient algorithms to make exact inference in Bayesian attack graphs, enabling the static and dynamic network risk assessments. To support the validity of our proposed approach we have performed an extensive experimental evaluation on synthetic Bayesian attack graphs with different topologies, showing the computational advantages in terms of time and memory use of the proposed techniques when compared to existing approaches."}
{"_id": "b77f5f07c07b8569596eb03e578a680300016188", "title": "Daily Fluctuations in Smartphone Use, Psychological Detachment, and Work Engagement: The Role of Workplace Telepressure", "text": "Today's work environment is shaped by the electronic age. Smartphones are important tools that allow employees to work anywhere and anytime. The aim of this diary study was to examine daily smartphone use after and during work and their association with psychological detachment (in the home domain) and work engagement (in the work domain), respectively. We explored whether workplace telepressure, which is a strong urge to respond to work-related messages and a preoccupation with quick response times, promotes smartphone use. Furthermore, we hypothesized that employees experiencing high workplace telepressure would have more trouble letting go of the workday during the evening and feel less engaged during their workday to the extent that they use their smartphone more intensively across domains. A total of 116 employees using their smartphones for work-related purposes completed diary questionnaires on five workdays (N = 476 data points) assessing their work-related smartphone use, psychological detachment after work, and engagement during work. Workplace telepressure was measured as a between-individual variable and only assessed at the beginning of the study, as well as relevant control variables such as participants' workload and segmentation preference (a preference for work and home domains to be as segmented as possible). Multilevel path analyses revealed that work-related smartphone use after work was negatively related to psychological detachment irrespective of employees' experienced workplace telepressure, and daily smartphone use during work was unrelated to work engagement. Supporting our hypothesis, employees who reported high telepressure experienced less work engagement on days that they used their smartphone more intensively during work. Altogether, intensive smartphone use after work hampers employees' psychological detachment, whereas intensive smartphone use during work undermines their work engagement only when employees experience high workplace telepressure as well. Theoretical and practical implications of these findings are discussed."}
{"_id": "9e88de8a38a53dbbc4ab670b6cdbf5c61ead18c5", "title": "Extending the Definitions of Antenna Gain and Radiation Pattern into the Time Domain", "text": "Many of the classical parameters of frequency domain (CW) antennas, such as gain and radiation pattern, are defined only in the frequency domain, and currently have no meaning in the time domain. The purpose of this note is to extend their definitions into the time domain. We develop here the concept of waveform norms as a mechanism for comparing the radiated field to the driving voltage. The concept of gain that we develop then compares a norm of the radiated field to a norm of the derivative of the driving voltage. The transient gain is therefore a function of both the choice of norm, and of the choice of driving waveform. A key feature of our definitions of gain and pnttcrn factor is that they arc equivalent in both transmit and receive modes. ."}
{"_id": "ed173a39f4cd980eef319116b6ba39cec1b37c42", "title": "Associative Embedding: End-to-End Learning for Joint Detection and Grouping", "text": "We introduce associative embedding, a novel method for supervising convolutional neural networks for the task of detection and grouping. A number of computer vision problems can be framed in this manner including multi-person pose estimation, instance segmentation, and multi-object tracking. Usually the grouping of detections is achieved with multi-stage pipelines, instead we propose an approach that teaches a network to simultaneously output detections and group assignments. This technique can be easily integrated into any state-of-the-art network architecture that produces pixel-wise predictions. We show how to apply this method to multi-person pose estimation and report state-of-the-art performance on the MPII and MS-COCO datasets."}
{"_id": "8d7a6c618a898ff8c09f0ec495c776a1837242b5", "title": "A quadruped robot with parallel mechanism legs", "text": "Summary form only given. The design and control of quadruped robots has become a fascinating research field because they have better mobility on unstructured terrains. Until now, many kinds of quadruped robots were developed, such as JROB-1 [1], BISAM [2], BigDog [3], LittleDog [4], HyQ [5] and Cheetah cub [6]. They have shown significant walking performance. However, most of them use serial mechanism legs and have animal like structure: the thigh and the crus. To swing the crus in swing phase and support the body's weight in stance phase, a linear actuator is attached on the thigh [2, 3, 5, 6], or instead, a rotational actuator is installed on the knee joint [1, 4]. To make the robot more useful in the wild environment, e.g., the detection or manipulation tasks, the payload capability is very important. To carry the sensors or tools, heavy load legged robot is very necessary. Thus the knee actuator should be lightweight, powerful and easy to maintain. However, this can be very costly and hard to satisfy at the same time."}
{"_id": "343f9b0e97a5f980f70f21dba284cd87314cd9b1", "title": "An Overview of Inter-Vehicular Communication Systems, Protocols and Middleware", "text": "Inter-vehicular communication is an important research area that is rapidly growing due to considerable advances in mobile and wireless communication technologies, as well as the growth of microprocessing capabilities inside today\u2019s cars, and other moving vehicles. A good amount of research has been done to exploit the different services that can be provided to enhance the safety and comfort of the driver. Additional functions provide the car electronics and the passengers with access to the Internet and other core network resources. This paper provides a survey of the latest advances in the area of inter-vehicular communication (IVC) including vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) functions and services. In addition, the paper presents the most important projects and protocols that are involved in IVC systems as well as the different issues and challenges that exist at each layer of the networking model."}
{"_id": "b8d9ff98b9fe1fa3951caaa084109f6bfd0de95d", "title": "DERI&UPM: Pushing Corpus Based Relatedness to Similarity: Shared Task System Description", "text": "In this paper, we describe our system submitted for the semantic textual similarity (STS) task at SemEval 2012. We implemented two approaches to calculate the degree of similarity between two sentences. First approach combines corpus-based semantic relatedness measure over the whole sentence with the knowledge-based semantic similarity scores obtained for the words falling under the same syntactic roles in both the sentences. We fed all these scores as features to machine learning models to obtain a single score giving the degree of similarity of the sentences. Linear Regression and Bagging models were used for this purpose. We used Explicit Semantic Analysis (ESA) as the corpus-based semantic relatedness measure. For the knowledgebased semantic similarity between words, a modified WordNet based Lin measure was used. Second approach uses a bipartite based method over the WordNet based Lin measure, without any modification. This paper shows a significant improvement in calculating the semantic similarity between sentences by the fusion of the knowledge-based similarity measure and the corpus-based relatedness measure against corpus based measure taken alone."}
{"_id": "358e8ad4d8c6baab848f3b3d8bd7fc689fbf3ca5", "title": "WebGT: An Interactive Web-Based System for Historical Document Ground Truth Generation", "text": "We present WebGT, the first web-based system to help users produce ground truth data for document images. This user-friendly software system helps historians and computer scientists collectively annotate historical documents. It supports real time collaboration among remote sites independent of the local operating system and also provides several novel semi-automatic tools that have proven effective for annotating degraded documents."}
{"_id": "bd61ee05c320ec40d67ed8cf857de2799dbd3174", "title": "Creating Ground Truth for Historical Manuscripts with Document Graphs and Scribbling Interaction", "text": "Ground truth is both - indispensable for training and evaluating document analysis methods, and yet very tedious to create manually. This especially holds true for complex historical manuscripts that exhibit challenging layouts with interfering and overlapping handwriting. In this paper, we propose a novel semi-automatic system to support layout annotations in such a scenario based on document graphs and a pen-based scribbling interaction. On the one hand, document graphs provide a sparse page representation that is already close to the desired ground truth and on the other hand, scribbling facilitates an efficient and convenient pen-based interaction with the graph. The performance of the system is demonstrated in the context of a newly introduced database of historical manuscripts with complex layouts."}
{"_id": "feaee4dbf7ed20387cc2fc400669aff9d2f99267", "title": "Aletheia - An Advanced Document Layout and Text Ground-Truthing System for Production Environments", "text": "Large-scale digitisation has led to a number of new possibilities with regard to adaptive and learning based methods in the field of Document Image Analysis and OCR. For ground truth production of large corpora, however, there is still a gap in terms of productivity. Ground truth is not only crucial for training and evaluation at the development stage of tools but also for quality assurance in the scope of production workflows for digital libraries. This paper describes Aletheia, an advanced system for accurate and yet cost-effective ground truthing of large amounts of documents. It aids the user with a number of automated and semi-automated tools which were partly developed and improved based on feedback from major libraries across Europe and from their digitisation service providers which are using the tool in a production environment. Novel features are, among others, the support of top-down ground truthing with sophisticated split and shrink tools as well as bottom-up ground truthing supporting the aggregation of lower-level elements to more complex structures. Special features have been developed to support working with the complexities of historical documents. The integrated rules and guidelines validator, in combination with powerful correction tools, enable efficient production of highly accurate ground truth."}
{"_id": "069c40a8ca5305c9a0734c1f6134eb19a678f4ab", "title": "LabelMe: A Database and Web-Based Tool for Image Annotation", "text": "We seek to build a large collection of images with ground truth labels to be used for object detection and recognition research. Such data is useful for supervised learning and quantitative evaluation. To achieve this, we developed a web-based tool that allows easy image annotation and instant sharing of such annotations. Using this annotation tool, we have collected a large dataset that spans many object categories, often containing multiple instances over a wide variety of images. We quantify the contents of the dataset and compare against existing state of the art datasets used for object recognition and detection. Also, we show how to extend the dataset to automatically enhance object labels with WordNet, discover object parts, recover a depth ordering of objects in a scene, and increase the number of labels using minimal user supervision and images from the web."}
{"_id": "3301691a253f7ffd3ca6b1db89c3b111c73c4d09", "title": "Decoronation - a conservative method to treat ankylosed teeth for preservation of alveolar ridge prior to permanent prosthetic reconstruction: literature review and case presentation.", "text": "Avulsed teeth that are stored extraorally in a dry environment for >60 min generally develop replacement root resorption or ankylosis following their replantation due to the absence of a vital periodontal ligament on their root surface. One negative sequelae of such ankylosis is tooth infra-positioning and the local arrest of alveolar bone growth. Removal of an ankylosed tooth may be difficult and traumatic leading to esthetic bony ridge deformities and optimal prosthetic treatment interferences. Recently a treatment option for ankylosed teeth named 'de-coronation' gained interest, particularly in pediatric dentistry that concentrated in dental traumatology. This article reviews the up-to-date literature that has been published on decoronation with respect to its importance for future prosthetic rehabilitation followed by a case presentation that demonstrates its clinical benefits."}
{"_id": "e8f6c316b3f80699e75a94d5c58efe96ea50cd83", "title": "A Component-Based Dynamic Link Support for Safety-Critical Embedded Systems", "text": "Safety-critical embedded systems have to undergo rigorous development process in order to ensure that their function will not compromise humans or environment where they operate. Therefore, they rely on simple and proven-in-use design. However, with growing software complexity, maintenance becomes very important aspect in safety domain. Recent approaches for managing maintenance allow to perform changes on software at design-time, which implies that the whole system has to be rebuilt when the application software changes. In this paper, we describe more flexible solution for updating the application software. We apply the component-based paradigm to construct the application software, i.e. we define a model of a software function that can be dynamically linked with the entire operating system (OS). In order to avoid the usage of the OS-provided support for dynamic linking, we design software functions as position-independent and relocation-free binaries with well-defined interfaces. With the help of component-based paradigm we show how to simplify the link support and make it suitable for safety domain."}
{"_id": "5de6c10e37b04dadde40b344c334876d13fcf76e", "title": "Parameter Estimation for Probabilistic Finite-State Transducers", "text": "Weighted finite-state transducers suffer from the lack of a training algorithm. Training is even harder for transducers that have been assembled via finite-state operations such as composition, minimization, union, concatenation, and closure, as this yields tricky parameter tying. We formulate a \u201cparameterized FST\u201d paradigm and give training algorithms for it, including a general bookkeeping trick (\u201cexpectation semirings\u201d) that cleanly and efficiently computes expectations and gradients. 1 Background and Motivation Rational relations on strings have become widespread in language and speech engineering (Roche and Schabes, 1997). Despite bounded memory they are well-suited to describe many linguistic and textual processes, either exactly or approximately. A relation is a set of (input, output) pairs. Relations are more general than functions because they may pair a given input string with more or fewer than one output string. The class of so-called rational relations admits a nice declarative programming paradigm. Source code describing the relation (a regular expression) is compiled into efficient object code (in the form of a 2-tape automaton called a finite-state transducer). The object code can even be optimized for runtime and code size (via algorithms such as determinization and minimization of transducers). This programming paradigm supports efficient nondeterminism, including parallel processing over infinite sets of input strings, and even allows \u201creverse\u201d computation from output to input. Its unusual flexibility for the practiced programmer stems from the many operations under which rational relations are closed. It is common to define further useful operations (as macros), which modify existing relations not by editing their source code but simply by operating on them \u201cfrom outside.\u201d \u2217A brief version of this work, with some additional material, first appeared as (Eisner, 2001a). A leisurely journal-length version with more details has been prepared and is available. The entire paradigm has been generalized to weighted relations, which assign a weight to each (input, output) pair rather than simply including or excluding it. If these weights represent probabilities P (input, output) or P (output | input), the weighted relation is called a joint or conditional (probabilistic) relation and constitutes a statistical model. Such models can be efficiently restricted, manipulated or combined using rational operations as before. An artificial example will appear in \u00a72. The availability of toolkits for this weighted case (Mohri et al., 1998; van Noord and Gerdemann, 2001) promises to unify much of statistical NLP. Such tools make it easy to run most current approaches to statistical markup, chunking, normalization, segmentation, alignment, and noisy-channel decoding,1 including classic models for speech recognition (Pereira and Riley, 1997) and machine translation (Knight and Al-Onaizan, 1998). Moreover, once the models are expressed in the finitestate framework, it is easy to use operators to tweak them, to apply them to speech lattices or other sets, and to combine them with linguistic resources. Unfortunately, there is a stumbling block: Where do the weights come from? After all, statistical models require supervised or unsupervised training. 
Currently, finite-state practitioners derive weights using exogenous training methods, then patch them onto transducer arcs. Not only do these methods require additional programming outside the toolkit, but they are limited to particular kinds of models and training regimens. For example, the forward-backward algorithm (Baum, 1972) trains only Hidden Markov Models, while (Ristad and Yianilos, 1996) trains only stochastic edit distance. In short, current finite-state toolkits include no training algorithms, because none exist for the large space of statistical models that the toolkits can in principle describe and run. Given output, find input to maximize P (input, output)."}
{"_id": "b0d04d05adf63cb635215f142e22f83d48cbb81b", "title": "Data mining in software engineering", "text": "The increased availability of data created as part of the sof tware development process allows us to apply novel analysis techniques on the data and use the results to guide t he process\u2019s optimization. In this paper we describe variou s data sources and discuss the principles and techniques of data mi ning as applied on software engineering data. Data that can b e mined is generated by most parts of the development process: requi rements elicitation, development analysis, testing, debu gging, and maintenance. Based on this classification we survey the mini ng approaches that have been used and categorize them accord ing to the corresponding parts of the development process and th e task they assist. Thus the survey provides researchers wit h a concise overview of data mining techniques applied to softw are engineering data, and aids practitioners on the selecti on of appropriate data mining techniques for their work."}
{"_id": "87f920f1800e01e5d1bbdd30f8095a61ddcc7edd", "title": "Signal Processing for FMCW SAR", "text": "The combination of frequency-modulated continuous-wave (FMCW) technology and synthetic aperture radar (SAR) techniques leads to lightweight cost-effective imaging sensors of high resolution. One limiting factor to the use of FMCW sensors is the well-known presence of nonlinearities in the transmitted signal. This results in contrast- and range-resolution degradation, particularly when the system is intended for high-resolution long-range applications, as it is the case for SAR. This paper presents a novel processing solution, which solves the nonlinearity problem for the whole range profile. Additionally, the conventional stop-and-go approximation used in pulse-radar algorithms is not valid in FMCW SAR applications under certain circumstances. Therefore, the motion within the sweep needs to be taken into account. Analytical development of the FMCW SAR signal model, starting from the deramped signal and without using the stop-and-go approximation, is presented in this paper. The model is then applied to stripmap, spotlight, and single-transmitter/multiple-receiver digital-beamforming SAR operational mode. The proposed algorithms are verified by processing real FMCW SAR data collected with the demonstrator system built at the Delft University of Technology."}
{"_id": "fbfc5fb33652d27ac7e46462388ccf2b48078d62", "title": "Air-Gap Convection in Rotating Electrical Machines", "text": "This paper reviews the convective heat transfer within the air gap of both cylindrical and disk geometry rotating electrical machines, including worked examples relevant to fractional horsepower electrical machines. Thermal analysis of electrical machines is important because torque density is limited by maximum temperature. Knowledge of surface convective heat transfer coefficients is necessary for accurate thermal modeling, for example, using lumped parameter models. There exists a wide body of relevant literature, but much of it has traditionally been in other application areas, dominated by mechanical engineers, such as gas turbine design. Particular attention is therefore given to the explanation of the relevant nondimensional parameters and to the presentation of measured convective heat transfer correlations for a wide variety of situations from laminar to turbulent flow at small and large gap sizes for both radial-flux and axial-flux electrical machines."}
{"_id": "362f40b7121ec60791546577c796ac9ec4433c21", "title": "CAPTCHA: Using Hard AI Problems for Security", "text": "We introduce captcha, an automated test that humans can pass, but current computer programs can\u2019t pass: any program that has high success over a captcha can be used to solve an unsolved Artificial Intelligence (AI) problem. We provide several novel constructions of captchas. Since captchas have many applications in practical security, our approach introduces a new class of hard problems that can be exploited for security purposes. Much like research in cryptography has had a positive impact on algorithms for factoring and discrete log, we hope that the use of hard AI problems for security purposes allows us to advance the field of Artificial Intelligence. We introduce two families of AI problems that can be used to construct captchas and we show that solutions to such problems can be used for steganographic communication. captchas based on these AI problem families, then, imply a win-win situation: either the problems remain unsolved and there is a way to differentiate humans from computers, or the problems are solved and there is a way to communicate covertly on some channels."}
{"_id": "c4475e5c53abedfdb84fd5a654492a9fd33d0610", "title": "Multiple social identities and stereotype threat: imbalance, accessibility, and working memory.", "text": "In 4 experiments, the authors showed that concurrently making positive and negative self-relevant stereotypes available about performance in the same ability domain can eliminate stereotype threat effects. Replicating past work, the authors demonstrated that introducing negative stereotypes about women's math performance activated participants' female social identity and hurt their math performance (i.e., stereotype threat) by reducing working memory. Moving beyond past work, it was also demonstrated that concomitantly presenting a positive self-relevant stereotype (e.g., college students are good at math) increased the relative accessibility of females' college student identity and inhibited their gender identity, eliminating attendant working memory deficits and contingent math performance decrements. Furthermore, subtle manipulations in questions presented in the demographic section of a math test eliminated stereotype threat effects that result from women reporting their gender before completing the test. This work identifies the motivated processes through which people's social identities became active in situations in which self-relevant stereotypes about a stigmatized group membership and a nonstigmatized group membership were available. In addition, it demonstrates the downstream consequences of this pattern of activation on working memory and performance."}
{"_id": "b3d506fd3ec836f1499478479960d2998b90f56c", "title": "Multiuser Resource Allocation for Mobile-Edge Computation Offloading", "text": "Mobile-edge computation offloading (MECO) offloads intensive mobile computation to clouds located at the edges of cellular networks. Thereby, MECO is envisioned as a promising technique for prolonging the battery lives and enhancing the computation capacities of mobiles. In this paper, we consider resource allocation in a MECO system comprising multiple users that time share a single edge cloud and have different computation loads. The optimal resource allocation is formulated as a convex optimization problem for minimizing the weighted sum mobile energy consumption under constraint on computation latency and for both the cases of infinite and finite edge cloud computation capacities. The optimal policy is proved to have a threshold-based structure with respect to a derived offloading priority function, which yields priorities for users according to their channel gains and local computing energy consumption. As a result, users with priorities above and below a given threshold perform complete and minimum offloading, respectively. Computing the threshold requires iterative computation. To reduce the complexity, a sub-optimal resource-allocation algorithm is proposed and shown by simulation to have close-to-optimal performance."}
{"_id": "7f7f2a84f0fc1393e92fc1a9fc30d92ca034962f", "title": "Trimming and Improving Skip-thought Vectors", "text": "The skip-thought model has been proven to be effective at learning sentence representations and capturing sentence semantics. In this paper, we propose a suite of techniques to trim and improve it. First, we validate a hypothesis that, given a current sentence, inferring the previous and inferring the next sentence provide similar supervision power, therefore only one decoder for predicting the next sentence is preserved in our trimmed skip-thought model. Second, we present a connection layer between encoder and decoder to help the model to generalize better on semantic relatedness tasks. Third, we found that a good word embedding initialization is also essential for learning better sentence representations. We train our model unsupervised on a large corpus with contiguous sentences, and then evaluate the trained model on 7 supervised tasks, which includes semantic relatedness, paraphrase detection, and text classification benchmarks. We empirically show that, our proposed model is a faster, lighter-weight and equally powerful alternative to the original skip-thought model."}
{"_id": "64227f6ae36f2fe23b9754d0a0123feac89f7731", "title": "Self-tuning fuzzy PID controller for electro-hydraulic cylinder", "text": "Hydraulic systems are widely used in industrial applications. This is due to its high speed of response with fast start, stop and speed reversal possible. The torque to inertia ratio is also large with resulting high acceleration capability. The nonlinear properties of hydraulic cylinder make the position tracking control design challenging. This paper presents the development and implementation of self-tuning fuzzy PID controller in controlling the position variation of electro-hydraulic actuator. The hydraulic system mathematical model is approximated using system identification technique. The simulation studies were done using Matlab Simulink environment. The output performance was compared with the design using pole-placement controller. The roots mean squared error for both techniques showed that self-tuning Fuzzy PID produced better result compared to using pole-placement controller."}
{"_id": "b3e1f8fda8c9a9d531d6664b106d04f077179a57", "title": "Formula Derivation and Verification of Characteristic Impedance for Offset Double-Sided Parallel Strip Line (DSPSL)", "text": "Double-sided parallel strip line (DSPSL) is a versatile microwave transmission line. With an offset, it allows for large variations of the characteristic impedance and therefore flexibility for the design of novel microwave devices. In this letter, a formula for the characteristic impedance of DSPSL is derived by the fuzzy EM method and verified by commercial software. Very good agreement is observed."}
{"_id": "455e11529077ec799484a5250ea8d6dd49a47390", "title": "Two Conceptions of the Physical", "text": "The debate over physicalism in philosophy of mind can be seen as concerning an inconsistent tetrad of theses: (1) if physicalism is true, a priori physicalism is true; (2) a priori physicalism is false; (3) if physicalism is false, epiphenomenalism is true; (4) epiphenomenalism is false. This paper argues that one may resolve the debate by distinguishing two conceptions of the physical: on the theory-based conception, it is plausible that (2) is true and (3) is false; on the object-based conception, it is plausible that (3) is true and (2) is false. The paper also defends and explores the version of physicalism that results from this strategy. 1. One way to view the contemporary debate in philosophy of mind over physicalism is to see it as being organized around an inconsistent tetrad of theses. These are: (1) If physicalism is true, a priori physicalism is true. (2) A priori physicalism is false. (3) If physicalism is false, epiphenomenalism is true. (4) Epiphenomenalism is false. It is obvious of course that these theses are inconsistent: (1) and (2) entail that physicalism is false, while (3) and (4) entail that it is true. Barring ambiguity, therefore, one thing we know is that one of the theses is false. On the other hand, each of the theses has powerful considerations, or at least what seem initially to be powerful considerations, in its favor. (1) In support of (1) are considerations of supervenience, articulated most clearly in recent times by Frank Jackson and David Chalmers. A priori physicalism is a thesis with two parts. The first part-the physicalist part-is that the mental supervenes with metaphysical necessity on the physical. The second part-the a priori part-is that mental truths are a priori entailed by physical truths. Many philosophers hold that supervenience stands in need of justification or explanation; Jackson and Chalmers argue that the project of justifying or explaining supervenience just is the project of making it plausible that there is an a priori entailment of the mental by the physical. This suggests that the first part of"}
{"_id": "5adb91899113b6cdea63f9c10baf41b154418c2f", "title": "A modular architecture for building automation systems", "text": "The deployment of building automation systems (BAS) allows to increase comfort, safety and security and to reduce operational cost. Today such systems typically follow a two-layered hierarchical approach. While control networks interconnect distributed sensors, actuators and controllers, a backbone provides the necessary infrastructure for management tasks hosted by configuration and management devices. In addition, devices interconnecting the control network with the backbone and the backbone with further networks (e.g., the Internet) play a strategic role. All BAS devices contributing to a particular functionality differ in their requirements for hardware. This paper discusses requirements for devices used in the building automation domain and presents our work in progress to assemble platforms with different purposes relying on a modular architecture."}
{"_id": "d4e5e9978ca8449ec6c4e8a4ebac4cb677bc3f62", "title": "Exploring actor-object relationships for query-focused multi-document summarization", "text": "Most research on multi-document summarization explores methods that generate summaries based on queries regardless of the users\u2019 preferences. We note that, different users can generate somewhat different summaries on the basis of the same source data and query. This paper presents our study on how to exploit the information regards how users summarized their texts. Models of different users can be used either separately, or in an ensemble-like fashion. Machine learning methods are explored in the construction of the individual models. However, we explore yet another hypothesis. We believe that the sentences selected into the summary should be coherent and supplement each other in their meaning. One method to model this relationship between sentences is by detecting actor\u2013object relationship (AOR). The sentences that satisfy this relationship have their importance value enhanced. This paper combines ensemble summarizing system and AOR to generate summaries. We have evaluated this method on DUC 2006 and DUC 2007 using ROUGE measure. Experimental results show the supervised method that exploits the ensemble summarizing system combined with AOR outperforms previous models when considering performance in query-based multidocument summarization tasks. Keyword User-based summarization \u00b7 Actor\u2013object relationship \u00b7 Multi-document summarization \u00b7 Ensemble summarizing system \u00b7 Training data construction Communicated by V. Loia. M. Valizadeh (B) \u00b7 P. Brazdil LIAAD INESC Tec, University of Porto, Porto, Portugal e-mail: valizadehmr@gmail.com P. Brazdil FEP, University of Porto, Porto, Portugal e-mail: pbrazdil@inescporto.pt"}
{"_id": "e674619de6d94cdd78158e380ed5ad701f9c19e7", "title": "WhyCon : An Efficent , Marker-based Localization System", "text": "We present an open-source marker-based localization system intended as a low-cost easy-to-deploy solution for aerial and swarm robotics. The main advantage of the presented method is its high computational efficiency, which allows its deployment on small robots with limited computational resources. Even on low-end computers, the core component of the system can detect and estimate 3D positions of hundreds of black and white markers at the maximum frame-rate of standard cameras. The method is robust to changing lighting conditions and achieves accuracy in the order of millimeters to centimeters. Due to its reliability, simplicity of use and availability as an open-source ROS module (http://purl.org/robotics/whycon), the system is now used in a number of aerial robotics projects where fast and precise relative localization is required."}
{"_id": "ed73065187c8e2480fb9883a2b1f197e3e43cb9d", "title": "A Novel Feature Selection Based on One-Way ANOVA F-Test for E-Mail Spam Classification", "text": "Spam is commonly defined as unwanted e-mails and it became a global threat against e-mail users. Although, Support Vector Machine (SVM) has been commonly used in e-mail spam classification, yet the problem of high data dimensionality of the feature space due to the massive number of e-mail dataset and features still exist. To improve the limitation of SVM, reduce the computational complexity (efficiency) and enhancing the classification accuracy (effectiveness). In this study, feature selection based on one-way ANOVA F-test statistics scheme was applied to determine the most important features contributing to e-mail spam classification. This feature selection based on one-way ANOVA F-test is used to reduce the high data dimensionality of the feature space before the classification process. The experiment of the proposed scheme was carried out using spam base wellknown benchmarking dataset to evaluate the feasibility of the proposed method. The comparison is achieved for different datasets, categorization algorithm and success measures. In addition, experimental results on spam base English datasets showed that the enhanced SVM (FSSVM) significantly outperforms SVM and many other recent spam classification methods for English dataset in terms of computational complexity and dimension reduction."}
{"_id": "e15d731fcdf98ec4a4150149a935ddf3ba5549c7", "title": "SEISMIC BEHAVIOR OF INTERMEDIATE BEAMS IN STEEL PLATE SHEAR WALLS", "text": "This paper presents some preliminary results of an ongoing research whose objective is to investigate the seismic behavior of intermediate beams (i.e., the beams other than those at the roof and foundation levels) in multi-story steel plate shear walls (SPSWs). Of primary interest is the determination of the strength level needed to avoid the formation of in-span plastic hinges, a relevant practical issue that has not been considered in past investigations. To attain this objective, the seismic response of different SPSW models was analyzed by performing linear and nonlinear analysis. The intermediate beams of the SPSW models were designed to resist: (I) forces imposed by gravity loads only; (II) forces determined by the ASCE 7 load combinations; and (III) forces imposed by fully yielded plates. For comparison purposes, SPSW models designed according to the Canadian Standard CAN/CSA S16-01 were considered as well. It is concluded that intermediate beams designed according to criteria I and II are prone to inspan plastic hinges and to excessive plastic deformations. It was also found that, while in-span plastic hinges do not appear in the intermediate beams of the CAN/CSA S16-01 models, they do appear in the roof and foundation beams of these models when a collapse mechanism develops."}
{"_id": "755321ceb4bd63a6bc7d786930b07b73de3c928d", "title": "MBBA: A Multi-Bandwidth Bus Arbiter for Hard Real-Time", "text": "Multi-core architectures are being increasingly used in embedded systems as they offer several advantages: improved hardware integration, low thermal dissipation and reduced energy consumption, while they make it possible to improve the computing power. In order to run real-time software on a multicore architecture, computing the Worst-Case Execution Time of every thread should be achievable. This notably involves bounding memory latencies by employing a predictable bus arbiter. However, state-of-the-art techniques prove to be irrelevant to schedule unbalanced workloads in which some threads require more bus bandwidth than the other ones. This paper proposes a new bus arbitration scheme that ensures that the shared bus latencies can be upper bounded. Compared to other schemes that make the bus latencies predictable, like the Round-Robin protocol, our approach defines several levels of bandwidth to meet requirements that may vary from one thread to another. Experimental results (WCET estimates) show that the worst-case bus latency is noticeably shortened, compared to Round-Robin, for the cores with highest priority that get the largest bandwidth. The relevance of the scheme is shown through an example workload composed of various benchmarks."}
{"_id": "69adc4f4f6ac4015d9dea2dab71d3e1bade2725f", "title": "LoRa Scalability: A Simulation Model Based on Interference Measurements", "text": "LoRa is a long-range, low power, low bit rate and single-hop wireless communication technology. It is intended to be used in Internet of Things (IoT) applications involving battery-powered devices with low throughput requirements. A LoRaWAN network consists of multiple end nodes that communicate with one or more gateways. These gateways act like a transparent bridge towards a common network server. The amount of end devices and their throughput requirements will have an impact on the performance of the LoRaWAN network. This study investigates the scalability in terms of the number of end devices per gateway of single-gateway LoRaWAN deployments. First, we determine the intra-technology interference behavior with two physical end nodes, by checking the impact of an interfering node on a transmitting node. Measurements show that even under concurrent transmission, one of the packets can be received under certain conditions. Based on these measurements, we create a simulation model for assessing the scalability of a single gateway LoRaWAN network. We show that when the number of nodes increases up to 1000 per gateway, the losses will be up to 32%. In such a case, pure Aloha will have around 90% losses. However, when the duty cycle of the application layer becomes lower than the allowed radio duty cycle of 1%, losses will be even lower. We also show network scalability simulation results for some IoT use cases based on real data."}
{"_id": "8ee4ed49fbf411fb25ba3119c2bc5c5f27a3e93e", "title": "Improvements in muscle symmetry in children with cerebral palsy after equine-assisted therapy (hippotherapy).", "text": "OBJECTIVE\nTo evaluate the effect of hippotherapy (physical therapy utilizing the movement of a horse) on muscle activity in children with spastic cerebral palsy.\n\n\nDESIGN\nPretest/post-test control group.\n\n\nSETTING/LOCATION\nTherapeutic Riding of Tucson (TROT), Tucson, AZ.\n\n\nSUBJECTS\nFifteen (15) children ranging from 4 to 12 years of age diagnosed with spastic cerebral palsy.\n\n\nINTERVENTIONS\nChildren meeting inclusion criteria were randomized to either 8 minutes of hippotherapy or 8 minutes astride a stationary barrel.\n\n\nOUTCOME MEASURES\nRemote surface electromyography (EMG) was used to measure muscle activity of the trunk and upper legs during sitting, standing, and walking tasks before and after each intervention.\n\n\nRESULTS\nAfter hippotherapy, significant improvement in symmetry of muscle activity was noted in those muscle groups displaying the highest asymmetry prior to hippotherapy. No significant change was noted after sitting astride a barrel.\n\n\nCONCLUSIONS\nEight minutes of hippotherapy, but not stationary sitting astride a barrel, resulted in improved symmetry in muscle activity in children with spastic cerebral palsy. These results suggest that the movement of the horse rather than passive stretching accounts for the measured improvements."}
{"_id": "96e6a49cf2d1e89c981e3b0bf9fafdcb2dfc35de", "title": "Gain and bandwidth improvement of microstrip antenna using RIS and FPC resonator", "text": "Improvement in gain as well as bandwidth of microstrip antenna using reactive impedance surface and Fabry Perot cavity resonator is proposed. Suspended microstrip antenna (MSA) is designed on reactive impedance surface (RIS) backed FR4 substrate. RIS improves bandwidth, gain and efficiency of antenna due to reduced coupling between MSA and ground plane. This MSA backed RIS is placed in a FPC resonator for further improving the gain of MSA. 1\u00d71 to 4\u00d74 square patch arrays are designed on a dielectric layer which is energized by the radiations from RIS backed MSA. The PRS layer is placed at about \u03bb0/2 from ground. The S11 is < \u22129.5 dB over 5.725\u20136.4 GHz. Maximum gain of 15.8 dBi with gain variation < 2 dB is obtained using 4\u00d74 array. Beside this, cross polarization and Side lobe level are < \u22122\u03bf dB. Antenna offers front to back lobe ratio (FBR) > 20 dB and > 70 % antenna efficiency. The measured results of prototype structure are in agreement with the simulation results."}
{"_id": "1a4134f85c9b5cb0697a3213308c148e7e53afee", "title": "Using Spoken Language Benchmarks to Characterize the Expressive Language Skills of Young Children With Autism Spectrum Disorders.", "text": "PURPOSE\nSpoken language benchmarks proposed by Tager-Flusberg et al. (2009) were used to characterize communication profiles of toddlers with autism spectrum disorders and to investigate if there were differences in variables hypothesized to influence language development at different benchmark levels.\n\n\nMETHOD\nThe communication abilities of a large sample of toddlers with autism spectrum disorders (N = 105) were characterized in terms of spoken language benchmarks. The toddlers were grouped according to these benchmarks to investigate whether there were differences in selected variables across benchmark groups at a mean age of 2.5 years.\n\n\nRESULTS\nThe majority of children in the sample presented with uneven communication profiles with relative strengths in phonology and significant weaknesses in pragmatics. When children were grouped according to one expressive language domain, across-group differences were observed in response to joint attention and gestures but not cognition or restricted and repetitive behaviors.\n\n\nCONCLUSION\nThe spoken language benchmarks are useful for characterizing early communication profiles and investigating features that influence expressive language growth."}
{"_id": "127f3b1106835ccee15d2cdf1e1099cf699247cd", "title": "THE DEMOCRATIZATION OF PERSONAL CONSUMER LOANS? DETERMINANTS OF SUCCESS IN ONLINE PEER-TO-PEER LENDING COMMUNITIES", "text": "Online peer-to-peer (P2P) lending communities enable individual consumers to borrow from, and lend money to, one another directly. We study the borrowerand loan listing-related determinants of funding success in an online P2P lending community by conceptualizing loan decision variables (loan amount, interest rate offered, duration of loan listing) as mediators between borrower attributes such as demographic characteristics, financial strength, and effort prior to making the request, and the likelihood of funding success. Borrower attributes are also treated as moderators of the effects of loan decision variables on funding success. The results of our empirical study, conducted using a database of 5,370 completed P2P loan transactions, provide support for the proposed conceptual framework. Although demographic attributes such as race and gender do affect likelihood of funding success, their effects are very small in comparison to effects of borrowers\u2019 financial strength and their effort when listing and publicizing the loan. These results are substantially different from the documented discriminatory practices of US financial institutions, suggesting that individual lenders lend more fairly when their own investment money is at stake in P2P loans. The paper concludes with specific suggestions to borrowers to increase their chances of receiving funding in P2P lending communities, and a discussion of future research opportunities."}
{"_id": "5839f91e41d1dfd7506bf6ec904b256939a8af12", "title": "Ensemble application of convolutional neural networks and multiple kernel learning for multimodal sentiment analysis", "text": "The advent of the Social Web has enabled anyone with an Internet connection to easily create and share their ideas, opinions and content with millions of other people around the world. In pace with a global deluge of videos from billions of computers, smartphones, tablets, university projectors and security cameras, the amount of multimodal content on the Web has been growing exponentially, and with that comes the need for decoding such information into useful knowledge. In this paper, a multimodal affective data analysis framework is proposed to extract user opinion and emotions from video content. In particular, multiple kernel learning is used to combine visual, audio and textual modalities. The proposed framework outperforms the state-of-the-art model in multimodal sentiment analysis research with a margin of 10-13% and 3-5% accuracy on polarity detection and emotion recognition, respectively. The paper also proposes an extensive study on decision-level fusion."}
{"_id": "b3848d438ae936b4cf4b244345c2cbb7d62461e8", "title": "Clash of the Contagions: Cooperation and Competition in Information Diffusion", "text": "In networks, contagions such as information, purchasing behaviors, and diseases, spread and diffuse from node to node over the edges of the network. Moreover, in real-world scenarios multiple contagions spread through the network simultaneously. These contagions not only propagate at the same time but they also interact and compete with each other as they spread over the network. While traditional empirical studies and models of diffusion consider individual contagions as independent and thus spreading in isolation, we study how different contagions interact with each other as they spread through the network. We develop a statistical model that allows for competition as well as cooperation of different contagions in information diffusion. Competing contagions decrease each other's probability of spreading, while cooperating contagions help each other in being adopted throughout the network. We evaluate our model on 18,000 contagions simultaneously spreading through the Twitter network. Our model learns how different contagions interact with each other and then uses these interactions to more accurately predict the diffusion of a contagion through the network. Moreover, the model also provides a compelling hypothesis for the principles that govern content interaction in information diffusion. Most importantly, we find very strong effects of interactions between contagions. Interactions cause a relative change in the spreading probability of a contagion by 71% on the average."}
{"_id": "017372aec4b163ed6300499d40e316d2a0a7a9dd", "title": "Patterns of temporal variation in online media", "text": "Online content exhibits rich temporal dynamics, and diverse realtime user generated content further intensifies this process. However, temporal patterns by which online content grows and fades over time, and by which different pieces of content compete for attention remain largely unexplored.\n We study temporal patterns associated with online content and how the content's popularity grows and fades over time. The attention that content receives on the Web varies depending on many factors and occurs on very different time scales and at different resolutions. In order to uncover the temporal dynamics of online content we formulate a time series clustering problem using a similarity metric that is invariant to scaling and shifting. We develop the K-Spectral Centroid (K-SC) clustering algorithm that effectively finds cluster centroids with our similarity measure. By applying an adaptive wavelet-based incremental approach to clustering, we scale K-SC to large data sets.\n We demonstrate our approach on two massive datasets: a set of 580 million Tweets, and a set of 170 million blog posts and news media articles. We find that K-SC outperforms the K-means clustering algorithm in finding distinct shapes of time series. Our analysis shows that there are six main temporal shapes of attention of online content. We also present a simple model that reliably predicts the shape of attention by using information about only a small number of participants. Our analyses offer insight into common temporal patterns of the content on theWeb and broaden the understanding of the dynamics of human attention."}
{"_id": "431551e2626973b150d982563a175ef80b56ce98", "title": "An annotation tool for automatically triangulating individuals' psychophysiological emotional reactions to digital media stimuli", "text": "Current affective user experience studies require both laborious and time-consuming data analysis, as well as dedicated affective classification algorithms. Moreover, the high technical complexity and lack of general guidelines for developing these affective classification algorithms further limits the comparability of the obtained results. In this paper we target this issue by presenting a tool capable of automatically annotating and triangulating players' physiologically interpreted emotional reactions to in-game events. This tool was initially motivated by an experimental psychology study regarding the emotional habituation effects of audio-visual stimuli in digital games and we expect it to contribute in future similar studies by providing both a deeper and more objective analysis on the affective aspects of user experience. We also hope it will contribute towards the rapid implementation and accessibility of this type of studies by open-sourcing it. Throughout this paper we describe the development and benefits presented by our tool, which include: enabling researchers to conduct objective a posteriori analyses without disturbing the gameplay experience, automating the annotation and emotional response identification process, and formatted data exporting for further analysis in third-party statistical software applications."}
{"_id": "cded2e16ad67f690c37a7c686ae5431c69caddcb", "title": "A 3-D printed lightweight cavity-backed monocone antenna", "text": "A lightweight exponentially tapered cavity-backed mono-cone antenna made by 3-D printing and conductive paint coating is proposed. To make the 3-D printed structure be metalized, three layers of conductive paint are used. To verify the proposed fabrication method, the 3-D printed antenna was fabricated and its return loss and far-field radiation characteristics were analyzed."}
{"_id": "b4f517f2f229a5f9d8b64bf7572c515c6ed414bd", "title": "Symmetrical Dense-Shortcut Deep Fully Convolutional Networks for Semantic Segmentation of Very-High-Resolution Remote Sensing Images", "text": "Semantic segmentation has emerged as a mainstream method in very-high-resolution remote sensing land-use/land-cover applications. In this paper, we first review the state-of-the-art semantic segmentation models in both computer vision and remote sensing fields. Subsequently, we introduce two semantic segmentation frameworks: SNFCN and SDFCN, both of which contain deep fully convolutional networks with shortcut blocks. We adopt an overlay strategy as the postprocessing method. Based on our frameworks, we conducted experiments on two online ISPRS datasets: Vaihingen and Potsdam. The results indicate that our frameworks achieve higher overall accuracy than the classic FCN-8s and SegNet models. In addition, our postprocessing method can increase the overall accuracy by about 1%\u20132% and help to eliminate \u201csalt and pepper\u201d phenomena and block effects."}
{"_id": "51f300c4c0de859f8c0a1598c78e5e71eeef42fd", "title": "Generating Neural Networks with Neural Networks", "text": "Hypernetworks are neural networks that generate weights for another neural network. We formulate the hypernetwork training objective as a compromise between accuracy and diversity, where the diversity takes into account trivial symmetry transformations of the target network. We explain how this simple formulation generalizes variational inference. We use multi-layered perceptrons to form the mapping from the low dimensional input random vector to the high dimensional weight space, and demonstrate how to reduce the number of parameters in this mapping by parameter sharing. We perform experiments and show that the generated weights are diverse and lie on a non-trivial manifold."}
{"_id": "b8a618199f53fcabc8b79dcd53930398bdc6bcdb", "title": "OPTIMIZING THE PERFORMANCE OF VEHICLE-TO-GRID ( V 2 G ) ENABLED BATTERY ELECTRIC VEHICLES THROUGH A SMART CHARGE SCHEDULING MODEL", "text": "\u2212A smart charge scheduling model is presented for potential (1) vehicle-to-grid (V2G) enabled battery electric vehicle (BEV) owners who are willing to participate in the grid ancillary services, and (2) grid operators. Unlike most V2G implementations, which are considered from the perspective of power grid systems, this model includes a communication network architecture for connecting system components that supports both BEV owners and grid operators to efficiently monitor and manage the charging and ancillary service activities. This model maximizes the net profit to each BEV participant while simultaneously satisfying energy demands for his/her trips. The performance of BEVs using the scheduling model is validated by estimating optimal annual financial benefits under different scenarios. An analysis of popular BEV models revealed that one of the existing BEVs considered in the study can generate an annual regulation profit of $454, $394 and $318 when the average daily driving distance is 20 miles, 40 miles and 60 miles, respectively. All popular BEV models can completely compensate the energy cost and generate a positive net profit, through the application of the scheduling model presented in this paper, with an annual driving distance of approximately 15,000 miles. Simulation analysis indicated that the extra load distribution from the optimized BEV charging operations were well balanced compared to the unmanaged BEV"}
{"_id": "2b21b25e1fdcf9555dc0f26491c2bbc45142a282", "title": "Reading Between the Lines: Overcoming Data Sparsity for Accurate Classification of Lexical Relationships", "text": "The lexical semantic relationships between word pairs are key features for many NLP tasks. Most approaches for automatically classifying related word pairs are hindered by data sparsity because of their need to observe two words co-occurring in order to detect the lexical relation holding between them. Even when mining very large corpora, not every related word pair co-occurs. Using novel representations based on graphs and word embeddings, we present two systems that are able to predict relations between words, even when these are never found in the same sentence in a given corpus. In two experiments, we demonstrate superior performance of both approaches over the state of the art, achieving significant gains in recall."}
{"_id": "7d5146e3ddb13a157799f0bdca1ac7d822b1fc82", "title": "MC-HOG Correlation Tracking with Saliency Proposal", "text": "Designing effective feature and handling the model drift problem are two important aspects for online visual tracking. For feature representation, gradient and color features are most widely used, but how to effectively combine them for visual tracking is still an open problem. In this paper, we propose a rich feature descriptor, MC-HOG, by leveraging rich gradient information across multiple color channels or spaces. Then MC-HOG features are embedded into the correlation tracking framework to estimate the state of the target. For handling the model drift problem caused by occlusion or distracter, we propose saliency proposals as prior information to provide candidates and reduce background interference. In addition to saliency proposals, a ranking strategy is proposed to determine the importance of these proposals by exploiting the learnt appearance filter, historical preserved object samples and the distracting proposals. In this way, the proposed approach could effectively explore the color-gradient characteristics and alleviate the model drift problem. Extensive evaluations performed on the benchmark dataset show the superiority of the proposed method."}
{"_id": "4a6a138799df4f05e3a37dc83a375ea546d1bdd0", "title": "Leveraging stereopsis for saliency analysis", "text": "Stereopsis provides an additional depth cue and plays an important role in the human vision system. This paper explores stereopsis for saliency analysis and presents two approaches to stereo saliency detection from stereoscopic images. The first approach computes stereo saliency based on the global disparity contrast in the input image. The second approach leverages domain knowledge in stereoscopic photography. A good stereoscopic image takes care of its disparity distribution to avoid 3D fatigue. Particularly, salient content tends to be positioned in the stereoscopic comfort zone to alleviate the vergence-accommodation conflict. Accordingly, our method computes stereo saliency of an image region based on the distance between its perceived location and the comfort zone. Moreover, we consider objects popping out from the screen salient as these objects tend to catch a viewer's attention. We build a stereo saliency analysis benchmark dataset that contains 1000 stereoscopic images with salient object masks. Our experiments on this dataset show that stereo saliency provides a useful complement to existing visual saliency analysis and our method can successfully detect salient content from images that are difficult for monocular saliency analysis methods."}
{"_id": "2b76d1c1daeddaee11963cd9069f5eef940649aa", "title": "The line of action: an intuitive interface for expressive character posing", "text": "The line of action is a conceptual tool often used by cartoonists and illustrators to help make their figures more consistent and more dramatic. We often see the expression of characters---may it be the dynamism of a super hero, or the elegance of a fashion model---well captured and amplified by a single aesthetic line. Usually this line is laid down in early stages of the drawing and used to describe the body's principal shape. By focusing on this simple abstraction, the person drawing can quickly adjust and refine the overall pose of his or her character from a given viewpoint. In this paper, we propose a mathematical definition of the line of action (LOA), which allows us to automatically align a 3D virtual character to a user-specified LOA by solving an optimization problem. We generalize this framework to other types of lines found in the drawing literature, such as secondary lines used to place arms. Finally, we show a wide range of poses and animations that were rapidly created using our system."}
{"_id": "eee9d92794872fd3eecf38f86cd26d605f3eede7", "title": "Multi-view point clouds registration and stitching based on SIFT feature", "text": "In order to solve multi-view point clouds registration in large non-feature marked scenes, a new registration and stitching method was proposed based on 2D SIFT(Scale-invariant feature transform) features. Firstly, we used texture mapping method to generate 2D effective texture image, and then extracted and match SIFT features, obtained accurate key points and registration relationship between the effective texture images. Secondly, we reflected the SIFT key points and registration relationship to the 3D point clouds data, obtained key points and registration relationship of multi-view point clouds, we can achieve multi-view point clouds stitching. Our algorithm used texture mapping method to generate 2D effective texture image, it can eliminate interference of the holes and ineffective points, and can eliminate unnecessary mistake matching. Our algorithm used correct extracted matching point pairs to stitch, avoiding stepwise iterated of ICP algorithm, so our algorithm is simple to calculate, and it's matching precision and matching efficiency are improved to some extent. We carried experiments on two-view point clouds in two large indoor; the experiment results verified the validity of our algorithm."}
{"_id": "c7d39d8aa2bfb1a7952e2c6fbd5bed237478f1a0", "title": "An IOT based human healthcare system using Arduino uno board", "text": "Internet of Things (IOT) visualizes a future of anything anywhere by anyone at any time. The Information and communication technologies help in creating a revolution in digital technology. IOT are known for interconnecting various physical devices with the networks. In IOT, various physical devices are embedded with different types of sensors and other devices to exchange data between them. An embedded system is a combination of software and hardware where they are programmed for functioning specific functions. These data can be accessed from any parts of the world by making use of cloud. This can be used for creating a digital world, smart homes, healthcare systems and real life data exchange like smart banking. Though Internet of things has emerged long back, it is now becoming popular and gaining attention lately. In healthcare industry, some of the hospitals started using sensors implemented in the bed to get the status of patient's movement and other activities. This paper contains various IOT applications and the role of IOT in the healthcare system, challenges in the healthcare system using IOT. Also, introduced a secured surveillance monitoring system for reading and storing patient's details using low power for transmitting the data."}
{"_id": "56debe08d1f3f0a149ef18b86fc2c6be593bdc03", "title": "Beyond Cybersecurity Awareness: Antecedents and Satisfaction", "text": "Organizations develop technical and procedural measures to protect information systems. Relying only on technical based security solutions is not enough. Organizations must consider technical security solutions along with social, human, and organizational factors. The human element represents the employees (insiders) who use the information systems and other technology resources in their day-to-day operations. ISP awareness is essential to protect organizational information systems. This study adapts the Innovation Diffusion Theory to examine the antecedents of ISP awareness and its impact on the satisfaction with ISP and security practices. A sample of 236 employees in universities in the United States is collected to evaluate the research model. Results indicated that ISP quality, self-efficacy, and technology security awareness significantly impact ISP awareness. The current study presents significant contributions toward understanding the antecedents of ISP awareness and provides a starting point toward including satisfaction aspect in information security behavioral domain."}
{"_id": "b329e8dc2f97ee604df17b6fa15484363ccb52ab", "title": "Vehicle detection and classification based on convolutional neural network", "text": "Deep learning has emerged as a hot topic due to extensive application and high accuracy. In this paper this efficient method is used for vehicle detection and classification. We extract visual features from the activation of a deep convolutional network, large-scale sparse learning and other distinguishing features in order to compare their accuracy. When compared to the leading methods in the challenging ImageNet dataset, our deep learning approach obtains highly competitive results. Through the experiments with in short of training data, the features extracted by deep learning method outperform those generated by traditional approaches."}
{"_id": "a697e3eb3c52d2c8833a178fbf99dd4f728bcddd", "title": "Neuroticism as a mediator of treatment response to SSRIs in major depressive disorder.", "text": "BACKGROUND\nSerotonin function has been implicated in both major depressive disorder and neuroticism. In the current investigation, we examined the hypothesis that any change in depression severity is mediated through the reduction of neuroticism, but only for those compounds which target serotonin receptors.\n\n\nMETHODS\nNinety-three outpatients in the midst of a major depressive episode received one of three antidepressant medications, classified into two broad types: selective serotonin reuptake inhibitors (SSRIs) and non-SSRIs (i.e. reversible monoamine oxidase inhibitors [RIMAs] and noradrenergic and dopaminergic reuptake blockers [NDMs]). Patients completed the Hamilton Rating Scale for Depression, Beck Depression Inventory II and Revised NEO Personality Inventory prior to and following approximately 16 weeks of treatment. Structural equation modeling was used to test two models: a mediation model, in which neuroticism change is the mechanism by which SSRIs exert a therapeutic effect upon depressive symptoms, and a complication model, in which neuroticism change is a mere epiphenomenon of depression reduction in response to SSRIs.\n\n\nRESULTS\nThe mediation model provided a good fit to the data; the complication model did not. Patients treated with SSRIs demonstrated greater neuroticism change than those treated with non-SSRIs, and greater neuroticism change was associated with greater depressive symptom change. These effects held for both self-reported and clinician-rated depressive symptom severity.\n\n\nLIMITATIONS\nReplication within a randomized control trial with multiple assessment periods is required.\n\n\nCONCLUSION\nNeuroticism mediates changes in depression in response to treatment with SSRIs, such that any treatment effect of SSRIs occurs through neuroticism reduction."}
{"_id": "37eacf589bdf404b5ceabb91d1a154268f7eb567", "title": "Voice as sound: using non-verbal voice input for interactive control", "text": "We describe the use of non-verbal features in voice for direct control of interactive applications. Traditional speech recognition interfaces are based on an indirect, conversational model. First the user gives a direction and then the system performs certain operation. Our goal is to achieve more direct, immediate interaction like using a button or joystick by using lower-level features of voice such as pitch and volume. We are developing several prototype interaction techniques based on this idea, such as \"control by continuous voice\", \"rate-based parameter control by pitch,\" and \"discrete parameter control by tonguing.\" We have implemented several prototype systems, and they suggest that voice-as-sound techniques can enhance traditional voice recognition approach."}
{"_id": "4999437db9bae36fab1a8336445c0e6b3d1f70c0", "title": "Improving Diversity in Ranking using Absorbing Random Walks", "text": "We introduce a novel ranking algorithm called GRASSHOPPER, which ranks items with an emphasis on diversity. That is, the top items should be different from each other in order to have a broad coverage of the whole item set. Many natural language processing tasks can benefit from such diversity ranking. Our algorithm is based on random walks in an absorbing Markov chain. We turn ranked items into absorbing states, which effectively prevents redundant items from receiving a high rank. We demonstrate GRASSHOPPER\u2019s effectiveness on extractive text summarization: our algorithm ranks between the 1st and 2nd systems on DUC 2004 Task 2; and on a social network analysis task that identifies movie stars of the world."}
{"_id": "94c92186f6ab009160f8fd634fb5ddf4d0757a25", "title": "Deceiving End-to-End Deep Learning Malware Detectors using Adversarial Examples", "text": "In recent years, deep learning has shown performance breakthroughs in many prominent problems. However, this comes with a major concern: deep networks have been found to be vulnerable to adversarial examples. Adversarial examples are modified inputs designed to cause the model to err. In the domains of images and speech, the modifications are untraceable by humans, but affect the model\u2019s output. In binary files, small modifications to the byte-code often lead to significant changes in its functionality/validity, therefore generating adversarial examples is not straightforward. We introduce a novel approach for generating adversarial examples for discrete input sets, such as binaries. We modify malicious binaries by injecting a small sequence of bytes to the binary file. The modified files are detected as benign, while preserving their malicious functionality. We applied this approach to an end-to-end convolutional neural malware detector and present a high evasion rate. Moreover, we showed that our payload is transferable within different positions of the same file and across different files and file families."}
{"_id": "7917b89d0780decf7201aad8db9ed3cb101b24d7", "title": "Intrusion Detection System using K-means , PSO with SVM Classifier : A Survey", "text": "Intrusion detection is a process of identifying the Attacks. The main aim of IDS is to identify the Normal and Intrusive activities. In recent years, many researchers are using data mining techniques for building IDS. Here we propose a new approach using data mining technique such as SVM and Particle swarm optimization for attaining higher detection rate. The propose technique has major steps: Preprocessing, Training using PSO, clustering using K-means to generate different training subsets. Then based on the subsequent training subsets a vector for SVM classification is formed and in the end, classification using PSO is performed to detect Intrusion has happened or not. This paper contains summarization study and identification of the drawbacks of formerly surveyed works. Keywords-Intrusion detection system; Neuro-fuzzy; Support Vector Machine (SVM); PSO; K-means"}
{"_id": "8fbe4fd165b79e01e9b48f067cfe4d454f4a17e4", "title": "Artificial Life and Real Robots", "text": "The first part of this paper explores the general issues in using Artificial Life techniques to program actual mobile robots. In particular it explores the difficulties inherent in transferring programs evolved in a simulated environment to run on an actual robot. It examines the dual evolution of organism morphology and nervous systems in biology. It proposes techniques to capture some of the search space pruning that dual evolution offers in the domain of robot programming. It explores the relationship between robot morphology and program structure, and techniques for capturing regularities across this mapping. The second part of the paper is much more specific. It proposes techniques which could allow realistic explorations concerning the evolution of programs to control physically embodied mobile robots. In particular we introduce a new abstraction for behavior-based robot programming which is specially tailored to be used with genetic programming techniques. To compete with hand coding techniques it will be necessary to automatically evolve programs that are one to two orders of magnitude more complex than those previously reported in any domain. Considerable extensions to previously reported approaches to genetic programming are necessary in order to achieve this goal."}
{"_id": "42fcaf880933b43c7ef1c0cf1b437e46cd9d0a9c", "title": "VARIATIONAL RECURRENT ADVERSARIAL DEEP DOMAIN ADAPTATION", "text": "We study the problem of learning domain invariant representations for time series data while transferring the complex temporal latent dependencies between domains. Our model termed as Variational Recurrent Adversarial Deep Domain Adaptation (VRADA) is built atop a variational recurrent neural network (VRNN) and trains adversarially to capture complex temporal relationships that are domain-invariant. This is (as far as we know) the first to capture and transfer temporal latent dependencies of multivariate time-series data. Through experiments on real-world multivariate healthcare time-series datasets, we empirically demonstrate that learning temporal dependencies helps our model\u2019s ability to create domain-invariant representations, allowing our model to outperform current state-of-the-art deep domain adaptation approaches."}
{"_id": "025fcdec650972d7baae1a5bb507e11ff2965075", "title": "Evaluation of Attention and Relaxation Levels of Archers in Shooting Process using Brain Wave Signal Analysis Algorithms", "text": "* Archer's capability of attention and relaxation control during shooting process was evaluated using EEG technology. Attention and meditation algorithms were used to represent the levels of mental concentration and relaxation levels. Elite, mid-level, and novice archers were tested for short and long distance shootings in the archery field. Single channel EEG was recorded on the forehead (Fp1) during the shooting process, and attention and meditation levels were computed by real time. Four types of variations were defined based on the increasing and decreasing patterns of attention and meditation levels during shooting process. Elite archers showed increases in both attention and relaxation while mid-level archers showed increased attention but decreased relaxation. Elite archers also showed higher levels of attention at the release than mid-level and novice archers. Levels of attention and relaxation and their variation patterns were useful to categorize archers and to provide feedback in training."}
{"_id": "4d48312f8602c1a36496e5342d654f6ac958f277", "title": "Analysis of the Electromagnetic Signature of Reinforced Concrete Structures for Nondestructive Evaluation of Corrosion Damage", "text": "This paper presents a nondestructive corrosion damage detection method for reinforced concrete structures based on the analysis of the electromagnetic signature of the steel rebar corrosion. The signature of the corrosion on the scattered field upon microwave illumination is first numerically analyzed. First-order quality factor parameters, the energy and the mean propagation delay, are proposed to quantify the corrosion amount in the structure. To validate the model, low-profile ultra-wide-band antennas (3-12 GHz) are fabricated and measured. Measurements on 12 reinforced concrete samples with induced corrosion are performed, using three different antenna setups. The experiments demonstrate a good correlation between an increase in the corrosion amount with a decrease in the energy and an increase in the time delay of the propagated signal."}
{"_id": "424c2e2382a21159db5c5058c5b558f1f534fd37", "title": "A PatchMatch-Based Dense-Field Algorithm for Video Copy\u2013Move Detection and Localization", "text": "We propose a new algorithm for the reliable detection and localization of video copy\u2013move forgeries. Discovering well-crafted video copy\u2013moves may be very difficult, especially when some uniform background is copied to occlude foreground objects. To reliably detect both additive and occlusive copy\u2013moves, we use a dense-field approach, with invariant features that guarantee robustness to several postprocessing operations. To limit complexity, a suitable video-oriented version of PatchMatch is used, with a multiresolution search strategy, and a focus on volumes of interest. Performance assessment relies on a new dataset, designed ad hoc, with realistic copy\u2013moves and a wide variety of challenging situations. Experimental results show the proposed method to detect and localize video copy\u2013moves with good accuracy even in adverse conditions."}
{"_id": "ae3b5ad72efeb6cdfdd38cb1627a67a12e106962", "title": "The Zachman Framework and the OMG''s Model Driven Architecture", "text": "123456789012345678901234567890121234567890123456 123456789012345678901234567890121234567890123456 123456789012345678901234567890121234567890123456 123456789012345678901234567890121234567890123456 123456789012345678901234567890121234567890123456 123456789012345678901234567890121234567890123456 123456789012345678901234567890121234567890123456 123456789012345678901234567890121234567890123456 123456789012345678901234567890121234567890123456 123456789012345678901234567890121234567890123456 123456789012345678901234567890121234567890123456 123456789012345678901234567890121234567890123456 1234567890123456789012345678901212345678901234567890123456789012123456789012345678901234567890121234 1234567890123456789012345678901212345678901234567890123456789012123456789012345678901234567890121234 1234567890123456789012345678901212345678901234567890123456789012123456789012345678901234567890121234 1234567890123456789012345678901212345678901234567890123456789012123456789012345678901234567890121234 1234567890123456789012345678901212345678901234567890123456789012123456789012345678901234567890121234 1234567890123456789012345678901212345678901234567890123456789012123456789012345678901234567890121234 1234567890123456789012345678901212345678901234567890123456789012123456789012345678901234567890121234 1234567890123456789012345678901212345678901234567890123456789012123456789012345678901234567890121234 1234567890123456789012345678901212345678901234567890123456789012123456789012345678901234567890121234 1234567890123456789012345678901212345678901234567890123456789012123456789012345678901234567890121234 1234567890123456789012345678901212345678901234567890123456789012123456789012345678901234567890121234 1234567890123456789012345678901212345678901234567890123456789012123456789012345678901234567890121234 1234567890123456789012345678901212345678901234567890123456789012123456789012345678901234567890121234 1234567890123456789012345678901212345678901234567890123456789012123456789012345678901234567890121234 1234567890123456789012345678901212345678901234567890123456789012123456789012345678901234567890121234 1234567890123456789012345678901212345678901234567890123456789012123456789012345678901234567890121234 BUSINESS PROCESS TRENDS WHITEPAPER"}
{"_id": "414b7477daa7838b6bbd7af659683a965691272c", "title": "Video summarisation: A conceptual framework and survey of the state of the art", "text": "Video summaries provide condensed and succinct representations of the content of a video stream through a combination of still images, video segments, graphical representations and textual descriptors. This paper presents a conceptual framework for video summarisation derived from the research literature and used as a means for surveying the research literature. The framework distinguishes between video summarisation techniques (the methods used to process content from a source video stream to achieve a summarisation of that stream) and video summaries (outputs of video summarisation techniques). Video summarisation techniques are considered within three broad categories: internal (analyse information sourced directly from the video stream), external (analyse information not sourced directly from the video stream) and hybrid (analyse a combination of internal and external information). Video summaries are considered as a function of the type of content they are derived from (object, event, perception or feature based) and the functionality offered to the user for their consumption (interactive or static, personalised or generic). It is argued that video summarisation would benefit from greater incorporation of external information, particularly user based information that is unobtrusively sourced, in order to overcome longstanding challenges such as the semantic gap and providing video summaries that have greater relevance to individual users. 2007 Elsevier Inc. All rights reserved."}
{"_id": "0ee2da37957b9d5b3fcc7827c84ee326cd8cb0c3", "title": "I tweet honestly, I tweet passionately: Twitter users, context collapse, and the imagined audience", "text": "Social media technologies collapse multiple audiences into single contexts, making it difficult for people to use the same techniques online that they do to handle multiplicity in face-to-face conversation. This article investigates how content producers navigate \u2018imagined audiences\u2019 on Twitter. We talked with participants who have different types of followings to understand their techniques, including targeting different audiences, concealing subjects, and maintaining authenticity. Some techniques of audience management resemble the practices of \u2018micro-celebrity\u2019 and personal branding, both strategic self-commodification. Our model of the networked audience assumes a manyto-many communication through which individuals conceptualize an imagined audience evoked through their tweets."}
{"_id": "216e92be827a9774130263754f2036aab0808484", "title": "The Benefits of Facebook \"Friends: \" Social Capital and College Students' Use of Online Social Network Sites", "text": "This study examines the relationship between use of Facebook, a popular online social network site, and the formation and maintenance of social capital. In addition to assessing bonding and bridging social capital, we explore a dimension of social capital that assesses one's ability to stay connected with members of a previously inhabited community, which we call maintained social capital. Regression analyses conducted on results from a survey of undergraduate students (N=286) suggest a strong association between use of Facebook and the three types of social capital, with the strongest relationship being to bridging social capital. In addition, Facebook usage was found to interact with measures of psychological well-being, suggesting that it might provide greater benefits for users experiencing low self-esteem and low life satisfaction."}
{"_id": "8392a12398718f73df004d742d8edf60477c723a", "title": "Signals in Social Supernets", "text": "Social network sites (SNSs) provide a new way to organize and navigate an egocentric social network. Are they a fad, briefly popular but ultimately useless? Or are they the harbingers of a new and more powerful social world, where the ability to maintain an immense network\u2014a social \"supernet\"\u2014fundamentally changes the scale of human society? This article presents signaling theory as a conceptual framework with which to assess the transformative potential of SNSs and to guide their design to make them into more effective social tools. It shows how the costs associated with adding friends and evaluating profiles affect the reliability of users' self-presentation; examines strategies such as information fashion and risk-taking; and shows how these costs and strategies affect how the publicly-displayed social network aids the establishment of trust, identity, and cooperation\u2014the essential foundations for an expanded social world. Grooming, Gossip, and Online Friending Social ties provide many benefits, including companionship, access to information, and emotional and material support (Granovetter, 1983; Wellman, Garton, & Haythornthwaite, 1997; Wellman & Gulia, 1999). Increasing the number of ties increases access to these benefits, although time and cognitive constraints preclude indefinite expansions of one's personal network. Yet if maintaining ties were to become more temporally efficient and cognitively effective, it should be possible to increase the scale of one's social world\u2014to create a \"supernet\" with many more ties than is feasible without socially assistive tools. The question this article addresses is whether social network sites (SNSs) are a technology that can bring this about. In the wild, apes groom each other, picking through fur to remove parasitic bugs. This behavior helps with hygiene and is relaxing and pleasant for the recipient. Perhaps most importantly, it establishes social bonds: Apes who groom each other are more likely to help each other and not fight. Long grooming sessions are time consuming, however. Since the ape must also spend many hours finding food, sleeping, etc., it is clear that grooming can sustain only a limited number of relationships (Dunbar, 1996). In Grooming, Gossip, and the Evolution of Language, Robin Dunbar (1996) argued eloquently that in human societies, language, especially gossip, has taken over the social function of grooming. Instead of removing lice from each other's hair, people check in with friends and colleagues, ask how they are doing, and exchange a few words about common acquaintances, the news, or the local sports team (Dunbar, 1996, 2004). Language is much more efficient than physical grooming, for one can talk to several people at once. Language also helps people learn about cultural norms, evaluate others' behavior, and keep up with the news and shifting opinions of their surrounding community. It makes reputation possible\u2014individuals benefit from the experience of others in determining who is nice, who does good Signals in Social Supernets http://jcmc.indiana.edu/vol13/issue1/donath.html 2 of 19 4/7/2008 6:26 PM work, and who should be shunned for their dishonest ways. Using language to maintain ties and manage trust, people can form and manage more complex and extensive social networks.1 Communication technologies expand human social reach (Horrigan, Boase, Rainie, & Wellman, 2006). 
Email makes communication more efficient: Sending a message to numerous recipients is as easy as sending it to one, and its asynchrony means that there is little need to coordinate interaction. Contact management tools, from paper Rolodexes to complex software systems, increase one's ability to remember large numbers of people (Whittaker, Jones, & Terveen 2002). While these technologies provide some of the support an expanded social world needs, they alone are not sufficient. People need to be able to keep track of ever-changing relationships (Dunbar, 1996; Nardi, Whittaker, Isaacs, Creech, Johnson, & Hainsworth, 2002), to see people within the context of their social relationships (Raub & Weesie, 1990), and, most fundamentally, to know whom to trust (Bacharach & Gambetti, 2001; Good, 2000). Email and contact tools help maintain an expanded collection of individual relationships. Are social network sites the solution for placing these relationships into the greater social context? A page on MySpace, filled with flashing logos, obscure comments, poorly focused yet revealing photos, and laced with twinkling animated gifs, may not look to the casual observer like the harbinger of the next stage in human social evolution. But perhaps it is. SNSs locate people in the context of their acquaintances, provide a framework for maintaining an extensive array of friends and other contacts, and allow for the public display of interpersonal commentary (boyd & Ellison, this issue). At the same time, SNSs are still primitive; it is too early in their development to observe clear evidence that they have transformed society. The goal of this article is to present a theoretical framework with which to a) assess the transformative potential of SNSs and b) develop design guidelines for making them into more effective social tools. The foundation for this analysis is signaling theory, which models why some communications are reliably honest and others are not. The argument begins with an introduction to signaling theory. The next section uses this theory to examine how the fundamental structure of SNSs can bring greater trust and reliability to online self-presentations, how specific site design decisions enhance or weaken their trust-conferring ability, and how seemingly pointless or irrational behaviors, such as online fashion and risk taking, actually signal social information. The final section examines the transformative possibilities of social supernets\u2014not only whether SNSs may bring them about, but if so, in what ways they might change our society. An emphasis of this article is on ways of achieving reliable information about identity and affiliations. There are situations where ephemeral, hidden, or multiple identities are desirable. However, minimal online identity has been easy to create, while it is harder to establish more grounded identities in a fluid and nuanced way. A primary goal of this article is to understand how reliability is encouraged or enforced. For designers of future systems such knowledge is a tool, not a blueprint. Depending on the situation, they should choose the appropriate space between anonymous and whimsical and verified and trustworthy identities and communication."}
{"_id": "2311a88803728111ba7bdb327b127ec3f54d282a", "title": "Beyond Bowling Together : SocioTechnical Capital", "text": "Social resources like trust and shared identity make it easier for people to work and play together. Such social resources are sometimes referred to as social capital. Thirty years ago, Americans built social capital as a side effect of participation in civic organizations and social activities, including bowling leagues. Today, they do so far less frequently (Putnam 2000). HCI researchers and practitioners need to find new ways for people to interact that will generate even more social capital than bowling together does. A new theoretical construct, SocioTechnical Capital, provides a framework for generating and evaluating technology-mediated social relations."}
{"_id": "5fc6817421038f21d355af7cee4114155d134f69", "title": "A Scientific Methodology for MIS Case Studies", "text": ""}
{"_id": "179aa78159b0d3739e12e80d492b4235df7283c2", "title": "On multiple foreground cosegmentation", "text": "In this paper, we address a challenging image segmentation problem called multiple foreground cosegmentation (MFC), which concerns a realistic scenario in general Webuser photo sets where a finite number of K foregrounds of interest repeatedly occur cross the entire photo set, but only an unknown subset of them is presented in each image. This contrasts the classical cosegmentation problem dealt with by most existing algorithms, which assume a much simpler but less realistic setting where the same set of foregrounds recurs in every image. We propose a novel optimization method for MFC, which makes no assumption on foreground configurations and does not suffer from the aforementioned limitation, while still leverages all the benefits of having co-occurring or (partially) recurring contents across images. Our method builds on an iterative scheme that alternates between a foreground modeling module and a region assignment module, both highly efficient and scalable. In particular, our approach is flexible enough to integrate any advanced region classifiers for foreground modeling, and our region assignment employs a combinatorial auction framework that enjoys several intuitively good properties such as optimality guarantee and linear complexity. We show the superior performance of our method in both segmentation quality and scalability in comparison with other state-of-the-art techniques on a newly introduced FlickrMFC dataset and the standard ImageNet dataset."}
{"_id": "ca6a1718c47f05bf0ad966f2f5ee62ff5d71af11", "title": "Fourier Analysis for Demand Forecasting in a Fashion Company", "text": "In the fashion industry, demand forecasting is particularly complex: companies operate with a large variety of short lifecycle products, deeply influenced by seasonal sales, promotional events, weather conditions, advertising and marketing campaigns, on top of festivities and socio-economic factors. At the same time, shelf-out-of-stock phenomena must be avoided at all costs. Given the strong seasonal nature of the products that characterize the fashion sector, this paper aims to highlight how the Fourier method can represent an easy and more effective forecasting method compared to other widespread heuristics normally used. For this purpose, a comparison between the fast Fourier transform algorithm and another two techniques based on moving average and exponential smoothing was carried out on a set of 4year historical sales data of a \u20ac60+ million turnover mediumto large-sized Italian fashion company, which operates in the women\u2019s textiles apparel and clothing sectors. The entire analysis was performed on a common spreadsheet, in order to demonstrate that accurate results exploiting advanced numerical computation techniques can be carried out without necessarily using expensive software."}
{"_id": "17444ea2d3399870939fb9afea78157d5135ec4a", "title": "Multiview Hessian Regularization for Image Annotation", "text": "The rapid development of computer hardware and Internet technology makes large scale data dependent models computationally tractable, and opens a bright avenue for annotating images through innovative machine learning algorithms. Semisupervised learning (SSL) therefore received intensive attention in recent years and was successfully deployed in image annotation. One representative work in SSL is Laplacian regularization (LR), which smoothes the conditional distribution for classification along the manifold encoded in the graph Laplacian, however, it is observed that LR biases the classification function toward a constant function that possibly results in poor generalization. In addition, LR is developed to handle uniformly distributed data (or single-view data), although instances or objects, such as images and videos, are usually represented by multiview features, such as color, shape, and texture. In this paper, we present multiview Hessian regularization (mHR) to address the above two problems in LR-based image annotation. In particular, mHR optimally combines multiple HR, each of which is obtained from a particular view of instances, and steers the classification function that varies linearly along the data manifold. We apply mHR to kernel least squares and support vector machines as two examples for image annotation. Extensive experiments on the PASCAL VOC'07 dataset validate the effectiveness of mHR by comparing it with baseline algorithms, including LR and HR."}
{"_id": "bff381287fcd0ae45cae0ad2fab03bceb04b1428", "title": "PKLot - A robust dataset for parking lot classification", "text": "http://dx.doi.org/10.1016/j.eswa.2015.02.009 0957-4174/ 2015 Elsevier Ltd. All rights reserved. \u21d1 Corresponding author. E-mail addresses: prlalmeida@inf.ufpr.br (P.R.L. de Almeida), lesoliveira@inf. ufpr.br (L.S. Oliveira), alceu@ppgia.pucpr.br (A.S. Britto Jr.), eunelson@ppgia.pucpr. br (E.J. Silva Jr.), alessandro.koerich@etsmtl.ca (A.L. Koerich). Paulo R.L. de Almeida , Luiz S. Oliveira , Alceu S. Britto Jr. b,\u21d1, Eunelson J. Silva Jr. b Alessandro L. Koerich b,c"}
{"_id": "9e92fc38823baa82e3af674d955d3a19a27c69b2", "title": "Irregularities in the Distribution of Primes and Twin Primes By", "text": "The maxima and minima of sL(x)) \u2014 n(x), iR(x)) \u2014 n(x), and sL2(x)) \u2014 n2(x) in various intervals up to x = 8 x 10 are tabulated. Here n(x) and n2(x) are respectively the number of primes and twin primes not exceeding x, L(x) is the logarithmic integral, R(x) is Riemann's approximation to ir(x), and L2(x) is the Hardy-Littlewood approximation to ti\"2(;c). The computation of the sum of inverses of twin primes less than 8 x 10 gives a probable value 1.9021604 \u00b1 5 x 10~7 for Brun's constant. 1. Approximations to nix). Let P= {2, 3, 5, \u2022 \u2022 \u2022 } be the set of primes, and let 7r(x) be the number of primes not exceeding x. Two well-known approximations to 7t(x) for x > 1 are the logarithmic integral: rx dt (1.1) L{x) = \\v ; J o log t O-2) =T + log(logx)+\u00cd: ^> k=\\ k-k and Riemann's approximation: (1.3) R{x)=Z^L{xi'k) k=i k (1.4) =1 + \u00bfJt\u00f3_ fz. k\\kl{k + 1) Note that (1.1) differs by 1(2) = 1.04516378... from the frequently used approximation f2 dt/log t. We are interested in the errors O-5) r.{x) = (L(x)> n{x) and (1 6) r2(x) = (Rix)) n{x), where denotes the integer closest to y {i.e., the integer part of (y + W)). k Received July 5, 1974. AMS (MOS) subject classifications (1970). Primary 10-04, 10H25, 10H15; Secondary 10A25, 10A40, 10H05, 65A05, 65B05."}
{"_id": "d5beb0826660ebc0c3bb96d1dba3f65daed1c377", "title": "Low-Cost Dual-Band Circularly Polarized Switched-Beam Array for Global Navigation Satellite System", "text": "This paper presents the design and development of a dual-band switched-beam microstrip array for global navigation satellite system (GNSS) applications such as ocean reflectometry and remote sensing. In contrast to the traditional Butler matrix, a simple, low cost, broadband and low insertion loss beam switching feed network is proposed, designed and integrated with a dual band antenna array to achieve continuous beam coverage of \u00b125\u00b0 around the boresight at the L1 (1.575 GHz) and L2 (1.227 GHz) bands. To reduce the cost, microstrip lines and PIN diode based switches are employed. The proposed switched-beam network is then integrated with dual-band step-shorted annular ring (S-SAR) antenna elements in order to produce a fully integrated compact-sized switched-beam array. Antenna simulation results show that the switched-beam array achieves a maximum gain of 12 dBic at the L1 band and 10 dBic at the L2 band. In order to validate the concept, a scaled down prototype of the simulated design is fabricated and measured. The prototype operates at twice of the original design frequency, i.e., 3.15 GHz and 2.454 GHz and the measured results confirm that the integrated array achieves beam switching and good performance at both bands."}
{"_id": "6d1eb878e1d2530c197c962dd4a61d2aba015261", "title": "A faceted approach to conceptualizing tasks in information seeking", "text": "0306-4573/$ see front matter Published by Elsevi doi:10.1016/j.ipm.2008.07.005 * Corresponding author. Tel.: +1 601 266 4035; fa E-mail addresses: Yuelin.Li@usm.edu (Y. Li), nick 1 Tel.: +1 732 932 7500x8271; fax: +1 732 932 69 The nature of the task that leads a person to engage in information interaction, as well as of information seeking and searching tasks, have been shown to influence individuals\u2019 information behavior. Classifying tasks in a domain has been viewed as a departure point of studies on the relationship between tasks and human information behavior. However, previous task classification schemes either classify tasks with respect to the requirements of specific studies or merely classify a certain category of task. Such approaches do not lead to a holistic picture of task since a task involves different aspects. Therefore, the present study aims to develop a faceted classification of task, which can incorporate work tasks and information search tasks into the same classification scheme and characterize tasks in such a way as to help people make predictions of information behavior. For this purpose, previous task classification schemes and their underlying facets are reviewed and discussed. Analysis identifies essential facets and categorizes them into Generic facets of task and Common attributes of task. Generic facets of task include Source of task, Task doer, Time, Action, Product, and Goal. Common attributes of task includes Task characteristics and User\u2019s perception of task. Corresponding sub-facets and values are identified as well. In this fashion, a faceted classification of task is established which could be used to describe users\u2019 work tasks and information search tasks. This faceted classification provides a framework to further explore the relationships among work tasks, search tasks, and interactive information retrieval and advance adaptive IR systems design. Published by Elsevier Ltd."}
{"_id": "80768c58c3e298c17cb39164715cb968cbe7af31", "title": "RCD: Rapid Close to Deadline Scheduling for datacenter networks", "text": "Datacenter-based Cloud Computing services provide a flexible, scalable and yet economical infrastructure to host online services such as multimedia streaming, email and bulk storage. Many such services perform geo-replication to provide necessary quality of service and reliability to users resulting in frequent large inter-datacenter transfers. In order to meet tenant service level agreements (SLAs), these transfers have to be completed prior to a deadline. In addition, WAN resources are quite scarce and costly, meaning they should be fully utilized. Several recently proposed schemes, such as B4 [1], TEMPUS [2], and SWAN [3] have focused on improving the utilization of inter-datacenter transfers through centralized scheduling, however, they fail to provide a mechanism to guarantee that admitted requests meet their deadlines. Also, in a recent study, authors propose Amoeba [4], a system that allows tenants to define deadlines and guarantees that the specified deadlines are met, however, to admit new traffic, the proposed system has to modify the allocation of already admitted transfers. In this paper, we propose Rapid Close to Deadline Scheduling (RCD), a close to deadline traffic allocation technique that is fast and efficient. Through simulations, we show that RCD is up to 15 times faster than Amoeba, provides high link utilization along with deadline guarantees, and is able to make quick decisions on whether a new request can be fully satisfied before its deadline."}
{"_id": "f8f02450166292caf154e73fbf307e725af0a06c", "title": "A Lyapunov approach to the stability of fractional differential equations", "text": "Lyapunov stability of fractional differential equations is addressed in this paper. The key concept is the frequency distributed fractional integrator model, which is the basis for a global state space model of FDEs. Two approaches are presented: the direct one is intuitive but it leads to a large dimension parametric problem while the indirect one, which is based on the continuous frequency distribution, leads to a parsimonious solution. Two examples, with linear and nonlinear FDEs, exhibit the main features of this new methodology. & 2010 Elsevier B.V. All rights reserved."}
{"_id": "c20875094dc9c56b83ca22b83cd3eb950c657ad5", "title": "Semi-Supervised Recognition of Sarcasm in Twitter and Amazon", "text": "Sarcasm is a form of speech act in which the speakers convey their message in an implicit way. The inherently ambiguous nature of sarcasm sometimes makes it hard even for humans to decide whether an utterance is sarcastic or not. Recognition of sarcasm can benefit many sentiment analysis NLP applications, such as review summarization, dialogue systems and review ranking systems. In this paper we experiment with semisupervised sarcasm identification on two very different data sets: a collection of 5.9 million tweets collected from Twitter, and a collection of 66000 product reviews from Amazon. Using the Mechanical Turk we created a gold standard sample in which each sentence was tagged by 3 annotators, obtaining F-scores of 0.78 on the product reviews dataset and 0.83 on the Twitter dataset. We discuss the differences between the datasets and how the algorithm uses them (e.g., for the Amazon dataset the algorithm makes use of structured information). We also discuss the utility of Twitter #sarcasm hashtags for the task."}
{"_id": "0a8a6dc8f57ed11c0cda81023c30e4deb1f48f9d", "title": "Scheduling and locking in multiprocessor real-time operating systems", "text": "BJ\u00d6RN B. BRANDENBURG: Scheduling and Locking in Multiprocessor Real-Time Operating Systems (Under the direction of James H. Anderson) With the widespread adoption of multicore architectures, multiprocessors are now a standard deployment platform for (soft) real-time applications. This dissertation addresses two questions fundamental to the design of multicore-ready real-time operating systems: (1) Which scheduling policies offer the greatest flexibility in satisfying temporal constraints; and (2) which locking algorithms should be used to avoid unpredictable delays? With regard to Question 1, LITMUSRT, a real-time extension of the Linux kernel, is presented and its design is discussed in detail. Notably, LITMUSRT implements link-based scheduling, a novel approach to controlling blocking due to non-preemptive sections. Each implemented scheduler (22 configurations in total) is evaluated under consideration of overheads on a 24-core Intel Xeon platform. The experiments show that partitioned earliest-deadline first (EDF) scheduling is generally preferable in a hard real-time setting, whereas global and clustered EDF scheduling are effective in a soft real-time setting. With regard to Question 2, real-time locking protocols are required to ensure that the maximum delay due priority inversion can be bounded a priori. Several spinlockand semaphore-based multiprocessor real-time locking protocols for mutual exclusion (mutex), reader-writer (RW) exclusion, and k-exclusion are proposed and analyzed. A new category of RW locks suited to worst-case analysis, termed phase-fair locks, is proposed and three efficient phase-fair spinlock implementations are provided (one with few atomic operations, one with low space requirements, and one with constant RMR complexity). Maximum priority-inversion blocking is proposed as a natural complexity measure for semaphore protocols. It is shown that there are two classes of schedulability analysis, namely suspensionoblivious and suspension-aware analysis, that yield two different lower bounds on blocking. Five"}
{"_id": "48a165c4b0fd4081c338ac990612d7fa30304f7f", "title": "Creativity support tools: accelerating discovery and innovation", "text": "How can designers of programming interfaces, interactive tools, and rich social environments enable more people to be more creative more often?"}
{"_id": "872085b4f62b02913fdab640547741d07fac3cbb", "title": "Miniaturized rat-race coupler with bandpass response and good stopband rejection", "text": "Coupled-resonator network is applied to the design of a rat-race coupler with bandpass response in this paper. Net-type resonators are used to reduce the circuit size and control the harmonics. As a result, the proposed coupler exhibits not only a compact size but also good stopband rejection. The proposed coupler occupies only one sixth area of the conventional design, while the rejection level in the stopband is better than 30 dB up to 4.3\u01920. The measured results are in good agreement with the simulated predictions."}
{"_id": "0fb1280aa1f2d066512d09e9ea3cfc724e9929cc", "title": "BinMatch: A Semantics-Based Hybrid Approach on Binary Code Clone Analysis", "text": "Binary code clone analysis is an important technique which has a wide range of applications in software engineering (e.g., plagiarism detection, bug detection). The main challenge of the topic lies in the semantics-equivalent code transformation (e.g., optimization, obfuscation) which would alter representations of binary code tremendously. Another challenge is the trade-off between detection accuracy and coverage. Unfortunately, existing techniques still rely on semantics-less code features which are susceptible to the code transformation. Besides, they adopt merely either a static or a dynamic approach to detect binary code clones, which cannot achieve high accuracy and coverage simultaneously.\u00a0 In this paper, we propose a semantics-based hybrid approach to detect binary clone functions. We execute a template binary function with its test cases, and emulate the execution of every target function for clone comparison with the runtime information migrated from that template function. The semantic signatures are extracted during the execution of the template function and emulation of the target function. Lastly, a similarity score is calculated from their signatures to measure their likeness. We implement the approach in a prototype system designated as BinMatch which analyzes IA-32 binary code on the Linux platform. We evaluate BinMatch with eight real-world projects compiled with different compilation configurations and commonly-used obfuscation methods, totally performing over 100 million pairs of function comparison. The experimental results show that BinMatch is robust to the semantics-equivalent code transformation. Besides, it not only covers all target functions for clone analysis, but also improves the detection accuracy comparing to the state-of-the-art solutions."}
{"_id": "f4ec256be284ff40316f27fa3b07531f407ce9fe", "title": "Broadband antenna array aperture made of tightly couple printed dipoles", "text": "This study reports a novel tight-coupled dipole array antenna to operate with up to six octave bandwidth and 60 degrees scanning. The array was designed through full-wave EM simulations by employing the current sheet array radiator concept that was advanced by a novel integrated feed network. Several prototypes of planar and conformal arrays across 0.3\u201320 GHz have been fabricated and tested with good agreement observed between all predicted and measured terminal and radiation features. The exemplified arrays have been designed for 1.2\u20136 GHz with relative radiator height 0.12 of maximum operational wavelength."}
{"_id": "cd45fe6a1d77e8e17e94f2abb454bcdfe55413bf", "title": "Swing-up and stabilization of a cart-pendulum system under restricted cart track length", "text": "This paper describes the swing-up and stabilization of a cart\u2013pendulum system with a restricted cart track length and restricted control force using generalized energy control methods. Starting from a pendant position, the pendulum is swung up to the upright unstable equilibrium con5guration using energy control principles. An \u201cenergy well\u201d is built within the cart track to prevent the cart from going outside the limited length. When su9cient energy is acquired by the pendulum, it goes into a \u201ccruise\u201d mode when the acquired energy is maintained. Finally, when the pendulum is close to the upright con5guration, a stabilizing controller is activated around a linear zone about the upright con5guration. The proposed scheme has worked well both in simulation and a practical setup and the conditions for stability have been derived using the multiple Lyapunov functions approach. c \u00a9 2002 Elsevier Science B.V. All rights reserved."}
{"_id": "5332dbbdba1f4918b0fe164036dd76623848e66e", "title": "Depression as a risk factor for coronary artery disease: evidence, mechanisms, and treatment.", "text": "OBJECTIVE\nThe present paper reviews the evidence that depression is a risk factor for the development and progression of coronary artery disease (CAD).\n\n\nMETHODS\nMEDLINE searches and reviews of bibliographies were used to identify relevant articles. Articles were clustered by theme: depression as a risk factor, biobehavioral mechanisms, and treatment outcome studies.\n\n\nRESULTS\nDepression confers a relative risk between 1.5 and 2.0 for the onset of CAD in healthy individuals, whereas depression in patients with existing CAD confers a relative risk between 1.5 and 2.5 for cardiac morbidity and mortality. A number of plausible biobehavioral mechanisms linking depression and CAD have been identified, including treatment adherence, lifestyle factors, traditional risk factors, alterations in autonomic nervous system (ANS) and hypothalamic pituitary adrenal (HPA) axis functioning, platelet activation, and inflammation.\n\n\nCONCLUSION\nThere is substantial evidence for a relationship between depression and adverse clinical outcomes. However, despite the availability of effective therapies for depression, there is a paucity of data to support the efficacy of these interventions to improve clinical outcomes for depressed CAD patients. Randomized clinical trials are needed to further evaluate the value of treating depression in CAD patients to improve survival and reduce morbidity."}
{"_id": "82f348ccf3a9eaceb1b88cc6278f4c422ea6b95f", "title": "Risk Assessment Uncertainties in Cybersecurity Investments", "text": "When undertaking cybersecurity risk assessments, it is important to be able to assign numeric values to metrics to compute the final expected loss that represents the risk that an organization is exposed to due to cyber threats. Even if risk assessment is motivated by real-world observations and data, there is always a high chance of assigning inaccurate values due to different uncertainties involved (e.g., evolving threat landscape, human errors) and the natural difficulty of quantifying risk. Existing models empower organizations to compute optimal cybersecurity strategies given their financial constraints, i.e., available cybersecurity budget. Further, a general game-theoretic model with uncertain payoffs (probability-distribution-valued payoffs) shows that such uncertainty can be incorporated in the game-theoretic model by allowing payoffs to be random. This paper extends previous work in the field to tackle uncertainties in risk assessment that affect cybersecurity investments. The findings from simulated examples indicate that although uncertainties in cybersecurity risk assessment lead, on average, to different cybersecurity strategies, they do not play a significant role in the final expected loss of the organization when utilising a game-theoretic model and methodology to derive these strategies. The model determines robust defending strategies even when knowledge regarding risk assessment values is not accurate. As a result, it is possible to show that the cybersecurity investments\u2019 tool is capable of providing effective decision support."}
{"_id": "f31a07a7db9b1796a04e781e550a51f2453649e9", "title": "Wideband and Low Sidelobe Slot Antenna Fed by Series-Fed Printed Array", "text": "A combination of slots and series-fed patch antenna array is introduced leading to an array antenna with wide bandwidth, low sidelobe level (SLL), and high front-to-back ratio (F/B). Three structures are analyzed. The first, the reference structure, is the conventional series-fed microstrip antenna array that operates at 16.26 GHz with a bandwidth of 3%, SLL of -24 dB, and F/B of 23 dB. The second is similar to the reference structure, but a large slot that covers the patch array is removed from the ground plane. This results in a wide operating bandwidth of over 72% for , with a 3 dB gain bandwidth of over 15.9%. The SLL of this structure is improved to more than at 16.26 GHz, and stable SLL of over 15.5-16.4 GHz band is also exhibited. This structure has a bi-directional radiation pattern. To make the pattern uni-directional and increase the F/B, a third structure is analyzed. In this structure, rather than one single large slot, an array of slots is used and a reflector is placed above the series-fed patch array. This increases the F/B to 40 dB. The simulated and measured results are presented and discussed."}
{"_id": "3fde79fafc9edbe0ad4282104e44c468a2bf1af4", "title": "Active Learning for Real Time Detection of Polyps in Videocolonoscopy", "text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L\u2019archive ouverte pluridisciplinaire HAL, est destin\u00e9e au d\u00e9p\u00f4t et \u00e0 la diffusion de documents scientifiques de niveau recherche, publi\u00e9s ou non, \u00e9manant des \u00e9tablissements d\u2019enseignement et de recherche fran\u00e7ais ou \u00e9trangers, des laboratoires publics ou priv\u00e9s. Active Learning For Real Time Detection Of Polyps In Videocolonoscopy Quentin Angermann, Aymeric Histace, Olivier Romain"}
{"_id": "2934206d916a04540fb3d437628fe44aea15ef0a", "title": "The challenge of simultaneous object detection and pose estimation: A comparative study", "text": "Detecting objects and estimating their pose remains as one of the major challenges of the computer vision research community. There exists a compromise between localizing the objects and estimating their viewpoints. The detector ideally needs to be viewinvariant, while the pose estimation process should be able to generalize towards the category-level. This work is an exploration of using deep learning models for solving both problems simultaneously. For doing so, we propose three novel deep learning architectures, which are able to perform a joint detection and pose estimation, where we gradually decouple the two tasks. We also investigate whether the pose estimation problem should be solved as a classification or regression problem, being this still an open question in the computer vision community. We detail a comparative analysis of all our solutions and the methods that currently define the state of the art for this problem. We use PASCAL3D+ and ObjectNet3D datasets to present the thorough experimental evaluation and main results. With the proposed models we achieve the state-of-the-art performance in both datasets."}
{"_id": "0be360a2964c4bb91aaad0cc6d1baa6639746028", "title": "Automatic recognition and analysis of human faces and facial expressions: a survey", "text": "Humans detect and identify faces in a scene with little or no effort . However, building an automated system that accomplishes this task is very difficult . There are several related Subproblems : detection of a pattern as a face . identification of the face, analysis of facial expressions, and classification based on physical features of the face . A system that performs these operations will find many applications, e .g . criminal identification, authentication in secure systems, etc . Most of the work to date has been in identification . This paper surveys the past work in solving these problems . The capability of the human visual system with respect to these problems is also discussed . It is meant to serve as a guide for an automated system . Some new approaches to these problems are also briefly discussed . Face detection Face identification Facial expressions Classification Facial features"}
{"_id": "4d899ebf7a3004fe550842830f06b4600d9c6230", "title": "The theory of signal detectability", "text": "The problem of signal detectability treated in this paper is the following: Suppose an observer is given a voltage varying with time during a prescribed observation interval and is asked to decide whether its source is noise or is signal plus noise. What method should the observer use to make this decision, and what receiver is a realization of that method? After giving a discussion of theoretical aspects of this problem, the paper presents specific derivations of the optimum receiver for a number of cases of practical interest. The receiver whose output is the value of the likelihood ratio of the input voltage over the observation interval is the answer to the second question no matter which of the various optimum methods current in the literature is employed including the Neyman Pearson observer, Siegert's ideal observer, and.Woodward and Davies' tlobserver.lV An optimum observer required to give a yes or no answer simply chooses an operating level and concludes that the receiver input arose from signal plus noise only when this level is exceeded by the output of his likelihood ratio receiver. Associated with each such operating level are conditional probabilities that the answer is a false alarm and the conditional probability of detection. Graphs of these quantities, called receiver operating characteristic, or ROC, curves are convenient for evaluating a receiver. If the detection problem is changed by varying, for example, the signal power, then a family of ROC curves is generated. Such things as betting curves can easily be obtained from such a family. The operating level to be used in a particular situation must be chosen by the observer. His choice will depend on such factors as the permissable false alarm rate, a priori probabilities, and relative importance of errors. With these theoretical aspects serving as sn introduction, attention is devoted to the derivation of explicit formulas for likelihood ratio, and for probability of detection and probability of false alarm, for a number of particular cases. Stationary, band-limited, white Gaussian noise is assumed.. The seven special cases which sre presented were chosen from the simplest problems in signal detection which closely represent practical situations. Two of the cases form a basis for the best available approximation to the important problem of finding probability of detection when the starting time of the signal, signal frequency, or both, are l l&TlOUn. Furthermore, in these two cases uncertainty in the signal can be varied, and a quantitative relationship between uncertainty and ability to detect signals is presented for these two rather general cases. The variety of examples presented should serve to suggest methods for attacking other simple signal detection problems and to give insight into problems too complicated to allow a direct solution."}
{"_id": "5140f1dc83e562de0eb409385480b799e9549d54", "title": "Textural Features for Image Classification", "text": "Texture is one of the important characteristics used in identifying objects or regions of interest in an image, whether the image be a photomicrograph, an aerial photograph, or a satellite image. This paper describes some easily computable textural features based on graytone spatial dependancies, and illustrates their application in categoryidentification tasks of three different kinds of image data: photomicrographs of five kinds of sandstones, 1:20 000 panchromatic aerial photographs of eight land-use categories, and Earth Resources Technology Satellite (ERTS) multispecial imagery containing seven land-use categories. We use two kinds of decision rules: one for which the decision regions are convex polyhedra (a piecewise linear decision rule), and one for which the decision regions are rectangular parallelpipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89 percent for the photomicrographs, 82 percent for the aerial photographic imagery, and 83 percent for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications."}
{"_id": "6513888c5ef473bdbb3167c7b52f0985be071f7a", "title": "Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression", "text": "A three-layered neural network is described for transforming two-dimensional discrete signals into generalized nonorthogonal 2-D \u201cGabor\u201d representations for image analysis, segmentation, and compression. These transforms are conjoint spat iahpectral representations [lo], [15], which provide a complete image description in terms of locally windowed 2-D spectral coordinates embedded within global 2-D spatial coordinates. Because intrinsic redundancies within images a re extracted, the resulting image codes can be very compact. However, these conjoint transforms are inherently difficult to compute because t e elementary expansion functions a re not orthogonal. One o r t h o g o n k i n g approach developed for 1-D signals by Bastiaans [SI, based on biorthonormal expansions, is restricted by constraints on the conjoint sampling rates and invariance of the windowing function, as well as by the fact that the auxiliary orthogonalizing functions are nonlocal infinite series. In the present \u201cneural network\u201d approach, based upon interlaminar interactions involving two layers with fixed weights and one layer with adjustable weights, the network finds coefficients for complete conjoint 2-D Gabor transforms without these restrictive conditions. For arbitrary noncomplete transforms, in which the coefficients might be interpreted simply as signifying the presence of certain features in the image, the network finds optimal coefficients in the sense of minimal mean-squared-error in representing the image. I n one algebraically complete scheme permitting exact reconstruction, the network finds expansion coefficients that reduce entropy from 7.57 in the pixel representation to 2.55 in the complete 2-D Gabor transform. In \u201cwavelet\u201d expansions based on a biologically inspired log-polar ensemble of dilations, rotations, and translations of a single underlying 2-D Gabor wavelet template, image compression is illustrated with ratios up to 20: 1. Also demonstrated is image segmentation based on the clustering of coefficients in the complete 2-D Gabor transform. This coefficient-finding network for implementing useful nonorthogonal image transforms may also have neuroscientific relevance, because the network layers with fixed weights use empirical 2-D receptive field profiles obtained from orientation-selective neurons in cat visual cortex as the weighting functions, and the resulting transform mimics the biological visual strategy of embedding angular and spectral analysis within global spatial coordinates."}
{"_id": "ab0e9c43af9f8ba3dcf77e1a21b9ed21abd82288", "title": "ReTiCaM: Real-time Human Performance Capture from Monocular Video", "text": "We present the first real-time human performance capture approach that reconstructs dense, space-time coherent deforming geometry of entire humans in general everyday clothing from just a single RGB video. We propose a novel two-stage analysis-by-synthesis optimization whose formulation and implementation are designed for high performance. In the first stage, a skinned templatemodel is jointly fitted to background subtracted input video, 2D and 3D skeleton joint positions found using a deep neural network, and a set of sparse facial landmark detections. In the second stage, dense non-rigid 3D deformations of skin and even loose apparel are captured based on a novel real-time capable algorithm for non-rigid tracking using dense photometric and silhouette constraints. Our novel energy formulation leverages automatically identified material regions on the template to model the differing non-rigid deformation behavior of skin and apparel. The two resulting nonlinear optimization problems per-frame are solved with specially-tailored data-parallel Gauss-Newton solvers. In order to achieve real-time performance of over 25Hz, we design a pipelined parallel architecture using the CPU and two commodity GPUs. Our method is the first real-time monocular approach for full-body performance capture. Our method yields comparable accuracy with off-line performance capture techniques, while being orders of magnitude faster."}
{"_id": "3ec569ba6c3f377d94a9fc79e6656d27d099430c", "title": "Comorbidity of anxiety and unipolar mood disorders.", "text": "Research on relationships between anxiety and depression has proceeded at a rapid pace since the 1980s. The similarities and differences between these two conditions, as well as many of the important features of the comorbidity of these disorders, are well understood. The genotypic structure of anxiety and depression is also fairly well documented. Generalized anxiety and major depression share a common genetic diathesis, but the anxiety disorders themselves are genetically hetergeneous. Sophisticated phenotypic models have also emerged, with data converging on an integrative hierarchical model of mood and anxiety disorders in which each individual syndrome contains both a common and a unique component. Finally, considerable progress has been made in understanding cognitive aspects of these disorders. This work has focused on both the cognitive content of anxiety and depression and on the effects that anxiety and depression have on information processing for mood-congruent material."}
{"_id": "752d4887c68149d54f93206e0d20bc5359ccc4d2", "title": "The neuropsychopharmacology of fronto-executive function: monoaminergic modulation.", "text": "We review the modulatory effects of the catecholamine neurotransmitters noradrenaline and dopamine on prefrontal cortical function. The effects of pharmacologic manipulations of these systems, sometimes in comparison with the indoleamine serotonin (5-HT), on performance on a variety of tasks that tap working memory, attentional-set formation and shifting, reversal learning, and response inhibition are compared in rodents, nonhuman primates, and humans using, in a behavioral context, several techniques ranging from microiontophoresis and single-cell electrophysiological recording to pharmacologic functional magnetic resonance imaging. Dissociable effects of drugs and neurotoxins affecting these monoamine systems suggest new ways of conceptualizing state-dependent fronto-executive functions, with implications for understanding the molecular genetic basis of mental illness and its treatment."}
{"_id": "82cfd6306399e0e8b96ce6a9f3d4cb55a783984b", "title": "Vicious and virtuous cycles in ERP implementation: a case study of interrelations between critical success factors", "text": "\u2022 A submitted manuscript is the author's version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. \u2022 The final author version and the galley proof are versions of the publication after peer review. \u2022 The final published version features the final layout of the paper including the volume, issue and page numbers."}
{"_id": "3164ff63c7fd8f062e0d3c487059630def97955c", "title": "Nivolumab in previously untreated melanoma without BRAF mutation.", "text": "BACKGROUND\nNivolumab was associated with higher rates of objective response than chemotherapy in a phase 3 study involving patients with ipilimumab-refractory metastatic melanoma. The use of nivolumab in previously untreated patients with advanced melanoma has not been tested in a phase 3 controlled study.\n\n\nMETHODS\nWe randomly assigned 418 previously untreated patients who had metastatic melanoma without a BRAF mutation to receive nivolumab (at a dose of 3 mg per kilogram of body weight every 2 weeks and dacarbazine-matched placebo every 3 weeks) or dacarbazine (at a dose of 1000 mg per square meter of body-surface area every 3 weeks and nivolumab-matched placebo every 2 weeks). The primary end point was overall survival.\n\n\nRESULTS\nAt 1 year, the overall rate of survival was 72.9% (95% confidence interval [CI], 65.5 to 78.9) in the nivolumab group, as compared with 42.1% (95% CI, 33.0 to 50.9) in the dacarbazine group (hazard ratio for death, 0.42; 99.79% CI, 0.25 to 0.73; P<0.001). The median progression-free survival was 5.1 months in the nivolumab group versus 2.2 months in the dacarbazine group (hazard ratio for death or progression of disease, 0.43; 95% CI, 0.34 to 0.56; P<0.001). The objective response rate was 40.0% (95% CI, 33.3 to 47.0) in the nivolumab group versus 13.9% (95% CI, 9.5 to 19.4) in the dacarbazine group (odds ratio, 4.06; P<0.001). The survival benefit with nivolumab versus dacarbazine was observed across prespecified subgroups, including subgroups defined by status regarding the programmed death ligand 1 (PD-L1). Common adverse events associated with nivolumab included fatigue, pruritus, and nausea. Drug-related adverse events of grade 3 or 4 occurred in 11.7% of the patients treated with nivolumab and 17.6% of those treated with dacarbazine.\n\n\nCONCLUSIONS\nNivolumab was associated with significant improvements in overall survival and progression-free survival, as compared with dacarbazine, among previously untreated patients who had metastatic melanoma without a BRAF mutation. (Funded by Bristol-Myers Squibb; CheckMate 066 ClinicalTrials.gov number, NCT01721772.)."}
{"_id": "7238c5082e2d02a1bbcb4b4503e787a271c66379", "title": "Marginal improvement of AUFLS using ROCOF", "text": "This paper proposes a scheme that integrates rate of change of frequency (ROCOF) and under frequency (UF) elements to design more effective load shedding response. A new load block activation algorithm is also introduced that improves the overall load shedding (LS) performance. The combination of ROCOF and UF criteria is used to overcome the LS block discrimination problem and to provide a flexible and decentralised solution to mitigation. The proposed method is verified through simulations on the IEEE 39-bus test system."}
{"_id": "c6f61f344c919493886bf67d0e64e0242ae83547", "title": "Size matters: word count as a measure of quality on wikipedia", "text": "Wikipedia, \"the free encyclopedia\", now contains over two million English articles, and is widely regarded as a high-quality, authoritative encyclopedia. Some Wikipedia articles, however, are of questionable quality, and it is not always apparent to the visitor which articles are good and which are bad. We propose a simple metric -- word count -- for measuring article quality. In spite of its striking simplicity, we show that this metric significantly outperforms the more complex methods described in related work."}
{"_id": "3661933b3173af59a20adbf129c0e3efa42c8ae4", "title": "OPRM1 and EGFR contribute to skin pigmentation differences between Indigenous Americans and Europeans", "text": "Contemporary variation in skin pigmentation is the result of hundreds of thousands years of human evolution in new and changing environments. Previous studies have identified several genes involved in skin pigmentation differences among African, Asian, and European populations. However, none have examined skin pigmentation variation among Indigenous American populations, creating a critical gap in our understanding of skin pigmentation variation. This study investigates signatures of selection at 76 pigmentation candidate genes that may contribute to skin pigmentation differences between Indigenous Americans and Europeans. Analysis was performed on two samples of Indigenous Americans genotyped on genome-wide SNP arrays. Using four tests for natural selection\u2014locus-specific branch length (LSBL), ratio of heterozygosities (lnRH), Tajima\u2019s D difference, and extended haplotype homozygosity (EHH)\u2014we identified 14 selection-nominated candidate genes (SNCGs). SNPs in each of the SNCGs were tested for association with skin pigmentation in 515 admixed Indigenous American and European individuals from regions of the Americas with high ground-level ultraviolet radiation. In addition to SLC24A5 and SLC45A2, genes previously associated with European/non-European differences in skin pigmentation, OPRM1 and EGFR were associated with variation in skin pigmentation in New World populations for the first time."}
{"_id": "9e2bc7c8dba23d594f22ccdd73d42f5aafbc5060", "title": "Non-Foster impedance matching of electrically small antennas", "text": "When the size of an antenna is electrically small, the antenna is neither efficient nor a good radiator because most of the input power is stored in the reactive near-field region and little power is radiated in the far-field region. As demonstrated in [1]- [2], the radiation quality factor of small antennas is definitely high. In other words, the input impedance of small antennas is considerably reactive. To reduce the radiation quality factor in the whole or partial frequency range of interest, it is important to increase the radiation resistance and/or reduce the reactance of the antenna. Hence, it is necessary to modify the antenna to reduce the reactance of the antenna and/or add impedance matching networks (MNs) to maximize the transfer of power from a resistive source to the highly reactive antenna."}
{"_id": "36bcaabc15e812df931a8dd8f37c7cfcb0a07d54", "title": "A restart CMA evolution strategy with increasing population size", "text": "In this paper we introduce a restart-CMA-evolution strategy, where the population size is increased for each restart (IPOP). By increasing the population size the search characteristic becomes more global after each restart. The IPOP-CMA-ES is evaluated on the test suit of 25 functions designed for the special session on real-parameter optimization of CEC 2005. Its performance is compared to a local restart strategy with constant small population size. On unimodal functions the performance is similar. On multi-modal functions the local restart strategy significantly outperforms IPOP in 4 test cases whereas IPOP performs significantly better in 29 out of 60 tested cases."}
{"_id": "1dd26846e8b42e2da92b5977f8a6050f2f92f388", "title": "Robust Curb Detection with Fusion of 3D-Lidar and Camera Data", "text": "Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes."}
{"_id": "ba0164fe77d37786eca4cfe1a6fbc020943c91a2", "title": "Successful lean implementation: Organizational culture and soft lean practices", "text": "Lean management (LM) is a managerial approach for improving processes based on a complex system of interrelated socio-technical practices. Recently, debate has centered on the role of organizational culture (OC) in LM. This paper aims to contribute to this debate by examining whether plants that successfully implement LM are characterized by a specific OC profile and extensively adopt soft LM practices. Data were analyzed from the High Performance Manufacturing (HPM) project dataset using a multi-group approach. The results revealed that a specific OC profile characterizes successful lean plants; in particular, when compared to unsuccessful lean plants, they show a higher institutional collectivism, future orientation, a humane orientation, and a lower level of assertiveness. While a high level of institutional collectivism, future orientation, and humane orientation are common features of high performers in general, a low level of assertiveness is typical only of successful lean plants. In addition, successful lean plants use soft LM practices more extensively than unsuccessful lean plants (i.e., lean practices concerning people and relations, such as small group problem solving, employees\u2019 training to perform multiple tasks, supplier partnerships, customer involvement, and continuous improvement), while they do not differ significantly in terms of hard LM practices (i.e., lean technical and analytical tools). For managers, the results indicate that, in order to implement LM successfully, it is fundamental to go beyond LM technicalities by adopting soft practices and nurturing the development of an appropriate OC profile."}
{"_id": "c25be11ce09e711411863f503ab588ba15ec73c0", "title": "Narcissism and Romantic Attraction", "text": "A model of narcissism and romantic attraction predicts that narcissists will be attracted to admiring individuals and highly positive individuals and relatively less attracted to individuals who offer the potential for emotional intimacy. Five studies supported this model. Narcissists, compared with nonnarcissists, preferred more self-oriented (i.e., highly positive) and less other-oriented (i.e., caring) qualities in an ideal romantic partner (Study 1). Narcissists were also relatively more attracted to admiring and highly positive hypothetical targets and less attracted to caring targets (Studies 2 and 3). Indeed, narcissists displayed a preference for highly positive-noncaring targets compared with caring but not highly positive targets (Study 4). Finally, mediational analyses demonstrated that narcissists' romantic attraction is, in part, the result of a strategy for enhancing self-esteem (Study 5)."}
{"_id": "146ec475964d7c61d35b2e6a0eb6de03258edac2", "title": "Creating a Labeled Dataset for Medical Misinformation in Health Forums", "text": "The dissemination of medical misinformation online presents a challenge to human health. Machine learning techniques provide a unique opportunity for decreasing the cognitive load associated with deciding upon whether any given user comment is likely to contain misinformation, but a paucity of labeled data of medical misinformation makes supervised approaches a challenge. In order to ameliorate this condition, we present a new labeled dataset of misinformative and non-misinformative comments developed over posted questions and comments on a health discussion forum. This required extraction of candidate misinformative entries from the corpus using information retrieval techniques, development of a codex and labeling strategy for the dataset, and the creation of features for use in machine learning tasks. By identifying the nine most descriptive features with regard to classification as misinformative or non-misinformative through the use of Recursive Feature Elimination, we achieved a classification accuracy of 90.1%, where the dataset is comprised 85.8% of non-misinformative comments. In our opinion, this dataset and analysis will aid the machine learning community in the development of an online misinformation classification system over user-generated content such as medical forum posts."}
{"_id": "de8335a834362c50de9075ccebc20605aa326545", "title": "Evaluating visual analytics with eye tracking", "text": "The application of eye tracking for the evaluation of humans' viewing behavior is a common approach in psychological research. So far, the use of this technique for the evaluation of visual analytics and visualization is less prominent. We investigate recent scientific publications from the main visualization and visual analytics conferences and journals that include an evaluation by eye tracking. Furthermore, we provide an overview of evaluation goals that can be achieved by eye tracking and state-of-the-art analysis techniques for eye tracking data. Ideally, visual analytics leads to a mixed-initiative cognitive system where the mechanism of distribution is the interaction of the user with visualization environments. Therefore, we also include a discussion of cognitive approaches and models to include the user in the evaluation process. Based on our review of the current use of eye tracking evaluation in our field and the cognitive theory, we propose directions of future research on evaluation methodology, leading to the grand challenge of developing an evaluation approach to the mixed-initiative cognitive system of visual analytics."}
{"_id": "f058c2376ba396e062cb1b9f5d6f476fd6515927", "title": "Game theoretic approach on Real-time decision making for IoT-based traffic light control", "text": "1Department of Computer Engineering, Chung-Ang University, Seoul, Korea 2Department of Computer Science, Universidad Aut\u00f4noma de Madrid, Spain Correspondence Jai E. Jung, Department of Computer Engineering, Chung-Ang University, Korea. Email: j3ung@cau.ac.kr David Camacho, Department of Computer Science, Universidad Autonoma de Madrid, Spain. Email: david.camacho@uam.es Funding information Chung-Ang University Summary Smart traffic light control at intersections is 1 of the major issues in Intelligent Transportation System. In this paper, on the basis of the new emerging technologies of Internet of Things, we introduce a new approach for smart traffic light control at intersection. In particular, we firstly propose a connected intersection system where every objects such as vehicles, sensors, and traffic lights will be connected and sharing information to one another. By this way, the controller is able to collect effectively and mobility traffic flow at intersection in real-time. Secondly, we propose the optimization algorithms for traffic lights by applying algorithmic game theory. Specially, 2 game models (which are Cournot Model and Stackelberg Model) are proposed to deal with difference scenarios of traffic flow. In this regard, based on the density of vehicles, controller will make real-time decisions for the time durations of traffic lights to optimize traffic flow. To evaluate our approach, we have used Netlogo simulator, an agent-based modeling environment for designing and implementing a simple working traffic. The simulation results shows that our approach achieves potential performance with various situations of traffic flow."}
{"_id": "ec375191617e58e0592a8485333753974f9c741a", "title": "Pulsed voltage converter with bipolar output voltages up to 10 kV for Dielectric Barrier Discharge", "text": "For pulsed power applications special pulsed voltage converters are needed. This paper presents an approach for a pulsed power converter, which generates bipolar output voltages up to 10 kV with extremely fast voltage slopes and high repetition rates. The topology is based on a H-bridge with a 10 kV dc-link. The output voltage and current of the pulsed voltage converter are adapted for the operation with Dielectric Barrier Discharge. To avoid the use of spark gaps due to their limited lifetime and thus to the lifetime of the converter, series stacked MOSFETs are used to realize a switch with a high blocking voltage. A balancing network for the series stacked MOSFETs is introduced as well as an adequate gate drive circuit. A matching method for the capacitive load is described, to achieve a maximum voltage slope at this capacitive load. To validate the theoretical considerations a prototype and measurements are presented."}
{"_id": "3f11994f22bbea0997f942032bd8c0a6ab3f2ec0", "title": "Odor Recognition in Multiple E-Nose Systems With Cross-Domain Discriminative Subspace Learning", "text": "In this paper, we propose an odor recognition framework for multiple electronic noses (E-noses), machine olfaction odor perception systems. Straight to the point, the proposed transferring odor recognition model is called cross-domain discriminative subspace learning (CDSL). General odor recognition problems with E-nose are single domain oriented, that is, recognition algorithms are often modeled and tested on the same one domain data set (i.e., from only one E-nose system). Different from that, we focus on a more realistic scenario: the recognition model is trained on a prepared source domain data set from a master E-nose system ${A}$ , but tested on another target domain data set from a slave system ${B}$ or ${C}$ with the same type of the master system ${A}$ . The internal device parameter variance between master and slave systems often results in data distribution discrepancy between source domain and target domain, such that single-domain-based odor recognition model may not be adapted to another domain. Therefore, we propose domain-adaptation-based odor recognition for addressing the realistic recognition scenario across systems. Specifically, the proposed CDSL method consists of three merits: 1) an intraclass scatter minimization- and an interclass scatter maximization-based discriminative subspace learning is solved on source domain; 2) a data fidelity and preservation constraint of the subspace is imposed on target domain without distortion; and 3) a minipatch feature weighted domain distance is minimized for closely connecting the source and target domains. Experiments and comparisons on odor recognition tasks in multiple E-noses demonstrate the efficiency of the proposed method."}
{"_id": "7fe690f22bb04949c1b46fb21710e28e5bd1157c", "title": "Real Time Drowsy Driver Identification Using Eye Blink Detection", "text": "HUMAN COMPUTER INTERFACE (HCI) systems are designed for use in assisting people in various aspects. Driving support systems such as navigating systems are getting commoner by the day. The capability of driving support systems to detect the level of driver\u2019s alertness is very important in ensuring road safety. By observation of blink pattern and eye movements, driver fatigue can be detected early enough to prevent collisions caused by drowsiness. The analysis of face image are widely used in security systems, face recognition, criminal focusing etc. In this specific study a non recursive system have been designed to detect shutting of eyes of the person driving an automobile. Real time detection of driver\u2019s eyes is processed using image processing in MATLAB to detect whether the eye remains closed more than the fixed duration thus indicating condition of fatigue and raise an alarm which could prevent a collision. The driving support systems have been found lacking in detecting the influence of drug or alcohol causing great degree of risks to the commuters. This study has found that eye blink patterns are starkly different for persons under the influence of drugs and can be easily detected by the system designed by us."}
{"_id": "ab614b5712d41433e6341fd0eb465258f14d1f23", "title": "Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks", "text": "Recurrent neural network (RNN) models are widely used for processing sequential data governed by a latent tree structure. Previous work shows that RNN models (especially Long Short-Term Memory (LSTM) based models) could learn to exploit the underlying tree structure. However, its performance consistently lags behind that of tree-based models. This work proposes a new inductive bias Ordered Neurons, which enforces an order of updating frequencies between hidden state neurons. We show that the ordered neurons could explicitly integrate the latent tree structure into recurrent models. To this end, we propose a new RNN unit: ON-LSTM, which achieve good performances on four different tasks: language modeling, unsupervised parsing, targeted syntactic evaluation, and logical inference1."}
{"_id": "899886a4dab243958c343ba2925861a2840190bd", "title": "The Pendubot: a mechatronic system for control research and education", "text": "1 In this paper we describe the Pendubot, a mechatronic device for use in control engineering education and for research in nonlinear control and robotics. This device is a two{link planar robot with an actuator at the shoulder but no actuator at the elbow. With this system , a number of fundamental concepts in nonlinear dynamics and control theory may be illustrated. The pendubot complements previous mechatronic systems, such as the Acrobot 3] and the inverted pendulum of Furuta 4]."}
{"_id": "a9f9a4dc25479e550ce1e0ddcbaf00743ccafc29", "title": "Extensional Versus Intuitive Reasoning : The Conjunction Fallacy in Probability Judgment", "text": "Perhaps the simplest and the most basic qualitative law of probability is the conjunction rule: The probability of a conjunction, P(A&B), cannot exceed the probabilities of its constituents, P(A) and .P(B), because the extension (or the possibility set) of the conjunction is included in the extension of its constituents. Judgments under uncertainty, however, are often mediated by intuitive heuristics that are not bound by the conjunction rule. A conjunction can be more representative than one of its constituents, and instances of a specific category can be easier to imagine or to retrieve than instances of a more inclusive category. The representativeness and availability heuristics therefore can make a conjunction appear more probable than one of its constituents. This phenomenon is demonstrated in a variety of contexts including estimation of word frequency, personality judgment, medical prognosis, decision under risk, suspicion of criminal acts, and political forecasting. Systematic violations of the conjunction rule are observed in judgments of lay people and of experts in both between-subjects and within-subjects comparisons. Alternative interpretations of the conjunction fallacy are discussed and attempts to combat it are explored."}
{"_id": "2f983533f6dd084b0851652b5991fd6464706ff4", "title": "An Achilles Heel in Signature-Based IDS : Squealing False Positives in SNORT", "text": "We report a vulnerability to network signature-based IDS which we have tested using Snort and we call \u201cSquealing\u201d. This vulnerability has significant implications since it can easily be generalized to any IDS. The vulnerability of signature-based IDS to high false positive rates has been welldocumented but we go further to show (at a high level) how packets can be crafted to match attack signatures such that a alarms on a target IDS can be conditioned or disabled and then exploited. This is the first academic treatment of this vulnerability that has already been reported to the CERT Coordination Center and the National Infrastructure Protection Center. Independently, other tools based on \u201csquealing\u201d are poised to appear that, while validating our ideas, also gives cause for concern. keywords: squealing, false positive, intrusion detection, IDS, signature-based, misuse behavior, network intrusion detection, snort"}
{"_id": "363d109c3f00026f9ef904dd8cc3c935ee463b65", "title": "Snort: Lightweight Intrusion Detection for Networks", "text": "Network intrusion detection systems (NIDS) are an important part of any network security architecture. They provide a layer of defense which monitors network traffic for predefined suspicious activity or patterns, and alert system administrators when potential hostile traffic is detected. Commercial NIDS have many differences, but Information Systems departments must face the commonalities that they share such as significant system footprint, complex deployment and high monetary cost. Snort was designed to address these issues."}
{"_id": "5dbb8f63e9ac926005037debc5496e9949a3885f", "title": "Evaluating intrusion detection systems: the 1998 DARPA off-line intrusion detection evaluation", "text": "A intrusion detection evaluation test bed was developed which generated normal traffic similar to that on a government site containing 100\u2019s of users on 1000\u2019s of hosts. More than 300 instances of 38 different automated attacks were launched against victim UNIX hosts in seven weeks of training data and two weeks of test data. Six research groups participated in a blind evaluation and results were analyzed for probe, denialof-service (DoS), remote-to-local (R2L), and user to root (U2R) attacks. The best systems detected old attacks included in the training data, at moderate detection rates ranging from 63% to 93% at a false alarm rate of 10 false alarms per day. Detection rates were much worse for new and novel R2L and DoS attacks included only in the test data. The best systems failed to detect roughly half these new attacks which included damaging access to root-level privileges by remote users. These results suggest that further research should focus on developing techniques to find new attacks instead of extending existing rule-based"}
{"_id": "8210f25d033de427cfd2e385bc9dd8f9fdca3cfc", "title": "Data Preparation for Mining World Wide Web Browsing Patterns", "text": "The World Wide Web (WWW) continues to grow at an astounding rate in both the sheer volume of traffic and the size and complexity of Web sites. The complexity of tasks such as Web site design, Web server design, and of simply navigating through a Web site have increased along with this growth. An important input to these design tasks is the analysis of how a Web site is being used. Usage analysis includes straightforward statistics, such as page access frequency, as well as more sophisticated forms of analysis, such as finding the common traversal paths through a Web site. Web Usage Mining is the application of data mining techniques to usage logs of large Web data repositories in order to produce results that can be used in the design tasks mentioned above. However, there are several preprocessing tasks that must be performed prior to applying data mining algorithms to the data collected from server logs. This paper presents several data preparation techniques in order to identify unique users and user sessions. Also, a method to divide user sessions into semantically meaningful transactions is defined and successfully tested against two other methods. Transactions identified by the proposed methods are used to discover association rules from real world data using the WEBMINER system [15]."}
{"_id": "55ce99ff2cb04eaed460b8a8ee5c3fd4821e0e0f", "title": "Differentially Private Stochastic Gradient Descent for in-RDBMS Analytics", "text": "This paper studies differential privacy for stochastic gradient descent (SGD) in an in-RDBMS system. While significant progress has been made separately on in-RDBMS SGD and private SGD, none of the major in-RDBMS machine learning frameworks have incorporated differentially private SGD. There are two inter-related issues for this disconnect between research and practice: (1) low model accuracy due to added noise to guarantee privacy, and (2) high development and runtime overhead of the private algorithms. This paper takes a first step to remedy this disconnect and proposes a private SGD algorithm to address both issues in an integrated manner. In contrast to the white-box approach adopted by previous algorithms, we revisit and use the classical technique of output perturbation. While using output perturbation trivially addresses (2), it gives rise to challenges in addressing (1). We address this challenge by providing a novel analysis of the L2-sensitivity of SGD, which allows, under the same privacy guarantees, better convergence of SGD when only a constant number of passes can be made over the data. We then integrate this algorithm, along with the state-of-the-art differentially private SGD, into Bismarck, an inRDBMS analytics system. Extensive experiments demonstrate that our algorithm can be easily integrated, incurs virtually no overhead, scales well, and most importantly, for a number of datasets yields substantially better (up to 4X) test accuracy than the state-of-the-art algorithms."}
{"_id": "db3612fedd16ec37b30b97747ce0d8536c7ce5bd", "title": "Developing Leadership Character in Business Programs", "text": "Our objective is to encourage and enable leadership character development in business education. Building on a model of character strengths and their link to virtues, values, and ethical decision making, we describe an approach to develop leadership character at the individual, group, and organizational levels. We contrast this approach to existing practices that have focused on teaching functional content over character and address how business educators can enable leadership character development through their own behaviors, relationships, and structures. Most important, we provide concrete suggestions on how to integrate a focus on character development into existing business programs, both in terms of individual courses as well as the overall curriculum. We highlight that the development of leadership character must extend beyond student engagement in a course since \u201cit takes a village\u201d to develop character. ........................................................................................................................................................................"}
{"_id": "d2fbc1a8bcc7c252a70c524cc96c14aa807c2345", "title": "Approximating displacement with the body velocity integral", "text": "In this paper, we present a technique for approximating the net displacement of a locomoting system over a gait without directly integrating its equations of motion. The approximation is based on a volume integral, which, among other benefits, is more open to optimization by algorithm or inspection than is the full displacement integral. Specifically, we develop the concept of a body velocity integral (BVI), which is computable over a gait as a volume integral via Stokes\u2019s theorem. We then demonstrate that, given an appropriate choice of coordinates, the BVI for a gait approximates the displacement of the system over that gait. This consideration of coordinate choice is a new approach to locomotion problems, and provides significantly improved results over past attempts to apply Stokes\u2019s theorem to gait analysis."}
{"_id": "cd3f32418cbacc65357f7436a2d4186c634f024a", "title": "On Selecting the Largest Element in Spite of Erroneous Information", "text": ""}
{"_id": "9df2cbb10394319d0466da3e914193876efdaf91", "title": "Octree-based decimation of marching cubes surfaces", "text": "The marching cubes (MC) algorithm is a method for generating isosurfaces. It also generates an excessively large number of triangles to represent an isosurface; this increases the rendering time. This paper presents a decimation method to reduce the number of triangles generated. Decimation is carried out before creating a large number of triangles. Four major steps comprise the algorithm: surface tracking, merging, crack patching and triangulation. Surface tracking is an enhanced implementation of the MC algorithm. Starting from a seed point, the surface tracker visits only those cells likely to compose part of the desired isosurface. The cells making up the extracted surface are stored in an octree that is further processed. A bottom-up approach is taken in merging the cells containing a relatively flat approximating surface. The finer surface details are maintained. Cells are merged as long as the error due to such an operation is within a user-specified error parameter, or a cell acquires more than one connected surface component in it. A crack patching method is described that forces edges of smaller cells to lie along those of the larger neighboring cells. The overall saving in the number of triangles depends both on the specified error value and the nature of the data. Use of the hierarchical octree data structure also presents the potential of incremental representation of surfaces. We can generate a highly smoothed surface representation which can be progressively refined as the user-specified error value is decreased."}
{"_id": "363490159a0e757d7d80cb683cee4218afdf4878", "title": "Talking to machines (statistically speaking)", "text": "Statistical methods have long been the dominant approach in speech recognition and probabilistic modelling in ASR is now a mature technology. The use of statistical methods in other areas of spoken dialogue is however more recent and rather less mature. This paper reviews spoken dialogue systems from a statistical modelling perspective. The complete system is first presented as a partially observable Markov decision process. The various sub-components are then exposed by introducing appropriate intermediate variables. Samples of existing work are reviewed within this framework, including dialogue control and optimisation, semantic interpretation, goal detection, natural language generation and synthesis."}
{"_id": "3504a8bfb1a35ca115a4829a8afd7e417aba92ac", "title": "Mobile ECG measurement and analysis system using mobile phone as the base station", "text": "In this paper, we introduce an ECG measurement, analysis and transmission system which uses a mobile phone as a base station. The system is based on a small-sized mobile ECG recording device which sends measurement data wirelessly to the mobile phone. In the mobile phone the received data is analyzed and if abnormalities are found part of the measurement data is sent to a server for the use of medical personnel. The prototype of the system was made with a portable ECG monitor and Nokia 6681 mobile phone. The results show that modern smart phones are indeed capable for this kind of tasks. Thus, with their good networking and data processing capabilities, they are a potential part of the future wireless health care systems."}
{"_id": "5f76a3f006cb4af1394bc852ce77646bada39ee1", "title": "STANDARDISATION AND CLASSIFICATION OF ALERTS GENERATED BY INTRUSION DETECTION SYSTEMS", "text": "Intrusion detection systems are most popular de-fence mechanisms used to provide security to IT infrastructures. Organisation need best performance, so it uses multiple IDSs from different vendors. Different vendors are using different formats and protocols. Difficulty imposed by this is the generation of several false alarms. Major part of this work concentrates on the collection of alerts from different intrusion detection systems to represent them in IDMEF(Intrusion Detection Message Exchange Format) format. Alerts were collected from intrusion detection systems like snort, ossec, suricata etc. Later classification is attempted using machine learning technique, which helps to mitigate generation of false positives."}
{"_id": "e51662b2b2e1dfc113c931f23524178ae4bc82fc", "title": "3D Surface Reconstruction of Noisy Point Clouds Using Growing Neural Gas: 3D Object/Scene Reconstruction", "text": "With the advent of low-cost 3D sensors and 3D printers, scene and object 3D surface reconstruction has become an important research topic in the last years. In this work, we propose an automatic (unsupervised) method for 3D surface reconstruction from raw unorganized point clouds acquired using low-cost 3D sensors. We have modified the growing neural gas network, which is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation, to perform 3D surface reconstruction of different real-world objects and scenes. Some improvements have been made on the original algorithm considering colour and surface normal information of input data during the learning stage and creating complete triangular meshes instead of basic wire-frame representations. The proposed method is able to successfully create 3D faces online, whereas existing 3D reconstruction methods based on self-organizing maps required post-processing steps to close gaps and holes produced during the 3D reconstruction process. A set of quantitative and qualitative experiments were carried out to validate the proposed method. The method has been implemented and tested on real data, and has been found to be effective at reconstructing noisy point clouds obtained using low-cost 3D sensors."}
{"_id": "1254b7f7d7399add50b7ededd7c737db4239874c", "title": "Advanced PROPHET Routing in Delay Tolerant Network", "text": "To solve routing jitter problem in PROPHET in delay tolerant network, advanced PROPHET routing is proposed in this paper. Average delivery predictabilities are used in advanced PROPHET to avoid routing jitter. Furthermore, we evaluate it through simulations versus PROPHET routing protocol. The experimental results show there has higher average delivery rates and shorter average delays in advanced PROPHET. Thus, it is fair to say that advanced PROPHET gives better performance than PROPHET."}
{"_id": "14b061c4da9baa988f642eae43903dd5ea7ce3a3", "title": "Error control and concealment for video communication: a review", "text": "The problem of error control and concealment in video communication is becoming increasingly important because of the growing interest in video delivery over unreliable channels such as wireless networks and the Internet. This paper reviews the techniques that have been developed for error control and concealment in the past ten to fifteen years. These techniques are described in three categories according to the roles that the encoder and decoder play in the underlying approaches. Forward error concealment includes methods that add redundancy at the source end to enhance error resilience of the coded bit streams. Error concealment by postprocessing refers to operations at the decoder to recover the damaged areas based on characteristics of image and video signals. Finally, interactive error concealment covers techniques that are dependent on a dialog between the source and destination. Both current research activities and practice in international standards are covered."}
{"_id": "586f8952909eb72727eaa9365f62c3f36cd5a9aa", "title": "Regenerative braking strategy for electric vehicles", "text": "Regenerative braking is an effective approach for electric vehicles to extend their driving range. The control strategy of regenerative braking plays an important role in maintaining the vehicle's stability and recovering energy. In this paper, the main properties that have influence on brake energy regeneration are analyzed. Mathematical model of brake energy regenerating electric vehicles is established. By analyzing the charge and discharge characteristics of the battery and motor, a simple regenerative braking strategy is proposed. The strategy takes the braking torque required, the motor available braking torque, and the braking torque limit into account, and it can make the best use of the motor braking torque. Simulation results show higher energy regeneration compared to a parallel strategy when the proposed strategy is adopted."}
{"_id": "8688a74162a6b2ca759d94bd2a4f9b28db7fd5b4", "title": "Periodically Controlled Hybrid Systems Verifying A Controller for An Autonomous Vehicle", "text": "This paper introduces Periodically Controlled Hybrid Automata (PCHA) for describing a class of hybrid control systems. In a PCHA, control actions occur roughly periodically while internal and input actions, may occur in the interim changing the discrete-state or the setpoint. Based on periodicity and subtangential conditions, a new sufficient condition for verifying invariance of PCHAs is presented. This technique is used in verifying safety of the planner-controller subsystem of an autonomous ground vehicle, and in deriving geometric properties of planner generated paths that can be followed safely by the controller under environmental uncertainties."}
{"_id": "2f2f0f3f6def111907780d6580f6b0a7dfc9153c", "title": "Evaluation Metrics for Machine Reading Comprehension: Prerequisite Skills and Readability", "text": "Knowing the quality of reading comprehension (RC) datasets is important for the development of natural-language understanding systems. In this study, two classes of metrics were adopted for evaluating RC datasets: prerequisite skills and readability. We applied these classes to six existing datasets, including MCTest and SQuAD, and highlighted the characteristics of the datasets according to each metric and the correlation between the two classes. Our dataset analysis suggests that the readability of RC datasets does not directly affect the question difficulty and that it is possible to create an RC dataset that is easy to read but difficult to answer."}
{"_id": "0d4fef0ef83c6bad2e14fe4a4880fa153f550974", "title": "Neural Networks for Open Domain Targeted Sentiment", "text": "Open domain targeted sentiment is the joint information extraction task that finds target mentions together with the sentiment towards each mention from a text corpus. The task is typically modeled as a sequence labeling problem, and solved using state-of-the-art labelers such as CRF. We empirically study the effect of word embeddings and automatic feature combinations on the task by extending a CRF baseline using neural networks, which have demonstrated large potentials for sentiment analysis. Results show that the neural model can give better results by significantly increasing the recall. In addition, we propose a novel integration of neural and discrete features, which combines their relative advantages, leading to significantly higher results compared to both baselines."}
{"_id": "0999fcd42bc502742dbcb25bd41760ff10d15fb0", "title": "On Throughput Efficiency of Geographic Opportunistic Routing in Multihop Wireless Networks", "text": "Geographic opportunistic routing (GOR) is a new routing concept in multihop wireless networks. In stead of picking one node to forward a packet to, GOR forwards a packet to a set of candidate nodes and one node is selected dynamically as the actual forwarder based on the instantaneous wireless channel condition and node position and availability at the time of transmission. GOR takes advantages of the spatial diversity and broadcast nature of wireless communications and is an efficient mechanism to combat the unreliable links. The existing GOR schemes typically involve as many as available next-hop neighbors into the local opportunistic forwarding, and give the nodes closer to the destination higher relay priorities. In this paper, we focus on realizing GOR\u2019s potential in maximizing throughput. We start with an insightful analysis of various factors and their impact on the throughput of GOR, and propose a local metric named expected one-hop throughput (EOT) to balance the tradeoff between the benefit (i.e., packet advancement and transmission reliability) and the cost (i.e., medium time delay). We identify an upper bound of EOT and proof its concavity. Based on the EOT, we also propose a local candidate selection and prioritization algorithm. Simulation results validate our analysis and show that the metric EOT leads to both higher one-hop and path throughput than the corresponding pure GOR and geographic routing."}
{"_id": "75b8c0abfd45fd77d7a61da7d12bdf516e3139c7", "title": "Estimating types in binaries using predictive modeling", "text": "Reverse engineering is an important tool in mitigating vulnerabilities in binaries. As a lot of software is developed in object-oriented languages, reverse engineering of object-oriented code is of critical importance. One of the major hurdles in reverse engineering binaries compiled from object-oriented code is the use of dynamic dispatch. In the absence of debug information, any dynamic dispatch may seem to jump to many possible targets, posing a significant challenge to a reverse engineer trying to track the program flow. We present a novel technique that allows us to statically determine the likely targets of virtual function calls. Our technique uses object tracelets \u2013 statically constructed sequences of operations performed on an object \u2013 to capture potential runtime behaviors of the object. Our analysis automatically pre-labels some of the object tracelets by relying on instances where the type of an object is known. The resulting type-labeled tracelets are then used to train a statistical language model (SLM) for each type.We then use the resulting ensemble of SLMs over unlabeled tracelets to generate a ranking of their most likely types, from which we deduce the likely targets of dynamic dispatches.We have implemented our technique and evaluated it over real-world C++ binaries. Our evaluation shows that when there are multiple alternative targets, our approach can drastically reduce the number of targets that have to be considered by a reverse engineer."}
{"_id": "3b1189bcc031edb119ca0a94cc75080ff9814ca9", "title": "Deep Emotion: A Computational Model of Emotion Using Deep Neural Networks", "text": "Emotions are very important for human intelligence. For example, emotions are closely related to the appraisal of the internal bodily state and external stimuli. This helps us to respond quickly to the environment. Another important perspective in human intelligence is the role of emotions in decision-making. Moreover, the social aspect of emotions is also very important. Therefore, if the mechanism of emotions were elucidated, we could advance toward the essential understanding of our natural intelligence. In this study, a model of emotions is proposed to elucidate the mechanism of emotions through the computational model. Furthermore, from the viewpoint of partner robots, the model of emotions may help us to build robots that can have empathy for humans. To understand and sympathize with people\u2019s feelings, the robots need to have their own emotions. This may allow robots to be accepted in human society. The proposed model is implemented using deep neural networks consisting of three modules, which interact with each other. Simulation results reveal that the proposed model exhibits reasonable behavior as the basic mechanism of emotion."}
{"_id": "25ea0f831c214ee211ee55ffd31eab719799861c", "title": "Comorbidity of DSM-IV pathological gambling and other psychiatric disorders: results from the National Epidemiologic Survey on Alcohol and Related Conditions.", "text": "OBJECTIVE\nTo present nationally representative data on lifetime prevalence and comorbidity of pathological gambling with other psychiatric disorders and to evaluate sex differences in the strength of the comorbid associations.\n\n\nMETHOD\nData were derived from a large national sample of the United States. Some 43,093 household and group quarters residents age 18 years and older participated in the 2001-2002 survey. Prevalence and associations of lifetime pathological gambling and other lifetime psychiatric disorders are presented. The diagnostic interview was the National Institute on Alcohol Abuse and Alcoholism Alcohol Use Disorder and Associated Disabilities Interview Schedule-DSM-IV Version. Fifteen symptom items operationalized the 10 pathological gambling criteria.\n\n\nRESULTS\nThe lifetime prevalence rate of pathological gambling was 0.42%. Almost three quarters (73.2%) of pathological gamblers had an alcohol use disorder, 38.1% had a drug use disorder, 60.4% had nicotine dependence, 49.6% had a mood disorder, 41.3% had an anxiety disorder, and 60.8% had a personality disorder. A large majority of the associations between pathological gambling and substance use, mood, anxiety, and personality disorders were overwhelmingly positive and significant (p < .05), even after controlling for sociodemographic and socioeconomic characteristics. Male sex, black race, divorced/separated/widowed marital status, middle age, and living in the West and Midwest were associated with increased risk for pathological gambling. Further, associations between alcohol dependence, any drug use disorder, drug abuse, nicotine dependence, major depressive episode, and generalized anxiety disorder and pathological gambling were stronger among women than men (p > .05).\n\n\nCONCLUSION\nPathological gambling is highly comorbid with substance use, mood, anxiety, and personality disorders, suggesting that treatment for one condition should involve assessment and possible concomitant treatment for comorbid conditions."}
{"_id": "9201fe9244071f6c4d1bac1b612f2b6aa12ca18f", "title": "Content-based image retrieval by integrating color and texture features", "text": "Content-based image retrieval (CBIR) has been an active research topic in the last decade. Feature extraction and representation is one of the most important issues in the CBIR. In this paper, we propose a content-based image retrieval method based on an efficient integration of color and texture features. As its color features, pseudo-Zernike chromaticity distribution moments in opponent chromaticity space are used. As its texture features, rotation-invariant and scale-invariant image descriptor in steerable pyramid domain are adopted, which offers an efficient and flexible approximation of early processing in the human visual system. The integration of color and texture information provides a robust feature set for color image retrieval. Experimental results show that the proposed method yields higher retrieval accuracy than some conventional methods even though its feature vector dimension is not higher than those of the latter for different test DBs."}
{"_id": "10160d18917e9359f5b8222362a564040fc88692", "title": "Rfam 12.0: updates to the RNA families database", "text": "The Rfam database (available at http://rfam.xfam.org) is a collection of non-coding RNA families represented by manually curated sequence alignments, consensus secondary structures and annotation gathered from corresponding Wikipedia, taxonomy and ontology resources. In this article, we detail updates and improvements to the Rfam data and website for the Rfam 12.0 release. We describe the upgrade of our search pipeline to use Infernal 1.1 and demonstrate its improved homology detection ability by comparison with the previous version. The new pipeline is easier for users to apply to their own data sets, and we illustrate its ability to annotate RNAs in genomic and metagenomic data sets of various sizes. Rfam has been expanded to include 260 new families, including the well-studied large subunit ribosomal RNA family, and for the first time includes information on short sequence- and structure-based RNA motifs present within families."}
{"_id": "8db26a22942404bd435909a16bb3a50cd67b4318", "title": "Marginalized Denoising Autoencoders for Domain Adaptation", "text": "Stacked denoising autoencoders (SDAs) have been successfully used to learn new representations for domain adaptation. Recently, they have attained record accuracy on standard benchmark tasks of sentiment analysis across different text domains. SDAs learn robust data representations by reconstruction, recovering original features from data that are artificially corrupted with noise. In this paper, we propose marginalized SDA (mSDA) that addresses two crucial limitations of SDAs: high computational cost and lack of scalability to high-dimensional features. In contrast to SDAs, our approach of mSDA marginalizes noise and thus does not require stochastic gradient descent or other optimization algorithms to learn parameters \u2014 in fact, they are computed in closed-form. Consequently, mSDA, which can be implemented in only 20 lines of MATLAB, significantly speeds up SDAs by two orders of magnitude. Furthermore, the representations learnt by mSDA are as effective as the traditional SDAs, attaining almost identical accuracies in benchmark tasks."}
{"_id": "d3f7ff54b188b724597f143f57114b8eef067572", "title": "Multi-Task Learning for Mental Health using Social Media Text", "text": "We introduce initial groundwork for estimating suicide risk and mental health in a deep learning framework. By modeling multiple conditions, the system learns to make predictions about suicide risk and mental health at a low false positive rate. Conditions are modeled as tasks in a multitask learning (MTL) framework, with gender prediction as an additional auxiliary task. We demonstrate the effectiveness of multi-task learning by comparison to a well-tuned single-task baseline with the same number of parameters. Our best MTL model predicts potential suicide attempt, as well as the presence of atypical mental health, with AUC > 0.8. We also find additional large improvements using multi-task learning on mental health tasks with limited training data."}
{"_id": "00b69fcb15b6ddedd6a1b23a0e4ed3afc0b8ac49", "title": "Co-Training for Domain Adaptation", "text": "Domain adaptation algorithms seek to generalize a model trained in a source domain to a new target domain. In many practical cases, the source and target distributions can differ substantially, and in some cases crucial target features may not have support in the source domain. In this paper we introduce an algorithm that bridges the gap between source and target domains by slowly adding to the training set both the target features and instances in which the current algorithm is the most confident. Our algorithm is a variant of co-training [7], and we name it CODA (Co-training for domain adaptation). Unlike the original co-training work, we do not assume a particular feature split. Instead, for each iteration of cotraining, we formulate a single optimization problem which simultaneously learns a target predictor, a split of the feature space into views, and a subset of source and target features to include in the predictor. CODA significantly out-performs the state-of-the-art on the 12-domain benchmark data set of Blitzer et al. [4]. Indeed, over a wide range (65 of 84 comparisons) of target supervision CODA achieves the best performance."}
{"_id": "2254a9c8e0a3d753ce25d4049e063e0e9611f377", "title": "A Planar Magic-T Using Substrate Integrated Circuits Concept", "text": "In this letter, a slotline to substrate integrated waveguide transition is proposed for the development of substrate integrated circuits. The insertion loss of the back-to-back transition is less than 1 dB from 8.7 to 9.0 GHz. With this transition, a planar magic-T is studied and designed. Measured results indicate a very good performance of the fabricated magic-T is observed within the experimental frequency range of 8.4-9.4 GHz. The amplitude and phase imbalances are less than 0.2 dB and 1.5deg, respectively."}
{"_id": "170ec65f2281e0a691d794b8d1580a41117782bf", "title": "Automatic global vessel segmentation and catheter removal using local geometry information and vector field integration", "text": "Vessel enhancement and segmentation aim at (binary) per-pixel segmentation considering certain local features as probabilistic vessel indicators. We propose a new methodology to combine any local probability map with local directional vessel information. The resulting global vessel segmentation is represented as a set of discrete streamlines populating the vascular structures and providing additional connectivity and geometric shape information. The streamlines are computed by numerical integration of the directional vector field that is obtained from the eigenanalysis of the local Hessian indicating the local vessel direction. The streamline representation allows for sophisticated post-processing techniques using the additional information to refine the segmentation result with respect to the requirements of the particular application such as image registration. We propose different post-processing techniques for hierarchical segmentation, centerline extraction, and catheter removal to be used for X-ray angiograms. We further demonstrate how the global approach is able to significantly improve the segmentation compared to conventional local Hessian-based approaches."}
{"_id": "637058be1258f718ba0f7f8ff57a27d47d19c8eb", "title": "Control of VSC connected to the grid through LCL-filter to achieve balanced currents", "text": "Grid-connected voltage source converters (VSCs) are the heart of many applications with power quality concerns due to their reactive power controllability. However, the major drawback is their sensitivity to grid disturbances. Moreover, when VSCs are used in DG applications, voltage unbalance may be intolerant. The current protection may trip due to current unbalance or due to overcurrent. In this paper, a vector current controller for VSC connected to the grid through LCL-filter is developed with the main focus on producing symmetrical and balanced currents in case of unbalanced voltage dips. Implementing this controller helps the VSC system not to trip during voltage dips."}
{"_id": "5e54f1d05c94357df0af05a7bbc79eaf4bc61144", "title": "Marketing malpractice: the cause and the cure.", "text": "Ted Levitt used to tell his Harvard Business School students, \"People don't want a quarter-inch drill--they want a quarter-inch hole.\" But 35 years later, marketers are still thinking in terms of products and ever-finer demographic segments. The structure of a market, as seen from customers' point of view, is very simple. When people need to get a job done, they hire a product or service to do it for them. The marketer's task is to understand what jobs periodically arise in customers' lives for which they might hire products the company could make. One job, the \"I-need-to-send-this-from-here-to-there-with-perfect-certainty-as-fast-as-possible\"job, has existed practically forever. Federal Express designed a service to do precisely that--and do it wonderfully again and again. The FedEx brand began popping into people's minds whenever they needed to get that job done. Most of today's great brands--Crest, Starbucks, Kleenex, eBay, and Kodak, to name a few-started out as just this kind of purpose brand. When a purpose brand is extended to products that target different jobs, it becomes an endorser brand. But, over time, the power of an endorser brand will surely erode unless the company creates a new purpose brand for each new job, even as it leverages the endorser brand as an overall marker of quality. Different jobs demand different purpose brands. New growth markets are created when an innovating company designs a product and then positions its brand on a job for which no optimal product yet exists. In fact, companies that historically have segmented and measured markets by product categories generally find that when they instead segment by job, their market is much larger (and their current share much smaller) than they had thought. This is great news for smart companies hungry for growth."}
{"_id": "26a6b9ed436ebf5fc7b214af840d625685f89203", "title": "Cubrick : A Scalable Distributed MOLAP Database for Fast Analytics", "text": "This paper describes the architecture and design of Cubrick, a distributed multidimensional in-memory database that enables real-time data analysis of large dynamic datasets. Cubrick has a strictly multidimensional data model composed of dimensions, dimensional hierarchies and metrics, supporting sub-second MOLAP operations such as slice and dice, roll-up and drill-down over terabytes of data. All data stored in Cubrick is chunked in every dimension and stored within containers called bricks in an unordered and sparse fashion, providing high data ingestion ratios and indexed access through every dimension. In this paper, we describe details about Cubrick\u2019s internal data structures, distributed model, query execution engine and a few details about the current implementation. Finally, we present some experimental results found in a first Cubrick deployment inside Facebook."}
{"_id": "2d6a270c21cee7305aec08e61b11121467f25b2f", "title": "A comparative study of HTM and other neural network models for online sequence learning with streaming data", "text": "Online sequence learning from streaming data is one of the most challenging topics in machine learning. Neural network models represent promising candidates for sequence learning due to their ability to learn and recognize complex temporal patterns. In this paper, we present a comparative study of Hierarchical Temporal Memory (HTM), a neurally-inspired model, and other feedforward and recurrent artificial neural network models on both artificial and real-world sequence prediction algorithms. HTM and long-short term memory (LSTM) give the best prediction accuracy. HTM additionally demonstrates many other features that are desirable for real-world sequence learning, such as fast adaptation to changes in the data stream, robustness to sensor noise and fault tolerance. These features make HTM an ideal candidate for online sequence learning problems."}
{"_id": "7a736b7347fc5ea93c196ddfe0630ecddc17d324", "title": "Multirate Multimodal Video Captioning", "text": "Automatically describing videos with natural language is a crucial challenge of video understanding. Compared to images, videos have specific spatial-temporal structure and various modality information. In this paper, we propose a Multirate Multimodal Approach for video captioning. Considering that the speed of motion in videos varies constantly, we utilize a Multirate GRU to capture temporal structure of videos. It encodes video frames with different intervals and has a strong ability to deal with motion speed variance. As videos contain different modality cues, we design a particular multimodal fusion method. By incorporating visual, motion, and topic information together, we construct a well-designed video representation. Then the video representation is fed into a RNN-based language model for generating natural language descriptions. We evaluate our approach for video captioning on \"Microsoft Research - Video to Text\" (MSR-VTT), a large-scale video benchmark for video understanding. And our approach gets great performance on the 2nd MSR Video to Language Challenge."}
{"_id": "6b87864dd6846ea3167a3f1dcf8e28a1cbc85000", "title": "Animal-assisted therapy for persons with aphasia: A pilot study.", "text": "This study explored the effects and effectiveness of animal-assisted therapy (AAT) for persons with aphasia. Three men with aphasia from left-hemisphere strokes participated in this study. The men received one semester of traditional therapy followed by one semester of AAT. While both therapies were effective, in that each participant met his goals, no significant differences existed between test results following traditional speech-language therapy versus AAT. Results of a client-satisfaction questionnaire, however, indicated that each of the participants was more motivated, enjoyed the therapy sessions more, and felt that the atmosphere of the sessions was lighter and less stressed during AAT compared with traditional therapy."}
{"_id": "709f7a6b870cb07a4eab553adf6345b244913913", "title": "NoSQL Databases and Data Modeling Techniques for a Document-oriented NoSQL Database", "text": "NoSQL databases are an important component of Big Data for storing and retrieving large volumes of data. Traditional Relational Database Management Systems (RDBMS) use the ACID theorem for data consistency, whereas NoSQL Databases use a non-transactional approach called BASE. RDBMS scale vertically and NoSQL Databases can scale both horizontally (sharding) and vertically. Four types of NoSQL databases are Document-oriented, Key-Value Pairs, Column-oriented and Graph. Data modeling for Document-oriented databases is similar to data modeling for traditional RDBMS during the conceptual and logical modeling phases. However, for a physical data model, entities can be combined (denormalized) by using embedding. What was once called a foreign key in a traditional RDBMS is now called a reference in a Documentoriented NoSQL database."}
{"_id": "13c4ac1bd5511671a658efd8f3ceec572cc53b5f", "title": "Institutionally Distributed Deep Learning Networks", "text": "Deep learning has become a promising approach for automated medical diagnoses. When medical data samples are limited, collaboration among multiple institutions is necessary to achieve high algorithm performance. However, sharing patient data often has limitations due to technical, legal, or ethical concerns. In such cases, sharing a deep learning model is a more attractive alternative. The best method of performing such a task is unclear, however. In this study, we simulate the dissemination of learning deep learning network models across four institutions using various heuristics and compare the results with a deep learning model trained on centrally hosted patient data. The heuristics investigated include ensembling single institution models, single weight transfer, and cyclical weight transfer. We evaluated these approaches for image classification in three independent image collections (retinal fundus photos, mammography, and ImageNet). We find that cyclical weight transfer resulted in a performance (testing accuracy = 77.3%) that was closest to that of centrally hosted patient data (testing accuracy = 78.7%). We also found that there is an improvement in the performance of cyclical weight transfer heuristic with high frequency of weight transfer."}
{"_id": "d1f7ec3085b71f584ada50e817ef7c14337953e2", "title": "A Chebyshev-Response Step-Impedance Phase-Inverter Rat-Race Coupler Directly on Lossy Silicon Substrate and Its Gilbert Mixer Application", "text": "This paper focuses on the analysis and the design methodology of the step-impedance phase-inverter rat-race coupler on a silicon-based process. The issues of impedance limitation and bandwidth are discussed in detail. Our proposed concept utilizes a high silicon dielectric constant, phase-inverter structure, step-impedance technique, and Chebyshev response to make the rat-race coupler more compact (~ 64% reduction) and highly balanced over a wide operating bandwidth. Moreover, the inter-digital coplanar stripline used in the step-impedance section effectively reduces the characteristic impedance of the transmission line for large size shrinkage and insertion-loss reduction. The demonstrated step-impedance rat-race coupler directly on silicon substrate has 6- ~ 7-dB insertion loss from 5 to 15 GHz and small variations in amplitude/phase balance. Compared with our previous work, the proposed rat-race coupler achieves a 3-dB improvement in the insertion loss. Thus, a 0.13-\u03bcm CMOS Gilbert down-converter with a miniature phase-inverter rat-race coupler at the RF path for wideband single-to-differential signal conversion achieves a noise figure of 16 dB."}
{"_id": "033cc3784a60115d758a11a765e764b86aca336c", "title": "The Visual Hull Concept for Silhouette-Based Image Understanding", "text": "Many algorithms for both identifying and reconstructing a 3-D object are based on the 2-D silhouettes of the object. In general, identifying a nonconvex object using a silhouettebased approach implies neglecting some features of its surface as identification clues. The same features cannot be reconstructed by volume intersection techniques using multiple silhouettes of the object. This paper addresses the problem of finding which parts of a nonconvex object are relevant for silhouette-based image understanding. For this purpose, the geometric concept of visual hull of a 3-D object is introduced. The visual hull of a 3-D object S is the closest approximation of S that can be obtained with the volume intersection approach. An equivalent statement, relative to object identification, is that the visual hull of S is the maximal object silhouette-equivalent to S, i.e., which can be substituted for S without affecting any silhouette. Only the parts of the surface of S that also lie on the surface of the visual hull can be reconstructed or identified using silhouette-based algorithms. The visual hull of an object depends not only on the object itself but also on the region allowed to the viewpoint. Two main viewing regions can be considered, resulting in the external and internal visual hull. In the former case the viewing region is related to the convex hull of S, in the latter it is bounded by S itself. The internal visual hull also admits an interpretation not related to silhouettes: the features of the surface of S that is not coincident with the surface of the internal visual hull cannot be observed from any viewpoint lying outside the convex hull. After a general discussion of the visual hull and its properties, algorithms for computing the internal and external visual hulls of 2-D objects and 3-D planar face objects are presented and their complexity analyzed. In general, the visual hull of a 3-D planar face object turns out to be bounded by planar and curved patches. A precise statement of the concept of visual hull appears to be novel, as is the problem of its computation."}
{"_id": "a98e5856091999922ec7150efefe25bbfeb2aecf", "title": "Big data, smart cities and city planning", "text": "I define big data with respect to its size but pay particular attention to the fact that the data I am referring to is urban data, that is, data for cities that are invariably tagged to space and time. I argue that this sort of data are largely being streamed from sensors, and this represents a sea change in the kinds of data that we have about what happens where and when in cities. I describe how the growth of big data is shifting the emphasis from longer term strategic planning to short-term thinking about how cities function and can be managed, although with the possibility that over much longer periods of time, this kind of big data will become a source for information about every time horizon. By way of conclusion, I illustrate the need for new theory and analysis with respect to 6 months of smart travel card data of individual trips on Greater London's public transport systems."}
{"_id": "17650831f1900b849fd1914d02337e1d006aea0c", "title": "Maglev: A Fast and Reliable Software Network Load Balancer", "text": "Maglev is Google\u2019s network load balancer. It is a large distributed software system that runs on commodity Linux servers. Unlike traditional hardware network load balancers, it does not require a specialized physical rack deployment, and its capacity can be easily adjusted by adding or removing servers. Network routers distribute packets evenly to the Maglev machines via Equal Cost Multipath (ECMP); each Maglev machine then matches the packets to their corresponding services and spreads them evenly to the service endpoints. To accommodate high and ever-increasing traffic, Maglev is specifically optimized for packet processing performance. A single Maglev machine is able to saturate a 10Gbps link with small packets. Maglev is also equipped with consistent hashing and connection tracking features, to minimize the negative impact of unexpected faults and failures on connection-oriented protocols. Maglev has been serving Google\u2019s traffic since 2008. It has sustained the rapid global growth of Google services, and it also provides network load balancing for Google Cloud Platform."}
{"_id": "1c610c122c0c848ba8defbcb40ce9f4dd55de7d9", "title": "Augmented Reality Tracking in Natural Environments", "text": "Tracking, or camera pose determination, is the main technical challenge in creating augmented realities. Constraining the degree to which the environment may be altered to support tracking heightens the challenge. This paper describes several years of work at the USC Computer Graphics and Immersive Technologies (CGIT) laboratory to develop self-contained, minimally intrusive tracking systems for use in both indoor and outdoor settings. These hybrid-technology tracking systems combine vision and inertial sensing with research in fiducial design, feature detection, motion estimation, recursive filters, and pragmatic engineering to satisfy realistic application requirements."}
{"_id": "728bbc615aececb184b07899d973fad62269dc21", "title": "Word Epoch Disambiguation: Finding How Words Change Over Time", "text": "In this paper we introduce the novel task of \u201cword epoch disambiguation,\u201d defined as the problem of identifying changes in word usage over time. Through experiments run using word usage examples collected from three major periods of time (1800, 1900, 2000), we show that the task is feasible, and significant differences can be observed between occurrences of words in different periods of time."}
{"_id": "15386c1d34870f028927d02d5608581d02e589a1", "title": "How to model fake news", "text": "Over the past three years it has become evident that fake news is a danger to democracy. However, until now there has been no clear understanding of how to define fake news, much less how to model it. This paper addresses both these issues. A definition of fake news is given, and two approaches for the modelling of fake news and its impact in elections and referendums are introduced. The first approach, based on the idea of a representative voter, is shown to be suitable to obtain a qualitative understanding of phenomena associated with fake news at a macroscopic level. The second approach, based on the idea of an election microstructure, describes the collective behaviour of the electorate by modelling the preferences of individual voters. It is shown through a simulation study that the mere knowledge that pieces of fake news may be in circulation goes a long way towards mitigating the impact of fake news."}
{"_id": "f98990356a62e05af16993a5fc355a7e675a3320", "title": "Penoscrotal plication as a uniform approach to reconstruction of penile curvature.", "text": "OBJECTIVE\nTo present our 4-year experience of using a minimally invasive technique, penoscrotal plication (PSP), as a uniform treatment for men with debilitating penile curvature resulting from Peyronie's disease.\n\n\nPATIENTS AND METHODS\nIn 48 men (median age 58.7 years) with penile curvature the penis was reconstructed by imbricating the tunica albuginea opposite the curvature with multiple nonabsorbable sutures. All patients, regardless of the degree or direction of curvature, were approached through a small penoscrotal incision made without degloving the penis. Detailed measurements of penile shaft angle and stretched penile length were recorded and analysed before and after reconstruction, and the numbers of sutures required for correction were documented.\n\n\nRESULTS\nNearly all patients had dorsal and/or lateral deformities that were easily corrected via a ventral penoscrotal incision. The median (range) degree of correction was 28 (18-55) degrees and number of sutures used was 6 (4-17). Stretched penile length measurements before and after plication showed no significant difference. A single PSP procedure was successful in 45/48 (93%) patients; two were dissatisfied with the correction, one having repeat plication and the other a penile prosthesis; one other required a suture release for pain.\n\n\nCONCLUSIONS\nPSP is safe and effective and should be considered even for cases with severe or biplanar curvature."}
{"_id": "d12f2be30367f85eb9ca54313939bf07f6482b0a", "title": "Non-sexually related acute genital ulcers in 13 pubertal girls: a clinical and microbiological study.", "text": "OBJECTIVE\nTo describe the clinical and microbiological features of acute genital ulcers (AGU), which have been reported in virgin adolescents, predominantly in girls.\n\n\nDESIGN\nDescriptive study. We collected data on the clinical features, sexual history, blood cell count, biochemistry, microbiological workup, and 1-year follow-up.\n\n\nSETTING\nDepartments of dermatology of 3 university hospitals in Paris. Patients Thirteen immunocompetent female patients with a first flare of non-sexually transmitted AGU.\n\n\nMAIN OUTCOME MEASURES\nClinical and microbiological data, using a standardized form.\n\n\nRESULTS\nMean age was 16.6 years (range, 11-19 years). Eleven patients denied previous sexual contact. A fever or flulike symptoms preceded AGU in 10 of the 13 patients (77%), with a mean delay of 3.8 days before the AGU onset (range, 0-10 days). The genital ulcers were bilateral in 10 patients. The final diagnosis was Epstein-Barr virus primary infection in 4 patients (31%) and Beh\u00e7et disease in 1 patient (8%). No other infectious agents were detected in this series.\n\n\nCONCLUSIONS\nWe recommend serologic testing for Epstein-Barr virus with IgM antibodies to viral capsid antigens in non-sexually related AGU in immunocompetent patients. Further microbiological studies are required to identify other causative agents."}
{"_id": "1c1406a873ac215ac4ccfdb37d9876e7902e080c", "title": "Tangible Query Interfaces: Physically Constrained Tokens for Manipulating Database Queries", "text": "We present a new approach for using physically constrained tokens to express, manipulate, and visualize parameterized database queries. This method extends tangible interfaces to enable interaction with large aggregates of information. We describe two interface prototypes that use physical tokens to represent database parameters. These tokens are manipulated upon physical constraints, which map compositions of tokens onto interpretations including database queries, views, and Boolean operations. We propose a framework for \u201c token + constraint\u201d interfaces, and compare one of our prototypes with a comparable graphical interface in a preliminary user study. \u2020 Current affiliation: Visualization Dept., Zuse Institute Berlin (ZIB), ullmer@zib.de \u2020\u2020 Current affiliation: Computer Science Dept., Tufts University, jacob@cs.tufts.edu"}
{"_id": "e3ff5f811a2d10aee18edb808d64051cb1a1f642", "title": "OpenIoT: An open service framework for the Internet of Things", "text": "The Internet of Things (IoT) has been a hot topic for the future of computing and communication. It will not only have a broad impact on our everyday life in the near future, but also create a new ecosystem involving a wide array of players such as device developers, service providers, software developers, network operators, and service users. In this paper, we present an open service framework for the Internet of Things, facilitating entrance into the IoT-related mass market, and establishing a global IoT ecosystem with the worldwide use of IoT devices and softwares. We expect that the open IoT service framework we proposed will play an important role in the widespread adoption of the Internet of Things in our everyday life, enhancing our quality of life with a large number of innovative applications and services, but also offering endless opportunities to all of the stakeholders in the world of information and communication technologies."}
{"_id": "f0982dfd3071d33296c22a4c38343887dd5b2a9b", "title": "A visual analytics agenda", "text": "Researchers have made significant progress in disciplines such as scientific and information visualization, statistically based exploratory and confirmatory analysis, data and knowledge representations, and perceptual and cognitive sciences. Although some research is being done in this area, the pace at which new technologies and technical talents are becoming available is far too slow to meet the urgent need. National Visualization and Analytics Center's goal is to advance the state of the science to enable analysts to detect the expected and discover the unexpected from massive and dynamic information streams and databases consisting of data of multiple types and from multiple sources, even though the data are often conflicting and incomplete. Visual analytics is a multidisciplinary field that includes the following focus areas: (i) analytical reasoning techniques, (ii) visual representations and interaction techniques, (iii) data representations and transformations, (iv) techniques to support production, presentation, and dissemination of analytical results. The R&D agenda for visual analytics addresses technical needs for each of these focus areas, as well as recommendations for speeding the movement of promising technologies into practice. This article provides only the concise summary of the R&D agenda. We encourage reading, discussion, and debate as well as active innovation toward the agenda for visual analysis."}
{"_id": "5264ae4ea4411426ddd91dc780c2892c3ff933d3", "title": "An Introduction to Variable and Feature Selection", "text": "Variable and feature selection have become the focus of much research in areas of application for which datasets with tens or hundreds of thousands of variabl es are available. These areas include text processing of internet documents, gene expression arr ay nalysis, and combinatorial chemistry. The objective of variable selection is three-fold: improvi ng the prediction performance of the predictors, providing faster and more cost-effective predict ors, and providing a better understanding of the underlying process that generated the data. The contrib utions of this special issue cover a wide range of aspects of such problems: providing a better definit ion of the objective function, feature construction, feature ranking, multivariate feature sele ction, efficient search methods, and feature validity assessment methods."}
{"_id": "a1420a6c619d2572109abfb4a387f70c2fc998ff", "title": "Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I.", "text": "BACKGROUND\nAs part of an interdisciplinary study of medical injury and malpractice litigation, we estimated the incidence of adverse events, defined as injuries caused by medical management, and of the subgroup of such injuries that resulted from negligent or substandard care.\n\n\nMETHODS\nWe reviewed 30,121 randomly selected records from 51 randomly selected acute care, nonpsychiatric hospitals in New York State in 1984. We then developed population estimates of injuries and computed rates according to the age and sex of the patients as well as the specialties of the physicians.\n\n\nRESULTS\nAdverse events occurred in 3.7 percent of the hospitalizations (95 percent confidence interval, 3.2 to 4.2), and 27.6 percent of the adverse events were due to negligence (95 percent confidence interval, 22.5 to 32.6). Although 70.5 percent of the adverse events gave rise to disability lasting less than six months, 2.6 percent caused permanently disabling injuries and 13.6 percent led to death. The percentage of adverse events attributable to negligence increased in the categories of more severe injuries (Wald test chi 2 = 21.04, P less than 0.0001). Using weighted totals, we estimated that among the 2,671,863 patients discharged from New York hospitals in 1984 there were 98,609 adverse events and 27,179 adverse events involving negligence. Rates of adverse events rose with age (P less than 0.0001). The percentage of adverse events due to negligence was markedly higher among the elderly (P less than 0.01). There were significant differences in rates of adverse events among categories of clinical specialties (P less than 0.0001), but no differences in the percentage due to negligence.\n\n\nCONCLUSIONS\nThere is a substantial amount of injury to patients from medical management, and many injuries are the result of substandard care."}
{"_id": "74a08b42f7cd4208dc39e44a74515917febea252", "title": "Developing a car gesture interface for use as a secondary task", "text": "Existing gesture-interface research has centered on controlling the user's primary task. This paper explores the use of gestures to control secondary tasks while the user is focused on driving. Through contextual inquiry, ten iterative prototypes, and a Wizard of Oz experiment, we show that a gesture interface is a viable alternative for completing secondary tasks in the car."}
{"_id": "818064efbe870746d3fa6e3b4e208c4f37a6847a", "title": "Modularized Morphing of Neural Networks", "text": "In this work we study the problem of network morphism, an effective learning scheme to morph a well-trained neural network to a new one with the network function completely preserved. Different from existing work where basic morphing types on the layer level were addressed, we target at the central problem of network morphism at a higher level, i.e., how a convolutional layer can be morphed into an arbitrary module of a neural network. To simplify the representation of a network, we abstract a module as a graph with blobs as vertices and convolutional layers as edges, based on which the morphing process is able to be formulated as a graph transformation problem. Two atomic morphing operations are introduced to compose the graphs, based on which modules are classified into two families, i.e., simple morphable modules and complex modules. We present practical morphing solutions for both of these two families, and prove that any reasonable module can be morphed from a single convolutional layer. Extensive experiments have been conducted based on the state-of-the-art ResNet on benchmark datasets, and the effectiveness of the proposed solution has been verified."}
{"_id": "a0a9390e14beb38c504473c3adc857f8faeaebd2", "title": "Face Detection using Digital Image Processing", "text": "This paper presents a technique for automatically detecting human faces in digital color images. This is two-step process which first detects regions contain human skin in the color image and then extracts information from these regions which might indicate the location of a face in the image. The skin detection is performed using a skin filter which relies on color and texture information. The face detection is performed on a grayscale image containing only the detected skin areas. A combination of thresh holding and mathematical morphology are used to extract object features that would indicate the presence of a face. The face detection process works predictably and fairly reliably, as test results show."}
{"_id": "1c32fb359bef96c6cdbc4b668bb7e365538a5047", "title": "Regularized linear and kernel redundancy analysis", "text": "Redundancy analysis (RA) is a versatile technique used to predict multivariate criterion variables from multivariate predictor variables. The reduced-rank feature of RA captures redundant information in the criterion variables in a most parsimonious way. A ridge type of regularization was introduced in RA to deal with the multicollinearity problem among the predictor variables. The regularized linear RA was extended to nonlinear RA using a kernel method to enhance the predictability. The usefulness of the proposed procedures was demonstrated by a Monte Carlo study and through the analysis of two real data sets."}
{"_id": "742ae20de07ae03c907f3c5f68cac26d95a8121d", "title": "A design process for embedding knowledge management in everyday work", "text": "Knowledge Management Software must be embedded in processes of knowledge workers' everyday practice. In order to attain a seamless design, regarding the special qualities and requirements of knowledge work, detailed studies of the existing work processes and analysis of the used knowledge are necessary. Participation of the knowledge owners and future users us an important factor for success of knowledge management systems. In this paper we describe characteristics of knowledge work motivating the usage of participatory design techniques. We suggest a design process for developing or improving knowledge management, which includes ethnographic surveys, user participation in cyclic improvement, scenario based design, and the use of multiple design artifacts and documents. Finally we explain the benefits of our approach. The paper is based on a case study we carried out to design and introduce a knowledge management system in a training company."}
{"_id": "9699af44d3d0c9a727fe13cab3088ada6ae66876", "title": "The Development Process of the Semantic Web and Web Ontology", "text": "This paper deals with the semantic web and web ontology. The existing ontology development processes are not catered towards casual web ontology development, a notion analogous to standard web page development. Ontologies have become common on the World-Wide Web[2]. Key features of this process include easy and rapid creation of ontological skeletons, searching and linking to existing ontologies and a natural language-based technique to improve presentation of ontologies[6]. Ontologies, however, vary greatly in size, scope and semantics. They can range from generic upper-level ontologies to domain-specific schemas. The success of the Semantic Web is based on the existance of numerous distributed ontologies, using which users can annotate their data, thereby enabling shared machine readable content. This paper elaborates the stages in a casual ontology development process. Key wordsToolkits; ontology; semantic web; language-based; web ontology."}
{"_id": "5e47d3fd6447b292f7fbe572fc17ebffa487809f", "title": "Building upon RouteFlow : a SDN development experience", "text": "RouteFlow is a platform for providing virtual IP routing services in OpenFlow networks. During the first year of development, we came across some use cases that might be interesting pursuing in addition to a number of lessons learned worth sharing. In this paper, we will discuss identified requirements and architectural and implementation changes made to shape RouteFlow into a more robust solution for Software Defined networking (SDN). This paper addresses topics of interest to the SDN community, such as development issues involving layered applications on top of network controllers, ease of configuration, and network visualization. In addition, we will present the first publicly known use case with multiple, heterogeneous OpenFlow controllers to implement a centralized routing control function, demonstrating how IP routing as a service can be provided for different network domains under a single central control. Finally, performance comparisons and a real testbed were used as means of validating the implementation."}
{"_id": "f243961ec694c5510ec0a7f71fd4596c58195bdc", "title": "PhysioDroid: Combining Wearable Health Sensors and Mobile Devices for a Ubiquitous, Continuous, and Personal Monitoring", "text": "Technological advances on the development of mobile devices, medical sensors, and wireless communication systems support a new generation of unobtrusive, portable, and ubiquitous health monitoring systems for continuous patient assessment and more personalized health care. There exist a growing number of mobile apps in the health domain; however, little contribution has been specifically provided, so far, to operate this kind of apps with wearable physiological sensors. The PhysioDroid, presented in this paper, provides a personalized means to remotely monitor and evaluate users' conditions. The PhysioDroid system provides ubiquitous and continuous vital signs analysis, such as electrocardiogram, heart rate, respiration rate, skin temperature, and body motion, intended to help empower patients and improve clinical understanding. The PhysioDroid is composed of a wearable monitoring device and an Android app providing gathering, storage, and processing features for the physiological sensor data. The versatility of the developed app allows its use for both average users and specialists, and the reduced cost of the PhysioDroid puts it at the reach of most people. Two exemplary use cases for health assessment and sports training are presented to illustrate the capabilities of the PhysioDroid. Next technical steps include generalization to other mobile platforms and health monitoring devices."}
{"_id": "249b8988c4f43315ccfaef3d1d3ebda67494d2f0", "title": "Wideband planar monopole antennas with dual band-notched characteristics", "text": "Wideband planar monopole antennas with dual band-notched characteristics are presented. The proposed antenna consists of a wideband planar monopole antenna and the multiple cup-, cap-, and inverted L-shaped slots, producing band-notched characteristics. In order to generate dual band-notched characteristic, we propose nine types of planar monopole antennas, which have two or three cap (cup or inverted L)-shaped slots in the radiator. This technique is suitable for creating ultra-wideband antenna with narrow frequency notches or for creating multiband antennas"}
{"_id": "c0b2c8817eacdeb7809d82e5aa1edb1cd5836938", "title": "Optimal Operation of Distribution Feeders in Smart Grids", "text": "This paper presents a generic and comprehensive distribution optimal power flow (DOPF) model that can be used by local distribution companies (LDCs) to integrate their distribution system feeders into a Smart Grid. The proposed three-phase DOPF framework incorporates detailed modeling of distribution system components and considers various operating objectives. Phase specific and voltage dependent modeling of customer loads in the three-phase DOPF model allows LDC operators to determine realistic operating strategies that can improve the overall feeder efficiency. The proposed distribution system operation objective is based on the minimization of the energy drawn from the substation while seeking to minimize the number of switching operations of load tap changers and capacitors. A novel method for solving the three-phase DOPF model by transforming the mixed-integer nonlinear programming problem to a nonlinear programming problem is proposed which reduces the computational burden and facilitates its practical implementation and application. Two practical case studies, including a real distribution feeder test case, are presented to demonstrate the features of the proposed methodology. The results illustrate the benefits of the proposed DOPF in terms of reducing energy losses while limiting the number of switching operations."}
{"_id": "01eff5a77d72b34ea2dfab434f82efee91827519", "title": "Human error analysis of commercial aviation accidents: application of the Human Factors Analysis and Classification system (HFACS).", "text": "BACKGROUND\nThe Human Factors Analysis and Classification System (HFACS) is a general human error framework originally developed and tested within the U.S. military as a tool for investigating and analyzing the human causes of aviation accidents. Based on Reason's (1990) model of latent and active failures, HFACS addresses human error at all levels of the system, including the condition of aircrew and organizational factors. The purpose of the present study was to assess the utility of the HFACS framework as an error analysis and classification tool outside the military.\n\n\nMETHODS\nThe HFACS framework was used to analyze human error data associated with aircrew-related commercial aviation accidents that occurred between January 1990 and December 1996 using database records maintained by the NTSB and the FAA.\n\n\nRESULTS\nInvestigators were able to reliably accommodate all the human causal factors associated with the commercial aviation accidents examined in this study using the HFACS system. In addition, the classification of data using HFACS highlighted several critical safety issues in need of intervention research.\n\n\nCONCLUSION\nThese results demonstrate that the HFACS framework can be a viable tool for use within the civil aviation arena. However, additional research is needed to examine its applicability to areas outside the flight deck, such as aircraft maintenance and air traffic control domains."}
{"_id": "1b645b3a2366baeb6e6b485ec5b414fc0827fb86", "title": "A 0.5-to-2.5 Gb/s Reference-Less Half-Rate Digital CDR With Unlimited Frequency Acquisition Range and Improved Input Duty-Cycle Error Tolerance", "text": "A reference-less highly digital half-rate clock and data recovery (CDR) circuit with improved tolerance to input duty cycle error is presented. Using a chain of frequency dividers, the pro posed frequency detector produces a known sub-harmonic tone from the incoming random data. A digital frequency-locked loop uses the extracted tone, and drives the oscillator to any sub-rate of the input data frequency. The early/late outputs of a conventional half-rate bang-bang phase detector are used to determine the duty-cycle error in the incoming random data and adjust the oscillator clock phases to maximize receiver timing margins. Fabricated in 0.13 \u03bcm CMOS technology, the prototype digital CDR op erates without any errors from 0.5 Gb/s to 2.5 Gb/s. At 2 Gb/s, the prototype consumes 6.1 mW power from a 1.2 V supply. The pro posed clock-phase calibration is capable of correcting upto \u00b120% of input data duty-cycle error."}
{"_id": "97325a06de27b59c431a37b03c88f33e3a789f31", "title": "Applying data mining techniques in job recommender system for considering candidate job preferences", "text": "Job recommender systems are desired to attain a high level of accuracy while making the predictions which are relevant to the customer, as it becomes a very tedious task to explore thousands of jobs, posted on the web, periodically. Although a lot of job recommender systems exist that uses different strategies , here efforts have been put to make the job recommendations on the basis of candidate's profile matching as well as preserving candidate's job behavior or preferences. Firstly, rules predicting the general preferences of the different user groups are mined. Then the job recommendations to the target candidate are made on the basis of content based matching as well as candidate preferences, which are preserved either in the form of mined rules or obtained by candidate's own applied jobs history. Through this technique a significant level of accuracy has been achieved over other basic methods of job recommendations."}
{"_id": "f394fc319d90a6227ce87504d882245c6c342cd2", "title": "A Simple Flood Forecasting Scheme Using Wireless Sensor Networks", "text": "This paper presents a forecasting model designed using WSNs( Wireless Sensor Networks) to predict flood in rivers using simple and fast calculations to provide real-time results and save the lives of people who may be affected by the flood. Our prediction model uses multiple variable robust linear regression which is easy to understand and simple and cost effective in implementation, is speed efficient, but has low resource utilization and yet provides real time predictions with reliable accuracy, thus having features which are desirable in any real world algorithm. Our prediction model is independent of the number of parameters, i.e. any number of parameters may be added or removed based on the on-site requirements. When the water level rises, we represent it using a polynomial whose nature is used to determine if the water level may exceed the flood line in the near future. We compare our work with a contemporary algorithm to demonstrate our improvements over it. Then we present our simulation results for the predicted water level compared to the actual water level."}
{"_id": "559caee18178518a655004979d2dc26c969d10c6", "title": "Test Case Prioritization Using Requirements-Based Clustering", "text": "The importance of using requirements information in the testing phase has been well recognized by the requirements engineering community, but to date, a vast majority of regression testing techniques have primarily relied on software code information. Incorporating requirements information into the current testing practice could help software engineers identify the source of defects more easily, validate the product against requirements, and maintain software products in a holistic way. In this paper, we investigate whether the requirements-based clustering approach that incorporates traditional code analysis information can improve the effectiveness of test case prioritization techniques. To investigate the effectiveness of our approach, we performed an empirical study using two Java programs with multiple versions and requirements documents. Our results indicate that the use of requirements information during the test case prioritization process can be beneficial."}
{"_id": "38a62849b31996ff3d3595e505227973e9e3a562", "title": "Teaching and working with robots as a collaboration", "text": "New applications for autonomous robots bring them into the human environment where they are to serve as helpful assistants to untrained users in the home or office, or work as capable members of human-robot teams for security, military, and space efforts. These applications require robots to be able to quickly learn how to perform new tasks from natural human instruction, and to perform tasks collaboratively with human teammates. Using joint intention theory as our theoretical framework, our approach integrates learning and collaboration through a goal based task structure. Specifically, we use collaborative discourse with accompanying gestures and social cues to teach a humanoid robot a structurally complex task. Having learned the representation for the task, the robot then performs it shoulder-to-shoulder with a human partner, using social communication acts to dynamically mesh its plans with those of its partner, according to the relative capabilities of the human and the robot."}
{"_id": "87660054e271de3a284cdb654d989a0197bd6051", "title": "Sensitive Lifelogs: A Privacy Analysis of Photos from Wearable Cameras", "text": "While media reports about wearable cameras have focused on the privacy concerns of bystanders, the perspectives of the `lifeloggers' themselves have not been adequately studied. We report on additional analysis of our previous in-situ lifelogging study in which 36 participants wore a camera for a week and then reviewed the images to specify privacy and sharing preferences. In this Note, we analyze the photos themselves, seeking to understand what makes a photo private, what participants said about their images, and what we can learn about privacy in this new and very different context where photos are captured automatically by one's wearable camera. We find that these devices record many moments that may not be captured by traditional (deliberate) photography, with camera owners concerned about impression management and protecting private information of both themselves and bystanders."}
{"_id": "27f366b733ba0f75a93c06d5d7f0d1e06b467a4c", "title": "Foundations of Logic Programming", "text": ""}
{"_id": "af6bb0fef60a068f6930ab22397b56ac1ad39026", "title": "Evaluating faces on trustworthiness: an extension of systems for recognition of emotions signaling approach/avoidance behaviors.", "text": "People routinely make various trait judgments from facial appearance, and such judgments affect important social outcomes. These judgments are highly correlated with each other, reflecting the fact that valence evaluation permeates trait judgments from faces. Trustworthiness judgments best approximate this evaluation, consistent with evidence about the involvement of the amygdala in the implicit evaluation of face trustworthiness. Based on computer modeling and behavioral experiments, I argue that face evaluation is an extension of functionally adaptive systems for understanding the communicative meaning of emotional expressions. Specifically, in the absence of diagnostic emotional cues, trustworthiness judgments are an attempt to infer behavioral intentions signaling approach/avoidance behaviors. Correspondingly, these judgments are derived from facial features that resemble emotional expressions signaling such behaviors: happiness and anger for the positive and negative ends of the trustworthiness continuum, respectively. The emotion overgeneralization hypothesis can explain highly efficient but not necessarily accurate trait judgments from faces, a pattern that appears puzzling from an evolutionary point of view and also generates novel predictions about brain responses to faces. Specifically, this hypothesis predicts a nonlinear response in the amygdala to face trustworthiness, confirmed in functional magnetic resonance imaging (fMRI) studies, and dissociations between processing of facial identity and face evaluation, confirmed in studies with developmental prosopagnosics. I conclude with some methodological implications for the study of face evaluation, focusing on the advantages of formally modeling representation of faces on social dimensions."}
{"_id": "631483c15641c3652377f66c8380ff684f3e365c", "title": "Sync-DRAW: Automatic Video Generation using Deep Recurrent Attentive Architectures", "text": "This paper introduces a novel approach for generating videos called Synchronized Deep Recurrent Attentive Writer (Sync-DRAW). Sync-DRAW can also perform text-to-video generation which, to the best of our knowledge, makes it the first approach of its kind. It combines a Variational Autoencoder(VAE) with a Recurrent Attention Mechanism in a novel manner to create a temporally dependent sequence of frames that are gradually formed over time. The recurrent attention mechanism in Sync-DRAW attends to each individual frame of the video in sychronization, while the VAE learns a latent distribution for the entire video at the global level. Our experiments with Bouncing MNIST, KTH and UCF-101 suggest that Sync-DRAW is efficient in learning the spatial and temporal information of the videos and generates frames with high structural integrity, and can generate videos from simple captions on these datasets."}
{"_id": "14e54c29e986977dd0537ef694fad0fa6eb862f6", "title": "How schema and novelty augment memory formation", "text": "Information that is congruent with existing knowledge (a schema) is usually better remembered than less congruent information. Only recently, however, has the role of schemas in memory been studied from a systems neuroscience perspective. Moreover, incongruent (novel) information is also sometimes better remembered. Here, we review lesion and neuroimaging findings in animals and humans that relate to this apparent paradoxical relationship between schema and novelty. In addition, we sketch a framework relating key brain regions in medial temporal lobe (MTL) and medial prefrontal cortex (mPFC) during encoding, consolidation and retrieval of information as a function of its congruency with existing information represented in neocortex. An important aspect of this framework is the efficiency of learning enabled by congruency-dependent MTL-mPFC interactions."}
{"_id": "e3530c67ed2294ce1a72b424d5e38c95552ba15f", "title": "Neural scene representation and rendering", "text": "Scene representation\u2014the process of converting visual sensory data into concise descriptions\u2014is a requirement for intelligent behavior. Recent work has shown that neural networks excel at this task when provided with large, labeled datasets. However, removing the reliance on human labeling remains an important open problem. To this end, we introduce the Generative Query Network (GQN), a framework within which machines learn to represent scenes using only their own sensors. The GQN takes as input images of a scene taken from different viewpoints, constructs an internal representation, and uses this representation to predict the appearance of that scene from previously unobserved viewpoints. The GQN demonstrates representation learning without human labels or domain knowledge, paving the way toward machines that autonomously learn to understand the world around them."}
{"_id": "1a74a968e64e4e8381a6a6ddc7f6f7a7599ce4ee", "title": "Detecting sources of computer viruses in networks: theory and experiment", "text": "We provide a systematic study of the problem of finding the source of a computer virus in a network. We model virus spreading in a network with a variant of the popular SIR model and then construct an estimator for the virus source. This estimator is based upon a novel combinatorial quantity which we term rumor centrality. We establish that this is an ML estimator for a class of graphs. We find the following surprising threshold phenomenon: on trees which grow faster than a line, the estimator always has non-trivial detection probability, whereas on trees that grow like a line, the detection probability will go to 0 as the network grows. Simulations performed on synthetic networks such as the popular small-world and scale-free networks, and on real networks such as an internet AS network and the U.S. electric power grid network, show that the estimator either finds the source exactly or within a few hops in different network topologies. We compare rumor centrality to another common network centrality notion known as distance centrality. We prove that on trees, the rumor center and distance center are equivalent, but on general networks, they may differ. Indeed, simulations show that rumor centrality outperforms distance centrality in finding virus sources in networks which are not tree-like."}
{"_id": "53d1f9f7c77ffb0af729c173c35f099550f27f6e", "title": "Rumor centrality: a universal source detector", "text": "We consider the problem of detecting the source of a rumor (information diffusion) in a network based on observations about which set of nodes possess the rumor. In a recent work [10], this question was introduced and studied. The authors proposed rumor centrality as an estimator for detecting the source. They establish it to be the maximum likelihood estimator with respect to the popular Susceptible Infected (SI) model with exponential spreading time for regular trees. They showed that as the size of infected graph increases, for a line (2-regular tree) graph, the probability of source detection goes to 0 while for d-regular trees with d \u2265 3 the probability of detection, say \u03b1d, remains bounded away from 0 and is less than 1/2. Their results, however stop short of providing insights for the heterogeneous setting such as irregular trees or the SI model with non-exponential spreading times.\n This paper overcomes this limitation and establishes the effectiveness of rumor centrality for source detection for generic random trees and the SI model with a generic spreading time distribution. The key result is an interesting connection between a multi-type continuous time branching process (an equivalent representation of a generalized Polya's urn, cf. [1]) and the effectiveness of rumor centrality. Through this, it is possible to quantify the detection probability precisely. As a consequence, we recover all the results of [10] as a special case and more importantly, we obtain a variety of results establishing the universality of rumor centrality in the context of tree-like graphs and the SI model with a generic spreading time distribution."}
{"_id": "71b64035eacf2986ca7c153ebec72eb781a23931", "title": "Learning to Discover Social Circles in Ego Networks", "text": "Our personal social networks are big and cluttered, and currently there is no good way to organize them. Social networking sites allow users to manually categorize their friends into social circles (e.g. \u2018circles\u2019 on Google+, and \u2018lists\u2019 on Facebook and Twitter), however they are laborious to construct and must be updated whenever a user\u2019s network grows. We define a novel machine learning task of identifying users\u2019 social circles. We pose the problem as a node clustering problem on a user\u2019s ego-network, a network of connections between her friends. We develop a model for detecting circles that combines network structure as well as user profile information. For each circle we learn its members and the circle-specific user profile similarity metric. Modeling node membership to multiple circles allows us to detect overlapping as well as hierarchically nested circles. Experiments show that our model accurately identifies circles on a diverse set of data from Facebook, Google+, and Twitter for all of which we obtain hand-labeled ground-truth."}
{"_id": "9b90cb4aea40677494e4a3913878e355c4ae56e8", "title": "Collective dynamics of \u2018small-world\u2019 networks", "text": "Networks of coupled dynamical systems have been used to model biological oscillators, Josephson junction arrays,, excitable media, neural networks, spatial games, genetic control networks and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks \u2018rewired\u2019 to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them \u2018small-world\u2019 networks, by analogy with the small-world phenomenon, (popularly known as six degrees of separation). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices."}
{"_id": "00d23e5c06f90bed0c9d4aec22babb2f7488817f", "title": "Link Prediction via Matrix Factorization", "text": "We propose to solve the link prediction problem in graphs using a supervised matrix factorization approach. The model learns latent features from the topological structure of a (possibly directed) graph, and is shown to make better predictions than popular unsupervised scores. We show how these latent features may be combined with optional explicit features for nodes or edges, which yields better performance than using either type of feature exclusively. Finally, we propose a novel approach to address the class imbalance problem which is common in link prediction by directly optimizing for a ranking loss. Our model is optimized with stochastic gradient descent and scales to large graphs. Results on several datasets show the efficacy of our approach."}
{"_id": "917ec490106e81e69b3c8a39a04db811079d0b55", "title": "LHCP and RHCP Substrate Integrated Waveguide Antenna Arrays for Millimeter-Wave Applications", "text": "Left-hand and right-hand circularly polarized (LHCP and RHCP) substrate integrated waveguide (SIW) antenna arrays are presented at the 28-GHz band for millimeter-wave (mm-wave) applications. Two types of circularly polarized (CP) antenna elements are designed to achieve respective LHCP and RHCP performance. Eight-element LHCP and RHCP antenna arrays have been implemented with feeding networks and measured. Based on the measurement, the LHCP and RHCP antenna arrays have impedance bandwidths of 1.54 and 1.7 GHz within $| S_{\\rm{11}}| < - \\text{10 dB}$, whereas 3-dB axial-ratio bandwidths are 1.1 and 1.3 GHz, respectively. Each of the fabricated LHCP and RHCP antenna arrays is accomplished with a gain up to 13.09 and 13.52 dBi. Most of the measured results are validated with the simulated ones. The proposed CP antenna arrays can provide low-cost, broadband characteristics, and high-gain radiation performance with CP properties for mm-wave applications."}
{"_id": "afdc55d29c3d976b1bbab75fce4f37526ae7977f", "title": "A Review of Substrate Integrated Waveguide End-Fire Antennas", "text": "Substrate integrated waveguide (SIW) is a planar waveguide structure, which offers the advantages of low profile, ease of fabrication, low insertion loss, and compatibility with other planar circuits. SIW end-fire antennas have drawn broad interests due to the potential applications in aircraft, missile, and radar systems. However, this planar structure suffers from narrow bandwidth due to severe impedance mismatch at the radiating aperture. Meanwhile, the narrow radiating aperture of SIW end-fire antennas also deteriorates the radiation performance. This paper presents a detailed review upon the most recent research efforts concerning the improvement of antenna performances. They are discussed and classified into three different categories from the aspect of polarization properties: horizontally polarized, vertically polarized, and circularly polarized SIW end-fire antennas. Some practical difficulties for the development of SIW end-fire antennas are pointed out and effective approaches are also provided. A wide variety of antenna examples are presented with respect to theoretical and experimental results."}
{"_id": "876446e5c9eaaf1b54746270abdb690db98b748f", "title": "Feature Mining for Localised Crowd Counting", "text": "This paper presents a multi-output regression model for crowd counting in public scenes. Existing counting by regression methods either learn a single model for global counting, or train a large number of separate regressors for localised density estimation. In contrast, our single regression model based approach is able to estimate people count in spatially localised regions and is more scalable without the need for training a large number of regressors proportional to the number of local regions. In particular, the proposed model automatically learns the functional mapping between interdependent low-level features and multi-dimensional structured outputs. The model is able to discover the inherent importance of different features for people counting at different spatial locations. Extensive evaluations on an existing crowd analysis benchmark dataset and a new more challenging dataset demonstrate the effectiveness of our approach."}
{"_id": "590ed266f720d04e8c5b2af3a5a9d8c86c24880d", "title": "Trajectory Bundling for Animated Transitions", "text": "Animated transition has been a popular design choice for smoothly switching between different visualization views or layouts, in which movement trajectories are created as cues for tracking objects during location shifting. Tracking moving objects, however, becomes difficult when their movement paths overlap or the number of tracking targets increases. We propose a novel design to facilitate tracking moving objects in animated transitions. Instead of simply animating an object along a straight line, we create \"bundled\" movement trajectories for a group of objects that have spatial proximity and share similar moving directions. To study the effect of bundled trajectories, we untangle variations due to different aspects of tracking complexity in a comprehensive controlled user study. The results indicate that using bundled trajectories is particularly effective when tracking more targets (six vs. three targets) or when the object movement involves a high degree of occlusion or deformation. Based on the study, we discuss the advantages and limitations of the new technique, as well as provide design implications."}
{"_id": "5b0644069bd4a2fcf9e3272307f68c2002d2122f", "title": "A Taxonomy of Domain-Specific Aspect Languages", "text": "Domain-Specific Aspect Languages (DSALs) are Domain-Specific Languages (DSLs) designed to express crosscutting concerns. Compared to DSLs, their aspectual nature greatly amplifies the language design space. We structure this space in order to shed light on and compare the different domain-specific approaches to deal with crosscutting concerns. We report on a corpus of 36 DSALs covering the space, discuss a set of design considerations, and provide a taxonomy of DSAL implementation approaches. This work serves as a frame of reference to DSAL and DSL researchers, enabling further advances in the field, and to developers as a guide for DSAL implementations."}
{"_id": "04fa47f1d3983bacfea1e3c838cf868f9b73dc58", "title": "Convolutional face finder: a neural architecture for fast and robust face detection", "text": "In this paper, we present a novel face detection approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns, rotated up to /spl plusmn/20 degrees in image plane and turned up to /spl plusmn/60 degrees, in complex real world images. The proposed system automatically synthesizes simple problem-specific feature extractors from a training set of face and nonface patterns, without making any assumptions or using any hand-made design concerning the features to extract or the areas of the face pattern to analyze. The face detection procedure acts like a pipeline of simple convolution and subsampling modules that treat the raw input image as a whole. We therefore show that an efficient face detection system does not require any costly local preprocessing before classification of image areas. The proposed scheme provides very high detection rate with a particularly low level of false positives, demonstrated on difficult test sets, without requiring the use of multiple networks for handling difficult cases. We present extensive experimental results illustrating the efficiency of the proposed approach on difficult test sets and including an in-depth sensitivity analysis with respect to the degrees of variability of the face patterns."}
{"_id": "cf8f95458591e072835c4372c923e3087754a484", "title": "Markov Logic Mixtures of Gaussian Processes: Towards Machines Reading Regression Data", "text": "We propose a novel mixtures of Gaussian processes model in which the gating function is interconnected with a probabilistic logical model, in our case Markov logic networks. In this way, the resulting mixed graphical model, called Markov logic mixtures of Gaussian processes (MLxGP), solves joint Bayesian non-parametric regression and probabilistic relational inference tasks. In turn, MLxGP facilitates novel, interesting tasks such as regression based on logical constraints or drawing probabilistic logical conclusions about regression data, thus putting \u201cmachines reading regression data\u201d in reach."}
{"_id": "bd151b04739107e09c646933f2abd3ecb3e976c9", "title": "On the Spectral Bias of Neural Networks", "text": "Neural networks are known to be a class of highly expressive functions able to fit even random input-output mappings with 100% accuracy. In this work we present properties of neural networks that complement this aspect of expressivity. By using tools from Fourier analysis, we show that deep ReLU networks are biased towards low frequency functions, meaning that they cannot have local fluctuations without affecting their global behavior. Intuitively, this property is in line with the observation that over-parameterized networks find simple patterns that generalize across data samples. We also investigate how the shape of the data manifold affects expressivity by showing evidence that learning high frequencies gets easier with increasing manifold complexity, and present a theoretical understanding of this behavior. Finally, we study the robustness of the frequency components with respect to parameter perturbation, to develop the intuition that the parameters must be finely tuned to express high frequency functions."}
{"_id": "3d0dd69d6f49d3b91fad795509c86918b7e4142c", "title": "Structured light in scattering media", "text": "Virtually all structured light methods assume that the scene and the sources are immersed in pure air and that light is neither scattered nor absorbed. Recently, however, structured lighting has found growing application in underwater and aerial imaging, where scattering effects cannot be ignored. In this paper, we present a comprehensive analysis of two representative methods - light stripe range scanning and photometric stereo - in the presence of scattering. For both methods, we derive physical models for the appearances of a surface immersed in a scattering medium. Based on these models, we present results on (a) the condition for object detectability in light striping and (b) the number of sources required for photometric stereo. In both cases, we demonstrate that while traditional methods fail when scattering is significant, our methods accurately recover the scene (depths, normals, albedos) as well as the properties of the medium. These results are in turn used to restore the appearances of scenes as if they were captured in clear air. Although we have focused on light striping and photometric stereo, our approach can also be extended to other methods such as grid coding, gated and active polarization imaging."}
{"_id": "e99e5c845d6bda14490a0dacd0d7143c20320e2b", "title": "Conceptualizing Sources in Online News", "text": "This study attempts a new conceptualization of communication \u201csources\u201d by proposing a typology of sources that would apply not only to traditional media but also to new online media. Ontological rationale for the distinctions in the typology is supplemented by psychological evidence via an experiment that investigated the effects of different types of source attributions upon receivers\u2019 perception of online news content. Participants (N = 48) in a 4-condition, between-participants experiment read 6 identical news stories each through an online service. Participants were told that the stories were selected by 1 of 4 sources: news editors, the computer terminal on which they were accessing the stories, other audience members (or users) of the online news service, or (using a pseudo-selection task) the individual user (self). After reading each online news story, all participants filled out a paperand-pencil questionnaire indicating their perceptions of the story they had just read. In confirmation of the distinctions made in the typology, attribution of identical content to 4 different types of online sources was associated with significant variation in news story perception. Theoretical implications of the results as well as the typology are discussed."}
{"_id": "823c9dc4ef36dd93079dfd640da59d3d6ff2cc22", "title": "Conceptualising user hedonic experience", "text": "For many years, the approach taken towards technology design focused on supporting the effective, efficient, and satisfying use of it. The way people use technology has shifted from merely using it to enjoying using it. This paper describes an early approach to understanding user experience in context of technologies (e.g. digital cameras, PDAs, and mobile phones), as well as in more general context such as physical activities e.g. exercising, orienteering, and walking, and in context of diaries. The focus of this paper is on hedonic user experience; that is pleasure, enjoyment, excitement, and fun in the context of technology. This study provides insights into factors contributing to and influencing such experiences and the relationships between them."}
{"_id": "34245763557c61fb18218543e1fb498970d32d1c", "title": "Factorized Geometrical Autofocus for Synthetic Aperture Radar Processing", "text": "This paper describes a factorized geometrical autofocus (FGA) algorithm, specifically suitable for ultrawideband synthetic aperture radar. The strategy is integrated in a fast factorized back-projection chain and relies on varying track parameters step by step to obtain a sharp image; focus measures are provided by an object function (intensity correlation). The FGA algorithm has been successfully applied on synthetic and real (Coherent All RAdio BAnd System II) data sets, i.e., with false track parameters introduced prior to processing, to set up constrained problems involving one geometrical quantity. Resolution (3 dB in azimuth and slant range) and peak-to-sidelobe ratio measurements in FGA images are comparable with reference results (within a few percent and tenths of a decibel), demonstrating the capacity to compensate for residual space variant range cell migration. The FGA algorithm is finally also benchmarked (visually) against the phase gradient algorithm to emphasize the advantage of a geometrical autofocus approach."}
{"_id": "d2491e8f4e6d5a8d1380efda2ec327ed7afefa63", "title": "It's More than Just Sharing Game Play Videos! Understanding User Motives in Mobile Game Social Media", "text": "As mobile gaming has become increasingly popular in recent years, new forms of mobile game social media such as GameDuck that share mobile gameplay videos have emerged. In this work, we set out to understand the user motives of GameDuck by leveraging the Uses and Gratification Theory. We first explore the major motive themes from users' responses (n=138) and generate motivation survey items. We then identify the key motivators by conducting exploratory factor analysis of the survey results (n=354). Finally, we discuss how this new social media relates to existing systems such as Twitch."}
{"_id": "8326f86876adc98c72031de6c3d3d3fac0403175", "title": "Enterprise Architecture design for ensuring strategic business IT alignment (integrating SAMM with TOGAF 9.1)", "text": "Strategic business IT (Information Technology) alignment is one of the main objectives that is achieved from the implementation of Enterprise Architecture (EA) in an organization. EA helps organizations to define architecture of business, information systems and technology that capable for aligning business strategy with IT organizations, through the development of business models, business strategy, business processes and organizations that aligned with infrastructure, applications and IT organizations. A good design of Enterprise Architecture should consider various viewpoints of IT alignment with the organization's business needs. This paper provides a solution how to design Enterprise Architecture which provides guarantee for strategic business IT alignment that is designed through the integration of SAMM component with TOGAF 9.1 metamodel."}
{"_id": "073daaf4f6fa972d3bdee3c4e4510d21dc934dfb", "title": "Machine learning - a probabilistic perspective", "text": "\u201cKevin Murphy excels at unraveling the complexities of machine learning methods while motivating the reader with a stream of illustrated examples and real-world case studies. The accompanying software package includes source code for many of the figures, making it both easy and very tempting to dive in and explore these methods for yourself. A must-buy for anyone interested in machine learning or curious about how to extract useful knowledge from big data.\u201d John Winn, Microsoft Research"}
{"_id": "afe6e5010ae932071eb6288dd0b02347c5a7f6b1", "title": "Algorithms for the variable sized bin packing problem", "text": "In this paper, we consider the variable sized bin packing problem where the objective is to minimize the total cost of used bins when the cost of unit size of each bin does not increase as the bin size increases. Two greedy algorithms are described, and analyzed in three special cases: (a) the sizes of items and bins are divisible, respectively, (b) only the sizes of bins are divisible, and (c) the sizes of bins are not divisible. Here, we say that a list of numbers a1; a2; . . . ; am are divisible when aj exactly divides aj 1, for each 1 < j6m. In the case of (a), the algorithms give optimal solutions, and in the case of (b), each algorithm gives a solution whose value is less than 11 9 C\u00f0B \u00de \u00fe 4 9 , where C\u00f0B \u00de is the optimal value. In the case of (c), each algorithm gives a solution whose value is less than 3 2 C\u00f0B \u00de \u00fe 1. 2002 Elsevier Science B.V. All rights reserved."}
{"_id": "aade79855b2c79d77ee48751fb946c35892fed9b", "title": "IoT system for Human Activity Recognition using BioHarness 3 and Smartphone", "text": "This paper presents an Internet of Things (IoT) approach to Human Activity Recognition (HAR) using remote monitoring of vital signs in the context of a healthcare system for self-managed chronic heart patients. Our goal is to create a HAR-IoT system using learning algorithms to infer the activity done within 4 categories (lie, sit, walk and run) as well as the time consumed performing these activities and, finally giving feedback during and after the activity. Alike in this work, we provide a comprehensive insight on the cloud-based system implemented and the conclusions after implementing two different learning algorithms and the results of the overall system for larger implementations."}
{"_id": "aa2ef24ebbe3f1afcd5d5506ee60508195a46c2b", "title": "Analysis and Design of an Input-Series Two-Transistor Forward Converter For High-Input Voltage Multiple-Output Applications", "text": "In this paper, an input-series two-transistor forward converter is proposed and investigated, which is aimed at high-input voltage multiple-output applications. In this converter, all of the switches are operating synchronously, and the input voltage sharing (IVS) of each series-module is achieved automatically by the coupling of primary windings of the common forward integrated transformer. The active IVS processes are analyzed based on the model of the forward integrated transformer. Through the influence analysis when the mismatches in various series-modules are considered, design principles of the key parameters in each series-module are discussed to suppress the input voltage difference. Finally, a 96-W laboratory-made prototype composed of two forward series-modules is built, and the feasibility of the proposed method and the theoretical analysis are verified by the experimental results."}
{"_id": "af390f5ded307d8f548243163178d2db639b6e5c", "title": "Task-Driven Feature Pooling for Image Classification", "text": "Feature pooling is an important strategy to achieve high performance in image classification. However, most pooling methods are unsupervised and heuristic. In this paper, we propose a novel task-driven pooling (TDP) model to directly learn the pooled representation from data in a discriminative manner. Different from the traditional methods (e.g., average and max pooling), TDP is an implicit pooling method which elegantly integrates the learning of representations into the given classification task. The optimization of TDP can equalize the similarities between the descriptors and the learned representation, and maximize the classification accuracy. TDP can be combined with the traditional BoW models (coding vectors) or the recent state-of-the-art CNN models (feature maps) to achieve a much better pooled representation. Furthermore, a self-training mechanism is used to generate the TDP representation for a new test image. A multi-task extension of TDP is also proposed to further improve the performance. Experiments on three databases (Flower-17, Indoor-67 and Caltech-101) well validate the effectiveness of our models."}
{"_id": "843ae23cbb088640a1697f887b4de82b8ba7a07d", "title": "Channel Attention and Multi-level Features Fusion for Single Image Super-Resolution", "text": "Convolutional neural networks (CNNs) have demonstrated superior performance in super-resolution (SR). However, most CNN-based SR methods neglect the different importance among feature channels or fail to take full advantage of the hierarchical features. To address these issues, this paper presents a novel recursive unit. Firstly, at the beginning of each unit, we adopt a compact channel attention mechanism to adaptively recalibrate the channel importance of input features. Then, the multi-level features, rather than only deep-level features, are extracted and fused. Additionally, we find that it will force our model to learn more details by using the learnable upsampling method (i.e., transposed convolution) only on residual branch (instead of using it both on residual branch and identity branch) while using the bicubic interpolation on the other branch. Analytic experiments show that our method achieves competitive results compared with the state-of-the-art methods and maintains faster speed as well."}
{"_id": "82566f380f61e835292e483cda84eb3d22e32cd4", "title": "Kolmogorov's theorem and multilayer neural networks", "text": "-Taking advantage of techniques developed by Kolmogorov, we give a direct proof of the universal approximation capabilities of perceptron type networks with two hidden layers. From our proof we derive estimates of numbers of hidden units based on properties of the function being approximated and the accuracy of its approximation. Keywords--Feedforward neural networks, Multilayer perceptron type networks, Sigmoidal activation function, Approximations of continuous functions, Uniform approximation, Universal approximation capabilities, Estimates of number of hidden units, Modulus of continuity."}
{"_id": "10236985b28470951de73f76d6fba5343d5f788f", "title": "Dynamic Bin Packing for On-Demand Cloud Resource Allocation", "text": "Dynamic Bin Packing (DBP) is a variant of classical bin packing, which assumes that items may arrive and depart at arbitrary times. Existing works on DBP generally aim to minimize the maximum number of bins ever used in the packing. In this paper, we consider a new version of the DBP problem, namely, the MinTotal DBP problem which targets at minimizing the total cost of the bins used overtime. It is motivated by the request dispatching problem arising from cloud gaming systems. We analyze the competitive ratios of the modified versions of the commonly used First Fit, Best Fit, and Any Fit packing (the family of packing algorithms that open a new bin only when no currently open bin can accommodate the item to be packed) algorithms for the MinTotal DBP problem. We show that the competitive ratio of Any Fit packing cannot be better than \u03bc + 1, where \u03bc is the ratio of the maximum item duration to the minimum item duration. The competitive ratio of Best Fit packing is not bounded for any given \u03bc. For First Fit packing, if all the item sizes are smaller than 1/\u03b2 of the bin capacity (\u03b2> 1 is a constant), the competitive ratio has an upper bound of \u03b2/\u03b2-1\u00b7\u03bc+3\u03b2/\u03b2-1 + 1. For the general case, the competitive ratio of First Fit packing has an upper bound of 2\u03bc + 7. We also propose a Hybrid First Fit packing algorithm that can achieve a competitive ratio no larger than 5/4 \u03bc + 19/4 when \u03bc is not known and can achieve a competitive ratio no larger than \u03bc + 5 when \u03bc is known."}
{"_id": "bd2afb3933cc68270327fbbd7d446fd86b5a7806", "title": "Adaptation of Word Vectors using Tree Structure for Visual Semantics", "text": "We propose a framework of word-vector adaptation, which makes vectors of visually similar concepts close to each other. Here, word vectors are real-valued vector representation of words, e.g., word2vec representation. Our basic idea is to assume that each concept has some hypernyms that are important to determine its visual features. For example, for a concept Swallow with hypernyms Bird, Animal and Entity, we believe Bird is the most important since birds have common visual features with their feathers etc. Adapted word vectors are obtained for each word by taking a weighted sum of a given original word vector and its hypernym word vectors. Our weight optimization makes vectors of visually similar concepts close to each other, by giving a large weight for such important hypernyms. We apply the adapted word vectors to zero-shot learning on the TRECVID 2014 semantic indexing dataset. We achieved 0.083 of Mean Average Precision, which is the best performance without using TRECVID training data to the best of our knowledge."}
{"_id": "16ae3881a409835e6e957f96233e7cfbab8481bc", "title": "Humanoid robot HRP-3", "text": "In this paper, the development of humanoid robot HRP-3 is presented. HRP-3, which stands for Humanoid Robotics Platform-3, is a human-size humanoid robot developed as the succeeding model of HRP-2. One of features of HRP-3 is that its main mechanical and structural components are designed to prevent the penetration of dust or spray. Another is that its wrist and hand are newly designed to improve manipulation. Software for a humanoid robot in a real environment is also improved. We also include information on mechanical features of HRP-3 and together with the newly developed hand. Also included are the technologies implemented in HRP-3 prototype. Electrical features and some experimental results using HRP-3 are also presented."}
{"_id": "a94e358289f956ee563242a5236f523548940328", "title": "Modeling and Simulation of Electric and Hybrid Vehicles", "text": "This paper discusses the need for modeling and simulation of electric and hybrid vehicles. Different modeling methods such as physics-based Resistive Companion Form technique and Bond Graph method are presented with powertrain component and system modeling examples. The modeling and simulation capabilities of existing tools such as Powertrain System Analysis Toolkit (PSAT), ADvanced VehIcle SimulatOR (ADVISOR), PSIM, and Virtual Test Bed are demonstrated through application examples. Since power electronics is indispensable in hybrid vehicles, the issue of numerical oscillations in dynamic simulations involving power electronics is briefly addressed"}
{"_id": "f72831ce1c4c253412310adc8cd602d90cc7973c", "title": "A Matlab-based modeling and simulation package for electric and hybrid electric vehicle design", "text": "This paper discusses a simulation and modeling package developed at Texas A&M University, V-Elph 2.01. VElph facilitates in-depth studies of electric vehicle (EV) and hybrid EV (HEV) configurations or energy management strategies through visual programming by creating components as hierarchical subsystems that can be used interchangeably as embedded systems. V-Elph is composed of detailed models of four major types of components: electric motors, internal combustion engines, batteries, and support components that can be integrated to model and simulate drive trains having all electric, series hybrid, and parallel hybrid configurations. V-Elph was written in the Matlab/Simulink graphical simulation language and is portable to most computer platforms. This paper also discusses the methodology for designing vehicle drive trains using the V-Elph package. An EV, a series HEV, a parallel HEV, and a conventional internal combustion engine (ICE) driven drive train have been designed using the simulation package. Simulation results such as fuel consumption, vehicle emissions, and complexity are compared and discussed for each vehicle."}
{"_id": "682c150f6a1ac49c355ad5ec992d8f3a364d9756", "title": "PSIM-based modeling of automotive power systems: conventional, electric, and hybrid electric vehicles", "text": "Automotive manufacturers have been taking advantage of simulation tools for modeling and analyzing various types of vehicles, such as conventional, electric, and hybrid electric vehicles. These simulation tools are of great assistance to engineers and researchers to reduce product-development cycle time, improve the quality of the design, and simplify the analysis without costly and time-consuming experiments. In this paper, a modeling tool that has been developed to study automotive systems using the power electronics simulator (PSIM) software is presented. PSIM was originally made for simulating power electronic converters and motor drives. This user-friendly simulation package is able to simulate electric/electronic circuits; however, it has no capability for simulating the entire system of an automobile. This paper discusses the PSIM validity as an automotive simulation tool by creating module boxes for not only the electrical systems, but also the mechanical, energy-storage, and thermal systems of the vehicles. These modules include internal combustion engines, fuel converters, transmissions, torque couplers, and batteries. Once these modules are made and stored in the library, the user can make the car model either a conventional, an electric, or a hybrid vehicle at will, just by dragging and dropping onto a schematic blank page."}
{"_id": "69c37d0ce5fbf3647eb6b60e23c080cf477183ec", "title": "Principles of object-oriented modeling and simulation with Modelica 2.1", "text": "Why should wait for some days to get or receive the principles of object oriented modeling and simulation with modelica 2 1 pdf book that you order? Why should you take it if you can get the faster one? You can find the same book that you order right here. This is it the book that you can receive directly after purchasing. This principles of object oriented modeling and simulation with modelica 2 1 pdf is well known book in the world, of course many people will try to own it. Why don't you become the first? Still confused with the way?"}
{"_id": "83b2339158240eadced7ca712b07169ae24667ec", "title": "ADVISOR 2.1: a user-friendly advanced powertrain simulation using a combined backward/forward approach", "text": "-ADVISOR 2.1 is the latest version of the National Renewable Energy Laboratory\u2019s advanced vehicle simulator. It was first developed in 1994 to support the U.S. Department of Energy hybrid propulsion system program, and is designed to be accurate, fast, flexible, easily sharable, and easy to use. This paper presents the model, focusing on its combination of forwardand backward-facing simulation approaches, and evaluates the model in terms of its design goals. ADVISOR predicts acceleration time to within 0.7% and energy use on the demanding US06 to within 0.6% for an underpowered series hybrid vehicle (0-100 km/h in 20 s). ADVISOR simulates vehicle performance on standard driving cycles between 2.6 and 8.0 times faster than a representative forward-facing vehicle model. Due in large part to ADVISOR\u2019s powerful graphical user interface and Web presence, over 800 users have downloaded ADVISOR from 45 different countries. Many of these users have contributed their own component data to the ADVISOR library."}
{"_id": "9b35f4dcaa1831e05adda9a8e37e6d590c79609b", "title": "Marital status and health: United States, 1999-2002.", "text": "OBJECTIVE\nThis report presents prevalence estimates by marital status for selected health status and limitations, health conditions, and health risk behaviors among U.S. adults, using data from the 1999-2002 National Health Interview Surveys (NHIS).\n\n\nMETHODS\nData for the U.S. civilian noninstitutionalized population were collected using computer-assisted personal interviews (CAPI). The household response rate for the NHIS was 88.7%. This report is based on a total of 127,545 interviews with sample adults aged 18 years and over, representing an overall response rate of 72.4% for the 4 years combined. Statistics were age-adjusted to the 2000 U.S. standard population. Marital status categories shown in this report are: married, widowed, divorced or separated, never married, and living with a partner.\n\n\nRESULTS\nRegardless of population subgroup (age, sex, race, Hispanic origin, education, income, or nativity) or health indictor (fair or poor health, limitations in activities, low back pain, headaches, serious psychological distress, smoking, or leisure-time physical inactivity), married adults were generally found to be healthier than adults in other marital status categories. Marital status differences in health were found in each of the three age groups studied (18-44 years, 45-64 years, and 65 years and over), but were most striking among adults aged 18-44 years. The one negative health indicator for which married adults had a higher prevalence was overweight or obesity. Married adults, particularly men, had high rates of overweight or obesity relative to adults in other marital status groups across most population subgroups studied. Never married adults were among the least likely to be overweight or obese."}
{"_id": "a98d9815e6694971e755b462ad77a6c9d847b147", "title": "Synthetic therapeutic peptides: science and market.", "text": "The decreasing number of approved drugs produced by the pharmaceutical industry, which has been accompanied by increasing expenses for R&D, demands alternative approaches to increase pharmaceutical R&D productivity. This situation has contributed to a revival of interest in peptides as potential drug candidates. New synthetic strategies for limiting metabolism and alternative routes of administration have emerged in recent years and resulted in a large number of peptide-based drugs that are now being marketed. This review reports on the unexpected and considerable number of peptides that are currently available as drugs and the chemical strategies that were used to bring them into the market. As demonstrated here, peptide-based drug discovery could be a serious option for addressing new therapeutic challenges."}
{"_id": "e845231c7ac288fdb354d136e9537ba8993fd1f0", "title": "H-Ras transfers from B to T cells via tunneling nanotubes", "text": "Lymphocytes form cell\u2013cell connections by various mechanisms, including intercellular networks through actin-supported long-range plasma membrane (PM) extensions, termed tunneling nanotubes (TNTs). In this study, we tested in vitro whether TNTs form between human antigen-presenting B cells and T cells following cell contact and whether they enable the transfer of PM-associated proteins, such as green fluorescent protein (GFP)-tagged H-Ras (GFP-H-Ras). To address this question, we employed advanced techniques, including cell trapping by optical tweezers and live-cell imaging by 4D spinning-disk confocal microscopy. First, we showed that TNTs can form after optically trapped conjugated B and T cells are being pulled apart. Next, we determined by measuring fluorescence recovery after photobleaching that GFP-H-Ras diffuses freely in the membrane of TNTs that form spontaneously between B and T cells during coculturing. Importantly, by 4D time-lapse imaging, we showed that GFP-H-Ras-enriched PM patches accumulate at the junction between TNTs and the T-cell body and subsequently transfer to the T-cell surface. Furthermore, the PM patches adopted by T cells were enriched for another B-cell-derived transmembrane receptor, CD86. As predicted, the capacity of GFP-H-Ras to transfer between B and T cells, during coculturing, was dependent on its normal post-transcriptional lipidation and consequent PM anchorage. In summary, our data indicate that TNTs connecting B and T cells provide a hitherto undescribed route for the transfer of PM patches containing, for example, H-Ras from B to T cells."}
{"_id": "a2ff5117ccd1eb3e42c6a606b8cecb4358d3ec84", "title": "SciMATE: A Novel MapReduce-Like Framework for Multiple Scientific Data Formats", "text": "Despite the popularity of MapReduce, there are several obstacles to applying it for developing scientific data analysis applications. Current MapReduce implementations require that data be loaded into specialized file systems, like the Hadoop Distributed File System (HDFS), whereas with rapidly growing size of scientific datasets, reloading data in another file system or format is not feasible. We present a framework that allows scientific data in different formats to be processed with a MapReduce like API. Our system is referred to as SciMATE, and is based on the MATE system developed at Ohio State. SciMATE is developed as a customizable system, which can be adapted to support processing on any of the scientific data formats. We have demonstrated the functionality of our system by creating instances that can be processing NetCDF and HDF5 formats as well as flat-files. We have also implemented three popular data mining applications and have evaluated their execution with each of the three instances of our system."}
{"_id": "c10226699acb5a778f3655cf093e8cd65a8adb42", "title": "Assessing the Impact of Real-time Ridesharing on Urban Traffic using Mobile Phone Data", "text": "Recently, smart-phone based technology has enabled ridesharing services to match customers making similar trips in realtime for a reduced rate and minimal inconvenience. But what are the impacts of such services on city-wide congestion? The answer lies in whether or not ridesharing adds to vehicle traffic by diverting non-driving trips like walking, transit, or cycling, or reduces vehicle traffic by diverting trips otherwise made in private, single occupancy cars or taxis. This research explores the impact of rideshare adoption on congestion using mobile phone data. We extract average daily origin-destination (OD) trips from mobile phone records and estimate the proportions of these trips made by auto and other non-auto travelers. Next, we match spatially and temporally similar trips, and assume a range of adoption rates for auto and non-auto users, in order to distill rideshare vehicle trips. Finally, for several adoption scenarios, we evaluate the impacts of congestion network-wide."}
{"_id": "0721f581323fa61548c25669a6963dcf7a74bee5", "title": "An overview of online trust: Concepts, elements, and implications", "text": "Lack of trust has been repeatedly identified as one of the most formidable barriers to people for engaging in e-commerce, involving transactions in which financial and personal information is submitted to merchants via the Internet. The future of e-commerce is tenuous without a general climate of online trust. Building consumer trust on the Internet presents a challenge for online merchants and is a research topic of increasing interest and importance. This paper provides an overview of the nature and concepts of trust from multi-disciplinary perspectives, and it reviews relevant studies that investigate the elements of online trust. Also, a framework of trust-inducing interface design features articulated from the existing literature is presented. The design features were classified into four dimensions, namely (1) graphic design, (2) structure design, (3) content design, and (4) social-cue design. By applying the design features identified within this framework to e-commerce web site interfaces, online merchants might then anticipate fostering optimal levels of trust in their customers. 2003 Elsevier Ltd. All rights reserved."}
{"_id": "3e8723310b5b6f6ba0d0d61cc970c54ead5a8872", "title": "Exploring compassion: a meta-analysis of the association between self-compassion and psychopathology.", "text": "Compassion has emerged as an important construct in studies of mental health and psychological therapy. Although an increasing number of studies have explored relationships between compassion and different facets of psychopathology there has as yet been no systematic review or synthesis of the empirical literature. We conducted a systematic search of the literature on compassion and mental health. We identified 20 samples from 14 eligible studies. All studies used the Neff Self Compassion Scale (Neff, 2003b). We employed meta-analysis to explore associations between self-compassion and psychopathology using random effects analyses of Fisher's Z correcting for attenuation arising from scale reliability. We found a large effect size for the relationship between compassion and psychopathology of r=-0.54 (95% CI=-0.57 to -0.51; Z=-34.02; p<.0001). Heterogeneity was significant in the analysis. There was no evidence of significant publication bias. Compassion is an important explanatory variable in understanding mental health and resilience. Future work is needed to develop the evidence base for compassion in psychopathology, and explore correlates of compassion and psychopathology."}
{"_id": "402f6f33fd2db4f17b591002723d793814362b1e", "title": "Autonomous boids", "text": "The classical work of bird-like objects of Reynolds [1] simulates polarized motion of groups of oriented particles, bird-like objects, or simply boids. To do this, three steering vectors are introduced. Cohesion is the tendency of boids to stay in the center of the flock, alignment smoothes their velocities to similar values, and separation helps them to avoid mutual collisions. If no impetus is introduced the boids wander somewhat"}
{"_id": "9da22d5240223e08cfaefd65cbf0504cb983c284", "title": "Evaluation of Quranic text retrieval system based on manually indexed topics", "text": "This paper investigates the effectiveness of a state of the art information retrieval (IR) system in the verse retrieval problem for Quranic text. The evaluation is based on manually indexed topics of the Quran that provides both the queries and the relevance judgments. Furthermore, the system is evaluated in both Malay and English environment. The performance of the system is measured based on the MAP, the precision at 1, 5 and 10, and the MRR scores. The results of the evaluation are promising, showing the IR system has many potential for the Quranic text retrieval."}
{"_id": "6d06e485853d24f28855279bf7b13c45cb3cad31", "title": "Lyapunov-based control for switched power converters", "text": "The fundamental properties, such as passivity or incremental passivity, of the network elements making up a switched power converter are examined. The nominal open-loop operation of a broad class of such converters is shown to be stable in the large via a Lyapunov argument. The obtained Lyapunov function is then shown to be useful for designing globally stabilizing controls that include adaptive schemes for handling uncertain nominal parameters. Numerical simulations illustrate the application of this control approach in DC-DC converters.<>"}
{"_id": "788f341d02130e1807edf88c8c64a77e4096437e", "title": "Pancreas Segmentation in Abdominal CT Scan: A Coarse-to-Fine Approach", "text": "Deep neural networks have been widely adopted for automatic organ segmentation from CT-scanned images. However, the segmentation accuracy on some small organs (e.g., the pancreas) is sometimes below satisfaction, arguably because deep networks are easily distracted by the complex and variable background region which occupies a large fraction of the input volume. In this paper, we propose a coarse-to-fine approach to deal with this problem. We train two deep neural networks using different regions of the input volume. The first one, the coarse-scaled model, takes the entire volume as its input. It is used for roughly locating the spatial position of the pancreas. The second one, the fine-scaled model, only sees a small input region covering the pancreas, thus eliminating the background noise and providing more accurate segmentation especially around the boundary areas. At the testing stage, we first use the coarse-scaled model to roughly locate the pancreas, then adopt the fine-scaled model to refine the initial segmentation in an iterative manner to obtain increasingly better segmentation. We evaluate our algorithm on the NIH pancreas segmentation dataset with 82 volumes, and outperform the state-of-theart [18] by more than 4%, measured by the Dice-S\u00f8rensen Coefficient (DSC). In addition, we report 62.43% DSC for our worst case, significantly better than 34.11% reported in [18]."}
{"_id": "e3fc67dfcf8e194f452fd734e4dfd99a53f2afeb", "title": "UNFOLD: a memory-efficient speech recognizer using on-the-fly WFST composition", "text": "Accurate, real-time Automatic Speech Recognition (ASR) requires huge memory storage and computational power. The main bottleneck in state-of-the-art ASR systems is the Viterbi search on a Weighted Finite State Transducer (WFST). The WFST is a graph-based model created by composing an Acoustic Model (AM) and a Language Model (LM) offline. Offline composition simplifies the implementation of a speech recognizer as only one WFST has to be searched. However, the size of the composed WFST is huge, typically larger than a Gigabyte, resulting in a large memory footprint and memory bandwidth requirements.\n In this paper, we take a completely different approach and propose a hardware accelerator for speech recognition that composes the AM and LM graphs on-the-fly. In our ASR system, the fully-composed WFST is never generated in main memory. On the contrary, only the subset required for decoding each input speech fragment is dynamically generated from the AM and LM models. In addition to the direct benefits of this on-the-fly composition, the resulting approach is more amenable to further reduction in storage requirements through compression techniques.\n The resulting accelerator, called UNFOLD, performs the decoding in real-time using the compressed AM and LM models, and reduces the size of the datasets from more than one Gigabyte to less than 40 Megabytes, which can be very important in small form factor mobile and wearable devices.\n Besides, UNFOLD improves energy-efficiency by orders of magnitude with respect to CPUs and GPUs. Compared to a state-of-the-art Viterbi search accelerators, the proposed ASR system outperforms by providing 31x reduction in memory footprint and 28% energy savings on average."}
{"_id": "548bc4203770450c21133bfb72c58f5fae0fbdf2", "title": "Visual-Inertial-Semantic Scene Representation for 3D Object Detection", "text": "We describe a system to detect objects in three-dimensional space using video and inertial sensors (accelerometer and gyrometer), ubiquitous in modern mobile platforms from phones to drones. Inertials afford the ability to impose class-specific scale priors for objects, and provide a global orientation reference. A minimal sufficient representation, the posterior of semantic (identity) and syntactic (pose) attributes of objects in space, can be decomposed into a geometric term, which can be maintained by a localization-and-mapping filter, and a likelihood function, which can be approximated by a discriminatively-trained convolutional neural network The resulting system can process the video stream causally in real time, and provides a representation of objects in the scene that is persistent: Confidence in the presence of objects grows with evidence, and objects previously seen are kept in memory even when temporarily occluded, with their return into view automatically predicted to prime re-detection."}
{"_id": "6984f2aadd187051546e8e6ddb505460b1bc327a", "title": "Computational Intelligence in Sports: A Systematic Literature Review", "text": "Recently, data mining studies are being successfully conducted to estimate several parameters in a variety of domains. Data mining techniques have attracted the attention of the information industry and society as a whole, due to a large amount of data and the imminent need to turn it into useful knowledge. However, the effective use of data in some areas is still under development, as is the case in sports, which in recent years, has presented a slight growth; consequently, many sports organizations have begun to see that there is a wealth of unexplored knowledge in the data extracted by them. Therefore, this article presents a systematic review of sports data mining. Regarding years 2010 to 2018, 31 types of research were found in this topic. Based on these studies, we present the current panorama, themes, the database used, proposals, algorithms, and research opportunities. Our findings provide a better understanding of the sports data mining potentials, besides motivating the scientific community to explore this timely and interesting topic."}
{"_id": "e2176d557793b7e2b80d8e5ec945078441356eb8", "title": "Design of a distributed energy-efficient clustering algorithm for heterogeneous wireless sensor networks", "text": "The clustering Algorithm is a kind of key technique used to reduce energy consumption. It can increase the scalability and lifetime of the network. Energy-efficient clustering protocols should be designed for the characteristic of heterogeneous wireless sensor networks. We propose and evaluate a new distributed energy-efficient clustering scheme for heterogeneous wireless sensor networks, which is called DEEC. In DEEC, the cluster-heads are elected by a probability based on the ratio between residual energy of each node and the average energy of the network. The epochs of being cluster-heads for nodes are different according to their initial and residual energy. The nodes with high initial and residual energy will have more chances to be the cluster-heads than the nodes with low energy. Finally, the simulation results show that DEEC achieves longer lifetime and more effective messages than current important clustering protocols in heterogeneous environments. 2006 Elsevier B.V. All rights reserved."}
{"_id": "b0fa1b8490d9f86de73556edc3a73e32019a9207", "title": "Reading the mind in cartoons and stories: an fMRI study of \u2018theory of mind\u2019 in verbal and nonverbal tasks", "text": "Previous functional imaging studies have explored the brain regions activated by tasks requiring 'theory of mind'--the attribution of mental states. Tasks used have been primarily verbal, and it has been unclear to what extent different results have reflected different tasks, scanning techniques, or genuinely distinct regions of activation. Here we report results from a functional magnetic resonance imaging study (fMRI) involving two rather different tasks both designed to tap theory of mind. Brain activation during the theory of mind condition of a story task and a cartoon task showed considerable overlap, specifically in the medial prefrontal cortex (paracingulate cortex). These results are discussed in relation to the cognitive mechanisms underpinning our everyday ability to 'mind-read'."}
{"_id": "c5098b6da33d0b966cb175a80bab70b829799a6b", "title": "A method for measuring helpfulness in online peer review", "text": "This paper describes an original method for evaluating peer review in online systems by calculating the helpfulness of an individual reviewer's response. We focus on the development of specific and machine scoreable indicators for quality in online peer review."}
{"_id": "c6c1ea09425f79bcef32491b785afc5d604222d0", "title": "Ultrawideband Metasurface Lenses Based on Off-Shifted Opposite Layers", "text": "Metasurfaces have demonstrated to be a low-cost solution for the development of directive antennas at high frequency. One of the opportunities of metasurfaces is the possibility to produce planar lenses. However, these lenses usually present a narrow band of operation. This limitation on bandwidth is more restrictive when the required range of refractive index is high. Here, we present a novel implementation of metasurfaces with low dispersion that can be employed for the design of ultrawideband planar lenses. The implementation consists of two mirrored metasurfaces that are displaced exactly a half unit cell. This displacement produces two effects: It increases the equivalent refractive index of the fundamental mode and flattens its frequency dependence."}
{"_id": "d5961c904a85aaf89bf1cd85727f4b3e4dc09fb5", "title": "Planning algorithms for s-curve trajectories", "text": "Although numerous researches on s-curve motion profiles have been carried out, up to date, no systematic investigation on the general model of polynomial s-curve motion profiles is considered. In this paper, the model of polynomial s-curve motion profiles is generalized in a recursive form. Based on that, a general algorithm to design s-curve trajectory with jerk bounded and time-optimal consideration is proposed. In addition, a special strategy for planning s-curve motion profiles using a trigonometric model is also presented. The algorithms are implemented on a linear motor system. Experimental results show the effectiveness and promising application ability of the algorithms in s-curve motion profiling."}
{"_id": "28b3bac665673755aca40849ca467ea57b246bd3", "title": "Fibroblast Growth Factor Receptor 2 (FGFR2) Mutation Related Syndromic Craniosynostosis", "text": "Craniosynostosis results from the premature fusion of cranial sutures, with an incidence of 1 in 2,100-2,500 live births. The majority of cases are non-syndromic and involve single suture fusion, whereas syndromic cases often involve complex multiple suture fusion. The fibroblast growth factor receptor 2 (FGFR2) gene is perhaps the most extensively studied gene that is mutated in various craniosynostotic syndromes including Crouzon, Apert, Pfeiffer, Antley-Bixler, Beare-Stevenson cutis gyrata, Jackson-Weiss, Bent Bone Dysplasia, and Seathre-Chotzen-like syndromes. The majority of these mutations are missense mutations that result in constitutive activation of the receptor and downstream molecular pathways. Treatment involves a multidisciplinary approach with ultimate surgical fixation of the cranial deformity to prevent further sequelae. Understanding the molecular mechanisms has allowed for the investigation of different therapeutic agents that can potentially be used to prevent the disorders. Further research efforts are need to better understand screening and effective methods of early intervention and prevention. Herein, the authors provide a comprehensive update on FGFR2-related syndromic craniosynostosis."}
{"_id": "18422d0eca8b06e98807e2663a2d9aed683402b6", "title": "EpiFast: a fast algorithm for large scale realistic epidemic simulations on distributed memory systems", "text": "Large scale realistic epidemic simulations have recently become an increasingly important application of high-performance computing. We propose a parallel algorithm, EpiFast, based on a novel interpretation of the stochastic disease propagation in a contact network. We implement it using a master-slave computation model which allows scalability on distributed memory systems.\n EpiFast runs extremely fast for realistic simulations that involve: (i) large populations consisting of millions of individuals and their heterogeneous details, (ii) dynamic interactions between the disease propagation, the individual behaviors, and the exogenous interventions, as well as (iii) large number of replicated runs necessary for statistically sound estimates about the stochastic epidemic evolution. We find that EpiFast runs several magnitude faster than another comparable simulation tool while delivering similar results.\n EpiFast has been tested on commodity clusters as well as SGI shared memory machines. For a fixed experiment, if given more computing resources, it scales automatically and runs faster. Finally, EpiFast has been used as the major simulation engine in real studies with rather sophisticated settings to evaluate various dynamic interventions and to provide decision support for public health policy makers."}
{"_id": "5c71a81f1b934a9fc7765a225ae4a8833071ad17", "title": "A \"Pile\" Metaphor for Supporting Casual Organization of Information", "text": "A user study was conducted to investigate how people deal with the flow of information in their workspaces. Subjects reported that, in an attempt to quickly and informally manage their information, they created piles of documents. Piles were seen as complementary to the folder filing system, which was used for more formal archiving. A new desktop interface element\u2013the pile\u2013 was developed and prototyped through an iterative process. The design includes direct manipulation techniques and support for browsing, and goes beyond physical world functionality by providing system assistance for automatic pile construction and reorganization. Preliminary user tests indicate the design is promising and raise issues that will be addressed in future work."}
{"_id": "21d4258394a9c8f0ea15f0792d67f7e645720ff6", "title": "Multiscale Combinatorial Grouping", "text": "We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates."}
{"_id": "250b1eb62ef3188e7172b63b64b7c9b133b370f9", "title": "Fast Approximate Energy Minimization via Graph Cuts", "text": "In this paper we address the problem of minimizing a large class of energy functions that occur in early vision. The major restriction is that the energy function\u2019s smoothness term must only involve pairs of pixels. We propose two algorithms that use graph cuts to compute a local minimum even when very large moves are allowed. The first move we consider is an \u03b1-\u03b2swap: for a pair of labels \u03b1, \u03b2, this move exchanges the labels between an arbitrary set of pixels labeled \u03b1 and another arbitrary set labeled \u03b2. Our first algorithm generates a labeling such that there is no swap move that decreases the energy. The second move we consider is an \u03b1-expansion: for a label \u03b1, this move assigns an arbitrary set of pixels the label \u03b1. Our second algorithm, which requires the smoothness term to be a metric, generates a labeling such that there is no expansion move that decreases the energy. Moreover, this solution is within a known factor of the global minimum. We experimentally demonstrate the effectiveness of our approach on image restoration, stereo and motion. 1 Energy minimization in early vision Many early vision problems require estimating some spatially varying quantity (such as intensity or disparity) from noisy measurements. Such quantities tend to be piecewise smooth; they vary smoothly at most points, but change dramatically at object boundaries. Every pixel p \u2208 P must be assigned a label in some set L; for motion or stereo, the labels are disparities, while for image restoration they represent intensities. The goal is to find a labeling f that assigns each pixel p \u2208 P a label fp \u2208 L, where f is both piecewise smooth and consistent with the observed data. These vision problems can be naturally formulated in terms of energy minimization. In this framework, one seeks the labeling f that minimizes the energy E(f) = Esmooth(f) + Edata(f). Here Esmooth measures the extent to which f is not piecewise smooth, while Edata measures the disagreement between f and the observed data. Many different energy functions have been proposed in the literature. The form of Edata is typically"}
{"_id": "62a2b1956166ecd5fd8a6b2928f45765f41b76ed", "title": "Real-time object classification in 3D point clouds using point feature histograms", "text": "This paper describes a LIDAR-based perception system for ground robot mobility, consisting of 3D object detection, classification and tracking. The presented system was demonstrated on-board our autonomous ground vehicle MuCAR-3, enabling it to safely navigate in urban traffic-like scenarios as well as in off-road convoy scenarios. The efficiency of our approach stems from the unique combination of 2D and 3D data processing techniques. Whereas fast segmentation of point clouds into objects is done in a 2 \u008dD occupancy grid, classifying the objects is done on raw 3D point clouds. For fast object feature extraction, we advocate the use of statistics of local point cloud properties, captured by histograms over point features. In contrast to most existing work on 3D point cloud classification, where real-time operation is often impossible, this combination allows our system to perform in real-time at 0.1s frame-rate."}
{"_id": "76193ed7cea1716ef1d912954b028169359f2a86", "title": "Latent Fingerprint Enhancement via Multi-Scale Patch Based Sparse Representation", "text": "Latent fingerprint identification plays an important role for identifying and convicting criminals in law enforcement agencies. Latent fingerprint images are usually of poor quality with unclear ridge structure and various overlapping patterns. Although significant advances have been achieved on developing automated fingerprint identification system, it is still challenging to achieve reliable feature extraction and identification for latent fingerprints due to the poor image quality. Prior to feature extraction, fingerprint enhancement is necessary to suppress various noises, and improve the clarity of ridge structures in latent fingerprints. Motivated by the recent success of sparse representation in image denoising, this paper proposes a latent fingerprint enhancement algorithm by combining the total variation model and multiscale patch-based sparse representation. First, the total variation model is applied to decompose the latent fingerprint into cartoon and texture components. The cartoon component with most of the nonfingerprint patterns is removed as the structured noise, whereas the texture component consisting of the weak latent fingerprint is enhanced in the next stage. Second, we propose a multiscale patch-based sparse representation method for the enhancement of the texture component. Dictionaries are constructed with a set of Gabor elementary functions to capture the characteristics of fingerprint ridge structure, and multiscale patch-based sparse representation is iteratively applied to reconstruct high-quality fingerprint image. The proposed algorithm cannot only remove the overlapping structured noises, but also restore and enhance the corrupted ridge structures. In addition, we present an automatic method to segment the foreground of latent image with the sparse coefficients and orientation coherence. Experimental results and comparisons on NIST SD27 latent fingerprint database are presented to show the effectiveness of the proposed algorithm and its superiority over existing algorithms."}
{"_id": "7aa21c9291a655423d971965fc3d46728de2deae", "title": "A study on intelligent personalized push notification with user history", "text": "In mobile application, the push notification is an important function to make users consume an item (eg. movie, novel). If a mobile application sends too many notifications to users without considering their preferences, it can cause them to turn off the notification or even deleting the mobile application. Thus, we develop a framework that selects a set of users who are likely to react to the push notification. To the best of our knowledge, we first apply recommendation techniques (i.e., collaborative filtering and content-based models) to the problem of discovering the target of push notification."}
{"_id": "c38a77cf264b9fbd806e37a54ab89ae3e628c4e1", "title": "Social media analytics and research test-bed (SMART dashboard)", "text": "We developed a social media analytics and research testbed (SMART) dashboard for monitoring Twitter messages and tracking the diffusion of information in different cities. SMART dashboard is an online geo-targeted search and analytics tool, including an automatic data processing procedure to help researchers to 1) search tweets in different cities; 2) filter noise (such as removing redundant retweets and using machine learning methods to improve precision); 3) analyze social media data from a spatiotemporal perspective, and 4) visualize social media data in various ways (such as weekly and monthly trends, top URLs, top retweets, top mentions, or top hashtags). By monitoring social messages in geo-targeted cities, we hope that SMART dashboard can assist researchers investigate and monitor various topics, such as flu outbreaks, drug abuse, and Ebola epidemics at the municipal level."}
{"_id": "8c156e938358dfb24b7c03ed65d19d803411db29", "title": "Racing Bib Numbers Recognition", "text": "Running races, such as marathons, are broadly covered by professional as well as amateur photographers. This leads to a constantly growing number of photos covering a race, making the process of identifying a particular runner in such datasets difficult. Today, such identification is often done manually. In running races, each competitor has an identification number, called the Racing Bib Number (RBN), used to identify that competitor during the race. RBNs are usually printed on a paper or cardboard tag and pined onto the competitor\u2019s T-shirt during the race. We introduce an automatic system that receives a set of natural images taken in running sports events and outputs the participants\u2019 RBN."}
{"_id": "a023186eb15325673b25a24115963cd160130787", "title": "An improved Opposition-Based Sine Cosine Algorithm for global optimization", "text": "Real life optimization problems require techniques that properly explore the search spaces to obtain the best solutions. In this sense, it is common that traditional optimization algorithms fail in local optimal values. The Sine Cosine Algorithms (SCA) has been recently proposed; it is a global optimization approach based on two trigonometric functions. SCA uses the sine and cosine functions to modify a set of candidate solutions; such operators create a balance between exploration and exploitation of the search space. However, like other similar approaches, SCA tends to be stuck into sub-optimal regions that it is reflected in the computational effort required to find the best values. This situation occurs due that the operators used for exploration do not work well to analyze the search space. This paper presents an improved version of SCA that considers the opposition based learning (OBL) as a mechanism for a better exploration of the search space generating more accurate solutions. OBL is a machine learning strategy commonly used to increase the performance of metaheuristic algorithms. OBL considers the opposite position of a solution in the search space. Based on the objective function value, the OBL selects the best element between the original solution and its opposite position; this task increases the accuracy of the optimization process. The hybridization of concepts from different fields is crucial in intelligent and expert systems; it helps to combine the advantages of algorithms to generate more efficient approaches. The proposed method is an example of this combination; it has been tested over several benchmark functions and engineering problems. Such results support the efficacy of the proposed approach to find the optimal solutions in complex search spaces. \u00a9 2017 Elsevier Ltd. All rights reserved. e w g t S f t ( fi b M a b"}
{"_id": "4c74a0be4cf9878ab98912801ea12e7a44ae04b5", "title": "Clustering Algorithms in Biomedical Research: A Review", "text": "Applications of clustering algorithms in biomedical research are ubiquitous, with typical examples including gene expression data analysis, genomic sequence analysis, biomedical document mining, and MRI image analysis. However, due to the diversity of cluster analysis, the differing terminologies, goals, and assumptions underlying different clustering algorithms can be daunting. Thus, determining the right match between clustering algorithms and biomedical applications has become particularly important. This paper is presented to provide biomedical researchers with an overview of the status quo of clustering algorithms, to illustrate examples of biomedical applications based on cluster analysis, and to help biomedical researchers select the most suitable clustering algorithms for their own applications."}
{"_id": "8decee97a7238011a732678997e4c3e5749746a2", "title": "Towards an automated system for short-answer assessment using ontology mapping", "text": "A key concern for any e-assessment tool (computer assisted assessment) is its efficiency in assessing the learner\u2019s knowledge, skill set and ability. Multiple-choice questions are the most common means of assessment used in e-assessment systems, and are also successful. An efficient e-assessment system should use variety of question types including shortanswers, essays etc. and modes of response to assess learner\u2019s performance. In this paper, we consider the task of assessing short-answer questions. Several researches have been performed on the evaluation and assessment of short-answer questions and many products are deployed to assess the same as part of e-learning systems. We propose an automated system for assessing short-answers using ontology mapping. We also compare our approach with some existing systems and give an overall evaluation of experiment results, which shows that our approach using ontology mapping gives an optimized result."}
{"_id": "120f1a81fd4abd089f47a335d0771b4162e851e8", "title": "A Direct Least-Squares (DLS) method for PnP", "text": "In this work, we present a Direct Least-Squares (DLS) method for computing all solutions of the perspective-n-point camera pose determination (PnP) problem in the general case (n \u2265 3). Specifically, based on the camera measurement equations, we formulate a nonlinear least-squares cost function whose optimality conditions constitute a system of three third-order polynomials. Subsequently, we employ the multiplication matrix to determine all the roots of the system analytically, and hence all minima of the LS, without requiring iterations or an initial guess of the parameters. A key advantage of our method is scalability, since the order of the polynomial system that we solve is independent of the number of points. We compare the performance of our algorithm with the leading PnP approaches, both in simulation and experimentally, and demonstrate that DLS consistently achieves accuracy close to the Maximum-Likelihood Estimator (MLE)."}
{"_id": "7ec474923d47930b44790e043e070abcb728b73d", "title": "A Soft Robotic Orthosis for Wrist Rehabilitation 1", "text": "In the United States about 795,000 people suffer from a stroke each year [1]. Of those who survive the stroke, the majority experience paralysis on one side of their body, a condition known as hemiparesis. Physical therapy has been shown to restore functionality of the paretic limbs, especially when done early and often [2]. However, the effectiveness of therapy is limited by the availability of therapists and the amount of practice that patients do on their own. Robot-assisted therapy has been explored as a means of guiding patients through the time-intensive exercise regimes that most therapy techniques prescribe. Several wearable, robotic orthoses for the hand and wrist have been developed and are still being developed today [3]. However, few of these existing solutions allow for any significant range of motion, and those that do only offer one degree of freedom. The assisted degree of freedom is almost always flexion/extension, despite the fact that supination/ pronation is crucial in nearly all activities of daily living. In addition, current devices are often large, heavy, and uncomfortable for the wearer, presenting significant deterrents to practice. This paper presents a soft wearable device for the wrist that provides motion-specific assistance with rehabilitation for hemiparetic stroke patients at home. Unlike conventional robotassisted rehabilitation, this pneumatically actuated orthosis is portable, soft, and lightweight, making it more suitable for use outside of the clinic. In addition, this device supports all the degrees of freedom of the wrist, including supination and pronation, which are critical to many tasks."}
{"_id": "201fed356fc9d39d3b9d81b029ba71cc798cc7ec", "title": "Video enhancement using per-pixel virtual exposures", "text": "We enhance underexposed, low dynamic range videos by adaptively and independently varying the exposure at each photoreceptor in a post-process. This virtual exposure is a dynamic function of both the spatial neighborhood and temporal history at each pixel. Temporal integration enables us to expand the image's dynamic range while simultaneously reducing noise. Our non-linear exposure variation and denoising filters smoothly transition from temporal to spatial for moving scene elements. Our virtual exposure framework also supports temporally coherent per frame tone mapping. Our system outputs restored video sequences with significantly reduced noise, increased exposure time of dark pixels, intact motion, and improved details."}
{"_id": "27b2b7b1c70ece4fbdf6c9746cdddb9e616ab841", "title": "Effects of pomegranate juice consumption on oxidative stress in patients with type 2 diabetes: a single-blind, randomized clinical trial.", "text": "Increased free radicals production due to hyperglycemia produces oxidative stress in patients with diabetes. Pomegranate juice (PJ) has antioxidant properties. This study was conducted to determine the effects of PJ consumption in oxidative stress in type 2 diabetic patients. This study was a randomized clinical trial performed on 60, 40-65 years old diabetic patients. The patients were randomly allocated either to PJ consumption group or control. Patients in PJ group consumed 200\u2009ml of PJ daily for six weeks. Sex distribution and the mean age were not different between two groups. After six weeks intervention, oxidized LDL and anti-oxidized LDL antibodies decreased and total serum antioxidant capacity and arylesterase activity of paraoxonase increased significantly in the PJ-treated group compared to the control group. Our data have shown that six weeks supplementation of PJ could have favorable effects on oxidative stress in patients with type 2 diabetes (T2D)."}
{"_id": "425d6a85bc0b81c6e288480f5c6f5dddba0f1089", "title": "MAG: A Multilingual, Knowledge-base Agnostic and Deterministic Entity Linking Approach", "text": "Entity linking has recently been the subject of a significant body of research. Currently, the best performing approaches rely on trained mono-lingual models. Porting these approaches to other languages is consequently a difficult endeavor as it requires corresponding training data and retraining of the models. We address this drawback by presenting a novel multilingual, knowledge-base agnostic and deterministic approach to entity linking, dubbed MAG. MAG is based on a combination of context-based retrieval on structured knowledge bases and graph algorithms. We evaluate MAG on 23 data sets and in 7 languages. Our results show that the best approach trained on English datasets (PBOH) achieves a micro F-measure that is up to 4 times worse on datasets in other languages. MAG on the other hand achieves state-of-the-art performance on English datasets and reaches a micro F-measure that is up to 0.6 higher than that of PBOH on non-English languages."}
{"_id": "71fbe95871241feebfba7a32aa9a51d0638c0b39", "title": "A Memory-Network Based Solution for Multivariate Time-Series Forecasting", "text": "Multivariate time series forecasting is extensively studied throughout the years with ubiquitous applications in areas such as finance, traffic, environment, etc. Still, concerns have been raised on traditional methods for incapable of modeling complex patterns or dependencies lying in real word data. To address such concerns, various deep learning models, mainly Recurrent Neural Network (RNN) based methods, are proposed. Nevertheless, capturing extremely long-term patterns while effectively incorporating information from other variables remains a challenge for time-series forecasting. Furthermore, lack-of-explainability remains one serious drawback for deep neural network models. Inspired by Memory Network proposed for solving the question-answering task, we propose a deep learning based model named Memory Timeseries network (MTNet) for time series forecasting. MTNet consists of a large memory component, three separate encoders, and an autoregressive component to train jointly. Addtionally, the attention mechanism designed enable MTNet to be highly interpretable. We can easily tell which part of the historic data is referenced the most."}
{"_id": "5bc848fcbeed1cffb55098c4d7cef4596576e779", "title": "Chapter 17 Wireless Sensor Network Security : A Survey", "text": "As wireless sensor networks continue to grow, so does the need for effective security mechanisms. Because sensor networks may interact with sensitive data and/or operate in hostile unattended environments, it is imperative that these security concerns be addressed from the beginning of the system design. However, due to inherent resource and computing constraints, security in sensor networks poses different challenges than traditional network/computer security. There is currently enormous research potential in the field of wireless sensor network security. Thus, familiarity with the current research in this field will benefit researchers greatly. With this in mind, we survey the major topics in wireless sensor network security, and present the obstacles and the requirements in the sensor security, classify many of the current attacks, and finally list their corresponding defensive measures."}
{"_id": "c219fa405ce0734a23b067376481f289789ef197", "title": "A Nonlinear Controller Based on a Discrete Energy Function for an AC/DC Boost PFC Converter", "text": "AC/DC converter systems generally have two stages: an input power factor correction (PFC) boost ac/dc stage that converts input ac voltage to an intermediate dc voltage while reducing the input current harmonics injected to the grid, followed by a dc/dc converter that steps up or down the intermediate dc-bus voltage as required by the output load and provides high-frequency galvanic isolation. Since a low-frequency ripple (second harmonic of the input ac line frequency) exists in the output voltage of the PFC ac/dc boost converter due to the power ripple, the voltage loop in the conventional control system must have a very low bandwidth in order to avoid distortions in the input current waveform. This results in the conventional PFC controller having a slow dynamic response against load variations with adverse overshoots and undershoots. This paper presents a new control approach that is based on a novel discrete energy function minimization control law that allows the front-end ac/dc boost PFC converter to operate with faster dynamic response than the conventional controllers and simultaneously maintain near unity input power factor. Experimental results from a 3-kW ac/dc converter built for charging traction battery of a pure electric vehicle are presented in this paper to validate the proposed control method and its superiority over conventional controllers."}
{"_id": "190875cda0d1fb86fc6036a9ad7d46fc1f9fc19b", "title": "Tracking Sentiment in Mail: How Genders Differ on Emotional Axes", "text": "With the widespread use of email, we now have access to unprecedented amounts of text that we ourselves have written. In this paper, we show how sentiment analysis can be used in tandem with effective visualizations to quantify and track emotions in many types of mail. We create a large word\u2013emotion association lexicon by crowdsourcing, and use it to compare emotions in love letters, hate mail, and suicide notes. We show that there are marked differences across genders in how they use emotion words in work-place email. For example, women use many words from the joy\u2013sadness axis, whereas men prefer terms from the fear\u2013trust axis. Finally, we show visualizations that can help people track emotions in their emails."}
{"_id": "4b07a4d8643f9a33dc2372a8c25a67be046a44b8", "title": "Bidding algorithms for simultaneous auctions", "text": "This paper introduces RoxyBot, one of the top-scoring agents in the First International Trading Agent Competition. A TAC agent simulates one vision of future travel agents: it represents a set of clients in simultaneous auctions, trading complementary (e.g., airline tickets and hotel reservations) and substitutable (e.g., symphony and theater tickets) goods. RoxyBot faced two key technical challenges in TAC: (i) allocation---assigning purchased goods to clients at the end of a game instance so as to maximize total client utility, and (ii) completion---determining the optimal quantity of each resource to buy and sell given client preferences, current holdings, and market prices. For the dimensions of TAC, an optimal solution to the allocation problem is tractable, and RoxyBot uses a search algorithm based on A* to produce optimal allocations. An optimal solution to the completion problem is also tractable, but in the interest of minimizing bidding cycle time, RoxyBot solves the completion problem using beam search, producing approximately optimal completions. RoxyBot's completer relies on an innovative data structure called a priceline."}
{"_id": "e95c610484468a9c53fc73b8adfb62bf8690e6ed", "title": "Bandwidth Enhancement of a Single-Feed Circularly Polarized Antenna Using a Metasurface: Metamaterial-based wideband CP rectangular microstrip antenna.", "text": "In this article, the authors use a 7 x 7 rectangular-ring unit metasurface and a single-feed, CP, rectangular, slotted patch antenna to enhance bandwidth. The antenna consists of a rectangular, slotted patch radiator, a metasurface with an array of rectangular-ring cells, and a coaxial feed. Using the metasurface results in a wide CP bandwidth sevenfold the size of a conventional antenna. The measured 3-dB-AR bandwidth of an antenna prototype printed onto an FR4 substrate is 28.3% (3.62 GHz-4.75 GHz) with 2:1-voltage standing wave ratio (VSWR) bandwidth of 36%. A 6.0-dBic, boresight gain with a variation of 1.5 dB is achieved across the frequency band of 3.35 GHz-4.75 GHz, with the maximum gain of 7.4 dBic at 4.1 GHz. We measure and compare the antenna prototype with a simulation using CST Microwave Studio. Parametric studies aid in understanding the proposed antenna's operation mechanism."}
{"_id": "7e9db0d2a5c60e773cee15cdb07a966ee3a2e839", "title": "Dynamic Online Pricing with Incomplete Information", "text": "Consider the pricing decision for a manager at large online retailer, such as Amazon.com, that sells millions of products. The pricing manager must decide on real-time prices for each of these product. Due to the large number of products, the manager must set retail prices without complete demand information. A manager can run price experiments to learn about demand and maximize long run profits. There are two aspects that make the online retail pricing different from traditional brick and mortar settings. First, due to the number of products the manager must be able to automate pricing. Second, an online retailer can make frequent price changes. Pricing differs from other areas of online marketing where experimentation is common, such as online advertising or website design, as firms do not randomize prices to different customers at the same time. In this paper we propose a dynamic price experimentation policy where the firm has incomplete demand information. For this general setting, we derive a pricing algorithm that balances earning profit immediately and learning for future profits. The proposed approach marries statistical machine learning and economic theory. In particular we combine multi-armed bandit (MAB) algorithms with partial identification of consumer demand into a unique pricing policy. Our automated policy solves this problem using a scalable distribution-free algorithm. We show that our method converges to the optimal price faster than standard machine learning MAB solutions to the problem. In a series of Monte Carlo simulations, we show that the proposed approach perform favorably compared to methods in computer science and revenue management. \u2217Ross School of Business, University of Michigan. Contact: ericmsch@umich.edu \u2020Rady School of Management, University of California, San Diego. Contact: kamisra@ucsd.edu \u2021Department of Electrical Engineering and Computer Science, University of Michigan. Contact: jabernet@umich.edu"}
{"_id": "98a44ee30b38c6f51c27464526224db4f5557bfe", "title": "The biological principles of swarm intelligence", "text": "The roots of swarm intelligence are deeply embedded in the biological study of self-organized behaviors in social insects. From the routing of traffic in telecommunication networks to the design of control algorithms for groups of autonomous robots, the collective behaviors of these animals have inspired many of the foundational works in this emerging research field. For the first issue of this journal dedicated to swarm intelligence, we review the main biological principles that underlie the organization of insects\u2019 colonies. We begin with some reminders about the decentralized nature of such systems and we describe the underlying mechanisms of complex collective behaviors of social insects, from the concept of stigmergy to the theory of self-organization in biological systems. We emphasize in particular the role of interactions and the importance of bifurcations that appear in the collective output of the colony when some of the system\u2019s parameters change. We then propose to categorize the collective behaviors displayed by insect colonies according to four functions that emerge at the level of the colony and that organize its global behavior. Finally, we address the role of modulations of individual behaviors by disturbances (either environmental or internal to the colony) in the overall flexibility of insect colonies. We conclude that future studies about self-organized biological behaviors should investigate such modulations to better understand how insect colonies adapt to uncertain worlds."}
{"_id": "e6bdbe24f5cd7195af0bb7db343a77f7534b2491", "title": "Illuminating Light: An Optical Design Tool with a Luminous-Tangible Interface", "text": "We describe a novel system for rapid prototyping of laserbased optical and holographic layouts. Users of this optical prototyping tool \u2013 called the Illuminating Light system \u2013 move physical representations of various optical elements about a workspace, while the system tracks these components and projects back onto the workspace surface the simulated propagation of laser light through the evolving layout. This application is built atop the Luminous Room infrastructure, an aggregate of interlinked, computer-controlled projector-camera units called I/O Bulbs. Philosophically, the work embodies the emerging ideas of the Luminous Room and builds on the notions of \u2018graspable media\u2019. We briefly introduce the I/O Bulb and Luminous Room concepts and discuss their current implementations. After an overview of the optical domain that the Illuminating Light system is designed to address, we present the overall system design and implementation, including that of an intermediary toolkit called voodoo which provides a general facility for object identification and tracking."}
{"_id": "cbc3a39442f0c4a501d9ff35d8ecbaa88377692d", "title": "Congruent evidence from \u03b1-tubulin and \u03b2-tubulin gene phylogenies for a zygomycete origin of microsporidia", "text": "The origin of microsporidia and the evolutionary relationships among the major lineages of fungi have been examined by molecular phylogeny using a-tubulin and b-tubulin. Chytrids, basidiomycetes, ascomycetes, and microsporidia were all recovered with high support, and the zygomycetes were consistently paraphyletic. The microsporidia were found to branch within zygomycetes, and showed relationships with members of the Entomophthorales and Zoopagales. This provides support for the microsporidia having evolved from within the fungi, however, the tubulin genes are difficult to interpret unambiguously since fungal and microsporidian tubulins are very divergent. Rapid evolutionary rates a characteristic of practically all microsporidian genes studied, so determining their evolutionary history will never be completely free of such difficulties. While the tubulin phylogenies do not provide a decisive conclusion, they do further narrow the probable origin of microsporidia to a zygomycete-like ancestor. 2002 Elsevier Science (USA). All rights reserved."}
{"_id": "114a4222c53f1a6879f1a77f1bae2fc0f8f55348", "title": "The Galois / Counter Mode of Operation ( GCM )", "text": ""}
{"_id": "b37b6c95bda81d7c1f84edc22da970d8f3a8519e", "title": "Lattice Attacks on Digital Signature Schemes", "text": "We describe a lattice attack on the Digital Signature Algorithm (DSA) when used to sign many messages, mi, under the assumption that a proportion of the bits of each of the associated ephemeral keys, yi, can be recovered by alternative techniques."}
{"_id": "c8f6a8f081f49325eb97600eca05620887092d2c", "title": "Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems", "text": "By carefully measuring the amount of time required to perform private key operations, attackers may be able to nd xed Di eHellman exponents, factor RSA keys, and break other cryptosystems. Against a vulnerable system, the attack is computationally inexpensive and often requires only known ciphertext. Actual systems are potentially at risk, including cryptographic tokens, network-based cryptosystems, and other applications where attackers can make reasonably accurate timing measurements. Techniques for preventing the attack for RSA and Di e-Hellman are presented. Some cryptosystems will need to be revised to protect against the attack, and new protocols and algorithms may need to incorporate measures to prevent timing attacks."}
{"_id": "0dba88589f12be4b7438da48056c44a97844450e", "title": "The RC5 Encryption Algorithm", "text": "This document describes the RC encryption algorithm a fast symmetric block cipher suitable for hardware or software imple mentations A novel feature of RC is the heavy use of data dependent rotations RC has a variable word size a variable number of rounds and a variable length secret key The encryption and decryption algorithms are exceptionally simple"}
{"_id": "826dc5774b2c2430cef0dfc4d18bc35947106c6d", "title": "Knowledge management , flexibility and firm performance : The effects of family involvement", "text": ""}
{"_id": "f9166911d7ac1afb5c230813edce76109876276a", "title": "Generation of Test Cases from Software Requirements Using Natural Language Processing", "text": "Software testing plays an important role in early verification of software systems and it enforces quality in the system under development. One of the challenging tasks in the software testing is generation of software test cases. There are many existing approaches to generate test cases like using uses case, activity diagrams and sequence diagrams; they have their own limitations such as inability to capture test cases for non functional requirements and etc. Thus these techniques have restricted use in acceptance testing and are not effective for verification & acceptance of large software system. If software requirements are stated using semi-formal or formal methods then it is difficult for the testers and other third party domain experts to test the system. It also requires much expertise in interpreting requirements and only limited number of persons can understand them. This paper proposes an approach to generate test case from software requirements expressed in natural language using natural language processing technique."}
{"_id": "d30fca9f090bde034e6e0882502ec0846fe35b45", "title": "Cellular planning for next generation wireless mobile network using novel energy efficient CoMP", "text": "Cellular planning also called radio network planning is a crucial stage for the deployment of a wireless network. The enormous increase in data traffic requires augmentation of coverage, capacity, throughput, quality of service and optimized cost of the network. The main goal of cellular planning is to enhance the spectral efficiency and hence the throughput of a network. A novel CoMP algorithm has been discussed with two-tier heterogeneous network. Number of clusters has been obtained using V-R (variance ratio) Criterion. The centroid of a cluster obtained using K-means algorithm provides the deployment of BS position. Application of CoMP in this network using DPS approach with sleep mode of power saving, provides higher energy efficiency, SINR and throughput as compared to nominal CoMP. CoMP basically describes a scheme in which a group of base stations (BS) dynamically co-ordinate and co-operate among themselves to convert interference into a beneficial signal. Network planning using stochastic method and Voronoi Tessellation with two-tier network has been applied to a dense region of Surat city in Gujarat state of India. The results show clear improvement in signal-to-interference plus noise ratio (SINR) by 25% and energy efficiency of the network by 28% using the proposed CoMP transmission."}
{"_id": "7e9d77c808961ff42f65f8bd5e0ade60bfe562d1", "title": "CardBoardiZer: Creatively Customize, Articulate and Fold 3D Mesh Models", "text": "Computer-aided design of flat patterns allows designers to prototype foldable 3D objects made of heterogeneous sheets of material. We found origami designs are often characterized by pre-synthesized patterns and automated algorithms. Furthermore, augmenting articulated features to a desired model requires time-consuming synthesis of interconnected joints. This paper presents CardBoardiZer, a rapid cardboard based prototyping platform that allows everyday sculptural 3D models to be easily customized, articulated and folded. We develop a building platform to allow the designer to 1) import a desired 3D shape, 2) customize articulated partitions into planar or volumetric foldable patterns, and 3) define rotational movements between partitions. The system unfolds the model into 2D crease-cut-slot patterns ready for die-cutting and folding. In this paper, we developed interactive algorithms and validated the usability of CardBoardiZer using various 3D models. Furthermore, comparisons between CardBoardiZer and methods of Autodesk\u00ae 123D Make, demonstrated significantly shorter time-to-prototype and ease of fabrication."}
{"_id": "07925910d45761d96269fc3bdfdc21b1d20d84ad", "title": "Deep Learning without Poor Local Minima", "text": "In this paper, we prove a conjecture published in 1989 and also partially address an open problem announced at the Conference on Learning Theory (COLT) 2015. For an expected loss function of a deep nonlinear neural network, we prove the following statements under the independence assumption adopted from recent work: 1) the function is non-convex and non-concave, 2) every local minimum is a global minimum, 3) every critical point that is not a global minimum is a saddle point, and 4) the property of saddle points differs for shallow networks (with three layers) and deeper networks (with more than three layers). Moreover, we prove that the same four statements hold for deep linear neural networks with any depth, any widths and no unrealistic assumptions. As a result, we present an instance, for which we can answer to the following question: how difficult to directly train a deep model in theory? It is more difficult than the classical machine learning models (because of the non-convexity), but not too difficult (because of the nonexistence of poor local minima and the property of the saddle points). We note that even though we have advanced the theoretical foundations of deep learning, there is still a gap between theory and practice."}
{"_id": "689246187740f686a3dc12a41b3e4e54f1399b87", "title": "Aggregating User Input in Ecology Citizen Science Projects", "text": "Camera traps (remote, automatic cameras) are revolutionizing large-scale studies in ecology. The Serengeti Lion Project has used camera traps to produce over 1.5 million pictures of animals in the Serengeti. To analyze these pictures, the Project created Snapshot Serengeti, a citizen science website where volunteers can help classify animals. To increase accuracy, each photo is shown to multiple users and a critical step is aggregating individual classifications. In this paper, we present a new aggregation algorithm which achieves an accuracy of 98.6%, better than many human experts. Our algorithm also requires fewer users per photo than existing methods. The algorithm is intuitive and designed so that nonexperts can understand the end results. Ecology seeks to understand the interrelationships of species with one another and with their environment. Monitoring many species of animals simultaneously has traditionally been very difficult. Camera traps (remote, automatic cameras) are revolutionizing ecological research by providing a non-invasive, cost-effective approach for large-scale monitoring. Ecologists are currently using these traps in the Serengeti National Park, one of the world\u2019s last large intact natural areas, to understand the dynamics of its dozens of large mammals (Swanson et al. 2014). As of November 2013, the ecologists have spent 3 years using more than 200 cameras spread over 1,125 square kilometers to take more than 1.5 million photos. In order to process so many images, the ecologists, along with Zooniverse (a citizen science platform), created Snapshot Serengeti, a web site where over 35,000 volunteers helped classify the species in the photos (Zooniverse 2014a). Since volunteers can make mistakes, each photo is shown to multiple users. A critical step is to combine these classifications into one aggregate classification: e.g., if 4 out of 5 users classify a photo as containing a zebra, we might \u21e4These authors are part of the Zooniverse project, funded in part by MICO. Copyright c 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. decide that the photo does indeed contain a zebra. In this paper, we develop an aggregation algorithm for Snapshot Serengeti. Classification aggregation is an active area in machine learning; however, we show that much of the existing literature is based on assumptions which do not apply to Snapshot Serengeti, and must therefore develop a novel approach. 1 In addition, current machine learning work on classification aggregation often draws on ideas such as expectation maximization and Bayesian reasoning. While powerful, these methods obscure the connection between input and results, making it hard for non-machine learning experts to understand the end results. Thus, our algorithm must be both accurate and intuitive. Our paper proceeds as follows. We begin by discussing Snapshot Serengeti and previous machine learning literature on classifier aggregation. We then discuss why much of this existing work is not applicable to Snapshot Serengeti. We next introduce a new classifier aggregation algorithm for Snapshot Serengeti and compare it against the current algorithm. Finally, we conclude and discuss possible future work."}
{"_id": "572e1b1f9fc7414f8bfd9aabbb34da67619a25ae", "title": "An approach for offensive text detection and prevention in Social Networks", "text": "Social Network has become a place where people from every corner of the world has established a virtual civilization. In this virtual community, people used to share their views, express their feelings, photos, videos, blogs, etc. Social Networking Sites like Facebook, Twitters, etc. has given a platform to share innumerable contents with just a click of a button. However, there is no restriction applied by them for the uploaded content. These uploaded content may contains abusive words, explicit images which may be unsuitable for social platforms. As such there is no defined mechanism for restricting offensive contents from publishing on social sites. To solve this problem we have used our proposed approach. In our approach we are developing a social network prototype for implementing our approach for automatic filtering of offensive content in social network. Many popular social networking sites today don't have proper mechanism for restricting offensive contents. They use reporting methods in which user report if the content is abuse. This requires substantial human efforts and time. In this paper, we applied pattern matching algorithm for offensive keyword detection from social networking comments and prevent it from publishing on social platform. Apart from conventional method of reporting abusive contents by users our approach does not requires any human intervention and thus restrict offensive words by detecting and preventing it automatically."}
{"_id": "c0fba6c576daa416b0db591594681c130f785658", "title": "Major League Soccer Scheduling Problem", "text": "Major League Soccer is the highest level mens professional soccer league in United States and Canada, consisting of 20 teams, with 17 in United States and 3 in Canada. The League is divided by two Conferences: Western and Eastern with 10 teams each. The regular season of the League runs from early March to late October each year. It is important to look at MLS mainly for three reasons. Firstly, it is a fast-growing league, with only 10 teams competed in 1996 when the League started. Secondly, the popularity of the game has increased dramatically, with the average per-game attendance in MLS increased by 40% in the past decade, which exceeds that of NHL and NBA. Lastly, the big TV broadcasting contract signed with ESPN, Fox Sports, and Univision has demonstrated the huge commercial value of the League."}
{"_id": "2bb887f21c78de312046dc826564708d489bb77e", "title": "Information Security and Privacy: Rethinking Governance Models", "text": "Concerns about information security and privacy continue to make headlines in the media and rate as serious issues in business and public surveys. Conventional thinking suggests that enhancements to governance models and business practices improve the performance of individual businesses. However, is this enough? Good governance is based on clear rights and responsibilities, and this panel will address the contention that such clarity is lacking in the treatment of personal information. As a result, certain types of business innovation may be constrained and good practices may remain tick box exercises, disconnected from wider business objectives. More radical thinking would go beyond incremental improvements to the status quo and recognize the profound challenges to privacy and information security created by digital technology. This panel will bring together a range of research disciplines and senior business representatives to critique current practice and develop a future research agenda."}
{"_id": "9ae7cd3170e1feb8aeb80aa38bc813c93c6c0218", "title": "Software Test Automation practices in agile development environment: An industry experience report", "text": "The increased importance of Test Automation in software engineering is very evident considering the number of companies investing in automated testing tools nowadays, with the main aim of preventing defects during the development process. Test Automation is considered an essential activity for agile methodologies being the key to speed up the quality assurance process. This paper presents empirical observations and the challenges of a test team new to agile practices and Test Automation using open source testing tools integrated in software projects that use the Scrum methodology. The results obtained showed some important issues to be discussed and the Test Automation practices collected based on the experiences and lessons learned."}
{"_id": "f11e2579cc3584f28e8b07bab572ac80a1dd7416", "title": "Radial-Flux Permanent-Magnet Hub Drives: A Comparison Based on Stator and Rotor Topologies", "text": "In this paper, the performances of electric vehicle (EV) in-wheel (hub) nonoverlap-winding permanent-magnet motors with different stator and rotor topologies are compared. The calculation of the frequency losses in the design optimization of the hub motors is specifically considered. Also, the effect of the slot and pole number choice that determines the winding factor of the machine is investigated. It is shown that the torque-per-copper-loss performance of the machine cannot be judged solely on the winding factor of the machine. Wide open stator slot machines with rectangular preformed coils and, hence, low manufacturing costs are found to perform better than expected. The design detail and test results of a prototype 10-kW water-cooled EV hub drive are presented. The test results confirm the finite-element-calculated results, specifically in the high-speed region of the drive where the rotor topology affects the constant power speed range."}
{"_id": "448fd962a195c6ff196ca5962b3eb7afb896d9e3", "title": "Fast and robust Earth Mover's Distances", "text": "We present a new algorithm for a robust family of Earth Mover's Distances - EMDs with thresholded ground distances. The algorithm transforms the flow-network of the EMD so that the number of edges is reduced by an order of magnitude. As a result, we compute the EMD by an order of magnitude faster than the original algorithm, which makes it possible to compute the EMD on large histograms and databases. In addition, we show that EMDs with thresholded ground distances have many desirable properties. First, they correspond to the way humans perceive distances. Second, they are robust to outlier noise and quantization effects. Third, they are metrics. Finally, experimental results on image retrieval show that thresholding the ground distance of the EMD improves both accuracy and speed."}
{"_id": "6a8fe4de9e91fcea62993599a975a3464bc7e3af", "title": "Evaluation methods and decision theory for classification of streaming data with temporal dependence", "text": "Predictive modeling on data streams plays an important role in modern data analysis, where data arrives continuously and needs to be mined in real time. In the stream setting the data distribution is often evolving over time, and models that update themselves during operation are becoming the state-of-the-art. This paper formalizes a learning and evaluation scheme of such predictive models. We theoretically analyze evaluation of classifiers on streaming data with temporal dependence. Our findings suggest that the commonly accepted data stream classification measures, such as classification accuracy and Kappa statistic, fail to diagnose cases of poor performance when temporal dependence is present, therefore they should not be used as sole performance indicators. Moreover, classification accuracy can be misleading if used as a proxy for evaluating change detectors with datasets that have temporal dependence. We formulate the decision theory for streaming data classification with temporal dependence and develop a new evaluation methodology for data stream classification that takes temporal dependence into account. We propose a combined measure for classification performance, that takes into account temporal dependence, and we recommend using it as the main performance measure in classification of streaming data."}
{"_id": "6e6dfc5d5f097f9e7d02d3fc0380c1ca49744df2", "title": "Learning Background-Aware Correlation Filters for Visual Tracking", "text": "Correlation Filters (CFs) have recently demonstrated excellent performance in terms of rapidly tracking objects under challenging photometric and geometric variations. The strength of the approach comes from its ability to efficiently learn - on the fly - how the object is changing over time. A fundamental drawback to CFs, however, is that the background of the target is not modeled over time which can result in suboptimal performance. Recent tracking algorithms have suggested to resolve this drawback by either learning CFs from more discriminative deep features (e.g. DeepSRDCF [9] and CCOT [11]) or learning complex deep trackers (e.g. MDNet [28] and FCNT [33]). While such methods have been shown to work well, they suffer from high complexity: extracting deep features or applying deep tracking frameworks is very computationally expensive. This limits the real-time performance of such methods, even on high-end GPUs. This work proposes a Background-Aware CF based on hand-crafted features (HOG [6]) that can efficiently model how both the foreground and background of the object varies over time. Our approach, like conventional CFs, is extremely computationally efficient- and extensive experiments over multiple tracking benchmarks demonstrate the superior accuracy and real-time performance of our method compared to the state-of-the-art trackers."}
{"_id": "d59691a0341246f8c09c655428712ddc86246efc", "title": "Broadband Radial Waveguide Power Combiner with Improved Isolation among Adjacent Output Ports", "text": "An eight-way waveguide-based power combiner/divider is presented and investigated in the frequency range 7.5\u201310.5GHz. A simple approach is proposed for design purposes. The measured combiner shows a good agreement between the simulated and measured results. Insertion loss is about \u22120.3 dB, return loss is less than \u221215 dB and isolation between adjacent output ports is better than \u221211 dB at 8.5GHz and reaches about \u221214 dB at 9.5GHz."}
{"_id": "1b322f89319440f06bb45d9081461978c9c643db", "title": "Telomere measurement by quantitative PCR.", "text": "It has long been presumed impossible to measure telomeres in vertebrate DNA by PCR amplification with oligonucleotide primers designed to hybridize to the TTAGGG and CCCTAA repeats, because only primer dimer-derived products are expected. Here we present a primer pair that eliminates this problem, allowing simple and rapid measurement of telomeres in a closed tube, fluorescence-based assay. This assay will facilitate investigations of the biology of telomeres and the roles they play in the molecular pathophysiology of diseases and aging."}
{"_id": "261e39073725dd16d46aeaeb30b7b7dd3e8b78ee", "title": "Depth map creation and image-based rendering for advanced 3DTV services providing interoperability and scalability", "text": "Due to enormous progress in the areas of auto-stereoscopic 3D displays, digital video broadcast and computer vision algorithms, 3D television (3DTV) has reached a high technical maturity and many people now believe in its readiness for marketing. Experimental prototypes of entire 3DTV processing chains have been demonstrated successfully during the last few years, and the motion picture experts group (MPEG) of ISO/IEC has launched related ad hoc groups and standardization efforts envisaging the emerging market segment of 3DTV. In this context the paper discusses an advanced approach for a 3DTV service, which is based on the concept of video-plus-depth data representations. It particularly considers aspects of interoperability and multi-view adaptation for the case that different multi-baseline geometries are used for multi-view capturing and 3D display. Furthermore it presents algorithmic solutions for the creation of depth maps and depth image-based rendering related to this framework of multi-view adaptation. In contrast to other proposals, which are more focused on specialized configurations, the underlying approach provides a modular and flexible system architecture supporting a wide range of multi-view structures. r 2007 Elsevier B.V. All rights reserved."}
{"_id": "14726825de6b438920bd2f021a5e8b7323046f00", "title": "Generalizing Skills with Semi-Supervised Reinforcement Learning", "text": "Deep reinforcement learning (RL) can acquire complex behaviors from low-level inputs, such as images. However, real-world applications of such methods require generalizing to the vast variability of the real world. Deep networks are known to achieve remarkable generalization when provided with massive amounts of labeled data, but can we provide this breadth of experience to an RL agent, such as a robot? The robot might continuously learn as it explores the world around it, even while it is deployed and performing useful tasks. However, this learning requires access to a reward function, to tell the agent whether it is succeeding or failing at its task. Such reward functions are often hard to measure in the real world, especially in domains such as robotics and dialog systems, where the reward could depend on the unknown positions of objects or the emotional state of the user. On the other hand, it is often quite practical to provide the agent with reward functions in a limited set of situations, such as when a human supervisor is present, or in a controlled laboratory setting. Can we make use of this limited supervision, and still benefit from the breadth of experience an agent might collect in the unstructured real world? In this paper, we formalize this problem setting as semisupervised reinforcement learning (SSRL), where the reward function can only be evaluated in a set of \u201clabeled\u201d MDPs, and the agent must generalize its behavior to the wide range of states it might encounter in a set of \u201cunlabeled\u201d MDPs, by using experience from both settings. Our proposed method infers the task objective in the unlabeled MDPs through an algorithm that resembles inverse RL, using the agent\u2019s own prior experience in the labeled MDPs as a kind of demonstration of optimal behavior. We evaluate our method on challenging, continuous control tasks that require control directly from images, and show that our approach can improve the generalization of a learned deep neural network policy by using experience for which no reward function is available. We also show that our method outperforms direct supervised learning of the reward."}
{"_id": "e4945a6b2bea1efc1f6fcfa160e476f74a037b69", "title": "Virtual InfiniBand clusters for HPC clouds", "text": "High Performance Computing (HPC) employs fast interconnect technologies to provide low communication and synchronization latencies for tightly coupled parallel compute jobs. Contemporary HPC clusters have a fixed capacity and static runtime environments; they cannot elastically adapt to dynamic workloads, and provide a limited selection of applications, libraries, and system software. In contrast, a cloud model for HPC clusters promises more flexibility, as it provides elastic virtual clusters to be available on-demand. This is not possible with physically owned clusters.\n In this paper, we present an approach that makes it possible to use InfiniBand clusters for HPC cloud computing. We propose a performance-driven design of an HPC IaaS layer for InfiniBand, which provides throughput and latency-aware virtualization of nodes, networks, and network topologies, as well as an approach to an HPC-aware, multi-tenant cloud management system for elastic virtualized HPC compute clusters."}
{"_id": "06ee3e95ce74711830a9752768c37c3e69585d7d", "title": "Smart Cities' Data: Challenges and Opportunities for Semantic Technologies", "text": "How can we innovate smart systems for smart cities, to make data available homogeneously, inexpensively, and flexibly while supporting an array of applications that have yet to exist or be specified?"}
{"_id": "758e3fbd07ef858b33420aa981ae741a3f0d9a78", "title": "cocor: A Comprehensive Solution for the Statistical Comparison of Correlations", "text": "A valid comparison of the magnitude of two correlations requires researchers to directly contrast the correlations using an appropriate statistical test. In many popular statistics packages, however, tests for the significance of the difference between correlations are missing. To close this gap, we introduce cocor, a free software package for the R programming language. The cocor package covers a broad range of tests including the comparisons of independent and dependent correlations with either overlapping or nonoverlapping variables. The package also includes an implementation of Zou's confidence interval for all of these comparisons. The platform independent cocor package enhances the R statistical computing environment and is available for scripting. Two different graphical user interfaces-a plugin for RKWard and a web interface-make cocor a convenient and user-friendly tool."}
{"_id": "49254a9de8b95df38f3db57228fe242ab7084816", "title": "Eye localization from thermal infrared images", "text": "By using the knowledge of facial structure and temperature distribution, this paper proposes an automatic eye localization method from long wave infrared thermal images both with eyeglasses and without eyeglasses. First, with the help of support vector machine classifier, three gray-projection features are defined to determine whether a subject is with eyeglasses. For subjects with eyeglasses, the locations of valleys in the projection curve are used to perform eye localization. For subjects without eyeglasses, a facial structure consisting of 15 sub-regions is proposed to extract Haar-like features. Eight classifiers are learned from the features selected by Adaboost algorithm for left and right eye, respectively. A vote strategy is employed to find the most likely eyes. To evaluate the effectiveness of our approach, experiments are performed on NVIE and Equinox databases. The eyeglass detection results on NVIE database and Equinox database are 99.36% and 95%, respectively, which demonstrate the effectiveness and robustness of our eyeglass detection method. Eye localization results of withindatabase experiments and cross-database experiments on these two databases are very comparable with the previous results in this field, verifying the effectiveness and the generalization ability of our approach. & 2013 Elsevier Ltd. All rights reserved."}
{"_id": "0788cda105da9853627d3e1ec8d01e01f7239c30", "title": "Parallel Coordinate Descent for L1-Regularized Loss Minimization", "text": "We propose Shotgun, a parallel coordinate descent algorithm for minimizing L1regularized losses. Though coordinate descent seems inherently sequential, we prove convergence bounds for Shotgun which predict near-linear speedups, up to a problemdependent limit. We present a comprehensive empirical study of Shotgun for Lasso and sparse logistic regression. Our theoretical predictions on the potential for parallelism closely match behavior on real data. Shotgun outperforms other published solvers on a range of large problems, proving to be one of the most scalable algorithms for L1."}
{"_id": "2414283ed14ebb0eec031bb75cd25fbad000687e", "title": "Distributed large-scale natural graph factorization", "text": "Natural graphs, such as social networks, email graphs, or instant messaging patterns, have become pervasive through the internet. These graphs are massive, often containing hundreds of millions of nodes and billions of edges. While some theoretical models have been proposed to study such graphs, their analysis is still difficult due to the scale and nature of the data.\n We propose a framework for large-scale graph decomposition and inference. To resolve the scale, our framework is distributed so that the data are partitioned over a shared-nothing set of machines. We propose a novel factorization technique that relies on partitioning a graph so as to minimize the number of neighboring vertices rather than edges across partitions. Our decomposition is based on a streaming algorithm. It is network-aware as it adapts to the network topology of the underlying computational hardware. We use local copies of the variables and an efficient asynchronous communication protocol to synchronize the replicated values in order to perform most of the computation without having to incur the cost of network communication. On a graph of 200 million vertices and 10 billion edges, derived from an email communication network, our algorithm retains convergence properties while allowing for almost linear scalability in the number of computers."}
{"_id": "2a65a1a126f843f0e3600ba80da50bc6d4c32855", "title": "The Matrix Cookbook", "text": "Acknowledgements: We would like to thank the following for contributions and suggestions: Bill Baxter, Brian Templeton, Christian Rish\u00f8j, Christian Schr\u00f6ppel Douglas L. Theobald, Esben Hoegh-Rasmussen, Glynne Casteel, Jan Larsen, Jun Bin Gao, J\u00fcrgen Struckmeier, Kamil Dedecius, Korbinian Strimmer, Lars Christiansen, Lars Kai Hansen, Leland Wilkinson, Liguo He, Loic Thibaut, Miguel Bar\u00e3o, Ole Winther, Pavel Sakov, Stephan Hattinger, Vasile Sima, Vincent Rabaud, Zhaoshui He. We would also like thank The Oticon Foundation for funding our PhD studies."}
{"_id": "51e93552fe55be91a5711ff2aabc04b742503e68", "title": "ICA with Reconstruction Cost for Efficient Overcomplete Feature Learning", "text": "F . Here,Cx denotes the covariance matrix of the data. E is the matrix whose columns are eigenvectors of Cx; andD is a diagonal matrix of eigenvalues of Cx. This result also means that when the data is not whitened, the reconstruction cost becomes a weighted linear regression in the space rotated by eigenvec tors and scaled by eigenvalues. The cost is therefore weighted heavily in the principal vector d irections and less so in the other directions. Also, note that D 1 2E is the well-known PCA whitening matrix. The optimization th erefore builds in an inverse whitening matrix such that W will have to learn the whitening matrix to cancel outED 1 2 ."}
{"_id": "903520b0b13edc0e6a18a05dde11ff64f7a7582f", "title": "Modeling IoT-Based Solutions Using Human-Centric Wireless Sensor Networks", "text": "The Internet of Things (IoT) has inspired solutions that are already available for addressing problems in various application scenarios, such as healthcare, security, emergency support and tourism. However, there is no clear approach to modeling these systems and envisioning their capabilities at the design time. Therefore, the process of designing these systems is ad hoc and its real impact is evaluated once the solution is already implemented, which is risky and expensive. This paper proposes a modeling approach that uses human-centric wireless sensor networks to specify and evaluate models of IoT-based systems at the time of design, avoiding the need to spend time and effort on early implementations of immature designs. It allows designers to focus on the system design, leaving the implementation decisions for a next phase. The article illustrates the usefulness of this proposal through a running example, showing the design of an IoT-based solution to support the first responses during medium-sized or large urban incidents. The case study used in the proposal evaluation is based on a real train crash. The proposed modeling approach can be used to design IoT-based systems for other application scenarios, e.g., to support security operatives or monitor chronic patients in their homes."}
{"_id": "40c5050e470fa0890e85487e4679197e07a91c09", "title": "pVM: persistent virtual memory for efficient capacity scaling and object storage", "text": "Next-generation byte-addressable nonvolatile memories (NVMs), such as phase change memory (PCM) and Memristors, promise fast data storage, and more importantly, address DRAM scalability issues. State-of-the-art OS mechanisms for NVMs have focused on improving the block-based virtual file system (VFS) to manage both persistence and the memory capacity scaling needs of applications. However, using the VFS for capacity scaling has several limitations, such as the lack of automatic memory capacity scaling across DRAM and NVM, inefficient use of the processor cache and TLB, and high page access costs. These limitations reduce application performance and also impact applications that use NVM for persistent object storage with flat namespaces, such as photo stores, NoSQL databases, and others.\n To address such limitations, we propose persistent virtual memory (pVM), a system software abstraction that provides applications with (1) automatic OS-level memory capacity scaling, (2) flexible memory placement policies across NVM, and (3) fast object storage. pVM extends the OS virtual memory (VM) instead of building on the VFS and abstracts NVM as a NUMA node with support for NVM-based memory placement mechanisms. pVM inherits benefits from the cache and TLB-efficient VM subsystem and augments these further by distinguishing between persistent and nonpersistent capacity use of NVM. Additionally, pVM achieves fast persistent storage by further extending the VM subsystem with consistent and durable OS-level persistent metadata. Our evaluation of pVM with memory capacity-intensive applications shows a 2.5x speedup and up to 80% lower TLB and cache misses compared to VFS-based systems. pVM's object store provides 2x higher throughput compared to the block-based approach of the state-of-the art solution and up to a 4x reduction in the time spent in the OS."}
{"_id": "877aff9bd05de7e9d82587b0e6f1cda28fd33171", "title": "Long-Term Visual Localization Using Semantically Segmented Images", "text": "Robust cross-seasonal localization is one of the major challenges in long-term visual navigation of autonomous vehicles. In this paper, we exploit recent advances in semantic segmentation of images, i.e., where each pixel is assigned a label related to the type of object it represents, to attack the problem of long-term visual localization. We show that semantically labeled 3D point maps of the environment, together with semantically segmented images, can be efficiently used for vehicle localization without the need for detailed feature descriptors (SIFT, SURF, etc.), Thus, instead of depending on hand-crafted feature descriptors, we rely on the training of an image segmenter. The resulting map takes up much less storage space compared to a traditional descriptor based map. A particle filter based semantic localization solution is compared to one based on SIFT-features, and even with large seasonal variations over the year we perform on par with the larger and more descriptive SIFT-features, and are able to localize with an error below 1 m most of the time."}
{"_id": "967852b62d5ef4c0f65f7f5568c5ed8661b77460", "title": "Sexual Conflict in Human Mating", "text": "Despite interdependent reproductive fates that favor cooperation, males and females exhibit many psychological and behavioral footprints of sexually antagonistic coevolution. These include strategies of deception, sexual exploitation, and sexual infidelity as well as anti-exploitation defenses such as commitment skepticism and emotions such as sexual regret and jealousy. Sexual conflict pervades the mating arena prior to sexual consummation, after a mating relationship has formed, and in the aftermath of a breakup. It also permeates many other social relationships in forms such as daughter-guarding, conflict in opposite-sex friendships, and workplace sexual harassment. As such, sexual conflict constitutes not a narrow or occasional flashpoint but rather persistent threads that run through our intensely group-living species."}
{"_id": "a7ebc07154979549def2c39e33ad711ae5f41c65", "title": "Evaluating the relationship between white matter integrity, cognition, and varieties of video game learning.", "text": "BACKGROUND\nMany studies are currently researching the effects of video games, particularly in the domain of cognitive training. Great variability exists among video games however, and few studies have attempted to compare different types of video games. Little is known, for instance, about the cognitive processes or brain structures that underlie learning of different genres of video games.\n\n\nOBJECTIVE\nTo examine the cognitive and neural underpinnings of two different types of game learning in order to evaluate their common and separate correlates, with the hopes of informing future intervention research.\n\n\nMETHODS\nParticipants (31 younger adults and 31 older adults) completed an extensive cognitive battery and played two different genres of video games, one action game and one strategy game, for 1.5 hours each. DTI scans were acquired for each participant, and regional fractional anisotropy (FA) values were extracted using the JHU atlas.\n\n\nRESULTS\nBehavioral results indicated that better performance on tasks of working memory and perceptual discrimination was related to enhanced learning in both games, even after controlling for age, whereas better performance on a perceptual speed task was uniquely related with enhanced learning of the strategy game. DTI results indicated that white matter FA in the right fornix/stria terminalis was correlated with action game learning, whereas white matter FA in the left cingulum/hippocampus was correlated with strategy game learning, even after controlling for age.\n\n\nCONCLUSION\nAlthough cognition, to a large extent, was a common predictor of both types of game learning, regional white matter FA could separately predict action and strategy game learning. Given the neural and cognitive correlates of strategy game learning, strategy games may provide a more beneficial training tool for adults suffering from memory-related disorders or declines in processing speed, particularly older adults."}
{"_id": "82b90cbbfc440d3b932df2e732d37aa86af138e9", "title": "Empirical vulnerability analysis of automated smart contracts security testing on blockchains", "text": "\u008ce emerging blockchain technology supports decentralized computing paradigm shi\u0089 and is a rapidly approaching phenomenon. While blockchain is thought primarily as the basis of Bitcoin, its application has grown far beyond cryptocurrencies due to the introduction of smart contracts. Smart contracts are self-enforcing pieces of so\u0089ware, which reside and run over a hosting blockchain. Using blockchain-based smart contracts for secure and transparent management to govern interactions (authentication, connection, and transaction) in Internet-enabled environments, mostly IoT, is a niche area of research and practice. However, writing trustworthy and safe smart contracts can be tremendously challenging because of the complicated semantics of underlying domain-speci\u0080c languages and its testability. \u008cere have been high-pro\u0080le incidents that indicate blockchain smart contracts could contain various code-security vulnerabilities, instigating \u0080nancial harms. When it involves security of smart contracts, developers embracing the ability to write the contracts should be capable of testing their code, for diagnosing security vulnerabilities, before deploying them to the immutable environments on blockchains. However, there are only a handful of security testing tools for smart contracts. \u008cis implies that the existing research on automatic smart contracts security testing is not adequate and remains in a very stage of infancy. With a speci\u0080c goal to more readily realize the application of blockchain smart contracts in security and privacy, we should \u0080rst understand their vulnerabilities before widespread implementation. Accordingly, the goal of this paper is to carry out a far-reaching experimental assessment of current static smart contracts security testing tools, for the most widely used blockchain, the Ethereum and its domain-speci\u0080c programming language, Solidity, to provide the \u0080rst body of knowledge for creating more secure blockchainbased so\u0089ware. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for pro\u0080t or commercial advantage and that copies bear this notice and the full citation on the \u0080rst page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s)."}
{"_id": "3502439dbd2bdfb0a1b452f90760d8ff1fbb2c73", "title": "CONVOLUTIONAL SEQUENCE LEARNING", "text": "We present Deep Voice 3, a fully-convolutional attention-based neural textto-speech (TTS) system. Deep Voice 3 matches state-of-the-art neural speech synthesis systems in naturalness while training an order of magnitude faster. We scale Deep Voice 3 to dataset sizes unprecedented for TTS, training on more than eight hundred hours of audio from over two thousand speakers. In addition, we identify common error modes of attention-based speech synthesis networks, demonstrate how to mitigate them, and compare several different waveform synthesis methods. We also describe how to scale inference to ten million queries per day on a single GPU server."}
{"_id": "e17c0ca1e9e5658c54d0c556644bb0c8b5a606c7", "title": "Parenting cognitions \u2192 parenting practices \u2192 child adjustment? The standard model.", "text": "In a large-scale (N = 317) prospective 8-year longitudinal multiage, multidomain, multivariate, multisource study, we tested a conservative three-term model linking parenting cognitions in toddlerhood to parenting practices in preschool to classroom externalizing behavior in middle childhood, controlling for earlier parenting practices and child externalizing behavior. Mothers who were more knowledgeable, satisfied, and attributed successes in their parenting to themselves when their toddlers were 20 months of age engaged in increased supportive parenting during joint activity tasks 2 years later when their children were 4 years of age, and 6 years after that their 10-year-olds were rated by teachers as having fewer classroom externalizing behavior problems. This developmental cascade of a \"standard model\" of parenting applied equally to families with girls and boys, and the cascade from parenting attributions to supportive parenting to child externalizing behavior obtained independent of 12 child, parent, and family covariates. Conceptualizing socialization in terms of cascades helps to identify points of effective intervention."}
{"_id": "226ac4e235990a4bfe0e4c77695e036d66f6c4d9", "title": "Attribute-Based Access Control Scheme in Federated IoT Platforms", "text": "The Internet of Things (IoT) introduced the possibility to connect electronic things from everyday life to the Internet, while making them ubiquitously available. With advanced IoT services, based on a trusted federation among heterogeneous IoT platforms, new security problems (including authentication and authorization) emerge. This contribution aims at describing the main facets of the preliminary security architecture envisaged in the context of the symbIoTe project, recently launched by European Commission under the Horizon 2020 EU program. Our approach features distributed and decoupled mechanisms for authentication and authorization services in complex scenarios embracing heterogeneous and federated IoT platforms, by leveraging Attribute Based Access Control and token-based authorization techniques."}
{"_id": "46f0d44599188dadf831a3c0e486b2f0391d0dec", "title": "On the privacy of anonymized networks", "text": "The proliferation of online social networks, and the concomitant accumulation of user data, give rise to hotly debated issues of privacy, security, and control. One specific challenge is the sharing or public release of anonymized data without accidentally leaking personally identifiable information (PII). Unfortunately, it is often difficult to ascertain that sophisticated statistical techniques, potentially employing additional external data sources, are unable to break anonymity. In this paper, we consider an instance of this problem, where the object of interest is the structure of a social network, i.e., a graph describing users and their links. Recent work demonstrates that anonymizing node identities may not be sufficient to keep the network private: the availability of node and link data from another domain, which is correlated with the anonymized network, has been used to re-identify the anonymized nodes. This paper is about conditions under which such a de-anonymization process is possible.\n We attempt to shed light on the following question: can we assume that a sufficiently sparse network is inherently anonymous, in the sense that even with unlimited computational power, de-anonymization is impossible? Our approach is to introduce a random graph model for a version of the de-anonymization problem, which is parameterized by the expected node degree and a similarity parameter that controls the correlation between two graphs over the same vertex set. We find simple conditions on these parameters delineating the boundary of privacy, and show that the mean node degree need only grow slightly faster than log n with network size n for nodes to be identifiable. Our results have policy implications for sharing of anonymized network information."}
{"_id": "465fd6e50dc5c3c2583bc1d7a7b3b424e0101b9f", "title": "New technology and health care costs--the case of robot-assisted surgery.", "text": "n engl j med 363;8 nejm.org august 19, 2010 701 stood. These technologies can lead to increases in costs, either because they are simply more expensive than previous treatments or because their introduction leads to an expansion in the types and numbers of patients treated. We examined these patterns as they apply to the case of robot-assisted surgery. Robotic surgical devices allow a surgeon at a console to operate remote-controlled robotic arms, which may facilitate the performance of laparoscopic procedures. Laparoscopic surgery, in turn, is associated with shorter hospital stays than open surgery, as well as with less postoperative pain and scarring and lower risks of infection and need for blood transfusion. Robotic technology has been adopted rapidly over the past 4 years in both the United States and Europe. The number of robot-assisted procedures that are performed worldwide has nearly tripled since 2007, from 80,000 to 205,000. Between 2007 and 2009, the number of da Vinci systems, the leading robotic technology, that were installed in U.S. hospitals grew by approximately 75%, from almost 800 to around 1400, and the number that were installed in other countries doubled, from 200 to nearly 400, according to Intuitive Surgical, da Vinci\u2019s manufacturer. A wide range of procedures are now performed by means of robot-assisted surgery. Some of these procedures were already being performed laparoscopically before robots were introduced; the introduction of robotic technology affects expenditures associated with such procedures primarily by increasing the cost per procedure. For procedures that were more often performed as open surgeries, the introduction of robots may affect both the cost and the volume of surgeries performed. Robotic surgical systems have high fixed costs, with prices ranging from $1 million to $2.5 million for each unit. Surgeons must perform 150 to 250 procedures to become adept in their use. The systems also require costly maintenance and demand the use of additional consumables (singleuse robotic appliances). The use of robotic systems may also require more operating time than alternatives. In the case of procedures that had previously been performed as open surgery, however, some of the new costs will New Technology and Health Care Costs \u2014 The Case of Robot-Assisted Surgery"}
{"_id": "bbb31f0a7c620cb168c9860167eae36e7936382d", "title": "Minimally supervised question classification on fine-grained taxonomies", "text": "This article presents a minimally supervised approach to question classification on fine-grained taxonomies. We have defined an algorithm that automatically obtains lists of weighted terms for each class in the taxonomy, thus identifying which terms are highly related to the classes and are highly discriminative between them. These lists have then been applied to the task of question classification. Our approach is based on the divergence of probability distributions of terms in plain text retrieved from the Web. A corpus of questions with which to train the classifier is not therefore necessary. As the system is based purely on statistical information, it does not require additional linguistic resources or tools. The experiments were performed on English questions and their Spanish translations. The results reveal that our system surpasses current supervised approaches in this task, obtaining a significant improvement in the experiments carried out."}
{"_id": "3e4c19f558862a92c8e7485758b5809a0b8338db", "title": "Crowdsourcing Information Systems \u2013 A Systems Theory Perspective", "text": "Crowdsourcing harnesses the potential of large and open networks of people. It is a relatively new phenomenon and attracted substantial interest in practice. Related research, however, lacks a theoretical foundation. We propose a systemtheoretical perspective on crowdsourcing systems to address this gap and illustrate its applicability by using it to classify crowdsourcing systems. By deriving two principal dimensions from theory, we identify four fundamental types of crowdsourcing systems that help to distinguish important features of such systems. We analyse their respective characteristics and discuss implications and requirements for various aspects related to the design of such systems. Our results demonstrate that systems theory can inform the study of crowdsourcing systems. The identified system types and the implications on their design may prove useful for researchers to frame future studies and for practitioners to identify the right crowdsourcing systems for a particular purpose."}
{"_id": "f5ceb1d4403a710fa2e0002755f286756b407c16", "title": "Methods of mapping from phase to sine amplitude in direct digital synthesis", "text": "There are many methods for performing functional mapping from phase to sine amplitude (e.g., ROM look-up, coarse/fine segmentation into multiple ROM's, Taylor series, CORDIC algorithm). The spectral purity of the conventional direct digital synthesizer (DDS) is also determined by the resolution of the values stored in the sine table ROM. Therefore, it is desirable to increase the resolution of the ROM. Unfortunately, larger ROM storage means higher power consumption, lower reliability, lower speed, and greatly increased costs. Different memory compression and algorithmic techniques and their effect on distortion and trade-offs are investigated in detail. A computer program has been created to simulate the effects of the memory compression and algorithmic techniques on the output spectrum of the DDS. For each memory compression and algorithmic technique, the worst case spurious response is calculated using the computer program."}
{"_id": "17d8fa4be0e351e62e6f1c72296e1f6136d3b9df", "title": "Why does deep and cheap learning work so well?", "text": "We show how the success of deep learning could depend not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can frequently be approximated through \u201ccheap learning\u201d with exponentially fewer parameters than generic ones. We explore how properties frequently encountered in physics such as symmetry, locality, compositionality, and polynomial log-probability translate into exceptionally simple neural networks. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine-learning, a deep neural network can be more efficient than a shallow one. We formalize these claims using information theory and discuss the relation to the renormalization group. We prove various \u201cno-flattening theorems\u201d showing when efficient linear deep networks cannot be accurately approximated by shallow ones without efficiency loss; for example, we show that n variables cannot be multiplied using fewer than 2 neurons in a single hidden layer."}
{"_id": "fc64015f62631b642e0162860be9d753cc4a1b0a", "title": "Auto-Calibration Around-View Monitoring System", "text": "This work is to establish an auto-calibration around-view monitoring system to avoid the inconsistent performance from deviations of camera installation. First, a vehicle is parked at a specific area which has a special pattern on the ground. Second, the captured images from four embedded cameras are compared with the reference pattern images to obtain the deviations in angle, X-axis and Y-axis. Finally, these deviations are used to compensate the coefficients of transformation matrices associated with view-point conversion and image stitching. Particularly, the adaptive edge recovery scheme is developed to help on precise alignments. The experimental results show that the proposed system implemented at the TI DaVinci DSP platform can accomplish auto-calibration of a vehicle within 60 seconds and have deviation rates smaller than 38% at stitched regions and 25% at the other regions. Additionally, \u00b12\u00b0 deviation errors at angles, X-axis and Y-axis are allowed. Particularly, the proposed system can generate precise 2-meter around-view pictures at a vehicle speed of 30km/h. Therefore, the auto-calibration system proposed herein can be widely applied to vehicle manufacture, verification and repair."}
{"_id": "dd9d624bc79f0c08378fedb6ddbe772581fa0d45", "title": "Incivility in nursing education: An intervention.", "text": "Incivility in nursing education is an unfortunate phenomenon affecting nursing students in all aspects of their educational experience. Students and their instructors are often ill equipped to deal with academic incivility and their lack of ability to handle such behaviors has proven detrimental to the future of the nursing profession. Nursing instructors need tools to help educate nursing students on how to recognize uncivil behaviors within themselves as well as others and ways to combat it. This research project addressed these aspects of academic incivility and implemented an e-learning module that was developed to educate students on incivility. The data was collected through a pre-test, post-test model with resulting statistical analysis using the McNemar's test. Results showed the nursing students obtained increased self-efficacy in regards to their ability to define, detect, and combat academic incivility after viewing the e-learning module. In conclusion, the successful implementation of the e-learning module provides further incentive for schools of nursing to consider implementing incivility education in their curriculums."}
{"_id": "48a7b3c5d2592ee6e5760f2dd3d771f2b701b682", "title": "Single phase based on ups applied Z-source inverter by using matlab / simulink", "text": "Uninterruptible power supplies (UPSs) are widely used to supply critical loads, such as airline computers and life-support systems in hospitals, providing protection against power failure or anomalies of power-line voltage. In general, there are two types of traditional single phase UPSs. The first one couples a battery bank to a half or full-bridge inverter with a low-frequency transformer. In this type of UPSs, the ac output voltage is higher than that of the battery bank; thus, a step-up transformer is required to boost voltage. .The second one couples a battery bank to a dc/dc booster with a half or full bridge inverter. In this type of UPSs, the additional booster is needed, leading to high cost and low efficiency. The controlling of the switches in the booster also complicates the system. Due to the presence of the step-up transformer, the inverter current is much higher than the load current, causing high current stress on the switches of the inverter The dead time in the pulse width-modulation (PWM) signals to prevent the upper and lower switches at the same phase leg from shooting through has to be provided in the aforementioned two types of UPSs, and it distorts the voltage waveform of the ac output voltage. In order to overcome the above problems in the traditional UPSs, new topology of the UPS is proposed by using a voltage source inverter and Zsource inverter. With this new topology, the proposed UPS offers the advantages over the traditional UPSs such as the dc/dc booster and the inverter have been combined into one single-stage power conversion and the distortion of the ac output-voltage waveform is reduced in the absence of dead time in the PWM signals. The effectiveness of the proposed method is implemented by using MATLAB/SIMULINK software."}
{"_id": "4d0f1ef8e1ae14af8db252bb9057da17c55bef21", "title": "SAR automatic target recognition based on a visual cortical system", "text": "Human Vision system is the most complex and accurate system. In order to extract better features about Synthetic Aperture Radar (SAR) targets, a SAR automatic target recognition (ATR) algorithm based on human visual cortical system is proposed. This algorithm contains three stages: (1) Image preprocessing (we use a Kuan filter to do the enhancement and an adaptive Intersecting Cortical Model (ICM) to do the segmentation) (2) Feature extraction using a sparse autoencoder. (3) Classification using a softmax regression classifier. Experiment result of MSTAR public data shows a better performance of recognition."}
{"_id": "8f50f6a53ac6c81ce583d098c5339058e7343794", "title": "Trust and mobile commerce in North America", "text": "Mobile Commerce (mCommerce) activities include the act of shopping and buying on mobile devices, along with the more recent emergence of mobile payment systems. Within North America, mCommerce activities have become increasingly popular and will likely continue on this upwards trend as mobile devices further proliferate markets. Historically, one common issue with the adoption and use of eCommerce systems (e.g., commerce activities on personal computers) is trust. Yet we know little of how trust and other social factors may affect mCommerce usageda new genre of commerce activities that explicitly occur on mobile devices. To help address this problem, we have conducted two studies that explore users' mCommerce activities. The first is a diary and interview study of mCommerce shoppers who have already adopted the technology and shop on their mobile devices regularly. Our study explores typical mCommerce routines and behaviors along with issues of trust, given its long-term concern for eCommerce. The second is a diary and interview study of new and existing users of mobile payment services in North America. Participants used a variety of services, including Google Wallet, Amazon Payments, LevelUp, Square and company apps geared towards payments (e.g., Starbucks). Our results show that when it comes to shopping on mobile devices, people have few trust concerns. Yet when mobile devices are used for payments within physical stores, trust issues emerge along with prepurchase anxiety and mental model challenges. We discuss these results and show the value in adapting and developing new trust mechanisms for mCommerce. \u00a9 2016 Elsevier Ltd. All rights reserved."}
{"_id": "8318fa48ed23f9e8b9909385d3560f029c623171", "title": "Implementing linearizability at large scale and low latency", "text": "Linearizability is the strongest form of consistency for concurrent systems, but most large-scale storage systems settle for weaker forms of consistency. RIFL provides a general-purpose mechanism for converting at-least-once RPC semantics to exactly-once semantics, thereby making it easy to turn non-linearizable operations into linearizable ones. RIFL is designed for large-scale systems and is lightweight enough to be used in low-latency environments. RIFL handles data migration by associating linearizability metadata with objects in the underlying store and migrating metadata with the corresponding objects. It uses a lease mechanism to implement garbage collection for metadata. We have implemented RIFL in the RAMCloud storage system and used it to make basic operations such as writes and atomic increments linearizable; RIFL adds only 530 ns to the 13.5 \u03bcs base latency for durable writes. We also used RIFL to construct a new multi-object transaction mechanism in RAMCloud; RIFL's facilities significantly simplified the transaction implementation. The transaction mechanism can commit simple distributed transactions in about 20 \u03bcs and it outperforms the H-Store main-memory database system for the TPC-C benchmark."}
{"_id": "6d9dc7c078bc32f2b7c17637ecc60e5ffe410897", "title": "Penile septoplasty for congenital ventral penile curvature: results in 51 patients.", "text": "PURPOSE\nThe technique most widely used to correct congenital ventral penile curvature is still corporoplasty as originally described by Nesbit. We present results in patients treated with a variation of Nesbit corporoplasty used specifically for congenital ventral penile curvature.\n\n\nMATERIALS AND METHODS\nFrom June 2000 to June 2007 we treated 51 patients with congenital ventral penile curvature using modified corporoplasty (septoplasty), consisting of accessing the bed of the penile dorsal vein and excising 1 or more diamonds of tunica albuginea from it, extending in wedge-like formation 4 to 5 mm deep into the septum, until the penis is completely straightened. Patient history, clinical findings, self-photography results and the International Index of Erectile Function score were assessed. Curvature grade is expressed using the equation, 180 degrees - X, where X represents the deviation in degrees from the penis axis. Mean preoperative ventral curvature was 131.4 degrees (median 135, range 145 to 110). Of the patients 13 also had erectile dysfunction.\n\n\nRESULTS\nAt followup postoperative mean ventral curvature was 178.3 degrees (median 179.1, range 180 to 175). A total of 49 stated that they were completely satisfied. Penile shortening was 5 to 15 mm. Compared to preoperative values there were marked improvements in the International Index of Erectile Function score in the various groups. No major postoperative complications developed. In 4 patients wound healing occurred by secondary intent.\n\n\nCONCLUSIONS\nThis technique provides excellent straightening of the curved penis. By avoiding isolation of the whole dorsal neurovascular bundle there is no risk of neurovascular lesions. Suture perception is minimized."}
{"_id": "f9f92fad17743dd14be7b8cc05ad0881b67f32c2", "title": "Cross-Domain Metric and Multiple Kernel Learning Based on Information Theory", "text": "Learning an appropriate distance metric plays a substantial role in the success of many learning machines. Conventional metric learning algorithms have limited utility when the training and test samples are drawn from related but different domains (i.e., source domain and target domain). In this letter, we propose two novel metric learning algorithms for domain adaptation in an information-theoretic setting, allowing for discriminating power transfer and standard learning machine propagation across two domains. In the first one, a cross-domain Mahalanobis distance is learned by combining three goals: reducing the distribution difference between different domains, preserving the geometry of target domain data, and aligning the geometry of source domain data with label information. Furthermore, we devote our efforts to solving complex domain adaptation problems and go beyond linear cross-domain metric learning by extending the first method to a multiple kernel learning framework. A convex combination of multiple kernels and a linear transformation are adaptively learned in a single optimization, which greatly benefits the exploration of prior knowledge and the description of data characteristics. Comprehensive experiments in three real-world applications (face recognition, text classification, and object categorization) verify that the proposed methods outperform state-of-the-art metric learning and domain adaptation methods."}
{"_id": "1b51a9be75c5b4a02aecde88a965e32413efd5a3", "title": "On multi-view feature learning", "text": "Sparse coding is a common approach to learning local features for object recognition. Recently, there has been an increasing interest in learning features from spatio-temporal, binocular, or other multi-observation data, where the goal is to encode the relationship between images rather than the content of a single image. We provide an analysis of multi-view feature learning, which shows that hidden variables encode transformations by detecting rotation angles in the eigenspaces shared among multiple image warps. Our analysis helps explain recent experimental results showing that transformation-specific features emerge when training complex cell models on videos. Our analysis also shows that transformation-invariant features can emerge as a by-product of learning representations of transformations."}
{"_id": "213d7af7107fa4921eb0adea82c9f711fd105232", "title": "Reducing the dimensionality of data with neural networks.", "text": "High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such \"autoencoder\" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data."}
{"_id": "3da3aa90c3e013ade289122168bb09ad3dd264e4", "title": "Learning Object Affordances: From Sensory--Motor Coordination to Imitation", "text": "Affordances encode relationships between actions, objects, and effects. They play an important role on basic cognitive capabilities such as prediction and planning. We address the problem of learning affordances through the interaction of a robot with the environment, a key step to understand the world properties and develop social skills. We present a general model for learning object affordances using Bayesian networks integrated within a general developmental architecture for social robots. Since learning is based on a probabilistic model, the approach is able to deal with uncertainty, redundancy, and irrelevant information. We demonstrate successful learning in the real world by having an humanoid robot interacting with objects. We illustrate the benefits of the acquired knowledge in imitation games."}
{"_id": "4c46347fbc272b21468efe3d9af34b4b2bad6684", "title": "Deep learning via Hessian-free optimization", "text": "We develop a 2nd-order optimization method based on the \u201cHessian-free\u201d approach, and apply it to training deep auto-encoders. Without using pre-training, we obtain results superior to those reported by Hinton & Salakhutdinov (2006) on the same tasks they considered. Our method is practical, easy to use, scales nicely to very large datasets, and isn\u2019t limited in applicability to autoencoders, or any specific model class. We also discuss the issue of \u201cpathological curvature\u201d as a possible explanation for the difficulty of deeplearning and how 2nd-order optimization, and our method in particular, effectively deals with it."}
{"_id": "2e5fd5433d93350d84b57cc865713bc7e237769e", "title": "Hierarchical Singular Value Decomposition of Tensors", "text": "We define the hierarchical singular value decomposition (SVD) for tensors of order d \u2265 2. This hierarchical SVD has properties like the matrix SVD (and collapses to the SVD in d = 2), and we prove these. In particular, one can find low rank (almost) best approximations in a hierarchical format (H-Tucker) which requires only O((d \u2212 1)k3 + dnk) parameters, where d is the order of the tensor, n the size of the modes and k the (hierarchical) rank. The H-Tucker format is a specialization of the Tucker format and it contains as a special case all (canonical) rank k tensors. Based on this new concept of a hierarchical SVD we present algorithms for hierarchical tensor calculations allowing for a rigorous error analysis. The complexity of the truncation (finding lower rank approximations to hierarchical rank k tensors) is in O((d\u22121)k4 +dnk2) and the attainable accuracy is just 2\u20133 digits less than machine precision."}
{"_id": "935b70f2e223b2229187422843ae0efb0116f033", "title": "Contact-less Transfer of Energy by means of a Rotating Transformer", "text": "This paper examines the electrical properties of a rotating transformer used for contact-less transfer of energy to rotating equipment. Two winding layouts are analysed theoretically and experimentally. The reluctance modelling provides a deep understanding of how the geometry of the core and windings affect the electrical behaviour of the component. Theoretical calculations, measured results and finite element analysis are used to compare the proposed layouts. Basic design guidelines are given to adjust the leakage and magnetising inductances. The selection of a phaseshifted full bridge converter is suggested for applications at the power level of 1kW."}
{"_id": "51797b3d2256cee87cc07a6f0ae6fbd312047401", "title": "Anarchy, stability, and utopia: creating better matchings", "text": "Historically, the analysis of matching has centered on designing algorithms to produce stable matchings as well as on analyzing the incentive compatibility of matching mechanisms. Less attention has been paid to questions related to the social welfare of stable matchings in cardinal utility models. We examine the loss in social welfare that arises from requiring matchings to be stable, the natural equilibrium concept under individual rationality. While this loss can be arbitrarily bad under general preferences, when there is some structure to the underlying graph corresponding to natural conditions on preferences, we prove worst case bounds on the price of anarchy. Surprisingly, under simple distributions of utilities, the average case loss turns out to be significantly smaller than the worst-case analysis would suggest. Furthermore, we derive conditions for the existence of approximately stable matchings that are also close to socially optimal, demonstrating that adding small switching costs can make socially (near-)optimal matchings stable. Our analysis leads to several concomitant results of interest on the convergence of decentralized partner-switching algorithms, and on the impact of heterogeneity of tastes on social welfare."}
{"_id": "5a3a21efea38628cf437378931ddfd60c79d74f0", "title": "Recovering human body configurations: combining segmentation and recognition", "text": "The goal of this work is to detect a human figure image and localize his joints and limbs along with their associated pixel masks. In this work we attempt to tackle this problem in a general setting. The dataset we use is a collection of sports news photographs of baseball players, varying dramatically in pose and clothing. The approach that we take is to use segmentation to guide our recognition algorithm to salient bits of the image. We use this segmentation approach to build limb and torso detectors, the outputs of which are assembled into human figures. We present quantitative results on torso localization, in addition to shortlisted full body configurations."}
{"_id": "5116ed56e2f22cc8739cba7ff9c1bf32bfdf29b1", "title": "1 . 1 Definition of adjustable autonomy and human-centered autonomous", "text": "We expect a variety of autonomous systems, from rovers to life-support systems, to play a critical role in the success of manned Mars missions. The crew and ground support personnel will want to control and be informed by these systems at varying levels of detail depending on the situation. Moreover, these systems will need to operate safely in the presence of people and cooperate with them effectively. We call such autonomous systems human-centered in contrast with traditional \u00d2black-box\u00d3 autonomous systems. Our goal is to design a framework for human-centered autonomous systems that enables users to interact with these systems at whatever level of control is most appropriate whenever they so choose, but minimize the necessity for such interaction. This paper discusses on-going research at the NASA Ames Research Center and the Johnson Space Center in developing human-centered autonomous systems that can be used for a manned Mars mission."}
{"_id": "18f17d72bc033dce8cc0ac5e6cf7f647cddbdc43", "title": "Fat grafting in facial rejuvenation.", "text": "Patients with significant facial atrophy and age-related loss of facial fat generally achieve suboptimal improvement from both surface treatments of facial skin and surgical lifts. Restoring lost facial volume by fat grafting is a powerful technique that is now acknowledged by most plastic surgeons and other physicians engaged in treating the aging face as one of the most important advances in aesthetic surgery. Properly performed, the addition of fat to areas of the face that have atrophied because of age or disease can produce a significant and sustained improvement in appearance that is unobtainable by other means."}
{"_id": "2a761bb8cf2958eb5e5424b00f71692abc7191a4", "title": "A UWB based Localization System for Indoor Robot Navigation", "text": "For robots to become more popular for domestic applications, the short comings of current indoor navigation technologies have to be overcome. In this paper, we propose the use of UWB-IR for indoor robot navigation. Various parts of an actual implementation of a UWB-IR based robot navigation system such as system architecture, RF sub-system design, antennas and localization algorithms are discussed. It is shown that by properly addressing the various issues, a localization error of less than 25 cm can be achieved at all points within a realistic indoor localization space."}
{"_id": "e0ca033ad5d8a351a48c6a8fb4ff235ace26f362", "title": "Blockchain-Free Cryptocurrencies : A Framework for Truly Decentralised Fast Transactions", "text": "The \u201cblockchain\u201d distributed ledger pioneered by Bitcoin is effective at preventing double-spending, but inherently attracts (1) \u201cuser cartels\u201d and (2) incompressible delays, as a result of linear verification and a winner-takes-all incentive lottery. We propose to forgo the \u201cblocks\u201d and \u201cchain\u201d entirely, and build a truly distributed ledger system based on a lean graph of cross-verifying transactions, which now become the main and only objects in the system. A fully distributed consensus mechanism, based on progressive proofs of work with predictable incentives, ensures rapid convergence even across a large network of unequal participants, who all get rewards working at their own pace. Graph-based affirmation fosters snappy response through automatic scaling, while application-agnostic design supports all modern cryptocurrency features such as multiple denominations, swaps, securitisation, scripting, smart contracts, etc. We prove theoretically, and experimentally verify, our proposal to show it achieves a crucial \u201cconvergence\u201d property \u2014 meaning that any valid transaction entering the system will quickly become enshrined into the ancestry upon which all future transactions will rest."}
{"_id": "b38e0fb48dbdf8624a91bb2b2ec24dbcb476fc40", "title": "Fuzzy Adaptive Sliding Mode Control of 6 DOF Parallel Manipulator with Electromechanical Actuators in Cartesian Space Coordinates", "text": "This paper proposes a fuzzy adaptive sliding mode controller in Cartesian space coordinates for trajectory tracking of the 6 DOF parallel manipulator with considering dynamics of electromechanical actuators. The 6-DOF sensors, may be very expensive or impossible to find at the desired accuracies; and also, especially at constant speeds, will contain errors; therefore, it is better to use LVDT position sensors and then, solving the forward kinematics problem using an iterative artificial neural network strategy, Instead of using the Numerical methods such as the Newton-Raphson method, that heavy computational load imposes to the system. This controller consists of a sliding mode control and adaptive learning algorithms to adjust uncertain parameters of system, that relax the requirement of bounding parameter values and not depending upon any parameter initialization conditions, and fuzzy approximators to estimate the plant\u2019s unknown nonlinear functions, and robustifying control terms to compensate of approximation errors; so that closedloop stability and finite-time convergence of tracking errors can be guaranteed. Simulation results demonstrate that the proposed control strategy can achieve favorable control performance with regard to uncertainties, nonlinearities and external disturbances."}
{"_id": "69155d0852cf46e45c52c0e0b6e965b77ff1c0be", "title": "Spatial Planning: A Configuration Space Approach", "text": "This paper presents algorithms for computing constraints on the position of an object due to the presence of ther objects. This problem arises in applications that require choosing how to arrange or how to move objects without collisions. The approach presented here is based on characterizing the position and orientation of an object as a single point in a configuration space, in which each coordinate represents a degree of freedom in the position or orientation of the object. The configurations forbidden to this object, due to the presence of other objects, can then be characterized as regions in the configuration space, called configuration space obstacles. The paper presents algorithms for computing these configuration space obstacles when the objects are polygons or polyhedra."}
{"_id": "5b4bc6f02be76fe9ed542dd23c4c6d274fec8ae4", "title": "Design and Describe REST API without Violating REST: A Petri Net Based Approach", "text": "As REST architectural style gains popularity in the web service community, there is a growing concern and debate on how to design Restful web services (REST API) in a proper way. We attribute this problem to lack of a standard model and language to describe a REST API that respects all the REST constraints. As a result, many web services that claim to be REST API are not hypermedia driven as prescribed by REST. This situation may lead to REST APIs that are not as scalable, extensible, and interoperable as promised by REST. To address this issue, this paper proposes REST Chart as a model and language to design and describe REST API without violating the REST constraints. REST Chart models a REST API as a special type of Colored Petri Net whose topology defines the REST API and whose token markings define the representational state space of user agents using that API. We demonstrate REST Chart with an example REST API. We also show how REST Chart can support efficient content negotiation and reuse hybrid representations to broaden design choices. Furthermore, we argue that the REST constraints, such as hypermedia driven and statelessness, can either be enforced naturally or checked automatically in REST Chart."}
{"_id": "e2ea20d8cb365cec59b19b621dae3e26c85b53f9", "title": "Power electronics for renewable energy systems: Wind turbine and photovoltaic systems", "text": "The use of renewable energy sources are increased because of the depletion of natural resources and the increasing pollution level from energy production. The wind energy and the solar energy are most widely used among the renewable energy sources. Power electronics is needed in almost all kinds of renewable energy system. It controls the renewable source and interfaces with the load effectively, which can be grid-connection or working in stand-alone mode. In this paper, overview of wind and photovoltaic energy systems are introduced. Next, the power electronic circuits behind the most common wind and photovoltaic configurations are discussed. Finally, their controls and important requirements for grid connection are explained."}
{"_id": "b2e3f06a00d4e585498ac4449d86a01c3c71680f", "title": "The LightCycler: a microvolume multisample fluorimeter with rapid temperature control.", "text": "Experimental and commercial microvolume fluorimeters with rapid temperature control are described. Fluorescence optics adopted from flow cytometry were used to interrogate 1-10-microL samples in glass capillaries. Homogeneous temperature control and rapid change of sample temperatures (10 degrees C/s) were obtained by a circulating air vortex. A prototype 2-color, 32-sample version was constructed with a xenon arc for excitation, separate excitation and emission paths, and photomultiplier tubes for detection. The commercial LightCycler, a 3-color, 24-sample instrument, uses a blue light-emitting diode for excitation, paraxial epi-illumination through the capillary tip and photodiodes for detection. Applications include analyte quantification and nucleic acid melting curves with fluorescent dyes, enzyme assays with fluorescent substrates and techniques that use fluorescence resonance energy transfer. Microvolume capability allows analysis of very small or expensive samples. As an example of one application, rapid cycle DNA amplification was continuously monitored by three different fluorescence techniques, Which included using the double-stranded DNA dye SYBR Green I, a dual-labeled 5'-exonuclease hydrolysis probe, and adjacent fluorescein and Cy5z-labeled hybridization probes. Complete amplification and analysis requires only 10-15 min."}
{"_id": "310745d7eaaf877acb5b3a7dc3778d91d2172633", "title": "The Internet of things.", "text": "Keynote David Rose The New Vanguard for Business: Connectivity, Design, and the Internet of Things The Internet of Things is the hottest topic of the moment a shift predicted to be as momentous as the impact of the internet itself. The internet has allowed us to share ideas and data largely input by humans. What about a world where data from objects as diverse as umbrellas, fridges, and gas tanks all flows through the internet?"}
{"_id": "93852ab424a74b893a163df1223d968fb69805b6", "title": "Kinetic Energy of Wind-Turbine Generators for System Frequency Support", "text": "As wind power penetration increases and fossil plants are retired, it is feared that there will be insufficient kinetic energy (KE) from the plants to support the system frequency. This paper shows the fear is groundless because the high inertias (H cong 4 seconds) of wind turbine-generators (WTGs) can be integrated to provide frequency support during generation outage."}
{"_id": "61578923fde1cafdc1e6824f72a31869889eb8d2", "title": "Enhancing financial performance with social media: An impression management perspective", "text": "The growing plethora of social media outlets have sparked both opportunity and concern in how organizations manage their corporate image. While previous research has examined the various problems associated with negative, word-of-mouth transference of information occurring simultaneously throughout many networks in social media, this paper seeks to address social media usage in impression management (IM). Specifically, we seek to answer two questions: Do IM direct-assertive strategies in social media impact a firm\u2019s financial performance? And which social media strategies impact a firm\u2019s financial performance? To analyze these questions in depth, we use text mining to collect and analyze text from a variety of social network platforms, including blogs, forums, and corporate websites, to assess how such IM strategies impact financial performance. Our results provide text mining validation that social media have a 1 Corresponding author positive impact on IM. We also provide further understanding of how social media strengthens organizations\u2019 communication with various internal and external stakeholders. Lastly, we provide future research ideas concerning social media\u2019s usage in IM."}
{"_id": "6f56eab88b70a4dd7a11255aaef97c0f87c919db", "title": "Low power/low voltage high speed CMOS differential track and latch comparator with rail-to-rail input", "text": "A new CMOS differential latched comparator suitable for low voltage, low-power application is presented. The circuit consists of a constant-gm rail-to-rail common-mode operational transconductance amplifier followed by a regenerative latch in a track and latch configuration to achieve a relatively constant delay. The use of a track and latch minimizes the total number of gain stages required for a given resolution. Potential offset from the constant-gm differential input stage, estimated as the main source of offset, can be minimized by proper choice of transistors sizes. Simulation results show that the circuit requires less than 86 \u03bcA with a supply voltage of 1.65 V in a standard CMOS 0.18 \u03bcm digital process. The average delay is less than 1 ns and is approximately independent of the common-mode input voltage."}
{"_id": "677c44572cd6ac6ccab36d74c8246c4d8785434f", "title": "From semantics to syntax and back again: Argument structure in the third year of life", "text": "An essential part of the human capacity for language is the ability to link conceptual or semantic representations with syntactic representations. On the basis of data from spontaneous production, suggested that young children acquire such links on a verb-by-verb basis, with little in the way of a general understanding of linguistic argument structure. Here, we suggest that a receptive understanding of argument structure--including principles linking syntax and conceptual/semantic structure--appears earlier. In a forced-choice pointing task we have shown that toddlers in the third year of life can map a single scene (involving a novel causative action paired with a novel verb) onto two distinct syntactic frames (transitive and intransitive). This suggests that even before toddlers begin generalizing argument structure in their own speech, they have some representation of conceptual/semantic categories, syntactic categories, and a system that links the two."}
{"_id": "6cf2afb2c5571b594038751b0cc76f548711c350", "title": "Physical Discipline and Behavior Problems in African American , European American , and Hispanic Children : Emotional Support as a Moderator", "text": "Using data collected over a 6-year period on a sample of 1,039 European American children, 550 African American children, and 401 Hispanic children from the children of the National Longitudinal Survey of Youth, this study assessed whether maternal emotional support of the child moderates the relation between spanking and behavior problems. Children were 4\u20135 years of age in the first of 4 waves of data used (1988, 1990, 1992, 1994). At each wave, mothers reported their use of spanking and rated their children\u2019s behavior problems. Maternal emotional support of the child was based on interviewer observations conducted as part of the Home Observation for Measurement of the Environment. For each of the 3 racial-ethnic groups, spanking predicted an increase in the level of problem behavior over time, controlling for income-needs ratio and maternal emotional support. Maternal emotional support moderated the link between spanking and problem behavior. Spanking was associated with an increase in behavior problems over time in the context of low levels of emotional support, but not in the context of high levels of emotional support. This pattern held for all 3 racial-ethnic groups."}
{"_id": "ac7257f58991d63ec9541b632a3c19d7c3db4275", "title": "Depressive symptomatology among university students in Denizli, Turkey: prevalence and sociodemographic correlates.", "text": "AIM\nTo determine overall and subgroup prevalence of depressive symptomatology among university students in Denizli, Turkey during the 1999-2000 academic year, and to investigate whether sociodemographic factors were associated with depressive symptoms in university students.\n\n\nMETHODS\nA stratified probability sample of 504 Turkish university students (296 male, 208 female) was used in a cross-sectional study. Data were obtained by self-administered questionnaire, including questions on sociodemographic characteristics and problem areas. The revised Beck Depression Inventory (BDI) was used to determine depressive symptoms of the participants. BDI scores 17 or higher were categorized as depressive for logistic regression analysis. Student t-test and linear regression were used for continuous data analysis.\n\n\nRESULTS\nOut of all participants, 26.2% had a BDI score 17 or higher. The prevalence of depressive symptoms increased to 32.1% among older students, 34.7% among students with low socioeconomic status, 31.2% among seniors, and 62.9% among students with poor school performance. The odds ratio of depressive symptoms was 1.84 (95% confidence interval [CI], 1.03-3.28) in students with low socioeconomic status and 7.34 (95% CI, 3.36-16.1) in students with poor school performance in the multivariate logistic model. The participants identified several problem areas: lack of social activities and shortage of facilities on the campus (69.0%), poor quality of the educational system (54.8%), economic problems (49.3%), disappointment with the university (43.2%), and friendship problems (25.9%).\n\n\nCONCLUSIONS\nConsidering the high frequency of depressive symptoms among Turkish university students, a student counseling service offering mental health assistance is necessary. This service should especially find the way to reach out to poor students and students with poor school performance."}
{"_id": "5754524110610b04cfe12d42433d92d72397a19a", "title": "Momentum Investment Strategies of Mutual Funds , Performance Persistence , and Survivorship Bias", "text": "We show that the persistent use of momentum investment strategies by mutual funds has important implications for the performance persistence and survivorship bias controversies. Using mutual fund portfolio holdings from a database free of survivorship bias, we \u0304nd that the best performing funds during one year are the best performers during the following year, with the exception of 1981, 1983, 1988, and 1989. This pattern corresponds exactly to the pattern exhibited by the momentum e\u00aeect in stock returns, \u0304rst documented by Jegadeesh and Titman (1993) and recently studied by Chan, Jegadeesh, and Lakonishok (1996). Our evidence points not only to the momentum e\u00aeect in stock returns, but to the persistent, active use of momentum strategies by mutual funds as the reasons for performance persistence. Moreover, essentially no persistence remains after controlling for the one-year momentum e\u00aeect in stock returns. We also explain why most recent studies have found that survivorship bias is a relatively small concern. Funds that were the best performers during one year are the worst performers during the following year whenever the momentum e\u00aeect in stock returns is absent, and these funds tend to disappear with a frequency not appreciably lower than that of consistently poor performers. Therefore, the pool of non-surviving funds is more representative of the cross-section of all funds than previously thought. Speci \u0304cally, we \u0304nd a di\u00aeerence of only 20 basis points per year in risk-adjusted pre-expense returns between the average fund and the average surviving fund."}
{"_id": "d1d9acd4a55c9d742a8b6736928711c3cfbe6526", "title": "Local Feature Based Multiple Object Instance Identification Using Scale and Rotation Invariant Implicit Shape Model", "text": "In this paper, we propose a Scale and Rotation Invariant Implicit Shape Model (SRIISM), and develop a local feature matching based system using the model to accurately locate and identify large numbers of object instances in an image. Due to repeated instances and cluttered background, conventional methods for multiple object instance identification suffer from poor identification results. In the proposed SRIISM, we model the joint distribution of object centers, scale, and orientation computed from local feature matches in Hough voting, which is not only invariant to scale changes and rotation of objects, but also robust to false feature matches. In the multiple object instance identification system using SRIISM, we apply a fast 4D bin search method in Hough space with complexity O(n), where n is the number of feature matches, in order to segment and locate each instance. Futhermore, we apply maximum likelihood estimation (MLE) for accurate object pose detection. In the evaluation, we created datasets simulating various industrial applications such as pick-and-place and inventory management. Experiment results on the datasets show that our method outperforms conventional methods in both accuracy (5%-30% gain) and speed (2x speed up)."}
{"_id": "1cf74b1f4fbd05250701c86a45044534a66c7f5e", "title": "Robust Object Detection with Interleaved Categorization and Segmentation", "text": "This paper presents a novel method for detecting and localizing objects of a visual category in cluttered real-world scenes. Our approach considers object categorization and figure-ground segmentation as two interleaved processes that closely collaborate towards a common goal. As shown in our work, the tight coupling between those two processes allows them to benefit from each other and improve the combined performance. The core part of our approach is a highly flexible learned representation for object shape that can combine the information observed on different training examples in a probabilistic extension of the Generalized Hough Transform. The resulting approach can detect categorical objects in novel images and automatically infer a probabilistic segmentation from the recognition result. This segmentation is then in turn used to again improve recognition by allowing the system to focus its efforts on object pixels and to discard misleading influences from the background. Moreover, the information from where in the image a hypothesis draws its support is employed in an MDL based hypothesis verification stage to resolve ambiguities between overlapping hypotheses and factor out the effects of partial occlusion. An extensive evaluation on several large data sets shows that the proposed system is applicable to a range of different object categories, including both rigid and articulated objects. In addition, its flexible representation allows it to achieve competitive object detection performance already from training sets that are between one and two orders of magnitude smaller than those used in comparable systems."}
{"_id": "1fe6afba10f5e1168d63b7eca0483f9c57337d57", "title": "Real-time object detection and localization with SIFT-based clustering", "text": "a r t i c l e i n f o Keywords: Pick-and-place applications Machine vision for industrial applications SIFT This paper presents an innovative approach for detecting and localizing duplicate objects in pick-and-place applications under extreme conditions of occlusion, where standard appearance-based approaches are likely to be ineffective. The approach exploits SIFT keypoint extraction and mean shift clustering to partition the correspondences between the object model and the image onto different potential object instances with real-time performance. Then, the hypotheses of the object shape are validated by a projection with a fast Euclidean transform of some delimiting points onto the current image. Moreover, in order to improve the detection in the case of reflective or transparent objects, multiple object models (of both the same and different faces of the object) are used and fused together. Many measures of efficacy and efficiency are provided on random disposals of heavily-occluded objects, with a specific focus on real-time processing. Experimental results on different and challenging kinds of objects are reported. Information technologies have become in the last decades a fundamental aid for helping the automation of everyday people life and industrial processes. Among the many different disciplines contributing to this process, machine vision and pattern recognition have been widely used for industrial applications and especially for robot vision. A typical need is to automate the pick-and-place process of picking up objects, possibly performing some tasks, and then placing them down on a different location. Most of the pick-and-place systems are basically composed of robotic systems and sensors. The sensors are in charge of driving the robot arms to the right 3D location and possibly orientation of the next object to be picked up, according to the robot's degrees of freedom. Object picking can be very complicated if the scene is not well structured and constrained. The automation of object picking by using cameras, however, requires to detect and localize objects in the scene; they are crucial tasks for several other computer vision applications, such as image/ video retrieval [1,2], or automatic robot navigation [3]. This paper describes a new complete approach for pick-and-place processes with the following challenging requirements: 1. Different types of objects: the approach should work with every type of object of different dimension and complexity, with reflective surfaces or semi-transparent parts, such as in the case of pharmaceutical and cosmetic objects, often reflective or included in transparent flowpacks; 2. \u2026"}
{"_id": "0f04a0b658f00f329687d8ba94d9fca25269b4b7", "title": "Extensibility, Safety and Performance in the SPIN Operating System", "text": "This paper describes the motivation architecture and performance of SPIN an extensible operating system SPIN provides an extension infrastructure together with a core set of extensible services that allow applica tions to safely change the operating system s interface and implementation Extensions allow an application to specialize the underlying operating system in order to achieve a particular level of performance and function ality SPIN uses language and link time mechanisms to inexpensively export ne grained interfaces to operat ing system services Extensions are written in a type safe language and are dynamically linked into the op erating system kernel This approach o ers extensions rapid access to system services while protecting the op erating system code executing within the kernel address space SPIN and its extensions are written in Modula and run on DEC Alpha workstations"}
{"_id": "00bbfde6af97ce5efcf86b3401d265d42a95603d", "title": "Feature hashing for large scale multitask learning", "text": "Empirical evidence suggests that hashing is an effective strategy for dimensionality reduction and practical nonparametric estimation. In this paper we provide exponential tail bounds for feature hashing and show that the interaction between random subspaces is negligible with high probability. We demonstrate the feasibility of this approach with experimental results for a new use case --- multitask learning with hundreds of thousands of tasks."}
{"_id": "1805f1ba9bc1ae4f684ac04d3118cf6043587dfc", "title": "A new approach to unwrap a 3-D fingerprint to a 2-D rolled equivalent fingerprint", "text": "For many years, fingerprints have been captured by pressing a finger against a paper or hard surface. This touch-based fingerprint acquisition introduces some problems such as distortions and deformations in the acquired images, which arise due to the contact of the fingerprint surface with the sensor platen, and degrades the recognition performance. A new touch-less fingerprint technology has been recently introduced to the market, which can address the problems with the contact-based fingerprint systems. In this paper, we propose a new algorithm for unwrapping the acquired 3-D scan of the subject's finger into a 2-D rolled equivalent image. Therefore, The resulting image can be matched with the conventional 2-D scans; it also can be used for matching unwrapped 3-D fingerprints among themselves with the 2-D fingerprint matching algorithms. The algorithm is based on curvature analysis of the 3-D surface. The quality of the resulting image is evaluated and analyzed using NIST fingerprint image software."}
{"_id": "73b911968bff8e855c22c7fcbd5f17d1c95456f5", "title": "Functional Search-based Testing from State Machines", "text": "The application of metaheuristic search techniques in test data generation has been extensively investigated in recent years. Most studies, however, have concentrated on the application of such techniques in structural testing. The use of search-based techniques in functional testing is less frequent, the main cause being the implicit nature of the specification. This paper investigates the use of search-based techniques for functional testing, having the specification in form of a state machine. Its purpose is to generate input data for chosen paths in a state machine, so that the parameter values provided to the methods satisfy the corresponding guards and trigger the desired transitions. A general form of a fitness function for an individual path is presented and this approach is empirically evaluated using three search techniques: simulated annealing, genetic algorithms and particle swarm optimization."}
{"_id": "d6c1438b62796662a099d653da6235ecefdf07ed", "title": "Ensemble Pruning Using Reinforcement Learning", "text": "Multiple Classifier systems have been developed in order to improve classification accuracy using methodologies for effective classifier combination. Classical approaches use heuristics, statistical tests, or a meta-learning level in order to find out the optimal combination function. We study this problem from a Reinforcement Learning perspective. In our modeling, an agent tries to learn the best policy for selecting classifiers by exploring a state space and considering a future cumulative reward from the environment. We evaluate our approach by comparing with state-of-the-art combination methods and obtain very promising results."}
{"_id": "0c56c03a65a9c38561b2d6fd01e55dc80df357c3", "title": "Improved ID3 algorithm", "text": "as the classical algorithm of the decision tree classification algorithm, ID3 is famous for the merits of high classifying speed easy, strong learning ability and easy construction. But when use it to classify, there does exist the problem of inclining to chose attributions which has many values, which affects its practicality. This paper for solving the problem a decision tree algorithm based on attribute-importance is proposed. The improved algorithm uses attribute-importance to increase information gain of attribution which has fewer attributions and compares ID3 with improved ID3 by an example. The experimental analysis of the data show that the improved ID3 algorithm can get more reasonable and more effective rules."}
{"_id": "adb705bd2b16d1f839d803ae5a94dc555f4e61ff", "title": "Convolutional Neural Network Information Fusion based on Dempster-Shafer Theory for Urban Scene Understanding", "text": "Dempster-Shafer theory provides a sensor fusion framework that autonomously accounts for obstacle occlusion in dynamic, urban environments. However, to discern static and moving obstacles, the Dempster-Shafer approach requires manual tuning of parameters dependent on the situation and sensor types. The proposed methodology utilizes a deep fully convolutional neural network to improve the robust performance of the information fusion algorithm in distinguishing static and moving obstacles from navigable space. The image-like spatial structure of probabilistic occupancy allows a semantic segmentation framework to discern classes for individual grid cells. A subset of the KITTI LIDAR tracking dataset in combination with semantic map data was used for the information fusion task. The probabilistic occupancy grid output of the Dempster-Shafer information fusion algorithm was provided as input to the neural network. The network then learned an offset from the original DST result to improve semantic labeling performance. The proposed framework outperformed the baseline approach in the mean intersection over union metric reaching 0.546 and 0.531 in the validation and test sets respectively. However, little improvement was achieved in discerning moving and static cells due to the limited dataset size. To improve model performance in future work, the dataset will be expanded to facilitate more effective learning, and temporal data will be fed through individual convolutional networks prior to being merged in channels as input to the main network."}
{"_id": "b8811144fb24a25335bf30dedfb70930d2f67055", "title": "MARVEL: A Large-Scale Image Dataset for Maritime Vessels", "text": ""}
{"_id": "d96c02619add7c6a98ceb245758b10440dc3d5ad", "title": "Baseline Mechanisms for Enterprise Governance of IT in SMEs", "text": "Information Technology (IT) has become fundamental for most organizations since it is vital to their sustainability, development and success. This pervasive use led organizations to a critical dependency on IT. Despite the benefits, it exposes organizations to several risks. Hence, a significant focus on Enterprise Governance of IT (EGIT) is required. EGIT involve the definition and implementation of processes, structures and relational mechanisms to support the business/IT alignment and the creation of business value from IT investments. These mechanisms act as a mean to direct and operationalize IT-related decision-making. However, identifying the appropriate mechanisms is a complex task since there are internal and external contingency factors that influence the design and implementation of EGIT. Small and Medium Enterprises (SMEs) are considered key elements to promote economic growth, job creation, social integration and innovation. The use of IT in these organizations can imply severe consequences on survival and growth in highly competitive markets, becoming critical in this globalization era. Several studies were developed to investigate and identify EGIT mechanisms in different contingencies but very few focused on the organization' size criterion. However, SMEs have particular characteristics that require further investigation. Therefore, our goal in this research is to evaluate EGIT mechanisms in a SME's context and identify a baseline of mechanisms for SMEs using semi-structured interviews with IT experts that have experience in SMEs. The article ends by presenting contributions, limitations and future work."}
{"_id": "c720bd17e837415de3e602ee844288546eb576fa", "title": "Operability of Folded Microstrip Patch-Type Tag Antenna in the UHF RFID Bands Within 865-928 MHz", "text": "The emerging use of passive radio frequency identification (RFID) systems at ultra-high-frequency (UHF) spectrum requires application specific tag antenna designs for challenging applications. Many objects containing conductive material need novel tag antenna designs for reliable identification. The operability of folded microstrip patch-type tag antenna for objects containing conductive material is analyzed and tested within the UHF RFID bands used at the moment mainly in Europe and in North and South America. First the operability of the tag antenna design and the effects of conductive material are modeled with simulations based on finite element method (FEM). The performance of the tag antenna design affixed to a package containing metallic foil is verified with read range measurements. The results show that the antenna design is operable in both of the UHF RFID bands within 865-928 MHz"}
{"_id": "8e6d963a9c1115de61e7672c6d4c54c781b3e54d", "title": "Lessons from the journey: a query log analysis of within-session learning", "text": "The Internet is the largest source of information in the world. Search engines help people navigate the huge space of available data in order to acquire new skills and knowledge. In this paper, we present an in-depth analysis of sessions in which people explicitly search for new knowledge on the Web based on the log files of a popular search engine. We investigate within-session and cross-session developments of expertise, focusing on how the language and search behavior of a user on a topic evolves over time. In this way, we identify those sessions and page visits that appear to significantly boost the learning process. Our experiments demonstrate a strong connection between clicks and several metrics related to expertise. Based on models of the user and their specific context, we present a method capable of automatically predicting, with good accuracy, which clicks will lead to enhanced learning. Our findings provide insight into how search engines might better help users learn as they search."}
{"_id": "afdb7a95e28c0706bdb5350a4cf46cce94e3de97", "title": "Dual-Band Multi-Pole Directional Filter for Microwave Multiplexing Applications", "text": "A novel microstrip directional filter for multiplexing applications is presented. This device uses composite right-left-handed transmission lines and resonators to achieve dual-band frequency response. In addition, by cascading two or more stages using dual frequency immitance inverters, multi-pole configurations can be obtained. Simulation and experimental results are presented with good agreement."}
{"_id": "919f3ad43b48142ca52c979a956f0799e11d0f6c", "title": "Multi-domain spoken language understanding with transfer learning", "text": "This paper addresses the problem of multi-domain spoken language understanding (SLU) where domain detection and domaindependent semantic tagging problems are combined. We present a transfer learning approach to the multi-domain SLU problem in which multiple domain-specific data sources can be incorporated. To implement multi-domain SLU with transfer learning, we introduce a triangular-chain structured model. This model effectively learns multiple domains in parallel, and allows use of domain-independent patterns among domains to create a better model for the target domain. We demonstrate that the proposed method outperforms baseline models on dialog data for multi-domain SLU problems. 2009 Elsevier B.V. All rights reserved."}
{"_id": "2b028c2cc8864ead78775d3a1c0efabe202f86c3", "title": "Can Computers Overcome Humans? Consciousness Interaction and its Implications", "text": "Can computers overcome human capabilities? This is a paradoxical and controversial question, particularly because there are many hidden assumptions. This article focuses on that issue putting on evidence some misconception related to future generations of machines and the understanding of the brain. It will discuss to what extent computers might reach human capabilities, and how it would be possible only if the computer is a conscious machine. However, it will be shown that if the computer is conscious, an interference process due to consciousness would affect the information processing of the system. Therefore, it might be possible to make conscious machines to overcome human capabilities, which will have similar limitations than humans. In other words, trying to overcome human capabilities with computers implies the paradoxical conclusion that a computer will never overcome human capabilities at all, or if the computer does, it should not be considered as a computer anymore."}
{"_id": "9c37e8d5bb8fcd6d5b24edb18e5ce422b459e856", "title": "High-Speed Parallel Decimal Multiplication with Redundant Internal Encodings", "text": "The decimal multiplication is one of the most important decimal arithmetic operations which have a growing demand in the area of commercial, financial, and scientific computing. In this paper, we propose a parallel decimal multiplication algorithm with three components, which are a partial product generation, a partial product reduction, and a final digit-set conversion. First, a redundant number system is applied to recode not only the multiplier, but also multiples of the multiplicand in signed-digit (SD) numbers. Furthermore, we present a multioperand SD addition algorithm to reduce the partial product array. Finally, a digit-set conversion algorithm with a hybrid prefix network to decrease the number of the logic gates on the critical path is discussed. An analysis of the timing delay and an HDL model synthesized under 90 nm technology show that by considering the tradeoff of designs among three components, the overall delay of the proposed 16 \u00d7 16-digit multiplier takes about 11 percent less timing delay with 2 percent less area compared to the current fastest design."}
{"_id": "a9001afcff3aee4e0e0b13289d06e3f3c403eafd", "title": "Survey of Large-Scale MIMO Systems", "text": "The escalating teletraffic growth imposed by the proliferation of smartphones and tablet computers outstrips the capacity increase of wireless communications networks. Furthermore, it results in substantially increased carbon dioxide emissions. As a powerful countermeasure, in the case of full-rank channel matrices, MIMO techniques are potentially capable of linearly increasing the capacity or decreasing the transmit power upon commensurately increasing the number of antennas. Hence, the recent concept of large-scale MIMO (LS-MIMO) systems has attracted substantial research attention and been regarded as a promising technique for next-generation wireless communications networks. Therefore, this paper surveys the state of the art of LS-MIMO systems. First, we discuss the measurement and modeling of LS-MIMO channels. Then, some typical application scenarios are classified and analyzed. Key techniques of both the physical and network layers are also detailed. Finally, we conclude with a range of challenges and future research topics."}
{"_id": "31bd199555b926c6f985d0bbf4c71f5c46b5a078", "title": "Physical Database Design: the database professional's guide to exploiting indexes, views, storage, and more", "text": "Designations used by companies to distinguish their products are often claimed as trademarks or registered trademarks. In all instances in which Morgan Kaufmann Publishers is aware of a claim, the product names appear in initial capital or all capital letters. Readers, however, should contact the appropriate companies for more complete information regarding trademarks and registration. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means\u2014electronic, mechanical, photocopying, scanning, or otherwise\u2014without prior written permission of the publisher. may also complete your request on-line via the Elsevier homepage (http://elsevier.com), by selecting \" Support & Contact \" then \" Copyright and Permission \" and then \" Obtaining Permissions. \" Page 197: \" Make a new plan Stan. .. and get yourself free. Physical database design : the database professional's guide to exploiting indexes, views, storage, and more / Sam Lightstone, Toby Teorey, and Tom Nadeau. p. cm.-(The Morgan Kaufmann series in database management systems) Includes bibliographical references and index."}
{"_id": "1dba04ee74e7aa5db2b3264ce473e215c332251f", "title": "Area and length minimizing flows for shape segmentation", "text": "A number of active contour models have been proposed that unify the curve evolution framework with classical energy minimization techniques for segmentation, such as snakes. The essential idea is to evolve a curve (in two dimensions) or a surface (in three dimensions) under constraints from image forces so that it clings to features of interest in an intensity image. The evolution equation has been derived from first principles as the gradient flow that minimizes a modified length functional, tailored to features such as edges. However, because the flow may be slow to converge in practice, a constant (hyperbolic) term is added to keep the curve/surface moving in the desired direction. We derive a modification of this term based on the gradient flow derived from a weighted area functional, with image dependent weighting factor. When combined with the earlier modified length gradient flow, we obtain a partial differential equation (PDE) that offers a number of advantages, as illustrated by several examples of shape segmentation on medical images. In many cases the weighted area flow may be used on its own, with significant computational savings."}
{"_id": "a63bcdb0c3cd459d59700160bfa9d600f3d8aebe", "title": "Research Advances in Cloud Computing", "text": "Serverless computing has emerged as a new compelling paradigm for the deployment of applications and services. It represents an evolution of cloud programmingmodels, abstractions, and platforms, and is a testament to the maturity and wide adoption of cloud technologies. In this chapter, we survey existing serverless platforms from industry, academia, and open-source projects, identify key characteristics and use cases, and describe technical challenges and open problems. I. Baldini \u00b7 P. Castro (B) \u00b7 K. Chang \u00b7 P. Cheng \u00b7 S. Fink \u00b7 N. Mitchell \u00b7 V. Muthusamy \u00b7 R. Rabbah \u00b7 A. Slominski IBM Research, New York, USA e-mail: ioana@us.ibm.com P. Castro e-mail: castrop@us.ibm.com K. Chang e-mail: Kerry.Chang@ibm.com P. Cheng e-mail: perry@us.ibm.com S. Fink e-mail: sjfink@us.ibm.com N. Mitchell e-mail: nickm@us.ibm.com V. Muthusamy (B) e-mail: vmuthus@us.ibm.com R. Rabbah e-mail: rabbah@us.ibm.com A. Slominski (B) e-mail: aslom@us.ibm.com V. Ishakian (B) Bentley University, Waltham, USA e-mail: vishakian@bentley.edu P. Suter Two Sigma, New York, USA \u00a9 Springer Nature Singapore Pte Ltd. 2017 S. Chaudhary et al. (eds.), Research Advances in Cloud Computing, DOI 10.1007/978-981-10-5026-8_1 1"}
{"_id": "84b8ffa0e39a4149bd62122a84c081a7ad06b056", "title": "Primed for Violence : The Role of Gender Inequality in Predicting Internal Conflict", "text": "We know, most notably through Ted Gurr\u2019s research, that ethnic discrimination can lead to ethnopolitical rebellion\u2013intrastate conflict. I seek to discover what impact, if any, gender inequality has on intrastate conflict. Although democratic peace scholars and others highlight the role of peaceful domestic behavior in predicting state behavior, many scholars have argued that a domestic environment of inequality and violenceFstructural and cultural violenceFresults in a greater likelihood of violence at the state and the international level. This project contributes to this line of inquiry and further tests the grievance theory of intrastate conflict by examining the norms of violence that facilitate a call to arms. And in many ways, I provide an alternative explanation for the significance of some of the typical economic measuresFthe greed theoryFbased on the link between discrimination, inequality, and violence. I test whether states characterized by higher levels of gender inequality are more likely to experience intrastate conflict. Ultimately, the basic link between gender inequality and intrastate conflict is confirmedFstates characterized by gender inequality are more likely to experience intrastate conflict, 1960\u20132001."}
{"_id": "53b3fd527da1cc4367319da03b517636e33528de", "title": "Sales Taxes and Internet Commerce\u2217", "text": "We estimate the sensitivity of Internet retail purchasing to sales taxes using data from the eBay marketplace. Our first approach exploits the fact that seller locations are revealed only after buyers have expressed interest in an item by clicking on its listing. We use millions of location \u201csurprises\u201dto estimate price elasticities with respect to the charged sales tax. We then use aggregated data to estimate cross-state substitution parameters, and substitution between offl ine and online purchases, relying both on cross-sectional variation in state and local sales taxes, and on changes in these rates over time. We find substantial sensitivity to sales taxes. Using our item-level approach, we find an average price elasticity of around -2 for interested buyers. Using our aggregate approach, we find that a one percentage point increase in a state\u2019s sales tax increases online purchases by state residents by just under two percent, but decreases their online purchases from home-state retailers by 3-4 percent. (JEL: D12, H20, H71, L81) \u2217We are grateful to Glenn Ellison and many seminar participants for useful comments. We appreciate support from the Stanford Institute for Economic Policy Research and the Toulouse Network on Information Technology. Data access for this study was obtained under a consulting agreement between the Stanford authors (Einav, Knoepfle and Levin) and eBay Research. \u2020Department of Economics, Stanford University, Stanford, CA 94305-6072 (Einav, Knoepfle, Levin), NBER (Einav and Levin), and eBay Labs (Sundaresan). Email: leinav@stanford.edu, knoepfle@stanford.edu, jdlevin@stanford.edu, nsundaresan@ebay.com."}
{"_id": "9d2cdcd0009490d45218b4e9c42f087b27bc9fcb", "title": "Internet Web servers: workload characterization and performance implications", "text": "This paper presents a workload characterization study for Internet Web servers. Six different data sets are used in the study: three from academic environments, two from scientific research organizations, and one from a commercial Internet provider. These data sets represent three different orders of magnitude in server activity, and two different orders of magnitude in time duration, ranging from one week of activity to one year. The workload characterization focuses on the document type distribution, the document size distribution, the document referencing behavior, and the geographic distribution of server requests. Throughout the study, emphasis is placed on finding workload characteristics that are common to all the data sets studied. Ten such characteristics are identified. The paper concludes with a discussion of caching and performance issues, using the observed workload characteristics to suggest performance enhancements that seem promising for Internet Web servers."}
{"_id": "13cf6e3658598e92a24feb439e532894e4aa68e3", "title": "TIARA: a visual exploratory text analytic system", "text": "In this paper, we present a novel exploratory visual analytic system called TIARA (Text Insight via Automated Responsive Analytics), which combines text analytics and interactive visualization to help users explore and analyze large collections of text. Given a collection of documents, TIARA first uses topic analysis techniques to summarize the documents into a set of topics, each of which is represented by a set of keywords. In addition to extracting topics, TIARA derives time-sensitive keywords to depict the content evolution of each topic over time. To help users understand the topic-based summarization results, TIARA employs several interactive text visualization techniques to explain the summarization results and seamlessly link such results to the original text. We have applied TIARA to several real-world applications, including email summarization and patient record analysis. To measure the effectiveness of TIARA, we have conducted several experiments. Our experimental results and initial user feedback suggest that TIARA is effective in aiding users in their exploratory text analytic tasks."}
{"_id": "1b255cda9c92a5fbaba319aeb4b4ec532693c2a4", "title": "Dynamic topic models", "text": "A family of probabilistic time series models is developed to analyze the time evolution of topics in large document collections. The approach is to use state space models on the natural parameters of the multinomial distributions that represent the topics. Variational approximations based on Kalman filters and nonparametric wavelet regression are developed to carry out approximate posterior inference over the latent topics. In addition to giving quantitative, predictive models of a sequential corpus, dynamic topic models provide a qualitative window into the contents of a large document collection. The models are demonstrated by analyzing the OCR'ed archives of the journal Science from 1880 through 2000."}
{"_id": "31b52afe4db00f97a703d534803165bbd55359c2", "title": "EventRiver: Visually Exploring Text Collections with Temporal References", "text": "Many text collections with temporal references, such as news corpora and weblogs, are generated to report and discuss real life events. Thus, event-related tasks, such as detecting real life events that drive the generation of the text documents, tracking event evolutions, and investigating reports and commentaries about events of interest, are important when exploring such text collections. To incorporate and leverage human efforts in conducting such tasks, we propose a novel visual analytics approach named EventRiver. EventRiver integrates event-based automated text analysis and visualization to reveal the events motivating the text generation and the long term stories they construct. On the visualization, users can interactively conduct tasks such as event browsing, tracking, association, and investigation. A working prototype of EventRiver has been implemented for exploring news corpora. A set of case studies, experiments, and a preliminary user test have been conducted to evaluate its effectiveness and efficiency."}
{"_id": "0c1960cbedb9be693e96cc5c3c8889dc5879332c", "title": "Evolutionary hierarchical dirichlet processes for multiple correlated time-varying corpora", "text": "Mining cluster evolution from multiple correlated time-varying text corpora is important in exploratory text analytics. In this paper, we propose an approach called evolutionary hierarchical Dirichlet processes (EvoHDP) to discover interesting cluster evolution patterns from such text data. We formulate the EvoHDP as a series of hierarchical Dirichlet processes~(HDP) by adding time dependencies to the adjacent epochs, and propose a cascaded Gibbs sampling scheme to infer the model. This approach can discover different evolving patterns of clusters, including emergence, disappearance, evolution within a corpus and across different corpora. Experiments over synthetic and real-world multiple correlated time-varying data sets illustrate the effectiveness of EvoHDP on discovering cluster evolution patterns."}
{"_id": "1441c41d266ce48a2041bd4da0468eec961ddf4f", "title": "The Word Tree, an Interactive Visual Concordance", "text": "We introduce the Word Tree, a new visualization and information-retrieval technique aimed at text documents. A Word Tree is a graphical version of the traditional \"keyword-in-context\" method, and enables rapid querying and exploration of bodies of text. In this paper we describe the design of the technique, along with some of the technical issues that arise in its implementation. In addition, we discuss the results of several months of public deployment of word trees on Many Eyes, which provides a window onto the ways in which users obtain value from the visualization."}
{"_id": "84badeea160675906c0d3f2a30e286cf60d79d41", "title": "Total variation regularization of local-global optical flow", "text": "More data fidelity terms in variational optical flow methods improve the estimation's robustness. A robust and anisotropic smoother enhances the specific fill-in process. This work presents a combined local-global (CLG) approach with total variation regularization. The combination of bilateral filtering and anisotropic (image driven) regularization is used to control the propagation phenomena. The resulted method, CLG-TV, is able to compute larger displacements in a reasonable time. The numerical scheme is highly parallelizable and runs in real-time on current generation graphic processing units."}
{"_id": "73b671a99c6404ebcf1bf664cb9e3ab539570d4a", "title": "Learning intermediate object affordances: Towards the development of a tool concept", "text": "Inspired by the extraordinary ability of young infants to learn how to grasp and manipulate objects, many works in robotics have proposed developmental approaches to allow robots to learn the effects of their own motor actions on objects, i.e., the objects affordances. While holding an object, infants also promote its contact with other objects, resulting in object-object interactions that may afford effects not possible otherwise. Depending on the characteristics of both the held object (intermediate) and the acted object (primary), systematic outcomes may occur, leading to the emergence of a primitive concept of tool. In this paper we describe experiments with a humanoid robot exploring object-object interactions in a playground scenario and learning a probabilistic causal model of the effects of actions as functions of the characteristics of both objects. The model directly links the objects' 2D shape visual cues to the effects of actions. Because no object recognition skills are required, generalization to novel objects is possible by exploiting the correlations between the shape descriptors. We show experiments where an affordance model is learned in a simulated environment, and is then used on the real robotic platform, showing generalization abilities in effect prediction. We argue that, despite the fact that during exploration no concept of tool is given to the system, this very concept may emerge from the knowledge that intermediate objects lead to significant effects when acting on other objects."}
{"_id": "2d662a458148afa4a10add1e6f9815969bd1adc6", "title": "Stereovision-Based Object Segmentation for Automotive Applications", "text": "Obstacle detection and classification in a complex urban area are highly demanding, but desirable for pedestrian protection, stop & go, and enhanced parking aids. The most difficult task for the system is to segment objects from varied and complicated background. In this paper, a novel position-based object segmentationmethod has been proposed to solve this problem. According to the method proposed, object segmentation is performed in two steps: in depth map (X-Z plane) and in layered images (X-Y planes). The stereovision technique is used to reconstruct image points and generate the depth map. Objects are detected in the depth map. Afterwards, the original edge image is separated into different layers based on the distance of detected objects. Segmentation performed in these layered images can be easier and more reliable. It has been proved that the proposed method offers robust detection of potential obstacles and accurate measurement of their location and size."}
{"_id": "37b9b5a5eb63349a3e6f75d5c4c061d7dbc87f4e", "title": "Attacks on Copyright Marking Systems", "text": "In the last few years, a large number of schemes have been proposed for hiding copyright marks and other information in digital pictures, video, audio and other multimedia objects. We describe some contenders that have appeared in the research literature and in the field; we then present a number of attacks that enable the information hidden by them to be removed or otherwise rendered unusable. 1 Information Hiding Applications The last few years have seen rapidly growing interest in ways to hide information in other information. A number of factors contributed to this. Fears that copyright would be eroded by the ease with which digital media could be copied led people to study ways of embedding hidden copyright marks and serial numbers in audio and video; concern that privacy would be eroded led to work on electronic cash, anonymous remailers, digital elections and techniques for making mobile computer users harder for third parties to trace; and there remain the traditional \u2018military\u2019 concerns about hiding one\u2019s own traffic while making it hard for the opponent to do likewise. The first international workshop on information hiding [2] brought these communities together and a number of hiding schemes were presented there; more have been presented elsewhere. We formed the view that useful progress in steganography and copyright marking might come from trying to attack all these first generation schemes. In the related field of cryptology, progress was iterative: cryptographic algorithms were proposed, attacks on them were found, more algorithms were proposed, and so on. Eventually, theory emerged: fast correlation attacks on stream ciphers and differential and linear attacks on block ciphers, now help us understand the strength of cryptographic algorithms in much more detail than before. Similarly, many cryptographic protocols were proposed and almost all the early candidates were broken, leading to concepts of protocol robustness and techniques for formal verification [6]. So in this paper, we first describe the copyright protection context in which most recent schemes have been developed; we then describe a selection of these ? The first author is grateful to Intel Corporation for financial support under the grant \u2018Robustness of Information Hiding Systems\u2019 ?? The third author is supported by a European Commission Marie-Curie grant David Aucsmith (Ed.): Information Hiding 1998, LNCS 1525, pp. 218\u2013238, 1998. c \u00a9 Springer-Verlag Berlin Heidelberg 1998 Attacks on Copyright Marking Systems 219 schemes and present a number of attacks, which break most of them. We finally make some remarks on the meaning of robustness in the context of steganography in general and copyright marking in particular. 1.1 Copyright Protection Issues Digital recording media offer many new possibilities but their uptake has been hindered by widespread fears among intellectual property owners such as Hollywood and the rock music industry that their livelihoods would be threatened if users could make unlimited perfect copies of videos, music and multimedia works. One of the first copy protection mechanisms for digital media was the serial copy management system (SCMS) introduced by Sony and Phillips for digital audio tapes in the eighties [31]. 
The idea was to allow consumers to make a digital audio tape of a CD they owned in order to use it (say) in their car, but not to make a tape of somebody else’s tape; thus copies would be limited to first generation only. The implementation was to include a Boolean marker in the header of each audio object. Unfortunately this failed because the hardware produced by some manufacturers did not enforce it. More recently, the Digital Video Disk, also known as Digital Versatile Disk (DVD), consortium called for proposals for a copyright marking scheme to enforce serial copy management. The idea is that the DVD players sold to consumers will allow unlimited copying of home videos and time-shifted viewing of TV programmes, but cannot easily be abused for commercial piracy [19, 44]. The proposed implementation is that videos will be unmarked, or marked ‘never copy’, or ‘copy once only’; compliant players would not record a video marked ‘never copy’ and when recording one marked ‘copy once only’ would change its mark to ‘never copy’. Commercially sold videos would be marked ‘never copy’, while TV broadcasts and similar material would be marked ‘copy once only’ and home videos would be unmarked. Electronic copyright management schemes have also been proposed by European projects such as Imprimatur and CITED [45, 66, 67], and American projects such as the one proposed by the Working Group on Intellectual Property Rights [69]."}
{"_id": "9ebe574f95efdb5868a60764bb2f46e2783a00a6", "title": "DPIL@FIRE2016: Overview of the Shared task on Detecting Paraphrases in Indian language", "text": "This paper explains the overview of the shared task \"Detecting Paraphrases in Indian Languages\" (DPIL) conducted at FIRE 2016. Given a pair of sentences in the same language, participants are asked to detect the semantic equivalence between the sentences. The shared task is proposed for four Indian languages namely Tamil, Malayalam, Hindi and Punjabi. The dataset created for the shared task has been made available online and it is the first open-source paraphrase detection corpora for Indian languages."}
{"_id": "048dbd4acf1c34fc87bee4b67adf965e90acb38c", "title": "A comparison of trading algorithms based on machine learning classifiers : application on the S & P 500", "text": "In this paper we analyse whether different machine learning classifiers can be useful when it comes to financial data prediction. We first measure the accuracy and the volatility of our algorithm on the stocks included in the S&P 500 stock index. We then back-test it against the same benchmark for two different periods. Overall, our results show that an algorithm that trades just according to the predictions of the classifiers, is not profitable even if the accuracy obtained by the algorithm is higher than the accuracy that should be obtained according to the random walk theory. However, it can boost the results obtained by other strategies, maintaining the same level of volatility. JEL classification: C4, C8, G14."}
{"_id": "6bf703874706ba2b5cc1b6e49f9d1ae59ff1ce5f", "title": "Visual Perception of Parallel Coordinate Visualizations", "text": "Parallel coordinates is a visualization technique that provides an unbiased representation of high-dimensional data. The parallel configuration of axes treats data dimensions uniformly and is well suited for exploratory visualization. However, first-time users of parallel coordinate visualizations can find the representation confusing and difficult to understand.We used eye tracking to study how parallel coordinate visualizations are perceived, and compared the results to the optimal visual scan path required to complete the tasks. The results indicate that even first-time users quickly learn how to use parallel coordinate visualizations, pay attention to the correct task-specific areas in the visualization, and become rapidly proficient with it."}
{"_id": "37e1fc37a3ee90f24d85ad6fd3e5c51d3f5ab4fd", "title": "Attentive Explanations: Justifying Decisions and Pointing to the Evidence", "text": "Deep models are the defacto standard in visual decision problems due to their impressive performance on a wide array of visual tasks. However, they are frequently seen as opaque and are unable to explain their decisions. In contrast, humans can justify their decisions with natural language and point to the evidence in the visual world which supports their decisions. We propose a method which incorporates a novel explanation attention mechanism; our model is trained using textual rationals, and infers latent attention to visually ground explanations. We collect two novel datasets in domains where it is interesting and challenging to explain decisions. First, we extend the visual question answering task to not only provide an answer but also visual and natural language explanations for the answer. Second, we focus on explaining human activities in a contemporary activity recognition dataset. We extensively evaluate our model, both on the justification and pointing tasks, by comparing it to prior models and ablations using both automatic and human evaluations."}
{"_id": "6ccaddf9d418f32f0dc2be6140ee6902c7d7e5ab", "title": "A chaincode based scheme for fingerprint feature extraction", "text": "A feature extraction method using the chaincode representation of fingerprint ridge contours is presented for use by Automatic Fingerprint Identification Systems. The representation allows efficient image quality enhancement and detection of fine minutiae feature points. For image enhancement a given fingerprint image is first binarized after a quick averaging to generate its chaincode representation. The direction field is estimated from a set of selected chaincodes. The original gray scale image is then enhanced using connected component analysis and a dynamic filtering scheme that takes advantage of the knowledge gained from the estimated direction flow of the contours. For feature extraction, the enhanced fingerprint image is carefully binarized using a local-global binarization algorithm for generating the chaincode representation. The minutiae are generated using a sophisticated ridge contour following procedure. Visual inspection of the experiment images shows that the method is very effective."}
{"_id": "a2c145dbb30c942f136b2525a0132bf72d51b0c2", "title": "Model-Based Stabilisation of Deep Reinforcement Learning", "text": "Though successful in high-dimensional domains, deep reinforcement learning exhibits high sample complexity and suffers from stability issues as reported by researchers and practitioners in the field. These problems hinder the application of such algorithms in real-world and safety-critical scenarios. In this paper, we take steps towards stable and efficient reinforcement learning by following a model-based approach that is known to reduce agent-environment interactions. Namely, our method augments deep Q-networks (DQNs) with model predictions for transitions, rewards, and termination flags. Having the model at hand, we then conduct a rigorous theoretical study of our algorithm and show, for the first time, convergence to a stationary point. En route, we provide a counterexample showing that \u2019vanilla\u2019 DQNs can diverge confirming practitioners\u2019 and researchers\u2019 experiences. Our proof is novel in its own right and can be extended to other forms of deep reinforcement learning. In particular, we believe exploiting the relation between reinforcement (with deep function approximators) and online learning can serve as a recipe for future proofs in the domain. Finally, we validate our theoretical results in 20 games from the Atari benchmark. Our results show that following the proposed model-based learning approach not only ensures convergence but leads to a reduction in sample complexity and superior performance. Introduction Model-free deep reinforcement learning methods have recently demonstrated impressive performance on a range of complex learning tasks (Hessel et al. 2018; Lillicrap et al. 2017; Jaderberg et al. 2017). Deep Q-Networks (DQNs) (Mnih et al. 2015), in particular, stand out as a versatile and effective tool for a wide range of applications. DQNs offer an end-to-end solution for many reinforcement learning problems by generalizing tabular Q-learning to highdimensional input spaces. Unfortunately, DQNs still suffer from a number of important drawbacks that hinder wide-spread adoption. There is a general lack of theoretical understanding to guide the use of deep Q-learning algorithms. Combining non-linear function approximation with off-policy reinforcement learning is known to lead to possible divergence issues, even in very simple tasks (Boyan and Preliminary work. Moore 1995). Despite improvements introduced by deep Qnetworks (Mnih et al. 2015), Q-learning approaches based on deep neural networks still exhibit occasional instability during the learning process, see for example (van Hasselt, Guez, and Silver 2016). In this paper, we aim to address the instability of current deep RL approaches by augmenting model-free value learning with a predictive model. We propose a hybrid model and value learning objective, which stabilises the learning process and reduces sample complexity when compared to model-free algorithms. We attribute this reduction to the richer feedback signal provided by our method compared to standard deep Q-networks. In particular, our model provides feedback at every time step based on the current prediction error, which in turn eliminates one source of sample inefficiency caused by sparse rewards. These conclusions are also in accordance with previous research conducted on linear function approximation in reinforcement learning (Parr et al. 2008; Sutton et al. 2012; Song et al. 2016). 
While linear function approximation results provide a motivation for model-based stabilisation, such theories fail to generalise to deep non-linear architectures (Song et al. 2016). To close this gap in the literature and demonstrate stability of our method, we prove convergence in the general deep RL setting with deep value function approximators. Theoretically analysing deep RL algorithms, however, is challenging due to nonlinear functional dependencies and objective non-convexities prohibiting convergence to globally optimal policies. As such, the best one would hope for is a stationary point (e.g., a first-order one) that guarantees vanishing gradients. Even understanding gradient behavior for deep networks in RL can be difficult due to exploration-based policies and replay memory considerations. To alleviate some of these problems, we map deep reinforcement learning to a mathematical framework explicitly designed for understanding the exploration/exploitation trade-off. Namely, we formalise the problem as an online learning game the agent plays against an adversary (i.e., the environment). Such a link allows us to study a more general problem combining notions of regret, reinforcement learning, and optimisation. Given such a link, we prove that for any $\epsilon > 0$, regret vanishes as $\mathcal{O}(T^{1-2\epsilon} \ln T)$, with $T$ being the total number of rounds. This, in turn, guarantees convergence to a stationary point. Guided by the above theoretical results, we validate that our algorithm leads to faster learning by conducting experiments on 20 Atari games. Due to the high computational and budgeting demands, we chose a benchmark that is closest to our setting. Concretely, we picked DQNs as our template algorithm to improve on both theoretically and empirically. It is worth noting that extending our theoretical results to more complex RL algorithms is an orthogonal future direction. Background: Reinforcement Learning We consider a discrete-time, infinite-horizon, discounted Markov Decision Process (MDP) $\langle \mathcal{S}, \mathcal{A}, P^{(R)}, P^{(S)}, \gamma \rangle$. Here $\mathcal{S}$ denotes the state space, $\mathcal{A}$ the set of possible actions, $P^{(R)}: \mathcal{S} \times \mathcal{A} \times \mathcal{R} \to [0, 1]$ is the reward function, $P^{(S)}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0, 1]$ is the state transition function, and $\gamma \in (0, 1)$ is a discount factor. An agent, being in state $S_t \in \mathcal{S}$ at time step $t$, executes an action $a_t \in \mathcal{A}$ sampled from its policy $\pi(a_t|S_t)$, a conditional probability distribution where $\pi: \mathcal{S} \times \mathcal{A} \to [0, 1]$. The agent's action elicits from the environment a reward signal $r_t \in \mathcal{R}$ indicating instantaneous reward, a terminal flag $f_t \in \{0, 1\}$ indicating a terminal event that restarts the environment, and a transition to a successor state $S_{t+1} \in \mathcal{S}$. We assume that the sets $\mathcal{S}$, $\mathcal{A}$, and $\mathcal{R}$ are discrete. The reward $r_t$ is sampled from the conditional probability distribution $P^{(R)}(r_t|S_t, a_t)$. Similarly, $f_t \sim P^{(F)}(f_t|S_t, a_t)$ with $P^{(F)}: \mathcal{S} \times \mathcal{A} \times \{0, 1\} \to [0, 1]$, where a terminal event ($f_t = 1$) restarts the environment according to some initial state distribution $S_0 \sim P_0^{(S)}(S_0)$. The state transition to a successor state is determined by a stochastic state transition function according to $S_{t+1} \sim P^{(S)}(S_{t+1}|S_t, a_t)$. The agent's goal is to maximise the future cumulative reward $\mathbb{E}_{P_0^{(S)}, \pi, P^{(R)}, P^{(F)}, P^{(S)}}\left[\sum_{t=0}^{\infty} \left(\prod_{t'=0}^{t} (1 - f_{t'-1})\right) \gamma^t r_t\right]$ with respect to the policy $\pi$. 
An important quantity in RL are the Q-values $Q(S_t, a_t)$, which are defined as the expected future cumulative reward when executing action $a_t$ in state $S_t$ and subsequently following policy $\pi$. Q-values enable us to conveniently phrase the RL problem as $\max_{\pi} \sum_{S} p(S) \sum_{a} \pi(a|S) Q(S, a)$, where $p(S)$ is the stationary state distribution obtained when executing $\pi$ in the environment (starting from $S_0 \sim P_0^{(S)}(S_0)$). Deep Q-Networks: Value-based reinforcement learning approaches identify optimal Q-values directly using parametric function approximators $Q_{\theta}(S, a)$, where $\theta \in \Theta$ represents the parameters (Watkins 1989; Busoniu et al. 2010). Optimal Q-value estimates then correspond to an optimal policy $\pi(a|S) = \delta_{a, \arg\max_{a'} Q_{\theta}(S, a')}$. Deep Q-networks (Mnih et al. 2015) learn a deep neural network based Q-value approximation by performing stochastic gradient descent on the following training objective: $L(\theta) = \mathbb{E}_{S,a,r,f,S'}\big[\big(r + (1 - f)\gamma \max_{a'} Q_{\theta^-}(S', a') - Q_{\theta}(S, a)\big)^2\big]$ (1). The expectation ranges over transitions $S, a, r, f, S'$ sampled from an experience replay memory ($S'$ denotes the state at the next time step). Use of this replay memory, together with the use of a separate target network $Q_{\theta^-}$ (with different parameters $\theta^- \in \Theta$) for calculating the bootstrap values $\max_{a'} Q_{\theta^-}(S', a')$, helps stabilise the learning process. Background: Online Learning In this paper, we employ a special form of regret minimisation games that we briefly review here. A regret minimisation game is a triple $\langle \Theta, \mathcal{F}, T \rangle$, where $\Theta$ is a non-empty decision set, $\mathcal{F}$ is the set of moves of the adversary which contains bounded convex functions from $\mathbb{R}$ to $\mathbb{R}$, and $T$ is the total number of rounds. The game commences in rounds, where at round $j = 1, \ldots, T$, the agent chooses a prediction $\theta_j \in \Theta$ and the adversary a loss function $L_j \in \mathcal{F}$. At the end of the round, the adversary reveals its choice and the agent suffers a loss $L_j(\theta_j)$. In this paper, we are concerned with the full-information case where the agent can observe the complete loss function $L_j$ at the end of each round. The goal of the game is for the agent to make successive predictions to minimise the cumulative regret defined as:"}
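A minimal PyTorch sketch of the objective family this record describes: the standard DQN temporal-difference loss of Eq. (1), augmented with auxiliary heads that predict the reward and termination flag for the taken action. The network layout, head structure, and aux_weight are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModelAugmentedDQN(nn.Module):
    """Sketch: a Q-network with auxiliary heads predicting reward and
    termination for the taken action, giving a dense per-step signal."""
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.q_head = nn.Linear(hidden, n_actions)
        self.r_head = nn.Linear(hidden, n_actions)  # predicted reward per action
        self.f_head = nn.Linear(hidden, n_actions)  # predicted termination logit

    def forward(self, s):
        h = self.body(s)
        return self.q_head(h), self.r_head(h), self.f_head(h)

def loss(net, target_net, s, a, r, f, s2, gamma=0.99, aux_weight=0.5):
    # a: long tensor of action indices; r, f: float tensors.
    q, r_hat, f_hat = net(s)
    q_sa = q.gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next, _, _ = target_net(s2)
        target = r + (1 - f) * gamma * q_next.max(dim=1).values  # Eq. (1) target
    td = F.mse_loss(q_sa, target)
    # Auxiliary model losses: predict the observed reward and terminal flag.
    aux = F.mse_loss(r_hat.gather(1, a.unsqueeze(1)).squeeze(1), r) \
        + F.binary_cross_entropy_with_logits(
              f_hat.gather(1, a.unsqueeze(1)).squeeze(1), f)
    return td + aux_weight * aux
```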
{"_id": "dad06d4dba532b59bd5d56b6be27c7ee4f0b6f1c", "title": "Membrane Bioreactor (MBR) Technology for Wastewater Treatment and Reclamation: Membrane Fouling", "text": "The membrane bioreactor (MBR) has emerged as an efficient compact technology for municipal and industrial wastewater treatment. The major drawback impeding wider application of MBRs is membrane fouling, which significantly reduces membrane performance and lifespan, resulting in a significant increase in maintenance and operating costs. Finding sustainable membrane fouling mitigation strategies in MBRs has been one of the main concerns over the last two decades. This paper provides an overview of membrane fouling and studies conducted to identify mitigating strategies for fouling in MBRs. Classes of foulants, including biofoulants, organic foulants and inorganic foulants, as well as factors influencing membrane fouling are outlined. Recent research attempts on fouling control, including addition of coagulants and adsorbents, combination of aerobic granulation with MBRs, introduction of granular materials with air scouring in the MBR tank, and quorum quenching are presented. The addition of coagulants and adsorbents shows a significant membrane fouling reduction, but further research is needed to establish optimum dosages of the various coagulants/adsorbents. Similarly, the integration of aerobic granulation with MBRs, which targets biofoulants and organic foulants, shows outstanding filtration performance and a significant reduction in fouling rate, as well as excellent nutrients removal. However, further research is needed on the enhancement of long-term granule integrity. Quorum quenching also offers a strong potential for fouling control, but pilot-scale testing is required to explore the feasibility of full-scale application."}
{"_id": "123ae35aa7d6838c817072032ce5615bb891652d", "title": "BinaryNet: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1", "text": "We introduce BinaryNet, a method which trains DNNs with binary weights and activations when computing parameters\u2019 gradient. We show that it is possible to train a Multi Layer Perceptron (MLP) on MNIST and ConvNets on CIFAR-10 and SVHN with BinaryNet and achieve nearly state-of-the-art results. At run-time, BinaryNet drastically reduces memory usage and replaces most multiplications by 1-bit exclusive-not-or (XNOR) operations, which might have a big impact on both general-purpose and dedicated Deep Learning hardware. We wrote a binary matrix multiplication GPU kernel with which it is possible to run our MNIST MLP 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The code for BinaryNet is available."}
{"_id": "83f651d94997b3a2327fe52bc4fd9436a71957d0", "title": "Lagrangian Texture Advection: Preserving both Spectrum and Velocity Field", "text": "Texturing an animated fluid is a useful way to augment the visual complexity of pictures without increasing the simulation time. But texturing flowing fluids is a complex issue, as it creates conflicting requirements: we want to keep the key texture properties (features, spectrum) while advecting the texture with the underlying flow-which distorts it. In this paper, we present a new, Lagrangian, method for advecting textures: the advected texture is computed only locally and follows the velocity field at each pixel. The texture retains its local properties, including its Fourier spectrum, even though it is accurately advected. Due to its Lagrangian nature, our algorithm can perform on very large, potentially infinite scenes in real time. Our experiments show that it is well suited for a wide range of input textures, including, but not limited to, noise textures."}
{"_id": "7cf6f7da2e932da9a73b218c65f0b0264dd25479", "title": "Producing radiologist-quality reports for interpretable artificial intelligence", "text": "Current approaches to explaining the decisions of deep learning systems for medical tasks have focused on visualising the elements that have contributed to each decision. We argue that such approaches are not enough to \u201copen the black box\u201d of medical decision making systems because they are missing a key component that has been used as a standard communication tool between doctors for centuries: language. We propose a model-agnostic interpretability method that involves training a simple recurrent neural network model to produce descriptive sentences to clarify the decision of deep learning classifiers. We test our method on the task of detecting hip fractures from frontal pelvic x-rays. This process requires minimal additional labelling despite producing text containing elements that the original deep learning classification model was not specifically trained to detect. The experimental results show that: 1) the sentences produced by our method consistently contain the desired information, 2) the generated sentences are preferred by doctors compared to current tools that create saliency maps, and 3) the combination of visualisations and generated text is better than either alone."}
{"_id": "91ab3ca9fe3add7ce48dc2f97bb9435db1a991b9", "title": "On Divergences and Informations in Statistics and Information Theory", "text": "The paper deals with the f-divergences of Csiszar generalizing the discrimination information of Kullback, the total variation distance, the Hellinger divergence, and the Pearson divergence. All basic properties of f-divergences including relations to the decision errors are proved in a new manner replacing the classical Jensen inequality by a new generalized Taylor expansion of convex functions. Some new properties are proved too, e.g., relations to the statistical sufficiency and deficiency. The generalized Taylor expansion also shows very easily that all f-divergences are average statistical informations (differences between prior and posterior Bayes errors) mutually differing only in the weights imposed on various prior distributions. The statistical information introduced by De Groot and the classical information of Shannon are shown to be extremal cases corresponding to alpha=0 and alpha=1 in the class of the so-called Arimoto alpha-informations introduced in this paper for 0n, 1/\u03b5, 1/\u03b4, and 1/&sgr;, where n is the number of Boolean attributes, \u03b5 and \u03b4 are the usual accuracy and confidence parameters, and &sgr; indicates the minimum distance of any example from the target hyperplane, which is assumed to be lower than the average distance of the examples from any hyperplane. This result is achieved by modifying the Perceptron algorithm\u2014for each update, a weighted average of a sample of misclassified examples and a correction vector is added to the current weight vector. Similar modifications are shown for the Weighted Majority algorithm. The correction vector is simply the mean of the normalized examples. In the special case of Boolean threshold functions, the modified Perceptron algorithm performs O (n2\u03b5\u22122 ) iterations over O(n4\u03b5 \u22122ln(n/(\u03b4\u03b5))) examples. This improves on the previous classification-noise result of Angluin and Laird to a much larger concept class with a similar number of examples, but with multiple iterations over the examples."}
{"_id": "09ef59bb37b619ef5850060948674cb495ddebb6", "title": "Usability Evaluation of Google Classroom : Basis for the Adaptation of GSuite E-Learning Platform", "text": "Electronic learning is a technology learning that plays an important role in modern education and training. Its great contribution lies in the fact that content is available at any place and device from a fixed device to mobile device. Nowadays, education is accessible everywhere with the use of technology. There are several LMS (Learning Management Systems) available. One of the new tool available was released by Google under GSuite. Pangasinan State University is currently subscribed to GSuite for Education, and recently Google introduces Classroom as an eLearning platform for an educational institution. This research aims to evaluate the new product, its functionalities for the purpose of adapting and deployment. The main objective of this paper is to identify the usability and evaluation of the Learning Management System (LMS) Google Classroom, its functionalities, features, and satisfaction level of the students. Based on the result, the respondents agreed that GSuite classroom is recommended. The result of this study will be the proposed e-learning platform for Pangasinan State University, Lingayen Campus initially need in the College of Hospitality Management, Business and Public Administration."}
{"_id": "e1010530b9cfbaba451f37987a5feb3c659cda1f", "title": "Learning to Read Chest X-Ray Images from 16000+ Examples Using CNN", "text": "Chest radiography (chest X-ray) is a low-cost yet effective and widely used medical imaging procedures. The lacking of qualified radiologist seriously limits the applicability of the technique. We explore the possibility of designing a computer-aided diagnosis for chest X-rays using deep convolutional neural networks. Using a real-world dataset of 16,000 chest X-rays with natural language diagnosis reports, we can train a multi-class classification model from images and preform accurate diagnosis, without any prior domain knowledge."}
{"_id": "b89c61aeaa22c42e35f2e12daa32a04a9c276b07", "title": "Automatic detection of discontinuities from 3D point clouds for the stability analysis of jointed rock masses", "text": "We describe an strategy for automatic detection of discontinuities in jointed rock masses from a 3D point cloud. The method consists on a sequence of processes for feature extraction and characterization of the discontinuities that use voxelization, robust detection of planes by sequential RANSAC and clustering. The results obtained by applying this methodology to the 3D point cloud are a set of discontinuity families, characterized by their dips and dip directions, and surface wedges. This geometric information is essential for a later stability analysis of the jointed rock mass. The ultimate aim of our investigation is the automatic extraction of discontinuity families for stability analysis of tunnels excavated in jointed rock masses."}
{"_id": "bec6af81b99e3254d49bed9d5c9ebe941cf13c4d", "title": "Efficient calibration for rssi-based indoor localization by bayesian experimental design on multi-task classification", "text": "RSSI-based indoor localization is getting much attention. Thanks to a number of researchers, the localization accuracy has already reached a sufficient level. However, it is still not easy-to-use technology because of its heavy installation cost. When an indoor localization system is installed, it needs to collect RSSI data for training classifiers. Existing techniques need to collect enough data at each location. This is why the installation cost is very heavy. We propose a technique to gather data efficiently by using machine learning techniques. Our proposed algorithm is based on multi-task learning and Bayesian optimization. This algorithm can remove the need to collect data of all location labels and select location labels to acquire new data efficiently. We verify this algorithm by using a Wi-Fi RSSI dataset collected in a building. The empirical results suggest that the algorithm is superior to an existing algorithm applying single-task learning and Active Class Selection."}
{"_id": "6b24792298d47409cdf23012f593d49e2be0d4f3", "title": "Deep Neural Architecture for Multi-Modal Retrieval based on Joint Embedding Space for Text and Images", "text": "Recent advances in deep learning and distributed representations of images and text have resulted in the emergence of several neural architectures for cross-modal retrieval tasks, such as searching collections of images in response to textual queries and assigning textual descriptions to images. However, the multi-modal retrieval scenario, when a query can be either a text or an image and the goal is to retrieve both a textual fragment and an image, which should be considered as an atomic unit, has been significantly less studied. In this paper, we propose a gated neural architecture to project image and keyword queries as well as multi-modal retrieval units into the same low-dimensional embedding space and perform semantic matching in this space. The proposed architecture is trained to minimize structured hinge loss and can be applied to both cross- and multi-modal retrieval. Experimental results for six different cross- and multi-modal retrieval tasks obtained on publicly available datasets indicate superior retrieval accuracy of the proposed architecture in comparison to the state-of-art baselines."}
{"_id": "8fbd9a178014f9a9b0d8834315f03aa725d8b1a5", "title": "From \u201cclassic\u201d child abuse and neglect to the new era of maltreatment", "text": "The evolution of the concept of child abuse leads to consider new types of maltreatment that in the future will certainly be taken into account with a new era of social pediatrics.Pediatric care has been based on the increased awareness of the importance of meeting the psychosocial and developmental needs of children and of the role of families in promoting the health."}
{"_id": "f5eeacec4b4a19e60b6fc187dc08aca12dd13625", "title": "Overcoming Security Challenges in Microservice Architectures", "text": "The microservice architectural style is an emerging trend in software engineering that allows building highly scalable and flexible systems. However, current state of the art provides only limited insight into the particular security concerns of microservice system. With this paper, we seek to unravel some of the mysteries surrounding microservice security by: providing a taxonomy of microservices security; assessing the security implications of the microservice architecture; and surveying related contemporary solutions, among others Docker Swarm and Netflix security decisions. We offer two important insights. On one hand, microservice security is a multi-faceted problem that requires a layered security solution that is not available out of the box at the moment. On the other hand, if these security challenges are solved, microservice architectures can improve security; their inherent properties of loose coupling, isolation, diversity, and fail fast all contribute to the increased robustness of a system. To address the lack of security guidelines this paper describes the design and implementation of a simple security framework for microservices that can be leveraged by practitioners. Proof-of-concept evaluation results show that the performance overhead of the security mechanisms is around 11%."}
{"_id": "48c010d7ba115ae9bd5535d6932c77238c3f9926", "title": "vDesign: a CAVE-based virtual design environment using hand interactions", "text": "The cave automatic virtual environment (CAVE) system is one of the most fully immersive systems for virtual reality environments. By providing users with realistic perception and immersive experience, CAVE systems have been widely used in many fields, including military, education, health care, entertainment, design, and others. In this paper, we focus on the design applications in the CAVE. The design applications involve many interactions between the user and the CAVE. However, the conventional interaction tool, the wand, cannot provide fast and convenient interactions. In this paper, we propose vDesign, a CAVE-based virtual design environment using hand interactions. The hand interactions in vDesign are classified into menu navigation and object manipulations. For menu navigation, we define two interactions: activating the main menu and selecting a menu item. For object manipulations, we define three interactions: moving, rotating, and scaling an object. By using the proposed hand interactions, we develop the functions of image segmentation and image composition in vDesign. With the image segmentation function, the designer can select and cut the interested objects from different images. With the image composition function, the designer can manipulate the segmented objects and combine them as a composite image. We implemented the vDesign prototype in CAVE and conducted experiments to evaluate the interaction performance in terms of manipulation time and distortion. The experimental results demonstrated that the proposed hand interactions can provide faster and more accurate interactions compared to the traditional wand interactions."}
{"_id": "3805cd9f0db2a71bd33cb72ad6ca7bd23fe95e35", "title": "A support vector approach for cross-modal search of images and texts", "text": "Building bilateral semantic associations between images and texts is among the fundamental problems in computer vision. In this paper, we study two complementary cross-modal prediction tasks: (i) predicting text(s) given a query image (\u201cIm2Text\u201d), and (ii) predicting image(s) given a piece of text (\u201cText2Im\u201d). We make no assumption on the specific form of text; i.e., it could be either a set of labels, phrases, or even captions. We pose both these tasks in a retrieval framework. For Im2Text, given a query image, our goal is to retrieve a ranked list of semantically relevant texts from an independent text-corpus (i.e., texts with no corresponding images). Similarly, for Text2Im, given a query text, we aim to retrieve a ranked list of semantically relevant images from a collection of unannotated images (i.e., images without any associated textual meta-data). We propose a novel Structural SVM based unified framework for these two tasks, and show how it can be efficiently trained and tested. Using a variety of loss functions, extensive experiments are conducted on three popular datasets (two medium-scale datasets containing few thousands of samples, and one web-scale dataset containing one million samples). Experiments demonstrate that our framework gives promising results compared to competing baseline cross-modal search techniques, thus confirming its efficacy."}
{"_id": "9f6f09db2e6c1db8fba09844fbb02ac0260635a7", "title": "M-Learning: A New Paradigm of Learning Mathematics in Malaysia", "text": "M-Learning is a new learning paradigm of the new social structure with mobile and wireless technologies. Smart school is one of the four flagship applications for Multimedia Super Corridor (MSC) under Malaysian government initiative to improve education standard in the country. With the advances of mobile devices technologies, mobile learning could help the government in realizing the initiative. This paper discusses the prospect of implementing mobile learning for primary school students. It indicates significant and challenges and analysis of user perceptions on potential mobile applications through a survey done in primary school context. The authors propose the m-Learning for mathematics by allowing the extension of technology in the traditional classroom in term of learning and teaching."}
{"_id": "b35a21517d0f1e134172581c698b3654b9bde95b", "title": "Brand Name Logo Recognition of Fast Food and Healthy Food among Children", "text": "The fast food industry has been increasingly criticized for creating brand loyalty in young consumers. Food marketers are well versed in reaching children and youth given the importance of brand loyalty on future food purchasing behavior. In addition, food marketers are increasingly targeting the Hispanic population given their growing spending power. The fast food industry is among the leaders in reaching youth and ethnic minorities through their marketing efforts. The primary objective of this study was to determine if young children recognized fast food restaurant logos at a higher rate than other food brands. Methods Children (n\u00a0=\u00a0155; 53% male; 87% Hispanic) ages 4\u20138\u00a0years were recruited from elementary schools and asked to match 10 logo cards to products depicted on a game board. Parents completed a survey assessing demographic and psychosocial characteristics associated with a healthy lifestyle in the home. Results Older children and children who were overweight were significantly more likely to recognize fast food restaurant logos than other food logos. Moreover, parents\u2019 psychosocial and socio-demographic characteristics were associated with the type of food logo recognized by the children. Conclusions Children\u2019s high recognition of fast food restaurant logos may reflect greater exposure to fast food advertisements. Families\u2019 socio-demographic characteristics play a role in children\u2019s recognition of food logos."}
{"_id": "efa13067ad6afb062db44e20116abf2c89b7e9a9", "title": "LSTM Model to Forecast Time Series for EC2 Cloud Price", "text": "With the widespread use of spot instances in Amazon EC2, which utilize unused capacity with unfixed price, predicting price is important for users. In this paper, we try to forecast spot instance price by using long short-term memory (LSTM) algorithm to predict time series of the prices. We apply cross validation technique to our data set, and extract some features; this help our model to learn deeply. We make the prediction for 10 regions, and measure the performance using root-mean-square error (RMSE). We apply our data to other statistical models to compare their performance; we found that our model works more efficiently, also the error is decreasing more with cross validation and the result is much faster. Based on our result we try to find the availability zone that less become over on-demand price and less changing price over time, which help users to choose the most stable availability zone."}
{"_id": "789506ae5f631d88298c870607630344c26b1b59", "title": "Automatic scraper of celebrity images from heterogeneous websites based on face recognition and sorting for profiling", "text": "Now days it has become trend to follow all the celebrities as we consider them as our role models. So instead of searching the images of various celebrities in different websites we can find them in a single website by sorting all the images. Reliable database of images is essential for any image recognition system. Through Internet we find all the required images. These images will serve as samples for automatic recognition system. With these images we do face detection, face recognition, face sorting using various techniques like local binary patterns, haar cascades. We make an overall analysis of the detector. Using opencv we detect and recognize images. Similarity matching is done to check how the images are related to each other. Collection of images is based on user defined templates, which are in web browser environment. With the help of this system one can give their requirement and the image of celebrity is displayed based on that."}
{"_id": "5ee0d8aeb2cb01ef4d8a858d234e72a7400c03ac", "title": "Convolution Kernels on Discrete Structures", "text": "We introduce a new method of constructing kernels on sets whose elements are discrete structures like strings, trees and graphs. The method can be applied iteratively to build a kernel on a innnite set from kernels involving generators of the set. The family of kernels generated generalizes the family of radial basis kernels. It can also be used to deene kernels in the form of joint Gibbs probability distributions. Kernels can be built from hidden Markov random elds, generalized regular expressions, pair-HMMs, or ANOVA de-compositions. Uses of the method lead to open problems involving the theory of innnitely divisible positive deenite functions. Fundamentals of this theory and the theory of reproducing kernel Hilbert spaces are reviewed and applied in establishing the validity of the method."}
{"_id": "41ab8a3c6088eb0576ba65e114ebd37340c2bae1", "title": "What Does This Imply? Examining the Impact of Implicitness on the Perception of Hate Speech", "text": ""}
{"_id": "07233b4fb2231de9dc902fd694a6c205fa42d66a", "title": "An autonomous polymerization motor powered by DNA hybridization.", "text": "We present a synthetic molecular motor capable of autonomous nanoscale transport in solution. Inspired by bacterial pathogens such as Rickettsia rickettsii, which locomote by inducing the polymerization of the protein actin at their surfaces to form 'comet tails', the motor operates by polymerizing a double-helical DNA tail2. DNA strands are propelled processively at the living end of the growing polymers, demonstrating autonomous locomotion powered by the free energy of DNA hybridization."}
{"_id": "18914b4a81b917ee3b7e83adf51c68071dc26124", "title": "0.18-V input charge pump with forward body biasing in startup circuit using 65nm CMOS", "text": "In this paper, a 0.18-V input three-stage charge pump circuit applying forward body bias is proposed. In the developed charge pump, all the MOSFETs are forward body biased by using the inter-stage/output voltages. By applying the proposed charge pump as the startup in the boost converter, the lower kick-up input voltage of the boost converter can be achieved. To verify the circuit characteristics, four test circuits have been implemented by using 65nm CMOS process. The measured available output current of the proposed charge pump under 0.18-V input voltage can be improved more than 150%. In addition, the boost converter can successfully been boosted from 0.18-V input to the 0.74-V output under 6mA output current. The proposed circuit is suitable for extremely low voltage applications such as harvesting energy sources."}
{"_id": "a407bc2cc5a5d8c25e1cb4c536ab6d92597134a3", "title": "A Blockchain-Based Micro Economy Platform for Distributed Infrastructure Initiatives", "text": "Distributed Infrastructure Initiatives (DIIs) are communities that collaboratively produce and consume infrastructure. To develop a healthy ecosystem, DIIs require an economic model that balances supply and demand, but there is currently a lack of tooling to support implementing these. In this research, we propose an architecture for a platform that enables DIIs to implement such models, focused around a digital currency based on blockchain technology. The currency is issued according to the amount participants contribute to the initiative, which is quantified based on operational metrics gathered from the infrastructure. Furthermore, the platform enables participants to deploy smart contracts which encode self-enforcing agreements about the infrastructure services they exchange. The architecture has been evaluated through a case study at The Things Network (TTN) a global distributed crowdsourced Internet of Things initiative. The case study revealed that the architecture is effective for the selected case at TTN. In addition, the results motivate future research lines to support scalability (i.e., to deploy the architecture on a larger scale) and security."}
{"_id": "0a82c11218726076711c225666893bc55f09f2d7", "title": "Safe robotic grasping: Minimum impact-force grasp selection", "text": "This paper addresses the problem of selecting from a choice of possible grasps, so that impact forces will be minimised if a collision occurs while the robot is moving the grasped object along a post-grasp trajectory. Such considerations are important for safety in human-robot interaction, where even a certified \u201chuman-safe\u201d (e.g. compliant) arm may become hazardous once it grasps and begins moving an object, which may have significant mass, sharp edges or other dangers. Additionally, minimising collision forces is critical to preserving longevity of robots which operate in uncertain and hazardous environments, e.g. robots deployed for nuclear decommissioning, where removing a damaged robot from a contaminated zone for repairs may be extremely difficult and costly. Also, unwanted collisions between a robot and critical infrastructure (e.g. pipework) in such high-consequence environments can be disastrous. In this paper we investigate how the safety of the post-grasp motion can be considered during the pre-grasp approach phase, so that the selected grasp is optimal in terms of applying minimum impact forces if a collision occurs during the desired post-grasp manipulation. We build on the methods of augmented robot-object dynamic model and \u201ceffective mass\u201d and propose a method for combining these concepts with modern grasp and trajectory planners, to enable the robot to achieve a grasp which maximises the safety of the post-grasp trajectory, by minimising potential collision forces. We demonstrate the effectiveness of our approach through several experiments with both simulated and real robots."}
{"_id": "551558995669a43113c85b638a89adb5775cc2ce", "title": "Quantifying Political Polarity Based on Bipartite Opinion Networks", "text": "Political inclinations of individuals (liberal vs. conservative) largely shape their opinions on several issues such as abortion, gun control, nuclear power, etc. These opinions are openly exerted in online forums, news sites, the parliament, and so on. In this paper, we address the problem of quantifying political polarity of individuals and of political issues for classification and ranking. We use signed bipartite networks to represent the opinions of individuals on issues, and formulate the problem as a node classification task. We propose a linear algorithm that exploits network effects to learn both the polarity labels as well as the rankings of people and issues in a completely unsupervised manner. Through extensive experiments we demonstrate that our proposed method provides an effective, fast, and easy-to-implement solution, while outperforming three existing baseline algorithms adapted to signed networks, on real political forum and US Congress datasets. Experiments on a wide variety of synthetic graphs with varying polarity and degree distributions of the nodes further demonstrate the robustness of our approach. Introduction Many individuals use online media to exert their opinions on a variety of topics. Hotly debated topics include liberal vs. conservative policies such as tax cuts and gun control, social issues such as abortion and same-sex marriage, environmental issues such as climate change and nuclear power plants, etc. These openly debated issues in blogs, forums, and news websites shape the nature of public opinion and affect the direction of politics, media, and public policy. In this paper, we address the problem of quantifying political polarity in opinion datasets. Given a set of individuals from two opposing camps (liberal vs. conservative) debating a set of issues or exerting opinions on a set of subjects (e.g. human subjects, political issues, congressional bills), we aim to address two problems: (1) classify which person lies in which camp, and which subjects are favored by each camp; and (2) rank the people and the subjects by the magnitude or extent of their polarity. Here while the classification enables us to determine the two camps, ranking helps us understand the extremity to which a person/subject is polarized; e.g. same-sex marriage may be highly polarized among the Copyright c \u00a9 2014, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. two camps (liberals being strictly in favor, and conservatives strictly being against), while gun control may not belong to a camp fully favoring or opposing it (i.e., is less polarized). Ranking also helps differentiate moderate vs. extreme individuals, as well as unifying vs. polarizing subjects; such as (unifying) bills voted in the same way, e.g. all \u2018yea\u2019 by the majority of congressmen vs. (polarizing) bills that are voted quite oppositely by the two camps. A large body of prior research focuses on sentiment analysis on politically oriented text (Cohen and Ruths 2013; Conover et al. 2011b; 2011a; Pak and Paroubek 2010; Tumasjan et al. 2010). The main goal of these works is to classify political text. In this work, on the other hand, we deal with network data to classify its nodes. Moreover, these methods mostly rely on supervised techniques whereas we focus on un/semi-supervised classification. 
Other prior research on polarization have exploited link mining and graph clustering to study the social structure on social media networks (Adamic and Glance 2005; Livne et al. 2011; Conover et al. 2011b; Guerra et al. 2013) where the edges depict the \u2018mention\u2019 or \u2018hyperlink\u2019 relations and not opinions. Moreover, these works do not perform ranking. Different from previous works, our key approach is to exploit network effects to both classify and rank individuals and political subjects by their polarity. The opinion datasets can be effectively represented as signed bipartite networks, where one set of nodes represent individuals, the other set represent subjects, and signed edges between the individuals and subjects depict the +/\u2212 opinions. As such, we cast the problem as a node classification task on such networks. Our main contributions can be summarized as follows: \u2022 We cast the political polarity classification and ranking problem as a node classification task on edge-signed bipartite opinion networks. \u2022 We propose an algorithm, called signed polarity propagation (SPP), that computes probabilities (i.e., polarity scores) of nodes of belonging to one of two classes (e.g., liberal vs. conservative), and use these scores for classification and ranking. Our method is easy-to-implement and fast\u2014running time grows linearly with network size. \u2022 We show the effectiveness of our algorithm, in terms of both prediction and ranking, on synthetic and real datasets with ground truth from the US Congress and Political Forum. Further, we modify three existing algorithms to handle signed networks, and compare them to SPP. Our experiments reveal the advantages and robustness of our method on diverse settings with various polarity and degree distributions. Related Work Scoring and ranking the nodes of a graph based on the network structure has been studied extensively, with wellknown algorithms like PageRank (Brin and Page 1998), and HITS (Kleinberg 1998). These, however, are applied on graphs where the edges are unsigned and therefore cannot be directly used to compute polarity scores. Computing polarity scores can be cast as a network classification problem, where the task is to assign probabilities (i.e., scores) to nodes of belonging to one of two classes, which is the main approach we take in our work. There exist a large body of work on network-based classification (Getoor et al. 2007; Neville and Jensen 2003; Macskassy and Provost 2003). Semi-supervised algorithms based on network propagation have also been used in classifying political orientation (Lin and Cohen 2008; Zhou, Resnick, and Mei 2011). However, all the existing methods work with unsigned graphs in which the edges do not represent opinions but simply relational connections such as HTML hyperlinks between blog articles or \u2018mention\u2019 relations in Twitter. Signed networks have only recently attracted attention (Leskovec, Huttenlocher, and Kleinberg 2010b). Most existing studies focused on tackling the edge sign prediction problem (Yang et al. 2012; Chiang et al. 2011; Leskovec, Huttenlocher, and Kleinberg 2010a; Symeonidis, Tiakas, and Manolopoulos 2010). Other works include the study of trust/distrust propagation (Guha et al. 2004; DuBois, Golbeck, and Srinivasan 2011; Huang et al. 2012), and product/merchant quality estimation from reviews (McGlohon, Glance, and Reiter 2010). These works do not address the problem of classifying and ranking nodes in signed graphs. 
With respect to studies on political orientation through social media, (Adamic and Glance 2005; Adar et al. 2004) use link mining and graph clustering to analyze political communities in the blogosphere. While these and most clustering algorithms are designed to work with unsigned graphs, there also exist approaches for clustering signed graphs (Traag and Bruggeman 2009; Lo et al. 2013; Zhang et al. 2013). Clustering, however, falls short in scoring the nodes and hence quantifying polarity for ranking. Related, (Livne et al. 2011) utilize graph and text mining techniques to analyze differences between political parties and their online media usage in conveying a cohesive message. Most recently, (Cohen and Ruths 2013) use supervised classification techniques to classify three groups of Twitter users with varying political activity (figures, active, and modest) by their political orientation. Other works that exploit supervised classification using text features include (Conover et al. 2011a; Golbeck and Hansen 2011; Pennacchiotti and Popescu 2011). Similar to (Adamic and Glance 2005), there exist related works that study the social structure for measuring polarity. These works rely on the existence of (unsigned) social links between the users of social media and study the communities induced by polarized debate; an immediate consequence of the homophily principle, which states that people with similar beliefs and opinions tend to establish social ties. (Livne et al. 2011) and (Conover et al. 2011b) both use modularity (Newman 2006) as a measure of segregation between political groups in Twitter. (Guerra et al. 2013) compare modularity of polarized and non-polarized networks and propose two new measures of polarity based on community boundaries. Again, these works do not study the opinion networks with signed edges. As our main task is quantifying polarity, work on sentiment analysis is also related. There exist a long list of works on sentiment and polarity prediction in political text (Tumasjan et al. 2010; Awadallah, Ramanath, and Weikum 2010; Conover et al. 2011b; He et al. 2012), as well as in tweets, blogs, and news articles (Pak and Paroubek 2010; Godbole, Srinivasaiah, and Skiena 2007; Thomas, Pang, and Lee 2006; Balasubramanyan et al. 2012). These differ from our work as they use text-based sentiment analysis, while we focus on the network effects in signed graphs. Proposed Method Problem Overview We consider the problem of polarity prediction and ranking in opinion datasets (e.g. forums, blogs, the congress). Opinion datasets mainly consist of a set of people (e.g. users in a forum, representatives in The House) and a set of subjects (e.g. political issues, political people, congressional bills). Each person often maintains a positive or negative opinion toward a particular subject (e.g. a representative votes \u2018yes\u2019 for a bill, a person \u2018likes\u2019 a political individual). This opinion is often an exposition of the person\u2019s latent political leaning\u2014liberal or conservative. For example, we could think of a person with strong negative opinion toward gay&lesbian rights to be more conservati"}
{"_id": "53176495ce9ea8e7b23278d6f90c13d37eff9312", "title": "miRDeep2 accurately identifies known and hundreds of novel microRNA genes in seven animal clades", "text": "microRNAs (miRNAs) are a large class of small non-coding RNAs which post-transcriptionally regulate the expression of a large fraction of all animal genes and are important in a wide range of biological processes. Recent advances in high-throughput sequencing allow miRNA detection at unprecedented sensitivity, but the computational task of accurately identifying the miRNAs in the background of sequenced RNAs remains challenging. For this purpose, we have designed miRDeep2, a substantially improved algorithm which identifies canonical and non-canonical miRNAs such as those derived from transposable elements and informs on high-confidence candidates that are detected in multiple independent samples. Analyzing data from seven animal species representing the major animal clades, miRDeep2 identified miRNAs with an accuracy of 98.6-99.9% and reported hundreds of novel miRNAs. To test the accuracy of miRDeep2, we knocked down the miRNA biogenesis pathway in a human cell line and sequenced small RNAs before and after. The vast majority of the >100 novel miRNAs expressed in this cell line were indeed specifically downregulated, validating most miRDeep2 predictions. Last, a new miRNA expression profiling routine, low time and memory usage and user-friendly interactive graphic output can make miRDeep2 useful to a wide range of researchers."}
{"_id": "ba86c86b4b8b7989698babf0280980235d598548", "title": "GA3C: GPU-based A3C for Deep Reinforcement Learning", "text": "We introduce and analyze the computational aspects of a hybrid CPU/GPU implementation of the Asynchronous Advantage Actor-Critic (A3C) algorithm, currently the state-of-the-art method in reinforcement learning for various gaming tasks. Our analysis concentrates on the critical aspects to leverage the GPU\u2019s computational power, including the introduction of a system of queues and a dynamic scheduling strategy, potentially helpful for other asynchronous algorithms as well. We also show the potential for the use of larger DNN models on a GPU. Our TensorFlow implementation achieves a significant speed up compared to our CPU-only implementation, and it will be made publicly available to other researchers."}
{"_id": "a858bf95d8cdae54f21967f555b8831db6a149e3", "title": "1 Augmented Reality", "text": "This paper is an overview of technologies that fall under the term augmented reality. Augmented reality refers to a system in which the physical surroundings of a person are mixed with real-time computer generated information creating an enhanced perception of surrounding environment. Being partly virtual and real, augmented reality applications have quite extreme requirements to be practical to use. It also has very much potential in numerous different application areas. These issues make augmented reality both an interesting and challenging subject from scientific and business perspectives."}
{"_id": "ca9f8fadbd14253e02b07ae48fc7a5bac44f6a94", "title": "Arrested development? Reconsidering dual-systems models of brain function in adolescence and disorders", "text": "The dual-systems model of a ventral affective system, whose reactivity confers risks and liabilities, and a prefrontal control system, whose regulatory capacities buffer against these vulnerabilities, is an intuitive account that pervades many fields in the cognitive neurosciences--especially in the study of populations that differ from neurotypical adults, such as adolescents or individuals with affective or impulse regulation disorders. However, recent evidence that is inconsistent with dual-systems models illustrates the complexity of developmental and clinical variations in brain function. Building new models to account for this complexity is critical to progress in these fields, and will be facilitated by research that emphasizes network-based approaches and maps relationships between structure and function, as well as brain and behavior, over time."}
{"_id": "37300dd4c73fc7e6affc357dbc266b2b4e5c7bea", "title": "1/f noise of NMOS and PMOS transistors and their implications to design of voltage controlled oscillators", "text": "Low frequency noise of NMOS and PMOS transistors in a 0.25 /spl mu/m foundry CMOS process with a pure SiO/sub 2/ gate oxide layer is characterized for the entire range of MOSFET operation. Surprisingly, the measurement results showed that surface channel PMOS transistors have about an order of magnitude lower 1/f noise than NMOS transistors especially at V/sub GS/-V/sub TH/ less than /spl sim/0.4 V The data were used to show that a VCO using all surface channel PMOS transistors can have /spl sim/14 dB lower close-in phase noise compared to that for a VCO using all surface channel NMOS transistors."}
{"_id": "86d37dcb04055739969c297b45b9e28e4db01b86", "title": "Design issues in CMOS differential LC oscillators", "text": "An analysis of phase noise in differential crosscoupled inductance\u2013capacitance(LC) oscillators is presented. The effect of tail current and tank power dissipation on the voltage amplitude is shown. Various noise sources in the complementary cross-coupled pair are identified, and their effect on phase noise is analyzed. The predictions are in good agreement with measurements over a large range of tail currents and supply voltages. A 1.8-GHz LC oscillator with a phase noise of 121 dBc/Hz at 600 kHz is demonstrated, dissipating 6 mW of power using on-chip spiral inductors."}
{"_id": "f2842d5350b5f0e621fd4cfa176f01da986c962c", "title": "Oscillator phase noise: a tutorial", "text": "Linear time-invariant (LTI) phase noise theories provide important qualitative design insights but are limited in their quantitative predictive power. Part of the difficulty is that device noise undergoes multiple frequency translations to become oscillator phase noise. A quantitative understanding of this process requires abandoning the principle of time invariance assumed in most older theories of phase noise. Fortunately, the noise-to-phase transfer function of oscillators is still linear, despite the existence of the nonlinearities necessary for amplitude stabilization. In addition to providing a quantitative reconciliation between theory and measurement, the time-varying phase noise model presented in this tutorial identifies the importance of symmetry in suppressing the upconversion of 1/f noise into close-in phase noise, and provides an explicit appreciation of cyclostationary effects and AM-PM conversion. These insights allow a reinterpretation of why the Colpitts oscillator exhibits good performance, and suggest new oscillator topologies. Tuned LC and ring oscillator circuit examples are presented to reinforce the theoretical considerations developed. Simulation issues and the accommodation of amplitude noise are considered in appendixes."}
{"_id": "3d35faeab92b4f5d64da081d67a326c38736d4dd", "title": "A study of phase noise in CMOS oscillators", "text": "This paper presents a study of phase noise in two inductorless CMOS oscillators. First-order analysis of a linear oscillatory system leads to a noise shaping function and a new definition of Q. A linear model of CMOS ring oscillators is used to calculate their phase noise, and three phase noise phenomena, namely, additive noise, high-frequency multiplicative noise, and low-frequency multiplicative noise, are identified and formulated. Based on the same concepts, a CMOS relaxation oscillator is also analyzed. Issues and techniques related to simulation of noise in the time domain are described, and two prototypes fabricated in a 0.5m CMOS technology are used to investigate the accuracy of the theoretical predictions. Compared with the measured results, the calculated phase noise values of a 2-GHz ring oscillator and a 900-MHz relaxation oscillator at 5 MHz offset have an error of approximately 4 dB."}
{"_id": "ff8cec504cdaa063278c64a2cb05119fab976db4", "title": "On the Logic and Purpose of Significance Testing", "text": "There has been much recent attention given to the problems involved with the traditional approach to null hypothesis significance testing (NHST). Many have suggested that, perhaps, NHST should be abandoned altogether in favor of other bases for conclusions such as confidence intervals and effect size estimates (e.g., Schmidt, 1996). The purposes of this article are to (a) review the function that data analysis is supposed to serve in the social sciences, (b) examine the ways in which these functions are performed by NHST, (c) examine the case against NHST, and (d) evaluate interval-based estimation as an alternative to NHST."}
{"_id": "f4c8df47919c57928f91da5e942bfa5f74fe7aca", "title": "Flip-chip interconnection EA-DFB laser module for 100-Gbit/s/\u03bb application", "text": "We have developed a flip-chip interconnection technique to solve the problem of the degradation of the modulation bandwidth of an optical transmitter due to the parasitic inductance of bonding wire. And we have fabricated an LE-EA-DFB laser module that has a 3-dB bandwidth of 59 GHz thanks to our flip-chip interconnection technique. The fabricated optical receiver, which includes a wideband electrical amplifier, has a 3-dB bandwidth of 51 GHz. Using these optical modules with a 3-dB bandwidth of more than 50 GHz, we have obtained a bit-error rate of less than 2 \u00d7 10-4, which is an error-free condition using KP4 forward error correction, under 53.2-Gbaud 4-level pulse amplitude modulation (4-PAM) operation without an equalizer even after a 2-km transmission through single-mode fiber."}
{"_id": "167842cb39be40987b783e2932b9ad2025ab7d07", "title": "Terrain surface classification for autonomous ground vehicles using a 2D laser stripe-based structured light sensor", "text": "To increase autonomous ground vehicle (AGV) safety and efficiency on outdoor terrains the vehicle's control system should have settings for individual terrain surfaces. A first step in such a terrain-dependent control system is classification of the surface upon which the AGV is traversing. This paper considers vision-based terrain surface classification for the path directly in front of the vehicle (\u226a 1 m). Most visionbased terrain classification has focused on terrain traversability and not on terrain surface classification. The few approaches to classifying traversable terrain surfaces, with the exception of the use of infrared cameras to classify mud, have relied on stand-alone cameras that are designed for daytime use and are not expected to perform well in the dark. In contrast, this research uses a laser stripe-based structured light sensor, which uses a laser in conjunction with a camera, and hence can work at night. Also, unlike most previous results, the classification here does not rely on color since color changes with illumination and weather, and certain terrains have multiple colors (e.g., sand may be red or white). Instead, it relies only on spatial relationships, specifically spatial frequency response and texture, which captures spatial relationships between different gray levels. Terrain surface classification using each of these features separately is conducted by using a probabilistic neural network. Experimental results based on classifying four outdoor terrains demonstrate the effectiveness of the proposed methods."}
{"_id": "51c6e7ea45aaf4f2374dbf09788d78e2a1973649", "title": "The microbiota-gut-brain axis: neurobehavioral correlates, health and sociality", "text": "Recent data suggest that the human body is not such a neatly self-sufficient island after all. It is more like a super-complex ecosystem containing trillions of bacteria and other microorganisms that inhabit all our surfaces; skin, mouth, sexual organs, and specially intestines. It has recently become evident that such microbiota, specifically within the gut, can greatly influence many physiological parameters, including cognitive functions, such as learning, memory and decision making processes. Human microbiota is a diverse and dynamic ecosystem, which has evolved in a mutualistic relationship with its host. Ontogenetically, it is vertically inoculated from the mother during birth, established during the first year of life and during lifespan, horizontally transferred among relatives, mates or close community members. This micro-ecosystem serves the host by protecting it against pathogens, metabolizing complex lipids and polysaccharides that otherwise would be inaccessible nutrients, neutralizing drugs and carcinogens, modulating intestinal motility, and making visceral perception possible. It is now evident that the bidirectional signaling between the gastrointestinal tract and the brain, mainly through the vagus nerve, the so called \"microbiota-gut-vagus-brain axis,\" is vital for maintaining homeostasis and it may be also involved in the etiology of several metabolic and mental dysfunctions/disorders. Here we review evidence on the ability of the gut microbiota to communicate with the brain and thus modulate behavior, and also elaborate on the ethological and cultural strategies of human and non-human primates to select, transfer and eliminate microorganisms for selecting the commensal profile."}
{"_id": "9f95b2cd4c22d5f7fce62c9f804e827d2ded0f84", "title": "Endurance exercise performance: the physiology of champions.", "text": "Efforts to understand human physiology through the study of champion athletes and record performances have been ongoing for about a century. For endurance sports three main factors--maximal oxygen consumption (.VO(2,max)), the so-called 'lactate threshold' and efficiency (i.e. the oxygen cost to generate a given running speed or cycling power output)--appear to play key roles in endurance performance. and lactate threshold interact to determine the 'performance .VO(2)' which is the oxygen consumption that can be sustained for a given period of time. Efficiency interacts with the performance .VO(2) to establish the speed or power that can be generated at this oxygen consumption. This review focuses on what is currently known about how these factors interact, their utility as predictors of elite performance, and areas where there is relatively less information to guide current thinking. In this context, definitive ideas about the physiological determinants of running and cycling efficiency is relatively lacking in comparison with .VO(2,max) and the lactate threshold, and there is surprisingly limited and clear information about the genetic factors that might pre-dispose for elite performance. It should also be cautioned that complex motivational and sociological factors also play important roles in who does or does not become a champion and these factors go far beyond simple physiological explanations. Therefore, the performance of elite athletes is likely to defy the types of easy explanations sought by scientific reductionism and remain an important puzzle for those interested in physiological integration well into the future."}
{"_id": "0d526d3ed49943b302bbbe6747dd3484c7d706af", "title": "Breaking Mifare DESFire MF3ICD40: Power Analysis and Templates in the Real World", "text": "With the advent of side-channel analysis, implementations of mathematically secure ciphers face a new threat: by exploiting the physical characteristics of a device, adversaries are able to break algorithms such as AES or Triple-DES (3DES), for which no efficient analytical or brute-force attacks exist. In this paper, we demonstrate practical, noninvasive side-channel attacks on the Mifare DESFire MF3ICD40 contactless smartcard, a 3DES-based alternative to the cryptanalytically weak Mifare Classic [9, 25]. We detail on how to recover the complete 112-bit secret key of the employed 3DES algorithm, using non-invasive power analysis and template attacks. Our methods can be put into practice at a low cost with standard equipment, thus posing a severe threat to many real-world applications that employ the DESFire MF3ICD40 smartcard."}
{"_id": "842301714c2513659a6814a7e9b5ae761136f9d8", "title": "A Survey of Algorithms for Keyword Search on Graph Data", "text": "In this chapter, we survey methods that perform keyword search on graph data. Keyword search provides a simple but user-friendly interface to retrieve information from complicated data structures. Since many real life datasets are represented by trees and graphs, keyword search has become an attractive mechanism for data of a variety of types. In this survey, we discuss methods of keyword search on schema graphs, which are abstract representation for XML data and relational data, and methods of keyword search on schema-free graphs. In our discussion, we focus on three major challenges of keyword search on graphs. First, what is the semantics of keyword search on graphs, or, what qualifies as an answer to a keyword search; second, what constitutes a good answer, or, how to rank the answers; third, how to perform keyword search efficiently. We also discuss some unresolved challenges and propose some new research directions"}
{"_id": "ec2a21eff2b5aa3c195feb03a78b478550dd8582", "title": "Virtual Reality and Education", "text": "Virtual Reality (VR), a new computer technology, has incredible potential in the education field. The reason for this assertion is that education is a field that requires the students to understand complex data, particularly in the study of science (see \"Why is science hard to learn?\" by Millar) and VR makes that task easier. VR presents information in a 3-dimensional form with the participant viewing the world from inside the world (an immersed viewpoint) with the ability to interact with the information or world. People can enter a VR world that is already created or build their own. VR\u2019s style of presentation mimics the ways that we, as humans, have learned to interact with our physical world. The requirement of learning abstract concepts (such as written language or jargon of a particular field) in order to understand data is dramatically reduced. For example, a chemistry world can show participants how electrons hover around a nucleus depending on the amount of energy involved. The abstractness of the subject would not be eliminated, but metaphors could be employed that were based solidly in reality. For instance, in the electron world the color of the world could represent the amount of energy in the system with red representing more energy than blue. For the students, they would be able to relate to hot things looking red and cold things looking blue without the concept of \"joules increasing\" getting in the way."}
{"_id": "ce9ccf0ded2bcfbb1c63fa7e9db1d310bd3db756", "title": "Diverse and abundant antibiotic resistance genes in Chinese swine farms.", "text": "Antibiotic resistance genes (ARGs) are emerging contaminants posing a potential worldwide human health risk. Intensive animal husbandry is believed to be a major contributor to the increased environmental burden of ARGs. Despite the volume of antibiotics used in China, little information is available regarding the corresponding ARGs associated with animal farms. We assessed type and concentrations of ARGs at three stages of manure processing to land disposal at three large-scale (10,000 animals per year) commercial swine farms in China. In-feed or therapeutic antibiotics used on these farms include all major classes of antibiotics except vancomycins. High-capacity quantitative PCR arrays detected 149 unique resistance genes among all of the farm samples, the top 63 ARGs being enriched 192-fold (median) up to 28,000-fold (maximum) compared with their respective antibiotic-free manure or soil controls. Antibiotics and heavy metals used as feed supplements were elevated in the manures, suggesting the potential for coselection of resistance traits. The potential for horizontal transfer of ARGs because of transposon-specific ARGs is implicated by the enrichment of transposases--the top six alleles being enriched 189-fold (median) up to 90,000-fold in manure--as well as the high correlation (r(2) = 0.96) between ARG and transposase abundance. In addition, abundance of ARGs correlated directly with antibiotic and metal concentrations, indicating their importance in selection of resistance genes. Diverse, abundant, and potentially mobile ARGs in farm samples suggest that unmonitored use of antibiotics and metals is causing the emergence and release of ARGs to the environment."}
{"_id": "95d7c5f1309c7cd60fd3866fbc8199ef694d3ce7", "title": "Understanding microblog continuance usage intention: an integrated model", "text": "Purpose \u2013 The purpose of this paper is to explore the factors affecting users\u2019 continuous microblog usage intention. In recent years, the number of microblog users has gradually declined. This research can reveal microblog users\u2019 needs and provide the improvement direction of microblog services. Design/methodology/approach \u2013 By integrating Wixom and Todd\u2019s theoretical framework, the Uses and Gratifications Theory and the DeLone and McLean Information System Success Model, a conceptual model is proposed. In this model, gratification is defined as a kind of behavioral attitude, and satisfaction is viewed as an object-based attitude. The survey data were collected online and analyzed using the partial least squares method. Findings \u2013 The results suggest that users\u2019 continuance intention (behavioral intention) is jointly determined by users\u2019 gratification (behavioral-based attitude) and their habitual microblog usage behavior. Likewise, gratification is positively affected by satisfaction (object-based attitude) which is a joint function of system quality and information quality (object-based beliefs). Originality/value \u2013 In this research, Wixom and Todd\u2019s principle is applied as the basic theoretical framework; gratification is viewed as a behavior attitude and user satisfaction is identified as an object-based attitude. This research model is a new lens for continuance usage research."}
{"_id": "13ae009718f0edf8f6c3d8dc55dd620f3f953ec9", "title": "Automatic Seed Classification by Shape and Color Features using Machine Vision Technology", "text": ": In this paper the proposed system uses content based image retrieval (CBIR) technique for identification of seed e.g. wheat, rice, gram etc. on the basis of their features. CBIR is a technique to identify or recognize the image on the basis of features present in image. Basically features are classified in to four categories 1.color 2.Shape 3. texture 4. size .In this system we are extracting color, shape feature extraction. After that classifying images in to categories using neural network according to the weights and image displayed from the category for which neural network shows maximum weight. category1 belongs to wheat and category2 belongs to gram. Experiment was conducted on 200 images of wheat and gram by using Euclidean distance(ED) and artificial neural network techniques. From 200 images 150 are used for training purpose and 50 images are used for testing purpose. The precision rate of the system by using ED is 84.4 percent By using Artificial neural network precision rate is 95 percent."}
{"_id": "7aea1a278ec2678796c9fefe7b8201047d0613e9", "title": "Commentary on Kraus' (2015) \"classifying intersex in DSM-5: critical reflections on gender dysphoria\".", "text": "A recurrent issue in the more recent editions of the Diagnostic andStatisticalManualofMentalDisorders (DSM)oftheAmerican Psychiatric Association (APA) has been whether patientinitiated gender change and the preceding dissatisfaction with the initially assigned gender (gender dysphoria) should be included as a psychiatric diagnosis and, if yes, whether that diagnosis should be limited to such a presentation shown by persons born with a healthy reproductive system or also be applied if it is shown by persons with congenital somatic intersexuality (in the current medical nomenclature included under disorders of sex development [DSD]) (Hughes, Houk, Ahmed, Lee, & LWPES/ESPE Consensus Group, 2006). DSM-5 stipulates for the diagnosis of Gender Dysphoria (GD)\u2018\u2018Specify if: With a disorder of sex development\u2019\u2019 (italics and boldface removed) and adds the\u2018\u2018Coding note: Code the disorder of sex development as well as gender dysphoria\u2019\u2019(American Psychiatric Association, 2013, pp. 451\u2013459). In the view of Kraus (2015), GD thereby\u2018\u2018subsumes the physical condition under themental \u2018disorder\u2019.\u2019\u2019Kraus sees this conceptualization as\u2018\u2018the most significant change in the revised diagnosis\u2019\u2019and expresses numerous misgivings. In response, as a member of the respective DSM-5 Work Group (which was dissolved with the publication of DSM-5), I want to clarify the pertinentchangesmadefromDSM-IVandDSM-IV-TR(AmericanPsychiatricAssociation,1994,2000) toDSM-5. In the interest of space, I will only address the most salient ones (for other details, see Zucker et al., 2013). (1) Kraus\u2019 (2015) critique, which culminates in the catchy but misleading phrase,\u2018\u2018when the controversial GID diagnosis exits the DSM, disorders of sex development enter\u2019\u2019(p. 1154), appears to be based on a misunderstanding that is already reflected in the titleofherarticle,\u2018\u2018ClassifyingIntersex inDSM5.\u2019\u2019 It is not the physical condition \u2018\u2018disorder of sex development\u2019\u2019 (referring here to intersexuality as illustrated by the examples given in the DSM-5 criteria table on p. 453) that is \u2018\u2018classified in DSM-5\u2019\u2019 or \u2018\u2018subsumed\u2019\u2019 under mental disorder. Instead, what DSM-5 categorizes here as GD is a gender identity problem occurring in a person who also has a DSD/intersexcondition,provided therespectivepsychiatric/psychologic DSM-5 criteria for GD are met. The change is not \u2018\u2018fundamental\u2019\u2019as Kraus suggests, because in DSM-IV, people with what then was called\u2018\u2018Gender Identity Disorder\u2019\u2019(GID) in association with an intersex condition were categorized as GID Not Otherwise Specified (GIDNOS). Also GIDNOS was a psychiatric diagnosis related to gender identity, just not as stringently defined as GID. Thus, in both DSM-IV and DSM-5, the psychologic/psychiatric condition of GD was/is categorized as a psychiatric diagnosis, not the intersex condition. 
In this regard, DSM-5 resembles the DSM-III (APA, 1980) and DSM-III-R (APA, 1987) diagnoses of Gender Identity Disorder of Childhood (GIDC) and the DSM-III-R diagnosis of Transsexualism (in adults): they only stipulated to note\u2018\u2018physical abnormalities of the sex organs\u2019\u2019(with GIDC) and\u2018\u2018physical intersexuality or a genetic abnormality\u2019\u2019(with Transsexualism), if also present, on Axis III (Physical Disorders and Conditions, i.e., not psychiatric diagnostic categories), but not to exclude persons with the combination of GID or Transsexualism and such somatic conditions from the gender identity-related diagnoses on Axis I (Clinical [psychiatric] Syndromes and V Codes]. & Heino F. L. Meyer-Bahlburg meyerb@nyspi.columbia.edu"}
{"_id": "d279e147dde46bbaf74123da4909231b9d1e86e8", "title": "Path planning using Matlab-ROS integration applied to mobile robots", "text": "In this paper the possibilities that Matlab provides to design, implementation and monitoring programs of autonomous navigation for mobile robots, on both simulated and real platforms, through its new toolbox for robotics will be explored. Robotics System Toolbox has established itself as a solid tool of integration between Matlab and robot operating under ROS environment. It will be studied the tools available in this toolbox that allow achieving the connection between the two platforms, in addition to the generation of algorithms of location, path planning, mapping and autonomous navigation."}
{"_id": "aa1c5d1f542a4d762dab40056d2c4bf329e7fd45", "title": "Microfibrillated cellulose and new nanocomposite materials: a review - DTU Orbit (15/10/2018)", "text": "(15/10/2018) Microfibrillated cellulose and new nanocomposite materials: a review Due to their abundance, high strength and stiffness, low weight and biodegradability, nano-scale cellulose fiber materials (e.g., microfibrillated cellulose and bacterial cellulose) serve as promising candidates for bio-nanocomposite production. Such new high-value materials are the subject of continuing research and are commercially interesting in terms of new products from the pulp and paper industry and the agricultural sector. Cellulose nanofibers can be extracted from various plant sources and, although the mechanical separation of plant fibers into smaller elementary constituents has typically required high energy input, chemical and/or enzymatic fiber pre-treatments have been developed to overcome this problem. A challenge associated with using nanocellulose in composites is the lack of compatibility with hydrophobic polymers and various chemical modification methods have been explored in order to address this hurdle. This review summarizes progress in nanocellulose preparation with a particular focus on microfibrillated cellulose and also discusses recent developments in bio-nanocomposite fabrication based on nanocellulose."}
{"_id": "8f3f222127904d0e3568f4c8c807b1bf26b77f93", "title": "Learning state representation for deep actor-critic control", "text": "Deep Neural Networks (DNNs) can be used as function approximators in Reinforcement Learning (RL). One advantage of DNNs is that they can cope with large input dimensions. Instead of relying on feature engineering to lower the input dimension, DNNs can extract the features from raw observations. The drawback of this end-to-end learning is that it usually requires a large amount of data, which for real-world control applications is not always available. In this paper, a new algorithm, Model Learning Deep Deterministic Policy Gradient (ML-DDPG), is proposed that combines RL with state representation learning, i.e., learning a mapping from an input vector to a state before solving the RL task. The ML-DDPG algorithm uses a concept we call predictive priors to learn a model network which is subsequently used to pre-train the first layer of the actor and critic networks. Simulation results show that the ML-DDPG can learn reasonable continuous control policies from high-dimensional observations that contain also task-irrelevant information. Furthermore, in some cases, this approach significantly improves the final performance in comparison to end-to-end learning."}
{"_id": "8d5d725091861aac6a63a43a95bb1c96442fdd3b", "title": "Forecasting market demand for new telecommunications services: an introduction", "text": "The marketing team of a new telecommunications company is usually tasked with producing forecasts for diverse stakeholders with different needs. Consequently, those outside marketing often realize neither the many reasons for developing forecasts nor the marketing theory used and the challenges involved in doing so. Based on our three decades of experience working with telecommunications operators around the world we seek to redress this situation by presenting a discussion of the issues involved in demand forecasting for new communications services. 2002 Elsevier Science Ltd. All rights reserved."}
{"_id": "21813c61601a8537136488ce55a2c15669365ef9", "title": "A Sharp PageRank Algorithm with Applications to Edge Ranking and Graph Sparsification", "text": "We give an improved algorithm for computing personalized PageRank vectors with tight error bounds which can be as small as \u03a9(n\u2212p) for any fixed positive integer p. The improved PageRank algorithm is crucial for computing a quantitative ranking of edges in a given graph. We will use the edge ranking to examine two interrelated problems \u2013 graph sparsification and graph partitioning. We can combine the graph sparsification and the partitioning algorithms using PageRank vectors to derive an improved partitioning algorithm."}
{"_id": "5be6e653955d5ff18b7cf2dbffa94d31f11d1f06", "title": "Turning telecommunications call details to churn prediction: a data mining approach", "text": "As deregulation, stew technologies, and new competitors open up the mobile telecommunications industry, churn prediction and management has become of great concern to mobile service providers: A mobile service provider wishing to retain its subscribers needs to be able to predict which of them may be at-risk of changing services and will make those subscribers the focus of customer retention efforts. In response to the limitations of existing churn-prediction systems and the unavailability of customer demographics in the mobile telecommunications provider investigated, we propose, design, and experimentally evaluate a churn-prediction technique that predicts churning from subscriber contractual information and call pattern changes extracted from call details. This proposed technique is capable of identifying potential churners at the contract level for a specific prediction time-period. In addition, the proposed technique incorporates the multi-classifier class-combiner approach to address the challenge of a highly skewed class distribution between churners and non-churners. The empirical evaluation results suggest that the proposed call-behavior-based churn-prediction technique exhibits satisfactory predictive effectiveness when more recent call details are employed for the churn prediction model construction. Furthermore, the proposed technique is able to demonstrate satisfactory or reasonable predictive power within the one-month interval between model construction and churn prediction. Using a previous demographics-based churn-prediction system as a reference, the lift factors attained by our proposed technique appear largely satisfactory."}
{"_id": "d19c7d0882ae9845f34a84dd4e4dbd1d4cf0b00c", "title": "The Effects of Violent Video Games on Aggression A Meta-Analysis", "text": "s International-B, 49(11). (University Microfilms No. 8902398) Ballard, M. E., & Lineberger, R. (1999). Video game violence and confederate gender: Effects on reward and punishment given by college males. Sex Roles, 41, 541\u2013558. Ballard, M. E., & Wiest, J. R. (1995, March). Mortal Kombat: The effects of violent video technology on males\u2019 hostility and cardiovascular responding. Paper presented at the biennial meeting of the Society for Research in Child Development, Indianapolis, IN. Bandura, A. (1994). The social cognitive theory of mass communication. In J. Bryant & D. Zillmann (Eds.), Media effects: Advances in theory and research (pp. 61\u201390). Hillsdale, NJ: Erlbaum. Battelle, J., & Johnstone, B. (1993, December). Seizing the next level: Sega\u2019s plan for world domination. Wired, 1, 73\u201378, 128\u2013131. Berkowitz, L., & Alioto, J. (1973). The meaning of an observed event as a determinant of its aggressive consequences. Journal of Personality and Social Psychology, 28, 206\u2013217. Berkowitz, L., & Rogers, K. H. (1986). A priming effect analysis of media influences. In J. Bryant & D. Zillmann (Eds.), Perspectives on media effects (pp. 57\u201381). Hillsdale, NJ: Erlbaum. Brusa, J. A. (1988). Effects of video game playing on children\u2019s social behavior (aggression, cooperation). Dissertation Abstracts International-B, 48(10), 3127. (Univeristy Microfilms No. 8800625) Calvert, S., & Tan, S. L. (1994). Impact of virtual reality on young adults\u2019 physiological arousal and aggressive thoughts: Interaction versus observation. Journal of Applied Developmental Psychology, 15, 125\u2013139. Chambers, J. H., & Ascione, F. R. (1987). The effects of prosocial and aggressive video games on children\u2019s donating and helping. Journal of Genetic Psychology, 148, 499\u2013505. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum. Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field research. Chicago: Rand McNally. Cooper, J., & Mackie, D. (1986). Video games and aggression in children. Journal of Applied Social Psychology, 16, 726\u2013744. Dill, K. E. (1997). Violent video game and trait aggression effects on aggressive behavior, thoughts, and feelings, delinquency, and world view. Dissertation Abstracts International-B, 59(07). (University Microfilms No. 9841277) Dill, K. E., & Dill, J. C. (1998). Video game violence: A review of the empirical literature. Aggression & Violent Behavior, 3, 407\u2013428. Dominick, J. R. (1984). Videogames, television violence, and aggression in teenagers. Journal of Communication, 34, 136\u2013147. Feshbach, S. (1955). The drive-reducing function of fantasy behavior. Journal of Abnormal and Social Psychology, 50, 3\u201311. Fling, S., Smith, L., Rodriguez, T., Thornton, D., Atkins, E., & Nixon, K. (1992). Video games, aggression and self-esteem: A survey. Social Behavior and Personality, 20(1), 39\u201345. Funk, J. (1993). Reevaluating the impact of video games. Clinical Pediatrics, 32, 86\u201390. Gibb, G. D., Baily, J. R., Lambirth, T. T., & Wilson, W. P. (1983). Personality differences between high and low electronic game users. Journal of Psychology, 114, 159\u2013165. 430 HUMAN COMMUNICATION RESEARCH / July 2001 Graybill, D., Kirsch, J., & Esselman, E. (1985). 
Effects of playing violent versus nonviolent video games on the aggressive ideation of aggressive and nonaggressive children. Child Study Journal, 15, 199\u2013205. Graybill, D., Strawniak, M., Hunter, T., & O\u2019Leary, M. (1987). Effects of playing versus observing violent versus nonviolent video games on children\u2019s aggression. Psychology: A Quarterly Journal of Human Behavior, 24(3), 1\u20138. Griffiths, M. (1999). Violent video games and aggression: A review of the literature. Aggression & Violent Behavior, 4, 203\u2013212. Gunter, B. (1994) The question of media violence. In J. Bryant & D. Zillmann (Eds.), Media effects: Advances in theory and research (pp. 163\u2013212). Hillsdale, NJ: Erlbaum. Halladay, J., & Wolf, R. (2000, July 18). Indianapolis OKs restrictions on violent video game usage. USA Today, A5. Hendrick, C. (1990). Replications, strict replications, and conceptual replications: Are they important? In J. W. Neuliep (Ed.), Handbook of replication research in the behavioral and social sciences [Special issue]. Journal of Social Behavior and Personality, 5(4), 41\u201349. Hoffman, K. (1995). Effects of playing versus witnessing video game violence on attitudes toward aggression and acceptance of violence as a means of conflict resolution. Dissertation Abstracts International, 56(03), 747. (University Microfilms No. 9522426) Hunt, M. M. (1997). How science takes stock : The story of meta-analysis. New York: Russell Sage Foundation. Hunter, J. E., Schmidt, F. L., & Jackson, G. B. (1982). Meta-analysis: Cumulating research findings across studies. Beverly Hills, CA: Sage. Irwin, A. R., & Gross, A. M. (1995). Cognitive tempo, violent video games, and aggressive behavior in young boys. Journal of Family Violence, 10, 337\u2013350. Jo, E., & Berkowitz, L. (1994). A priming effect analysis of media influences: An update. In J. Bryant & D. Zillmann (Eds.), Media effects: Advances in theory and research (pp. 43\u201360). Hillsdale, NJ: Erlbaum. Kestenbaum, G. I., & Weinstein, L. (1985). Personality, psychopathology and developmental issues in male adolescent video game use. Journal of the American Academy of Child Psychiatry, 24, 329\u2013337. Kirsh, S. J. (1998). Seeing the world through Mortal Kombat-colored glasses: Violent video games and the development of a short-term hostile attribution bias. Childhood, 5, 177\u2013184. Lin, S., & Lepper, M. R. (1987). Correlates of children\u2019s usage of videogames and computers. Journal of Applied Social Psychology, 17(1), 72\u201393. Meinel, C. (1983, May/June). Will Pac-Man consume our nation\u2019s youth? Technology Review, 88, 10\u201311, 28. National Coalition of Television Violence. (1990). Nintendo tainted by extreme violence. NCTV News, 11, 3\u20134. Paik, H., & Comstock, G. (1994). The effects of television violence on antisocial behavior: A meta-analysis. Communication Research, 21, 516\u2013546. Rosenthal, R. (1995). Writing meta-analytic reviews. Psychological Bulletin, 118, 183\u2013192. Rosenthal, R. (1991). Meta-analytic procedures for social research. Newbury Park, CA: Sage. Schutte, N., Malouff, J., Post-Gordon, J., & Rodasta, A. (1988). Effects of playing video games on children\u2019s aggressive and other behaviors. Journal of Applied Social Psychology, 18, 451\u2013456. Scott, D. (1995). The effect of video games on feelings of aggression. Journal of Psychology, 129, 121\u2013132. Silvern, S. B., Lang, M. K., & Williamson, P. A. (1987). Social impact of video game play. 
Meaningful Play, Playful Meaning: Proceedings of the 11th Annual Meeting of the Association for the Anthropological Study of Play. Champaign, IL: Human Kinetics. Sherry / VIDEO GAME META-ANALYSIS 431 Silvern, S., & Williamson, P. (1987). The effects of video game play on young children\u2019s aggression, fantasy, and prosocial behavior. Journal of Developmental Psychology,"}
{"_id": "0e56b759fc516c0bd91bb2afd113d3eeca9b9c61", "title": "Security behaviors of smartphone users", "text": "Purpose \u2013 This paper aims to report on the information security behaviors of smartphone users in an affluent economy of the Middle East. Design/methodology/approach \u2013 A model based on prior research, synthesized from a thorough literature review, is tested using survey data from 500 smartphone users representing three major mobile operating systems. Findings \u2013 The overall level of security behaviors is low. Regression coefficients indicate that the efficacy of security measures and the cost of adopting them are the main factors influencing smartphone security behaviors. At present, smartphone users are more worried about malware and data leakage than targeted information theft. Research limitations/implications \u2013 Threats and counter-measures co-evolve over time, and our findings, which describe the state of smartphone security at the current time, will need to be updated in the future. Practical implications \u2013 Measures to improve security practices of smartphone users are needed urgently. The findings indicate that such measures should be broadly effective and relatively costless for users to implement. Social implications \u2013 Personal smartphones are joining enterprise networks through the acceptance of Bring-Your-Own-Device computing. Users\u2019 laxity about smartphone security thus puts organizations at risk. Originality/value \u2013 The paper highlights the key factors influencing smartphone security and compares the situation for the three leading operating systems in the smartphone market."}
{"_id": "06db1531ef84602ef1aa8dbdb8e89e04e9034583", "title": "Maximum entropy models for natural language ambiguity resolution", "text": "MAXIMUM ENTROPY MODELS FOR NATURAL LANGUAGE AMBIGUITY RESOLUTION Adwait Ratnaparkhi Supervisor Professor Mitch Marcus This thesis demonstrates that several important kinds of natural language ambiguities can be resolved to state of the art accuracies using a single statistical modeling technique based on the principle of maximum entropy We discuss the problems of sentence boundary detection part of speech tagging prepo sitional phrase attachment natural language parsing and text categorization under the maximum entropy framework In practice we have found that maximum entropy models o er the following advantages State of the art Accuracy The probability models for all of the tasks discussed per form at or near state of the art accuracies or outperform competing learning algo rithms when trained and tested under similar conditions Methods which outperform those presented here require much more supervision in the form of additional human involvement or additional supporting resources Knowledge Poor Features The facts used to model the data or features are linguis tically very simple or knowledge poor but yet succeed in approximating complex linguistic relationships Reusable Software Technology The mathematics of the maximum entropy framework"}
{"_id": "2717def05f735902162808f1997048ce31cb6950", "title": "Assessment of computer science learning in a scratch-based outreach program", "text": "Many institutions have created and deployed outreach programs for middle school students with the goal of increasing the number and diversity of students who later pursue careers in computer science. While these programs have been shown to increase interest in computer science, there has been less work on showing whether participants learn computer science content.\n We address two questions, one specific, and the other more general: (1) \"What computer science did our middle school students learn in our interdisciplinary two-week summer camp?\" (2) \"How can computer science concepts be assessed in the context of Scratch-based outreach programs\"? We address both questions by presenting the design of our summer camp, an overview of our curriculum, our assessment methodology, and our assessment results.\n Though the sample size is not statistically significant, the results show that a two-week, interdisciplinary, non-academic summer camp can be effective not only for engaging students, but also for imparting CS content. In just two weeks, with a curriculum not entirely focused on computer science, students displayed competence with event-driven programming, initialization of state, message passing, and say/sound synchronization. We have employed assessment methodologies that avoid written exams, an approach both outreach and classroom-based programs may find useful."}
{"_id": "436c852c4094a58ff9e629ace758612f58459342", "title": "Evaluation of Computer Games Developed by Primary School Children to Gauge Understanding of Programming Concepts", "text": "Under the Curriculum for Excellence (CfE) in Scotland, newer approaches such as games-based learning and games-based construction are being adopted to motivate and engage students. Construction of computer games is seen by some to be a highly motivational and practical approach at engaging children at Primary Education (PE) level in computer programming concepts. Gamesbased learning (GBL) and games-based construction both suffer from a dearth of empirical evidence supporting their validity as teaching and learning approaches. To address this issue, this paper will present the findings of observational research at PE level using Scratch as a tool to develop computer games using rudimentary programming concepts. A list of criteria will be compiled for reviewing the implementation of each participant to gauge the level of programming proficiency demonstrated. The study will review 29 games from Primary 4 to Primary 7 level and will present the overall results and results for each individual year. This study will contribute to the empirical evidence in games-based construction by providing the results of observational research across different levels of PE and will provide pedagogical guidelines for assessing programming ability using a games-based construction approach."}
{"_id": "4d61cf4ffa9d15963e6f61d3d22b17274c6c4800", "title": "Computational thinking", "text": "omputational thinking builds on the power and limits of computing processes, whether they are executed by a human or by a machine. Computational methods and models give us the courage to solve problems and design systems that no one of us would be capable of tackling alone. Computational thinking confronts the riddle of machine intelligence: What can humans do better than computers? And What can computers do better than humans? Most fundamentally it addresses the question: What is computable? Today, we know only parts of the answers to such questions."}
{"_id": "4edaded63d9d7ea27f5c4819f35d7168e1ae7974", "title": "Scratch: programming for all", "text": "\"Digital fluency\" should mean designing, creating, and remixing, not just browsing, chatting, and interacting."}
{"_id": "8fffddb7e900e9a6873903d306c264e8759fec7d", "title": "Design Patterns: Elements of Reusable Object-Oriented Software", "text": "Design Patterns is a modern classic in the literature of object-oriented development, offering timeless and elegant solutions to common problems in software design. It describes patterns for managing object creation, composing objects into larger structures, and coordinating control flow between objects. The book provides numerous examples where using composition rather than inheritance can improve the reusability and flexibility of code. Note, though, that its not a tutorial but a catalog that you can use to find an object-oriented design pattern thats appropriate for the needs of your particular application--a selection for virtuoso programmers who appreciate (or require) consistent, well-engineered object-oriented designs."}
{"_id": "9ef7bf35e858663b48259b45d40b96becf9fe862", "title": "An Evaluation of Model-Based Approaches to Sensor Data Compression", "text": "As the volumes of sensor data being accumulated are likely to soar, data compression has become essential in a wide range of sensor-data applications. This has led to a plethora of data compression techniques for sensor data, in particular model-based approaches have been spotlighted due to their significant compression performance. These methods, however, have never been compared and analyzed under the same setting, rendering a \"right\" choice of compression technique for a particular application very difficult. Addressing this problem, this paper presents a benchmark that offers a comprehensive empirical study on the performance comparison of the model-based compression techniques. Specifically, we reimplemented several state-of-the-art methods in a comparable manner, and measured various performance factors with our benchmark, including compression ratio, computation time, model maintenance cost, approximation quality, and robustness to noisy data. We then provide in-depth analysis of the benchmark results, obtained by using 11 different real data sets consisting of 346 heterogeneous sensor data signals. We believe that the findings from the benchmark will be able to serve as a practical guideline for applications that need to compress sensor data."}
{"_id": "900550291f250b8ee45082752665ab3550bfad2b", "title": "A TOR-based anonymous communication approach to secure smart home appliances", "text": "Digital information has become a social infrastructure and with the expansion of the Internet, network infrastructure has become an indispensable part of social life and industrial activity for mankind. The idea of using existing electronics in smart home appliances and connecting them to the Internet is a new dimension along which technologies continue to grow, and in recent years mankind has witnessed an upsurge of usage of devices such as smart phone, smart television, home health-care device, smart LED light bulbs system, etc. Their build-in internet-controlled function has made them quite attractive to many segments of consumers and smart phone has become a common gadget for social networking. There are, however, serious challenges which need to be addressed as these tiny devices are designed for specific functions and lack processing capacity required for most security software. This research explores how these internet-enabled smart devices can be turned into very dangerous spots for distributed attacks purposes by cybercriminals for various ill intensions in a pinpointed manner. It then introduces a new approach to deal with such problems by taking advantage of the anonymous communication of the Onion Router (hereafter: TOR). It compares pros and cons of using anonymous communication scheme and justifies it be an efficient countermeasure to most attack scenarios."}
{"_id": "08de23f650d1b0c6a366b04d72d2abf8cde2fbd8", "title": "A Fully Integrated Digital LDO With Coarse\u2013Fine-Tuning and Burst-Mode Operation", "text": "The digital low dropout regulator (D-LDO) has drawn significant attention recently for its low-voltage operation and process scalability. However, the tradeoff between current efficiency and transient response speed has limited its applications. In this brief, a coarse-fine-tuning technique with burst-mode operation is proposed to the D-LDO. Once the voltage undershoot/ overshoot is detected, the coarse tuning quickly finds out the coarse control word in which the load current should be located, with large power MOS strength and high sampling frequency for a fixed time. Then, the fine-tuning, with reduced power MOS strength and sampling frequency, regulates the D-LDO to the desired output voltage and takes over the steady-state operation for high accuracy and current efficiency. The proposed D-LDO is verified in a 65-nm CMOS process with a 0.01-mm2 active area. The measured voltage undershoot and overshoot are 55 and 47 mV, respectively, with load steps of 2 to 100 mA with a 20-ns edge time. The quiescent current is 82 \u03bcA, with a 0.43-ps figure of merit achieved. Moreover, the reference tracking speed is 1.5 V/\u03bcs."}
{"_id": "762cbbbeb46f0b1873cfe1dccb57a37da3a5af3e", "title": "A comprehensive introduction to differential geometry", "text": "Spivak's Comprehensive introduction takes as its theme the classical roots of contemporary differential geometry. Spivak explains his Main Premise (my term) as follows: \"in order for an introduction to differential geometry to expose the geometric aspect of the subject, an historical approach is necessary; there is no point in introducing the curvature tensor without explaining how it was invented and what it has to do with curvature\". His second premise concerns the manner in which the historical material should be presented: \"it is absurdly inefficient to eschew the modern language of manifolds, bundles, forms, etc., which was developed precisely in order to rigorize the concepts of classical differential geometry\". Here, Spivak is addressing \"a dilemma which confronts anyone intent on penetrating the mysteries of differential geometry\". On the one hand, the subject is an old one, dating, as we know it, from the works of Gauss and Riemann, and possessing a rich classical literature. On the other hand, the rigorous and systematic formulations in current use were established relatively recently, after topological techniques had been sufficiently well developed to provide a base for an abstract global theory; the coordinate-free geometric methods of E. Cartan were also a major source. Furthermore, the viewpoint of global structure theory now dominates the subject, whereas differential geometers were traditionally more concerned with the local study of geometric objects. Thus it is possible and not uncommon for a modern geometric education to leave the subject's classical origins obscure. Such an approach can offer the great advantages of elegance, efficiency, and direct access to the most active areas of modern research. At the same time, it may strike the student as being frustratingly incomplete. As Spivak remarks, \"ignorance of the roots of the subject has its price-no one denies that modern formulations are clear, elegant and precise; it's just that it's impossible to comprehend how any one ever thought of them.\" While Spivak's impulse to mediate between the past and the present is a natural one and is by no means unique, his undertaking is remarkable for its ambitious scope. Acting on its second premise, the Comprehensive introduction opens with an introduction to differentiable manifolds; the remaining four volumes are devoted to a geometric odyssey which starts with Gauss and Riemann, and ends with the Gauss-Bonnet-Chern Theorem and characteristic classes. A formidable assortment of topics is included along the way, in which we may distinguish several major historical themes: In the first place, the origins of fundamental geometric concepts are investigated carefully. As just one example, Riemannian sectional curvature is introduced by a translation and close exposition of the text of Riemann's remarkable paper, \u00dcber die Hypothesen, welche der Geometrie zu Grunde"}
{"_id": "17963c800077aedd3802b3e97d45c286ba953ba4", "title": "Autonomy and Reliability of Continuous Active Learning for Technology-Assisted Review", "text": "We enhance the autonomy of the continuous active learning method shown by Cormack and Grossman (SIGIR 2014) to be effective for technology-assisted review, in which documents from a collection are retrieved and reviewed, using relevance feedback, until substantially all of the relevant documents have been reviewed. Autonomy is enhanced through the elimination of topic-specific and dataset-specific tuning parameters, so that the sole input required by the user is, at the outset, a short query, topic description, or single relevant document; and, throughout the review, ongoing relevance assessments of the retrieved documents. We show that our enhancements consistently yield superior results to Cormack and Grossman\u2019s version of continuous active learning, and other methods, not only on average, but on the vast majority of topics from four separate sets of tasks: the legal datasets examined by Cormack and Grossman, the Reuters RCV1-v2 subject categories, the TREC 6 AdHoc task, and the construction of the TREC 2002 filtering test collection."}
{"_id": "df3c3ef09b2f37a442652472e89eaad964a6cd1c", "title": "06-025 Multi-agent reinforcement learning : A survey \u2217", "text": "Multi-agent systems are rapidly finding applications in a variety of domains, including robotics, distributed control, telecommunications, economics. Many tasks arising in these domains require that the agents learn behaviors online. A significant part of the research on multi-agent learning concerns reinforcement learning techniques. However, due to different viewpoints on central issues, such as the formal statement of the learning goal, a large number of different methods and approaches have been introduced. In this paper we aim to present an integrated survey of the field. First, the issue of the multi-agent learning goal is discussed, after which a representative selection of algorithms is reviewed. Finally, open issues are identified and future research directions are outlined. Keywords\u2014multi-agent systems, reinforcement learning, game theory, distributed control"}
{"_id": "26c4eb90d0f82ac722cf0bb279de3cf89fef5e89", "title": "FilterReg: Robust and Efficient Probabilistic Point-Set Registration using Gaussian Filter and Twist Parameterization", "text": "Probabilistic point-set registration methods have been gaining more attention for their robustness to noise, outliers and occlusions. However, these methods tend to be much slower than the popular iterative closest point (ICP) algorithms, which severely limits their usability. In this paper, we contribute a novel probabilistic registration method that achieves state-of-the-art robustness as well as substantially faster computational performance than modern ICP implementations. This is achieved using a rigorous yet computationally-efficient probabilistic formulation. Pointset registration is cast as a maximum likelihood estimation and solved using the EM algorithm. We show that with a simple augmentation, the E step can be formulated as a filtering problem, allowing us to leverage advances in efficient Gaussian filtering methods. We also propose a customized permutohedral filter [1] for improved efficiency while retaining sufficient accuracy for our task. Additionally, we present a simple and efficient twist parameterization that generalizes our method to the registration of articulated and deformable objects. For articulated objects, the complexity of our method is almost independent of the Degrees Of Freedom (DOFs), which makes it highly efficient even for high DOF systems. The results demonstrate the proposed method consistently outperforms many competitive baselines on a variety of registration tasks. The video demo and source code are available on our project page."}
{"_id": "71c3182fa122a1d6ccd4aa8eb9dccd95314b848b", "title": "Taxonomy for description of cross-domain attacks on CPS", "text": "The pervasiveness of Cyber-Physical Systems (CPS) in various aspects of the modern society grows rapidly. This makes CPS to increasingly attractive targets for various kinds of attacks. We consider cyber-security as an integral part of CPS security. Additionally, the necessity exists to investigate the CPS-specific aspects which are out of scope of cyber-security. Most importantly, attacks capable to cross the cyber-physical domain boundary should be analyzed. The vulnerability of CPS to such cross-domain attacks has been practically proven by numerous examples, e.g., by the currently most famous Stuxnet attack. In this paper, we propose taxonomy for description of attacks on CPS. The proposed taxonomy is capable of representing both conventional cyber-attacks as well as cross-domain attacks on CPS. Furthermore, based on the proposed taxonomy, we define the attack categorization. Several possible application areas of the proposed taxonomy are extensively discussed. Among others, it can be used to establish a knowledge base about attacks on CPS known in the literature. Furthermore, the proposed description structure will foster the quantitative and qualitative analysis of these attacks, both of which are necessarily to improve CPS security."}
{"_id": "d96f7b8b0a4a6338335929e5495e13e54e21afaf", "title": "Automated 3-D Segmentation of Lungs With Lung Cancer in CT Data Using a Novel Robust Active Shape Model Approach", "text": "Segmentation of lungs with (large) lung cancer regions is a nontrivial problem. We present a new fully automated approach for segmentation of lungs with such high-density pathologies. Our method consists of two main processing steps. First, a novel robust active shape model (RASM) matching method is utilized to roughly segment the outline of the lungs. The initial position of the RASM is found by means of a rib cage detection method. Second, an optimal surface finding approach is utilized to further adapt the initial segmentation result to the lung. Left and right lungs are segmented individually. An evaluation on 30 data sets with 40 abnormal (lung cancer) and 20 normal left/right lungs resulted in an average Dice coefficient of 0.975\u00b10.006 and a mean absolute surface distance error of 0.84\u00b10.23 mm, respectively. Experiments on the same 30 data sets showed that our methods delivered statistically significant better segmentation results, compared to two commercially available lung segmentation approaches. In addition, our RASM approach is generally applicable and suitable for large shape models."}
{"_id": "72912bfe28fe17bcd2c30de1495f186773eeda46", "title": "Detecting Code-Switching between Turkish-English Language Pair", "text": "Code-switching (usage of different languages within a single conversation context in an alternative manner) is a highly increasing phenomenon in social media and colloquial usage which poses different challenges for natural language processing. This paper introduces the first study for the detection of TurkishEnglish code-switching and also a small test data collected from social media in order to smooth the way for further studies. The proposed system using character level n-grams and conditional random fields (CRFs) obtains 95.6% micro-averaged F1-score on the introduced test data set."}
{"_id": "b71aa6ec14d0714c0ccc0a76654166cfe53c7ec3", "title": "Daily Mindful Responding Mediates the Effect of Meditation Practice on Stress and Mood: The Role of Practice Duration and Adherence.", "text": "OBJECTIVE\nAlthough meditation practice is an important component of many mindfulness-based interventions (MBIs), empirical findings of its effects on psychological functioning are\u00a0mixed and the mechanisms for the effects remain unclear. Responding with mindfulness (i.e., returning one's attention back to a\u00a0nonjudgmental, present-oriented awareness) is a fundamental skill practiced in meditations. With repeated meditation practice, this skill is\u00a0thought to become internalized and be applied to one's daily life. We thus hypothesized that the extent to which individuals responded to daily\u00a0events with mindfulness would mediate the effects of meditation practice (instance, duration, and adherence to instructions) on psychological well-being.\n\n\nMETHOD\nUsing a daily diary methodology, we tracked the meditation practice, use of mindful responding during the day, and psychological outcomes (perceived stress, negative and positive affect) of 117 mindfulness-based stress reduction program participants.\n\n\nRESULTS\nWe found that on days when participants meditated, they responded with greater mindfulness to daily events, which accounted for the beneficial effects of meditating on psychological outcomes. Furthermore, findings suggest that on meditation days, longer and more closely adhered meditation practices were independently associated with increases in mindful responding, which in turn were associated with better psychological outcomes.\n\n\nCONCLUSION\nThese results suggest that regular, longer, and more closely adhered meditation practice is an important component of MBIs, in part because it leads to responding more mindfully in daily life,\u00a0which promotes well-being."}
{"_id": "f98e69f9cefeb8eb7998ea5fcad3ea211018b39d", "title": "Social Norms and Community Enforcement", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "374858657ac7f1906cace0eda68539600f7040a8", "title": "Motion estimation with non-local total variation regularization", "text": "State-of-the-art motion estimation algorithms suffer from three major problems: Poorly textured regions, occlusions and small scale image structures. Based on the Gestalt principles of grouping we propose to incorporate a low level image segmentation process in order to tackle these problems. Our new motion estimation algorithm is based on non-local total variation regularization which allows us to integrate the low level image segmentation process in a unified variational framework. Numerical results on the Mid-dlebury optical flow benchmark data set demonstrate that we can cope with the aforementioned problems."}
{"_id": "714e622067ae4f787ee347f510c7a08d2173158d", "title": "BFSPMiner: an effective and efficient batch-free algorithm for mining sequential patterns over data streams", "text": "Supporting sequential pattern mining from data streams is nowadays a relevant problem in the area of data stream mining research. Actual proposals available in the literature are based on the well-known PrefixSpan approach and are, indeed, able to effectively bound the error of discovered patterns. This approach foresees the idea of dividing the target stream in a collection of manageable chunks, i.e., pieces of stream, in order to gain into effectiveness and efficiency. Unfortunately, mining patterns from stream chunks indeed introduce additional errors with respect to the basic application scenario where the target stream is mined continuously, in a non-batch manner. This is due to several reasons. First, since batches are processed individually, patterns that contain items from two consecutive batches are lost. Secondly, in most batch-based approaches, the decision about the frequency of a pattern is done locally inside a single batch. Thus, if a pattern is frequent in the stream but its items are scattered over different batches, it will be continuously pruned out and will never become frequent due to the algorithm\u2019s lack of the \u201ccomplete-picture\u201d perspective. In order to address so-delineated pattern mining problems, this paper introduces and experimentally assesses BFSPMiner, a Batch-Free Sequential Pattern Miner algorithm for effectively and efficiently mining patterns in streams without being constrained to the traditional batch-based processing. This allows us, for instance, to discover frequent patterns that would be lost according to alternative batch-based stream mining processing models. We complement our analytical contributions by means of a comprehensive experimental campaign of BFSPMiner against real-world data stream sets and in comparison with current batch-based stream sequential pattern mining algorithms."}
{"_id": "5f7fc75286c20f78278fa53c8a3f4c32cebebe6a", "title": "Treatment of keloids and hypertrophic scars.", "text": "Clinicians always find it difficult to treat hypertrophic scars and keloids. Various treatment modalities are available. Intralesional corticosteroids, topical applications, cryotherapy, surgery, laser therapy, and silicone sheeting are the widely used options. Radiation therapy can also help in cases of recalcitrant keloids. Most recently, pulsed-dye laser has been successfully used to treat keloids and hypertrophic scars. There are no set guidelines for the treatment of keloids. Treatment has to be individualized depending upon the distribution, size, thickness, and consistency of the lesions and association of inflammation. A combination approach to therapy seems to be the best option."}
{"_id": "62140a124e5ef6b04746b64d9428822d18192486", "title": "Understanding Technology Adoption : Theory and Future Directions for Informal Learning", "text": "How and why individuals adopt innovations has motivated a great deal of research. This article examines individuals' computing adoption processes through the lenses of three adoption theories: Rogers's innovation diffusion theory, the Concerns-Based Adoption Model, the Technology Acceptance Model, and the United Theory of Acceptance and Use of Technology. Incorporating all three models, this article suggests technology adoption is a complex, inherently social, developmental process; individuals construct unique yet malleable perceptions of technology that influence their adoption decisions. Thus, successfully facilitating technology adoption must address cognitive, emotional, and contextual concerns. This article also focuses specific attention on adoption theory outside of a formal organization and the implications of adoption theory on informal environments. For a crime against the gods\u2014the specifics of which are variously reported\u2014he was condemned to an eternity at hard labor. And frustrating labor at that. For his assignment was to roll a great boulder to the top of a hill. Only every time Sisyphus, by the greatest of exertion and toil, attained the summit, the darn thing rolled back down again."}
{"_id": "44411382876f098bfaad184bd70dca2df9ade956", "title": "From natural language requirements to UML class diagrams", "text": "Unified Modeling Language (UML) is the most popular modeling language for analysis, design and development of the software system. There has been a lot of research interest in generating these UML models, especially class diagrams, automatically from Natural Language requirements. The interest in class diagrams can be attributed to the fact that classes represent the abstractions present in the system to be developed. However, automated generation of UML class diagrams is a challenging task as it involves lot of pre-processing or manual intervention at times. In this paper, we present dependency analysis based approach to derive UML class diagrams automatically from Natural Language requirements. We transform the requirements statements to an intermediary frame-based structured representation using dependency analysis of requirements statements and the Grammatical Knowledge Patterns. The knowledge stored in the frame-based structured representation is used to derive class diagrams using rule-based algorithm. Our approach has generated similar class diagrams as reported in earlier works based on linguistic analysis with either annotation or manual intervention. We present the effectiveness of our approach in terms of recall and precision for the case-studies presented in earlier works."}
{"_id": "237292e08fe45320e954377ebe2b7e08d08f1979", "title": "On the Construction of Huffman Trees", "text": ""}
{"_id": "a3daa12e3535065e82cc1806058be31c42c48f5d", "title": "Health Effects Related to Wind Turbine Noise Exposure: A Systematic Review", "text": "BACKGROUND\nWind turbine noise exposure and suspected health-related effects thereof have attracted substantial attention. Various symptoms such as sleep-related problems, headache, tinnitus and vertigo have been described by subjects suspected of having been exposed to wind turbine noise.\n\n\nOBJECTIVE\nThis review was conducted systematically with the purpose of identifying any reported associations between wind turbine noise exposure and suspected health-related effects.\n\n\nDATA SOURCES\nA search of the scientific literature concerning the health-related effects of wind turbine noise was conducted on PubMed, Web of Science, Google Scholar and various other Internet sources.\n\n\nSTUDY ELIGIBILITY CRITERIA\nAll studies investigating suspected health-related outcomes associated with wind turbine noise exposure were included.\n\n\nRESULTS\nWind turbines emit noise, including low-frequency noise, which decreases incrementally with increases in distance from the wind turbines. Likewise, evidence of a dose-response relationship between wind turbine noise linked to noise annoyance, sleep disturbance and possibly even psychological distress was present in the literature. Currently, there is no further existing statistically-significant evidence indicating any association between wind turbine noise exposure and tinnitus, hearing loss, vertigo or headache.\n\n\nLIMITATIONS\nSelection bias and information bias of differing magnitudes were found to be present in all current studies investigating wind turbine noise exposure and adverse health effects. Only articles published in English, German or Scandinavian languages were reviewed.\n\n\nCONCLUSIONS\nExposure to wind turbines does seem to increase the risk of annoyance and self-reported sleep disturbance in a dose-response relationship. There appears, though, to be a tolerable level of around LAeq of 35 dB. Of the many other claimed health effects of wind turbine noise exposure reported in the literature, however, no conclusive evidence could be found. Future studies should focus on investigations aimed at objectively demonstrating whether or not measureable health-related outcomes can be proven to fluctuate depending on exposure to wind turbines."}
{"_id": "374062ce74cddb2d55ced757e59e87c1bd087124", "title": "A near linear time constant factor approximation for Euclidean bichromatic matching (cost)", "text": "We give an NlogO(1) N-time randomized O(1)-approximation algorithm for computing the cost of minimum bichromatic matching between two planar point-sets of size N."}
{"_id": "4755e9c5b8750c1e8066d53a7f67497b6919dc2f", "title": "I Can Has Cheezburger? A Nonparanormal Approach to Combining Textual and Visual Information for Predicting and Generating Popular Meme Descriptions", "text": "The advent of social media has brought Internet memes, a unique social phenomenon, to the front stage of the Web. Embodied in the form of images with text descriptions, little do we know about the \u201clanguage of memes\u201d. In this paper, we statistically study the correlations among popular memes and their wordings, and generate meme descriptions from raw images. To do this, we take a multimodal approach\u2014we propose a robust nonparanormal model to learn the stochastic dependencies among the image, the candidate descriptions, and the popular votes. In experiments, we show that combining text and vision helps identifying popular meme descriptions; that our nonparanormal model is able to learn dense and continuous vision features jointly with sparse and discrete text features in a principled manner, outperforming various competitive baselines; that our system can generate meme descriptions using a simple pipeline."}
{"_id": "c924137ca87e8b4e1557465405744f8b639b16fc", "title": "Seeding Deep Learning using Wireless Localization", "text": "Build accurate DNN models requires training on large labeled, context specific datasets, especially those matching the target scenario. We believe advances in wireless localization, working in unison with cameras, can produce automated annotation of targets on images and videos captured in the wild. Using pedestrian and vehicle detection as examples, we demonstrate the feasibility, benefits, and challenges of an automatic image annotation system. Our work calls for new technical development on passive localization, mobile data analytics, and error-resilient ML models, as well as design issues in user privacy policies."}
{"_id": "2088c55ec3e52a8be99cd933907020c70b281383", "title": "Mastering the game of Go with deep neural networks and tree search", "text": "The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses \u2018value networks\u2019 to evaluate board positions and \u2018policy networks\u2019 to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away."}
{"_id": "1cac91219d44b4c202109bebdf5c8704eecdde48", "title": "Clinical spinal instability and low back pain.", "text": "Clinical instability is an important cause of low back pain. Although there is some controversy concerning its definition, it is most widely believed that the loss of normal pattern of spinal motion causes pain and/or neurologic dysfunction. The stabilizing system of the spine may be divided into three subsystems: (1) the spinal column; (2) the spinal muscles; and (3) the neural control unit. A large number of biomechanical studies of the spinal column have provided insight into the role of the various components of the spinal column in providing spinal stability. The neutral zone was found to be a more sensitive parameter than the range of motion in documenting the effects of mechanical destabilization of the spine caused by injury and restabilization of the spine by osteophyle formation, fusion or muscle stabilization. Clinical studies indicate that the application of an external fixator to the painful segment of the spine can significantly reduce the pain. Results of an in vitro simulation of the study found that it was most probably the decrease in the neutral zone, which was responsible for pain reduction. A hypothesis relating the neutral zone to pain has been presented. The spinal muscles provide significant stability to the spine as shown by both in vitro experiments and mathematical models. Concerning the role of neuromuscular control system, increased body sway has been found in patients with low back pain, indicating a less efficient muscle control system with decreased ability to provide the needed spinal stability."}
{"_id": "09b1520aea25ff0b5852d8a777e48eacf5300fac", "title": "A Study on Balancing Parallelism, Data Locality, and Recomputation in Existing PDE Solvers", "text": "Structured-grid PDE solver frameworks parallelize over boxes, which are rectangular domains of cells or faces in a structured grid. In the Chombo framework, the box sizes are typically 163 or 323, but larger box sizes such as 1283 would result in less surface area and therefore less storage, copying, and/or ghost cells communication overhead. Unfortunately, current on-node parallelization schemes perform poorly for these larger box sizes. In this paper, we investigate 30 different inter-loop optimization strategies and demonstrate the parallel scaling advantages of some of these variants on NUMA multicore nodes. Shifted, fused, and communication-avoiding variants for 1283 boxes result in close to ideal parallel scaling and come close to matching the performance of 163 boxes on three different multicore systems for a benchmark that is a proxy for program idioms found in Computational Fluid Dynamic (CFD) codes."}
{"_id": "93aa35ceb562affe9a85929184fdf98d8aa93821", "title": "A Survey of the Functional Splits Proposed for 5 G Mobile Crosshaul Networks Line", "text": "Pacing the way towards 5G has lead researchers and industry in the direction of centralized processing known from Cloud-Radio Access Networks (C-RAN). In C-RAN research, a variety of different functional splits is presented by different names and focusing on different directions. The functional split determines how many Base Station (BS) functions to leave locally, close to the user, with the benefit of relaxing fronthaul network bitrate and delay requirements, and how many functions to centralize with the possibility of achieving greater processing benefits. This work presents for the first time a comprehensive overview systematizing the different work directions for both research and industry, while providing a detailed description of each functional split option and an assessment of the advantages and disadvantages. This work gives an overview of where the most effort has been directed in terms of functional splits, and where there is room for further studies. The standardization currently taking place is also considered and mapped into the research directions. It is investigated how the fronthaul network will be affected by the choice of functional split, both in terms of bitrates and latency, and as the different functional splits provide different advantages and disadvantages, the option of flexible functional splits is also looked into. Keywords\u2014Functional Split; Crosshaul; X-haul; C-RAN; Fronthaul; standardization; industry; network architecture; Please refer to the list of acronyms provided in the end of the paper, right before the references."}
{"_id": "0b20bb0543ceeff587a5e1a1f304d3b5edcaad1e", "title": "A Prototype Navigation System for Guiding Blind People Indoors using NXT Mindstorms", "text": "People with visual impairment face enormous difficulties in terms of their mobility as they do not have enough information about their location and orientation with respect to traffic and obstacles on their route. Visually impaired people can navigate unknown areas by relying on the assistance of canes, other people, or specially trained guide dogs. The traditional ways of using guide dogs and long-cane only help to avoid obstacles, not to know where they are. The research presented in this paper introduces a mobile assistant navigation prototype to locate and direct blind people indoors. Since most of the existing navigation systems developed so far for blind people employ a complex conjunction of positioning systems, video cameras, locationbased and image processing algorithms, we designed an affordable low-cost prototype navigation system for orienting and tracking the position of blind people in complex environments. The prototype system is based on the inertial navigation system and experiments have been performed on NXT Mindstorms platform."}
{"_id": "7afc10ef1d101d3def485f6b4f5fbf6f70dfa373", "title": "Online Learning of Weighted Relational Rules for Complex Event Recognition", "text": "Systems for symbolic complex event recognition detect occurrences of events in time using a set of event definitions in the form of logical rules. The Event Calculus is a temporal logic that has been used as a basis in event recognition applications, providing among others, connections to techniques for learning such rules from data. We advance the state-of-the-art by combining an existing online algorithm for learning crisp relational structure with an online method for weight learning in Markov Logic Networks (MLN). The result is an algorithm that learns complex event patterns in the form of Event Calculus theories in the MLN semantics. We evaluate our approach on a challenging real-world application for activity recognition and show that it outperforms both its crisp predecessor and competing online MLN learners in terms of predictive performance, at the price of a small increase in training time."}
{"_id": "9f7d7dc88794d28865f28d7bba3858c81bdbc3db", "title": "Deep Reinforcement Learning for Robotic Manipulation", "text": "Reinforcement learning holds the promise of enabling autonomous robots to learn large repertoires of behavioral skills with minimal human intervention. However, robotic applications of reinforcement learning often compromise the autonomy of the learning process in favor of achieving training times that are practical for real physical systems. This typically involves introducing hand-engineered policy representations and human-supplied demonstrations. Deep reinforcement learning alleviates this limitation by training general-purpose neural network policies, but applications of direct deep reinforcement learning algorithms have so far been restricted to simulated settings and relatively simple tasks, due to their apparent high sample complexity. In this paper, we demonstrate that a recent deep reinforcement learning algorithm based on offpolicy training of deep Q-functions can scale to complex 3D manipulation tasks and can learn deep neural network policies efficiently enough to train on real physical robots. We demonstrate that the training times can be further reduced by parallelizing the algorithm across multiple robots which pool their policy updates asynchronously. Our experimental evaluation shows that our method can learn a variety of 3D manipulation skills in simulation and a complex door opening skill on real robots without any prior demonstrations or manually designed representations."}
{"_id": "64c8217cc46df711f294cdd823d04ff2cc602280", "title": "Gradient and Curvature from Photometric Stereo Including Local Confidence Estimation", "text": "Photometric stereo is one technique for 3D shape determination that has been implemented in a variety of experimental settings and that has produced consistently good results. The idea is to use intensity values recorded from multiple images obtained from the same viewpoint but under different conditions of illumination. The resulting radiometric constraint makes it possible to obtain local estimates of both surface orientation and surface curvature without requiring global smoothness assumptions and without requiring prior image segmentation. This paper moves photometric stereo one step closer to practical viability by describing an experimental setting in which surface gradient estimation is achieved on full frame video data at near video frame rates (i.e., 15Hz). The implementation uses commercially available hardware. Reflectance is modeled empirically using measurements obtained from a calibration sphere. Estimation of the gradient, (p, q), requires only simple table lookup. Curvature estimation uses, in addition, the reflectance map, R(p, q). The required lookup table and reflectance maps are derived during calibration. Because reflectance is modeled empirically, no prior physical model of the reflectance characteristics of the objects to be analyzed is assumed. At the same time, if a good physical model is available, it can be retrofit to the method for implementation purposes. Photometric stereo is subject to error in the presence of cast shadows and interreflection. No purely local technique can succeed since these phenomena are inherently non-local. Nevertheless, this paper demonstrates that one can exploit the redundancy in three light source photometric stereo to, in most cases, locally detect the presence of cast shadows and interreflection. Detection is facilitated by explicitly including a local confidence estimate in the lookup table used for gradient estimation."}
{"_id": "3e0feb2986f4719d12b30446744c631cdfa695e8", "title": "Anatomisation with slicing: a new privacy preservation approach for multiple sensitive attributes", "text": "An enormous quantity of personal health information is available in recent decades and tampering of any part of this information imposes a great risk to the health care field. Existing anonymization methods are only apt for single sensitive and low dimensional data to keep up with privacy specifically like generalization and bucketization. In this paper, an anonymization technique is proposed that is a combination of the benefits of anatomization, and enhanced slicing approach adhering to the principle of k-anonymity and l-diversity for the purpose of dealing with high dimensional data along with multiple sensitive data. The anatomization approach dissociates the correlation observed between the quasi identifier attributes and sensitive attributes (SA) and yields two separate tables with non-overlapping attributes. In the enhanced slicing algorithm, vertical partitioning does the grouping of the correlated SA in ST together and thereby minimizes the dimensionality by employing the advanced clustering algorithm. In order to get the optimal size of buckets, tuple partitioning is conducted by MFA. The experimental outcomes indicate that the proposed method can preserve privacy of data with numerous SA. The anatomization approach minimizes the loss of information and slicing algorithm helps in the preservation of correlation and utility which in turn results in reducing the data dimensionality and information loss. The advanced clustering algorithms prove its efficiency by minimizing the time and complexity. Furthermore, this work sticks to the principle of k-anonymity, l-diversity and thus avoids privacy threats like membership, identity and attributes disclosure."}
{"_id": "a78f218cca10e387a9d3ad230fc8b447f04b9e48", "title": "Leveraging Deep Neural Networks and Knowledge Graphs for Entity Disambiguation", "text": "Entity Disambiguation aims to link mentions of ambiguous entities to a knowledge base (e.g., Wikipedia). Modeling topical coherence is crucial for this task based on the assumption that information from the same semantic context tends to belong to the same topic. This paper presents a novel deep semantic relatedness model (DSRM) based on deep neural networks (DNN) and semantic knowledge graphs (KGs) to measure entity semantic relatedness for topical coherence modeling. The DSRM is directly trained on large-scale KGs and it maps heterogeneous types of knowledge of an entity from KGs to numerical feature vectors in a latent space such that the distance between two semantically-related entities is minimized. Compared with the state-ofthe-art relatedness approach proposed by (Milne and Witten, 2008a), the DSRM obtains 19.4% and 24.5% reductions in entity disambiguation errors on two publicly available datasets respectively."}
{"_id": "a5a693c672a85420ffaf781d0601cd77b1027987", "title": "Epidemic dengue/dengue hemorrhagic fever as a public health, social and economic problem in the 21st century.", "text": "Dengue fever/dengue hemorrhagic fever is now one of the most important public health problems in tropical developing countries and also has major economic and societal consequences."}
{"_id": "680c63f3ade86b456d8a00cc25e1f9af94b81eae", "title": "AN AUTOMATIC SYSTEM FOR DETECTING AND COUNTING RBC AND WBC USING FUZZY LOGIC", "text": "Blood cell detection and counting is the initial process for detecting and diagnosing diseases. Several image processing algorithms are there for the blood cell classification and counting. The processed image helps to detect different blood related diseases. In all those algorithms several pre-processing steps are there for the process of detection and counting. Though all the algorithms give accurate results, the pre-processing steps are complex and time-consuming. This paper discusses about the RBC and WBC detection using fuzzy logic. Fuzzy logic toolbox software in MATLAB is used to develop the model on virtual platform."}
{"_id": "c7a1b63a981b01413267fbe9816f309fe8c3fa08", "title": "Evoked-potential correlates of stimulus uncertainty.", "text": "The average evoked-potential waveforms to sound and light stimuli recorded from scalp in awake human subjects show differences as a function of the subject's degree of uncertainty with respect to the sensory modality of the stimulus to be presented. Differences are also found in the evoked potential as a function of whether or not the sensorymodality of the stimulus was anticipated correctly. The major waveform alteration is in the amplitude of a positive-going component which reaches peak amplitude at about 300 milliseconds."}
{"_id": "de81d968a660df67a8984df6aa77cf88df77259f", "title": "Development of DC to Single-Phase AC Voltage Source Inverter With Active Power Decoupling Based on Flying Capacitor DC/DC Converter", "text": "In the present, a power decoupling method without additional component is proposed for a dc to single-phase ac converter, which consists of a flying capacitor dc/dc converter (FCC) and the voltage source inverter (VSI). In particular, a small flying capacitor in the FCC is used for both a boost operation and a double-line-frequency power ripple reduction. Thus, the dc-link capacitor value can be minimized in order to avoid the use of a large electrolytic capacitor. In addition, component design, of, e.g., the boost inductor and the flying capacitor, is clarified when the proposed control is applied. Experiments were carried out using a 1.5-kW prototype in order to verify the validity of the proposed control. The experimental results revealed that the use of the proposed control reduced the dc-link voltage ripple by 74.5%, and the total harmonic distortion (THD) of the inverter output current was less than 5%. Moreover, a maximum system efficiency of 95.4% was achieved at a load of 1.1 kW. Finally, the high power density design is evaluated by the Pareto front optimization. The power densities of three power decoupling topologies, such as a boost topology, a buck topology, and the proposed topology are compared. As a result, the proposed topology achieves the highest power density (5.3\u00a0kW/dm3) among the topologies considered herein."}
{"_id": "86c5c4a11e0f60820518c2d2d57a0da8e4a429e2", "title": "Efficient Exploration in Monte Carlo Tree Search using Human Action Abstractions", "text": "Monte Carlo Tree Search (MCTS) is a family of methods for planning in large domains. It focuses on finding a good action for a particular state, making its complexity independent of the size of the state space. However such methods are exponential with respect to the branching factor. Effective application of MCTS requires good heuristics to arbitrate action selection during learning. In this paper we present a policy-guided approach that utilizes action abstractions, derived from human input, with MCTS to facilitate efficient exploration. We draw from existing work in hierarchical reinforcement learning, interactive machine learning and show how multi-step actions, represented as stochastic policies, can serve as good action selection heuristics. We demonstrate the efficacy of our approach in the PacMan domain and highlight its advantages over traditional MCTS."}
{"_id": "e0080299afae01ad796060abcf602abff6024754", "title": "A Survey of Music Recommendation Systems and Future Perspectives", "text": "Along with the rapid expansion of digital music formats, managing and searching for songs has become significant. Though music information retrieval (MIR) techniques have been made successfully in last ten years, the development of music recommender systems is still at a very early stage. Therefore, this paper surveys a general framework and state-of-art approaches in recommending music. Two popular algorithms: collaborative filtering (CF) and content-based model (CBM), have been found to perform well. Due to the relatively poor experience in finding songs in long tail and the powerful emotional meanings in music, two user-centric approaches: context-based model and emotion-based model, have been paid increasing attention. In this paper, three key components in music recommender user modelling, item profiling, and match algorithms are discussed. Six recommendation models and four potential issues towards user experience, are explained. However, subjective music recommendation system has not been fully investigated. To this end, we propose a motivation-based model using the empirical studies of human behaviour, sports education, music psychology."}
{"_id": "470a6b517b36ed5c8125f93bb8a82984e8835c55", "title": "Image super-resolution using gradient profile prior", "text": "In this paper, we propose an image super-resolution approach using a novel generic image prior - gradient profile prior, which is a parametric prior describing the shape and the sharpness of the image gradients. Using the gradient profile prior learned from a large number of natural images, we can provide a constraint on image gradients when we estimate a hi-resolution image from a low-resolution image. With this simple but very effective prior, we are able to produce state-of-the-art results. The reconstructed hi-resolution image is sharp while has rare ringing or jaggy artifacts."}
{"_id": "857176d022369e963d3ff1be2cb9e1ca2f674520", "title": "DeepPath: A Reinforcement Learning Method for Knowledge Graph Reasoning", "text": "We study the problem of learning to reason in large scale knowledge graphs (KGs). More specifically, we describe a novel reinforcement learning framework for learning multi-hop relational paths: we use a policy-based agent with continuous states based on knowledge graph embeddings, which reasons in a KG vector space by sampling the most promising relation to extend its path. In contrast to prior work, our approach includes a reward function that takes the accuracy, diversity, and efficiency into consideration. Experimentally, we show that our proposed method outperforms a path-ranking based algorithm and knowledge graph embedding methods on Freebase and Never-Ending Language Learning datasets.1"}
{"_id": "f1cb9dbd0a7b9a027710f994b9a65342d4e98079", "title": "Adversarial Geometry and Lighting using a Differentiable Renderer", "text": "Many machine learning image classifiers are vulnerable to adversarial attacks, inputs with perturbations designed to intentionally trigger misclassification. Current adversarial methods directly alter pixel colors and evaluate against pixel norm-balls: pixel perturbations smaller than a specified magnitude, according to a measurement norm. This evaluation, however, has limited practical utility since perturbations in the pixel space do not correspond to underlying real-world phenomena of image formation that lead to them and has no security motivation attached. Pixels in natural images are measurements of light that has interacted with the geometry of a physical scene. As such, we propose a novel evaluation measure, parametric normballs, by directly perturbing physical parameters that underly image formation. One enabling contribution we present is a physically-based differentiable renderer that allows us to propagate pixel gradients to the parametric space of lighting and geometry. Our approach enables physically-based adversarial attacks, and our differentiable renderer leverages models from the interactive rendering literature to balance the performance and accuracy trade-offs necessary for a memory-efficient and scalable adversarial data augmentation workflow."}
{"_id": "1c1f5132023cf060b7278cf9dad7f4ea3b56056f", "title": "A framework for the design and integration of collaborative classroom games", "text": "The progress registered in the use of video games as educational tools has not yet been successfully transferred to the classroom. In an attempt to close this gap, a framework was developed that assists in the design and classroom integration of educational games. The framework addresses both the educational dimension and the ludic dimension. The educational dimension employs Bloom\u2019s revised taxonomy to define learning objectives and applies the classroom multiplayer presential game (CMPG) pedagogical model while the ludic dimension determines the gaming elements subject to constraints imposed by the educational dimension. With a view to validating the framework, a game for teaching electrostatics was designed and experimentally implemented in a classroom context. An evaluation based on pre/post testing found that the game increased the average number of correct answers by students participating in the experiment from 6.11 to 10.00, a result found to be statistically significant. Thus validated, the framework offers a promising basis for further exploration through the development of other games and fine-tuning of its components. 2011 Elsevier Ltd. All rights reserved."}
{"_id": "4585d6899ed33c845846e92b28a7d5d193de007d", "title": "Effects of growth mode and pyruvate carboxylase on succinic acid production by metabolically engineered strains of Escherichia coli.", "text": "Escherichia coli NZN111, which lacks activities for pyruvate-formate lyase and lactate dehydrogenase, and AFP111, a derivative which contains an additional mutation in ptsG (a gene encoding an enzyme of the glucose phophotransferase system), accumulate significant levels of succinic acid (succinate) under anaerobic conditions. Plasmid pTrc99A-pyc, which expresses the Rhizobium etli pyruvate carboxylase enzyme, was introduced into both strains. We compared growth, substrate consumption, product formation, and activities of seven key enzymes (acetate kinase, fumarate reductase, glucokinase, isocitrate dehydrogenase, isocitrate lyase, phosphoenolpyruvate carboxylase, and pyruvate carboxylase) from glucose for NZN111, NZN111/pTrc99A-pyc, AFP111, and AFP111/pTrc99A-pyc under both exclusively anaerobic and dual-phase conditions (an aerobic growth phase followed by an anaerobic production phase). The highest succinate mass yield was attained with AFP111/pTrc99A-pyc under dual-phase conditions with low pyruvate carboxylase activity. Dual-phase conditions led to significant isocitrate lyase activity in both NZN111 and AFP111, while under exclusively anaerobic conditions, an absence of isocitrate lyase activity resulted in significant pyruvate accumulation. Enzyme assays indicated that under dual-phase conditions, carbon flows not only through the reductive arm of the tricarboxylic acid cycle for succinate generation but also through the glyoxylate shunt and thus provides the cells with metabolic flexibility in the formation of succinate. Significant glucokinase activity in AFP111 compared to NZN111 similarly permits increased metabolic flexibility of AFP111. The differences between the strains and the benefit of pyruvate carboxylase under both exclusively anaerobic and dual-phase conditions are discussed in light of the cellular constraint for a redox balance."}
{"_id": "9318dc55b867bc5b1da9ef0dfd6e54951744e8df", "title": "Cloning Cryptographic RFID", "text": "We develop a new, custom-built hardware for emulating contactless smartcards compliant to ISO 14443. The device is based on a modern low-cost microcontroller and can support basically all relevant (cryptographic) protocols used by contactless smartcards today, e.g., those based on AES or Triple-DES. As a proof of concept, we present a full emulation of Mifare Classic cards on the basis of our highly optimized implementation of the stream cipher Crypto1. The implementation enables the creation of exact clones of such cards, including the UID. We furthermore reverse-engineered the protocol of DESFire EV1 and realize the first emulation of DESFire and DESFire EV1 cards in the literature. We practically demonstrate the capabilities of our emulator by spoofing several real-world systems, e.g., creating a contactless payment card which allows an attacker to set the stored credit balance as desired and hence make an infinite amount of payments."}
{"_id": "fb5aec4e2d63de14c3780fc42818cfda4ec4145d", "title": "Spoken Language Recognition: From Fundamentals to Practice", "text": "Spoken language recognition refers to the automatic process through which we determine or verify the identity of the language spoken in a speech sample. We study a computational framework that allows such a decision to be made in a quantitative manner. In recent decades, we have made tremendous progress in spoken language recognition, which benefited from technological breakthroughs in related areas, such as signal processing, pattern recognition, cognitive science, and machine learning. In this paper, we attempt to provide an introductory tutorial on the fundamentals of the theory and the state-of-the-art solutions, from both phonological and computational aspects. We also give a comprehensive review of current trends and future research directions using the language recognition evaluation (LRE) formulated by the National Institute of Standards and Technology (NIST) as the case studies."}
{"_id": "15d70c9223a476d5e2e1ae87a0c143be0027c162", "title": "Entailment above the word level in distributional semantics", "text": "We introduce two ways to detect entailment using distributional semantic representations of phrases. Our first experiment shows that the entailment relation between adjective-noun constructions and their head nouns (big cat |= cat), once represented as semantic vector pairs, generalizes to lexical entailment among nouns (dog |= animal). Our second experiment shows that a classifier fed semantic vector pairs can similarly generalize the entailment relation among quantifier phrases (many dogs|=some dogs) to entailment involving unseen quantifiers (all cats|=several cats). Moreover, nominal and quantifier phrase entailment appears to be cued by different distributional correlates, as predicted by the type-based view of entailment in formal semantics."}
{"_id": "51c49cc4654dbce3c3de2919800da1e7477d88b3", "title": "Do Supervised Distributional Methods Really Learn Lexical Inference Relations?", "text": "Distributional representations of words have been recently used in supervised settings for recognizing lexical inference relations between word pairs, such as hypernymy and entailment. We investigate a collection of these state-of-the-art methods, and show that they do not actually learn a relation between two words. Instead, they learn an independent property of a single word in the pair: whether that word is a \u201cprototypical hypernym\u201d."}
{"_id": "68bce4eecf0a71f74216fb864d6859e4be87e036", "title": "Hierarchical Clustering Algorithms for Document Datasets", "text": "Fast and high-quality document clustering algorithms play an important role in providing intuitive navigation and browsing mechanisms by organizing large amounts of information into a small number of meaningful clusters. In particular, clustering algorithms that build meaningful hierarchies out of large document collections are ideal tools for their interactive visualization and exploration as they provide data-views that are consistent, predictable, and at different levels of granularity. This paper focuses on document clustering algorithms that build such hierarchical solutions and (i) presents a comprehensive study of partitional and agglomerative algorithms that use different criterion functions and merging schemes, and (ii) presents a new class of clustering algorithms called constrained agglomerative algorithms, which combine features from both partitional and agglomerative approaches that allows them to reduce the early-stage errors made by agglomerative methods and hence improve the quality of clustering solutions. The experimental evaluation shows that, contrary to the common belief, partitional algorithms always lead to better solutions than agglomerative algorithms; making them ideal for clustering large document collections due to not only their relatively low computational requirements, but also higher clustering quality. Furthermore, the constrained agglomerative methods consistently lead to better solutions than agglomerative methods alone and for many cases they outperform partitional methods, as well."}
{"_id": "68cd1c7c0651b116a83abab8a7a46a29975d3b5f", "title": "Exploring Various Knowledge in Relation Extraction", "text": "Extracting semantic relationships between entities is challenging. This paper investigates the incorporation of diverse lexical, syntactic and semantic knowledge in feature-based relation extraction using SVM. Our study illustrates that the base phrase chunking information is very effective for relation extraction and contributes to most of the performance improvement from syntactic aspect while additional information from full parsing gives limited further enhancement. This suggests that most of useful information in full parse trees for relation extraction is shallow and can be captured by chunking. We also demonstrate how semantic information such as WordNet and Name List, can be used in feature-based relation extraction to further improve the performance. Evaluation on the ACE corpus shows that effective incorporation of diverse features enables our system outperform previously best-reported systems on the 24 ACE relation subtypes and significantly outperforms tree kernel-based systems by over 20 in F-measure on the 5 ACE relation types."}
{"_id": "cac0a34484527fe2705c3da163824b67bc930886", "title": "Learning concepts and categories: is spacing the \"enemy of induction\"?", "text": "Inductive learning -- that is, learning a new concept or category by observing exemplars -- happens constantly, for example, when a baby learns a new word or a doctor classifies x-rays. What influence does the spacing of exemplars have on induction? Compared with massing, spacing enhances long-term recall, but we expected spacing to hamper induction by making the commonalities that define a concept or category less apparent. We asked participants to study multiple paintings by different artists, with a given artist's paintings presented consecutively (massed) or interleaved with other artists' paintings (spaced). We then tested induction by asking participants to indicate which studied artist (Experiments 1a and 1b) or whether any studied artist (Experiment 2) painted each of a series of new paintings. Surprisingly, induction profited from spacing, even though massing apparently created a sense of fluent learning: Participants rated massing as more effective than spacing, even after their own test performance had demonstrated the opposite."}
{"_id": "6976f290ae28d427227ee8183800b8ee339879fe", "title": "Using Lexical Resources for Irony and Sarcasm Classification", "text": "The paper presents a language dependent model for classification of statements into ironic and non-ironic. The model uses various language resources: morphological dictionaries, sentiment lexicon, lexicon of markers and a WordNet based ontology. This approach uses various features: antonymous pairs obtained using the reasoning rules over the Serbian WordNet ontology (R), antonymous pairs in which one member has positive sentiment polarity (PPR), polarity of positive sentiment words (PSP), ordered sequence of sentiment tags (OSA), Part-of-Speech tags of words (POS) and irony markers (M). The evaluation was performed on two collections of tweets that had been manually annotated according to irony. These collections of tweets as well as the used language resources are in the Serbian language (or one of closely related languages --Bosnian/Croatian/Montenegrin). The best accuracy of the developed classifier was achieved for irony with a set of 5 features -- (PPR, PSP, POS, OSA, M) -- acc = 86.1%, while for sarcasm the best results were achieved with the set (R, PSP, POS, OSA, M) -- acc = 72.8."}
{"_id": "8597da33970c02df333d9d6520884c1ba3f5fb17", "title": "Robust Non-Rigid Motion Tracking and Surface Reconstruction Using $L_0$ Regularization", "text": "We present a new motion tracking technique to robustly reconstruct non-rigid geometries and motions from a single view depth input recorded by a consumer depth sensor. The idea is based on the observation that most non-rigid motions (especially human-related motions) are intrinsically involved in articulate motion subspace. To take this advantage, we propose a novel $L_0$ based motion regularizer with an iterative solver that implicitly constrains local deformations with articulate structures, leading to reduced solution space and physical plausible deformations. The $L_0$ strategy is integrated into the available non-rigid motion tracking pipeline, and gradually extracts articulate joints information online with the tracking, which corrects the tracking errors in the results. The information of the articulate joints is used in the following tracking procedure to further improve the tracking accuracy and prevent tracking failures. Extensive experiments over complex human body motions with occlusions, facial and hand motions demonstrate that our approach substantially improves the robustness and accuracy in motion tracking."}
{"_id": "40480a9f88ab119548fa4a8ca15f5f18d73f157f", "title": "Grid interfaced solar PV based water pumping using brushless DC motor drive", "text": "This paper proposes a solar photovoltaic (PV) fed water pumping system driven by a brushless DC (BLDC) motor. A promising case of interruption in the water pumping due to the intermittency of PV power generation is resolved by using a power grid as an external power backup. The power is drawn from the grid in case the PV array is unable to meet the required power demand, otherwise the PV array is preferably used. A unidirectional power flow control for the same is developed and realized through a power factor corrected (PFC) boost converter. The proposed system and control enable a consumer to get a full volume of water delivery throughout the day and night regardless of the climatic condition. The utilization of motor-pump at best is thus made possible in addition to an enhanced reliability of the pumping system. The power quality standards required at the utility grid are met by the developed control. The MATLAB/Simulink based simulations and the performance analysis are carried out to demonstrate the applicability of the system."}
{"_id": "d745cf8c51032996b5fee6b19e1b5321c14797eb", "title": "Viewpoint Invariant Pedestrian Recognition with an Ensemble of Localized Features", "text": "Viewpoint invariant pedestrian recognition is an important yet under-addressed problem in computer vision. This is likely due to the difficulty in matching two objects with unknown viewpoint and pose. This paper presents a method of performing viewpoint invariant pedestrian recognition using an efficiently and intelligently designed object representation, the ensemble of localized features (ELF). Instead of designing a specific feature by hand to solve the problem, we define a feature space using our intuition about the problem and let a machine learning algorithm find the best representation. We show how both an object class specific representation and a discriminative recognition model can be learned using the AdaBoost algorithm. This approach allows many different kinds of simple features to be combined into a single similarity function. The method is evaluated using a viewpoint invariant pedestrian recognition dataset and the results are shown to be superior to all previous benchmarks for both recognition and reacquisition of pedestrians."}
{"_id": "3c56db4f2decb73d632738ff17d0f4cacbf56f57", "title": "OpenCV on TI \u2019 s DSP + ARM \u00ae platforms : Mitigating the challenges of porting OpenCV to embedded platforms Introduction", "text": "In today\u2019s advancing market, the growing performance and decreasing price of embedded processors are opening many doors for developers to design highly sophisticated solutions for different end applications. The complexities of these systems can create bottlenecks for developers in the form of longer development times, more complicated development environments and issues with application stability and quality. Developers can address these problems using sophisticated software packages such as OpenCV, but migrating this software to embedded platforms poses its own set of challenges. This paper will review how to mitigate some of these issues, including C++ implementation, memory constraints, floating-point support and opportunities to maximize performance using vendor-optimized libraries and integrated accelerators or co-processors. Finally, we will introduce a new effort by Texas Instruments (TI) to optimize vision systems by running OpenCV on the C6000TM digital signal processor (DSP) architecture. Benchmarks will show the advantage of using the DSP by comparing the performance of a DSP+ARM\u00ae system-on-chip (SoC) processor against an ARM-only device. OpenCV on TI\u2019s DSP+ARM platforms: Mitigating the challenges of porting OpenCV to embedded platforms Introduction OpenCV is a free and open-source computer vision library that offers a broad range of functionality under the permissive Berkeley Software Distribution (BSD) license. The library itself is written in C++ and is also usable through C or Python language applications. Thousands of developers use OpenCV to power their own specialized applications, making it the most widely-used library of its kind. The OpenCV project is under active development, with regular updates to eliminate bugs and add new functionality. The mainline development effort targets the x86 architecture and supports acceleration via Intel\u2019s proprietary Integrated Performance Primitives (IPP) library. A recent release also added support for graphics processing unit (GPU) acceleration using NVIDIA\u2019s Compute Unifi ed Device Architecture (CUDA) standard. OpenCV\u2019s greatest asset is the sheer breadth of algorithms included in its standard distribution. Figure 1 on page 2 shows an incomplete list of some of the key function categories included in OpenCV. These range from low-level image fi ltering and transformation to sophisticated feature analysis and machine learning functionality. A complete listing of every function and use case is beyond the scope of this article, but we will consider the unique requirements of developers in the embedded vision space. For these developers, OpenCV represents an attractively comprehensive toolbox of useful, well-tested algorithms that can serve as building blocks for their own specialized applications. The question then becomes whether or not OpenCV can be used directly in their embedded systems. Despite its original development focus for use with PC workstations, OpenCV can also be a useful tool for embedded development. There are vendor-specifi c libraries that offer OpenCV-like capabilities on various embedded systems, but few can match OpenCV\u2019s ubiquity in the computer vision fi eld or the sheer breadth of its included algorithms. 
It should come as no surprise that OpenCV has already been ported to the ARM\u00ae architecture, a popular CPU choice for embedded processors. It\u2019s certainly possible to cross-compile the OpenCV source code as-is and use the result with embedded devices, but memory constraints and other architectural considerations may pose a problem. This white paper will examine some of the specifi c obstacles that must be overcome for OpenCV to achieve acceptable performance on Joseph Coombs, Applications engineering, Texas Instruments Rahul Prabhu, Applications engineering, Texas Instruments W H I T E P A P E R"}
{"_id": "b33e9bc4a248fcbd73031728a1cef209629753a1", "title": "Model-based control of a 6-dof electrohydraulic Stewart \u2013 Gough platform", "text": "In this paper, a novel model-based controller for a six Degree-of-Freedom (dof) electrohydraulic Stewart\u2013Gough platform is developed. Dynamic models of low complexity are employed that describe the salient dynamics of the main electrohydraulic components. Rigid body equations of motion and hydraulics dynamics, including friction and servovalve models are used. The developed feedback controller uses the system dynamic and hydraulic model to yield servovalve currents, so that the error dynamics converge asymptotically to zero, independent of load variations. In this approach, force, pressure or acceleration feedback is not required. Simulations with typical desired trajectory inputs are presented and a good performance of the controller is obtained. The proposed methodology can be extended to electrohydraulic serial or closed-chain manipulators and simulators. 2007 Elsevier Ltd. All rights reserved."}
{"_id": "0a267d927cfae039cf0a9c995a59ded563344eb6", "title": "Model Selection Management Systems: The Next Frontier of Advanced Analytics", "text": "John Boyd recognized in the 1960's the importance of situation awareness for military operations and introduced the notion of the OODA loop (Observe, Orient, Decide, and Act). Today we realize that many applications have to deal with situation awareness: Customer Relationship Management, Human Capital Management, Supply Chain Management, patient care, power grid management, and cloud services management, as well as any IoT (Internet of Things) related application; the list seems to be endless. Situation awareness requires applications to support the management of data, knowledge, processes, and other services such as social networking in an integrated way. These applications additionally require high personalization as well as rapid and continuous evolution. They must provide a wide variety of operational and functional requirements, including real time processing.\n Handcrafting these applications is an almost impossible task requiring exhaustive resources for development and maintenance. Due to the resources and time involved in their development, these applications typically fall way short of the desired functionality, operational characteristics, and ease and speed of evolution. We - the authors - have developed a model enabling the development and maintenance of situation-aware applications in a declarative and therefore economical manner; we call this model KIDS - Knowledge Intensive Data-processing System."}
{"_id": "4fec8a97d6d87713c5c00f369fc1373fba4377e3", "title": "Training Sources 3 D Normalized Pose Space 2 D Normalized Pose Space KD-Tree Input Image 2 D Pose Estimation 3 D Pose Reconstruction", "text": "In this work we address the challenging problem of 3D human pose estimation from single images. Recent approaches learn deep neural networks to regress 3D pose directly from images. One major challenge for such methods, however, is the collection of training data. Specifically, collecting large amounts of training data containing unconstrained images annotated with accurate 3D poses is infeasible. We therefore propose to use two independent training sources. The first source consists of accurate 3D motion capture data, and the second source consists of unconstrained images with annotated 2D poses. To integrate both sources, we propose a dual-source approach that combines 2D pose estimation with efficient 3D pose retrieval. To this end, we first convert the motion capture data into a normalized 2D pose space, and separately learn a 2D pose estimation model from the image data. During inference, we estimate the 2D pose and efficiently retrieve the nearest 3D poses. We then jointly estimate a mapping from the 3D pose space to the image and reconstruct the 3D pose. We provide a comprehensive evaluation of the proposed method and experimentally demonstrate the effectiveness of our approach, even when the skeleton structures of the two sources differ substantially."}
{"_id": "a6ae8ef5180179a5f9e681f26f77db1e95de531e", "title": "Accurate and dense wide-baseline stereo matching using SW-POC", "text": "This paper proposes an accurate and dense wide-baseline stereo matching method using Scaled Window Phase-Only Correlation (SW-POC). The wide-baseline setting of the stereo camera can improve the accuracy of the 3D reconstruction compared with the short-baseline setting. However, it is difficult to find accurate and dense correspondence from wide-baseline stereo images due to its large perspective distortion. Addressing this problem, we employ the SW-POC, which is a correspondence matching method using 1D POC with the concept of Scale Window Matching (SWM). The use of SW-POC makes it possible to find the accurate and dense correspondence from a wide-baseline stereo image pair with low computational cost. We also apply the proposed method to 3D reconstruction using a moving and uncalibrated consumer digital camera."}
{"_id": "3590129d886957c81ffcf84d5aecbbbf99f98c71", "title": "Assessment of chronic pain coping strategies", "text": "Introduction. We made an adaptation of the Coping Strategies Questionnaire (CSQ) to the Spanish population. This measure, the most used in its scope, was developed by Rosenstiel and Keefe in 1983. Method. 205 participants coming from Primary Health Care and pain clinics made up the sample. More than half suffered migraine and chronic tension-type headache; the rest, fibromyalgia, low back pain, arthrosis or arthritis. Results. Factor analyses explained 59% of the total variance, on an 8-factor structure that converged into a 2-factor structure. In the 8-factor solution the novelty was the diversification of mental-non-mental distraction strategies, and religious-non-religious hope strategies. In the 2-factor solution the novelty was the grouping according to the efficacy of the coping. All the CSQ factors showed inner consistency and construct validity. Thus, unadaptive coping strategies were related to negative, anxious and depressed self-talk, related to lack of control and perceived self-efficacy, and related to many pain behaviors. On the contrary it happened with adaptive coping strategies. In addition, the diagnosis of pain was related to the utilization and effectiveness of coping strategies. Conclussions. CSQ is shown to be a reliable and valid measure of coping strategies in chronic pain in the Spanish population, showing the difference between theoretical and empirical factor structures again."}
{"_id": "864bd331b2c9effec5b724356ec8150f372d3372", "title": "Heart rate variability is associated with emotion recognition: direct evidence for a relationship between the autonomic nervous system and social cognition.", "text": "It is well established that heart rate variability (HRV) plays an important role in social communication. Polyvagal theory suggests that HRV may provide a sensitive marker of one's ability to respond and recognize social cues. The aim of the present study was to directly test this hypothesis. Resting-state HRV was collected and performance on the Reading the Mind in the Eyes Test was assessed in 65 volunteers. HRV was positively associated with performance on this emotion recognition task confirming our hypothesis and these findings were retained after controlling for a variety of confounding variables known to influence HRV - sex, BMI, smoking habits, physical activity levels, depression, anxiety, and stress. Our data suggests that increased HRV may provide a novel marker of one's ability to recognize emotions in humans. Implications for understanding the biological basis of emotion recognition, and social impairment in humans are discussed."}
{"_id": "0c53aed7b6e94b2dfeae6d3b593e1f01059f81a0", "title": "Medusa: a programming framework for crowd-sensing applications", "text": "The ubiquity of smartphones and their on-board sensing capabilities motivates crowd-sensing, a capability that harnesses the power of crowds to collect sensor data from a large number of mobile phone users. Unlike previous work on wireless sensing, crowd-sensing poses several novel requirements: support for humans-in-the-loop to trigger sensing actions or review results, the need for incentives, as well as privacy and security. Beyond existing crowd-sourcing systems, crowd-sensing exploits sensing and processing capabilities of mobile devices. In this paper, we design and implement Medusa, a novel programming framework for crowd-sensing that satisfies these requirements. Medusa provides high-level abstractions for specifying the steps required to complete a crowd-sensing task, and employs a distributed runtime system that coordinates the execution of these tasks between smartphones and a cluster on the cloud. We have implemented ten crowd-sensing tasks on a prototype of Medusa. We find that Medusa task descriptions are two orders of magnitude smaller than standalone systems required to implement those crowd-sensing tasks, and the runtime has low overhead and is robust to dynamics and resource attacks."}
{"_id": "cdc1c352bc847554de3abd3728d1f2311a5a9766", "title": "The Cultural Evolution of Religion", "text": "After almost a century of dwelling in two \u201cnon-overlapping magisteria,\u201d as Steven Jay Gould once put it, religion and science are coming together again. Long the exclusive province of the humanities and left outside of the mainstream of psychology and the behavioural sciences, religion is gaining scientific attention at a rapid pace. The dismantling of the taboos that have kept religion out of the scientific spotlight will take time (Dennett, 2006). Nevertheless, these are exciting times, and we can now safely say that religion\u2014to paraphrase Chomsky about language\u2014has been upgraded from scientific mystery to scientific puzzle (Boyer, 2001). This growing scientific interest promises to offer a naturalistic account for a deeply affecting aspect of human lives that is widespread in all known cultures in the world."}
{"_id": "08913213ecba1eeae04af8c30ea2a9b6947d6a7b", "title": "Probabilistic routing in intermittently connected networks", "text": "We consider the problem of routing in intermittently connected networks. In such networks there is no guarantee that a fully connected path between source and destination exist at any time, rendering traditional routing protocols unable to deliver messages between hosts. We propose a probabilistic routing protocol for such networks."}
{"_id": "8a7fd9ab5b0ea67117ffdf3bb54627e622fe7613", "title": "DIAC+: a Professional Diacritics Recovering System", "text": "In languages that use diacritical characters, if these special signs are stripped-off from a word, the resulted string of characters may not exist in the language, and therefore its normative form is, in general, easy to recover. However, this is not always the case, as presence or absence of a diacritical sign attached to a base letter of a word which exists in both variants, may change its grammatical properties or even the meaning, making the recovery of the missing diacritics a difficult task, not only for a program but sometimes even for a human reader. We describe and evaluate an accurate knowledge-based system for automatic recovering the missing diacritics in MSOffice documents written in Romanian. For the rare cases when the system is not able to reliably make a decision, it either provides the user a list of words with their recovery suggestions, or probabilistically choose one of the possible changes, but leaves a trace (a highlighted comment) on each word the modification of which was uncertain."}
{"_id": "92f4bca3f5f54f6180ae4b421b62f4253ba2a30a", "title": "A Performance Comparison of Open-Source Stream Processing Platforms", "text": "Distributed stream processing platforms is a new class of real-time monitoring systems that analyze and extracts knowledge from large continuous streams of data. This type of systems is crucial for providing high throughput and low latency required by Big Data or Internet of Things monitoring applications. This paper describes and analyzes three main open-source distributed stream- processing platforms: Storm Flink, and Spark Streaming. We analyze the system architectures and we compare their main features. We carry out two experiments concerning anomaly detection on network traffic to evaluate the throughput efficiency and the resilience to node failures. Results show that the performance of native stream processing systems, Storm and Flink, is up to 15 times higher than the micro-batch processing system, Spark Streaming. On the other hand, Spark Streaming is more robust to node failures and provides recovery without losses."}
{"_id": "f5bbdfe37f5a0c9728c9099d85f0799e67e3d07d", "title": "Challenges during the transition to Agile Methodologies : A Holistic Overview", "text": ""}
{"_id": "ebc2ceba03a0f561dd2ab27c97b641c649c48a14", "title": "An Introduction to Active Shape Models \u2217", "text": "Biomedical images usually contain complex objects, which will vary in appearance significantly from one image to another. Attempting to measure or detect the presence of particular structures in such images can be a daunting task. The inherent variability will thwart naive schemes. However, by using models which can cope with the variability it is possible to successfully analyse complex images. Here we will consider a number of methods where the model represents the expected shape and local grey-level structure of a target object in an image. Model-based methods make use of a prior model of what is expected in the image, and typically attempt to find the best match of the model to the data in a new image. Having matched the model, one can then make measurements or test whether the target is actually present. This approach is a \u2018top-down\u2019 strategy, and differs significantly from \u2018bottomup\u2019 methods. In the latter the image data is examined at a low level, looking for local structures such as edges or regions, which are assembled into groups in an attempt to identify objects of interest. Without a global model of what to expect, this approach is difficult and prone to failure. A wide variety of model based approaches have been explored (see the review below). This chapter will concentrate on a statistical approach, in which a model is built from analysing the appearance of a set of labelled examples. Where structures vary in shape or texture, it is possible to learn what are plausible variations and what are not. A new image can be interpreted by finding the best plausible match of the model to the image data. The advantages of such a method are that"}
{"_id": "4a406aa843e385105370cf7a62b2e543a8fa1a1d", "title": "Gated Recurrent Unit (GRU) for Emotion Classification from Noisy Speech", "text": "Despite the enormous interest in emotion classification from speech, the impact of noise on emotion classification is not well understood. This is important because, due to the tremendous advancement of the smartphone technology, it can be a powerful medium for speech emotion recognition in the outside laboratory natural environment, which is likely to incorporate background noise in the speech. We capitalize on the current breakthrough of Recurrent Neural Network (RNN) and seek to investigate its performance for emotion classification from noisy speech. We particularly focus on the recently proposed Gated Recurrent Unit (GRU), which is yet to be explored for emotion recognition from speech. Experiments conducted with speech compounded with eight different types of noises reveal that GRU incurs an 18.16% smaller run-time while performing quite comparably to the Long Short-Term Memory (LSTM), which is the most popular Recurrent Neural Network proposed to date. This result is promising for any embedded platform in general and will initiate further studies to utilize GRU to its full potential for emotion recognition on smartphones."}
{"_id": "e8d6716d98ec675e0c9826a7386e8cac570693e0", "title": "Class-Conditional Superresolution with GANs", "text": "\u25cf Super resolution enhances the resolution of a low-res image. \u25cf Many deep learning models today work fairly well with an upscaling factor of 4x but use only the downscaled image as input. \u25cf We demonstrate two class-conditional GAN models that outperform state-of-the-art GANs for superresolution: \u25cb Explicitly passes in a known class parameter as input \u25cb Uses a separately trained classifier to inform a ResNet about character attributes \u25cf Both improve the quality of the super-resolved output."}
{"_id": "2c424f21607ff6c92e640bfe3da9ff105c08fac4", "title": "Learning Structured Output Representation using Deep Conditional Generative Models", "text": "Supervised deep learning has been successfully applied to many recognition problems. Although it can approximate a complex many-to-one function well when a large amount of training data is provided, it is still challenging to model complex structured output representations that effectively perform probabilistic inference and make diverse predictions. In this work, we develop a deep conditional generative model for structured output prediction using Gaussian latent variables. The model is trained efficiently in the framework of stochastic gradient variational Bayes, and allows for fast prediction using stochastic feed-forward inference. In addition, we provide novel strategies to build robust structured prediction algorithms, such as input noise-injection and multi-scale prediction objective at training. In experiments, we demonstrate the effectiveness of our proposed algorithm in comparison to the deterministic deep neural network counterparts in generating diverse but realistic structured output predictions using stochastic inference. Furthermore, the proposed training methods are complimentary, which leads to strong pixel-level object segmentation and semantic labeling performance on Caltech-UCSD Birds 200 and the subset of Labeled Faces in the Wild dataset."}
{"_id": "7f57e9939560562727344c1c987416285ef76cda", "title": "Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition", "text": "Machine learning is enabling a myriad innovations, including new algorithms for cancer diagnosis and self-driving cars. The broad use of machine learning makes it important to understand the extent to which machine-learning algorithms are subject to attack, particularly when used in applications where physical security or safety is at risk.\n In this paper, we focus on facial biometric systems, which are widely used in surveillance and access control. We define and investigate a novel class of attacks: attacks that are physically realizable and inconspicuous, and allow an attacker to evade recognition or impersonate another individual. We develop a systematic method to automatically generate such attacks, which are realized through printing a pair of eyeglass frames. When worn by the attacker whose image is supplied to a state-of-the-art face-recognition algorithm, the eyeglasses allow her to evade being recognized or to impersonate another individual. Our investigation focuses on white-box face-recognition systems, but we also demonstrate how similar techniques can be used in black-box scenarios, as well as to avoid face detection."}
{"_id": "8e4808e71c9b9f852dc9558d7ef41566639137f3", "title": "Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition", "text": "In this paper we show that misclassification attacks against face-recognition systems based on deep neural networks (DNNs) are more dangerous than previously demonstrated, even in contexts where the adversary can manipulate only her physical appearance (versus directly manipulating the image input to the DNN). Specifically, we show how to create eyeglasses that, when worn, can succeed in targeted (impersonation) or untargeted (dodging) attacks while improving on previous work in one or more of three facets: (i) inconspicuousness to onlooking observers, which we test through a user study; (ii) robustness of the attack against proposed defenses; and (iii) scalability in the sense of decoupling eyeglass creation from the subject who will wear them, i.e., by creating \u201cuniversal\u201d sets of eyeglasses that facilitate misclassification. Central to these improvements are adversarial generative nets, a method we propose to generate physically realizable attack artifacts (here, eyeglasses) automatically."}
{"_id": "d87422549a4e7596c6b6b59458d1a52f69d9fa33", "title": "Learning with a Strong Adversary", "text": "The robustness of neural networks to intended perturbations has recently attracted significant attention. In this paper, we propose a new method, learning with a strong adversary, that learns robust classifiers from supervised data by generating adversarial examples as an intermediate step. A new and simple way of finding adversarial examples is presented that is empirically stronger than existing approaches in terms of the accuracy reduction as a function of perturbation magnitude. Experimental results demonstrate that resulting learning method greatly improves the robustness of the classification models produced."}
{"_id": "03563dfaf4d2cfa397d3c12d742e9669f4e95bab", "title": "Deep learning from temporal coherence in video", "text": "This work proposes a learning method for deep architectures that takes advantage of sequential data, in particular from the temporal coherence that naturally exists in unlabeled video recordings. That is, two successive frames are likely to contain the same object or objects. This coherence is used as a supervisory signal over the unlabeled data, and is used to improve the performance on a supervised task of interest. We demonstrate the effectiveness of this method on some pose invariant object and face recognition tasks."}
{"_id": "a6ff1a8b3f8e36b595a021dd57814852144d9793", "title": "Software Re-Modularization Based on Structural and Semantic Metrics", "text": "The structure of a software system has a major impact on its maintainability. To improve maintainability, software systems are usually organized into subsystems using the constructs of packages or modules. However, during software evolution the structure of the system undergoes continuous modifications, drifting away from its original design, often reducing its quality. In this paper we propose an approach for helping maintainers to improve the quality of software modularization. The proposed approach analyzes the (structural and semantic) relationships between classes in a package identifying chains of strongly related classes. The identified chains are used to define new packages with higher cohesion than the original package. The proposed approach has been empirical evaluated through a case study. The context of the study is represented by an open source system, JHotDraw, and two software systems developed by teams of students at the University of Salerno. The analysis of the results reveals that the proposed approach generates meaningful re-modularization of the studied systems, which can lead to higher quality."}
{"_id": "20f7078b8690a23b8f9ed955d568754bf1218176", "title": "Towards a Welsh Semantic Annotation System", "text": "Automatic semantic annotation of natural language data is an important task in Natural Language Processing, and a variety of semantic taggers have been developed for this task, particularly for English. However, for many languages, particularly for low-resource languages, such tools are yet to be developed. In this paper, we report on the development of an automatic Welsh semantic annotation tool (named CySemTagger) in the CorCenCC Project, which will facilitate semantic-level analysis of Welsh language data on a large scale. Based on Lancaster\u2019s USAS semantic tagger framework, this tool tags words in Welsh texts with semantic tags from a semantic classification scheme, and is designed to be compatible with multiple Welsh POS taggers and POS tagsets by mapping different tagsets into a core shared POS tagset that is used internally by CySemTagger. Our initial evaluation shows that the tagger can cover up to 91.78% of words in Welsh text. This tagger is under continuous development, and will provide a critical tool for Welsh language corpus and information processing at semantic level."}
{"_id": "f0bfe867f75bf1399eaf7fd4e880bd27b7a4cc14", "title": "Two (Note) Heads Are Better Than One: Pen-Based Multimodal Interaction with Music Scores", "text": "Digitizing early music sources requires new ways of dealing with musical documents. Assuming that current technologies cannot guarantee a perfect automatic transcription, our intention is to develop an interactive system in which user and software collaborate to complete the task. Since conventional score post-editing might be tedious, the user is allowed to interact using an electronic pen. Although this provides a more ergonomic interface, this interaction must be decoded as well. In our framework, the user traces the symbols using the electronic pen over a digital surface, which provides both the underlying image (offline data) and the drawing made by the e-pen (online data) to improve classification. Applying this methodology over 70 scores of the target musical archive, a dataset of 10 230 bimodal samples of 30 different symbols was obtained and made available for research purposes. This paper presents experimental results on classification over this dataset, in which symbols are recognized by combining the two modalities. This combination of modes has demonstrated its good performance, decreasing the error rate of using each modality separately and achieving an almost error-free performance."}
{"_id": "cd70248fa698a7dda81aea7e3b7fb1e62283504b", "title": "Efficient Event-Driven Simulation of Large Networks of Spiking Neurons and Dynamical Synapses", "text": "A simulation procedure is described for making feasible large-scale simulations of recurrent neural networks of spiking neurons and plastic synapses. The procedure is applicable if the dynamic variables of both neurons and synapses evolve deterministically between any two successive spikes. Spikes introduce jumps in these variables, and since spike trains are typically noisy, spikes introduce stochasticity into both dynamics. Since all events in the simulation are guided by the arrival of spikes, at neurons or synapses, we name this procedure event-driven. The procedure is described in detail, and its logic and performance are compared with conventional (synchronous) simulations. The main impact of the new approach is a drastic reduction of the computational load incurred upon introduction of dynamic synaptic efficacies, which vary organically as a function of the activities of the pre- and postsynaptic neurons. In fact, the computational load per neuron in the presence of the synaptic dynamics grows linearly with the number of neurons and is only about 6 more than the load with fixed synapses. Even the latter is handled quite efficiently by the algorithm. We illustrate the operation of the algorithm in a specific case with integrate-and-fire neurons and specific spike-driven synaptic dynamics. Both dynamical elements have been found to be naturally implementable in VLSI. This network is simulated to show the effects on the synaptic structure of the presentation of stimuli, as well as the stability of the generated matrix to the neural activity it induces."}
{"_id": "3dfddb48dbf1cb13aa74dc1764008008578740b8", "title": "The Human Brain in Numbers: A Linearly Scaled-up Primate Brain", "text": "The human brain has often been viewed as outstanding among mammalian brains: the most cognitively able, the largest-than-expected from body size, endowed with an overdeveloped cerebral cortex that represents over 80% of brain mass, and purportedly containing 100 billion neurons and 10x more glial cells. Such uniqueness was seemingly necessary to justify the superior cognitive abilities of humans over larger-brained mammals such as elephants and whales. However, our recent studies using a novel method to determine the cellular composition of the brain of humans and other primates as well as of rodents and insectivores show that, since different cellular scaling rules apply to the brains within these orders, brain size can no longer be considered a proxy for the number of neurons in the brain. These studies also showed that the human brain is not exceptional in its cellular composition, as it was found to contain as many neuronal and non-neuronal cells as would be expected of a primate brain of its size. Additionally, the so-called overdeveloped human cerebral cortex holds only 19% of all brain neurons, a fraction that is similar to that found in other mammals. In what regards absolute numbers of neurons, however, the human brain does have two advantages compared to other mammalian brains: compared to rodents, and probably to whales and elephants as well, it is built according to the very economical, space-saving scaling rules that apply to other primates; and, among economically built primate brains, it is the largest, hence containing the most neurons. These findings argue in favor of a view of cognitive abilities that is centered on absolute numbers of neurons, rather than on body size or encephalization, and call for a re-examination of several concepts related to the exceptionality of the human brain."}
{"_id": "dbf09ff33daf807f5063e526c15f56ab5d120813", "title": "Detecting Review Spammer Groups via Bipartite Graph Projection", "text": "Online product reviews play an important role in E-commerce websites because most customers read and rely on them when making purchases. For the sake of profit or reputation, review spammers deliberately write fake reviews to promote or demote target products, some even fraudulently work in groups to try and control the sentiment about a product. To detect such spammer groups, previous work exploits frequent itemset mining (FIM) to generate candidate spammer groups, which can only find tightly coupled groups, i.e. each reviewer in the group reviews every target product. In this paper, we present the loose spammer group detection problem, i.e. each group member is not required to review every target product. We solve this problem using bipartite graph projection. We propose a set of group spam indicators to measure the spamicity of a loose spammer group, and design a novel algorithm to identify highly suspicious loose spammer groups in a divide and conquer manner. Experimental results show that our method not only can find loose spammer groups with high precision and recall, but also can generate more meaningful candidate spammer groups than FIM, thus it can also be used as an alternative preprocessing tool for existing FIM-based approaches."}
{"_id": "cf384e282578f62377861ebca0cfd477bed7eaef", "title": "WAIS-IV visual puzzles in a mixed clinical sample.", "text": "Little is known about which cognitive functions underlie the new Visual Puzzles subtest of the Wechsler Adult Intelligence Scale - Fourth Edition (WAIS-IV). The purpose of this study was to investigate relationships between Visual Puzzles and common neuropsychological measures in a mixed clinical sample. A total of 44 veterans (75% men) were administered the WAIS-IV as part of a neuropsychological evaluation. Average age was 47.4 years (SD = 11.8), and average education was 13.8 years (SD = 2.3). Correlations were conducted to examine relationships between Visual Puzzles, demographic variables, and neuropsychological measures. Hierarchical regression analysis was used to determine which measures contributed the most variance to Visual Puzzles. Visual Puzzles correlated significantly with measures of visuospatial reasoning, verbal learning and recall, mental flexibility, processing speed, and naming, which accounted for 50% of the variance in Visual Puzzles performance. The results indicate that Visual Puzzles is not a pure measure of visuoperceptual reasoning, at least in a mixed clinical sample, because memory, mental flexibility, processing speed, and language abilities also contribute to successful performance of the task. Thus it may be important to consider other aspects of cognitive functioning when interpreting Visual Puzzles performance."}
{"_id": "7d553ced76a668120cf524a0b3e633edfea426df", "title": "The Great Grevy \u2019 s Rally : The Need , Methods , Findings , Implications and Next Steps", "text": ""}
{"_id": "8c2498d8ad7aae2d03c32b0c3e5f67131b8dc3ff", "title": "Recent and robust query auto-completion", "text": "Query auto-completion (QAC) is a common interactive feature that assists users in formulating queries by providing completion suggestions as they type. In order for QAC to minimise the user's cognitive and physical effort, it must: (i) suggest the user's intended query after minimal input keystrokes, and (ii) rank the user's intended query highly in completion suggestions. Typically, QAC approaches rank completion suggestions by their past popularity. Accordingly, QAC is usually very effective for previously seen and consistently popular queries. Users are increasingly turning to search engines to find out about unpredictable emerging and ongoing events and phenomena, often using previously unseen or unpopular queries. Consequently, QAC must be both robust and time-sensitive -- that is, able to sufficiently rank both consistently and recently popular queries in completion suggestions. To address this trade-off, we propose several practical completion suggestion ranking approaches, including: (i) a sliding window of query popularity evidence from the past 2-28 days, (ii) the query popularity distribution in the last N queries observed with a given prefix, and (iii) short-range query popularity prediction based on recently observed trends. Using real-time simulation experiments, we extensively investigated the parameters necessary to maximise QAC effectiveness for three openly available query log datasets with prefixes of 2-5 characters: MSN and AOL (both English), and Sogou 2008 (Chinese). Optimal parameters vary for each query log, capturing the differing temporal dynamics and querying distributions. Results demonstrate consistent and language-independent improvements of up to 9.2% over a non-temporal QAC baseline for all query logs with prefix lengths of 2-3 characters. This work is an important step towards more effective QAC approaches."}
{"_id": "5d99d520410efec67a515e3bcc20ce8868e13b6d", "title": "How Does Disagreement Benefit Co-teaching?", "text": "Learning with noisy labels is one of the most important question in weakly-supervised learning domain. Classical approaches focus on adding the regularization or estimating the noise transition matrix. However, either a regularization bias is permanently introduced, or the noise transition matrix is hard to be estimated accurately. In this paper, following a novel path to train on small-loss samples, we propose a robust learning paradigm called Co-teaching+. This paradigm naturally bridges \u201cUpdate by Disagreement\u201d strategy with Co-teaching that trains two deep neural networks, thus consists of disagreement-update step and cross-update step. In disagreement-update step, two networks predicts all data first, and feeds forward prediction disagreement data only. Then, in cross-update step, each network selects its small-loss data from such disagreement data, but back propagates the small-loss data by its peer network and updates itself parameters. Empirical results on noisy versions of MNIST, CIFAR10 and NEWS demonstrate that Co-teaching+ is much superior to the state-of-the-art methods in the robustness of trained deep models."}
{"_id": "45b2ff4c2dc8d944b05e9b55d55f2a9245bef767", "title": "Reinforcement Learning for Humanoid Robotics", "text": "Reinforcement learning offers one of the most general framework to take traditional robotics towards true autonomy and versatility. However, applying reinforcement learning to high dimensional movement systems like humanoid robots remains an unsolved problem. In this paper, we discuss different approaches of reinforcement learning in terms of their applicability in humanoid robotics. Methods can be coarsely classified into three different categories, i.e., greedy methods, \u2018vanilla\u2019 policy gradient methods, and natural gradient methods. We discuss that greedy methods are not likely to scale into the domain humanoid robotics as they are problematic when used with function approximation. \u2018Vanilla\u2019 policy gradient methods on the other hand have been successfully applied on real-world robots including at least one humanoid robot [3]. We demonstrate that these methods can be significantly improved using the natural policy gradient instead of the regular policy gradient. A derivation of the natural policy gradient is provided, proving that the average policy gradient of Kakade [10] is indeed the true natural gradient. A general algorithm for estimating the natural gradient, the Natural Actor-Critic algorithm, is introduced. This algorithm converges to the nearest local minimum of the cost function with respect to the Fisher information metric under suitable conditions. The algorithm outperforms non-natural policy gradients by far in a cart-pole balancing evaluation, and for learning nonlinear dynamic motor primitives for humanoid robot control. It offers a promising route for the development of reinforcement learning for truly high-dimensionally continuous state-action systems."}
{"_id": "51a7ff5e15b724c519f00d666852833d56a0e851", "title": "Experimental Analysis of a UAV-Based Wireless Power Transfer Localization System", "text": "Sensors deployed in remote locations provide unprecedented amounts of data, but powering these sensors over long periods remains a challenge. In this paper, we develop and present a UAV-based wireless power transfer system. We discuss design considerations and present our system that allows a UAV to fly to remote locations to charge hard to access sensors. We analyze the impact of different materials on the wireless power transfer system. Since GPS does not provide sufficient accuracy, we develop and experimentally characterize a relative localization algorithm based on sensing the magnetic field of the power transfer system and optical flow that allows the UAV to localize the sensor with an average error of 15 cm to enable the transfer of on average 4.2 W . These results overcome some of the practical challenges associated with wirelessly charging sensors with a UAV and show that UAVs with wireless power transfer systems can greatly extend the life of remotely deployed sensors."}
{"_id": "909d7c1399a7c6c018f7f474aa336444457ba2bb", "title": "Cruciform: Solving Crosswords with Natural Language Processing", "text": "Crossword puzzles are popular word games that require not only a large vocabulary, but also a broad knowledge of topics. Answering each clue is a natural language task on its own as many clues contain nuances, puns, or counter-intuitive word definitions. Additionally, it can be extremely difficult to ascertain definitive answers without the constraints of the crossword grid itself. This task is challenging for both humans and computers. We describe here a new crossword solving system, Cruciform. We employ a group of natural language components, each of which returns a list of candidate words with scores when given a clue. These lists are used in conjunction with the fill intersections in the puzzle grid to formulate a constraint satisfaction problem, in a manner similar to the one used in the Dr. Fill system[4]. We describe the results of several of our experiments with the system."}
{"_id": "8cc25cb9dc28d85681853d7fd53806e9a0901a30", "title": "A Survey on Application of Bio-Inspired Algorithms", "text": "The algorithms that are inspired by the principles of natural biological evolution and distributed collective behaviour of social colonies have shown excellence in dealing with complex optimization problems and are becoming more popular nowadays. This paper surveys the recent advances in biologically inspired swarm optimization methods, including ant colony optimization algorithm, particle swarm optimization algorithm, artificial bee colony algorithm and their hybridizations, which are applied in various fields. KeywordsBiologically inspired algorithms, Swarm Intelligence, application of Bio-inspired algorithms."}
{"_id": "3c0cfcfe6eaa093ce14c2a645ba448292635b196", "title": "View-based and modular eigenspaces for face recognition", "text": "In this work we describe experiments with eigenfaces for recognition and interactive search in a large-scale face database. Accurate visual recognition is demonstrated using a database of O(10) faces. The problem of recognition under general viewing orientation is also examined. A view-based multiple-observer eigenspace technique is proposed for use in face recognition under variable pose. In addition, a modular eigenspace description technique is used which incorporates salient features such as the eyes, nose and mouth, in an eigenfeature layer. This modular representation yields higher recognition rates as well as a more robust framework for face recognition. An automatic feature extraction technique using feature eigentemplates is also demonstrated."}
{"_id": "cfce6745b2456105a71b6538a6a3ea812c5e9148", "title": "Efficient mapping of quantum circuits to the IBM QX architectures", "text": "In March 2017, IBM launched the project IBM Q with the goal to provide access to quantum computers for a broad audience. This allowed users to conduct quantum experiments on a 5-qubit and, since June 2017, also on a 16-qubit quantum computer (called IBM QX2 and IBM QX3, respectively). In order to use these, the desired quantum functionality (e.g. provided in terms of a quantum circuit) has to properly be mapped so that the underlying physical constraints are satisfied \u2014 a complex task. This demands for solutions to automatically and efficiently conduct this mapping process. In this paper, we propose such an approach which satisfies all constraints given by the architecture and, at the same time, aims to keep the overhead in terms of additionally required quantum gates minimal. The proposed approach is generic and can easily be configured for future architectures. Experimental evaluations show that the proposed approach clearly outperforms IBM's own mapping solution with respect to runtime as well as resulting costs."}
{"_id": "42fddb742959e80cefe3932475a3c2cad97cba8d", "title": "Variational methods for the Dirichlet process", "text": "Variational inference methods, including mean field methods and loopy belief propagation, have been widely used for approximate probabilistic inference in graphical models. While often less accurate than MCMC, variational methods provide a fast deterministic approximation to marginal and conditional probabilities. Such approximations can be particularly useful in high dimensional problems where sampling methods are too slow to be effective. A limitation of current methods, however, is that they are restricted to parametric probabilistic models. MCMC does not have such a limitation; indeed, MCMC samplers have been developed for the Dirichlet process (DP), a nonparametric distribution on distributions (Ferguson, 1973) that is the cornerstone of Bayesian nonparametric statistics (Escobar & West, 1995; Neal, 2000). In this paper, we develop a mean-field variational approach to approximate inference for the Dirichlet process, where the approximate posterior is based on the truncated stick-breaking construction (Ishwaran & James, 2001). We compare our approach to DP samplers for Gaussian DP mixture models."}
{"_id": "d4f28fb8a0e8593ec969158156a38c8c0f7e28a9", "title": "A polarimetric 76\u201379 GHz radar-frontend for target classification in automotive use", "text": "Autonomous driving requires precise detection of the environment and correct classification of all targets. W-band radar technology provides high accuracy object detection in all weather conditions, both day and night. Target classification can be improved by evaluating polarimetric scattering information, where it is in particular promising to work with cicularly polarized waves. Both circular polarizations, left-handed circular (LHC) and right-handed circular (RHC), are necessary with a high cross polarization discrimination (XPD). A particularly high XPD can be achieved by hollow waveguide technology. For that purpose, a radar-frontend with corrugated horn antennas has been designed, manufactured and tested. LHC and RHC data is then entered into an algorithm which takes into account switched transmitter technology, fast-ramp frequency-modulated continuous-wave (FMCW) and digital beamforming. Polarimetric radar images are generated, allowing for discrimination and classification of different targets."}
{"_id": "156b252f1891e5f89cb10d7cd02a7e02025cffa9", "title": "Types of minority class examples and their influence on learning classifiers from imbalanced data", "text": "Many real-world applications reveal difficulties in learning classifiers from imbalanced data. Although several methods for improving classifiers have been introduced, the identification of conditions for the efficient use of the particular method is still an open research problem. It is also worth to study the nature of imbalanced data, characteristics of the minority class distribution and their influence on classification performance. However, current studies on imbalanced data difficulty factors have been mainly done with artificial datasets and their conclusions are not easily applicable to the real-world problems, also because the methods for their identification are not sufficiently developed. In our paper, we capture difficulties of class distribution in real datasets by considering four types of minority class examples: safe, borderline, rare and outliers. First, we confirm their occurrence in real data by exploring multidimensional visualizations of selected datasets. Then, we introduce a method for an identification of these types of examples, which is based on analyzing a class distribution in a local neighbourhood of the considered example. Two ways of modeling this neighbourhood are presented: with k-nearest examples and with kernel functions. Experiments with artificial datasets show that these methods are able to re-discover simulated types of examples. Next contributions of this paper include carrying out a comprehensive experimental study with 26 real world imbalanced datasets, where (1) we identify new data characteristics basing on the analysis of types of minority examples; (2) we demonstrate that considering the results of this analysis allow to differentiate classification performance of popular classifiers and pre-processing methods and to evaluate their areas of competence. Finally, we highlight directions of exploiting the results of our analysis for developing new algorithms for learning classifiers and pre-processing methods."}
{"_id": "c5c3943cc70e3b6509a059d0631115a0852e44a7", "title": "Gravitational Search Algorithm and Recent Trends", "text": "A R T I C L E I N F O A B S T R A C T Article history: Received 20 March 2017 Received in revised form 22 March 2017 Accepted 28 March 2017 Available online 31March 2017 The inability of conventional optimization algorithms to handle exponential increase in the size of the search space has paved way towards capturing the behavior of natural organisms for solving optimization problems. Nature Inspired Algorithms have mostly been associated with the swarm behavior of organisms. A rather new idea in this respect was proposed by Rashedi et al in 2009 by the name \u2018Gravitational Search Algorithm\u2019. The basis of their proposal, as clear from the name, is the Newtonian gravity and interactions between populations of masses. The algorithm has been widely used since then in many applications of science. Since its proposal, researches have been continuously headed in extending GSA for different scenarios and applications. The working of GSA and some such popular and recent proposals in the direction of extending GSA are discussed in this paper."}
{"_id": "c8b3f30e06669605ef4bd624842456d97ab6bdab", "title": "EFFICIENCY ENHANCEMENT OF BASE STATION POWER AMPLIFIERS USING DOHERTY TECHNIQUE", "text": "The power amplifiers are typically the most power-consuming block in wireless communication systems. Spectrum is expensive, and newer technologies demand transmission of maximum amount of data with minimum spectrum usage. This requires sophisticated modulation techniques, leading to wide, dynamic signals that require linear amplification. Although linear amplification is achievable, it always comes at the expense of efficiency. Most of the modern wireless applications such as WCDMA use non-constant envelope modulation techniques with a high peak to average ratio. Linearity being a critical issue, power amplifiers implemented in such applications are forced to operate at a backed off region from saturation. Therefore, in order to overcome the battery lifetime limitation, a design of a high efficiency power amplifier that can maintain the efficiency for a wider range of radio frequency input signal is the obvious solution. A new technique that improves the drain efficiency of a linear power amplifier such as Class A or AB, for a wider range of output power, has been investigated in this research. The Doherty technique consists of two amplifiers in parallel; in such a way that the combination enhances the power added efficiency of the main amplifier at 6dB back off from the maximum output power. The classes of operation of power amplifier (A, AB,B, C etc), and the design techniques are presented. Design of a 2.14 GHz Doherty power amplifier has been provided in chapter 4. This technique shows a 15% increase in power added efficiency at 6 dB back off from the compression point. This PA can be implemented in WCDMA base station transmitter."}
{"_id": "f36d529cb340b4d748b6867e7a93ea02f832386f", "title": "Data Mining in Sports: A Systematic Review", "text": "Data mining technique has attracted attention in the information industry and society as a whole, because of the big amount of data and the imminent need to transform that data into useful information and knowledge. Recently conducted studies with successfully demarcated results using this technique, to estimate several parameters in a variety of domains. However, the effective use of data in some areas is still developing, as is the case of sports, which has shown moderate growth. In this context, the objective of this article is to present a systematic review of the literature about research involving sports data mining. As systematic searches were made out in five databases, resulting in 21 articles that answered a question that grounded this article."}
{"_id": "9985d2b40b10ea2dc37a73e2159ebd9b7918529b", "title": "Interpreting Embedding Models of Knowledge Bases: A Pedagogical Approach", "text": "Knowledge bases are employed in a variety of applications from natural language processing to semantic web search; alas, in practice their usefulness is hurt by their incompleteness. Embedding models attain state-of-the-art accuracy in knowledge base completion, but their predictions are notoriously hard to interpret. In this paper, we adapt \u201cpedagogical approaches\u201d (from the literature on neural networks) so as to interpret embedding models by extracting weighted Horn rules from them. We show how pedagogical approaches have to be adapted to take upon the large-scale relational aspects of knowledge bases and show experimentally their strengths and weaknesses."}
{"_id": "6f34cf80935642601b3080342d724f8a41a5098b", "title": "Judgment of information quality and cognitive authority in the Web", "text": "In the Web, making judgments of information quality and authority is a difficult task for most users because overall, there is no quality control mechanism. This study examines the problem of the judgment of information quality and cognitive authority by observing people\u2019s searching behavior in the Web. Its purpose is to understand the various factors that influence people\u2019s judgment of quality and authority in the Web, and the effects of those judgments on selection behaviors. Fifteen scholars from diverse disciplines participated, and data were collected combining verbal protocols during the searches, search logs, and postsearch interviews. It was found that the subjects made two distinct kinds of judgment: predictive judgment, and evaluative judgment. The factors influencing each judgment of quality and authority were identified in terms of characteristics of information objects, characteristics of sources, knowledge, situation, ranking in search output, and general assumption. Implications for Web design that will effectively support people\u2019s judgments of quality and authority are also discussed."}
{"_id": "1800d4306a78c39ed222285bc95bfada75d2d14e", "title": "Provably secure ciphertext policy ABE", "text": "In ciphertext policy attribute-based encryption (CP-ABE), every secret key is associated with a set of attributes, and every ciphertext is associated with an access structure on attributes. Decryption is enabled if and only if the user's attribute set satisfies the ciphertext access structure. This provides fine-grained access control on shared data in many practical settings, e.g., secure database and IP multicast.\n In this paper, we study CP-ABE schemes in which access structures are AND gates on positive and negative attributes. Our basic scheme is proven to be chosen plaintext (CPA) secure under the decisional bilinear Diffie-Hellman (DBDH) assumption. We then apply the Canetti-Halevi-Katz technique to obtain a chosen ciphertext (CCA) secure extension using one-time signatures. The security proof is a reduction to the DBDH assumption and the strong existential unforgeability of the signature primitive.\n In addition, we introduce hierarchical attributes to optimize our basic scheme - reducing both ciphertext size and encryption/decryption time while maintaining CPA security. We conclude with a discussion of practical applications of CP-ABE."}
{"_id": "02beed2e1350a0d0b01bb9622081cb93a965a716", "title": "Practical Techniques for Searches on Encrypted Data", "text": "It is desirable to store data on data storage servers such as mail servers and file servers in encrypted form to reduce security and privacy risks. But this usually implies that on e has to sacrifice functionality for security. For example, if a client wishes to retrieve only documents containing certai n words, it was not previously known how to let the data storage server perform the search and answer the query without loss of data confidentiality. In this paper, we describe our cryptographic schemes for the problem of searching on encrypted data and provide proofs of security for the resulting crypto systems. Ou r techniques have a number of crucial advantages. They are provably secure : they provideprovable secrecy for encryption, in the sense that the untrusted server cannot learn anything about the plaintext when only given the ciphertext; they providequery isolationfor searches, meaning that the untrusted server cannot learn anything more about the plaintext than the search result; they provide controlled searching, so that the untrusted server cannot search for an arbitrary word without the user\u2019s authorization; they also supporthidden queries , so that the user may ask the untrusted server to search for a secret word without revealing the word to the server. The algorithms we present are simple, fast (for a document of length n, the encryption and search algorithms only need O(n) stream cipher and block cipher operations), and introduce almost no space and communication overhead, and hence are practical to use today. We gratefully acknowledge support for this research from se veral US government agencies. This research was suported in part by t he Defense Advanced Research Projects Agency under DARPA contract N66 01-9928913 (under supervision of the Space and Naval Warfare Syst ems Center San Diego), by the National Science foundation under grant F D99-79852, and by the United States Postal Service under grant USPS 1025 90-98-C3513. Views and conclusions contained in this document are t hose of the authors and do not necessarily represent the official opinio n or policies, either expressed or implied of the US government or any of its agencies, DARPA, NSF, USPS."}
{"_id": "0b277244b78a172394d3cbb68cc068fb1ebbd745", "title": "Attribute-based encryption for fine-grained access control of encrypted data", "text": "As more sensitive data is shared and stored by third-party sites on the Internet, there will be a need to encrypt data stored at these sites. One drawback of encrypting data, is that it can be selectively shared only at a coarse-grained level (i.e., giving another party your private key). We develop a new cryptosystem for fine-grained sharing of encrypted data that we call Key-Policy Attribute-Based Encryption (KP-ABE). In our cryptosystem, ciphertexts are labeled with sets of attributes and private keys are associated with access structures that control which ciphertexts a user is able to decrypt. We demonstrate the applicability of our construction to sharing of audit-log information and broadcast encryption. Our construction supports delegation of private keys which subsumesHierarchical Identity-Based Encryption (HIBE)."}
{"_id": "0bafcffb9274aafe39830da451e6f44f38f434a4", "title": "Plutus: Scalable Secure File Sharing on Untrusted Storage", "text": "Plutus is a cryptographic storage system that enables secure file sharing without placing much trust on the file servers. In particular, it makes novel use of cryptographic primitives to protect and share files. Plutus features highly scalable key management while allowing individual users to retain direct control over who gets access to their files. We explain the mechanisms in Plutus to reduce the number of cryptographic keys exchanged between users by using filegroups, distinguish file read and write access, handle user revocation efficiently, and allow an untrusted server to authorize file writes. We have built a prototype of Plutus on OpenAFS. Measurements of this prototype show that Plutus achieves strong security with overhead comparable to systems that encrypt all network traffic."}
{"_id": "c5021337a11be124cb126fc2e9fa847aa3778228", "title": "Diaphragmatic Eventration in Sisters with Asparagine Synthetase Deficiency: A Novel Homozygous ASNS Mutation and Expanded Phenotype.", "text": "BACKGROUND\nAsparagine Synthetase Deficiency (ASNSD; OMIM #615574) is a newly described rare autosomal recessive neurometabolic disorder, characterised by congenital microcephaly, severe psychomotor delay, encephalopathy and progressive cerebral atrophy. To date, seven families and seven missense mutations in the ASNSD disease causing gene, ASNS, have been published.\n\n\nMETHODS\nWe report two further affected infant sisters from a consanguineous Indian family, who in addition to the previously described features had diaphragmatic eventration. Both girls died within the first 6 months of life. Whole exome sequencing (WES) was performed for both sisters to identify the pathogenic mutation. The clinical and biochemical parameters of our patient are compared to previous reports.\n\n\nRESULTS\nWES demonstrated a homozygous novel missense ASNS mutation, c.1019G\u2009>\u2009A, resulting in substitution of the highly conserved arginine residue by histidine (R340H).\n\n\nCONCLUSION\nThis report expands the phenotypic and mutation spectrum of ASNSD, which should be considered in neonates with congenital microcephaly, seizures and profound neurodevelopmental delay. The presence of diaphragmatic eventration suggests extracranial involvement of the central nervous system in a disorder that was previously thought to exclusively affect the brain. Like all previously reported patients, these cases were diagnosed with WES, highlighting the clinical utility of next generation sequencing in the diagnosis of rare, difficult to recognise disorders."}
{"_id": "48b7ca5d261de75f66bc41c1cc658a00c4609199", "title": "Challenges in developing opinion mining tools for social media", "text": "While much work has recently focused on the analysis of social media in order to get a feel for what people think about current topics of interest, there are, however, still many challenges to be faced. Text mining systems originally designed for more regular kinds of texts such as news articles may need to be adapted to deal with facebook posts, tweets etc. In this paper, we discuss a variety of issues related to opinion mining from social media, and the challenges they impose on a Natural Language Processing (NLP) system, along with two example applications we have developed in very different domains. In contrast with the majority of opinion mining work which uses machine learning techniques, we have developed a modular rule-based approach which performs shallow linguistic analysis and builds on a number of linguistic subcomponents to generate the final opinion polarity and score."}
{"_id": "a55392918b3e7db1c45481253425a0a52e703282", "title": "Background Prior-Based Salient Object Detection via Deep Reconstruction Residual", "text": "Detection of salient objects from images is gaining increasing research interest in recent years as it can substantially facilitate a wide range of content-based multimedia applications. Based on the assumption that foreground salient regions are distinctive within a certain context, most conventional approaches rely on a number of hand-designed features and their distinctiveness is measured using local or global contrast. Although these approaches have been shown to be effective in dealing with simple images, their limited capability may cause difficulties when dealing with more complicated images. This paper proposes a novel framework for saliency detection by first modeling the background and then separating salient objects from the background. We develop stacked denoising autoencoders with deep learning architectures to model the background where latent patterns are explored and more powerful representations of data are learned in an unsupervised and bottom-up manner. Afterward, we formulate the separation of salient objects from the background as a problem of measuring reconstruction residuals of deep autoencoders. Comprehensive evaluations of three benchmark datasets and comparisons with nine state-of-the-art algorithms demonstrate the superiority of this paper."}
{"_id": "59e580aea8a3cbdfb06e685dcdc8dca851fe4b31", "title": "p-Cycle-based node failure protection for survivable virtual network embedding", "text": "Network Virtualization offers to Physical Infrastructure Provider (PIP) the possibility to smoothly roll out multiple isolated networks on top of his infrastructure. A major challenge in this respect is the embedding problem which deals with the mapping of Virtual Networks (VNs) resources on PIP network. In literature, a number of research proposals have been focused on providing heuristic approaches to solve this NP-hard problem while most often assuming that PIP infrastructure is operational at all time. In virtualization environment a single physical node failure can result in single/multi logical link failure(s) as it effects all VNs with a mapping that spans over. Setup a dedicated backup for each VN embedding, i.e., no sharing, is inefficient in terms of resource usage. To address these concerns, we propose in this work an approach for VN mapping with combined physical node and multiple logical links protection mechanism (VNM-CNLP) that: (a) uses a large scale optimization technique, developed in our previous work, to calculate in a first stage a cost-efficient VN mapping while minimizing the effects of a single node failure in VN layer, and (b) proposes in a second stage link p-Cycle based protection techniques that minimize the backup resources while providing a full VN protection scheme against a single physical node failure and a multiple logical links failure. Our simulations show a clear advantage of VNM-CNLP approach over benchmark in terms of VN backup cost and resources utilization."}
{"_id": "7b5ac7a703832b80e21ddfa6829f1aec95439352", "title": "DAISY Filter Flow: A Generalized Discrete Approach to Dense Correspondences", "text": "Establishing dense correspondences reliably between a pair of images is an important vision task with many applications. Though significant advance has been made towards estimating dense stereo and optical flow fields for two images adjacent in viewpoint or in time, building reliable dense correspondence fields for two general images still remains largely unsolved. For instance, two given images sharing some content exhibit dramatic photometric and geometric variations, or they depict different 3D scenes of similar scene characteristics. Fundamental challenges to such an image or scene alignment task are often multifold, which render many existing techniques fall short of producing dense correspondences robustly and efficiently. This paper presents a novel approach called DAISY filter flow (DFF) to address this challenging task. Inspired by the recent PatchMatch Filter technique, we leverage and extend a few established methods: DAISY descriptors, filter-based efficient flow inference, and the PatchMatch fast search. Coupling and optimizing these modules seamlessly with image segments as the bridge, the proposed DFF approach enables efficiently performing dense descriptor-based correspondence field estimation in a generalized high-dimensional label space, which is augmented by scales and rotations. Experiments on a variety of challenging scenes show that our DFF approach estimates spatially coherent yet discontinuity-preserving image alignment results both robustly and efficiently."}
{"_id": "619a8a014c654e7bf5c8007b777f6f02db72e0a0", "title": "Making TCP/IP Viable for Wireless Sensor Networks", "text": "The TCP/IP protocol suite, which has proven itself highly successful in wired networks, is often claimed to be unsuited for wireless micro-sensor networks. In this work, we question this conventional wisdom and present a number of mechanisms that are intended to enable the use of TCP/IP for wireless sensor networks: spatial IP address assignment, shared context header compression, application overlay routing, and distributed TCP caching (DTC). Sensor networks based on TCP/IP have the advantage of being able to directly communicate with an infrastructure consisting either of a wired IP network or of IP-based wireless technology such as GPRS. We have implemented parts of our mechanisms both in a simulator environment and on actual sensor nodes. Our preliminary results are promising."}
{"_id": "5ec2ffed443fff4917bd86e0b9c6507df3b13f70", "title": "Introducing the Occupational Balance Questionnaire (OBQ).", "text": "OBJECTIVE\nThe concept of occupational balance is frequently used in occupational therapy but the fact that it has been defined and measured differently is a limitation. This article introduces the Occupational Balance Questionnaire (OBQ), which focuses on satisfaction with the amount and variation of occupations. It consists of 13 items measured on six-step ordinal scales. It has shown good content validity in a sample of 21 occupational therapists but other psychometric properties have not been investigated. The aim was to investigate the OBQ regarding internal consistency, test-retest reliability, and floor/ceiling effects.\n\n\nMETHODS\nThe OBQ was administered twice to a sample selected through convenience sampling. Internal consistency was investigated by Cronbach's alpha and test-retest reliability analysed with Spearman's Rho correlation for the total score and weighted kappa on each item. Potential floor/ceiling effects were explored by checking for the percentage of participants who scored lowest and highest.\n\n\nRESULTS\nThe results demonstrated that the OBQ has good internal consistency (Cronbach's alpha 0.936) and sufficient test-retest reliability (Spearman's Rho for the total score was 0.926) and, thus, seems stable over time. No floor or ceiling effect was detected.\n\n\nCONCLUSIONS\nThe OBQ therefore showed promising reliability, although further instrument development studies to examine its construct validity are required."}
{"_id": "2b7d5931a08145d9a501af9839fb9f8954c82c3c", "title": "Design and implementation of a single-phase buck-boost inverter", "text": "In order to simplify the configuration of photovoltaic (PV) grid-connection system, this paper proposes to adopt a buck-boost dc-dc converter, and then develops a single-phase inverter via the connection with an H-bridge unfolding circuit with line-commutated. Depending on the conditions of dc input-voltage and ac output-voltage, the proposed circuit can work functionally as either a step-down or step-up inverter. It is suitable for the applications with wide voltage-variation range. Since only one switch is operated with high-frequency, the switching loss can be significantly reduced to improve the efficiency. Finally, a laboratory prototype with 110 Vrms / 60 Hz output voltage is then simulated and implemented accordingly to verify the feasibility of the proposed inverter."}
{"_id": "b8d3cde01d078d6a6c27210a42d7dfeb19d89ed1", "title": "Chi: A Scalable and Programmable Control Plane for Distributed Stream Processing Systems", "text": "Stream-processing workloads and modern shared cluster environments exhibit high variability and unpredictability. Combined with the large parameter space and the diverse set of user SLOs, this makes modern streaming systems very challenging to statically configure and tune. To address these issues, in this paper we investigate a novel control-plane design, Chi, which supports continuous monitoring and feedback, and enables dynamic re-configuration. Chi leverages the key insight of embedding control-plane messages in the data-plane channels to achieve a low-latency and flexible control plane for stream-processing systems. Chi introduces a new reactive programming model and design mechanisms to asynchronously execute control policies, thus avoiding global synchronization. We show how this allows us to easily implement a wide spectrum of control policies targeting different use cases observed in production. Large-scale experiments using production workloads from a popular cloud provider demonstrate the flexibility and efficiency of our approach. PVLDB Reference Format: Luo Mai, Kai Zeng, Rahul Potharaju, Le Xu, Steve Suh, Shivaram Venkataraman, Paolo Costa, Terry Kim, Saravanan Muthukrishnan, Vamsi Kuppa, Sudheer Dhulipalla, Sriram Rao. Chi: A Scalable and Programmable Control Plane for Distributed Stream Processing Systems. PVLDB, 11 (10): xxxx-yyyy, 2018. DOI: https://doi.org/10.14778/3231751.3231765"}
{"_id": "492be1fedcfcaadd34a4d56a47ad687b3c81abe3", "title": "Using Dynamic Programming for Solving Variational Problems in Vision", "text": "Variational approaches have been proposed for solving many inverse problems in early vision, such as in the computation of optical flow, shape from shading, and energy-minimizing active contour models. In general however, variational approaches do not guarantee global optimality of the solution, require estimates of higher order derivatives of the discrete data, and do not allow direct and natural enforcement of constraints. In this paper we discuss dynamic programming as a novel approach to solving variational problems in vision. Dynamic programming ensures global optimality of the solution, it is numerically stable, and it allows for hard constraints to be enforced on the behavior of the solution within a natural and straightforward structure. As a specific example of the efficacy of the proposed approach, application of dynamic programming to the energy-minimizing active contours is described. The optimization problem is set up as a discrete multistage decision process and is solved by a \" time-delayed \" discrete dynamic programming algorithm. A parallel procedure is discussed that can result in savings in computational costs."}
{"_id": "24d8a2c600076c878d05930b8501cf63d2100387", "title": "VisTrails: visualization meets data management", "text": "Scientists are now faced with an incredible volume of data to analyze. To successfully analyze and validate various hypothesis, it is necessary to pose several queries, correlate disparate data, and create insightful visualizations of both the simulated processes and observed phenomena. Often, insight comes from comparing the results of multiple visualizations. Unfortunately, today this process is far from interactive and contains many error-prone and time-consuming tasks. As a result, the generation and maintenance of visualizations is a major bottleneck in the scientific process, hindering both the ability to mine scientific data and the actual use of the data. The VisTrails system represents our initial attempt to improve the scientific discovery process and reduce the time to insight. In VisTrails, we address the problem of visualization from a data management perspective: VisTrails manages the data and metadata of a visualization product. In this demonstration, we show the power and flexibility of our system by presenting actual scenarios in which scientific visualization is used and showing how our system improves usability, enables reproducibility, and greatly reduces the time required to create scientific visualizations."}
{"_id": "611b5a3b8a04897117fe488af39680d8d3c9f2fe", "title": "The twisted string actuation system: Modeling and control", "text": "This paper describes a novel actuation system for very compact and light-weight robotic devices, like artificial hands. The actuation concept presented here allows the implementation of powerful tendon-based driving systems, using as actuators small-size DC motors characterized by high speed and low torque. After the presentation of the basic concept of this novel actuation system, the constitutive equations of the system are given, validated by means of laboratory tests. Moreover, the problem of tracking a desired actuation force profile is taken into account, considering as load a mass-spring-damper system. A control algorithm based on a second-order sliding manifold has been firstly evaluated by means of simulations, and then validated by experiments. This output-feedback controller has been chosen to guarantee a high level of robustness against disturbances, parameter variations and uncertainties while maintaining a low computational burden."}
{"_id": "a7779428671a47e36b9edadf75d5feb5e067d230", "title": "An Empirical Evaluation of Resources for the Identification of Diseases and Adverse Effects in Biomedical Literature", "text": "The mentions of human health perturbations such as the diseases and adverse effects denote a special entity class in the biomedical literature. They help in understanding the underlying risk factors and develop a preventive rationale. The recognition of these named entities in texts through dictionary-based approaches relies on the availability of appropriate terminological resources. Although few resources are publicly available, not all are suitable for the text mining needs. Therefore, this work provides an overview of the well known resources with respect to human diseases and adverse effects such as the MeSH, MedDRA, ICD-10, SNOMED CT, and UMLS. Individual dictionaries are generated from these resources and their performance in recognizing the named entities is evaluated over a manually annotated corpus. In addition, the steps for curating the dictionaries, rule-based acronym disambiguation and their impact on the dictionary performance is discussed. The results show that the MedDRA and UMLS achieve the best recall. Besides this, MedDRA provides an additional benefit of achieving a higher precision. The combination of search results of all the dictionaries achieve a considerably high recall. The corpus is available on http://www.scai.fraunhofer.de/disease-ae-corpus.html"}
{"_id": "8eb7b2a24de3a484348e2516747dfc1722e72dbb", "title": "The iSLIP scheduling algorithm for input-queued switches", "text": "h increasing number of high performance internetworking protocol routers, LAN and asynchronous transfer mode (ATM) switches use a switched backplane based on a crossbar switch. Most often, these systems use input queues to hold packets waiting to traverse the switching fabric. It is well known that if simple first in fh-st out (FIFO) input queues are used to hold packets then, even under benign conditions, head-of-line (HOL) blocking limits the achievable bandwidth to approximately 58.6% of the maximum. HOL blocking can be overcome by the use of virtual output queueing, which is described in this paper. A scheduling algorithm is used to configure the crossbar switch, deciding the order in which packets will be served. Recent results have shown that with a suitable scheduling algorithm, 100% throughput can he achieved. In this paper, we present a scheduling algorithm called iSLIP. An iterative, round-robin algorithm, iSLIP can achieve 100% throughput for uniform traffic, yet is simple to implement in hardware. Iterative and noniterative versions of the algorithms are presented, along with modified versions for prioritized traffic. Simulation results are presented to indicate the performance of iSLIP under benign and bursty traffic conditions. Prototype and commercial implementations of SLIP exist in systems with aggregate bandwidths ranging from 50 to SO0 Gb/s. When the traffic is nonuniform, SLIP quickly adapts to a fair scheduling policy that is guaranteed never to starve an input queue. Finally, we describe the implementation complexity of SLIP. Based on a two-dimensional (2-D) array of priority encoders, single-chip schedulers have been built supporting up to 32 ports, and making approximately 100 million scheduling decisions per second. I~l&x TermsATM switch, crossbar switch, input-queueing, IP router, scheduling."}
{"_id": "eac936ca639a44da547d761b0f71ee7bac6ca8a0", "title": "Reliable detection and skew correction method of license plate for PTZ camera-based license plate recognition system", "text": "The skewness of license plates affects negatively all processing steps of license plate recognition including character segmentation and character classification. The skewness of the license plate in the captured image may appear severely under PTZ camera environments since license plate appears to have various postures in the captured image due to panning and tilting positions of PTZ cameras. Thus far, most of previous works on this subject have mainly resolved skewness problem due to rotation in plane, but not rotation in depth. In this paper, by utilizing planar projective transformation (planar homography), we propose a reliable skew correction method of license plates which handles also the skewness due to rotation in depth. For more reliable deskewing process, the license plate needs to be detected more reliably so that this paper also proposes a reliable license plate detection method. The effectiveness of the proposed skew correction method as well as the proposed license plate detection method is verified though experiments."}
{"_id": "b5feb00a68d94a1575dd7d94adc8f57f8909ea98", "title": "Rapid Graph Layout Using Space Filling Curves", "text": "Network data frequently arises in a wide variety of fields, and node-link diagrams are a very natural and intuitive representation of such data. In order for a node-link diagram to be effective, the nodes must be arranged well on the screen. While many graph layout algorithms exist for this purpose, they often have limitations such as high computational complexity or node colocation. This paper proposes a new approach to graph layout through the use of space filling curves which is very fast and guarantees that there will be no nodes that are colocated. The resulting layout is also aesthetic and satisfies several criteria for graph layout effectiveness."}
{"_id": "8dd91603bdd8357338b5a3f2a7d86039f9b34450", "title": "Context-Free Transductions with Neural Stacks", "text": "This paper analyzes the behavior of stackaugmented recurrent neural network (RNN) models. Due to the architectural similarity between stack RNNs and pushdown transducers, we train stack RNN models on a number of tasks, including string reversal, contextfree language modelling, and cumulative XOR evaluation. Examining the behavior of our networks, we show that stack-augmented RNNs can discover intuitive stack-based strategies for solving our tasks. However, stack RNNs are more difficult to train than classical architectures such as LSTMs. Rather than employ stack-based strategies, more complex networks often find approximate solutions by using the stack as unstructured memory."}
{"_id": "f7255871755f4107ebc1f4d02b6de4c26c52f759", "title": "Radio resource allocation and power control scheme to mitigate interference in device-to-device communications underlaying LTE-A uplink cellular networks", "text": "The integration of Device-to-Device (D2D) communication into a cellular network improves system spectral efficiency, increases capacity, reduces power consumption and also reduces traffic to the evolved NodeB (eNB). However, D2D communication generates interference to the conventional cellular network which deteriorates the system performance. In this paper, we propose a radio resource allocation and power control scheme to mitigate the interference using cell sectorization scheme while reusing uplink cellular resources by the D2D pairs. By simulations, we show that our proposed scheme improves the overall system performance compared with the existing methods."}
{"_id": "97db4d2bac016467ccbf67f48b3b1b3602c02c22", "title": "Humanlike Driving: Empirical Decision-Making System for Autonomous Vehicles", "text": "The autonomous vehicle, as an emerging and rapidly growing field, has received extensive attention for its futuristic driving experiences. Although the fast developing depth sensors and machine learning methods have given a huge boost to self-driving research, existing autonomous driving vehicles do meet with several avoidable accidents during their road testings. The major cause is the misunderstanding between self-driving systems and human drivers. To solve this problem, we propose a humanlike driving system in this paper to give autonomous vehicles the ability to make decisions like a human. In our method, a convolutional neural network model is used to detect, recognize, and abstract the information in the input road scene, which is captured by the on-board sensors. And then a decision-making system calculates the specific commands to control the vehicles based on the abstractions. The biggest advantage of our work is that we implement a decision-making system which can well adapt to real-life road conditions, in which a massive number of human drivers exist. In addition, we build our perception system with only the depth information, rather than the unstable RGB data. The experimental results give a good demonstration of the efficiency and robustness of the proposed method."}
{"_id": "791cf156d1fad8187fbd732e4543331d4e2b2367", "title": "Robot Learning from Demonstration: A Task-level Planning Approach:", "text": "In this paper, we deal with the problem of task level learning and planning for robotic applications that involve object manipulation. In dynamic environments, preprogramming robots for execution of complex tasks such as for example setting a dinner table is of little use, since the same order of subtasks may not be conceivable due to the changed state of the world in the execution phase. In our approach, we aim to learn the goal of the task, and use a task planner to reach the goal given different initial states of the world. However, for some tasks, there are underlying constraints that must be fulfilled, and knowing just the final goal is not sufficient. We propose two techniques for constraint identification. In the first case, the teacher can directly instruct the system about the underlying constraints. In the second case, the constraints are identified by the robot itself through the merging of multiple observations. The constraints are then considered in the planning phase, allowing the task to be executed without violating any of them. We evaluate our work on a real robot performing pick-and-place tasks with several objects of various size."}
{"_id": "0c7840062c1ad947591d811a9b77a36c3cd80a5b", "title": "Does Life Satisfaction influence the intention (We-Intention) to use Facebook?", "text": "Several studies have investigated a variety of factors affecting use of social networking sites (SNSs), but the investigation of these factors is still under development. In this study, we aim to contribute to the literature by extending and testing an existent conceptual model in a new context, including Life Satisfaction as an independent variable to explain the intention (We-Intention) to use Facebook. The previous models has Subjective Norm, Group Norms, Social Identity, Purposive Value, Self-Discovery, Maintaining Interpersonal Interconnectivity, Social Enhancement, Entertainment Value and Social Presence as the predictor variables. An online survey with Brazilians (n = 1111) was conducted. Our structural equation modeling reveals that Life Satisfaction influence on We-Intention is mediated by Subjective Norm, Group Norm, Social Identity, Entertainment Value, and Maintaining Interpersonal Interconnectivity (R-squared value is 0.36). Our findings, while consistent with existing literature in terms of theories, reveal different arrangements among factors influencing Brazilian consumers\u2019 behavior. 2015 Elsevier Ltd. All rights reserved."}
{"_id": "2a92f9e3160ea00338bc2884b8fdb9699c2d30f0", "title": "NailTactors: eyes-free spatial output using a nail-mounted tactor array", "text": "This paper investigates the feasibility of using a nail-mounted array of tactors, NailTactors, as an eyes-free output device. By rim-attached eccentric-rotating-mass (ERM) vibrators to artificial nails, miniature high-resolution tactile displays were realized as an eyes-free output device. To understand how to deliver rich signals to users for valid signal perception, three user studies were conducted. The results suggest that users can not only recognized absolute and relative directional cues, but also recognized numerical characters in EdgeWrite format with an overall 89% recognition rate. Experiments also identified the optimal placement of ERM actuators for maximizing information transfer."}
{"_id": "169d9ebd8f33d41fdc8d49f1690f124ed5260860", "title": "Automatic medical image annotation in ImageCLEF 2007: Overview, results, and discussion", "text": "In this paper, the automatic medical annotation task of the 2007 CLEF cross-language image retrieval campaign (ImageCLEF) is described. The paper focusses on the images used, the task setup, and the results obtained in the evaluation campaign. Since 2005, the medical automatic image annotation task exists in ImageCLEF with increasing complexity to evaluate the performance of state-of-the-art methods for completely automatic annotation of medical images based on visual properties. The paper also describes the evolution of the task from its origin in 2005 to 2007. The 2007 task, comprising 11,000 fully annotated training images and 1,000 test images to be annotated, is a realistic task with a large number of possible classes at different levels of detail. Detailed analysis of the methods across participating groups is presented with respect to the (i) image representation, (ii) classification method, and (iii) use of the class hierarchy. The results show that methods which build on local image descriptors and discriminative models are able to provide good predictions of the image classes, mostly by using techniques that were originally developed in the machine learning and computer vision domain for object recognition in non-medical images."}
{"_id": "245414e768c3b8c8288ac0651604a36b1a44a446", "title": "Stochastic Backpropagation and Approximate Inference in Deep Generative Models", "text": "We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model to represent an approximate posterior distribution and uses this for optimisation of a variational lower bound. We develop stochastic backpropagation \u2013 rules for gradient backpropagation through stochastic variables \u2013 and derive an algorithm that allows for joint optimisation of the parameters of both the generative and recognition models. We demonstrate on several real-world data sets that by using stochastic backpropagation and variational inference, we obtain models that are able to generate realistic samples of data, allow for accurate imputations of missing data, and provide a useful tool for high-dimensional data visualisation."}
{"_id": "373cf414cc038516a2cff11d7caafa3ff1031c6d", "title": "Generalized Denoising Auto-Encoders as Generative Models", "text": "Recent work has shown how denoising and contractive autoencoders implicitly capture the structure of the data-generating density, in the case where the corruption noise is Gaussian, the reconstruction error is the squared error, and the data is continuous-valued. This has led to various proposals for sampling from this implicitly learned density function, using Langevin and Metropolis-Hastings MCMC. However, it remained unclear how to connect the training procedure of regularized auto-encoders to the implicit estimation of the underlying datagenerating distribution when the data are discrete, or using other forms of corruption process and reconstruction errors. Another issue is the mathematical justification which is only valid in the limit of small corruption noise. We propose here a different attack on the problem, which deals with all these issues: arbitrary (but noisy enough) corruption, arbitrary reconstruction loss (seen as a log-likelihood), handling both discrete and continuous-valued variables, and removing the bias due to non-infinitesimal corruption noise (or non-infinitesimal contractive penalty)."}
{"_id": "3b4d901574162750af343c3d564ed4f514273c67", "title": "Sampling Generative Networks: Notes on a Few Effective Techniques", "text": "We introduce several techniques for sampling and visualizing the latent spaces of generative models. Replacing linear interpolation with spherical linear interpolation prevents diverging from a model\u2019s prior distribution and produces sharper samples. J-Diagrams and MINE grids are introduced as visualizations of manifolds created by analogies and nearest neighbors. We demonstrate two new techniques for deriving attribute vectors: bias-corrected vectors with data replication and synthetic vectors with data augmentation. Most techniques are intended to be independent of model type and examples are shown on both Variational Autoencoders and Generative Adversarial Networks. Keywords\u2014Generative; VAE; GAN; Sampling; Manifold"}
{"_id": "4e97b53926d997f451139f74ec1601bbef125599", "title": "Discriminative Regularization for Generative Models", "text": "We explore the question of whether the representations learned by classifiers can be used to enhance the quality of generative models. Our conjecture is that labels correspond to characteristics of natural data which are most salient to humans: identity in faces, objects in images, and utterances in speech. We propose to take advantage of this by using the representations from discriminative classifiers to augment the objective function corresponding to a generative model. In particular we enhance the objective function of the variational autoencoder, a popular generative model, with a discriminative regularization term. We show that enhancing the objective function in this way leads to samples that are clearer and have higher visual quality than the samples from the standard variational autoencoders."}
{"_id": "d08a2f386acded08a31a1fa945fdb00fababa00b", "title": "Light emitting diodes reliability review", "text": "0026-2714/$ see front matter 2011 Elsevier Ltd. A doi:10.1016/j.microrel.2011.07.063 \u21d1 Corresponding author at: CALCE Center for Adva University of Maryland, College Park, MD 20742, Un 5323; fax: +1 301 314 9269. E-mail address: pecht@calce.umd.edu (M. Pecht). The increasing demand for light emitting diodes (LEDs) has been driven by a number of application categories, including display backlighting, communications, medical services, signage, and general illumination. The construction of LEDs is somewhat similar to microelectronics, but there are functional requirements, materials, and interfaces in LEDs that make their failure modes and mechanisms unique. This paper presents a comprehensive review for industry and academic research on LED failure mechanisms and reliability to help LED developers and end-product manufacturers focus resources in an effective manner. The focus is on the reliability of LEDs at the die and package levels. The reliability information provided by the LED manufacturers is not at a mature enough stage to be useful to most consumers and end-product manufacturers. This paper provides the groundwork for an understanding of the reliability issues of LEDs across the supply chain. We provide an introduction to LEDs and present the key industries that use LEDs and LED applications. The construction details and fabrication steps of LEDs as they relate to failure mechanisms and reliability are discussed next. We then categorize LED failures into thirteen different groups related to semiconductor, interconnect, and package reliability issues. We then identify the relationships between failure causes and their associated mechanisms, issues in thermal standardization, and critical areas of investigation and development in LED technology and reliability. 2011 Elsevier Ltd. All rights reserved."}
{"_id": "ee8c779e7823814a5f1746d883ca77b26671b617", "title": "Computability and \u03bb-Definability", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/asl.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission."}
{"_id": "1af2e075903a3cc5ad5a192921a0b4fb67645dc1", "title": "Mathematical models of cancer metabolism.", "text": "Metabolism is essential for life, and its alteration is implicated in multiple human diseases. The transformation from a normal to a cancerous cell requires metabolic changes to fuel the high metabolic demands of cancer cells, including but not limited to cell proliferation and cell migration. In recent years, there have been a number of new discoveries connecting known aberrations in oncogenic and tumour suppressor pathways with metabolic alterations required to sustain cell proliferation and migration. However, an understanding of the selective advantage of these metabolic alterations is still lacking. Here, we review the literature on mathematical models of metabolism, with an emphasis on their contribution to the identification of the selective advantage of metabolic phenotypes that seem otherwise wasteful or accidental. We will show how the molecular hallmarks of cancer can be related to cell proliferation and tissue remodelling, the two major physiological requirements for the development of a multicellular structure. We will cover different areas such as genome-wide gene expression analysis, flux balance models, kinetic models, reaction diffusion models and models of the tumour microenvironment. We will also highlight current challenges and how their resolution will help to achieve a better understanding of cancer metabolism and the metabolic vulnerabilities of cancers."}
{"_id": "99f3f950286b130f10e38127f903dfc0dfc843c8", "title": "A Direct AC\u2013AC Converter for Inductive Power-Transfer Systems", "text": "This paper proposes a direct ac-ac converter to generate a high-frequency current for inductive power-transfer (IPT) systems. Unlike traditional dc-ac converters for IPT systems, the proposed converter can achieve a high-frequency current generation directly from an ac power source without a dc link. The power converter can maintain high-frequency resonance by natural circuit oscillation and discrete energy injection control. A variable-frequency control strategy is developed to vary the switching frequency to keep the high-frequency current operating with zero-current switching. Thus, the switching stress, power losses, and electromagnetic interference of the converter are reduced. Theoretical analysis, simulation, and experimental results show that the output track current is fully controllable with good waveforms for contactless power-transfer applications."}
{"_id": "d33c745e04b36f269d660dea6719b0c47c9478b7", "title": "MEGA7: Molecular Evolutionary Genetics Analysis Version 7.0 for Bigger Datasets.", "text": "We present the latest version of the Molecular Evolutionary Genetics Analysis (Mega) software, which contains many sophisticated methods and tools for phylogenomics and phylomedicine. In this major upgrade, Mega has been optimized for use on 64-bit computing systems for analyzing larger datasets. Researchers can now explore and analyze tens of thousands of sequences in Mega The new version also provides an advanced wizard for building timetrees and includes a new functionality to automatically predict gene duplication events in gene family trees. The 64-bit Mega is made available in two interfaces: graphical and command line. The graphical user interface (GUI) is a native Microsoft Windows application that can also be used on Mac OS X. The command line Mega is available as native applications for Windows, Linux, and Mac OS X. They are intended for use in high-throughput and scripted analysis. Both versions are available from www.megasoftware.net free of charge."}
{"_id": "576203782173f94d3dfe779f7157645c990242f0", "title": "Investigation of a new motor assessment scale for stroke patients.", "text": "The purpose of this paper is to present and describe a motor assessment scale (MAS) for stroke patients and to report on the investigation of two aspects of its reliability. The MAS is a brief and easily administered assessment of eight areas of motor function and one item related to muscle tone. Each item is scored on a scale from 0 to 6. To check interrater reliability, we videotaped five stroke patients while they were being assessed with the MAS. These scores were used as the criterion ratings. Twenty raters then assessed these patients, and their results were correlated with the criterion ratings. We determined test-retest reliability by assessing on two occasions, separated by a four-week interval, 14 stroke patients whose recovery was considered to be stable and by correlating these scores. The MAS was found to be highly reliable with an average interrater correlation of .95 and an average test-retest correlation of .98."}
{"_id": "8dce45a4fac6493985ac990f03108ba6b332006c", "title": "Time-Domain Finite-Difference and Finite-Element Methods for Maxwell Equations in Complex Media", "text": "Extensions of finite-difference time-domain (FDTD) and finite-element time-domain (FETD) algorithms are reviewed for solving transient Maxwell equations in complex media. Also provided are a few representative examples to illustrate the modeling capabilities of FDTD and FETD for complex media. The term complex media refers here to media with dispersive, (bi)anisotropic, inhomogeneous, and/or nonlinear properties present in the constitutive tensors."}
{"_id": "48f28c57e9a37333bb626412460877f0c9ca52ec", "title": "Creating SERS hot spots on MoS(2) nanosheets with in situ grown gold nanoparticles.", "text": "Herein, a reliable surface-enhanced Raman scattering (SERS)-active substrate has been prepared by synthesizing gold nanoparticles (AuNPs)-decorated MoS2 nanocomposite. The AuNPs grew in situ on the surface of MoS2 nanosheet to form efficient SERS hot spots by a spontaneous redox reaction with tetrachloroauric acid (HAuCl4) without any reducing agent. The morphologies of MoS2 and AuNPs-decorated MoS2 nanosheet were characterized by TEM, HRTEM, and AFM. The formation of hot spots greatly depended on the ratio of MoS2 and HAuCl4. When the concentration of HAuCl4 was 2.4 mM, the as-prepared AuNPs@MoS2-3 nanocomposite exhibited a high-quality SERS activity toward probe molecule due to the generated hot spots. The spot-to-spot SERS signals showed that the relative standard deviation (RSD) in the intensity of the main Raman vibration modes (1362, 1511, and 1652 cm(-1)) of Rhodamine 6G were about 20%, which displayed good uniformity and reproducibility. The AuNPs@MoS2-based substrate was reliable, sensitive, and reproducible, which showed great potential to be an excellent SERS substrate for biological and chemical detection."}
{"_id": "6ec17e735cd9f7cb37485ab07b905a7895b0067d", "title": "SIFT Features Tracking for Video Stabilization", "text": "This paper presents a video stabilization algorithm based on the extraction and tracking of scale invariant feature transform features through video frames. Implementation of SIFT operator is analyzed and adapted to be used in a feature-based motion estimation algorithm. SIFT features are extracted from video frames and then their trajectory is evaluated to estimate interframe motion. A modified version of iterative least squares method is adopted to avoid estimation errors and features are tracked as they appear in nearby frames to improve video stability. Intentional camera motion is eventually filtered with adaptive motion vector integration. Results confirm the effectiveness of the method."}
{"_id": "1df692629297cdb028a402a453f6f7a25d44fe45", "title": "Compact Single- and Dual-Band Filtering Patch Antenna Arrays Using Novel Feeding Scheme", "text": "This paper presents two compact patch antenna arrays using a novel feeding scheme to integrate bandpass responses. The two arrays have no filtering circuits but reveal high filtering performance. For each array, only a microstrip feed line is coupled to radiating patches and specific coupling regions are employed to obtain the desired current distribution on the radiating patches. In the operating band, the induced currents on the patches are in phase, resulting in high boresight gain. At the out-of-band frequencies, the patches are not excited or the induced currents on them are 180\u00b0 out of phase, resulting in radiation suppression. Using this method, out-of-band radiation can be suppressed without degrading the in-band performance. For demonstration, a four-element planar patch array is first implemented. Single-band bandpass response is observed in the experiment. Measured in-band gains are 12.6 dBi, whereas out-of-band suppression is more than 20 dB. Then the feeding scheme is employed to design two three-element subarrays to form a planar dual-band patch array. Also, dual-band bandpass responses are obtained with high frequency selectivity."}
{"_id": "bd5c67bf72da28a5f4bc06e58555683505d10ef1", "title": "A broadband substrate integrated waveguide (SIW) filter", "text": "An eleven-stage broadband SIW filter has been designed, fabricated and measured. The measured results show that the SIW filter can be operated from 9.25 GHz to 11.25 GHz with good selectivity performance. The filter has a small size, low weight, and can be integrated with other SIW components without any transitions. It's suitable for the design of microwave and millimeter wave integrated circuits"}
{"_id": "e0739088d578b2abf583e30953ffa000620cca98", "title": "Efficient Pedestrian Detection in Urban Traffic Scenes", "text": "Pedestrians are important participants in urban traffic environments, and thus act as an interesting category of objects for autonomous cars. Automatic pedestrian detection is an essential task for protecting pedestrians from collision. In this thesis, we investigate and develop novel approaches by interpreting spatial and temporal characteristics of pedestrians, in three different aspects: shape, cognition and motion. The special up-right human body shape, especially the geometry of the head and shoulder area, is the most discriminative characteristic for pedestrians from other object categories. Inspired by the success of Haar-like features for detecting human faces, which also exhibit a uniform shape structure, we propose to design particular Haar-like features for pedestrians. Tailored to a pre-defined statistical pedestrian shape model, Haar-like templates with multiple modalities are designed to describe local difference of the shape structure. Cognition theories aim to explain how human visual systems process input visual signals in an accurate and fast way. By emulating the center-surround mechanism in human visual systems, we design multi-channel, multi-direction and multi-scale contrast features, and boost them to respond to the appearance of pedestrians. In this way, our detector is considered as a top-down saliency system. In the last part of this thesis, we exploit the temporal characteristics for moving pedestrians and then employ motion information for feature design, as well as for regions of interest (ROIs) selection. Motion segmentation on optical flow fields enables us to select those blobs most probably containing moving pedestrians; a combination of Histogram of Oriented Gradients (HOG) and motion self difference features further enables robust detection. We test our three approaches on image and video data captured in urban traffic scenes, which are rather challenging due to dynamic and complex backgrounds. The achieved results demonstrate that our approaches reach and surpass state-of-the-art performance, and can also be employed for other applications, such as indoor robotics or public surveillance."}
{"_id": "8fd1d13e18c5ef8b57296adab6543cb810c36d81", "title": "Exploiting Tri-Relationship for Fake News Detection", "text": "Social media for news consumption is becoming popular nowadays. The low cost, easy access and rapid information dissemination of social media bring benefits for people to seek out news timely. However, it also causes the widespread of fake news, i.e., low-quality news pieces that are intentionally fabricated. The fake news brings about several negative effects on individual consumers, news ecosystem, and even society trust. Previous fake news detection methods mainly focus on news contents for deception classification or claim fact-checking. Recent Social and Psychology studies show potential importance to utilize social media data: 1) Confirmation bias effect reveals that consumers prefer to believe information that confirms their existing stances; 2) Echo chamber effect suggests that people tend to follow likeminded users and form segregated communities on social media. Even though users social engagements towards news on social media provide abundant auxiliary information for better detecting fake news, but existing work exploiting social engagements is rather limited. In this paper, we explore the correlations of publisher bias, news stance, and relevant user engagements simultaneously, and propose a Tri-Relationship Fake News detection framework (TriFN). We also provide two comprehensive real-world fake news datasets to facilitate fake news research. Experiments on these datasets demonstrate the effectiveness of the proposed approach. Introduction People nowadays tend to seek out and consume news from social media rather than traditional news organizations. For example, 62% of U.S. adults get news on social media in 2016, while in 2012, only 49 percent reported seeing news on social media. However, social media for news consumption is a double-edged sword. The quality of news on social media is much lower than traditional news organizations. Large volumes of \u201cfake news\u201d, i.e., those news articles with intentionally false information, are produced online for a variety of purposes, such as financial and political gain (Klein and Wueller 2017; Allcott and Gentzkow 2017). Fake news can have detrimental effects on individuals and the society. First, people may be misled by fake Copyright c \u00a9 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. http://www.journalism.org/2016/05/26/news-use-acrosssocial-media-platforms-2016/ news and accept false beliefs (Nyhan and Reifler 2010; Paul and Matthews 2016). Second, fake news could change the way people respond to true news. Third, the widespread of fake news could break the trustworthiness of entire news ecosystem. Thus, it is important to detect fake news on social media. Fake news is intentionally written to mislead consumers, which makes it nontrivial to detect simply based on news content. Thus, it is necessary to explore auxiliary information to improve detection. For example, several style-based approaches try to capture the deceptive manipulators originated from the particular writing style of fake news (Rubin and Lukoianova 2015; Potthast et al. 2017). In addition, previous approaches try to aggregate users\u2019 responses from relevant social engagements to infer the veracity of original news (Castillo, Mendoza, and Poblete 2011; Gupta, Zhao, and Han 2012). 
The news ecosystem on social media involves three basic entities: news publishers, news, and social media users. Figure 1 gives an illustration of such an ecosystem. In Figure 1, p1, p2 and p3 are news publishers who publish news a1, . . . , a4, and u1, . . . , u6 are users who have engaged in posting these news pieces. In addition, users with similar interests can also form social links. The tri-relationship among publishers, news, and social engagements contains additional information to help detect fake news. First, sociological studies of journalism have theorized the correlation between the partisan bias of a publisher and the veracity of its news content (Gentzkow, Shapiro, and Stone 2014; Entman 2007), where partisan means the perceived bias of the publisher in the selection of how news is reported and covered. For example, in Figure 1, p1 with extreme left partisan bias and p2 with extreme right partisan bias are, in support of their own partisanship, more likely to report fake news such as a1 and a3, while a mainstream publisher p3 with the least partisan bias is less likely to manipulate original news events and more likely to write true news a4. Thus, exploiting publisher partisan information can bring additional benefits in predicting fake news. Second, mining user engagements with news on social media also helps fake news detection. Different users have different credibility levels on social media, and the credibility score, which means \u201cthe quality of being trustworthy\u201d (Abbasi and Liu 2013), is a strong indicator of whether a user is likely to engage with fake news or not. Less credible users, such as malicious accounts or normal users who are vulnerable to fake news, are more likely to spread fake news. For example, u2 and u4 are users with low credibility scores, and they tend to spread fake news more than other, highly credible users. In addition, users tend to form relationships with like-minded people. For example, users u5 and u6 are friends on social media, so they tend to engage with news that confirms their own views, such as a4. Publisher partisan information can bridge the publisher-news relationship, while social engagements can capture the news-user relationship. In other words, they provide complementary information that has the potential to improve fake news prediction. Thus, it is important to integrate these two components and model the tri-relationship simultaneously. In this paper, we study the novel problem of exploiting the tri-relationship for fake news detection. In essence, we need to address the following challenges: (1) how to mathematically model the tri-relationship to extract news feature representations; and (2) how to take advantage of tri-relationship learning for fake news detection. In an attempt to address these challenges, we propose a novel framework TriFN that captures the Tri-relationship for Fake News detection. The main contributions are: \u2022 We provide a principled way to model the tri-relationship among publishers, news, and relevant user engagements simultaneously; \u2022 We propose a novel framework TriFN that exploits the tri-relationship for fake news prediction; and \u2022 We evaluate the effectiveness of the proposed framework for fake news detection through extensive experiments on newly collected real-world datasets. 
Problem Statement Even though fake news has existed for a long time, there is no agreed definition. In this paper, we follow the definition of fake news that is widely used in recent research (Shu et al. 2017; Zubiaga et al. 2017; Allcott and Gentzkow 2017), which has been shown to 1) provide theoretical and practical value for the fake news topic; and 2) eliminate the ambiguities between fake news and related concepts. DEFINITION 1 (FAKE NEWS) Fake news is a news article that is intentionally and verifiably false. Let A = {a1, a2, ..., an} be the set of n news articles and U = {u1, u2, ..., um} be the set of m users on social media engaging in the news spreading process. We denote X as the news feature matrix. Users can become friends with other users, and we use A \u2208 {0, 1}^{m\u00d7m} to denote the user-user adjacency matrix. On social media sites, users can easily share, comment on, and discuss news pieces. This kind of social engagement provides auxiliary information for fake news detection. We denote the social news engagement matrix as W \u2208 {0, 1}^{m\u00d7n}, where Wij = 1 indicates that user ui has engaged in the spreading process of news piece aj; otherwise Wij = 0. It is worth mentioning that we focus on those engagements that show that users agree with the news. For example, we only utilize those users that directly post the news, or repost the news without adding comments. More details will be introduced in a later section. We also denote P = {p1, p2, ..., pl} as the set of l news publishers. In addition, we denote B \u2208 {0, 1}^{l\u00d7n} as the publisher-news relation matrix, where Bkj = 1 means news publisher pk publishes news article aj; otherwise Bkj = 0. We assume that the partisan labels of some publishers are given and available. We define o \u2208 {\u22121, 0, 1}^{l} as the partisan label vector, where -1, 0, 1 represent left-, neutral-, and right-partisan bias. Similar to previous research (Shu et al. 2017), we treat the fake news detection problem as a binary classification problem. In other words, each news piece can be true or fake, and we use y = {y1; y2; ...; yn} \u2208 R^{n\u00d71} to represent the labels, where yj = 1 means news piece aj is fake news and yj = \u22121 means true news. With the notations given above, the problem is formally defined as: Given news article feature matrix X, user adjacency matrix A, user social engagement matrix W, publisher-news publishing matrix B, publisher partisan label vector o, and partially labeled news vector yL, we aim to predict the remaining unlabeled news label vector yU. A Tri-Relationship Embedding Framework In this section, we propose a semi-supervised detection framework by exploiting the tri-relationship. The idea of modeling the tri-relationship is demonstrated in Figure 2. Specifically, we first introduce the news latent feature embedding from news content, and then show how to model user social engagements and publisher partisanship separately; finally, we integrate the components to model the tri-relationship and provide a semi-supervised detection framework. A Basic Model for News Content Embedding The inherent manipulators of fake news can be reflected in the news content. Thus, it is important to extract"}
{"_id": "6863dc9fa61e47b839f1e481a396913b42c9b239", "title": "Which User Interactions Predict Levels of Expertise in Work-Integrated Learning?", "text": "Predicting knowledge levels from user\u2019s implicit interactions with an adaptive system is a difficult task, particularly in learning systems that are used in the context of daily work tasks. We have collected interactions of six persons working with the adaptive work-integrated learning system APOSDLE over a period of two months to find out whether naturally occurring interactions with the system can be used to predict their level of expertise. One set of interactions is based on the tasks they performed, the other on a number of additional Knowledge Indicating Events (KIE). We find that the addition of KIE significantly improves the prediction as compared to using tasks only. Both approaches are superior to a model that uses only the frequencies of events."}
{"_id": "5732afb98a2e5b2970344b255b7af10f5c363873", "title": "Application of a LabVIEW for Real-Time Control of Ball and Beam System", "text": ""}
{"_id": "812f3d2b8be0e8ae54a2cc7983a834ed91e3a49a", "title": "Data-Confined HTML5 Applications", "text": "Rich client-side applications written in HTML5 proliferate diverse platforms such as mobile devices, commodity PCs, and the web platform. These client-side HTML5 applications are increasingly accessing sensitive data, including users\u2019 personal and social data, sensor data, and capability-bearing tokens. To fulfill their security and privacy guarantees, these applications need to maintain certain data-confinement invariants. These invariants are not explicitly stated in today\u2019s HTML5 applications and are enforced using ad-hoc mechanisms. The complexity of web applications, coupled with hard-to-analyze client-side languages, leads to low-assurance data-confinement mechanisms in which the whole application needs to be in the TCB to ensure data-confinement in-"}
{"_id": "efa426e829e887efdef3ba0df5566097685a6b0e", "title": "Accelerated Convolutions for Efficient Multi-Scale Time to Contact Computation in Julia", "text": "Convolutions have long been regarded as fundamental to applied mathematics, physics and engineering. Their mathematical elegance allows for common tasks such as numerical differentiation to be computed efficiently on large data sets. Efficient computation of convolutions is critical to artificial intelligence in real-time applications, like machine vision, where convolutions must be continuously and efficiently computed on tens to hundreds of kilobytes per second. In this paper, we explore how convolutions are used in fundamental machine vision applications. We present an accelerated n-dimensional convolution package in the high performance computing language, Julia, and demonstrate its efficacy in solving the time to contact problem for machine vision. Results are measured against synthetically generated videos and quantitatively assessed according to their mean squared error from the ground truth. We achieve over an order of magnitude decrease in compute time and allocated memory for comparable machine vision applications. All code is packaged and integrated into the official Julia Package Manager to be used in various other scenarios."}
{"_id": "cbb1787d507436048865ee2127044f2ab42b5b90", "title": "Question Quality Analysis and Prediction in Community Question Answering Services with Coupled Mutual Reinforcement", "text": "Community question answering services (CQAS) (e.g., Yahoo! Answers) provides a platform where people post questions and answer questions posed by others. Previous works analyzed the answer quality (AQ) based on answer-related features, but neglect the question-related features on AQ. Previous work analyzed how asker- and question-related features affect the question quality (QQ) regarding the amount of attention from users, the number of answers and the question solving latency, but neglect the correlation between QQ and AQ (measured by the rating of the best answer), which is critical to quality of service (QoS). We handle this problem from two aspects. First, we additionally use QQ in measuring AQ, and analyze the correlation between a comprehensive list of features (including answer-related features) and QQ. Second, we propose the first method that estimates the probability for a given question to obtain high AQ. Our analysis on the Yahoo! Answers trace confirmed that the list of our identified features exert influence on AQ, which determines QQ. For the correlation analysis, the previous classification algorithms cannot consider the mutual interactions between multiple ($>$ 2) classes of features. We then propose a novel Coupled Semi-Supervised Mutual Reinforcement-based Label Propagation (CSMRLP) algorithm for this purpose. Our extensive experiments show that CSMRLP outperforms the Mutual Reinforcement-based Label Propagation (MRLP) and five other traditional classification algorithms in the accuracy of AQ classification, and the effectiveness of our proposed method in AQ prediction. Finally, we provide suggestions on how to create a question that will receive high AQ, which can be exploited to improve the QoS of CQAS."}
{"_id": "32e12185035cc61e5d7519b79e5f679684582b91", "title": "The longitudinal relationship of personality traits and disorders.", "text": "Personality disorders are presumed to be stable because of underlying stable and maladaptive personality traits, but while previous research has demonstrated a link between personality traits and personality disorders cross-sectionally, personality disorders and personality traits have not been linked longitudinally. This study explores the extent to which relevant personality traits are stable in individuals diagnosed with 4 personality disorders (schizotypal, borderline, avoidant, and obsessive-compulsive personality disorders) and examines the assumption that these personality disorders are stable by virtue of stable personality traits. This assumption was tested via the estimation of a series of latent longitudinal models that evaluated whether changes in relevant personality traits lead to subsequent changes in personality disorders. In addition to offering large consistency estimates for personality traits and personality disorders, the results demonstrate significant cross-lagged relationships between trait change and later disorder change for 3 of the 4 personality disorders studied."}
{"_id": "5367c2c9eeefe666a41cb4c13b7a6f4e9661eb1c", "title": "Human-robot physical interaction and collaboration using an industrial robot with a closed control architecture", "text": "In physical Human-Robot Interaction, the basic problem of fast detection and safe robot reaction to unexpected collisions has been addressed successfully on advanced research robots that are torque controlled, possibly equipped with joint torque sensors, and for which an accurate dynamic model is available. In this paper, an end-user approach to collision detection and reaction is presented for an industrial manipulator having a closed control architecture and no additional sensors. The proposed detection and reaction schemes have minimal requirements: only the outer joint velocity reference to the robot manufacturer's controller is used, together with the available measurements of motor currents and joint positions. No a priori information on the robot dynamic model and existing low-level joint controllers is strictly needed. A suitable on-line processing of the motor currents allows to distinguish between accidental collisions and intended human-robot contacts, so as to switch the robot to a collaboration mode when needed. Two examples of reaction schemes for collaboration are presented, with the user pushing/pulling the robot at any point of its structure (e.g., for manual guidance) or with a compliant-like robot behavior in response to forces applied by the human. The actual performance of the methods is illustrated through experiments on a KUKA KR5 manipulator."}
{"_id": "554f6cc9cb9c64a25670eeb12827b803f3db2f71", "title": "TR 03-002 Item-Based Top-N Recommendation Algorithms", "text": ""}
{"_id": "8db9e3f2384b032278ed9e9113021538ef4b9b94", "title": "ICWSM - A Great Catchy Name: Semi-Supervised Recognition of Sarcastic Sentences in Online Product Reviews", "text": "Sarcasm is a sophisticated form of speech act widely used in online communities. Automatic recognition of sarcasm is, however, a novel task. Sarcasm recognition could contribute to the performance of review summarization and ranking systems. This paper presents SASI, a novel Semi-supervised Algorithm for Sarcasm Identification that recognizes sarcastic sentences in product reviews. SASI has two stages: semisupervised pattern acquisition, and sarcasm classification. We experimented on a data set of about 66000 Amazon reviews for various books and products. Using a gold standard in which each sentence was tagged by 3 annotators, we obtained precision of 77% and recall of 83.1% for identifying sarcastic sentences. We found some strong features that characterize sarcastic utterances. However, a combination of more subtle pattern-based features proved more promising in identifying the various facets of sarcasm. We also speculate on the motivation for using sarcasm in online communities and social networks."}
{"_id": "c425c7d84356bb827bb5a26d66497b24e5201fd8", "title": "Real-World Recommender Systems for Academia: The Pain and Gain in Building, Operating, and Researching them", "text": "Research on recommender systems is a challenging task, as is building and operating such systems. Major challenges include non-reproducible research results, dealing with noisy data, and answering many questions such as how many recommendations to display, how often, and, of course, how to generate recommendations most effectively. In the past six years, we built three research-article recommender systems for digital libraries and reference managers, and conducted research on these systems. In this paper, we share some experiences we made during that time. Among others, we discuss the required skills to build recommender systems, and why the literature provides little help in identifying promising recommendation approaches. We explain the challenge in creating a randomization engine to run A/B tests, and how low data quality impacts the calculation of bibliometrics. We further discuss why several of our experiments delivered disappointing results, and provide statistics on how many researchers showed interest in our recommendation dataset."}
{"_id": "71b45919a82cf163d2563b7934245c115c33c4e8", "title": "Object Segmentation by Long Term Analysis of Point Trajectories", "text": "Unsupervised learning requires a grouping step that defines which data belong together. A natural way of grouping in images is the segmentation of objects or parts of objects. While pure bottom-up segmentation from static cues is well known to be ambiguous at the object level, the story changes as soon as objects move. In this paper, we present a method that uses long term point trajectories based on dense optical flow. Defining pair-wise distances between these trajectories allows to cluster them, which results in temporally consistent segmentations of moving objects in a video shot. In contrast to multi-body factorization, points and even whole objects may appear or disappear during the shot. We provide a benchmark dataset and an evaluation method for this so far uncovered setting."}
{"_id": "7a953aaf29ef67ee094943d4be50d753b3744573", "title": "\"GrabCut\": interactive foreground extraction using iterated graph cuts", "text": ""}
{"_id": "205814d838b67f4baf97a187a6a781dfa5271df9", "title": "LabelMe video: Building a video database with human annotations", "text": "Currently, video analysis algorithms suffer from lack of information regarding the objects present, their interactions, as well as from missing comprehensive annotated video databases for benchmarking. We designed an online and openly accessible video annotation system that allows anyone with a browser and internet access to efficiently annotate object category, shape, motion, and activity information in real-world videos. The annotations are also complemented with knowledge from static image databases to infer occlusion and depth information. Using this system, we have built a scalable video database composed of diverse video samples and paired with human-guided annotations. We complement this paper demonstrating potential uses of this database by studying motion statistics as well as cause-effect motion relationships between objects."}
{"_id": "3dbe40033c570f4ea9c88ffbee1ad26a6b7d0df6", "title": "The structure of images", "text": "In practice the relevant details of images exist only over a restricted range of scale. Hence it is important to study the dependence of image structure on the level of resolution. It seems clear enough that visual perception treats images on several levels of resolution simultaneously and that this fact must be important for the study of perception. However, no applicable mathematically formulated theory to deal with such problems appers to exist. In this paper it is shown that any image can be embedded in a one-parameter family of derived images (with resolution as the parameter) in essentially only one unique way if the constraint that no spurious detail should be generated when the resolution is diminished, is applied. The structure of this family is governed by the well known diffusion equation (a parabolic, linear, partial differential equation of the second order). As such the structure fits into existing theories that treat the front end of the visual system as a continuous tack of homogeneous layer, characterized by iterated local processing schemes. When resolution is decreased the images becomes less articulated because the extrem (\u201clight and dark blobs\u201d) disappear one after the other. This erosion of structure is a simple process that is similar in every case. As a result any image can be described as a juxtaposed and nested set of light and dark blobs, wherein each blod has a limited range of resolution in which it manifests itself. The structure of the family of derived images permits a derivation of the sampling density required to sample the image at multiple scales of resolution. The natural scale along the resolution axis (leading to an informationally uniform sampling density) is logarithmic, thus the structure is apt for the description of size invariances."}
{"_id": "56bd199574471fb7f7f1665c558047c2100208b9", "title": "Gaussian processes for multi-sensor environmental monitoring", "text": "Efficiently monitoring environmental conditions across large indoor spaces (such as warehouses, factories or data centers) is an important problem with many applications. Deployment of a sensor network across the space can provide very precise readings at discrete locations. However, construction of a continuous model from this discrete sensor data is a challenge. The challenge is made harder by economic and logistical constraints that may limit the number of sensor motes in the network. The required model, therefore, must be able to interpolate sparse data and give accurate predictions at unsensed locations, as well as provide some notion of the uncertainty on those predictions. We propose a Gaussian process based model to answer both of these issues. We use Gaussian processes to model temperature and humidity distributions across an indoor space as functions of a 3-dimensional point. We study the model selection process and show that good results can be obtained, even with sparse sensor data. Deployment of a sensor network across an indoor lab provides real-world data that we use to construct an environmental model of the lab space. We seek to refine the model obtained from the initial deployment by using the uncertainty estimates provided by the Gaussian process methodology to modify sensor distribution such that each sensor is most advantageously placed. We explore multiple sensor placement techniques and experimentally validate a near-optimal criterion."}
{"_id": "7281dc74afc7dd2da35cf054b68afdeaec76cc9f", "title": "A Parking Occupancy Detection Algorithm Based on AMR Sensor", "text": "Recently, with the explosive increase of automobiles in cities, parking problems are serious and even worsen in many cities. This paper proposes a new algorithm for parking occupancy detection based on the use of anisotropic magnetoresistive sensors. Parking occupancy detection is abstracted as binary pattern recognition problem. According to the status of the parking space, the recognition result contains two categories: vacant and occupied. The feature extraction method of the parking magnetic signal is proposed. In addition, the classification criteria are derived based on the distance discriminate analysis method. Eighty-two sensor nodes are deployed on the roadside parking spaces. By running the system for six months, we observed that the accuracy rate of the proposed parking occupancy detection algorithm is better than 98%."}
{"_id": "ad0323b075146e7d7a3ef3aacb9892201da69492", "title": "Evidence-based practice: functional rhinoplasty.", "text": "The cause of nasal obstruction can often be attributed to pathologic conditions of the nasal valve. The key physical examination finding in nasal valve compromise is inspiratory collapse of the nasal sidewall. Validated subjective and objective measures evaluating nasal obstruction exist, although with weak correlation. Functional rhinoplasty encompasses the surgical techniques used to address obstruction occurring in this area. These techniques aim to increase the size of the nasal valve opening and/or strengthen the lateral nasal wall and nasal ala, preventing dynamic collapse. Much of the supporting evidence for functional rhinoplasty consists of observational studies that are universally favorable."}
{"_id": "dd30e491e6481292eb0d1ccd329fa57c7e9ac47f", "title": "Agile software development in theory and practice", "text": "Kalermo, Jonna Maria Rissanen, Jenni Karoliina Agile software development in theory and practice / Jonna Kalermo and Jenni Rissanen Jyv\u00e4skyl\u00e4, University of Jyv\u00e4skyl\u00e4, 2002. 188 pp. Master's thesis In turn of the millennium, new software development ideas were presented in the form of Agile Manifesto as a counteraction to the traditional, rigorous development methods and process models. Agile Manifesto consists of four values and twelve principles of which the authors of this thesis formed a conceptual framework. It assists in analysing different aspects of agile software development one by one but also as a whole. Agile software development aims at fast, light and efficient development that supports customer's business without being chaotic or following any rigorous method. The aim of this study was to analyse Agile Manifesto and its applicability. This was done by conducting a literature review and an empirical case study. Agile Manifesto gives an ideological background for agile methods, such as Extreme Programming (XP). Such methods are most feasible for small organisations developing innovative software products. We analysed Agile Manifesto and found out that literature supported the ideas of it, although no revolutionary ideas were discovered. We used an empirical case study as an example of an agile in-house software development method in corporate venturing context. The case study supported principles and values of Agile Manifesto and confirmed the assumption that agility is heavily based on tacit knowledge, skilled and motivated employees and frequent communication. This sets several requirements for management and employees working in an agile mode: they have to be communicative, social, responsible and skilled."}
{"_id": "7f14e2bfdfd2d55e3836d116724484c56d84f859", "title": "Randomized, Placebo-Controlled Trial of Methyl B12 for Children with Autism.", "text": "OBJECTIVE\nChildren with autism spectrum disorder (ASD) have been reported to have reduced ability to methylate DNA and elevated markers of oxidative stress. We sought to determine if methyl B12, a key metabolic cofactor for cellular methylation reactions and antioxidant defense, could improve symptoms of ASD.\n\n\nMETHODS\nA total of 57 children with ASD were randomly assigned to 8 weeks of treatment with methyl B12 (75\u2009\u03bcg/kg) or saline placebo every 3 days in a subcutaneous injection. The primary outcome measure was overall improvement in symptoms of ASD as measured by the Clinical Global Impressions-Improvement (CGI-I) score. Secondary outcome measures included changes in the Aberrant Behavior Checklist (ABC) and the Social Responsiveness Scale (SRS). Laboratory measures of methionine methylation and antioxidant glutathione metabolism were assessed at baseline and 8 weeks.\n\n\nRESULTS\nA total of 50 children (mean age 5.3 years, 79% male) completed the study. The primary outcome measure - the clinician rated CGI-I score - was statistically significantly better (lower) in the methyl B12 group (2.4) than in the placebo group (3.1) (0.7 greater improvement in the methyl B12 group, 95% CI 1.2-0.2, p\u2009=\u20090.005). Clinical improvement among children treated with methyl B12 was positively correlated with increases in plasma methionine (p\u2009=\u20090.05), decreases in S-adenosyl-l-homocysteine (SAH) (p\u2009=\u20090.007) and improvements in the ratio of S-adenosylmethionine (SAM) to SAH (p\u2009=\u20090.007), indicating an improvement in cellular methylation capacity. No improvements were observed in the parent-rated ABC or SRS.\n\n\nCONCLUSIONS\nMethyl B12 treatment improved clinician-rated symptoms of ASD that were correlated with improvements in measures of methionine metabolism and cellular methylation capacity. Clinical Trial Registry: Efficacy Study of Subcutaneous Methyl B12 in Children with Autism: NCT01039792 ( clinicaltrials.gov1 )."}
{"_id": "6afd49ff7787288998998717b5e73174c7a507c5", "title": "Insider Threat Security Reference Architecture", "text": "The Insider Threat Security Reference Architecture, ITSRA, provides an enterprise wide solution to insider threat. The architecture consists of four security layers - Business, Information, Data, and Application. Organizations should deploy and enforce controls at each layer in order to address insider attacks. Each layer does not function in isolation or independently of other layers. Rather, the correlation of indicators and application of controls across all four layers forms the crux of this approach. Based on empirical data consisting of over 600 cases of insider crimes, insider attacks proved successful in inflicting damage when an organization failed to implement adequate controls in any of three security principles - authorized access, acceptable use, and continuous monitoring. The ITSRA draws from existing best practices and standards as well as from analysis of these cases to provide actionable guidance for organizations to improve their posture against the insider threat."}
{"_id": "43a5915faa4c680bdca9c222bd851f516f85a00b", "title": "Exploring multicriteria decision strategies in GIS with linguistic quantifiers: A case study of residential quality evaluation", "text": "Commonly used GIS combination operators such as Boolean conjunction/disjunction and weighted linear combination can be generalized to the ordered weighted averaging (OWA) family of operators. This multicriteria evaluation method allows decision-makers to define a decision strategy on a continuum between pessimistic and optimistic strategies. Recently, OWA has been introduced to GIS-based decision support systems. We propose to extend a previous implementation of OWA with linguistic quantifiers to simplify the definition of decision strategies and to facilitate an exploratory analysis of multiple criteria. The linguistic quantifier-guided OWA procedure is illustrated using a dataset for evaluating residential quality of neighborhoods in London, Ontario."}
{"_id": "9e2cd1ad32759a4307f789b757a115b71907703f", "title": "The efficacy and safety of postoperative adjuvant transarterial embolization and radiotherapy in hepatocellular carcinoma patients with portal vein tumor thrombus", "text": "OBJECTIVE\nThis study aims to find out the safety and efficiency of postoperative adjuvant transarterial chemoembolization (TACE) and radiotherapy (RT) in hepatocellular carcinoma (HCC) patients with portal vein tumor thrombus (PVTT).\n\n\nMETHODS\nFrom 2009 to 2010, a total of 92 HCC patients with PVTT were enrolled in this retrospective study. Patients were divided into three groups according to their adjuvant therapies (conservative group, n=51; TACE group, n=31; RT group, n=10).\n\n\nRESULTS\nIn our analysis, median survival in patients with postoperative adjuvant TACE (21.91\u00b13.60 months) or RT (14.53\u00b11.61 months) was significantly longer than patients with hepatectomy alone (8.99\u00b11.03 months). But the difference between adjuvant TACE and RT was of no significance (P=0.716). Also a similar result could be observed in median disease-free survival: conservative group (6.51\u00b11.44 months), TACE group (13.98\u00b13.38 months), and RT group (14.03\u00b12.40 months). Treatment strategies (hazard ratio [HR] =0.411, P<0.001) and PVTT type (HR =4.636, P<0.001) were the independent prognostic factors for overall survival. Similarly, the risk factors were the same when multivariate analysis was conducted in disease-free survival (treatment strategies, HR =0.423, P<0.001; PVTT type, HR =4.351, P<0.001) and recurrence (treatment strategies, HR =0.459, P=0.030; PVTT type, HR =2.908, P=0.047). Patients with PVTT type I had longer overall survival than patients with PVTT type II (median survival: 18.43\u00b12.88 months vs 11.59\u00b11.45 months, P=0.035).\n\n\nCONCLUSION\nPostoperative adjuvant TACE and RT may be a choice for HCC patients with PVTT."}
{"_id": "d234fcdb34d7ca2aac238af3dc1a3ff565c9e637", "title": "Minimum Inhibitory Concentration ( MIC ) Of various synthetic and natural antimicrobial agents using E coli screened from VIT sewage treatment plant *", "text": "The Minimum Inhibitory Concentration (MIC) Test is an important evaluating tool in the field of microbiology. It gives us an idea of the microbial activity assessment in any source sample containing microbes. Validating the effectiveness of disinfection and decontamination is a vital and often challenging task. In clinical and laboratory environments, presence of sterile conditions is a compulsion and various disinfectants are used to sterilize the surroundings from bacteria, fungi and other microorganisms. The broth dilution and agar diffusion methods of MIC Test tell us about the efficiency of the disinfectant under use. By subjecting a given disinfectant sample to the MIC Test, we can find out its efficiency in lysing the microorganisms present in the inoculums. Thus two or more disinfectants can be compared also at the same time to test their lysing capacity under normal conditions. The MIC Test also helps to choose the more economic disinfectant from a given batch of disinfectants. The results obtained from the MIC Test can also be used to prepare a new anti microbial agent which has greater efficiency in lysing the microorganisms and is economic financially. Minimum Inhibitory Concentration (MIC) of various synthetic like chloramphenical, chloroxylenol and 10% hydrochloric acid with butyl oleylamine in aqueous solution), and natural antimicrobial agents like Sample C (Grass extract) and Sample D (Grass+Neem extract) using E coli screened from VIT sewage treatment plant will be discussed."}
{"_id": "94f8a728a072b9b48b043a87b16619a052340421", "title": "Evaluation of the Wireless M-Bus standard for future smart water grids", "text": "The most recent Wireless Sensor Networks technologies can provide viable solutions to perform automatic monitoring of the water grid, and smart metering of water consumptions. However, sensor nodes located along water pipes cannot access power grid facilities, to get the necessary energy imposed by their working conditions. In this sense, it is of basic importance to design the network architecture in such a way as to require the minimum possible power. This paper investigates the suitability of the Wireless Metering Bus protocol for possible adoption in future smart water grids, by evaluating its transmission performance, through simulations and experimental tests executed by means of prototype sensor nodes."}
{"_id": "1242f4b9d5bcab62f5177103a2897691679a09c5", "title": "Evaluation of a dynamic role-playing platform for simulations based on Octalysis gamification framework", "text": "The use of serious games (SG) in education and their pedagogical benefit is being widely recognized. However, effective integration of SGs in education depends on addressing two big challenges: successful incorporation of motivation and engagement that can lead to learning; and high cost and specialised skills associated with customised development to achieve the required pedagogical results. This paper presents a SG platform that offers tools to educators to dynamically create three dimensional (3D) scenes and verbal and non-verbal interaction with fully embodied conversational agents (ECA) that can be used to simulate numerous educational scenarios. We then evaluate the effectiveness of the platform in terms of supporting the creation of motivating and engaging educational simulations based on the Octalysis gamification framework. The latter includes a number of game design elements that can be used as appropriate in the optimization of the experience in each phase of the player\u2019s journey. We conclude with directions of the further extension to our platform to address the aforementioned gamification framework, as well as directions for further development of the Octalysis framework."}
{"_id": "a9ffc6f690c0159ac3266b4386ec9cbdfa6f2f9a", "title": "Ad Exchanges: Research Issues", "text": "An emerging way to sell and buy display ads on the Internet is via ad exchanges. RightMedia [1], AdECN [2] and DoubleClick Ad Exchange [3] are examples of such real-time two-sided markets. We describe an abstraction of ad exchanges. Based on that abstraction, we present several research directions and discuss some insights."}
{"_id": "1048bc450f2698c7d41dbb3679c837323de400fb", "title": "Joint Entity and Event Coreference Resolution across Documents", "text": "We introduce a novel coreference resolution system that models entities and events jointly. Our iterative method cautiously constructs clusters of entity and event mentions using linear regression to model cluster merge operations. As clusters are built, information flows between entity and event clusters through features that model semantic role dependencies. Our system handles nominal and verbal events as well as entities, and our joint formulation allows information from event coreference to help entity coreference, and vice versa. In a cross-document domain with comparable documents, joint coreference resolution performs significantly better (over 3 CoNLL F1 points) than two strong baselines that resolve entities and events separately."}
{"_id": "b73b5924ef7c7867adaec1b3264a6cb07b443701", "title": "Understanding Difficulties and Resulting Confusion in Learning: An Integrative Review", "text": "Difficulties are often an unavoidable but important part of the learning process. This seems particularly so for complex conceptual learning. Challenges in the learning process are however, particularly difficult to detect and respond to in educational environments where growing class sizes and the increased use of digital technologies mean that teachers are unable to provide nuanced and personalized feedback and support to help students overcome their difficulties. Individual differences, the specifics of the learning activity, and the difficulty of giving individual feedback in large classes and digital environments all add to the challenge of responding to student difficulties and confusion. In this integrative review, we aim to explore difficulties and resulting emotional responses in learning. We will review the primary principles of cognitive disequilibrium and contrast these principles with work on desirable difficulties, productive failure, impasse driven learning, and pure discovery-based learning. We conclude with a theoretical model of the zones of optimal and sub-optimal confusion as a way of conceptualizing the parameters of productive and non-productive difficulties experienced by students while they learn."}
{"_id": "f68edad569a8fe5a842c8ba62c9a9689aa7041ac", "title": "Hand Gesture Recognition for Sign Language Recognition: A Review", "text": "For deaf and dumb people, Sign language is the only way of communication. With the help of sign language, these physical impaired people express their emotions and thoughts to other person. Since for the common person, it is difficult to understand these language therefore often these physically challenged has to keep the translator along with them to communicate with the world. Hence sign language recognition has become empirical task. Since sign language consist of various movement and gesture of hand therefore the accuracy of sign language depends on the accurate recognition of hand gesture. This paper present various method of hand gesture and sign language recognition proposed in the past by various researchers. Keywords\u2014SL, DCT, Self organizing Map, Camshift"}
{"_id": "ab4626fe5df4a2b1651b617e0c28c5a334ff02cf", "title": "Using Ontology and Data Provenance to Improve Software Processes", "text": "Provenance refers to the origin of a particular object. In computational terms, provenance is a historical record of the derivation of data that can help to understand the current record. In this context, this work presents a proposal for software processes improvement using a provenance data model and an ontology. This improvement can be obtained by process data execution analysis with an approach called PROV-Process, which uses a layer for storing process provenance and an ontology based on PROV-O."}
{"_id": "0a625eb578732b15c471785ff47b18ea1510db7e", "title": "A Bayesian Classification Model for Fraud Detection over ATM Platforms Milgo , Carolyne", "text": "The banking system relies greatly on the use of Automated Teller Machines, debit and credit cards as a vital element of its payment processing systems. The efficient functioning of payment processing systems allows transactions to be completed safely and on time thereby contributing to operational performance. Customer accounts are equally exposed to risk with respect to fraud. The major security risk identified is identity and access management of customer funds. This research sought to use classification techniques to implement a novel fraud detection solution to establish legitimacy of customer transactions. It also sought to identify the main security issues encountered with use of ATM cards and establish internal control mechanisms which can be implemented to deter possibility of card fraud. The data was obtained from a local bank and was consequently preprocessed and fed into WEKA which was used to develop the training model which was to be used for classification of incoming transactions."}
{"_id": "25931b74f11f0ffdd18c3f81d3899c0efa710223", "title": "Should Investors Avoid All Actively Managed Mutual Funds ? A Study in Bayesian Performance Evaluation", "text": "This paper analyzes mutual-fund performance from an investor\u2019s perspective. We study the portfolio-choice problem for a mean-variance investor choosing among a risk-free asset, index funds, and actively managed mutual funds. To solve this problem, we employ a Bayesian method of performance evaluation; a key innovation in our approach is the development of a f lexible set of prior beliefs about managerial skill. We then apply our methodology to a sample of 1,437 mutual funds. We find that some extremely skeptical prior beliefs nevertheless lead to economically significant allocations to active managers. ACTIVELY MANAGED EQUITY MUTUAL FUNDS have trillions of dollars in assets, collect tens of billions in management fees, and are the subject of enormous attention from investors, the press, and researchers. For years, many experts have been saying that investors would be better off in low-cost passively managed index funds. Notwithstanding the recent growth in index funds, active managers still control the vast majority of mutual-fund assets. Are any of these active managers worth their added expenses? Should investors avoid all actively managed mutual funds? Since Jensen ~1968!, most studies have found that the universe of mutual funds does not outperform its benchmarks after expenses.1 This evidence indicates that the average active mutual fund should be avoided. On the other hand, recent studies have found that future abnormal returns ~\u201calphas\u201d! can be forecast using past returns or alphas,2 past fund * Baks and Metrick are from the Department of Finance, The Wharton School, University of Pennsylvania. Wachter is from the Department of Finance, The Stern School, New York University. We thank Nick Barberis, Gary Chamberlain, Ken French, Will Goetzmann, Karsten Hansen, Chris Jones, Tom Knox, Tony Lancaster, L\u0306ubos\u0306 P\u00e1stor, Andr\u00e9 Perold, Steve Ross, Andrei Shleifer, Rob Stambaugh, Ren\u00e9 Stulz, Sheridan Titman, an anonymous referee, and seminar participants at Columbia, Wharton, the NBER, the 1999 NBER Summer Institute, and the 2000 AFA meetings for helpful comments. Wachter thanks Lehman Brothers for financial support. 1 Recently, Carhart ~1995!, Malkiel ~1995!, and Daniel et al. ~1997! all find small or zero average abnormal returns by using modern performance-evaluation methods on samples that are relatively free of survivorship bias. 2 Carlson ~1970!, Lehman and Modest ~1987!, Grinblatt and Titman ~1988, 1992!, Hendricks, Patel, and Zechhauser ~1993!, Goetzmann and Ibbotson ~1994!, Brown and Goetzmann ~1995!, Elton, Gruber, and Blake ~1996!, and Carhart ~1997!. THE JOURNAL OF FINANCE \u2022 VOL. LVI, NO. 1 \u2022 FEBRUARY 2001"}
{"_id": "fee58c304a5b1c9dbc12f053b725c20fd80e5df7", "title": "The world of emotions is not two-dimensional.", "text": "For more than half a century, emotion researchers have attempted to establish the dimensional space that most economically accounts for similarities and differences in emotional experience. Today, many researchers focus exclusively on two-dimensional models involving valence and arousal. Adopting a theoretically based approach, we show for three languages that four dimensions are needed to satisfactorily represent similarities and differences in the meaning of emotion words. In order of importance, these dimensions are evaluation-pleasantness, potency-control, activation-arousal, and unpredictability. They were identified on the basis of the applicability of 144 features representing the six components of emotions: (a) appraisals of events, (b) psychophysiological changes, (c) motor expressions, (d) action tendencies, (e) subjective experiences, and (f) emotion regulation."}
{"_id": "46096df86a3ada83650eaae00dc0f8cbd380acd0", "title": "On the Origin of CRISPR-Cas Technology: From Prokaryotes to Mammals.", "text": "Clustered regularly-interspaced short palindromic repeat (CRISPR) sequences cooperate with CRISPR-associated (Cas) proteins to form the basis of CRISPR-Cas adaptive immune systems in prokaryotes. For more than 20 years, these systems were of interest only to specialists, mainly molecular microbiologists, who tried to understand the properties of this unique defense mechanism. In 2012, the potential of CRISPR-Cas systems was uncovered and these were presented as genome-editing tools with an outstanding capacity to trigger targeted genetic modifications that can be applied to virtually any organism. Shortly thereafter, in early 2013, these tools were shown to efficiently drive specific modification of mammalian genomes. This review attempts to summarize, in a comprehensive manner, the key events and milestones that brought CRISPR-Cas technology from prokaryotes to mammals."}
{"_id": "b25e0fe6739214ffd3d5f1ca5e651b57bd81aa50", "title": "Deep Analysis of Binary Code to Recover Program Structure", "text": "Title of dissertation: DEEP ANALYSIS OF BINARY CODE TO RECOVER PROGRAM STRUCTURE Khaled ElWazeer, Doctor of Philosophy, 2014 Dissertation directed by: Professor Rajeev Barua Department of Electrical and Computer Engineering Reverse engineering binary executable code is gaining more i nterest in the research community. Agencies as diverse as anti-virus companies, se curity consultants, code forensics consultants, law-enforcement agencies and nati onal security agencies routinely try to understand binary code. Engineers also often need to d ebug, optimize or instrument binary code during the software development process. In this dissertation, we present novel techniques to extend he capabilities of existing binary analysis and rewriting tools to be more scalab le, handling a larger set of stripped binaries with better and more understandable outp uts as well as ensuring correct recovered intermediate representation (IR) from binar ies such that any modified or rewritten binaries compiled from this representation work correctly. In the first part of the dissertation, we present techniques t o recover accurate function boundaries from stripped executables. Our techniques as opposed to current techniques ensure complete live executable code coverage, high quality recovered code, and functional behavior for most application binaries. We use s tatic and dynamic based techniques to remove as much spurious code as possible in a safe ma nner that does not hurt code coverage or IR correctness. Next, we present static tec hniques to recover correct prototypes for the recovered functions. The recovered prot otypes include the complete set of all arguments and returns. Our techniques ensure corr ct behavior of rewritten binaries for both internal and external functions. Finally, we present scalable and precise techniques to reco ver l cal variables for every function obtained as well as global and heap variables . Different techniques are represented for floating point stack allocated variables an d memory allocated variables. Data type recovery techniques are presented to declare mean ingful data types for the detected variables. Our data type recovery techniques can rec over integer, pointer, structural and recursive data types. We discuss the correctness of the r ecovered representation. The evaluation of all the methods proposed is conducted on Se condWrite, a binary rewriting framework developed by our research group. An imp ortant metric in the evaluation is to be able to recompile the IR with the recovered info rmation and run it producing the same answer that is produced when running the original ex ecutable. Another metric is the analysis time. Some other metrics are proposed to meas ure the quality of the IR with respect to the IR with source code information availabl e. DEEP ANALYSIS OF BINARY CODE TO RECOVER PROGRAM STRUCTURE"}
{"_id": "cd6a0adcceaa9d743db83305cec9b1bff37b443d", "title": "Ultrasonography for clinical decision-making and intervention in airway management: from the mouth to the lungs and pleurae", "text": "OBJECTIVES\nTo create a state-of-the-art overview of the new and expanding role of ultrasonography in clinical decision-making, intervention and management of the upper and lower airways, that is clinically relevant, up-to-date and practically useful for clinicians.\n\n\nMETHODS\nThis is a narrative review combined with a structured Medline literature search.\n\n\nRESULTS\nUltrasonography can be utilised to predict airway difficulty during induction of anaesthesia, evaluate if the stomach is empty or possesses gastric content that poses an aspiration risk, localise the essential cricothyroid membrane prior to difficult airway management, perform nerve blocks for awake intubation, confirm tracheal or oesophageal intubation and facilitate localisation of tracheal rings for tracheostomy. Ultrasonography is an excellent diagnostic tool in intraoperative and emergency diagnosis of pneumothorax. It also enables diagnosis and treatment of interstitial syndrome, lung consolidation, atelectasis, pleural effusion and differentiates causes of acute breathlessness during pregnancy. Patient safety can be enhanced by performing procedures under ultrasound guidance, e.g. thoracocentesis, vascular line access and help guide timing of removal of chest tubes by quantification of residual pneumothorax size.\n\n\nCONCLUSIONS\nUltrasonography used in conjunction with hands-on management of the upper and lower airways has multiple advantages. There is a rapidly growing body of evidence showing its benefits.\n\n\nTEACHING POINTS\n\u2022 Ultrasonography is becoming essential in management of the upper and lower airways. \u2022 The tracheal structures can be identified by ultrasonography, even when unidentifiable by palpation. \u2022 Ultrasonography is the primary diagnostic approach in suspicion of intraoperative pneumothorax. \u2022 Point-of-care ultrasonography of the airways has a steep learning curve. \u2022 Lung ultrasonography allows treatment of interstitial syndrome, consolidation, atelectasis and effusion."}
{"_id": "24171780855cb31e5f7ea74908485e2a0142b620", "title": "Challenges of design and practical application of LTCC chip antennas", "text": "In this paper key challenges related to design, manufacturing and practical application of small LTCC antennas are described and discussed. A summary of the state of the art in LTCC antenna technology is presented and then the focus is put on limitations that have to be taken into account in the course of antenna design and manufacturing. Finally, some aspects related to practical application of LTCC antennas are discussed."}
{"_id": "ba2ceb8f6c9bf49da1b366e4757d202724c3bff4", "title": "The radiation properties of electrically small folded spherical helix antennas", "text": "The radiation properties of several electrically small, folded spherical helix antennas are presented. The primary variables considered in the design of these antennas are the number of helical turns and the number of helical arms. The principle design objectives are to achieve self resonance, a low quality factor (Q), and a practical radiation resistance for small values of ka. Designs are presented for ka less than 0.5, where the antennas are self resonant, exhibiting an efficiency in excess of 95%, a Q within 1.5 times the fundamental limit, and a radiation resistance near 50 /spl Omega/. The relationship between the number of helical turns, the number of helical arms, and achieving self resonance at low frequencies is discussed."}
{"_id": "cfd913e1edd1a15b7456ef6d222c1319e056eddc", "title": "Small Spherical Antennas Using Arrays of Electromagnetically Coupled Planar Elements", "text": "This letter presents the design, fabrication, and experimental characterization of small spherical antennas fabricated using arrays of non-interconnected planar conductor elements. The antennas are based upon spherical resonator structures with radiation Q-factors approaching $1.5\\times$ the fundamental lower limit. The antennas are formed by coupling these resonators to an impedance-matched coplanar strip transmission line. Direct electrical connection between the feed and the antenna are made only to conductor elements coplanar with the transmission line, simplifying the fabrication process. The incident energy excites a collective resonant mode of the entire sphere (an electric dipole resonance), thereby inducing currents in each of the rings of the structure. The presence of the conductor elements outside of the feed plane is critical towards achieving the excellent bandwidth behavior observed here. The fabricated antennas have a normalized size $ka=0.54$ (where $k$ is the wavenumber and $a$ is the radius of the sphere) and exhibit high radiative efficiencies ($>$ 90%) and bandwidth performance near the fundamental limit for their size."}
{"_id": "ae5d79dc7abd6b649a7d0bb9e108f79e2696e128", "title": "The Spherical Coil as an Inductor, Shield, or Antenna", "text": "The spherical coil is an idealized form of inductor having, on a spherical surface, a single-layer winding of constant axial pitch. Its magnetic field inside is uniform and outside is that of a small coil or magnetic dipole. Its properties exemplify exactly some of the rules that are approximately applicable to practical inductors. Simple formulas are given for self-inductance, mutual inductance, coupling coefficient, effect of iron core, and radiation power factor in free space or sea water. This coil is the basis for evaluating the shielding effect of a closed conductive (nonmagnetic) metal shell. A special winding is described which enables simple and exact computation of self-resonance (the length of wire being just 1/2 wavelength in some cases)."}
{"_id": "f23ecb25c3250fc6e2d3401dc2f54ffd6135ae2e", "title": "Substrate-Integrated Millimeter-Wave and Terahertz Antenna Technology", "text": "Significant advances in the development of millimeter-wave and terahertz (30-10 000 GHz) technologies have been made to cope with the increasing interest in this still not fully explored electromagnetic spectrum. The nature of electromagnetic waves over this frequency range is well suited for the development of high-resolution imaging applications, molecular-sensitive spectroscopic devices, and ultrabroadband wireless communications. In this paper, millimeter-wave and terahertz antenna technologies are overviewed including the conventional and nonconventional planar/nonplanar antenna structures based on different platforms. As a promising technological platform, substrate-integrated circuits (SICs) attract more and more attention. Various substrate-integrated waveguide (SIW) schemes and other synthesized guide techniques have been widely employed in the design of antennas and arrays. Different types of substrate-integrated antennas and beamforming networks are discussed with respect to theoretical and experimental results in connection with electrical and mechanical performances."}
{"_id": "5964e7db2d1e0ad4bcf239fe71142b81646c75d1", "title": "An overlap invariant entropy measure of 3D medical image alignment", "text": "This paper is concerned with the development of entropy-based registration criteria for automated 3D multi-modality medical image alignment. In this application where misalignment can be large with respect to the imaged field of view, invariance to overlap statistics is an important consideration. Current entropy measures are reviewed and a normalised measure is proposed which is simply the ratio of the sum of the marginal entropies and the joint entropy. The effect of changing overlap on current entropy measures and this normalised measure are compared using a simple image model and experiments on clinical image data. Results indicate that the normalised entropy measure provides significantly improved behaviour over a range of imaged fields of view. ( 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved."}
{"_id": "baf55ddda9949e15c7c6880affffe93b75acad45", "title": "Node Representation Learning for Multiple Networks: The Case of Graph Alignment", "text": "Recent advances in representation learning produce node embeddings that may be used successfully in many downstream tasks (e.g., link prediction), but do not generally extend beyond a single network. Motivated by the prevalence of multi-network problems, such as graph alignment, similarity, and transfer learning, we introduce an elegant and principled node embedding formulation, Cross-network Matrix Factorization or xNetMF, to effectively and efficiently learn embeddings from similarities based on the nodes\u2019 structural and attribute identities (if available), which are comparable across networks. Leveraging the Nystr\u00f6m method for low-rank approximation, we achieve significant reduction in space and time compared to the na\u00efve approach of directly factorizing the node similarity matrix, with xNetMF running up to 30\u00d7 faster than existing representation learning methods. We study its effectiveness on the task of network alignment, and introduce REGAL (REpresentationbased Graph ALignment), the first method to use node representation learning for this task. Extensive experiments on a variety of challenging real-world networks with and without attributes show that REGAL outperforms existing network alignment methods by 20 to 30% in accuracy on average, uses attributes up to 40\u00d7 faster, and scales to networks with millions of nodes each."}
{"_id": "5af4ca674872b121fdbbdd46c38db4989dc5fb9d", "title": "Canine computer interaction: towards designing a touchscreen interface for working dogs", "text": "Touchscreens can provide a way for service dogs to relay emergency information about their handlers from a home or office environment. In this paper, we build on work exploring the ability of canines to interact with touchscreen interfaces. We observe new requirements for training and explain best practices found in training techniques. Learning from previous work, we also begin to test new dog interaction techniques such as lift-off selection and sliding gestural motions. Our goal is to understand the affordances needed to make touchscreen interfaces usable for canines and help the future design of touchscreen interfaces for assistance dogs in the home."}
{"_id": "ee8cf34756e95f4f58522a663111b04858a8baaa", "title": "Exploring large scale data for multimedia QA: an initial study", "text": "With the explosive growth of multimedia contents on the internet, multimedia search has become more and more important. However, users are often bewildered by the vast quantity of information content returned by the search engines. In this scenario, Multimedia Question-Answering (MMQA) emerges as a way to return precise answers by leveraging advanced media content and linguistic analysis as well as domain knowledge. This paper performs an initial study on exploring large scale data for MMQA. First, we construct a web video dataset and discuss its query strategy, statistics, feature description and groundtruth. We then conduct experiments based on the dataset to answer definition event questions using three schemes. We finally conclude the study with discussion for future work."}
{"_id": "e2d634ee9e9abaca804b69af69a40cf00897b2d0", "title": "Industrial Robotics", "text": ""}
{"_id": "353082e23c6677ce6f6af602bbb010e737a99bd0", "title": "An experimental study of big spatial data systems", "text": "With the rise of location-aware IoT devices, there is an increased desire to process and manage the stationary and moving trajectory data generated by these real-time sensors. There has been a corresponding evolution of distributed database and compute technology to handle the increasing data load. Here we describe challenges in managing this kind of data and survey the technologies that address those challenges. We evaluate two distributed database technologies against a simulated high volume sensor data source and discuss the performance of predominant moving trajectory access patterns."}
{"_id": "62580bd7f32994e4d0f97860f994d0cf237eaf46", "title": "Time-Frequency Distributions Based on Compact Support Kernels: Properties and Performance Evaluation", "text": "This paper presents two new time-frequency distributions (TFDs) based on kernels with compact support (KCS) namely the separable (CB) (SCB) and the polynomial CB (PCB) TFDs. The implementation of this family of TFDs follows the method developed for the Cheriet-Belouchrani (CB) TFD. The mathematical properties of these three TFDs are analyzed and their performance is compared to the best classical quadratic TFDs using several tests on multicomponent signals with linear and nonlinear frequency modulation (FM) components including the noise effects. Instead of relying solely on visual inspection of the time-frequency domain plots, comparisons include the time slices' plots and the evaluation of the Boashash-Sucic's normalized instantaneous resolution performance measure that permits to provide the optimized TFD using a specific methodology. In all presented examples, the KCS-TFDs show a significant interference rejection, with the component energy concentration around their respective instantaneous frequency laws yielding high resolution measure values."}
{"_id": "c5d5ff11d974574fb79907151dba2f23b5f08063", "title": "On the Capacity Comparison Between MIMO-NOMA and MIMO-OMA", "text": "Non-orthogonal multiple access (NOMA) has been shown in the literature to have a better performance than OMA in terms of sum channel capacity; however, the capacity superiority of NOMA over OMA has been only proved for single antenna systems, and the proof for the capacity superiority of multiple-input multiple-output NOMA (MIMO-NOMA) over conventional MIMO-OMA has not been available yet. In this paper, we will provide our proof to demonstrate that the MIMO-NOMA is strictly better than MIMO-OMA in terms of sum channel capacity (except for the case where only one user is being communicated to), i.e., for any rate pair achieved by MIMO-OMA, there is a power split for which MIMO-NOMA can achieve rate pairs that are strictly larger. Based on this result, we prove that the MIMO-NOMA can also achieve a larger sum ergodic capacity than MIMO-OMA. Our analytical results are verified by simulations."}
{"_id": "ee0a18889a0007a88e9411880bb1c2e75a7610ea", "title": "Analysis of the Impact of Negative Sampling on Link Prediction in Knowledge Graphs", "text": "Knowledge graphs are large, useful, but incomplete knowledge repositories. \u008cey encode knowledge through entities and relations which de\u0080ne each other through the connective structure of the graph. \u008cis has inspired methods for the joint embedding of entities and relations in continuous low-dimensional vector spaces, that can be used to induce new edges in the graph, i.e., link prediction in knowledge graphs. Learning these representations relies on contrasting positive instances with negative ones. Knowledge graphs include only positive relation instances, leaving the door open for a variety of methods for selecting negative examples. We present an empirical study on the impact of negative sampling on the learned embeddings, assessed through the task of link prediction. We use state-of-the-art knowledge graph embedding methods \u2013 Rescal , TransE, DistMult and ComplEX \u2013 and evaluate on benchmark datasets \u2013 FB15k and WN18. We compare well known methods for negative sampling and propose two new embedding based sampling methods. We note a marked di\u0082erence in the impact of these sampling methods on the two datasets, with the \u201dtraditional\u201d corrupting positives method leading to best results on WN18, while embedding based methods bene\u0080t FB15k."}
{"_id": "6e653192f5afd42c4f05bcc44afb2c6bd37510ac", "title": "BRIDE - A toolchain for framework-independent development of industrial service robot applications", "text": "Software integration is still a challenging and time consuming task and therefore a major part of the development of industrial and domestic service robot applications. The presented toolchain BRIDE is able to streamline this process by the separation of user roles and the separation of developer concerns of software components to ensure a frame-work independent implementation. The impact of the BRIDE toolchain in the development process is demonstrated in a case study on the SyncMM mobile manipulation control framework."}
{"_id": "1b90942a7661d956115716f33bd23deb4632266e", "title": "The String B-tree: A New Data Structure for String Search in External Memory and Its Applications", "text": "We introduce a new text-indexing data structure, the String B-Tree, that can be seen as a link between some traditional external-memory and string-matching data structures. In a short phrase, it is a combination of B-trees and Patricia tries for internal-node indices that is made more effective by adding extra pointers to speed up search and update operations. Consequently, the String B-Tree overcomes the theoretical limitations of inverted files, B-trees, prefix B-trees, suffix arrays, compacted tries and suffix trees. String B-trees have the same worst-case performance as B-trees but they manage unbounded-length strings and perform much more powerful search operations such as the ones supported by suffix trees. String B-trees are also effective in main memory (RAM model) because they improve the online suffix tree search on a dynamic set of strings. They also can be successfully applied to database indexing and software duplication."}
{"_id": "36246ff904be7044ecd536d072b1388ea59aaf43", "title": "Risk indicators and psychopathology in traumatised children and adolescents with a history of sexual abuse", "text": "Childhood sexual abuse (CSA) is widespread amongst South African (SA) children, yet data on risk factors and psychiatric consequences are limited and mixed. Traumatised children and adolescents referred to our Youth Stress Clinic were interviewed to obtain demographic, sexual abuse, lifetime trauma and psychiatric histories. Data for 94 participants (59 female, 35 male; mean age 14.25 [8.25\u201319] years) exposed to at least one lifetime trauma were analysed. Sexual abuse was reported in 53% of participants (42.56% females, 10.63% males) with 64% of violations committed by perpetrators known to them. Multinomial logistic regression analysis revealed female gender (P\u00a0=\u00a00.002) and single-parent families (P\u00a0=\u00a00.01) to be significant predictors of CSA (62.5%). CSA did not predict exposure to other traumas. Sexually abused children had significantly higher physical and emotional abuse subscale scores and total CTQ scores than non-abused children. Depression (33%, X 2\u00a0=\u00a010.89, P\u00a0=\u00a00.001) and PTSD (63.8%, X 2\u00a0=\u00a04.79, P\u00a0=\u00a00.034) were the most prevalent psychological consequences of trauma and both were significantly associated with CSA. High rates of CSA predicted high rates of PTSD in this traumatised sample. Associations we found appear consistent with international studies of CSA and, should be used to focus future social awareness, prevention and treatment strategies in developing countries."}
{"_id": "7e3b4d3b257eb2b907aafd6d158e7fe468fe9eee", "title": "Estimating software based on use case points", "text": "It is well documented that software product cost estimates are notoriously inaccurate across the software industry. Creating accurate cost estimates for software product development projects early in the product development lifecycle has always been a challenge for the industry. This article describes how a large multi-team software engineering organization (over 450 engineers) estimates project cost accurately and early in the software development lifecycle using Use Case Points, and the process of evaluating metrics to ensure the accuracy of the model.The engineering teams of Agilis Solutions in partnership with FPT Software, provide our customers with accurate estimates for software product projects early in the product lifecycle. The bases for these estimates are initial definitions of Use Cases, given point factors and modified for technical and environmental factors according to the Use Case Point method defined within the Rational Unified Process. After applying the process across hundreds of sizable (60 man-months average) software projects, we have demonstrated metrics that prove an estimating accuracy of less than 9% deviation from actual to estimated cost on 95% of our projects. Our process and this success factor is documented over a period of five years, and across more than 200 projects."}
{"_id": "1f0465bdee312659d13dbb303ca93df6ac259c40", "title": "Fast Change Point Detection on Dynamic Social Networks", "text": "A number of real world problems in many domains (e.g. sociology, biology, political science and communication networks) can be modeled as dynamic networks with nodes representing entities of interest and edges representing interactions among the entities at different points in time. A common representation for such models is the snapshot model where a network is defined at logical time-stamps. An important problem under this model is change point detection. In this work we devise an effective and efficient three-step-approach for detecting change points in dynamic networks under the snapshot model. Our algorithm achieves up to 9X speedup over the state-of-the-art while improving quality on both synthetic and real world networks."}
{"_id": "5d52ab2645380ca0732717a914f337f7a1b1c4a5", "title": "Thin skin elastodynamics", "text": "We present a novel approach for simulating thin hyperelastic skin. Real human skin is only a few millimeters thick. It can stretch and slide over underlying body structures such as muscles, bones, and tendons, revealing rich details of a moving character. Simulating such skin is challenging because it is in close contact with the body and shares its geometry. Despite major advances in simulating elastodynamics of cloth and soft bodies for computer graphics, such methods are difficult to use for simulating thin skin due to the need to deal with non-conforming meshes, collision detection, and contact response. We propose a novel Eulerian representation of skin that avoids all the difficulties of constraining the skin to lie on the body surface by working directly on the surface itself. Skin is modeled as a 2D hyperelastic membrane with arbitrary topology, which makes it easy to cover an entire character or object. Unlike most Eulerian simulations, we do not require a regular grid and can use triangular meshes to model body and skin geometry. The method is easy to implement, and can use low resolution meshes to animate high-resolution details stored in texture-like maps. Skin movement is driven by the animation of body shape prescribed by an artist or by another simulation, and so it can be easily added as a post-processing stage to an existing animation pipeline. We provide several examples simulating human and animal skin, and skin-tight clothes."}
{"_id": "af82f1b9fdee7e6f92fccab2e6f02816965bf937", "title": "Boosting Alzheimer Disease Diagnosis Using PET Images", "text": "Alzheimer's disease (AD) is one of the most frequent type of dementia. Currently there is no cure for AD and early diagnosis is crucial to the development of treatments that can delay the disease progression. Brain imaging can be a biomarker for Alzheimer's disease. This has been shown in several works with MR Images, but in the case of functional imaging such as PET, further investigation is still needed to determine their ability to diagnose AD, especially at the early stage of Mild Cognitive Impairment (MCI). In this paper we study the use of PET images of the ADNI database for the diagnosis of AD and MCI. We adopt a Boosting classification method, a technique based on a mixture of simple classifiers, which performs feature selection concurrently with the segmentation thus is well suited to high dimensional problems. The Boosting classifier achieved an accuracy of 90.97% in the detection of AD and 79.63% in the detection of MCI."}
{"_id": "0d2edc46f81f9a0b0b62937507ad977b46729f64", "title": "The BOBYQA algorithm for bound constrained optimization without derivatives", "text": "BOBYQA is an iterative algorithm for finding a minimum of a function F (x), x\u2208R, subject to bounds a\u2264x\u2264b on the variables, F being specified by a \u201cblack box\u201d that returns the value F (x) for any feasible x. Each iteration employs a quadratic approximation Q to F that satisfies Q(y j ) = F (y j ), j = 1, 2, . . . , m, the interpolation points y j being chosen and adjusted automatically, but m is a prescribed constant, the value m= 2n+1 being typical. These conditions leave much freedom in Q, taken up when the model is updated by the highly successful technique of minimizing the Frobenius norm of the change to the second derivative matrix of Q. Thus no first derivatives of F are required explicitly. Most changes to the variables are an approximate solution to a trust region subproblem, using the current quadratic model, with a lower bound on the trust region radius that is reduced cautiously, in order to keep the interpolation points well separated until late in the calculation, which lessens damage from computer rounding errors. Some other changes to the variables are designed to improve the model without reducing F . These techniques are described. Other topics include the starting procedure that is given an initial vector of variables, the value of m and the initial trust region radius. There is also a new device called RESCUE that tries to restore normality if severe loss of accuracy occurs in the matrix calculations of the updating of the model. Numerical results are reported and discussed for two test problems, the numbers of variables being between 10 and 320. Department of Applied Mathematics and Theoretical Physics, Centre for Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WA, England."}
{"_id": "c23136a48527a20d6bfef019337ba4494077f7c5", "title": "Stigma and its public health implications", "text": ""}
{"_id": "8dd33fcab782ef611b6ff28ce2f2a355ee5a7a5c", "title": "Convolutional Neural Networks for Automatic State-Time Feature Extraction in Reinforcement Learning Applied to Residential Load Control", "text": "Direct load control of a heterogeneous cluster of residential demand flexibility sources is a high-dimensional control problem with partial observability. This paper proposes a novel approach that uses a convolutional neural network (CNN) to extract hidden state-time features to mitigate the curse of partial observability. More specific, a CNN is used as a function approximator to estimate the state-action value function or ${Q}$ -function in the supervised learning step of fitted ${Q}$ -iteration. The approach is evaluated in a qualitative simulation, comprising a cluster of thermostatically controlled loads that only share their air temperature, while their envelope temperature remains hidden. The simulation results show that the presented approach is able to capture the underlying hidden features and able to successfully reduce the electricity cost the cluster."}
{"_id": "1713695284c16e05d88b719c52f073dc9f5a0e06", "title": "Forgetting Is Regulated through Rac Activity in Drosophila", "text": "Initially acquired memory dissipates rapidly if not consolidated. Such memory decay is thought to result either from the inherently labile nature of newly acquired memories or from interference by subsequently attained information. Here we report that a small G protein Rac-dependent forgetting mechanism contributes to both passive memory decay and interference-induced forgetting in Drosophila. Inhibition of Rac activity leads to slower decay of early memory, extending it from a few hours to more than one day, and to blockade of interference-induced forgetting. Conversely, elevated Rac activity in mushroom body neurons accelerates memory decay. This forgetting mechanism does not affect memory acquisition and is independent of Rutabaga adenylyl cyclase-mediated memory formation mechanisms. Endogenous Rac activation is evoked on different time scales during gradual memory loss in passive decay and during acute memory removal in reversal learning. We suggest that Rac's role in actin cytoskeleton remodeling may contribute to memory erasure."}
{"_id": "5a95410f9f8c842fbd3fdea9b2dabbc60a8547ee", "title": "Automatic Labelling of Tabla Signals", "text": "Most of the recent developments in the field of music indexing and music information retrieval are focused on western music. In this paper, we present an automatic music transcription system dedicated to Tabla a North Indian percussion instrument. Our approach is based on three main steps: firstly, the audio signal is segmented in adjacent segments where each segment represents a single stroke. Secondly, rhythmic information such as relative durations are calculated using beat detection techniques. Finally, the transcription (recognition of the strokes) is performed by means of a statistical model based on Hidden Markov Model (HMM). The structure of this model is designed in order to represent the time dependencies between successives strokes and to take into account the specificities of the tabla score notation (transcription symbols may be context dependent). Realtime transcription of Tabla soli (or performances) with an error rate of 6.5% is made possible with this transcriber. The transcription system, along with some additional features such as sound synthesis or phrase correction, are integrated in a user-friendly environment called Tablascope."}
{"_id": "0cb48f543c4bf329a16c0408c4d2c198679a6057", "title": "Active Appearance Models Revisited", "text": "Active Appearance Models (AAMs) and the closely related concepts of Morphable Models and Active Blobs are generative models of a certain visual phenomenon. Although linear in both shape and appearance, overall, AAMs are nonlinear parametric models in terms of the pixel intensities. Fitting an AAM to an image consists of minimising the error between the input image and the closest model instance; i.e. solving a nonlinear optimisation problem. We propose an efficient fitting algorithm for AAMs based on the inverse compositional image alignment algorithm. We show that the effects of appearance variation during fitting can be precomputed (\u201cprojected out\u201d) using this algorithm and how it can be extended to include a global shape normalising warp, typically a 2D similarity transformation. We evaluate our algorithm to determine which of its novel aspects improve AAM fitting performance."}
{"_id": "4a0ee613076c845976c69cedd2260d80ed2e4723", "title": "Virtual try-on of eyeglasses using 3D model of the head", "text": "This work presents a system for virtual try-on of eyeglasses using a 3D model of user's face and head. The 3D head model is reconstructed using only one image of the user. The 3D glasses model are then fitted onto this head model, and the user's head movement is tracked in real-time to rotate the 3D head model with glasses accordingly."}
{"_id": "ab7f98f768b00db3eda7c2403c4e4eef19f00b06", "title": "A mixed reality system for virtual glasses try-on", "text": "In this paper we present an augmented reality system for automatic try-on of 3D virtual eyeglasses. The user can select from various virtual models of eyeglasses for trying-on and the system will automatically fit the selected virtual glasses on the user's face. The user can see his/her face as in a mirror with the 3D virtual glasses fitted on it. We also propose a method for handling the occlusion problem, to display only those parts of the glasses that are not occluded by the face. This system can be used for online shopping, or short listing a large set of available models to a few before physical try-on at a retailer's site."}
{"_id": "10268b7878063d2c79723a7e537593f2860be0c8", "title": "Linear Object Classes and Image Synthesis From a Single Example Image", "text": "The need to generate new views of a 3D object from a single real image arises in several fields, including graphics and\u00b0object recognition. While the traditional approach relies on the use of 3D models, we have recently introduced [11, 6, 5] techniques that are applicable under restricted conditions but simpler. The approach exploits image transformations that are specific to the relevant object class and learnable from example views of other \"prototypical\" objects of the same class. In this paper, we introduce such a new technique by extending the notion of linear class first proposed by Poggio and Vetter [12]. For linear object classes it is shown that linear transformations can be learned exactly from a basis set of 2D prototypical views. We demonstrate the approach on artificial objects and then show preliminary evidence that the technique can effectively \"rotate\" high-resolution face images from a single 2D view."}
{"_id": "b38958628b36c53d02f66124e09dbd0be0bd6672", "title": "Multi-vehicle detection with identity awareness using cascade Adaboost and Adaptive Kalman filter for driver assistant system", "text": "Vision-based vehicle detection is an important issue for advanced driver assistance systems. In this paper, we presented an improved multi-vehicle detection and tracking method using cascade Adaboost and Adaptive Kalman filter(AKF) with target identity awareness. A cascade Adaboost classifier using Haar-like features was built for vehicle detection, followed by a more comprehensive verification process which could refine the vehicle hypothesis in terms of both location and dimension. In vehicle tracking, each vehicle was tracked with independent identity by an Adaptive Kalman filter in collaboration with a data association approach. The AKF adaptively adjusted the measurement and process noise covariance through on-line stochastic modelling to compensate the dynamics changes. The data association correctly assigned different detections with tracks using global nearest neighbour(GNN) algorithm while considering the local validation. During tracking, a temporal context based track management was proposed to decide whether to initiate, maintain or terminate the tracks of different objects, thus suppressing the sparse false alarms and compensating the temporary detection failures. Finally, the proposed method was tested on various challenging real roads, and the experimental results showed that the vehicle detection performance was greatly improved with higher accuracy and robustness."}
{"_id": "e9c69aa24f14f53043bd254f8c6f2f3016466998", "title": "Temporal Pattern Attention for Multivariate Time Series Forecasting", "text": "Forecasting of multivariate time series data, for instance the prediction of electricity consumption, solar power production, and polyphonic piano pieces, has numerous valuable applications. However, complex and non-linear interdependencies between time steps and series complicate this task. To obtain accurate prediction, it is crucial to model long-term dependency in time series data, which can be achieved by recurrent neural networks (RNNs) with an attention mechanism. The typical attention mechanism reviews the information at each previous time step and selects relevant information to help generate the outputs; however, it fails to capture temporal patterns across multiple time steps. In this paper, we propose using a set of filters to extract time-invariant temporal patterns, similar to transforming time series data into its \u201cfrequency domain\u201d. Then we propose a novel attention mechanism to select relevant time series, and use its frequency domain information for multivariate forecasting. We apply the proposed model on several real-world tasks and achieve state-of-the-art performance in all of these with a single exception. Our source code is available at https://github.com/gantheory/TPA-LSTM. * indicates equal contribution. This work was financially supported by the Ministry of Science and Technology of Taiwan. Shun-Yao Shih National Taiwan University E-mail: shunyaoshih@gmail.com Fan-Keng Sun National Taiwan University E-mail: b03901056@ntu.edu.tw Hung-yi Lee National Taiwan University E-mail: hungyilee@ntu.edu.tw ar X iv :1 80 9. 04 20 6v 2 [ cs .L G ] 2 7 N ov 2 01 8 2 Shun-Yao Shih* et al. Fig. 1 Historical prices of crude oil, gasoline, and lumber. Units are omitted and scales are normalized for simplicity."}
{"_id": "753c0ecd5b4951b94fcba2fbc30ede5499ae00f5", "title": "Seven element wideband planar log-periodic antenna for TVWS base station", "text": "This paper presents design and simulation of a wide band antenna for base station in the TV White Space (TVWS) spectrum band operating between 470 MHz-700 MHz. Concept of printed Log Periodic Dipole Array (LPDA), which provides wide bandwidth, has been used to realize wide band antenna. The antenna elements are printed on low cost FR4 substrate with \u03b5r = 4.4 and tan \u03b4 = 0.02. These elements are printed alternatively on both side of the substrate. Total volume of the antenna is 303 \u00d7 162.3 \u00d7 1.6 mm3. To reduce the size, scaling factor (\u03c4) for this design is considered as 0.89 and the relative spacing (\u03c3) is chosen as 0.054. The antenna is fed at the base of smallest element. The antenna shows an impedance bandwidth for VSWR \u2264 2 in the frequency range of 470 MHz\u2013700 MHz. The gain of this antenna is between 5.3 dB to 6.5 dB in the entire band of operation. The radiation pattern shows end fire behavior with uniform radiation pattern in both E and H-planes with maximum front to back lobe ratio (F/B) of 30 dB."}
{"_id": "30a8fd0d42c2f4308253b83f83149d3674bbc491", "title": "An ontology mindset for system engineering", "text": "This paper presents an approach to support model-based system engineering (MBSE) which combines a standard metamodel such as UML, SysML, or NAF, with dedicated project ontologies that ease the understanding and creation of models. The approach is formalized using category theory to determine the conditions for use of project ontologies while ensuring the resulting models comply with the selected metamodel. The paper exposes the advantages of the proposed approach over a direct metamodel-based approach or an ontology-based approach using a domain specific language (DSL)."}
{"_id": "67d137d0496d7339e302c18d693c7ec960ddf733", "title": "Moving towards a new vision: implementation of a public health policy intervention", "text": "BACKGROUND\nPublic health systems in Canada have undergone significant policy renewal over the last decade in response to threats to the public's health, such as severe acute respiratory syndrome. There is limited research on how public health policies have been implemented or what has influenced their implementation. This paper explores policy implementation in two exemplar public health programs -chronic disease prevention and sexually-transmitted infection prevention - in Ontario, Canada. It examines public health service providers', managers' and senior managements' perspectives on the process of implementation of the Ontario Public Health Standards 2008 and factors influencing implementation.\n\n\nMETHODS\nPublic health staff from six health units representing rural, remote, large and small urban settings were included. We conducted 21 focus groups and 18 interviews between 2010 (manager and staff focus groups) and 2011 (senior management interviews) involving 133 participants. Research assistants coded transcripts and researchers reviewed these; the research team discussed and resolved discrepancies. To facilitate a breadth of perspectives, several team members helped interpret the findings. An integrated knowledge translation approach was used, reflected by the inclusion of academics as well as decision-makers on the team and as co-authors.\n\n\nRESULTS\nFront line service providers often were unaware of the new policies but managers and senior management incorporated them in operational and program planning. Some participants were involved in policy development or provided feedback prior to their launch. Implementation was influenced by many factors that aligned with Greenhalgh and colleagues' empirically-based Diffusion of Innovations in Service Organizations Framework. Factors and related components that were most clearly linked to the OPHS policy implementation were: attributes of the innovation itself; adoption by individuals; diffusion and dissemination; the outer context - interorganizational networks and collaboration; the inner setting - implementation processes and routinization; and, linkage at the design and implementation stage.\n\n\nCONCLUSIONS\nMultiple factors influenced public health policy implementation. Results provide empirical support for components of Greenhalgh et al's framework and suggest two additional components - the role of external organizational collaborations and partnerships as well as planning processes in influencing implementation. These are important to consider by government and public health organizations when promoting new or revised public health policies as they evolve over time. A successful policy implementation process in Ontario has helped to move public health towards the new vision."}
{"_id": "2a410ac48e28e899c80ef5eb23944d701792ce98", "title": "ICDAR 2013 Chinese Handwriting Recognition Competition", "text": "This paper describes the Chinese handwriting recognition competition held at the 12th International Conference on Document Analysis and Recognition (ICDAR 2013). This third competition in the series again used the CASIA-HWDB/OLHWDB databases as the training set, and all the submitted systems were evaluated on closed datasets to report character-level correct rates. This year, 10 groups submitted 27 systems for five tasks: classification on extracted features, online/offline isolated character recognition, online/offline handwritten text recognition. The best results (correct rates) are 93.89% for classification on extracted features, 94.77% for offline character recognition, 97.39% for online character recognition, 88.76% for offline text recognition, and 95.03% for online text recognition, respectively. In addition to the test results, we also provide short descriptions of the recognition methods and brief discussions on the results."}
{"_id": "17a47a549e6c6606893e016d36e47d8406c42ba6", "title": "Hunting for Spammers: Detecting Evolved Spammers on Twitter", "text": "Once an email problem, spam has nowadays branched into new territories with disruptive effects. In particular, spam has established itself over the recent years as a ubiquitous, annoying, and sometimes threatening aspect of online social networks. Due to its prevalent existence, many works have tackled spam on Twitter from different angles. Spam is, however, a moving target. The new generation of spammers on Twitter has evolved into online creatures that are not easily recognizable by old detection systems. With the strong tangled spamming community, automatic tweeting scripts, and the ability to massively create Twitter accounts with a negligible cost, spam on Twitter is becoming smarter, fuzzier and harder to detect. Our own analysis of spam content on Arabic trending hashtags in Saudi Arabia results in an estimate of about three quarters of the total generated content. This alarming rate makes the development of adaptive spam detection techniques a very real and pressing need. In this paper, we analyze the spam content of trending hashtags on Saudi Twitter, and assess the performance of previous spam detection systems on our recently gathered dataset. Due to the escalating manipulation that characterizes newer spamming accounts, simple manual labeling currently leads to inaccurate results. In order to get reliable ground-truth data, we propose an updated manual classification algorithm that avoids the deficiencies of older manual approaches. We also adapt the previously proposed features to respond to spammers evading techniques, and use these features to build a new data-driven detection system."}
{"_id": "018fde4b542cd94c160bfc479cca091401094886", "title": "Real-Time Keyword Extraction from Conversations", "text": "We introduce a novel, fully unsupervised method to extract keywords from meeting speech in real-time. Our approach represents text as a word co-occurrence network and leverages the k-core graph decomposition algorithm and properties of submodular functions. We outperform multiple baselines in a real-time scenario emulated from the AMI and ICSI meeting corpora. Evaluation is conducted against both extractive and abstractive gold standard using two standard performance metrics and a newer one based on word embeddings."}
{"_id": "756620bd35d77f77172256cccaa1f352e3ac653c", "title": "Building the Next Generation of Cyber Security Professionals", "text": "Cyber security is an area of strategic and policy interest to governments and enterprises globally, which results in an increase in the demand for cyber security professionals. However, there is a lack of education based on sound theories, standards and practices. In this paper, we adapted the Situational Crime Prevention Theory and the NICE National Cybersecurity Workforce Framework in the design and delivery of our courses, particularly in the Cyber Security Exercise (CSE) which forms an integral part of the courses. The CSE is an attack/defence environment where students are grouped and given a virtual machine with which to host a number of services (e.g. HTTP(S), FTP and SSH) for access by other groups. The CSE is designed to mirror real-world environments where the students\u2019 skills will be applied. An overview of the CSE architecture was also provided for readers interested in replicating the exercise in their institutions. Based on student assessment and feedback, we found that our approach was useful in transferring theoretical knowledge to practical skills suitable for the cyber security workforce."}
{"_id": "68f31ef096bc1322effea4e33e2639aea3d4fcb1", "title": "Contextual Consent: Ethical Mining of Social Media for Health Research", "text": "Social media are a rich source of insight for data mining and u sercentred research, but the question of consent arises when st udying such data without the express knowledge of the creator. Case studies that mine social data from users of online services s uch as Facebook and Twitter are becoming increasingly common. Thi s has led to calls for an open discussion into how researchers c an best use these vast resources to make innovative findings whi le still respecting fundamental ethical principles. In this positi n paper we highlight some key considerations for this topic and argu e that the conditions of informed consent are often not being met, a nd that using social media data that some deem free to access and analyse may result in undesirable consequences, particula ly within the domain of health research and other sensitive topics. We posit that successful exploitation of online personal data, part icularly for health and other sensitive research, requires new and us able methods of obtaining consent from the user."}
{"_id": "b5fa055f3440fb77751371122f422fb4fab69c3d", "title": "Engineering Fast Route Planning Algorithms", "text": "Algorithms for route planning in transportation networks have recently undergone a rapid development, leading to methods that are up to one million times faster than Dijkstra\u2019s algorithm. We outline ideas, algorithms, implementations, and experimental methods behind this development. We also explain why the story is not over yet because dynamically changing networks, flexible objective functions, and new applications pose a lot of interesting challenges."}
{"_id": "26b4b77ff8470187491d2537aae29139bf5a5917", "title": "Privacy awareness about information leakage: who knows what about me?", "text": "The task of protecting users' privacy is made more difficult by their attitudes towards information disclosure without full awareness and the economics of the tracking and advertising industry. Even after numerous press reports and widespread disclosure of leakages on the Web and on popular Online Social Networks, many users appear not be fully aware of the fact that their information may be collected, aggregated and linked with ambient information for a variety of purposes. Past attempts at alleviating this problem have addressed individual aspects of the user's data collection. In this paper we move towards a comprehensive and efficient client-side tool that maximizes users' awareness of the extent of their information leakage. We show that such a customizable tool can help users to make informed decisions on controlling their privacy footprint."}
{"_id": "97b9f9a365d2418806a1a8103a4e1c57f586a56c", "title": "An Open Source Matlab/Simulink Toolbox for Interval Type-2 Fuzzy Logic Systems", "text": "In the last two decades, we have witnessed that Interval Type-2 Fuzzy Logic Systems (IT2-FLSs) have been successfully implemented in various engineering areas. In this paper, we will introduce a free open source Mat lab/Simulink toolbox for the development of IT2-FLSs for a wider accessibility to users beyond the type-2 fuzzy logic community. The presented IT2-FLS toolbox allows intuitive implementation of IT2-FLSs where it is capable to cover all the phases of its design. In order to allow users to easily construct IT2-FISs, a GUI is developed which is similar to that of Matlab\u00ae Fuzzy Logic Toolbox. We have embedded various Type Reduction (TR) methods in the toolbox since the TR method is the most important operation that must be taken into consideration. This gives the opportunity to the user to examine the performance of his/her IT2-FLS design with respect to the TR methods. We have also developed an IT2-FLS Simulink library so that the designer can perform various simulation analyses and can also investigate the effect of TR method on the performance of the IT2-FLS. Moreover, the developed IT2-FLS Mat lab/Simulink toolbox contains an automatic connection between Matlab and Simulink environments. Once the user has finished the design of the IT2-FLS via the GUI, it is possible to export his/her design directly to Simulink via an automatic Simulink file generation. We believe that the availability of the developed free and open-source IT2-FLS Mat lab/Simulink toolbox will be an important step for a wider deployment and development of IT2-FLSs."}
{"_id": "a6a53b783ec3e01f91696b6ec846e3aac15f4a3d", "title": "Tensors for Data Mining and Data Fusion: Models, Applications, and Scalable Algorithms", "text": "Tensors and tensor decompositions are very powerful and versatile tools that can model a wide variety of heterogeneous, multiaspect data. As a result, tensor decompositions, which extract useful latent information out of multiaspect data tensors, have witnessed increasing popularity and adoption by the data mining community. In this survey, we present some of the most widely used tensor decompositions, providing the key insights behind them, and summarizing them from a practitioner\u2019s point of view. We then provide an overview of a very broad spectrum of applications where tensors have been instrumental in achieving state-of-the-art performance, ranging from social network analysis to brain data analysis, and from web mining to healthcare. Subsequently, we present recent algorithmic advances in scaling tensor decompositions up to today\u2019s big data, outlining the existing systems and summarizing the key ideas behind them. Finally, we conclude with a list of challenges and open problems that outline exciting future research directions."}
{"_id": "744d04cdd7c1ccd13482d0a17ae4b621b682554d", "title": "Swarm Intelligence Approaches for Grid Load Balancing", "text": "With the rapid growth of data and computational needs, distributed systems and computational Grids are gaining more and more attention. The huge amount of computations a Grid can fulfill in a specific amount of time cannot be performed by the best supercomputers. However, Grid performance can still be improved by making sure all the resources available in the Grid are utilized optimally using a good load balancing algorithm. This research proposes two new distributed swarm intelligence inspired load balancing algorithms. One algorithm is based on ant colony optimization and the other algorithm is based on particle swarm optimization. A simulation of the proposed approaches using a Grid simulation toolkit (GridSim) is conducted. The performance of the algorithms are evaluated using performance criteria such as makespan and load balancing level. A comparison of our proposed approaches with a classical approach called State Broadcast Algorithm and two random approaches is provided. Experimental results show the proposed algorithms perform very well in a Grid environment. Especially the application of particle swarm optimization, can yield better performance results in many scenarios than the ant colony approach."}
{"_id": "8d3ebbb9bf35833f8d119366783be1b8f9e92979", "title": "Routing Around Congestion: Defeating DDoS Attacks and Adverse Network Conditions via Reactive BGP Routing", "text": "In this paper, we present Nyx, the first system to both effectively mitigate modern Distributed Denial of Service (DDoS) attacks regardless of the amount of traffic under adversarial control and function without outside cooperation or an Internet redesign. Nyx approaches the problem of DDoS mitigation as a routing problem rather than a filtering problem. This conceptual shift allows Nyx to avoid many of the common shortcomings of existing academic and commercial DDoS mitigation systems. By leveraging how Autonomous Systems (ASes) handle route advertisement in the existing Border Gateway Protocol (BGP), Nyx allows the deploying AS to achieve isolation of traffic from a critical upstream AS off of attacked links and onto alternative, uncongested, paths. This isolation removes the need for filtering or de-prioritizing attack traffic. Nyx controls outbound paths through normal BGP path selection, while return paths from critical ASes are controlled through the use of specific techniques we developed using existing traffic engineering principles and require no outside coordination. Using our own realistic Internet-scale simulator, we find that in more than 98% of cases our system can successfully route critical traf?c around network segments under transit-link DDoS attacks; a new form of DDoS attack where the attack traf?c never reaches the victim AS, thus invaliding defensive filtering, throttling, or prioritization strategies. More significantly, in over 95% of those cases, the alternate path provides complete congestion relief from transit-link DDoS. Nyx additionally provides complete congestion relief in over 75% of cases when the deployer is being directly attacked."}
{"_id": "a09a9f69ff145508a12e6dbb81ccd7f5be4bd2fc", "title": "Machine Learning for High-Throughput Stress Phenotyping in Plants.", "text": "Advances in automated and high-throughput imaging technologies have resulted in a deluge of high-resolution images and sensor data of plants. However, extracting patterns and features from this large corpus of data requires the use of machine learning (ML) tools to enable data assimilation and feature identification for stress phenotyping. Four stages of the decision cycle in plant stress phenotyping and plant breeding activities where different ML approaches can be deployed are (i) identification, (ii) classification, (iii) quantification, and (iv) prediction (ICQP). We provide here a comprehensive overview and user-friendly taxonomy of ML tools to enable the plant community to correctly and easily apply the appropriate ML tools and best-practice guidelines for various biotic and abiotic stress traits."}
{"_id": "60c24e44fce158c217d25c1bae9f880a8bd19fc3", "title": "Controllable Image-to-Video Translation: A Case Study on Facial Expression Generation", "text": "The recent advances in deep learning have made it possible to generate photorealistic images by using neural networks and even to extrapolate video frames from an input video clip. In this paper, for the sake of both furthering this exploration and our own interest in a realistic application, we study image-to-video translation and particularly focus on the videos of facial expressions. This problem challenges the deep neural networks by another temporal dimension comparing to the imageto-image translation. Moreover, its single input image fails most existing video generation methods that rely on recurrent models. We propose a user-controllable approach so as to generate video clips of various lengths from a single face image. The lengths and types of the expressions are controlled by users. To this end, we design a novel neural network architecture that can incorporate the user input into its skip connections and propose several improvements to the adversarial training method for the neural network. Experiments and user studies verify the effectiveness of our approach. Especially, we would like to highlight that even for the face images in the wild (downloaded from the Web and the authors\u2019 own photos), our model can generate high-quality facial expression videos of which about 50% are labeled as real by Amazon Mechanical Turk workers. Please see our project page1 for more demonstrations."}
{"_id": "5bc664efa3c43a946431cc513affe5904a5bdb73", "title": "A deep learning based stock trading model with 2-D CNN trend detection", "text": "The success of convolutional neural networks in the field of computer vision has attracted the attention of many researchers from other fields. One of the research areas in which neural networks is actively used is financial forecasting. In this paper, we propose a novel method for predicting stock price movements using CNN. To avoid the high volatility of the market and to maximize the profit, ETFs are used as primary financial assets. We extract commonly used trend indicators and momentum indicators from financial time series data and use these as our features. Adopting a sliding window approach, we generate our images by taking snapshots that are bounded by the window over a daily period. We then perform daily predictions, namely, regression for predicting the ETF prices and classification for predicting the movement of the prices on the next day, which can be modified to estimate weekly or monthly trends. To increase the number of images, we use numerous ETFs. Finally, we evaluate our method by performing paper trading and calculating the final capital. We also compare our method's performance to commonly used classical trading strategies. Our results indicate that we can predict the next day's prices with 72% accuracy and end up with 5:1 of our initial capital, taking realistic values of transaction costs into account."}
{"_id": "07e7883d9d4c98917f7fdf8eb322617c752082a5", "title": "Detecting product review spammers using rating behaviors", "text": "This paper aims to detect users generating spam reviews or review spammers. We identify several characteristic behaviors of review spammers and model these behaviors so as to detect the spammers. In particular, we seek to model the following behaviors. First, spammers may target specific products or product groups in order to maximize their impact. Second, they tend to deviate from the other reviewers in their ratings of products. We propose scoring methods to measure the degree of spam for each reviewer and apply them on an Amazon review dataset. We then select a subset of highly suspicious reviewers for further scrutiny by our user evaluators with the help of a web based spammer evaluation software specially developed for user evaluation experiments. Our results show that our proposed ranking and supervised methods are effective in discovering spammers and outperform other baseline method based on helpfulness votes alone. We finally show that the detected spammers have more significant impact on ratings compared with the unhelpful reviewers."}
{"_id": "0bee052af002eb197277cd222d62154c7de4ac8a", "title": "Combating Web Spam with TrustRank", "text": "Web spam pages use various techniques to achieve higher-than-deserved rankings in a search engine\u2019s results. While human experts can identify spam, it is too expensive to manually evaluate a large number of pages. Instead, we propose techniques to semiautomatically separate reputable, good pages from spam. We first select a small set of seed pages to be evaluated by an expert. Once we manually identify the reputable seed pages, we use the link structure of the web to discover other pages that are likely to be good. In this paper we discuss possible ways to implement the seed selection and the discovery of good pages. We present results of experiments run on the World Wide Web indexed by AltaVista and evaluate the performance of our techniques. Our results show that we can effectively filter out spam from a significant fraction of the web, based on a good seed set of less than 200 sites."}
{"_id": "2b3e4df9463e721577f7058d39949b0d74fd58a9", "title": "Towards a General Rule for Identifying Deceptive Opinion Spam", "text": "Consumers\u2019 purchase decisions are increasingly influenced by user-generated online reviews. Accordingly, there has been growing concern about the potential for posting deceptive opinion spam\u2014 fictitious reviews that have been deliberately written to sound authentic, to deceive the reader. In this paper, we explore generalized approaches for identifying online deceptive opinion spam based on a new gold standard dataset, which is comprised of data from three different domains (i.e. Hotel, Restaurant, Doctor), each of which contains three types of reviews, i.e. customer generated truthful reviews, Turker generated deceptive reviews and employee (domain-expert) generated deceptive reviews. Our approach tries to capture the general difference of language usage between deceptive and truthful reviews, which we hope will help customers when making purchase decisions and review portal operators, such as TripAdvisor or Yelp, investigate possible fraudulent activity on their sites.1"}
{"_id": "30e81d8675be66271e362e5276395397631336ee", "title": "What Yelp Fake Review Filter Might Be Doing?", "text": "Online reviews have become a valuable resource for decision making. However, its usefulness brings forth a curse \u2012 deceptive opinion spam. In recent years, fake review detection has attracted significant attention. However, most review sites still do not publicly filter fake reviews. Yelp is an exception which has been filtering reviews over the past few years. However, Yelp\u2019s algorithm is trade secret. In this work, we attempt to find out what Yelp might be doing by analyzing its filtered reviews. The results will be useful to other review hosting sites in their filtering effort. There are two main approaches to filtering: supervised and unsupervised learning. In terms of features used, there are also roughly two types: linguistic features and behavioral features. In this work, we will take a supervised approach as we can make use of Yelp\u2019s filtered reviews for training. Existing approaches based on supervised learning are all based on pseudo fake reviews rather than fake reviews filtered by a commercial Web site. Recently, supervised learning using linguistic n-gram features has been shown to perform extremely well (attaining around 90% accuracy) in detecting crowdsourced fake reviews generated using Amazon Mechanical Turk (AMT). We put these existing research methods to the test and evaluate performance on the real-life Yelp data. To our surprise, the behavioral features perform very well, but the linguistic features are not as effective. To investigate, a novel information theoretic analysis is proposed to uncover the precise psycholinguistic difference between AMT reviews and Yelp reviews (crowdsourced vs. commercial fake reviews). We find something quite interesting. This analysis and experimental results allow us to postulate that Yelp\u2019s filtering is reasonable and its filtering algorithm seems to be correlated with abnormal spamming behaviors."}
{"_id": "426ccb6a700ac1dbe21484735fc182127783670b", "title": "Finding unusual review patterns using unexpected rules", "text": "In recent years, opinion mining attracted a great deal of research attention. However, limited work has been done on detecting opinion spam (or fake reviews). The problem is analogous to spam in Web search [1, 9 11]. However, review spam is harder to detect because it is very hard, if not impossible, to recognize fake reviews by manually reading them [2]. This paper deals with a restricted problem, i.e., identifying unusual review patterns which can represent suspicious behaviors of reviewers. We formulate the problem as finding unexpected rules. The technique is domain independent. Using the technique, we analyzed an Amazon.com review dataset and found many unexpected rules and rule groups which indicate spam activities."}
{"_id": "317eaeb73757c131bd1cf920a87c0c54f9b0dfe8", "title": "A fast garment fitting algorithm using skeleton-based error metric", "text": "Funding information National Natural Science Foundation of China, Grant/Award Number: 61732015; National Key R&D Program of China, Grant/Award Number: 2017YFB1002600; Key Research and Development Program of Zhejiang Province, Grant/Award Number: 2018C01090; National Natural Science Foundation of China, Grant/Award Number: 61472355 Abstract We present a fast and automatic method to fit a given 3D garment onto a human model with various shapes and poses, without using a reference human model. Our approach uses a novel skeleton-based error metric to find the pose that best fits the input garment. Specifically, we first generate the skeleton of the given human model and its corresponding skinning weights. Then, we iteratively rotate each bone to find its best position to fit the garment. After that, we rig the surface of the human model according to the transformations of the skeleton. Potential penetrations are resolved using collision handling and physically based simulation. Finally, we restore the human model back to the original pose in order to obtain the desired fitting result. Our experiment results show that besides its efficiency and automation, our method is about two orders of magnitudes faster than existing approaches, and it can handle various garments, including jacket, trousers, skirt, a suit of clothing, and even multilayered clothing."}
{"_id": "b705ca751a947e3b761e2305b41891051525d9df", "title": "Exploring Context with Deep Structured Models for Semantic Segmentation", "text": "We propose an approach for exploiting contextual information in semantic image segmentation, and particularly investigate the use of patch-patch context and patch-background context in deep CNNs. We formulate deep structured models by combining CNNs and Conditional Random Fields (CRFs) for learning the patch-patch context between image regions. Specifically, we formulate CNN-based pairwise potential functions to capture semantic correlations between neighboring patches. Efficient piecewise training of the proposed deep structured model is then applied in order to avoid repeated expensive CRF inference during the course of back propagation. For capturing the patch-background context, we show that a network design with traditional multi-scale image inputs and sliding pyramid pooling is very effective for improving performance. We perform comprehensive evaluation of the proposed method. We achieve new state-of-the-art performance on a number of challenging semantic segmentation datasets."}
{"_id": "456ff5089e5b6a3f284a2a265b8c690eca4db005", "title": "Two-Switch Voltage Equalizer Using an LLC Resonant Inverter and Voltage Multiplier for Partially Shaded Series-Connected Photovoltaic Modules", "text": "Various kinds of differential power processing converters and voltage equalizers have been proposed for series-connected photovoltaic (PV) modules to prevent negative influences of partial shading, such as significant reduction in power generation and the occurrence of multiple maximum power points (MPPs), including local and global MPPs, that hinders and confuses MPP tracking algorithms to operate properly. However, since conventional topologies are based on multiple individual dc-dc converters, the required switch count increases proportionally to the number of modules connected in series, increasing the complexity. A two-switch voltage equalizer using an LLC resonant inverter and voltage multiplier is proposed in this paper. The circuitry can be dramatically simplified compared with conventional topologies due to the two-switch configuration. Detailed operation analyses for the LLC resonant inverter and voltage multiplier are separately performed. Experimental equalization tests emulating partially shaded conditions were performed for four PV modules connected in series. With the proposed voltage equalizer, local MPPs successfully disappeared, and extractable maximum power increased compared with those without equalization, demonstrating the effectiveness and performance of the proposed voltage equalizer."}
{"_id": "ab847ec4ce8a87321f28dcae98c1d90264115370", "title": "A Low-Cost Real-Time Embedded Stereo Vision System for Accurate Disparity Estimation Based on Guided Image Filtering", "text": "Stereo matching, a key element towards extracting depth information from stereo images, is widely used in several embedded consumer electronic and multimedia systems. Such systems demand high processing performance and accurate depth perception, while their deployment in embedded and mobile environments implies that cost, energy and memory overheads need to be minimized. Hardware acceleration has been demonstrated in efficient embedded stereo vision systems. To this end, this paper presents the design and implementation of a hardware-based stereo matching system able to provide high accuracy and concurrently high performance for embedded vision devices, which are associated with limited hardware and power budget. We first implemented a compact and efficient design of the guided image filter, an edge-preserving filter, which reduces the hardware complexity of the implemented stereo algorithm, while at the same time maintains high-quality results. The guided filter design is used in two parts of the stereo matching pipeline, showing that it can simplify the hardware complexity of the Adaptive Support Weight aggregation step, and efficiently enable a powerful disparity refinement unit, which improves matching accuracy, even though cost aggregation is based on simple, fixed support strategies. We implemented several variants of our design on a Kintex-7 FPGA board, which was able to process HD video (1,280 \u00d7 720) in real-time (60 fps), using ~57.5k and ~71k of the FPGA's logic (CLB) and register resources, respectively. Additionally, the proposed stereo matching design delivers leading accuracy when compared to state-of-the-art hardware implementations based on the Middlebury evaluation metrics (at least 1.5 percent less bad matching pixels)."}
{"_id": "54f29c12eef1895c526ee4d1205d11754e656cd1", "title": "Generalization Properties and Implicit Regularization for Multiple Passes SGM", "text": "We study the generalization properties of stochastic gradient methods for learning with convex loss functions and linearly parameterized functions. We show that, in the absence of penalizations or constraints, the stability and approximation properties of the algorithm can be controlled by tuning either the step-size or the number of passes over the data. In this view, these parameters can be seen to control a form of implicit regularization. Numerical results complement the theoretical findings."}
{"_id": "3600f8647fde67ab91290c8b37e1356d06d0edbb", "title": "Trajectory Outlier Detection: A Partition-and-Detect Framework", "text": "Outlier detection has been a popular data mining task. However, there is a lack of serious study on outlier detection for trajectory data. Even worse, an existing trajectory outlier detection algorithm has limited capability to detect outlying sub- trajectories. In this paper, we propose a novel partition-and-detect framework for trajectory outlier detection, which partitions a trajectory into a set of line segments, and then, detects outlying line segments for trajectory outliers. The primary advantage of this framework is to detect outlying sub-trajectories from a trajectory database. Based on this partition-and-detect framework, we develop a trajectory outlier detection algorithm TRAOD. Our algorithm consists of two phases: partitioning and detection. For the first phase, we propose a two-level trajectory partitioning strategy that ensures both high quality and high efficiency. For the second phase, we present a hybrid of the distance-based and density-based approaches. Experimental results demonstrate that TRAOD correctly detects outlying sub-trajectories from real trajectory data."}
{"_id": "48d62bd7c8e4df6a443d52877e4ce1839b51348e", "title": "Decision support for diffuse pollution management", "text": "The effort to manage diffuse pollution at the catchment scale is an ongoing challenge that needs to take into account trade-offs between environmental and economic objectives. Best Management Practices (BMPs) are gaining ground as a means to address the problem, but their application (and impact) is highly dependant on the characteristics of the crops and of the land in which they are to be applied. In this paper, we demonstrate a new methodology and associated decision support tool that suggests the optimal location for placing BMPs to minimise diffuse surface water pollution at the catchment scale, by determining the trade-off among economic and multiple environmental objectives. The decision support tool consists of a non-point source (NPS) pollution estimator, the SWAT (Soil and Water Assessment Tool) model, a genetic algorithm (GA), which serves as the optimisation engine for the selection and placement of BMPs across the agricultural land of the catchment, and of an empirical economic function for the estimation of the mean annual cost of BMP implementation. In the proposed decision support tool, SWAT was run a number of times equal to the number of tested BMPs, to predict nitrates nitrogen (N-NO3) and total phosphorus (TP) losses from all the agricultural Hydrologic Response Units (HRUs) and possible BMPs implemented on them. The results were then saved in a database which was subsequently used for the optimisation process. Fifty different BMPs, including sole or combined changes in livestock, crop, soil and nutrient application management in alfalfa, corn and pastureland fields, were evaluated in the reported application of the tool in a catchment in Greece, by solving a three-objective optimisation process (cost, TP and N-NO3). The relevant two-dimensional trade-off curves of cost-TP, cost-N-NO3 and N-NO3eTP are presented and discussed. The strictest environmental target, expressed as a 45% reduction of TP at the catchment outlet, which also resulted in a 25% reduction of the annual N-NO3 yield was met at an affordable annual cost of 25 V/person by establishing an optimal combination of BMPs. The methodology could be used to assist in a more cost-effective implementation of environmental"}
{"_id": "a7d99e695f282896ce82028eff9dbc1b623f0477", "title": "100-Mb/s NRZ Visible Light Communications Using a Postequalized White LED", "text": "This letter describes a high-speed visible light communications link that uses a white-light light-emitting diode (LED). Such devices have bandwidths of few megahertz, severely limiting the data rates of any communication system. Here, we demonstrate that by detecting only the blue component of the LED, and using a simple first-order analogue equalizer, a data rate of 100 Mb/s can be achieved using on-off keying nonreturn-to-zero modulation."}
{"_id": "527b74bf712f2fd0d3fe475cc0a1032ef7a9e9f4", "title": "Systematic literature studies: Database searches vs. backward snowballing", "text": "Systematic studies of the literature can be done in different ways. In particular, different guidelines propose different first steps in their recommendations, e.g. start with search strings in different databases or start with the reference lists of a starting set of papers.\n In software engineering, the main recommended first step is using search strings in a number of databases, while in information systems, snowballing has been recommended as the first step. This paper compares the two different search approaches for conducting literature review studies.\n The comparison is conducted by searching for articles addressing \"Agile practices in global software engineering\". The focus of the paper is on evaluating the two different search approaches.\n Despite the differences in the included papers, the conclusions and the patterns found in both studies are quite similar. The strengths and weaknesses of each first step are discussed separately and in comparison with each other.\n It is concluded that none of the first steps is outperforming the other, and the choice of guideline to follow, and hence the first step, may be context-specific, i.e. depending on the area of study."}
{"_id": "0f0387d7207390dec305e09cdbbf4847e3c948e7", "title": "Chapter 7 Towards Automatically-Tuned Deep Neural Networks", "text": "Recent advances in AutoML have led to automated tools that can compete with machine learning experts on supervised learning tasks. In this work, we present two versions of Auto-Net, which provide automatically-tuned deep neural networks without any human intervention. The first version, Auto-Net 1.0, builds upon ideas from the competition-winning system Auto-sklearn by using the Bayesian Optimization method SMAC and uses Theano as the underlying deep learning (DL) framework. The more recent Auto-Net 2.0 builds upon a recent combination of Bayesian Optimization and HyperBand, called BOHB, and uses PyTorch as DL framework. To the best of our knowledge, Auto-Net 1.0 was the first automatically-tuned neural network to win competition datasets against human experts (as part of the first AutoML challenge). Further empirical results show that ensembling Auto-Net 1.0 with Auto-sklearn can perform better than either approach alone, and that Auto-Net 2.0 can perform better yet."}
{"_id": "28f0d32eaa11d35c64bb687268b99b548addea23", "title": "A Model Based on Aras-G and AHP Methods for Multiple Criteria Prioritizing of Heritage Value", "text": "The paper discusses the meaning and nature of urban cultural heritage, and the available methods for its valuation in the perspective of sustainable city development. From this perspective, decision-making problems of renovation often involve a complex decision-making process in which multiple requirements and conditions have to be taken into consideration simultaneously. In project development it is hardly possible to get exhaustive and accurate information. As a result, the situations occur, the consequences of which can be very damaging to the project. Sometimes the loss is related to symbolic values that the public perceive as disregarded by the project, despite the overall improved conditions. This paper presents the multiple criteria assessment of alternatives of the cultural heritage renovation projects in Vilnius city. The model consists of the following elements: determining attributes set a\u00aeecting built and human environment renovation; information collection and analysis, decision modeling and solution selection. The main purpose of the model is to improve the condition of the built and human environment through e\u00b1cient decision making in renovation supported by multiple attribute evaluation. Delphi, AHP and ARAS-G methods, considering di\u00aeerent environment factors as well as stakeholders' needs, are applied to solve problem."}
{"_id": "59a1ad0b4a92201c0898fc8e74fd0dc31a57ed44", "title": "Illuminating the Effects of Dynamic Lighting on Student Learning", "text": "Light is universally understood as essential to the human condition. Yet light quality varies substantially both in nature and in controlled environments leading to questions of which artificial light characteristics facilitate maximum learning. Recent research has examined lighting variables of color temperature, and illuminance for impacting sleep, mood, focus, motivation, concentration and work and school performance. This has resulted in artificial light systems intended to support human beings in their actualization through dynamic lighting technology allowing for different lighting conditions per task. Eighty-four third graders were exposed to either focus (6000K-100fc average maintained) or normal lighting. Focus lighting led to a higher percentage increase in oral reading fluency performance (36%) than did control lighting (17%). No lighting effects were found for motivation or concentration, possibly attributable to the younger age level of respondents as compared to European studies. These findings illuminate the need for further research on artificial light and learning. The Effects of Lighting on Humans in General The human evolution is shaped by light. In the course of evolution, human beings have adapted and developed an internal clock that under natural light conditions is synchronized to the earth\u2019s 24-hour light-dark rotational cycle (Czeisler et al., 1999). Research reveals the mechanism for how light is essential for human functioning (Boyce, Hunter, & Howlett, 2003). Light is a strong enabler for visual performance (Grangaard, 1995), regulates a large variety of bodily processes such as sleep and alertness (Dijk et al., 1997; Wright et al., 2006; Takasu et al., 2006; Viola et al., 2008), is essential for cognition and mood (Veitch & McColl, 2001; Goven et al., 2011; Taras, 2005), enables production of important hormones such as melatonin and cortisol (Berson, 2002; dijk et al., 1997; Leproult et al., 2001), and is essential for a healthy rest-activity pattern (Wurtman, 1975). Lights of different wavelengths also affect blood pressure, pulse, respiration rates, brain activity, and biorhythms. The role of lighting in our daily lives is essential in order to operate ideally in every environment. Thus, lighting directly influences every dimension of human existence. Tanner reiterated: \u201cLight is the most important environmental input, after food and water, in controlling bodily functions (as cited in Wurtman, 1975). Since the industrial revolution, people spend more and more time indoors while artificial lighting has shown the power to at least partially compensate for the processes that stabilize the body, mind, and emotions (Knez, 1995; Tanner, 2008, van Someren et al.,2005; Takasu et al., 2006; Mishima et al., 2001; Viola et al., 2008). In the following we elaborate a bit more on the proven effects artificial light has on human functioning. Circadian rhythm Sleep is one of the most basic physical requirements for human functioning. Amount and quality of lighting invariably affects the degree and quality of sleep in humans and regulates our biological clocks. In 2002, Berson et al. (2002) identified a new non-image forming photo-pigment residing within a cell type in the retina of the eye. It is referred to as melanopsin and regulates the biological effects of light. 
When ocular light (light perceived by the eyes) reaches these cells, a complex chemical reaction occurs, producing electrical impulses that are sent via separate nerve pathways to our biological clock, the suprachiasmatic nuclei (SCN). The SCN in turn regulates the circadian (daily) and circannual (seasonal) rhythms of a large variety of bodily processes, such as sleep, and some important hormones, such as melatonin and cortisol, essential for a healthy rest-activity pattern. The system that generates the circadian rhythmicity of biological processes is denoted as the circadian system. The melanopsin pigment is most sensitive to blue light with a peak sensitivity at 480 nm (Brainard et al., 2001; Thapan et al., 2001; Hankins et al., 2008). Sleep consolidation is optimal when sleep timing coincides with the period of melatonin secretion (Dijk et al., 1997). People that sleep during their melatonin peak (as in normal, i.e., well-synchronized, people), are reported to have a longer total sleep time and less wakefulness after sleep onset as compared with people that schedule their wakefulness during the melatonin peak (non-synchronized people) (Wright et al., 2006). Moreover, the same study indicates that cognitive performance (i.e., learning) was better in a synchronized group of people, whereas learning was impaired in a non-synchronized group of people. This indicates that proper alignment between sleep-wakefulness and biological (internal circadian) time is crucial, not only for sleep quality, but also for enhancement of cognitive performance. Vision The most obvious effect of light on humans is to enable vision and performance of visual tasks through the eyes. The eye contains \u201cphotoreceptor cells\u201d called rods and cones. These photoreceptor cells regulate the visual effects. When light reaches these cells, a complex chemical reaction occurs. The chemical that is formed creates electrical impulses in the nerve that connects the photoreceptor cells to the visual cortex at the back of the brain. In the visual cortex the electrical impulses are interpreted as \u201cvision\u201d. Rod cells \u201coperate in low-level light situations and do not permit color vision.\u201d Cone cells on the other hand, operate best in \u201cnormal daytime lighting conditions,\u201d and are \u201cresponsible for sharpness, detail, and color vision.\u201d Studies show that the nature of the task \u2013 as well as the amount, spectrum, and distribution of the light \u2013 determines the level of performance that is achieved. Mood and Cognition Lighting plays an important role in evoking emotions. Lighting can be used to make an architectural space more aesthetically pleasing or it can create an atmosphere in that space; both affect people\u2019s emotions. In addition, the user\u2019s well-being can be directly influenced by light. Brightness, color, direction, contrast and time are parameters used to create lighting conditions that address this. Nevertheless, concerning the relationship between lighting and mood/cognition, research has not shown consistent results. In a study by Knez (1995), two experiments were performed in order to analyze the effects of color temperature and illuminance levels on mood and cognitive performance tasks including long-term recall, free recall, and performance appraisal between males and females. After each experiment, a test to measure each subject\u2019s mood was administered. 
The results showed that females performed better in warm white lighting environments, whereas males performed better on cognitive tasks in cool white lighting. Both males and females perceived and responded differently in evaluating the illuminance levels and color index of the lighting and therefore each gender\u2019s mood was affected differently. Positive mood measures showed no increase in mood in both genders; however, the cooler lighting had a more negative effect on females\u2019 moods. Thus, females\u2019 performance on cognitive tasks decreased under cooler lighting. Because physiological changes occur when humans are exposed to light, mood and cognition can be affected indirectly and variably. According to Veitch and McColl (2001), lighting\u2019s cognitive and mood-related effects on people have noteworthy implications: 1) better performance on cognitive related tasks in the workplace or academic environment and 2) overall improved quality of life and well-being. Both visual perception strength and adequate sleep could have a considerable impact on cognitive abilities such as concentration and memory. Mood may also determine the sharpness of these cognitive abilities. Mood can be influenced by the quality and amount of lighting (Partonen et al., 2002; Veitch & McColl, 2001; inter alia Beauchemin & Hays, 1996; Benedetti et al., 2001; Goven, 2011). For instance, light therapy has proven a successful treatment for those with Seasonal Affective Disorder (SAD) and other non-seasonal mood related disorders such as depression and eating disorders (Spiegel et al., 2004; Van Cauter et al., 2000; McColl & Veitch, 2001). Effects for Lighting and Learning Because lighting profoundly impacts numerous levels of human functioning such as vision, circadian rhythms, mood, and cognition, its implicit effects on learning and classroom achievement cannot be dismissed. Several studies have addressed how the quality and color of lighting can either impair or enhance students\u2019 visual skills and thus, academic performance. Visual impairments alone can induce behavioral problems in students as well as level of concentration and motivation in the classroom. Cheatum and Hammond (2000) estimated that around 20% of children that enter the school encounter visual problems (e.g., problems with focusing, eye tracking, training, lazy eye, and trabismus). Among elementary school children, 41% have experienced trouble with tracking, 6% have refractive errors and 4% have strabismus (Koslowe, 1995, as cited in Cheatum & Hammond, 2000). The same study suggests that \u201cthe inability of visual tracking is also thought to be the cause of behavioral problems and being illiterate.\u201d Winterbottom (2009) suggested that certain features of lighting can cause discomfort and impair visual and cognitive performance. These features include \u201cimperceptible 100 Hz flicker from fluorescent lighting and glare induced by 1) daylight and fluorescent lighting, 2) interactive whiteboards (IWBs) and dry-erase whiteboards (DWBs)\u201d (2009). The purpose of his study was to determine the degree and magnitude to which students are subjected to the above stated lighting inefficiencies in the classroom. The 100 Hz flicker from fluorescent lighting was displayed in 80% of the 90 UK classrooms u"}
{"_id": "9caa5d8f2bd2d8bf05383bef449ad2663acc86e1", "title": "Computer graphics and geometric ornamental design", "text": "Computer Graphics and Geometric Ornamental Design"}
{"_id": "38126db189b9e9c1f8b2a9422bfd2cc372102bc7", "title": "Worthiness of Sentences in Scientific Reports", "text": "Does this sentence need citation? In this paper, we introduce the task of citation worthiness for scientific texts at a sentence-level granularity. The task is to detect whether a sentence in a scientific article needs to be cited or not. It can be incorporated into citation recommendation systems to help automate the citation process by marking sentences where needed. It may also be useful for publishers to regularize the citation process. We construct a dataset using the ACLAnthology Reference Corpus; consisting of over 1.1M \u201cnot_cite\u201d and 85K \u201ccite\u201d sentences. We study the performance of a set of state-of-the-art sentence classifiers for the citation worthiness task and show the practical challenges. We also explore sectionwise difficulty of the task and analyze the performance of our best model on a published article. ACM Reference Format: Hamed Bonab, Hamed Zamani, Erik Learned-Miller, and James Allan. 2018. Citation Worthiness of Sentences in Scientific Reports. In SIGIR \u201918: The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, July 8\u201312, 2018, Ann Arbor, MI, USA. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3209978.3210162"}
{"_id": "f8410bcd477d7be77ba980af61f156d509e81bfb", "title": "Witnessing spouse abuse and experiencing physical abuse: A \u201cdouble whammy\u201d?", "text": "Child witnesses to parental violence, both abused (N =40) and nonabused (N =44), who were temporarily residing in a battered women's shelter were compared with children from a similar economic background (N =66) on measures of anxiety, depression, and behavior problems. Mothers of the three groups of children (comparison, witness, abused/witness) across the age range from 4 to 12 years completed a behavior problem inventory; the youngsters responded to paper-and-pencil self-report measures. Results indicated that the abused/witness children were manifesting significantly more distress on the behavior problem measure than the comparison youngsters, with the witness children showing a moderate amount and the comparison children the least. However, these patterns were mediated by the age of the child. Implications of these differential findings are discussed."}
{"_id": "410b473157a19bff7af01a0586664a956988c66d", "title": "Neighborhood playgrounds, fast food restaurants, and crime: relationships to overweight in low-income preschool children.", "text": "BACKGROUND\nWe examined the relationship between overweight in preschool children and three environmental factors--the proximity of the children's residences to playgrounds and to fast food restaurants and the safety of the children's neighborhoods. We hypothesized that children who lived farther from playgrounds, closer to fast food restaurants, and in unsafe neighborhoods were more likely to be overweight.\n\n\nMETHODS\nThis was a cross-sectional study of 7,020 low-income children, 36 through 59 months of age living in Cincinnati, OH. Overweight was defined as a measured body mass index > or =95th percentile. The distance between each child's residence and the nearest public playground and fast food restaurant was determined with geographic information systems. Neighborhood safety was defined by the number of police-reported crimes per 1,000 residents per year in each of 46 city neighborhoods.\n\n\nRESULTS\nOverall, 9.2% of the children were overweight, 76% black, and 23% white. The mean (+/- SD) distances from a child's home to the nearest playground and fast food restaurant were 0.31 (+/- 0.22) and 0.70 (+/- 0.38) miles, respectively. There was no association between child overweight and proximity to playgrounds, proximity to fast food restaurants, or level of neighborhood crime. The association between child overweight and playground proximity did not differ by neighborhood crime level.\n\n\nCONCLUSIONS\nWithin a population of urban low-income preschoolers, overweight was not associated with proximity to playgrounds and fast food restaurants or with the level of neighborhood crime."}
{"_id": "0fc1f8086b50e1fdb7b1dd404303052afd788e4e", "title": "Sentiment-Based Features for Predicting Election Polls: A Case Study on the Brazilian Scenario", "text": "The success of opinion mining for automatically processing vast amounts of opinionated content available on the Internet has been demonstrated as a less expensive and lower latency solution for gathering public opinion. In this paper, we investigate whether it is possible to predict variations in vote intention based on sentiment time series extracted from news comments, using three Brazilian elections as case study. The contributions of this case study are: a) the comparison of two approaches for opinion mining in user-generated content in Brazilian Portuguese, b) the proposition of two types of features to represent sentiment behavior towards political candidates that can be used for prediction, c) an approach to predict polls vote intention variations that is adequate for scenarios of sparse data. We developed experiments to assess the influence on the forecasting accuracy of the proposed features, and their respective preparation. Our results display an accuracy of 70% in predicting positive and negative variations. These are important contributions towards a more general framework that is able to blend opinions from several different sources to find representativeness of the target population, and make more reliable predictions."}
{"_id": "420c43d4a2a96910e54465ff20fe6d08a5c008e2", "title": "Innocent by association: early recognition of legitimate users", "text": "This paper presents the design and implementation of Souche, a system that recognizes legitimate users early in online services. This early recognition contributes to both usability and security. Souche leverages social connections established over time. Legitimate users help identify other legitimate users through an implicit vouching process, strategically controlled within vouching trees. Souche is lightweight and fully transparent to users. In our evaluation on a real dataset of several hundred million users, Souche can efficiently identify 85% of legitimate users early, while reducing the percentage of falsely admitted malicious users from 44% to 2.4%. Our evaluation further indicates that Souche is robust in the presence of compromised accounts. It is generally applicable to enhance usability and security for a wide class of online services."}
{"_id": "20f81da3c6f632c924d9b3ab839ba7fbe140999f", "title": "A Double-Tail Latch-Type Voltage Sense Amplifier with 18ps Setup+Hold Time", "text": "A latch-type voltage sense amplifier in 90nm CMOS is designed with a separated input and cross-coupled stage. This separation enables fast operation over a wide common-mode and supply voltage range. With a 1-sigma offset of 8mV, the circuit consumes 92fJ/decision with a 1.2V supply. It has an input equivalent noise of 1.5mV and requires 18ps setup-plus-hold time"}
{"_id": "5a68cdb6a02572d57134355c66db77c77c90e31b", "title": "MobiMine: Monitoring the Stock Market from a PDA", "text": "This paper describes an experimental mobile data mining system that allows intelligent monitoring of time-critical financial data from a hand-held PDA. It presents the overall system architecture and the philosophy behind the design. It explores one particular aspect of the system---automated construction of personalized focus area that calls for user's attention. This module works using data mining techniques. The paper describes the data mining component of the system that employs a novel Fourier analysis-based approach to efficiently represent, visualize, and communicate decision trees over limited bandwidth wireless networks. The paper also discusses a quadratic programming-based personalization module that runs on the PDAs and the multi-media based user-interfaces. It reports experimental results using an ad hoc peer-to-peer IEEE 802.11 wireless network."}
{"_id": "873ff7fbccc2709256e4183c317756656658a9d7", "title": "A Study on Evaluation of E-Government Service Quality", "text": "Service quality is the highest requirement by users, especially for the service in electronic government. During the past decades, it has become a major area of academic investigation. Considering this issue, there are a lot of researches that evaluated the dimensions and e-service contexts. This study also identified the dimensions of service quality, but focuses on a new concept and provides a new methodology in developing measurement scales of e-service quality such as information quality, service quality and organization quality. Finally, this study will suggest a key factor to evaluate e-government service quality better. Keywords\u2014E-government, e-service, e-service quality, dimensionality."}
{"_id": "09acc6d00f710c8307ffa118a7dc77a00c692b74", "title": "Holistic Multi-modal Memory Network for Movie Question Answering", "text": "Answering questions according to multi-modal context is a challenging problem as it requires a deep integration of different data sources. Existing approaches only employ partial interactions among data sources in one attention hop. In this paper, we present the Holistic Multi-modal Memory Network (HMMN) framework which fully considers the interactions between different input sources (multi-modal context, question) in each hop. In addition, it takes answer choices into consideration during the context retrieval stage. Therefore, the proposed framework effectively integrates multi-modal context, question, and answer information, which leads to more informative context retrieved for question answering. Our HMMN framework achieves state-of-the-art accuracy on MovieQA dataset. Extensive ablation studies show the importance of holistic reasoning and contributions of different attention strategies."}
{"_id": "0faccbb6424dd7ee734af1420fee16eac3211aa0", "title": "Graphical Models for Game Theory", "text": "We introduce a compact graph-theoretic representation for multi-party game theory. Our main result is a provably correct and efficient algorithm for computing approximate Nash equilibria in (one-stage) games represented by trees or sparse graphs."}
{"_id": "1ff3ebd402e29c3af7226ece7f1d716daf1eb4a9", "title": "A 64 GHz 2 Gbps transmit/receive phased-array communication link in SiGe with 300 meter coverage", "text": "This paper presents a 64 GHz transmit/receive communication link between two 32-element SiGe-based phased arrays. The antenna element is a series-fed patch array, which provides directivity in the elevation plane. The transmit array results in an EIRP of 42 dBm, while the receive array provides an electronic gain of 33 dB and a system NF < 8 dB including the T/R switch and antenna losses. The arrays can be scanned to +/\u221250\u00b0 in the azimuth using a 5-bit phase shifter on the SiGe chip, while keeping very low sidelobes and a near-ideal pattern. The communication link uses one array on the transmit side and another array on the receive side, together with external mixers and IF amplifiers. A Keysight M8195A arbitrary waveform generator is used to generate the modulated waveforms on the transmit side and a Keysight DSO804A oscilloscope is used to demodulate the received IF signal. The link performance was measured for different scan angles and modulation formats. Data rates of 1 Gbps using 16-QAM and 2 Gbps using QPSK are demonstrated at 300 m. The system also results in > 4 Gbps data rate at 100 meters, and \u223c 500 Mbps data rate at 800 meters."}
{"_id": "2fb03a66f250a2c51eb2eb30344a13a5e4d8a265", "title": "Fabrication and measurement of a large, monolithic, PCB-based AESA", "text": "This paper discusses a fabrication approach and experimental validation of a very large, planar active electronically scanned array (AESA). The planar AESA architecture employs a monolithic printed circuit board (PCB) with 768 active antenna elements at X-Band. Manufacturing physically large arrays with high element counts is discussed in relation to construction, assembly and yield considerations. Measured active array patterns of the ESA are also presented."}
{"_id": "5ec3ee90bbc5b23e748d82cb1914d1c45d85bdd9", "title": "A Millimeter-Wave (40\u201345 GHz) 16-Element Phased-Array Transmitter in 0.18-$\\mu$ m SiGe BiCMOS Technology", "text": "This paper demonstrates a 16-element phased-array transmitter in a standard 0.18-mum SiGe BiCMOS technology for Q-band satellite applications. The transmitter array is based on the all-RF architecture with 4-bit RF phase shifters and a corporate-feed network. A 1:2 active divider and two 1:8 passive tee-junction dividers constitute the corporate-feed network, and three-dimensional shielded transmission-lines are used for the passive divider to minimize area. All signals are processed differentially inside the chip except for the input and output interfaces. The phased-array transmitter results in a 12.5 dB of average power gain per channel at 42.5 GHz with a 3-dB gain bandwidth of 39.9-45.6 GHz. The RMS gain variation is < 1.3 dB and the RMS phase variation is < for all 4-bit phase states at 35-50 GHz. The measured input and output return losses are < -10 dB at 36.6-50 GHz, and <-10 dB at 37.6-50 GHz, respectively. The measured peak-to-peak group delay variation is plusmn 20 ps at 40-45 GHz. The output P-1dB is -5plusmn1.5 dBm and the maximum saturated output power is - 2.5plusmn1.5 dBm per channel at 42.5 GHz. The transmitter shows <1.8 dB of RMS gain mismatch and < 7deg of RMS phase mismatch between the 16 different channels over all phase states. A - 30 dB worst-case port-to-port coupling is measured between adjacent channels at 30-50 GHz, and the measured RMS gain and phase disturbances due to the inter-channel coupling are < 0.15 dB and < 1deg, respectively, at 35-50 GHz. All measurements are obtained without any on-chip calibration. The chip consumes 720 mA from a 5 V supply voltage and the chip size is 2.6times3.2 mm2."}
{"_id": "a1b40af260487c00a2031df1ffb850d3bc368cee", "title": "A 28GHz Bulk-CMOS dual-polarization phased-array transceiver with 24 channels for 5G user and basestation equipment", "text": "Developing next-generation cellular technology (5G) in the mm-wave bands will require low-cost phased-array transceivers [1]. Even with the benefit of beamforming, due to space constraints in the mobile form-factor, increasing TX output power while maintaining acceptable PA PAE, LNA NF, and overall transceiver power consumption is important to maximizing link budget allowable path loss and minimizing handset case temperature. Further, the phased-array transceiver will need to be able to support dual-polarization communication. An IF interface to the analog baseband is desired for low power consumption in the handset or user equipment (UE) active antenna and to enable use of arrays of transceivers for customer premises equipment (CPE) or basestation (BS) antenna arrays with a low-loss IF power-combining/splitting network implemented on an antenna backplane carrying multiple tiled antenna modules."}
{"_id": "0515ff7de41fd349b4bff34f7fe4e9c12a7fff47", "title": "7.2 A 28GHz 32-element phased-array transceiver IC with concurrent dual polarized beams and 1.4 degree beam-steering resolution for 5G communication", "text": "Next-generation mobile technology (5G) aims to provide an improved experience through higher data-rates, lower latency, and improved link robustness. Millimeter-wave phased arrays offer a path to support multiple users at high data-rates using high-bandwidth directional links between the base station and mobile devices. To realize this vision, a phased-array-based pico-cell must support a large number of precisely controlled beams, yet be compact and power efficient. These system goals have significant mm-wave radio interface implications, including scalability of the RFIC+antenna-array solution, increase in the number of concurrent beams by supporting dual polarization, precise beam steering, and high output power without sacrificing TX power efficiency. Packaged Si-based phased arrays [1\u20133] with nonconcurrent dual-polarized TX and RX operation [2,3], concurrent dual-polarized RX operation [3] and multi-IC scaling [3,4] have been demonstrated. However, support for concurrent dual-polarized operation in both RX and TX remains unaddressed, and high output power comes at the cost of power consumption, cooling complexity and increased size. The RFIC reported here addresses these challenges. It supports concurrent and independent dual-polarized operation in TX and RX modes, and is compatible with a volume-efficient, scaled, antenna-in-package array. A new TX/RX switch at the shared antenna interface enables high output power without sacrificing TX efficiency, and a t-line-based phase shifter achieves <1\u00b0 RMS error and <5\u00b0 phase steps for precise beam control."}
{"_id": "b7f442298f16684e4c1e69fd4c5002f02a783d27", "title": "A novel activation function for multilayer feed-forward neural networks", "text": "Traditional activation functions such as hyperbolic tangent and logistic sigmoid have seen frequent use historically in artificial neural networks. However, nowadays, in practice, they have fallen out of favor, undoubtedly due to the gap in performance observed in recognition and classification tasks when compared to their well-known counterparts such as rectified linear or maxout. In this paper, we introduce a simple, new type of activation function for multilayer feed-forward architectures. Unlike other approaches where new activation functions have been designed by discarding many of the mainstays of traditional activation function design, our proposed function relies on them and therefore shares most of the properties found in traditional activation functions. Nevertheless, our activation function differs from traditional activation functions on two major points: its asymptote and global extremum. Defining a function which enjoys the property of having a global maximum and minimum, turned out to be critical during our design-process since we believe it is one of the main reasons behind the gap observed in performance between traditional activation functions and their recently introduced counterparts. We evaluate the effectiveness of the proposed activation function on four commonly used datasets, namely, MNIST, CIFAR-10, CIFAR-100, and the Pang and Lee\u2019s movie review. Experimental results demonstrate that the proposed function can effectively be applied across various datasets where our accuracy, given the same network topology, is competitive with the state-of-the-art. In particular, the proposed activation function outperforms the state-of-the-art methods on the MNIST dataset."}
{"_id": "19bd1797921ec924b69b2b22999b555a5f2b982f", "title": "A Survey on Anomalies Detection Techniques and Measurement Methods", "text": "Dynamic research area has been applied and researched on anomaly detection in various domains. And various techniques have been proposed to identify unexpected items or events in datasets which differ from the norm. This review tries to provide a basic and structured overview of the anomaly detection techniques. Also, this review discusses major anomaly detection techniques using statistical based and machine learning based techniques. The outcome of this review may facilitate a better understanding of the different techniques in which research has been done on this topic by comparing the pros and cons of the identified techniques. In addition, this review also discusses on the measurement methods used by other researchers in validating their anomalies detection techniques."}
{"_id": "b3769619cdc490ec60aaa3025cd9402a74235f0c", "title": "Comparison of EEG signal features and ensemble learning methods for motor imagery classification", "text": "Classifying electroencephalogram (EEG) signal in Brain Computer Interface (BCI) is a useful methods to analysis different organs of human body and it can be used for communicate with the outside world and controlling external device. Accuracy classification of extracted features from EEG signals is a problem which many researcher try to improve it. Although many methods for extracting feature and classifying EEG signal have been proposed and developed, many of them suffer from extracting less accurate data from EEG signals. In this work, four signal feature extraction and three ensemble learning method have been reviewed and performances of classification techniques are compared for motor imagery task."}
{"_id": "c1efd29ddb6cb5cf82151ab25fbfc99e20354d9e", "title": "Linking GloVe with word2vec", "text": "The Global Vectors for word representation (GloVe), introduced by Jeffrey Pennington et al. [3] is reported to be an efficient and effective method for learning vector representations of words. State-of-the-art performance is also provided by skip-gram with negative-sampling (SGNS) [2] implemented in the word2vec tool. In this note, we explain the similarities between the training objectives of the two models, and show that the objective of SGNS is similar to the objective of a specialized form of GloVe, though their cost functions are defined differently."}
{"_id": "1475e1dea92f1b9c7d2c8ba3a9ead3b91febc6f1", "title": "Expanded Hemodialysis: A New Therapy for a New Class of Membranes.", "text": "A wide spectrum of molecules is retained in end-stage kidney disease, normally defined as uremic toxins. These solutes have different molecular weights and radii. Current dialysis membranes and techniques only remove solutes in the range of 50-15,000 Da, with limited or no capability to remove solutes in the middle to high molecular weight range (up to 50,000 Da). Improved removal has been obtained with high cut-off (HCO) membranes, with albumin loss representing a limitation to their practical application. Hemodiafiltration (HDF) at high volumes (>23 L/session) has produced some results on middle molecules and clinical outcomes, although complex hardware and high blood flows are required. A new class of membrane has been recently developed with a cut off (MWCO) close to the molecular weight of albumin. While presenting negligible albumin loss, these membranes have a very high retention onset (MWRO), allowing high clearances of solutes in a wide spectrum of molecular weights. These membranes originally defined (medium cut off) are probably better classified as high retention onset. The introduction of such membranes in the clinical routine has allowed the development of a new concept therapy called \"expanded hemodialysis\" (HDx). The new therapy is based on a special hollow fiber and dialyzer design. Its simple set-up and application offer the possibility to use it even in patients with suboptimal vascular access or even with an indwelling catheter. The system does not require a particular hardware or unusual nursing skill. The quality of dialysis fluid is, however, mandatory to ensure a safe conduction of the dialysis session. This new therapy is likely to modify the outcome of end-stage kidney disease patients, thanks to the enhanced removal of molecules traditionally retained by current dialysis techniques."}
{"_id": "91c50f239890e64c246130e94e795568a2e38c98", "title": "Modular Self-Reconfigurable Robot Systems [Grand Challenges of Robotics]", "text": "The field of modular self-reconfigurable robotic systems addresses the design, fabrication, motion planning, and control of autonomous kinematic machines with variable morphology. Modular self-reconfigurable systems have the promise of making significant technological advances to the field of robotics in general. Their promise of high versatility, high value, and high robustness may lead to a radical change in automation. Currently, a number of researchers have been addressing many of the challenges. While some progress has been made, it is clear that many challenges still exist. By illustrating several of the outstanding issues as grand challenges that have been collaboratively written by a large number of researchers in this field, this article has shown several of the key directions for the future of this growing field"}
{"_id": "881ad7b896a6e6715a231afe48bbc276167ecf73", "title": "The effects of caffeine on heart rate variability in newborns with apnea of prematurity", "text": "Objective:Apnea of prematurity is a common complication in premature newborns and caffeine is a widespread medication used to treat this complication. Caffeine may have adverse effects on the cardiovascular and central nervous system, yet its effects on the autonomic nervous system modulation of heart rate have not been studied in premature newborns, which was the objective of our study.Study Design:We prospectively studied 21 premature newborns who\u00a0were treated with caffeine. We analyzed heart rate variability by power spectral density and by dynamic nonlinear analyses methods.Result:There were no changes in heart rate, blood pressure or the autonomic nervous system tone following administration of caffeine, nor were the nonlinear dynamical properties of the system altered by caffeine.Conclusion:Caffeine does not have detrimental effects on heart rate variablility, heart rate or blood pressure in conventional doses given to premature newborns."}
{"_id": "cbfdddf49d76c9e2577c6a555bbf746bd69effe1", "title": "The end of the end of ideology.", "text": "The \"end of ideology\" was declared by social scientists in the aftermath of World War II. They argued that (a) ordinary citizens' political attitudes lack the kind of stability, consistency, and constraint that ideology requires; (b) ideological constructs such as liberalism and conservatism lack motivational potency and behavioral significance; (c) there are no major differences in content (or substance) between liberal and conservative points of view; and (d) there are few important differences in psychological processes (or styles) that underlie liberal versus conservative orientations. The end-of-ideologists were so influential that researchers ignored the topic of ideology for many years. However, current political realities, recent data from the American National Election Studies, and results from an emerging psychological paradigm provide strong grounds for returning to the study of ideology. Studies reveal that there are indeed meaningful political and psychological differences that covary with ideological self-placement. Situational variables--including system threat and mortality salience--and dispositional variables--including openness and conscientiousness--affect the degree to which an individual is drawn to liberal versus conservative leaders, parties, and opinions. A psychological analysis is also useful for understanding the political divide between \"red states\" and \"blue states.\""}
{"_id": "94983a1f9ccf11dfdba827c5113b0484a85590ee", "title": "DeltaIoT: A Self-Adaptive Internet of Things Exemplar", "text": "Internet of Things (IoT) consists of networked tiny embedded computers (motes) that are capable of monitoring and controlling the physical world. Examples range from building security monitoring to smart factories. A central problem of IoT is minimising the energy consumption of the motes, while guaranteeing high packet delivery performance, regardless of uncertainties such as sudden changes in traffic load and communication interference. Traditionally, to deal with uncertainties the network settings are either hand-tuned or over-provisioned, resulting in continuous network maintenance or inefficiencies. Enhancing the IoT network with self-adaptation can automate these tasks. This paper presents DeltaIoT, an exemplar that enables researchers to evaluate and compare new methods, techniques and tools for self-adaptation in IoT. DeltaIoT is the first exemplar for research on self-adaptation that provides both a simulator for offline experimentation and a physical setup that can be accessed remotely for real-world experimentation."}
{"_id": "c3b56745ca11a97daf08c2a97834e8f5c625077d", "title": "Boosting energy efficiency with mirrored data block replication policy and energy scheduler", "text": "Energy efficiency is one of the major challenges in big datacenters. To facilitate processing of large data sets in a distributed fashion, the MapReduce programming model is employed in these datacenters. Hadoop is an open-source implementation of MapReduce which contains a distributed file system. Hadoop Distributed File System provides a data block replication scheme to preserve reliability and data availability. The distribution of the data block replicas over the nodes is performed randomly by meeting some constraints (e.g., preventing storage of two replicas of a data block on a single node). This study makes use of flexibility in the data block placement policy to increase energy efficiency in datacenters. Furthermore, inspired by Zaharia et al.'s delay scheduling algorithm, a scheduling algorithm is introduced, which takes into account energy efficiency in addition to fairness and data locality properties. Computer simulations of the proposed method suggest its superiority over Hadoop's standard settings."}
{"_id": "dd3331156d892c9fc0c669d50d0302c9ecdeb056", "title": "Interactive Indirect Illumination Using Voxel Cone Tracing", "text": "Indirect illumination is an important element for realistic image synthesis, but its computation is expensive and highly dependent on the complexity of the scene and of the BRDF of the involved surfaces. While off-line computation and pre-baking can be acceptable for some cases, many applications (games, simulators, etc.) require real-time or interactive approaches to evaluate indirect illumination. We present a novel algorithm to compute indirect lighting in real-time that avoids costly precomputation steps and is not restricted to low-frequency illumination. It is based on a hierarchical voxel octree representation generated and updated on the fly from a regular scene mesh coupled with an approximate voxel cone tracing that allows for a fast estimation of the visibility and incoming energy. Our approach can manage two light bounces for both Lambertian and glossy materials at interactive framerates (25-70FPS). It exhibits an almost scene-independent performance and can handle complex scenes with dynamic content thanks to an interactive octree-voxelization scheme. In addition, we demonstrate that our voxel cone tracing can be used to efficiently estimate Ambient Occlusion."}
{"_id": "ab7ac31bbb31b85d23c697be743bf0786d4883b0", "title": "Compact low-voltage power-efficient operational amplifier cells for VLSI", "text": "\u201cSome Design Aspects of a Two-Stage Rail-to-Rail CMOS Op Amp' by Sander L. J. Gierkink, Peter J. Holzmann, Remco J. Wiegerink and Roelof F. Wassenaar, Analog Integrated Circuits and Signal Processing, Vol. 21, No. 2, Nov. 1999, pp 143-152. \u201cCompact Low-Voltage Power-Efficient Operational Amplifier Cells for VLSI by Klaas-Jan de Langen and Johan H. Huijsing, IEEE Journal of Solid State Circuits, vol. 33, No. 10, Oct. 1998, pp. 1482\u20131496. \u201cDesign Aspects of a Rail-to-Rail CMOS Op Amp' by Glierkink et al., Mesa Research institute, ECT-97-36, pp. 23-28."}
{"_id": "c368e5f5d6390ecd6b431f3b535c707ea8b21993", "title": "Mining program workflow from interleaved traces", "text": "Successful software maintenance is becoming increasingly critical due to the increasing dependence of our society and economy on software systems. One key problem of software maintenance is the difficulty in understanding the evolving software systems. Program workflows can help system operators and administrators to understand system behaviors and verify system executions so as to greatly facilitate system maintenance. In this paper, we propose an algorithm to automatically discover program workflows from event traces that record system events during system execution. Different from existing workflow mining algorithms, our approach can construct concurrent workflows from traces of interleaved events. Our workflow mining approach is a three-step coarse-to-fine algorithm. At first, we mine temporal dependencies for each pair of events. Then, based on the mined pair-wise tem-poral dependencies, we construct a basic workflow model by a breadth-first path pruning algorithm. After that, we refine the workflow by verifying it with all training event traces. The re-finement algorithm tries to find out a workflow that can interpret all event traces with minimal state transitions and threads. The results of both simulation data and real program data show that our algorithm is highly effective."}
{"_id": "b8090b7b7efa0d971d3e1174facea60129be09c6", "title": "A SiC-based isolated DC/DC converter for high density data center applications", "text": "Data centers are increasing in number and size at astounding rates, while operational cost, thermal management, size, and performance continue to be the driving metrics for the power subsystems in the associated computing equipment. This paper presents a SiC-based phase-shifted full bridge (PSFB) converter designed for 10kW output power and targeted at data center applications. The design approach focused on tuning of the converter efficiency and minimizing the thermal management system, resulting in a high-density converter. A unique thermal management system has also been employed, resulting in both increased power density and better thermal management. In this paper, the implementation of this converter is described in detail, along with empirical results, both electrical and thermal."}
{"_id": "e0b65eb21c041ca01043739af988ee011708ec8b", "title": "Classification of Brain MR Images using Texture Feature Extraction", "text": "Alzheimer\u2019s disease (AD), is a degenerative disease which leads to memory loss and problems with thinking and behaviour.AD is a type of dementia which accounts for an estimated 60% to 80% of cases. Accurate diagnosis depends on the identification of discriminative features of AD. Recently, different feature extraction methods are used for the classification of AD. In this paper, we proposed a classification framework to select features, which are extracted using Gray-Level Co-occurrence Matrix (GLCM) method to distinguish between the AD and the Normal Control (NC). In order to evaluate the proposed method, we have performed evaluations on the MRI acquiring from the OASIS database. The proposed method yields an average testing accuracy of 75.71% which indicates that the proposed method can differentiate AD and NC satisfactorily."}
{"_id": "d62c281dffed526cf12937c4b320a2b74226ae05", "title": "Eavesdropping Attacks on High-Frequency RFID Tokens", "text": "RFID systems often use near-field magnetic coupling to implement communication channels. The advertised operational range of these channels is less than 10 cm and therefore several implemented systems assume that the communication channel is location limited and therefore relatively secure. Nevertheless, there have been repeated questions raised about the vulnerability of these near-field systems against eavesdropping and skimming attacks. In this paper I revisit the topic of RFID eavesdropping attacks, surveying previous work and explaining why the feasibility of practical attacks is still a relevant and novel research topic. I present a brief overview of the radio characteristics for popular HF RFID standards and present some practical results for eavesdropping experiments against tokens adhering to the ISO 14443 and ISO 15693 standards. Finally, I discuss how an attacker could construct a low-cost eavesdropping device using easy to obtain parts and reference designs."}
{"_id": "9b0b961fa83fb529b9eaa17953bc83ff6e71c1ab", "title": "MS-TWSVM: Mahalanobis distance-based Structural Twin Support Vector Machine", "text": "The distribution information of data points in two classes as the structural information is inserted into the classifiers to improve their generalization performance. Recently many algorithms such as S-TWSVM has used this information to construct two nonparallel hyperplanes which each one lies as close as possible to one class and being far away from the other. It is well known that different classes have different data distribution in real world problems, thus the covariance matrices of these classes are not the same. In these situations, the Mahalanobis is often more popular than Euclidean as a measure of distance. In this paper, in addition to apply the idea of S-TWSVM, the classical Euclidean distance is replaced by Mahalanobis distance which leads to simultaneously consider the covariance matrices of the two classes. By this modification, the orientation information in two classes can be better exploited than S-TWSVM. The experiments indicate our proposed algorithm is often superior to other learning algorithms in terms of generalization performance."}
{"_id": "06881a2960a83e6cf075c30888fe69af405a8bbc", "title": "- 1-Simultaneous measurement of impulse response and distortion with a swept-sine technique", "text": "A novel measurement technique of the transfer function of weakly not-linear, approximately time-invariant systems is presented. The method is implemented with low-cost instrumentation; it is based on an exponentially-swept sine signal. It is applicable to loudspeakers and other audio components, but also to room acoustics measurements. The paper presents theoretical description of the method and experimental verification in comparison with MLS."}
{"_id": "4ddbeb946a4ff4853f2e98c547bb0b39cc6a4480", "title": "Metamaterial-Based Antennas: Research and Developments", "text": "A brief review of metamaterials and their applications to antenna systems is given. Artificial magnetic conductors and electrically small radiating and scattering systems are emphasized. Single negative, double negative, and zero-index metamaterial systems are discussed as a means to manipulate their size, efficiency, bandwidth, and directivity characteristics. key words: metamaterials, electrically small antennas, complex media, artificial magnetic conductors"}
{"_id": "773c85e88bbf54de684c85c9104750dce79f9688", "title": "The high-conductance state of neocortical neurons in vivo", "text": "Intracellular recordings in vivo have shown that neocortical neurons are subjected to an intense synaptic bombardment in intact networks and are in a 'high-conductance' state. In vitro studies have shed light on the complex interplay between the active properties of dendrites and how they convey discrete synaptic inputs to the soma. Computational models have attempted to tie these results together and predicted that high-conductance states profoundly alter the integrative properties of cortical neurons, providing them with a number of computational advantages. Here, we summarize results from these different approaches, with the aim of understanding the integrative properties of neocortical neurons in the intact brain."}
{"_id": "af1b1279bb28dc24488cf3050f86257ad1cd02b3", "title": "Inferring Destination from Mobility Data", "text": "Destination prediction in a moving vehicle has several applications such as alternative route recommendations even in cases where the driver has not entered their destination into the system. In this paper a hierarchical approach to destination prediction is presented. A Discrete Time Markov Chain model is used to make an initial prediction of a general region the vehicle might be travelling to. Following that a more complex Bayesian Inference Model is used to make a fine grained prediction within that destination region. The model is tested on a dataset of 442 taxis operating in Porto, Portugal. Experiments are run on two maps. One is a smaller map concentrating specificially on trips within the Porto city centre and surrounding areas. The second map covers a much larger area going as far as Lisbon. We achieve predictions for Porto with average distance error of less than 0.6 km from early on in the trip and less than 1.6 km dropping to less than 1 km for the wider area."}
{"_id": "15c6a03461ce80ef39587bf86e8b88d4b20457a2", "title": "Non-psychotropic plant cannabinoids: new therapeutic opportunities from an ancient herb.", "text": "Delta(9)-tetrahydrocannabinol binds cannabinoid (CB(1) and CB(2)) receptors, which are activated by endogenous compounds (endocannabinoids) and are involved in a wide range of physiopathological processes (e.g. modulation of neurotransmitter release, regulation of pain perception, and of cardiovascular, gastrointestinal and liver functions). The well-known psychotropic effects of Delta(9)-tetrahydrocannabinol, which are mediated by activation of brain CB(1) receptors, have greatly limited its clinical use. However, the plant Cannabis contains many cannabinoids with weak or no psychoactivity that, therapeutically, might be more promising than Delta(9)-tetrahydrocannabinol. Here, we provide an overview of the recent pharmacological advances, novel mechanisms of action, and potential therapeutic applications of such non-psychotropic plant-derived cannabinoids. Special emphasis is given to cannabidiol, the possible applications of which have recently emerged in inflammation, diabetes, cancer, affective and neurodegenerative diseases, and to Delta(9)-tetrahydrocannabivarin, a novel CB(1) antagonist which exerts potentially useful actions in the treatment of epilepsy and obesity."}
{"_id": "442fb4e092c0b63721d8bef973c2affbac1f708b", "title": "A topology for three-stage Solid State Transformer", "text": "Solid State Transformer (SST) is a new type of power transformer based on power electronic converters and high frequency transformers. The SST realizes voltage transformation, galvanic isolation, and power quality improvements in a single device. In the literature, a number of topologies have been introduced for the SST. In this work, employing a modular multilevel converter, a new SST topology is introduced which provides not only high-voltage AC (HVAC), low-voltage AC (LVAC) and low-voltage DC (LVDC) ports, but also high-voltage DC (HVDC) port. Besides, the proposed topology is easily scalable to higher voltage levels."}
{"_id": "1a0a415178d20754418987e312681f3fb0b40257", "title": "Unsupervised Models for Named Entity Classification", "text": "This paper discusses the use of unlabeled examples for the problem of named entity classification. A large number of rules is needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. However, we show that the use of unlabeled data can reduce the requirements for supervision to just 7 simple \u201cseed\u201d rules. The approach gains leverage from natural redundancy in the data: for many named-entity instances both the spelling of the name and the context in which it appears are sufficient to determine its type. We present two algorithms. The first method uses a similar algorithm to that of (Yarowsky 95), with modifications motivated by (Blum and Mitchell 98). The second algorithm extends ideas from boosting algorithms, designed for supervised learning tasks, to the framework suggested by (Blum and Mitchell 98)."}
{"_id": "1c2381e357c6829c551baee403650e6022ae6c40", "title": "Semi-Supervised Learning for Natural Language", "text": "Statistical supervised learning techniques have been successful for many natural language processing tasks, but they require labeled datasets, which can be expensive to obtain. On the other hand, unlabeled data (raw text) is often available \"for free\" in large quantities. Unlabeled data has shown promise in improving the performance of a number of tasks, e.g. word sense disambiguation, information extraction, and natural language parsing. In this thesis, we focus on two segmentation tasks, named-entity recognition and Chinese word segmentation. The goal of named-entity recognition is to detect and classify names of people, organizations, and locations in a sentence. The goal of Chinese word segmentation is to find the word boundaries in a sentence that has been written as a string of characters without spaces. Our approach is as follows: In a preprocessing step, we use raw text to cluster words and calculate mutual information statistics. The output of this step is then used as features in a supervised model, specifically a global linear model trained using the Perceptron algorithm. We also compare Markov and semi-Markov models on the two segmentation tasks. Our results show that features derived from unlabeled data substantially improves performance, both in terms of reducing the amount of labeled data needed to achieve a certain performance level and in terms of reducing the error using a fixed amount of labeled data. We find that sometimes semi-Markov models can also improve performance over Markov models. Thesis Supervisor: Michael Collins Title: Assistant Professor, CSAIL"}
{"_id": "967972821567b8a34dc058c9fbf60c4054dc3b69", "title": "A framework and graphical development environment for robust NLP tools and applications.", "text": ""}
{"_id": "0104063400e6d69294edc95fb14c7e8fac347f6a", "title": "Max-Margin Markov Networks", "text": "In typical classification tasks, we seek a function which assigns a label to a single object. Kernel-based approaches, such as support vector machines (SVMs), which maximize the margin of confidence of the classifier, are the method of choice for many such tasks. Their popularity stems both from the ability to use high-dimensional feature spaces, and from their strong theoretical guarantees. However, many real-world tasks involve sequential, spatial, or structured data, where multiple labels must be assigned. Existing kernel-based methods ignore structure in the problem, assigning labels independently to each object, losing much useful information. Conversely, probabilistic graphical models, such as Markov networks, can represent correlations between labels, by exploiting problem structure, but cannot handle high-dimensional feature spaces, and lack strong theoretical generalization guarantees. In this paper, we present a new framework that combines the advantages of both approaches: Maximum margin Markov (M ) networks incorporate both kernels, which efficiently deal with highdimensional features, and the ability to capture correlations in structured data. We present an efficient algorithm for learning M networks based on a compact quadratic program formulation. We provide a new theoretical bound for generalization in structured domains. Experiments on the task of handwritten character recognition, demonstrate very significant gains over previous approaches."}
{"_id": "016d34269a505a74d1f481314b30c13049d993bb", "title": "Bottom-Up Relational Learning of Pattern Matching Rules for Information Extraction", "text": "Information extraction is a form of shallow text processing that locates a specified set of relevant items in a natural-language document. Systems for this task require significant domain-specific knowledge and are time-consuming and difficult to build by ha nd, making them a good application for machine learning. We present an algorithm, R APIER, that uses pairs of sample documents and filled templates to induce pattern-match rules that dire ctly extract fillers for the slots in the template. RAPIER is a bottom-up learning algorithm that incorporates techni ques from several inductive logic programming systems. We have implemented t h algorithm in a system that allows patterns to have constraints on the words, part-of-speech t ags, and semantic classes present in the filler and the surrounding text. We present encouraging expe rimental results on two domains."}
{"_id": "9ec3c91b255cdd4e47102f40da4d73c8bf23d6f6", "title": "Smart clothing: Perceived benefits vs. perceived fears", "text": "Smart textile technologies integrate computer functionality into textiles. Since a few years, smart clothing has been coming up in the sport and health sector and is increasingly implemented in everyday objects within private spaces. Especially the use of textiles for medical reasons and their potential use within Ambient Assisted Living-Concepts (AAL) make it necessary to understand users' perspectives on these technologies and the willingness to use them. Today, the understanding in which way individual attitudes and emotional and cognitive abilities, may impact the acceptance of pervasive health care technologies, is restricted. This research is focused on the users' hopes and fears towards smart clothing and examines perceived benefits and barriers. As women have a higher life expectancy and will dominate the group of old people in the future - gender was chosen as one central factor of interest. As the second factor we examined technical experience in order to learn if the acceptance for smart clothing is connected to the degree of users previous experience with technology. Outcomes revealed both factors - gender and technical experience - to be decisive factors for the acceptance of smart clothing. Generally, women and persons with low technical experience show considerable caveats towards the usage of smart clothing technologies what becomes most evident in the perceived barriers and fears connected to the usage of this new technology."}
{"_id": "c817c3a0acfc49f7eca0e9855b550e242aa0a6d5", "title": "Generalized Zero-Shot Learning with Deep Calibration Network", "text": "A technical challenge of deep learning is recognizing target classes without seen data. Zero-shot learning leverages semantic representations such as attributes or class prototypes to bridge source and target classes. Existing standard zero-shot learning methods may be prone to overfitting the seen data of source classes as they are blind to the semantic representations of target classes. In this paper, we study generalized zero-shot learning that assumes accessible to target classes for unseen data during training, and prediction on unseen data is made by searching on both source and target classes. We propose a novel Deep Calibration Network (DCN) approach towards this generalized zero-shot learning paradigm, which enables simultaneous calibration of deep networks on the confidence of source classes and uncertainty of target classes. Our approach maps visual features of images and semantic representations of class prototypes to a common embedding space such that the compatibility of seen data to both source and target classes are maximized. We show superior accuracy of our approach over the state of the art on benchmark datasets for generalized zero-shot learning, including AwA, CUB, SUN, and aPY."}
{"_id": "ff8a9bac9d6f68e149515bd67a9582ebda51afd7", "title": "A Strategic Empty Container Logistics Optimization in a Major Shipping Company", "text": "In this paper, we present a system that Compa\u00f1\u00eda Sud Americana de Vapores (CSAV), one of the world\u2019s largest shipping companies, developed to support its decisions for repositioning and stocking empty containers. CSAV\u2019s main business is shipping cargo in containers to clients worldwide. It uses a fleet of about 700,000 TEU containers of different types, which are carried by both CSAV-owned and third-party ships. Managing the container fleet is complex; CSAV must make thousands of decisions each day. In particular, imbalances exist among the regions. For example, China often has a deficit of empty containers and is a net importer; Saudi Arabia often has a surplus and is a net exporter. CSAV and researchers from the University of Chile developed the Empty Container Logistics Optimization System (ECO) to manage this imbalance. ECO\u2019s multicommodity, multiperiod model manages the repositioning problem, whereas an inventory model determines the safety stock required at each location. CSAV uses safety stock to ensure high service levels despite uncertainties, particularly in the demand for containers. A hybrid forecasting system supports both the inventory and the multicommodity network flow model. Major improvements in data gathering, real-time communications, and automation of data handling were needed as input to the models. A collaborative Web-based optimization framework allows agents from different zones to interact in decision making. The use of ECO led to direct savings of $81 million for CSAV, a reduction in inventory stock of 50 percent, and an increase in container turnover of 60 percent. Moreover, the system helped CSAV to become more efficient and to overcome the 2008 economic crisis."}
{"_id": "bc0cc6577d15bb8ba71ec7579c1f1cfc006c1603", "title": "Consumer Search Behavior in Online Shopping Environments", "text": "This paper explores search behavior of online shoppers. Information economics literature suggests that search cost in electronic markets has essentially been reduced to zero as consumers are able to use powerful search tools free of charge to easily find and compare product and shopping information on the Internet. In the present research, however, we present a research model proposing that users need to spend time and effort when completing search tasks resulting in significant search cost and a trade-off between search cost and search performance. Preliminary findings from an Internet experiment indicate that search task complexity, search engine capability, search strategy and user experience are important factors determining search cost and performance."}
{"_id": "18db25c1a47a17120785fb530bfc8f88727810a0", "title": "Colony of NPUs : Scaling the Efficiency of Neural Accelerators", "text": "Trading small amounts of output accuracy to obtain significant performance improvement or reduce power consumption is a promising research direction. Many modern applications from different domains such as image and signal processing, computer vision, data mining, machine learning and speech recognition are amenable to approximate computing. In many cases, their output is interpreted by humans, so infrequent variation in outputs are not noticeable by users. Moreover, the input of these applications is noisy which prevents strict constraints on the output of the system. Hence, these domains have significant potential for the aforementioned trade-offs, and these applications are ideal targets for approximate computing. Existing approaches for approximate computing include modifying the ISA [6], compiler [1], programming language [2, 4], underlying hardware [14] or the entire framework [3, 8, 10]. Approximation accelerators [5, 7, 13] use some of these methods to trade off accuracy for higher performance or energy savings. These accelerators require the programmer to annotate code sections which are amenable for approximation. Approximate parts are offloaded to the accelerator and other parts are executed by the CPU at run time. Esmaeilzadeh et. al. [7] proposed a Neural Processing Unit (NPU) as a programmable approximate accelerator. The key idea is to train a neural network (nn) to mimic an approximable region of original code and replace that with an efficient computation of the learned model. Although the proposed approximate accelerators such as NPU demonstrate acceptable results on the benchmarks from different domains, they have some shortcomings. Firstly, different invocations of a program might produce varying output qualities because the output quality is dependent on the input values. Consequently, using an NPU with fixed configuration for a wide range of inputs may produce outputs with poor accuracies. On the other hand, if the output quality drops below a determined threshold, one way to improve the quality is re-executing the whole program on the exact hardware. However, the overhead of this recovery process may offset the gains of approximation. Secondly, existing techniques usually measure the output quality by averaging errors in individual output elements, e.g., pixels in an image. Previous works in approximate computing [9, 11, 12] show that although most of the output elements have small errors, there exist a few output elements that have considerably large errors which may degrade the whole user experience. Therefore, some inputs might need more specialized methods to deal with the issue of output quality. Lastly, it is challenging to tune the output quality of an approximate hardware dynamically. If different NPUs with different configurations are available in the system, defining the number of active NPUs per invocation will be a reasonable knob for changing output quality based on users preferences. NPU Output Input Monolithic NPU"}
{"_id": "0cc899fbc4e1e99cc59335b51afee580a7c245a1", "title": "Separating the real from the synthetic: minutiae histograms as fingerprints of fingerprints", "text": "In this study we show that by the current state-of-the-art synthetically generated fingerprints can easily be discriminated from real fingerprints. We propose a method based on second order extended minutiae histograms (MHs) which can distinguish between real and synthetic prints with very high accuracy. MHs provide a fixed-length feature vector for a fingerprint which are invariant under rotation and translation. This \u2019test of realness\u2019 can be applied to synthetic fingerprints produced by any method. In this work, tests are conducted on the 12 publicly available databases of FVC2000, FVC2002 and FVC2004 which are well established benchmarks for evaluating the performance of fingerprint recognition algorithms; 3 of these 12 databases consist of artificial fingerprints generated by the SFinGe software. Additionally, we evaluate the discriminative performance on a database of synthetic fingerprints generated by the software of Bicz versus real fingerprint images. We conclude with suggestions for the improvement of synthetic fingerprint generation."}
{"_id": "856fd80398c368d2620350b60580a543df8973ec", "title": "Artificial Neural Networks Modeling in Water Resources Engineering : Infrastructure and Applications", "text": "The use of artificial neural network (ANN) modeling for prediction and forecasting variables in water r esources engineering are being increasing rapidly. Infrastru ctural applications of ANN in terms of selection of inputs, architectur e of networks, training algorithms, and selection of training para meters in different types of neural networks used in water resources en gin ering have been reported. ANN modeling conducted for water res ources engineering variables (river sediment and discharge ) published in high impact journals since 2002 to 2011 have been e xamined and presented in this review. ANN is a vigorous techniq ue to develop immense relationship between the input and output v ariables, and able to extract complex behavior between the water resources variables such as river sediment and discharge. It can produce robust prediction results for many of the water resources engineering problems by appropriate learning from a set of exam ples. It is important to have a good understanding of the input and output variables from a statistical analysis of the data b efore network modeling, which can facilitate to design an efficie nt network. An appropriate training based ANN model is able to ado pt the physical understanding between the variables and may generat more effective results than conventional prediction techniques. Keywords\u2014ANN, discharge, modeling, prediction, sediment,"}
{"_id": "96818e0c101c0edf7ad3a855ca806e381262252c", "title": "WIRELESS BIO-RADAR SENSOR FOR HEARTBEAT AND RESPIRATION DETECTION", "text": "In this study, a wireless bio-radar sensor was designed to detect a human heartbeat and respiration signals without direct skin contact. In order to design a wireless bio-radar sensor quantitatively, the signal-to-noise ratio (SNR) in the baseband output of a sensor should be calculated. Therefore, we analyzed the SNR of the wireless bio-radar sensor, considering the signal power attenuation in a human body and all kinds of noise sources. Especially, we measured a residual phase noise of a typical free-running oscillator and used its value for the SNR analysis. Based on these analysis and the measurement results, a compact, low-cost 2.4 GHz direct conversion bio-radar sensor was designed and implemented in a printed circuit board. The demonstrated sensor consists of two printed antennas, a"}
{"_id": "b13d04c9eb5940f443f0c64e88f2bbb573a5fa06", "title": "Sexually transmitted diseases treatment guidelines, 2010.", "text": "These guidelines for the treatment of persons who have or are at risk for sexually transmitted diseases (STDs) were updated by CDC after consultation with a group of professionals knowledgeable in the field of STDs who met in Atlanta on April 18-30, 2009. The information in this report updates the 2006 Guidelines for Treatment of Sexually Transmitted Diseases (MMWR 2006;55[No. RR-11]). Included in these updated guidelines is new information regarding 1) the expanded diagnostic evaluation for cervicitis and trichomoniasis; 2) new treatment recommendations for bacterial vaginosis and genital warts; 3) the clinical efficacy of azithromycin for chlamydial infections in pregnancy; 4) the role of Mycoplasma genitalium and trichomoniasis in urethritis/cervicitis and treatment-related implications; 5) lymphogranuloma venereum proctocolitis among men who have sex with men; 6) the criteria for spinal fluid examination to evaluate for neurosyphilis; 7) the emergence of azithromycin-resistant Treponema pallidum; 8) the increasing prevalence of antimicrobial-resistant Neisseria gonorrhoeae; 9) the sexual transmission of hepatitis C; 10) diagnostic evaluation after sexual assault; and 11) STD prevention approaches."}
{"_id": "7ec432f17555c345988b16cf076bfabaabb0dabf", "title": "Continental Airlines Flies High with Real-Time Business Intelligence", "text": "Real-time business intelligence (BI) is taking Continental Airlines to new heights. Powered by a real-time data warehouse, the company has dramatically changed all aspects of its business. Continental\u2019s president and COO, Larry Kellner, describes the impact of real-time BI in the following way: \u201cReal-time BI is critical to the accomplishment of our business strategy and has created significant business benefits.\" In fact, Continental has realized more than $500 million in cost savings and revenue generation over the past six years from its BI initiatives, producing an ROI of more than 1,000 percent."}
{"_id": "9be1df07e4ffcec086af9fa6ff6177708bba6bb1", "title": "Dry-Contact and Noncontact Biopotential Electrodes: Methodological Review", "text": "Recent demand and interest in wireless, mobile-based healthcare has driven significant interest towards developing alternative biopotential electrodes for patient physiological monitoring. The conventional wet adhesive Ag/AgCl electrodes used almost universally in clinical applications today provide an excellent signal but are cumbersome and irritating for mobile use. While electrodes that operate without gels, adhesives and even skin contact have been known for many decades, they have yet to achieve any acceptance for medical use. In addition, detailed knowledge and comparisons between different electrodes are not well known in the literature. In this paper, we explore the use of dry/noncontact electrodes for clinical use by first explaining the electrical models for dry, insulated and noncontact electrodes and show the performance limits, along with measured data. The theory and data show that the common practice of minimizing electrode resistance may not always be necessary and actually lead to increased noise depending on coupling capacitance. Theoretical analysis is followed by an extensive review of the latest dry electrode developments in the literature. The paper concludes with highlighting some of the novel systems that dry electrode technology has enabled for cardiac and neural monitoring followed by a discussion of the current challenges and a roadmap going forward."}
{"_id": "ffd2667a88b3de52ed04583566831d54c08cc227", "title": "Intermanual apparent tactile motion on handheld tablets", "text": "Handheld and wearable devices frequently engage users with simple haptic feedback, such as alerting, shaking, and pulsating. Here we explored intermanual apparent tactile motion-illusory movement between two hands-as a means to enrich such feedback. A series of psychophysical experiments determined the control space for generating smooth and consistent motion across the hands while users held the device. Experiment 1 calibrated the system and showed that vibrotactile detection thresholds decreased with increasing frequency, with similar trends for both hands. Experiment 2 measured effects of vibrotactile parameters on perceived motion. Both duration and temporal separation of stimuli, but not frequency and amplitude, affected subjective motion ratings. In Experiment 3, subjective ratings showed that stimuli with gradual onsets produced a stronger percept of motion than those with abrupt onsets. Finally, Experiment 4 determined a multimodal factor to match moving visual cues across the screen to moving tactile motion across hands. Our results showed compression of visual duration by the tactile system by a factor of approximately 1/3 at two test frequencies. The results of this research are useful for media designers and developers to generate reliable motion across the hands and integrate haptic motion with visual media."}
{"_id": "5fc541e1431aea41261569af2f9473c8fbd0e5ec", "title": "Keypoint Detection and Local Feature Matching for Textured 3D Face Recognition", "text": "Holistic face recognition algorithms are sensitive to expressions, illumination, pose, occlusions and makeup. On the other hand, feature-based algorithms are robust to such variations. In this paper, we present a feature-based algorithm for the recognition of textured 3D faces. A novel keypoint detection technique is proposed which can repeatably identify keypoints at locations where shape variation is high in 3D faces. Moreover, a unique 3D coordinate basis can be defined locally at each keypoint facilitating the extraction of highly descriptive pose invariant features. A 3D feature is extracted by fitting a surface to the neighborhood of a keypoint and sampling it on a uniform grid. Features from a probe and gallery face are projected to the PCA subspace and matched. The set of matching features are used to construct two graphs. The similarity between two faces is measured as the similarity between their graphs. In the 2D domain, we employed the SIFT features and performed fusion of the 2D and 3D features at the feature and score-level. The proposed algorithm achieved 96.1% identification rate and 98.6% verification rate on the complete FRGC v2 data set."}
{"_id": "20ffcde31cb03e92f85d3509d2b979706685055f", "title": "C-ICAMA, a centralized intelligent channel assigned multiple access for multi-layer ad-hoc wireless networks with UAVs", "text": "Multi-layer ad hoc wireless networks with UAVs is an ideal infrastructure to establish a rapidly deployable wireless communication system any time any where in the world for military applications. In this tactical environment, information traflc is quite asynimetric. Ground jighting units are information consumers and receive jar more data than they transmit. The up-link is used for sending requests for information and some networking configuration overhead with a f a 0 kilobits, while the down-link is used to return the data requested with megabits size (e.g. multimedia file of images and charts). Centralized Intelligent Channel Assigned Multiple Access(C-ICAMA) is a MAC layer protocol proposed for ground backbone nodes to access UAV (Unmanned Aerie1 Vehicle) to solve the highly asymmetric data trafic in this tactical environment. With it\u2019s intelligent scheduling algorithm, it can dynamically allocate bandwidth for up-link and downlink to jit the instantaneous status of symmetric trafJic. The results of C-ICAMA is very promising, due to the dynamic bandwidth allocation of asymmetric i.ip-link and down-link, the access delay is tremendously reduced,"}
{"_id": "fec484652cd844bc7f077d9c0d968be818fb19bb", "title": "Sentry-Based Power Management in Wireless Sensor Networks", "text": "This paper presents a sentry-based approach to power management in wireless sensor networks for applications such as intruder detection and tracking. To minimize average power consumption while maintaining sufficient node density for coarse sensing, nodes are partitioned dynamically into two sets: sentries and non-sentries. Sentry nodes provide sufficient coverage for continuous monitoring and basic communication services. Non-sentry nodes sleep for designated periods of time to conserve power, and switch to full power only when needed to provide more refined sensing for tracking. Non-sentry nodes check for beacons from sentry nodes to determine when they should remain on. Experimental results are presented demonstrating trade-offs between power savings and tracking performance for a network of seventeen nodes using the first implementation of a basic sentry-based power management scheme. The paper concludes with a brief description of a full set of power-management services being implemented as middle-ware for general wireless sensor applications."}
{"_id": "42eaf468b9b50768bf61166f39d06f05b53d6902", "title": "CSDSM: Cognitive switch-based DDoS sensing and mitigation in SDN-driven CDNi word", "text": "Content Delivery Networks (CDNs) are increasingly deployed for their efficient content delivery and are often integrated with Software Defined Networks (SDNs) to achieve centrality and programmability of the network. However, these networks are also an attractive target for network attackers whose main goal is to exhaust network resources. One attack approach is to over-flood the OpenFlow switch tables containing routing information. Due to the increasing number of different flooding attacks such as DDoS, it becomes difficult to distinguish these attacks from normal traffic when evaluated with traditional attack detection methods. This paper proposes an architectural method that classifies and defends all possible forms of DDoS attack and legitimate Flash Crowd traffic using a segregated dimension functioning cognitive process based in a controller module. Our results illustrate that the proposed model yields significantly enhanced performance with minimal false positives and false negatives when classified with optimal Support Vector Machine and Logistic Regression algorithms. The traffic classifications initiate deployment of security rules to the OpenFlow switches, preventing new forms of flooding attacks. To the best of our knowledge, this is the first work conducted on SDN-driven CDNi used to detect and defend against all possible DDoS attacks through traffic segregated dimension functioning coupled with cognitive classification."}
{"_id": "6f5b6b6e94dca127afe5c6effa62a4c6771640d8", "title": "Empirical formula for propagation loss in land mobile radio services", "text": "An empirical formula for propagation loss is derived from Okumura's report in order to put his propagation prediction method to computational use. The propagation loss in an urban area is presented in a simple form: A + B log10R, where A and B are frequency and antenna height functions and R is the distance. The introduced formula is applicable to system designs for UHF and VHF land mobile radio services, with a small formulation error, under the following conditions: frequency range 100-1500 MHz, distance 1-20 km, base station antenna height 30-200 m, and vehicular antenna height 1-10 m."}
{"_id": "2e3b981b9f3751fc5873f77ad2aa7789c3e1d1d2", "title": "Comprehensive Dataset of Broadcast Soccer Videos", "text": "In the absence of a rich dataset, sports video analysis remains a challenging task. Shot analysis, event detection, and player tracking are important aspects of soccer game research, however currently available datasets focus mainly on single-type annotations, and have several limitations when applied to full video analysis; these include a lack of training samples, temporal annotations, and rich labels of different types. In this paper, we focus on broadcast soccer videos and present a comprehensive dataset for analysis. In its current version, the dataset contains 222 broadcast soccer videos, totaling 170 video hours. The dataset covers three annotation types: (1) A shot boundary with two shot transition types, and shot type annotations with five shot types; (2) Event annotations with 11 event labels, and a story annotation with 15 story labels at coarser granularity; and (3) bounding boxes of the players under analysis in a subset of 19908 video frames. We hope that the introduction of this dataset will enable the research community to develop soccer video analysis techniques."}
{"_id": "e4a0312a232d72b477154824fc3b864055aab899", "title": "SpaSM: A MATLAB Toolbox for Sparse Statistical Modeling", "text": "Applications in biotechnology such as gene expression analysis and image processing have led to a tremendous development of statistical methods with emphasis on reliable solutions to severely underdetermined systems. Furthermore, interpretations of such solutions are of importance, meaning that the surplus of inputs has been reduced to a concise model. At the core of this development are methods which augment the standard linear models for regression, classification and decomposition such that sparse solutions are obtained. This toolbox aims at making public available carefully implemented and well-tested variants of the most popular of such methods for the MATLAB programming environment. These methods consist of easy-to-read yet efficient implementations of various coefficient-path following algorithms and implementations of sparse principal component analysis and sparse discriminant analysis which are not available in MATLAB. The toolbox builds on code made public in 2005 and which has since been used in several studies."}
{"_id": "af60cb36fa5e7d29244a827c1f859c65e84ea141", "title": "Modular Forms of Weight One", "text": "The first three sections are preliminary. The fourth section contains the statement of the main theorem and some corollaries. The proof occupies sections 5 through 9. The idea is as follows: one begins by constructing, for each prime number `, a representation of Gal(Q/Q) in characteristic ` (see \u00a76); we then show that the image of Gal(Q/Q) is \u201csmall\u201d under each of these representations, which permits us to lift this to characteristic 0 and obtain the complex representation we seek. The \u201csmallness\u201d in question itself results from an increase of the average of the eigenvalues of the Hecke operators (Rankin, \u00a75). Section 9 contains an estimate of the coefficients of the modular forms of weight one. We note that we have used an essential point (\u00a76, Thm. 6.1) of some results demonstrated by one of us (P. Deligne), but of which no complete proof has been published; this proof should appear in SGA 5, so we ask that the reader accept this for now."}
{"_id": "6f90c913e1e5a162371862a6c5e20f8e7de5e9c7", "title": "Benefits from Using Continuous Rating Scales in Online Survey Research", "text": "The usage of Likert-type scales has become widespread practice in current IS research. Those scales require individuals to choose between a limited number of choices, and have been criticized in the literature for causing loss of information, allowing the researcher to affect responses by determining the range, and being ordinal in nature. The use of online surveys allows for the easy implementation of continuous rating scales, which have a long history in psychophysical measurement but were rarely used in IS surveys. This type of measurement requires survey participants to express their opinion in a visual form, i.e. to place a mark at an appropriate position on a continuous line. That not only solves the problems of information loss, but also allows for applying advanced robust statistical analyses. In this 1 Augasse 2-6, A-1090 Vienna, Austria Tel.: +43/1/31336/4480 Fax: +43/1/31336/746 E-Mail: Horst.Treiblmaier@wu.ac.at"}
{"_id": "521d14b514ead0665029d664bce1720558e6e4af", "title": "Abstract Meaning Representation for Multi-Document Summarization", "text": "Generating an abstract from a collection of documents is a desirable capability for many realworld applications. However, abstractive approaches to multi-document summarization have not been thoroughly investigated. This paper studies the feasibility of using Abstract Meaning Representation (AMR), a semantic representation of natural language grounded in linguistic theory, as a form of content representation. Our approach condenses source documents to a set of summary graphs following the AMR formalism. The summary graphs are then transformed to a set of summary sentences in a surface realization step. The framework is fully data-driven and flexible. Each component can be optimized independently using small-scale, in-domain training data. We perform experiments on benchmark summarization datasets and report promising results. We also describe opportunities and challenges for advancing this line of research."}
{"_id": "6cf05f8ac54cb5f942833839ade392db22e46a3f", "title": "Analysis of clickjacking attacks and an effective defense scheme for Android devices", "text": "Smartphones bring users lots of convenience by integrating all useful functions people may need. While users are spending more time on their phones, have they ever questioned of being spoofed by the phone they are interacting with? This paper conducts a thorough study of the mobile clickjacking attacks. We first present how the clickjacking attack works and the key points to remain undiscovered. Then, we evaluate its potential threats by exploring the feasibility of launching clickjacking attacks on various UIs, including system app windows, 3rd-party app windows, and other system UIs. Finally, we propose a system-level defense scheme against clickjacking attacks on Android platform, which requires no user or developer effort and is compatible with existing apps. The performance of the countermeasure is evaluated with extensive experiments. The results show that our scheme can effectively prevent clickjacking attacks with only a minor impact to the system."}
{"_id": "20e9e36329ddabc30261ef5bee487f491d27f835", "title": "Dimensionality Reduction by Learning an Invariant Mapping", "text": "Dimensionality reduction involves mapping a set of high dimensional input points onto a low dimensional manifold so that 'similar\" points in input space are mapped to nearby points on the manifold. We present a method - called Dimensionality Reduction by Learning an Invariant Mapping (DrLIM) - for learning a globally coherent nonlinear function that maps the data evenly to the output manifold. The learning relies solely on neighborhood relationships and does not require any distancemeasure in the input space. The method can learn mappings that are invariant to certain transformations of the inputs, as is demonstrated with a number of experiments. Comparisons are made to other techniques, in particular LLE."}
{"_id": "80bcdd5d82f03e4d2bca28cbc1399424dac138f9", "title": "Vlfeat: an open and portable library of computer vision algorithms", "text": "VLFeat is an open and portable library of computer vision algorithms. It aims at facilitating fast prototyping and reproducible research for computer vision scientists and students. It includes rigorous implementations of common building blocks such as feature detectors, feature extractors, (hierarchical) k-means clustering, randomized kd-tree matching, and super-pixelization. The source code and interfaces are fully documented. The library integrates directly with MATLAB, a popular language for computer vision research."}
{"_id": "8c557561cad4ef357bed3a96d83e15d01c322d1a", "title": "BRIEF: Computing a Local Binary Descriptor Very Fast", "text": "Binary descriptors are becoming increasingly popular as a means to compare feature points very fast while requiring comparatively small amounts of memory. The typical approach to creating them is to first compute floating-point ones, using an algorithm such as SIFT, and then to binarize them. In this paper, we show that we can directly compute a binary descriptor, which we call BRIEF, on the basis of simple intensity difference tests. As a result, BRIEF is very fast both to build and to match. We compare it against SURF and SIFT on standard benchmarks and show that it yields comparable recognition accuracy, while running in an almost vanishing fraction of the time required by either."}
{"_id": "aa20bb6f0ff96b4d4d685df23924ea8e7afecce0", "title": "Learning Local Feature Descriptors Using Convex Optimisation", "text": "The objective of this work is to learn descriptors suitable for the sparse feature detectors used in viewpoint invariant matching. We make a number of novel contributions towards this goal. First, it is shown that learning the pooling regions for the descriptor can be formulated as a convex optimisation problem selecting the regions using sparsity. Second, it is shown that descriptor dimensionality reduction can also be formulated as a convex optimisation problem, using Mahalanobis matrix nuclear norm regularisation. Both formulations are based on discriminative large margin learning constraints. As the third contribution, we evaluate the performance of the compressed descriptors, obtained from the learnt real-valued descriptors by binarisation. Finally, we propose an extension of our learning formulations to a weakly supervised case, which allows us to learn the descriptors from unannotated image collections. It is demonstrated that the new learning methods improve over the state of the art in descriptor learning on the annotated local patches data set of Brown et al. and unannotated photo collections of Philbin et al. ."}
{"_id": "b7d91f60466811e8eb84a1ae2283ab3cc0b72088", "title": "Quantitative Evaluation of DC Microgrids Availability: Effects of System Architecture and Converter Topology Design Choices", "text": "This paper presents a quantitative method to evaluate dc microgrids availability by identifying and calculating minimum cut sets occurrence probability for different microgrid architectures and converter topologies. Hence, it provides planners with an essential tool to evaluate downtime costs and decide technology deployments based on quantitative risk assessments by allowing to compare the effect that converter topologies and microgrid architecture choices have on availability. Conventional architectures with single-input converters and alternative configurations with multiple-input converters (MICs) are considered. Calculations yield that all microgrid configurations except those utilizing center converters achieve similar availability of 6-nines. Three converter topologies are used as representatives of many other circuits. These three benchmark circuits are the boost, the isolated SEPIC (ISEPIC), and the current-source half-bridge. Marginal availability differences are observed for different circuit topology choices, although architectures with MICs are more sensitive to this choice. MICs and, in particular, the ISEPIC, are identified as good compromise options for dc microgrids source interfaces. The analysis also models availability influence of local energy storage, both in batteries and generators' fuel. These models provide a quantitative way of comparing dc microgrids with conventional backup energy systems. Calculations based on widely accepted data in industry supports the analysis."}
{"_id": "4dbdc54ced134d3ebe94ad82101a6c8848d0e23a", "title": "VM Placement Strategies for Cloud Scenarios", "text": "The problem of Virtual Machine (VM) placement in a compute cloud infrastructure is well-studied in the literature. However, the majority of the existing works ignore the dynamic nature of the incoming stream of VM deployment requests that continuously arrive to the cloud provider infrastructure. In this paper we provide a practical model of cloud placement management under a stream of requests and present a novel technique called Backward Speculative Placement (BSP) that projects the past demand behavior of a VM to a candidate target host. We exploit the BSP technique in two algorithms, first for handling the stream of deployment requests, second in a periodic optimization, to handle the dynamic aspects of the demands. We show the benefits of our BSP technique by comparing the results on a simulation period with a strategy of choosing an optimal placement at each time instant, produced by a generic MIP solver."}
{"_id": "a171737aa2517cf16f63cacfae6bae05c4f1067a", "title": "Cachexia: a new definition.", "text": "On December 13th and 14th a group of scientists and clinicians met in Washington, DC, for the cachexia consensus conference. At the present time, there is no widely agreed upon operational definition of cachexia. The lack of a definition accepted by clinician and researchers has limited identification and treatment of cachectic patient as well as the development and approval of potential therapeutic agents. The definition that emerged is: \"cachexia, is a complex metabolic syndrome associated with underlying illness and characterized by loss of muscle with or without loss of fat mass. The prominent clinical feature of cachexia is weight loss in adults (corrected for fluid retention) or growth failure in children (excluding endocrine disorders). Anorexia, inflammation, insulin resistance and increased muscle protein breakdown are frequently associated with cachexia. Cachexia is distinct from starvation, age-related loss of muscle mass, primary depression, malabsorption and hyperthyroidism and is associated with increased morbidity. While this definition has not been tested in epidemiological or intervention studies, a consensus operational definition provides an opportunity for increased research."}
{"_id": "e5d0afc598b83bc7dc8ef6ddc48871a6724384ab", "title": "Deep Neural Network Capacity", "text": "In recent years, deep neural network exhibits its powerful superiority on information discrimination in many computer vision applications. However, the capacity of deep neural network architecture is still a mystery to the researchers. Intuitively, larger capacity of neural network can always deposit more information to improve the discrimination ability of the model. But, the learnable parameter scale is not feasible to estimate the capacity of deep neural network. Due to the overfitting, directly increasing hidden nodes number and hidden layer number are already demonstrated not necessary to effectively increase the network discrimination ability. In this paper, we propose a novel measurement, named \u201ctotal valid bits\u201d, to evaluate the capacity of deep neural networks for exploring how to quantitatively understand the deep learning and the insights behind its super performance. Specifically, our scheme to retrieve the total valid bits incorporates the skilled techniques in both training phase and inference phase. In the network training, we design decimal weight regularization and 8-bit forward quantization to obtain the integer-oriented network representations. Moreover, we develop adaptive-bitwidth and non-uniform quantization strategy in the inference phase to find the neural network capacity, total valid bits. By allowing zero bitwidth, our adaptive-bitwidth quantization can execute the model reduction and valid bits finding simultaneously. In our extensive experiments, we first demonstrate that our total valid bits is a good indicator of neural network capacity. We also analyze the impact on network capacity from the network architecture and advanced training"}
{"_id": "4342849e5934775db1927ae5064cd651862446e6", "title": "The effectiveness of the combined use of VIX and Support Vector Machines on the prediction of S&P 500", "text": "The aim of this research is to analyse the effectiveness of the Chicago Board Options Exchange Market Volatility Index (VIX) when used with Support Vector Machines (SVMs) in order to forecast the weekly change in the S&P 500 index. The data provided cover the period between 3 January 2000 and 30 December 2011. A trading simulation is implemented so that statistical efficiency is complemented by measures of economic performance. The inputs retained are traditional technical trading rules commonly used in the analysis of equity markets such as Relative Strength Index, Moving Average Convergence Divergence, VIX and the daily return of the S&P 500. The SVM identifies the best situations in which to buy or sell in the market. The two outputs of the SVM are the movement of the market and the degree of set membership. The obtained results show that SVM using VIX produce better results than the Buy and Hold strategy or SVM without VIX. The influence of VIX in the trading system is particularly significant when bearish periods appear. Moreover, the SVM allows the reduction in the Maximum Drawdown and the annualised standard deviation."}
{"_id": "666dfbde2526ed420a9de9d5c5e7ca2507d346ef", "title": "A tourism destination recommender system using users\u2019 sentiment and temporal dynamics", "text": "With the development and popularity of social networks, an increasing number of consumers prefer to order tourism products online, and like to share their experiences on social networks. Searching for tourism destinations online is a difficult task on account of its more restrictive factors. Recommender system can help these users to dispose information overload. However, such a system is affected by the issue of low recommendation accuracy and the cold-start problem. In this paper, we propose a tourism destination recommender system that employs opinion-mining technology to refine user sentiment, and make use of temporal dynamics to represent user preference and destination popularity drifting over time. These elements are then fused with the SVD+ + method by combining user sentiment and temporal influence. Compared with several well-known recommendation approaches, our method achieves improved recommendation accuracy and quality. A series of experimental evaluations, using a publicly available dataset, demonstrates that the proposed recommender system outperforms the existing recommender systems."}
{"_id": "5b6d075d99abfedf9c773cf7a6e61c391c06510f", "title": "Dynamic MRI reconstruction using low rank plus sparse tensor decomposition", "text": "In this paper, we introduce a multi-dimensional approach to the problem of reconstruction of MR image sequences that are highly undersampled in k-space. By formulating the reconstruction as a high-order low-rank plus sparse tensor decomposition problem, we propose an efficient numerical algorithm based on the alternating direction method of multipliers (ADMM) to solve the optimization. Through extensive experimental results we show that our proposed method achieves superior reconstruction quality, compared to the state-of-the-art reconstruction methods."}
{"_id": "aae6bf5f8ef9d4431146f0ced8c3922dbf36c935", "title": "Strategy and architecture of a safety concept for fully automatic and autonomous driving assistance systems", "text": "As drivers back out of the driving task, when transported automatically by an intelligent car for a longer time, they are not always able to react properly, if a driver take over request occurs. This paper presents two ways, how to deal with this problem within the scope of a functional safety concept. Thereto, the difference between fully automatic and autonomous driving assistance systems is explained. Afterwards two different strategies to reach a safe state in consequence of a system boundary crossing are proposed. In the first case the fall back state is reached by a driver take over, in the second case by an automatic, active fail-safe mechanism. Subsequently the necessary components for monitoring and reaching a safe state and their embedment in a basic, functional architecture of a driving assistance system are described. In this context, special regard is paid to aspects of redundancy as well. In the end it is concluded, that the safety concept proposed here is crucial for guaranteeing enduring safety in an automatically driving car and in consequence for making automatic driving functions commercially ready for serial production."}
{"_id": "242377d7e76ad3371ed1814cf6f5249139e4b830", "title": "Open innovation : State of the art and future perspectives", "text": "Open innovation has become one of the hottest topics in innovation management. This article intends to explore the limits in our understanding of the open innovation concept. In doing so, I address the questions of what (the content of open innovation), when (the context dependency) and how (the process). Open innovation is a rich concept, that can be implemented in many different ways. The context dependency of open innovation is one of the least understood topics; more research is needed on the internal and external environment characteristics affecting performance. The open innovation process relates to both the transition towards open innovation, and the various open innovation practices. As with any new concept, initial studies focus on successful and early adopters, are based on case studies, and descriptive. However, not all lessons learned from the early adopters may be applicable to following firms. Case study research increases our understanding of how things work and enables us to identify important phenomena. They should be followed by quantitative studies involving large samples to determine the relative importance of factors, to build path models to understand chains of effects, and to formally test for context dependencies. However, the evidence shows that open innovation has been a valuable concept for so many firms and in so many contexts, that it is on its way to find its final place in innovation management. & 2010 Elsevier Ltd. All rights reserved."}
{"_id": "63d15a9af9257163edad1e64dd119233d2a076af", "title": "LINE-mediated retrotransposition of marked Alu sequences", "text": "Alu elements are the most successful transposons in humans. They are 300-bp non-coding sequences transcribed by RNA polymerase III (Pol III) and are expected to retrotranspose with the aid of reverse transcriptases of cellular origin. We previously showed that human LINEs can generate cDNA copies of any mRNA transcript by means of a retroposition process involving reverse transcription and integration by the LINE-encoded endonuclease and reverse transcriptase. Here we show mobility of marked Alu sequences in human HeLa cells with the canonical features of a retrotransposition process, including splicing out of an autocatalytic intron introduced into the marked sequence, target site duplications of varying lengths and integrations into consensus A-rich sequences. We further show that the poly-A stretch at the Alu 3\u2032 end is essential for mobility, that LINEs are required for transposition and that the rate of retroposition is 100\u20131,000 times higher for Alu transcripts than for control mRNAs, thus accounting for the high mutational activity of these elements observed in humans."}
{"_id": "d019afc693e31af50a25c4f03af52eb53ed68bc9", "title": "Security Strategies for SCADA Networks", "text": "SCADA systems have historically been isolated from other computing resources. However, the use of TCP/IP as a carrier protocol and the trend to interconnect SCADA systems with enterprise networks introduce serious security threats. This paper describes two strategies for securing SCADA networks, both of which have been implemented in a laboratory-scale Modbus network. The first utilizes a security services suite that minimizes the impact on time-critical industrial process systems while adhering to industry standards. The second engages a sophisticated forensic system for SCADA network traffic collection and analysis. The forensic system supports the post mortem analysis of security breaches and the monitoring of process behavior to optimize plant"}
{"_id": "8e6681c2307b9b875ea580b89b94b405aa63e78e", "title": "BSSync: Processing Near Memory for Machine Learning Workloads with Bounded Staleness Consistency Models", "text": "Parallel machine learning workloads have become prevalent in numerous application domains. Many of these workloads are iterative convergent, allowing different threads to compute in an asynchronous manner, relaxing certain read-after-write data dependencies to use stale values. While considerable effort has been devoted to reducing the communication latency between nodes by utilizing asynchronous parallelism, inefficient utilization of relaxed consistency models within a single node have caused parallel implementations to have low execution efficiency. The long latency and serialization caused by atomic operations have a significant impact on performance. The data communication is not overlapped with the main computation, which reduces execution efficiency. The inefficiency comes from the data movement between where they are stored and where they are processed. In this work, we propose Bounded Staled Sync (BSSync), a hardware support for the bounded staleness consistency model, which accompanies simple logic layers in the memory hierarchy. BSSync overlaps the long latency atomic operation with the main computation, targeting iterative convergent machine learning workloads. Compared to previous work that allows staleness for read operations, BSSync utilizes staleness for write operations, allowing stale-writes. We demonstrate the benefit of the proposed scheme for representative machine learning workloads. On average, our approach outperforms the baseline asynchronous parallel implementation by 1.33x times."}
{"_id": "f73874bd2faf3ab0661d5932ea0e651529ec7b88", "title": "Potential Bacillus probiotics enhance bacterial numbers, water quality and growth during early development of white shrimp (Litopenaeus vannamei).", "text": "Epidemics of epizootics and occurrence of multiresistant antibiotics of pathogenic bacteria in aquaculture have put forward a development of effective probiotics for the sustainable culture. This study examined the effectiveness of forms of mixed Bacillus probiotics (probiotic A and probiotic B) and mode of probiotic administration on growth, bacterial numbers and water quality during rearing of white shrimp (Litopenaeus vannamei) in two separated experiments: (1) larval stages and (2) postlarval (PL) stages. Forms of Bacillus probiotics and modes of probiotic administration did not affect growth and survival of larval to PL shrimp. The compositions of Bacillus species in probiotic A and probiotic B did not affect growth and survival of larvae. However, postlarvae treated with probiotic B exhibited higher (P<0.05) growth than probiotic A and controls, indicating Bacillus probiotic composition affects the growth of PL shrimp. Total heterotrophic bacteria and Bacillus numbers in larval and PL shrimp or culture water of the treated groups were higher (P<0.05) than in controls. Levels of pH, ammonia and nitrite of the treated shrimp were significantly decreased, compared to the controls. Microencapsulated Bacillus probiotic was effective for rearing of PL L. vannamei. This investigation showed that administration of mixed Bacillus probiotics significantly improved growth and survival of PL shrimp, increased beneficial bacteria in shrimp and culture water and enhanced water quality for the levels of pH, ammonia and nitrite of culture water."}
{"_id": "09eb7642b733dfcf1d9ac14a9435d3bc96d18f0c", "title": "POMDP-based dialogue manager adaptation to extended domains", "text": "Existing spoken dialogue systems are typically designed to operate in a static and well-defined domain, and are not well suited to tasks in which the concepts and values change dynamically. To handle dynamically changing domains, techniques will be needed to transfer and reuse existing dialogue policies and rapidly adapt them using a small number of dialogues in the new domain. As a first step in this direction, this paper addresses the problem of automatically extending a dialogue system to include a new previously unseen concept (or slot) which can be then used as a search constraint in an information query. The paper shows that in the context of Gaussian process POMDP optimisation, a domain can be extended through a simple expansion of the kernel and then rapidly adapted. As well as being much quicker, adaptation rather than retraining from scratch is shown to avoid subjecting users to unacceptably poor performance during the learning stage."}
{"_id": "cadefcf12a0ae14baff8a1fa67819b4e4f644396", "title": "Towards Automated Cost-Efficient Data Management for Federated Cloud Services", "text": "Cloud computing has transformed the accessibility and usage of information technology resources, by offering them as services via the Internet. Cloud service provisioning spans across infrastructure, platform and software levels. The management of the services at these levels is based on monitoring. The analysis of monitoring data provides insight into Cloud operations in order to make informed decisions. Due to the emergence of numerous heterogenous Cloud platforms with proprietary APIs, service and monitoring data are being formatted using diverse and mostly incompatible data interchange formats. This results to interoperability issues and makes the analysis of monitoring data from multi-Cloud service deployments difficult to handle. The existing research efforts on data interchange formats have been mainly focused on general performance analyses. Little or no effort has been channelled towards a combination of multiple data interchange formats based on data type to achieve efficient serialisation that can facilitate interoperability in federated Clouds, and also reduce the size of data and bandwidth utilisation cost. This paper addresses these issues by presenting automated framework that is capable of automatically selecting the most suitable data interchange formats for achieving an efficient formatting and serialisation outcome. The goal of the framework is to enable robust and transparent communication within and between multiple Cloud deployments. Based on three use case scenarios, we evaluate the proposed framework to demonstrate its efficacy in formatting and serialising data."}
{"_id": "4769443cea862eee5a05b18579985c564a1b7356", "title": "Crowd Sourcing, with a Few Answers: Recommending Commuters for Traffic Updates", "text": "Real-time traffic awareness applications are playing an ever increasing role understanding and tackling traffic congestion in cities. First-hand accounts from drivers witnessing an incident is an invaluable source of information for traffic managers. Nowadays, drivers increasingly contact control rooms through social media to report on journey times, accidents or road weather conditions. These new interactions allow traffic controllers to engage users, and in particular to query them for information rather than passively collecting it. Querying participants presents the challenge of which users to probe for updates about a specific situation. In order to maximise the probability of a user responding and the accuracy of the information, we propose a strategy which takes into account the engagement levels of the user, the mobility profile and the reputation of the user. We provide an analysis of a real-world user corpus of Twitter users contributing updates to LiveDrive, a Dublin based traffic radio station."}
{"_id": "753dfc1c5a49f3ce0e4864ca33c4ceb9ec7b6c32", "title": "Psychophysical magic: rendering the visible \u2018invisible\u2019", "text": "What are the neural correlates of conscious visual awareness? Tackling this question requires contrasting neural correlates of stimulus processing culminating in visual awareness with neural correlates of stimulus processing unaccompanied by awareness. To produce these two neural states, one must be able to erase an otherwise visible stimulus from awareness. This article describes and assesses visual phenomena involving dissociation of physical stimulation and conscious awareness: degraded stimulation, visual masking, visual crowding, bistable figures, binocular rivalry, motion-induced blindness, inattentional blindness, change blindness and attentional blink. No single approach stands above the others, but those producing changing visual awareness despite invariant physical stimulation are clearly preferable. Such phenomena can help lead us ultimately to a comprehensive account of the neural correlates of conscious awareness."}
{"_id": "8fcffa267ed01e38e2280891c9f33bfa41771cad", "title": "A Memory Bandwidth-Efficient Hybrid Radix Sort on GPUs", "text": "Sorting is at the core of many database operations, such as index creation, sort-merge joins, and user-requested output sorting. As GPUs are emerging as a promising platform to accelerate various operations, sorting on GPUs becomes a viable endeavour. Over the past few years, several improvements have been proposed for sorting on GPUs, leading to the first radix sort implementations that achieve a sorting rate of over one billion 32-bit keys per second. Yet, state-of-the-art approaches are heavily memory bandwidth-bound, as they require substantially more memory transfers than their CPU-based counterparts. Our work proposes a novel approach that almost halves the amount of memory transfers and, therefore, considerably lifts the memory bandwidth limitation. Being able to sort two gigabytes of eight-byte records in as little as 50 milliseconds, our approach achieves a 2.32-fold improvement over the state-of-the-art GPU-based radix sort for uniform distributions, sustaining a minimum speed-up of no less than a factor of 1.66 for skewed distributions. To address inputs that either do not reside on the GPU or exceed the available device memory, we build on our efficient GPU sorting approach with a pipelined heterogeneous sorting algorithm that mitigates the overhead associated with PCIe data transfers. Comparing the end-to-end sorting performance to the state-of-the-art CPU-based radix sort running 16 threads, our heterogeneous approach achieves a 2.06-fold and a 1.53-fold improvement for sorting 64 GB key-value pairs with a skewed and a uniform distribution, respectively."}
{"_id": "e608e0301d8e8ff0d32ea5d8fe34370569d9ffd2", "title": "Context-Aware Filtering and Visualization of Web Service Clusters", "text": "Web service filtering is an efficient approach to address some big challenges in service computing, such as discovery, clustering and recommendation. The key operation of the filtering process is measuring the similarity of services. Several methods are used in current similarity calculation approaches such as string-based, corpus-based, knowledge-based and hybrid methods. These approaches do not consider domain-specific contexts in measuring similarity because they have failed to capture the semantic similarity of Web services in a given domain and this has affected their filtering performance. In this paper, we propose a context-aware similarity method that uses a support vector machine and a domain dataset from a context-specific search engine query. Our filtering approach uses a spherical associated keyword space algorithm that projects filtering results from a three-dimensional sphere to a two-dimensional (2D) spherical surface for 2D visualization. Experimental results show that our filtering approach works efficiently."}
{"_id": "6ddb38aa0a7f8cf55f6874aa81797514c361ea37", "title": "Long-Span Program Behavior Modeling and Attack Detection", "text": "Intertwined developments between program attacks and defenses witness the evolution of program anomaly detection methods. Emerging categories of program attacks, e.g., non-control data attacks and data-oriented programming, are able to comply with normal trace patterns at local views. This article points out the deficiency of existing program anomaly detection models against new attacks and presents long-span behavior anomaly detection (LAD), a model based on mildly context-sensitive grammar verification. The key feature of LAD is its reasoning of correlations among arbitrary events that occurred in long program traces. It extends existing correlation analysis between events at a stack snapshot, e.g., paired call and ret, to correlation analysis among events that historically occurred during the execution. The proposed method leverages specialized machine learning techniques to probe normal program behavior boundaries in vast high-dimensional detection space. Its two-stage modeling/detection design analyzes event correlation at both binary and quantitative levels. Our prototype successfully detects all reproduced real-world attacks against sshd, libpcre, and sendmail. The detection procedure incurs 0.1 ms to 1.3 ms overhead to profile and analyze a single behavior instance that consists of tens of thousands of function call or system call events."}
{"_id": "cd2a63215c5f5d242c66771ad3529cf9cfd880ff", "title": "Electrically programmable fuse (eFUSE) using electromigration in silicides", "text": "For the first time we describe a positive application of electromigration, as an electrically programmable fuse device (eFUSE). Upon programming, eFUSE's show a large increase in resistance that enable easy sensing. The transient device characteristics show that the eFUSE stays in a low resistance state during programming due to the local heating of the fuse link. The programming is enhanced by a device design that uses a large cathode which increases the temperature gradient and minimizes the effect of microstructural variations."}
{"_id": "4d9d946499ce52e47ee69a47b588a53a795f2073", "title": "Deformation Constraints in a Mass-Spring Model to Describe Rigid Cloth Behavior", "text": "This paper describes a physically-based model for animating cloth objects, derived from elastically deformable models, and improved in order to take into account the non-elastic properties of woven fabrics . A cloth object is first approximated to a deformable surface composed of a network of masses and springs, the movement of which is evaluated using the numerical integration of the fundamental law of dynamics. We show that when a concentration of high stresses occurs in a small region of the surface, the local deformation becomes unrealistic compared to real deformations of textiles . With such an elastic model , the only solution to decrease these deformations has been so far to increase the stiffness of the deformed springs, but we show that it dramatically increases the cost of the algorithm. We present therefore a new method to adapt our model to the particularly stiff properties of textiles, inspired from dynamic inverse procedures. R esume Cet article decrit un modele physique d'animation des tissus, variante des modeles elastiques deformables, et ameliore de fa<;on a prendre en compte les proprietes non elastiques des textiles . Nous modelisons tout d 'abord une piece de tissu par une surface deformable, constituee d 'un reseau de masses et de ressorts. Son mouvement est evalue grace a I'integration numerique de la loi fondamentale de la dynamique. Nous montrons que lorsqu'une forte concentration de contraintes apparait a certains endroits de la surface, la deformation locale y devient irrealiste comparee aux deformations rencontrees dans les tissus reels . A vec un tel modele elastique, la seule solution permettant d'attenuer cette deformation etait jusqu'a present d 'augmenter la raideur des ressorts deformes, mais nous montrons que ceci faisait croitre dramatiquement le cout de I'algorithme. Nous presentons donc ici une nouvelle methode permettant d 'adapter notre mode le aux proprietes particulierement rigides des textiles, inspiree des procedures de dynamique inverse."}
{"_id": "31e4725e74bf623aeaf86782f52d9f140b2af153", "title": "A MINI UNMANNED AERIAL VEHICLE ( UAV ) : SYSTEM OVERVIEW AND IMAGE ACQUISITION", "text": "In the last years UAV (Unmanned Aerial Vehicle)-systems became relevant for applications in precision farming and in infrastructure maintenance, like road maintenance and dam surveillance. This paper gives an overview about UAV (Unmanned Aerial Vehicle) systems and their application for photogrammetric recording and documentation of cultural heritage. First the historical development of UAV systems and the definition of UAV-helicopters will be given. The advantages of a photogrammetric system on-board a model helicopter will be briefly discussed and compared to standard aerial and terrestrial photogrammetry. UAVs are mostly low cost systems and flexible and therefore a suitable alternative solution compared to other mobile mapping systems. A mini UAV-system was used for photogrammetric image data acquisition near Palpa in Peru. A settlement from the 13 century AD, which was presumably used as a mine, was flown with a model helicopter. Based on the image data, an accurate 3D-model will be generated in the future. With an orthophoto and a DEM derived from aerial images in a scale of 1:7 000, a flight planning was build up. The determined flying positions were implemented in the flight control system. Thus, the helicopter is able to fly to predefined pathpoints automatically. Tests in Switzerland and the flights in Pinchango Alto showed that using the built-in GPS/INSand stabilization units of the flight control system, predefined positions could be reached exactly to acquire the images. The predicted strip crossings and flying height were kept accurately in the autonomous flying mode."}
{"_id": "e941667ffd52d8e1a0047e8f46b6e27c3ab84dfc", "title": "Assessment of Unmanned Aerial Vehicles Imagery for Quantitative Monitoring of Wheat Crop in Small Plots", "text": "This paper outlines how light Unmanned Aerial Vehicles (UAV) can be used in remote sensing for precision farming. It focuses on the combination of simple digital photographic cameras with spectral filters, designed to provide multispectral images in the visible and near-infrared domains. In 2005, these instruments were fitted to powered glider and parachute, and flown at six dates staggered over the crop season. We monitored ten varieties of wheat, grown in trial micro-plots in the South-West of France. For each date, we acquired multiple views in four spectral bands corresponding to blue, green, red, and near-infrared. We then performed accurate corrections of image vignetting, geometric distortions, and radiometric bidirectional effects. Afterwards, we derived for each experimental micro-plot several vegetation indexes relevant for vegetation analyses. Finally, we sought relationships between these indexes and field-measured biophysical parameters, both generic and date-specific. Therefore, we established a robust and stable generic relationship between, in one hand, leaf area index and NDVI and, in the other hand, nitrogen uptake and GNDVI. Due to a high amount of noise in the data, it was not possible to obtain a more accurate model for each date independently. A validation protocol showed that we could expect a precision level of 15% in the biophysical parameters estimation while using these relationships."}
{"_id": "7155942fecdf006edfb24dae287f0f5056c49565", "title": "A flexible unmanned aerial vehicle for precision agriculture", "text": "An unmanned aerial vehicle (\u201cVIPtero\u201d) was assembled and tested with the aim of developing a flexible and powerful tool for site-specific vineyard management. The system comprised a six-rotor aerial platform capable of flying autonomously to a predetermined point in space, and of a pitch and roll compensated multi-spectral camera for vegetation canopy reflectance recording. Before the flight campaign, the camera accuracy was evaluated against high resolution ground-based measurements, made with a field spectrometer. Then, \u201cVIPtero\u201d performed the flight in an experimental vineyard in Central Italy, acquiring 63 multi-spectral images during 10\u00a0min of flight completed almost autonomously. Images were analysed and classified vigour maps were produced based on normalized difference vegetation index. The resulting vigour maps showed clearly crop heterogeneity conditions, in good agreement with ground-based observations. The system provided very promising results that encourage its development as a tool for precision agriculture application in small crops."}
{"_id": "97b3f4fd72578a3037e977c9fb18a78cfeac2464", "title": "Thermal and Narrowband Multispectral Remote Sensing for Vegetation Monitoring From an Unmanned Aerial Vehicle", "text": "Two critical limitations for using current satellite sensors in real-time crop management are the lack of imagery with optimum spatial and spectral resolutions and an unfavorable revisit time for most crop stress-detection applications. Alternatives based on manned airborne platforms are lacking due to their high operational costs. A fundamental requirement for providing useful remote sensing products in agriculture is the capacity to combine high spatial resolution and quick turnaround times. Remote sensing sensors placed on unmanned aerial vehicles (UAVs) could fill this gap, providing low-cost approaches to meet the critical requirements of spatial, spectral, and temporal resolutions. This paper demonstrates the ability to generate quantitative remote sensing products by means of a helicopter-based UAV equipped with inexpensive thermal and narrowband multispectral imaging sensors. During summer of 2007, the platform was flown over agricultural fields, obtaining thermal imagery in the 7.5-13-mum region (40-cm resolution) and narrowband multispectral imagery in the 400-800-nm spectral region (20-cm resolution). Surface reflectance and temperature imagery were obtained, after atmospheric corrections with MODTRAN. Biophysical parameters were estimated using vegetation indices, namely, normalized difference vegetation index, transformed chlorophyll absorption in reflectance index/optimized soil-adjusted vegetation index, and photochemical reflectance index (PRI), coupled with SAILH and FLIGHT models. As a result, the image products of leaf area index, chlorophyll content (C ab), and water stress detection from PRI index and canopy temperature were produced and successfully validated. This paper demonstrates that results obtained with a low-cost UAV system for agricultural applications yielded comparable estimations, if not better, than those obtained by traditional manned airborne sensors."}
{"_id": "afd3b7fb9478501f3723a9b205dc7125a9b620c1", "title": "Image registration methods: a survey", "text": "This paper aims to present a review of recent as well as classic image registration methods. Image registration is the process of overlaying images (two or more) of the same scene taken at different times, from different viewpoints, and/or by different sensors. The registration geometrically align two images (the reference and sensed images). The reviewed approaches are classified according to their nature (areabased and feature-based) and according to four basic steps of image registration procedure: feature detection, feature matching, mapping function design, and image transformation and resampling. Main contributions, advantages, and drawbacks of the methods are mentioned in the paper. Problematic issues of image registration and outlook for the future research are discussed too. The major goal of the paper is to provide a comprehensive reference source for the researchers involved in image registration, regardless of particular application areas. q 2003 Elsevier B.V. All rights reserved."}
{"_id": "31094fca20105113b8cf4f1268941dea4818af30", "title": "Parameter tuning or default values? An empirical investigation in search-based software engineering", "text": "Many software engineering problems have been addressed with search algorithms. Search algorithms usually depend on several parameters (e.g., population size and crossover rate in genetic algorithms), and the choice of these parameters can have an impact on the performance of the algorithm. It has been formally proven in the No Free Lunch theorem that it is impossible to tune a search algorithm such that it will have optimal settings for all possible problems. So, how to properly set the parameters of a search algorithm for a given software engineering problem? In this paper, we carry out the largest empirical analysis so far on parameter tuning in search-based software engineering. More than one million experiments were carried out and statistically analyzed in the context of test data generation for object-oriented software using the EvoSuite tool. Results show that tuning does indeed have impact on the performance of a search algorithm. But, at least in the context of test data generation, it does not seem easy to find good settings that significantly outperform the \u201cdefault\u201d values suggested in the literature. This has very practical value for both researchers (e.g., when different techniques are compared) and practitioners. Using \u201cdefault\u201d values is a reasonable and justified choice, whereas parameter tuning is a long and expensive process that might or might not pay off in the end."}
{"_id": "d9b42c3ad3847f0795b4054b4bfd0ea6314ed2f0", "title": "Elastic Scheduling of Scientific Workflows under Deadline Constraints in Cloud Computing Environments", "text": "Scientific workflow applications are collections of several structured activities and fine-grained computational tasks. Scientific workflow scheduling in cloud computing is a challenging research topic due to its distinctive features. In cloud environments, it has become critical to perform efficient task scheduling resulting in reduced scheduling overhead, minimized cost and maximized resource utilization while still meeting the user-specified overall deadline. This paper proposes a strategy, Dynamic Scheduling of Bag of Tasks based workflows (DSB), for scheduling scientific workflows with the aim to minimize financial cost of leasing Virtual Machines (VMs) under a user-defined deadline constraint. The proposed model groups the workflow into Bag of Tasks (BoTs) based on data dependency and priority constraints and thereafter optimizes the allocation and scheduling of BoTs on elastic, heterogeneous and dynamically provisioned cloud resources called VMs in order to attain the proposed method\u2019s objectives. The proposed approach considers pay-as-you-go Infrastructure as a Service (IaaS) clouds having inherent features such as elasticity, abundance, heterogeneity and VM provisioning delays. A trace-based simulation using benchmark scientific workflows representing real world applications, demonstrates a significant reduction in workflow computation cost while the workflow deadline is met. The results validate that the proposed model produces better success rates to meet deadlines and cost efficiencies in comparison to adapted state-of-the-art algorithms for similar problems."}
{"_id": "0f0c67050aed2312cd45f05d5fb8f6502e64b50b", "title": "Analyzing Sentiments Expressed on Twitter by UK Energy Company Consumers", "text": "Automatic sentiment analysis provides an effective way to gauge public opinion on any topic of interest. However, most sentiment analysis tools require a general sentiment lexicon to automatically classify sentiments or opinion in a text. One of the challenges presented by using a general sentiment lexicon is that it is insensitive to the domain since the scores assigned to the words are fixed. As a result, while one general sentiment lexicon might perform well in one domain, the same lexicon might perform poorly in another domain. Most sentiment lexica will need to be adjusted to suit the specific domain to which it is applied. In this paper, we present results of sentiment analysis expressed on Twitter by UK energy consumers. We optimised the accuracy of the sentiment analysis results by combining functions from two sentiment lexica. We used the first lexicon to extract the sentiment-bearing terms and negative sentiments since it performed well in detecting these. We then used a second lexicon to classify the rest of the data. Experimental results show that this method improved the accuracy of the results compared to the common practice of using only one lexicon."}
{"_id": "7bdb94403913fd714cc3efcf0a5e39df93a404d8", "title": "Atlas-based rib-bone detection in chest X-rays", "text": "This paper investigates using rib-bone atlases for automatic detection of rib-bones in chest X-rays (CXRs). We built a system that takes patient X-ray and model atlases as input and automatically computes the posterior rib borders with high accuracy and efficiency. In addition to conventional atlas, we propose two alternative atlases: (i) automatically computed rib bone models using Computed Tomography (CT) scans, and (ii) dual energy CXRs. We test the proposed approach with each model on 25 CXRs from the Japanese Society of Radiological Technology (JSRT) dataset and another 25 CXRs from the National Library of Medicine CXR dataset. We achieve an area under the ROC curve (AUC) of about 95% for Montgomery and 91% for JSRT datasets. Using the optimal operating point of the ROC curve, we achieve a segmentation accuracy of 88.91\u00b11.8% for Montgomery and 85.48\u00b13.3% for JSRT datasets. Our method produces comparable results with the state-of-the-art algorithms. The performance of our method is also excellent on challenging X-rays as it successfully addressed the rib-shape variance between patients and number of visible rib-bones due to patient respiration."}
{"_id": "7ed189ecc4244fa9bd1d47d5686914f35aababea", "title": "Detecting spam tweets in Twitter using a data stream clustering algorithm", "text": "Social network sites are becoming great through millions of users and information have been gathered from the users. This information have equal advantages to their friends and spammers. Twitter is one of the most popular social networks that, users can send short textual messages namely tweet. Researches have shown that this network is subject to spammer's invasion more than other social networks and more than six percent of its tweets are spam. So diagnose of the spam tweets is very important. Firstly, in this research, we determine various features for spam detection and then by using a clustering algorithm based on the data stream, we identify spam tweets. previous works in the field of spam tweets were done by classification algorithms. it is the first time that for spam tweets detection, an data stream clustering algorithm is used. Den stream Algorithm can cluster tweets and consider Outliers as spam. Results show, when this algorithm is set properly the amount of accuracy and precision of spam tweets detection will improve and false positive rate will reach to the minimum value in comparison with previous works."}
{"_id": "ae6c66167598955dba9ac64eb60b0b2b2d80c9f2", "title": "A Randomized Controlled Trial Comparing the Attention Training Technique and Mindful Self-Compassion for Students With Symptoms of Depression and Anxiety", "text": "The Attention Training Technique (ATT) and Mindful Self-Compassion (MSC) are two promising psychological interventions. ATT is a 12-min auditory exercise designed to strengthen attentional control and promote external focus of attention, while MSC uses guided meditation and exercises designed to promote self-compassion. In this randomized controlled trial (RCT), a three-session intervention trial was conducted in which university students were randomly assigned to either an ATT-group (n = 40) or a MSC-group (n = 41). The students were not assessed with diagnostic interviews but had self-reported symptoms of depression, anxiety, or stress. Participants listened to audiotapes of ATT or MSC before discussing in groups how to apply these principles for their everyday struggles. Participants also listened to audiotapes of ATT and MSC as homework between sessions. Participants in both groups showed significant reductions in symptoms of anxiety and depression accompanied by significant increases in mindfulness, self-compassion, and attention flexibility post-intervention. These results were maintained at 6-month follow-up. Improvement in attention flexibility was the only significant unique predictor of treatment response. The study supports the use of both ATT and MSC for students with symptoms of depression and anxiety. Further, it suggests that symptom improvement is related to changes in attention flexibility across both theoretical frameworks. Future studies should focus on how to strengthen the ability for attention flexibility to optimize treatment for emotional disorder."}
{"_id": "0dab7902ad91308cfd884b803ed28021ffb85ba5", "title": "Optimization of hybrid PV/wind power system for remote telecom station", "text": "The rapid depletion of fossil fuel resources and environmental concerns has given awareness on generation of renewable energy resources. Among the various renewable resources, hybrid solar and wind energy seems to be promising solutions to provide reliable power supply with improved system efficiency and reduced storage requirements for stand-alone applications. This paper presents a feasibility assessment and optimum size of photovoltaic (PV) array, wind turbine and battery bank for a standalone hybrid Solar/Wind Power system (HSWPS) at remote telecom station of Nepal at Latitude (27\u00b023\u203250\u2033) and Longitude (86\u00b044\u203223\u2033) consisting a telecommunication load of Very Small Aperture Terminal (VSAT), Repeater station and Code Division Multiple Access Base Transceiver Station (CDMA 2C10 BTS). In any RES based system, the feasibility assessment is considered as the first step analysis. In this work, feasibility analysis is carried through hybrid optimization model for electric renewables (HOMER) and mathematical models were implemented in the MATLAB environment to perform the optimal configuration for a given load and a desired loss of power supply probability (LPSP) from a set of systems components with the lowest value of cost function defined in terms of reliability and levelized unit electricity cost (LUCE). The simulation results for the existing and the proposed models are compared. The simulation results shows that existing architecture consisting of 6.12 kW KC85T photovoltaic modules, 1kW H3.1 wind turbine and 1600 Ah GFM-800 battery bank have a 36.6% of unmet load during a year. On the other hand, the proposed system includes 1kW \u22172 H3.1 Wind turbine, 8.05 kW TSM-175DA01 photovoltaic modules and 1125 Ah T-105 battery bank with system reliability of 99.99% with a significant cost reduction as well as reliable energy production."}
{"_id": "9cf082ffd90454a9af2b0d043dd638f97097b0ad", "title": "Digital control of a half-bridge LLC resonant converter", "text": "Recently, LLC resonant converters are gaining more and more interest in DC/DC conversion due to their high efficiency and power density. Usually, they are used in low power applications, i.e. in the range of few hundred Watt and with constant output load, however they are very appealing in high power and renewable energy applications as well as in many other applications affected by wide variations of the load and/or the source voltage. This paper presents and discusses a novel half bridge DC/DC resonant converter with digital control and fuzzy logic. The proposed system is compared with a more traditional PID controller. A variable frequency control algorithms are implemented using a low cost, high performance Texas Instrument TMS320F28069 DSP for the regulation of the output voltage. Simulation results using MATLAB are presented to evaluate the quality of the obtained result. A laboratory prototype has been realized and experimental results obtained using both PID and Fuzzy controllers are shown and discussed."}
{"_id": "0e8efd4874aed280a42540d7699ff72a0f073a88", "title": "The place for ubiquitous computing in schools: lessons learned from a school-based intervention for youth physical activity", "text": "With rising concerns about obesity and sedentary lifestyles in youth, there has been an increasing interest in understanding how pervasive and ubiquitous computing technologies can catalyze positive health behaviors in children and teens. School-based interventions seem like a natural choice, and ubiquitous computing technologies hold much promise for these interventions. Yet the literature contains little guidance for how to approach school-based ubicomp deployments. Grounded in our analysis of a large-scale US school-based intervention for promoting youth physical activity, we present an approach to the design and evaluation of school-based ubicomp that treats the school as a social institution. We show how the school regulates students' daily lives, drawing from work in the sociology of schools to create a framing for planning, executing and analyzing school-based ubicomp deployments. These insights will assist other researchers and designers engaging in deployments of ubiquitous computing systems in settings with established institutional structures."}
{"_id": "17d27dc936454b23020766183993d1eadaff5515", "title": "Estimating Crowd Density in an RF-Based Dynamic Environment", "text": "Crowd density estimating is a crucial service in many applications (e.g., smart guide, crowd control, etc.), which is often conducted using pattern recognition technologies based on video surveillance. However, these kinds of methods are high cost, and cannot work well in low-light environments. Radio frequency based technologies are adopted more and more in indoor application, since radio signal strength (RSS) can be easily obtained by various wireless devices without additional cost. In this paper, we introduce a low cost crowd density estimating method using wireless sensor networks. The proposed approach is a device-free crowd counting approach without objects carrying any assistive device. It is hard to count objects based on RSS measurement, since different number of mobile people at different positions often generates different RSS due to the multipath phenomenon. This paper utilizes the space-time relativity of crowd distribution to reduce the estimation errors. The proposed approach is an iterative process, which contains three phases: the training phase, the monitoring phase, and the calibrating phase. Our experiments are implemented based on TelosB sensor platform. We also do some large-scale simulations to verify the feasibility and the effectiveness of our crowd density estimating approach."}
{"_id": "2196c4afde66808657f5ded39595de937dfbd08d", "title": "Selective Sampling with Redundant Views", "text": "Selective sampling, a form of active learning, reduces the cost of labeling training data by asking only for the labels of the most informative unlabeled examples. We introduce a novel approach to selective sampling which we call co-testing. Cotesting can be applied to problems with redundant views (i.e., problems with multiple disjoint sets of attributes that can be used for learning). We analyze the most general algorithm in the co-testing family, naive co-testing, which can be used with virtually any type of learner. Naive co-testing simply selects at random an example on which the existing views disagree. We applied our algorithm to a variety of domains, including three real-world problems: wrapper induction, Web page classi cation, and discourse trees parsing. The empirical results show that besides reducing the number of labeled examples, naive co-testing may also boost the classi cation accuracy."}
{"_id": "41d03f5f95c334bf6f3e4e5f5264409f87f235ad", "title": "A Methodology for Realizing High Efficiency Class-J in a Linear and Broadband PA", "text": "The design and implementation of a class-J mode RF power amplifier is described. The experimental results indicate the class-J mode's potential in achieving high efficiency across extensive bandwidth, while maintaining predistortable levels of linearity. A commercially available 10 W GaN (gallium nitride) high electron mobility transistor device was used in this investigation, together with a combination of high power waveform measurements, active harmonic load-pull and theoretical analysis of the class-J mode. Targeting a working bandwidth of 1.5-2.5 GHz an initial power amplifier (PA) design was based on basic class-J theory and computer-aided design simulation. This realized a 50% bandwidth with measured drain efficiency of 60%-70%. A second PA design iteration has realized near-rated output power of 39 dBm and improved efficiency beyond the original 2.5 GHz target, hence extending efficient PA operation across a bandwidth of 1.4-2.6 GHz, centered at 2 GHz. This second iteration made extensive use of active harmonic load-pull and waveform measurements, and incorporated a novel design methodology for achieving predistortable linearity. The class-J amplifier has been found to be more realizable than conventional class-AB modes, with a better compromise between power and efficiency tradeoffs over a substantial RF bandwidth."}
{"_id": "0478a185526faf72e822e61c6dd30c6559c66666", "title": "Hardware Accelerated Per-Pixel Displacement Mapping", "text": "In this paper we present an algorithm capable of rendering a displacement mapped triangle mesh interactively on latest GPUs. The algorithm uses only pixel shaders and does not rely on adaptively adding geometry. All sampling of the displacement map takes place in the pixel shader and bior trilinear filtering can be applied to it, and at the same time as the calculations are done per pixel in the shader, the algorithm has automatic level of detail control. The triangles of the base mesh are extruded along the respective normal directions and then the resulting prisms are rendered by casting rays inside and intersecting them with the displaced surface. Two different implementations are discussed in detail."}
{"_id": "7968a806169ae1cd8b9f6948af4820c779d11ba6", "title": "Paraphrasing Out-of-Vocabulary Words with Word Embeddings and Semantic Lexicons for Low Resource Statistical Machine Translation", "text": "Out-of-vocabulary (OOV) word is a crucial problem in statistical machine translation (SMT) with low resources. OOV paraphrasing that augments the translation model for the OOV words by using the translation knowledge of their paraphrases has been proposed to address the OOV problem. In this paper, we propose using word embeddings and semantic lexicons for OOV paraphrasing. Experiments conducted on a low resource setting of the OLYMPICS task of IWSLT 2012 verify the effectiveness of our proposed method."}
{"_id": "27a3d074913d3af2bccb10cb62a652bd8029f613", "title": "A Character Model with Moral Emotions: Preliminary Evaluation", "text": "In literary and drama criticism, emotions, and moral emotions in particular, have been pointed out as one of characterizing features of stories. In this paper, we propose to model story characters as value-based emotional agents, who appraise their own and others\u2019 actions based on their desires and values, and feel the appropriate moral emotions in response to narrative situations that challenge their goals and values. In order to validate the appropriateness of the agent model for narrative characters, we ran an experiment with human participants aimed at comparing their expectations about characters\u2019 emotions with the predictions of the value-based model of emotional agent. The results of the experiment show that the participants\u2019 expectations meet the predictions of the model. 1998 ACM Subject Classification I.2.11 Intelligent Agents, I.2.0 Cognitive Simulation"}
{"_id": "b0c38ce8350927dd9cf3920f33f17b7bfc009c3b", "title": "Social neuroscience evidence for dehumanised perception", "text": ""}
{"_id": "7ef5e193b6c244e196dbfb69d49b1a6eba8e9709", "title": "A Spatial Mapping Algorithm with Applications in Deep Learning-Based Structure Classification", "text": "Convolutional Neural Network (CNN)-based machine learning systems have made breakthroughs in feature extraction and image recognition tasks in two dimensions (2D). Although there is significant ongoing work to apply CNN technology to domains involving complex 3D data, the success of such efforts has been constrained, in part, by limitations in data representation techniques. Most current approaches rely upon low-resolution 3D models, strategic limitation of scope in the 3D space, or the application of lossy projection techniques to allow for the use of 2D CNNs. To address this issue, we present a mapping algorithm that converts 3D structures to 2D and 1D data grids by mapping a traversal of a 3D space-filling curve to the traversal of corresponding 2D and 1D curves. We explore the performance of 2D and 1D CNNs trained on data encoded with our method versus comparable volumetric CNNs operating upon raw 3D data from a popular benchmarking dataset. Our experiments demonstrate that both 2D and 1D representations of 3D data generated via our method preserve a significant proportion of the 3D data\u2019s features in forms learnable by CNNs. Furthermore, we demonstrate that our method of encoding 3D data into lower-dimensional representations allows for decreased CNN training time cost, increased original 3D model rendering resolutions, and supports increased numbers of data channels when compared to purely volumetric approaches. This demonstration is accomplished in the context of a structural biology classification task wherein we train 3D, 2D, and 1D CNNs on examples of two homologous branches within the Ras protein family. The essential contribution of this paper is the introduction of a dimensionality-reduction method that may ease the application of powerful deep learning tools to domains characterized by complex structural data."}
{"_id": "68e07fdfa8de4eb7ace6374ce9c16168317414e3", "title": "Fast End-to-End Trainable Guided Filter", "text": "Image processing and pixel-wise dense prediction have been advanced by harnessing the capabilities of deep learning. One central issue of deep learning is the limited capacity to handle joint upsampling. We present a deep learning building block for joint upsampling, namely guided filtering layer. This layer aims at efficiently generating the high-resolution output given the corresponding low-resolution one and a high-resolution guidance map. The proposed layer is composed of a guided filter, which is reformulated as a fully differentiable block. To this end, we show that a guided filter can be expressed as a group of spatial varying linear transformation matrices. This layer could be integrated with the convolutional neural networks (CNNs) and jointly optimized through end-to-end training. To further take advantage of end-to-end training, we plug in a trainable transformation function that generates task-specific guidance maps. By integrating the CNNs and the proposed layer, we form deep guided filtering networks. The proposed networks are evaluated on five advanced image processing tasks. Experiments on MIT-Adobe FiveK Dataset demonstrate that the proposed approach runs 10-100\u00c3\u2014 faster and achieves the state-of-the-art performance. We also show that the proposed guided filtering layer helps to improve the performance of multiple pixel-wise dense prediction tasks. The code is available at https://github.com/wuhuikai/DeepGuidedFilter."}
{"_id": "4487e1db79e8387b7abc47e7ec9b711fc4d286df", "title": "Building Lexical Vector Representations from Concept Definitions", "text": "The use of distributional language representations have opened new paths in solving a variety of NLP problems. However, alternative approaches can take advantage of information unavailable through pure statistical means. This paper presents a method for building vector representations from meaning unit blocks called concept definitions, which are obtained by extracting information from a curated linguistic resource (Wiktionary). The representations obtained in this way can be compared through conventional cosine similarity and are also interpretable by humans. Evaluation was conducted in semantic similarity and relatedness test sets, with results indicating a performance comparable to other methods based on single linguistic resource extraction. The results also indicate noticeable performance gains when combining distributional similarity scores with the ones obtained using this approach. Additionally, a discussion on the proposed method\u2019s shortcomings is provided in the analysis of error cases."}
{"_id": "b7811efffe3ba41c720708c670f01d34091984b0", "title": "Circuit Design of a High Speed and Low Power CMOS Continuous-time Current Comparator", "text": "Current comparator is a fundamental component of current-mode analog integrated circuits. A novel high-performance continuous-time CMOS current comparator is proposed in this paper, which comprises one CMOS complementary amplifier, two resistive-load amplifiers and two CMOS inverters. A MOS resistor is used as the CMOS complementary amplifier\u2019s negative feedback. Because the voltage swings of the CMOS complementary amplifier are reduced by low input and output resistances, the delay time of the current comparator is shortened. Its power consumption can be reduced rapidly with the increase of input current. Simulation results based on 1.2 \u03bcm CMOS process model show the speed of the novel current comparator is comparable with those of the existing fastest CMOS current comparators, and its power consumption is the lowest, so it has the smallest power-delay product. Furthermore, the new current comparator occupies small area and is process-robust, so it is very suitable to high-speed and low-power applications."}
{"_id": "81e0f458a894322baf170fa4d6fa8099bd055c39", "title": "Statistical Decision Theory and Bayesian Analysis, 2nd Edition", "text": ""}
{"_id": "7bbadf3b6e7d6b362abfb83939cf95e16e890ca5", "title": "Automated Generation of Attack Trees", "text": "Attack trees are widely used to represent threat scenarios in a succinct and intuitive manner, suitable for conveying security information to non-experts. The manual construction of such objects relies on the creativity and experience of specialists, and therefore it is error-prone and impracticable for large systems. Nonetheless, the automated generation of attack trees has only been explored in connection to computer networks and levering rich models, whose analysis typically leads to an exponential blow-up of the state space. We propose a static analysis approach where attack trees are automatically inferred from a process algebraic specification in a syntax-directed fashion, encompassing a great many application domains and avoiding incurring systematically an exponential explosion. Moreover, we show how the standard propositional denotation of an attack tree can be used to phrase interesting quantitative problems, that can be solved through an encoding into Satisfiability Modulo Theories. The flexibility and effectiveness of the approach is demonstrated on the study of a national-scale authentication system, whose attack tree is computed thanks to a Java implementation of the framework."}
{"_id": "758d5e6627430b0e97700a9325ffe8bec8ce4beb", "title": "A low cost implementation of MQTT using ESP8266", "text": "Technology is great growling engine of the change and Internet of Things (IoT) is the backbone of such revolutionary engines. Basically, in the real world the things having sensor capability, sufficient power supply and connectivity to internet makes field like Internet of Things (IoT) possible. For such rapid growing technology, it is the necessity to have very light, inexpensive and minimum bandwidth protocol like Message Queuing Telemetry Transport (MQTT) Protocol. Because of such non-established protocol it is easy for the clients to publish or/and subscribe the desire topic through the host acting as server of the network also known to be the broker. In this paper, it is shown that communication between the low power ESP8266 WiFi as client with the clients on smartphones and laptop using an MQTT protocol becomes easier and more reliable. The WiFi enabled ESP8266 board interfaces with DHT11 sensor and LDR sensor to monitor the ambient condition and according to the light intensity level the brightness level of 8\u22178 Neopixel matrix is controlled. The adafruit.io is the MQTT server i.e, broker which also provides the facility of monitoring system through the dashboard. The clients on smartphone and laptop subscribes to the topics of temperature, humidity and light intensity level gets the respective update. The main objective of the paper is to implement the idea for the smart home appliance, street light system for the smart cities, fire alert systems, whether monitoring system and so."}
{"_id": "1ef7d60e44998647847ca0636551eb0aaa9fa20e", "title": "Machine Learning Techniques in Spam Filtering", "text": "The article gives an overview of some of the most popular machine learning methods (Bayesian classification, k-NN, ANNs, SVMs) and of their applicability to the problem of spam-filtering. Brief descriptions of the algorithms are presented, which are meant to be understandable by a reader not familiar with them before. A most trivial sample implementation of the named techniques was made by the author, and the comparison of their performance on the PU1 spam corpus is presented. Finally, some ideas are given of how to construct a practically useful spam filter using the discussed techniques. The article is related to the author\u2019s first attempt of applying the machine-learning techniques in practice, and may therefore be of interest primarily to those getting aquainted with machine-learning."}
{"_id": "284db8df66ef94594ee831ff2b36f546e023953a", "title": "SVM-KNN: Discriminative Nearest Neighbor Classification for Visual Category Recognition", "text": "We consider visual category recognition in the framework of measuring similarities, or equivalently perceptual distances, to prototype examples of categories. This approach is quite flexible, and permits recognition based on color, texture, and particularly shape, in a homogeneous framework. While nearest neighbor classifiers are natural in this setting, they suffer from the problem of high variance (in bias-variance decomposition) in the case of limited sampling. Alternatively, one could use support vector machines but they involve time-consuming optimization and computation of pairwise distances. We propose a hybrid of these two methods which deals naturally with the multiclass setting, has reasonable computational complexity both in training and at run time, and yields excellent results in practice. The basic idea is to find close neighbors to a query sample and train a local support vector machine that preserves the distance function on the collection of neighbors. Our method can be applied to large, multiclass data sets for which it outperforms nearest neighbor and support vector machines, and remains efficient when the problem becomes intractable for support vector machines. A wide variety of distance functions can be used and our experiments show state-of-the-art performance on a number of benchmark data sets for shape and texture classification (MNIST, USPS, CUReT) and object recognition (Caltech- 101). On Caltech-101 we achieved a correct classification rate of 59.05%(\u00b10.56%) at 15 training images per class, and 66.23%(\u00b10.48%) at 30 training images."}
{"_id": "29fa9b903dbd8d19e39b0d7fb06efc6a1907dfdb", "title": "Support vector machines for spam categorization", "text": "We study the use of support vector machines (SVM's) in classifying e-mail as spam or nonspam by comparing it to three other classification algorithms: Ripper, Rocchio, and boosting decision trees. These four algorithms were tested on two different data sets: one data set where the number of features were constrained to the 1000 best features and another data set where the dimensionality was over 7000. SVM's performed best when using binary features. For both data sets, boosting trees and SVM's had acceptable test performance in terms of accuracy and speed. However, SVM's had significantly less training time."}
{"_id": "3e825b4d4ca9f0405d7759c15fc10c702f26a2ec", "title": "An Introduction to Support Vector Machines and Other Kernel-based Learning Methods", "text": "Bargaining with reading habit is no need. Reading is not kind of something sold that you can take or not. It is a thing that will change your life to life better. It is the thing that will give you many things around the world and this universe, in the real world and here after. As what will be given by this an introduction to support vector machines and other kernel based learning methods, how can you bargain with the thing that has many benefits for you?"}
{"_id": "44e915a220ce74badf755aae870fa0b69ee2b82a", "title": "Naive (Bayes) at Forty: The Independence Assumption in Information Retrieval", "text": "The naive Bayes classiier, currently experiencing a renaissance in machine learning, has long been a core technique in information retrieval. We review some of the variations of naive Bayes models used for text retrieval and classiication, focusing on the distributional assumptions made about word occurrences in documents."}
{"_id": "2f97430f90b41325ee18b3b6dc8613c29fda9aa1", "title": "A Call for the Integration of Trauma-Informed Care Among Intellectual and Developmental Disability Organizations", "text": "Research exploring the occurrence of trauma among adults with intellectual and developmental disabilities (I/DD) has grown over the past decade. Yet there is a dearth of literature investigating the impact of organizational factors on the trauma experience despite this population\u2019s need for organizational supports. Trauma-informed care (TIC), a systems-focused model for service delivery, is a fast-developing interest among the broader field of trauma in the general population. It recognizes the prevalence and impact of trauma, and creates a culture of safety, trustworthiness, choice, collaboration, and empowerment. The author synthesized relevant literature from both the intellectual and developmental disabilities areas and integrated this with TIC and trauma literature drawn from the general population. Explored are the implications of organizations for service delivery and the potential assimilation of TIC within I/DD organizations. The effectiveness of TIC applications and their potential barriers are discussed and related to the philosophy of quality of life and organizational culture. The author notes that some individuals with I/DD comprise a vulnerable subgroup of the population that in large part relies upon the support of organizational services to foster quality of life. Given the implications of the focus on quality of life, he posits that TIC presents as a viable response for organizations, complimenting and augmenting current efforts."}
{"_id": "394522d71eaf4fa1045be93c0478dc7e85e43a22", "title": "Sea Ice Remote Sensing Using AMSR-E 89 GHz Channels", "text": "Recent progress in sea ice concentration remote sensing by satellite microwave radiometers has been stimulated by two developments: First, the new sensor AMSR-E offers spatial resolutions of approximately 6x4 km at 89 GHz, nearly three times the resolution of the standard sensor SSM/I at 85 GHz (15x13 km). Second, a new algorithm enables to estimate sea ice concentration from the channels near 90GHz, despite the enhanced atmospheric influence in these channels. This allows to fully exploit their horizontal resolution which is up to four times finer than the one of the channels near 19 and 37 GHz, the frequencies used by the most widespread algorithms for sea ice retrieval, the NASA-Team and Bootstrap algorithms. The ASI algorithm used combines a model for retrieving the sea ice concentration from SSM/I 85 GHz data proposed by Svendsen et al. [1987] with an ocean mask derived from the 18-, 23-, and 37-GHz AMSR-E data using weather filters. During two ship campaigns, the correlation of ASI, NASA-Team 2 and Bootstrap algorithms ice concentrations with bridge observations were 0.80, 0.79 and 0.81, respectively. Systematic differences over the complete AMSR-E period (2002-2006) between ASI and NASA-Team 2 are below \u22122\u00b18.8%, and between ASI and Bootstrap 1.7\u00b110.8%. Among the geophysical implications of the ASI algorithm are: (1) Its higher spatial resolution allows to better estimate crucial variables in numerical atmospheric and ocean models, e.g. the heat flux between ocean and atmosphere, especially near coastlines and in polynyas. (2) It provides an additional time series of ice area and extent for climate studies."}
{"_id": "8e5fbfc5a0fa52f249a3ccaaeb943c6ea8b754bf", "title": "Aware LSTM model for Semantic Role Labeling", "text": "In Semantic Role Labeling (SRL) task, the tree structured dependency relation is rich in syntax information, but it is not well handled by existing models. In this paper, we propose Syntax Aware Long Short Time Memory (SA-LSTM). The structure of SA-LSTM changes according to dependency structure of each sentence, so that SA-LSTM can model the whole tree structure of dependency relation in an architecture engineering way. Experiments demonstrate that on Chinese Proposition Bank (CPB) 1.0, SA-LSTM improves F1 by 2.06% than ordinary bi-LSTM with feature engineered dependency relation information, and gives state-of-the-art F1 of 79.92%. On English CoNLL 2005 dataset, SA-LSTM brings improvement (2.1%) to bi-LSTM model and also brings slight improvement (0.3%) when added to the stateof-the-art model."}
{"_id": "44bcd20a481c00f5b92d55405f9bf70a12f99b8f", "title": "Adaptation algorithm for adaptive streaming over HTTP", "text": "Internet video makes up a significant part of the Internet traffic and its fraction is constantly growing. In order to guarantee best user experience throughout different network access technologies with dynamically varying network conditions, it is fundamental to adopt technologies enabling a proper delivery of the media content. One of such technologies is adaptive streaming. It allows to dynamically adapt the bit-rate of the stream to varying network conditions. There are various approaches to adaptive streaming. In our work, we focus on the receiver-driven approach where the media file is subdivided into segments, each of the segments is provided at multiple bit-rates, and the task of the client is to select the appropriate bit-rate for each of the segments. With this approach, the challenges are (i) to properly estimate the dynamics of the available network throughput, (ii) to control the filling level of the client buffer in order to avoid underflows resulting in playback interruptions, (iii) to maximize the quality of the stream, while avoiding unnecessary quality fluctuations, and, finally, (iv) to minimize the delay between the user's request and the start of the playback. During our work, we designed and implemented a receiver-driven adaptation algorithm for adaptive streaming that does not rely on cross-layer information or server assistance. We integrated the algorithm with a prototype implementation of a streaming client based on the MPEG DASH (Dynamic Adaptive Streaming over HTTP) standard. We evaluated the implemented prototype in real-world scenarios and found that it performes remarkably well even under challenging network conditions. Further, it exhibits stable and fair operation if a common link is shared among multiple clients."}
{"_id": "d989bbb1ffd47881a8f2166618c6a82d630b71b4", "title": "User Acceptance of Hedonic Information Technologies: A Perceived Value Perspective", "text": "Hedonic information technologies do not focus on productivity-oriented applications, but rather, on lifestyle-augmentation and entertainment. As such users of hedonic IT are provided with benefits related to general enjoyment as opposed to gains in organizational efficiency. The main purpose of this study is to investigate this burgeoning phenomenon by measuring several value drivers for downloading mobile phone ringtones. Hypothesis testing was performed with PLS techniques applied to data collected from 119 ringtone users. Results confirmed that the overall value of hedonic IT is a third-order construct, which successfully predicted behavioral usage intentions and positive word of mouth."}
{"_id": "5fb9176fa0213c47d1da2439a6fc58d9cac218b6", "title": "Continuous monitoring of distance-based outliers over data streams", "text": "Anomaly detection is considered an important data mining task, aiming at the discovery of elements (also known as outliers) that show significant diversion from the expected case. More specifically, given a set of objects the problem is to return the suspicious objects that deviate significantly from the typical behavior. As in the case of clustering, the application of different criteria lead to different definitions for an outlier. In this work, we focus on distance-based outliers: an object x is an outlier if there are less than k objects lying at distance at most R from x. The problem offers significant challenges when a stream-based environment is considered, where data arrive continuously and outliers must be detected on-the-fly. There are a few research works studying the problem of continuous outlier detection. However, none of these proposals meets the requirements of modern stream-based applications for the following reasons: (i) they demand a significant storage overhead, (ii) their efficiency is limited and (iii) they lack flexibility. In this work, we propose new algorithms for continuous outlier monitoring in data streams, based on sliding windows. Our techniques are able to reduce the required storage overhead, run faster than previously proposed techniques and offer significant flexibility. Experiments performed on real-life as well as synthetic data sets verify our theoretical study."}
{"_id": "287c5caff566a525f80ae90b70c76e47a9c4303c", "title": "From phonemes to images: levels of representation in a recurrent neural model of visually-grounded language learning", "text": "We present a model of visually-grounded language learning based on stacked gated recurrent neural networks which learns to predict visual features given an image description in the form of a sequence of phonemes. The learning task resembles that faced by human language learners who need to discover both structure and meaning from noisy and ambiguous data across modalities. We show that our model indeed learns to predict features of the visual context given phonetically transcribed image descriptions, and show that it represents linguistic information in a hierarchy of levels: lower layers in the stack are comparatively more sensitive to form, whereas higher layers are more sensitive to meaning."}
{"_id": "5fcab9045d9209a1989b8732e4565aa298be16f0", "title": "Measuring the Similarity between Automatically Generated Topics", "text": "Previous approaches to the problem of measuring similarity between automatically generated topics have been based on comparison of the topics\u2019 word probability distributions. This paper presents alternative approaches, including ones based on distributional semantics and knowledgebased measures, evaluated by comparison with human judgements. The best performing methods provide reliable estimates of topic similarity comparable with human performance and should be used in preference to the word probability distribution measures used previously."}
{"_id": "515b35aca04cef4bd7a8dce3e6b9a39722253dd3", "title": "Electronic model of human brain using verilog", "text": "Computers are man-made machines which work according to the given set of inputs and perform some operation on the input to generate a new set of outputs. They can be programmed to perform huge and complex task yet they lack imagination and ability to understand things. On the hand human brain is a machine which can learn new tasks by process of acquiring knowledge and understanding through thought, experience and the senses. To make computer work or think like human, human brain modeling of machine is necessary. We need to know how human brain works. The basic component of human brain is neuron, which make us think and do smart things. Brains have billions of neurons and they communicate with each other using electrical impulse. These impulses are responsible for making brain think and to have a consciousness. This paper proposes such a model of a part of the brain."}
{"_id": "432e84ce9d800bde362f3b9a2a8da6e7910ff23e", "title": "Synchronous reluctance motor multi-static MEC model", "text": "High efficiency, low cost and high reliable electrical machines are designed to meet very demanding specifications for a wide range of applications. The SynchRM (Synchronous Reluctance Motor) has very interesting properties and features. One of the main advantages is that the motor can be designed without rare earth materials permanent magnets. In this article, a modelling method of such machines using the MEC (Magnetic Equivalent Circuit) is proposed. The main challenges are to obtain a very accurate parametric model in order to calculate torque variations and to make possible geometric optimizations in future work."}
{"_id": "e644867bc141453d1f0387c76ff5e7f7863c5f4f", "title": "Learning Pixel-Level Semantic Affinity with Image-Level Supervision for Weakly Supervised Semantic Segmentation", "text": "The deficiency of segmentation labels is one of the main obstacles to semantic segmentation in the wild. To alleviate this issue, we present a novel framework that generates segmentation labels of images given their image-level class labels. In this weakly supervised setting, trained models have been known to segment local discriminative parts rather than the entire object area. Our solution is to propagate such local responses to nearby areas which belong to the same semantic entity. To this end, we propose a Deep Neural Network (DNN) called AffinityNet that predicts semantic affinity between a pair of adjacent image coordinates. The semantic propagation is then realized by random walk with the affinities predicted by AffinityNet. More importantly, the supervision employed to train AffinityNet is given by the initial discriminative part segmentation, which is incomplete as a segmentation annotation but sufficient for learning semantic affinities within small image areas. Thus the entire framework relies only on image-level class labels and does not require any extra data or annotations. On the PASCAL VOC 2012 dataset, a DNN learned with segmentation labels generated by our method outperforms previous models trained with the same level of supervision, and is even as competitive as those relying on stronger supervision."}
{"_id": "0b81a5bb18124235ddf1e09bcf06d0d69730ebcb", "title": "Generalized distributed rate limiting", "text": "The Distributed Rate Limiting (DRL) paradigm is a recently proposed mechanism for decentralized control of cloud-based services. DRL is a simple and efficient approach to resolve the issues of pricing and resource control/engineering of cloud based services. The existing DRL schemes focus on very specific performance metrics (such as loss rate and fair-share) and their design heavily depends on the assumption that the traffic is generated by elastic TCP sources. In this paper we tackle the DRL problem for general workloads and performance metrics and propose an analytic framework for the design of stable DRL algorithms. The closed-form nature of our results allows simple design rules which, together with extremely low communication overhead, makes the presented algorithms practical and easy to deploy with guaranteed convergence properties under a wide range of possible scenarios."}
{"_id": "4dc340848caab4bd854244aa0ee3c6d3118f3387", "title": "1 Wave Dispersion in High-Rise Buildings due to Soil-Structure Interaction", "text": "Nonparametric techniques for estimation of wave dispersion in buildings by seismic interferometry are applied to a simple model of a soil-structure interaction (SSI) system with coupled horizontal and rocking response. The system consists of a viscously damped shearbeam, representing a building, on a rigid foundation embedded in a half-space. The analysis shows that (1) wave propagation through the system is dispersive. The dispersion is characterized by lower phase velocity (softening) in the band containing the fundamental system mode of vibration, and little change in the higher frequency bands, relative to the building shear wave velocity. This mirrors its well known effect on the frequencies of vibration, i.e. reduction for the fundamental mode (softening) and no significant change for the higher modes of vibration, in agreement with the duality of the wave and vibrational nature of structural response. Nevertheless, the phase velocity identified from broader band IRFs is very close to the superstructure shear wave velocity, as found by the earlier study. The analysis reveals that (2) the reason for this apparent paradox is that the latter estimates are biased towards the higher values, representative of the higher frequencies in the band, where the response is less affected by SSI. It is also discussed that (3) bending flexibility and soil flexibility produce similar effects on the phase velocities and frequencies of vibration of a building. 1 Ph.D. Candidate, University of Southern California, Dept. of Civil Eng., Los Angeles, CA 90089-2531, Email: mrahmani@usc.edu 2 Ph.D. Candidate, University of Southern California, Dept. of Civil Eng., Los Angeles, CA 90089-2531, Email: mebrahim@usc.edu 3 Research Professor, University of Southern California, Dept. of Civil Eng., Los Angeles, CA 90089-2531, Email: mtodorov@usc.edu Earthquake Engineering and Structural Dynamics. DOI: 10.1002/eqe.2454, Final Draft. First published online on June 23, 2014, in press Article available at: http://onlinelibrary.wiley.com/doi/10.1002/eqe.2454/abstract."}
{"_id": "9bac73b9b952d8292287dc4dc22a6ba9b908f314", "title": "Clinical anatomy of the coccyx: A systematic review.", "text": "The coccyx has been relatively neglected in anatomical research which is surprising given the population prevalence of coccydynia and our inadequate understanding of its etiology. This systematic review analyzes available information on the clinical anatomy of the coccyx. A literature search using five electronic databases and standard anatomy reference texts was conducted yielding 61 primary and 7 secondary English-language sources. This was supplemented by a manual search of selected historical foreign language articles. The coccygeal vertebrae, associated joints, ligaments and muscles, coccygeal movements, nerves, and blood supply were analyzed in detail. Although the musculoskeletal aspects of the coccyx are reasonably well described, the precise anatomy of the coccygeal plexus and its distribution, the function of the coccygeal body, and the anatomy of the sacrococcygeal zygapophyseal joints are poorly documented. Further research into the anatomy of the coccyx may clarify the etiopathogenesis of coccydynia which remains uncertain in one-third of affected patients."}
{"_id": "ee41bad04ae534480523a97d4ad2c51324e8368d", "title": "ASSESSMENT OF SLOPE STABILITY IN EMBANKMENT DAMS USING ARTIFICIAL NEURAL NETWORK", "text": "A slope formed when there is a height difference between two points of the ground surface. Limited or ambiguous data defined in the studying of the slip surface have always been posed as a challenge to engineers and geologists; however, previous studies presented the useful strategies for resolving this challenge. In general, Multi Layer Perceptrons (MLPs) (with three input, output and hidden layers) artificial neural networks (ANNs), as well as the transfer function and sufficient number of neurons in hidden layer are able to estimate any nonlinear function with any degree of complexity. In the present study, the Zonouz embankment dam at Iran was firstly modeled by using the classic limit equilibrium method (LEM) then, obtained results were used for training the ANN. Then the effect of various parameters such as logarithmic data and training epoch number on the network and sensitivity analysis on the input parameters were discussed. Finally the best network was selected in training step, and obtained results were verified. The results show that the ANN can be used as a suitable alternative technique instead of LEM for probing the slope stability in embankment dams."}
{"_id": "ec089b8ccf0b27f6c2f395df96126f0fae93cc01", "title": "Improving SAT-solving with Machine Learning", "text": "In this project, we aimed to improve the runtime of Minisat, a Conflict-Driven Clause Learning (CDCL) solver that solves the Propositional Boolean Satisfiability (SAT) problem. We first used a logistic regression model to predict the satisfiability of propositional boolean formulae after fixing the values of a certain fraction of the variables in each formula. We then applied the logistic model and added a preprocessing period to Minisat to determine the preferable initial value (either true or false) of each boolean variable using a Monte-Carlo approach. Concretely, for each Monte-Carlo trial, we fixed the values of a certain ratio of randomly selected variables, and calculated the confidence that the resulting sub-formula is satisfiable with our logistic regression model. The initial value of each variable was set based on the mean confidence scores of the trials that started from the literals of that variable. We were particularly interested in setting the initial values of the backbone variables correctly, which are variables that have the same value in all solutions of a SAT formula. Our Monte-Carlo method was able to set 78% of the backbones correctly. Excluding the preprocessing time, compared with the default setting of Minisat, the runtime of Minisat for satisfiable formulae decreased by 23%. However, our method did not outperform vanilla Minisat in runtime, as the decrease in the conflicts was outweighed by the long runtime of the preprocessing period."}
{"_id": "625001acdd299eaf7fdbe925bd8dbe985d69610a", "title": "Semantic Models for Scalable Search in the Internet of Things", "text": "The Internet of Things is anticipated to connect billions of embedded devices equipped with sensors to perceive their surroundings. Thereby, the state of the real world will be available online and in real-time and can be combined with other data and services in the Internet to realize novel applications such as Smart Cities, Smart Grids, or Smart Healthcare. This requires an open representation of sensor data and scalable search over data from diverse sources including sensors. In this paper we show how the Semantic Web technologies RDF (an open semantic data format) and SPARQL (a query language for RDF-encoded data) can be used to address those challenges. In particular, we describe how prediction models can be employed for scalable sensor search, how these prediction models can be encoded as RDF, and how the models can be queried by means of SPARQL."}
{"_id": "60dc90046fbbfd1cbad7c3b9759843780c8bacea", "title": "Analyzing the Performance of Multilayer Neural Networks for Object Recognition", "text": "In the last two years, convolutional neural networks (CNNs) have achieved an impressive suite of results on standard recognition datasets and tasks. CNN-based features seem poised to quickly replace engineered representations, such as SIFT and HOG. However, compared to SIFT and HOG, we understand much less about the nature of the features learned by large CNNs. In this paper, we experimentally probe several aspects of CNN feature learning in an attempt to help practitioners gain useful, evidence-backed intuitions about how to apply CNNs to computer vision problems."}
{"_id": "57008ec240cf3d35e2ad4227f2f286fc952767e8", "title": "SenGen: Sentence Generating Neural Variational Topic Model", "text": "We present a new topic model that generates documents by sampling a topic for one whole sentence at a time, and generating the words in the sentence using an RNN decoder that is conditioned on the topic of the sentence. We argue that this novel formalism will help us not only visualize and model the topical discourse structure in a document better, but also potentially lead to more interpretable topics since we can now illustrate topics by sampling representative sentences instead of bag of words or phrases. We present a variational auto-encoder approach for learning in which we use a factorized variational encoder that independently models the posterior over topical mixture vectors of documents using a feedforward network, and the posterior over topic assignments to sentences using an RNN. Our preliminary experiments on two different datasets indicate early promise, but also expose many challenges that remain to be addressed."}
{"_id": "27bb445f9f5b1a8db76ff630a12a91f18f82e551", "title": "Understanding packet delivery performance in dense wireless sensor networks", "text": "Wireless sensor networks promise fine-grain monitoring in a wide variety of environments. Many of these environments (e.g., indoor environments or habitats) can be harsh for wireless communication. From a networking perspective, the most basic aspect of wireless communication is the packet delivery performance: the spatio-temporal characteristics of packet loss, and its environmental dependence. These factors will deeply impact the performance of data acquisition from these networks.In this paper, we report on a systematic medium-scale (up to sixty nodes) measurement of packet delivery in three different environments: an indoor office building, a habitat with moderate foliage, and an open parking lot. Our findings have interesting implications for the design and evaluation of routing and medium-access protocols for sensor networks."}
{"_id": "aebda53e90a60ec68b2dcac6e6ddbb182570a615", "title": "TopP-S: Persistent homology-based multi-task deep neural networks for simultaneous predictions of partition coefficient and aqueous solubility", "text": "Aqueous solubility and partition coefficient are important physical properties of small molecules. Accurate theoretical prediction of aqueous solubility and partition coefficient plays an important role in drug design and discovery. The prediction accuracy depends crucially on molecular descriptors which are typically derived from a theoretical understanding of the chemistry and physics of small molecules. This work introduces an algebraic topology-based method, called element-specific persistent homology (ESPH), as a new representation of small molecules that is entirely different from conventional chemical and/or physical representations. ESPH describes molecular properties in terms of multiscale and multicomponent topological invariants. Such topological representation is systematical, comprehensive, and scalable with respect to molecular size and composition variations. However, it cannot be literally translated into a physical interpretation. Fortunately, it is readily suitable for machine learning methods, rendering topological learning algorithms. Due to the inherent correlation between solubility and partition coefficient, a uniform ESPH representation is developed for both properties, which facilitates multi-task deep neural networks for their simultaneous predictions. This strategy leads to a more accurate prediction of relatively small datasets. A total of six datasets is considered in this work to validate the proposed topological and multitask deep learning approaches. It is demonstrated that the proposed approaches achieve some of the most accurate predictions of aqueous solubility and partition coefficient. Our software is available online at http://weilab.math.msu.edu/TopP-S/. \u00a9 2018 Wiley Periodicals, Inc."}
{"_id": "2e286e4897932ca9a6069ab3d6ffcb5e2eee68b4", "title": "SelCSP: A Framework to Facilitate Selection of Cloud Service Providers", "text": "With rapid technological advancements, cloud marketplace witnessed frequent emergence of new service providers with similar offerings. However, service level agreements (SLAs), which document guaranteed quality of service levels, have not been found to be consistent among providers, even though they offer services with similar functionality. In service outsourcing environments, like cloud, the quality of service levels are of prime importance to customers, as they use third-party cloud services to store and process their clients' data. If loss of data occurs due to an outage, the customer's business gets affected. Therefore, the major challenge for a customer is to select an appropriate service provider to ensure guaranteed service quality. To support customers in reliably identifying ideal service provider, this work proposes a framework, SelCSP, which combines trustworthiness and competence to estimate risk of interaction. Trustworthiness is computed from personal experiences gained through direct interactions or from feedbacks related to reputations of vendors. Competence is assessed based on transparency in provider's SLA guarantees. A case study has been presented to demonstrate the application of our approach. Experimental results validate the practicability of the proposed estimating mechanisms."}
{"_id": "46af4e3272fe3dbc7ee648400fb049ae6d3689cd", "title": "Acquiring the reflectance field of a human face", "text": "We present a method to acquire the reflectance field of a human face and use these measurements to render the face under arbitrary changes in lighting and viewpoint. We first acquire images of the face from a small set of viewpoints under a dense sampling of incident illumination directions using a light stage. We then construct a reflectance function image for each observed image pixel from its values over the space of illumination directions. From the reflectance functions, we can directly generate images of the face from the original viewpoints in any form of sampled or computed illumination. To change the viewpoint, we use a model of skin reflectance to estimate the appearance of the reflectance functions for novel viewpoints. We demonstrate the technique with synthetic renderings of a person's face under novel illumination and viewpoints."}
{"_id": "1db0c4d47f2398cf8bebe3c117242a1eb73aaacf", "title": "Piecewise smooth surface reconstruction", "text": "We present a general method for automatic reconstruction of accurate, concise, piecewise smooth surface models from scattered range data. The method can be used in a variety of applications such as reverse engineering\u2014the automatic generation of CAD models from physical objects. Novel aspects of the method are its ability to model surfaces of arbitrary topological type and to recover sharp features such as creases and corners. The method has proven to be effective, as demonstrated by a number of examples using both simulated and real data.\nA key ingredient in the method, and a principal contribution of this paper, is the introduction of a new class of piecewise smooth surface representations based on subdivision. These surfaces have a number of properties that make them ideal for use in surface reconstruction: they are simple to implement, they can model sharp features concisely, and they can be fit to scattered range data using an unconstrained optimization procedure."}
{"_id": "63f24b857fad276c13bfe763401ec72f343de167", "title": "Computer generated animation of faces", "text": "This paper describes the representation, animation and data collection techniques that have been used to produce \"realistic\" computer generated half-tone animated sequences of a human face changing expression. It was determined that approximating the surface of a face with a polygonal skin containing approximately 250 polygons defined by about 400 vertices is sufficient to achieve a realistic face. Animation was accomplished using a cosine interpolation scheme to fill in the intermediate frames between expressions. This approach is good enough to produce realistic facial motion. The three-dimensional data used to describe the expressions of the face was obtained photogrammetrically using pairs of photographs."}
{"_id": "44b8f53edfb44f9fa94ff1995954ca16c5b7b728", "title": "\"Is it Weird to Still Be a Virgin\": Anonymous, Locally Targeted Questions on Facebook Confession Boards", "text": "People have long sought answers to questions online, typically using either anonymous or pseudonymous forums or social network platforms that primarily use real names. Systems that allow anonymous communication afford freedom to explore identity and discuss taboo topics, but can result in negative disinhibited behavior such as cyberbullying. Identifiable communication systems allows one to reach a known audience and avoid negative disinhibition, but can constrain behavior with concerns about privacy and reputation. One persistent design issue is understanding how to leverage the benefits of anonymity without suffering its drawbacks. This paper presents a case study analysis of question asking on Facebook confession boards (FCBs), a tool popular on some college campuses. FCBs present a unique configuration in which members of an offline community (e.g., a university) anonymously submit content to a moderator who posts it to a Facebook page where others in the community can view it and respond. Response is via identifiable Facebook comments and likes. Our results show users asking about taboo and stigmatized topics with local others, and receiving relevant responses with little cyberbullying or negativity."}
{"_id": "92dd388f337e290d51ae46b79a3a01afac722762", "title": "Bitcoin Block Withholding Attack: Analysis and Mitigation", "text": "We address two problems: first, we study a variant of block withholding (BWH) attack in Bitcoins and second, we propose solutions to prevent all existing types of BWH attacks in Bitcoins. We analyze the strategies of a selfish Bitcoin miner who in connivance with one pool attacks another pool and receives reward from the former mining pool for attacking the latter. We name this attack as \u201csponsored block withholding attack.\u201d We present detailed quantitative analysis of the monetary incentive that a selfish miner can earn by adopting this strategy under different scenarios. We prove that under certain conditions, the attacker can maximize her revenue by adopting some strategies and by utilizing her computing power wisely. We also show that an attacker may use this strategy for attacking both the pools for earning higher amount of incentives. More importantly, we present a strategy that can effectively counter block withholding attack in any mining pool. First, we propose a generic scheme that uses cryptographic commitment schemes to counter BWH attack. Then, we suggest an alternative implementation of the same scheme using hash function. Our scheme protects a pool from rogue miners as well as rogue pool administrators. The scheme and its variant defend against BWH attack by making it impossible for the miners to distinguish between a partial proof of work and a complete proof of work. The scheme is so designed that the administrator cannot cheat on the entire pool. The scheme can be implemented by making minor changes to existing Bitcoin protocol. We also analyze the security of the scheme."}
{"_id": "e8a7d7b0230f162439dd04a19ba873238851a104", "title": "Deep Relevance Ranking using Enhanced Document-Query Interactions", "text": "We explore several new models for document relevance ranking, building upon the Deep Relevance Matching Model (DRMM) of Guo et al. (2016). Unlike DRMM, which uses context-insensitive encodings of terms and query-document term interactions, we inject rich context-sensitive encodings throughout our models, inspired by PACRR\u2019s (Hui et al., 2017) convolutional n-gram matching features, but extended in several ways including multiple views of query and document inputs. We test our models on datasets from the BIOASQ question answering challenge (Tsatsaronis et al., 2015) and TREC ROBUST 2004 (Voorhees, 2005), showing they outperform BM25-based baselines, DRMM, and PACRR."}
{"_id": "b61f0669fc05ebc5ec86e79c1cd0f5741782edf6", "title": "BrainID: Development of an EEG-based biometric authentication system", "text": "Authentication is a crucial consideration when securing data or any kind of information system. Though existing approaches for authentication are user-friendly, they have vulnerabilities such as the possibility & criminally threatening a user. We propose a novel approach which uses Electroencephalogram (EEG) brain signals for an authentication process. Unique features of EEG data for distinguishing brain activities can potentially be used to authenticate a user. Compared to other biometric systems, this approach is very robust and more secure because response is significantly changed according to instantaneous mental condition. In the proposed approach, the system user is asked to visualize a number while corresponding EEG signals are captured. Captured signals are used to train the system, compared in the authentication process. This approach mainly focuses on the 8 to 30 Hz Alpha and Beta combined frequency band across all EEG channels, since it is the most appropriate band for EEG signals in Brain Computer Interfaces (BCI). Common Spatial Patterns (CSP) values were used as main features to train the model. Linear discriminant analysis (LDA) was used as a classification algorithm for a given set of user data. A trained set of models for each user is embedded in the system as a parameter database. Each user selects the profile attached to their trained model before the authentication session. Eventually, a trained model can authenticate a user after memorization of pre-determined four digits which the user is asked to think at the first stage of the process. The maximum accuracy recorded with the existing data was 96.97%."}
{"_id": "781a5ee600063c36fd5526fbba85a28bce323c5b", "title": "Quantifying the experience of immersion in games.", "text": "Games are undoubtedly the most successful computer application yet it is not easy to attribute their success to any one particular feature. Nonetheless, when talking about games, gamers and reviewers frequently refer to the immersive experience of the game as an important aspect to be attained. However, it is not clear what immersion is and even if it is a comparable experience between different players and different games. This paper aims to develop more quantifiable and therefore objective measures of immersion. We describe a study into switching from an immersive gaming experience to another task, not in the game world. Though the degree of immersion does seem to have an impact on the ability to perform the task, the experimental approach is complex and possibly quite fragile. We therefore set out hypotheses for a similar experiment with the aim of exploring whether eyetracking and body motion provide better indicators of the degree of immersion."}
{"_id": "e5536c3033153fd18de13ab87428c204bb15818f", "title": "Design and construction of organic computing systems", "text": "The next generation of embedded computing systems will have to meet new challenges. The systems are expected to act mainly autonomously, to dynamically adapt to changing environments and to interact with one another if necessary. Such systems are called organic. Organic Computing systems are similar to autonomic computing systems. In addition Organic Computing systems often behave life-like and are inspired by nature/biological phenomena. Design and construction of such systems brings new challenges for the software engineering process. In this paper we present a framework for design, construction and analysis of organic computing systems. It can facilitate design and construction as well as it can be used to (semi-)formally define organic properties like self-configuration or self-adaptation. We illustrate the framework on a real-world case study from production automation."}
{"_id": "3fc109c976bf1c3f06aba9310f86dcd0c124ab41", "title": "How useful are your comments?: analyzing and predicting youtube comments and comment ratings", "text": "An analysis of the social video sharing platform YouTube reveals a high amount of community feedback through comments for published videos as well as through meta ratings for these comments. In this paper, we present an in-depth study of commenting and comment rating behavior on a sample of more than 6 million comments on 67,000 YouTube videos for which we analyzed dependencies between comments, views, comment ratings and topic categories. In addition, we studied the influence of sentiment expressed in comments on the ratings for these comments using the SentiWordNet thesaurus, a lexical WordNet-based resource containing sentiment annotations. Finally, to predict community acceptance for comments not yet rated, we built different classifiers for the estimation of ratings for these comments. The results of our large-scale evaluations are promising and indicate that community feedback on already rated comments can help to filter new unrated comments or suggest particularly useful but still unrated comments."}
{"_id": "18927b6b59b12c1be024549f8ad37092f28d7260", "title": "Automated Driving in Uncertain Environments: Planning With Interaction and Uncertain Maneuver Prediction", "text": "Automated driving requires decision making in dynamic and uncertain environments. The uncertainty from the prediction originates from the noisy sensor data and from the fact that the intention of human drivers cannot be directly measured. This problem is formulated as a partially observable Markov decision process (POMDP) with the intended route of the other vehicles as hidden variables. The solution of the POMDP is a policy determining the optimal acceleration of the ego vehicle along a preplanned path. Therefore, the policy is optimized for the most likely future scenarios resulting from an interactive, probabilistic motion model for the other vehicles. Considering possible future measurements of the surrounding cars allows the autonomous car to incorporate the estimated change in future prediction accuracy in the optimal policy. A compact representation results in a low-dimensional state-space. Thus, the problem can be solved online for varying road layouts and number of vehicles. This is done with a point-based solver in an anytime fashion on a continuous state-space. Our evaluation is threefold: At first, the convergence of the algorithm is evaluated and it is shown how the convergence can be improved with an additional search heuristic. Second, we show various planning scenarios to demonstrate how the introduction of different considered uncertainties results in more conservative planning. At the end, we show online simulations for the crossing of complex (unsignalized) intersections. We can demonstrate that our approach performs nearly as good as with full prior information about the intentions of the other vehicles and clearly outperforms reactive approaches."}
{"_id": "1c3eaea88cfe5cb28fa8d79b7057d817a108479f", "title": "3D printing of robotic soft actuators with programmable bioinspired architectures", "text": "Soft actuation allows robots to interact safely with humans, other machines, and their surroundings. Full exploitation of the potential of soft actuators has, however, been hindered by the lack of simple manufacturing routes to generate multimaterial parts with intricate shapes and architectures. Here, we report a 3D printing platform for the seamless digital fabrication of pneumatic silicone actuators exhibiting programmable bioinspired architectures and motions. The actuators comprise an elastomeric body whose surface is decorated with reinforcing stripes at a well-defined lead angle. Similar to the fibrous architectures found in muscular hydrostats, the lead angle can be altered to achieve elongation, contraction, or twisting motions. Using a quantitative model based on lamination theory, we establish design principles for the digital fabrication of silicone-based soft actuators whose functional response is programmed within the material's properties and architecture. Exploring such programmability enables 3D printing of a broad range of soft morphing structures. 3D-printed soft actuators have limited motion and are far from reaching the level of complexity\u00a0found in biological systems. Here the authors present a multimaterial 3D printing platform for the fabrication of soft actuators displaying a wide range of motions that are programmable."}
{"_id": "36bb99b54c4afb2923a2ea189c5252527ee64a5e", "title": "Clump splitting via bottleneck detection and shape classification", "text": "Under-segmentation of an image with multiple objects is a common problem in image segmentation algorithms. This paper presents a novel approach for splitting clumps formed by multiple objects due to under-segmentation. The proposed algorithm includes three steps: (1) decide whether to split a candidate connected component by application-specific shape classification; (2) find a pair of points for clump splitting and (3) join the pair of selected points. In the first step, a shape classifier is applied to determine whether a connected component should be split. In the second step, a pair of points for splitting is detected using a bottleneck rule, under the assumption that the desired objects have roughly a convex shape. In the third step, the selected splitting points from step two are joined by finding the optimal splitting line between them, based on minimizing an image energy. The shape classifier is built offline via various shape features and a support vector machine. Steps two and three are applicationindependent. The performance of this method is evaluated using images from various applications. Experimental results show that the proposed approach outperforms the state-of-the-art algorithms for the clump splitting problem. & 2011 Elsevier Ltd. All rights reserved."}
{"_id": "42529379e8b697bc538d2f1b2f0fafeff0e88aaf", "title": "Answering enumeration queries with the crowd", "text": "1.1. \u80fd\u5426\u5168\u90e8\u641e\u5b9a?\u679a\u4e3e\u67e5\u8be2 \u5728\u672c\u6587\u4e2d,\u6211\u4eec\u601d\u8003\u4e86\u5173\u7cfb\u6570\u636e\u5e93\u7ba1\u7406\u7cfb\u7edf (RDBMS) \u4e2d\u4e00\u79cd\u6700\u57fa\u672c\u7684\u8fd0\u7b97,\u5373\u5229\u7528\u7b5b\u9009\u7ea6\u675f\u6761\u4ef6\u6216\u65ad\u8a00 \u6765\u626b\u63cf\u5355\u4e00\u6570\u636e\u5e93\u8868\u683c;\u6b64\u67e5\u8be2\u679a\u4e3e\u7528\u6237\u611f\u5174\u8da3\u7684\u4e00 \u7ec4\u7279\u5b9a\u9879\u76ee\u3002\u4f8b\u5982,\u8bf7\u601d\u8003\u4ee5\u4e0b SQL \u67e5\u8be2,\u5b83\u8981\u5217 \u51fa\u53ef\u4ee5\u5fcd\u8010\u4f4e\u5149\u73af\u5883\u7684\u6240\u6709\u5ba4\u5185\u76c6\u683d\u690d\u7269:SELECT DISTINCT name FROM Plants WHERE lightneeds = \u2018low\u2019\u3002\u5728\u4f7f\u7528\u4f20\u7edf\u7684 RDBMS \u5e76\u4e14\u7ed9\u5b9a\u6570 \u636e\u5e93\u72b6\u6001\u65f6,\u6b64\u67e5\u8be2\u53ea\u6709\u4e00\u4e2a\u6b63\u786e\u7684\u7b54\u6848,\u901a\u8fc7\u626b\u63cf \u76f8\u5173\u8868\u683c\u3001\u7b5b\u9009\u8bb0\u5f55,\u518d\u8fd4\u56de\u6240\u6709\u5339\u914d\u7684\u8bb0\u5f55\u5373\u53ef\u83b7 \u5f97\u8be5\u7b54\u6848\u3002\u8fd9\u79cd\u65b9\u6cd5\u5bf9\u5373\u4fbf\u5b9e\u9645\u4e0a\u65e0\u754c\u7684\u5173\u7cfb\u4e5f\u594f\u6548, \u56e0\u4e3a\u5c01\u95ed\u4e16\u754c\u5047\u5b9a\u6307\u51fa:\u5728\u67e5\u8be2\u6267\u884c\u4e4b\u65f6\u6570\u636e\u5e93\u4e2d\u6ca1 \u6709\u7684\u4efb\u4f55\u8bb0\u5f55\u90fd\u4e0d\u5b58\u5728\u3002 \u76f8\u53cd,\u5728 CrowdDB \u7b49\u4f17\u5305\u7cfb\u7edf\u4e2d,\u4e00\u65e6\u8017\u5c3d\u5df2 \u5b58\u50a8\u8868\u683c\u4e2d\u7684\u8bb0\u5f55\u65f6,\u6211\u4eec\u53ef\u4ee5\u5411\u4f17\u4eba\u53d1\u9001\u4f5c\u4e1a\u6765\u7d22 \u53d6\u66f4\u591a\u8bb0\u5f55\u3002\u7136\u540e\u95ee\u9898\u6f14\u53d8\u4e3a:\u67e5\u8be2\u7ed3\u679c\u96c6\u4f55\u65f6\u624d\u7b97 \u5b8c\u6574?\u4f17\u5305\u67e5\u8be2\u5929\u751f\u5177\u6709\u6a21\u7cca\u6027\u6216\u62e5\u6709\u65e0\u754c\u7ed3\u679c\u96c6, \u5176\u5143\u7ec4\u5206\u6563\u4e8e\u7f51\u7edc\u6216\u8005\u4ec5\u5b58\u5728\u4e8e\u4eba\u8111\u3002\u4f8b\u5982,\u6211\u4eec\u53ef \u4ee5\u601d\u8003\u8fd9\u6837\u7684\u67e5\u8be2:\u5217\u51fa\u5c31\u4e1a\u5e02\u573a\u4e0a\u73b0\u6709\u7684\u535a\u58eb\u5b66\u4f4d \u5e94\u5c4a\u6bd5\u4e1a\u751f,\u6216\u8005\u5217\u51fa\u52a0\u5229\u798f\u5c3c\u4e9a\u5dde\u5bf9\u7eff\u8272\u6280\u672f\u611f\u5174 \u8da3\u7684\u4f01\u4e1a\u3002\u8fd9\u79cd\u67e5\u8be2\u662f\u5229\u7528\u4f17\u4eba\u529b\u91cf\u7684\u6570\u636e\u5e93\u7cfb\u7edf\u7684 \u4e3b\u8981\u7528\u4f8b,\u5bf9\u53d1\u51fa\u8981\u6267\u884c\u7684\u67e5\u8be2\u7684\u7528\u6237\u800c\u8a00\u5747\u5c5e\u4e8e\u52b3 \u52a8\u5bc6\u96c6\u578b\u4efb\u52a1,\u4f46\u7531\u4e8e\u5e76\u4e0d\u7ecf\u5e38\u9700\u8981\u6267\u884c,\u6240\u4ee5\u4e0d\u4e00 \u5b9a\u503c\u5f97\u53bb\u5f00\u53d1\u3001\u8c03\u8282\u548c\u4f7f\u7528\u590d\u6742\u7684\u673a\u5668\u5b66\u4e60\u89e3\u51b3\u65b9\u6848\u3002 \u5728\u672c\u6587\u4e2d,\u6211\u4eec\u5c06\u89e3\u51b3\u4ee5\u4e0b\u95ee\u9898:\u201c\u7528\u6237\u5728\u4f17\u5305 \u6570\u636e\u5e93\u7cfb\u7edf\u7684\u5f00\u653e\u4e16\u754c\u4e2d\u5e94\u5f53\u5982\u4f55\u601d\u7d22\u679a\u4e3e\u67e5\u8be2?\u201d \u6211\u4eec\u5f00\u53d1\u7684\u7edf\u8ba1\u5b66\u5de5\u5177\u53ef\u4ee5\u5e2e\u52a9\u7528\u6237\u63a8\u5bfc\u65f6\u95f4 / \u6210\u672c \u4e0e\u5b8c\u6574\u5ea6\u7684\u6743\u8861,\u8fd8\u53ef\u7528\u4e8e\u63a8\u8fdb\u67e5\u8be2\u6267\u884c\u548c\u4f17\u5305\u7b56\u7565\u3002"}
{"_id": "209f4c5dc7c65670473836304f8c478e1e0a0980", "title": "VARAN the Unbelievable: An Efficient N-version Execution Framework", "text": "With the widespread availability of multi-core processors, running multiple diversified variants or several different versions of an application in parallel is becoming a viable approach for increasing the reliability and security of software systems. The key component of such N-version execution (NVX) systems is a runtime monitor that enables the execution of multiple versions in parallel. Unfortunately, existing monitors impose either a large performance overhead or rely on intrusive kernel-level changes. Moreover, none of the existing solutions scales well with the number of versions, since the runtime monitor acts as a performance bottleneck.\n In this paper, we introduce Varan, an NVX framework that combines selective binary rewriting with a novel event-streaming architecture to significantly reduce performance overhead and scale well with the number of versions, without relying on intrusive kernel modifications.\n Our evaluation shows that Varan can run NVX systems based on popular C10k network servers with only a modest performance overhead, and can be effectively used to increase software reliability using techniques such as transparent failover, live sanitization and multi-revision execution."}
{"_id": "6a0b64f80adbf0e34bfa72b057662a635f791dc2", "title": "Identification of influencers - Measuring influence in customer networks", "text": "Viral marketing refers to marketing techniques that use social networks to produce increases in brand awareness through self-replicating viral diffusion of messages, analogous to the spread of pathological and computer viruses. The idea has successfully been used by marketers to reach a large number of customers rapidly. If data about the customer network is available, centrality measures provide a structural measure that can be used in decision support systems to select influencers and spread viral marketing campaigns in a customer network. Usage stimulation and churn management are examples of DSS applications, where centrality of customers does play a role. The literature on network theory describes a large number of such centrality measures. A critical question is which of these measures is best to select an initial set of customers for a marketing campaign, in order to achieve a maximum dissemination of messages. In this paper, we present the results of computational experiments based on call data from a telecom company to compare different centrality measures for the diffusion of marketing messages. We found a significant lift when using central customers in message diffusion, but also found differences in the various centrality measures depending on the underlying network topology and diffusion process. The simple out-degree centrality performed well in all treatments."}
{"_id": "744cc8c69255cbe9d992315e456b9efb06f42e20", "title": "Person Re-Identification by Deep Joint Learning of Multi-Loss Classification", "text": "Existing person re-identification (re-id) methods rely mostly on either localised or global feature representation alone. This ignores their joint benefit and mutual complementary effects. In this work, we show the advantages of jointly learning local and global features in a Convolutional Neural Network (CNN) by aiming to discover correlated local and global features in different context. Specifically, we formulate a method for joint learning of local and global feature selection losses designed to optimise person re-id when using only generic matching metrics such as the L2 distance. We design a novel CNN architecture for Jointly Learning Multi-Loss (JLML) of local and global discriminative feature optimisation subject concurrently to the same re-id labelled information. Extensive comparative evaluations demonstrate the advantages of this new JLML model for person re-id over a wide range of state-of-the-art re-id methods on five benchmarks (VIPeR, GRID, CUHK01, CUHK03, Market-1501)."}
{"_id": "d80d5862f5406a46225e91a71bb046eda451c430", "title": "Projection Based Weight Normalization for Deep Neural Networks", "text": "Optimizing deep neural networks (DNNs) often suffers from the ill-conditioned problem. We observe that the scaling-based weight space symmetry property in rectified nonlinear network will cause this negative effect. Therefore, we propose to constrain the incoming weights of each neuron to be unit-norm, which is formulated as an optimization problem over Oblique manifold. A simple yet efficient method referred to as projection based weight normalization (PBWN) is also developed to solve this problem. PBWN executes standard gradient updates, followed by projecting the updated weight back to Oblique manifold. This proposed method has the property of regularization and collaborates well with the commonly used batch normalization technique. We conduct comprehensive experiments on several widely-used image datasets including CIFAR-10, CIFAR-100, SVHN and ImageNet for supervised learning over the state-of-the-art convolutional neural networks, such as Inception, VGG and residual networks. The results show that our method is able to improve the performance of DNNs with different architectures consistently. We also apply our method to Ladder network for semi-supervised learning on permutation invariant MNIST dataset, and our method outperforms the state-of-the-art methods: we obtain test errors as 2.52%, 1.06%, and 0.91% with only 20, 50, and 100 labeled samples, respectively."}
{"_id": "0c881ea63ff12d85bc3192ce61f37abf701fdf38", "title": "Connecting modalities: Semi-supervised segmentation and annotation of images using unaligned text corpora", "text": "We propose a semi-supervised model which segments and annotates images using very few labeled images and a large unaligned text corpus to relate image regions to text labels. Given photos of a sports event, all that is necessary to provide a pixel-level labeling of objects and background is a set of newspaper articles about this sport and one to five labeled images. Our model is motivated by the observation that words in text corpora share certain context and feature similarities with visual objects. We describe images using visual words, a new region-based representation. The proposed model is based on kernelized canonical correlation analysis which finds a mapping between visual and textual words by projecting them into a latent meaning space. Kernels are derived from context and adjective features inside the respective visual and textual domains. We apply our method to a challenging dataset and rely on articles of the New York Times for textual features. Our model outperforms the state-of-the-art in annotation. In segmentation it compares favorably with other methods that use significantly more labeled training data."}
{"_id": "4566ebff2150f8d6b74ed6f84aa3bcde43dd918b", "title": "Reducing Patient Mortality in Hospitals : The Role of Human Resource", "text": "Developing effective health care organizations is increasingly complex as a result of demographic changes, globalization and developments in medicine. This study examines the potential contribution of organizational behavior theory and research by investigating the relationship between systems of human resource management (HRM) practices and effectiveness of patient care in hospitals. Relatively little research has been conducted to explore these issues in health care settings. In a sample of 52 hospitals in England, we examine the relationship between the HRM system and health care outcome. Specifically, we study the association between high performance HRM policies and practices and standardized patient mortality rates. The research reveals that, after controlling for prior mortality and other potentially confounding factors such as the ratio of doctors to patients, greater use of a complementary set of HRM practices has a statistically and practically significant relationship with patient mortality. The findings suggest that managers and policy makers should focus sharply on improving the functioning of relevant HR management systems in health care organizations as one important means by which to improve patient care."}
{"_id": "3c8fa888927d908a44a0b38687291af609cbefdb", "title": "Digital Calibration Techniques for Pipelined ADC \u2019 s", "text": "A skip and fill algorithm is developed to digitally self-calibrate pipelined analog-to-digital converters (ADC\u2019s) in real time. The proposed digital calibration technique is applicable to capacitor-ratioed multiplying digital-to-analog converters (MDAC\u2019s) commonly used in multi-step or pipelined ADC\u2019s. This background calibration process can replace, in effect, a trimming procedure usually done in the factory with a hidden electronic calibration. Unlike other self-calibration techniques working in the foreground, the proposed technique is based on the concept of skipping conversion cycles randomly but filling in data later by nonlinear interpolation. This opens up the feasibility of digitally implementing calibration hardware and simplifying the task of self-calibrating multi-step or pipelined ADC\u2019s. The proposed method improves the performance of the inherently fast ADC\u2019s by maintaining simple system architectures. To measure errors resulting from capacitor mismatch, op amp dc gain, offset, and switch feedthrough in real time, the calibration test signal is injected in place of the input signal using a split-reference injection technique. Ultimately, the missing signal within 2/3 of the Nyquist bandwidth is recovered with 16-bit accuracy using a 44-th order polynomial interpolation, behaving essentially as an FIR filter. This work was supported by the National Science Foundation under Grant MIP-9312671."}
{"_id": "0bd8c8a19f207b182286da354fdec5ea49f6072f", "title": "Role of Broca's Area in Implicit Motor Skill Learning: Evidence from Continuous Theta-burst Magnetic Stimulation", "text": "Complex actions can be regarded as a concatenation of simple motor acts, arranged according to specific rules. Because the caudal part of the Broca's region (left Brodmann's area 44, BA 44) is involved in processing hierarchically organized behaviors, we aimed to test the hypothesis that this area may also play a role in learning structured motor sequences. To address this issue, we investigated the inhibitory effects of a continuous theta-burst TMS (cTBS) applied over left BA 44 in healthy subjects, just before they performed a serial RT task (SRTT). SRTT has been widely used to study motor skill learning and is also of interest because, for complex structured sequences, subjects spontaneously organize them into smaller subsequences, referred to as chunks. As a control, cTBS was applied over the vertex in another group, which underwent the same experiment. Control subjects showed both a general practice learning effect, evidenced by a progressive decrease in RT across blocks and a sequence-specific learning effect, demonstrated by a significant RT increase in a pseudorandom sequence. In contrast, when cTBS was applied over left BA 44, subjects lacked both the general practice and sequence-specific learning effects. However, surprisingly, their chunking pattern was preserved and remained indistinguishable from controls. The present study indicates that left BA 44 plays a role in motor sequence learning, but without being involved in elementary chunking. This dissociation between chunking and sequence learning could be explained if we postulate that left BA 44 intervenes in high hierarchical level processing, possibly to integrate elementary chunks together."}
{"_id": "a6af22306492b0830dd1002ad442cc0b53f14b25", "title": "A 5.8-GHz radar sensor chip in 0.18-\u03bcm CMOS for non-contact vital sign detection", "text": "This paper presents a 5.8-GHz radar sensor chip for non-contact vital sign detection. The sensor chip is designed and fabricated in TSMC 0.18 \u03bcm CMOS 1P6M process. Except for the low-noise amplifier, all the active and passive components of the radar system are fully integrated in a single CMOS chip. With packaging on the printed-circuit board and connecting transmitting and receiving antennas, the radar sensor chip has been successfully demonstrated to detect the respiration and heart beat rates of a human adult."}
{"_id": "3916407cd711828f206ff378d01b1ae526a6ee84", "title": "Linear scan register allocation", "text": "We describe a new algorithm for fast global register allocation called linear scan. This algorithm is not based on graph coloring, but allocates registers to variables in a single linear-time scan of the variables' live ranges. The linear scan algorithm is considerably faster than algorithms based on graph coloring, is simple to implement, and results in code that is almost as efficient as that obtained using more complex and time-consuming register allocators based on graph coloring. The algorithm is of interest in applications where compile time is a concern, such as dynamic compilation systems, \u201cjust-in-time\u201d compilers, and interactive development environments."}
{"_id": "04109164cad9a03575aea5e0394d177101a4e4f6", "title": "A Classification and Comparison Framework for Software Architecture Description Languages", "text": "\u00d0Software architectures shift the focus of developers from lines-of-code to coarser-grained architectural elements and their overall interconnection structure. Architecture description languages (ADLs) have been proposed as modeling notations to support architecture-based development. There is, however, little consensus in the research community on what is an ADL, what aspects of an architecture should be modeled in an ADL, and which of several possible ADLs is best suited for a particular problem. Furthermore, the distinction is rarely made between ADLs on one hand and formal specification, module interconnection, simulation, and programming languages on the other. This paper attempts to provide an answer to these questions. It motivates and presents a definition and a classification framework for ADLs. The utility of the definition is demonstrated by using it to differentiate ADLs from other modeling notations. The framework is used to classify and compare several existing ADLs, enabling us, in the process, to identify key properties of ADLs. The comparison highlights areas where existing ADLs provide extensive support and those in which they are deficient, suggesting a research agenda for the future. Index Terms\u00d0Software architecture, architecture description language, component, connector, configuration, definition, classification, comparison."}
{"_id": "0bc06d66d4dbad49b888b25f8b1a6ab08e347511", "title": "Specifying Distributed Software Architectures", "text": "There is a real need for clear and sound design speci cations of distributed systems at the architectural level. This is the level of the design which deals with the high-level organisation of computational elements and the interactions between those elements. The paper presents the Darwin notation for specifying this high-level organisation. Darwin is in essence a declarative binding language which can be used to de ne hierarchic compositions of interconnected components. Distribution is dealt with orthogonally to system structuring. The language supports the speci cation of both static structures and dynamic structures which may evolve during execution. The central abstractions managed by Darwin are components and services. Services are the means by which components interact. In addition to its use in specifying the architecture of a distributed system, Darwin has an operational semantics for the elaboration of speci cations such that they may be used at runtime to direct the construction of the desired system. The paper describes the operational semantics of Darwin in terms of the -calculus, Milner's calculus of mobile processes. The correspondence between the treatment of names in the -calculus and the management of services in Darwin leads to an elegant and concise -calculus model of Darwin's operational semantics. The model is used to argue the correctness of the Darwin elaboration process. The overall objective is to provide a soundly based notation for specifying and constructing distributed software architectures. This paper will appear in the Fifth European Software Engineering Conference, ESEC '95 on 26 September 1995 in Barcelona."}
{"_id": "4a209112ba6bf2b0ce867ac3e236412b2674f835", "title": "An Introduction to the Aesop System", "text": "As the design of software architectures emerges as a discipline within software engineering, it becomes increasingly important to support architectural description and analysis with tools and environments. This paper provides a brief introduction to Aesop, a set of tools for developing architectural design environments that exploit architectural styles to guide software architects in producing speci c systems."}
{"_id": "6340677599a6ab56a6b3c37a197391468bb9a074", "title": "ADLs and dynamic architecture changes", "text": "Existing ADLs typically support only static architecture specification and do not provide facilities for the support of dynamically changing architectures. This paper presents a possible solution to this problem: in order to adequately support dynamic architecture changes, ADLs can leverage techniques used in dynamic programming languages. In particular, changes to ADL specifications should be interpreted. To enable interpretation, an ADL should have an architecture construction component that supports explicit and incremental specification of architectural changes, in addition to the traditional architecture description facilities. This will allow software architects to specify the changes to an architecture after it has been built. The paper expands upon the results from an ongoing project -building a development environment for CZstyle architectures.\u2019"}
{"_id": "98b6d704dda8f39df9c04235e7dbf13b18e4c95f", "title": "A two-stage short-term load forecasting approach using temperature daily profiles estimation", "text": "Electrical load forecasting plays an important role in the regular planning of power systems, in which load is influenced by several factors that must be analysed and identified prior to modelling in order to ensure better and instant load balancing between supply and demand. This paper proposes a two-stage approach for short-term electricity load forecasting. In the first stage, a set of day classes of load profiles are identified using K-means clustering algorithm alongside daily temperature estimation profiles. The proposed estimation method is particularly useful in case of lack of historical regular temperature data. While in the second stage, the stacked denoising autoencoders approach is used to build regression models able to forecast each day type independently. The obtained models are trained and evaluated using hourly electricity power data offered by Algeria\u2019s National Electricity and Gas Company. Several models are investigated to substantiate the accuracy and effectiveness of the proposed approach."}
{"_id": "982913e050c7fa0e92a991abea4b18d9c6a1d164", "title": "Performance analysis of DCO-OFDM in VLC system", "text": "The performance of indoor visible light communication (VLC) systems using a direct current-biased optical orthogonal frequency division multiplexing (DCO-OFDM) scheme is investigated in this paper. The impact of nonlinearity of Light Emitting Diode (LED) and its beam angle on the VLC system performance is studied. We later analyze the effect of modulation order, number of subcarriers, signal scaling and biasing operation on the peak to average power ratio (PAPR) which is a major issue in OFDM based VLC system. Simulation results show that the bit error rate (BER) decreases as the degree of nonlinearity increases and a better BER can be achieved as the LED behavior approaches linear model. In addition, it is shown that reducing the QAM order or increasing the number of subcarriers may reduce the effect of the LED nonlinearity, thus improving the BER performance of the VLC system. Moreover, it is demonstrated that PAPR is higher for a large number of subcarriers and modulation order. Finally, the PAPR of the visible light OFDM system can significantly be reduced by employing the signal scaling combined with biasing operation."}
{"_id": "323ccf0d893e153f68d20eec437b801e2aa01b19", "title": "When Does More Money Work? Examining the Role of Perceived Fairness in Pay on the Performance Quality of Crowdworkers", "text": "This research adds to the rich discussion on whether increases in payment to crowdworkers lead to increases in performance quality by introducing the concept of perceived fairness in pay (PFP). PFP refers to the belief that one is fairly compensated for their work. We examine whether PFP mediates the impact of payment amount on the performance quality of crowdworkers. We conducted a field experiment with 152 crowdworkers performing a buttonclicking (BC) task and an instructional manipulation check (IMC) task. PFP mediated the impact of payment amount on performance quality in the BC task but not in the IMC task. PFP also mediated the impact of payment amount on satisfaction and task time. Results suggest that PFP can help us better understand the relationship between payment and performance quality in crowdsourcing."}
{"_id": "fdd1548f7e5570e3b1a29334bde5c5311cb3430f", "title": "Efficient frame-sequential label propagation for video object segmentation", "text": "In this work, we present an approach for segmenting objects in videos taken in complex scenes. It propagates initial object label through the entire video by a frame-sequential manner where the initial label is usually given by the user. The proposed method has several contributions which make the propagation much more robust and accurate than other methods. First, a novel supervised motion estimation algorithm is employed between each pair of neighboring frames, by which a predicted shape model can be warped in order to segment the similar color around object boundary. Second, unlike previous methods with fixed modeling range, we design a novel range-adaptive appearance model to handle the tough problem of occlusion. Last, the paper gives a reasonable framework based on GraphCut algorithm for obtaining the final label of the object by combining the clues from both appearance and motion. In the experiments, the proposed approach is evaluated qualitatively and quantitatively with some recent methods to show it achieves state-of-art results on multiple videos from benchmark data sets."}
{"_id": "56bc9dd03848c1d96598897eb10d3ff7dc23836c", "title": "A combined transmembrane topology and signal peptide prediction method.", "text": "An inherent problem in transmembrane protein topology prediction and signal peptide prediction is the high similarity between the hydrophobic regions of a transmembrane helix and that of a signal peptide, leading to cross-reaction between the two types of predictions. To improve predictions further, it is therefore important to make a predictor that aims to discriminate between the two classes. In addition, topology information can be gained when successfully predicting a signal peptide leading a transmembrane protein since it dictates that the N terminus of the mature protein must be on the non-cytoplasmic side of the membrane. Here, we present Phobius, a combined transmembrane protein topology and signal peptide predictor. The predictor is based on a hidden Markov model (HMM) that models the different sequence regions of a signal peptide and the different regions of a transmembrane protein in a series of interconnected states. Training was done on a newly assembled and curated dataset. Compared to TMHMM and SignalP, errors coming from cross-prediction between transmembrane segments and signal peptides were reduced substantially by Phobius. False classifications of signal peptides were reduced from 26.1% to 3.9% and false classifications of transmembrane helices were reduced from 19.0% to 7.7%. Phobius was applied to the proteomes of Homo sapiens and Escherichia coli. Here we also noted a drastic reduction of false classifications compared to TMHMM/SignalP, suggesting that Phobius is well suited for whole-genome annotation of signal peptides and transmembrane regions. The method is available at as well as at"}
{"_id": "1a6c9d9165fe77b3b8b974f57ca1e11d0326903a", "title": "Design of a triangular fractal patch antenna with slit for IRNSS and GAGAN applications", "text": "In this paper a triangular fractal patch antenna with slit is designed for IRNSS and GAGAN applications using ADS software. India is intended to develop a satellite based navigation systems known as Indian Regional Navigational Satellite System (IRNSS) for positioning applications. Design of IRNSS antenna at user sector is indispensable. GPS Aided and Geo Augmented Navigation (GAGAN), a satellite based augmentation system for India, erect over the GPS system is anticipated to provide the flawless navigation support over the Asia-Pacific regions. The desired antenna has been deliberate on dielectric constant \u03b5r = 4.8 and substrate thickness h = 3.05 mm. The feed location of the antenna has been selected to produce the circular polarization. The self-similar property in antenna exhibits multi-band resonant frequencies. These specifications should be satisfied at the frequency L5 (1175 MHz), L1 (1575.42 MHz) and S (2492.08 MHz)."}
{"_id": "21753be26268473ffc863fb56d5ebc6af025bdc3", "title": "Flame detection in video using hidden Markov models", "text": "This paper proposes a novel method to detect flames in video by processing the data generated by an ordinary camera monitoring a scene. In addition to ordinary motion and color clues, flame flicker process is also detected by using a hidden Markov model. Markov models representing the flame and flame colored ordinary moving objects are used to distinguish flame flicker process from motion of flame colored moving objects. Spatial color variations in flame are also evaluated by the same Markov models, as well. These clues are combined to reach a final decision. False alarms due to ordinary motion of flame colored moving objects are greatly reduced when compared to the existing video based fire detection systems."}
{"_id": "c1fcf93dd84a9b9d58603b7f177b3a8a77925c3d", "title": "Sockets for Limb Prostheses: A Review of Existing Technologies and Open Challenges", "text": "In the prosthetics field, one of the most important bottlenecks is still the human-machine interface, namely the socket. Indeed, a large number of amputees still rejects prostheses or points out a low satisfaction level, due to a sub-optimal interaction between the socket and the residual limb tissues. The aim of this paper is to describe the main parameters (displacements, stress, volume fluctuations and temperature) affecting the stump-socket interface and reducing the comfort/stability of limb prostheses. In this review, a classification of the different socket types proposed in the literature is reported, together with an analysis of advantages and disadvantages of the different solutions, from multiple viewpoints. The paper then describes the technological solutions available to face an altered distribution of stresses on the residual limb tissues, volume fluctuations affecting the stump overtime and temperature variations affecting the residual tissues within the socket. The open challenges in this research field are highlighted and the possible future routes are discussed, towards the ambitious objective of achieving an advanced socket able to self-adapt in real-time to the complex interplay of factors affecting the stump, during both static and dynamic tasks."}
{"_id": "b9059384edc6b80f5155288941ff6f5528771d13", "title": "Visual inertial odometry for quadrotors on SE(3)", "text": "The combination of on-board sensors measurements with different statistical characteristics can be employed in robotics for localization and control, especially in GPS-denied environments. In particular, most aerial vehicles are packaged with low cost sensors, important for aerial robotics, such as camera, a gyroscope, and an accelerometer. In this work, we develop a visual inertial odometry system based on the Unscented Kalman Filter (UKF) acting on the Lie group SE(3), such to obtain an unique, singularity-free representation of a rigid body pose. We model this pose with the Lie group SE(3) and model the noise on the corresponding Lie algebra. Moreover, we extend the concepts used in the standard UKF formulation, such as state uncertainty and modeling, to correctly incorporate elements that do not belong to an Euclidean space such as the Lie group members. In this analysis, we use the parallel transport, which requires us to explicitly consider SE(3) as representing rigid bodies though the use of the affine connection. We present experimental results to show the effectiveness of the proposed approach for state estimation of a quadrotor platform."}
{"_id": "21cd3b6a12862b3a355b2c0260fcbcac6a01a233", "title": "Sleep and autism spectrum disorders.", "text": "Sleep disorders are common in children with autism spectrum disorders and have a significant effect on daytime function and parental stress. The cornerstone of treatment is to establish the cause of the sleep concern, which is often multifactorial. Identifying and treating sleep disorders may result not only in more consolidated sleep, more rapid time to fall asleep, and avoidance of night waking but also favorably affect daytime behavior and parental stress. Targeting effective treatment strategies is dependent on understanding the underlying causes of sleep problems in children with Autism spectrum disorders, therefore further research is paramount."}
{"_id": "61c05d7cab4448a1585596cc7e00ab271c442c2e", "title": "Optimizing distributions over molecular space . An Objective-Reinforced Generative Adversarial Network for Inverse-design Chemistry ( ORGANIC )", "text": "Molecular discovery seeks to generate chemical species tailored to very specific needs. In this paper, we present ORGANIC, a framework based on Objective-Reinforced Generative Adversarial Networks (ORGAN), capable of producing a distribution over molecular space that matches with a certain set of desirable metrics. This methodology combines two successful techniques from the machine learning community: a Generative Adversarial Network (GAN), to create non-repetitive sensible molecular species, and Reinforcement Learning (RL), to bias this generative distribution towards certain attributes. We explore several applications, from optimization of random physicochemical properties to candidates for drug discovery and organic photovoltaic material design. 1"}
{"_id": "33b99b2e287516a76c787fc4cced2471ddaccbbe", "title": "Opportunities and Challenges of Switched Reluctance Motor Drives for Electric Propulsion: A Comparative Study", "text": "Selection of the proper electric traction drive is an important step in design and performance optimization of electrified powertrains. Due to the use of high energy magnets, permanent magnet synchronous machines (PMSM) have been the primary choice in the electric traction motor market. However, manufacturers are very interested to find a permanent magnet-free alternative as a fallback option due to unstable cost of rare-earth metals and fault tolerance issues related to the constant permanent magnet excitation. In this paper, a new comprehensive review of electric machines (EMs) that includes various new switched reluctance machine topologies in addition to conventional EMs such as PMSM, induction machine, synchronous reluctance machine (SynRel), and PM-assisted SynRel is presented. This paper is based on performances such as power density, efficiency, torque ripple, vibration and noise, and fault tolerance. These systematic examinations prove that recently proposed magnetic configurations such as double-stator switched reluctance machine can be a reasonable substitute for permanent magnet machines in electric traction applications."}
{"_id": "7ebbdaaa3b5c1be5e0fb17247e792272463f1fcf", "title": "Supervised Learning of Semantics-Preserving Hashing via Deep Neural Networks for Large-Scale Image Search", "text": "This paper presents a supervised deep hashing approach that constructs binary hash codes from labeled data for large-scale image search. We assume that semantic labels are governed by a set of latent attributes in which each attribute can be on or off, and classification relies on these attributes. Based on this assumption, our approach, dubbed supervised semanticspreserving deep hashing (SSDH), constructs hash functions as a latent layer in a deep network in which binary codes are learned by the optimization of an objective function defined over classification error and other desirable properties of hash codes. With this design, SSDH has a nice property that classification and retrieval are unified in a single learning model, and the learned binary codes not only preserve the semantic similarity between images but also are efficient for image search. Moreover, SSDH performs joint learning of image representations, hash codes, and classification in a pointwised manner and thus is naturally scalable to large-scale datasets. SSDH is simple and can be easily realized by a slight modification of an existing deep architecture for classification; yet it is effective and outperforms other unsupervised and supervised hashing approaches on several benchmarks and one large dataset comprising more than 1 million images."}
{"_id": "4df059f2cf563a5e7c1acbb393a8debf96d77567", "title": "Bank Customer Churn Prediction Based on Support Vector Machine: Taking a Commercial Bank's VIP Customer Churn as the Example", "text": "Customer churn analysis and prediction play an important role in customer relationship management and improve benefit of enterprise. According to the bank's customer churn data which is large scale and imbalance, this paper presented a support vector machine model to predict customer churn. The method was compared with artificial neural network, decision tree, logistic regression and naive Bayesian classifier regarding customer churn prediction for a commercial bank's VIP customers. It is found that the method has the best accuracy rate, hit rate, covering rate and lift coefficient, and provides an effective measurement for bank's customer churn prediction."}
{"_id": "66feb6d2ae909c4eb96c78ef5f6fe60fce8b5a97", "title": "SHOP2: An HTN Planning System", "text": "The SHOP2 planning system received one of the awards for distinguished performance in the 2002 International Planning Competition. This paper describes the features of SHOP2 which enabled it to excel in the competition, especially those aspects of SHOP2 that deal with temporal and metric planning domains."}
{"_id": "4155ecb89086261704bae0040abcf326c41c21f8", "title": "Extending the Hierarchical Deep Reinforcement Learning framework", "text": "iv"}
{"_id": "b69badabc3fddc9710faa44c530473397303b0b9", "title": "Unsupervised Image-to-Image Translation Networks", "text": "Most of the existing image-to-image translation frameworks\u2014mapping an image in one domain to a corresponding image in another\u2014are based on supervised learning, i.e., pairs of corresponding images in two domains are required for learning the translation function. This largely limits their applications, because capturing corresponding images in two different domains is often a difficult task. To address the issue, we propose the UNsupervised Image-to-image Translation (UNIT) framework, which is based on variational autoencoders and generative adversarial networks. The proposed framework can learn the translation function without any corresponding images in two domains. We enable this learning capability by combining a weight-sharing constraint and an adversarial training objective. Through visualization results from various unsupervised image translation tasks, we verify the effectiveness of the proposed framework. An ablation study further reveals the critical design choices. Moreover, we apply the UNIT framework to the unsupervised domain adaptation task and achieve better results than competing algorithms do in benchmark datasets."}
{"_id": "8a09c5270cd5f9a592cfb91b5ea9924c104f996c", "title": "Process scheduling under uncertainty: Review and challenges", "text": "Uncertainty is a very important concern in production scheduling since it can cause infeasibilities and production disturbances. Thus scheduling nder uncertainty has received a lot of attention in the open literature in recent years from chemical engineering and operations research communities. he purpose of this paper is to review the main methodologies that have been developed to address the problem of uncertainty in production cheduling as well as to identify the main challenges in this area. The uncertainties in process scheduling are first analyzed, and the different athematical approaches that exist to describe process uncertainties are classified. Based on the different descriptions for the uncertainties, lternative scheduling approaches and relevant optimization models are reviewed and discussed. Further research challenges in the field of process cheduling under uncertainty are identified and some new ideas are discussed. 2007 Elsevier Ltd. All rights reserved."}
{"_id": "a94f69271082f677e2f1b04a8919983dda3025f0", "title": "Diagnosis and compensation of amplitude imbalance, imperfect quadrant and offset in resolver signals", "text": "In this paper, a new method for detecting some errors in resolver signals is presented. Sin and cosine signals in resolver to digital converter's (RDC's) input might have errors, such as amplitude imbalance, imperfect quadrant or DC offset. Like other sensors some defects in mechanical structure and electrical or magnetic circuits cause to these errors in resolver output signals. The amplitude and phase of resolver output signals and consequently calculated angle are affected by these errors at any moment. Resolver output signals are more affected in their peak points than the other points. These peak points are easier and more accurate than the other points for analyzing, too. It is obvious that any error has unique effects on the signals. Therefore, by analyzing the effect on signals' peak points the error type and the value can be detected. To avoid wrong detection, error signal is defined, which detects the per unit signals deviation from the unit circle. Also, for error detection, there is no need to interrupt converter's operation. The proposed method is simulated in Matlab/Simulink environment, and the obtained results validate its effectiveness. Also, it is simulated on TMS320F2812 DSP using CCS 3.3, and the results are presented. Furthermore, by implementing the present RDC on motor controller DSP, it will be a low-cost approach."}
{"_id": "eef951da16097e70b7acc91015544ce31d6dfc74", "title": "A Key Characteristic of Sex Differences in the Developing Brain: Greater Variability in Brain Structure of Boys than Girls.", "text": "In many domains, including cognition and personality, greater variability is observed in males than in females in humans. However, little is known about how variability differences between sexes are represented in the brain. The present study tested whether there is a sex difference in variance in brain structure using a cohort of 643 males and 591 females aged between 3 and 21 years. The broad age-range of the sample allowed us to test if variance differences in the brain differ across age. We observed significantly greater male than female variance for several key brain structures, including cerebral white matter and cortex, hippocampus, pallidum, putamen, and cerebellar cortex volumes. The differences were observed at both upper and lower extremities of the distributions and appeared stable across development. These findings move beyond mean levels by showing that sex differences were pronounced for variability, thereby providing a novel perspective on sex differences in the developing brain."}
{"_id": "421c305a0d2773d1132d9539e42d1f1337f1600a", "title": "Apposcopy: semantics-based detection of Android malware through static analysis", "text": "We present Apposcopy, a new semantics-based approach for identifying a prevalent class of Android malware that steals private user information. Apposcopy incorporates (i) a high-level language for specifying signatures that describe semantic characteristics of malware families and (ii) a static analysis for deciding if a given application matches a malware signature. The signature matching algorithm of Apposcopy uses a combination of static taint analysis and a new form of program representation called Inter-Component Call Graph to efficiently detect Android applications that have certain control- and data-flow properties. We have evaluated Apposcopy on a corpus of real-world Android applications and show that it can effectively and reliably pinpoint malicious applications that belong to certain malware families."}
{"_id": "3bfdb525fc85eed9c8b383c84fc9298e713cd590", "title": "Towards 3D object maps for autonomous household robots", "text": "This paper describes a mapping system that acquires 3D object models of man-made indoor environments such as kitchens. The system segments and geometrically reconstructs cabinets with doors, tables, drawers, and shelves, objects that are important for robots retrieving and manipulating objects in these environments. The system also acquires models of objects of daily use such glasses, plates, and ingredients. The models enable the recognition of the objects in cluttered scenes and the classification of newly encountered objects. Key technical contributions include (1) a robust, accurate, and efficient algorithm for constructing complete object models from 3D point clouds constituting partial object views, (2) feature-based recognition procedures for cabinets, tables, and other task-relevant furniture objects, and (3) automatic inference of object instance and class signatures for objects of daily use that enable robots to reliably recognize the objects in cluttered and real task contexts. We present results from the sensor-based mapping of a real kitchen."}
{"_id": "403be1b4e5ac775d46700814b76eedce4be8441a", "title": "Redundancy in Perceptual and Linguistic Experience: Comparing Feature-Based and Distributional Models of Semantic Representation", "text": "Since their inception, distributional models of semantics have been criticized as inadequate cognitive theories of human semantic learning and representation. A principal challenge is that the representations derived by distributional models are purely symbolic and are not grounded in perception and action; this challenge has led many to favor feature-based models of semantic representation. We argue that the amount of perceptual and other semantic information that can be learned from purely distributional statistics has been underappreciated. We compare the representations of three feature-based and nine distributional models using a semantic clustering task. Several distributional models demonstrated semantic clustering comparable with clustering-based on feature-based representations. Furthermore, when trained on child-directed speech, the same distributional models perform as well as sensorimotor-based feature representations of children's lexical semantic knowledge. These results suggest that, to a large extent, information relevant for extracting semantic categories is redundantly coded in perceptual and linguistic experience. Detailed analyses of the semantic clusters of the feature-based and distributional models also reveal that the models make use of complementary cues to semantic organization from the two data streams. Rather than conceptualizing feature-based and distributional models as competing theories, we argue that future focus should be on understanding the cognitive mechanisms humans use to integrate the two sources."}
{"_id": "66b2e77e2ba21a7bdcc1f816616c7b9b214ec17a", "title": "Designing an UHF RFID Reader Antenna", "text": "In this paper, a circularly polarized (CP) Octagonal shape microstrip patch antenna fed by coaxial feed is designed and analyzed for RFID (Radio Frequency Identification) reader applications. The physical parameters of the structure as well as its partial ground plane are analyzed and optimized using Zealand IE3D simulation software. Return loss (S11), voltage standing wave ratio (VSWR), directivity and gain are carried out. The results show that the proposed antenna has good impedance and radiation characteristics over the required bandwidth, 860-960 MHz (UHF RFID band). The return loss of the optimized Octagonal-shaped microstrip patch antenna is below 10dB over the UHF frequency band. The proposed antenna is very promising for various modern communication applications."}
{"_id": "cba45a87fc6cf12b3b0b6f57ba1a5282ef7fee7a", "title": "Emotion AI , Real-Time Emotion Detection using CNN", "text": "In this paper, we describe a Convolutional Neural Network (CNN) approach to realtime emotion detection. We utilize data from the Extended Cohn-Kanade dataset, Japanese Female Facial Expression data set, and our own custom images in order to train this model, and apply preprocessing steps to improve performance. We re-train a LeNet and AlexNet implementation, both of which perform with above 97% accuracy. Qualitative analysis of real-time images shows that the above models perform reasonably well at classifying facial expressions, but not as well as the quantitative results would indicate."}
{"_id": "6ba7a9e2bc2c8f3fe7623fc3006c150eb5bf1717", "title": "Identity-Based Broadcast Encryption with Constant Size Ciphertexts and Private Keys", "text": "This paper describes the first identity-based broadcast encryption scheme (IBBE) with constant size ciphertexts and private keys. In our scheme, the public key is of size linear in the maximal size m of the set of receivers, which is smaller than the number of possible users (identities) in the system. Compared with a recent broadcast encryption system introduced by Boneh, Gentry and Waters (BGW), our system has comparable properties, but with a better efficiency: the public key is shorter than in BGW. Moreover, the total number of possible users in the system does not have to be fixed in the setup."}
{"_id": "6e8cf181b6e4d759f0416665a3a9f62ad37b316c", "title": "Unobservable Communication over Fully Untrusted Infrastructure", "text": "Keeping communication private has become increasingly important in an era of mass surveillance and statesponsored attacks. While hiding the contents of a conversation has well-known solutions, hiding the associated metadata (participants, duration, etc.) remains a challenge, especially if one cannot trust ISPs or proxy servers. This paper describes a communication system called Pung that provably hides all content and metadata while withstanding global adversaries. Pung is a key-value store where clients deposit and retrieve messages without anyone\u2014 including Pung\u2019s servers\u2014learning of the existence of a conversation. Pung is based on private information retrieval, which we make more practical for our setting with new techniques. These include a private multi-retrieval scheme, an application of the power of two choices, and batch codes. These extensions allow Pung to handle 103\u00d7 more users than prior systems with a similar threat model."}
{"_id": "7381426ddc5d51e81fec419daadfdd3d7062be92", "title": "IRON: Functional Encryption using Intel SGX", "text": "Functional encryption (FE) is an extremely powerful cryptographic mechanism that lets an authorized entity compute on encrypted data, and learn the results in the clear. However, all current cryptographic instantiations for general FE are too impractical to be implemented. We construct IRON, a provably secure, and practical FE system using Intel's recent Software Guard Extensions (SGX). We show that IRON can be applied to complex functionalities, and even for simple functions, outperforms the best known cryptographic schemes. We argue security by modeling FE in the context of hardware elements, and prove that IRON satisfies the security model."}
{"_id": "0a289fd7b14345822b1acda6d82750b15d59663e", "title": "SCONE: Secure Linux Containers with Intel SGX", "text": "Sergei Arnautov, Bohdan Trach, Franz Gregor, Thomas Knauth, Andre Martin, Christian Priebe, Joshua Lind, Divya Muthukumaran, Dan O\u2019Keeffe, Mark L Stillwell, David Goltzsche, David Eyers, R\u00fcdiger Kapitza, Peter Pietzuch, and Christof Fetzer Fakult\u00e4t Informatik, TU Dresden, christof.fetzer@tu-dresden.de Dept. of Computing, Imperial College London, prp@imperial.ac.uk Informatik, TU Braunschweig, rrkapitz@ibr.cs.tu-bs.de Dept. of Computer Science, University of Otago, dme@cs.otago.ac.nz"}
{"_id": "104eb3716d8b2d7e6d69805357d5b7fb87caff3e", "title": "Sanctum: Minimal Hardware Extensions for Strong Software Isolation", "text": "Sanctum offers the same promise as Intel\u2019s Software Guard Extensions (SGX), namely strong provable isolation of software modules running concurrently and sharing resources, but protects against an important class of additional software attacks that infer private information from a program\u2019s memory access patterns. Sanctum shuns unnecessary complexity, leading to a simpler security analysis. We follow a principled approach to eliminating entire attack surfaces through isolation, rather than plugging attack-specific privacy leaks. Most of Sanctum\u2019s logic is implemented in trusted software, which does not perform cryptographic operations using keys, and is easier to analyze than SGX\u2019s opaque microcode, which does. Our prototype targets a Rocket RISC-V core, an open implementation that allows any researcher to reason about its security properties. Sanctum\u2019s extensions can be adapted to other processor cores, because we do not change any major CPU building block. Instead, we add hardware at the interfaces between generic building blocks, without impacting cycle time. Sanctum demonstrates that strong software isolation is achievable with a surprisingly small set of minimally invasive hardware changes, and a very reasonable overhead."}
{"_id": "05c987e588e989c769cf4ab786d8514fe8b0202d", "title": "Consensus photometric stereo", "text": "This paper describes a photometric stereo method that works with a wide range of surface reflectances. Unlike previous approaches that assume simple parametric models such as Lambertian reflectance, the only assumption that we make is that the reflectance has three properties; monotonicity, visibility, and isotropy with respect to the cosine of light direction and surface orientation. In fact, these properties are observed in many non-Lambertian diffuse reflectances. We also show that the monotonicity and isotropy properties hold specular lobes with respect to the cosine of the surface orientation and the bisector between the light direction and view direction. Each of these three properties independently gives a possible solution space of the surface orientation. By taking the intersection of the solution spaces, our method determines the surface orientation in a consensus manner. Our method naturally avoids the need for radiometrically calibrating cameras because the radiometric response function preserves these three properties. The effectiveness of the proposed method is demonstrated using various simulated and real-world scenes that contain a variety of diffuse and specular surfaces."}
{"_id": "2df6c9cc1c561c8b8ee2cb11a5f52f8fea6537b1", "title": "A computer vision-aided motion sensing algorithm for mobile robot's indoor navigation", "text": "This paper presents the design and analysis of a computer vision-aided motion sensing algorithm for wheeled mobile robot's indoor navigation. The algorithm is realized using two vision cameras attached on a wheeled mobile robot. The first camera is positioned at front-looking direction while the second camera is positioned at downward-looking direction. An algorithm is developed to process the images acquired from the cameras to yield the mobile robot's positions and orientations. The proposed algorithm is implemented on a wheeled mobile robot for real-world effectiveness testing. Results are compared and shown the accuracy of the proposed algorithm. At the end of the paper, an artificial landmark approach is introduced to improve the navigation efficiency. Future work involved implementing the proposed artificial landmark for indoor navigation applications with minimized accumulated errors."}
{"_id": "2992b02dc4f89d016ed9db7eae2c3e471ed891b7", "title": "ANALYTIC PROBLEMS FOR ELLIPTIC CURVES", "text": "We consider some problems of analytic number theory for elliptic curves which can be considered as analogues of classical questions around the distribution of primes in arithmetic progressions to large moduli, and to the question of twin primes. This leads to some local results on the distribution of the group structures of elliptic curves defined over a prime finite field, exhibiting an interesting dichotomy for the occurence of the possible groups."}
{"_id": "8f520b7a21421b15c8765d399816f6f5bed5cc42", "title": "Classification of text documents supervised by domain ontologies", "text": "The research objective is to establish an approach for supporting the classification of text documents referring to a specified domain. The focus is on the preliminary topic assignment to the documents used for training the model. The method implements domain ontology as background knowledge. The idea consists in extracting the preliminary topics for training the classifier by means of unsupervised machine learning on a text corpus and further alignment of the document vectors to concepts of the ontology. The results obtained by classification of new documents supervised by e-governance ontology with several machine learning algorithms showed sufficient match of their content to the ontology concepts. A conclusion is drawn that the approach can support the automatic extraction of documents relevant to any domain described by ontology."}
{"_id": "206db06251922cdf312e75f2fc9b51aafb47efeb", "title": "Gene selection for cancer identification: a decision tree model empowered by particle swarm optimization algorithm", "text": "In the application of microarray data, how to select a small number of informative genes from thousands of genes that may contribute to the occurrence of cancers is an important issue. Many researchers use various computational intelligence methods to analyzed gene expression data. To achieve efficient gene selection from thousands of candidate genes that can contribute in identifying cancers, this study aims at developing a novel method utilizing particle swarm optimization combined with a decision tree as the classifier. This study also compares the performance of our proposed method with other well-known benchmark classification methods (support vector machine, self-organizing map, back propagation neural network, C4.5 decision tree, Naive Bayes, CART decision tree, and artificial immune recognition system) and conducts experiments on 11 gene expression cancer datasets. Based on statistical analysis, our proposed method outperforms other popular classifiers for all test datasets, and is compatible to SVM for certain specific datasets. Further, the housekeeping genes with various expression patterns and tissue-specific genes are identified. These genes provide a high discrimination power on cancer classification."}
{"_id": "1cd006635a4d5185be7aa941d47cadd3b9cd3da5", "title": "Function and regulation in MAPK signaling pathways: lessons learned from the yeast Saccharomyces cerevisiae.", "text": "Signaling pathways that activate different mitogen-activated protein kinases (MAPKs) elicit many of the responses that are evoked in cells by changes in certain environmental conditions and upon exposure to a variety of hormonal and other stimuli. These pathways were first elucidated in the unicellular eukaryote Saccharomyces cerevisiae (budding yeast). Studies of MAPK pathways in this organism continue to be especially informative in revealing the molecular mechanisms by which MAPK cascades operate, propagate signals, modulate cellular processes, and are controlled by regulatory factors both internal to and external to the pathways. Here we highlight recent advances and new insights about MAPK-based signaling that have been made through studies in yeast, which provide lessons directly applicable to, and that enhance our understanding of, MAPK-mediated signaling in mammalian cells."}
{"_id": "db4abadb6f868715801c0f348f49cfd3a1fb264a", "title": "The ALSFRS-R: a revised ALS functional rating scale that incorporates assessments of respiratory function", "text": "The ALS Functional Rating Scale (ALSFRS) is a validated rating instrument for monitoring the progression of disability in patients with amyotrophic lateral sclerosis (ALS). One weakness of the ALSFRS as originally designed was that it granted disproportionate weighting to limb and bulbar, as compared to respiratory, dysfunction. We have now validated a revised version of the ALSFRS, which incorporates additional assessments of dyspnea, orthopnea, and the need for ventilatory support. The Revised ALSFRS (ALSFRS-R) retains the properties of the original scale and shows strong internal consistency and construct validity. ALSFRS-R scores correlate significantly with quality of life as measured by the Sickness Impact Profile, indicating that the quality of function is a strong determinant of quality of life in ALS."}
{"_id": "006a9e55241b70f85822d3f0e7d04b80c8026583", "title": "Designing and implementing an integrated technological pedagogical science knowledge framework for science teachers professional development", "text": "This paper reports on the design and the implementation of the Technological Pedagogical Science Knowledge (TPASK), a new model for science teachers professional development built on an integrated framework determined by the Technological Pedagogical Content Knowledge (TPACK) model and the authentic learning approach. The TPASK curriculum dimensions and the related course sessions are also elaborated and applied in the context of a teacher trainers\u2019 preparation program aiming at ICT integration in science classroom practice. A brief description of the project, its accomplishments, and perceptions of the participants, through the lens of TPASK professional development model, are presented. This is followed by the presentation of the evaluation results on the impact of the program which demonstrates that science teachers reported meaningful TPASK knowledge and increased willingness to adopt and apply this framework in their instruction. Finally, we draw on the need to expand TPACK by incorporating a fourth dimension, the Educational Context within Pedagogy, Content and Technology mutually interact, in order to address future policy models concerning teacher preparation to integrate ICT in education. 2010 Elsevier Ltd. All rights reserved."}
{"_id": "9cbb1df07eed4c8c2bd26fe02a3310a4f11a6dac", "title": "Improv Chat: Second Response Generation for Chatbot", "text": "Existing research on response generation for chatbot focuses on First Response Generation which aims to teach the chatbot to say the first response (e.g. a sentence) appropriate to the conversation context (e.g. the user\u2019s query). In this paper, we introduce a new task Second Response Generation, termed as Improv chat, which aims to teach the chatbot to say the second response after saying the first response with respect the conversation context, so as to lighten the burden on the user to keep the conversation going. Specifically, we propose a general learning based framework and develop a retrieval based system which can generate the second responses with the users\u2019 query and the chatbot\u2019s first response as input. We present the approach to building the conversation corpus for Improv chat from public forums and social networks, as well as the neural networks based models for response matching and ranking. We include the preliminary experiments and results in this paper. This work could be further advanced with better deep matching models for retrieval base systems or generative models for generation based systems as well as extensive evaluations in real-life applications."}
{"_id": "9e995540fd196e87302e3509a5a47487d871089e", "title": "Multilabel Classification via Co-Evolutionary Multilabel Hypernetwork", "text": "Multilabel classification is prevalent in many real-world applications where data instances may be associated with multiple labels simultaneously. In multilabel classification, exploiting label correlations is an essential but nontrivial task. Most of the existing multilabel learning algorithms are either ineffective or computationally demanding and less scalable in exploiting label correlations. In this paper, we propose a co-evolutionary multilabel hypernetwork (Co-MLHN) as an attempt to exploit label correlations in an effective and efficient way. To this end, we firstly convert the traditional hypernetwork into a multilabel hypernetwork (MLHN) where label correlations are explicitly represented. We then propose a co-evolutionary learning algorithm to learn an integrated classification model for all labels. The proposed Co-MLHN exploits arbitrary order label correlations and has linear computational complexity with respect to the number of labels. Empirical studies on a broad range of multilabel data sets demonstrate that Co-MLHN achieves competitive results against state-of-the-art multilabel learning algorithms, in terms of both classification performance and scalability with respect to the number of labels."}
{"_id": "1d6331e9d600098ecbaf82d498555b753c7cb7e0", "title": "Robust Appointment Scheduling", "text": "We consider the problem of appointment scheduling in a robus t optimization framework. The appointment scheduling problem arises in many service operations, for e xample health care. For each job, we are given its minimum and maximum possible execution times. The objective is to find an appointment schedule for which the cost in the worst case scenario of the realization of the processi ng times of the jobs is minimized. We present a global balancing heuristic, which gives an easy to compute closed f orm optimal schedule when the underage costs of the jobs are non-decreasing. In addition, for the case where we h av the flexibility of changing the order of execution of the jobs, we give simple heuristics to find a near-optimal seq uence of the jobs. Working paper, Operations Research Center, MIT. Last updat ed March 3, 2012. Amazon.com, 333 Boren Avenue N, Seattle WA 98101, USA. Email : mshashi@alum.mit.edu. TU Berlin, Matheon, Berlin, Germany. Email: sebastian.stiller@tu-berlin.de."}
{"_id": "c9f5ac175670d7133280ea6b3e2eb2293cbb6690", "title": "The ASTOOT Approach to Testing Object-Oriented Programs", "text": "This article describes a new approach to the unit testing of object-oriented programs, a set of tools based on this approach, and two case studies. In this approach, each test case consists of a tuple of sequences of messages, along with tags indicating whether these sequences should put objects of the class under test into equivalent states and/or return objects that are in equivalent states. Tests are executed by sending the sequences to objects of the class under test, then invoking a user-supplied equivalence-checking mechanism. This approach allows for substantial automation of many aspects of testing, including test case generation, test driver generation, test execution, and test checking. Experimental prototypes of tools for test generation and test execution are described. The test generation tool requires the availability of an algebraic specification of the abstract data type being tested, but the test execution tool can be used when no formal specification is available. Using the test execution tools, case studies involving execution of tens of thousands of test cases, with various sequence lengths, parameters, and combinations of operations were performed. The relationships among likelihood of detecting an error and sequence length, range of parameters, and relative frequency of various operations were investigated for priority queue and sorted-list implementations having subtle errors. In each case, long sequences tended to be more likely to detect the error, provided that the range of parameters was sufficiently large and likelihood of detecting an error tended to increase up to a threshold value as the parameter range increased."}
{"_id": "025ca3bba0c4ddf946c60a597bedd85bf962f5e6", "title": "Confirmatory factor analysis of the WAIS-IV/WMS-IV.", "text": "The Wechsler Adult Intelligence Scale-fourth edition (WAIS-IV) and the Wechsler Memory Scale-fourth edition (WMS-IV) were co-developed to be used individually or as a combined battery of tests. The independent factor structure of each of the tests has been identified; however, the combined factor structure has yet to be determined. Confirmatory factor analysis was applied to the WAIS-IV/WMS-IV Adult battery (i.e., age 16-69 years) co-norming sample (n = 900) to test 13 measurement models. The results indicated that two models fit the data equally well. One model is a seven-factor solution without a hierarchical general ability factor: Verbal Comprehension, Perceptual Reasoning, Processing Speed, Auditory Working Memory, Visual Working Memory, Auditory Memory, and Visual Memory. The second model is a five-factor model composed of Verbal Comprehension, Perceptual Reasoning, Processing Speed, Working Memory, and Memory with a hierarchical general ability factor. Interpretative implications for each model are discussed."}
{"_id": "0d3de784c0a418d2c6eefdfcbc8f5a93da97af7e", "title": "A compact planar feed structure for Ka-band satcom-on-the-move tracking antennas", "text": "The increasing interest in bi-directional mobile high data rate satellite communications in Ka-band necessitates the development of dedicated antenna tracking systems and feeds. In this paper we describe a compact feed structure based on printed circuit boards for a mobile satellite communications ground terminal with a Cassegrain reflector antenna. The novel structure provides a dual circular polarisation communication mode as well as the TM01 mode for multimode monopulse tracking. This coupler, based on carefully matched transitions from grounded coplanar lines to circular waveguides, is operational at 20GHz and 30 GHz, to cover the downlink and the uplink frequency ranges in Ka-band. This work contributes to the development of a satellite terminal for land-mobile communications in disaster scenarios."}
{"_id": "f77d539d2904ecc8bc9ceb9019dc0e502f1489eb", "title": "Dysbiosis of the gut microbiota in disease", "text": "There is growing evidence that dysbiosis of the gut microbiota is associated with the pathogenesis of both intestinal and extra-intestinal disorders. Intestinal disorders include inflammatory bowel disease, irritable bowel syndrome (IBS), and coeliac disease, while extra-intestinal disorders include allergy, asthma, metabolic syndrome, cardiovascular disease, and obesity."}
{"_id": "9ab4883c0ee0db114a81eb3f47bde38a1270c590", "title": "Learning in and about complex systems", "text": "Change is accelerating, and the complexity of the systems in which we live is growing. Increasingly change is the result of humanity itself. As complexity grows so do the unanticipated side effects of human action, further increasing complexity in a vicious cycle. Many scholars call for the development of 'systems thinking' to improve our ability to manage wisely. But how do people learn in and about complex dynamic systems? Learning is a feedback process in which our decisions alter the real world, we receive information feedback about the world, and using the new information we revise the decisions we make and the mental models that motivate those decisions. Unfortunately, in the world of social action various impediments slow or prevent these learning feedbacks from functioning, allowing erroneous and harmful behaviors and beliefs to persist. The barriers to learning include the dynamic complexity of the systems themselves, inadequate and ambiguous outcome feedback, systematic 'misperceptions of feedback' where our cognitive maps omit important feedback processes, delays, stocks and flows, and nonlinearities that characterize complex systems, inability to simulate mentally the dynamics of our cognitive maps, poor interpersonal and organizational inquiry skills, and poor scientific reasoning skills. To be successful methods to enhance learning about complex systems must address all these impediments. Effective methods for learning in and about complex dynamic systems must include (1) tools to elicit participant knowledge, articulate and reframe perceptions, and create maps of the feedback structure of a problem from those perceptions; (2) simulation tools and management flight simulators to assess the dynamics of those maps and test new policies; and (3) methods to improve scientific reasoning skills, strengthen group process and overcome defensive routines for individuals and teams."}
{"_id": "e3510cb51f9aae9048e328a1c42c871886b7db20", "title": "Immunity to Device Variations in a Spiking Neural Network With Memristive Nanodevices", "text": "Memristive nanodevices can feature a compact multilevel nonvolatile memory function, but are prone to device variability. We propose a novel neural network-based computing paradigm, which exploits their specific physics, and which has virtual immunity to their variability. Memristive devices are used as synapses in a spiking neural network performing unsupervised learning. They learn using a simplified and customized \u201cspike timing dependent plasticity\u201d rule. In the network, neurons\u2019 threshold is adjusted following a homeostasis-type rule. We perform system level simulations with an experimentally verified model of the memristive devices\u2019 behavior. They show, on the textbook case of character recognition, that performance can compare with traditional supervised networks of similar complexity. They also show that the system can retain functionality with extreme variations of various memristive devices\u2019 parameters (a relative standard dispersion of more than 50% is tolerated on all device parameters), thanks to the robustness of the scheme, its unsupervised nature, and the capability of homeostasis. Additionally the network can adjust to stimuli presented with different coding schemes, is particularly robust to read disturb effects and does not require unrealistic control on the devices\u2019 conductance. These results open the way for a novel design approach for ultraadaptive electronic systems."}
{"_id": "0aa37a31f17726737ac8b08f12a9ba746aa58e36", "title": "AlGaN/GaN current aperture vertical electron transistors", "text": "Describes AlGaN/GaN current aperture vertical electron transistor (CAVET) structures. A CAVET consists of a source region separated from a drain region by an insulating layer containing a narrow aperture which is filled with conducting material. A device mesa is formed by reactive ion etching, and source contacts are deposited on either side of the aperture. The drain metal contacts the n-doped region below the aperture. Electrons flow from the source contacts through the aperture into the n-type base region and are collected at the drain. A Schottky gate, located directly above the aperture, is used to modulate the current passing through the aperture. In a CAVET, because the virtual drain (or pinched off region) is located underneath the gate, charge does not accumulate at the gate edge, so no large fields near the gate edge are present. Instead, our simulations show that the high field region in a CAVET is buried in the bulk. The CAVET therefore has the potential to support large source-drain voltages, since surface related breakdown is eliminated."}
{"_id": "6d52bca7532777c4fac49e6e08c99ff9d51e34e4", "title": "End-to-End Channel Capacity of a Wireless Sensor Network Under Reachback", "text": "Many wireless sensor network applications will impose a many-to-one traffic flow pattern. This phenomena in communication networks is referred to as reachback. The higher volume of traffic in traffic hotspots, which usually form around the base station, limits the number of sensors that can participate in the formation of a wireless sensor network with a fixed per sensor data rate. To alleviate the effects of reachback, spatial correlation of sensor data has enabled the use of Slepian-Wolf coding as a compression technique to reduce the volume of traffic in sensor networks. Within a cluster, Slepian-Wolf coding is used to compress the sensor readings. Clusterheads then transmit the compressed sensor readings over the overlay network which is formed by all clusterheads. We develop an expression for the end-to-end (sensor-to-base station) channel capacity of such a wireless sensor network and observe the effect of variations in the number of clusterheads on end-to-end capacity."}
{"_id": "c0a2293809917839047c7ec98d942777ca426e57", "title": "Building Enterprise Systems Infrastructure Flexibility as Enabler of Organisational Agility: Empirical Evidence", "text": "Enterprise systems (ES) that capture the most advanced developments of information technology are becoming common fixtures in most organisations. However, how ES affect organizational agility (OA) has been less researched and the existing research remains equivocal. From the perspective that ES can positively contribute to OA, this research via theory-based model development and rigorous empirical investigation of the proposed model, has bridged significant research gaps and provided empirical evidence for, and insights into, the effect of ES on OA. The empirical results based on data collected from 179 large organizations in Australia and New Zealand which have implemented and used ES for at least one year show that organisations can achieve agility out of their ES in two ways: by developing ES technical competences to build ES-enabled capabilities that digitise their key sensing and responding processes; and when ES-enabled sensing and responding capabilities are aligned in relatively turbulent environment."}
{"_id": "e497b884938f491a9fd53b89f54a15ee8ecf6ce9", "title": "Central Lip Lift as Aesthetic and Physiognomic Plastic Surgery: The Effect on Lower Facial Profile.", "text": "BACKGROUND\nA central lip lift was introduced to Westerners in 1980s. However, no studies have been conducted on the facial aesthetic and physiognomic perspectives of a central lip lift in the Asian population.\n\n\nOBJECTIVES\nThe authors presented the central lip lift as aesthetic and physiognomic treatment in Asians and explained its effect on lower facial profile.\n\n\nMETHODS\nA retrospective chart review was performed in 202 cases of asians. The authors analyzed patient age, cause of long philtrum, purpose of the treatment, and postoperative satisfaction. The authors then performed an anthropometric assessment and a photographic analysis.\n\n\nRESULTS\nThe vertical disproportion of the lower face was improved after the treatment, and there was significant shortening of the philtrum length (P < .001) and an increase in a visible upper vermilion (P < .001). In Westerners, a long philtrum was mainly caused by the aging process. Aging patients (range, 40-59 years) underwent the central lip lift for upper lip rejuvenation. In contrast, in Asia, a long philtrum was primarily caused by bone retraction after an orthognathic surgery or orthodontic procedure. Young patients (range, 20-39 years old) underwent the central lip lift to correct a relatively lengthened philtrum after 2-jaw surgery. Furthermore, about half of the patients (52.0%) underwent the central lip lift for facial physiognomic improvement.\n\n\nCONCLUSIONS\nIn today's multiracial society, plastic surgeons planning a central lip lift in Asian patients should consider both aesthetic and physiognomic perspectives. Regardless of the aesthetic outcome, the surgeon should strive to maximize patient satisfaction.\n\n\nLEVEL OF EVIDENCE\n4 Therapeutic."}
{"_id": "13b2c975571893815e02a94e34cd64e1ce9100a2", "title": "A Fast Algorithm for Edge-Preserving Variational Multichannel Image Restoration", "text": "We generalize the alternating minimization algorithm recently proposed in [32] to efficiently solve a general, edge-preserving, variational model for recovering multichannel images degraded by withinand cross-channel blurs, as well as additive Gaussian noise. This general model allows the use of localized weights and higher-order derivatives in regularization, and includes a multichannel extension of total variation (MTV) regularization as a special case. In the MTV case, we show that the model can be derived from an extended half-quadratic transform of Geman and Yang [14]. For color images with three channels and when applied to the MTV model (either locally weighted or not), the per-iteration computational complexity of this algorithm is dominated by nine fast Fourier transforms. We establish strong convergence results for the algorithm including finite convergence for some variables and fast q-linear convergence for the others. Numerical results on various types of blurs are presented to demonstrate the performance of our algorithm compared to that of the MATLAB deblurring functions. We also present experimental results on regularization models using weighted MTV and higher-order derivatives to demonstrate improvements in image quality provided by these models over the plain MTV model."}
{"_id": "34b305d72c1d035bee77c4ab190190b8e349beb9", "title": "Hyperbolic centroidal Voronoi tessellation", "text": "The centroidal Voronoi tessellation (CVT) has found versatile applications in geometric modeling, computer graphics, and visualization. In this paper, we extend the concept of the CVT from Euclidean space to hyperbolic space. A novel hyperbolic CVT energy is defined, and the relationship between minimizing this energy and the hyperbolic CVT is proved. We also show by our experimental results that the hyperbolic CVT has the similar property as its Euclidean counterpart where the sites are uniformly distributed according to given density values. Two algorithms -- Lloyd's algorithm and the L-BFGS algorithm -- are adopted to compute the hyperbolic CVT, and the convergence of Lloyd's algorithm is proved. As an example of the application, we utilize the hyperbolic CVT to compute uniform partitions and high-quality remeshing results for high-genus (genus>1) surfaces."}
{"_id": "793651f4cf210bd81922d173346b037d66f2b4a4", "title": "Bayes Optimality in Linear Discriminant Analysis", "text": "We present an algorithm that provides the one-dimensional subspace, where the Bayes error is minimized for the C class problem with homoscedastic Gaussian distributions. Our main result shows that the set of possible one-dimensional spaces v, for which the order of the projected class means is identical, defines a convex region with associated convex Bayes error function g(v). This allows for the minimization of the error function using standard convex optimization algorithms. Our algorithm is then extended to the minimization of the Bayes error in the more general case of heteroscedastic distributions. This is done by means of an appropriate kernel mapping function. This result is further extended to obtain the d dimensional solution for any given d by iteratively applying our algorithm to the null space of the (d - l)-dimensional solution. We also show how this result can be used to improve upon the outcomes provided by existing algorithms and derive a low-computational cost, linear approximation. Extensive experimental validations are provided to demonstrate the use of these algorithms in classification, data analysis and visualization."}
{"_id": "6805c3d025c5de65eae9a3c0d930bca79f7dcc50", "title": "Emotional modulation of cognitive control: approach-withdrawal states double-dissociate spatial from verbal two-back task performance.", "text": "Emotional states might selectively modulate components of cognitive control. To test this hypothesis, the author randomly assigned 152 undergraduates (equal numbers of men and women) to watch short videos intended to induce emotional states (approach, neutral, or withdrawal). Each video was followed by a computerized 2-back working memory task (spatial or verbal, equated for difficulty and appearance). Spatial 2-back performance was enhanced by a withdrawal state and impaired by an approach state; the opposite pattern held for verbal performance. The double dissociation held more strongly for participants who made more errors than average across conditions. The results suggest that approach-withdrawal states can have selective influences on components of cognitive control, possibly on a hemispheric basis. They support and extend several frameworks for conceptualizing emotion-cognition interactions."}
{"_id": "3505447904364877605aabaa450c09568c8db1ec", "title": "Smart irrigation using low-cost moisture sensors and XBee-based communication", "text": "Deficiency in fresh water resources globally has raised serious alarms in the last decade. Efficient management of water resources play an important role in the agriculture sector. Unfortunately, this is not given prime importance in the third world countries because of adhering to traditional practices. This paper presents a smart system that uses a bespoke, low cost soil moisture sensor to control water supply in water deficient areas. The sensor, which works on the principle of moisture dependent resistance change between two points in the soil, is fabricated using affordable materials and methods. Moisture data acquired from a sensor node is sent through XBEE wireless communication modules to a centralized server that controls water supply. A user-friendly interface is developed to visualize the daily moisture data. The low-cost and wireless nature of the sensing hardware presents the possibility to monitor the moisture levels of large agricultural fields. Moreover, the proposed moisture sensing method has the ability to be incorporated into an automated drip-irrigation scheme and perform automated, precision agriculture in conjunction with de-centralized water control."}
{"_id": "2e64b370a86bcdaac392ca078f41f5bbe8d0307f", "title": "An approach for the economical evaluation of solar irrigation in Bangladesh", "text": "This paper represents a comparative picture of irrigation cost of different Bangladeshi crops for diesel, grid electricity and solar power based irrigation systems. The study has been conducted on 27 types of crops. All the data were collected about volume of water for those crops. Then three different types of pump (solar, diesel, electric) have been chosen with same power rating i.e. 5hp. Specific area covered for different crops are calculated furthermore from the attained water volumes. Then finally it has been calculated the 10 years cost in taka. The study found for the entire crops grid powered irrigation cost is minimum than solar powered irrigation cost because the later one is associated with huge primary investment.[12] The study also discovered irrigation with solar power for most of the crops such as onion, carrot, chill, tomato, maize, garlic, gourd, ginger, turmeric, pumpkin, cabbage, cauliflower, lady finger, banana, papaya and groundnut are not beneficial at all rather it costs very high among all the three types of irrigation system.[5] It is also evident that irrigation with solar power of certain crops like potato, cotton, soybean, sunflower, strawberry, lentil, mustard are very much lucrative compared to diesel powered irrigation."}
{"_id": "6f78a9f48de4e2942dcc3562c30fe380a316a4ee", "title": "Sensor data collection and irrigation control on vegetable crop using smart phone and wireless sensor networks for smart farm", "text": "Feeding of the world in the 21st century is the biggest challenge, especially for smart farm business. The smart farm has used agriculture automation system instead of traditional agriculture. Traditional agricultural methods employed by the local people are highly sustainable, although the all inclusive cost is not cheap. Our research goal is to provide long term sustainable solution for automation of agriculture. Agriculture automation has several methods to getting data from vegetable crop like sensor for environmental measurement. Therefore, we developed a portable measurement technology including soil moisture sensor, air humidity sensor and air temperature sensor. Moreover, irrigation system using wireless sensor network has installed these sensors, with the purpose for collecting the environment data and controlling the irrigation system via smart phone. The purpose of the experiment is to find better ways of controlling an irrigation system with automatic system and manual control by smart phone. In order to control an irrigation system, we have developed the communication methodology of the wireless sensor network for collected environment data and sending control command to turn on/off irrigation system. It is successful for controlling the irrigation system and controlling the water near the vegetable roots. In this paper, we have attempted to demonstrate the automation of the irrigation system that is useful for farm business which make it comfortable than using traditional agriculture by using smart phone for monitoring and controlling the system. Accordingly, in the long-term has reduced cost as well. The experimental result shows that the accuracy of sending and receiving command control for irrigation system is 96 percent and accuracy of environment collection is 98 percent."}
{"_id": "e7a3b413197f259c85fbcfb454c27362bbfcd15c", "title": "Microcontroller based drip irrigation system using smart sensor", "text": "In the past couple of decades, there is rapid growth in terms of technology in the field of agriculture. Different monitoring and controlled systems are installed in order to increase the yield. The yield rate may deceases due to numerous factors. Disease is one of the key factors that cause the degradation of yield. So the developed monitoring system mainly focuses on predicting the start of germination of the disease. Sensor module is used to detect different environmental condition across the farm and the sensed data is displayed on Liquid crystal display using microcontroller. Microcontroller wirelessly transmits different environment conditions across the farm to central unit where data is stored, and analysed. Central unit checks the present data with disease condition and if matches then it commands microcontroller to operate relay. Sensor module is tested for different temperature range and it is found that there are little variations in recorded values. Wireless data transfer is tested with the introduction of various obstacles like wall, metal body, magnet, etc. and it is found that same data is transferred to central unit but with some amount of delay in it. The developed system nearly predicts the start of germination of disease."}
{"_id": "768b18d745639fcfb157fe16cbd957ca60ebfc2e", "title": "Design of Electronic Throttle Valve Position Control System using Nonlinear PID Controller", "text": "In recent years, the use of electronic throttle valve systems has been very popular in the automotive industry. It is used to regulate the amount of air flow into the engine. Due to the existence of multiple nonsmooth nonlinearities, the controller design to the electronic throttle valve becomes difficult task. These nonlinearities including stick\u2013slip friction, backlash, and a discontinuous nonlinear spring involved in the system. In the first part of this paper the electronic throttle valve system is presented first, and then the model is derived for each components of the throttle valve system. Later, the system dimension is reduced to two by ignoring the motor inductance at the end of this part of work. Actually this step enables us to design a nonlinear PID controller electronic throttle valve system. The simulation results, of applying a nonlinear PID controller to the electronic throttle valve system, show the effectiveness of the proposed controller in forcing the angle of the throttle valve to the desired opening angle in presence of nonlinearities and disturbances in throttle valve system model."}
{"_id": "7d7cb08e751edb0b5519a041925f1f93200d0119", "title": "Mining time-changing data streams", "text": "Most statistical and machine-learning algorithms assume that the data is a random sample drawn from a stationary distribution. Unfortunately, most of the large databases available for mining today violate this assumption. They were gathered over months or years, and the underlying processes generating them changed during this time, sometimes radically. Although a number of algorithms have been proposed for learning time-changing concepts, they generally do not scale well to very large databases. In this paper we propose an efficient algorithm for mining decision trees from continuously-changing data streams, based on the ultra-fast VFDT decision tree learner. This algorithm, called CVFDT, stays current while making the most of old data by growing an alternative subtree whenever an old one becomes questionable, and replacing the old with the new when the new becomes more accurate. CVFDT learns a model which is similar in accuracy to the one that would be learned by reapplying VFDT to a moving window of examples every time a new example arrives, but with O(1) complexity per example, as opposed to O(w), where w is the size of the window. Experiments on a set of large time-changing data streams demonstrate the utility of this approach."}
{"_id": "4420fca3cb722ad0478030c8209b550cd7db8095", "title": "Wearable Medical Systems for p-Health", "text": "Driven by the growing aging population, prevalence of chronic diseases, and continuously rising healthcare costs, the healthcare system is undergoing a fundamental transformation, from the conventional hospital-centered system to an individual-centered system. Current and emerging developments in wearable medical systems will have a radical impact on this paradigm shift. Advances in wearable medical systems will enable the accessibility and affordability of healthcare, so that physiological conditions can be monitored not only at sporadic snapshots but also continuously for extended periods of time, making early disease detection and timely response to health threats possible. This paper reviews recent developments in the area of wearable medical systems for p-Health. Enabling technologies for continuous and noninvasive measurements of vital signs and biochemical variables, advances in intelligent biomedical clothing and body area networks, approaches for motion artifact reduction, strategies for wearable energy harvesting, and the establishment of standard protocols for the evaluation of wearable medical devices are presented in this paper with examples of clinical applications of these technologies."}
{"_id": "008baae7037a47f69804c2eb8438d366a6e67486", "title": "3D Human Pose Estimation via Deep Learning from 2D Annotations", "text": "We propose a deep convolutional neural network for 3D human pose and camera estimation from monocular images that learns from 2D joint annotations. The proposed network follows the typical architecture, but contains an additional output layer which projects predicted 3D joints onto 2D, and enforces constraints on body part lengths in 3D. We further enforce pose constraints using an independently trained network that learns a prior distribution over 3D poses. We evaluate our approach on several benchmark datasets and compare against state-of-the-art approaches for 3D human pose estimation, achieving comparable performance. Additionally, we show that our approach significantly outperforms other methods in cases where 3D ground truth data is unavailable, and that our network exhibits good generalization properties."}
{"_id": "5c3fb7e2ffc8b312b20bae99c822d427d0dc003d", "title": "Wireless underground sensor networks: Research challenges", "text": "This work introduces the concept of a Wireless Underground Sensor Network (WUSN). WUSNs can be used to monitor a variety of conditions, such as soil properties for agricultural applications and toxic substances for environmental monitoring. Unlike existing methods of monitoring underground conditions, which rely on buried sensors connected via wire to the surface, WUSN devices are deployed completely belowground and do not require any wired connections. Each device contains all necessary sensors, memory, a processor, a radio, an antenna, and a power source. This makes their deployment much simpler than existing underground sensing solutions. Wireless communication within a dense substance such as soil or rock is, however, significantly more challenging than through air. This factor, combined with the necessity to conserve energy due to the difficulty of unearthing and recharging WUSN devices, requires that communication protocols be redesigned to be as efficient as possible. This work provides an extensive overview of applications and design challenges for WUSNs, challenges for the underground communication channel including methods for predicting path losses in an underground link, and challenges at each layer of the communication protocol stack. 2006 Elsevier B.V. All rights reserved."}
{"_id": "0b078bbfad5463251ff38d34fc755e4534e25e47", "title": "Converting an unstructured quadrilateral / hexahedral mesh to a rational T-spline", "text": "This paper presents a novel method for converting any unstructured quadrilateral or hexahedral mesh to a generalized T-spline surface or solid T-spline, based on the rational T-spline basis functions. Our conversion algorithm consists of two stages: the topology stage and the geometry stage. In the topology stage, the input quadrilateral or hexahedral mesh is taken as the initial T-mesh. To construct a gap-free T-spline, templates are designed for each type of node and applied to elements in the input mesh. In the geometry stage, an efficient surface fitting technique is developed to improve the surface accuracy with sharp feature preservation. The constructed T-spline surface and solid T-spline interpolate every boundary node in the input mesh, with C2-continuity everywhere except the local region around irregular nodes. Finally, a B\u00e9zier extraction technique is developed and linear independence of the constructed T-splines is studied to facilitate T-spline based isogeometric analysis."}
{"_id": "24c5877251ba8b31570256f46247740e42aa59e4", "title": "Quantifying social group evolution", "text": "The rich set of interactions between individuals in society results in complex community structure, capturing highly connected circles of friends, families or professional cliques in a social network. Thanks to frequent changes in the activity and communication patterns of individuals, the associated social and communication network is subject to constant evolution. Our knowledge of the mechanisms governing the underlying community dynamics is limited, but is essential for a deeper understanding of the development and self-optimization of society as a whole. We have developed an algorithm based on clique percolation that allows us to investigate the time dependence of overlapping communities on a large scale, and thus uncover basic relationships characterizing community evolution. Our focus is on networks capturing the collaboration between scientists and the calls between mobile phone users. We find that large groups persist for longer if they are capable of dynamically altering their membership, suggesting that an ability to change the group composition results in better adaptability. The behaviour of small groups displays the opposite tendency\u2014the condition for stability is that their composition remains unchanged. We also show that knowledge of the time commitment of members to a given community can be used for estimating the community\u2019s lifetime. These findings offer insight into the fundamental differences between the dynamics of small groups and large institutions."}
{"_id": "45adc16c4111bcc5201f721d2d573dc8206f8a79", "title": "HUMAN-AUTOMATION INTERACTION : TAXONOMIES AND QUALITATIVE MODELS The Meaning of Human-Automation Interaction", "text": "Automation does not mean humans are replaced; quite the opposite. Increasingly, humans are asked to interact with automation in complex and typically large-scale systems, including aircraft and air traffic control, nuclear power, manufacturing plants, military systems, homes, and hospitals. This is not an easy or error-free task for either the system designer or the human operator/automation supervisor, especially as computer technology becomes ever more sophisticated. This review outlines recent research and challenges in the area, including taxonomies and qualitative models of human-automation interaction; descriptions of automation-related accidents and studies of adaptive automation; and social, political, and ethical issues."}
{"_id": "e4eef0b40cfb3d57d2c44313eb0fff787b750b29", "title": "Assisting Users in a World Full of Cameras: A Privacy-Aware Infrastructure for Computer Vision Applications", "text": "Computer vision based technologies have seen widespread adoption over the recent years. This use is not limited to the rapid adoption of facial recognition technology but extends to facial expression recognition, scene recognition and more. These developments raise privacy concerns and call for novel solutions to ensure adequate user awareness, and ideally, control over the resulting collection and use of potentially sensitive data. While cameras have become ubiquitous, most of the time users are not even aware of their presence. In this paper we introduce a novel distributed privacy infrastructure for the Internet-of-Things and discuss in particular how it can help enhance user's awareness of and control over the collection and use of video data about them. The infrastructure, which has undergone early deployment and evaluation on two campuses, supports the automated discovery of IoT resources and the selective notification of users. This includes the presence of computer vision applications that collect data about users. In particular, we describe an implementation of functionality that helps users discover nearby cameras and choose whether or not they want their faces to be denatured in the video streams."}
{"_id": "430a2146b47522f45bf417e17bb89922f3d9fcbc", "title": "New insights into churn prediction in the telecommunication sector: A profit driven data mining approach", "text": "Customer churn prediction models aim to indicate the customers with the highest propensity to attrite, allowing to improve the efficiency of customer retention campaigns and to reduce the costs associated with churn. Although cost reduction is their prime objective, churn prediction models are typically evaluated using statistically based performance measures, resulting in suboptimal model selection. Therefore, in the first part of this paper, a novel, profit centric performance measure is developed, by calculating the maximum profit that can be generated by including the optimal fraction of customers with the highest predicted probabilities to attrite in a retention campaign. The novel measure selects the optimal model and fraction of customers to include, yielding a significant increase in profits compared to statistical mea-"}
{"_id": "8e7b3859813a746f9b16a584ecb024103fce48cb", "title": "The Challenge of Optical Music Recognition", "text": "This article describes the challenges posed by optical music recognition \u2013 a topic in computer science that aims to convert scanned pages of music into an on-line format. First, the problem is described; then a generalised framework for software is presented that emphasises key stages that must be solved: staff line identification, musical object location, musical feature classification, and musical semantics. Next, significant research projects in the area are reviewed, showing how each fits the generalised framework. The article concludes by discussing perhaps the most open question in the field: how to compare the accuracy and success of rival systems, highlighting certain steps that help ease the task."}
{"_id": "2bdffe03bff8373445eda7e1e9439b7300a6f2fd", "title": "A Neural Click Model for Web Search", "text": "Disclaimer/Complaints regulations If you believe that digital publication of certain material infringes any of your rights or (privacy) interests, please let the Library know, stating your reasons. In case of a legitimate complaint, the Library will make the material inaccessible and/or remove it from the website. Please Ask the Library: http://uba.uva.nl/en/contact, or a letter to: Library of the University of Amsterdam, Secretariat, Singel 425, 1012 WP Amsterdam, The Netherlands. You will be contacted as soon as possible."}
{"_id": "7bc2b6e32ec9d3097cb5c494e05dfe28f9786ec9", "title": "Inf2vec: Latent Representation Model for Social Influence Embedding", "text": "As a fundamental problem in social influence propagation analysis, learning influence parameters has been extensively investigated. Most of the existing methods are proposed to estimate the propagation probability for each edge in social networks. However, they cannot effectively learn propagation parameters of all edges due to data sparsity, especially for the edges without sufficient observed propagation. Different from the conventional methods, we introduce a novel social influence embedding problem, which is to learn parameters for nodes rather than edges. Nodes are represented as vectors in a low-dimensional space, and thus social influence information can be reflected by these vectors. We develop a new model Inf2vec, which combines both the local influence neighborhood and global user similarity to learn the representations. We conduct extensive experiments on two real-world datasets, and the results indicate that Inf2vec significantly outperforms state-of-the-art baseline algorithms."}
{"_id": "4995f4b0eba9f4516101e803063b591076a24985", "title": "BiCoS: A Bi-level co-segmentation method for image classification", "text": "The objective of this paper is the unsupervised segmentation of image training sets into foreground and background in order to improve image classification performance. To this end we introduce a new scalable, alternation-based algorithm for co-segmentation, BiCoS, which is simpler than many of its predecessors, and yet has superior performance on standard benchmark image datasets."}
{"_id": "419004cb43226ce31f093bdd74301e46d46cd87c", "title": "Physical Primitive Decomposition", "text": "Objects are made of parts, each with distinct geometry, physics, functionality, and affordances. Developing such a distributed, physical, interpretable representation of objects will facilitate intelligent agents to better explore and interact with the world. In this paper, we study physical primitive decomposition\u2014understanding an object through its components, each with physical and geometric attributes. As annotated data for object parts and physics are rare, we propose a novel formulation that learns physical primitives by explaining both an object\u2019s appearance and its behaviors in physical events. Our model performs well on block towers and tools in both synthetic and real scenarios; we also demonstrate that visual and physical observations often provide complementary signals. We further present ablation and behavioral studies to better understand our model and contrast it with human performance."}
{"_id": "848644136ee9190ce8098615e5dd60c70a660628", "title": "Ceramic interconnect bridge for heterogeneous integration and multiple chip packaging", "text": "In this paper, we describe the architecture and performance of a fine pitch multiple chip heterogeneous integration solution using ceramic interconnect bridge on organic substrate package. We present the increased IO density and improvement of electrical high-speed performance on signal integrity are achievable through this novel integration scheme, where dense copper routings on small ceramic elements are served as interconnect bridges. The cost and signal attenuation of using ceramic bridge is far better than that of silicon bridge or wafer interposer on substrate."}
{"_id": "95f8e74b6aa73dc94e7ab049f7e13e624669d827", "title": "Towards Seamless Integration of Data Analytics into Existing HPC Infrastructures", "text": "Customers of the High Performance Computing Center (HLRS) tend to execute more complex and data-driven applications, often resulting in large amounts of data of up to 1 Petabyte. The majority of our customers, however, is currently lacking the ability and knowledge to process this amount of data in a timely manner in order to extract meaningful information. We have therefore established a new project in order to support our users with the task of knowledge discovery by means of data analytics. We put the high performance data analytics system, a Cray Urika-GX, into operation to cope with this challenge. In this paper, we give an overview about our project and discuss immanent challenges in bridging the gap between HPC and data analytics in a production environment. The paper concludes with a case study about analyzing log files of a Cray XC40 to detect variations in system performance. We were able to identify successfully so-called aggressor jobs, which reduce significantly the performance of other simultaneously running"}
{"_id": "14e798a60ee4cab75d1a6daaf1e77114fcacfbc8", "title": "outputs Semantic patterns for sentiment analysis of Twitter Conference Item", "text": "Most existing approaches to Twitter sentiment analysis assume that sentiment is explicitly expressed through affective words. Nevertheless, sentiment is often implicitly expressed via latent semantic relations, patterns and dependencies among words in tweets. In this paper, we propose a novel approach that automatically captures patterns of words of similar contextual semantics and sentiment in tweets. Unlike previous work on sentiment pattern extraction, our proposed approach does not rely on external and fixed sets of syntactical templates/patterns, nor requires deep analyses of the syntactic structure of sentences in tweets. We evaluate our approach with tweetand entity-level sentiment analysis tasks by using the extracted semantic patterns as classification features in both tasks. We use 9 Twitter datasets in our evaluation and compare the performance of our patterns against 6 state-of-the-art baselines. Results show that our patterns consistently outperform all other baselines on all datasets by 2.19% at the tweet-level and 7.5% at the entity-level in average F-measure."}
{"_id": "e148568b12ed0400b3f1ab7f6d398ac1b6095e66", "title": "Quiet Neighborhoods: Key to Protect Job Performance Predictability", "text": "Interference of nearby jobs has been recently identified as the dominant reason for the high performance variability of parallel applications running on High Performance Computing (HPC) systems. Typically, HPC systems are dynamic with multiple jobs coming and leaving in an unpredictable fashion, sharing simultaneously the system interconnection network. In such environment contention for network resources is causing random stalls in the progress of application execution degrading application and system performance overall. Eliminating job interactions in their neighbourhoods is key for guaranteeing performance predictability of applications. In this paper we are proposing the concept of quiet neighbourhoods that significantly reduce job interactions. Quiet neighbourhoods are created by the system resource manager in two phases. First, multiple virtual network blocks are defined on the top of the physical network resources based on typical workload distributions. Second, newly arriving jobs are allocated in these virtual blocks based on their size."}
{"_id": "9f0dd03757867ba99917ab8af488115be69c2639", "title": "Use of platelet-rich fibrin membrane following treatment of gingival recession: a randomized clinical trial.", "text": "This 6-month randomized controlled clinical study primarily aimed to compare the results achieved by the use of a platelet-rich fibrin (PRF) membrane or connective tissue graft (CTG) in the treatment of gingival recession and to evaluate the clinical impact of PRF on early wound healing and subjective patient discomfort. Use of a PRF membrane in gingival recession treatment provided acceptable clinical results, followed by enhanced wound healing and decreased subjective patient discomfort compared to CTG-treated gingival recessions. No difference could be found between PRF and CTG procedures in gingival recession therapy, except for a greater gain in keratinized tissue width obtained in the CTG group and enhanced wound healing associated with the PRF group."}
{"_id": "0690ba31424310a90028533218d0afd25a829c8d", "title": "Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs", "text": "Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \u201dsemantic image segmentation\u201d). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \u201cDeepLab\u201d system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6% IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the \u2019hole\u2019 algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU."}
{"_id": "cd249d7b6cbe2255fdb3f2acc13467f0ec0259c4", "title": "PhaseNet: A Deep Convolutional Neural Network for Two-Dimensional Phase Unwrapping", "text": "Phase unwrapping is a crucial signal processing problem in several applications that aims to restore original phase from the wrapped phase. In this letter, we propose a novel framework for unwrapping the phase using deep fully convolutional neural network termed as PhaseNet. We reformulate the problem definition of directly obtaining continuous original phase as obtaining the wrap-count (integer jump of 2 $\\pi$) at each pixel by semantic segmentation and this is accomplished through a suitable deep learning framework. The proposed architecture consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The relationship between the absolute phase and the wrap-count is leveraged in generating abundant simulated data of several random shapes. This deliberates the network on learning continuity in wrapped phase maps rather than specific patterns in the training data. We compare the proposed framework with the widely adapted quality-guided phase unwrapping algorithm and also with the well-known MATLAB's unwrap function for varying noise levels. The proposed framework is found to be robust to noise and computationally fast. The results obtained highlight that deep convolutional neural network can indeed be effectively applied for phase unwrapping, and the proposed framework will hopefully pave the way for the development of a new set of deep learning based phase unwrapping methods."}
{"_id": "548e23658e3c0ab9f38b640dd79315a59988636c", "title": "Why women use makeup: implication of psychological traits in makeup functions.", "text": "Makeup acts and stimulates three of our senses: touch (which encompasses all sensations from the body surface), smell (fragrance), and sight (the process of becoming and looking beautiful). The positive stimulation of these senses by makeup can induce sensory as well as psychological pleasure. In order to understand the relationship of women to their makeup, we interviewed different groups of women on their quality of life and makeup habits. Then, through four standard well-validated psychometric self-questionnaires, we examined the possible relation between the need to make up oneself and specific psychological features. Our first results clearly showed that makeup could support two opposite \"up\" functions, i.e., \"camouflage\" vs \"seduction.\" Concerning their psychological profiles, results showed that women of the functional class \"camouflage\" are more anxious, defensive, and emotionally unstable compared to those of the functional class \"seduction,\" who appear to be more sociable, assertive, and extroverted. Further analyses revealed a division of the two classes into subclasses of volunteers with opposed personality and psychological profiles. This new classification allowed us to define more precisely the relations existing within the subjective experience of women during the makeup process. In conclusion, our study revealed that beyond the simple application of colorful products on the face, makeup has two major functional implications depending on specific psychological profiles of women."}
{"_id": "67a9f3a16b78d574d8ed24708f4991bad36cc26d", "title": "Measuring the Physical in Physical Attractiveness : Quasi-Experiments on the Sociobiology of Female Facial Beauty", "text": "Two quasi-experiments investigated the relation between specific adult female facial features and the attraction, attribution, and altruistic responses of adult males. Precise measurements were obtained of the relative size of 24 facial features in an international sample of photographs of 50 females. Male subjects provided ratings of the attractiveness of each of the females. Positively correlated with attractiveness ratings were the neonate features of large eyes, small nose, and small chin; the maturity features of prominent cheekbones and narrow cheeks; and the expressive features of high eyebrows, large pupils, and large smile. A second study asked males to rate the personal characteristics of 16 previously measured females. The males were also asked to indicate the females for whom they would be most inclined to perform altruistic behaviors, and select for dating, sexual behavior, and childrearing. The second study replicated the correlations of feature measurements with attractiveness. Facial features also predicted personality attributions, altruistic inclinations, and reproductive interest. Sociobiological interpretations are discussed."}
{"_id": "6c5440d919f0e493eb2835c0deab9db48324c265", "title": "Mixed-effects modeling with crossed random effects for subjects and items", "text": "This paper provides a non-technical introduction to mixed-effects models for the analysis of repeated measurement data with subjects and items as crossed random effects. A worked-out example of how to use recent software for mixed-effects modeling is provided. Simulation studies illustrate the advantages offered by mixedeffects analyses compared to traditional analyses based on quasi-F tests, by-subjects analyses, combined by-subjects and by-items analyses, and random regression. Applications and possibilities across a range of domains of inquiry are discussed. 1 General Introduction Psycholinguists and other cognitive psychologists use convenience samples for their experiments, often based on participants within the local university community. When analyzing the data from these experiments, participants are treated as random variables, because the interest of most studies is not about \u2217 Corresponding author. Address: Max Planck Institute for Psycholinguistics, P.O. Box 310, 6500 AH, Nijmegen, The Netherlands Email addresses: baayen@mpi.nl (R.H. Baayen), Doug.Davidson@fcdonders.ru.nl (D.J. Davidson), bates@stat.wisc.edu (D.M. Bates). Preprint submitted to Elsevier Februari 3, 2006 experimental effects present only in the individuals who participated in the experiment, but rather in effects present in speakers everywhere, either within the language studied, or human language users in general. The differences between individuals due to genetic, developmental, environmental, social, political, or chance factors are modeled jointly by means of a participant random effect. A similar logic applies to linguistic materials. Psycholinguists construct materials for the tasks that they employ by a variety of means, but most importantly, most materials in a single experiment do not exhaust all possible syllables, words, or sentences that could be found in a given language, and most choices of language to investigate do not exhaust the possible languages that an experimenter could investigate. In fact, two core principles of the structure of language, the arbitrary (and hence statistical) association between sound and meaning (de Saussure) and the unbounded combination of finite lexical items (von Humboldt), guarantee that a great many language materials must be a sample, rather than an exhaustive list. The space of possible words, and the space of possible sentences, is simply too large to be modeled by any other means. Just as we model human partipants as random variables, we have to model factors characterizing their speech as random variables as well. Clark (1973) illuminated this issue, sparked by the work of Coleman (1964), by showing how language researchers might generalize their results to the larger population of linguistic materials from which they sample by testing for statistical significance of experimental contrasts with participants and items analyses. Clark\u2019s oft-cited paper presented a technical solution to this modeling problem, based on statistical theory and computational methods available at the time (e.g., Winer, 1971). This solution involved computing a quasi-F statistic which, in the simplest-to-use form, could be approximated by the use of a combined minimum-F statistic derived from separate participants (F1) and items (F2) analyses. 
In the 30+ years since, statistical techniques have expanded the space of possible solutions to this problem, but these techniques have not yet been applied widely in the field of language and memory studies. The present paper offers a new alternative known as a mixed effects model approach, based on maximum likelihood methods that are now in common use in many areas of science, medicine, and engineering. More specifically, we introduce a very recent development in computational statistics, namely, the possibility to include subjects and items as crossed random effects, as opposed to hierarchical or multilevel models in which random effects must be assumed to be nested. Traditional approaches to random effects modeling suffer multiple drawbacks which can be eliminated by adopting mixed effect linear models. These drawbacks include (a) deficiencies in statistical power related to the problems posed by repeated observations, (b) the lack of a flexible method of dealing with"}
{"_id": "db249d07cea3efb8ef4456b616698d5d1242b075", "title": "Women's use of cosmetics: psychosocial correlates and consequences.", "text": "Synopsis Few systematic studies have examined individual differences in women's use of facial make-up or the possible psychosocial effects of such use. In the present investigation, the researchers developed several measures of the amount and the situational and temporal patterning of cosmetics use. Among forty-two female college students, differential use was associated with a number of selected personality variables-public self-consciousness, public body-consciousness, social anxiety, and various body-image factors. Through the imagery-induction of cosmetics use and non-use conditions, cosmetics users' self-evaluative responses were studied. In a variety of imagined situations, subjects reported being more self-confident and sociable when wearing as opposed to not wearing their customary cosmetics. Findings were discussed in the context of the role of cosmetics in self-image enhancement and social impression management."}
{"_id": "538bd7add21a89cec5484ddb0208be2ae972e93b", "title": "Neuroplasticity in the context of motor rehabilitation after stroke", "text": "Approximately one-third of patients with stroke exhibit persistent disability after the initial cerebrovascular episode, with motor impairments accounting for most poststroke disability. Exercise and training have long been used to restore motor function after stroke. Better training strategies and therapies to enhance the effects of these rehabilitative protocols are currently being developed for poststroke disability. The advancement of our understanding of the neuroplastic changes associated with poststroke motor impairment and the innate mechanisms of repair is crucial to this endeavor. Pharmaceutical, biological and electrophysiological treatments that augment neuroplasticity are being explored to further extend the boundaries of poststroke rehabilitation. Potential motor rehabilitation therapies, such as stem cell therapy, exogenous tissue engineering and brain\u2013computer interface technologies, could be integral in helping patients with stroke regain motor control. As the methods for providing motor rehabilitation change, the primary goals of poststroke rehabilitation will be driven by the activity and quality of life needs of individual patients. This Review aims to provide a focused overview of neuroplasticity associated with poststroke motor impairment, and the latest experimental interventions being developed to manipulate neuroplasticity to enhance motor rehabilitation."}
{"_id": "288b65df0153c42131b406f2afa532531d607d00", "title": "Learning personal + social latent factor model for social recommendation", "text": "Social recommendation, which aims to systematically leverage the social relationships between users as well as their past behaviors for automatic recommendation, attract much attention recently. The belief is that users linked with each other in social networks tend to share certain common interests or have similar tastes (homophily principle); such similarity is expected to help improve the recommendation accuracy and quality. There have been a few studies on social recommendations; however, they almost completely ignored the heterogeneity and diversity of the social relationship.\n In this paper, we develop a joint personal and social latent factor (PSLF) model for social recommendation. Specifically, it combines the state-of-the-art collaborative filtering and the social network modeling approaches for social recommendation. Especially, the PSLF extracts the social factor vectors for each user based on the state-of-the-art mixture membership stochastic blockmodel, which can explicitly express the varieties of the social relationship. To optimize the PSLF model, we develop a scalable expectation-maximization (EM) algorithm, which utilizes a novel approximate mean-field technique for fast expectation computation. We compare our approach with the latest social recommendation approaches on two real datasets, Flixter and Douban (both with large social networks). With similar training cost, our approach has shown a significant improvement in terms of prediction accuracy criteria over the existing approaches."}
{"_id": "698efa2bfd0afa2cd5d70de80c355fd720f9ce07", "title": "The Implications and Impacts of Web Services to Electronic Commerce Research and Practices", "text": "Web services refer to a family of technologies that can universally standardize the communication of applications in order to connect systems, business partners, and customers cost-effectively through the World Wide Web. Major software vendors such as IBM, Microsoft, SAP, SUN, and Oracle are all embracing Web services standards and are releasing new products or tools that are Web services enabled. Web services will ease the constraints of time, cost, and space for discovering, negotiating, and conducting e-business transactions. As a result, Web services will change the way businesses design their applications as services, integrate with other business entities, manage business process workflows, and conduct e-business transactions. The early adopters of Web services are showing promising results such as greater development productivity gains and easier and faster integration with trading partners. However, there are many issues worth studying regarding Web services in the context of e-commerce. This special issue of the JECR aims to encourage awareness and discussion of important issues and applications of Web Services that are related to electronic commerce from the organizational, economics, and technical perspectives. Research opportunities of Web services and e-commerce area are fruitful and important for both academics and practitioners. We wish that this introductory article can shed some light for researchers and practitioners to better understand important issues and future trends of Web services and e-business."}
{"_id": "c5ea23a32d7ab1f4650f49eddb3727f57f50b584", "title": "Social stories: mechanisms of effectiveness in increasing game play skills in children diagnosed with autism spectrum disorder using a pretest posttest repeated measures randomized control group design.", "text": "An increasing body of literature has indicated that social stories are an effective way to teach individuals diagnosed with autism appropriate social behavior. This study compared two formats of a social story targeting the improvement of social skills during game play using a pretest posttest repeated measures randomized control group design. A total of 45 children diagnosed with Autism Spectrum Disorder (ASD) ages 7-14 were randomly assigned to standard, directive, or control story conditions. Results demonstrated that the standard and directive story formats were equally as effective in eliciting, generalizing and maintaining the targeted social skills in participants who had prior game play experience and Verbal Comprehension Index (VCI) scores from the WISC-IV intelligence test in the borderline range or above."}
{"_id": "bdd610ef86a1d92b222f0aea39ec4c28e7355456", "title": "What is a robot companion - friend, assistant or butler?", "text": "The study presented in this paper explored people's perceptions and attitudes towards the idea of a future robot companion for the home. A human-centred approach was adopted using questionnaires and human-robot interaction trials to derive data from 28 adults. Results indicated that a large proportion of participants were in favour of a robot companion and saw the potential role as being an assistant, machine or servant. Few wanted a robot companion to be a friend. Household tasks were preferred to child/animal care tasks. Humanlike communication was desirable for a robot companion, whereas humanlike behaviour and appearance were less essential. Results are discussed in relation to future research directions for the development of robot companions."}
{"_id": "d4e32ea6ded673640b4d0fbd7c4d054394a5b697", "title": "Narrative Visualization: Sharing Insights into Complex Data", "text": "This paper is a reflection on the emerging genre of narrative visualization, a creative response to the need to share complex data engagingly with the public. In it, we explain how narrative visualization offers authors the opportunity to communicate more effectively with their audience by reproducing and sharing an experience of insight similar to their own. To do so, we propose a two part model, derived from previous literature, in which insight is understood as both an experience and also the product of that experience. We then discuss how the design of narrative visualization should be informed by attempts elsewhere to track the provenance of insights and share them in a collaborative setting. Finally, we present a future direction for research that includes using EEG technology to record neurological patterns during episodes of insight experience as the basis for evaluation."}
{"_id": "d91777099ffce5aa58efa427078e3fb5c1551cac", "title": "Realistic and Fast Cloud Rendering", "text": "Clouds are an important aspect of rendering outdoor scenes. Thi paper describes a cloud system that extends texture splatting on par ticles to model a dozen cloud types (e.g., stratus, cumulus congestus, cumulonimb us), an improvement over earlier systems that modeled only one type of cumulus. W e also achieve fast real-time rendering, even for scenes of dense overcast cove rage, which was a limitation for previous systems. We present a new shading model that uses artist-driven contr ols rather than a programmatic approach to approximate lighting. This is sui table when fine-grained control over the look-and-feel is necessary, and artistic r esources are available. We also introduce a way to simulate cloud formation and dissipa tion using texture"}
{"_id": "cd9be53c9b0a866e071acbd66171d8900d8d7bb2", "title": "FaceTube: predicting personality from facial expressions of emotion in online conversational video", "text": "The advances in automatic facial expression recognition make possible to mine and characterize large amounts of data, opening a wide research domain on behavioral understanding. In this paper, we leverage the use of a state-of-the-art facial expression recognition technology to characterize users of a popular type of online social video, conversational vlogs. First, we propose the use of several activity cues to characterize vloggers based on frame-by-frame estimates of facial expressions of emotion. Then, we present results for the task of automatically predicting vloggers' personality impressions using facial expressions and the Big-Five traits. Our results are promising, specially for the case of the Extraversion impression, and in addition our work poses interesting questions regarding the representation of multiple natural facial expressions occurring in conversational video."}
{"_id": "c7ee9c3391955301695b427e613d28aa6b44c5e1", "title": "Minutiae Extraction From Level 1 Features of Fingerprint", "text": "Fingerprint features can be divided into three major categories based on the granularity at which they are extracted: level 1, level 2, and level 3 features. Orientation field, ridge frequency field, and minutiae set are three fundamental components of fingerprint, where the orientation field and ridge frequency field are regarded as level 1 features and minutiae set as level 2 features. It is generally believed that level 1 features, especially orientation field, can be reconstructed from level 2 features, i.e., minutiae. However, it is still a question that if minutiae can be extracted from level 1 features. In this paper, we analyze the relations between level 1 and level 2 features using the frequency modulation (FM) model and propose an approach to extract minutiae from level 1 features (i.e., orientation field and frequency field). The proposed algorithm is evaluated on NIST SD27 and FVC2002 DB1 databases. The true detection rate (TDR) and false detection rate (FDR) of minutiae detection on NIST SD27 and FVC2002 DB1 are about 45% and 30% compared with manually marked minutiae, respectively, with level 1 features extracted at a block size of 16 pixels. When pixelwise orientation and frequency fields are available, TDR and FDR can reach 70% and 25%, respectively. With a smaller block size, the minutiae recovering accuracy can be even higher. Our quantitative and experimental results show the deep relationship between level 1 and level 2 features of a fingerprint."}
{"_id": "24e70cdd2838c0c1612a85d26edc7398458fc57a", "title": "Single image tree modeling", "text": "In this paper, we introduce a simple sketching method to generate a realistic 3D tree model from a single image. The user draws at least two strokes in the tree image: the first crown stroke around the tree crown to mark up the leaf region, the second branch stroke from the tree root to mark up the main trunk, and possibly few other branch strokes for refinement. The method automatically generates a 3D tree model including branches and leaves. Branches are synthesized by a growth engine from a small library of elementary subtrees that are pre-defined or built on the fly from the recovered visible branches. The visible branches are automatically traced from the drawn branch strokes according to image statistics on the strokes. Leaves are generated from the region bounded by the first crown stroke to complete the tree. We demonstrate our method on a variety of examples."}
{"_id": "69d603d7ecb4c66d6636d9698ec04598aecbc1e2", "title": "Effects of manual lymphatic drainage on breast cancer-related lymphedema: a systematic review and meta-analysis of randomized controlled trials", "text": "BACKGROUND\nLymphedema is a common complication of axillary dissection for breast cancer. We investigated whether manual lymphatic drainage (MLD) could prevent or manage limb edema in women after breast-cancer surgery.\n\n\nMETHODS\nWe performed a systematic review and meta-analysis of published randomized controlled trials (RCTs) to evaluate the effectiveness of MLD in the prevention and treatment of breast-cancer-related lymphedema. The PubMed, EMBASE, CINAHL, Physiotherapy Evidence Database (PEDro), SCOPUS, and Cochrane Central Register of Controlled Trials electronic databases were searched for articles on MLD published before December 2012, with no language restrictions. The primary outcome for prevention was the incidence of postoperative lymphedema. The outcome for management of lymphedema was a reduction in edema volume.\n\n\nRESULTS\nIn total, 10 RCTs with 566 patients were identified. Two studies evaluating the preventive outcome of MLD found no significant difference in the incidence of lymphedema between the MLD and standard treatment groups, with a risk ratio of 0.63 and a 95% confidence interval (CI) of 0.14 to 2.82. Seven studies assessed the reduction in arm volume, and found no significant difference between the MLD and standard treatment groups, with a weighted mean difference of 75.12 (95% CI, -9.34 to 159.58).\n\n\nCONCLUSIONS\nThe current evidence from RCTs does not support the use of MLD in preventing or treating lymphedema. However, clinical and statistical inconsistencies between the various studies confounded our evaluation of the effect of MLD on breast-cancer-related lymphedema."}
{"_id": "fb69ca4ce65a5c796f6247559205322e9796f2fc", "title": "CREDAL: Towards Locating a Memory Corruption Vulnerability with Your Core Dump", "text": "After a program has crashed and terminated abnormally, it typically leaves behind a snapshot of its crashing state in the form of a core dump. While a core dump carries a large amount of information, which has long been used for software debugging, it barely serves as informative debugging aids in locating software faults, particularly memory corruption vulnerabilities. A memory corruption vulnerability is a special type of software faults that an attacker can exploit to manipulate the content at a certain memory. As such, a core dump may contain a certain amount of corrupted data, which increases the difficulty in identifying useful debugging information (e.g. , a crash point and stack traces). Without a proper mechanism to deal with this problem, a core dump can be practically useless for software failure diagnosis. In this work, we develop CREDAL, an automatic tool that employs the source code of a crashing program to enhance core dump analysis and turns a core dump to an informative aid in tracking down memory corruption vulnerabilities. Specifically, CREDAL systematically analyzes a core dump potentially corrupted and identifies the crash point and stack frames. For a core dump carrying corrupted data, it goes beyond the crash point and stack trace. In particular, CREDAL further pinpoints the variables holding corrupted data using the source code of the crashing program along with the stack frames. To assist software developers (or security analysts) in tracking down a memory corruption vulnerability, CREDAL also performs analysis and highlights the code fragments corresponding to data corruption.\n To demonstrate the utility of CREDAL, we use it to analyze 80 crashes corresponding to 73 memory corruption vulnerabilities archived in Offensive Security Exploit Database. We show that, CREDAL can accurately pinpoint the crash point and (fully or partially) restore a stack trace even though a crashing program stack carries corrupted data. In addition, we demonstrate CREDAL can potentially reduce the manual effort of finding the code fragment that is likely to contain memory corruption vulnerabilities."}
{"_id": "20b09b04a135b0b9def2571ba84d6c563d4dd305", "title": "Multiple PRP injections are more effective than single injections and hyaluronic acid in knees with early osteoarthritis: a randomized, double-blind, placebo-controlled trial", "text": "To compare the effectiveness of intraarticular (IA) multiple and single platelet-rich plasma (PRP) injections as well as hyaluronic acid (HA) injections in different stages of osteoarthritis (OA) of the knee. A total of 162 patients with different stages of knee OA were randomly divided into four groups receiving 3 IA doses of PRP, one dose of PRP, one dose of HA or a saline injection (control). Then, each group was subdivided into two groups: early OA (Kellgren\u2013Lawrence grade 0 with cartilage degeneration or grade I\u2013III) and advanced OA (Kellgren\u2013Lawrence grade IV). The patients were evaluated before the injection and at the 6-month follow-ups using the EuroQol visual analogue scale (EQ-VAS) and International Knee Documentation Committee (IKDC) subjective scores. Adverse events and patient satisfaction were recorded. There was a statistically significant improvement in the IKDC and EQ-VAS scores in all the treatment groups compared with the control group. The knee scores of patients treated with three PRP injections were significantly better than those patients of the other groups. There was no significant difference in the scores of patients injected with one dose of PRP or HA. In the early OA subgroups, significantly better clinical results were achieved in the patients treated with three PRP injections, but there was no significant difference in the clinical results of patients with advanced OA among the treatment groups. The clinical results of this study suggest IA PRP and HA treatment for all stages of knee OA. For patients with early OA, multiple (3) PRP injections are useful in achieving better clinical results. For patients with advanced OA, multiple injections do not significantly improve the results of patients in any group. I."}
{"_id": "45c26fb5050664ea07d53932f8c0fefa7ed3c061", "title": "OpenFst: A General and Efficient Weighted Finite-State Transducer Library", "text": "We describe OpenFst, an open-source library for weighted finite-state transducers (WFSTs). OpenFst consists of a C++ template library with efficient WFST representations and over twenty-five operations for constructing, combining, optimizing, and searching them. At the shell-command level, there are corresponding transducer file representations and programs that operate on them. OpenFst is designed to be both very efficient in time and space and to scale to very large problems. This library has key applications speech, image, and natural language processing, pattern and string matching, and machine learning. We give an overview of the library, examples of its use, details of its design that allow customizing the labels, states, and weights and the lazy evaluation of many of its operations. Further information and a download of the OpenFst library can be obtained from http://www.openfst.org."}
{"_id": "ff053d2dc355d56de9b9958ecee3853ba7a4acbc", "title": "Dynamical Approximation by Hierarchical Tucker and Tensor-Train Tensors", "text": "We extend results on the dynamical low-rank approximation for the treatment of time-dependent matrices and tensors (Koch and Lubich; see [SIAM J. Matrix Anal. Appl., 29 (2007), pp. 434\u2013454], [SIAM J. Matrix Anal. Appl., 31 (2010), pp. 2360\u20132375]) to the recently proposed hierarchical Tucker (HT) tensor format (Hackbusch and K\u00fchn; see [J. Fourier Anal. Appl., 15 (2009), pp. 706\u2013722]) and the tensor train (TT) format (Oseledets; see [SIAM J. Sci. Comput., 33 (2011), pp. 2295\u20132317]), which are closely related to tensor decomposition methods used in quantum physics and chemistry. In this dynamical approximation approach, the time derivative of the tensor to be approximated is projected onto the time-dependent tangent space of the approximation manifold along the solution trajectory. This approach can be used to approximate the solutions to tensor differential equations in the HT or TT format and to compute updates in optimization algorithms within these reduced tensor formats. By deriving and analyzing the tangent space projector for the manifold of HT/TT tensors of fixed rank, we obtain curvature estimates, which allow us to obtain quasi-best approximation properties for the dynamical approximation, showing that the prospects and limitations of the ansatz are similar to those of the dynamical low rank approximation for matrices. Our results are exemplified by numerical experiments."}
{"_id": "a571746b0ee622caa7d69b46202d251de93f35e9", "title": "Statistics and Social Network of YouTube Videos", "text": "YouTube has become the most successful Internet website providing a new generation of short video sharing service since its establishment in early 2005. YouTube has a great impact on Internet traffic nowadays, yet itself is suffering from a severe problem of scalability. Therefore, understanding the characteristics of YouTube and similar sites is essential to network traffic engineering and to their sustainable development. To this end, we have crawled the YouTube site for four months, collecting more than 3 million YouTube videos' data. In this paper, we present a systematic and in-depth measurement study on the statistics of YouTube videos. We have found that YouTube videos have noticeably different statistics compared to traditional streaming videos, ranging from length and access pattern, to their growth trend and active life span. We investigate the social networking in YouTube videos, as this is a key driving force toward its success. In particular, we find that the links to related videos generated by uploaders' choices have clear small-world characteristics. This indicates that the videos have strong correlations with each other, and creates opportunities for developing novel techniques to enhance the service quality."}
{"_id": "59923c9dee2a97e0403460ac9a49021930e9a5d2", "title": "No-reference image and video quality assessment: a classification and review of recent approaches", "text": "The field of perceptual quality assessment has gone through a wide range of developments and it is still growing. In particular, the area of no-reference (NR) image and video quality assessment has progressed rapidly during the last decade. In this article, we present a classification and review of latest published research work in the area of NR image and video quality assessment. The NR methods of visual quality assessment considered for review are structured into categories and subcategories based on the types of methodologies used for the underlying processing employed for quality estimation. Overall, the classification has been done into three categories, namely, pixel-based methods, bitstream-based methods, and hybrid methods of the aforementioned two categories. We believe that the review presented in this article will be helpful for practitioners as well as for researchers to keep abreast of the recent developments in the area of NR image and video quality assessment. This article can be used for various purposes such as gaining a structured overview of the field and to carry out performance comparisons for the state-of-the-art"}
{"_id": "4c877c086e5cb57e1f5c16fc662e637419c5a7df", "title": "Improving Machine Learning Approaches to Coreference Resolution", "text": "Wepresentanounphrasecoreferencesystem that extends the work of Soon et al. (2001) and, to our knowledge, producesthebestresultsto dateontheMUC6 andMUC-7 coreferenceresolutiondata sets\u2014 F-measuresof 70.4 and63.4, respecti vely. Improvementsarisefrom two sources: extra-linguistic changesto the learningframework anda large-scaleexpansionof thefeaturesetto includemore sophisticatedlinguisticknowledge."}
{"_id": "36542e488f9009b1e6d61c729bd2c8f0a794f75a", "title": "Genital trauma and vaginal bleeding: is it a lapse of time issue? A case report of a prepubertal girl and review of the literature", "text": "Child victims of sexual abuse may present with physical findings whose interpretation requires the most exhaustive evaluation and an accurate collection of a detailed history. Genital bleeding is usually considered as an acute sign, related to a trauma that occurred shortly before its appearance. We report a case of a 34-month-old child who was referred to the emergency room with a significant vaginal hemorrhage, originating from a wide laceration of the posterior fourchette, and a negative history for accidental trauma. The characteristics of the lesion, compared to the temporal evolution of the healing process, and the witnesses\u2019 depositions led us to assume that the time elapsed between the abusive event and the physical examination was longer in respect to what had appeared at the first sight. The judicial reconstruction of the events confirmed our assumption, allowing the charge of the right abuse perpetrator. As the literature regarding this eventuality is very poor, we report this case to stress the importance for physicians to consider that an active bleeding may be the manifestation of a trauma that occurred very long before."}
{"_id": "d6ce70f1b23ec7ba0f15a94334c13ac5ff27f945", "title": "Novel Compact Tri-Band Bandpass Filter With Controllable Bandwidths", "text": "Novel compact tri-band bandpass filter (BPF) with controllable bandwidths is presented in this letter. The tri-band performance is realized by using three sets of resonators, i.e., two sets of short-stub loaded dual-mode resonators, which are designed for the first and second passband, respectively, and half-wavelength resonators for the third passband. The three passband frequencies and bandwidths could be independently tuned and designed. The half-wavelength resonators are embedded between the two dual-mode resonators and serve as part of feed lines for the first and second passband, thus compact size is realized. The newly designed tri-band BPF is verified by circuit implementation and good agreement between the simulated and measured results can be observed."}
{"_id": "bfb5f142d0eb129fa66616685a84ce055ad8f071", "title": "Snap, Eat, RepEat: A Food Recognition Engine for Dietary Logging", "text": "We present a system to assist users in dietary logging habits, which performs food recognition from pictures snapped on their phone in two different scenarios. In the first scenario, called \"Food in context\", we exploit the GPS information of a user to determine which restaurant they are having a meal at, therefore restricting the categories to recognize to the set of items in the menu. Such context allows us to also report precise calories information to the user about their meal, since restaurant chains tend to standardize portions and provide the dietary information of each meal. In the second scenario, called \"Foods in the wild\" we try to recognize a cooked meal from a picture which could be snapped anywhere. We perform extensive experiments on food recognition on both scenarios, demonstrating the feasibility of our approach at scale, on a newly introduced dataset with 105K images for 500 food categories."}
{"_id": "0c79f2dcdd440bc43f8a920e658a369f16b8e56b", "title": "A Tandem Algorithm for Singing Pitch Extraction and Voice Separation From Music Accompaniment", "text": "Singing pitch estimation and singing voice separation are challenging due to the presence of music accompaniments that are often nonstationary and harmonic. Inspired by computational auditory scene analysis (CASA), this paper investigates a tandem algorithm that estimates the singing pitch and separates the singing voice jointly and iteratively. Rough pitches are first estimated and then used to separate the target singer by considering harmonicity and temporal continuity. The separated singing voice and estimated pitches are used to improve each other iteratively. To enhance the performance of the tandem algorithm for dealing with musical recordings, we propose a trend estimation algorithm to detect the pitch ranges of a singing voice in each time frame. The detected trend substantially reduces the difficulty of singing pitch detection by removing a large number of wrong pitch candidates either produced by musical instruments or the overtones of the singing voice. Systematic evaluation shows that the tandem algorithm outperforms previous systems for pitch extraction and singing voice separation."}
{"_id": "a77adae70271dd69c73868c3523d7373df53a467", "title": "A spaceborne telemetry loaded bifilar helical antenna for LEO satellites", "text": "In this paper we describe the design, manufacturing and measurement of an antenna with an isoflux type radiation pattern used in the space segment TT&C subsystem. In particular, the antenna corresponds to the flight model of mission SAC-D/AQUARIUS, which is a LEO earth observation satellite. The chosen solution is a loaded bifilar helical antenna, which presents a higher gain than a typical hemispherical antenna but which doesn't require the use of a beam forming network like the classical quadrifilar Kilgus type antenna. This reduces the number of required parts, and due to its loading, smaller dimension and higher structural stiffness are achieved."}
{"_id": "1f9ede76dbbd6caf7e3877918fae0d421c6f180c", "title": "Fast algorithms for mining association rules", "text": ""}
{"_id": "3a932716ed323b247a828bd0fd8ae9b2ee0197b2", "title": "Sampling Large Databases for Association Rules", "text": "Discovery of association rules is an important database mining problem. Current algorithms for nding association rules require several passes over the analyzed database, and obviously the role of I/O overhead is very signi cant for very large databases. We present new algorithms that reduce the database activity considerably. The idea is to pick a random sample, to nd using this sample all association rules that probably hold in the whole database, and then to verify the results with the rest of the database. The algorithms thus produce exact association rules, not approximations based on a sample. The approach is, however, probabilistic, and in those rare cases where our sampling method does not produce all association rules, the missing rules can be found in a second pass. Our experiments show that the proposed algorithms can nd association rules very e ciently in only one database pass."}
{"_id": "780e2631adae2fb3fa43965bdeddc0f3b885e20d", "title": "Mining Quantitative Association Rules in Large Relational Tables", "text": "We introduce the problem of mining association rules in large relational tables containing both quantitative and categorical attributes. An example of such an association might be \"10% of married people between age 50 and 60 have at least 2 cars\". We deal with quantitative attributes by fine-partitioning the values of the attribute and then combining adjacent partitions as necessary. We introduce measures of partial completeness which quantify the information lost due to partitioning. A direct application of this technique can generate too many similar rules. We tackle this problem by using a \"greater-than-expected-value\" interest measure to identify the interesting rules in the output. We give an algorithm for mining such quantitative association rules. Finally, we describe the results of using this approach on a real-life dataset."}
{"_id": "cadc5cd05bab1a8bdf9123a1cb1496e5c350cd20", "title": "Rethinking Education in the Age of Technology", "text": "By reading, you can know the knowledge and things more, not only about what you get from people to people. Book will be more trusted. As this rethinking education in the age of technology review, it will really give you the good idea to be successful. It is not only for you to be success in certain life you can be successful in everything. The success can be started by knowing the basic knowledge and do actions."}
{"_id": "5781897f9f0441bf50380fa8b1489027c2731e3f", "title": "Pathophysiological distortions in time perception and timed performance.", "text": "Distortions in time perception and timed performance are presented by a number of different neurological and psychiatric conditions (e.g. Parkinson's disease, schizophrenia, attention deficit hyperactivity disorder and autism). As a consequence, the primary focus of this review is on factors that define or produce systematic changes in the attention, clock, memory and decision stages of temporal processing as originally defined by Scalar Expectancy Theory. These findings are used to evaluate the Striatal Beat Frequency Theory, which is a neurobiological model of interval timing based upon the coincidence detection of oscillatory processes in corticostriatal circuits that can be mapped onto the stages of information processing proposed by Scalar Timing Theory."}
{"_id": "2dbe2147f046a40451a25ed2299b00eef76afbf7", "title": "User Traffic Profile for Traffic Reduction and Effective Bot C&C Detection", "text": "Bots are malicious software components used for generating spams, launching denial of service attacks, phishing, identity theft and information exfiltration and such other illegal activities. Bot detection is an area of active research in recent times. Here we propose a bot detection mechanism for a single host. A user traffic profile is used to filter out normal traffic generated by the host. The remaining suspicious traffic is subject to detailed analysis. The nature of detailed analysis is derived from a characterization of bot traffic. The detection system is tested using two real world bots. Performance evaluation results show that the proposed system achieves a high detection rate (100%) and a low false positive rate (0\u20138%). The traffic filtering yielded an average reduction in traffic by 70%."}
{"_id": "24f7190e6ee9893ea8531c4ce21ba5e5c3514395", "title": "PreCog: Improving Crowdsourced Data Quality Before Acquisition", "text": "Quality control in crowdsourcing systems is crucial. It is typically done after data collection, often using additional crowdsourced tasks to assess and improve the quality. These post-hoc methods can easily add cost and latency to the acquisition process\u2014particularly if collecting high-quality data is important. In this paper, we argue for pre-hoc interface optimizations based on feedback that helps workers improve data quality before it is submitted and is well suited to complement post-hoc techniques. We propose the Precog system that explicitly supports such interface optimizations for common integrity constraints as well as more ambiguous text acquisition tasks where quality is ill-defined. We then develop the Segment-Predict-Explain pattern for detecting low-quality text segments and generating prescriptive explanations to help the worker improve their text input. Our unique combination of segmentation and prescriptive explanation are necessary for Precog to collect 2\u00d7 more high-quality text data than non-Precog approaches on two real domains."}
{"_id": "f5d9c0182d8578f7c0a99ad9bdd4ff62e5f7c68d", "title": "ZenCrowd: leveraging probabilistic reasoning and crowdsourcing techniques for large-scale entity linking", "text": "We tackle the problem of entity linking for large collections of online pages; Our system, ZenCrowd, identifies entities from natural language text using state of the art techniques and automatically connects them to the Linked Open Data cloud. We show how one can take advantage of human intelligence to improve the quality of the links by dynamically generating micro-tasks on an online crowdsourcing platform. We develop a probabilistic framework to make sensible decisions about candidate links and to identify unreliable human workers. We evaluate ZenCrowd in a real deployment and show how a combination of both probabilistic reasoning and crowdsourcing techniques can significantly improve the quality of the links, while limiting the amount of work performed by the crowd."}
{"_id": "b7fb576498a0bde9a0ebd1f79f51b034a4b83bc9", "title": "Nonspeech oral motor treatment issues related to children with developmental speech sound disorders.", "text": "PURPOSE\nThis article examines nonspeech oral motor treatments (NSOMTs) in the population of clients with developmental speech sound disorders. NSOMTs are a collection of nonspeech methods and procedures that claim to influence tongue, lip, and jaw resting postures; increase strength; improve muscle tone; facilitate range of motion; and develop muscle control. In the case of developmental speech sound disorders, NSOMTs are employed before or simultaneous with actual speech production treatment.\n\n\nMETHOD\nFirst, NSOMTs are defined for the reader, and there is a discussion of NSOMTs under the categories of active muscle exercise, passive muscle exercise, and sensory stimulation. Second, different theories underlying NSOMTs along with the implications of the theories are discussed. Finally, a review of pertinent investigations is presented.\n\n\nRESULTS\nThe application of NSOMTs is questionable due to a number of reservations that include (a) the implied cause of developmental speech sound disorders, (b) neurophysiologic differences between the limbs and oral musculature, (c) the development of new theories of movement and movement control, and (d) the paucity of research literature concerning NSOMTs.\n\n\nCLINICAL IMPLICATION\nThere is no substantive evidence to support NSOMTs as interventions for children with developmental speech sound disorders."}
{"_id": "27aefc35f8c73dc64c7f3a68019f60f13aff764a", "title": "Precise Flow-Insensitive May-Alias Analysis is NP-Hard", "text": "Determining aliases is one of the foundamental static analysis problems, in part because the precision with which this problem is solved can affect the precision of other analyses such as live variables, available expressions, and constant propagation. Previous work has investigated the complexity of flow-sensitive alias analysis. In this article we show that precise flow-insensitive may-alias analysis is NP-hard given arbitrary levels of pointers and arbitrary pointer dereferencing."}
{"_id": "8942bee3ec522349892d59b31dbe512282cf2619", "title": "Automated Generation of Event-Oriented Exploits in Android Hybrid Apps", "text": "Recently more and more Android apps integrate the embedded browser, known as \u201cWebView\u201d, to render web pages and run JavaScript code without leaving these apps. WebView provides a powerful feature that allows event handlers defined in the native context (i.e., Java in Android) to handle web events that occur in WebView. However, as shown in prior work, this feature suffers from remote attacks, which we generalize as EventOriented Exploit (EOE) in this paper, such that adversaries may remotely access local critical functionalities through event handlers in WebView without any permission or authentication. In this paper, we propose a novel approach, EOEDroid, which can automatically vet event handlers in a given hybrid app using selective symbolic execution and static analysis. If a vulnerability is found, EOEDroid also automatically generates exploit code to help developers and analysts verify the vulnerability. To support exploit code generation, we also systematically study web events, event handlers and their trigger constraints. We evaluated our approach on 3,652 most popular apps. The result showed that our approach found 97 total vulnerabilities in 58 apps, including 2 cross-frame DOM manipulation, 53 phishing, 30 sensitive information leakage, 1 local resources access, and 11 Intent abuse vulnerabilities. We also found a potential backdoor in a high profile app that could be used to steal users\u2019 sensitive information, such as IMEI. Even though developers attempted to close it, EOEDroid found that adversaries were still able to exploit it by triggering two events together and feeding event handlers with well designed input."}
{"_id": "e8b16e99dd0b01bd897e11d58ecf4f8085755335", "title": "Encountering stronger password requirements: user attitudes and behaviors", "text": "Text-based passwords are still the most commonly used authentication mechanism in information systems. We took advantage of a unique opportunity presented by a significant change in the Carnegie Mellon University (CMU) computing services password policy that required users to change their passwords. Through our survey of 470 CMU computer users, we collected data about behaviors and practices related to the use and creation of passwords. We also captured users' opinions about the new, stronger policy requirements. Our analysis shows that, although most of the users were annoyed by the need to create a complex password, they believe that they are now more secure. Furthermore, we perform an entropy analysis and discuss how our findings relate to NIST recommendations for creating a password policy. We also examine how users answer specific questions related to their passwords. Our results can be helpful in designing better password policies that consider not only technical aspects of specific policy rules, but also users' behavior in response to those rules."}
{"_id": "5f4e608dab5eb74619ef3a4d1c2ae50b8a8c0e28", "title": "A multiview 3D modeling system based on stereo vision techniques", "text": "This paper introduces a stereo vision system to automatically generate 3D models of real objects. 3D model generation is based on the merging of multiview range images obtained from a digital stereo camera. Stereo images obtained from the camera are rectified, and a correlation-based stereo matching technique reconstructs range images from them. A turntable stage is also employed to obtain multiple range images of the objects. To register range images into a common coordinate system automatically, we introduce and calibrate a turntable coordinate system with respect to the camera coordinate system. After the registration of multiview range images, a 3D model is reconstructed using a volumetric integration technique. Error analysis on turntable calibration and 3D model reconstruction shows the accuracy of our 3D modeling system."}
{"_id": "27935e98a17a6b741f45ad48ee92c49a4397feba", "title": "On the compensation of dead time and zero-current crossing for a PWM-inverter-controlled AC servo drive", "text": "Conventional dead-time compensation methods, for pulsewidth-modulated (PWM) inverters, improve the current output waveforms; however, the zero-current crossing effect is still apparent. This letter proposes a new method, based on angle domain repetitive control, to reduce the distortions in the PWM inverters output waveforms caused by the dead time and the zero-crossing problem."}
{"_id": "18722bf43d2f9d42aaf18faca8b29c84a67eca44", "title": "Design 13.56MHz 10 kW resonant inverter using GaN HEMT for wireless power transfer systems", "text": "This paper presents the design process of a 13.56 MHz 10 kW and high efficiency inverter for wireless power transfer systems. The new 650V E-mode GaN is used in multiphase resonant topology to obtain high output power. The optimized efficiency design and the driver design are presented in detail. The circuit simulation results show that the inverter achieves 10 kW output power with the efficiency of over 98%. The experiment system is tested at 4 kW output power with the efficiency of 96.5%."}
{"_id": "9e793e5038988d7bef8fe7350d683419f5b822e2", "title": "Correspondence-free pose estimation for 3D objects from noisy depth data", "text": "Estimating the pose of objects from depth data is a problem of considerable practical importance for many vision applications. This paper presents an approach for accurate and efficient 3D pose estimation from noisy 2.5D depth images obtained from a consumer depth sensor. Initialized with a coarsely accurate pose, the proposed approach applies a hypothesize-and-test scheme that combines stochastic optimization and graphics-based rendering to refine the supplied initial pose, so that it accurately accounts for a sensed depth image. Pose refinement employs particle swarm optimization to minimize an objective function that quantifies the misalignment between the acquired depth image and a rendered one that is synthesized from a hypothesized pose with the aid of an object mesh model. No explicit correspondences between the depth data and the model need to be established, whereas pose hypothesis rendering and objective function evaluation are efficiently performed on the GPU. Extensive experimental results demonstrate the superior performance of the proposed approach compared to the ICP algorithm, which is typically used for pose refinement in depth images. Furthermore, the experiments indicate the graceful degradation of its performance to limited computational resources and its robustness to noisy and reduced polygon count models, attesting its suitability for use with automatically scanned object models and common graphics hardware."}
{"_id": "93714f49903be580f3bc323d198401661ac2f096", "title": "Techniques for the Automated Analysis of Musical Audio", "text": "This thesis presents work on automated analysis techniques for musical audio. A complete analysis of the content within an audio waveform is termed \u2018transcription\u2019 and can be used for intelligent, informed processing of the signal in a variety of applications. However, full transcription is beyond the ability of current methods and hence this thesis concerns subtasks of whole problem. A major part of musical analysis is the extraction of signal information from a time-frequency representation, often the short time Fourier transform or spectrogram. The \u2018reassignment\u2019 method is a technique for improving the clarity of these representations. It is comprehensively reviewed and its relationship to a number of classical instantaneous frequency measures is explicitly stated. Performance of reassignment as a sinusoidal frequency estimator is then assessed quantitatively for the first time. This is achieved in the framework of the Cram\u00e9r-Rao bound. Reassignment is shown to be inferior to some other estimators for simple synthetic signals but achieves comparable results for real musical examples. Several extensions and uses of the reassignment method are then described. These include the derivation of reassignment measures extracted from the amplitude of the spectrogram, rather than the traditional phase-based method and the use of reassignment measures for signal classification. The second major area of musical analysis investigated in this thesis is beat tracking, where the aim is to extract both the tempo implied in the music and also its phase which is analagous to beat location. Three models for the beat in general music examples are described. These are cast in a state-space formulation within the Bayesian paradigm, which allows the use of Monte Carlo methods for inference of the model parameters. The first two models use pre-extracted note onsets and model tempo as either a linear Gaussian process or as Brownian motion. The third model also includes on-line detection of onsets, thus adding an extra layer of complexity. Sequential Monte Carlo algorithms, termed particle filters, are then used for the estimation of the data. The systems are tested on an extensive database, nearly three and a half hours in length and consisting of various styles of music. The results exceed the current published state of the art. The research presented here could form the early stages of a full transcription system, a proposal for which is also expounded. This would use a flow of contextual information from simpler, more global structures to aid the inference of more complex and detailed processes. The global structures present in the music (such as style, structure, tempo, etc.) still have their own uses, making this scheme instantly applicable to real problems."}
{"_id": "3fe910b1360a77f50f73c2e82e654b6028072826", "title": "A user's guide to tabu search", "text": "We describe the main features of tabu search, emphasizing a perspective for guiding a user to understand basic implementation principles for solving combinatorial or nonlinear problems. We also identify recent developments and extensions that have contributed to increasing the efficiency of the method. One of the useful aspects of tabu search is the ability to adapt a rudimentary prototype implementation to encompass additional model elements, such as new types of constraints and objective functions. Similarly, the method itself can be evolved to varying levels of sophistication. We provide several examples of discrete optimization problems to illustrate the strategic concerns of tabu search, and to show how they may be exploited in various contexts. Our presentation is motivated by the emergence of an extensive literature of computational results, which demonstrates that a well-tuned implementation makes it possible to obtain solutions of high quality for difficult problems, yielding outcomes in some settings that have not been matched by other known techniques."}
{"_id": "2accf0e9bc488a689b8b577738ec2138b210240b", "title": "A simple hybrid 3-level buck-boost DC-DC converter with efficient PWM regulation scheme", "text": "This paper describes a 3-level buck-boost converter with an output voltage range up to double the input voltage. The proposed converter can be used for applications that require stepping up/down the input voltage depending on the operating conditions like solar power harvesting. The converter has a hybrid structure between inductor-based converters and capacitor-based converters by utilizing a one flying capacitor and an inductor in the charge transfer process. The converter features smaller inductor size and current ripples as compared to the traditional buck-boost inductor-based converter while offering a high efficiency as compared to SC converters making it suitable for efficient on-chip integration. A prototype circuit has been designed on a 65nm technology using a 5-nH parasitic bondwire inductor. Simulation results show that the proposed converter features up to 20% improvement in the efficiency over a conventional 1:2 SC converter for a wide range of output voltage values."}
{"_id": "8187d38b46bae647e6c3e24abed34220f81134a4", "title": "Machine Learning and Inference Laboratory An Optimized Design of Finned-Tube Evaporators Using the Learnable Evolution Model", "text": "Optimizing the refrigerant circuitry for a finned-tube evaporator is a daunting task for traditional exhaustive search techniques due to the extremely large number of circuitry possibilities. For this reason, more intelligent search techniques are needed. This paper presents and evaluates a novel optimization system, called ISHED1 (Intelligent System for Heat Exchanger Design). This system uses a recently developed non-Darwinian evolutionary computation method to seek evaporator circuit designs that maximize the capacity of the evaporator under given technical and environmental constraints. Circuitries were developed for an evaporator with three depth rows of 12 tubes each, based on optimizing the performance with uniform and non-uniform airflow profiles. ISHED1 demonstrated the capability to design an optimized circuitry for a non-uniform air distribution so the capacity showed no degradation over the traditional balanced circuitry design working with a uniform airflow."}
{"_id": "f995daea095d8a4951ed6af092b2f4612f2d02bd", "title": "Developing a retrieval based diagnostic aid for automated melanoma recognition of dermoscopic images", "text": "Skin cancer is one of the most frequent cancers among human beings. Whereas, malignant melanoma is the most aggressive and deadly type of skin cancer, and its incidence has been quickly increasing over the last years. The detection of the malignant melanoma in its early stages with dermoscopic images reduces the mortality considerably, hence this a crucial issue for the dermatologists. However, their interpretation is time consuming and subjective, even for trained dermatologists. The current computer-aided diagnosis (CAD) systems are mainly noninteractive in nature and their prediction represents just a cue for the dermatologist, as the final decision regarding the likelihood of the presence of a melanoma is left exclusively to him/her. Recently, developing CAD schemes that use image retrieval approach to search for the clinically relevant and visually similar lesions has been attracting research interest. Although preliminary studies have suggested that using retrieval might improve dermatologists' performance and/or increase their confidence in the decision making, this technology is still in the early development stage with lack of benchmark evaluation in ground-truth datasets to compare retrieval effectiveness. A CAD system based on both classification and retrieval would be more effective and robust. This work is focusing on by addressing the various issues related to the development of such an integrated and interactive CAD system by performing automatic lesion segmentation with an adaptive thresholding and region growing approach, extracting invariant features from lesions and classifying and retrieving those using Extreme Learning Machines (ELM) and a similarity fusion approach. Finally, various methods are evaluated with promising results in a benchmark dataset of International Skin Imaging Collaboration (ISIC)."}
{"_id": "7bdf73efe3646b49fce5364de23379ab575c4823", "title": "A Framework for Using Hypothesis-Driven Approaches to Support Data-Driven Learning Analytics in Measuring Computational Thinking in Block-Based Programming Environments", "text": "Systematic endeavors to take computer science (CS) and computational thinking (CT) to scale in middle and high school classrooms are underway with curricula that emphasize the enactment of authentic CT skills, especially in the context of programming in block-based programming environments. There is, therefore, a growing need to measure students\u2019 learning of CT in the context of programming and also support all learners through this process of learning computational problem solving. The goal of this research is to explore hypothesis-driven approaches that can be combined with data-driven ones to better interpret student actions and processes in log data captured from block-based programming environments with the goal of measuring and assessing students\u2019 CT skills. Informed by past literature and based on our empirical work examining a dataset from the use of the Fairy Assessment in the Alice programming environment in middle schools, we present a framework that formalizes a process where a hypothesis-driven approach informed by Evidence-Centered Design effectively complements data-driven learning analytics in interpreting students\u2019 programming process and assessing CT in block-based programming environments. We apply the framework to the design of Alice tasks for high school CS to be used for measuring CT during programming."}
{"_id": "64f75a03e5146b1fd0a86751aaec44bd2d47ffd4", "title": "Perception on the Usage of Mobile Assisted Language Learning ( MALL ) in English as a Second Language ( ESL ) Learning among Vocational College Students", "text": "Mobile phones are viewed as significant aids to English language learning. Numerous studies have been conducted in different contexts on mobile assisted language learning (MALL) that indicate the benefits of mobile devices for English language learning. Nevertheless, few studies on MALL had been conducted on among vocational college students in Malaysia. Therefore, this study was aimed to investigate the perception on the usage of MALL in English as a Second Language (ESL) among private vocational college students. Data were collected from a survey questionnaire adapted from Technology Acceptance Model (TAM). The results show the respondents have positive perception on the usage of MALL. Majority of the respondents showed overall agreement on both constructs perceived usefulness (PU) and perceived ease of use (PEoU) of MALL. They believed the usage of MALL will enhance the teaching and learning process. This evidence of acceptance has implication for educators and curriculum designers to exploit the mobile phone for autonomous and interactive ESL learning beyond the classroom context. It is hoped that MALL will be used as one of the teaching aids that could assist educators to teach English as a Second Language (ESL) more effectively."}
{"_id": "524a3b2af0d5ba305701adbcc84dda627dd2869b", "title": "A probabilistic fuzzy logic system for modeling and control", "text": "In this paper, a probabilistic fuzzy logic system (PFLS) is proposed for the modeling and control problems. Similar to the ordinary fuzzy logic system (FLS), the PFLS consists of the fuzzification, inference engine and defuzzification operation to process the fuzzy information. Different to the FLS, it uses the probabilistic modeling method to improve the stochastic modeling capability. By using a three-dimensional membership function (MF), the PFLS is able to handle the effect of random noise and stochastic uncertainties existing in the process. A unique defuzzification method is proposed to simplify the complex operation. Finally, the proposed PFLS is applied to a function approximation problem and a robotic system. It shows a better performance than an ordinary FLS in stochastic circumstance."}
{"_id": "30a82a63a339c1e69aac36b23900544fe9ec97bb", "title": "CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms", "text": "Cloud computing is a recent advancement wherein IT infrastructure and applications are provided as \u201cservices\u201d to endusers under a usage-based payment model. They can leverage virtualized services even on the fly based on requirements (workload patterns and QoS) varying with time. The application services hosted under Cloud computing model have complex provisioning, composition, configuration, and deployment requirements. Evaluating the performance of Cloud provisioning policies, application workload models, and resources performance models in a repeatable manner under varying system and user configurations and requirements is difficult to achieve. To overcome this challenge, we propose CloudSim: an extensible simulation toolkit that enables modeling and simulation of Cloud computing systems and application provisioning environments. The CloudSim toolkit supports both system and behaviour modeling of Cloud system components such as data centers, virtual machines (VMs) and resource provisioning policies. It implements generic application provisioning techniques that can be extended with ease and limited efforts. Currently, it supports modeling and simulation of Cloud computing environments consisting of both single and inter-networked clouds (federation of clouds). Moreover, it exposes custom interfaces for implementing policies and provisioning techniques for allocation of VMs under inter-networked Cloud computing scenarios. Several researchers from organisations such as HP Labs in USA are using CloudSim in their investigation on Cloud resource provisioning and energy-efficient management of data center resources. The usefulness of CloudSim is demonstrated by a case study involving dynamic provisioning of application services in hybrid federated clouds environment. The result of this case study proves that the federated Cloud computing model significantly improves the application QoS requirements under fluctuating resource and service demand patterns."}
{"_id": "76b7c03feffec5969d264c7e3c564e878b4ae9ef", "title": "IT Security Investments Through the Lens of the Resource-Based View: A new Theoretical Model and Literature Review", "text": "IT security has become a major issue for organizations as they need to protect their assets, including IT resources, intellectual property and business processes, against security attacks. Disruptions of ITbased business activities can easily lead to economic damage, such as loss of productivity, revenue and"}
{"_id": "586ed71b41362ef55e92475ff063a753e8536afe", "title": "CLOSET: An Efficient Algorithm for Mining Frequent Closed Itemsets", "text": "Association mining may often derive an undesirably large set of frequent itemsets and association rules. Recent studies have proposed an interesting alternative: mining frequent closed itemsets and their corresponding rules, which has the same power as association mining but substantially reduces the number of rules to be presented. In this paper, we propose an e cient algorithm, CLOSET, for mining closed itemsets, with the development of three techniques: (1) applying a compressed, frequent pattern tree FP-tree structure for mining closed itemsets without candidate generation, (2) developing a single pre x path compression technique to identify frequent closed itemsets quickly, and (3) exploring a partition-based projection mechanism for scalable mining in large databases. Our performance study shows that CLOSET is e cient and scalable over large databases, and is faster than the previously proposed methods."}
{"_id": "133eacaf0ad25b8364cb4510007d9363298e8adf", "title": "A comparison of approaches to large-scale data analysis", "text": "There is currently considerable enthusiasm around the MapReduce (MR) paradigm for large-scale data analysis [17]. Although the basic control flow of this framework has existed in parallel SQL database management systems (DBMS) for over 20 years, some have called MR a dramatically new computing model [8, 17]. In this paper, we describe and compare both paradigms. Furthermore, we evaluate both kinds of systems in terms of performance and development complexity. To this end, we define a benchmark consisting of a collection of tasks that we have run on an open source version of MR as well as on two parallel DBMSs. For each task, we measure each system's performance for various degrees of parallelism on a cluster of 100 nodes. Our results reveal some interesting trade-offs. Although the process to load data into and tune the execution of parallel DBMSs took much longer than the MR system, the observed performance of these DBMSs was strikingly better. We speculate about the causes of the dramatic performance difference and consider implementation concepts that future systems should take from both kinds of architectures."}
{"_id": "1d27d04e8cef4d32cb4e022c9f493a40a019f59f", "title": "Starfish: A Self-tuning System for Big Data Analytics", "text": "Hadoop is a MAD system for data analytics \uf0d8 Magnetism: attracts all sources of data \uf0d8 Agility: adapts in sync with rapid data evolution \uf0d8 Depth: supports complex analytics needs Starfish makes Hadoop MADDER and Self-Tuning \uf0d8 Data-lifecycle-awareness: achieves good performance throughout data lifecycle \uf0d8 Elasticity: adjusts resources and operational costs \uf0d8 Robustness: provides availability and predictability Just-in-Time Job Optimization"}
{"_id": "2a7ef5d25eb1c0a75b861cf3939c5e7a3185df26", "title": "A Language Independent Algorithm for Single and Multiple Document Summarization", "text": "S S S w S w w S S Sim + \u2208 \u2227 \u2208 = other similarity metrics: cosine, string kernels, etc. 3. r i BC-HurricaneGilbert 09-11 0339 4. BC-Hurricane Gilbert , 0348 5. Hurricane Gilbert Heads Toward Dominican Coast 6. By RUDDY GONZALEZ 7. Associated Press Writer 8. SANTO DOMINGO , Dominican Republic ( AP ) 9. Hurricane Gilbert swept toward the Dominican Republic Sunday , and the Civil Defense alerted its heavily populated south coast to prepare for high winds , heavy rains and high seas . 10. The storm was approaching from the southeast with sustained winds of 75 mph gusting to 92 mph . 11. \" There is no need for alarm , \" Civil Defense Director Eugenio Cabral said in a television alert shortly before midnight Saturday . 12. Cabral said residents of the province of Barahona should closely follow Gilbert 's movement . 13. An estimated 100,000 people live in the province , including 70,000 in the city of Barahona , about 125 miles west of Santo Domingo . 14. Tropical Storm Gilbert formed in the eastern Caribbean and strengthened into a hurricane Saturday night . 15. The National Hurricane Center in Miami reported its position at 2a.m. Sunday at latitude 16.1 north , longitude 67.5 west , about 140 miles south of Ponce , Puerto Rico , and 200 miles southeast of Santo Domingo . 16. The National Weather Service in San Juan , Puerto Rico , said Gilbert was moving westward at 15 mph with a \" broad area of cloudiness and heavy weather \" rotating around the center of the storm . 17. The weather service issued a flash flood watch for Puerto Rico and the Virgin Islands until at least 6p.m. Sunday . 18. Strong winds associated with the Gilbert brought coastal flooding , strong southeast winds and up to 12 feet feet to Puerto Rico 's south coast . 19. There were no reports of casualties . 20. San Juan , on the north coast , had heavy rains and gusts Saturday , but they subsided during the night . 21. On Saturday , Hurricane Florence was downgraded to a tropical storm and its remnants pushed inland from theU.S. Gulf Coast . 22. Residents returned home , happy to find little damage from 80 mph winds and sheets of rain . 23. Florence , the sixth named storm of the 1988 Atlantic storm season , was the second hurricane . 24. The first , Debby , reached minimal hurricane strength briefly before hitting the Mexican coast last month . 4 6 5"}
{"_id": "0728ea2e21c8a24b51d14d5878c9485c5b11b52f", "title": "The Minimum Rank with Hysteresis Objective Function", "text": ""}
{"_id": "1b280c79007b8150de2760ff47945a1a240992ad", "title": "Latent semantic indexing and convolutional neural network for multi-label and multi-class text classification", "text": "The classification of a real text should not be necessarily treated as a binary or multi-class classification, since the text may belong to one or more labels. This type of problem is called multi-label classification. In this paper, we propose the use of latent semantic indexing to text representation, convolutional neural networks to feature extraction and a single multi layer perceptron for multi-label classification in real text data. The experiments show that the model outperforms state of the art techniques when the dataset has long documents, and we observe that the precision is poor when the size of the texts is small."}
{"_id": "31e4e1bb57bd04bc02d7f7110e66122a215d7bfa", "title": "A survey of hard real-time scheduling for multiprocessor systems", "text": "This survey covers hard real-time scheduling algorithms and schedulability analysis techniques for homogeneous multiprocessor systems. It reviews the key results in this field from its origins in the late 1960s to the latest research published in late 2009. The survey outlines fundamental results about multiprocessor real-time scheduling that hold independent of the scheduling algorithms employed. It provides a taxonomy of the different scheduling methods, and considers the various performance metrics that can be used for comparison purposes. A detailed review is provided covering partitioned, global, and hybrid scheduling algorithms, approaches to resource sharing, and the latest results from empirical investigations. The survey identifies open issues, key research challenges, and likely productive research directions."}
{"_id": "b1b98bdc2f1928d1c5b747a54a6bc3fc014e9695", "title": "Anxiety disorders in primary care: prevalence, impairment, comorbidity, and detection.", "text": "Context Anxiety and depression are both common in primary care patients, but much less attention has been paid to anxiety. Contribution The authors administered a 7-item anxiety scale (Generalized Anxiety Disorder [GAD]-7) to 965 primary care patients, who also had a structured interview, to detect an anxiety disorder. Of these patients, 19.5% had at least 1 anxiety disorder. Patients with anxiety had worse functional status, more disability days, and more physician visits, but 41% were not being treated for any anxiety disorder. The GAD-7 had high sensitivity and good specificity for detecting a generalized anxiety disorder, panic disorder, social anxiety disorder, and posttraumatic stress disorder. Implications Anxiety disorders are common, underrecognized, and undertreated, but they are easy to detect with a brief questionnaire. The Editors Anxiety and depression are the 2 most common mental health problems seen in the general medical setting (15). Although increasing attention has been paid to anxiety, it still lags far behind depression in terms of research as well as clinical and public health efforts in screening, diagnosis, and treating affected individuals. This is unfortunate given the prevalence of anxiety and its substantial impact on patient functioning, work productivity, and health care costs (614). More than 30 million Americans have a lifetime history of anxiety (15), and anxiety disorders cost an estimated $42 billion dollars per year in the United States alone, counting direct and indirect costs (16). The 4 most common anxiety disorders (excluding simple phobias that seldom present clinically) are generalized anxiety disorder, panic disorder, social anxiety disorder, and posttraumatic stress disorder (PTSD) (1723). However, despite the substantial disability associated with each anxiety disorder and the availability of effective treatments, only a minority of patients (15% to 36%) with anxiety are recognized in primary care (24, 25). In our paper, we analyze results from a large primary carebased anxiety study (26) to answer several questions. First, what is the prevalence of these 4 anxiety disorders, both individually and concurrent with one another? Second, how do these disorders compare in functional impairment, health care use, and comorbid depressive and somatic symptom burden? Third, how effective is a brief anxiety measure in screening for each disorder? Compared with previous research, our study is particularly well-positioned to ascertain commonalities among anxiety diagnoses that are traditionally considered to be discrete and to determine whether a single measure can be used as a first step, common metric. This is especially salient for the busy, complex primary care setting, in which simplifying initial recognition of mental disorders may in fact make wider efforts at recognition more feasible. Methods Patient Sample The Patient Health Questionnaire (PHQ) anxiety study (26) was conducted to develop a short measure to assess generalized anxiety disorder. Patients were enrolled from a research network of 15 primary care sites (13 family practice and 2 internal medicine sites) located in 12 states and administered centrally by Clinvest, Inc., Springfield, Missouri, from November 2004 to June 2005. The Generalized Anxiety Disorder (GAD)-7 scale was developed and validated in 2149 patients. 
In the original study, 2982 persons were invited to participate; of these, 2740 (92%) completed the 4-page questionnaire and had no or minimal missing data (26). To minimize sampling bias, consecutive patients were approached at each site in clinic sessions until the target quota for that week was achieved. Of the 2740 participants, the first 2149 were used for development and validation of the GAD-7 scale, whereas the last 591 were used to determine the test-retest reliability of the scale. Of the 2149 patients in the validation group, 1654 agreed to a telephone interview, of whom 965 were randomly selected to undergo this interview within 1 week of their clinic visit by 1 of 2 mental health professionals: a clinical psychologist (with a PhD) or a senior psychiatric social worker. Contact information was sent by fax to each interviewer, who shuffled the fax sheets received each day and then drew from the stack several participants to interview that day. The 965 interviewed patients comprise the study population for this paper, and compared with the 1184 participants who did not undergo a mental health professional interview, these were more often women (69% vs. 63%; P = 0.003) and had slightly higher GAD-7 anxiety scores (5.7 vs. 5.1; P = 0.010) but were similar in age, race, and education. Of note, we only used data from the 1184 participants not undergoing a mental health professional interview to derive the GAD-7 (26). The study was approved by the Sterling Institutional Review Board. Study Questionnaire Before seeing their physicians, patients completed a 4-page questionnaire that included the GAD-7 (Appendix Figure). This scale was shown to have good internal and test-retest reliability, as well as convergent, construct, criterion, procedural, and factorial validity for the diagnosis of generalized anxiety disorder (26). Scores on the GAD-7 range from 0 to 21; scores of 5, 10, and 15 represent mild, moderate, and severe anxiety symptoms, respectively. The first 2 items of the GAD-7 represent core anxiety symptoms, and scores on this GAD-2 subscale range from 0 to 6. Appendix Figure. The Generalized Anxiety Disorder (GAD)-7 scale. The first 2 items constitute the GAD-2 subscale. GAD-7 \u00a9 2006 Pfizer Inc. All rights reserved. Used with permission. The study questionnaire also included questions about age, sex, education, race or ethnicity, and marital status; the Medical Outcomes Study Short Form-20 (SF-20), which measures functional status in 6 domains (27); the 10-item anxiety subscale from the Hopkins Symptom Checklist (28); the PHQ-8 depression scale (29); a 3-item version of the Social Phobia Inventory (Mini-SPIN) (30); the 5-item PHQ panic module (25); and the PHQ-15 somatic symptom scale (31). Also, single-item global assessments of anxiety, depression, and pain based on a scale of 0 (none) to 10 (as bad as you can imagine) were included. Finally, patients reported the number of physician visits and disability days during the previous 3 months. Structured Psychiatric Interview The 2 mental health professionals, while blinded to the results of the self-report research questionnaire, conducted structured psychiatric interviews by telephone to establish independent criteria-based diagnoses according to the Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV) (32). The interview consisted of the generalized anxiety disorder, social anxiety disorder, and PTSD sections of the Structured Clinical Interview for DSM-IV (SCID) (33). 
Reinterview by telephone was used because of its feasibility in our multisite study and its demonstrated comparability with face-to-face research interviews (34-36). The 2 mental health professionals based diagnoses of generalized anxiety disorder and PTSD on the SCID interview. For generalized anxiety disorder, some questions were slightly modified to better assess each DSM-IV criterion. They based a diagnosis of social anxiety disorder on whether the patient met SCID diagnostic criteria and had a Mini-SPIN score of 8 or greater, because this improves the accuracy of social anxiety disorder diagnoses (37). They based a diagnosis of panic disorder on answering yes to all 5 questions on the PHQ panic module, a threshold that reflects DSM-IV criteria and has been validated in both clinical (25) and population-based (38) samples. Statistical Analysis We estimated sample size with respect to sensitivity of the GAD-7 scale for diagnosing the target disease (generalized anxiety disorder). We needed 60 participants with generalized anxiety disorder to ensure that the total width of the 95% CI around a sensitivity proportion of 0.80 was no greater than 0.20. Given that the estimated prevalence of generalized anxiety disorder in the primary care population was 6% (18), we needed a total of 1000 unselected primary care patients to have approximately 60 patients with generalized anxiety disorder. We determined the prevalence of each of the 4 anxiety disorders and compared them in patient demographic characteristics, functional status, psychiatric comorbidity, disability days, and physician visits. Consistent with previous work (1, 26, 29, 39), we replaced missing values in a scale with the mean value of the remaining items if 25% or fewer items were missing. If more than 25% of items were missing, the sum score was not computed and was counted as missing. The amount of missing data for any individual variable or scale score was very low (<1%). The 15 sites did not differ in missing data. In addition to descriptive statistics, we used analysis of covariance to examine associations among each anxiety disorder and the 6 SF-20 functional status scales, self-reported disability days, and physician visits, controlling for demographic variables (sex, age, race, and educational level) and study site. We ran similar models to examine the effect of the number of anxiety disorders. In all models, patients with no anxiety disorder were the reference group. We adjusted pairwise statistical comparisons by using the Bonferroni correction. Because some dependent variables displayed a skewed (but unimodal) distribution, we also reran the models using the rank transformation of the dependent variables. We examined the operating characteristics (sensitivity, specificity, and positive likelihood ratio) for a range of cutoff scores of the GAD-7 and GAD-2 for each anxiety disorder. We conducted receiver-operating characteristic curve analyses to determine the area under the curve (AUC) for each anxiety disorder. We calculated AUCs and performed statistical comparisons (GAD-7 vs. GAD-"}
{"_id": "4b832b1304366c1ffc4b067fe1c95ba914dd9002", "title": "Parental Socio-Economic Status and Students \u2019 Academic Achievement in Selected Secondary Schools in Urban Informal Settlements in Westlands Division , Nairobi County .", "text": "This study sought to investigate critical parental socio-economic factors affecting the academic achievement of students in selected secondary schools in urban informal settlements in Westlands District in Nairobi County. The study was premised on the classical Liberal Theory of Equal Opportunity and social Darwinism proposed by Charles Darwin. A descriptive survey design using a sample of 125 respondents comprising of 91 students, 18 teachers and 16 parents was used for the study. The tools for data collection were questionnaires for students, Focus Group Discussions for teachers and Interview schedules for parents. Quantitative data from the questionnaires was analyzed using descriptive and inferential statistics while the qualitative data from interviews was managed through thematic techniques. The major findings of the study were that; the physical and other critical instructional resources were grossly inadequate or in a pathetic condition not conducive to education provision, there was a strong non-significant negative correlation between the occupation of parents and ability to finance education, that there is a significant positive correlation between good parent-teacher relationship and their involvement in their children\u2019s academic achievement. It was concluded that, parental occupation and involvement in learning activities and effective parent-teacher relationship were facilitating factors. Parents\u2019 low ability to finance education, coupled with the poor status of physical and instructional resources were inhibiting factors to students\u2019 academic achievement in the study locale. It was therefore recommended that the government should strengthen the collaboration between key education development partners to mobilize physical teaching/learning resources and strengthen education in the region. Unemployment should be controlled; poor students should be provided scholarships and that the government should take steps to raise socioeconomic status of people."}
{"_id": "029e33f959fe0fe6688aac87a3519608f5e75a5e", "title": "Bidirectional Dynamics for Protein Secondary Structure Prediction", "text": "Connectionist models for learning in sequential domains are typically dynamical systems that use hidden states to store contextual information. In principle, these models can adapt to variable time lags and perform complex sequential mappings. In spite of several successful applications (mostly based on hidden Markov models), the general class of sequence learning problems is still far from being satisfactorily solved. In particular, learning sequential translations is generally a hard task and current models seem to exhibit a number of limitations. One of these limitations, at least for some application domains, is the causality assumption. A dynamical system is said to be causal if the output at (discrete) time t does not depend on future inputs. Causality is easy to justify in dynamics that attempt to model the behavior of many physical systems. Clearly, in these cases the response at time t cannot depend on stimulae that the system has not yet received as input. As it turns out, non-causal dynamics over infinite time horizons cannot be realized by any physical or computational device. For certain categories of finite sequences, however, information from both the past and the future can be very useful for analysis and predictions at time t. This is the case, for example, of DNA and protein sequences where the structure and function of a region in the sequence may strongly depend on events located both upstream and downstream of the region, sometimes at considerable distances. Another good example is provided by the off-line translation of a language into another one. Even in the so-called \u201csimultaneous\u201d translation, it is well known that interpreters are constantly forced to introduce small delays in order to acquire \u201cfuture\u201d information within a sentence to resolve semantic ambiguities and preserve syntactic correctness. Non-causal dynamics are sometimes used in other disciplines (for example, Kalman smoothing in optimal control or non-causal digital filters in signal processing). However, as far as connectionist models are concerned, the causality assumption is shared among all the types of models which are capable of mapping input sequences to output sequences, including recurrent neural networks and input-output HMMs (IOHMMs) (Bengio & Frasconi, 1996). In this paper, we develop a new family of non-causal adaptive architectures where the underlying dynamics are factored using a pair of chained hidden state variables. The two"}
{"_id": "3dbbd0e8095fb5c4266461c31657e68d0b2840c6", "title": "Analog inference circuits for deep learning", "text": "Deep Machine Learning (DML) algorithms have proven to be highly successful at challenging, high-dimensional learning problems, but their widespread deployment is limited by their heavy computational requirements and the associated power consumption. Analog computational circuits offer the potential for large improvements in power efficiency, but noise, mismatch, and other effects cause deviations from ideal computations. In this paper we describe circuits useful for DML algorithms, including a tunable-width bump circuit and a configurable distance calculator. We also discuss the impacts of computational errors on learning performance. Finally we will describe a complete deep learning engine implemented using current-mode analog circuits and compare its performance to digital equivalents."}
{"_id": "72f64a9853eb522599dd3771d64ec16f09d92045", "title": "Multiresolution EO/IR target tracking and identification", "text": "Simultaneous target tracking and identification through feature association, attribute matching, or blob analysis is dependent on spatio-temporal measurements. Improved track maintenance should be achievable by maintaining coarse sensor resolutions on maneuvering targets and utilizing finer sensor resolutions to resolve closely-spaced targets. There are inherent optimal resolutions for sensors and restricted altitudes that constrain operational performance that a sensor manager must optimize for both wide-area surveillance and precision tracking. The advent of better optics, coordinated sensor management, and fusion strategies provide an opportunity to enhance simultaneous tracking and identification algorithms. We investigate utilizing electro-optical (EO) and infrared (IR) sensors operating at various resolutions to optimize target tracking and identification. We use a target-dense maneuvering scenario to highlight the performance gains with the multiresolution EO/IR data association (MEIDA) algorithm in tracking crossing targets."}
{"_id": "abf75b81b96ff2cd262086675deba55f907108db", "title": "Benchmarking Single-Image Dehazing and Beyond", "text": "We present a comprehensive study and evaluation of existing single-image dehazing algorithms, using a new large-scale benchmark consisting of both synthetic and real-world hazy images, called REalistic Single-Image DEhazing (RESIDE). RESIDE highlights diverse data sources and image contents, and is divided into five subsets, each serving different training or evaluation purposes. We further provide a rich variety of criteria for dehazing algorithm evaluation, ranging from full-reference metrics to no-reference metrics and to subjective evaluation, and the novel task-driven evaluation. Experiments on RESIDE shed light on the comparisons and limitations of the state-of-the-art dehazing algorithms, and suggest promising future directions."}
{"_id": "2f7606ec16cc858329c2707ecbec25d0d9c15a5a", "title": "Effects of Platform and Content Attributes on Information Dissemination on We Media: A Case Study on WeChat Platform", "text": "The emergence of the WeChat has brought some new changes to people's ways of life and reading. Thanks to the WeChat Public Platform, many people from all walks of life have become We Media operators, creating their own brands and products. In a case study of WeChat Public Platform, this paper uses NLPIR, a tool for Chinese word segmentation, and SPSS, a data analysis software, to conduct an analysis of actual data captured by Web Spider about WeChat posts and the platform itself, exploring the effects of post content and platform attributes on users' reading and dissemination behaviors. Our regression analysis found that some content attributes, including parts of speech like adverb, punctuation marks such as question mark, and headline length and position, as well as platform attributes, such as the platform seniority, have significant impacts on page views; the genres of advertisement, joke and warning, among other content attributes, and follower count as a platform attribute, have a very significant effect on the dissemination index. This paper is helpful for operators to gain a better understanding of factors influencing the spread of information published by WeChat Official Accounts and providing valuable revelations on achieving fast and efficient dissemination of posts."}
{"_id": "fb4e92e1266898152058f9b1f24acd8226ed9249", "title": "Mining Local Process Models", "text": "In this paper we describe a method to discover frequent behavioral patterns in event logs. We express these patterns as local process models. Local process model mining can be positioned in-between process discovery and episode / sequential pattern mining. The technique presented in this paper is able to learn behavioral patterns involving sequential composition, concurrency, choice and loop, like in process mining. However, we do not look at start-to-end models, which distinguishes our approach from process discovery and creates a link to episode / sequential pattern mining. We propose an incremental procedure for building local process models capturing frequent patterns based on so-called process trees. We propose five quality dimensions and corresponding metrics for local process models, given an event log. We show monotonicity properties for some quality dimensions, enabling a speedup of local process model discovery through pruning. We demonstrate through a real life case study that mining local patterns allows us to get insights in processes where regular start-to-end process discovery techniques are only able to learn unstructured, flower-like, models."}
{"_id": "dec768d7a76fc15814c185a8ea2d240cfa24dcff", "title": "Marketing Theory : Overview of Ontology , Epistemology , and Axiology Aspects", "text": "This article discusses about the marketing theory from the perspective of philosophy of science. The discussion focused on the aspects of the ontology, epistemology, and axiology. From the aspect of the ontology, it seems that the essence of marketing is a useful science and mutually beneficial for both the marketer and the stakeholders, or in other words that marketing knowledge is useful knowledge for the benefit of humankind that can be realized through the exchange process. Side of the ontology covers what the substance of marketing knowledge, the substance of truth and reality that is inherent with marketing. Meanwhile, aspects of epistemology cover a variety of approaches, methods, sources, structure and validation or marketing truth. Finally, axiology fields are related to ethics in marketing and marketing research. Marketing ethics and ethics in marketing research is a crucial matter, including in this case is trust."}
{"_id": "9941e02a6167c7ab7c320290b29481f452e6cda9", "title": "Adversarial Objectives for Text Generation", "text": "Language models can be used to generate text by iteratively sampling words conditioned on previously sampled words. In this work, we explore adversarial objectives to obtain good text generations by training a recurrent language model to keep its hidden state statistics during sampling similar to what it has sees during maximum likelihood estimation (MLE) training. We analyze the convergence of these models and discuss the effect of the adversarial objective on word-level and character-level language models. We find that using an adversarial objective assists the MLE objective and results in faster convergence and lower validation perplexities for character-level language models. We also collect sentence quality ratings from human participants and show that character-level language models with the adversarial objective generates qualitatively better sentences than the standard character-level recurrent language model."}
{"_id": "5293778b5139599368dab36f6fa4e49e4c25df45", "title": "A Deep Ensemble Model with Slot Alignment for Sequence-to-Sequence Natural Language Generation", "text": "Natural language generation lies at the core of generative dialogue systems and conversational agents. We describe an ensemble neural language generator, and present several novel methods for data representation and augmentation that yield improved results in our model. We test the model on three datasets in the restaurant, TV and laptop domains, and report both objective and subjective evaluations of our best model. Using a range of automatic metrics, as well as human evaluators, we show that our approach achieves better results than state-of-the-art models on the same datasets."}
{"_id": "92c0d6ea864cdd39f4822de6e73da05f9c80ba10", "title": "Metadating: Exploring the Romance and Future of Personal Data", "text": "We introduce Metadating -- a future-focused research and speed-dating event where single participants were invited to \"explore the romance of personal data\". Participants created \"data profiles\" about themselves, and used these to \"date\" other participants. In the rich context of dating, we study how personal data is used conversationally to communicate and illustrate identity. We note the manner in which participants carefully curated their profiles, expressing ambiguity before detail, illustration before accuracy. Our findings proposition a set of data services and features, each concerned with representing and curating data in new ways, beyond a focus on purely rational or analytic relationships with a quantified self. Through this, we build on emerging interest in \"lived informatics\" and raise questions about the experience and social reality of a \"data-driven life\"."}
{"_id": "6084f5faaade4a25003d0ea81e257ff857a500e0", "title": "Noise and sensitivity analysis of harmonic radar system for vital sign detection", "text": "Harmonic radar architectures present systems of wireless detection operating at multiple harmonic carrier frequencies. The noise and sensitivity of a harmonic radar system, which is developed at 12 GHz and 24 GHz for vital signs detection, are studied numerically and experimentally. The received signal power of the radar system is analyzed numerically as a function of the distance, taking into account the radar cross section (RCS) of the patient chest. The numerical data are compared with results obtained with single carrier frequency system. The total noise is a combination of thermal noise, residual phase noise, and flicker noise that presents the most elevated factor influencing the detection at baseband. Experimental results show that with the harmonic radar, the flicker noise can be reduced by 20dB at around 1Hz baseband frequency."}
{"_id": "766dca86a13a0206cf3846206c238c7ae6a0d4a8", "title": "DNA Sequence Compression Using the Burrows-Wheeler Transform", "text": "We investigate off-line dictionary oriented approaches to DNA sequence compression, based on the Burrows-Wheeler Transform (BWT). The preponderance of short repeating patterns is an important phenomenon in biological sequences. Here, we propose off-line methods to compress DNA sequences that exploit the different repetition structures inherent in such sequences. Repetition analysis is performed based on the relationship between the BWT and important pattern matching data structures, such as the suffix tree and suffix array. We discuss how the proposed approach can be incorporated in the BWT compression pipeline."}
{"_id": "8d6c0487f7b358a6167007eca96911d15e7cf186", "title": "Multichannel-Sniffing-System for Real-World Analysing of Wi-Fi-Packets", "text": "Wireless technologies like Wi-Fi send their data using multiple channels. To analyze an environment and all Wi-Fi packets inside, a sniffing system is needed, which can sniff on all used channels of the wireless technology at the same time. This allows catching most packets on each channel. In this paper, a way to build up a multi-channel-sniffing-system (MCSS) is described. The test system uses several single board computers (SBC) with an external Wi-Fi adapter (USB), 19 SBCs are sniffing nodes (SFN) and one SBC as sending node (SN). The sniffing SBCs are placed in a cycle around the sender so that every node has the same chance to receive the simulated packets from the SN. For the control of all 20 SBCs, a self-developed software is used, which connects from the host to the clients and is used for configuring the experiments. The configuration is sent to each client and will initiate their start, so that their times are also synchronized, for this all clients are synchronised using a time"}
{"_id": "030c82b87e3cdc5ba35c443a93ff4a9d21c2bc2f", "title": "Appearance Characterization of Linear Lambertian Objects, Generalized Photometric Stereo, and Illumination-Invariant Face Recognition", "text": "Traditional photometric stereo algorithms employ a Lambertian reflectance model with a varying albedo field and involve the appearance of only one object. In this paper, we generalize photometric stereo algorithms to handle all appearances of all objects in a class, in particular the human face class, by making use of the linear Lambertian property. A linear Lambertian object is one which is linearly spanned by a set of basis objects and has a Lambertian surface. The linear property leads to a rank constraint and, consequently, a factorization of an observation matrix that consists of exemplar images of different objects (e.g., faces of different subjects) under different, unknown illuminations. Integrability and symmetry constraints are used to fully recover the subspace bases using a novel linearized algorithm that takes the varying albedo field into account. The effectiveness of the linear Lambertian property is further investigated by using it for the problem of illumination-invariant face recognition using just one image. Attached shadows are incorporated in the model by a careful treatment of the inherent nonlinearity in Lambert's law. This enables us to extend our algorithm to perform face recognition in the presence of multiple illumination sources. Experimental results using standard data sets are presented"}
{"_id": "8b43d2f3203669d0891bb92c77c0ef33bd562bf1", "title": "G2RHA:Group-to-Route Handover Authentication Scheme for Mobile Relays in LTE-A High-Speed Rail Networks", "text": "The introduction of mobile relay node (MRN) in long-term evolution advanced high-speed rail networks is an attractive method to provide uninterrupted connectivity for the group of user equipments on board. However, MRNs still suffer from frequent handovers and several security threats due to several rounds of message exchange and the insecure air interface between MRNs and donor eNBs (DeNBs). In this paper, we propose a group-to-route handover authentication scheme based on trajectory prediction for mobile relays. By our scheme, all of the DeNBs deployed along the trajectory can be formed a route-DeNB group and all of the MRNs deployed in the same train can construct a MRN group. Compared with the current 3GPP standards and other related schemes, our scheme can accomplish the mutual authentication and key agreement between the MRN group and the target DeNB with ideal efficiency in terms of authentication signaling overhead, bandwidth consumption, and computational cost. Security analysis by using the BAN logic and formal verification tool Scyther show that our scheme can work correctly and withstand several protocol attacks."}
{"_id": "42c7d57ae12142d7bb4df41c1fa2a68c5d280eb8", "title": "Using institutional theory and dynamic simulation to understand complex e-Government phenomena", "text": "Available online 17 May 2011"}
{"_id": "6eccf61084f35fe91997ba8910de9651d81b0b37", "title": "Color retinal image enhancement using CLAHE", "text": "Common method in image enhancement that's often use is histogram equalization, due to this method is simple and has low computation load. In this research, we use Contrast Limited Adaptive Histogram Equalization (CLAHE) to enhance the color retinal image. To reduce this noise effect in color retinal image due to the acquisition process, we need to enhance this image. Color retinal image has unique characteristic than other image, that is, this image has important in green (G) channel. Image enhancement has important contribution in ophthalmology. In this paper, we propose new enhancement method using CLAHE in G channel to improve the color retinal image quality. The enhancement process conduct in G channel is appropriate to enhance the color retinal image quality."}
{"_id": "e81f89091aac2a3f12527c0c53aaedb9c6cb2722", "title": "An Analog Integrated Circuit Beamformer for High-Frequency Medical Ultrasound Imaging", "text": "We designed and fabricated a dynamic receive beamformer integrated circuit (IC) in 0.35-\u03bcm CMOS technology. This beamformer IC is suitable for integration with an annular array transducer for high-frequency (30-50 MHz) intravascular ultrasound (IVUS) imaging. The beamformer IC consists of receive preamplifiers, an analog dynamic delay-and-sum beamformer, and buffers for 8 receive channels. To form an analog dynamic delay line we designed an analog delay cell based on the current-mode first-order all-pass filter topology, as the basic building block. To increase the bandwidth of the delay cell, we explored an enhancement technique on the current mirrors. This technique improved the overall bandwidth of the delay line by a factor of 6. Each delay cell consumes 2.1-mW of power and is capable of generating a tunable time delay between 1.75 ns to 2.5 ns. We successfully integrated the fabricated beamformer IC with an 8-element annular array. Experimental test results demonstrated the desired buffering, preamplification and delaying capabilities of the beamformer."}
{"_id": "c15317b9d109fe07cd63aa742bacac023f2ca831", "title": "Multi-target threat assessment for automotive applications", "text": "Auto-brake systems have been on the market for a few years, but in order to continue improving their performance, multi-target threat assessment is a key component. The goal is to early be able to say that a collision can no longer be avoided by a steering maneuver, and thus braking can be started earlier. This is done by looking at all objects in the road scene, including other vehicles and barriers, and then looking for escape paths. The paper presents a computationally efficient way of determining whether an escape path can be found in a scenario with generally positioned objects, given the limited dynamics of the vehicle."}
{"_id": "0d7256ac0119d01acbb2ff6e124c4d60635fae1f", "title": "Managing Organizational Knowledge By Diagnosing Intellectual Capital : Framing and Advancing the State of the Field", "text": "Copyright \u00a9 2001, Idea Group Publishing. Since organizational knowledge is at the crux of sustainable competitive advantage, the burgeoning field of intellectual capital is an exciting area for both researchers and practitioners. Intellectual capital is conceptualized from numerous disciplines making the field a mosaic of perspectives. Accountants are interested in how to measure it on the balance sheet, information technologists want to codify it on systems, sociologists want to balance power with it, psychologists want to develop minds because of it, human resource managers want to calculate an ROI on it, and training and development officers want to make sure that they can build it. The following article represents a comprehensive literature review from a variety of managerial disciplines. In addition to highlighting the research to date, avenues for future pursuit are also offered."}
{"_id": "21d699e1cba89f8e3b40522530ea86b3253c111e", "title": "Intellectual capital ROI : a causal map of human capital antecedents and consequents", "text": "This report describes the results of a ground-breaking research study that measured the antecedents and consequents of effective human capital management. The research sample consisted of 76 senior executives from 25 companies in the financial services industry. The results of the study yielded a holistic causal map that integrated constructs from the fields of intellectual capital, knowledge management, human resources, organizational behaviour, information technology and accounting. The integration of both quantitative and qualitative measures in an overall conceptual model yielded several research implications. The resulting structural equation model allows participating organizations and researchers to gauge the effectiveness of an organization\u2019s human capital capabilities. This will allow practitioners and researchers to more efficiently allocate resources with regard to human capital management. The potential outcomes of the study are limitless, since a program of consistent re-evaluation can lead to the establishment of causal relationships between human capital management and economic and business results. Introduction Today\u2019s knowledge-based world consists of universal dynamic change and massive information bombardment. By the year 2010, the codified information base of the world is expected to `\u0300 double every 11 hours\u2019\u2019 (Bontis, 1999, p. 435). Information storage capacities continue to expand enormously. In 1950, IBM\u2019s Rama C tape contained 4.4 megabytes and they were able to store as many as 50 of these tapes together. At that time, 220 megabytes represented the frontiers of information storage. Many of today\u2019s standard desktop computers are being sold with 40 gigabytes of hard disk space. It is sobering to remember that full motion video in uncompressed form requires 1 gigabyte per minute and that the 83 minutes of Snow White digitized in full colour amount to 15 terabytes of space. Unfortunately, the conscious mind is only capable of processing somewhere between 16 and 40 bits of information (ones and zeros) per second. How do we reconcile this information bombardment conundrum when it seems that human beings are the bottle-neck? The current issue and full text archive of this journal is available at http://www.emeraldinsight.com/1469-1930.htm The authors would like to acknowledge the following organizations for their financial support: Accenture, Saratoga Institute and the Institute for Intellectual Capital Research. The authors would also like to highlight the contribution of Vanessa Yeh, who administered the data collection phase of this research."}
{"_id": "5a46da9aec9238b5acf5f83b2bb9e453be37367b", "title": "Producing sustainable competitive advantage through the effective management of people *", "text": "Executive Overview Achieving competitive success through people involves fundamentally altering how we think about the workforce and the employment relationship. It means achieving success by working with people, not by replacing them or limiting the scope of their activities. It entails seeing the workforce as a source of strategic advantage, not just as a cost to be minimized or avoided. Firms that take this different perspective are often able to successfully outmaneuver and outperform their rivals. ........................................................................................................................................................................"}
{"_id": "4ddabe9893c8e2db7d4870b1aefbae4d20d22e43", "title": "HOW MUCH DOES INDUSTRY MATTER , REALLY ?", "text": "In this paper, we examine the importance of year, industry, corporate-parent, and businessspecific effects on the profitability of U.S. public corporations within specific 4-digit SIC categories. Our results indicate that year, industry, corporate-parent, and business-specific effects account for 2 percent, 19 percent, 4 percent, and 32 percent, respectively, of the aggregate variance in profitability. We also find that the importance of the effects differs substantially across broad economic sectors. Industry effects account for a smaller portion of profit variance in manufacturing but a larger portion in lodging/entertainment, services, wholesale/retail trade, and transportation. Across all sectors we find a negative covariance between corporate-parent and industry effects. A detailed analysis suggests that industry, corporate-parent, and business-specific effects are related in complex ways. \uf6d9 1997 by John Wiley & Sons, Ltd."}
{"_id": "97293655d32d8dfa0c3f3aad446b412d2512cecf", "title": "From sunlight to phytomass: on the potential efficiency of converting solar radiation to phyto-energy.", "text": "The relationship between solar radiation capture and potential plant growth is of theoretical and practical importance. The key processes constraining the transduction of solar radiation into phyto-energy (i.e. free energy in phytomass) were reviewed to estimate potential solar-energy-use efficiency. Specifically, the out-put:input stoichiometries of photosynthesis and photorespiration in C(3) and C(4) systems, mobilization and translocation of photosynthate, and biosynthesis of major plant biochemical constituents were evaluated. The maintenance requirement, an area of important uncertainty, was also considered. For a hypothetical C(3) grain crop with a full canopy at 30\u00b0C and 350 ppm atmospheric [CO(2) ], theoretically potential efficiencies (based on extant plant metabolic reactions and pathways) were estimated at c. 0.041 J J(-1) incident total solar radiation, and c. 0.092 J J(-1) absorbed photosynthetically active radiation (PAR). At 20\u00b0C, the calculated potential efficiencies increased to 0.053 and 0.118 J J(-1) (incident total radiation and absorbed PAR, respectively). Estimates for a hypothetical C(4) cereal were c. 0.051 and c. 0.114 J J(-1), respectively. These values, which cannot be considered as precise, are less than some previous estimates, and the reasons for the differences are considered. Field-based data indicate that exceptional crops may attain a significant fraction of potential efficiency."}
{"_id": "382213e67d451cccb6cd41939b8e33e451a45e26", "title": "App Miscategorization Detection: A Case Study on Google Play", "text": "An ongoing challenge in the rapidly evolving app market ecosystem is to maintain the integrity of app categories. At the time of registration, app developers have to select, what they believe, is the most appropriate category for their apps. Besides the inherent ambiguity of selecting the right category, the approach leaves open the possibility of misuse and potential gaming by the registrant. Periodically, the app store will refine the list of categories available and potentially reassign the apps. However, it has been observed that the mismatch between the description of the app and the category it belongs to, continues to persist. Although some common mechanisms (e.g., a complaint-driven or manual checking) exist, they limit the response time to detect miscategorized apps and still open the challenge on categorization. We introduce FRAC+: (FR)amework for (A)pp (C)ategorization. FRAC+ has the following salient features: (i) it is based on a data-driven topic model and automatically suggests the categories appropriate for the app store, and (ii) it can detect miscategorizated apps. Extensive experiments attest to the performance of FRAC+. Experiments on Google Play shows that FRAC+\u2019s topics are more aligned with Google\u2019s new categories and 0.35-1.10 percent game apps are detected to be miscategorized."}
{"_id": "8936eb96bc8c67a84beb289787029f69fa28e4c2", "title": "Local Edge-Preserving Multiscale Decomposition for High Dynamic Range Image Tone Mapping", "text": "A novel filter is proposed for edge-preserving decomposition of an image. It is different from previous filters in its locally adaptive property. The filtered image contains local means everywhere and preserves local salient edges. Comparisons are made between our filtered result and the results of three other methods. A detailed analysis is also made on the behavior of the filter. A multiscale decomposition with this filter is proposed for manipulating a high dynamic range image, which has three detail layers and one base layer. The multiscale decomposition with the filter addresses three assumptions: 1) the base layer preserves local means everywhere; 2) every scale's salient edges are relatively large gradients in a local window; and 3) all of the nonzero gradient information belongs to the detail layer. An effective function is also proposed for compressing the detail layers. The reproduced image gives a good visualization. Experimental results on real images demonstrate that our algorithm is especially effective at preserving or enhancing local details."}
{"_id": "3f481fe3f753b26f1b0b46c4585b9d39617d01aa", "title": "Using neural networks and GIS to forecast land use changes : a Land Transformation Model", "text": "The Land Transformation Model (LTM), which couples geographic information systems (GIS) with artificial neural networks (ANNs) to forecast land use changes, is presented here. A variety of social, political, and environmental factors contribute to the model\u2019s predictor variables of land use change. This paper presents a version of the LTM parameterized for Michigan\u2019s Grand Traverse Bay Watershed and explores how factors such as roads, highways, residential streets, rivers, Great Lakes coastlines, recreational facilities, inland lakes, agricultural density, and quality of views can influence urbanization patterns in this coastal watershed. ANNs are used to learn the patterns of development in the region and test the predictive capacity of the model, while GIS is used to develop the spatial, predictor drivers and perform spatial analysis on the results. The predictive ability of the model improved at larger scales when assessed using a moving scalable window metric. Finally, the individual contribution of each predictor variable was examined and shown to vary across spatial scales. At the smallest scales, quality views were the strongest predictor variable. We interpreted the multi-scale influences of land use change, illustrating the relative influences of site (e.g. quality of views, residential streets) and situation (e.g. highways and county roads) variables at different scales. # 2002 Elsevier Science Ltd. All rights reserved."}
{"_id": "c13ee933b7ddccb4a1ccc7820c09fda8f32d1fb5", "title": "Business & IT Alignment in Theory and Practice", "text": "A key success factor for a successful company in a dynamic environment is effective and efficient information technology (IT) supporting business strategies and processes. In recent surveys however it is concluded that in most companies IT is not aligned with business strategy. The alignment between business needs and IT capabilities is therefore still a prominent area of concern. This paper reports the first stages of a research program exploring the differences of business & IT alignment (BIA) in theory and in practice. The paper presents an overview of the development of theory on BIA and reports the issues with aligning IT to business in practice based on a number of focus-group discussions with CIOs and IT managers. In line with the practical approach to BIA that the CIOs and IT managers in the focus-groups took, the last part of the paper builds upon Luftman's BIA maturity model and reports the application of the model to 12 Dutch firms"}
{"_id": "606b2c57cfed7328dedf88556ac657e9e1608311", "title": "Blockstack : A New Decentralized Internet", "text": "The traditional internet has many central points of failure and trust, like (a) the Domain Name System (DNS) servers, (b) public-key infrastructure, and (c) end-user data stored on centralized data stores. We present the design and implementation of a new internet, called Blockstack, where users don\u2019t need to trust remote servers. We remove any trust points from the middle of the network and use blockchains to secure critical data bindings. Blockstack implements services for identity, discovery, and storage and can survive failures of underlying blockchains. The design of Blockstack is informed by three years of experience from a large blockchain-based production system. Blockstack gives comparable performance to traditional internet services and enables a much-needed security and reliability upgrade to the traditional internet."}
{"_id": "212cdc602cf9222297beaa9cbdb7792441ceb7e4", "title": "The impact of modified Hatha yoga on chronic low back pain: a pilot study.", "text": "PURPOSE\nThe purpose of this randomized pilot study was to evaluate a possible design for a 6-week modified hatha yoga protocol to study the effects on participants with chronic low back pain.\n\n\nPARTICIPANTS\nTwenty-two participants (M = 4; F = 17), between the ages of 30 and 65, with chronic low back pain (CLBP) were randomized to either an immediate yoga based intervention, or to a control group with no treatment during the observation period but received later yoga training.\n\n\nMETHODS\nA specific CLBP yoga protocol designed and modified for this population by a certified yoga instructor was administered for one hour, twice a week for 6 weeks. Primary functional outcome measures included the forward reach (FR) and sit and reach (SR) tests. All participants completed Oswestry Disability Index (ODI) and Beck Depression Inventory (BDI) questionnaires. Guiding questions were used for qualitative data analysis to ascertain how yoga participants perceived the instructor, group dynamics, and the impact of yoga on their life.\n\n\nANALYSIS\nTo account for drop outs, the data were divided into better or not categories, and analyzed using chi-square to examine differences between the groups. Qualitative data were analyzed through frequency of positive responses.\n\n\nRESULTS\nPotentially important trends in the functional measurement scores showed improved balance and flexibility and decreased disability and depression for the yoga group but this pilot was not powered to reach statistical significance. Significant limitations included a high dropout rate in the control group and large baseline differences in the secondary measures. In addition, analysis of the qualitative data revealed the following frequency of responses (1) group intervention motivated the participants and (2) yoga fostered relaxation and new awareness/learning.\n\n\nCONCLUSION\nA modified yoga-based intervention may benefit individuals with CLB, but a larger study is necessary to provide definitive evidence. Also, the impact on depression and disability could be considered as important outcomes for further study. Additional functional outcome measures should be explored. This pilot study supports the need for more research investigating the effect of yoga for this population."}
{"_id": "d4f276f2e370d9010f0570dfe61ff9dc57d7b489", "title": "An Overview on Evaluating and Predicting Scholarly Article Impact", "text": "Scholarly article impact reflects the significance of academic output recognised by academic peers, and it often plays a crucial role in assessing the scientific achievements of researchers, teams, institutions and countries. It is also used for addressing various needs in the academic and scientific arena, such as recruitment decisions, promotions, and funding allocations. This article provides a comprehensive review of recent progresses related to article impact assessment and prediction. The review starts by sharing some insight into the article impact research and outlines current research status. Some core methods and recent progress are presented to outline how article impact metrics and prediction have evolved to consider integrating multiple networks. Key techniques, including statistical analysis, machine learning, data mining and network science, are discussed. In particular, we highlight important applications of each technique in article impact research. Subsequently, we discuss the open issues and challenges of article impact research. At the same time, this review points out some important research directions, including article impact evaluation by considering Conflict of Interest, time and location information, various distributions of scholarly entities, and rising stars."}
{"_id": "effc383d5c56d2830a81c6f75dbee8afc6eca218", "title": "\"An Odd Kind of Pleasure\": Differentiating Emotional Challenge in Digital Games", "text": "Recent work introduced the notion of emotional challenge as a means to afford more unique and diverse gaming experiences. However, players' experience of emotional challenge has received little empirical attention. It remains unclear whether players enjoy it and what exactly constitutes the challenge thereof. We surveyed 171 players about a challenging or an emotionally challenging experience, and analyzed their responses with regards to what made the experience challenging, their emotional response, and the relation to core player experience constructs. We found that emotional challenge manifested itself in different ways, by confronting players with difficult themes or decisions, as well as having them deal with intense emotions. In contrast to more'conventional' challenge, emotional challenge evoked a wider range of negative emotions and was appreciated significantly more by players. Our findings showcase the appeal of uncomfortable gaming experiences, and extend current conceptualizations of challenge in games."}
{"_id": "8ce651d8b9d4e575881939eec98590161aa51c2d", "title": "RFID Systems: A Survey on Security Threats and Proposed Solutions", "text": "Low-cost Radio Frequency Identification (RFID) tags affixed to consumer items as smart labels are emerging as one of the most pervasive computing technology in history. This can have huge security implications. The present article surveys the most important technical security challenges of RFID systems. We first provide a brief summary of the most relevant standards related to this technology. Next, we present an overview about the state of the art on RFID security, addressing both the functional aspects and the security risks and threats associated to its use. Finally, we analyze the main security solutions proposed until date."}
{"_id": "317072c8b7213d884f5b2d4d3133368d17c412ab", "title": "Broadband Substrate Integrated Waveguide Cavity-Backed Bow-Tie Slot Antenna", "text": "A novel design technique for broadband substrate integrated waveguide cavity-backed slot antenna is demonstrated in this letter. Instead of using a conventional narrow rectangular slot, a bow-tie-shaped slot is implemented to get broader bandwidth performance. The modification of the slot shape helps to induce strong loading effect in the cavity and generates two closely spaced hybrid modes that help to get a broadband response. The slot antenna incorporates thin cavity backing (height <;0.03\u03bb0 ) in a single substrate and thus retains low-profile planar configuration while showing unidirectional radiation characteristics with moderate gain. A fabricated prototype is also presented that shows a bandwidth of 1.03 GHz (9.4%), a gain of 3.7 dBi over the bandwidth, 15 dB front-to-back ratio, and cross-polarization level below -18 dB."}
{"_id": "17cea7a5d8caa91815a4e1e4188bc8475d0b21cc", "title": "Sparse PCA via Bipartite Matchings", "text": "We consider the following multi-component sparse PCA problem: given a set of data points, we seek to extract a small number of sparse components with disjoint supports that jointly capture the maximum possible variance. These components can be computed one by one, repeatedly solving the single-component problem and deflating the input data matrix, but as we show this greedy procedure is suboptimal. We present a novel algorithm for sparse PCA that jointly optimizes multiple disjoint components. The extracted features capture variance that lies within a multiplicative factor arbitrarily close to 1 from the optimal. Our algorithm is combinatorial and computes the desired components by solving multiple instances of the bipartite maximum weight matching problem. Its complexity grows as a low order polynomial in the ambient dimension of the input data matrix, but exponentially in its rank. However, it can be effectively applied on a low-dimensional sketch of the data; this allows us to obtain polynomial-time approximation guarantees via spectral bounds. We evaluate our algorithm on real data-sets and empirically demonstrate that in many cases it outperforms existing, deflationbased approaches."}
{"_id": "aa51d17fc646651218feb33058a4dba95f8e2ebe", "title": "Low-power high-speed level shifter design for block-level dynamic voltage scaling environment", "text": "Two novel level shifters that are suitable for block-level dynamic voltage scaling environment (namely, V/sub DD/-hopping) are proposed. In order to achieve reduction in power consumption and delay, the first proposed level shifter which is called contention mitigated level shifter (CMLS) uses a contention-reduction technique. The simulation results with 65-nm CMOS model show 24% reduction in power and 50% decrease in delay with 4% area increase compared with the conventional level shifter. The second proposed level shifter which is called bypassing enabled level shifter (BELS) implements a bypass function and it is fabricated using 0.35/spl mu/m CMOS technology. The measurement results show that the power and delay of the proposed BELS are reduced by 50% and 65%, respectively with 60% area overhead over the conventional level shifter."}
{"_id": "2c7886a75462716bcdc797ca6ee1ffbe3dffd67a", "title": "An accurate algebraic solution for moving source location using TDOA and FDOA measurements", "text": "This paper proposes an algebraic solution for the position and velocity of a moving source using the time differences of arrival (TDOAs) and frequency differences of arrival (FDOAs) of a signal received at a number of receivers. The method employs several weighted least-squares minimizations only and does not require initial solution guesses to obtain a location estimate. It does not have the initialization and local convergence problem as in the conventional linear iterative method. The estimated accuracy of the source position and velocity is shown to achieve the Crame/spl acute/r-Rao lower bound for Gaussian TDOA and FDOA noise at moderate noise level before the thresholding effect occurs. Simulations are included to examine the algorithm's performance and compare it with the Taylor-series iterative method."}
{"_id": "80d2e35888a5f072aae0c6f367c52f33dc874f8d", "title": "Towards dropout training for convolutional neural networks", "text": "Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. Elaborately designing dropout training simultaneously in max-pooling and fully-connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout and stochastic pooling, both of which introduce stochasticity based on multinomial distributions at pooling stage."}
{"_id": "61655a1ec2163c18279561a8fb40f8df84cc8d3a", "title": "Techniques and Programs Used in Bidding and Playing phases of Imperfect Information Game \u2013 an Overview", "text": "Bridge is an international imperfect information game played with similar rules all over the world and it is played by millions of players. It is an intelligent game; it increases creativity and knowledge of human mind. Many of the researchers analyses the Bridge bidding and playing phases, and they developed many programs for getting better results. The programs were performing well and it is also matter of time before the computer beats most human bridge players. As such, the researchers mainly focused on the techniques and computer programs which were used in bridge bidding"}
{"_id": "7ae152644e006474e8dca8da61fc6aa11969890e", "title": "High-Gain Reconfigurable Sectoral Antenna Using an Active Cylindrical FSS Structure", "text": "A novel design of a high-gain reconfigurable sectoral antenna using an active cylindrical frequency selective surface (FSS) structure is presented. The FSS structure consists of metallic discontinuous strips with PIN diodes in their discontinuities, and it is placed cylindrically around an omnidirectional electromagnetically coupled coaxial dipole (ECCD) array. The cylindrical FSS structure is divided into two semi-cylinders. By controlling the state of diodes in each semi-cylinder, a directive radiation pattern is obtained that can be swept in the entire azimuth plane. The effect of the diode-state configuration and the radius of the cylindrical structure are carefully studied to obtain an optimum sectoral radiation pattern. In addition, a solution for increasing the matching bandwidth of the antenna is also proposed. An experimental prototype was fabricated, and the measured results show a beamwidth of 20\u00b0 in elevation and 70\u00b0 in the azimuth plane at 2.1 GHz with a gain of 13 dBi. With these features, the proposed antenna is suitable for base-station applications in wireless communication systems."}
{"_id": "99aa2c7249abf64b20f2fe6a197ac722d5c2f0e2", "title": "Methodology for customer relationship management", "text": "Customer relationship management (CRM) is a customer-focused business strategy that dynamically integrates sales, marketing and customer care service in order to create and add value for the company and its customers. This change towards a customer-focused strategy is leading to a strong demand for CRM solutions by companies. However, in spite of companies interest in this new management model, many CRM implementations fail. One of the main reasons for this lack of success is that the existing methodologies being used to approach a CRM project are not adequate, since they do not satisfactorily integrate and complement the strategic and technological aspects of CRM. This paper describes a formal methodology for directing the process of developing and implementing a CRM System that considers and integrates various aspects, such as defining a customer strategy, re-engineering customer-oriented business processes, human resources management, the computer system, management of change and continuous improvement. 2005 Elsevier Inc. All rights reserved."}
{"_id": "9210a0bcc4487f7d2430f123db155617546e862e", "title": "Music Education and Its Impact on Students with Special Needs", "text": "Part of the Curriculum and Instruction Commons, Curriculum and Social Inquiry Commons, Disability and Equity in Education Commons, Educational Methods Commons, Elementary Education and Teaching Commons, Higher Education and Teaching Commons, Junior High, Intermediate, Middle School Education and Teaching Commons, Other Educational Administration and Supervision Commons, Other Teacher Education and Professional Development Commons, Pre-Elementary, Early Childhood, Kindergarten Teacher Education Commons, Scholarship of Teaching and Learning Commons, and the Special Education Administration Commons Survey: Let us know how this paper benefits you."}
{"_id": "7088fad9a6dedcadb9116c50f622c90bc4405e3a", "title": "Composite Scale of Morningness: psychometric properties, validity with Munich ChronoType Questionnaire and age/sex differences in Poland.", "text": "The present study aimed at testing psychometric properties of the Composite Scale of Morningness (CSM) and validating it with mid sleep on free days (MSF) derived from the Munich ChronoType Questionnaire (MCTQ) in Poland, along with analyzing age and sex differences in the CSM and MSF. A sample of 952 Polish residents (62.6% females) aged between 13 and 46 was tested. Additionally, a sample of 33 university students were given MCTQ and filled in a sleep diary for 8 days. MSF derived from MCTQ was related to the one from sleep diary (r=.44). The study revealed good reliability of the CSM (\u03b1=.84) and its validity: greater morningness preference was associated with earlier MSF from MCTQ (r=-.52). CSM scores were distributed over its full range, with a mean of 34, and did not differ between sexes, although females were earlier than males by 23minutes in MSF. Regarding age, eveningness estimated with both CSM and MSF was greatest in subjects aged 16-18years, and a shift toward eveningness during puberty and a shift back toward morningness in older age was observed. The Polish version of the CSM consisted of two components of morningness. Cutoff scores were: for evening types (lower 10%) 24 or less, for morning types (upper 10%) 43 or more. The Polish CSM presents good psychometric properties, which are similar to those reported in other language versions, and also presents sex/age patterns similar to those found previously."}
{"_id": "3e696e53734615c4fd43ebeccc66ab1ee924a66c", "title": "Analysis of different PM machines with concentrated windings and flux barriers in stator core", "text": "The new stator structure with magnetic flux-barriers in the stator yoke or tooth region represents an efficient method for reducing the sub-harmonics of electric machines with fractional slots, tooth-concentrated windings. In this paper the both flux-barriers techniques are considered during the analysis of different PM machines. The 12-teeth single-layer and double-layer concentrated winding in combination with a 10-poles and 14-poles PM rotor are investigated. For the all machine topologies the new stator design is used to improve their performances and characteristics. The flux-barrier effects on the main machine parameters, such as in the air-gap flux density harmonics, dq-machine parameters, characteristic currents, electromagnetic torque, and so on, are studied carefully. Comparisons performed with the analogous conventional machines (with conventional stator) show that, the new stator design offers significant advantages."}
{"_id": "1c7d38f68fe1150895a186e30b60c02dd89a676a", "title": "Solving the Multiple Instance Problem with Axis-Parallel Rectangles", "text": "The multiple instance problem arises in tasks where the training examples are ambiguous: a single example object may have many alternative feature vectors (instances) that describe it, and yet only one of those feature vectors may be responsible for the observed classification of the object. This paper describes and compares three kinds of algorithms that learn axis-parallel rectangles to solve the multiple instance problem. Algorithms that ignore the multiple instance problem perform very poorly. An algorithm that directly confronts the multiple instance problem (by attempting to identify which feature vectors are responsible for the observed classifications) performs best, giving 89% correct predictions on a musk odor prediction task. The paper also illustrates the use of artificial data to debug and compare these algorithms."}
{"_id": "65cba5719e3980502116073b82620db8a0ebe406", "title": "A survey of cost-sensitive decision tree induction algorithms", "text": "The past decade has seen a significant interest on the problem of inducing decision trees that take account of costs of misclassification and costs of acquiring the features used for decision making. This survey identifies over 50 algorithms including approaches that are direct adaptations of accuracy-based methods, use genetic algorithms, use anytime methods and utilize boosting and bagging. The survey brings together these different studies and novel approaches to cost-sensitive decision tree learning, provides a useful taxonomy, a historical timeline of how the field has developed and should provide a useful reference point for future research in this field."}
{"_id": "e99954dff1ac44cb2d90b9dcbe3f86b692264791", "title": "Tree-structured survival analysis.", "text": "In this note, tree-structured recursive partitioning schemes for classification, probability class estimation, and regression are adapted to cover censored survival analysis. The only assumptions required are those which guarantee identifiability of conditional distributions of lifetime given covariates. Thus, the techniques are applicable to more general situations than are those of the famous semi-parametric model of Cox."}
{"_id": "0ae8d4b595e4ca1f3088f18f9fedb36fa642e31a", "title": "A Brief Introduction to Boosting", "text": "Boosting is a general method for improving the accuracy of any given learning algorithm. This short paper introduces the boosting algorithm AdaBoost, and explains the underlying theory of boosting, including an explanation of why boosting often does not suffer from overfitting. Some examples of recent applications of boosting are also described."}
{"_id": "2cc879d90236743ad227b0e88a1f5f52a6603909", "title": "Tree-Based Batch Mode Reinforcement Learning", "text": "Reinforcement learning aims to determine an optimal contro l policy from interaction with a system or from observations gathered from a system. In batch mode, i t can be achieved by approximating the so-calledQ-function based on a set of four-tuples (xt ,ut , rt ,xt+1) wherext denotes the system state at timet, ut the control action taken, rt the instantaneous reward obtained and xt+1 the successor state of the system, and by determining the contro l policy from this Q-function. The Q-function approximation may be obtained from the limit of a s equence of (batch mode) supervised learning problems. Within this framework we describe the use of several classical tree-based supervised learning methods (CART, Kd-tree, tree bagging) a d two newly proposed ensemble algorithms, namelyextremelyandtotally randomized trees. We study their performances on several examples and find that the ensemble methods based on regressi on trees perform well in extracting relevant information about the optimal control policy from sets of four-tuples. In particular, the totally randomized trees give good results while ensuring the convergence of the sequence, whereas by relaxing the convergence constraint even better accurac y results are provided by the extremely randomized trees."}
{"_id": "99706c06a862b08e6cb2e2c0a6201e557b94a846", "title": "Visual exploration and evaluation of climate-related simulation data", "text": "Large, heterogeneous volumes of simulation data are calculated and stored in many disciplines, e.g. in climate and climate impact research. To gain insight, current climate analysis applies statistical methods and model sensitivity analyzes in combination with standard visualization techniques. However, there are some obstacles for researchers in applying the full functionality of sophisticated visualization, exploiting the available interaction and visualization functionality in order to go beyond data presentation tasks. In particular, there is a gap between available and actually applied multi-variate visualization techniques. Furthermore, visual data comparison of simulation (and measured) data is still a challenging task. Consequently, this paper introduces a library of visualization techniques, tailored to support exploration and evaluation of climate simulation data. These techniques are integrated into the easy-to-use visualization framework SimEnvVis - designed as a front-end user interface to a simulation environment - which provides a high level of user support generating visual representations."}
{"_id": "11fa8abec16c38fb05ef2fe4a127b0d86ab56926", "title": "Meta-Path-Based Ranking with Pseudo Relevance Feedback on Heterogeneous Graph for Citation Recommendation", "text": "The sheer volume of scholarly publications available online significantly challenges how scholars retrieve the new information available and locate the candidate reference papers. While classical text retrieval and pseudo relevance feedback (PRF) algorithms can assist scholars in accessing needed publications, in this study, we propose an innovative publication ranking method with PRF by leveraging a number of meta-paths on the heterogeneous bibliographic graph. Different meta-paths on the graph address different ranking hypotheses, whereas the pseudo-relevant papers (from the retrieval results) are used as the seed nodes on the graph. Meanwhile, unlike prior studies, we propose \"restricted meta-path\" facilitated by a new context-rich heterogeneous network extracted from full-text publication content along with citation context. By using learning-to-rank, we integrate 18 different meta-path-based ranking features to derive the final ranking scores for candidate cited papers. Experimental results with ACM full-text corpus show that meta-path-based ranking with PRF on the new graph significantly (p < 0.0001) outperforms text retrieval algorithms with text-based or PageRank-based PRF."}
{"_id": "4255bbd10e2a1692b723f8b40f28db7e27b06de9", "title": "Convex Formulations for Fair Principal Component Analysis", "text": "Though there is a growing literature on fairness for supervised learning, incorporating fairness into unsupervised learning has been less well-studied. This paper studies fairness in the context of principal component analysis (PCA). We first define fairness for dimensionality reduction, and our definition can be interpreted as saying a reduction is fair if information about a protected class (e.g., race or gender) cannot be inferred from the dimensionality-reduced data points. Next, we develop convex optimization formulations that can improve the fairness (with respect to our definition) of PCA and kernel PCA. These formulations are semidefinite programs, and we demonstrate their effectiveness using several datasets. We conclude by showing how our approach can be used to perform a fair (with respect to age) clustering of health data that may be used to set health insurance rates."}
{"_id": "76bb26087807bf29932a0a8a3b53f7d381be17a9", "title": "Feature-Rich Memory-Based Classification for Shallow NLP and Information Extraction", "text": "Memory-Based Learning (MBL) is based on the storage of all available training data, and similarity-based reasoning for handling new cases. By interpreting tasks such as POS tagging and shallow parsing as classification tasks, the advantages of MBL (implicit smoothing of sparse data, automatic integration and relevance weighting of information sources, handling exceptional data) contribute to state-of-the-art accuracy. However, Hidden Markov Models (HMM) typically achieve higher accuracy than MBL (and other Machine Learning approaches) for tasks such as POS tagging and chunking. In this paper, we investigate how the advantages of MBL, such as its potential to integrate various sources of information, come to play when we compare our approach to HMMs on two Information Extraction (IE) datasets: the well-known Seminar Announcement data set and a new German Curriculum Vitae data set. 1 Memory-Based Language Processing Memory-Based Learning (MBL) is a supervised classification-based learning method. A vector of feature values (an instance) is associated with a class by a classifier that lazily extrapolates from the most similar set (nearest neighbors) selected from all stored training examples. This is in contrast to eager learning methods like decision tree learning [26], rule induction [9], or Inductive Logic Programming [7], which abstract a generalized structure from the training set beforehand (forgetting the examples themselves), and use that to derive a classification for a new instance. In MBL, a distance metric on the feature space defines what are the nearest neighbors of an instance. Metrics with feature weights based on information-theory or other relevance statistics allow us to use rich representations of instances and their context, and to balance the influences of diverse information sources in computing distance. Natural Language Processing (NLP) tasks typically concern the mapping of an input representation (e.g., a series of words) into an output representation (e.g., the POS tags corresponding to each word in the input). Most NLP tasks can therefore easily be interpreted as sequences of classification 34 Zavrel, Daelemans tasks: e.g., given a word and some representation of its context, decide what tag to assign to each word in its context. By creating a separate classification instance (a \u201cmoving window\u201d approach) for each word and its context, shallow syntactic or semantic structures can be produced for whole sentences or texts. In this paper, we argue that more semantic and complex input-output mappings, such as Information Extraction, can also effectively be modeled by such a Memory-based classification-oriented framework, and that this approach has a number of very interesting advantages over rivalling methods, most notably that each classification decision can be made dependent on a very rich and diverse set of features. The properties of MBL as a lazy, similarity-based learning method seem make a good fit to the properties of typical disambiguation problems in NLP: \u2022 Similar input representations lead to similar output. E.g., words occurring in a similar context in general have the same POS tag. Similarity-based reasoning is the core of MBL. \u2022 Many sub-generalizations and exceptions. 
By keeping in memory all training instances, exceptions included, an MBL approach can capture generalization from exceptional or low-frequency cases according to [12]. \u2022 Need for integration of diverse types of information. E.g., in Information Extraction, lexical features, spelling features, syntactic as well as phrasal context features, global text structure, and layout features can potentially be very relevant. \u2022 Automatic smoothing in very rich event spaces. Supervised learning of NLP tasks regularly runs into problems of sparse data; not enough training data is available to extract reliable parameters for complex models. MBL incorporates an implicit robust form of smoothing by similarity [33]. In the remainder of this Section, we will show how a memory-, similarity-, and classification-based approach can be applied to shallow syntactic parsing, and can lead to state-of-the-art accuracy. Most of the tasks discussed here can also easily be modeled using Hidden Markov Models (HMM), and often with surprising accuracy. We will discuss the strengths of the HMMs and draw a comparison between the classification-based MBL method and the sequence-optimizing HMM approach (Section 1.2). 1.1 Memory-Based Shallow Parsing Shallow parsing is an important component of most text analysis systems in Text Mining applications such as information extraction, summary generation, and question answering. It includes discovering the main constituents of sentences (NPs, VPs, PPs) and their heads, and determining syntactic relationships like subject, object, adjunct relations between verbs and heads of other constituents. This is an important first step to understanding the who, what, when, and where of sentences in a text. Feature-Rich Memory-Based Classification for Information Extraction 35 In our approach to memory-based shallow parsing, we carve up the syntactic analysis process into a number of classification tasks with input vectors representing a focus item and a dynamically selected surrounding context. These classification tasks can be segmentation tasks (e.g., decide whether a focus word or tag is the start or end of an NP) or disambiguation tasks (e.g., decide whether a chunk is the subject NP, the object NP or neither). Output of some memory-based modules is used as input by other memory-based modules (e.g., a tagger feeds a chunker and the latter feeds a syntactic relation assignment module). Similar ideas about cascading of processing steps have also been explored in other approaches to text analysis: e.g., finite state partial parsing [1,18], statistical decision tree parsing [23], and maximum entropy parsing [30]. The approach briefly described here is explained and evaluated in more detail in [10,11,6] . Chunking The phrase chunking task can be defined as a classification task by generalizing the approach of [28], who proposed to convert NP-chunking to tagging each word with I for a word inside an NP, O for outside an NP, and B for between two NPs). The decision on these so called IOB tags for a word can be made by looking at the Part-of-Speech tag and the identity of the focus word and its local context. For the more general task of chunking other non-recursive phrases, we simply extend the tag set with IOB tags for each type of phrase. 
To illustrate this encoding with the extended IOB tag set, we can tag the sentence: But/CC [NP the/DT dollar/NN NP] [ADVP later/RB ADVP] [VP rebounded/VBD VP] ,/, [VP finishing/VBG VP] [ADJP slightly/RB higher/R ADJP] [Prep against/IN Prep] [NP the/DT yen/NNS NP] [ADJP although/IN ADJP] [ADJP slightly/RB lower/JJR ADJP] [Prep against/IN Prep] [NP the/DT mark/NN NP] ./. as: But/CCO the/DTI\u2212NP dollar/NNI\u2212NP later/RBI\u2212ADV P rebounded/VBDI\u2212V P ,/,O finishing/VBGI\u2212V P slightly/RBI\u2212ADV P higher/RBRI\u2212ADV P against/INI\u2212Prep the/DTI\u2212NP yen/NNSI\u2212NP although/INI\u2212ADJP slightly/RBB\u2212ADJP lower/JJRI\u2212ADJP against/INI\u2212Prep the/DTI\u2212NP mark/NNI\u2212NP ./.O Table 1 (from [6]) shows the accuracy of this memory-based chunking approach when training and testing on Wall Street Journal material. We report Precision, Recall, and F\u03b2=1 scores, a weighted harmonic mean of Recall and Precision (F\u03b2 = (\u03b2+1)\u2217P\u2217R \u03b22\u2217P+R) ). 1 An online demonstration of the Memory-Based Shallow Parser can be found at http://ilk.kub.nl . 36 Zavrel, Daelemans type precision recall F\u03b2=1 NPchunks 92.5 92.2 92.3 VPchunks 91.9 91.7 91.8 ADJPchunks 68.4 65.0 66.7 ADVPchunks 78.0 77.9 77.9 Prepchunks 95.5 96.7 96.1 PPchunks 91.9 92.2 92.0 ADVFUNCs 78.0 69.5 73.5 Table 1. Results of chunking\u2013labeling experiments. Reproduced from [6]. Grammatical Relation Finding After POS tagging, phrase chunking and labeling, the last step of the shallow parsing consists of resolving the (types of) attachment between labeled phrases. This is done by using a classifier to assign a grammatical relation (GR) between pairs of words in a sentence. In our approach, one of these words is always a verb, since this yields the most important GRs. The other word (focus) is the head of the phrase which is annotated with this grammatical relation in the treebank (e.g., a noun as head of an NP). An instance for such a pair of words is constructed by extracting a set of feature values from the sentence. The instance contains information about the verb and the focus: a feature for the word form and a feature for the POS of both. It also has similar features for the local context of the focus. Experiments on the training data suggest an optimal context width of two words to the left and one to the right. In addition to the lexical and the local context information, superficial information about clause structure was included: the distance from the verb to the focus, counted in words. A negative distance means that the focus is to the left of the verb. Other features contain the number of other verbs between the verb and the focus, and the number of intervening commas. These features were chosen by manual \u201cfeature engineering\u201d. Table 2 shows some of the feature-value instances corresponding to the following sentence (POS tags after the slash, chunks denoted with square and curly brackets, and adverbial functions after the dash): [ADVP Not /RB surprisingly /RB ADVP] ,/, [NP Peter /NNP Miller /NNP NP] ,/, [NP who /WP NP] [VP organized /VBD VP] [NP the /DT conference /NN NP] {PP-LOC [Prep in /IN Prep] [NP New /NNP York /NNP NP] PP-LOC} ,/, [VP does /VBZ not /RB want /VB to /TO come /VB VP] {PP-DIR [Prep to /IN Prep] [NP Paris /NNP NP] PP-DIR} [Prep without /IN Prep] [VP bringing /VBG VP] [NP his /PRP$ wife /NN NP]. Table 3 shows the results of the experiments. In the first row, only POS tag features are used. 
Other rows show the results when adding several types of chunk informati"}
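Two small helpers matching the machinery in the record above: the F_β measure as reconstructed there, and recovery of chunk spans from extended IOB tags. The tag-string convention ("I-NP", "B-NP", "O") is the standard one the record describes.

```python
def f_beta(precision: float, recall: float, beta: float = 1.0) -> float:
    """F-measure as used in the chunking tables:
    F_beta = ((beta^2 + 1) * P * R) / (beta^2 * P + R)."""
    return ((beta**2 + 1) * precision * recall) / (beta**2 * precision + recall)

def iob_to_chunks(tags):
    """Recover chunk spans from extended IOB tags like I-NP, B-NP, O."""
    chunks, start, label = [], None, None
    for i, tag in enumerate(tags + ["O"]):        # sentinel flushes last chunk
        kind, _, typ = tag.partition("-")
        if start is not None and (kind in ("O", "B") or typ != label):
            chunks.append((start, i, label))      # half-open span [start, i)
            start = None
        if kind in ("I", "B") and start is None:
            start, label = i, typ
    return chunks

# iob_to_chunks(["O", "I-NP", "I-NP", "B-NP", "I-NP", "O"])
# -> [(1, 3, 'NP'), (3, 5, 'NP')]   # B marks a boundary between two NPs
```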
{"_id": "66c8f00ea79955fa12e5d74676f527f294858b8d", "title": "An \"AI readability\" Formula for French as a Foreign Language", "text": "This paper present a new readability formula for French as a foreign language (FFL), which relies on 46 textual features representative of the lexical, syntactic, and semantic levels as well as some of the specificities of the FFL context. We report comparisons between several techniques for feature selection and various learning algorithms. Our best model, based on support vector machines (SVM), significantly outperforms previous FFL formulas. We also found that semantic features behave poorly in our case, in contrast with some previous readability studies on English as a first language."}
{"_id": "d5c5144d1fbdfd5aac465d5762df1ac509132ca1", "title": "Effect of Training Class Label Noise on Classification Performances for Land Cover Mapping with Satellite Image Time Series", "text": "Supervised classification systems used for land cover mapping require accurate reference databases. These reference data come generally from different sources such as field measurements, thematic maps, or aerial photographs. Due to misregistration, update delay, or land cover complexity, they may contain class label noise, i.e., a wrong label assignment. This study aims at evaluating the impact of mislabeled training data on classification performances for land cover mapping. Particularly, it addresses the random and systematic label noise problem for the classification of high resolution satellite image time series. Experiments are carried out on synthetic and real datasets with two traditional classifiers: Support Vector Machines (SVM) and Random Forests (RF). A synthetic dataset has been designed for this study, simulating vegetation profiles over one year. The real dataset is composed of Landsat-8 and SPOT-4 images acquired during one year in the south of France. The results show that both classifiers are little influenced for low random noise levels up to 25%\u201330%, but their performances drop down for higher noise levels. Different classification configurations are tested by increasing the number of classes, using different input feature vectors, and changing the number of training instances. Algorithm complexities are also analyzed. The RF classifier achieves high robustness to random and systematic label noise for all the tested configurations; whereas the SVM classifier is more sensitive to the kernel choice and to the input feature vectors. Finally, this work reveals that the cross-validation procedure is impacted by the presence of class label noise."}
{"_id": "aa9ebbfa406aca691043ee822f00f358edcf3c3b", "title": "Apple disease classification using color, texture and shape features from images", "text": "The presence of diseases in several kinds of fruits is the major factor of production and the economic degradation of the agricultural industry worldwide. An approach for the apple disease classification using color, texture and shape based features is investigated and experimentally verified in this paper. The primary steps of the introduced image processing based method is as follows; 1) infected fruit part detection is done with the help of K-Means clustering method, 2) color, texture and shape based features are computed over the segmented image and combined to form the single descriptor, and 3) multi-class Support Vector Machine is used to classify the apples into one of the infected or healthy categories. Apple fruit is taken as the test case in this study with 3 categories of diseases, namely blotch, rot and scab as well as healthy apples. The experimentation points out that the introduced method based is better as compared to the individual features. It also point out that shape feature is not better suited for this purpose."}
{"_id": "754ee07789f6ff28fc121bb9f771895e971ac28c", "title": "Beyond Triplet Loss: A Deep Quadruplet Network for Person Re-identification", "text": "Person re-identification (ReID) is an important task in wide area video surveillance which focuses on identifying people across different cameras. Recently, deep learning networks with a triplet loss become a common framework for person ReID. However, the triplet loss pays main attentions on obtaining correct orders on the training set. It still suffers from a weaker generalization capability from the training set to the testing set, thus resulting in inferior performance. In this paper, we design a quadruplet loss, which can lead to the model output with a larger inter-class variation and a smaller intra-class variation compared to the triplet loss. As a result, our model has a better generalization ability and can achieve a higher performance on the testing set. In particular, a quadruplet deep network using a margin-based online hard negative mining is proposed based on the quadruplet loss for the person ReID. In extensive experiments, the proposed network outperforms most of the state-of-the-art algorithms on representative datasets which clearly demonstrates the effectiveness of our proposed method."}
{"_id": "5fbc425f92def016e4fd50bf55617fc5d335ffbb", "title": "Exploring the musical taste of expert listeners: musicology students reveal tendency toward omnivorous taste", "text": "Musicology students are engaged with music on an academic level and usually have an extensive musical background. They have a considerable knowledge of music history and theory and listening to music may be regarded as one of their primary occupations. Taken together, these factors qualify them as \u226bexpert listeners\u226a, who may be expected to exhibit a specific profile of musical taste: interest in a broad range of musical styles combined with a greater appreciation of \u226bsophisticated\u226a styles. The current study examined the musical taste of musicology students as compared to a control student group. Participants (n = 1003) completed an online survey regarding the frequency with which they listened to 22 musical styles. A factor analysis revealed six underlying dimensions of musical taste. A hierarchical cluster analysis then grouped all participants, regardless of their status, according to their similarity on these dimensions. The employed exploratory approach was expected to reveal potential differences between musicology students and controls. A three-cluster solution was obtained. Comparisons of the clusters in terms of musical taste revealed differences in the listening frequency and variety of appreciated music styles: the first cluster (51% musicology students/27% controls) showed the greatest musical engagement across all dimensions although with a tendency toward \u226bsophisticated\u226a musical styles. The second cluster (36% musicology students/46% controls) exhibited an interest in \u226bconventional\u226a music, while the third cluster (13% musicology students/27% controls) showed a strong liking of rock music. The results provide some support for the notion of specific tendencies in the musical taste of musicology students and the contribution of familiarity and knowledge toward musical omnivorousness. Further differences between the clusters in terms of social, personality, and sociodemographic factors are discussed."}
{"_id": "5a7766d333af2d579b97a05694c6877b86e2c0eb", "title": "Markov Logic Networks for Optical Chemical Structure Recognition", "text": "Optical chemical structure recognition is the problem of converting a bitmap image containing a chemical structure formula into a standard structured representation of the molecule. We introduce a novel approach to this problem based on the pipelined integration of pattern recognition techniques with probabilistic knowledge representation and reasoning. Basic entities and relations (such as textual elements, points, lines, etc.) are first extracted by a low-level processing module. A probabilistic reasoning engine based on Markov logic, embodying chemical and graphical knowledge, is subsequently used to refine these pieces of information. An annotated connection table of atoms and bonds is finally assembled and converted into a standard chemical exchange format. We report a successful evaluation on two large image data sets, showing that the method compares favorably with the current state-of-the-art, especially on degraded low-resolution images. The system is available as a web server at http://mlocsr.dinfo.unifi.it."}
{"_id": "61f57b5b49ccf6f48f5443ba226a8469d07b9e90", "title": "An LTE Base-Station Magnetoelectric Dipole Antenna with Anti-Interference Characteristics and Its MIMO System Application", "text": "In this research, a wideband long-term evolution (LTE) base-station antenna with flexible band-notch characteristics is introduced by two window-type slots etched on radiation patches of the magnetoelectric dipole. The presented antenna can provide a band-notch range with an impedance bandwidth of 2.39-2.52 GHz in the band of 1.825-2.925 GHz, which well applies the wideband antenna for LTE with the good anti-interference from wireless local area network narrow communication band. The typical parameters are investigated to show that the antenna with anti-interference characteristics exhibits large scope band-notch adjustability in the whole wideband region. Based on above element, an LTE base station antenna multiple-input, multiple-output system with anti-interference characteristics is investigated and measured to show its good performances: anti-interference, high isolation, high gain, unidirectional radiation, low back radiation, and low cross-polarization."}
{"_id": "6acdea8c776319b448bf400a432cb1a80db0e675", "title": "An AUC-based permutation variable importance measure for random forests", "text": "The random forest (RF) method is a commonly used tool for classification with high dimensional data as well as for ranking candidate predictors based on the so-called random forest variable importance measures (VIMs). However the classification performance of RF is known to be suboptimal in case of strongly unbalanced data, i.e. data where response class sizes differ considerably. Suggestions were made to obtain better classification performance based either on sampling procedures or on cost sensitivity analyses. However to our knowledge the performance of the VIMs has not yet been examined in the case of unbalanced response classes. In this paper we explore the performance of the permutation VIM for unbalanced data settings and introduce an alternative permutation VIM based on the area under the curve (AUC) that is expected to be more robust towards class imbalance. We investigated the performance of the standard permutation VIM and of our novel AUC-based permutation VIM for different class imbalance levels using simulated data and real data. The results suggest that the new AUC-based permutation VIM outperforms the standard permutation VIM for unbalanced data settings while both permutation VIMs have equal performance for balanced data settings. The standard permutation VIM loses its ability to discriminate between associated predictors and predictors not associated with the response for increasing class imbalance. It is outperformed by our new AUC-based permutation VIM for unbalanced data settings, while the performance of both VIMs is very similar in the case of balanced classes. The new AUC-based VIM is implemented in the R package party for the unbiased RF variant based on conditional inference trees. The codes implementing our study are available from the companion website: http://www.ibe.med.uni-muenchen.de/organisation/mitarbeiter/070_drittmittel/janitza/index.html"}
{"_id": "b799e4a405e5f79194a4609561dc45f805f081f5", "title": "SiCloPe: Silhouette-Based Clothed People", "text": "We introduce a new silhouette-based representation for modeling clothed human bodies using deep generative models. Our method can reconstruct a complete and textured 3D model of a person wearing clothes from a single input picture. Inspired by the visual hull algorithm, our implicit representation uses 2D silhouettes and 3D joints of a body pose to describe the immense shape complexity and variations of clothed people. Given a segmented 2D silhouette of a person and its inferred 3D joints from the input picture, we first synthesize consistent silhouettes from novel view points around the subject. The synthesized silhouettes, which are the most consistent with the input segmentation are fed into a deep visual hull algorithm for robust 3D shape prediction. We then infer the texture of the subject\u2019s back view using the frontal image and segmentation mask as input to a conditional generative adversarial network. Our experiments demonstrate that our silhouette-based model is an effective representation and the appearance of the back view can be predicted reliably using an image-to-image translation network. While classic methods based on parametric models often fail for single-view images of subjects with challenging clothing, our approach can still produce successful results, which are comparable to those obtained from multi-view input."}
{"_id": "85860d38c66a5cf2e6ffd6475a3a2ba096ea2920", "title": "Celeb-500K: A Large Training Dataset for Face Recognition", "text": "In this paper, we propose a large training dataset named Celeb-500K for face recognition, which contains 50M images from 500K persons. To better facilitate academic research, we clean Celeb-500K to obtain Celeb-500K-2R, which contains 25M aligned face images from 365K persons. Based on the developed dataset, we achieve state-of-the-art face recognition performance and reveal two important observations on face recognition study. First, metric learning methods have limited performance gain when the training dataset contains a large number of identities. Second, in order to develop an efficient training dataset, the number of identities is more important than the average image number of each identity from the perspective of face recognition performance. Extensive experimental results show the superiority of Celeb-500K and provide a strong support to the two observations."}
{"_id": "4e85e17a9c74cd0dbf66c6d673eaa9161e280b18", "title": "Random Projection and Hashing", "text": "Random Projection is another class of methods used for low-rank matrix approximation. A random projection algorithm projects datapoints from a high-dimensional space R n onto a lower-dimensional subspace R r (r \u2327 n) using a random matrix S 2 R r\u21e5n. The key idea of random mapping comes from the Johnson-Lindenstrauss lemma[7] (we will explain later in detail) that says \" if points in a vector space are projected onto a randomly selected subspace of suitably high dimension, then the distances between points are approximately preserved \". Random projection methods are computationally efficient and sufficiently accurate in practice for dimen-sionality reduction of high-dimensional datasets. Moreover, since complexity of many geometric algorithms depends significantly on the dimension, applying random projection as pre-processing is a common task in many data mining applications. As opposed to column sampling methods, that need to access data for approximating the low-rank subspace, random projections are data oblivious, as their computation involves only a random matrix S. We start with the basic concepts and definitions required to understand random projections techniques. Definition 1 (Column Space). Consider a matrix A 2 R n\u21e5d (n > d). Notice that as one ranges over all vectors x 2 R d , Ax ranges over all linear combinations of the columns of A and therefore defines a d-dimensional subspace of R n , which we refer to as the column space of A and denote it by C(A). Definition 2 (` 2-Subspace Embedding). A matrix S 2 R r\u21e5n provides a subspace embedding for C(A) if kSAxk 2 2 = (1 \u00b1 \")kAxk 2 2 , 8x 2 R d. Such matrix S provides a low distortion embedding, and is called a (1 \u00b1 \") ` 2-subspace embedding. Using an`2-subspace embedding, one can work with SA 2 R r\u21e5d instead of A 2 R n\u21e5d. Typically r \u2327 n, so we are working with a smaller matrix that reduces the time/space complexity of many algorithms. However note that definitely r needs to be larger than d if we are talking about the whole subspace R d [11]. Note that subspace embedding does not depend on a particular basis for C(A), that means if we have a matrix U that is an orthonormal basis for C(A), then Ux gives the same subspace as Ax. Therefore if S is an embedding for A, it will be an embedding for U too. Let's consider \u2026"}
{"_id": "a7f10c157d7f03d1a3c90c25257621449b227f3a", "title": "Exploring the relationship between technology acceptance model and usability test", "text": "In the past, most studies used the technology acceptance model (TAM) to survey the subjective perception of users in using information technology. The usability test was also used to assess the ease of use of user interfaces. This study introduces a conceptual framework to explore the relationship between user\u2019s beliefs of TAM and usability testing attributes. Usability testing was conducted on an eCampus learning system with a mobile device. TAM data was collected from the participants for analyzing a possible relationship. The findings of this study reveal that TAM results contradict the usability test results in certain areas. The focus of our proposed research model is supported from the causality between perceived ease of use and usability; however, the correlation between perceived usefulness and usability remains unclear."}
{"_id": "fb4a035cb3eec66f08bcb8e98f0dd82624d5e9b1", "title": "Non-invasive optical detection of hand gestures", "text": "In this paper we present a novel type of sensing technology for hand and finger gesture recognition that utilizes light in the invisible spectrum to detect changes in position and form of body tissue like tendons and muscles. The proposed system can be easily integrated with existing wearable devices. Our approach not only enables gesture recognition but it could potentially double to perform a variety of health related monitoring tasks (e.g. heart rate, stress)."}
{"_id": "1c65cedb79dc2faa501a6be72ed5d5d77ff903d5", "title": "Micro bubble manipulation towards single cell handling tool", "text": "We report a series of micro air bubble manipulations (transporting, merging and eliminating of air bubbles) in two-dimensional microchannels filled with a water solution. Air bubbles (~500 nano liters in volume) are driven by electrowetting-on-dielectric (EWOD) principle. By sequentially energizing an array of electrodes covered with dielectric layers, air bubbles can be transported along a programmed path, merged into a single larger one, and eliminated out of a bulk water solution. This bubble manipulation technique can be applied to efficiently handle micro objects. As a proof of concept, we successfully demonstrate in a millimeter scale that a fish egg and a sesame husk are attached to an air bubble and then driven by the bubble in a programmed way. The moving air bubble can push and pull the attached objects of which surface is hydrophilic or hydrophobic. This handling technique will bring a novel, versatile, cost-effective, high throughput tool, enabling us to efficiently transport, trap, and isolate individual micro bio-entities such as cells, functional particles and possibly molecules"}
{"_id": "028d23945215e1ed7db2b3dd9a56ce7788db2c93", "title": "Learning and Collective Knowledge Construction With Social Media: A Process-Oriented Perspective", "text": "Social media are increasingly being used for educational purposes. The first part of this article briefly reviews literature that reports on educational applications of social media tools. The second part discusses theories that may provide a basis for analyzing the processes that are relevant for individual learning and collective knowledge construction. We argue that a systems-theoretical constructivist approach is appropriate to examine the processes of educational social media use, namely, self-organization, the internalization of information, the externalization of knowledge, and the interplay of externalization and internalization providing the basis of a co-evolution of cognitive and social systems. In the third part we present research findings that illustrate and support this systems-theoretical framework. Concluding, we discuss the implications for educational design and for future research on learning and collective knowledge construction with social media."}
{"_id": "7565225aba355a6a7cdc9a26d51505b6082ed061", "title": "Low Drop-Out Voltage Regulators: Capacitor-less Architecture Comparison", "text": "Demand for system-on-chip solutions has increased the interest in low drop-out (LDO) voltage regulators which do not require a bulky off-chip capacitor to achieve stability, also called capacitor-less LDO (CL-LDO) regulators. Several architectures have been proposed; however comparing these reported architectures proves difficult, as each has a distinct process technology and specifications. This paper compares CL-LDOs in a unified matter. We designed, fabricated, and tested five illustrative CL-LDO regulator topologies under common design conditions using 0.6?m CMOS technology. We compare the architectures in terms of (1) line/load regulation, (2) power supply rejection, (3) line/load transient, (4) total on-chip compensation capacitance, (5) noise, and (6) quiescent power consumption. Insights on what optimal topology to choose to meet particular LDO specifications are provided."}
{"_id": "01dfe1868e8abc090b1485482929f65743e23743", "title": "Towards Cognitive Exploration through Deep Reinforcement Learning for Mobile Robots", "text": "Exploration in an unknown environment is the core functionality for mobile robots. Learning-based exploration methods, including convolutional neural networks, provide excellent strategies without human-designed logic for the feature extraction [1]. But the conventional supervised learning algorithms cost lots of efforts on the labeling work of datasets inevitably. Scenes not included in the training set are mostly unrecognized either. We propose a deep reinforcement learning method for the exploration of mobile robots in an indoor environment with the depth information from an RGB-D sensor only. Based on the Deep Q-Network framework [2], the raw depth image is taken as the only input to estimate the Q values corresponding to all moving commands. The training of the network weights is end-to-end. In arbitrarily constructed simulation environments, we show that the robot can be quickly adapted to unfamiliar scenes without any man-made labeling. Besides, through analysis of receptive fields of feature representations, deep reinforcement learning motivates the convolutional networks to estimate the traversability of the scenes. The test results are compared with the exploration strategies separately based on deep learning [1] or reinforcement learning [3]. Even trained only in the simulated environment, experimental results in real-world environment demonstrate that the cognitive ability of robot controller is dramatically improved compared with the supervised method. We believe it is the first time that raw sensor information is used to build cognitive exploration strategy for mobile robots through end-to-end deep reinforcement learning."}
{"_id": "d59371687d9b783ff29ebc8eb9edc9e26b5141c4", "title": "Intrusion-Tolerant Autonomous Driving", "text": "Fully autonomous driving is one if not the killer application for the upcoming decade of real-time systems. However, in the presence of increasingly sophisticated attacks by highly skilled and well equipped adversarial teams, autonomous driving must not only guarantee timeliness and hence safety. It must also consider the dependability of the software concerning these properties while the system is facing attacks. For distributed systems, fault-and-intrusion tolerance toolboxes already offer a few solutions to tolerate partial compromise of the system behind a majority of healthy components operating in consensus. In this paper, we present a concept of an intrusion-tolerant architecture for autonomous driving. In such a scenario, predictability and recovery challenges arise from the inclusion of increasingly more complex software on increasingly less predictable hardware. We highlight how an intrusion tolerant design can help solve these issues by allowing timeliness to emerge from a majority of complex components being fast enough, often enough while preserving safety under attack through pre-computed fail safes."}
{"_id": "739d42f74e074b22e06311ee6911cd634ca98768", "title": "Modern FMCW radar - techniques and applications", "text": "The range of practical applications for FMCW radars has increased significantly in the last decade or so. This paper renews some of the recent applications and also some of the techniques which are now being applied to improve the versatility and performance of FMCW radars. Significant system developments include the use of moving-target detection techniques, the development of the automotive radar market, and the building of prototypes of active phased arrays using this modulation. Significant technical developments include improved ways of generating the sweep signal, using direct digital synthesis, or quieter oscillators and an improved understanding of the significance of the term 'Low Probability of Intercept.'."}
{"_id": "b0417e375954ee1503cadafee67fcf0873ff8dbf", "title": "A Literature Review on the Design of Spherical Rolling Robots", "text": "There are many advantages to the use of spherical robot designs. The current status of the design of spherical rolling robots is reviewed."}
{"_id": "12441a74e709ddab53f9039cf507491df7b3840a", "title": "SCA-CNN: Spatial and Channel-Wise Attention in Convolutional Networks for Image Captioning", "text": "Visual attention has been successfully applied in structural prediction tasks such as visual captioning and question answering. Existing visual attention models are generally spatial, i.e., the attention is modeled as spatial probabilities that re-weight the last conv-layer feature map of a CNN encoding an input image. However, we argue that such spatial attention does not necessarily conform to the attention mechanism — a dynamic feature extractor that combines contextual fixations over time, as CNN features are naturally spatial, channel-wise and multi-layer. In this paper, we introduce a novel convolutional neural network dubbed SCA-CNN that incorporates Spatial and Channel-wise Attentions in a CNN. In the task of image captioning, SCA-CNN dynamically modulates the sentence generation context in multi-layer feature maps, encoding where (i.e., attentive spatial locations at multiple layers) and what (i.e., attentive channels) the visual attention is. We evaluate the proposed SCA-CNN architecture on three benchmark image captioning datasets: Flickr8K, Flickr30K, and MSCOCO. It is consistently observed that SCA-CNN significantly outperforms state-of-the-art visual attention-based image captioning methods."}
{"_id": "3d275a4e4f44d452f21e0e0ff6145a5e18e6cf87", "title": "CIDEr: Consensus-based image description evaluation", "text": "Automatically describing an image with a sentence is a long-standing challenge in computer vision and natural language processing. Due to recent progress in object detection, attribute classification, action recognition, etc., there is renewed interest in this area. However, evaluating the quality of descriptions has proven to be challenging. We propose a novel paradigm for evaluating image descriptions that uses human consensus. This paradigm consists of three main parts: a new triplet-based method of collecting human annotations to measure consensus, a new automated metric that captures consensus, and two new datasets: PASCAL-50S and ABSTRACT-50S that contain 50 sentences describing each image. Our simple metric captures human judgment of consensus better than existing metrics across sentences generated by various sources. We also evaluate five state-of-the-art image description approaches using this new protocol and provide a benchmark for future comparisons. A version of CIDEr named CIDEr-D is available as a part of MS COCO evaluation server to enable systematic evaluation and benchmarking."}
{"_id": "7985ac55e170273dd0ffa6bd756e588bab301d57", "title": "Mind's eye: A recurrent visual representation for image caption generation", "text": "In this paper we explore the bi-directional mapping between images and their sentence-based descriptions. Critical to our approach is a recurrent neural network that attempts to dynamically build a visual representation of the scene as a caption is being generated or read. The representation automatically learns to remember long-term visual concepts. Our model is capable of both generating novel captions given an image, and reconstructing visual features given an image description. We evaluate our approach on several tasks. These include sentence generation, sentence retrieval and image retrieval. State-of-the-art results are shown for the task of generating novel image descriptions. When compared to human generated captions, our automatically generated captions are equal to or preferred by humans 21.0% of the time. Results are better than or comparable to state-of-the-art results on the image and sentence retrieval tasks for methods using similar visual features."}
{"_id": "da65715e5209ec60974e185d3eac5dc88210455c", "title": "Enhanced Simulated Annealing for Globally Minimizing Functions of Many-Continuous Variables", "text": "A new global optimization algorithm for functions of many continuous variables is presented, derived from the basic Simulated annealing method. Our main contribution lies in dealing with high-dimensionality minimization problems, which are often difficult to solve by all known minimization methods with or without gradient. In this article we take a special interest in the variables discretization issue. We also develop and implement several complementary stopping criteria. The original Metropolis iterative random search, which takes place in a Euclidean space Rn, is replaced by another similar exploration, performed within a succession of Euclidean spaces Rp, with p < 3m 1. For signed (authenticated) messages, it is proven that consensus is possible if the loyal generals are connected to each other. However, this solution requires additional rounds of information exchange Since its initial presentation, nearly two decades ago, the Byzantine Generals problem has been the subject of intense academic study, leading to the development and formal validation of numerous Byzantine-tolerant algorithms and architectures. As stated previously, industry\u2019s recognition and treatment of the problem has been far less formal and rigorous. A reason for this might be the anthropomorphic tone and presentation of the problem definition. Although the authors warned against adopting too literal an interpretation, much of the related literature that has followed the original text has reinforced the \u201ctraitorous\u201d anthropomorphic failure model. Such treatment has resulted in the work being ignored by a large segment of the commu"}
{"_id": "b9fb4dcae102b449756e8f8bee43ae9c31be6891", "title": "Real-time underwater 3D scene reconstruction using commercial depth sensor", "text": "This paper presents preliminary work to utilize a commercial time of flight depth camera for real-time 3D scene reconstruction of underwater objects. Typical RGB stereo camera imaging for 3D capturing suffers from blur and haziness due to turbidity of water in addition to critical dependence on light, either from natural or artificial sources. We propose a method for repurposing the low-cost Microsoft Kinect\u2122 Time of Flight camera for underwater environment enabling dense depth data acquisition that can be processed in real time. Our motivation is the ease of use and low cost of the device for high quality real-time scene reconstruction as compared to multi view stereo cameras, albeit at a smaller range. Preliminary results of depth data acquisition and surface reconstruction in underwater environment are also presented. The novelty of our work is the utilization of the Kinect depth camera for real-time 3D mesh reconstruction and our main objective is to develop an economical and compact solution for underwater 3D mapping."}
{"_id": "bdbf25f3dcf63d38cdb527a9ffca269fa0b8046b", "title": "1 Keyword extraction : a review of methods and approaches", "text": "Paper presents a survey of methods and approaches for keyword extraction task. In addition to the systematization of methods, the paper gathers a comprehensive review of existing research. Related work on keyword extraction is elaborated for supervised and unsupervised methods, with special emphasis on graphbased methods as well as Croatian keyword extraction. Selectivity-based keyword extraction method is proposed as a new unsupervised graph-based keyword extraction method which extracts nodes from a complex network as keyword candidates. The paper provides guidelines for future research and development of new graph-based approaches for keyword extraction."}
{"_id": "b3d340dbc07255ab22162fb9de741659673dd0a3", "title": "Sentiment classification for Turkish Twitter feeds using LDA", "text": "Today, people are able to express their opinions and ideas on any subject using personal blogs or a variety of social media environments. Contents shared by users with each passing day increases the data size on the web but this situation is a complication for meaningful information extraction. Therefore, sentiment analysis studies aimed at automatically obtaining meaningful information from social media content showed a significant increase in recent years. In these studies, automatically analyzing the content of messages and identifying sentiment such as positive, neutral, negative etc. are intended. In this study, a system has been proposed for analyzing sentiment of Turkish Twitter feeds. In addition, sentiment classification which is a leading problem for sentiment analysis studies has been performed using topic modeling and the effect of modeling on results are examined. The topic model has been created using LDA (Latent Dirichlet Allocation) algorithm. Experimental results obtained with different feature extraction model and classifiers. The results show that topic modeling is more successful compared to the methods used in the previous studies. Also, sentiment classification based on topic model knowledge has been found to increase sentiment analysis success by %26 compared to our previous work."}
{"_id": "401648a821f1ddf09868974f37dc3ff46ef252e4", "title": "A conceptual framework for serious games and its validation", "text": "FACULTY OF ENGINEERING, SCIENCES AND MATHEMATICS SCHOOL OF ELECTRONICS AND COMPUTER SCIENCE Doctor of Philosophy A CONCEPTUAL FRAMEWORK FOR SERIOUS GAMES AND ITS VALIDATION"}
{"_id": "bd8cb71859468661759df1e3875cb7c295a6bcc1", "title": "Cytokine removal with high cut-off membrane: review of literature.", "text": "During the last decade, blood purification therapies have been proposed as an effective therapy to control the cytokines dysregulation in systemic inflammatory syndromes. Among them, the treatment with high cut-off membranes is characterized by larger pore size and more effective clearance for middle molecular weight molecules (cytokines). In this paper, we performed a thoughtful review of the literature on HCO being used for blood purification indications in all systemic inflammation syndromes. Clinical and experimental studies show that the use of high effluent flows in a pure diffusive treatment effectively removes serum cytokines with a safe profile in albumin clearance. In clinical studies, the removal of these inflammatory mediators is associated with a significant improvement in hemodynamic condition, oxygenation indices, and organ dysfunction."}
{"_id": "3e9d005447307990b63854933812749e2ebdef1d", "title": "Role of cortical cell type and morphology in subthreshold and suprathreshold uniform electric field stimulation in vitro", "text": "BACKGROUND\nThe neocortex is the most common target of subdural electrotherapy and noninvasive brain stimulation modalities, including transcranial magnetic stimulation (TMS) and transcranial current simulation (TCS). Specific neuronal elements targeted by cortical stimulation are considered to underlie therapeutic effects, but the exact cell type(s) affected by these methods remains poorly understood.\n\n\nOBJECTIVE\nWe determined whether neuronal morphology or cell type predicted responses to subthreshold and suprathreshold uniform electric fields.\n\n\nMETHODS\nWe characterized the effects of subthreshold and suprathreshold electrical stimulation on identified cortical neurons in vitro. Uniform electric fields were applied to rat motor cortex brain slices, while recording from interneurons and pyramidal cells across cortical layers, using a whole-cell patch clamp. Neuron morphology was reconstructed after intracellular dialysis of biocytin. Based solely on volume-weighted morphology, we developed a parsimonious model of neuronal soma polarization by subthreshold electric fields.\n\n\nRESULTS\nWe found that neuronal morphology correlated with somatic subthreshold polarization. Based on neuronal morphology, we predict layer V pyramidal neuronal soma to be individually the most sensitive to polarization by optimally oriented subthreshold fields. Suprathreshold electric field action potential threshold was shown to reflect both direct cell polarization and synaptic (network) activation. Layer V/VI neuron absolute electric field action potential thresholds were lower than layer II/III pyramidal neurons and interneurons. Compared with somatic current injection, electric fields promoted burst firing and modulated action potential firing times.\n\n\nCONCLUSIONS\nWe present experimental data indicating that cortical neuron morphology relative to electric fields and cortical cell type are factors in determining sensitivity to sub- and supra-threshold brain stimulation."}
{"_id": "ceff65e02b2b9b6b181cfc956350351b8e284a01", "title": "Predicting Price Changes in Ethereum", "text": "The market capitalization of publicly traded cryptocurrencies is currently above $230 billion (Bovaird, 2017). Bitcoin, the most valuable cryptocurrency, serves primarily as a digital store of value (Van Alstyne, 2014), and its price predictability has been well-studied (Hegazy and Mumford, 2016). However, Ethereum has the second-highest market capitalization and supports much more functionality than Bitcoin. While its price predictability is sparsely covered in published literature, the technology\u2019s additional functionality may cause Ether\u2019s price predictability to differ significantly from that of Bitcoin. These characteristics are outlined in the following subsection; the underlying details of Bitcoin (Nakamoto, 2008) and Ethereum (Buterin, 2013) are elided, as they are described in depth in the cited papers."}
{"_id": "b3adf2cf03d9918cb1ff2131caf74a83fc8ec303", "title": "Binary malware image classification using machine learning with local binary pattern", "text": "Malware classification is a critical part in the cyber-security. Traditional methodologies for the malware classification typically use static analysis and dynamic analysis to identify malware. In this paper, a malware classification methodology based on its binary image and extracting local binary pattern (LBP) features is proposed. First, malware images are reorganized into 3 by 3 grids which is mainly used to extract LBP feature. Second, the LBP is implemented on the malware images to extract features in that it is useful in pattern or texture classification. Finally, Tensorflow, a library for machine learning, is applied to classify malware images with the LBP feature. Performance comparison results among different classifiers with different image descriptors such as GIST, a spatial envelop, and the LBP demonstrate that our proposed approach outperforms others."}
{"_id": "a37c1df39575fd59d8b3b4697da2de486c71ab3e", "title": "The return of AdaBoost.MH: multi-class Hamming trees", "text": "Within the framework of ADABOOST.MH, we propose to train vector-valued decision trees to optimize the multi-class edge without reducing the multi-class problem toK binary one-againstall classifications. The key element of the method is a vector-valued decision stump, factorized into an input-independent vector of length K and label-independent scalar classifier. At inner tree nodes, the label-dependent vector is discarded and the binary classifier can be used for partitioning the input space into two regions. The algorithm retains the conceptual elegance, power, and computational efficiency of binary ADABOOST. In experiments it is on par with support vector machines and with the best existing multi-class boosting algorithm AOSOLOGITBOOST, and it is significantly better than other known implementations of ADABOOST.MH."}
{"_id": "466325e150a67639e338a61786a9f63a7b9ee33a", "title": "Formalizing agility, part 2: how an agile organization embraced the CMMI", "text": "Most large IT organizations need the best of both worlds - working software that supports ever-changing business needs, and a process for delivering software that is predictable, trainable, and auditable. Organizations with agile software teams often receive timely, cost-effective solutions of sufficient quality. Organizations with formal processes often benefit from industry-recognized certifications and robust process improvement mechanisms. Rarely does a single, large IT organization achieve both. DTE Energy has such a combination: its lightweight yet CMMI-compatible methodology is now used within its 600+ person IT organization to deliver and support working software. Its small teams embrace core agile principles as they provide \"just enough\" solutions that satisfy maturing business needs. Yet they passed two SCAMPI appraisals toward formal CMMI Level III accreditation, scheduled for mid-2006. This report extends the Agile 2005 experience report on DTE Energy's agile IT organization's journey toward CMMI staged maturity accreditation. This report briefly recaps their seven years of agile experience, presents governance mechanisms and change management techniques, and highlights their four-release, one-year plan for CMMI Level III accreditation. Finally, this report offers suggestions on embracing a formal process framework that are applicable to any organization"}
{"_id": "9c01c5fee21747734c04bc4ce184e29d398b6002", "title": "Road Lane Detection by Discriminating Dashed and Solid Road Lanes Using a Visible Light Camera Sensor", "text": "With the increasing need for road lane detection used in lane departure warning systems and autonomous vehicles, many studies have been conducted to turn road lane detection into a virtual assistant to improve driving safety and reduce car accidents. Most of the previous research approaches detect the central line of a road lane and not the accurate left and right boundaries of the lane. In addition, they do not discriminate between dashed and solid lanes when detecting the road lanes. However, this discrimination is necessary for the safety of autonomous vehicles and the safety of vehicles driven by human drivers. To overcome these problems, we propose a method for road lane detection that distinguishes between dashed and solid lanes. Experimental results with the Caltech open database showed that our method outperforms conventional methods."}
{"_id": "0ae3182836b1b962902d664ddd524e8554b742cf", "title": "Integrating Context and Occlusion for Car Detection by Hierarchical And-Or Model", "text": "The ABSTRACT is to be in fully-justified italicized text, at the top of the left-hand column, below the author and affiliation information. Use the word \u201cAbstract\u201d as the title, in 12-point Times, boldface type, centered relative to the column, initially capitalized. The abstract is to be in 10point, single-spaced type. Leave two blank lines after the Abstract, then begin the main text. Look at previous CVPR abstracts to get a feel for style and length."}
{"_id": "2d6f14a97d6ae10bb8156e6be9b9fe3632a9e7b6", "title": "Signal Characteristics on Sensor Data Compression in IoT -An Investigation", "text": "In Internet of Things (IoT), numerous and diverse types of sensors generate a plethora of data that needs to be stored and processed with minimum loss of information. This demands efficient compression mechanisms where loss of information is minimized. Hence data generated by diverse sensors with different signal features require optimum balance between compression gain and information loss. This paper presents a unique analysis of contemporary lossy compression algorithms applied on real field sensor data with different sensor dynamics. The aim of the work is to classify the compression algorithms based on the signal characteristics of sensor data and to map them to different sensor data types to ensure efficient compression. The present work is the stepping stone for a future recommender system to choose the preferred compression techniques for the given type of sensor data."}
{"_id": "395748fc368ea860bdcfc5d681508408bd7d7e76", "title": "Resolving constraint conflicts", "text": "In this paper, we define constraint conflicts and examine properties that may aid in guiding their resolution. A constraint conflict is an inconsistency between the access control policy and the constraints specified to limit that policy. For example, a policy that permits a high integrity subject to access low integrity data is in conflict with a Biba integrity constraint. Constraint conflicts differ from typical policy conflicts in that constraints are never supposed to be violated. That is, a conflict with a constraint results in a policy compilation error, whereas policy conflicts are resolved at runtime. As we have found in the past, when constraint conflicts occur in a specification a variety of resolutions are both possible and practical. In this paper, we detail some key formal properties of constraint conflicts and show how these are useful in guiding conflict resolution. We use the SELinux example policy for Linux 2.4.19 as the source of our constraint conflicts and resolution examples. The formal properties are used to guide the selection of resolutions and provide a basis for a resolution language that we apply to resolve conflicts in the SELinux example policy."}
{"_id": "66b53fdbc752401f1a55986c0ea68acd463e5b5f", "title": "Hyperparameter Optimization for Tracking with Continuous Deep Q-Learning", "text": "Hyperparameters are numerical presets whose values are assigned prior to the commencement of the learning process. Selecting appropriate hyperparameters is critical for the accuracy of tracking algorithms, yet it is difficult to determine their optimal values, in particular, adaptive ones for each specific video sequence. Most hyperparameter optimization algorithms depend on searching a generic range and they are imposed blindly on all sequences. Here, we propose a novel hyperparameter optimization method that can find optimal hyperparameters for a given sequence using an action-prediction network leveraged on Continuous Deep Q-Learning. Since the common state-spaces for object tracking tasks are significantly more complex than the ones in traditional control problems, existing Continuous Deep Q-Learning algorithms cannot be directly applied. To overcome this challenge, we introduce an efficient heuristic to accelerate the convergence behavior. We evaluate our method on several tracking benchmarks and demonstrate its superior performance1."}
{"_id": "a75eef1178c4f260d5e1f08dce15dcc217015463", "title": "A Handwritten Character Recognition System Using Directional Element Feature and Asymmetric Mahalanobis Distance", "text": "This paper presents a precise system for handwritten Chinese and Japanese character recognition. Before extracting directional element feature (DEF) from each character image, transformation based on partial inclination detection (TPID) is used to reduce undesired effects of degraded images. In the recognition process, city block distance with deviation (CBDD) and asymmetric Mahalanobis distance (AMD) are proposed for rough classification and fine classification. With this recognition system, the experimental result of the database ETL9B reaches to 99.42%."}
{"_id": "335acf2e83b0a3b27386af390c55d9df03ce3e88", "title": "Co-operative downloading in vehicular ad-hoc wireless networks", "text": "Increasing need for people to be \"connected\"; while at the same time remain as mobile as ever poses several interesting issues in wireless networks. It is conceivable in the near-future that wireless \"hotspots\" experience flash crowds-like traffic arrival pattern. A common phenomena in the Internet today characterized by sudden and unpredicted increase in popularity of on-line content. In this paper, we propose SPAWN, a cooperative strategy for content delivery and sharing in future vehicular networks. We study the issues involved in using such a strategy from the standpoint of vehicular ad-hoc networks. In particular, we show that not only content server but also wireless access network load reduction is critical. We propose a \"communication efficient\" swarming protocol which uses a gossip mechanism that leverages the inherent broadcast nature of the wireless medium, and a piece-selection strategy that takes proximity into account in decisions to exchange pieces. We show through simulation that gossip incorporates location-awareness into peer selection, while incurring low messaging overhead, and consequently enhancing the swarming protocol performance. We develop an analytical model to characterize the performance of SPAWN."}
{"_id": "64cd9b00f88152112c7b1172e24e136afc4e24b7", "title": "Internet Traffic Measurement", "text": "This tutorial article discusses the role of network traffic measurement in the design, testing, and evaluation of Internet protocols and applications. The article begins with some background information on Internet traffic measurement, and then proceeds to discuss the \u201ctools of the trade\u201d, including examples of both hardwarebased and software-based approaches to network traffic measurement. The article concludes with a summary of the main observations from the past fifteen years of network measurement research, along with pointers to the relevant literature for more information."}
{"_id": "6fa15ca716b7e91726303818ca7bfbfaa70751b3", "title": "Runtime Verification of Ethereum Smart Contracts", "text": "The notion of smart contracts in distributed ledger systems have been hailed as a safe way of enforcing contracts between participating parties. However, unlike legal contracts, which talk about ideal behaviour and consequences of not adhering to such behaviour, smart contracts are by their very nature executable code, giving explicit instructions on how to achieve compliance. Executable specification languages, particularly Turing complete ones, are notoriously known for the difficulty of ensuring correctness, and recent incidents which led to huge financial losses due to bugs in smart contracts, have highlighted this issue. In this paper we show how standard techniques from runtime verification can be used in the domain of smart contracts, including a novel stake-based instrumentation technique which ensures that the violating party provides insurance for correct behaviour. The techniques we describe have been partially implemented in a proof-of-concept tool ContractLarva, which we discuss in this paper."}
{"_id": "20a773041aa5667fbcf5378ac87cad2edbfd28b7", "title": "DBpedia - A crystallization point for the Web of Data", "text": "The DBpedia project is a community effort to extract structured information from Wikipedia and to make this information accessible on the Web. The resulting DBpedia knowledge base currently describes over 2.6 million entities. For each of these entities, DBpedia defines a globally unique identifier that can be dereferenced over the Web into a rich RDF description of the entity, including human-readable definitions in 30 languages, relationships to other resources, classifications in four concept hierarchies, various facts as well as data-level links to other Web data sources describing the entity. Over the last year, an increasing number of data publishers have begun to set data-level links to DBpedia resources, making DBpedia a central interlinking hub for the emerging Web of data. Currently, the Web of interlinked data sources around DBpedia provides approximately 4.7 billion pieces of information and covers domains such as geographic information, people, companies, films, music, genes, drugs, books, and scientific publications. This article describes the extraction of the DBpedia knowledge base, the current status of interlinking DBpedia with other data sources on the Web, and gives an overview of applications that facilitate the Web of Data around DBpedia."}
{"_id": "744eacc689e1be16de6ca1f386ea3088abacad49", "title": "LUBM: A benchmark for OWL knowledge base systems", "text": "We describe our method for benchmarking Semantic Web knowledge base systems with respect to use in large OWL applications. We present the Lehigh University Benchmark (LUBM) as an example of how to design such benchmarks. The LUBM features an on-tology for the university domain, synthetic OWL data scalable to an arbitrary size, fourteen extensional queries representing a variety of properties, and several performance metrics. The LUBM can be used to evaluate systems with different reasoning capabilities and storage mechanisms. We demonstrate this with an evaluation of two memory-based systems and two systems with persistent storage."}
{"_id": "0c236e611a90018e84d9de23d1cff241354079be", "title": "Automatically refining the wikipedia infobox ontology", "text": "The combined efforts of human volunteers have recently extracted numerous facts from Wikipedia, storing them as machine-harvestable object-attribute-value triples in Wikipedia infoboxes. Machine learning systems, such as Kylin, use these infoboxes as training data, accurately extracting even more semantic knowledge from natural language text. But in order to realize the full power of this information, it must be situated in a cleanly-structured ontology. This paper introduces KOG, an autonomous system for refining Wikipedia's infobox-class ontology towards this end. We cast the problem of ontology refinement as a machine learning problem and solve it using both SVMs and a more powerful joint-inference approach expressed in Markov Logic Networks. We present experiments demonstrating the superiority of the joint-inference approach and evaluating other aspects of our system. Using these techniques, we build a rich ontology, integrating Wikipedia's infobox-class schemata with WordNet. We demonstrate how the resulting ontology may be used to enhance Wikipedia with improved query processing and other features."}
{"_id": "1fc360befecd3ca2fa9bddd799e4c16211299fa3", "title": "Exploiting Wikipedia as External Knowledge for Named Entity Recognition", "text": "We explore the use of Wikipedia as external knowledge to improve named entity recognition (NER). Our method retrieves the corresponding Wikipedia entry for each candidate word sequence and extracts a category label from the first sentence of the entry, which can be thought of as a definition part. These category labels are used as features in a CRF-based NE tagger. We demonstrate using the CoNLL 2003 dataset that the Wikipedia category labels extracted by such a simple method actually improve the accuracy of NER."}
{"_id": "e6517b709844f8423cad639b956af925042a3480", "title": "Field-Robot-Based Agriculture : \u201c RemoteFarming . 1 \u201d and \u201c BoniRob-Apps", "text": "Based on the autonomous field robot BoniRob, which was developed for plant phenotyping, the authors have created an enhanced platform BoniRob (V2). The design of BoniRob (V2) considers two main improvements: At first the robustness for continuous outdoor use is increased and secondly the robot now makes available a reusable platform that can serve for multiple agricultural purposes. The combination of the platform BoniRob (V2) and application modules (\u201cBoniRob-Apps\u201d) is comparable to the classical combination of a tractor and an implement. In order to demonstrate this new concept, the authors have implemented three BoniRob-Apps, which can be integrated in BoniRob (V2) via defined mechanical, electrical and logical interfaces. The research project RemoteFarming.1 goes even beyond the accomplishment of a specific task by a robot and demonstrates the integration of the BoniRob (V2) platform into agricultural processes. For that purpose innovative agricultural engineering is combined with webbased communication technologies. The target is to develop a robotic mechanical weed control system which integrates a human user as remote worker in the process. The robotic weed control system is used for intra row weed treatment in carrots at BBCH-scales 10 to 20 in organic farming. This task is today conducted by hand. Introduction In recent years a variety of field robot prototypes have been proposed. They cover several applications, such as scouting and sampling, precise variable rate application of pesticides and fertilizers as well as autonomous weeding [1][2]. However, until now there is no commercial application of autonomous field operation in agriculture available. This is partly due to open issues regarding safety and legal requirements of autonomous operations [3].Another problem of the presented prototypes for autonomous field robots is their conception as single-purpose robots, despite there being several applications with market potential [6]. As the single-purpose concept leads typically would lead high costs due to a low workload over the year (specific tasks are required over only a short period of time), the systems are not economically feasible yet. To overcome this barrier a new concept for a multipurpose field robot is shown here. Multipurpose field robot platform BoniRob (V2) The BoniRob (V2) platform has been designed as an improved version of the BoniRob crop scout, which has been announced by the authors [5]. The BoniRob (V2) is shown in Fig 1. Fig.1: BoniRob (V2) during a RemoteFarming.1 field trial It was reengineered with focus on robustness and reusability. This manifests in increases of power supply by the electrical generator, continuous and peak torque at the wheel drives as well as chassis clearance (Table 1). Table 1: Technical figures of BoniRob (V2) BoniRob crop scout BoniRob (V2) platform Power of electric generator 2.0 kW 2.6 kW Battery\u2019s capacity 110 Ah 170 Ah Continuous drive torque (per wheel) 55 Nm 70 Nm Peak drive torque (per wheel) 110 Nm 240 Nm As it can be seen in Fig.1, the height adjustment of the entire robot body was replaced by vertically fixed arms for each wheel, which can be rotated in the horizontal plane for adjustment of track width and centration of the robot body over the row to be treated. 
The hydraulic height adjustment was replaced by a manual, mechanical height-adjustment module frame. Furthermore, all other hydraulic drives were replaced by electrical systems. The BoniRob-App concept Using the multipurpose field robot platform BoniRob (V2) as a carrier, supplier and base for multiple BoniRob-Apps can be compared to the traditional combination of a single tractor with multiple implements. BoniRob-Apps can be integrated into the platform using defined mechanical, electrical and logical interfaces. The module frame of 0.77 m x 0.75 m in the centre of the robot allows mechanically attaching the BoniRob-App (Fig. 2). The mounting height can be adjusted in a range of 0.4 m. The electrical connection is set up using a custom plug. This provides energy supply to the BoniRob-App and different voltage levels (230V AC, 24 V DC, 12 V DC, 5 V DC). Moreover, it allows linking the emergency stop infrastructure of platform and App, such that even the App can stop the entire robot in case of emergency. Also, the custom plug sets up a Gigabit-Ethernet connection to the host-computer on BoniRob (V2). The entire communication infrastructure on BoniRob (V2) and between platform and the Apps is designed using Ethernet and ROS [4]. A sequence controller with 2 levels opens the way for distributed drive control of the robot. In the row the App (level 2 sequence controller) can access the drive control. On the headland the platform (level 1 sequence controller) holds the access to the drive control and runs the turn procedure of the robot. This Fig. 2: Multiple BoniRob-Apps can be used in the same platform allows switching the access to the drive depending on the state including letting the App control the drive train of the robot in properly defined situations. Again, this can be compared to a tractor-implement combination. In recent years in the field of Tractor Implement Management advances have been made towards control of the tractor by the implement in specific situations, because the implement and its sensors are closer to the process and more specifically adapted to it. This also shows that BoniRob-Apps are more than just tools to be used with the BoniRob (V2) platform but smart combinations of mechanical and electrical systems together with processand application-specific intelligence. BoniRob-Apps: Demonstrating examples In order to demonstrate the new concept, the authors implemented three BoniRob-Apps:"}
{"_id": "d14dc451c7958e58785dcb806ccc28d56505b217", "title": "Energy Minimization Using Multiple Supply Voltages", "text": "We present a dynamic programming technique for solving the multiple supply voltage scheduling problem in both non-pipelined and functionally pipelined data-paths. The scheduling problem refers to the assignment of a supply voltage level to each operation in a data ow graph so as to minimize the average energy consumption for given computation time or throughput constraints or both. The energy model is accurate and accounts for the input pattern dependencies, re-convergent fanout induced dependencies, and the energy cost of level shifters. Experimental results show that using four supply voltage levels on a number of standard benchmarks, an average energy saving of 53% (with a computation time constraint of 1.5 times the critical path delay) can be obtained compared to using one fixed supply voltage level."}
{"_id": "34015cf6c21cbf36f251de6f59dc8f41bdbd7b4a", "title": "Modeling Commonsense Reasoning via Analogical Chaining: A Preliminary Report", "text": "Understanding the nature of commonsense reasoning is one of the deepest questions of cognitive science. Prior work has proposed analogy as a mechanism for commonsense reasoning, with prior simulations focusing on reasoning about continuous behavior of physical systems. This paper examines how analogy might be used in commonsense more broadly. The two contributions are (1) the idea of common sense units, intermediate-sized collections of facts extracted from experience (including cultural experience) which improves analogical retrieval and simplifies inferencing, and (2) analogical chaining, where multiple rounds of analogical retrieval and mapping are used to rapidly construct explanations and predictions. We illustrate these ideas via an implemented computational model, tested on examples from an independently-developed test of commonsense reasoning."}
{"_id": "13a623aa360e2afc30510946bf27a88616090a17", "title": "A key-management scheme for distributed sensor networks", "text": "Distributed Sensor Networks (DSNs) are ad-hoc mobile networks that include sensor nodes with limited computation and communication capabilities. DSNs are dynamic in the sense that they allow addition and deletion of sensor nodes after deployment to grow the network or replace failing and unreliable nodes. DSNs may be deployed in hostile areas where communication is monitored and nodes are subject to capture and surreptitious use by an adversary. Hence DSNs require cryptographic protection of communications, sensor-capture detection, key revocation and sensor disabling. In this paper, we present a key-management scheme designed to satisfy both operational and security requirements of DSNs. The scheme includes selective distribution and revocation of keys to sensor nodes as well as node re-keying without substantial computation and communication capabilities. It relies on probabilistic key sharing among the nodes of a random graph and uses simple protocols for shared-key discovery and path-key establishment, and for key revocation, re-keying, and incremental addition of nodes. The security and network connectivity characteristics supported by the key-management scheme are discussed and simulation experiments presented."}
{"_id": "1c7b561a8ca3ede100989281e866addacbb671e7", "title": "A New Approach to Track Multiple Vehicles With the Combination of Robust Detection and Two Classifiers", "text": "It plays an important role to accurately track multiple vehicles in intelligent transportation, especially in intelligent vehicles. Due to complicated traffic environments it is difficult to track multiple vehicles accurately and robustly, especially when there are occlusions among vehicles. To alleviate these problems, a new approach is proposed to track multiple vehicles with the combination of robust detection and two classifiers. An improved ViBe algorithm is proposed for robust and accurate detection of multiple vehicles. It uses the gray-scale spatial information to build dictionary of pixel life length to make ghost shadows and object\u2019s residual shadows quickly blended into the samples of the background. The improved algorithm takes good post-processing method to restrain dynamic noise. In this paper, we also design a method using two classifiers to further attack the problem of failure to track vehicles with occlusions and interference. It classifies tracking rectangles with confidence values between two thresholds through combining local binary pattern with support vector machine (SVM) classifier and then using a convolutional neural network (CNN) classifier for the second time to remove the interference areas between vehicles and other moving objects. The two classifiers method has both time efficiency advantage of SVM and high accuracy advantage of CNN. Comparing with several existing methods, the qualitative and quantitative analysis of our experiment results showed that the proposed method not only effectively removed the ghost shadows, and improved the detection accuracy and real-time performance, but also was robust to deal with the occlusion of multiple vehicles in various traffic scenes."}
{"_id": "9c9d811ff35afd7ee1eab731f283a8bb632df995", "title": "Digital Watermarking and Steganography", "text": "Sharing, disseminating, and presenting data in digital format is not just a fad, but it is becoming part of our life. Without careful planning, digitized resources could easily be misused, especially those that are shared across the Internet. Examples of such misuse include use without the owner\u2019s permission, and modification of a digitized resource to fake ownership. One way to prevent such behaviors is to employ some form of copyright protection technique, such as digital watermarks. Digital watermarks refer to the data embedded into a digital source (e.g., images, text, audio, or video recording). They are similar to watermarks in printed materials as a message inserted into the host media typically becomes an integral part of the media. Apart from traditional watermarks in printed forms, digital watermarks may also be invisible, may be in the forms other than graphics, and may be digitally removed."}
{"_id": "27493531060fa3fac4ae5c58eaafc53d888fdad9", "title": "An evaluation of bipartitioning techniques", "text": "Logic partitioning is an important issue in VLSI CAD, and has been an area of active research for at least the last 25 years. Numerous approaches have been developed and many different techniques have been combined for a wide range of applications. In this paper, we examine many of the existing techniques for logic bipartitioning and present a methodology for determining the best mix of approaches. The result is a novel bipartitioning algorithm that includes both new and pre-existing techniques. Our algorithm produces results that are at least 16% better than the state-of-the-art while also being efficient in run-time."}
{"_id": "2c0b1aee014235b9f78c60d4b48609b63994397c", "title": "Unsupervised soccer video abstraction based on pitch, dominant color and camera motion analysis", "text": "We present a soccer video abstraction method based on the analysis of the audio and video streams. This method could be applied to other sports as rugby or american football. The main contribution of this paper is the design of an unsupervised summarization method, and more specifically, the introduction of an efficient detector of excited speech segments. An excited commentary is supposed to correspond to an interesting moment of the game. It is simultaneously characterized by an increase of the pitch (or fundamental frequency) within the voiced segments and an increase of the energy supported by the harmonics of the pitch. The pitch is estimated from the autocorrelation function and its local increases are detected from a multiresolution technique. We introduce a specific energy measure for the voiced segments. A statistical analysis of the energy measures is performed to detect the most excited parts of the speech. A deterministic combination of excited speech detection, dominant color identification and camera motion analysis is then performed in order to discriminate between excited speech sequences of the game and excited speech sequences in commercials or in studio shots included in the processed TV programs.\n The method presented here does not need any learning stage. It has been tested on seven soccer videos for a total duration of almost 20 hours."}
{"_id": "821d782408f7923fecb7b2d8afa88bcf9814cb2e", "title": "Blurred boundaries: the therapeutics and politics of medical marijuana.", "text": "For 5 millennia, Cannabis sativa has been used throughout the world medically, recreationally, and spiritually. From the mid-19th century to the 1930s, American physicians prescribed it for a plethora of indications, until the federal government started imposing restrictions on its use, culminating in 1970 with the US Congress classifying it as a Schedule I substance, illegal, and without medical value. Simultaneous with this prohibition, marijuana became the United States' most widely used illicit recreational drug, a substance generally regarded as pleasurable and relaxing without the addictive dangers of opioids or stimulants. Meanwhile, cannabis never lost its cachet in alternative medicine circles, going mainstream in 1995 when California became the first of 16 states to date to legalize its medical use, despite the federal ban. Little about cannabis is straightforward. Its main active ingredient, \u03b4-9-tetrahydrocannabinol, was not isolated until 1964, and not until the 1990s were the far-reaching modulatory activities of the endocannabinoid system in the human body appreciated. This system's elucidation raises the possibility of many promising pharmaceutical applications, even as draconian federal restrictions that hamstring research show no signs of softening. Recreational use continues unabated, despite growing evidence of marijuana's addictive potential, particularly in the young, and its propensity for inducing and exacerbating psychotic illness in the susceptible. Public approval drives medical marijuana legalization efforts without the scientific data normally required to justify a new medication's introduction. This article explores each of these controversies, with the intent of educating physicians to decide for themselves whether marijuana is panacea, scourge, or both. PubMed searches were conducted using the following keywords: medical marijuana, medical cannabis, endocannabinoid system, CB1 receptors, CB2 receptors, THC, cannabidiol, nabilone, dronabinol, nabiximols, rimonabant, marijuana legislation, marijuana abuse, marijuana dependence, and marijuana and schizophrenia. Bibliographies were hand searched for additional references relevant to clarifying the relationships between medical and recreational marijuana use and abuse."}
{"_id": "9c0e19677f302f04a276469fddc849bd7616ab46", "title": "AxNN: Energy-efficient neuromorphic systems using approximate computing", "text": "Neuromorphic algorithms, which are comprised of highly complex, large-scale networks of artificial neurons, are increasingly used for a variety of recognition, classification, search and vision tasks. However, their computational and energy requirements can be quite high, and hence their energy-efficient implementation is of great interest.\n We propose a new approach to design energy-efficient hardware implementations of large-scale neural networks (NNs) using approximate computing. Our work is motivated by the observations that (i) NNs are used in applications where less-than-perfect results are acceptable, and often inevitable, and (ii) they are highly resilient to inexactness in many (but not all) of their constituent computations. We make two key contributions. First, we propose a method to transform any given NN into an Approximate Neural Network (AxNN). This is performed by (i) adapting the backpropagation technique, which is commonly used to train these networks, to quantify the impact of approximating each neuron to the overall network quality (e.g., classification accuracy), and (ii) selectively approximating those neurons that impact network quality the least. Further, we make the key observation that training is a naturally error-healing process that can be used to mitigate the impact of approximations to neurons. Therefore, we incrementally retrain the network with the approximations in-place, reclaiming a significant portion of the quality ceded by approximations. As a second contribution, we propose a programmable and quality-configurable neuromorphic processing engine (qcNPE), which utilizes arrays of specialized processing elements that execute neuron computations with dynamically configurable accuracies and can be used to execute AxNNs from diverse applications. We evaluated the proposed approach by constructing AXNNs for 6 recognition applications (ranging in complexity from 12-47,818 neurons and 160-3,155,968 connections) and executing them on two different platforms--qcNPE implementation containing 272 processing elements in 45nm technology and a commodity Intel Xeon server. Our results demonstrate 1.14X-1.92X energy benefits for virtually no loss (< 0.5%) in output quality, and even higher improvements (upto 2.3X) when some loss (upto 7.5%) in output quality is acceptable."}
{"_id": "dc2330487092b243d09477e304a41d9bea5f47b5", "title": "Text Encryption with Huffman Compression", "text": "Communication between a sender and receiver needs security. It can be done in any form, like plain text or binary data. Changing the information to some unidentifiable form, can save it from assailants. For example, plain text can be coded using schemes so that a stranger cannot apprehend it. Cryptography is a subject or field which deals with the secret transmission of messages/ data between two parties. Cryptography is the practice and study of hiding information. It enables you to store delicate information and transmit it across insecure networks so that it cannot be read by anyone except the authorized recipient. Applications of cryptography include ATM cards, computer passwords, and electronic commerce etc. Messages, which are being sent between two parties, should be of small size so that they occupy less space. Data compression involves encoding of information using fewer bits than the original representation. Compression algorithms reduce the redundancy in data representation to decrease the storage required for that data. Symmetric key cryptography algorithms [6] are fast and mostly used type of encryption. Many types of data compression techniques do exist. Huffman Compression [5] is a lossless data compression technique and can be considered as the most efficient algorithm. In this paper, a combination of new Symmetric key algorithm and existing Huffman compression algorithm has been proposed. Proposed method works on text data. Algorithms have been provided in the paper itself."}
{"_id": "21d1343ef02a59e976f249281cfd4d3a87de14fb", "title": "Health informatics: current issues and challenges", "text": "Health informatics concerns the use of information and information and communication technologies within healthcare. Health informatics and information science need to take account of the unique aspects of health and medicine. The development of information systems and electronic records within health needs to consider the information needs and behaviour of all users. The sensitivity of personal health data raises ethical concerns for developing electronic records. E-health initiatives must actively involve users in the design, development, implementation and evaluation, and information science can contribute to understanding the needs and behaviour of user groups. Health informatics could make an important contribution to the ageing society and to reducing the digital divide and health divides within society. There is a need for an appropriate evidence base within health informatics to support future developments, and to ensure health informatics reaches its potential to improve the health and well-being of patients and the public."}
{"_id": "92862e13ceb048d596d05b5c788765649be9d851", "title": "Co-FAIS: Cooperative fuzzy artificial immune system for detecting intrusion in wireless sensor networks", "text": "Due to the distributed nature of Denial-of-Service attacks, it is tremendously challenging to identify such malicious behavior using traditional intrusion detection systems in Wireless Sensor Networks (WSNs). In the current paper, a bio-inspired method is introduced, namely the cooperative-based fuzzy artificial immune system (Co-FAIS). It is a modular-based defense strategy derived from the danger theory of the human immune system. The agents synchronize and work with one another to calculate the abnormality of sensor behavior in terms of context antigen value (CAV) or attackers and update the fuzzy activation threshold for security response. In such a multi-node circumstance, the sniffer module adapts to the sink node to audit data by analyzing the packet components and sending the log file to the next layer. The fuzzy misuse detector module (FMDM) integrates with a danger detector module to identify the sources of danger signals. The infected sources are transmitted to the fuzzy Q-learning vaccination modules (FQVM) in order for particular, required action to enhance system abilities. The Cooperative Decision Making Modules (Co-DMM) incorporates danger detector module with the fuzzy Q-learning vaccination module to produce optimum defense strategies. To evaluate the performance of the proposed model, the Low Energy Adaptive Clustering Hierarchy (LEACH) was simulated using a network simulator. The model was subsequently compared against other existing soft computing methods, such as fuzzy logic controller (FLC), artificial immune system (AIS), and fuzzy Q-learning (FQL), in terms of detection accuracy, counter-defense, network lifetime and energy consumption, to demonstrate its efficiency and viability. The proposed method improves detection accuracy and successful defense rate performance against attacks compared to conventional empirical methods. & 2014 Elsevier Ltd. All rights reserved."}
{"_id": "6303092f36faee382ee12d8215a5c80245414dca", "title": "Ray tracing animated scenes using coherent grid traversal", "text": "We present a new approach to interactive ray tracing of moderate-sized animated scenes based on traversing frustum-bounded packets of coherent rays through uniform grids. By incrementally computing the overlap of the frustum with a slice of grid cells, we accelerate grid traversal by more than a factor of 10, and achieve ray tracing performance competitive with the fastest known packet-based kd-tree ray tracers. The ability to efficiently rebuild the grid on every frame enables this performance even for fully dynamic scenes that typically challenge interactive ray tracing systems."}
{"_id": "cf03d0df41887697dfb64edde8c5196264e4441c", "title": "Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery", "text": "One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft-tissues. This information is prerequisite to the registration of multi-modal patient-specific data for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions."}
{"_id": "2327a7883f36dc1c1bfc8e6563561169b9cebf40", "title": "A 0.3 GHz to 1.4 GHz N-path mixer-based code-domain RX with TX self-interference rejection", "text": "A code-domain N-path RX is proposed based on PN-code modulated LO pulses for concurrent reception of two code-modulated signals. Additionally, a combination of Walsh-Function and PN sequence is proposed to translate in-band TX self-interference (SI) to out-of-band at N-path RX output enabling frequency filtering for high SI rejection. A 0.3 GHz\u20131.4 GHz 65-nm CMOS implementation has 35 dB gain for desired signals and concurrently receives two RX signals while rejecting mismatched spreading codes at RF input. Proposed TX SI mitigation approach results in 38.5 dB rejection for \u221211.8dBm 1.46 Mb/s QPSK modulated SI at RX input. The RX achieves 23.7dBm OP1dB for in-band SI, while consuming \u223c35mW and occupies 0.31mm2."}
{"_id": "ff064f17dae6b416e21981da1d12582208171e2f", "title": "28 GHz and 73 GHz millimeter-wave indoor propagation measurements and path loss models", "text": "This paper presents 28 GHz and 73 GHz millimeter-wave propagation measurements performed in a typical office environment using a 400 Megachip-per-second broadband sliding correlator channel sounder and highly directional steerable 15 dBi (30\u00b0 beamwidth) and 20 dBi (15\u00b0 beamwidth) horn antennas. Power delay profiles were acquired for 48 transmitter-receiver location combinations over distances ranging from 3.9 m to 45.9 m with maximum transmit powers of 24 dBm and 12.3 dBm at 28 GHz and 73 GHz, respectively. Directional and omnidirectional path loss models and RMS delay spread statistics are presented for line-of-sight and non-line-of-sight environments for both co- and cross-polarized antenna configurations. The LOS omnidirectional path loss exponents were 1.1 and 1.3 at 28 GHz and 73 GHz, and 2.7 and 3.2 in NLOS at 28 GHz and 73 GHz, respectively, for vertically-polarized antennas. The mean directional RMS delay spreads were 18.4 ns and 13.3 ns, with maximum values of 193 ns and 288 ns at 28 GHz and 73 GHz, respectively."}
{"_id": "c286ba73f645535d19e085bdaa713a0bb9cb1ddc", "title": "Three-way substrate integrated waveguide (SIW) power divider design", "text": "In this paper, an X-band 1\u00d73 substrate integrated waveguide (SIW) power divider design is presented. The designed SIW power divider provides equal amplitude with uniform phase distribution at the each output port. It has also a satisfactory operating bandwidth and low insertion loss. Moreover, the return loss is approximately 25 dB at the design frequency as shown in the EM simulation results."}
{"_id": "338d720e068229ff611f521cd3b61961d5f530d6", "title": "A review and critique of emotional intelligence measures", "text": "Emotional intelligence measures vary widely in both their content and in their method of assessment. In particular, emotional intelligence measures tend to use either a self-report personality-based approach, an informant approach, or an ability-based assessment procedure. In this paper, the measurement and psychometric properties of four of the major emotional intelligence measures (Emotional Competence Inventory, Emotional Quotient Inventory, Multifactor Emotional Intelligence Scale, Mayer\u2013Salovey\u2013Caruso Emotional Intelligence Test) are reviewed, the comparability of these measures is examined, and some conclusions and suggestions for future research on emotional intelligence measures are provided. Copyright # 2005 John Wiley & Sons, Ltd."}
{"_id": "878301453e3d5cb1a1f7828002ea00f59cbeab06", "title": "Faceness-Net: Face Detection through Deep Facial Part Responses", "text": "We propose a deep convolutional neural network (CNN) for face detection leveraging on facial attributes based supervision. We observe a phenomenon that part detectors emerge within CNN trained to classify attributes from uncropped face images, without any explicit part supervision. The observation motivates a new method for finding faces through scoring facial parts responses by their spatial structure and arrangement. The scoring mechanism is data-driven, and carefully formulated considering challenging cases where faces are only partially visible. This consideration allows our network to detect faces under severe occlusion and unconstrained pose variations. Our method achieves promising performance on popular benchmarks including FDDB, PASCAL Faces, AFW, and WIDER FACE."}
{"_id": "126e5a90e12aee34d33e6d301ab9533acf03c199", "title": "Multi-style Generative Network for Real-Time Transfer", "text": "Despite the rapid progress in style transfer, existing approaches using feed-forward generative network for multistyle or arbitrary-style transfer are usually compromised of image quality and model flexibility. We find it is fundamentally difficult to achieve comprehensive style modeling using 1-dimensional style embedding. Motivated by this, we introduce CoMatch Layer that learns to match the second order feature statistics with the target styles. With the CoMatch Layer, we build a Multi-style Generative Network (MSGNet), which achieves real-time performance. In addition, we employ an specific strategy of upsampled convolution which avoids checkerboard artifacts caused by fractionallystrided convolution. Our method has achieved superior image quality comparing to state-of-the-art approaches. The proposed MSG-Net as a general approach for real-time style transfer is compatible with most existing techniques including content-style interpolation, color-preserving, spatial control and brush stroke size control. MSG-Net is the first to achieve real-time brush-size control in a purely feedforward manner for style transfer. Our implementations and pre-trained models for Torch, PyTorch and MXNet frameworks will be publicly available1."}
{"_id": "235390de857af2399d97647b763b57b2756b9527", "title": "Relations among children's social goals, implicit personality theories, and responses to social failure.", "text": "Two studies examined children's thought patterns in relation to their responses to social challenge. In Study 1, 4th and 5th graders tried out for a pen pal club under either a performance goal (stressing the evaluative nature of the tryout) or a learning goal (emphasizing the potential learning opportunities). In their behavior and attributions following rejection, children who were focused on a performance goal reacted with more helplessness, whereas children given a learning goal displayed a more mastery-oriented response. Study 2 found that in response to hypothetical socially challenging situations, 4th, 5th, and 6th graders who believed personality was nonmalleable (entity theorists) vs. malleable (incremental theorists) were more likely to endorse performance goals. Together, these studies indicate that children's goals in social situations are associated with their responses to social failure and are predicted by their implicit theories about their personality."}
{"_id": "01fecf4553132a5252b6bf6d264f68568a8dbf6e", "title": "Audiovisual Fusion: Challenges and New Approaches", "text": "In this paper, we review recent results on audiovisual (AV) fusion. We also discuss some of the challenges and report on approaches to address them. One important issue in AV fusion is how the modalities interact and influence each other. This review will address this question in the context of AV speech processing, and especially speech recognition, where one of the issues is that the modalities both interact but also sometimes appear to desynchronize from each other. An additional issue that sometimes arises is that one of the modalities may be missing at test time, although it is available at training time; for example, it may be possible to collect AV training data while only having access to audio at test time. We will review approaches to address this issue from the area of multiview learning, where the goal is to learn a model or representation for each of the modalities separately while taking advantage of the rich multimodal training data. In addition to multiview learning, we also discuss the recent application of deep learning (DL) toward AV fusion. We finally draw conclusions and offer our assessment of the future in the area of AV fusion."}
{"_id": "3817f4c05b7316a405c2aadf80eb113d8fbb82d6", "title": "Audio-visual speech recognition using lip movement extracted from side-face images", "text": "This paper proposes an audio-visual speech recognition method using lip movement extracted from side-face images to attempt to increase noise-robustness in mobile environments. Although most previous bimodal speech recognition methods use frontal face (lip) images, these methods are not easy for users since they need to hold a device with a camera in front of their face when talking. Our proposed method capturing lip movement using a small camera installed in a handset is more natural, easy and convenient. This method also effectively avoids a decrease of signal-to-noise ratio (SNR) of input speech. Visual features are extracted by optical-flow analysis and combined with audio features in the framework of HMM-based recognition. Phone HMMs are built by the multi-stream HMM technique. Experiments conducted using Japanese connected digit speech contaminated with white noise in various SNR conditions show effectiveness of the proposed method. Recognition accuracy is improved by using the visual information in all SNR conditions, and the best improvement is approximately 6% at 5dB SNR."}
{"_id": "4c842fbd4c032dd4d931eb6ff1eaa2a13450b7af", "title": "A review of recent advances in visual speech decoding", "text": "Visual speech information plays an important role in automatic speech recognition (ASR) especially when audio is corrupted or even inaccessible. Despite the success of audio-based ASR, the problem of visual speech decoding remains widely open. This paper provides a detailed review of recent advances in this research area. In comparison with the previous survey [97] which covers the whole ASR system that uses visual speech information, we focus on the important questions asked by researchers and summarize the recent studies that attempt to answer them. In particular, there are three questions related to the extraction of visual features, concerning speaker dependency, pose variation and temporal information, respectively. Another question is about audio-visual speech fusion, considering the dynamic changes of modality reliabilities encountered in practice. In addition, the state-of-the-art on facial landmark localization is briefly introduced in this paper. Those advanced techniques can be used to improve the region-of-interest detection, but have been largely ignored when building a visual-based ASR system. We also provide details of audio-visual speech databases. Finally, we discuss the remaining challenges and offer our insights into the future research on visual speech decoding."}
{"_id": "839104964d04c505a827ae854e1251271968c7f7", "title": "Graph Embedding and Extensions: A General Framework for Dimensionality Reduction", "text": "A large family of algorithms - supervised or unsupervised; stemming from statistics or geometry theory - has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper a general formulation known as graph embedding to unify them within a common framework. In graph embedding, each algorithm can be considered as the direct graph embedding or its linear/kernel/tensor extension of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set, with constraints from scale normalization or a penalty graph that characterizes a statistical or geometric property that should be avoided. Furthermore, the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms. By utilizing this framework as a tool, we propose a new supervised dimensionality reduction algorithm called marginal Fisher analysis in which the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points of the same class, while the penalty graph connects the marginal points and characterizes the interclass separability. We show that MFA effectively overcomes the limitations of the traditional linear discriminant analysis algorithm due to data distribution assumptions and available projection directions. Real face recognition experiments show the superiority of our proposed MFA in comparison to LDA, also for corresponding kernel and tensor extensions"}
{"_id": "b0c59ca7251fba3274e9dc04e9dd402cae9a05ae", "title": "Fusion of AV features and external information sources for event detection in team sports video", "text": "The use of AV features alone is insufficient to induce high-level semantics. This article proposes a framework that utilizes both internal AV features and various types of external information sources for event detection in team sports video. Three schemes are also proposed to tackle the asynchronism between the fusion of AV and external information. The framework is extensible as it can provide increasing functionalities given more detailed external information and domain knowledge. By demonstrating its effectiveness on soccer and American football, we believe that with the availability of appropriate domain knowledge, the framework is applicable to other team sports."}
{"_id": "5c0dbbf5c9aa38766d61eac90a0258b4d7d97f6f", "title": "The Adoption Process of Cryptocurrencies Identifying Factors That Influence the Adoption of Cryptocurrencies from a Multiple Stakeholder Perspective. Identifying Factors That Influence the Adoption of Cryptocurrencies from a Multiple Stakeholder Perspective", "text": "II The adoption process of cryptocurrencies Management Summary Cryptocurrencies 1 are rapidly gaining more and more interest as a technology that is potentially groundbreaking and disruptive for the whole payments industry on a global scale. However the future of cryptocurrencies is very unclear as there are many different usage scenarios and different stakeholders have different needs. In order to be able to give a better future perspective and to determine possibilities for improvement of cryptocurrencies factors that influence adoption will have to be determined. To achieve this the following main research question was formulated: What are factors influencing the adoption of cryptocurrencies in different usage scenarios for different stakeholders? The way current payment systems work throughout the world varies widely. The added value of cryptocurrencies therefore also hugely differs per geographical area. In order to be able to give clear and concise conclusions this research is scoped towards looking at the European market and the Dutch market in particular. Based on the Diffusion of Innovations Theory, which defines characteristics necessary for an innovation in order to be successfully adopted, a conceptual model to describe the adoption of cryptocurrencies was formulated. This model was based on academic literature resulting from a literature review. The model was then validated by means of qualitative semi\u2010structured interviews with subject experts. To make sure that a balanced view was obtained stakeholders from a wide range of industries were interviewed: employees from the four biggest Dutch banks, the Dutch Central Bank, the three largest Dutch cryptocurrency exchanges, senior payments consultants, Payment Service Providers, a cryptography expert and the largest Dutch company that accepts cryptocurrency, Thuisbezorgd.nl. During the interviews barriers were identified that have to be overcome in order for cryptocurrencies to be adopted on a large scale. The three main pillars which are important for future mass adoption are: 1. Ease of use: at the moment there is a lack of user\u2010friendliness when using bitcoins. Sending and receiving bitcoin is still cumbersome and holding bitcoins is prone to many risks. Users need to be able to have more confidence in the safety of their funds. 2. Price stability: the current price volatility driven by speculation and a lack of liquidity makes that it is very risky for a user to keep his funds in cryptocurrency as the value varies wildly. This undermines the function of cryptocurrency as a store of value. 3. Governance: the current bitcoin \u2026"}
{"_id": "0657093cd32c185f4c1bef31198ff58dffcd0495", "title": "A Comprehensive Analysis of Knowledge Management Cycles", "text": "At present knowledge and its proper management became an essential issue for every organization. In the modern globalized world, organizations cannot survive in a sustainable way without efficient knowledge management. Knowledge management cycle (KMC) is a process of transforming information into knowledge within an organization, which explains how knowledge is captured, processed, and distributed in an organization. For the better performance organizations require a practical and coherent strategy and comprehensive KMC. The aim of this study is to examine the KMCs and how they are playing vital role for the development of organizations."}
{"_id": "81d51bf638a6a7c405e1e1d461ae979f83fd929b", "title": "Introduction to Evolutionary Computing", "text": ""}
{"_id": "564427596799f7967c91934966cd3c6bd31cb06d", "title": "Holographic reduced representations", "text": "Associative memories are conventionally used to represent data with very simple structure: sets of pairs of vectors. This paper describes a method for representing more complex compositional structure in distributed representations. The method uses circular convolution to associate items, which are represented by vectors. Arbitrary variable bindings, short sequences of various lengths, simple frame-like structures, and reduced representations can be represented in a fixed width vector. These representations are items in their own right and can be used in constructing compositional structures. The noisy reconstructions extracted from convolution memories can be cleaned up by using a separate associative memory that has good reconstructive properties."}
{"_id": "fb71549b0a0f7c06b46f3c8df05c8864b2409d02", "title": "Internet of Things Meets Brain-Computer Interface: A Unified Deep Learning Framework for Enabling Human-Thing Cognitive Interactivity", "text": "A Brain-Computer Interface (BCI) acquires brain signals, analyzes and translates them into commands that are relayed to actuation devices for carrying out desired actions. With the widespread connectivity of everyday devices realized by the advent of the Internet of Things (IoT), BCI can empower individuals to directly control objects such as smart home appliances or assistive robots, directly via their thoughts. However, realization of this vision is faced with a number of challenges, most importantly being the issue of accurately interpreting the intent of the individual from the raw brain signals that are often of low fidelity and subject to noise. Moreover, pre-processing brain signals and the subsequent feature engineering are both time-consuming and highly reliant on human domain expertise. To address the aforementioned issues, in this paper, we propose a unified deep learning based framework that enables effective human-thing cognitive interactivity in order to bridge individuals and IoT objects. We design a reinforcement learning based Selective Attention Mechanism (SAM) to discover the distinctive features from the input brain signals. In addition, we propose a modified Long Short-Term Memory (LSTM) to distinguish the inter-dimensional information forwarded from the SAM. To evaluate the efficiency of the proposed framework, we conduct extensive real-world experiments and demonstrate that our model outperforms a number of competitive state-of-the-art baselines. Two practical real-time human-thing cognitive interaction applications are presented to validate the feasibility of our approach."}
{"_id": "25312b9dfaff44ee6c41aaf3e3f3e4cf6677efc7", "title": "Domain Adaptation for Authorship Attribution: Improved Structural Correspondence Learning", "text": "We present the first domain adaptation model for authorship attribution to leverage unlabeled data. The model includes extensions to structural correspondence learning needed to make it appropriate for the task. For example, we propose a median-based classification instead of the standard binary classification used in previous work. Our results show that punctuation-based character n-grams form excellent pivot features. We also show how singular value decomposition plays a critical role in achieving domain adaptation, and that replacing (instead of concatenating) non-pivot features with correspondence features yields better performance."}
{"_id": "dd363f4d5b33564b0ed3b60468c11d2b475f5f14", "title": "The effects of Zumba training on cardiovascular and neuromuscular function in female college students.", "text": "The present study examined the effects of Zumba training (group fitness based on salsa and aerobics) on endurance, trunk strength, balance, flexibility, jumping performance and quality of life (QoL) in female college students. Thirty female participants were randomly assigned (strata: age, BMI and physical activity) to an intervention (INT, n = 15: age: 21.0 \u00b1 2.3 years; BMI: 21.8 \u00b1 3.0 kg/m(2); physical activity (PA): 7.6 \u00b1 4.6 h/week) or control group (CON, n = 14: age: 21.0 \u00b1 2.8 years; BMI: 21.0 \u00b1 2.1 kg/m(2); PA: 7.3 \u00b1 3.6 h/week). Instructed Zumba training was provided twice a week for 8 weeks (training attendance: 100%). QoL was assessed using the WHO-QoL-BREF questionnaire. Endurance was measured with the 6-min walking test (6MWT). Trunk flexibility was assessed with the stand-and-reach-test and lower-extremity strength with the jump-and-reach-test. The star excursion balance test (SEBT) was employed to assess dynamic balance. Trunk strength endurance was examined using the Swiss global trunk strength test in prone and lateral (left, right) positions. All testings were performed before and after the training period. We observed large statistically significant between-group effects of total QoL score (INT: +9.8%, CON: +0.4%, p < 0.001; partial eta squared [Formula: see text]), 6MWT distance (INT: +21%, CON: -2%, p < 0.001, [Formula: see text]), trunk strength endurance (prone, INT: +48%, CON: +11%, p = 0.04, [Formula: see text]; lateral-left, INT: +71%, CON: +11%, p = 0.01, [Formula: see text], lateral-right, INT: +54%, CON: +11%, p = 0.01, [Formula: see text]) and dynamic balance (all eight reaching distances of the SEBT, INT: +11-26%, CON: +1.1-3.8%, 0.001 < p < 0.04, 0.14 < [Formula: see text]) with significantly larger improvements for INT. Flexibility and jump performance were not relevantly affected (p > 0.05). Instructed Zumba training can be applied to improve well-being, aerobic fitness and neuromuscular function in female college students."}
{"_id": "8e636046ab315ce987625a98f0a511f319687bb9", "title": "The Smart Car Parking System Based on Iot Commanded by Android Application", "text": "Parking the car is one of the difficult task that we are facing in our day to day life. The main issue is providing the sufficient parking system. Now a days it is very hard to find the availability of parking slots. The various places(public) that is shopping mall, cinema hall etc finds it difficult to search the available parking area. This calls for the situations of an Smart car parking system which is based on IoT and commanded by Android application which are equipped with IR sensors and microcontroller (arduinouno).In this paper a small prototype of smart car parking system which is based on IoT is implemented. The paper proposed a system that the user will automatically find the parking space throught an android application via server. In addition to this we can say that the its a new way of communication between humans and the things with the help of new technology based on IoT."}
{"_id": "442319260d82ae8f7dec016849e566fd15cd520f", "title": "Saliency-based Sequential Image Attention with Multiset Prediction", "text": "Humans process visual scenes selectively and sequentially using attention. Central to models of human visual attention is the saliency map. We propose a hierarchical visual architecture that operates on a saliency map and uses a novel attention mechanism to sequentially focus on salient regions and take additional glimpses within those regions. The architecture is motivated by human visual attention, and is used for multi-label image classification on a novel multiset task, demonstrating that it achieves high precision and recall while localizing objects with its attention. Unlike conventional multi-label image classification models, the model supports multiset prediction due to a reinforcement-learning based training process that allows for arbitrary label permutation and multiple instances per label."}
{"_id": "e80756d644189957ea2079f0f8174d799b03453b", "title": "Intent aware adaptive admittance control for physical Human-Robot Interaction", "text": "Effective physical Human-Robot Interaction (pHRI) needs to account for variable human dynamics and also predict human intent. Recently, there has been a lot of progress in adaptive impedance and admittance control for human-robot interaction. Not as many contributions have been reported on online adaptation schemes that can accommodate users with varying physical strength and skill level during interaction with a robot. The goal of this paper is to present and evaluate a novel adaptive admittance controller that can incorporate human intent, nominal task models, as well as variations in the robot dynamics. An outer-loop controller is developed using an ARMA model which is tuned using an adaptive inverse control technique. An inner-loop neuroadaptive controller linearizes the robot dynamics. Working in conjunction and online, this two-loop technique offers an elegant way to decouple the pHRI problem. Experimental results are presented comparing the performance of different types of admittance controllers. The results show that efficient online adaptation of the robot admittance model for different human subjects can be achieved. Specifically, the adaptive admittance controller reduces jerk which results in a smooth human-robot interaction."}
{"_id": "97cf1dd119fe80521a1796dba01bb3318b43bc25", "title": "Macrophage biology in development, homeostasis and disease", "text": "Macrophages, the most plastic cells of the haematopoietic system, are found in all tissues and show great functional diversity. They have roles in development, homeostasis, tissue repair and immunity. Although tissue macrophages are anatomically distinct from one another, and have different transcriptional profiles and functional capabilities, they are all required for the maintenance of homeostasis. However, these reparative and homeostatic functions can be subverted by chronic insults, resulting in a causal association of macrophages with disease states. In this Review, we discuss how macrophages regulate normal physiology and development, and provide several examples of their pathophysiological roles in disease. We define the \u2018hallmarks\u2019 of macrophages according to the states that they adopt during the performance of their various roles, taking into account new insights into the diversity of their lineages, identities and regulation. It is essential to understand this diversity because macrophages have emerged as important therapeutic targets in many human diseases."}
{"_id": "f0546d12d75cb22f56c525d7b142f7e6349974b7", "title": "A ZVS Pulsewidth Modulation Full-Bridge Converter With a Low-RMS-Current Resonant Auxiliary Circuit", "text": "This paper presents the description and analysis of a phase-shift-modulated full-bridge converter with a novel robust passive low-rms-current resonant auxiliary circuit for zero-voltage switching (ZVS) operation of both the leading and lagging switch legs. Detailed time-domain analysis describes the steady-state behavior of the auxiliary circuit in different operating conditions. An in-depth comparative study between a fully specified baseline converter and the equivalent converter using the proposed resonant auxiliary circuit is presented. For a similar peak auxiliary current to ensure ZVS operation, a minimum of 20% reduction in rms current is obtained, which decreases the conduction losses. Key characteristics and design considerations are also fully discussed. Experimental results from a 750-W prototype confirm the predicted enhancements using the proposed resonant auxiliary circuit."}
{"_id": "a555de6cb4fa0491d5618e1bfd192cd613eb72e9", "title": "Incivility, retention and new graduate nurses: an integrated review of the literature.", "text": "AIM\nTo evaluate the influence of incivility on the new graduate nurse transition experience.\n\n\nBACKGROUND\nIncivility in the work environment is a major source of dissatisfaction and new graduate nurses are especially vulnerable. Incivility contributes to the high levels of turnover associated within the first 2\u00a0years of new graduate nurse employment.\n\n\nEVALUATION\nAn integrated review of the literature was conducted using MEDLINE-EBSCOhost, PsycInfo and CINAHL databases. Relevant articles were reviewed for appropriateness related to inclusion/exclusion criteria and for quality using established criteria. Sixteen studies were included in the final analysis.\n\n\nKEY ISSUES\nThemes that emerged included workplace incivility, nurse residency programmes, mentoring through preceptors and empowerment/work environment. Findings indicated that incivility in the workplace was a significant predictor of low job satisfaction in new graduate nurses transitioning into practice.\n\n\nCONCLUSIONS\nWhile graduate nurse transition programmes are associated with improved satisfaction and retention, they appear to address incivility by acculturating new graduate nurses to the experience of incivility. There is little evidence that the culture of incivility has been addressed.\n\n\nIMPLICATIONS FOR NURSING MANAGEMENT\nNurse managers have the responsibility to be aware of the prevalence of incivility, assess for its occurrence, and implement strategies which eliminate workplace incivility and tolerance for uncivil behaviours."}
{"_id": "1d9bbe9dfe99af804f08e0d8442b8d62f5828acd", "title": "Cognition and motivation in emotion.", "text": "The role of cognition--and to some extent motivation--in emotion, the ways meaning is generated, unconscious appraising, and the implications of this way of thinking for life-span development are addressed. It is argued that appraisal is a necessary as well as sufficient cause of emotion and that knowledge is necessary but not sufficient. This position is examined in light of what is known about emotions in infants and young children, the effects of drugs on acute emotions and moods, and recent patterns of thought about the brain in emotions. The discussion of how meaning is generated is the core of the article. Automatic processing without awareness is contrasted with deliberate and conscious processing, and the concept of resonance between an animal's needs and what is encountered in the environment is examined. The idea that there is more than one way meaning is achieved strengthens and enriches the case for the role of appraisal in emotion and allows the consideration of what is meant by unconscious and preconscious appraisal and the examination of how they might work."}
{"_id": "6c3b3813b5b504c8cf8c21934f80ca2201db26fc", "title": "Forward-Backward Selection with Early Dropping", "text": "Forward-backward selection is one of the most basic and commonly-used feature selection algorithms available. It is also general and conceptually applicable to many different types of data. In this paper, we propose a heuristic that significantly improves its running time, while preserving predictive accuracy. The idea is to temporarily discard the variables that are conditionally independent with the outcome given the selected variable set. Depending on how those variables are reconsidered and reintroduced, this heuristic gives rise to a family of algorithms with increasingly stronger theoretical guarantees. In distributions that can be faithfully represented by Bayesian networks or maximal ancestral graphs, members of this algorithmic family are able to correctly identify the Markov blanket in the sample limit. In experiments we show that the proposed heuristic increases computational efficiency by about two orders of magnitude in high-dimensional problems, while selecting fewer variables and retaining predictive performance. Furthermore, we show that the proposed algorithm and feature selection with LASSO perform similarly when restricted to select the same number of variables, making the proposed algorithm an attractive alternative for problems where no (efficient) algorithm for LASSO exists."}
{"_id": "9f27c7cd7a66f612c4807ec6e9a90d6aafd462e8", "title": "Combining Models from Multiple Sources for RGB-D Scene Recognition", "text": "Depth can complement RGB with useful cues about object volumes and scene layout. However, RGB-D image datasets are still too small for directly training deep convolutional neural networks (CNNs), in contrast to the massive monomodal RGB datasets. Previous works in RGB-D recognition typically combine two separate networks for RGB and depth data, pretrained with a large RGB dataset and then fine tuned to the respective target RGB and depth datasets. These approaches have several limitations: 1) only use low-level filters learned from RGB data, thus not being able to exploit properly depth-specific patterns, and 2) RGB and depth features are only combined at highlevels but rarely at lower-levels. In this paper, we propose a framework that leverages both knowledge acquired from large RGB datasets together with depth-specific cues learned from the limited depth data, obtaining more effective multi-source and multi-modal representations. We propose a multi-modal combination method that selects discriminative combinations of layers from the different source models and target modalities, capturing both high-level properties of the task and intrinsic low-level properties of both modalities."}
{"_id": "b57314f7f5753794aa92e7935a138993b5779eb4", "title": "Language games for autonomous robots", "text": "extraordinarily difficult and poses challenges for both integration and grounding. Here, I propose a unifying idea that meets both challenges: language games. A language game is a sequence of verbal interactions between two agents situated in a specific environment. Language games both integrate the various activities required for dialogue and ground unknown words or phrases in a specific context, which helps constrain possible meanings. Over the past five years, I have been working with a team to develop and test language games on progressively more sophisticated systems, from relatively simple camera-based systems to humanoid robots. The results of our work show that language games are a useful way to both understand and design human\u2013robot interaction."}
{"_id": "46c1fc5be8508785342c365094468c3a18b970b8", "title": "A Survey of MAC Protocols for Mission-Critical Applications in Wireless Sensor Networks", "text": "Wireless Sensor Networks (WSNs) are generally designed to support applications in long-term deployments, and thus WSN protocols are primarily designed to be energy efficient. However, the research community has recently explored new WSN applications such as industrial process automation. These mission-critical applications demand not only energy efficient operation but also strict data transport performance. In particular, data must be transported to a sink in a timely and reliable fashion. Both WSN's data transport performance and energy consumption pattern are mainly defined by the employed medium access control (MAC) protocol. Therefore, this survey paper explores to what extent existing MAC protocols for WSNs can serve mission-critical applications. The reviewed protocols are classified according to data transport performance and suitability for mission-critical applications. The survey reveals that the existing solutions have a number of limitations and only a few recently developed MAC protocols are suitable for this application domain."}
{"_id": "96dcd4f91c666f056af7ddc306445fa16cbfa9b3", "title": "Teaching CS unplugged in the high school (with limited success)", "text": "CS Unplugged is a set of active learning activities designed to introduce fundamental computer science principles without the use of computers. The program has gained significant momentum in recent years, with proponents citing deep engagement and enjoyment benefits. With these benefits in mind, we initiated a one-year outreach program involving a local high school, using the CS Unplugged program as the foundation. To our disappointment, the results were at odds with our enthusiasm --- significantly. In this paper, we describe our approach to adapting the CS Unplugged materials for use at the high school level, present our experiences teaching it, and summarize the results of our evaluation."}
{"_id": "502cb1014c79b40fb3bbe465c302d1f93965e69a", "title": "A behavioral planning framework for autonomous driving", "text": "In this paper, we propose a novel planning framework that can greatly improve the level of intelligence and driving quality of autonomous vehicles. A reference planning layer first generates kinematically and dynamically feasible paths assuming no obstacles on the road, then a behavioral planning layer takes static and dynamic obstacles into account. Instead of directly commanding a desired trajectory, it searches for the best directives for the controller, such as lateral bias and distance keeping aggressiveness. It also considers the social cooperation between the autonomous vehicle and surrounding cars. Based on experimental results from both simulation and a real autonomous vehicle platform, the proposed behavioral planning architecture improves the driving quality considerably, with a 90.3% reduction of required computation time in representative scenarios."}
{"_id": "2c6f8da69230b98fc2870ca140eaf1e135ddd932", "title": "Region growing method for the analysis of functional MRI data", "text": "Existing analytical techniques for functional magnetic resonance imaging (fMRI) data always need some specific assumptions on the time series. In this article, we present a new approach for fMRI activation detection, which can be implemented without any assumptions on the time series. Our method is based on a region growing method, which is very popular for image segmentation. A comparison of performance on fMRI activation detection is made between the proposed method and the deconvolution method and the fuzzy clustering method with receiver operating characteristic (ROC) methodology. In addition, we examine the effectiveness and usefulness of our method on real experimental data. Experimental results show that our method outperforms over the deconvolution method and the fuzzy clustering method on a number of aspects. These results suggest that our region growing method can serve as a reliable analysis of fMRI data."}
{"_id": "19403910710bbb7cff44bece752a4cd171467c10", "title": "Rise and fall patterns of information diffusion: model and implications", "text": "The recent explosion in the adoption of search engines and new media such as blogs and Twitter have facilitated faster propagation of news and rumors. How quickly does a piece of news spread over these media? How does its popularity diminish over time? Does the rising and falling pattern follow a simple universal law?\n In this paper, we propose SpikeM, a concise yet flexible analytical model for the rise and fall patterns of influence propagation. Our model has the following advantages: (a) unification power: it generalizes and explains earlier theoretical models and empirical observations; (b) practicality: it matches the observed behavior of diverse sets of real data; (c) parsimony: it requires only a handful of parameters; and (d) usefulness: it enables further analytics tasks such as fore- casting, spotting anomalies, and interpretation by reverse- engineering the system parameters of interest (e.g. quality of news, count of interested bloggers, etc.).\n Using SpikeM, we analyzed 7.2GB of real data, most of which were collected from the public domain. We have shown that our SpikeM model accurately and succinctly describes all the patterns of the rise-and-fall spikes in these real datasets."}
{"_id": "d7b6f8e617e0bc16be901a7396e4525651914e82", "title": "Electrical stimulation of a small brain area reversibly disrupts consciousness", "text": "The neural mechanisms that underlie consciousness are not fully understood. We describe a region in the human brain where electrical stimulation reproducibly disrupted consciousness. A 54-year-old woman with intractable epilepsy underwent depth electrode implantation and electrical stimulation mapping. The electrode whose stimulation disrupted consciousness was between the left claustrum and anterior-dorsal insula. Stimulation of electrodes within 5mm did not affect consciousness. We studied the interdependencies among depth recording signals as a function of time by nonlinear regression analysis (h(2) coefficient) during stimulations that altered consciousness and stimulations of the same electrode at lower current intensities that were asymptomatic. Stimulation of the claustral electrode reproducibly resulted in a complete arrest of volitional behavior, unresponsiveness, and amnesia without negative motor symptoms or mere aphasia. The disruption of consciousness did not outlast the stimulation and occurred without any epileptiform discharges. We found a significant increase in correlation for interactions affecting medial parietal and posterior frontal channels during stimulations that disrupted consciousness compared with those that did not. Our findings suggest that the left claustrum/anterior insula is an important part of a network that subserves consciousness and that disruption of consciousness is related to increased EEG signal synchrony within frontal-parietal networks."}
{"_id": "30c98a8bcfff2801795fd78dfde8f666c7aae2e1", "title": "A Comprehensive Framework for Image Inpainting", "text": "Inpainting is the art of modifying an image in a form that is not detectable by an ordinary observer. There are numerous and very different approaches to tackle the inpainting problem, though as explained in this paper, the most successful algorithms are based upon one or two of the following three basic techniques: copy-and-paste texture synthesis, geometric partial differential equations (PDEs), and coherence among neighboring pixels. We combine these three building blocks in a variational model, and provide a working algorithm for image inpainting trying to approximate the minimum of the proposed energy functional. Our experiments show that the combination of all three terms of the proposed energy works better than taking each term separately, and the results obtained are within the state-of-the-art."}
{"_id": "1986349b2df8b9d4064453d169d69ecfde283e27", "title": "A Computational Model of Plan-Based Narrative Conflict at the Fabula Level", "text": "Conflict is an essential element of interesting stories. In this paper, we operationalize a narratological definition of conflict and extend established narrative planning techniques to incorporate this definition. The conflict partial order causal link planning algorithm (CPOCL) allows narrative conflict to arise in a plan while maintaining causal soundness and character believability. We also define seven dimensions of conflict in terms of this algorithm's knowledge representation. The first three-participants, reason, and duration-are discrete values which answer the \u201cwho?\u201d \u201cwhy?\u201d and \u201cwhen?\u201d questions, respectively. The last four-balance, directness, stakes, and resolution-are continuous values which describe important narrative properties that can be used to select conflicts based on the author's purpose. We also present the results of two empirical studies which validate our operationalizations of these narrative phenomena. Finally, we demonstrate the different kinds of stories which CPOCL can produce based on constraints on the seven dimensions."}
{"_id": "2766913aabb151107b28279645b915a3aa86c816", "title": "Explanation-Based Learning: A Problem Solving Perspective", "text": "This article outlines explanation-based learning (EBL) and its role in improving problem-solving performance through experience. Unlike inductive systems, which learn by abstracting common properties from multiple examples, EBL systems explain why a particular example is an instance of a concept. The explanations are then converted into operational recognition rules. In essence, the EBL approach is analytical and knowledge-intensive, whereas inductive methods are empirical and knowledge-poor. This article focuses on extensions of the basic EBL method and their integration with the PRODIGY problem-solving system. PRODIGY'S EBL method is specifically designed to acquire search control rules that are effective in reducing total search time for complex task domains. Domain-specific search control rules are learned from successful problem-solving decisions, costly failures, and unforeseen goal interactions. The ability to specify multiple learning strategies in a declarative manner enables EBL to serve as a general technique for performance improvement. PRODIGY'S EBL method is analyzed, illustrated with several examples and performance results, and compared with other methods for integrating EBL and problem solving. 'Present address: Artificial Intelligence Research Branch, NASA Ames Research Center, Sterling Federal Systems, Mail Stop 244-17, Moffett Field CA 94035. This research was sponsored in part by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 4976, Amendment 20, under contract number F33615-87-C-1499, monitored by the Air Force Avionics Laboratory, in part by the Office of Naval Research under contracts N00014-84-K-0345 (N91) and N00014-86-K-0678-N123, in part by NASA under contract NCC 2-463, in part by the Army Research Institute under contract MDA903-85-C-0324, under subcontract 487650-25537 through the University of California, Irvine, and in part by small contributions from private institutions. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of DARPA, ONR, NASA, ARI, or the US Government. The first and fifth authors were supported by AT&T Bell Labs Ph.D. Scholarships. Table of"}
{"_id": "516bd2e2bfc7405568f48560e02154135616374c", "title": "Narrative Planning: Balancing Plot and Character", "text": "Narrative, and in particular storytelling, is an important part of the human experience. Consequently, computational systems that can reason about narrative can be more effective communicators, entertainers, educators, and trainers. One of the central challenges in computational narrative reasoning is narrative generation, the automated creation of meaningful event sequences. There are many factors \u2013 logical and aesthetic \u2013 that contribute to the success of a narrative artifact. Central to this success is its understandability. We argue that the following two attributes of narratives are universal: (a) the logical causal progression of plot, and (b) character believability. Character believability is the perception by the audience that the actions performed by characters do not negatively impact the audience\u2019s suspension of disbelief. Specifically, characters must be perceived by the audience to be intentional agents. In this article, we explore the use of refinement search as a technique for solving the narrative generation problem \u2013 to find a sound and believable sequence of character actions that transforms an initial world state into a world state in which goal propositions hold. We describe a novel refinement search planning algorithm \u2013 the Intent-based Partial Order Causal Link (IPOCL) planner \u2013 that, in addition to creating causally sound plot progression, reasons about character intentionality by identifying possible character goals that explain their actions and creating plan structures that explain why those characters commit to their goals. We present the results of an empirical evaluation that demonstrates that narrative plans generated by the IPOCL algorithm support audience comprehension of character intentions better than plans generated by conventional partial-order planners."}
{"_id": "444e8aacda5f06d2a6c5197c89567638eaccb677", "title": "ISO/IEC 27000, 27001 and 27002 for Information Security Management", "text": "With the increasing significance of information technology, there is an urgent need for adequate measures of information security. Systematic information security management is one of most important initiatives for IT management. At least since reports about privacy and security breaches, fraudulent accounting practices, and attacks on IT systems appeared in public, organizations have recognized their responsibilities to safeguard physical and information assets. Security standards can be used as guideline or framework to develop and maintain an adequate information security management system (ISMS). The standards ISO/IEC 27000, 27001 and 27002 are international standards that are receiving growing recognition and adoption. They are referred to as \u201ccommon language of organizations around the world\u201d for information security [1]. With ISO/IEC 27001 companies can have their ISMS certified by a third-party organization and thus show their customers evidence of their security measures."}
{"_id": "532b6b93354c29a716c9089d81bc8b66b8e10293", "title": "A 0.3 pJ/access 8T data-aware SRAM utilizing column-based data encoding for ultra-low power applications", "text": "This paper proposes an 8T SRAMs utilizing a column-based data encoding scheme to reduce read and write power when there are similarities between consecutive data. It is useful in image processing applications where nearby pixels tend to have similar values. The proposed design has two modes of operation: normal and sequential modes. In the normal mode, it operates as a normal SRAM. In the sequential mode, bit-wise differences between consecutive data are written instead of the actual data. This leads to a much higher number of zeros in the array. Accordingly, a new data-aware bitline pre-charge scheme is proposed to minimize write power when writing a zero. A PVT-tracking reference voltage generator is also employed to compensate the read-bitline leakage for ultra-low voltage operation. On average, it offers more than 30% WBL switching power reduction. A 32Kb SRAM implemented in 65 nm CMOS process demonstrated successful operation down to 0.36 V. The total power consumption is 0.37 \u03bcW, corresponding to the maximum frequency of 0.25 MHz. Its minimum energy is 0.3 pJ/access achieved at 0.5 V."}
{"_id": "292eda72f970d94c295f4289f01fdb78f99a1036", "title": "Slotless Bearingless Disk Drive for High-Speed and High-Purity Applications", "text": "In this paper, a bearingless drive for high-speed applications with high purity and special chemical demands is introduced. To achieve high rotational speeds with low losses, a slotless bearingless disk drive with toroidal windings is used. We present the working principle of the bearingless drive as well as a model for calculating the achievable drive torque. An advantageous winding system for independent force and torque generation is proposed, which can be realized with standard inverter technology. Additionally, the winding inductances are examined to evaluate the dynamic properties of bearing and drive. The findings are verified with simulation results and the system performance is successfully demonstrated on an experimental prototype, which runs up to 20 000 rpm and is designed for an output power of 1 kW."}
{"_id": "1248ec7fae6c2b34a40cc0b99100227af6d2e980", "title": "Integrating low-rank and group-sparse structures for robust multi-task learning", "text": "Multi-task learning (MTL) aims at improving the generalization performance by utilizing the intrinsic relationships among multiple related tasks. A key assumption in most MTL algorithms is that all tasks are related, which, however, may not be the case in many real-world applications. In this paper, we propose a robust multi-task learning (RMTL) algorithm which learns multiple tasks simultaneously as well as identifies the irrelevant (outlier) tasks. Specifically, the proposed RMTL algorithm captures the task relationships using a low-rank structure, and simultaneously identifies the outlier tasks using a group-sparse structure. The proposed RMTL algorithm is formulated as a non-smooth convex (unconstrained) optimization problem. We propose to adopt the accelerated proximal method (APM) for solving such an optimization problem. The key component in APM is the computation of the proximal operator, which can be shown to admit an analytic solution. We also theoretically analyze the effectiveness of the RMTL algorithm. In particular, we derive a key property of the optimal solution to RMTL; moreover, based on this key property, we establish a theoretical bound for characterizing the learning performance of RMTL. Our experimental results on benchmark data sets demonstrate the effectiveness and efficiency of the proposed algorithm."}
{"_id": "10f9b8f6e20ae6de6baab171050640acd48c6c64", "title": "A clinical and genetic study of the Say/Barber/Biesecker/Young-Simpson type of Ohdo syndrome.", "text": "We report a series of eight patients with the Say/Barber/Biesecker/Young-Simpson (SBBYS) type of Ohdo syndrome, which is the largest cohort described to date. We expand on the type, frequency and severity of the clinical characteristics in this condition; comment on the natural history of Ohdo syndrome and further refine previously published diagnostic criteria. Cytogenetic investigations and microarray CGH analysis undertaken in this cohort of patients failed to identify a chromosomal aetiology. It remains possible that this rare condition is heterogeneous and therefore caution must be undertaken during counselling until the underlying genetic mechanism(s) is (are) identified."}
{"_id": "56997ac6ab606e07c0ffabd9ba5b6d80be2e256b", "title": "WiFi RSS fingerprinting indoor localization for mobile devices", "text": "While WiFi-based indoor localization is attractive, the need for a significant degree of pre-deployment effort is a key challenge. In this paper, indoor localization with no pre-deployment effort in an indoor space, such as an office building corridor, with WiFi coverage but no a priori knowledge of the placement of the access points(APs) is implemented for mobile devices. WiFi Received Signal Strength(RSS) in the considered environment is used to build radio maps using WiFi fingerprinting approach. An offline RSS fingerprint of the environment is compared with an online RSS measurement to estimate the location of a user. Two architectures are developed based on this localization algorithm. The first one involves a client-server approach where the localization algorithm runs on the server whereas the second one is a standalone architecture and the algorithm runs on the SD card of the mobile device. Experimental results in the considered environment validate the approach for both architectures."}
{"_id": "ce3298dbcfcf4b2175a9124d67a0120eb1cb5706", "title": "The Multi-User Security of Authenticated Encryption: AES-GCM in TLS 1.3", "text": "We initiate the study of multi-user (mu) security of authenticated encryption (AE) schemes as a way to rigorously formulate, and answer, questions about the \u201crandomized nonce\u201d mechanism proposed for the use of the AE scheme GCM in TLS 1.3. We (1) Give definitions of mu ind (indistinguishability) and mu kr (key recovery) security for AE (2) Characterize the intent of nonce randomization as being improved mu security as a defense against mass surveillance (3) Cast the method as a (new) AE scheme RGCM (4) Analyze and compare the mu security of both GCM and RGCM in the model where the underlying block cipher is ideal, showing that the mu security of the latter is indeed superior in many practical contexts to that of the former, and (5) Propose an alternative AE scheme XGCM having the same efficiency as RGCM but better mu security and a more simple and modular design."}
{"_id": "fe9b19732e41c2c3b0e3d2c4f2d90be2f5129708", "title": "How Amazon web services uses formal methods", "text": "Engineers use TLA+ to prevent serious but subtle bugs from reaching production."}
{"_id": "d635f8770d54dfe857f515469c83c1696f0c1fc5", "title": "Vestibular papillomatosis as a normal vulvar anatomical condition.", "text": "At the beginning of the nineteen-eighties, vulvar vestibular papillomatosis (VVP) was thought to be a human papilloma virus (HPV) disease. Based upon these findings many clinicians have been treating this condition with laser ablation or by topical application of podophyllin or trichloroacetic acid. Currently, most authors believe that VVP should be considered an anatomical variant of the vestibular mucosa and not HPV related. We present a case of VVP in which there was no histological or molecular evidence of HPV; unnecessary treatment should be avoided."}
{"_id": "d9374967b6491f1fae422bfe7746a1ecce284824", "title": "Motion-based grasp selection: Improving traditional control strategies of myoelectric hand prosthesis", "text": "This paper introduces a novel prosthetic hand control architecture using inertial information for grasp prediction in order to reduce the cognitive burden of amputees. A pair of inertial measurement sensors (IMUs) are fitted on the wrist and bicep to record arm trajectory when reaching to grasp an object. Each object class can be associated with different methods for grasping and manipulation. An observation experiment was conducted to find the most common grasping methods for generic object classes: Very Small (VS), Small (S), and Medium (M). A Cup (CP) class was also examined to find differences in grasping habits for pick and place, and drinking applications. The resulting grasps were used to test the discriminatory ability of inertial motion features in the upper limb for VS, S and CP object classes. Subject experiments demonstrated an average classification success rate of 90.8%, 69.2% and 88.1% for VS, S and CP classes respectively when using a k-nearest neighbors algorithm with a Euclidean distance metric. The results suggest that inertial motion features have great potential to predict the grasp pattern during reach, and to the authors' knowledge, is the first IMU-based control strategy to utilize natural motion that is aimed at hand prosthesis control."}
{"_id": "c86566872d3e5f896b4a8bfb489e8b4886059355", "title": "Power-constrained CMOS scaling limits", "text": "The scaling of CMOS technology has progressed rapidly for three decades, but may soon come to an end because of powerdissipation constraints. The primary problem is static power dissipation, which is caused by leakage currents arising from quantum tunneling and thermal excitations. The details of these effects, along with other scaling issues, are discussed in the context of their dependence on application. On the basis of these considerations, the limits of CMOS scaling are estimated for various application scenarios."}
{"_id": "c29c0229d219b5eb518f08a55a88c6269e5da386", "title": "Rejecting Motion Outliers for Efficient Crowd Anomaly Detection", "text": "Crowd anomaly detection is a key research area in vision-based surveillance. Most of the crowd anomaly detection algorithms are either too slow, bulky, or power-hungry to be applicable for battery-powered surveillance cameras. In this paper, we present a new crowd anomaly detection algorithm. The proposed algorithm creates a feature for every superpixel that includes the contribution from the neighboring superpixels only if their direction of motion conforms with the dominant direction of motion in the region. We also propose using univariate Gaussian discriminant analysis with the K-means algorithm for classification. Our method provides superior accuracy over numerous deep learning-based and handcrafted feature-based approaches. We also present a low-power FPGA implementation of the proposed method. The algorithm is developed such that features are extracted over non-overlapping pixels. This allows gating inputs to numerous modules resulting in higher power efficiency. The maximum energy required per pixel is 2.43 nJ in our implementation. 126.65 Mpixels can be processed per second by the proposed implementation. The speed, power, and accuracy performance of our method make it competitive for surveillance applications, especially battery-powered surveillance cameras."}
{"_id": "8c76495e482d61b87265dfd64de035f720585872", "title": "A Four-Switch Single-Stage Single-Phase Buck\u2013Boost Inverter", "text": "This paper proposes a single-phase, single-stage buck\u2013boost inverter for photovoltaic (PV) systems. The presented topology has one common terminal in input and output ports which eliminates common mode leakage current problem in the grid-connected PV applications. Although it uses four switches, its operation is bimodal and only two switches receive high-frequency pulse width modulation signals in each mode. Its principle of operation is described in detail with the help of equivalent circuits. Its dynamic model is presented, based on which a bimodal controller is designed. Experimental results, in stand-alone and grid-connected mode, obtained with a 300-W laboratory prototype are presented to validate its performance."}
{"_id": "c7d1881946e0fb4e7baa419faa99df53bbf7a5b3", "title": "Vehicle Number Plates Detection and Recognition using improved Algorithms : A Review with Tanzanian Case study", "text": "\u2014 Invented in 1976, Number Plates Recognition (NPR) has since found wide commercial applications, making its research prospects challenging and scientifically interesting. A complete NPR system functions by vz steps, license plate; localization, sizing and orientation, normalization, character recognitions and geometric analysis. This paper is a review of NPR preliminary stages; it explains number plate localization, sizing and orientations as well as normalizations sections of the Number Plates Detection and Recognition-Tanzania Case study. MATLAB R2012b is employed in these processes. The input incorporated includes front and rear photographic images of vehicles, for proximity and simulation purposes the ample angle of image is 90 degree +-15. The captured image is converted to gray scale, binarized and edge detection algorithms are used to enhance edges. The output of this stage provides the input feature extraction, segmentation and"}
{"_id": "26be7edd6adccefcb54ec20b1361832f55ba26be", "title": "Featureless Visual Processing for SLAM in Changing Outdoor Environments", "text": "Vision-based SLAM is mostly a solved problem providing clear, sharp images can be obtained. However, in outdoor environments a number of factors such as rough terrain, high speeds and hardware limitations can result in these conditions not being met. High speed transit on rough terrain can lead to image blur and under/over exposure, problems that cannot easily be dealt with using low cost hardware. Furthermore, recently there has been a growth in interest in lifelong autonomy for robots, which brings with it the challenge in outdoor environments of dealing with a moving sun and lack of constant artificial lighting. In this paper, we present a lightweight approach to visual localization and visual odometry that addresses the challenges posed by perceptual change and low cost cameras. The approach combines low resolution imagery with the SLAM algorithm, RatSLAM. We test the system using a cheap consumer camera mounted on a small vehicle in a mixed urban and vegetated environment, at times ranging from dawn to dusk and in conditions ranging from sunny weather to rain. We first show that the system is able to provide reliable mapping and recall over the course of the day and incrementally incorporate new visual scenes from different times into an existing map. We then restrict the system to only learning visual scenes at one time of day, and show that the system is still able to localize and map at other times of day. The results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features."}
{"_id": "54eed22ff377dcb0472c8de454b1261988c4a9ac", "title": "Driver behavior recognition based on deep convolutional neural networks", "text": "Traffic safety is a severe problem around the world. Many road accidents are normally related with the driver's unsafe driving behavior, e.g. eating while driving. In this work, we propose a vision-based solution to recognize the driver's behavior based on convolutional neural networks. Specifically, given an image, skin-like regions are extracted by Gaussian Mixture Model, which are passed to a deep convolutional neural networks model, namely R*CNN, to generate action labels. The skin-like regions are able to provide abundant semantic information with sufficient discriminative capability. Also, R*CNN is able to select the most informative regions from candidates to facilitate the final action recognition. We tested the proposed methods on Southeast University Driving-posture Dataset and achieve mean Average Precision(mAP) of 97.76% on the dataset which prove the proposed method is effective in drivers's action recognition."}
{"_id": "c9a9a1e13031c6548c7e52e45ed9b69edc6a4921", "title": "Emotion regulation: affective, cognitive, and social consequences.", "text": "One of life's great challenges is successfully regulating emotions. Do some emotion regulation strategies have more to recommend them than others? According to Gross's (1998, Review of General Psychology, 2, 271-299) process model of emotion regulation, strategies that act early in the emotion-generative process should have a different profile of consequences than strategies that act later on. This review focuses on two commonly used strategies for down-regulating emotion. The first, reappraisal, comes early in the emotion-generative process. It consists of changing the way a situation is construed so as to decrease its emotional impact. The second, suppression, comes later in the emotion-generative process. It consists of inhibiting the outward signs of inner feelings. Experimental and individual-difference studies find reappraisal is often more effective than suppression. Reappraisal decreases emotion experience and behavioral expression, and has no impact on memory. By contrast, suppression decreases behavioral expression, but fails to decrease emotion experience, and actually impairs memory. Suppression also increases physiological responding for suppressors and their social partners. This review concludes with a consideration of five important directions for future research on emotion regulation processes."}
{"_id": "6c57b758334576abb98c703eb013ddb36888fa7f", "title": "OAuth Demystified for Mobile Application Developers", "text": "OAuth has become a highly influential protocol due to its swift and wide adoption in the industry. The initial objective of the protocol was specific: it serves the authorization needs for websites. What motivates our work is the realization that the protocol has been significantly re-purposed and re-targeted over the years: (1) all major identity providers, e.g., Facebook, Google and Microsoft, have re-purposed OAuth for user authentication; (2) developers have re-targeted OAuth to the mobile platforms, in addition to the traditional web platform. Therefore, we believe that it is necessary and timely to conduct an in-depth study to demystify OAuth for mobile application developers. Our work consists of two pillars: (1) an in-house study of the OAuth protocol documentation that aims to identify what might be ambiguous or unspecified for mobile developers; (2) a field-study of over 600 popular mobile applications that highlights how well developers fulfill the authentication and authorization goals in practice. The result is really worrisome: among the 149 applications that use OAuth, 89 of them (59.7%) were incorrectly implemented and thus vulnerable. In the paper, we pinpoint the key portions in each OAuth protocol flow that are security critical, but are confusing or unspecified for mobile application developers. We then show several representative cases to concretely explain how real implementations fell into these pitfalls. Our findings have been communicated to vendors of the vulnerable applications. Most vendors positively confirmed the issues, and some have applied fixes. We summarize lessons learned from the study, hoping to provoke further thoughts about clear guidelines for OAuth usage in mobile applications."}
{"_id": "63c08ef4019bac59c8df18f27d32def8abf1890d", "title": "Playful public participation in urban planning: A case study for online serious games", "text": "The aim of this paper is to study the implementation of online games to encourage public participation in urban planning. Its theoretical foundations are based on previous work in public participatory geographical information systems (PP GISs), play and games, with a special focus on serious games. Serious games aim to support learning processes in a new, more playful way. We developed the concept of playful public participation in urban planning, including playful elements such as storytelling, walking and moving, sketching, drawing, and games. A group of students designed an online serious public participatory game entitled NextCampus. The case study used in NextCampus was taken from the real-world question of a possible move of a university campus to a new location in the city of Hamburg, Germany. The development of the serious public participatory game NextCampus resulted in a physical prototype, user interface design, and a computational model of the game. The NextCampus game was tested with the help of two groups of urban planning students and presented to three external experts who provided valuable recommendations for further development. The critical comments questioned the level of complexity involved in such games. The positive comments included recognition of the potential for joy and the play-fulness a game like NextCampus could evoke. Public participatory online applications aim to attract citizens to discuss current issues related to their environment and to improve the process of public participation in general. An integration of geographic information systems (GISs) with public participatory tools represents one of the latest innovations in this area. ways of integrating the new applications into participatory processes and considers which new functionalities and technical characteristics could offer the most benefit to users. In the past, these technologies and other map-based applications were frequently criticized as being too complex for the majority of potential users (Steinmann, Krek, & Blaschke, 2004). New forms of collaboration and technical solutions emerged during the Web 2.0 era. For example , Google Maps and Google Earth can be used by lay users and non-experts without intense training. Recent research on collabo-rative mapping also known as ''geography without geographers'' prior work in PP GIS to include much wider, distributed participation (Hardy, 2008). Despite these new forms of collaboration and innovative technologies , Moody (2007) demonstrates that the use of GIS technology to involve citizens in participatory urban planning does not seem to empower citizens. An important factor in such findings \u2026"}
{"_id": "3cf1373afe806ea0fff683987f135f1d6b1db39f", "title": "Distributed Optimization Using the Primal-Dual Method of Multipliers", "text": "In this paper, we propose the primal-dual method of multipliers (PDMM) for distributed optimization over a graph. In particular, we optimize a sum of convex functions defined over a graph, where every edge in the graph carries a linear equality constraint. In designing the new algorithm, an augmented primal-dual Lagrangian function is constructed which smoothly captures the graph topology. It is shown that a saddle point of the constructed function provides an optimal solution of the original problem. Further under both the synchronous and asynchronous updating schemes, PDMM has the convergence rate of $O(1/K)$ (where $K$ denotes the iteration index) for general closed, proper, and convex functions. Other properties of PDMM such as convergence speeds versus different parameter-settings and resilience to transmission failure are also investigated through the experiments of distributed averaging."}
{"_id": "165c8add001bd460f66b8c0fa191df0b8e6af6f6", "title": "The use of pit and fissure sealants.", "text": "This paper reviews key issues of sealant use and methodology and poses recommendations to inform the discussion toward a consensus statement by participants. A comprehensive review of sealant literature, including policy recommendations from previous conferences that reviewed best practices for sealant use, was completed. Building on the review paper by Simonsen and on previous policy statements by dental and public health groups, this paper discusses key questions about sealant use in light of contemporary caries data and cost-benefit analyses. In addition, newest material advancements are reviewed to establish the next step in sealant improvement for young patients."}
{"_id": "df46dfaebb86a62d7dc31bd3e8857fe9eb0abb84", "title": "Physician Burnout and Well-Being: A Systematic Review and Framework for Action.", "text": "BACKGROUND\nPhysician burnout in the United States has reached epidemic proportions and is rising rapidly, although burnout in other occupations is stable. Its negative impact is far reaching and includes harm to the burned-out physician, as well as patients, coworkers, family members, close friends, and healthcare organizations.\n\n\nOBJECTIVE\nThe purpose of this review is to provide an accurate, current summary of what is known about physician burnout and to develop a framework to reverse its current negative impact, decrease its prevalence, and implement effective organizational and personal interventions.\n\n\nDATA SOURCES\nI completed a comprehensive MEDLINE search of the medical literature from January 1, 2000, through December 28, 2016, related to medical student and physician burnout, stress, depression, suicide ideation, suicide, resiliency, wellness, and well-being. In addition, I selectively reviewed secondary articles, books addressing the relevant issues, and oral presentations at national professional meetings since 2013.\n\n\nSTUDY SELECTION\nHealthcare organizations within the United States were studied.\n\n\nRESULTS\nThe literature review is presented in 5 sections covering the basics of defining and measuring burnout; its impact, incidence, and causes; and interventions and remediation strategies.\n\n\nCONCLUSIONS\nAll US medical students, physicians in training, and practicing physicians are at significant risk of burnout. Its prevalence now exceeds 50%. Burnout is the unintended net result of multiple, highly disruptive changes in society at large, the medical profession, and the healthcare system. Both individual and organizational strategies have been only partially successful in mitigating burnout and in developing resiliency and well-being among physicians. Two highly effective strategies are aligning personal and organizational values and enabling physicians to devote 20% of their work activities to the part of their medical practice that is especially meaningful to them. More research is needed."}
{"_id": "c35dd570f87c7437dfd50b74c80a110559fa182c", "title": "Performance Analysis of Alamouti Code MIMO-OFDM Systems for Error Control and IQ Impairments", "text": "Bit Error Rate (BER) performance is the most important metric of any digital communication system. MIMO-OFDM systems designed with Alamouti Coding technique, using Error Control Coding schemes has been investigated under this study to achieve BER enhancement and scrutinized for Additive White Gaussian Noise (AWGN) Channel with Inphase and Quadrature-phase (IQ) impairments. This paper details out the model developed using NI LabVIEW and discusses simulation results comparing the error control coding RS, RS-CC and Convolution encoding with 4-PSK and 4-QAM. General Terms Bit Error Rate, Space Time Coding (STC)."}
{"_id": "fd81880d09fa9997be8a0fccd5f1bf3fc4eb3fcb", "title": "Brain MRI analysis for Alzheimer\u2019s disease diagnosis using an ensemble system of deep convolutional neural networks", "text": "Alzheimer\u2019s disease is an incurable, progressive neurological brain disorder. Earlier detection of Alzheimer\u2019s disease can help with proper treatment and prevent brain tissue damage. Several statistical and machine learning models have been exploited by researchers for Alzheimer\u2019s disease diagnosis. Analyzing magnetic resonance imaging (MRI) is a common practice for Alzheimer\u2019s disease diagnosis in clinical research. Detection of Alzheimer\u2019s disease is exacting due to the similarity in Alzheimer\u2019s disease MRI data and standard healthy MRI data of older people. Recently, advanced deep learning techniques have successfully demonstrated human-level performance in numerous fields including medical image analysis. We propose a deep convolutional neural network for Alzheimer\u2019s disease diagnosis using brain MRI data analysis. While most of the existing approaches perform binary classification, our model can identify different stages of Alzheimer\u2019s disease and obtains superior performance for early-stage diagnosis. We conducted ample experiments to demonstrate that our proposed model outperformed comparative baselines on the Open Access Series of Imaging Studies dataset."}
{"_id": "5d92ce09b25a8ce3b874e24ee03d2a76adebbd2f", "title": "Future Internet: The Internet of Things Architecture, Possible Applications and Key Challenges", "text": "The Internet is continuously changing and evolving. The main communication form of present Internet is human-human. The Internet of Things (IoT) can be considered as the future evaluation of the Internet that realizes machine-to-machine (M2M) learning. Thus, IoT provides connectivity for everyone and everything. The IoT embeds some intelligence in Internet-connected objects to communicate, exchange information, take decisions, invoke actions and provide amazing services. This paper addresses the existing development trends, the generic architecture of IoT, its distinguishing features and possible future applications. This paper also forecast the key challenges associated with the development of IoT. The IoT is getting increasing popularity for academia, industry as well as government that has the potential to bring significant personal, professional and economic benefits."}
{"_id": "9dd6ff42ef3c3d71029ba45d7f85bc37648011d5", "title": "Towards smart city: M2M communications with software agent intelligence", "text": "Recent advances in the fields of wireless technology have exhibited a strong potential and tendency on improving human life by means of ubiquitous communications devices that enable smart, distributed services. In fact, traditional human to human (H2H) communications are gradually falling behind the scale of necessity. Consequently, machine to machine (M2M) communications have surpassed H2H, thus drawing significant interest from industry and the research community recently. This paper first presents a four-layer architecture for internet of things (IoT). Based on this architecture, we employ the second generation RFID technology to propose a novel intelligent system for M2M communications."}
{"_id": "b7fba13783ff14db9eb869dc3c9e4c69a9455b04", "title": "Internet of things capability and alliance: Entrepreneurial orientation, market orientation and product and process innovation", "text": "Internet of things capability and alliance: entrepreneurial orientation, market orientation and product and process innovation Xiaoyu Yu Bang Nguyen Yi Chen Article information: To cite this document: Xiaoyu Yu Bang Nguyen Yi Chen , (2016),\"Internet of things capability and alliance: entrepreneurial orientation, market orientation and product and process innovation\", Internet Research, Vol. 26 Iss 2 pp. Permanent link to this document: http://dx.doi.org/10.1108/IntR-10-2014-0265"}
{"_id": "1fe84868eb184162a92e2d923c13b8f6e93772ee", "title": "DepSys: Dependency aware integration of cyber-physical systems for smart homes", "text": "As sensor and actuator networks mature, they become a core utility of smart homes like electricity and water and enable the running of many CPS applications. Like other Cyber-Physical Systems (CPSs), when a number of applications share physical world entities, it raises many systems of systems interdependency problems. Such problems arise in the cyber part mainly because each application has assumptions on the physical world entities without knowing how other applications work. In this work, we propose DepSys, a utility sensing and actuation infrastructure for smart homes that provides comprehensive strategies to specify, detect, and resolve conflicts in a home setting. Based on real home data, we demonstrate the severity of conflicts when multiple CPSs are integrated and the significant ability of detecting and resolving such conflicts using DepSys."}
{"_id": "13cf0ec38d349f1c83f0e6c26e89c85222955f5f", "title": "Scaling Distributed Machine Learning with System and Algorithm Co-design", "text": "Due to the rapid growth of data and the ever increasing model complexity, which often manifests itself in the large number of model parameters, today, many important machine learning problems cannot be efficiently solved by a single machine. Distributed optimization and inference is becoming more and more inevitable for solving large scale machine learning problems in both academia and industry. However, obtaining an efficient distributed implementation of an algorithm, is far from trivial. Both intensive computational workloads and the volume of data communication demand careful design of distributed computation systems and distributed machine learning algorithms. In this thesis, we focus on the co-design of distributed computing systems and distributed optimization algorithms that are specialized for large machine learning problems. In the first part, we propose two distributed computing frameworks: Parameter Server, a distributed machine learning framework that features efficient data communication between the machines; MXNet, a multi-language library that aims to simplify the development of deep neural network algorithms. We have witnessed the wide adoption of the two proposed systems in the past two years. They have enabled and will continue to enable more people to harness the power of distributed computing to design efficient large-scale machine learning applications. In the second part, we examine a number of distributed optimization problems in machine learning, leveraging the two computing platforms. We present new methods to accelerate the training process, such as data partitioning with better locality properties, communication friendly optimization methods, and more compact statistical models. We implement the new algorithms on the two systems and test on large scale real data sets. We successfully demonstrate that careful co-design of computing systems and learning algorithms can greatly accelerate large scale distributed machine learning."}
{"_id": "663d999090f35ed660b574804799c745b9737562", "title": "Hardware-software integrated approaches to defend against software cache-based side channel attacks", "text": "Software cache-based side channel attacks present serious threats to modern computer systems. Using caches as a side channel, these attacks are able to derive secret keys used in cryptographic operations through legitimate activities. Among existing countermeasures, software solutions are typically application specific and incur substantial performance overhead. Recent hardware proposals including the Partition-Locked cache (PLcache) and Random-Permutation cache (RPcache) [23], although very effective in reducing performance overhead while enhancing the security level, may still be vulnerable to advanced cache attacks. In this paper, we propose three hardware-software approaches to defend against software cache-based attacks - they present different tradeoffs between hardware complexity and performance overhead. First, we propose to use preloading to secure the PLcache. Second, we leverage informing loads, which is a lightweight architectural support originally proposed to improve memory performance, to protect the RPcache. Third, we propose novel software permutation to replace the random permutation hardware in the RPcache. This way, regular caches can be protected with hardware support for informing loads. In our experiments, we analyze various processor models for their vulnerability to cache attacks and demonstrate that even to the processor model that is most vulnerable to cache attacks, our proposed software-hardware integrated schemes provide strong security protection."}
{"_id": "0095c269e7d0c990249312687fc43521019809c4", "title": "Modelling Interaction of Sentence Pair with Coupled-LSTMs", "text": "Recently, there is rising interest in modelling the interactions of two sentences with deep neural networks. However, most of the existing methods encode two sequences with separate encoders, in which a sentence is encoded with little or no information from the other sentence. In this paper, we propose a deep architecture to model the strong interaction of sentence pair with two coupled-LSTMs. Specifically, we introduce two coupled ways to model the interdependences of two LSTMs, coupling the local contextualized interactions of two sentences. We then aggregate these interactions and use a dynamic pooling to select the most informative features. Experiments on two very large datasets demonstrate the efficacy of our proposed architecture and its superiority to state-ofthe-art methods."}
{"_id": "390ee4176b464148d67a16af088802b937845221", "title": "An introduction to the Rasch measurement model: an example using the Hospital Anxiety and Depression Scale (HADS).", "text": "OBJECTIVES\nTo demonstrate the use of Rasch analysis by assessing the appropriateness of utilizing the Hospital Anxiety and Depression Scale (HADS) total score (HADS-14) as a measure of psychological distress.\n\n\nDESIGN\nCross-sectional, using Rasch analysis.\n\n\nMETHODS\nThe HADS was administered to 296 patients attending an out-patient musculoskeletal rehabilitation program. Rasch analysis was conducted using RUMM2020 software to assess the overall fit of the model, the response scale used, individual item fit, differential item functioning (DIF) and person separation.\n\n\nRESULTS\nRasch analysis supported the viability of the HADS-14 as a measure of psychological distress. It showed good person separation, little disordering of the thresholds and no evidence of DIE One anxiety item (item 11) showed some misfit to the model. The residuals patterned into the two subscales (anxiety and depression), but the person estimate derived from these two subscales was not statistically different to that derived from all items taken together, supporting the assumption of unidimensionality. A cut-point of 12 on the HADS-14 identified all cases that were classified as both anxious and depressed on the original individual HADS subscales.\n\n\nCONCLUSIONS\nThe results of Rasch analysis support the use of the HADS-14 as a global measure of psychological distress. The study demonstrates the usefulness of Rasch analysis in assessing the psychometric properties of a scale and suggests that further use of this technique to assess the HADS-14 in other clinical groups is warranted."}
{"_id": "64a4ccced4e9f7e58615e3d885af69b323e7b8c8", "title": "Snapshot Image Mapping Spectrometer (IMS) with high sampling density for hyperspectral microscopy", "text": "A snapshot Image Mapping Spectrometer (IMS) with high sampling density is developed for hyperspectral microscopy, measuring a datacube of dimensions 285 x 285 x 60 (x, y, lambda). The spatial resolution is approximately 0.45 microm with a FOV of 100 x 100 microm(2). The measured spectrum is from 450 nm to 650 nm and is sampled by 60 spectral channels with average sampling interval approximately 3.3 nm. The channel's spectral resolution is approximately 8nm. The spectral imaging results demonstrate the potential of the IMS for real-time cellular fluorescence imaging."}
{"_id": "9798e7c4852aadc3495a5791ca6890e485e641b7", "title": "Gain enhancement of dual-band antenna using square loop FSS", "text": "This paper presents a high gain dual-band microstrip antenna. The antenna operates in 9.25 and 11 GHz (X-band). High gain of antenna is achieved by employing a FSS as superstrate which is formed by the 5 \u00d7 5 array of square loops. The square loop FSS resonance is varied and its influence on the antenna behavior is analyzed. Analyze of the simulated antenna gain and return loss are shown. The gain improvement depends on the resonance of the FSS superstrate. The higher gain obtained is 6 dB at 11 GHz while maintaining a good matching level for the first resonance."}
{"_id": "148190c980d0814729c38a27de24974a2a13ceb2", "title": "High Resolution AMOLED Pixel Using Negative Feedback Structure for Improving Image Quality", "text": "Abstract An active matrix organic light emitting diode (AMOLED) pixel structure is proposed to improve image quality of high resolution small sized displays. The proposed pixel structure increases luminance uniformity by decreasing the distortion of the voltage in the storage capacitors using negative feedback structure. The simulation results show that the distortion of the voltage in the storage capacitors with negative feedback structure decreases to 55.3% compared to that without negative feedback structure and the emission current error of the proposed pixel structure ranges from \u20130.71 LSB to 0.56 LSB when the threshold voltage of the driving TFT varies from \u20130.2 V to 0.2 V."}
{"_id": "ae06abcc72eea584ce2f7dca1607ce0d4d037c90", "title": "Interactive Spatial Augmented Reality in Collaborative Robot Programming: User Experience Evaluation", "text": "This paper presents a novel approach to interaction between human workers and industrial collaborative robots. The proposed approach addresses problems introduced by existing solutions for robot programming. It aims to reduce the mental demands and attention switches by centering all interaction in a shared workspace, combining various modalities and enabling interaction with the system without any external devices. The concept allows simple programming in the form of setting program parameters using spatial augmented reality for visualization and a touch-enabled table and robotic arms as input devices. We evaluated the concept utilizing a user experience study with six participants (shop-floor workers). All participants were able to program the robot and to collaborate with it using the program they parametrized. The final goal is to create a distraction-free, usable and low-effort interface for effective human-robot collaboration, enabling any ordinary skilled worker to customize the robot's program to changes in production or to personal (e.g. ergonomic) needs."}
{"_id": "1c42b5543c315556c8a961b1a4ee8bc027f70b22", "title": "Accurate Scale Estimation for Robust Visual Tracking", "text": "Robust scale estimation is a challenging problem in visual object tracking. Most existing methods fail to handle large scale variations in complex image sequences. This paper presents a novel approach for robust scale estimation in a tracking-by-detection framework. The proposed approach works by learning discriminative correlation filters based on a scale pyramid representation. We learn separate filters for translation and scale estimation, and show that this improves the performance compared to an exhaustive scale search. Our scale estimation approach is generic as it can be incorporated into any tracking method with no inherent scale estimation. Experiments are performed on 28 benchmark sequences with significant scale variations. Our results show that the proposed approach significantly improves the performance by 18.8% in median distance precision compared to our baseline. Finally, we provide both quantitative and qualitative comparison of our approach with state-of-theart trackers in literature. The proposed method is shown to outperform the best existing tracker by 16.6% in median distance precision, while operating at real-time."}
{"_id": "71c398fb1a524a71511d31fb6605c221ef416729", "title": "ClaimVerif: A Real-time Claim Verification System Using the Web and Fact Databases", "text": "Our society is increasingly digitalized. Every day, a tremendous amount of information is being created, shared, and digested through all kinds of cyber channels. Although people can easily acquire information from various sources (social media, news articles, etc.), the truthfulness of most received information remains unverified. In many real-life scenarios, false information has become the de facto cause that leads to detrimental decision makings, and techniques that can automatically filter false information are highly demanded. However, verifying whether a piece of information is trustworthy is difficult because: (1) selecting candidate snippets for fact checking is nontrivial; and (2) detecting supporting evidences, i.e. stances, suffers from the difficulty of measuring the similarity between claims and related evidences. We build ClaimVerif, a claim verification system that not only provides credibility assessment for any user-given query claim, but also rationales the assessment results with supporting evidences. ClaimVerif can automatically select the stances from millions of documents and employs two-step training to justify the opinions of the stances. Furthermore, combined with the credibility of stances sources, ClaimVerif degrades the score of stances from untrustworthy sources and alleviates the negative effects from rumor spreaders. Our empirical evaluations show that ClaimVerif achieves both high accuracy and efficiency in different claim verification tasks. It can be highly useful in practical applications by providing multi-dimension analysis for the suspicious statements, including the stances, opinions, source credibility and estimated judgements."}
{"_id": "0b8e8199161a17b887b72e1d549ac0b1a5448234", "title": "Software metrics : usability and evaluation of software quality", "text": "It is difficult to understand, let alone improve, the quality of software without the knowledge of its software development process and software products. There must be some measurement process to predict the software development, and to evaluate the software products. This thesis provides a brief view on Software Quality, Software Metrics, and Software Metrics methods that will predict and measure the specified quality factors of software. It further discusses about the Quality as given by the standards such as ISO, principal elements required for the Software Quality and Software Metrics as the measurement technique to predict the Software Quality. This thesis was performed by evaluating a source code developed in Java, using Software Metrics, such as Size Metrics, Complexity Metrics, and Defect Metrics. Results show that, the quality of software can be analyzed, studied and improved by the usage of software metrics."}
{"_id": "2514ee178d4ac313b2ee4ee50a523d72f9cb87a6", "title": "Learning Embedding Adaptation for Few-Shot Learning", "text": "Learning with limited data is a key challenge for visual recognition. Few-shot learning methods address this challenge by learning an instance embedding function from seen classes, and apply the function to instances from unseen classes with limited labels. This style of transfer learning is task-agnostic: the embedding function is not learned optimally discriminative with respect to the unseen classes, where discerning among them is the target task. In this paper, we propose a novel approach to adapt the embedding model to the target classification task, yielding embeddings that are task-specific and are discriminative. To this end, we employ a type of self-attention mechanism called Transformer to transform the embeddings from task-agnostic to task-specific by focusing on relating instances from the test instances to the training instances in both seen and unseen classes. Our approach also extends to both transductive and generalized few-shot classification, two important settings that have essential use cases. We verify the effectiveness of our model on two standard benchmark few-shot classification datasets \u2014 MiniImageNet and CUB, where our approach demonstrates state-of-the-art empirical performance."}
{"_id": "8d937dbe4762bf7684519485303c7d71bd88e178", "title": "Fabrication of zero-speed sensor using weakly coupled NiFe/CoFe multilayer films", "text": "A magnetic sensor consisting of NiFe/Al/sub 2/O/sub 3//CoFe thin film was fabricated. Magnetization reversal of the NiFe soft layer was asymmetric in switching to parallel and antiparallel alignments to magnetization of CoFe because of magnetostatic coupling between the NiFe and CoFe layers. When the magnetization of the NiFe was paralleled to that of CoFe, a steep switching accompanied a large Barkhausen jump-induced pulse voltage in a pickup coil as sensor output. Amplitude of the pulse was not dependent on frequency of an external magnetic field. The pulse voltage output was obtained from the sensor element consisting of the optimized film structure excited by the magnetic field at almost zero frequency. These features are also achieved by twisted FeCoV and other wire materials. However, an integrated on-chip sensor with electronic circuits can be fabricated with film-based materials. The film-based sensor elements fabricated in this study can be utilized for speed sensor, rotation sensor, flow meter, and other applications."}
{"_id": "812cec589ce9a612503b099cf0033ca5ff33a2c4", "title": "Metamaterial-Based Low-Profile Broadband Mushroom Antenna", "text": "A metamaterial-based broadband low-profile mushroom antenna is presented. The proposed antenna is formed using an array of mushroom cells and a ground plane, and fed by a microstrip line through a slot cut onto the ground plane. With the feeding slot right underneath the center gap between the mushroom cells, the dual resonance modes are excited simultaneously for the radiation at boresight. A transmission-line model integrated with the dispersion relation of a composite right/left-handed mushroom structure is applied to analyze the modes. The proposed dielectric-filled (\u03b5r=3.38) mushroom antenna with a low profile of 0.06\u03bb0 ( \u03bb0 is the operating wavelength in free space) and a ground plane of 1.10\u03bb0\u00d71.10\u03bb0 attains 25% measured bandwidth with(|S11| <; - 10dB) 9.9-dBi average gain at 5-GHz band. Across the bandwidth, the antenna efficiency is greater than 76%, and cross-polarization levels are less than -20 dB."}
{"_id": "4811e5c3bba3a608ae44452bec985662da32fa02", "title": "Integrating shape grammars into a generative system for Zhuang ethnic embroidery design exploration", "text": "The integration of shape grammars within a generative system is thought to have good potential to support design applications in the art and cultural domains. Based on a shape decomposing method, two-level generative shape grammars are developed within the application of Zhuang ethnic embroidery design. In our approach, free B-spline curves are used to derive initial design concepts together with a new interpolation algorithm. In order to cope with the potentially huge solution space, a novel aesthetic evaluation model and a design exploration algorithm are developed for designers to find their favorite design solutions based on their subjective assessments. A system is introduced to demonstrate this research with an example of Zhuang ethnic embroidery design. \u00a9 2012 Elsevier Ltd. All rights reserved."}
{"_id": "f9dc83c0c63539a27eae56faed31e536a026a21d", "title": "Foot Design for a Large Walking Delta Robot", "text": "A large Delta robot has been modified to pick up and place a triangular shaped foot. The robot pushes the foot down to lift the body and the three associated fixed support legs. It then moves the foot sideways so that the body moves in the opposite direction to a new position where the fixed legs are lowered to the ground and the foot lifted ready for a new placement. In this way the robot is able to move in any direction with a tripod gait. The use of a combined foot and gripper for moving heavy objects is investigated. In the implementation reported, the gearboxes used are found to limit the operation and spring assistance is investigated as a trade off with payload. The small prototype, stands as tall as a person, weighs a third of a ton, and has a payload capability of near 100%. It is envisaged as a prototype for a much larger walking robot capable of general-purpose heavy lifting."}
{"_id": "2b67dbfadb5812501780d700d20c9dbfb1d1727b", "title": "From a Set of Shapes to Object Discovery", "text": "This paper presents an approach to object discovery in a give n unlabeled image set, based on mining repetitive spatial config urations of image contours. Contours that similarly deform from one image to a n ther are viewed as collaborating, or, otherwise, conflicting. This is captu red by a graph over all pairs of matching contours, whose maximum a posteriori mult icoloring assignment is taken to represent the shapes of discovered objects. Multicoloring is conducted by our new Coordinate Ascent Swendsen-Wang cut (CASW ). CASW uses the Metropolis-Hastings (MH) reversible jumps to probabil istically sample graph edges, and color nodes. CASW extends SW cut by introducing a r e ularization in the posterior of multicoloring assignments that prevent s the MH jumps to arrive at trivial solutions. Also, CASW seeks to learn paramet ers of the posterior via maximizing a lower bound of the MH acceptance rate. This s peeds up multicoloring iterations, and facilitates MH jumps from local mi nima. On benchmark datasets, we outperform all existing approaches to unsuper vised object discovery."}
{"_id": "14f964d152337e963e4a4fd3619f6030aa75deb1", "title": "Person Re-identification by Discriminatively Selecting Parts and Features", "text": "This paper presents a novel appearance-based method for person re-identification. The core idea is to rank and select different body parts on the basis of the discriminating power of their characteristic features. In our approach, we first segment the pedestrian images into meaningful parts, then we extract features from such parts as well as from the whole body and finally, we perform a salience analysis based on regression coefficients. Given a set of individuals, our method is able to estimate the different importance (or salience) of each body part automatically. To prove the effectiveness of our approach, we considered two standard datasets and we demonstrated through an exhaustive experimental section how our method improves significantly upon existing approaches, especially in multiple-shot scenarios."}
{"_id": "16d99c2f849e00a188c4c9a21105ef51c40ce417", "title": "A Pipelined CRC Calculation Using Lookup Tables", "text": "We present a fast cyclic redundancy check (CRC) algorithm that performs CRC computation for any length of message in parallel. Traditional CRC implementations have feedbacks, which make pipelining problematic. In the proposed approach, we eliminate feedbacks and implement a pipelined calculation of 32-bit CRC in the SMIC 0.13 \u03bcm CMOS technology. For a given message, the algorithm first chunks the message into blocks, each of which has a fixed size equal to the degree of the generator polynomial. Then it computes CRC for the chunked blocks in parallel using lookup tables, and the results are combined together by performing XOR operations. The simulation results show that our proposed pipelined CRC is more efficient than existing CRC implementations."}
{"_id": "1c9efbcfad4dc3259d7024465b2647ff7af1d484", "title": "Periodic Leaky-Wave Antenna Array With Horizontally Polarized Omnidirectional Pattern", "text": "A novel periodic leaky-wave antenna for broadside radiation with horizontally polarized omnidirectional pattern is proposed, designed and demonstrated experimentally. The objective is to achieve a horizontally polarized omnidirectional antenna array with highly directive beam. The proposed structure is based on combining the rotating electric field method and a leaky-wave antenna design. By properly designing a unit cell with a pair of crossed dipoles spaced at a distance approximately equal to \u03bbg/4 at the center frequency, not only omnidirectional radiation pattern with horizontal polarization is obtained by crossed dipoles fed in phase quadrature, but also the open-stopband effect exhibited at broadside scan is significantly reduced. The analysis and design of this structure are performed on the basis of a simple equivalent network model. A finite structure formed by 16-unit cell (32-element) for broadside beam is developed at the 2.4 GHz band, which offers a horizontally polarized omnidirectional radiation pattern with enhanced gain of 9.9-10.6 dBi and measured radiation efficiency of 90% for the covering bands."}
{"_id": "b2bafaa4e0943d47ad4249cd8b2ee34897fe2a1f", "title": "A comprehensive study of visual event computing", "text": "This paper contains a survey on aspects of visual event computing. We start by presenting events and their classifications, and continue with discussing the problem of capturing events in terms of photographs, videos, etc, as well as the methodologies for event storing and retrieving. Later, we review an extensive set of papers taken from well-known conferences and journals in multiple disciplines. We analyze events, and summarize the procedure of visual event actions. We introduce each component of a visual event computing system, and its computational aspects, we discuss the progress of each component and review its overall status. Finally, we suggest future research trends in event computing and hope to introduce a comprehensive profile of visual event computing to readers."}
{"_id": "a22bba702a12fc97790b51c45854106b292814d9", "title": "Estimation of safe sensor measurements of autonomous system under attack", "text": "The introduction of automation in cyber-physical systems (CPS) has raised major safety and security concerns. One attack vector is the sensing unit whose measurements can be manipulated by an adversary through attacks such as denial of service and delay injection. To secure an autonomous CPS from such attacks, we use a challenge response authentication (CRA) technique for detection of attack in active sensors data and estimate safe measurements using the recursive least square algorithm. For demonstrating effectiveness of our proposed approach, a car-follower model is considered where the follower vehicle's radar sensor measurements are manipulated in an attempt to cause a collision."}
{"_id": "e5004518bb0504b89d1b95457cfa306e86f9e213", "title": "How robust are prediction effects in language comprehension? Failure to replicate article-elicited N400 effects", "text": "Current psycholinguistic theory proffers prediction as a central, explanatory mechanism in language processing. However, widely-replicated prediction effects may not mean that prediction is necessary in language processing. As a case in point, C. D. Martin et al. [2013. Bilinguals reading in their second language do not predict upcoming words as native readers do. Journal of Memory and Language, 69(4), 574\u2013588. doi:10.1016/j.jml.2013.08.001] reported ERP evidence for prediction in nativebut not in non-native speakers. Articles mismatching an expected noun elicited larger negativity in the N400 time window compared to articles matching the expected noun in native speakers only. We attempted to replicate these findings, but found no evidence for prediction irrespective of language nativeness. We argue that pre-activation of phonological form of upcoming nouns, as evidenced in article-elicited effects, may not be a robust phenomenon. A view of prediction as a necessary computation in language comprehension must be re-evaluated. ARTICLE HISTORY Received 1 June 2016 Accepted 22 September 2016"}
{"_id": "05c933498c1c47ef82c751b3873d4b6387524ef3", "title": "Filterbank-based fingerprint matching", "text": "With identity fraud in our society reaching unprecedented proportions and with an increasing emphasis on the emerging automatic personal identification applications, biometrics-based verification, especially fingerprint-based identification, is receiving a lot of attention. There are two major shortcomings of the traditional approaches to fingerprint representation. For a considerable fraction of population, the representations based on explicit detection of complete ridge structures in the fingerprint are difficult to extract automatically. The widely used minutiae-based representation does not utilize a significant component of the rich discriminatory information available in the fingerprints. Local ridge structures cannot be completely characterized by minutiae. Further, minutiae-based matching has difficulty in quickly matching two fingerprint images containing a different number of unregistered minutiae points. The proposed filter-based algorithm uses a bank of Gabor filters to capture both local and global details in a fingerprint as a compact fixed length FingerCode. The fingerprint matching is based on the Euclidean distance between the two corresponding FingerCodes and hence is extremely fast. We are able to achieve a verification accuracy which is only marginally inferior to the best results of minutiae-based algorithms published in the open literature. Our system performs better than a state-of-the-art minutiae-based system when the performance requirement of the application system does not demand a very low false acceptance rate. Finally, we show that the matching performance can be improved by combining the decisions of the matchers based on complementary (minutiae-based and filter-based) fingerprint information."}
{"_id": "6c9bbae1bb6eaa7bd66b226ac6a5d4341f14f7dc", "title": "On-line fingerprint verification", "text": "Fingerprint verification is one of the most reliable personal identification methods. However, manual fingerprint verification is so tedious, time-consuming, and expensive that it is incapable of meeting today\u2019s increasing performance requirements. An automatic fingerprint identification system (AFIS) is widely needed. It plays a very important role in forensic and civilian applications such as criminal identification, access control, and ATM card verification. This paper describes the design and implementation of an on-line fingerprint verification system which operates in two stages: minutia extraction and minutia matching. An improved version of the minutia extraction algorithm proposed by Ratha et al., which is much faster and more reliable, is implemented for extracting features from an input fingerprint image captured with an on-line inkless scanner. For minutia matching, an alignment-based elastic matching algorithm has been developed. This algorithm is capable of finding the correspondences between minutiae in the input image and the stored template without resorting to exhaustive search and has the ability of adaptively compensating for the nonlinear deformations and inexact pose transformations between fingerprints. The system has been tested on two sets of fingerprint images captured with inkless scanners. The verification accuracy is found to be acceptable. Typically, a complete fingerprint verification procedure takes, on an average, about eight seconds on a SPARC 20 workstation. These experimental results show that our system meets the response time requirements of on-line verification with high accuracy. Index Terms \u2014Biometrics, fingerprints, matching, verification, minutia, orientation field, ridge extraction. \u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014 \u2726 \u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014"}
{"_id": "f97f55e7bd43740c17c0cfab7db61b12e6075ebf", "title": "FVC2002: Second Fingerprint Verification Competition", "text": "Two years after the first edition, a new Fingerprint Verification Competition (FVC2002) was organized by the authors, with the aim of determining the state-of-theart in this challenging pattern recognition application. The experience and the feedback received from FVC2000 allowed the authors to improve the organization of FVC2002 and to capture the attention of a significantly higher number of academic and commercial organizations (33 algorithms were submitted). This paper discusses the FVC2002 database, the test protocol and the main differences between FVC2000 and FVC2002. The algorithm performance evaluation will be presented at the 16 ICPR."}
{"_id": "605ed83a6d1f4eaf995e85830f373923b11d6c13", "title": "Cover your ACKs: pitfalls of covert channel censorship circumvention", "text": "In response to increasingly sophisticated methods of blocking access to censorship circumvention schemes such as Tor, recently proposed systems such as Skypemorph, FreeWave, and CensorSpoofer have used voice and video conferencing protocols as \"cover channels\" to hide proxy connections. We demonstrate that even with perfect emulation of the cover channel, these systems can be vulnerable to attacks that detect or disrupt the covert communications while having no effect on legitimate cover traffic. Our attacks stem from differences in the channel requirements for the cover protocols, which are peer-to-peer and loss tolerant, and the covert traffic, which is client-proxy and loss intolerant. These differences represent significant limitations and suggest that such protocols are a poor choice of cover channel for general censorship circumvention schemes."}
{"_id": "80225e4819ca41ef174d7b92859e3757bca6ffd2", "title": "Smart Agents in Industrial Cyber\u2013Physical Systems", "text": "Future industrial systems can be realized using the cyber-physical systems (CPSs) that advocate the coexistence of cyber and physical counterparts in a network structure to perform the system's functions in a collaborative manner. Multiagent systems share common ground with CPSs and can empower them with a multitude of capabilities in their efforts to achieve complexity management, decentralization, intelligence, modularity, flexibility, robustness, adaptation, and responsiveness. This work surveys and analyzes the current state of the industrial application of agent technology in CPSs, and provides a vision on the way agents can effectively enable emerging CPS challenges."}
{"_id": "1b752283193d6e9a4a90ba2f4b0277edcb59bb10", "title": "Katyusha: the first direct acceleration of stochastic gradient methods", "text": "Nesterov\u00e2\u0080\u0099s momentum trick is famously known for accelerating gradient descent, and has been proven useful in building fast iterative algorithms. However, in the stochastic setting, counterexamples exist and prevent Nesterov\u00e2\u0080\u0099s momentum from providing similar acceleration, even if the underlying problem is convex. \nWe introduce Katyusha, a direct, primal-only stochastic gradient method to fix this issue. It has a provably accelerated convergence rate in convex (off-line) stochastic optimization. The main ingredient is Katyusha momentum, a novel \u00e2\u0080\u009cnegative momentum\u00e2\u0080\u009d on top of Nesterov\u00e2\u0080\u0099s momentum that can be incorporated into a variance-reduction based algorithm and speed it up. Since variance reduction has been successfully applied to a growing list of practical problems, our paper suggests that in each of such cases, one could potentially give Katyusha a hug."}
{"_id": "76182c9a21dda0ba4c434d45c6b1d1ae3809efbc", "title": "A maximum entropy approach to adaptive statistical language modelling", "text": "An adaptive statistical language model is described, which successfully integrates long distance linguistic information with other knowledge sources. Most existing statistical language models exploit only the immediate history of a text. To extract information from further back in the document\u2019s history, we propose and use trigger pairs as the basic information bearing elements. This allows the model to adapt its expectations to the topic of discourse. Next, statistical evidence from multiple sources must be combined. Traditionally, linear interpolation and its variants have been used, but these are shown here to be seriously deficient. Instead, we apply the principle of Maximum Entropy (ME). Each information source gives rise to a set of constraints, to be imposed on the combined estimate. The intersection of these constraints is the set of probability functions which are consistent with all the information sources. The function with the highest entropy within that set is the ME solution. Given consistent statistical evidence, a unique ME solution is guaranteed to exist, and an iterative algorithm exists which is guaranteed to converge to it. The ME framework is extremely general: any phenomenon that can be described in terms of statistics of the text can be readily incorporated. An adaptive language model based on the ME approach was trained on the Wall Street Journal corpus, and showed a 32\u201339% perplexity reduction over the baseline. When interfaced to SPHINX-II, Carnegie Mellon\u2019s speech recognizer, it reduced its error rate by 10\u201314%. This thus illustrates the feasibility of incorporating many diverse knowledge sources in a single, unified statistical framework. \uf6d9 1996 Academic Press Limited"}
{"_id": "31dc769b9144f9cc370efed18814834e4815e5c4", "title": "Expressing Interactivity with States and Constraints", "text": "......................................................................................................................... iii Acknowledgements ......................................................................................................... iv Table of"}
{"_id": "439f205642eaebafa95047be9933a3872aa66553", "title": "Cascade Residual Learning: A Two-Stage Convolutional Neural Network for Stereo Matching", "text": "Leveraging on the recent developments in convolutional neural networks (CNNs), matching dense correspondence from a stereo pair has been cast as a learning problem, with performance exceeding traditional approaches. However, it remains challenging to generate high-quality disparities for the inherently ill-posed regions. To tackle this problem, we propose a novel cascade CNN architecture composing of two stages. The first stage advances the recently proposed DispNet by equipping it with extra up-convolution modules, leading to disparity images with more details. The second stage explicitly rectifies the disparity initialized by the first stage; it couples with the first-stage and generates residual signals across multiple scales. The summation of the outputs from the two stages gives the final disparity. As opposed to directly learning the disparity at the second stage, we show that residual learning provides more effective refinement. Moreover, it also benefits the training of the overall cascade network. Experimentation shows that our cascade residual learning scheme provides state-of-the-art performance for matching stereo correspondence. By the time of the submission of this paper, our method ranks first in the KITTI 2015 stereo benchmark, surpassing the prior works by a noteworthy margin."}
{"_id": "51934b43c251c5fb1456d2aad246adbca6231f6c", "title": "An anatomically-constrained local deformation model for monocular face capture", "text": "We present a new anatomically-constrained local face model and fitting approach for tracking 3D faces from 2D motion data in very high quality. In contrast to traditional global face models, often built from a large set of blendshapes, we propose a local deformation model composed of many small subspaces spatially distributed over the face. Our local model offers far more flexibility and expressiveness than global blendshape models, even with a much smaller model size. This flexibility would typically come at the cost of reduced robustness, in particular during the under-constrained task of monocular reconstruction. However, a key contribution of this work is that we consider the face anatomy and introduce subspace skin thickness constraints into our model, which constrain the face to only valid expressions and helps counteract depth ambiguities in monocular tracking. Given our new model, we present a novel fitting optimization that allows 3D facial performance reconstruction from a single view at extremely high quality, far beyond previous fitting approaches. Our model is flexible, and can be applied also when only sparse motion data is available, for example with marker-based motion capture or even face posing from artistic sketches. Furthermore, by incorporating anatomical constraints we can automatically estimate the rigid motion of the skull, obtaining a rigid stabilization of the performance for free. We demonstrate our model and single-view fitting method on a number of examples, including, for the first time, extreme local skin deformation caused by external forces such as wind, captured from a single high-speed camera."}
{"_id": "2dd3d10caea3f1ac85787538649f7bd0058b46fc", "title": "ESD reliability improvement of the 0.25-\u00b5m 60-V power nLDMOS with discrete embedded SCRs separated by STI structures", "text": "In this paper, the electrostatic-discharge (ESD) robustness improvement by modulating the drain-side embedded SCR of an HV nLDMOS device is investigated via a TSMC 0.25 \u03bcm 60 V process. After a systematic layout design and data analysis, it can be found that the holding voltage (Vh) of an nLDMOS with a parasitic SCR \u201cnpn\u201d-arranged type & thin oxide (OD) discrete (i.e. separated by the shallow-trench isolation (STI) structure) distribution in the drain-side have greatly increased with the parasitic SCR OD decreasing. Therefore, a high Vh value in the OD discrete parameter 2 (DIS-2) can be obtained about 13.3 V. On the other hand, the trigger voltage (Vt1) values and the holding voltage (Vh) values of the nLDMOS DUTs with a parasitic SCR \u201cpnp\u201d-arranged type & OD discrete distribution in the drain-side are slowly increased with the parasitic-SCR OD decreasing. The best Vh value in the \u201cpnp\u201d-arranged type & OD discrete parameter 2 (DIS-2) is about 14.4 V. Meanwhile, the secondary breakdown current (It2) values of this type are greater than 7 A except for the OD discrete parameter 2 and 3. Therefore, an appropriate layout of nLDMOS embedded with a drain-side \u201cpnp\u201d arranged type & OD discrete distribution that can yield high ESD and latch-up (LU) robustness levels."}
{"_id": "0833d6fd58f6516d84b1b0fe736fac5dac4a8f07", "title": "The huge Package for High-dimensional Undirected Graph Estimation in R", "text": "We describe an R package named huge which provides easy-to-use functions for estimating high dimensional undirected graphs from data. This package implements recent results in the literature, including Friedman et al. (2007), Liu et al. (2009, 2012) and Liu et al. (2010). Compared with the existing graph estimation package glasso, the huge package provides extra features: (1) instead of using Fortan, it is written in C, which makes the code more portable and easier to modify; (2) besides fitting Gaussian graphical models, it also provides functions for fitting high dimensional semiparametric Gaussian copula models; (3) more functions like data-dependent model selection, data generation and graph visualization; (4) a minor convergence problem of the graphical lasso algorithm is corrected; (5) the package allows the user to apply both lossless and lossy screening rules to scale up large-scale problems, making a tradeoff between computational and statistical efficiency."}
{"_id": "957ba54affcc27304d90bb5cb7efc0ee43dbcfcb", "title": "Semantic Attachments for Domain-Independent Planning Systems", "text": "Solving real-world problems using symbolic planning often requires a simplified formulation of the original problem, since certain subproblems cannot be represented at all or only in a way leading to inefficiency. For example, manipulation planning may appear as a subproblem in a robotic planning context or a packing problem can be part of a logistics task. In this paper we propose an extension of PDDL for specifying semantic attachments. This allows the evaluation of grounded predicates as well as the change of fluents by externally specified functions. Furthermore, we describe a general schema of integrating semantic attachments into a forward-chaining planner and report on our experience of adding this extension to the planners FF and Temporal Fast Downward. Finally, we present some preliminary experiments using semantic attach-"}
{"_id": "c15259a31f8c7ddfc69f73577ddb886018feae91", "title": "Visualising convolutional neural network decisions in automated sleep scoring", "text": "Current sleep medicine relies on the supervised analysis of polysomnographic recordings, which comprise amongst others electroencephalogram (EEG), electromyogram (EMG), and electrooculogram (EOG) signals. Convolutional neural networks (CNN) provide an interesting framework for automated sleep classification, however, the lack of interpretability of its results has hampered CNN\u2019s further use in medicine. In this study, we train a CNN using as input Continuous Wavelet transformed EEG, EOG and EMG recordings from a publicly available dataset. The network achieved a 10-fold cross-validation Cohen\u2019s Kappa score of \u03ba = 0.71\u00b10.01. Further, we provide insights on how this network classifies individual epochs of sleep using Guided Gradient-weighted Class Activation Maps (Guided Grad-CAM). The proposed approach is able to produce fine-grained activation maps on time-frequency domain for each signal providing a useful tool for identifying relevant features in CNNs."}
{"_id": "9eb145ff069c6bb23b5d3b95a7081f360047558a", "title": "Long-Term Treatment with High-Dose Thiamine in Parkinson Disease: An Open-Label Pilot Study.", "text": "OBJECTIVES\nTo investigate the potential clinical, restorative, and neuroprotective effects of long-term treatment with thiamine in Parkinson disease (PD).\n\n\nDESIGN\nObservational open-label pilot study.\n\n\nSETTING\nOutpatient neurologic rehabilitation clinic.\n\n\nPATIENTS AND METHODS\nStarting in June 2012, we have recruited 50 patients with PD (33 men and 17 women; mean age, 70.4\u2009\u00b1\u200912.9 years; mean disease duration, 7.3\u2009\u00b1\u20096.7 years). All the patients were assessed at baseline with the Unified Parkinson's Disease Rating Scale (UPDRS) and the Fatigue Severity Scale (FSS) and began treatment with 100\u2009mg of thiamine administered intramuscularly twice a week, without any change to personal therapy. All the patients were re-evaluated after 1 month and then every 3 months during treatment.\n\n\nRESULTS\nThiamine treatment led to significant improvement of motor and nonmotor symptoms: mean UPDRS scores (parts I-IV) improved from 38.55\u2009\u00b1\u200915.24 to 18.16\u2009\u00b1\u200915.08 (p\u2009=\u20092.4\u2009\u00d7\u200910(-14), t test for paired data) within 3 months and remained stable over time; motor UPDRS part III score improved from 22.01\u2009\u00b1\u20098.57 to 9.92\u2009\u00b1\u20098.66 (p\u2009=\u20093.1\u2009\u00d7\u200910(-22)). Some patients with a milder phenotype had complete clinical recovery. FSS scores, in six patients who had fatigue, improved from 53.00\u2009\u00b1\u20098.17 to 23.60\u2009\u00b1\u20097.77 (p\u2009<\u20090.0001, t test for paired data). Follow-up duration ranged from 95 to 831 days (mean, 291.6\u2009\u00b1\u2009207.2 days).\n\n\nCONCLUSIONS\nAdministration of parenteral high-dose thiamine was effective in reversing PD motor and nonmotor symptoms. The clinical improvement was stable over time in all the patients. From our clinical evidence, we hypothesize that a dysfunction of thiamine-dependent metabolic processes could cause selective neural damage in the centers typically affected by this disease and might be a fundamental molecular event provoking neurodegeneration. Thiamine could have both restorative and neuroprotective action in PD."}
{"_id": "76fb83d82bd83e1675930ecb42501c1d68d8a2dc", "title": "Information quality in information fusion", "text": "Designing fusion systems for decision support in complex dynamic situations calls for an integrated human-machine information environment, in which some processes are best executed automatically while for others the judgment and guidance of human experts and end-users are critical. Thus decision making in such environment requires constant information exchange between human and automated agents that utilize operational data, data obtained from sensors, intelligence reports, and open source information. The quality of decision making strongly depends on the success of being aware of, and compensating for, insufficient information quality at each step of information exchange. Designing the methods of representing and incorporating information quality into this environment is a relatively new and a rather difficult problem. The paper discusses major challenges and suggests some approaches to address this problem."}
{"_id": "30efc9890a40eb6693ed655f997b44cfef8c5695", "title": "An Intelligent Tutoring System for Entity Relationship Modelling", "text": "The paper presents KERMIT, a Knowledge-based Entity Relationship Modelling Intelligent Tutor. KERMIT is a problem-solving environment for the university-level students, in which they can practise conceptual database design using the Entity-Relationship data model. KERMIT uses Constraint-Based Modelling (CBM) to model the domain knowledge and generate student models. We have used CBM previous in tutors that teach SQL and English punctuation rules. The research presented in this paper is significant because we show that CBM can be used to support students learning design tasks, which are very different from domains we dealt with in earlier tutors. The paper describes the system\u2019s architecture and functionality. The system observes students\u2019 actions and adapts to their knowledge and learning abilities. KERMIT has been evaluated in the context of genuine teaching activities. We present the results of two evaluation studies with students taking database courses, which show that KERMIT is an effective system. The students have enjoyed the system\u2019s adaptability and found it a valuable asset to their learning."}
{"_id": "48e64fe9e7dd13e9e096ca1ea3d784f483c1864d", "title": "A binary linear programming formulation of the graph edit distance", "text": "A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem. A general formulation for editing graphs is used to derive a graph edit distance that is proven to be a metric, provided the cost function for individual edit operations is a metric. Then, a binary linear program is developed for computing this graph edit distance, and polynomial time methods for determining upper and lower bounds on the solution of the binary program are derived by applying solution methods for standard linear programming and the assignment problem. A recognition problem of comparing a sample input graph to a database of known prototype graphs in the context of a chemical information system is presented as an application of the new method. The costs associated with various edit operations are chosen by using a minimum normalized variance criterion applied to pairwise distances between nearest neighbors in the database of prototypes. The new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs"}
{"_id": "50e40fa70d1ac76012ef782293e3c4936f6ea8d2", "title": "Blogging as social activity, or, would you let 900 million people read your diary?", "text": "\"Blogging\" is a Web-based form of communication that is rapidly becoming mainstream. In this paper, we report the results of an ethnographic study of blogging, focusing on blogs written by individuals or small groups, with limited audiences. We discuss motivations for blogging, the quality of social interactivity that characterized the blogs we studied, and relationships to the blogger's audience. We consider the way bloggers related to the known audience of their personal social networks as well as the wider \"blogosphere\" of unknown readers. We then make design recommendations for blogging software based on these findings."}
{"_id": "a0498059093527323c29e32547c51c29e8849006", "title": "Domain Adaptation for Statistical Machine Translation with Domain Dictionary and Monolingual Corpora", "text": "tra Statistical machine translation systems are usually trained on large amounts of bilingual text and monolingual text. In this paper, we propose a method to perform domain adaptation for statistical machine translation, where in-domain bilingual corpora do not exist. This method first uses out-of-domain corpora to train a baseline system and then uses in-domain translation dictionaries and in-domain monolingual corpora to improve the indomain performance. We propose an algorithm to combine these different resources in a unified framework. Experimental results indicate that our method achieves absolute improvements of 8.16 and 3.36 BLEU scores on Chinese to English translation and English to French translation respectively, as compared with the baselines using only out-ofdomain corpora."}
{"_id": "424b3cfb65088f0f0c3b3564b4a645088bda36c0", "title": "Recommendations for designers and researchers resulting from the world-wide failure exercise", "text": "The World-Wide Failure Exercise (WWFE) contained a detailed assessment of 19 theoretical approaches for predicting the deformation and failure response of polymer composite laminates when subjected to complex states of stress. The leading five theories are explored in greater detail to demonstrate their strengths and weaknesses in predicting various types of structural failure. Recommendations are then derived, as to how the theories can be best utilised to provide safe and economic predictions in a wide range of engineering design applications. Further guidance is provided for designers on the level of confidence and bounds of applicability of the current theories. The need for careful interpretation of initial failure predictions is emphasised, as is the need to allow for multiple sources of non-linearity (including progressive damage) where accuracy is sought for certain classes of large deformation and final failure strength predictions. Aspects requiring further experimental and theoretical investigation are identified. Direction is also provided to the research community by highlighting specific, tightly focussed, experimental and theoretical studies that, if carried out in the very near future, would pay great dividends from the designer\u2019s perspective, by increasing their confidence in the theoretical foundations. # 2003 QinetiQ Ltd. Published by Elsevier Ltd. All rights reserved."}
{"_id": "615b351cca9e902cccf7e417c4e695311fff1e01", "title": "A case study of cooperative learning and communication pedagogy : Does working in teams make a difference ?", "text": "Cooperative learning has increasingly become a popular form of active pedagogy employed in academic institutions. This case study explores the relationship between cooperative learning and academic performance in higher education, specifically in the field of communication. Findings from a questionnaire administered to undergraduate students in a communication research course indicate that involvement in cooperative learning is a strong predictor of a student\u2019s academic performance. A significant positive relationship was found between the degree to which grades are important to a student and his or her active participation in cooperative learning. Further, the importance of grades and sense of achievement are strong predictors of performance on readiness assessment tests."}
{"_id": "059ff0ee2e0ac95d5cc54fd20ded2d33e5d5dbd0", "title": "Generative and Discriminative Voxel Modeling with Convolutional Neural Networks", "text": "When working with three-dimensional data, choice of representation is key. We explore voxel-based models, and present evidence for the viability of voxellated representations in applications including shape modeling and object classification. Our key contributions are methods for training voxel-based variational autoencoders, a user interface for exploring the latent space learned by the autoencoder, and a deep convolutional neural network architecture for object classification. We address challenges unique to voxel-based representations, and empirically evaluate our models on the ModelNet benchmark, where we demonstrate a 51.5% relative improvement in the state of the art for object classification."}
{"_id": "2e6a52d23dba71c974adeabee506aee72df7b3bc", "title": "Characterizing debate performance via aggregated twitter sentiment", "text": "Television broadcasters are beginning to combine social micro-blogging systems such as Twitter with television to create social video experiences around events. We looked at one such event, the first U.S. presidential debate in 2008, in conjunction with aggregated ratings of message sentiment from Twitter. We begin to develop an analytical methodology and visual representations that could help a journalist or public affairs person better understand the temporal dynamics of sentiment in reaction to the debate video. We demonstrate visuals and metrics that can be used to detect sentiment pulse, anomalies in that pulse, and indications of controversial topics that can be used to inform the design of visual analytic systems for social media events."}
{"_id": "ec4a94637ecd115219869e9df8902cb7282481e0", "title": "Semantic Sentiment Analysis of Twitter", "text": "Sentiment analysis over Twitter offer organisations a fast and effective way to monitor the publics\u2019 feelings towards their brand, business, directors, etc. A wide range of features and methods for training sentiment classifiers for Twitter datasets have been researched in recent years with varying results. In this paper, we introduce a novel approach of adding semantics as additional features into the training set for sentiment analysis. For each extracted entity (e.g. iPhone) from tweets, we add its semantic concept (e.g. \u201cApple product\u201d) as an additional feature, and measure the correlation of the representative concept with negative/positive sentiment. We apply this approach to predict sentiment for three different Twitter datasets. Our results show an average increase of F harmonic accuracy score for identifying both negative and positive sentiment of around 6.5% and 4.8% over the baselines of unigrams and part-of-speech features respectively. We also compare against an approach based on sentiment-bearing topic analysis, and find that semantic features produce better Recall and F score when classifying negative sentiment, and better Precision with lower Recall and F score in positive sentiment classification."}
{"_id": "07f33ce4159c1188ad20b864661731e246512239", "title": "From bias to opinion: a transfer-learning approach to real-time sentiment analysis", "text": "Real-time interaction, which enables live discussions, has become a key feature of most Web applications. In such an environment, the ability to automatically analyze user opinions and sentiments as discussions develop is a powerful resource known as real time sentiment analysis. However, this task comes with several challenges, including the need to deal with highly dynamic textual content that is characterized by changes in vocabulary and its subjective meaning and the lack of labeled data needed to support supervised classifiers. In this paper, we propose a transfer learning strategy to perform real time sentiment analysis. We identify a task - opinion holder bias prediction - which is strongly related to the sentiment analysis task; however, in constrast to sentiment analysis, it builds accurate models since the underlying relational data follows a stationary distribution.\n Instead of learning textual models to predict content polarity (i.e., the traditional sentiment analysis approach), we first measure the bias of social media users toward a topic, by solving a relational learning task over a network of users connected by endorsements (e.g., retweets in Twitter). We then analyze sentiments by transferring user biases to textual features. This approach works because while new terms may arise and old terms may change their meaning, user bias tends to be more consistent over time as a basic property of human behavior. Thus, we adopted user bias as the basis for building accurate classification models. We applied our model to posts collected from Twitter on two topics: the 2010 Brazilian Presidential Elections and the 2010 season of Brazilian Soccer League. Our results show that knowing the bias of only 10% of users generates an F1 accuracy level ranging from 80% to 90% in predicting user sentiment in tweets."}
{"_id": "a22445456a70896f850150001ecf4d0a726be9a9", "title": "Hedonic and instrumental motives in anger regulation.", "text": "What motivates individuals to regulate their emotions? One answer, which has been highlighted in emotion-regulation research, is that individuals are motivated by short-term hedonic goals (e.g., the motivation to feel pleasure). Another answer, however, is that individuals are motivated by instrumental goals (e.g., the motivation to perform certain behaviors). We suggest that both answers have merit. To demonstrate the role instrumental goals may play in emotion regulation, we pitted short-term hedonic motives and instrumental motives against each other, by testing whether individuals were motivated to experience a potentially useful, albeit unpleasant, emotion. We found that (a) individuals preferred activities that would increase their level of anger (but not their level of excitement) when they were anticipating confrontational, but not nonconfrontational, tasks and that (b) anger improved performance in a confrontational, but not a nonconfrontational, task. These findings support a functional view of emotion regulation, and demonstrate that in certain contexts, individuals may choose to experience emotions that are instrumental, despite short-term hedonic costs."}
{"_id": "8f92b07d5ad2937ebddcc66e1be820805f53e4ec", "title": "Building Poker Agent Using Reinforcement Learning with Neural Networks", "text": "Poker is a game with incomplete and imperfect information. The ability to estimate opponent and interpret its actions makes a player as a world class player. Finding optimal game strategy is not enough to win poker game. As in real life as in online poker game, the most time of it consists of opponent analysis. This paper illustrates a development of poker agent using reinforcement learning with neural networks. 1 STAGE OF THE RESEARCH Poker is a game with incomplete and imperfect information. The ability to estimate opponent and interpret its actions makes a player as a world class player. Finding optimal game strategy is not enough to win poker game. As in real life as in online poker game, the most time of it consists of opponent analysis. Author is working on development of poker agent that would find optimal game strategy using reinforcement learning (RL) in combination with artificial neural network (ANN) for value function approximation. 2 OUTLINE OF OBJECTIVES This paper illustrates a development of poker agent using reinforcement learning with neural networks. Complete poker agent should have an ability to create optimal game strategy that makes decisions based on information: \uf0a7 Hand strength/potential estimation; \uf0a7 Table card estimation; \uf0a7 Opponent hand strength prediction; \uf0a7 Opponent classification (tight/loose passive/ aggressive); \uf0a7 Current state evaluation using neural network. AI Poker agent should be able to find an optimal strategy by itself (unsupervised learning) using information given above. It also should be able to adapt opponent play style and change its strategy during the game."}
{"_id": "7efa3b18c6d3aa8f20b5b91bb85dd9b76436dd18", "title": "MATLAB Based BackPropagation Neural Network for Automatic Speech Recognition", "text": "Speech interface to computer is the next big step that the technology needs to take for general users. Automatic speech recognition (ASR) will play an important role in taking technology to the people. There are numerous applications of speech recognition such as direct voice input in aircraft, data entry, speech-to-text processing, voice user interfaces such as voice dialling. ASR system can be divided into two different parts, namely feature extraction and feature recognition. In this paper we present MATLAB based feature recognition using backpropagation neural network for ASR. The objective of this research is to explore how neural networks can be employed to recognize isolated-word speech as an alternative to the traditional methodologies. The general techniques developed here can be further extended to other applications such as sonar target recognition, missile tracking and classification of underwater acoustic signals. Back-propagation neural network algorithm uses input training samples and their respective desired output values to learn to recognize specific patterns, by modifying the activation values of its nodes and weights of the links connecting its nodes. Such a trained network is later used for feature recognition in ASR systems."}
{"_id": "10faedfeb8ab4280b43538a3cfdeabc46fad7d41", "title": "Automated Fingerprint Identification and Imaging Systems", "text": "More than a century has passed since Alphonse Bertillon first conceived and then industriously practiced the idea of using body measurements for solving crimes [1]. Just as his idea was gaining popularity, it faded into relative obscurity by a far more significant and pract ic l discovery of the uniqueness of the human fingerprints 1. Soon after this discovery, many major law enforcement departments embraced the idea of first \u201cbooking\u201d the fingerprints of criminals, so tha t their records are readily available and later using leftover fingerprint smudges (latent s), the identity of criminals can be determined. These agencies sponsored a rigorous study of fingerprints, developed s cientific 1In 1893, the Home Ministry Office, UK, accepted that no two ind ividuals have the same fingerprints."}
{"_id": "3261659127bebca4d805e8592906b37ece8d0ca3", "title": "Automatic Linguistic Annotation of Large Scale L 2 Databases : The EF-Cambridge Open Language Database ( EFCamDat )", "text": "\u2217Naturalistic learner productions are an important empirical resource for SLA research. Some pioneering works have produced valuable second language (L2) resources supporting SLA research.1 One common limitation of these resources is the absence of individual longitudinal data for numerous speakers with different backgrounds across the proficiency spectrum, which is vital for understanding the nature of individual variation in longitudinal development.2 A second limitation is the relatively restricted amounts of data annotated with linguistic information (e.g., lexical, morphosyntactic, semantic features, etc.) to support investigation of SLA hypotheses and obtain patterns of development for different linguistic phenomena. Where available, annotations tend to be manually obtained, a situation posing immediate limitations to the quantity of data that could be annotated with reasonable human resources and within reasonable time. Natural Language Processing (NLP) tools can provide automatic annotations for parts-of-speech (POS) and syntactic structure and are indeed increasingly applied to learner language in various contexts. Systems in computer-assisted language learning (CALL) have used a parser and other NLP tools to automatically detect learner errors and provide feedback accordingly.3 Some work aimed at adapting annotations provided by parsing tools to accurately describe learner syntax (Dickinson & Lee, 2009) or evaluated parser performance on learner language and the effect of learner errors on the parser. Krivanek and Meurers (2011) compared two parsing methods, one using a hand-crafted lexicon and one trained on a corpus. They found that the former is more successful in recovering the main grammatical dependency relations whereas the latter is more successful in recovering optional, adjunction relations. Ott and Ziai (2010) evaluated the performance of a dependency parser trained on native German (MaltParser; Nivre et al., 2007) on 106 learner answers to a comprehension task in L2 German. Their study indicates that while some errors can be problematic for the parser (e.g., omission of finite verbs) many others (e.g., wrong word order) can be parsed robustly, resulting in overall high performance scores. In this paper we have two goals. First, we introduce a new English L2 database, the EF Cambridge Open Language Database, henceforth EFCAMDAT. EFCAMDAT was developed by the Department of Theoretical and Applied Linguistics at the University of Cambridge in collaboration with EF Education First, an international educational organization. It contains writings submitted to Englishtown, the"}
{"_id": "378cf0ebeeca44ccd7f04b5e618c6854d0f467ef", "title": "A Radius and Ulna TW3 Bone Age Assessment System", "text": "An end-to-end system to automate the well-known Tanner-Whitehouse (TW3) clinical procedure to estimate the skeletal age in childhood is proposed. The system comprises the detailed analysis of the two most important bones in TW3: the radius and ulna wrist bones. First, a modified version of an adaptive clustering segmentation algorithm is presented to properly semi-automatically segment the contour of the bones. Second, up to 89 features are defined and extracted from bone contours and gray scale information inside the contour, followed by some well- founded feature selection mathematical criteria, based on the ideas of maximizing the classes' separability. Third, bone age is estimated with the help of a generalized softmax perceptron (GSP) neural network (NN) that, after supervised learning and optimal complexity estimation via the application of the recently developed posterior probability model selection (PPMS) algorithm, is able to accurately predict the different development stages in both radius and ulna from which and with the help of the TW3 methodology, we are able to conveniently score and estimate the bone age of a patient in years, in what can be understood as a multiple- class (multiple stages) pattern recognition approach with posterior probability estimation. Finally, numerical results are presented to evaluate the system performance in predicting the bone stages and the final patient bone age over a private hand image database, with the help of the pediatricians and the radiologists expert diagnoses."}
{"_id": "76e5089c5e93bef05fdc04695c63c7582a9dd9d0", "title": "An Approach to Data Privacy in Smart Home using Blockchain Technology", "text": "Nowadays, numerous applications of smart home systems provide recommendations for users, including reducing their energy consumption, warnings of defective devices, selecting reliable devices and software, diagnoses, etc [1]. The internet connected, dynamic and heterogeneous nature of the smart home environment creates new security, authentication, and privacy challenges [2]. To solve those challenges, an approach to data privacy in smart home using blockchain technology, which is called smart home based the IoT-Blockchain (SHIB), is proposed in this paper. In order to demonstrate the proposed architecture, an experimental scenario using Ganache, Remix, and web3. js is built among the user, service provider, and smart home to evaluate the performance of the smart contract in the SHIB. Based on the experiment results, the SHIB architecture brings the advantages like data privacy, trust access control, and high extension ability. In addition, the comparison between the proposed architecture and existing models in different parameters such as smart contract, the privacy of data, usage of tokens, updating the policies, and misbehavior judging are performed."}
{"_id": "03827b4aef6809ac90487ef1a9d27048088db413", "title": "Semi-supervised Density-Based Clustering", "text": "Most of the effort in the semi-supervised clustering literature was devoted to variations of the K-means algorithm. In this paper we show how background knowledge can be used to bias a partitional density-based clustering algorithm. Our work describes how labeled objects can be used to help the algorithm detecting suitable density parameters for the algorithm to extract density-based clusters in specific parts of the feature space. Considering the set of constraints estabilished by the labeled dataset we show that our algorithm, called SSDBSCAN, automatically finds density parameters for each natural cluster in a dataset. Four of the most interesting characteristics of SSDBSCAN are that (1) it only requires a single, robust input parameter, (2) it does not need any user intervention, (3) it automaticaly finds the noise objects according to the density of the natural clusters and (4) it is able to find the natural cluster structure even when the density among clusters vary widely. The algorithm presented in this paper is evaluated with artificial and real-world datasets, demonstrating better results when compared to other unsupervised and semi-supervised density-based approaches."}
{"_id": "46a473b1a8ae8d70bb01ccf914005e411b077d0d", "title": "Shake-your-head: revisiting walking-in-place for desktop virtual reality", "text": "The Walking-In-Place interaction technique was introduced to navigate infinitely in 3D virtual worlds by walking in place in the real world. The technique has been initially developed for users standing in immersive setups and was built upon sophisticated visual displays and tracking equipments.\n In this paper, we propose to revisit the whole pipeline of the Walking-In-Place technique to match a larger set of configurations and apply it notably to the context of desktop Virtual Reality. With our novel \"Shake-Your-Head\" technique, the user is left with the possibility to sit down, and to use small screens and standard input devices such as a basic webcam for tracking. The locomotion simulation can compute various motions such as turning, jumping and crawling, using as sole input the head movements of the user. We also introduce the use of additional visual feedback based on camera motions to enhance the walking sensations.\n An experiment was conducted to compare our technique with classical input devices used for navigating in desktop VR. Interestingly, the results showed that our technique could even allow faster navigations when sitting, after a short learning. Our technique was also perceived as more fun and increasing presence, and was generally more appreciated for VR navigation."}
{"_id": "1a9bb0a637cbaca8489afd69d6840975a9834a05", "title": "Should one compute the Temporal Difference fix point or minimize the Bellman Residual? The unified oblique projection view", "text": "We investigate projection methods, for evaluating a linear approximation of the value function of a policy in a Markov Decision Process context. We consider two popular approaches, the one-step Temporal Difference fix-point computation (TD(0)) and the Bellman Residual (BR) minimization. We describe examples, where each method outperforms the other. We highlight a simple relation between the objective function they minimize, and show that while BR enjoys a performance guarantee, TD(0) does not in general. We then propose a unified view in terms of oblique projections of the Bellman equation, which substantially simplifies and extends the characterization of Schoknecht (2002) and the recent analysis of Yu & Bertsekas (2008). Eventually, we describe some simulations that suggest that if the TD(0) solution is usually slightly better than the BR solution, its inherent numerical instability makes it very bad in some cases, and thus worse on average."}
{"_id": "c62b19bfe035130d618cbf6fbd1b966fea92fa0c", "title": "Analysis Enhanced Particle-based Flow Visualization", "text": "Particle-based fluid simulation (PFS), such as Smoothed Particle Hydrodynamics (SPH) and Position-based Fluid (PBF), is a mesh-free method that has been widely used in various fields, including astrophysics, mechanical engineering, and biomedical engineering for the study of liquid behaviors under different circumstances. Due to its meshless nature, most analysis techniques that are developed for mesh-based data need to be adapted for the analysis of PFS data. In this work, we study a number of flow analysis techniques and their extension for PFS data analysis, including the FTLE approach, Jacobian analysis, and an attribute accumlation framework. In particular, we apply these analysis techniques to free surface fluids. We demonstrate that these analyses can reveal some interesting underlying flow patterns that would be hard to see otherwise via a number of PFS simulated flows with different parameters and boundary settings. In addition, we point out that an in-situ analysis framework that performs these analyses can potentially be used to guide the adaptive PFS to allocate the computation and storage power to the regions of interest during the simulation."}
{"_id": "83f80b0c517f60eaec01442eb58a6ffb8bd8ab60", "title": "Look into Person: Joint Body Parsing & Pose Estimation Network and a New Benchmark", "text": "Human parsing and pose estimation have recently received considerable interest due to their substantial application potentials. However, the existing datasets have limited numbers of images and annotations and lack a variety of human appearances and coverage of challenging cases in unconstrained environments. In this paper, we introduce a new benchmark named \u201cLook into Person (LIP)\u201d that provides a significant advancement in terms of scalability, diversity, and difficulty, which are crucial for future developments in human-centric analysis. This comprehensive dataset contains over 50,000 elaborately annotated images with 19 semantic part labels and 16 body joints, which are captured from a broad range of viewpoints, occlusions, and background complexities. Using these rich annotations, we perform detailed analyses of the leading human parsing and pose estimation approaches, thereby obtaining insights into the successes and failures of these methods. To further explore and take advantage of the semantic correlation of these two tasks, we propose a novel joint human parsing and pose estimation network to explore efficient context modeling, which can simultaneously predict parsing and pose with extremely high quality. Furthermore, we simplify the network to solve human parsing by exploring a novel self-supervised structure-sensitive learning approach, which imposes human pose structures into the parsing results without resorting to extra supervision. The datasets, code and models are available at http://www.sysu-hcp.net/lip/."}
{"_id": "d7088b8102edc27106a0ad8cee9bc6c4845edfe9", "title": "OctoPocus: a dynamic guide for learning gesture-based command sets", "text": "We describe OctoPocus, an example of a dynamic guide that combines on-screen feedforward and feedback to help users learn, execute and remember gesture sets. OctoPocus can be applied to a wide range of single-stroke gestures and recognition algorithms and helps users progress smoothly from novice to expert performance. We provide an analysis of the design space and describe the results of two experi-ments that show that OctoPocus is significantly faster and improves learning of arbitrary gestures, compared to con-ventional Help menus. It can also be adapted to a mark-based gesture set, significantly improving input time compared to a two-level, four-item Hierarchical Marking menu."}
{"_id": "b45df87ab743d92a6a508815c5cab6d0a52dd3ce", "title": "Controversies surrounding nonspeech oral motor exercises for childhood speech disorders.", "text": "The use of nonspeech oral motor exercises (NSOMEs) for influencing children\u2019s speech sound productions is a common therapeutic practice used by speech-language pathologists (SLPs) in the United States, Canada, and the United Kingdom. Reports from these countries have documented that between 71.5% and 85% of practicing clinicians use some type of NSOMEs in therapy to change children\u2019s speech productions. NSOMEs can be defined as any therapy technique that does not require the child to produce a speech sound but is used to influence the development of speaking abilities. The National Center for Evidence-Based Practice in Communication Disorders (NCEP) of the American Speech-Language-Hearing Association (ASHA) developed this definition: \u2018\u2018Oral-motor exercises are activities that involve sensory stimulation to or actions of the lips, jaw, tongue, soft palate, larynx, and respiratory muscles which are intended to influence the physiologic underpinnings of the oropharyngeal mechanism and thus improve its functions; oral-motor exercises may include active muscle exercise, muscle stretching, passive exercise, and sensory stimulation.\u2019\u2019 The term \u2018\u2018oral motor,\u2019\u2019 which relates to movements and placements of the oral musculature, is established in the field of SLP. Although the existence and importance of the oral-motor aspects of speech production is well understood, the use and effectiveness of nonspeech oral-motor activities is disputed because of the lack of theoretical and empirical support. To understand more about the use of NSOMEs for changing disordered productions of speech, a colleague and I conducted a nationwide survey of 537 practicing clinicians from 48 states. We found that SLPs frequently used the exercises of blowing, tongue wagging and pushups, cheek puffing, the alternating movement of pucker-smile, \u2018\u2018big smile,\u2019\u2019 and tongue-to-nose-to-chin. The clinicians believed these NSOMEs would help their clients obtain tongue elevation, protrusion, and lateralization; increase their tongue and lip strength; become more aware of their articulators; stabilize the jaw; control drooling; and increase velopharyngeal and sucking abilities. The respondents to the survey reported using these techniques for children with a wide variety of different speech disorders stemming from a wide variety of etiologies: dysarthria, childhood apraxia of speech (CAS), Down syndrome, late talkers, phonological impairment, functional misarticulations, and hearing impairment. It makes one curious why clinicians would select these nonspeech therapeutic techniques because they lack adequate theoretical underpinnings for their use. Additionally, the research studies that have evaluated treatment efficacy using the NSOME techniques, although admittedly scant and not at the highest levels of scientific rigor, do not show therapeutic effectiveness. Not only are these nonspeech tasks lacking in theoretical and data support for their use, their application to improve speech intelligibility also often defies logical reasoning. So why do clinicians use these techniques? As I have previously pointed out, SLPs have several possible reasons for using NSOMEs. Some of these reasons may be that"}
{"_id": "5383ee00a5c0c3372081fccfeceb812b7854c9ea", "title": "Spatial as Deep: Spatial CNN for Traffic Scene Understanding", "text": "Convolutional neural networks (CNNs) are usually built by stacking convolutional operations layer-by-layer. Although CNN has shown strong capability to extract semantics from raw pixels, its capacity to capture spatial relationships of pixels across rows and columns of an image is not fully explored. These relationships are important to learn semantic objects with strong shape priors but weak appearance coherences, such as traffic lanes, which are often occluded or not even painted on the road surface as shown in Fig. 1 (a). In this paper, we propose Spatial CNN (SCNN), which generalizes traditional deep layer-by-layer convolutions to slice-byslice convolutions within feature maps, thus enabling message passings between pixels across rows and columns in a layer. Such SCNN is particular suitable for long continuous shape structure or large objects, with strong spatial relationship but less appearance clues, such as traffic lanes, poles, and wall. We apply SCNN on a newly released very challenging traffic lane detection dataset and Cityscapse dataset. The results show that SCNN could learn the spatial relationship for structure output and significantly improves the performance. We show that SCNN outperforms the recurrent neural network (RNN) based ReNet and MRF+CNN (MRFNet) in the lane detection dataset by 8.7% and 4.6% respectively. Moreover, our SCNN won the 1st place on the TuSimple Benchmark Lane Detection Challenge, with an accuracy of 96.53%."}
{"_id": "458d8adb34e1de582b6ce553755fb6493093fecf", "title": "Multi-modality web video categorization", "text": "This paper reports a first comprehensive study and large-scale test on web video (so-called user generated video or micro video) categorization. Observing that web videos are characterized by a much higher diversity of quality, subject, style, and genres compared with traditional video programs, we focus on studying the effectiveness of different modalities in dealing with such high variation. Specifically, we propose two novel modalities including a semantic modality and a surrounding text modality, as effective complements to most commonly used low-level features. The semantic modality includes three feature representations, i.e., concept histogram, visual word vector model and visual word Latent Semantic Analysis (LSA), while text modality includes the titles, descriptions and tags of web videos. We conduct a set of comprehensive experiments for evaluating the effectiveness of the proposed feature representations over different classifiers such as Support Vector Machine (SVM), Gaussian Mixture Model (GMM) and Manifold Ranking (MR). Our experiments on a large-scale dataset with 11k web videos (nearly 450 hours in total) demonstrate that (1) the proposed multimodal feature representation is effective for web video categorization, and (2) SVM outperforms GMM and MR on nearly all the modalities."}
{"_id": "b422615322420d906c75a40843b3a5f200587f9d", "title": "A Fast-Response Pseudo-PWM Buck Converter With PLL-Based Hysteresis Control", "text": "Hysteresis voltage-mode control is a simple and fast control scheme for switched-mode power converters. However, it is well-known that the switching frequency of a switched-mode power converter with hysteresis control depends on many factors such as loading current and delay of the controller which vary from time to time. It results in a wide noise spectrum and leads to difficulty in shielding electro-magnetic interference. In this work, a phase-lock loop (PLL) is utilized to control the hysteresis level of the comparator used in the controller, while not interfering with the intrinsic behavior of the hysteresis controller. Some design techniques are used to solve the integration problem and to improve the settling speed of the PLL. Moreover, different control modes are implemented. A buck converter with proposed control scheme is fabricated using a commercial 0.35-\u03bc m CMOS technology. The chip area is 1900 \u03bcm \u00d7 2200 \u03bcm. The switching frequency is locked to 1 MHz, and the measured frequency deviation is within \u00b11%. The measured load transient response between 160 and 360 mA is 5 \u03bc s only."}
{"_id": "6e612d643d44d9bee5ab0532aa25fe3fa66e85d5", "title": "Memristor-The Missing Circuit Element LEON 0", "text": "A new two-terminal circuit element-called the memrirtorcharacterized by a relationship between the charge q(t) s St% i(7J d7 and the flux-linkage (p(t) = J-\u2018-m vfrj d T is introduced os the fourth boric circuit element. An electromagnetic field interpretation of this relationship in terms of a quasi-static expansion of Maxwell\u2019s equations is presented. Many circuit~theoretic properties of memdstorr are derived. It is shown that this element exhibiis some peculiar behavior different from that exhibited by resistors, inductors, or capacitors. These properties lead to a number of unique applications which cannot be realized with RLC networks alone. I + \u201d -3 nl Although a physical memristor device without internal power supply has not yet been discovered, operational laboratory models have been built with the help of active circuits. Experimental results ore presented to demonstrate the properties and potential applications of memristors. (a) I. 1NTR00~cnoN I + Y -3 T HIS PAPER presents the logical and scientific basis for the existence of a new two-terminal circuit element called the memristor (a contraction for memory (b) resistor) which has every right to be as basic as the three classical circuit elements already in existence, namely, the resistor, inductor, and capacitor. Although the existence of a memristor in the form of a physical device without internal power supply has not yet been discovered, its laboratory realization in the form of active circuits will be presented in Section II.\u2019 Many interesting circuit-theoretic properties possessed by the memristor, the most important of which is perhaps the passivity property which provides the circuit-theoretic basis for its physical realizability, will be derived in Section III. An electromagnetic field interpretation of the memristor characterization will be presented in Section IV with the help of a quasi-static expansion of Maxwell\u2019s equations. Finally, some novel applications of memristors will be presented in Section V. 1 + \u201d -3"}
{"_id": "79f026f743997ab8b5251f6e915a0a427576b142", "title": "Millimeter-wave antennas for radio access and backhaul in 5G heterogeneous mobile networks", "text": "Millimeter-wave communications are expected to play a key role in future 5G mobile networks to overcome the dramatic traffic growth expected over the next decade. Such systems will severely challenge antenna technologies used at mobile terminal, access point or backhaul/fronthaul levels. This paper provides an overview of the authors' recent achievements in the design of integrated antennas, antenna arrays and high-directivity quasi-optical antennas for high data-rate 60-GHz communications."}
{"_id": "9302a33f0ca803e68c48ddc76fbc7a8a7517999f", "title": "Frequency compensation of high-speed, low-voltage CMOS multistage amplifiers", "text": "This paper presents the frequency compensation of high-speed, low-voltage multistage amplifiers. Two frequency compensation techniques, the Nested Miller Compensation with Nulling Resistors (NMCNR) and Reversed Nested Indirect Compensation (RNIC), are discussed and employed on two multistage amplifier architectures. A four-stage pseudo-differential amplifier with CMFF and CMFB is designed in a 1.2 V, 65-nm CMOS process. With NMCNR, it achieves a phase margin (PM) of 59\u00b0 with a DC gain of 75 dB and unity-gain frequency (fug) of 712 MHz. With RNIC, the same four-stage amplifier achieves a phase margin of 84\u00b0, DC gain of 76 dB and fug of 2 GHz. Further, a three-stage single-ended amplifier is designed in a 1.1-V, 40-nm CMOS process. The three-stage OTA with RNIC achieves PM of 81\u00b0, DC gain of 80 dB and fug of 770 MHz. The same OTA achieves PM of 59\u00b0 with NMCNR, while maintaining a DC gain of 75 dB and fug of 262 MHz. Pole-splitting, to achieve increased stability, is illustrated for both compensation schemes. Simulations illustrate that the RNIC scheme achieves much higher PM and fug for lower values of compensation capacitance compared to NMCNR, despite the growing number of low voltage amplifier stages."}
{"_id": "d151823fabbe748b41d90ab0a01b8f9e90822241", "title": "Time-Frequency Masking in the Complex Domain for Speech Dereverberation and Denoising", "text": "In real-world situations, speech is masked by both background noise and reverberation, which negatively affect perceptual quality and intelligibility. In this paper, we address monaural speech separation in reverberant and noisy environments. We perform dereverberation and denoising using supervised learning with a deep neural network. Specifically, we enhance the magnitude and phase by performing separation with an estimate of the complex ideal ratio mask. We define the complex ideal ratio mask so that direct speech results after the mask is applied to reverberant and noisy speech. Our approach is evaluated using simulated and real room impulse responses, and with background noises. The proposed approach improves objective speech quality and intelligibility significantly. Evaluations and comparisons show that it outperforms related methods in many reverberant and noisy environments."}
{"_id": "9ef98f87a6b1bd7e6c006f056f321f9d291d4c45", "title": "Depth Images-based Human Detection , Tracking and Activity Recognition Using Spatiotemporal Features and Modified HMM", "text": "Human activity recognition using depth information is an emerging and challenging technology in computer vision due to its considerable attention by many practical applications such as smart home/office system, personal health care and 3D video games. This paper presents a novel framework of 3D human body detection, tracking and recognition from depth video sequences using spatiotemporal features and modified HMM. To detect human silhouette, raw depth data is examined to extract human silhouette by considering spatial continuity and constraints of human motion information. While, frame differentiation is used to track human movements. Features extraction mechanism consists of spatial depth shape features and temporal joints features are used to improve classification performance. Both of these features are fused together to recognize different activities using the modified hidden Markov model (M-HMM). The proposed approach is evaluated on two challenging depth video datasets. Moreover, our system has significant abilities to handle subject's body parts rotation and body parts missing which provide major contributions in human activity recognition."}
{"_id": "1737177098539e295235678d664fcdb833568b94", "title": "Cloud Task Scheduling Based on Load Balancing Ant Colony Optimization", "text": "The cloud computing is the development of distributed computing, parallel computing and grid computing, or defined as the commercial implementation of these computer science concepts. One of the fundamental issues in this environment is related to task scheduling. Cloud task scheduling is an NP-hard optimization problem, and many meta-heuristic algorithms have been proposed to solve it. A good task scheduler should adapt its scheduling strategy to the changing environment and the types of tasks. This paper proposes a cloud task scheduling policy based on Load Balancing Ant Colony Optimization (LBACO) algorithm. The main contribution of our work is to balance the entire system load while trying to minimizing the make span of a given tasks set. The new scheduling strategy was simulated using the CloudSim toolkit package. Experiments results showed the proposed LBACO algorithm outperformed FCFS (First Come First Serve) and the basic ACO (Ant Colony Optimization)."}
{"_id": "591bc36fbfa094495c01951026f275292b87b92c", "title": "A Particle Swarm Optimization-Based Heuristic for Scheduling Workflow Applications in Cloud Computing Environments", "text": "Cloud computing environments facilitate applications by providing virtualized resources that can be provisioned dynamically. However, users are charged on a pay-per-use basis. User applications may incur large data retrieval and execution costs when they are scheduled taking into account only the \u2018execution time\u2019. In addition to optimizing execution time, the cost arising from data transfers between resources as well as execution costs must also be taken into account. In this paper, we present a particle swarm optimization (PSO) based heuristic to schedule applications to cloud resources that takes into account both computation cost and data transmission cost. We experiment with a workflow application by varying its computation and communication costs. We compare the cost savings when using PSO and existing \u2018Best Resource Selection\u2019 (BRS) algorithm. Our results show that PSO can achieve: a) as much as 3 times cost savings as compared to BRS, and b) good distribution of workload onto resources."}
{"_id": "8186763a49f36c32af6144f789ca3a5b03a3c01e", "title": "QRSF: QoS-aware resource scheduling framework in cloud computing", "text": "Cloud computing harmonizes and delivers the ability of resource sharing over different geographical sites. Cloud resource scheduling is a tedious task due to the problem of finding the best match of resource-workload pair. The efficient management of dynamic nature of resource can be done with the help of cloud workloads. Till cloud workload is deliberated as a central capability, the resources cannot be utilized in an effective way. In literature, very few efficient resource scheduling policies for energy, cost and time constraint cloud workloads are reported. This paper presents an efficient cloud workload management framework in which cloud workloads have been identified, analyzed and clustered through K-means on the basis of weights assigned and their QoS requirements. Further scheduling has been done based on different scheduling policies and their corresponding algorithms. The performance of the proposed algorithms has been evaluated with existing scheduling policies through CloudSim toolkit. The experimental results show that the proposed framework gives better results in terms of energy consumption, execution cost and time of different cloud workloads as compared to existing algorithms."}
{"_id": "bb59dd3db07e4e82b0fc734e37531417d6834f86", "title": "Systematic literature reviews in software engineering - A systematic literature review", "text": "Background: In 2004 the concept of evidence-based software engineering (EBSE) was introduced at the ICSE04 conference. Aims: This study assesses the impact of systematic literature reviews (SLRs) which are the recommended EBSE method for aggregating evidence. Method: We used the standard systematic literature review method employing a manual search of 10 journals and 4 conference proceedings. Results: Of 20 relevant studies, eight addressed research trends rather than technique evaluation. Seven SLRs addressed cost estimation. The quality of SLRs was fair with only three scoring less than 2 out of 4. Conclusions: Currently, the topic areas covered by SLRs are limited. European researchers, particularly those at the Simula Laboratory appear to be the leading exponents of systematic literature reviews. The series of cost estimation SLRs demonstrate the potential value of EBSE for synthesising evidence and making it available to practitioners. ! 2008 Elsevier B.V. All rights reserved."}
{"_id": "c665f4a9b2ca7beb4ad4a7221e8830ea38b8d769", "title": "Combining Knowledge and Data Driven Insights for Identifying Risk Factors using Electronic Health Records", "text": "BACKGROUND\nThe ability to identify the risk factors related to an adverse condition, e.g., heart failures (HF) diagnosis, is very important for improving care quality and reducing cost. Existing approaches for risk factor identification are either knowledge driven (from guidelines or literatures) or data driven (from observational data). No existing method provides a model to effectively combine expert knowledge with data driven insight for risk factor identification.\n\n\nMETHODS\nWe present a systematic approach to enhance known knowledge-based risk factors with additional potential risk factors derived from data. The core of our approach is a sparse regression model with regularization terms that correspond to both knowledge and data driven risk factors.\n\n\nRESULTS\nThe approach is validated using a large dataset containing 4,644 heart failure cases and 45,981 controls. The outpatient electronic health records (EHRs) for these patients include diagnosis, medication, lab results from 2003-2010. We demonstrate that the proposed method can identify complementary risk factors that are not in the existing known factors and can better predict the onset of HF. We quantitatively compare different sets of risk factors in the context of predicting onset of HF using the performance metric, the Area Under the ROC Curve (AUC). The combined risk factors between knowledge and data significantly outperform knowledge-based risk factors alone. Furthermore, those additional risk factors are confirmed to be clinically meaningful by a cardiologist.\n\n\nCONCLUSION\nWe present a systematic framework for combining knowledge and data driven insights for risk factor identification. We demonstrate the power of this framework in the context of predicting onset of HF, where our approach can successfully identify intuitive and predictive risk factors beyond a set of known HF risk factors."}
{"_id": "0c369e983e93e9b2be9066d5e0fec7a60660a40a", "title": "On the Resolution Limits of Superimposed Projection", "text": "Multi-projector super-resolution is the dual of multi-camera super-resolution. The goal of projector super-resolution is to produce a high resolution frame via superimposition of multiple low resolution subframes. Prior work claims that it is impossible to improve resolution via superimposed projection except in specialized circumstances. Rigorous analysis has been previously restricted to the special case of uniform display sampling, which reduces the problem to a simple shift-invariant deblurring. To understand the true behavior of superimposed projection as an inverse of classical camera super-resolution, one must consider the effects of non-uniform displacements between component subframes. In this paper, we resolve two fundamental theoretical questions concerning resolution enhancement via superimposed projection. First, we show that it is possible to reproduce frequencies that are well beyond the Nyquist limit of any of the component subframes. Second, we show that nonuniform sampling and pixel reconstruction functions impose fundamental limits on achievable resolution."}
{"_id": "7fae42a6d1690e81f6b0c500c8a22330a6c2c75f", "title": "Droop control in LV-grids", "text": "Remote electrification with island supply systems, the increasing acceptance of the microgrids concept and the penetration of the interconnected grid with DER and RES require the application of inverters and the development of new control algorithms. One promising approach is the implementation of conventional f/U-droops into the respective inverters, thus down scaling the conventional grid control concept to the low voltage grid. Despite contradicting line parameters, the applicability of this proceeding is outlined and the boundary conditions are derived"}
{"_id": "1724b33cd5d166d9dab26a61687a1e04e52a86c3", "title": "Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization", "text": "We give novel algorithms for stochastic strongly-convex optimization in the gradient oracle model which return a O( 1 T )-approximate solution after T iterations. The first algorithm is deterministic, and achieves this rate via gradient updates and historical averaging. The second algorithm is randomized, and is based on pure gradient steps with a random step size. This rate of convergence is optimal in the gradient oracle model. This improves upon the previously known best rate of O( log(T ) T ), which was obtained by applying an online stronglyconvex optimization algorithm with regret O(log(T )) to the batch setting. We complement this result by proving that any algorithm has expected regret of \u03a9(log(T )) in the online stochastic strongly-convex optimization setting. This shows that any online-to-batch conversion is inherently suboptimal for stochastic strongly-convex optimization. This is the first formal evidence that online convex optimization is strictly more difficult than batch stochastic convex optimization."}
{"_id": "728d1de052a40aa41ef8813bf128fb2a6db22597", "title": "Oxytocin Attenuates Amygdala Reactivity to Fear in Generalized Social Anxiety Disorder", "text": "Patients with generalized social anxiety disorder (GSAD) exhibit heightened activation of the amygdala in response to social cues conveying threat (eg, fearful/angry faces). The neuropeptide oxytocin (OXT) decreases anxiety and stress, facilitates social encounters, and attenuates amygdala reactivity to threatening faces in healthy subjects. The goal of this study was to examine the effects of OXT on fear-related amygdala reactivity in GSAD and matched healthy control (CON) subjects. In a functional magnetic resonance imaging study utilizing a double-blind placebo-controlled within-subjects design, we measured amygdala activation to an emotional face matching task of fearful, angry, and happy faces following acute intranasal administration of OXT (24\u2009IU or 40.32\u2009\u03bcg) and placebo in 18 GSAD and 18 CON subjects. Both the CON and GSAD groups activated bilateral amygdala to all emotional faces during placebo, with the GSAD group exhibiting hyperactivity specifically to fearful faces in bilateral amygdala compared with the CON group. OXT had no effect on amygdala activity to emotional faces in the CON group, but attenuated the heightened amygdala reactivity to fearful faces in the GSAD group, such that the hyperactivity observed during the placebo session was no longer evident following OXT (ie, normalization). These findings suggest that OXT has a specific effect on fear-related amygdala activity, particularly when the amygdala is hyperactive, such as in GSAD, thereby providing a brain-based mechanism of the impact of OXT in modulating the exaggerated processing of social signals of threat in patients with pathological anxiety."}
{"_id": "5fb7bb492fcdae221480cf39185fdad5d877a2f2", "title": "Generalized State Coherence Transform for multidimensional localization of multiple sources", "text": "In our recent work an effective method for multiple source localization has been proposed under the name of cumulative State Coherence Transform (cSCT). Exploiting the physical meaning of the frequency-domain blind source separation and the sparse time-frequency dominance of the acoustic sources, multiple reliable TDOAs can be estimated with only two microphones, regardless of the permutation problem and of the microphone spacing. In this paper we present a multidimensional generalization of the cSCT which allows one to localize several sources in the P-dimensional space. An important approximation is made in order to perform a disjoint TDOA estimation over each dimension which reduces the localization problem to linear complexity. Furthermore the approach is invariant to the array geometry and to the assumed acoustic propagation model. Experimental results on simulated data show a precise 2-D localization of 7 sources by only using an array of three elements."}
{"_id": "a14fcb383f9ba008da3df52f06a79ec874e5a0a4", "title": "An Overview of BioCreative II.5", "text": "We present the results of the BioCreative II.5 evaluation in association with the FEBS Letters experiment, where authors created Structured Digital Abstracts to capture information about protein-protein interactions. The BioCreative II.5 challenge evaluated automatic annotations from 15 text mining teams based on a gold standard created by reconciling annotations from curators, authors, and automated systems. The tasks were to rank articles for curation based on curatable protein-protein interactions; to identify the interacting proteins (using UniProt identifiers) in the positive articles (61); and to identify interacting protein pairs. There were 595 full-text articles in the evaluation test set, including those both with and without curatable protein interactions. The principal evaluation metrics were the interpolated area under the precision/recall curve (AUC iP/R), and (balanced) F-measure. For article classification, the best AUC iP/R was 0.70; for interacting proteins, the best system achieved good macroaveraged recall (0.73) and interpolated area under the precision/recall curve (0.58), after filtering incorrect species and mapping homonymous orthologs; for interacting protein pairs, the top (filtered, mapped) recall was 0.42 and AUC iP/R was 0.29. Ensemble systems improved performance for the interacting protein task."}
{"_id": "f6edfb817cc8a9c8fb5b5a622bb0245522b99b25", "title": "Requirements, Traceability and DSLs in Eclipse with the Requirements Interchange Format (ReqIF)", "text": "Requirements engineering (RE) is a crucial aspect in systems development and is the area of ongoing research and process improvement. However, unlike in modelling, there has been no established standard that activities could converge on. In recent years, the emerging Requirements Interchange Format (RIF/ReqIF) gained more and more visibility in industry, and research projects start to investigate these standards. To avoid redundant efforts in implementing the standard, the VERDE and Deploy projects cooperate to provide a stable common basis for RIF/ReqIF that could be leveraged by other research projects too. In this paper, we present an Eclipse-based extensible implementation of a RIF/ReqIF-based requirements editing platform. In addition, we are concerned with two related aspects of RE that take advantage of the common platform. First, how can the quality of requirements be improved by replacing or complementing natural language requirements with formal approaches such as domain specific languages or models. Second, how can we establish robust traceability that links requirements and model constructs and other artefacts of the development process. We present two approaches to traceability and two approaches to modelling. We believe that our research represents a significant contribution to the existing tooling landscape, as it is the first clean-room implementation of the RIF/ReqIF standard. We believe that it will help reduce gaps in often heterogeneous tool chains and inspire new conceptual work and new tools."}
{"_id": "ea31dd9f633b2071906cdc3f147ded8d1963ef11", "title": "FRPAs and High-Gain Directional Antennas", "text": "Many types of GNSS antennas have been developed in recent years to make them suitable for different applications. Since their designs and performance requirements vary depending on their application, they have been grouped into six different categories in this book: 1. The design and performance of each type of antenna will be described here and throughout the rest of the book. This chapter discusses the design of the first two varieties: FRPAs and high-gain directional antennas. Figure 2.1 shows a representative sample of these antennas. FRPAs are the most popular and widely used of all GNSS antennas. There are many different types of these antennas; hence they will need two chapters\u2014this chapter and Chapter 3\u2014to fully cover all the important types. This chapter will discuss the following FRPA designs: microstrip patch antennas, which are the most ubiquitous of all GNSS antennas, and quadrifilar helix antennas (QHAs), which are also popular for handheld receivers and crossed \" drooping \" bow-type dipoles, which provide relatively wider bandwidths and good circular polarization. Spiral antennas such as the Archimedean spiral antenna will be discussed in Chapter 3 under multiband antennas since they are ultrawideband and can cover the entire GNSS band from 1.1 to 1.6 GHz. The conical spiral antenna is also a high-gain di"}
{"_id": "fdd02106537a89069b6993f4120abc54a0f7d1a5", "title": "$V_{T}$ Compensation Circuit for AM OLED Displays Composed of Two TFTs and One Capacitor", "text": "In this paper, we propose the threshold-voltage compensation pixel circuit that is composed of two thin-film transistors (TFTs) and one capacitor (2T1C). It not only compensates the deviation of the threshold voltage of the driver TFT but also actualizes the large aperture ratio for organic light-emitting diode (OLED) devices as well as the traditional 2T1C circuit. We show the result of SPICE simulation for the pixel circuit; it indicates that the circuit can allocate the relatively large aperture ratio for OLED devices"}
{"_id": "9ca61f6ede25100da67b5388c941bf69929a7528", "title": "AGDISTIS - Graph-Based Disambiguation of Named Entities Using Linked Data", "text": "Over the last decades, several billion Web pages have been made available on the Web. The ongoing transition from the current Web of unstructured data to the Web of Data yet requires scalable and accurate approaches for the extraction of structured data in RDF (Resource Description Framework) from these websites. One of the key steps towards extracting RDF from text is the disambiguation of named entities. While several approaches aim to tackle this problem, they still achieve poor accuracy. We address this drawback by presenting AGDISTIS, a novel knowledge-base-agnostic approach for named entity disambiguation. Our approach combines the Hypertext-Induced Topic Search (HITS) algorithm with label expansion strategies and string similarity measures. Based on this combination, AGDISTIS can efficiently detect the correct URIs for a given set of named entities within an input text. We evaluate our approach on eight different datasets against state-of-theart named entity disambiguation frameworks. Our results indicate that we outperform the state-of-the-art approach by up to 29% F-measure."}
{"_id": "3334a80676fefc486575bd2ddf1b281a640742f1", "title": "Web Spam Taxonomy", "text": "Web spamming refers to actions intended to mislead search engines into ranking some pages higher than they deserve. Recently, the amount of web spam has increased dramatically, leading to a degradation of search results. This paper presents a comprehensive taxonomy of current spamming techniques, which we believe can help in developing appropriate countermeasures."}
{"_id": "9d5f8b16c8efdcea3cf2d2ffe6970218283d6a2d", "title": "Eye-tracking of men's preferences for waist-to-hip ratio and breast size of women.", "text": "Studies of human physical traits and mate preferences often use questionnaires asking participants to rate the attractiveness of images. Female waist-to-hip ratio (WHR), breast size, and facial appearance have all been implicated in assessments by men of female attractiveness. However, very little is known about how men make fine-grained visual assessments of such images. We used eye-tracking techniques to measure the numbers of visual fixations, dwell times, and initial fixations made by men who viewed front-posed photographs of the same woman, computer-morphed so as to differ in her WHR (0.7 or 0.9) and breast size (small, medium, or large). Men also rated these images for attractiveness. Results showed that the initial visual fixation (occurring within 200 ms from the start of each 5 s test) involved either the breasts or the waist. Both these body areas received more first fixations than the face or the lower body (pubic area and legs). Men looked more often and for longer at the breasts, irrespective of the WHR of the images. However, men rated images with an hourglass shape and a slim waist (0.7 WHR) as most attractive, irrespective of breast size. These results provide quantitative data on eye movements that occur during male judgments of the attractiveness of female images, and indicate that assessments of the female hourglass figure probably occur very rapidly."}
{"_id": "af4dabbe0b886ea18a863ddc78ef16c9f15ea89d", "title": "Static analysis of cable-driven manipulators with non-negligible cable mass", "text": "This paper addresses the static analysis of cable-driven robotic manipulators with non-negligible cable mass. An approach to computing the static displacement of a homogeneous elastic cable is presented. The resulting cable-displacement expression is used to solve the inverse kinematics of general cable-driven robotic manipulators. In addition, the sag-induced stiffness of the cables is derived. Finally, two sample robotic manipulators with dimensions and system parameters similar to a large scale cable-driven manipulator currently under development are analyzed. The results show that cable sag can have a significant effect on both the inverse kinematics and stiffness of such manipulators."}
{"_id": "663b44145ffa2661f5d4a109df334beae3373952", "title": "Criminal multilation of the human body in Sweden--a thirty-year medico-legal and forensic psychiatric study.", "text": "During the 30-year period 1961-1990, a total of 22 deaths with criminal multilation/dismemberment of the human body were registered in Sweden. The multilations occurred in time clusters, mostly during the summer and winter periods, and increased during the three decades, with incidence rates of 0.05, 0.1, and 0.125 per million inhabitants and year, respectively. Multilation was noted 6.6 times more often in large urban areas than in the rest of Sweden. Defensive mutilation, in order to get rid of the corpse or make its identity more difficult, was noted in ten instances, aggressive mutilation following outrageous overkilling in four, offensive mutilation (lust murder) in seven, and necromanic multilation in one instance. In the last-mentioned case the cause of death was natural, while all deaths in the first three groups were homicidal, or homicide was strongly suspected. All perpetrators were males, in six instances assisted by other persons. In more than half of the cases the perpetrator's occupation was associated with application of anatomical knowledge, e.g., butcher, physician, veterinary assistant, or hunter. The perpetrators of the defensive and aggressive mutilations were mostly disorganized, i.e., alcoholics or drug users with previous psychiatric contacts and criminal histories, while the lust murderers were mostly organized, with a history of violent crimes (including the \"serial killing\" type), drug abuse and mental disorders with anxiety and schizophrenia, in that order to a diminishing degree. There were differences in mode of mutilation, depending on whether the mutilation was carried out by a layman, a butcher, or a physician. In only one case was the perpetrator convicted for the mutilation act itself; in the remaining instances the manslaughter, as a more serious crime, assimilated the mutilation. When the mutilation made it impossible to establish the cause of death, the perpetrators, despite strong circumstantial evidence indicating murder, were acquitted."}
{"_id": "ff7e2fc52f65aa45dc667b7c6abae70a3eb1fbe0", "title": "The Unreasonable Effectiveness of Deep Features as a Perceptual Metric", "text": "While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions, and fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on ImageNet classification has been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called \"perceptual losses\"? What elements are critical for their success? To answer these questions, we introduce a new dataset of human perceptual similarity judgments. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins on our dataset. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations."}
{"_id": "15c61fdb1eb2f2f1e78ac11638342b66da71c858", "title": "Leaving so soon?: understanding and predicting web search abandonment rationales", "text": "Users of search engines often abandon their searches. Despite the high frequency of Web search abandonment and its importance to Web search engines, little is known about why searchers abandon beyond that it can be for good or bad reasons. In this paper, we ex-tend previous work by studying search abandonment using both a retrospective survey and an in-situ method that captures aban-donment rationales at abandonment time. We show that although satisfaction is a common motivator for abandonment, one-in-five abandonment instances does not relate to satisfaction. We also studied the automatic prediction of the underlying reason for ob-served abandonment. We used features of the query and the results, interaction with the result page (e.g., cursor movements, scrolling, clicks), and the full search session. We show that our classifiers can learn to accurately predict the reasons for observed search abandonment. Such accurate predictions help search providers estimate user satisfaction for queries without clicks, affording a more complete understanding of search engine performance."}
{"_id": "0d1c553a6716f6a634a8e826ca8dd3bbe16b00c0", "title": "A state-of-the-art 3D sensor for robot navigation", "text": "This paper relates first experiences using a state-of-the-art, time-of-flight sensor that is able to deliver 3D images. The properties and capabilities of the sensor make it a potential powerful tool for applications within mobile robotics especially for real-time tasks, as the sensor features a frame rate of up to 30 frames per second. Its capabilities in terms of basic obstacle avoidance and local path-planning are evaluated and compared to the performance of a standard laser scanner."}
{"_id": "ec846fbc1d8046afe67d630ca172fa0ea42fdb9a", "title": "IEC 61850 impact on substation design", "text": "IEC 61850 communication networks and systems for utility automation is a standard for communications that creates an environment that will allow dramatic changes in the design and operation of substations. The paper describes the different components of such systems and the distribution of functions between the devices included in it. Multiple Merging Units communicating over a 100 Mb Ethernet with multiple devices and/or a central computer that receives and processes current and voltage samples with a rate of 256 samples/cycle allow the implementation of all typical substation protection, automation, control, monitoring and recording functions in a very efficient way. The concepts also allow to change the design of layout of the high voltage switchgear."}
{"_id": "537afb5047cd33b9f9a3aa2dcc47c6de7f1618b4", "title": "OPC Unified Architecture: A Service-oriented Architecture for Smart Grids", "text": "In this paper, the OPC UA is introduced as a key technology for realizing a variety of Smart Grid use cases enabling relevant tasks of automation and control. OPC UA is the successor of the established Classic OPC specification and state of the art regarding information exchange in the industrial automation branch. One of its key improvements over the Classic OPC is that the area of application is no longer restricted to industrial automation but OPC UA can be applied almost in every domain facing challenges in automated control. This improvement stems from a more generic and object-oriented approach. For the adoption of OPC UA to Smart Grids, two of the most important data models -- the Common Information Model (CIM) and the IEC 61850 -- have been identified to be integrated into OPC UA communication. In this contribution, basic OPC UA features and functionalities (information modeling, communication services, and information security) are introduced and discussed in the context of Smart Grids."}
{"_id": "828f95aff57eb9bd2cea168364827cb0384caf9e", "title": "The social psychology of stigma.", "text": "This chapter addresses the psychological effects of social stigma. Stigma directly affects the stigmatized via mechanisms of discrimination, expectancy confirmation, and automatic stereotype activation, and indirectly via threats to personal and social identity. We review and organize recent theory and empirical research within an identity threat model of stigma. This model posits that situational cues, collective representations of one's stigma status, and personal beliefs and motives shape appraisals of the significance of stigma-relevant situations for well-being. Identity threat results when stigma-relevant stressors are appraised as potentially harmful to one's social identity and as exceeding one's coping resources. Identity threat creates involuntary stress responses and motivates attempts at threat reduction through coping strategies. Stress responses and coping efforts affect important outcomes such as self-esteem, academic achievement, and health. Identity threat perspectives help to explain the tremendous variability across people, groups, and situations in responses to stigma."}
{"_id": "f732290a4a2a77ba5070ec3383df679bfc13f713", "title": "Gait support for complete spinal cord injury patient by synchronized leg-swing with HAL", "text": "Biped walking improves the circulation of blood as well as bone density of the lower limbs, thereby enhancing the quality of life (QOL). It is significant not only to healthy people but also to physically challenged persons such as complete spinal cord injury (SCI) patients. The purpose of this paper is to propose an estimation algorithm that infers the intention related to the forward leg-swing in order to support the gait for complete SCI patients wearing an exoskeleton system called a Hybrid Assistive Limb (HAL), and to verify the effectiveness of the proposed algorithm through a clinical trial. The proposed algorithm infers the patient's intention in synchronization with the deviation of the center of the ground reaction force (CoGRF) that is observed immediately before a person starts walking. The patient conveys this intention by inducing the deviation of the CoGRF, using crutches or handrails with both of his/her arms. In the clinical trial, we confirmed that the algorithm inferred the patient's intention to swing the leg forward, and achieved a smooth gait in synchronization with it. As a result, the gait speed and cadence of the SCI patient with HAL during the 10-meter walking test increased to 6.67 [m/min] and 20 [steps/min], respectively after several trials."}
{"_id": "e4e73bad851bff528bb84750f293966b9a7113d4", "title": "A Web Scraping Methodology for Bypassing Twitter API Restrictions", "text": "Retrieving information from social networks is the first and primordial step many data analysis fields such as Natural Language Processing, Sentiment Analysis and Machine Learning. Important data science tasks relay on historical data gathering for further predictive results. Most of the recent works use Twitter API, a public platform for collecting public streams of information, which allows querying chronological tweets for no more than three weeks old. In this paper, we present a new methodology for collecting historical tweets within any date range using web scraping techniques bypassing for Twitter API restrictions."}
{"_id": "04ee77ef1143af8b19f71c63b8c5b077c5387855", "title": "Ask Me Anything: Dynamic Memory Networks for Natural Language Processing", "text": "Most tasks in natural language processing can be cast into question answering (QA) problems over language input. We introduce the dynamic memory network (DMN), a unified neural network framework which processes input sequences and questions, forms semantic and episodic memories, and generates relevant answers. Questions trigger an iterative attention process which allows the model to condition its attention on the result of previous iterations. These results are then reasoned over in a hierarchical recurrent sequence model to generate answers. The DMN can be trained end-to-end and obtains state of the art results on several types of tasks and datasets: question answering (Facebook\u2019s bAbI dataset), sequence modeling for part of speech tagging (WSJ-PTB), coreference resolution (Quizbowl dataset) and text classification for sentiment analysis (Stanford Sentiment Treebank). The model relies exclusively on trained word vector representations and requires no string matching or manually engineered features."}
{"_id": "165db9e093be270d38ac4a264efff7507518727e", "title": "Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks", "text": "One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks."}
{"_id": "264a91f1497fd95b6ba7d570ae450dbea0bfe712", "title": "Lifted Relational Neural Networks", "text": "We propose a method combining relational-logic representations with neural network learning. A general lifted architecture, possibly reflecting some background domain knowledge, is described through relational rules which may be handcrafted or learned. The relational rule-set serves as a template for unfolding possibly deep neural networks whose structures also reflect the structures of given training or testing relational examples. Different networks corresponding to different examples share their weights, which co-evolve during training by stochastic gradient descent algorithm. The framework allows for hierarchical relational modeling constructs and learning of latent relational concepts through shared hidden layers weights corresponding to the rules. Discovery of notable relational concepts and experiments on 78 relational learning benchmarks demonstrate favorable performance of the method."}
{"_id": "3a8129e6fe3ad9bc3a51e44da32424e38612e4cc", "title": "TensorLog: A Differentiable Deductive Database", "text": "Large knowledge bases (KBs) are useful in many tasks, but it is unclear how to integrate this sort of knowledge into \u201cdeep\u201d gradient-based learning systems. To address this problem, we describe a probabilistic deductive database, called TensorLog, in which reasoning uses a differentiable process. In TensorLog, each clause in a logical theory is first converted into certain type of factor graph. Then, for each type of query to the factor graph, the message-passing steps required to perform belief propagation (BP) are \u201cunrolled\u201d into a function, which is differentiable. We show that these functions can be composed recursively to perform inference in non-trivial logical theories containing multiple interrelated clauses and predicates. Both compilation and inference in TensorLog are efficient: compilation is linear in theory size and proof depth, and inference is linear in database size and the number of message-passing steps used in BP. We also present experimental results with TensorLog and discuss its relationship to other first-order probabilistic logics."}
{"_id": "39a1abcbb87d8ff48ee47d446411a3455451f25b", "title": "Deep Convolutional Neural Networks on Multichannel Time Series for Human Activity Recognition", "text": "This paper focuses on human activity recognition (HAR) problem, in which inputs are multichannel time series signals acquired from a set of bodyworn inertial sensors and outputs are predefined human activities. In this problem, extracting effective features for identifying activities is a critical but challenging task. Most existing work relies on heuristic hand-crafted feature design and shallow feature learning architectures, which cannot find those distinguishing features to accurately classify different activities. In this paper, we propose a systematic feature learning method for HAR problem. This method adopts a deep convolutional neural networks (CNN) to automate feature learning from the raw inputs in a systematic way. Through the deep architecture, the learned features are deemed as the higher level abstract representation of low level raw time series signals. By leveraging the labelled information via supervised learning, the learned features are endowed with more discriminative power. Unified in one model, feature learning and classification are mutually enhanced. All these unique advantages of the CNN make it outperform other HAR algorithms, as verified in the experiments on the Opportunity Activity Recognition Challenge and other benchmark datasets."}
{"_id": "8731369a707046f3f8dd463d1fd107de31d40a24", "title": "Relation Extraction with Multi-instance Multi-label Convolutional Neural Networks", "text": "Distant supervision is an efficient approach that automatically generates labeled data for relation extraction (RE). Traditional distantly supervised RE systems rely heavily on handcrafted features, and hence suffer from error propagation. Recently, a neural network architecture has been proposed to automatically extract features for relation classification. However, this approach follows the traditional expressed-at-least-once assumption, and fails to make full use of information across different sentences. Moreover, it ignores the fact that there can be multiple relations holding between the same entity pair. In this paper, we propose a multi-instance multi-label convolutional neural network for distantly supervised RE. It first relaxes the expressed-at-least-once assumption, and employs cross-sentence max-pooling so as to enable information sharing across different sentences. Then it handles overlapping relations by multi-label learning with a neural network classifier. Experimental results show that our approach performs significantly and consistently better than state-of-the-art methods."}
{"_id": "504c1ff1557dcffeda90be5bccf90c94dcd6aa4e", "title": "Predictors of loneliness among older women and men in Sweden: A national longitudinal study.", "text": "OBJECTIVES\nLongitudinal research on loneliness in old age has rarely considered loneliness separately for men and women, despite gender differences in life experiences. The objective of this study was to examine the extent to which older women and men (70+) report feelings of loneliness with a focus on: (a) changes in reported loneliness as people age, and (b) which factors predict loneliness.\n\n\nMETHOD\nData from the 2004 and 2011 waves of SWEOLD, a longitudinal national survey, was used (n = 587). The prediction of loneliness in 2011 by variables measured in 2004 and 2004-2011 variable change scores was examined in three logistic regression models: total sample, women and men. Variables in the models included: gender, age, education, mobility problems, depression, widowhood and social contacts.\n\n\nRESULTS\nOlder people moved into and out of frequent loneliness over time, although there was a general increase in loneliness with age. Loneliness at baseline, depression increment and recent widowhood were significant predictors of loneliness in all three multivariable models. Widowhood, depression, mobility problems and mobility reduction predicted loneliness uniquely in the model for women; while low level of social contacts and social contact reduction predicted loneliness uniquely in the model for men.\n\n\nCONCLUSION\nThis study challenges the notion that feelings of loneliness in old age are stable. It also identifies important gender differences in prevalence and predictors of loneliness. Knowledge about such differences is crucial for the development of effective policy and interventions to combat loneliness in later life."}
{"_id": "f46e9a51f3a2e0702d0afc62e3092e23578995fb", "title": "Research on high-frequency induction heating load-matched based on LLC resonant circuit", "text": "Modern high-frequency induction heating technology puts forward higher requirements for load matching of the induction heating power supply. LLC resonant load circuit can replace the traditional induction heating load matching transformer, and increase power density. This paper analyzes the load matching characteristics of LLC load resonant circuit in high frequency induction heating power supply, and gives the parameter selection method of matching components. At last, combining with phase-shifting pulse-width modulation (PWM) control method, the phase-shifting induction heating experiment platform based on LLC load resonant circuit is established. Experimental results show that the performance of LLC circuit in high frequency induction heating power supply load matching is excellent."}
{"_id": "98e74a172db1f0b7079bd2659412ea3ee8713776", "title": "An 800-MHz\u20136-GHz Software-Defined Wireless Receiver in 90-nm CMOS", "text": "A software-defined radio receiver is designed from a low-power ADC perspective, exploiting programmability of windowed integration sampler and clock-programmable discrete-time analog filters. To cover the major frequency bands in use today, a wideband RF front-end, including the low-noise amplifier (LNA) and a wide tuning-range synthesizer, spanning over 800 MHz to 6 GHz is designed. The wideband LNA provides 18-20 dB of maximum gain and 3-3.5 dB of noise figure over 800 MHz to 6 GHz. A low 1/f noise and high-linearity mixer is designed which utilizes the passive mixer core properties and provides around +70 dBm IIP2 over the bandwidth of operation. The entire receiver circuits are implemented in 90-nm CMOS technology. Programmability of the receiver is tested for GSM and 802.11g standards"}
{"_id": "51e348c353e95906ab132ab6985eb3007a59742d", "title": "Towards Bi-Directional Communication in Human-Swarm Teaming: A Survey", "text": "Swarm systems consist of large numbers of robots that collaborate autonomously. With an appropriate level of human control, swarm systems could be applied in a variety of contexts ranging from search-and-rescue situations to Cyber defence. The two decision making cycles of swarms and humans operate on two different time-scales, where the former is normally orders of magnitude faster than the latter. Closing the loop at the intersection of these two cycles will create fast and adaptive humanswarm teaming networks. This paper brings desperate pieces of the ground work in this research area together to review this multidisciplinary literature. We conclude with a framework to synthesize the findings and summarize the multi-modal indicators needed for closed-loop human-swarm adaptive systems."}
{"_id": "3f04541a67b60285be3fc6d0b2af5ff3c5221eda", "title": "Opportunities and advances in ultra-wideband electronically scanned arrays", "text": "This paper discusses bandwidth and polarization challenges in modern dual-polarized ultra-wideband (UWB) electronically scanned array (ESA) aperture design and two recent solution advances aiming to remedy such issues: a 6:1 Planar Ultrawideband Modular Antenna (PUMA) array and a 10:1 Sliced Notch Antenna (SNA) array. The proposed arrays enable wide bandwidth, low cross-polarization, and high efficiency from both low-profile and high-profile array research directions to appeal to a wider variety of applications and design embodiments. The proposed UWB-ESAs are analyzed alongside a conventional 10:1 all-metal Vivaldi array to shed light upon modern design trends and implications."}
{"_id": "827d4b26f6e9b66deaae9f287898ef9c543b283c", "title": "Using a Recurrent Neural Network Model for Classification of Tweets Conveyed Influenza-related Information", "text": "Traditional disease surveillance systems depend on outpatient reporting and virological test results released by hospitals. These data have valid and accurate information about emerging outbreaks but it\u2019s often not timely. In recent years the exponential growth of users getting connected to social media provides immense knowledge about epidemics by sharing related information. Social media can now flag more immediate concerns related to outbreaks in real time. In this paper we apply the long short-term memory recurrent neural network (RNN) architecture to classify tweets conveyed influenza-related information and compare its performance with baseline algorithms including support vector machine (SVM), decision tree, naive Bayes, simple logistics, and naive Bayes multinomial. The developed RNN model achieved an F-score of 0.845 on the MedWeb task test set, which outperforms the F-score of SVM without applying the synthetic minority oversampling technique by 0.08. The F-score of the RNN model is within 1% of the highest score achieved by SVM with oversampling technique. * Corresponding author"}
{"_id": "85421e44649279d25677441eac74da617e66adf1", "title": "Incremental dense multi-modal 3D scene reconstruction", "text": "Aquiring reliable depth maps is an essential prerequisite for accurate and incremental 3D reconstruction used in a variety of robotics applications. Depth maps produced by affordable Kinect-like cameras have become a de-facto standard for indoor reconstruction and the driving force behind the success of many algorithms. However, Kinect-like cameras are less effective outdoors where one should rely on other sensors. Often, we use a combination of a stereo camera and lidar, however, process the acquired data in independent pipelines which generally leads to sub-optimal performance since both sensors suffer from different drawbacks. In this paper, we propose a probabilistic model that efficiently exploits complementarity between different depth-sensing modalities for incremental dense scene reconstruction. Our model uses a piecewise planarity prior assumption which is common in both the indoor and outdoor scenes. We demonstrate the effectiveness of our approach on the KITTI dataset, and provide qualitative and quantitative results showing high-quality dense reconstruction of a number of scenes."}
{"_id": "c274b6496cfcfc8f0961cae6c2b239dcb4a38258", "title": "An Algebraic Theory of Graph Reduction", "text": "Wc show how membership in classes of graphs definable m monwhc second-order ]oglc and of bounded treewldth can be decided by finite sets of terminating reduction rules. The method is constructive in the sense that wc describe an algorlthm that wdl produce, from J formula in monxhc second-order Ioglc and an mleger k such that the class dcfmed by the formul~ IS of treewidth s k, a set of rewrite rules that rcducxs any member of the elms to one of\u2019 firrltely many graphs, in a number of steps bounded by the size c~f the graph. This rcductmn syjtem ymlds an algorlthm that runs m time linear m the size of the graph. We illustrate our results with rcductlon systems that recognux some families of outerplanar and planar graphs. Categories .ind Subject Descriptors F.2 2 [Analysis of Algorithms and Problem Complexity], Nonnurnerical Algorithms and Pr[>blems\u2014cotyz/] ~trczt~o/zs O;Zdzscrctc strut tztrcs; F.-1 3 [Mathematical Logic and Formal Languages]; Formal Languages-ckmscs defu~ed by ~runzn~ars c~ruutomafa. G 2.2 [Discrete Mathematics]; Graph Thco~\u2013grup/~ alcqortrhms, trees General Terms Algorithm\\, Languages, Perform~nce. Theory Additlonai"}
{"_id": "5304becaddedc7f91085e825f650187a1932fe60", "title": "The systems biology markup language (SBML): a medium for representation and exchange of biochemical network models", "text": "MOTIVATION\nMolecular biotechnology now makes it possible to build elaborate systems models, but the systems biology community needs information standards if models are to be shared, evaluated and developed cooperatively.\n\n\nRESULTS\nWe summarize the Systems Biology Markup Language (SBML) Level 1, a free, open, XML-based format for representing biochemical reaction networks. SBML is a software-independent language for describing models common to research in many areas of computational biology, including cell signaling pathways, metabolic pathways, gene regulation, and others.\n\n\nAVAILABILITY\nThe specification of SBML Level 1 is freely available from http://www.sbml.org/"}
{"_id": "235b9c8f10461a95398e169ecb91cf3e223d3350", "title": "A lightweight symbolic virtual machine for solver-aided host languages", "text": "Solver-aided domain-specific languages (SDSLs) are an emerging class of computer-aided programming systems. They ease the construction of programs by using satisfiability solvers to automate tasks such as verification, debugging, synthesis, and non-deterministic execution. But reducing programming tasks to satisfiability problems involves translating programs to logical constraints, which is an engineering challenge even for domain-specific languages.\n We have previously shown that translation to constraints can be avoided if SDSLs are implemented by (traditional) embedding into a host language that is itself solver-aided. This paper describes how to implement a symbolic virtual machine (SVM) for such a host language. Our symbolic virtual machine is lightweight because it compiles to constraints only a small subset of the host's constructs, while allowing SDSL designers to use the entire language, including constructs for DSL embedding. This lightweight compilation employs a novel symbolic execution technique with two key properties: it produces compact encodings, and it enables concrete evaluation to strip away host constructs that are outside the subset compilable to constraints. Our symbolic virtual machine architecture is at the heart of Rosette, a solver-aided language that is host to several new SDSLs."}
{"_id": "ceb6bf1f066ed67a3bd1175b0e9ea60615fcc7a5", "title": "Learning to Collaborate for Question Answering and Asking", "text": "Question answering (QA) and question generation (QG) are closely related tasks that could improve each other; however, the connection of these two tasks is not well explored in literature. In this paper, we give a systematic study that seeks to leverage the connection to improve both QA and QG. We present a training algorithm that generalizes both Generative Adversarial Network (GAN) and Generative Domain-Adaptive Nets (GDAN) under the question answering scenario. The two key ideas are improving the QG model with QA through incorporating additional QA-specific signal as the loss function, and improving the QA model with QG through adding artificially generated training instances. We conduct experiments on both document based and knowledge based question answering tasks. We have two main findings. Firstly, the performance of a QG model (e.g in terms of BLEU score) could be easily improved by a QA model via policy gradient. Secondly, directly applying GAN that regards all the generated questions as negative instances could not improve the accuracy of the QA model. Learning when to regard generated questions as positive instances could bring performance boost."}
{"_id": "4693e6903f03020b6bead1b55021003f5edff5f3", "title": "Developing Citizens \u2019 Trust towards Successful Adoption of E-Government : an Empirical Study from Saudi Arabia", "text": "Despite the significant influence of citizens\u2019 trust towards successful adoption of e-government, the majority of research studies investigating trust in e-government focus only on technology and government agencies. This research provides a deep understanding of the concept of trust in e-government by investigating the influence of technology, government agencies, risk, citizens\u2019 characteristic. Structure Equation Modelling PLS-SEM is utilized to analyse the collected data and to test the proposed hypotheses. The findings of the study reveals the positive and significant impacts of both technical factors and citizens\u2019 characteristic while factors related to government agencies and risk provide negative impacts in trust in egovernment. Also, this paper identifies effective strategies that government need to develop citizens\u2019 trust on their online services. Research contributions and limitations are also discussed in the end of the paper."}
{"_id": "8498a9eb6073373039de3104e73fde2dd9b50ec7", "title": "A ground level causal learning algorithm", "text": "Open domain causal learning involves learning and establishing causal connections between events directly from sensory experiences. It has been established in psychology that this often requires background knowledge. However, background knowledge has to be built from first experiences, which we term ground level causal learning, which basically involves observing temporal correlations. Subsequent knowledge level causal learning can then be based on this ground level causal knowledge. The causal connections between events, such as between lightning and thunder, are often hard to discern based on simple temporal correlations because there might be noise - e.g., wind, headlights, sounds of vehicles, etc. - that intervene between lightning and thunder. In this paper, we adopt the position that causal learning is inductive and pragmatic, and causal connections exist on a scale of graded strength. We describe a method that is able to filter away noise in the environment to obtain likely causal connections between events."}
{"_id": "81a4e3fdea0fc593ccdc546b2e86a72007d30655", "title": "A study on the effects of personalization and task information on implicit feedback performance", "text": "While Implicit Relevance Feedback (IRF) algorithms exploit users' interactions with information to customize support offered to users of search systems, it is unclear how individual and task differences impact the effectiveness of such algorithms. In this paper we describe a study on the effect on retrieval performance of using additional information about the user and their search tasks when developing IRF algorithms. We tested four algorithms that use document display time to estimate relevance, and tailored the threshold times (i.e., the time distinguishing relevance from non-relevance) to the task, the user, a combination of both, or neither. Interaction logs gathered during a longitudinal naturalistic study of online information-seeking behavior are used as stimuli for the algorithms. The findings show that tailoring display time thresholds based on task information improves IRF algorithm performance, but doing so based on user information worsens performance. This has implications for the development of effective IRF algorithms."}
{"_id": "c35f8f48f748c040c33bbe1f21c794e209341987", "title": "Neural Program Synthesis from Diverse Demonstration Videos", "text": "Interpreting decision making logic in demonstration videos is key to collaborating with and mimicking humans. To empower machines with this ability, we propose a neural program synthesizer that is able to explicitly synthesize underlying programs from behaviorally diverse and visually complicated demonstration videos. We introduce a summarizer module as part of our model to improve the network\u2019s ability to integrate multiple demonstrations varying in behavior. We also employ a multi-task objective to encourage the model to learn meaningful intermediate representations for end-to-end training. We show that our model is able to reliably synthesize underlying programs as well as capture diverse behaviors exhibited in demonstrations. The code is available at https://shaohua0116.github.io/demo2program."}
{"_id": "29caa4af6b719b87ebdd2ed46dabf7fb0a3a8602", "title": "Does the application of constraint-induced movement therapy during acute rehabilitation reduce arm impairment after ischemic stroke?", "text": "BACKGROUND AND PURPOSE\nMotor dysfunction after unilateral deafferentation in primates can be overcome by restraining the unaffected limb. We asked whether a constraint-induced movement (CIM) program could be implemented within 2 weeks after stroke and whether CIM is more effective than traditional upper-extremity (UE) therapies during this period.\n\n\nMETHODS\nTwenty-three persons were enrolled in a pilot randomized, controlled trial that compared CIM with traditional therapies. A blinded observer rated the primary end point, the Action Research Arm Test (ARA). Inclusion criteria were the following: ischemic stroke within 14 days, persistent hemiparesis, evidence of preserved cognitive function, and presence of a protective motor response. Differences between the groups were compared by using Student's t tests, ANCOVA, and Mann-Whitney U: tests.\n\n\nRESULTS\nTwenty subjects completed the 14-day treatment. Two adverse outcomes, a recurrent stroke and a death, occurred in the traditional group; 1 CIM subject met rehabilitation goals and was discharged before completing 14 inpatient days. The CIM treatment group had significantly higher scores on total ARA and pinch subscale scores (P:<0.05). Differences in the mean ARA grip, grasp, and gross movement subscale scores did not reach statistical significance. UE activities of daily living performance was not significantly different between groups, and no subject withdrew because of pain or frustration.\n\n\nCONCLUSIONS\nA clinical trial of CIM therapy during acute rehabilitation is feasible. CIM was associated with less arm impairment at the end of treatment. Long-term studies are needed to determine whether CIM early after stroke is superior to traditional therapies."}
{"_id": "17357530b7aae622162da73d3b796c63b557b3b3", "title": "Controllability of complex networks", "text": "The ultimate proof of our understanding of natural or technological systems is reflected in our ability to control them. Although control theory offers mathematical tools for steering engineered and natural systems towards a desired state, a framework to control complex self-organized systems is lacking. Here we develop analytical tools to study the controllability of an arbitrary complex directed network, identifying the set of driver nodes with time-dependent control that can guide the system\u2019s entire dynamics. We apply these tools to several real networks, finding that the number of driver nodes is determined mainly by the network\u2019s degree distribution. We show that sparse inhomogeneous networks, which emerge in many real complex systems, are the most difficult to control, but that dense and homogeneous networks can be controlled using a few driver nodes. Counterintuitively, we find that in both model and real systems the driver nodes tend to avoid the high-degree nodes."}
{"_id": "61862a8624e4804f268d2d611fc589d74dc7894f", "title": "Evaluation of alternative glyph designs for time series data in a small multiple setting", "text": "We present the results of a controlled experiment to investigate the performance of different temporal glyph designs in a small multiple setting. Analyzing many time series at once is a common yet difficult task in many domains, for example in network monitoring. Several visualization techniques have, thus, been proposed in the literature. Among these, iconic displays or glyphs are an appropriate choice because of their expressiveness and effective use of screen space. Through a controlled experiment, we compare the performance of four glyphs that use different combinations of visual variables to encode two properties of temporal data: a) the position of a data point in time and b) the quantitative value of this data point. Our results show that depending on tasks and data density, the chosen glyphs performed differently. Line Glyphs are generally a good choice for peak and trend detection tasks but radial encodings are more effective for reading values at specific temporal locations. From our qualitative analysis we also contribute implications for designing temporal glyphs for small multiple settings."}
{"_id": "25ee08db14dca641d085584909b551042618b8bf", "title": "Learning to Segment Instances in Videos with Spatial Propagation Network", "text": "We propose a deep learning-based framework for instance-level object segmentation. Our method mainly consists of three steps. First, We train a generic model based on ResNet-101 for foreground/background segmentations. Second, based on this generic model, we fine-tune it to learn instance-level models and segment individual objects by using augmented object annotations in first frames of test videos. To distinguish different instances in the same video, we compute a pixel-level score map for each object from these instance-level models. Each score map indicates the objectness likelihood and is only computed within the foreground mask obtained in the first step. To further refine this per frame score map, we learn a spatial propagation network. This network aims to learn how to propagate a coarse segmentation mask spatially based on the pairwise similarities in each frame. In addition, we apply a filter on the refined score map that aims to recognize the best connected region using spatial and temporal consistencies in the video. Finally, we decide the instance-level object segmentation in each video by comparing score maps of different instances."}
{"_id": "f53fd940972063253f3d5530abd32ee2eabe4bc9", "title": "A Longitudinal Assessment of the Persistence of Twitter Datasets", "text": "With social media datasets being increasingly shared by researchers, it also presents the caveat that those datasets are not always completely replicable. Having to adhere to requirements of platforms like Twitter, researchers cannot release the raw data and instead have to release a list of unique identifiers, which others can then use to recollect the data from the platform themselves. This leads to the problem that subsets of the data may no longer be available, as content can be deleted or user accounts deactivated. To quantify the impact of content deletion in the replicability of datasets in a long term, we perform a longitudinal analysis of the persistence of 30 Twitter datasets, which include over 147 million tweets. Having the original datasets collected between 2012 and 2016, and recollecting them later by using the tweet IDs, we look at four different factors that quantify the extent to which recollected datasets resemble original ones: completeness, representativity, similarity and changingness. Even though the ratio of available tweets keeps decreasing as the dataset gets older, we find that the textual content of the recollected subset is still largely representative of the whole dataset that was originally collected. The representativity of the metadata, however, keeps decreasing over time, both because the dataset shrinks and because certain metadata, such as the users\u2019 number of followers, keeps changing. Our study has important implications for researchers sharing and using publicly shared Twitter datasets in their research."}
{"_id": "86629bfdedcacc4bd9047bb51a1dd024f8a1e609", "title": "Transaction Repair for Multi-Version Concurrency Control", "text": "The optimistic variants of Multi-Version Concurrency Control (MVCC) avoid blocking concurrent transactions at the cost of having a validation phase. Upon failure in the validation phase, the transaction is usually aborted and restarted from scratch. The \"abort and restart\" approach becomes a performance bottleneck for use cases with high contention objects or long running transactions. In addition, restarting from scratch creates a negative feedback loop in the system, because the system incurs additional overhead that may create even more conflicts.\n In this paper, we propose a novel approach for conflict resolution in MVCC for in-memory databases. This low overhead approach summarizes the transaction programs in the form of a dependency graph. The dependency graph also contains the constructs used in the validation phase of the MVCC algorithm. Then, when encountering conflicts among transactions, our mechanism quickly detects the conflict locations in the program and partially re-executes the conflicting transactions. This approach maximizes the reuse of the computations done in the initial execution round, and increases the transaction processing throughput."}
{"_id": "8ff18d710813e5ea50d05ace9f07f48006430671", "title": "Disaggregated water sensing from a single, pressure-based sensor: An extended analysis of HydroSense using staged experiments", "text": "We present an extended analysis of our previous work on the HydroSense technology, which is a low-cost and easily installed single-point sensor of pressure for automatically disaggregating water usage activities in the home (Froehlich et al., 2009 [53]). We expand upon this work by providing a survey of existing and emerging water disaggregation techniques, a more comprehensive description of the theory of operation behind our approach, and an expanded analysis section that includes hot versus coldwater valve usage classification and a comparisonbetween two classification approaches: the template-based matching scheme used in Froehlich et al. (2009) [53] and a new stochastic approach using a Hidden Markov Model. We show that both are successful in identifying valveand fixturelevel water events with greater than 90% accuracies. We conclude with a discussion of the limitations in our experimental methodology and open problems going forward. \u00a9 2010 Elsevier B.V. All rights reserved."}
{"_id": "a72f86022a03207f08906cc4767d6d8e22eb1791", "title": "Estimating and Interpreting The Instantaneous Frequency of a Signal-Part 1 : Fundamentals", "text": "The frequency of a sinusoidal signal is a well defined quantity. However, often in practice, signals are not truly sinusoidal, or even aggregates of sinusoidal components. Nonstationary signals in particular do not lend themselves well to decomposition into sinusoidal components. For such signals, the notion of frequency loses its effectiveness, and one needs to use a parameter which accounts for the time-varying nature of the process. This need has given rise to the idea of instantaneous frequency. The instantaneous frequency (IF) of a signal is a parameter which is often of significant practical importance. In many situations such as seismic, radar, sonar, communications, and biomedical applications, the IF is a good descriptor of some physical phenomenon. This paper discusses the concept of instantaneous frequency, its definitions, and the correspondence between the various mathematical models formulated for representation of IF. The paper also considers the extent to which the IF corresponds to our intuitive expectation of reality. A historical review of the successive attempts to define the IF is presented. Then the relationships between the IF and the group-delay, analytic signal, and bandwidth-time (BT) product are explored, as well as the relationship with time-frequency distributions. Finally, the notions of monocomponent and multicomponent signals, and instantaneous bandwidth are discussed. It is shown that all these notions are well described in the context of the theory presented."}
{"_id": "d4d5a73c021036dd548f5fbe71dbdabcad378e98", "title": "Nonintrusive Load Monitoring and Diagnostics in Power Systems", "text": "This paper describes a transient event classification scheme, system identification techniques, and implementation for use in nonintrusive load monitoring. Together, these techniques form a system that can determine the operating schedule and find parameters of physical models of loads that are connected to an AC or DC power distribution system. The monitoring system requires only off-the-shelf hardware and recognizes individual transients by disaggregating the signal from a minimal number of sensors that are installed at a central location in the distribution system. Implementation details and field tests for AC and DC systems are presented."}
{"_id": "3556c846890dc0dbf6cd15ebdcd8932f1fdef6a2", "title": "Inferring activities from interactions with objects", "text": "A key aspect of pervasive computing is using computers and sensor networks to effectively and unobtrusively infer users' behavior in their environment. This includes inferring which activity users are performing, how they're performing it, and its current stage. Recognizing and recording activities of daily living is a significant problem in elder care. A new paradigm for ADL inferencing leverages radio-frequency-identification technology, data mining, and a probabilistic inference engine to recognize ADLs, based on the objects people use. We propose an approach that addresses these challenges and shows promise in automating some types of ADL monitoring. Our key observation is that the sequence of objects a person uses while performing an ADL robustly characterizes both the ADL's identity and the quality of its execution. So, we have developed Proactive Activity Toolkit (PROACT)."}
{"_id": "3e093d3f7efe5f653ff1daba0f9d95d8081ad225", "title": "NAWMS: nonintrusive autonomous water monitoring system", "text": "Water is nature's most precious resource and growing demand is pushing fresh water supplies to the brink of non-renewability. New technological and social initiatives that enhance conservation and reduce waste are needed. Providing consumers with fine-grained real-time information has yielded benefits in conservation of power and gasoline. Extending this philosophy to water conservation, we introduce a novel water monitoring system, NAWMS, that similarly empowers users.\n The goal of our work is to furnish users with an easy-to-install self-calibrating system that provides information on when, where, and how much water they are using. The system uses wireless vibration sensors attached to pipes and, thus, neither plumbing nor special expertise is necessary for its installation. By implementing a non-intrusive, autonomous, and adaptive system using commodity hardware, we believe it is cost-effective and widely deployable.\n NAWMS makes use of the existing household water flow meter, which is considered accurate, but lacks spatial granularity, and adds vibration sensors on individual water pipes to estimate the water flow to each individual outlet. Compensating for manufacturing, installation, and material variabilities requires calibration of these low cost sensors to achieve a reasonable level of accuracy. We have devised an adaptive auto-calibration procedure, which attempts to solve a two phase linear programming and mixed linear geometric programming problem.\n We show through experiments on a three pipe testbed that such a system is indeed feasible and adapts well to minimize error in the water usage estimate. We report an accuracy, over likely domestic flow-rate scenarios, with long-term stability and a mean absolute error of 7%."}
{"_id": "f7831e7447489ae5ced99784504382f8a79c7a2a", "title": "Face recognition based on the uncorrelated discriminant transformation", "text": "The extraction of discriminant features is the most fundamental and important problem in face recognition. This paper presents a method to extract optimal discriminant features for face images by using the uncorrelated discriminant transformation and KL expansion. Experiments on the ORL database and the NUST603 database have been performed. Experimental results show that the uncorrelated discriminant transformation is superior to the Foley}Sammon discriminant transformation and the new method to extract uncorrelated discriminant features for face images is very e!ective. An error rate of 2.5% is obtained with the experiments on the ORL database. An average error rate of 1.2% is obtained with the experiments on the NUST603 database. Experiments show that by extracting uncorrelated discriminant features, face recognition could be performed with higher accuracy on lower than 16 16 resolution mosaic images. It is suggested that for the uncorrelated discriminant transformation, the optimal face image resolution can be regarded as the resolution m n which makes the dimensionality N\"mn of the original image vector space be larger and closer to the number of known-face classes. 2001 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved."}
{"_id": "5a2bd116f13b1f4aaea94830b9bce934abb44ff6", "title": "Tracking multiple rigid symmetric and non-symmetric objects in real-time using depth data", "text": "In this paper, a robust, real-time object tracking approach capable of dealing with multiple symmetric and non-symmetric objects in a real-time requirement setting is proposed. The approach relies only on depth data to track multiple objects in a dynamic environment and uses random-forest based learning to deal with problems like object occlusion, motion-blur due to camera motion and clutter. We show that the relation between object motion and the corresponding change in its 3D point cloud data can be learned using only 6 random forests. A framework that unites object pose estimation and object pose tracking to efficiently track multiple objects in 3D space is presented. The approach is robust in tracking objects even in presence of motion blur that causes noisy depth data and is capable of real-time performance with 1.8ms per frame. The experimental evaluations demonstrate the performance of the approach against robustness, accuracy and speed and compare the approach with the state of the art. A publicly available dataset with real-world data is also provided for future benchmarking."}
{"_id": "af1b90cd05e4db69602253bc4a8d5286200e13b4", "title": "Smart home care network using sensor fusion and distributed vision-based reasoning", "text": "A wireless sensor network employing multiple sensing and event detection modalities and distributed processing is proposed for smart home monitoring applications. Image sensing and vision-based reasoning are employed to verify and further analyze events reported by other sensors. The system has been developed to address the growing application domain in caregiving to the elderly and persons in need of monitored living, who care to live independently while enjoying the assurance of timely access to caregivers when needed. An example of sensed events is the accidental fall of the person under care. A wireless badge node acts as a bridge between the user and the network. The badge node provides user-centric event sensing functions such as detecting falls, and also provides a voice communication channel between the user and the caregiving center when the system detects an alert and dials the center. The voice connection is carried over an IEEE 802.15.4 radio link between the user badge and another node in the network that acts as a modem. Using signal strength measurements, the network nodes keep track of the approximate location of the user in the monitoring environment.The network also includes wall-mounted image sensor nodes, which are triggered upon detection of a fall to analyze their field-of-view and provide the caregiving center with further information about the user 's status. A description of the developed network and several examples of the vision-based reasoning algorithm are presented in the paper."}
{"_id": "5cfc12273052d2661a3b29a24c9d5ca8901de98a", "title": "A viability theory for digital businesses: Exploring the evolutionary changes of revenue mechanisms to support managerial decisions", "text": "The ongoing integration of Information Technology (IT) into various areas of our lives has led to a plethora of digital products and services. To survive competition in the long run, these offerings not only have to keep up with constant technological developments, but also have to adapt from a business point of view. Managers of these digital businesses have to especially focus on the design and evolution of their business\u2019 revenue mechanisms to ensure the viability of their offerings. The related decisions are not trivial, as managers have to be aware of the relevant contextual factors and have to react quickly to changes in the environment. This paper proposes a viability theory for digital businesses described by 17 propositions that may guide managers in the design of revenue mechanisms and thereby support the evolution as well as the viability of a digital business."}
{"_id": "5cbfe46f4b026f8dee4afb1e788236b3fdf08b81", "title": "Efficient Zero-Knowledge Contingent Payments in Cryptocurrencies Without Scripts", "text": "One of the most promising innovations offered by the cryptographic currencies (like Bitcoin) are the so-called smart contracts, which can be viewed as financial agreements between mutually distrusting participants. Their execution is enforced by the mechanics of the currency, and typically has monetary consequences for the parties. The rules of these contracts are written in the form of so-called \u201cscripts\u201d, which are pieces of code in some \u201cscripting language\u201d. Although smart contracts are believed to have a huge potential, for the moment they are not widely used in practice. In particular, most of Bitcoin miners allow only to post standard transactions (i.e.: those without the non-trivial scripts) on the blockchain. As a result, it is currently very hard to create non-trivial smart contracts in Bitcoin. Motivated by this, we address the following question: \u201cis it possible to create non-trivial efficient smart contracts using the standard transactions only?\u201d We answer this question affirmatively, by constructing efficient Zero-Knowledge Contingent Payment protocol for a large class of NP-relations. This includes the relations for which efficient sigma protocols exist. In particular, our protocol can be used to sell a factorization (p, q) of an RSA modulus n = pq, which is an example that we implemented and tested its efficiency in practice. As another example of the \u201csmart contract without scripts\u201d we show how our techniques can be used to implement the contract called \u201ctrading across chains\u201d."}
{"_id": "cc825b00ce8643ff9c13a4b081c31c459d823ea7", "title": "Survey of Knowledge Representation and Reasoning Systems", "text": "As part of the information fusion task we wish to automatically fuse information derived from the text extraction process with data from a structured knowledge base. This process will involve resolving, aggregating, integrating and abstracting information via the methodologies of Knowledge Representation and Reasoning into a single comprehensive description of an individual or event. This report surveys the key principles underlying research in the field of Knowledge Representation and Reasoning. It represents an initial step in deciding upon a Knowledge Representation and Reasoning system for our information fusion task. APPROVED FOR PUBLIC RELEASE"}
{"_id": "26f14da587d689c8e15d78840a97204d44225619", "title": "Proving the Incompatibility of Efficiency and Strategyproofness via SMT Solving", "text": "Two important requirements when aggregating the preferences of multiple agents are that the outcome should be economically efficient and the aggregation mechanism should not be manipulable. In this article, we provide a computer-aided proof of a sweeping impossibility using these two conditions for randomized aggregation mechanisms. More precisely, we show that every efficient aggregation mechanism can be manipulated for all expected utility representations of the agents\u2019 preferences. This settles an open problem and strengthens several existing theorems, including statements that were shown within the special domain of assignment. Our proof is obtained by formulating the claim as a satisfiability problem over predicates from real-valued arithmetic, which is then checked using a satisfiability modulo theories (SMT) solver. To verify the correctness of the result, a minimal unsatisfiable set of constraints returned by the SMT solver was translated back into a proof in higher-order logic, which was automatically verified by an interactive theorem prover. To the best of our knowledge, this is the first application of SMT solvers in computational social choice."}
{"_id": "fdec38019625fbcffc9debb804544cce6630c3ac", "title": "Measurement of return on marketing investment : A conceptual framework and the future of marketing metrics", "text": "There is growing recognition that firms in the contemporary business environment derive substantial and sustained competitive advantage from a bundle of intangible assets such as knowledge, networks and innovative capability. Measuring the return on such intangible assets has now become imperative for managers. The present manuscript focuses on the measurement of the return on marketing. We first discuss the conditions that make this task a high managerial priority. We then discuss measurement efforts to date, both in general management and marketing. We then offer a conceptual framework that places measurement efforts in a historical perspective. We conclude with a discussion on where the future of marketing metrics lies. \u00a9 2006 Elsevier Inc. All rights reserved."}
{"_id": "69a5a7a546e5091ccaef8a566fe218163b025933", "title": "Proactive avoidance of moving obstacles for a service robot utilizing a behavior-based control", "text": "A main challenge in the application of service robotics is safe and reliable navigation of robots in human everyday environments. Supermarkets, which are chosen here as an example, pose a challenging scenario because they usually have a cluttered and nested character. The robot has to avoid collisions with static and even with moving obstacles while interacting with nearby humans or a dedicated user respectively. This paper presents a hierarchical approach for the proactive avoidance of moving objects as it is used on the robot shopping trolley InBOT. The behavior-based control (bbc) of InBOT is extended by a reflex and a reactive behavior to ensure adequate reaction times when confronted with a possible collision. On top of the bbc a spatio-temporal planner is situated which is able to predict environmental changes and therefore can generate a safe movement sequence accordingly."}
{"_id": "e97312975b44bd54d3f398be1079f3c18cdb46be", "title": "Mobile app conceptual browser: Online marketplaces information extraction", "text": "Online marketplaces are e-commerce websites where thousands of products are provided by multiple third parties. There are dozens of these differently structured marketplaces that need to be visited by the end users to reach their targets. This searching process consumes a lot of time and effort; moreover it negatively affects the user experience. In this paper, extensive analysis and evaluation of the existing e-marketplaces are performed to improve the end-users experience through a Mobile App. The main goal of this study is to find a solution that is capable of integrating multiple heterogeneous hidden data sources and unify the received responses into one single, structured and homogeneous source. Furthermore, the user can easily choose the desired product or reformulate the query through the interface. The proposed Android Mobile App is based on the multi-level conceptual analysis and modeling discipline, in which, data are analyzed in a way that helps in discovering the main concepts of any unknown domain captured from the hidden web. These concepts discovered through information extraction are then structured into a tree-based interface for easy navigation and query reformulation. The application has been evaluated through substantial experiments and compared to other existing mobile applications. The results showed that analyzing the query results and re-structuring the output before displaying to the end-user in a conceptual multilevel mechanism are reasonably effective in terms of number of clicks, time taken and number of navigation screens. Based on the proposed intelligent application, the interface is minimized to only two navigation screens, and the time needed to browse products from multiple marketplaces is kept reasonable in order to reach the target product."}
{"_id": "ed65ead203f773894833c1d039cc0c07f2d5a342", "title": "CRISPR/Cas9 for genome editing: progress, implications and challenges.", "text": "Clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated (Cas) protein 9 system provides a robust and multiplexable genome editing tool, enabling researchers to precisely manipulate specific genomic elements, and facilitating the elucidation of target gene function in biology and diseases. CRISPR/Cas9 comprises of a nonspecific Cas9 nuclease and a set of programmable sequence-specific CRISPR RNA (crRNA), which can guide Cas9 to cleave DNA and generate double-strand breaks at target sites. Subsequent cellular DNA repair process leads to desired insertions, deletions or substitutions at target sites. The specificity of CRISPR/Cas9-mediated DNA cleavage requires target sequences matching crRNA and a protospacer adjacent motif locating at downstream of target sequences. Here, we review the molecular mechanism, applications and challenges of CRISPR/Cas9-mediated genome editing and clinical therapeutic potential of CRISPR/Cas9 in future."}
{"_id": "5ae5971d000b2be9c4cd883ca59a2f26af07887a", "title": "Cross Domain Regularization for Neural Ranking Models using Adversarial Learning", "text": "Unlike traditional learning to rank models that depend on hand-crafted features, neural representation learning models learn higher level features for the ranking task by training on large datasets. Their ability to learn new features directly from the data, however, may come at a price. Without any special supervision, these models learn relationships that may hold only in the domain from which the training data is sampled, and generalize poorly to domains not observed during training. We study the effectiveness of adversarial learning as a cross domain regularizer in the context of the ranking task. We use an adversarial discriminator and train our neural ranking model on a small set of domains. The discriminator provides a negative feedback signal to discourage the model from learning domain specific representations. Our experiments show consistently better performance on held out domains in the presence of the adversarial discriminator---sometimes up to 30% on precision$@1$."}
{"_id": "b77e8e7c5e4069a923727a96060aaac7773336eb", "title": "Occlusion-Free Visual Servoing for the Shared Autonomy Teleoperation of Dual-Arm Robots", "text": "Visual servoing in telerobotics provides information to the operator about the remote location and assists in the task execution to reduce stress on the user. Occlusions in this scenario can, on the one hand, lead to the visual servoing failure, and, on the other hand, degrade the user experience and navigation performance due to the obstructed vision of elements of interest. In this letter, we consider a teleoperation system composed of two robot arms where one is remotely operated, while the other is autonomous and equipped with an eye-in-hand camera sensor. We propose a reactive unified convex optimization based controller that allows the execution of occlusion-free tasks by autonomously adjusting the camera robot, so as to keep the teleoperated one in the field of view. The occlusion avoidance is formulated inside the optimization as a constraint in the image space, it is formally derived from a collision avoidance analogy and made robust against noisy measurements and dynamic environment. A state machine is used to switch the control policy whenever an occlusion might occur. We validate our approach with experiments on a 14 d.o.f. dual-arm ABB YuMi robot equipped with an red, green, blue (RGB) camera and teleoperated by a 3 d.o.f. Novint Falcon device."}
{"_id": "0b0d195c2524538c1338a823a2765af478c113ca", "title": "Large-scale brain networks in cognition: emerging methods and principles", "text": "An understanding of how the human brain produces cognition ultimately depends on knowledge of large-scale brain organization. Although it has long been assumed that cognitive functions are attributable to the isolated operations of single brain areas, we demonstrate that the weight of evidence has now shifted in support of the view that cognition results from the dynamic interactions of distributed brain areas operating in large-scale networks. We review current research on structural and functional brain organization, and argue that the emerging science of large-scale brain networks provides a coherent framework for understanding of cognition. Critically, this framework allows a principled exploration of how cognitive functions emerge from, and are constrained by, core structural and functional networks of the brain."}
{"_id": "63cb17559fe233b462e79cc1078303d82078b97f", "title": "The Rosenberg Self-Esteem Scale: translation and validation in university students.", "text": "The aim of this study was to translate into Spanish and to validate the Rosenberg Self-Esteem Scale (RSES), completed by 420 university students. Confirmatory factor analysis revealed that the model that best fit the data, both in the total sample and in the male and female subsamples, was the one-factor structure with method effects associated with positively worded items. The results indicated high, positive correlations between self-esteem and the five dimensions of self-concept. The scale showed satisfactory levels of internal consistency and temporal stability over a four-week period. Lastly, gender differences were obtained. These findings support the use of the RSES for the assessment of self-esteem in higher education."}
{"_id": "fb8610ffd8351c59d3b2eb44a70d1afbb0712603", "title": "Teaching Computational Thinking Using Agile Software Engineering Methods: A Framework for Middle Schools", "text": "Computational Thinking (CT) has been recognized as one of the fundamental skills that all graduates should acquire. For this reason, motivational concerns need to be addressed at an early age of a child, and reaching students who do not consider themselves candidates for science, technology, engineering, and mathematics disciplines is important as well if the broadest audience possible is to be engaged. This article describes a framework for teaching and assessing CT in the context of K-12 education. The framework is based on Agile software engineering methods, which rely on a set of principles and practices that can be mapped to the activities of CT. The article presents as well the results of an experiment applying this framework in two sixth-grade classes, with 42 participants in total. The results show that Agile software engineering methods are effective at teaching CT in middle schools, after the addition of some tasks to allow students to explore, project, and experience the potential product before using the software tools at hand. Moreover, according to the teachers\u2019 feedback, the students reached all the educational objectives of the topics involved in the multidisciplinary activities. This result can be taken as an indicator that it is possible to use computing as a medium for teaching other subjects, besides computer science."}
{"_id": "db8196da3890be9b95307dc67833938a98eedde8", "title": "1 Pavement distress detection methods : a review 2", "text": "The road pavement condition affects safety and comfort, traffic and travel times, vehicles 8 operating cost and emission levels. In order to optimize the road pavement management and 9 guarantee satisfactory mobility conditions for all the road users, the Pavement Management 10 System (PMS) is an effective tool for the road manager. An effective PMS requires the availability of 11 pavement distress data, the possibility of data maintenance and updating, in order to evaluate the 12 best maintenance program. In the last decade, many researches have been focused on pavement 13 distress detection, using a huge variety of technological solutions for both data collection and 14 information extraction and qualification. This paper presents a literature review of data collection 15 systems and processing approach aimed at the pavement condition evaluation. Both commercial 16 solutions and research approaches have been included. The main goal is to draw a framework of 17 the actual existing solutions, considering them from a different point of view in order to identify 18 the most suitable for further research and technical improvement, also considering the automated 19 and semi-automated emerging technologies. An important attempt is to evaluate the aptness of the 20 data collection and extraction to the type of distress, considering the distress detection, 21 classification and quantification phases of the procedure. 22"}
{"_id": "2f40da5353a68b45d48795e952d41ee7a3657bfe", "title": "A Unified Local and Global Model for Discourse Coherence", "text": "We present a model for discourse coherence which combines the local entitybased approach of (Barzilay and Lapata, 2005) and the HMM-based content model of (Barzilay and Lee, 2004). Unlike the mixture model of (Soricut and Marcu, 2006), we learn local and global features jointly, providing a better theoretical explanation of how they are useful. As the local component of our model we adapt (Barzilay and Lapata, 2005) by relaxing independence assumptions so that it is effective when estimated generatively. Our model performs the ordering task competitively with (Soricut and Marcu, 2006), and significantly better than either of the models it is based on."}
{"_id": "211acebdb02bfeb65beace99de1ef889be561702", "title": "A knowledge-based table recognition method for Chinese bank statement images", "text": "Automatic processing of large volume scanned Chinese bank statements is a urgent demand recently. Conventional methods can not well handle the following challenges of this problem: various layout styles, noises, and especially requirement of fast speed for large Chinese character set. This paper proposes a knowledge based table recognition method to meet fast speed requirement with good accuracy. Two kinds of knowledge are utilized to accelerate the identification of digit columns and the cell recognition: i) geometric knowledge about column alignment and quasi equal digit width, and ii) semantic knowledge about prior format based on the results from an optical character recognition (OCR) engine of digits. Experimental results on a real dataset show the effectiveness of our method."}
{"_id": "d0f2b9d90aca10365944b4aaa4654b311adce800", "title": "Artificial Neural Network Method for Solution of Boundary Value Problems With Exact Satisfaction of Arbitrary Boundary Conditions", "text": "A method for solving boundary value problems (BVPs) is introduced using artificial neural networks (ANNs) for irregular domain boundaries with mixed Dirichlet/Neumann boundary conditions (BCs). The approximate ANN solution automatically satisfies BCs at all stages of training, including before training commences. This method is simpler than other ANN methods for solving BVPs due to its unconstrained nature and because automatic satisfaction of Dirichlet BCs provides a good starting approximate solution for significant portions of the domain. Automatic satisfaction of BCs is accomplished by the introduction of an innovative length factor. Several examples of BVP solution are presented for both linear and nonlinear differential equations in two and three dimensions. Error norms in the approximate solution on the order of 10-4 to 10-5 are reported for all example problems."}
{"_id": "e193ed3a76be169abbdf5b74372290ce6fe11caf", "title": "Promoting tourism destinations : A strategic marketing approach", "text": "Introduction This paper outlines the main topics of a report prepared for the Tourism Promotion Committee (TPC) in Heraklion District, Crete. This body is responsible for coordinating marketing activities and promoting its area. Its membership is composed of the main public and private sector organizations involved in providing tourism services in Crete. The report was compiled for TPC by the first author, who has been closely involved in the development of tourism marketing and planning on Crete for over ten years. It was commissioned with the aim of improving the destination\u2019s marketing effectiveness and efficiency. The task assigned was to provide a report (i) reviewing the marketing activities undertaken by the TPC; (ii) providing the appropriate approach to be adopted and implemented in promoting Heraklion as tourism destination; and (iii) giving a set of recommendations for key areas requiring improvement. Marios D. Soteriades and Vasiliki A. Avgeli"}
{"_id": "caadc498ba9fad769666ffce7d44312ac66f5366", "title": "Zoomorphic design", "text": "Zoomorphic shapes are man-made shapes that possess the form or appearance of an animal. They have desirable aesthetic properties, but are difficult to create using conventional modeling tools. We present a method for creating zoomorphic shapes by merging a man-made shape and an animal shape. To identify a pair of shapes that are suitable for merging, we use an efficient graph kernel based technique. We formulate the merging process as a continuous optimization problem where the two shapes are deformed jointly to minimize an energy function combining several design factors. The modeler can adjust the weighting between these factors to attain high-level control over the final shape produced. A novel technique ensures that the zoomorphic shape does not violate the design restrictions of the man-made shape. We demonstrate the versatility and effectiveness of our approach by generating a wide variety of zoomorphic shapes."}
{"_id": "0cfd5a7c6610e0eff2d277b419808edb32d93b78", "title": "Differential Cryptanalysis of the Data Encryption Standard", "text": ""}
{"_id": "46e596b6a7f9d91e247a21d0a6709b280a4157a8", "title": "A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Application", "text": "This paper discusses some aspects of testing random and pseudorandom number generators. A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications (NIST Special Publication) is presented. This paper also demonstrates the software implementation of the test suite with output protocols and presents experiences from testing some random and pseudorandom generators."}
{"_id": "59bb26443e5dbe1f85c5880aa8aac8170b89212a", "title": "A New Iterative Secret Key Cryptosystem Based on Reversible and Irreversible Cellular Automata", "text": "Many cryptosystems have been released to secure electronic data on internet. Some data are very critical to be transmitted as plaintext. Thus, to ensure the data confidentiality and integrity, a list of cryptosystems have been elaborated. The most important ones are divided into two categories: symmetric algorithms encrypting and decrypting data in blocks using a single secret key; and asymmetric algorithms using public keys to cipher texts and secret keys to reconstruct plaintexts. The of the present work is the design and implement a new secret key cryptosystem encrypting and decrypting data in blocks according to a number of iterations. Each plaintext block is encrypted using cellular automata and a list of sub keys deduced from a secret key through cellular automata. To demonstrate the feasibility, the proposed scheme is compared with AES algorithm, the well-known symmetric block cipher. We prove that our algorithm resists against statistical attacks and it is faster than AES-256 achieving good confusion and diffusion tests."}
{"_id": "71351a22741f385894f481604425f329c0bfb109", "title": "Differential Cryptanalysis of Feal and N-Hash", "text": "In 1,2] we introduced the notion of diierential cryptanalysis and described its application to DESS11] and several of its variants. In this paper we show the applicability of diierential cryptanalysis to the Feal family of encryption algorithms and to the N-Hash hash function. In addition, we show how to transform diierential cryptanalytic chosen plaintext attacks into known plaintext attacks."}
{"_id": "73fe7afb8f42e3df80b2c4d9ebdfe6e9e26b2207", "title": "A new kind of science", "text": "The distinctive idea is easy to state, though it needs some examples to appreciate. It is that simple rules can generate complex very complex outcomes when they are repeatedly applied. Isolated examples of this phenomenon have been known for centuries. For example, the number p has a simple definition the ratio of a circle's circumference to its diameter and there is a quite simple formula to calculate its digits. The result begins: 3.14159265358979323846264338327950288419716939937510582097494459230781640 6286208998628034825342117068..."}
{"_id": "5a19267b8fdf02e7523a07508f2be693159da1a2", "title": "Fixed-point algorithms for learning determinantal point processes", "text": "Determinantal point processes (DPPs) offer an elegant tool for encoding probabilities over subsets of a ground set. Discrete DPPs are parametrized by a positive semidefinite matrix (called the DPP kernel), and estimating this kernel is key to learning DPPs from observed data. We consider the task of learning the DPP kernel, and develop for it a surprisingly simple yet effective new algorithm. Our algorithm offers the following benefits over previous approaches: (a) it is much simpler; (b) it yields equally good and sometimes even better local maxima; and (c) it runs an order of magnitude faster on large problems. We present experimental results on both real and simulated data to illustrate the numerical performance of our technique."}
{"_id": "2c91447c35fe2d4426b6661b8c8c97f439f3172e", "title": "Stock Market Prediction based on Time Series Data and Market Sentiment", "text": "In this project, we would like to create a system that predicts stock market movements on a given day, based on time series data and market sentiment analysis. We will use Twitter data on that day to predict the market sentiment and S&P 500 values to perform analysis on historical data."}
{"_id": "5c2908b48b29cd5411c60e730b5a9fa569808c28", "title": "An Automated Approach for Digital Forensic Analysis of Heterogeneous Big Data", "text": "The major challenges with big data examination and analysis are volume, complex interdependence across content, and heterogeneity. The examination and analysis phases are considered essential to a digital forensics process. However, traditional techniques for the forensic investigation use one or more forensic tools to examine and analyse each resource. In addition, when multiple resources are included in one case, there is an inability to cross-correlate findings which often leads to inefficiencies in processing and identifying evidence. Furthermore, most current forensics tools cannot cope with large volumes of data. This paper develops a novel framework for digital forensic analysis of heterogeneous big data. The framework mainly focuses upon the investigations of three core issues: data volume, heterogeneous data and the investigators cognitive load in understanding the relationships between artefacts. The proposed approach focuses upon the use of metadata to solve the data volume problem, semantic web ontologies to solve the heterogeneous data sources and artificial intelligence models to support the automated identification and correlation of artefacts to reduce the burden placed upon the investigator to understand the nature and relationship of the artefacts."}
{"_id": "0d0894d5c3d812dbc68a89e224488abfaad2a5c2", "title": "Application of Katharine Kolkaba Comfort theory to nursing care of patient", "text": "aring is in practice since existence but caring as profession named Nursing was scientifically Coined by the founder of modern nursing who in her writings \u201cwhat it is and what it is not\u201ddeclared the boundaries along with its Constitutional elements. Those elements with appropriate association not only revealed the nature of caring dogma it also shows that it is doing something base on thinking something or in simple words this is practice rely on theory. According to McCrae, (2011) For the sake of assuming valuable status ofprofession in society nursing struggle to legitimate its position by generation and application of theory. Nursing literature is rich regarding theory practice association. although constant body of argument, counter argument and lack of agreed conclusion regarding nursing theory creates a bit confusion among nurses but ideally nursing theory should set the code of every day practice of nurses(Giltinane, 2013) as lack of a valued unique contribution nurses make to health care based on a specialized body of knowledge will portrayed nursing role as subordinate to doctor (Mccrae, 2011) Eminent nurses\u2019 theorist, scientist, Researchers, Clinicians, Educators and intellectual intelligentsia aid evidence to different perspectives of theory practice link. Some unanimous convincing narratives argue that the relationship is reciprocal and according to (Mckenna, 2005) Nursing theory takes its origin from practice, than reverse to practice for testing and at the end become set to guide the practice."}
{"_id": "a9fbb5156e6575ad60f9e0470683b4ea2023aba8", "title": "The impact of social and conventional media on firm equity value: A sentiment analysis approach", "text": "Available online 30 December 2012"}
{"_id": "5ac5c3b306ddb81e464642690f83b559522f9339", "title": "Applying autonomous sensor systems in logistics \u2014 Combining sensor networks , RFIDs and software agents", "text": "New sensor, communication and software technologies are used to broaden the facilities of tracing and tracing systems for food transports. n embedded assessing unit detects from sensor data collected by a wireless network potential risks for the freight quality. The estimation of he current maturing state of agricultural products will be supported by measurements of the gaseous hormone ethylene as an indicator for the ipening processes. A miniaturized high-resolution gas chromatography is under construction. The system autonomously configures itself to a roduct specific supervision task based on data scanned by an RFID reader during freight loading. Mobile software agents accompany the freight long the supply chain. They pre-process the vast sensor data and submit only substantial changes to the freight owner. 2006 Elsevier B.V. All rights reserved."}
{"_id": "b3ceae83a098807d543ae1b4f95cd0f9d1749d9c", "title": "Data Collection and Wireless Communication in Internet of Things (IoT) Using Economic Analysis and Pricing Models: A Survey", "text": "This paper provides a state-of-the-art literature review on economic analysis and pricing models for data collection and wireless communication in Internet of Things (IoT). Wireless sensor networks (WSNs) are the main components of IoT which collect data from the environment and transmit the data to the sink nodes. For long service time and low maintenance cost, WSNs require adaptive and robust designs to address many issues, e.g., data collection, topology formation, packet forwarding, resource and power optimization, coverage optimization, efficient task allocation, and security. For these issues, sensors have to make optimal decisions from current capabilities and available strategies to achieve desirable goals. This paper reviews numerous applications of the economic and pricing models, known as intelligent rational decision-making methods, to develop adaptive algorithms and protocols for WSNs. Besides, we survey a variety of pricing strategies in providing incentives for phone users in crowdsensing applications to contribute their sensing data. Furthermore, we consider the use of some pricing models in machine-to-machine (M2M) communication. Finally, we highlight some important open research issues as well as future research directions of applying economic and pricing models to IoT."}
{"_id": "9d9a7f1bbe627e4b4d3acaf46fe6f5cf87b1bfc5", "title": "Near-Duplicates Detection and Elimination Based on Web Provenance for Effective Web Search", "text": "Users of World Wide Web utilize search engines for information retrieval in web as search engines play a vital role in finding information on the web. However, the performance of a web search is greatly affected by flooding of search results with information that is redundant in nature i.e., existence of nearduplicates. Such near-duplicates holdup the other promising results to the users. Many of these near-duplicates are from distrusted websites and/or authors who host information on web. Such nearduplicates may be eliminated by means of Provenance. Thus, this paper proposes a novel approach to identify such near-duplicates based on provenance. In this approach a provenance model has been built using web pages which are the search results returned by existing search engine. The proposed model combines both content based and trust based factors for classifying the results as original or near-duplicates. Keywords\u2014 Web search, Near-duplicates, Provenance, Semantics,"}
{"_id": "441f7702906764c83cfc5f386642e26a02ecc877", "title": "Schemaless NoSQL Data Stores - Object-NoSQL Mappers to the Rescue?", "text": "NoSQL data stores are becoming increasingly popular in application development. These systems are attractive for developers due to their ability to handle large volumes of data, as well as data with a high degree of structural variety. Typically, NoSQL data stores are accessed programmatically. Due to the imminent lack of standardized query languages, building applications against the native interfaces of NoSQL data stores creates an unfortunate technical lock-in. To re-gain platform independence, developers turn to object mapper libraries as an additional level of abstraction when accessing NoSQL data stores. The current market for Java object mappers that support NoSQL data stores is still volatile, with commercial and open source products competing for adoption. In this paper, we give an overview on the state-of-the-art in Object-Relational Mappers that can handle also NoSQL data stores, as well as dedicated Object-NoSQL Mappers. We are able to show that choosing the right object mapper library is a strategic decision with far reaching consequences: Current mappers diverge in the NoSQL data stores that they support, in their features, their robustness, their truthfulness to the documentation and query standards, and ultimately, in the runtime overhead that they introduce. Especially in web development, runtime overhead is a crucial aspect contributing to the application latency, and ultimately, the user experience. By shedding light on the current market, we intend to provide software architects with the necessary information to make informed decisions."}
{"_id": "28ba3bd535327a08ffca088246447511599a7dfe", "title": "Efficient Image Dehazing with Boundary Constraint and Contextual Regularization", "text": "Images captured in foggy weather conditions often suffer from bad visibility. In this paper, we propose an efficient regularization method to remove hazes from a single input image. Our method benefits much from an exploration on the inherent boundary constraint on the transmission function. This constraint, combined with a weighted L1-norm based contextual regularization, is modeled into an optimization problem to estimate the unknown scene transmission. A quite efficient algorithm based on variable splitting is also presented to solve the problem. The proposed method requires only a few general assumptions and can restore a high-quality haze-free image with faithful colors and fine image details. Experimental results on a variety of haze images demonstrate the effectiveness and efficiency of the proposed method."}
{"_id": "7af536c1ec6f8d7d22427b1f6766644e1a0de41f", "title": "Trust in Online Shopping: The Korean Student Experience", "text": "E-commerce has become an important part of business. In South Korea, the market size of online shopping malls was 13,460 billion Korean Won in 2006, and this figure keeps growing. Thus, gaining loyal customers has become a rising concern. In this study, we adopted Lewicki and Bunker's three different types of trust, namely, calculus-based trust, knowledge-based trust, and identification-based trust, in order to investigate their hierarchical relationships in e-commerce and their impacts on customer satisfaction and loyalty. A total of 104 responses from university students were analyzed to test the proposed model and its hypotheses using PLS. The results showed that hierarchical relationships between different types of trust exist in the online environment, and among them, knowledge-based trust has the strongest impact on customer satisfaction. This finding implied that practitioners should focus on developing an appropriate online strategy in terms of how to build up trust-based relationships with online customers."}
{"_id": "1c0bbb7c7fb764d8a7f4c46e85af36c1b8df3572", "title": "Music-based respiratory biofeedback in visually-demanding tasks", "text": "Biofeedback tools generally use visualizations to display physiological information to the user. As such, these tools are incompatible with visually demanding tasks such as driving. While auditory or haptic biofeedback may be used in these cases, the additional sensory channels can increase workload or act as a nuisance to the user. A number of studies, however, have shown that music can improve mood and concentration, while also reduce aggression and boredom. Here, we propose an intervention that combines the benefits of biofeedback and music to help users regulate their stress response while performing a visual task (driving a car simulator). Our approach encourages slow breathing by adjusting the quality of the music in response to the user\u2019s breathing rate. We evaluate the intervention on a design with music and auditory biofeedback as independent variables. Our results indicate that our music-biofeedback intervention leads to lower arousal (reduced electrodermal activity and increased heart rate variability) than music alone, auditory biofeedback alone and a control condition."}
{"_id": "45af864f2f9991eb9f9545fdec45ccf2adb5f0fa", "title": "Graph databases: A survey", "text": "In the era of big data, data analytics, business intelligence database management plays a vital role from technical business management and research point of view. Over many decades, database management has been a topic of active research. There are different type of database management system have been proposed over a period of time but Relational Database Management System (RDBMS) is the one which has been most popularly used in academic research as well as industrial setup[1]. In recent years, graph databases regained interest among the researchers for certain obvious reasons. One of the most important reasons for such an interest in a graph database is because of the inherent property of graphs as a graph structure. Graphs are present everywhere in the data structure, which represents the strong connectivity within the data. Most of the graph database models are defined in which data-structure for schema and instances are modeled as graph or generalization of a graph. In such graph database models, data manipulations are expressed by graph-oriented operations and type constructors [9]. Now days, most of the real world applications can be modeled as a graph and one of the best real world examples is social or biological network. This paper gives an overview of the different type of graph databases, applications, and comparison between their models based on some properties."}
{"_id": "ec6ae4d512ab4c43f6ad688a2973368101909610", "title": "Fundamentals of Wearable Computers and Augmented Reality, Second Edition", "text": "The best ebooks about Fundamentals Of Wearable Computers And Augmented Reality Second Edition that you can get for free here by download this Fundamentals Of Wearable Computers And Augmented Reality Second Edition and save to your desktop. This ebooks is under topic such as fundamentals of wearable computers and augmented reality fundamentals of wearable computers and augmented reality fundamentals of wearable computers and augmented reality fundamentals of wearable computers and augmented reality fundamentals of wearable computers and augmented reality fundamentals of wearable computers and augmented reality fundamentals of wearable computers and augmented reality 11 location-based mixed and augmented reality storytelling fundamentals of wearable computers and augmented reality forum book review mit press journals the vietnam war revised 2nd edition ebook | slangsurfing dicom to 3d holograms: use case for augmented reality in pensees de j j rousseau citoyen de geneve v1 2 1785 ebook click here to access this book : free download amendment xivequal protection constitutional amendments solutions manual to fundamentals of engineering document about distant mirrors america as a foreign regeneration of peasants slangsurfing the cell ebook | esmajorelojes il buddhismo contemporaneo italian edition ebook | slanggeek proceedings of a court of inquiryconvened at the navy answer coming soonmore blog postings on arts letters the travel and tropical medicine manual 4e ebook fundamentals of sql server 2012 replication"}
{"_id": "e80f2032df4a1257ad350d73259a05f89c7123c0", "title": "The State of Public Infrastructure-as-a-Service Cloud Security", "text": "The public Infrastructure-as-a-Service (IaaS) cloud industry has reached a critical mass in the past few years, with many cloud service providers fielding competing services. Despite the competition, we find some of the security mechanisms offered by the services to be similar, indicating that the cloud industry has established a number of \u201cbest-practices,\u201d while other security mechanisms vary widely, indicating that there is also still room for innovation and experimentation. We investigate these differences and possible underlying reasons for it. We also contrast the security mechanisms offered by public IaaS cloud offerings and with security mechanisms proposed by academia over the same period. Finally, we speculate on how industry and academia might work together to solve the pressing security problems in public IaaS clouds going forward."}
{"_id": "87a4faff907a55953e10cee1603a6e438bdf228e", "title": "Researching Mental Health Disorders in the Era of Social Media: Systematic Review", "text": "BACKGROUND\nMental illness is quickly becoming one of the most prevalent public health problems worldwide. Social network platforms, where users can express their emotions, feelings, and thoughts, are a valuable source of data for researching mental health, and techniques based on machine learning are increasingly used for this purpose.\n\n\nOBJECTIVE\nThe objective of this review was to explore the scope and limits of cutting-edge techniques that researchers are using for predictive analytics in mental health and to review associated issues, such as ethical concerns, in this area of research.\n\n\nMETHODS\nWe performed a systematic literature review in March 2017, using keywords to search articles on data mining of social network data in the context of common mental health disorders, published between 2010 and March 8, 2017 in medical and computer science journals.\n\n\nRESULTS\nThe initial search returned a total of 5386 articles. Following a careful analysis of the titles, abstracts, and main texts, we selected 48 articles for review. We coded the articles according to key characteristics, techniques used for data collection, data preprocessing, feature extraction, feature selection, model construction, and model verification. The most common analytical method was text analysis, with several studies using different flavors of image analysis and social interaction graph analysis.\n\n\nCONCLUSIONS\nDespite an increasing number of studies investigating mental health issues using social network data, some common problems persist. Assembling large, high-quality datasets of social media users with mental disorder is problematic, not only due to biases associated with the collection methods, but also with regard to managing consent and selecting appropriate analytics techniques."}
{"_id": "5ac879e0377007baa7dc3b5cc53d4f483c875b6e", "title": "Advances in geometric modeling and feature extraction on pots", "text": "This paper outlines progress on the 3DK Knowledge and Distributed Intelligence project undertaken by the Partnership for Research In Stereo Modeling (PRISM) at Arizona State University. Three of the six 3DK pilot projects of this National Science Foundation funded project involve archaeological and biological material, namely ceramic vessels, lithic artifacts, and bones. These projects are: (1) \u201c3D Morphology of Ceramic Vessels\u201d which aims to learn about vessel uniformity and proportionality as indicators of developing craft specialization and complex social organization among prehistoric cultures; (2) \u201cLithic Refitting\u201d which intends to develop the algorithms required to (partially) automate the refitting process through 3D scanning and surface modeling; and finally (3) \u201c3D Topography of Joint Surfaces\u201d which investigates the ability of human ancestors to make tools and walk upright by developing software that will automate segmentation of osteological features and quantify the surface area and curvature of joint surfaces and the congruency between reciprocal joint surfaces, allowing for the development of biomechanical models. This paper discusses the links between these three unique projects and explores the underlying research aspects common to all: geometric modeling, feature recognition, and the development of a database structure aimed at making 3D models of these artifacts available online for query."}
{"_id": "0b99db47b233e2ddb743a82c9a5cc755c8aedb84", "title": "A Comparison of Knives for Bread Slicing", "text": "Vertical partitioning is a crucial step in physical database design in row-oriented databases. A number of vertical partitioning algorithms have been proposed over the last three decades for a variety of niche scenarios. In principle, the underlying problem remains the same: decompose a table into one or more vertical partitions. However, it is not clear how good different vertical partitioning algorithms are in comparison to each other. In fact, it is not even clear how to experimentally compare different vertical partitioning algorithms. In this paper, we present an exhaustive experimental study of several vertical partitioning algorithms. We categorize vertical partitioning algorithms along three dimensions. We survey six vertical partitioning algorithms and discuss their pros and cons. We identify the major differences in the use-case settings for different algorithms and describe how to make an apples-to-apples comparison of different vertical partitioning algorithms under the same setting. We propose four metrics to compare vertical partitioning algorithms. We show experimental results from the TPC-H and SSB benchmark and present four key lessons learned: (1) we can do four orders of magnitude less computation and still find the optimal layouts, (2) the benefits of vertical partitioning depend strongly on the database buffer size, (3) HillClimb is the best vertical partitioning algorithm, and (4) vertical partitioning for TPC-H-like benchmarks can improve over column layout by only up to 5%."}
{"_id": "cc7cf380c28be0eafbfecc378b9ec7231cf3c4d9", "title": "Large-scale PV system based on the multiphase isolated DC/DC converter", "text": "The large-scale photovoltaic (PV) systems have already been reached 200 MW power level and they will continue to grow in size in the upcoming years. This trend will challenge the existing PV system architectures by requiring power converters with a higher power rating and a higher voltage level at the point of common coupling (PCC). The cascaded H-bridge (CHB) multilevel converter is one of the solutions which could deal with the aforementioned challenges. However, the topology based on the CHB converter faces the issue of leakage current that flows through the solar panel parasitic capacitance to ground which could damage the PV panels and pose safety problems. This paper proposes a multiphase isolated DC/DC converter for the CHB topology for a large-scale PV system which eliminates the leakage current issue. At the same time, the multiphase structure of the DC/DC converter helps to increase the power rating of the converter and to reduce the PV voltage and current ripples. A 0.54 MW rated seven-level CHB converter using multiphase isolated DC/DC converters has been modeled and simulated using MATLAB/Simulink and PLECS Blockset. Simulation results of different case studies are presented to evaluate the performance of the proposed PV system configuration."}
{"_id": "90181982559e9826911ab08bab1590060695188d", "title": "Word Sense Induction for Novel Sense Detection", "text": "We apply topic modelling to automatically induce word senses of a target word, and demonstrate that our word sense induction method can be used to automatically detect words with emergent novel senses, as well as token occurrences of those senses. We start by exploring the utility of standard topic models for word sense induction (WSI), with a pre-determined number of topics (=senses). We next demonstrate that a non-parametric formulation that learns an appropriate number of senses per word actually performs better at the WSI task. We go on to establish state-of-the-art results over two WSI datasets, and apply the proposed model to a novel sense detection task."}
{"_id": "3e4ce8455f8fad933728936a7d6233364e520934", "title": "A theory of visual attention.", "text": "A unified theory of visual recognition and attentional selection is developed by integrating the biased-choice model for single-stimulus recognition (Luce, 1963; Shepard, 1957) with a choice model for selection from multielement displays (Bundesen, Pedersen, & Larsen, 1984) in a race model framework. Mathematically, the theory is tractable, and it specifies the computations necessary for selection. The theory is applied to extant findings from a broad range of experimental paradigms. The findings include effects of object integrality in selective report, number and spatial position of targets in divided-attention paradigms, selection criterion and number of distracters in focused-attention paradigms, delay of selection cue in partial report, and consistent practice in search. On the whole, the quantitative fits are encouraging."}
{"_id": "6dd657465d5ddb0f827df674920da635d97b50d3", "title": "Exercise prescription for overhead athletes with shoulder pathology: a systematic review with best evidence synthesis.", "text": "OBJECTIVE\nTo produce a best evidence synthesis of exercise prescription used when treating shoulder pathology in the overhead athlete.\n\n\nDESIGN\nA systematic review of exercises used in overhead athletes including case studies and clinical commentaries.\n\n\nDATA SOURCES\nMEDLINE, PubMed, SPORTDiscus and CINAHL from database inception through July 8, 2016.\n\n\nMETHODS\nWe examined data from randomised controlled trials and prospective cohort (level I-IV evidence) studies that addressed exercise intervention in the rehabilitation of the overhead athlete with shoulder pathology. Case studies and clinical commentaries (level V evidence) were examined to account for expert opinion-based research. Data were combined using best evidence synthesis and graded (A-F) recommendations (Centre for Evidence-Based Medicine).\n\n\nRESULTS\nThere were 33 unique exercises in six level I-IV studies that met our inclusion criteria. Most exercises were single-plane, upper extremity exercises performed below 90o of elevation. There were 102 unique exercises in 33 level V studies that met our inclusion criteria. These exercises emphasised plyometrics, kinetic chain and sport-specific training.\n\n\nCONCLUSIONS AND RELEVANCE\nOverall, evidence for exercise interventions in overhead athletes with shoulder pathology is dominated by expert opinion (grade D). There is great variability between exercise approaches suggested by experts and those investigated in research studies and the overall level of evidence is low. The strongest available evidence (level B) supports the use of single-plane, open chain upper extremity exercises performed below 90\u00b0 of elevation and closed chain upper extremity exercises. Clinical expert pieces support a more advanced, global treatment approach consistent with the complex, multidimensional nature of sport."}
{"_id": "c919a06c46171d96da4a3c54efe6912bad3b46ed", "title": "No Free Lunch Theorems: Limitations and Perspectives of Metaheuristics", "text": "The No Free Lunch (NFL) theorems for search and optimization are reviewed and their implications for the design of metaheurist ics are discussed. The theorems state that any two search or optimization algorith ms are equivalent when their performance is averaged across all possible problems and even over subsets of problems fulfilling certain constraints. The NFL results show that if there is no assumption regarding the relation between visited and unse en arch points, efficient search and optimization is impossible. There is no wel l performing universal metaheuristic, but the heuristics must be tailored to the pr oblem class at hand using prior knowledge. In practice, it is not likely that the pr econditions of the NFL theorems are fulfilled for a problem class and thus differenc es between algorithms exist. Therefore, tailored algorithms can exploit structu re nderlying the optimization problem. Given full knowledge about the problem class, it i in theory possible to construct an optimal algorithm."}
{"_id": "3ee2e92297ad5544181db64f03d9acfc531f9f06", "title": "Comparing support vector machines with Gaussian kernels to radial basis function classifiers", "text": "The support vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights, and threshold that minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined byk-means clustering, and the weights are computed using error backpropagation. We consider three machines, namely, a classical RBF machine, an SV machine with Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the United States postal service database of handwritten digits, the SV machine achieves the highest recognition accuracy, followed by the hybrid system. The SV approach is thus not only theoretically wellfounded but also superior in a practical application."}
{"_id": "592fc3377e8b590a457d8ffaed60b71730114347", "title": "Fundamental Limitations on Search Algorithms: Evolutionary Computing in Perspective", "text": "The past twenty years has seen a rapid growth of interest in stochastic search algorithms, particularly those inspired by natural processes in physics and biology. Impressive results have been demonstrated on complex practical optimisation problems and related search applications taken from a variety of fields, but the theoretical understanding of these algorithms remains weak. This results partly from the insufficient attention that has been paid to results showing certain fundamental limitations on universal search algorithms, including the so-called \u201cNo Free Lunch\u201d Theorem. This paper extends these results and draws out some of their implications for the design of search algorithms, and for the construction of useful representations. The resulting insights focus attention on tailoring algorithms and representations to particular problem classes by exploiting domain knowledge. This highlights the fundamental importance of gaining a better theoretical grasp of the ways in which such knowledge may be systematically exploited as a major research agenda for the future."}
{"_id": "44e41e19f781083b4c0c6d673793d38609710c05", "title": "Design and Implementation of an FPGA-Based Real-Time Face Recognition System", "text": "Face recognition systems play a vital role in many applications including surveillance, biometrics and security. In this work, we present a {\\textit complete} real-time face recognition system consisting of a face detection, a recognition and a down sampling module using an FPGA. Our system provides an end-to-end solution for face recognition, it receives video input from a camera, detects the locations of the face(s) using the Viola-Jones algorithm, subsequently recognizes each face using the Eigenface algorithm, and outputs the results to a display. Experimental results show that our complete face recognition system operates at 45 frames per second on a Virtex-5 FPGA."}
{"_id": "90c930e6669ad0db6529abbc726d3697c51c96fa", "title": "The origins and the future of microfluidics", "text": "The manipulation of fluids in channels with dimensions of tens of micrometres \u2014 microfluidics \u2014 has emerged as a distinct new field. Microfluidics has the potential to influence subject areas from chemical synthesis and biological analysis to optics and information technology. But the field is still at an early stage of development. Even as the basic science and technological demonstrations develop, other problems must be addressed: choosing and focusing on initial applications, and developing strategies to complete the cycle of development, including commercialization. The solutions to these problems will require imagination and ingenuity."}
{"_id": "5e4b405202c92fd77a12f463ca1247a8b59fd935", "title": "Novel Low-Density Signature for Synchronous CDMA Systems Over AWGN Channel", "text": "Novel low-density signature (LDS) structure is proposed for transmission and detection of symbol-synchronous communication over memoryless Gaussian channel. Given N as the processing gain, under this new arrangement, users' symbols are spread over N chips but virtually only dv < N chips that contain nonzero-values. The spread symbol is then so uniquely interleaved as the sampled, at chip rate, received signal contains the contribution from only dc < K number of users, where K denotes the total number of users in the system. Furthermore, a near-optimum chip-level iterative soft-in-soft-out (SISO) multiuser decoding (MUD), which is based on message passing algorithm (MPA) technique, is proposed to approximate optimum detection by efficiently exploiting the LDS structure. Given beta = K/N as the system loading, our simulation suggested that the proposed system alongside the proposed detection technique, in AWGN channel, can achieve an overall performance that is close to single-user performance, even when the system has 200% loading, i.e., when beta = 2. Its robustness against near-far effect and its performance behavior that is very similar to optimum detection are demonstrated in this paper. In addition, the complexity required for detection is now exponential to dc instead of K as in conventional code division multiple access (CDMA) structure employing optimum multiuser detector."}
{"_id": "ea2435d3b476516d93c871ccf0baa64a50503b1c", "title": "Automatic feature extraction for multiview 3D face recognition", "text": "Current 2D face recognition systems encounter difficulties in recognizing faces with large pose variations. Utilizing the pose-invariant features of 3D face data has the potential to handle multiview face matching. A feature extractor based on the directional maximum is proposed to estimate the nose tip location and the pose angle simultaneously. A nose profile model represented by subspaces is used to select the best candidates for the nose tip. Assisted by a statistical feature location model, a multimodal scheme is presented to extract eye and mouth corners. Using the automatic feature extractor, a fully automatic 3D face recognition system is developed. The system is evaluated on two databases, the MSU database (300 multiview test scans from 100 subjects) and the UND database (953 near frontal scans from 277 subjects). The automatic system provides recognition accuracy that is comparable to the accuracy of a system with manually labeled feature points"}
{"_id": "57c72cb88843d44b43192741c7010558bb451394", "title": "A Survey of Location Prediction on Twitter", "text": "Locations, e.g., countries, states, cities, and point-of-interests, are central to news, emergency events, and people's daily lives. Automatic identification of locations associated with or mentioned in documents has been explored for decades. As one of the most popular online social network platforms, Twitter has attracted a large number of users who send millions of tweets on daily basis. Due to the world-wide coverage of its users and real-time freshness of tweets, location prediction on Twitter has gained significant attention in recent years. Research efforts are spent on dealing with new challenges and opportunities brought by the noisy, short, and context-rich nature of tweets. In this survey, we aim at offering an overall picture of location prediction on Twitter. Specifically, we concentrate on the prediction of user home locations, tweet locations, and mentioned locations. We first define the three tasks and review the evaluation metrics. By summarizing Twitter network, tweet content, and tweet context as potential inputs, we then structurally highlight how the problems depend on these inputs. Each dependency is illustrated by a comprehensive review of the corresponding strategies adopted in state-of-the-art approaches. In addition, we also briefly review two related problems, i.e., semantic location prediction and point-of-interest recommendation. Finally, we make a conclusion of the survey and list future research directions."}
{"_id": "177318d595684a6993cf25175b7df5c5e1c216aa", "title": "Sense-Aaware Semantic Analysis: A Multi-Prototype Word Representation Model Using Wikipedia", "text": "Human languages are naturally ambiguous, which makes it difficult to automatically understand the semantics of text. Most vector space models (VSM) treat all occurrences of a word as the same and build a single vector to represent the meaning of a word, which fails to capture any ambiguity. We present sense-aware semantic analysis (SaSA), a multi-prototype VSM for word representation based on Wikipedia, which could account for homonymy and polysemy. The \u201csense-specific\u201d prototypes of a word are produced by clustering Wikipedia pages based on both local and global contexts of the word in Wikipedia. Experimental evaluation on semantic relatedness for both isolated words and words in sentential contexts and word sense induction demonstrate its effectiveness."}
{"_id": "bd0d47398cfa42c338c3445f4277b33b222b44ed", "title": "Innovation in the pharmaceutical industry: New estimates of R&D costs.", "text": "The research and development costs of 106 randomly selected new drugs were obtained from a survey of 10 pharmaceutical firms. These data were used to estimate the average pre-tax cost of new drug and biologics development. The costs of compounds abandoned during testing were linked to the costs of compounds that obtained marketing approval. The estimated average out-of-pocket cost per approved new compound is $1395 million (2013 dollars). Capitalizing out-of-pocket costs to the point of marketing approval at a real discount rate of 10.5% yields a total pre-approval cost estimate of $2558 million (2013 dollars). When compared to the results of the previous study in this series, total capitalized costs were shown to have increased at an annual rate of 8.5% above general price inflation. Adding an estimate of post-approval R&D costs increases the cost estimate to $2870 million (2013 dollars)."}
{"_id": "cabb79a7c06392554a7262cec87c63bfc8ae8883", "title": "An Approach for Selecting Software-as-a-Service (SaaS) Product", "text": "Software-as-a-Service (SaaS) helps organizations avoid capital expenditure and pay for the functionality as an operational expenditure. Though enterprises are unlikely to use SaaS model for all their information systems needs, certain business functionalities such as Sales Force Automation (SFA), are more seen to be implemented using SaaS model. Such demand has prompted quite a few vendors to offer SFA functionality as SaaS. Enterprises need to adopt an objective approach to ensure they select the most appropriate SaaS product for their needs. This paper presents an approach that makes use of Analytic Hierarchy Process (AHP) technique for prioritizing the product features and also for expert-led scoring of the products."}
{"_id": "28f85f93eb23b0f545b3afbb3bd36697dbfd2eb6", "title": "A Hotel Recommendation System Based on Collaborative Filtering and Rankboost Algorithm", "text": "A hotel recommendation system based on collaborative filtering method of clustering and Rankboost algorithm proposed in this paper, which can avoid the cold-start and scalability problems existing in traditional collaborative filtering. One can find a hotel quickly and efficiently when he uses this hotel recommendation system."}
{"_id": "0666e885a8b376b81ca1511c7a40fcc88ab11948", "title": "Trust management of services in cloud environments: Obstacles and solutions", "text": "Trust management is one of the most challenging issues in the emerging cloud computing area. Over the past few years, many studies have proposed different techniques to address trust management issues. However, despite these past efforts, several trust management issues such as identification, privacy, personalization, integration, security, and scalability have been mostly neglected and need to be addressed before cloud computing can be fully embraced. In this article, we present an overview of the cloud service models and we survey the main techniques and research prototypes that efficiently support trust management of services in cloud environments. We present a generic analytical framework that assesses existing trust management research prototypes in cloud computing and relevant areas using a set of assessment criteria. Open research issues for trust management in cloud environments are also discussed."}
{"_id": "299d01d8df877b207df73cac5d904682a2f5ba66", "title": "Massively-Parallel Lossless Data Decompression", "text": "Today's exponentially increasing data volumes and the high cost of storage make compression essential for the Big Data industry. Although research has concentrated on efficient compression, fast decompression is critical for analytics queries that repeatedly read compressed data. While decompression can be parallelized somewhat by assigning each data block to a different process, break-through speed-ups require exploiting the massive parallelism of modern multi-core processors and GPUs for data decompression within a block. We propose two new techniques to increase the degree of parallelism during decompression. The first technique exploits the massive parallelism of GPU and SIMD architectures. The second sacrifices some compression efficiency to eliminate data dependencies that limit parallelism during decompression. We evaluate these techniques on the decompressor of the DEFLATE scheme, called Inflate, which is based on LZ77 compression and Huffman encoding. We achieve a 2\u00d7 speed-up in a head-to-head comparison with several multi core CPU-based libraries, while achieving a 17% energy saving with comparable compression ratios."}
{"_id": "393598c6512f27c1493117ac4a97fe57846d253f", "title": "A task description language for robot control", "text": "Robot systems must achieve high level goals while remaining reactive to contingencies and new opportunities. This typically requires robot systems to coordinate concurrent activities, monitor the environment, and deal with exceptions. We have developed a new language to support such task-level control. The language, TDL, is an extension of C++ that provides syntactic support for task decomposition, synchronization, execution monitoring, and exception handling. A compiler transforms TDL into pure C++ code that utilizes a platform-independent task management library. This paper introduces TDL, describes the task treerepresentation that underlies the language, and presents some aspects of its implementation and use in an autonomous mobile robot."}
{"_id": "d9f8168ab76595d47fb2d0d971c477876b6074b7", "title": "Contrast masking in human vision.", "text": "Contrast masking was studied psychophysically. A two-alternative forced-choice procedure was used to measure contrast thresholds for 2.0 cpd sine-wave gratings in the presence of masking sine-wave gratings. Thresholds were measured for 11 masker contrasts spanning three log units, and seven masker frequencies ranging +/- one octave from the signal frequency. Corresponding measurements were made for gratings with horizontal widths of 0.75 degrees (narrow fields) and 6.0 degrees (wide fields). For high contrast maskers at all frequencies, signal thresholds were related to masking contrast by power functions with exponents near 0.6. For a range of low masking contrasts, signal thresholds were reduced by the masker. For the wide fields, high contrast masking tuning functions peaked at the signal frequency, were slightly asymmetric, and had approximately invariant half-maximum frequencies that lie 3/4 octave below and 1 octave above the signal frequency. The corresponding low contrast tuning functions exhibited peak threshold reduction at the signal frequency, with half-minimum frequencies at roughly +/- 0.25 octaves. For the narrow fields, the masking tuning functions were much broader at both low and high masking contrasts. A masking model is presented that encompasses contrast detection, discrimination, and masking phenomena. Central constructs of the model include a linear spatial frequency filter, a nonlinear transducer, and a process of spatial pooling that acts at low contrasts only."}
{"_id": "c73ad76698d323191f994de40ef118c1e67f745d", "title": "Stages of embryonic development of the zebrafish.", "text": "We describe a series of stages for development of the embryo of the zebrafish, Danio (Brachydanio) rerio. We define seven broad periods of embryogenesis--the zygote, cleavage, blastula, gastrula, segmentation, pharyngula, and hatching periods. These divisions highlight the changing spectrum of major developmental processes that occur during the first 3 days after fertilization, and we review some of what is known about morphogenesis and other significant events that occur during each of the periods. Stages subdivide the periods. Stages are named, not numbered as in most other series, providing for flexibility and continued evolution of the staging series as we learn more about development in this species. The stages, and their names, are based on morphological features, generally readily identified by examination of the live embryo with the dissecting stereomicroscope. The descriptions also fully utilize the optical transparancy of the live embryo, which provides for visibility of even very deep structures when the embryo is examined with the compound microscope and Nomarski interference contrast illumination. Photomicrographs and composite camera lucida line drawings characterize the stages pictorially. Other figures chart the development of distinctive characters used as staging aid signposts."}
{"_id": "c4ea9066db2e73a7ddfa8643277bfd2948eebfe0", "title": "Large-scale cost function learning for path planning using deep inverse reinforcement learning", "text": ""}
{"_id": "7758a1c9a21e0b8635a5550cfdbebc40b22a41a6", "title": "HIRL: Hierarchical Inverse Reinforcement Learning for Long-Horizon Tasks with Delayed Rewards", "text": ""}
{"_id": "af369310aa73e9ff10518bdbf37afe17754687b7", "title": "A^2-Nets: Double Attention Networks", "text": "Learning to capture long-range relations is fundamental to image/video recognition. Existing CNN models generally rely on increasing depth to model such relations which is highly inefficient. In this work, we propose the \u201cdouble attention block\u201d, a novel component that aggregates and propagates informative global features from the entire spatio-temporal space of input images/videos, enabling subsequent convolution layers to access features from the entire space efficiently. The component is designed with a double attention mechanism in two steps, where the first step gathers features from the entire space into a compact set through second-order attention pooling and the second step adaptively selects and distributes features to each location via another attention. The proposed double attention block is easy to adopt and can be plugged into existing deep neural networks conveniently. We conduct extensive ablation studies and experiments on both image and video recognition tasks for evaluating its performance. On the image recognition task, a ResNet-50 equipped with our double attention blocks outperforms a much larger ResNet-152 architecture on ImageNet-1k dataset with over 40% less the number of parameters and less FLOPs. On the action recognition task, our proposed model achieves the state-of-the-art results on the Kinetics and UCF-101 datasets with significantly higher efficiency than recent works."}
{"_id": "c5700677da083f30dbae69f9d0771a29d2053a19", "title": "Automating hierarchical document classification for construction management information systems", "text": "The widespread use of information technologies for construction is considerably increasing the number of electronic text documents stored in construction management information systems. Consequently, automated methods for organizing and improving the access to the information contained in these types of documents become essential to construction information management. This paper describes a methodology developed to improve information organization and access in construction management information systems based on automatic hierarchical classification of construction project documents according to project components. A prototype system for document classification is presented, as well as the experiments conducted to verify the feasibility of the proposed approach. D 2003 Elsevier Science B.V. All rights reserved."}
{"_id": "5c12d2e8b4e59e764836ae2ac598c628b1363e7e", "title": "A SeqGAN for Polyphonic Music Generation", "text": "We propose an application of SeqGAN, generative adversarial networks for discrete sequence generation, for creating polyphonic musical sequences. Instead of monophonic melody generation suggested in the original work, we present an efficient representation of polyphony MIDI file that captures chords and melodies with dynamic timings simultaneously. The network can create sequences that are musically coherent. We also report that careful tuning of reinforcement learning signals of the model are crucial for general application."}
{"_id": "2c7e4460f59681ba0c7e887c98dc55ef8fc8d9d7", "title": "Multilinear Discriminant Analysis for Higher-Order Tensor Data Classification", "text": "In the past decade, great efforts have been made to extend linear discriminant analysis for higher-order data classification, generally referred to as multilinear discriminant analysis (MDA). Existing examples include general tensor discriminant analysis (GTDA) and discriminant analysis with tensor representation (DATER). Both the two methods attempt to resolve the problem of tensor mode dependency by iterative approximation. GTDA is known to be the first MDA method that converges over iterations. However, its performance relies highly on the tuning of the parameter in the scatter difference criterion. Although DATER usually results in better classification performance, it does not converge, yet the number of iterations executed has a direct impact on DATER's performance. In this paper, we propose a closed-form solution to the scatter difference objective in GTDA, namely, direct GTDA (DGTDA) which also gets rid of parameter tuning. We demonstrate that DGTDA outperforms GTDA in terms of both efficiency and accuracy. In addition, we propose constrained multilinear discriminant analysis (CMDA) that learns the optimal tensor subspace by iteratively maximizing the scatter ratio criterion. We prove both theoretically and experimentally that the value of the scatter ratio criterion in CMDA approaches its extreme value, if it exists, with bounded error, leading to superior and more stable performance in comparison to DATER."}
{"_id": "ba50b9c7f14c80e876044c864094f357a3fac32c", "title": "Blind grasping: Stable robotic grasping using tactile feedback and hand kinematics", "text": "We propose a machine learning approach to the perception of a stable robotic grasp based on tactile feedback and hand kinematic data, which we call blind grasping. We first discuss a method for simulating tactile feedback using a soft finger contact model in GraspIt!, which is a robotic grasping simulator [10]. Using this simulation technique, we compute tactile contacts of thousands of grasps with a robotic hand using the Columbia Grasp Database [6]. The tactile contacts along with the hand kinematic data are then input to a Support Vector Machine (SVM) which is trained to estimate the stability of a given grasp based on this tactile feedback and also the robotic hand kinematics. Experimental results indicate that the tactile feedback along with the hand kinematic data carry meaningful information for the prediction of the stability of a blind robotic grasp."}
{"_id": "820a9232fdf395d3f5cfdf928fce4b945833ecd1", "title": "Cortical dynamics of contextually cued attentive visual learning and search: spatial and object evidence accumulation.", "text": "How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. The ARTSCENE Search model is developed to illustrate the neural mechanisms of such memory-based context learning and guidance and to explain challenging behavioral data on positive-negative, spatial-object, and local-distant cueing effects during visual search, as well as related neuroanatomical, neurophysiological, and neuroimaging data. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined as a scene is scanned with saccadic eye movements. The model simulates the interactive dynamics of object and spatial contextual cueing and attention in the cortical What and Where streams starting from early visual areas through medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortex (area 46) primes possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist that are represented in parahippocampal cortex. Model ventral prefrontal cortex (area 47/12) primes possible target identities in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex."}
{"_id": "2ef1890d43cbb30e26a3616ee569fca3e5330c41", "title": "Improving bug triage with bug tossing graphs", "text": "bug report is typically assigned to a single developer who is then responsible for fixing the bug. In Mozilla and Eclipse, between 37%-44% of bug reports are \"tossed\" (reassigned) to other developers, for example because the bug has been assigned by accident or another developer with additional expertise is needed. In any case, tossing increases the time-to-correction for a bug.\n In this paper, we introduce a graph model based on Markov chains, which captures bug tossing history. This model has several desirable qualities. First, it reveals developer networks which can be used to discover team structures and to find suitable experts for a new task. Second, it helps to better assign developers to bug reports. In our experiments with 445,000 bug reports, our model reduced tossing events, by up to 72%. In addition, the model increased the prediction accuracy by up to 23 percentage points compared to traditional bug triaging approaches."}
{"_id": "48852c8241f8e1be64f419aca5fe6150bab89e85", "title": "Where should the bugs be fixed? More accurate information retrieval-based bug localization based on bug reports", "text": "For a large and evolving software system, the project team could receive a large number of bug reports. Locating the source code files that need to be changed in order to fix the bugs is a challenging task. Once a bug report is received, it is desirable to automatically point out to the files that developers should change in order to fix the bug. In this paper, we propose BugLocator, an information retrieval based method for locating the relevant files for fixing a bug. BugLocator ranks all files based on the textual similarity between the initial bug report and the source code using a revised Vector Space Model (rVSM), taking into consideration information about similar bugs that have been fixed before. We perform large-scale experiments on four open source projects to localize more than 3,000 bugs. The results show that BugLocator can effectively locate the files where the bugs should be fixed. For example, relevant buggy files for 62.60% Eclipse 3.1 bugs are ranked in the top ten among 12,863 files. Our experiments also show that BugLocator outperforms existing state-of-the-art bug localization methods."}
{"_id": "67258a2679601d2cc69e533c49fcfd6366d7315e", "title": "Automatic Bug Triage using Semi-Supervised Text Classification", "text": "In this paper, we propose a semi-supervised text classification approach for bug triage to avoid the deficiency of labeled bug reports in existing supervised approaches. This new approach combines naive Bayes classifier and expectationmaximization to take advantage of both labeled and unlabeled bug reports. This approach trains a classifier with a fraction of labeled bug reports. Then the approach iteratively labels numerous unlabeled bug reports and trains a new classifier with labels of all the bug reports. We also employ a weighted recommendation list to boost the performance by imposing the weights of multiple developers in training the classifier. Experimental results on bug reports of Eclipse show that our new approach outperforms existing supervised approaches in terms of classification accuracy. Keywordsautomatic bug triage; expectation-maximization; semi-supervised text classification; weighted recommendation list"}
{"_id": "83aa6f19894dba0b1251d8f89c0eb6e1629597c5", "title": "DREX: Developer Recommendation with K-Nearest-Neighbor Search and Expertise Ranking", "text": "This paper proposes a new approach called DREX (Developer Recommendation with k-nearest-neighbor search and Expertise ranking) to developer recommendation for bug resolution based on K-Nearest-Neighbor search with bug similarity and expertise ranking with various metrics, including simple frequency and social network metrics. We collect Mozilla Fire fox open bug repository as the experimental data set and compare different ranking metrics on the performance of recommending capable developers for bugs. Our experimental results demonstrate that, when recommending 10 developers for each one of the 250 testing bugs, DREX has produced better performance than traditional methods with multi-labeled text categorization. The best performance obtained by two metrics as Out-Degree and Frequency, is with recall as 0.6 on average. Moreover, other social network metrics such as Degree and Page Rank have produced comparable performance on developer recommendation as Frequency when used for developer expertise ranking."}
{"_id": "8e6430426c0e33a5ce30d1aecb00ced5e746986c", "title": "Assigning bug reports using a vocabulary-based expertise model of developers", "text": "For popular software systems, the number of daily submitted bug reports is high. Triaging these incoming reports is a time consuming task. Part of the bug triage is the assignment of a report to a developer with the appropriate expertise. In this paper, we present an approach to automatically suggest developers who have the appropriate expertise for handling a bug report. We model developer expertise using the vocabulary found in their source code contributions and compare this vocabulary to the vocabulary of bug reports. We evaluate our approach by comparing the suggested experts to the persons who eventually worked on the bug. Using eight years of Eclipse development as a case study, we achieve 33.6% top-1 precision and 71.0% top-10 recall."}
{"_id": "aeba4287d11d3c135115bb42225fc10fbe2f8b1b", "title": "Understanding the determinants of business intelligence system adoption stages: An empirical study of SMEs", "text": "Although business intelligence systems (BIS) adoption research has progressed considerably since its early inceptions, our understanding of the influence of BIS determinants across different adoption stages remains limited. In response, we develop and empirically test a conceptual model for assessing the determinants of BIS diffusion on the evaluation, adoption, and use stages. The model is based on two prominent, firm-level adoption concepts: the diffusion of innovation (DOI) and the technology, organization, environment (TOE) framework, extended with our previous research findings. Drawing on data from 181 small and medium enterprises (SMEs), we identify seven distinct determinants (i.e. cost, BIS is part of ERP, management support, rational decision-making culture, project champion, organizational data environment, organizational readiness) as statistically significant for different adoption stages."}
{"_id": "059eb34d95c73e37dca8e35b0ac5a2fb0142f3ee", "title": "The Stratosphere platform for big data analytics", "text": "We present Stratosphere, an open-source software stack for parallel data analysis. Stratosphere brings together a unique set of features that allow the expressive, easy, and efficient programming of analytical applications at very large scale. Stratosphere\u2019s features include \u201cin situ\u201d data processing, a declarative query language, treatment of user-defined functions as first-class citizens, automatic program parallelization and optimization, support for iterative programs, and a scalable and efficient execution engine. Stratosphere covers a variety of \u201cBig Data\u201d use cases, such as data warehousing, information extraction and integration, data cleansing, graph analysis, and statistical analysis applications. In this paper, we present the overall system architecture design decisions, introduce Stratosphere through example queries, and then dive into the internal workings of the system\u2019s components that relate to extensibility, programming model, optimization, and query execution. We experimentally compare Stratosphere against popular open-source alternatives, and we conclude with a research outlook for the next years."}
{"_id": "c276b0314dadbd7e575d926cb92b67932662a3be", "title": "Improving Duckworth lewis method by using machine learning", "text": "There has been lot of research has been done in the area of sports with prediction. Machine learning algorithm has been applied on most of the data to predict its outcome. In cricket also there are many research has been done to find out the outcome of the match. But there are a special case in the cricket where statistic plays an important role. When the cricket match is interrupted by the rain and then again it starts during the shortage of the time it is difficult to predict the run and overs remaining in the cricket match.To over come this issue Duck worth lewis method has been introduced. In this paper we are going to discuss the short come of the method of Duck worth lewis method. In this research We are going to improve the model of duck worth lewis method with machine learning models. At end we are going to compare the Duck worth lewis method, Duck worth lewis method with run rate as a variable and Improved version of duck worth lewis method."}
{"_id": "0a8b0568d6c4f9db98fe5968a64a40e19a64387d", "title": "Push-Net: Deep Planar Pushing for Objects with Unknown Physical Properties", "text": "This paper introduces Push-Net, a deep recurrent neural network model, which enables a robot to push objects of unknown physical properties for re-positioning and re-orientation, using only visual camera images as input. The unknown physical properties is a major challenge for pushing. Push-Net overcomes the challenge by tracking a history of push interactions with an LSTM module and training an auxiliary objective function that estimates an object\u2019s center of mass. We trained Push-Net entirely in simulation and tested it extensively on many different objects in both simulation and on two real robots, a Fetch arm and a Kinova MICO arm. Experiments suggest that Push-Net is robust and efficient. It achieved over 97% success rate in simulation on average and succeeded in all real robot experiments with a small number of pushes."}
{"_id": "f0447698008f9eafe8ed93d87def1ee5d5cf79c4", "title": "Big Scholarly Data in CiteSeerX: Information Extraction from the Web", "text": "We examine CiteSeerX, an intelligent system designed with the goal of automatically acquiring and organizing large-scale collections of scholarly documents from the world wide web. From the perspective of automatic information extraction and modes of alternative search, we examine various functional aspects of this complex system with an eye towards ongoing and future research developments."}
{"_id": "bf48f1d556fdb85d5dbe8cfd93ef13c212635bcf", "title": "Neural Task Programming: Learning to Generalize Across Hierarchical Tasks", "text": "In this work, we propose a novel robot learning framework called Neural Task Programming (NTP), which bridges the idea of few-shot learning from demonstration and neural program induction. NTP takes as input a task specification (e.g., video demonstration of a task) and recursively decomposes it into finer sub-task specifications. These specifications are fed to a hierarchical neural program, where bottom-level programs are callable subroutines that interact with the environment. We validate our method in three robot manipulation tasks. NTP achieves strong generalization across sequential tasks that exhibit hierarchal and compositional structures. The experimental results show that NTP learns to generalize well towards unseen tasks with increasing lengths, variable topologies, and changing objectives.stanfordvl.github.io/ntp/."}
{"_id": "1a7de56cb569e5efad77640cc566bf6007b6f03b", "title": "Functional reactive programming from first principles", "text": "Functional Reactive Programming, or FRP, is a general framework for programming hybrid systems in a high-level, declarative manner. The key ideas in FRP are its notions of behaviors and events. Behaviors are time-varying, reactive values, while events are time-ordered sequences of discrete-time event occurrences. FRP is the essence of Fran, a domain-specific language embedded in Haskell for programming reactive animations, but FRP is now also being used in vision, robotics and other control systems applications. \nIn this paper we explore the formal semantics of FRP and how itrelates to an implementation based on streams that represent (and therefore only approximate) continuous behaviors. We show that, in the limit as the sampling interval goes to zero, the implementation is faithful to the formal, continuous semantics, but only when certain constraints on behaviors are observed. We explore the nature of these constraints, which vary amongst the FRP primitives. Our results show both the power and limitations of this approach to language design and implementation. As an example of a limitation, we show that streams are incapable of representing instantaneous predicate events over behaviors."}
{"_id": "bbe657fbc16cbf0ceaebd596cea5b3915f4eb39c", "title": "A Wide-band Circular Polarization Stacked Patch Antenna for the Wireless Communication Applications", "text": "A wide-band \u2018corners-truncated rectangular\u2019 stacked patch antenna for use in the circular polarization applications was proposed. The antenna proposed in this paper an axial ratio of less than 3 dB and a VSWR of less than 2 : 1 were shown to be achievable over a 25% bandwidth for use in the wireless communication applications, and this antenna can achieves higher gain, lower side lobes and wider bandwidth compared to the traditional microstrip patch antenna."}
{"_id": "a91859d1c485f738a522474f7d91f2f5e66d2f33", "title": "An upper limb exoskeleton with an optimized 4R spherical wrist mechanism for the shoulder joint", "text": "This paper presents an upper limb rehabilitation exoskeleton with an optimized 4R spherical wrist mechanism for the shoulder joint. Traditional designs of shoulder exoskeletons use 3R mechanisms to replicate the spherical motion of the shoulder. However, due to the exceptionally large range of motion of the human shoulder, the 3R mechanism is required to operate at a singular configuration at some point in the shoulder workspace. In this configuration, the 3R mechanism loses the ability to rotate the shoulder about one DOF. To overcome this problem, the use of a kinematically redundant 4R mechanism is proposed. The 4R mechanism has been optimized in a previous work and is used in the design of the presented exoskeleton prototype. This new shoulder mechanism allows the exoskeleton to achieve the entire human shoulder workspace without mechanical interference and while operating well away from singular configurations. Numerous features have also been included in the exoskeleton design to ensure it is safe, comfortable and easy to use. The shoulder mechanism of the exoskeleton is analyzed to demonstrate the capabilities of the 4R mechanism."}
{"_id": "5a73fe1896167b3dbfcaad154ee544222cd25027", "title": "Multi-modal human-machine communication for instructing robot grasping tasks", "text": "A major challenge for the realization of intelligent robots is to supply them with cognitive abilities in order to allow ordinary users to program them easily and intuitively. One way of such programming is teaching work tasks by interactive demonstration. To make this effective and convenient for the user, the machine must be capable to establish a common focus of attention and be able to use and integrate spoken instructions, visual perceptions, and non-verbal clues like gestural commands. We report progress in building a hybrid architecture that combines statistical methods, neural networks, and finite state machines into an integrated system for instructing grasping tasks by man-machine interaction. The system combines the GRAVIS-robot for visual attention and gestural instruction with an intelligent interface for speech recognition and linguistic interpretation, and an modality fusion module to allow multi-modal task-oriented man-machine communication with respect to dextrous robot manipulation of objects."}
{"_id": "092f25f2269b13266fdbef278f4ec7abc9598f00", "title": "Analogy just looks like high level perception: why a domain-general approach to analogical mapping is right", "text": "Hofstadter and his colleagues have criticized current accounts of analogy, claiming that such accounts do not accurately capture interactions between processes of representation construction and processes of mapping. They suggest instead that analogy should be viewed as a form of high level perception that encompasses both representation building and mapping as indivisible operations within a single model. They argue speci\u00ae cally against SM E, our model of analogical matching, on the grounds that it is modular, and o\u0152er instead programs such as Mitchell and Hofstadter\u2019 s Copycat as examples of the high level perception approach. In this paper we argue against this position on two grounds. First, we demonstrate that most of their speci\u00ae c arguments involving SM E and Copycat are incorrect. Second, we argue that the claim that analogy is high-level perception, while in some ways an attractive metaphor, is too vague to be useful as a technical proposal. We focus on \u00ae ve issues : (1) how perception relates to analogy, (2) how \u0304 exibility arises in analogical processing, (3) whether analogy is a domain-general process, (4) how micro-worlds should be used in the study of analogy, and (5) how best to assess the psychological plausibility of a model of analogy. W e illustrate our discussion with examples taken from computer models embodying both views."}
{"_id": "8ec62b78c47be98d4807703e5a15f8d64e747e81", "title": "MICA - A new generation of versatile instruments in robotic surgery", "text": "Robotic surgery systems are highly complex and expensive pieces of equipment. Demands for lower cost of care can be met if these systems are employable in a flexible manner for a large variety of procedures. To protect the initial investment the capabilities of a robotic system need to be expandable as new tasks arise."}
{"_id": "af876ac8caa0223341640c2a4908daac3b651541", "title": "PEEK biomaterials in trauma, orthopedic, and spinal implants.", "text": "Since the 1980s, polyaryletherketones (PAEKs) have been increasingly employed as biomaterials for trauma, orthopedic, and spinal implants. We have synthesized the extensive polymer science literature as it relates to structure, mechanical properties, and chemical resistance of PAEK biomaterials. With this foundation, one can more readily appreciate why this family of polymers will be inherently strong, inert, and biocompatible. Due to its relative inertness, PEEK biomaterials are an attractive platform upon which to develop novel bioactive materials, and some steps have already been taken in that direction, with the blending of HA and TCP into sintered PEEK. However, to date, blended HA-PEEK composites have involved a trade-off in mechanical properties in exchange for their increased bioactivity. PEEK has had the greatest clinical impact in the field of spine implant design, and PEEK is now broadly accepted as a radiolucent alternative to metallic biomaterials in the spine community. For mature fields, such as total joint replacements and fracture fixation implants, radiolucency is an attractive but not necessarily critical material feature."}
{"_id": "b30ee7be57a522ea05b87224f52148f0a6f65576", "title": "A new computing environment for collective privacy protection from constrained healthcare devices to IoT cloud services", "text": "The Internet of healthcare things is essentially a new model that changes the way of the delivery and management of healthcare services. It utilizes digital sensors and cloud computing to present a quality healthcare service outside of the classical hospital environment. This resulted in the emergence of a new class of online web 4.0 services, which are termed \u201ccloud healthcare services\u201d. Cloud healthcare services offer a straightforward opportunity for patients to communicate with healthcare professionals and utilize their personal IoHT devices to obtain timely and accurate medical guidance and decisions. The personal IoHT devices integrate sensed health data at a central cloud healthcare service to extract useful health insights for wellness and preventive care strategies. However, the present practices for cloud healthcare services rely on a centralized approach, where patients\u2019 health data are collected and stored on servers, located at remote locations, which might be functioning under data privacy laws somewhat different from the ones applied where the service is running. Promoting a privacy respecting cloud services encourages patients to actively participate in these healthcare services and to routinely provide an accurate and precious health data about themselves. With the emergence of fog computing paradigm, privacy protection can now be enforced at the edge of the patient\u2019s network regardless of the location of service providers. In this paper, a framework for cloud healthcare recommender service is presented. We depicted the personal gateways at the patients\u2019 side act as intermediate nodes (called fog nodes) between IoHT devices and cloud healthcare services. A fog-based middleware will be hosted on these fog nodes for an efficient aggregation of patients generated health data while maintaining the privacy and the confidentiality of their health profiles. The proposed middleware executes a two-stage concealment process that utilizes the hierarchical nature of IoHT devices. This will unburden the constrained IoHT devices from performing intensive privacy preserving processes. At that, the patients will be empowered with a tool to control the privacy of their health data by enabling them to release their health data in a concealed form. The further processing at the cloud healthcare service continues over the concealed data by applying the proposed protocols. The proposed solution was integrated into a scenario related to preserving the privacy of the patients\u2019 health data when utilized by a cloud healthcare recommender service to generate health insights. Our approach induces a straightforward solution with accurate results, which are beneficial to both patients and service providers."}
{"_id": "1bb239731589f3114a3fe5b35e42a635b5eacb38", "title": "Transfer Learning for Text Mining", "text": "Over the years, transfer learning has received much attention in machine learning research and practice. Researchers have found that a major bottleneck associated with machine learning and text mining is the lack of high-quality annotated examples to help train a model. In response, transfer learning offers an attractive solution for this problem. Various transfer learning methods are designed to extract the useful knowledge from different but related auxiliary domains. In its connection to text mining, transfer learning has found novel and useful applications. In this chapter, we will review some most recent developments in transfer learning for text mining, explain related algorithms in detail, and project future developments of this field. We focus on two important topics: cross-domain text document classification and heterogeneous transfer learning that uses labeled text documents to help classify images."}
{"_id": "271ca0385fa41d3639f93ea4066f88fb83376d87", "title": "Sensorimotor training in virtual reality: a review.", "text": "Recent experimental evidence suggests that rapid advancement of virtual reality (VR) technologies has great potential for the development of novel strategies for sensorimotor training in neurorehabilitation. We discuss what the adaptive and engaging virtual environments can provide for massive and intensive sensorimotor stimulation needed to induce brain reorganization.Second, discrepancies between the veridical and virtual feedback can be introduced in VR to facilitate activation of targeted brain networks, which in turn can potentially speed up the recovery process. Here we review the existing experimental evidence regarding the beneficial effects of training in virtual environments on the recovery of function in the areas of gait,upper extremity function and balance, in various patient populations. We also discuss possible mechanisms underlying these effects. We feel that future research in the area of virtual rehabilitation should follow several important paths. Imaging studies to evaluate the effects of sensory manipulation on brain activation patterns and the effect of various training parameters on long term changes in brain function are needed to guide future clinical inquiry. Larger clinical studies are also needed to establish the efficacy of sensorimotor rehabilitation using VR in various clinical populations and most importantly, to identify VR training parameters that are associated with optimal transfer to real-world functional improvements."}
{"_id": "a61fd72060277132fa29b784edf1175f7b2e1b48", "title": "Sources of mathematical thinking: behavioral and brain-imaging evidence.", "text": "Does the human capacity for mathematical intuition depend on linguistic competence or on visuo-spatial representations? A series of behavioral and brain-imaging experiments provides evidence for both sources. Exact arithmetic is acquired in a language-specific format, transfers poorly to a different language or to novel facts, and recruits networks involved in word-association processes. In contrast, approximate arithmetic shows language independence, relies on a sense of numerical magnitudes, and recruits bilateral areas of the parietal lobes involved in visuo-spatial processing. Mathematical intuition may emerge from the interplay of these brain systems."}
{"_id": "74aa24a6b41363c37ee1d4a4c57b138af7e5be54", "title": "NON-UNIFORM TRANSMISSION LINE TRANSFORMERS AND THEIR APPLICATION IN THE DESIGN OF COMPACT MULTI-BAND BAGLEY POWER DIVIDERS WITH HARMONICS SUPPRESSION", "text": "In this paper, the application of compact non-uniform transmission line transformers (NTLTs) in suppressing and controlling the odd harmonics of the fundamental frequency is presented. A design example showing the complete suppression of the odd harmonics of the fundamental frequency is given. In addition, several compact NTLTs are designed showing the possibility of controlling the existence of a fundamental frequency\u2019s odd harmonics. Moreover, multi-band operation using NTLTs is investigated. Specifically, a design example of a miniaturized triple-frequency NTLT is introduced. Based on these compact NTLTs, a 3-way triple-frequency modified Bagley power divider (BPD) with a size reduction of 50%, and a 5-way modified BPD with harmonics suppression and size reduction of 34%, are designed. For verification purposes, both dividers are simulated using the two full-wave simulators IE3D and HFSS. Moreover, the modified 5-way BPD with harmonics suppression is fabricated and measured. Both the simulation and measurement results validate the design approach."}
{"_id": "0e5bb591f2c66b201541d17b009ad65e58686816", "title": "Changes in women's choice of dress across the ovulatory cycle: naturalistic and laboratory task-based evidence.", "text": "The authors tested the prediction that women prefer clothing that is more revealing and sexy when fertility is highest within the ovulatory cycle. Eighty-eight women reported to the lab twice: once on a low-fertility day of the cycle and once on a high-fertility day (confirmed using hormone tests). In each session, participants posed for full-body photographs in the clothing they wore to the lab, and they drew illustrations to indicate an outfit they would wear to a social event that evening. Although each data source supported the prediction, the authors found the most dramatic changes in clothing choice in the illustrations. Ovulatory shifts in clothing choice were moderated by sociosexuality, attractiveness, relationship status, and relationship satisfaction. Sexually unrestricted women, for example, showed greater shifts in preference for revealing clothing worn to the laboratory near ovulation. The authors suggest that clothing preference shifts could reflect an increase in female-female competition near ovulation."}
{"_id": "3d0091f5ddd63884a80e517b07c3063b1d22608f", "title": "Generic SPWM technique for multilevel inverter", "text": "Multilevel inverters generate stepped AC signals and thus overcome many of the limitations with the classical inverter circuit. Multilevel inverters use many number of switches and generate staircase type output through controlled switching of various switches. The sequence of switching and the duration of each step are very vital in minimizing the THD. The methods of generating gating signals are not unique. Many methods that are proposed in literatures make use of SPWM technique. Sinusoidal Pulse Width Modulation (SPWM) is a popular control method widely used in power electronic inverter circuit. It has advantages like low switching losses, the output has fewer harmonic and method is easy to implement. The different SPWM methods are: (A) Phase Disposition PWM (PDPWM), (B) Phase Opposition Disposition PWM (PODPWM), (C) Alternative Opposition and Disposition PWM (APODPWM), (D) Phase Shift PWM (PSPWM), (E) Carrier Overlapping PWM (COPWM) and (F) Multi Carrier Sinusoidal Pulse Width Modulation with Variable Frequency (MCSPWMVF). All the above methods make use of many triangular signals, that are level shifted or phase shifted, and compare with a single sine wave to generate gating signals for the respective switches. The intersection of the sine signal with the various triangular signals will generate the gating signals for the respective switches. In the present investigation a generalized gating signal generation method is proposed using only one modulating and one carrier signal. The signal so generated is steered into various switches through pulse steering circuit developed in the present work."}
{"_id": "539290b571b918da959fabaae7f605bb07850518", "title": "Document Page Segmentation and Layout Analysis Using Soft Ordering", "text": "A document image is composed of a variety of physical entities or regions such as text blocks, lines, words, figures, tables, and background. We could also assign functional or logical labels such as sentences, titles, captions, author names, and addresses to some of these regions. The process of document structure and layout analysis tries to decompose a given document image into its component regions and understand their functional roles and relationships. The processing is carried out in multiple steps, such as preprocessing, page decomposition, structure understanding, etc. We will look into each of these steps in detail in the following sections. Document images are often generated from physical documents by digitization using scanners or digital cameras. Many documents, such as newspapers, magazines and brochures, contain very complex layout due to the placement of figures, titles, and captions, complex backgrounds, artistic text formatting, etc. (see Figure 1). A human reader uses a variety of additional cues such as context, conventions and information about language/script, along with a complex reasoning process to decipher the contents of a document. Automatic analysis of an arbitrary document with complex layout is an extremely difficult task and is beyond the capabilities of the state-of-the-art document structure and layout analysis systems. This is interesting since documents are designed to be effective and clear to human interpretation unlike natural images."}
{"_id": "29b09a84ff5f90aed578f75f4b41ae49e7d9b6d7", "title": "Learning Dynamic Guidance for Depth Image Enhancement", "text": "The depth images acquired by consumer depth sensors (e.g., Kinect and ToF) usually are of low resolution and insufficient quality. One natural solution is to incorporate with high resolution RGB camera for exploiting their statistical correlation. However, most existing methods are intuitive and limited in characterizing the complex and dynamic dependency between intensity and depth images. To address these limitations, we propose a weighted analysis representation model for guided depth image enhancement, which advances the conventional methods in two aspects: (i) task driven learning and (ii) dynamic guidance. First, we generalize the analysis representation model by including a guided weight function for dependency modeling. And the task-driven learning formulation is introduced to obtain the optimized guidance tailored to specific enhancement task. Second, the depth image is gradually enhanced along with the iterations, and thus the guidance should also be dynamically adjusted to account for the updating of depth image. To this end, stage-wise parameters are learned for dynamic guidance. Experiments on guided depth image upsampling and noisy depth image restoration validate the effectiveness of our method."}
{"_id": "976a5ddd57d47f8a527089f37db042902598e684", "title": "Intracranial EEG Correlates of Expectancy and Memory Formation in the Human Hippocampus and Nucleus Accumbens", "text": "The human brain is adept at anticipating upcoming events, but in a rapidly changing world, it is essential to detect and encode events that violate these expectancies. Unexpected events are more likely to be remembered than predictable events, but the underlying neural mechanisms for these effects remain unclear. We report intracranial EEG recordings from the hippocampus of epilepsy patients, and from the nucleus accumbens of depression patients. We found that unexpected stimuli enhance an early (187 ms) and a late (482 ms) hippocampal potential, and that the late potential is associated with successful memory encoding for these stimuli. Recordings from the nucleus accumbens revealed a late potential (peak at 475 ms), which increases in magnitude during unexpected items, but no subsequent memory effect and no early component. These results are consistent with the hypothesis that activity in a loop involving the hippocampus and the nucleus accumbens promotes encoding of unexpected events."}
{"_id": "04189524be3e0ef83e7f7c90e50cfdf5ffa0efd7", "title": "The plastic surgery hypothesis", "text": "Recent work on genetic-programming-based approaches to automatic program patching have relied on the insight that the content of new code can often be assembled out of fragments of code that already exist in the code base. This insight has been dubbed the plastic surgery hypothesis; successful, well-known automatic repair tools such as GenProg rest on this hypothesis, but it has never been validated. We formalize and validate the plastic surgery hypothesis and empirically measure the extent to which raw material for changes actually already exists in projects. In this paper, we mount a large-scale study of several large Java projects, and examine a history of 15,723 commits to determine the extent to which these commits are graftable, i.e., can be reconstituted from existing code, and find an encouraging degree of graftability, surprisingly independent of commit size and type of commit. For example, we find that changes are 43% graftable from the exact version of the software being changed. With a view to investigating the difficulty of finding these grafts, we study the abundance of such grafts in three possible sources: the immediately previous version, prior history, and other projects. We also examine the contiguity or chunking of these grafts, and the degree to which grafts can be found in the same file. Our results are quite promising and suggest an optimistic future for automatic program patching methods that search for raw material in already extant code in the project being patched."}
{"_id": "05d44448c0e122a7d97c4b3c19b9b7acd29c0f7e", "title": "Mining StackOverflow to turn the IDE into a self-confident programming prompter", "text": "Developers often require knowledge beyond the one they possess, which often boils down to consulting sources of information like Application Programming Interfaces (API) documentation, forums, Q&A websites, etc. Knowing what to search for and how is non- trivial, and developers spend time and energy to formulate their problems as queries and to peruse and process the results. We propose a novel approach that, given a context in the IDE, automatically retrieves pertinent discussions from Stack Overflow, evaluates their relevance, and, if a given confidence threshold is surpassed, notifies the developer about the available help. We have implemented our approach in Prompter, an Eclipse plug-in. Prompter has been evaluated through two studies. The first was aimed at evaluating the devised ranking model, while the second was conducted to evaluate the usefulness of Prompter."}
{"_id": "5c026313a2bc5e61da4781bab1b5dfd21437e92c", "title": "Gender and Tenure Diversity in GitHub Teams", "text": "Software development is usually a collaborative venture. Open Source Software (OSS) projects are no exception; indeed, by design, the OSS approach can accommodate teams that are more open, geographically distributed, and dynamic than commercial teams. This, we find, leads to OSS teams that are quite diverse. Team diversity, predominantly in offline groups, is known to correlate with team output, mostly with positive effects. How about in OSS? Using GitHub, the largest publicly available collection of OSS projects, we studied how gender and tenure diversity relate to team productivity and turnover. Using regression modeling of GitHub data and the results of a survey, we show that both gender and tenure diversity are positive and significant predictors of productivity, together explaining a sizable fraction of the data variability. These results can inform decision making on all levels, leading to better outcomes in recruiting and performance."}
{"_id": "98e810ed098a651e0ba8cbb63d2d926d4eebdf9b", "title": "CCFinder: A Multilinguistic Token-Based Code Clone Detection System for Large Scale Source Code", "text": "\u00d0A code clone is a code portion in source files that is identical or similar to another. Since code clones are believed to reduce the maintainability of software, several code clone detection techniques and tools have been proposed. This paper proposes a new clone detection technique, which consists of the transformation of input source text and a token-by-token comparison. For its implementation with several useful optimization techniques, we have developed a tool, named CCFinder, which extracts code clones in C, C++, Java, COBOL, and other source files. As well, metrics for the code clones have been developed. In order to evaluate the usefulness of CCFinder and metrics, we conducted several case studies where we applied the new tool to the source code of JDK, FreeBSD, NetBSD, Linux, and many other systems. As a result, CCFinder has effectively found clones and the metrics have been able to effectively identify the characteristics of the systems. In addition, we have compared the proposed technique with other clone detection techniques. Index Terms\u00d0Code clone, duplicated code, CASE tool, metrics, maintenance."}
{"_id": "fb35f65339c4f1bbe9c0bd25f09f83e65e436d51", "title": "Comparison and evaluation of code clone detection techniques and tools: A qualitative approach", "text": "Over the last decade many techniques and tools for software clone detection have been proposed. In this paper, we provide a qualitative comparison and evaluation of the current state-of-the-art in clone detection techniques and tools, and organize the large amount of information into a coherent conceptual framework. We begin with background concepts, a generic clone detection process and an overall taxonomy of current techniques and tools. We then classify, compare and evaluate the techniques and tools in two different dimensions. First, we classify and compare approaches based on a number of facets, each of which has a set of (possibly overlapping) attributes. Second, we qualitatively evaluate the classified techniques and tools with respect to a taxonomy of editing scenarios designed to model the creation of Type-1, Type-2, Type-3 and Type-4 clones. Finally, we provide examples of how one might use the results of this study to choose the most appropriate clone detection tool or technique in the context of a particular set of goals and constraints. The primary contributions of this paper are: (1) a schema for classifying clone detection techniques and tools and a classification of current clone detectors based on this schema, and (2) a taxonomy of editing scenarios that produce different clone types and a qualitative evaluation of current clone detectors based on this taxonomy."}
{"_id": "35045f6ced0fba44b867910d7a391e04df4de504", "title": "WILDSENSING: Design and deployment of a sustainable sensor network for wildlife monitoring", "text": "The increasing adoption of wireless sensor network technology in a variety of applications, from agricultural to volcanic monitoring, has demonstrated their ability to gather data with unprecedented sensing capabilities and deliver it to a remote user. However, a key issue remains how to maintain these sensor network deployments over increasingly prolonged deployments. In this article, we present the challenges that were faced in maintaining continual operation of an automated wildlife monitoring system over a one-year period. This system analyzed the social colocation patterns of European badgers (Meles meles) residing in a dense woodland environment using a hybrid RFID-WSN approach. We describe the stages of the evolutionary development, from implementation, deployment, and testing, to various iterations of software optimization, followed by hardware enhancements, which in turn triggered the need for further software optimization. We highlight the main lessons learned: the need to factor in the maintenance costs while designing the system; to consider carefully software and hardware interactions; the importance of rapid prototyping for initial deployment (this was key to our success); and the need for continuous interaction with domain scientists which allows for unexpected optimizations."}
{"_id": "80d73ff08afb3dccd9eb456bb9a9924630e6081b", "title": "Merging thermal and visual images by a contrast pyramid", "text": "J. M. Valeton I nstitute for Perception TNO Kampweg 5 3769 DE Soesterberg, The Netherlands Abstract. Integration of images from different sensing modalities can produce information that cannot be obtained by viewing the sensor outputs separately and consecutively. This paper introduces a hierarchical i mage merging scheme based on a multiresolution contrast decomposition (the ratio of a low-pass pyramid). The composite images produced by this scheme preserve those details from the input images that are most relevant to visual perception. The method is tested by merging parallel registered thermal and visual images. The results show that the fused images present a more detailed representation of the depicted scene. Detection, recognition, and search tasks may therefore benefit from this new image representation."}
{"_id": "8b21c010b128789ae5c31bf0b2db4715943721b3", "title": "Fast Eigenspace Approximation using Random Signals", "text": "We focus in this work on the estimation of the first k eigenvectors of any graph Laplacian using filtering of Gaussian random signals. We prove that we only need k such signals to be able to exactly recover as many of the smallest eigenvectors, regardless of the number of nodes in the graph. In addition, we address key issues in implementing the theoretical concepts in practice using accurate approximated methods. We also propose fast algorithms both for eigenspace approximation and for the determination of the kth smallest eigenvalue \u03bbk. The latter proves to be extremely efficient under the assumption of locally uniform distribution of the eigenvalue over the spectrum. Finally, we present experiments which show the validity of our method in practice and compare it to state-of-the-art methods for clustering and visualization both on synthetic smallscale datasets and larger real-world problems of millions of nodes. We show that our method allows a better scaling with the number of nodes than all previous methods while achieving an almost perfect reconstruction of the eigenspace formed by the first k eigenvectors. Keywords\u2014Graph signal processing, low-rank reconstruction, partitionning, spectral graph theory, spectrum analysis, subspace approximation, visualization"}
{"_id": "5e703ba175256ddce240e7064119e352a23b43ad", "title": "LPV subspace identification of the edgewise vibrational dynamics of a wind turbine rotor", "text": "In this paper we apply a state-of-the-art algorithm for subspace identification of linear parameter-varying (LPV) systems to identify the coupled dynamics of the drive-train and the edgewise bending motion of the rotor blades of three-bladed wind turbines. These dynamics are varying with the rotor speed. The identification algorithm uses a factorization which makes it possible to form predictors based on past inputs, outputs, and the known rotor speed. The predictors contain the LPV equivalent of the Markov parameters. Using the predictors, ideas from Predictor Based Subspace IDentification (PBSID) were developed to estimate the state sequence from which the LPV system matrices can be constructed. The algorithm was applied not only to synthetic data generated by a computer simulation of a reference wind turbine, but also to data measured from the CART3 research wind turbine at the National Wind Technology Center of the National Renewable Energy Laboratory (NREL). This paper demonstrates that the linear time-varying behavior of the aeroelastic dynamics of the wind turbine rotor can be captured in an LPV model identified with measured input-output data."}
{"_id": "c5bc9fd9b1d1930de401af2b8a47f1de2e83a81e", "title": "Knowledge management technologies and applications - literature review from 1995 to 2002", "text": "This paper surveys knowledge management (KM) development using a literature review and classification of articles from 1995 to 2002 with keyword index in order to explore how KM technologies and applications have developed in this period. Based on the scope of 234 articles of knowledge management applications, this paper surveys and classifies KM technologies using the seven categories as: KM framework, knowledge-based systems, data mining, information and communication technology, artificial intelligence/expert systems, database technology, and modeling, together with their applications for different research and problem domains. Some discussion is presented, indicating future development for knowledge management technologies and applications as the followings: (1) KM technologies tend to develop towards expert orientation, and KM applications development is a problem-oriented domain. (2) Different social studies methodologies, such as statistical method, are suggested to implement in KM as another kind of technology. (3) Integration of qualitative and quantitative methods, and integration of KM technologies studies may broaden our horizon on this subject. (4) The ability to continually change and obtain new understanding is the power of KM technologies and will be the application of future works. q 2003 Elsevier Science Ltd. All rights reserved."}
{"_id": "b3d47fcab55fe866355248f996ac48b60a6d0645", "title": "Comparison of NRZ and RZ modulations in laser intersatellite communication systems", "text": "Laser intersatellite communication (LIC) is a promising choice for intersatellite communication due to the high data rate that can be achieved with the advantages of light weight, smaller size and lower power consumption. In this paper, we compare the maximum transmission distance with non-return-to-zero (NRZ) and return-to-zero (RZ) modulations for two different LIC scenarios. Simulation results show that for the long-range LIC system with a saturated booster optical amplifier, the RZ modulation scheme can offer a longer transmission distance than the NRZ modulation scheme. For the short-range LIC system without an optical amplifier, the RZ modulation scheme performs almost the same as the NRZ modulation scheme."}
{"_id": "74e1ec5c2a1f12544e4a44cb3502930851ee255f", "title": "An overview of substrate noise reduction techniques", "text": "This paper provides an overview of the recent circuit level and physical level substrate noise reduction techniques. Several of these techniques are compared for their advantages and disadvantages in \"system-on-chip\" applications."}
{"_id": "1ac291cb21378902486f88ecdea61beb2ea44e5d", "title": "Big Data Analytics for Security", "text": "Big data is changing the landscape of security tools for network monitoring, security information and event management, and forensics; however, in the eternal arms race of attack and defense, security researchers must keep exploring novel ways to mitigate and contain sophisticated attackers."}
{"_id": "19c90b3c0c0d94e8235731a057cc6377c46482ee", "title": "Direct Multisearch for Multiobjective Optimization", "text": "In practical applications of optimization it is common to have several conflicting objective functions to optimize. Frequently, these functions are subject to noise or can be of black-box type, preventing the use of derivative-based techniques. We propose a novel multiobjective derivative-free methodology, calling it direct multisearch (DMS), which does not aggregate any of the objective functions. Our framework is inspired by the search/poll paradigm of direct-search methods of directional type and uses the concept of Pareto dominance to maintain a list of nondominated points (from which the new iterates or poll centers are chosen). The aim of our method is to generate as many points in the Pareto front as possible from the polling procedure itself, while keeping the whole framework general enough to accommodate other disseminating strategies, in particular when using the (here also) optional search step. DMS generalizes to multiobjective optimization (MOO) all direct-search methods of directional type. We prove under the common assumptions used in direct search for single optimization that at least one limit point of the sequence of iterates generated by DMS lies in (a stationary form of) the Pareto front. However, extensive computational experience has shown that our methodology has an impressive capability of generating the whole Pareto front, even without using a search step. Two by-products of this paper are (i) the development of a collection of test problems for MOO and (ii) the extension of performance and data profiles to MOO, allowing a comparison of several solvers on a large set of test problems, in terms of their efficiency and robustness to determine Pareto fronts."}
{"_id": "2464c3a16bf685130094dfc7ce7302d3014f43a8", "title": "PRET DRAM controller: Bank privatization for predictability and temporal isolation", "text": "Hard real-time embedded systems employ high-capacity memories such as Dynamic RAMs (DRAMs) to cope with increasing data and code sizes of modern designs. However, memory controller design has so far largely focused on improving average-case performance. As a consequence, the latency of memory accesses is unpredictable, which complicates the worst-case execution time analysis necessary for hard real-time embedded systems.\n Our work introduces a novel DRAM controller design that is predictable and that significantly reduces worst-case access latencies. Instead of viewing the DRAM device as one resource that can only be shared as a whole, our approach views it as multiple resources that can be shared between one or more clients individually. We partition the physical address space following the internal structure of the DRAM device, i.e., its ranks and banks, and interleave ac- cesses to the blocks of this partition. This eliminates contention for shared resources within the device, making accesses temporally predictable and temporally isolated. This paper describes our DRAM controller design and its integration with a precision-timed (PRET) architecture called PTARM. We present analytical bounds on the latency and throughput of the proposed controller, and confirm these via simulation."}
{"_id": "f05826a3debddcc6761a16f9b6a8520013c98f43", "title": "Meta-cognitive extreme learning machine for regression problems", "text": "In this paper, we present an efficient fast learning algorithm for regression problems using meta-cognitive extreme learning machine(McELM). The proposed algorithm has two components, namely the cognitive component and meta-cognitive component. The cognitive component is an extreme learning machine (ELM) while the meta-cognitive component which controls the cognitive component employs a self-regulating learning mechanism to decide what to learn, when to learn and how to learn. The meta-cognitive component chooses suitable learning method based on the samples presented namely, delete sample, reserve sample and network update. The use of ELM improves the network speed and reduces computational cost. Unlike traditional ELM, the number of hidden layers is not fixed priori in McELM, instead, the network is built during the learning phase. This algorithm is evaluated on a set of benchmark regression and approximation problems and also on a real-world wind force and moment coefficient prediction problem. Performance results in this study highlight that McELM can achieve better results compared with conventional ELM, support vector regression (SVR)."}
{"_id": "a705e5f481f5e80c2b94e81b8ece6cb11f76bfe6", "title": "ERP evidence for conceptual mappings and comparison processes during the comprehension of conventional and novel metaphors", "text": "Cognitive linguists suggest that understanding metaphors requires activation of conceptual mappings between the involved concepts. We tested whether mappings are indeed in use during metaphor comprehension, and what mapping means as a cognitive process with Event-Related Potentials. Participants read literal, conventional metaphorical, novel metaphorical, and anomalous target sentences preceded by primes with related or unrelated mappings. Experiment 1 used sentence-primes to activate related mappings, and Experiment 2 used simile-primes to induce comparison thinking. In the unprimed conditions of both experiments, metaphors elicited N400s more negative than the literals. In Experiment 1, related sentence-primes reduced the metaphor-literal N400 difference in conventional, but not in novel metaphors. In Experiment 2, related simile-primes reduced the metaphor-literal N400 difference in novel, but not clearly in conventional metaphors. We suggest that mapping as a process occurs in metaphors, and the ways in which it can be facilitated by comparison differ between conventional and novel metaphors."}
{"_id": "6cb7ef2aad20bcb871f41884196b9c2a04ec3bf6", "title": "Design for auto-tuning PID controller based on genetic algorithms", "text": "This paper presents a method of designing proportional-integral-derivative (PID) controller based on genetic algorithms (GA). Ziegler-Nichols tuning formula has been used for predicting the range of gain coefficients of GA-PID controller. The use of GA for optimizing the gain coefficients of PID controller has considerably improved the performance of PID controller. Simulation studies on a second-order and a third-order control system demonstrate that the proposed controller provides high performance of dynamic and static characteristics."}
{"_id": "97dcd564fe8fcc61f46f585d30b5ca00f3fd6452", "title": "Suppression by Selecting Wavelets for Feature Compression in Distributed Speech Recognition", "text": "Distributed speech recognition DSR splits the processing of data between a mobile device and a network server. In the front-end, features are extracted and compressed to transmit over a wireless channel to a back-end server, where the incoming stream is received and reconstructed for recognition tasks. In this paper, we propose a feature compression algorithm termed suppression by selecting wavelets SSW to achieve the two main goals of DSR: Minimizing memory and device requirements while also maintaining or even improving the recognition performance. The SSW approach first applies the discrete wavelet transform DWT to filter the incoming speech feature sequence into two temporal subsequences at the client terminal. Feature compression is achieved by keeping the low modulation frequency subsequence while discarding the high frequency counterpart. The low-frequency subsequence is then transmitted across the remote network for specific feature statistics normalization. Wavelets are favorable for resolving the temporal properties of the feature sequence, and the down-sampling process in DWT achieves data compression by reducing the amount of data at the terminal prior to transmission across the network. Once the compressed features have arrived at the server, the feature sequence can be enhanced by statistics normalization, reconstructed with inverse DWT, and compensated with a simple post filter to alleviate any over-smoothing effects from the compression stage. Results on a standard robustness task Aurora-4 and on a Mandarin Chinese news corpus showed SSW outperforms conventional noise-robustness techniques while also providing nearly a 50% compression rate during the transmission stage of DSR systems."}
{"_id": "15a2c58b29c5a84a134d1504faff528101321f21", "title": "Additive Logistic Regression : a Statistical", "text": "Boosting (Freund & Schapire 1996, Schapire & Singer 1998) is one of the most important recent developments in classiication methodology. The performance of many classiication algorithms can often be dramatically improved by sequentially applying them to reweighted versions of the input data, and taking a weighted majority vote of the sequence of classiiers thereby produced. We show that this seemingly mysterious phenomenon can be understood in terms of well known statistical principles, namely additive modeling and maximum likelihood. For the two-class problem, boosting can be viewed as an approximation to additive modeling on the logistic scale using maximum Bernoulli likelihood as a criterion. We develop more direct approximations and show that they exhibit nearly identical results to boosting. Direct multi-class generalizations based on multinomial likelihood are derived that exhibit performance comparable to other recently proposed multi-class generalizations of boosting in most situations, and far superior in some. We suggest a minor modiication to boosting that can reduce computation, often by factors of 10 to 50. Finally, we apply these insights to produce an alternative formulation of boosting decision trees. This approach, based on best-rst truncated tree induction , often leads to better performance, and can provide interpretable descriptions of the aggregate decision rule. It is also much faster com-putationally making it more suitable to large scale data mining applications ."}
{"_id": "008c22c8b2997ba882f87a81b5cb3f85222879ea", "title": "CzEngVallex: a Bilingual Czech-English Valency Lexicon", "text": "This paper introduces a new bilingual Czech-English verbal valency lexicon (called CzEngVallex) representing a relatively large empirical database. It includes 20,835 aligned valency frame pairs (i.e., verb senses which are translations of each other) and their aligned arguments. This new lexicon uses data from the Prague Czech-English Dependency Treebank and also takes advantage of the existing valency lexicons for both languages: the PDT-Vallex for Czech and the EngVallex for English. The CzEngVallex is available for browsing as well as for download in the LINDAT/CLARIN repository. The CzEngVallex is meant to be used not only by traditional linguists, lexicographers, translators but also by computational linguists both for the purposes of enriching theoretical linguistic accounts of verbal valency from a cross-linguistic perspective and for an innovative use in various NLP tasks."}
{"_id": "2fbb01b036c3ac4f34c56a717856bd72f90144ff", "title": "Cartoon-recognition using video & audio descriptors", "text": "We present a new approach for classifying mpeg-2 video sequences as `cartoon' or `non-cartoon' by analyzing specific video and audio features of consecutive frames in real-time. This is part of the well-known video-genre-classification problem, where popular TV-broadcast genres like cartoon, commercial, music, news and sports are studied. Such applications have also been discussed in the context of MPEG-7 [12]. In our method the extracted features from the visual descriptors are non-linearly combined using a multilayered perceptron and then considered together with the output of the audio-descriptor to produce a reliable recognition. The results demonstrate a high identification rate based on a large collection of 100 representative video sequences (20 cartoons and 4*20 non-cartoons) gathered from free digital TV-broadcasting."}
{"_id": "5e9ed7531c9b7859b23f737b934c38f02fced281", "title": "Space and space-time modeling using process convolutions", "text": "A continuous spatial model can be constructed by convolving a very simple, perhaps independent, process with a kernel or point spread function. This approach for constructing a spatial process offers a number of advantages over specification through a spatial covariogram. In particular, this process convolution specification leads to compuational simplifications and easily extends beyond simple stationary models. This paper uses process convolution models to build space and space-time models that are flexible and able to accomodate large amounts of data. Data from environmental monitoring is considered."}
{"_id": "4414619c00ef5886ed3845d04a5a37f97946cfb0", "title": "The age of gossip: spatial mean field regime", "text": "Disseminating a piece of information, or updates for a piece of information, has been shown to benefit greatly from simple randomized procedures, sometimes referred to as gossiping, or epidemic algorithms. Similarly, in a network where mobile nodes occasionally receive updated content from a base station, gossiping using opportunistic contacts allows for recent updates to be efficiently maintained, for a large number of nodes. In this case, however, gossiping depends on node mobility. For this reason, we introduce a new gossip model, with mobile nodes moving between different classes that can represent locations or states, which determine gossiping behavior of the nodes. Here we prove that, when the number of mobile nodes becomes large, the age of the latest updates received by mobile nodes approaches a deterministic mean-field regime. More precisely, we show that the occupancy measure of the process constructed, with the ages defined above, converges to a deterministic limit that can be entirely characterized by differential equations. This major simplification allows us to characterize how mobility, source inputs and gossiping influence the age distribution for low and high ages. It also leads to a scalable numerical evaluation of the performance of mobile update systems, which we validate (using a trace of 500 taxicabs) and use to propose infrastructure deployment."}
{"_id": "565e1dc336733cdaab8ad884b6aa95c6c88feb7f", "title": "An Ethernet-Based Fronthaul Implementation with MAC/PHY Split LTE Processing", "text": "A testbed implementation for an Ethernet fronthaul transporting signals arising from a long-term evolution (LTE) functional subdivision (\"split\") at the media-access control (MAC)/physical layer (PHY) interface is presented. Based on open LTE base station software, the testbed demonstrates significant data rate reductions compared to current fronthaul implementations that rely on In- phase and Quadrature radio sample transportation and data rates that scale with cell load. All generated traffic flows are clearly distinguishable using appropriate packet headers. A selection of test cases and their corresponding results are presented to demonstrate the operation of the fronthaul and the performance of individual flows in terms of data rates and overheads."}
{"_id": "7de772d4550579ca8ac6978eb827b1f3afebb719", "title": "Patient factors influencing dermal filler complications: prevention, assessment, and treatment", "text": "While rare, complications do occur with the esthetic use of dermal fillers. Careful attention to patient factors and technique can do much to avoid these complications, and a well-informed practitioner can mitigate problems when they do occur. Since cosmetic surgery is usually an elective process, requested by the patient, clinical trials are complex to organize and run. For this reason, an international group of practicing physicians in the field of esthetics came together to share knowledge and to try and produce some informed guidance for their colleagues, considering the literature and also pooling their own extensive clinical experience. This manuscript aims to summarize the crucial aspects of patient selection, including absolute contraindications as well as situations that warrant caution, and also covers important considerations for the pre- and posttreatment periods as well as during the procedure itself. Guidance is given on both immediate and long-term management of adverse reactions. The majority of complications are related to accepting patients inappropriate for treatment or issues of sterility, placement, volume, and injection technique. It is clear that esthetic practitioners need an in-depth knowledge of all aspects of treatment with dermal fillers to achieve optimal outcomes for their patients."}
{"_id": "43e0b8b1e4083b0f7161a4338f6920584cb1e9cc", "title": "Stabilizing function of the diaphragm: dynamic MRI and synchronized spirometric assessment.", "text": "The aim was to describe diaphragmatic behavior during postural limb activities and examine the ventilatory and stabilizing functions of the diaphragm. Thirty healthy subjects were examined in the supine position using a dynamic MRI system assessed simultaneously with specialized spirometric readings. The diaphragmatic excursions (DEs) were measured at three diaphragmatic points in the sagittal plane; the diaphragm positions (DPs) as related to a reference horizontal baseline were determined. Measurements were taken during tidal breathing (TB) and isometric flexion of upper or lower extremities against external resistance together with TB. Mean DE in both upper and lower postural limb activities was greater compared with the TB condition (P < 0.05), with the effect greater for lower limb activities. Inspiratory DPs in the upper and lower extremity activities were lower compared with TB alone (P < 0.01). Expiratory DP was lower only for lower extremity activities (P < 0.01). DP was most affected at the apex of the crescent and crural (posterior) portion of the diaphragm. DEs correlated strongly with tidal volume (Vt) in all conditions. Changes in DEs relative to the initial value were minimal for upper and lower extremities but were related to lower values of Vt (P < 0.03). Significant involvement of the diaphragm in the limb postural activities was found. Resulting DEs and DPs differed from the TB conditions, especially in lower extremity activities. The differences between the percent changes of DEs vs. Vt found for lower extremity activities were confirmed by both ventilatory and postural diaphragm recruitment in response to postural demands."}
{"_id": "cda7f46731bac9846ec2ba2bd94df196c008bab4", "title": "A Q-band (40\u201345 GHz) 16-element phased-array transmitter in 0.18-\u03bcm SiGe BiCMOS technology", "text": "A 16-element phased-array transmitter based on 4-bit RF phase shifters is designed in 0.18-mum SiGe BiCMOS technology for Q-band applications. The phased-array shows 12.5plusmn1.5 dB of power gain per channel at 42.5 GHz for all phase states and the 3-dB gain bandwidth is 40-45.6 GHz. The input and output return loss is less than -10 dB at 37.5-50 GHz. The transmitter also results in les8.8deg of RMS phase error and les1.2 dB of RMS gain error for all phase states at 30-50 GHz. The maximum saturated output power is -2.5plusmn1.5 dBm per channel at 42.5 GHz. The RMS gain variation and RMS phase mismatch between all 16 channels is les0.5 dB and les4.5deg, respectively. The chip consumes 720 mA from a 5 V supply voltage and overall chip size is 2.6times3.2 mm2. To our knowledge, this is the first implementation of a 16-element phased array on a silicon chip with the RF phase shifting architecture at any frequency."}
{"_id": "35b86ef854c728da5f905ae9fb09dbbcf59a0cdd", "title": "Millimeter-wave CMOS design", "text": "This paper describes the design and modeling of CMOS transistors, integrated passives, and circuit blocks at millimeter-wave (mm-wave) frequencies. The effects of parasitics on the high-frequency performance of 130-nm CMOS transistors are investigated, and a peak f/sub max/ of 135 GHz has been achieved with optimal device layout. The inductive quality factor (Q/sub L/) is proposed as a more representative metric for transmission lines, and for a standard CMOS back-end process, coplanar waveguide (CPW) lines are determined to possess a higher Q/sub L/ than microstrip lines. Techniques for accurate modeling of active and passive components at mm-wave frequencies are presented. The proposed methodology was used to design two wideband mm-wave CMOS amplifiers operating at 40 GHz and 60 GHz. The 40-GHz amplifier achieves a peak |S/sub 21/| = 19 dB, output P/sub 1dB/ = -0.9 dBm, IIP3 = -7.4 dBm, and consumes 24 mA from a 1.5-V supply. The 60-GHz amplifier achieves a peak |S/sub 21/| = 12 dB, output P/sub 1dB/ = +2.0 dBm, NF = 8.8 dB, and consumes 36 mA from a 1.5-V supply. The amplifiers were fabricated in a standard 130-nm 6-metal layer bulk-CMOS process, demonstrating that complex mm-wave circuits are possible in today's mainstream CMOS technologies."}
{"_id": "7d8af04248e6ab82444a14bde211e52a3c28baa9", "title": "Fully integrated CMOS power amplifier design using the distributed active-transformer architecture", "text": "A novel on-chip impedance matching and powercombining method, the distributed active transformer is presented. It combines several low-voltage push\u2013pull amplifiers efficiently with their outputs in series to produce a larger output power while maintaining a 50match. It also uses virtual ac grounds and magnetic couplings extensively to eliminate the need for any offchip component, such as tuned bonding wires or external inductors. Furthermore, it desensitizes the operation of the amplifier to the inductance of bonding wires making the design more reproducible. To demonstrate the feasibility of this concept, a 2.4-GHz 2-W 2-V truly fully integrated power amplifier with 50input and output matching has been fabricated using 0.35m CMOS transistors. It achieves a power added efficiency (PAE) of 41% at this power level. It can also produce 450 mW using a 1-V supply. Harmonic suppression is 64 dBc or better. This new topology makes possible a truly fully integrated watt-level gigahertz range low-voltage CMOS power amplifier for the first time."}
{"_id": "cf7047567f6ee2a3adec2893a5a1987d35e93ce8", "title": "A SiGe BiCMOS 16-element phased-array transmitter for 60GHz communications", "text": "The demonstration of multi-Gb/s links in the 60GHz band has created new opportunities for wireless communications [1,2]. Due to the directional nature of millimeter-wave (mm-Wave) propagation, beam steering enables longer-range non-line-of-sight (NLOS) links at these frequencies. A phased-array architecture is attractive for an integrated 60GHz transmitter (Tx) since it can attain both beam steering and higher EIRP through spatial combining. An all-RF 16-element 40-to-45GHz Tx for satellite applications [3], a 6-element 60GHz Tx with IF-path phase-shift [4], and a bi-directional 4-element 60GHz Tx/Rx with RF phase shifters [5] have been recently demonstrated in silicon. This work presents a fully-integrated phased-array Tx which supports multi-Gb/s NLOS IEEE 802.15.3c links. In addition to beamsteering, the IC has the following major features: an on-chip power sensor at each element, 3 temperature sensors, LO leakage and I/Q phase and amplitude adjustment, front-end OP1dB programmability, and an integrated modulator for pi/2-BPSK/MSK signaling (common mode in 802.15.3c). The IC integrates 2240 NPNs, 323,000 FETs and hundreds of transmission lines and is fabricated in the IBM 8HP 0.12\u00b5m SiGe BiCMOS process (fT = 200GHz)."}
{"_id": "6e9d458667da8a28c0b0412545e3af6b1feef5db", "title": "On (Commercial) Benefits of Automatic Text Summarization Systems in the News Domain: A Case of Media Monitoring and Media Response Analysis", "text": "In this work, we present the results of a systematic study to investigate the (commercial) benefits of automatic text summarization systems in a real world scenario. More specifically, we define a use case in the context of media monitoring and media response analysis and claim that even using a simple query-based extractive approach can dramatically save the processing time of the employees without significantly reducing the quality of their work."}
{"_id": "607ddee575e1377dab144fd4ad57687fc171a200", "title": "Prediction of Hard Drive Failures via Rule Discovery from AutoSupport Data", "text": "The ability to accurately predict an impending hard disk failure is important for reliable storage system design. The facility provided by most hard drive manufacturers, called S.M.A.R.T. (selfmonitoring, analysis and reporting technology), has been shown by current research to have poor predictive value. The problem of finding alternatives to S.M.A.R.T. for predicting disk failure is an area of active research. In this paper, we present a rule discovery methodology, and show that it is possible to construct decision support systems that can detect such failures using information recorded in the AutoSupport database. We demonstrate the effectiveness of our system by evaluating it on disks that were returned to NetApp from the field. Our evaluation shows that our system can be tuned either to have a high failure detection rate (i.e., classify a bad disk as bad) or to have a low false alarm rate (i.e., not classify a good disk as bad). Further, our rule-based classifier generates rules that are intuitive and easy to understand, unlike black box techniques."}
{"_id": "f9b291ca6b44937956086cec37189e79d44c88ea", "title": "Modeling urban dynamics through GIS-based cellular automata", "text": "In urban systems modeling, there are many elaborate dynamic models based on intricate decision processes whose simulation must be based on customized software if their space\u00b1time properties are to be explored e\u0080ectively. In this paper we present a class of urban models whose dynamics are based on theories of development associated with cellular automata (CA), whose data is \u00aene-grained, and whose simulation requires software which can handle an enormous array of spatial and temporal model outputs. We \u00aerst introduce the generic problem of modeling within GIS, noting relevant CA models before outlining a generalized model based on Xie's (1996, A general model for cellular urban dynamics. Geographical Analysis, 28, 350\u00b1373) ``dynamic urban evolutionary modeling'' (DUEM) approach. We present ways in which land uses are structured through their life cycles, and ways in which existing urban activities spawn locations for new activities. We de\u00aene various decision rules that embed distance and direction, density thresholds, and transition or mutation probabilities into the model's dynamics, and we then outline the software designed to generate e\u0080ective urban simulations consistent with GIS data inputs, outputs and related functionality. Finally, we present a range of hypothetical urban simulations that illustrate the diversity of model types that can be handled within the framework as a prelude to more realistic applications which will be reported in later papers. # 1999 Published by Elsevier Science Ltd. All"}
{"_id": "0a06201d7d0f60d775b2e8d3b100026190081db8", "title": "Plant Disease Prediction using Image Processing Techniques-A Review", "text": "Agriculture has become much more than simply a means to feed ever growing populations. It is very important where in more than 70% population depends on agriculture in India. That means it feeds great number of people. The plant diseases effect the humans directly or indirectly by health or also economically. To detect these plant diseases we need a fast automatic way. Diseases are analyzed by different digital image processing techniques. In this paper, we have done survey on different digital image processing techniques to detect the plant diseases."}
{"_id": "bbb606d78d379262c85c4615cb5d9c191cd2e3bf", "title": "Peeking Behind the Curtains of Serverless Platforms", "text": "Serverless computing is an emerging paradigm in which an application\u2019s resource provisioning and scaling are managed by third-party services. Examples include AWS Lambda, Azure Functions, and Google Cloud Functions. Behind these services\u2019 easy-to-use APIs are opaque, complex infrastructure and management ecosystems. Taking on the viewpoint of a serverless customer, we conduct the largest measurement study to date, launching more than 50,000 function instances across these three services, in order to characterize their architectures, performance, and resource management efficiency. We explain how the platforms isolate the functions of different accounts, using either virtual machines or containers, which has important security implications. We characterize performance in terms of scalability, coldstart latency, and resource efficiency, with highlights including that AWS Lambda adopts a bin-packing-like strategy to maximize VM memory utilization, that severe contention between functions can arise in AWS and Azure, and that Google had bugs that allow customers to use resources for free."}
{"_id": "3a11c1b27efaf498005197af54da14fa0960cf6b", "title": "Characterizing concept drift", "text": "Most machine learning models are static, but the world is dynamic, and increasing online deployment of learned models gives increasing urgency to the development of efficient and effective mechanisms to address learning in the context of non-stationary distributions, or as it is commonly called concept drift. However, the key issue of characterizing the different types of drift that can occur has not previously been subjected to rigorous definition and analysis. In particular, while some qualitative drift categorizations have been proposed, few have been formally defined, and the quantitative descriptions required for precise and objective understanding of learner performance have not existed. We present the first comprehensive framework for quantitative analysis of drift. This supports the development of the first comprehensive set of formal definitions of types of concept drift. The formal definitions clarify ambiguities and identify gaps in previous definitions, giving rise to a new comprehensive taxonomy of concept drift types and a solid foundation for research into mechanisms to detect and address concept drift."}
{"_id": "9487769d391e154e8c99b136f6a31ab05ccc7b71", "title": "Conclusions of Ghaheri's Study That Laser Surgery for Posterior Tongue and Lip Ties Improves Breastfeeding Are Not Substantiated.", "text": "Editor\u2019s Note: The diagnosis and treatment of tongue and lip ties remains a topic of much and at times heated debate. The following correspondences, which take opposite sides of the argument as to the value of laser surgery, are symptomatic of this still unresolved and hotly contested subject. Breastfeeding Medicine is of the opinion that it should serve as a platform for the presentation of both sides of the issue so as to provide information that will assist clinicians in their management decision."}
{"_id": "8f1e7b159180dc626410a85656649737ed00e67c", "title": "' s personal copy A comparative study on feature reduction approaches in Hindi and Bengali named entity recognition", "text": "Features used for named entity recognition (NER) are often high dimensional in nature. These cause overfitting when training data is not sufficient. Dimensionality reduction leads to performance enhancement in such situations. There are a number of approaches for dimensionality reduction based on feature selection and feature extraction. In this paper we perform a comprehensive and comparative study on different dimensionality reduction approaches applied to the NER task. To compare the performance of the various approaches we consider two Indian languages namely Hindi and Bengali. NER accuracies achieved in these languages are comparatively poor as yet, primarily due to scarcity of annotated corpus. For both the languages dimensionality reduction is found to improve performance of the classifiers. A Comparative study of the effectiveness of several dimensionality reduction techniques is presented in detail in this paper. 2011 Elsevier B.V. All rights reserved."}
{"_id": "0ca380b8badde55f352eacccea258fac1a0ef94e", "title": "Designing Persian Floral Patterns using Circle Packing", "text": "In this paper, we present a novel approach toward generating floral patterns. We extract the essence of a pattern aside from its appearance and geometry into combinatorial elements. As a result, existing patterns can be reshaped while preserving their essence. Furthermore, we can create new patterns that adhere to high level concepts such as imperfect symmetry and visual balance. By decomposing floral patterns into a configuration of circles and angles, we can reconstruct this patterns on different surfaces given a conformal mapping."}
{"_id": "4405b44b5597787c42abeca6d0b3172175557b7b", "title": "Object discovery in 3D scenes via shape analysis", "text": "We present a method for discovering object models from 3D meshes of indoor environments. Our algorithm first decomposes the scene into a set of candidate mesh segments and then ranks each segment according to its \u201cobjectness\u201d - a quality that distinguishes objects from clutter. To do so, we propose five intrinsic shape measures: compactness, symmetry, smoothness, and local and global convexity. We additionally propose a recurrence measure, codifying the intuition that frequently occurring geometries are more likely to correspond to complete objects. We evaluate our method in both supervised and unsupervised regimes on a dataset of 58 indoor scenes collected using an Open Source implementation of Kinect Fusion [1]. We show that our approach can reliably and efficiently distinguish objects from clutter, with Average Precision score of .92. We make our dataset available to the public."}
{"_id": "ef414f4997ea2b955283546c7268d4d418f8cda0", "title": "MC64: A Web Platform to Test Bioinformatics Algorithms in a Many-Core Architecture", "text": "New analytical methodologies, like the so-called \u201cnext-generation sequencing\u201d (NGS), allow the sequencing of full genomes with high speed and reduced price. Yet, such technologies generate huge amounts of data that demand large raw computational power. Many-core technologies can be exploited to overcome the involved bioinformatics bottleneck. Indeed, such hardware is currently in active development. We have developed parallel bioinformatics algorithms for many-core microprocessors containing 64 cores each. Thus, the MC64 web platform allows executing high-performance alignments (NeedlemanWunsch, Smith-Waterman and ClustalW) of long sequences. The MC64 platform can be accessed via web browsers, allowing easy resource integration into thirdparty tools. Furthermore, the results obtained from the MC64 include timeperformance statistics that can be compared with other platforms."}
{"_id": "1724e00a497a1c8612d9147cce8a3858aa5f3902", "title": "Research on Fast Charge Method for Lead-Acid Electric Vehicle Batteries", "text": "Electric vehicle (EV) is environment friendly and high efficient. But the shortages of traction battery limited the rapid development of EV. Battery as a key part of EV has aroused lots of engineers to explore the management method and fast charge method is a key technology of battery management for electric vehicle. Constant current-constant voltage (CC-CV) and multistage constant current-constant voltage (MCC-CV) are two traditional charging ways. According to the dynamic circuit model of Lead-acid battery and fast charge theory, on the basic of CC-CV and MCC-CV method, explored the fast charge method for Lead-acid battery of electric vehicle. Compare experiment result of the fast charge method and traditional method. The two major parameters like temperature rise and the capacity-time ratio are considered in order to compare the result. Lots of experiment result support that the negative pulse could eliminate the polarization effective. The MCC-CV with negative pulse method proves working efficient and practical. Keywords-Fast charge; Electric Vehicle; Lead-acid battery;"}
{"_id": "d556556e3402ddc734e9a543588ff10d350cdcf3", "title": "Deep neural networks on graph signals for brain imaging analysis", "text": "Brain imaging data such as EEG or MEG are high-dimensional spatiotemporal data often degraded by complex, non-Gaussian noise. For reliable analysis of brain imaging data, it is important to extract discriminative, low-dimensional intrinsic representation of the recorded data. This work proposes a new method to learn the low-dimensional representations from the noise-degraded measurements. In particular, our work proposes a new deep neural network design that integrates graph information such as brain connectivity with fully-connected layers. Our work leverages efficient graph filter design using Chebyshev polynomial and recent work on convolutional nets on graph-structured data. Our approach exploits graph structure as the prior side information, localized graph filter for feature extraction and neural networks for high capacity learning. Experiments on real MEG datasets show that our approach can extract more discriminative representations, leading to improved accuracy in a supervised classification task."}
{"_id": "b116fb34e1f26c838d7dc9a07fa2ee7230670a6f", "title": "Lossy compression on IoT big data by exploiting spatiotemporal correlation", "text": "As the volume of data generated by various deployed IoT devices increases, storing and processing IoT big data becomes a huge challenge. While compression, especially lossy ones, can drastically reduce data volume, finding an optimal balance between the volume reduction and the information loss is not an easy task given that the data collected by diverse sensors exhibit different characteristics. Motivated by this, we present a feasibility analysis of lossy compression on agricultural sensor data by comparing fidelity of reconstructed data from various signal processing algorithms and temporal difference encoding. Specifically, we evaluated five real-world sensor data from weather stations as one of major IoT applications. Our experimental results indicate that Discrete Cosine Transform (DCT) and Fast Walsh-Hadamard Transform (FWHT) generate higher compression ratios than others. In terms of information loss, Lossy Delta Encoding (LDE) significantly outperforms others nonetheless. We also observe that, as compression factor is increased, error rates for all compression algorithms also increase. However, the impact of introduced error is much severe in DCT and FWHT while LDE was able to maintain a relatively lower error rate than other methods."}
{"_id": "bcceefac2808523e146b8473ba7fdec9c405e231", "title": "Knowledge discovery from database Using an integration of clustering and classification", "text": "Clustering and classification are two important techniques of data mining. Classification is a supervised learning problem of assigning an object to one of several pre-defined categories based upon the attributes of the object. While, clustering is an unsupervised learning problem that group objects based upon distance or similarity. Each group is known as a cluster. In this paper we make use of a large database \u2018Fisher\u2019s Iris Dataset\u2019 containing 5 attributes and 150 instances to perform an integration of clustering and classification techniques of data mining. We compared results of simple classification technique (using J48 classifier) with the results of integration of clustering and classification technique, based upon various parameters using WEKA (Waikato Environment for Knowledge Analysis), a Data Mining tool. The results of the experiment show that integration of clustering and classification gives promising results with utmost accuracy rate and robustness even when the data set is containing missing values. KeywordsData Mining; J48; KMEANS; WEKA; Fisher\u2019s Iris dataset;"}
{"_id": "b6b0d6a02f58aca3df76da4e420202ed2821c868", "title": "Nonlinear control for output regulation of ball and plate system", "text": "This paper proposes nonlinear control methods for output regulation of ball and plate system. Positions of the ball are regulated with double feedback loops. Recursive backstepping design is employed for the external feedback loop, while switching control scheme is used in the inner feedback loop. System performance was tuned by backstepping parameters. Simulation results show that the proposed nonlinear control works wells in both stabilization and tracking control. Asymptotical stabilities are also achieved under unknown external disturbance in the experiments."}
{"_id": "4ea19ecd2ea554dc34913b388878d864b8a7b985", "title": "A Benchmark for Automatic Acral Melanoma Preliminary Screening", "text": "Despite recent progress in artificial intelligence and computer vision, the lack of data for acral melanoma, a particular type of skin disease under nails, makes it difficult to develop its automatic visual diagnosis system. This paper introduces a large data set of dermoscopic images for acral melanoma, which is annotated by senior dermatologists. It contains 6,066 images of two categories: subungual hematomas and other acral melanoma symptoms of malignant tendency (i.e., just bleeding under nails or more critical disorder that requires treatment). We hope this benchmark data set will encourage further research on acral melanoma recognition and will continue to maintain this data set to better serve it. We address the classification task using various computer vision algorithms from conventional techniques to cutting edge deep learning. Currently, we achieve 0.928 accuracy."}
{"_id": "3969e582e68e418a2b79c604cd35d5d81de9b35d", "title": "Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology", "text": "Valid measurement scales for predicting user acceptance of computers are in short supply. Most subjective measures used in practice are unvalidated, and their relationship to system usage is unknown. The present research develops and validates new scales for two specific variables, perceived usefulness and perceived ease of use, which are hypothesized to be fundamental determinants of user acceptance. Definitions for these two variables were used to develop scale items that were pretested for content validity and then tested for reliability and construct validity in two studies involving a total of 152 users and four application programs. The measures were refined and streamlined, resulting in two six-item scales with reliabilities of .98 for usefulness and .94 for ease of use. The scales exhibited high convergent, discriminant, and factorial validity. Perceived usefulness was significantly correlated with both selfreported current usage (r=.63, Study 1) and self-predicted future usage (r =.85, Study 2). Perceived ease of use was also significantly correlated with current usage (r= .45, Study 1) and future usage (r =.59, Study 2). In both studies, usefulness had a significantly greater correlation with usage behavior than did ease of use. Regression analyses suggest that perceived ease of use may actually be a causal antecedent to perceived usefulness, as opposed to a parallel, direct determinant of system usage. Implications are drawn for future research on user acceptance."}
{"_id": "d49ca2e50c45e2310d275c97d1fb69fe19cd602e", "title": "BEHIND ADVERTISING : THE LANGUAGE OF PERSUASION Kenechukwu", "text": "Advertising is a force that makes it possible to sell more merchandise due to its persuasive nature. As a tool of marketing communication, advertising is the structured and composed, non-personal communication of information. It is usually paid for and usually persuasive in nature. The language of persuasion is employed for successful advertising campaign thereby, boosting patronage of idea, product or service. The persuasive nature of advertising however, has generated criticisms and controversies. Advertising has been vigorously attacked on the ground that it is unnecessary and wasteful and that through it, people are induced to buy worthless products. It is further argued by the objectors that much advertising is misleading and untruthful. Thus, this paper examines how consumers can be persuaded to patronise goods and services advertised."}
{"_id": "2ac9f8a1bdd94b13ae7b02927413414490a4b7dc", "title": "Mixed-Initiative Real-Time Topic Modeling & Visualization for Crisis Counseling", "text": "Text-based counseling and support systems have seen an increasing proliferation in the past decade. We present Fathom, a natural language interface to help crisis counselors on Crisis Text Line, a new 911-like crisis hotline that takes calls via text messaging rather than voice. Text messaging opens up the opportunity for software to read the messages as well as people, and to provide assistance for human counselors who give clients emotional and practical support. Crisis counseling is a tough job that requires dealing with emotionally stressed people in possibly life-critical situations, under time constraints. Fathom is a system that provides topic modeling of calls and graphical visualization of topic distributions, updated in real time. We develop a mixed-initiative paradigm to train coherent topic and word distributions and use them to power real-time visualizations aimed at reducing counselor cognitive overload. We believe Fathom to be the first real-time computational framework to assist in crisis counseling."}
{"_id": "22a8228715486be5ceec1c15b0dad7aa80b84ce4", "title": "A review of central retinal artery occlusion: clinical presentation and management", "text": "Central retinal artery occlusion (CRAO) is an ophthalmic emergency and the ocular analogue of cerebral stroke. Best evidence reflects that over three-quarters of patients suffer profound acute visual loss with a visual acuity of 20/400 or worse. This results in a reduced functional capacity and quality of life. There is also an increased risk of subsequent cerebral stroke and ischaemic heart disease. There are no current guideline-endorsed therapies, although the use of tissue plasminogen activator (tPA) has been investigated in two randomized controlled trials. This review will describe the pathophysiology, epidemiology, and clinical features of CRAO, and discuss current and future treatments, including the use of tPA in further clinical trials."}
{"_id": "3c5c3e264e238fe1b76cfe528a0ef26b97f7309b", "title": "New edge-directed interpolation", "text": "This paper proposes an edge-directed interpolation algorithm for natural images. The basic idea is to first estimate local covariance coefficients from a low-resolution image and then use these covariance estimates to adapt the interpolation at a higher resolution based on the geometric duality between the low-resolution covariance and the high-resolution covariance. The edge-directed property of covariance-based adaptation attributes to its capability of tuning the interpolation coefficients to match an arbitrarily oriented step edge. A hybrid approach of switching between bilinear interpolation and covariance-based adaptive interpolation is proposed to reduce the overall computational complexity. Two important applications of the new interpolation algorithm are studied: resolution enhancement of grayscale images and reconstruction of color images from CCD samples. Simulation results demonstrate that our new interpolation algorithm substantially improves the subjective quality of the interpolated images over conventional linear interpolation."}
{"_id": "77c512cbb832436e1a35ad434e6bb3d763799763", "title": "A Neural Algorithm of Artistic Style", "text": "Werner Reichardt Centre for Integrative Neuroscience and Institute of Theoretical Physics, University of T\u00fcbingen, Germany Bernstein Center for Computational Neuroscience, T\u00fcbingen, Germany Graduate School for Neural Information Processing, T\u00fcbingen, Germany Max Planck Institute for Biological Cybernetics, T\u00fcbingen, Germany Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA \u2217To whom correspondence should be addressed; E-mail: leon.gatys@bethgelab.org"}
{"_id": "682343b02d255a24576195d9f587c12d7c05b4b4", "title": "PCA-based feature selection scheme for machine defect classification", "text": "The sensitivity of various features that are characteristic of a machine defect may vary considerably under different operating conditions. Hence it is critical to devise a systematic feature selection scheme that provides guidance on choosing the most representative features for defect classification. This paper presents a feature selection scheme based on the principal component analysis (PCA) method. The effectiveness of the scheme was verified experimentally on a bearing test bed, using both supervised and unsupervised defect classification approaches. The objective of the study was to identify the severity level of bearing defects, where no a priori knowledge on the defect conditions was available. The proposed scheme has shown to provide more accurate defect classification with fewer feature inputs than using all features initially considered relevant. The result confirms its utility as an effective tool for machine health assessment."}
{"_id": "769dc1469e0f88328147b08111daf13c299bf500", "title": "Uncovering cabdrivers' behavior patterns from their digital traces", "text": "Recognizing high-level human behavior and decisions from their digital traces are critical issues in pervasive computing systems. In this paper, we develop a novel methodology to reveal cabdrivers\u2019 operation patterns by analyzing their continuous digital traces. For the first time, we systematically study large scale cabdrivers\u2019 behavior in a real and complex city context through their daily digital traces. We identify a set of valuable features, which are simple and effective to classify cabdrivers, delineate cabdrivers\u2019 operation patterns and compare the different cabdrivers\u2019 behavior. The methodology and steps could spatially and temporally quantify, visualize, and examine different cabdrivers\u2019 operation patterns. Drivers were categorized into top drivers and ordinary drivers by their daily income. We use the daily operations of 3000 cabdrivers in over 48 million of trips and 240 million kilometers to uncover: (1) spatial selection behavior, (2) context-aware spatio-temporal operation behavior, (3) route choice behavior, and (4) operation tactics. Though we focused on cabdriver operation patterns analysis from their digital traces, the methodology is a general empirical and analytical methodology for any GPS-like trace analysis. Our work demonstrates the great potential to utilize the massive pervasive data sets to understand human behavior and high-level intelligence. Published by Elsevier Ltd."}
{"_id": "7c14de73292d0398b73638f352f53c8b410049c1", "title": "Predicting SMEs' adoption of enterprise systems", "text": "Purpose \u2013 The purpose of this paper is to develop a model that can be used to predict which small to medium-sized enterprises (SMEs) are more likely to become adopters of enterprise systems (ERP, CRM, SCM and e-procurement). Design/methodology/approach \u2013 Direct interviews were used to collect data from a random sample of SMEs located in the Northwest of England. Using logistic regression, 102 responses were analysed. Findings \u2013 The results reveal that the factors influencing SMEs\u2019 adoption of enterprise systems are different from the factors influencing SMEs\u2019 adoption of other previously studied information systems (IS) innovations. SMEs were found to be more influenced by technological and organisational factors than environmental factors. Moreover, the results indicate that firms with a greater perceived relative advantage, a greater ability to experiment with these systems before adoption, greater top management support, greater organisational readiness and a larger size are predicted to become adopters of enterprise systems. Research limitations/implications \u2013 Although this study focused on the factors that influence SMEs\u2019 adoption of a set of enterprise systems (i.e. ERP, CRM, SCM and e-procurement), it fails to differentiate between factors that influence each of these systems. Practical implications \u2013 The model can be used to assist software vendors not only to develop marketing strategies that can target potential adopters, but also to develop strategies to increase the adoption of ES among SMEs. Originality/value \u2013 The paper contributes to the continuing research in IS innovations adoption/diffusion in the small business context."}
{"_id": "66f4c9d6de9eaf2a9e61e5c39bf5256d01636fd0", "title": "Minimising semantic drift with Mutual Exclusion Bootstrapping", "text": "Iterative bootstrapping techniques are commonly used to extract lexical semantic resources from raw text. Their major weakness is that, without costly human intervention, the extracted terms (often rapidly) drift from the meaning of the original seed terms. In this paper we propose Mutual Exclusion bootstrapping (MEB) in which multiple semantic classes compete for each extracted term. This significantly reduces the problem of semantic drift by providing boundaries for the semantic classes. We demonstrate the superiority of MEB to standard bootstrapping in extracting named entities from the Google Web 1T 5-grams. Finally, we demonstrate that MEB is a multi-way cut problem over semantic classes, terms and contexts."}
{"_id": "e3a4c0c27313c1c28be40e069746e914452729c0", "title": "Time-dependent visual adaptation for fast realistic image display", "text": "Human vision takes time to adapt to large changes in scene intensity, and these transient adjustments have a profound effect on visual appearance. This paper offers a new operator to include these appearance changes in animations or interactive real-time simulations, and to match a user's visual responses to those the user would experience in a real-world scene.\nLarge, abrupt changes in scene intensities can cause dramatic compression of visual responses, followed by a gradual recovery of normal vision. Asymmetric mechanisms govern these time-dependent adjustments, and offer adaptation to increased light that is much more rapid than adjustment to darkness. We derive a new tone reproduction operator that simulates these mechanisms. The operator accepts a stream of scene intensity frames and creates a stream of color display images.\nAll operator components are derived from published quantitative measurements from physiology, psychophysics, color science, and photography. ept intentionally simple to allow fast computation, the operator is meant for use with real-time walk-through renderings, high dynamic range video cameras, and other interactive applications. We demonstrate its performance on both synthetically generated and acquired \u201creal-world\u201d scenes with large dynamic variations of illumination and contrast."}
{"_id": "fa192e467393f83294f1c24f4d20c92fad4455b2", "title": "Narrowband Internet of Things: Evolutions, Technologies, and Open Issues", "text": "We are on the threshold of the explosive growth in the global Internet-of-Things (IoT) market. Comparing with the legacy human-centric applications, machine type communication scenarios exhibit totally different characteristics, such as low throughput, delay insensitivity, occasional transmission, and deep coverage. Meanwhile, it also requires the terminal devices to be cheap enough and sustain long battery life. These demands hasten the prosperity of low power wide area (LPWA) technologies. Narrowband IoT (NB-IoT) is the newest Long Term Evolution (LTE) specification ratified by the third generation partner project as one of the LPWA solutions to achieve the objectives of super coverage, low power, low cost, and massive connection. Working in the licensed frequency band, it is designed to reuse and coexist with the existing LTE cellular networks, which endows it with outstanding advantages in decreasing network installation cost and minimizing product time-to-market. In this backdrop, it has been extensively regarded as one of the most promising technologies toward the IoT landscape. However, as a new LTE standard, there are still a lot of challenges that need to overcome. This paper surveys its evolutions, technologies, and issues, spanning from performance analysis, design optimization, combination with other leading technologies, to implementation and application. The goal is to deliver a holistic understanding for the emerging wireless communication system, in helping to spur further research in accelerating the broad use of NB-IoT."}
{"_id": "7af2c4d2f4927be654292a55f4abb04c63c76ee3", "title": "Texture classification and segmentation using multiresolution simultaneous autoregressive models", "text": ""}
{"_id": "8825df151fb1ffe48e011b31554584d315b694fe", "title": "k -Anonymous Data Mining: A Survey", "text": "Data mining technology has attracted significant interest as a means of identifying patterns and trends from large collections of data. It is however evident that the collection and analysis of data that include personal information may violate the privacy of the individuals to whom information refers. Privacy protection in data mining is then becoming a crucial issue that has captured the attention of many researchers. In this chapter, we first describe the concept of k-anonymity and illustrate different approaches for its enforcement. We then discuss how the privacy requirements characterized by k-anonymity can be violated in data mining and introduce possible approaches to ensure the satisfaction of k-anonymity in data mining."}
{"_id": "cf7303da154a654e2f8a1796704acf1320991957", "title": "An expert evaluation of word-sized visualizations for analyzing eye movement data", "text": "Word-sized visualizations for eye movement data allow analysts to compare a variety of experiment conditions or participants at the sametime. Weimplementedasetofsuchword-sizedvisualizations as part of an analysis framework. We want to \ufb01nd out which of the visualizations is most suitable for different analysis tasks. To this end, we applied the framework to data from an eye tracking study on the reading behavior of users studying metro maps. In anexpertevaluationwith\ufb01veanalysts,weidenti\ufb01eddistinguishing characteristics of the different word-sized visualizations."}
{"_id": "42b88d4930730c5c2328818e95c125b84ee81066", "title": "Comparing Business and Household Sector Innovation in Consumer Products : Findings from a Representative Study in the United Kingdom", "text": "In a first survey of its type, we measure development and modification of consumer products by product users in a representative sample of 1,173 UK consumers aged 18 and over. We estimate this previously unmeasured type of innovation to be quite large: 6.1% of UK consumers \u2013 nearly 2.9 million individuals have engaged in consumer product innovation during the prior three years. In aggregate, consumers\u2019 annual product development expenditures are more than 1.4 times larger than the annual consumer product R&D expenditures of all firms in the UK combined. Consumers engage in many small projects which seem complementary to the innovation efforts of incumbent producers. Consumer innovators very seldom protect their innovations via intellectual property, and 17% diffuse to others. These results imply that official statistics partly miss relevant innovation activity, and that existing companies should reconfigure their product development systems to find and build upon prototypes developed by consumers. 1 MIT Sloan School of Management, Cambridge MA, USA. 2 RSM Erasmus University, Rotterdam, The Netherlands. 3 EIM Business and Policy Research, Zoetermeer, The Netherlands. 4 Centre for Research in Innovation Management, University of Brighton, United Kingdom. * Corresponding author. E-mail: evhippel@mit.edu. Phone: +1 617-253-7155"}
{"_id": "ea39812bb04923faff76ba567649aceaf19a660e", "title": "A study on human gait analysis", "text": "Human gait is one of the most important biometric which has so far been neglected for use in medical diagnostics. In this paper, we make a feasibility study on human gait acquired from a wearable sensor based biometric suit called as Intelligent Gait Oscillation Detector (IGOD). This suit measures simultaneous gait oscillation from eight major joints (two knees, two hips, two elbows and two shoulders) of a human body. Techniques for analyzing and understanding the human gait patterns were developed. Variance in the gait oscillation was studied with respect to gait speed varying from 3km/hr to 5km/hr. Gender variance (male/female) gait oscillation has also been studied. A comprehensive analysis on human gait affected by knee joint movement and hip joint oscillation has been addressed with the arm swing effects. This analysis will provide us with an insight on human bipedal locomotion and its stability. We plan to create a repository of human gait oscillations which could extensively be analyzed for person identification and detecting walking problems in patients, which is detection of disease in the medical field."}
{"_id": "c5cba3d806eb8bc3bdfa46d96bb4aab90e58257b", "title": "Cross-lingual Argumentation Mining: Machine Translation (and a bit of Projection) is All You Need!", "text": "Argumentation mining (AM) requires the identification of complex discourse structures and has lately been applied with success monolingually. In this work, we show that the existing resources are, however, not adequate for assessing cross-lingual AM, due to their heterogeneity or lack of complexity. We therefore create suitable parallel corpora by (human and machine) translating a popular AM dataset consisting of persuasive student essays into German, French, Spanish, and Chinese. We then compare (i) annotation projection and (ii) bilingual word embeddings based direct transfer strategies for cross-lingual AM, finding that the former performs considerably better and almost eliminates the loss from cross-lingual transfer. Moreover, we find that annotation projection works equally well when using either costly human or cheap machine translations. Our code and data are available at http://github.com/UKPLab/ coling2018-xling_argument_mining."}
{"_id": "fc5d261540b3503960725446eb26c48a56488974", "title": "A Brief Review of Digital Games and Learning", "text": "Digital games have caught the attention of scholars across a variety of disciplines. Today, scholars from fields as diverse as media design, literature, computer science, education, and theatre studies have together contributed to the understanding of this emergent medium and its phenomenon as a learning tool The unsatisfying experience of directly adopting game formats into educational contexts has motivated researchers to study successful (commercial) digital games, in particular to examine the play activities of digital games and the principles underlying the design of digital games in order to develop concepts and strategies for rich virtual learning contexts. In this review, provided is a brief account of academic arguments regarding digital games and learning."}
{"_id": "3ff79cf6df1937949cc9bc522041a9a39d314d83", "title": "Adversarial examples: A survey", "text": "Adversarial examples are a phenomenon that have gathered a lot of attention in recent studies. The fact that the addition of very small, but carefully crafted perturbations to the inputs of sophisticated and high performing machine learning models may cause them to make significant errors, is both fascinating and important. A survey of findings connected with adversarial examples is presented and discussed in this paper."}
{"_id": "9c9ef326d19f864c414ad3f371932d4eb6156460", "title": "Sex-work harm reduction", "text": "Sex work is an extremely dangerous profession. The use of harm-reduction principles can help to safeguard sex workers' lives in the same way that drug users have benefited from drug-use harm reduction. Sex workers are exposed to serious harms: drug use, disease, violence, discrimination, debt, criminalisation, and exploitation (child prostitution, trafficking for sex work, and exploitation of migrants). Successful and promising harm-reduction strategies are available: education, empowerment, prevention, care, occupational health and safety, decriminalisation of sex workers, and human-rights-based approaches. Successful interventions include peer education, training in condom-negotiating skills, safety tips for street-based sex workers, male and female condoms, the prevention-care synergy, occupational health and safety guidelines for brothels, self-help organisations, and community-based child protection networks. Straightforward and achievable steps are available to improve the day-to-day lives of sex workers while they continue to work. Conceptualising and debating sex-work harm reduction as a new paradigm can hasten this process."}
{"_id": "b418dfe5bbc09656adf2b4738276d45dd787ab8c", "title": "Challenges for the Self-Safety in Autonomous Vehicles", "text": "The combination of multiple functions having different and complementary capabilities enables the emergence of Autonomous Vehicles. Their deployment is limited by the level of complexity they represent together with the challenges encountered in real environments with strong safety concerns. Thus a major concern prior to massive deployment is on how to ensure the safety of autonomous vehicles despite likely internal (e.g. malfunctions) and external (e.g., aggressive behaviors) disturbances they might undergo. This paper presents the challenges that undergoes the design and development of autonomous vehicles with respect to their functional architecture and adaptive behaviors from a safety perspective. For the purpose of the rationales, we define needs and requirements that lead to the formulation of an architectural framework. Our approach is based on paradigms and technologies from non-automotive domains to address non-functional system properties like safety, reliability and security. The notion of micro-services is also introduced for the self-safety of autonomous vehicles. These are part of the proposed framework that should facilitate the analysis, design, development and validation for the adequate composition and orchestration of services aimed to warrant the required non-functional properties, such as safety. In the present paper, we introduce the structural and behavioral adaptations of the framework to offer a holistic and scalable vision of the safety over the system."}
{"_id": "6c33834dfa5faed8911d9a2fd09a6eaaed33f1a4", "title": "Toward a Semantic Forum for Active Collaborative Learning", "text": "Online discussion forums provide open workspace allowing learners to share information, exchange ideas, address problems and discuss on specific themes. But the substantial impediment to its promotion as effective elearning facility lies in the continuously increasing messages but with discrete and incoherent structure as well as the loosely-tied learners with response-freeness. To motivate and facilitate active collaborative learning, this paper describes the design of a semantic forum with semantic link networking on discussion transcripts. Based on domain ontology and text mining technologies, messages are automatically processed for structural modeling with semantic association and special interest groups are automatically discovered for topic-centric social context measurement, which lays the foundation for the fulfillment of distinctive functionalities in the semantic forum (i.e. semantic search, relational navigation and recommendation). Compared with traditional forums, the semantic forum has three outstanding features. First, it deals with the structural incoherence and content isolation within forums. Second, it enables active learning by providing learners with relational navigation to meet their learning demands. Third, it supports social context based ranking to recommend learning companions or transcripts for collaborative problem-solving."}
{"_id": "18336c93fd1cf624a4e843925a648020e359c0ac", "title": "Unique Signatures of Histograms for Local Surface Description", "text": "This paper deals with local 3D descriptors for surface matching. First, we categorize existing methods into two classes: Signatures and Histograms. Then, by discussion and experiments alike, we point out the key issues of uniqueness and repeatability of the local reference frame. Based on these observations, we formulate a novel comprehensive proposal for surface representation, which encompasses a new unique and repeatable local reference frame as well as a new 3D descriptor. The latter lays at the intersection between Signatures and Histograms, so as to possibly achieve a better balance between descriptiveness and robustness. Experiments on publicly available datasets as well as on range scans obtained with Spacetime Stereo provide a thorough validation of our proposal."}
{"_id": "063a63356408a8278c1977ddfb0cc8898e8a53b6", "title": "Automatic collision avoidance during parking and maneuvering \u2014 An optimal control approach", "text": "In order to reduce the great number of parking incidences and other collisions in low speed scenarios, an obstacle avoidance algorithm is proposed. Since collision avoidance can be achieved by sole braking when driving slowly this algorithm's objective is a comfort orientated braking routine. Therefore, an optimization problem is formulated leading to jerk and time optimal trajectories for braking. In addition to that, based on the prediction of the future vehicle motion, this algorithm compensates for a significant actuator time delay. Using an occupancy grid for representing the static vehicle environment and the current driving state, possible collision points are determined not only for the vehicle front or rear, but also for both vehicle sides, where no sensors are located. The algorithm's performance is demonstrated in a real-world scenario."}
{"_id": "26237468c3e9c620d85f6e49b22999b668831567", "title": "Mercury BLASTN: Faster DNA Sequence Comparison using a Streaming Hardware Architecture", "text": "Motivation: Large-scale DNA sequence comparison, as implemented by BLAST and related algorithms, is one of the pillars of modern genomic analysis. One way to accelerate these computations is with a streaming architecture, in which processors are arranged in a pipeline that replicates the multistage structure of the algorithm. To achieve high performance, the processor hardware implementing the critical seed matching and ungapped extension stages of BLAST should be specialized to execute these stages as quickly as possible. However, accelerating these stages requires solving two key problems: first, the seed matching stage is not of a form which has traditionally been amenable to hardware acceleration; and second, the accelerated implementation of BLAST should retain sensitivity at least comparable to that of the original software. Results: We describe Mercury BLASTN, an FPGA-based implementation of BLAST for DNA. Mercury BLASTN combines a Bloom filtering approach to seed matching with a modified ungapped extension algorithm to overcome barriers to placing the early stages of BLAST onto hardware. On a previous-generation FPGA hardware platform, Mercury BLASTN runs 5 to 11 times faster than NCBI BLASTN current-generation general-purpose CPUs, with the prospect of a further eight-fold speedup on current-generation FPGAs. Moreover, its sensitivity to significant DNA sequence alignments is 99% of that observed with software NCBI BLASTN. Availability: Academic users should contact the authors for information on acquiring a prototype of the Mercury BLASTN system. Contact: jbuhler@cse.wustl.edu"}
{"_id": "291beb869f81c73c92456d65ce2adc1d9c6dfb62", "title": "Comprehensive Survey on T-SDN: Software-Defined Networking for Transport Networks", "text": "Paradoxically, with an ever-increasing traffic demand, today transport-network operators experience a progressive erosion of their margins. The alarms of change are set, and software define networking (SDN) is coming to the rescue with the promise of reducing capital expenditures and operational expenses. Driven by economic needs and network innovation facilities, today transport SDN (T-SDN) is a reality. It gained big momentum in the last years, however, in the networking industry, the transport network will be perhaps the last segment to embrace SDN, mainly due to the heterogeneous nature and complexity of the optical equipment composing it. This survey guides the reader through a fascinating technological adventure that provides an organic analysis of the T-SDN development and evolution considering contributions from: academic research, standardization bodies, industrial development, open source projects, and alliances among them. After creating a comprehensive picture of T-SDN, we provide an analysis of many open issues that are expected to need significant future work, and give our vision in this path toward a fully programmable and dynamic transport network."}
{"_id": "edafa697ba68874d608015b521c43d04e3584992", "title": "Gated Recurrent Capsules for Visual Word Embeddings", "text": "The caption retrieval task can be defined as follows: given a set of images I and a set of describing sentences S, for each image i in I we ought to find the sentence in S that best describes i. The most commonly applied method to solve this problem is to build a multimodal space and to map each image and each sentence to that space, so that they can be compared easily. A non-conventional model called Word2VisualVec has been proposed recently: instead of mapping images and sentences to a multimodal space, they mapped sentences directly to a space of visual features. Advances in the computation of visual features let us infer that such an approach is promising. In this paper, we propose a new Recurrent Neural Network model following that unconventional approach based on Gated Recurrent Capsules (GRCs), designed as an extension of Gated Recurrent Units (GRUs). We show that GRCs outperform GRUs on the caption retrieval task. We also state that GRCs present a great potential for other applications."}
{"_id": "1ccc33648f5712cb277e9b0829b4ee6f842d1ef3", "title": "Improving Convergence Speed and Scalability in OSPF: A Survey", "text": "Open Shortest Path First (OSPF), a link state routing protocol, is a popular interior gateway protocol (IGP) in the Internet. Wide spread deployment and years of experience running the protocol have motivated continuous improvements in its operation as the nature and demands of the routing infrastructures have changed. Modern routing domains need to maintain a very high level of service availability. Hence, OSPF needs to achieve fast convergence to topology changes. Also, the ever-growing size of routing domains, and possible presence of wireless mobile adhoc network (MANET) components, requires highly scalable operation on part of OSPF to avoid routing instability. Recent years have seen significant efforts aimed at improving OSPF's convergence speed as well as scalability and extending OSPF to achieve seamless integration of mobile adhoc networks with conventional wired networks. In this paper, we present a comprehensive survey of these efforts."}
{"_id": "37fd2fc9ae5baebe2f7ddb5456bc68f993d7bd66", "title": "Error-Related EEG Potentials Generated During Simulated Brain\u2013Computer Interaction", "text": "Brain-computer interfaces (BCIs) are prone to errors in the recognition of subject's intent. An elegant approach to improve the accuracy of BCIs consists in a verification procedure directly based on the presence of error-related potentials (ErrP) in the electroencephalogram (EEG) recorded right after the occurrence of an error. Several studies show the presence of ErrP in typical choice reaction tasks. However, in the context of a BCI, the central question is: ldquoAre ErrP also elicited when the error is made by the interface during the recognition of the subject's intent?rdquo We have thus explored whether ErrP also follow a feedback indicating incorrect responses of the simulated BCI interface. Five healthy volunteer subjects participated in a new human-robot interaction experiment, which seem to confirm the previously reported presence of a new kind of ErrP. However, in order to exploit these ErrP, we need to detect them in each single trial using a short window following the feedback associated to the response of the BCI. We have achieved an average recognition rate of correct and erroneous single trials of 83.5% and 79.2%, respectively, using a classifier built with data recorded up to three months earlier."}
{"_id": "714ed82f4f3d38677f823cafb1f037b10d32cc3d", "title": "ERP components on reaction errors and their functional significance: a tutorial", "text": "Some years ago we described a negative (Ne) and a later positive (Pe) deflection in the event-related brain potentials (ERPs) of incorrect choice reactions [Falkenstein, M., Hohnsbein, J., Hoormann, J., Blanke, L., 1990. In: Brunia, C.H.M., Gaillard, A.W.K., Kok, A. (Eds.), Psychophysiological Brain Research. Tilburg Univesity Press, Tilburg, pp. 192-195. Falkenstein, M., Hohnsbein, J., Hoormann, J., 1991. Electroencephalography and Clinical Neurophysiology, 78, 447-455]. Originally we assumed the Ne to represent a correlate of error detection in the sense of a mismatch signal when representations of the actual response and the required response are compared. This hypothesis was supported by the results of a variety of experiments from our own laboratory and that of Coles [Gehring, W. J., Goss, B., Coles, M.G.H., Meyer, D.E., Donchin, E., 1993. Psychological Science 4, 385-390. Bernstein, P.S., Scheffers, M.K., Coles, M.G.H., 1995. Journal of Experimental Psychology: Human Perception and Performance 21, 1312-1322. Scheffers, M.K., Coles, M. G.H., Bernstein, P., Gehring, W.J., Donchin, E., 1996. Psychophysiology 33, 42-54]. However, new data from our laboratory and that of Vidal et al. [Vidal, F., Hasbroucq, T., Bonnet, M., 1999. Biological Psychology, 2000] revealed a small negativity similar to the Ne also after correct responses. Since the above mentioned comparison process is also required after correct responses it is conceivable that the Ne reflects this comparison process itself rather than its outcome. As to the Pe, our results suggest that this is a further error-specific component, which is independent of the Ne, and hence associated with a later aspect of error processing or post-error processing. Our new results with different age groups argue against the hypotheses that the Pe reflects conscious error processing or the post-error adjustment of response strategies. Further research is necessary to specify the functional significance of the Pe."}
{"_id": "f2c4082faeff5d63b0144ef371c8964621ee33bf", "title": "Clinical Applications of Brain-Computer Interfaces: Current State and Future Prospects", "text": "Braincomputer interfaces (BCIs) allow their users to communicate or control external devices using brain signals rather than the brain's normal output pathways of peripheral nerves and muscles. Motivated by the hope of restoring independence to severely disabled individuals and by interest in further extending human control of external systems, researchers from many fields are engaged in this challenging new work. BCI research and development has grown explosively over the past two decades. Efforts have begun recently to provide laboratory-validated BCI systems to severely disabled individuals for real-world applications. In this paper, we discuss the current status and future prospects of BCI technology and its clinical applications. We will define BCI, review the BCI-relevant signals from the human brain, and describe the functional components of BCIs. We will also review current clinical applications of BCI technology and identify potential users and potential applications. Lastly, we will discuss current limitations of BCI technology, impediments to its widespread clinical use, and expectations for the future."}
{"_id": "13a8c7c30ca67cb88b4c5b78d3109d1c5a0f6025", "title": "Anterior cingulate cortex, error detection, and the online monitoring of performance.", "text": "An unresolved question in neuroscience and psychology is how the brain monitors performance to regulate behavior. It has been proposed that the anterior cingulate cortex (ACC), on the medial surface of the frontal lobe, contributes to performance monitoring by detecting errors. In this study, event-related functional magnetic resonance imaging was used to examine ACC function. Results confirm that this region shows activity during erroneous responses. However, activity was also observed in the same region during correct responses under conditions of increased response competition. This suggests that the ACC detects conditions under which errors are likely to occur rather than errors themselves."}
{"_id": "9b1b350dc58def7b7d7b147b779aa0b534b5b335", "title": "QR code location based reverse car-searching route recommendation model", "text": "To deal with the reverse car-searching issue in large buildings and parking lots, a Quick Response Code (QR code) based reverse car-searching route recommendation model is designed. By scanning the deployed QR codes, a Smartphone can pinpoint the host location and parking location efficiently. Based on the submitted location information, the central control system can finally return the recommended routes, which facilitates a host to reach the parking location effectively. In our model, the reverse car-searching route is divided into two parts: choosing the optimal exports (elevator) and computing the shortest walking distance route. Based on the optimal export selection algorithm and regional shortest path algorithm, our model can choose the prior exports (elevator) effectively, and then recommend the optimal walking route in the buildings and parking lots. The simulation shows that this low-cost system can effectively solve the reverse car-searching problem in large buildings and parking lots, save the driver's car-searching time and improve the utilization rate of parking facilities."}
{"_id": "e5ae82aa115f19e84a428e8377fac46aa0b3148e", "title": "Ant supervised by PSO and 2-Opt algorithm, AS-PSO-2Opt, applied to Traveling Salesman Problem", "text": "AS-PSO-2Opt is a new enhancement of the AS-PSO method. In the classical AS-PSO, the Ant heuristic is used to optimize the tour length of a Traveling Salesman Problem, TSP, and PSO is applied to optimize three parameters of ACO, (\u03b1, \u03b2, \u03c1). The AS-PSO-2Opt consider a post processing resuming path redundancy, helping to improve local solutions and to decrease the probability of falling in local minimum. Applied to TSP, the method allowed retrieving a valuable path solution and a set of fitted parameters for ACO. The performance of the AS-PSO-2Opt is tested on nine different TSP test benches. Experimental results based on a statistical analysis showed that the new proposal performs better than key state of art methods using Genetic algorithm, Neural Network and ACO algorithm. The AS-PSO-2Opt performs better than close related methods such as PSO-ACO-3Opt [9] and ACO with ABC [19] for various test benches."}
{"_id": "9d5de7d9c54f3c815350bcd00847ea25e34341cc", "title": "Using association rules for ontology extraction from a Quran corpus", "text": "Ontology can be seen as a formal representation of knowledge. They have been investigated in many arti ficial intelligence studies including semantic web, softwa re engineering, and information retrieval. The aim of ontology is t o develop knowledge representations that can be shared and re used. This research project is concerned with the use of assoc iati n rules to extract the Quran ontology. The manual acquisition of ontologies from Quran verses can be very costly; therefore, we ne d an intelligent system for Quran ontology construction using patternbased schemes and associations rules to discover Qu ran concepts and semantics relations from Quran verses. Our system i s based on the combination of statistics and linguistics methods t o extract concepts and conceptual relations from Quran. In particular, linguistic pattern-based approach is exploited to extract spec ific concepts from the Quran, while the conceptual relations are found based on association rules technique. The Quran ontology wil l offer a new and powerful representation of Quran knowledge, and the association rules will help to represent the relations between all classes of connected concepts in the Quran ontology. Keywords\u2014 Ontology; Concepts Extraction ; Quran; Association Rules; Text Mining."}
{"_id": "1d9bd24e65345258259ee24332141e371c6e4868", "title": "Learning Image Descriptors with Boosting", "text": "We propose a novel and general framework to learn compact but highly discriminative floating-point and binary local feature descriptors. By leveraging the boosting-trick we first show how to efficiently train a compact floating-point descriptor that is very robust to illumination and viewpoint changes. We then present the main contribution of this paper\u2014a binary extension of the framework that demonstrates the real advantage of our approach and allows us to compress the descriptor even further. Each bit of the resulting binary descriptor, which we call BinBoost, is computed with a boosted binary hash function, and we show how to efficiently optimize the hash functions so that they are complementary, which is key to compactness and robustness. As we do not put any constraints on the weak learner configuration underlying each hash function, our general framework allows us to optimize the sampling patterns of recently proposed hand-crafted descriptors and significantly improve their performance. Moreover, our boosting scheme can easily adapt to new applications and generalize to other types of image data, such as faces, while providing state-of-the-art results at a fraction of the matching time and memory footprint."}
{"_id": "27464a00725c941d7ec910e4a7b5240fc2d299d7", "title": "SMART VOTING SYSTEM WITH FACE RECOGNITION", "text": "Manual voting system has been deployed for many yea rs in our country. However in many parts of our cou ntry people cannot attend the voting because of several r asons. In order to solve these problems there is a need of online election voting system in addition to manual voting system. The long-term goal of this project is to g reatly reduce the cost and complexity of running elections and increase th e accuracy of results by removing the direct involv ement of humans in gathering and counting of votes. The voters will be a le to give their votes at any field areas by usi ng the system if they prefer online voting In this system, the user votes using android applic ation which can be downloaded over internet. The us er has to login before giving his vote. Authentication will b e done using SMS confirmation and face recognition. Users can view parties, enter votes, can verify vote via SMS or email and can view result after result is declared."}
{"_id": "7f1a80fd27f0e09fdb9982ec345745f9959892f5", "title": "The Nature, Importance, and Difficulty of Machine Ethics", "text": "The question of whether machine ethics exists or might exist in the future is difficult to answer if we can't agree on what counts as machine ethics. Some might argue that machine ethics obviously exists because humans are machines and humans have ethics. Others could argue that machine ethics obviously doesn't exist because ethics is simply emotional expression and machines can't have emotions. A wide range of positions on machine ethics are possible, and a discussion of the issue could rapidly propel us into deep and unsettled philosophical issues. Perhaps, understandably, few in the scientific arena pursue the issue of machine ethics. As we expand computers' decision-making roles in practical matters, such as computers driving cars, ethical considerations are inevitable. Computer scientists and engineers must examine the possibilities for machine ethics because, knowingly or not, they've already engaged in some form of it. Before we can discuss possible implementations of machine ethics, however, we need to be clear about what we're asserting or denying"}
{"_id": "a12cd3d9ae5530a90302a6e4af477e6e24fa0f95", "title": "Integrating Conflicting Data: The Role of Source Dependence", "text": "Many data management applications, such as setting up Web portals, managing enterprise data, managing community data, and sharing scientific data, require integrating data from multiple sources. Each of these sources provides a set of values and different sourc es can often provide conflicting values. To present quality data to users, it is critical that data integration systems can resolve conflicts and discover true values. Typically, we expect a true value to be provided by more sources than any particular false one, so we can take the value provided by the majority of the sources as the truth. Unfortunately, a false value can be spread through copying and that makes truth discovery extremely tricky. In this paper, we consider how to find true values from conflicting information when there are a large number of sources, among which some may copy from others. We present a novel approach that considers pendencebetween data sources in truth discovery. Intuitively, if two data sources provide a large number of common values and many of these values are rarely provided by other sources ( e.g., particular false values), it is very likely that one copies from the other. We apply Bayesian analysis to decide dependence between sources and design an algorithm that iteratively detects dependence and discovers truth from conflicting information. We also extend our model by considering accuracyof data sources and similarity between values. Our experiments on synthetic data as well as real-world data show that our algorithm can significantly improve accuracy of truth discovery and is scalable when there are a large number of data sources."}
{"_id": "394a3253849bacf24a8e1d98376540b8c9055b6c", "title": "Teenager \u2019 s Addiction to Smart Phones and Its Integrated Therapy Method", "text": "The fundamental cause of teenagers\u2019 addiction to smart phone is due to stress arising from family and school. Smartphone can cause many problems including musculoskeletal symptom which can develop due to staying in one position for a long time. Furthermore, starring at a small screen for an extended time can cause decreased eye sight, along with amblyopia. Smart phone addiction causes depression and anxiety which will lead to severe mental problems. This will lead to problems associated to one\u2019s social life and human network. Fundamental treatment of cognitive and emotional elements needed to become harmonized as a whole in order to increase the effectiveness of the treatment results. The therapy has been verified to cure the behavioral and emotional aspect related to depression, sense of control, and assimilation in school. From these integrated treatment approaches, this report compared the treatment\u2019s upsides and downsides and also, assessed the possibility the integrated treatment process. We can presume that this report provided the opportunity and support for the creation of a fundamental treatment method that can help teenagers live a healthy life by overcoming addiction to smart phones."}
{"_id": "2645a136bb1f0af81a526f04a1c9eb2b28dccb1b", "title": "Communication-efficient algorithms for statistical optimization", "text": "We study two communication-efficient algorithms for distributed statistical optimization on large-scale data. The first algorithm is an averaging method that distributes the N data samples evenly to m machines, performs separate minimization on each subset, and then averages the estimates. We provide a sharp analysis of this average mixture algorithm, showing that under a reasonable set of conditions, the combined parameter achieves mean-squared error that decays as O(N-1 + (N/m)-2). Whenever m \u2264 \u221aN, this guarantee matches the best possible rate achievable by a centralized algorithm with access to all N samples. The second algorithm is a novel method, based on an appropriate form of bootstrap. Requiring only a single round of communication, it has mean-squared error that decays as O(N-1 + (N/m)-3), and so is more robust to the amount of parallelization."}
{"_id": "29b756ebab9a00e65dcc8e0cc77544dbec5d4531", "title": "Reinforcement Learning of Coordination in Cooperative Multi-Agent Systems", "text": "We report on an investigation of reinforcement learning techniques for the learning of coordination in cooperative multi-agent systems. Specifically, we focus on a novel action selection strategy for Q-learning (Watkins 1989). The new technique is applicable to scenarios where mutual observation of actions is not possible.To date, reinforcement learning approaches for such independent agents did not guarantee convergence to the optimal joint action in scenarios with high miscoordination costs. We improve on previous results (Claus & Boutilier 1998) by demonstrating empirically that our extension causes the agents to converge almost always to the optimal joint action even in these difficult cases."}
{"_id": "641549e1867d66515002ccfe218149af6e38ddcc", "title": "Characterization of log periodic planar dipole array antenna", "text": "This paper proposes a analysis approach for characterizing the broadband Log Periodic Planar Dipole Array (LPPDA) antenna useful for C-band applications. A simplest geometry of the edge fed Log Periodic Planar Dipole Array is designed and developed. Parameters like length, width of the elements, and spacing between elements are the varied using defined scale factor (\u03c4) and a spacing factor (\u03c3), keeping others constant and their impact was noted down. A parametric analysis is performed. using CST software. A transmission line equivalent circuit of the proposed LPPDA is constructed. Log periodicity behaviour of the proposed antenna is checked by plotting input impedance with respect to logarithm of frequency. An antenna is fabricated and measured in laboratory."}
{"_id": "35b79fa3cc3dad9defdb8cf5aeab5da57413530b", "title": "Using Suffix Arrays to Compute Term Frequency and Document Frequency for All Substrings in a Corpus", "text": "Bigrams and trigrams are commonly used in statistical natural language processing; this paper will describe techniques for working with much longer n-grams. Suffix arrays (Manber and Myers 1990) were first introduced to compute the frequency and location of a substring (n-gram) in a sequence (corpus) of length N. To compute frequencies over all N(N+1)/2 substrings in a corpus, the substrings are grouped into a manageable number of equivalence classes. In this way, a prohibitive computation over substrings is reduced to a manageable computation over classes. This paper presents both the algorithms and the code that were used to compute term frequency (tf) and document frequency (df) for all n-grams in two large corpora, an English corpus of 50 million words of Wall Street Journal and a Japanese corpus of 216 million characters of Mainichi Shimbun. The second half of the paper uses these frequencies to find interesting substrings. Lexicographers have been interested in n-grams with high mutual information (MI) where the joint term frequency is higher than what would be expected by chance, assuming that the parts of the n-gram combine independently. Residual inverse document frequency (RIDF) compares document frequency to another model of chance where terms with a particular term frequency are distributed randomly throughout the collection. MI tends to pick out phrases with noncompositional semantics (which often violate the independence assumption) whereas RIDF tends to highlight technical terminology, names, and good keywords for information retrieval (which tend to exhibit nonrandom distributions over documents). The combination of both MI and RIDF is better than either by itself in a Japanese word extraction task."}
{"_id": "519ca7401f2346d8e31916b17b0466f5f77b7ad6", "title": "QDFS: A quality-aware distributed file storage service based on HDFS", "text": "On the basis of the Hadoop distributed file system (HDFS), this paper presents the design and implementation of QDFS, a distributed file storage service system that employs a backup policy based on recovery volumes and a quality-aware data distribution strategy. The experimental results show that QDFS increases the storage utilization ratio and improves the storage performance by being aware of the quality of service of the DataNodes. The improvements of QDFS compared with HDFS make the Hadoop distributed computing model more suitable to unstable wide area network environments."}
{"_id": "289daa2781140eb67e4e7edd548213569c5235c7", "title": "Effects of psychological and social factors on organic disease: a critical assessment of research on coronary heart disease.", "text": "An extensive research literature in the behavioral sciences and medicine suggests that psychological and social factors may play a direct role in organic coronary artery disease (CAD) pathology. However, many in the medical and scientific community regard this evidence with skepticism. This chapter critically examines research on the impact of psychological and psychosocial factors on the development and outcome of coronary heart disease, with particular emphasis on studies employing verifiable outcomes of CAD morbidity or mortality. Five key variables identified as possible psychosocial risk factors for CAD are addressed: acute and chronic stress, hostility, depression, social support, and socioeconomic status. Evidence regarding the efficacy of psychosocial interventions is also presented. It is suggested that, taken as a whole, evidence for a psychological and social impact on CAD morbidity and mortality is convincing. However, continued progress in this area requires multidisciplinary research integrating expertise in cardiology and the behavioral sciences, and more effective efforts to communicate research findings to a biomedical audience."}
{"_id": "0f2069063f35553917aca882cddfb6cd6e361c3f", "title": "Learning Theory and Instructional Design", "text": "Designing effective instruction goes beyond systematically executing various steps within an instructional design model. Among a host of considerations, effective instructional design should take into consideration the theoretical bases in which it is grounded. This is not to say that learning theory offers instructional designers answers to design problems but instead, offers clarity, direction and focus throughout the instructional design process. Merrill (2001, p. 294) explains that a \u201ctheoretical tool, in and of itself, is not an instructional design theory but defines instructional components that can be used to define instructional prescriptions more precisely.\u201d Likewise, Merriam and Caffarella (1999, p. 250) make the point that \u201c[learning] theories do not give us solutions, but they do direct our attention to those variables that are crucial in finding solutions.\u201d Thus, understanding theoretical frameworks and properly incorporating them within the scope of instructional design is important for designers to effectively prepare and present instruction as well as for organizational entities to more precisely and efficiently address training-appropriate issues."}
{"_id": "4da2d4a57d41e53eb3ea9e841b7c007820c6d8ff", "title": "Oil content, tocopherol composition and fatty acid patterns of the seeds of 51 Cannabis sativa L. genotypes", "text": "The oil content, the tocopherol composition, the plastochromanol-8 (P-8) content and the fatty acid composition (19 fatty acids) of the seed of 51 hemp (Cannabis sativa L.) genotypes were studied in the 2000 and 2001 seasons. The oil content of the hemp seed ranged from 26.25% (w/w) to 37.50%. Analysis of variance revealed significant effects of genotype, year and of the interaction (genotype \u00d7 year) on the oil content. The oil contents of the 51 genotypes in 2000 and 2001 were correlated (r = 0.37**) and averaged 33.19 \u00b1 1.45% in 2000 and 31.21 \u00b1 0.96% in 2001. The \u03b3-tocopherol, \u03b1-tocopherol, \u03b4-tocopherol, P-8- and \u03b2-tocopherol contents of the 51 genotypes averaged 21.68 \u00b1 3.19, 1.82 \u00b1 0.49, 1.20 \u00b1 0.40, 0.18 \u00b1 0.07 and 0.16 \u00b1 0.04 mg 100g\u22121 of seeds, respectively (2000 and 2001 data pooled). Hierarchical clustering of the fatty acid data did not group the hemp genotypes according to their geographic origin. The \u03b3-linolenic acid yield of hemp (3\u201330 kg ha\u22121) was similar to the \u03b3-linolenic acid yield of plant species that are currently used as sources of \u03b3-linolenic acid (borage (19\u201330 kg ha\u22121), evening primrose (7\u201330 kg ha\u22121)). The linoleic acid yield of hemp (129\u2013326 kg ha\u22121) was similar to flax (102\u2013250 kg ha\u22121), but less than in sunflower (868\u20131320 kg ha\u22121). Significant positive correlations were detected between some fatty acids and some tocopherols. Even though the average content of P-8 in hemp seeds was only 1/120th of the average \u03b3-tocopherol content, P-8 content was more closely correlated with the unsaturated fatty acid content than \u03b3-tocopherol or any other tocopherol fraction. The average broad-sense heritabilities of the oil content, the antioxidants (tocopherols and P-8) and the fatty acids were 0.53, 0.14 and 0.23, respectively. The genotypes Fibrimon 56, P57, Juso 31, GB29, Beniko, P60, FxT, F\u00e9lina 34, Ramo and GB18 were capable of producing the largest amounts of high quality hemp oil."}
{"_id": "b452a829d69fb1e265cf9277ff669bbc2fa8859b", "title": "Learning to Remember Translation History with a Continuous Cache", "text": "Existing neural machine translation (NMT) models generally translate sentences in isolation, missing the opportunity to take advantage of document-level information. In this work, we propose to augment NMT models with a very light-weight cache-like memory network, which stores recent hidden representations as translation history. The probability distribution over generated words is updated online depending on the translation history retrieved from the memory, endowing NMT models with the capability to dynamically adapt over time. Experiments on multiple domains with different topics and styles show the effectiveness of the proposed approach with negligible impact on the computational cost."}
{"_id": "3074acc23b66b9c2a53321a0b2dc4b7f9673890d", "title": "A fast method of fog and haze removal", "text": "Fog and haze degrade the quality of preview and captured image by reducing the contrast and saturation. As a result the visibility of scene or object degrades. The objective of the present work is to enhance the visibility, saturation, contrast and reduce the noise in a foggy image. We propose a method that uses single frame for enhancing foggy images using multilevel transmission map. The method is fast and free from noise or artifacts that generally arise in such enhancement techniques. A comparison with existing methods shows that the proposed method performs better in terms of both processing time and quality. The proposed method works in real time for VGA resolution. The proposed work also presents a scheme to remove fog, rain and snow in real-time."}
{"_id": "9e538e389d7805d297f66be47cfae47796ef9123", "title": "Snapshots in a flash with ioSnap", "text": "Snapshots are a common and heavily relied upon feature in storage systems. The high performance of flash-based storage systems brings new, more stringent, requirements for this classic capability. We present ioSnap, a flash optimized snapshot system. Through careful design exploiting common snapshot usage patterns and flash oriented optimizations, including leveraging native characteristics of Flash Translation Layers, ioSnap delivers low-overhead snapshots with minimal disruption to foreground traffic. Through our evaluation, we show that ioSnap incurs negligible performance overhead during normal operation, and that common-case operations such as snapshot creation and deletion incur little cost. We also demonstrate techniques to mitigate the performance impact on foreground I/O during intensive snapshot operations such as activation. Overall, ioSnap represents a case study of how to integrate snapshots into a modern, well-engineered flash-based storage system."}
{"_id": "1125e6304da5bb15746807aad75d5208267070b2", "title": "Analysis of back-end flash in a 1.5b/stage pipelined ADC", "text": "An analysis of the impact of last stage flash in a conventional pipeline ADC is performed in this paper. The performance of a pipeline ADC can be altered significantly by calibrating the comparators in the back-end flash. Also, realizing that the input to the back-end flash (in a pipeline ADC) is not uniformly distributed, this paper proposes alternative back-end flash references to improve the overall performance of the ADC. An analysis of the performance of the pipeline with large offsets in the MDAC stages is also presented in this paper."}
{"_id": "674221c1fbd95e2505ca0921e16d00bb81c81c0e", "title": "A Preliminary Taxonomy of Crowdsourcing", "text": "Many firms are now asking how they can benefit from the new form of outsourcing labelled \u201ccrowdsourcing\u201d. Like many other forms of outsourcing, crowdsourcing is now being \u201ctalked up\u201d by a somewhat credulous trade press. However, the term crowdsourcing has been used to describe several related, but different phenomena, and what might be successful with one form of crowdsourcing may not be with another. In this paper the notion of crowdsourcing is decomposed to create a taxonomy that expands our understanding of what is meant by the term. This taxonomy focuses on the different capability levels of crowdsourcing suppliers; different motivations; and different allocation of benefits. The management implications of these distinctions are then considered in light of what we know about other forms of outsourcing."}
{"_id": "7862f646d640cbf9f88e5ba94a7d642e2a552ec9", "title": "Being John Malkovich", "text": "Given a photo of person A, we seek a photo of person B with similar pose and expression. Solving this problem enables a form of puppetry, in which one person appears to control the face of another. When deployed on a webcam-equipped computer, our approach enables a user to control another person\u2019s face in real-time. This image-retrievalinspired approach employs a fully-automated pipeline of face analysis techniques, and is extremely general\u2014we can puppet anyone directly from their photo collection or videos in which they appear. We show several examples using images and videos of celebrities from the Internet."}
{"_id": "1e08547a5eb97c4d1f29b14de0703e40f8e4fd67", "title": "Common-Duty-Ratio Control of Input-Parallel Output-Parallel (IPOP) Connected DC\u2013DC Converter Modules With Automatic Sharing of Currents", "text": "Input-parallel output-parallel (IPOP) connected converter systems allow the use of low-power converter modules for high-power applications, with interleaving control scheme resulting in smaller filter size, better dynamic performance, and higher power density. In this paper, a new IPOP converter system is proposed, which consists of multiple dual-active half-bridge (DAHB) dc-dc converter modules. Moreover, by applying a common-duty-ratio control scheme, without a dedicated current-sharing controller, the automatic sharing of input currents or load currents is achieved in the IPOP converter even in the presence of substantial differences of 10% in various module parameters. The current-sharing performance of the proposed control method is analyzed using both a small-signal model and a steady-state dc model of the IPOP system. It is concluded that the equal sharing of currents among modules can be achieved by reducing the mismatches in various module parameters, which is achievable in practice. The current-sharing performance of the IPOP converter is also verified by Saber simulation and a 400-W experimental prototype consisting of two DAHB modules. The common-duty-ratio control method can be extended to any IPOP system that consists of three or more converter modules, including traditional dual-active bridge dc-dc converters, which have a characteristic of current source."}
{"_id": "128690d8a467070d60de3dc42266c6f3079dff2c", "title": "Unobtrusive User-Authentication on Mobile Phones Using Biometric Gait Recognition", "text": "The need for more security on mobile devices is increasing with new functionalities and features made available. To improve the device security we propose gait recognition as a protection mechanism. Unlike previous work on gait recognition, which was based on the use of video sources, floor sensors or dedicated high-grade accelerometers, this paper reports the performance when the data is collected with a commercially available mobile device containing low-grade accelerometers. To be more specific, the used mobile device is the Google G1 phone containing the AK8976A embedded accelerometer sensor. The mobile device was placed at the hip on each volunteer to collect gait data. Preproccesing, cycle detection and recognition-analysis were applied to the acceleration signal. The performance of the system was evaluated having 51 volunteers and resulted in an equal error rate (EER) of 20%."}
{"_id": "27a790a0ca254006fbe37d41e1751516911607ca", "title": "Estimation of Planar Curves, Surfaces, and Nonplanar Space Curves Defined by Implicit Equations with Applications to Edge and Range Image Segmentation", "text": "This paper addresses the problem of parametric representation and estimation of complex planar curves in 2-D, surfaces in 3-D and nonplanar space curves in 3-D. Curves and surfaces can be defined either parametrically or implicitly, and we use the latter representation. A planar curve is the set of zeros of a smooth function of two variables X-Y, a surface is the set of zeros of a smooth function of three variables X-~-Z, and a space curve is the intersection of two surfaces, which are the set of zeros of two linearly independent smooth functions of three variables X-!/-Z. For example, the surface of a complex object in 3D can be represented as a subset of a single implicit surface, with similar results for planar and space curves. We show how this unified representation can be used for object recognition, object position estimation, and segmentation of objects into meaningful subobjects, that is, the detection of \u201cinterest regions\u201d that are more complex than high curvature regions and, hence, more useful as features for object recognition. Fitting implicit curves and surfaces to data would be ideally based on minimizing the mean square distance from the data points to the curve or surface. Since the distance from a point to a curve or surface cannot be computed exactly by direct methods, the approximate distance, which is a first-order approximation of the real distance, is introduced, generalizing and unifying previous results. We fit implicit curves and surfaces to data minimizing the approximate mean square distance, which is a nonlinear least squares problem. We show that in certain cases, this problem reduces to the generalized eigenvector fit, which is the minimization of the sum of squares of the values of the functions that define the curves or surfaces under a quadratic constraint function of the data. This fit is computationally reasonable to compute, is readily parallelizable, and, hence, is easily computed in real time. In general, the generalized eigenvector lb provides a very good initial estimate for the iterative minimization of the approximate mean square distance. Although we are primarily interested in the 2-D and 3-D cases, the methods developed herein are dimension independent. We show that in the case of algebraic curves and surfaces, i.e., those defined by sets of zeros of polynomials, the minimizers of the approximate mean square distance and the generalized eigenvector fit are invariant with respect to similarity transformations. Thus, the generalized eigenvector lit is independent of the choice of coordinate system, which is a very desirable property for object recognition, position estimation, and the stereo matching problem. Finally, as applications of the previous techniques, we illustrate the concept of \u201cinterest regions\u201d Manuscript received April 13, 1988; revised April 30, 1991. This work was supported by an IBM Fellowship and NSF Grant IRI-8715774. The author was with the Laboratory for Engineering Man/Machine Systems, Department of Engineering, Brown University, Providence, RI 02912. He is now with the Exploratory Computer Vision Group, IBM T. J. Watson Research Center, Yorktown Heights, NY 10598. IEEE Log Number 9102093. 
for object recognition and describe a variable-order segmentation algorithm that applies to the three cases of interest."}
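The generalized eigenvector fit mentioned above minimizes the sum of squared polynomial values under a quadratic constraint on the coefficients. A minimal sketch for a planar conic, using the special case where the constraint matrix is the identity, so the solution is simply the smallest eigenvector of the scatter matrix:

```python
import numpy as np

def fit_conic(x, y):
    """Coefficients a of ax^2+bxy+cy^2+dx+ey+f minimizing ||D a||, with ||a|| = 1."""
    D = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    S = D.T @ D                      # scatter matrix of the monomial features
    w, V = np.linalg.eigh(S)         # eigenvalues ascending for a symmetric S
    return V[:, 0]                   # minimizer under the unit-norm constraint

t = np.linspace(0, 2*np.pi, 100)
x = 3*np.cos(t) + 0.01*np.random.randn(100)   # noisy samples of an ellipse
y = 2*np.sin(t)
a = fit_conic(x, y)
print("conic coefficients:", np.round(a / a[0], 3))
```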
{"_id": "2e9accd64f810f9ae12ab35d905e43ecea35b85a", "title": "What Is Coefficient Alpha ? An Examination of Theory and Applications", "text": "Psychological research involving scale construction has been hindered considerably by a widespread lack of understanding of coefficient alpha and reliability theory in general. A discussion of the assumptions and meaning of coefficient alpha is presented. This discussion is followed by a demonstration of the effects of test length and dimensionality on alpha by calculating the statistic for hypothetical tests with varying numbers of items, numbers of orthogonal dimensions, and average item intercorrelations. Recommendations for the proper use of coefficient alpha are offered."}
{"_id": "366a1c224bfca7be5cd9aca449cb7a0c7399dbe5", "title": "Biometric authentication on a mobile device: a study of user effort, error and task disruption", "text": "We examine three biometric authentication modalities -- voice, face and gesture -- as well as password entry, on a mobile device, to explore the relative demands on user time, effort, error and task disruption. Our laboratory study provided observations of user actions, strategies, and reactions to the authentication methods. Face and voice biometrics conditions were faster than password entry. Speaking a PIN was the fastest for biometric sample entry, but short-term memory recall was better in the face verification condition. None of the authentication conditions were considered very usable. In conditions that combined two biometric entry methods, the time to acquire the biometric samples was shorter than if acquired separately but they were very unpopular and had high memory task error rates. These quantitative results demonstrate cognitive and motor differences between biometric authentication modalities, and inform policy decisions in selecting authentication methods."}
{"_id": "6e3e8f3cbd5a0af78ede12894c5ff308e1d3ccb0", "title": "Is Smart Beta Really Smart ?", "text": "BURTON G. MALKIEL is the Chemical Bank Chairman\u2019s Professor Emeritus of Economics at Princeton University in Princeton, NJ. bmalkiel@princeton.edu There is a popular new investment strategy in portfolio management called smart beta. With a catchy title and a promise of improved portfolio performance, the strategy has already attracted hundreds of billions of dollars and is growing by leaps and bounds. Unfortunately, smart beta portfolios do not consistently outperform and when they do produce appealing results, they f lunk the risk test."}
{"_id": "6a821cb17b30c26218e3eb5c20d609dc04a47bcb", "title": "Exploring the Regularity of Sparse Structure in Convolutional Neural Networks", "text": "Sparsity helps reduce the computational complexity of deep neural networks by skipping zeros. Taking advantage of sparsity is listed as a high priority in next generation DNN accelerators such as TPU[1]. The structure of sparsity, i.e., the granularity of pruning, affects the efficiency of hardware accelerator design as well as the prediction accuracy. Coarse-grained pruning creates regular sparsity patterns, making it more amenable for hardware acceleration but more challenging to maintain the same accuracy. In this paper we quantitatively measure the trade-off between sparsity regularity and prediction accuracy, providing insights in how to maintain accuracy while having more a more structured sparsity pattern. Our experimental results show that coarse-grained pruning can achieve a sparsity ratio similar to unstructured pruning without loss of accuracy. Moreover, due to the index saving effect, coarse-grained pruning is able to obtain a better compression ratio than fine-grained sparsity at the same accuracy threshold. Based on the recent sparse convolutional neural network accelerator (SCNN), our experiments further demonstrate that coarse-grained sparsity saves\u223c 2\u00d7 the memory references compared to fine-grained sparsity. Since memory reference is more than two orders of magnitude more expensive than arithmetic operations, the regularity of sparse structure leads to more efficient hardware design."}
{"_id": "d3398bebea89a15b02a6a03ddf5fd081b04b9c61", "title": "Lazy Modeling of Variants of Token Swapping Problem and Multi-agent Path Finding through Combination of Satisfiability Modulo Theories and Conflict-based Search", "text": "We address item relocation problems in graphs in this paper. We assume items placed in vertices of an undirected graph with at most one item per vertex. Items can be moved across edges while various constraints depending on the type of relocation problem must be satisfied. We introduce a general problem formulation that encompasses known types of item relocation problems such as multi-agent path finding (MAPF) and token swapping (TSWAP). In this formulation we express two new types of relocation problems derived from token swapping that we call token rotation (TROT) and token permutation (TPERM). Our solving approach for item relocation combines satisfiability modulo theory (SMT) with conflict-based search (CBS). We interpret CBS in the SMT framework where we start with the basic model and refine the model with a collision resolution constraint whenever a collision between items occurs in the current solution. The key difference between the standard CBS and our SMT-based modification of CBS (SMT-CBS) is that the standard CBS branches the search to resolve the collision while in SMT-CBS we iteratively add a single disjunctive collision resolution constraint. Experimental evaluation on several benchmarks shows that the SMT-CBS algorithm significantly outperforms the standard CBS. We also compared SMT-CBS with a modification of the SAT-based MDD-SAT solver that uses an eager modeling of item relocation in which all potential collisions are eliminated by constrains in advance. Experiments show that lazy approach in SMT-CBS produce fewer constraint than MDD-SAT and also achieves faster solving run-times."}
{"_id": "0acb878637612377ed547627dfdeeb266204dd97", "title": "Data Mining in Healthcare : Current Applications and Issues By", "text": "The successful application of data mining in highly visible fields like e-business, marketing and retail have led to the popularity of its use in knowledge discovery in databases (KDD) in other industries and sectors. Among these sectors that are just discovering data mining are the fields of medicine and public health. This research paper provides a survey of current techniques of KDD, using data mining tools for healthcare and public health. It also discusses critical issues and challenges associated with data mining and healthcare in general. The research found a growing number of data mining applications, including analysis of health care centers for better health policy-making, detection of disease outbreaks and preventable hospital deaths, and detection of fraudulent insurance claims."}
{"_id": "c1e6f0732ff7e5d8018d7b829d2a0ddabb456c78", "title": "Using the Kinect for detecting tremors: Challenges and opportunities", "text": "The Kinect sensor is an attachment for the Xbox gaming console which allows players to interact with games through body movement. This paper explores the possibility of using the Kinect in a clinical setting for detection of Parkinson's, postural, flapping, titubation and voice tremors. All of the physical tremors were simulated and the ability of the Kinect to capture them was evaluated. The Kinect was also used to record voice data from real voice tremors patients. Physical and voice data were gathered from healthy persons for comparison. The results showed that the Kinect could reliably detect voice, postural and Parkinson's tremors. A very consistent graph could be obtained repeatedly for both Parkinson's and postural tremors. For voice tremors there was also a consistent pattern that differentiated a normal voice from one with a tremor. We have therefore proven that the Kinect can consistently record Parkinson's, postural and voice tremors."}
{"_id": "67fc461fac073d0669f5a56ec96f9df35e050429", "title": "Automated Pyramid Scoring of Summaries using Distributional Semantics", "text": "The pyramid method for content evaluation of automated summarizers produces scores that are shown to correlate well with manual scores used in educational assessment of students\u2019 summaries. This motivates the development of a more accurate automated method to compute pyramid scores. Of three methods tested here, the one that performs best relies on latent semantics."}
{"_id": "59a1779c285ebbb7639ff4b26bed5576d6f65a3f", "title": "Large eddy simulations of the HyShot II scramjet", "text": "The present work is part of a broad effort toward predictive simulations of complex flows at high Reynolds numbers. The main objective of the Stanford PSAAP Center (Pre-dictive Science Academic Alliance Program) is to predict the reacting flow in the HyShot II scramjet experiment carried out in the HEG facility of the German Aerospace Agency DLR (cf. Laurence et al. 2011). Specifically, the objective is to predict the best-estimate of the flow, to estimate the uncertainties from a range of sources in this prediction, and, finally , to quantify the margin to engine unstart. Unstart occurs in supersonic internal flows when the flow becomes choked (e.g., due to excessive friction or heat addition), which initiates an unsteady unstart process that eventually leads to subsonic flow throughout. The unstart process involves a propagation of shock waves upstream in the combustor, the speed of which depends strongly on the amount of heat addition. The experiments by Laurence et al. (2011) indicate propagation velocities of 31 m/s at equivalence ratio (ER) of 0.5, 93 m/s at ER=0.7, and above 200 m/s at ER=1.1. For this problem with fixed mass flux of incoming air, the equivalence ratio is proportional to the fuel mass flux. The length of the HyShot combustor is about 300 mm; thus the unstart process takes about 10 ms at ER=0.5. The experiments were carried out in a shock tube with a test time of about 2.5 ms; thus the unstart process is significantly longer than the test time near the unstart boundary. For this reason, it is clear that shock tube experiments can not give a precise unstart bound. An alternative approach is to use a validated large eddy simulation (LES) method to explore and determine the unstart bound. This is the approach taken here, and the motivation of the work described in this brief. The complexity of the flow (turbulence, shocks, mixing, combustion) implies that a careful multi-stage validation plan is needed. Moreover, the available quantitative experimental data for the HyShot case is limited to measurements of pressure (mean and rms) and heat transfer (although these are highly uncertain) along two lines in the combustor. Therefore, validation on the HyShot problem alone is not sufficient. The first step in the validation is to compute a supersonic flat plate boundary layer as a basic check on the LES method and, specifically, the modeling of the friction and heat transfer at \u2026"}
{"_id": "d2f6787a8f631086c28a3617af767d1016375b13", "title": "Context-free Attacks Using Keyboard Acoustic Emanations", "text": "The emanations of electronic and mechanical devices have raised serious privacy concerns. It proves possible for an attacker to recover the keystrokes by acoustic signal emanations. Most existing malicious applications adopt context-based approaches, which assume that the typed texts are potentially correlated. Those approaches often incur a high cost during the context learning stage, and can be limited by randomly typed contents (e.g., passwords). Also, context correlations can increase the risk of successive false recognition. We present a context-free and geometry-based approach to recover keystrokes. Using off-the-shelf smartphones to record acoustic emanations from keystrokes, this design estimates keystrokes' physical positions based on the Time Difference of Arrival (TDoA) method. We conduct extensive experiments and the results show that more than 72.2\\% of keystrokes can be successfully recovered."}
{"_id": "195eebf76b20b62c1857fd81dce84c741aca34cc", "title": "Marriage, honesty, and stability", "text": "Many centralized two-sided markets form a matching between participants by running a stable marriage algorithm. It is a well-known fact that no matching mechanism based on a stable marriage algorithm can guarantee truthfulness as a dominant strategy for participants. However, as we will show in this paper, in a probabilistic setting where the preference lists of one side of the market are composed of only a constant (independent of the the size of the market) number of entries, each drawn from an arbitrary distribution, the number of participants that have more than one stable partner is vanishingly small. This proves (and generalizes) a conjecture of Roth and Peranson [23]. As a corollary of this result, we show that, with high probability, the truthful strategy is the best response for a given player when the other players are truthful. We also analyze equilibria of the deferred acceptance stable marriage game. We show that the game with complete information has an equilibrium in which a (1 - o(1)) fraction of the strategies are truthful in expectation. In the more realistic setting of a game of incomplete information, we will show that the set of truthful strategies form a (1 + o(1))-approximate Bayesian-Nash equilibrium. Our results have implications in many practical settings and were inspired by the work of Roth and Peranson [23] on the National Residency Matching Program."}
{"_id": "6572b014c44d7067afd77708b34a074f3497bdbe", "title": "A personalized recommender system for pervasive social networks", "text": "The current availability of interconnected portable devices, and the advent of the Web 2.0, raise the problem of supporting anywhere and anytime access to a huge amount of content, generated and shared by mobile users. On the one hand, users tend to be always connected for sharing experiences and conducting their social interactions with friends and acquaintances, through so-called Mobile Social Networks, further improving their social inclusion. On the other hand, the pervasiveness of communication infrastructures spreading data (cellular networks, direct device-to-device contacts, interactions with ambient devices as in the Internet-of-Things) makes compulsory the deployment of solutions able to filter off undesired information and to select what content should be addressed to which users, for both (i) better user experience, and (ii) resource saving of both devices and network. In this work, we propose a novel framework for pervasive social networks, called Pervasive PLIERS (p-PLIERS), able to discover and select, in a highly personalized way, contents of interest for single mobile users. p-PLIERS exploits the recently proposed PLIERS tag-based recommender system [2] as context a reasoning tool able to adapt recommendations to heterogeneous interest profiles of different users. p-PLIERS effectively operates also when limited knowledge about the network is maintained. It is implemented in a completely decentralized environment, in which new contents are continuously generated and diffused through the network, and it relies only on the exchange of single nodes\u2019 knowledge during proximity contacts and through device-to-device communications. We evaluated p-PLIERS by simulating its behavior in three different scenarios: a big event (Expo 2015), a conference venue (ACM KDD\u201915), and a working day in the city of Helsinki. For each scenario, we used real or synthetic mobility traces and we extracted real datasets from Twitter interactions to characterize the generation and sharing of user contents."}
{"_id": "c3329e5ab1b8234c3cd857ba0776ea4b382951e6", "title": "Recent Advancements in Retinal Vessel Segmentation", "text": "Retinal vessel segmentation is a key step towards the accurate visualization, diagnosis, early treatment and surgery planning of ocular diseases. For the last two decades, a tremendous amount of research has been dedicated in developing automated methods for segmentation of blood vessels from retinal fundus images. Despite the fact, segmentation of retinal vessels still remains a challenging task due to the presence of abnormalities, varying size and shape of the vessels, non-uniform illumination and anatomical variability between subjects. In this paper, we carry out a systematic review of the most recent advancements in retinal vessel segmentation methods published in last five years. The objectives of this study are as follows: first, we discuss the most crucial preprocessing steps that are involved in accurate segmentation of vessels. Second, we review most recent state-of-the-art retinal vessel segmentation techniques which are classified into different categories based on their main principle. Third, we quantitatively analyse these methods in terms of its sensitivity, specificity, accuracy, area under the curve and discuss newly introduced performance metrics in current literature. Fourth, we discuss the advantages and limitations of the existing segmentation techniques. Finally, we provide an insight into active problems and possible future directions towards building successful computer-aided diagnostic system."}
{"_id": "49dc006129cbf26d0c6ac14af161636613c7c02a", "title": "Deep Learning with Low Precision by Half-Wave Gaussian Quantization", "text": "The problem of quantizing the activations of a deep neural network is considered. An examination of the popular binary quantization approach shows that this consists of approximating a classical non-linearity, the hyperbolic tangent, by two functions: a piecewise constant sign function, which is used in feedforward network computations, and a piecewise linear hard tanh function, used in the backpropagation step during network learning. The problem of approximating the widely used ReLU non-linearity is then considered. An half-wave Gaussian quantizer (HWGQ) is proposed for forward approximation and shown to have efficient implementation, by exploiting the statistics of of network activations and batch normalization operations. To overcome the problem of gradient mismatch, due to the use of different forward and backward approximations, several piece-wise backward approximators are then investigated. The implementation of the resulting quantized network, denoted as HWGQ-Net, is shown to achieve much closer performance to full precision networks, such as AlexNet, ResNet, GoogLeNet and VGG-Net, than previously available low-precision networks, with 1-bit binary weights and 2-bit quantized activations."}
{"_id": "e6b0769d389097a7d7746358cd0e609759ce7e06", "title": "Resource Allocation Optimization for Device-to-Device Communication Underlaying Cellular Networks", "text": "Device-to-Device (D2D) communication will become an important technology in future networks with the increase of the requirements of local communication services. The interference between cellular communication and D2D communication can be coordinated by proper power control and resource allocation. In this paper, we analyze the resource allocation methods for D2D communication underlaying cellular networks. A novel resource allocation method that D2D can reuse the resources of more than one cellular user is proposed. After that, we discuss the selection of the optimal resource allocation method from the proposed method and the conventional methods. Finally, the performance of different methods is evaluated through numerical simulation. The simulation results show that the proposed method is the optimal resource allocation method when the D2D pair locates at the most part of the cell area in both uplink and downlink. The proposed method can improve the sum throughput of cellular communication and D2D communication significantly."}
{"_id": "477bbcb5655a9c64893207bb49032e87c06a05f2", "title": "Eleos: ExitLess OS Services for SGX Enclaves", "text": "Intel Software Guard extensions (SGX) enable secure and trusted execution of user code in an isolated enclave to protect against a powerful adversary. Unfortunately, running I/O-intensive, memory-demanding server applications in enclaves leads to significant performance degradation. Such applications put a substantial load on the in-enclave system call and secure paging mechanisms, which turn out to be the main reason for the application slowdown. In addition to the high direct cost of thousands-of-cycles long SGX management instructions, these mechanisms incur the high indirect cost of enclave exits due to associated TLB flushes and processor state pollution.\n We tackle these performance issues in Eleos by enabling exit-less system calls and exit-less paging in enclaves. Eleos introduces a novel Secure User-managed Virtual Memory (SUVM) abstraction that implements application-level paging inside the enclave. SUVM eliminates the overheads of enclave exits due to paging, and enables new optimizations such as sub-page granularity of accesses.\n We thoroughly evaluate Eleos on a range of microbenchmarks and two real server applications, achieving notable system performance gains. memcached and a face verification server running in-enclave with Eleos, achieves up to 2.2\u00d7 and 2.3\u00d7 higher throughput respectively while working on datasets up to 5\u00d7 larger than the enclave's secure physical memory."}
{"_id": "b0b7ca9555e2391c3b3fb63bdc3d696de785533c", "title": "When sex is more than just sex: attachment orientations, sexual experience, and relationship quality.", "text": "The authors explored the contribution of individual differences in attachment orientations to the experience of sexual intercourse and its association with relationship quality. In Study 1, 500 participants completed self-report scales of attachment orientations and sexual experience. The findings indicated that whereas attachment anxiety was associated with an ambivalent construal of sexual experience, attachment avoidance was associated with more aversive sexual feelings and cognitions. In Study 2, 41 couples reported on their attachment orientations and provided daily diary measures of sexual experiences and relationship interactions for a period of 42 days. Results showed that attachment anxiety amplified the effects of positive and negative sexual experiences on relationship interactions. In contrast, attachment avoidance inhibited the positive relational effect of having sex and the detrimental relational effects of negative sexual interactions. The authors discuss the possibility that attachment orientations are associated with different sex-related strategies and goals within romantic relationships."}
{"_id": "387baec681005e52cbcbcc17a00ca980103751a6", "title": "Hierarchical graph indexing", "text": "Traffic analysis, in the context of Telecommunications or Internet and Web data, is crucial for large network operations. Data in such networks is often provided as large graphs with hundreds of millions of vertices and edges. We propose efficient techniques for managing such graphs at the storage level in order to facilitate its processing at the interface level(visualization). The methods are based on a hierarchical decomposition of the graph edge set that is inherited from a hierarchical decomposition of the vertex set. Real time navigation is provided by an efficient two level indexing schema called the gkd*-tree. The first level is a variation of a kd-tree index that partitions the edge set in a way that conforms to the hierarchical decomposition and the data distribution (the gkd-tree). The second level is a redundant R-tree that indexes the leaf pages of the gkd-tree. We provide computational results that illustrate the superiority of the gkd-tree against conventional indexes like the kd-tree and the R*-tree both in creation as well as query response times."}
{"_id": "a4cae08b553f3a70e38c4940ea0fcbdcf24aa440", "title": "Detection of copy-move image forgery based on discrete cosine transform", "text": "Since powerful editing software is easily accessible, manipulation on images is expedient and easy without leaving any noticeable evidences. Hence, it turns out to be a challenging chore to authenticate the genuineness of images as it is impossible for human\u2019s naked eye to distinguish between the tampered image and actual image. Among the most common methods extensively used to copy and paste regions within the same image in tampering image is the copy-move method. Discrete Cosine Transform (DCT) has the ability to detect tampered regions accurately. Nevertheless, in terms of precision (FP) and recall (FN), the block size of overlapping block influenced the performance. In this paper, the researchers implemented the copy-move image forgery detection using DCT coefficient. Firstly, by using the standard image conversion technique, RGB image is transformed into grayscale image. Consequently, grayscale image is segregated into overlying blocks of m\u00a0\u00d7\u00a0m pixels, m\u00a0=\u00a04.8. 2D DCT coefficients are calculated and reposition into a feature vector using zig-zag scanning in every block. Eventually, lexicographic sort is used to sort the feature vectors. Finally, the duplicated block is located by the Euclidean Distance. In order to gauge the performance of the copy-move detection techniques with various block sizes with respect to accuracy and storage, threshold D_similar\u00a0=\u00a00.1 and distance threshold (N)_d\u00a0=\u00a0100 are used to implement the 10 input images in order. Consequently, 4\u00a0\u00d7\u00a04 overlying block size had high false positive thus decreased the accuracy of forged detection in terms of accuracy. However, 8\u00a0\u00d7\u00a08 overlying block accomplished more accurately for forged detection in terms of precision and recall as compared to 4\u00a0\u00d7\u00a04 overlying block. In a nutshell, the result of the accuracy performance of different overlying block size are influenced by the diverse size of forged area, distance between two forged areas and threshold value used for the research."}
{"_id": "e7efccfabb3f21c9836c9be8ba397e547a01b078", "title": "Analysis and validation of proteomic data generated by tandem mass spectrometry", "text": "The analysis of the large amount of data generated in mass spectrometry\u2013based proteomics experiments represents a significant challenge and is currently a bottleneck in many proteomics projects. In this review we discuss critical issues related to data processing and analysis in proteomics and describe available methods and tools. We place special emphasis on the elaboration of results that are supported by sound statistical arguments."}
{"_id": "d6ad577b7e17e330fea8f3e46e5220191a3ef631", "title": "A three-level, T-type, power electronics building block using Si-SiC hybrid switch for high-speed drives", "text": "This paper presents the design and implementation of a 100 kW three-level T-type (3L-T2), single-phase power electronics building block (PEBB). The PEBB features switching power devices with associated gate drivers, DC-link capacitors and interconnection busbar. A hybrid switch (HyS), which includes a Si IGBT and a SiC MOSFET in parallel, forms the switching power device in the PEBB. An low-inductive multi-layer laminated busbar was designed to have symmetrical current commutation loops. A major contribution is that this is achieved while no three-level T-type power module was used. A loop inductance of 29 nH was achieved which is lower than the existing state-of-the-art three-level designs in literature. The fabricated prototype PEBB achieves a specific power density of 27.7 kW/kg and a volumetric power density of 308.61 W/in3. Single-phase operation of the PEBB has been demonstrated at the switching frequency of 28 kHz."}
{"_id": "afce8bcb75284c08237a37bbf9e72d05e2098911", "title": "Trends in camera based Automotive Driver Assistance Systems (ADAS)", "text": "Advance Driver Assistance Systems (ADAS), once limited to high end luxury automobiles are fast becoming popular with Mid and entry level segments driven in part by legislation coming in to effect in the latter part of this decade. These systems require support for a wide variety of applications, from surround-view visual systems to safety critical vision applications (eg Pedestrian Detect, automatic braking etc). In this white paper we describe some of the existing and emerging trends and applications in each of these segments along with the requirements and motivations for each of these features. We also highlight TI's automotive class TDA2x device, a state of the art automotive grade device capable of handling complex ADAS applications within a low power and cost budget."}
{"_id": "6ade991a79455ee078b453f1e87229db7fb52fe8", "title": "Modeling and control active suspension system for a full car model", "text": "The purpose of this paper is to investigate the performance of a full car model active suspension system using LQR controller. Dynamic model used in this study is a linear model. A linear model can capture basic performances of vehicle suspension such as body displacement, body acceleration, wheel displacement, wheel deflection, suspension travels, pitch and yawn. Performance of suspension system is determined by the ride comfort and vehicle handling. It can be measured by car body displacement and wheel displacement performance. Two types of road profiles are used as input for the system. Simulation is based on the mathematical model by using MATLAB/SIMULINK software. Results show that the performance of body displacement and wheel displacement can be improved by using Linear Quadratic Regulator control (LQR)."}
{"_id": "02577799e42ca27be18e0d37476b658f552b5be3", "title": "Asymmetric division of labor in human skilled bimanual action: the kinematic chain as a model.", "text": "This article presents a tentative theoretical framework for the study of asymmetry in the context of human bimanual action. It is emphasized that in man most skilled manual activities involve two hands playing different roles, a fact that has been often overlooked in the experimental study of human manual lateralization. As an alternative to the current concepts of manual preference and manual superiority-whose relevance is limited to the particular case of unimanual actions-the more general concept of lateral preference is proposed to denote preference for one of the two possible ways of assigning two roles to two hands. A simple model describing man's favored intermanual division of labor in the model are the following. 1) The two hands represent two motors, that is, decomplexity is ignored in the suggested approach. 2) In man, the two manual motors cooperate with one another as if they were assembled in series, thereby forming a kinematic chain: In a right-hander allowed to follow his or her lateral preferences, motion produced by the right hand tends to articulate with motion produced by the left. It is suggested that the kinematic chain model may help in understanding the adaptive advantage of human manual specialization."}
{"_id": "30950db8a2cae3630057efe731b85f7b567848b8", "title": "CONDENSATION\u2014Conditional Density Propagation for Visual Tracking", "text": "The problem of tracking curves in dense visual clutter is challenging. Kalman filtering is inadequate because it is based on Gaussian densities which, being unimo dal, cannot represent simultaneous alternative hypotheses. The Condensation algorithm uses \u201cfactored sampling\u201d, previously applied to the interpretation of static images, in which the probability distribution of possible interpretations is represented by a randomly generated set. Condensation uses learned dynamical models, together with visual observations, to propagate the random set over time. The result is highly robust tracking of agile motion. Notwithstanding the use of stochastic methods, the algorithm runs in near real-time."}
{"_id": "4f7a0ab878930183b2d17b0512be4f72cb10b4f6", "title": "Surround-screen projection-based virtual reality: the design and implementation of the CAVE", "text": "Several common systems satisfy some but not all of the VR definition above. Flight simulators provide vehicle tracking, not head tracking, and do not generally operate in binocular stereo. Omnimax theaters give a large angle of view [8], occasionally in stereo, but are not interactive. Head-tracked monitors [4][6] provide all but a large angle of view. Head-mounted displays (HMD) [7][13] and BOOMs [9] use motion of the actual display screens to achieve VR by our definition. Correct projection of the imagery on large screens can also create a VR experience, this being the subject of this paper. This paper describes the CAVE (CAVE Automatic Virtual Environment) virtual reality/scientific visualization system in detail and demonstrates that projection technology applied to virtual-reality goals achieves a system that matches the quality of workstation screens in terms of resolution, color, and flicker-free stereo. In addition, this format helps reduce the effect of common tracking and system latency errors. The off-axis perspective projection techniques we use are shown to be simple and straightforward. Our techniques for doing multi-screen stereo vision are enumerated, and design barriers, past and current, are described. Advantages and disadvantages of the projection paradigm are discussed, with an analysis of the effect of tracking noise and delay on the user. Successive refinement, a necessary tool for scientific visualization, is developed in the virtual reality context. The use of the CAVE as a one-to-many presentation device at SIGGRAPH '92 and Supercomputing '92 for computational science data is also mentioned. Previous work in the VR area dates back to Sutherland [12], who in 1965 wrote about the \u201cUltimate Display.\u201d Later in the decade at the University of Utah, Jim Clark developed a system that allowed wireframe graphics VR to be seen through a headmounted, BOOM-type display for his dissertation. The common VR devices today are the HMD and the BOOM. Lipscomb [4] showed a monitor-based system in the IBM booth at SIGGRAPH '91 and Deering [6] demonstrated the Virtual Portal, a closets ized three-wal l project ion-based system, in the Sun Microsystems' booth at SIGGRAPH '92. The CAVE, our projection-based VR display [3], also premiered at SIGGRAPH '92. The Virtual Portal and CAVE have similar intent, but different implementation schemes."}
{"_id": "719da2a0ddd38e78151e1cb2db31703ea8b2e490", "title": "The Representation and Matching of Pictorial Structures", "text": "The primary problem dealt with in this paper is the following. Given some description of a visual object, find that object in an actual photograph. Part of the solution to this problem is the specification of a descriptive scheme, and a metric on which to base the decision of \"goodness\" of matching or detection."}
{"_id": "b155f6cd9b5ae545fb0fba1f80529699f074e8ab", "title": "Research directions in virtual environments: report of an NSF Invitational Workshop, March 23-24, 1992, University of North Carolina at Chapel Hill", "text": "At the request of NSF's Interactive Systems Program, a two-day invitational workshop was held March 23-24, 1992 at UNC Chapel Hill to identify and recommend future research directions in the area of \"virtual environments\" (VE) *. Workshop participants included some 18 experts (plus 4 NSF officials) from universities, industry, and other leading technical organizations. The two-day schedule alternated between sessions of the entire group and sessions in the following specialty areas, around which the recommendations came to be organized: 1) Perception , 2) Human-Machine Software Interface , 3) Software, 4) Hardware, and 5) Applications. Also, two participants developed a taxonomy of VE applications that is included as an appendix to the report. 1. Collaborative science-technology development programs should be established at several sites around the country to encourage closer collaboration between developers and scientists. 2. Theoretical research should focus on development of metrics of performance and task demands in VE. 3. Paradigmatic applications and theoretical questions that illustrate the science-technology synergy need identification. Audition Spatial Sound 1.Theoretical research should emphasize the role of individual differences in Head-Related Transfer Functions (HRTF's), critical cues for distance and externalization, spectral cues for enhancing elevation and disambiguating the cone-of-confusion, head-motion, and intersensory interaction and adaptation in the accurate perception of virtual acoustic sources. The notion of artificially enhanced localization cues is also a promising area. 2. A fruitful area for joint basic and applied research is the development of perceptually-viable methods of simplifying the synthesis technique to maximize the efficiency of algorithms for complex room modeling. 3. Future effort should still be devoted to developing more realistic models of acoustic environments with implementation on more powerful hardware platforms. Nonspeech Audio 1. Theoretical research should focus on lower-level sensory and higher-level cognitive determinants of acoustic perceptual organization, with particular emphasis on how acoustic parameters interact to determine the identification, segregation, and localization of multiple, simultaneous sources. 2. Technology development should focus on hardware and software systems specifically aimed at real-time generation and control for acoustic information display. Haptics 1. Development should be encouraged of a variety of computer-controlled mechanical devices for either basic scientific investigation of the human haptic system or to serve as haptic interfaces for virtual environments and teleloperation. 2. Research programs should be initiated to encourage collaboration among engineers who are capable of building high precision robotic devices and scientists who can conduct biomechanical and perceptual experiments with the devices. 3. Research \u2026"}
{"_id": "ee6712cba52840691eaabdbe55052d231c456439", "title": "Bilingualism, Biliteracy, and Learning to Read: Interactions Among Languages and Writing Systems", "text": "Four groups of children in first grade were compared on early literacy tasks. Children in three of the groups were bilingual, each group representing a different combination of language and writing system, and children in the fourth group were monolingual speakers of English. All the bilingual children used both languages daily and were learning to read in both languages. The children solved decoding and phonological awareness tasks, and the bilinguals completed all tasks in both languages. Initial differences between the groups in factors that contribute to early literacy were controlled in an analysis of covariance, and the results showed a general increment in reading ability for all the bilingual children but a larger advantage for children learning two alphabetic systems. Similarly, bilinguals transferred literacy skills across languages only when both languages were written in the same system. Therefore, the extent of the bilingual facilitation for early reading depends on the relation between the two languages and writing systems."}
{"_id": "7ed49f95d9a882d7c98579498035b2d61b3285fa", "title": "Position Sensorless Control for Four-Switch Three-Phase Brushless DC Motor Drives", "text": "This paper proposes a position sensorless control scheme for four-switch three-phase (FSTP) brushless DC (BLDC) motor drives using a field programmable gate array (FPGA). A novel sensorless control with six commutation modes and novel pulsewidth modulation scheme is developed to drive FSTP BLDC motors. The low cost BLDC driver is achieved by the reduction of switch device count, cost down of control, and saving of hall sensors. The feasibility of the proposed sensorless control for FSTP BLDC motor drives is demonstrated by analysis and experimental results."}
{"_id": "9b42b749d2699dbc6d8cb2d1ec7304ae9cc7e348", "title": "Social Media Use and Mental Health among Young Adults", "text": "In recent years many parents, advocates and policy makers have expressed concerns regarding the potential negative impact of social media use. Some studies have indicated that social media use may be tied to negative mental health outcomes, including suicidality, loneliness and decreased empathy. Other studies have not found evidence for harm, or have indicated that social media use may be beneficial for some individuals. The current correlational study examined 467 young adults for their time spent using social media, importance of social media in their lives and tendency to engage in vaguebooking (posting unclear but alarming sounding posts to get attention). Outcomes considered included general mental health symptoms, suicidal ideation, loneliness, social anxiety and decreased empathy. Results indicated that social media use was not predictive of impaired mental health functioning. However, vaguebooking was predictive of suicidal ideation, suggesting this particular behavior could be a warning sign for serious issues. Overall, results from this study suggest that, with the exception of vaguebooking, concerns regarding social media use may be misplaced."}
{"_id": "6fb69989efda11830a491ef0f28df208c92f0518", "title": "Discriminative and consistent similarities in instance-level Multiple Instance Learning", "text": "In this paper we present a bottom-up method to instance-level Multiple Instance Learning (MIL) that learns to discover positive instances with globally constrained reasoning about local pairwise similarities. We discover positive instances by optimizing for a ranking such that positive (top rank) instances are highly and consistently similar to each other and dissimilar to negative instances. Our approach takes advantage of a discriminative notion of pairwise similarity coupled with a structural cue in the form of a consistency metric that measures the quality of each similarity. We learn a similarity function for every pair of instances in positive bags by how similarly they differ from instances in negative bags, the only certain labels in MIL. Our experiments demonstrate that our method consistently outperforms state-of-the-art MIL methods both at bag-level and instance-level predictions in standard benchmarks, image category recognition, and text categorization datasets."}
{"_id": "74bf33f0f1147752a93febb8cc1fc6e255f6cd27", "title": "Polyhedral Extraction Tool", "text": "We present a new library for extracting a polyhedral model from C source. The library is based on clang, the LLVM C frontend, and isl, a library for manipulating quasi-affine sets and relations. The use of clang for parsing the C code brings advanced diagnostics and full support for C99. The use of isl allows for an easy construction and a powerful and compact representation of the polyhedral model. Besides allowing arbitrary piecewise quasi-affine index expressions and conditions, the library also supports some data dependent constructs and has special treatment for unsigned integers. The library has been successfully used to obtain polyhedral models for use in an equivalence checker, a tool for constructing polyhedral process networks, a parallelizer targeting GPUs and an interactive polyhedral environment."}
{"_id": "18cb4318ad9c7ac3a3f0c87af6ab817f7695ba59", "title": "Printed Rectangular Monopole Antenna with Slots for UWB Applications", "text": "This paper presents various slotted rectangular Monopole Antenna. The proposed antenna is designed in order to meet ultra wideband (UWB) requirements and applications. This antenna is small in size enabling it to be fixed into wireless Personal Area Network (PAN) devices. There are 3 types of proposed slots; T-slot, \u03a0-slot and I-slot for patch. The T & I slots produce more vertical current, thus improve the matching impedance performance at higher frequencies. The \u03a0-slot on rectangular patch has shown dual frequency performance. For all proposed antennas, their measured return loss and radiation pattern meet the UWB requirements."}
{"_id": "b1e4d476c857333b9a0afdb1428eda27f6d26940", "title": "Cultivating Competence , Self-Efficacy , and Intrinsic Interest Through Proximal Self-Motivation", "text": "The present experiment tested the hypothesis that self-motivation through proximal goal setting serves as an effective mechanism for cultivating competencies, self-percepts of efficacy, and intrinsic interest. Children who exhibited gross deficits and disinterest in mathematical tasks pursued a program of self-directed learning under conditions involving either proximal subgoals, distal goals, or no goals. Results of the multifaceted assessment provide support for the superiority of proximal self-influence. Under proximal subgoals, children progressed rapidly in self-directed learning, achieved substantial mastery of mathematical operations, and developed a sense of personal efficacy and intrinsic interest in arithmetic activities that initially held little attraction for them. Distal goals had no demonstrable effects. In addition to its other benefits, goal proximity fostered veridical self-knowledge of capabilities as reflected in high congruence between judgments of mathematical self-efficacy and subsequent mathematical performance. Perceived self-efficacy was positively related to accuracy of mathematical performance and to intrinsic interest in arithmetic activities. Article: Much human behavior is directed and sustained over long periods, even though the external inducements for it may be few and far between. Under conditions in which external imperatives are minimal and discontinuous, people must partly serve as agents of their own motivation and action. In social learning theory (Bandura, 1977b, in press), self-directedness operates through a self system that comprises cognitive structures and subfunctions for perceiving, evaluating, motivating, and regulating behavior. An important, cognitively based source of self-motivation relies on the intervening processes of goal setting and self-evaluative re-actions to one's own behavior. This form of self-motivation, which operates largely through internal comparison processes, re-quires personal standards against which to evaluate ongoing performance. By making self-satisfaction conditional on a certain level of performance, individuals create self-inducements to persist in their efforts until their performances match internal standards. Both the anticipated satisfactions for matching attainments and the dissatisfactions with insufficient ones provide incentives for self-directed actions. Personal goals or standards do not automatically activate the evaluative processes that affect the level and course of one's behavior. Certain properties of goals, such as their specificity and level, help to provide clear standards of adequacy (Latham & Yukl, 1975; Locke, 1968; Steers & Porter, 1974). Hence, explicit goals are more likely than vague intentions to engage self-reactive influences in any given activity. Goal proximity, a third property, is especially critical because the more closely referential standards are related to ongoing behavior, the greater the likelihood that self-influences will be activated during the process. Some suggestive evidence exists that the impact of goals on behavior is indeed determined by how far into the future they are projected (Bandura & Simon, 1977; Jeffery, 1977). In the social learning view, adopting proximal subgoals for one's own behavior can have at least three major psychological effects. 
As already alluded to, such goals have motivational effects. One of the propositions tested in the present experiment is that self-motivation can be best created and sustained by attainable subgoals that lead to larger future ones. Proximal subgoals provide immediate incentives and guides for performance, whereas distal goals are too far removed in time to effectively mobilize effort or to direct what one does in the here and now. Focus on the distant future makes it easy to temporize and to slacken efforts in the present. Proximal subgoals can also serve as an important vehicle in the development of self-percepts of efficacy. Competence in dealing with one's environment is not a fixed act or simply knowing what to do. Rather, it involves a generative capability in which component skills must be selected and organized into integrated courses of action to manage changing task demands. Operative competence thus requires flexible orchestration of multiple subskills. Self-efficacy is concerned with judgments about how well one can organize and execute courses of action required to deal with prospective situations containing many ambiguous, unpredictable, and often stressful elements. Self-percepts of efficacy can affect people's choice of activities, how much effort they expend, and how long they will persist in the face of difficulties (Bandura, 1977a; Brown & Inouye, 1978; Schunk, 1981). Without standards against which to measure their performances, people have little basis for judging how they are doing or for gauging their capabilities. Subgoal attainments provide indicants of mastery for enhancing self-efficacy. By contrast, distal goals are too far removed in time to provide sufficiently clear markers of progress along the way to ensure a growing sense of self-efficacy. The processes underlying the development of intrinsic interest may similarly be governed, at least in part, by goal proximity. Most of the activities that people enjoy doing for their own sake originally had little or no interest for them. Young children are not innately interested in singing operatic arias, playing tubas, deriving mathematical equations, writing sonnets, or propelling heavy shotput balls through the air. However, through favorable continued involvement, almost any activity can become imbued with consuming significance. One can posit at least two ways in which proximal goals might contribute to enhancement of interest in activities. When people aim for, and master, desired levels of performance, they experience a sense of satisfaction (Locke, Cartledge, & Knerr, 1970). The satisfactions derived from subgoal attainments can build intrinsic interest. When performances are gauged against lofty, distal goals, the large negative disparities between standards and attainments are likely to attenuate the level of self-satisfaction experienced along the way. Conceptual analyses of intrinsic interest within the framework of both self-efficacy theory (Bandura, 1981) and intrinsic motivation theory (Deci, 1975; Lepper & Greene, 1979) assign perceived competence a mediating role. A sense of personal efficacy in mastering challenges is apt to generate greater interest in the activity than is self-perceived inefficacy in producing competent performances. To the extent that proximal subgoals promote and authenticate a sense of causal agency, they can heighten interest through their effects on perception of personal causation. 
Investigations of intrinsic interest have been concerned almost exclusively with the effects of extrinsic rewards on interest when it is already highly developed. Although results are somewhat variable, the usual finding is that rewards given regardless of quality of performance tend to reduce interest, whereas rewards for performances signifying competence sustain high interest (Boggiano & Ruble, 1979; Enzle & Ross, 1978; Lepper & Greene, 1979; Ross, 1976). The controversy over the effects of extrinsic rewards on preexisting high interest has led to a neglect of the issue of how intrinsic interest is developed when it is lacking. One of the present study's aims was to test the notion that proximal subgoals enlist the type of sustained involvement in activities that build self-efficacy and interest when they are absent. Children who displayed gross deficits in mathematical skills and strong disinterest in such activities engaged in self-directed learning over a series of sessions. They pursued the self-paced learning under conditions involving either proximal subgoals, distal goals, or bids to work actively without any reference to goals. It was predicted that self-motivation through proximal subgoals would prove most effective in cultivating mathematical competencies, self-percepts of efficacy, and intrinsic interest in mathematical activities. For reasons given earlier, distal goals were not expected to exceed bids to work actively in promoting changes. It was hypothesized further that strength of self-efficacy would predict subsequent accuracy on mathematical tasks and level of intrinsic interest."}
{"_id": "5f4515d0928fbc47aef1724e2627b264b26a9d17", "title": "Efficient segmentation and surface classification of range images", "text": "Derivation of geometric structures from point clouds is an important step towards scene understanding for mobile robots. In this paper, we present a novel method for segmentation and surface classification of ordered point clouds. Data from RGB-D cameras are used as input. Normal based region growing segments the cloud and point feature descriptors classify each segment. Not only planar segments can be described but also curved surfaces. In an evaluation on indoor scenes we show the performance of our approach as well as give a comparison to state of the art methods."}
{"_id": "6f6658e46970fff61c73f25f8e93654db7ce4664", "title": "Design of a Dual-Band Microstrip Antenna With Enhanced Gain for Energy Harvesting Applications", "text": "In this letter, a dual-band circular patch antenna is introduced. The antenna consists of a circular patch with a direct feed through microstrip feedline designed to radiate at 2.45 GHz with fractional bandwidth of 4.5%. A circular slot is inserted into the ground plane that radiates by capacitive coupling between the patch and the ground plane. This slot radiates at 1.95 GHz with fractional impedance bandwidth of 5%. The antenna achieves good radiation characteristics by inserting a reflecting plane at optimum position behind it. The antenna has gain of 8.3 and 7.8 dBi at 1.95 and 2.45 GHz, respectively. This antenna is proposed for the rectenna; then it is designed to direct the main beam in a certain direction by increasing front-to-back (F/B) ratio with low cross-polarization levels by using defected reflector structure in the reflecting plane. The equivalent circuit of the proposed antenna is introduced to model the electrical behavior of the antenna. The proposed antenna can be used to harvest the energy from Wi-Fi and widely spread mobile networks. The proposed antenna was designed using CST Microwave Studio. The simulated and measured results show good agreement."}
{"_id": "f493995e40a48cb19af0bc2bb00da99a6e1a3f9d", "title": "Trend analysis model: trend consists of temporal words, topics, and timestamps", "text": "This paper presents a topic model that identifies interpretable low dimensional components in time-stamped data for capturing the evolution of trends. Unlike other models for time-stamped data, our proposal, the trend analysis model (TAM), focuses on the difference between temporal words and other words in each document to detect topic evolution over time. TAM introduces a latent trend class variable into each document and a latent switch variable into each token for handling these differences. The trend class has a probability distribution over temporal words, topics, and a continuous distribution over time, where each topic is responsible for generating words. The latter class uses a document specific probabilistic distribution to judge which variable each word comes from for generating words in each token. Accordingly, TAM can explain which topic co-occurrence pattern will appear at any given time, and represents documents of similar content and timestamp as sharing the same trend class. Therefore, TAM projects them on a latent space of trend dimensionality and allows us to predict the temporal evolution of words and topics in document collections. Experiments on various data sets show that the proposed model can capture interpretable low dimensionality sets of topics and timestamps, take advantage of previous models, and is useful as a generative model in the analysis of the evolution of trends."}
{"_id": "2b6f2b163372e3417b687cc43313f2a630e7bca7", "title": "Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation", "text": "In this work, we propose to apply trust region optimization to deep reinforcement learning using a recently proposed Kronecker-factored approximation to the curvature. We extend the framework of natural policy gradient and propose to optimize both the actor and the critic using Kronecker-factored approximate curvature (K-FAC) with trust region; hence we call our method Actor Critic using Kronecker-Factored Trust Region (ACKTR). To the best of our knowledge, this is the first scalable trust region natural gradient method for actor-critic methods. It is also the method that learns non-trivial tasks in continuous control as well as discrete control policies directly from raw pixel inputs. We tested our approach across discrete domains in Atari games as well as continuous domains in the MuJoCo environment. With the proposed methods, we are able to achieve higher rewards and a 2to 3-fold improvement in sample efficiency on average, compared to previous state-of-the-art on-policy actor-critic methods. Code is available at https://github.com/openai/baselines."}
{"_id": "daaab68653940c4c3380dbf514056e579c24099f", "title": "Exploring Deep Learning and Transfer Learning for Colonic Polyp Classification", "text": "Recently, Deep Learning, especially through Convolutional Neural Networks (CNNs) has been widely used to enable the extraction of highly representative features. This is done among the network layers by filtering, selecting, and using these features in the last fully connected layers for pattern classification. However, CNN training for automated endoscopic image classification still provides a challenge due to the lack of large and publicly available annotated databases. In this work we explore Deep Learning for the automated classification of colonic polyps using different configurations for training CNNs from scratch (or full training) and distinct architectures of pretrained CNNs tested on 8-HD-endoscopic image databases acquired using different modalities. We compare our results with some commonly used features for colonic polyp classification and the good results suggest that features learned by CNNs trained from scratch and the \"off-the-shelf\" CNNs features can be highly relevant for automated classification of colonic polyps. Moreover, we also show that the combination of classical features and \"off-the-shelf\" CNNs features can be a good approach to further improve the results."}
{"_id": "9fb837932bbcc0c7fd33a3692c68426ed1f36682", "title": "User Based Aggregation for Biterm Topic Model", "text": "Biterm Topic Model (BTM) is designed to model the generative process of the word co-occurrence patterns in short texts such as tweets. However, two aspects of BTM may restrict its performance: 1) user individualities are ignored to obtain the corpus level words co-occurrence patterns; and 2) the strong assumptions that two co-occurring words will be assigned the same topic label could not distinguish background words from topical words. In this paper, we propose Twitter-BTM model to address those issues by considering user level personalization in BTM. Firstly, we use user based biterms aggregation to learn user specific topic distribution. Secondly, each user\u2019s preference between background words and topical words is estimated by incorporating a background topic. Experiments on a large-scale real-world Twitter dataset show that Twitter-BTM outperforms several stateof-the-art baselines."}
{"_id": "3d188c6936ad6dd478e01f2dd34bb2167f43e7d7", "title": "Network medicine: a network-based approach to human disease", "text": "Given the functional interdependencies between the molecular components in a human cell, a disease is rarely a consequence of an abnormality in a single gene, but reflects the perturbations of the complex intracellular and intercellular network that links tissue and organ systems. The emerging tools of network medicine offer a platform to explore systematically not only the molecular complexity of a particular disease, leading to the identification of disease modules and pathways, but also the molecular relationships among apparently distinct (patho)phenotypes. Advances in this direction are essential for identifying new disease genes, for uncovering the biological significance of disease-associated mutations identified by genome-wide association studies and full-genome sequencing, and for identifying drug targets and biomarkers for complex diseases."}
{"_id": "9a650f33c24c826c9c4f9480e50e8ade176b2780", "title": "Development of a capacitive-type 6-axis force-torque sensor", "text": "Force sensing is a crucial task for robots, especially when end effectors such as fingers and hands need to interact with unknown environments; to sense such forces, a force-torque (F/T) sensor is an essential component. In this paper, we propose a small-sized 6-axis F/T sensor with a novel arrangement of 12 transducers using the force transducer we have previously developed. The copper beryllium used in our sensor reduces hysteresis in each transducer. Additionally, the sensor provides digital output via I2C bus to reduce the susceptibility to noise, and reduce the number of required wires. Sensor characteristics such as its sensitivity, signal-to-noise ratio, linearity, and hysteresis are determined. More importantly, we showed that our sensor can detect and measure the 6-axis F/T."}
{"_id": "0b023124c563ccf60486f9a0aaaed5a0466596d1", "title": "Service-Oriented Computing: State of the Art and Research Challenges", "text": "Service-oriented computing promotes the idea of assembling application components into a network of services that can be loosely coupled to create flexible, dynamic business processes and agile applications that span organizations and computing platforms. An SOC research road map provides a context for exploring ongoing research activities."}
{"_id": "5177bfba27fc17bd740b70b4bc0a9d95b2814cbe", "title": "An implementation of the 802.1AE MAC Security Standard for in-car networks", "text": "The continuous increase in complexity in automotive electronics has led to cars that include up to 80 Electronic Control Units (ECUs). As a consequence, in-car networks are currently up to their limit in terms of data load, flexibility and bandwidth. The Ethernet backbone is thus considered as the best performing solution. On the other hand, the growing interconnection of cars with the external world requires high security standards in order to prevent safety risks for car passengers. The IEEE 802.1AE MAC Security Standard (MACSec) solves the security issues of Ethernet networks by providing confidentiality, authenticity and integrity of data. This paper presents an efficient hardware implementation of the MACSec standard for the automotive world. The system was synthesized on a Stratix V FPGA and on a 28nm standard-cell CMOS technology. In terms of maximum throughput, the FPGA results in 1.1 Gbps while the standard-cell technology reaches 3.9 Gbps. The FPGA implementation occupies 4.5% of the Adaptive Logic Modules (ALMs) while the standard-cell one gives a 285 kgate size. The proposed architecture represents a suitable implementation for a low area and high-performance solution as usually required by in-car network controllers."}
{"_id": "7ef3ac14cdb484aaa2b039850093febd5cf73a21", "title": "Contextual correlates of synonymy", "text": "Experimentol corroboration was obtained for the hypothesis that the proportion of words common to the contexts of word A and to the contexts of word B is a function of the degree to which A and B are similar in meaning. The tests were carried out for variously defined contexts. The shapes of the functions, however, indicate that similarity of context is reliable as criterion only for detecting pairs of words that are very similar in meaning."}
{"_id": "1c2cd3188fba95d1bccdcc013224ec55443075d0", "title": "Stock market prediction exploiting microblog sentiment analysis", "text": "This paper proposes a stock market prediction method exploiting sentiment analysis using financial microblogs (Sina Weibo). We analyze the microblog texts to find the financial sentiments, then combine the sentiments and the historical data of the Shanghai Composite Index (SH000001) to predict the stock market movements. Our framework includes three modules: Microblog Filter (MF), Sentiment Analysis (SA), and Stock Prediction (SP). The MF module is based on LDA to get the financial microblogs. The SA module first sets up a financial lexicon, then gets the sentiments of the microblogs obtained from the MF module. The SP module proposes a user-group model which adjusts the importance of different people, and combines it with stock historical data to predict the movement of the SH000001. We use about 6.1 million microblogs to test our method, and the result demonstrates our method is effective."}
{"_id": "0651b333c2669227b0cc42de403268a4546ece70", "title": "A Critical Review of Recurrent Neural Networks for Sequence Learning", "text": "Countless learning tasks require dealing with sequential data. Image captioning, speech synthesis, and music generation all require that a model produce outputs that are sequences. In other domains, such as time series prediction, video analysis, and musical information retrieval, a model must learn from inputs that are sequences. Interactive tasks, such as translating natural language, engaging in dialogue, and controlling a robot, often demand both capabilities. Recurrent neural networks (RNNs) are connectionist models that capture the dynamics of sequences via cycles in the network of nodes. Unlike standard feedforward neural networks, recurrent networks retain a state that can represent information from an arbitrarily long context window. Although recurrent neural networks have traditionally been difficult to train, and often contain millions of parameters, recent advances in network architectures, optimization techniques, and parallel computation have enabled successful large-scale learning with them. In recent years, systems based on long short-term memory (LSTM) and bidirectional (BRNN) architectures have demonstrated ground-breaking performance on tasks as varied as image captioning, language translation, and handwriting recognition. In this survey, we review and synthesize the research that over the past three decades first yielded and then made practical these powerful learning models. When appropriate, we reconcile conflicting notation and nomenclature. Our goal is to provide a selfcontained explication of the state of the art together with a historical perspective and references to primary research."}
{"_id": "1d41372e61379c72ac2572656075194471a8dbac", "title": "Open Graphs and Monoidal Theories", "text": "String diagrams are a powerful tool for reasoning about physical processes, logic circuits, tensor networks, and many other compositional structures. The distinguishing feature of these diagrams is that edges need not be connected to vertices at both ends, and these unconnected ends can be interpreted as the inputs and outputs of a diagram. In this paper, we give a concrete construction for string diagrams using a special kind of typed graph called an open-graph. While the category of open-graphs is not itself adhesive, we introduce the notion of a selective adhesive functor, and show that such a functor embeds the category of open-graphs into the ambient adhesive category of typed graphs. Using this functor, the category of open-graphs inherits \u201cenough adhesivity\u201d from the category of typed graphs to perform double-pushout (DPO) graph rewriting. A salient feature of our theory is that it ensures rewrite systems are \u201ctype-safe\u201d in the sense that rewriting respects the inputs and outputs. This formalism lets us safely encode the interesting structure of a computational model, such as evaluation dynamics, with succinct, explicit rewrite rules, while the graphical representation absorbs many of the tedious details. Although topological formalisms exist for string diagrams, our construction is discrete, finitary, and enjoys decidable algorithms for composition and rewriting. We also show how open-graphs can be parameterised by graphical signatures, similar to the monoidal signatures of Joyal and Street, which define types for vertices in the diagrammatic language and constraints on how they can be connected. Using typed open-graphs, we can construct free symmetric monoidal categories, PROPs, and more general monoidal theories. Thus open-graphs give us a tool for mechanised reasoning in monoidal categories."}
{"_id": "728932f125be3c9e0004cc20fee8f8e0863f22f2", "title": "Functorial Boxes in String Diagrams", "text": "String diagrams were introduced by Roger Penrose as a handy notation to manipulate morphisms in a monoidal category. In principle, this graphical notation should encompass the various pictorial systems introduced in proof-theory (like Jean-Yves Girard\u2019s proof-nets) and in concurrency theory (like Robin Milner\u2019s bigraphs). This is not the case however, at least because string diagrams do not accomodate boxes \u2014 a key ingredient in these pictorial systems. In this short tutorial, based on our accidental rediscovery of an idea by Robin Cockett and Robert Seely, we explain how string diagrams may be extended with a notion of functorial box to depict a functor separating an inside world (its source category) from an outside world (its target category). We expose two elementary applications of the notation: first, we characterize graphically when a faithful balanced monoidal functor F : C \u2212\u2192 D transports a trace operator from the category D to the category C, and we then exploit this to construct well-behaved fixpoint operators in cartesian closed categories generated by models of linear logic; second, we explain how the categorical semantics of linear logic induces that the exponential box of proof-nets decomposes as two enshrined functorial boxes. \u2217Invited paper at the conference Computer Science Logic 2006 in Szeged, Hungary. To appear in the proceedings of the conference. c \u00a9 Springer Verlag. This research was partially supported by the ANR Project INVAL \u201cInvariants alg\u00e9briques des syst\u00e8mes informatiques\u201d."}
{"_id": "159c5cde6b0c9bbe2aade0b4fbc478bb07231aeb", "title": "A survey of graphical languages for monoidal categories", "text": "This article is intended as a reference guide to various notions of monoidal categories and their associated string diagrams. It is hoped that this will be useful not just to mathematicians, but also to physicists, computer scientists, and others who use diagrammatic reasoning. We have opted for a somewhat informal treatment of topological notions, and have omitted most proofs. Nevertheless, the exposition is sufficiently detailed to make it clear what is presently known, and to serve as a starting place for more in-depth study. Where possible, we provide pointers to more rigorous treatments in the literature. Where we include results that have only been proved in special cases, we indicate this in the form of caveats."}
{"_id": "c3094d1eb7be556e63547495146cd8a8560a6781", "title": "Diagram Rewriting for Orthogonal Matrices: A Study of Critical Peaks", "text": "One of the rules, which is similar to the Yang-Baxter equatio n, nvolves a map h : [0, \u03c0[ \u2192 [0, \u03c0[. To study the algebraic properties of h, we use the confluence of critical peaks in our rewrite system, and we introduce parametric diagramsdescribing the calculation of angles of rotations generated b y rewriting. In particular, h satisfies thetetrahedron equation(also called Zamolodchikov equation)."}
{"_id": "db10480323bd2b9aeb58268678c204053722d6ad", "title": "Term rewriting and all that", "text": ""}
{"_id": "2c4588f53a562385086c74205fcdfa853ac03aa7", "title": "An Extensive Empirical Evaluation of Character-Based Morphological Tagging for 14 Languages", "text": "This paper investigates neural characterbased morphological tagging for languages with complex morphology and large tag sets. Character-based approaches are attractive as they can handle rarelyand unseen words gracefully. We evaluate on 14 languages and observe consistent gains over a state-of-the-art morphological tagger across all languages except for English and French, where we match the state-of-the-art. We compare two architectures for computing characterbased word vectors using recurrent (RNN) and convolutional (CNN) nets. We show that the CNN based approach performs slightly worse and less consistently than the RNN based approach. Small but systematic gains are observed when combining the two architectures by ensembling."}
{"_id": "8207de2d333ab56e855e0875e8d5d31202fae91f", "title": "Understanding Citizen \u2019 s Continuance Intention to Use e-Government Website : a Composite View of Technology Acceptance Model and Computer Self-Efficacy", "text": "This study aims to understand the fundamental factors influencing the citizen\u2019s continuance intention to use eGovernment websites by using the Technology Acceptance Model (TAM) as a based theoretical model. Computer selfefficacy is adopted as an additional factor that influences the citizen\u2019s continuance intention to use e-Government websites. To empirically test the proposed research model, the web-based survey was employed. The participants consisted of 614 country-wide citizens with at least a bachelor\u2019s degree and an experience with e-Government websites. Regression analysis was conducted to test the model. The results revealed that perceived usefulness and perceived ease of use of e-Government websites and citizen\u2019s computer self-efficacy directly enhanced citizen\u2019s continuance intention to use e-Government websites. In addition, perceived ease of use of e-Government websites indirectly enhanced citizen\u2019s continuance intention through perceived usefulness."}
{"_id": "1bf260b5993d6d74ce597ceb822b42cb32033f53", "title": "Common Sense Knowledge Based Personality Recognition from Text", "text": "Past works on personality detection has shown that psycho-linguistic features, frequency based analysis at lexical level , emotive words and other lexical clues such as number of first person or sec ond person words carry major role to identify personality associated with the te xt. In this work, we propose a new architecture for the same task using common sen se knowledge with associated sentiment polarity and affective labels. To extract the common sense knowledge with sentiment polarity scores and affect iv labels we used Senticnet which is one of the most useful resources for opinion mining and sentiment analysis. In particular, we combined comm n sense knowledge based features with phycho-linguistic features and freque ncy based features and later the features were employed in supervised classifier . We designed five SMO based supervised classifiers for five personality t raits. We observe that the use of common sense knowledge with affective and sentim ent information enhances the accuracy of the existing frameworks which use o nly psycho-linguistic features and frequency based analysis at lexical le ve ."}
{"_id": "f8a2cf86baee7f68509576a923831539b35b13d0", "title": "Fascia lliaca block for analgesia after hip arthroplasty: a randomized double-blind, placebo-controlled trial.", "text": "BACKGROUND AND OBJECTIVES\nFascia iliaca block (FIB) is often used to treat pain after total hip arthroplasty (THA), despite a lack of randomized trials to evaluate its efficacy for this indication. The objective of this study was to assess the analgesic benefit of FIB after THA. Our primary hypothesis was administration of FIB decreases the intensity of postoperative pain (numeric rating scale [NRS-11] score) compared with sham block (SB) in patients after THA.\n\n\nMETHODS\nAfter institutional review board approval and informed consent, 32 eligible patients having THA were recruited. In the postoperative care unit, although all patients received intravenous morphine sulfate patient-controlled analgesia, patients reporting pain of 3 or greater on the NRS-11 scale were randomized to receive ultrasound-guided fascia iliaca (30 mL 0.5% ropivacaine) or SB (30 mL 0.9% NaCl) using identical technique, below fascia iliaca. The primary outcome was pain intensity (NRS-11) after FIB.\n\n\nRESULTS\nThirty-two patients (16 in each group) completed the study; all patients received an FIB. There was no difference in pain intensity (NRS-11 = 5.0 \u00b1 0.6 vs 4.7 \u00b1 0.6, respectively) after FIB versus SB or in opioid consumption (8.97 \u00b1 1.6 vs 5.7 \u00b1 1.6 mg morphine, respectively) between the groups at 1 hour. The morphine consumption after 24 hours was similar in both groups (49.0 \u00b1 29.9 vs 50.4 \u00b1 34.5 mg, P = 0.88, respectively).\n\n\nCONCLUSIONS\nThe evidence in these data suggests that the difference in average pain intensity after FIB versus SB was not significant (95% confidence interval, -2.2-1.4 NRS units)."}
{"_id": "87507a498558ed6ed23115a42f42376c0884f7f2", "title": "A (sub)graph isomorphism algorithm for matching large graphs", "text": "We present an algorithm for graph isomorphism and subgraph isomorphism suited for dealing with large graphs. A first version of the algorithm has been presented in a previous paper, where we examined its performance for the isomorphism of small and medium size graphs. The algorithm is improved here to reduce its spatial complexity and to achieve a better performance on large graphs; its features are analyzed in detail with special reference to time and memory requirements. The results of a testing performed on a publicly available database of synthetically generated graphs and on graphs relative to a real application dealing with technical drawings are presented, confirming the effectiveness of the approach, especially when working with large graphs."}
{"_id": "33267ba593995791becdf11633d2c09fa4ff9b6c", "title": "Image-based visual navigation for mobile robots", "text": "We introduce a new image-based visual navigation algorithm that allows the Cartesian velocity of a robot to be defined with respect to a set of visually observed features corresponding to previously unseen and unmapped world points. The technique is well suited to mobile robot tasks such as moving along a road or flying over the ground. We describe the algorithm in general form and present detailed simulation results for an aerial robot scenario using a spherical camera and a wide angle perspective camera, and present experimental results for a mobile ground robot."}
{"_id": "0a9b3dc3f46ca655869db789a0d1823d3141fb39", "title": "Knowledge sharing behavior of physicians in hospitals", "text": "Sharing knowledge of Physicians within hospitals can realize potential gains enormously and is critical to be successful and survive in competitive environments. There is a need for empirical research to identify the factors that determine physician's behavior to share knowledge. This study investigates the factors that determine the physician's individual knowledge sharing behavior in his/her department. The purpose of this study is to examine empirically the physicians' knowledge sharing behavior The research models under investigation are the Theory of Reasoned Action (TRA) model and the Theory of Planned Behavior (TPBI model. These models are empirically examined and compared, using the survey results on physicians' knowledge sharing behavior collected from 286 physicians practicing in 28 departments in 13 tertiary hospitals in Korea. TPB model exhibited a good fit with the data and appeared to be superior to TRA in explaining physicians' intentions to share knowledge. Amended TPB model provided an important improvement In fit over that of original TPB model. In amended TPB model, subjective norms were found to have the strongest total effects on behavioral intentions to share knowledge of physicians through direct and indirect path by attitude. Attitude was found to be the second important factor influencing physicians' intentions. Perceived behavioral control was also found to have effect on the intentions to share knowledge though it was weaker than that of subjective norms or attitude. The implications for physician's knowledge sharing activities are discussed"}
{"_id": "382591224df8b8b2eb39712f282860424575754e", "title": "vTPM: Virtualizing the Trusted Platform Module", "text": "We present the design and implementation of a system that enables trusted computing for an unlimited number of virtual machines on a single hardware platform. To this end, we virtualized the Trusted Platform Module (TPM). As a result, the TPM\u2019s secure storage and cryptographic functions are available to operating systems and applications running in virtual machines. Our new facility supports higher-level services for establishing trust in virtualized environments, for example remote attestation of software integrity. We implemented the full TPM specification in software and added functions to create and destroy virtual TPM instances. We integrated our software TPM into a hypervisor environment to make TPM functions available to virtual machines. Our virtual TPM supports suspend and resume operations, as well as migration of a virtual TPM instance with its respective virtual machine across platforms. We present four designs for certificate chains to link the virtual TPM to a hardware TPM, with security vs. efficiency trade-offs based on threat models. Finally, we demonstrate a working system by layering an existing integrity measurement application on top of our virtual TPM facility."}
{"_id": "12fe91ab616b797e22543ae6c2afa7866dbc9a49", "title": "Action recognition from a distributed representation of pose and appearance", "text": "We present a distributed representation of pose and appearance of people called the \u201cposelet activation vector\u201d. First we show that this representation can be used to estimate the pose of people defined by the 3D orientations of the head and torso in the challenging PASCAL VOC 2010 person detection dataset. Our method is robust to clutter, aspect and viewpoint variation and works even when body parts like faces and limbs are occluded or hard to localize. We combine this representation with other sources of information like interaction with objects and other people in the image and use it for action recognition. We report competitive results on the PASCAL VOC 2010 static image action classification challenge."}
{"_id": "0c66d2ee3c466b9ed37ffa604e80c22251f35c61", "title": "A Review of Sleep EEG Patterns . Part I : A Compilation of Amended Rules for Their Visual Recognition according to Rechtschaffen and Kales", "text": "Task Force \u2018Scoring of Polysomnographic Recordings\u2019 of the German Sleep Society (DGSM): Andrea Rodenbeck, Ralf Binder, Peter Geisler, Heidi Danker-Hopfe, Reimer Lund, Friedhart Raschke, Hans-G\u00fcnther Wee\u00df, and Hartmut Schulz (Chairman) Department of Psychiatry, University of G\u00f6ttingen, G\u00f6ttingen, Germany Department of Psychiatry, Pfalzklinikum, Klingenm\u00fcnster, Germany Department of Psychiatry, University of Regensburg, Regensburg, Germany Department of Psychiatry, Charit\u00e9-CBF, Berlin, Germany Gr\u00fcntenstr. 11, 80686 M\u00fcnchen (formerly Dept. of Pneumology, ASKLEPIOS Clinic Gauting, Germany) Institute of Rehabilitation Norderney, Norderney, Germany Department of Neurology, HELIOS-Klinikum Erfurt, and Free University of Berlin, Biopsychology, Germany"}
{"_id": "eca39eb10435a7f4825431e733616c9900f502aa", "title": "Surveys on attitudes towards legalisation of euthanasia: importance of question phrasing.", "text": "AIM\nTo explore whether the phrasing of the questions and the response alternatives would influence the answers to questions about legalisation of euthanasia.\n\n\nMETHODS\nResults were compared from two different surveys in populations with similar characteristics. The alternatives \"positive\", \"negative\", and \"don't know\" (first questionnaire) were replaced with an explanatory text, \"no legal sanction\", four types of legal sanctions, and no possibility to answer \"don't know\" (second questionnaire). Four undergraduate student groups (engineering, law, medicine, and nursing) answered.\n\n\nRESULTS\nIn the first questionnaire (n = 684) 43% accepted euthanasia (range 28-50%), 14% (8-33%) did not, and 43% (39-59%) answered \"don't know\". Two per cent of the respondents declined to answer. In comparison with previous surveys on attitudes to euthanasia the proportion of \"don't know\" was large. The results of the second questionnaire (n = 639), showed that 38% favoured \"no legal prosecution\" (26-50%). However, 62% (50-74%) opted for different kinds of legal sanctions, and two of four groups expressed significantly different views in the two surveys. A proportion of 10% declined to answer the second questionnaire.\n\n\nCONCLUSION\nAn introduction of an explanatory text and a wider range of response alternatives produced differences between the results of the two surveys conducted."}
{"_id": "283d9ef2b54453bf77723712693d02165cf08899", "title": "Knowledge Bases in the Age of Big Data Analytics", "text": "This tutorial gives an overview on state-of-the-art methods for the automatic construction of large knowledge bases and harnessing them for data and text analytics. It covers both big-data methods for building knowledge bases and knowledge bases being assets for big-data applications. The tutorial also points out challenges and research opportunities. 1. MOTIVATION AND SCOPE Comphrehensive machine-readable knowledge bases (KB\u2019s) have been pursued since the seminal projects Cyc [19, 20] and WordNet [12]. In contrast to these manually created KB\u2019s, great advances have recently been made on automating the building and curation of large KB\u2019s [1, 16], using information extraction (IE) techniques and harnessing highquality Web sources like Wikipedia. Prominent endeavors of this kind include academic research projects such as DBpedia [3], KnowItAll [10], NELL [5] and YAGO [29], as well as industrial ones such as Freebase. These projects provide automatically constructed KB\u2019s of facts about named entities, their semantic classes, and their mutual relationships. They contain millions of entities and billions of facts about them. Moreover, several KB\u2019s are interlinked at the entity level, forming the backbone of the Web of Linked Data [14]. Such world knowledge in turn enables cognitive applications and knowledge-centric services like disambiguating natural-language text, entity linking, text summarization, deep question answering, and semantic search and analytics over entities and relations in Web and enterprise data (e.g., [2, 6, 8, 13]). Prominent examples of how KB\u2019s can be harnessed include the Google Knowledge Graph [27] and the IBM Watson question answering system [17]. This tutorial presents state-of-the-art methods, recent advances, research opportunities, and open challenges along this avenue of knowledge harvesting and its applications. Particular emphasis is on the twofold role of KB\u2019s for bigdata analytics: using scalable distributed algorithms for harvesting knowledge from Web and text sources, and leveraging entity-centric knowledge for deeper interpretation of and This work is licensed under the Creative Commons AttributionNonCommercial-NoDerivs 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/3.0/. Obtain permission prior to any use beyond those covered by the license. Contact copyright holder by emailing info@vldb.org. Articles from this volume were invited to present their results at the 40th International Conference on Very Large Data Bases, September 1st 5th 2014, Hangzhou, China. Proceedings of the VLDB Endowment, Vol. 7, No. 13 Copyright 2014 VLDB Endowment 2150-8097/14/08. better intelligence with big data. The following sections outline the structure of the tutorial. An extensive bibliography on this theme is given in [30]. 2. BUILDING KNOWLEDGE BASES Digital Knowledge: Today\u2019s KB\u2019s represent their data mostly in RDF-style SPO (subject-predicate-object) triples. We introduce this data model and the most salient KB projects, which include KnowItAll [10, 11], BabelNet [22], ConceptNet [28], DBpedia [3, 18], DeepDive [24], Freebase [4], ImageNet [7], NELL [5], Wikidata [31], WikiNet [21], WikiTaxonomy [26], and YAGO [29, 15]. 
We briefly discuss industrial projects like the Google Knowledge Graph and related work at Google [9, 13, 25], the EntityCube and Probase projects at Microsoft Research [23, 32], and IBM\u2019s Watson project [17]. Harvesting Knowledge on Entities and Classes: Every entity in a KB (e.g., Steve Jobs) belongs to one or multiple classes (e.g., computer pioneer, entrepreneur). These classes are organized into a taxonomy, where more special classes are subsumed by more general classes (e.g., person). We discuss two families of methods to harvest such information: Wikipedia-based approaches that analyze the category system, and Web-based approaches that use techniques like set expansion. 3. HARVESTING FACTS AT WEB SCALE Harvesting Relational Facts: Relational facts express properties of and relationships between entities. There is a large spectrum of methods to extract such facts from Web documents. We give an overview on methods from pattern matching (e.g., regular expressions), computational linguistics (e.g., dependency parsing), statistical learning (e.g., factor graphs and MLN\u2019s), and logical consistency reasoning (e.g., weighted MaxSat or ILP solvers). We also discuss to what extent these approaches scale to handle big data. Open Information Extraction: Alternatively to methods that operate on a pre-specified set of relations and entities, open information extraction harvests arbitrary SPO triples from natural language documents. It aggressively taps into noun phrases as entity candidates and verbal phrases as prototypic patterns for relations. We discuss recent methods that follow this direction. Some methods along these lines make clever use of big-data techniques like frequent sequence mining and map-reduce computation. Temporal and Multilingual Knowledge: Properly interpreting entities and facts in a KB often requires additional meta-information like entity names in different languages and the temporal scope of facts. We discuss tech-"}
{"_id": "536ffdac9ce968cc38711e8b02a793c920fb3da2", "title": "LOGO MATCHING ALGORITHM FEATURE\u2010BASED MATCHING RESULTS IMRESTAURANT(): MATLAB FOR FEATURE\u2010BASED RESTAURANT LOGO RECOGNITION", "text": "This paper discusses the implementation of imrestaurant(), a MATLAB function for feature-based restaurant logo recognition. The algorithm works by first applying SIFT to a logo database and then creating a hierarchical vocabulary tree using K-means clustering. The input to the algorithm is a photograph of an image logo, which is first filtered to reduce noise. Then SIFT is applied to the input image, and the output descriptors are pushed to the vocabulary tree. Each image in the logo database is scored using TF-IDF weights and the ten highest scoring database logos are pairwise matched using the ratio test and RANSAC. The restaurant database image with the largest consensus set is searched for on yelp.com. This algorithm was shown to be robust against input image size, background noise, and shearing. Finally, future work in increasing the accuracy of feature detection as well as optical character recognition is discussed."}
{"_id": "d567cf1863a8a287d1a5a7f686ec51b0fc69644a", "title": "Ensemble of jointly trained deep neural network-based acoustic models for reverberant speech recognition", "text": "Distant speech recognition is a challenge, particularly due to the corruption of speech signals by reverberation caused by large distances between the speaker and microphone. In order to cope with a wide range of reverberations in real-world situations, we present novel approaches for acoustic modeling including an ensemble of deep neural networks (DNNs) and an ensemble of jointly trained DNNs. First, multiple DNNs are established, each of which corresponds to a different reverberation time 60 (RT60) in a setup step. Also, each model in the ensemble of DNN acoustic models is further jointly trained, including both feature mapping and acoustic modeling, where the feature mapping is designed for the dereverberation as a front-end. In a testing phase, the two most likely DNNs are chosen from the DNN ensemble using maximum a posteriori (MAP) probabilities, computed in an online fashion by using maximum likelihood (ML)-based blind RT60 estimation and then the posterior probability outputs from two DNNs are combined using the ML-based weights as a simple average. Extensive experiments demonstrate that the proposed approach leads to substantial improvements in speech recognition accuracy over the conventional DNN baseline systems under diverse reverberant conditions."}
{"_id": "ff4cba2c18e4772f2d18884b5c019da03876e571", "title": "We Can \"See\" You via Wi-Fi - WiFi Action Recognition via Vision-based Methods", "text": "Recently, Wi-Fi has caught tremendous attention for its ubiquity. Motivated by Wi-Fi\u2019s low cost and privacy preservation, researchers have been putting lots of investigation into its potential on action recognition and even person identification. In this paper, for bringing a new modality for multimedia community, we offer an comprehensive overview on these two topics in Wi-Fi. Also, through looking at these two topics from an unprecedented perspective, we could achieve generality instead of designing specific ad-hoc features for each scenario. Observing the great resemblance of Channel State Information (CSI, a finegrained information captured from the received Wi-Fi signal) to texture, we propose a brand-new framework based on computer vision methods. To minimize the effect of location dependency embedded in CSI, we propose a novel de-noising method based on Singular Value Decomposition (SVD) to eliminate the background energy and effectively extract the channel information of signals reflected by human bodies. From the experiments conducted, we demonstrate the feasibility and efficacy of the proposed methods. Also, we conclude the factors that would affect the performance and highlight a few potential issues that require further deliberations."}
{"_id": "7a8c2743db1749c2d9f16f62ee633574c1176e34", "title": "Face Photo-Sketch Synthesis and Recognition", "text": "Today in Modern Society Face Recognition has gained much attention in the field of network multimedia access. After the 9/11 tragedy in India, the need for technologies for identification, detection and recognition of suspects has increased. One of the most common biometric recognition techniques is face recognition since face is the convenient way used by the people to identify each other. In this paper we are going to study a method for representing face which is based on the features which uses geometric relationship among the facial features like mouth, nose and eyes .Feature based face representation is done by independently matching templates of three facial regions i.e eyes, mouth and nose .Principal Component Analysis method which is also called Eigen faces is appearance based technique used widely for the dimensionality reduction and recorded a greater performance in face recognition. Here we are going to study about PCA followed by Feed Forward Neural Network called PCA-NN."}
{"_id": "cffe77eec23a707b60342666eb9eb4b531c21f3e", "title": "X-band 1kW SSPA using 20-way Hybrid Radial combiner for accelerator", "text": "The X-band pulsed 1kW SSPA for XFEL has been designed and manufactured. Amplifier modules were combined by a 20-way Hybrid Radial combiner in order to reduce output loss. The manufactured 20-Way Hybrid Radial Combiner is capable of high output with a combining efficiency of 95%. The entire system is driven by Pulse and results in a Power Gain of 60dB with an Output Power of 1kW."}
{"_id": "d96f734b973cc533e79c7c53969038dd5e71a3c0", "title": "Applying QR code in augmented reality applications", "text": "In this paper we present an augmented reality (AR) application based on the QR Code. The system can extract the information embedded in a QR Code and show the information in an extended 3D form with the QR Code being the traditional AR marker. Traditional AR systems often use a particularly designed pattern (the marker) to recover the 3D scene structure and identify the object to be displayed on the scene. In these systems, the marker is used only for tracking and identification. They do not convey any other information. Consequently, the applications of these systems are limited and often a registration procedure is required for a new marker.\n QR Code has the advantage of large information capacity and is similar to an AR marker in appearance. Thus, more interesting and useful applications can be developed by combining the QR Code with the traditional AR system. In this paper, we combine these two techniques to develop a product demo system. A QR Code is pasted on the package of a product and then a 3D virtual object is displayed on the QR Code. This system allows the customer to visualize the product via a more direct and interactive way. Our system demonstrates the success of using QR Code as the AR marker to a particular application and we believe it can bring more convenient to our life in the future."}
{"_id": "b35e92abd62d979616b7425e17123c543264d688", "title": "Combining CPU and GPU architectures for fast similarity search", "text": "The Signature Quadratic Form Distance on feature signatures represents a flexible distance-based similarity model for effective content-based multimedia retrieval. Although metric indexing approaches are able to speed up query processing by two orders of magnitude, their applicability to large-scale multimedia databases containing billions of images is still a challenging issue. In this paper, we propose a parallel approach that balances the utilization of CPU and many-core GPUs for efficient similarity search with the Signature Quadratic Form Distance. In particular, we show how to process multiple distance computations and other parts of the search procedure in parallel, achieving maximal performance of the combined CPU/GPU system. The experimental evaluation demonstrates that our approach implemented on a common workstation with 2\u00a0GPU cards outperforms traditional parallel implementation on a high-end 48-core NUMA server in terms of efficiency almost by an order of magnitude. If we consider also the price of the high-end server that is ten times higher than that of the GPU workstation then, based on price/performance ratio, the GPU-based similarity search beats the CPU-based solution by almost two orders of magnitude. Although proposed for the SQFD, our approach of fast GPU-based similarity search is applicable for any distance function that is efficiently parallelizable in the SIMT execution model."}
{"_id": "6f52a5096478076afeacfdca1795259dd5d252b4", "title": "Aircraft angle of attack and air speed detection by redundant strip pressure sensors", "text": "Air speed and angle of attack are fundamental parameters in the control of flying bodies. Conventional detection techniques use sensors that may protrude outside the aircraft, be too bulky for installation on small UAVs, or be excessively intrusive. We propose a novel readout methodology where flight parameters are inferred from redundant pressure readings made by capacitive strip sensors directly applied on the airfoil skin. Redundancy helps lower the accuracy requirements on the individual sensors. A strategy for combining sensor data is presented with an error propagation analysis. The latter enables foreseeing the precision by which the flight parameters can be detected. The methodology has been validated by fluid dynamic simulation and a sample case is illustrated."}
{"_id": "1dfe6dfdf26562987cc45c0bb8c509acba6e10a7", "title": "Visual saliency based on multiscale deep features", "text": "Visual saliency is a fundamental problem in both cognitive and computational sciences, including computer vision. In this paper, we discover that a high-quality visual saliency model can be learned from multiscale features extracted using deep convolutional neural networks (CNNs), which have had many successes in visual recognition tasks. For learning such saliency models, we introduce a neural network architecture, which has fully connected layers on top of CNNs responsible for feature extraction at three different scales. We then propose a refinement method to enhance the spatial coherence of our saliency results. Finally, aggregating multiple saliency maps computed for different levels of image segmentation can further boost the performance, yielding saliency maps better than those generated from a single segmentation. To promote further research and evaluation of visual saliency models, we also construct a new large database of 4447 challenging images and their pixelwise saliency annotations. Experimental results demonstrate that our proposed method is capable of achieving state-of-the-art performance on all public benchmarks, improving the F-Measure by 5.0% and 13.2% respectively on the MSRA-B dataset and our new dataset (HKU-IS), and lowering the mean absolute error by 5.7% and 35.1% respectively on these two datasets."}
{"_id": "2ce073da76e6ed87eda2da08da0e00f4f060f1a6", "title": "Deep Saliency with Encoded Low Level Distance Map and High Level Features", "text": "Recent advances in saliency detection have utilized deep learning to obtain high level features to detect salient regions in a scene. These advances have demonstrated superior results over previous works that utilize hand-crafted low level features for saliency detection. In this paper, we demonstrate that hand-crafted features can provide complementary information to enhance performance of saliency detection that utilizes only high level features. Our method utilizes both high level and low level features for saliency detection under a unified deep learning framework. The high level features are extracted using the VGG-net, and the low level features are compared with other parts of an image to form a low level distance map. The low level distance map is then encoded using a convolutional neural network(CNN) with multiple 1 1 convolutional and ReLU layers. We concatenate the encoded low level distance map and the high level features, and connect them to a fully connected neural network classifier to evaluate the saliency of a query region. Our experiments show that our method can further improve the performance of state-of-the-art deep learning-based saliency detection methods."}
{"_id": "87b53ce402587b4f4d66d9efbff14983cfdca724", "title": "ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design", "text": "Currently, the neural network architecture design is mostly guided by the indirect metric of computation complexity, i.e., FLOPs. However, the direct metric, e.g., speed, also depends on the other factors such as memory access cost and platform characterics. Thus, this work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical guidelines for efficient network design. Accordingly, a new architecture is presented, called ShuffleNet V2. Comprehensive ablation experiments verify that our model is the stateof-the-art in terms of speed and accuracy tradeoff."}
{"_id": "0a26477bd1e302219a065eea16495566ead2cede", "title": "Symbiotic Segmentation and Part Localization for Fine-Grained Categorization", "text": "We propose a new method for the task of fine-grained visual categorization. The method builds a model of the base-level category that can be fitted to images, producing high-quality foreground segmentation and mid-level part localizations. The model can be learnt from the typical datasets available for fine-grained categorization, where the only annotation provided is a loose bounding box around the instance (e.g. bird) in each image. Both segmentation and part localizations are then used to encode the image content into a highly-discriminative visual signature. The model is symbiotic in that part discovery/localization is helped by segmentation and, conversely, the segmentation is helped by the detection (e.g. part layout). Our model builds on top of the part-based object category detector of Felzenszwalb et al., and also on the powerful Grab Cut segmentation algorithm of Rother et al., and adds a simple spatial saliency coupling between them. In our evaluation, the model improves the categorization accuracy over the state-of-the-art. It also improves over what can be achieved with an analogous system that runs segmentation and part-localization independently."}
{"_id": "ae344cc792c27bc6d97deaaf738063c5373c780e", "title": "Overhead line fault detection using GSM technology", "text": "In this paper, a novel technique for the fault detection, classification & protection of transmission lines is proposed. The proposed system uses different protective equipment's, voltage sense section, microcontroller section, LED display section & GSM (global system for mobile communication) module. The faults like all series (1LO, 2LO, 3LO etc.) & shunt faults (L-G, L-L, L-L-G, L-L-L, L-L-L-G) get detected & classified according to characteristics condition of current & voltage at the occurrence of fault in the three phase overhead lines. The sensed signals are given to microcontroller for detection & classification of faults. Also wireless mobile communication technique i.e. GSM is used simultaneously to send message to responsible person on mobile. Type of fault gets displayed on fault display section. Simultaneously fault gets interrupted using protective devices. (fuse, circuit breakers etc.)"}
{"_id": "1c2d39eee9816982c5edaa6d945d39cb1bde339e", "title": "Complete 3D Scene Parsing from an RGBD Image", "text": "One major goal of vision is to infer physical models of objects, surfaces, and their layout from sensors. In this paper, we aim to interpret indoor scenes from one RGBD image. Our representation encodes the layout of orthogonal walls and the extent of objects, modeled with CAD-like 3D shapes. We parse both the visible and occluded portions of the scene and all observable objects, producing a complete 3D parse. Such a scene interpretation is useful for robotics and visual reasoning, but difficult to produce due to the well-known challenge of segmentation, the high degree of occlusion, and the diversity of objects in indoor scenes. We take a data-driven approach, generating sets of potential object regions, matching to regions in training images, and transferring and aligning associated 3D models while encouraging fit to observations and spatial consistency. We use support inference to aid interpretation and propose a retrieval scheme that uses convolutional neural networks to classify regions and retrieve objects with similar shapes. We demonstrate the performance of our method on our newly annotated NYUd v2 dataset (Silberman et al., in: Computer vision-ECCV, 2012, pp 746\u2013760, 2012) with detailed 3D shapes."}
{"_id": "9081ad0447360f9c91685c03b601047bd8e9aacc", "title": "CertiCoq : A verified compiler for Coq", "text": "CertiCoq is a mechanically verified, optimizing compiler for Coq that bridges the gap between certified high-level programs and their translation to machine language. We outline its design as well as the main foundational and engineering challenges involved in building and certifying a compiler for Coq in Coq."}
{"_id": "b05f104f5a28a1a2c2fdb216d3d0959a5786f0ad", "title": "Compaction Management in Distributed Key-Value Datastores", "text": "Compactions are a vital maintenance mechanism used by datastores based on the log-structured merge-tree to counter the continuous buildup of data files under update-intensive workloads. While compactions help keep read latencies in check over the long run, this comes at the cost of significantly degraded read performance over the course of the compaction itself. In this paper, we offer an in-depth analysis of compaction-related performance overheads and propose techniques for their mitigation. We offload large, expensive compactions to a dedicated compaction server to allow the datastore server to better utilize its resources towards serving the actual workload. Moreover, since the newly compacted data is already cached in the compaction server\u2019s main memory, we fetch this data over the network directly into the datastore server\u2019s local cache, thereby avoiding the performance penalty of reading it back from the filesystem. In fact, pre-fetching the compacted data from the remote cache prior to switching the workload over to it can eliminate local cache misses altogether. Therefore, we implement a smarter warmup algorithm that ensures that all incoming read requests are served from the datastore server\u2019s local cache even as it is warming up. We have integrated our solution into HBase, and using the YCSB and TPC-C benchmarks, we show that our approach significantly mitigates compaction-related performance problems. We also demonstrate the scalability of our solution by distributing compactions across multiple compaction servers."}
{"_id": "4ac639f092b870ebe72e4b366afb90f3073d6223", "title": "Massively Parallel A* Search on a GPU", "text": "A* search is a fundamental topic in artificial intelligence. Recently, the general purpose computation on graphics processing units (GPGPU) has been widely used to accelerate numerous computational tasks. In this paper, we propose the first parallel variant of the A* search algorithm such that the search process of an agent can be accelerated by a single GPU processor in a massively parallel fashion. Our experiments have demonstrated that the GPU-accelerated A* search is efficient in solving multiple real-world search tasks, including combinatorial optimization problems, pathfinding and game solving. Compared to the traditional sequential CPU-based A* implementation, our GPU-based A* algorithm can achieve a significant speedup by up to 45x on largescale search problems."}
{"_id": "bd3395f9017763a906a336e71886b6b1cc941956", "title": "Portable and Efficient Auto-vectorized Bytecode: a Look at the Interaction between Static and JIT Compilers", "text": "Heterogeneity is a confirmed trend of computing systems. Bytecode formats and just-in-time compilers have been proposed to deal with the diversity of the platforms. By hiding architectural details and giving software developers a unified view of the machine, they help improve portability and manage the complexity of large software. Independently, careful exploitation of SIMD instructions has become crucial for the performance of many applications. However, auto-vectorizing compilers need detailed information about the architectural features of the processor to generate efficient code. We propose to reconcile the use of architecture neutral bytecode formats with the need to generate highly efficient vectorized native code. We make three contributions. 1) We show that vectorized bytecode is a viable approach that can deliver portable performance in the presence of SIMD extensions, while incurring only minor penalty when SIMD is not supported. In other words, the information that a loop can be vectorized is the vectorized loop itself. 2) We analyze the interaction between the static and just-in-time compilers and we derive conditions to deliver performance. 3) We add vectorization capabilities to the CLI port of the GCC compiler."}
{"_id": "c967238ac80b5d0671d14ad96ebb4085663d916a", "title": "Real-time 3D Walking Pattern Generation for a Biped Robot with Telescopic Legs", "text": "Meltran V, a new biped robot with telescopic legs, is introduced. For 3D walking control of the robot we analyze the dynamics of a three-dimensional inverted pendulum in which motion is constrained to move along an arbitrarily de ned plane. From this analysis we obtain simple linear dynamics, the ThreeDimensional Linear Inverted Pendulum Mode (3DLIPM). Using a real-time control method based on 3DLIPM, the Meltran V robot successfully demonstrated 3D dynamic walking without the use of any prepared trajectories."}
{"_id": "1c95aaee295b30b05bffbe871ccacea8dab54af8", "title": "Review on cyber-physical systems", "text": "Cyber-physical systems U+0028 CPS U+0029 are complex systems with organic integration and in-depth collaboration of computation, communications and control U+0028 3C U+0029 technology. Subject to the theory and technology of existing network systems and physical systems, the development of CPS is facing enormous challenges. This paper first introduces the concept and characteristics of CPS and analyzes the present situation of CPS researches. Then the development of CPS is discussed from perspectives of system model, information processing technology and software design. At last it analyzes the main obstacles and key researches in developing CPS."}
{"_id": "e35cd48605e7d07f8e8294c9bc47673cdc46da49", "title": "{\\Omega}-Net (Omega-Net): Fully Automatic, Multi-View Cardiac MR Detection, Orientation, and Segmentation with Deep Neural Networks", "text": "Pixelwise segmentation of the left ventricular (LV) myocardium and the four cardiac chambers in 2-D steady state free precession (SSFP) cine sequences is an essential preprocessing step for a wide range of analyses. Variability in contrast, appearance, orientation, and placement of the heart between patients, clinical views, scanners, and protocols makes fully automatic semantic segmentation a notoriously difficult problem. Here, we present \u03a9-Net (Omega-Net): a novel convolutional neural network (CNN) architecture for simultaneous localization, transformation into a canonical orientation, and semantic segmentation. First, an initial segmentation is performed on the input image; second, the features learned during this initial segmentation are used to predict the parameters needed to transform the input image into a canonical orientation; and third, a final segmentation is performed on the transformed image. In this work, \u03a9-Nets of varying depths were trained to detect five foreground classes in any of three clinical views (short axis, SA; four-chamber, 4C; two-chamber, 2C), without prior knowledge of the view being segmented. This constitutes a substantially more challenging problem compared with prior work. The architecture was trained on a cohort of patients with hypertrophic cardiomyopathy (HCM, N = 42) and healthy control subjects (N = 21). Network performance, as measured by weighted foreground intersection-over-union (IoU), was substantially improved for the best-performing \u03a9-Net compared with U-Net segmentation without localization or orientation (0.858 vs 0.834). In addition, to be comparable with other works, \u03a9-Net was retrained from scratch on the publicly available 2017 MICCAI Automated Cardiac Diagnosis Challenge (ACDC) dataset. The \u03a9-Net outperformed the state-of-the-art method in segmentation of the LV and RV bloodpools, and performed slightly worse in segmentation of the LV myocardium. We conclude that this architecture represents a substantive advancement over prior approaches, with implications for biomedical image segmentation more generally."}
{"_id": "75e62ee28f3df77549715d21c336aa51d71bbc57", "title": "Two cases of hymenal scars occurred by child rape", "text": "Children who have been raped some years back may have hymenal scars. However, medical professionals are not accustomed in assessing these scars because of the lack of experience in performing physical examinations of the external genitalia of children who suffered from rape some years back. Moreover, the importance of physical examination of the victim's external genitalia is sometimes overlooked. Two cases of rape victims with hymenal scars who visited Daegu Child Sexual Abuse Response Center several years after their first sexual abuse along with a literature review are presented here."}
{"_id": "6a76e7ec1d91852dcfc085131a6fa32f50919ddb", "title": "Distributed Power Control for the Downlink of Multi-Cell NOMA Systems", "text": "This paper investigates the power control problem for the downlink of a multi-cell non-orthogonal multiple access system. The problem, called P-OPT, aims to minimize the total transmit power of all the base stations subject to the data rate requirements of the users. The feasibility and optimality properties of P-OPT are characterized through a related optimization problem, called Q-OPT, which is constituted by some relevant power control subproblems. First, we characterize the feasibility of Q-OPT and prove the uniqueness of its optimal solution. Next, we prove that the feasibility of P-OPT can be characterized by the Perron-Frobenius eigenvalues of the matrices arising from the power control subproblems. Subsequently, the relationship between the optimal solutions to P-OPT and that to Q-OPT is presented, which motivates us to obtain the optimal solution to P-OPT through solving the corresponding Q-OPT. Furthermore, a distributed algorithm to solve Q-OPT is designed, and the underlying iteration is shown to be a standard interference function. According to Yates\u2019s power control framework, the algorithm always converges to the optimal solution if exists. Numerical results validate the convergence of the distributed algorithm and quantify the improvement of our proposed method over fractional transmit power control and orthogonal multiple access schemes in terms of power consumption and outage probability."}
{"_id": "418742e59ec1dc69a1d2bd92167fc5f21035603d", "title": "MANUFACTURE OF ARBITRARY CROSS-SECTION COMPOSITE HONEYCOMB CORES BASED ON ORIGAMI TECHNIQUES", "text": "In recent years, as space structures have become large and require higher accuracy, composite honeycombs, which can reduce weight and have low thermal expansion, are in increasing demand. As observed in the design of antenna reflectors and rocket bodies, both flat and 3D-shaped cores are used in this field. However, these special honeycombs have high manufacturing costs and limited applications. This study illustrates a new strategy to fabricate arbitrary cross-section honeycombs with applications of advanced composite materials. These types of honeycombs are usually manufactured from normal flat honeycombs by curving or carving, but the proposed method enables us to construct objective shaped honeycombs directly. The authors first introduce the concept of the kirigami honeycomb, which is made from single flat sheets and has periodical slits resembling origami. In previous studies, honeycombs having various shapes were made using this method, and were realized by only changing folding line diagrams (FLDs). In this study, these 3D kirigami honeycombs are generalized by numerical parameters and fabricated using a newly proposed FLD design method, which enables us to draw the FLD of arbitrary cross-section honeycombs. Next, the authors describe a method of applying this technique to advanced composite materials. Applying the partially soft composite techniques, folding lines are materialized by silicon rubber hinges on carbon fiber reinforced plastic. Complex FLD patterns are then printed using masks on carbon fabrics. Finally, these foldable composites that are cured in corrugated shapes in autoclaves are folded into honeycomb shapes, and some typical samples are shown with their FLDs. INTRODUCTION In the construction of aerospace components, lightweight, rigid, and strong honeycomb sandwich panels are required. In recent years, the use of composite materials has drastically increased in this field. In the case of sandwich panels, carbon fiber reinforced plastic (CFRP) face sheets are typically combined with an aluminum honeycomb. Currently, space structures are increasing in size and require greater degrees of accuracy; hence, the use of composites as a core material is a natural progression. It is promising for further reducing the weight of the body of satellites and enhancing the accuracy of their on-board equipment because of their low coefficients of thermal expansion (CTE). Furthermore, use of the same material for both the face sheet and the core prevents certain problems that would otherwise arise when combining materials that have different CTEs, such as CFRP-face aluminum honeycomb [1]. Various types of composite honeycombs [2-4] are commercially available, in addition to aluminum or nomex honeycombs. A comparable product is the CFRP honeycomb that has recently been used in antenna reflectors for high frequencies [5,6]. Other composites such as Kevlar honeycombs are also manufactured and used in floor panels of the latest airplane model. Quartz fiber, which has superior electrical properties has found the application in radome. 
Many studies on various types of core configurations that do not include hexagonal honeycombs have been reported. The main examples include lattice materials consisting of webs and struts [7-9]. Several configurations have already been proposed in this research area: octet-truss, the 2D and 3D Kagome structure, and the tetrahedral lattice. Square composite honeycombs, which are expected to show high in-plane stretching rigidity, have also been researched [10]. These square honeycombs are fabricated by assembling slotted rectangular composite sheets. The Z-fiber and X-Cor are truss structures that are fabricated by angled carbon fiber rods embedded in polymeric foam, and they are typical examples of composite sandwich panels [11] that have potential commercial applications. However, these composite core materials are not regularly used in sandwich construction. Compared to standard aluminum or nomex honeycombs, their manufacturing costs are very high and they have limited applications. Another problem is the difficulty of machining. In the manufacture of complex-shaped parts, the cores must have some degree of curvature. For aluminum honeycombs, this can be done using a contour cutter, a 3-D tracer, and numerically controlled machines. However, burrs and buckling of cell walls present a difficult problem for surface accuracy. It is clear that the machining of composite cores requires more expensive and sophisticated systems. Realizing curvatures in honeycombs is also difficult because they deform into a saddle shape when bent. Achieving a large and accurate curvature requires special cell shapes such as flexcore [2] or cells having auxetic behavior [12,13]. This study proposes a novel method to construct arbitrary cross-section composite honeycombs. The basic idea originates from the fold-made paper honeycombs proposed by Nojima and Saito in 2007 [14], in which they attempted to apply origami and kirigami techniques to the creation of sandwich structures. Origami is the traditional Japanese art of paper folding and has received widespread attention from artists, architects, and mathematicians. Kirigami is a variation of origami. While traditional origami prohibits the cutting of paper, it is permissible in kirigami. Kirigami artists create remarkable patterns on paper using a combination of cutting and folding. Figure 1 shows the basic concept of kirigami (or origami) honeycombs. The advantage of this method is that it can be extended to manufacture 3D (non-flat) honeycombs. This is achieved only by changing the folding line patterns. Some paper samples and their folding line diagrams (FLDs) are shown in Fig. 2. However, these previous studies [14] have not included non-convex-shaped cross-sections. Because of geometrical restrictions, their FLDs cannot be drawn on single sheets of flat paper. Generalized FLD design methods have also remained a challenge. In this study, these 3D kirigami honeycombs are generalized by numerical parameters, and a new FLD design method is devised. This study also includes non-convex-shaped cross-sections that have not been possible to realize in previous studies. The outline of the paper is as follows. First, the design method of the FLD is devised. 
This involves the calculation of the position of the folding lines and slits from given cross-sectional shapes. The second part describes the condition pertaining to foldability and proposes a modified method for unfoldable cross-sections. This approach enables arbitrary cross-section honeycombs (including non-convex honeycombs) to be made by folding single sheets. The third part describes a method of applying this technique to advanced composite materials. As an example of kirigami composite honeycombs, a wingbox was already manufactured in autoclave-cured woven Kevlar fabric using the FLD around an airfoil profile [15]. In Ref. [15], dashed-line cutting is used on the folding lines. This study proposes a new method to introduce folding lines on CFRP; silicone rubber is used as the matrix for the hinge areas. Finally, these foldable composites that are cured in corrugated shapes in autoclaves are folded into honeycomb shapes, and some typical samples are shown with their FLDs. Fig. 1 Concept of kirigami honeycomb core. (a) Basic folding line diagram. Thick lines: slits. Fine lines: mountain folding lines. Dashed lines: valley folding lines. (b)\u2013(d) The folding process for realizing a honeycomb shape. Fig. 2 Examples of 3D folded honeycombs and their folding line diagrams (FLDs). Upper: tapered honeycomb. Lower: convex curved honeycomb. Black lines and areas: slits or cutouts. Gray lines: folding lines."}
{"_id": "1e55b1e69895a79124bb881f8ca994c42e5fd1c3", "title": "Software-based resolver-to-digital converter using the PLL tracking algorithm", "text": "In this paper, an all-digital resolver-to-digital conversion (RDC) system using phase-locked loop (PLL) tracking algorithm is proposed. Excitation signal is obtained from PWM module of a TMS320F2812 digital signal processor (DSP). According to the excitation signal, the resolver outputs differential sine and cosine signals, which are sampled by a 16-bit peripheral AD7606 module. The rotor angle position is estimated with the PLL tracking algorithm based on DSP. The proposed RDC system is applied to a servo synchronous motor with a resolver coupled to the shaft, and the electrical error of the resolver is \u00b110'. Finally, responding simulations and experiments are carried out. The results demonstrate that the RDC system has the features with simple hardware structure and low cost, and high-precision resolver-to-digital conversion is achieved."}
{"_id": "7d100c7eae045c0296139cc5aadfa29b28d02d97", "title": "Adopting the Cognitive Complexity Measure for Business Process Models", "text": "Business process models, often modelled using graphical languages like UML, serve as a base for communication between the stakeholders in the software development process. To fulfil this purpose, they should be easy to understand and easy to maintain. For this reason, it is useful to have measures that can give us some information about understandability, analyzability and maintainability of a business process model. Shao and Wang (2003) have proposed a cognitive complexity measure. It can be used to estimate the comprehension effort for understanding software. This paper discusses how these research results can be extended in order to analyze the cognitive complexity of graphical business process models"}
{"_id": "dc1e06779e5e6b4d068aa6796dc7b4c69fa788c4", "title": "Challenges in Exploiting Conversational Memory in Human-Agent Interaction", "text": "In human interactions, language is used to project and maintain a social identity over time. The way people speak with others and revisit language across repeated interactions helps to create rapport and develop a feeling of coordination between conversational partners. Memory of past conversations is the main mechanism that allows us to exploit and explore ways of speaking, given knowledge acquired in previous encounters. As such, we introduce an agent that uses its conversational memory to revisit shared history with users to maintain a coherent social relationship over time. In this paper, we describe the dialog management mechanisms to achieve these goals when applied to a robot that engages in social chit-chat. In a study lasting 14 days with 28 users, totaling 474 interactions, we find that it is difficult to leverage the shared history with individual users and to also accommodate to expected conversational coordination patterns. We discuss the implications of this finding for long-term human-agent interaction. In particular, we highlight the importance of topic modeling and signaling explicit recall of previous episodes. Moreover, the way that users contribute to interactions requires additional adaptation, indicating a significant challenge for language interaction designers."}
{"_id": "b2650126e807ea06514fdc9f95b128846c87046e", "title": "Are leader stereotypes masculine? A meta-analysis of three research paradigms.", "text": "This meta-analysis examined the extent to which stereotypes of leaders are culturally masculine. The primary studies fit into 1 of 3 paradigms: (a) In Schein's (1973) think manager-think male paradigm, 40 studies with 51 effect sizes compared the similarity of male and leader stereotypes and the similarity of female and leader stereotypes; (b) in Powell and Butterfield's (1979) agency-communion paradigm, 22 studies with 47 effect sizes compared stereotypes of leaders' agency and communion; and (c) in Shinar's (1975) masculinity-femininity paradigm, 7 studies with 101 effect sizes represented stereotypes of leadership-related occupations on a single masculinity-femininity dimension. Analyses implemented appropriate random and mixed effects models. All 3 paradigms demonstrated overall masculinity of leader stereotypes: (a) In the think manager-think male paradigm, intraclass correlation = .25 for the women-leaders similarity and intraclass correlation = .62 for the men-leaders similarity; (b) in the agency-communion paradigm, g = 1.55, indicating greater agency than communion; and (c) in the masculinity-femininity paradigm, g = 0.92, indicating greater masculinity than the androgynous scale midpoint. Subgroup and meta-regression analyses indicated that this masculine construal of leadership has decreased over time and was greater for male than female research participants. In addition, stereotypes portrayed leaders as less masculine in educational organizations than in other domains and in moderate- than in high-status leader roles. This article considers the relation of these findings to Eagly and Karau's (2002) role congruity theory, which proposed contextual influences on the incongruity between stereotypes of women and leaders. The implications for prejudice against women leaders are also considered."}
{"_id": "c73482cbb07008080b1623ded3216aa3e657d694", "title": "Design and Characterization of Miniaturized Patch Antennas Loaded With Complementary Split-Ring Resonators", "text": "An investigation into the design of compact patch antennas loaded with complementary split-ring resonators (CSRRs) and reactive impedance surface (RIS) is presented in this study. The CSRR is incorporated on the patch as a shunt LC resonator providing a low resonance frequency and the RIS is realized using the two-dimensional metallic patches printed on a metal-grounded substrate. Both the meta-resonator (CSRR) and the meta-surface (RIS) are able to miniaturize the antenna size. By changing the configuration of the CSRRs, multi-band operation with varied polarization states can be obtained. An equivalent circuit has been developed for the CSRR-loaded patch antennas to illustrate their working principles. Six antennas with different features are designed and compared, including a circularly-polarized antenna, which validate their versatility for practical applications. These antennas are fabricated and tested. The measured results are in good agreement with the simulation."}
{"_id": "77d77f899874f61fbf37299e2724b1bc07227479", "title": "Semi-Supervised Neural Machine Translation with Language Models", "text": "Training neural machine translation models is notoriously slow and requires abundant parallel corpora and computational power. In this work we propose an approach of transferring knowledge from separately trained language models to translation systems, also we investigate several techniques to improve translation quality when there is a lack of parallel data and computational resources. Our method is based on fusion between translation system and language model, and initialization of translation system with weights from pretrained language models. We show that this technique gives +2.2 BLEU score on En\u2013Fr pair of WMT europarl-7 dataset and allows us to reach 20.7 BLEU on 20k parallel sentences in less than 8 hours of training on a single NVIDIA GeForce GTX 1080 Ti. We specifically note, that for this advance we use nothing but monolingual corpora for source and target languages. Significant part of presented results was obtained during DeepHack.Babel hackathon on low-resource machine translation organized by iPavlov Lab."}
{"_id": "30ff15a515dc1a4c653760c65f40e952a38b6dfe", "title": "Non-specific effects of methylphenidate (Ritalin) on cognitive ability and decision-making of ADHD and healthy adults", "text": "The effect of a single dose of methylphenidate (MPH) on cognitive measures and decision-making processes was assessed in a sample of adults with ADHD and in a control sample. Thirty-two adults satisfying DSM-IV criteria for ADHD and 26 healthy controls performed several cognitive tasks. Half of the participants received MPH prior to performing the tasks, and the other half received placebo in a randomized, double-blind manner. The average digit-span test score was higher in the groups receiving MPH compared to the groups receiving placebo, while diagnosis did not have an effect upon scores. In decision-making tasks, however, MPH did not have an effect upon performance, whereas in one of the tasks the average proportion of risky choices was higher in ADHD adults compared to controls. Our data therefore demonstrates that (a) MPH is capable of enhancing specific aspects of cognitive performance and (b) this enhancement is not specific to ADHD."}
{"_id": "168a89ce530c63720da844a30f5fce0c8f00fe8b", "title": "Evaluating Window Joins over Unbounded Streams", "text": "We investigate algorithms for evaluating sliding window joins over pairs of unbounded streams. We introduce a unittime-basis cost model to analyze the expected performance of these algorithms. Using this cost model, we propose strategies for maximizing the efficiency of processing joins in three scenarios. First, we consider the case where one stream is much faster than the other. We show that asymmetric combinations of join algorithms, (e.g., hash join on one input, nested-loops join on the other) can outperform symmetric join algorithm implementations. Second, we investigate the case where system resources are insufficient to keep up with the input streams. We show that we can maximize the number of join result tuples produced in this case by properly allocating computing resources across the two input streams. Finally, we investigate strategies for maximizing the number of result tuples produced when memory is limited, and show that proper memory allocation across the two input streams can result in significantly lower resource usage and/or more result tuples produced."}
{"_id": "197bdab96f4fc8b4ac9706be5df13ccc94761583", "title": "Reconfigurable Multiband Antenna Designs for Wireless Communication Devices", "text": "New designs for compact reconfigurable antennas are introduced for mobile communication devices. The uniqueness of the antenna designs are that they allow various groups of their operating frequency bands to be selected electronically. In particular, each group of frequency bands, or mode, can be made to serve several different communication systems simultaneously. These systems may include various combinations of GSM, DCS, PCS, UMTS, Bluetooth, and wireless local-area network (LAN). Therefore, by electronically selecting different antenna modes, a variety of communication systems can be conveniently served by only one antenna. One advantage is that through the different operational modes, the total antenna volume can be reused, and therefore the overall antenna can be made compact. In these designs, the selection of the different modes is achieved by either i) switching different feeding locations of the antenna (switched feed) or ii) switching or breaking of the antenna's connection to the ground (switched ground). This paper demonstrates these two designs. For the first design of switched feed, it can support GSM, DCS, PCS, and UMTS. In the second design, the antenna makes use of a switched-ground technique, which can cover GSM, DCS, PCS, UMTS, Bluetooth, and 2.4 GHz wireless LAN. The designs are investigated when ideal switches and also various realistic active switches based on PIN diodes, GaAs field effect transistor, and MEMs configurations. The designs are verified through both numerical simulations and measurement of an experimental prototype. The results confirm good performance of the two multiband reconfigurable antenna designs."}
{"_id": "089539093a2bd96b5ed38baf910fd42a76c1e4b0", "title": "An Efficient Multiplication Algorithm Using Nikhilam Method", "text": "Multiplication is one of the most important operation in computer arithmetic. Many integer operations such as squaring, division and computing reciprocal require same order of time as multiplication whereas some other operations such as computing GCD and residue operation require at most a factor of logn time more than multiplication. We propose an integer multiplication algorithm using Nikhilam method of Vedic mathematics which can be used to multiply two binary numbers efficiently."}
{"_id": "4a430d84f1878f2ef6c5618434158877a0153ad3", "title": "Comparison of eSports and Traditional Sports Consumption Motives", "text": "With recognition of the need for studying eSports in this interactive digital communication era, this study explored 14 motivational factors affecting the time spent on eSports gaming. Using a sample of 515 college students and athletic event attendees, we further compared eSports game patterns to their non-eSport or traditional sport involvements (game participation, game attendance, sports viewership, sports readership, sports listenership, Internet usage specific to sports, and purchase of team merchandise). Multiple regression results indicated that competition and skill had a statistically significant impact on the time spent on eSports games while peer pressure had marginal significance. Related to the overall findings, developing tailored messages that drives consumption behaviors of target audiences to specific eSports games will provide a better chance for marketers to fulfill their strategic goals of increased purchasing and larger market shares. Understanding that the interest in competition and skill are critical to eSports gamers may influence marketers to focus on creating games and opportunities for gamers to compete against each other and give tangible rewards to the winner. The use of peer pressure may be another motivational factor for playing. Consequently, those marketing dollars could be spent more on the interactive nature of game design. The subsequent analysis on cross-validation check suggests that the results of the regression analysis could be generalized."}
{"_id": "9bb0aa7c062a1ac3df0a73d1e7caa88937e9716e", "title": "Protocols for Public Key Cryptosystems", "text": "New Cryptographic protocols which take full advantage of the unique properties of public key cryptosystems are now evolving. Several protocols for public key distribution and for digital signatures are briefly compared with each other and with the conventional alternative."}
{"_id": "39348c10c90be968357e2a6b65d5e0e479307735", "title": "Friends and neighbors on the Web", "text": "The Internet has become a rich and large repository of information about us as individuals. Anything from the links and text on a user's homepage to the mailing lists the user subscribes to are reflections of social interactions a user has in the real world. In this paper we devise techniques to mine this information in order to predict relationships between individuals. Further we show that some pieces of information are better indicators of social connections than others, and that these indicators vary between user populations and provide a glimpse into the social lives of individuals in different communities. Our techniques provide potential applications in automatically inferring real-world connections and discovering, labeling, and characterizing communities."}
{"_id": "e6c177d30479f70e3bab84e087c01833002ddc7b", "title": "Consumer Decision Making in Online Shopping Environments: The Effects of Interactive Decision Aids", "text": "Despite the explosive growth of electronic commerce and the rapidly increasing number of consumers who use interactive media (such as the World Wide Web) for prepurchase information search and online shopping, very little is known about how consumers make purchase decisions in such settings. A unique characteristic of online shopping environments is that they allow vendors to create retail interfaces with highly interactive features. One desirable form of interactivity from a consumer perspective is the implementation of sophisticated tools to assist shoppers in their purchase decisions by customizing the electronic shopping environment to their individual preferences. The availability of such tools, which we refer to as interactive decision aids for consumers, may lead to a transformation of the way in which shoppers search for product information and make purchase decisions. The primary objective of this paper is to investigate the nature of the effects that interactive decision aids may have on consumer decision making in online shopping environments. While making purchase decisions, consumers are often unable to evaluate all available alternatives in great depth and, thus, tend to use two-stage processes to reach their decisions. At the first stage, consumers typically screen a large set of available products and identify a subset of the most promising alternatives. Subsequently, they evaluate the latter in more depth, perform relative comparisons across products on important attributes, and make a purchase decision. Given the different tasks to be performed in such a two-stage process, interactive tools that provide support to consumers in the following respects are particularly valuable: (1) the initial screening of available products to determine which ones are worth considering further, and (2) the in-depth comparison of selected products before making the actual purchase decision. This paper examines the effects of two decision aids, each designed to assist consumers in performing one of the above tasks, on purchase decision making in an online store. The first interactive tool, a recommendation agent (RA), allows consumers to more efficiently screen the (potentially very large) set of alternatives available in an online shopping environment. Based on self-explicated information about a consumer\u2019s own utility function (attribute importance weights and minimum acceptable attribute levels), the RA generates a personalized list of recommended alternatives. The second decision aid, a comparison matrix (CM), is designed to help consumers make in-depth comparisons among selected alternatives. The CM allows consumers to organize attribute information about multiple products in an alternatives attributes matrix and to have alternatives sorted by any attribute. Based on theoretical and empirical work in marketing, judgment and decision making, psychology, and decision support systems, we develop a set of hypotheses pertaining to the effects of these two decision aids on various aspects of consumer decision making. In particular, we focus on how use of the RA and CM affects consumers\u2019 search for product information, the size and quality of their consideration sets, and the quality of their purchase decisions in an online shopping environment. 
A controlled experiment using a simulated online store was conducted to test the hypotheses. The results indicate that both interactive decision aids have a substantial impact on consumer decision making. As predicted, use of the RA reduces consumers\u2019 search effort for product information, decreases the size but increases the quality of their consideration sets, and improves the quality of their purchase decisions. Use of the CM also leads to a decrease in the size but an increase in the quality of consumers\u2019 consideration sets, and has a favorable effect on some indicators of decision quality. In sum, our findings suggest that interactive tools designed to assist consumers in the initial screening of available alternatives and to facilitate in-depth comparisons among selected alternatives in an online shopping environment may have strong favorable effects on both the quality and the efficiency of purchase decisions\u2014shoppers can make much better decisions while expending substantially less effort. This suggests that interactive decision aids have the potential to drastically transform the way in which consumers search for product information and make purchase decisions. (Decision Making; Online Shopping; Electronic Commerce; Decision Aids; Recommendation Agents; Consumer Behavior; Information Search; Consideration Sets; Information Processing)"}
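The two-stage process described above maps naturally onto two small procedures: an RA-style screen that applies minimum acceptable attribute levels and ranks survivors by a weighted utility, and a CM-style sort of the shortlist on any attribute. The sketch below is a toy illustration with hypothetical products and weights, not the instrument used in the experiment:

```python
# Toy sketch of the two decision aids: a recommendation agent (RA)
# that screens alternatives using self-explicated importance weights
# and minimum acceptable attribute levels, and a comparison matrix
# (CM) that sorts shortlisted alternatives by any attribute.
products = [
    {"name": "A", "price": 9, "battery": 7, "weight": 5},
    {"name": "B", "price": 6, "battery": 9, "weight": 8},
    {"name": "C", "price": 8, "battery": 4, "weight": 9},
]
weights = {"price": 0.5, "battery": 0.3, "weight": 0.2}   # importance
minima  = {"battery": 5}                                   # cutoff levels

def ra_screen(items, weights, minima, top_k=2):
    ok = [p for p in items if all(p[a] >= v for a, v in minima.items())]
    utility = lambda p: sum(w * p[a] for a, w in weights.items())
    return sorted(ok, key=utility, reverse=True)[:top_k]

def cm_sort(items, attribute):
    return sorted(items, key=lambda p: p[attribute], reverse=True)

shortlist = ra_screen(products, weights, minima)
print([p["name"] for p in shortlist])              # ['A', 'B']
print([p["name"] for p in cm_sort(shortlist, "battery")])  # ['B', 'A']
```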
{"_id": "1356b1daebf1114a2a0f3e6dfee606bdc06e4fc2", "title": "Efficient identification of Web communities", "text": "\"# %$& $' & )(*$' $+ , .-/ 0 %1 !23 3 45$768 3 + 91: 1 3 ;< * =\"> 1 $? )(> 0 % +@ 3 A % B C @ =\"> 1 $ D*EA =\"> 91 $F %(:$' G H % = @ I . % A\"> * J 3 K 2 L K . 3 = & ,M 3 ON . QP 3 @ R S(T1G % 9 ! %1 4>U) 7 1 V W$' 1 ! 3$? X> %$' . Y %( 45 , 7 =\"> 1 $ U% % :$' L 4Y $' L$Z $V %( ! 232 @[4 . 7 + % @[ =\"> 91 $ D \\ (] K $' . A 91G . 723 17 ) 0 1G 723$7 = Y M . X C . ) A )X X 1 ,M @ & , ^ 3 & =\"> 91 $' 3XA\"K A % _% K 3 _= ^_%1G %X B 3 @ =\"K ` ! 1G 72 7 23 3 45$< ^ *-5 1' %2 $' 3 4Y 5 Dba! c 9(T@ (T 3-/ $'$7 %(< * %X X 1 .M 3 & ) 3 A %23_ )1 3 d L$c $Z '1G ) . 7 R$' -/ 1G )2* 1G 72*1 $' 23 $C ) C L K (T e \" $ UF % %1 3 $ U : \"A1 3 _%$ U ) A ) 1723 3 4 %X> 23 _% L $f , c )1 ^ $' (] 2g\" 7 ) %$' 32 A . , _ )1 3h . iD \\ X X 23 3 . ) 3 % $7 %(< % 1c )X X 1 % G A 3 2L F(T %@ $' . A 91G . 723 1 $F % C$' . ,1 9 B _ 3 $ U> % & , L ^X> %X 2j , 3 )( X> )1' G %2g . ) _% %1 3 $ Uk ) L X 1 ,-/ . 2 1 3 _ D Categories and Subject Descriptors l`D m5D n o pCq r q sSq t u vQq wSq x u yzu wir9{[|i}^ , G %\" %$' \\ X X 2L 3 . , 3 $I~ \u007f \u0080%\u0081[\u0080F\u0082=\u0083]\u0084 \u0083]\u00845\u0085 \u0086 l D \u00875D \u0087^o \u0088 wg\u0089 \u008a \u008b%yzq r.\u008cT\u008a>wz\u008dir \u008a \u008b,q x u`q w?\u008eA\u008f&u5r \u008b)\u008cTu5\u0090 q \u0091j{\u0092| \u0093 (T %1 & ) 3 % \u0095\u00945 . )1 G % \u0096f 9 '1 L )2 ~\u0098\u0097 \u0099 \u009a5\u009b \u0081 \u009c9\u009d9\u0083]\u0084K\u0085%\u009e^\u0083]\u0084.\u009f %\u009d9\u0082+\u0080%\u00818\u0083\u0092 %\u0084 \u00a1 \u0099 \u0081 \u009c9\u009d9\u0083T\u00845\u0085 \u00865\u00a2 D m5D m o pA\u008cTt.\u00a3 \u008b,u r u vQq r \u00a4Su yzq r.\u008cT\u00a3/t {\u0092| \u00a2 1G %X =a! %1' K~ \u0084>\u009c9\u00818\u00a5W %\u009d \u00a6^\u00a7 \u009d' / \u0308 \u0099 \u009c9\u0082 \u009b"}
{"_id": "1460a86302668eb1be7e32055c58e57e2f2a2f24", "title": "Reducing Buyer Search Costs : Implications for Electronic Marketplaces", "text": "Information systems can serve as intermediaries between the buyers and the sellers in a market, creating an \u201celectronic marketplace\u201d that lowers the buyers' cost to acquire information about seller prices and product offerings. As a result, electronic marketplaces reduce the inefficiencies caused by buyer search costs, in the process reducing the ability of sellers to extract monopolistic profits while increasing the ability of markets to optimally allocate productive resources. This article models the role of buyer search costs in markets with differentiated product offerings. The impact of reducing these search costs is analyzed in the context of an electronic marketplace, and the allocational efficiencies such a reduction can bring to a differentiated market are formalized. The resulting implications for the incentives of buyers, sellers and independent intermediaries to invest in electronic marketplaces are explored. Finally, the possibility to separate price information from product attribute information is introduced, and the implications of designing markets promoting competition along each of these dimensions are discussed. Copyright \u00a9 1997 by The Institute of Management Sciences"}
{"_id": "3a83bd97a340f5270fc99f1ea20a1708eb0111cb", "title": "Learning Probabilistic Models of Link Structure", "text": "Most real-world data is heterogeneous and richly interconnected. Examples include the Web, hypertext, bibliometric data and social networks. In contrast, most statistical learning methods work with \u201cflat\u201d data representations, forcing us to convert our data into a form that loses much of the link structure. The recently introduced framework of probabilistic relational models(PRMs) embraces the object-relational nature of structured data by capturing probabilistic interactions between attributes of related entities. In this paper, we extend this framework by modeling interactions between the attributes and the link structure itself. An advantage of our approach is a unified generative model for both content and relational structure. We propose two mechanisms for representing a probabilistic distribution over link structures: reference uncertaintyandexistence uncertainty . We describe the appropriate conditions for using each model and present learning algorithms for each. We present experimental results showing that the learned models can be used to predict link structure and, moreover, the observed link structure can be used to provide better predictions for the attributes in the model."}
{"_id": "a2741e0337478e1a7cbf508a48730a6a838073d5", "title": "A 2.1 MHz Crystal Oscillator Time Base with a Current Consumption under 500 nA", "text": "A micro-power circuit is encapsulated with a 2.1 MHz ZT-cut quartz in a vacuum package. The oscillator core has 2 complementary active MOSFETS and amplitude stabilization. New coupling and biasing circuits, and dynamic frequency dividers allow to achieve \u00b12 ppm frequency stability down to 1.8 V with a current under 0.5 \u00bfA."}
{"_id": "b2e7c5ddb686d503091dce57efe67fa89480fba4", "title": "Partial Image Encryption based on Block wise Shuffling using Arnold Map", "text": "To prevent image from unauthorized access, Encryption techniques of digital images play a very important role . The Fully image encryption techniques is very complex in nature and takes a lot of computational time. But, certain applications do not require total encryption; it requires a part of the multimedia data to be transparent to all users, like Pay-TV or Payable Internet Imaging Albums which involves encrypting the most meaningful parts of an image. In this paper we focus on a partial image encryption technique based on block wise shuffling with the help of Arnold map . Firstly image is divided into blocks then Blocks within the image are permuted by using Arnold map and final permuted block are combined that yields partial encrypted images . This process is repeated for different block size. The PSNR and NPCR obtained by our technique shows that the proposed technique gives better result than the existing techniques."}
{"_id": "c5081aec479e2b7784d033ce4ec2d729f1d08a99", "title": "Solid lipid nanoparticles: production, characterization and applications.", "text": "Solid lipid nanoparticles (SLN) have attracted increasing attention during recent years. This paper presents an overview about the selection of the ingredients, different ways of SLN production and SLN applications. Aspects of SLN stability and possibilities of SLN stabilization by lyophilization and spray drying are discussed. Special attention is paid to the relation between drug incorporation and the complexity of SLN dispersions, which includes the presence of alternative colloidal structures (liposomes, micelles, drug nanosuspensions, mixed micelles, liquid crystals) and the physical state of the lipid (supercooled melts, different lipid modifications). Appropriate analytical methods are needed for the characterization of SLN. The use of several analytical techniques is a necessity. Alternative structures and dynamic phenomena on the molecular level have to be considered. Aspects of SLN administration and the in vivo fate of the carrier are discussed."}
{"_id": "8dae4b64804974d542b819eca6b8393ee458307a", "title": "School-based interventions for aggressive and disruptive behavior: update of a meta-analysis.", "text": "BACKGROUND\nResearch about the effectiveness of school-based psychosocial prevention programs for reducing aggressive and disruptive behavior was synthesized using meta-analysis. This work updated previous work by the authors and further investigated which program and student characteristics were associated with the most positive outcomes.\n\n\nMETHODS\nTwo hundred forty-nine experimental and quasi-experimental studies of school-based programs with outcomes representing aggressive and/or disruptive behavior were obtained. Effect sizes and study characteristics were coded from these studies and analyzed.\n\n\nRESULTS\nPositive overall intervention effects were found on aggressive and disruptive behavior and other relevant outcomes. The most common and most effective approaches were universal programs and targeted programs for selected/indicated children. The mean effect sizes for these types of programs represent a decrease in aggressive/disruptive behavior that is likely to be of practical significance to schools. Multicomponent comprehensive programs did not show significant effects and those for special schools or classrooms were marginal. Different treatment modalities (e.g., behavioral, cognitive, social skills) produced largely similar effects. Effects were larger for better-implemented programs and those involving students at higher risk for aggressive behavior.\n\n\nCONCLUSIONS\nSchools seeking prevention programs may choose from a range of effective programs with some confidence that whatever they pick will be effective. Without the researcher involvement that characterizes the great majority of programs in this meta-analysis, schools might be well-advised to give priority to those that will be easiest to implement well in their settings."}
{"_id": "08a15672ce6c6918cf8b35355c65ebced8d71562", "title": "The Dark Side(-Channel) of Mobile Devices: A Survey on Network Traffic Analysis", "text": "In recent years, mobile devices (e.g., smartphones and tablets) have met an increasing commercial success and have become a fundamental element of the everyday life for billions of people all around the world. Mobile devices are used not only for traditional communication activities (e.g., voice calls and messages) but also for more advanced tasks made possible by an enormous amount of multi-purpose applications (e.g., finance, gaming, and shopping). As a result, those devices generate a significant network traffic (a consistent part of the overall Internet traffic). For this reason, the research community has been investigating security and privacy issues that are related to the network traffic generated by mobile devices, which could be analyzed to obtain information useful for a variety of goals (ranging from fine-grained user profiling to device security and network optimization). In this paper, we review the works that contributed to the state of the art of network traffic analysis targeting mobile devices. In particular, we present a systematic classification of the works in the literature according to three criteria: 1) the goal of the analysis; 2) the point where the network traffic is captured; and 3) the targeted mobile platforms. In this survey, we consider points of capturing such as Wi-Fi access points, software simulation, and inside real mobile devices or emulators. For the surveyed works, we review and compare analysis techniques, validation methods, and achieved results. We also discuss possible countermeasures, challenges, and possible directions for future research on mobile traffic analysis and other emerging domains (e.g., Internet of Things). We believe our survey will be a reference work for researchers and practitioners in this research field."}
{"_id": "453347aedfd17a601076868f1662f059f90a0185", "title": "Proportions of Hand Segments", "text": "Such investigation methods as thermography or isotope scanning allow visualizing inflammation, neoplastic and some other pathological processes, and their location in the body. But the knowledge of regional topography is necessary for exact localization of the process. Such external anatomical landmarks of the hand as bone prominences and creases help to localize its joints and other structures during the clinical examination. On X-ray pictures, MRIs and CT images or during ultrasonography, anatomical structures are well visualized and their localization is not a problem. However, thermography and isotope scanning allow us to see only the shape and outlines of the anatomical region. So the identification of anatomical structures, particularly hand joints, on thermograms and bone scans is often difficult."}
{"_id": "70614845363764ac3a68a2fb25c91c3fa22f139f", "title": "Removing rolling shutter wobble", "text": "We present an algorithm to remove wobble artifacts from a video captured with a rolling shutter camera undergoing large accelerations or jitter. We show how estimating the rapid motion of the camera can be posed as a temporal super-resolution problem. The low-frequency measurements are the motions of pixels from one frame to the next. These measurements are modeled as temporal integrals of the underlying high-frequency jitter of the camera. The estimated high-frequency motion of the camera is then used to re-render the sequence as though all the pixels in each frame were imaged at the same time. We also present an auto-calibration algorithm that can estimate the time between the capture of subsequent rows in the camera."}
{"_id": "d729b3a3f519a7b095891ed2b6aab459f6e121a9", "title": "A Multi-ESD-Path Low-Noise Amplifier With a 4.3-A TLP Current Level in 65-nm CMOS", "text": "This paper studies the electrostatic discharge (ESD)-protected RF low-noise amplifiers (LNAs) in 65-nm CMOS technology. Three different ESD designs, including double-diode, modified silicon-controlled rectifier (SCR), and modified-SCR with double-diode configurations, are employed to realize ESD-protected LNAs at 5.8 GHz. By using the modified-SCR in conjunction with double-diode, a 5.8-GHz LNA with multiple ESD current paths demonstrates a 4.3-A transmission line pulse (TLP) failure level, corresponding to a ~ 6.5-kV Human-Body-Mode (HBM) ESD protection level. Under a supply voltage of 1.2 V and a drain current of 6.5 mA, the proposed ESD-protected LNA demonstrates a noise figure of 2.57 dB with an associated power gain of 16.7 dB. The input third-order intercept point (IIP3) is - 11 dBm, the input and output return losses are greater than 15.9 and 20 dB, respectively."}
{"_id": "340a3377f138d88bce98ef5fe2f099fd6ebc5eda", "title": "High-performance carbon nanotube field-effect transistor with tunable polarities", "text": "State-of-the-art carbon nanotube field-effect transistors (CNFETs) behave as Schottky-barrier-modulated transistors. It is known that vertical scaling of the gate oxide significantly improves the performance of these devices. However, decreasing the oxide thickness also results in pronounced ambipolar transistor characteristics and increased drain leakage currents. Using a novel device concept, we have fabricated high-performance enhancement-mode CNFETs exhibiting n- or p-type unipolar behavior, tunable by electrostatic and/or chemical doping, with excellent OFF-state performance and a steep subthreshold swing (S=63 mV/dec). The device design allows for aggressive oxide thickness and gate-length scaling while maintaining the desired device characteristics."}
{"_id": "f65610116165ff751d49ef80f39f144ea49f751d", "title": "Reduced Number of Hypocretin Neurons in Human Narcolepsy", "text": "Murine and canine narcolepsy can be caused by mutations of the hypocretin (Hcrt) (orexin) precursor or Hcrt receptor genes. In contrast to these animal models, most human narcolepsy is not familial, is discordant in identical twins, and has not been linked to mutations of the Hcrt system. Thus, the cause of human narcolepsy remains unknown. Here we show that human narcoleptics have an 85%-95% reduction in the number of Hcrt neurons. Melanin-concentrating hormone (MCH) neurons, which are intermixed with Hcrt cells in the normal brain, are not reduced in number, indicating that cell loss is relatively specific for Hcrt neurons. The presence of gliosis in the hypocretin cell region is consistent with a degenerative process being the cause of the Hcrt cell loss in narcolepsy."}
{"_id": "51f3ba2fc98fb2d0f75a090d08f91d7840a7b57c", "title": "Enhancing Reliability of Workflow Execution Using Task Replication and Spot Instances", "text": "Cloud environments offer low-cost computing resources as a subscription-based service. These resources are elastically scalable and dynamically provisioned. Furthermore, cloud providers have also pioneered new pricing models like spot instances that are cost-effective. As a result, scientific workflows are increasingly adopting cloud computing. However, spot instances are terminated when the market price exceeds the users bid price. Likewise, cloud is not a utopian environment. Failures are inevitable in such large complex distributed systems. It is also well studied that cloud resources experience fluctuations in the delivered performance. These challenges make fault tolerance an important criterion in workflow scheduling. This article presents an adaptive, just-in-time scheduling algorithm for scientific workflows. This algorithm judiciously uses both spot and on-demand instances to reduce cost and provide fault tolerance. The proposed scheduling algorithm also consolidates resources to further minimize execution time and cost. Extensive simulations show that the proposed heuristics are fault tolerant and are effective, especially under short deadlines, providing robust schedules with minimal makespan and cost."}
{"_id": "a0f101eec1ef4be09a68fd8aec27cab724720d93", "title": "Automatic Tongue Diagnosis Using a Smart Phone", "text": "An automatic tongue diagnosis framework is proposed to analyzing tongue images taken by smart phones. Different from conventional tongue diagnosis systems, our input tongue images are usually in low resolution and taken under unknown lighting conditions. Consequently, existing tongue diagnosis methods cannot be directly applied to give accurate results. We propose a lighting condition estimation method based on the SVM classifier to predict the color correction matrix according to color difference of images taken with and without flashlight. We also modify the state of the art work of fur and fissure detection and successfully improve the detection accuracy by taking hue information into consideration and adding a de-noising step."}
{"_id": "43180e03a0adfce0a7d63180220a6b08e6853315", "title": "Visual scanning of faces in autism.", "text": "The visual scanpaths of five high-functioning adult autistic males and five adult male controls were recorded using an infrared corneal reflection technique as they viewed photographs of human faces. Analyses of the scanpath data revealed marked differences in the scanpaths of the two groups. The autistic participants viewed nonfeature areas of the faces significantly more often and core feature areas of the faces (i.e., eyes, nose, and mouth) significantly less often than did control participants. Across both groups of participants, scanpaths generally did not differ as a function of the instructions given to the participants (i.e., \"Please look at the faces in any manner you wish.\" vs. \"Please identify the emotions portrayed in these faces.\"). Autistic participants showed a deficit in emotion recognition, but this effect was driven primarily by deficits in the recognition of fear. Collectively, these results indicate disorganized processing of face stimuli in autistic individuals and suggest a mechanism that may subserve the social information processing deficits that characterize autism spectrum disorders."}
{"_id": "41d47de1571a6e495cde9f88e514d34dc95d3216", "title": "Adaptive block truncation filter for MVC depth image enhancement", "text": "In Multiview Video plus Depth (MVD) format, virtual views are generated from decoded texture videos with decoded depth images through Depth Image based Rendering (DIBR). 3DV-ATM is a reference model for H.264/AVC based Multiview Video Coding (MVC) and aims at achieving high coding efficiency for 3D video in MVD format. Depth images are first downsampled then coded by 3DV-ATM. However, sharp object boundary characteristic of depth images does not well match with the transform coding of 3DV-ATM. Depth boundaries are often blurred with ringing artifacts in the decoded depth images that result in noticeable artifacts in synthesized views. This paper presents a low complexity adaptive block truncation filter to recover the sharp object boundaries of depth images using adaptive block repositioning and expansion for increasing the depth values refinement accuracy. This new approach is very efficient and can avoid false depth boundary refinement when block boundaries lie around the depth edge regions and ensure sufficient information within the processing block for depth layers classification. Experimental results show that sharp depth edges can be recovered using the proposed filter and boundary artifacts in the synthesized views can be removed. The proposed method can provide improvement up to 3.25dB in the depth map enhancement and bitrate reduction of 3.06% in the synthesized views."}
{"_id": "1e143951f55bbc7b4df2816faa90ec2c65d6891a", "title": "A novel topology design approach using an integrated deep learning network architecture", "text": "Topology design optimization offers tremendous oppo rtunity in design and manufacturing freedoms by designing and producing a part from the ground-up without a meaningful initial design as required by conventional shape design opt imization approaches. Ideally, with adequate problem statements, to formulate and solve the topo logy design problem using a standard topology optimization process, such as SIMP (Simpli fied Isotropic Material with Penalization) is possible. In reality, an estimated over thousands o f design iterations is often required for just a few design variables, the conventional optimization approach is in general impractical or computationally unachievable for real world applica tions significantly diluting the development of the topology optimization technology. There is, therefore, a need for a different approach that will be able to optimize the initial design topolog y effectively and rapidly. Therefore, this work presents a new topology design procedure to generat e optimal structures using an integrated Generative Adversarial Networks (GANs) and convolut i nal neural network architecture. The discriminator in the GANs as well as the convolutio nal network are initially trained through the dataset of 3024 true optimized planner structure im ages generated from a conventional topology design approach (SIMP). The convolutional network m aps the optimized structure to the volume fraction, penalty and radius of the smoothening fil ter. Once the GAN is trained, the generator produced a large number of new unseen structures sa ti fying the design requirements. The corresponding input variables of these new structur es can be evaluated using the trained convolutional network. The structures generated by the GANs are also minutely post-processed to aberrations. Validation of the results is made b y generating the images with same conditions using existing topology optimization algorithms. T his paper presents a proof of concept which 1 Corresponding Author, email: shen.1@osu.edu"}
{"_id": "1b808061bbec0efcf5eee2cf23f6d33c8dddde84", "title": "Networks and the Epidemiology of Infectious Disease", "text": "The science of networks has revolutionised research into the dynamics of interacting elements. It could be argued that epidemiology in particular has embraced the potential of network theory more than any other discipline. Here we review the growing body of research concerning the spread of infectious diseases on networks, focusing on the interplay between network theory and epidemiology. The review is split into four main sections, which examine: the types of network relevant to epidemiology; the multitude of ways these networks can be characterised; the statistical methods that can be applied to infer the epidemiological parameters on a realised network; and finally simulation and analytical methods to determine epidemic dynamics on a given network. Given the breadth of areas covered and the ever-expanding number of publications, a comprehensive review of all work is impossible. Instead, we provide a personalised overview into the areas of network epidemiology that have seen the greatest progress in recent years or have the greatest potential to provide novel insights. As such, considerable importance is placed on analytical approaches and statistical methods which are both rapidly expanding fields. Throughout this review we restrict our attention to epidemiological issues."}
{"_id": "dfe2fa79e1a7eed5230a6db9dbafc9e4a3595e9c", "title": "Exploiting Visibility Information in Surface Reconstruction to Preserve Weakly Supported Surfaces", "text": "We present a novel method for 3D surface reconstruction from an input cloud of 3D points augmented with visibility information. We observe that it is possible to reconstruct surfaces that do not contain input points. Instead of modeling the surface from input points, we model free space from visibility information of the input points. The complement of the modeled free space is considered full space. The surface occurs at interface between the free and the full space. We show that under certain conditions a part of the full space surrounded by the free space must contain a real object also when the real object does not contain any input points; that is, an occluder reveals itself through occlusion. Our key contribution is the proposal of a new interface classifier that can also detect the occluder interface just from the visibility of input points. We use the interface classifier to modify the state-of-the-art surface reconstruction method so that it gains the ability to reconstruct weakly supported surfaces. We evaluate proposed method on datasets augmented with different levels of noise, undersampling, and amount of outliers. We show that the proposed method outperforms other methods in accuracy and ability to reconstruct weakly supported surfaces."}
{"_id": "5e5be7f6591a4ab3bb924923b9db9e6b608c44e8", "title": "Zero-Knowledge Proof of Decryption for FHE Ciphertexts", "text": "Zero-knowledge proofs of knowledge and fully-homomorphic encryption are two areas that have seen considerable advances in recent years, and these two techniques are used in conjunction in the context of verifiable decryption. Existing solutions for verifiable decryption are aimed at the batch setting, however there are many applications in which there will only be one ciphertext that requires a proof of decryption. The purpose of this paper is to provide a zero-knowledge proof of correct decryption on an FHE ciphertext, which for instance could hold the result of a cryptographic election. We give two main contributions. Firstly, we present a bootstrapping-like protocol to switch from one FHE scheme to another. The first scheme has efficient homomorphic capabilities; the second admits a simple zero-knowledge protocol. To illustrate this, we use the Brakerski et al. (ITCS, 2012) scheme for the former, and Gentry\u2019s original scheme (STOC, 2009) for the latter. Secondly, we present a simple one-shot zero-knowledge protocol for verifiable decryption using Gentry\u2019s original FHE scheme."}
{"_id": "aff1e85a8e3667b2763e2f9b285bd7b4a960dfe2", "title": "Monophyly of Primary Photosynthetic Eukaryotes: Green Plants, Red Algae, and Glaucophytes", "text": "Between 1 and 1.5 billion years ago, eukaryotic organisms acquired the ability to convert light into chemical energy through endosymbiosis with a Cyanobacterium (e.g.,). This event gave rise to \"primary\" plastids, which are present in green plants, red algae, and glaucophytes (\"Plantae\" sensu Cavalier-Smith). The widely accepted view that primary plastids arose only once implies two predictions: (1) all plastids form a monophyletic group, as do (2) primary photosynthetic eukaryotes. Nonetheless, unequivocal support for both predictions is lacking (e.g.,). In this report, we present two phylogenomic analyses, with 50 genes from 16 plastid and 15 cyanobacterial genomes and with 143 nuclear genes from 34 eukaryotic species, respectively. The nuclear dataset includes new sequences from glaucophytes, the less-studied group of primary photosynthetic eukaryotes. We find significant support for both predictions. Taken together, our analyses provide the first strong support for a single endosymbiotic event that gave rise to primary photosynthetic eukaryotes, the Plantae. Because our dataset does not cover the entire eukaryotic diversity (but only four of six major groups in), further testing of the monophyly of Plantae should include representatives from eukaryotic lineages for which currently insufficient sequence information is available."}
{"_id": "8e628777a34d6e8c94ee8aec498b72c4ba4f4cae", "title": "Role of social media in online travel information search", "text": "Social media are playing an increasingly important role as information sources for travelers. The goal of this study is to investigate the extent to which social media appear in search engine results in the context of travel-related searches. The study employed a research design that simulates a traveler\u2019s use of a search engine for travel planning by using a set of pre-defined keywords in combination with nine U.S. tourist destination names. The analysis of the search results showed that social media constitute a substantial part of the search results, indicating that search engines likely direct travelers to social media sites. This study confirms the growing importance of social media in the online tourism domain. It also provides evidence for challenges faced by traditional providers of travel-related information. Implications for tourism marketers in terms of online marketing strategies are discussed. 2009 Elsevier Ltd. All rights reserved."}
{"_id": "d6652eefc0a9d0375288787d3070cfe82dbe3240", "title": "Dual-Band and Wideband Dual-Polarized Cylindrical Dielectric Resonator Antennas", "text": "This letter investigates dual-band and wideband designs of dual-polarized cylindrical dielectric resonator antennas (DRAs). These designs make use of the fundamental HEM111 mode and higher-order HEM113 mode of the DRA. The strip- and slot-fed excitation methods are used for Ports 1 and 2 of the antennas, respectively. For demonstration, a dual-band dual-polarized DRA for DCS (1.71-1.88 GHz) and WLAN (2.4-2.48 GHz) bands and a wideband version that covers the 2.4-GHz WLAN band were designed. The S-parameters, radiation patterns, antenna gains, antenna efficiencies, and envelope correlations of the two designs are studied. It was found that the dual-band and wideband designs have port isolations of higher than 36 and 37 dB, respectively. Good agreement between the measured and simulated results is observed."}
{"_id": "22aece841f4ffaa5f24449c99755860e2c4338a3", "title": "Feature based image retrieval system using Zernike moments and Daubechies Wavelet Transform", "text": "In image processing research field, image retrieval is extensively used in various application. Increasing need of the image retrieval, it is quiet most exciting research field. In image retrieval system, features are the most significant process used for indexing, retrieving and classifying the images. For computer systems, automatic indexing, storing and retrieving larger image collections effectively are a critical task. Nowadays several retrieval systems were implemented to overcome these issues but still there is a lack of speed and accuracy during image retrieval process. First, address the various issues on performance degradation of image retrieval then analyze and compare the methods and results in previous work. Second, discover the effective approach to be used to increase the accuracy of retrieval system significantly. This work provides a framework based on low level features extraction using Daubechies Wavelet Transform (DWT) and Zernike moments. Based on that features images are retrieved by using the distance measure. Zernike moments constitute a powerful shape descriptor due to its strength and narrative capability. Experimental results shows that our scheme provides significant improvement on retrieval accuracy compared to existing system based on the combination of both the color and edge features by using Discrete Wavelet Transform. In this paper, wang's image dataset is used for experiments."}
{"_id": "56b228ad5d1efba154eec2e63f41011f563d6469", "title": "A Passive RFID Information Grid for Location and Proximity Sensing for the Blind User", "text": "We describe a navigation and location determination system for the blind using an RFID tag grid. Each RFID tag is programmed upon installation with spatial coordinates and information describing the surroundings. This allows for a self-describing, localized information system with no dependency on a centralized database or wireless infrastructure for communications. The system could be integrated into building code requirements as part of ADA (Americans with Disabilities Act) at a cost of less than $1 per square foot. With an established RFID grid infrastructure blind children and adults will gain the independence and freedom to explore and participate in activities without external assistance. An established RFID grid infrastructure will also enable advances in robotics which will benefit from knowing precise location. In this paper, we present an RFID based information grid system with a reader integrated into the user\u2019s shoe, which is connected to the user PDA or cell phone via a Bluetooth. An emphasis is placed on the architecture and design to allow for a truly integrated pervasive environment."}
{"_id": "b1397c9085361f308bd70793fc2427a4416973d7", "title": "FAB-MAP: Probabilistic Localization and Mapping in the Space of Appearance", "text": "This paper describes a probabilistic approach to the problem of recognizing places based on their appearance. The system we present is not limited to localization, but can determine that a new observation comes from a previously unseen place, and so augment its map. Effectively this is a SLAM system in the space of appearance. Our probabilistic approach allows us to explicitly account for perceptual aliasing in the environment\u2014identical but indistinctive observations receive a low probability of having come from the same place. We achieve this by learning a generative model of place appearance. By partitioning the learning problem into two parts, new place models can be learned online from only a single observation of a place. The algorithm complexity is linear in the number of places in the map, and is particularly suitable for online loop closure detection in mo-"}
{"_id": "e20aae4ce14009f689b55ebaf9dac541b88fb18d", "title": "Multi-modal Semantic Place Classification", "text": "The ability to represent knowledge about space and its position therein is crucial for a mobile robot. To this end, topological and semantic descriptions are gaining popularity for augmenting purely metric space representations. In this paper we present a multi-modal place classification system that allows a mobile robot to identify places and recognize semantic categories in an indoor environment. The system effectively utilizes information from different robotic sensors by fusing multiple visual cues and laser range data. This is achieved using a high-level cue integration scheme based on a Support Vector Machine (SVM) that learns how to optimally combine and weight each cue. Our multi-modal place classification approach can be used to obtain a real-time semantic space labeling system which integrates information over time and space. We perform an extensive experimental evaluation of the method for two different platforms and environments, on a realistic off-line database and in a live experiment on an autonomous robot. The results clearly demonstrate the effecThe International Journal of Robotics Research Vol. 00, No. 00, Xxxxxxxx 2009, pp. 000\u2013000 DOI: 10.1177/0278364909356483 c The Author(s), 2009. Reprints and permissions: http://www.sagepub.co.uk/journalsPermissions.nav Figures 1\u201315, 17, 18 appear in color online: http://ijr.sagepub.com tiveness of our cue integration scheme and its value for robust place classification under varying conditions. KEY WORDS\u2014recognition, sensor fusion, localization, multi-modal place classification, sensor and cue integration, semantic annotation of space"}
{"_id": "16ccb8d307d3f33ebb395b32db23279b409f1228", "title": "RADAR: An In-Building RF-Based User Location and Tracking System", "text": "The proliferation of mobile computing devices and local-area wireless networks has fostered a growing interest in location-aware systems and services. In this paper we present RADAR, a radio-frequency (RF) based system for locating and tracking users inside buildings. RADAR operates by recording and processing signal strength information at multiple base stations positioned to provide overlapping coverage in the area of interest. It combines empirical measurements with signal propagation modeling to determine user location and thereby enable locationaware services and applications. We present experimental results that demonstrate the ability of RADAR to estimate user location with a high degree of accuracy."}
{"_id": "e741b329e83eaac82e74572cc106e148be5164ed", "title": "A Survey on Automatically Mining Facets for Web Queries", "text": "Received Jan 4, 2017 Revised Jun 2, 2017 Accepted Jun 26, 2017 In this paper, a detailed survey on different facet mining techniques, their advantages and disadvantages is carried out. Facets are any word or phrase which summarize an important aspect about the web query. Researchers proposed different efficient techniques which improves the user\u2019s web query search experiences magnificently. Users are happy when they find the relevant information to their query in the top results. The objectives of their research are: (1) To present automated solution to derive the query facets by analyzing the text query; (2) To create taxonomy of query refinement strategies for efficient results; and (3) To personalize search according to user interest. Keyword:"}
{"_id": "59d7d8415dacd300eb4d98b0da3cb32d27503b36", "title": "Visualizing Topic Models", "text": "Managing large collections of documents is an important problem for many areas of science, industry, and culture. Probabilistic topic modeling offers a promising solution. Topic modeling is an unsupervised machine learning method that learns the underlying themes in a large collection of otherwise unorganized documents. This discovered structure summarizes and organizes the documents. However, topic models are high-level statistical tools\u2014a user must scrutinize numerical distributions to understand and explore their results. In this paper, we present a method for visualizing topic models. Our method creates a navigator of the documents, allowing users to explore the hidden structure that a topic model discovers. These browsing interfaces reveal meaningful patterns in a collection, helping end-users explore and understand its contents in new ways. We provide open source software of our method. Understanding and navigating large collections of documents has become an important activity in many spheres. However, many document collections are not coherently organized and organizing them by hand is impractical. We need automated ways to discover and visualize the structure of a collection in order to more easily explore its contents. Probabilistic topic modeling is a set of machine learning tools that may provide a solution (Blei and Lafferty 2009). Topic modeling algorithms discover a hidden thematic structure in a collection of documents; they find salient themes and represent each document as a combination of themes. However, topic models are high-level statistical tools. A user must scrutinize numerical distributions to understand and explore their results; the raw output of the model is not enough to create an easily explored corpus. We propose a method for using a fitted topic model to organize, summarize, visualize, and interact with a corpus. With our method, users can explore the corpus, moving between high level discovered summaries (the \u201ctopics\u201d) and the documents themselves, as Figure 1 illustrates. Our design is centered around the idea that the model both summarizes and organizes the collection. Our method translates these representations into a visual system for exploring a collection, but visualizing this structure is not enough. The discovered structure induces relationships\u2014between topics and articles, and between articles and articles\u2014which lead to interactions in the visualization. Copyright c \u00a9 2012, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Thus, we have three main goals in designing the visualization: summarize the corpus for the user; reveal the relationships between the content and summaries; and, reveal the relationships across content. We aim to present these in a ways that are accessible and useful to a spectrum of users, not just machine learning experts."}
{"_id": "7a45104e2bf816ad7294a43ed37a638e789db8c3", "title": "Qualitative Research & Evaluation Methods: Integrating Theory and Practice", "text": "In what case do you like reading so much? What about the type of the qualitative research evaluation methods integrating theory and practice book? The needs to read? Well, everybody has their own reason why should read some books. Mostly, it will relate to their necessity to get knowledge from the book and want to read just to get entertainment. Novels, story book, and other entertaining books become so popular this day. Besides, the scientific books will also be the best reason to choose, especially for the students, teachers, doctors, businessman, and other professions who are fond of reading."}
{"_id": "c625c5960c0fedc3304092aadcaabc714b806bb9", "title": "Capturing the Temporal Domain in Echonest Features for Improved Classification Effectiveness", "text": "This paper proposes Temporal Echonest Features to harness the information available from the beat-aligned vector sequences of the features provided by The Echo Nest. Rather than aggregating them via simple averaging approaches, the statistics of temporal variations are analyzed and used to represent the audio content. We evaluate the performance on four traditional music genre classification test collections and compare them to state of the art audio descriptors. Experiments reveal, that the exploitation of temporal variability from beat-aligned vector sequences and combinations of different descriptors leads to an improvement of classification accuracy. Comparing the results of Temporal Echonest Features to those of approved conventional audio descriptors used as benchmarks, these approaches perform well, often significantly outperforming their predecessors, and can be effectively used for large scale music genre classification."}
{"_id": "4a127a28fe78fa7c238d4474c8a5c574ade37625", "title": "Learning Algorithms from Data", "text": "Statistical machine learning is concerned with learning models that describe observations. We train our models from data on tasks like machine translation or object recognition because we cannot explicitly write down programs to solve such problems. A statistical model is only useful when it generalizes to unseen data. Solomonoff114 has proved that one should choose the model that agrees with the observed data, while preferring the model that can be compressed the most, because such a choice guarantees the best possible generalization. The size of the best possible compression of the model is called the Kolmogorov complexity of the model. We define an algorithm as a function with small Kolmogorov complexity. This Ph.D. thesis outlines the problem of learning algorithms from data and shows several partial solutions to it. Our data model is mainly neural networks as they have proven to be successful in various domains like object recognition67,109,122, language modelling90, speech recognition48,39 and others. First, we examine empirical trainability limits for classical neural networks. Then, we extend them by providing interfaces, which provide a way to read memory, access the input, and postpone predictions. The model learns how to use them with reinforcement learning techniques like REINFORCE and Q-learning. Next, we ex-"}
{"_id": "1127dee1f3f64bf02ba679bde052799b53c643da", "title": "Generative Topic Embedding: a Continuous Representation of Documents", "text": "Word embedding maps words into a lowdimensional continuous embedding space by exploiting the local word collocation patterns in a small context window. On the other hand, topic modeling maps documents onto a low-dimensional topic space, by utilizing the global word collocation patterns in the same document. These two types of patterns are complementary. In this paper, we propose a generative topic embedding model to combine the two types of patterns. In our model, topics are represented by embedding vectors, and are shared across documents. The probability of each word is influenced by both its local context and its topic. A variational inference method yields the topic embeddings as well as the topic mixing proportions for each document. Jointly they represent the document in a low-dimensional continuous space. In two document classification tasks, our method performs better than eight existing methods, with fewer features. In addition, we illustrate with an example that our method can generate coherent topics even based on only one document."}
{"_id": "e51c63cab0806b05fe3acbadced3097033be323f", "title": "A study of array antenna with phase compensated technique for 60 GHz communication", "text": "This paper presents phase compensated techniques for 60 GHz communication. A phase compensation method for antenna embedded in a mobile device and a planar lens for base station antenna are introduced. For the Line-of Sight (LoS) scenario, end-fire antenna arrays are adapted for mobile device. 8\u00d71 end-fire patch antenna arrays are designed in Low temperature co-fired ceramic (LTCC) substrate. Simulated max gain of the antenna arrays is 13.03 dBi. When the antenna module is mounted, scattering problem is occurred and its max gain is severely degraded to 6.4 dBi. By using phase compensated technique, max gain of the antenna arrays are enhanced to 12.5 dBi. The beam steered gain of the arrays is also simulated and good coverage can be achieved. A planar lens antenna is also studied for high gain of the base station antenna."}
{"_id": "faf4d5076c513c18c3d502f29e1cc811184f3ebb", "title": "Comparison and Analysis of Single-Phase Transformerless Grid-Connected PV Inverters", "text": "Leakage current minimization is one of the most important considerations in transformerless photovoltaic (PV) inverters. In the past, various transformerless PV inverter topologies have been introduced, with leakage current minimized by the means of galvanic isolation and common-mode voltage (CMV) clamping. The galvanic isolation can be achieved via dc-decoupling or ac-decoupling, for isolation on the dc- or ac-side of the inverter, respectively. It has been shown that the latter provides lower losses due to the reduced switch count in conduction path. Nevertheless, leakage current cannot be simply eliminated by galvanic isolation and modulation techniques, due to the presence of switches' junction capacitances and resonant circuit effects. Hence, CMV clamping is used in some topologies to completely eliminate the leakage current. In this paper, several recently proposed transformerless PV inverters with different galvanic isolation methods and CMV clamping technique are analyzed and compared. A simple modified H-bridge zero-voltage state rectifier is also proposed, to combine the benefits of the low-loss ac-decoupling method and the complete leakage current elimination of the CMV clamping method. The performances of different topologies, in terms of CMV, leakage current, total harmonic distortion, losses and efficiencies are compared. The analyses are done theoretically and via simulation studies, and further validated with experimental results. This paper is helpful for the researchers to choose the appropriate topology for transformerless PV applications and to provide the design principles in terms of common-mode behavior and efficiency."}
{"_id": "0a03b67644a6411ab7ec73551aa27060b8e4ab1d", "title": "A Comparative Look into Public IXP Datasets", "text": "Internet eXchange Points (IXPs) are core components of the Internet infrastructure where Internet Service Providers (ISPs) meet and exchange traffic. During the last few years, the number and size of IXPs have increased rapidly, driving the flattening and shortening of Internet paths. However, understanding the present status of the IXP ecosystem and its potential role in shaping the future Internet requires rigorous data about IXPs, their presence, status, participants, etc. In this work, we do the first cross-comparison of three well-known publicly available IXP databases, namely of PeeringDB, Euro-IX, and PCH. A key challenge we address is linking IXP identifiers across databases maintained by different organizations. We find different AS-centric versus IXP-centric views provided by the databases as a result of their data collection approaches. In addition, we highlight differences and similarities w.r.t. IXP participants, geographical coverage, and co-location facilities. As a side-product of our linkage heuristics, we make publicly available the union of the three databases, which includes 40.2% more IXPs and 66.3% more IXP participants than the commonly-used PeeringDB. We also publish our analysis code to foster reproducibility of our experiments and shed preliminary insights into the accuracy of the union dataset."}
{"_id": "12596562eedf00cad846f13afd2cc8f4b3dd5b4a", "title": "OWLIM: A family of scalable semantic repositories", "text": "An explosion in the use of RDF for representing information about resources has driven the requirements for Webscale server systems that can store and process huge quantities of data, and furthermore provide powerful data access and mining functionalities. This paper describes OWLIM, a family of semantic repositories that provide storage, inference and novel data-access features delivered in a scalable, resilient, industrial-strength platform. Ontotext AD, 135 Tsarigradsko Chaussee, Sofia 1784, Bulgaria"}
{"_id": "46acc1dd8ac263a081d21cb95dccb5007a1294d4", "title": "Has the athlete trained enough to return to play safely? The acute:chronic workload ratio permits clinicians to quantify a player's risk of subsequent injury.", "text": "The return to sport from injury is a difficult multifactorial decision, and risk of reinjury is an important component. Most protocols for ascertaining the return to play status involve assessment of the healing status of the original injury and functional tests which have little proven predictive ability. Little attention has been paid to ascertaining whether an athlete has completed sufficient training to be prepared for competition. Recently, we have completed a series of studies in cricket, rugby league and Australian rules football that have shown that when an athlete's training and playing load for a given week (acute load) spikes above what they have been doing on average over the past 4 weeks (chronic load), they are more likely to be injured. This spike in the acute:chronic workload ratio may be from an unusual week or an ebbing of the athlete's training load over a period of time as in recuperation from injury. Our findings demonstrate a strong predictive (R(2)=0.53) polynomial relationship between acute:chronic workload ratio and injury likelihood. In the elite team setting, it is possible to quantify the loads we are expecting athletes to endure when returning to sport, so assessment of the acute:chronic workload ratio should be included in the return to play decision-making process."}
{"_id": "4b131a9c1ef6a3ea6c410110a15dd673a16ed3f8", "title": "Automatic Evaluation of Text Coherence: Models and Representations", "text": "This paper investigates the automatic evaluation of text coherence for machine-generated texts. We introduce a fully-automatic, linguistically rich model of local coherence that correlates with human judgments. Our modeling approach relies on shallow text properties and is relatively inexpensive. We present experimental results that assess the predictive power of various discourse representations proposed in the linguistic literature. Our results demonstrate that certain models capture complementary aspects of coherence and thus can be combined to improve performance."}
{"_id": "55840cff7f84c970207b65f084e33cb0992fe45b", "title": "Support vector machine approach for protein subcellular localization prediction", "text": "MOTIVATION\nSubcellular localization is a key functional characteristic of proteins. A fully automatic and reliable prediction system for protein subcellular localization is needed, especially for the analysis of large-scale genome sequences.\n\n\nRESULTS\nIn this paper, Support Vector Machine has been introduced to predict the subcellular localization of proteins from their amino acid compositions. The total prediction accuracies reach 91.4% for three subcellular locations in prokaryotic organisms and 79.4% for four locations in eukaryotic organisms. Predictions by our approach are robust to errors in the protein N-terminal sequences. This new approach provides superior prediction performance compared with existing algorithms based on amino acid composition and can be a complementary method to other existing methods based on sorting signals.\n\n\nAVAILABILITY\nA web server implementing the prediction method is available at http://www.bioinfo.tsinghua.edu.cn/SubLoc/.\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary material is available at http://www.bioinfo.tsinghua.edu.cn/SubLoc/."}
{"_id": "988058ab8dfcb27e9566c6bcef398a4407b1ea04", "title": "Toward open manufacturing: A cross-enterprises knowledge and services exchange framework based on blockchain and edge computing", "text": ""}
{"_id": "be5784888299cb4ada53a25f96f29161f16e7eda", "title": "Android malware detection method based on naive Bayes and permission correlation algorithm", "text": "In order to detect Android malware more effectively, an Android malware detection model was proposed based on improved naive Bayes classification. Firstly, considering the unknown permission that may be malicious in detection samples, and in order to improve the Android detection rate, the algorithm of malware detection is proposed based on improved naive Bayes. Considering the limited training samples, limited permissions, and the new malicious permissions in the test samples, we used the impact of the new malware permissions and training permissions as the weight. The weighted naive Bayesian algorithm improves the Android malware detection efficiency. Secondly, taking into account the detection model, we proposed a detection model of permissions and information theory based on the improved naive Bayes algorithm. We analyzed the correlation of the permission. By calculating the Pearson correlation coefficient, we determined the value of Pearson correlation coefficient r, and delete the permissions whose value r is less than the threshold $$\\rho $$ \u03c1 and get the new permission set. So, we got the improved detection model by clustering based on information theory. Finally, we detected the 1725 Android malware and 945 non malicious application of multiple data sets in the same simulation environment. The detection rate of the improved the naive Bayes algorithm is 86.54%, and the detection rate of the non-malicious application is increased to 97.59%. Based on the improved naive Bayes algorithm, the false detection rate of the improved detection model is reduced by 8.25%."}
{"_id": "71550eeeb178cde0a6b00e3adb8d4eee482612ad", "title": "Heroin epidemics, treatment and ODE modelling.", "text": "The UN [United Nations Office on Drugs and Crime (UNODC): World Drug Report, 2005, vol. 1: Analysis. UNODC, 2005.], EU [European Monitoring Centre for Drugs and Drug Addiction (EMCDDA): Annual Report, 2005.http://annualreport.emcdda.eu.int/en/home-en.html.] and WHO [World Health Organisation (WHO): Biregional Strategy for Harm Reduction, 2005-2009. HIV and Injecting Drug Use. WHO, 2005.] have consistently highlighted in recent years the ongoing and persistent nature of opiate and particularly heroin use on a global scale. While this is a global phenomenon, authors have emphasised the significant impact such an epidemic has on individual lives and on society. National prevalence studies have indicated the scale of the problem, but the drug-using career, typically consisting of initiation, habitual use, a treatment-relapse cycle and eventual recovery, is not well understood. This paper presents one of the first ODE models of opiate addiction, based on the principles of mathematical epidemiology. The aim of this model is to identify parameters of interest for further study, with a view to informing and assisting policy-makers in targeting prevention and treatment resources for maximum effectiveness. An epidemic threshold value, R(0), is proposed for the drug-using career. Sensitivity analysis is performed on R(0) and it is then used to examine the stability of the system. A condition under which a backward bifurcation may exist is found, as are conditions that permit the existence of one or more endemic equilibria. A key result arising from this model is that prevention is indeed better than cure."}
{"_id": "098d8570fe03b8267f1d12db7608668400005896", "title": "Data resource profile: the World Health Organization Study on global AGEing and adult health (SAGE).", "text": "Population ageing is rapidly becoming a global issue and will have a major impact on health policies and programmes. The World Health Organization's Study on global AGEing and adult health (SAGE) aims to address the gap in reliable data and scientific knowledge on ageing and health in low- and middle-income countries. SAGE is a longitudinal study with nationally representative samples of persons aged 50+ years in China, Ghana, India, Mexico, Russia and South Africa, with a smaller sample of adults aged 18-49 years in each country for comparisons. Instruments are compatible with other large high-income country longitudinal ageing studies. Wave 1 was conducted during 2007-2010 and included a total of 34 124 respondents aged 50+ and 8340 aged 18-49. In four countries, a subsample consisting of 8160 respondents participated in Wave 1 and the 2002/04 World Health Survey (referred to as SAGE Wave 0). Wave 2 data collection will start in 2012/13, following up all Wave 1 respondents. Wave 3 is planned for 2014/15. SAGE is committed to the public release of study instruments, protocols and meta- and micro-data: access is provided upon completion of a Users Agreement available through WHO's SAGE website (www.who.int/healthinfo/systems/sage) and WHO's archive using the National Data Archive application (http://apps.who.int/healthinfo/systems/surveydata)."}
{"_id": "b63f9eeb1210e9519660589ff0e7a5fec9b39b71", "title": "Gain Improvement of Rectangular Dielectric Resonator Antenna by Engraving Grooves on Its Side Walls", "text": "A new technique for increasing the boresight gain of a rectangular dielectric resonator antenna (DRA) operating at its fundamental radiating $TE_{111}^y$ mode is introduced. The idea is to increase the radiations from the side walls of the DRA compared to that of its top wall by engraving grooves on the side walls. A model based on the array theory is developed to explain the high-gain nature of the antenna. The measured results demonstrate that the proposed antenna achieves an impedance bandwidth of 21% over a band of 3.24\u20134 GHz, with a maximum gain of 9.6 dB. This is significantly higher with respect to available data in the literature."}
{"_id": "9dbab944fe238fe0e985b5811397312b2b975035", "title": "Comparative Analysis of 6 Bit Thermometer-to-Binary Decoders for Flash Analog-to-Digital Converter", "text": "In the design of high speed Flash ADC selection of Thermometer to Binary decoder plays an important role. This paper describes different decoder topologies suitable for Flash ADCs. Comparative analysis between them is presented in terms of hardware required, propagation delay & power consumption. Result shows that fat tree & Mux based topologies are suitable for high speed conversions, but Mux based topology is effective in terms of power consumption. Presence of bubble error reduces output correction capability. In this paper advance scheme is proposed for correction of bubble error in Mux based decoder."}
{"_id": "6d871436d1f04810561b62e9fc7fa3e9ae471d47", "title": "E-Readers and Visual Fatigue", "text": "The mass digitization of books is changing the way information is created, disseminated and displayed. Electronic book readers (e-readers) generally refer to two main display technologies: the electronic ink (E-ink) and the liquid crystal display (LCD). Both technologies have advantages and disadvantages, but the question whether one or the other triggers less visual fatigue is still open. The aim of the present research was to study the effects of the display technology on visual fatigue. To this end, participants performed a longitudinal study in which two last generation e-readers (LCD, E-ink) and paper book were tested in three different prolonged reading sessions separated by--on average--ten days. Results from both objective (Blinks per second) and subjective (Visual Fatigue Scale) measures suggested that reading on the LCD (Kindle Fire HD) triggers higher visual fatigue with respect to both the E-ink (Kindle Paperwhite) and the paper book. The absence of differences between E-ink and paper suggests that, concerning visual fatigue, the E-ink is indeed very similar to the paper."}
{"_id": "81af5a72ba19c79cfe5c02797d01d3b5df998fa8", "title": "Three-Dimensional Skin Deformation as Force Substitution: Wearable Device Design and Performance During Haptic Exploration of Virtual Environments", "text": "Virtual reality systems would benefit from a compelling force sensory substitute when workspace or stability limitations prevent the use of kinesthetic force feedback systems. We present a wearable fingertip haptic device with the ability to make and break contact in addition to rendering both shear and normal skin deformation to the fingerpad. A delta mechanism with novel bias spring and tether actuator relocation method enables the use of high-end motors and encoders, allowing precise device control: 10 Hz bandwidth and 0.255 mm RMS tracking error were achieved during testing. In the first of two experiments, participants determined the orientation of a stiff region in a surrounding compliant virtual surface with an average angular error of 7.6 degree, similar to that found in previous studies using traditional force feedback. In the second experiment, we evaluated participants\u2019 ability to interpret differences in friction. The Just Noticeable Difference (JND) of surface friction coefficient discrimination using our skin deformation device was 0.20, corresponding with a reference friction coefficient of 0.5. While higher than that found using kinesthetic feedback, this demonstrates that users can perceive differences in surface friction without world-grounded kinesthetic forces. These experiments show that three DoF skin deformation enables both stiffness and friction discrimination capability in the absence of kinesthetic force feedback."}
{"_id": "3dbfabbb1fcd2798990e91387cbb3e2977315231", "title": "The possession game? A comparative analysis of ball retention and team success in European and international football, 2007-2010.", "text": "Possession is thought of as central to success in modern football, but questions remain about its impact on positive team outcomes (Bate, 1988; Hughes & Franks, 2005; Pollard & Reep, 1997; Stanhope, 2001). Recent studies (e.g. Bloomfield, Polman, & O'Donoghue, 2005; Carling, Williams, & Reilly, 2005; James, Mellallieu, & Holley, 2002; Jones, James, & Mellalieu, 2004; Lago, 2009; Lago & Martin, 2007; Lago-Pe\u00f1as & Dellal, 2010; Lago-Pe\u00f1as, Lago-Ballesteros, Dellal, & G\u00f3mez, 2010; Taylor, Mellalieu, & James, 2005; Tucker, Mellalieu, James, & Taylor, 2005) that have examined these questions have often been constrained by an exclusive focus on English or Spanish domestic play. Using data from five European leagues, UEFA and FIFA tournaments, the study found that while possession time and passing predicted aggregated team success in domestic league play, both variables were poor predictors at the individual match level once team quality and home advantage were accounted for. In league play, the effect of greater possession was consistently negative; in the Champions League, it had virtually no impact. In national team tournaments, possession failed to reach significance when offensive factors were accounted for. Much of the success behind the 'possession game' was thus a function of elite teams confined in geographic and competitive space. That ball hegemony was not consistently tied to success suggests that a nuanced approach to possession is needed to account for variant strategic environments (e.g. James et al., 2002) and compels match analysts to re-examine the metric's overall value."}
{"_id": "67de1880bc75bce2c77bf4c38f239fc52dfd3b7b", "title": "A mobile-based home automation system", "text": "The rapidly advancing mobile communication technology and the decrease in costs make it possible to incorporate mobile technology into home automation systems. We propose a mobile-based home automation system that consists of a mobile phone with Java capabilities, a cellular modem, and a home server. The home appliances are controlled by the home server, which operates according to the user commands received from the mobile phone via the cellular modem. In our proposed system the home server is built upon an SMS/GPRS (short message service/general packet radio service) mobile cell module and a microcontroller, allowing a user to control and monitor any variables related to the home by using any Java capable cell phone. This paper presents the design and implementation of AT modem driver, text based command processing software, and power failure resilient output for a microcontroller to facilitate in sending and receiving data via the cell module, together with the design of Java application to enable the cell phone to send commands and receive alerts through the cell module"}
{"_id": "dcdd30fec4b19a0b2bdfd7171bbdf0c6091e2ce5", "title": "Design and implementation of home automation system", "text": "M2M wireless communication of various machines and devices in mobile networks is a fast growing business and application area in industry, maintenance business, customer service, security, and banking areas. This paper presents design and implementation of remote control system by means of GSM cellular communication network. The design integrates the device to be controlled, the microcontroller, and the GSM module so that it can be used for a wide range of applications. Detailed description and implementation of each design element are presented. To verify the principle operation of the M2M design, two home applications are experimentally tested using PC-based environment."}
{"_id": "dde5c5f6088044ca0f2fa75d251ecff6fd36d46d", "title": "Implementation of ZigBee-GSM based home security monitoring and remote control system", "text": "Home security and control is one of the basic needs of mankind from early days. But today it has to be updated with the rapidly changing technology to ensure vast coverage, remote control, reliability, and real time operation. Deploying wireless technologies for security and control in home automation systems offers attractive benefits along with user friendly interface. In this paper, implementation of a novel security and control system for home automation is presented. The proposed system consists of a control console interfaced with different sensors using ZigBee. Suspected activities are conveyed to remote user through SMS (Short Message Service) or Call using GSM (Global System for Mobile communication) technology. Upon reply, the remote user can control his premises again through GSM-ZigBee combination. Besides, traditional burglar alarm enhances security in case of no acknowledgment from remote user. This system offers a low cost, low power consumption and user friendly way of a reliable portable monitoring and control of the secured environment. Using the concept of serial communication and mobile phone AT-commands (Attention Telephone/Terminal commands), the software is programmed using C-language. The design has been implemented in the hardware using ZigBee EM357 module, Atmega128 MCU (microcontroller unit) and Sony Ericsson T290i mobile phone set."}
{"_id": "04bf16c2d100ded2b38efc6fed89e15fd6e159e5", "title": "Java-based home automation system", "text": "This paper presents the design and implementation of a Java-based automation system that can monitor and control home appliances via the World Wide Web. The design is based on a stand alone embedded system board integrated into a PC-based server at home. The home appliances are connected to the input/output ports of the embedded system board and their status are passed to the server. The monitoring and control software engine is based on the combination of JavaServer pages, JavaBeans, and interactive C. The home appliances can be monitored and controlled locally via the embedded system board, or remotely through a Web browser from anywhere in the world provided that an Internet access is available. The system is scalable and allows multi-vendor appliances to be added with no major changes to its core. Password protection is used to block unauthorized users from accessing the appliances at home. If the Internet connection is down or the server is not up, the embedded system board still can control and operate the appliances locally."}
{"_id": "8b968bfa6a9d3e3713c9c9ff2971e333efe1343b", "title": "Ontologically Grounded Multi-sense Representation Learning for Semantic Vector Space Models", "text": "Words are polysemous. However, most approaches to representation learning for lexical semantics assign a single vector to every surface word type. Meanwhile, lexical ontologies such as WordNet provide a source of complementary knowledge to distributional information, including a word sense inventory. In this paper we propose two novel and general approaches for generating sense-specific word embeddings that are grounded in an ontology. The first applies graph smoothing as a postprocessing step to tease the vectors of different senses apart, and is applicable to any vector space model. The second adapts predictive maximum likelihood models that learn word embeddings with latent variables representing senses grounded in an specified ontology. Empirical results on lexical semantic tasks show that our approaches effectively captures information from both the ontology and distributional statistics. Moreover, in most cases our sense-specific models outperform other models we compare against."}
{"_id": "ba354fd004fb9170f2080ee145cde0fd8c244f05", "title": "Hippocampus segmentation through multi-view ensemble ConvNets", "text": "Automated segmentation of brain structures from MR images is an important practice in many neuroimage studies. In this paper, we explore the utilization of a multi-view ensemble approach that relies on neural networks (NN) to combine multiple decision maps in achieving accurate hippocampus segmentation. Constructed under a general convolutional NN structure, our Ensemble-Net networks explore different convolution configurations to capture the complementary information residing in the multiple label probabilities produced by our U-Seg-Net (a modified U-Net) segmentation neural network. T1-weighted MRI scans and the associated Hippocampal masks of 110 healthy subjects from the ADNI project were used as the training and testing data. The combined U-Seg-Net + Ensemble-Net framework achieves over 89% Dice ratio on the test dataset."}
{"_id": "2d8ea00bb87a66e9311376ca1c263dc9c2b61ea7", "title": "An Automatic Voice Conversion Evaluation Strategy Based on Perceptual Background Noise Distortion and Speaker Similarity", "text": "Voice conversion aims to modify the characteristics of one speaker to make it sound like spoken by another speaker without changing the language content. This task has attracted considerable attention and various approaches have been proposed since two decades ago. The evaluation of voice conversion approaches, usually through time-intensive subject listening tests, requires a huge amount of human labor. This paper proposes an automatic voice conversion evaluation strategy based on perceptual background noise distortion and speaker similarity. Experimental results show that our automatic evaluation results match the subjective listening results quite well. We further use our strategy to select best converted samples from multiple voice conversion systems and our submission achieves promising results in the voice conversion challenge (VCC2016)."}
{"_id": "2c2360e3015f1433671ec715ce0b71c42b46e7b5", "title": "A measurement study of Napster and Gnutella as examples of peer-to-peer file sharing systems", "text": "In this paper, we present and analyze an extensive measurement study of Napster and Gnutella."}
{"_id": "3e0bb060dee3123c7cf6758d3aa1db53432ffaf9", "title": "Child sexual abuse and subsequent offending and victimisation : A 45 year follow-up study", "text": "This is a project supported by a grant from the Criminology Research Council. The views expressed are the responsibility of the author and are not necessarily those of the Council."}
{"_id": "d0a9ea00fa117961a555a7beca2c0f24eaa3a93c", "title": "Depression duration but not age predicts hippocampal volume loss in medically healthy women with recurrent major depression.", "text": "This study takes advantage of continuing advances in the precision of magnetic resonance imaging (MRI) to quantify hippocampal volumes in a series of human subjects with a history of depression compared with controls. We sought to test the hypothesis that both age and duration of past depression would be inversely and independently correlated with hippocampal volume. A sample of 24 women ranging in age from 23 to 86 years with a history of recurrent major depression, but no medical comorbidity, and 24 case-matched controls underwent MRI scanning. Subjects with a history of depression (post-depressed) had smaller hippocampal volumes bilaterally than controls. Post-depressives also had smaller amygdala core nuclei volumes, and these volumes correlated with hippocampal volumes. In addition, post-depressives scored lower in verbal memory, a neuropsychological measure of hippocampal function, suggesting that the volume loss was related to an aspect of cognitive functioning. In contrast, there was no difference in overall brain size or general intellectual performance. Contrary to our initial hypothesis, there was no significant correlation between hippocampal volume and age in either post-depressive or control subjects, whereas there was a significant correlation with total lifetime duration of depression. This suggests that repeated stress during recurrent depressive episodes may result in cumulative hippocampal injury as reflected in volume loss."}
{"_id": "b320b4b23f708344b7bc4af20fdb37e56543d1a2", "title": "Towards String-to-Tree Neural Machine Translation", "text": "We present a simple method to incorporate syntactic information about the target language in a neural machine translation system by translating into linearized, lexicalized constituency trees. Experiments on the WMT16 German-English news translation task shown improved BLEU scores when compared to a syntax-agnostic NMT baseline trained on the same dataset. An analysis of the translations from the syntax-aware system shows that it performs more reordering during translation in comparison to the baseline. A smallscale human evaluation also showed an advantage to the syntax-aware system. \u201daccepted...\u201d header"}
{"_id": "e6041cd17c9781aa440447b9bd3c421054bd9f5f", "title": "VSP: A Virtual Smartphone Platform to Enhance the Capability of Physical Smartphone", "text": "Smart mobile devices have become ubiquitous, and people prefer to choose smartphone in daily life rather than use traditional personal computer. However, due to the hardware capability, performance of smartphones varies greatly and sometimes it cannot meet the demand of users. Furthermore, if smartphone is attacked by malicious application, the local private sensitive information will be leaked which in turn will cause huge losses. This paper proposes VSP, a Virtual Smartphone Platform. It offers a new way to enhance the capability of physical smartphone by providing virtual smartphone (VS) deployed in cloud. Users operate VS remotely through the thin-client of VSP on physical smartphones, and ignore the limits of physical mobile devices. The isolation of VS in cloud platform guarantees the security between VSes, and also prevents access to private sensitive information on physical devices. The evaluation indicates that average bandwidth cost with zlib is about 80 kBps and it is suitable for use in practice. Index Terms\u2014Virtualization, Android, Remote Display, Cloud Computing, Ant Colony Optimization."}
{"_id": "3d809bb3b414a8ee58492e7ea775d6631ea05e91", "title": "Sentiment classification of Internet restaurant reviews written in Cantonese", "text": "Cantonese is an important dialect in some regions of Southern China. Local online users often represent their opinions and experiences on the web with written Cantonese. Although the information in those reviews is valuable to potential consumers and sellers, the huge amount of web reviews make it difficult to give an unbiased evaluation to a product and the Cantonese reviews are unintelligible for Mandarin Chinese speakers. In this paper, standard machine learning techniques naive Bayes and SVM are incorporated into the domain of online Cantonese-written restaurant reviews to automatically classify user reviews as positive or negative. The effects of feature presentations and feature sizes on classification performance are discussed. We find that accuracy is influenced by interaction between the classification models and the feature options. The naive Bayes classifier achieves as well as or better accuracy than SVM. Character-based bigrams are proved better features than unigrams and trigrams in capturing Cantonese sentiment orientation. 2010 Elsevier Ltd. All rights reserved."}
{"_id": "403f2b41d5eed1f9b52eba68dfa97f526a1809eb", "title": "Towards Further Automation in AutoML", "text": "Even though recent AutoML systems have been successful in various applications, they introduce new hyper-hyperparameters of their own, including the choice of the evaluation strategy used in the loss function, time budgets to use and the optimization strategy with its hyper-hyperparameters. We study whether it is possible to make these choices in a data-driven way for a dataset at hand. Using 437 datasets from OpenML, we demonstrate the possibility of automating these choices, that this improves over picking a fixed strategy and that for different time horizons different strategies are necessary."}
{"_id": "cad77f0b65e685963de3604fd122105308044aff", "title": "Anti-steganalysis for image on convolutional neural networks", "text": "Nowadays, convolutional neural network (CNN) based steganalysis methods achieved great performance. While those methods are also facing security problems. In this paper, we proposed an attack scheme aiming at CNN based steganalyzer including two different attack methods 1) the LSB-Jstego Gradient Based Attack; 2) LSB-Jstego Evolutionary Algorithms Based Attack. The experiment results show that the attack strategies could achieve 96.02% and 90.25% success ratio separately on the target CNN. The proposed attack scheme is an effective way to fool the CNN based steganalyzer and in addition demonstrates the vulnerability of the neural networks in steganalysis."}
{"_id": "a9e991ccadfe71ad0f44a7967b6049d0dcab7e9a", "title": "Using the h-index to rank influential information scientists", "text": "We apply a new bibliometric measure, the h-index (Hirsch, 2005), to the literature of information science. Faculty rankings based on raw citation counts are compared with those based on h-counts. There is a strong positive correlation between the two sets of rankings. We show how the h-index can be used to express the broad impact of a scholar\u2019s research output over time in more nuanced fashion than straight citation counts."}
{"_id": "8d49a6729cf087f155d438b0db0e53c5afdafadb", "title": "Spiral Mecanum Wheel achieving omnidirectional locomotion in step-climbing", "text": "The vehicle using omnidirectional wheels has ability to move in all directions without changing the body direction unlike a normal four wheel drive vehicle. However most of omnidirectional vehicle are designed for using only on flat ground. In this paper, we propose a new type of omnidirectional wheel, \u201cSpiral Mecanum Wheel\u201d, which enables vehicle climb the step. This new wheel consists of spiral beams and many small rollers, and these small rollers are arranged along the spiral beams. When the vehicle with this spiral Mecanum Wheels moves in normal direction, the edge of the spiral moves to cover the step from above. We have performed the experiments using Spiral Mecanum Wheel and showing the wheel works very well. As a result, in the experiment using single Spiral Mecanum Wheel, this Spiral Mecanum Wheel climbed the step about 83% of the wheel diameter in normal direction motion. The vehicle with Spiral Mecnaum Wheel climbed the step about 37% of the wheel diameter in tangential direction motion and 59% in normal direction."}
{"_id": "3ac325284ae683e93b262a5eb530428717813c20", "title": "Collaboratively crowdsourcing workflows with turkomatic", "text": "Preparing complex jobs for crowdsourcing marketplaces requires careful attention to workflow design, the process of decomposing jobs into multiple tasks, which are solved by multiple workers. Can the crowd help design such workflows? This paper presents Turkomatic, a tool that recruits crowd workers to aid requesters in planning and solving complex jobs. While workers decompose and solve tasks, requesters can view the status of worker-designed workflows in real time; intervene to change tasks and solutions; and request new solutions to subtasks from the crowd. These features lower the threshold for crowd employers to request complex work. During two evaluations, we found that allowing the crowd to plan without requester supervision is partially successful, but that requester intervention during workflow planning and execution improves quality substantially. We argue that Turkomatic's collaborative approach can be more successful than the conventional workflow design process and discuss implications for the design of collaborative crowd planning systems."}
{"_id": "80d6d9e6501a24047083e807a39dba09ddd7b2b2", "title": "Detection of Surface Crack in Concrete Using Measurement Technique With Laser Displacement Sensor", "text": "Detection of crack in concrete is an important issue as concrete is one of the principal material used in civil infrastructures. Cracks often occur first on the surface of concrete structure under load and provide an indication for further degradation. In this brief, an efficient noncontact method of the detection of such cracks in concrete using a measurement system with a laser displacement sensor (LDS) is presented. The proposed system consists of an LDS mounted on the scanner performing a raster scan. During the scanning process, the system gives the reading of displacement value from the sensor head to the laser spot on the target surface of the specimen. The proof of concept is given by testing with two different concrete specimens, namely, a concrete slab with a 0.7-mm-width through crack and a cylindrical concrete specimen with multiple cracks on the surface caused by loading effect. It is shown that a characteristic response of crack as a sharp distortion of the displacement reading occurs when the laser spot crosses the crack. The negligible standard deviation proves the repeatability and accuracy of the measurement and shows that this technique can be applied in a real-life scenario. A root-mean-square deviation in the displacement reading of the LDS is used as an index of crack. The effect of the shape of the specimen is investigated by detecting same-width cracks at different points along the circumference of a cylindrical specimen with different incident angles."}
{"_id": "7a473af96508913443d0cc90f9eb97e8e9cd5a4f", "title": "Extending IEEE 802.1 AVB with time-triggered scheduling: A simulation study of the coexistence of synchronous and asynchronous traffic", "text": "In-car networks based on Ethernet are expected to be the first choice for future applications in the domain of info-and entertainment. However, the full benefit of a technologically integrated in-car network will only become rewarding with an Ethernet-based backbone, unifying several automotive domains in a single infrastructure. Today, there is remarkable interest in the IEEE 802.1 Audio/Video Bridging (AVB) protocol suite, that provides end-to-end performance guarantees in Ethernet networks. But for the strict timing requirements of automotive control-traffic, these guarantees are too weak. An extension of Ethernet AVB with synchronous time-triggered traffic can overcome these limitations. In this paper, we investigate the coexistence of synchronous and asynchronous traffic by experimentally adding time-triggered messages to the credit-based shaper of AVB in a straightforward way. Based on simulations and analytical evaluations, we quantify the impact of such integration concepts for a reasonable design range. Our results demonstrate the feasibility of a shaping strategy with concurrent AVB and time-triggered message, but show a significant impact of the schedule design on the asynchronous AVB streams. Based on our findings, we provide recommendations for configurations that can improve end-to-end network performance for in-car applications by over 100%."}
{"_id": "e440a623aedd6796ebabee326deae5c25daa48d5", "title": "REVIEW OF PROVEN TECHNOLOGIES AVAILABLE FOR THE REDUCTION OF RAW SUGAR COLOUR", "text": "Colour removal is the chief cost centre of sugar re fineries, hence, by reducing the colour of the raw sugar to be processed, the level of process ing and costs incurred by a refinery are also reduced. A lower colour raw sugar is thus highly ma rketable. There are a number of proven technologies that can be applied in raw sugar mills to control and reduce raw sugar colour. These mechanisms are r eviewed in order to identify effective and economically viable means of reducing the colou r f South African raw sugars to a reasonable level, ca 1 000 ICUMSA units. The technologies include: \u2022 syrup flotation clarification \u2022 juice and syrup sulphitation \u2022 alternative boiling schemes \u2022 double curing of C-massecuites \u2022 additional washing in centrifugals \u2022 the use of specialised chemicals and flocculants du ring clarification \u2022 hydrogen peroxide in wash water during centrifugati on."}
{"_id": "22ab36ad0aee87b57960036511338195036cc607", "title": "Effect of Corporate Governance Structure on the Financial Performance of Johannesburg Stock Exchange ( JSE )-Listed Mining Firms", "text": "There have been many corporate collapses and financial crises in recent years linked to a lack of effective corporate governance. The South African King IV Code of Corporate Governance recommends that corporate governing bodies should be comprised of an appropriate balance of knowledge, diversity, and independence for discharging their duties objectively and more efficiently. This study examines the effect of corporate governance structures on firm financial performance. The secondary data of selected Johannesburg Stock Exchange (JSE), Socially Responsible Investment (SRI) Index-listed mining firms\u2019 sustainability reports, and integrated annual financial statements are used. Using panel data analysis of the random effects model, we determined the relationship between board independence and board size and the return on equity (ROE) for the period 2010\u20132015. Results indicate a weak negative correlation between ROE and board size, and a weak, but positive, correlation between ROE and board independence. Additionally, there is a positive, but weak, correlation between ROE and sales growth, but a negative and weak relationship between ROE and firm size. The study suggests that effective corporate governance through a small effective board and monitoring by an independent board result in increased firm financial performance. We recommend that South African companies see compliance with the recommendations of the King IV Code on Corporate Governance not as a liability, but an ethical investment that may likely yield financial benefit in the long-term. Although complying with corporate governance principles does not necessarily translate into a significant economic benefit, firms should, however, continue to adopt corporate governance for ethical reasons to meet stakeholder\u2019s social and environmental needs for sustainable development."}
{"_id": "d0c37c2ed5447b4944aa12dd37f992ea2cd39c8c", "title": "The Neglected Intrinsic Resistome of Bacterial Pathogens", "text": "Bacteria with intrinsic resistance to antibiotics are a worrisome health problem. It is widely believed that intrinsic antibiotic resistance of bacterial pathogens is mainly the consequence of cellular impermeability and activity of efflux pumps. However, the analysis of transposon-tagged Pseudomonas aeruginosa mutants presented in this article shows that this phenotype emerges from the action of numerous proteins from all functional categories. Mutations in some genes make P. aeruginosa more susceptible to antibiotics and thereby represent new targets. Mutations in other genes make P. aeruginosa more resistant and therefore define novel mechanisms for mutation-driven acquisition of antibiotic resistance, opening a new research field based in the prediction of resistance before it emerges in clinical environments. Antibiotics are not just weapons against bacterial competitors, but also natural signalling molecules. Our results demonstrate that antibiotic resistance genes are not merely protective shields and offer a more comprehensive view of the role of antibiotic resistance genes in the clinic and in nature."}
{"_id": "be02e0714083fdda2ba424a1701d572bef181972", "title": "Trust-Based Requirements Traceability", "text": "Information retrieval (IR) approaches have proven useful in recovering traceability links between free-text documentation and source code. IR-based traceability recovery approaches produce ranked lists of traceability links between pieces of documentation and source code. These traceability links are then pruned using various strategies and, finally, validated by human experts. In this paper we propose two contributions to improve the precision and recall of traceability links and, thus, reduces the required human experts' manual validation effort. First, we propose a novel approach, Trust race, inspired by Web trust models to improve the precision and recall of traceability links: Trust race uses intractability recovery approach to obtain a set of traceability links, which rankings are then re-evaluated using a set of other traceability recovery approaches. Second, we propose a novel traceability recovery approach, His trace, to identify traceability links between requirements and source code through CVS/SVN change logs using a Vector Space Model (VSM). We combine a traditional recovery traceability approach with His trace to build Trust race in which we use Histraceas one expert adding knowledge to the traceability links extracttedfrom CVS/SVN\u00a0\u00a0change logs. We apply Trustrace on two case studies to compare its traceability links with those recovered using only the VSM-based approach, in terms of precision and recall. We show that Trustrace improves with statistical significance the precision of the traceability links while also improving recall but without statistical significance."}
{"_id": "309dc18b9040ec8e7354aa9266063db1c768b94a", "title": "A Computational Model of Graded Cueing: Robots Encouraging Behavior Change", "text": "Our work is motivated by the evidence that children with ASD are often behind in their development of imitative behavior, and that practicing through repeated interactions with a therapist can improve imitation abilities. Our model approximates those effects with a socially assistive robot in order to use technology to broaden access to ASD therapy. Our model of graded cueing informs how specific of feedback each participant needs based on their performance in previous rounds of imitation. What is Graded Cueing? Using Probabilistic Models to Make Up for Limited Data"}
{"_id": "7b5d686e19481977b38cb226fef966212de33d63", "title": "A Compact 27 GHz Antenna-in-Package (AiP) with RF Transmitter and Passive Phased Antenna Array", "text": "In this paper, based on rigid-flex printed circuit board (PCB), a compact solution of Antenna-in-Package (AiP) including RF transmitter and microstrip passive phased antenna array is demonstrated. Apart from the landing of two connectors, the size shrinks down to 35 mm \u00d7 32 mm. RF transmitter works at the band of 26.2 GHz to 27.2 GHz, with conversion gain of 13 dB and rejection ratio up to 17.5 dBc. The 1 \u00d7 4 microstrip passive phased antenna array integrating 4 \u00d7 4 Butler Matrix achieves four scanning angles which are -10\u00b0, 30\u00b0, -30\u00b0 and 10\u00b0 with 20\u00b0 half power beam width (HPBW), and simulated absolute gain is 13.51 dBi, 12.31 dBi, 12.23 dBi and 13.26 dBi respectively. Minor deviation on scanning angles from simulation data is observed in the testing due to measurement and manufacturing tolerance. However, among operating bandwidth, HPBW, main-to-side lobe ratio and the difference between two scanning angles when port 1, 4 and port 2, 3 are excited respectively are in good agreement with simulation results"}
{"_id": "f13009f75fd129a343e76cb516bdc1f5d29638e6", "title": "5.10 A 1.4V 10.5MHz swing-boosted differential relaxation oscillator with 162.1dBc/Hz FOM and 9.86psrms period jitter in 0.18\u00b5m CMOS", "text": "Relaxation oscillators have a profound scope as on-chip reference clock sources or sensor front-ends in comparison to ring oscillators due to their superior frequency stability, control linearity, and wide tuning range. However, despite a better fundamental limit predicted in theory, the phase noise performance of relaxation oscillators trails behind that of ring oscillators. Furthermore, there is a huge gap between the maximum achievable 1/f2 phase noise FOM (169dBc/Hz) [1] and those achieved by recently proposed low-power relaxation oscillator implementations [1-4]."}
{"_id": "3b8c6253f25595d705c3ac27ca7fbd9eaec17ab6", "title": "Fast Text Searching Allowing Errors", "text": "T h e string-matching problem is a very c o m m o n problem. We are searching for a string P = PtP2. . \"Pro i n s i d e a la rge t ex t f i le T = t l t2. . . t . , b o t h sequences of characters from a f i n i t e character set Z. T h e characters may be English characters in a text file, DNA base pairs, lines of source code, angles between edges in polygons, machines or machine parts in a production schedule, music notes and tempo in a musical score, and so fo r th . We w a n t to f i n d a l l occurrences of P i n T; n a m e l y , we are searching for the set of starting posit ions F = {i[1 --i--n m + 1 s u c h t h a t titi+ l \" \" t i + m 1 = P } \" T h e two most famous algorithms for this problem are t h e B o y e r M o o r e algorithm [3] and t h e K n u t h Morris Pratt algorithm [10]. There are many extensions to t h i s problem; for example, we may be looking for a set of patterns, a pattern w i t h \"wi ld cards,\" or a regular expression. String-matching tools are included in every reasonable text editor, word processor, and many other applications."}
{"_id": "353f12ccb9c0193f3908a2e077edaca85837105c", "title": "Introduction to Automata Theory, Languages and Computation", "text": "Let n be the pumping-lemma constant (note this n is unrelated to the n that is a local variable in the definition of the language L). Pick w = 0^n10^n. Then when we write w = xyz, we know that |xy| <= n, and therefore y consists of only 0's. Thus, xz, which must be in L if L is regular, consists of fewer than n 0's, followed by a 1 and exactly n 0's. That string is not in L, so we contradict the assumption that L is regular."}
{"_id": "5750815fc3230623164fa3cd3a983b6e58bf64f4", "title": "A Fast String Searching Algorithm", "text": "An algorithm is presented that searches for the location, \u201cil\u201d of the first occurrence of a character string, \u201cpat,\u201d in another string, \u201cstring.\u201d During the search operation, the characters of pat are matched starting with the last character of pat. The information gained by starting the match at the end of the pattern often allows the algorithm to proceed in large jumps through the text being searched. Thus the algorithm has the unusual property that, in most cases, not all of the first i characters of string are inspected. The number of characters actually inspected (on the average) decreases as a function of the length of pat. For a random English pattern of length 5, the algorithm will typically inspect i/4 characters of string before finding a match at i. Furthermore, the algorithm has been implemented so that (on the average) fewer than i + patlen machine instructions are executed. These conclusions are supported with empirical evidence and a theoretical analysis of the average behavior of the algorithm. The worst case behavior of the algorithm is linear in i + patlen, assuming the availability of array space for tables linear in patlen plus the size of the alphabet.\n3~"}
{"_id": "771636b26260fac6d215df5e76c9ce72c346ba88", "title": "Binary codes capable of correcting deletions, insertions and reversals", "text": ""}
{"_id": "caf43effcd2f115aad2a57b1c0fe93700fa9e60a", "title": "Black-Box Test Data Generation for GUI Testing", "text": "Effective system testing of applications with a Graphical User Interface (GUI) front-end demands careful generation of event sequences as well as providing relevant test data for parameterized widgets, i.e., widgets that accept input values such as textboxes and textareas. Current GUI testing techniques either manipulate the source code of the application under test (AUT) to generate the test data, or blindly use a set of random string values. In this paper, we propose a third novel way to generate relevant test data for GUI testing. We exploit the information provided in the GUI structure to extract a set of key identifiers for each parameterized widget. These identifiers are used to compose appropriate search phrases and collect relevant test data from the Internet. The results of an empirical study on five GUI-based applications show that the proposed approach is applicable and can get some hard-to-cover branches in the subject programs to execute. The proposed technique works from the black-box perspective and is entirely independent from GUI modeling and event sequence generation, thus it does not need access to the source code of AUT and provides an opportunity to be integrated with the existing GUI testing frameworks."}
{"_id": "62506c527c66b06cd6588bdada1c92d1b7877200", "title": "Trend Following Trading under a Regime Switching Model", "text": "This paper is concerned with the optimality of a trend following trading rule. The idea is to catch a bull market at its early stage, ride the trend, and liquidate the position at the first evidence of the subsequent bear market. We characterize the bull and bear phases of the markets mathematically using the conditional probabilities of the bull market given the up to date stock prices. The optimal buying and selling times are given in terms of a sequence of stopping times determined by two threshold curves. Numerical experiments are conducted to validate the theoretical results and demonstrate how they perform in a marketplace."}
{"_id": "cb7cd913f51409855c83011c6a115a8ebcb5a50a", "title": "TASK Channels on Basal Forebrain Cholinergic Neurons Modulate Electrocortical Signatures of Arousal by Histamine.", "text": "UNLABELLED\nBasal forebrain cholinergic neurons are the main source of cortical acetylcholine, and their activation by histamine elicits cortical arousal. TWIK-like acid-sensitive K(+) (TASK) channels modulate neuronal excitability and are expressed on basal forebrain cholinergic neurons, but the role of TASK channels in the histamine-basal forebrain cholinergic arousal circuit is unknown. We first expressed TASK channel subunits and histamine Type 1 receptors in HEK cells. Application of histamine in vitro inhibited the acid-sensitive K(+) current, indicating a functionally coupled signaling mechanism. We then studied the role of TASK channels in modulating electrocortical activity in vivo using freely behaving wild-type (n = 12) and ChAT-Cre:TASK(f/f) mice (n = 12), the latter lacking TASK-1/3 channels on cholinergic neurons. TASK channel deletion on cholinergic neurons significantly altered endogenous electroencephalogram oscillations in multiple frequency bands. We then identified the effect of TASK channel deletion during microperfusion of histamine into the basal forebrain. In non-rapid eye movement sleep, TASK channel deletion on cholinergic neurons significantly attenuated the histamine-induced increase in 30-50 Hz activity, consistent with TASK channels contributing to histamine action on basal forebrain cholinergic neurons. In contrast, during active wakefulness, histamine significantly increased 30-50 Hz activity in ChAT-Cre:TASK(f/f) mice but not wild-type mice, showing that the histamine response depended upon the prevailing cortical arousal state. In summary, we identify TASK channel modulation in response to histamine receptor activation in vitro, as well as a role of TASK channels on cholinergic neurons in modulating endogenous oscillations in the electroencephalogram and the electrocortical response to histamine at the basal forebrain in vivo.\n\n\nSIGNIFICANCE STATEMENT\nAttentive states and cognitive function are associated with the generation of \u03b3 EEG activity. Basal forebrain cholinergic neurons are important modulators of cortical arousal and \u03b3 activity, and in this study we investigated the mechanism by which these neurons are activated by the wake-active neurotransmitter histamine. We found that histamine inhibited a class of K(+) leak channels called TASK channels and that deletion of TASK channels selectively on cholinergic neurons modulated baseline EEG activity as well as histamine-induced changes in \u03b3 activity. By identifying a discrete brain circuit where TASK channels can influence \u03b3 activity, these results represent new knowledge that enhances our understanding of how subcortical arousal systems may contribute to the generation of attentive states."}
{"_id": "befc3381d751dc49b57890da484998c877249517", "title": "Impedance-based fault location in transmission networks: theory and application", "text": "A number of impedance-based fault location algorithms have been developed for estimating the distance to faults in a transmission network. Each algorithm has specific input data requirements and makes certain assumptions that may or may not hold true in a particular fault location scenario. Without a detailed understanding of the principle of each fault-locating method, choosing the most suitable fault location algorithm can be a challenging task. This paper, therefore, presents the theory of one-ended (simple reactance, Takagi, modified Takagi, Eriksson, and Novosel et al.) and two-ended (synchronized, unsynchronized, and current-only) impedance-based fault location algorithms and demonstrates their application in locating real-world faults. The theory details the formulation and input data requirement of each fault-locating algorithm and evaluates the sensitivity of each to the following error sources: 1) load; 2) remote infeed; 3) fault resistance; 4) mutual coupling; 5) inaccurate line impedances; 6) DC offset and CT saturation; 7) three-terminal lines; and 8) tapped radial lines. From the theoretical analysis and field data testing, the following criteria are recommended for choosing the most suitable fault-locating algorithm: 1) data availability and 2) fault location application scenario. Another objective of this paper is to assess what additional information can be gleaned from waveforms recorded by intelligent electronic devices (IEDs) during a fault. Actual fault event data captured in utility networks is exploited to gain valuable feedback about the transmission network upstream from the IED device, and estimate the value of fault resistance."}
{"_id": "2ffc08388367569d4eb99aa05b224dc92f299164", "title": "Promoting physical activity and health literacy: study protocol for a longitudinal, mixed methods evaluation of a cross-provider workplace-related intervention in Germany (The AtRisk study)", "text": "BACKGROUND\nPhysical activity and health literacy are topics of utmost importance in the prevention of chronic diseases. The present article describes the study protocol for evaluating a cross-provider workplace-related intervention promoting physical activity and health literacy.\n\n\nMETHODS\nThe RE-AIM Framework will be the conceptual framework of the AtRisk study. A controlled natural experiment and a qualitative study will be conducted. The cross-provider intervention is based on the cooperation of the German Pension Fund Rhineland and cooperating German Statutory Health Insurances. It combines two components: a behavior-oriented lifestyle intervention and the assignment of a health coach. The single-provider intervention only includes the behavior-oriented lifestyle intervention. The quantitative study (natural experiment) encompasses three measuring points (T0\u2009=\u2009start of the behavior-oriented lifestyle intervention (baseline); T1\u2009=\u2009end of the behavior-oriented lifestyle intervention (16\u00a0weeks); T2\u2009=\u20096 month follow-up) and will compare the effectiveness of the cross-provider workplace-related intervention compared with the single provider intervention. Participants are employees with health related risk factors. ANCOVA will be used to evaluate the effect of the intervention on the outcome variables leisure time physical (primary outcome) activity and health literacy (secondary outcome). The qualitative study comprises semi-structured interviews, systematic field notes of stakeholder meetings and document analyses.\n\n\nDISCUSSION\nThe AtRisk study will contribute towards the claim for cross-provider interventions and workplace-related approaches described in the new Preventive Health Care Act. The results of this study will inform providers, payers and policy makers about the effectiveness of a cross-provider workplace-related lifestyle intervention compared to a single-provider intervention. Beyond, the study will identify challenges for implementing cross-provider preventive interventions. With respect to the sustainability of preventive interventions the AtRisk study will give insight in the expectations and needs on health coaching from the perspective of different stakeholders.\n\n\nTRIAL REGISTRATION\nDRKS00010693 ."}
{"_id": "401d68e0521b8d25d01294223d27527d40677ba7", "title": "Radio resource allocation for non-orthogonal multiple access (NOMA) relay network using matching game", "text": "In this paper, we study the resource allocation problem for a single-cell non-orthogonal multiple access (NOMA) relay network where an OFDM amplify-and-forward (AF) relay allocates the spectrum and power resources to the source-destination (SD) pairs. We aim to optimize the spectrum and power resource allocation to maximize the total sum-rate. This is a very complicated problem and the optimal approach requires an exhaustive search, leading to a NP hard problem. To solve this problem, we propose an efficient many-to-many two sided SD pair-subchannel matching algorithm in which the SD pairs and sub-channels are considered as two sets of rational and selfish players chasing their own interests. The algorithm converges to a pair-wise stable matching after a limited number of iterations with a low complexity compared with the optimal solution. Simulation results show that the sum-rate of the proposed algorithm approaches the performance of the optimal exhaustive search and significantly outperforms the conventional orthogonal multiple access scheme, in terms of the total sum-rate and number of accessed SD pairs."}
{"_id": "a51ba1ff82d0da55c2284458f8ef3c1b59e6678e", "title": "CA-LSTM: Search Task Identification with Context Attention based LSTM", "text": "Search task identification aims to understand a user's information needs to improve search quality for applications such as query suggestion, personalized search, and advertisement retrieval. To properly identify the search task within long query sessions, it is important to partition these sessions into segments before further processing. In this paper, we present the first search session segmentation model that uses a long short-term memory (LSTM) network with an attention mechanism. This model considers sequential context knowledge, including temporal information, word and character, and essentially learns which parts of the input query sequence are relevant to the current segmentation task and determines the influence of these parts. This segmentation technique is also combined with an efficient clustering method using a novel query relevance metric for end-to-end search task identification. Using real-world datasets, we demonstrate that our segmentation technique improves task identification accuracy of existing clustering-based algorithms."}
{"_id": "34171a38531b077788af8c47a7a27078e0db8ed8", "title": "StackGAN: Text to Photo-Realistic Image Synthesis with Stacked Generative Adversarial Networks", "text": "Synthesizing high-quality images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing textto- image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate 256.256 photo-realistic images conditioned on text descriptions. We decompose the hard problem into more manageable sub-problems through a sketch-refinement process. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. It is able to rectify defects in Stage-I results and add compelling details with the refinement process. To improve the diversity of the synthesized images and stabilize the training of the conditional-GAN, we introduce a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold. Extensive experiments and comparisons with state-of-the-arts on benchmark datasets demonstrate that the proposed method achieves significant improvements on generating photo-realistic images conditioned on text descriptions."}
{"_id": "6c8db465ac34e55d5a18a486e85c6c5299ee20bb", "title": "Digital Games in Education: The Design of Games-Based Learning Environments.", "text": "In recentyears, electronicgames have assumed an importantplace in the lives of children and adolescents. Children acquire digital literacy informally, through plaj and neither schools nor other educational institutions take suficient account of this important aspect. We consider that multimedia design for training and education should combine the mostpowerfulfedtures of interactive multimedia design with the most effective principles of technologically-mediated learning. An examination of the evolution of the design of videogames is a good way to analyze the main contributions and characteristics ofgames-ba~edlearnin~ environments. At the same time, we will discuss the main obstacles and challenges to the use ofgames for learning. ("}
{"_id": "56cc0ac9e06c46bed82a27ae68443770aa112a5c", "title": "Efficient OFDMA for LiFi Downlink", "text": "Light-fidelity (LiFi) relies on visible light communication to provide multiuser access in a small cell. For higher throughput, this paper studies the resource allocation problem for an orthogonal frequency-division multiplexing access scheme for LiFi downlink transmissions and presents a joint design of the bias level, power and subcarrier allocation. Decomposing the original optimization into subproblems, we first develop an efficient algorithm for the bias optimization, along with its performance analysis in terms of convergence properties and the global optimality. We then propose several power and subcarrier allocation strategies, including an optimal algorithm and a near-optimal simplified method, to satisfy different complexity and accuracy requirements. Finally, we present two algorithms to jointly optimize the bias, power, and subcarrier, and analyze the convergence properties in terms of both the objective and the solution. Comprehensive simulation results under practical LiFi system setups illustrate the effectiveness of the proposed algorithms."}
{"_id": "66180c81f915cdfef8f4c2fafe8d7a618e123f8e", "title": "Advances in deep neural network approaches to speaker recognition", "text": "The recent application of deep neural networks (DNN) to speaker identification (SID) has resulted in significant improvements over current state-of-the-art on telephone speech. In this work, we report a similar achievement in DNN-based SID performance on microphone speech. We consider two approaches to DNN-based SID: one that uses the DNN to extract features, and another that uses the DNN during feature modeling. Modeling is conducted using the DNN/i-vector framework, in which the traditional universal background model is replaced with a DNN. The recently proposed use of bottleneck features extracted from a DNN is also evaluated. Systems are first compared with a conventional universal background model (UBM) Gaussian mixture model (GMM) i-vector system on the clean conditions of the NIST 2012 speaker recognition evaluation corpus, where a lack of robustness to microphone speech is found. Several methods of DNN feature processing are then applied to bring significantly greater robustness to microphone speech. To direct future research, the DNN-based systems are also evaluated in the context of audio degradations including noise and reverberation."}
{"_id": "b7c80c02835c040a636e56b18214b6d6216a5e81", "title": "Developmental deficits in social perception in autism: the role of the amygdala and fusiform face area", "text": "Autism is a severe developmental disorder marked by a triad of deficits, including impairments in reciprocal social interaction, delays in early language and communication, and the presence of restrictive, repetitive and stereotyped behaviors. In this review, it is argued that the search for the neurobiological bases of the autism spectrum disorders should focus on the social deficits, as they alone are specific to autism and they are likely to be most informative with respect to modeling the pathophysiology of the disorder. Many recent studies have documented the difficulties persons with an autism spectrum disorder have accurately perceiving facial identity and facial expressions. This behavioral literature on face perception abnormalities in autism is reviewed and integrated with the functional magnetic resonance imaging (fMRI) literature in this area, and a heuristic model of the pathophysiology of autism is presented. This model posits an early developmental failure in autism involving the amygdala, with a cascading influence on the development of cortical areas that mediate social perception in the visual domain, specifically the fusiform \"face area\" of the ventral temporal lobe. Moreover, there are now some provocative data to suggest that visual perceptual areas of the ventral temporal pathway are also involved in important ways in representations of the semantic attributes of people, social knowledge and social cognition. Social perception and social cognition are postulated as normally linked during development such that growth in social perceptual skills during childhood provides important scaffolding for social skill development. It is argued that the development of face perception and social cognitive skills are supported by the amygdala-fusiform system, and that deficits in this network are instrumental in causing autism."}
{"_id": "151a851786f18b3d4570c176ca94ba48f25da77b", "title": "Learning Visual Clothing Style with Heterogeneous Dyadic Co-Occurrences", "text": "With the rapid proliferation of smart mobile devices, users now take millions of photos every day. These include large numbers of clothing and accessory images. We would like to answer questions like 'What outfit goes well with this pair of shoes?' To answer these types of questions, one has to go beyond learning visual similarity and learn a visual notion of compatibility across categories. In this paper, we propose a novel learning framework to help answer these types of questions. The main idea of this framework is to learn a feature transformation from images of items into a latent space that expresses compatibility. For the feature transformation, we use a Siamese Convolutional Neural Network (CNN) architecture, where training examples are pairs of items that are either compatible or incompatible. We model compatibility based on co-occurrence in large-scale user behavior data, in particular co-purchase data from Amazon.com. To learn cross-category fit, we introduce a strategic method to sample training data, where pairs of items are heterogeneous dyads, i.e., the two elements of a pair belong to different high-level categories. While this approach is applicable to a wide variety of settings, we focus on the representative problem of learning compatible clothing style. Our results indicate that the proposed framework is capable of learning semantic information about visual style and is able to generate outfits of clothes, with items from different categories, that go well together."}
{"_id": "c76c914288f85e16e50bf9fa3331e783d50a3513", "title": "ERP Event Log Preprocessing: Timestamps vs. Accounting Logic", "text": "Process mining has been gaining significant attention in academia and practice. A promising first step to apply process mining in the audit domain was taken with the mining of process instances from accounting data. However, the resulting process instances constitute graphs. Commonly, timestamp oriented event log formats require a sequential list of activities and do not support graph structures. Thus, event log based process mining techniques cannot readily be applied to accounting data. To close this gap, we present an algorithm that determines an activity sequence from accounting data. With this algorithm, mined process instance graphs can be decomposed in a way they fit into sequential event log formats. Event log based process mining techniques can then be used to construct process models. A case study demonstrates the effectiveness of the presented approach. Results reveal that the preprocessing of the event logs considerably improves the derived process models."}
{"_id": "ad0c4588fb7acd47d24d752a5fa3bb81ff8cbd1e", "title": "The activation of attentional networks", "text": "Alerting, orienting, and executive control are widely thought to be relatively independent aspects of attention that are linked to separable brain regions. However, neuroimaging studies have yet to examine evidence for the anatomical separability of these three aspects of attention in the same subjects performing the same task. The attention network test (ANT) examines the effects of cues and targets within a single reaction time task to provide a means of exploring the efficiency of the alerting, orienting, and executive control networks involved in attention. It also provides an opportunity to examine the brain activity of these three networks as they operate in a single integrated task. We used event-related functional magnetic resonance imaging (fMRI) to explore the brain areas involved in the three attention systems targeted by the ANT. The alerting contrast showed strong thalamic involvement and activation of anterior and posterior cortical sites. As expected, the orienting contrast activated parietal sites and frontal eye fields. The executive control network contrast showed activation of the anterior cingulate along with several other brain areas. With some exceptions, activation patterns of these three networks within this single task are consistent with previous fMRI studies that have been studied in separate tasks. Overall, the fMRI results suggest that the functional contrasts within this single task differentially activate three separable anatomical networks related to the components of attention."}
{"_id": "3a6c75d64a4e3c16db67953a33c26a9c93a6a767", "title": "A new 2D static hand gesture colour image dataset for ASL gestures", "text": "It usually takes a fusion of image processing and machine learning algorithms in order to build a fully-functioning computer vision system for hand gesture recognition. Fortunately, the complexity of developing such a system could be alleviated by treating the system as a collection of multiple sub-systems working together, in such a way that they can be dealt with in isolation. Machine learning need to feed on thousands of exemplars (e.g. images, features) to automatically establish some recognisable patterns for all possible classes (e.g. hand gestures) that applies to the problem domain. A good number of exemplars helps, but it is also important to note that the efficacy of these exemplars depends on the variability of illumination conditions, hand postures, angles of rotation, scaling and on the number of volunteers from whom the hand gesture images were taken. These exemplars are usually subjected to image processing first, to reduce the presence of noise and extract the important features from the images. These features serve as inputs to the machine learning system. Different sub-systems are integrated together to form a complete computer vision system for gesture recognition. The main contribution of this work is on the production of the exemplars. We discuss how a dataset of standard American Sign Language (ASL) hand gestures containing 2425 images from 5 individuals, with variations in lighting conditions and hand postures is generated with the aid of image processing techniques. A minor contribution is given in the form of a specific feature extraction method called moment invariants, for which the computation method and the values are furnished with the dataset."}
{"_id": "269e736628aa290b0dcf815ab00e459497baacd4", "title": "TechMiner: Extracting Technologies from Academic Publications", "text": "In recent years we have seen the emergence of a variety of scholarly datasets. Typically these capture \u2018standard\u2019 scholarly entities and their connections, such as authors, affiliations, venues, publications, citations, and others. However, as the repositories grow and the technology improves, researchers are adding new entities to these repositories to develop a richer model of the scholarly domain. In this paper, we introduce TechMiner, a new approach, which combines NLP, machine learning and semantic technologies, for mining technologies from research publications and generating an OWL ontology describing their relationships with other research entities. The resulting knowledge base can support a number of tasks, such as: richer semantic search, which can exploit the technology dimension to support better retrieval of publications; richer expert search; monitoring the emergence and impact of new technologies, both within and across scientific fields; studying the scholarly dynamics associated with the emergence of new technologies; and others. TechMiner was evaluated on a manually annotated gold standard and the results indicate that it significantly outperforms alternative NLP approaches and that its semantic features improve performance significantly with respect to both recall and precision."}
{"_id": "3dee5cd73168dfdc0e3eed83b417999ca5f4370c", "title": "Explicit Modelling of the Implicit Short Term User Preferences for Music Recommendation", "text": "Recommender systems are a key component of music sharing platforms, which suggest musical recordings a user might like. People often have implicit preferences while listening to music, though these preferences might not always be the same while they listen to music at different times. For example, a user might be interested in listening to songs of only a particular artist at some time, and the same user might be interested in the top-rated songs of a genre at another time. In this paper we try to explicitly model the short term preferences of the user with the help of Last.fm tags of the songs the user has listened to. With a session defined as a period of activity surrounded by periods of inactivity, we introduce the concept of a subsession, which is that part of the session wherein the preference of the user does not change much. We assume the user preference might change within a session and a session might have multiple subsessions. We use our modelling of the user preferences to generate recommendations for the next song the user might listen to. Experiments on the user listening histories taken from Last.fm indicate that this approach beats the present methodologies in predicting the next recording a user might listen to."}
{"_id": "fa992fdebcaa5d290fdaa396579c74be9ec4777f", "title": "Ring-Shaped Substrate Integrated Waveguide Wilkinson Power Dividers/Combiners", "text": "This paper proposes and presents a class of substrate integrated waveguide (SIW) power dividers/combiners based on a modified Wilkinson architecture. The proposed design consists of a ring-type circuit of 1.5 \u03bbg length with three waveguide ports. Impedance variation without affecting cutoff frequency and integration of isolation resistance are the two major limitations in connection with the design of SIW Wilkinson structures. Central ring part with different thickness ensures different impedance in the first proposed scheme while the half-mode SIW in the second option. As for the second limitation, the optimal configuration of integrating the isolation resistance is studied. A version without resistance is also examined using a broad-wall series slot. To validate the proposed concept, a bilayered circuit with standard Printed circuit board prototype was fabricated and measured. A table summarizing performances of all the proposed circuits shows a bandwidth of around 25% at the 10 dB reference. A negligible additional insertion loss is observed. The proposed design techniques are found useful at millimeter-wave frequencies where reduced physical dimensions make a circuit configuration suitable for low-cost implementation. This concept is also valid for rectangular waveguide counterparts."}
{"_id": "ad2955c51f314d8499bc38c47a89016fb0f89360", "title": "Municipal Solid Waste Management Problems in Nigeria : Evolving Knowledge Management Solution", "text": "The paper attempts a synthesis of problems relating to municipal waste management in Nigeria and proposes a conceptual knowledge management approach for tackling municipal waste problems in cities across Nigeria. The application of knowledge management approach and strategy is crucial for inculcating a change of attitude towards improving the management of waste. The paper is a review of existing literatures, information, policies and data on municipal waste management in Nigeria. The inefficient management of waste by individuals, households, consumers and waste management companies can be attributed to inadequate information on waste management benefits, lack of producers\u2019 involvement in waste management as well as poor implementation of government policies. The paper presents an alternative approach providing solutions promoting efficient municipal waste management. Keywords\u2014Environment, Knowledge management, Municipal waste management, Nigeria."}
{"_id": "6fe0a89e8c674dc3a4c58402b7fc27e8d5ad8b77", "title": "Variable-On-Time-Controlled Critical-Conduction-Mode Flyback PFC Converter", "text": "A variable on-time (VOT) control strategy for a critical conduction mode (CRM) flyback power factor correction (PFC) converter with low total harmonic distortion (THD) and high power factor (PF) is proposed in this paper. By using input voltage and the voltage across the auxiliary winding of the flyback transformer to modulate the turn-on time of the switch of the CRM flyback PFC converter, the PF and THD of the converter can be improved. The operation principles of the traditional constant on-time (COT)-controlled CRM flyback PFC converter and VOT-controlled CRM flyback PFC converter are analyzed. The experimental results show that the PF and THD of the VOT-controlled CRM flyback PFC converter are better than those of the COT-controlled CRM flyback PFC converter."}
{"_id": "4b33f3cf89235eef4f8c18ccfbd49827a99c24c2", "title": "A framework of traveling companion discovery on trajectory data streams", "text": "The advance of mobile technologies leads to huge volumes of spatio-temporal data collected in the form of trajectory data streams. In this study, we investigate the problem of discovering object groups that travel together (i.e., traveling companions) from trajectory data streams. Such technique has broad applications in the areas of scientific study, transportation management, and military surveillance. To discover traveling companions, the monitoring system should cluster the objects of each snapshot and intersect the clustering results to retrieve moving-together objects. Since both clustering and intersection steps involve high computational overhead, the key issue of companion discovery is to improve the efficiency of algorithms. We propose the models of closed companion candidates and smart intersection to accelerate data processing. A data structure termed traveling buddy is designed to facilitate scalable and flexible companion discovery from trajectory streams. The traveling buddies are microgroups of objects that are tightly bound together. By only storing the object relationships rather than their spatial coordinates, the buddies can be dynamically maintained along the trajectory stream with low cost. Based on traveling buddies, the system can discover companions without accessing the object details. In addition, we extend the proposed framework to discover companions on more complicated scenarios with spatial and temporal constraints, such as on the road network and battlefield. The proposed methods are evaluated with extensive experiments on both real and synthetic datasets. Experimental results show that our proposed buddy-based approach is an order of magnitude faster than the baselines and achieves higher accuracy in companion discovery."}
{"_id": "c08a2cc3b4953646078ed75ed7f827375a8309b5", "title": "Deep Learning for Multi-task Medical Image Segmentation in Multiple Modalities", "text": "Automatic segmentation of medical images is an important task for many clinical applications. In practice, a wide range of anatomical structures are visualised using different imaging modalities. In this paper, we investigate whether a single convolutional neural network (CNN) can be trained to perform different segmentation tasks. A single CNN is trained to segment six tissues in MR brain images, the pectoral muscle in MR breast images, and the coronary arteries in cardiac CTA. The CNN therefore learns to identify the imaging modality, the visualised anatomical structures, and the tissue classes. For each of the three tasks (brain MRI, breast MRI and cardiac CTA), this combined training procedure resulted in a segmentation performance equivalent to that of a CNN trained specifically for that task, demonstrating the high capacity of CNN architectures. Hence, a single system could be used in clinical practice to automatically perform diverse segmentation tasks without task-specific training."}
{"_id": "051a5110bab92cb89048ed426f28ebab9085945d", "title": "Coded exposure deblurring: Optimized codes for PSF estimation and invertibility", "text": "We consider the problem of single image object motion deblurring from a static camera. It is well-known that deblurring of moving objects using a traditional camera is ill-posed, due to the loss of high spatial frequencies in the captured blurred image. A coded exposure camera modulates the integration pattern of light by opening and closing the shutter within the exposure time using a binary code. The code is chosen to make the resulting point spread function (PSF) invertible, for best deconvolution performance. However, for a successful deconvolution algorithm, PSF estimation is as important as PSF invertibility. We show that PSF estimation is easier if the resulting motion blur is smooth and the optimal code for PSF invertibility could worsen PSF estimation, since it leads to non-smooth blur. We show that both criterions of PSF invertibility and PSF estimation can be simultaneously met, albeit with a slight increase in the deconvolution noise. We propose design rules for a code to have good PSF estimation capability and outline two search criteria for finding the optimal code for a given length. We present theoretical analysis comparing the performance of the proposed code with the code optimized solely for PSF invertibility. We also show how to easily implement coded exposure on a consumer grade machine vision camera with no additional hardware. Real experimental results demonstrate the effectiveness of the proposed codes for motion deblurring."}
{"_id": "0b181d71cd9dc7bcfe47361f281cf669d50defde", "title": "Natural image statistics and neural representation.", "text": "It has long been assumed that sensory neurons are adapted, through both evolutionary and developmental processes, to the statistical properties of the signals to which they are exposed. Attneave (1954)Barlow (1961) proposed that information theory could provide a link between environmental statistics and neural responses through the concept of coding efficiency. Recent developments in statistical modeling, along with powerful computational tools, have enabled researchers to study more sophisticated statistical models for visual images, to validate these models empirically against large sets of data, and to begin experimentally testing the efficient coding hypothesis for both individual neurons and populations of neurons."}
{"_id": "15e729fd12473ef67c545bdb0bdc7bdbb83b695b", "title": "Blue Screen Matting", "text": "A classical problem of imaging\u2014the matting problem\u2014is separation of a non-rectangular foreground image from a (usually) rectangular background image\u2014for example, in a film frame, extraction of an actor from a background scene to allow substitu tion of a different background. Of the several attacks on this diff cult and persistent problem, we discuss here only the special ca of separating a desired foreground image from a background of constant, or almost constant, backing color. This backing color has often been blue, so the problem, and its solution, have bee called blue screen matting . However, other backing colors, such as yellow or (increasingly) green, have also been used, so we o ten generalize to constant color matting . The mathematics of constant color matting is presented and proven to be unsolvable as generally practiced. This, of course, flies in the face of the fact that the technique is commonly used in film and video, so we demonstrate constraints on the general problem that lead to so tions, or at least significantly prune the search space of solution We shall also demonstrate that an algorithmic solution is possib by allowing the foreground object to be shot against two constant backing colors\u2014in fact, against two completely arbitrary backin so long as they differ everywhere."}
{"_id": "3608a5a8b4328df3028d7aadde9d2b887023aac7", "title": "Image statistics and the perception of surface qualities", "text": "The world is full of surfaces, and by looking at them we can judge their material qualities. Properties such as colour or glossiness can help us decide whether a pancake is cooked, or a patch of pavement is icy. Most studies of surface appearance have emphasized textureless matte surfaces, but real-world surfaces, which may have gloss and complex mesostructure, are now receiving increased attention. Their appearance results from a complex interplay of illumination, reflectance and surface geometry, which are difficult to tease apart given an image. If there were simple image statistics that were diagnostic of surface properties it would be sensible to use them. Here we show that the skewness of the luminance histogram and the skewness of sub-band filter outputs are correlated with surface gloss and inversely correlated with surface albedo (diffuse reflectance). We find evidence that human observers use skewness, or a similar measure of histogram asymmetry, in making judgements about surfaces. When the image of a surface has positively skewed statistics, it tends to appear darker and glossier than a similar surface with lower skewness, and this is true whether the skewness is inherent to the original image or is introduced by digital manipulation. We also find a visual after-effect based on skewness: adaptation to patterns with skewed statistics can alter the apparent lightness and glossiness of surfaces that are subsequently viewed. We suggest that there are neural mechanisms sensitive to skewed statistics, and that their outputs can be used in estimating surface properties."}
{"_id": "36fe91183ec9e30925d05c74ce887bf641c010cd", "title": "Non-local sparse models for image restoration", "text": "We propose in this paper to unify two different approaches to image restoration: On the one hand, learning a basis set (dictionary) adapted to sparse signal descriptions has proven to be very effective in image reconstruction and classification tasks. On the other hand, explicitly exploiting the self-similarities of natural images has led to the successful non-local means approach to image restoration. We propose simultaneous sparse coding as a framework for combining these two approaches in a natural manner. This is achieved by jointly decomposing groups of similar signals on subsets of the learned dictionary. Experimental results in image denoising and demosaicking tasks with synthetic and real noise show that the proposed method outperforms the state of the art, making it possible to effectively restore raw images from digital cameras at a reasonable speed and memory cost."}
{"_id": "312b125002101480693f6b842685d0bb0ca19096", "title": "Hidden Markov model-based speech emotion recognition", "text": "In this contribution we introduce speech emotion recognition by use of continuous hidden Markov models. Two methods are propagated and compared throughout the paper. Within the first method a global statistics framework of an utterance is classified by Gaussian mixture models using derived features of the raw pitch and energy contour of the speech signal. A second method introduces increased temporal complexity applying continuous hidden Markov models considering several states using low-level instantaneous features instead of global statistics. The paper addresses the design of working recognition engines and results achieved with respect to the alluded alternatives. A speech corpus consisting of acted and spontaneous emotion samples in German and English language is described in detail. Both engines have been tested and trained using this equivalent speech corpus. Results in recognition of seven discrete emotions exceeded 86% recognition rate. As a basis of comparison the similar judgment of human deciders classifying the same corpus at 79.8% recognition rate was analyzed."}
{"_id": "47843b4e522b7e0b792fad6c818091c996c5861b", "title": "Fundamentals for Future Mobile-Health (mHealth): A Systematic Review of Mobile Phone and Web-Based Text Messaging in Mental Health", "text": "BACKGROUND\nMobile phone text messages (short message service, SMS) are used pervasively as a form of communication. Almost 100% of the population uses text messaging worldwide and this technology is being suggested as a promising tool in psychiatry. Text messages can be sent either from a classic mobile phone or a web-based application. Reviews are needed to better understand how text messaging can be used in mental health care and other fields of medicine.\n\n\nOBJECTIVE\nThe objective of the study was to review the literature regarding the use of mobile phone text messaging in mental health care.\n\n\nMETHODS\nWe conducted a thorough literature review of studies involving text messaging in health care management. Searches included PubMed, PsycINFO, Cochrane, Scopus, Embase and Web of Science databases on May 25, 2015. Studies reporting the use of text messaging as a tool in managing patients with mental health disorders were included. Given the heterogeneity of studies, this review was summarized using a descriptive approach.\n\n\nRESULTS\nFrom 677 initial citations, 36 studies were included in the review. Text messaging was used in a wide range of mental health situations, notably substance abuse (31%), schizophrenia (22%), and affective disorders (17%). We identified four ways in which text messages were used: reminders (14%), information (17%), supportive messages (42%), and self-monitoring procedures (42%). Applications were sometimes combined.\n\n\nCONCLUSIONS\nWe report growing interest in text messaging since 2006. Text messages have been proposed as a health care tool in a wide spectrum of psychiatric disorders including substance abuse, schizophrenia, affective disorders, and suicide prevention. Most papers described pilot studies, while some randomized clinical trials (RCTs) were also reported. Overall, a positive attitude toward text messages was reported. RCTs reported improved treatment adherence and symptom surveillance. Other positive points included an increase in appointment attendance and in satisfaction with management and health care services. Insight into message content, preventative strategies, and innovative approaches derived from the mental health field may be applicable in other medical specialties."}
{"_id": "6b8780f3b171caaf4b0127114920d54ff0bcbb60", "title": "Remote Sensing for Crop Management", "text": "Scientists with the Agricultural Research Service (ARS) and various government agencies and private institutions have provided a great deal of fundamental information relating spectral reflectance and thermal emittance properties of soils and crops to their agronomic and biophysical characteristics. This knowledge has facilitated the development and use of various remote sensing methods for non-destructive monitoring of plant growth and development and for the detection of many environmental stresses which limit plant productivity. Coupled with rapid advances in computing and positionlocating technologies, remote sensing from ground-, air-, and space-based platforms is now capable of providing detailed spatial and temporal information on plant response to their local environment that is needed for site specific agricultural management approaches. This manuscript, which emphasizes contributions by ARS researchers, reviews the biophysical basis of remote sensing; examines approaches that have been developed, refined, and tested for management of water, nutrients, and pests in agricultural crops; and assesses the role of remote sensing in yield prediction. It concludes with a discussion of challenges facing remote sensing in the future. Introduction Agricultural production strategies have changed dramatically over the past decade. Many of these changes have been driven by economic decisions to reduce inputs and maximize profits and by environmental guidelines mandating more efficient and safer use of agricultural chemicals. However, growers now have a heightened sensitivity to concerns over the quality, nutritional value, and safety of agricultural products. They are selecting cultivars and adjusting planting dates to accommodate anticipated patterns in weather, e.g., El Ni\u00f1o or La Ni\u00f1a events (Jones et al., 2000). They are also relying on biotechnological innovations for suppressing pests, e.g., insect protected (Bt) and Roundup\u00ae ready crops (Monsanto Company, 2003). The possibility for selling carbon credits to industry is breathing new life into on-farm conservation tillage practices that enhance carbon sequestration (Robert, 2001). Perhaps the most significant change in agriculture during the past ten years is the shift towards precision, or sitespecific, crop management (National Research Council, 1997). Growers have long recognized within-field variability in potential productivity. Now, at the beginning of the 21st Century, they are seeking new ways to exploit that variability. In the process, they are discovering they need more information on soil and plant conditions than was required a decade ago. Not only does this information need to be accurate and consistent across their farm and from year to year, it must also be available at temporal and spatial scales that match rapidly evolving capabilities to vary cultural procedures, irrigations, and agrochemical inputs. A very large body of research spanning almost four decades has demonstrated that much of this required information is available remotely, via aircraftand satellitebased sensor systems. 
When combined with remarkable advances in Global Positioning System (GPS) receivers, microcomputers, geographic information systems (GIS), yield monitors, and enhanced crop simulation models, remote sensing technology has the potential to transform the ways that growers manage their lands and implement precision farming techniques. The objective of this paper is to review progress that has been made in remote sensing applications for crop management and, in particular, highlight the role that the USDA and its primary research agency, the Agricultural Research Service (ARS), has had in the movement. Of course, these advances have not been a singular effort by ARS (Pinter et al., 2003; p. 615 this issue). They have resulted from long-standing cooperation among a number of different agencies and institutions, all in pursuit of expanding remote sensing\u2019s role in providing information for crop management. We will begin with some fundamental relationships between the electromagnetic spectrum and basic agronomic conditions and biophysical plant processes, and then present specific examples of remote sensing P H O T O G R A M M E T R I C E N G I N E E R I N G & R E M O T E S E N S I N G Photogrammetric Engineering & Remote Sensing Vol. 69, No. 6, June 2003, pp. 647\u2013664. 0099-1112/03/6906\u2013647$3.00/0 \u00a9 2003 American Society for Photogrammetry and Remote Sensing P.J. Pinter, Jr., is with the USDA, ARS, U.S. Water Conservation Laboratory, 4331 E. Broadway Rd., Phoenix, AZ 85040-8807 (ppinter@uswcl.ars.ag.gov). J.L. Hatfield is with the USDA, ARS, National Soil Tilth Research Laboratory, 2150 Pammel Dr., Ames/Ankeny, IA 50011-3120 (hatfield@nstl.gov). J.S. Schepers is with the USDA, ARS, Soil and Water Conservation Research Laboratory, 120 Keim Hall on the University of Nebraska E. Campus, Lincoln, NE 68583-0934 (jschepers1@unl.edu). E. M. Barnes was with the USDA, ARS, U.S. Water Conservation Laboratory, 4331 E. Broadway Rd., Phoenix, AZ 85040-8807; he is currently with Cotton Inc., 6399 Weston Parkway, Cary, NC 27513 (Ebarnes@cottoninc.com). M.S. Moran is with the USDA, ARS, Southwest Watershed Research Center, 200 E. Allen Rd., Tucson, AZ 85719-1596 (smoran@tucson.ars.ag.gov). C.S.T. Daughtry is with the USDA, ARS, Hydrology and Remote Sensing Laboratory, Bldg. 007, BARC-West, 10300 Baltimore Blvd., Beltsville, MD 20705-2350 (cdaughtry@ hydrolab.arsusda.gov). D.R. Upchurch is with the USDA, ARS, Cropping Systems Research Laboratory, 3810 4th Street, Lubbock, TX 79415 (dupchurch@lbk.ars.usda.gov). IPC_Grams_03-904 4/12/03 7:40 AM Page 1"}
{"_id": "34a308946b048397778358505c8e06a6f6cc1e6f", "title": "Special Perforated Steel Plate Shear Walls with Reduced Beam Section Anchor Beams . I : Experimental Investigation", "text": "This paper presents results of an experimental investigation of specially detailed ductile perforated steel plate shear walls SPSWs designed to accommodate utility passage, and having anchor beams with reduced beam sections connections. Single-story, single-bay SPSW frames are subjected to quasi-static cyclic loading up to their maximum strength and displacement capacity. The tested specimens also had low yield strength steel infill panels. Two specimens make allowances for penetration of the panel by utilities. The first, having multiple holes specially laid out in the steel panel, also has the characteristic of reduced panel strength and stiffness compared to the corresponding SPSW having a solid panel. The second such specimen utilizes quarter-circle cutouts in the panel corners, which are reinforced to transfer the panel forces to the adjacent framing. A SPSW with solid panel is also tested for reference purposes. All specimens resisted an imposed input history of increasing displacements to a minimum drift of 3%. The perforated panel reduced the elastic stiffness and overall strength of the specimen by 15% as compared with the solid panel specimen. DOI: 10.1061/ ASCE 0733-9445 2009 135:3 211 CE Database subject headings: Steel plates; Beams; Anchors; Experimentation; Shear walls."}
{"_id": "5bdc3f9227ae3ca829989edbbea9ec08b548d937", "title": "Automatic Building of Synthetic Voices from Audio Books", "text": "Current state-of-the-art text-to-speech systems produce intelligible speech but lack the prosody of natural utterances. Building better models of prosody involves development of prosodically rich speech databases. However, development of such speech databases requires a large amount of effort and time. An alternative is to exploit story style monologues (long speech files) in audio books. These monologues already encapsulate rich prosody including varied intonation contours, pitch accents and phrasing patterns. Thus, audio books act as excellent candidates for building prosodic models and natural sounding synthetic voices. The processing of such audio books poses several challenges including segmentation of long speech files, detection of mispronunciations, extraction and evaluation of representations of prosody. In this thesis, we address the issues of segmentation of long speech files, capturing prosodic phrasing patterns of a speaker, and conversion of speaker characteristics. Techniques developed to address these issues include \u2013 text-driven and speech-driven methods for segmentation of long speech files; an unsupervised algorithm for learning speaker-specific phrasing patterns and a voice conversion method by modeling target speaker characteristics. The major conclusions of this thesis are \u2013 \u2022 Audio books can be used for building synthetic voices. Segmentation of such long speech files can be accomplished without the need for a speech recognition system. \u2022 The prosodic phrasing patterns are specific to a speaker. These can be learnt and incorporated to improve the quality of synthetic voices. \u2022 Conversion of speaker characteristics can be achieved by modeling speaker-specific features of a target speaker. Finally, the techniques developed in this thesis enable prosody research by leveraging a large number of audio books available in the public domain."}
{"_id": "0a7b6af6edfc8e33742aec20aecefefbec6a4b28", "title": "System in wafer-level package technology with RDL-first process", "text": "We have developed a new system-in-package (SiP) called a \u201cSystem in Wafer-Level Package\u201d (SiWLP). It is fabricated using \u201cRDL-first\u201d technology for fan-out wafer-level-packages (FO-WLPs) and provides high chip-I/O density, design flexibility, and package miniaturization. We developed this SiWLP by using multilayer RDLs and evaluated its unique packaging processes. We achieved high-throughput fabrication by using die-to-wafer (D2W) bonding with fine-pitch reflow soldering and simultaneous molding/underfilling at the wafer level."}
{"_id": "c94072fcf63739296981a23682291a1ef8c2e891", "title": "DropConnected neural network trained with diverse features for classifying heart sounds", "text": "A fully-connected, two-hidden-layer neural network trained by error backpropagation, and regularized with DropConnect is used to classify heart sounds as normal or abnormal. The heart sounds are segmented using an open-source algorithm based on a hidden semi-Markov model. Features are extracted from the heart sounds using a wavelet transform, mel-frequency cepstral coefficients, inter-beat properties, and signal complexity. Features are normalized by subtracting by their means and dividing by their standard deviations across the whole training set. Any feature which is not significantly different between normal and abnormal recordings in the training data is removed, as are highly-correlated features. The dimensionality of the features vector is reduced by projecting it onto its first 70 principal components. A 10 fold cross-validation study gives a mean classification score of 84.1% with a variance of 2.9%. The final score on the test data was 85.2%."}
{"_id": "42540e75173dea14c343a567f9fc722db8609e7e", "title": "Review: IT-Dependent Strategic Initiatives and Sustained Competitive Advantage: A Review and Synthesis of the Literature", "text": "The role of information systems in the creation and appropriation of economic value has a long tradition of research, within which falls the literature on the sustainability of IT-dependent competitive advantage. In this article, we formally define the notion of IT-dependent strategic initiative and use Jane Webster was the accepting senior editor for this paper. Anitesh Barua was the associate editor. Jeanne Ross, Anandhi Bharadwaj, and Paul Pavlou served as reviewers. it to frame a review of the literature on the sustainability of competitive advantage rooted in information systems use. We offer a framework that articulates both the dynamic approach to ITdependent strategic advantage currently receiving attention in the literature and the underlying drivers of sustainability. This framework models how and why the characteristics of the IT-dependent strategic initiative enable sustained competitive advantage, and how the determinants of sustainability are developed and strengthened over time. Such explanation facilitates the pre-implementation analysis of planned initiatives by innovators, as well as the post-implementation evaluation of existing initiatives so as to identify the basis of their sustainability. In carrying out this study, we examined the interdisciplinary literature on strategic information systems. Using a structured methodology, we reviewed the titles and abstracts of 648 articles drawn from information systems, strategic management, and marketing literature. We then examined and individually coded a relevant subset of 117 articles. The literature has identified four barriers to erosion of competitive advantage for ITdependent strategic initiatives and has surfaced the structural determinants of their magnitude. Previous work has also begun to theorize about the process by which these barriers to erosion evolve over time. Our review reveals that signifiPiccoli & Ives/IT-Dependent Strategic Initiatives 748 MIS Quarterly Vol. 29 No. 4/December 2005 cant exploratory research and theoretical development have occurred in this area, but there is a paucity of research providing rigorous tests of theoretical propositions. Our work makes three principal contributions. First, it formalizes the definition of IT-dependent strategic initiative. Second, it organizes the extant interdisciplinary research around an integrative framework that should prove useful to both research and practice. This framework offers an explanation of how and why IT-dependent strategic initiatives contribute to sustained competitive advantage, and explains the process by which they evolve over time. Finally, our review and analysis of the literature offers the basis for future research directions."}
{"_id": "93cee70be87b0b90a407593fa8c2260187898926", "title": "Selecting Valuable Stock Using Genetic Algorithm", "text": "In this study, we utilize the genetic algorithm (GA) to select high quality stocks with investment value. Given the fundamental financial and price information of stocks trading, we attempt to use GA to identify stocks that are likely to outperform the market by having excess returns. To evaluate the efficiency of the GA for stock selection, the return of equally weighted portfolio formed by the stocks selected by GA is used as evaluation criterion. Experiment results reveal that the proposed GA for stock selection provides a very flexible and useful tool to assist the investors in selecting valuable stocks."}
{"_id": "5c54ce3108ae7d5ef0a7e5f38ad90c29cda52118", "title": "Model-Driven Artificial Intelligence for Online Network Optimization", "text": "Future 5G wireless networks will rely on agile and automated network management, where the usage of diverse resources must be jointly optimized with surgical accuracy. A number of key wireless network functionalities (e.g., traffic steering, energy savings) give rise to hard optimization problems. What is more, high spatio-temporal traffic variability coupled with the need to satisfy strict per slice/service SLAs in modern networks, suggest that these problems must be constantly (re-)solved, to maintain close-to-optimal performance. To this end, in this paper we propose the framework of Online Network Optimization (ONO), which seeks to maintain both agile and efficient control over time, using an arsenal of data-driven, adaptive, and AI-based techniques. Since the mathematical tools and the studied regimes vary widely among these methodologies, a theoretical comparison is often out of reach. Therefore, the important question \u201cwhat is the right ONO technique?\u201d remains open to date. In this paper, we discuss the pros and cons of each technique and further attempt a direct quantitative comparison for a specific use case, using real data. Our results suggest that carefully combining the insights of problem modeling with state-of-the-art AI techniques provides significant advantages at reasonable complexity."}
{"_id": "6c2ce377d9b7a3828f4e12dce4a9427481732d3d", "title": "Analytic Performance Model of a Main-Memory Index Structure", "text": "Efficient evaluation of multi-dimensional range queries in a main-memory database is an important, but difficult task. State-of-the-art techniques rely on optimised sequential scans or tree-based structures. For range queries with small result sets, sequential scans exhibit poor asymptotic performance. Also, as the dimensionality of the data set increases, the performance of tree-based structures degenerates due to the curse of dimensionality. Recent literature proposed the Elf, a main-memory structure that is optimised for the case of such multi-dimensional low-selectivity queries. The Elf outperforms other state-ofthe-art methods in manually tuned scenarios. However, choosing an optimal parameter configuration for the Elf is vital, since for poor configurations, the search performance degrades rapidly. Consequently, further knowledge about the behaviour of the Elf in different configurations is required to achieve robust performance. In this thesis, we therefore propose a numerical cost model for the Elf. Like all main-memory index structures, the Elf response time is not dominated by disk accesses, refusing a straightforward analysis. Our model predicts the size and shape of the Elf region that is examined during search. We propose that the response time of a search is linear to the size of this region. Furthermore, we study the impact of skewed data distributions and correlations on the shape of the Elf. We find that they lead to behaviour that is accurately describable through simple reductions in attribute cardinality. Our experimental results indicate that for data sets of up to 15 dimensions, our cost model predicts the size of the examined Elf region with relative errors below 5%. Furthermore, we find that the size of the Elf region examined during search predicts the response time with an accuracy of 80 %."}
{"_id": "8d6abeb4248f3dfbbb449c5100ef9b10202f4b35", "title": "Beyond Cyberbullying: Self-Disclosure, Harm and Social Support on ASKfm", "text": "ASKfm is a social media platform popular among teens and young adults where users can interact anonymously or semi-anonymously. In this paper, we identify the modes of disclosure and interaction that occur on the site, and evaluate why users are motivated to post and interact on the site, despite its reputation for facilitating cyberbullying. Through topic modeling - supplemented with manual annotation - of a large dataset of ASKfm posts, we identify and classify the rich variety of discourse posted on ASKfm, including both positive and negative forms, providing insights into the why individuals continue to engage with the site. These findings are complemented by a survey of young adults (aged 18-20) ASKfm users, which provides additional insights into users' motivations and interaction patterns. We discuss how the affordances specific to platforms like ASKfm, including anonymity and visibility, might enable users to respond to cyberbullying in novel ways, engage in positive forms of self-disclosure, and gain social support on sensitive topics. We conclude with design recommendations that would highlight the positive interactions on the website and help diminish the repurcussions of the negative interactions."}
{"_id": "3371fe1086029f6f34c8a8eb2b6ef61e86920bdd", "title": "Cognitive behavioural therapy for major psychiatric disorder: does it really work? A meta-analytical review of well-controlled trials.", "text": "BACKGROUND\nAlthough cognitive behavioural therapy (CBT) is claimed to be effective in schizophrenia, major depression and bipolar disorder, there have been negative findings in well-conducted studies and meta-analyses have not fully considered the potential influence of blindness or the use of control interventions.\n\n\nMETHOD\nWe pooled data from published trials of CBT in schizophrenia, major depression and bipolar disorder that used controls for non-specific effects of intervention. Trials of effectiveness against relapse were also pooled, including those that compared CBT to treatment as usual (TAU). Blinding was examined as a moderating factor.\n\n\nRESULTS\nCBT was not effective in reducing symptoms in schizophrenia or in preventing relapse. CBT was effective in reducing symptoms in major depression, although the effect size was small, and in reducing relapse. CBT was ineffective in reducing relapse in bipolar disorder.\n\n\nCONCLUSIONS\nCBT is no better than non-specific control interventions in the treatment of schizophrenia and does not reduce relapse rates. It is effective in major depression but the size of the effect is small in treatment studies. On present evidence CBT is not an effective treatment strategy for prevention of relapse in bipolar disorder."}
{"_id": "9b5462fca01b3fe6afde3a64440daf944c399e02", "title": "Attention-based Pyramid Aggregation Network for Visual Place Recognition", "text": "Visual place recognition is challenging in the urban environment and is usually viewed as a large scale image retrieval task. The intrinsic challenges in place recognition exist that the confusing objects such as cars and trees frequently occur in the complex urban scene, and buildings with repetitive structures may cause over-counting and the burstiness problem degrading the image representations. To address these problems, we present an Attention-based Pyramid Aggregation Network (APANet), which is trained in an end-to-end manner for place recognition. One main component of APANet, the spatial pyramid pooling, can effectively encode the multi-size buildings containing geo-information. The other one, the attention block, is adopted as a region evaluator for suppressing the confusing regional features while highlighting the discriminative ones. When testing, we further propose a simple yet effective PCA power whitening strategy, which significantly improves the widely used PCA whitening by reasonably limiting the impact of over-counting. Experimental evaluations demonstrate that the proposed APANet outperforms the state-of-the-art methods on two place recognition benchmarks, and generalizes well on standard image retrieval datasets."}
{"_id": "d08d4f2d7487a1172cf8322927372bb0978ef2d0", "title": "Paving green passage for emergency vehicle in heavy traffic: Real-time motion planning under the connected and automated vehicles environment", "text": "This paper describes a real-time multi-vehicle motion planning (MVMP) algorithm for the emergency vehicle clearance task. To address the inherent limitations of human drivers in perception, communication, and cooperation, we require that the emergency vehicle and the surrounding normal vehicles are connected and automated vehicles (CAVs). The concerned MVMP task is to find cooperative trajectories such that the emergency vehicle can efficiently pass through the normal vehicles ahead. We use an optimal-control based formulation to describe the MVMP problem, which is centralized, straightforward, and complete. For the online solutions, the centralized MVMP formulation is converted into a multi-period and multi-stage version. Concretely, each period consists of two stages: the emergency vehicle and several normal CAVs ahead try to form a regularized platoon via acceleration or deceleration (stage 1); when a regularized platoon is formed, these vehicles act cooperatively to make way for the emergency vehicle until the emergency vehicle becomes the leader in this local platoon (stage 2). When one period finishes, the subsequent period begins immediately. This sequential process continues until the emergency vehicle finally passes through all the normal CAVs. The subproblem at stage 1 is extremely easy because nearly all the challenging nonlinearity gathers only in stage 2; typical solutions to the subproblem at stage 2 can be prepared offline, and then implemented online directly. Through this, our proposed MVMP algorithm avoids heavy online computations and thus runs in real time."}
{"_id": "2441e05dbaf5bff5e5d10c925d682a2d13282809", "title": "Simulating LTE Cellular Systems: An Open-Source Framework", "text": "Long-term evolution (LTE) represents an emerging and promising technology for providing broadband ubiquitous Internet access. For this reason, several research groups are trying to optimize its performance. Unfortunately, at present, to the best of our knowledge, no open-source simulation platforms, which the scientific community can use to evaluate the performance of the entire LTE system, are freely available. The lack of a common reference simulator does not help the work of researchers and poses limitations on the comparison of results claimed by different research groups. To bridge this gap, herein, the open-source framework LTE-Sim is presented to provide a complete performance verification of LTE networks. LTE-Sim has been conceived to simulate uplink and downlink scheduling strategies in multicell/multiuser environments, taking into account user mobility, radio resource optimization, frequency reuse techniques, the adaptive modulation and coding module, and other aspects that are very relevant to the industrial and scientific communities. The effectiveness of the proposed simulator has been tested and verified considering 1) the software scalability test, which analyzes both memory and simulation time requirements; and 2) the performance evaluation of a realistic LTE network providing a comparison among well-known scheduling strategies."}
{"_id": "cc004d48650b096cc0ee347f45768166d6c9c31c", "title": "2395-1621 Hand Gesture Recognition Using Flex Sensors", "text": "ARTICLE INFO In this system an electro-mechanical robot is designed and controlled using hand gesture in real time. The system is designed on microcontroller platform using Keil and MPLAB tools. Hand gesture recognition is done on the principle of resistance change sensed through flex sensor. These sensors are integrated in a hand gloves from which input to the system is given. The designed system is divided into two sections as transmitter and receiver. The transmitter section will be in hand gloves from which the data is sensed and processed through PIC16F7487 and send serially to the receiver section. RF technology is used to transmit the data at the receiver section at the frequency of 2.4 GHz. ARM 7 (LPC2148) processor is used to receive the data. Here from the received data, the character is predicted and matched with the closet character from which the character is identified and displayed on LCD. The various case studies is prepared for the designed system and tested in real time. The proposed system can be used for the various applications such as in unmanned machines, industries, handicapped personnel etc. Keywords\u2014 a Sensor gloves, flex sensors, PIC controller, ARM processor etc Article History Received:20 th September 2015 Received in revised form :"}
{"_id": "b304db2cee541bafaaabdd9a4499b2bfb4efb257", "title": "Wide-band microstrip-to-coplanar stripline/slotline transitions", "text": "Wide-band microstrip line to coplanar stripline (CPS) transitions are proposed. The transition consists of a multisection matching transformer and a quarter-wavelength radial stub for the impedance matching and field matching between the microstrip line and CPS, respectively. The proposed planar transition has the advantages of compact size, wide bandwidth, and straightforward design procedure. Several parameters are studied through simulations and experiments to derive some design guidelines. With the return loss of better than 14 dB, the 1- and 3-dB back-to-back insertion loss bandwidth can cover from 1.4 to 6.6 GHz (1 : 4.7) and from 1.1 to 10.5 GHz (1 : 9.6), respectively. In addition, the microstrip-to-CPS transition is extended to design a microstrip-to-slotline transition by tapering the CPS into a slotline. From 2.7 to 10.4 GHz (1 : 3.85), the back-to-back return loss is better than 15 dB and the insertion loss is less than 3 dB."}
{"_id": "7f3e7fe1efc0b74a4364c11a07668251f4875724", "title": "A survey of sparse matrix-vector multiplication performance on large matrices", "text": "One of the main sources of sparse matrices is the discretization of partial differential equations that govern continuumphysics phenomena such as fluid flow and transport, phase separation, mechanical deformation, electromagnetic wave propagation, and others. Recent advances in high-performance computing area have been enabling researchers to tackle increasingly larger problems leading to sparse linear systems with hundreds of millions to a few tens of billions of unknowns, e.g., [5, 6]. Iterative linear solvers are popular in large-scale computing as they consume less memory than direct solvers. Contrary to direct linear solvers, iterative solvers approach the solution gradually requiring the computation of sparse matrixvector (SpMV) products. The evaluation of SpMV products can emerge as a bottleneck for computational performance within the context of the simulation of large problems. In this work, we focus on a linear system arising from the discretization of the Cahn\u2013Hilliard equation, which is a fourth order nonlinear parabolic partial differential equation that governs the separation of a two-component mixture into phases [3]. The underlying spatial discretization is performed using the discontinuous Galerkin method (e.g. [10]) and Newton\u2019s method. A number of parallel algorithms and strategies have been evaluated in this work to accelerate the evaluation of SpMV products."}
{"_id": "60a8051816f62132bb9554d20b9f353d917dca79", "title": "Scale Space Technique for Word Segmentation in Handwritten Documents", "text": "Indexing large archives of historical manuscripts, like the papers of George Washington, is required to allow rapid perusal by scholars and researchers who wish to consult the original manuscripts. Presently, such large archives are indexed manually. Since optical character recognition (OCR) works poorly with handwriting, a scheme based on matching word images called word spotting has been suggested previously for indexing such documents. The important steps in this scheme are segmentation of a document page into words and creation of lists containing instances of the same word by word image matching. We have developed a novel methodology for segmenting handwritten document images by analyzing the extent of \u201cblobs\u201d in a scale space representationof the image. We believe this is the first application of scale space to this problem. The algorithm has been applied to around 30 grey level images randomly picked from different sections of the George Washington corpus of 6,400 handwritten document images. An accuracy of 77 \u2212 96 percent was observed with an average accuracy of around 87 percent. The algorithm works well in the presence of noise, shine through and other artifacts which may arise due aging and degradation of the page over a couple of centuries or through the man made processes of photocopying and scanning."}
{"_id": "4a40d93a2960d5603af45c45c4f6f6ccc7e9af7c", "title": "Mining and analyzing the enterprise knowledge graph", "text": "Today's enterprises hold ever-growing amounts of public data, stemming from different organizational systems, such as development environments, CRM systems, business intelligence systems, and enterprise social media. This data unlocks rich and diverse information about entities, people, terms, and the relationships among them. A lot of insight can be gained through analyzing this knowledge graph, both by individual employees and by the organization as a whole. In this talk, I will review recent work done by the Social Technologies & Analytics group at IBM Research-Haifa to mine these relationships, represent them in a generalized model, and use the model for different aims within the enterprise, including social search [5], expertise location [1], social recommendation [2, 3], and network analysis [4]."}
{"_id": "ac8b8cf9dd6dcacd4963da40200abcc56ce8ba49", "title": "Exploring IOT Application Using Raspberry Pi", "text": "There are thousands of sensors in an industry with different usage, such as, pressure transmitters, flow meter, temperature transmitters, level transmitters, and so on. Wired networks are mainly used to transfer data to base station by connecting sensor. It brings advantage as it provides reliable and stable communication system for instruments and controls. However, the cost of cables necessary is very costly. Therefore, recently low cost wireless networks are strongly required by customers, for example, temporary instrument networks and/or some non-critical permanent sites which require low data rate and longer battery life. In client/server model, file server act as a parent\u2019s node which allow multiple child node to connect with it. It is responsible for central storage and data management so that other computers enable to access the file under the same network. This article explores the use of Raspberry Pi to function as a server in which several laptops are connected to it to copy, store and delete the file over network. IT requires authentication for user login before granting access to the file to ensure data integrity and security. File server is widely used in many areas, for example in education for uploading study note into the serve and student immediate downloading it into their computer. Moreover this work also explores the use of Raspberry Pi B+ model and XBee (ZigBee module) to demonstrate wireless communication data transmission, proving the validity of usage as a mobile low-power wireless network communication. The main goal of the research is to explore the use of Raspberry Pi for client-server communication using various wireless communication scenario such as Wi-Fi and ZigBee. Index Terms \u2013 client/server model, Raspberry Pi, XBee."}
{"_id": "025720574ef67672c44ba9e7065a83a5d6075c36", "title": "Unsupervised Learning of Video Representations using LSTMs", "text": "We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences \u2013 patches of image pixels and high-level representations (\u201cpercepts\u201d) of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem \u2013 human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance."}
{"_id": "0d8b85953c25c23c512d4522d5597c8e3e0bb8c7", "title": "Spatial Transformer Networks", "text": "Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations."}
{"_id": "18cc17c06e34baaa3e196db07e20facdbb17026d", "title": "Describing Videos by Exploiting Temporal Structure", "text": "Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application for video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and then properly integrating that information into a natural language description model. In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions. First, our approach incorporates a spatial temporal 3-D convolutional neural network (3-D CNN) representation of the short temporal dynamics. The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second we propose a temporal attention mechanism that allows to go beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the text-generating RNN. Our approach exceeds the current state-of-art for both BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on a new, larger and more challenging dataset of paired video and natural language descriptions."}
{"_id": "533ee188324b833e059cb59b654e6160776d5812", "title": "How to Construct Deep Recurrent Neural Networks", "text": "In this paper, we explore different ways to extend a recurrent neural network (RNN) to a deep RNN. We start by arguing that the concept of depth in an RNN is not as clear as it is in feedforward neural networks. By carefully analyzing and understanding the architecture of an RNN, however, we find three points of an RNN which may be made deeper; (1) input-to-hidden function, (2) hidden-tohidden transition and (3) hidden-to-output function. Based on this observation, we propose two novel architectures of a deep RNN which are orthogonal to an earlier attempt of stacking multiple recurrent layers to build a deep RNN (Schmidhuber, 1992; El Hihi and Bengio, 1996). We provide an alternative interpretation of these deep RNNs using a novel framework based on neural operators. The proposed deep RNNs are empirically evaluated on the tasks of polyphonic music prediction and language modeling. The experimental result supports our claim that the proposed deep RNNs benefit from the depth and outperform the conventional, shallow RNNs."}
{"_id": "093ec8743f85a7a8d48fc84797591f20b65513cf", "title": "A new approach to developing and implementing eager database replication protocols", "text": "Database replication is traditionally seen as a way to increase the availability and performance of distributed databases. Although a large number of protocols providing data consistency and fault-tolerance have been proposed, few of these ideas have ever been used in commercial products due to their complexity and performance implications. Instead, current products allow inconsistencies and often resort to centralized approaches which eliminates some of the advantages of replication. As an alternative, we propose a suite of replication protocols that addresses the main problems related to database replication. On the one hand, our protocols maintain data consistency and the same transactional semantics found in centralized systems. On the other hand, they provide flexibility and reasonable performance. To do so, our protocols take advantage of the rich semantics of group communication primitives and the relaxed isolation guarantees provided by most databases. This allows us to eliminate the possibility of deadlocks, reduce the message overhead and increase performance. A detailed simulation study shows the feasibility of the approach and the flexibility with which different types of bottlenecks can be circumvented."}
{"_id": "6e2c9da857f1c70a2d8cfa4fbd3bcfdc6442a27e", "title": "Job performance prediction in a call center using a naive Bayes classifier", "text": "This study presents an approach to predict the performance of sales agents of a call center dedicated exclusively to sales and telemarketing activities. This approach is based on a naive Bayesian classifier. The objective is to know what levels of the attributes are indicative of individuals who perform well. A sample of 1037 sales agents was taken during the period between March and September of 2009 on campaigns related to insurance sales and service pre-paid phone services, to build the naive Bayes network. It has been shown that, socio-demographic attributes are not suitable for predicting performance. Alternatively, operational records were used to predict production of sales agents, achieving satisfactory results. In this case, the classifier training and testing is done through a stratified tenfold cross-validation. It classified the instances correctly 80.60% of times, with the proportion of false positives of 18.1% for class no (does not achieve minimum) and 20.8% for the class yes (achieves equal or above minimum acceptable). These results suggest that socio-demographic attributes has no predictive power on performance, while the operational information of the activities of the sale agent can predict the future performance of the"}
{"_id": "3abd86e2e846136ae602a09d38e74aa545f1cb0e", "title": "TUCH: Turning Cross-view Hashing into Single-view Hashing via Generative Adversarial Nets", "text": "Cross-view retrieval, which focuses on searching images as response to text queries or vice versa, has received increasing attention recently. Crossview hashing is to efficiently solve the cross-view retrieval problem with binary hash codes. Most existing works on cross-view hashing exploit multiview embedding method to tackle this problem, which inevitably causes the information loss in both image and text domains. Inspired by the Generative Adversarial Nets (GANs), this paper presents a new model that is able to Turn Crossview Hashing into single-view hashing (TUCH), thus enabling the information of image to be preserved as much as possible. TUCH is a novel deep architecture that integrates a language model network T for text feature extraction, a generator network G to generate fake images from text feature and a hashing network H for learning hashing functions to generate compact binary codes. Our architecture effectively unifies joint generative adversarial learning and cross-view hashing. Extensive empirical evidence shows that our TUCH approach achieves state-of-the-art results, especially on text to image retrieval, based on imagesentences datasets, i.e. standard IAPRTC-12 and large-scale Microsoft COCO."}
{"_id": "e9eaf8ec0fe238ce1ae0fcf47a24d3d19a869f0c", "title": "Application of Machine Learning Algorithms to an online Recruitment System", "text": "In this work, we present a novel approach for evaluating job applicants in online recruitment systems, leveraging machine learning algorithms to solve the candidate ranking problem. An application of our approach is implemented in the form of a prototype system, whose functionality is showcased and evaluated in a real-world recruitment scenario. The proposed system extracts a set of objective criteria from the applicants\u2019 LinkedIn profile, and infers their personality characteristics using linguistic analysis on their blog posts. Our system was found to perform consistently compared to human recruiters; thus, it can be trusted for the automation of applicant ranking and"}
{"_id": "557c209ec279b5e6181f86d30e371f1f20964437", "title": "Aggression in the Laboratory : Problems with the Validity of the Modified Taylor Competitive Reaction Time Test as a Measure of Aggression in Media Violence Studies", "text": "Many laboratory studies of aggression use a measure known as the modified Taylor Competitive Reaction Time Test (TCRTT), for which validation studies are lacking. Using sound blasts administered by the participant against a fictional human opponent, the TCRTT also allows for multiple methods of measuring aggression. The validity of the TCRTT was tested in 53 college student participants. Participants took a self-report measure of aggressiveness as well as neuropsychological measures of frontal lobe functioning predictive of aggression. Results were not supportive of the TCRTT\u2019s validity and indicated concerns regarding the use of the TCRTT as a measure of aggression. Results suggest that labaratory studies of media violence using the TCRTT are of questionable validity."}
{"_id": "5b4d42eba3068dd7c25575b358e33e43c1af9fe5", "title": "Understanding consumers' continuance intention towards mobile purchase: A theoretical framework and empirical study - A case of China", "text": "Although mobile purchase is convenient in the age of mobile commerce, many consumers still do not utilize mobile purchase to its full potential. From the mobile vendor\u2019s perspective, retaining current customers and facilitating their continued purchase are crucial to create profitability and achieve a sustainable development. An understanding of the continuance intention towards mobile purchase can provide insights into mobile vendors\u2019 marketing campaigns. Therefore, it is important to examine the determinants that impact continuance intentions of consumers for mobile purchase. Drawing upon information success model, flow theory and trust, this study proposed and empirically tested an integrated model to better understand the determinants of consumers\u2019 continued intention to purchase on mobile sites. Empirical data from 462 users who had experience with mobile purchase were tested against the proposed research model by using structural equation modelling (SEM). The results indicated that information quality, and privacy and security concerns are the main factors affecting trust, whereas service quality is the main factor affecting flow. System quality, and privacy and security concerns affect satisfaction. Trust affects flow, which in turn affects satisfaction. These three factors together affect continued intention towards mobile purchase. The findings of this study provide several important implications for mobile commerce research and practice. 2015 Elsevier Ltd. All rights reserved."}
{"_id": "cab7bfc78f641cdf301b0c6078fb31dc3107ac02", "title": "A Hybrid Word-Character Model for Abstractive Summarization", "text": "Automatic abstractive text summarization is an important and challenging research topic of natural language processing. Among many widely used languages, the Chinese language has a special property that a Chinese character contains rich information comparable to a word. Existing Chinese text summarization methods, either adopt totally character-based or word-based representations, fail to fully exploit the information carried by both representations. To accurately capture the essence of articles, we propose a hybrid word-character approach (HWC) which preserves the advantages of both wordbased and character-based representations. We evaluate the advantage of the proposed HWC approach by applying it to two existing methods, and discover that it generates state-of-the-art performance with a margin of 24 ROUGE points on a widely used dataset LCSTS. In addition, we find an issue contained in the LCSTS dataset and offer a script to remove overlapping pairs (a summary and a short text) to create a clean dataset for the community. The proposed HWC approach also generates the best performance on the new, clean LCSTS dataset."}
{"_id": "9006770978b945e57f9bf096a5e029fbe75cff7b", "title": "Open Information Extraction with Meta-pattern Discovery in Biomedical Literature", "text": "Biomedical open information extraction (BioOpenIE) is a novel paradigm to automatically extract structured information from unstructured text with no or little supervision. It does not require any pre-specified relation types but aims to extract all the relation tuples from the corpus. A major challenge for open information extraction (OpenIE) is that it produces massive surface-name formed relation tuples that cannot be directly used for downstream applications. We propose a novel framework CPIE (Clause+Pattern-guided Information Extraction) that incorporates clause extraction and meta-pattern discovery to extract structured relation tuples with little supervision. Compared with previous OpenIE methods, CPIE produces massive but more structured output that can be directly used for downstream applications. We first detect short clauses from input sentences. Then we extract quality textual patterns and perform synonymous pattern grouping to identify relation types. Last, we obtain the corresponding relation tuples by matching each quality pattern in the text. Experiments show that CPIE achieves the highest precision in comparison with state-of-the-art OpenIE baselines, and also keeps the distinctiveness and simplicity of the extracted relation tuples. CPIE shows great potential in effectively dealing with real-world biomedical literature with complicated sentence structures and rich information."}
{"_id": "59fda74cb34948f08a6baa67889ff87da41effc2", "title": "Tensegrity II. How structural networks influence cellular information processing networks.", "text": "The major challenge in biology today is biocomplexity: the need to explain how cell and tissue behaviors emerge from collective interactions within complex molecular networks. Part I of this two-part article, described a mechanical model of cell structure based on tensegrity architecture that explains how the mechanical behavior of the cell emerges from physical interactions among the different molecular filament systems that form the cytoskeleton. Recent work shows that the cytoskeleton also orients much of the cell's metabolic and signal transduction machinery and that mechanical distortion of cells and the cytoskeleton through cell surface integrin receptors can profoundly affect cell behavior. In particular, gradual variations in this single physical control parameter (cell shape distortion) can switch cells between distinct gene programs (e.g. growth, differentiation and apoptosis), and this process can be viewed as a biological phase transition. Part II of this article covers how combined use of tensegrity and solid-state mechanochemistry by cells may mediate mechanotransduction and facilitate integration of chemical and physical signals that are responsible for control of cell behavior. In addition, it examines how cell structural networks affect gene and protein signaling networks to produce characteristic phenotypes and cell fate transitions during tissue development."}
{"_id": "03bd7d473527757c08e8c31dc2959d0d23c51aea", "title": "Alternating Directions Dual Decomposition", "text": "We propose AD, a new algorithm for approximate maximum a posteriori (MAP) inference on factor graphs based on the alternating directions method of multipliers. Like dual decomposition algorithms, AD uses worker nodes to iteratively solve local subproblems and a controller node to combine these local solutions into a global update. The key characteristic of AD is that each local subproblem has a quadratic regularizer, leading to a faster consensus than subgradient-based dual decomposition, both theoretically and in practice. We provide closed-form solutions for these AD subproblems for binary pairwise factors and factors imposing first-order logic constraints. For arbitrary factors (large or combinatorial), we introduce an active set method which requires only an oracle for computing a local MAP configuration, making AD applicable to a wide range of problems. Experiments on synthetic and realworld problems show that AD compares favorably with the state-of-the-art."}
{"_id": "cacfb6df4937059a166c0c59b2ad152c9b67c42c", "title": "Artificial intelligence framework for simulating clinical decision-making: A Markov decision process approach", "text": "OBJECTIVE\nIn the modern healthcare system, rapidly expanding costs/complexity, the growing myriad of treatment options, and exploding information streams that often do not effectively reach the front lines hinder the ability to choose optimal treatment decisions over time. The goal in this paper is to develop a general purpose (non-disease-specific) computational/artificial intelligence (AI) framework to address these challenges. This framework serves two potential functions: (1) a simulation environment for exploring various healthcare policies, payment methodologies, etc., and (2) the basis for clinical artificial intelligence - an AI that can \"think like a doctor\".\n\n\nMETHODS\nThis approach combines Markov decision processes and dynamic decision networks to learn from clinical data and develop complex plans via simulation of alternative sequential decision paths while capturing the sometimes conflicting, sometimes synergistic interactions of various components in the healthcare system. It can operate in partially observable environments (in the case of missing observations or data) by maintaining belief states about patient health status and functions as an online agent that plans and re-plans as actions are performed and new observations are obtained. This framework was evaluated using real patient data from an electronic health record.\n\n\nRESULTS\nThe results demonstrate the feasibility of this approach; such an AI framework easily outperforms the current treatment-as-usual (TAU) case-rate/fee-for-service models of healthcare. The cost per unit of outcome change (CPUC) was $189 vs. $497 for AI vs. TAU (where lower is considered optimal) - while at the same time the AI approach could obtain a 30-35% increase in patient outcomes. Tweaking certain AI model parameters could further enhance this advantage, obtaining approximately 50% more improvement (outcome change) for roughly half the costs.\n\n\nCONCLUSION\nGiven careful design and problem formulation, an AI simulation framework can approximate optimal decisions even in complex and uncertain environments. Future work is described that outlines potential lines of research and integration of machine learning algorithms for personalized medicine."}
{"_id": "2128d9f0ae74ab90ee27e7b8639bab24fb68592a", "title": "Seq2seq Fingerprint: An Unsupervised Deep Molecular Embedding for Drug Discovery", "text": "Many of today's drug discoveries require expertise knowledge and insanely expensive biological experiments for identifying the chemical molecular properties. However, despite the growing interests of using supervised machine learning algorithms to automatically identify those chemical molecular properties, there is little advancement of the performance and accuracy due to the limited amount of training data. In this paper, we propose a novel unsupervised molecular embedding method, providing a continuous feature vector for each molecule to perform further tasks, e.g., solubility classification. In the proposed method, a multi-layered Gated Recurrent Unit (GRU) network is used to map the input molecule into a continuous feature vector of fixed dimensionality, and then another deep GRU network is employed to decode the continuous vector back to the original molecule. As a result, the continuous encoding vector is expected to contain rigorous and enough information to recover the original molecule and predict its chemical properties. The proposed embedding method could utilize almost unlimited molecule data for the training phase. With sufficient information encoded in the vector, the proposed method is also robust and task-insensitive. The performance and robustness are confirmed and interpreted in our extensive experiments."}
{"_id": "b09a0f39d727ce3be1b30fa1719605734d92461e", "title": "Social disconnectedness, perceived isolation, and health among older adults.", "text": "Previous research has identified a wide range of indicators of social isolation that pose health risks, including living alone, having a small social network, infrequent participation in social activities, and feelings of loneliness. However multiple forms of isolation are rarely studied together making it difficult to determine which aspects of isolation are most deleterious for health. Using population-based data from the National Social Life, Health, and Aging Project, we combine multiple indicators of social isolation into scales assessing social disconnectedness (e.g., small social network, infrequent participation in social activities) and perceived isolation (e.g., loneliness, perceived lack of social support). We examine the extent to which social disconnectedness and perceived isolation have distinct associations with physical and mental health among older adults. Results indicate that social disconnectedness and perceived isolation are independently associated with lower levels of self-rated physical health. However, the association between disconnectedness and mental health may operate through the strong relationship between perceived isolation and mental health. We conclude that health researchers need to consider social disconnectedness and perceived isolation simultaneously."}
{"_id": "8e5fb0aab6c4c8a796bc3217ec31d06bcd5772bd", "title": "FC^4: Fully Convolutional Color Constancy with Confidence-Weighted Pooling", "text": "Improvements in color constancy have arisen from the use of convolutional neural networks (CNNs). However, the patch-based CNNs that exist for this problem are faced with the issue of estimation ambiguity, where a patch may contain insufficient information to establish a unique or even a limited possible range of illumination colors. Image patches with estimation ambiguity not only appear with great frequency in photographs, but also significantly degrade the quality of network training and inference. To overcome this problem, we present a fully convolutional network architecture in which patches throughout an image can carry different confidence weights according to the value they provide for color constancy estimation. These confidence weights are learned and applied within a novel pooling layer where the local estimates are merged into a global solution. With this formulation, the network is able to determine what to learn and how to pool automatically from color constancy datasets without additional supervision. The proposed network also allows for end-to-end training, and achieves higher efficiency and accuracy. On standard benchmarks, our network outperforms the previous state-of-the-art while achieving 120x greater efficiency."}
{"_id": "6d5768768e003481dcd356d6097b7b4cdf96003b", "title": "Internet addiction among Norwegian adults: a stratified probability sample study.", "text": "Most Norwegians are Internet users. We conducted a stratified probability sample study (Norway, 2007, age-group 16-74 years, N= 3,399, response rate 35.3%, 87.1% Internet users) to assess the prevalence of Internet addiction and at-risk Internet use by the Young Diagnostic Questionnaire (YDQ). The prevalence of Internet addiction (YDQ score 5-8) was 1.0% and an additional 5.2% were at-risk Internet users (YDQ score 3-4). Internet addiction and at-risk Internet use was strongly dependent on gender and age with highest prevalences among young males (16-29 years 4.1% and 19.0%, 30-39 years 3.3% and 10.7%). Logistic regression showed that male gender, young age, university level education, and an unsatisfactory financial situation were factors positively associated with \"problematic Internet use\" (at-risk and addicted use combined). Time spent on the Internet and prevalence of self-reported sleeping disorders, depression, and other psychological impairments increased linearly with YDQ score. Problematic Internet use clearly affects the lives of many people."}
{"_id": "34c445b8584d6dcbc898accfebf6bec1d067c1f8", "title": "Pseudo random number generator and Hash function for embedded microprocessors", "text": "Embedded microprocessors are commonly used for future technologies such as Internet of Things(IoT), RFID and Wireless Sensor Networks(WSN). However, the microprocessors have limited computing power and storages so straight-forward implementation of traditional services on resource constrained devices is not recommenced. To overcome this problem, lightweight implementation techniques should be concerned for practical implementations. Among various requirements, security applications should be conducted on microprocessors for secure and robust service environments. In this paper, we presented a light weight implementation techniques for efficient Pseudo Random Number Generator(PRNG) and Hash function. To reduce memory consumption and accelerate performance, we adopted AES accelerator based implementation. This technique is firstly introduced in INDOCRYPT'12, whose idea exploits peripheral devices for efficient hash computations. With this technique, we presented block cipher based light-weight pseudo random number generator and simple hash function on embedded microprocessors."}
{"_id": "56359d2b4508cc267d185c1d6d310a1c4c2cc8c2", "title": "Shape driven kernel adaptation in Convolutional Neural Network for robust facial trait recognition", "text": "One key challenge of facial trait recognition is the large non-rigid appearance variations due to some irrelevant real world factors, such as viewpoint and expression changes. In this paper, we explore how the shape information, i.e. facial landmark positions, can be explicitly deployed into the popular Convolutional Neural Network (CNN) architecture to disentangle such irrelevant non-rigid appearance variations. First, instead of using fixed kernels, we propose a kernel adaptation method to dynamically determine the convolutional kernels according to the spatial distribution of facial landmarks, which helps learning more robust features. Second, motivated by the intuition that different local facial regions may demand different adaptation functions, we further propose a tree-structured convolutional architecture to hierarchically fuse multiple local adaptive CNN subnetworks. Comprehensive experiments on WebFace, Morph II and MultiPIE databases well validate the effectiveness of the proposed kernel adaptation method and tree-structured convolutional architecture for facial trait recognition tasks, including identity, age and gender recognition. For all the tasks, the proposed architecture consistently achieves the state-of-the-art performances."}
{"_id": "d7140f3f70df89b7926197ad2d35ed67efb909c4", "title": "Story generation after TALE-SPIN", "text": "TALE-SPIN, the laat major AI attempt at atory generation, approached the problem of making up atoriea primarily from the perspective of an impartial world simulator. AUTHOR ia a program (under development) which generates atoriea as a creative reaaoner in purauit of her own narrative goals. It is thus intended to aimulate an author's mind aa ahe makes up a story, rather than the world as things happen in it. The four major forces driving the atory generation process, according to the AUTHOR model, are (1) author intentionality, (2) conceptual reformulation, (3) reminding, and (4) the opportunity enhancement metagoal. 1. World Simulation Based Story Generation TALE-SPIN [Meehan, 1976] is a program which made up Aesop-fable-like conceptual atoriea, which it then expressed in natural language. Each atory TALE-SPIN made up waa based on a model of the storyworld, including the characters in it and their goala, peraonalities, and interperaonal relatione. Storiea were apun by working out consequences of the model. Thus, central to TALE-SPIN waa the idea of world simulation. The three levels TALE-SPIN was most concerned with simulating were: 1. character intentionality-e.g., if Joe Bear was hungry, he would try to get something to eat 2. social conatrainta-e.g., if Joe Bear believed Irving Bird to be hia friend, he could aak an outright favor; if Joe Bear believed their relation to be a bit cooler, he would feel obliged to offer something in return (auch aa a worm) 3. phyaical cauaality-e.g., if Irving Bird ate a worm, the worm would cease to exist; after \"fooling around\", Aa oppoaed to Sheldon Klein's ayetem which only produced atory texts and which ia incapable, in principle, of understanding the atoriea it \"makes up\" [Klein et el., 1976]. characters felt \"wiped out\" World simulation, however, fails to reflect the proctaa that an author goes through in making up a atory-storyworlds are developed by authors aa needed, frequently as post hoc justification for eventa that the author has already decided ahe wants as part of the story. As illustration, let us consider the question of why, in the Star Wars aequel, \"The Empire Strikes Back\", Princess Leia and Solo go to visit Landeau. If one asks a reader (or viewer) this question, the anawer will be along the lines of, \"They needed a place to hide from imperial stormtroopers until they got the hyperdrive fixed, and Landeau was an old buddy of Solo's, and ...\" \u2022 \u2026"}
{"_id": "b715c90c8f175e646009f81feac7c2f8d2927b5c", "title": "A Multimodal Embedded Sensor System for Scalable Robotic and Prosthetic Fingers", "text": "The development of dexterous and robust anthropomorphic hands with rich sensor feedback remains a challenging task for both humanoid robotics as well as prosthetics as of today. The design of hands that are scalable in size and equipped with integrated multimodal sensor systems is a key requirement for advanced control schemes and reactive behaviour. In this paper, we present the design of a scalable and low cost robotic finger with a soft fingertip and position, temperature as well as normal and shear force sensors. All cables and sensors are completely enclosed inside the finger to ensure an anthropometric appearance. The finger is modelled based on a 50th percentile male little finger and can be easily adapted to other dimensions in terms of size and sensor system configuration. We describe the design of the sensor system, provide an experimental analysis for the characterization of the different sensor types in terms of sensor range, resolution, creep, spatial response as well as temperature flux."}
{"_id": "6a483c9ed4bf540531e6961c0251610d50f27ea0", "title": "A Fuzzy-Neural Intelligent Trading Model for Stock Price Prediction", "text": "In this paper, Fuzzy logic and Neural Network approaches for predicting financial stock price are investigated. A study of a knowledge based system for stock price prediction is carried out. We explore Trapezoidal membership function method and Sugeno-type fuzzy inference engine to optimize the estimated result. Our model utilizes the performance of artificial neural networks trained using back propagation and supervised learning methods respectively. The system is developed based on the selection of stock data history obtained from Nigerian Stock Exchange in Nigeria, which are studied and used in training the system. A computer simulation is designed to assist the experimental decision for the best control action. The system is developed using MySQL, NetBeans, Java, and MatLab. The experimental result shows that the model has such properties as fast convergence, high precision and strong function approximation ability. It has shown to perform well in the context of various trading strategies involving stocks."}
{"_id": "cb5bf23d57d56e9029b42cfdd5ca8786b3c4ef00", "title": "Heterogeneous Feature Selection With Multi-Modal Deep Neural Networks and Sparse Group LASSO", "text": "Heterogeneous feature representations are widely used in machine learning and pattern recognition, especially for multimedia analysis. The multi-modal, often also high- dimensional , features may contain redundant and irrelevant information that can deteriorate the performance of modeling in classification. It is a challenging problem to select the informative features for a given task from the redundant and heterogeneous feature groups. In this paper, we propose a novel framework to address this problem. This framework is composed of two modules, namely, multi-modal deep neural networks and feature selection with sparse group LASSO. Given diverse groups of discriminative features, the proposed technique first converts the multi-modal data into a unified representation with different branches of the multi-modal deep neural networks. Then, through solving a sparse group LASSO problem, the feature selection component is used to derive a weight vector to indicate the importance of the feature groups. Finally, the feature groups with large weights are considered more relevant and hence are selected. We evaluate our framework on three image classification datasets. Experimental results show that the proposed approach is effective in selecting the relevant feature groups and achieves competitive classification performance as compared with several recent baseline methods."}
{"_id": "4dd674450405d787b5cd1f6d314286235988e637", "title": "On the Margin Theory of Feedforward Neural Networks", "text": "Past works have shown that, somewhat surprisingly, over-parametrization can help generalization in neural networks. Towards explaining this phenomenon, we adopt a margin-based perspective. We establish: 1) for multilayer feedforward relu networks, the global minimizer of a weakly-regularized cross-entropy loss has the maximum normalized margin among all networks, 2) as a result, increasing the over-parametrization improves the normalized margin and generalization error bounds for two-layer networks. In particular, an infinite-size neural network enjoys the best generalization guarantees. The typical infinite feature methods are kernel methods; we compare the neural net margin with that of kernel methods and construct natural instances where kernel methods have much weaker generalization guarantees. We validate this gap between the two approaches empirically. Finally, this infinite-neuron viewpoint is also fruitful for analyzing optimization. We show that a perturbed gradient flow on infinite-size networks finds a global optimizer in polynomial time."}
{"_id": "2106de484c3f1e3a21f2708effc181f51ca7d709", "title": "Social interaction detection using a multi-sensor approach", "text": "In the context of a social gathering, such as a cocktail party, the memorable moments are often captured by professional photographers or the participants. The latter case is generally undesirable because many participants would rather enjoy the event instead of being occupied by the tedious photo capturing task. Motivated by this scenario, we propose an automated social event photo-capture framework for which, given the multiple sensor data streams and the information from the Web as input, will output the visually appealing photos of the social event. Our proposal consists of three components: (1) social attribute extraction from both the physical space and the cyberspace; (2) social attribute fusion; and (3) active camera control. Current work is presented and we conclude with expected contributions as well as future direction."}
{"_id": "25f82d16a55bf610453c275a8fa43419d3f05777", "title": "Psychopaths know right from wrong but don't care.", "text": "Adult psychopaths have deficits in emotional processing and inhibitory control, engage in morally inappropriate behavior, and generally fail to distinguish moral from conventional violations. These observations, together with a dominant tradition in the discipline which sees emotional processes as causally necessary for moral judgment, have led to the conclusion that psychopaths lack an understanding of moral rights and wrongs. We test an alternative explanation: psychopaths have normal understanding of right and wrong, but abnormal regulation of morally appropriate behavior. We presented psychopaths with moral dilemmas, contrasting their judgments with age- and sex-matched (i) healthy subjects and (ii) non-psychopathic, delinquents. Subjects in each group judged cases of personal harms (i.e. requiring physical contact) as less permissible than impersonal harms, even though both types of harms led to utilitarian gains. Importantly, however, psychopaths' pattern of judgments on different dilemmas was the same as those of the other subjects. These results force a rejection of the strong hypothesis that emotional processes are causally necessary for judgments of moral dilemmas, suggesting instead that psychopaths understand the distinction between right and wrong, but do not care about such knowledge, or the consequences that ensue from their morally inappropriate behavior."}
{"_id": "9f22d8a6ccb8eeef5b3c792dd656ae406469a5ea", "title": "The role of emotions for moral judgments depends on the type of emotion and moral scenario.", "text": "Emotions seem to play a critical role in moral judgment. However, the way in which emotions exert their influence on moral judgments is still poorly understood. This study proposes a novel theoretical approach suggesting that emotions influence moral judgments based on their motivational dimension. We tested the effects of two types of induced emotions with equal valence but with different motivational implications (anger and disgust), and four types of moral scenarios (disgust-related, impersonal, personal, and beliefs) on moral judgments. We hypothesized and found that approach motivation associated with anger would make moral judgments more permissible, while disgust, associated with withdrawal motivation, would make them less permissible. Moreover, these effects varied as a function of the type of scenario: the induced emotions only affected moral judgments concerning impersonal and personal scenarios, while we observed no effects for the other scenarios. These findings suggest that emotions can play an important role in moral judgment, but that their specific effects depend upon the type of emotion induced. Furthermore, induced emotion effects were more prevalent for moral decisions in personal and impersonal scenarios, possibly because these require the performance of an action rather than making an abstract judgment. We conclude that the effects of induced emotions on moral judgments can be predicted by taking their motivational dimension into account. This finding has important implications for moral psychology, as it points toward a previously overlooked mechanism linking emotions to moral judgments."}
{"_id": "dc988db35a1a3558de7e782ad5c2ef84f9c98487", "title": "Gossip in Evolutionary Perspective", "text": "Conversation is a uniquely human phenomenon. Analyses of freely forming conversations indicate that approximately two thirds of conversation time is devoted to social topics, most of which can be given the generic label gossip. This article first explores the origins of gossip as a mechanism for bonding social groups, tracing these origins back to social grooming among primates. It then asks why social gossip in this sense should form so important a component of human interaction and presents evidence to suggest that, aside from servicing social networks, a key function may be related explicitly to controlling free riders. Finally, the author reviews briefly the role of social cognition in facilitating conversations of this kind."}
{"_id": "079607a10e055e02224a2cceb4820cda34dea4ee", "title": "Limbic abnormalities in affective processing by criminal psychopaths as revealed by functional magnetic resonance imaging", "text": "BACKGROUND\nPsychopathy is a complex personality disorder of unknown etiology. Central to the disorder are anomalies or difficulties in affective processing.\n\n\nMETHODS\nFunctional magnetic resonance imaging was used to elucidate the neurobiological correlates of these anomalies in criminal psychopaths during performance of an affective memory task.\n\n\nRESULTS\nCompared with criminal nonpsychopaths and noncriminal control participants, criminal psychopaths showed significantly less affect-related activity in the amygdala/hippocampal formation, parahippocampal gyrus, ventral striatum, and in the anterior and posterior cingulate gyri. Psychopathic criminals also showed evidence of overactivation in the bilateral fronto-temporal cortex for processing affective stimuli.\n\n\nCONCLUSIONS\nThese data suggest that the affective abnormalities so often observed in psychopathic offenders may be linked to deficient or weakened input from limbic structures."}
{"_id": "12f1e9093c1799806064b7899a7682a60cefc16b", "title": "Impairment of social and moral behavior related to early damage in human prefrontal cortex", "text": "The long-term consequences of early prefrontal cortex lesions occurring before 16 months were investigated in two adults. As is the case when such damage occurs in adulthood, the two early-onset patients had severely impaired social behavior despite normal basic cognitive abilities, and showed insensitivity to future consequences of decisions, defective autonomic responses to punishment contingencies and failure to respond to behavioral interventions. Unlike adult-onset patients, however, the two patients had defective social and moral reasoning, suggesting that the acquisition of complex social conventions and moral rules had been impaired. Thus early-onset prefrontal damage resulted in a syndrome resembling psychopathy."}
{"_id": "3c3bb40eba8fc1d0b885e23a8e3d54d60ae3aee6", "title": "Multimodal Classification for Analysing Social Media", "text": "Classification of social media data is an important approach in understanding user behavior on the Web. Although information on social media can be of different modalities such as texts, images, audio or videos, traditional approaches in classification usually leverage only one prominent modality. Techniques that are able to leverage multiple modalities are often complex and susceptible to the absence of some modalities. In this paper, we present simple models that combine information from different modalities to classify social media content and are able to handle the above problems with existing techniques. Our models combine information from different modalities using a pooling layer and an auxiliary learning task is used to learn a common feature space. We demonstrate the performance of our models and their robustness to the missing of some modalities in the emotion classification domain. Our approaches, although being simple, can not only achieve significantly higher accuracies than traditional fusion approaches but also have comparable results when only one modality is available."}
{"_id": "d1040d32bcb0aca7f8e26a45dc2bb84841cc41a8", "title": "A Case Study on Quality Attribute Measurement using MARF and GIPSY", "text": "This literature focuses on doing a comparative analysis between Modular Audio Recognition Framework (MARF) and the General Intentional Programming System (GIPSY) with the help of different software metrics. At first, we understand the general principles, architecture and working of MARF and GIPSY by looking at their frameworks and running them in the Eclipse environment. Then, we study some of the important metrics including a few state of the art metrics and rank them in terms of their usefulness and their influence on the different quality attributes of a software. The quality attributes are viewed and computed with the help of the Logiscope and McCabe IQ tools. These tools perform a comprehensive analysis on the case studies and generate a quality report at the factor level, criteria level and metrics level. In next step, we identify the worst code at each of these levels, extract the worst code and provide recommendations to improve the quality. We implement and test some of the metrics which are ranked as the most useful metrics with a set of test cases in JDeodorant. Finally, we perform an analysis on both MARF and GIPSY by doing a fuzzy code scan using MARFCAT to find the list of weak and vulnerable classes."}
{"_id": "169ee92caec357e470c6ccf9d1ddb9f53e3639ea", "title": "Pair-Wise Temporal Pooling Method for Rapid Training of the HTM Networks Used in Computer Vision Applications", "text": "In the paper, several modifications to the conventional learning algorithms of the Hierarchical Temporal Memory (HTM) \u2013 a biologically inspired largescale model of the neocortex by Numenta \u2013 have been proposed. Firstly, an alternative spatial pooling method has been introduced, which makes use of a random pattern generator exploiting the Metropolis-Hastings algorithm. The original inference algorithm by Numenta has been reformulated, in order to reduce a number of tunable parameters and to optimize its computational efficiency. The main contribution of the paper consists in the proposal of a novel temporal pooling method \u2013 the pair-wise explorer \u2013 which allows faster and more reliable training of the HTM networks using data without inherent temporal information (e.g., static images). While the conventional temporal pooler trains the HTM network on a finite segment of the smooth Brownian-like random walk across the training images, the 902 S. \u0160tolc, I. Bajla, K. Valent\u0301\u0131n, R. \u0160koviera proposed method performs training by means of the pairs of patterns randomly sampled (in a special manner) from a virtually infinite smooth random walk. We have conducted a set of experiments with the single-layer HTM network applied to the position, scale, and rotation-invariant recognition of geometric objects. The obtained results provide a clear evidence that the pair-wise method yields significantly faster convergence to the theoretical maximum of the classification accuracy with respect to both the length of the training sequence (defined by the maximum allowed number of updates of the time adjacency matrix \u2013 TAM) and the number of training patterns. The advantage of the proposed explorer manifested itself mostly in the lower range of TAM updates where it caused up to 10% relative accuracy improvement over the conventional method. Therefore we suggest to use the pair-wise explorer, instead of the smooth explorer, always when the HTM network is trained on a set of static images, especially when the exhaustive training is impossible due to the complexity of the given task."}
{"_id": "50bcbc85552291ce4a53f5ff835a905673461cff", "title": "Mapping interference resolution across task domains: A shared control process in left inferior frontal gyrus", "text": "Work in functional neuroimaging has mapped interference resolution processing onto left inferior frontal regions for both verbal working memory and a variety of semantic processing tasks. The proximity of the identified regions from these different tasks suggests the existence of a common, domain-general interference resolution mechanism. The current research specifically tests this idea in a within-subject design using fMRI to assess the activation associated with variable selection requirements in a semantic retrieval task (verb generation) and a verbal working memory task with a trial-specific proactive interference manipulation (recent-probes). High interference trials on both tasks were associated with activity in the midventrolateral region of the left inferior frontal gyrus, and the regions activated in each task strongly overlapped. The results indicate that an elemental component of executive control associated with interference resolution during retrieval from working memory and from semantic memory can be mapped to a common portion of the left inferior frontal gyrus."}
{"_id": "47a3c3b99a118129b6c5461648be58b15e440d1e", "title": "Creativity support for novice digital filmmaking", "text": "Machinima is a new form of creative digital filmmaking that leverages the real time graphics rendering of computer game engines. Because of the low barrier to entry, machinima has become a popular creative medium for hobbyists and novices while still retaining borrowed conventions from professional filmmaking. Can novice machinima creators benefit from creativity support tools? A preliminary study shows novices generally have difficulty adhering to cinematographic conventions. We identify and document four cinematic conventions novices typically violate. We report on a Wizard-of-Oz study showing a rule-based intelligent system that can reduce the frequency of errors that novices make by providing information about rule violations without prescribing solutions. We discuss the role of error reduction in creativity support tools."}
{"_id": "09e9c580d863cfba5fe22ca70a71dbbfefb11e38", "title": "System architecture directions for networked sensors", "text": "Technological progress in integrated, low-power, CMOS communication devices and sensors makes a rich design space of networked sensors viable. They can be deeply embedded in the physical world and spread throughout our environment like smart dust. The missing elements are an overall system architecture and a methodology for systematic advance. To this end, we identify key requirements, develop a small device that is representative of the class, design a tiny event-driven operating system, and show that it provides support for efficient modularity and concurrency-intensive operation. Our operating system fits in 178 bytes of memory, propagates events in the time it takes to copy 1.25 bytes of memory, context switches in the time it takes to copy 6 bytes of memory and supports two level scheduling. The analysis lays a groundwork for future architectural advances."}
{"_id": "a7412ecf07ff540f9ede6eb6831803e115a16206", "title": "Recent Development in Big Data Analytics for Business Operations and Risk Management", "text": "\u201cBig data\u201d is an emerging topic and has attracted the attention of many researchers and practitioners in industrial systems engineering and cybernetics. Big data analytics would definitely lead to valuable knowledge for many organizations. Business operations and risk management can be a beneficiary as there are many data collection channels in the related industrial systems (e.g., wireless sensor networks, Internet-based systems, etc.). Big data research, however, is still in its infancy. Its focus is rather unclear and related studies are not well amalgamated. This paper aims to present the challenges and opportunities of big data analytics in this unique application domain. Technological development and advances for industrial-based business systems, reliability and security of industrial systems, and their operational risk management are examined. Important areas for future research are also discussed and revealed."}
{"_id": "45f72b2812212f8b4e0b9fe185af022da992cd6f", "title": "Computational design of linkage-based characters", "text": "We present a design system for linkage-based characters, combining form and function in an aesthetically-pleasing manner. Linkage-based character design exhibits a mix of discrete and continuous problems, making for a highly unintuitive design space that is difficult to navigate without assistance. Our system significantly simplifies this task by allowing users to interactively browse different topology options, thus guiding the discrete set of choices that need to be made. A subsequent continuous optimization step improves motion quality and, crucially, safeguards against singularities. We demonstrate the flexibility of our method on a diverse set of character designs, and then realize our designs by physically fabricating prototypes."}
{"_id": "45fc709a2fb8cd3cc71462c65e3d5e1bcb23c444", "title": "Automated Summarization Evaluation with Basic Elements", "text": "As part of evaluating a summary automatically, it is usual to determine how much of the contents of one or more human-produced \u2018ideal\u2019 summaries it contains. Past automated methods such as ROUGE compare using fixed word ngrams, which are not ideal for a variety of reasons. In this paper we describe a framework in which summary evaluation measures can be instantiated and compared, and we implement a specific evaluation method using very small units of content, called Basic Elements, that address some of the shortcomings of ngrams. This method is tested on DUC 2003, 2004, and 2005 systems and produces very good correlations with human judgments."}
{"_id": "24903c58f330bff8709cdec34bb7780cde24bea9", "title": "Computational Intelligence, Networked Systems and Their Applications", "text": "This paper presents the metro constant illumination controller, which is designed to solve the current problem that the interior illumination in a moving metro fluctuates drastically with the changes of exterior environment. By detecting the real-time interior illumination values and adjusting the lights in a metro with PID controller based on BP neural network, the controller can keep the metro interior illumination values around the preset value. Simulations and actual test results show that the PID controller based on BP neural network has strong adaptability and robustness in a nonlinear system. It can both save energy and solve the problem of drastically fluctuating illumination in a moving metro, which cannot be achieved in conventional PID controllers."}
{"_id": "6d40f179abb4520ed15ed234cfe55114108fe7f2", "title": "Modelling of Mechanical and Mechatronic Systems MMaMS 2014 Pressure Distribution in Transtibial Prostheses Socket and the Stump Interface", "text": "The paper deals with the problem of a biomechanical solution for human locomotion after amputation of a lower limb. With an inappropriately designed transtibial prostheses socket, the interaction between the socket and the stump occurs, leading to increased friction and subsequent surface damage to the soft tissue. In individual types of transtibial prostheses, locations are defined which can be loaded and which cannot be loaded. On the basis of the study of the anterior side of the stump, three loadable and two non-loadable areas were monitored using the TACTILUS tactile pressure sensor (Sensor Products Inc., Madison, New Jersey, USA). Methods of measuring individual values were proposed for the purpose of sensing the pressure in the socket and the liner interface, the stump and the liner interface. Each method includes the methodological preparation of the patient, the preparation of the room, the preparation of the measuring system, and the subsequent method of data processing. In the submitted paper, 11 stumps were non-invasively monitored. The method of assessing the pressure exerted on the stumps was focused on the monitoring of the pressure distribution in the selected areas. The results obtained by the pressure measurements were statistically processed. In both cases of the pressure system placement, the p-value was higher than 0.05; therefore, we can state that the sets are equal. \u00a9 2014 The Authors. Published by Elsevier Ltd. Peer-review under responsibility of organizing committee of the Modelling of Mechanical and Mechatronic Systems MMaMS 2014."}
{"_id": "2ee2e8a704952051ed8a01386af7038944f664b5", "title": "DATS - Data Containers for Web Applications", "text": "Data containers enable users to control access to their data while untrusted applications compute on it. However, they require replicating an application inside each container - compromising functionality, programmability, and performance. We propose DATS - a system to run web applications that retains application usability and efficiency through a mix of hardware capability enhanced containers and the introduction of two new primitives modeled after the popular model-view-controller (MVC) pattern. (1) DATS introduces a templating language to create views that compose data across data containers. (2) DATS uses authenticated storage and confinement to enable an untrusted storage service, such as memcached and deduplication, to operate on plain-text data across containers. These two primitives act as robust declassifiers that allow DATS to enforce non-interference across containers, taking large applications out of the trusted computing base (TCB). We showcase eight different web applications including Gitlab and a Slack-like chat, significantly improve the worst-case overheads due to application replication, and demonstrate usable performance for common-case usage."}
{"_id": "09a68ed309ca96bce0a30383adc74fd160bf5266", "title": "Development of a wall-climbing robot using a tracked wheel mechanism", "text": "In this paper, a new concept of a wall-climbing robot able to climb a vertical plane is presented. A continuous locomotive motion with a high climbing speed of 15m/min is realized by adopting a series chain on two tracked wheels on which 24 suction pads are installed. While each tracked wheel rotates, the suction pads which attach to the vertical plane are activated in sequence by specially designed mechanical valves. The engineering analysis and detailed mechanism design of the tracked wheel, including mechanical valves and the overall features, are described in this paper. It is a self-contained robot in which a vacuum pump and a power supply are integrated and is controlled remotely. The climbing performance, using the proposed mechanism, is evaluated on a vertical steel plate. Finally, the procedures are presented for an optimization experiment using Taguchi methodology to maximize vacuum pressure which is a critical factor for suction force."}
{"_id": "852e0bb4efbd7906c4c1eb35a69fc2c0eb51f9d6", "title": "Information Fusion for Diabetic Retinopathy CAD in Digital Color Fundus Photographs", "text": "The purpose of computer-aided detection or diagnosis (CAD) technology has so far been to serve as a second reader. If, however, all relevant lesions in an image can be detected by CAD algorithms, use of CAD for automatic reading or prescreening may become feasible. This work addresses the question how to fuse information from multiple CAD algorithms, operating on multiple images that comprise an exam, to determine a likelihood that the exam is normal and would not require further inspection by human operators. We focus on retinal image screening for diabetic retinopathy, a common complication of diabetes. Current CAD systems are not designed to automatically evaluate complete exams consisting of multiple images for which several detection algorithm output sets are available. Information fusion will potentially play a crucial role in enabling the application of CAD technology to the automatic screening problem. Several different fusion methods are proposed and their effect on the performance of a complete comprehensive automatic diabetic retinopathy screening system is evaluated. Experiments show that the choice of fusion method can have a large impact on system performance. The complete system was evaluated on a set of 15 000 exams (60 000 images). The best performing fusion method obtained an area under the receiver operator characteristic curve of 0.881. This indicates that automated prescreening could be applied in diabetic retinopathy screening programs."}
{"_id": "49ef3e7b310793b276b94d55d3391a2129882d89", "title": "High speed locomotion for a quadrupedal microrobot", "text": "Research over the past several decades has elucidated some of the mechanisms behind high speed, highly efficient and robust locomotion in insects such as cockroaches. Roboticists have used this information to create biologically-inspired machines capable of running, jumping, and climbing robustly over a variety of terrains. To date, little work has been done to develop an at-scale insect-inspired robot capable of similar feats due to challenges in fabrication, actuation, and electronics integration for a centimeter-scale device. This paper addresses these challenges through the design, fabrication, and control of a 1.27g walking robot, the Harvard Ambulatory MicroRobot (HAMR). The current design is manufactured using a method inspired by pop-up books that enables fast and repeatable assembly of the miniature walking robot. Methods to drive HAMR at low and high speeds are presented, resulting in speeds up to 0.44m/s (10.1 body lengths per second) and the ability to maneuver and control the robot along desired trajectories."}
{"_id": "4decf38323c8311fbbe50cb309e242bee671418a", "title": "Topic Detection and Tracking using idf-Weighted Cosine Coefficient", "text": "The goal of TDT Topic Detection and Tracking is to develop aut omatic methods of identifying topically related stories wit hin a stream of news media. We describe approaches for both detection and tr cking based on the well-known idf -weighted cosine coefficient similarity metric. The surprising outcome of this research is th at we achieved very competitive results for tracking using a very simple method of feature selection, without word stemming and with ou a score normalization scheme. The detection task results wer e not as encouraging though we attribute this more to the clustering algorithm than the underlying similarity metric. 1. The Tracking Task The goal of the topic tracking task for TDT2 is to identify new s stories on a particular event defined by a small number ( Nt) of positive training examples and a greater number of negative examples . All stories in the news stream subsequent to the final positive ex ample are to be classified as on-topic if they pertain to the event or off-topic if they do not. Although the task is similar to IR routing and fi ltering tasks, the definition of event leads to at least one significan t difference. An event is defined as an occurrence at a given place and t ime covered by the news media. Stories are on-topic if they cover th event itself or any outcome (strictly-defined in [2]) of the e vent. By this definition, all stories prior to the occurrence are offt pic, which contrary to the IR tasks mentioned, theoretically provides for unlimited off-topic training material (assuming retrospective corpora are available). We expected to be able to take advantage of these unlimited negative examples but in our final implementation did so only to the extent that we used a retrospective corpus to improve t erm statistics of our database. 1.1. idf-Weighted Cosine Coefficient As the basis for our approach we used the idf -weighted cosine coefficient described in [1] often referred to as tf idf . Using this metric, the tracking task becomes two-fold. Firstly, choosing an op timal set of features to represent topics, i.e. feature selection. Th e approach must choose features from a single story as well as from multi ple stories (forNt > 1). Secondly, determining a threshold (potentially one per topic) which optimizes the miss and false alarm proba bilities for a particular cost function, effectively normalizing th e similarity scores across topics. The cosine coefficient is a document similarity metric which has been investigated extensively. Here documents (and querie s) ar represented as vectors in an n-dimensional space, where n is the number of unique terms in the database. The coefficients of th e vector for a given document are the term frequencies ( tf ) for that dimension. The resulting vectors are extremely sparse and t ypically high frequency words (mostly closed class) are ignored. The cosine of the angle between two vectors is an indication of vector si milarity and is equal to the dot-product of the vectors normalized by the product of the vector lengths. cos( ) = ~ A ~ B k ~ Akk ~ Bk tf idf (term frequency times inverse document frequency) weighting is an ad-hoc modification to the cosine coefficient calcul tion which weights words according to their usefulness in discriminating documents. Words that appear in few documents are more usefu l than words that appear in many documents. 
This is captured in the equation for the inverse document frequency of a word: idf(w) = log10 N df(w) Wheredf(w) is the number of documents in a collection which contain wordw andN is the total number of documents in the collection. For our implementation we weighted only the topic vector by idf and left the story vector under test unchanged. This allows u s to calculate and fix anidf -scaled topic vector immediately after training on the last positive example story for a topic. The resulting calculation for the similarity measure becomes: sim(a; b) = Pnw=1 tfa(w) tfb(w) idf(w) pPnw=1 tf2 a (w) pPnw=1 tf2 b (w) 1.2. UPENN System Attributes To facilitate testing, the stories were loaded into a simple document processing system. Once in the system, stories are proc ssed in chronological order testing all topics simultaneously w ith a single pass over the data 1 at a rate of approximately 6000 stories per minute on a Pentium 266 MHz machine. The system tokenizer delimits on white space and punctuation (and discards it), col lapses case, but provides no stemming. A list of 179 stop words consi sti g almost entirely of close classed words was also employed. In order to improve word statistics, particularly for the beginning of the test set, we prepended a retrospective corpus (the TDT Pilot Data [3]) of approximately 16 thousand stories. 1In accordance with the evaluation specification for this pro ject [2] no information is shared across topics. 1.3. Feature Selection Thechoice as well asnumber of features (words) used to represent a topic has a direct effect on the trade-off between miss and f lse alarm probabilities. We investigated four methods of produ cing lists of features sorted by their effectiveness in discriminatin g a topic. This then allowed us to easily vary the number of those featur s for the topic vectors 2. 1. Keep all features except those words belonging to the stop word list. 2. Relative to training stories, sort words by document coun t, keepn most frequent. This approach has the advantage of finding those words which are common across training stories, an d therefore are more general to the topic area, but has the disa dvantage of extending poorly from the Nt = 16 case to the Nt = 1 case. 3. For each story, sort by word count ( tf ), keepn most frequent. While this approach tends to ignore low count words which occur in multiple training documents, it generalizes well f rom theNt = 16 to theNt = 1 case. 4. As a variant on the previous method we tried adding to the initial n features using a simple greedy algorithm. Against a database containing all stories up to and including the Nt-th training story, we queried the database with the n f atures plus the next most frequent term. If the separation of on-topic an d off-topic stories increased, we kept the term, if not we igno red it and tested the next term in the list. We defined separation as the difference between the average on-topic scores and th e average of the 20 highest scoring off-topic documents. Of the feature selection methods we tried the forth one yield ed the best results across varying values of Nt, although only slightly better than the much simpler third method. Occam\u2019s Razor prompt ed us to omit this complication from the algorithm. The DET curves 3 in Figure 1. show the effect of varying the number of features (o btained from method 3) on the miss and false alarm probabilities. The upper right most curve results from choosing the single most fr equent feature. 
Downward to the left, in order are the curves for 5, 1 0, 50, 150 and 300 features. After examining similar plots from the pilot, training4, and development-test data sets, we set the number of features for our system to 50. It can be seen that there is limited benefit in adding features after this point. 1.4. Normalization / Threshold Selection With a method of feature selection in place, a threshold for t he similarity score must be determined above which stories will be d e med on-topic, and below which they will not. Since each topic is r epresented by its own unique vector it cannot be expected that the same threshold value will be optimal across all topics unless the scores are normalized. We tried two approaches for normalizing the topic similarity scores. For the first approach we calculated the similarity of a rando m sample of several hundred off-topic documents in order to estim a e an 2We did not employ feature selection on the story under test bu t used the text in entirety. 3See [5] for detailed description of DET curves. 4The first two month period of TDT2 data is called the training s et, not to be confused with training data. 1 2 5 10 20 40 60 80 90 .01 .02 .05 .1 .2 .5 1 2 5 10 20 40 60 80 90 M is s pr ob ab ili ty ( in % ) False Alarms probability (in %) random performance num_features=1 num_features=5 num_features=10 num_features=50 num_features=150 num_features=300 Figure 1: DET curve for varying number of features. (Nt=4, TD T2 evaluation data set, newswire and ASR transcripts) average off-topic score relative to the topic vector. The no rmalized score is then a function of the average on-topic 5 and off-topic scores and the off-topic standard deviation 6. The second approach looked at only thehighest scoringoff-topic stories returned from a query of the topic vector against a retrospective database with th e score normalized in a similar fashion to the first approach. Both attempts reduced the story-weighted miss probability by approximately 10 percent at low false alarm probability relat ive. However, this results was achieved at the expense of higher miss probability at higher false alarm probability, and a higher cost a t the operating point defined by the cost function for the task 7. Ctrack = Cmiss P (miss) Ptopic +Cfa P (fa) (1 Ptopic) where Cmiss = 1: (the cost of a miss) Cfa = 1: (the cost of a false alarm) Ptopic = 0:02: (thea priori probability of a story being on a given topic was chosen based on the TDT2 training topics and traini ng corpus.) Because of the less optimal trade-off between error probabi lities at the point defined by the cost function, we choose to ignore nor malization and look directly at cost as a function of a single thr es old value across all topics. We plotted tf idf score against story and topic-weighted cost for the training and development-test data sets. As our global threshold we averaged the scores at which story and topic-weighted cost were minimized. This is depicted in figu re 2. Figure 3 shows the same curves for the evaluation data set. Th e threshold resulting from the traini"}
{"_id": "2f03394aed0a9baabb7d30196ac291be4ec63ab3", "title": "15 Planar Antenna Technology for mm-Wave Automotive Radar , Sensing", "text": "Planar antennas are common components in sensing applications due to their low cost, low profile and simple integration with systems. They can be used commonly at frequencies as high as 77 GHz and above, for example in automotive sensing. Also, new ultra-wideband communication systems operating around 60 GHz will heavily rely on planar antennas due to their unique properties. One special advantage of planar elements is that they can easily form array structures combining very simple elements, like microstrip patches. Also phased and conformal arrays can be built. Due to all these characteristics, planar antennas are very good candidates for building front ends for mm-wave applications in radar, sensing or communications. For some applications, which are either established today or which will become commercial in a few years, general requirements can be given. In addition to automotive radar, this mainly applies to 60 GHz ultra-broadband wireless indoor communications. Microwave imaging at 94 GHz and above is still much under research, as well as other sensing applications in the millimeter-wave region above 100 GHz. Prominent frequency bands are in the 122 and 140 GHz range1. Typical antenna requirements are multi-beam or scanning capability, high gain up to 30..36 dBi and moderate to low sidelobes. In monopulse radar systems, the sidelobes may be included in the angle determination scheme, then the sidelobe level requirements are rather relaxed. Loss is generally an important issue. Comparing a planar approach to a commercially available dielectric lens, the planar antenna exhibits significantly higher losses, especially if the beamforming device is included. The antenna efficiency of 77 GHz planar array columns is roughly 50%. These losses are caused by dielectric losses and conductor losses. To attain reliable predictions of the losses in full-wave simulations, care has to be taken in the"}
{"_id": "0aa5055f20d0c5ba5f5bd92cd316f31c394a5612", "title": "Live-streaming mobile video: production as civic engagement", "text": "Live-streaming mobile video is an emerging medium. Few have measured how this new form of production is contributing to civic engagement or broadening the public sphere by circulating visual footage of community interest. Here I explore the overall trends in production of live-streaming mobile video from producers around the world, and focus more narrowly on the motivations and practices surrounding the production of civic content. Informing my study are mobile videos on Qik.com, a popular web service for live-streaming mobile video. I offer a quantitative content analysis of 1,000 videos, summarizing general trends in content production as well as analyzing the motivations behind the production of civic content, based on qualitative interviews with frequent producers. Results indicate that production is higher among those who self-identify as activists, journalists, community leaders or educators, suggesting this new medium can be best appropriated by those who are already civically engaged."}
{"_id": "0a808a17f5c86413bd552a324ee6ba180a12f46d", "title": "Improving Deep Visual Representation for Person Re-identification by Global and Local Image-language Association", "text": "Person re-identification is an important task that requires learning discriminative visual features for distinguishing different person identities. Diverse auxiliary information has been utilized to improve the visual feature learning. In this paper, we propose to exploit natural language description as additional training supervisions for effective visual features. Compared with other auxiliary information, language can describe a specific person from more compact and semantic visual aspects, thus is complementary to the pixel-level image data. Our method not only learns better global visual feature with the supervision of the overall description but also enforces semantic consistencies between local visual and linguistic features, which is achieved by building global and local image-language associations. The global image-language association is established according to the identity labels, while the local association is based upon the implicit correspondences between image regions and noun phrases. Extensive experiments demonstrate the effectiveness of employing language as training supervisions with the two association schemes. Our method achieves state-of-the-art performance without utilizing any auxiliary information during testing and shows better performance than other joint embedding methods for the image-language association."}
{"_id": "4ea3c8d07f6c16c14360e6db320cdf6c81258705", "title": "Design-Space Exploration in MDE: An Initial Pattern Catalogue", "text": "A designer often has to evaluate alternative designs during the development of a system. A multitude of Design-Space Exploration (DSE) techniques exist in the literature. Integration of these techniques into the modelling paradigm is needed when a model-driven engineering approach is used for designing systems. To a greater or lesser extent, the integration of those different DSE techniques share characteristics with each other. Inspired by software design patterns, we introduce an initial pattern catalogue to categorise the embedding of different DSE techniques in an MDE context. We demonstrate their use by a literature survey and discuss the consequences of each pattern."}
{"_id": "131209f1b90b388d96b9e168ff659ddac91c552d", "title": "Real-time Convolutional Networks for Depth-based Human Pose Estimation", "text": "We propose to combine recent Convolutional Neural Networks (CNN) models with depth imaging to obtain a reliable and fast multi-person pose estimation algorithm applicable to Human Robot Interaction (HRI) scenarios. Our hypothesis is that depth images contain less structures and are easier to process than RGB images while keeping the required information for human detection and pose inference, thus allowing the use of simpler networks for the task. Our contributions are threefold. (i) we propose a fast and efficient network based on residual blocks (called RPM) for body landmark localization from depth images; (ii) we created a public dataset DIH comprising more than 170k synthetic images of human bodies with various shapes and viewpoints as well as real (annotated) data for evaluation; (iii) we show that our model trained on synthetic data from scratch can perform well on real data, obtaining similar results to larger models initialized with pre-trained networks. It thus provides a good trade-off between performance and computation. Experiments on real data demonstrate the validity of our approach."}
{"_id": "14316b885f65d2197ce8c6d4ab3ee61fdab052b8", "title": "IT doesn't matter", "text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles."}
{"_id": "3068d1d6275933e7f4d332a2f2cf52543a4f0615", "title": "Elliptic Curve Multiset Hash", "text": "A homomorphic, or incremental, multiset hash function, associates a hash value to arbitrary collections of objects (with possible repetitions) in such a way that the hash of the union of two collections is easy to compute from the hashes of the two collections themselves: it is simply their sum under a suitable group operation. In particular, hash values of large collections can be computed incrementally and/or in parallel. Homomorphic hashing is thus a very useful primitive with applications ranging from database integrity verification to streaming set/multiset comparison and network coding. Unfortunately, constructions of homomorphic hash functions in the literature are hampered by two main drawbacks: they tend to be much longer than usual hash functions at the same security level (e.g. to achieve a collision resistance of 2, they are several thousand bits long, as opposed to 256 bits for usual hash functions), and they are also quite slow. In this paper, we introduce the Elliptic Curve Multiset Hash (ECMH), which combines a usual bit string-valued hash function like BLAKE2 with an efficient encoding into binary elliptic curves to overcome both difficulties. On the one hand, the size of ECMH digests is essentially optimal: 2m-bit hash values provideO(2) collision resistance. On the other hand, we demonstrate a highly-efficient software implementation of ECMH, which our thorough empirical evaluation shows to be capable of processing over 3 million set elements per second on a 4 GHz Intel Haswell machine at the 128-bit security level\u2014 many times faster than previous practical methods. While incremental hashing based on elliptic curves has been considered previously [1], the proposed method was less efficient, susceptible to timing attacks, and potentially patent-encumbered [2], and no practical implementation was demonstrated."}
{"_id": "58be188245e2727a8fe7a8a063c353c06c67035b", "title": "Voronoi treemaps for the visualization of software metrics", "text": "In this paper we present a hierarchy-based visualization approach for software metrics using Treemaps. Contrary to existing rectangle-based Treemap layout algorithms, we introduce layouts based on arbitrary polygons that are advantageous with respect to the aspect ratio between width and height of the objects and the identification of boundaries between and within the hierarchy levels in the Treemap. The layouts are computed by the iterative relaxation of Voronoi tessellations. Additionally, we describe techniques that allow the user to investigate software metric data of complex systems by utilizing transparencies in combination with interactive zooming."}
{"_id": "6bd3a92d1d6ee1f18024e322a159f356de5e44dc", "title": "Chemoinformatics and Drug Discovery", "text": "This article reviews current achievements in the field of chemoinformatics and their impact on modern drug discovery processes. The main data mining approaches used in cheminformatics, such as descriptor computations, structural similarity matrices, and classification algorithms, are outlined. The applications of cheminformatics in drug discovery, such as compound selection, virtual library generation, virtual high throughput screening, HTS data mining, and in silico ADMET are discussed. At the conclusion, future directions of chemoinformatics are suggested."}
{"_id": "8eced091ab3e3fd0b1747896cff082711c510d4a", "title": "Hierarchical Grouping to Optimize an Objective Function", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "28fddd885dea95d9d6c44175a2814eb60d4905ce", "title": "A Hybrid Reverse Engineering Approach Combining Metrics and Program Visualization", "text": "Surprising as it may seem, many of the early adopters of the object-oriented paradigm already face a number of problems typically encountered in large-scale legacy systems. Consequently, reverse engineering techniques are relevant in an object-oriented context as well. This paper investigates a hybrid approach, combining the immediate appeal of visualisations with the scalability of metrics. We validate such a hybrid approach by showing how CodeCrawler \u2014the experimental platform we built\u2014 allowed us to understand the program structure of, and identify potential design anomalies in a public domain software system."}
{"_id": "76ae06682ee31321774856629b625116355dcd73", "title": "Centroidal Voronoi Tessellations: Applications and Algorithms", "text": "A centroidal Voronoi tessellation is a Voronoi tessellation whose generating points are the centroids (centers of mass) of the corresponding Voronoi regions. We give some applications of such tessellations to problems in image compression, quadrature, finite difference methods, distribution of resources, cellular biology, statistics, and the territorial behavior of animals. We discuss methods for computing these tessellations, provide some analyses concerning both the tessellations and the methods for their determination, and, finally, present the results of some numerical experiments."}
{"_id": "61bc659a5c0c99406580b2b60a6ec90ffbd9dcfe", "title": "Graph-based forensic investigation of Bitcoin transactions", "text": "This thesis illustrates forensic research work on Bitcoin, an innovative Internet based global transaction system that attracts ascending popularity during the recent few years. As an open, public and scalable distributed payment system, Bitcoin brings forward significant economic and technological impact to our world. Meanwhile, a new notion of virtual currency, \u201cBitcoin\u201d comes into existence such that Bitcoin currency can be \u201cmined\u201d from all over world complying with specific algorithms. Mined bit \u201ccoins\u201d has practical monetary values that turn the Bitcoin system into a digital currency circulation system. Due to Bitcoin\u2019s decentralized semantics, Bitcoin transaction and currency are not subject to control and censorship from any single authority. Therefore, Bitcoin brings out various security concerns about its application as a long-term reliable system. The research in the thesis focuses on forensic study on Bitcoin. It covers experimental study on the Bitcoin network as a peer-to-peer system and a graph-based forensic approach against Bitcoin\u2019s transaction data. Major contributions include network data evaluation and transaction history analysis. In case of forensic investigation is needed against criminal incidents such as fraud, false transactions and money theft, which are commonly seen in commonly used digital payment systems, the research provides a guidance of efficient information collection and framework of evidence data processing and extraction."}
{"_id": "d67e6051f73fcac945a34932ea7828f558a5264b", "title": "Antecedents and consequences of customer-company identification: expanding the role of relationship marketing.", "text": "This article presents an empirical test of organizational identification in the context of customer-company (C-C) relationships. It investigates whether customers identify with companies and what the antecedents and consequences of such identification are. The model posits that perceived company characteristics, construed external image, and the perception of the company's boundary-spanning agent lead to C-C identification. In turn, such identification is expected to impact both in-role behavior (i.e., product utilization) as well as extra-role behavior (i.e., citizenship). The model was tested in a consultative selling context of pharmaceutical sales reps calling on physicians. Results from the empirical test indicated that customers do indeed identify with organizations and that C-C identification positively impacts both product utilization behavior and extra-role behavior even when the effect of brand perception is accounted for. Second, the study found that the organization's characteristics as well as the salesperson's characteristics contributed to the development of C-C identification."}
{"_id": "dedab73a8b0a572e4bf04f22da88f14157e5bc6c", "title": "WAVES: A Wearable Asymmetric Vibration Excitation System for Presenting Three-Dimensional Translation and Rotation Cues", "text": "WAVES, a Wearable Asymmetric Vibration Excitation System, is a novel wearable haptic device for presenting three dimensions of translation and rotation guidance cues. In contrast to traditional vibration feedback, which usually requires that users learn to interpret a binary cue, asymmetric vibrations have been shown to induce a pulling sensation in a desired direction. When attached to the fingers, a single voicecoil actuator presents a translation guidance cue and a pair of voicecoil actuators presents a rotation guidance cue. The directionality of mechanoreceptors in the skin led to our choice of the location and orientation of the actuators in order to elicit very strong sensations in certain directions. For example, users distinguished a \"left\" cue versus a \"right\" cue 94.5% of the time. When presented with one of six possible direction cues, users on average correctly identified the direction of translation cues 86.1% of the time and rotation cues 69.0% of the time."}
{"_id": "fbcd758ecc083037cd035c8ed0c26798ce62f15e", "title": "Automated detection of vulnerabilities in privileged programs by execution monitoring", "text": "We present a method for detecting exploitations of vulnerabilities in privileged programs by monitoring their execution using audit trials, where the monitoring is with respect to specifications of the security-relevant behavior of the programs. Our work is motivated by the intrusion detection paradigm, but is a n attempt to avoid a d hoc approaches to codifying misuse behavior. Our approach as based on the observation that although privileged programs can be exploited (due to errors) to cause security compromise in systems because of the privileges accorded to them, the intended behavior of privileged programs is, of course, limited a n d benign. The key, then i s to specify the intended behavior (Le., the program policy) and to detect any action by privileged program that is outside the intended behavior a n d that imperils security. We describe a program policy specification language, which is based on simple predicate logic and regular expressions. I n addition, we present specifications of privileged programs in Unix, and Q prototype execution monitor for analyzing audit trails with respect to these specifications. The program policies are surprisingly concise and clear, a n d an addition, capable of detecting exploitations of known vulnerabilities in these programs. Although our work has been motivated by the known vulnerabilities in Unix, we believe that by tightly restricting the behavior of all privileged programs, exploitations of unknown vulnerabilities can be detected. A s Q check on the specifications, work is in progress on verifying them with respect to a n abstract security policy. *This work is funded in part by the National Security Agency University Research Program under Contract No. DOD-MDA904-93-C4083 and by ARPA under Contract No. USNN00014-94-1-0065."}
{"_id": "c062e4612574bf54bd8cf8adcee7e9ecb8f98e91", "title": "A review and outlook for a 'Building Information Model' (BIM): A multi-standpoint framework for technological development", "text": "1474-0346/$ see front matter 2010 Elsevier Ltd. A doi:10.1016/j.aei.2010.06.003 * Tel.: +386 1 4768 521; fax: +386 1 4250 681. E-mail address: tomo.cerovsek@fgg.uni-lj.si This study provides a review of important issues for \u2018Building Information Modelling\u2019 (BIM) tools and standards and comprehensive recommendations for their advancement and development that may improve BIM technologies and provide a basis for inter-operability, integration, model-based communication, and collaboration in building projects. Based on a critical review of Building Product Modelling, including the development of standards for exchange and the features of over 150 AEC/O (Architecture, Engineering, Construction, and Operation) tools and digital models, a methodological framework is proposed for improvements to both BIM tools and schemata. The features relevant to the framework were studied using a conceptual process model and a \u2018BIM System-of-Systems\u2019 (BIM-SoS) model. The development, implementation, and use of the BIM Schema are analysed from the standpoint of standardisation. The results embrace the requirements for a BIM research methodology, with an example of methods and procedures, an R&D review with critique, and a multi-standpoint framework for developments with concrete recommendations, supported by BIM metrics, upon which the progress of tools, models, and standards may be measured, evaluated, streamlined, and judged. It is also proposed that any BIM Schema will never be \u2018completed\u2019 but should be developed as evolutionary ontology by \u2018segmented standpoint models\u2019 to better account for evolving tools and AEC/O practices. 2010 Elsevier Ltd. All rights reserved."}
{"_id": "cb2111cb362f566be61c75ada38af53ecf95a1d3", "title": "On the 1.1 Edge-Coloring of Multigraphs", "text": ""}
{"_id": "64536eabc9a1e2a3a63ba454591991e47a07e354", "title": "Privacy protection issues in social networking sites", "text": "Social Networking Sites (SNS) have become very popular during the past few years, as they allow users to both express their individuality and meet people with similar interests. Nonetheless, there are also many potential threats to privacy associated with these SNS such as identity theft and disclosure of sensitive information. However, many users still are not aware of these threats and the privacy settings provided by SNS are not flexible enough to protect user data. In addition, users do not have any control over what others reveal about them. As such, we conduct a preliminary study which examines the privacy protection issues on Social Networking Sites (SNS) such as MySpace, Facebook and LinkedIn. Based on this study, we identify three privacy problems in SNS and propose a Privacy Framework as a foundation to cope with these problems."}
{"_id": "719be07e598b94277098f593409b5778a48ab60c", "title": "Hesitant Fuzzy Sets: State of the Art and Future Directions", "text": "The necessity of dealing with uncertainty in real wold problems has been a long-term research challenge that has originated different methodologies and theories. Fuzzy sets along with their extensions such as, type-2 fuzzy sets, interval valued fuzzy sets, Atanassov\u2019s intuitionistic fuzzy sets, etc., have provided a wide range of tools able to deal with uncertainty in different type of problems. Recently, a new extension of fuzzy sets so-called hesitant fuzzy sets has been introduced to deal with hesitant situations which were not well managed by the previous tools. Hesitant fuzzy sets have attracted very quickly the attention of many researchers that have proposed diverse extensions, several type of operators to compute with such type of information, and eventually some applications have been developed. Because of such a growth, this paper presents an overview on hesitant fuzzy sets with the aim of providing a clear perspective on the different concepts, tools and trends related to this extension of fuzzy sets."}
{"_id": "583c00f75c519be40aba0fb791806a72fd84f72a", "title": "Violence Detection in Movies", "text": "As violence in movies has harmful influence on children, in this paper, we propose an algorithm to detect violent scene in movies. Under our definition of violence, the task of violent scene detection is decomposed into action scene detection and bloody frame detection. While previous approaches addressed on shot level of video structure only, our approach works on more semantic-complete scene structure of video. The input video (digital movie) is first segmented into several scenes. Based on the filmmaking characteristics of action scene, some features of the scene are extracted to feed into the support vector machine for classification. Finally, the face, blood and motion information are integrated to determine whether the action scene has violent content. Experimental results show that the proposed approach works reasonably well in detecting most of the violent scenes. Compared with related work, our approach is computationally simple yet effective."}
{"_id": "983e2b19e63f57bbec4287a953e956bc22efb814", "title": "A comparative study of metamodeling methods for multiobjective crashworthiness optimization", "text": "The response surface methodology (RSM), which typically uses quadratic polynomials, is predominantly used for metamodeling in crashworthiness optimization because of the high computational cost of vehicle crash simulations. Research shows, however, that RSM may not be suitable for modeling highly nonlinear responses that can often be found in impact related problems, especially when using limited quantity of response samples. The radial basis functions (RBF) have been shown to be promising for highly nonlinear problems, but no application to crashworthiness problems has been found in the literature. In this study, metamodels by RSM and RBF are used for multiobjective optimization of a vehicle body in frontal collision, with validations by finite element simulations using the full-scale vehicle model. The results show that RSM is able to produce good approximation models for energy absorption, and the model appropriateness can be well predicted by ANOVA. However, in the case of peak acceleration, RBF is found to generate better models than RSM based on the same number of response samples, with the multiquadric function identified to be the most stable RBF. Although RBF models are computationally more expensive, the optimization results of RBF models are found to be more accurate. 2005 Elsevier Ltd. All rights reserved."}
{"_id": "e5e9d64984b28f24a7423ada022d843e341b4351", "title": "Accuracy of Ultrasound-Guided Genicular Nerve Block: A Cadaveric Study.", "text": "BACKGROUND\nGenicular nerve block has recently emerged as a novel alternative treatment in chronic knee pain. The needle placement for genicular nerve injection is made under fluoroscopic guidance with reference to bony landmarks.\n\n\nOBJECTIVE\nTo investigate the anatomic landmarks for medial genicular nerve branches and to determine the accuracy of ultrasound-guided genicular nerve block in a cadaveric model.\n\n\nSTUDY DESIGN\nCadaveric accuracy study.\n\n\nSETTING\nUniversity hospital anatomy laboratory.\n\n\nMETHODS\nTen cadaveric knee specimens without surgery or major procedures were used in the study. The anatomic location of the superior medial genicular nerve (SMGN) and the inferior medial genicular nerve (IMGN) was examined using 4 knee dissections. The determined anatomical sites of the genicular nerves in the remaining 6 knee specimens were injected with 0.5 mL red ink under ultrasound guidance. The knee specimens were subsequently dissected to assess for accuracy. If the nerve was dyed with red ink, it was considered accurate placement. All other locations were considered inaccurate.\n\n\nRESULTS\nThe course of the SMGN is that it curves around the femur shaft and passes between the adductor magnus tendon and the femoral medial epicondyle, then descends approximately one cm anterior to the adductor tubercle. The IMGN is situated horizontally around the tibial medial epicondyle and passes beneath the medial collateral ligament at the midpoint between the tibial medial epicondyle and the tibial insertion of the medial collateral ligament. The adductor tubercle for the SMGN and the medial collateral ligament for the IMGN were determined as anatomic landmarks for ultrasound. The bony cortex one cm anterior to the peak of the adductor tubercle and the bony cortex at the midpoint between the peak of the tibial medial epicondyle and the initial fibers inserting on the tibia of the medial collateral ligament were the target points for the injections of SMGN and IMGN, respectively. In the cadaver dissections both genicular nerves were seen to be dyed with red ink in all the injections of the 6 knees.\n\n\nLIMITATIONS\nThe small number of cadavers might have led to some anatomic variations of genicular nerves being overlooked.\n\n\nCONCLUSIONS\nThe result of this cadaveric study suggests that ultrasound-guided medial genicular nerve branch block can be performed accurately using the above-stated anatomic landmarks."}
{"_id": "a1f87939a171de1bdd1a99a8a16eb11346285bc7", "title": "#Proana: Pro-Eating Disorder Socialization on Twitter.", "text": "PURPOSE\nPro-eating disorder (ED) online movements support engagement with ED lifestyles and are associated with negative health consequences for adolescents with EDs. Twitter is a popular social media site among adolescents that provides a unique setting for Pro-ED content to be publicly exchanged. The purpose of this study was to investigate Pro-ED Twitter profiles' references to EDs and how their social connections (followers) reference EDs.\n\n\nMETHODS\nA purposeful sample of 45 Pro-ED profiles was selected from Twitter. Profile information, all tweets, and a random sample of 100 of their followers' profile information were collected for content analysis using the Twitter Application Programming Interface. A codebook based on ED screening guidelines was applied to evaluate ED references. For each Pro-ED profile, proportion of tweets with ED references and proportion of followers with ED references in their own profile were evaluated.\n\n\nRESULTS\nIn total, our 45 Pro-ED profiles generated 4,245 tweets for analysis. A median of 36.4% of profiles' tweets contained ED references. Pro-ED profiles had a median of 173 followers, and a median of 44.5% of followers had ED references. Pro-ED profiles with more tweets with ED references also tended to have more followers with ED references (\u03b2\u00a0= .37, p < .01).\n\n\nCONCLUSIONS\nFindings suggest that profiles which self-identify as Pro-ED express disordered eating patterns through tweets and have an audience of followers, many of whom also reference ED in their own profiles. ED socialization on Twitter might provide social support, but in the Pro-ED context this activity might also reinforce an ED identity."}
{"_id": "88f75059a0688252a58c871c5a4c85f97ee01607", "title": "Phased array systems and technologies in SELEX-Sistemi Integrati: State of art and new challenges", "text": "This paper is a follow-on of [1] and [2], presented at 1996 and 2003 IEEE Intl. Symposium on Phased Array Systems and Technology, respectively. After a brief recall of 3D Long Range phased array concepts, already presented in [2], the focus is on the G-band multifunction radar systems and on the technological capabilities that SELEX-Sistemi Integrati is developing by taking advantage of its Research and Development facilities. At the end, future trends about new challenging applications and technological issues are presented and discussed."}
{"_id": "908b1f4e5fe1145cba6555db583e67927228e05d", "title": "Morpho-syntactic Lexical Generalization for CCG Semantic Parsing", "text": "In this paper, we demonstrate that significant performance gains can be achieved in CCG semantic parsing by introducing a linguistically motivated grammar induction scheme. We present a new morpho-syntactic factored lexicon that models systematic variations in morphology, syntax, and semantics across word classes. The grammar uses domain-independent facts about the English language to restrict the number of incorrect parses that must be considered, thereby enabling effective learning from less data. Experiments in benchmark domains match previous models with one quarter of the data and provide new state-of-the-art results with all available data, including up to 45% relative test-error reduction."}
{"_id": "7d73761ac4440279b09145e31db3740abfb7f7ad", "title": "Factors affecting the Choice of High Tech Engineering majors for University women and Men in Bangladesh", "text": "In contrast to western countries which are experiencing declines in female enrollment in high tech engineering (HTE), Bangladesh has experienced a continuous increase in female enrollment in engineering. In order to better understand the factors driving the choice of HTE as a major, survey data was collected from 590 male and female students in HTE majors in Bangladesh. Hypotheses related to gender differences in the influence of various factors on major choice were examined using t-tests. Results revealed that, while several factors had a similar impact for women and men, gender differences in selfefficacy and socio-economic status persist and may represent barriers to women\u2019s participation."}
{"_id": "2d8f527d1a96b0dae209daa6a241cf3255a6ec0d", "title": "Learning long-range vision for autonomous off-road driving", "text": "Most vision-based approaches to mobile robotics suffer from the limitations imposed by stereo obstacle detection, which is short-range and prone to failure. We present a self-supervised learning process for long-range vision that is able to accurately classify complex terrain at distances up to the horizon, thus allowing superior strategic planning. The success of the learning process is due to the self-supervised training data that is generated on every frame: robust, visually consistent labels from a stereo module, normalized wide-context input windows, and a discriminative and concise feature representation. A deep hierarchical network is trained to extract informative and meaningful features from an input image, and the features are used to train a realtime classifier to predict traversability. The trained classifier sees obstacles and paths from 5 to over 100 meters, far beyond the maximum stereo range of 12 meters, and adapts very quickly to new environments. The process was developed and tested on the LAGR mobile robot. Results from a ground truth dataset are given as well as field test results."}
{"_id": "87ffea85fbf52e22e4808e1fcc9e40ead4ff7738", "title": "Hybrid encryption/decryption technique using new public key and symmetric key algorithm", "text": "This research study proposes Hybrid Encryption System using new public key algorithm and private key algorithm. A hybrid cryptosystem is one which combines the convenience of a public-key cryptosystem with the efficiency of a symmetrickey cryptosystem. Here, we propose a provably two way secured data encryption system, which addresses the concerns of user\u2019s privacy, authentication and accuracy. This system has two different encryption algorithms have been used both in the Encryption and decryption sequence. One is public key cryptography based on linear block cipher another one is private key cryptography based on simple symmetric algorithm. This cryptography algorithm provides more security as well as authentication comparing to other existing hybrid algorithm."}
{"_id": "22e62e82528ab375ef9b1194d81d331ab6ee1548", "title": "A One-Layer Recurrent Neural Network with a Discontinuous Activation Function for Linear Programming", "text": "A one-layer recurrent neural network with a discontinuous activation function is proposed for linear programming. The number of neurons in the neural network is equal to that of decision variables in the linear programming problem. It is proven that the neural network with a sufficiently high gain is globally convergent to the optimal solution. Its application to linear assignment is discussed to demonstrate the utility of the neural network. Several simulation examples are given to show the effectiveness and characteristics of the neural network."}
{"_id": "7c28f4339db184d019b0d8b6a187e2d978c5112c", "title": "Works for me! characterizing non-reproducible bug reports", "text": "Bug repository systems have become an integral component of software development activities. Ideally, each bug report should help developers to find and fix a software fault. However, there is a subset of reported bugs that is not (easily) reproducible, on which developers spend considerable amounts of time and effort. We present an empirical analysis of non-reproducible bug reports to characterize their rate, nature, and root causes. We mine one industrial and five open-source bug repositories, resulting in 32K non-reproducible bug reports. We (1) compare properties of non-reproducible reports with their counterparts such as active time and number of authors, (2) investigate their life-cycle patterns, and (3) examine 120 Fixed non-reproducible reports. In addition, we qualitatively classify a set of randomly selected non-reproducible bug reports (1,643) into six common categories. Our results show that, on average, non-reproducible bug reports pertain to 17% of all bug reports, remain active three months longer than their counterparts, can be mainly (45%) classified as \"Interbug Dependencies'', and 66% of Fixed non-reproducible reports were indeed reproduced and fixed."}
{"_id": "df8267ba00e704ca216ad65c3638197e2026ec20", "title": "Anatomic Study of the Retaining Ligaments of the Face and Applications for Facial Rejuvenation", "text": "Level of Evidence V This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of"}
{"_id": "324e1673c4dc635a3b36996dcd6d600500f96ba0", "title": "A stereo display prototype with multiple focal distances", "text": "Typical stereo displays provide incorrect focus cues because the light comes from a single surface. We describe a prototype stereo display comprising two independent fixed-viewpoint volumetric displays. Like autostereoscopic volumetric displays, fixed-viewpoint volumetric displays generate near-correct focus cues without tracking eye position, because light comes from sources at the correct focal distances. (In our prototype, from three image planes at different physical distances.) Unlike autostereoscopic volumetric displays, however, fixed-viewpoint volumetric displays retain the qualities of modern projective graphics: view-dependent lighting effects such as occlusion, specularity, and reflection are correctly depicted; modern graphics processor and 2-D display technology can be utilized; and realistic fields of view and depths of field can be implemented. While not a practical solution for general-purpose viewing, our prototype display is a proof of concept and a platform for ongoing vision research. The design, implementation, and verification of this stereo display are described, including a novel technique of filtering along visual lines using 1-D texture mapping."}
{"_id": "266f21464ac5641136cf5da43bd10106944c1cc3", "title": "UnTran: Recognizing Unseen Activities with Unlabeled Data Using Transfer Learning", "text": "The success and impact of activity recognition algorithms largely depends on the availability of the labeled training samples and adaptability of activity recognition models across various domains. In a new environment, the pre-trained activity recognition models face challenges in presence of sensing bias- ness, device heterogeneities, and inherent variabilities in human behaviors and activities. Activity Recognition (AR) system built in one environment does not scale well in another environment, if it has to learn new activities and the annotated activity samples are scarce. Indeed building a new activity recognition model and training the model with large annotated samples often help overcome this challenging problem. However, collecting annotated samples is cost-sensitive and learning activity model at wild is computationally expensive. In this work, we propose an activity recognition framework, UnTran that utilizes source domains' pre-trained autoencoder enabled activity model that transfers two layers of this network to generate a common feature space for both source and target domain activities. We postulate a hybrid AR framework that helps fuse the decisions from a trained model in source domain and two activity models (raw and deep-feature based activity model) in target domain reducing the demand of annotated activity samples to help recognize unseen activities. We evaluated our framework with three real-world data traces consisting of 41 users and 26 activities in total. Our proposed UnTran AR framework achieves \u2248 75% F1 score in recognizing unseen new activities using only 10% labeled activity data in the target domain. UnTran attains \u2248 98% F1 score while recognizing seen activities in presence of only 2-3% of labeled activity samples."}
{"_id": "29b8c871ab25e5736e2542aad45a50d0d341ebb0", "title": "Comparison of Fixed and Variable Pitch Actuators for Agile Quadrotors", "text": "This paper presents the design, analysis and experimental testing of a variablepitch quadrotor. A custom in-lab built quadrotor with on-board attitude stabilization is developed and tested. An analysis of the dynamic differences in thrust output between a fixed-pitch and variable-pitch propeller is given and validated with simulation and experimental results. It is shown that variable-pitch actuation has significant advantages over the conventional fixed-pitch configuration, including increased thrust rate of change, decreased control saturation, and the ability to quickly and efficiently reverse thrust. These advantages result in improved quadrotor tracking of linear and angular acceleration command inputs in both simulation and hardware testing. The benefits should enable more aggressive and aerobatic flying with the variable-pitch quadrotor than with standard fixed-pitch actuation, while retaining much of the mechanical simplicity and robustness of the fixed-pitch quadrotor."}
{"_id": "8ac4024505ce31f47553f1c3b5ff6f4ffd8bd1bf", "title": "From the Test Benches to the First Prototype of the muFly Micro Helicopter", "text": "The goal of the European project muFly is to build a fully autonomous micro helicopter, which is comparable to a small bird in size and mass. The rigorous size and mass constraints infer various problems related to energy efficiency, flight stability and overall system design. In this research, aerodynamics and flight dynamics are investigated experimentally to gather information for the design of the helicopter\u2019s propulsion group and steering system. Several test benches are designed and built for these investigations. A coaxial rotor test bench is used to measure the thrust and drag torque of different rotor blade designs. The effects of cyclic pitching of the swash plate and the passive stabilizer bar are studied on a test bench measuring rotor forces and moments with a 6\u2013axis force sensor. The gathered knowledge is used to design a first prototype of the muFly helicopter. The prototype is described in terms of rotor configuration, structure, actuator and sensor selection according to the project demands, and a first version of the helicopter is shown. As a safety measure for the flight tests and to analyze the helicopter dynamics, a 6DoF vehicle test bench for tethered helicopter flight is used."}
{"_id": "0bfbdafdfbcc268860fe54ae4d8f08d487bcc762", "title": "An Application of Reinforcement Learning to Aerobatic Helicopter Flight", "text": "Autonomous helicopter flight is widely regarded to be a highly challenging control problem. This paper presents the first successful autonomous completion on a real RC helicopter of the following four aerobatic maneuvers: forward flip and sideways roll at low speed, tail-in funnel, and nose-in funnel. Our experimental results significantly extend the state of the art in autonomous helicopter flight. We used the following approach: First we had a pilot fly the helicopter to help us find a helicopter dynamics model and a reward (cost) function. Then we used a reinforcement learning (optimal control) algorithm to find a controller that is optimized for the resulting model and reward function. More specifically, we used differential dynamic programming (DDP), an extension of the linear quadratic regulator (LQR)."}
{"_id": "20b0f0268bc11c55389816223d712d85203e2936", "title": "The GRASP Multiple Micro-UAV Testbed", "text": "In the last five years, advances in materials, electronics, sensors, and batteries have fueled a growth in the development of microunmanned aerial vehicles (MAVs) that are between 0.1 and 0.5 m in length and 0.1-0.5 kg in mass [1]. A few groups have built and analyzed MAVs in the 10-cm range [2], [3]. One of the smallest MAV is the Picoftyer with a 60-mmpropellor diameter and a mass of 3.3 g [4]. Platforms in the 50-cm range are more prevalent with several groups having built and flown systems of this size [5]-[7]. In fact, there are severalcommercially available radiocontrolled (PvC) helicopters and research-grade helicopters in this size range [8]."}
{"_id": "21daea2e42b66309b5af932daf3ff7b0bc6badaf", "title": "Design , Analysis and Performance of a Rotary Wing MAV", "text": "An initial design concept for a micro-coaxial rotorcraft using custom manufacturing techniques and commercial off-the-shelf components is discussed in this paper. Issues associated with the feasibility of achieving hover and fully functional flight control at small scale for a coaxial rotor configuration are addressed. Results from this initial feasibility study suggest that it is possible to develop a small scale coaxial micro rotorcraft weighing approximately 100 grams, and that available moments are appropriate for roll, yaw and lateral control. A prototype vehicle was built and its rotors were tested in a custom hover stand used to measure Thrust and power. The radio controlled vehicle was flown untethered with its own power source and exhibited good flight stability and control dynamics. The best achievable rotor performance was measured to be 42%."}
{"_id": "b117cce77331f74c5c716c950d7b4fee4f34f8b7", "title": "Hybrid sampling Bayesian Occupancy Filter", "text": "Modeling and monitoring dynamic environments is a complex task but is crucial in the field of intelligent vehicle. A traditional way of addressing these issues is the modeling of moving objects, through Detection And Tracking of Moving Objects (DATMO) methods. An alternative to a classic object model framework is the occupancy grid filtering domain. Instead of segmenting the scene into objects and track them, the environment is represented as a regular grid of occupancy, in which each cell is tracked at a sub-object level. The Bayesian Occupancy Filter [1] is a generic occupancy grid framework which predicts the spread of spatial occupancy by estimating cell velocity distributions. However its velocity model, corresponding to a transition histogram per cell, leads to huge data management which in practice makes it hardly compatible to severe computational and hardware constraints, like in many embedded systems. In this paper, we present a new representation for the BOF, describing the environment through a mix of static and dynamic occupancy. This differentiation enables the use of a model adapted to the considered nature: static occupancy is described in a classic occupancy grid, while dynamic occupancy is modeled by a set of moving particles. Both static and dynamic parts are jointly generated and evaluated, their distribution over the cells being adjusted. This approach leads to a more compact model and to drastically improve the accuracy of the results, in particular in term of velocities. Experimental results show that the number of values required to model the velocities have been reduced from a typical 900 per cell (for a 30\u00d730 neighborhood) to less than 2 per cell in average. The massive data compression allows to plan dedicated embedded devices."}
{"_id": "63941f2aec9469dfa0c83464b9bbda503d009808", "title": "APD: the Antimicrobial Peptide Database", "text": "An antimicrobial peptide database (APD) has been established based on an extensive literature search. It contains detailed information for 525 peptides (498 antibacterial, 155 antifungal, 28 antiviral and 18 antitumor). APD provides interactive interfaces for peptide query, prediction and design. It also provides statistical data for a select group of or all the peptides in the database. Peptide information can be searched using keywords such as peptide name, ID, length, net charge, hydrophobic percentage, key residue, unique sequence motif, structure and activity. APD is a useful tool for studying the structure-function relationship of antimicrobial peptides. The database can be accessed via a web-based browser at the URL: http://aps.unmc.edu/AP/main.html."}
{"_id": "0651cfef4a266281bbf621cabf6ce8672e7755f2", "title": "Sensation seeking and internet dependence of Taiwanese high school adolescents", "text": "The present study examined excessive Internet use of Taiwanese adolescents and a psychological aspect of users, sensation seeking, thus to differentiate motivation of Internet dependents and non-dependents. Seven hundred and fifty three Taiwanese high school students were selected using cluster sampling and 88 of them were categorized as Internet dependent users. Results indicated that Internet dependents spent more time on-line than non-dependents. While Internet dependents perceived significantly more negative Internet influences on daily routines, school performance, and parental relation than non-dependents, both Internet dependents and non-dependents viewed Internet use as enhancing peer relations. Making friends through the Internet has become a popular activity among adolescents, potentially leading to its excessive use. Internet dependents scored significantly higher on overall sensation seeking and disinhibition than Internet non-dependents. However, both groups did not differ in the life experience seeking subscale and thrill and adventure seeking subscale. This finding contradicts that of Lavin, Marvin, McLarney, Nola, and Scott [CyberPsychol. Behav. 2 (2000) 425]. Possible reasons for this discrepancy and for the relation between Internet dependence and disinhibition in Taiwanese adolescents are also discussed. # 2002 Elsevier Science Ltd. All rights reserved."}
{"_id": "af9d52af8eeb60edd3da3f03196d072ed57f3ec2", "title": "Fundamental of Content Based Image Retrieval", "text": "The aim of this paper is to review the present state of the art in content-based image retrieval (CBIR), a technique for retrieving images on the basis of automatically-derived features like color, texture and shape. Our findings are based both on a review of the relevant literature and on discussions with researchers in the field. There is need to find a desired image from a collection is shared by many professional groups, including journalists, design engineers and art historians. During the requirements of image users can vary considerably, it can be useful to illustrate image queries into three levels of abstraction first is primitive features such as color or shape, second is logical features such as the identity of objects shown and last is abstract attributes such as the significance of the scenes depicted. While CBIR systems currently operate well only at the lowest of these levels, most users demand higher levels of retrieval."}
{"_id": "d24ab98b6188388487addb5d7a100b836e6d0199", "title": "Image Mining: A New Approach for Data Mining", "text": "We introduce a new focus for data mining, which is concerned with knowledge discovery in image databases. We expect all aspects of data mining to be relevant to image mining but in this first work we concentrate on the problem of finding associations. To that end, we present a data mining algorithm to find association rules in 2-dimensional color images. The algorithm has four major steps: feature extraction, object identification, auxiliary image creation and object mining. Our algorithm is general in that it does not rely on any type of domain knowledge. A synthetic image set containing geometric shapes was generated to test our initial algorithm implementation. Our experimental results show that image mining is feasible. We also suggest several directions for future work in this area."}
{"_id": "9246cbf8d7a489b2c6299317416c7dfdf747f72b", "title": "Implementing GrabCut", "text": "GrabCut is an innovative 2D image segmentation technique developed by Rother et al. [2004]. This paper provides implementation details omitted from the original paper. Details covered in background papers are summarized here so that future implementors can refer to a single paper. Our implementation of GrabCut is described and results are included. Our main contribution is correcting errors in equation (9) and in equation (11) of the original paper. We also discuss weaknesses of the algorithm that were not discussed in the original paper. We present possible research directions to address these problems. CR Categories: I.4.6 [Image Processing and Computer Vision]: Segmentation\u2014Pixel classification"}
{"_id": "242694561bf4e06762b9adababf3d823f5b9b1bf", "title": "Intelligent UAV-assisted routing protocol for urban VANETs", "text": "The process of routing in Vehicular Ad hoc Networks (VANET) is a challenging task in city environments. Finding the shortest end-to-end connected path satisfying delay restriction and minimal overhead is confronted with many constraints and difficulties. Such difficulties are due to the high mobility of vehicles, the frequent path failures, and the various obstructions, which may affect the reliability of the data transmission and routing. Commercial Unmanned Aerial Vehicles (UAVs) or what are commonly referred to as drones can come in handy in dealing with these constraints. In this paper, we study how UAVs operating in ad hoc mode can cooperate with VANET on the ground so as to assist in the routing process and improve the reliability of the data delivery by bridging the communication gap whenever it is possible. In a previous work, we have proposed UVAR \u2013 a UAV-Assisted VANETs routing protocol, which improves data routing and connectivity of the vehicles on the ground through the use of UAVs. However, UVAR does not fully exploit UAVs in the sky for data forwarding because it uses UAVs only when the network is poorly dense. In \u2217Corresponding author Email addresses: s.oubbati@lagh-univ.dz (Omar Sami Oubbati), alakas@uaeu.ac.ae (Abderrahmane Lakas), fen.zhou@univ-avignon.fr (Fen Zhou), mesut.guenes@ovgu.de (Mesut G\u00fcne\u015f), n.lagraa@lagh-univ.dz (Nasreddine Lagraa), m.yagoubi@lagh-univ.dz (Mohamed Bachir Yagoubi) Preprint submitted to Computer Communications April 4, 2017"}
{"_id": "a627234002439d5f89da44bf83b8fd0b503812ad", "title": "GP-GAN: Gender Preserving GAN for Synthesizing Faces from Landmarks", "text": "Facial landmarks constitute the most compressed representation of faces and are known to preserve information such as pose, gender and facial structure present in the faces. Several works exist that attempt to perform high-level face-related analysis tasks based on landmarks alone without the aid of face images. In contrast, in this work, an attempt is made to tackle the inverse problem of synthesizing faces from their respective landmarks. The primary aim of this work is to demonstrate that information preserved by landmarks (gender in particular) can be further accentuated by leveraging generative models to synthesize corresponding faces. Though the problem is particularly challenging due to its ill-posed nature, we believe that successful synthesis will enable several applications such as boosting performance of high-level face related tasks using landmark points and performing dataset augmentation. To this end, a novel face-synthesis method known as Gender Preserving Generative Adversarial Network (GP-GAN) that is guided by adversarial loss, perceptual loss and a gender preserving loss is presented. Further, we propose a novel generator sub-network UDeNet for GP-GAN that leverages advantages of U-Net and DenseNet architectures. Extensive experiments and comparison with recent methods are performed to verify the effectiveness of the proposed method. Our code is available at: https://github.com/DetionDXlGP-GAN-Gender-Preserving-GAN-for-Synthesizing-Faces-from-Landmarks"}
{"_id": "4bdbe002e8040b4fdac488ef3fb19ec943f0b9dc", "title": "A Comparison of Sender-Initiated and Receiver-Initiated Reliable Multicast Protocols", "text": "Sender-initiated reliable multicast protocols, based on the use of positive acknowledgments (ACKs), lead to an ACK implosion problem at the sender as the number of receivers increases. Briefly, the ACK implosion problem refers to the significant overhead incurred by the sending host due to the processing of ACKs from each receiver. A potential solution to this problem is to shift the burden of providing reliable data transfer to the receivers\u2014thus resulting in a receiver-initiated multicast error control protocol based on the use of negative acknowledgments (NAKs). In this paper we determine the maximum throughputs of the sending and receiving hosts for generic sender-initiated and receiver-initiated protocols. We show that the receiver-initiated error control protocols provide substantially higher throughputs than their sender-initiated counterparts. We further demonstrate that the introduction of random delays prior to generating NAKs coupled with the multicasting of NAKs to all receivers has the potential for an additional substantial increase in the throughput of receiver-initiated error control protocols over sender-initiated protocols."}
{"_id": "564264f0ce2b26fd80320e4f71b70ee8c67602ef", "title": "Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation", "text": "In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the art methods. The MATLAB code of our method will be publicly available at http://www.yongxu.org/lunwen.html."}
{"_id": "479fb5b345ed4f7800c3812368e07a86d024ac51", "title": "Bridging Neural Machine Translation and Bilingual Dictionaries", "text": "Neural Machine Translation (NMT) has become the new state-of-the-art in several language pairs. However, it remains a challenging problem how to integrate NMT with a bilingual dictionary which mainly contains words rarely or never seen in the bilingual training data. In this paper, we propose two methods to bridge NMT and the bilingual dictionaries. The core idea behind is to design novel models that transform the bilingual dictionaries into adequate sentence pairs, so that NMT can distil latent bilingual mappings from the ample and repetitive phenomena. One method leverages a mixed word/character model and the other attempts at synthesizing parallel sentences guaranteeing massive occurrence of the translation lexicon. Extensive experiments demonstrate that the proposed methods can remarkably improve the translation quality, and most of the rare words in the test sentences can obtain correct translations if they are covered by the dictionary."}
{"_id": "385066d369c56018ff357937b225d798de8d1469", "title": "Approximate image-based tree-modeling using particle flows", "text": "We present a method for producing 3D tree models from input photographs with only limited user intervention. An approximate voxel-based tree volume is estimated using image information. The density values of the voxels are used to produce initial positions for a set of particles. Performing a 3D flow simulation, the particles are traced downwards to the tree basis and are combined to form twigs and branches. If possible, the trunk and the first-order branches are determined in the input photographs and are used as attractors for particle simulation. The geometry of the tree skeleton is produced using botanical rules for branch thicknesses and branching angles. Finally, leaves are added. Different initial seeds for particle simulation lead to a variety, yet similar-looking branching structures for a single set of photographs."}
{"_id": "13645dd03627503fd860a2ba73189e92393a67e3", "title": "The Geometry of Algorithms with Orthogonality Constraints", "text": "In this paper we develop new Newton and conjugate gradient algorithms on the Grassmann and Stiefel manifolds. These manifolds represent the constraints that arise in such areas as the symmetric eigenvalue problem, nonlinear eigenvalue problems, electronic structures computations, and signal processing. In addition to the new algorithms, we show how the geometrical framework gives penetrating new insights allowing us to create, understand, and compare algorithms. The theory proposed here provides a taxonomy for numerical linear algebra algorithms that provide a top level mathematical view of previously unrelated algorithms. It is our hope that developers of new algorithms and perturbation theories will benefit from the theory, methods, and examples in this paper."}
{"_id": "73fab6de084fc2b0769c074d053ba71b0cad53ac", "title": "Recognition of Power-Quality Disturbances Using S-Transform-Based ANN Classifier and Rule-Based Decision Tree", "text": "This paper deals with a modified technique for the recognition of single stage and multiple power quality (PQ) disturbances. An algorithm based on Stockwell's transform and artificial neural network-based classifier and a rule-based decision tree is proposed in this paper. The analysis and classification of single stage PQ disturbances consisting of both events and variations such as sag, swell, interruption, harmonics, transients, notch, spike, and flicker are presented. Moreover, the proposed algorithm is also applied on multiple PQ disturbances such as harmonics with sag, swell, flicker, and interruption. A database of these PQ disturbances based on IEEE-1159 standard is generated in MATLAB for simulation studies. The proposed algorithm extracts significant features of various PQ disturbances using S-transform, which are used as input to this hybrid classifier for the classification of PQ disturbances. Satisfactory results of effective recognition and classification of PQ disturbances are obtained with the proposed algorithm. Finally, the proposed method is also implemented on real-time PQ events acquired in a laboratory to confirm the validity of this algorithm in practical conditions."}
{"_id": "86cbf4d3d3795fa5865e88de1d9aaf6dfc413122", "title": "Imbalanced Protein Data Classification Using Ensemble FTM-SVM", "text": "Classification of protein sequences into functional and structural families based on machine learning methods is a hot research topic in machine learning and Bioinformatics. In fact, the underlying protein classification problem is a huge multiclass problem. Generally, the multiclass problem can be reduced to a set of binary classification problems. The protein in one class are seen as positive examples while those outside the class are seen as negative examples. However, the class imbalance problem will arise in this case because the number of protein in one class is usually much smaller than that of the protein outside the class. To handle the challenge, we propose a novel framework to classify the protein. We firstly use free scores (FS) to perform feature extraction for protein; then, the inverse random under sampling (IRUS) is used to create a large number of distinct training sets; next, we use a new ensemble approach to combine these distinct training sets with a new fuzzy total margin support vector machine (FTM-SVM) that we have constructed. we call the novel ensemble classifier as ensemble fuzzy total margin support vector machine (EnFTM-SVM). We then give a full description of our method, including the details of its derivation. Finally, experimental results on fourteen benchmark protein data sets indicate that the proposed method outperforms many state-of-the-art protein classifying methods."}
{"_id": "ca0d247d4dda387f282188c949e9fe70ca240fce", "title": "A Software Gamification Model for Cross-Cultural Software Development Teams", "text": "Gamification is the use of game design elements in non-game context platforms such as services and marketing to motivate people to participate in planned activities to increase engagement and loyalty to achieve goals.\n Gamification has been applied to academic fields including software engineering in recent years. Some studies show that gamification can motivate engineers in Software Engineering (SE) if applied appropriately. However, most gamification implementations are lacking well-defined frameworks or models.\n This paper develops a software gamification model (SGM) with a well-defined framework that provides a robust process for implementing gamification for SE. This model also contains elements from applied psychology, Social Software Engineering (SSE), and the Capability Maturity Model (CMM). Having those elements ensures that the model has a solid foundation in gamification and SE disciplines. The model is further tailored for cross-cultural software development teams (CCSDT).\n CCSDT have been popular in recent years due to the fact that software development has become a global business. However, as more and more CCSDT are formed, many challenges and issues have been raised in those cross-cultural environments stemming from miscommunication, misunderstanding, cultural differences and conflicts. This paper uses the SGM to help resolve the issues."}
{"_id": "f6eff6b7b5f05d5ee81232ed0aa69efad01c9c2b", "title": "Joint torques during sit-to-stand in healthy subjects and people with Parkinson's disease.", "text": "OBJECTIVES\nTo compare lower limb joint torques during sit-to-stand in normal elderly subjects and people with Parkinson's disease, using a developed biomechanical model simulating all phases of sit-to-stand.Design. A cross-sectional study utilizing a Parkinsonian and a control group.\n\n\nBACKGROUND\nSubjects with Parkinson's disease were observed to experience difficulty in performing sit-to-stand. The developed model was used to calculate the lower limb joint torques in normal elderly subjects and subjects with Parkinson's disease, to delineate possible causes underlying difficulties in initiating sit-to-stand task.\n\n\nMETHODS\nSix normal elderly subjects and seven age-matched subjects with Parkinson's disease performed five sit-to-stand trials at their self-selected speed. Anthropometric data, two-dimensional kinematic and foot-ground and thigh-chair reactive forces were used to calculate, via inverse dynamics, the joint torques during sit-to-stand in both before and after seat-off phases. The difference between the control and Parkinson's disease group was analysed using independent t-tests.\n\n\nRESULTS\nBoth control and Parkinson's disease groups had a similar joint kinematic pattern, although the Parkinson's disease group demonstrated a slower angular displacement. The latter subjects produced significantly smaller normalized hip flexion torque and presented a slower torque build-up rate than the able-bodied subjects (P<0.05).\n\n\nCONCLUSION\nSlowness of sit-to-stand in people with Parkinson's disease could be due to a reduced hip flexion joint torque and a prolonged rate of torque production."}
{"_id": "7b30817561c8aa87fa4d72ef6a75080000ef5ea0", "title": "Deep Convolutional Neural Networks for Fire Detection in Images", "text": "Detecting fire in images using image processing and computer vision techniques has gained a lot of attention from researchers during the past few years. Indeed, with sufficient accuracy, such systems may outperform traditional fire detection equipment. One of the most promising techniques used in this area is Convolutional Neural Networks (CNNs). However, the previous research on fire detection with CNNs has only been evaluated on balanced datasets, which may give misleading information on real-world performance, where fire is a rare event. Actually, as demonstrated in this paper, it turns out that a traditional CNN performs relatively poorly when evaluated on the more realistically balanced benchmark dataset provided in this paper. We therefore propose to use even deeper Convolutional Neural Networks for fire detection in images, and enhancing these with fine tuning based on a fully connected layer. We use two pretrained state-of-the-art Deep CNNs, VGG16 and Resnet50, to develop our fire detection system. The Deep CNNs are tested on our imbalanced dataset, which we have assembled to replicate real world scenarios. It includes images that are particularly difficult to classify and that are deliberately unbalanced by including significantly more non-fire images than fire images. The dataset has been made available online. Our results show that adding fully connected layers for fine tuning indeed does increase accuracy, however, this also increases training time. Overall, we found that our deeper CNNs give good performance on a more challenging dataset, with Resnet50 slightly outperforming VGG16. These results may thus lead to more successful fire detection systems in practice."}
{"_id": "1e342cbce99cb2cb5fe0c3ad24458f5423262763", "title": "Multi-task low-rank affinity pursuit for image segmentation", "text": "This paper investigates how to boost region-based image segmentation by pursuing a new solution to fuse multiple types of image features. A collaborative image segmentation framework, called multi-task low-rank affinity pursuit, is presented for such a purpose. Given an image described with multiple types of features, we aim at inferring a unified affinity matrix that implicitly encodes the segmentation of the image. This is achieved by seeking the sparsity-consistent low-rank affinities from the joint decompositions of multiple feature matrices into pairs of sparse and low-rank matrices, the latter of which is expressed as the production of the image feature matrix and its corresponding image affinity matrix. The inference process is formulated as a constrained nuclear norm and \u21132;1-norm minimization problem, which is convex and can be solved efficiently with the Augmented Lagrange Multiplier method. Compared to previous methods, which are usually based on a single type of features, the proposed method seamlessly integrates multiple types of features to jointly produce the affinity matrix within a single inference step, and produces more accurate and reliable segmentation results. Experiments on the MSRC dataset and Berkeley segmentation dataset well validate the superiority of using multiple features over single feature and also the superiority of our method over conventional methods for feature fusion. Moreover, our method is shown to be very competitive while comparing to other state-of-the-art methods."}
{"_id": "cb1d5e717ff62395ec54eb2673e52661dcd0b0dd", "title": "A 240 GHz ultra-wideband FMCW radar system with on-chip antennas for high resolution radar imaging", "text": "Nowadays emerging industrial radar applications demand for high-resolution, high-precision and at the same time low-cost radar sensors. Recent advances in semiconductor technology allow highly integrated radar sensors at frequencies up to several hundred GHz in mass-production suitable and cost-effective SiGe bipolar technologies. In this contribution, a SiGe MMIC-based 240 GHz radar sensor with more than 60GHz bandwidth is presented. It consists of a MMIC chip including the high-frequency components and a digital control module with the PLL stabilisation, ramp generation, and data acquisition. The antenna is realized by on-chip patch antennas, which are focused by using an additional dielectric lens. The radar allows fast and highly linear frequency sweeps from 204 GHz to 265 GHz with an maximum output power of \u2248 -1dBm EIRP (patch only). A phase noise of <;-65 dBc/Hz (>1 kHz offset) is achieved over the complete tuning range. Additionally range profile, jitter and imaging measurements are presented to demonstrate the achieved system performance."}
{"_id": "27e6827804ca3b7c3e38c0a2723bd205ca18751a", "title": "Prefrontal neuropsychological effects of sleep deprivation in young adults--a model for healthy aging?", "text": "Neuropsychological testing and brain imaging show that healthy aging leads to a preferential impairment of the prefrontal cortex (PFC). Interestingly, in young adults sleep deprivation (SD) has similar effects. Psychological tasks not so oriented to the PFC are less sensitive both to SD and aging. The PFC is a cortical region working particularly hard during wakefulness, which may make it more vulnerable to \"deterioration,\" whether this is through aging or SD. In these respects SD in young adults may offer a model for aging. No study has directly compared aging with SD. We compared groups comprising (equal sexes): YOUNG (av. 23y), MIDDLE AGED (av. 60y) and OLD (av. 73y). Young were subdivided into SD and non-sleep deprived groups. All participants were carefully screened, were healthy, good sleepers and with a similar educational background. A battery of PFC-oriented, short and straightforward neuropsychological tests was used to compare the effects of 36h of SD in the young group, with findings from the healthy, alert, non-sleep deprived groups. Tests relied on accuracy rather than speed (which took into account the problem of \"global slowing\" in the older participants), and were administered once (i.e., were novel). Test outcomes were significantly affected by: (1) SD in the young groups, and (2) by age. A non-PFC component of one test was not affected by SD or by age. It was concluded that 36h SD in young adults produces effects on the PFC similar to those found in normal, alert people aged about 60 years. However, it can not be concluded that an aged brain is a sleep-deprived brain."}
{"_id": "5ab6435f923cbdfd6075a57eeccc1b427245b8e3", "title": "A new approach for sparse Bayesian channel estimation in SCMA uplink systems", "text": "The rapid growth of traffic and number of simultaneously available devices leads to the new challenges in constructing fifth generation wireless networks (5G). To handle with them various schemes of non-orthogonal multiple access (NOMA) were proposed. One of these schemes is Sparse Code Multiple Access (SCMA), which is shown to achieve better link level performance. In order to support SCMA signal decoding channel estimation is needed and sparse Bayesian learning framework may be used to reduce the requirement of pilot overhead. In this paper we propose a modification of sparse Bayesian learning based channel estimation algorithm that is shown to achieve better accuracy of user detection and faster convergence in numerical simulations."}
{"_id": "38d555bfe13b61e838364016219c7e42fb5dc919", "title": "Using BPM governance to align systems and practice", "text": "Purpose \u2013 The purpose of this paper is to propose a business process management (BPM) governance model that sets BPM decision making, along with roles and responsibilities. The setting context of the study is a government-owned corporation operating in Australia. Design/methodology/approach \u2013 A qualitative case study examined and analysed organisational documents using a content analysis approach. Results of document analysis are used to inform a series of in-depth interviews of key stakeholders in the organisation. Interviews are analysed using a constant comparison method to derive themes and build categories of description. Findings \u2013 A BPM governance model is proposed. Results of thematic analysis are interpreted against the framework of the BPM governance model, leading to findings that include implications for theory and practice. Practical implications \u2013 In practical terms, the research shows how BPM practice can be aligned and integrated with the corporate governance and management systems in the selected case study organisation. Originality/value \u2013 Despite research identifying the importance of governance, along with associated capabilities, there has been little progress on how the abovementioned capabilities can be effectively deployed across an organisation. This paper addresses a gap in the literature relating to how to deploy BPM governance in an organisation."}
{"_id": "17ab02894d728a584a4cfbd2d73edafdc209f000", "title": "Inferring Strategies for Sentence Ordering in Multidocument News Summarization", "text": "The problem of organizing information for multidocument summarization so that the generated summary is coherent has received relatively little attention. While sentence ordering for single document summarization can be determined from the ordering of sentences in the input article, this is not the case for multidocument summarization where summary sentences may be drawn from different input articles. In this paper, we propose a methodology for studying the properties of ordering information in the news genre and describe experiments done on a corpus of multiple acceptable orderings we developed for the task. Based on these experiments, we implemented a strategy for ordering information that combines constraints from chronological order of events and topical relatedness. Evaluation of our augmented algorithm shows a significant improvement of the ordering over two baseline strategies."}
{"_id": "76d5e3fa888bee872b7adb7fa810089aa8ab1d58", "title": "A Maximum-Entropy-Inspired Parser", "text": "We present a new parser for parsing down to Penn tree-bank style parse trees that achieves 90.1% average precision/recall for sentences of length 40 and less, and 89.5% for sentences of length 100 and less when trMned and tested on the previously established [5,9,10,15,17] \"standard\" sections of the Wall Street Journal treebank. This represents a 13% decrease in error rate over the best single-parser results on this corpus [9]. The major technical innovation is tire use of a \"ma~ximum-entropy-inspired\" model for conditioning and smoothing that let us successfully to test and combine many different conditioning events. We also present some partial results showing the effects of different conditioning information, including a surprising 2% improvement due to guessing the lexical head's pre-terminal before guessing the lexical head."}
{"_id": "969a9ec5f24dabcfb9c70c7ee04625075a6c0a98", "title": "Estimation of probabilities from sparse data for the language model component of a speech recognizer", "text": "The description of a novel type of rn-gram language model is given. The model offers, via a nonlinear recursive procedure, a computation and space efficient solution to the problem of estimating probabilities from sparse data. This solution compares favorably to other proposed methods. While the method has been developed for and successfully implemented in the IBM Real Time Speech Recognizers, its generality makes it applicable in other areas where the problem of estimating probabilities from sparse data arises. Sparseness of data is an inherent property of any real text, and it is a problem that one always encounters while collecting frequency statistics on words and word sequences (m-grams) from a text of finite size. This means that even for a very large data collection, the maximum likelihood estimation method does not allow us to adequately estimate probabilities of rare but nevertheless possible word sequences-many sequences occur only once (\u201csingletons\u201d); many more do not occur at all. Inadequacy of the maximum likelihood estimator and the necessity to estimate the probabilities of m-grams which did not occur in the text constitute the essence of the problem. The main idea of the proposed solution to the problem is to reduce unreliable probability estimates given by the observed frequencies and redistribute the \u201cfreed\u201d probability \u201cmass\u201d among m-grams which never occurred in the text. The reduction is achieved by replacing maximum likelihood estimates for m-grams having low counts with renormalized Turing\u2019s estimates [l], and the redistribution is done via the recursive utilization of lower level conditional distributions. We found Turing\u2019s method attractive because of its simplicity and its characterization as the optimal empirical Bayes\u2019 estimator of a multinomial probability. Robbins in [2] introduces the empirical Bayes\u2019 methodology and Nadas in [3] gives various derivations of the Turing\u2019s formula. Let N be a sample text size and let n, be the number of words (m-grams) which occurred in the text exactly r times, so that N = C rn,. (1) Turing\u2019s estimate PT for a probability of a word (m-gram) which occurred in the sample r times is r"}
{"_id": "0025b963134b1c0b64c1389af19610d038ab7072", "title": "Learning to Order Things", "text": "There are many applications in which it is desirable to order rather than classify instances. Here we consider the problem of learning how to order, given feedback in the form of preference judgments, i.e., statements to the effect that one instance should be ranked ahead of another. We outline a two-stage approach in which one first learns by conventional means a preference function, of the form PREF , which indicates whether it is advisable to rank before . New instances are then ordered so as to maximize agreements with the learned preference function. We show that the problem of finding the ordering that agrees best with a preference function is NP-complete, even under very restrictive assumptions. Nevertheless, we describe a simple greedy algorithm that is guaranteed to find a good approximation. We then discuss an on-line learning algorithm, based on the \u201cHedge\u201d algorithm, for finding a good linear combination of ranking \u201cexperts.\u201d We use the ordering algorithm combined with the on-line learning algorithm to find a combination of \u201csearch experts,\u201d each of which is a domain-specific query expansion strategy for a WWW search engine, and present experimental results that demonstrate the merits of our approach."}
{"_id": "196163803d7498a16ccc24cf7a10bd12352c8e50", "title": "Fastest Mixing Markov Chain on a Graph", "text": "We consider a symmetric random walk on a connected graph, where each edge is labeled with the probability of transition between the two adjacent vertices. The associated Markov chain has a uniform equilibrium distribution; the rate of convergence to this distribution, i.e., the mixing rate of the Markov chain, is determined by the second largest (in magnitude) eigenvalue of the transition matrix. In this paper we address the problem of assigning probabilities to the edges of the graph in such a way as to minimize the second largest magnitude eigenvalue, i.e., the problem of finding the fastest mixing Markov chain on the graph. We show that this problem can be formulated as a convex optimization problem, which can in turn be expressed as a semidefinite program (SDP). This allows us to easily compute the (globally) fastest mixing Markov chain for any graph with a modest number of edges (say, 1000) using standard numerical methods for SDPs. Larger problems can be solved by exploiting various types of symmetry and structure in the problem, and far larger problems (say 100000 edges) can be solved using a subgradient method we describe. We compare the fastest mixing Markov chain to those obtained using two commonly used heuristics: the maximum-degree method, and the Metropolis-Hastings algorithm. For many of the examples considered, the fastest mixing Markov chain is substantially faster than those obtained using these heuristic methods. We derive the Lagrange dual of the fastest mixing Markov chain problem, which gives a sophisticated method for obtaining (arbitrarily good) bounds on the optimal mixing rate, as well the optimality conditions. Finally, we describe various extensions of the method, including a solution of the problem of finding the fastest mixing reversible Markov chain, on a fixed graph, with a given equilibrium distribution."}
{"_id": "480e2fd9bdd8bdc447e7d0d7ed7d4c9062b85d4f", "title": "Growing a tree in the forest: constructing folksonomies by integrating structured metadata", "text": "Many social Web sites allow users to annotate the content with descriptive metadata, such as tags, and more recently to organize content hierarchically. These types of structured metadata provide valuable evidence for learning how a community organizes knowledge. For instance, we can aggregate many personal hierarchies into a common taxonomy, also known as a folksonomy, that will aid users in visualizing and browsing social content, and also to help them in organizing their own content. However, learning from social metadata presents several challenges, since it is sparse, shallow, ambiguous, noisy, and inconsistent. We describe an approach to folksonomy learning based on relational clustering, which exploits structured metadata contained in personal hierarchies. Our approach clusters similar hierarchies using their structure and tag statistics, then incrementally weaves them into a deeper, bushier tree. We study folksonomy learning using social metadata extracted from the photo-sharing site Flickr, and demonstrate that the proposed approach addresses the challenges. Moreover, comparing to previous work, the approach produces larger, more accurate folksonomies, and in addition, scales better."}
{"_id": "268d3f28ae2295b9d2bf6fef2aa27faf9048a86c", "title": "Exploiting different word clusterings for class-based RNN language modeling in speech recognition", "text": "We propose to exploit the potential of multiple word clusterings in class-based recurrent neural network (RNN) language models for ensemble RNN language modeling. By varying the clustering criteria and the space of word embedding, different word clusterings are obtained to define different word/class factorizations. For each such word/class factorization, several base RNNLMs are learned, and the word prediction probabilities of the base RNNLMs are then combined to form an ensemble prediction. We use a greedy backward model selection procedure to select a subset of models and combine these models for word prediction. The proposed ensemble language modeling method has been evaluated on Penn Treebank test set as well as Wall Street Journal (WSJ) Eval 92 and 93 test sets, where it improved test set perplexity and word error rate over the state-of-the-art single RNNLMs as well as multiple RNNLMs produced by varying RNN learning conditions."}
{"_id": "55f2e2b81673c8609d645a6237b84e317b6d7dc5", "title": "Efficient Scheduling of Arbitrary TAsk Graphs to Multiprocessors Using a Parallel Genetic Algorithm", "text": "Given a parallel program represented by a task graph, the objective of a scheduling algorithm is to minimize the overall execution time of the program by properly assigning the nodes of the graph to the processors. This multiprocessor scheduling problem is NP-complete even with simplifying assumptions and becomes more complex under relaxed assumptions such as arbitrary precedence constraints, and arbitrary task execution and communication times. The present literature on this topic is a large repertoire of heuristics that produce good solutions in a reasonable amount of time. These heuristics, however, have restricted applicability in a practical environment because they have a number of fundamental problems including high time complexity, lack of scalability, and no performance guarantee with respect to optimal solutions. Recently, genetic algorithms (GAs) have been widely reckoned as a useful vehicle for obtaining high quality or even optimal solutions for a broad range of combinatorial optimization problems. While a few GAs for scheduling have already been suggested, in this paper we propose a novel GA-based algorithm with an objective to simultaneously meet the goals of high performance, scalability, and fast running time. The proposed parallel genetic scheduling (PGS) algorithm itself is a parallel algorithm which generates high quality solutions in a short time. By encoding the scheduling list as a chromosome, the PGS algorithm can potentially generate an optimal scheduling list which in turn leads to an optimal schedule. The major strength of the PGS algorithm lies in its two efficient genetic operators: the order crossover and mutation. These operators effectively combine the building-blocks of good scheduling lists to construct better lists. The proposed algorithm is evaluated through a robust comparison with two heuristics best known in terms of performance and time complexity. It outperforms both heuristics while taking considerably less running time. When evaluated with random task graphs for which optimal solutions are known, the PGS algorithm generates optimal solutions for more than half of the test cases and close-to-optimal for the other half. \u00a9 1997 Academic Press"}
{"_id": "60fc396b7f4d3c817337db14648a7250e21d90e7", "title": "Learning a Variational Network for Reconstruction of Accelerated MRI Data", "text": "PURPOSE\nTo allow fast and high-quality reconstruction of clinical accelerated multi-coil MR data by learning a variational network that combines the mathematical structure of variational models with deep learning.\n\n\nTHEORY AND METHODS\nGeneralized compressed sensing reconstruction formulated as a variational model is embedded in an unrolled gradient descent scheme. All parameters of this formulation, including the prior model defined by filter kernels and activation functions as well as the data term weights, are learned during an offline training procedure. The learned model can then be applied online to previously unseen data.\n\n\nRESULTS\nThe variational network approach is evaluated on a clinical knee imaging protocol for different acceleration factors and sampling patterns using retrospectively and prospectively undersampled data. The variational network reconstructions outperform standard reconstruction algorithms, verified by quantitative error measures and a clinical reader study for regular sampling and acceleration factor 4.\n\n\nCONCLUSION\nVariational network reconstructions preserve the natural appearance of MR images as well as pathologies that were not included in the training data set. Due to its high computational performance, that is, reconstruction time of 193 ms on a single graphics card, and the omission of parameter tuning once the network is trained, this new approach to image reconstruction can easily be integrated into clinical workflow. Magn Reson Med 79:3055-3071, 2018. \u00a9 2017 International Society for Magnetic Resonance in Medicine."}
{"_id": "95fd485336f5ced455764a6dce6f86da5662f478", "title": "Empowerment-driven Exploration using Mutual Information Estimation", "text": "Exploration is a difficult challenge in reinforcement learning and is of prime importance in sparse reward environments. However, many of the state of the art deep reinforcement learning algorithms, that reply on epsilon-greedy exploration, fail on these environments. In such environments, empowerment can serve as an intrinsic reward signal to enable the agent to explore by maximizing the influence it has over the near future. We formulate empowerment as the channel capacity between actions and states and is calculated by estimating the mutual information between the actions and the following states. The mutual information is estimated using a Mutual Information Neural Estimator and a forward dynamics model. We demonstrate that an empowerment driven agent is able to improve significantly the score of a baseline DQN agent on the game of Montezuma\u2019s Revenge."}
{"_id": "05a958194f1756fb91ddd3e2cd5794f9b0c312ce", "title": "Multilevel Inverter Topologies With Reduced Device Count: A Review", "text": "Multilevel inverters have created a new wave of interest in industry and research. While the classical topologies have proved to be a viable alternative in a wide range of high-power medium-voltage applications, there has been an active interest in the evolution of newer topologies. Reduction in overall part count as compared to the classical topologies has been an important objective in the recently introduced topologies. In this paper, some of the recently proposed multilevel inverter topologies with reduced power switch count are reviewed and analyzed. The paper will serve as an introduction and an update to these topologies, both in terms of the qualitative and quantitative parameters. Also, it takes into account the challenges which arise when an attempt is made to reduce the device count. Based on a detailed comparison of these topologies as presented in this paper, appropriate multilevel solution can be arrived at for a given application."}
{"_id": "c5b470acc29abf0e0a2121b25633b2e1d2a6785f", "title": "DIZK: A Distributed Zero Knowledge Proof System", "text": "Recently there has been much academic and industrial interest in practical implementations of zero knowledge proofs. These techniques allow a party to prove to another party that a given statement is true without revealing any additional information. In a Bitcoin-like system, this allows a payer to prove validity of a payment without disclosing the payment\u2019s details. Unfortunately, the existing systems for generating such proofs are very expensive, especially in terms of memory overhead. Worse yet, these systems are \u201cmonolithic\u201d, so they are limited by the memory resources of a single machine. This severely limits their practical applicability. We describe DIZK, a system that distributes the generation of a zero knowledge proof across machines in a compute cluster. Using a set of new techniques, we show that DIZK scales to computations of up to billions of logical gates (100\u00d7 larger than prior art) at a cost of 10 \u03bcs per gate (100\u00d7 faster than prior art). We then use DIZK to study various security applications."}
{"_id": "841635aefe96967b0b9650bd92544f6ce55805be", "title": "Framework for a Digital Forensic Investigation", "text": "Computer Forensics is essential for the successful prosecution of computer criminals. For a forensic investigation to be performed successfully there are a number of important steps that have to be considered and taken. The aim of this paper is to define a clear, step-by-step framework for the collection of evidence suitable for presentation in a court of law. Existing forensic models will be surveyed and then adapted to create a specific application framework for single computer, entry point forensics."}
{"_id": "f87f58d4f0707f6d0d41827fc8888dd236cfcec5", "title": "Turn-Taking Estimation Model Based on Joint Embedding of Lexical and Prosodic Contents", "text": "A natural conversation involves rapid exchanges of turns while talking. Taking turns at appropriate timing or intervals is a requisite feature for a dialog system as a conversation partner. This paper proposes a model that estimates the timing of turn-taking during verbal interactions. Unlike previous studies, our proposed model does not rely on a silence region between sentences since a dialog system must respond without large gaps or overlaps. We propose a Recurrent Neural Network (RNN) based model that takes the joint embedding of lexical and prosodic contents as its input to classify utterances into turn-taking related classes and estimates the turn-taking timing. To this end, we trained a neural network to embed the lexical contents, the fundamental frequencies, and the speech power into a joint embedding space. To learn meaningful embedding spaces, the prosodic features from each single utterance are pretrained using RNN and combined with utterance lexical embedding as the input of our proposed model. We tested this model on a spontaneous conversation dataset and confirmed that it outperformed the use of word embedding-based features."}
{"_id": "484c465e13720fff383c7fb6fc49719a817fb0f7", "title": "Tutorial on Answering Questions about Images with Deep Learning", "text": "Together with the development of more accurate methods in Computer Vision and Natural Language Understanding, holistic architectures that answer on questions about the content of real-world images have emerged. In this tutorial, we build a neural-based approach to answer questions about images. We base our tutorial on two datasets: (mostly on) DAQUAR, and (a bit on) VQA. With small tweaks the models that we present here can achieve a competitive performance on both datasets, in fact, they are among the best methods that use a combination of LSTM with a global, full frame CNN representation of an image. We hope that after reading this tutorial, the reader will be able to use Deep Learning frameworks, such as Keras and introduced Kraino, to build various architectures that will lead to a further performance improvement on this challenging task. 1 Preface In this tutorial1 we build a few architectures that can answer questions about images. The architectures are based on our two papers on this topic: Malinowski et al. [2015] and Malinowski et al. [2016]; and more broadly, on our project towards a Visual Turing Test2. In particular, an encoder-decoder perspective of Malinowski et al. [2016] allows us to effectively experiment with various design choices. For the sake of simplicity, we only consider a classification-based approach to answer questions about images, although an approach that generate answers word-by-word is also studied in the community [Malinowski et al., 2015]. In the tutorial, we mainly focus on the DAQUAR dataset [Malinowski and Fritz, 2014a], but a few possible directions to apply learnt techniques to VQA [Antol et al., 2015] are also pointed. First, we will get familiar with the task of answering questions about images, and a dataset that implements the task (due to a small size, we mainly use DAQUAR as it better serves an educational purpose that we aim at this tutorial). Next, we build a few blind models that answer questions about images without actually seeing such images. Such models already exhibit a reasonable performance as they can effectively learn various biases that exist in a dataset, which we also interpret as learning a common sense knowledge [Malinowski et al., 2015, 2016]. Subsequently, we build a few language+vision models that answer questions based on both a textual and a visual inputs. Finally, we leave the tutorial with a few possible research directions. Technical aspects The tutorial is originally written using Python Notebook, which the reader is welcome to download and use through the tutorial. Instructions necessary to run the Notebook version of this tutorial are provided in the following: https://github.com/mateuszmalinowski/visual_turing_test-tutorial. In this tutorial, we heavily use a Python code, and therefore it is expected the reader either already knows this language, or can quickly learn it. However, we made an effort to make this tutorial approachable to a wider audience. We use Kraino that is a framework prepared for this tutorial in order to simplify the development of the question answering architectures. Under the hood, it uses Theano4 [Bastien et al., 2012] and Keras5 [Chollet, 2015] \u2013 two frameworks to build Deep Learning models. We also use various CNNs representations extracted from images that can be downloaded as explained at the beginning of our Notebook tutorial. 
We also highlight exercises that a curious reader may attempt to solve. This tutorial was presented for the first time during the 2nd Summer School on Integrating Vision and Language: Deep Learning. http://mpii.de/visual_turing_test https://github.com/mateuszmalinowski/visual_turing_test-tutorial/blob/master/visual_ turing_test.ipynb http://deeplearning.net/software/theano/ https://keras.io 1 ar X iv :1 61 0. 01 07 6v 1 [ cs .C V ] 4 O ct 2 01 6"}
{"_id": "59ff257cfcf488004d2efdc7dd43dfc2d1af4ca1", "title": "Accurate and Efficient On-Chip Spectral Analysis for Built-In Testing and Calibration Approaches", "text": "The fast Fourier transform (FFT) algorithm is widely used as a standard tool to carry out spectral analysis because of its computational efficiency. However, the presence of multiple tones frequently requires a fine frequency resolution to achieve sufficient accuracy, which imposes the use of a large number of FFT points that results in large area and power overheads. In this paper, an FFT method is proposed for on-chip spectral analysis of multi-tone signals with particular harmonic and intermodulation components. This accurate FFT analysis approach is based on coherent sampling, but it requires a significantly smaller number of points to make the FFT realization more suitable for on-chip built-in testing and calibration applications that require area and power efficiency. The technique was assessed by comparing the simulation results from the proposed method of single and multiple tones with the simulation results obtained from the FFT of coherently sampled tones. The results indicate that the proper selection of test tone frequencies can avoid spectral leakage even with multiple narrowly spaced tones. When low-frequency signals are captured with an analog-to-digital converter (ADC) for on-chip analysis, the overall accuracy is limited by the ADC's resolution, linearity, noise, and bandwidth limitations. Post-layout simulations of a 16-point FFT showed that third-order intermodulation (IM3) testing with two tones can be performed with 1.5-dB accuracy for IM3 levels of up to 50 dB below the fundamental tones that are quantized with a 10-bit resolution. In a 45-nm CMOS technology, the layout area of the 16-point FFT for on-chip built-in testing is 0.073 mm2, and its estimated power consumption is 6.47 mW."}
{"_id": "37815aa0c724fb0902f6a3ccbb9190717a38a1f9", "title": "Advanced automated SQL injection attacks and defensive mechanisms", "text": "SQL Injection vulnerabilities still exist even after almost two decades that it first appeared. In spite of numerous prevention methodologies being used today, web applications still tend to be vulnerable to SQL injection attacks. Technology has improved drastically over the past few years and computers have certainly brought a great impact on our lifestyle. The computer applications and their usage over the web are myriad. It is quite evident that the in the near future, the usage of computers would relatively be higher than what we are witnessing today. A wide variety of data such as credit information, military data, human communication data, and countless types of data is shared over the far-flung computer networks. As the usage and reliability on computers increase, the threat to sensitive data likewise increases. The challenges with the cyber security when dealing with sensitive information is now a nightmare. To help understand the threats and the severity of exploits deployed, the paper provides proof of concepts for exploits carried out to compromise web applications and how the databases are exploited using the SQL injection methodologies. The SQL injection vulnerabilities in the web applications are surprisingly very vast and this is definitely is a huge security threat to personal data of people that is stored on web. In this paper, the methods used in information gathering, how the security is breached, and how payloads are used to exploit web applications are explained using the Kali Linux. In addition, an analysis is carried out on how the websites are comprised. Advanced methods on how to defend SQL injections are briefly justified. For the readers to understand better, a real time scenario of a penetration tester and a database server is set up with a few suppositions, and the commands that dodge the security characteristics and manipulate the databases are explicated."}
{"_id": "f1592442297d172f5c07887bbaa609f0b169fc26", "title": "Ultrasound for fetal assessment in early pregnancy.", "text": "BACKGROUND\nAdvantages of early pregnancy ultrasound screening are thought to be more accurate calculation of gestational age, earlier identification of multiple pregnancies, and diagnosis of non-viable pregnancies and certain fetal malformations.\n\n\nOBJECTIVES\nThe objective of this review was to assess the use of routine (screening) ultrasound compared with the selective use of ultrasound in early pregnancy (ie before 24 weeks).\n\n\nSEARCH STRATEGY\nThe Cochrane Pregnancy and Childbirth Group trials register and the Cochrane Controlled Trials Register (up to July 1998) were searched.\n\n\nSELECTION CRITERIA\nAdequately controlled trials of routine ultrasound imaging in early pregnancy.\n\n\nDATA COLLECTION AND ANALYSIS\nOne reviewer assessed trial quality and extracted data. Study authors were contacted for additional information.\n\n\nMAIN RESULTS\nNine trials were included. The quality of the trials was generally good. Routine ultrasound examination was associated with earlier detection of multiple pregnancies (twins undiagnosed at 26 weeks, odds ratio 0.08, 95% confidence interval 0.04 to 0.16) and reduced rates of induction of labour for post-term pregnancy (odds ratio 0. 61, 95% confidence interval 0.52 to 0.72). There were no differences detected for substantive clinical outcomes such as perinatal mortality (odds ratio 0.86, 95% confidence interval 0.67 to 1.12). Where detection of fetal abnormality was a specific aim of the examination, the number of terminations of pregnancy for fetal anomaly increased.\n\n\nREVIEWER'S CONCLUSIONS\nRoutine ultrasound in early pregnancy appears to enable better gestational age assessment, earlier detection of multiple pregnancies and earlier detection of clinically unsuspected fetal malformation at a time when termination of pregnancy is possible. However the benefits for other substantive outcomes are less clear."}
{"_id": "edbfb838cdf488990025a2aaba427c228301f1c8", "title": "A Mobile-Cloud Collaborative Traffic Lights Detector for Blind Navigation", "text": "Context-awareness is a critical aspect of safe navigation especially for the blind and visually impaired in unfamiliar environments. Existing mobile devices for context-aware navigation fall short in many cases due to their dependence on specific infrastructure requirements as well as having limited access to resources that could provide a wealth of contextual clues. In this paper, we propose a mobile-cloud collaborative approach for context-aware navigation by exploiting the computational power of resources made available by Cloud Computing providers as well as the wealth of location-specific resources available on the Internet. We propose an extensible system architecture that minimizes reliance on infrastructure, thus allowing for wide usability. We present a traffic light detector that we developed as an initial application component of the proposed system. We present preliminary results of experiments performed to test the appropriateness for the real-time nature of the application."}
{"_id": "79357470d76ae7aeb4f6e39efd7c3936d615caa4", "title": "An inductorless linear optical receiver for 20Gbaud/s (40Gb/s) PAM-4 modulation using 28nm CMOS", "text": "This paper1 presents a linear optical receiver designed using a 28nm CMOS technology suitable for 20Gbaud/s (40Gb/s) PAM-4 modulation. The optical receiver consists of a transimpedance amplifier (gain adjustable from 40dB\u03a9 to 56dBO) followed by a variable gain amplifier (gain adjustable from 6dB to 17dB). Capacitive peaking is used to achieve a bandwidth of ~10GHz, thus avoiding the use of on-chip inductors which require large die area. A robust automatic gain control loop is used to ensure a constant differential output voltage swing of ~100mV for an input dynamic range of 20\u03bcA to 500\u03bcA (peak current). Over this same range, high linearity (total harmonic distortion less than 5%, 250MHz sinewave, 10harmonics taken into account) is obtained. The rms input referred noise current (integrated from 10MHz to 20GHz) was 2.5\u03bcArms. The linear optical receiver consumes 56mW from a 1.5V supply voltage."}
{"_id": "03b2e534532e9558e560df0bed74976b8f48c1a5", "title": "Conservation cores: reducing the energy of mature computations", "text": "Growing transistor counts, limited power budgets, and the breakdown of voltage scaling are currently conspiring to create a utilization wall that limits the fraction of a chip that can run at full speed at one time. In this regime, specialized, energy-efficient processors can increase parallelism by reducing the per-computation power requirements and allowing more computations to execute under the same power budget. To pursue this goal, this paper introduces conservation cores. Conservation cores, or c-cores, are specialized processors that focus on reducing energy and energy-delay instead of increasing performance. This focus on energy makes c-cores an excellent match for many applications that would be poor candidates for hardware acceleration (e.g., irregular integer codes). We present a toolchain for automatically synthesizing c-cores from application source code and demonstrate that they can significantly reduce energy and energy-delay for a wide range of applications. The c-cores support patching, a form of targeted reconfigurability, that allows them to adapt to new versions of the software they target. Our results show that conservation cores can reduce energy consumption by up to 16.0x for functions and by up to 2.1x for whole applications, while patching can extend the useful lifetime of individual c-cores to match that of conventional processors."}
{"_id": "81769902b87f330dec177ebb3a40a8ae27148a6b", "title": "Borg: An Auto-Adaptive Many-Objective Evolutionary Computing Framework", "text": "This study introduces the Borg multi-objective evolutionary algorithm (MOEA) for many-objective, multimodal optimization. The Borg MOEA combines -dominance, a measure of convergence speed named -progress, randomized restarts, and auto-adaptive multioperator recombination into a unified optimization framework. A comparative study on 33 instances of 18 test problems from the DTLZ, WFG, and CEC 2009 test suites demonstrates Borg meets or exceeds six state of the art MOEAs on the majority of the tested problems. The performance for each test problem is evaluated using a 1,000 point Latin hypercube sampling of each algorithm's feasible parameteri- zation space. The statistical performance of every sampled MOEA parameterization is evaluated using 50 replicate random seed trials. The Borg MOEA is not a single algorithm; instead it represents a class of algorithms whose operators are adaptively selected based on the problem. The adaptive discovery of key operators is of particular importance for benchmarking how variation operators enhance search for complex many-objective problems."}
{"_id": "6060d036639f91cf4801acb325b23d23065b68d5", "title": "HS2: Active Learning over Hypergraphs", "text": "We propose a hypergraph-based active learning scheme which we term HS; HS generalizes the previously reported algorithm S originally proposed for graph-based active learning with pointwise queries [1]. Our HS method can accommodate hypergraph structures and allows one to ask both pointwise queries and pairwise queries. Based on a novel parametric system particularly designed for hypergraphs, we derive theoretical results on the query complexity of HS for the above described generalized settings. Both the theoretical and empirical results show that HS requires a significantly fewer number of queries than S when one uses S over a graph obtained from the corresponding hypergraph via clique expansion."}
{"_id": "911dfacc219b2873180a7b76c7a256ee91fb51cf", "title": "Autonomous Farming and Surveillance Agribot in Adjacent Boundary", "text": "This research strives to implement the agricultural farming robot which can move instinctively, involuntarily for ploughing in different depths depending upon soil, seeding and irrigation in adjacent field. This research also contributes to enhance functionality of agribot in the field for surveillance. The agribot is equipped with a camera and sends the data to pc through Wi-Fi network. The robot has servo motors for ploughing, distributing the seeds and irrigation into the field. It works with ultrasonic sensor and IR sensor. Ultrasonic sensor is for avoiding the obstacles in the field, IR sensor is for sensing the path and field of adjacent boundary. Arduino Uno controller acts as heart and brain of the system, it makes fast, accurate, autonomous movement. This Research features a agribot that uses Wi-Fi 802.11G along with TCP/IP protocol. IP address and Video is received in laptop for further processing. This research intends to reduce human efforts, increase profit and provide an intelligent aid to the farmers."}
{"_id": "5420f5910b6a32c0ebed26b1f3b61ad6f7d75277", "title": "A fast iterative nearest point algorithm for support vector machine classifier design", "text": "In this paper we give a new fast iterative algorithm for support vector machine (SVM) classifier design. The basic problem treated is one that does not allow classification violations. The problem is converted to a problem of computing the nearest point between two convex polytopes. The suitability of two classical nearest point algorithms, due to Gilbert, and Mitchell et al., is studied. Ideas from both these algorithms are combined and modified to derive our fast algorithm. For problems which require classification violations to be allowed, the violations are quadratically penalized and an idea due to Cortes and Vapnik and Friess is used to convert it to a problem in which there are no classification violations. Comparative computational evaluation of our algorithm against powerful SVM methods such as Platt's sequential minimal optimization shows that our algorithm is very competitive."}
{"_id": "e14fc267137038a1877d4b48e7d17bf9b59087a8", "title": "An Annotated Corpus and Method for Analysis of Ad-Hoc Structures Embedded in Text", "text": "We describe a method for identifying and performing functional analysis of structured regions that are embedded in natural language documents, such as tables or key-value lists. Such regions often encode information according to ad hoc schemas and avail themselves of visual cues in place of natural language grammar, presenting problems for standard information extraction algorithms. Unlike previous work in table extraction, which assumes a relatively noiseless two-dimensional layout, our aim is to accommodate a wide variety of naturally occurring structure types. Our approach has three main parts. First, we collect and annotate a a diverse sample of \u201cnaturally\u201d occurring structures from several sources. Second, we use probabilistic text segmentation techniques, featurized by skip bigrams over spatial and token category cues, to automatically identify contiguous regions of structured text that share a common schema. Finally, we identify the records and fields within each structured region using a combination of distributional similarity and sequence alignment methods, guided by minimal supervision in the form of a single annotated record. We evaluate the last two components individually, and conclude with a discussion of further work."}
{"_id": "4f8cbe32bbbc552c733500e05881b2a12945b624", "title": "Overview of Permanent-Magnet Brushless Drives for Electric and Hybrid Electric Vehicles", "text": "With ever-increasing concerns on our environment, there is a fast growing interest in electric vehicles (EVs) and hybrid EVs (HEVs) from automakers, governments, and customers. As electric drives are the core of both EVs and HEVs, it is a pressing need for researchers to develop advanced electric-drive systems. In this paper, an overview of permanent-magnet (PM) brushless (BL) drives for EVs and HEVs is presented, with emphasis on machine topologies, drive operations, and control strategies. Then, three major research directions of the PM BL drive systems are elaborated, namely, the magnetic-geared outer-rotor PM BL drive system, the PM BL integrated starter-generator system, and the PM BL electric variable-transmission system."}
{"_id": "b0e98cb07dbcbfe69313f356c3e614c187b67fd8", "title": "Dual-Rotor Multiphase Permanent Magnet Machine With Harmonic Injection to Enhance Torque Density", "text": "Torque density of a novel dual-rotor, radial-flux multiphase permanent magnet (PM) machine is enhanced by injecting stator current harmonics without increasing stator yoke thickness. In order to enhance a regular PM machine torque capability by injecting harmonic currents, the no-load air-gap flux density should be designed to be trapezoidal, which increases the flux per pole, and results in a thicker stator yoke. However, this drawback can be avoided in a dual-rotor, radial-flux PM machine. In such a dual-rotor PM machine, the main flux can be designed to go directly from the inner rotor poles, to stator inner and outer teeth, then to the outer rotor poles, or reverse, without through the stator yoke. Therefore, the stator yoke thickness is not affected by air-gap flux at all and can actually be reduced to minimum for mechanical reasons. This paper applies stator current harmonic injection technique to a dual-rotor, radial-flux PM machine. Sizing equations of the multiphase PM machines are deduced and used for initial design and theoretical comparison. The torque performance of the proposed dual-rotor, radial-flux multiphase PM machine with stator harmonic current injection is evaluated using finite element simulation, and compared to a traditional three-phase dual-rotor, radial-flux PM machine. Simulation results validate the feasibility of torque density enhancement. The same technique is proven that it is also suitable in a dual-rotor axial-flux PM machine topology."}
{"_id": "596bde95450508e1c72610386f3bf63f71ead231", "title": "A general approach to sizing and power density equations for comparison of electrical machines", "text": "Whenever an electrical machine is meant to be fed by a power converter, the design should be approached as a system optimization, more than a simple machine sizing. A great variety of electrical machines is available to accomplish this goal, and the task of comparing the different options can be very difficult. A general purpose sizing equation, easily adjustable for every topology, that could take into account different waveforms and machine characteristics, would be a very desirable tool. In this paper, a general approach is presented to develop and to discuss such an equation. Sample applications of the sizing and power density equations are utilized to compare the induction machine and the doubly salient permanent magnet (DSPM) machine."}
{"_id": "73f8bbd3ac543a3a9aa787c9a0bdfcb3b61a31a1", "title": "Multiphase Electric Machines for Variable-Speed Applications", "text": "Although the concept of variable-speed drives, based on utilization of multiphase machines, dates back to the late 1960s, it was not until the mid- to late 1990s that multiphase drives became serious contenders for various applications. These include electric ship propulsion, locomotive traction, electric and hybrid electric vehicles, ldquomore-electricrdquo aircraft, and high-power industrial applications. As a consequence, there has been a substantial increase in the interest for such drive systems worldwide, resulting in a huge volume of work published during the last ten years. An attempt is made in this paper to provide a brief review of the current state of the art in the area. After addressing the reasons for potential use of multiphase rather than three-phase drives and the available approaches to multiphase machine designs, various control schemes are surveyed. This is followed by a discussion of the multiphase voltage source inverter control. Various possibilities for the use of additional degrees of freedom that exist in multiphase machines are further elaborated. Finally, multiphase machine applications in electric energy generation are addressed."}
{"_id": "9343a6990ca4fdce335c20d661a3ba47971632bc", "title": "Hybrid electric vehicle propulsion system architectures of the e-CVT type", "text": "There is now significant interest in hybrid electric vehicle (HEV) propulsion systems globally. Economics play a major role as evidenced by oil prices in North America pressing upwards of $100/Bbl coupled with a customer preference for full size crossover and sport utility vehicles. The situation in Oceania is milder, but emerging markets such as China are experiencing automotive sector growth rates of 37%/year. Europe remains least affected by hybrids since nearly 47% of all new vehicles sold are diesel fueled and have economy ratings on par with that of gasoline-electric hybrids. In the global economy there are presently some 57 Mil new vehicles manufactured each year. Toyota and Honda have projected that HEVs will be 10 % to 15 % of the U.S. market by 2009, with Toyota raising the bar further by stating they will produce 1 Mil hybrids a year in the 2012 time frame. Hybrid propulsion system types are only vaguely comprehended by the buying public, and to a large measure, even by technical professionals. This paper addresses this latter issue by presenting a summary of the globally accepted standard in hybrid power trains-the power split architecture, or more generically and in common usage, the electronic-continuously variable transmission"}
{"_id": "7b08d289351631a306b706fc268bbae4b036a46d", "title": "Persuasive messages on information system acceptance: A theoretical extension of elaboration likelihood model and social influence theory", "text": "Firms invest millions of dollars in the introduction of new information systems for long-term benefit. If employees are not willing to accept a new information system, such investments may be wasted. Employee acceptance of a new information system is in part determined by external influences. However, previous research has neglected the paths of persuasive strategies and external social influences on information system acceptance. Linkages between persuasive strategies and external social influences are also scarce. By integrating social influence theory and an elaboration likelihood model, this study explores the influence of persuasive messages (source credibility and argument quality) on social influence, affective response and cognitive response. This study also investigates the interrelationships among affective response, cognitive response and behavior intention. Furthermore, the moderating roles of social influences on the impact of affective response and cognitive response on behavior intention are identified. 2012 Elsevier Ltd. All rights reserved."}
{"_id": "4440af6f81e78f04c6e923b076d276dfb977ac37", "title": "Differentiating anxiety forms and their role in academic performance from primary to secondary school", "text": "INTRODUCTION\nIndividuals with high levels of mathematics anxiety are more likely to have other forms of anxiety, such as general anxiety and test anxiety, and tend to have some math performance decrement compared to those with low math anxiety. However, it is unclear how the anxiety forms cluster in individuals, or how the presence of other anxiety forms influences the relationship between math anxiety and math performance.\n\n\nMETHOD\nWe measured math anxiety, test anxiety, general anxiety and mathematics and reading performance in 1720 UK students (year 4, aged 8-9, and years 7 and 8, aged 11-13). We conducted latent profile analysis of students' anxiety scores in order to examine the developmental change in anxiety profiles, the demographics of each anxiety profile and the relationship between profiles and academic performance.\n\n\nRESULTS\nAnxiety profiles appeared to change in specificity between the two age groups studied. Only in the older students did clusters emerge with specifically elevated general anxiety or academic anxiety (test and math anxiety). Our findings suggest that boys are slightly more likely than girls to have elevated academic anxieties relative to their general anxiety. Year 7/8 students with specifically academic anxiety show lower academic performance than those who also have elevated general anxiety.\n\n\nCONCLUSIONS\nThere may be a developmental change in the specificity of anxiety and gender seems to play a strong role in determining one's anxiety profile. The anxiety profiles present in our year 7/8 sample, and their relationships with math performance, suggest a bidirectional relationship between math anxiety and math performance."}
{"_id": "93b778473691148635770efd43e87b5bddc313c5", "title": "HOW TO CALCULATE INFORMATION VALUE FOR EFFECTIVE SECURITY RISK ASSESSMENT", "text": "The actual problem of information security (infosec) risk assessment is determining the value of information property or asset. This is particularly manifested through the use of quantitative methodology in which it is necessary to state the information value in quantitative sizes. The aim of this paper is to describe the evaluation possibilities of business information values, and the criteria needed for determining importance of information. For this purpose, the dimensions of information values will be determined and the ways used to present the importance of information contents will be studied. There are two basic approaches that can be used in evaluation: qualitative and quantitative. Often they are combined to determine forms of information content. The proposed criterion is the three-dimension model, which combines the existing experiences (i.e. possible solutions for information value assessment) with our own criteria. An attempt for structuring information value in a business environment will be made as well."}
{"_id": "6b59f34c8b1c6f44996eb305577478dd54f54249", "title": "Traffic speed prediction using deep learning method", "text": "Successful traffic speed prediction is of great importance for the benefits of both road users and traffic management agencies. To solve the problem, traffic scientists have developed a number of time-series speed prediction approaches, including traditional statistical models and machine learning techniques. However, existing methods are still unsatisfying due to the difficulty to reflect the stochastic traffic flow characteristics. Recently, various deep learning models have been introduced to the prediction field. In this paper, a deep learning method, the Deep Belief Network (DBN) model, is proposed for short-term traffic speed information prediction. The DBN model is trained in a greedy unsupervised method and fine-tuned by labeled data. Based on traffic speed data collected from one arterial in Beijing, China, the model is trained and tested for different prediction time horizons. From experiment analysis, it is concluded that the DBN can outperform Back Propagation Neural Network (BPNN) and Auto-Regressive Integrated Moving Average (ARIMA) for all time horizons. The advantages of DBN indicate that deep learning is promising in traffic research area."}
{"_id": "23892129e9fdee2019aab8e67ff53a9d0157d651", "title": "Classification of benign and malignant tumors in histopathology images", "text": "Breast cancer leads the list of cancer that act on women worldwide. It starts when cells in the breast begin to build up beyond control. These cells normally create a tumour that can usually be seen on an x-ray or felt as a lump. Analysing and grading the tumour will take up much of a pathologist time. Pathologists have been largely diagnosing disease the same way for the past years, by manually reviewing images under a microscope. Thus, to help the pathologists improve accuracy and significantly change the way breast cancer been diagnosed, this paper presents an automated classification program. BreakHis dataset was used which build of 7909 breast tumor images gathered from 82 patients. This system is developed in order to categorize the cancer cells into two classes of cancer which are benign and malignant. The classification system compared different types of feature extractors using k-nearest neighbours classifier to efficiently observe the performance of the classification system. An extensive set of experiments showed that the overall accuracy rates range from 83% to 86%."}
{"_id": "88011c816206be2d634c545c2f8b8f2779180bcd", "title": "Footwear bio-modelling: An industrial approach", "text": "There is a growing need within the footwear sector to customise the design of the last from which a specific footwear style is to be produced. This customisation is necessary for user comfort and health reasons, as the user needs to wear a suitable shoe. For this purpose, a relationship must be established between the user foot and the last with which the style will be made; up until now, no model has existed that integrates both elements. On the one hand, traditional customised footwear manufacturing techniques are based on purely artisanal procedures which make the process arduous and complex; on the other hand, geometric models proposed by different authors present the impossibility of implementing them in an industrial environment with limited resources for the acquisition of morphometric and structural data for the foot, apart from the fact that they do not prove to be sufficiently accurate given the non similarity of the foot and last. In this paper, two interrelated geometric models are defined, the first, a bio-deformable foot model and the second, a deformable last model. The experiments completed show the goodness of the model, with it obtaining satisfactory results in terms of comfort, efficiency and precision, which make it viable for use in the sector."}
{"_id": "87d40222dc89d7d34cc0340d5d78c5115b7d6a77", "title": "Humpectomy and spreader flaps.", "text": "In a primary rhinoplasty that requires a humpectomy, the dorsal aspect of the upper lateral cartilages is commonly discarded. Many of these patients need spreader grafts to reconstruct the middle third of the nose. However, it is possible to reconstruct the upper lateral cartilages into \"spreader flaps\" that act much like spreader grafts. In the process of making spreader flaps, an incremental humpectomy is performed on the dorsal septum and bony hump. This humpectomy procedure is more accurate than the conventional humpectomy that involves resection of the bone, and septum as a single unit. The open and closed approaches of this technique are discussed in this article."}
{"_id": "d6bbedf5aa9cf53e3f023ac0028d0c25d05cc61a", "title": "Simulating Human Game Play for Level Difficulty Estimation with Convolutional Neural Networks", "text": "This thesis presents an approach to predict the difficulty of levels in a game by simulating game play following a policy learned from human game play. Using state-action pairs tracked from players of the game Candy Crush Saga, we train a Convolutional Neural Network to predict an action given a game state. The trained model then acts as a policy. Our goal is to predict the success rate (SR) of players, from the SR obtained by simulating game play. Previous state-ofthe-art was using Monte Carlo tree search (MCTS) or handcrafted heuristics for game play simulation. We benchmark our suggested approach against one using MCTS. The hypothesis is that, using our suggested approach, predicting the players\u2019 SR from the SR obtained through the simulation, leads to better estimations of the players\u2019 SR. Our results show that we could not only significantly improve the predictions of the players\u2019 SR, but also decrease the time for game play simulation by at least 50 times. Referat Simulering av m\u00e4nskligt spelande f\u00f6r bed\u00f6mning av spelbanors sv\u00e5righetsgrad med Convolutional Neural Networks. Den h\u00e4r avhandlingen presenterar ett tillv\u00e4gag\u00e5ngss\u00e4tt f\u00f6r att f\u00f6rutse sv\u00e5righetsgrad av spelbanor genom spelsimulering enligt en strategi l\u00e4rd fr\u00e5n m\u00e4nskligt spelande. Med anv\u00e4ndning av tillst\u00e5nd-handlings par insamlade fr\u00e5n spelare av spelet Candy Crush Saga, tr\u00e4nar vi ett Convolutional Neural Network att f\u00f6rutse en handling fr\u00e5n ett givet tillst\u00e5nd. Den tr\u00e4nade modellen agerar sedan som strategi. V\u00e5rt m\u00e5l \u00e4r att f\u00f6rutse success rate (SR) av spelare, fr\u00e5n SR erh\u00e5llen fr\u00e5n spelsimulering. Tidigare state-of-the-art anv\u00e4nde Monte Carlo tree search (MCTS) eller handgjorda heuristiker f\u00f6r spelsimulering. Vi j\u00e4mf\u00f6r v\u00e5rt tillv\u00e4gag\u00e5ngss\u00e4tt med MCTS. Hypotesen \u00e4r att v\u00e5rt f\u00f6reslagna tillv\u00e4gag\u00e5ngss\u00e4tt leder till b\u00e4ttre f\u00f6ruts\u00e4gelser av m\u00e4nsklig SR fr\u00e5n SR erh\u00e5llen fr\u00e5n spelsimulering. V\u00e5ra resultat visar att vi inte bara signifikant kunde f\u00f6rb\u00e4ttra f\u00f6ruts\u00e4gelserna av m\u00e4nsklig SR utan ocks\u00e5 kunde minska tids\u00e5tg\u00e5ngen f\u00f6r spelsimulering med \u00e5tminstone en faktor 50."}
{"_id": "c39889a5bcaa467522a3e5f0738567dbbbd547b9", "title": "On-line consistent ranking on e-recruitment: seeking the truth behind a well-formed CV", "text": "In this work we present a novel approach for evaluating job applicants in online recruitment systems, using machine learning algorithms to solve the candidate ranking problem and performing semantic matching techniques. An application of our approach is implemented in the form of a prototype system, whose functionality is showcased and evaluated in a real-world recruitment scenario. The proposed system extracts a set of objective criteria from the applicants\u2019 LinkedIn profile, and compares them semantically to the job\u2019s prerequisites. It also infers their personality characteristics using linguistic analysis on their blog posts. Our system was found to perform consistently compared to human recruiters, thus it can be trusted for the automation of applicant ranking and personality mining."}
{"_id": "5a4b81f536f793adbb92437dd53f07646bc43618", "title": "Linear Time Algorithm for Isomorphism of Planar Graphs (Preliminary Report)", "text": "The isomorphism problem for graphs G1 and G2 is to determine if there exists a one-to-one mapping of the vertices of G1 onto the vertices of G2 such that two vertices of G1 are adjacent if and only if their images in G2 are adjacent. In addition to determining the existence of such an isomorphism, it is useful to be able to produce an isomorphism-inducing mapping in the case where one exists.\n The isomorphism problem for triconnected planar graphs is particularly simple since a triconnected planar graph has a unique embedding on a sphere [6]. Weinberg [5] exploited this fact in developing an algorithm for testing isomorphism of triconnected planar graphs in O(|V|2) time where V is the set consisting of the vertices of both graphs. The result has been extended to arbitrary planar graphs and improved to O(|V|log|V|) steps by Hopcroft and Tarjan [2,3]. In this paper, the time bound for planar graph isomorphism is improved to O(|V|). In addition to determining the isomorphism of two planar graphs, the algorithm can be easily extended to partition a set of planar graphs into equivalence classes of isomorphic graphs in time linear in the total number of vertices in all graphs in the set. A random access model of computation (see Cook [1]) is assumed. Although the proposed algorithm has a linear asymptotic growth rate, at the present stage of development it appears to be inefficient on account of a rather large constant. This paper is intended only to establish the existence of a linear algorithm which subsequent work might make truly efficient."}
{"_id": "094c105ffdd4e4e51017edf2a0aba47c2afef05e", "title": "Design and PID control of two wheeled autonomous balance robot", "text": "In this study, a two-wheeled autonomous balance robot has been designed and implemented practically. A visual computer interface based on Qt-Creator has been created. Thanks to the computer interface, different control algorithms can be performed on the robot easily, control parameters can be set up online, filter algorithms in various structures can be tried and the reaction of these changeable values to the system can be observed. The effects of some controllers such as Proportional (P), Proportional-Integral (PI), Proportional-Integral-Derivative (PID) on developed robot have been viewed successfully. Kalman Filters have been used for a stable control of the system and it has seen that the system can balance itself for a long time with optimum PID control parameters obtained."}
{"_id": "8aeb313c67497c03675e475afd7b28f14fcbf260", "title": "Classifying racist texts using a support vector machine", "text": "In this poster we present an overview of the techniques we used to develop and evaluate a text categorisation system to automatically classify racist texts. Detecting racism is difficult because the presence of indicator words is insufficient to indicate racist texts, unlike some other text classification tasks. Support Vector Machines (SVM) are used to automatically categorise web pages based on whether or not they are racist. Different interpretations of what constitutes a term are taken, and in this poster we look at three representations of a web page within an SVM -- bag-of-words, bigrams and part-of-speech tags."}
{"_id": "afc8767af2defba519a3ffa9deb4f0cc9a086431", "title": "Randomized Block Coordinate Descent for Online and Stochastic Optimization", "text": "Two types of low cost-per-iteration gradient descent metho ds have been extensively studied in parallel. One is online or stochastic gradient descent ( OGD/SG ), and the other is randomzied coordinate descent (RBCD). In this paper, we combine the two types o f methods together and propose online randomized block coordinate descent (ORBCD). At each itera t on, ORBCD only computes the partial gradient of one block coordinate of one mini-batch samples. ORBCD is well suited for the composite minimization problem where one function is the average of th e losses of a large number of samples and the other is a simple regularizer defined on high dimensional variables. We show that the iteration complexity of ORBCD has the same order as OGD or SGD. For strongly convex functions, by reducing the variance of stochastic gradients, we show that ORBCD can con verge at a geometric rate in expectation, matching the convergence rate of SGD with variance reductio n and RBCD."}
{"_id": "7b34277166c448c61ff4343a580b2eb7b7003c7b", "title": "Vehicle Detection Based on Multi-feature Clues and Dempster-Shafer Fusion Theory", "text": "On-road vehicle detection and rear-end crash prevention are demanding subjects in both academia and automotive industry. The paper focuses on monocular vision-based vehicle detection under challenging lighting conditions, being still an open topic in the area of driver assistance systems. The paper proposes an effective vehicle detection method based on multiple features analysis and Dempster-Shafer-based fusion theory. We also utilize a new idea of Adaptive Global Haar-like (AGHaar) features as a promising method for feature classification and vehicle detection in both daylight and night conditions. Validation tests and experimental results show superior detection results for day, night, rainy, and challenging conditions compared to state-of-the-art solutions."}
{"_id": "74026b2078ca2da7b063d004d1f566eec6d2e3e5", "title": "Anatomy and physiology of a color system in the primate visual cortex.", "text": "Staining for the mitochondrial enzyme cytochrome oxidase reveals an array of dense regions (blobs) in the primate primary visual cortex. They are most obvious in the upper layers, 2 and 3, but can also be seen in layers 4B, 5, and 6, in register with the blobs in layers 2 and 3. We compared cells inside and outside blobs in macaque and squirrel monkeys, looking at their physiological responses and anatomical connections. Cells within blobs did not show orientation selectivity, whereas cells between blobs were highly orientation selective. Receptive fields of blob cells had circular symmetry and were of three main types, Broad-Band Center-Surround, Red-Green Double-Opponent, and Yellow-Blue Double-Opponent. Double-Opponent cells responded poorly or not at all to white light in any form, or to diffuse light at any wavelength. In contrast to blob cells, none of the cells recorded in layer 4C beta were Double-Opponent: like the majority of cells in the parvocellular geniculate layers, they were either Broad-Band or Color-Opponent Center-Surround, e.g., red-on-center green-off-surround. To our surprise cells in layer 4C alpha were orientation selective. In tangential penetrations throughout layers 2 and 3, optium orientation, when plotted against electrode position, formed long, regular, usually linear sequences, which were interrupted but not perturbed by the blobs. Staining area 18 for cytochrome oxidase reveals a series of alternating wide and narrow dense stripes, separated by paler interstripes. After small injections of horseradish peroxidase into area 18, we saw a precise set of connections from the blobs in area 17 to thin stripes in area 18, and from the interblob regions in area 17 to interstripes in area 18. Specific reciprocal connections also ran from thin stripes to blobs and from interstripes to interblobs. We have not yet determined the area 17 connections to thick stripes in area 18. In addition, within area 18 there are stripe-to-stripe and interstripe-to-interstripe intrinsic connections. These results suggest that a system involved in the processing of color information, especially color-spatial interactions, runs parallel to and separate from the orientation-specific system. Color, encoded in three coordinates by the major blob cell types, red-green, yellow-blue, and black-white, can be transformed into the three coordinates, red, green, and blue, of the Retinex algorithm of Land."}
{"_id": "cbb9c6b2d3b854fcf60b71a234ca8060b5cc8c25", "title": "Image reconstruction with locally adaptive sparsity and nonlocal robust regularization", "text": "Sparse representation based modeling has been successfully used in many image-related inverse problems such as deblurring, super-resolution and compressive sensing. The heart of sparse representations lies on how to find a space (spanned by a dictionary of atoms) where the local image patch exhibits high sparsity and how to determine the image local sparsity. To identify the locally varying sparsity, it is necessary to locally adapt the dictionary learning process and the sparsity-regularization parameters. However, spatial adaptation alone runs into the risk of over-fitting the data because variation and invariance are two sides of the same coin. In this work, we propose two sets of complementary ideas for regularizing image reconstruction process: (1) the sparsity regularization parameters are locally estimated for each coefficient and updated along with adaptive learning of PCA-based dictionaries; (2) a nonlocal self-similarity constraint is introduced into the overall cost functional to improve the robustness of the model. An efficient alternative minimization algorithm is present to solve the proposed objective function and then an effective image reconstruction algorithm is presented. The experimental results on image deblurring, super-resolution and compressive sensing demonstrate that the proposed image reconstruct method outperforms many existing image reconstruction methods in both PSNR and visual quality assessment. & 2012 Elsevier B.V. All rights reserved."}
{"_id": "57178ce7f2c2f74a6b29d06e3f2f3f7b459024d1", "title": "A model for high school computer science education: the four key elements that make it!", "text": "This paper presents a model program for high school computer science education. It is based on an analysis of the structure of the Israeli high school computer science curriculum considered to be one of the leading curricula worldwide. The model consists of four key elements as well as interconnections between these elements. It is proposed that such a model be considered and/or adapted when a country wishes to implement a nation-wide program for high school computer science education."}
{"_id": "3816b430309f0c3aa809c40ca02bb76c18914887", "title": "Electromagnetic Two-Dimensional Scanner Using Radial Magnetic Field", "text": "In this paper, we present the design, fabrication, and measurement results of a two-dimensional electromagnetic scanning micromirror actuated by radial magnetic field. The scanner is realized by combining a gimbaled single-crystal-silicon micromirror with a single turn electroplated metal coil, with a concentric permanent magnet assembly composed of two concentric permanent magnets and an iron yoke. The proposed scanner utilizes the radial magnetic field rather than using a lateral magnetic field oriented 45deg to the horizontal and vertical scan axes to achieve a biaxial magnetic actuation. The single turn coil fabricated with electroplated copper achieves a nominal resistance of 1.2 Omega. A two-dimensional scanner with mirror size of 1.5 mm in diameter was fabricated. Maximum optical scan angle of 8.8deg in horizontal direction and 8.3deg in vertical direction were achieved. Forced actuation of the gimbal at 60 Hz and resonant actuation of the micromirror at 19.1-19.7 kHz provide slow vertical scan and fast horizontal scan, respectively. The proposed scanner can be used in raster scanning laser display systems and other scanner applications."}
{"_id": "00fc19647ca1aa9a3910abe7cf3414af2f811e64", "title": "Survey of multi-objective optimization methods for engineering", "text": "Asurveyofcurrentcontinuousnonlinearmultiobjective optimization (MOO) concepts and methods is presented. It consolidates and relates seemingly different terminology and methods. The methods are divided into three major categories: methods with a priori articulation of preferences, methods with a posteriori articulation of preferences, and methods with no articulation of preferences. Genetic algorithms are surveyed as well. Commentary is provided on three fronts, concerning the advantages and pitfalls of individual methods, the different classes of methods, and the field of MOO as a whole. The Characteristics of the most significant methods are summarized. Conclusions are drawn that reflect often-neglected ideas and applicability to engineering problems. It is found that no single approach is superior. Rather, the selection of a specific method depends on the type of information that is provided in the problem, the user\u2019s preferences, the solution requirements, and the availability of software."}
{"_id": "fd02adbbbca61852e0111919c2ddc10290df2498", "title": "A data mining approach for diagnosis of coronary artery disease", "text": "Cardiovascular diseases are very common and are one of the main reasons of death. Being among the major types of these diseases, correct and in-time diagnosis of coronary artery disease (CAD) is very important. Angiography is the most accurate CAD diagnosis method; however, it has many side effects and is costly. Existing studies have used several features in collecting data from patients, while applying different data mining algorithms to achieve methods with high accuracy and less side effects and costs. In this paper, a dataset called Z-Alizadeh Sani with 303 patients and 54 features, is introduced which utilizes several effective features. Also, a feature creation method is proposed to enrich the dataset. Then Information Gain and confidence were used to determine the effectiveness of features on CAD. Typical Chest Pain, Region RWMA2, and age were the most effective ones besides the created features by means of Information Gain. Moreover Q Wave and ST Elevation had the highest confidence. Using data mining methods and the feature creation algorithm, 94.08% accuracy is achieved, which is higher than the known approaches in the literature."}
{"_id": "99077dec54d0e0c42523e885533947c0c27ba78e", "title": "Developing a bioinspired pole climbing robot", "text": "This paper proposes a pole climbing robot that has an ability to climb pipes. The novelty of this design is that it uses no motors to climb on the pole. Till now many robots have been fabricated with the ability to climb pipes but most of them solely depend on DC motors. The use of DC motor induces the risk of loosening the grip of the robot in case of power failure that may lead to disastrous situations if robot is working on high altitudes. This could be easily avoidable by the use of electro-pneumatics and by using self-locking circuits. We used two pairs of pneumatic cylinders as linear actuators for gripping the pole i.e. one pair for one claw. Besides this two more cylinders are used for climbing purpose. Unlike other mechanisms, installation is extremely easy. It is like controlling a radio controlled car. This mechanism can be easily modified for different payload capacities by simply using the appropriate diameter of the pneumatic cylinders. The robot can be controlled in tele-operated mode as well as autonomous mode which is a closed loop system. It has a high accuracy with 4mm error per cycle that has a cycle time period of 5 seconds. This robot can be very useful in oil industries for inspection of pipes and power poles in electric industries. It can serve of great help in cases of emergency situation caused during fire accidents."}
{"_id": "1675b5fc56242a4b1b37e1dc82db23877f32b5e5", "title": "A framework of changing image emotion using emotion prediction", "text": "Most works about affective image classification in computer vision treat each emotion category independently and predict hard labels, ignoring the correlation between emotion categories. In this work, inspired by psychological theories, we adopt a dimensional emotion model to model the correlation among certain emotion categories. We also propose a framework of changing image emotion by using our emotion predictor. Easily extendable to other feature transformations, our framework changes image emotion by color histogram specification, relaxing the limitation of the previous method that each emotion is associated with a monotonic palette. Effective and comparable to the previous work of changing image emotion shown by user study, our proposed framework provides users with more flexible control in changing image emotion compared with the previous work."}
{"_id": "b4a935c37414d105ae3cb7960a75f6b153f10078", "title": "Problematizing cultural appropriation", "text": "Cultural appropriation in games entails the taking of knowledge, artifacts or expression from a culture and recontextualizing it within game structures. While cultural appropriation is a pervasive practice in games, little attention has been given to the ethical issues that emerge from such practices with regards to how culture is portrayed. This paper problematizes cultural appropriation in the context of a serious game for children inspired by D\u00eda de los Muertos, a Mexican festival focused on remembrance of the dead. Taking a research through design approach, we demonstrate that recontextualised cultural elements can retain their basic, original meaning. However, we also find that cultural appropriation is inevitable and its ethical implications can be far reaching. In our context, ethical concerns arose as a result of children's beliefs that death affects prominent others and their destructive ways of coping with death. We argue that revealing emergent ethical concerns is imperative before deciding how and in what way to encourage culturally authentic narratives."}
{"_id": "8a2875016d1858b7e939a3b3f333a0430ec79acf", "title": "An Efficient Power Scheduling Scheme for Residential Load Management in Smart Homes", "text": "In this paper, we propose mathematical optimization models of household energy units to optimally control the major residential energy loads while preserving the user preferences. User comfort is modelled in a simple way, which considers appliance class, user preferences and weather conditions. The wind-driven optimization (WDO) algorithm with the objective function of comfort maximization along with minimum electricity cost is defined and implemented. On the other hand, for maximum electricity bill and peak reduction, min-max regret-based knapsack problem (K-WDO) algorithm is used. To validate the effectiveness of the proposed algorithms, extensive simulations are conducted for several scenarios. The simulations show that the proposed algorithms provide with the best optimal results with a fast convergence rate, as compared to the existing techniques. Appl. Sci. 2015, 5 1135"}
{"_id": "b7fe99ffb5373b98434db6e8aa33401de7898893", "title": "Demand Response as a Load Shaping Tool in an Intelligent Grid With Electric Vehicles", "text": "As electric vehicles (EVs) take a greater share in the personal automobile market, their penetration will cause overload conditions at the distribution transformer. This paper focuses on the impacts of charging EVs on residential distribution networks including the transformer. The cost to accommodate a large-scale EV penetration by upgrading distribution transformers can be prohibitive. To alleviate the potential new load peaks with minimal infrastructure investments, a demand response strategy is proposed as a load shaping tool that allows improvement in distribution transformer usage. With the proposed strategy, consumers' preferences, load priorities, and privacy can be taken into account."}
{"_id": "2c361ef5db3231d34656dd86d9b288397f0b929e", "title": "No free lunch theorems for optimization", "text": "A framework is developed to explore the connection between e ective optimization algorithms and the problems they are solving A number of no free lunch NFL theorems are presented that establish that for any algorithm any elevated performance over one class of problems is exactly paid for in performance over another class These theorems result in a geometric interpretation of what it means for an algorithm to be well suited to an optimization problem Applications of the NFL theorems to information theoretic aspects of optimization and benchmark measures of performance are also presented Other issues addressed are time varying optimization problems and a priori head to head minimax distinctions between optimization algorithms distinctions that can obtain despite the NFL theorems enforcing of a type of uniformity over all algorithms"}
{"_id": "d6647910bfdddeba029448221e33b508b25735a4", "title": "Coordinated Scheduling of Residential Distributed Energy Resources to Optimize Smart Home Energy Services", "text": "We describe algorithmic enhancements to a decision-support tool that residential consumers can utilize to optimize their acquisition of electrical energy services. The decision-support tool optimizes energy services provision by enabling end users to first assign values to desired energy services, and then scheduling their available distributed energy resources (DER) to maximize net benefits. We chose particle swarm optimization (PSO) to solve the corresponding optimization problem because of its straightforward implementation and demonstrated ability to generate near-optimal schedules within manageable computation times. We improve the basic formulation of cooperative PSO by introducing stochastic repulsion among the particles. The improved DER schedules are then used to investigate the potential consumer value added by coordinated DER scheduling. This is computed by comparing the end-user costs obtained with the enhanced algorithm simultaneously scheduling all DER, against the costs when each DER schedule is solved separately. This comparison enables the end users to determine whether their mix of energy service needs, available DER and electricity tariff arrangements might warrant solving the more complex coordinated scheduling problem, or instead, decomposing the problem into multiple simpler optimizations."}
{"_id": "40cacc4fc97132a2538c5355f7bcec632f6ae7f5", "title": "Computer vision based road traffic accident and anomaly detection in the context of Bangladesh", "text": "Road traffic accident is one of the major causes of death in Bangladesh. In this article, we propose a method that uses road side video data to learn the traffic pattern and uses vision based techniques to track and determine various kinetic features of vehicles. Finally, the proposed method detects anomaly (possibility of accident) and accidents on the road. The proposed method shows approximately 85% accuracy level in detecting special situations."}
{"_id": "725b2850821dfc87126d898b955a870d5c8d4824", "title": "Writing for synthesis of evidence in empirical software engineering", "text": "Context: Systematic literature reviews have become common in software engineering in the last decade, but challenges remain.\n Goal: Given the challenges, the objective is to describe improvement areas in writing primary studies, and hence provide a good basis for researchers aiming at synthesizing research evidence in a specific area.\n Method: The results presented are based on a literature review with respect to synthesis of research results in software engineering with a particular focus on empirical software engineering. The literature review is complemented and exemplified with experiences from conducting systematic literature reviews and working with research methodologies in empirical software engineering.\n Results: The paper presents three areas where improvements are needed to become more successful in synthesizing empirical evidence. These three areas are: terminology, paper content and reviewing.\n Conclusion: It is concluded that it must be possible to improve the primary studies, but it requires that researchers start having synthesis in mind when writing their research papers."}
{"_id": "f16915e4e8d0361b8e577b2123ba4a36a25032ba", "title": "On-line Trend Analysis with Topic Models: \\#twitter Trends Detection Topic Model Online", "text": "We present a novel topic modelling-based methodology to track emerging events in microblogs such as Twitter. Our topic model has an in-built update mechanism based on time slices and implements a dynamic vocabulary. We first show that the method is robust in detecting events using a range of datasets with injected novel events, and then demonstrate its application in identifying trending topics in Twitter."}
{"_id": "f16ab34312bbfa3d97b03e1d82d763eb77e288ad", "title": "A hybrid recommendation approach for a tourism system", "text": "Many current e-commerce systems provide personalization when their content is shown to users. In this sense, recommender systems make personalized suggestions and provide information of items available in the system. Nowadays, there is a vast amount of methods, including data mining techniques that can be employed for personalization in recommender systems. However, these methods are still quite vulnerable to some limitations and shortcomings related to recommender environment. In order to deal with some of them, in this work we implement a recommendation methodology in a recommender system for tourism, where classification based on association is applied. Classification based on association methods, also named associative classification methods, consist of an alternative data mining technique, which combines concepts from classification and association in order to allow association rules to be employed in a prediction context. The proposed methodology was evaluated in some case studies, where we could verify that it is able to shorten limitations presented in recommender systems and to enhance recommendation quality."}
{"_id": "2d93e7af2e38d9479114006704b836533026279f", "title": "Automatic White Balance for Digital Still Cameras", "text": "In recent years digital cameras have captured the camera market. Although the factors that consumer consider is the quality of the photographs produced. White balance is one of the methods used to improve the quality of the photographs. The basic white balance methods are, however powerless to handle all possible situations in the captured scene. In order to improve the white balance we divide this problem in three steps\u2013white object purification, white point detection, and white point adjustment. The proposed method basically solves the problem of detecting the white point very well. The experimental results show that the proposed method can achieve superior result in both objective and subjective evaluative terms. The complexity of the proposed system is acceptable. The propose method can be easily applied to digital cameras to obtain good results."}
{"_id": "f2d3e63053ae7cd1d072ea4a1f019a473b8ad73f", "title": "Human Visual Search as a Deep Reinforcement Learning Solution to a POMDP", "text": "When people search for a target in a novel image they often make use of eye movements to bring the relatively high acuity fovea to bear on areas of interest. The strategies that control these eye movements for visual search have been of substantial scientific interest. In the current article we report a new computational model that shows how strategies for visual search are an emergent consequence of perceptual/motor constraints and approximately optimal strategies. The model solves a Partially Observable Markov Decision Process (POMDP) using deep Q-learning to acquire strategies that optimise the tradeoff between speed and accuracy. Results are reported for the Distractor-ratio task."}
{"_id": "e52d364326d45927c0b06e07a99ba95dd3a66a4e", "title": "Learning with Taxonomies: Classifying Documents and Words", "text": "Automatically extracting semantic information about word meaning and document topic from text typically involves an extensive number of classes. Such classes may represent predefined word senses, topics or document categories and are often organized in a taxonomy. The latter encodes important information, which should be exploited in learning classifiers from labeled training data. To that extent, this paper presents an extension of multiclass Support Vector Machine learning which can incorporate prior knowledge about class relationships. The latter can be encoded in the form of class attributes, similarities between classes or even a kernel function defined over the set of classes. The paper also discusses how to specify and optimize meaningful loss functions based on the relative position of classes in the taxonomy. We include experimental results for text categorization and for word sense classification."}
{"_id": "97da965a6f147767608a8b4435398d9f9dbe6089", "title": "Learning scale-variant and scale-invariant features for deep image classification", "text": "Convolutional Neural Networks (CNNs) require large image corpora to be trained on classification tasks. The variation in image resolutions, sizes of objects and patterns depicted, and image scales, hampers CNN training and performance, because the task-relevant information varies over spatial scales. Previous work attempting to deal with such scale variations focused on encouraging scale-invariant CNN representations. However, scale-invariant representations are incomplete representations of images, because images contain scale-variant information as well. This paper addresses the combined development of scaleinvariant and scale-variant representations. We propose a multiscale CNN method to encourage the recognition of both types of features and evaluate it on a challenging image classification task involving task-relevant characteristics at multiple scales. The results show that our multi-scale CNN outperforms single-scale CNN. This leads to the conclusion that encouraging the combined development of a scale-invariant and scale-variant representation in CNNs is beneficial to image recognition performance."}
{"_id": "8426337a7ff9fb51f0b1221004593ac1c4f80088", "title": "Design of new intelligent street light control system", "text": "In China, the methods of time-control, optical-control and time-optical-control are in common used to control street lamp, particularly in small and medium-sized cities. But due to the backward lighting control and administrative method, the precision is bad, and the result of work is also poor. Through many kinds of sensor combination sense environment's change, the multi-sensor exhibition can combinatory logically control the new intelligent street light controller system. And based on the degree of illumination control fixed time, in the automatic foundation fixed time, according to the multi-sensing exhibition survey data's special combination change, to control the street light nimbly; the system can also realize the automatic timing control, by pre-installing time to control street light switch and ultimately to control the street light time. Simultaneously the system can also realize the automatic sunshine control, which may act according to the actual determination of the sunlight degree of illumination and the degree of illumination control criterion and automatically control street light. On the basis of the merits of the regular control and the optical control, a new street smart controller is designed, with dual functions including timing control and automatic photoelectric control. It allows street lamps automatically lit in the evening, lighting the road for a few hours (adjustable time). After 0:00, when few vehicles or pedestrians go past, it turn off automatically. And terminal controller has wake-up function. After the street light turn out automatically, when the vehicles or pedestrians are going through, street light will be waken up by terminal controller. When the vehicles or pedestrians go past, the lights are off automatically. Design of new intelligent street light control system does not only achieve energy-saving power but also extend the service life of lighting equipment. Moreover, it is controllable, ease of maintenance. At the same time, it is helpful to highlight the festive and other characteristics, and ultimately make street light network, intelligence, humanity and art. An optimal configuration is reported. Finally, the results of experiment are obtained that: when a motor is closed to 15 meters, LED lights are on automatically. While people passed, the distance is proximity 10 meters."}
{"_id": "f57792502f5057c0ed85b99d75066ac327a57424", "title": "Modelling and analysis of convergence of wireless sensor network and passive optical network using queueing theory", "text": "Wireless sensor networks (WSNs) are being deployed for an ever-widening range of applications, particularly due to their low power consumption and reliability. In most applications, the sensor data must be sent over the Internet or core network, which results in the WSN backhaul problem. Passive optical networks (PONs), a next-generation access network technology, are well suited form a backhaul which interfaces WSNs to the core network. In this paper a new structure is proposed which converges WSNs and PONs. It is modelled and analyzed through queuing theory by employing two M/M/1 queues in tandem. Numerical results show how the WSN and PON dimensions affect the average queue length, thus serving as a guideline for future resource allocation and scheduling of such a converged network."}
{"_id": "7bd4242e4aca500bb506f331fcae1b0ad648914b", "title": "A morphologic two-stage approach for automated optic disk detection in color eye fundus images", "text": "A new adaptive morphological method for the automatic detection of the optic disk in digital color eye fundus images is presented in this paper. This method has been designed to detect the optic disk center and the optic disk rim. In our experiments with the DRIVE and DIARETDB1 databases, the proposed method was able to detect the optic disk center with 100% and 97.75% of accuracy, respectively. We considered correct all automatically detected optic disk location that is within the borders of the optic disk marked manually. Using our proposed method, the rim of the optic disk was detected in all images of the DRIVE database with average sensitivity and specificity of 83.54% and 99.81%, respectively, and on the DIARETDB1 database with average sensitivity and specificity of 92.51% and 99.76%, respectively. 2013 Published by Elsevier B.V."}
{"_id": "e227372d80f0d1853894d5eb452c74c8f78a95c2", "title": "Relationship between color and emotion : a study of college students", "text": "Ninety-eight college students were asked to indicate their emotional responses to five principle hues (i.e., red, yellow, green, blue, purple), five intermediate hues (i.e., yellow-red, greenyellow, blue-green, purple-blue, and red-purple), and three achromatic colors (white, gray, and black) and the reasons for their choices. The color stimuli were referenced from the Munsell Color System. The results revealed that the principle hues comprised the highest number of positive emotional responses, followed by the intermediate hues and the achromatic colors. The color green evoked mainly positive emotions such as relaxation and comfort because it reminded most of the respondents of nature. The color green-yellow had the lowest number of positive responses because it was associated with vomit and elicited the feelings of sickness and disgust. For the achromatic colors, white attained a large number of positive responses, followed by the colors black and gray. The reasons for the color-emotion associations are discussed and future research areas are suggested."}
{"_id": "009d1cd15191b1a0b4b7fa8796c4265fbf771946", "title": "A stream function solver for liquid simulations", "text": "This paper presents a liquid simulation technique that enforces the incompressibility condition using a stream function solve instead of a pressure projection. Previous methods have used stream function techniques for the simulation of detailed single-phase flows, but a formulation for liquid simulation has proved elusive in part due to the free surface boundary conditions. In this paper, we introduce a stream function approach to liquid simulations with novel boundary conditions for free surfaces, solid obstacles, and solid-fluid coupling.\n Although our approach increases the dimension of the linear system necessary to enforce incompressibility, it provides interesting and surprising benefits. First, the resulting flow is guaranteed to be divergence-free regardless of the accuracy of the solve. Second, our free-surface boundary conditions guarantee divergence-free motion even in the un-simulated air phase, which enables two-phase flow simulation by only computing a single phase. We implemented this method using a variant of FLIP simulation which only samples particles within a narrow band of the liquid surface, and we illustrate the effectiveness of our method for detailed two-phase flow simulations with complex boundaries, detailed bubble interactions, and two-way solid-fluid coupling."}
{"_id": "32e6fbc44a79bad68087c003a6e31f55b9584758", "title": "Location recommendation for out-of-town users in location-based social networks", "text": "Most previous research on location recommendation services in location-based social networks (LBSNs) makes recommendations without considering where the targeted user is currently located. Such services may recommend a place near her hometown even if the user is traveling out of town. In this paper, we study the issues in making location recommendations for out-of-town users by taking into account user preference, social influence and geographical proximity. Accordingly, we propose a collaborative recommendation framework, called User Preference, Proximity and Social-Based Collaborative Filtering} (UPS-CF), to make location recommendation for mobile users in LBSNs. We validate our ideas by comprehensive experiments using real datasets collected from Foursquare and Gowalla. By comparing baseline algorithms and conventional collaborative filtering approach (and its variants), we show that UPS-CF exhibits the best performance. Additionally, we find that preference derived from similar users is important for in-town users while social influence becomes more important for out-of-town users."}
{"_id": "85f94d8098322f8130512b4c6c4627548ce4a6cc", "title": "Unsupervised Pretraining for Sequence to Sequence Learning", "text": "Sequence to sequence models are successful tools for supervised sequence learning tasks, such as machine translation. Despite their success, these models still require much labeled data and it is unclear how to improve them using unlabeled data, which is much less expensive to obtain. In this paper, we present simple changes that lead to a significant improvement in the accuracy of seq2seq models when the labeled set is small. Our method intializes the encoder and decoder of the seq2seq model with the trained weights of two language models, and then all weights are jointly fine-tuned with labeled data. An additional language modeling loss can be used to regularize the model during fine-tuning. We apply this method to low-resource tasks in machine translation and abstractive summarization and find that it significantly improves the subsequent supervised models. Our main finding is that the pretraining accelerates training and improves generalization of seq2seq models, achieving state-of-the-art results on the WMT English\u2192German task. Our model obtains an improvement of 1.3 BLEU from the previous best models on both WMT\u201914 and WMT\u201915 English\u2192German. Our ablation study shows that pretraining helps seq2seq models in different ways depending on the nature of the task: translation benefits from the improved generalization whereas summarization benefits from the improved optimization."}
{"_id": "0d9b90af172613d0d6af3b3352a1d351a7a09b5a", "title": "BOINC: a system for public-resource computing and storage", "text": "BOINC (Berkeley Open Infrastructure for Network Computing) is a software system that makes it easy for scientists to create and operate public-resource computing projects. It supports diverse applications, including those with large storage or communication requirements. PC owners can participate in multiple BOINC projects, and can specify how their resources are allocated among these projects. We describe the goals of BOINC, the design issues that we confronted, and our solutions to these problems."}
{"_id": "9b4a861151fabae1dfd61c917d031c86d26be704", "title": "Controllable Abstractive Summarization", "text": "Current models for document summarization disregard user preferences such as the desired length, style, the entities that the user might be interested in, or how much of the document the user has already read. We present a neural summarization model with a simple but effective mechanism to enable users to specify these high level attributes in order to control the shape of the final summaries to better suit their needs. With user input, our system can produce high quality summaries that follow user preferences. Without user input, we set the control variables automatically \u2013 on the full text CNN-Dailymail dataset, we outperform state of the art abstractive systems (both in terms of F1-ROUGE1 40.38 vs. 39.53 F1-ROUGE and human evaluation)."}
{"_id": "3c0747367003a3efb5fda3422a2795e04687bb2c", "title": "Neuroergonomics: a review of applications to physical and cognitive work", "text": "Neuroergonomics is an emerging science that is defined as the study of the human brain in relation to performance at work and in everyday settings. This paper provides a critical review of the neuroergonomic approach to evaluating physical and cognitive work, particularly in mobile settings. Neuroergonomics research employing mobile and immobile brain imaging techniques are discussed in the following areas of physical and cognitive work: (1) physical work parameters; (2) physical fatigue; (3) vigilance and mental fatigue; (4) training and neuroadaptive systems; and (5) assessment of concurrent physical and cognitive work. Finally, the integration of brain and body measurements in investigating workload and fatigue, in the context of mobile brain/body imaging (\"MoBI\"), is discussed."}
{"_id": "bc87585b4fc874a29bd1d9a031dce1807b2ba0e8", "title": "Summarizing Contrastive Themes via Hierarchical Non-Parametric Processes", "text": "Given a topic of interest, a contrastive theme is a group of opposing pairs of viewpoints. We address the task of summarizing contrastive themes: given a set of opinionated documents, select meaningful sentences to represent contrastive themes present in those documents. Several factors make this a challenging problem: unknown numbers of topics, unknown relationships among topics, and the extraction of comparative sentences. Our approach has three core ingredients: contrastive theme modeling, diverse theme extraction, and contrastive theme summarization. Specifically, we present a hierarchical non-parametric model to describe hierarchical relations among topics; this model is used to infer threads of topics as themes from the nested Chinese restaurant process. We enhance the diversity of themes by using structured determinantal point processes for selecting a set of diverse themes with high quality. Finally, we pair contrastive themes and employ an iterative optimization algorithm to select sentences, explicitly considering contrast, relevance, and diversity. Experiments on three datasets demonstrate the effectiveness of our method."}
{"_id": "22d016d131f55637518a60cb4bef3334de5b4a64", "title": "Learning Discriminative and Shareable Features for Scene Classification", "text": "In this paper, we propose to learn a discriminative and shareable feature transformation filter bank to transform local image patches (represented as raw pixel values) into features for scene image classification. The learned filters are expected to: (1) encode common visual patterns of a flexible number of categories; (2) encode discriminative and class-specific information. For each category, a subset of the filters are activated in a data-adaptive manner, meanwhile sharing of filters among different categories is also allowed. Discriminative power of the filter bank is further enhanced by enforcing the features from the same category to be close to each other in the feature space, while features from different categories to be far away from each other. The experimental results on three challenging scene image classification datasets indicate that our features can achieve very promising performance. Furthermore, our features also show great complementary effect to the state-of-the-art ConvNets feature."}
{"_id": "e477d4353c1b7c39d87f3e2f43d7045e1ab7a4e0", "title": "Machine learning in cardiovascular medicine: are we there yet?", "text": "Artificial intelligence (AI) broadly refers to analytical algorithms that iteratively learn from data, allowing computers to find hidden insights without being explicitly programmed where to look. These include a family of operations encompassing several terms like machine learning, cognitive learning, deep learning and reinforcement learning-based methods that can be used to integrate and interpret complex biomedical and healthcare data in scenarios where traditional statistical methods may not be able to perform. In this review article, we discuss the basics of machine learning algorithms and what potential data sources exist; evaluate the need for machine learning; and examine the potential limitations and challenges of implementing machine in the context of cardiovascular medicine. The most promising avenues for AI in medicine are the development of automated risk prediction algorithms which can be used to guide clinical care; use of unsupervised learning techniques to more precisely phenotype complex disease; and the implementation of reinforcement learning algorithms to intelligently augment healthcare providers. The utility of a machine learning-based predictive model will depend on factors including data heterogeneity, data depth, data breadth, nature of modelling task, choice of machine learning and feature selection algorithms, and orthogonal evidence. A critical understanding of the strength and limitations of various methods and tasks amenable to machine learning is vital. By leveraging the growing corpus of big data in medicine, we detail pathways by which machine learning may facilitate optimal development of patient-specific models for improving diagnoses, intervention and outcome in cardiovascular medicine."}
{"_id": "c9ad8d639f66e0f0aa43d1187a5aac449b00b76d", "title": "Intrinsic frames of reference and egocentric viewpoints in scene recognition", "text": "Three experiments investigated the roles of intrinsic directions of a scene and observer's viewing direction in recognizing the scene. Participants learned the locations of seven objects along an intrinsic direction that was different from their viewing direction and then recognized spatial arrangements of three or six of these objects from different viewpoints. The results showed that triplets with two objects along the intrinsic direction (intrinsic triplets) were easier to recognize than triplets with two objects along the study viewing direction (non-intrinsic triplets), even when the intrinsic triplets were presented at a novel test viewpoint and the non-intrinsic triplets were presented at the familiar test viewpoint. The results also showed that configurations with the same three or six objects were easier to recognize at the familiar test viewpoint than other viewpoints. These results support and develop the model of spatial memory and navigation proposed by Mou, McNamara, Valiquette, and Rump [Mou, W., McNamara, T. P., Valiquiette C. M., & Rump, B. (2004). Allocentric and egocentric updating of spatial memories. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 142-157]."}
{"_id": "158bfc6558da5495327f9d8b4288105ab17f6a8b", "title": "An exact algorithm for solving the vertex separator problem", "text": "Given G = (V,E) a connected undirected graph and a positive integer \u03b2(|V |), the vertex sperator problem is to find a partition of V into noempty three classes A, B, C such that there is no edge between A and B, max{|A|, |B|} \u2264 \u03b2(|V |) and |C| is minimum. In this paper we consider the vertex separator problem from a polyhedral point of view. We introduce new classes of valid inequalities for the associated polyhedron. Using a natural lower bound for the optimal solution, we present successful computational experiments."}
{"_id": "2d8d5c7d02a7a54b99f1dc2499593a9289888831", "title": "Detection and Localization of Curbs and Stairways Using Stereo Vision", "text": "We present algorithms to detect and precisely localize curbs and stairways for autonomous navigation. These algorithms combine brightness information (in the form of edgels) with 3-D data from a commercial stereo system. The overall system (including stereo computation) runs at about 4 Hz on a 1 GHz laptop. We show experimental results and discuss advantages and shortcomings of our approach."}
{"_id": "63623c63ffddd08d266d884680d3493e8b7705f1", "title": "Stereo Processing by Semiglobal Matching and Mutual Information", "text": "This paper describes the semiglobal matching (SGM) stereo method. It uses a pixelwise, mutual information (Ml)-based matching cost for compensating radiometric differences of input images. Pixelwise matching is supported by a smoothness constraint that is usually expressed as a global cost function. SGM performs a fast approximation by pathwise optimizations from all directions. The discussion also addresses occlusion detection, subpixel refinement, and multibaseline matching. Additionally, postprocessing steps for removing outliers, recovering from specific problems of structured environments, and the interpolation of gaps are presented. Finally, strategies for processing almost arbitrarily large images and fusion of disparity images using orthographic projection are proposed. A comparison on standard stereo images shows that SGM is among the currently top-ranked algorithms and is best, if subpixel accuracy is considered. The complexity is linear to the number of pixels and disparity range, which results in a runtime of just 1-2 seconds on typical test images. An in depth evaluation of the Ml-based matching cost demonstrates a tolerance against a wide range of radiometric transformations. Finally, examples of reconstructions from huge aerial frame and pushbroom images demonstrate that the presented ideas are working well on practical problems."}
{"_id": "734e4341a2507c9b6039869aeb4c138bb86ade00", "title": "Adaptive neighborhood selection for real-time surface normal estimation from organized point cloud data using integral images", "text": "In this paper we present two real-time methods for estimating surface normals from organized point cloud data. The proposed algorithms use integral images to perform highly efficient border- and depth-dependent smoothing and covariance estimation. We show that this approach makes it possible to obtain robust surface normals from large point clouds at high frame rates and therefore, can be used in real-time computer vision algorithms that make use of Kinect-like data."}
{"_id": "154898f34460e95aef932bec5615bbd995824cad", "title": "A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms", "text": "Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods. Our taxonomy is designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can easily be extended to include new algorithms. We have also produced several new multi-frame stereo data sets with ground truth and are making both the code and data sets available on the Web. Finally, we include a comparative evaluation of a large set of today's best-performing stereo algorithms."}
{"_id": "706a1740113018e8f5f9ded79a2e5a43ca8407d2", "title": "A Compact 90-degree Twist using Novel Ridged Waveguide for Integrated Waveguide Subsystems", "text": "A compact waveguide twist using a single step of novel ridged structure for integrated waveguide subsystems is proposed. The cross-section of the ridged waveguide has the shape of two squares partially overlapped. The advantages of this configuration are the facilitation of manufacturing and the reduction of overall dimensions. In this paper, the broadband operation principle is described by using equivalent circuit, moreover the its design using HFSS and the experimental results are presented. In the result, the return loss in excess of 30dB could be obtained over 22 percent of frequency band without additional waveguide steps. The twist length is 0.22 times the wavelength of the standard rectangular waveguides"}
{"_id": "005b965c545ef1d67c462756b77d8ef7d812e944", "title": "An active volumetric model for 3D reconstruction", "text": "In this paper, we present an active volumetric model (AVM) for 3D reconstruction from multiple calibrated images of a scene. The AVM is a physically motivated 3D deformable model which shrinks actively under the influence of multiple simulated forces towards the real scene by throwing away some of its voxels. It provides a computational framework to integrate several constraints in an intelligible way. In the current work, we use three forces derived respectively from the smooth constraint, the compulsory silhouette constraint, and the color consistency constraint. Based on the composition of the 3 forces, our algorithm can significantly restrain holes and floating voxels, which plague voxel coloring algorithms, and produce precise and smooth models. We test our algorithm by experiments based on both synthetic and real data."}
{"_id": "8bd8b5ee7a38ae0a148d8afa1827f41aba5323cf", "title": "Effective Fashion Retrieval Based on Semantic Compositional Networks", "text": "Typical approaches for fashion retrieval rank clothing images according to the similarity to a user-provided query image. Similarity is usually assessed by encoding images in terms of visual elements such as color, shape and texture. In this work, we proceed differently and consider that the semantics of an outfit is mainly comprised of environmental and cultural concepts such as occasion, style and season. Thus, instead of retrieving outfits using strict visual elements, we find semantically similar outfits that fall into similar clothing styles and are adequate for the same occasions and seasons. We propose a compositional approach for fashion retrieval by arguing that the semantics of an outfit can be recognised by their constituents (i.e., clothing items and accessories). Specifically, we present a semantic compositional network (Comp-Net) in which clothing items are detected from the image and the probability of each item is used to compose a vector representation for the outfit. Comp-Net employs a normalization layer so that weights are updated by taking into consideration the previously known co-occurrence patterns between clothing items. Further, Comp-Net minimizes a cost-sensitive loss function as errors have different costs depending on the clothing item that is misclassified. This results in a space in which semantically related outfits are placed next to each other, enabling to find semantically similar outfits that may not be visually similar. We designed an evaluation setup that takes into account the association between different styles, occasions and seasons, and show that our compositional approach significantly outperforms a variety of recently proposed baselines."}
{"_id": "70e0660ff33b75b751f8635bb39f5b3299335fb7", "title": "Using humans as sensors: An estimation-theoretic perspective", "text": "The explosive growth in social network content suggests that the largest \"sensor network\" yet might be human. Extending the participatory sensing model, this paper explores the prospect of utilizing social networks as sensor networks, which gives rise to an interesting reliable sensing problem. In this problem, individuals are represented by sensors (data sources) who occasionally make observations about the physical world. These observations may be true or false, and hence are viewed as binary claims. The reliable sensing problem is to determine the correctness of reported observations. From a networked sensing standpoint, what makes this sensing problem formulation different is that, in the case of human participants, not only is the reliability of sources usually unknown but also the original data provenance may be uncertain. Individuals may report observations made by others as their own. The contribution of this paper lies in developing a model that considers the impact of such information sharing on the analytical foundations of reliable sensing, and embed it into a tool called Apollo that uses Twitter as a \"sensor network\" for observing events in the physical world. Evaluation, using Twitter-based case-studies, shows good correspondence between observations deemed correct by Apollo and ground truth."}
{"_id": "32b456688bf7c9cb8dced7348b981dc24d992f05", "title": "Enhancing perceptual and attentional skills requires common demands between the action video games and transfer tasks", "text": "Despite increasing evidence that shows action video game play improves perceptual and cognitive skills, the mechanisms of transfer are not well-understood. In line with previous work, we suggest that transfer is dependent upon common demands between the game and transfer task. In the current study, participants played one of four action games with varying speed, visual, and attentional demands for 20 h. We examined whether training enhanced performance for attentional blink, selective attention, attending to multiple items, visual search and auditory detection. Non-gamers who played the game (Modern Combat) with the highest demands showed transfer to tasks of attentional blink and attending to multiple items. The game (MGS Touch) with fewer attentional demands also decreased attentional blink, but to a lesser degree. Other games failed to show transfer, despite having many action game characteristics but at a reduced intensity. The results support the common demands hypothesis."}
{"_id": "f8c3bc760a29b3c3a78d1b051601af7d7150cd26", "title": "Psychosocial functioning and depression: distinguishing among antecedents, concomitants, and consequences.", "text": "In this article we attempt to distinguish empirically between psycho-social variables that are concomitants of depression, and variables that may serve as antecedents or sequelae of this disorder. We review studies that investigated the relationship between depression and any of six psychosocial variables after controlling for the effects of concurrent depression. The six variables examined are attributional style, dysfunctional attitudes, personality, social support, marital distress, and coping style. The review suggests that whereas there is little evidence in adults of a cognitive vulnerability to clinical depression, disturbances in interpersonal functioning may be antecedents or sequelae of this disorder. Specifically, marital distress and low social integration appear to be involved in the etiology of depression, and introversion and interpersonal dependency are identified as enduring abnormalities in the functioning of remitted depressives. We attempt to integrate what is known about the relationships among these latter variables, suggest ways in which they may influence the development of depression, and outline specific issues to be addressed in future research."}
{"_id": "ad79a5d7b3faa86da38b459625e6727bc0c80f2a", "title": "Formal Verification With Frama-C: A Case Study in the Space Software Domain", "text": "With the increasing importance of software in the aerospace field, as evidenced by its growing size and complexity, a rigorous and reliable software verification and validation process must be applied to ensure conformance with the strict requirements of this software. Although important, traditional validation activities such as testing and simulation can only provide a partial verification of behavior in critical real-time software systems, and thus, formal verification is an alternative to complement these activities. Two useful formal software verification approaches are deductive verification and abstract interpretation, which analyze programs statically to identify defects. This paper explores abstract interpretation and deductive verification by employing Frama-C's value analysis and Jessie plug-ins to verify embedded aerospace control software. The results indicate that both approaches can be employed in a software verification process to make software more reliable."}
{"_id": "9435022a8d60669468cc58331d09ba0a1c593b7e", "title": "Validation of heart rate extraction using video imaging on a built-in camera system of a smartphone", "text": "As a smartphone is becoming very popular and its performance is being improved fast, a smartphone shows its potential as a low-cost physiological measurement solution which is accurate and can be used beyond the clinical environment. Because cardiac pulse leads the subtle color change of a skin, a pulsatile signal which can be described as photoplethysmographic (PPG) signal can be measured through recording facial video using a digital camera. In this paper, we explore the potential that the reliable heart rate can be measured remotely by the facial video recorded using smartphone camera. First, using the front facing-camera of a smartphone, facial video was recorded. We detected facial region on the image of each frame using face detection, and yielded the raw trace signal from the green channel of the image. To extract more accurate cardiac pulse signal, we applied independent component analysis (ICA) to the raw trace signal. The heart rate was extracted using frequency analysis of the raw trace signal and the analyzed signal from ICA. The accuracy of the estimated heart rate was evaluated by comparing with the heart rate from reference electrocardiogram (ECG) signal. Finally, we developed FaceBEAT, an iPhone application for remote heart rate measurement, based on this study."}
{"_id": "59cc3b7cd89a74c26e359ad7063ef4ad940e1124", "title": "The minds of gods: A comparative study of supernatural agency", "text": "The present work is the first study to systematically compare the minds of gods by examining some of the intuitive processes that guide how people reason about them. By examining the Christian god and the spirit-masters of the Tyva Republic, it first confirms that the consensus view of the Christian god's mind is one of omniscience with acute concern for interpersonal social behavior (i.e., moral behaviors) and that Tyvan spirit-masters are not as readily attributed with knowledge or concern of moral information. Then, it reports evidence of a moralization bias of gods' minds; American Christians who believe that God is omniscient rate God as more knowledgeable of moral behaviors than nonmoral information. Additionally, Tyvans who do not readily report pro- or antisocial behavior among the things that spirit-masters care about will nevertheless rate spirit-masters' knowledge and concern of moral information higher than nonmoral information. However, this knowledge is distributed spatially; the farther away from spirits' place of governance a moral behavior takes place, the less they know and care about it. Finally, the wider the breadth of knowledge Tyvans attribute to spirit-masters, the more they attribute moral concern for behaviors that transpire beyond their jurisdiction. These results further demonstrate that there is a significant gulf between expressed beliefs and intuitive religious cognition and provides evidence for a moralization bias of gods' minds."}
{"_id": "c129ee744165be49aaf2a6914a623870c474b7bb", "title": "An Overview on XML Semantic Disambiguation from Unstructured Text to Semi-Structured Data : Background , Applications , and Ongoing Challenges", "text": "Since the last two decades, XML has gained momentum as the standard for Web information management and complex data representation. Also, collaboratively built semi-structured information resources, such as Wikipedia, have become prevalent on the Web and can be inherently encoded in XML. Yet most methods for processing XML and semi-structured information handle mainly the syntactic properties of the data, while ignoring the semantics involved. To devise more intelligent applications, one needs to augment syntactic features with machine-readable semantic meaning. This can be achieved through the computational identification of the meaning of data in context, also known as (a.k.a.) automated semantic analysis and disambiguation, which is nowadays one of the main challenges at the core of the Semantic Web. This survey paper provides a concise and comprehensive review of the methods related to XML-based semi-structured semantic analysis and disambiguation. It is made of four logical parts. First, we briefly cover traditional word sense disambiguation methods for processing flat textual data. Second, we describe and categorize disambiguation techniques developed and extended to handle semi-structured and XML data. Third, we describe current and potential application scenarios that can benefit from XML semantic analysis, including: data clustering and semantic-aware indexing, data integration and selective dissemination, semantic-aware and temporal querying, Web and Mobile Services matching and composition, blog and social semantic network analysis, and ontology learning. Fourth, we describe and discuss ongoing challenges and future directions, including: the quantification of semantic ambiguity, expanding XML disambiguation context, combining structure and content, using collaborative/social information sources, integrating explicit and implicit semantic analysis, emphasizing user involvement, and reducing computational complexity."}
{"_id": "98ebbb7ff06d45bc6f8bed1ed24351f510b74a22", "title": "Thermal characterisation of a copper-clip-bonded IGBT module with double-sided cooling", "text": "In pursuing higher power density power electronic equipment, the copper clip bonding concept is one of the double-sided cooling approaches under investigation for power semiconductors. This paper presents a comprehensive thermal assessment of the copper clip concept by undertaking comparative studies on one bespoke copper-clip-bonded IGBT power module and a conventional wire-bonded counterpart. A series of single-sided cooling tests and double-sided cooling tests have been undertaken on these two modules. Reductions are reported in the junction to case thermal resistance of up to 23% for one individual die and around 18% for the parallel operation of two dies in the single-sided cooling tests. Whilst in the double-sided cooling experiments, an additional average 18% thermal improvement is achieved due to the addition of a top fan-cooled heatsink mounted onto the copper clip. The results show clearly that the copper clip technology could provide significant benefits in the heat removal for power semiconductors and enable more power dense equipment."}
{"_id": "f2ae900a87eee49169f9dc983e14f5979c5b260e", "title": "Distributed Detection of Single-Stage Multipoint Cyber Attacks in a Water Treatment Plant", "text": "A distributed detection method is proposed to detect single stage multi-point (SSMP) attacks on a Cyber Physical System (CPS). Such attacks aim at compromising two or more sensors or actuators at any one stage of a CPS and could totally compromise a controller and prevent it from detecting the attack. However, as demonstrated in this work, using the flow properties of water from one stage to the other, a neighboring controller was found effective in detecting such attacks. The method is based on physical invariants derived for each stage of the CPS from its design. The attack detection effectiveness of the method was evaluated experimentally against an operational water treatment testbed containing 42 sensors and actuators. Results from the experiments point to high effectiveness of the method in detecting a variety of SSMP attacks but also point to its limitations. Distributing the attack detection code among various controllers adds to the scalability of the proposed method."}
{"_id": "5d5385eadf98c38a1b943b1b07bc677d1910d69d", "title": "Feature Extraction With Multiscale Covariance Maps for Hyperspectral Image Classification", "text": "The classification of hyperspectral images (HSIs) using convolutional neural networks (CNNs) has recently drawn significant attention. However, it is important to address the potential overfitting problems that CNN-based methods suffer when dealing with HSIs. Unlike common natural images, HSIs are essentially three-order tensors which contain two spatial dimensions and one spectral dimension. As a result, exploiting both spatial and spectral information is very important for HSI classification. This paper proposes a new hand-crafted feature extraction method, based on multiscale covariance maps (MCMs), that is specifically aimed at improving the classification of HSIs using CNNs. The proposed method has the following distinctive advantages. First, with the use of covariance maps, the spatial and spectral information of the HSI can be jointly exploited. Each entry in the covariance map stands for the covariance between two different spectral bands within a local spatial window, which can absorb and integrate the two kinds of information (spatial and spectral) in a natural way. Second, by means of our multiscale strategy, each sample can be enhanced with spatial information from different scales, increasing the information conveyed by training samples significantly. To verify the effectiveness of our proposed method, we conduct comprehensive experiments on three widely used hyperspectral data sets, using a classical 2-D CNN (2DCNN) model. Our experimental results demonstrate that the proposed method can indeed increase the robustness of the CNN model. Moreover, the proposed MCMs+2DCNN method exhibits better classification performance than other CNN-based classification strategies and several standard techniques for spectral-spatial classification of HSIs."}
{"_id": "5b742b999160442216f0936aab36cdd3b2e4c6b2", "title": "Interactions Between Philosophy and Artificial Intelligence: The Role of Intuition and Non-Logical Reasoning in Intelligence", "text": "This paper echoes, from a philosophical standpoint, the claim of McCarthy and Hayes that Philosophy and Artificial Intelligence have important relations. Philosophical problems about the use of \u2019intuition\u2019 in reasoning are related, via a concept of analogical representation, to problems in the simulation of perception, problem-solving and the generation of useful sets of possibilities in considering how to act. The requirements for intelligent decision-making proposed by McCarthy and Hayes are criticised as too narrow, and more general requirements are suggested instead."}
{"_id": "345947186f190649c582204776071ac9a62e8d67", "title": "Low-resource routing attacks against tor", "text": "Tor has become one of the most popular overlay networks for anonymizing TCP traffic. Its popularity is due in part to its perceived strong anonymity properties and its relatively low latency service. Low latency is achieved through Tor\u00e2 s ability to balance the traffic load by optimizing Tor router selection to probabilistically favor routers with highbandwidth capabilities.\n We investigate how Tor\u00e2 s routing optimizations impact its ability to provide strong anonymity. Through experiments conducted on PlanetLab, we show the extent to which routing performance optimizations have left the system vulnerable to end-to-end traffic analysis attacks from non-global adversaries with minimal resources. Further, we demonstrate that entry guards, added to mitigate path disruption attacks, are themselves vulnerable to attack. Finally, we explore solutions to improve Tor\u00e2 s current routing algorithms and propose alternative routing strategies that prevent some of the routing attacks used in our experiments."}
{"_id": "e2a389d1eff87ded82f528368ab64f2571fe563f", "title": "Beyond Linguistic Equivalence. An Empirical Study of Translation Evaluation in a Translation Learner Corpus", "text": "The realisation that fully automatic translation in many settings is still far from producing output that is equal or superior to human translation has lead to an intense interest in translation evaluation in the MT community. However, research in this field, by now, has not only largely ignored the tremendous amount of relevant knowledge available in a closely related discipline, namely translation studies, but also failed to provide a deeper understanding of the nature of \"translation errors\" and \"translation quality\". This paper presents an empirical take on the latter concept, translation quality, by comparing human and automatic evaluations of learner translations in the KOPTE corpus. We will show that translation studies provide sophisticated concepts for translation quality estimation and error annotation. Moreover, by applying well-established MT evaluation scores, namely BLEU and Meteor, to KOPTE learner translations that were graded by a human expert, we hope to shed light on properties (and potential shortcomings) of these scores. 1 Translation quality assessment In recent years, researchers in the field of MT evaluation have proposed a large variety of methods for assessing the quality of automatically produced translations. Approaches range from fully automatic quality scoring to efforts aimed at the development of \"human\" evaluation scores that try to exploit the (often tacit) linguistic knowledge of human evaluators. The criteria according to which quality is estimated often include adequacy, the degree of meaning preservation, and fluency, target language correctness (Callison-Burch et al., 2007). The goals of both \"human\" evaluation and fully automatic quality scoring are manifold and cover system optimisation as well as benchmarking and comparison. In translation studies, the scientific (and prescientific) discussion on how to assess the quality of human translations has been going on for centuries. In recent years, the development of appropriate concepts and tools has become even more vital to the discipline due to the pressing needs of the language industry. However, different from the belief, typical to MT, that the \"goodness\" of a translation can be scored on the basis of linguistic criteria alone, the notion of \"translation quality\", in translation studies, has assumed a multi-faceted shape, distancing itself from a simple strive for equivalence and embracing concepts such as functional, stylistic and pragmatic appropriateness as well as textual coherence. In this section, we provide an overview over approaches to translation quality assessment developed in MT and translation studies to specify how \"quality\" is being defined in both fields and which methods and features are used. Due to the amount of available literature, this overview is necessarily incomplete, but still insightful with respect to differences and commonalities between MT and human translation evaluation. 1.1 Automatic MT quality scores MT output is usually evaluated by automatic language-independent metrics which can be applied to any language produced by an MT system. The use of automatic metrics for MT evaluation is legitimate, since MT systems deal with large amounts of data, on which manual evaluation would be very time-consuming and expensive. 
Automatic metrics typically compute the closeness (adequacy) of a \"hypothesis\" to a \"reference\" translation and differ from each other by how this closeness is measured. The most popular MT eval-"}
{"_id": "89411367a191c96bc778fa6ba2bfd5eeea4638b6", "title": "High-Efficient Patch Antenna Array for E-Band Gigabyte Point-to-Point Wireless Services", "text": "This letter presents a low-cost, high-gain, and high-efficiency 4 \u00d7 4 circular patch array antenna for gigabyte point-to-point wireless services at E-band (81-86 GHz). The antenna structure consists of two layers. The feed network is placed at the bottom layer, while the circular patches are on the top layer. To increase the efficiency of the antenna array, substrate integrated waveguide (SIW) is used to feed the circular patches through longitudinal slots etched on the top metallic surface. Low-cost printed circuit board (PCB) process is used to fabricate the antenna prototype. Simulated and measured bandwidths of the antenna array are 7.2%, which covers the desired frequency range of 81-86 GHz. Measured gain of the 4 \u00d7 4 antenna array is 18.5 dBi, which is almost constant within the antenna bandwidth. Measured radiation efficiency of 90.3% is obtained."}
{"_id": "f5349e656cff0be73e7f8a9fa39c41637e523a7d", "title": "Robobarista: Learning to Manipulate Novel Objects via Deep Multimodal Embedding", "text": "There is a large variety of objects and appliances in human environments, such as stoves, coffee dispensers, juice extractors, and so on. It is challenging for a roboticist to program a robot for each of these object types and for each of their instantiations. In this work, we present a novel approach to manipulation planning based on the idea that many household objects share similarly-operated object parts. We formulate the manipulation planning as a structured prediction problem and learn to transfer manipulation strategy across different objects by embedding point-cloud, natural language, and manipulation trajectory data into a shared embedding space using a deep neural network. In order to learn semantically meaningful spaces throughout our network, we introduce a method for pre-training its lower layers for multimodal feature embedding and a method for fine-tuning this embedding space using a loss-based margin. In order to collect a large number of manipulation demonstrations for different objects, we develop a new crowd-sourcing platform called Robobarista. We test our model on our dataset consisting of 116 objects and appliances with 249 parts along with 250 language instructions, for which there are 1225 crowd-sourced manipulation demonstrations. We further show that our robot with our model can even prepare a cup of a latte with appliances it has never seen before."}
{"_id": "a6c41fd3888e884c0e20663c67bccb9efb802da4", "title": "Anatomic reconstruction of chronic symptomatic anterolateral proximal tibiofibular joint instability", "text": "Symptomatic chronic proximal tibiofibular joint subluxation is a pathology which is difficult to diagnose and treat. Surgical treatment has not been well defined. A report of two patients successfully treated with an anatomic reconstruction of the posterior aspect of the proximal tibiofibular joint is presented."}
{"_id": "ba32db1ab445e123d3be8502f81d65b41a688fad", "title": "Big data processing framework of learning weather information and road traffic collision using distributed CEP from CCTV video: Cognitive image processing", "text": "Nowadays, the many traffic information are extracted from CCTV. The road occupancy, vehicle speed, accident detection, traffic collision and weather information are calculated in CCTV. These are big data comes from varying sources, such as, social sites or mobile phone GPS signals and so on. The Hadoop and HBase can store and analyze these real-time big data in a distributed processing framework. This framework can be designed as flexible and scalable framework using distributed CEP that process massive real-time traffic data and ESB that integrates other services. In this paper, we propose a new architecture for distributed processing that enables big data processing on the road traffic data with specially weather information and its related information analysis. We tested the proposed framework on road traffic data gathering 41 CCTV from Icheon to Gangneung freeway section in Korea. And the forecasting method is introduced at intermediate local area between CCTV."}
{"_id": "7deb82243f6f070e58ed4c40a68d24c2aa07c45f", "title": "The Technology Behind Personal Digital Assistants: An overview of the system architecture and key components", "text": "We have long envisioned that one day computers will understand natural language and anticipate what we need, when and where we need it, and proactively complete tasks on our behalf. As computers get smaller and more pervasive, how humans interact with them is becoming a crucial issue. Despite numerous attempts over the past 30 years to make language understanding (LU) an effective and robust natural user interface for computer interaction, success has been limited and scoped to applications that were not particularly central to everyday use. However, speech recognition and machine learning have continued to be refined, and structured data served by applications and content providers has emerged. These advances, along with increased computational power, have broadened the application of natural LU to a wide spectrum of everyday tasks that are central to a user's productivity. We believe that as computers become smaller and more ubiquitous [e.g., wearables and Internet of Things (IoT)], and the number of applications increases, both system-initiated and user-initiated task completion across various applications and web services will become indispensable for personal life management and work productivity. In this article, we give an overview of personal digital assistants (PDAs); describe the system architecture, key components, and technology behind them; and discuss their future potential to fully redefine human?computer interaction."}
{"_id": "625c0f0ab987f72262c1dbc2d0b6483c81ec0da8", "title": "A multilevel automatic thresholding method based on a genetic algorithm for a fast image segmentation", "text": "In this paper, a multilevel thresholding method which allows the determination of the appropriate number of thresholds as well as the adequate threshold values is proposed. This method combines a genetic algorithm with a wavelet transform. First, the length of the original histogram is reduced by using the wavelet transform. Based on this lower resolution version of the histogram, the number of thresholds and the threshold values are determined by using a genetic algorithm. The thresholds are then projected onto the original space. In this step, a refinement procedure may be added to detect accurate threshold values. Experiments and comparative results with multilevel thresholding methods over a synthetic histogram and real images show the efficiency of the proposed method. 2007 Elsevier Inc. All rights reserved."}
{"_id": "6f20f796bcdc63bdba431e734ec31f02e66f97cc", "title": "Three tensions in participatory design for inclusion", "text": "One ideal of Participatory Design (PD) is active involvement by all stakeholders as co-designers. However, when PD is applied to real projects, certain compromises are unavoidable, no matter what stakeholders are involved. With this paper we want to shed light on some of the challenges in implementing \"true\" PD in the case of designing with children, in particular children with severe disabilities. We do this work to better understand challenges in an ongoing project, RHYME, and by doing so we hope to provide insight and inspiration for others."}
{"_id": "36298c797f8f6e55c8b685b9768e30758704a81b", "title": "The Distributed Simulation of Multi-Agent Systems", "text": "Agent-based systems are increasingly being applied in a wide range of areas including telecommunications, business process modelling, computer games, control of mobile robots and military simulations. Such systems are typically extremely complex and it is often useful to be able to simulate an agent-based system to learn more about its behaviour or investigate the implications of alternative architectures. In this paper, we discuss the application of distributed discrete-event simulation techniques to the simulation of multi-agent systems. We identify the efficient distribution of the agents\u2019 environment as a key problem in the simulation of agent-based systems, and present an approach to the decomposition of the environment which facilitates load balancing. Keywords\u2014 agents, distributed simulation, interest management, load balancing."}
{"_id": "e63a1fafb12fb7cef9daa4535fee0e9136f0c47d", "title": "Flexible Tactile Sensors Using Screen-Printed P(VDF-TrFE) and MWCNT/PDMS Composites", "text": "This paper presents and compares two different types of screen-printed flexible and conformable pressure sensors arrays. In both variants, the flexible pressure sensors are in the form of segmental arrays of parallel plate structure-sandwiching the piezoelectric polymer polyvinylidene fluoride trifluoroethylene [P(VDF-TrFE)] between two printed metal layers of silver (Ag) in one case and the piezoresistive [multiwall carbon nanotube (MWCNT) mixed with poly(dimethylsiloxane (PDMS)] layer in the other. Each sensor module consists of 4 \u00d7 4 sensors array with 1-mm \u00d7 1-mm sensitive area of each sensor. The screen-printed piezoelectric sensors array exploits the change in polarization level of P(VDF-TrFE) to detect dynamic tactile parameter such as contact force. Similarly, the piezoresistive sensors array exploits the change in resistance of the bulk printed layer of MWCNT/PDMS composite. The two variants are compared on the basis of fabrication by printing on plastic substrate, ease of processing and handling of the materials, compatibility of the dissimilar materials in multilayers structure, adhesion, and finally according to the response to the normal compressive forces. The foldable pressure sensors arrays are completely realized using screen-printing technology and are targeted toward realizing low-cost electronic skin."}
{"_id": "12d4ebe2727911fc18f1a6203dde2b1892057f83", "title": "Network motifs: simple building blocks of complex networks.", "text": "Complex networks are studied across many fields of science. To uncover their structural design principles, we defined \"network motifs,\" patterns of interconnections occurring in complex networks at numbers that are significantly higher than those in randomized networks. We found such motifs in networks from biochemistry, neurobiology, ecology, and engineering. The motifs shared by ecological food webs were distinct from the motifs shared by the genetic networks of Escherichia coli and Saccharomyces cerevisiae or from those found in the World Wide Web. Similar motifs were found in networks that perform information processing, even though they describe elements as different as biomolecules within a cell and synaptic connections between neurons in Caenorhabditis elegans. Motifs may thus define universal classes of networks. This approach may uncover the basic building blocks of most networks."}
{"_id": "2cde803f1f09c46902bf0685c2586b55aee429f6", "title": "Do the Rich Get Richer? An Empirical Analysis of the Bitcoin Transaction Network", "text": "The possibility to analyze everyday monetary transactions is limited by the scarcity of available data, as this kind of information is usually considered highly sensitive. Present econophysics models are usually employed on presumed random networks of interacting agents, and only some macroscopic properties (e.g. the resulting wealth distribution) are compared to real-world data. In this paper, we analyze Bitcoin, which is a novel digital currency system, where the complete list of transactions is publicly available. Using this dataset, we reconstruct the network of transactions and extract the time and amount of each payment. We analyze the structure of the transaction network by measuring network characteristics over time, such as the degree distribution, degree correlations and clustering. We find that linear preferential attachment drives the growth of the network. We also study the dynamics taking place on the transaction network, i.e. the flow of money. We measure temporal patterns and the wealth accumulation. Investigating the microscopic statistics of money movement, we find that sublinear preferential attachment governs the evolution of the wealth distribution. We report a scaling law between the degree and wealth associated to individual nodes."}
{"_id": "67af8bf83dc4354d1513b6f60b13df60f694c5b3", "title": "An Analysis of Anonymity in the Bitcoin System", "text": "Anonymity in Bit coin, a peer-to-peer electronic currency system, is a complicated issue. Within the system, users are identified by public-keys only. An attacker wishing to de-anonymize its users will attempt to construct the one to-many mapping between users and public-keys and associate information external to the system with the users. Bitcoinfrustrates this attack by storing the mapping of a user to his or her public-keys on that user's node only and by allowing each user to generate as many public-keys as required. In this paper we consider the topological structure of two networks derived from Bitcoin's public transaction history. We show that the two networks have a non-trivial topological structure, provide complementary views of the Bit coin system and have implications for anonymity. We combine these structures with external information and techniques such as context discovery and flow analysis to investigate an alleged theft of Bit coins, which, at the time of the theft, had a market value of approximately half a million U.S. dollars."}
{"_id": "78d1cc844f0ab403d1ecf0307d1fad5788642147", "title": "Learning from Positive and Unlabeled Examples with Different Data Distributions", "text": "We study the problem of learning from positive and unlabeled examples. Although several techniques exist for dealing with this problem, they all assume that positive examples in the positive set P and the positive examples in the unlabeled set U are generated from the same distribution. This assumption may be violated in practice. For example, one wants to collect all printer pages from the Web. One can use the printer pages from one site as the set P of positive pages and use product pages from another site as U. One wants to classify the pages in U into printer pages and non-printer pages. Although printer pages from the two sites have many similarities, they can also be quite different because different sites often present similar products in different styles and have different focuses. In such cases, existing methods perform poorly. This paper proposes a novel technique A-EM to deal with the problem. Experiment results with product page classification demonstrate the effectiveness of the proposed technique."}
{"_id": "aad9c378dc1485957ae8d661596462b2ec16ab24", "title": "BIRDS OF A FEATHER : Homophily in Social Networks", "text": "\u25a0 Abstract Similarity breeds connection. This principle\u2014the homophily principle\u2014structures network ties of every type, including marriage, friendship, work, advice, support, information transfer, exchange, comembership, and other types of relationship. The result is that people\u2019s personal networks are homogeneous with regard to many sociodemographic, behavioral, and intrapersonal characteristics. Homophily limits people\u2019s social worlds in a way that has powerful implications for the information they receive, the attitudes they form, and the interactions they experience. Homophily in race and ethnicity creates the strongest divides in our personal environments, with age, religion, education, occupation, and gender following in roughly that order. Geographic propinquity, families, organizations, and isomorphic positions in social systems all create contexts in which homophilous relations form. Ties between nonsimilar individuals also dissolve at a higher rate, which sets the stage for the formation of niches (localized positions) within social space. We argue for more research on: (a) the basic ecological processes that link organizations, associations, cultural communities, social movements, and many other social forms; (b) the impact of multiplex ties on the patterns of homophily; and (c) the dynamics of network change over time through which networks and other social entities co-evolve."}
{"_id": "79618485982953a5d794c8adcb3f3504cba9bf3e", "title": "Ant colony optimization : Introduction and recent trends", "text": "Ant colony optimization is a technique for optimization that was introduced in the early 1990\u2019s. The inspiring source colony optimization is the foraging behavior of real ant colonies. This behavior is exploited in artificial ant colonies for the of approximate solutions to discrete optimization problems, to continuous optimization problems, and to important pro telecommunications, such as routing and load balancing. First, we deal with the biological inspiration of ant colony optim algorithms. We show how this biological inspiration can be transfered into an algorithm for discrete optimization. Then, we ant colony optimization in more general terms in the context of discrete optimization, and present some of the nowad performing ant colony optimization variants. After summarizing some important theoretical results, we demonstrate how a optimization can be applied to continuous optimization problems. Finally, we provide examples of an interesting recent direction: The hybridization with more classical techniques from artificial intelligence and operations research. \uf6d9 2005 Elsevier B.V. All rights reserved."}
{"_id": "d768bdf4485e59497b639e7425beb0347e08c62b", "title": "Inferring Query Performance Using Pre-retrieval Predictors", "text": "The prediction of query performance is an interesting and important issue in Information Retrieval (IR). Current predictors involve the use of relevance scores, which are time-consuming to compute. Therefore, current predictors are not very suitable for practical applications. In this paper, we study a set of predictors of query performance, which can be generated prior to the retrieval process. The linear and non-parametric correlations of the predictors with query performance are thoroughly assessed on the TREC disk4 and disk5 (minus CR) collections. According to the results, some of the proposed predictors have significant correlation with query performance, showing that these predictors can be useful to infer query performance in practical applications."}
{"_id": "3b5d728a11d51eeb7bb9f256a568341c00c69b17", "title": "Curvature-based regularization for surface approximation", "text": "We propose an energy-based framework for approximating surfaces from a cloud of point measurements corrupted by noise and outliers. Our energy assigns a tangent plane to each (noisy) data point by minimizing the squared distances to the points and the irregularity of the surface implicitly defined by the tangent planes. In order to avoid the well-known \u201dshrinking\u201d bias associated with first-order surface regularization, we choose a robust smoothing term that approximates curvature of the underlying surface. In contrast to a number of recent publications estimating curvature using discrete (e.g. binary) labellings with triple-cliques we use higher-dimensional labels that allows modeling curvature with only pair-wise interactions. Hence, many standard optimization algorithms (e.g. message passing, graph cut, etc) can minimize the proposed curvature-based regularization functional. The accuracy of our approach for representing curvature is demonstrated by theoretical and empirical results on synthetic and real data sets from multiview reconstruction and stereo."}
{"_id": "1e8c5dc1157879f8781a94aaace94187c67e47cd", "title": "A vision for personalized service level agreements in the cloud", "text": "Public Clouds today provide a variety of services for data analysis such as Amazon Elastic MapReduce and Google BigQuery. Each service comes with a pricing model and service level agreement (SLA). Today's pricing models and SLAs are described at the level of compute resources (instance-hours or gigabytes processed). They are also different from one service to the next. Both conditions make it difficult for users to select a service, pick a configuration, and predict the actual analysis cost. To address this challenge, we propose a new abstraction, called a Personalized Service Level Agreement, where users are presented with what they can do with their data in terms of query capabilities, guaranteed query performance and fixed hourly prices."}
{"_id": "da7ce9b241b076da2b24bd44ea463ba1f0074ac8", "title": "Types of Participant Behavior in a Massive Open Online Course", "text": "In recent years there has been a proliferation of massive open online courses (MOOCs), which provide unprecedented opportunities for lifelong learning. Registrants approach these courses with a variety of motivations for participation. Characterizing the different types of participation in MOOCs is fundamental in order to be able to better evaluate the phenomenon and to support MOOCs developers and instructors in devising courses which are adapted for different learners' needs. Thus, the purpose of this study was to characterize the different types of participant behavior in a MOOC. Using a data mining methodology, 21,889 participants of a MOOC were classified into clusters, based on their activity in the main learning resources of the course: video lectures, discussion forums, and assessments. Thereafter, the participants in each cluster were characterized in regard to demographics, course participation, and course achievement characteristics. Seven types of participant behavior were identified: Tasters (64.8%), Downloaders (8.5%), Disengagers (11.5%), Offline Engagers (3.6%), Online Engagers (7.4%), Moderately Social Engagers (3.7%), and Social Engagers (0.6%). A significant number of 1,020 participants were found to be engaged in the course, but did not achieve a certificate. The types are discussed according to the established research questions. The results provide further evidence regarding the utilization of the flexibility, which is offered in MOOCs, by the participants according to their needs. Furthermore, this study supports the claim that MOOCs' impact should not be evaluated solely based on certification rates but rather based on learning behaviors."}
{"_id": "2d86e73e56aa4394fe4ac9a6ed742d850abc73ce", "title": "Reporting and Methods in Clinical Prediction Research: A Systematic Review", "text": "BACKGROUND\nWe investigated the reporting and methods of prediction studies, focusing on aims, designs, participant selection, outcomes, predictors, statistical power, statistical methods, and predictive performance measures.\n\n\nMETHODS AND FINDINGS\nWe used a full hand search to identify all prediction studies published in 2008 in six high impact general medical journals. We developed a comprehensive item list to systematically score conduct and reporting of the studies, based on recent recommendations for prediction research. Two reviewers independently scored the studies. We retrieved 71 papers for full text review: 51 were predictor finding studies, 14 were prediction model development studies, three addressed an external validation of a previously developed model, and three reported on a model's impact on participant outcome. Study design was unclear in 15% of studies, and a prospective cohort was used in most studies (60%). Descriptions of the participants and definitions of predictor and outcome were generally good. Despite many recommendations against doing so, continuous predictors were often dichotomized (32% of studies). The number of events per predictor as a measure of statistical power could not be determined in 67% of the studies; of the remainder, 53% had fewer than the commonly recommended value of ten events per predictor. Methods for a priori selection of candidate predictors were described in most studies (68%). A substantial number of studies relied on a p-value cut-off of p<0.05 to select predictors in the multivariable analyses (29%). Predictive model performance measures, i.e., calibration and discrimination, were reported in 12% and 27% of studies, respectively.\n\n\nCONCLUSIONS\nThe majority of prediction studies in high impact journals do not follow current methodological recommendations, limiting their reliability and applicability."}
{"_id": "4816173b69084f8c305084177049c8fa771e9ed2", "title": "Reinforcement Learning for Automatic Test Case Prioritization and Selection in Continuous Integration", "text": "Testing in Continuous Integration (CI) involves test case prioritization, selection, and execution at each cycle. Selecting the most promising test cases to detect bugs is hard if there are uncertainties on the impact of committed code changes or, if traceability links between code and tests are not available. This paper introduces Retecs, a new method for automatically learning test case selection and prioritization in CI with the goal to minimize the round-trip time between code commits and developer feedback on failed test cases. The Retecs method uses reinforcement learning to select and prioritize test cases according to their duration, previous last execution and failure history. In a constantly changing environment, where new test cases are created and obsolete test cases are deleted, the Retecs method learns to prioritize error-prone test cases higher under guidance of a reward function and by observing previous CI cycles. By applying Retecs on data extracted from three industrial case studies, we show for the first time that reinforcement learning enables fruitful automatic adaptive test case selection and prioritization in CI and regression testing."}
{"_id": "bbe5748927e19d05a20d80106da189eb872789a4", "title": "Cannabidiol--recent advances.", "text": "The aim of this review is to present some of the recent publications on cannabidiol (CBD; 2), a major non-psychoactive constituent of Cannabis, and to give a general overview. Special emphasis is laid on biochemical and pharmacological advances, and on novel mechanisms recently put forward, to shed light on some of the pharmacological effects that can possibly be rationalized through these mechanisms. The plethora of positive pharmacological effects observed with CBD make this compound a highly attractive therapeutic entity."}
{"_id": "a86b1198a2d6dfe57e87a5acc33a747a69d1fdfe", "title": "When saliency meets sentiment: Understanding how image content invokes emotion and sentiment", "text": "Sentiment analysis is crucial for extracting social signals from social media content. Due to the prevalence of images in social media, image sentiment analysis is receiving increasing attention in recent years. However, most existing systems are black-boxes that do not provide insight on how image content invokes sentiment and emotion in the viewers. On the other hand, psychological studies have confirmed that salient objects in an image often invoke emotions. In this work, we investigate more fine-grained and more comprehensive interaction between visual saliency and visual sentiment. In particular, we partition images in several meta-level scene-type dimensions that are relevant to most images, including: open-closed, natural-manmade, indoor-outdoor, and face-noface. Facilitated by state of the art saliency detection algorithm and sentiment classification algorithm, we examine how the sentiment of the salient region(s) in an image relates to the overall sentiment of the image. The experiments on a representative image emotion dataset have shown interesting correlation between saliency and sentiment in different scene types and shed light on the mechanism of visual sentiment evocation."}
{"_id": "4c45d2ddbb3266e588efe78c41ced2c6af2824c3", "title": "Cross-layer secure cyber-physical control system design for networked 3D printers", "text": "Due to the high costs of 3D-printing infrastructure, outsourcing the production to third parties specializing in 3D-printing process becomes necessary. The integration of a 3D-printing system with networked communications constitutes a cyber-physical system, bringing new security challenges. Adversaries can explore the vulnerabilities of networks to damage the physical parts of the system. In this paper, we explore the vulnerabilities of 3D-printing systems, and design a cross-layer approach for the system. At the physical layer, we use a Markov jump system to model the system and develop a robust control policy to deal with uncertainties. At the cyber-layer, we apply FlipIt game to model the contention between the defender and attacker for the control of the 3D-printing system. To connect these two layers, we develop a Stackelberg framework to capture the interactions between cyber-layer attacker and defender game and the physical-layer controller and disturbance game, and define a new equilibrium concept that captures interdependence of the zero-sum and FlipIt games. We present numerical examples to better understand the equilibria and design defense strategies for 3D printers as a tradeoff between security and robustness."}
{"_id": "f1a10865f6075765d24fbd3b0017d1dee1639800", "title": "Waste Management as an IoT-Enabled Service in Smart Cities", "text": "Intelligent Transportation Systems (ITS) enable new services within Smart Cities. Efficient Waste Collection is considered a fundamental service for Smart Cities. Internet of Things (IoT) can be applied both in ITS and Smart cities forming an advanced platform for novel applications. Surveillance systems can be used as an assistive technology for high Quality of Service (QoS) in waste collection. Specifically, IoT components: (i) RFIDs, (ii) sensors, (iii) cameras, and (iv) actuators are incorporated into ITS and surveillance systems for efficient waste collection. In this paper we propose an advanced Decision Support System (DSS) for efficient waste collection in Smart Cities. The system incorporates a model for data sharing between truck drivers on real time in order to perform waste collection and dynamic route optimization. The system handles the case of ineffective waste collection in inaccessible areas within the Smart City. Surveillance cameras are incorporated for capturing the problematic areas and provide evidence to the authorities. The waste collection system aims to provide high quality of service to the citizens of a Smart City."}
{"_id": "3ff64e4a07be1e8c2f0b16f4736397be1405218a", "title": "Internet of Things Approach for Automation of the Complex Industrial Systems", "text": "This paper presents the analysis of the Internet of Things (IoT) approach and its application for the development of embedded monitoring and automatic control systems (EMACS) for technological objects and processes that are included in complex industrial systems. The functional structure and main components of EMACS based on IoT approach are given. The examples of IoT applications in design of specialized EMACS for such complex technical objects as gas turbine engines and floating docks are presented. Considerable attention is given to particular qualities of the functional structures, software and hardware implementation as well as multi-level human-machine interfaces of the EMACS for main process parameters. The developed EMACS provide: (a) high precision control of real-time operating processes of gas turbine engines and floating docks, (b) monitoring and automatic control of current technological parameters with high quality indicators, that leads to significant increasing of energy and economic efficiency of both industrial objects."}
{"_id": "8f2576492e468342659e3963fef17eee785ccac2", "title": "Human Motion Reconstruction from Action Video Data Using a 3-Layer-LSTM", "text": "Data driven motion generation is a challenging problem. Typically the model is trained using motion capture (MOCAP) data, which is limited in scope and diversity. Solving this problem using motion data extracted from Youtube videos would enable motion generation pipelines to feed from a wider range of action data, allowing for greater variability in the actions generated. Human action is expressed through a sequence of bodily movements. Hence, in this work we explore three stages for the motion synthesis pipeline: first by extracting human pose from video data with Part Affinity Fields method by [1], and then by training a 3-Layer Recurrent Neural Network (RNN) that learns to generate human-like motion for a given class from annotated video data, and finally we explore a simple synthesis of motion from multiple classes. Ultimately, this approach could enable robust generation of a wide range of motions, capturing the the subtle and fine variations of human movement expressed in videos that populate the Internet to artificially produce life like motion."}
{"_id": "50cf0a4dd4f868223eeec86274491a3875e70a27", "title": "EVE: explainable vector based embedding technique using Wikipedia", "text": "We present an unsupervised explainable vector embedding technique, called EVE, which is built upon the structure of Wikipedia. The proposed model defines the dimensions of a semantic vector representing a concept using human-readable labels, thereby it is readily interpretable. Specifically, each vector is constructed using the Wikipedia category graph structure together with the Wikipedia article link structure. To test the effectiveness of the proposed model, we consider its usefulness in three fundamental tasks: 1) intruder detection\u2014to evaluate its ability to identify a non-coherent vector from a list of coherent vectors, 2) ability to cluster\u2014to evaluate its tendency to group related vectors together while keeping unrelated vectors in separate clusters, and 3) sorting relevant items first\u2014to evaluate its ability to rank vectors (items) relevant to the query in the top order of the result. For each task, we also propose a strategy to generate a task-specific human-interpretable explanation from the model. These demonstrate the overall effectiveness of the explainable embeddings generated by EVE. Finally, we compare EVE with the Word2Vec, FastText, and GloVe embedding techniques across the three tasks, and report improvements over the state-of-the-art."}
{"_id": "59687d9646d99e81fbc45907e49c3075888fc217", "title": "Utility max-min fair resource allocation for communication networks with multipath routing", "text": "This paper considers the flow control and resource allocation problem as applied to the generic multipath communication networks with heterogeneous applications. We propose a novel distributed algorithm, show and prove that among all the sources with positive increasing and bounded utilities (no need to be concave) in steady state, the utility max-min fairness is achieved, which is essential for balancing QoS (Quality of Service) for different applications. By combining the first order Lagrangian method and filtering mechanism, the adopted approach eliminates typical oscillation behavior in multipath networks and possesses a rapid convergence property. In addition, the algorithm is capable of deciding the optimal routing strategy and distributing the total traffic evenly out of the available paths. The performance of our utility max-min fair flow control algorithm is evaluated through simulations under two representative case studies, as well as the real implementation issues are addressed deliberately for the practical purpose."}
{"_id": "70919a28fa94a94aa90daba0c8f9a7890ee1a892", "title": "3D High-Efficiency Video Coding for Multi-View Video and Depth Data", "text": "This paper describes an extension of the high efficiency video coding (HEVC) standard for coding of multi-view video and depth data. In addition to the known concept of disparity-compensated prediction, inter-view motion parameter, and inter-view residual prediction for coding of the dependent video views are developed and integrated. Furthermore, for depth coding, new intra coding modes, a modified motion compensation and motion vector coding as well as the concept of motion parameter inheritance are part of the HEVC extension. A novel encoder control uses view synthesis optimization, which guarantees that high quality intermediate views can be generated based on the decoded data. The bitstream format supports the extraction of partial bitstreams, so that conventional 2D video, stereo video, and the full multi-view video plus depth format can be decoded from a single bitstream. Objective and subjective results are presented, demonstrating that the proposed approach provides 50% bit rate savings in comparison with HEVC simulcast and 20% in comparison with a straightforward multi-view extension of HEVC without the newly developed coding tools."}
{"_id": "e6cf50180f74be11ffd7d9d520b36dd1650aae6c", "title": "Fuzzy logic controller for an inverted pendulum system using quantum genetic optimization", "text": "In this paper, we propose a new generalized design methodology of intelligent robust fuzzy control systems based on quantum genetic algorithm (QGA) called quantum fuzzy model that enhance robustness of fuzzy logic controllers. The QGA is adopted because of their capabilities of directed random search for global optimization to find the parameters of the shape and width of membership functions and rule set of the FLC to obtain the optimal fuzzy controller simultaneously. We test the optimal FLC obtained by the quantum computing applied on the control of dynamic balance and motion of cart-pole balancing system. We compare the proposed technique with existing mamdani fuzzy logic controller which is designed through conventional genetic algorithm. Simulation results reveal that QGA performs better than conventional GA in terms of running speed and optimizing capability."}
{"_id": "e5238abb6b17d13493e6d2c91e42f0ce8347a990", "title": "Sparse Recovery of Hyperspectral Signal from Natural RGB Images", "text": "Hyperspectral imaging is an important visual modality with growing interest and range of applications. The latter, however, is hindered by the fact that existing devices are limited in either spatial, spectral, and/or temporal resolution, while yet being both complicated and expensive. We present a low cost and fast method to recover high quality hyperspectral images directly from RGB. Our approach first leverages hyperspectral prior in order to create a sparse dictionary of hyperspectral signatures and their corresponding RGB projections. Describing novel RGB images via the latter then facilitates reconstruction of the hyperspectral image via the former. A novel, larger-than-ever database of hyperspectral images serves as a hyperspectral prior. This database further allows for evaluation of our methodology at an unprecedented scale, and is provided for the benefit of the research community. Our approach is fast, accurate, and provides high resolution hyperspectral cubes despite using RGB-only input."}
{"_id": "bc28d619b1dbde447acedcd926c7c06c3bdd8636", "title": "Supervised Learning Based Algorithm Selection for Deep Neural Networks", "text": "Many recent deep learning platforms rely on thirdparty libraries (such as cuBLAS) to utilize the computing power of modern hardware accelerators (such as GPUs). However, we observe that they may achieve suboptimal performance because the library functions are not used appropriately. In this paper, we target at optimizing the operations of multiplying a matrix with the transpose of another matrix (referred to as NT operation hereafter), which contribute half of the training time of fully connected deep neural networks. Rather than directly calling the library function, we propose a supervised learning based algorithm selection approach named MTNN, which uses a gradient boosted decision tree to select one from two alternative NT implementations intelligently: (1) calling the cuBLAS library function; (2) calling our proposed algorithm TNN that uses an efficient out-of-place matrix transpose. We evaluate the performance of MTNN on two modern GPUs: NVIDIA GTX 1080 and NVIDIA Titan X Pascal. MTNN can achieve 96% of prediction accuracy with very low computational overhead, which results in an average of 54% performance improvement on a range of NT operations. To further evaluate the impact of MTNN on the training process of deep neural networks, we have integrated MTNN into a popular deep learning platform Caffe. Our experimental results show that the revised Caffe can outperform the original one by an average of 28%. Both MTNN and the revised Caffe are open-source."}
{"_id": "e73f632aaec7b5a9695b0c7de4841f6da7885d85", "title": "Energy-Efficient Routing Protocols in Wireless Sensor Networks: A Survey", "text": "This paper represents energy efficient routing protocols in WSN. It is a collection of sensor nodes with a set of limited Processor and limited memory unit embedded in it. Reliable routing of packets from the sensor node to its base station is the most important task for the networks. The routing protocols applied for the other networks cannot be used here due to its battery powered nodes This paper gives an overview of the different routing strategies used in wireless sensor networks and gives a brief working model of energy efficient routing protocols in WSN. It also shows the comparison of these different routing protocols based on metrics such as mobility support, stability, issues and latency."}
{"_id": "7d388deff002d7a7562901148f3bdb98b25f0a30", "title": "All-Digital Background Calibration of a Successive Approximation ADC Using the \u201cSplit ADC\u201d Architecture", "text": "The \u201csplit ADC\u201d architecture enables fully digital calibration and correction of nonlinearity errors due to capacitor mismatch in a successive approximation (SAR) ADC. The die area of a single ADC design is split into two independent halves, each converting the same input signal. Total area and power is unchanged, resulting in minimal increase in analog complexity. For each conversion, the half-sized ADCs generate two independent outputs which are digitally corrected using estimates of capacitor mismatch errors for each ADC. The ADC outputs are averaged to produce the ADC output code. The difference of the two outputs is used in a background calibration algorithm which estimates the error in the correction parameters. Any nonzero difference drives an LMS feedback loop toward zero difference which can only occur when the average error in each correction parameter is zero. A novel segmentation and shuffling scheme in the SAR capacitive DAC enables background calibration for a wide range of input signals including dc. Simulation of a 16 bit 1 Msps SAR ADC in 180 nm CMOS shows calibration convergence within 200 000 samples."}
{"_id": "147fe6bfc76f30ccacc3620662511e452bc395f6", "title": "A Survey of Face Recognition Techniques", "text": "Face recognition presents a challenging problem in the field of image analysis and computer vision, and as such has received a great deal of attention over the last few years because of its many applications in various domains. Face recognition techniques can be broadly divided into three categories based on the face data acquisition methodology: methods that operate on intensity images; those that deal with video sequences; and those that require other sensory data such as 3D information or infra-red imagery. In this paper, an overview of some of the well-known methods in each of these categories is provided and some of the benefits and drawbacks of the schemes mentioned therein are examined. Furthermore, a discussion outlining the incentive for using face recognition, the applications of this technology, and some of the difficulties plaguing current systems with regard to this task has also been provided. This paper also mentions some of the most recent algorithms developed for this purpose and attempts to give an idea of the state of the art of face recognition technology."}
{"_id": "26d172f0a4d7e903ce388f3159059f9c5463e5c5", "title": "Algebraic feature extraction of image for recognition", "text": "The extraction of image features is one of the fundamental tasks in image recognition . Up until now, there have been several kinds of features to be used for the purpose of image recognition as follows: (1) visual features ; (2) statistical features of pixel ; (3) transform coefficient features . In addition, there is another kind of feature which the author believes is very useful, i.e . (4) algebraic features which represent intrinsic attributions of an image . Singular Values (SV) of image are this kind of feature . In this paper, we prove that SV feature vector has some important properties of algebraic and geometric invariance, and insensitiveness to noise . These properties are very useful for the description and recognition of images . As an example, SV feature vector is used for the problem of recognizing human facial images . In this paper, using SV feature vector samples of facial images, a normal pattern Bayes classification model based on Sammon s optimal descriminant plane is constructed . The experimental result shows that SV feature vector has good performance of class separation . Image recognition Algebraic feature extraction Singular value feature Facial image recognition Discrinlinant vector Dimensionality reduction"}
{"_id": "55206f0b5f57ce17358999145506cd01e570358c", "title": "Parameterisation of a stochastic model for human face identification", "text": "Stochastic modelling of non-stationary vector timeseries based on HMMs has been very successful for speech applications [5]. Recently it has been applied to a range of image recognition problems [7, 9]. Previously reported work [6] has investigated the use of HMMs to model human faces for identi cation purposes. Faces can be intuitively divided into regions such as the mouth, eyes, nose, etc., and these regions can be associated with the states of an HMM. The identi cation performance of a top-bottom HMM compares favourably with some of the well-known algorithms, for example eigenfaces as detailed in [8]. However, the HMM parameterisation in the work presented so far was arrived at by subjective intuition. This paper presents experimental results which show how identi cation rates vary with HMM parameters, and which indicate the most sensible choice of parameters. The paper is organised as follows: section 2 gives an overview of the HMM-based approach; section 3 details the training and recognition processes; section 4 describes the experimental setup; section 5 presents the identi cation results; section 6 concludes the paper."}
{"_id": "5985014dda6d502469614aae17349b4d08f9f74c", "title": "A comparative study of texture measures with classification based on featured distributions", "text": "-This paper evaluates the performance both of some texture measures which have been successfully used in various applications and of some new promising approaches proposed recently. For classification a method based on Kullback discrimination of sample and prototype distributions is used. The classification results for single features with one-dimensional feature value distributions and for pairs of complementary features with two-dimensional distributions are presented Texture analysis Classification Feature distribution Kullback discriminant Performance evaluation Brodatz textures l. I N T R O D U C T I O N Texture is an important characteristic for the analysis of many types of images. A wide variety of measures for discriminating textures have been proposed, c1'2~ Comparative studies to evaluate the performance of some texture measures have been carried out by Weszka et al., ~3~ Du Bufet al. ~4~ and Ohanian and Dubes, ~s) for"}
{"_id": "1270044a3fa1a469ec2f4f3bd364754f58a1cb56", "title": "Video-Based Face Recognition Using Probabilistic Appearance Manifolds", "text": "This paper presents a novel method to model and recognize human faces in video sequences. Each registered person is represented by a low-dimensional appearance manifold in the ambient image space. The complex nonlinear appearance manifold expressed as a collection of subsets (named pose manifolds), and the connectivity among them. Each pose manifold is approximated by an affine plane. To construct this representation, exemplars are sampled from videos, and these exemplars are clustered with a K-means algorithm; each cluster is represented as a plane computed through principal component analysis (PCA). The connectivity between the pose manifolds encodes the transition probability between images in each of the pose manifold and is learned from a training video sequences. A maximum a posteriori formulation is presented for face recognition in test video sequences by integrating the likelihoo d that the input image comes from a particular pose manifold and the transition probability to this pose manifold from th e previous frame. To recognize faces with partial occlusion, we introduce a weight mask into the process. Extensive experiments demonstrate that the proposed algorithm outperforms existing frame-based face recognition methods with temporal voting schemes."}
{"_id": "fd75ab282acbfa4f248c204af3e4574e9ea8d69b", "title": "Theory application by school-based occupational therapists.", "text": "OBJECTIVES\nRecent literature indicates that there is an inconsistent use of theory to guide clinical actions by occupational therapists, including those working in pediatrics. The purpose of this study was to describe school-based therapists' theory application by collecting information about what frames of reference they used and why.\n\n\nMETHOD\nOf the 72 school-based therapists in the mid-Atlantic states who agreed to respond to a questionnaire, 51 (70.8%) returned the questionnaire. Information about demographics, what frames of reference were used, and why they were used was obtained from the questionnaire.\n\n\nRESULTS\nRespondents reported using a multitheoretical approach, with sensory integration theory and neurodevelopmental theory being the predominate frames of reference applied but not the only ones used. The frames of reference were used on the basis of several factors, including the children's needs and the respondent's education.\n\n\nCONCLUSION\nFormal and continuing education seems to have a great effect on school-based occupational therapists as they develop their personal conceptual frameworks."}
{"_id": "497c64fa7bce496d9c3e89b5d9cc02a11b7a8c94", "title": "Geotagging Named Entities in News and Online Documents", "text": "News sources generate constant streams of text with many references to real world entities; understanding the content from such sources often requires effectively detecting the geographic foci of the entities. We study the problem of associating geography to named entities in online documents. More specifically, given a named entity and a page (or a set of pages) where the entity is mentioned, the problem being studied is how the geographic focus of the name can be resolved at a location granularity (e.g. city or country), assuming that the name has a geographic focus. We further study dispersion, and show that the dispersion of a name can be estimated with a good accuracy, allowing a geo-centre to be detected at an exact dispersion level. Two key features of our approach are: (i) minimal assumption is made on the structure of the mentions hence the approach can be applied to a diverse and heterogeneous set of web pages, and (ii) the approach is unsupervised, leveraging shallow English linguistic features and the large volume of location data in public domain.\n We evaluate our methods under different task settings and with different categories of named entities. Our evaluation reveals that the geo-centre of a name can be estimated with a good accuracy based on some simple statistics of the mentions, and that the accuracy of the estimation varies with the categories of the names."}
{"_id": "51e8f7f96a824dbb47a715fef622be06a70be36f", "title": "Increased human intestinal barrier permeability plasma biomarkers zonulin and FABP2 correlated with plasma LPS and altered gut microbiome in anxiety or depression.", "text": "We read with interest the recent work by Uhde et al, which demonstrated that physically asymptomatic non-coeliac gluten/ wheat sensitivity involves compromised intestinal epithelium barrier dysfunction in conjunction with systemic immune activation events. We also read with interest the recent work by Marchesi et al, which comprehensively reviewed the role of microbiota in physical disorders of the gut and extra-gut organs. But common to these Gut papers was the lack of accounting for anxiety and depression, comorbidities often experienced in gastroenterology clinics. Patients who are otherwise physically asymptomatic often do not explicitly divulge these mental disorders, or the disorders are unintentionally overlooked, yet they report \u2018diminished quality of life\u2019. In response to this gap, we explored roles of dysbiosis and gut barrier integrity in individuals physically asymptomatic for gastrointestinal distress, yet nonetheless experiencing mental distress. We hypothesised that anxiety and depressive disorders are linked to human gut dysbiosis with microbiota that secrete lipopolysaccharide (LPS) endotoxin into plasma, which in conjunction with compromised gut barrier integrity has systemic manifestations including the brain. We further hypothesised that this correlates with altered intestinal epithelium paracellular integrity molecules discharged into blood as biomarkers, namely zonulin and intestinal fatty acidbinding protein-2 (FABP2). Zonulin is the master regulator of endothelial and epithelium tight junctions, modulating both gut and blood\u2013brain barriers. We analysed stool faecal microbiota and plasma in 50 subjects who self-reported to be asymptomatic for gastrointestinal physical disorders; 22 volunteers met Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition criteria for a depressive disorder or an anxiety disorder (DEP/ ANX), and 28 more were control reference subjects (NODEPANX). Subjects refrained from probiotics and antibiotics during 30 days prior to providing samples (see online supplementary methods). LEfSe analyses (online supplementary figures S1 and S2) revealed gut dysbiosis in the DEP/ANX subjects, in contrast to microbiome of control subjects. PICRUSt analyses (figure 1) predicted KEGG orthologue gene differences consistent with a pathophysiological gut metagenomic signature in DEP/ANX versus NODEPANX microbiome. Notable in figure 1 were over-represented LPS biosynthesis genes in the gut microbiome of PostScript Gut Online First, published on August 16, 2017 as 10.1136/gutjnl-2017-314759"}
{"_id": "4894c3a0dc1f3a5064ed013809e025a33afa6eaf", "title": "What do exploratory searchers look at in a faceted search interface?", "text": "This study examined how searchers interacted with a web-based, faceted library catalog when conducting exploratory searches. It applied eye tracking, stimulated recall interviews, and direct observation to investigate important aspects of gaze behavior in a faceted search interface: what components of the interface searchers looked at, for how long, and in what order. It yielded empirical data that will be useful for both practitioners (e.g., for improving search interface designs), and researchers (e.g., to inform models of search behavior). Results of the study show that participants spent about 50 seconds per task looking at (fixating on) the results, about 25 seconds looking at the facets, and only about 6 seconds looking at the query itself. These findings suggest that facets played an important role in the exploratory search process."}
{"_id": "358e2ae243cb022938ae3d40ea0ac112319a6325", "title": "Family presence during resuscitation and/or invasive procedures in the Emergency Department: one size does not fit all.", "text": ""}
{"_id": "08fa0252365bda9bf7ea5b70a48dd4f744519fed", "title": "Security for Online Social Networks : Challenges", "text": "nline social networks (OSNs) [1, 2] such as Facebook, MySpace, and Twitter enable people to stay in touch with their contacts, reconnect with old acquaintances, and establish new relationships with other people based on shared features such as communities, hobbies, interests, and overlaps in friendship circles. Recent years have seen unprecedented growth in the application of OSNs, with about 300 OSN systems collecting information on more than half a billion registered users [2]. As a result, OSNs store a huge amount of possibly sensitive and private information on users and their interactions. This information is usually private and intended for the eyes of a specific audience only. However, the popularity of OSNs attracts not only faithful users but parties with rather adverse interests as well [3, 4]. The diversification and sophistication of purposes and usage patterns of OSNs inevitably introduce privacy infringement risks to all OSN users as a result of information exchange and sharing on the Internet. It is therefore not surprising that stories about privacy breaches by Facebook and MySpace appear repeatedly in mainstream media [3\u20135]. Regardless of the purpose of an OSN, one of the main motivations for users to join an OSN, create a profile, and use different applications offered by the OSN is the possibility to easily share information with selected contacts or the public, and facilitate social interactions between the users of OSNs. Disclosing personal information in OSNs is a double-edged sword [6]. On one hand, information exposure is usually a plus, even a must, if people want to participate in social communities. Visibility of users' profiles and public display of connections (friend lists) are necessary for implementing core functionalities of OSNs such as social search and social traver-sal. On the other hand, leakage of personal information, especially one's identity, may invite malicious attacks from the real world and cyberspace, such as stalking, reputation slander, personalized spamming, and phishing [7]. Despite the risks, many of the privacy and access control mechanisms of today's OSNs are purposefully weak to make joining the OSN and sharing information easy. We believe that more effective and flexible security mechanisms are therefore required for the safety of OSN users as well as the continued thriving of OSNs. In this article we present a general framework for assessing the security and privacy of current and next-generation OSNs. Whereas others have considered specific mechanisms for improving OSN users' security and privacy, \u2026"}
{"_id": "63cfe7302d8334c097f90dff5f8e3991590cb41d", "title": "Computer aided inbetweening", "text": "The production of inbetweens is a tedious task for animators and a complicated one for algorithms. In this paper, an algorithm for computer aided inbetweening and its integration in a pen-based graphical user interface are presented.The algorithm is layer-based, assuming an invariant layering order. It is applicable to animations in a style similar to paper cut out, in which the drawings on the cut-out pieces are inbetweened as well. The content of each key drawing is analysed and classified into strokes, chains of strokes and relations that hold among them. Rules decide what parts of different drawings may be matched. These rules specify allowed changes between relations in key drawings. A cost function based approach determines the correct matching of strokes. Generated animation paths between corresponding strokes determine the resulting inbetweens.To hold for possible mismatchings and to allow for artistic control over the results, the inbetweening algorithm is embedded in a free form graphic user interface. Thus artists are enabled to focus on the part of the inbetweening task computers are not able to solve."}
{"_id": "90ed29e10e65c0fa13b64903eeba0fef1ef3cc60", "title": "Robust Control Over a Packet-based Network", "text": "In this paper, we consider a robust network control problem. We consider linear unstable and uncertain discrete time plants with a network between the sensors and controller and the controller and plant. We investigate two defining characteristics of network controlled systems and the impact of uncertainty on these. Namely, the minimum data rates required for the two networks and the tolerable data drop out in the form of packet losses. We are able to derive sufficient conditions in terms of the minimum data rate and minimum packet arrival rate to ensure stability of the closed loop system. I. I NTRODUCTION Recently, networked control systems (NCS) have gained great attention from both the control community and the network and communication community. When compared with classical feedback control system, networked control systems have many advantages. For example, they can reduce the system wiring, make the system easy to operate and maintain and later diagnose in case of malfunctioning, and increase system agility [16]. In spite of the great advantages that the networked control architecture brings, inserting a network in between the plant and the controller introduces many problems as well. For instance, zero-delayed sensing and actuation, perfect information and synchronization are no longer guaranteed in the new system architecture as only finite bandwidth is available and packet drops and delays may occur due to network traffic conditions. These must be revisited and analyzed before any practical networked control systems are built. In the past decade, many researchers have spent effort on those issues and a number of significant results were obtained and many are in progress. Many of the aforementioned issues are studied separately. Tatikonda [15] and Sahai [11] have presented some interesting results in the area of control under communication constraints. Specifically, Tatikonda gave a necessary and sufficient condition on the channel data rate such that a noiseless LTI system in the closed loop is asymptotically stable. He also gave rate results for stabilizing a noisy LTI system over the digital channel. Sahai proposed the notion of anytime capacity to deal with real time estimation and control for a networked control system. In our companion paper [13], the authors have considered various rate issues under finite bandwidth, packet drops and finite controls. The effect of pacekt loss and delay on state Control and Dynamical Systems, California Institute of Technology; Pasadena, CA 91125. Email:{shiling, epstein, murray }@cds.caltech.edu; Tel: (626) 395-2313, Fax: (626) 796-8914. Work supported in part by NSF ITR grant CCR-0326554. The authors are equally contributed to this work. estimation was studied by the work of Sinopoli, et. al. in [2]. It has further been investigated by many researchers including the present authors in [12] and [5]. One of the hallmarks of a good control system design is that the closed loop remain stable in the presence of uncertainty [3], [4]. While the researchers in [7] studied the problem of LQG control across a packet dropping networks, not many have considered the norm bounded uncertainty investigated in the present paper. We examine the impact of a norm bounded uncertainty on the network control system and provide sufficient conditions for stability in terms of the minimum data rates and packet arrival rates for the networks. 
The paper is organized as follows. In Section II, we present the mathematical model of the closed loop system and state our assumptions. In Section III, we state the sufficient conditions for closed loop stability for the case where a network connects the sensors to the controller. In Section IV, we state the sufficient stability conditions where in addition there is a network between the controller and the plant. For both sections we obtain results for scalar and general vector cases. Conclusions and future work are given in the last section. II. PROBLEM SET UP We consider linear discrete time systems with a norm bounded uncertainty in the A matrix. We will investigate two NCS that we will define by the type of networks embedded in the control loop. The first NCS considered has a network between the measurement sensors and the controller, with the controller then directly connected to the actuators/plant. The second NCS will also include a network between the controller and the actuators/plant. These two network types and depicted in Figures 1 and 2. The networks are defined in terms of their data rates and probability of dropping packets. We would consider any packet delays as losses, i. ., we do not use delayed packets for estimation or control. The following equations represent the closed loop system for NCS I (Figure 1). xk+1 = (A + \u2206k)xk + Buk (1) yk = \u03bbkCxk (2) wherexk \u2208 IR is the state of the system, uk \u2208 IR is the control input,yk \u2208 IR is the output of the system, and \u03bbk are Bernoulli i.i.d random variable with parameter \u03bb, i.e., E[\u03bbk] = \u03bb for all k. \u2206k satisfies\u2206k \u2206k \u2264 KI for all k. We also assume the initial condition x0 \u2208 IR is bounded. The matrix A is assumed to be unstable without loss of generality as for any matrix A, we can always do some state transformation to decompose the states into stable ones and"}
{"_id": "bc687c9b91f444893fdd59cb10187bb8d731636a", "title": "Model-based fault-detection and diagnosis - status and applications", "text": "For the improvement of reliability, safety and efficiency advanced methods of supervision, fault detection and fault diagnosis become increasingly important for many technical processes. This holds especially for safety related processes like aircraft, trains, automobiles, power plants and chemical plants. The classical approaches are limit or trend checking of some measurable output variables. Because they do not give a deeper insight and usually do not allow a fault diagnosis, model-based methods of fault detection were developed by using input and output signals and applying dynamic process models. These methods are based, e.g., on parameter estimation, parity equations or state observers. Also signal model approaches were developed. The goal is to generate several symptoms indicating the difference between nominal and faulty status. Based on different symptoms fault diagnosis procedures follow, determining the fault by applying classification or inference methods. This contribution gives a short introduction into the field and shows some applications for an actuator, a passenger car and a combustion engine. Copyright \u00a9 2004 IFAC"}
{"_id": "472025e00666282f9cd00581a09040b0e03ecfc6", "title": "A layered software architecture for quantum computing design tools", "text": "Compilers and computer-aided design tools are essential for fine-grained control of nanoscale quantum-mechanical systems. A proposed four-phase design flow assists with computations by transforming a quantum algorithm from a high-level language program into precisely scheduled physical actions."}
{"_id": "f3269b2a45353533838f90f65eb75c137146f9bf", "title": "Graph-Based Relational Data Visualization", "text": "Relational databases are rigid-structured data sources characterized by complex relationships among a set of relations (tables). Making sense of such relationships is a challenging problem because users must consider multiple relations, understand their ensemble of integrity constraints, interpret dozens of attributes, and draw complex SQL queries for each desired data exploration. In this scenario, we introduce a twofold methodology, we use a hierarchical graph representation to efficiently model the database relationships and, on top of it, we designed a visualization technique for rapidly relational exploration. Our results demonstrate that the exploration of databases is deeply simplified as the user is able to visually browse the data with little or no knowledge about its structure, dismissing the need for complex SQL queries. We believe our findings will bring a novel paradigm in what concerns relational data comprehension."}
{"_id": "8ae85442fc4752f18e9ac814b4b616c56aed2cc9", "title": "A novel LLC resonant controller with best-in-class transient performance and low standby power consumption", "text": "This paper introduces a novel LLC resonant controller with better light load efficiency and best-in-class transient performance. A new control algorithm \u2014 hybrid hysteretic control, is proposed which combines the benefit of charge control and frequency control. It maintains the good transient performance of charge control, but avoids the related stability issues by adding slope compensation. The slope compensation also helps to sense the resonant capacitor voltage by only using lossless capacitor divider. Burst mode operation is developed to improve the light load efficiency. By turning on the power devices for only a short period during light load, it helps to reduce the equivalent switching frequency and achieve high efficiency. The effectiveness of the proposed controller has been validated in a 12 V/120 W half-bridge LLC converter, as well as a commercial power supply unit."}
{"_id": "7a45727105b1a131f7dd5cb222fd29f05455f499", "title": "A Markerless Augmented Reality Approach Based on Real-Time 3 D Reconstruction using Kinect", "text": "In this paper we present a markerless augmented reality (MAR) approach based on real-time 3D reconstruction using a low-cost depth camera, the Kinect. We intend to use this MAR approach for 3D medical visualization. In the actual state of developing, our approach is based on the KinectFusion algorithm with some minor adaptations, as long as our goal is not reconstruct scenes but use the reconstructed model as a reference to MAR. Keywords-Augmented Reality; 3D Reconstruction; Kinect;"}
{"_id": "2042f00374d90001f3a73b0708978385df809c96", "title": "Donna Interactive Chat-bot acting as a Personal Assistant", "text": "Chat-bots are computer programs coded to have a textual or verbal conversation which is logical or intelligent. Chat-bots are designed to make humans believe that they are talking to a human; but instead they are in fact talking to a machine. Taking advantage of this transparency property of chat-bot, an artificial character and personality can be given to a chat-bot which acts like a person of a specific profession. This paper describes an approach to the idea of implementing web-based artificially intelligent chat-bot as a personal assistant of the user, which stimulates setting and initiating meetings of user with his clients. The exchange of information happens through email conversations whereas its evaluation happens through natural language procession and natural language generation and AIML files."}
{"_id": "5a89d71137ba1e217e75ae6be532f9d6b0c4ddab", "title": "The gut microbiota shapes intestinal immune responses during health and disease", "text": "Immunological dysregulation is the cause of many non-infectious human diseases such as autoimmunity, allergy and cancer. The gastrointestinal tract is the primary site of interaction between the host immune system and microorganisms, both symbiotic and pathogenic. In this Review we discuss findings indicating that developmental aspects of the adaptive immune system are influenced by bacterial colonization of the gut. We also highlight the molecular pathways that mediate host\u2013symbiont interactions that regulate proper immune function. Finally, we present recent evidence to support that disturbances in the bacterial microbiota result in dysregulation of adaptive immune cells, and this may underlie disorders such as inflammatory bowel disease. This raises the possibility that the mammalian immune system, which seems to be designed to control microorganisms, is in fact controlled by microorganisms."}
{"_id": "b8b841750a536ceebd5f5e7ff1390f8afc1d7022", "title": "The FrameNet Database and Software Tools", "text": "The FrameNet Project is producing a lexicon of English for both human use and NLP applications, based on the principles of Frame Semantics, in which sentences are described on the basis of predicators which evoke semantic frames and other constituents which express the participants (frame elements) in these frames. Our lexicon contains detailed information about the possible syntactic realizations of frame elements, derived from annotated corpus examples. In the process, we have developed a suite of tools for the definition of semantic frames, for annotating sentences, for searching the results, and for creating a variety ofreports. We will discuss the conceptual basis ofour work and demonstrate the tools we work with, the results we produce, and how they may be ofuse to other NLP projects."}
{"_id": "23e2dde69ec07f1b4f55b01dc1d8ebc834d7bf47", "title": "A new look at survey propagation and its generalizations", "text": "We study the survey propagation algorithm [19, 5, 4], which is an iterative technique that appears to be very effective in solving random k-SAT problems even with densities close to threshold. We first describe how any SAT formula can be associated with a novel family of Markov random fields (MRFs), parameterized by a real number \u03c1. We then show that applying belief propagation---a well-known \"message-passing technique---to this family of MRFs recovers various algorithms, ranging from pure survey propagation at one extreme (\u03c1 = 1) to standard belief propagation on the uniform distribution over SAT assignments at the other extreme (\u03c1 = 0). Configurations in these MRFs have a natural interpretation as generalized satisfiability assignments, on which a partial order can be defined. We isolate cores as minimal elements in this partial ordering, and prove that any core is a fixed point of survey propagation. We investigate the associated lattice structure, and prove a weight-preserving identity that shows how any MRF with p > 0 can be viewed as a \"smoothed\" version of the naive factor graph representation of the k-SAT problem (p = 0). Our experimental results show that message-passing on our family of MRFs is most effective for values of \u03c1 \u2260 1 (i.e., distinct from survey propagation); moreover, they suggest that random formulas may not typically possess non-trivial cores. Finally, we isolate properties of Gibbs sampling and message-passing algorithms that are typical for an ensemble of k-SAT problems. We prove that the space of cores for random formulas is highly disconnected, and show that for values of p sufficiently close to one, either the associated MRF is highly concentrated around the all-star assignment, or it has exponentially small conductance. Similarly, we prove that for p sufficiently close to one, the all-star assignment is attractive for message-passing when analyzed in the density-evolution setting."}
{"_id": "ad9cbf31a1cd6a71e773a0d3e93c489304327174", "title": "A Powered Leg Orthosis for Gait Rehabilitation of Motor-Impaired Patients", "text": "This paper describes a powered leg orthosis for gait rehabilitation of patients with walking disabilities. The paper proposes controllers which can apply suitable forces on the leg so that it moves on a desired trajectory. The description of the controllers, simulations and experimental results with the powered orthosis are presented in the paper. Currently, experiments have been performed with a dummy leg in the orthosis. In the coming months, this powered orthosis will be used on healthy subjects and stroke patients."}
{"_id": "38b389580d774ce513284e671ff3bbcef0258de2", "title": "Rubik: Knowledge Guided Tensor Factorization and Completion for Health Data Analytics", "text": "Computational phenotyping is the process of converting heterogeneous electronic health records (EHRs) into meaningful clinical concepts. Unsupervised phenotyping methods have the potential to leverage a vast amount of labeled EHR data for phenotype discovery. However, existing unsupervised phenotyping methods do not incorporate current medical knowledge and cannot directly handle missing, or noisy data.\n We propose Rubik, a constrained non-negative tensor factorization and completion method for phenotyping. Rubik incorporates 1) guidance constraints to align with existing medical knowledge, and 2) pairwise constraints for obtaining distinct, non-overlapping phenotypes. Rubik also has built-in tensor completion that can significantly alleviate the impact of noisy and missing data. We utilize the Alternating Direction Method of Multipliers (ADMM) framework to tensor factorization and completion, which can be easily scaled through parallel computing.\n We evaluate Rubik on two EHR datasets, one of which contains 647,118 records for 7,744 patients from an outpatient clinic, the other of which is a public dataset containing 1,018,614 CMS claims records for 472,645 patients. Our results show that Rubik can discover more meaningful and distinct phenotypes than the baselines. In particular, by using knowledge guidance constraints, Rubik can also discover sub-phenotypes for several major diseases. Rubik also runs around seven times faster than current state-of-the-art tensor methods. Finally, Rubik is scalable to large datasets containing millions of EHR records."}
{"_id": "2188fa9a2d811a5a9674a6cf4aa9aed6d0014279", "title": "Novel Vector Control Method for Three-Stage Hybrid Cascaded Multilevel Inverter", "text": "A three-stage 18-level hybrid inverter circuit and its innovative control method have been presented. The three hybrid inverter stages are the high-, medium-, and low-voltage stages. The high-voltage stage is made of a three-phase conventional inverter to reduce dc source cost and losses. The medium- and low-voltage stages are made of three-level inverters constructed using cascaded H-bridge units. The novelty of the proposed algorithm is to avoid the undesirable high switching frequency for high- and medium-voltage stages despite the fact that the inverter's dc sources are selected to maximize the inverter levels by eliminating redundant voltage states. Switching algorithms of the high- and medium-voltage stages have been developed to assure fundamental switching frequency operation of the high-voltage stages and not more than few times this frequency for the medium-voltage stage. The low-voltage stage is controlled using SVM to achieve the reference voltage vector exactly and to set the order of dominant harmonics as desired. The realization of this control approach has been enabled by considering the vector space plane in the state selection rather than individual phase levels. The inverter has been constructed, and the control algorithm has been implemented. Test results show that the proposed algorithm achieves the claimed features, and all major hypotheses have been verified."}
{"_id": "333416708c80d0c163ca275d1b190b1f2576fa5f", "title": "Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks", "text": "We present an approach for the verification of feed-forward neural networks in which all nodes have a piece-wise linear activation function. Such networks are often used in deep learning and have been shown to be hard to verify for modern satisfiability modulo theory (SMT) and integer linear programming (ILP) solvers. The starting point of our approach is the addition of a global linear approximation of the overall network behavior to the verification problem that helps with SMT-like reasoning over the network behavior. We present a specialized verification algorithm that employs this approximation in a search process in which it infers additional node phases for the non-linear nodes in the network from partial node phase assignments, similar to unit propagation in classical SAT solving. We also show how to infer additional conflict clauses and safe node fixtures from the results of the analysis steps performed during the search. The resulting approach is evaluated on collision avoidance and handwritten digit recognition case studies."}
{"_id": "94187ef33e34af2cdb42502083c6f9b4c3f5ba6b", "title": "Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples", "text": "Machine learning (ML) models, e.g., state-of-the-art deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We then use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. After labeling 6,400 synthetic inputs to train our substitute (an order of magnitude smaller than the training set used by MetaMind), we find that their DNN misclassifies adversarial examples crafted with our substitute at a rate of 84.24%. We demonstrate that our strategy generalizes to many ML techniques like logistic regression or SVMs, regardless of the ML model chosen for the substitute. We instantiate the same attack against models hosted by Amazon and Google, using logistic regression substitutes trained with only 800 label queries. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder. We conclude with experiments exploring why adversarial samples transfer between models."}
{"_id": "1b8c1632670047f3fd6625463511ad661f767b44", "title": "Alphasort: A cache-sensitive parallel external sort", "text": "A new sort algorithm, called AlphaSort, demonstrates that commodity processors and disks can handle commercial batch workloads. Using commodity processors, memory, and arrays of SCSI disks, AlphaSort runs the industrystandard sort benchmark in seven seconds. This beats the best published record on a 32-CPU 32-disk Hypercube by 8:1. On another benchmark, AlphaSort sorted more than a gigabyte in one minute. AlphaSort is a cache-sensitive, memoryintensive sort algorithm. We argue that modern architectures require algorithm designers to re-examine their use of the memory hierarchy. AlphaSort uses clustered data structures to get good cache locality, file striping to get high disk bandwidth, QuickSort to generate runs, and replacement-selection to merge the runs. It uses shared memory multiprocessors to break the sort into subsort chores. Because startup times are becoming a significant part of the total time, we propose two new benchmarks: (1) MinuteSort: how much can you sort in one minute, and (2) PennySort: how much can you sort for one penny."}
{"_id": "6667816b50e4c0f01f3e639e5bd4492660c640d6", "title": "Universal OWL Axiom Enrichment for Large Knowledge Bases", "text": "The Semantic Web has seen a rise in the availability and usage of knowledge bases over the past years, in particular in the Linked Open Data initiative. Despite this growth, there is still a lack of knowledge bases that consist of high quality schema information and instance data adhering to this schema. Several knowledge bases only consist of schema information, while others are, to a large extent, a mere collection of facts without a clear structure. The combination of rich schema and instance data would allow powerful reasoning, consistency checking, and improved querying possibilities as well as provide more generic ways to interact with the underlying data. In this article, we present a light-weight method to enrich knowledge bases accessible via SPARQL endpoints with almost all types of OWL 2 axioms. This allows to semiautomatically create schemata, which we evaluate and discuss using DBpedia."}
{"_id": "599b0b00439c07121f314cc08b07cf74d9e50cca", "title": "Role-Based Access Controls", "text": "While Mandatory Access Controls (MAC) are appropriate for multilevel secure military applications, Discretionary Access Controls (DAC) are often perceived as meeting the security processing needs of industry and civilian government. This paper argues that reliance on DAC as the principal method of access control is unfounded and inappropriate for many commercial and civilian government organizations. The paper describes a type of non-discretionary access control role-based access control (RBAC) \u00ad that is more central to the secure processing needs of non-military systems than DAC."}
{"_id": "9f9128d064e36473bf29c3d0edb1663daa2dd6a9", "title": "Leveraging the schema in latent factor models for knowledge graph completion", "text": "We focus on the problem of predicting missing links in large Knowledge Graphs (KGs), so to discover new facts. Over the last years, latent factor models for link prediction have been receiving an increasing interest: they achieve state-of-the-art accuracy in link prediction tasks, while scaling to very large KGs. However, KGs are often endowed with additional schema knowledge, describing entity classes, their sub-class relationships, and the domain and range of each predicate: the schema is actually not used by latent factor models proposed in the literature. In this work, we propose an unified method for leveraging additional schema knowledge in latent factor models, with the aim of learning more accurate link prediction models. Our experimental evaluations show the effectiveness of the proposed method on several KGs."}
{"_id": "c426756dfbd7f835ace20548286dda200923abf4", "title": "A new miniaturized antenna for ISM 433 MHz frequency band", "text": "A new miniaturized antenna for 433MHz frequency band is introduced for sensor networks. The antenna is a magnetic monopole loaded with a capacitor, which can significantly decrease the resonant frequency of the structure. It is designed to operate in the ISM 433MHz band on a small PCB (103mm\u22c655mm) and has at least 3MHz bandwidth. Moreover, its radiation efficiency is suitable for the possible applications and its pattern is omnidirectionnal."}
{"_id": "ea56f138f81e3edca28a508599ee253337d936d5", "title": "HeidelPlace: An Extensible Framework for Geoparsing", "text": "Geographic information extraction from textual data sources, called geoparsing, is a key task in text processing and central to subsequent spatial analysis approaches. Several geoparsers are available that support this task, each with its own (often limited or specialized) gazetteer and its own approaches to toponym detection and resolution. In this demonstration paper, we present HeidelPlace, an extensible framework in support of geoparsing. Key features of HeidelPlace include a generic gazetteer model that supports the integration of place information from different knowledge bases, and a pipeline approach that enables an effective combination of diverse modules tailored to specific geoparsing tasks. This makes HeidelPlace a valuable tool for testing and evaluating different gazetteer sources and geoparsing methods. In the demonstration, we show how to set up a geoparsing workflow with HeidelPlace and how it can be used to compare and consolidate the output of different geoparsing approaches."}
{"_id": "97e6ed1f7e5de0034f71c370c01f59c87aaf9a72", "title": "Learning Recurrent Span Representations for Extractive Question Answering", "text": "The reading comprehension task, that asks questions about a given evidence document, is a central problem in natural language understanding. Recent formulations of this task have typically focused on answer selection from a set of candidates pre-defined manually or through the use of an external NLP pipeline. However, Rajpurkar et al. (2016) recently released the SQUAD dataset in which the answers can be arbitrary strings from the supplied text. In this paper, we focus on this answer extraction task, presenting a novel model architecture that efficiently builds fixed length representations of all spans in the evidence document with a recurrent network. We show that scoring explicit span representations significantly improves performance over other approaches that factor the prediction into separate predictions about words or start and end markers. Our approach improves upon the best published results of Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.\u2019s baseline by > 50%."}
{"_id": "853ac9d5ae2662b8e33946d106b261005e391fed", "title": "Optimizing 2D Gabor Filters for Iris Recognition", "text": "The randomness and richness present in the iris texture make the 2D Gabor filter bank analysis a suitable technique to be used for iris recognition systems. To accurately characterize complex texture structures using 2D Gabor filters it is necessary to use multiple sets of parameters of this type of filters. This paper proposes a technique of optimizing multiple sets of 2D Gabor filter parameters to gradually enhance the accuracy of an iris recognition system. The proposed methodology is suitable to be applied on both near infrared and visible spectrum iris images. To illustrate the efficiency of the filter bank design technique, UBIRISv1 database was used for benchmarking."}
{"_id": "e2c0546cc8288f3b736e01aa7fb92089bbb45204", "title": "Investigation of a new GaN AC/DC topology for battery charging application", "text": "Switching losses are reduced by gallium nitride (GaN) technology, which allows faster device switching speeds. At the same time, the weight and size of converters utilizing these devices are greatly reduced in several applications by using high frequency (HF) transformers. A new, low voltage high current, GaN-based AC/DC topology for battery charging applications is simulated, modeled, and investigated in this paper. The proposed topology is designed to operate at 1 MHz for 120Vac to 48Vdc/60Vdc conversion for 1.4 kW load. The integration of GaN totem-pole power factor correction rectifier (TP-PFC), high-frequency series-resonant converter (SRC), and current doubler rectifier (CDR) presents an opportunity to further increase the efficiency and compactness of this GaN battery charging system. In this work, a new GaN topology to combine the advantages of TP-PFC and SRC with CDR in a battery charging converter is investigated, simulated, and modelled. Loss analysis of the proposed topology is addressed."}
{"_id": "2fd321ca02b89c93cc57f14bf4b2b0912dbd3893", "title": "BodyAvatar: creating freeform 3D avatars using first-person body gestures", "text": "BodyAvatar is a Kinect-based interactive system that allows users without professional skills to create freeform 3D avatars using body gestures. Unlike existing gesture-based 3D modeling tools, BodyAvatar centers around a first-person \"you're the avatar\" metaphor, where the user treats their own body as a physical proxy of the virtual avatar. Based on an intuitive body-centric mapping, the user performs gestures to their own body as if wanting to modify it, which in turn results in corresponding modifications to the avatar. BodyAvatar provides an intuitive, immersive, and playful creation experience for the user. We present a formative study that leads to the design of BodyAvatar, the system's interactions and underlying algorithms, and results from initial user trials."}
{"_id": "24609c772591cb0d8af56798c335a2bd31cc36d2", "title": "Reduction in gravitational torques of an industrial robot equipped with 2 DOF passive counterbalance mechanisms", "text": "In most 6 DOF robot arms, considerable amounts of gravitational torques due to the robot's own weight are applied to pitch joints of the robot, which causes most arms to use high capacity motors and speed reducers. A spring-based counterbalance mechanism can compensate for this gravitational torque, thus leading to a significant reduction in the effect of gravity. However, a simple installation of counterbalance mechanisms at each pitch joint does not work properly because the gravitational torque at each joint is dependent also on the other joints. To achieve multi-DOF counterbalancing, we propose a parallelogram linkage combined with dual counterbalance mechanisms, each being composed of a slider-crank mechanism and springs. Simulations and experimental results showed that the counterbalance robot arm based on the proposed counterbalance mechanisms effectively reduced the torques required to support the robot mass, thus allowing the prospective use of much smaller motors and speed reducers than traditional industrial robots."}
{"_id": "813df20116a4a09b7d82783cb2480594840a4ffa", "title": "Simple Personalized Search Based on Long-Term Behavioral Signals", "text": "Extensive research has shown that content-based Web result ranking can be significantly improved by considering personal behavioral signals (such as past queries) and global behavioral signals (such as global click frequencies). In this work we present a new approach to incorporating click behavior into document ranking, using ideas of click models as well as learning to rank. We show that by training a click model with pairwise loss, as is done in ranking problems, our approach achieves personalized reranking performance comparable to the stateof-the-art while eliminating much of the complexity required by previous models. This contrasts with other approaches that rely on complex feature engineering."}
{"_id": "332ff7330c094fec09d5bec886ea8bcaca99c023", "title": "Development of a Biomimetic Hand Exotendon Device (BiomHED) for Restoration of Functional Hand Movement Post-Stroke", "text": "Significant functional impairment of the hand is common among stroke survivors and restoration of hand function should be prioritized during post-stroke rehabilitation. The goal of this study was to develop a novel biomimetic device to assist patients in producing complex hand movements with a limited number of actuators. The Biomimetic Hand Exoskeleton Device (BiomHED) is actuated by exotendons that mimic the geometry of the major tendons of the hand. Ten unimpaired subjects and four chronic stroke survivors participated in experiments that tested the efficacy of the system. The exotendons reproduced distinct spatial joint coordination patterns similar to their target muscle-tendon units for both subject groups. In stroke survivors, the exotendon-produced joint angular displacements were smaller, but not significantly different, than those of unimpaired subjects (\\mbi p = 0.15-0.84). Even with limited use of the BiomHED, the kinematic workspace of the index finger increased by 63%-1014% in stroke survivors. The device improved the kinematics of the tip-pinch task in stroke survivors and resulted in a significant reduction in the fingertip-thumb tip distance ( 17.9 \u00b115.3 mm). This device is expected to enable effective \u201ctask-oriented\u201d training of the hand post-stroke."}
{"_id": "ed416bf3e3a5d8e1259415eafa9ae20caf25c870", "title": "Local-global ranking for facial expression intensity estimation", "text": "Facial action units provide an objective characterization of facial muscle movements. Automatic estimation of facial action unit intensities is a challenging problem given individual differences in neutral face appearances and the need to generalize across different pose, illumination and datasets. In this paper, we introduce the Local-Global Ranking method as a novel alternative to direct prediction of facial action unit intensities. Our method takes advantage of the additional information present in videos and image collections of the same person (e.g. a photo album). Instead of trying to estimate facial expression intensities independently for each image, our proposed method performs a two-stage ranking: a local pair-wise ranking followed by a global ranking. The local ranking is designed to be accurate and robust by making a simple 3-class comparison (higher, equal, or lower) between randomly sampled pairs of images. We use a Bayesian model to integrate all these pair-wise rankings and construct a global ranking. Our Local-Global Ranking method shows state-of-the-art performance on two publicly-available datasets. Our cross-dataset experiments also show better generalizability."}
{"_id": "aa830642a9fa3502447054d231f4b34b30ef3846", "title": "Development of humanoid robot HRP-3P", "text": "A development of humanoid robot HRP-3P, which is a humanoid robot HRP-3 prototype, is presented in this paper. HRP-3 is under development as the succession humanoid to HRP-2, which we developed in phase two of HRP (humanoid robotics project). One of futures of HRP-3P is that its main mechanical and structural components are designed for protection against dust and water. Another is that node controllers are developed for realization of distributed control system. Real-time communication on Ethernet is also newly developed for communication between main controller and node controllers. In this paper, mechanical features and electrical features are presented with specifications of HRP-3P"}
{"_id": "3ae166428b87d28cf85c61e14b8cade835b39373", "title": "An Algebraic Fault Attack on the LED Block Cipher", "text": "In this paper we propose an attack on block ciphers where we combine techniques derived from algebraic and fault based cryptanalysis. The recently introduced block cipher LED serves us as a target for our attack. We show how to construct an algebraic representation of the encryption map and how to cast the side channel information gained from a fault injection into polynomial form. The resulting polynomial system is converted into a logical formula in conjunctive normal form and handed over to a SAT solver for reconstruction of the secret key. Following this approach we were able to mount a new, successful attack on the version of LED that uses a 64-bit secret key, requiring only a single fault injection."}
{"_id": "2461123a8fa73d30a64c3211ad6ecd233a591ade", "title": "The building blocks of economic complexity.", "text": "For Adam Smith, wealth was related to the division of labor. As people and firms specialize in different activities, economic efficiency increases, suggesting that development is associated with an increase in the number of individual activities and with the complexity that emerges from the interactions between them. Here we develop a view of economic growth and development that gives a central role to the complexity of a country's economy by interpreting trade data as a bipartite network in which countries are connected to the products they export, and show that it is possible to quantify the complexity of a country's economy by characterizing the structure of this network. Furthermore, we show that the measures of complexity we derive are correlated with a country's level of income, and that deviations from this relationship are predictive of future growth. This suggests that countries tend to converge to the level of income dictated by the complexity of their productive structures, indicating that development efforts should focus on generating the conditions that would allow complexity to emerge to generate sustained growth and prosperity."}
{"_id": "e99db11bec946a577705d3689141927621226ceb", "title": "3D spatial pyramid: descriptors generation from point clouds for indoor scene classification", "text": "Traditionally, the indoor scene classification problem has been approached from a 2D image recognition point of view. In most visual scene classification systems, a descriptor for the input image is generated to obtain a suitable representation that includes features related to color, shape or spatial information. Techniques based on the use of a spatial pyramid have proven to be adequate to perform this step. In the past years, on the other hand, 3D sensors have become widely available, which allows to include new information sources to the framework previously described. In this work we rely on RGB-D data to extend the spatial pyramid approach, aimed at building descriptors that can lead to a more robust representation against changing lighting conditions. The proposed descriptors are evaluated on the RobotVision@ImageCLEF-2013 benchmark dataset, remarkably outperforming state-of-the-art 3D local and global descriptors."}
{"_id": "657b59e24b5fac8149ddeecf7ef1bcf162c48ab2", "title": "Customer churn analysis : A case study on the telecommunication industry of Thailand", "text": "Customer churn creates a huge anxiety in highly competitive service sectors especially the telecommunications sector. The objective of this research was to develop a predictive churn model to predict the customers that will be to churn; this is the first step to construct a retention management plan. The dataset was extracted from the data warehouse of the mobile telecommunication company in Thailand. The system generated the customer list, to implement a retention campaign to manage the customers with tendency to leave the company. WEKA software was used to implement the followings techniques: C4.5 decision trees algorithm, the logistic regression algorithm and the neural network algorithm. The C4.5 algorithm of decision trees proved optimal among the models. The findings are unequivocally beneficial to industry and other partners."}
{"_id": "14df973438a9ba4634bb41740072b9e4704ba47b", "title": "A New Concave Hull Algorithm and Concaveness Measure for n-dimensional Datasets", "text": "Convex and concave hulls are useful concepts for a wide variety of application areas, such as pattern recognition, image processing, statistics, and classification tasks. Concave hull performs better than convex hull, but it is difficult to formulate and few algorithms are suggested. Especially, an n-dimensional concave hull is more difficult than a 2or 3dimensional one. In this paper, we propose a new concave hull algorithm for n-dimensional datasets. It is simple but creative. We show its application to dataset analysis. We also suggest a concaveness measure and a graph that captures geometric shape of an n-dimensional dataset. Proposed concave hull algorithm and concaveness measure/graph are implemented using java, and are posted to http://user.dankook.ac.kr/ ~bitl/dkuCH."}
{"_id": "6a6cfe3209e4dfaa45a5a13a988b47b5dc5425b9", "title": "User Migration in Online Social Networks: A Case Study on Reddit During a Period of Community Unrest", "text": "Platforms like Reddit have attracted large and vibrant communities, but the individuals in those communities are free to migrate to other platforms at any time. History has borne this out with the mass migration from Slashdot to Digg. The underlying motivations of individuals who migrate between platforms, and the conditions that favor migration online are not well-understood. We examine Reddit during a period of community unrest affecting millions of users in the summer of 2015, and analyze large-scale changes in user behavior and migration patterns to Reddit-like alternative platforms. Using self-reported statements from user comments, surveys, and a computational analysis of the activity of users with accounts on multiple platforms, we identify the primary motivations driving user migration. While a notable number of Reddit users left for other platforms, we found that an important pull factor that enabled Reddit to retain users was its long tail of niche content. Other platforms may reach critical mass to support popular or \u201cmainstream\u201d topics, but Reddit\u2019s large userbase provides a key advantage in supporting niche top-"}
{"_id": "84e2be16fc0226af8429f2e2adf8fc0efceec8a7", "title": "Innovative technology-based interventions for autism spectrum disorders: a meta-analysis.", "text": "This article reports the results of a meta-analysis of technology-based intervention studies for children with autism spectrum disorders. We conducted a systematic review of research that used a pre-post design to assess innovative technology interventions, including computer programs, virtual reality, and robotics. The selected studies provided interventions via a desktop computer, interactive DVD, shared active surface, and virtual reality. None employed robotics. The results provide evidence for the overall effectiveness of technology-based training. The overall mean effect size for posttests of controlled studies of children with autism spectrum disorders who received technology-based interventions was significantly different from zero and approached the medium magnitude, d = 0.47 (confidence interval: 0.08-0.86). The influence of age and IQ was not significant. Differences in training procedures are discussed in the light of the negative correlation that was found between the intervention durations and the studies' effect sizes. The results of this meta-analysis provide support for the continuing development, evaluation, and clinical usage of technology-based intervention for individuals with autism spectrum disorders."}
{"_id": "2a7388063f02c30791aea85f813d41032c784307", "title": "Social recommendation using Euclidean embedding", "text": "Traditional recommender systems assume that all the users are independent, and they usually face the cold start and data sparse problems. To alleviate these problems, social recommender systems use social relations as an additional input to improve recommendation accuracy. Social recommendation follows the intuition that people with social relationships share some kinds of preference towards items. Current social recommendation methods commonly apply the Matrix Factorization (MF) model to incorporate social information into the recommendation process. As an alternative model to MF, we propose a novel social recommendation approach based on Euclidean Embedding (SREE) in this paper. The idea is to embed users and items in a unified Euclidean space, where users are close to both their desired items and social friends. Experimental results conducted on two real-world data sets illustrate that our proposed approach outperforms the state-of-the-art methods in terms of recommendation accuracy."}
{"_id": "7ffa7a36e5414a0f2b16b1d8f93442ab15e2235d", "title": "The CMU Pose, Illumination, and Expression Database", "text": "In the Fall of 2000 we collected a database of over 40,000 facial images of 68 people. Using the CMU 3D Room we imaged each person across 13 different poses, under 43 different illumination conditions, and with 4 different expressions. We call this the CMU Pose, Illumination, and Expression (PIE) database. We describe the imaging hardware, the collection procedure, the organization of the images, several possible uses, and how to obtain the database."}
{"_id": "050eda213ce29da7212db4e85f948b812a215660", "title": "Combining Models and Exemplars for Face Recognition : An Illuminating Example", "text": "We propose a modeland exemplar-based approach for face recognition. This problem has been previously tackled using either models or exemplars, with limited success. Our idea uses models to synthesize many more exemplars, which are then used in the learning stage of a face recognition system. To demonstrate this, we develop a statistical shape-fromshading model to recover face shape from a single image, and to synthesize the same face under new illumination. We then use this to build a simple and fast classifier that was not possible before because of a lack of training data."}
{"_id": "1b7ae509c8637f3c123cf6151a3089e6b8a0d5b2", "title": "From Few to Many: Generative Models for Recognition Under Variable Pose and Illumination", "text": "Image variability due to changes in pose and illumination can seriously impair object recognition. This paper presents appearance-based methods which, unlike previous appearance-based approaches, require only a small set of training images to generate a rich representation that models this variability. Speci cally, from as few as three images of an object in xed pose seen under slightly varying but unknown lighting, a surface and an albedo map are reconstructed. These are then used to generate synthetic images with large variations in pose and illumination and thus build a representation useful for object recognition. Our methods have been tested within the domain of face recognition on a subset of the Yale Face Database B containing 4050 images of 10 faces seen under variable pose and illumination. This database was speci cally gathered for testing these generative methods. Their performance is shown to exceed that of popular existing methods."}
{"_id": "667f0671f398b02c0d03023ce028b1228526f41c", "title": "Optical Flow Constraints on Deformable Models with Applications to Face Tracking", "text": "Optical flow provides a constraint on the motion of a deformable model. We derive and solve a dynamic system incorporating flow as a hard constraint, producing a model-based least-squares optical flow solution. Our solution also ensures the constraint remains satisfied when combined with edge information, which helps combat tracking error accumulation. Constraint enforcement can be relaxed using a Kalman filter, which permits controlled constraint violations based on the noise present in the optical flow information, and enables optical flow and edge information to be combined more robustly and efficiently. We apply this framework to the estimation of face shape and motion using a 3D deformable face model. This model uses a small number of parameters to describe a rich variety of face shapes and facial expressions. We present experiments in extracting the shape and motion of a face from image sequences which validate the accuracy of the method. They also demonstrate that our treatment of optical flow as a hard constraint, as well as our use of a Kalman filter to reconcile these constraints with the uncertainty in the optical flow, are vital for improving the performance of our system."}
{"_id": "149100302da692cc4b179f0a3bc514d36624163c", "title": "Visual Semantic Landmark-Based Robust Mapping and Localization for Autonomous Indoor Parking", "text": "Autonomous parking in an indoor parking lot without human intervention is one of the most demanded and challenging tasks of autonomous driving systems. The key to this task is precise real-time indoor localization. However, state-of-the-art low-level visual feature-based simultaneous localization and mapping systems (VSLAM) suffer in monotonous or texture-less scenes and under poor illumination or dynamic conditions. Additionally, low-level feature-based mapping results are hard for human beings to use directly. In this paper, we propose a semantic landmark-based robust VSLAM for real-time localization of autonomous vehicles in indoor parking lots. The parking slots are extracted as meaningful landmarks and enriched with confidence levels. We then propose a robust optimization framework to solve the aliasing problem of semantic landmarks by dynamically eliminating suboptimal constraints in the pose graph and correcting erroneous parking slots associations. As a result, a semantic map of the parking lot, which can be used by both autonomous driving systems and human beings, is established automatically and robustly. We evaluated the real-time localization performance using multiple autonomous vehicles, and an repeatability of 0.3 m track tracing was achieved at a 10 kph of autonomous driving."}
{"_id": "ba6579b0dae2285c39795ca7f8f7adc6fa52a925", "title": "A New Word Language Model Evaluation Metric for Character Based Languages", "text": "Perplexity is a widely used measure to evaluate word prediction power of a word-based language model. It can be computed independently and has shown good correlation with word error rate (WER) in speech recognition. However, for character based languages, character error rate (CER) is commonly used instead of WER as the measure for speech recognition, although language model is still word based. Due to the fact that different word segmentation strategies may result in different word vocabulary for the same text corpus, in many cases, wordbased perplexity is incompetent to evaluate the combined effect of word segmentation and language model training to predict final CER. In this paper, a new word-based language model evaluation measure is proposed to account for the effect of word segmentation and the goal of predicting CER. Experiments were conducted on Chinese speech recognition. Compared to the traditional word-based perplexity, the new measure is more robust to word segmentation and shows much more consistent correlation with CER in a large vocabulary continuous Chinese speech recognition task."}
{"_id": "94230767895e8a5fd76974504d966662f8b21f82", "title": "Joint wavelet-based spectrum sensing and FBMC modulation for cognitive mmWave small cell networks", "text": "Millimetre-wave (mmWave) 5G communications is an emerging technology to enhance the capacity of existing systems by thousand-fold improvement. Heterogeneous networks employing densely distributed small cells can optimise the available coverage and throughput of 5G systems. Efficiently utilising the spectrum bands by small cells is one of the approaches that will considerably increase the available data rate and capacity of the heterogeneous networks. This challenging task can be achieved by spectrum sensing capability of cognitive radios and new modulation techniques for data transmission. In this study, a wavelet-based filter bank is proposed for spectrum sensing and modulation in 5G heterogeneous networks. The proposed technique can mitigate the spectral leakage and interference by adapting the subcarriers according to cognitive information provided by wavelet packet based spectrum sensing (WPSS) and lowering sidelobes using wavelet-based filter bank multicarrier modulation. The performance improvement of WPSS compared with Fourier-based spectrum sensing is verified in terms of power spectral density comparison and probabilities of detection and false alarm. Meanwhile, the bit error rate performance demonstrates the superiority of the proposed wavelet-based system compared with its Fourier-based counterpart over the 60 GHz mmWave channel."}
{"_id": "195e55c90fd109642116ee51f7205c106f341111", "title": "Model-Driven Game Development Addressing Architectural Diversity and Game Engine-Integration", "text": ""}
{"_id": "892d5068d8200b6d8d7654c1cbe01883cbcb8488", "title": "The NumPy Array: A Structure for Efficient Numerical Computation", "text": "In the Python world, NumPy arrays are the standard representation for numerical data and enable efficient implementation of numerical computations in a high-level language. As this effort shows, NumPy performance can be improved through three techniques: vectorizing calculations, avoiding copying data in memory, and minimizing operation counts."}
{"_id": "91eb69b35a1f349ded31c1c0e50c2f0f2614c978", "title": "Survey of Event Correlation Techniques for Attack Detection in Early Warning Systems", "text": "In the context of early warning systems for detecting Internet worms and other attacks, event correlation techniques are needed for two reasons. First, network attack detection is usually based on distributed sensors, e.g. intrusion detection systems. During attacks but even in normal operation, the generated amount of events is hard to handle in order to evaluate the current attack situation for a larger network. Thus, the concept of event or alert correlation has been introduced. This survey was motivated by recent work on early warning systems. We summarize and clarify the typical terminology used in this context and present a requirement analysis from an early warning system\u2019s point of view. In the main part of this survey, we summarize and classify event correlation techniques as described in the literature."}
{"_id": "27c2c58bab12896406f6500889f274e356ab26c2", "title": "Bayesian Optimization Using Domain Knowledge on the ATRIAS Biped", "text": "Robotics controllers often consist of expert-designed heuristics, which can be hard to tune in higher dimensions. Simulation can aid in optimizing these controllers if parameters learned in simulation transfer to hardware. Unfortunately, this is often not the case in legged locomotion, necessitating learning directly on hardware. This motivates using data-efficient learning techniques like Bayesian Optimization (BO) to minimize collecting expensive data samples. BO is a black-box data-efficient optimization scheme, though its performance typically degrades in higher dimensions. We aim to overcome this problem by incorporating domain knowledge, with a focus on bipedal locomotion. In our previous work, we proposed a feature transformation that projected a 16-dimensional locomotion controller to a 1-dimensional space using knowledge of human walking. When optimizing a human-inspired neuromuscular controller in simulation, this feature transformation enhanced sample efficiency of BO over traditional BO with a Squared Exponential kernel. In this paper, we present a generalized feature transform applicable to non-humanoid robot morphologies and evaluate it on the ATRIAS bipedal robot, in both simulation and hardware. We present three different walking controllers and two are evaluated on the real robot. Our results show that this feature transform captures important aspects of walking and accelerates learning on hardware and simulation, as compared to traditional BO."}
{"_id": "e682e11b643a5f77d114034422d9931bb3c17bea", "title": "GENDER AND LEADERSHIP BEHAVIOR AMONG SENIOR PASTORS", "text": "Research findings have been equivocal as to the existence of gender difference in leadership across settings. However, some studies based on transformational leadership theory and employing the Multifactor Leadership Questionnaire to measure leadership behaviors have indicated a difference in the leadership styles of female and male leaders. This study sought to test whether there were gender differences in the use of transformational, transactional, and laissez-faire leadership behaviors by Senior Pastors in an Australian Christian Church. The study did not detect any significant gender differences in leadership behaviors. Gender and Leadership Behavior among Senior Pastors Gender and Leadership Leadership has been typically a male prerogative in most sectors of society, including the corporate, political, military, and church sectors (Eagly & Karau, 2002). However, over the last 30 years women have made steady progress in moving into leadership roles. In 1972, women held 17% of all management and professional positions in Fortune 500 Companies. By 2006, this number had grown to 50.3% (Hoyt, Simon, and Reid, 2009). Women typically tend to occupy lower and middle management ranks while men cluster around the most powerful positions at the top. Women managers still receive significantly less remuneration for their work, with female managers receiving 24 percent less pay than men performing the equivalent function (Haslam and Ryan, 2008). Nonetheless, despite continuing 138 Asian Journal of Pentecostal Studies 12:2 (2009) inequity, it is clear that women are gradually occupying an increasing number of management and leadership positions. Although women have increasingly gained access to supervisory and middle management positions, they remain quite rare as elite leaders. For example, in 2006 women represented 5.2% of top earners, 14.7% of board members, 7.9% of the highest earners, and less than 2% of CEOs in Fortune 500 Companies (Hoyt, Simon, and Reid, 2009). This phenomenon has been explained by use of the idea of a \u201cglass ceiling\u201d \u2013 an invisible barrier preventing the rise of women within leadership ranks (Haslam & Ryan, 2008). Eagly and Karau (2002) describe it as \u201ca barrier of prejudice and discrimination that excludes women from higher leadership positions\u201d (p. 573). It is evident in the lower number of women in leadership positions, and particularly in high-level leadership positions. Yukl (2006) observes that \u201cIn the complete absence of sex-based discrimination, the number of women in chief executive positions in business and government should be close to 50 percent\u201d (p. 427). A variety of explanations have been offered for the existence of gender-based discrimination in the appointment of organizational leaders. These include: (1) gender stereotypes suggesting that men are more suited to leadership positions and that women are more suited to support roles (Yukl, 2006); (2) overt sexism in the workplace (Schwartz, 1971); (3) perceived incompatibilities between women\u2019s abilities and the requirements of leadership (Arvey, 1979); (4) women\u2019s competing responsibilities in the home (Schwartz, 1994); and (5) women\u2019s fear of success (Horner, 1972). The explanations are not mutually exclusive and they may combine to create significant barriers to the advancement of women. 
Martell and DeSmet (2001) found that a contributing reason for the glass ceiling and the continued absence of women in the upper ranks of management is \u201cthe existence of gender-based stereotypes in the leadership domain\u201d (p. 1227). Gender stereotypes are \u201ccategorical beliefs composed of the traits and behavioral characteristics assigned to women and men only on the basis of the group label\u201d (Martell & DeSmet, 2001, p. 1223). Such stereotypes serve as a type of expectation regarding the likely abilities of group members and, if left unchallenged, can translate into discriminatory behavior. Eagly and Karau (2002) point out that a \u201cpotential for prejudice exists when social perceivers hold a stereotype about a social group that is incongruent with the attributes that are thought to be required for success in certain classes of social roles\u201d (p. 574). Prejudice against women as leaders Fogarty, Gender and Leadership 139 \u201cfollows from the incongruity that many people perceive between the characteristics of women and the requirements of leader roles\u201d (Eagly & Karau, 2002, p. 574). Gender and Leadership within Christian Churches Within Christian churches gender based discrimination has been reinforced by theological perspectives (Barron, 1990; Bridges, 1998; Franklin, 2008; Scholz, 2005). The case that women are forbidden by scripture and church tradition to assume leadership within the church has been made frequently (Barron, 1990). This exclusion has been predominantly based on two Pauline texts (1 Timothy 2:11-15 and 1 Corinthians 14:33-34) and a broader theological position which sees men and women as being ontologically equal but functionally different. Its basic logic is that \u201cGod designed women to be subordinate to men in role and function\u201d and therefore \u201cwomen should not operate in positions of authority over men\u201d (Franklin, 2008, 14). For example, Piper and Grudem (1991) state that \u201cwe are persuaded that the Bible teaches that only men should be pastors and elders. That is, men should bear primary responsibility for Christlike leadership and teaching in the church. So it is unbiblical, we believe, and therefore detrimental, for women to assume this role\u201d (p. 60-61). Complementary to this theological position is the suggestion that women do not have the capacity for effective church leadership (Bridges, 1998). Piper (1991) exemplifies this position when he writes: \u201cAt the heart of mature masculinity is a sense of benevolent responsibility to lead, provide for and protect women in ways appropriate to a man\u2019s differing relationships\u201d and \u201cAt the heart of mature femininity is a freeing disposition to affirm, receive, and nurture strength and leadership from worthy men in ways appropriate to a woman\u2019s differing relationships\u2019 (p. 35-36). The understanding portrayed is that men have the capacity to lead and that women do not. The assumption implicit within this understanding is that leadership does not involve affirming and nurturing behaviors."}
{"_id": "dee91a3fa2299fa7060f81f7e8c54c9408859980", "title": "A double-tail sense amplifier for low-voltage SRAM in 28nm technology", "text": "A double-tail sense amplifier (DTSA) is designed as a drop-in replacement for a conventional SRAM sense amplifier (SA), to enable a robust read operation at low voltages. A pre-amplification stage helps reduce the offset voltage of the sense amplifier by magnifying the input of the regeneration stage. The self-timed regenerative latch simplifies the timing logic so the DTSA can replace the SA with no area overhead. A test chip in 28nm technology achieves 56% error rate reduction at 0.44V. The proposed scheme achieves 50mV of VDDmin reduction compared to commercial SRAM with a faster timing option that demonstrates a smaller bitline swing."}
{"_id": "a24d39e7c504a5705e4a480f99c1461992931934", "title": "Sampling cube: a framework for statistical olap over sampling data", "text": "Sampling is a popular method of data collection when it is impossible or too costly to reach the entire population. For example, television show ratings in the United States are gathered from a sample of roughly 5,000 households. To use the results effectively, the samples are further partitioned in a multidimensional space based on multiple attribute values. This naturally leads to the desirability of OLAP (Online Analytical Processing) over sampling data. However, unlike traditional data, sampling data is inherently uncertain, i.e., not representing the full data in the population. Thus, it is desirable to return not only query results but also the confidence intervals indicating the reliability of the results. Moreover, a certain segment in a multidimensional space may contain none or too few samples. This requires some additional analysis to return trustable results.\n In this paper we propose a Sampling Cube framework, which efficiently calculates confidence intervals for any multidimensional query and uses the OLAP structure to group similar segments to increase sampling size when needed. Further, to handle high dimensional data, a Sampling Cube Shell method is proposed to effectively reduce the storage requirement while still preserving query result quality."}
{"_id": "31019c31d591170df86477ca98d39f6037d3dff7", "title": "Diagnostic assessment of the borg MOEA for many-objective product family design problems", "text": "The recently introduced Borg multiobjective evolutionary algorithm (MOEA) framework features auto-adaptive search that tailors itself to effectively explore different problem spaces. A key auto-adaptive feature of the Borg MOEA is the dynamic allocation of search across a suite of recombination and mutation operators. This study explores the application of the Borg MOEA on a real-world product family design problem: the severely constrained, ten objective General Aviation Aircraft (GAA) problem. The GAA problem represents a promising benchmark problem that strongly highlights the importance of using auto-adaptive search to discover how to exploit multiple recombination strategies cooperatively. The auto-adaptive behavior of the Borg MOEA is rigorously compared against its ancestor algorithm, the \u03b5-MOEA, by employing global sensitivity analysis across each algorithm's feasible parameter ranges. This study provides the first Sobol' sensitivity analysis to determine the individual and interactive parameter sensitivities of MOEAs on a real-world many-objective problem."}
{"_id": "36e70ee51cb7b7ec12faac934ae6b6a4d9da15a8", "title": "Foundations of RDF\u22c6 and SPARQL\u22c6 (An Alternative Approach to Statement-Level Metadata in RDF)", "text": "The standard approach to annotate statements in RDF with metadata has a number of shortcomings including data size blow-up and unnecessarily complicated queries. We propose an alternative approach that is based on nesting of RDF triples and of query patterns. The approach allows for a more compact representation of data and queries, and it is backwards compatible with the standard. In this paper we present the formal foundations of our proposal and of different approaches to implement it. More specifically, we formally capture the necessary extensions of the RDF data model and its query language SPARQL, and we define mappings based on which our extended notions can be converted back to ordinary RDF and SPARQL. Additionally, for such type of mappings we define two desirable properties, information preservation and query result equivalence, and we show that the introduced mappings possess these properties."}
{"_id": "2ceaa8d6ee74105a6b5561661db299c885f9135b", "title": "Learning to Decode for Future Success", "text": "We introduce a general strategy for improving neural sequence generation by incorporating knowledge about the future. Our decoder combines a standard sequence decoder with a \u2018soothsayer\u2019 prediction function Q that estimates the outcome in the future of generating a word in the present. Our model draws on the same intuitions as reinforcement learning, but is both simpler and higher performing, avoiding known problems with the use of reinforcement learning in tasks with enormous search spaces like sequence generation. We demonstrate our model by incorporating Q functions that incrementally predict what the future BLEU or ROUGE score of the completed sequence will be, its future length, and the backwards probability of the source given the future target sequence. Experimental results show that future prediction yields improved performance in abstractive summarization and conversational response generation and the state-of-the-art in machine translation, while also enabling the decoder to generate outputs that have specific properties."}
{"_id": "b05fdba8f447b37d7fa6fdd63d23c70b2f4ee01b", "title": "Aspect extraction for opinion mining with a deep convolutional neural network", "text": "In this paper, we present the first deep learning approach to aspect extraction in opinion mining. Aspect extraction is a subtask of sentiment analysis that consists in identifying opinion targets in opinionated text, i.e., in detecting the specific aspects of a product or service the opinion holder is either praising or complaining about. We used a 7-layer deep convolutional neural network to tag each word in opinionated sentences as either aspect or non-aspect word. We also developed a set of linguistic patterns for the same purpose and combined them with the neural network. The resulting ensemble classifier, coupled with a word-embedding model for sentiment analysis, allowed our approach to obtain significantly better accuracy than state-of-the-art methods."}
{"_id": "88794a4e8d223799e2572aed0407e9c80d10e6dd", "title": "Design, fabrication and preliminary results of a novel anthropomorphic hand for humanoid robotics: RCH-1", "text": "Among social infrastructure technologies, robot technology (RT) is expected to play an important role in solving the problems of both decrease of birth rate and increase of elderly people in the 21st century, specially but not only in Japan. In order to achieve this objective, the new generation of personal robots should be capable of a natural communication with humans by expressing human-like emotion. In this sense, human hands play a fundamental role in communication, because they have grasping, sensing and emotional expression ability. This paper presents the recent results of the collaboration between the Takanishi Lab of Waseda University, Tokyo, Japan, and the Arts Lab of Scuola Superiore Sant'Anna, Pisa, Italy, and ROBOCASA. In this paper, the development of a novel anthropomorphic hand for humanoid robotics RCH-1 (ROBOCASA Hand No.1) and its integration into a humanoid robotic platform, named WE-4RII (Waseda Eye No.4 Refined II) is presented."}
{"_id": "8f12309b0a04279c13ce5a7d1e29a3353a69ed52", "title": "Educational System for the Holy Quran and Its Sciences for Blind and Handicapped People Based on Google Speech API", "text": "There is a great need to provide educational environments for blind and handicapped people. There are many Islamic websites and applications dedicated to the educational services for the Holy Quran and Its Sciences (Quran Recitations, the interpretations, etc.) on the Internet. Unfortunately, blind and handicapped people could not use these services. These people cannot use the keyboard and the mouse. In addition, the ability to read and write is essential to benefit from these services. In this paper, we present an educational environment that allows these people to take full advantage of the scientific materials. This is done through the interaction with the system using voice commands by speaking directly without the need to write or to use the mouse. Google Speech API is used for the universal speech recognition after a preprocessing and post processing phases to improve the accuracy. For blind people, responses of these commands will be played back through the audio device instead of displaying the text to the screen. The text will be displayed on the screen to help other people make use of the system."}
{"_id": "bc715df3cdacb476bdde831318c671b0a24955e5", "title": "Web 3.0 Emerging", "text": "While Web 3.0 technologies are difficult to define precisely, the outline of emerging applications has become clear over the past year. We can thus essentially view Web 3.0 as semantic Web technologies integrated into, or powering, large-scale Web applications. The base of Web 3.0 applications resides in the resource description framework (RDF) for providing a means to link data from multiple Web sites or databases. With the SPARQL query language, a SQL-like standard for querying RDF data, applications can use native graph-based RDF stores and extract RDF data from traditional databases."}
{"_id": "f4a463f37f29f205451652a842ff75c718040279", "title": "Multiple Handshakes Security of TLS 1.3 Candidates", "text": "The Transport Layer Security (TLS) protocol is by far the most widely deployed protocol for securing communications and the Internet Engineering Task Force (IETF) is currently developing TLS 1.3 as the next-generation TLS protocol. The TLS standard features multiple modes of handshake protocols and supports many combinational running of successive TLS handshakes over multiple connections. Although each handshake mode is now well-understood in isolation, their composition in TLS 1.2 remains problematic, and yet it is critical to obtain practical security guarantees for TLS. In this paper, we present the first formal treatment of multiple handshakes protocols of TLS 1.3 candidates. First, we introduce a multi-level&stage security model, an adaptation of the Bellare-Rogaway authenticated key exchange model, covering all kinds of compositional interactions between different TLS handshake modes and providing reasonably strong security guarantees. Next, we prove that candidate handshakes of TLS 1.3 draft meet our strong notion of multiple handshakes security. Our results confirm the soundness of TLS 1.3 security protection design. Such a multi-level&stage approach is convenient for analyzing the compositional design of the candidates with different session modes, as they establish dependencies of multiple sessions. We also identify the triple handshake attack of Bhargavan et al. on TLS 1.2 within our multiple handshakes security model. We show generically that the proposed fixes (RFC 7627) for TLS 1.2 offer good protection against multiple handshakes attacks."}
{"_id": "d9c797a74f2f716379f388d5c57e437ecd68759c", "title": "Improving Design of Systems Supporting Creativity-intensive Processes - A Cross-industry Focus Group Evaluation", "text": ""}
{"_id": "b0ce93a3a4fc44a820fa73c2547adc6f1c338f99", "title": "Exploring Users' Attitudes and Intentions toward the Adoption of Cloud Computing in Saudi Arabia: an Empirical Investigation", "text": "Over the past few years, cloud computing has evolve d as one of the major advances in the field of Information Technology (IT) utilizing third-party s ervices. Therefore, trust in cloud vendors as well as the determination of potential risks, such as privacy a nd security issues, are crucial for ensuring the su ccessful adoption of an appropriate cloud. Prior research ha ve ddressed the technical aspects of cloud-based environments, such as cloud virtualization, scalabi lity and security. Nevertheless, it is argued that the biggest obstacle of cloud computing is not technolo gical, rather it is perceptual or attitudinal. The adoption of cloud computing has been central to several scho larly research areas, particularly user acceptance. This study presents an extended Technology Acceptance Mo del (TAM), which integrates Trust (TR), Anxiety (ANX) and Perceived Risk (PR), to investigate users \u2019 attitudes and intentions toward the adoption of c l ud computing. The proposed model was empirically exami ned using the Structural Equation Model (SEM) to analyze data gathered by a survey of both IT profes sionals and end users. The results herein suggested that trust, anxiety and PR can be successfully integrate d within the TAMs. Trust has demonstrated to have a strong positive influence on Perceived Ease of Use (PEOU), but it had no significant effects on Perceived Use fuln ss (PU). Both anxiety and PR were found to have signif icant negative effects on PEOU and PU. In addition, Behavioral Intention (BI) to use the cloud can be p r dicted by trust, attitudes and PU, as PR was show n t have no significant effect on BI. The proposed model, including PR, trust and anxiety , has been demonstrated to be a true predictor of user intentions, toward t he use of cloud computing, within the context of Sa udi Arabia."}
{"_id": "7a4406be8af884d44a7ebcdae6f83c94fcbb3bf7", "title": "Software-Defined LANs for Interconnected Smart Environment", "text": "In this paper, we propose a solution to delegate the control and the management of the network connecting the many devices of a smart environment to a software entity, while keeping end-users in control of what is happening in their networks. For this, we rely on the logical manipulation of all connected devices through device abstraction and network programmability. Applying Software Defined Networking (SDN) principles, we propose a software-based solution that we call Software-Defined LANs in order to interconnect devices of smart environments according to the services the users are requesting or expecting.We define the adequate virtualization framework based on Virtual Objects and Communities of Virtual Objects. Using these virtual entities, we apply the SDN architectural principles to define a generic architecture that can be applied to any smart environment. Then we describe a prototype implementing these concepts in the home networking context, through a scenario in which users of two different homes can easily interconnect two private but shareable DLNA devices in a dedicated video-delivery SD-LAN. Finally we provide a discussion of the benefits and challenges of our approach regarding the generalization of SDN principles, autonomic features, Internet of Things scalability, security and privacy aspects enabled by SD-LANs intrinsic properties."}
{"_id": "c7c0d983a7197ce337868ae786ea2da904552a83", "title": "Named entity recognition on Indonesian Twitter posts using long short-term memory networks", "text": "The task of Named-Entity Recognition (NER) can support the higher-level tasks such as question answering, text summarization, and information retrieval. This work views NER on Indonesian Twitter posts as a sequence labeling problem using supervised machine learning approach. The architecture used is Long Short-Term Memory Networks (LSTMs), with word embedding and POS tag as the model features. As the result, our model can give a performance with an F1 score of 77.08%."}
{"_id": "5f209fdf869ede1bc82ead79248196f34586ae4f", "title": "Scrutinizing WPA2 Password Generating Algorithms in Wireless Routers", "text": "A wireless router is a networking device that enables a user to set up a wireless connection to the Internet. A router can offer a secure channel by cryptographic means which provides authenticity and confidentiality. Nowadays, almost all routers use a secure channel by default that is based on Wi-Fi Protected Access II (WPA2). This is a security protocol which is believed not to be susceptible to practical key recovery attacks. However, the passwords should have sufficient entropy to avert bruteforce attacks. In this paper, we compose a strategy on how to reverse-engineer embedded routers. Furthermore, we describe a procedure that can instantly gather a complete wireless authentication trace which enables an offline password recovery attack. Finally, we present a number of use cases where we identify extremely weak password generating algorithms in various routers which are massively deployed in The Netherlands. The algorithms are used to generate the default WPA2 password. Such a password is loaded during device initialization and hardware reset. Users that did not explicitly change their wireless password are most likely vulnerable to practical attacks which can recover their password within minutes. A stolen password allows an adversary to abuse someone else\u2019s internet connection, for instance compromising the firewall, making a fraudulent transaction or performing other criminal activities. Together with the Dutch National Cyber Security Centre we have initiated a responsible disclosure procedure. However, since these routers are also used by many other companies in various countries, our findings seem to relate an international industry wide security issue."}
{"_id": "7b933815baf01d3d1a6b02cd8eb2d8746415d47e", "title": "A Simplification-Translation-Restoration Framework for Cross-Domain SMT Applications", "text": "Integration of domain specific knowledge into a general purpose statistical machine translation (SMT) system poses challenges due to insufficient bilingual corpora. In this paper we propose a simplification-translation-restoration (STR) framework for domain adaptation in SMT by simplifying domain specific segments of a text. For an in-domain text, we identify the critical segments and modify them to alleviate the data sparseness problem in the out-domain SMT system. After we receive the translation result, these critical segments are then restored according to the provided in-domain knowledge. We conduct experiments on an English-toChinese translation task in the medical domain and evaluate each step of the STR framework. The translation results show significant improvement of our approach over the out-domain and the na\u00efve in-domain SMT systems. \u7528\u65bc\u8de8\u9818\u57df\u7d71\u8a08\u5f0f\u6a5f\u5668\u7ffb\u8b6f\u7cfb\u7d71\u4e4b\u7c21\u5316-\u7ffb\u8b6f-\u9084\u539f\u67b6\u69cb"}
{"_id": "6dc98b838aaaa408a0f3c8ca07dec1d7c4929769", "title": "A column approximate minimum degree ordering algorithm", "text": "Sparse Gaussian elimination with partial pivoting computes the factorization PAQ = LU of a sparse matrix A, where the row ordering P is selected during factorization using standard partial pivoting with row interchanges. The goal is to select a column preordering, Q, based solely on the nonzero pattern of A, that limits the worst-case number of nonzeros in the factorization. The fill-in also depends on P, but Q is selected to reduce an upper bound on the fill-in for any subsequent choice of P. The choice of Q can have a dramatic impact on the number of nonzeros in L and U. One scheme for determining a good column ordering for A is to compute a symmetric ordering that reduces fill-in in the Cholesky factorization of ATA. A conventional minimum degree ordering algorithm would require the sparsity structure of ATA to be computed, which can be expensive both in terms of space and time since ATA may be much denser than A. An alternative is to compute Q directly from the sparsity structure of A; this strategy is used by MATLAB's COLMMD preordering algorithm. A new ordering algorithm, COLAMD, is presented. It is based on the same strategy but uses a better ordering heuristic. COLAMD is faster and computes better orderings, with fewer nonzeros in the factors of the matrix."}
{"_id": "6fdbbefe05648f6c0f027428ccff248b174798d5", "title": "Simultaneous map building and localization for an autonomous mobile robot", "text": "In this paper, we discuss a significant open problem in mobile robotics: simultaneous map building and localization, which we define as long-term globally referenced position estimation without a priori information. This problem is difficult because of the following paradox: to move precisely, a mobile robot must have an accurate environment map; however, to build an accurate map, the mobile robot\u2019s sensing locations must be known precisely. In this way, simultaneous map building and localization can be seen to present a question of \u201cwhich came first, the chicken or the egg?\u201d (The map or the motion?) When using ultrasonic sensing, to overcome this issue we equip the vehicle with multiple servo-mounted sonar sensors, to provide a means in which a subset of environment features can be precisely learned from the robot\u2019s initial location and subsequently tracked to provide precise positioning."}
{"_id": "3720e91c40978dce1c15af0482998d57608a3165", "title": "Stiffness analysis for a 3-PUU parallel kinematic machine", "text": "This paper presents the stiffness characteristics of a three-prismatic-universal\u2013universal (3-PUU) translational parallel kinematic machine (PKM). The stiffness matrix is derived intuitively based upon an alternative approach considering actuations and constraints, and the compliances subject to both actuators and legs are involved in the stiffness model. The stiffness performance of the manipulator is evaluated by utilizing the extremum stiffness values, and the influences of design parameters on the stiffness properties are presented, which will be valuable for the architecture design of a 3-PUU PKM. Moreover, the stiffness behavior of the PKM is investigated via the eigenscrew decomposition of the stiffness matrix, which provides a physical interpretation of the PKM stiffness and allows the identification of the stiffness center and compliant axis. 2007 Elsevier Ltd. All rights reserved."}
{"_id": "efbc200feab74e5087c4005d8759e5dadb3a3077", "title": "Controllable Text Generation", "text": "Generic generation and manipulation of text is challenging and has limited success compared to recent deep generative modeling in visual domain. This paper aims at generating plausible natural language sentences, whose attributes are dynamically controlled by learning disentangled latent representations with designated semantics. We propose a new neural generative model which combines variational auto-encoders and holistic attribute discriminators for effective imposition of semantic structures. With differentiable approximation to discrete text samples, explicit constraints on independent attribute controls, and efficient collaborative learning of generator and discriminators, our model learns highly interpretable representations from even only word annotations, and produces realistic sentences with desired attributes. Quantitative evaluation validates the accuracy of sentence and attribute generation."}
{"_id": "24aed1b7277dfb2c2a6515a1be82d30cc8aa85cc", "title": "Large-scale visual sentiment ontology and detectors using adjective noun pairs", "text": "We address the challenge of sentiment analysis from visual content. In contrast to existing methods which infer sentiment or emotion directly from visual low-level features, we propose a novel approach based on understanding of the visual concepts that are strongly related to sentiments. Our key contribution is two-fold: first, we present a method built upon psychological theories and web mining to automatically construct a large-scale Visual Sentiment Ontology (VSO) consisting of more than 3,000 Adjective Noun Pairs (ANP). Second, we propose SentiBank, a novel visual concept detector library that can be used to detect the presence of 1,200 ANPs in an image. The VSO and SentiBank are distinct from existing work and will open a gate towards various applications enabled by automatic sentiment analysis. Experiments on detecting sentiment of image tweets demonstrate significant improvement in detection accuracy when comparing the proposed SentiBank based predictors with the text-based approaches. The effort also leads to a large publicly available resource consisting of a visual sentiment ontology, a large detector library, and the training/testing benchmark for visual sentiment analysis."}
{"_id": "2110fde98e0b4e1d66e76ba1f65ddc277f27758d", "title": "A DNA nanorobot functions as a cancer therapeutic in response to a molecular trigger in vivo", "text": "Nanoscale robots have potential as intelligent drug delivery systems that respond to molecular triggers. Using DNA origami we constructed an autonomous DNA robot programmed to transport payloads and present them specifically in tumors. Our nanorobot is functionalized on the outside with a DNA aptamer that binds nucleolin, a protein specifically expressed on tumor-associated endothelial cells, and the blood coagulation protease thrombin within its inner cavity. The nucleolin-targeting aptamer serves both as a targeting domain and as a molecular trigger for the mechanical opening of the DNA nanorobot. The thrombin inside is thus exposed and activates coagulation at the tumor site. Using tumor-bearing mouse models, we demonstrate that intravenously injected DNA nanorobots deliver thrombin specifically to tumor-associated blood vessels and induce intravascular thrombosis, resulting in tumor necrosis and inhibition of tumor growth. The nanorobot proved safe and immunologically inert in mice and Bama miniature pigs. Our data show that DNA nanorobots represent a promising strategy for precise drug delivery in cancer therapy."}
{"_id": "ed08c270c0a99f7808f7265ac1adf23237ba00fb", "title": "60-GHz Millimeter-Wave Life Detection System (MLDS) for Noncontact Human Vital-Signal Monitoring", "text": "A first reported experimental study of a 60 GHz millimeter-wave life detection system (MLDS) for noncontact human vital-signal monitoring is presented. This detection system is constructed by using V-band millimeter-wave waveguide components. A clutter canceller is implemented in the system with an adjustable attenuator and phase shifter. It performs clutter cancellation for the transmitting power leakage from the circulator and background reflection to enhance the detecting sensitivity of weak vital signals. The noncontact vital signal measurements have been conducted on a human subject in four different physical orientations from distances of 1 and 2 m. The time-domain and spectrum waveforms of the measured breathing and heartbeat are presented. This prototype system will be useful for the development of the 60-GHz CMOS MLDS detector chip design."}
{"_id": "78b89e68ed84ef344f82922ea702a736a5791d0a", "title": "Working-memory capacity protects model-based learning from stress.", "text": "Accounts of decision-making have long posited the operation of separate, competing valuation systems in the control of choice behavior. Recent theoretical and experimental advances suggest that this classic distinction between habitual and goal-directed (or more generally, automatic and controlled) choice may arise from two computational strategies for reinforcement learning, called model-free and model-based learning. Popular neurocomputational accounts of reward processing emphasize the involvement of the dopaminergic system in model-free learning and prefrontal, central executive-dependent control systems in model-based choice. Here we hypothesized that the hypothalamic-pituitary-adrenal (HPA) axis stress response--believed to have detrimental effects on prefrontal cortex function--should selectively attenuate model-based contributions to behavior. To test this, we paired an acute stressor with a sequential decision-making task that affords distinguishing the relative contributions of the two learning strategies. We assessed baseline working-memory (WM) capacity and used salivary cortisol levels to measure HPA axis stress response. We found that stress response attenuates the contribution of model-based, but not model-free, contributions to behavior. Moreover, stress-induced behavioral changes were modulated by individual WM capacity, such that low-WM-capacity individuals were more susceptible to detrimental stress effects than high-WM-capacity individuals. These results enrich existing accounts of the interplay between acute stress, working memory, and prefrontal function and suggest that executive function may be protective against the deleterious effects of acute stress."}
{"_id": "fc50ad0fed75041b471a96588f86ebbb0b7e9b24", "title": "A Scoring Function for Learning Bayesian Networks based on Mutual Information and Conditional Independence Tests", "text": "We propose a new scoring function for learning Bayesian networks from data using score+search algorithms. This is based on the concept of mutual information and exploits some well-known properties of this measure in a novel way. Essentially, a statistical independence test based on the chi-square distribution, associated with the mutual information measure, together with a property of additive decomposition of this measure, are combined in order to measure the degree of interaction between each variable and its parent variables in the network. The result is a non-Bayesian scoring function called MIT (mutual information tests) which belongs to the family of scores based on information theory. The MIT score also represents a penalization of the Kullback-Leibler divergence between the joint probability distributions associated with a candidate network and with the available data set. Detailed results of a complete experimental evaluation of the proposed scoring function and its comparison with the well-known K2, BDeu and BIC/MDL scores are also presented."}
{"_id": "452aca244ef62a533d8b46a54c6212fe9fa3ce9a", "title": "Temporally Grounding Natural Sentence in Video", "text": "We introduce an effective and efficient method that grounds (i.e., localizes) natural sentences in long, untrimmed video sequences. Specifically, a novel Temporal GroundNet (TGN)1 is proposed to temporally capture the evolving fine-grained frame-by-word interactions between video and sentence. TGN sequentially scores a set of temporal candidates ended at each frame based on the exploited frameby-word interactions, and finally grounds the segment corresponding to the sentence. Unlike traditional methods treating the overlapping segments separately in a sliding window fashion, TGN aggregates the historical information and generates the final grounding result in one single pass. We extensively evaluate our proposed TGN on three public datasets with significant improvements over the stateof-the-arts. We further show the consistent effectiveness and efficiency of TGN through an ablation study and a runtime test."}
{"_id": "e9f1f3eae14e50034ecb090dd587d26f4a3b2b2b", "title": "Coping through emotional approach: scale construction and validation.", "text": "Four studies demonstrate the psychometric adequacy and validity of scales designed to assess coping through emotional approach. In separate undergraduate samples, exploratory and confirmatory factor analyses of dispositional (Study 1) and situational (Study 3) coping item sets yielded 2 distinct emotional approach coping factors: emotional processing (i.e., active attempts to acknowledge and understand emotions) and emotional expression. The 2 scales yielded high internal consistency and test-retest reliability, as well as convergent and discriminant validity. A study (Study 2) of young adults and their parents established the scales' interjudge reliabilities. Longitudinal (Study 3) and experimental (Study 4) research supported the predictive validity of the emotional approach coping scales with regard to adjustment to stressful encounters. Findings highlight the utility of functionalist theories of emotion as applied to coping theory."}
{"_id": "0cc76aae0cda05835dd5e356a1d9aebfdd918775", "title": "L\u00e9vy flights and random searches", "text": "In this work we discuss some recent contributions to the random search problem. Our analysis includes superdiffusive L\u00e9vy processes and correlated random walks in several regimes of target site density, mobility and revisitability. We present results in the context of mean-field-like and closed-form average calculations, as well as numerical simulations. We then consider random searches performed in regular lattices and lattices with defects, and we discuss a necessary criterion for distinguishing true superdiffusion from correlated random walk processes. We invoke energy considerations in relation to critical survival states on the edge of extinction, and we analyze the emergence of L\u00e9vy behavior in deterministic search walks. Finally, we comment on the random search problem in the context of biological foraging. PACS numbers: 05.50.Fb, 05.40.\u2212a"}
{"_id": "8898277e40cca945a674f6765ed31d733dd90aaa", "title": "Jerk-bounded manipulator trajectory planning: design for real-time applications", "text": "An online method for obtaining smooth, jerk-bounded trajectories has been developed and implemented. Jerk limitation is important in industrial robot applications, since it results in improved path tracking and reduced wear on the robot. The method described herein uses a concatenation of fifth-order polynomials to provide a smooth trajectory between two way points. The trajectory approximates a linear segment with parabolic blends trajectory. A sine wave template is used to calculate the end conditions (control points) for ramps from zero acceleration to nonzero acceleration. Joining these control points with quintic polynomials results in a controlled quintic trajectory that does not oscillate, and is near time optimal for the jerk and acceleration limits specified. The method requires only the computation of the quintic control points, up to a maximum of eight points per trajectory way point. This provides hard bounds for online motion algorithm computation time. A method for blending these straight-line trajectories over a series of way points is also discussed. Simulations and experimental results on an industrial robot are presented."}
{"_id": "1fff937ac760642a95f393ce1ffc00b55497a82f", "title": "Residual Reconstruction for Block-Based Compressed Sensing of Video", "text": "A simple block-based compressed-sensing reconstruction for still images is adapted to video. Incorporating reconstruction from a residual arising from motion estimation and compensation, the proposed technique alternatively reconstructs frames of the video sequence and their corresponding motion fields in an iterative fashion. Experimental results reveal that the proposed technique achieves significantly higher quality than a straightforward reconstruction that applies a still-image reconstruction independently frame by frame, a 3D reconstruction that exploits temporal correlation between frames merely in the form of a motion-agnostic 3D transform, and a similar, yet non-iterative, motion-compensated residual reconstruction."}
{"_id": "1e9b286977ffbaa2a2fd332689a5945f69589bde", "title": "Nonclassical Computation - A Dynamical Systems Perspective", "text": "In this chapter we investigate computation from a dynamical systems perspective. A dynamical system is described in terms of its abstract state space, the system\u2019s current state within its state space, and a rule that determines its motion through its state space. In a classical computational system, that rule is given explicitly by the computer program; in a physical system, that rule is the underlying physical law governing the behaviour of the system. So a dynamical systems approach to computation allows us to take a unified view of computation in classical discrete systems and in systems performing non-classical computation. In particular, it gives a route to a computational interpretation of physical embodied systems exploiting the natural dynamics of their material substrates. We start with autonomous (closed) dynamical systems: those whose dynamics is not an explicit function of time, in particular, those with no inputs from an external environment. We begin with computationally conventional discrete systems, examining their computational abilities from a dynamical systems perspective. The aim here is both to introduce the necessary dynamical systems concepts, and to demonstrate how classical computation can be viewed from this perspective. We then move on to continuous dynamical systems, such as those inherent in the complex dynamics of matter, and show how these too can be interpreted computationally, and see how the material embodiment can give such computation \u201cfor free\u201d, without the need to explicitly implement the dynamics. We next broaden the outlook to open (non-autonomous) dynamical systems, where the dynamics is a function of time, in the form of inputs from an external environment, and which may be in a closely coupled feedback loop with that environment. We finally look at constructive, or developmental, dynamical systems, where the structure of the state space is changing during the computation. This includes vari-"}
{"_id": "6dca00b95b4a7d5e98c8eeebea3f45ad06349848", "title": "Energy-aware wireless microsensor networks", "text": "Self-configuring wireless sensor networks can be invaluable in many civil and military applications for collecting, processing, and disseminating wide ranges of complex environmental data. Because of this, they have attracted considerable research attention in the last few years. The WINS [1] and SmartDust [2] projects, for instance, aim to integrate sensing, computing, and wireless communication capabilities into a small form factor to enable low-cost production of these tiny nodes in large numbers. Several other groups are investigating efficient hardware/software system architectures, signal processing algorithms, and network protocols for wireless sensor networks [3]-[5]. Sensor nodes are battery driven and hence operate on an extremely frugal energy budget. Further, they must have a lifetime on the order of months to years, since battery replacement is not an option for networks with thousands of physically embedded nodes. In some cases, these networks may be required to operate solely on energy scavenged from the environment through seismic, photovoltaic, or thermal conversion. This transforms energy consumption into the most important factor that determines sensor node lifetime. Conventional low-power design techniques [6] and hardware architectures only provide point solutions which are insufficient for these highly energy-constrained systems. Energy optimization, in the case of sensor networks, is much more complex, since it involves not only reducing the energy consumption of a single sensor node but also maximizing the lifetime of an entire network. The network lifetime can be maximized"}
{"_id": "543eda53b42b0b248acc32a941e7d7bcb683c4a8", "title": "Differences in procrastination and motivation between undergraduate and graduate students", "text": "Procrastination became increasingly prevalent among students in recent years. However, little research was found that directly compares academic procrastination across different academic grade levels.\t\r The\t\r present study used a self-regulated learning perspective to compare procrastination types and associated motivation between undergraduate and graduate students. Sixty-six undergraduate and sixty-eight graduate students responded to a packet of questionnaires concerning their experience in an educational psychology class. The results show that students\u2019 beliefs about the usefulness of procrastination were a better predictor of academic procrastination than self-efficacy beliefs and achievement goal orientations. Student age was related to procrastination types. Among the undergraduate procrastinators, the younger students were more likely to engage in active procrastination while the older students tended to engage in passive procrastination. Implications and future research directions are discussed."}
{"_id": "5ced6a0aab1350ef1dba574e1faa05a726d9517e", "title": "OpenPiton: An Open Source Manycore Research Framework", "text": "Industry is building larger, more complex, manycore processors on the back of strong institutional knowledge, but academic projects face difficulties in replicating that scale. To alleviate these difficulties and to develop and share knowledge, the community needs open architecture frameworks for simulation, synthesis, and software exploration which support extensibility, scalability, and configurability, alongside an established base of verification tools and supported software. In this paper we present OpenPiton, an open source framework for building scalable architecture research prototypes from 1 core to 500 million cores. OpenPiton is the world's first open source, general-purpose, multithreaded manycore processor and framework. OpenPiton leverages the industry hardened OpenSPARC T1 core with modifications and builds upon it with a scratch-built, scalable uncore creating a flexible, modern manycore design. In addition, OpenPiton provides synthesis and backend scripts for ASIC and FPGA to enable other researchers to bring their designs to implementation. OpenPiton provides a complete verification infrastructure of over 8000 tests, is supported by mature software tools, runs full-stack multiuser Debian Linux, and is written in industry standard Verilog. Multiple implementations of OpenPiton have been created including a taped-out 25-core implementation in IBM's 32nm process and multiple Xilinx FPGA prototypes."}
{"_id": "96e821d26b3412c9dfdb08205eb37f0108f62601", "title": "Inertial properties of the human trunk of males determined from magnetic resonance imaging", "text": "The purpose of this study was to evaluate the segmental parameters of the human trunk of malesin vivo using magnetic resonance imaging (MRI). In addition, the efficacy of volumetric estimation and existing prediction formulas to produce segmental properties similar to those produced by MRI was evaluated. As opposed to finding one representative normal value for these parameters, a range of normal values was defined. For instance, the average trunk mass was 42.2%\u00b13.5% (x\u00b1SD) of body mass, but values ranged from 35.8% to 48.0%. To account for segment parameters more accurately, specific anthropometric measures need to be considered in addition to overall measures of body height and mass. These specific measures included segment length, circumference, width, and depth. Studies reporting general percentages based on height and/or mass were found to be inadequate predictors of segmental parameters of the trunk compared with MRI estimates. Volumebased estimates, which assume a uniform density distribution within a segment, were found to correspond closely to MRI values except for the thorax. However, the use of density values reflective of the livingin vivo state would likely alleviate this disparity, thus indicating that the volumetric technique may be effective for deriving segmental parameters for large segments of the trunk. Future research should adopt noninvasive techniques such as MRI and/or volumetric estimation to enhance the predictability of segmental parameters of the body for specific population groups characterized by gender, developmental age, body type, and fitness level. Further efforts should be made to establish standardized boundary definitions for trunk segments to avoid unnecessary confusion, from which substantial errors may be introduced into biomechanical linked-segment analyses of human movement."}
{"_id": "4c83bfdf09070fb34016d8e06e61576960af4a2d", "title": "A New Dataset for Fine-Grained Citation Field Extraction", "text": "Citation field extraction entails segmenting a citation string into its constituent parts, such as title, authors, publisher and year. Despite the importance of this task, there is a lack of well-annotated citation data. This paper presents a new labeled dataset for citation extraction that, in comparison to the previous standard dataset, exceeds four-times more data, supplies detailed nested labels rather than coarse-grained flat labels, and is derived from four different academic fields rather than one. We describe our new dataset in detail, and provide baseline experimental results from a state-of-the-art extraction method."}
{"_id": "45e8ef229fae18b0a2ab328037d8e520866c3c81", "title": "Learning Feature Pyramids for Human Pose Estimation", "text": "Articulated human pose estimation is a fundamental yet challenging task in computer vision. The difficulty is particularly pronounced in scale variations of human body parts when camera view changes or severe foreshortening happens. Although pyramid methods are widely used to handle scale changes at inference time, learning feature pyramids in deep convolutional neural networks (DCNNs) is still not well explored. In this work, we design a Pyramid Residual Module (PRMs) to enhance the invariance in scales of DCNNs. Given input features, the PRMs learn convolutional filters on various scales of input features, which are obtained with different subsampling ratios in a multibranch network. Moreover, we observe that it is inappropriate to adopt existing methods to initialize the weights of multi-branch networks, which achieve superior performance than plain networks in many tasks recently. Therefore, we provide theoretic derivation to extend the current weight initialization scheme to multi-branch network structures. We investigate our method on two standard benchmarks for human pose estimation. Our approach obtains state-of-the-art results on both benchmarks. Code is available at https://github.com/bearpaw/PyraNet."}
{"_id": "665480fd29978857dc6001285e94740f3a94f649", "title": "Treating patellar tendinopathy with Fascial Manipulation.", "text": "According to Fascial Manipulation theory, patellar tendon pain is often due to uncoordinated quadriceps contraction caused by anomalous fascial tension in the thigh. Therefore, the focus of treatment is not the patellar tendon itself, but involves localizing the cause of this incoordination, considered to be within the muscular fascia of the thigh region. Eighteen patients suffering from patellar tendon pain were treated with the Fascial Manipulation technique. Pain was assessed (in VAS) before (VAS 67.8/100) and after (VAS 26.5/100) treatment, plus a follow-up evaluation at 1 month (VAS 17.2/100). Results showed a substantial decrease in pain immediately after treatment (p<0.0001) and remained unchanged or improved in the short term. The results show that the patellar tendon may be only the zone of perceived pain and that interesting results can be obtained by treating the muscular fascia of the quadriceps muscle, whose alteration may cause motor incoordination and subsequent pathology."}
{"_id": "1c78809a7fd22c95f4d60cd707c32280a019940f", "title": "Automatic question answering using the web: Beyond the Factoid", "text": "In this paper we describe and evaluate a Question Answering (QA) system that goes beyond answering factoid questions. Our approach to QA assumes no restrictions on the type of questions that are handled, and no assumption that the answers to be provided are factoids. We present an unsupervised approach for collecting question and answer pairs from FAQ pages, which we use to collect a corpus of 1 million question/answer pairs from FAQ pages available on the Web. This corpus is used to train various statistical models employed by our QA system: a statistical chunker used to transform a natural language-posed question into a phrase-based query to be submitted for exact match to an off-the-shelf search engine; an answer/question translation model, used to assess the likelihood that a proposed answer is indeed an answer to the posed question; and an answer language model, used to assess the likelihood that a proposed answer is a well-formed answer. We evaluate our QA system in a modular fashion, by comparing the performance of baseline algorithms against our proposed algorithms for various modules in our QA system. The evaluation shows that our system achieves reasonable performance in terms of answer accuracy for a large variety of complex, non-factoid questions."}
{"_id": "096e6a0b7b155bafeef62f9c71b9af2130131199", "title": "A simulation-based software design framework for network-centric and parallel systems", "text": "In this paper we discuss a software design framework that is capable of realizing network-centricity and the rising multicore technology. Instead of producing static design documents in the form of UML diagrams, we propose automatic generation of a visual simulation model, which represents the target system design. We discuss a design environment that is responsible for the generation and execution of the simulation model."}
{"_id": "4ebebbb6d64c8abdff7a2a721db6f40002a2cd5b", "title": "Computational systems biology", "text": "To understand complex biological systems requires the integration of experimental and computational research \u2014 in other words a systems biology approach. Computational biology, through pragmatic modelling and theoretical exploration, provides a powerful foundation from which to address critical scientific questions head-on. The reviews in this Insight cover many different aspects of this energetic field, although all, in one way or another, illuminate the functioning of modular circuits, including their robustness, design and manipulation. Computational systems biology addresses questions fundamental to our understanding of life, yet progress here will lead to practical innovations in medicine, drug discovery and engineering."}
{"_id": "2379c027e7376bb76978602a7b185dfa73a9cd35", "title": "A GPU implementation of inclusion-based points-to analysis", "text": "Graphics Processing Units (GPUs) have emerged as powerful accelerators for many regular algorithms that operate on dense arrays and matrices. In contrast, we know relatively little about using GPUs to accelerate highly irregular algorithms that operate on pointer-based data structures such as graphs. For the most part, research has focused on GPU implementations of graph analysis algorithms that do not modify the structure of the graph, such as algorithms for breadth-first search and strongly-connected components.\n In this paper, we describe a high-performance GPU implementation of an important graph algorithm used in compilers such as gcc and LLVM: Andersen-style inclusion-based points-to analysis. This algorithm is challenging to parallelize effectively on GPUs because it makes extensive modifications to the structure of the underlying graph and performs relatively little computation. In spite of this, our program, when executed on a 14 Streaming Multiprocessor GPU, achieves an average speedup of 7x compared to a sequential CPU implementation and outperforms a parallel implementation of the same algorithm running on 16 CPU cores.\n Our implementation provides general insights into how to produce high-performance GPU implementations of graph algorithms, and it highlights key differences between optimizing parallel programs for multicore CPUs and for GPUs."}
{"_id": "36f06481eaae63522dfb61475602584997ebfee8", "title": "KD-tree acceleration structures for a GPU raytracer", "text": "Modern graphics hardware architectures excel at compute-intensive tasks such as ray-triangle intersection, making them attractive target platforms for raytracing. To date, most GPU-based raytracers have relied upon uniform grid acceleration structures. In contrast, the kd-tree has gained widespread use in CPU-based raytracers and is regarded as the best general-purpose acceleration structure. We demonstrate two kd-tree traversal algorithms suitable for GPU implementation and integrate them into a streaming raytracer. We show that for scenes with many objects at different scales, our kd-tree algorithms are up to 8 times faster than a uniform grid. In addition, we identify load balancing and input data recirculation as two fundamental sources of inefficiency when raytracing on current graphics hardware."}
{"_id": "273d591af0bdcbefe37d7dd9150e2f612ca7121d", "title": "Debunking the 100X GPU vs. CPU myth: an evaluation of throughput computing on CPU and GPU", "text": "Recent advances in computing have led to an explosion in the amount of data being generated. Processing the ever-growing data in a timely manner has made throughput computing an important aspect for emerging applications. Our analysis of a set of important throughput computing kernels shows that there is an ample amount of parallelism in these kernels which makes them suitable for today's multi-core CPUs and GPUs. In the past few years there have been many studies claiming GPUs deliver substantial speedups (between 10X and 1000X) over multi-core CPUs on these kernels. To understand where such large performance difference comes from, we perform a rigorous performance analysis and find that after applying optimizations appropriate for both CPUs and GPUs the performance gap between an Nvidia GTX280 processor and the Intel Core i7-960 processor narrows to only 2.5x on average. In this paper, we discuss optimization techniques for both CPU and GPU, analyze what architecture features contributed to performance differences between the two architectures, and recommend a set of architectural features which provide significant improvement in architectural efficiency for throughput kernels."}
{"_id": "c79ae0af0d1cc663e50d3f443639569c02afba1b", "title": "Guaranteed-Quality Mesh Generation for Curved Surfaces", "text": "For several commonly-used solution techniques for partial differential equations, the first step is to divide the problem region into simply-shaped elements, creating a mesh. We present a technique for creating high-quality triangular meshes for regions on curved surfaces. This technique is an extension of previous methods we developed for regions in the plane. For both flat and curved surfaces, the resulting meshes are guaranteed to exhibit the following properties: (1) internal and external boundaries are respected, (2) element shapes are guaranteed\u2014all elements are triangles with angles between 30 and 120 degrees (with the exception of badly shaped elements that may be required by the specified boundary), and (3) element density can be controlled, producing small elements in \u201cinteresting\u201d areas and large elements elsewhere. An additional contribution of this paper is the development of a practical generalization of Delaunay triangulation to curved surfaces."}
{"_id": "d531c8be3aebb590de35278c17219445c6fd6e48", "title": "Bluetooth Based Home Automation and Security System Using ARM 9", "text": "-Today we are living in 21 century where automation is playing important role in human life. Home automation allows us to control household appliances like light, door, fan, AC etc. It also provides home security and emergency system to be activated. Home automation not only refers to reduce human efforts but also energy saving and time efficiency. The main objective of home automation and security is to help handicapped and old aged people who will enable them to control home appliances and alert them in critical situations. This paper put forwards the design of home automation and security system using ARM7 LPC2148 board. The design is based on a standalone embedded system board ARM7 LPC2148 at home. Home appliances are connected to the ARM7 and communication is established between the ARM7 and ARM9 with Bluetooth device. The home appliances are connected to the input / output ports of the embedded system board and their status is passed to the ARM7. We would develop an authentication to the system for authorized person to access home appliances. The device with low cost and scalable to less modification to the core is much important. It presents the design and implementation of automation system that can monitor and control home appliances via ARM9 S3C2440A board."}
{"_id": "589f3a6c7b167962c59ae960302a637a8d9f9a85", "title": "Player Profiling with Fallout 3", "text": "In previous research we concluded that a personality profile, based on the Five Factor Model, can be constructed from observations of a player\u2019s behavior in a module that we designed for Neverwinter Nights (Lankveld et al. 2011a). In the present research, we investigate whether we can do the same thing in an actual modern commercial video game, in this case the game Fallout 3. We stored automatic observations on 36 participants who played the introductory stages of Fallout 3. We then correlated these observations with the participants\u2019 personality profiles, expressed by values for five personality traits as measured by the standard NEO-FFI questionnaire. Our analysis shows correlations between all five personality traits and the game observations. These results validate and generalize the results from our previous research (Lankveld et al. 2011a). We may conclude that Fallout 3, and by extension other modern video games, allows players to express their personality, and can therefore be used to create person-"}
{"_id": "43f43743d9474c03e1f80739970f9fe28f81aa66", "title": "Future trends of electrical propulsion and implications to ship design", "text": "Over the last 10 years the electrical propulsion fleet grew three times faster than the world fleet. This paper provides an overlook of the main drivers supporting this impressive growth and explores supporting drivers for future growth. The paper also explores two emerging technologies in electrical propulsion and distribution and how these influence ship design: DC distribution and energy storage. with a considerable level of accuracy which led to development of Dynamic Positioning (DP). Since electric propulsion is the most suitable solution to cope with the power fluctuation and redundancy requirements from DP systems, the development of offshore segment was the biggest driver for the development of electrical propulsion, reaching more than 50% of the units in 2013. Figure 2 Distribution of electrical propulsion fleet Dredgers \u2013 Dredging equipment requires a considerable amount of power and consequently the auxiliary power tends to be quite high, very often higher than the propulsion power. Therefore, the merit of electric propulsion is to reduce the overall installed power on board. Cruise \u2013 The main driver for electrical propulsion is space saving and space optimization. By shifting into electrical propulsion it is possible to reduce the space occupied by main propulsion systems and above all to place the different equipment in a way that the spaces reserved for passengers are maximized in terms of volume and quality. Nevertheless, the redundancy levels offered by electrical propulsion and its ability to meet the regulations on this segment have also contributed significantly for the adoption of this technology. Survey vessels \u2013 These vessels are often used to tow underwater survey devices for different applications, e.g. hydrographic, seismic, etc. Therefore, the lower underwater noise offered by electrical propulsion is very attractive. In addition, the load requirements of the different operation modes can vary significantly which also favor electric propulsion. LNG carriers \u2013 When laden, LNG carriers use their own cargo as fuel which is released by the cargo containment system in a process known as boil-off. Therefore, the propulsion system is selected so that the boil-off gas can be used as fuel. Initially, most vessels were fitted with steam turbines connected to the propulsion shaft through a gearbox. However, the introduction of dual fuel engines led to the adoption of electric propulsion (Hansen et al. 2007). Icebreakers \u2013 Although traditional icebreakers use their own weight to break sheets of ice, it is quite often to have blocks of ice reaching the propellers. This causes a sudden drop of the propeller speed and consequently a sudden increase of torque which could easily stall the propulsion engine with mechanical drive. Electrical drives are able to vary the speed and the torque independently and therefore maintain the torque within the engine\u2019s acceptable limits (\u00c5dnanes, 2003)."}
{"_id": "47cc0539057cf987501f5d0c6bea4e48b16fede0", "title": "Fast and Scalable Learning of Neuro-Symbolic Representations of Biomedical Knowledge", "text": "In this work we address the problem of fast and scalable learning of neuro-symbolic representations for general biological knowledge. Based on a recently published comprehensive biological knowledge graph (Alshahrani, 2017) that was used for demonstrating neurosymbolic representation learning, we show how to train fast (under 1 minute) log-linear neural embeddings of the entities. We utilize these representations as inputs for machine learning classifiers to enable important tasks such as biological link prediction. Classifiers are trained by concatenating learned entity embeddings to represent entity relations, and training classifiers on the concatenated embeddings to discern true relations from automatically generated negative examples. Our simple embedding methodology greatly improves on classification error compared to previously published state-of-the-art results, yielding a maximum increase of +0.28 F-measure and +0.22 ROC AUC scores for the most difficult biological link prediction problem. Finally, our embedding approach is orders of magnitude faster to train (\u2264 1 minute vs. hours), much more economical in terms of embedding dimensions (d = 50 vs. d = 512), and naturally encodes the directionality of the asymmetric biological relations, that can be controlled by the order with which we concatenate the embeddings."}
{"_id": "380278716f4d78ad9dcc3ece9e12b235ca1d1569", "title": "TensorLog: Deep Learning Meets Probabilistic DBs", "text": "We present an implementation of a probabilistic first-order logic called TensorLog, in which classes of logical queries are compiled into differentiable functions in a neuralnetwork infrastructure such as Tensorflow or Theano. This leads to a close integration of probabilistic logical reasoning with deep-learning infrastructure: in particular, it enables high-performance deep learning frameworks to be used for tuning the parameters of a probabilistic logic. Experimental results show that TensorLog scales to problems involving hundreds of thousands of knowledge-base triples and tens of thousands of examples."}
{"_id": "581e200ec1e1317faf3895ab6ed464c5a161ed2b", "title": "A Unified Model for Cross-Domain and Semi-Supervised Named Entity Recognition in Chinese Social Media", "text": "Named entity recognition (NER) in Chinese social media is important but difficult because of its informality and strong noise. Previous methods only focus on in-domain supervised learning which is limited by the rare annotated data. However, there are enough corpora in formal domains and massive in-domain unannotated texts which can be used to improve the task. We propose a unified model which can learn from out-of-domain corpora and in-domain unannotated texts. The unified model contains two major functions. One is for cross-domain learning and another for semi-supervised learning. Cross-domain learning function can learn out-of-domain information based on domain similarity. Semi-Supervised learning function can learn in-domain unannotated information by self-training. Both learning functions outperform existing methods for NER in Chinese social media. Finally, our unified model yields nearly 11% absolute improvement over previously published results."}
{"_id": "b0b5c937f17d178a3345ea506ad91904a1bda880", "title": "Longitudinal relations between children's exposure to TV violence and their aggressive and violent behavior in young adulthood: 1977-1992.", "text": "Although the relation between TV-violence viewing and aggression in childhood has been clearly demonstrated, only a few studies have examined this relation from childhood to adulthood, and these studies of children growing up in the 1960s reported significant relations only for boys. The current study examines the longitudinal relations between TV-violence viewing at ages 6 to 10 and adult aggressive behavior about 15 years later for a sample growing up in the 1970s and 1980s. Follow-up archival data (N = 450) and interview data (N = 329) reveal that childhood exposure to media violence predicts young adult aggressive behavior for both males and females. Identification with aggressive TV characters and perceived realism of TV violence also predict later aggression. These relations persist even when the effects of socioeconomic status, intellectual ability, and a variety of parenting factors are controlled."}
{"_id": "e627b073a659093a34c267901d851713827032cd", "title": "Boot Attestation: Secure Remote Reporting with Off-The-Shelf IoT Sensors", "text": "A major challenge in computer security is about establishing the trustworthiness of remote platforms. Remote attestation is the most common approach to this challenge. It allows a remote platform to measure and report its system state in a secure way to a third party. Unfortunately, existing attestation solutions either provide low security, as they rely on unrealistic assumptions, or are not applicable to commodity low-cost and resource-constrained devices, as they require custom secure hardware extensions that are difficult to adopt across IoT vendors. In this work, we propose a novel remote attestation scheme, named Boot Attestation, that is particularly optimized for low-cost and resource-constrained embedded devices. In Boot Attestation, software integrity measurements are immediately committed to during boot, thus relaxing the traditional requirement for secure storage and reporting. Our scheme is very light on cryptographic requirements and storage, allowing efficient implementations, even on the most low-end IoT platforms available today. We also describe extensions for more flexible management of ownership and third party (public-key) attestation that may be desired in fully Internet-enabled devices. Our scheme is supported by many existing off-the-shelf devices. To this end, we review the hardware protection capabilities for a number of popular device types and present implementation results for two such commercially available platforms."}
{"_id": "69d9bfcc4815b2ba675ed013985ef20bb95a7f63", "title": "Implementation of Sha-256 Algorithm in Fpga Based Processor Soundarya", "text": "Hash functions play a significant role in today's cryptographic applications. SHA (Secure Hash Algorithm) is famous message compress standard used in computer cryptography, it can compress a long message to become a short message abstract. In this paper, SHA256 hash algorithm has been implemented using Verilog HDL (Hardware Description Language). The SHA-256 source code is divided into three modules, namely Data path, Memory and Top module. The Verilog code is synthesized using Xilinx software tool. The test vectors have been applied to verify the correctness of the SHA-256 functionality. A comparison between the proposed SHA-256 hash function implementation with other related works shows that it achieves the introduced system can working on reuse data, minimize critical paths and reduce the memory access by using cache memory, reducing clock cycles and needs less silicon area resources. The achieved performance in the term of throughput of the proposed system/architecture is much higher than the other hardware implementations. The proposed system could be used for the implementation of integrity units, and in many other sensitive cryptographic applications, such as, digital signatures, message authentication codes and random number generators."}
{"_id": "ae85493bf05e73afa0ec879156363d211f8b963f", "title": "PESTO: Parameter EStimation TOolbox", "text": "Summary\nPESTO is a widely applicable and highly customizable toolbox for parameter estimation in MathWorks MATLAB. It offers scalable algorithms for optimization, uncertainty and identifiability analysis, which work in a very generic manner, treating the objective function as a black box. Hence, PESTO can be used for any parameter estimation problem, for which the user can provide a deterministic objective function in MATLAB.\n\n\nAvailability and implementation\nPESTO is a MATLAB toolbox, freely available under the BSD license. The source code, along with extensive documentation and example code, can be downloaded from https://github.com/ICB-DCM/PESTO/.\n\n\nContact\njan.hasenauer@helmholtz-muenchen.de.\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online."}
{"_id": "ad20f3016415af6adaebcf31f8c8e0d4eab4a174", "title": "CRM ADOPTION IN A HIGHER EDUCATION INSTITUTION", "text": "More and more organisations, from private to public sectors, are pursuing higher levels of customer satisfaction, loyalty and retention. With this intent, higher education institutions (HEI) have adopted CRM \u2013 Customer Relationship Management. In order to analyse some of the interesting aspects of this phenomenon n, we conducted an action research in a European Institute. The main research question we answered is \u201chow to adopt a CRM strategy in a Higher Education Institution?\u201d Some of the main findings of this study are (1) even though HEI\u2019s main customer is the student, there are others stakeholders that a CRM project must consider; (2) universities can use their internal resources to implement a CRM project successfully; and (3) using Agile software methodology is an effective way to define clearer, more objective and more assertive technical requirements which result in a CRM software that meet send user\u2019s expectations and organizational strategic goals. These findings can help other HEIs 46 Rigo, G. E., Pedron, C. D., Caldeira, M., Ara\u00fajo, C. C. S., JISTEM, Brazil Vol. 13, No. 1, Jan/Abr., 2016 pp. 45-60 www.jistem.fea.usp.br planning to adopt CRM as a strategic tool to improve their relationship with the stakeholders \u0301 community and expand their student body."}
{"_id": "3aa1d840287639176f4cc9599f3219e449b7eb0d", "title": "Air filter particulate loading detection using smartphone audio and optimized ensemble classification", "text": "Automotive engine intake filters ensure clean air delivery to the engine, though over time these filters load with contaminants hindering free airflow. Today\u2019s open-loop approach to air filter maintenance has drivers replace elements at predetermined service intervals, causing costly and potentially harmful overand under-replacement. The result is that many vehicles consistently operate with reduced power, increased fuel consumption, or excessive particulate-related wear which may harm the catalyst or damage machined engine surfaces. We present a method of detecting filter contaminant loading from audio data collected by a smartphone and a stand microphone. Our machine learning approach to filter supervision uses Mel-Cepstrum, Fourier and Wavelet features as input into a classification model and applies feature ranking to select the best-differentiating features. We demonstrate the robustness of our technique by showing its efficacy for two vehicle types and different microphones, finding a best result of 79.7% accuracy when classifying a filter into three loading states. Refinements to this technique will help drivers supervise their filters and aid in optimally timing their replacement. This will result in an improvement in vehicle performance, efficiency, and reliability, while reducing the cost of maintenance to vehicle owners. \u00a9 2017 Elsevier Ltd. All rights reserved."}
{"_id": "2218865804788c714410ce03ccad195393daf515", "title": "Appropriate agile measurement: using metrics and diagnostics to deliver business value", "text": "Agile software development continually measures both our product and the process used to create it, to allow improvement. With increased popularity, more risk-averse groups are being drawn to agile, bringing with them modes of evaluation incompatible with agile values and principles. These outmoded metrics drive dysfunctional behaviors which threaten the integrity of an emerging agile culture. This paper collects some of the current thinking on appropriate agile metrics, and proposes simple tools for use by teams or organizations. The intention of these tools is to foster dialogue about the appropriateness of metrics in a given context, and thereby to encourage measurements more congruent with the objectives of agile teamwork"}
{"_id": "24fddb90ab733320a4b9c19d39b321d95eb51519", "title": "A comparison of methods for differential expression analysis of RNA-seq data", "text": "Finding genes that are differentially expressed between conditions is an integral part of understanding the molecular basis of phenotypic variation. In the past decades, DNA microarrays have been used extensively to quantify the abundance of mRNA corresponding to different genes, and more recently high-throughput sequencing of cDNA (RNA-seq) has emerged as a powerful competitor. As the cost of sequencing decreases, it is conceivable that the use of RNA-seq for differential expression analysis will increase rapidly. To exploit the possibilities and address the challenges posed by this relatively new type of data, a number of software packages have been developed especially for differential expression analysis of RNA-seq data. We conducted an extensive comparison of eleven methods for differential expression analysis of RNA-seq data. All methods are freely available within the R framework and take as input a matrix of counts, i.e. the number of reads mapping to each genomic feature of interest in each of a number of samples. We evaluate the methods based on both simulated data and real RNA-seq data. Very small sample sizes, which are still common in RNA-seq experiments, impose problems for all evaluated methods and any results obtained under such conditions should be interpreted with caution. For larger sample sizes, the methods combining a variance-stabilizing transformation with the \u2018limma\u2019 method for differential expression analysis perform well under many different conditions, as does the nonparametric SAMseq method."}
{"_id": "4b0b52d369683349ebf66595498202b573185bc0", "title": "Compact triple C shaped microstrip patch antenna for WLAN, WiMAX & Wi-Fi application at 2.5 GHz", "text": "In the rapid progress of commercial communication applications, the development of compact antenna has an important role. This paper presents the analysis on the performance of single band compact triple C shaped microstrip patch antenna for WLAN, WiMAX and Wi-Fi applications with center frequency at 2.5GHz. The microstrip antenna has a planar geometry and consists of a ground plane, a substrate, a compact patch as radiator. Different specifications of the proposed antenna are measured through computer simulation in free space. The antenna has an impedance bandwidth of 2500 MHz (2487MHz to 2512 MHz). The proposed antenna provides excellent SWR of 1.04 and return loss -32.80 dB while excited by a 50 O microstrip feed line. All simulation results are performed using the CST Microwave Studio."}
{"_id": "9af5c320f1ab4e881c1aa4e35d7c7f10d5d2405d", "title": "Unsupervised models for morpheme segmentation and morphology learning", "text": ""}
{"_id": "b4c0f100aa9e15de653794b597e02da707792ceb", "title": "Estimation of variable-speed-drive power consumption from harmonic content", "text": "Nonintrusive load monitoring can be used to identify the operating schedule of individual loads strictly from measurements of an aggregate power signal. Unfortunately, certain classes of loads present a continuously varying power demand. The power demand of these loads can be difficult to separate from an aggregate measurement. Variable-speed drives (VSDs) are industrially important variable-demand loads that are difficult to track non-intrusively. This paper proposes a VSD power estimation method based on observed correlations between fundamental and higher harmonic spectral content in current. The technique can be generalized to any load with signature correlations in harmonic content, including many power electronic and electromechanical loads. The approach presented here expands the applicability and field reliability of nonintrusive load monitoring."}
{"_id": "d462dca757653568d8d93397a20b5a4c90f06791", "title": "\"In vivo\" spam filtering: a challenge problem for KDD", "text": "Spam, also known as Unsolicited Commercial Email (UCE), is the bane of email communication. Many data mining researchers have addressed the problem of detecting spam, generally by treating it as a static text classification problem. True in vivo spam filtering has characteristics that make it a rich and challenging domain for data mining. Indeed, real-world datasets with these characteristics are typically difficult to acquire and to share. This paper demonstrates some of these characteristics and argues that researchers should pursue in vivo spam filtering as an accessible domain for investigating them."}
{"_id": "1240ceddd2441561b5f4d3247ff73777c9bce8e7", "title": "Exploiting Heterogeneous Human Mobility Patterns for Intelligent Bus Routing", "text": "Optimal planning for public transportation is one of the keys to sustainable development and better quality of life in urban areas. Compared to private transportation, public transportation uses road space more efficiently and produces fewer accidents and emissions. In this paper, we focus on the identification and optimization of flawed bus routes to improve utilization efficiency of public transportation services, according to people's real demand for public transportation. To this end, we first provide an integrated mobility pattern analysis between the location traces of taxicabs and the mobility records in bus transactions. Based on mobility patterns, we propose a localized transportation mode choice model, with which we can accurately predict the bus travel demand for different bus routing. This model is then used for bus routing optimization which aims to convert as many people from private transportation to public transportation as possible given budget constraints on the bus route modification. We also leverage the model to identify region pairs with flawed bus routes, which are effectively optimized using our approach. To validate the effectiveness of the proposed methods, extensive studies are performed on real world data collected in Beijing which contains 19 million taxi trips and 10 million bus trips."}
{"_id": "a386bc776407d2a6921402c1b1d21bc046e265b1", "title": "A PID Compensator Control for the Synchronous Rectifier Buck DC-DC Converter", "text": "This paper aims to design a PID compensator for the control of Synchronous Rectifier (SR) Buck Converter to improve its conversion efficiency under different load conditions. Since the diode rectifier is replaced by a high frequency MOSFET switch, the SR control method itself will be sufficient under heavy load condition, to attain better normal mode performance. However, this technique does not hold well in light load condition, due to increased switching losses. A new control technique accompanied with PID compensator is introduced in the paper will enable synchronous buck converter to realize ZVS, while feeding light loads. This is also least cost and highly efficient easy technique without use of extra auxiliary switches and RLC components. This control technique also proved to be efficient under input voltage variations. Simulation is done in MATLAB Simulink for proving stabilization provided by the PID compensator for synchronous rectifier (SR) buck converter. Keywords\u2014Synchronous Rectifier (SR), Continuous Condu-ction Mode (CCM), Discontinuous Conduction Mode (DCM), Zero Voltage Switching (ZVS), Proportional-IntegralDerivative (PID)."}
{"_id": "abd5512c3561895b014add792d6883dfb48f725a", "title": "200V Super Junction MOSFET Fabricated by High Aspect Ratio Trench Filling", "text": "In this paper, a new filling process using an anisotropic epitaxial growth was proposed as the fabrication method of a super junction (SJ) MOSFET. The anisotropic growth controlled with silicon (Si) and chlorine (Cl) source gases was applied to filling of the high-aspect-ratio trenches without voids. Using the process we succeeded in filling trenches with aspect ratio of 18, which is the highest aspect ratio works previously reported, and we fabricated a 200V SJ-MOSFET with a 2.7mum pitch trench gate structure. Consequently, the MOSFET showed electrical characteristics of the lowest on-resistance of 1.5 m-ohm-cm 2 and a breakdown voltage of 225V"}
{"_id": "bb7d337492af8fdcb660ad6cb8b0e114bf40f276", "title": "A survey on IoT communication and computation frameworks: An industrial perspective", "text": "This paper surveys fog computing and embedded systems platforms as the building blocks of Internet of Things (IoT). Many concepts around IoT architectures, with various examples, are also discussed. This paper reviews a high-level conceptual layered architecture for IoT from a computational perspective. The architecture incorporates fog computing to address several issues associated with cloud computing; however, it is never a binary decision between fog and cloud. Many of the world's physical objects are being embedded with sensors and actuators, tied by communication infrastructures, and managed by computational algorithms. IoT sensor networks and embedded systems connecting smart objects are revolutionizing how we approach our daily lives, health care, energy, and transportation. Such computational needs are addressed with an array of various models and frameworks. In an attempt to consolidate the use of these models, this paper reviews the state-of-the-art research in IoT, cloud computing, and fog computing."}
{"_id": "a066520a28ab92960300254587f448636a0fe0a1", "title": "The Obstetric Consequences of Female Genital Mutilation/Cutting: A Systematic Review and Meta-Analysis", "text": "Various forms of female genital mutilation/cutting (FGM/C) have been performed for millennia and continue to be prevalent in parts of Africa. Although the health consequences following FGM/C have been broadly investigated, divergent study results have called into question whether FGM/C is associated with obstetric consequences. To clarify the present state of empirical research, we conducted a systematic review of the scientific literature and quantitative meta-analyses of the obstetric consequences of FGM/C. We included 44 primary studies, of which 28 were comparative, involving almost 3 million participants. The methodological study quality was generally low, but several studies reported the same outcome and were sufficiently similar to warrant pooling of effect sizes in meta-analyses. The meta-analyses results showed that prolonged labor, obstetric lacerations, instrumental delivery, obstetric hemorrhage, and difficult delivery are markedly associated with FGM/C, indicating that FGM/C is a factor in their occurrence and significantly increases the risk of delivery complications. There was no significant difference in risk with respect to cesarean section and episiotomy. These results can make up the background documentation for health promotion and health care decisions that inform work to reduce the prevalence of FGM/C and improve the quality of services related to the consequences of FGM/C."}
{"_id": "6eeb9fb6be8acf70af5d4c76a012c3a9cd6c2bb8", "title": "A fully integrated 77GHz FMCW radar system in 65nm CMOS", "text": "Millimeter-wave anti-collision radars have been widely investigated in advanced CMOS technologies recently. This paper presents a fully integrated 77GHz FMCW radar system in 65nm CMOS. The FMCW radar transmits a continuous wave which is triangularly modulated in frequency and receives the wave reflected from objects. As can be illustrated in Fig. 11.2.1, for a moving target, the received frequency would be shifted (i.e. Doppler shift), resulting in two different offset frequencies f + and f\u2212 for the falling and rising ramps. Denoting the modulation range and period as B and Tm, respectively, we can derive the distance R and the relative velocity VR as listed in Fig. 11.2.1, where fc represents the center frequency and c the speed of light."}
{"_id": "95092f376ebcd064cdc213008150eb649f4f2149", "title": "An exploration of crowdsourcing citation screening for systematic reviews", "text": "Systematic reviews are increasingly used to inform health care decisions, but are expensive to produce. We explore the use of crowdsourcing (distributing tasks to untrained workers via the web) to reduce the cost of screening citations. We used Amazon Mechanical Turk as our platform and 4 previously conducted systematic reviews as examples. For each citation, workers answered 4 or 5 questions that were equivalent to the eligibility criteria. We aggregated responses from multiple workers into an overall decision to include or exclude the citation using 1 of 9 algorithms and compared the performance of these algorithms to the corresponding decisions of trained experts. The most inclusive algorithm (designating a citation as relevant if any worker did) identified 95% to 99% of the citations that were ultimately included in the reviews while excluding 68% to 82% of irrelevant citations. Other algorithms increased the fraction of irrelevant articles excluded at some cost to the inclusion of relevant studies. Crowdworkers completed screening in 4 to 17\u00a0days, costing $460 to $2220, a cost reduction of up to 88% compared to trained experts. Crowdsourcing may represent a useful approach to reducing the cost of identifying literature for systematic reviews."}
{"_id": "0c253bb9aee9aa1ae7909700eda845bd3124197f", "title": "Neural Program Meta-Induction", "text": "Most recently proposed methods for Neural Program Induction work under the assumption of having a large set of input/output (I/O) examples for learning any underlying input-output mapping. This paper aims to address the problem of data and computation efficiency of program induction by leveraging information from related tasks. Specifically, we propose two approaches for cross-task knowledge transfer to improve program induction in limited-data scenarios. In our first proposal, portfolio adaptation, a set of induction models is pretrained on a set of related tasks, and the best model is adapted towards the new task using transfer learning. In our second approach, meta program induction, a k-shot learning approach is used to make a model generalize to new tasks without additional training. To test the efficacy of our methods, we constructed a new benchmark of programs written in the Karel programming language [17]. Using an extensive experimental evaluation on the Karel benchmark, we demonstrate that our proposals dramatically outperform the baseline induction method that does not use knowledge transfer. We also analyze the relative performance of the two approaches and study conditions in which they perform best. In particular, meta induction outperforms all existing approaches under extreme data sparsity (when a very small number of examples are available), i.e., fewer than ten. As the number of available I/O examples increase (i.e. a thousand or more), portfolio adapted program induction becomes the best approach. For intermediate data sizes, we demonstrate that the combined method of adapted meta program induction has the strongest performance."}
{"_id": "19bdbf4925551a8e10579dc1ea6004c0ff9e2081", "title": "Neural Programmer-Interpreters", "text": "We propose the neural programmer-interpreter (NPI): a recurrent and compositional neural network that learns to represent and execute programs. NPI has three learnable components: a task-agnostic recurrent core, a persistent key-value program memory, and domain-specific encoders that enable a single NPI to operate in multiple perceptually diverse environments with distinct affordances. By learning to compose lower-level programs to express higher-level programs, NPI reduces sample complexity and increases generalization ability compared to sequence-tosequence LSTMs. The program memory allows efficient learning of additional tasks by building on existing programs. NPI can also harness the environment (e.g. a scratch pad with read-write pointers) to cache intermediate results of computation, lessening the long-term memory burden on recurrent hidden units. In this work we train the NPI with fully-supervised execution traces; each program has example sequences of calls to the immediate subprograms conditioned on the input. Rather than training on a huge number of relatively weak labels, NPI learns from a small number of rich examples. We demonstrate the capability of our model to learn several types of compositional programs: addition, sorting, and canonicalizing 3D models. Furthermore, a single NPI learns to execute these programs and all 21 associated subprograms."}
{"_id": "61d226578cf4ca7d434c498891aaf1d4086a2986", "title": "Making Neural Programming Architectures Generalize via Recursion", "text": "Empirically, neural networks that attempt to learn programs from data have exhibited poor generalizability. Moreover, it has traditionally been difficult to reason about the behavior of these models beyond a certain level of input complexity. In order to address these issues, we propose augmenting neural architectures with a key abstraction: recursion. As an application, we implement recursion in the Neural Programmer-Interpreter framework on four tasks: grade-school addition, bubble sort, topological sort, and quicksort. We demonstrate superior generalizability and interpretability with small amounts of training data. Recursion divides the problem into smaller pieces and drastically reduces the domain of each neural network component, making it tractable to prove guarantees about the overall system\u2019s behavior. Our experience suggests that in order for neural architectures to robustly learn program semantics, it is necessary to incorporate a concept like recursion."}
{"_id": "b5a369ebcb2e9a8169b71791d77e7a3ad992870f", "title": "Synthesizing Programs for Images using Reinforced Adversarial Learning", "text": "Advances in deep generative networks have led to impressive results in recent years. Nevertheless, such models can often waste their capacity on the minutiae of datasets, presumably due to weak inductive biases in their decoders. This is where graphics engines may come in handy since they abstract away low-level details and represent images as high-level programs. Current methods that combine deep learning and renderers are limited by hand-crafted likelihood or distance functions, a need for large amounts of supervision, or difficulties in scaling their inference algorithms to richer datasets. To mitigate these issues, we present SPIRAL, an adversarially trained agent that generates a program which is executed by a graphics engine to interpret and sample images. The goal of this agent is to fool a discriminator network that distinguishes between real and rendered data, trained with a distributed reinforcement learning setup without any supervision. A surprising finding is that using the discriminator\u2019s output as a reward signal is the key to allow the agent to make meaningful progress at matching the desired output rendering. To the best of our knowledge, this is the first demonstration of an end-to-end, unsupervised and adversarial inverse graphics agent on challenging real world (MNIST, OMNIGLOT, CELEBA) and synthetic 3D datasets. A video of the agent can be found at https://youtu.be/iSyvwAwa7vk."}
{"_id": "a7894eb5c465b8b7e7c1b486a5a5f8f20184149a", "title": "On the Cache Behavior of SPLASH-2 Benchmarks on ARM and ALPHA Processors in Gem5 Full System Simulator", "text": "Today cache size and hierarchy level of caches play an important role in improving computer performance. By using full system simulations of gem5, the variation in memory bandwidth, system bus throughput, L1 and L2 cache size misses are measured by running SPLASH-2 Benchmarks on ARM and ALPHA Processors. In this work we calculate cache misses, memory bandwidth and system bus throughput by running SPLASH2 benchmarks on gem5 Full System Mode. Our results show that L1 cache misses decrease as L1 cache size is varied from 16KB to 64KB. L1 cache misses are independent of L2 cache size after the program data resides in L2 cache. The memory bandwidth and system bus throughput decreases as L1 and L2 cache size increases."}
{"_id": "0e8080e768d58d5acd4ad7a4d83bccb61ec867c2", "title": "Multi-connectivity in 5G mmWave cellular networks", "text": "The millimeter wave (mmWave) frequencies offer the potential of orders of magnitude increases in capacity for next-generation cellular wireless systems. However, links in mmWave networks are highly susceptible to blocking and may suffer from rapid variations in quality. Connectivity to multiple cells - both in the mmWave and in the traditional lower frequencies - is thus considered essential for robust connectivity. However, one of the challenges in supporting multi-connectivity in the mmWave space is the requirement for the network to track the direction of each link in addition to its power and timing. With highly directional beams and fast varying channels, this directional tracking may be the main bottleneck in realizing robust mmWave networks. To address this challenge, this paper proposes a novel measurement system based on (i) the UE transmitting sounding signals in directions that sweep the angular space, (ii) the mmWave cells measuring the instantaneous received signal strength along with its variance to better capture the dynamics and, consequently, the reliability of a channel/direction and, finally, (iii) a centralized controller making handover and scheduling decisions based on the mmWave cell reports and transmitting the decisions either via a mmWave cell or conventional microwave cell (when control signaling paths are not available). We argue that the proposed scheme enables efficient and highly adaptive cell selection in the presence of the channel variability expected at mmWave frequencies."}
{"_id": "f5e9eddfa9e26ac66b29606353d166dfc73223f5", "title": "Value creation process in the fast fashion industry . Towards a networking approach < Value co-creation and the changing role of suppliers and customers >", "text": "Purpose \u2013 Quick response, short product life cycles, customer-centric businesses, agile supply chains and reduction of lead times are considered to be the core business strategies in the sector of fast fashion. The work is an attempt to identify the most revealing management and organizational tools that support the final value creation and service delivery processes in the international industry of fast fashion Design/Methodology/approach \u2013 In order to fulfill such a purpose, the paper detects the sector through the recent developments from Service-Dominant (S-D) Logic and Network Theory scientific proposals Findings \u2013 Value is a co-created performance, requiring several actors participation. In the fast fashion, such actors are represented by textile businesses, providers, retailers, stores, customers. They all interact within a supply chain becoming a value chain in which stakeholders interact with each other contributing to the final value generation. At the light of this research, it seems to be restrictive to identify in the lead time reduction the success factor of the fast fashion businesses, but it should be pursued in politics of integration, interaction, co-creation and sharing throughout the chain. Fast fashion is an example of totally integrated global chain working and performing as a whole business in which every single component co-creates the final mutual benefit Practical implications \u2013 Textile organizations taking part in the network represented by fast fashion are called to re-organize their activities in function of a highly developed integration level and so to move towards a networking approach, fostering the mutual exchange of resources in order to co-create a common value Originality/value \u2013 The paper adds value to the study of the fast fashion industry, valorizing the importance of the co-creating perspective within the supply chain management"}
{"_id": "00c06dbe4d54181424f3a948d6a6e8f3dc918015", "title": "Efficient Road Detection and Tracking for Unmanned Aerial Vehicle", "text": "An unmanned aerial vehicle (UAV) has many applications in a variety of fields. Detection and tracking of a specific road in UAV videos play an important role in automatic UAV navigation, traffic monitoring, and ground-vehicle tracking, and also is very helpful for constructing road networks for modeling and simulation. In this paper, an efficient road detection and tracking framework in UAV videos is proposed. In particular, a graph-cut-based detection approach is given to accurately extract a specified road region during the initialization stage and in the middle of tracking process, and a fast homography-based road-tracking scheme is developed to automatically track road areas. The high efficiency of our framework is attributed to two aspects: the road detection is performed only when it is necessary and most work in locating the road is rapidly done via very fast homography-based tracking. Experiments are conducted on UAV videos of real road scenes we captured and downloaded from the Internet. The promising results indicate the effectiveness of our proposed framework, with the precision of 98.4% and processing 34 frames per second for 1046 \u00d7 595 videos on average."}
{"_id": "3c2724d49dec10f1bff401fbd66a8e1e6c7b1030", "title": "Inferring new indications for approved drugs via random walk on drug-disease heterogenous networks", "text": "Since traditional drug research and development is often time-consuming and high-risk, there is an increasing interest in establishing new medical indications for approved drugs, referred to as drug repositioning, which provides a relatively low-cost and high-efficiency approach for drug discovery. With the explosive growth of large-scale biochemical and phenotypic data, drug repositioning holds great potential for precision medicine in the post-genomic era. It is urgent to develop rational and systematic approaches to predict new indications for approved drugs on a large scale. In this paper, we propose the two-pass random walks with restart on a heterogenous network, TP-NRWRH for short, to predict new indications for approved drugs. Rather than random walk on bipartite network, we integrated the drug-drug similarity network, disease-disease similarity network and known drug-disease association network into one heterogenous network, on which the two-pass random walks with restart is implemented. We have conducted performance evaluation on two datasets of drug-disease associations, and the results show that our method has higher performance than six existing methods. A case study on the Alzheimer\u2019s disease showed that nine of top 10 predicted drugs have been approved or investigational for neurodegenerative diseases. The experimental results show that our method achieves state-of-the-art performance in predicting new indications for approved drugs. We proposed a two-pass random walk with restart on the drug-disease heterogeneous network, referred to as TP-NRWRH, to predict new indications for approved drugs. Performance evaluation on two independent datasets showed that TP-NRWRH achieved higher performance than six existing methods on 10-fold cross validations. The case study on the Alzheimer\u2019s disease showed that nine of top 10 predicted drugs have been approved or are investigational for neurodegenerative diseases. The results show that our method achieves state-of-the-art performance in predicting new indications for approved drugs."}
{"_id": "0694beaf14f558a75f2dc32f64d151e505dbee17", "title": "Stacked Semantics-Guided Attention Model for Fine-Grained Zero-Shot Learning", "text": "Zero-Shot Learning (ZSL) is generally achieved via aligning the semantic relationships between the visual features and the corresponding class semantic descriptions. However, using the global features to represent fine-grained images may lead to sub-optimal results since they neglect the discriminative differences of local regions. Besides, different regions contain distinct discriminative information. The important regions should contribute more to the prediction. To this end, we propose a novel stacked semantics-guided attention (SGA) model to obtain semantic relevant features by using individual class semantic features to progressively guide the visual features to generate an attention map for weighting the importance of different local regions. Feeding both the integrated visual features and the class semantic features into a multi-class classification architecture, the proposed framework can be trained end-to-end. Extensive experimental results on CUB and NABird datasets show that the proposed approach has a consistent improvement on both fine-grained zero-shot classification and retrieval tasks."}
{"_id": "0c7223f863f0736e541ab8324190ade539f00308", "title": "Monitoring of gait performance using dynamic time warping on IMU-sensor data", "text": "In this paper, a novel method for monitoring the changes in gait joint angle trajectories recorded using the low-cost and wearable Inertial Measurement Units (IMU) is presented. The introduced method is based on Dynamic Time Warping (DTW), an algorithm commonly used for evaluating the similarity of two time series which may vary in time and speed. DTW is employed as the measure of distance between two gait trajectories taken in different time instances, which could be used as an intuitive and effective measure for the evaluation of gait performances. The experimental results presented in the paper demonstrate that the proposed method is applicable for clinically relevant applications and is consequently adaptable to patients with diseases characterized with gait disorders and to different walking scenarios. The proposed method was firstly validated by applying the DTW-based measure on gait trajectories of five healthy subjects recorded while simulating different levels of walking disabilities. Then proposed measure was applied to estimate the distance between the \u201chealthy\u201d gait trajectories and gait trajectories of three patients with Parkinson's disease (PD) while performing single-task and dual-task overground walking. Also, the proposed measure was demonstrated as an effective measure for monitoring the changes in gait patterns of a PD patient before and after medication-based treatment. This result indicates potential use of proposed method for effective pharmacological management of PD."}
{"_id": "f1699de0ba1789457ef5cfe949c6c7a5d92edf75", "title": "Evolved Machines Shed Light on Robustness and Resilience", "text": "In biomimetic engineering, we may take inspiration from the products of biological evolution: we may instantiate biologically realistic neural architectures and algorithms in robots, or we may construct robots with morphologies that are found in nature. Alternatively, we may take inspiration from the process of evolution: we may evolve populations of robots in simulation and then manufacture physical versions of the most interesting or more capable robots that evolve. If we follow this latter approach and evolve both the neural and morphological subsystems of machines, we can perform controlled experiments that provide unique insight into how bodies and brains can work together to produce adaptive behavior, regardless of whether such bodies and brains are instantiated in a biological or technological substrate. In this paper, we review selected projects that use such methods to investigate the synergies and tradeoffs between neural architecture, morphology, action, and adaptive behavior."}
{"_id": "6caa85183b231c2dc86eb65faa53a7e747c9fd16", "title": "An Introduction to Locally Linear Embedding", "text": "Many problemsin informationprocessinginvolvesomeform of dimensionality reduction.Herewe describelocally linearembedding(LLE), anunsupervisedlearningalgorithmthat computeslow dimensional,neighborhood preservingembeddingsof high dimensionaldata.LLE attemptsto discover nonlinearstructurein high dimensionaldataby exploiting thelocal symmetries of linear reconstructions.Notably, LLE mapsits inputs into a single global coordinatesystemof lower dimensionality, and its optimizations\u2014 thoughcapableof generatinghighly nonlinearembeddings\u2014donot involve localminima.We illustratethemethodon imagesof lips usedin audiovisual speechsynthesis."}
{"_id": "25e46de665dfc07ffb304a0311f311a2979638b3", "title": "RECENT DEVELOPMENTS IN PAPER CURRENCY RECOGNITION SYSTEM", "text": "Currency denomination recognition is one the active research topics at present. And this wide interest is particularly due to the various potential applications it has. Monetary transaction is an integral part of our day to day activities. However, blind people particularly suffer in monetary transactions. They are not able to effectively distinguish between various denominations and are often deceived by other people. Also, a reliable currency recognition system could be used in any sector wherever monetary transaction is of concern. Thus, there is an ardent need to design a system that is helpful in recognition of paper currency notes correctly. Currency denomination detection is a vast area of research and significant progress had been achieved over the years. This paper presents an extensive survey of research on various developments in recent years in identification of currency denomination. A number of techniques applied by various researchers are discussed briefly in order to assess the state of art."}
{"_id": "6b2e0cdd23a6ab3b98ea7134d85f0adffaed0ae2", "title": "The effect of substrate noise on VCO performance", "text": "This study characterizes the effect of substrate noise on a standard component of the RF front end: the voltage controlled oscillator (VCO), as well as evaluating the effect of VCO bias current and guard rings on noise performance. Frequency effects of substrate noise are also examined through the study of VCOs at three different center frequencies: 900 MHz, 2.4 GHz, and 5.2 GHz. Substrate noise is a serious problem that continues to plague mixed-signal designs. Components of the RF frontend are particularly sensitive to substrate noise as the effectiveness of standard isolation techniques degrades at higher frequencies. This study has shown that the phase noise of a VCO is adversely affected by substrate noise. In the extreme, the VCO can lock to the substrate noise. Guard rings can effectively attenuate substrate noise at lower frequencies. For example, at 900 MHz, as much as 25 dB of isolation is observed. At 5.2 GHz, the isolation reduces to 10 dB. Furthermore, the use of guard rings can improve the response of the VCO to injection locking."}
{"_id": "f6daad3b3d228e9d4324c3af4c3b2404b3d7fc68", "title": "Delay-Constrained Hybrid Computation Offloading With Cloud and Fog Computing", "text": "To satisfy the delay constraint, the computation tasks can be offloaded to some computing servers, referred to as offloading destinations. Different to most of existing works which usually consider only a single type of offloading destinations, in this paper, we study the hybrid computation offloading problem considering diverse computation and communication capabilities of two types of offloading destinations, i.e., cloud computing servers and fog computing servers. The aim is to minimize the total energy consumption for both communication and computation while completing the computation tasks within a given delay constraint. It is quite challenging because the delay cannot be easily formulated as an explicit expression but depends on the embedded communication-computation scheduling problem for the computation offloading to different destinations. To solve the computation offloading problem, we first define a new concept named computation energy efficiency and divide the problem into four subproblems according to the computation energy efficiency of different types of computation offloading and the maximum tolerable delay. For each subproblem, we give a closed-form computation offloading solution with the analysis of communication-computation scheduling under the delay constraint. The numerical results show that the proposed hybrid computation offloading solution achieves lower energy consumption than the conventional single-type computation offloading under the delay constraint."}
{"_id": "b6bbb228300c72f141a2f05702ddc7f8ab4a8297", "title": "Information Security: End User Behavior and Corporate Culture", "text": "Information is the life blood of all modern organizations yet the news media continue to report stories of critical information loss. The purpose of information security is to protect valuable assets, such as information, hardware, software and people. The majority of information security specialists believe that promoting good end user behavior and constraining bad end user behavior is an important component of an effective Information Security Management System (ISMS). Implementing effective information security involves understanding security-related risk, then developing and implementing appropriate controls. In general the better employees are at applying the controls the more secure the organization will be, because even the best designed technical controls and procedures will be of limited value if the staff involved do not understand why they have been implemented and what they are accomplishing. Achieving the required level of understanding usually requires more than an annual awareness training initiative and represents a major challenge for most organizations. In fact, for many organizations it will involve a cultural change to ensure the integration of information security concepts into the organizational culture."}
{"_id": "a13dc9739e4637599359d792fd60d511ab8a016e", "title": "Robust visual inertial odometry using a direct EKF-based approach", "text": "In this paper, we present a monocular visual-inertial odometry algorithm which, by directly using pixel intensity errors of image patches, achieves accurate tracking performance while exhibiting a very high level of robustness. After detection, the tracking of the multilevel patch features is closely coupled to the underlying extended Kalman filter (EKF) by directly using the intensity errors as innovation term during the update step. We follow a purely robocentric approach where the location of 3D landmarks are always estimated with respect to the current camera pose. Furthermore, we decompose landmark positions into a bearing vector and a distance parametrization whereby we employ a minimal representation of differences on a corresponding \u03c3-Algebra in order to achieve better consistency and to improve the computational performance. Due to the robocentric, inverse-distance landmark parametrization, the framework does not require any initialization procedure, leading to a truly power-up-and-go state estimation system. The presented approach is successfully evaluated in a set of highly dynamic hand-held experiments as well as directly employed in the control loop of a multirotor unmanned aerial vehicle (UAV)."}
{"_id": "0bd6442092bc4a9e0e77cd2f302f2db1a242e250", "title": "IoT-based continuous glucose monitoring system: A feasibility study", "text": "Health monitoring systems based on Internet-of-things (IoT) have been recently introduced to improve the quality of health care services. However, the number of advanced IoT-based continuous glucose monitoring systems is small and the existing systems have several limitations. In this paper we study feasibility of invasive and continuous glucose monitoring (CGM) system utilizing IoT based approach. We designed an IoT-based system architecture from a sensor device to a back-end system for presenting real-time glucose, body temperature and contextual data (i.e. environmental temperature) in graphical and human-readable forms to end-users such as patients and doctors. In addition, nRF communication protocol is customized for suiting to the glucose monitoring system and achieving a high level of energy efficiency. Furthermore, we investigate energy consumption of the sensor device and design energy harvesting units for the device. Finally, the work provides many advanced services at a gateway level such as a push notification service for notifying patient and doctors in case of abnormal situations (i.e. too low or too high glucose level). The results show that our system is able to achieve continuous glucose monitoring remotely in real-time. In addition, the results reveal that a high level of energy efficiency can be achieved by applying the customized nRF component, the power management unit and the energy harvesting unit altogether in the sensor device. c \u00a9 2017 The Authors. Published by E sevier B.V. i ilit f t f re ce r ra hairs."}
{"_id": "d30dbdde4f25d3702348bd471d02b33806d3637a", "title": "Predicting Categorical Emotions by Jointly Learning Primary and Secondary Emotions through Multitask Learning", "text": "Detection of human emotions is an essential part of affect-aware human-computer interaction (HCI). In daily conversations, the preferred way of describing affects is by using categorical emotion labels (e.g., sad, anger, surprise). In categorical emotion classification, multiple descriptors (with different degrees of relevance) can be assigned to a sample. Perceptual evaluations have relied on primary and secondary emotions to capture the ambiguous nature of spontaneous recordings. Primary emotion is the most relevant category felt by the evaluator. Secondary emotions capture other emotional cues also conveyed in the stimulus. In most cases, the labels collected from the secondary emotions are discarded, since assigning a single class label to a sample is preferred from an application perspective. In this work, we take advantage of both types of annotations to improve the performance of emotion classification. We collect the labels from all the annotations available for a sample and generate primary and secondary emotion labels. A classifier is then trained using multitask learning with both primary and secondary emotions. We experimentally show that considering secondary emotion labels during the learning process leads to relative improvements of 7.9% in F1-score for an 8-class emotion classification task."}
{"_id": "92bfd82eceb67112d1db1c7378144007dc3aa247", "title": "A New Concept Varistor With Epoxy/Microvaristor Composite", "text": "A new concept composite varistor is proposed in this paper. This composite varistor has the chains of microvaristors, which are formed by applying an electric field during the curing process and work as current paths. This composite varistor shows superior nonlinear voltage-current characteristics despite small microvaristor content and has a good response against a steep voltage surge of several tens of nanoseconds in front time. In comparison with ceramic varistors where shapes are usually formed at their sintering temperatures of around 1000\u00b0C, the composite varistor has an advantage in flexible shape because its shape is formed by curing the base polymer at a low temperature."}
{"_id": "f7d997a640f2b804676cadb8030d8b2c7bd79d85", "title": "On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation", "text": "Model selection strategies for machine learning algorithm s typically involve the numerical optimisation of an appropriate model selection criterion, ofte n based on an estimator of generalisation performance, such as k-fold cross-validation. The error of such an estimator can b e broken down into bias and variance components. While unbiasedness is oft en cited as a beneficial quality of a model selection criterion, we demonstrate that a low varian ce is at least as important, as a nonnegligible variance introduces the potential for over-fitt ing in model selection as well as in training the model. While this observation is in hindsight perhaps rat her obvious, the degradation in performance due to over-fitting the model selection criterion can b e surprisingly large, an observation that appears to have received little attention in the machine lea rning literature to date. In this paper, we show that the effects of this form of over-fitting are often of c mparable magnitude to differences in performance between learning algorithms, and thus canno t be ignored in empirical evaluation. Furthermore, we show that some common performance evaluati on practices are susceptible to a form of selection bias as a result of this form of over-fitting and hence are unreliable. We discuss methods to avoid over-fitting in model selection and sub sequent selection bias in performance evaluation, which we hope will be incorporated into best pra ctice. While this study concentrates on cross-validation based model selection, the findings are quit general and apply to any model selection practice involving the optimisation of a model se lection criterion evaluated over a finite sample of data, including maximisation of the Bayesian evid nce and optimisation of performance bounds."}
{"_id": "4fb938a0205995244dbff84f72b533b9b3fecd0d", "title": "Design of a 1.8V-input 6.5V-output digital level shifter for trimming application", "text": "A 1.8V input, 6.5V output step-up digital level shifter is proposed. It is based on contention mitigated digital level shifter (CMLS) but it uses single supply only instead of two. It uses a complete, stand-alone, and unregulated voltage charge pump circuit that generates a 6.5V output voltage from a 1.8V input. Charge pump output is then used as high-side supply (VDDH) of the step-up level shifter, while the low-side supply (VDDl) is externally-supplied 1.8V. Compared to the conventional level shifter where two separate supply voltages (VDDl and VDDH) are required, the proposed design uses a single supply to shifted-up a digital signals. This technique simplifies the user-end power supply requirement of the system, thus the proposed level shifter design is applicable in many low-power and power-constraint applications such as in mobile and portable devices application. The proposed design is implemented using 0.35-\u03bcm TSMC CMOS technology. The schematic and the nestlist have been designed and extracted using LTSPICE tool; while the simulation has been carried out using the HSPICE circuit simulator."}
{"_id": "49d0f7ae8ccd2b859ca1cb481aa57c5e84b17718", "title": "Lazy Theta*: Any-Angle Path Planning and Path Length Analysis in 3D", "text": "Grids with blocked and unblocked cells are often used to represent continuous 2D and 3D environments in robotics and video games. The shortest paths formed by the edges of 8neighbor 2D grids can be up to \u2248 8% longer than the shortest paths in the continuous environment. Theta* typically finds much shorter paths than that by propagating information along graph edges (to achieve short runtimes) without constraining paths to be formed by graph edges (to find short \u201cany-angle\u201d paths). We show in this paper that the shortest paths formed by the edges of 26-neighbor 3D grids can be \u2248 13% longer than the shortest paths in the continuous environment, which highlights the need for smart path planning algorithms in 3D. Theta* can be applied to 3D grids in a straight-forward manner, but it performs a line-of-sight check for each unexpanded visible neighbor of each expanded vertex and thus it performs many more line-of-sight checks per expanded vertex on a 26-neighbor 3D grid than on an 8-neighbor 2D grid. We therefore introduce Lazy Theta*, a variant of Theta* which uses lazy evaluation to perform only one line-of-sight check per expanded vertex (but with slightly more expanded vertices). We show experimentally that Lazy Theta* finds paths faster than Theta* on 26-neighbor 3D grids, with one order of magnitude fewer line-of-sight checks and without an increase in path length."}
{"_id": "1067ef2c4d8c73bb710add5c7bfe35dd74bcb98a", "title": "Mechanisms of facial emotion recognition in autism spectrum disorders: Insights from eye tracking and electroencephalography", "text": "While behavioural difficulties in facial emotion recognition (FER) have been observed in individuals with Autism Spectrum Disorder (ASD), behavioural studies alone are not suited to elucidate the specific nature of FER challenges in ASD. Eye tracking (ET) and electroencephalography (EEG) provide insights in to the attentional and neurological correlates of performance, and may therefore provide insight in to the mechanisms underpinning FER in ASD. Given that these processes develop over the course of the developmental trajectory, there is a need to synthesise findings in regard to the developmental stages to determine how the maturation of these systems may impact FER in ASD. We conducted a systematic review of fifty-four studies investigating ET or EEG meeting inclusion criteria. Findings indicate divergence of visual processing pathways in individuals with ASD. Altered function of the social brain in ASD impacts the processing of facial emotion across the developmental trajectory, resulting in observable differences in ET and EEG outcomes."}
{"_id": "22b2bcdbabc47757505cceaeefe8d93be2d2d1ae", "title": "Adaptive Deep Brain Stimulation In Advanced Parkinson Disease", "text": "OBJECTIVE\nBrain-computer interfaces (BCIs) could potentially be used to interact with pathological brain signals to intervene and ameliorate their effects in disease states. Here, we provide proof-of-principle of this approach by using a BCI to interpret pathological brain activity in patients with advanced Parkinson disease (PD) and to use this feedback to control when therapeutic deep brain stimulation (DBS) is delivered. Our goal was to demonstrate that by personalizing and optimizing stimulation in real time, we could improve on both the efficacy and efficiency of conventional continuous DBS.\n\n\nMETHODS\nWe tested BCI-controlled adaptive DBS (aDBS) of the subthalamic nucleus in 8 PD patients. Feedback was provided by processing of the local field potentials recorded directly from the stimulation electrodes. The results were compared to no stimulation, conventional continuous stimulation (cDBS), and random intermittent stimulation. Both unblinded and blinded clinical assessments of motor effect were performed using the Unified Parkinson's Disease Rating Scale.\n\n\nRESULTS\nMotor scores improved by 66% (unblinded) and 50% (blinded) during aDBS, which were 29% (p = 0.03) and 27% (p = 0.005) better than cDBS, respectively. These improvements were achieved with a 56% reduction in stimulation time compared to cDBS, and a corresponding reduction in energy requirements (p < 0.001). aDBS was also more effective than no stimulation and random intermittent stimulation.\n\n\nINTERPRETATION\nBCI-controlled DBS is tractable and can be more efficient and efficacious than conventional continuous neuromodulation for PD."}
{"_id": "7628d73c73b8ae20c6cf866ce0865a1fb64612a3", "title": "Image Texture Feature Extraction Using GLCM Approach", "text": "Feature Extraction is a method of capturing visual content of images for indexing & retrieval. Primitive or low level image features can be either general features, such as extraction of color, texture and shape or domain specific features. This paper presents an application of gray level co-occurrence matrix (GLCM) to extract second order statistical texture features for motion estimation of images. The Four features namely, Angular Second Moment, Correlation, Inverse Difference Moment, and Entropy are computed using Xilinx FPGA. The results show that these texture features have high discrimination accuracy, requires less computation time and hence efficiently used for real time Pattern recognition applications."}
{"_id": "a1521e7108979598baecde9c5ef28ed0ca78cd7e", "title": "Plant Leaf Recognition using Shape Based Features and Neural Network Classifiers", "text": "This paper proposes an automated system for recognizing plant species based on leaf images. Plant leaf images corresponding to three plant types, are analyzed using two different shape modeling techniques, the first based on the Moments-Invariant (M-I) model and the second on the CentroidRadii (C-R) model. For the M-I model the first four normalized central moments have been considered and studied in various combinations viz. individually, in joint 2-D and 3-D feature spaces for producing optimum results. For the C-R model an edge detector has been used to identify the boundary of the leaf shape and 36 radii at 10 degree angular separation have been used to build the feature vector. To further improve the accuracy, a hybrid set of features involving both the M-I and C-R models has been generated and explored to find whether the combination feature vector can lead to better performance. Neural networks are used as classifiers for discrimination. The data set consists of 180 images divided into three classes with 60 images each. Accuracies ranging from 90%-100% are obtained which are comparable to the best figures reported in extant literature. Keywords-plant recognition; moment invariants; centroid-radii model; neural network; computer vision."}
{"_id": "a2e0aabcf0447c5960d5e624855cc8347bef723c", "title": "Images Features Extraction of Tobacco Leaves", "text": "Images features extraction is very important for the grading process of flue-cured tobacco leaves. In this paper, a machine vision techniques base system is proposed for the automatic inspection of flue-cured tobacco leaves. Machine vision techniques are used in this system to solve problems of features extraction and analysis of tobacco leaves, which include features of color, size, shape andsurface texture. The experimental results show that this system is a viable way for the features extraction of tobacco leaves, and can be used for the automatic classification of tobacco leaves."}
{"_id": "ddf1f27694928a729aefa6ffd6faabfd8ebf2842", "title": "PLANT LEAF RECOGNITION", "text": "This paper proposes an automated system for recognizing plant species based on leaf images. Plant leaf images corresponding to three plant types, are analyzed using three different shape modelling techniques, the first two based on the Moments-Invariant (M-I) model and the Centroid-Radii (C-R) model and the third based on a proposed technique of Binary-Superposition (B-S). For the M-I model the first four central normalized moments have been considered. For the C-R model an edge detector has been used to identify the boundary of the leaf shape and 36 radii at 10 degree angular separation have been used to build the shape vector. The proposed approach consists of comparing binary versions of the leaf images through superposition and using the sum of non-zero pixel values of the resultant as the feature vector. The data set for experimentations consists of 180 images divided into training and testing sets and comparison between them is done using Manhattan, Euclidean and intersection norms. Accuracies obtained using the proposed technique is seen to be an improvement over the M-I and C-R based techniques, and comparable to the best figures reported in extant literature."}
{"_id": "13754d470d8b5bf060ff42aa5a93b743f8be74a3", "title": "COLOUR AND SHAPE ANALYSIS TECHNIQUES FOR WEED DETECTION IN CEREAL FIELDS", "text": "Information on weed distribution within the field is necessary to implement spatially variable herbicide application. This paper deals with the development of near-ground image capture and processing techniques in order to detect broad leaf weeds in cereal crops, under actual field conditions. The proposed methods use both colour and shape analysis techniques for discriminating crop, weeds and soil. The performance of algorithms was assessed by comparing the results with a human classification, providing a good success rate. The study shows the potential of using image processing techniques to generate weed maps."}
{"_id": "fe3e91e40a950c6b6601b8f0a641884774d949ae", "title": "Distributional Reinforcement Learning With Quantile Regression", "text": "In reinforcement learning an agent interacts with the environment by taking actions and observing the next state and reward. When sampled probabilistically, these state transitions, rewards, and actions can all induce randomness in the observed long-term return. Traditionally, reinforcement learning algorithms average over this randomness to estimate the value function. In this paper, we build on recent work advocating a distributional approach to reinforcement learning in which the distribution over returns is modeled explicitly instead of only estimating the mean. That is, we examine methods of learning the value distribution instead of the value function. We give results that close a number of gaps between the theoretical and algorithmic results given by Bellemare, Dabney, and Munos (2017). First, we extend existing results to the approximate distribution setting. Second, we present a novel distributional reinforcement learning algorithm consistent with our theoretical formulation. Finally, we evaluate this new algorithm on the Atari 2600 games, observing that it significantly outperforms many of the recent improvements on DQN, including the related distributional algorithm C51."}
{"_id": "399227dca4dda780db053a7e05138c0851d86110", "title": "A 28-GHz 32-Element TRX Phased-Array IC With Concurrent Dual-Polarized Operation and Orthogonal Phase and Gain Control for 5G Communications", "text": "This paper presents the first reported 28-GHz phased-array IC for 5G communications. Implemented in 130-nm SiGe BiCMOS, the IC includes 32 TRX elements and features concurrent independent beams in two polarizations in either TX or RX operation. Circuit techniques to enable precise beam steering, orthogonal phase and amplitude control at each front end, and independent tapering and beam steering at the array level are presented. A TX/RX switch design is introduced which minimizes TX path loss resulting in 13.5 dBm/16 dBm Op1dB/Psat per front end with >20% peak power added efficiency of the power amplifier (including switch and off-mode LNA) while maintaining a 6 dB noise figure in the low noise amplifier (including switch and off-mode PA). Comprehensive on-wafer measurement results for the IC across multiple samples and temperature variation are presented. A package with four ICs and 64 dual-polarized antennas provides eight 16-element or two 64-element concurrent beams with 1.4\u00b0/step beam steering (<0.6\u00b0 rms error) across a \u00b150\u00b0 steering range without requiring calibration. A maximum saturated effective isotropic radiated power of 54 dBm is measured in the broadside direction for each polarization. Tapering control without requiring calibration achieves up to 20-dB sidelobe rejection without affecting the main lobe direction."}
{"_id": "aa8a8890421402a27067bc47c1be7a49ac772ccf", "title": "A study on brushless PM slotless motor with toroidal winding", "text": "This paper presents a study on brushless permanent magnet (PM) slotless motor with toroidal winding (TWSL). Herein, slotted motor with conventional winding (i.e., CWS motor), which has already been applied to the industry, is compared to the TWSL motor. A finite-element analysis is used to comprehend the operation basis of the TWSL motor, including the linkage flux, back electromotive force, and torque. The proposed TWSL motor can generate the torque level as much as the CWS motor does. The proposed TWSL motor is manufactured and experimented upon to validate the finite-element analysis result."}
{"_id": "0d9c20b221b12828182713b3b2f0f2e900a0afac", "title": "Efficient Adaptive-Support Association Rule Mining for Recommender Systems", "text": "Collaborative recommender systems allow personalization for e-commerce by exploiting similarities and dissimilarities among customers' preferences. We investigate the use of association rule mining as an underlying technology for collaborative recommender systems. Association rules have been used with success in other domains. However, most currently existing association rule mining algorithms were designed with market basket analysis in mind. Such algorithms are inefficient for collaborative recommendation because they mine many rules that are not relevant to a given user. Also, it is necessary to specify the minimum support of the mined rules in advance, often leading to either too many or too few rules; this negatively impacts the performance of the overall system. We describe a collaborative recommendation technique based on a new algorithm specifically designed to mine association rules for this purpose. Our algorithm does not require the minimum support to be specified in advance. Rather, a target range is given for the number of rules, and the algorithm adjusts the minimum support for each user in order to obtain a ruleset whose size is in the desired range. Rules are mined for a specific target user, reducing the time required for the mining process. We employ associations between users as well as associations between items in making recommendations. Experimental evaluation of a system based on our algorithm reveals performance that is significantly better than that of traditional correlation-based approaches."}
{"_id": "2fcca028e02af8e21abf82ce3ddabab6b1e259ae", "title": "Denial of Service in Sensor Networks", "text": "S ensor networks hold the promise of facilitating large-scale, real-time data processing in complex environments. Their foreseeable applications will help protect and monitor critical military, environmental, safety-critical , or domestic infrastructures and resources. In these and other vital or security-sensitive deployments, keeping the network available for its intended use is essential. The stakes are high: Denial of service attacks against such networks may permit real-world damage to the health and safety of people. Without proper security mechanisms, networks will be confined to limited, controlled environments , negating much of the promise they hold. The limited ability of individual sensor nodes to thwart failure or attack makes ensuring network availability more difficult. To identify denial of service vulnerabilities, we analyze two effective sensor network protocols that did not initially consider security. These examples demonstrate that consideration of security at design time is the best way to ensure successful network deployment. Advances in miniaturization combined with an insatiable appetite for previously unrealizable information gathering have lead to the development of new kinds of networks. In many areas, static infrastruc-tures are giving way to dynamic ad-hoc networks. One manifestation of these trends is the development of highly application-dependent sensor networks. Developers build sensor networks to collect and analyze low-level data from an environment of interest. Accomplishing the network's goal often depends on local cooperation, aggregation, or data processing because individual nodes have limited capabilities. Physically small, nodes have tiny or irreplaceable power reserves, communicate wirelessly, and may not possess unique identifiers. Further, they must form ad hoc relationships in a dense network with little or no preexisting infrastructure. Protocols and algorithms operating in the network must support large-scale distribution, often with only localized interactions among nodes. The network must continue operating even after significant node failure, and it must meet real-time requirements. In addition to the limitations imposed by application-dependent deadlines, because it reflects a changing environment, the data the network gathers may intrinsically be valid for only a short time. Sensor networks may be deployed in a host of different environments, and they often figure into military scenarios. These networks may gather intelligence in battlefield conditions, track enemy troop movements, monitor a secured zone for activity , or measure damage and casualties. An airplane or artillery 1 could deploy these networks to otherwise unreachable regions. Although military applications may be the easiest to imagine, much broader opportunities await. Sensor networks could form \u2026"}
{"_id": "234f46400747a34918882c5d7f5872e03d344bea", "title": "Modeling consumer loan default prediction using ensemble neural networks", "text": "In this paper, a loan default prediction model is constricted using three different training algorithms, to train a supervised two-layer feed-forward network to produce the prediction model. But first, two attribute filtering functions were used, resulting in two data sets with reduced attributes and the original data-set. Back propagation based learning algorithms was used for training the network. The neural networks are trained using real world credit application cases from a German bank datasets which has 1000 cases; each case with 24 numerical attributes; upon, which the decision is based. The aim of this paper was to compare between the resulting models produced from using different training algorithms, scaled conjugate gradient backpropagation, Levenberg-Marquardt algorithm, One-step secant backpropagation (SCG, LM and OSS) and an ensemble of SCG, LM and OSS. Empirical results indicate that training algorithms improve the design of a loan default prediction model and ensemble model works better than the individual models."}
{"_id": "4524e91bb59167c6babc54c7b0c7656beda342fe", "title": "IAP Guidelines on Rickettsial Diseases in Children.", "text": "OBJECTIVE\nTo formulate practice guidelines on rickettsial diseases in children for pediatricians across India.\n\n\nJUSTIFICATION\nRickettsial diseases are increasingly being reported from various parts of India. Due to low index of suspicion, nonspecific clinical features in early course of disease, and absence of easily available, sensitive and specific diagnostic tests, these infections are difficult to diagnose. With timely diagnosis, therapy is easy, affordable and often successful. On the other hand, in endemic areas, where healthcare workers have high index of suspicion for these infections, there is rampant and irrational use of doxycycline as a therapeutic trial in patients of undifferentiated fevers. Thus, there is a need to formulate practice guidelines regarding rickettsial diseases in children in Indian context.\n\n\nPROCESS\nA committee was formed for preparing guidelines on rickettsial diseases in children in June 2016. A meeting of consultative committee was held in IAP office, Mumbai and scientific content was discussed. Methodology and results were scrutinized by all members and consensus was reached. Textbook references and published guidelines were also used in few instances to make recommendations. Various Indian and international publications pertinent to present study were collated and guidelines were approved by all committee members. Future updates in these guidelines will be dictated by new scientific data in the field of rickettsial diseases in children.\n\n\nRECOMMENDATIONS\nIndian tick typhus and scrub typhus are commonly seen rickettsial diseases in India. It is recommended that practicing pediatricians should be well conversant with compatible clinical scenario, suggestive epidemiological features, differential diagnoses and suggestive laboratory features to make diagnosis and avoid over diagnosis of these infections, as suggested in these guidelines. Doxycycline is the drug of choice and treatment should begin promptly without waiting for confirmatory laboratory results."}
{"_id": "40e0588779c473cf56a09d2b5bb0af00a8cdb8f0", "title": "Traffic prediction in a bike-sharing system", "text": "Bike-sharing systems are widely deployed in many major cities, providing a convenient transportation mode for citizens' commutes. As the rents/returns of bikes at different stations in different periods are unbalanced, the bikes in a system need to be rebalanced frequently. Real-time monitoring cannot tackle this problem well as it takes too much time to reallocate the bikes after an imbalance has occurred. In this paper, we propose a hierarchical prediction model to predict the number of bikes that will be rent from/returned to each station cluster in a future period so that reallocation can be executed in advance. We first propose a bipartite clustering algorithm to cluster bike stations into groups, formulating a two-level hierarchy of stations. The total number of bikes that will be rent in a city is predicted by a Gradient Boosting Regression Tree (GBRT). Then a multi-similarity-based inference model is proposed to predict the rent proportion across clusters and the inter-cluster transition, based on which the number of bikes rent from/ returned to each cluster can be easily inferred. We evaluate our model on two bike-sharing systems in New York City (NYC) and Washington D.C. (D.C.) respectively, confirming our model's advantage beyond baseline approaches (0.03 reduction of error rate), especially for anomalous periods (0.18/0.23 reduction of error rate)."}
{"_id": "7860877056470f53583a74b28e00360e10802336", "title": "A Review of Methodological Approaches for the Design and Optimization of Wind Farms", "text": "This article presents a review of the state of the art of the Wind Farm Design and Optimization (WFDO) problem. The WFDO problem refers to a set of advanced planning actions needed to extremize the performance of wind farms, which may be composed of a few individual Wind Turbines (WTs) up to thousands of WTs. The WFDO problem has been investigated in different scenarios, with substantial differences in main objectives, modelling assumptions, constraints, and numerical solution methods. The aim of this paper is: (1) to present an exhaustive survey of the literature covering the full span of the subject, an analysis of the state-of-the-art models describing the performance of wind farms as well as its extensions, and the numerical approaches used to solve the problem; (2) to provide an overview of the available knowledge and recent progress in the application of such strategies to real onshore and offshore wind farms; and (3) to propose a comprehensive agenda for future research. OPEN ACCESS Energies 2014, 7 6931"}
{"_id": "f17c78d07dbe14e9d250f57156500e63ee558ee7", "title": "MORTALITY FROM CORONARY HEART DISEASE IN SUBJECTS WITH TYPE 2 DIABETES AND IN NONDIABETIC SUBJECTS WITH AND WITHOUT PRIOR MYOCARDIAL INFARCTION", "text": "s of articles published since 1993. Single articles and past issues of the Journal can also be ordered for a fee through the Internet (http://www.nejm.org/customer/)."}
{"_id": "631cc57858eb1a94522e0090c6640f6f39ab7e18", "title": "Blockchain as a Service for IoT", "text": "A blockchain is a distributed and decentralized ledger that contains connected blocks of transactions. Unlike other ledger approaches, blockchain guarantees tamper proof storage of approved transactions. Due to its distributed and decentralized organization, blockchain is beeing used within IoT e.g. to manage device configuration, store sensor data and enable micro-payments. This paper presents the idea of using blockchain as a service for IoT and evaluates the performance of a cloud and edge hosted blockchain implementation."}
{"_id": "dd44809400f0953b270d9df978aa989adfb459b3", "title": "AI-Assisted Game Debugging with Cicero", "text": "We present Cicero, a mixed-initiative application for prototyping two-dimensional sprite-based games across different genres such as shooters, puzzles, and action games. Cicero provides a host of features which can offer assistance in different stages of the game development process. Noteworthy features include AI agents for gameplay simulation, a game mechanics recommender system, a playtrace aggregator, heatmap-based game analysis, a sequential replay mechanism, and a query system that allows searching for particular interaction patterns. In order to evaluate the efficacy and usefulness of the different features of Cicero, we conducted a user study in which we compared how users perform in game debugging tasks with different kinds of assistance."}
{"_id": "c4bf850225bdb46b09a3c992d1bc87daf9e66248", "title": "On the use of windows for harmonic analysis with the discrete Fourier transform", "text": "This paper makes available a concise review of data windows and their affect on the detection of harmonic signals in the presence of broad-band noise, and in the presence of nearby strong harmonic interference. We also call attention to a number of common errors in the application of windows when used with the fast Fourier transform. This paper includes a comprehensive catalog of data windows along with their significant performance parameters from which the different windows can be compared. Finally, an example demonstrates the use and value of windows to resolve closely spaced harmonic signals characterized by large differences in amplitude."}
{"_id": "1854f1ff2a1387aff7ed7b93de5d47ffc4cef05e", "title": "Film Critics : Influencers or Predictors ?", "text": "Critics and their reviews pervade many industries and are particularly important in the entertainment industry. Few marketing scholars, however, have considered the relationship between the market performance of entertainment services and the role of critics. The authors do so here. They show empirically that critical reviews correlate with late and cumulative box office receipts but do not have a significant correlation with early box office receipts. Although still far from any definitive conclusion, this finding suggests that critics, at least from an aggregate-level perspective, appear to act more as leading indicators than as opinion leaders."}
{"_id": "58def848174596119326b4ee981111d5ed538e9d", "title": "Knowledge distillation across ensembles of multilingual models for low-resource languages", "text": "This paper investigates the effectiveness of knowledge distillation in the context of multilingual models. We show that with knowledge distillation, Long Short-Term Memory(LSTM) models can be used to train standard feed-forward Deep Neural Network (DNN) models for a variety of low-resource languages. We then examine how the agreement between the teacher's best labels and the original labels affects the student model's performance. Next, we show that knowledge distillation can be easily applied to semi-supervised learning to improve model performance. We also propose a promising data selection method to filter un-transcribed data. Then we focus on knowledge transfer among DNN models with multilingual features derived from CNN+DNN, LSTM, VGG, CTC and attention models. We show that a student model equipped with better input features not only learns better from the teacher's labels, but also outperforms the teacher. Further experiments suggest that by learning from each other, the original ensemble of various models is able to evolve into a new ensemble with even better combined performance."}
{"_id": "0862940a255d980d46ef041ab20f153276f96214", "title": "3D Object Representations for Fine-Grained Categorization", "text": "While 3D object representations are being revived in the context of multi-view object class detection and scene understanding, they have not yet attained wide-spread use in fine-grained categorization. State-of-the-art approaches achieve remarkable performance when training data is plentiful, but they are typically tied to flat, 2D representations that model objects as a collection of unconnected views, limiting their ability to generalize across viewpoints. In this paper, we therefore lift two state-of-the-art 2D object representations to 3D, on the level of both local feature appearance and location. In extensive experiments on existing and newly proposed datasets, we show our 3D object representations outperform their state-of-the-art 2D counterparts for fine-grained categorization and demonstrate their efficacy for estimating 3D geometry from images via ultra-wide baseline matching and 3D reconstruction."}
{"_id": "ed4eff77499cc5ab1ffe24a3a2dc35642a518704", "title": "Multiagent-Based Distributed-Energy-Resource Management for Intelligent Microgrids", "text": "Microgrid is a combination of distributed generators, storage systems, and controllable loads connected to low-voltage network that can operate either in grid-connected or in island mode. High penetration of power at distribution level creates such multiple microgrids. This paper proposes a two-level architecture for distributed-energy-resource management for multiple microgrids using multiagent systems. In order to match the buyers and sellers in the energy market, symmetrical assignment problem based on nai\u0301ve auction algorithm is used. The developed mechanism allows the pool members such as generation agents, load agents, auction agents, grid agents, and storage agents to participate in market. Three different scenarios are identified based on the supply-demand mismatch among the participating microgrids. At the end of this paper, two case studies are presented with two and four interconnected microgrids participating in the market. Simulation results clearly indicate that the agent-based management is effective in resource management among multiple microgrids economically and profitably."}
{"_id": "8fd21d3f865dc07953dc9836ecd1ac3dbf8193c6", "title": "Prediction of human emergency behavior and their mobility following large-scale disaster", "text": "The frequency and intensity of natural disasters has significantly increased over the past decades and this trend is predicted to continue. Facing these possible and unexpected disasters, accurately predicting human emergency behavior and their mobility will become the critical issue for planning effective humanitarian relief, disaster management, and long-term societal reconstruction. In this paper, we build up a large human mobility database (GPS records of 1.6 million users over one year) and several different datasets to capture and analyze human emergency behavior and their mobility following the Great East Japan Earthquake and Fukushima nuclear accident. Based on our empirical analysis through these data, we find that human behavior and their mobility following large-scale disaster sometimes correlate with their mobility patterns during normal times, and are also highly impacted by their social relationship, intensity of disaster, damage level, government appointed shelters, news reporting, large population flow and etc. On the basis of these findings, we develop a model of human behavior that takes into account these factors for accurately predicting human emergency behavior and their mobility following large-scale disaster. The experimental results and validations demonstrate the efficiency of our behavior model, and suggest that human behavior and their movements during disasters may be significantly more predictable than previously thought."}
{"_id": "9cbbb518b2d607cd3a8081523550d89623490cd6", "title": "The beauty of capturing faces: Rating the quality of digital portraits", "text": "Digital portrait photographs are everywhere, and while the number of face pictures keeps growing, not much work has been done to on automatic portrait beauty assessment. In this paper, we design a specific framework to automatically evaluate the beauty of digital portraits. To this end, we procure a large dataset of face images annotated not only with aesthetic scores but also with information about the traits of the subject portrayed. We design a set of visual features based on portrait photography literature, and extensively analyze their relation with portrait beauty, exposing interesting findings about what makes a portrait beautiful. We find that the beauty of a portrait is linked to its artistic value, and independent from age, race and gender of the subject. We also show that a classifier trained with our features to separate beautiful portraits from non-beautiful portraits outperforms generic aesthetic classifiers."}
{"_id": "b7b58efc195e2c6e9c3c83eb52b25fabece7e37b", "title": "Universal construction based on 3D printing electric motors: Steps towards self-replicating robots to transform space exploration", "text": "Through a recent confluence of technological capacities, self-replicating robots have become a potentially nearterm rather than speculative technology. In a practical sense, self-replicating robots can never become obsolete \u2014 the first self- replicating robots will spawn all future generations of robots, subject to deliberate upgrading and/or evolutionary change. Furthermore, this technology promises to revolutionise space exploration by bypassing the apparently-insurmountable problem of high launch costs. We present recent efforts in 3D printing the key robotic components required for any such self- replicating machine."}
{"_id": "dbe1ba5051222f0c5dafd8d07b6cfa164f002084", "title": "Millimeter-Wave MultiBeam Aperture-Coupled Magnetoelectric Dipole Array With Planar Substrate Integrated Beamforming Network for 5G Applications", "text": "A modified topology of the 2-D multibeam antenna array fed by a passive beamforming network (BFN) is proposed by introducing two sets of vertical interconnections into the conventional array configuration. Different from the traditional design, the new array structure can be integrated into multilayered planar substrates conveniently, which has advantages of low loss characteristics, ease of realization, and low fabrication cost for millimeter-wave (mmWave) applications. A $4\\times4$ multibeam antenna array that can generate 16 beams is then designed, fabricated, and measured in order to demonstrate the correctness of the proposed topology. As the crucial components of the design, two kinds of aperture-coupled magnetoelectric dipole antenna elements with multilayered feed structures and three types of vertical interconnections consisting of the vertical substrate integrated waveguides are investigated, respectively. Wide bandwidth of 16.4%, stable radiation beams, and gain up to 14.7 dBi are achieved by the fabricated prototype. The proposed array configuration provides a new mean to implement the relatively large size 2-D multibeam antenna arrays with planar passive BFNs, which would be attractive for future mmWave wireless systems used for 5G communications and other applications."}
{"_id": "5b6a17327082f2147a58ec63720f25b138c67201", "title": "A framework for improved video text detection and recognition", "text": "Text displayed in a video is an essential part for the high-level semantic information of the video content. Therefore, video text can be used as a valuable source for automated video indexing in digital video libraries. In this paper, we propose a workflow for video text detection and recognition. In the text detection stage, we have developed a fast localization-verification scheme, in which an edge-based multi-scale text detector first identifies potential text candidates with high recall rate. Then, detected candidate text lines are refined by using an image entropy-based filter. Finally, Stroke Width Transform (SWT)- and Support Vector Machine (SVM)-based verification procedures are applied to eliminate the false alarms. For text recognition, we have developed a novel skeleton-based binarization method in order to separate text from complex backgrounds to make it processible for standard OCR (Optical Character Recognition) software. Operability and accuracy of proposed text detection and binarization methods have been evaluated by using publicly available test data sets."}
{"_id": "f4eec79ae070554fae0a10dfd4dcd4bdd5b087a1", "title": "Road-marking Analysis for Autonomous Vehicle Guidance", "text": "Driving an automobile autonomously on rural roads requires knowledge about the geometry of the road. Furthermore, knowledge about the meaning of each lane of the road is needed in order to decide which lane should be taken and if the vehicle can do a lane change. This paper addresses the problem of extracting additional information about lanes. The information is extracted from the types of road-markings. The type of lane border markings is estimated in order to find out if a lane change is allowed. Arrows, which are painted on the road, are extracted and classified in order to determine the meaning of a lane such as a turn off lane."}
{"_id": "40a70ac617507b9b8fcbd4562b435985557c926c", "title": "Clustering with Lower Bound on Similarity", "text": "We propose a new method, called SimClus, for clustering with lower bound on similarity. Instead of accepting k the number of clusters to find, the alternative similarity-based approach imposes a lower bound on the similarity between an object and its corresponding cluster representative (with one representative per cluster). SimClus achieves a O(log n) approximation bound on the number of clusters, whereas for the best previous algorithm the bound can be as poor as O(n). Experiments on real and synthetic datasets show that our algorithm produces more than 40% fewer representative objects, yet offers the same or better clustering quality. We also propose a dynamic variant of the algorithm, which can be effectively used in an on-line setting."}
{"_id": "84421b054a3fc358e606275745223f49e24316b3", "title": "The long history of gaming in military training", "text": "There is a long history of the dual-use of games in both the military and the entertainment applications. This has taken the form of sand tables, miniatures, board games, and computer games. The current tension between entertainment and military applications of games is just the return of similar concerns that surrounded the gaming tools and technologies of previous generations. Dynamic representations of the physical world are interesting and useful tools in a number of fields, to include the military, city planning, architecture, education, and entertainment. Computer games are tools that allow all of these audiences to accomplish similar goals."}
{"_id": "0bfc3626485953e2d3f87854a00a50f88c62269d", "title": "A Tractable Approach to Coverage and Rate in Cellular Networks", "text": "Cellular networks are usually modeled by placing the base stations on a grid, with mobile users either randomly scattered or placed deterministically. These models have been used extensively but suffer from being both highly idealized and not very tractable, so complex system-level simulations are used to evaluate coverage/outage probability and rate. More tractable models have long been desirable. We develop new general models for the multi-cell signal-to-interference-plus-noise ratio (SINR) using stochastic geometry. Under very general assumptions, the resulting expressions for the downlink SINR CCDF (equivalent to the coverage probability) involve quickly computable integrals, and in some practical special cases can be simplified to common integrals (e.g., the Q-function) or even to simple closed-form expressions. We also derive the mean rate, and then the coverage gain (and mean rate loss) from static frequency reuse. We compare our coverage predictions to the grid model and an actual base station deployment, and observe that the proposed model is pessimistic (a lower bound on coverage) whereas the grid model is optimistic, and that both are about equally accurate. In addition to being more tractable, the proposed model may better capture the increasingly opportunistic and dense placement of base stations in future networks."}
{"_id": "0fdfc3fc7d89b5acc2e8c83ee62436d394eb8040", "title": "Channel Estimation and Hybrid Precoding for Millimeter Wave Cellular Systems", "text": "Millimeter wave (mmWave) cellular systems will enable gigabit-per-second data rates thanks to the large bandwidth available at mmWave frequencies. To realize sufficient link margin, mmWave systems will employ directional beamforming with large antenna arrays at both the transmitter and receiver. Due to the high cost and power consumption of gigasample mixed-signal devices, mmWave precoding will likely be divided among the analog and digital domains. The large number of antennas and the presence of analog beamforming requires the development of mmWave-specific channel estimation and precoding algorithms. This paper develops an adaptive algorithm to estimate the mmWave channel parameters that exploits the poor scattering nature of the channel. To enable the efficient operation of this algorithm, a novel hierarchical multi-resolution codebook is designed to construct training beamforming vectors with different beamwidths. For single-path channels, an upper bound on the estimation error probability using the proposed algorithm is derived, and some insights into the efficient allocation of the training power among the adaptive stages of the algorithm are obtained. The adaptive channel estimation algorithm is then extended to the multi-path case relying on the sparse nature of the channel. Using the estimated channel, this paper proposes a new hybrid analog/digital precoding algorithm that overcomes the hardware constraints on the analog-only beamforming, and approaches the performance of digital solutions. Simulation results show that the proposed low-complexity channel estimation algorithm achieves comparable precoding gains compared to exhaustive channel training algorithms. The results illustrate that the proposed channel estimation and precoding algorithms can approach the coverage probability achieved by perfect channel knowledge even in the presence of interference."}
{"_id": "261bfaf79635c455c7ed8bd2ddc6785fd9d9ced0", "title": "Spatially Sparse Precoding in Millimeter Wave MIMO Systems", "text": "Millimeter wave (mmWave) signals experience orders-of-magnitude more pathloss than the microwave signals currently used in most wireless applications and all cellular systems. MmWave systems must therefore leverage large antenna arrays, made possible by the decrease in wavelength, to combat pathloss with beamforming gain. Beamforming with multiple data streams, known as precoding, can be used to further improve mmWave spectral efficiency. Both beamforming and precoding are done digitally at baseband in traditional multi-antenna systems. The high cost and power consumption of mixed-signal devices in mmWave systems, however, make analog processing in the RF domain more attractive. This hardware limitation restricts the feasible set of precoders and combiners that can be applied by practical mmWave transceivers. In this paper, we consider transmit precoding and receiver combining in mmWave systems with large antenna arrays. We exploit the spatial structure of mmWave channels to formulate the precoding/combining problem as a sparse reconstruction problem. Using the principle of basis pursuit, we develop algorithms that accurately approximate optimal unconstrained precoders and combiners such that they can be implemented in low-cost RF hardware. We present numerical results on the performance of the proposed algorithms and show that they allow mmWave systems to approach their unconstrained performance limits, even when transceiver hardware constraints are considered."}
{"_id": "35312649f71523e2bb1ef0033388eb046f6f9f1b", "title": "An introduction to millimeter-wave mobile broadband systems", "text": "Almost all mobile communication systems today use spectrum in the range of 300 MHz-3 GHz. In this article, we reason why the wireless community should start looking at the 3-300 GHz spectrum for mobile broadband applications. We discuss propagation and device technology challenges associated with this band as well as its unique advantages for mobile communication. We introduce a millimeter-wave mobile broadband (MMB) system as a candidate next generation mobile communication system. We demonstrate the feasibility for MMB to achieve gigabit-per-second data rates at a distance up to 1 km in an urban mobile environment. A few key concepts in MMB network architecture such as the MMB base station grid, MMB interBS backhaul link, and a hybrid MMB + 4G system are described. We also discuss beamforming techniques and the frame structure of the MMB air interface."}
{"_id": "5ac38297a3c256c88a67597fa8f5d0c44d5957c8", "title": "Millimeter Wave Mobile Communications for 5 G Cellular : It Will Work !", "text": "The global bandwidth shortage facing wireless carriers has motivated the exploration of the underutilized millimeter wave (mm-wave) frequency spectrum for future broadband cellular communication networks. There is, however, little knowledge about cellular mm-wave propagation in densely populated indoor and outdoor environments. Obtaining this information is vital for the design and operation of future fifth generation cellular networks that use the mm-wave spectrum. In this paper, we present the motivation for new mm-wave cellular systems, methodology, and hardware for measurements and offer a variety of measurement results that show 28 and 38 GHz frequencies can be used when employing steerable directional antennas at base stations and mobile devices. INDEX TERMS 28GHz, 38GHz, millimeter wave propagation measurements, directional antennas, channel models, 5G, cellular, mobile communications, MIMO."}
{"_id": "c63f67d4df2c6800b34832afa17ec80c13706efb", "title": "Combined ivermectin and doxycycline treatment has microfilaricidal and adulticidal activity against Dirofilaria immitis in experimentally infected dogs.", "text": "There is still a pressing need for effective adulticide treatment for human and animal filarial infections. Like many filarial nematodes, Dirofilaria immitis, the causative agent of canine heartworm disease, harbours the bacterial endosymbiont Wolbachia, which has been shown to be essential for worm development, fecundity and survival. Here the authors report the effect of different treatment regimens in dogs experimentally infected with adult D. immitis on microfilariemia, antigenemia, worm recovery and Wolbachia content. Treatment with ivermectin (IVM; 6 microg/kg per os weekly) combined with doxycycline (DOXY; 10 mg/kg/day orally from Weeks 0-6, 10-12, 16-18, 22-26 and 28-34) resulted in a significantly faster decrease of circulating microfilariae and higher adulticidal activity compared with either IVM or DOXY alone. Quantitative PCR analysis of ftsZ (Wolbachia DNA) and 18S rDNA (nematode DNA) absolute copy numbers showed significant decreases in Wolbachia content compared with controls in worms recovered from DOXY-treated dogs that were not, however, associated with worm death. Worms from IVM/DOXY-treated dogs, on the other hand, had Wolbachia/nematode DNA ratios similar to those of control worms, suggesting a loss of both Wolbachia and nematode DNA as indicated by absolute copy number values. Histology and transmission electron microscopy of worms recovered from the IVM/DOXY combination group showed complete loss of uterine content in females and immunohistochemistry for Wolbachia was negative. Results indicate that the combination of these two drugs causes adult worm death. This could have important implications for control of human and animal filarial infections."}
{"_id": "f0bcaa04ed5baea44bc22c7b2030c72a2774fccd", "title": "Landing Strategies in Honeybees and Applications to Uninhabited Airborne Vehicles", "text": "An application of insect visuomotor behavior to automatic control of landing is explored. Insects, being perhaps more reliant on image motion cues than mammals or higher vertebrates, are proving to be an excellent organism in which to investigate how information on optic flow is exploited to guide locomotion and navigation. We have observed how bees perform grazing landings on a flat surface and have deduced the algorithmic basis for the behavior. A smooth landing is achieved by a surprisingly simple and elegant strategy: image velocity is held constant as the surface is approached, thus automatically ensuring that flight speed is close to zero at touchdown. No explicit knowledge of flight speed or height above the ground is necessary. The feasibility of this landing strategy was tested by implementation in a robotic gantry. We also outline our current efforts at exploring the applicability of this and related techniques to the guidance of uninhabited airborne vehicles (UAVs). Aspects of the algorithm were tested on a small UAV using real imagery to control"}
{"_id": "fc25ec72a1ab526e61a443415b9e48a545827dcc", "title": "A New Approach for Resource Scheduling with Deep Reinforcement Learning", "text": "With the rapid development of deep learning, deep reinforcement learning (DRL) began to appear in the field of resource scheduling in recent years. Based on the previous research on DRL in the literature, we introduce online resource scheduling algorithm DeepRM2 and the offline resource scheduling algorithm DeepRM_Off. Compared with the state-of-the-art DRL algorithm DeepRM and heuristic algorithms, our proposed algorithms have faster convergence speed and better scheduling efficiency with regarding to average slowdown time, job completion time and rewards."}
{"_id": "13f1bde3d8ca8e9789803e2dd49d501204e99b25", "title": "COTS-based software development: Processes and open issues", "text": "The work described in this paper is an investigation of COTS-based software development within a particular NASA environment, with an emphasis on the processes used. Fifteen projects using a COTS-based approach were studied and their actual process was documented. This process is evaluated to identify essential differences in comparison to traditional software development. The main differences, and the activities for which projects require more guidance, are requirements definition and COTS selection, high level design, integration and testing. Starting from these empirical observations, a new process and set of guidelines for COTS-based development are developed and briefly presented."}
{"_id": "3ff66b5413b89823810b5860dd54b860bce0357b", "title": "UML CLASS DIAGRAM OR ENTITY RELATIONSHIP DIAGRAM? AN OBJECT-RELATIONAL CONCEPTUAL IMPEDANCE MISMATCH", "text": "It is now nearly 30 years since Peter Chen\u2019s watershed paper \u201cThe Entity-Relationship Model \u2013 towards a Unified View of Data\u201d. [1] The entity relationship model and variations and extensions to it have been taught in colleges and universities for many years. In his original paper Peter Chen looked at converting his new ER model to the then existing data structure diagrams for the Network model. In recent years there has been a tendency to use a Unified Modelling Language (UML) class diagram for conceptual modeling for relational databases, and several popular course text books use UML notation to some degree [2] [3]. However Object and Relational technology are based on different paradigms. In the paper we argue that the UML class diagram is more of a logical model (implementation specific). ER Diagrams on the other hand, are at a conceptual level of database design dealing with the main items and their relationships and not with implementation specific detail. UML focuses on OOAD (Object Oriented Analysis and Design) and is navigational and program dependent whereas the relational model is set based and exhibits data independence. The ER model provides a well-established set of mapping rules for mapping to a relational model. In this paper we look specifically at the areas which can cause problems for the novice database designer due to this conceptual mismatch of two different paradigms. Firstly, transferring the mapping of a weak entity from an Entity Relationship model to UML and secondly the representation of structural constraints between objects. We look at the mixture of notations which students mistakenly use when modeling. This is often the result of different notations being used on different courses throughout their degree. Several of the popular text books at the moment use either a variation of ER, UML, or both for teaching database modeling. At the moment if a student picks up a text book they could be faced with either; one of the many ER variations, UML, UML and a variation of ER both covered separately, or UML and ER merged together. We regard this problem as a conceptual impedance mismatch. This problem is documented in [21] who have produced a catalogue of impedance mismatch problems between object-relational and relational paradigms. We regard the problems of using UML class diagrams for relational database design as a conceptual impedance mismatch as the Entity Relationship model does not have the structures in the model to deal with Object Oriented concepts"}
{"_id": "6caea5cec7406f8314ad43788edaf48a51600e04", "title": "Controlling Physically Based Virtual Musical Instruments Using The Gloves", "text": "In this paper we propose an empirical method to develop mapping strategies between a gestural-based interface (the Gloves) and physically based sound synthesis models. An experiment was conducted to investigate which gestures listeners associate with sounds synthesised using physical models, corresponding to three categories of sound: sustained, iterative and impulsive. The results of the experiment show that listeners perform similar gestures when controlling sounds from the different categories. We used such gestures in order to create the mapping strategy between the Gloves and the physically based synthesis engine."}
{"_id": "05eef8cf5eed78407e2217b5652ea83f5753846d", "title": "Conservative treatment of fractures of the thoracolumbar spine", "text": "We reviewed 48 patients with thoracolumbar fractures treated conservatively between 1988 and 1999. The average follow-up was 77.5 (31\u2013137) months and average patient age (23 women, 25 men) was 46 (18\u201376) years. Twenty-nine patients suffered a fall from a height and 13 patients were injured in traffic accidents. Thirty-two patients had compression-type fractures and 16 burst-type fractures. There were no neurological deficits. Twenty-nine patients were treated by orthosis, 13 by body cast and six by bed rest. In addition to pain and functional scoring, we measured a number of radiographic parameters at the time of admission and at latest follow-up and compared the values. In patients with compression fractures there were significant changes in scoliosis angle and wedging index (p<0.05). The mean pain score was 1.66 and mean functional score 1.03. In patients with burst fractures, vertebral index, wedging index and height loss increased after treatment (p<0.05). The mean pain score was 1.26 and functional score 0.93. Compression fractures with kyphosis angle <30\u00b0 are supposed to be stable and can be treated conservatively. If the kyphosis angle is more than 30\u00b0, magnetic resonance imaging (MRI) should be performed, and if the posterior ligamentous complex is damaged, surgery should be considered. In burst fractures, MRI should always be performed and conservative treatment should only be considered if there is no neurological deficit and the ligaments are intact. Nous avons examin\u00e9 48 malades avec une fracture dorsolombaire trait\u00e9e d\u2019une mani\u00e8re conservatrice entre 1988 et 1999. Le suivi moyen \u00e9tait de 77,5 (31\u2013137) mois et l\u2019\u00e2ge moyen des malades (23 femmes, 25 hommes) \u00e9tait de 46 (18\u201376) ans. Vingt-neuf malades avaient fait une chute de grande hauteur et 13 malades ont \u00e9t\u00e9 bless\u00e9s dans un accident de la circulation. Trente-deux malades avaient des fractures du type compression et 16 fractures du type explosion. Il n\u2019y avait pas de d\u00e9ficit neurologique. Vingt-neuf malades ont \u00e9t\u00e9 trait\u00e9s par orth\u00e8se, 13 par corset pl\u00e2tr\u00e9 et six par repos au lit. En plus des scores douleur et des scores fonctionnels, nous avons compar\u00e9 plusieurs param\u00e8tres radiographiques entre l\u2019admission et le plus long recul. Chez les malades avec fracture-compression il y avait des changements significatifs dans l\u2019angle de la scoliose et l\u2019index angulaire (p<0.05). Le score moyen de la douleur \u00e9tait 1.66 et le score fonctionnel moyen 1.03. Chez les malades avec fracture-explosion, l\u2019index vert\u00e9bral, l\u2019index angulaire, et la perte de hauteur ont augment\u00e9 apr\u00e8s traitement (p<0.05). Le score moyen de la douleur \u00e9tait 1.26 et le score fonctionnel 0.93. Les fractures de type de compression avec angle de cyphose <30\u00b0 sont suppos\u00e9s \u00eatre stables et sont trait\u00e9s d\u2019une mani\u00e8re conservatrice. Si l\u2019angle de la cyphose est de plus de 30\u00b0, une IRM devrait \u00eatre ex\u00e9cut\u00e9e et si le complexe ligamentaire post\u00e9rieur est endommag\u00e9, la chirurgie devrait \u00eatre envisag\u00e9e. 
Dans les fractures du type de explosion, une IMR devrait toujours \u00eatre ex\u00e9cut\u00e9e et le traitement conservateur devrait \u00eatre retenu seulement s\u2019il n\u2019y a aucun d\u00e9ficit neurologique et si les ligaments sont intacts."}
{"_id": "0f234164f320e113f941f4378d52b682c6990a02", "title": "The role of rumination in depressive disorders and mixed anxiety/depressive symptoms.", "text": "Several studies have shown that people who engage in ruminative responses to depressive symptoms have higher levels of depressive symptoms over time, after accounting for baseline levels of depressive symptoms. The analyses reported here showed that rumination also predicted depressive disorders, including new onsets of depressive episodes. Rumination predicted chronicity of depressive disorders before accounting for the effects of baseline depressive symptoms but not after accounting for the effects of baseline depressive symptoms. Rumination also predicted anxiety symptoms and may be particularly characteristic of people with mixed anxiety/depressive symptoms."}
{"_id": "5def2ed833094254e9620c6fa0da3d3db9ff3f55", "title": "A direct approach to estimating surfaces in tomographic data", "text": "Under ideal circumstances, the inverse of the radon transform is computable, and sequences of measured projections are sufficient to obtain accurate estimates of volume densities. In situations where the sinogram data is incomplete, the radon transform is noninvertable, and attempts to reconstruct greyscale density values result in reconstruction artifacts that can undermine the effectiveness of subsequent processing. This paper presents a direct approach to the segmentation of incomplete tomographic data. The strategy is to impose a fairly simple model on the data, and treat segmentation as a problem of estimating the interface between two substances of somewhat homogeneous density. The segmentation is achieved by simultaneously deforming a surface model and updating density parameters in order to achieve a best fit between the projected model and the measured sinograms. The deformation is implemented with level-set surface models, calculated at the resolution of the input data. Relative to previous work, this paper makes several contributions. First is a proper derivation of the deformation of surface models that move according to a gradient descent on a likelihood measure. We also present a series of computational innovations that make this direct surface-fitting approach feasible with state-of-the-art computers. Another contribution is the demonstration of the effectiveness of this approach on under-constrained tomographic problems, using both simulated and real datasets."}
{"_id": "d3143fda4f262bb1e63cdce86634cf4bfc9ac56e", "title": "Adaptive Simulation-based Training of AI Decision-makers using Bayesian Optimization", "text": "This work studies how an AI-controlled dog-fighting agent with tunable decisionmaking parameters can learn to optimize performance against an intelligent adversary, as measured by a stochastic objective function evaluated on simulated combat engagements. Gaussian process Bayesian optimization (GPBO) techniques are developed to automatically learn global Gaussian Process (GP) surrogate models, which provide statistical performance predictions in both explored and unexplored areas of the parameter space. This allows a learning engine to sample full-combat simulations at parameter values that are most likely to optimize performance and also provide highly informative data points for improving future predictions. However, standard GPBO methods do not provide a reliable surrogate model for the highly volatile objective functions found in aerial combat, and thus do not reliably identify global maxima. These issues are addressed by novel Repeat Sampling (RS) and Hybrid Repeat/Multipoint Sampling (HRMS) techniques. Simulation studies show that HRMS improves the accuracy of GP surrogate models, allowing AI decision-makers to more accurately predict performance and efficiently tune parameters."}
{"_id": "1196336c26cbe0d7ecaad355b59d7c199a94776f", "title": "Folded cascode OTA design for wide band applications", "text": "This paper deals with design and optimization of a folded cascode operational transconductance amplifier. First, a detailed description of an optimum OTA topology is done in order to optimize MOS transistor sizing. Second, the design of folded cascode OTA, which works for frequencies that lead to a base band circuit design for RF application, is based on transistor sizing methodology. Third, folded cascode OTAs generally find several applications that are well developed. Simulation results are performed using SPICE software and BSIM3V3 model for CMOS 0.35mum process, show that the designed folded cascode OTA has a 85dB DC gain and provides a gain bandwidth product of around 332MHz"}
{"_id": "302e631bf51560494d09085fa5a7ea56458c036a", "title": "On visual similarity based 2D drawing retrieval", "text": "A large amount of 2D drawings have been produced in engineering fields. To reuse and share the available drawings efficiently, we propose two methods in this paper, namely 2.5D spherical harmonics transformation and 2D shape histogram, to retrieve 2D drawings by measuring their shape similarity. The first approach represents a drawing as a spherical function by transforming it from a 2D space into a 3D space. Then a fast spherical harmonics transformation is employed to get a rotation invariant descriptor. The second statistics-based approach represents the shape of a 2D drawing using a distance distribution between two randomly sampled points. To allow users to interactively emphasize certain local shapes that they are interested in, we have adopted a flexible sampling strategy by specifying a bias sampling density upon these local shapes. The two proposed methods have many valuable properties, including transform invariance, efficiency, and robustness. In addition, their insensitivity to noise allows for the user\u2019s causal input, thus supporting a freehand sketch-based retrieval user interface. Experiments show that a better performance can be achieved by combining them together using weights. q 2005 Elsevier Ltd. All rights reserved."}
{"_id": "ac993f7c54e57a51bd912492f1494dd5e1320916", "title": "Paraphrasing for Style", "text": "We present initial investigation into the task of paraphrasing language while targeting a particular writing style. The plays of William Shakespeare and their modern translations are used as a testbed for evaluating paraphrase systems targeting a specific style of writing. We show that even with a relatively small amount of parallel training data, it is possible to learn paraphrase models which capture stylistic phenomena, and these models outperform baselines based on dictionaries and out-of-domain parallel text. In addition we present an initial investigation into automatic evaluation metrics for paraphrasing writing style. To the best of our knowledge this is the first work to investigate the task of paraphrasing text with the goal of targeting a specific style of writing."}
{"_id": "6b125801726b464b9fcc5095e1b31527370ae0fb", "title": "Measurements and Modelling of Base Station Power Consumption under Real Traffic Loads \u2020", "text": "Base stations represent the main contributor to the energy consumption of a mobile cellular network. Since traffic load in mobile networks significantly varies during a working or weekend day, it is important to quantify the influence of these variations on the base station power consumption. Therefore, this paper investigates changes in the instantaneous power consumption of GSM (Global System for Mobile Communications) and UMTS (Universal Mobile Telecommunications System) base stations according to their respective traffic load. The real data in terms of the power consumption and traffic load have been obtained from continuous measurements performed on a fully operated base station site. Measurements show the existence of a direct relationship between base station traffic load and power consumption. According to this relationship, we develop a linear power consumption model for base stations of both technologies. This paper also gives an overview of the most important concepts which are being proposed to make cellular networks more energy-efficient."}
{"_id": "94c440feb507ada7e20ac43023758ffe3ca59ea9", "title": "Routing questions to appropriate answerers in community question answering services", "text": "Community Question Answering (CQA) service provides a platform for increasing number of users to ask and answer for their own needs but unanswered questions still exist within a fixed period. To address this, the paper aims to route questions to the right answerers who have a top rank in accordance of their previous answering performance. In order to rank the answerers, we propose a framework called Question Routing (QR) which consists of four phases: (1) performance profiling, (2) expertise estimation, (3) availability estimation, and (4) answerer ranking. Applying the framework, we conduct experiments with Yahoo! Answers dataset and the results demonstrate that on average each of 1,713 testing questions obtains at least one answer if it is routed to the top 20 ranked answerers."}
{"_id": "54bef8bca4bef4a5cb597c11b9389496f40df35c", "title": "Punctuation Prediction with Transition-based Parsing", "text": "Punctuations are not available in automatic speech recognition outputs, which could create barriers to many subsequent text processing tasks. This paper proposes a novel method to predict punctuation symbols for the stream of words in transcribed speech texts. Our method jointly performs parsing and punctuation prediction by integrating a rich set of syntactic features when processing words from left to right. It can exploit a global view to capture long-range dependencies for punctuation prediction with linear complexity. The experimental results on the test data sets of IWSLT and TDT4 show that our method can achieve high-level performance in punctuation prediction over the stream of words in transcribed speech text."}
{"_id": "1946aec5de1d5a57793a239ee3119ea824a232fd", "title": "A circularly polarized planar antenna modified for passive UHF RFID", "text": "The majority of RFID tags are linearly polarized dipole antennas, but a few use a planar, dual-dipole antenna that facilitates circular polarization, but requires a three-terminal IC. In this paper, we present a novel way to achieve circular polarization with a planar antenna using a two-terminal IC. We present an intuitive methodology for design, and perform experiments that validate circular polarization. The results show that the tag exhibits strong circular polarization, but the precise axial ratio of the tag remains uncertain due to lack of precision in the experimental system."}
{"_id": "82e9a883f47380ce2c89ecbc57597efbdd120be1", "title": "The art of UHF RFID antenna design: impedance-matching and size-reduction techniques", "text": "Radio-frequency identification technology, based on the reader/tag paradigm, is quickly permeating several aspects of everyday life. The electromagnetic research mainly concerns the design of tag antennas having high efficiency and small size, and suited to complex impedance matching to the embedded electronics. Starting from the available but fragmented open literature, this paper presents a homogeneous survey of relevant methodologies for the design of UHF passive tag antennas. Particular care is taken to illustrate, within a common framework, the basic concepts of the most-used design layouts. The design techniques are illustrated by means of many noncommercial examples."}
{"_id": "f8e47d84b46bbce43e57e6640d848d23466df570", "title": "Novel Compact Circularly Polarized Square Microstrip Antenna", "text": "A novel compact circular-polarization (CP) operation of the square microstrip antenna with four slits and a pair of truncated corners is proposed and investigated. Experimental results show that the proposed compact CP design can have an antenna-size reduction of about 36% as compared to the conventional corner-truncated square microstrip antenna at a given operating frequency. Also, the required size of the truncated corners for CP operation is much greater than that for the conventional CP design using a simple square microstrip patch, providing a relaxed manufacturing tolerance for the proposed compact CP design. Details of the experimental results are presented and discussed."}
{"_id": "40073b65a0ce71cf815bb821214a9e7cebd72cc9", "title": "The history of RFID", "text": "Radio frequency identification (RFID) is an integral part of our life, which increases productivity and convenience. It is the term coined for short-range radio technology used to communicate mainly digital information between a stationary location and a movable object or between movable objects. This RFID system uses the principle of modulated backscatter where it can transfer the data from the tag to the reader. The tag generally reads its internal memory of stored data and changes the loading on the tag antenna in a coded manner corresponding to the stored data. RFID is a technology, which spans systems engineering, software development, encryption etc., and thus there are many engineers involved in the development and application of RFID and at present the shortage of technical and business people trained in RFID is hampering the growth of the industry."}
{"_id": "7ec3bb3ee08c08490183ca327724fbfd66407b75", "title": "Software Cost Estimation Meets Software Diversity", "text": "The previous goal of having a one-size-fits-all software cost (and schedule) estimation model is no longer achievable. Sources of wide variation in the nature of software development and evolution processes, products, properties, and personnel (PPPPs) require a variety of estimation models and methods best fitting their situations. This Technical Briefing will provide a short history of pattern-breaking changes in software estimation methods, a summary of the sources of variation in software PPPPs and their estimation implications, a summary of the types of estimation methods being widely used or emerging, a summary of the best estimation-types for the various PPPP-types, and a process for guiding an organization's choices of estimation methods as their PPPP-types evolve."}
{"_id": "e36fe3227c8606fe0e0ce5a50edd6fdf38dc3cfb", "title": "Balance equilibrium manifold and control of rider-bikebot systems", "text": "Postural balance control of machines is one of the most important human motor skills. We present a balance equilibrium manifold (BEM) concept to study how a human rider balances a bicycle-like robot (i.e., bikebot) while maintaining tracking a desired trajectory. The BEM is built on the external-internal convertible (EIC) structure of the rider-bikebot dynamics to capture the balance properties of the rider-bikebot systems. We analyze the contribution of the upper-body movements and the steering inputs to the task of balance control of the systems. The results reveal that steering actuation is more effective than upper-body movement for balance task. A performance metric is also introduced to quantify the balance motor skills using the BEM. Extensive experiments are conducted to validate and demonstrate the analyses and the proposed balance skill metrics."}
{"_id": "c38dae60aafd1f8e502b8e3b17dac5052b29c810", "title": "The Adaptive Markets Hypothesis Market efficiency from an evolutionary perspective", "text": "30TH ANNIVERSARY ISSUE 2004 THE JOURNAL OF PORTFOLIO MANAGEMENT 15 The 30th anniversary of The Journal of Portfolio Management is a milestone in the rich intellectual history of modern finance, firmly establishing the relevance of quantitative models and scientific inquiry in the practice of financial management. One of the most enduring ideas from this intellectual history is the Efficient Markets Hypothesis (EMH), a deceptively simple notion that has become a lightning rod for its disciples and the proponents of behavioral economics and finance. In its purest form, the EMH obviates active portfolio management, calling into question the very motivation for portfolio research. It is only fitting that we revisit this groundbreaking idea after three very successful decades of this Journal. In this article, I review the current state of the controversy surrounding the EMH and propose a new perspective that reconciles the two opposing schools of thought. The proposed reconciliation, which I call the Adaptive Markets Hypothesis (AMH), is based on an evolutionary approach to economic interactions, as well as some recent research in the cognitive neurosciences that has been transforming and revitalizing the intersection of psychology and economics. Although some of these ideas have not yet been fully articulated within a rigorous quantitative framework, long time students of the EMH and seasoned practitioners will no doubt recognize immediately the possibilities generated by this new perspective. Only time will tell whether its potential will be fulfilled. I begin with a brief review of the classic version of the EMH, and then summarize the most significant criticisms leveled against it by psychologists and behavioral economists. I argue that the sources of this controversy can ANDREW W. LO is Harris & Harris Group Professor at the MIT Sloan School of Management, and chief scientific officer at AlphaSimplex Group, LLC, in Cambridge, MA. The Adaptive Markets Hypothesis"}
{"_id": "d258b75043376e1809bda00487023c4025e1c7a9", "title": "The distance function effect on k-nearest neighbor classification for medical datasets", "text": "INTRODUCTION\nK-nearest neighbor (k-NN) classification is conventional non-parametric classifier, which has been used as the baseline classifier in many pattern classification problems. It is based on measuring the distances between the test data and each of the training data to decide the final classification output.\n\n\nCASE DESCRIPTION\nSince the Euclidean distance function is the most widely used distance metric in k-NN, no study examines the classification performance of k-NN by different distance functions, especially for various medical domain problems. Therefore, the aim of this paper is to investigate whether the distance function can affect the k-NN performance over different medical datasets. Our experiments are based on three different types of medical datasets containing categorical, numerical, and mixed types of data and four different distance functions including Euclidean, cosine, Chi square, and Minkowsky are used during k-NN classification individually.\n\n\nDISCUSSION AND EVALUATION\nThe experimental results show that using the Chi square distance function is the best choice for the three different types of datasets. However, using the cosine and Euclidean (and Minkowsky) distance function perform the worst over the mixed type of datasets.\n\n\nCONCLUSIONS\nIn this paper, we demonstrate that the chosen distance function can affect the classification accuracy of the k-NN classifier. For the medical domain datasets including the categorical, numerical, and mixed types of data, K-NN based on the Chi square distance function performs the best."}
{"_id": "f647bc7aeea7d39ba66af00517b9c8822ea2a8d3", "title": "Application of Gradient Boosting through SAS Enterprise Miner TM 12 . 3 to Classify Human Activities", "text": "Using smart clothing with wearable medical sensors integrated to keep track of human health is now attracting many researchers. However, body movement caused by daily human activities inserts artificial noise into physiological data signals, which affects the output of a health monitoring/alert system. To overcome this problem, recognizing human activities, determining relationship between activities and physiological signals, and removing noise from the collected signals are essential steps. This paper focuses on the first step, which is human activity recognition. Our research shows that no other study used SAS\u00ae for classifying human activities. For this study, two data sets were collected from an open repository. Both data sets have 561 input variables and one nominal target variable with four levels. Principal component analysis along with other variable reduction and selection techniques were applied to reduce dimensionality in the input space. Several modeling techniques with different optimization parameters were used to classify human activity. The gradient boosting model was selected as the best model based on a test misclassification rate of 0.1233. That is, 87.67% of total events were classified correctly."}
{"_id": "a9061d3632ccc4f7c3fb82bec8cfbf4c2dc9962a", "title": "Bioavailability and bioefficacy of polyphenols in humans. I. Review of 97 bioavailability studies.", "text": "Polyphenols are abundant micronutrients in our diet, and evidence for their role in the prevention of degenerative diseases is emerging. Bioavailability differs greatly from one polyphenol to another, so that the most abundant polyphenols in our diet are not necessarily those leading to the highest concentrations of active metabolites in target tissues. Mean values for the maximal plasma concentration, the time to reach the maximal plasma concentration, the area under the plasma concentration-time curve, the elimination half-life, and the relative urinary excretion were calculated for 18 major polyphenols. We used data from 97 studies that investigated the kinetics and extent of polyphenol absorption among adults, after ingestion of a single dose of polyphenol provided as pure compound, plant extract, or whole food/beverage. The metabolites present in blood, resulting from digestive and hepatic activity, usually differ from the native compounds. The nature of the known metabolites is described when data are available. The plasma concentrations of total metabolites ranged from 0 to 4 mumol/L with an intake of 50 mg aglycone equivalents, and the relative urinary excretion ranged from 0.3% to 43% of the ingested dose, depending on the polyphenol. Gallic acid and isoflavones are the most well-absorbed polyphenols, followed by catechins, flavanones, and quercetin glucosides, but with different kinetics. The least well-absorbed polyphenols are the proanthocyanidins, the galloylated tea catechins, and the anthocyanins. Data are still too limited for assessment of hydroxycinnamic acids and other polyphenols. These data may be useful for the design and interpretation of intervention studies investigating the health effects of polyphenols."}
{"_id": "7a3bc1b39e914ff10386ce0e6db24b64e0e2378c", "title": "Operative strategy in postero-medial fracture-dislocation of the proximal tibia.", "text": "OBJECTIVE\nIn 1981, Moore introduced a new classification for dislocation-type fractures caused by high-energy mechanisms. The most common medial Moore-type fractures are entire condyle fractures with the avulsion of the median eminence (tibial anterior cruciate ligament (ACL) insertion). They are usually associated with a posterolateral depression of the tibial plateau and an injury of the lateral menisco-tibial capsule. This uniform injury of the knee is increasingly observed in the recent years after skiing injuries due to the high-speed carving technique. This uprising technique uses shorter skis with more sidecut allowing much higher curve speeds and increases the forces on the knee joint. The aim of this study was to describe the injury pattern, our developed operative approach for reconstruction and outcome.\n\n\nMETHODS\nA total of 28 patients with 29 postero-medial fracture dislocation of the proximal tibia from 2001 until 2009 were analysed. Clinical and radiological follow-up was performed after 4 years on average (1 year in minimum). Evaluation criteria included the Lysholm score for everyday knee function and the Tegner score evaluating the activity level. All fractures were stabilised post primarily. The surgical main approach was medial. First, the exposure of the entire medial condyle fracture was performed following the fracture line to the articular border. The posterolateral impaction was addressed directly through the main fracture gap from anteromedial to posterolateral. Small plateau fragments were removed, larger fragments reduced and preliminarily fixed with separate K-wire(s). The postero-medial part of the condyle was then prepared for main reduction and application of a buttress T-plate in a postero-medial position, preserving the pes anserinus and medial collateral ligament. In addition, a parapatellar medial mini-arthrotomy through the same main medial approach was performed. Through this mini-arthrotomy, the avulsed anterior eminence with attached distal ACL is retained by a transosseous suture back to the tibia.\n\n\nRESULTS\nWe could evaluate 26 patients (93%); two patients were lost to follow-up due to foreign residence. Median age was 51 years (20-77 years). The fractures were treated post primarily at an average of 4 days; in 18 cases a two-staged procedure with initial knee-spanning external fixator was used. All fractures healed without secondary displacement or infection. As many as 25 patients showed none to moderate osteoarthritis after a median of 4 years. One patient showed a severe osteoarthritis after 8 years. All patients judge the clinical result as good to excellent. The Lysholm score reached 95 (75-100) of maximal 100 points and the Tegner activity score 5 (3-7) of maximal 10 points (competitive sports). The patients achieved a median flexion of 135\u00b0 (100-145\u00b0).\n\n\nCONCLUSION\nIn our view, it is crucial to recognise the different components of the injury in the typical postero-medial fracture dislocation of the proximal tibia. The described larger medial approach for this type of medial fracture dislocation allows repairing most of the injured aspects of the tibial head, namely the medial condyle with postero-medial buttressing, the distal insertion of the ACL and the posterolateral impaction of the plateau."}
{"_id": "d47b792804f86676579f5021d5cf1a234b5b1edf", "title": "Quantum Algorithm Implementations for Beginners", "text": "Patrick J. Coles, Stephan Eidenbenz,\u2217 Scott Pakin, Adetokunbo Adedoyin, John Ambrosiano, Petr Anisimov, William Casper, Gopinath Chennupati, Carleton Coffrin, Hristo Djidjev, David Gunter, Satish Karra, Nathan Lemons, Shizeng Lin, Andrey Lokhov, Alexander Malyzhenkov, David Mascarenas, Susan Mniszewski, Balu Nadiga, Dan O\u2019Malley, Diane Oyen, Lakshman Prasad, Randy Roberts, Phil Romero, Nandakishore Santhi, Nikolai Sinitsyn, Pieter Swart, Marc Vuffray, Jim Wendelberger, Boram Yoon, Richard Zamora, and Wei Zhu Los Alamos National Laboratory, Los Alamos, New Mexico, USA"}
{"_id": "d49e310c8c0c7925556bc02e2b7aa82149625d6c", "title": "Alana: Social Dialogue using an Ensemble Model and a Ranker trained on User Feedback", "text": "We describe our Alexa prize system (called \u2018Alana\u2019) which consists of an ensemble of bots, combining rule-based and machine learning systems, and using a contextual ranking mechanism to choose system responses. This paper reports on the version of the system developed and evaluated in the semi-finals of the competition (i.e. up to 15 August 2017), but not on subsequent enhancements. The ranker for this system was trained on real user feedback received during the competition, where we address the problem of how to train on the noisy and sparse feedback obtained during the competition. In order to avoid initial problems of inappropriate and boring utterances coming from big datasets such as Reddit and Twitter, we later focussed on \u2018clean\u2019 data sources such as news and facts. We report on experiments with different ranking functions and versions of our NewsBot. We find that a multiturn news strategy is beneficial, and that a ranker trained on the ratings feedback from users is also effective. Our system continuously improved using the data gathered over the course over the competition (1 July \u2013 15 August) . Our final user score (averaged user rating over the whole semi-finals period) was 3.12, and we achieved 3.3 for the averaged user rating over the last week of the semi-finals (8-15 August 2017). We were also able to achieve long dialogues (average 10.7 turns) during the competition period. In subsequent weeks, after the end of the semi-final competition, we have achieved our highest scores of 3.52 (daily average, 18th October), 3.45 (weekly average on 23 and 24 October), and average dialogue lengths of 14.6 turns (1 October), and median dialogue length of 2.25 minutes (average for 7 days on 10th October)."}
{"_id": "bc9c8831183b9b4e6cd78cefed7ec04d8096499f", "title": "CBAM: Convolutional Block Attention Module", "text": "We propose Convolutional Block Attention Module (CBAM), a simple yet effective attention module for feed-forward convolutional neural networks. Given an intermediate feature map, our module sequentially infers attention maps along two separate dimensions, channel and spatial, then the attention maps are multiplied to the input feature map for adaptive feature refinement. Because CBAM is a lightweight and general module, it can be integrated into any CNN architectures seamlessly with negligible overheads and is end-to-end trainable along with base CNNs. We validate our CBAM through extensive experiments on ImageNet-1K, MS COCO detection, and VOC 2007 detection datasets. Our experiments show consistent improvements in classification and detection performances with various models, demonstrating the wide applicability of CBAM. The code and models will be publicly available."}
{"_id": "476244bf49d3153a8b3c3fad4aac38848579a314", "title": "Use of assistance and therapy dogs for children with autism spectrum disorders: a critical review of the current evidence.", "text": "BACKGROUND\nAutism spectrum disorders (ASD) are characterized by deficits in social reciprocity and communication, and by unusually restricted, repetitive behaviors. Intervention strategies based on the exploitation of the emotional aspects of human-dog relationships hold the potential to overcome the difficulty of subjects with ASD to relate and interact effectively with others, targeting core symptoms of this disorder.\n\n\nMETHODS\nThis review summarizes the results of six published studies on the effects of brief interactions with dogs and the effects of introducing dogs in families with a child diagnosed with ASD, with an emphasis on social behaviors and language use. Furthermore, the possible mechanisms responsible for the beneficial effects observed are discussed.\n\n\nCONCLUSIONS\nAlthough the studies described here are encouraging, further research with better designs and using larger samples is needed to strengthen translation of such interventions to the clinic. In addition, potential applications of analyzing child-dog interactions are highlighted to screen for early signs of the disorder."}
{"_id": "e5adaf07807a2c0da8fc684fd31302070c436a14", "title": "Neurodegenerative Diseases Monitoring ( NDM ) main Challenges , Tendencies , and Enabling Big Data Technologies : A Survey", "text": "Evidence-based health monitoring has been recognized in the past few years as a very prominent solution to cope with continuous monitoring of chronic diseases such as neurodegenerative diseases for example: epilepsy. This has reduced the burden and cost for healthcare agencies and has led to efficient disease tracking, diagnosis, and intervention. Monitoring these diseases requires a long and continuous EEG signals recordings, pre-processing, analytics, and visualization. This will require a comprehensive solution to handle the complexity and time sensitivity of continuous monitoring. Several neurodegenerative disease monitoring (NDM) architectures have been proposed in the literature, however, they diverge on different aspects, such as the way they handle the monitoring processes, and the techniques they used to process, classify, and analyze the data. In this paper, we aim to bridge the gap between these existing NDM solutions. We provide first an overview of a standard NDM system, its main components, and requirements. We then survey and classify the exiting NDM solutions features, and characteristics. Afterwards, we provide a thorough evaluation of existing NDM solutions and we discuss the remaining key research challenges that have to be addressed. Finally, we propose and describe a generic NDM framework incorporating new technologies mainly the Cloud and Big Data to efficiently handle data intensive related processes. We aim by this work to serve researchers in this filed with useful information on NDM and provide direction for future research advancements. KeywordsNeurodegenerative Diseases Monitoring, Brain Informatics, Neuro Informatics, Big data, EEG, Seizure Detection"}
{"_id": "a6c7836a1877abb13c56d0ec29e9cde0d6e60cf8", "title": "The Malmo Platform for Artificial Intelligence Experimentation", "text": "We present Project Malmo \u2013 an AI experimentation platform built on top of the popular computer game Minecraft, and designed to support fundamental research in artificial intelligence. As the AI research community pushes for artificial general intelligence (AGI), experimentation platforms are needed that support the development of flexible agents that learn to solve diverse tasks in complex environments. Minecraft is an ideal foundation for such a platform, as it exposes agents to complex 3D worlds, coupled with infinitely varied game-play. Project Malmo provides a sophisticated abstraction layer on top of Minecraft that supports a wide range of experimentation scenarios, ranging from navigation and survival to collaboration and problem solving tasks. In this demo we present the Malmo platform and its capabilities. The platform is publicly released as open source software at IJCAI, to support openness and collaboration in AI research."}
{"_id": "e2896ae342b284b129dba9f12d046f3bbf313567", "title": "A multidisciplinary approach towards computational thinking for science majors", "text": "This paper describes the development and initial evaluation of a new course ``Introduction to Computational Thinking'' taken by science majors to fulfill a college computing requirement. The course was developed by computer science faculty in collaboration with science faculty and it focuses on the role of computing and computational principles in scientific inquiry. It uses Python and Python libraries to teach computational thinking via basic programming concepts, data management concepts, simulation, and visualization. Problems with a computational aspect are drawn from different scientific disciplines and are complemented with lectures from faculty in those areas. Our initial evaluation indicates that the problem-driven approach focused on scientific discovery and computational principles increases the student's interest in computing."}
{"_id": "1c73dd98297b5d3ec78722225c9a35e32502df24", "title": "Design of a Direct-Driven Linear Actuator for a High-Speed Quadruped Robot, Cheetaroid-I", "text": "Several legged robots have been developed in recent years and proved that they are an effective transportation system on an uneven terrain. In particular, quadruped robots are regarded as a new trend in robotics due to their superior gait stability and robustness to disturbances. More recently, many robotics researchers are making their best efforts to improve the locomotion speed, as well as the stability and robustness, of quadruped robots. The high-speed locomotion creates various challenges in the development of actuators, mechanical design, and control algorithms of the robot. In this paper, a linear actuation system for the high-speed locomotion of a quadruped robot is introduced. The proposed actuator is designed based on the principle of brushed direct-current electric motor systems. For the minimal impedance and improved power capacity, the actuator is designed with dual layers of cores, which are aligned parallel to permanent magnets. The mechanical and electrical properties of the actuation system, such as back-drivability, controllability, and response time, are verified by experimental results. A robotic leg, which is the rear leg of a Cheetah-like robot, is designed with the proposed actuator, and experimental results for trajectory tracking performance are presented."}
{"_id": "addab62e12c205b7a65eee4611577cc4bbfea7dc", "title": "Relationship Between Awareness, Knowledge and Attitudes Towards Environmental Education Among Secondary School Students in Malaysia", "text": "The importance of environmental education (EE) is well known globally among societies. Environmental education is gradually promoted as a sustainable tool in protection of the environment Environmental education is found across school curriculums in Malaysia. The objectives of the curriculum are environmental attitude, knowledge and awareness (AKA) where has been investigated in the current study. The study was conducted to identify the relationship between environmental awareness, knowledge and attitude among secondary school students. The survey was conducted on 470 respondents who were in Form Four (16 years old) in Kajang city, Selangor, Malaysia. An instrument which included (48 questions) was employed to investigate the relationship between awareness, knowledge and attitude. The results of Person Correlation showed a significant but weak relationship between awareness and knowledge on environmental issues while there was high relationship observed between awareness and attitudes among respondents. Moreover, the statistical test showed a negligible relationship between knowledge and attitude among students about environment. The study concluded that a high level of awareness and knowledge plus positive attitude of students may come have been achieved from the families of respondents, teachers, media, private reading and school curriculums regarding the environment that increases the environmental view among students as well as overall in the society. The study recommended that environmental education subject necessarily might be considered as an independent syllabus in Malaysian education system."}
{"_id": "2f1ec622fdbd0d1430d8c37c58d5edd9f344b727", "title": "A First-Estimates Jacobian EKF for Improving SLAM Consistency", "text": "In this work, we study the inconsistency of EKF-based SLAM from the perspective of observability. We analytically prove that when the Jacobians of the system and measurement models are evaluated at the latest state estimates during every time step, the linearized error-state system employed in the EKF has observable subspace of dimension higher than that of the actual, nonlinear, SLAM system. As a result, the covariance estimates of the EKF undergo reduction in directions of the state space where no information is available, which is a primary cause of the inconsistency. Furthermore, a new \u201cFirst-Estimates Jacobian\u201d (FEJ) EKF is proposed to improve the estimator\u2019s consistency during SLAM. The proposed algorithm performs better in terms of consistency, because when the filter Jacobians are calculated using the first-ever available estimates for each state variable, the error-state system model has an observable subspace of the same dimension as the underlying nonlinear SLAM system. The theoretical analysis is validated through both simulations and experiments."}
{"_id": "2063b13578d8e639115f75fe44df2d09f482f932", "title": "Computer security in the real world", "text": "Most computers today are insecure because security is costly in terms of user inconvenience and foregone features, and people are unwilling to pay the price. Real-world security depends more on punishment than on locks, but it's hard to even find network attackers, much less punish them. The basic elements of security are authentication, authorization, and auditing: the gold standard. The idea of one principal speaking for another is the key to doing these uniformly across the Internet."}
{"_id": "735c42d986c9c942b878504ec44125802e37f6c0", "title": "A review of argumentation for the Social Semantic Web", "text": "Argumentation represents the study of views and opinions that humans express with the goal of reaching a conclusion through logical reasoning. Since the 1950\u2019s, several models have been proposed to capture the essence of informal argumentation in different settings. With the emergence of the Web, and then the Semantic Web, this modeling shifted towards ontologies, while from the development perspective, we witnessed an important increase in Web 2.0 human-centered collaborative deliberation tools. Through a review of more than 150 scholarly papers, this article provides a comprehensive and comparative overview of approaches to modeling argumentation for the Social Semantic Web. We start from theoretical foundational models and investigate how they have influenced Social Web tools. We also look into Semantic Web argumentation models. Finally we end with Social Web tools for argumentation, including online applications combining Web 2.0 and Semantic Web technologies, following the path to a global World Wide Argument Web."}
{"_id": "12b7004186ac8658a26694e410f6a0617b26161f", "title": "Discriminating Gender on Twitter", "text": "Accurate prediction of demographic attributes from social media and other informal online content is valuable for marketing, personalization, and legal investigation. This paper describes the construction of a large, multilingual dataset labeled with gender, and investigates statistical models for determining the gender of uncharacterized Twitter users. We explore several different classifier types on this dataset. We show the degree to which classifier accuracy varies based on tweet volumes as well as when various kinds of profile metadata are included in the models. We also perform a large-scale human assessment using Amazon Mechanical Turk. Our methods significantly out-perform both baseline models and almost all humans on the same task."}
{"_id": "648f60e2720df96709ab5094b4513fd3a27f223b", "title": "Embedded Vision System for Real-Time Object Tracking using an Asynchronous Transient Vision Sensor", "text": "This paper presents an embedded vision system for object tracking applications based on a 128times128 pixel CMOS temporal contrast vision sensor. This imager asynchronously responds to relative illumination intensity changes in the visual scene, exhibiting a usable dynamic range of 120 dB and a latency of under 100 mus. The information is encoded in the form of address-event representation (AER) data. An algorithm for object tracking with 1 millisecond timestamp resolution of the AER data stream is presented. As a real-world application example, vehicle tracking for a traffic-monitoring is demonstrated in real time. The potential of the proposed algorithm for people tracking is also shown. Due to the efficient data pre-processing in the imager chip focal plane, the embedded vision system can be implemented using a low-cost, low-power digital signal processor"}
{"_id": "d5c34b960cef2d59a66331e83e0fefbff8a3da23", "title": "Deformable Image Registration: A Survey", "text": "Deformable image registration is a fundamental task in medical image processing. Among its most important applications, one may cite: i) multi-modality fusion, where information acquired by di erent imaging devices or protocols is fused to facilitate diagnosis and treatment planning; ii) longitudinal studies, where the development of phenomenon is studied over time; and iii) population modeling towards studying normal anatomical variability. In this research report, we attempt to give an overview of the deformable registration methods, putting emphasis on the most recent advances in the domain. Additional emphasis has been given to techniques applied to medical images. In order to study in depth image registration methods, their main components are identi ed and then studied independently. The most recent techniques are presented in a principled fashion. Key-words: Deformable registration, bibliographical review \u2217 Center for Visual Computing, Department of Applied Mathematics, Ecole Centrale de Paris, Equipe GALEN, INRIA Saclay \u00c9tude Bibliographique sur le Recalage D\u00e9formable d'Images R\u00e9sum\u00e9 : Le recalage d\u00e9formable d'images est une des t\u00e2ches les plus fondamentales dans l'imagerie m\u00e9dicale. Parmi ses applications les plus importantes, on compte: i) la fusion d' information provenant des di \u00e9rents types de modalit\u00e9s a n de faciliter le diagnostic et la plani cation du traitement; ii) les \u00e9tudes longitudinales, o\u00fa le d'eveloppement d'un ph\u00e9nom\u00e8ne est etudi\u00e9 en fonction du temps; et iii) la mod\u00e9lisation de la variabilit\u00e9 anatomique normale d'une population. Dans ce rapport de recherche, nous essayons de donner un aper\u00e7u des di \u00e9rentes m\u00e9thodes du recalage d\u00e9formables, en mettant l'accent sur les avanc\u00e9es les plus r\u00e9centes du domaine. Nous avons particuli\u00e9rement insist\u00e9 sur les techniques appliqu\u00e9es aux images m\u00e9dicales. A n d'\u00e9tudier en profondeur les m\u00e9thodes du recalage d'images, leurs composants principales sont d'abord identi \u00e9s puis \u00e9tudi\u00e9es de mani\u00e8re inde\u00e9pendante, les techniques les plus r\u00e9centes \u00e9tant classi \u00e9es en suivant un sch\u00e9ma logique d\u00e9termin\u00e9. Mots-cl\u00e9s : recalage d\u00e9formable, \u00e9tude bibliographique Deformable Image Registration: A Survey 3"}
{"_id": "3c60f16af7082fd67962add8017f01c7b1aa7e87", "title": "Edge-Oriented Computing Paradigms: A Survey on Architecture Design and System Management", "text": "While cloud computing has brought paradigm shifts to computing services, researchers and developers have also found some problems inherent to its nature such as bandwidth bottleneck, communication overhead, and location blindness. The concept of fog/edge computing is therefore coined to extend the services from the core in cloud data centers to the edge of the network. In recent years, many systems are proposed to better serve ubiquitous smart devices closer to the user. This article provides a complete and up-to-date review of edge-oriented computing systems by encapsulating relevant proposals on their architecture features, management approaches, and design objectives."}
{"_id": "cee7c2f06129ef93cebb6637db3169335f3dde6b", "title": "Computation and Computational Thinking", "text": "ion of reality in such a way that the neglected details in the model make it executable by a machine.\u201d [2] As we shall see, finding or devising appropriate models of computation to formulate problems is a central and often nontrivial part of computational thinking."}
{"_id": "7881d48deaffe4c8dc07200b9b5932a7ef74d783", "title": "Autopoiesis, Structural Coupling and Cognition: A history of these and other notions in the biology of cognition", "text": "My intent in this essay is to reflect on the history of some biological notions such as autopoiesis, structural coupling, and cognition, that I have developed since the early 1960's as a result of my work on visual perception and the organization of the living. No doubt I shall repeat things that I have said in other publications (Maturana & Varela, 1980, 1988), and I shall present notions\"that once they are said appear as obvious truisms. Moreover, I shall refine or expand the meaning of such notions, or even modify them. Yet, in any case, the reader is not invited to attend to the truisms, or to what seems to be obvious, rather he or she is invited to attend to the consequences that those notions entail for the understanding of cognition as a biological process. After all, explanations or demonstrations always become self evident once they are understood and accepted, and the purpose of this essay is the expansion of understanding in all dimensions of human existence."}
{"_id": "14a6c0a97c82c8d502ce53a60698ce630bafb294", "title": "Integration of an On-line Kaldi Speech Recogniser to the Alex Dialogue Systems Framework", "text": "This paper describes the integration of an on-line Kaldi speech recogniser into the Alex Dialogue Systems Framework (ADSF). As the Kaldi OnlineLatgenRecogniser is written in C++, we first developed a Python wrapper for the recogniser so that the ADSF, written in Python, could interface with it. Training scripts for acoustic and language modelling were developed and integrated into ADSF, and acoustic and language models were build. Finally, optimal recogniser parameters were determined and evaluated. The dialogue system Alex with the new speech recogniser is evaluated on Public Transport Information (PTI) domain."}
{"_id": "6182e4b5151aa27ceb75c94543e3f584c991e00f", "title": "Improving deep neural network acoustic models using generalized maxout networks", "text": "Recently, maxout networks have brought significant improvements to various speech recognition and computer vision tasks. In this paper we introduce two new types of generalized maxout units, which we call p-norm and soft-maxout. We investigate their performance in Large Vocabulary Continuous Speech Recognition (LVCSR) tasks in various languages with 10 hours and 60 hours of data, and find that the p-norm generalization of maxout consistently performs well. Because, in our training setup, we sometimes see instability during training when training unbounded-output nonlinearities such as these, we also present a method to control that instability. This is the \u201cnormalization layer\u201d, which is a nonlinearity that scales down all dimensions of its input in order to stop the average squared output from exceeding one. The performance of our proposed nonlinearities are compared with maxout, rectified linear units (ReLU), tanh units, and also with a discriminatively trained SGMM/HMM system, and our p-norm units with p equal to 2 are found to perform best."}
{"_id": "0687573a482d84385ddd55e708e240f3e303fab9", "title": "Hypothesis spaces for minimum Bayes risk training in large vocabulary speech recognition", "text": "The Minimum Bayes Risk (MBR) framework has been a successful strategy for the training of hidden Markov models for large vocabulary speech recognition. Practical implementations of MBR must select an appropriate hypothesis space and loss function. The set of word sequences and a word-based Levenshtein distance may be assumed to be the optimal choice but use of phoneme-based criteria appears to be more successful. This paper compares the use of different hypothesis spaces and loss functions defined using the system constituents of word, phone, physical triphone, physical state and physical mixture component. For practical reasons the competing hypotheses are constrained by sampling. The impact of the sampling technique on the performance of MBR training is also examined."}
{"_id": "2116b2eaaece4af9c28c32af2728f3d49b792cf9", "title": "Improving neural networks by preventing co-adaptation of feature detectors", "text": "When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This overfitting is greatly reduced by randomly omitting half of the feature detectors on each training case. This prevents complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors. Instead, each neuron learns to detect a feature that is generally helpful for producing the correct answer given the combinatorially large variety of internal contexts in which it must operate. Random \u201cdropout\u201d gives big improvements on many benchmark tasks and sets new records for speech and object recognition. A feedforward, artificial neural network uses layers of non-linear hidden units between its inputs and its outputs. By adapting the weights on the incoming connections of these hidden units it learns feature detectors that enable it to predict the correct output when given an input vector [15]. If the relationship between the input and the correct output is complicated and the network has enough hidden units to model it accurately, there will typically be many different settings of the weights that can model the training set almost perfectly, especially if there is only a limited amount of labeled training data. Each of these weight vectors will make different predictions on held-out test data and almost all of them will do worse on the test data than on the training data because the feature detectors have been tuned to work well together on the training data but not on the test data. Overfitting can be reduced by using \u201cdropout\u201d to prevent complex co-adaptations on the training data. On each presentation of each training case, each hidden unit is randomly omitted from the network with a probability of 0.5, so a hidden unit cannot rely on other hidden units being present. Another way to view the dropout procedure is as a very efficient way of performing model averaging with neural networks. A good way to reduce the error on the test set is to average the predictions produced by a very large number of different networks. The standard way to do this is to train many separate networks and then to apply each of these networks to the test data, but this is computationally expensive during both training and testing. Random dropout makes it possible to train a huge number of different networks in a reasonable time. There is almost certainly a different network for each presentation of each training case but all of these networks share the same weights for the hidden units that are present. We use the standard, stochastic gradient descent procedure for training the dropout neural networks on mini-batches of training cases, but we modify the penalty term that is normally used to prevent the weights from growing too large. Instead of penalizing the squared length (L2 norm) of the whole weight vector, we set an upper bound on the L2 norm of the incoming weight vector for each individual hidden unit. If a weight-update violates this constraint, we renormalize the weights of the hidden unit by division. Using a constraint rather than a penalty prevents weights from growing very large no matter how large the proposed weight-update is. This makes it possible to start with a"}
{"_id": "54a4c8051e655e3035f98bb9dd8876a6511517ff", "title": "LIMSI$@$WMT'16: Machine Translation of News", "text": "This paper describes LIMSI\u2019s submissions to the shared WMT\u201916 task \u201cTranslation of News\u201d. We report results for Romanian-English in both directions, for English to Russian, as well as preliminary experiments on reordering to translate from English into German. Our submissions use mainly NCODE and MOSES along with continuous space models in a post-processing step. The main novelties of this year\u2019s participation are the following: for the translation into Russian and Romanian, we have attempted to extend the output of the decoder with morphological variations and to use a CRF model to rescore this new search space; as for the translation into German, we have been experimenting with source-side pre-ordering based on a dependency structure allowing permutations in order to reproduce the tar-"}
{"_id": "8ddc81b4ce53de15ec3658234cb524fc664bd76d", "title": "Expanded Semantic Graph Representation for Matching Related Information of Interest across Free Text Documents", "text": "This research proposes an expanded semantic graph definition that serves as a basis for an expanded semantic graph representation and graph matching approach. This representation separates the content and context and adds a number of semantic structures that encapsulate inferred information. The expanded semantic graph approach facilitates finding additional matches, identifying and eliminating poor matches, and prioritizing matches based on how much new knowledge is provided. By focusing on information of interest, doing pre-processing, and reducing processing requirements, the approach is applicable to problems where related information of interest is sought across a massive body of free text documents. Key aspects of the approach include (1) expanding the nodes and edges through inference using DL-Safe rules, abductive hypotheses, and syntactic patterns, (2) separating semantic content into nodes and semantic context into edges, and (3) applying relatedness measures on a node, edge, and sub graph basis. Results from tests using a ground-truthed subset of a large dataset of law enforcement investigator emails illustrate the benefits of these approaches."}
{"_id": "8f60633d0dbfea86722296444e1d7b2946c3b42c", "title": "Blind Deconvolution Using Alternating Maximum a Posteriori Estimation with Heavy-Tailed Priors", "text": "Single image blind deconvolution aims to estimate the unknown blur from a single observed blurred image and recover the original sharp image. Such task is severely ill-posed and typical approaches involve some heuristic or other steps without clear mathematical explanation to arrive at an acceptable solution. We show that a straightforward maximum a posteriory estimation combined with very sparse priors and an efficient numerical method can produce results, which compete with much more complicated state-of-the-art methods."}
{"_id": "5b79ad3e04e772fe72768b2ec6070d9a0ef9edc6", "title": "A 3D object detection and pose estimation pipeline using RGB-D images", "text": "3D object detection and pose estimation has been studied extensively in recent decades for its potential applications in robotics. However, there still remains challenges when we aim at detecting multiple objects while retaining low false positive rate in cluttered environments. This paper proposes a robust 3D object detection and pose estimation pipeline based on RGB-D images, which can detect multiple objects simultaneously while reducing false positives. Detection begins with template matching and yields a set of template matches. A clustering algorithm then groups templates of similar spatial location and produces multiple-object hypotheses. A scoring function evaluates the hypotheses using their associated templates and non-maximum suppression is adopted to remove duplicate results based on the scores. Finally, a combination of point cloud processing algorithms are used to compute objects' 3D poses. Existing object hypotheses are verified by computing the overlap between model and scene points. Experiments demonstrate that our approach provides competitive results comparable to the state-of-the-arts and can be applied to robot random bin-picking."}
{"_id": "e092be3120ce51aaa35af673fabd07b6513e702e", "title": "Mis problems and failures: a socio-technical perspective", "text": "Many of the problems arjd failures of Management Information Systems (MIS) and Management Science/Operations Research (MS/OR) projects have been attributed to organizational behavioral problems. The millions of dollars organizations spend on MIS and MS/OR development are of little benefit because systems continue to fail. Steps can be taken to understand and solve these behavioral probiems. This article argues that in most oases these behavioral problems are the result of inadequate designs. These bad designs are attributed to the way MIS systems designers view organizations, their members, and the function of an MIS within them. I.e., systems designers' frames of reference. These frames of reference cause faulty design choices and failures to perceive better design alternatives. Seven conditions are discussed which reflect current systems designers' points of view. The discussion of these conditions demonstrates the need to refrawe MIS design methodology within the Socio-Technicai Systems (STS) design approach and change systems designers' perspectives. The STS approach is introduced as a realistic view of organizations and a way to change them. This article is the first of two to appear in consecutive issues of the MIS Quarterly. The purpose of this first article is to demonstrate the need for the srs approaj:h. The second will present the basic concepts and principles of the STS methodology and how it can be utilized in the design of an MIS."}
{"_id": "c25a92479398cff2569ec5eab081be5fdfecca23", "title": "Dynamic Weighted Majority for Incremental Learning of Imbalanced Data Streams with Concept Drift", "text": "Concept drifts occurring in data streams will jeopardize the accuracy and stability of the online learning process. If the data stream is imbalanced, it will be even more challenging to detect and cure the concept drift. In the literature, these two problems have been intensively addressed separately, but have yet to be well studied when they occur together. In this paper, we propose a chunk-based incremental learning method called Dynamic Weighted Majority for Imbalance Learning (DWMIL) to deal with the data streams with concept drift and class imbalance problem. DWMIL utilizes an ensemble framework by dynamically weighting the base classifiers according to their performance on the current data chunk. Compared with the existing methods, its merits are four-fold: (1) it can keep stable for non-drifted streams and quickly adapt to the new concept; (2) it is totally incremental, i.e. no previous data needs to be stored; (3) it keeps a limited number of classifiers to ensure high efficiency; and (4) it is simple and needs only one thresholding parameter. Experiments on both synthetic and real data sets with concept drift show that DWMIL performs better than the state-of-the-art competitors, with less computational cost."}
{"_id": "7fd3bf8fe9c17b7dfed46412efe8549f76b6c1cc", "title": "Android smartphone vulnerabilities: A survey", "text": "Round the globe mobile devices like Smartphone, PDAs & tablets are playing an essential role in every person day to day lives. Various operating systems such as android, ios, blackberry etc provides a platform for smart devices. Google's Android is a one of the most popular and user friendly open source software platform for mobile devices. Along with its convenience people are likely to download and install malicious applications developed to misguide the user which will create a security gap to penetrate for the attacker. Hackers are inclined to discover and exploit the new vulnerabilities which will bring forth with the latest version of Android. In this paper, we concentrate on examining and understanding the vulnerabilities exist in android operating system. We will also suggest a metadata model which gives the information of all the related terms required for vulnerability assessment. Here, analyzing data is extracted from Open Source Vulnerability Database (OSVDB) and National Vulnerability Database (NVD)."}
{"_id": "31c964b8f0f0fc9796656693707484193e0e35be", "title": "Kernel Region ( a ) Top Level ( b ) Compute Unit Level ( c ) Processing Element Level", "text": "OpenCL FPGA has recently gained great popularity with emerging needs for workload acceleration such as Convolutional Neural Network (CNN), which is the most popular deep learning architecture in the domain of computer vision. While OpenCL enhances the code portability and programmability of FPGA, it comes at the expense of performance. The key challenge is to optimize the OpenCL kernels to efficiently utilize the flexible hardware resources in FPGA. Simply optimizing the OpenCL kernel code through various compiler options turns out insufficient to achieve desirable performance for both compute-intensive and data-intensive workloads such as convolutional neural networks . In this paper, we first propose an analytical performance model and apply it to perform an in-depth analysis on the resource requirement of CNN classifier kernels and available resources on modern FPGAs. We identify that the key performance bottleneck is the onchip memory bandwidth. We propose a new kernel design to effectively address such bandwidth limitation and to provide an optimal balance between computation, on-chip, and off-chip memory access. As a case study, we further apply these techniques to design a CNN accelerator based on the VGG model. Finally, we evaluate the performance of our CNN accelerator using an Altera Arria 10 GX1150 board. We achieve 866 Gop/s floating point performance at 370MHz working frequency and 1.79 Top/s 16-bit fixed-point performance at 385MHz. To the best of our knowledge, our implementation achieves the best power efficiency and performance density compared to existing work."}
{"_id": "4c9ced03deedff9bc97705882b99ca6f76b9c1d4", "title": "Valgrind: a framework for heavyweight dynamic binary instrumentation", "text": "Dynamic binary instrumentation (DBI) frameworks make it easy to build dynamic binary analysis (DBA) tools such as checkers and profilers. Much of the focus on DBI frameworks has been on performance; little attention has been paid to their capabilities. As a result, we believe the potential of DBI has not been fully exploited.\n In this paper we describe Valgrind, a DBI framework designed for building heavyweight DBA tools. We focus on its unique support for shadow values-a powerful but previously little-studied and difficult-to-implement DBA technique, which requires a tool to shadow every register and memory value with another value that describes it. This support accounts for several crucial design features that distinguish Valgrind from other DBI frameworks. Because of these features, lightweight tools built with Valgrind run comparatively slowly, but Valgrind can be used to build more interesting, heavyweight tools that are difficult or impossible to build with other DBI frameworks such as Pin and DynamoRIO."}
{"_id": "80a01a77fbbf3216af9594924416352c7b099095", "title": "Estimation of land surface temperature over Delhi using Landsat-7 ETM +", "text": "Land surface temperature (LST) is important factor in global change studies, in estimating radiation budgets in heat balance studies and as a control for climate models. The knowledge of surface temperature is important to a range of issues and themes in earth sciences central to urban climatology, global environmental change, and human-environment interactions. In the study an attempt has been made to estimate surface temperature over Delhi area using Landsat-7 ETM+ satellite data. The variability of these retrieved LSTs has been investigated with respect to different land use / land cover (LU/LC) types determined from the Landsat visible and NIR channels. The classification uncertainties over different land use land cover were assessed and it has been observed that the classification uncertainties were found to be lowest using Minimum Noise Fraction (MNF) components. The emissivity per pixel is retrieved directly from satellite data and has been estimated as narrow band emissivity at the satellite sensor channel in order to have least error in the surface temperature estimation. Strong correlation is observed between surface temperature with Normalized Difference Vegetation Index (NDVI) over different LU/LC classes and the relationship is moderate with fractional vegetation cover (FVC). A regression relation between these parameters has also been estimated indicating that surface temperatures can be predicted if NDVI values are known. The results suggest that the methodology is feasible to estimate NDVI, surface emissivity and surface temperature with reasonable accuracy over heterogeneous urban areas."}
{"_id": "ebbef7e8b7a1ea133999561ab279e51b961d31ec", "title": "A Novel Fuzzy C-Means Clustering Algorithm for Image Thresholding", "text": "Image thresholding has played an important role in image segmentation. In this paper, we present a novel spatially weighted fuzzy c-means (SWFCM) clustering algorithm for image thresholding. The algorithm is formulated by incorporating the spatial neighborhood information into the standard FCM clustering algorithm. Two improved implementations of the k-nearest neighbor (k-NN) algorithm are introduced for calculating the weight in the SWFCM algorithm so as to improve the performance of image thresholding. To speed up the FCM algorithm, the iteration is carried out with the gray level histogram of image instead of the conventional whole data of image. The performance of the algorithm is compared with those of an existed fuzzy thresholding algorithm and widely applied between variance and entropy methods. Experimental results with synthetic and real images indicate the proposed approach is effective and efficient. In addition, due to the neighborhood model, the proposed method is more tolerant to noise."}
{"_id": "7dbf06c01c8f2064547a4c29c3450d36a3b975eb", "title": "MIMO-SAR: Opportunities and Pitfalls", "text": "This paper reviews advanced radar architectures that employ multiple transmit and multiple receive antennas to improve the performance of future synthetic aperture radar (SAR) systems. These advanced architectures have been dubbed multiple-input multiple-output SAR (MIMO-SAR) in analogy to MIMO communication systems. Considerable confusion arose, however, with regard to the selection of suitable waveforms for the simultaneous transmission via multiple channels. It is shown that the mere use of orthogonal waveforms is insufficient for the desired performance improvement in view of most SAR applications. As a solution to this fundamental MIMO-SAR challenge, a new class of short-term shift-orthogonal waveforms is introduced. The short-term shift orthogonality avoids mutual interferences from the radar echoes of closely spaced scatterers, while interferences from more distant scatterers are suppressed by digital beamforming on receive in elevation. Further insights can be gained by considering the data acquisition of a side-looking imaging radar in a 3-D information cube. It becomes evident that the suggested waveforms fill different subspaces that can be individually accessed by a multichannel receiver. For completeness, the new class of short-term shift-orthogonal waveforms is also compared to a recently proposed pair of orthogonal frequency-division multiplexing waveforms. It is shown that both sets of waveforms require essentially the same principle of range time to elevation angle conversion via a multichannel receiver in order to be applicable for MIMO-SAR imaging without interference."}
{"_id": "c3c2cfd3c03bd3b621ebe00be36a60c96b3d8fad", "title": "Tools to support systematic reviews in software engineering: a cross-domain survey using semi-structured interviews", "text": "Background: A number of software tools are being developed to support systematic reviewers within the software engineering domain. However, at present, we are not sure which aspects of the review process can most usefully be supported by such tools or what characteristics of the tools are most important to reviewers. Aim: The aim of the study is to explore the scope and practice of tool support for systematic reviewers in other disciplines. Method: Researchers with experience of performing systematic reviews in Healthcare and the Social Sciences were surveyed. Qualitative data was collected through semi-structured interviews and data analysis followed an inductive approach. Results: 13 interviews were carried out. 21 software tools categorised into one of seven types were identified. Reference managers were the most commonly mentioned tools. Features considered particularly important by participants were support for multiple users, support for data extraction and support for tool maintenance. The features and importance levels identified by participants were compared with those proposed for tools to support systematic reviews in software engineering. Conclusions: Many problems faced by systematic reviewers in other disciplines are similar to those faced in software engineering. There is general consensus across domains that improved tools are needed."}
{"_id": "be6e61d1593dd5111d0482cd9a340d1ce686b2c3", "title": "DeepEmo: Learning and Enriching Pattern-Based Emotion Representations", "text": "We propose a graph-based mechanism to extract rich-emotion bearing patterns, which fosters a deeper analysis of online emotional expressions, from a corpus. The patterns are then enriched with word embeddings and evaluated through several emotion recognition tasks. Moreover, we conduct analysis on the emotion-oriented patterns to demonstrate its applicability and to explore its properties. Our experimental results demonstrate that the proposed techniques outperform most stateof-the-art emotion recognition techniques."}
{"_id": "26b6341330085c8588b0d6e3eaf34ab5a0f7ca53", "title": "A low power and high speed CMOS Voltage-Controlled Ring Oscillator", "text": "In this paper a new Voltage Controlled Ring Oscillator is presented. The proposed VCO uses partial positive feedback in its delay cell, allowing the circuit to operate with single two stages, achieving high speed with reduced power consumption. The new VCO has a wide range of operation frequencies (0.2 at 2.1 GHz), good linearity between the output frequency and the control voltage, phase noise of -90 dBc/Hz at 100 kHz offset and it only consumes 7.01 mW in its central frequency of 1.2 GHz using a 3.3 V power supply. The circuit was fabricated in a 0.35 /spl mu/m CMOS-AMS process and occupies an area of 67.5/spl times/77.5 /spl mu/m/sup 2/."}
{"_id": "2988659facc190ac819af177cf86094f1b467805", "title": "Low correlation MIMO antenna for LTE 700MHz band", "text": "A key feature of upcoming 4G wireless communication standards is multiple-input-multiple-output (MIMO) technology. To make the best use of MIMO, the antenna correlation between adjacent antennas must be low (< 0.5). In this context, we propose a new correlation reduction technique suitable for closely spaced antennas (distance, d < \u03bb/40). This technique reduces mutual coupling between antennas and concurrently uncorrelates the antennas' radiation characteristic by inducing negative group delay in the target frequency band. The validity of the technique is demonstrated with a USB dongle MIMO antenna designed for LTE 700MHz band. Measurement results show that the antenna correlation is reduced more than 37% using the proposed technique."}
{"_id": "5eb0b19fa9434588222a41b8967f9830c305f38a", "title": "Design of fuzzy controller for induction heating using DSP", "text": "This paper presents the development of fuzzy controller for induction heating temperature control. Induction heating coil can be controlled by single phase sinusoidal pulse width modulation (SPWM) digital signal processor (DSP)-based inverter. DSP is used to store the required commands for generating the necessary waveforms to control the frequency of the inverter through proper design of switching pulses. The SPWM technique produces pure sinusoidal wave of voltage and current output. Fuzzy controller software has been developed in the C++ language. VisSim software is used to experiment with a DSP. The inverter is current-fed and SPWM is used for DC to AC conversion. SPWM normally cuts off high-order harmonic by low-pass filter to produce sine wave power. The LC low-pass filter with cutoff frequency at 300Hz is used. Since low frequency produces ripple to cause transformer and inductance magnetic saturation, the range of frequency is between 100 Hz and 300 Hz. The results of the experiment are presented and show the effectiveness and superiority of this temperature control system."}
{"_id": "e5f1c9e6d8f5506b4d0488bdd1482f3c7d2f48ad", "title": "Trust in school : a pathway to inhibit teacher burnout ?", "text": "Purpose \u2013 The purpose of this paper is to consider trust as an important relational source in schools by exploring whether trust lowers teacher burnout. The authors examine how trust relationships with different school parties such as the principal relate to distinct dimensions of teacher burnout. The authors further analyze whether school-level trust additionally influences burnout. In doing this, the authors account for other teacher and school characteristics. Design/methodology/approach \u2013 The authors use quantitative data gathered during the 2008-2009 school year from 673 teachers across 58 elementary schools in Flanders (i.e. the northern Dutch-speaking region of Belgium). Because teacher and school characteristics are simultaneously related to burnout, multilevel modeling is applied. Findings \u2013 Trust can act as a buffer against teacher burnout. Teachers\u2019 trust in students demonstrates the strongest association with burnout compared to trust in principals or colleagues. Exploring relationships of trust in distinct school parties with different burnout dimensions yield interesting additional insights such as the specific importance of teacher-principal trust for teachers\u2019 emotional exhaustion. Burnout is further an individual teacher matter to which school-level factors are mainly unrelated. Research limitations/implications \u2013 Principals fulfill an important role in inhibiting emotional exhaustion among teachers. They are advised to create a school atmosphere that is conducive for different kinds of trust relationships to develop. Actions to strengthen trust and inhibit teacher burnout are necessary, although further qualitative and longitudinal research is desirable. Originality/value \u2013 This paper offers a unique contribution by examining trust in different school parties as a relational buffer against teacher burnout. It indicates that principals can affect teacher burnout and prevent emotional exhaustion by nurturing trusting relationships in school."}
{"_id": "4d4be6294e5b30cdf985fcc044f44ec9da495af3", "title": "New Trends in Robotics for Agriculture: Integration and Assessment of a Real Fleet of Robots", "text": "Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis."}
{"_id": "5a24157483a4307ade1bf3537b8494fa9700d0b3", "title": "Contactless Sleep Apnea Detection on Smartphones", "text": "We present a contactless solution for detecting sleep apnea events on smartphones. To achieve this, we introduce a novel system that monitors the minute chest and abdomen movements caused by breathing on smartphones. Our system works with the phone away from the subject and can simultaneously identify and track the fine-grained breathing movements from multiple subjects. We do this by transforming the phone into an active sonar system that emits frequency-modulated sound signals and listens to their reflections; our design monitors the minute changes to these reflections to extract the chest movements. Results from a home bedroom environment shows that our design operates efficiently at distances of up to a meter and works even with the subject under a blanket.\n Building on the above system, we develop algorithms that identify various sleep apnea events including obstructive apnea, central apnea, and hypopnea from the sonar reflections. We deploy our system at the UW Medicine Sleep Center at Harborview and perform a clinical study with 37 patients for a total of 296 hours. Our study demonstrates that the number of respiratory events identified by our system is highly correlated with the ground truth and has a correlation coefficient of 0.9957, 0.9860, and 0.9533 for central apnea, obstructive apnea and hypopnea respectively. Furthermore, the average error in computing of rate of apnea and hypopnea events is as low as 1.9 events/hr."}
{"_id": "bb2edb3e16cd263bdb4e3038f383f3bb8c4c238b", "title": "Doubly Stochastic Variational Inference for Deep Gaussian Processes", "text": "Gaussian processes (GPs) are a good choice for function approximation as they are flexible, robust to overfitting, and provide well-calibrated predictive uncertainty. Deep Gaussian processes (DGPs) are multi-layer generalizations of GPs, but inference in these models has proved challenging. Existing approaches to inference in DGP models assume approximate posteriors that force independence between the layers, and do not work well in practice. We present a doubly stochastic variational inference algorithm that does not force independence between layers. With our method of inference we demonstrate that a DGP model can be used effectively on data ranging in size from hundreds to a billion points. We provide strong empirical evidence that our inference scheme for DGPs works well in practice in both classification and regression."}
{"_id": "1ea8e2ccd1be716a60269f2ee86825db4b835be7", "title": "An experimental investigation of model-based parameter optimisation: SPO and beyond", "text": "This work experimentally investigates model-based approaches for optimising the performance of parameterised randomised algorithms. We restrict our attention to procedures based on Gaussian process models, the most widely-studied family of models for this problem. We evaluated two approaches from the literature, and found that sequential parameter optimisation (SPO) [4] offered the most robust performance. We then investigated key design decisions within the SPO paradigm, characterising the performance consequences of each. Based on these findings, we propose a new version of SPO, dubbed SPO+, which extends SPO with a novel intensification procedure and log-transformed response values. Finally, in a domain for which performance results for other (model-free) parameter optimisation approaches are available, we demonstrate that SPO+ achieves state-of-the-art performance."}
{"_id": "364c75585871ec93ec0405277e82b4d4dfac5d06", "title": "Neuroevolutionary reinforcement learning for generalized control of simulated helicopters", "text": "This article presents an extended case study in the application of neuroevolution to generalized simulated helicopter hovering, an important challenge problem for reinforcement learning. While neuroevolution is well suited to coping with the domain's complex transition dynamics and high-dimensional state and action spaces, the need to explore efficiently and learn on-line poses unusual challenges. We propose and evaluate several methods for three increasingly challenging variations of the task, including the method that won first place in the 2008 Reinforcement Learning Competition. The results demonstrate that (1) neuroevolution can be effective for complex on-line reinforcement learning tasks such as generalized helicopter hovering, (2) neuroevolution excels at finding effective helicopter hovering policies but not at learning helicopter models, (3) due to the difficulty of learning reliable models, model-based approaches to helicopter hovering are feasible only when domain expertise is available to aid the design of a suitable model representation and (4) recent advances in efficient resampling can enable neuroevolution to tackle more aggressively generalized reinforcement learning tasks."}
{"_id": "64bb1647bbd2540c077c2ff675c5398c8d978e51", "title": "Automatic Gait Optimization with Gaussian Process Regression", "text": "Gait optimization is a basic yet challenging problem for both quadrupedal and bipedal robots. Although techniques for automating the process exist, most involve local function optimization procedures that suffer from three key drawbacks. Local optimization techniques are naturally plagued by local optima, make no use of the expensive gait evaluations once a local step is taken, and do not explicitly model noise in gait evaluation. These drawbacks increase the need for a large number of gait evaluations, making optimization slow, data inefficient, and manually intensive. We present a Bayesian approach based on Gaussian process regression that addresses all three drawbacks. It uses a global search strategy based on a posterior model inferred from all of the individual noisy evaluations. We demonstrate the technique on a quadruped robot, using it to optimize two different criteria: speed and smoothness. We show in both cases our technique requires dramatically fewer gait evaluations than state-of-the-art local gradient approaches."}
{"_id": "daa63f57c3fbe994c4356f8d986a22e696e776d2", "title": "Efficient Global Optimization of Expensive Black-Box Functions", "text": "In many engineering optimization problems, the number of function evaluations is severely limited by time or cost. These problems pose a special challenge to the field of global optimization, since existing methods often require more function evaluations than can be comfortably afforded. One way to address this challenge is to fit response surfaces to data collected by evaluating the objective and constraint functions at a few points. These surfaces can then be used for visualization, tradeoff analysis, and optimization. In this paper, we introduce the reader to a response surface methodology that is especially good at modeling the nonlinear, multimodal functions that often occur in engineering. We then show how these approximating functions can be used to construct an efficient global optimization algorithm with a credible stopping rule. The key to using response surfaces for global optimization lies in balancing the need to exploit the approximating surface (by sampling where it is minimized) with the need to improve the approximation (by sampling where prediction error may be high). Striking this balance requires solving certain auxiliary problems which have previously been considered intractable, but we show how these computational obstacles can be overcome."}
{"_id": "cbc5d3e04f80a07b49ac3fbdb41f4bd577664cdc", "title": "Inter-satellite links for cubesats", "text": "Realizing inter-satellite links is a must for ensuring the success of cubesat swarm missions. Nevertheless, it has hardly been considered until now. The communication systems for cubesats have to deal with a few peculiar demands regarding consumed power, geometry and throughput. Depending on the type of application, required data rates can go up to tens of megabits per second, while power consumption and physical size are limited by the platform. The proposed communication scheme will combine power-efficient modulation and channel coding with multiple access and spread spectrum techniques, enabling the deployment of multiple satellites. Apart from this, the antenna system has to be designed such that links can be established and maintained independent of the satellites' orientation. An electrically steerable radiation pattern is achieved by placing antennas on each face of the cube. Conformal beamforming provides the system with 5 dBi gain for any desired direction of transmission, eliminating the need for attitude control. Furthermore, using planar antennas reduces the complexity of the mechanical part as they require no deployment."}
{"_id": "846d1a14f4a443a2eab622c8589268359d44ed60", "title": "Transparent, conductive graphene electrodes for dye-sensitized solar cells.", "text": "Transparent, conductive, and ultrathin graphene films, as an alternative to the ubiquitously employed metal oxides window electrodes for solid-state dye-sensitized solar cells, are demonstrated. These graphene films are fabricated from exfoliated graphite oxide, followed by thermal reduction. The obtained films exhibit a high conductivity of 550 S/cm and a transparency of more than 70% over 1000-3000 nm. Furthermore, they show high chemical and thermal stabilities as well as an ultrasmooth surface with tunable wettability."}
{"_id": "1bd6460e7a06f4ca7857a0d3577bd7fc6fd4b1c9", "title": "BUILDING GENERALIZABLE AGENTS", "text": "Towards bridging the gap between machine and human intelligence, it is of utmost importance to introduce environments that are visually realistic and rich in content. In such environments, one can evaluate and improve a crucial property of practical intelligent systems, namely generalization. In this work, we build House3D, a rich, extensible and efficient environment that contains 45,622 human-designed 3D scenes of houses, ranging from single-room studios to multistoreyed houses, equipped with a diverse set of fully labeled 3D objects, textures and scene layouts, based on the SUNCG dataset (Song et al., 2017). With an emphasis on semantic-level generalization, we study the task of concept-driven navigation, RoomNav, using a subset of houses in House3D. In RoomNav, an agent navigates towards a target specified by a semantic concept. To succeed, the agent learns to comprehend the scene it lives in by developing perception, understand the concept by mapping it to the correct semantics, and navigate to the target by obeying the underlying physical rules. We train RL agents with both continuous and discrete action spaces and show their ability to generalize in new unseen environments. In particular, we observe that (1) training is substantially harder on large house sets but results in better generalization, (2) using semantic signals (e.g. segmentation mask) boosts the generalization performance, and (3) gated networks on semantic input signal lead to improved training performance and generalization. We hope House3D1, including the analysis of the RoomNav task, serves as a building block towards designing practical intelligent systems and we wish it to be broadly adopted by the community."}
{"_id": "15989a3074d773dcd0b8de04341124c5f7af4bab", "title": "Inferring the meaning of chord sequences via lyrics", "text": "This paper discusses how meanings associated with chord sequences can be inferred from word associations based on lyrics. The approach works by analyzing in-line chord annotations of lyrics to maintain co-occurrence statistics for chords and lyrics. This is analogous to the way parallel corpora are analyzed in order to infer translation lexicons. The result can benefit musical discovery systems by modeling how the chord structure complements the lyrics."}
{"_id": "7ac74ef49aeb6cd5b91a6640341a438b52d80f52", "title": "Handwritten-Digit Recognition by Hybrid Convolutional Neural Network based on HfO2 Memristive Spiking-Neuron", "text": "Although there is a huge progress in complementary-metal-oxide-semiconductor (CMOS) technology, construction of an artificial neural network using CMOS technology to realize the functionality comparable with that of human cerebral cortex containing 1010\u20131011 neurons is still of great challenge. Recently, phase change memristor neuron has been proposed to realize a human-brain level neural network operating at a high speed while consuming a small amount of power and having a high integration density. Although memristor neuron can be scaled down to nanometer, integration of 1010\u20131011 neurons still faces many problems in circuit complexity, chip area, power consumption, etc. In this work, we propose a CMOS\u00a0compatible HfO2 memristor neuron that can be well integrated with silicon circuits. A hybrid Convolutional Neural Network (CNN) based on the HfO2 memristor neuron is proposed and constructed. In the hybrid CNN, one memristive neuron can behave as multiple physical neurons based on the Time Division Multiplexing Access (TDMA) technique. Handwritten digit recognition is demonstrated in the hybrid CNN with a memristive neuron acting as 784 physical neurons. This work paves the way towards substantially shrinking the amount of neurons required in hardware and realization of more complex or even human cerebral cortex level memristive neural networks."}
{"_id": "029ec0d53785eaf719632f5aa67ae5c22689dc70", "title": "New circularly polarized slot radiator for substrate integrated waveguide (SIW) planar array", "text": "A circularly polarized slot radiator in millimetre band is presented. Low reflection and good polarization performance is obtained in comparison with state of art slot elements. A high gain array prototype has been implemented in substrate integrated waveguides (SIW) technology, in order to check the new element performance. A cosecant amplitude coverage in the elevation plane, as well as a 6\u00b0 tilted monopulse pattern in the azimuth plane are obtained. A gain peak value of 28.6 dBi and 79% of efficiency at 36.7 GHz have been obtained in the specified operation band (36.7 \u2013 37 GHz) for a manufactured antenna prototype. Likewise, the top measured axial ratio is 1.95 dB at 36.85 GHz."}
{"_id": "44d859ba961c5ec3c3ebd5203b221dc596cea1c3", "title": "Defects of the nose, lip, and cheek: rebuilding the composite defect.", "text": "BACKGROUND\nThe face can be divided into regions (units) with characteristic skin quality, border outline, and three-dimensional contour. A defect may lie entirely within a single major unit or encompass several adjacent units, creating unique problems for repair. Composite defects overlap two or more facial units. Nasal defects often extend into the adjacent lip and cheek. The simplest solution may appear to be to simply \"fill the hole\"--just replace the missing bulk. Poor contour, incorrect dimension, malposition, asymmetry, poor blending into adjacent normal tissues, and a patch-like repair often follow.\n\n\nMETHODS\nThe following principles of regional unit repair were applied to guide these complex reconstructions: (1) reconstruct units, not defects; (2) alter the wound in site, size, shape, and depth; (3) consider using separate grafts and flaps for each unit and subunit, if appropriate; (4) use \"like\" tissue for \"like\" tissue; (5) restore a stable platform; (6) build in stages; (7) use distant tissue for \"invisible\" needs and local skin for resurfacing; and (8) disregard old scars.\n\n\nRESULTS\nClinical cases of increasing composite complexity were repaired with local, regional, and distant tissues. Excellent aesthetics and function were obtained.\n\n\nCONCLUSIONS\nCareful visual analysis of the normal face and the defect can direct the choice, timing, and technique of facial repair. Regional unit principles provide a coordinated approach to the vision, planning, and fabrication of these difficult wounds. The entire repair should be intellectually planned, designed step by step, and laid out in a series of coordinated steps, with general principles applied to successfully repair composite defects of the nose, lip, and cheek."}
{"_id": "09dd654a4007754a4551c624671468cfa5f08867", "title": "Nasal Duplication Combined with Cleft Lip and Palate: Surgical Correction and Long-Term Follow-Up", "text": "Background\nDiprosopus dirrhinus, or nasal duplication, is a rare entity of partial craniofacial duplication.\n\n\nMethods\nThe case we present is the first report of diprosopus dirrhinus associated with complete cleft lip and palate. The baby was born in Cambodia at full term by normal vaginal delivery with no significant perinatal and family history. Physical examination revealed significant facial deformity due to the duplicated nose and the left complete cleft lip/palate on the right subset.\n\n\nResults\nThere were 4 nostrils; both medial apertures including the cleft site were found to be 10-15\u2009mm deep cul-de-sac structures without communication to the nasopharynx. The upper third of the face was notable for hypertelorism with a duplication of the soft-tissue nasion and glabella. Between the 2 nasal dorsums, there was a small cutaneous depression with a lacrimal fistula in the midline. Surgical treatment included the first stage of primary lip and nose repair and the second stage of palatoplasty.\n\n\nConclusions\nThe patient was followed up at the age of 10 years showing satisfactory results for both aesthetic and functional aspects. Further management in the future will be required for the hypertelorism and nasal deformity."}
{"_id": "dab9687d30e0cb5a5b1563fb9df7e8594b1fa298", "title": "Compositionality and the angular gyrus: A multi-voxel similarity analysis of the semantic composition of nouns and verbs", "text": "The cognitive and neural systems that enable conceptual processing must support the ability to combine (and recombine) concepts to form an infinite number of ideas. Two candidate neural systems for conceptual combination-the left anterior temporal lobe (ATL) and the left angular gyrus (AG)-have been characterized as \"semantic hubs\" due to both functional and anatomical properties; however, these two regions likely support different aspects of composition. Here we consider two hypotheses for the role of AG in conceptual combination, both of which differ from a putative role for the ATL in \"feature-based\" combinatorics (i.e., meaning derived by combining concepts' features). Firstly, we examine whether AG is more sensitive to function-argument relations of the sort that arise when a predicate is combined with its arguments. Secondly, we examine the non-mutually exclusive possibility that AG represents information carried on a verb in particular, whether this be information about event composition or about thematic relations denoted uniquely by verbs. We identified voxels that respond differentially to two-word versus one-word stimuli, and we measured the similarity of the patterns in these voxels evoked by (1) pairs of two-word phrases that shared a noun that was an argument, thus sharing function-argument composition (e.g. eats meat and with meat), in comparison with two-word phrases that shared only a noun, not an argument (e.g., eats meat and tasty meat); and (2) stimulus pairs that shared only an event (operationalized here as sharing a verb; e.g. eats meat and eats quickly), in comparison to both of the above. We found that activity patterns in left AG tracked information relating to the presence of an event-denoting verb in a pair of two-word phrases. We also found that the neural similarity in AG voxel patterns between two phrases sharing a verb correlated with subjects' ratings of how similar the meanings of those two verb phrases were. These findings indicate that AG represents information specific to verbs, perhaps event structure or thematic relations mediated by verbs, as opposed to argument structure in general."}
{"_id": "e17879cf2bb858fbbc3a3aa2441287c53a0f684a", "title": "A review of trisomy X (47,XXX)", "text": "Trisomy X is a sex chromosome anomaly with a variable phenotype caused by the presence of an extra X chromosome in females (47,XXX instead of 46,XX). It is the most common female chromosomal abnormality, occurring in approximately 1 in 1,000 female births. As some individuals are only mildly affected or asymptomatic, it is estimated that only 10% of individuals with trisomy X are actually diagnosed. The most common physical features include tall stature, epicanthal folds, hypotonia and clinodactyly. Seizures, renal and genitourinary abnormalities, and premature ovarian failure (POF) can also be associated findings. Children with trisomy X have higher rates of motor and speech delays, with an increased risk of cognitive deficits and learning disabilities in the school-age years. Psychological features including attention deficits, mood disorders (anxiety and depression), and other psychological disorders are also more common than in the general population. Trisomy X most commonly occurs as a result of nondisjunction during meiosis, although postzygotic nondisjunction occurs in approximately 20% of cases. The risk of trisomy X increases with advanced maternal age. The phenotype in trisomy X is hypothesized to result from overexpression of genes that escape X-inactivation, but genotype-phenotype relationships remain to be defined. Diagnosis during the prenatal period by amniocentesis or chorionic villi sampling is common. Indications for postnatal diagnoses most commonly include developmental delays or hypotonia, learning disabilities, emotional or behavioral difficulties, or POF. Differential diagnosis prior to definitive karyotype results includes fragile X, tetrasomy X, pentasomy X, and Turner syndrome mosaicism. Genetic counseling is recommended. Patients diagnosed in the prenatal period should be followed closely for developmental delays so that early intervention therapies can be implemented as needed. School-age children and adolescents benefit from a psychological evaluation with an emphasis on identifying and developing an intervention plan for problems in cognitive/academic skills, language, and/or social-emotional development. Adolescents and adult women presenting with late menarche, menstrual irregularities, or fertility problems should be evaluated for POF. Patients should be referred to support organizations to receive individual and family support. The prognosis is variable, depending on the severity of the manifestations and on the quality and timing of treatment."}
{"_id": "11302309de95ba15e05ac6338a1042045012695f", "title": "Recognition and identification of the clothes in the photo or video using neural networks", "text": "We introduce our solution of the problem of clothing recognition and identification. The cause of taking convolutional neural network as a mathematic model for the further research was the result of an experimental survey of methods of keypoints detection and computing their descriptors. The convolutional neural network model was developed and trained on the Apparel Classification with Style clothing image dataset. The training image dataset was expanded with the help of augmentation methods. The computational experiment aimed at determination of neural network efficiency and stability proved its productivity. Moreover, we developed clothing color recognition, which let us perform clothing identification by 2 parameters: type and color."}
{"_id": "f472f0ffaae2efbbeb62373cf8d60a4f68b79045", "title": "A Survey: Smart agriculture IoT with cloud computing", "text": "IoT is a revolutionary technology that represents the future of computing and communications. Most of the people over all worlds depend on agriculture. Because of this reason smart IT technologies are needed to migrate with traditional agriculture methods. Using modern technologies can control the cost, maintenance and monitoring performance. Satellite and aerial imagery play a vital role in modern agriculture. Precision agriculture sensor monitoring network is used greatly to measure agri-related information like temperature, humidity, soil PH, soil nutrition levels, water level etc. so, with IoT farmers can remotely monitor their crop and equipment by phones and computers. In this paper, we surveyed some typical applications of Agriculture IoT Sensor Monitoring Network technologies using Cloud computing as the backbone. This survey is used to understand the different technologies and to build sustainable smart agriculture. Simple IoT agriculture model is addressed with a wireless network."}
{"_id": "5b42ed20a1a01cb8d097141303dfd8f7cf1ced10", "title": "Efficiently indexing shortest paths by exploiting symmetry in graphs", "text": "Shortest path queries (SPQ) are essential in many graph analysis and mining tasks. However, answering shortest path queries on-the-fly on large graphs is costly. To online answer shortest path queries, we may materialize and index shortest paths. However, a straightforward index of all shortest paths in a graph of N vertices takes O(N2) space. In this paper, we tackle the problem of indexing shortest paths and online answering shortest path queries. As many large real graphs are shown richly symmetric, the central idea of our approach is to use graph symmetry to reduce the index size while retaining the correctness and the efficiency of shortest path query answering. Technically, we develop a framework to index a large graph at the orbit level instead of the vertex level so that the number of breadth-first search trees materialized is reduced from O(N) to O(|\u0394|), where |\u0394| \u2264 N is the number of orbits in the graph. We explore orbit adjacency and local symmetry to obtain compact breadth-first-search trees (compact BFS-trees). An extensive empirical study using both synthetic data and real data shows that compact BFS-trees can be built efficiently and the space cost can be reduced substantially. Moreover, online shortest path query answering can be achieved using compact BFS-trees."}
{"_id": "a6cdb31e20b7a102c7c62b768caf717f9226cd37", "title": "Trends in Obesity Among Adults in the United States, 2005 to 2014.", "text": "IMPORTANCE\nBetween 1980 and 2000, the prevalence of obesity increased significantly among adult men and women in the United States; further significant increases were observed through 2003-2004 for men but not women. Subsequent comparisons of data from 2003-2004 with data through 2011-2012 showed no significant increases for men or women.\n\n\nOBJECTIVE\nTo examine obesity prevalence for 2013-2014 and trends over the decade from 2005 through 2014 adjusting for sex, age, race/Hispanic origin, smoking status, and education.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nAnalysis of data obtained from the National Health and Nutrition Examination Survey (NHANES), a cross-sectional, nationally representative health examination survey of the US civilian noninstitutionalized population that includes measured weight and height.\n\n\nEXPOSURES\nSurvey period.\n\n\nMAIN OUTCOMES AND MEASURES\nPrevalence of obesity (body mass index \u226530) and class 3 obesity (body mass index \u226540).\n\n\nRESULTS\nThis report is based on data from 2638 adult men (mean age, 46.8 years) and 2817 women (mean age, 48.4 years) from the most recent 2 years (2013-2014) of NHANES and data from 21,013 participants in previous NHANES surveys from 2005 through 2012. For the years 2013-2014, the overall age-adjusted prevalence of obesity was 37.7% (95% CI, 35.8%-39.7%); among men, it was 35.0% (95% CI, 32.8%-37.3%); and among women, it was 40.4% (95% CI, 37.6%-43.3%). The corresponding prevalence of class 3 obesity overall was 7.7% (95% CI, 6.2%-9.3%); among men, it was 5.5% (95% CI, 4.0%-7.2%); and among women, it was 9.9% (95% CI, 7.5%-12.3%). Analyses of changes over the decade from 2005 through 2014, adjusted for age, race/Hispanic origin, smoking status, and education, showed significant increasing linear trends among women for overall obesity (P\u2009=\u2009.004) and for class 3 obesity (P\u2009=\u2009.01) but not among men (P\u2009=\u2009.30 for overall obesity; P\u2009=\u2009.14 for class 3 obesity).\n\n\nCONCLUSIONS AND RELEVANCE\nIn this nationally representative survey of adults in the United States, the age-adjusted prevalence of obesity in 2013-2014 was 35.0% among men and 40.4% among women. The corresponding values for class 3 obesity were 5.5% for men and 9.9% for women. For women, the prevalence of overall obesity and of class 3 obesity showed significant linear trends for increase between 2005 and 2014; there were no significant trends for men. Other studies are needed to determine the reasons for these trends."}
{"_id": "de00833ee86381503a0433d21507ec046c7b31f2", "title": "Chipless RFID: Bar Code of the Future", "text": "Radio-frequency identification (RFID) is a wireless data capturing technique that utilizes radio frequency (RF) waves for automatic identification of objects. RFID relies on RF waves for data transmission between the data carrying device, called the RFID tag, and the interrogator."}
{"_id": "15ccced6040f1c65304cb3e18757eecc764fdeab", "title": "Electronically Reconfigurable Microwave Bandpass Filter", "text": "A new class of reconfigurable bandpass filter based on parallel-coupled switched delay line approach is presented in this paper. The approach allows both center frequency and passband bandwidth to be reconfigured electronically. In contrast to conventional approach, the effects of the switch losses are relatively insignificant and can be treated as an auxiliary effect and therefore the filter has low passband loss. Despite the loss performance, the filter provides excellent linearity performance with large power handling capability. Theoretical analysis is presented and the feasibility of the approach has been experimentally verified with microstrip circuit prototypes. The first filter prototype provides narrowband quasi-elliptic filter response with constant bandwidth (50-MHz) tuning across 45% tuning range. The minimum stopband rejection level is 45 dB. The second filter possesses passband-width 3:1 ratio tunability. The third filter provides both functionalities which allow center frequency and passband bandwidth to be reconfigured. All filters have a < 4-dB passband loss with an OIP3 of >20 dBm and are immune from power saturation effect. The measured results demonstrate improved performance in terms of linear and nonlinear characteristic compared with prior works."}
{"_id": "18fa3a6835455caec9fc2f7377d6a1654d995e04", "title": "Fraud : The Affinity of Classification Techniques to Insurance Fraud Detection", "text": "Quite a large number of data mining techniques employed in financial fraud detection (FFD) are seen to be classification techniques. In this paper, we developed an algorithm to find the features of classification techniques (or method) that so much place it (classification techniques) in the heart of researchers in their various efforts in the study of insurance frauds detection. We also got to know the characteristics of insurance frauds data that made data mining classification techniques so much attracted to it (insurance data)."}
{"_id": "53d291cbdd2f5f765fb212d74c09e96e84c1a8ce", "title": "Practical steganalysis of digital images: state of the art", "text": "Steganography is the art of hiding the very presence of communication by embedding secret messages into innocuous looking cover documents, such as digital images. Detection of steganography, estimation of message length, and its extraction belong to the field of steganalysis. Steganalysis has recently received a great deal of attention both from law enforcement and the media. In our paper, we classify and review current stego-detection algorithms that can be used to trace popular steganographic products. We recognize several qualitatively different approaches to practical steganalysis \u2013 visual detection, detection based on first order statistics (histogram analysis), dual statistics methods that use spatial correlations in images and higher-order statistics (RS steganalysis), universal blind detection schemes, and special cases, such as JPEG compatibility steganalysis. We also present some new results regarding our previously proposed detection of LSB embedding using sensitive dual statistics. The recent steganalytic methods indicate that the most common paradigm in image steganography \u2013 the bit-replacement or bit substitution \u2013 is inherently insecure with \u201csafe capacities\u201d far smaller than previously thought."}
{"_id": "140ae0297fe969474258f2b332deb66853c4d386", "title": "Topic-aware Social Influence Minimization", "text": "In this paper, we address the problem of minimizing the negative influence of undesirable things in a network by blocking a limited number of nodes from a topic modeling perspective. When undesirable thing such as a rumor or an infection emerges in a social network and part of users have already been infected, our goal is to minimize the size of ultimately infected users by blocking $k$ nodes outside the infected set. We first employ the HDP-LDA and KL divergence to analysis the influence and relevance from a topic modeling perspective. Then two topic-aware heuristics based on betweenness and out-degree for finding approximate solutions to this problem are proposed. Using two real networks, we demonstrate experimentally the high performance of the proposed models and learning schemes."}
{"_id": "bc74b894d3769d5747368fc4eab2e885823e0001", "title": "Value creation in web services: An integrative model", "text": "Web services are redefining the way organizations exchange business-critical information internally and externally with customers, suppliers, and other business associates. In this paper, we develop an integrative model for understanding value creation in web services from a provider\u2019s perspective. The model integrates the static representation of conventional business values with the fluidity of the emergent IT services domain. It captures the complexity and contradictions facing Web services providers in their drive towards market leadership, strategic differentiation and revenue generation from web services. The model comprises twelve propositions to guide our understanding and future research and practice in this increasingly important segment in the IT field. 2005 Elsevier B.V. All rights reserved."}
{"_id": "c979df0f9902669944edf8e824cee5f468db3f86", "title": "DeepScan: Exploiting Deep Learning for Malicious Account Detection in Location-Based Social Networks", "text": "Our daily lives have been immersed in widespread location-based social networks (LBSNs). As an open platform, LBSNs typically allow all kinds of users to register accounts. Malicious attackers can easily join and post misleading information, often with the intention of influencing users' decisions in urban computing environments. To provide reliable information and improve the experience for legitimate users, we design and implement DeepScan, a malicious account detection system for LBSNs. Different from existing approaches, DeepScan leverages emerging deep learning technologies to learn users' dynamic behavior. In particular, we introduce the long short-term memory (LSTM) neural network to conduct time series analysis of user activities. DeepScan combines newly introduced time series features and a set of conventional features extracted from user activities, and exploits a supervised machine-learning-based model for detection. Using real traces collected from Dianping, a representative LBSN, we demonstrate that DeepScan can achieve excellent prediction performance with an F1-score of 0.964. We also find that the time series features play a critical role in the detection system."}
{"_id": "e8193d26030f97c0725ec15f1096e08d85b838ab", "title": "Application of electrostatic separation to the recycling of plastic wastes: separation of PVC, PET, and ABS.", "text": "Plastics are widely used in everyday life as a useful material, and thus their consumption is growing at a rate of about 5% per year in Korea. However, the constant generation of plastic wastes and their disposal generates environmental problems along with economic loss. In particular, mixed waste plastics are difficult to recycle because of their inferior characteristics. A laboratory-scale triboelectrostatic separator unit has been designed and assembled for this study. On the basis of the control of electrostatic charge, the separation of three kinds of mixed plastics, polyvinyl chloride (PVC), poly(ethylene terephthalate) (PET), and acrylonitrile-butadiene-styrene (ABS), in a range of similar gravities has been performed through a two-stage separation process. Polypropylene (PP) and high-impact polystyrene (HIPS) were found to be the most effective materials for a tribo-charger in the separation of PVC, PET, and ABS. The charge-to-mass ratio (nC/g) of plastics increased with increasing air velocity in the tribo charger. In the first stage, using the PP cyclone charger, the separation efficiency of particles considerably depended on the air velocity (10 m/s), the relative humidity (< 30%), the electrode potential (> 20 kV), and the splitter position (+2 cm from the center) in the triboelelctrostatic separator unit. At this time, a PVC grade of 99.40% and a recovery of 98.10% have successfully been achieved. In the second stage, using the HIPS cyclone charger, a PET grade of 97.80% and a recovery of 95.12% could be obtained under conditions of 10 m/s, over 25 kV, a central splitter position, and less than 40% relative humidity. In order to obtain 99.9% PVC grade and 99.3% PET grade, their recoveries should be sacrificed by 20.9% and 27%, respectively, with moving the splitter from the center to a (+)6 cm position."}
{"_id": "16d8b3c370f7cce6c73c65665a83303726dda6d8", "title": "Reconsidering Research on Learning from Media", "text": "Recent meta-analyses and other studies of media's influence on learning are reviewed, Consistent evidence is found for the generalization that there are no learning benefits to be gained from employing any specific medium to deliver instruction. Research showing performance or timesaving gains from one or another medium are shown to be vulnerable to compelling rival hypotheses concerning the uncontrolled effects of instructional method and novelty. Problems with current media attribute and symbol system theories are described and suggestions made for more promising research directions. Studies of the influence of media on learning have ken a fixed feature of educational research since Thorndike (1912) recommended pictures as a laborsaving device in instruction, Most of this research is buttressed by the hope that learning will be enhanced with the proper mix of medium, student, subject matter content, and learning task. A typical study compares the relative achievement of groups who have received similar subject matter from different media. This research has led to so-called 'media selection\" schemes or models 1982). These models generally promise to incorporate existing research and practice into procedures for selecting the best medium or mix of media to deliver instruction. Most of these models base many of their prescriptions on presumed learning benefits from media (Jamison, Suppes, & Welles, 1974). However, this article will argue that most current summaries and meta-analyses of media comparison studies clearly suggest that media do not influence learning under any conditions. Even in the few cases where dramatic changes in achievement or ability have followed the introduction of a medium, as was the case with television in El Salvador (Schramm, 1977), it was not the medium that caused the change but rather a curricular reform that accompanied the change. The best current evidence is that media are mere vehicles that deliver instruction but do not influence student achievement any more than the truck that delivers our groceries causes changes in our nutrition. Basically, the choice of vehicle might influence the cost or extent of distributing instruction, but only the content of the vehicle can influence achievement. While research often shows a slight learning advantage for newer media over more conventional instructional vehicles, this advantage will be __________________ The author wishes to acknowledge the substantive advice of Gavriel Salomon, William Winn, and anonymous reviewers without making them responsible for errors. ___________ p 445 shown to be vulnerable to compelling rival hypotheses. Among these rival explanations is evidence of artifact and confounding in existing studies and biased editorial decisions which may favor research showing larger effect sizes for newer media. After summarizing evidence from current meta-analyses of media research, I will discuss advantages and problems with current \"media attribute\" and 'symbol system\" theories and will conclude by suggesting tentative solutions to past problems and future directions for research involving media. Media Comparison Studies In the 1960s, Lumsdaine (1963) and others (e.g,, Mielke, 1968) argued that gross media comparison and selection studies might not pay off. 
They implied that media, when viewed as collections of mechanical instruments, such as television and computers, were sample delivery devices. Nevertheless, earlier reviewers also held the door open to learning effects from media by attributing much of the lack of significance in prior research to poor design and lack of adequate models or theory. Lumsdaine (1963) dealt primarily with adequate studies that had used defensible methodology, and had found significant differences between treatments. With the benefit of hindsight it is not surprising that most of the studies he selected for review employed media as simple vehicles for instructional methods, such as text organization, size of step in programming, cueing, repeated exposures, and prompting. These studies compared the effects of. for example, different step size in programmed instruction via television. It was step size (and other methods), not television (or other media), which were the focus of thew studies. This is an example of what Salomon and Clark (1977) called research with media. In these studies media are mere conveyances for the treatments being examined and are not the focus of the study, though the results are often mistakenly interpreted as suggesting benefits for various media. An example of instructional research with media would be a study which contrasted a logically organized audiotutorial lesson on photosynthesis with a randomly sequenced presentation of the same frames (cf. Clark & Snow, 1975; Salomon & Clark, 1977, for a review of similar studies). Perhaps as a result of this confusion, Lumsdaine (1963) reached few conclusions beyond the suggestion that media might reduce the cost of instruction when many students are served because \u201cthe cost of perfecting it can be prorated in terms of a denominator representing thousands of students\" (p. 670). A decade later, Glaser and Cooley (1973) and Levie and Dickie (1973) were cautious about media comparison studies, which apparently were still being conducted in large numbers. Glaser and Cooley (1973) recommended using acceptable medium as \"a vehicle for making available to schools what psychologists have learned about learning\" (p. 855). Levie and Dickie (1973) noted that most media comparison studies to that date had been fruitless and suggested that objectives can be attained through \"instruction presented by any of a variety of different media\" (p. 859). At that time televised education was still a lively topic and studies of computerized instruction were just beginning to appear. During the past decade, television research seems to have diminished considerably, but computer learning studies are now popular. This current h belongs to the familiar but generally fruitless media comparison approach or is concerned _____________ p 446 with different contents or methods being presented via different media (e.g., science teaching via computers). Generally, each new medium seems to attract its own set of advocates who make claims for improved learning and stimulate research questions which are similar to those asked about the previously popular medium. Most of the radio research approaches suggested in the 1950s (e.g., Hovland, Lumsdaine, & Sheffield, 1949) were very similar to those employed by the television movement of the 1960s (e.g., Schramm, 1977) and to the more recent reports of the computer-assisted instruction studies of the 1970s and 1980s (e.g., Dixon & Judd, 1977). 
It seems that similar research questions have resulted in similar and ambiguous data. Media comparison studies, regardless of the media employed, tend to result i n \"no significant difference\" conclusions (Mielke, 1968). These findings were incorrectly offered-as-evidence that different media were \u201cequally effective\u201d as conventional means in promoting learning. No significant difference results simply suggest that changes in the outcome scores (e.g., learning) did not result from any systematic differences in the treatments compared. Occasionally a study would find evidence for one or another medium. When this happens, Mielke (1968) has suggested that the active ingredient might be some uncontrolled aspect of the content or instructional strategy rather than the medium. When we investigate these positive studies, we find that the treatments are confounded. The evidence for this confounding may be found in the current meta-analyses of media comparison studies. The next section argues that it is the uncontrolled effects of novelty and instructional method which account for the existing evidence for the effects of various media on learning gains. Reviews and Meta-analyses of Media Research One of the most interesting trends in the past decade has been a significant increase in the number of excellent reviews and meta-analyses of research comparing the learning advantage of various media. The results of these overviews of past comparison studies seem to be reasonably unambiguous and unanimous. Taken together, they provide strong evidence that media comparison studies that find causal connections between media and achievement are confounded. Size of Effect of Media Treatments A recent series of meta-analyses of media comparison studies have been conducted by James Kulik and his colleagues at the University of Michigan (Cohen, Ebling & Kulik, 1981; C. Kulik, Kulik, & Cohen, 1980; J. Kulik, Bangert, & Williams, 1983; J. Kulik, Kulik, & Cohen, 1979). These reviews employ the relatively new technology of meta-analysis (Glass, 1976), which provides more precise estimates of the effect size of various media treatments than were possible a few years ago. Previous reviews dealing primarily with \"box score\" sums of significant findings for media versus conventional instructional delivery were sometimes misleading. Effect size estimates often were expressed in portions of standard score advantages for one or another type of treatment. This discussion will express effects in one of two ways: (a) the number of standard deviations separating experimental and control groups, and (b) as improvements in percent) scores on a final examination. ____________ p 447 Box Scores versus Effect Size An illustration of the advantage of meta-analytical effect size descriptions of past research over \"box scores\" is available in a recent review of Postlethwait's audiotutorial instruction studies (J. Kulik, Kulik, & Cohen,, 1979). The authors found 42 adequate studies, of which 29 favored audiotutorial instruction and only 13 favored conventional instruction. Of those 42, only 15 reported significant differences, but 11 of the 15 favored audiotutorial and only 4 favored conventional instruction. This type of box"}
{"_id": "7c46867d278dd5d27ce9b2fcc0ae81234c5a7c33", "title": "Environmental Detectives--The Development of an Augmented Reality Platform for Environmental Simulations", "text": "This research was supported with a grant from Microsoft MIT iCampus as a part of the Games-to-Teach Project. The authors would like to thank the PIs of the Games-to-Teach Project, Randy Hinrichs at Microsoft Research and Henry Jenkins, Director MIT Comparative Media Studies for their support of this project, as well as Kodjo Hesse, Gunnar Harboe, and Walter Holland for their hard work in the development of Environmental Detectives."}
{"_id": "c4977ec92de2401f0ecf5ed4bdde60d1d7b12d6b", "title": "Social constructivist perspectives on teaching and learning.", "text": "Social constructivist perspectives focus on the interdependence of social and individual processes in the co-construction of knowledge. After the impetus for understanding the influence of social and cultural factors on cognition is reviewed, mechanisms hypothesized to account for learning from this perspective are identified, drawing from Piagetian and Vygotskian accounts. The empirical research reviewed illustrates (a) the application of institutional analyses to investigate schooling as a cultural process, (b) the application of interpersonal analyses to examine how interactions promote cognition and learning, and (c) discursive analyses examining and manipulating the patterns and opportunities in instructional conversation. The review concludes with a discussion of the application of this perspective to selected contemporary issues, including: acquiring expertise across domains, assessment, educational equity, and educational reform."}
{"_id": "ff1764920c3a523fe4219f490f247669e9703110", "title": "What video games have to teach us about learning and literacy", "text": "Good computer and video games like System Shock 2, Deus Ex, Pikmin, Rise of Nations, Neverwinter Nights, and Xenosaga: Episode 1 are learning machines. They get themselves learned and learned well, so that they get played long and hard by a great many people. This is how they and their designers survive and perpetuate themselves. If a game cannot be learned and even mastered at a certain level, it won't get played by enough people, and the company that makes it will go broke. Good learning in games is a capitalist-driven Darwinian process of selection of the fittest. Of course, game designers could have solved their learning problems by making games shorter and easier, by dumbing them down, so to speak. But most gamers don't want short and easy games. Thus, designers face and largely solve an intriguing educational dilemma, one also faced by schools and workplaces: how to get people, often young people, to learn and master something that is long and challenging--and enjoy it, to boot."}
{"_id": "8abf8825026792f254d5fd552b6d4840e8849c26", "title": "Participatory Simulations: Building Collaborative Understanding Through Immersive Dynamic Modeling", "text": "The Participatory Simulations Project explores how a new kind of collaborative learning environment, which is supported by small, wearable computers, can facilitate collaborative theory-building and lead to a richer understanding of scientific experimentation. In a Participatory Simulation, participants become \"agents\" in a full-scale simulation. Unlike previous work, Participatory Simulations combines the notion of a microworld with the affordances of real world experience. By involving a large number of people (typically between 15 and 30) in a \"life-sized\" experience, the project brings a microworld off of the computer screen and into the physical world. In particular, this study explored the use of a series of Participatory Simulations in a high school Biology class. The students became \"agents\" in a simulation of a disease infecting their community. The results of this project show that Participatory Simulations fully engage participants in the simulation, facilitate collaborative problem solving, provide a substrate for collaboratively designing and running experiments, and support the definition of new vocabulary to discuss the underlying rules of the simulation. Thesis Advisor: Mitchel Resnick Associate Professor of Media Arts and Sciences Fukutake Career Development Professor of Research in Education This research was sponsored by the National Science Foundation (grant # CDA-9616444), the Lego Group, and the Things That Think and Digital Life consortia. Participatory Simulations: Building Collaborative Understanding through Immersive Dynamic Modeling"}
{"_id": "9062571f97af54f8360a7d8110e7ac3b74f3f04a", "title": "Predicting the Success of Online Petitions Leveraging Multidimensional Time-Series", "text": "Applying classical time-series analysis techniques to online content is challenging, as web data tends to have data quality issues and is often incomplete, noisy, or poorly aligned. In this paper, we tackle the problem of predicting the evolution of a time series of user activity on the web in a manner that is both accurate and interpretable, using related time series to produce a more accurate prediction. We test our methods in the context of predicting signatures for online petitions using data from thousands of petitions posted on The Petition Site\u2014one of the largest platforms of its kind. We observe that the success of these petitions is driven by a number of factors, including promotion through social media channels and on the front page of the petitions platform. We propose an interpretable model that incorporates seasonality, aging effects, self-excitation, and external effects. The interpretability of the model is important for understanding the elements that drives the activity of an online content. We show through an extensive empirical evaluation that our model is significantly better at predicting the outcome of a petition than state-of-the-art techniques."}
{"_id": "7d6b3286cf05de6924768cf8d347af73df3f25d6", "title": "A spatio-temporal reference model of the aging brain", "text": "Both normal aging and neurodegenerative disorders such as Alzheimer's disease (AD) cause morphological changes of the brain. It is generally difficult to distinguish these two causes of morphological change by visual inspection of magnetic resonance (MR) images. To facilitate making this distinction and thus aid the diagnosis of neurodegenerative disorders, we propose a method for developing a spatio-temporal model of morphological differences in the brain due to normal aging. The method utilizes groupwise image registration to characterize morphological variation across brain scans of people with different ages. To extract the deformations that are due to normal aging we use partial least squares regression, which yields modes of deformations highly correlated with age, and corresponding scores for each input subject. Subsequently, we determine a distribution of morphologies as a function of age by fitting smooth percentile curves to these scores. This distribution is used as a reference to which a person's morphology score can be compared. We validate our method on two different datasets, using images from both cognitively normal subjects and patients with Alzheimer disease (AD). Results show that the proposed framework extracts the expected atrophy patterns. Moreover, the morphology scores of cognitively normal subjects are on average lower than the scores of AD subjects, indicating that morphology differences between AD subjects and healthy subjects can be partly explained by accelerated aging. With our methods we are able to assess accelerated brain aging on both population and individual level. A spatio-temporal aging brain model derived from 988 T1-weighted MR brain scans from a large population imaging study (age range 45.9-91.7y, mean age 68.3y) is made publicly available at www.agingbrain.nl."}
{"_id": "4415838adb15f81b712c336fd9839dcafc9b6863", "title": "Autonomous website categorization with pre-defined dictionary", "text": "In this technology emerging era, the number of websites is increasing dramatically. The content and category of information are overflowing the Internet World. Finding the right information from almost a billion of websites is considerably hard, but finding the accurate and quality one is even harder. Hence, the need of website categorization's demand is increasing tremendously. Unfortunately, the website categorization techniques in previous works are still immature and not good enough to satisfy the need. Additionally, a training dataset is a limitation of supervised learning algorithm and unsupervised learning algorithm also have a complex algorithm. Regularly, they can categorize into only 1 category but the content usually contains various types. Therefore, in this paper, we propose the simple yet powerful algorithm for website categorization which can give a multi-category results with confidence level, distributed systems supported, and does not even need to be trained because the algorithm uses word frequency in the content of each website to match with the categories in a pre-defined dictionary. The result shows that the accuracy of our proposed algorithm is over 95% when tested with Reuters dataset. The comparison of our algorithm and another Text Analysis API shows that our algorithm has more accuracy with less computation time. The accuracy can also be increased by improving the pre-defined dictionary and filtering noise words."}
{"_id": "9864575864bc497e48569a6604180a64378b95c1", "title": "FPGA Vendor Agnostic True Random Number Generator", "text": "This paper describes a solution for the generation of true random numbers in a purely digital fashion; making it suitable for any FPGA type, because no FPGA vendor specific features (e.g., like phase-locked loop) or external analog components are required. Our solution is based on a framework for a provable secure true random number generator recently proposed by Sunar, Martin and Stinson. It uses a large amount of ring oscillators with identical ring lengths as a fast noise source - but with some deterministic bits - and eliminates the non-random samples by appropriate post-processing based on resilient functions. This results in a slower bit stream with high entropy. Our FPGA implementation achieves a random bit throughput of more than 2 Mbps, remains fairly compact (needing minimally 110 ring oscillators of 3 inverters) and is highly portable"}
{"_id": "48a38c7e87e00782a58b4c191c3931030af45d0a", "title": "Algorithmic Bayesian persuasion", "text": "Persuasion, defined as the act of exploiting an informational advantage in order to effect the decisions of others, is ubiquitous. Indeed, persuasive communication has been estimated to account for almost a third of all economic activity in the US. This paper examines persuasion through a computational lens, focusing on what is perhaps the most basic and fundamental model in this space: the celebrated Bayesian persuasion model of Kamenica and Gentzkow. Here there are two players, a sender and a receiver. The receiver must take one of a number of actions with a-priori unknown payoff, and the sender has access to additional information regarding the payoffs of the various actions for both players. The sender can commit to revealing a noisy signal regarding the realization of the payoffs of various actions, and would like to do so as to maximize her own payoff in expectation assuming that the receiver rationally acts to maximize his own payoff. When the payoffs of various actions follow a joint distribution (the common prior), the sender's problem is nontrivial, and its computational complexity depends on the representation of this prior. We examine the sender's optimization task in three of the most natural input models for this problem, and essentially pin down its computational complexity in each. When the payoff distributions of the different actions are i.i.d. and given explicitly, we exhibit a polynomial-time (exact) algorithmic solution, and a ``simple'' (1-1/e)-approximation algorithm. Our optimal scheme for the i.i.d. setting involves an analogy to auction theory, and makes use of Border's characterization of the space of reduced-forms for single-item auctions. When action payoffs are independent but non-identical with marginal distributions given explicitly, we show that it is #P-hard to compute the optimal expected sender utility. In doing so, we rule out a generalized Border's theorem, as defined by Gopalan et al, for this setting. Finally, we consider a general (possibly correlated) joint distribution of action payoffs presented by a black box sampling oracle, and exhibit a fully polynomial-time approximation scheme (FPTAS) with a bi-criteria guarantee. Our FPTAS is based on Monte-Carlo sampling, and its analysis relies on the principle of deferred decisions. Moreover, we show that this result is the best possible in the black-box model for information-theoretic reasons."}
{"_id": "bc437f63023a1791a4f09672a10f5e96f6e3cb65", "title": "Functional genomic analysis of the AUXIN RESPONSE FACTOR gene family members in Arabidopsis thaliana: unique and overlapping functions of ARF7 and ARF19.", "text": "The AUXIN RESPONSE FACTOR (ARF) gene family products, together with the AUXIN/INDOLE-3-ACETIC ACID proteins, regulate auxin-mediated transcriptional activation/repression. The biological function(s) of most ARFs is poorly understood. Here, we report the identification and characterization of T-DNA insertion lines for 18 of the 23 ARF gene family members in Arabidopsis thaliana. Most of the lines fail to show an obvious growth phenotype except of the previously identified arf2/hss, arf3/ett, arf5/mp, and arf7/nph4 mutants, suggesting that there are functional redundancies among the ARF proteins. Subsequently, we generated double mutants. arf7 arf19 has a strong auxin-related phenotype not observed in the arf7 and arf19 single mutants, including severely impaired lateral root formation and abnormal gravitropism in both hypocotyl and root. Global gene expression analysis revealed that auxin-induced gene expression is severely impaired in the arf7 single and arf7 arf19 double mutants. For example, the expression of several genes, such as those encoding members of LATERAL ORGAN BOUNDARIES domain proteins and AUXIN-REGULATED GENE INVOLVED IN ORGAN SIZE, are disrupted in the double mutant. The data suggest that the ARF7 and ARF19 proteins play essential roles in auxin-mediated plant development by regulating both unique and partially overlapping sets of target genes. These observations provide molecular insight into the unique and overlapping functions of ARF gene family members in Arabidopsis."}
{"_id": "239be6b3d6f8305ca43b5828118ad2d71385c660", "title": "Design of 4-Bit Universal Shift Register Using Reversible Gates", "text": "Power dissipation is one of the important factor in digital circuit design. Landauer's principle states that logic computations which are not reversible necessarily generate KT*ln2 Joules of heat energy for every bit of information that is lost, where k is Boltzmann's constant and T the absolute temperature at which computation is performed. Reversibility is the property of digital circuits in which there is one-to-one mapping between the inputs and the output vectors that is for each input vector there is a unique output vector and vice-versa. Thus one of the primary motivations for adopting reversible logic lies in the fact that it can provide a logic design methodologies for designing low power dissipating circuits. Fredkin gate, Feynman Gate is popularly used reversible logic gate which are used to implement various sequential circuits which are basic for design of digital circuits."}
{"_id": "a3d11e98794896849ab2304a42bf83e2979e5fb5", "title": "In Defense of the Triplet Loss for Person Re-Identification", "text": "In the past few years, the field of computer vision has gone through a revolution fueled mainly by the advent of large datasets and the adoption of deep convolutional neural networks for end-to-end learning. The person reidentification subfield is no exception to this, thanks to the notable publication of the Market-1501 and MARS datasets and several strong deep learning approaches. Unfortunately, a prevailing belief in the community seems to be that the triplet loss is inferior to using surrogate losses (classification, verification) followed by a separate metric learning step. We show that, for models trained from scratch as well as pretrained ones, using a variant of the triplet loss to perform end-to-end deep metric learning outperforms any other published method by a large margin."}
{"_id": "4c1909b5aab605951c1bef30c45f20511442c18a", "title": "3D Simulation for Robot Arm Control with Deep Q-Learning", "text": "Intelligent control of robotic arms has huge potential over the coming years, but as of now will often fail to adapt when presented with new and unfamiliar environments. Recent trends to solve this problem have seen a shift to endto-end solutions using deep reinforcement learning to learn policies from visual input, rather than relying on a handcrafted, modular pipeline. Building upon the recent success of deep Q-networks, we present an approach which uses threedimensional simulations to train a 7-DOF robotic arm in a robot arm control task without any prior knowledge. Policies accept images of the environment as input and output motor actions. However, the high-dimensionality of the policies as well as the large state space makes policy search difficult. This is overcome by ensuring interesting states are explored via intermediate rewards that guide the policy towards higher reward states. Our results demonstrate that deep Q-networks can be used to learn policies for a task that involves locating a cube, grasping, and then finally lifting. The agent is able to learn to deal with a range of starting joint configurations and starting cube positions when tested in simulation. Moreover, we show that policies trained via simulation have the potential to be directly applied to real-world equivalents without any further training. We believe that robot simulations can decrease the dependency on physical robots and ultimately improve productivity of training robot control tasks."}
{"_id": "1bd13095dd38a31ad9089d067134e71eb69bae83", "title": "SARL: A General-Purpose Agent-Oriented Programming Language", "text": "Complex software systems development require appropriate high-level features to better and easily tackle the new requirements in terms of interactions, concurrency and distribution. This requires a paradigm change in software engineering and corresponding programming languages. We are convinced that agent-oriented programming may be the support for this change by focusing on a small corpus of commonly accepted concepts and the corresponding programming language in line with the current developers' programming practices. This papers introduces SARL, a new general-purpose agent-oriented programming language undertaking this challenge. SARL comes with its full support in the Eclipse IDE for compilation and debugging, and a new version 2.0 of the Janus platform for execution purposes. The main perspective that guided the creation of SARL is the establishment of an open and easily extensible language. Our expectation is to provide the community with a common forum in terms of a first working test bed to study and compare various programming alternatives and associated metamodels."}
{"_id": "4393d36d36174aa255008c230baa0ab58d19f29d", "title": "On the Impossibility of Basing Identity Based Encryption on Trapdoor Permutations", "text": "We ask whether an identity based encryption (IBE) system can be built from simpler public-key primitives. We show that there is no black-box construction of IBE from trapdoor permutations (TDP) or even from chosen ciphertext secure public key encryption (CCA-PKE). These black-box separation results are based on an essential property of IBE, namely that an IBE system is able to compress exponentially many public-keys into a short public parameters string."}
{"_id": "118d375740582c3969efbbf3a785281a3d529d7a", "title": "Discovery of convoys in trajectory databases", "text": "As mobile devices with positioning capabilities continue to proliferate, data management for so-called trajectory databases that capture the historical movements of populations of moving objects becomes important. This paper considers the querying of such databases for convoys, a convoy being a group of objects that have traveled together for some time. More specifically, this paper formalizes the concept of a convoy query using density-based notions, in order to capture groups of arbitrary extents and shapes. Convoy discovery is relevant for reallife applications in throughput planning of trucks and carpooling of vehicles. Although there has been extensive research on trajectories in the literature, none of this can be applied to retrieve correctly exact convoy result sets. Motivated by this, we develop three efficient algorithms for convoy discovery that adopt the wellknown filter-refinement framework. In the filter step, we apply linesimplification techniques on the trajectories and establish distance bounds between the simplified trajectories. This permits efficient convoy discovery over the simplified trajectories without missing any actual convoys. In the refinement step, the candidate convoys are further processed to obtain the actual convoys. Our comprehensive empirical study offers insight into the properties of the paper\u2019s proposals and demonstrates that the proposals are effective and efficient on real-world trajectory data."}
{"_id": "8c29335e0f564b86178deb5b15d3224d10ca1d47", "title": "Time scales in motor learning and development.", "text": "A theoretical framework based on the concepts and tools of nonlinear dynamical systems is advanced to account for both the persistent and transitory changes traditionally shown for the learning and development of motor skills. The multiple time scales of change in task outcome over time are interpreted as originating from the system's trajectory on an evolving attractor landscape. Different bifurcations between attractor organizations and transient phenomena can lead to exponential, power law, or S-shaped learning curves. This unified dynamical account of the functions and time scales in motor learning and development offers several new hypotheses for future research on the nature of change in learning theory."}
{"_id": "87e4178f71990818a3c125a41db91749bba17cc2", "title": "Structured Time Series Analysis for Human Action Segmentation and Recognition", "text": "We address the problem of structure learning of human motion in order to recognize actions from a continuous monocular motion sequence of an arbitrary person from an arbitrary viewpoint. Human motion sequences are represented by multivariate time series in the joint-trajectories space. Under this structured time series framework, we first propose Kernelized Temporal Cut (KTC), an extension of previous works on change-point detection by incorporating Hilbert space embedding of distributions, to handle the nonparametric and high dimensionality issues of human motions. Experimental results demonstrate the effectiveness of our approach, which yields realtime segmentation, and produces high action segmentation accuracy. Second, a spatio-temporal manifold framework is proposed to model the latent structure of time series data. Then an efficient spatio-temporal alignment algorithm Dynamic Manifold Warping (DMW) is proposed for multivariate time series to calculate motion similarity between action sequences (segments). Furthermore, by combining the temporal segmentation algorithm and the alignment algorithm, online human action recognition can be performed by associating a few labeled examples from motion capture data. The results on human motion capture data and 3D depth sensor data demonstrate the effectiveness of the proposed approach in automatically segmenting and recognizing motion sequences, and its ability to handle noisy and partially occluded data, in the transfer learning module."}
{"_id": "d3462e9d394610fbe18c18b3916f1cdb0fa8e8fb", "title": "Analyzing Hypersensitive AI: Instability in Corporate-Scale Machine Learning", "text": "Predictive geometric models deliver excellent results for many Machine Learning use cases. Despite their undoubted performance, neural predictive algorithms can show unexpected degrees of instability and variance, particularly when applied to large datasets. We present an approach to measure changes in geometric models with respect to both output consistency and topological stability. Considering the example of a recommender system using word2vec, we analyze the influence of single data points, approximation methods and parameter settings. Our findings can help to stabilize models where needed and to detect differences in informational value of data points on a large scale."}
{"_id": "562f96fb2e3316dbdf70b4a9dba4e95597e29ad5", "title": "Comprehensive Evaluation of Deep Learning Architectures for Prediction of DNA/RNA Sequence Binding Specificities", "text": "Motivation: Deep learning architectures have recently demonstrated their power in predicting DNAand RNA-binding specificities. Existing methods fall into three classes: Some are based on Convolutional Neural Networks (CNNs), others use Recurrent Neural Networks (RNNs), and others rely on hybrid architectures combining CNNs and RNNs. However, based on existing studies it is still unclear which deep learning architecture is achieving the best performance. Thus an in-depth analysis and evaluation of the different methods is needed to fully evaluate their relative. Results: In this study, We present a systematic exploration of various deep learning architectures for predicting DNAand RNA-binding specificities. For this purpose, we present deepRAM, an end-to-end deep learning tool that provides an implementation of novel and previously proposed architectures; its fully automatic model selection procedure allows us to perform a fair and unbiased comparison of deep learning architectures. We find that an architecture that uses k-mer embedding to represent the sequence, a convolutional layer and a recurrent layer, outperforms all other methods in terms of model accuracy. Our work provides guidelines that will assist the practitioner in choosing the best architecture for the task at hand, and provides some insights on the differences between the models learned by convolutional and recurrent networks. In particular, we find that although recurrent networks improve model accuracy, this comes at the expense of a loss in the interpretability of the features learned by the model. Availability and implementation: The source code for deepRAM is available at https://github.com/MedChaabane/deepRAM Contact: asa@cs.colostate.edu Supplementary information: Supplementary data are available at Bioinformatics online."}
{"_id": "6e54fc48e0e17c0115f77f7c0024a40216c8c376", "title": "Educating the Disagreeable Extravert : Narcissism , the Big Five Personality Traits , and Achievement Goal Orientation", "text": "Despite the fact that longitudinal data have been compiled over the past 30 years among undergraduate students in higher education settings regarding narcissism, the literature is devoid of empirical investigations that explore the relationships between narcissism and learning. Because the data suggest that narcissism scores are increasing each year among this population, an exploration of the relationship between narcissism and learning is timely and warranted. Sampling from university undergraduate students, this study uses the Narcissistic Personality Inventory, the Big Five Inventory, and the Achievement Goal Questionnaire to verify the known relationships between narcissism and the Big Five personality traits of extraversion and agreeableness; to verify the known relationships between the Big Five personality traits of extraversion and agreeableness and goal orientation; and to explore a previously undocumented empirical relationship between narcissism and performance goal orientation. Results of this exploratory study indicate that while narcissism does contribute to a performance goal orientation, it is not a substantial variable in determining achievement goal orientation in general. The study addresses the implications and limitations of this research in addition to areas for additional investigation."}
{"_id": "fa7b4843c4d8cf1ad12a4f1f1cf29bc145ccd148", "title": "Altitude optimization of Airborne Wind Energy systems: A Bayesian Optimization approach", "text": "This study presents a data-driven approach for optimizing the operating altitude of Airborne Wind Energy (AWE) systems to maximize net energy production. Determining the optimal operating altitude of an AWE system is challenging, as the wind speed constantly varies with both time and altitude. Furthermore, without expensive auxiliary equipment, the wind speed is only measurable at the AWE system's operating altitude. The work presented in this paper shows how tools from machine learning can be blended with real-time control to optimize the AWE system's operating altitude efficiently, without the use of auxiliary wind profiling equipment. Specifically, Bayesian Optimization, which is a data-driven technique for finding the optimum of an unknown and expensive-to-evaluate objective function, is applied to the real-time control of an AWE system. The underlying objective function is modeled by a Gaussian Process (GP); then, Bayesian Optimization utilizes the predictive uncertainty information from the GP to decide the best subsequent operating altitude. In the AWE application, conventional Bayesian Optimization is extended to handle the time-varying nature of the wind shear profile (wind speed vs. time). Using real wind data, our method is validated against three baseline approaches. Our simulation results show that the Bayesian Optimization method is successful in dramatically increasing power production over these baselines."}
{"_id": "b1f4423c227fa37b9680787be38857069247a307", "title": "Collecting Large, Richly Annotated Facial-Expression Databases from Movies", "text": "Two large facial-expression databases depicting challenging real-world conditions were constructed using a semi-automatic approach via a recommender system based on subtitles."}
{"_id": "ffa5fd279241c63eb4d6c5385db44c7d59893cf5", "title": "Dynamic mode decomposition of numerical and experimental data", "text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L\u2019archive ouverte pluridisciplinaire HAL, est destin\u00e9e au d\u00e9p\u00f4t et \u00e0 la diffusion de documents scientifiques de niveau recherche, publi\u00e9s ou non, \u00e9manant des \u00e9tablissements d\u2019enseignement et de recherche fran\u00e7ais ou \u00e9trangers, des laboratoires publics ou priv\u00e9s. Dynamic mode decomposition of numerical and experimental data Peter Schmid"}
{"_id": "ba4eec4a53013e4a5ae0151d3722627f68a676c0", "title": "Traffic monitoring and accident detection at intersections", "text": "Among the most important research in Intelligent Transportation Systems (ITS) is the development of systems that automatically monitor traffic flow at intersections. Rather than being based on global flow analysis as is currently done, these automatic monitoring systems should be based on local analysis of the behavior of each vehicle at the intersection. The systems should be able to identify each vehicle and track its behavior, and to recognize situations or events that are likely to result from a chain of such behavior. The most difficult problem associated with vehicle tracking is the occlusion effect among vehicles. In order to solve this problem we have developed an algorithm, referred to as spatio-temporal Markov random field (MRF), for traffic images at intersections. This algorithm models a tracking problem by determining the state of each pixel in an image and its transit, and how such states transit along both the \u2013 image axes as well as the time axes. Vehicles, of course, are of various shapes and they move in random fashion, thereby leading to full or partial occlusion at intersections. Despite these complications, our algorithm is sufficiently robust to segment and track occluded vehicles at a high success rate of 93%\u201396%. This success has led to the development of an extendable robust event recognition system based on the hidden Markov model (HMM). The system learns various event behavior patterns of each vehicle in the HMM chains and then, using the output from the tracking system, identifies current event chains. The current system can recognize bumping, passing, and jamming. However, by including other event patterns in the training set, the system can be extended to recognize those other events, e.g., illegal U-turns or reckless driving. We have implemented this system, evaluated it using the tracking results, and demonstrated its effectiveness."}
{"_id": "259817ee2a4419795f698b123027aad89cf5f903", "title": "Higher-Order Pooling of CNN Features via Kernel Linearization for Action Recognition", "text": "Most successful deep learning algorithms for action recognition extend models designed for image-based tasks such as object recognition to video. Such extensions are typically trained for actions on single video frames or very short clips, and then their predictions from sliding-windows over the video sequence are pooled for recognizing the action at the sequence level. Usually this pooling step uses the first-order statistics of frame-level action predictions. In this paper, we explore the advantages of using higherorder correlations, specifically, we introduce Higher-order Kernel (HOK) descriptors generated from the late fusion of CNN classifier scores from all the frames in a sequence. To generate these descriptors, we use the idea of kernel linearization. Specifically, a similarity kernel matrix, which captures the temporal evolution of deep classifier scores, is first linearized into kernel feature maps. The HOK descriptors are then generated from the higher-order cooccurrences of these feature maps, and are then used as input to a video-level classifier. We provide experiments on two fine-grained action recognition datasets, and show that our scheme leads to state-of-the-art results."}
{"_id": "52f89a99c7e5af264a878dd05b62e94622c6bdac", "title": "Pattern codification strategies in structured light systems", "text": "Coded structured light is considered one of the most reliable techniques for recovering the surface of objects. This technique is based on projecting a light pattern and viewing the illuminated scene from one or more points of view. Since the pattern is coded, correspondences between image points and points of the projected pattern can be easily found. The decoded points can be triangulated and 3D information is obtained. We present an overview of the existing techniques, as well as a new and de nitive classi cation of patterns for structured light sensors. We have implemented a set of representative techniques in this eld and present some comparative results. The advantages and constraints of the di5erent patterns are also discussed. ? 2003 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved."}
{"_id": "5f8112731bd0a2bfa91158753242bae2f540d72d", "title": "Rapid Shape Acquisition Using Color Structured Light and Multi-pass Dynamic Programming", "text": "This paper presents a color structured light technique for recovering object shape from one or more images. The technique works by projecting a pattern of stripes of alternating colors and matching the projected color transitions with observed edges in the image. The correspondence problem is solved using a novel, multi-pass dynamic programming algorithm that eliminates global smoothness assumptions and strict ordering constraints present in previous formulations. The resulting approach is suitable for generating both highspeed scans of moving objects when projecting a single stripe pattern and high-resolution scans of static scenes using a short sequence of time-shifted stripe patterns. In the latter case, spacetime analysis is used at each sensor pixel to obtain inter-frame depth localization. Results are demonstrated for a variety of complex scenes."}
{"_id": "450d6ef1acfe802ae0cfeca71a8b355d103b2865", "title": "Recent progress in coded structured light as a technique to solve the correspondence problem: a survey", "text": "We present a survey of the most significant techniques, used in the last few years, concerning the coded structured light methods employed to get 3D information. In fact, depth perception is one of the most important subjects in computer vision. Stereovision is an attractive and widely used method, but, it is rather limited to make 3D surface maps, due to the correspondence problem. The correspondence problem can be improved using a method based on structured light concept, projecting a given pattern on the measuring surfaces. However, some relations between the projected pattern and the reflected one must be solved. This relationship can be directly found codifying the projected light, so that, each imaged region of the projected pattern carries the needed information to solve the correspondence problem. ( 1998 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved. Pattern projection Correspondence problem Active stereo Depth perception Range data Computer vision."}
{"_id": "5a608ebb9b0e462a36857ee8ae3c9fde04efee9f", "title": "A low cost 3D scanner based on structured light", "text": "Automatic 3D acquisition devices (often called 3D scanners) allow to build highly accurate models of real 3D objects in a costand time-effective manner. We have experimented this technology in a particular application context: the acquisition of Cultural Heritage artefacts. Specific needs of this domain are: medium-high accuracy, easy of use, affordable cost of the scanning device, self-registered acquisition of shape and color data, and finally operational safety for both the operator and the scanned artefacts. According to these requirements, we designed a low-cost 3D scanner based on structured light which adopts a new, versatile colored stripe pattern approach. We present the scanner architecture, the software technologies adopted, and the first results of its use in a project regarding the 3D acquisition of an archeological statue."}
{"_id": "a7aeeb9b1a3ebbb79fdaf17232e89e6ff202aca7", "title": "Classification of Defect Clusters on Semiconductor Wafers Via the Hough Transformation", "text": "The Hough transformation employing a normal line-to-point parameterization is widely applied in digital image processing for feature detection. In this paper, we demonstrate how this same transformation can be adapted to classify defect signatures on semiconductor wafers as an aid to visual defect metrology. Given a rectilinear grid of die centers on a wafer, we demonstrate an efficient and effective procedure for classifying defect clusters composed of lines at angles of 0deg, 45deg, 90deg, and 135deg from the horizontal, as well as adjacent compositions of such lines. Included are defect clusters representing stripes, scratches at arbitrary angles, and center and edge defects. The principle advantage of the procedure over current industrial practice is that it can be fully automated to screen wafers for further engineering analysis."}
{"_id": "9abd3bd7588182f84e539ec0df36cd1721cf17f8", "title": "The Columbia-GWU System at the 2016 TAC KBP BeSt Evaluation", "text": "We present the components of the ColumbiaGWU contribution to the 2016 TAC KBP BeSt Evaluation."}
{"_id": "20687676930b1f3923732b3319b9cddbe27a10e6", "title": "Enhancing information retrieval and resource discovery from data using the Semantic Web", "text": "Data are everywhere. Often the sheer quantities of data that is stored or archived in repositories or digital libraries make it difficult to navigate data; information in the data is obscured, particularly where we have Big Data. Traditionally, we record metadata on our data items to assist data classification and some information retrieval. The Semantic Web enables us to further unlock and enrich our data, by exploring how different data are related or connected. Using ontologies and Linked Data we can declare, navigate and discover semantic relationships. Relationships exist both locally, within the data, and globally, such that we can enhance our data with information retrieved from a wider context. To illustrate how the application of Semantic Web technologies aids data discovery and information retrieval, I discuss two case studies: (1) Sharing Ancient Wisdoms (SAWS), a Dynamic Library of information on selected ancient wisdom literature; and (2) the DEFRA DTC archive, a repository of data about freshwater quality in the UK."}
{"_id": "4229a5426c6250a31d84aed1a5cd51dc8d9c6065", "title": "Combining equilibrium , resampling , and analysts \u2019 views in portfolio optimization", "text": "Portfolio optimization methodologies play a central role in strategic asset allocation (SAA), where it is desirable to have portfolios that are efficient, diversified, and stable. Since the development of the traditional mean-variance approach of Markowitz (1952), many improvements have been made to overcome problems such as lack of diversification and strong sensitivity of optimal portfolio weights to expected returns."}
{"_id": "29deb8c68de5ed69eb3c4b2234350ad2fdaaf2b8", "title": "Selective Transfer Learning for Cross Domain Recommendation", "text": "Collaborative filtering (CF) aims to predict users\u2019 ratings on items according to historical user-item preference data. In many realworld applications, preference data are usually sparse, which would make models overfit and fail to give accurate predictions. Recently, several research works show that by transferring knowledge from some manually selected source domains, the data sparseness problem could be mitigated. However for most cases, parts of source domain data are not consistent with the observations in the target domain, which may misguide the target domain model building. In this paper, we propose a novel criterion based on empirical prediction error and its variance to better capture the consistency across domains in CF settings. Consequently, we embed this criterion into a boosting framework to perform selective knowledge transfer. Comparing to several state-of-the-art methods, we show that our proposed selective transfer learning framework can significantly improve the accuracy of rating prediction tasks on several real-world recommendation tasks."}
{"_id": "02d08628572431c910b30a29796dd1a676ae0784", "title": "A recent development of antenna-in-package for 5G millimeter-wave applications (Invited paper)", "text": "Millimeter-wave technology brings a new paradigm of wireless communication in various areas including mobile devices, automotive, IoT (Internet of Things), medical, military and many others. In this paper, we present our recently developed antenna in a package and system for 5G (Fifth-Generation) millimeter-wave applications. The antenna is configured to be an array to compensate path loss and maintain data link budget. The antenna is designed with a multi-layer laminate substrate and is integrated with a 5G mmW RFIC. 5G NR (New Radio) wireless connection is demonstrated with a RFIC transceiver at 28 GHz band. Interoperability test is conducted to ensure compatibility with the 3GPP standard. Finally, indoor and outdoor field test including beamforming and beamtracking are presented to show realistic system-level throughput performance."}
{"_id": "4ae1a837c6b74f48d6a048c2d687c5387b9d70cb", "title": "Dilated convolutions for image classification and object localization", "text": "Yu et al.[1] showed that dilated convolutions are very effective in dense prediction problems such as semantic segmentation. In this work, we propose a new ResNet[2] based convolutional neural network model using dilated convolutions and show that this model can achieve lower error rate for image classification than ResNet with reduction of the number of the parameters of the network by 94% and that this model has high ability to localize objects despite being trained on image-level labels. We evaluated this model on ImageNet[5] which has 50 class labels randomly selected from 1000 class labels."}
{"_id": "329d527a353c8f0c112bf702617ec6643a77b7ec", "title": "Evaluation of a 900 V SiC MOSFET in a 13.56 MHz 2 kW resonant inverter for wireless power transfer", "text": "This paper demonstrates the performance of a 900 V silicon carbide (SiC) power MOSFET operating at 13.56 MHz in a 2 kW resonant inverter targeted for wireless power transfer (WPT) systems. Operating at multi-MHz switching frequency leads to smaller passive components and has the potential to improve power density and reduce weight for weight-sensitive applications. In addition, the advent of wide band gap (WBG) power semiconductors with higher breakdown voltages than silicon (Si) allows the design of high-power (2 kW or more) inverters even when operating at MHz frequencies. Previous work described the design and implementation of a class \u03a62 inverter using an enhancement mode gallium nitride (eGaN) FET at output power of 1.3 kW and switching at 13.56 MHz [1]. While eGaN FETs have promising electrical characteristics for high-frequency operation, SiC MOSFETs are becoming available with higher temperature ratings, higher breakdown voltages and higher avalanche ratings to operate at tens of MHz switching frequency. In this work, we study and evaluate the electrical characteristics of a 900 V, 7L D2PAK SiC MOSFET and demonstrate the design of a 2 kW single-ended resonant inverter operating at 13.56 MHz."}
{"_id": "e21fe0c4ef27d2b218988deb0625c09790b28a77", "title": "Incremental tessellation of trimmed parametric surfaces", "text": "This paper presents an incremental approach for tessellating trimmed parametric surfaces. A database is built to keep the tessellation of a CAD model. With the topological information provided, the database is efficiently updated when the model is modified. This enhances the tessellation process as only a slight change of the existing tessellation is required. The tessellation which involves the insertion and removal of trimlines is performed completely in the parametric space. The topological information also facilitates the triangles classification and the merging of boundaries between surfaces. The algorithm is particularly suitable for the generation of a valid triangulation of a complex CAD model for many downstream applications, e.g. in rapid prototyping (RP). q 2000 Elsevier Science Ltd. All rights reserved."}
{"_id": "ff3b75ff35cf1a680b622b60f5cedf2b01016800", "title": "Cannabinoids in the Treatment of Epilepsy.", "text": "Despite the availability of more than 20 different antiseizure drugs and the provision of appropriate medical therapy, 30% of people with epilepsy continue to have seizures.1,2 The approval of many new antiseizure drugs during the past two decades, including several with novel mechanisms of action, has not substantially reduced the proportion of patients with medically refractory disease.1 The safety and side-effect profile of antiseizure drugs has improved, but side effects related to the central nervous system are common and affect quality of life.3 Patients need new treatments that control seizures and have fewer side effects. This treatment gap has led patients and families to seek alternative treatments. Cannabis-based treatment for epilepsy has recently received prominent attention in the lay press4 and in social media, with reports of dramatic improvements in seizure control in children with severe epilepsy. In response, many states have legalized cannabis for the treatment of epilepsy (and other medical conditions) in children and adults (for a list of medical marijuana laws according to state, see www . ncsl . org/ research/ health/ state-medical-marijuana-laws . aspx). Cannabis has been used medicinally for millennia and was used in the treatment of epilepsy as early as 1800 b.c.e. in Sumeria.5 Victorian-era neurologists used Indian hemp to treat epilepsy and reported dramatic success.5,6 The use of cannabis therapy for the treatment of epilepsy diminished with the introduction of phenobarbital (1912) and phenytoin (1937) and the passage of the Marijuana Tax Act (1937). The discovery of an endogenous cannabinoid-signaling system in the 1990s7 rekindled interest in therapies derived from constituents of cannabis for nervous system disorders such as epilepsy (see ClinicalTrials.gov numbers, NCT02091375, NCT02224690, NCT02324673, NCT02318537, and NCT02318563). This review addresses the current preclinical and clinical data that suggest that compounds found in cannabis have efficacy against seizures. The pharmacokinetic properties of cannabinoids and related safety and regulatory issues that may affect clinical use are also discussed, as are the distinct challenges of conducting rigorous clinical trials of these compounds. More than 545 distinct compounds have been isolated from cannabis species; the most abundant are the cannabinoids, a family of molecules that have a 21-carbon terpenophenolic skeleton and includes numerous metabolites.8 The best studied of these cannabinoids (termed \u201cphytocannabinoids\u201d if derived from the plant) are \u03949-tetrahydrocannabinol (\u03949-THC) and cannabidiol and their metabolites. (See Fig. 1 for the structure of \u03949-THC, cannabidiol, and one other cannabinoid, cannabidivarin, as well as their targets in the central nervous system, and their actions.) Most of the psychoactive effects of cannabis are mediated by \u03949-THC. Many of the noncannabinoid molecules in cannabis plants may have biologic activity. This review focuses on cannabinoids, since other cannabis-derived compounds have been less well studied. From the Department of Neurology, New York University Langone School of Medicine, New York. Address reprint requests to Dr. Friedman at the Department of Neurology, NYU Langone School of Medicine, 223 E. 34th St., New York, NY 10016, or at daniel . friedman@ nyumc . org."}
{"_id": "38f5b25b0c2f640e2a1e0e5ee9babe237dc6f654", "title": "SharesSkew: An algorithm to handle skew for joins in MapReduce", "text": "In this paper, we investigate the problem of computing a multiway join in one round of MapReduce when the data may be skewed. We optimize on communication cost, i.e., the amount of data that is transferred from the mappers to the reducers. We identify join attributes values that appear very frequently, Heavy Hitters (HH). We distribute HH valued records to reducers avoiding skew by using an adaptation of the Shares [3] algorithm to achieve minimum communication cost. Our algorithm is implemented for experimentation and is offered as open source software. Furthermore, we investigate a class of multiway joins for which a simpler variant of the algorithm can handle skew. We offer closed forms for computing the parameters of the algorithm for chain and symmetric joins."}
{"_id": "536727a754e1d04bd2a749a813c0ac68eb2bb1f5", "title": "Noninvasive prediction of glucose by near-infrared diffuse reflectance spectroscopy.", "text": "BACKGROUND\nSelf-monitoring of blood glucose by diabetics is crucial in the reduction of complications related to diabetes. Current monitoring techniques are invasive and painful, and discourage regular use. The aim of this study was to demonstrate the use of near-infrared (NIR) diffuse reflectance over the 1050-2450 nm wavelength range for noninvasive monitoring of blood glucose.\n\n\nMETHODS\nTwo approaches were used to develop calibration models for predicting the concentration of blood glucose. In the first approach, seven diabetic subjects were studied over a 35-day period with random collection of NIR spectra. Corresponding blood samples were collected for analyte analysis during the collection of each NIR spectrum. The second approach involved three nondiabetic subjects and the use of oral glucose tolerance tests (OGTTs) over multiple days to cause fluctuations in blood glucose concentrations. Twenty NIR spectra were collected over the 3.5-h test, with 16 corresponding blood specimens taken for analyte analysis.\n\n\nRESULTS\nStatistically valid calibration models were developed on three of the seven diabetic subjects. The mean standard error of prediction through cross-validation was 1.41 mmol/L (25 mg/dL). The results from the OGTT testing of three nondiabetic subjects yielded a mean standard error of calibration of 1.1 mmol/L (20 mg/dL). Validation of the calibration model with an independent test set produced a mean standard error of prediction equivalent to 1.03 mmol/L (19 mg/dL).\n\n\nCONCLUSIONS\nThese data provide preliminary evidence and allow cautious optimism that NIR diffuse reflectance spectroscopy using the 1050-2450 nm wavelength range can be used to predict blood glucose concentrations noninvasively. Substantial research is still required to validate whether this technology is a viable tool for long-term home diagnostic use by diabetics."}
{"_id": "3dedf830742049a9fa5d773ecf16f254b3f7e0ad", "title": "Selective stiffening of soft actuators based on jamming", "text": "The ability to selectively stiffen otherwise compliant soft actuators increases their versatility and dexterity. We investigate granular jamming and layer jamming as two possible methods to achieve stiffening with PneuFlex actuators, a type of soft continuum actuator. The paper details five designs of jamming compartments that can be attached to an actuator. We evaluate the stiffening of the five different prototypes, achieving an up to 8-fold increase in stiffness. The strength of the most effective prototype based on layer jamming is also validated in the context of pushing buttons, resulting in an 2.23-fold increase in pushing force."}
{"_id": "263c66b0c2dc996c46d11693ad3d6f8d3f7f3d3c", "title": "Two-dimensional Disjunctively Constrained Knapsack Problem: Heuristic and exact approaches", "text": ""}
{"_id": "5401869e90d4ab31a72ae6dc627dd0bbadb63814", "title": "Multiple extended object tracking using Gaussian processes", "text": "The goal of multi-object tracking is to estimate the number of objects and their states recursively over time. In the presence of extended objects, i.e. objects with an extent that is not negligible in comparison to sensor resolution and which give rise to multiple measurements per time step, the tracking task is even more challenging: To fully utilize all available information and to achieve accurate estimation results, measurement models that are able to represent the object shape as well as algorithms that tackle the aggravated track-to-measurements association problem are necessary. In this work, the Gaussian process measurement model is integrated in the recently developed labeled multi-Bernoulli filter for extended objects. Thus, the ability of Gaussian processes to estimate and represent a wide range of free-from shapes is combined with a principled approach to the multiple extended objects tracking problem. The filter performance is demonstrated for simulated and experimental vehicle tracking using a laser scanner. For this particular application, a new, approximately axis-symmetric covariance function is additionally introduced."}
{"_id": "6bf1770a79309e4f05ef65e65e19f99c25974657", "title": "The Networked Sensory Landscape: Capturing and Experiencing Ecological Change Across Scales", "text": "What role will ubiquitous sensing play in our understanding and experience of ecology in the future? What opportunities are created by weaving a continuously sampling, geographically dense web of sensors into the natural environment, from the ground up? In this article, we explore these questions holistically, and present our work on an environmental sensor network designed to support a diverse array of applications, interpretations, and artistic expressions, from primary ecological research to musical composition. Over the past four years, we have been incorporating our ubiquitous sensing framework into the design and implementation of a large-scale wetland restoration, creating a broad canvas for creative exploration at the landscape scale. The projects we present here span the development and wide deployment of custom sensor node hardware, novel web services for providing real-time sensor data to end user applications, public-facing user interfaces for open-ended exploration of the data, as well as more radical UI modalities, through unmanned aerial vehicles, virtual and augmented reality, and wearable devices for sensory augmentation. From this work, we distill the Networked Sensory Landscape, a vision for the intersection of ubiquitous computing and environmental restoration. Sensor network technologies and novel approaches to interaction promise to reshape presence, opening up sensorial connections to ecological processes across spatial and temporal scales."}
{"_id": "342cc16e5a32613e325146c0ac06716850919234", "title": "Contact System Design to Improve Energy Efficiency in Copper Electrowinning Processes", "text": "To improve energy efficiency in copper electrowinning, different technologies have been developed. These include electrode positioning capping boards and 3-D grids, electrode spacers, and segmented intercell bars. This paper introduces a design concept to avoid electrode open circuits and reduce contact resistances. The design is based on a female tooth shape for the contacts on the intercell bar. This leads to improved electrode alignment, reduced contact resistances, easier contact cleaning, and ensured electrical contact for the electrodes. It results in lower operational temperature for the electrodes, reduced plant housekeeping, increased lifespan for capping boards, and higher rate of grade A copper production. The comparative results presented should be a useful guideline for any type of intercell bar. Improvements in production levels and energy efficiency should reach 0.5% and 3%, respectively. A 3-D finite-element-based analysis and industrial measurements are used to verify the results."}
{"_id": "9b312435f460f8137aa0b083b999794f676d9a38", "title": "ADHD comorbidity findings from the MTA study: comparing comorbid subgroups.", "text": "OBJECTIVES\nPrevious research has been inconclusive whether attention-deficit/hyperactivity disorder (ADHD), when comorbid with disruptive disorders (oppositional defiant disorder [ODD] or conduct disorder [CD]), with the internalizing disorders (anxiety and/or depression), or with both, should constitute separate clinical entities. Determination of the clinical significance of potential ADHD + internalizing disorder or ADHD + ODD/CD syndromes could yield better diagnostic decision-making, treatment planning, and treatment outcomes.\n\n\nMETHOD\nDrawing upon cross-sectional and longitudinal information from 579 children (aged 7-9.9 years) with ADHD participating in the NIMH Collaborative Multisite Multimodal Treatment Study of Children With Attention-Deficit/Hyperactivity Disorder (MTA), investigators applied validational criteria to compare ADHD subjects with and without comorbid internalizing disorders and ODD/CD.\n\n\nRESULTS\nSubstantial evidence of main effects of internalizing and externalizing comorbid disorders was found. Moderate evidence of interactions of parent-reported anxiety and ODD/CD status were noted on response to treatment, indicating that children with ADHD and anxiety disorders (but no ODD/CD) were likely to respond equally well to the MTA behavioral and medication treatments. Children with ADHD-only or ADHD with ODD/CD (but without anxiety disorders) responded best to MTA medication treatments (with or without behavioral treatments), while children with multiple comorbid disorders (anxiety and ODD/CD) responded optimally to combined (medication and behavioral) treatments.\n\n\nCONCLUSIONS\nFindings indicate that three clinical profiles, ADHD co-occurring with internalizing disorders (principally parent-reported anxiety disorders) absent any concurrent disruptive disorder (ADHD + ANX), ADHD co-occurring with ODD/CD but no anxiety (ADHD + ODD/CD), and ADHD with both anxiety and ODD/CD (ADHD + ANX + ODD/CD) may be sufficiently distinct to warrant classification as ADHD subtypes different from \"pure\" ADHD with neither comorbidity. Future clinical, etiological, and genetics research should explore the merits of these three ADHD classification options."}
{"_id": "f744715aedba86271000c1f49352e0bfdcaa3204", "title": "A General Optimization Framework for Multi-Document Summarization Using Genetic Algorithms and Swarm Intelligence", "text": "Extracting summaries via integer linear programming and submodularity are popular and successful techniques in extractive multi-document summarization. However, many interesting optimization objectives are neither submodular nor factorizable into an integer linear program. We address this issue and present a general optimization framework where any function of input documents and a system summary can be plugged in. Our framework includes two kinds of summarizers \u2013 one based on genetic algorithms, the other using a swarm intelligence approach. In our experimental evaluation, we investigate the optimization of two information-theoretic summary evaluation metrics and find that our framework yields competitive results compared to several strong summarization baselines. Our comparative analysis of the genetic and swarm summarizers reveals interesting complementary properties."}
{"_id": "c5cc6243f070d80f5edef24608694c39195e2d1a", "title": "SQL server column store indexes", "text": "The SQL Server 11 release (code named \"Denali\") introduces a new data warehouse query acceleration feature based on a new index type called a column store index. The new index type combined with new query operators processing batches of rows greatly improves data warehouse query performance: in some cases by hundreds of times and routinely a tenfold speedup for a broad range of decision support queries. Column store indexes are fully integrated with the rest of the system, including query processing and optimization. This paper gives an overview of the design and implementation of column store indexes including enhancements to query processing and query optimization to take full advantage of the new indexes. The resulting performance improvements are illustrated by a number of example queries."}
{"_id": "f417e2c902ec6fc9ffbbf975740e4590147448b2", "title": "A new weighted SVDD algorithm for outlier detection", "text": "Traditional Support Vector Domain Description (SVDD) and some improved SVDD algorithms are suboptimal in detecting some complex outliers. To solve the difficulty, we proposed a novel adaptive weighted SVDD (AW-SVDD) algorithm. Firstly, a weighting is computed for each data point based on their spatial position distribution and density distribution in a training dataset. The weighting can be used to measure the degree of the data point to be an outlier. Then the weighted data are trained by traditional SVDD. Lastly, a sphere shaped data description can be obtained for the training data. Experimental results demonstrate that AW-SVDD can overcome the interference from some complex outliers objectively, therefore the algorithm has a better performance than traditional SVDD and other improved SVDD algorithms."}
{"_id": "9528fa09fbd918618dbd1bac72fe8c24f5574400", "title": "IRIS: a Chat-oriented Dialogue System based on the Vector Space Model", "text": "This system demonstration paper presents IRIS (Informal Response Interactive System), a chat-oriented dialogue system based on the vector space model framework. The system belongs to the class of examplebased dialogue systems and builds its chat capabilities on a dual search strategy over a large collection of dialogue samples. Additional strategies allowing for system adaptation and learning implemented over the same vector model space framework are also described and discussed."}
{"_id": "09a9a6b6a0b9e8fa210175587181d4a8329f3f20", "title": "Between MDPs and Semi-MDPs: A Framework for Temporal Abstraction in Reinforcement Learning", "text": "Learning, planning, and representing knowledge at multiple levels of temporal abstraction are key, longstanding challenges for AI. In this paper we consider how these challenges can be addressed within the mathematical framework of reinforcement learning and Markov decision processes (MDPs). We extend the usual notion of action in this framework to include options\u2014closed-loop policies for taking action over a period of time. Examples of options include picking up an object, going to lunch, and traveling to a distant city, as well as primitive actions such as muscle twitches and joint torques. Overall, we show that options enable temporally abstract knowledge and action to be included in the reinforcement learning framework in a natural and general way. In particular, we show that options may be used interchangeably with primitive actions in planning methods such as dynamic programming and in learning methods such as Q-learning. Formally, a set of options defined over an MDP constitutes a semi-Markov decision process (SMDP), and the theory of SMDPs provides the foundation for the theory of options. However, the most interesting issues concern the interplay between the underlying MDP and the SMDP and are thus beyond SMDP theory. We present results for three such cases: (1) we show that the results of planning with options can be used during execution to interrupt options and thereby perform even better than planned, (2) we introduce new intra-option methods that are able to learn about an option from fragments of its execution, and (3) we propose a notion of subgoal that can be used to improve the options themselves. All of these results have precursors in the existing literature; the contribution of this paper is to establish them in a simpler and more general setting with fewer changes to the existing reinforcement learning framework. In particular, we show that these results can be obtained without committing to (or ruling out) any particular approach to state abstraction, hierarchy, function approximation, or the macroutility problem. ! 1999 Published by Elsevier Science B.V. All rights reserved. \u2217 Corresponding author. 0004-3702/99/$ \u2013 see front matter ! 1999 Published by Elsevier Science B.V. All rights reserved. PII: S0004-3702(99)00052 -1 182 R.S. Sutton et al. / Artificial Intelligence 112 (1999) 181\u2013211"}
{"_id": "a3f1fe3a1e81e4c8da25a3ca81e27390e62388e9", "title": "The Design and Implementation of the Clouds Distributed Operating System", "text": "Clouds is a native operating system for a distributed environment. The Clouds operating system is built on top of a kernel called Ra. Ra \u00eds a second generation kernel derived from our experience with the first version of the Clouds operating system. R\u00f8 is a minimal, flexible kernel that provides a framework for implementing a variety of distributed operating systems. This paper presents the Clouds paradigm and a brief overview of its first implementation. We then present the details of the R\u00f8 kernel, the rationale for its design, and the system services that constitute the Clouds operating system. This work was funded by NSF grant CCR-8619886. @ Computing Systems, Vol. 3 . No. I . Winter 1990 11"}
{"_id": "f181e4328bb0d66459642d39d655da62393cd194", "title": "Simulation of wide area measurement system with optimal phasor measurement unit location", "text": "Traditional supervisory control and data acquisition (SCADA) measurements have been providing information regarding different power system parameters for decades. These measurements are typically taken once every 4 to 10 seconds offering a steady state view of the power system behaviour. However, for monitoring and control of large grid only steady state information may not be sufficient. Phasor measurement units (PMUs) provide dynamic view of the grid. Since PMUs are costly device, optimal placement of PMUs is a thrust areas of research in power system planning. This has been already investigated in previous research findings by different optimization technique. This paper extensively corroborates a simulation model of a wide area network with optimal PMU location. The purpose is to experimentally determine the ability of PMU to observe the entire network from its optimal location. The proposed methodology is implemented and investigated on IEEE six bus system using MATLAB/Simulink."}
{"_id": "bb90fdd8ae078250baec5ffb7c1b458055a40d21", "title": "Eigenvector-centrality - a node-centrality?", "text": "Networks of social relations can be represented by graphs and socioor adjacency-matrices and their structure can be analyzed using different concepts, one of them called centrality. We will provide a new formalization of a \u201cnode-centrality\u201d which leads to some properties a measure of centrality has to satisfy. These properties allow to test given measures, for example measures based on degree, closeness, betweenness or Bonacich\u2019s eigenvector-centrality. It turns out that it depends on normalization whether eigenvector-centrality does satisfy the expected properties or not. \u00a9 2000 Elsevier Science B.V. All rights reserved."}
{"_id": "3b59c4af513a7ce9cc1646c9ec0c7b15a9ae9d62", "title": "Leveraging Tokens in a Natural Language Query for", "text": "Natural Language Interface to Database (NLIDB) systems convert a Natural Language (NL) query into a Structured Query Language (SQL) and then use the SQL query to retrieve information from a database. The main advantage with this function of an NLIDB system is that it makes information retrieval much easier and more importantly, it also allows non-technical users to query a database. In this work, we present an effective usage of tokens in an NL query to address various problems in an NLIDB system. The conversion of an NL query to SQL query is framed as a token classification problem, using which we unveil a novel method of mapping an NL query to an SQL query. Our approach reasonably addresses domain dependency which is one of the major drawbacks of NLIDB systems. Concepts Identification is a major component of NLIDB systems (Gupta et al., 2012; Srirampur et al., 2014). We improve Concepts Identification (Srirampur et al., 2014) by making use of Stanford Parser Dependencies. Our approach is more robust than previously proposed methods. In addition, we also discuss how Concepts Identification can be applied to address Ellipsis in a dialogue process. In addition to providing results to a user, it is essential to provide a relevant and a compact set of results. We propose a new problem of generating a compact set of results to a user query. At a higher level, user-system interactions are modeled based on patterns frequently observed between a user\u2019s current query and his previous queries, while interacting with a system. Using these models, we propose a novel method of system prompting to help a user obtain a smaller and a relevant set of results. In addition to providing a compact and a relevant set of results, it is imperative that answers of an NLIDB system are comprehensible even by the people who are less familiar with a common language like English. NLIDB systems use Natural Language Generation (NLG) modules to provide answers in the form of sentences. It is important to make sure that an answer generated by an NLG module is very simple to understand. This brings in the problem of text simplification, wherein, one of the most crucial and initial sub-problems is Complex Word Identification (CWI). We address the problem of CWI by distinguishing words as simple and complex. A plethora of classifiers were explored to identify the complex words. This information helps us in improving the simplification of an NLIDB system\u2019s final output to a user. To summarize, in addition to addressing problems within an NLIDB system, this work touches postNLIDB problems like results processing. All the proposed issues are tackled using tokens in an NL query as the basic and a driving unit of force."}
{"_id": "aed124c053b9c510487d68e0faf32aff2a84c3b5", "title": "FERA 2017 - Addressing Head Pose in the Third Facial Expression Recognition and Analysis Challenge", "text": "The field of Automatic Facial Expression Analysis has grown rapidly in recent years. However, despite progress in new approaches as well as benchmarking efforts, most evaluations still focus on either posed expressions, near-frontal recordings, or both. This makes it hard to tell how existing expression recognition approaches perform under conditions where faces appear in a wide range of poses (or camera views), displaying ecologically valid expressions. The main obstacle for assessing this is the availability of suitable data, and the challenge proposed here addresses this limitation. The FG 2017 Facial Expression Recognition and Analysis challenge (FERA 2017) extends FERA 2015 to the estimation of Action Units occurrence and intensity under different camera views. In this paper we present the third challenge in automatic recognition of facial expressions, to be held in conjunction with the 12th IEEE conference on Face and Gesture Recognition, May 2017, in Washington, United States. Two sub-challenges are defined: the detection of AU occurrence, and the estimation of AU intensity. In this work we outline the evaluation protocol, the data used, and the results of a baseline method for both sub-challenges."}
{"_id": "385be9f80a607957936804b3ce4428eaab5cc468", "title": "Efficient Real-Time Auditing and Proof of Violation for Cloud Storage Systems", "text": "In this paper we study how to develop an efficient real-time auditing and proof of violation (POV) scheme for cloud storage systems. A POV scheme enables a user or a service provider to produce cryptographic proofs which can be used to prove either the occurrence of the violation of properties or the innocence of the service provider. POV schemes are solutions for obtaining mutual nonrepudiation between users and the service provider in the cloud. After each file operation, a real-time auditing should be performed so that the violation of the service provider can be found instantly. Existing solutions need to cache the hash values of files in client devices and thus the overhead for storing and synchronizing hash values in client devices which share files is huge. We propose a novel scheme in which client devices do not need to cache any hash values of files. A small portion called slice of a binary hash tree is transferred to the client device for real-time auditing and it can support POV whenever the audit does not pass. Experimental results are presented that demonstrate the feasibility of the proposed scheme and show that our scheme outperforms previous work by one to two order of magnitude. Service providers of cloud storage can use the proposed scheme to provide a mutual nonrepudiation guarantee in their service-level agreements."}
{"_id": "2384bb81118b1d6b46718c6501c2cec6c26b8573", "title": "High-accuracy differential tracking of low-cost GPS receivers", "text": "In many mobile wireless applications such as the automated driving of cars, formation flying of unmanned air vehicles, and source localization or target tracking with wireless sensor networks, it is more important to know the precise relative locations of nodes than their absolute coordinates. GPS, the most ubiquitous localization system available, generally provides only absolute coordinates. Furthermore, low-cost receivers can exhibit tens of meters of error or worse in challenging RF environments. This paper presents an approach that uses GPS to derive relative location information for multiple receivers. Nodes in a network share their raw satellite measurements and use this data to track the relative motions of neighboring nodes as opposed to computing their own absolute coordinates. The system has been implemented using a network of Android phones equipped with a custom Bluetooth headset and integrated GPS chip to provide raw measurement data. Our evaluation shows that centimeter-scale tracking accuracy at an update rate of 1 Hz is possible under various conditions with the presented technique. This is more than an order of magnitude more accurate than simply taking the difference of reported absolute node coordinates or other simplistic approaches due to the presence of uncorrelated measurement errors."}
{"_id": "c257c0efac8ee9045b450ce4dc9e040752c898fe", "title": "Automatic drift calibration for EOG-based gaze input interface", "text": "A drift calibration technique for DC-coupled EOG (electrooculogram) systems is proposed. It assumes a non-linear relationship between EOG and eye angle and estimates the absolute eye angle by the EOG differences during saccade. Drift is calibrated every saccade without user's explicit action, so it is especially suitable for long-term gaze input interfaces. An experiment confirms that it can estimate horizontal absolute eye angle with an error of about 5\u00b0 in addition to accurate eye movement."}
{"_id": "f24c82e85906bc7325b296d37370febd65833fdd", "title": "Towards Continuous Sign Language Recognition with Deep Learning", "text": "Humans communicate with each other using abstract signs and symbols. While the cooperation between humans and machines can be a powerful tool for solving complex or difficult tasks, the communication must be at the abstract enough level that is both natural to the humans and understandable to the machines. Our paper focuses on natural language and in particular on sign language recognition. The approach described here combines heuristics for segmentation of the video stream by identifying the epenthesis with stacked LSTMs for automatic classification of the derived segments. This approach segments continuous stream of video data with the accuracy of over 80% and reaches accuracies of over 95% on segmented sign recognition. We compare results in terms of the number of signs being recognised and the utility of various features used for the recognition. We aim to integrate the models into a single continuous sign language recognition system and to learn policies for specific domains that would map perception of a robot to its action. This will improve the accuracy of understanding the common task within the shared activity between a human and a machine. Such understanding, in turn, will foster meaningful cooperation."}
{"_id": "924462af4f90643a91327a9c466d2cfa905c6fef", "title": "Data-driven segmentation and labeling of freehand sketches", "text": "We present a data-driven approach to derive part-level segmentation and labeling of free-hand sketches, which depict single objects with multiple parts. Our method performs segmentation and labeling simultaneously, by inferring a structure that best fits the input sketch, through selecting and connecting 3D components in the database. The problem is formulated using Mixed Integer Programming, which optimizes over both the local fitness of the selected components and the global plausibility of the connected structure. Evaluations show that our algorithm is significantly better than the straightforward approaches based on direct retrieval or part assembly, and can effectively handle challenging variations in the sketch."}
{"_id": "45c911ffca5da80fff69361072cb347511582822", "title": "Photorealistic Scene Reconstruction by Voxel Coloring", "text": "A novel scene reconstruction technique is presented, different from previous approaches in its ability to cope with large changes in visibility and its modeling of intrinsic scene color and texture information. The method avoids image correspondence problems by working in a discretized scene space whose voxels are traversed in a fixed visibility ordering. This strategy takes full account of occlusions and allows the input cameras to be far apart and widely distributed about the environment. The algorithm identifies a special set of invariant voxels which together form a spatial and photometric reconstruction of the scene, fully consistent with the input images. The approach is evaluated with images from both inward-facing and outward-facing cameras."}
{"_id": "ee964a0054cf22a14df0b37facab35e0450ee3e0", "title": "Performance of Two 18-Story Steel Moment-Frame Buildings in Southern California During Two Large Simulated San Andreas Earthquakes", "text": "Using state-of-the-art computational tools in seismology and structural engineering, validated using data from the Mw=6.7 January 1994 Northridge earthquake, we determine the damage to two 18-story steel moment-frame buildings, one existing and one new, located in southern California due to ground motions from two hypothetical magnitude 7.9 earthquakes on the San Andreas Fault. The new building has the same configuration as the existing building but has been redesigned to current building code standards. Two cases are considered: rupture initiating at Parkfield and propagating from north to south, and rupture propagating from south to north and terminating at Parkfield. Severe damage occurs in these buildings at many locations in the region in the north-to-south rupture scenario. Peak velocities of 1 m.s\u22121 and 2 m.s\u22121 occur in the Los Angeles Basin and San Fernando Valley, respectively, while the corresponding peak displacements are about 1 m and 2 m, respectively. Peak interstory drifts in the two buildings exceed 0.10 and 0.06 in many areas of the San Fernando Valley and the Los Angeles Basin, respectively. The redesigned building performs significantly better than the existing building; however, its improved design based on the 1997 Uniform Building Code is still not adequate to prevent serious damage. The results from the south-to-north scenario are not as alarming, although damage is serious enough to cause significant business interruption and compromise life safety. DOI: 10.1193/1.2360698"}
{"_id": "7e8de9c277c2e30434d79a04f2a4916d03cb567a", "title": "Blockchain and Artificial Intelligence", "text": "It is undeniable that artificial intelligence (AI) and blockchain concepts are spreading at a phenomenal rate. Both technologies have distinct degree of technological complexity and multi-dimensional business implications. However, a common misunderstanding about blockchain concept, in particular, is that \u201ca blockchain is decentralized and therefore no one controls it\u201d. But the underlying development of a blockchain system is still attributed to a cluster of core developers. Take smart contract as an example, it is essentially a collection of codes (or functions) and data (or states) that are programmed and deployed on a blockchain (say, Ethereum) by different human programmers. It is thus, unfortunately, less likely to be free of loopholes and flaws. In this article, through a brief overview about how artificial intelligence could be used to deliver bug-free smart contract so as to achieve the goal of blockchain 2.0, we to emphasize that the blockchain implementation can be assisted or enhanced via various AI techniques. The alliance of AI and blockchain is expected to create numerous possibilities."}
{"_id": "c4a5421c8ac7277fe8d63bd296ba5742408adbbf", "title": "Simulation of bulk current injection test using integrated circuit immunity macro model and electromagnetic analysis", "text": "This paper provides a technique to predict bulk current injection (BCI) test results. In order to define the threshold of failure, the direct power injection based integrated circuit (IC) immunity macro model for conducted immunity is adopted. Injected radio frequency disturbance that reaches to an integrated circuit is calculated by using electromagnetic analysis with a high accuracy injection probe model. 3D model of equipment under test can provide the terminal voltage of IC which reference is the ground terminal of IC, not BCI test setups reference ground plane. The proposed method is applied to BCI tests and the simulated results have good correlation with experimental results."}
{"_id": "353f8f0eaa81a5a34078024c72bb1c255237687a", "title": "Art2Real: Unfolding the Reality of Artworks via Semantically-Aware Image-to-Image Translation", "text": "The applicability of computer vision to real paintings and artworks has been rarely investigated, even though a vast heritage would greatly benefit from techniques which can understand and process data from the artistic domain. This is partially due to the small amount of annotated artistic data, which is not even comparable to that of natural images captured by cameras. In this paper, we propose a semantic-aware architecture which can translate artworks to photo-realistic visualizations, thus reducing the gap between visual features of artistic and realistic data. Our architecture can generate natural images by retrieving and learning details from real photos through a similarity matching strategy which leverages a weakly-supervised semantic understanding of the scene. Experimental results show that the proposed technique leads to increased realism and to a reduction in domain shift, which improves the performance of pre-trained architectures for classification, detection, and segmentation. Code will be made publicly available."}
{"_id": "83d40f5e09b1143464d9c297bdb8c571e9a5ad4c", "title": "Round Robin Classification", "text": "In this paper, we discuss round robin classification (aka pairwise classification), a technique for handling multi-class problems with binary classifiers by learning one classifier for each pair of classes. We present an empirical evaluation of the method, implemented as a wrapper around the Ripper rule learning algorithm, on 20 multi-class datasets from the UCI database repository. Our results show that the technique is very likely to improve Ripper\u2019s classification accuracy without having a high risk of decreasing it. More importantly, we give a general theoretical analysis of the complexity of the approach and show that its run-time complexity is below that of the commonly used one-against-all technique. These theoretical results are not restricted to rule learning but are also of interest to other communities where pairwise classification has recently received some attention. Furthermore, we investigate its properties as a general ensemble technique and show that round robin classification with C5.0 may improve C5.0\u2019s performance on multi-class problems. However, this improvement does not reach the performance increase of boosting, and a combination of boosting and round robin classification does not produce any gain over conventional boosting. Finally, we show that the performance of round robin classification can be further improved by a straight-forward integration with bagging."}
{"_id": "a0ffcd12ecfc15d3cf32d7594819ef4289892006", "title": "QGrid: Q-learning based routing protocol for vehicular ad hoc networks", "text": "In Vehicular Ad Hoc Networks (VANETs), moving vehicles are considered as mobile nodes in the network and they are connected to each other via wireless links when they are within the communication radius of each other. Efficient message delivery in VANETs is still a very challenging research issue. In this paper, a Q-learning based routing protocol (i.e., QGrid) is introduced to help to improve the message delivery from mobile vehicles to a specific location. QGrid considers both macroscopic and microscopic aspects when making the routing decision, while the traditional routing methods focus on computing meeting information between different vehicles. QGrid divides the region into different grids. The macroscopic aspect determines the optimal next-hop grid and the microscopic aspect determines the specific vehicle in the optimal next-hop grid to be selected as next-hop vehicle. QGrid computes the Q-values of different movements between neighboring grids for a given destination via Q-learning. Each vehicle stores Q-value table learned offline, then selects optimal next-hop grid by querying Q-value table. Inside the selected next-hop grid, we either greedily select the nearest neighboring vehicle to the destination or select the neighboring vehicle with highest probability of moving to the optimal next-hop grid predicted by the two-order Markov chain. The performance of QGrid is evaluated by using real life trajectory GPS data of Shanghai taxies. Simulation comparison among QGrid and other existing position-based routing protocols confirms the advantages of proposed QGrid routing protocol for VANETs."}
{"_id": "eea9cb1d055c399c8363f1ed4c990bcf70cb9406", "title": "Recommendations Using Information from Multiple Association Rules: A Probabilistic Approach", "text": "Business analytics has evolved from being a novelty used by a select few, to an accepted facet of conducting business. Recommender systems form a cri tical component of the business analytics toolkit and, by enabling firms to effectively target custom ers with products and services, are helping alter t h ecommerce landscape. A variety of methods exist for providing recommendations, with collaborative filtering, matrix factorization, and association ru le based methods being the most common. In this pap er, we propose a method to improve the quality of recom mendations made using association rules. This is accomplished by combining rules when possible, and stands apart from existing rule-combination methods in that it is strongly grounded in probabil ity theory. Combining rules requires the identifica t on of the best combination of rules from the many comb inations that might exist, and we use a maximumlikelihood framework to compare alternative combina tions. As it is impractical to apply the maximum likelihood framework directly in real time, we show that this problem can equivalently be represented as a set partitioning problem by translating it into an information theoretic context \u2013 the best solution corresponds to the set of rules that leads to the h ighest sum of mutual information associated with th e rules. Through a variety of experiments that evalua te the quality of recommendations made using the proposed approach, we show that (i) a greedy heuris tic u ed to solve the maximum likelihood estimation problem is very effective, providing results compar able to those from using the optimal set partitioni ng solution, (ii) the recommendations made by our appr oach are more accurate than those made by a variety of state-of-the-art benchmarks, including collabora tive filtering and matrix factorization, and (iii) the recommendations can be made in a fraction of a seco nd on a desktop computer, making it practical to us e in real-world applications."}
{"_id": "17e3bc084ec01a55eb15d61edf721ea2356fdf0d", "title": "Smart digital door lock for the home automation", "text": "In this paper, we propose a smart digital door lock system for home automation. A digital door lock system is equipment that uses the digital information such as a secret code, semi-conductors, smart card, and finger prints as the method for authentication instead of the legacy key system. In our proposed system, a ZigBee module is embedded in digital door lock and the door lock acts as a central main controller of the overall home automation system. Technically, our proposed system is the network of sensor nodes and actuators with digital door lock as base station. A door lock system proposed here consists of RFID reader for user authentication, touch LCD, motor module for opening and closing of the door, sensor modules for detecting the condition inside the house, communication module, and control module for controlling other modules. Sensor nodes for environment sensing are deployed at appropriate places at home. Status of individual ZigBee module can be monitored and controlled by the centralized controller, digital door lock. As the door lock is the first and last thing people come across in entering and leaving the home respectively, the home automation function in digital door lock system enables user to conveniently control and monitor home environment and condition all at once before entering or leaving the house. Furthermore, it also allows users to remotely monitor the condition inside the house through Internet or any other public network. The biggest advantage of our proposed system over existing ones is that it can be easily installed when and where necessary without requirement of any infrastructures and proper planning."}
{"_id": "3a2c03f429b4b49aa11c3ba0d1300ed09df086a5", "title": "Automatic Driver Assistance System-Glare Free High Beam Control", "text": "At present automobiles upgrades with different technologies for assisting and providing safety for the occupant. Due to this a system known as Automatic Driver Assistance System (ADAS) proposed many systems which aim to help the driver in controlling the vehicle and to drive safely during night time. One of the systems of ADAS is Glare Free High Beam (GFHB) control. This provides the information of the oncoming vehicles and preceding vehicles based on camera, which slices out the high beam and shades spot when it detects the object by avoiding the glare and provides the driver with greater visibility. It makes use of the various sensors, actuator, image processor and microcontroller to detect an object. In this paper, unit level testing is done for different cases in Embedded C, simulated in Visual Studio and results are displayed using GUI."}
{"_id": "863f825eba269fbf83284c7e0c6a2f53b8876ce4", "title": "Correction of the retracted alar base.", "text": "Alar base retraction is a common yet difficult problem faced by the rhinoplasty surgeon. It may be caused by weakened, overresected lateral crura, vestibular lining deficiencies, or congenital alar malpositioning. Methods of correction include soft tissue manipulation, auricular composite grafting, and cartilage grafting. We present the senior author's graded approach to alar retraction using auricular composite grafting, alar rim grafting, and lateral crural strut graft placement with caudal lateral crural repositioning."}
{"_id": "50473a062677d9c4f31c10782bd7efcf1328f1d8", "title": "You Talking to Me? A Corpus and Algorithm for Conversation Disentanglement", "text": "When multiple conversations occur simultaneously, a listener must decide which conversation each utterance is part of in order to interpret and respond to it appropriately. We refer to this task as disentanglement. We present a corpus of Internet Relay Chat (IRC) dialogue in which the various conversations have been manually disentangled, and evaluate annotator reliability. This is, to our knowledge, the first such corpus for internet chat. We propose a graph-theoretic model for disentanglement, using discourse-based features which have not been previously applied to this task. The model\u2019s predicted disentanglements are highly correlated with manual annotations."}
{"_id": "8e0695dad914ee2d122095ad2cb4ec9306f0fd71", "title": "Dynamic Risk Assessment for Vehicles of Higher Automation Levels by Deep Learning", "text": "Vehicles of higher automation levels require the creation of situation awareness. One important aspect of this situation awareness is an understanding of the current risk of a driving situation. In this work, we present a novel approach for the dynamic risk assessment of driving situations based on images of a front stereo camera using deep learning. To this end, we trained a deep neural network with recorded monocular images, disparity maps and a risk metric for diverse traffic scenes. Our approach can be used to create the aforementioned situation awareness of vehicles of higher automation levels and can serve as a heterogeneous channel to systems based on radar or lidar sensors that are used traditionally for the calculation of risk metrics."}
{"_id": "cef80e5c6012739fb23a90f8fd9218430a6496e3", "title": "Crowdsourcing Annotations for Visual Object Detection", "text": "A large number of images with ground truth object bounding boxes are critical for learning object detectors, which is a fundamental task in compute vision. In this paper, we study strategies to crowd-source bounding box annotations. The core challenge of building such a system is to effectively control the data quality with minimal cost. Our key observation is that drawing a bounding box is significantly more difficult and time consuming than giving answers to multiple choice questions. Thus quality control through additional verification tasks is more cost effective than consensus based algorithms. In particular, we present a system that consists of three simple sub-tasks \u2014 a drawing task, a quality verification task and a coverage verification task. Experimental results demonstrate that our system is scalable, accurate, and cost-effective."}
{"_id": "3939607e665159002391082cd4a8952374d6bd99", "title": "Multilevel Converters-A New Breed of Power Converters", "text": "AbstructMultilevel voltage source converters are emerging as a new breed of power converter options for high-power applications. The multilevel voltage source converters typically synthesize the staircase voltage wave from several levels of dc capacitor voltages. One of the major limitations of the multilevel converters is the voltage unbalance between different levels. The techniques to balance the voltage between different levels normally involve voltage clamping or capacitor charge control. There are several ways of implementing voltage balance in multilevel converters. Without considering the traditional magnetic coupled converters, this paper presents three recently developed multilevel voltage source converters: 1) diode-clamp, 2) flyingcapacitors, and 3) cascaded-inverters with separate dc sources. The operating principle, features, constraints, and potential applications of these converters will be discussed."}
{"_id": "61c901789c7fb4721aae63769d7c4eb0e27a7e45", "title": "A generalized multilevel inverter topology with self voltage balancing", "text": "Multilevel power converters that provide more than two levels of voltage to achieve smoother and less distorted ac-to-dc, dc-to-ac, and dc-to-dc power conversion, have attracted many contributors. This paper presents a generalized multilevel inverter (converter) topology with self voltage balancing. The existing multilevel inverters such as diode-clamped and capacitor-clamped multilevel inverters can be derived from the generalized inverter topology. Moreover, the generalized multilevel inverter topology provides a true multilevel structure that can balance each dc voltage level automatically without any assistance from other circuits, thus, in principle, providing a complete and true multilevel topology that embraces the existing multilevel inverters. From this generalized multilevel inverter topology, several new multilevel inverter structures can be derived. Some application examples of the generalized multilevel converter will be given."}
{"_id": "bb004d2d04ce6872d0f7965808ae4867aa037f8b", "title": "Multilevel Converters as a Utility Interface for Renewable Energy Systems", "text": "Electric power production in the 21 Century will see dramatic changes in both the physical infrastructure and the control and information infrastructure. A shift will take place from a relatively few large, concentrated generation centers and the transmission of electricity over mostly a high voltage ac grid (Fig. 1) to a more diverse and dispersed generation infrastructure that also has a higher percentage of dc transmission lines (Fig. 2) [1]. In the United States, generation capacity has not kept up with power demands, as over the last decade the reserve margins declined from 22% in 1990 to 16% in 1997. This declining margin trend is expected to continue over the next decade partly because of the uncertainty of how the future deregulated electrical environment will function. Less reserve margins will lead to less peaking capacity on highdemand days and more volatile energy prices [2]. This change in a physical infrastructure combined with a more deregulated electric power industry will result in more parties generating power \u2013 or distributed generation. Some of the distributed generation power sources that are expected to increase greatly their market share of the total power produced in the United States and abroad include renewable energy sources such as photovoltaics, wind, low-head hydro, and geothermal [3]. Fuel cell technology is also nearing the development point where it could start to supply a significant share of the power needs [4]. The advent of high power electronic modules has also encouraged the use of more dc transmission and made the prospects of interfacing dc power sources such as fuel cells and photovoltaics more easily attainable. A modular, scalable power electronics technology that is ideal for these types of utility applications is the transformerless multilevel converter [5]."}
{"_id": "c29ac3455cb73a777e1908b2c2c09b1d14ea1e9e", "title": "Cascaded DC-DC converter connection of photovoltaic modules", "text": "New residential scale photovoltaic (PV) arrays are commonly connected to the grid by a single dc-ac inverter connected to a series string of pv panels, or many small dc-ac inverters which connect one or two panels directly to the ac grid. This paper proposes an alternative topology of nonisolated per-panel dc-dc converters connected in series to create a high voltage string connected to a simplified dc-ac inverter. This offers the advantages of a \"converter-per-panel\" approach without the cost or efficiency penalties of individual dc-ac grid connected inverters. Buck, boost, buck-boost, and Cu/spl acute/k converters are considered as possible dc-dc converters that can be cascaded. Matlab simulations are used to compare the efficiency of each topology as well as evaluating the benefits of increasing cost and complexity. The buck and then boost converters are shown to be the most efficient topologies for a given cost, with the buck best suited for long strings and the boost for short strings. While flexible in voltage ranges, buck-boost, and Cu/spl acute/k converters are always at an efficiency or alternatively cost disadvantage."}
{"_id": "e48532330a4d3fca4f361077c8e33bdea47f3e71", "title": "An FPGA-based AES-CCM Crypto Core For IEEE 802.11i Architecture", "text": "The widespread adoption of IEEE 802.11 wireless networks has brought its security paradigm under active research. One of the important research areas in this field is the realization of fast and secure implementations of cryptographic algorithms. Under this work, such an implementation has been done for Advanced Encryption Standard (AES) on fast, efficient and low power Field Programmable Gate Arrays (FPGAs) whereby computational intensive cryptographic processes are offloaded from the main processor thus results in achieving highspeed secure wireless connectivity. The dedicated resources of Spartan-3 FPGAs have been effectively utilized to develop wider logic function which minimizes the critical paths by confining logic to single Configurable Logic Block (CLB), thus improving the performance, density and power consumption of the design. The resultant design consumes only 4 Block RAMs and 487 Slices to fit both AES cores and its key scheduling."}
{"_id": "c5fe87291747c39f56bf0cbe4499cca77ae91351", "title": "Learning an Intrinsic Image Decomposer Using Synthesized RGB-D Dataset", "text": "Intrinsic image decomposition refers to recover the albedo and shading from images, which is an ill-posed problem in signal processing. As realistic labeled data are severely lacking, it is difficult to apply learning methods in this issue. In this letter, we propose using a synthesized dataset to facilitate the solving of this problem. A physically based renderer is used to generate color images and their underlying ground-truth albedo and shading from three-dimensional models. Additionally, we render a Kinect-like noisy depth map for each instance. We utilize this synthetic dataset to train a deep neural network for intrinsic image decomposition and further fine-tune it for real-world images. Our model supports both RGB and RGB-D as input, and it employs both high-level and low-level features to avoid blurry outputs. Experimental results verify the effectiveness of our model on realistic images."}
{"_id": "c2549417671e020ab02d0e41a894d37efed6267d", "title": "Survey: Finite-state technology in natural language processing", "text": "In this survey, we will discuss current uses of finite-state information in several statistical natural language processing tasks. To this end, we will review standard approaches in tokenization, part-of-speech tagging, and parsing, and illustrate the utility of finite-state information and technology in these areas. The particular problems were chosen to allow a natural progression from simple prediction to structured prediction. We aim for a sufficiently formal presentation suitable for readers with a background in automata theory that allows to appreciate the contribution of finite-state approaches, but we will not discuss practical issues outside the core ideas. We provide instructive examples and pointers into the relevant literature for all constructions. We close with an outlook on finite-state technology in statistical machine translation."}
{"_id": "ce44a911d1742ee9c20d6ae2ba3eda486f8dc3ff", "title": "Design and performance analysis of a high-speed air-cored axial-flux permanent-magnet generator with circular magnets and coils", "text": "Air-cored, axial flux permanent-magnet (AFPM) machines have magnetic and mechanical characteristics considered to be ideal for compact high speed electric power generation applications. To date, research on high speed AFPM machines is primarily from a mechanical perspective, with emphasis on higher power level applications such as in hybrid vehicles. Recently, there has been an increasing interest in man-portable mobile power sources with a power envelope of around 1kW, for mission critical applications. In this paper, a high speed air-cored surface mounted AFPM machine with circular magnets and coils is proposed. Comprehensive theoretical analysis and three-dimensional (3-D) electromagnetic finite element analysis (FEA) are developed to evaluate the performance of the machine. In addition, the rotor's mechanical stresses developed by high rotational speed are evaluated to ensure mechanical integrity. Finally, a prototype machine is developed for validation. Both the experimental and predicted results demonstrate that the proposed generator possesses distinct advantages over other systems, such as high efficiency, high power factor, and a simple and robust structure that offers a high degree of technology readiness."}
{"_id": "46d70bc351599ab79cebf89c82745597cf6d1f01", "title": "PeerFlow: Secure Load Balancing in Tor", "text": "We present PeerFlow, a system to securely load balance client traffic in Tor. Security in Tor requires that no adversary handle too much traffic. However, Tor relays are run by volunteers who cannot be trusted to report the relay bandwidths, which Tor clients use for load balancing. We show that existing methods to determine the bandwidths of Tor relays allow an adversary with little bandwidth to attack large amounts of client traffic. These methods include Tor\u2019s current bandwidthscanning system, TorFlow, and the peer-measurement system EigenSpeed. We present an improved design called PeerFlow that uses a peer-measurement process both to limit an adversary\u2019s ability to increase his measured bandwidth and to improve accuracy. We show our system to be secure, fast, and efficient. We implement PeerFlow in Tor and demonstrate its speed and accuracy in large-scale network simulations."}
{"_id": "abbc1e4150e8fc40300d3b5d1ac05153583f4f95", "title": "Sentiment Analysis on Twitter Data", "text": "Abstract \u2013 Now-a-days social networking sites are at the boom, so large amount of data is generated. Millions of people are sharing their views daily on micro blogging sites, since it contains short and simple expressions. In this paper, we will discuss about a paradigm to extract the sentiment from a famous micro blogging service, Twitter, where users post their opinions for everything. In this paper, we will discuss the existing analysis of twitter dataset with data mining approach such as use of Sentiment analysis algorithm using machine learning algorithms. An approach is introduced that automatically classifies the sentiments of Tweets taken from Twitter dataset as in [1]. These messages or tweets are classified as positive, negative or neutral with respect to a query term. This is very useful for the companies who want to know the feedback about their product brands or the customers who want to search the opinion from others about product before purchase. We will use machine learning algorithms for classifying the sentiment of Twitter messages using distant supervision which is discussed in [8]. The training data consists of Twitter messages with emoticons, acronyms which are used as noisy labels discussed in [4]. We examine sentiment analysis on Twitter data. The contributions of this survey paper are: (1) we use Parts Of Speech (POS)-specific prior polarity features. (2) We also use a tree kernel to prevent the need for monotonous feature engineering."}
{"_id": "b77dc5d09e39caf87df306d6f1b5758907bbbe6a", "title": "Arrhythmia Identification with Two-Lead Electrocardiograms Using Artificial Neural Networks and Support Vector Machines for a Portable ECG Monitor System", "text": "An automatic configuration that can detect the position of R-waves, classify the normal sinus rhythm (NSR) and other four arrhythmic types from the continuous ECG signals obtained from the MIT-BIH arrhythmia database is proposed. In this configuration, a support vector machine (SVM) was used to detect and mark the ECG heartbeats with raw signals and differential signals of a lead ECG. An algorithm based on the extracted markers segments waveforms of Lead II and V1 of the ECG as the pattern classification features. A self-constructing neural fuzzy inference network (SoNFIN) was used to classify NSR and four arrhythmia types, including premature ventricular contraction (PVC), premature atrium contraction (PAC), left bundle branch block (LBBB), and right bundle branch block (RBBB). In a real scenario, the classification results show the accuracy achieved is 96.4%. This performance is suitable for a portable ECG monitor system for home care purposes."}
{"_id": "fa3c0d69a8e2bcbee6e53f4d021303c809061ae2", "title": "Implementation of a Cognitive Radio Front-End Using Rotatable Controlled Reconfigurable Antennas", "text": "This communication presents a new antenna system designed for cognitive radio applications. The antenna structure consists of a UWB antenna and a frequency reconfigurable antenna system. The UWB antenna scans the channel to discover \u201cwhite space\u201d frequency bands while tuning the reconfigurable section to communicate within these bands. The frequency agility is achieved via a rotational motion of the antenna patch. The rotation is controlled by a stepper motor mounted on the back of the antenna structure. The motor's rotational motion is controlled by LABVIEW on a computer connected to the motor through its parallel port. The computer's parallel port is connected to a NPN Darlington array that is used to drive the stepper motor. The antenna has been simulated with the driving motor being taken into consideration. A good agreement is found between the simulated and the measured antenna radiation properties."}
{"_id": "68f78dcd8644b99a8df8826e97df1589758bd1a8", "title": "A new ultra-wideband, ultra-short monocycle pulse generator with reduced ringing", "text": "We introduce a new ultra-wideband (UWB), ultra-short, step recovery diode monocycle pulse generator. This pulse generator uses a simple RC high-pass filter as a differentiator to generate the monocycle pulse directly. The pulse-shaping network employs a resistive circuit to achieve UWB matching and substantial removal of the pulse ringing, and rectifying and switching diodes to further suppress the ringing. An ultra-short monocycle pulse of 300-ps pulse duration, -17-dB ringing level, and good symmetry has been demonstrated. Good agreement between the measured and calculated results was achieved."}
{"_id": "1bb06f401fac046234a9eeeb8735b8a456ba9d6d", "title": "EEHC: Energy efficient heterogeneous clustered scheme for wireless sensor networks", "text": "In recent years, there has been a growing interest in wireless sensor networks. One of the major issues in wireless sensor network is developing an energy-efficient clustering protocol. Hierarchical clustering algorithms are very important in increasing the network\u2019s life time. Each clustering algorithm is composed of two phases, the setup phase and steady state phase. The hot point in these algorithms is the cluster head selection. In this paper, we study the impact of heterogeneity of nodes in terms of their energy in wireless sensor networks that are hierarchically clustered. We assume that a percentage of the population of sensor nodes is equipped with the additional energy resources. We also assume that the sensor nodes are randomly distributed and are not mobile, the coordinates of the sink and the dimensions of the sensor field are known. Homogeneous clustering protocols assume that all the sensor nodes are equipped with the same amount of energy and as a result, they cannot take the advantage of the presence of node heterogeneity. Adapting this approach, we introduce an energy efficient heterogeneous clustered scheme for wireless sensor networks based on weighted election probabilities of each node to become a cluster head according to the residual energy in each node. Finally, the simulation results demonstrate that our proposed heterogeneous clustering approach is more effective in prolonging the network lifetime compared with LEACH. 2008 Elsevier B.V. All rights reserved."}
{"_id": "d74ab74217d921df6cdf54ad75e41cbd9a058081", "title": "Online Adaptive Passive-Aggressive Methods for Non-Negative Matrix Factorization and Its Applications", "text": "This paper aims to investigate efficient and scalable machine learning algorithms for resolving Non-negative Matrix Factorization (NMF), which is important for many real-world applications, particularly for collaborative filtering and recommender systems. Unlike traditional batch learning methods, a recently proposed online learning technique named \"NN-PA\" tackles NMF by applying the popular Passive-Aggressive (PA) online learning, and found promising results. Despite its simplicity and high efficiency, NN-PA falls short in at least two critical limitations: (i) it only exploits the first-order information and thus may converge slowly especially at the beginning of online learning tasks; (ii) it is sensitive to some key parameters which are often difficult to be tuned manually, particularly in a practical online learning system. In this work, we present a novel family of online Adaptive Passive-Aggressive (APA) learning algorithms for NMF, named \"NN-APA\", which overcomes two critical limitations of NN-PA by (i) exploiting second-order information to enhance PA in making more informative updates at each iteration; and (ii) achieving the parameter auto-selection by exploring the idea of online learning with expert advice in deciding the optimal combination of the key parameters in NMF. We theoretically analyze the regret bounds of the proposed method and show its advantage over the state-of-the-art NN-PA method, and further validate the efficacy and scalability of the proposed technique through an extensive set of experiments on a variety of large-scale real recommender systems datasets."}
{"_id": "36cc47e2eb5c977836d5e06549221f1fba613274", "title": "Network-based wireless location: challenges faced in developing techniques for accurate wireless location information", "text": "Wireless location refers to the geographic coordinates of a mobile subscriber in cellular or wireless local area network (WLAN) environments. Wireless location finding has emerged as an essential public safety feature of cellular systems in response to an order issued by the Federal Communications Commission (FCC) in 1996. The FCC mandate aims to solve a serious public safety problem caused by the fact that, at present, a large proportion of all 911 calls originate from mobile phones, the location of which cannot be determined with the existing technology. However, many difficulties intrinsic to the wireless environment make meeting the FCC objective challenging. These challenges include channel fading, low signal-to-noise ratios (SNRs), multiuser interference, and multipath conditions. In addition to emergency services, there are many other applications for wireless location technology, including monitoring and tracking for security reasons, location sensitive billing, fraud protection, asset tracking, fleet management, intelligent transportation systems, mobile yellow pages, and even cellular system design and management. This article provides an overview of wireless location challenges and techniques with a special focus on network-based technologies and applications."}
{"_id": "4d657317f96981acec7d221fff227e97d705ea77", "title": "Microstrip Antennas", "text": "Microstrip antennas have been one of the niost innovative topics in antenna theory and design in recent years, and are increasingly finding application in a wide range of modern microwave systems. This paper begins with a brief overview of the basic characteristics of microstrip antennas, and then concentrates on the most significant developments in microstrip antenna technology that have been made in the last several years. Emphasis is on new antenna configurations for improved electrical performance and manufacturability, and advances in the analytical modeling of microstrip antennas and arrays."}
{"_id": "4682e3fc0dd2dd604d6128056fcabd48acb80dfd", "title": "Environmental impacts of polyvinyl chloride (PVC) production process", "text": "The increasing demand for plastics in the world leads to pressures on the environment generated by the consumption of raw materials based on fossil fuels and the need for plastic waste reduction and absorption to diminish their environmental impacts. In this paper we evaluated the environmental impacts induced by the production of polyvinyl chloride (PVC), one of the most widely used plastics in the manufacture of various goods. The impacts were calculated based on Life Cycle Assessment methodology. The negative environmental impacts of PVC production may be reduced by recycling, a sustainable, effective, currently available alternative of waste management. Our study showed that the use of recyclable materials, such as organic biodegradable waste as raw materials instead of crude oil can lead to significant reduction of environmental impacts."}
{"_id": "4efeae137ccd1ac4c10f0107597e0a9b6b2e88ba", "title": "On Networking of Internet of Things: Explorations and Challenges", "text": "Internet of Things (IoT), as the trend of future networks, begins to be used in many aspects of daily life. It is of great significance to recognize the networking problem behind developing IoT. In this paper, we first analyze and point out the key problem of IoT from the perspective of networking: how to interconnect large-scale heterogeneous network elements and exchange data efficiently. Combining our on-going works, we present some research progresses on three main aspects: 1) the basic model of IoT architecture; 2) the internetworking model; and 3) the sensor-networking mode. Finally, we discuss two remaining challenges in this area."}
{"_id": "a35d08cfb6d45d6aaa710bfcb8d37d2560cbc4fc", "title": "A Systematic Review of Service Level Management in the Cloud", "text": "Cloud computing make it possible to flexibly procure, scale, and release computational resources on demand in response to workload changes. Stakeholders in business and academia are increasingly exploring cloud deployment options for their critical applications. One open problem is that service level agreements (SLAs) in the cloud ecosystem are yet to mature to a state where critical applications can be reliably deployed in clouds. This article systematically surveys the landscape of SLA-based cloud research to understand the state of the art and identify open problems. The survey is particularly aimed at the resource allocation phase of the SLA life cycle while highlighting implications on other phases. Results indicate that (i) minimal number of SLA parameters are accounted for in most studies; (ii) heuristics, policies, and optimisation are the most commonly used techniques for resource allocation; and (iii) the monitor-analysis-plan-execute (MAPE) architecture style is predominant in autonomic cloud systems. The results contribute to the fundamentals of engineering cloud SLA and their autonomic management, motivating further research and industrial-oriented solutions."}
{"_id": "76e6d5341030b5581940d3d7b5d8d4d1fda0b479", "title": "Logo Retrieval Using Logo Proposals and Adaptive Weighted Pooling", "text": "This letter presents a novel approach for logo retrieval. Considering the fact that logo only occupies a small portion of an image, we apply Faster R-CNN to detect logo proposals first, and then use a two-step pooling strategy with adaptive weight to obtain an accurate global signature. The adaptive weighted pooling method can effectively balance the recall and precision of proposals by incorporating the probability of each proposal being a logo. Experimental results show that the proposed method interprets the similarity between query and database image more accurately and achieves state of the art performance."}
{"_id": "478fe2cf0f6cbaf8305d239d3cbbc5b33dab9294", "title": "Monolithically Integrated Si Photonics Transmitters in 0.25 um BiCMOS Platform for High-Speed Optical Communications", "text": "Monolithically integrated electro-optical transmitters fabricated in a 0.25 \u03bcm SiGe:C BiCMOS electronic-photonic integrated circuit (EPIC) technology with fT =190 GHz are presented. The modules are based on the co-design and integration of a segmented depletion-type Si Mach-Zehnder modulator and multichannel driver amplifiers. Two driving approaches and their performance trade-offs are discussed: a linear one, with direct interface to an external digital-to-analog converter (DAC), as well as a more power efficient implementation featuring integrated 4-bit DAC functionality. High extinction ratios at high speed are demonstrated, enabled by the high breakdown voltages of the HBT transistors. The modules support several modulation formats among which pulse-amplitude-modulation (PAM)\u22124 eye diagrams up to 37 GBd are shown, for the first time."}
{"_id": "a9791c944e9933fae1d25ebdb053f8572497b40c", "title": "IT Outsourcing Strategies: Universalistic, Contingency, and Configurational Explanations of Success", "text": "F on individual outsourcing decisions in IT research has often yielded contradictory findings and recommendations. To address these contradictions, we investigate a holistic, configurational approach with the prevailing universalistic or contingency perspectives in exploring the effects of IT outsourcing strategies on outsourcing success. Based on residual rights theory, we begin by identifying three dimensions of IT outsourcing strategies: degree of integration, allocation of control, and performance period. We then develop a model of fit-as-gestalt, drawing from literatures on strategy, governance, interorganizational relationships, and outsourcing. Next, based on data from 311 firms in South Korea, we test universalistic and contingency perspectives in explaining the relationship between IT outsourcing strategies and outsourcing success. We then identify three congruent patterns, or gestalts, of IT outsourcing strategies. We term these strategies independent, arm\u2019s-length, and embedded strategies. To establish the predictive validity of these gestalts and the viability of a configurational perspective, we then explore the effects of these congruent gestalts vis-\u00e0-vis noncongruent patterns on three dimensions of outsourcing success: strategic competence, cost efficiency, and technology catalysis. We also contrast the effects of each of the three gestalts on each of the three dimensions of outsourcing success. Our findings indicate the superiority of the configurational approach over universalistic and contingency perspectives in explaining outsourcing success."}
{"_id": "281d46e96a589cb7e3d17222ef5c09d35fbbcdd7", "title": "An ontology for representing financial headline news", "text": "This paper presents the development of an ontology to represent financial headline news. This ontology is developed using the On-To-Knowledge methodology where the focus is on the design steps of the Knowledge Meta Process. This development is part of an ongoing project which aims to design a virtual stock market simulator based on multi-agent systems. The proposed ontology has 31 concepts and includes 201 attributes. The testing results conducted on reliable headline news show that 99% of these headline news can be properly represented by the attributes of the right category in the ontology. Unreliable headline news characterized by news having uncertainties, incompleteness, ill-definition, or imprecision cannot be represented by the proposed ontology. Approaches for representing these unreliable headline"}
{"_id": "893e9798b276285a05bcba80db42480819b8a648", "title": "The real-time video stabilization for the rescue robot", "text": "When rescue robots navigate in a rough terrain, significant vibration of the video occurs unavoidably and a video stabilization system is proposed in this paper to reduce the disturbance on the visual system of the present rescue robots. The Kalman filter is applied to estimate the motion vector of the robot and the false estimation occurrence can be thus greatly reduced by applying analysis of correlation and variance of the motion vector estimation. With a hierarchical searching algorithm implemented on the TI DSP 6437, the frame rate can be improved from 10 fps to 28 fps to realize the real-time video stabilization and furthermore, the cooperative mission of multiple rescue robots has been achieved by applying the developed visual servo technique. The proposed virtual bounded motion control algorithm further leads the robots approaching the target precisely with cooperation of multiple rescue robots. The demo video is available at: http://lab816.cn.nctu.edu.tw/DemoVideo/. video stabilization"}
{"_id": "4361e64f2d12d63476fdc88faf72a0f70d9a2ffb", "title": "Bridging Nonlinearities and Stochastic Regularizers with Gaussian Error Linear Units", "text": "We propose the Gaussian Error Linear Unit (GELU), a high-performing neural network activation function. The GELU nonlinearity is the expected transformation of a stochastic regularizer which randomly applies the identity or zero map, combining the intuitions of dropout and zoneout while respecting neuron values. This connection suggests a new probabilistic understanding of nonlinearities. We perform an empirical evaluation of the GELU nonlinearity against the ReLU and ELU activations and find performance improvements across all tasks."}
{"_id": "00d6b0615ed514b03eac655ce7c8d624fff3b89b", "title": "Device-to-Device Communication Underlaying a Finite Cellular Network Region", "text": "Underlay in-band device-to-device (D2D) communication can improve the spectrum efficiency of cellular networks. However, the coexistence of D2D and cellular users causes inter-cell and intra-cell interference. The former can be effectively managed through inter-cell interference coordination and, therefore, is not considered in this paper. Instead, we focus on the intra-cell interference and propose a D2D mode selection scheme to manage it inside a finite cellular network region. The potential D2D users are controlled by the base station (BS) to operate in D2D mode based on the average interference generated to the BS. Using stochastic geometry, we study the outage probability experienced at the BS and a D2D receiver, and spectrum reuse ratio, which quantifies the average fraction of successfully transmitting D2D users. The analysis shows that the outage probability at the D2D receiver varies for different locations. In addition, without impairing the performance at the BS, if the path-loss exponent on the cellular link is slightly lower than that on the D2D link, the spectrum reuse ratio can have negligible decrease, while the D2D users\u2019 average number of successful transmissions increases with increasing D2D node density. This indicates that an increasing level of D2D communication can be beneficial in future networks."}
{"_id": "71d7a5110d0bdacec57e7dd2ece7bc13f84614e6", "title": "BLAT--the BLAST-like alignment tool.", "text": "Analyzing vertebrate genomes requires rapid mRNA/DNA and cross-species protein alignments. A new tool, BLAT, is more accurate and 500 times faster than popular existing tools for mRNA/DNA alignments and 50 times faster for protein alignments at sensitivity settings typically used when comparing vertebrate sequences. BLAT's speed stems from an index of all nonoverlapping K-mers in the genome. This index fits inside the RAM of inexpensive computers, and need only be computed once for each genome assembly. BLAT has several major stages. It uses the index to find regions in the genome likely to be homologous to the query sequence. It performs an alignment between homologous regions. It stitches together these aligned regions (often exons) into larger alignments (typically genes). Finally, BLAT revisits small internal exons possibly missed at the first stage and adjusts large gap boundaries that have canonical splice sites where feasible. This paper describes how BLAT was optimized. Effects on speed and sensitivity are explored for various K-mer sizes, mismatch schemes, and number of required index matches. BLAT is compared with other alignment programs on various test sets and then used in several genome-wide applications. http://genome.ucsc.edu hosts a web-based BLAT server for the human genome."}
{"_id": "01dbc5466cce6abd567cc5b34a481f5c438fb15a", "title": "The Web Never Forgets: Persistent Tracking Mechanisms in the Wild", "text": "We present the first large-scale studies of three advanced web tracking mechanisms - canvas fingerprinting, evercookies and use of \"cookie syncing\" in conjunction with evercookies. Canvas fingerprinting, a recently developed form of browser fingerprinting, has not previously been reported in the wild; our results show that over 5% of the top 100,000 websites employ it. We then present the first automated study of evercookies and respawning and the discovery of a new evercookie vector, IndexedDB. Turning to cookie syncing, we present novel techniques for detection and analysing ID flows and we quantify the amplification of privacy-intrusive tracking practices due to cookie syncing.\n Our evaluation of the defensive techniques used by privacy-aware users finds that there exist subtle pitfalls --- such as failing to clear state on multiple browsers at once - in which a single lapse in judgement can shatter privacy defenses. This suggests that even sophisticated users face great difficulties in evading tracking techniques."}
{"_id": "05ad6c3ab7a0b1ab0c4fc3af9f1622cf6c0fa40e", "title": "Detecting and Defending Against Third-Party Tracking on the Web", "text": "While third-party tracking on the web has garnered much attention, its workings remain poorly understood. Our goal is to dissect how mainstream web tracking occurs in the wild. We develop a client-side method for detecting and classifying five kinds of third-party trackers based on how they manipulate browser state. We run our detection system while browsing the web and observe a rich ecosystem, with over 500 unique trackers in our measurements alone. We find that most commercial pages are tracked by multiple parties, trackers vary widely in their coverage with a small number being widely deployed, and many trackers exhibit a combination of tracking behaviors. Based on web search traces taken from AOL data, we estimate that several trackers can each capture more than 20% of a user\u2019s browsing behavior. We further assess the impact of defenses on tracking and find that no existing browser mechanisms prevent tracking by social media sites via widgets while still allowing those widgets to achieve their utility goals, which leads us to develop a new defense. To the best of our knowledge, our work is the most complete study of web tracking to date."}
{"_id": "0d2f693901fba451ede4d388724b0e3f57029cd3", "title": "Cookieless Monster: Exploring the Ecosystem of Web-Based Device Fingerprinting", "text": "The web has become an essential part of our society and is currently the main medium of information delivery. Billions of users browse the web on a daily basis, and there are single websites that have reached over one billion user accounts. In this environment, the ability to track users and their online habits can be very lucrative for advertising companies, yet very intrusive for the privacy of users. In this paper, we examine how web-based device fingerprinting currently works on the Internet. By analyzing the code of three popular browser-fingerprinting code providers, we reveal the techniques that allow websites to track users without the need of client-side identifiers. Among these techniques, we show how current commercial fingerprinting approaches use questionable practices, such as the circumvention of HTTP proxies to discover a user's real IP address and the installation of intrusive browser plugins. At the same time, we show how fragile the browser ecosystem is against fingerprinting through the use of novel browser-identifying techniques. With so many different vendors involved in browser development, we demonstrate how one can use diversions in the browsers' implementation to distinguish successfully not only the browser-family, but also specific major and minor versions. Browser extensions that help users spoof the user-agent of their browsers are also evaluated. We show that current commercial approaches can bypass the extensions, and, in addition, take advantage of their shortcomings by using them as additional fingerprinting features."}
{"_id": "254f86dc50c6a2e2bce7241416372c290883e7ec", "title": "Understanding Malvertising Through Ad-Injecting Browser Extensions", "text": "Malvertising is a malicious activity that leverages advertising to distribute various forms of malware. Because advertising is the key revenue generator for numerous Internet companies, large ad networks, such as Google, Yahoo and Microsoft, invest a lot of effort to mitigate malicious ads from their ad networks. This drives adversaries to look for alternative methods to deploy malvertising. In this paper, we show that browser extensions that use ads as their monetization strategy often facilitate the deployment of malvertising. Moreover, while some extensions simply serve ads from ad networks that support malvertising, other extensions maliciously alter the content of visited webpages to force users into installing malware. To measure the extent of these behaviors we developed Expector, a system that automatically inspects and identifies browser extensions that inject ads, and then classifies these ads as malicious or benign based on their landing pages. Using Expector, we automatically inspected over 18,000 Chrome browser extensions. We found 292 extensions that inject ads, and detected 56 extensions that participate in malvertising using 16 different ad networks and with a total user base of 602,417."}
{"_id": "bba5b4d1ec0e9f254d82b7a755596e825a975ca0", "title": "Trends and Lessons from Three Years Fighting Malicious Extensions", "text": "In this work we expose wide-spread efforts by criminals to abuse the Chrome Web Store as a platform for distributing malicious extensions. A central component of our study is the design and implementation of WebEval, the first system that broadly identifies malicious extensions with a concrete, measurable detection rate of 96.5%. Over the last three years we detected 9,523 malicious extensions: nearly 10% of every extension submitted to the store. Despite a short window of operation\u2014we removed 50% of malware within 25 minutes of creation\u2014 a handful of under 100 extensions escaped immediate detection and infected over 50 million Chrome users. Our results highlight that the extension abuse ecosystem is drastically different from malicious binaries: miscreants profit from web traffic and user tracking rather than email spam or banking theft."}
{"_id": "ed978e72b130befb2ae463ac5442296498047f93", "title": "Study of Lightning Surge Overvoltages at Substations Due to Direct Lightning Strokes to Phase Conductors", "text": "Accurate predictions of lightning surge overvoltages are essential to power equipment insulation design. Recent observations of lightning strokes to ultra-high-voltage designed transmission lines confirmed direct lightning strokes caused by shielding failure and found phenomena unexplainable by conventional shielding theories. However, there are few detailed studies of direct lightning surge overvoltages. This study assumed direct lightning stroke currents based on observational data and performs electromagnetic transient program analysis of the gas-insulated switchgear (GIS) and transformer overvoltages at substations to study the basic characteristics of overvoltages due to direct lightning strokes and evaluate lightning protection design. Consequently, the maximum GIS overvoltages were found during back-flashovers, and the locations of maximum overvoltages from direct lightning strokes and back-flashovers differ. Direct lightning strokes may be more severe than back-flashovers for transformers. This paper also studied the overvoltage generation mechanism and showed the relationship of the maximum voltage to lightning stroke current and transformer capacitance."}
{"_id": "0352893287ea6c7d6a65946706b1b75cbe598798", "title": "A Fast Monte-Carlo Test for Primality", "text": ""}
{"_id": "98ac8a3a694dc7a9672bbd9be6e8b5e02d85bcb9", "title": "GISMO: a Graphical Interactive Student Monitoring Tool for Course Management Systems", "text": "This paper presents GISMO, a graphical interactive student monitoring and tracking system tool that extracts tracking data from an on-line course maintained with a Course Management System, and generates graphical representations that can be explored by course instructors to examine various aspects of distance students. GISMO uses techniques from Information Visualisation to render into an appropriate graphical manner the complex multidimensional student tracking data provided by the Course Management System. GISMO aims to help instructors to become aware of what is happening in their classes and provide a better support to their learners."}
{"_id": "80f74c88376d7553a70808eb59bb110db7c22bdb", "title": "Mapping learning and game mechanics for serious games analysis", "text": "While there is a consensus on the instructional potential of Serious Games (SGs), there is still a lack of methodologies and tools not only for design but also to support analysis and assessment. Filling this gap is one of the main aims of the Games and Learning Alliance (GALA, www.galanoe.eu) European Network of Excellence on Serious Games (SGs), which has a focus upon pedagogy-driven SGs. This paper relies on the assumption that the fundamental aspect of SG design consists in the translation of learning goals/practices into mechanical element of game-play, serving to an instructional purpose beside that of play and fun. This paper proposes the Learning Mechanics-Game Mechanics (LM-GM) model, which supports SG analysis and design by allowing reflection on the various pedagogical and game elements in a SG. The LM-GM model includes a set of pre-defined game mechanics and pedagogical elements that we have abstracted from literature on game studies and learning theories. Designers and analysts can exploit these mechanics to draw the LM-GM map for a game, so as to identify and highlight its main pedagogical and entertainment features, and their interrelations. The tool may also be useful for teachers to evaluate the effectiveness of a given game and better understand how to implement it in educational settings. A case study is reported to illustrate the framework\u2019s support in determining how game-play and pedagogy intertwine in a SG. Finally, the paper presents the results of two comparative user tests demonstrating the advantages of the proposed model with respect to a similar state-of-the-art framework."}
{"_id": "b9e8407809babd5c8fe3a5334e0660f80f704767", "title": "Colyseus: A Distributed Architecture for Online Multiplayer Games", "text": "This paper presents the design, implementation, and evaluation of Colyseus, a distributed architecture for interactive multiplayer games. Colyseus takes advantage of a game\u2019s tolerance for weakly consistent state and predictable workload to meet the tight latency constraints of game-play and maintain scalable communication costs. In addition, it provides a rich distributed query interface and effective pre-fetching subsystem to help locate and replicate objects before they are accessed at a node. We have implemented Colyseus and modified Quake II, a popular first person shooter game, to use it. Our measurements of Quake II and our own Colyseus-based game with hundreds of players shows that Colyseus effectively distributes game traffic across the participating nodes, allowing Colyseus to support low-latency game-play for an order of magnitude more players than existing single server designs, with similar per-node bandwidth costs."}
{"_id": "4857ffdcbb4dd5b2b8da968f9fb2b0caefc7bde8", "title": "AskWorld: Budget-Sensitive Query Evaluation for Knowledge-on-Demand", "text": "Recently, several Web-scale knowledge harvesting systems have been built, each of which is competent at extracting information from certain types of data (e.g., unstructured text, structured tables on the web, etc.). In order to determine the response to a new query posed to such systems (e.g., is sugar a healthy food?), it is useful to integrate opinions from multiple systems. If a response is desired within a specific time budget (e.g., in less than 2 seconds), then maybe only a subset of these resources can be queried. In this paper, we address the problem of knowledge integration for on-demand time-budgeted query answering. We propose a new method, AskWorld, which learns a policy that chooses which queries to send to which resources, by accommodating varying budget constraints that are available only at query (test) time. Through extensive experiments on real world datasets, we demonstrate AskWorld\u2019s capability in selecting most informative resources to query within test-time constraints, resulting in improved performance compared to competitive baselines."}
{"_id": "167b82f3ff315610529ebda75718142bac2f92f0", "title": "Model-free control", "text": "Model-free control\u201d and the corresponding \u201cintelligent\u201d PID controllers (iPIDs), which already had many successful concrete applications, are presented here for the first time in an unified manner, where the new advances are taken into account. The basics of model-free control is now employing some old functional analysis and some elementary differential algebra. The estimation techniques become quite straightforward via a recent online parameter identification approach. The importance of iPIs and especially of iPs is deduced from the presence of friction. The strange industrial ubiquity of classic PID\u2019s and the great difficulty for tuning them in complex situations is deduced, via an elementary sampling, from their connections with iPIDs. Several numerical simulations are presented which include some infinite-dimensional systems. They demonstrate not only the power of our intelligent controllers but also the great simplicity for tuning them."}
{"_id": "873a493e553e4278040ff0b552eb1ae59f008c02", "title": "A Distributed Test System Architecture for Open-source IoT Software", "text": "In this paper, we discuss challenges that are specific to testing of open IoT software systems. The analysis reveals gaps compared to wireless sensor networks as well as embedded software. We propose a testing framework which (a) supports continuous integration techniques, (b) allows for the integration of project contributors to volunteer hardware and software resources to the test system, and (c) can function as a permanent distributed plugtest for network interoperability testing. The focus of this paper lies in open-source IoT development but many aspects are also applicable to closed-source projects."}
{"_id": "118e281d12ae4a623bf2b1bbdcda188c376343c0", "title": "On Poisson Graphical Models", "text": "Undirected graphical models, such as Gaussian graphical models, Ising, and multinomial/categorical graphical models, are widely used in a variety of applications for modeling distributions over a large number of variables. These standard instances, however, are ill-suited to modeling count data, which are increasingly ubiquitous in big-data settings such as genomic sequencing data, user-ratings data, spatial incidence data, climate studies, and site visits. Existing classes of Poisson graphical models, which arise as the joint distributions that correspond to Poisson distributed node-conditional distributions, have a major drawback: they can only model negative conditional dependencies for reasons of normalizability given its infinite domain. In this paper, our objective is to modify the Poisson graphical model distribution so that it can capture a rich dependence structure between count-valued variables. We begin by discussing two strategies for truncating the Poisson distribution and show that only one of these leads to a valid joint distribution. While this model can accommodate a wider range of conditional dependencies, some limitations still remain. To address this, we investigate two additional novel variants of the Poisson distribution and their corresponding joint graphical model distributions. Our three novel approaches provide classes of Poisson-like graphical models that can capture both positive and negative conditional dependencies between count-valued variables. One can learn the graph structure of our models via penalized neighborhood selection, and we demonstrate the performance of our methods by learning simulated networks as well as a network from microRNA-sequencing data."}
{"_id": "f52d4b5beab9ae573f1a8314bfd888eebe83e55b", "title": "Different voltage-dependent thresholds for inducing long-term depression and long-term potentiation in slices of rat visual cortex", "text": "IN the hippocampus and neocortex, high-frequency (tetanic) stimu-lation of an afferent pathway leads to long-term potentiation (LTP) of synaptic transmission1\u20135. In the hippocampus it has recently been shown that long-term depression (LTD) of excitatory transmission can also be induced by certain combinations of synaptic activation6,7. In most hippocampal8 and all neocortical pathways4,9\u201311 studied so far, the induction of LTP requires the activation of JV-methyl-D-aspartate (NMDA) receptorgated conductances. Here we report that LTD can occur in neurons of slices of the rat visual cortex and that the same tetanic stimulation can induce either LTP or LTD depending on the level of depolarization of the postsynaptic neuron. By applying intracellular current injections or pharmacological disinhibition to modify the depolarizing response of the postsynaptic neuron to tetanic stimulation, we show that the mechanisms of induction of LTD and LTP are both postsynaptic. LTD is obtained if postsynaptic depolarization exceeds a critical level but remains below a threshold related to NMDA receptorgated conductances, whereas LTP is induced if this second threshold is reached."}
{"_id": "2ddb82c5e45e839d15a1579e530b7e640114fdff", "title": "Identifying adverse drug events from patient social media: A case study for diabetes", "text": "Patient social media sites have emerged as major platforms for discussion of treatments and drug side effects, making them a promising source for listening to patients' voices in adverse drug event reporting. However, extracting patient reports from social media continues to be a challenge in health informatics research. In light of the need for more robust extraction methods, the authors developed a novel information extraction framework for identifying adverse drug events from patient social media. They also conducted a case study on a major diabetes patient social media platform to evaluate their framework's performance. Their approach achieves an f-measure of 86 percent in recognizing discussion of medical events and treatments, an f-measure of 69 percent for identifying adverse drug events, and an f-measure of 84 percent in patient report extraction. Their proposed methods significantly outperformed prior work in extracting patient reports of adverse drug events in health social media."}
{"_id": "b0112ff95822f147c2eb3374806739a1acc7bbbe", "title": "Slip detection and prediction in human walking using only wearable inertial measurement units (IMUs)", "text": "Slip and fall is one of the major causes for human injuries for elders and professional workers. Real-time detection and prediction of the foot slip is critical for developing effective assistive and rehabilitation devices to prevent falls and train balance disorder patients. This paper presents a novel real-time slip detection and prediction scheme with wearable inertial measurement units (IMUs). The slip-detection algorithm is built on a new dynamic model for bipedal walking with slips. An extended Kalman filter is designed to reliably predict the foot slip displacement using the wearable IMU measurements and kinematic constraints. The proposed slip detection and prediction scheme has been demonstrated by extensive experiments."}
{"_id": "edb588042f3e259916be75f565592d1e04b6de8b", "title": "Graphite: Iterative Generative Modeling of Graphs", "text": "Graphs are a fundamental abstraction for modeling relational data. However, graphs are discrete and combinatorial in nature, and learning representations suitable for machine learning tasks poses statistical and computational challenges. In this work, we propose Graphite an algorithmic framework for unsupervised learning of representations over nodes in a graph using deep latent variable generative models. Our model is based on variational autoencoders (VAE), and uses graph neural networks for parameterizing both the generative model (i.e., decoder) and inference model (i.e., encoder). The use of graph neural networks directly incorporates inductive biases due to the spatial, local structure of graphs directly in the generative model. We draw novel connections of our framework with approximate inference via kernel embeddings. Empirically, Graphite outperforms competing approaches for the tasks of density estimation, link prediction, and node classification on synthetic and benchmark datasets."}
{"_id": "eaeb504b664e27e5b24e123893a4dbbfdbb62cee", "title": "Personalized Dialogue Generation with Diversified Traits", "text": "Endowing a dialogue system with particular personality traits is essential to deliver more human-like conversations. However, due to the challenge of embodying personality via language expression and the lack of large-scale persona-labeled dialogue data, this research problem is still far from well-studied. In this paper, we investigate the problem of incorporating explicit personality traits in dialogue generation to deliver personalized dialogues. To this end, firstly, we construct PersonalDialog, a large-scale multi-turn dialogue dataset containing various traits from a large number of speakers. The dataset consists of 20.83M sessions and 56.25M utterances from 8.47M speakers. Each utterance is associated with a speaker who is marked with traits like Age, Gender, Location, Interest Tags, etc. Several anonymization schemes are designed to protect the privacy of each speaker. This large-scale dataset will facilitate not only the study of personalized dialogue generation, but also other researches on sociolinguistics or social science. Secondly, to study how personality traits can be captured and addressed in dialogue generation, we propose persona-aware dialogue generation models within the sequence to sequence learning framework. Explicit personality traits (structured by key-value pairs) are embedded using a trait fusion module. During the decoding process, two techniques, namely persona-aware attention and persona-aware bias, are devised to capture and address trait-related information. Experiments demonstrate that our model is able to address proper traits in different contexts. Case studies also show interesting results for this challenging research problem."}
{"_id": "960d9d72f1af109460cb9868891bee78a68b60cc", "title": "Realizing underwater communication through magnetic induction", "text": "The majority of the work on underwater communication has mainly been based on acoustic communication. Acoustic communication faces many known problems, such as high propagation delays, very low data rates, and highly environment-dependent channel behavior. In this article, to address these shortcomings, magnetic induction is introduced as a possible communication paradigm for underwater applications. Accordingly, all research challenges in this regard are explained. Fundamentally different from the conventional underwater communication paradigm, which relies on EM, acoustic, or optical waves, the underwater MI communications rely on the time varying magnetic field to covey information between the transmitting and receiving parties. MI-based underwater communications exhibit several unique and promising features such as negligible signal propagation delay, predictable and constant channel behavior, sufficiently long communication range with high bandwidth, as well as silent and stealth underwater operations. To fully utilize the promising features of underwater MI-based communications, this article introduces the fundamentals of underwater MI communications, including the MI channel models, MI networking protocols design, and MI-based underwater localization."}
{"_id": "732a8e952e361b5aab1c7e1815ccf1c7daf84b7b", "title": "Integrating Configuration Management with Model-driven Cloud Management based on TOSCA", "text": "The paradigm of Cloud computing introduces new approaches to manage IT services going beyond concepts originating in traditional IT service management. The main goal is to automate the whole management of services to reduce costs and to make management tasks less error-prone. Two different service management paradigms are used in practice: configuration management and model-driven Cloud management. The latter one aims to be a holistic management approach for services in the Cloud. However, both management paradigms are originating in different backgrounds, thus model-driven Cloud management does not cover all aspects of configuration management that are key for Cloud services. This paper presents approaches for integrating configuration management with model-driven Cloud management and how they can be realized based on the OASIS Topology and Orchestration Specification for Cloud Applications and Chef, a popular configuration management tool. These approaches enable the creation of holistic and highly portable service models."}
{"_id": "9f417a35b56b609ef006713f634a14227f1ef59b", "title": "The BeiHang Keystroke Dynamics Authentication System", "text": "Keystroke Dynamics is an important biometric solution for person authentication. Based upon keystroke dynamics, this paper designs an embedded password protection device, develops an online system, collects two public databases for promoting the research on keystroke authentication, exploits the Gabor filter bank to characterize keystroke dynamics, and provides benchmark results of three popular classification algorithms, one-class support vector machine, Gaussian classifier, and nearest neighbour classifier."}
{"_id": "c977e3c8a53143f2158234ec68eee29e409ffdeb", "title": "Google Web 1T 5-Grams Made Easy (but not for the computer)", "text": "This paper introduces Web1T5-Easy, a simple indexing solution that allows interactive searches of the Web 1T 5-gram database and a derived database of quasi-collocations. The latter is validated against co-occurrence data from the BNC and ukWaC on the automatic identification of non-compositional VPC."}
{"_id": "3e8bc8ed766a5fb0224bf61409599de21c221c56", "title": "A Face in any Form: New Challenges and Opportunities for Face Recognition Technology", "text": "Despite new technologies that make face detection and recognition more sophisticated, long-recognized problems in security, privacy, and accuracy persist. Refining this technology and introducing it into new domains will require solving these problems through focused interdisciplinary efforts among developers, researchers, and policymakers."}
{"_id": "4e9f509f3f2da9d3c9ad85f78de3907458f72abe", "title": "Practical Lattice-Based Cryptography: A Signature Scheme for Embedded Systems", "text": "Nearly all of the currently used and well-tested signature schemes (e.g. RSA or DSA) are based either on the factoring assumption or the presumed intractability of the discrete logarithm problem. Further algorithmic advances on these problems may lead to the unpleasant situation that a large number of schemes have to be replaced with alternatives. In this work we present such an alternative \u2013 a signature scheme whose security is derived from the hardness of lattice problems. It is based on recent theoretical advances in lattice-based cryptography and is highly optimized for practicability and use in embedded systems. The public and secret keys are roughly 12000 and 2000 bits long, while the signature size is approximately 9000 bits for a security level of around 100 bits. The implementation results on reconfigurable hardware (Spartan/Virtex 6) are very promising and show that the scheme is scalable, has low area consumption, and even outperforms some classical schemes."}
{"_id": "cae927872e8f49ebfc1307dd1583a87f3ded4b75", "title": "A new primary power regulation method for contactless power transfer", "text": "This paper presents a new power regulation method for contactless power transfer systems that combines current magnitude control of a series tuned resonant converter with energy injection concepts suitable for constant load or single pick-up applications. In the latter, the control of the primary converter operates with asymmetrical control, to regulate the magnitude of the high frequency primary track current according to the load requirement of the secondary power pickup. The feedback control algorithm is developed by monitoring the output voltage of the secondary power pickup. Switching losses and EMI are reduced due to the ZCS (Zero Current Switching) operation of the converter. Simulation and experimental results show that the primary converter has a fast and smooth start-up transient response, and the magnitudes of the primary current and the output voltage of a secondary power pickup are both fully controllable against load variations."}
{"_id": "3c17ae1d909890012472821c2a689fe39947ac2b", "title": "Clustering Web pages based on their structure", "text": "Several techniques have been recently proposed to automatically generate Web wrappers, i.e., programs that extract data from HTML pages, and transform them into a more structured format, typically in XML. These techniques automatically induce a wrapper from a set of sample pages that share a common HTML template. An open issue, however, is how to collect suitable classes of sample pages to feed the wrapper inducer. Presently, the pages are chosen manually. In this paper, we tackle the problem of automatically discovering the main classes of pages offered by a site by exploring only a small yet representative portion of it. We propose a model to describe abstract structural features of HTML pages. Based on this model, we have developed an algorithm that accepts the URL of an entry point to a target Web site, visits a limited yet representative number of pages, and produces an accurate clustering of pages based on their structure. We have developed a prototype, which has been used to perform experiments on real-life Web sites. 2004 Elsevier B.V. All rights reserved."}
{"_id": "3ccb694c2363d73afa300b26bfa8b59115a7f278", "title": "Characterizing gene sets using discriminative random walks with restart on heterogeneous biological networks", "text": "MOTIVATION\nAnalysis of co-expressed gene sets typically involves testing for enrichment of different annotations or 'properties' such as biological processes, pathways, transcription factor binding sites, etc., one property at a time. This common approach ignores any known relationships among the properties or the genes themselves. It is believed that known biological relationships among genes and their many properties may be exploited to more accurately reveal commonalities of a gene set. Previous work has sought to achieve this by building biological networks that combine multiple types of gene-gene or gene-property relationships, and performing network analysis to identify other genes and properties most relevant to a given gene set. Most existing network-based approaches for recognizing genes or annotations relevant to a given gene set collapse information about different properties to simplify (homogenize) the networks.\n\n\nRESULTS\nWe present a network-based method for ranking genes or properties related to a given gene set. Such related genes or properties are identified from among the nodes of a large, heterogeneous network of biological information. Our method involves a random walk with restarts, performed on an initial network with multiple node and edge types that preserve more of the original, specific property information than current methods that operate on homogeneous networks. In this first stage of our algorithm, we find the properties that are the most relevant to the given gene set and extract a subnetwork of the original network, comprising only these relevant properties. We then re-rank genes by their similarity to the given gene set, based on a second random walk with restarts, performed on the above subnetwork. We demonstrate the effectiveness of this algorithm for ranking genes related to Drosophila embryonic development and aggressive responses in the brains of social animals.\n\n\nAVAILABILITY AND IMPLEMENTATION\nDRaWR was implemented as an R package available at veda.cs.illinois.edu/DRaWR.\n\n\nCONTACT\nblatti@illinois.edu\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."}
{"_id": "0e1ebbf27b303b8ccf62176e6e9370261963e2c0", "title": "Differences in the mechanics of information diffusion across topics: idioms, political hashtags, and complex contagion on twitter", "text": "There is a widespread intuitive sense that different kinds of information spread differently on-line, but it has been difficult to evaluate this question quantitatively since it requires a setting where many different kinds of information spread in a shared environment. Here we study this issue on Twitter, analyzing the ways in which tokens known as hashtags spread on a network defined by the interactions among Twitter users. We find significant variation in the ways that widely-used hashtags on different topics spread.\n Our results show that this variation is not attributable simply to differences in \"stickiness,\" the probability of adoption based on one or more exposures, but also to a quantity that could be viewed as a kind of \"persistence\" - the relative extent to which repeated exposures to a hashtag continue to have significant marginal effects. We find that hashtags on politically controversial topics are particularly persistent, with repeated exposures continuing to have unusually large marginal effects on adoption; this provides, to our knowledge, the first large-scale validation of the \"complex contagion\" principle from sociology, which posits that repeated exposures to an idea are particularly crucial when the idea is in some way controversial or contentious. Among other findings, we discover that hashtags representing the natural analogues of Twitter idioms and neologisms are particularly non-persistent, with the effect of multiple exposures decaying rapidly relative to the first exposure.\n We also study the subgraph structure of the initial adopters for different widely-adopted hashtags, again finding structural differences across topics. We develop simulation-based and generative models to analyze how the adoption dynamics interact with the network structure of the early adopters on which a hashtag spreads."}
{"_id": "3145fc2e5cbdf877ef07f7408dcaee5e44ba6d4f", "title": "Meme-tracking and the dynamics of the news cycle", "text": "Tracking new topics, ideas, and \"memes\" across the Web has been an issue of considerable interest. Recent work has developed methods for tracking topic shifts over long time scales, as well as abrupt spikes in the appearance of particular named entities. However, these approaches are less well suited to the identification of content that spreads widely and then fades over time scales on the order of days - the time scale at which we perceive news and events.\n We develop a framework for tracking short, distinctive phrases that travel relatively intact through on-line text; developing scalable algorithms for clustering textual variants of such phrases, we identify a broad class of memes that exhibit wide spread and rich variation on a daily basis. As our principal domain of study, we show how such a meme-tracking approach can provide a coherent representation of the news cycle - the daily rhythms in the news media that have long been the subject of qualitative interpretation but have never been captured accurately enough to permit actual quantitative analysis. We tracked 1.6 million mainstream media sites and blogs over a period of three months with the total of 90 million articles and we find a set of novel and persistent temporal patterns in the news cycle. In particular, we observe a typical lag of 2.5 hours between the peaks of attention to a phrase in the news media and in blogs respectively, with divergent behavior around the overall peak and a \"heartbeat\"-like pattern in the handoff between news and blogs. We also develop and analyze a mathematical model for the kinds of temporal variation that the system exhibits."}
{"_id": "09031aa6d6743bebebc695955cd77c032cd9192f", "title": "Group formation in large social networks: membership, growth, and evolution", "text": "The processes by which communities come together, attract new members, and develop over time is a central research issue in the social sciences - political movements, professional organizations, and religious denominations all provide fundamental examples of such communities. In the digital domain, on-line groups are becoming increasingly prominent due to the growth of community and social networking sites such as MySpace and LiveJournal. However, the challenge of collecting and analyzing large-scale time-resolved data on social groups and communities has left most basic questions about the evolution of such groups largely unresolved: what are the structural features that influence whether individuals will join communities, which communities will grow rapidly, and how do the overlaps among pairs of communities change over time.Here we address these questions using two large sources of data: friendship links and community membership on LiveJournal, and co-authorship and conference publications in DBLP. Both of these datasets provide explicit user-defined communities, where conferences serve as proxies for communities in DBLP. We study how the evolution of these communities relates to properties such as the structure of the underlying social networks. We find that the propensity of individuals to join communities, and of communities to grow rapidly, depends in subtle ways on the underlying network structure. For example, the tendency of an individual to join a community is influenced not just by the number of friends he or she has within the community, but also crucially by how those friends are connected to one another. We use decision-tree techniques to identify the most significant structural determinants of these properties. We also develop a novel methodology for measuring movement of individuals between communities, and show how such movements are closely aligned with changes in the topics of interest within the communities."}
{"_id": "3f52e3e498b81356682e7746910a14d4c3851a77", "title": "INFORMATION AND THE CHANGE IN THE PARADIGM IN ECONOMICS , PART 1 by", "text": "The research for which George Akerlof, Mike Spence, and I are being recognized is part of a larger research program which, today, embraces hundred, perhaps thousands, of researchers around the world. In this lecture, I want to set the particular work which was sited within this broader agenda, and that agenda within the broader perspective of the history of economic thought. I hope to show that Information Economics represents a fundamental change in the prevailing paradigm within economics. Problems of information are central to understanding not only market economics but also political economy, and in the last section of this lecture, I explore some of the implications of information imperfections for political processes."}
{"_id": "1a7c35dc1e98fa2bf32cd0632a1be240f8c45831", "title": "Approximate logic synthesis for error tolerant applications", "text": "Error tolerance formally captures the notion that -- for a wide variety of applications including audio, video, graphics, and wireless communications -- a defective chip that produces erroneous values at its outputs may be acceptable, provided the errors are of certain types and their severities are within application-specified thresholds. All previous research on error tolerance has focused on identifying such defective but acceptable chips during post-fabrication testing to improve yield. In this paper, we explore a completely new approach to exploit error tolerance based on the following observation: If certain deviations from the nominal output values are acceptable, then we can exploit this flexibility during circuit design to reduce circuit area and delay as well as to increase yield. The specific metric of error tolerance we focus on is error rate, i.e., how often the circuit produces erroneous outputs. We propose a new logic synthesis approach for the new problem of identifying how to exploit a given error rate threshold to maximally reduce the area of the synthesized circuit. Experiment results show that for an error rate threshold within 1%, our approach provides 9.43% literal reductions on average for all the benchmarks that we target."}
{"_id": "c66689fafa0ce5d6d85ac8b361068de31c623516", "title": "Generic and Scalable Framework for Automated Time-series Anomaly Detection", "text": "This paper introduces a generic and scalable framework for automated anomaly detection on large scale time-series data. Early detection of anomalies plays a key role in maintaining consistency of person's data and protects corporations against malicious attackers. Current state of the art anomaly detection approaches suffer from scalability, use-case restrictions, difficulty of use and a large number of false positives. Our system at Yahoo, EGADS, uses a collection of anomaly detection and forecasting models with an anomaly filtering layer for accurate and scalable anomaly detection on time-series. We compare our approach against other anomaly detection systems on real and synthetic data with varying time-series characteristics. We found that our framework allows for 50-60% improvement in precision and recall for a variety of use-cases. Both the data and the framework are being open-sourced. The open-sourcing of the data, in particular, represents the first of its kind effort to establish the standard benchmark for anomaly detection."}
{"_id": "31eb33e47570ca3ccddfb53407115a29f70b2b9f", "title": "Kinematics and design of a portable and wearable exoskeleton for hand rehabilitation", "text": "We present the kinematic design and actuation mechanics of a wearable exoskeleton for hand rehabilitation of post-stroke. Our design method is focused on achieving maximum safety, comfort and reliability in the interaction, and allowing different users to wear the device with no manual regulations. In particular, we propose a kinematic and actuation solution for the index finger flexion/extension, which leaves full movement freedom on the abduction-adduction plane. This paper presents a detailed kineto-static analysis of the system and a first prototype of the device."}
{"_id": "9d86a9e66856ee6b83b8c62b329d8fa204cf6616", "title": "The Growing Relationship Between China and Sub-Saharan Africa : Macroeconomic , Trade , Investment , and Aid Links", "text": "China\u2019s economic ascendance over the past two decades has generated ripple effects in the world economy. Its search for natural resources to satisfy the demands of industrialization has led it to Sub-Saharan Africa. Trade between China and Africa in 2006 totaled more than $50 billion, with Chinese companies importing oil from Angola and Sudan, timber from Central Africa, and copper from Zambia. Demand from China has contributed to an upward swing in prices, particularly for oil and metals from Africa, and has given a boost to real GDP in Sub-Saharan Africa. Chinese aid and investment in infrastructure are bringing desperately needed capital to the continent. At the same time, however, strong Chinese demand for oil is contributing to an increase in the import bill for many oil-importing SubSaharan African countries, and its exports of low-cost textiles, while benefiting African consumers, is threatening to displace local production. China poses a challenge to good governance and macroeconomic management in Africa because of the potential Dutch disease implications of commodity booms. China presents both an opportunity for Africa to reduce its marginalization from the global economy and a challenge for it to effectively harness the influx of resources to promote poverty-reducing economic development at home. JEL codes: F01, F35, F41, N55, N57, Q33, Q43"}
{"_id": "552528ae817834765e491d8f783c785c07ca7ccb", "title": "A Guide to Differential Privacy Theory in Social Network Analysis", "text": "Privacy of social network data is a growing concern which threatens to limit access to this valuable data source. Analysis of the graph structure of social networks can provide valuable information for revenue generation and social science research, but unfortunately, ensuring this analysis does not violate individual privacy is difficult. Simply anonymizing graphs or even releasing only aggregate results of analysis may not provide sufficient protection. Differential privacy is an alternative privacy model, popular in data-mining over tabular data, which uses noise to obscure individuals' contributions to aggregate results and offers a very strong mathematical guarantee that individuals' presence in the data-set is hidden. Analyses that were previously vulnerable to identification of individuals and extraction of private data may be safely released under differential-privacy guarantees. We review two existing standards for adapting differential privacy to network data and analyse the feasibility of several common social-network analysis techniques under these standards. Additionally, we propose out-link privacy, a novel standard for differential privacy over network data, and introduce two powerful out-link private algorithms for common network analysis techniques that were infeasible to privatize under previous differential privacy standards."}
{"_id": "01a9da9886fe353d77b6d6bc6854619ffe148438", "title": "A ripple voltage sensing MPPT circuit for ultra-low power microsystems", "text": "We propose a maximum power point tracking (MPPT) circuit for micro-scale sensor systems that measures ripple voltages in a switched capacitor energy harvester. Compared to conventional current mirror type MPPT circuits, this design incurs no voltage drop and does not require high bandwidth amplifiers. Using correlated double sampling, high accuracy is achieved with a power overhead of 5%, even at low harvested currents of 1.4uA based on measured results in 180nm CMOS."}
{"_id": "1f979f28a267522126acd8569ec2e3b964a7f656", "title": "Student See Versus Student Do: A Comparative Study of Two Online Tutorials", "text": "This study examines the impact on student performance after interactive and non-interactive tutorials using a 2\u00d72 treatment-control design. In an undergraduate management course, a control group watched a video tutorial while the treatment group received the same content using a dynamic tutorial. Both groups received the same quiz questions. Using effect size to determine magnitude of change, it was found that those in the treatment condition performed better than those in the control condition. Students were able to take the quiz up to two times. When examining for change in performance from attempt one to attempt two, the treatment group showed a greater magnitude of change. Students who consistently performed lowest on the quizzes outperformed all students in learning gains."}
{"_id": "8c047f39221328e3298c5370dd20bcec3e4a8d13", "title": "A programmable and virtualized network & IT infrastructure for the internet of things: How can NFV & SDN help for facing the upcoming challenges", "text": "The Internet of Things (IoT) revolution has major impacts on the network & Information Technology (IT) infrastructure. As IT in the past decade, network virtualization is simultaneously on its way with for instance Network Functions Virtualization (NFV) and Software-Defined Networking (SDN). NFV and SDN are approaches enhancing the infrastructure agility thus facilitating the design, delivery and operation of network services in a dynamic and scalable manner. IoT will push the infrastructure to its limit with numerous and diverse requirements to fulfill, we therefore believe that the agility brought by the combination of NFV and SDN is essential to face the IoT revolution. In this article, we first highlight some IoT challenges that the network & IT infrastructure will face. The NFV and SDN benefits are presented from a network operator point of view. Following a description of the IoT ecosystem and a recall of some of the IoT stakeholders expectations, a new multi-layered IoT architecture involving SDN and NFV and based upon network & IT resources is put forward. Finally, the article illustrates how the proposed architecture is able to cope with the identified IoT challenges."}
{"_id": "881fffd2d555344fa4171b295cb27df57e06ca5f", "title": "Cable-suspended planar parallel robots with redundant cables: controllers with positive cable tensions", "text": "Cable-suspended robots are structurally similar to parallel actuated robots but with the fundamental difference that cables can only pull the end-effector but not push it. From a scientific point of view, this feature makes feedback control of cable-suspended robots lot more challenging than their counterpart parallel-actuated robots. In the case with redundant cables, feedback control laws can be designed to make all tensions positive while attaining desired control performance. This paper describes these approaches and their effectiveness is demostrated through simulations of a three degree-offreedom cable suspended robots with four, five, and six cables."}
{"_id": "c017a2bc601513a1ec2be2d1155e0384e78b6380", "title": "Abnormal Event Detection in Videos Using Hybrid Spatio-Temporal Autoencoder", "text": "The LSTM Encoder-Decoder framework is used to learn representation of video sequences and applied for detect abnormal event in complex environment. However, it generally fails to account for the global context of the learned representation with a fixed dimension representation and the learned representation is crucial for decoder phase. Based on the LSTM Encoder-Decoder and the Convolutional Autoencoder, we explore a hybrid autoencoder architecture, which not only extracts better spatio-temporal context, but also improves the extrapolate capability of the corresponding decoder with the shortcut connection. The experimental results demonstrate that our approach outperforms lots of state-of-the-art methods on benchmark datasets."}
{"_id": "8884ef78bba4ceea3272ed2d25cafde8fc981c62", "title": "Hierarchical classification of Web content", "text": "This paper explores the use of hierarchical structure for classifying a large, heterogeneous collection of web content. The hierarchical structure is initially used to train different second-level classifiers. In the hierarchical case, a model is learned to distinguish a second-level category from other categories within the same top level. In the flat non-hierarchical case, a model distinguishes a second-level category from all other second-level categories. Scoring rules can further take advantage of the hierarchy by considering only second-level categories that exceed a threshold at the top level.\nWe use support vector machine (SVM) classifiers, which have been shown to be efficient and effective for classification, but not previously explored in the context of hierarchical classification. We found small advantages in accuracy for hierarchical models over flat models. For the hierarchical approach, we found the same accuracy using a sequential Boolean decision rule and a multiplicative decision rule. Since the sequential approach is much more efficient, requiring only 14%-16% of the comparisons used in the other approaches, we find it to be a good choice for classifying text into large hierarchical structures."}
{"_id": "92b09523943dcba6de8dcd709f4a510675a107df", "title": "A Case for Hardware Protection of Guest VMs from Compromised Hypervisors in Cloud Computing", "text": "Cloud computing, enabled by virtualization technologies, is becoming a mainstream computing model. Many companies are starting to utilize the infrastructure-as-a-service (IaaS) cloud computing model, leasing guest virtual machines (VMs) from the infrastructure providers for economic reasons: to reduce their operating costs and to increase the flexibility of their own infrastructures. Yet, many companies may be hesitant to move to cloud computing due to security concerns. An integral part of any virtualization technology is the all-powerful hyper visor. A hyper visor is a system management software layer which can access all resources of the platform. Much research has been done on using hyper visors to monitor guest VMs for malicious code and on hardening hyper visors to make them more secure. There is, however, another threat which has not been addressed by researchers -- that of compromised or malicious hyper visors that can extract sensitive or confidential data from guest VMs. Consequently, we propose that a new research direction needs to be undertaken to tackle this threat. We further propose that new hardware mechanisms in the multi core microprocessors are a viable way of providing protections for the guest VMs from the hyper visor, while still allowing the hyper visor to flexibly manage the resources of the physical platform."}
{"_id": "f52f5517f6643739fc939f27d544987d8d681921", "title": "Design and Analysis of a 3-Arm Spiral Antenna", "text": "A novel 3-arm spiral antenna structure is presented in this paper. This antenna similar to traditional two-arm or four-arm spiral antennas exhibits wideband radiation characteristic and circular polarization. Advantages offered by the new design are two fold. Unlike the traditional spiral antennas the three-arm spiral can be fed by an unbalanced transmission line, such as a coaxial line or coplanar waveguide, and therefore an external balun is not needed at the feed point. Also by proper choice of arms' dimensions the antenna can be directly matched to any practical transmission line characteristic impedance and therefore external matching networks are not required. This is accomplished by feeding the antenna at the outer radius by a coplanar waveguide (CPW) transmission line and tapering it towards the center. The antenna can also be fed from the center using a coaxial or CPW line perpendicular to the plane of the spiral antenna. A full-wave numerical simulation tool is used to optimize the geometry of the proposed 3-arm spiral to achieve a compact size, wide bandwidth operation, and low axial ratio. The antenna is also designed over a ground plane to achieve a unidirectional radiation and center loading is examined that improves the axial ratio. Simulated results like return loss, radiation pattern, gain, and axial ratio are compared with those obtained from measurements and good agreements are shown. Because of its unique feed structure and compact size, application of the proposed 3-arm spiral antenna for wideband array applications is demonstrated"}
{"_id": "2b97d99ac4c00fe972aa114a9c453a457e894418", "title": "Network game traffic modelling", "text": "A significant share of today's Internet traffic is generated by network gaming. This kind of traffic is interesting in regard to it's market potential as well as to it's real time requirements on the network. For the consideration of game traffic in network dimensioning, traffic models are required that allow to generate a characteristic load for analytical or simulative performance evaluation of networks. In this paper we evaluate the fast action multiplayer game ,,Counter Strike\" from a 36 hour LAN party measurement and present traffic models for client and server. The paper concludes with remarks on the use of game traffic models in simulations and on QoS metrics for an adequate evaluation of simulation results."}
{"_id": "bfaf51d76280c2ec78eb4227924fbff8cf15624d", "title": "ROBHAZ-rescue: rough-terrain negotiable teleoperated mobile robot for rescue mission", "text": "This paper presents design and integration of the ROBHAZ-DT3, which is a newly developed mobile robot system with chained double-track mechanisms. It is designed to carry out military and civilian missions in various hazardous environments. A passive adaptation mechanism equipped between the front and rear body enables the ROBHAZ-DT3 to have good adaptability to uneven terrains including stairs. The passive adaptation mechanism reduces energy consumption when moving on uneven terrain as well as its simplicity in design and remote control, since no actuator is necessary for adaptation. Based on this novel mobile platform, a rescue version of the ROBHAZ-DT3 with appropriate sensors and a semi-autonomous mapping and localization algorithm is developed to participate in the RoboCup2004 US-Open: Urban Search and Rescue Competition. From the various experiments in the realistic rescue arena, we can verify that the ROBHAZ-DT3 is reliable in travelling rugged terrain and the proposed mapping and localization algorithm are effective in the unstructured environment with uneven ground."}
{"_id": "428b80b4a912f08ce84bf729456407eb2c7392db", "title": "Detection of buried objects in FLIR imaging using mathematical morphology and SVM", "text": "In this paper we describe a method for detecting buried objects of interest using a forward looking infrared camera (FLIR) installed on a moving vehicle. Infrared (IR) detection of buried targets is based on the thermal gradient between the object and the surrounding soil. The processing of FILR images consists in a spot-finding procedure that includes edge detection, opening and closing. Each spot is then described using texture features such as histogram of gradients (HOG) and local binary patterns (LBP) and assigned a target confidence using a support vector machine (SVM) classifier. Next, each spot together with its confidence is projected and summed in the UTM space. To validate our approach, we present results obtained on 6 one mile long runs recorded with a long wave IR (LWIR) camera installed on a moving vehicle."}
{"_id": "ba18b3312b1b811703290c489aac5da193eb2269", "title": "Smart home and smart city solutions enabled by 5G, IoT, AAI and CoT services", "text": "In a nearby future 5G technologies will connect the world from the largest megacities to the smallest internet of things in an always online fashion. Such a connected hierarchy must combine the smart cities, the smart homes, and the internet of things into one large coherent infrastructure. This paper suggest a four layer model which join and interfaces these elements by deploying technologies such as 5G, internet of things, cloud of things, and distributed artificial intelligence. Many advantages and service possibilities are offered by this new infrastructure such as interconnected internet of things, smart homes with artificial intelligence, and a platform for new combined smart home and smart city services based on big-data."}
{"_id": "9284c14a3fd370fa4af25c37575b84f58083f72d", "title": "A Simple Steganography Algorithm Based on Lossless Compression Technique in WSN", "text": "Image security has wide applications in data transferring from source to destination system. However, cryptography is a data secrecy technique between sender and receiver, the steganography increases the level of security and acts a protective layer to the hidden information within the source image. In this paper, a compression scheme based image security algorithm for wireless sensor network is proposed to hide a secret color image within the source color image. The main contribution of this paper is to propose a compression scheme which is based on level matrix and integer matrix, and increases the compression level significantly. The performance of the proposed system is evaluated in terms of peak signal to noise ratio (PSNR), mean square error (MSE), number of pixels change rate (NPCR) and unified average changing intensity (UACI). The proposed method achieves 42.65% PSNR, 27.16% MSE, 99.9% NPCR and 30.99% UACI."}
{"_id": "5beaff7d26b72b0ebad27dafa59c36b8c1d94277", "title": "Application of Tikhonov Regularization to Super-Resolution Reconstruction of Brain MRI Images", "text": "This paper presents an image super-resolution method that enhances spatial resolution of MRI images in the slice-select direction. The algorithm employs Tikhonov regularization, using a standard model of general imaging process and then reformulating the reconstruction as a regularized minimization problem. Our experimental result shows improvements in both signal-to-noise ratio and visual quality."}
{"_id": "fda3b2e98ea0cf73dd3046aa3c2bea3936e89aa4", "title": "Chapter 18 Building Intelligent Tutoring Systems : An Overview", "text": "This chapter addresses the challenge of building or authoring an Intelligent Tutoring System (ITS), along with the problems that have arisen and been dealt with, and the solutions that have been tested. We begin by clarifying what building an ITS entails, and then position today's systems in the overall historical context of ITS research. The chapter concludes with a series of open questions and an introduction to the other chapters in this part of the book."}
{"_id": "d9ce43229be476f7ae6b372d97b4d371aa65700d", "title": "A segmental CRF approach to large vocabulary continuous speech recognition", "text": "This paper proposes a segmental conditional random field framework for large vocabulary continuous speech recognition. Fundamental to this approach is the use of acoustic detectors as the basic input, and the automatic construction of a versatile set of segment-level features. The detector streams operate at multiple time scales (frame, phone, multi-phone, syllable or word) and are combined at the word level in the CRF training and decoding processes. A key aspect of our approach is that features are defined at the word level, and are naturally geared to explain long span phenomena such as formant trajectories, duration, and syllable stress patterns. Generalization to unseen words is possible through the use of decomposable consistency features [1], [2], and our framework allows for the joint or separate discriminative training of the acoustic and language models. An initial evaluation of this framework with voice search data from the Bing Mobile (BM) application results in a 2% absolute improvement over an HMM baseline."}
{"_id": "b8e5bc5f31d0d699c3481e5f313e4729a93c5581", "title": "Low power in-memory computing based on dual-mode SOT-MRAM", "text": "In this paper, we propose a novel Spin Orbit Torque Magnetic Random Access Memory (SOT-MRAM) array design that could simultaneously work as non-volatile memory and implement a reconfigurable in-memory logic (AND, OR) without add-on logic circuits to memory chip as in traditional logic-in-memory designs. The computed logic output could be simply read out like a normal MRAM bit-cell using the shared memory peripheral circuits. Such intrinsic in-memory logic could be used to process data within memory to greatly reduce power-hungry and long distance data communication in conventional Von-Neumann computing systems. We further employ in-memory data encryption using Advanced Encryption Standard (AES) algorithm as a case study to demonstrate the efficiency of the proposed design. The device to architecture co-simulation results show that the proposed design can achieve 70.15% and 80.87% lower energy consumption compared to CMOS-ASIC and CMOL-AES implementations, respectively. It offers almost similar energy consumption as recent DW-AES implementation, but with 60.65% less area overhead."}
{"_id": "002c3339df17101b1b8f56d534ba4de2437f7a22", "title": "Evolving boxes for fast vehicle detection", "text": "We perform fast vehicle detection from traffic surveillance cameras. A novel deep learning framework, namely Evolving Boxes, is developed that proposes and refines the object boxes under different feature representations. Specifically, our framework is embedded with a light-weight proposal network to generate initial anchor boxes as well as to early discard unlikely regions; a fine-turning network produces detailed features for these candidate boxes. We show intriguingly that by applying different feature fusion techniques, the initial boxes can be refined for both localization and recognition. We evaluate our network on the recent DETRAC benchmark and obtain a significant improvement over the state-of-the-art Faster RCNN by 9.5% mAP. Further, our network achieves 9\u201313 FPS detection speed on a moderate commercial GPU."}
{"_id": "50137d663802224e683951c48970496b38b02141", "title": "DETRAC: A New Benchmark and Protocol for Multi-Object Tracking", "text": "In recent years, most effective multi-object tracking (MOT) methods are based on the tracking-by-detection framework. Existing performance evaluations of MOT methods usually separate the target association step from the object detection step by using the same object detection results for comparisons. In this work, we perform a comprehensive quantitative study on the effect of object detection accuracy to the overall MOT performance. This is based on a new large-scale DETection and tRACking (DETRAC) benchmark dataset. The DETRAC benchmark dataset consists of 100 challenging video sequences captured from real-world traffic scenes (over 140 thousand frames and 1.2 million labeled bounding boxes of objects) for both object detection and MOT. We evaluate complete MOT systems constructed from combinations of state-of-the-art target association methods and object detection schemes. Our analysis shows the complex effects of object detection accuracy on MOT performance. Based on these observations, we propose new evaluation tools and metrics for MOT systems that consider both object detection and target association for comprehensive analysis."}
{"_id": "344c95903d3efcb8157b74520ec60d4ce55b8585", "title": "Semantic-Based Text Document Clustering Using Cognitive Semantic Learning and Graph Theory", "text": "Semantic-based text document clustering aims to group documents into a set of topic clusters. We propose a new approach for semantically clustering of text documents based on cognitive science and graph theory. We apply a computational cognitive model of semantic association for human semantic memory, known as Incremental Construction of an Associative Network (ICAN). The vector-based model of Latent Semantic Analysis (LSA), has been a leading computational cognitive model for semantic learning and topic modeling, but it has well-known limitations including not considering the original word-order and doing semantic reduction with neither a linguistic nor cognitive basis. These issues have been overcome by the ICAN model. ICAN model is used to generate document-level semantic-graphs. Cognitive and graph-based topic and context identification methods are used for semantic reduction of ICAN graphs. A corpus-graph is generated from ICAN graphs, and then a community-detection graph algorithm is applied for the final step of document clustering. Experiments are conducted on three commonly used datasets. Using the purity and entropy criteria for clustering quality, our results show a notable outperformance over the LSA-based approach."}
{"_id": "a21ddc53945d43fe2cdb63178ac3b6e5f88abd7d", "title": "RevoMaker: Enabling Multi-directional and Functionally-embedded 3D printing using a Rotational Cuboidal Platform", "text": "In recent years, 3D printing has gained significant attention from the maker community, academia, and industry to support low-cost and iterative prototyping of designs. Current unidirectional extrusion systems require printing sacrificial material to support printed features such as overhangs. Furthermore, integrating functions such as sensing and actuation into these parts requires additional steps and processes to create \"functional enclosures\", since design functionality cannot be easily embedded into prototype printing. All of these factors result in relatively high design iteration times. We present \"RevoMaker\", a self-contained 3D printer that creates direct out-of-the-printer functional prototypes, using less build material and with substantially less reliance on support structures. By modifying a standard low-cost FDM printer with a revolving cuboidal platform and printing partitioned geometries around cuboidal facets, we achieve a multidirectional additive prototyping process to reduce the print and support material use. Our optimization framework considers various orientations and sizes for the cuboidal base. The mechanical, electronic, and sensory components are preassembled on the flattened laser-cut facets and enclosed inside the cuboid when closed. We demonstrate RevoMaker directly printing a variety of customized and fully-functional product prototypes, such as computer mice and toys, thus illustrating the new affordances of 3D printing for functional product design."}
{"_id": "d43cb4b1f749e4337377b4dea6269b30e484d528", "title": "Emotional intelligence and social and academic adaptation to school.", "text": "In a sample of 127 Spanish adolescents, the ability to understand and manage emotions, assessed by a performance measure of emotional intelligence (the MSCEIT), correlated positively with teacher ratings of academic achievement and adaptation for both males and females. Among girls, these emotional abilities also correlated positively with peer friendship nominations. After controlling for IQ and the Big Five personality traits, the ability to understand and manage emotions remained significantly associated with teacher ratings of academic adaptation among boys and peer friendship nominations among girls. Self-perceived emotional intelligence was unrelated to these criteria. These findings provide partial support for hypotheses that emotional abilities are associated with indicators of social and academic adaptation to school."}
{"_id": "be6f46580c3ea15d849fe5f8f2637cafb1a753d2", "title": "Personalization as a service: the architecture and a case study", "text": "Cloud computing has become a hot topic in the IT industry. Great efforts have been made to establish cloud computing platforms for enterprise users, mostly small businesses. However, there are few researches about the impact of cloud computing over individual users. In this paper we focus on how to provide personalized services for individual users in the cloud environment. We argue that a personalized cloud service shall compose of two parts. The client side program records user activities on personal de-vices such as PC. Besides that, the user model is also computed on the client side to avoid server overhead. The cloud side program fetches the user model periodically and adjusts its results accordingly. We build a personalized cloud data search engine prototype to prove our idea."}
{"_id": "11272cb9f1f5921741803e0a5b9c7d9bc4d5f7bc", "title": "A Densely Connected End-to-End Neural Network for Multiscale and Multiscene SAR Ship Detection", "text": "Synthetic aperture radar (SAR) images have been widely used for ship monitoring. The traditional methods of SAR ship detection are difficult to detect small scale ships and avoid the interference of inshore complex background. Deep learning detection methods have shown great performance on various object detection tasks recently but using deep learning methods for SAR ship detection does not show an excellent performance it should have. One of the important reasons is that there is no effective model to handle the detection of multiscale ships in multiresolution SAR images. Another important reason is it is difficult to handle multiscene SAR ship detection including offshore and inshore, especially it cannot effectively distinguish between inshore complex background and ships. In this paper, we propose a densely connected multiscale neural network based on faster-RCNN framework to solve multiscale and multiscene SAR ship detection. Instead of using a single feature map to generate proposals, we densely connect one feature map to every other feature maps from top to down and generate proposals from each fused feature map. In addition, we propose a training strategy to reduce the weight of easy examples in the loss function, so that the training process more focus on the hard examples to reduce false alarm. Experiments on expanded public SAR ship detection dataset, verify the proposed method can achieve an excellent performance on multiscale SAR ship detection in multiscene."}
{"_id": "a105f7448fcdd88d19e4603d94bd6e7b5c20143f", "title": "Cancer genes and the pathways they control", "text": "The revolution in cancer research can be summed up in a single sentence: cancer is, in essence, a genetic disease. In the last decade, many important genes responsible for the genesis of various cancers have been discovered, their mutations precisely identified, and the pathways through which they act characterized. The purposes of this review are to highlight examples of progress in these areas, indicate where knowledge is scarce and point out fertile grounds for future investigation."}
{"_id": "5aa28ff498ed8f54572cb8a4d5bf57b6011cb639", "title": "A Semi-supervised Approach to Measuring User Privacy in Online Social Networks", "text": "During our digital social life, we share terabytes of information that can potentially reveal private facts and personality traits to unexpected strangers. Despite the research efforts aiming at providing efficient solutions for the anonymization of huge databases (including networked data), in online social networks the most powerful privacy protection is in the hands of the users. However, most users are not aware of the risks derived by the indiscriminate disclosure of their personal data. With the aim of fostering their awareness on private data leakage risk, some measures have been proposed that quantify the privacy risk of each user. However, these measures do not capture the objective risk of users since they assume that all user\u2019s direct social connections are close (thus trustworthy) friends. Since this assumption is too strong, in this paper we propose an alternative approach: each user decides which friends are allowed to see each profile item/post and our privacy score is defined accordingly. We show that it can be easily computed with minimal user intervention by leveraging an active learning approach. Finally, we validate our measure on a set of real Facebook users."}
{"_id": "4f90c2788f719984f09f8c3cd83ef69dc557b854", "title": "Hunting or waiting? Discovering passenger-finding strategies from a large-scale real-world taxi dataset", "text": "In modern cities, more and more vehicles, such as taxis, have been equipped with GPS devices for localization and navigation. Gathering and analyzing these large-scale real-world digital traces have provided us an unprecedented opportunity to understand the city dynamics and reveal the hidden social and economic \u201crealities\u201d. One innovative pervasive application is to provide correct driving strategies to taxi drivers according to time and location. In this paper, we aim to discover both efficient and inefficient passenger-finding strategies from a large-scale taxi GPS dataset, which was collected from 5350 taxis for one year in a large city of China. By representing the passenger-finding strategies in a Time-Location-Strategy feature triplet and constructing a train/test dataset containing both top- and ordinary-performance taxi features, we adopt a powerful feature selection tool, L1-Norm SVM, to select the most salient feature patterns determining the taxi performance. We find that the selected patterns can well interpret the empirical study results derived from raw data analysis and even reveal interesting hidden \u201cfacts\u201d. Moreover, the taxi performance predictor built on the selected features can achieve a prediction accuracy of 85.3% on a new test dataset, and it also outperforms the one based on all the features, which implies that the selected features are indeed the right indicators of the passenger-finding strategies."}
{"_id": "591ddfcde630a63dd6bb69222f7a98376d45ef6d", "title": "Texture segmentation for seam carving", "text": "This paper analyzes the relationships between texture characteristics of an image and seam-carving (SC) algorithm for image retargeting (resizing). SC is a smart image content aware algorithm. The algorithm, changes the aspect ratio of an image while keeping the \"important\" objects untouched. Based on a subjective test, we combine between an image's characteristics and a saliency map which provides best results. Furthermore, in order to determine the significance of these results, we use principal component analysis (PCA) statistical method, which identifies those relationships in the subjective test's results. Finally, we present a novel saliency map for SC algorithm. This saliency map is based on a new classification model, which uses statistical connections between neighboring pixels and significantly reduces the level of distortion along the process, providing optimal preservation of important objects within the image."}
{"_id": "588f849b4577700398ed728aa46c4d9b2217e233", "title": "Interpreting image databases by region classification", "text": "This paper addresses automatic interpretation of images of outdoor scenes. The method allows instances of objects from a number of generic classes to be identi ed: vegetation, buildings, vehicles, roads, etc., thereby enabling image databases to be queried on scene content. The feature set is based, in part, on psychophysical principles and includes measures of colour, texture and shape. Using a large database of ground-truth labelled images, a neural network is trained as a pattern classi er. The method is demonstrated on a large test set to provide highly accurate image interpretations, with over 90% of the image area labelled correctly. Scene Understanding Object Recognition Classi cation Image Segmentation Neural Networks"}
{"_id": "38c205162aacb20cbdd47fb60e585929189d2770", "title": "Health-related quality of life in breast cancer patients: A bibliographic review of the literature from 1974 to 2007", "text": "BACKGROUND\nQuality of life in patients with breast cancer is an important outcome. This paper presents an extensive overview on the topic ranging from descriptive findings to clinical trials.\n\n\nMETHODS\nThis was a bibliographic review of the literature covering all full publications that appeared in English language biomedical journals between 1974 and 2007. The search strategy included a combination of key words 'quality of life' and 'breast cancer' or 'breast carcinoma' in titles. A total of 971 citations were identified and after exclusion of duplicates, the abstracts of 606 citations were reviewed. Of these, meetings abstracts, editorials, brief commentaries, letters, errata and dissertation abstracts and papers that appeared online and were indexed ahead of publication were also excluded. The remaining 477 papers were examined. The major findings are summarized and presented under several headings: instruments used, validation studies, measurement issues, surgical treatment, systemic therapies, quality of life as predictor of survival, psychological distress, supportive care, symptoms and sexual functioning.\n\n\nRESULTS\nInstruments-Several valid instruments were used to measure quality of life in breast cancer patients. The European Organization for Research and Treatment of Cancer Core Cancer Quality of Life Questionnaire (EORTC QLQ-C30) and its breast cancer specific complementary measure (EORTC QLQ-BR23) and the Functional Assessment Chronic Illness Therapy General questionnaire (FACIT-G) and its breast cancer module (FACIT-B) were found to be the most common and well developed instruments to measure quality of life in breast cancer patients. Surgery-different surgical procedures led to relatively similar results in terms of quality of life assessments, although mastectomy patients compared to conserving surgery patients usually reported a lower body image and sexual functioning. Systemic therapies-almost all studies indicated that breast cancer patients receiving chemotherapy might experience several side-effects and symptoms that negatively affect their quality of life. Adjuvant hormonal therapies also were found to have similar negative impact on quality of life, although in general they were associated with improved survival. Quality of life as predictor of survival-similar to known medical factors, quality of life data in metastatic breast cancer patients was found to be prognostic and predictive of survival time. Psychological distress-anxiety and depression were found to be common among breast cancer patients even years after the disease diagnosis and treatment. Psychological factors also were found to predict subsequent quality of life or even overall survival in breast cancer patients. Supportive care-clinical treatments to control emesis, or interventions such as counseling, providing social support and exercise could improve quality of life. Symptoms-Pain, fatigue, arm morbidity and postmenopausal symptoms were among the most common symptoms reported by breast cancer patients. As recommended, recognition and management of these symptoms is an important issue since such symptoms impair health-related quality of life. 
Sexual functioning-breast cancer patients especially younger patients suffer from poor sexual functioning that negatively affect quality of life.\n\n\nCONCLUSION\nThere was quite an extensive body of the literature on quality of life in breast cancer patients. These papers have made a considerable contribution to improving breast cancer care, although their exact benefit was hard to define. However, quality of life data provided scientific evidence for clinical decision-making and conveyed helpful information concerning breast cancer patients' experiences during the course of the disease diagnosis, treatment, disease-free survival time, and recurrences; otherwise finding patient-centered solutions for evidence-based selection of optimal treatments, psychosocial interventions, patient-physician communications, allocation of resources, and indicating research priorities were impossible. It seems that more qualitative research is needed for a better understanding of the topic. In addition, issues related to the disease, its treatment side effects and symptoms, and sexual functioning should receive more attention when studying quality of life in breast cancer patients."}
{"_id": "d392f05642d65c395258636dc166d52e8abca686", "title": "Anonymity and Online Commenting: The Broken Windows Effect and the End of Drive-by Commenting", "text": "In this study we ask how regulations about commenter identity affect the quantity and quality of discussion on commenting fora. In December 2013, the Huffington Post changed the rules for its comment forums to require participants to authenticate their accounts through Facebook. This enabled a large-scale 'before and after' analysis. We collected over 42m comments on 55,000 HuffPo articles published in the period January 2013 to June 2014 and analysed them to determine how changes in identity disclosure impacted on discussions in the publication's comment pages. We first report our main results on the quantity of online commenting, where we find both a reduction and a shift in its distribution from politicised to blander topics. We then discuss the quality of discussion. Here we focus on the subset of 18.9m commenters who were active both before and after the change, in order to disentangle the effects of the worst offenders withdrawing and the remaining commenters modifying their tone. We find a 'broken windows' effect, whereby comment quality improves even when we exclude interaction with trolls and spammers."}
{"_id": "bb762be465d67e1e4072d1c81255dc128fb315b3", "title": "Windows NT pagefile.sys Virtual Memory Analysis", "text": "As hard disk encryption, RAM disks, persistent data avoidance technology and memory resident malware become morewidespread, memory analysis becomes more important. In order to provide more virtual memory than is actually physicalpresent on a system, an operating system may transfer frames of memory to a pagefile on persistent storage. Current memoryanalysis software does not incorporate such pagefiles and thus misses important information. We therefore present a detailedanalysis of Windows NT paging. We use dynamic gray-box analysis, in which we place known data into virtual memory andexamine where it is mapped to, in either the physical memory or the pagefile, and cross-reference these findings with theWindows NT Research Kernel source code. We demonstrate how to decode the non-present page table entries, and accuratelyreconstruct the complete virtual memory space, including non-present memory pages on Windows NT systems using 32-bit,PAE or IA32e paging. Our analysis approach can be used to analyze other operating systems as well."}
{"_id": "452b71fe8ec1c3e86ccfa7aeae576355146cd0b5", "title": "Automated detection of euro pallet loads by interpreting PMD camera depth images", "text": "In this study, a novel approach for the detection of parcel loading positions on a pallet is presented. This approach was realized as a substantial change in comparison with traditional system design of contour detection in de-palletizing processes. Complex 3D-vision systems, costly laser scanners or throughput decreasing local sensor solutions integrated in grippers are substituted by a lowcost photonic mixing device (PMD) camera. By combining PMD technology and a predetermined model of loading situations, stored during assembling the pallet, this approach can compensate for the drawbacks of each respective system. An essential part of the approach are computer-graphics methods specific to the given problem to both detect the deviation between the nominal and the actual loading position and if necessary an automated correction of the packaging scheme. From an economic point of view, this approach can decrease the costs of mandatory contour checking in automated de-palletizing processes."}
{"_id": "07d47634110a159688c1e3a1a3d4fa9714f322b5", "title": "A Measurement-Driven Anti-Jamming System for 802.11 Networks", "text": "Dense, unmanaged IEEE 802.11 deployments tempt saboteurs into launching jamming attacks by injecting malicious interference. Nowadays, jammers can be portable devices that transmit intermittently at low power in order to conserve energy. In this paper, we first conduct extensive experiments on an indoor 802.11 network to assess the ability of two physical-layer functions, rate adaptation and power control, in mitigating jamming. In the presence of a jammer, we find that: 1) the use of popular rate adaptation algorithms can significantly degrade network performance; and 2) appropriate tuning of the carrier sensing threshold allows a transmitter to send packets even when being jammed and enables a receiver to capture the desired signal. Based on our findings, we build ARES, an Anti-jamming REinforcement System, which tunes the parameters of rate adaptation and power control to improve the performance in the presence of jammers. ARES ensures that operations under benign conditions are unaffected. To demonstrate the effectiveness and generality of ARES, we evaluate it in three wireless test-beds: 1) an 802.11n WLAN with MIMO nodes; 2) an 802.11a/g mesh network with mobile jammers; and 3) an 802.11a WLAN with TCP traffic. We observe that ARES improves the network throughput across all testbeds by up to 150%."}
{"_id": "0dce1412c4c49b2e6c88d30456295b40687b186d", "title": "Building Hierarchical Representations for Oracle Character and Sketch Recognition", "text": "In this paper, we study oracle character recognition and general sketch recognition. First, a data set of oracle characters, which are the oldest hieroglyphs in China yet remain a part of modern Chinese characters, is collected for analysis. Second, typical visual representations in shape- and sketch-related works are evaluated. We analyze the problems suffered when addressing these representations and determine several representation design criteria. Based on the analysis, we propose a novel hierarchical representation that combines a Gabor-related low-level representation and a sparse-encoder-related mid-level representation. Extensive experiments show the effectiveness of the proposed representation in both oracle character recognition and general sketch recognition. The proposed representation is also complementary to convolutional neural network (CNN)-based models. We introduce a solution to combine the proposed representation with CNN-based models, and achieve better performances over both approaches. This solution has beaten humans at recognizing general sketches."}
{"_id": "27f989a4994a6b76acae6fe63f992a0146ed1168", "title": "Reverse engineering digital circuits using functional analysis", "text": "Integrated circuits (ICs) are now designed and fabricated in a globalized multi-vendor environment making them vulnerable to malicious design changes, the insertion of hardware trojans/malware and intellectual property (IP) theft. Algorithmic reverse engineering of digital circuits can mitigate these concerns by enabling analysts to detect malicious hardware, verify the integrity of ICs and detect IP violations.\n In this paper, we present a set of algorithms for the reverse engineering of digital circuits starting from an unstructured netlist and resulting in a high-level netlist with components such as register files, counters, adders and subtracters. Our techniques require no manual intervention and experiments show that they determine the functionality of more than 51% and up to 93% of the gates in each of the practical test circuits that we examine."}
{"_id": "0e90fdf3319f4169886de3f3bbcf3a5927146e7e", "title": "A possible mechanism of horseback riding on dynamic trunk alignment", "text": "The study aimed to clarify the regularity of the motions of horse's back, rider's pelvis and spine associated with improvement of rider's dynamic trunk alignment. The study used a crossover design, with exercise using the horseback riding simulator (simulator hereafter) as the control condition. The experiments were conducted at Tokyo University of Agriculture Bio-therapy Center. The sample consisted of 20 healthy volunteers age 20-23 years. Participants performed 15-min sessions of horseback riding with a Hokkaido Pony and exercise using the simulator in experiments separated by \u22652 weeks. Surface electromyography (EMG) after horseback riding revealed decreased activity in the erector spinae. Exploratory data analysis of acceleration and angular velocity inferred associations between acceleration (Rider's neck/longitudinal axis [Y hereafter]) and angular velocity (Horse saddle/Y) as well as angular velocity (Rider's pelvis/Y) and angular velocity (Horse saddle/Y). Acceleration (Rider's neck/Y) tended to be associated with angular velocity (Rider's pelvis/Y). Surface EMG following exercise revealed decreased activity in the rectus abdominis and erector spinae after the simulator exercise. Horseback riding improved the rider's dynamic trunk alignment with a clear underlying mechanism, which was not observed with the simulator."}
{"_id": "bce84d14172b25f3844efc0b11507cbc93c049d3", "title": "Orthogonal least squares methods and their application to non-linear system identification", "text": "This article maybe used for research, teaching and private study purposes. Any substantial or systematic reproduction, redistribution , reselling , loan or sub-licensing, systematic supply or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material. Identification algorithms based on the well-known linear least squares methods of gaussian elimination, Cholesky decomposition, classical Gram-Schmidt, modified Gram-Schmidt, Householder transformation, Givens method, and singular value decomposition are reviewed. The classical Gram-Schmidt, modified Gram-Schmidt,and Householder transformation algorithms are then extended to combine structure determination, or which terms to include in the model, and parameter estimation in a very simple and efficient manner for a class of multivariable discrete-time non-linear stochastic systems which are linear in the parameters."}
{"_id": "2b2b7ca0e169b009d62881c9f2cbd65a0ad29518", "title": "Software radio on smartphones: feasible?", "text": "In this work-in-progress paper, we explore the expected impacts, the reality, and the technical issues of making MAC/PHY protocols downloadable softwares (like apps) for smartphones. The notion of software radio on a small number of fixed stations has been investigated for decades, but moving its platform to billions of smartphones and even making it downloadable as an app has never been attempted. We firmly believe that such attempt could open new fronts for the software radio research. Beyond considering its possible impacts and ramifications, we demonstrate through measurements that today's smartphone hardware is already capable enough to support the real-time execution of down-loadable MAC/PHY software. It is certainly true for low-speed technologies such as ZigBee, and is close to providing real-time operation of higher-speed technologies like Wi-Fi. In our study, we also find that different application processor architectures lead to different bottlenecks in the MAC/PHY processing chain. This implies that the design of protocols that might be executed in software on general-purpose platforms should factor in the processor architecture like multiple cores and SIMD instruction sets, as it will facilitate the softwarization of wireless communication protocols in the future."}
{"_id": "3bd5c46eba124e95a98c1306adc2ba0ffb68aa91", "title": "IEEE 802.15.7 visible light communication: modulation schemes and dimming support", "text": "Visible light communication refers to shortrange optical wireless communication using visible light spectrum from 380 to 780 nm. Enabled by recent advances in LED technology, IEEE 802.15.7 supports high-data-rate visible light communication up to 96 Mb/s by fast modulation of optical light sources which may be dimmed during their operation. IEEE 802.15.7 provides dimming adaptable mechanisms for flicker-free high-data-rate visible light communication."}
{"_id": "f5d2667853cda9a540260158c7fc32d21409c0e8", "title": "Indoor optical wireless communication: potential and state-of-the-art", "text": "In recent years, interest in optical wireless (OW) as a promising complementary technology for RF technology has gained new momentum fueled by significant deployments in solid state lighting technology. This article aims at reviewing and summarizing recent advancements in OW communication, with the main focus on indoor deployment scenarios. This includes a discussion of challenges, potential applications, state of the art, and prospects. Related issues covered in this article are duplex transmission, multiple access, MAC protocols, and link capacity improvements."}
{"_id": "75db6d3adb41e9fa5ec6f9a6ac84ef620e8ad1c5", "title": "Fundamental analysis for visible-light communication system using LED lights", "text": "White LED offers advantageous properties such as high brightness, reliability, lower power consumption and long lifetime. White LEDs are expected to serve in the next generation of lamps. An indoor visible-light communication system utilizing white LED lights has been proposed from our laboratory. In the proposed system, these devices are used not only for illuminating rooms but also for an optical wireless communication system. Generally, plural lights are installed in our room. So, their optical path difference must be considered. In this paper, we discuss about the influence of interference and reflection. Based on numerical analyses, we show that the system is expected to be the indoor communication of the next generation."}
{"_id": "ebb5c85bacf68b8da2c6aa054326101a7f9599f4", "title": "GNU Radio 802 . 15 . 4 En-and Decoding", "text": "The IEEE wireless standard 802.15.4 gets widespread attention because of its adoption in sensor networks, home automation, and other networked systems. The goal of the project is to implement an enand decoding block for the IEEE 802.15.4 protocol in GNU Radio, an open source solution for software defined radios. This report will give an insight into the working of GNU Radio and some of its hardware components. Additionally, it gives details about the implementation of the enand decoding blocks. At the end, we will verify the implementation by sending and receiving messages to and from an actual IEEE 802.15.4 radio chip, the CC2420 from ChipCon, and give a small bandwidth comparison of the two solutions."}
{"_id": "f3f65a8113d6a2dcbc690fd47dfee2dff0f41097", "title": "Generating 3D Faces Using Convolutional Mesh Autoencoders", "text": "Learned 3D representations of human faces are useful for computer vision problems such as 3D face tracking and reconstruction from images, as well as graphics applications such as character generation and animation. Traditional models learn a latent representation of a face using linear subspaces or higher-order tensor generalizations. Due to this linearity, they can not capture extreme deformations and nonlinear expressions. To address this, we introduce a versatile model that learns a non-linear representation of a face using spectral convolutions on a mesh surface. We introduce mesh sampling operations that enable a hierarchical mesh representation that captures non-linear variations in shape and expression at multiple scales within the model. In a variational setting, our model samples diverse realistic 3D faces from a multivariate Gaussian distribution. Our training data consists of 20,466 meshes of extreme expressions captured over 12 different subjects. Despite limited training data, our trained model outperforms state-of-the-art face models with 50% lower reconstruction error, while using 75% fewer parameters. We also show that, replacing the expression space of an existing state-ofthe-art face model with our autoencoder, achieves a lower reconstruction error. Our data, model and code are available at http://github.com/ anuragranj/coma."}
{"_id": "e2e0ff066ca08cfdbdca10372c62b3a3c4f8e7af", "title": "Analysing the Impact of a DDoS Attack Announcement on Victim Stock Prices", "text": "DDoS attacks are increasingly used by 'hackers' and 'hacktivists' for various purposes. A number of on-line tools are available to launch an attack of significant intensity. These attacks lead to a variety of losses at the victim's end. We analyse the impact of Distributed Denial-of-Service (DDoS) attack announcements over a period of 5 years on the stock prices of the victim firms. We propose a method for event studies that does not assume the cumulative abnormal returns to be normally distributed, instead we use the empirical distribution for testing purposes. In most cases we find no significant impact on the stock returns but in cases where a DDoS attack creates an interruption in the services provided to the customer, we find a significant negative impact."}
{"_id": "27d249539034a67fbcf70ef335240ef6417bae24", "title": "56-Gb/s direct modulation in InGaAlAs BH-DFB lasers at 55\u00b0C", "text": "Direct modulation at 56 Gb/s of 1.3-\u03bcm InGaAlAs-MQW DFB laser, incorporating a ridge-shaped buried heterostructure (RS-BH), operating at 55\u00b0C, is experimentally demonstrated for the first time."}
{"_id": "541a89579d77942c0b8ac6debf8ef99a04684dca", "title": "Large gain linear series-fed microstrip antenna arrays at Ka and C bands", "text": "In this paper, the design of a series-fed microstrip antenna array is discussed and its design challenge of limited gain is explained. It discusses that the phase between the patches of the array plays a significant role in overcoming the challenge. A 23 element series fed linear array at Ka-band gives gain of 19dBi with beam in broadside direction and has SLL better than -15dB. A scaled down version at C-band has been designed and fabricated. A 7-element series fed antenna array yielded measured gain of 15.1 dBi at 5.79 GHz. This design can be extended for planar antenna arrays to obtain higher gain."}
{"_id": "f3655fac50bd13345ca4178804e2f157c41a024e", "title": "A Simple Edit-Time Metaobject Protocol", "text": "We present a simple edit-time metaobject protocol (ETMOP) which runs as part of a code editor and enables metadata annotations to customize the rendering and editing of code. The protocol is layered, so that simple render/edit customizations are easy to implement, while more substantial customizations are still manageable. Experiments with a prototype implementation of the protocol as an Eclipse plug-in show that the ETMOP is flexible enough to allow easy customization of the rendering and editing of a number of annotations being used in current practice. The flexibility and performance of the prototype suggest that the ETMOP approach is viable and warrants further study."}
{"_id": "0e9c968d09ab2e53e24c4dca5b2d67c7f7140f8e", "title": "Congestion Control in Linux TCP", "text": "The TCP protocol is used by the majority of the network applications on the Internet. TCP performance is strongly influenced by its congestion control algorithms that limit the amount of transmitted traffic based on the estimated network capacity and utilization. Because the freely available Linux operating system has gained popularity especially in the network servers, its TCP implementation affects many of the network interactions carried out today. We describe the fundamentals of the Linux TCP design, concentrating on the congestion control algorithms. The Linux TCP implementation supports SACK, TCP timestamps, Explicit Congestion Notification, and techniques to undo congestion window adjustments after incorrect congestion notifications. In addition to features specified by IETF, Linux has implementation details beyond the specifications aimed to further improve its performance. We discuss these, and finally show the performance effects of Quick acknowledgements, Rate-halving, and the algorithms for correcting incorrect congestion window adjustments by comparing the performance of Linux TCP implementing these features to the performance achieved with an implementation that does not use the algorithms in question."}
{"_id": "692c8285cabe15671693d84b1afb763a5d4804bc", "title": "A Review of Smart Homes\u2014Past, Present, and Future", "text": "A smart home is an application of ubiquitous computing in which the home environment is monitored by ambient intelligence to provide context-aware services and facilitate remote home control. This paper presents an overview of previous smart home research as well as the associated technologies. A brief discussion on the building blocks of smart homes and their interrelationships is presented. It describes collective information about sensors, multimedia devices, communication protocols, and systems, which are widely used in smart home implementation. Special algorithms from different fields and their significance are explained according to their scope of use in smart homes. This paper also presents a concrete guideline for future researchers to follow in developing a practical and sustainable smart home."}
{"_id": "ab1b3405454f42027c2037024f1d88b5abb23104", "title": "A SMUGGLING APPROACH TO THE PASSIVE IN ENGLISH", "text": "I propose a theory of the passive that combines aspects of the principles and parameters analysis (no specific rules, no downward movement) and Chomsky\u2019s (1957) Syntactic Structures analysis (the arguments in the passive are generated in the same positions as they are in the active)."}
{"_id": "ae8a564c016cc39d2d87030c28a1e85e67fb7a4a", "title": "Correction of deep bite in adults using the Invisalign system.", "text": "719 D overbite is commonly treated by molar extrusion, incisor intrusion, or both. Careful diagnostic assessment is needed to determine the best treatment approach. Patients with structural deep bites may require orthognathic surgery for full correction. The amount of incisor exposure in smiling is an important consideration in deep-bite cases. When the maxillary incisor display is correct, maxillary incisor intrusion is unnecessary; treatment should involve intrusion and leveling of the mandibular anterior teeth to avoid flattening of the smile arc. Any maxillary incisor intrusion or mandibular leveling should be carefully monitored during treatment to maintain proper incisor exposure while effectively correcting the deep bite. After the development of thermoformed appliances that could be used for minor tooth movements, the Invisalign* system was introduced for treatment of moderate malocclusions. In 2004, Wheeler reported that Invisalign could be used in patients with anterior and posterior crossbites, deep bites, anterior open bites or shallow overbites, and periodontal problems. The present article describes the use of Invisalign appliances for the correction of deep bite in adult patients with normal skeletal patterns. Case 1"}
{"_id": "ec23b44fa5a0b738e99ac7f4f6d87ec6f464392a", "title": "RFID Application in Hospitals: A Case Study on a Demonstration RFID Project in a Taiwan Hospital", "text": "After manufacturing and retail marketing, healthcare is considered the next home for Radio Frequency Identification (RFID). Although in its infancy, RFID technology has great potential in healthcare to significantly reduce cost, and improve patient safety and medical services. However, the challenge will be to incorporate RFID into medical practice, especially when relevant experience is limited. To explore this issue, we conducted a case study that demonstrated RFID integration into the medical world at one Taiwan hospital. The project that was studied, the Location-based Medicare Service project, was partially subsidized by the Taiwan government and was aimed at containing SARS, a dangerous disease that struck Taiwan in 2003. Innovative and productive of several significant results, the project established an infrastructure and platform allowing for other applications. In this paper we describe the development strategy, design, and implementation of the project. We discuss our findings pertaining to the collaborative development strategy, device management, data management, and value generation. These findings have important implications for developing RFID applications in healthcare organizations."}
{"_id": "828c504705841dee0031e52bf9acb016fcec45de", "title": "Virtualized and flexible ECC for main memory", "text": "We present a general scheme for virtualizing main memory error-correction mechanisms, which map redundant information needed to correct errors into the memory namespace itself. We rely on this basic idea, which increases flexibility to increase error protection capabilities, improve power efficiency, and reduce system cost; with only small performance overheads. We augment the virtual memory system architecture to detach the physical mapping of data from the physical mapping of its associated ECC information. We then use this mechanism to develop two-tiered error protection techniques that separate the process of detecting errors from the rare need to also correct errors, and thus save energy. We describe how to provide strong chipkill and double-chip kill protection using existing DRAM and packaging technology. We show how to maintain access granularity and redundancy overheads, even when using \u00d78 DRAM chips. We also evaluate error correction for systems that do not use ECC DIMMs. Overall, analysis of demanding SPEC CPU 2006 and PARSEC benchmarks indicates that performance overhead is only 1% with ECC DIMMs and less than 10% using standard Non-ECC DIMM configurations, that DRAM power savings can be as high as 27%, and that the system energy-delay product is improved by 12% on average."}
{"_id": "3ea9e4090f5a4b897b5967e273386b600632cd72", "title": "FRIP: a region-based image retrieval tool using automatic image segmentation and stepwise Boolean AND matching", "text": "We present our region-based image retrieval tool, finding region in the picture (FRIP), that is able to accommodate, to the extent possible, region scaling, rotation, and translation. Our goal is to develop an effective retrieval system to overcome a few limitations associated with existing systems. To do this, we propose adaptive circular filters used for semantic image segmentation, which are based on both Bayes' theorem and texture distribution of image. In addition, to decrease the computational complexity without losing the accuracy of the search results, we extract optimal feature vectors from segmented regions and apply them to our stepwise Boolean AND matching scheme. The experimental results using real world images show that our system can indeed improve retrieval performance compared to other global property-based or region-of-interest-based image retrieval methods."}
{"_id": "c36d770497b898266031d5f628716f23abc1381b", "title": "Today the Earwig, Tomorrow Man?", "text": "Kirsh, D., Today the earwig, tomorrow man?, Artificial Intelligence 47 (1991) 161-184. A startling amount of intelligent activity can be controlled without reasoning or thought. By tuning the perceptual system to task relevant properties a creature can cope with relatively sophisticated environments without concepts. There is a limit, however, to how far a creature without concepts can go. Rod Brooks, like many ecologically oriented scientists, argues that the vast majority of intelligent behaviour is concept-free. To evaluate this position I consider what special benefits accrue to concept-using creatures. Concepts are either necessary for certain types of perception, learning, and control, or they make those processes computationally simpler. Once a creature has concepts its capacities are vastly multiplied."}
{"_id": "d1f965f86aba82be1fe28d5718ec10660d584470", "title": "Preputial flaps to correct buried penis", "text": "The authors developed a preputial skin flap technique to correct the buried penis which was simple and practical. This simple procedure can be applied to most boys with buried penis. In the last 3\u00a0years, we have seen 12 boys with buried penis and have been treated by using preputial flaps. The mean age is about 5.1 (from 3 to 12). By making a longitudinal incision on the ventral side of penis, the tightness of the foreskin is released and leave a diamond-shaped skin defect. It allows the penile shaft to extend out. A circumferential incision is made about 5\u00a0mm proximal to the coronal sulcus. Pedicled preputial flaps are obtained leaving optimal penile skin on the dorsal side. The preputial skin flaps are rotated onto the ventral side and tailored to cover the defect. All patients are followed for at least 3\u00a0months. Edema and swelling on the flaps are common, but improves with time. None of our patients need a second operation. The preputial flaps technique is a simple technique which allows surgeons to deal with most cases of buried penis by tailoring the flaps providing good cosmetic and functional results."}
{"_id": "946158f9daec46edb11e6f00e53e2517ff4c3426", "title": "360ProbDASH: Improving QoE of 360 Video Streaming Using Tile-based HTTP Adaptive Streaming", "text": "Recently, there has been a significant interest towards 360-degree panorama video. However, such videos usually require extremely high bitrate which hinders their widely spread over the Internet. Tile-based viewport adaptive streaming is a promising way to deliver 360-degree video due to its on-request portion downloading. But it is not trivial for it to achieve good Quality of Experience (QoE) because Internet request-reply delay is usually much higher than motion-to-photon latency. In this paper, we leverage a probabilistic approach to pre-fetch tiles countering viewport prediction error, and design a QoE-driven viewport adaptation system, 360ProbDASH. It treats user's head movement as probability events, and constructs a probabilistic model to depict the distribution of viewport prediction error. A QoE-driven optimization framework is proposed to minimize total expected distortion of pre-fetched tiles. Besides, to smooth border effects of mixed-rate tiles, the spatial quality variance is also minimized. With the requirement of short-term viewport prediction under a small buffer, it applies a target-buffer-based rate adaptation algorithm to ensure continuous playback. We implement 360ProbDASH prototype and carry out extensive experiments on a simulation test-bed and real-world Internet with real user's head movement traces. The experimental results demonstrate that 360ProbDASH achieves at almost 39% gains on viewport PSNR, and 46% reduction on spatial quality variance against the existed viewport adaptation methods."}
{"_id": "149e5e5eeea5a9015ab5ae755f62c45ef70fa79b", "title": "Hierarchical Convolutional Features for Visual Tracking", "text": "Visual object tracking is challenging as target objects often undergo significant appearance changes caused by deformation, abrupt motion, background clutter and occlusion. In this paper, we exploit features extracted from deep convolutional neural networks trained on object recognition datasets to improve tracking accuracy and robustness. The outputs of the last convolutional layers encode the semantic information of targets and such representations are robust to significant appearance variations. However, their spatial resolution is too coarse to precisely localize targets. In contrast, earlier convolutional layers provide more precise localization but are less invariant to appearance changes. We interpret the hierarchies of convolutional layers as a nonlinear counterpart of an image pyramid representation and exploit these multiple levels of abstraction for visual tracking. Specifically, we adaptively learn correlation filters on each convolutional layer to encode the target appearance. We hierarchically infer the maximum response of each layer to locate targets. Extensive experimental results on a largescale benchmark dataset show that the proposed algorithm performs favorably against state-of-the-art methods."}
{"_id": "f464250c01a38411e58ba206b939a399534c506c", "title": "Assessing Test Suite E \u21b5 ectiveness Using Static Metrics", "text": "With the increasing amount of automated tests, we need ways to measure the test e\u21b5ectiveness. The state-of-the-art technique for assessing test e\u21b5ectiveness, mutation testing, is too slow and cumbersome to be used in large scale evolution studies or code audits by external companies. In this paper we investigated two alternatives, namely code coverage and assertion count. We discovered that code coverage outperforms assertion count by showing a relation with test suite e\u21b5ectiveness for all analysed project. Assertion count only displays such a relation in only one of the analysed projects. Further analysing this relationship between assertion count coverage and test e\u21b5ectiveness would allow to circumvent some of the problems of mutation testing."}
{"_id": "5b117fb14e3622d8c675c1c97b8b29e175dea194", "title": "Universal Stanford dependencies: A cross-linguistic typology", "text": "Revisiting the now de facto standard Stanford dependency representation, we propose an improved taxonomy to capture grammatical relations across languages, including morphologically rich ones. We suggest a two-layered taxonomy: a set of broadly attested universal grammatical relations, to which language-specific relations can be added. We emphasize the lexicalist stance of the Stanford Dependencies, which leads to a particular, partially new treatment of compounding, prepositions, and morphology. We show how existing dependency schemes for several languages map onto the universal taxonomy proposed here and close with consideration of practical implications of dependency representation choices for NLP applications, in particular parsing."}
{"_id": "07f9592a78ff4f8301dafc93699a32e855da3275", "title": "Computational modelling of visual attention", "text": "Five important trends have emerged from recent work on computational models of focal visual attention that emphasize the bottom-up, image-based control of attentional deployment. First, the perceptual saliency of stimuli critically depends on the surrounding context. Second, a unique 'saliency map' that topographically encodes for stimulus conspicuity over the visual scene has proved to be an efficient and plausible bottom-up control strategy. Third, inhibition of return, the process by which the currently attended location is prevented from being attended again, is a crucial element of attentional deployment. Fourth, attention and eye movements tightly interplay, posing computational challenges with respect to the coordinate system used to control attention. And last, scene understanding and object recognition strongly constrain the selection of attended locations. Insights from these five key areas provide a framework for a computational and neurobiological understanding of visual attention."}
{"_id": "075bfb99ce2dbaa2005500dff90f893b7caa68c2", "title": "On-line Boosting and Vision", "text": "Boosting has become very popular in computer vision, showing impressive performance in detection and recognition tasks. Mainly off-line training methods have been used, which implies that all training data has to be a priori given; training and usage of the classifier are separate steps. Training the classifier on-line and incrementally as new data becomes available has several advantages and opens new areas of application for boosting in computer vision. In this paper we propose a novel on-line AdaBoost feature selection method. In conjunction with efficient feature extraction methods the method is real time capable. We demonstrate the multifariousness of the method on such diverse tasks as learning complex background models, visual tracking and object detection. All approaches benefit significantly by the on-line training."}
{"_id": "2258e01865367018ed6f4262c880df85b94959f8", "title": "Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics", "text": "Simultaneous tracking of multiple persons in real-world environments is an active research field and several approaches have been proposed, based on a variety of features and algorithms. Recently, there has been a growing interest in organizing systematic evaluations to compare the various techniques. Unfortunately, the lack of common metrics for measuring the performance of multiple object trackers still makes it hard to compare their results. In this work, we introduce two intuitive and general metrics to allow for objective comparison of tracker characteristics, focusing on their precision in estimating object locations, their accuracy in recognizing object configurations and their ability to consistently label objects over time. These metrics have been extensively used in two large-scale international evaluations, the 2006 and 2007 CLEAR evaluations, to measure and compare the performance of multiple object trackers for a wide variety of tracking tasks. Selected performance results are presented and the advantages and drawbacks of the presented metrics are discussed based on the experience gained during the evaluations."}
{"_id": "5981e6479c3fd4e31644db35d236bfb84ae46514", "title": "Learning to associate: HybridBoosted multi-target tracker for crowded scene", "text": "We propose a learning-based hierarchical approach of multi-target tracking from a single camera by progressively associating detection responses into longer and longer track fragments (tracklets) and finally the desired target trajectories. To define tracklet affinity for association, most previous work relies on heuristically selected parametric models; while our approach is able to automatically select among various features and corresponding non-parametric models, and combine them to maximize the discriminative power on training data by virtue of a HybridBoost algorithm. A hybrid loss function is used in this algorithm because the association of tracklet is formulated as a joint problem of ranking and classification: the ranking part aims to rank correct tracklet associations higher than other alternatives; the classification part is responsible to reject wrong associations when no further association should be done. Experiments are carried out by tracking pedestrians in challenging datasets. We compare our approach with state-of-the-art algorithms to show its improvement in terms of tracking accuracy."}
{"_id": "65484334a5cd4cabf6e5f7a17f606f07e2acf625", "title": "Novel approach to nonlinear / non-Gaussian Bayesian state estimation", "text": "An algorithm, the bootstrap filter, is proposed for implementing recursive Bayesian filters. The required density of the state vector is represented as a set of random samples, which are updated and propagated by the algorithm. The method is not restricted by assumptions of linearity or Gaussian noise: it may be applied to any state transition or measurement model. A simulation example of the bearings only tracking problem is presented. This simulation includes schemes for improving the efficiency of the basic algorithm. For this example, the performance of the bootstrap filter is greatly superior to the standard extended Kalman filter."}
{"_id": "056e13d5045e7d594489705f78834cfaf6642c36", "title": "The Boosting Approach to Machine Learning: An Overview", "text": "Boosting is a general method for improving the accuracy of any given learning algorithm. Focusing primarily on the AdaBoost algorithm, this chapter overviews some of the recent work on boosting including analyses of AdaBoost\u2019s training error and generalization error; boosting\u2019s connection to game theory and linear programming; the relationship between boosting and logistic regression; extensions of AdaBoost for multiclass classification problems; methods of incorporating human knowledge into boosting; and experimental and applied work using boosting."}
{"_id": "ffb4d26b0f632cda0417b2f00d314bb765a847b4", "title": "Air Traffic Security : Aircraft Classification Using ADS-B Message \u2019 s Phase-Pattern", "text": "Automatic Dependent Surveillance-Broadcast (ADS-B) is a surveillance system used in Air Traffic Control. With this system, the aircraft transmits their own information (identity, position, velocity, etc.) to any equipped listener for surveillance scope. The ADS-B is based on a very simple protocol and does not provide any kind of authentication and encryption, making it vulnerable to many types of cyber-attacks. In the paper, the use of the airplane/transmitter carrier phase is proposed as a feature to perform a classification of the aircraft and, therefore, distinguish legitimate messages from fake ones. The feature extraction process is described and a classification method is selected. Finally, a complete intruder detection algorithm is proposed and evaluated with real data."}
{"_id": "27a918a368e971a15b545abf63353e269a8ce8a2", "title": "WeedMap: A Large-Scale Semantic Weed Mapping Framework Using Aerial Multispectral Imaging and Deep Neural Network for Precision Farming", "text": "The ability to automatically monitor agricultural fields is an important capability in precision farming, enabling steps towards more sustainable agriculture. Precise, high-resolution monitoring is a key prerequisite for targeted intervention and the selective application of agro-chemicals. In this paper, we present a novel weed segmentation and mapping framework that processes multispectral images obtained from an unmanned aerial vehicle (UAV) using a deep neural network (DNN). Most studies on crop/weed semantic segmentation only consider single images for processing and classification. Images taken by UAVs often cover only a few hundred square meters with either color only or color and near-infrared (NIR) channels. Although a map can be generated by processing single segmented images incrementally, this requires additional complex information fusion techniques which struggle to handle high fidelity maps due to their computational costs and problems in ensuring global consistency. Moreover, computing a single large and accurate vegetation map (e.g., crop/weed) using a DNN is non-trivial due to difficulties arising from: (1) limited ground sample distances (GSDs) in high-altitude datasets, (2) sacrificed resolution resulting from downsampling high-fidelity images, and (3) multispectral image alignment. To address these issues, we adopt a stand sliding window approach that operates on only small portions of multispectral orthomosaic maps (tiles), which are channel-wise aligned and calibrated radiometrically across the entire map. We define the tile size to be the same as that of the DNN input to avoid resolution loss. Compared to our baseline model (i.e., SegNet with 3 channel RGB inputs) yielding an area under the curve (AUC) of [background=0.607, crop=0.681, weed=0.576], our proposed model with 9 input channels achieves [0.839, 0.863, 0.782]. Additionally, we provide an extensive analysis of 20 trained models, both qualitatively and quantitatively, in order to evaluate the effects of varying input channels and tunable network hyperparameters. Furthermore, we release a large sugar beet/weed aerial dataset with expertly guided annotations for further research in the fields of remote sensing, precision agriculture, and agricultural robotics."}
{"_id": "57b12f411702fd135cbb553443c8de7718f07095", "title": "Microscopic and Macroscopic Spatio-Temporal Topic Models for Check-in Data", "text": "Twitter, together with other online social networks, such as Facebook, and Gowalla have begun to collect hundreds of millions of check-ins. Check-in data captures the spatial and temporal information of user movements and interests. To model and analyze the spatio-temporal aspect of check-in data and discover temporal topics and regions, we first propose a spatio-temporal topic model, i.e., Upstream Spatio-Temporal Topic Model (USTTM). USTTM can discover temporal topics and regions, i.e., a user\u2019s choice of region and topic is affected by time in this model. We use continuous time to model check-in data, rather than discretized time, avoiding the loss of information through discretization. In addition, USTTM captures the property that user\u2019s interests and activity space will change over time, and users have different region and topic distributions at different times in USTTM. However, both USTTM and other related models capture \u201cmicroscopic patterns\u201d within a single city, where users share POIs, and cannot discover \u201cmacroscopic\u201d patterns in a global area, where users check-in to different POIs. Therefore, we also propose a macroscopic spatio-temporal topic model, MSTTM, employing words of tweets that are shared between cities to learn the topics of user interests. We perform an experimental evaluation on Twitter and Gowalla data sets from New York City and on a Twitter US data set. In our qualitative analysis, we perform experiments with USTTM to discover temporal topics, e.g., how topic \u201ctourist destinations\u201d changes over time, and to demonstrate that MSTTM indeed discovers macroscopic, generic topics. In our quantitative analysis, we evaluate the effectiveness of USTTM in terms of perplexity, accuracy of POI recommendation, and accuracy of user and time prediction. Our results show that the proposed USTTM achieves better performance than the state-of-the-art models, confirming that it is more natural to model time as an upstream variable affecting the other variables. Finally, the performance of the macroscopic model MSTTM is evaluated on a Twitter US dataset, demonstrating a substantial improvement of POI recommendation accuracy compared to the microscopic models."}
{"_id": "dedecb86e01007eb183b78dc0a214955ad694250", "title": "18-year experience in the management of men with a complaint of a small penis.", "text": "In many cultures, the erect penis has been a symbol of masculine qualities. Because of this symbolism, a penis that is less than average size can cause insecurity or embarrassment. This series reports the authors' 18-year experience in the management of 60 men with a complaint of a small penis. For 44 of these 60 men, counseling was sufficient; the other 16 had surgery, and of these, 9 were satisfied with the result. Despite limitations, the authors conclude that those men who already achieve a penis length of no less than 7.5 cm (2.95 in) in erection, have only limited benefit from penis-enhancing surgery. This particular patient category should therefore be dissuaded from surgery."}
{"_id": "3666f19a9dac92f525893b8597af5c21855bf85b", "title": "A voxel-based octree construction approach for procedural cave generation", "text": "Procedural content generation is becoming an increasingly popular research area as a means of algorithmically generating scene content for virtual environments. The automated generation of such content avoids the manual labour typically associated with creating scene content, and is extremely useful in application areas such as computer graphics, movie production and video games. While virtual 3D caves are commonly featured in these virtual environment applications, procedural cave generation is not an area that has received much attention among researchers to date. This paper presents a procedural approach to generating 3D cave structures. Other than the development of a method to effectively automate the generation of visually believable 3D cave models, this paper also investigates how to efficiently construct and store this spatial information using a voxel-based octree data structure. In addition, the proposed approach demonstrates that caves with different characteristics can be generated by adjusting certain parameters in order to facilitate the creation of diverse cave structures."}
{"_id": "9f675dbc30a6cb282e40f9929bd3defa96189de6", "title": "IOT based crop-field monitoring and irrigation automation", "text": "Internet Of Things (IoT)is a shared network of objects or things which can interact with each other provided the Internet connection. IoT plays an important role in agriculture industry which can feed 9.6 billion people on the Earth by 2050. Smart Agriculture helps to reduce wastage, effective usage of fertilizer and thereby increase the crop yield. In this work, a system is developed to monitor crop-field using sensors (soil moisture, temperature, humidity, Light) and automate the irrigation system. The data from sensors are sent to Web server database using wireless transmission. In server database the data are encoded in JSON format. The irrigation is automated if the moisture and temperature of the field falls below the brink. In greenhouses light intensity control can also be automated in addition to irrigation. The notifications are sent to farmers' mobile periodically. The farmers' can able to monitor the field conditions from anywhere. This system will be more useful in areas where water is in scarce. This system is 92% more efficient than the conventional approach."}
{"_id": "5b31789d3e85f3dd10af6b5df5cfc72b23835434", "title": "Digital Democracy : Reimagining Pathways to Political Participation", "text": "Recently, research revolving around blogs has flourished. Usually, academics illustrate what blogs are, motivations to blog, and, only to some extent, their role in politics. Along these lines, we examine the impact of digital politics by looking specifically at blog readers. Although blog readers might be considered at the forefront of a new technological revolution, and people have speculated about their participatory habits both online and off, little research has specifically looked at this growing proportion of the population. This article models factors that predict traditional and online forms of participation, presenting a portrait of a new type of political advocate."}
{"_id": "36347d4ad8d5b8170d9bba6b39d3becc2eff8c4b", "title": "A New Model of Information Content for Semantic Similarity in WordNet", "text": "Information Content (IC) is an important dimension of assessing the semantic similarity between two terms or word senses in word knowledge. The conventional method of obtaining IC of word senses is to combine knowledge of their hierarchical structure from an ontology like WordNet with actual usage in text as derived from a large corpus. In this paper, a new model of IC is presented, which relies on hierarchical structure alone. The model considers not only the hyponyms of each word sense but also its depth in the structure. The IC value is easier to calculate based on our model, and when used as the basis of a similarity approach it yields judgments that correlate more closely with human assessments than others, which using IC value obtained only considering the hyponyms and IC value got by employing corpus analysis."}
{"_id": "f08d52220420b8008262d786dd98845ec1080f97", "title": "SPANISH IMPERFECTO AND PRETE\u0301RITO : TRUTH CONDITIONS AND AKTIONSART EFFECTS IN A SITUATION SEMANTICS*", "text": "offer an account of the semantics of these forms within a situation semantics, addressing a number of theoretically interesting questions about how to realize a semantics for tense and events in that type of framework. We argue that each of these forms is unambiguous, and that the apparent variety of readings attested for them derives from interaction with other factors in the course of interpretation. The meaning of the imperfecto is constrained to always reflect atelic aktionsart. In addition, it contains a modal element, and a contextually-given accessibility relation over situations constrains the interpretation of the modal in ways that give rise to all the attested readings. The pret\u00e9rito is indeterminate with respect to aktionsart, neither telic nor atelic. One or the other aktionsart may be forced by other factors in the clause in which the pret\u00e9rito occurs, as well as by pragmatic contrast with the possibility of using the imperfecto."}
{"_id": "39dc0cb50cb6b26582691f8d2a883f1e1059575a", "title": "An Integrated Microwave Imaging Radar With Planar Antennas for Breast Cancer Detection", "text": "The system design of an integrated microwave imaging radar for the diagnostic screening of breasts cancer is presented. A custom integrated circuit implemented in a 65-nm CMOS technology and a pair of patch antennas realized on a planar laminate are proposed as the basic module of the imaging antenna array. The radar operates on the broad frequency range from 2 to 16 GHz with a dynamic range of 107 dB. Imaging experiments carried out on a realistic breast phantom show that the system is capable of detecting tumor targets with a resolution of 3 mm."}
{"_id": "6193882b7ea731501771fc37899c41c7517bc893", "title": "A novel low-power full-adder cell for low voltage", "text": "This paper presents a novel low-power majority function-based 1-bit full adder that uses MOS capacitors (MOSCAP) in its structure. It can work reliably at low supply voltage. In this design, the timeconsuming XOR gates are eliminated. The circuits being studied are optimized for energy efficiency at 0.18-mm CMOS process technology. The adder cell is compared with seven widely used adders based on power consumption, speed, power-delay product (PDP) and area efficiency. Intensive simulation runs on a Cadence environment and HSPICE show that the new adder has more than 11% in power savings over a conventional 28-transistor CMOS adder. In addition, it consumes 30% less power than transmission function adder (TFA) and is 1.11 times faster. & 2009 Elsevier B.V. All rights reserved."}
{"_id": "5292d1a44c5f5e8db853c024b5d05e907887f499", "title": "Data-Driven Approaches to Game Player Modeling: A Systematic Literature Review", "text": "Modeling and predicting player behavior is of the utmost importance in developing games. Experience has proven that, while theory-driven approaches are able to comprehend and justify a model's choices, such models frequently fail to encompass necessary features because of a lack of insight of the model builders. In contrast, data-driven approaches rely much less on expertise, and thus offer certain potential advantages. Hence, this study conducts a systematic review of the extant research on data-driven approaches to game player modeling. To this end, we have assessed experimental studies of such approaches over a nine-year period, from 2008 to 2016; this survey yielded 46 research studies of significance. We found that these studies pertained to three main areas of focus concerning the uses of data-driven approaches in game player modeling. One research area involved the objectives of data-driven approaches in game player modeling: behavior modeling and goal recognition. Another concerned methods: classification, clustering, regression, and evolutionary algorithm. The third was comprised of the current challenges and promising research directions for data-driven approaches in game player modeling."}
{"_id": "40276d2551c3b38a02ff5804901044c692bf0267", "title": "Comparing Relational and Ontological Triple Stores in Healthcare Domain", "text": "Today\u2019s technological improvements have made ubiquitous healthcare systems that converge into smart healthcare applications in order to solve patients\u2019 problems, to communicate effectively with patients, and to improve healthcare service quality. The first step of building a smart healthcare information system is representing the healthcare data as connected, reachable, and sharable. In order to achieve this representation, ontologies are used to describe the healthcare data. Combining ontological healthcare data with the used and obtained data can be maintained by storing the entire health domain data inside big data stores that support both relational and graph-based ontological data. There are several big data stores and different types of big data sets in the healthcare domain. The goal of this paper is to determine the most applicable ontology data store for storing the big healthcare data. For this purpose, AllegroGraph and Oracle 12c data stores are compared based on their infrastructural capacity, loading time, and query response times. Hence, healthcare ontologies (GENE Ontology, Gene Expression Ontology (GEXO), Regulation of Transcription Ontology (RETO), Regulation of Gene Expression Ontology (REXO)) are used to measure the ontology loading time. Thereafter, various queries are constructed and executed for GENE ontology in order to measure the capacity and query response times for the performance comparison between AllegroGraph and Oracle 12c triple stores."}
{"_id": "4f623e3821d14553b3b286e20910db9225fb723f", "title": "Audio-Visual Person Recognition in Multimedia Data From the Iarpa Janus Program", "text": "Currently, datasets that support audio-visual recognition of people in videos are scarce and limited. In this paper, we introduce an expansion of video data from the IARPA Janus program to support this research area. We refer to the expanded set, which adds labels for voice to the already-existing face labels, as the Janus Multimedia dataset. We first describe the speaker labeling process, which involved a combination of automatic and manual criteria. We then discuss two evaluation settings for this data. In the core condition, the voice and face of the labeled individual are present in every video. In the full condition, no such guarantee is made. The power of audiovisual fusion is then shown using these publicly-available videos and labels, showing significant improvement over only recognizing voice or face alone. In addition to this work, several other possible paths for future research with this dataset are discussed."}
{"_id": "53b35519e09772fb7ec470fdec51c6edb43c4f13", "title": "Word Channel Based Multiscale Pedestrian Detection without Image Resizing and Using Only One Classifier", "text": "Most pedestrian detection approaches that achieve high accuracy and precision rate and that can be used for realtime applications are based on histograms of gradient orientations. Usually multiscale detection is attained by resizing the image several times and by recomputing the image features or using multiple classifiers for different scales. In this paper we present a pedestrian detection approach that uses the same classifier for all pedestrian scales based on image features computed for a single scale. We go beyond the low level pixel-wise gradient orientation bins and use higher level visual words organized into Word Channels. Boosting is used to learn classification features from the integral Word Channels. The proposed approach is evaluated on multiple datasets and achieves outstanding results on the INRIA and Caltech-USA benchmarks. By using a GPU implementation we achieve a classification rate of over 10 million bounding boxes per second and a 16 FPS rate for multiscale detection in a 640\u00d7480 image."}
{"_id": "5d7316f5e074d39cbf19e24a87ac1f8e26cc33fa", "title": "Cultural psychology: a once and future discipline?", "text": "Now, we come to offer you the right catalogues of book to open. cultural psychology a once and future discipline is one of the literary work in this world in suitable to be reading material. That's not only this book gives reference, but also it will show you the amazing benefits of reading a book. Developing your countless minds is needed; moreover you are kind of people with great curiosity. So, the book is very appropriate for you."}
{"_id": "225929a1fdde2412b8283c15dbd767cde7d74b0b", "title": "Citation Author Topic Model in Expert Search", "text": "This paper proposes a novel topic model, Citation-Author-Topic (CAT) model that addresses a semantic search task we define as expert search \u2013 given a research area as a query, it returns names of experts in this area. For example, Michael Collins would be one of the top names retrieved given the querySyntactic Parsing. Our contribution in this paper is two-fold. First, we model the cited author information together with words and paper authors. Such extra contextual information directly models linkage among authors and enhances the author-topic association, thus produces more coherent author-topic distribution. Second, we provide a preliminary solution to the task of expert search when the learning repository contains exclusively research related documents authored by the experts. When compared with a previous proposed model (Johri et al., 2010), the proposed model produces high quality author topic linkage and achieves over 33% error reduction evaluated by the standard MAP measurement."}
{"_id": "0ca1e465dd85b8254bcdd7053032d7eab6e2d4b4", "title": "LogP: Towards a Realistic Model of Parallel Computation", "text": "A vast body of theoretical research has focused either on overly simplistic models of parallel computation, notably the PRAM, or overly specific models that have few representatives in the real world. Both kinds of models encourage exploitation of formal loopholes, rather than rewarding development of techniques that yield performance across a range of current and future parallel machines. This paper offers a new parallel machine model, called LogP, that reflects the critical technology trends underlying parallel computers. it is intended to serve as a basis for developing fast, portable parallel algorithms and to offer guidelines to machine designers. Such a model must strike a balance between detail and simplicity in order to reveal important bottlenecks without making analysis of interesting problems intractable. The model is based on four parameters that specify abstractly the computing bandwidth, the communication bandwidth, the communication delay, and the efficiency of coupling communication and computation. Portable parallel algorithms typically adapt to the machine configuration, in terms of these parameters. The utility of the model is demonstrated through examples that are implemented on the CM-5."}
{"_id": "22bebedc1a5f3556cb4f577bdbe032299a2865e8", "title": "Effective training of convolutional neural networks for face-based gender and age prediction", "text": "Convolutional Neural Networks (CNNs) have been proven very effective for human demographics estimation by a number of recent studies. However, the proposed solutions significantly vary in different aspects leaving many open questions on how to choose an optimal CNN architecture and which training strategy to use. In this work, we shed light on some of these questions improving the existing CNN-based approaches for gender and age prediction and providing practical hints for future studies. In particular, we analyse four important factors of the CNN training for gender recognition and age estimation: (1) the target age encoding and loss function, (2) the CNN depth, (3) the need for pretraining, and (4) the training strategy: mono-task or multi-task. As a result, we design the state-of-the-art gender recognition and age estimation models according to three popular benchmarks: LFW, MORPH-II and FGNET. Moreover, our best model won the ChaLearn Apparent Age Estimation Challenge 2016 significantly outperforming the solutions of other participants."}
{"_id": "94c852f5ac537123054d72f17bd1507d38db4f8e", "title": "Capacitive angular-position sensor with electrically floating conductive rotor and measurement redundancy", "text": "This paper presents a capacitive angular-position sensor with a contactless electrically floating conductive rotor and an interface electronic circuit that is designed to maximize the performance/cost ratio. The sensor includes two separate and independent measurement sections that sense the same angle and provide redundancy in critical applications. The electronic interface is based on a relaxation oscillator that, for each of the two sections, measures an appropriate quantity that relates capacitance ratio to angular position and provides a dc output voltage that varies ratiometrically with respect to the supply voltage. The sensor was built in a version with /spl plusmn/11/spl deg/ measuring range for each section. Experimental tests showed a linearity better that 1% of the span."}
{"_id": "37a1e8411669e29cf8fbf48ec920c97c0066ac7e", "title": "Simple, Fast, and Practical Non-Blocking and Blocking Concurrent Queue Algorithms", "text": "Drawing ideas from previous authors, we present a new non-blocking concurrent queue algorithm and a new twolock queue algorithm in which one enqueue and one dequeue can proceed concurrently. Both algorithms are simple, fast, and practical; we were surprised not to find them in the literature. Experiments on a 12-node SGI Challenge multiprocessor indicate that the new non-blocking queue consistently outperforms the best known alternatives; it is the clear algorithm of choice for machines that provide a universal atomic primitive (e.g. compare and swap or load linked/store conditional). The two-lock concurrent queue outperforms a single lock when several processes are competing simultaneously for access; it appears to be the algorithm of choice for busy queues on machines with non-universal atomic primitives (e.g. test and set). Since much of the motivation for non-blocking algorithms is rooted in their immunity to large, unpredictable delays in process execution,we report experimental results both for systems with dedicated processors and for systems with several processes multiprogrammed on each processor."}
{"_id": "480f39011679ef64f34f2ec225410c08c9402b79", "title": "Percutaneous kyphoplasty and pedicle screw fixation for the management of thoraco-lumbar burst fractures", "text": "The study design includes prospective evaluation of percutaneous osteosynthesis associated with cement kyphoplasty on 18 patients. The objective of the study is to assess the efficacy of a percutaneous method of treating burst vertebral fractures in patients without neurological deficits. Even if burst fractures are frequent, no therapeutic agreement is available at the moment. We report in this study the results at 2\u00a0years with a percutaneous approach for the treatment of burst fractures. 18 patients were included in this study. All the patients had burst vertebral fractures classified type A3 on the Magerl scale, between levels T9 and L2. The patients\u2019 mean age was 53\u00a0years (range 22\u201378\u00a0years) and the neurological examination was normal. A percutaneous approach was systematically used and a kyphoplasty was performed via the transpedicular pathway associated with percutaneous short-segment pedicle screw osteosynthesis. The patients\u2019 follow-up included CT scan analysis, measurement of vertebral height recovery and local kyphosis, and clinical pain assessments. With this surgical approach, the mean vertebral height was improved by 25% and a mean improvement of 11.28\u00b0 in the local kyphotic angle was obtained. 3\u00a0months after the operation, none of the patients were taking class II analgesics. The mean duration of their hospital stay was 4.5\u00a0days (range 3\u20137\u00a0days) and the mean follow-up period was 26\u00a0months (range 17\u201330\u00a0months). No significant changes in the results obtained were observed at the end of the follow-up period. Minimally invasive methods of treating burst vertebral fractures can be performed via the percutaneous pathway. This approach gives similar vertebral height recovery and kyphosis correction rates to those obtained with open surgery. It provides a short hospital stay, however, and might therefore constitute a useful alternative to open surgical methods."}
{"_id": "f1898d80af8968c49b2300dc214da703f28377cc", "title": "An unusual vesical calculus.", "text": "We report a 52 year old patient presenting with a bladder stone formed over a migrated intrauterine device (Copper-T). Her history was pertinent for intrauterine contraceptive (IUCD) device placement 10 years back. Investigations included plain ultrasound of abdomen, X-ray of abdomen, urinalysis, and urine culture. Ultrasound and plain X-ray of the pelvis confirmed a bladder stone formed over a migrated copper-T intrauterine device. The device was removed through suprapubic cystolithotomy. Of all the reported cases of vesical stone formation over a migrated IUCD, this case is unique as the patient was an elderly - 52 year old - female. All previously reported cases are of younger age."}
{"_id": "57a0213df9dd8b10f2dcd644905de23efead02e0", "title": "The development and malleability of executive control abilities", "text": "Executive control (EC) generally refers to the regulation of mental activity. It plays a crucial role in complex cognition, and EC skills predict high-level abilities including language processing, memory, and problem solving, as well as practically relevant outcomes such as scholastic achievement. EC develops relatively late in ontogeny, and many sub-groups of developmental populations demonstrate an exaggeratedly poor ability to control cognition even alongside the normal protracted growth of EC skills. Given the value of EC to human performance, researchers have sought means to improve it through targeted training; indeed, accumulating evidence suggests that regulatory processes are malleable through experience and practice. Nonetheless, there is a need to understand both whether specific populations might particularly benefit from training, and what cortical mechanisms engage during performance of the tasks used in the training protocols. This contribution has two parts: in Part I, we review EC development and intervention work in select populations. Although promising, the mixed results in this early field make it difficult to draw strong conclusions. To guide future studies, in Part II, we discuss training studies that have included a neuroimaging component - a relatively new enterprise that also has not yet yielded a consistent pattern of results post-training, preventing broad conclusions. We therefore suggest that recent developments in neuroimaging (e.g., multivariate and connectivity approaches) may be useful to advance our understanding of the neural mechanisms underlying the malleability of EC and brain plasticity. In conjunction with behavioral data, these methods may further inform our understanding of the brain-behavior relationship and the extent to which EC is dynamic and malleable, guiding the development of future, targeted interventions to promote executive functioning in both healthy and atypical populations."}
{"_id": "8ea1d82cadadab86e7d2236a8864881cc6b65c5c", "title": "Learning Parameterized Skills", "text": "LEARNING PARAMETERIZED SKILLS"}
{"_id": "000f90380d768a85e2316225854fc377c079b5c4", "title": "Full-Resolution Residual Networks for Semantic Segmentation in Street Scenes", "text": "Semantic image segmentation is an essential component of modern autonomous driving systems, as an accurate understanding of the surrounding scene is crucial to navigation and action planning. Current state-of-the-art approaches in semantic image segmentation rely on pre-trained networks that were initially developed for classifying images as a whole. While these networks exhibit outstanding recognition performance (i.e., what is visible?), they lack localization accuracy (i.e., where precisely is something located?). Therefore, additional processing steps have to be performed in order to obtain pixel-accurate segmentation masks at the full image resolution. To alleviate this problem we propose a novel ResNet-like architecture that exhibits strong localization and recognition performance. We combine multi-scale context with pixel-level accuracy by using two processing streams within our network: One stream carries information at the full image resolution, enabling precise adherence to segment boundaries. The other stream undergoes a sequence of pooling operations to obtain robust features for recognition. The two streams are coupled at the full image resolution using residuals. Without additional processing steps and without pre-training, our approach achieves an intersection-over-union score of 71.8% on the Cityscapes dataset."}
{"_id": "02d17701dd346311197c3f1553ae9e0d6376fd43", "title": "Return of Frustratingly Easy Domain Adaptation", "text": "Unlike human learning, machine learning often fails to handle changes between training (source) and test (target) input distributions. Such domain shifts, common in practical scenarios, severely damage the performance of conventional machine learning methods. Supervised domain adaptation methods have been proposed for the case when the target data have labels, including some that perform very well despite being \u201cfrustratingly easy\u201d to implement. However, in practice, the target domain is often unlabeled, requiring unsupervised adaptation. We propose a simple, effective, and efficient method for unsupervised domain adaptation called CORrelation ALignment (CORAL). CORAL minimizes domain shift by aligning the second-order statistics of source and target distributions, without requiring any target labels. Even though it is extraordinarily simple\u2013it can be implemented in four lines of Matlab code\u2013CORAL performs remarkably well in extensive evaluations on standard benchmark datasets. \u201cEverything should be made as simple as possible, but not simpler.\u201d"}
{"_id": "223319a93dcf3912bbc1e5f949e5ab4d53906e62", "title": "Unsupervised Domain Adaptation by Backpropagation", "text": "Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled targetdomain data is necessary). As the training progresses, the approach promotes the emergence of \u201cdeep\u201d features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-ofthe-art on Office datasets."}
{"_id": "3842ee1e0fdfeff936b5c49973ff21adfaaf3929", "title": "Adversarial Discriminative Domain Adaptation", "text": "Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They can also improve recognition despite the presence of domain shift or dataset bias: recent adversarial approaches to unsupervised domain adaptation reduce the difference between the training and test domain distributions and thus improve generalization performance. However, while generative adversarial networks (GANs) show compelling visualizations, they are not optimal on discriminative tasks and can be limited to smaller shifts. On the other hand, discriminative approaches can handle larger domain shifts, but impose tied weights on the model and do not exploit a GAN-based loss. In this work, we first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and use this generalized view to better relate prior approaches. We then propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task."}
{"_id": "38f35dd624cd1cf827416e31ac5e0e0454028eca", "title": "Regularization of Neural Networks using DropConnect", "text": "We introduce DropConnect, a generalization of Dropout (Hinton et al., 2012), for regularizing large fully-connected layers within neural networks. When training with Dropout, a randomly selected subset of activations are set to zero within each layer. DropConnect instead sets a randomly selected subset of weights within the network to zero. Each unit thus receives input from a random subset of units in the previous layer. We derive a bound on the generalization performance of both Dropout and DropConnect. We then evaluate DropConnect on a range of datasets, comparing to Dropout, and show state-of-the-art results on several image recognition benchmarks by aggregating multiple DropConnect-trained models."}
{"_id": "2034af15cc632a9d820019352dfaee71e13c8efd", "title": "Implications of integrating test-driven development into CS1/CS2 curricula", "text": "Many academic and industry professionals have called for more testing in computer science curricula. Test-driven development (TDD) has been proposed as a solution to improve testing in academia. This paper demonstrates how TDD can be integrated into existing course materials without reducing topic coverage. Two controlled experiments were conducted in a CS1/CS2 course in Winter 2008. Following a test-driven learning approach, unit testing was introduced at the beginning of the course and reinforced through example. Results indicate that while student work loads may increase with the incorporation of TDD, students are able to successfully develop unit tests while learning to program."}
{"_id": "de46701859cfce9029773adcf474f3c3179c35f8", "title": "Quaternion-Based Unscented Kalman Filter for Accurate Indoor Heading Estimation Using Wearable Multi-Sensor System", "text": "Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. The heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation. However, the accuracy of gyroscope is unreliable with time. In this paper, a wearable multi-sensor system has been designed to obtain the high-accuracy indoor heading estimation, according to a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system including one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer and one microprocessor minimizes the size and cost. The wearable multi-sensor system was fixed on waist of pedestrian and the quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors are less 10\u00b0 and 5\u00b0 to multi-sensor system fixed on waist of pedestrian and the quadrotor UAV, respectively, compared to the reference path."}
{"_id": "26f75576e065f0939e416a78cafdd00221425ae0", "title": "Mechanical recycling of waste electric and electronic equipment: a review.", "text": "The production of electric and electronic equipment (EEE) is one of the fastest growing areas. This development has resulted in an increase of waste electric and electronic equipment (WEEE). In view of the environmental problems involved in the management of WEEE, many counties and organizations have drafted national legislation to improve the reuse, recycling and other forms of recovery of such wastes so as to reduce disposal. Recycling of WEEE is an important subject not only from the point of waste treatment but also from the recovery of valuable materials.WEEE is diverse and complex, in terms of materials and components makeup as well as the original equipment's manufacturing processes. Characterization of this waste stream is of paramount importance for developing a cost-effective and environmentally friendly recycling system. In this paper, the physical and particle properties of WEEE are presented. Selective disassembly, targeting on singling out hazardous and/or valuable components, is an indispensable process in the practice of recycling of WEEE. Disassembly process planning and innovation of disassembly facilities are most active research areas. Mechanical/physical processing, based on the characterization of WEEE, provides an alternative means of recovering valuable materials. Mechanical processes, such as screening, shape separation, magnetic separation, Eddy current separation, electrostatic separation, and jigging have been widely utilized in recycling industry. However, recycling of WEEE is only beginning. For maximum separation of materials, WEEE should be shredded to small, even fine particles, generally below 5 or 10mm. Therefore, a discussion of mechanical separation processes for fine particles is highlighted in this paper. Consumer electronic equipment (brown goods), such as television sets, video recorders, are most common. It is very costly to perform manual dismantling of those products, due to the fact that brown goods contain very low-grade precious metals and copper. It is expected that a mechanical recycling process will be developed for the upgrading of low metal content scraps."}
{"_id": "d5b0ace1738b69fcfb55d2b503825b2ffefafb92", "title": "Query Term Suggestion in Academic Search", "text": "In this paper, we evaluate query term suggestion in the context of academic professional search. Our overall goal is to support scientists in their information seeking tasks. We set up an interactive search system in which terms are extracted from clicked documents and suggested to the user before every query specification step. We evaluated our method with the iSearch collection of academic information seeking behaviour and crowdsourced term relevance judgements. We found that query term suggestion can significantly improve recall in academic search."}
{"_id": "d22b1bcd1c525c9130ffa412ca9dd71ebbc2a2b1", "title": "Red Opal: product-feature scoring from reviews", "text": "Online shoppers are generally highly task-driven: they have a certain goal in mind, and they are looking for a product with features that are consistent with that goal. Unfortunately, finding a product with specific features is extremely time-consuming using the search functionality provided by existing web sites.In this paper, we present a new search system called Red Opal that enables users to locate products rapidly based on features. Our fully automatic system examines prior customer reviews, identifies product features, and scores each product on each feature. Red Opal uses these scores to determine which products to show when a user specifies a desired product feature. We evaluate our system on four dimensions: precision of feature extraction, efficiency of feature extraction, precision of product scores, and estimated time savings to customers. On each dimension, Red Opal performs better than a comparison system."}
{"_id": "8579b32d0cabee5a9f41dcde0b17aa51d9508850", "title": "Online, interactive learning of gestures for human/robot interfaces", "text": "We have developed a gesture recognition system, based on Hidden Markov Models, which can interactively recognize gestures and perform online learning of new gestures. In addition, it is able to update its model of a gesture iteratively with each example it recognizes. This system has demonstrated reliable recognition of 14 di erent gestures after only one or two examples of each. The system is currently interfaced to a Cyberglove for use in recognition of gestures from the sign language alphabet. The system is being implemented as part of an interactive interface for robot teleoperation and programming by example."}
{"_id": "a648b5610292aac366428339925823d78f0811f0", "title": "Enterprise Architecture: Enabling Integration, Agility And Change", "text": "Three core imperatives are essential for modern businesses and organizations: seamless integration of customer and operational processes, agility, and the ability to change. These imperatives are thus relevant in view of successfully executing strategic choices, but all too often not satisfied. Businesses and organizations can be viewed as complex adaptive socio-technical systems. Two fundamentally different perspectives on systems play a role: the functional (black-box) perspective and the constructional (white-box) perspective. Management and governance of businesses and organizations regard the functional, black-box perspective, which is inherently ill-suited for addressing the imperatives mentioned. Rather, establishing system integration, agility and change requires a focus on the system\u2019s design, hence entails the constructional perspective. Both perspectives are relevant. The concept of architecture plays a fundamental role for operationalizing the constructional perspective. Next to the more familiar notion of technology architecture, the concepts of business and organizational architecture are formally introduced and elucidated. In view of the important role of information in customer and operational processes, the notion of information architecture will be additionally discussed, specifically the relationships with other architectures. Various domains within these architectures will be highlighted, whereby the importance of coherence and consistency is stressed, especially in view of the ability to change. Collectively, the four architectures are labeled Enterprise Architecture. Finally, enterprise architecture will be positioned as a crucial means for linking strategy development and execution. The functional and constructional perspective Function and construction A system may be defined as an identifiable bounded set of functionally and/or methodically related elements or principles with a certain operational purpose. Within this definition, a plethora of systems can be mentioned: a payment system, an accounting system, a transport"}
{"_id": "a152e38c596c93f88e9856fe4af54d03c15b9b19", "title": "Compact substrate integrated waveguide based active and passive circularly polarized composite right/left handed leaky wave antennas", "text": "Two compact substrate integrated waveguide (SIW) based circularly polarized (CP) composite right/left handed (CRLH) leaky wave antennas (LWAs) are designed and presented. Besides wide-band matching enhancement, the proposed antennas are shortened by 39.8 % along the scanning plane compared to the previous work [1]. Additionally, the current amplitude distribution of the proposed antenna can be controlled by using negative resistance based reflection type amplifiers (RTAs). The compensated uniform distribution yields an enhanced directivity without compromising full space frequency scanning capability."}
{"_id": "7a1ebd6327d0938c9ecad6bfdbd6e781744d29a9", "title": "Crowd Geofencing", "text": "Geofencing mechanisms allow for timely message delivery to the visitors of predefined target areas. However, conventional geofencing approaches poorly support mobile data collection scenarios in which experts need in situ assistance. In this paper, we propose crowd geofencing environments, in which a large number of crowdworkers generate geofences to support mobile experts. As a first step to open up the possibilities of crowd geofencing, we have tested its feasibility by collecting more than one thousand geofences in an unfamiliar city prior to the visit to look into urban water and air quality issues. Our experience has revealed the strengths and weaknesses of crowd geofencing in terms of geofence quality and crowd-powered situated actions."}
{"_id": "3d95e395c19836726ebabd8ca04c5e7be13c269d", "title": "Analysis and design of block cipher constructions", "text": "This thesis is dedicated to symmetric cryptographic algorithms. The major focus of the work is on block ciphers themselves as well as on hash functions and message authentication codes based on block ciphers. Three main approaches to the cryptanalysis of symmetric cryptographic algorithms are pursued. First, several block cipher constructions are analyzed mathematically using statistical cryptanalysis. Second, practical attacks on real-world symmetric cryptosystems are considered. Finally, novel cryptanalytic techniques using side-channel leakage are studied with applications to block ciphers and message authentication codes. Differential and linear cryptanalyses are well-known statistical attacks on block ciphers. This thesis studies the security of unbalanced Feistel networks with contracting MDS diffusion with respect to differential and linear cryptanalysis. Upper bounds on the differential trail probabilities and linear probabilities of linear trails in such constructions are proven. It is shown that such unbalanced Feistel networks can be highly efficient and are comparable to many known balanced Feistel network constructions with respect to differential and linear cryptanalysis. Ultra-lightweight substitution-permutation networks with diffusion layers based on the co-design of S-boxes and bit permutations are proposed. This results in lightweight block ciphers and block cipher based compression functions for hash functions designed and analyzed. These constructions have very small footprint and can be efficiently implemented on the majority of RFID tags This work also studies practical attacks on real-world symmetric cryptographic systems. Attacks are proposed on the KeeLoq block cipher and authentication systems widely used for automotive access control and component identification. Cryptanalysis of the A5/2 stream cipher used for protecting GSM connections worldwide is performed. Linear slide attacks on KeeLoq are proposed resulting in the fastest known attack on the KeeLoq block cipher working for all keys. Severe weaknesses of the KeeLoq key management are identified. The KeeLoq real-world authentication protocols for access control and component identification are also analyzed. A special-purpose hardware architecture for attacking A5/2 is developed that allows for real-time key recovery within one second for different GSM channels. This engine is based on an optimized hardware algorithm for fast Gaussian elimination over binary finite fields."}
{"_id": "cc659ce37b49d2de8cf78aa97e5653f35cfb0c5b", "title": "Does Music Predictability Improve Spatial-Temporal Reasoning ?", "text": "In 1993, it was found that Mozart\u2019s music temporarily enhanced performance on spatialtemporal reasoning tasks. This effect later became termed the Mozart Effect. Since then, the mechanisms of the Mozart Effect have been hotly debated among the scientific community. Recent research has called for studying the Mozart Effect by analyzing music as a series of components. The present study took the components of music into account by testing if predictable music could enhance spatial-temporal reasoning compared to non-predictable music. Participants were administered a task designed to test spatial-temporal reasoning after listening to either predictable or non-predictable music. Additionally, as a control condition, they performed the same task after listening to a short story. Listening to music did not affect reasoning performance, regardless of the predictability of the music. The results indicate that predictability in music alone may not be a major element of the Mozart Effect."}
{"_id": "c7fc0a2883f4d8c5e8dedb83ddd340b21f85ad1d", "title": "Wireless Hand Gesture Recognition Based on Continuous-Wave Doppler Radar Sensors", "text": "Short-range continuous-wave Doppler radar sensors have been mainly used for noncontact detection of various motions. In this paper, we investigate the feasibility to implement the function of a remote mouse, an input device of a computer, by recognizing human gestures based on a dual-channel Doppler radar sensor. Direct conversion architecture, symmetric subcarrier modulation, and bandpass sampling techniques are used to obtain a cost-effective solution. An arcsine algorithm and a motion imaging algorithm are proposed to linearly reconstruct the hand and finger motions in a 2-D plane from the demodulated Doppler phase shifts. Experimental results verified the effectiveness of the proposed architecture and algorithms. Different from the frequency-domain \u201cmicro-Doppler\u201d approach, the proposed remote gesture recognition based on linear motion reconstruction is able to recognize definitive signatures for the corresponding motions, exhibiting promising potential in practical applications of human-computer interaction."}
{"_id": "ccd1c6196760b01573f7b06bf60dd3153709dc9d", "title": "Design Optimization of Transverse Flux Linear Motor for Weight Reduction and Performance Improvement Using Response Surface Methodology and Genetic Algorithms", "text": "Permanent magnet (PM) type transverse flux linear motors (TFLMs) are electromagnetic devices, which can develop directly powerful linear motion. These motors have been developed to apply to high power system, such as railway traction, electrodynamics vibrator, free-piston generator, etc. This paper presents an optimum design of a PM-type TFLM to reduce the weight of motor with constraints of thrust and detent force using response surface methodology (RSM) and genetic algorithms (GAs). RSM is well adapted to make analytical model of motor weight with constraints of thrust and detent forces, and enable objective function to be easily created and a great computational time to be saved. Finite element computations have been used for numerical experiments on geometrical design variables in order to determine the coefficients of a second-order analytical model for the RSM. GAs are used as a searching tool for design optimization of TFLM to reduce the weight of motor and improve the motor performance."}
{"_id": "697907bea03fdc95233b1b6ae3956ab3352999ad", "title": "Glioblastoma multiforme tissue histopathology images based disease stage classification with deep CNN", "text": "Recently, many feature extraction methods for histopathology images have been reported for automatic quantitative analysis. One of the severe brain tumors is the Glioblastoma multiforme (GBM) and histopathology tissue images can provide unique insights into identifying and grading disease stages. However, the number of tissue samples to be examined is enormous, and is a burden to pathologists because of tedious manual evaluation traditionally required for efficient evaluation. In this study, we consider feature extraction and disease stage classification for brain tumor histopathology images using automatic image analysis methods. In particular, we utilized an automatic feature extraction and labeling for histopathology imagery data given by The Cancer Genome Atlas (TCGA) and checked the classification accuracy of disease stages in GBM tissue images using deep Convolutional Neural Network (CNN). Experimental results indicate promise in automatic disease stage classification and high level of accuracy were obtained for tested image data."}
{"_id": "dda593f391e0442cbdf7441e47c8f488c6c24cf4", "title": "Template Deformation for Point Cloud Fitting", "text": "The reconstruction of high-quality surface meshes from measured data is a vital stage in digital shape processing. We present a new approach to this problem that deforms a template surface to fit a given point cloud. Our method takes a template mesh and a point cloud as input, the latter typically shows missing parts and measurement noise. The deformation process is initially guided by user specified correspondences between template and data, then during iterative fitting new correspondences are established. This approach is based on a Laplacian setting for the template without need of any additional meshing of the data or cross-parameterization. The reconstructed surface fits to the point cloud while it inherits shape properties and topology of the template. We demonstrate the effectiveness of the approach for several point data sets from different sources."}
{"_id": "f722e3d2a9f8fc8adfb0498850f19ff63c624436", "title": "Bystander Responses to a Violent Incident in an Immersive Virtual Environment", "text": "Under what conditions will a bystander intervene to try to stop a violent attack by one person on another? It is generally believed that the greater the size of the crowd of bystanders, the less the chance that any of them will intervene. A complementary model is that social identity is critical as an explanatory variable. For example, when the bystander shares common social identity with the victim the probability of intervention is enhanced, other things being equal. However, it is generally not possible to study such hypotheses experimentally for practical and ethical reasons. Here we show that an experiment that depicts a violent incident at life-size in immersive virtual reality lends support to the social identity explanation. 40 male supporters of Arsenal Football Club in England were recruited for a two-factor between-groups experiment: the victim was either an Arsenal supporter or not (in-group/out-group), and looked towards the participant for help or not during the confrontation. The response variables were the numbers of verbal and physical interventions by the participant during the violent argument. The number of physical interventions had a significantly greater mean in the in-group condition compared to the out-group. The more that participants perceived that the Victim was looking to them for help the greater the number of interventions in the in-group but not in the out-group. These results are supported by standard statistical analysis of variance, with more detailed findings obtained by a symbolic regression procedure based on genetic programming. Verbal interventions made during their experience, and analysis of post-experiment interview data suggest that in-group members were more prone to confrontational intervention compared to the out-group who were more prone to make statements to try to diffuse the situation."}
{"_id": "163aaf13875d29ee0424d0a248970f6e8233de7b", "title": "Point cloud compression based on hierarchical point clustering", "text": "In this work we propose an algorithm for compressing the geometry of a 3D point cloud (3D point-based model). The proposed algorithm is based on the hierarchical clustering of the points. Starting from the input model, it performs clustering to the points to generate a coarser approximation, or a coarser level of detail (LOD). Iterating this clustering process, a sequence of LODs are generated, forming an LOD hierarchy. Then, the LOD hierarchy is traversed top down in a width-first order. For each node encountered during the traversal, the corresponding geometric updates associated with its children are encoded, leading to a progressive encoding of the original model. Special efforts are made in the clustering to maintain high quality of the intermediate LODs. As a result, the proposed algorithm achieves both generic topology applicability and good ratedistortion performance at low bitrates, facilitating its applications for low-end bandwidth and/or platform configurations."}
{"_id": "25a9cd5d49caffb2d687d1e54441f0a5cf222586", "title": "INTERNAL COMPACT PRINTED LOOP ANTENNA WITH MATCHING ELEMENT FOR LAPTOP APPLICATIONS", "text": "An internal compact printed loop antenna with matching element is presented for laptop applications. Bending to the printed loop structure is employed to reduce the size of antenna. A matching element is used at backside of the substrate to enhance the bandwidth of the antenna. The matching element is of rectangular shape with inverted U shape slot in it. The proposed antenna with wideband characteristics covers GSM900, GSM1900, UMTS, LTE2300, WLAN, WiMAX 3.3/3.5/3.7 bands and is suitable for laptop applications. The antenna structure is simulated using IE3D software. Keywordsbended; multiband antenna; printed loop antenna; tablet; wideband antenna."}
{"_id": "c4a6258e204772bc331cc340562ffb2ab0e9d2e6", "title": "Anti-phishing detection of phishing attacks using genetic algorithm", "text": "An approach to detection of phishing hyperlinks using the rule based system formed by genetic algorithm is proposed, which can be utilized as a part of an enterprise solution to anti-phishing. A legitimate webpage owner can use this approach to search the web for suspicious hyperlinks. In this approach, genetic algorithm is used to evolve rules that are used to differentiate phishing link from legitimate link. Evaluating the parameters like evaluation function, crossover and mutation, GA generates a ruleset that matches only the phishing links. This ruleset is stored in a database and a link is reported as a phishing link if it matches any of the rules in the rule based system and thus it keeps safe from fake hackers. Preliminary experiments show that this approach is effective to detect phishing hyperlink with minimal false negatives at a speed adequate for online application."}
{"_id": "7d20d2e55a7616cb27ba1160a986f9035cfd373e", "title": "SDN and OpenFlow for Dynamic Flex-Grid Optical Access and Aggregation Networks", "text": "We propose and discuss the extension of software-defined networking (SDN) and OpenFlow principles to optical access/aggregation networks for dynamic flex-grid wavelength circuit creation. The first experimental demonstration of an OpenFlow1.0-based flex-grid \u03bb-flow architecture for dynamic 150 Mb/s per-cell 4 G Orthogonal Frequency Division Multiple Access (OFDMA) mobile backhaul (MBH) overlays onto 10 Gb/s passive optical networks (PON) without optical network unit (ONU)-side optical filtering, amplification, or coherent detection, over 20 km standard single mode fiber (SSMF) with a 1:64 passive split is also detailed. The proposed approach can be attractive for monetizing optical access/aggregation networks via on-demand support for high-speed, low latency, high quality of service (QoS) applications over legacy fiber infrastructure."}
{"_id": "fa352e8e4d9ec2f4b66965dd9cea75167950152a", "title": "Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations", "text": "We introduce physics informed neural networks \u2013 neural networks that are trained to solve supervised learning tasks while respecting any given law of physics described by general nonlinear partial differential equations. In this two part treatise, we present our developments in the context of solving two main classes of problems: data-driven solution and data-driven discovery of partial differential equations. Depending on the nature and arrangement of the available data, we devise two distinct classes of algorithms, namely continuous time and discrete time models. The resulting neural networks form a new class of data-efficient universal function approximators that naturally encode any underlying physical laws as prior information. In this first part, we demonstrate how these networks can be used to infer solutions to partial differential equations, and obtain physics-informed surrogate models that are fully differentiable with respect to all input coordinates and free parameters."}
{"_id": "0a28002c9fb869ae1feab9e851064c3c53fac5ad", "title": "Children's moral motivation, sympathy, and prosocial behavior.", "text": "Two studies investigated the role of children's moral motivation and sympathy in prosocial behavior. Study 1 measured other-reported prosocial behavior and self- and other-reported sympathy. Moral motivation was assessed by emotion attributions and moral reasoning following hypothetical transgressions in a representative longitudinal sample of Swiss 6-year-old children (N = 1,273). Prosocial behavior increased with increasing sympathy, especially if children displayed low moral motivation. Moral motivation and sympathy were also independently related to prosocial behavior. Study 2 extended the findings of Study 1 with a second longitudinal sample of Swiss 6-year-old children (N = 175) using supplementary measures of prosocial behavior, sympathy, and moral motivation. The results are discussed in regard to the precursors of the moral self in childhood."}
{"_id": "201239ba2f7e046aae90a2ed2bcc0ec23374d797", "title": "Earliest Deadline Scheduling for Real-Time Database Systems", "text": "Earlier studies have observed that in moderately-loaded real-time database systems, using an Earliest Deadline policy to schedule tasks results in the fewest missed deadlines. When the real-time system is overloaded, however, an Earliest Deadline schedule performs worse than most other policies. This is due to Earliest Deadline giving the highest priority to transactions that are close to missing their deadlines. In this paper, we present a new priority assignment algorithm called Adaptive Earliest Deadline (AED), which features a feedback control mechanism that detects overload conditions and modifies transaction priority assignments accordingly. Using a detailed simulation model, we compare the performance of AED with respect to Earliest Deadline and other fixed priority schemes. We also present and evaluate an extension of the AED algorithm called Hierarchical Earliest Deadline (HED), which is designed to handle applications that assign different values to transactions and where the goal is to maximize the total value of the in-time transactions."}
{"_id": "49a970d478146a43a9b0224ea5d881511c23c110", "title": "Urban-Area and Building Detection Using SIFT Keypoints and Graph Theory", "text": "Very high resolution satellite images provide valuable information to researchers. Among these, urban-area boundaries and building locations play crucial roles. For a human expert, manually extracting this valuable information is tedious. One possible solution to extract this information is using automated techniques. Unfortunately, the solution is not straightforward if standard image processing and pattern recognition techniques are used. Therefore, to detect the urban area and buildings in satellite images, we propose the use of scale invariant feature transform (SIFT) and graph theoretical tools. SIFT keypoints are powerful in detecting objects under various imaging conditions. However, SIFT is not sufficient for detecting urban areas and buildings alone. Therefore, we formalize the problem in terms of graph theory. In forming the graph, we represent each keypoint as a vertex of the graph. The unary and binary relationships between these vertices (such as spatial distance and intensity values) lead to the edges of the graph. Based on this formalism, we extract the urban area using a novel multiple subgraph matching method. Then, we extract separate buildings in the urban area using a novel graph cut method. We form a diverse and representative test set using panchromatic 1-m-resolution Ikonos imagery. By extensive testings, we report very promising results on automatically detecting urban areas and buildings."}
{"_id": "7d1e9bd95b07a0faec777e715f46ce95906d63d7", "title": "Automated Detection of Arbitrarily Shaped Buildings in Complex Environments From Monocular VHR Optical Satellite Imagery", "text": "This paper introduces a new approach for the automated detection of buildings from monocular very high resolution (VHR) optical satellite images. First, we investigate the shadow evidence to focus on building regions. To do that, we propose a new fuzzy landscape generation approach to model the directional spatial relationship between buildings and their shadows. Once all landscapes are collected, a pruning process is developed to eliminate the landscapes that may occur due to non-building objects. The final building regions are detected by GrabCut partitioning approach. In this paper, the input requirements of the GrabCut partitioning are automatically extracted from the previously determined shadow and landscape regions, so that the approach gained an efficient fully automated behavior for the detection of buildings. Extensive experiments performed on 20 test sites selected from a set of QuickBird and Geoeye-1 VHR images showed that the proposed approach accurately detects buildings with arbitrary shapes and sizes in complex environments. The tests also revealed that even under challenging environmental and illumination conditions, reasonable building detection performances could be achieved by the proposed approach."}
{"_id": "596cf71c1695d4eb914482ded1ebcf8f1333e0db", "title": "Fast and extensible building modeling from airborne LiDAR data", "text": "This paper presents an automatic algorithm which reconstructs building models from airborne LiDAR (light detection and ranging) data of urban areas. While our algorithm inherits the typical building reconstruction pipeline, several major distinct features are developed to enhance efficiency and robustness: 1) we design a novel vegetation detection algorithm based on differential geometry properties and unbalanced SVM; 2) after roof patch segmentation, a fast boundary extraction method is introduced to produce topology-correct water tight boundaries; 3) instead of making assumptions on the angles between roof boundary lines, we propose a data-driven algorithm which automatically learns the principal directions of roof boundaries and uses them in footprint production. Furthermore, we show the extendability of our algorithm by supporting non-flat object patterns with the help of only a few user interactions. We demonstrate the efficiency and accuracy of our algorithm by showing experiment results on urban area data of several different data sets."}
{"_id": "8f86f0276a40081841de28535cbf4ba87c1127f2", "title": "Detection of small objects from high-resolution panchromatic satellite imagery based on supervised image segmentation", "text": "A new concept for the detection of small objects from modular optoelectronic multispectral scanner (MOMS-02) high spatial resolution panchromatic satellite imagery is presented. We combine supervised shape classification with unsupervised image segmentation in an iterative procedure which allows a target-oriented search for specific object shapes."}
{"_id": "85f25c2811b15ba26dd55204089d6360accd173b", "title": "Implicit user modeling for personalized search", "text": "Information retrieval systems (e.g., web search engines) are critical for overcoming information overload. A major deficiency of existing retrieval systems is that they generally lack user modeling and are not adaptive to individual users, resulting in inherently non-optimal retrieval performance. For example, a tourist and a programmer may use the same word \"java\" to search for different information, but the current search systems would return the same results. In this paper, we study how to infer a user's interest from the user's search context and use the inferred implicit user model for personalized search. We present a decision theoretic framework and develop techniques for implicit user modeling in information retrieval. We develop an intelligent client-side web search agent (UCAIR) that can perform eager implicit feedback, e.g., query expansion based on previous queries and immediate result reranking based on clickthrough information. Experiments on web search show that our search agent can improve search accuracy over the popular Google search engine."}
{"_id": "1763fa2c20d2a7038af51a97fddf37a1864c0fe2", "title": "Animal-inspired design and aerodynamic stabilization of a hexapedal millirobot", "text": "The VelociRoACH is a 10 cm long, 30 gram hexapedal millirobot capable of running at 2.7 m/s, making it the fastest legged robot built to date, relative to scale. We present the design by dynamic similarity technique and the locomotion adaptations which have allowed for this highly dynamic performance. In addition, we demonstrate that rotational dynamics become critical for stability as the scale of a robotic system is reduced. We present a new method of experimental dynamic tuning for legged millirobots, aimed at finding stable limit cycles with minimal rotational energy. By implementing an aerodynamic rotational damper, we further reduced the rotational energy in the system, and demonstrated that stable limit cycles with lower rotational energy are more robust to disturbances. This method increased the stability of the system without detracting from forward speed."}
{"_id": "44fb458c8bcf88c42b55bc601fea2f264211ad95", "title": "Dynamic tire friction models for vehicle traction control", "text": "In this paper we derive a dynamic friction force model for road/tire interaction for ground vehicles. The model is based on a similar dynamic friction model for contact developed previously for contact-point friction problems, called the LuGre model [4]. We show that the dynamic LuGre friction model is able to accurately capture velocity and road/surface dependence of the tire friction force."}
{"_id": "d018519d0f844228557b30f8e2b887fae4f3c2a1", "title": "Effects of Platform-Switching on Peri-implant Soft and Hard Tissue Outcomes: A Systematic Review and Meta-analysis.", "text": "PURPOSE\nThis systematic review and meta-analysis was aimed at evaluating the longitudinal effect of platform switching on implant survival rates as well as on soft and hard tissue outcomes.\n\n\nMATERIALS AND METHODS\nAn electronic search of the databases of the National Center for Biotechnology Information, PubMed, Ovid (MEDLINE), EMBASE, Web of Science, and Cochrane Collaboration Library was conducted in February 2015. Studies published in English with at least 10 human participants and a 12-month post-loading follow-up were included. Random effects meta-analyses of selected studies were applied to compare the primary and secondary outcomes of platform-switched (PS) and regular-platform (RP) implants, as well as the experimental designs and clinical outcomes.\n\n\nRESULTS\nA total of 26 studies involving 1,511 PS implants and 1,123 RP implants were evaluated. Compared to RP implants, PS implants showed a slight increase in vertical marginal bone loss (VMBL) and pocket depth reduction (weighted mean differences were -0.23 mm and -0.20 mm, respectively). The PS implants had a mean VMBL of 0.36 \u00b1 0.15 mm within the first year of service. The meta-regression suggested a trend of decreased bone resorption at sites with thick soft tissues at baseline.\n\n\nCONCLUSION\nThis study suggested that platform switching may have an indirect protective effect on implant hard tissue outcomes."}
{"_id": "cae0fa761fab754739f20044d73922aa614db661", "title": "Parallel Database Recovery for Multicore Main-Memory Databases", "text": "Main-memory database systems for multicore servers can ach ieve excellent performance when processing massive volume of OL TP workloads. But crash-resilience mechanism, or namely logg ingand-replay, can either degrade the performance of transact ion processing or slow down the process of failure recovery. In this paper, we show that, by exploiting application semantics, it i s possible to achieve speedy failure recovery without introduci ng any costly logging overhead to the execution of online transact ions. We propose TARS, a parallel log-recovery mechanism that is specifically designed for lightweight, coarse-grained commandlogging approach. TARS leverages a combination of static and dynamic analyses to parallelize the log replay: at compile time, T ARS decomposes stored procedures by carefully analyzing depende ncies within and across programs; at recovery time, a re-executio n schedule with a high degree of parallelism is attained through lig tweight one-pass scans over transaction parameter values. As such, recovery latency is remarkably minimized. We evaluated T ARS in a main-memory database prototype running on a 40-core serve r. Compared to the state-of-the-art mechanisms, T ARS yields significantly higher recovery rate without compromising the effici en y of transaction processing."}
{"_id": "3d1446375d8cb9af85e22200b1c9d27cd3890fb7", "title": "A convenient nucleic acid test on the basis of the capillary convective PCR for the on-site detection of enterovirus 71.", "text": "The recent and continuing epidemic of enterovirus 71 in China has affected millions of children and resulted in thousands of deaths. Timely diagnosis and management is essential for disease control. Current enterovirus 71 molecular tests require resources that are unavailable for on-site testing. We have developed a simple-to-operate nucleic acid test, the convenient and integrated nucleic acid test, for local medical institutions. It uses a convective PCR for rapid amplification, a dipstick for visual detection of PCR products, and a simple commercial kit for nucleic acid extraction. By using a specially designed reagent and reaction tube containing a dipstick, the amplification and detection processes are well integrated and simplified. Moreover, cross contamination that may be caused by an open-tube detection system can be avoided. On the basis of the convenient and integrated nucleic acid test, an enterovirus 71 assay for on-site testing was developed. After evaluating known hand, foot, and mouth disease virus stocks of 17 strains of 11 different serotypes, this assay showed a favorable detection spectrum and no cross-reactivity. Its clinical performance was established by testing 141 clinical samples and comparing the results with a nested RT-PCR method. The assay showed a clinical sensitivity and specificity of 98.5% and 100%, respectively. Our results suggest that this convenient and integrated nucleic acid test enterovirus 71 assay may serve as an on-site diagnosis tool."}
{"_id": "5f4924da9f8fac603c738cac64fb1785ef96c381", "title": "An 8.3 nW \u221272 dBm event driven IoE wake up receiver RF front end", "text": "This work presents an ultra-low power event driven wake-up receiver (WuRx) fabricated in a RF CMOS 130 nm process. The receiver consists of an off-chip lumped element matching network, an envelope detector, a decision circuit capable of detecting sub-mV baseband signal voltages and a clock source consuming 1.3 nW. This receiver has demonstrated a sensitivity of \u221272 dBm while consuming a total of 8.3 nW from 1 V and 0.65 V sources."}
{"_id": "354db02f22770bf4cb06fd8b45d58836795d7000", "title": "Standardization of marketing strategy 107 Standardization of international marketing strategy by firms from a developing country", "text": "A major debate in the international marketing literature deals with the globalization of markets and the extent to which a company\u2019s international marketing strategy can be standardized (Buzzell, 1968; Cavusgil et al., 1993; Douglas and Wind, 1987; Hill and Still, 1984; Jain, 1989; Levitt, 1983; Sorenson and Wiechmann, 1975). Significant progress has been made with respect to the extent to which international marketing strategy can be standardized across national borders (Cavusgil et al., 1993; Jain, 1989). However, almost all previous research has been conducted from a US-firm perspective. Even though a few studies dealt with this debate in the context of less developed economies, the issues studied were viewed from the perspective of US firms\u2019 operating activities in those economies (e.g. Grosse and Zinn, 1990; Hill and Still, 1984; Kacker, 1972). A major gap in the literature is whether our current knowledge can be generalized to foreign companies in other nations, especially the developing world. Firms from developing countries assume an increasingly important role in international competition, since many of the fastest growing economies in the world can be found in these nations. In addition, since developing countries are often culturally different from developed countries, they provide a suitable context to assess the generalizability of the existing knowledge in the standardization literature. Consequently, it is of interest to researchers and practitioners of international marketing to know the perspective of firms from developing countries regarding the standardization and adaptation of marketing strategy. Another void in the literature is that previous studies implicitly assume that standardization concepts are unidimensional at either the overall marketing programme level or the 4-P level (e.g. Akaah, 1991; Boddewyn et al., 1986;"}
{"_id": "64f6b71b331c5eafde453fc608c2af5151c0b84a", "title": "Numerically Grounded Language Models for Semantic Error Correction", "text": "Semantic error detection and correction is an important task for applications such as fact checking, speech-to-text or grammatical error correction. Current approaches generally focus on relatively shallow semantics and do not account for numeric quantities. Our approach uses language models grounded in numbers within the text. Such groundings are easily achieved for recurrent neural language model architectures, which can be further conditioned on incomplete background knowledge bases. Our evaluation on clinical reports shows that numerical grounding improves perplexity by 33% and F1 for semantic error correction by 5 points when compared to ungrounded approaches. Conditioning on a knowledge base yields further improvements."}
{"_id": "2051c7baeaae8eb86f74ab8f4cbdf6bc2fee059b", "title": "Sequential Self-Folding Structures by 3D Printed Digital Shape Memory Polymers", "text": "Folding is ubiquitous in nature with examples ranging from the formation of cellular components to winged insects. It finds technological applications including packaging of solar cells and space structures, deployable biomedical devices, and self-assembling robots and airbags. Here we demonstrate sequential self-folding structures realized by thermal activation of spatially-variable patterns that are 3D printed with digital shape memory polymers, which are digital materials with different shape memory behaviors. The time-dependent behavior of each polymer allows the temporal sequencing of activation when the structure is subjected to a uniform temperature. This is demonstrated via a series of 3D printed structures that respond rapidly to a thermal stimulus, and self-fold to specified shapes in controlled shape changing sequences. Measurements of the spatial and temporal nature of self-folding structures are in good agreement with the companion finite element simulations. A simplified reduced-order model is also developed to rapidly and accurately describe the self-folding physics. An important aspect of self-folding is the management of self-collisions, where different portions of the folding structure contact and then block further folding. A metric is developed to predict collisions and is used together with the reduced-order model to design self-folding structures that lock themselves into stable desired configurations."}
{"_id": "0617301c077e56c44933e2b790a270f3e590db12", "title": "A Hybrid Indexing Method for Approximate String Matching", "text": "We present a new indexing method for the approximate string matching problem. The method is based on a suffix array combined with a partitioning of the pattern. We analyze the resulting algorithm and show that the average retrieval time is , for some that depends on the error fraction tolerated and the alphabet size . It is shown that for approximately , where . The space required is four times the text size, which is quite moderate for this problem. We experimentally show that this index can outperform by far all the existing alternatives for indexed approximate searching. These are also the first experiments that compare the different existing schemes."}
{"_id": "6c94ec89603ffafcdba4600bc7506a2df35cc246", "title": "Extracting Business Process Models Using Natural Language Processing (NLP) Techniques", "text": "This Doctoral Consortium paper discusses how NLP can be applied in the domain of BPM in order to automatically generate business process models from existing documentation within the organization. The main idea is that from the syntactic and grammatical structure of a sentence, the components of a business process model can be derived (i.e. activities, resources, tasks, patterns). The result would be a business process model depicted using BPMN - a dedicated business process modeling technique"}
{"_id": "f004f7af674ac13dc73c16cb019e7e67c7fe83fc", "title": "Extraction and Adaptation of Fuzzy Rules for Friction Modeling and Control Compensation", "text": "Modeling of friction forces has been a challenging task in mechanical engineering. Parameterized approaches for modeling friction find it difficult to achieve satisfactory performance due to the presence of nonlinearity and uncertainties in dynamical systems. This paper aims to develop adaptive fuzzy friction models by the use of data-mining techniques and system theory. Our main technical contributions are twofold: extraction of fuzzy rules and formulation of a static fuzzy friction model and adaptation of the fuzzy friction model by the use of the Lyapunov stability theory, which is associated with a control compensation of a typical motion dynamics. The proposed framework in this paper shows a successful application of adaptive data-mining techniques in engineering. A single-degree-of-freedom mechanical system is employed as an experimental model in simulation studies. Results demonstrate that our proposed fuzzy friction model has promise in the design of uncertain mechanical control systems."}
{"_id": "7e0769e6857e28cadc507c47f2941a2538e69c9d", "title": "Offline Evaluation of Response Prediction in Online Advertising Auctions", "text": "Click-through rates and conversion rates are two core machine learning problems in online advertising. The evaluation of such systems is often based on traditional supervised learning metrics that ignore how the predictions are used. These predictions are in fact part of bidding systems in online advertising auctions. We present here an empirical evaluation of a metric that is specifically tailored for auctions in online advertising and show that it correlates better than standard metrics with A/B test results."}
{"_id": "2f9b49160ef60a11e32fae1a102384f77b4c7272", "title": "Automatically generating features for learning program analysis heuristics for C-like languages", "text": "We present a technique for automatically generating features for data-driven program analyses. Recently data-driven approaches for building a program analysis have been developed, which mine existing codebases and automatically learn heuristics for finding a cost-effective abstraction for a given analysis task. Such approaches reduce the burden of the analysis designers, but they do not remove it completely; they still leave the nontrivial task of designing so called features to the hands of the designers. Our technique aims at automating this feature design process. The idea is to use programs as features after reducing and abstracting them. Our technique goes through selected program-query pairs in codebases, and it reduces and abstracts the program in each pair to a few lines of code, while ensuring that the analysis behaves similarly for the original and the new programs with respect to the query. Each reduced program serves as a boolean feature for program-query pairs. This feature evaluates to true for a given program-query pair when (as a program) it is included in the program part of the pair. We have implemented our approach for three real-world static analyses. The experimental results show that these analyses with automatically-generated features are cost-effective and consistently perform well on a wide range of programs."}
{"_id": "8f16276617d5036bbebac824e15f585ad0fcf42f", "title": "Visualizing Linked Data with JavaScript", "text": "Despite the wealth of information contained in the Web of Linked Data, the current limitations and entry barriers of the Semantic Web technologies hinder the users from taking advantage of these information resources. Linked Data visualization can alleviate this problem. In this paper, we adopt a proper Linked Data visualization model, design Linked Data visualization algorithms, and develop a lightweight, easy-to-use prototype tool, LOD Viewer, using the platform independent JavaScript language. LOD Viewer can visualize different sources of RDF data including SPARQL endpoints for Linked Open Data (LOD) sources, and display the data in different graphic illustrations. Our case studies have verified the effectiveness and realizability of the proposed method. The time complexity analysis and experimental test show that the run-time of the proposed algorithms approximately exhibits a linear growth rate as the visualized RDF triples size increases."}
{"_id": "ae60b76436b86e03be63548b6f16b3705edb5ebd", "title": "TOPSIS method for multi-attribute group decision-making under single-valued neutrosophic environment", "text": "A single-valued neutrosophic set is a special case of neutrosophic set. It has been proposed as a generalization of crisp sets, fuzzy sets, and intuitionistic fuzzy sets in order to deal with incomplete information. In this paper, a new approach for multi-attribute group decision-making problems is proposed by extending the technique for order preference by similarity to ideal solution to single-valued neutrosophic environment. Ratings of alternative with respect to each attribute are considered as single-valued neutrosophic set that reflect the decision makers\u2019 opinion based on the provided information. Neutrosophic set characterized by three independent degrees namely truth-membership degree (T), indeterminacy-membership degree (I), and falsity-membership degree (F) is more capable to catch up incomplete information. Single-valued neutrosophic set-based weighted averaging operator is used to aggregate all the individual decision maker\u2019s opinion into one common opinion for rating the importance of criteria and alternatives. Finally, an illustrative example is provided in order to demonstrate its applicability and effectiveness of the proposed approach."}
{"_id": "887ee1e832c8d2d93a46de053ac579f9b08655b4", "title": "Semi-supervised feature learning for improving writer identification", "text": "Data augmentation is usually used by supervised learning approaches for offline writer identification, but such approaches require extra training data and potentially lead to overfitting errors. In this study, a semi-supervised feature learning pipeline was proposed to improve the performance of writer identification by training with extra unlabeled data and the original labeled data simultaneously. Specifically, we proposed a weighted label smoothing regularization (WLSR) method for data augmentation, which assigned the weighted uniform label distribution to the extra unlabeled data. The WLSR method could regularize the convolutional neural network (CNN) baseline to allow more discriminative features to be learned to represent the properties of different writing styles. The experimental results on well-known benchmark datasets (ICDAR2013 and CVL) showed that our proposed semisupervised feature learning approach could significantly improve the baseline measurement and perform competitively with existing writer identification approaches. Our findings provide new insights into offline write identification."}
{"_id": "40cfc568a2caa261f2896d09db636053c2d118eb", "title": "A Temporal Attentional Model for Rumor Stance Classification", "text": "Rumor stance classification is the task of determining the stance towards a rumor in text. This is the first step in effective rumor tracking on social media which is an increasingly important task. In this work, we analyze Twitter users' stance toward a rumorous tweet, in which users could support, deny, query, or comment upon the rumor. We propose a deep attentional CNN-LSTM approach, which takes the sequence of tweets in a thread of conversation as the input. We use neighboring tweets in the timeline as context vectors to capture the temporal dynamism in users' stance evolution. In addition, we use extra features such as friendship, to leverage useful relational features that are readily available in social media. Our model achieves the state-of-the-art results on rumor stance classification on a recent SemEval dataset, improving accuracy and F1 score by 3.6% and 4.2% respectively."}
{"_id": "730f99f6e26167f93876f401b02dc30b04c88862", "title": "Modeling WiFi Active Power/Energy Consumption in Smartphones", "text": "We conduct the first detailed measurement study of the properties of a class of WiFi active power/energy consumption models based on parameters readily available to smartphone app developers. We first consider a number of parameters used by previous models and show their limitations. We then focus on a recent approach modeling the active power consumption as a function of the application layer throughput. Using a large dataset and an 802.11n-equipped smartphone, we build four versions of a previously proposed linear power-throughput model, which allow us to explore the fundamental trade off between accuracy and simplicity. We study the properties of the model in relation to other parameters such as the packet size and/or the transport layer protocol, and we evaluate its accuracy under a variety of scenarios which have not been considered in previous studies. Our study shows that the model works well in a number of scenarios but its accuracy drops with high throughput values or when tested on different hardware. We further show that a non-linear model can greatly improve the accuracy in these two cases."}
{"_id": "1db7e6222809f0cdf0bbe2e87852295567edf838", "title": "Fingerstroke time estimates for touchscreen-based mobile gaming interaction.", "text": "The growing popularity of gaming applications and ever-faster mobile carrier networks have called attention to an intriguing issue that is closely related to command input performance. A challenging mirroring game service, which simultaneously provides game service to both PC and mobile phone users, allows them to play games against each other with very different control interfaces. Thus, for efficient mobile game design, it is essential to apply a new predictive model for measuring how potential touch input compares to the PC interfaces. The present study empirically tests the keystroke-level model (KLM) for predicting the time performance of basic interaction controls on the touch-sensitive smartphone interface (i.e., tapping, pointing, dragging, and flicking). A modified KLM, tentatively called the fingerstroke-level model (FLM), is proposed using time estimates on regression models."}
{"_id": "7be1268e8b98b9a706cb782d3e06fc6f69b94ed6", "title": "A fast transient LDO based on dual loop FVF with high PSRR", "text": "In this work, the design of a dual loop flipped voltage follower (FVF) based low-dropout regulator (LDO) to achieve fast transient response is proposed for all-digital phase locked loop (ADPLL) in the RFID application. With the help of an additional reference generation loop in FVF LDO and the cascaded structure, high power supply rejection ratio (PSRR) is achievable. This design is fabricated in GF 40nm CMOS technology. The FVF LDO core only occupies small area of 0.036 mm2. This area also includes 80pF on chip capacitor. Without large off-chip capacitor, this LDO is suitable for system-on-chip (SoC) requirement. Post layout simulation shows that fast response of 45ns and high PSRR of \u221242dB through up to 10GHz frequency range."}
{"_id": "9e7b9bb459c1fe4e04bcbf24a12c7fef5c2761b2", "title": "New compact OMT based on a septum solution", "text": "This paper presents a new compact orthomode transducer (OMT) able to extract or combine two orthogonal linear polarizations in a very compact manner with simple manufacturing process. The component looks similar to the well-known septum polarizer but contrary to this one, the new component is able to handle dual linear polarization."}
{"_id": "876ef047c98a640b4153cf6f2314751aa2334d2c", "title": "PIXHAWK: A system for autonomous flight using onboard computer vision", "text": "We provide a novel hardware and software system for micro air vehicles (MAV) that allows high-speed, low-latency onboard image processing. It uses up to four cameras in parallel on a miniature rotary wing platform. The MAV navigates based on onboard processed computer vision in GPS-denied in- and outdoor environments. It can process in parallel images and inertial measurement information from multiple cameras for multiple purposes (localization, pattern recognition, obstacle avoidance) by distributing the images on a central, low-latency image hub. Furthermore the system can utilize low-bandwith radio links for communication and is designed and optimized to scale to swarm use. Experimental results show successful flight with a range of onboard computer vision algorithms, including localization, obstacle avoidance and pattern recognition."}
{"_id": "34200d9fc5843237c2df7c364afe2c6a4e740a66", "title": "Algorithm of a Perspective Transform-Based PDF417 Barcode Recognition", "text": "When a PDF417 barcode are recognized, there are major recognition processes such as segmentation, normalization, and decoding. Among them, the segmentation and normalization steps are very important because they have a strong influence on the rate of barcode recognition. There are also previous segmentation and normalization techniques of processing barcode image, but some issues as follows. First, the previous normalization techniques need an additional restoration process and apply an interpolation process. Second, the previous recognition algorithms recognize a barcode image well only when it is placed in the predefined rectangular area. Therefore, we propose a novel segmentation and normalization method in PDF417 with the aims of improving its recognition rate and precision. The segmentation process to detect the barcode area in an image uses the conventional morphology and Hough transformmethods. The normalization process of the bar code region is based on the conventional perspective transformation and warping algorithms. In addition, we perform experiments using both experimental and actual data for evaluating our algorithms. Consequently, our experimental results can be summarized as follows. First, our method showed a stable performance over existing PDF417 barcode detection and recognition. Second, it overcame the limitation problem where the location of an input image should locate in a predefined rectangle area. Finally, it is expected that our result can be used as a restoration tool of printed images such as documents and pictures."}
{"_id": "9241ea3d8cb85633d314ecb74b31567b8e73f6af", "title": "Least squares quantization in PCM", "text": "It has long been realized that in pulse-code modulation (PCM), with a given ensemble of signals to handle, the quantum values should be spaced more closely in the voltage regions where the signal amplitude is more likely to fall. It has been shown by Panter and Dite that, in the limit as the number of quanta becomes infinite, the asymptotic fractional density of quanta per unit voltage should vary as the one-third power of the probability density per unit voltage of signal amplitudes. In this paper the corresponding result for any finite number of quanta is derived; that is, necessary conditions are found that the quanta and associated quantization intervals of an optimum finite quantization scheme must satisfy. The optimization criterion used is that the average quantization noise power be a minimum. It is shown that the result obtained here goes over into the Panter and Dite result as the number of quanta become large. The optimum quantization schemes for 26 quanta, b = 1,2, t ,7, are given numerically for Gaussian and for Laplacian distribution of signal amplitudes."}
{"_id": "075bc988728788aa033b04dee1753ded711180ee", "title": "Robust Face Recognition via Sparse Representation", "text": "We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by C1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses certain threshold, predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims."}
{"_id": "0d117c9fc3393237d71be5dce9bb6498b5e0c020", "title": "High-dimensional covariance estimation by minimizing $\\ell_1$-penalized log-determinant divergence", "text": "Given i.i.d. observations of a random vector X \u2208 R, we study the problem of estimating both its covariance matrix \u03a3, and its inverse covariance or concentration matrix \u0398 = (\u03a3). When X is multivariate Gaussian, the non-zero structure of \u0398 is specified by the graph of an associated Gaussian Markov random field; and a popular estimator for such sparse \u0398 is the l1-regularized Gaussian MLE. This estimator is sensible even for for non-Gaussian X, since it corresponds to minimizing an l1-penalized log-determinant Bregman divergence. We analyze its performance under high-dimensional scaling, in which the number of nodes in the graph p, the number of edges s, and the maximum node degree d, are allowed to grow as a function of the sample size n. In addition to the parameters (p, s, d), our analysis identifies other key quantities that control rates: (a) the l\u221e-operator norm of the true covariance matrix \u03a3; and (b) the l\u221e operator norm of the sub-matrix \u0393\u2217SS , where S indexes the graph edges, and \u0393\u2217 = (\u0398) \u2297 (\u0398); and (c) a mutual incoherence or irrepresentability measure on the matrix \u0393\u2217 and (d) the rate of decay 1/f(n, \u03b4) on the probabilities {|\u03a3\u0302ij \u2212 \u03a3\u2217ij | > \u03b4}, where \u03a3\u0302 is the sample covariance based on n samples. Our first result establishes consistency of our estimate \u0398\u0302 in the elementwise maximum-norm. This in turn allows us to derive convergence rates in Frobenius and spectral norms, with improvements upon existing results for graphs with maximum node degrees d = o( \u221a s). In our second result, we show that with probability converging to one, the estimate \u0398\u0302 correctly specifies the zero pattern of the concentration matrix \u0398\u2217. We illustrate our theoretical results via simulations for various graphs and problem parameters, showing good correspondences between the theoretical predictions and behavior in simulations. AMS 2000 subject classifications: Primary 62F12; secondary 62F30."}
{"_id": "216c8515f9f53533b2e87c8183e70d3b50c2c097", "title": "Malware detection using adaptive data compression", "text": "A popular approach in current commercial anti-malware software detects malicious programs by searching in the code of programs for scan strings that are byte sequences indicative of malicious code. The scan strings, also known as the signatures of existing malware, are extracted by malware analysts from known malware samples, and stored in a database often referred to as a virus dictionary. This process often involves a significant amount of human efforts. In addition, there are two major limitations in this technique. First, not all malicious programs have bit patterns that are evidence of their malicious nature. Therefore, some malware is not recorded in the virus dictionary and can not be detected through signature matching. Second, searching for specific bit patterns will not work on malware that can take many forms--obfuscated malware. Signature matching has been shown to be incapable of identifying new malware patterns and fails to recognize obfuscated malware. This paper presents a malware detection technique that discovers malware by means of a learning engine trained on a set of malware instances and a set of benign code instances. The learning engine uses an adaptive data compression model--prediction by partial matching (PPM)--to build two compression models, one from the malware instances and the other from the benign code instances. A code instance is classified, either as \"malware\" or \"benign\", by minimizing its estimated cross entropy. Our preliminary results are very promising. We achieved about 0.94 true positive rate with as low as 0.016 false positive rate. Our experiments also demonstrate that this technique can effectively detect unknown and obfuscated malware."}
{"_id": "43f132cbc24c433c16b5aabc4393e398d1775d81", "title": "An updated survey of GA-based multiobjective optimization techniques", "text": "After using evolutionary techniques for single-objective optimization during more than two decades, the incorporation of more than one objective in the fitness function has finally become a popular area of research. As a consequence, many new evolutionary-based approaches and variations of existing techniques have recently been published in the technical literature. The purpose of this paper is to summarize and organize the information on these current approaches, emphasizing the importance of analyzing the operations research techniques in which most of them are based, in an attempt to motivate researchers to look into these mathematical programming approaches for new ways of exploiting the search capabilities of evolutionary algorithms. Furthermore, a summary of the main algorithms behind these approaches is provided, together with a brief criticism that includes their advantages and disadvantages, degree of applicability, and some known applications. Finally, further trends in this area and some possible paths for further research are also addressed."}
{"_id": "2eeba9c58e544cae93eb6ee759070dcd79dee780", "title": "Understanding PubMed\u00ae user search behavior through log analysis", "text": "This article reports on a detailed investigation of PubMed users' needs and behavior as a step toward improving biomedical information retrieval. PubMed is providing free service to researchers with access to more than 19 million citations for biomedical articles from MEDLINE and life science journals. It is accessed by millions of users each day. Efficient search tools are crucial for biomedical researchers to keep abreast of the biomedical literature relating to their own research. This study provides insight into PubMed users' needs and their behavior. This investigation was conducted through the analysis of one month of log data, consisting of more than 23 million user sessions and more than 58 million user queries. Multiple aspects of users' interactions with PubMed are characterized in detail with evidence from these logs. Despite having many features in common with general Web searches, biomedical information searches have unique characteristics that are made evident in this study. PubMed users are more persistent in seeking information and they reformulate queries often. The three most frequent types of search are search by author name, search by gene/protein, and search by disease. Use of abbreviation in queries is very frequent. Factors such as result set size influence users' decisions. Analysis of characteristics such as these plays a critical role in identifying users' information needs and their search habits. In turn, such an analysis also provides useful insight for improving biomedical information retrieval.Database URL:http://www.ncbi.nlm.nih.gov/PubMed."}
{"_id": "1ddb27fe320d1a63ef694ef1457e704e6622a25d", "title": "Pattern classification of foot strike type using body worn accelerometers", "text": "The automatic classification of foot strike patterns into the three basic categories forefoot, midfoot and rearfoot striking plays an important role for applications like shoe fitting with instant feedback. This paper presents methods for this classification based on body worn accelerometers that allow giving the required direct feedback to the user. For our study, we collected data from 40 runners who had a standard accelerometer in a custom-built sensor pod attached to the laces of their running shoes. The acceleration in three axes was recorded continuously while the runners conducted their runs. Data for repeated runs at two different speed levels were collected in order to have sufficient sensor data for classification. The data was analyzed using features computed for individual steps of the runners to distinguish the three foot strike pattern classes. The labels for the strike pattern classes were established using high-speed video that was concurrently collected. We could show that the classification of the strike types based on the measured accelerations and the extracted features was up to 95.3% accurate. The established classification system can be used to support runners, for example by giving running shoe recommendations that ideally match the prevailing strike type of the runner."}
{"_id": "42a801fde0b4eda2260d4fa8f2f76d7eb2de09cc", "title": "SUPPRESSION OF GATE INDUCED DRAIN LEAKAGE CURRENT ( GIDL ) BY GATE WORKFUNCTION ENGINEERING : ANALYSIS AND MODEL", "text": "Leakage current reduction is of primary importance as the technology scaling trends continue towards deep sub-micrometer regime. One of the leakage mechanisms which contribute significantly to power dissipation is the Gate Induced Drain Leakage (GIDL). GIDL sets an upper limit on the VLSI MOSFET scaling and may even lead to device breakdown. Thus, in order to improve performance, static power consumption becomes a major concern in such miniaturized devices. With the CMOS technology in the nanometer regime, metal gates emerge as a powerful booster over the poly-Si gates, keeping the leakage mechanisms in control. This work presents a systematic study of the Gate Induced Drain Leakage current reduction by changing the gate workfunction. In this work, an attempt has been made to model the metal gates in the field equations in the gate-drain overlap region. The modeling has been accomplished by taking the physical property of the metals into account i.e. the workfunction. The analytical model has been used to study the impact of gate workfunction engineering on GIDL. The model includes the calculation of band bending, the resultant electric field in the gate-drain overlap region and the GIDL current. It has been observed that the electric field and the GIDL current decreases as the gate workfunction is increased continuously from 4.0~5.2 eV. Further, the results are more interesting at low applied voltages where the decrease in the current is of about 6 orders of magnitude as compared to that at higher voltages."}
{"_id": "6d3baf97c01083ccf38dc75afd6930688b6178c5", "title": "On the Interpolation of Data with Normally Distributed Uncertainty for Visualization", "text": "In many fields of science or engineering, we are confronted with uncertain data. For that reason, the visualization of uncertainty received a lot of attention, especially in recent years. In the majority of cases, Gaussian distributions are used to describe uncertain behavior, because they are able to model many phenomena encountered in science. Therefore, in most applications uncertain data is (or is assumed to be) Gaussian distributed. If such uncertain data is given on fixed positions, the question of interpolation arises for many visualization approaches. In this paper, we analyze the effects of the usual linear interpolation schemes for visualization of Gaussian distributed data. In addition, we demonstrate that methods known in geostatistics and machine learning have favorable properties for visualization purposes in this case."}
{"_id": "682640754c867ecc6ae2ccaa5dc68403ee7d2e63", "title": "Two-factor authentication for the Bitcoin protocol", "text": "We show how to realize two-factor authentication for a Bitcoin wallet. To do so, we explain how to employ an ECDSA adaption of the two-party signature protocol by MacKenzie and Reiter (Int J Inf Secur 2(3\u20134):218\u2013239, 2004. doi: 10.1007/s10207-004-0041-0 ) in the context of Bitcoin and present a prototypic implementation of a Bitcoin wallet that offers both: two-factor authentication and verification over a separate channel. Since we use a smart phone as the second authentication factor, our solution can be used with hardware already available to most users and the user experience is quite similar to the existing online banking authentication methods."}
{"_id": "b557889e5063e8a6dc69ca7d9d71643a255066f2", "title": "Jointly Detecting and Separating Singing Voice: A Multi-Task Approach", "text": "A main challenge in applying deep learning to music processing is the availability of training data. One potential solution is Multi-task Learning, in which the model also learns to solve related auxiliary tasks on additional datasets to exploit their correlation. While intuitive in principle, it can be challenging to identify related tasks and construct the model to optimally share information between tasks. In this paper, we explore vocal activity detection as an additional task to stabilise and improve the performance of vocal separation. Further, we identify problematic biases specific to each dataset that could limit the generalisation capability of separation and detection models, to which our proposed approach is robust. Experiments show improved performance in separation as well as vocal detection compared to single-task baselines. However, we find that the commonly used Signal-to-Distortion Ratio (SDR) metrics did not capture the improvement on non-vocal sections, indicating the need for improved evaluation methodologies."}
{"_id": "f9e65c08a616bd7154ceeb71b53ecc5297d66465", "title": "Authenticity and Subjective Wellbeing within the Context of a Religious Organization", "text": "Although authenticity has a long history as a philosophical and psychological idea, this concept has received scarce attention in the business literature until very lately. Nevertheless, scholars belonging to a broad array of disciplines have pointed out the escalation in the individuals' search for authenticity within developed societies. Hence, the purpose of this paper is to assess the link between authenticity and subjective wellbeing within the rarely explored context of faith-driven organizations, where the management of emotions attains a particular significance. Specifically, this study links authenticity with subjective wellbeing among the distinct groups that shape a large international Catholic organization. This study uses Partial Least Squares (PLS) to test our research model and hypotheses. This paper covers two noteworthy research gaps. On the one hand, it provides evidence of the relationship between authenticity and subjective wellbeing within the context of religious organizations. On the other hand, our results suggest that this relationship is not homogeneous among the distinct groups that shape the organization. Implications of the research are finally discussed."}
{"_id": "8e5ff3026f96e41fdf2d8500163309e7d1fd182e", "title": "Data-Driven Feature Characterization Techniques for Laser Printer Attribution", "text": "Laser printer attribution is an increasing problem with several applications, such as pointing out the ownership of crime proofs and authentication of printed documents. However, as commonly proposed methods for this task are based on custom-tailored features, they are limited by modeling assumptions about printing artifacts. In this paper, we explore solutions able to learn discriminant-printing patterns directly from the available data during an investigation, without any further feature engineering, proposing the first approach based on deep learning to laser printer attribution. This allows us to avoid any prior assumption about printing artifacts that characterize each printer, thus highlighting almost invisible and difficult printer footprints generated during the printing process. The proposed approach merges, in a synergistic fashion, convolutional neural networks (CNNs) applied on multiple representations of multiple data. Multiple representations, generated through different pre-processing operations, enable the use of the small and lightweight CNNs whilst the use of multiple data enable the use of aggregation procedures to better determine the provenance of a document. Experimental results show that the proposed method is robust to noisy data and outperforms existing counterparts in the literature for this problem."}
{"_id": "ecf4e2e84980da63ced4d5ec475da837be58bcf4", "title": "A Low-delay Protocol for Multihop Wireless Body Area Networks", "text": "Wireless body area networks (WBANs) form a new and interesting area in the world of remote health monitoring. An important concern in such networks is the communication between the sensors. This communication needs to be energy efficient and highly reliable while keeping delays low. Mobility also has to be supported as the nodes are positioned on different parts of the body that move with regard to each other. In this paper, we present a new cross-layer communication protocol for WBANs: CICADA or Cascading Information retrieval by Controlling Access with Distributed slot Assignment. The protocol sets up a network tree in a distributed manner. This tree structure is subsequently used to guarantee collision free access to the medium and to route data towards the sink. The paper analyzes CICADA and shows simulation results. The protocol offers low delay and good resilience to mobility. The energy usage is low as the nodes can sleep in slots where they are not transmitting or receiving."}
{"_id": "33c9a1219d73ad54c63e02303222c5c8988a2e74", "title": "The Role of Smart Meters in Enabling Real-Time Energy Services for Households : The Italian Case", "text": "The Smart Meter (SM) is an essential tool for successful balancing the demand-offer energy curve. It allows the linking of the consumption and production measurements with the time information and the customer\u2019s identity, enabling the substitution of flat-price billing with smarter solutions, such as Time-of-Use or Real-Time Pricing. In addition to sending data to the energy operators for billing and monitoring purposes, Smart Meters must be able to send the same data to customer devices in near-real-time conditions, enabling new services such as instant energy awareness and home automation. In this article, we review the ongoing situation in Europe regarding real-time services for the final customers. Then, we review the architectural and technological options that have been considered for the roll-out phase of the Italian second generation of Smart Meters. Finally, we identify a collection of use cases, along with their functional and performance requirements, and discuss what architectures and communications technologies can meet these requirements."}
{"_id": "2b6d3f6804c1a27a652b2cba9ed4a2fa7cbf9979", "title": "Ichthyosis congenita, harlequin fetus type: a case report.", "text": "Ichthyosis is a very heterogeneous family of skin disorders with harlequin ichthyosis being the most severe genetic form. It is a rare autosomal recessive condition, characterized by dry, severely thickened skin with large plates of hyperkeratotic scale, separated by deep fissures. Infants are very susceptible to metabolic abnormalities and infections. They usually do not survive for very long, but several long term survivals have been noted. The vast majority of affected individuals are homozygous for mutations in the ABCA12 gene, which cause a deficiency of the epidermal lipid transporter, resulting in hyperkeratosis and abnormal barrier function. We report a case of a newborn with harlequin ichthyosis, born to unrelated parents, who had a favorable evolution with topical treatment and intensive care."}
{"_id": "d3b17652e76d947f1aa583e3c99b158a792d514c", "title": "Classification accuracy is not enough", "text": "We argue that an evaluation of system behavior at the level of the music is required to usefully address the fundamental problems of music genre recognition (MGR), and indeed other tasks of music information retrieval, such as autotagging. A recent review of works in MGR since 1995 shows that most (82\u00a0%) measure the capacity of a system to recognize genre by its classification accuracy. After reviewing evaluation in MGR, we show that neither classification accuracy, nor recall and precision, nor confusion tables, necessarily reflect the capacity of a system to recognize genre in musical signals. Hence, such figures of merit cannot be used to reliably rank, promote or discount the genre recognition performance of MGR systems if genre recognition (rather than identification by irrelevant confounding factors) is the objective. This motivates the development of a richer experimental toolbox for evaluating any system designed to intelligently extract information from music signals."}
{"_id": "db56716afdb53829e6efb0e221e93496c8aa0a64", "title": "A game-theoretic method of fair resource allocation for\u00a0cloud computing services", "text": "As cloud-based services become more numerous and dynamic, resource provisioning becomes more and more challenging. A QoS constrained resource allocation problem is considered in this paper, in which service demanders intend to solve sophisticated parallel computing problem by requesting the usage of resources across a cloud-based network, and a cost of each computational service depends on the amount of computation. Game theory is used to solve the problem of resource allocation. A practical approximated solution with the following two steps is proposed. First, each participant solves its optimal problem independently, without consideration of the multiplexing of resource assignments. A Binary Integer Programming method is proposed to solve the independent optimization. Second, an evolutionary mechanism is designed, which changes multiplexed strategies of the initial optimal solutions of different participants with minimizing their efficiency losses. The algorithms in the evolutionary mechanism take both optimization and fairness into account. It is demonstrated that Nash equilibrium always exists if the resource allocation game has feasible solutions."}
{"_id": "f791b71a97e5bf181a42e3275560801103bb377f", "title": "Time-frequency Analysis 3.1 Time-frequency Analysis", "text": "Let f (t) be a mathematical idealization of some physical signal depending on time t. Perhaps f can be considered as a superposition of oscillating components but these oscillations have to be limited to some finite extension in time. This is a fundamental problem of the Fourier inversion formula which states that, for a well-behaved signal f one has f (t) = \u221e \u2212\u221e\u02c6f (\u03be)e 2\u03c0it\u03be d\u03be expressing a complex signal f as a superposition of exponentials e 2\u03c0it\u03be. If f vanishes outside some finite set then the exponentials, which extend over all time, must cancel each other in some fantastic way that makes it virtually impossible to quantify in any intuitive way which frequencies play a dominant role at any particular time t. 3.1.2 Frequency Local in time Consider a physical signal to be a square integrable real-valued function of time, x(t). One can define a complex extension z(t) = x(t) + iy(t) bey letting y(t) be the inverse Fourier transform of \u2212isgn (t)\u02c6 x(\u03be) where sgn denotes the signum function \u03be/|\u03be|. In this cas\u00ea z = \u02c6 x + i\u02c6y has only positive frequencies. Exercise 3.1.1. Explain why z(t) has an extension to a complex argument t + is, s > 0 that is analytic in the upper half plane {t + is : s > 0}. You may assume that x(t) is a continuous, bounded, absolutely integrable function. The analytic signal z has the polar form r(t)e i\u03b8(t) where r = x 2 + y 2 and \u03b8 = arctan y/x. The instantaneous frequency can be defined as d\u03b8/dt. This point of view however is a little too simple because x(t) can be a superposition of multiple oscillating components and the instantaneous frequency cannot resolve multiple oscillating contributions. We will return to this point later in this chapter. First we want to consider some fundamental issues governing the impossibility of joint time-frequency localization and, in view of these limitations, mathematical tools that aim to characterize compositions of signals in terms of time localized oscillations in view of such limitations. One typically refers to such tools as time-frequency representations. As with the case of Fourier transforms we will encounter both continuous and discrete parameter time-frequency representations. 3.1.3 The Heisenberg-Wiener inequality Variance inequalities Theorem 3.1.2. (Heisenberg uncertainty principle) If f \u2208 L 2 (R) with f 2 = 1 then xf (x) 2 \u03be \u02c6 f (\u03be) 2 \u2265 \u2026"}
{"_id": "3e4c6d704b6f6227b7b1dc2bb915c45ba5b19989", "title": "Effect of subjective perspective taking during simulation of action: a PET investigation of agency", "text": "Perspective taking is an essential component in the mechanisms that account for intersubjectivity and agency. Mental simulation of action can be used as a natural protocol to explore the cognitive and neural processing involved in agency. Here we took PET measurements while subjects simulated actions with either a first-person or a third-person perspective. Both conditions were associated with common activation in the SMA, the precentral gyrus, the precuneus and the MT/V5 complex. When compared to the first-person perspective, the third-person perspective recruited right inferior parietal, precuneus, posterior cingulate and frontopolar cortex. The opposite contrast revealed activation in left inferior parietal and somatosensory cortex. We suggest that the right inferior parietal, precuneus and somatosensory cortex are specifically involved in distinguishing self-produced actions from those generated by others."}
{"_id": "760629ebec8f83b4ba195d5dc0ded66e69b88475", "title": "Agile Software Development Methods: Review and Analysis", "text": "Agile \u2013 denoting \u201cthe quality of being agile; readiness for motion; nimbleness, activity, dexterity in motion\u201d \u2013 software development methods are attempting to offer an answer to the eager business community asking for lighter weight along with faster and nimbler software development processes. This is especially the case with the rapidly growing and volatile Internet software industry as well as for the emerging mobile application environment. The new agile methods have evoked a substantial amount of literature and debates. However, academic research on the subject is still scarce, as most of existing publications are written by practitioners or consultants. The aim of this publication is to begin filling this gap by systematically reviewing the existing literature on agile software development methodologies. This publication has three purposes. First, it proposes a definition and a classification of agile software development approaches. Second, it analyses ten software development methods that can be characterized as being \u201dagile\u201d against the defined criteria. Third, it compares these methods and highlights their similarities and differences. Based on this analysis, future research needs are identified and discussed."}
{"_id": "88fc633104156f2952cd4391232b45d31ed56c52", "title": "A Framework for Clustering Uncertain Data Streams", "text": "In recent years, uncertain data management applications have grown in importance because of the large number of hardware applications which measure data approximately. For example, sensors are typically expected to have considerable noise in their readings because of inaccuracies in data retrieval, transmission, and power failures. In many cases, the estimated error of the underlying data stream is available. This information is very useful for the mining process, since it can be used in order to improve the quality of the underlying results. In this paper we will propose a method for clustering uncertain data streams. We use a very general model of the uncertainty in which we assume that only a few statistical measures of the uncertainty are available. We will show that the use of even modest uncertainty information during the mining process is sufficient to greatly improve the quality of the underlying results. We show that our approach is more effective than a purely deterministic method such as the CluStream approach. We will test the approach on a variety of real and synthetic data sets and illustrate the advantages of the method in terms of effectiveness and efficiency."}
{"_id": "e427c8d3c1b616d319c8b5f233e725d4ebfd9768", "title": "Weakly Supervised Affordance Detection", "text": "Localizing functional regions of objects or affordances is an important aspect of scene understanding and relevant for many robotics applications. In this work, we introduce a pixel-wise annotated affordance dataset of 3090 images containing 9916 object instances. Since parts of an object can have multiple affordances, we address this by a convolutional neural network for multilabel affordance segmentation. We also propose an approach to train the network from very few keypoint annotations. Our approach achieves a higher affordance detection accuracy than other weakly supervised methods that also rely on keypoint annotations or image annotations as weak supervision."}
{"_id": "9e9f8a51659380a465906f2d593c1db79ba8dedb", "title": "A Framework for Evaluating Mobile App Repackaging Detection Algorithms", "text": "Because it is not hard to reverse engineer the Dalvik bytecode used in the Dalvik virtual machine, Android application repackaging has become a serious problem. With repackaging, a plagiarist can simply steal others\u2019 code violating the intellectual property of the developers. More seriously, after repackaging, popular apps can become the carriers of malware, adware or spy-ware for wide spreading. To maintain a healthy app market, several detection algorithms have been proposed recently, which can catch some types of repackaged apps in various markets efficiently. However, they are generally lack of valid analysis on their effectiveness. After analyzing these approaches, we find simple obfuscation techniques can potentially cause false negatives, because they change the main characteristics or features of the apps that are used for similarity detections. In practice, more sophisticated obfuscation techniques can be adopted (or have already been performed) in the context of mobile apps. We envision this obfuscation based repackaging will become a phenomenon due to the arms race between repackaging and its detection. To this end, we propose a framework to evaluate the obfuscation resilience of repackaging detection algorithms comprehensively. Our evaluation framework is able to perform a set of obfuscation algorithms in various forms on the Dalvik bytecode. Our results provide insights to help gauge both broadness and depth of algorithms\u2019 obfuscation resilience. We applied our framework to conduct a comprehensive case study on AndroGuard, an Android repackaging detector proposed in Black-hat 2011. Our experimental results have demonstrated the effectiveness and stability of our framework."}
{"_id": "aac1e9f591217c2dc4fbcbd2f048b54d7b7024cd", "title": "Mining Personal Data Using Smartphones and Wearable Devices: A Survey", "text": "The staggering growth in smartphone and wearable device use has led to a massive scale generation of personal (user-specific) data. To explore, analyze, and extract useful information and knowledge from the deluge of personal data, one has to leverage these devices as the data-mining platforms in ubiquitous, pervasive, and big data environments. This study presents the personal ecosystem where all computational resources, communication facilities, storage and knowledge management systems are available in user proximity. An extensive review on recent literature has been conducted and a detailed taxonomy is presented. The performance evaluation metrics and their empirical evidences are sorted out in this paper. Finally, we have highlighted some future research directions and potentially emerging application areas for personal data mining using smartphones and wearable devices."}
{"_id": "c334271678a186732edab0f50d9d2040f10db87b", "title": "Comparison of linear, nonlinear, and feature selection methods for EEG signal classification", "text": "The reliable operation of brain-computer interfaces (BCIs) based on spontaneous electroencephalogram (EEG) signals requires accurate classification of multichannel EEG. The design of EEG representations and classifiers for BCI are open research questions whose difficulty stems from the need to extract complex spatial and temporal patterns from noisy multidimensional time series obtained from EEG measurements. The high-dimensional and noisy nature of EEG may limit the advantage of nonlinear classification methods over linear ones. This paper reports the results of a linear (linear discriminant analysis) and two nonlinear classifiers (neural networks and support vector machines) applied to the classification of spontaneous EEG during five mental tasks, showing that nonlinear classifiers produce only slightly better classification results. An approach to feature selection based on genetic algorithms is also presented with preliminary results of application to EEG during finger movement."}
{"_id": "25c937ed643e7ecf79f2d3c5376a54224fc16d0c", "title": "Masked presentations of emotional facial expressions modulate amygdala activity without explicit knowledge.", "text": "Functional magnetic resonance imaging (fMRI) of the human brain was used to study whether the amygdala is activated in response to emotional stimuli, even in the absence of explicit knowledge that such stimuli were presented. Pictures of human faces bearing fearful or happy expressions were presented to 10 normal, healthy subjects by using a backward masking procedure that resulted in 8 of 10 subjects reporting that they had not seen these facial expressions. The backward masking procedure consisted of 33 msec presentations of fearful or happy facial expressions, their offset coincident with the onset of 167 msec presentations of neutral facial expressions. Although subjects reported seeing only neutral faces, blood oxygen level-dependent (BOLD) fMRI signal in the amygdala was significantly higher during viewing of masked fearful faces than during the viewing of masked happy faces. This difference was composed of significant signal increases in the amygdala to masked fearful faces as well as significant signal decreases to masked happy faces, consistent with the notion that the level of amygdala activation is affected differentially by the emotional valence of external stimuli. In addition, these facial expressions activated the sublenticular substantia innominata (SI), where signal increases were observed to both fearful and happy faces--suggesting a spatial dissociation of territories that respond to emotional valence versus salience or arousal value. This study, using fMRI in conjunction with masked stimulus presentations, represents an initial step toward determining the role of the amygdala in nonconscious processing."}
{"_id": "3465522cb27dfa8c7fed67e6d0a1e42a10e3a401", "title": "A cognitive developmental approach to morality: investigating the psychopath", "text": "Various social animal species have been noted to inhibit aggressive attacks when a conspecific displays submission cues. Blair (1993) has suggested that humans possess a functionally similar mechanism which mediates the suppression of aggression in the context of distress cues. He has suggested that this mechanism is a prerequisite for the development of the moral/conventional distinction; the consistently observed distinction in subject's judgments between moral and conventional transgressions. Psychopaths may lack this violence inhibitor. A causal model is developed showing how the lack of this mechanism would explain the core behavioural symptoms associated with the psychopathic disorder. A prediction of such a causal model would be that psychopaths should fail to make the moral/conventional distinction. This prediction was confirmed. The implication of this finding for other theories of morality is discussed."}
{"_id": "46c99f0a5935572dd8b19936afef8faa8de25fbf", "title": "Fears, phobias, and preparedness: toward an evolved module of fear and fear learning.", "text": "An evolved module for fear elicitation and fear learning with 4 characteristics is proposed. (a) The fear module is preferentially activated in aversive contexts by stimuli that are fear relevant in an evolutionary perspective. (b) Its activation to such stimuli is automatic. (c) It is relatively impenetrable to cognitive control. (d) It originates in a dedicated neural circuitry, centered on the amygdala. Evidence supporting these propositions is reviewed from conditioning studies, both in humans and in monkeys; illusory correlation studies; studies using unreportable stimuli; and studies from animal neuroscience. The fear module is assumed to mediate an emotional level of fear learning that is relatively independent and dissociable from cognitive learning of stimulus relationships."}
{"_id": "daae218c48329de039b767d62b7e7be8c65ae43b", "title": "Approach and avoidance strength during goal attainment: regulatory focus and the \"goal looms larger\" effect.", "text": "Three studies tested the hypothesis that regulatory focus, both as a chronic person variable and as a manipulated situational variable, differentially affects the strength of participants' approach and avoidance strategic motivations as they get closer to the goal. In 2 studies, flexion and extension arm pressure were used as on-line measures of approach and avoidance intensity, respectively. As predicted, the approach gradient was steeper for participants with a promotion focus on aspirations and gains than for participants with a prevention focus on responsibilities and nonlosses, whereas the reverse was true for the avoidance gradient. In a third study, the same pattern of results was found on a persistence measure of motivational strength. Participants with a promotion focus worked longer on anagrams closer to the goal when they were approach means for goal attainment than when they were avoidance means, whereas the reverse was true for participants with a prevention focus."}
{"_id": "59161779442cbef9beed4faff8115de7ce74a7d5", "title": "Emotion in the criminal psychopath: startle reflex modulation.", "text": "Startle-elicited blinks were measured during presentation of affective slides to test hypotheses concerning emotional responding in psychopaths. Subjects were 54 incarcerated sexual offenders divided into nonpsychopathic, psychopathic, and mixed groups based on file and interview data. Consistent with findings for normal college students, nonpsychopaths and mixed subjects showed a significant linear relationship between slide valence and startle magnitude, with startle responses largest during unpleasant slides and smallest during pleasant slides. This effect was absent in psychopaths. Group differences in startle modulation were related to affective features of psychopathy, but not to antisocial behavior per se. Psychopathy had no effect on autonomic or self-report responses to slides. These results suggest an abnormality in the processing of emotional stimuli by psychopaths that manifests itself independently of affective report."}
{"_id": "731359cd04c7625549e78d9fbd9ad362642be9c8", "title": "Cynefin : Uncertainty , Small Worlds and Scenarios", "text": "Uncertainty, its modelling and analysis have been discussed across many literatures including statistics and operational research, knowledge management and philosophy. Adherents to Bayesian approaches have usually argued that uncertainty should either be modelled by probabilities or resolved by discussion which clarifies meaning. Others have followed Knight in distinguishing between contexts of risk and of uncertainty: the former admitting modelling and analysis through probability; the latter not. There are also host of approaches in the literatures stemming from Zadeh\u2019s concept of a fuzzy set. Theories of sense-making in the philosophy and management literatures see knowledge and uncertainty as opposite extremes of human understanding and discuss the resolution of uncertainty accordingly. Here we take a Bayesian stance, but a softer one than the conventional, which recognises the concerns in other approaches and, in particular, sets their concerns in the Cynefin framework of decision contexts to reflect on processes of modelling and analysis in statistical, risk and decision analysis. The approach builds on several recent strands of discussion which argue for a convergence of qualitative scenario planning ideas and more quantitative approaches to analysis I discuss how these suggestions and discussions relate to some earlier thinking on the methodology of modelling and, in particular, the concept of a \u2018small world\u2019 articulated by Savage."}
{"_id": "a992dd8c25c9d0630c309c233d1ab7a3be739dff", "title": "Social Media as It Interfaces with Psychosocial Development and Mental Illness in Transitional Age Youth.", "text": "For transitional age individuals, social media (SM) is an integral component of connecting with others. There are 2 billion SM users worldwide. SM users may experience an increase in perceived social support and life satisfaction. Use of SM may facilitate forming connections among people with potentially stigmatizing mental disorders. However, epidemiologic studies suggest that increased SM use is associated with conditions such as depression, anxiety, and sleep disturbance. Future research should examine directionality of these associations and the role of contextual factors. It also will be useful to leverage SM to provide mental health care and surveillance of mental health concerns."}
{"_id": "634acdd90380e5a96aa1e76d3cddac74e799f772", "title": "A global perspective on the use, sales, exposure pathways, occurrence, fate and effects of veterinary antibiotics (VAs) in the environment.", "text": "Veterinary antibiotics (VAs) are widely used in many countries worldwide to treat disease and protect the health of animals. They are also incorporated into animal feed to improve growth rate and feed efficiency. As antibiotics are poorly adsorbed in the gut of the animals, the majority is excreted unchanged in faeces and urine. Given that land application of animal waste as a supplement to fertilizer is often a common practice in many countries, there is a growing international concern about the potential impact of antibiotic residues on the environment. Frequent use of antibiotics has also raised concerns about increased antibiotic resistance of microorganisms. We have attempted in this paper to summarize the latest information available in the literature on the use, sales, exposure pathways, environmental occurrence, fate and effects of veterinary antibiotics in animal agriculture. The review has focused on four important groups of antibiotics (tylosin, tetracycline, sulfonamides and, to a lesser extent, bacitracin) giving a background on their chemical nature, fate processes, occurrence, and effects on plants, soil organisms and bacterial community. Recognising the importance and the growing debate, the issue of antibiotic resistance due to the frequent use of antibiotics in food-producing animals is also briefly covered. The final section highlights some unresolved questions and presents a way forward on issues requiring urgent attention."}
{"_id": "ce9e93582c2e83ae3008e498a79ec57d679ad87a", "title": "Locally Interpretable Models and Effects based on Supervised Partitioning (LIME-SUP)", "text": "Supervised Machine Learning (SML) algorithms such as Gradient Boosting, Random Forest, and Neural Networks have become popular in recent years due to their increased predictive performance over traditional statistical methods. This is especially true with large data sets (millions or more observations and hundreds to thousands of predictors). However, the complexity of the SML models makes them opaque and hard to interpret without additional tools. There has been a lot of interest recently in developing global and local diagnostics for interpreting and explaining SML models. In this paper, we propose locally interpretable models and effects based on supervised partitioning (trees) referred to as LIME-SUP. This is in contrast with the KLIME approach that is based on clustering the predictor space. We describe LIME-SUP based on fitting trees to the fitted response (LIM-SUP-R) as well as the derivatives of the fitted response (LIME-SUP-D). We compare the results with KLIME and describe its advantages using simulation and real data."}
{"_id": "b886f63e3b9907c57613c6ad7ff467975249c8e5", "title": "A Review of the Design Issues and Techniques for Radial-Flux Brushless Surface and Internal Rare-Earth Permanent-Magnet Motors", "text": "This paper reviews many design issues and analysis techniques for the brushless permanent-magnet machine. It reviews the basic requirements for the use of both ac and dc machines and issues concerning the selection of pole number, winding layout, rotor topology, drive strategy, field weakening, and cooling. These are key issues in the design of a motor. Leading-edge design techniques are illustrated. This paper is aimed as a tutor for motor designers who may be unfamiliar with this particular type of machine."}
{"_id": "5ab0b9923b408171fdc76af9797c10b4358885a7", "title": "The Most Dangerous Code in the Browser", "text": "Browser extensions are ubiquitous. Yet, in today\u2019s browsers, extensions are the most dangerous code to user privacy. Extensions are third-party code, like web applications, but run with elevated privileges. Even worse, existing browser extension systems give users a false sense of security by considering extensions to be more trustworthy than web applications. This is because the user typically has to explicitly grant the extension a series of permissions it requests, e.g., to access the current tab or a particular website. Unfortunately, extensions developers do not request minimum privileges and users have become desensitized to install-time warnings. Furthermore, permissions offered by popular browsers are very broad and vague. For example, over 71% of the top-500 Chrome extensions can trivially leak the user\u2019s data from any site. In this paper, we argue for new extension system design, based on mandatory access control, that protects the user\u2019s privacy from malicious extensions. A system employing this design can enable a range of common extensions to be considered safe, i.e., they do not require user permissions and can be ensured to not leak information, while allowing the user to share information when desired. Importantly, such a design can make permission requests a rarity and thus more meaningful."}
{"_id": "f31c2ddb7bb3f3f6ef4219143901cc0cddca5968", "title": "Modeling Survival Data: Extending the Cox Model", "text": "Reading is a hobby to open the knowledge windows. Besides, it can provide the inspiration and spirit to face this life. By this way, concomitant with the technology development, many companies serve the e-book or book in soft file. The system of this book of course will be much easier. No worry to forget bringing the modeling survival data extending the cox model book. You can open the device and get the book by on-line."}
{"_id": "0c6c30e3052fcc01aa5ee38252d77f75322d7b3f", "title": "DjiNN and Tonic: DNN as a service and its implications for future warehouse scale computers", "text": "As applications such as Apple Siri, Google Now, Microsoft Cortana, and Amazon Echo continue to gain traction, web-service companies are adopting large deep neural networks (DNN) for machine learning challenges such as image processing, speech recognition, natural language processing, among others. A number of open questions arise as to the design of a server platform specialized for DNN and how modern warehouse scale computers (WSCs) should be outfitted to provide DNN as a service for these applications.\n In this paper, we present DjiNN, an open infrastructure for DNN as a service in WSCs, and Tonic Suite, a suite of 7 end-to-end applications that span image, speech, and language processing. We use DjiNN to design a high throughput DNN system based on massive GPU server designs and provide insights as to the varying characteristics across applications. After studying the throughput, bandwidth, and power properties of DjiNN and Tonic Suite, we investigate several design points for future WSC architectures. We investigate the total cost of ownership implications of having a WSC with a disaggregated GPU pool versus a WSC composed of homogeneous integrated GPU servers. We improve DNN throughput by over 120x for all but one application (40x for Facial Recognition) on an NVIDIA K40 GPU. On a GPU server composed of 8 NVIDIA K40s, we achieve near-linear scaling (around 1000x throughput improvement) for 3 of the 7 applications. Through our analysis, we also find that GPU-enabled WSCs improve total cost of ownership over CPU-only designs by 4-20x, depending on the composition of the workload"}
{"_id": "88a4a9184e1163bbbce74969720ccf1aed4ac25f", "title": "Design of cavity-backed spiral antennas", "text": "In this paper, the design procedure of the 2-18 GHz and the 18-40 GHz cavity-backed spiral antenna is described. By varying the antenna parameters the VSWR and the axial ratio characteristics of the antenna are investigated. For the 2-18 GHz spiral antenna, the solid absorber and the Marchand balun are applied. The measured results of the fabricated antenna are compared with the simulated ones. For the 18-40 GHz antenna, the honeycomb absorber having variable loss tangent is considered in simulation to investigate for improvement of the VSWR and the axial ratio characteristics."}
{"_id": "f97092fbaf0fba436b9e9a43c37ea8da36d93814", "title": "Technology and Education : Computers , Software , and the Internet", "text": "Technology and Education: Computers, Software, and the Internet* A substantial amount of money is spent on technology by schools, families and policymakers with the hope of improving educational outcomes. This chapter explores the theoretical and empirical literature on the impacts of technology on educational outcomes. The literature focuses on two primary contexts in which technology may be used for educational purposes: i) classroom use in schools, and ii) home use by students. Theoretically, ICT investment and CAI use by schools and the use of computers at home have ambiguous implications for educational achievement: expenditures devoted to technology necessarily offset inputs that may be more or less efficient, and time allocated to using technology may displace traditional classroom instruction and educational activities at home. However, much of the evidence in the schooling literature is based on interventions that provide supplemental funding for technology or additional class time, and thus favor finding positive effects. Nonetheless, studies of ICT and CAI in schools produce mixed evidence with a pattern of null results. Notable exceptions to this pattern occur in studies of developing countries and CAI interventions that target math rather than language. In the context of home use, early studies based on multivariate and instrumental variables approaches tend to find large positive (and in a few cases negative) effects while recent studies based on randomized control experiments tend to find small or null effects. Early research focused on developed countries while more recently several experiments have been conducted in developing countries. JEL Classification: I2"}
{"_id": "43287ef5346ad1bb5d56183f8b9e986be3a548c7", "title": "Improved CCG Parsing with Semi-supervised Supertagging", "text": "Current supervised parsers are limited by the size of their labelled training data, making improving them with unlabelled data an important goal. We show how a state-of-the-art CCG parser can be enhanced, by predicting lexical categories using unsupervised vector-space embeddings of words. The use of word embeddings enables our model to better generalize from the labelled data, and allows us to accurately assign lexical categories without depending on a POS-tagger. Our approach leads to substantial improvements in dependency parsing results over the standard supervised CCG parser when evaluated on Wall Street Journal (0.8%), Wikipedia (1.8%) and biomedical (3.4%) text. We compare the performance of two recently proposed approaches for classification using a wide variety of word embeddings. We also give a detailed error analysis demonstrating where using embeddings outperforms traditional feature sets, and showing how including POS features can decrease accuracy."}
{"_id": "5fc0160ba0b727b1ab1f6fd377362af3e0167e56", "title": "What do students want?: towards an instrument for students' evaluation of quality of learning analytics services", "text": "Quality assurance in any organization is important for ensuring that service users are satisfied with the service offered. For higher education institutes, the use of service quality measures allows for ideological gaps to be both identified and resolved. The learning analytic community, however, has rarely addressed the concept of service quality. A potential outcome of this is the provision of a learning analytics service that only meets the expectations of certain stakeholders (e.g., managers), whilst overlooking those who are most important (e.g., students). In order to resolve this issue, we outline a framework and our current progress towards developing a scale to assess student expectations and perceptions of learning analytics as a service."}
{"_id": "6fda4a3b06e34622a279f797c30f269453ebed32", "title": "An improved one-time password authentication scheme", "text": "The rapid development of the Internet facilitates our lives in many aspects. More and more business will be done through Internet. Under such circumstances, enough attention must be given to the information security, of which the identity authentication is one important problem. In the traditional authentication scheme, the user provides the username and static password to service provider, but there are some inherent shortcomings of this method-static passwords maybe guessed, forgotten, and eavesdropped. One-Time Password (OTP) is considered as the strongest authentication scheme among all password-based solutions. In this paper, a novel two-factor authentication scheme based OTP is proposed. The scheme not only satisfies the mutual authentication between the user and service provider, but also presents higher security and lower computational cost than traditional schemes based OTP."}
{"_id": "c78d55112529472d6b7f7adb5ec5993bcd8df952", "title": "Mometasone furoate in the treatment of vulvar lichen sclerosus: could its formulation influence efficacy, tolerability and adherence to treatment?", "text": "PURPOSE\nTo assess the effectiveness, tolerability, and convenience of the cream formulation of mometasone furoate 0.1% (MMF) in the treatment of active vulvar lichen sclerosus (VLS) and to compare the cream with the ointment formulation.\n\n\nMETHODS\nThe following efficacy parameters were assessed in 27 VLS patients treated with MMF cream for 12 weeks (group A): (i) response rate, (ii) percentage of patients achieving an improvement from baseline of\u2009\u226575% in subjective and objective scores, and (iii) mean reduction in subjective and objective scores. These efficacy assessments, as well as those regarding safety and adherence, were compared with the assessments recorded among 37 VLS patients treated with MMF ointment (group B).\n\n\nRESULTS\n59.3% (group A) and 78.4% (group B) of patients were considered responders; 44.4% and 40.7% of patients in group A and 54.1% and 45.9% in group B achieved an improvement of at least 75% in subjective and objective scores, respectively. MMF ointment obtained a significantly higher improvement in symptom scores in comparison with the cream formulation.\n\n\nCONCLUSIONS\nMMF in ointment formulation seems to be more effective in treating active VLS in comparison with MMF cream. Both formulations are well tolerated and there is no difference in patient adherence and satisfaction."}
{"_id": "8c6bfe170f6bb0bf370deec37f9f5eb49d8d6693", "title": "Genotyping by Sequencing for SNP-Based Linkage Analysis and Identification of QTLs Linked to Fruit Quality Traits in Japanese Plum (Prunus salicina Lindl.)", "text": "Marker-assisted selection (MAS) in stone fruit (Prunus species) breeding is currently difficult to achieve due to the polygenic nature of the most relevant agronomic traits linked to fruit quality. Genotyping by sequencing (GBS), however, provides a large quantity of useful data suitable for fine mapping using Single Nucleotide Polymorphisms (SNPs) from a reference genome. In this study, GBS was used to genotype 272 seedlings of three F1 Japanese plum (Prunus salicina Lindl) progenies derived from crossing \"98-99\" (as a common female parent) with \"Angeleno,\" \"September King,\" and \"September Queen\" as male parents. Raw sequences were aligned to the Peach genome v1, and 42,909 filtered SNPs were obtained after sequence alignment. In addition, 153 seedlings from the \"98-99\" \u00d7 \"Angeleno\" cross were used to develop a genetic map for each parent. A total of 981 SNPs were mapped (479 for \"98-99\" and 502 for \"Angeleno\"), covering a genetic distance of 688.8 and 647.03 cM, respectively. Fifty five seedlings from this progeny were phenotyped for different fruit quality traits including ripening time, fruit weight, fruit shape, chlorophyll index, skin color, flesh color, over color, firmness, and soluble solids content in the years 2015 and 2016. Linkage-based QTL analysis allowed the identification of genomic regions significantly associated with ripening time (LG4 of both parents and both phenotyping years), fruit skin color (LG3 and LG4 of both parents and both years), chlorophyll degradation index (LG3 of both parents in 2015) and fruit weight (LG7 of both parents in 2016). These results represent a promising situation for GBS in the identification of SNP variants associated to fruit quality traits, potentially applicable in breeding programs through MAS, in a highly heterozygous crop species such as Japanese plum."}
{"_id": "d90261e63478f5cf91c8ab95b1b044e0f947ed4d", "title": "Distributed Representation-based Recommender Systems in E-commerce", "text": "Recommender system plays an important role in many e-commerce services, such as in Rakuten. In this paper, we focus on the item-to-item recommender and the user-to-item recommenders, which are two most widely used functions in online services for presenting relevant items given an item, or a particular user. We use a large amount of log data from one of Rakuten markets, and apply distributed representation method to that data for developing two types of recommender systems. The key idea of our approach is treating items as words, and users\u2019 sessions as sentences, then training the Word2vec model and Doc2vec models based on those items and user\u2019s information. Resulting item vectors from the Word2vec model can be used to calculate the cosine similarity between items, and find the similar items given an item. Similarly, Doc2vec model helps users find relevant items that might interest them using similarity between items and vectors. We also use the item vectors from both embedding models to build an additional user-to-item recommender, namely Item Vector-based system. The experiments show that our best system achieved a hit-rate of 24.17% for recommending items to users in testing data, which outperformed conventional approaches to a significant extent. Keyword Recommender System,Distributed Representation,Item Vector-based"}
{"_id": "5f5746fb44820e6a1bd0959515cf65a6a5fe5e9b", "title": "Assortment Optimization under the Multinomial Logit Model with Random Choice Parameters", "text": "We consider assortment optimization problems under the multinomial logit model, where the parameters of the choice model are random. The randomness in the choice model parameters is motivated by the fact that there are multiple customer segments, each with different preferences for the products, and the segment of each customer is unknown to the firm when the customer makes a purchase. This choice model is also called the mixture-of-logits model. The goal of the firm is to choose an assortment of products to offer that maximizes the expected revenue per customer, across all customer segments. We establish that the problem is NP-complete even when there are just two customer segments. Motivated by this complexity result, we focus on assortments consisting of products with the highest revenues, which we refer to as revenue-ordered assortments. We identify specially structured cases of the problem where revenue-ordered assortments are optimal. When the randomness in the choice model parameters does not follow a special structure, we derive tight approximation guarantees for revenue-ordered assortments. We extend our model to the multi-period capacity allocation problem, and prove that, when restricted to the revenue-ordered assortments, the mixture-of-logits model possesses the nesting-by-fare-order property. This result implies that revenue-ordered assortments can be incorporated into existing revenue management systems through nested protection levels. Numerical experiments show that revenue-ordered assortments perform remarkably well, generally yielding profits that are within a fraction of a percent of the optimal."}
{"_id": "981b28467071d64a38aa19be31d3b12e11b86f34", "title": "A location-based smart shopping system with IoT technology", "text": "Localization is one important part of Internet of Things(IoT) where the Location of Everything (LoE) system plays a important role to improve most services in IoT area. On the other hand, data mining techniques are essential analyses when we have big data from IoT platforms. Indeed, integration of location-based methods and data mining analysis process can make a smart system service for IoT scenarios and applications. For this purpose, we design a smart shopping platform including four components, location of everything component, data collection component, data filtering/analysing component and data mining component. Then a novel accurate localization scheme named \u201clocation orbital\u201d is developed that estimates the current location of mobile objects (users or everything) based on both current and the previous locations. Finally, an implementation of the experiment in a shopping mall is conducted to practically examine performance evaluation of the location-based scheme. The experimental results show that the proposed scheme could achieve significant higher precision than other localization techniques."}
{"_id": "2f9677eafcea2537585a3c3b717284d398fa3747", "title": "Neural Network-Based Coronary Heart Disease Risk Prediction Using Feature Correlation Analysis", "text": "Background\nOf the machine learning techniques used in predicting coronary heart disease (CHD), neural network (NN) is popularly used to improve performance accuracy.\n\n\nObjective\nEven though NN-based systems provide meaningful results based on clinical experiments, medical experts are not satisfied with their predictive performances because NN is trained in a \u201cblack-box\u201d style.\n\n\nMethod\nWe sought to devise an NN-based prediction of CHD risk using feature correlation analysis (NN-FCA) using two stages. First, the feature selection stage, which makes features acceding to the importance in predicting CHD risk, is ranked, and second, the feature correlation analysis stage, during which one learns about the existence of correlations between feature relations and the data of each NN predictor output, is determined.\n\n\nResult\nOf the 4146 individuals in the Korean dataset evaluated, 3031 had low CHD risk and 1115 had CHD high risk. The area under the receiver operating characteristic (ROC) curve of the proposed model (0.749\u2009\u00b1\u20090.010) was larger than the Framingham risk score (FRS) (0.393\u2009\u00b1\u20090.010).\n\n\nConclusions\nThe proposed NN-FCA, which utilizes feature correlation analysis, was found to be better than FRS in terms of CHD risk prediction. Furthermore, the proposed model resulted in a larger ROC curve and more accurate predictions of CHD risk in the Korean population than the FRS."}
{"_id": "f0fe2b9cb1f4de756db573127fe7560421e5de3d", "title": "Adverse health effects of non-medical cannabis use", "text": "For over two decades, cannabis, commonly known as marijuana, has been the most widely used illicit drug by young people in high-income countries, and has recently become popular on a global scale. Epidemiological research during the past 10 years suggests that regular use of cannabis during adolescence and into adulthood can have adverse effects. Epidemiological, clinical, and laboratory studies have established an association between cannabis use and adverse outcomes. We focus on adverse health effects of greatest potential public health interest-that is, those that are most likely to occur and to affect a large number of cannabis users. The most probable adverse effects include a dependence syndrome, increased risk of motor vehicle crashes, impaired respiratory function, cardiovascular disease, and adverse effects of regular use on adolescent psychosocial development and mental health."}
{"_id": "0a77717dae638e8fae8e45db58fc58c1172ab8c8", "title": "Enhancement of Sentence-Generation Based Summarization Method By Modelling Inter-Sentential Consequent-Relationships", "text": "In the text summarization field, based on how to create the summary, there are two main theoretic approaches (E. Lloret [22], K. Jezek and J. Steinberger [13]): the trend based on \"extraction\" in which the most important sentences in the source text will be chosen; the trend based on \"abstraction\" in which a new reduced grammatical and meaningful text will be created based on understanding the original text. In this paper, we focus on \"abstraction\" approach to summarize Vietnamese texts having only two sentences. We also restrict this research to consider verbs representing \"actions\" and \"states\", which appear in both two sentences of source text. Our method is based on modeling and processing Consequent-Relations between the verbs in the first sentence and the second sentence. The aim of this research is to generate new Vietnamese meaning-summarizing sentences which satisfy two requirements: (i) summarize the true meaning of the source pair of sentences; (ii) have natural expression for summarization. We also experiment and assess our approach by proposing a new quantitative evaluation method that suitable for the research approach."}
{"_id": "4d625677469be99e0a765a750f88cfb85c522cce", "title": "Understanding Hand-Object Manipulation with Grasp Types and Object Attributes", "text": "Our goal is to automate the understanding of natural hand-object manipulation by developing computer visionbased techniques. Our hypothesis is that it is necessary to model the grasp types of hands and the attributes of manipulated objects in order to accurately recognize manipulation actions. Specifically, we focus on recognizing hand grasp types, object attributes and actions from a single image within an unified model. First, we explore the contextual relationship between grasp types and object attributes, and show how that context can be used to boost the recognition of both grasp types and object attributes. Second, we propose to model actions with grasp types and object attributes based on the hypothesis that grasp types and object attributes contain complementary information for characterizing different actions. Our proposed action model outperforms traditional appearance-based models which are not designed to take into account semantic constraints such as grasp types or object attributes. Experiment results on public egocentric activities datasets strongly support our hypothesis."}
{"_id": "040249f5bf2a40d5fc219e31ab286e96c36f441f", "title": "A Survey of Schema Matching Research using Database Schemas and Instances", "text": "Schema matching is considered as one of the essential phases of data integration in database systems. The main aim of the schema matching process is to identify the correlation between schema which helps later in the data integration process. The main issue concern of schema matching is how to support the merging decision by providing the correspondence between attributes through syntactic and semantic heterogeneous in data sources. There have been a lot of attempts in the literature toward utilizing database instances to detect the correspondence between attributes during schema matching process. Many approaches based on instances have been proposed aiming at improving the accuracy of the matching process. This paper set out a classification of schema matching research in database system exploiting database schema and instances. We survey and analyze the schema matching techniques applied in the literature by highlighting the strengths and the weaknesses of each technique. A deliberate discussion has been reported highlights on challenges and the current research trends of schema matching in database. We conclude this paper with some future work directions that help researchers to explore and investigate current issues and challenges related to schema matching in contemporary"}
{"_id": "0c495e8b2176b48753f0fb2cb19cc997ffa8bd85", "title": "Real-Time Myoprocessors for a Neural Controlled Powered Exoskeleton Arm", "text": "Exoskeleton robots are promising assistive/rehabilitative devices that can help people with force deficits or allow the recovery of patients who have suffered from pathologies such as stroke. The key component that allows the user to control the exoskeleton is the human machine interface (HMI). Setting the HMI at the neuro-muscular level may lead to seamless integration and intuitive control of the exoskeleton arm as a natural extension of the human body. At the core of the exoskeleton HMI there is a model of the human muscle, the \"myoprocessor,\" running in real-time and in parallel to the physiological muscle, that predicts joint torques as a function of the joint kinematics and neural activation levels. This paper presents the development of myoprocessors for the upper limb based on the Hill phenomenological muscle model. Genetic algorithms are used to optimize the internal parameters of the myoprocessors utilizing an experimental database that provides inputs to the model and allows for performance assessment. The results indicate high correlation between joint moment predictions of the model and the measured data. Consequently, the myoprocessor seems an adequate model, sufficiently robust for further integration into the exoskeleton control system"}
{"_id": "84ed097a35ab8e3c2c404a84ed8e22d40b5b9db4", "title": "Development of robots for rehabilitation therapy: the Palo Alto VA/Stanford experience.", "text": "For over 25 years, personal assistant robots for severely disabled individuals have been in development. More recently, using robots to deliver rehabilitation therapy has been proposed. This paper summarizes the development and clinical testing of three mechatronic systems for post-stroke therapy conducted at the VA Palo Alto in collaboration with Stanford University. We describe the philosophy and experiences that guided their evolution. Unique to the Palo Alto approach is provision for bimanual, mirror-image, patient-controlled therapeutic exercise. Proof-of-concept was established with a 2-degree-of-freedom (DOF) elbow/forearm manipulator. Tests of a second-generation therapy robot producing planar forearm movements in 19 hemiplegic and control subjects confirmed the validity and reliability of interaction forces during mechanically assisted upper-limb movements. Clinical trials comparing 3-D robot-assisted therapy to traditional therapy in 21 chronic stroke subjects showed significant improvement in the Fugl-Meyer (FM) measure of motor recovery in the robot group, which exceeded improvements in the control group."}
{"_id": "8b68d7f43eceaffb9fc99d0a41abec2bc982a510", "title": "Robot-aided neurorehabilitation of the upper extremities", "text": "Task-oriented repetitive movements can improve muscle strength and movement co-ordination in patients with impairments due to neurological lesions. The application of robotics and automation technology can serve to assist, enhance, evaluate and document the rehabilitation of movements. The paper provides an overview of existing devices that can support movement therapy of the upper extremities in subjects with neurological pathologies. The devices are critically compared with respect to technical function, clinical applicability, and, if they exist, clinical outcomes."}
{"_id": "00a5fc6c3841d3b448967d1c73a53a56339b7eae", "title": "A myosignal-based powered exoskeleton system", "text": "Integrating humans and robotic machines into one system offers multiple opportunities for creating assistive technologies that can be used in biomedical, industrial, and aerospace applications. The scope of the present research is to study the integration of a human arm with a powered exoskeleton (orthotic device) and its experimental implementation in an elbow joint, naturally controlled by the human. The Human\u2013Machine interface was set at the neuromuscular level, by using the neuromuscular signal (EMG) as the primary command signal for the exoskeleton system. The EMG signal along with the joint kinematics were fed into a myoprocessor (Hill-based muscle model) which in turn predicted the muscle moments on the elbow joint. The moment-based control system integrated myoprocessor moment prediction with feedback moments measured at the human arm/exoskeleton and external load/exoskeleton interfaces. The exoskeleton structure under study was a two-link, two-joint mechanism, corresponding to the arm limbs and joints, which was mechanically linked (worn) by the human operator. In the present setup the shoulder joint was kept fixed at given positions and the actuator was mounted on the exoskeleton elbow joint. The operator manipulated an external weight, located at the exoskeleton tip, while feeling a scaled-down version of the load. The remaining external load on the joint was carried by the exoskeleton actuator. Four indices of performance were used to define the quality of the human/machine integration and to evaluate the operational envelope of the system. Experimental tests have shown that synthesizing the processed EMG signals as command signals with the external-load/human-arm moment feedback, significantly improved the mechanical gain of the system, while maintaining natural human control of the system, relative to other control algorithms that used only position or contact forces. The results indicated the feasibility of an EMG-based power exoskeleton system as an integrated human\u2013machine system using highlevel neurological signals."}
{"_id": "6e7e4fbc2737b473bbf9d9282896a4a7a7bda8d7", "title": "An EMG-driven musculoskeletal model to estimate muscle forces and knee joint moments in vivo.", "text": "This paper examined if an electromyography (EMG) driven musculoskeletal model of the human knee could be used to predict knee moments, calculated using inverse dynamics, across a varied range of dynamic contractile conditions. Muscle-tendon lengths and moment arms of 13 muscles crossing the knee joint were determined from joint kinematics using a three-dimensional anatomical model of the lower limb. Muscle activation was determined using a second-order discrete non-linear model using rectified and low-pass filtered EMG as input. A modified Hill-type muscle model was used to calculate individual muscle forces using activation and muscle tendon lengths as inputs. The model was calibrated to six individuals by altering a set of physiologically based parameters using mathematical optimisation to match the net flexion/extension (FE) muscle moment with those measured by inverse dynamics. The model was calibrated for each subject using 5 different tasks, including passive and active FE in an isokinetic dynamometer, running, and cutting manoeuvres recorded using three-dimensional motion analysis. Once calibrated, the model was used to predict the FE moments, estimated via inverse dynamics, from over 200 isokinetic dynamometer, running and sidestepping tasks. The inverse dynamics joint moments were predicted with an average R(2) of 0.91 and mean residual error of approximately 12 Nm. A re-calibration of only the EMG-to-activation parameters revealed FE moments prediction across weeks of similar accuracy. Changing the muscle model to one that is more physiologically correct produced better predictions. The modelling method presented represents a good way to estimate in vivo muscle forces during movement tasks."}
{"_id": "09d2ef139e0e9a3912501bbc273ccc1dbe4f4322", "title": "High Performance Emulation of Quantum Circuits", "text": "As quantum computers of non-trivial size become available in the near future, it is imperative to develop tools to emulate small quantum computers. This allows for validation and debugging of algorithms as well as exploring hardware-software co-design to guide the development of quantum hardware and architectures. The simulation of quantum computers entails multiplications of sparse matrices with very large dense vectors of dimension 2n, where n denotes the number of qubits, making this a memory-bound and network bandwidth-limited application. We introduce the concept of a quantum computer emulator as a component of a software framework for quantum computing, enabling a significant performance advantage over simulators by emulating quantum algorithms at a high level rather than simulating individual gate operations. We describe various optimization approaches and present benchmarking results, establishing the superiority of quantum computer emulators in terms of performance."}
{"_id": "ec55b5f397b1a4e73baa509180f417c0bdcd6d68", "title": "Fire flame detection in video sequences using multi-stage pattern recognition techniques", "text": "In this paper, we propose an effective technique that is used to automatically detect fire in video images. The proposed algorithm is composed of four stages: (1) an adaptive Gaussian mixture model to detect moving regions, (2) a fuzzy c-means (FCM) algorithm to segment the candidate fire regions from these moving regions based on the color of fire, (3) special parameters extracted based on the tempospatial characteristics of fire regions, and (4) a support vector machine (SVM) algorithm using these special parameters to distinguish between fire and non-fire. Experimental results indicate that the proposed method outperforms other fire detection algorithms, providing high reliability and a low false alarm rate. & 2012 Elsevier Ltd. All rights reserved."}
{"_id": "de4eaff56fa60e83f238d81a52e9dc5f3fd33f18", "title": "Prediction of click frauds in mobile advertising", "text": "Click fraud represents a serious drain on advertising budgets and can seriously harm the viability of the internet advertising market. This paper proposes a novel framework for prediction of click fraud in mobile advertising which consists of feature selection using Recursive Feature Elimination (RFE) and classification through Hellinger Distance Decision Tree (HDDT).RFE is chosen for the feature selection as it has given better results as compared to wrapper approach when evaluated using different classifiers. HDDT is also selected as classifier to deal with class imbalance issue present in the data set. The efficiency of proposed framework is investigated on the data set provided by Buzzcity and compared with J48, Rep Tree, logitboost, and random forest. Results show that accuracy achieved by proposed framework is 64.07 % which is best as compared to existing methods under study."}
{"_id": "aa6f094f17d78380f927555a348ad514a505cc3b", "title": "SlowFast Networks for Video Recognition", "text": "We present SlowFast networks for video recognition. Our model involves (i) a Slow pathway, operating at low frame rate, to capture spatial semantics, and (ii) a Fast pathway, operating at high frame rate, to capture motion at fine temporal resolution. The Fast pathway can be made very lightweight by reducing its channel capacity, yet can learn useful temporal information for video recognition. Our models achieve strong performance for both action classification and detection in video, and large improvements are pin-pointed as contributions by our SlowFast concept. We report 79.0% accuracy on the Kinetics dataset without using any pre-training, largely surpassing the previous best results of this kind. On AVA action detection we achieve a new stateof-the-art of 28.3 mAP. Code will be made publicly available."}
{"_id": "c23bedda0dd56d7a9c8198921a49db3979bf00a1", "title": "Reducing the risk of query expansion via robust constrained optimization", "text": "We introduce a new theoretical derivation, evaluation methods, and extensive empirical analysis for an automatic query expansion framework in which model estimation is cast as a robust constrained optimization problem. This framework provides a powerful method for modeling and solving complex expansion problems, by allowing multiple sources of domain knowledge or evidence to be encoded as simultaneous optimization constraints. Our robust optimization approach provides a clean theoretical way to model not only expansion benefit, but also expansion risk, by optimizing over uncertainty sets for the data. In addition, we introduce risk-reward curves to visualize expansion algorithm performance and analyze parameter sensitivity. We show that a robust approach significantly reduces the number and magnitude of expansion failures for a strong baseline algorithm, with no loss in average gain. Our approach is implemented as a highly efficient post-processing step that assumes little about the baseline expansion method used as input, making it easy to apply to existing expansion methods. We provide analysis showing that this approach is a natural and effective way to do selective expansion, automatically reducing or avoiding expansion in risky scenarios, and successfully attenuating noise in poor baseline methods."}
{"_id": "922e773e471430f637d07af4f17b72461ede8b7c", "title": "Reusing Scientific Data: How Earthquake Engineering Researchers Assess the Reusability of Colleagues\u2019 Data", "text": "Investments in cyberinfrastructure and e-Science initiatives are motivated by the desire to accelerate scientific discovery. Always viewed as a foundation of science, data sharing is appropriately seen as critical to the success of such initiatives, but new technologies supporting increasingly data-intensive and collaborative science raise significant challenges and opportunities. Overcoming the technical and social challenges to broader data sharing is a common and important research objective, but increasing the supply and accessibility of scientific data is no guarantee data will be applied by scientists. Before reusing data created by others, scientists need to assess the data\u2019s relevance, they seek confidence the data can be understood, and they must trust the data. Using interview data from earthquake engineering researchers affiliated with the George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES), we examine how these scientists assess the reusability of colleagues\u2019 experimental data for model validation."}
{"_id": "42c73dfb22d04da507ebe8df62c697e5263f84a0", "title": "The Multidimensional Knapsack Problem: Structure and Algorithms", "text": "Structure and Algorithms Jakob Puchinger NICTA Victoria Laboratory Department of Computer Science & Software Engineering University of Melbourne, Australia jakobp@csse.unimelb.edu.au G\u00fcnther R. Raidl Institute of Computer Graphics and Algorithms Vienna University of Technology, Austria raidl@ads.tuwien.ac.at Ulrich Pferschy Department of Statistics and Operations Research University of Graz, Austria pferschy@uni-graz.at"}
{"_id": "59fa9d40d129f18f1f3193f65935fcb9e2042afb", "title": "State of the Art: Embedding Security in Vehicles", "text": "For new automotive applications and services, information technology (IT) has gained central importance. IT-related costs in car manufacturing are already high and they will increase dramatically in the future. Yet whereas safety and reliability have become a relatively well-established field, the protection of vehicular IT systems against systematic manipulation or intrusion has only recently started to emerge. Nevertheless, IT security is already the base of some vehicular applications such as immobilizers or digital tachographs. To securely enable future automotive applications and business models, IT security will be one of the central technologies for the next generation of vehicles. After a state-of-the-art overview of IT security in vehicles, we give a short introduction into cryptographic terminology and functionality. This contribution will then identify the need for automotive IT security while presenting typical attacks, resulting security objectives, and characteristic constraints within the automotive area. We will introduce core security technologies and relevant security mechanisms followed by a detailed description of critical vehicular applications, business models, and components relying on IT security. We conclude our contribution with a detailed statement about challenges and opportunities for the automotive IT community for embedding IT security in vehicles."}
{"_id": "afa15696b87fbd9884d223ed1adbfcdd4f32b0f2", "title": "Culture as situated cognition : Cultural mindsets , cultural fluency , and meaning making", "text": "The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable"}
{"_id": "25da5c2ce1d49f4c9a5144f553a2b3e05cb3d675", "title": "Accurate age classification of 6 and 12 month-old infants based on resting-state functional connectivity magnetic resonance imaging data", "text": "Human large-scale functional brain networks are hypothesized to undergo significant changes over development. Little is known about these functional architectural changes, particularly during the second half of the first year of life. We used multivariate pattern classification of resting-state functional connectivity magnetic resonance imaging (fcMRI) data obtained in an on-going, multi-site, longitudinal study of brain and behavioral development to explore whether fcMRI data contained information sufficient to classify infant age. Analyses carefully account for the effects of fcMRI motion artifact. Support vector machines (SVMs) classified 6 versus 12 month-old infants (128 datasets) above chance based on fcMRI data alone. Results demonstrate significant changes in measures of brain functional organization that coincide with a special period of dramatic change in infant motor, cognitive, and social development. Explorations of the most different correlations used for SVM lead to two different interpretations about functional connections that support 6 versus 12-month age categorization."}
{"_id": "376e8372d775ad2987081ab91adf5af2554676ce", "title": "A Joint Reconstruction and Segmentation Method for Limited-Angle X-Ray Tomography", "text": "Limited-angle computed tomography (CT) is common in industrial applications, where incomplete projection data can cause artifacts. For objects made from homogeneous materials, we propose a joint reconstruction and segmentation method that performs joint image reconstruction and segmentation directly on the projection data. We describe an alternating minimization algorithm to solve the resulting optimization problem, and we modify the primal-dual hybrid gradient algorithm for the non-convex piecewise constant Mumford-Shah model, which is a popular approximation model in biomedical image segmentation. The effectiveness of the proposed approach is validated by simulation and by application to actual micro-CT data sets."}
{"_id": "1512f584ef8572d998968b732476999a74a9146d", "title": "Unsupervised document zone identification using probabilistic graphical models", "text": "Document zone identification aims to automatically classify sequences of text-spans (e.g. sentences) within a document into predefined zone categories. Current approaches to document zone identification mostly rely on supervised machine learning methods, which require a large amount of annotated data, which is often difficult and expensive to obtain. In order to overcome this bottleneck, we propose graphical models based on the popular Latent Dirichlet Allocation (LDA) model. The first model, which we call zoneLDA aims to cluster the sentences into zone classes using only unlabelled data. We also study an extension of zoneLDA called zoneLDAb, which makes distinction between common words and non-common words within the different zone types. We present results on two different domains: the scientific domain and the technical domain. For the latter one we propose a new document zone classification schema, which has been annotated over a collection of 689 documents, achieving a Kappa score of 85%. Overall our experiments show promising results for both of the domains, outperforming the baseline model. Furthermore, on the technical domain the performance of the models are comparable to the supervised approach using the same feature sets. We thus believe that graphical models are a promising avenue of research for automatic document zoning."}
{"_id": "f4493ac41bb50e35a1b03872da715c7284c8c123", "title": "Violence during pregnancy in Jordan: its prevalence and associated risk and protective factors.", "text": "This study estimates the lifetime prevalence of physical violence during pregnancy and examines risk and protective factors among women (N = 390) attending reproductive health clinics in Jordan. Approximately 15% reported physical violence during pregnancy. The husband was the sole perpetrator in 83% of the cases. A high frequency of quarreling, the husband's use of alcohol, attitudes supportive of a woman's duty to obey her husband, infrequent communication between the respondent and her family, and exposure to violence as a child increased the risk of violence. Consanguinity (marriage to a blood relative) and higher education levels were protective against violence during pregnancy."}
{"_id": "3643e384f510318b779ce9a179f422e96955171d", "title": "A study on lidar data forensics", "text": "3D LiDAR (Light Imaging Detection and Ranging) data has recently been used in a wide range of applications such as vehicle automation and crime scene reconstruction. Decision making in such applications is highly dependent on LiDAR data. Thus, it becomes crucial to authenticate the data before using it. Though authentication of 2D digital images and video has been widely studied, the area of 3D data forensic is relatively unexplored. In this paper, we investigate and identify three possible attacks on the LiDAR data. We also propose two novel forensic approaches as a countermeasure for such attacks and study their effectiveness. The first forensic approach utilises the density consistency check while the second method leverages the occlusion effect for revealing the forgery. Experimental results demonstrate the effectiveness of the proposed forgery attacks and raise the awareness against unauthenticated use of LiDAR data. The performance analyses of the proposed forensic approaches indicate that the proposed methods are very efficient and provide the detection accuracy of more than 95% for certain kinds of forgery attacks. While the forensic approach is unable to handle all forgery attacks, the study motivates to explore more sophisticated forensic methods for LiDAR data."}
{"_id": "6aa4600114b079da9c3f3ee640bd0dd55c6d4dce", "title": "Can we Evaluate the Quality of Generated Text?", "text": "Evaluating the output of NLG systems is notoriously difficult, and performin g assessments of text quality even more so. A range of automated and subject-based approaches to the evaluation of text quality h ave been taken, including comparison with a putative gold standard text, analysis of specific linguistic features of the output, expert review and task-based evaluation. In this paper we present the results of a variety of such approaches in the context of a case study application. We discuss the problems encountered in the implementation of each approach in the context of the literature, and propo se that a test based on the Turing test for machine intelligence offers a way forward in the evaluation of the subjective notion of text qua lity."}
{"_id": "3fcdc01dbcecec542788fa791797e434e2ffdf33", "title": "Condensation methodologies for two-dimensional non-uniform constellations", "text": "Two-dimensional non-uniform constellations (2D-NUC) have been adopted for the most recent terrestrial broadcasting system called ATSC 3.0. They are known for being more efficient than one-dimensional non uniform constellations (1D-NUC) and uniform constellations (UC). They can be used in any communication system with the bit-interleaved coded modulation (BICM) structure. However, one of the main challenges of such constellations is their design for optimal behaviour in a wide range of cases and the design of a demapper that exploits their advantages. This paper presents two different condensation methodologies to design efficient 2D-NUCs simple in design and with limited demapping complexity. The proposal provides a complexity reduction of the design and demapping processes in the range from 13% to 94%. The demapping stage provides performance losses less than 0.1 dB if compared with standard 2D-NUC demapping."}
{"_id": "55a3d05383a496eb8c19c2ae811d3d64c672166c", "title": "1 Text Mining Systems for Predicting Market Response to News : A Survey", "text": "Several prototypes for predicting the short-term market reaction to news based on text mining techniques have been developed. However, no detailed comparison of the systems and their performances is available thus far. This paper describes the main systems developed and presents a framework for comparing the approaches. The prototypes differ in the text mining methods applied and the data sets used for performance evaluation. Some (mostly implicit) assumptions of these evaluations are rather unrealistic with respect to properties of financial markets and the performance results cannot be achieved in reality. Furthermore, the adequacy of applying text mining techniques for predicting stock price movements in general and approaches for dealing with existing problems are discussed."}
{"_id": "ae5e5085b4e8f4851d9fd76e1d3845da942c3147", "title": "Fast routing in road networks with transit nodes.", "text": "When you drive to somewhere far away, you will leave your current location via one of only a few important traffic junctions. Starting from this informal observation, we developed an algorithmic approach, transit node routing, that allows us to reduce quickest path queries in road networks to a small number of table lookups. For road maps of Western Europe and the United States, our best query times improved over the best previously published figures by two orders of magnitude. This is also more than one million times faster than the best known algorithm for general networks."}
{"_id": "b44415a13f29ddc1af497b3876a2396673c3cfc0", "title": "Twin Networks: Using the Future as a Regularizer", "text": "Being able to model long-term dependencies in sequential data, such as text, has been among the long-standing challenges of recurrent neural networks (RNNs). This issue is strictly related to the absence of explicit planning in current RNN architectures, more explicitly, the network is trained to predict only the next token given previous ones. In this paper, we introduce a simple way of biasing the RNNs towards planning behavior. Particularly, we introduce an additional neural network which is trained to generate the sequence in reverse order, and we require closeness between the states of the forward RNN and backward RNN that predict the same token. At each step, the states of the forward RNN are required to match the future information contained in the backward states. We hypothesize that the approach eases modeling of long-term dependencies thus helping in generating more globally consistent samples. The model trained with conditional generation achieved 4% relative improvement (CER of 7.3 compared to a baseline of 7.6)."}
{"_id": "926bd9147e95101d2ebb8b563696113a57b784ff", "title": "Design of a multi-DOF cable-driven mechanism of a miniature serial manipulator for robot-assisted minimally invasive surgery", "text": "While multi-fingered robotic hands have been developed for decades, none has been used for surgical operations. \u03bcAngelo is an anthropomorphic master-slave system for teleoperated robot-assisted surgery. As part of this system, this paper focuses on its slave instrument, a miniature three-digit hand. The design of the mechanism of such a manipulator poses a challenge due to the required miniaturization and the many active degrees of freedom. As the instrument has a human-centered design, its relation to the human hand is discussed. Two ways of routing its cable-driven mechanism are investigated and the method of deriving the input-output functions that drive the mechanism is presented."}
{"_id": "b386d760a623fcdca7685c35470d3ba3e0177a5e", "title": "German version of the Multidimensional Body-Self Relations Questionnaire - Appearance Scales (MBSRQ-AS): confirmatory factor analysis and validation.", "text": "The Multidimensional Body-Self Relations Questionnaire (MBSRQ) is a widely used questionnaire that measures body image as a multidimensional construct. The Appearance Scales (AS) of the MBSRQ (Appearance Evaluation, Appearance Orientation, Body Areas Satisfaction, Overweight Preoccupation and Self-Classified Weight) are subscales which facilitate a parsimonious assessment of appearance-related aspects of body image. The current study tested the psychometric properties and factor structure of a German translation of the MBSRQ-AS. Participants were n=230 female patients with the SCID diagnosis of an eating disorder and n=293 female healthy controls. In a confirmatory factor analysis, convincing goodness-of-fit indices emerged. The subscales of the questionnaire yielded good reliability and convergent and discriminant validity coefficients, with most items showing excellent characteristics. Like the English version, the German adaptation of the questionnaire can be recommended for a multidimensional assessment of appearance-related aspects of body image in both research and clinical practice."}
{"_id": "5402cd6e1f821ee61c786374512fe9f6400ed6a0", "title": "\u201cAnarchy is what states make of it: the social construction of power politics,\u201d", "text": "Wendt recasts the realist/liberal debate as an argument over the determinants of state action in the international system: i.e., whether state action is influenced by \u201cstructure\u201d (anarchy and relative power) or \u201cprocess\u201d (interaction and learning). He puts forth a \u201cconstructivist\u201d argument rejecting the realist belief that the structure of the international system \u2013 anarchy and \u201cself-help\u201d \u2013 forces states to \u201cplay competitive power politics.\u201d Rather, Wendt argues that self-help and power politics do not follow \u201ceither logically or causally\u201d from anarchy; if they exist, it is due to process, not structure (394). Thus, Wendt argues that state identities and interests are shaped and transformed within the international system, rather than (as the realists believe) existing as exogenous variables."}
{"_id": "56faf7e024ac63838a6a9f8e71b42980fbfb28e3", "title": "Automatic scoring of pronunciation quality", "text": "We present a paradigm for the automatic assessment of pronunciation quality by machine. In this scoring paradigm, both native and nonnative speech data is collected and a database of human-expert ratings is created to enable the development of a variety of machine scores. We \u00aerst discuss issues related to the design of speech databases and the reliability of human ratings. We then address pronunciation evaluation as a prediction problem, trying to predict the grade a human expert would assign to a particular skill. Using the speech and the expert-ratings databases, we build statistical models and introduce di\u0080erent machine scores that can be used as predictor variables. We validate these machine scores on the Voice Interactive Language Training System (VILTS) corpus, evaluating the pronunciation of American speakers speaking French and we show that certain machine scores, like the log-posterior and the normalized duration, achieve a correlation with the targeted human grades that is comparable to the human-to-human correlation when a su\u0081cient amount of speech data is available. \u00d3 2000 Elsevier Science B.V. All rights reserved."}
{"_id": "25c776df4dcca8719d2bc7b1a526675c77d0e2a3", "title": "Learning from Demonstration for Autonomous Navigation in Complex Unstructured Terrain", "text": "Rough terrain autonomous navigation continues to pose a challenge to the robotics community. Robust navigation by a mobile robot depends not only on the individual performance of perception and planning systems, but on how well these systems are coupled. When traversing complex unstructured terrain, this coupling (in the form of a cost function) has a large impact on robot behavior and performance, necessitating a robust design. This paper explores the application of Learning from Demonstration to this task for the Crusher autonomous navigation platform. Using expert examples of desired navigation behavior, mappings from both online and offline perceptual data to planning costs are learned. Challenges in adapting existing techniques to complex online planning systems and imperfect demonstration are addressed, along with additional practical considerations. The benefits to autonomous performance of this approach are examined, as well as the decrease in necessary designer effort. Experimental results are presented from autonomous traverses through complex natural environments."}
{"_id": "9af29d227e96c747eb8941a6a7bceaa15b7d8e11", "title": "A tactile sensor for the fingertips of the humanoid robot iCub", "text": "In order to successfully perform object manipulation, humanoid robots must be equipped with tactile sensors. However, the limited space that is available in robotic fingers imposes severe design constraints. In [1] we presented a small prototype fingertip which incorporates a capacitive pressure system. This paper shows an improved version, which has been integrated on the hand of the humanoid robot iCub. The fingertip is 14.5 mm long and 13 mm wide. The capacitive pressure sensor system has 12 sensitive zones and includes the electronics to send the 12 measurements over a serial bus with only 4 wires. Each synthetic fingertip is shaped approximately like a human fingertip. Furthermore, an integral part of the capacitive sensor is soft silicone foam, and therefore the fingertip is compliant. We describe the structure of the fingertip, their integration on the humanoid robot iCub and present test results to show the characteristics of the sensor."}
{"_id": "131b4e65b80305a4c38e1cc20c12521732a952dc", "title": "Tactile sensing for mechatronics * a state of the art survey", "text": "In this paper we examine the state of the art in tactile sensing for mechatronics[ We de_ne a tactile sensor as a device or system that can measure a given property of an object or contact event through physical contact between the sensor and the object[ We consider any property that can be measured through contact\\ including the shape of an object\\ texture\\ temperature\\ hardness\\ moisture content\\ etc[ A comprehensive search of the literature revealed that there was a signi_cant increase in publications on tactile sensing from 0880 onwards[ Considerable e}ort in the 0879s was spent investigating transduction techniques and developing new sensors\\ whilst emphasis in more recent research has focused on experiments using tactile sensors to perform a variety of tasks[ This paper reports on progress in tactile sensing in the following areas] cutaneous sensors\\ sensing _ngers\\ soft materials\\ industrial robot grippers\\ multi_ngered hands\\ probes and whiskers\\ analysis of sensing devices\\ haptic perception\\ processing sensory data and new application areas[ We conclude that the predominant choice of transduction method is piezoelectric\\ with arrays using resistive or capacitive sensing[ We found that increased emphasis on understanding tactile sensing and perception issues has opened up potential for new application areas[ The predicted growth in applications in industrial automation has not eventuated[ New applications for tactile sensing including surgery\\ rehabilitation and service robotics\\ and food processing automation show considerable potential and are now receiving signi_cant levels of research attention[ \u00de 0887 Elsevier Science Ltd[ All rights reserved["}
{"_id": "38034493656903332acf537ee3b0a90880076303", "title": "The iCub humanoid robot: an open platform for research in embodied cognition", "text": "We report about the iCub, a humanoid robot for research in embodied cognition. At 104 cm tall, the iCub has the size of a three and half year old child. It will be able to crawl on all fours and sit up to manipulate objects. Its hands have been designed to support sophisticate manipulation skills. The iCub is distributed as Open Source following the GPL/FDL licenses. The entire design is available for download from the project homepage and repository (http://www.robotcub.org). In the following, we will concentrate on the description of the hardware and software systems. The scientific objectives of the project and its philosophical underpinning are described extensively elsewhere [1]."}
{"_id": "38f3e33622d9f40310d30258036bc0b1e17cea4c", "title": "HAWK : Halting Anomaly with Weighted ChoKing to Rescue Well-Behaved TCP Sessions from Shrew DoS Attacks", "text": "1 Manuscript submitted to ICDCS 2005 on October 8, 2004. All rights reserved by the authors. This research was supported by an NSF ITR Research Grant under contract number ACI-0325409. Corresponding Author: Kai Hwang, Email: kaihwang@usc.edu, Tel: 213-740-4470, Fax: 213-740-4418. Abstract\u2014High availability in network services is crucial for effective large-scale distributed computing. While denial-ofservice (DoS) attacks through massive packet flooding have baffled researchers for years, a new type of even more detrimental attack\u2014shrew attacks (periodic intensive packet bursts with low average rate)\u2014has recently been identified. Shrew attacks can significantly degrade well-behaved TCP sessions and are very difficult to detect, not to mention defend against, due to its low average rate. We propose a new stateful adaptive queue management technique called HAWK (Halting Anomaly with Weighted choKing) which works by judiciously identifying malicious shrew packet flows using a small flow table and dropping such packets decisively to halt the attack such that well-behaved TCP sessions can re-gain their bandwidth shares. Our NS-2 based extensive performance results indicate that HAWK is highly agile in the sense that it can swiftly (e.g., within 5 seconds) identify and then halt shrew attacks with a flow table of tiny size (e.g., 600 bytes). Thus, HAWK is readily amenable to practical implementation in a real environment."}
{"_id": "3e14c4e1c63e5a983518ae66f8b2822f5eceff33", "title": "Centralities in Large Networks: Algorithms and Observations", "text": "Node centrality measures are important in a large number of graph applications, from search and ranking to social and biological network analysis. In this paper we study node centrality for very large graphs, up to billions of nodes and edges. Various definitions for centrality have been proposed, ranging from very simple (e.g., node degree) to more elaborate. However, measuring centrality in billion-scale graphs poses several challenges. Many of the \u201ctraditional\u201d definitions such as closeness and betweenness were not designed with scalability in mind. Therefore, it is very difficult, if not impossible, to compute them both accurately and efficiently. In this paper, we propose centrality measures suitable for very large graphs, as well as scalable methods to effectively compute them. More specifically, we propose effective closeness and LINERANK which are designed for billion-scale graphs. We also develop algorithms to compute the proposed centrality measures in MAPREDUCE, a modern paradigm for large-scale, distributed data processing. We present extensive experimental results on both synthetic and real datasets, which demonstrate the scalability of our approach to very large graphs, as well as interesting findings and anomalies."}
{"_id": "dc85adfda51c32c45beddc3b84a0c2c2ac9ae74e", "title": "Phonetic posteriorgrams for many-to-one voice conversion without parallel data training", "text": "This paper proposes a novel approach to voice conversion with non-parallel training data. The idea is to bridge between speakers by means of Phonetic PosteriorGrams (PPGs) obtained from a speaker-independent automatic speech recognition (SI-ASR) system. It is assumed that these PPGs can represent articulation of speech sounds in a speaker-normalized space and correspond to spoken content speaker-independently. The proposed approach first obtains PPGs of target speech. Then, a Deep Bidirectional Long Short-Term Memory based Recurrent Neural Network (DBLSTM) structure is used to model the relationships between the PPGs and acoustic features of the target speech. To convert arbitrary source speech, we obtain its PPGs from the same SI-ASR and feed them into the trained DBLSTM for generating converted speech. Our approach has two main advantages: 1) no parallel training data is required; 2) a trained model can be applied to any other source speaker for a fixed target speaker (i.e., many-to-one conversion). Experiments show that our approach performs equally well or better than state-of-the-art systems in both speech quality and speaker similarity."}
{"_id": "3026c494c8f05973dd8ae117922cdaa5343b69b2", "title": "Telos: Representing Knowledge About Information Systems", "text": "We describe Telos, a language intended to support the development of information systems. The design principles for the language are based on the premise that information system development is knowledge intensive and that the primary responsibility of any language intended for the task is to be able to formally represent the relevent knowledge. Accordingly, the proposed language is founded on concepts from knowledge representations. Indeed, the language is appropriate for representing knowledge about a variety of worlds related to a particular information system, such as the subject world (application domain), the usage world (user models, environments), the system world (software requirements, design), and the development world (teams, metodologies).\nWe introduce the features of the language through examples, focusing on those provided for desribing metaconcepts that can then be used to describe knowledge relevant to a particular information system. Telos' fetures include an object-centered framework which supports aggregation, generalization, and classification; a novel treatment of attributes; an explicit representation of time; and facilities for specifying integrity constraints and deductive rules. We review actual applications of the language through further examples, and we sketch a formalization of the language."}
{"_id": "6abef12dfe2ba6afae50e91e132b59e4f974fd64", "title": "Fighting Accounting Fraud Through Forensic Data Analytics", "text": "Accounting fraud is a global concern representing a significant threat to the financial system stability due to the resulting diminishing of the market confidence and trust of regulatory authorities. Several tricks can be used to commit accounting fraud, hence the need for non-static regulatory interventions that take into account different fraudulent patterns. Accordingly, this study aims to improve the detection of accounting fraud via the implementation of several machine learning methods to better differentiate between fraud and non-fraud companies, and to further assist the task of examination within the riskier firms by evaluating relevant financial indicators. Out-of-sample results suggest there is a great potential in detecting falsified financial statements through statistical modelling and analysis of publicly available accounting information. The proposed methodology can be of assistance to public auditors and regulatory agencies as it facilitates auditing processes, and supports more targeted and effective examinations of accounting reports."}
{"_id": "ad8626a0d99a6cdea5179a01a2c06f84a72c1a75", "title": "Mindfulness and emotion regulation in depression and anxiety: common and distinct mechanisms of action.", "text": "BACKGROUND\nThe current study seeks to investigate the mechanisms through which mindfulness is related to mental health in a clinical sample of adults by examining (1) whether specific cognitive emotion regulation strategies (rumination, reappraisal, worry, and nonacceptance) mediate associations between mindfulness and depression and anxiety, respectively, and (2) whether these emotion regulation strategies operate uniquely or transdiagnostically in relation to depression and anxiety.\n\n\nMETHODS\nParticipants were 187 adults seeking treatment at a mood and anxiety disorders clinic in Connecticut. Participants completed a battery of self-report measures that included assessments of depression and anxiety (Mood and Anxiety Symptom Questionnaire), and emotion regulation (Ruminative Response Scale, Penn State Worry Questionnaire, Emotion Regulation Questionnaire, Difficulties in Emotion Regulation Scale).\n\n\nRESULTS\nSimple mediation analyses indicated that rumination and worry significantly mediated associations between mindfulness and anxiety symptoms, whereas rumination and reappraisal significantly mediated associations between mindfulness and depressive symptoms. Multiple mediation analyses showed that worry significantly mediated associations between mindfulness and anxiety symptoms and rumination and reappraisal significantly mediated associations between mindfulness and depressive symptoms.\n\n\nCONCLUSIONS\nFindings suggest that mindfulness operates through distinct and common mechanisms depending on clinical context."}
{"_id": "e0b3ad93dde06afc5b632cf79e542dcf3b65086e", "title": "The principles and practice of probabilistic programming", "text": "Probabilities describe degrees of belief, and probabilistic inference describes rational reasoning under uncertainty. It is no wonder, then, that probabilistic models have exploded onto the scene of modern artificial intelligence, cognitive science, and applied statistics: these are all sciences of inference under uncertainty. But as probabilistic models have become more sophisticated, the tools to formally describe them and to perform probabilistic inference have wrestled with new complexity. Just as programming beyond the simplest algorithms requires tools for abstraction and composition, complex probabilistic modeling requires new progress in model representation\u2014probabilistic programming languages. These languages provide compositional means for describing complex probability distributions; implementations of these languages provide generic inference engines: tools for performing efficient probabilistic inference over an arbitrary program. In their simplest form, probabilistic programming languages extend a well-specified deterministic programming language with primitive constructs for random choice. This is a relatively old idea, with foundational work by Giry, Kozen, Jones, Moggi, SahebDjahromi, Plotkin, and others [see e.g. 7]. Yet it has seen a resurgence thanks to new tools for probabilistic inference and new complexity of probabilistic modeling applications. There are a number of recent probabilistic programming languages [e.g. 8, 9, 11\u201317], embodying different tradeoffs in expressivity, efficiency, and perspicuity. We will focus on the probabilistic programming language Church [6] for simplicity, but the design of probabilistic languages to best support complex model representation and efficient inference is an active and important topic. Church extends (the purely functional subset of) Scheme with elementary random primitives, such as flip (a bernoulli), multinomial, and gaussian. In addition, Church includes language constructs that simplify modeling. For instance, mem, a higher-order procedure that memoizes its input function, is useful for describing persistent random properties and lazy model construction. (Interestingly, memoization has a semantic effect in probabilistic languages.) If we view the semantics of the underlying deterministic language as a map from programs to executions of the program, the semantics of the probabilistic language will be a map from programs to distributions over executions. When the program halts"}
{"_id": "46fbe8e7dac57f4dbfd91e353eae2d4cfd728e3d", "title": "Shilling attacks against recommender systems: a comprehensive survey", "text": "Online vendors employ collaborative filtering algorithms to provide recommendations to their customers so that they can increase their sales and profits. Although recommendation schemes are successful in e-commerce sites, they are vulnerable to shilling or profile injection attacks. On one hand, online shopping sites utilize collaborative filtering schemes to enhance their competitive edge over other companies. On the other hand, malicious users and/or competing vendors might decide to insert fake profiles into the user-item matrices in such a way so that they can affect the predicted ratings on behalf of their advantages. In the past decade, various studies have been conducted to scrutinize different shilling attacks strategies, profile injection attack types, shilling attack detection schemes, robust algorithms proposed to overcome such attacks, and evaluate them with respect to accuracy, cost/benefit, and overall performance. Due to their popularity and importance, we survey about shilling attacks in collaborative filtering algorithms. Giving an overall picture about various shilling attack types by introducing new classification attributes is imperative for further research. Explaining shilling attack detection schemes in detail and robust algorithms proposed so far might open a lead to develop new detection schemes and enhance such robust algorithms further, even propose new ones. Thus, we describe various attack types and introduce new dimensions for attack classification. Detailed description of the proposed detection and robust recommendation algorithms are given. Moreover, we briefly explain evaluation of the proposed schemes. We conclude the paper by discussing various open questions."}
{"_id": "57ff563b01a2ed1da9c5e20af6b922e8471f689c", "title": "Modeling people flow: simulation analysis of international-departure passenger flows in an airport terminal", "text": "An entire airport terminal building is simulated to examine passenger flows, especially international departures. First, times needed for passengers to be processed in the terminal building are examined. It is found that the waiting time for check-in accounts for more than 80 percent of the total waiting time of passengers spent in the airport. A special-purpose data-generator is designed and developed to create experimental data for executing a simulation. It is found that the possible number of passengers missing their flights could be drastically reduced by adding supporting staff to and by making use of first-and business-class check-in counters for processing economy-and group-class passengers."}
{"_id": "49d7824b8dee17d185eeb5987ec985c6da4cf548", "title": "Enhancement of MRI human thigh muscle segmentation by template-based framework", "text": "Image segmentation of anatomic structures is often an essential step in medical image analysis. A variety of segmentation methods have been proposed, but none provides automatic segmentation of the thigh. In magnetic resonance images of the thigh, the segmentation is complicated by factors, such as artifacts (e.g. intensity inhomogeneity and echo) and inconsistency of soft and hard tissue compositions, especially in muscle from older people, where accumulation of intermuscular fat is greater than in young muscles. In this paper, the combination framework that leads to a segmentation enhancement method for region of interest segmentation are demonstrated. Appropriate methods of image pre-processing, thresholding, manual interaction of muscle border, template conversion and deformable contours in combination with image filters are applied. Prior geometrical information in an initial template image is used to automatically derive the muscle outlines by application of snake active contours, in serial images within a single MRI dataset. Our approach has an average segmented output accuracy of 93.34% by Jaccard Similarity Index, and reduced the processing time by 97.73% per image compared to manual segmentation."}
{"_id": "c43dc4ae68a317b34a79636fadb3bcc4d1ccb61c", "title": "Age and gender estimation using deep residual learning network", "text": "In this paper, we propose a deep residual learning model for age and gender estimation. Our method detects faces in input images, and then the age and gender of each face are estimated. The estimation method consists of three deep neural networks where we adopt residual learning methods. We train the model with IMDB-WIKI database [4]. However, since the database has only a small number of face images under the age of 20, we augment the set by collecting the images on the Internet. Experimental results show that the proposed model with residual learning yields improved performance."}
{"_id": "af18db78d0ca1180eac884b01cbfea7ed1bef80e", "title": "Misinformation in Online Social Networks: Detect Them All with a Limited Budget", "text": "Online social networks have become an effective and important social platform for communication, opinions exchange, and information sharing. However, they also make it possible for rapid and wide misinformation diffusion, which may lead to pernicious influences on individuals or society. Hence, it is extremely important and necessary to detect the misinformation propagation by placing monitors.\n In this article, we first define a general misinformation-detection problem for the case where the knowledge about misinformation sources is lacking, and show its equivalence to the influence-maximization problem in the reverse graph. Furthermore, considering node vulnerability, we aim to detect the misinformation reaching to a specific user. Therefore, we study a \u03c4-Monitor Placement problem for cases where partial knowledge of misinformation sources is available and prove its #P complexity. We formulate a corresponding integer program, tackle exponential constraints, and propose a Minimum Monitor Set Construction (MMSC) algorithm, in which the cut-set2 has been exploited in the estimation of reachability of node pairs. Moreover, we generalize the problem from a single target to multiple central nodes and propose another algorithm based on a Monte Carlo sampling technique. Extensive experiments on real-world networks show the effectiveness of proposed algorithms with respect to minimizing the number of monitors."}
{"_id": "4e9cd1457e109acedddfdd756298b741eac413cc", "title": "The pains and pleasures of parenting: when, why, and how is parenthood associated with more or less well-being?", "text": "The relationship between parenthood and well-being has become a hot topic among scholars, media, and general public alike. The research, however, has been mixed-some studies indicate that parents are happier than nonparents, whereas others suggest the reverse. We suggest that the question of whether parents are more or less happy than their childless peers is not the most meaningful one. To reconcile the conflicting literature and expand understanding of the emotional experience of parenthood, we present a model of parents' well-being that describes why and how parents experience more or less happiness than nonparents (i.e., mediators of the link between parenthood and well-being). We then apply this model to explain when parents are more likely to experience more or less happiness (i.e., moderators of parents' well-being, such as parent age or child temperament). Supporting our model, we review 3 primary methodological approaches: studies comparing parents and nonparents, studies examining changes in well-being across the transition to parenthood, and studies comparing parents' experiences while with their children to their other daily activities. Our review suggests that the relationship between parenthood and well-being is highly complex. We propose that parents are unhappy to the extent that they encounter relatively greater negative emotions, magnified financial problems, more sleep disturbance, and troubled marriages. By contrast, when parents experience greater meaning in life, satisfaction of their basic needs, greater positive emotions, and enhanced social roles, they are met with happiness and joy."}
{"_id": "e791d39b4789def2a4abfaa17089e81ec501570f", "title": "Control Strategy on Doubly Fed Induction Generator Using PSIM", "text": "This paper deals with doubly fed induction generator control strategy. This paper includes the results performed in PSIM-9 software such as Performance comparison of output voltage under different loading conditions and output power. Keywords\u2014 Doubly Fed Induction Generator (DFIG), variable speed wind turbine"}
{"_id": "dfd3a25be87d47b1a0aaf1f124438474d74d4e64", "title": "On Semantic Algebra: A Denotational Mathematics for Natural Language Comprehension and Cognitive Computing", "text": "Semantics is composed meaning of language expressions and perceptions at the levels of words, phrase, sentence, paragraph, and essay. The basic unit of formal semantics is a concept as a formal model of words in a natural language. Cognitive linguistics focuses on cognitive semantics and its interaction with underlying syntactic structures. A denotational mathematics known as semantic algebra is presented for rigorous semantics manipulations in cognitive linguistics. Semantic algebra deals with formal semantics of linguistics by a general mathematical model of semantics and a set of algebraic operators. On the basis of semantic algebra, language semantics may not only be deductively analyzed based on syntactic structures from the top down, but also be synthetically composed by the algebraic semantic operators from the bottom up at different levels of language units. Semantic algebra enables a wide range of applications in cognitive informatics, cognitive linguistics, computational linguistics, semantic computing, cognitive computing, machine learning, computing with words, as well as natural language analysis, synthesis, and comprehension."}
{"_id": "0e7748df88b4f54389ddc50d4f9cfffecb8cd90c", "title": "Cross-Lingual Propagation for Deep Sentiment Analysis", "text": "Across the globe, people are voicing their opinion in social media and various other online fora. Given such data, modern deep learning-based sentiment analysis methods excel at determining the sentiment polarity of what is being said about companies, products, etc. Unfortunately, such deep methods require significant training data, while for many languages, resources and training data are scarce. In this work, we present a cross-lingual propagation algorithm that yields sentiment embedding vectors for numerous languages. We then rely on a dual-channel convolutional neural architecture to incorporate them into the network. This allows us to achieve gains in deep sentiment analysis across a range of languages and domains."}
{"_id": "167772d4aace87b235556ffaf119fd5091529c08", "title": "The Ring of Gyges : Using Smart Contracts for Crime", "text": "Thanks to their anonymity (pseudonymity) and lack of trusted intermediaries, cryptocurrencies such as Bitcoin have created or stimulated growth in many businesses and communities. Unfortunately, some are criminal, e.g., money laundering, marketplaces for illicit goods, and ransomware. Next-generation cryptocurrencies such as Ethereum will include rich scripting languages in support of smart contracts, programs that autonomously intermediate transactions. We illuminate the extent to which these new cryptocurrencies, by enabling criminal activities to be conducted anonymously and with minimal trust assumptions, may fuel new criminal ecosystems. Specifically, we show how what we call criminal smart contracts (CSCs) can facilitate leakage of confidential information, theft of cryptographic keys, and various realworld crimes (murder, arson, terrorism). We show significantly that CSCs for leakage of secrets are efficiently realizable in existing scripting languages such as that in Ethereum. We show that CSCs for key theft can be achieved using cryptographic primitives, such as Succinct Non-interactive ARguments of Knowledge (SNARKs), that are already expressible in these languages and for which efficient supporting language extensions are anticipated. We demonstrate similarly that authenticated data feeds, another anticipated feature of smart contract systems, can facilitate CSCs for real-world crimes. Our results illuminate the scope of possible abuses in nextgeneration cryptocurrencies. They highlight the urgency of creating policy and technical safeguards and thereby realizing the great promise of smart contracts for beneficial goals."}
{"_id": "2efc0a99f13ef8875349ff5d47c278392c39e064", "title": "Opening the Black Box of Deep Neural Networks via Information", "text": "Despite their great success, there is still no comprehensive theoretical understanding of learning with Deep Neural Networks (DNNs) or their inner organization. Previous work [Tishby and Zaslavsky (2015)] proposed to analyze DNNs in the Information Plane; i.e., the plane of the Mutual Information values that each layer preserves on the input and output variables. They suggested that the goal of the network is to optimize the Information Bottleneck (IB) tradeoff between compression and prediction, successively, for each layer. In this work we follow up on this idea and demonstrate the effectiveness of the InformationPlane visualization of DNNs. Our main results are: (i) most of the training epochs in standard DL are spent on compression of the input to efficient representation and not on fitting the training labels. (ii) The representation compression phase begins when the training errors becomes small and the Stochastic Gradient Decent (SGD) epochs change from a fast drift to smaller training error into a stochastic relaxation, or random diffusion, constrained by the training error value. (iii) The converged layers lie on or very close to the Information Bottleneck (IB) theoretical bound, and the maps from the input to any hidden layer and from this hidden layer to the output satisfy the IB self-consistent equations. This generalization through noise mechanism is unique to Deep Neural Networks and absent in one layer networks. (iv) The training time is dramatically reduced when adding more hidden layers. Thus the main advantage of the hidden layers is computational. This can be explained by the reduced relaxation time, as this it scales super-linearly (exponentially for simple diffusion) with the information compression from the previous layer. (v) As we expect critical slowing down of the stochastic relaxation near phase transitions on the IB curve, we expect the hidden layers to converge to such critical points.1"}
{"_id": "84c1e1d8badc9a892cbb4372fa5b2a418dfa62e9", "title": "A comparative evaluation of procedural level generators in the Mario AI framework", "text": "Evaluation is an open problem in procedural content generation research. The field is now in a state where there is a glut of content generators, each serving different purposes and using a variety of techniques. It is difficult to understand, quantitatively or qualitatively, what makes one generator different from another in terms of its output. To remedy this, we have conducted a large-scale comparative evaluation of level generators for the Mario AI Benchmark, a research-friendly clone of the classic platform game Super Mario Bros. In all, we compare the output of seven different level generators from the literature, based on different algorithmic methods, plus the levels from the original Super Mario Bros game. To compare them, we have defined six expressivity metrics, of which two are novel contributions in this paper. These metrics are shown to provide interestingly different characterizations of the level generators. The results presented in this paper, and the accompanying source code, is meant to become a benchmark against which to test new level generators and expressivity metrics."}
{"_id": "6fc8193f8327cf621289a745bd3be98c56c625ed", "title": "Fuzzy clustering with artificial bee colony algorithm", "text": "In this work, performance of the Artificial Bee Colony Algorithm which is a recently proposed algorithm, has been tested on fuzzy clustering. We applied the Artificial Bee Colony (ABC) Algorithm fuzzy clustering to classify different data sets; Cancer, Diabetes and Heart from UCI database, a collection of classification benchmark problems. The results indicate that the performance of Artificial Bee Colony Optimization Algorithm is successful in fuzzy clustering."}
{"_id": "2aca2e2c5bb4f680ba0f4a601564f0d1a4b3dbc3", "title": "A PRAM and NAND flash hybrid architecture for high-performance embedded storage subsystems", "text": "NAND flash-based storage is widely used in embedded systems due to its numerous benefits: low cost, high density, small form factor and so on. However, NAND flash-based storage is still suffering from serious performance degradation for random or small size write access. This degradation mainly comes from the physical constraints of NAND flash: erase-before-program and different unit size of erase and program operations. To overcome these constraints, we propose to use PRAM (Phase-change RAM) which supports advanced features: fast byte access capability and no requirement for erase-before-program.\n In this paper, we focus on developing a high-performance NAND flash-based storage system by maximally exploiting the advanced feature of PRAM, in terms of performance and wearing out. To do this, we first propose a new hybrid storage architecture which consists of PRAM and NAND flash. Second, we devise two novel software schemes for the proposed hybrid storage architecture; FSMS (File System Metadata Separation) and hFTL (hybrid Flash Translation Layer). Finally, we demonstrate that our hybrid architecture increases the performance up to 290% and doubles the lifespan compared to the existing NAND flash only storage systems."}
{"_id": "5a2bc1810375cccc9b440d8a8756da26f66417d9", "title": "New developments in our understanding of acne pathogenesis and treatment.", "text": "Interest in sebaceous gland physiology and its diseases is rapidly increasing. We provide a summarized update of the current knowledge of the pathobiology of acne vulgaris and new treatment concepts that have emerged in the last 3 years (2005-2008). We have tried to answer questions arising from the exploration of sebaceous gland biology, hormonal factors, hyperkeratinization, role of bacteria, sebum, nutrition, cytokines and toll-like receptors (TLRs). Sebaceous glands play an important role as active participants in the innate immunity of the skin. They produce neuropeptides, excrete antimicrobial peptides and exhibit characteristics of stem cells. Androgens affect sebocytes and infundibular keratinocytes in a complex manner influencing cellular differentiation, proliferation, lipogenesis and comedogenesis. Retention hyperkeratosis in closed comedones and inflammatory papules is attributable to a disorder of terminal keratinocyte differentiation. Propionibacterium acnes, by acting on TLR-2, may stimulate the secretion of cytokines, such as interleukin (IL)-6 and IL-8 by follicular keratinocytes and IL-8 and -12 in macrophages, giving rise to inflammation. Certain P. acnes species may induce an immunological reaction by stimulating the production of sebocyte and keratinocyte antimicrobial peptides, which play an important role in the innate immunity of the follicle. Qualitative changes of sebum lipids induce alteration of keratinocyte differentiation and induce IL-1 secretion, contributing to the development of follicular hyperkeratosis. High glycemic load food and milk may induce increased tissue levels of 5alpha-dihydrotestosterone. These new aspects of acne pathogenesis lead to the considerations of possible customized therapeutic regimens. Current research is expected to lead to innovative treatments in the near future."}
{"_id": "4e38485cb0fdfcac278835bea64ba7d95a25dd2a", "title": "Short-term meditation training improves attention and self-regulation.", "text": "Recent studies suggest that months to years of intensive and systematic meditation training can improve attention. However, the lengthy training required has made it difficult to use random assignment of participants to conditions to confirm these findings. This article shows that a group randomly assigned to 5 days of meditation practice with the integrative body-mind training method shows significantly better attention and control of stress than a similarly chosen control group given relaxation training. The training method comes from traditional Chinese medicine and incorporates aspects of other meditation and mindfulness training. Compared with the control group, the experimental group of 40 undergraduate Chinese students given 5 days of 20-min integrative training showed greater improvement in conflict scores on the Attention Network Test, lower anxiety, depression, anger, and fatigue, and higher vigor on the Profile of Mood States scale, a significant decrease in stress-related cortisol, and an increase in immunoreactivity. These results provide a convenient method for studying the influence of meditation training by using experimental and control methods similar to those used to test drugs or other interventions."}
{"_id": "82b96ae4177f1deea71661a7fd19ac36e9130e30", "title": "The benefits of being present: mindfulness and its role in psychological well-being.", "text": "Mindfulness is an attribute of consciousness long believed to promote well-being. This research provides a theoretical and empirical examination of the role of mindfulness in psychological well-being. The development and psychometric properties of the dispositional Mindful Attention Awareness Scale (MAAS) are described. Correlational, quasi-experimental, and laboratory studies then show that the MAAS measures a unique quality of consciousness that is related to a variety of well-being constructs, that differentiates mindfulness practitioners from others, and that is associated with enhanced self-awareness. An experience-sampling study shows that both dispositional and state mindfulness predict self-regulated behavior and positive emotional states. Finally, a clinical intervention study with cancer patients demonstrates that increases in mindfulness over time relate to declines in mood disturbance and stress."}
{"_id": "cf7c5f390370fcdce945b0f48445594dae37e056", "title": "Does mindfulness training improve cognitive abilities? A systematic review of neuropsychological findings.", "text": "Mindfulness meditation practices (MMPs) are a subgroup of meditation practices which are receiving growing attention. The present paper reviews current evidence about the effects of MMPs on objective measures of cognitive functions. Five databases were searched. Twenty three studies providing measures of attention, memory, executive functions and further miscellaneous measures of cognition were included. Fifteen were controlled or randomized controlled studies and 8 were case-control studies. Overall, reviewed studies suggested that early phases of mindfulness training, which are more concerned with the development of focused attention, could be associated with significant improvements in selective and executive attention whereas the following phases, which are characterized by an open monitoring of internal and external stimuli, could be mainly associated with improved unfocused sustained attention abilities. Additionally, MMPs could enhance working memory capacity and some executive functions. However, many of the included studies show methodological limitations and negative results have been reported as well, plausibly reflecting differences in study design, study duration and patients' populations. Accordingly, even though findings here reviewed provided preliminary evidence suggesting that MMPs could enhance cognitive functions, available evidence should be considered with caution and further high quality studies investigating more standardized mindfulness meditation programs are needed."}
{"_id": "29975c4969e5a473317ca3012c1c7e45350699b2", "title": "Effectiveness of a meditation-based stress reduction program in the treatment of anxiety disorders.", "text": "OBJECTIVE\nThis study was designed to determine the effectiveness of a group stress reduction program based on mindfulness meditation for patients with anxiety disorders.\n\n\nMETHOD\nThe 22 study participants were screened with a structured clinical interview and found to meet the DSM-III-R criteria for generalized anxiety disorder or panic disorder with or without agoraphobia. Assessments, including self-ratings and therapists' ratings, were obtained weekly before and during the meditation-based stress reduction and relaxation program and monthly during the 3-month follow-up period.\n\n\nRESULTS\nRepeated measures analyses of variance documented significant reductions in anxiety and depression scores after treatment for 20 of the subjects--changes that were maintained at follow-up. The number of subjects experiencing panic symptoms was also substantially reduced. A comparison of the study subjects with a group of nonstudy participants in the program who met the initial screening criteria for entry into the study showed that both groups achieved similar reductions in anxiety scores on the SCL-90-R and on the Medical Symptom Checklist, suggesting generalizability of the study findings.\n\n\nCONCLUSIONS\nA group mindfulness meditation training program can effectively reduce symptoms of anxiety and panic and can help maintain these reductions in patients with generalized anxiety disorder, panic disorder, or panic disorder with agoraphobia."}
{"_id": "4c3b3fbc449a70d6396fe9ffefe4b20fbbc51af6", "title": "The clinical use of mindfulness meditation for the self-regulation of chronic pain", "text": "Ninety chronic pain patients were trained in mindfulness meditation in a 10-week Stress Reduction and Relaxation Program. Statistically significant reductions were observed in measures of present-moment pain, negative body image, inhibition of activity by pain, symptoms, mood disturbance, and psychological symptomatology, including anxiety and depression. Pain-related drug utilization decreased and activity levels and feelings of self-esteem increased. Improvement appeared to be independent of gender, source of referral, and type of pain. A comparison group of pain patients did not show significant improvement on these measures after traditional treatment protocols. At follow-up, the improvements observed during the meditation training were maintained up to 15 months post-meditation training for all measures except present-moment pain. The majority of subjects reported continued high compliance with the meditation practice as part of their daily lives. The relationship of mindfulness meditation to other psychological methods for chronic pain control is discussed."}
{"_id": "487203d0d87cc706ed90e40d3bc181e5779f1b87", "title": "Decision Tree Discovery", "text": "We describe the two most commonly used systems for induction of decision trees for classi cation: C4.5 and CART. We highlight the methods and di erent decisions made in each system with respect to splitting criteria, pruning, noise handling, and other di erentiating features. We describe how rules can be derived from decision trees and point to some di erence in the induction of regression trees. We conclude with some pointers to advanced techniques, including ensemble methods, oblique splits, grafting, and coping with large data."}
{"_id": "a83769576d85417f209e234dbedacaa958d8733e", "title": "Development of Fire alarm system using Raspberry Pi and Arduino Uno", "text": "The proposed Fire alarm system is a real-time monitoring system that detects the presence of smoke in the air due to fire and captures images via a camera installed inside a room when a fire occurs. The embedded systems used to develop this fire alarm system are Raspberry Pi and Arduino Uno. The key feature of the system is the ability to remotely send an alert when a fire is detected. When the presence of smoke is detected, the system will display an image of the room state in a webpage. The system will need the user confirmation to report the event to the Firefighter using Short Message Service (SMS). The advantage of using this system is it will reduce the possibility of false alert reported to the Firefighter. The camera will only capture an image, so this system will consume a little storage and power."}
{"_id": "ea9d6b1a54b06707ab332a5a1008f1503499da1c", "title": "On the Influence of Emotional Valence Shifts on the Spread of Information in Social Networks", "text": "In this paper, we present a study on 4.4 million Twitter messages related to 24 systematically chosen real-world events. For each of the 4.4 million tweets, we first extracted sentiment scores based on the eight basic emotions according to Plutchik's wheel of emotions. Subsequently, we investigated the effects of shifts in the emotional valence on the spread of information. We found that in general OSN users tend to conform to the emotional valence of the respective real-world event. However, we also found empirical evidence that prospectively negative real-world events exhibit a significant amount of shifted emotions in the corresponding tweets (i.e. positive messages). To explain this finding, we use the theory of social connection and emotional contagion. To the best of our knowledge, this is the first study that provides empirical evidence for the undoing hypothesis in online social networks (OSNs). The undoing hypothesis postulates that positive emotions serve as an antidote during negative events."}
{"_id": "afe027815f7783e52a35f1dd6f96b68ce354aede", "title": "Comparison between Auditory and Visual Simple Reaction Times", "text": "Objective: The purpose of this study was to find out whether the simple reaction time was faster for auditory or visual stimulus and the factors responsible for improving the performance of the athlete. Methodology: 14 subjects were assigned randomly into groups consisting of 2 members. Both the members from each group performed both the visual and auditory tests. The tests were taken from the DirectRT software program from a laptop. The DirectRT software consists of Testlabvisual and Testlabsounds to test the reaction times to visual and auditory stimuli. The 2 members from each group completed both the visual and auditory reaction times, the data was taken and the mean reaction time was calculated excluding the first and last values. Results: The results show that the mean visual reaction time is around 331 milliseconds as compared to the mean auditory reaction time of around 284 milliseconds. Conclusion: This shows that the auditory reaction time is faster than the visual reaction time. And also males have faster reaction times when compared to females for both auditory as well as visual stimuli."}
{"_id": "c7d100f8057ca5048c36ae28dd104b47557fac4c", "title": "Maximizing Air Gap and Efficiency of Magnetic Resonant Coupling for Wireless Power Transfer Using Equivalent Circuit and Neumann Formula", "text": "The progress in the field of wireless power transfer in the last few years is remarkable. With recent research, transferring power across large air gaps has been achieved. Both small and large electric equipment have been proposed, e.g., wireless power transfer for small equipment (mobile phones and laptops) and for large equipment (electric vehicles). Furthermore, replacing every cord with wireless power transfer is proposed. The coupled mode theory was proposed in 2006 and proven in 2007. Magnetic and electric resonant couplings allow power to traverse large air gaps with high efficiency. This technology is closely related to electromagnetic induction and has been applied to antennas and resonators used for filters in communication technology. We have studied these phenomena and technologies using equivalent circuits, which is a more familiar format for electrical engineers than the coupled mode theory. In this paper, we analyzed the relationship between maximum efficiency air gap using equivalent circuits and the Neumann formula and proposed equations for the conditions required to achieve maximum efficiency for a given air gap. The results of these equations match well with the results of electromagnetic field analysis and experiments."}
{"_id": "976c45236c1041789e32c087d62910b92ba70f32", "title": "Five-Level Inverter for Renewable Power Generation System", "text": "In this paper, a five-level inverter is developed and applied for injecting the real power of the renewable power into the grid to reduce the switching power loss, harmonic distortion, and electromagnetic interference caused by the switching operation of power electronic devices. Two dc capacitors, a dual-buck converter, a full-bridge inverter, and a filter configure the five-level inverter. The input of the dual-buck converter is two dc capacitor voltage sources. The dual-buck converter converts two dc capacitor voltage sources to a dc output voltage with three levels and balances these two dc capacitor voltages. The output voltage of the dual-buck converter supplies to the full-bridge inverter. The power electronic switches of the full-bridge inverter are switched in low frequency synchronous with the utility voltage to convert the output voltage of the dual-buck converter to a five-level ac voltage. The output current of the five-level inverter is controlled to generate a sinusoidal current in phase with the utility voltage to inject into the grid. A hardware prototype is developed to verify the performance of the developed renewable power generation system. The experimental results show that the developed renewable power generation system reaches the expected performance."}
{"_id": "8e4d6e29e5741bfb4b88c56c097a16b340a871e8", "title": "A Systematic Approach of Dataset Definition for a Supervised Machine Learning Using NFR Framework", "text": "Non-functional requirements describe important constraints upon the software development and should therefore be considered and specified as early as possible during the system analysis. Effective elicitation of requirements is arguably among the most important of the resulting recommended RE practices. Recent research has shown that artificial intelligence techniques such as Machine Learning and Text Mining perform the automatic extraction and classification of quality attributes from text documents with relevant results. This paper aims to define a systematic process of dataset generation through NFR Framework catalogues improving the NFR's classification process using Machine Learning techniques. A well-known dataset (Promise) was used to evaluate the precision of our approach reaching interesting results. Regarding to security and performance we obtained a precision and recall ranging between ~85% and ~98%. And we achievement a F1 above ~79% when classified the security, performance and usability together."}
{"_id": "eceebc80495defc1fa9491c9857a03e9f9ab27dd", "title": "Improving malware detection time by using RLE and N-gram", "text": "Malware is a widespread problem and despite the common use of anti-virus software, the diversity of malware is still increasing. A major challenge facing the anti-virus industry is how to effectively detect thousands of malware samples that are received every day. In this paper, a novel approach based Run Length Encoding (RLE) algorithm and n-gram are proposed to improve malware detect on dynamic analysis of based on API sequences."}
{"_id": "9dd552290e5876a4f125cb535da9e9c41db18bc3", "title": "Mapping gray matter development: Implications for typical development and vulnerability to psychopathology", "text": "Recent studies with brain magnetic resonance imaging (MRI) have scanned large numbers of children and adolescents repeatedly over time, as their brains develop, tracking volumetric changes in gray and white matter in remarkable detail. Focusing on gray matter changes specifically, here we explain how earlier studies using lobar volumes of specific anatomical regions showed how different lobes of the brain matured at different rates. With the advent of more sophisticated brain mapping methods, it became possible to chart the dynamic trajectory of cortical maturation using detailed 3D and 4D (dynamic) models, showing spreading waves of changes evolving through the cortex. This led to a variety of time-lapse films revealing characteristic deviations from normal development in schizophrenia, bipolar illness, and even in siblings at genetic risk for these disorders. We describe how these methods have helped clarify how cortical development relates to cognitive performance, functional recovery or decline in illness, and ongoing myelination processes. These time-lapse maps have also been used to study effects of genotype and medication on cortical maturation, presenting a powerful framework to study factors that influence the developing brain."}
{"_id": "051853e79d6ebe49601348536ca4b14c5279cc97", "title": "A Secure Communication Architecture for Distributed Microgrid Control", "text": "Microgrids are a key component in the evolution of the power grid. Microgrids are required to operate in both grid connected and standalone island mode using local sources of power. A major challenge in implementing microgrids is the communications and control to support transition from grid connected mode and operation in island mode. Here, we propose a secure communication architecture to support microgrid operation and control. A security model, including network, data, and attack models, is defined and a security protocol to address the real-time communication needs of microgrids is proposed. The implementation of the proposed security scheme is discussed and its performance evaluated using theoretical and co-simulation analysis, which shows it to be superior to existing protocols."}
{"_id": "a759c9cc7d2839e8b227acf6514ec0f7c50ce434", "title": "Pre-trained convolutional neural networks as feature extractors toward improved malaria parasite detection in thin blood smear images", "text": "Malaria is a blood disease caused by the Plasmodium parasites transmitted through the bite of female Anopheles mosquito. Microscopists commonly examine thick and thin blood smears to diagnose disease and compute parasitemia. However, their accuracy depends on smear quality and expertise in classifying and counting parasitized and uninfected cells. Such an examination could be arduous for large-scale diagnoses resulting in poor quality. State-of-the-art image-analysis based computer-aided diagnosis (CADx) methods using machine learning (ML) techniques, applied to microscopic images of the smears using hand-engineered features demand expertise in analyzing morphological, textural, and positional variations of the region of interest (ROI). In contrast, Convolutional Neural Networks (CNN), a class of deep learning (DL) models promise highly scalable and superior results with end-to-end feature extraction and classification. Automated malaria screening using DL techniques could, therefore, serve as an effective diagnostic aid. In this study, we evaluate the performance of pre-trained CNN based DL models as feature extractors toward classifying parasitized and uninfected cells to aid in improved disease screening. We experimentally determine the optimal model layers for feature extraction from the underlying data. Statistical validation of the results demonstrates the use of pre-trained CNNs as a promising tool for feature extraction for this purpose."}
{"_id": "14922d2e83f11c990be4824b2f4b245c92ec7e43", "title": "Real-time 6D stereo Visual Odometry with non-overlapping fields of view", "text": "In this paper, we present a framework for 6D absolute scale motion and structure estimation of a multi-camera system in challenging indoor environments. It operates in real-time and employs information from two cameras with non-overlapping fields of view. Monocular Visual Odometry supplying up-to-scale 6D motion information is carried out in each of the cameras, and the metric scale is recovered via a linear solution by imposing the known static transformation between both sensors. The redundancy in the motion estimates is finally exploited by a statistical fusion to an optimal 6D metric result. The proposed technique is robust to outliers and able to continuously deliver a reasonable measurement of the scale factor. The quality of the framework is demonstrated by a concise evaluation on indoor datasets, including a comparison to accurate ground truth data provided by an external motion tracking system."}
{"_id": "e084eda2b0b44b92e10cbf99224aa1fc9399a72b", "title": "Ontology based classification system for online job offers", "text": "The significance of employment in a setup of society is quite evident. Publishing jobs online opened the research opportunities to study different methods for the automation of job offers retrieval \u2014 from the internet. We need a classification mechanism, from machine learning or some other discipline, to automate job offers retrieval. In this paper, we devised an ontology-based classifier for job offers classification. More than 5000 job offers were collected from different job offers websites. The ontology-based classification works in three stages, concepts are extracted from job offers text description as feature vectors, the minimum threshold for the classification is calculated, and a classification model is developed. Rather using machine learning algorithms, we used an ontology for classification. We evaluated this classifier according to machine learning evaluation model, training, and test dataset. Our classifier showed more than 0.9 accuracy, precision, and recall for both training and test dataset. Our finding paves the way to automate the job offers classification and retrieval."}
{"_id": "7794aba6aec9b1297bf31c76d6a58daf19c27f14", "title": "RFID in robot-assisted indoor navigation for the visually impaired", "text": "We describe how radio frequency identification (RFID) can be used in robot-assisted indoor navigation for the visually impaired. We present a robotic guide for the visually impaired that was deployed and tested both with and without visually unpaired participants in two indoor environments. We describe how we modified the standard potential fields algorithms to achieve navigation at moderate walking speeds and to avoid oscillation in narrow spaces. The experiments illustrate that passive RFID tags deployed in the environment can act as reliable stimuli that trigger local navigation behaviors to achieve global navigation objectives."}
{"_id": "f307c3ec901bf7de7c7ea903019828eea2f6f814", "title": "Service Oriented Architecture based connectivity of automotive ECUs", "text": "Modern automotive industry uses embedded systems and corresponding Electronic Control Units (ECU) for a wide variety of purposes from entertainment to vehicle control. The use of these embedded systems became crucial to an extent that the performance of these embedded systems can severely affect the performance of the automobile. So the industry tries to develop new and efficient systems periodically, to improve the performance of the whole system. As the number of ECUs in an automobile increased, new technologies were developed to interconnect these ECUs. As a result of much efficient ECUs, the data produced by these ECUs and the data to be transmitted between these ECUs increased to a very large scale. The industry tried to select new connectivity solutions which provide higher bandwidth. The automotive embedded system industry thus ended up in using the well accepted and highly standardized Ethernet connectivity. This paved the way to develop new technologies for automotive embedded system. Service Oriented Architecture (SOA) has proved to be an efficient and flexible software design paradigm for high level systems. The main factors behind its success were the amount of scalability and reusability provided by this platform. The usage of SOA over the automotive Ethernet can provide the benefit of SOA to the automotive embedded world. Such an approach needs a deep feasibility study to identify whether such integration is possible, and to discover architectures and protocols which support it."}
{"_id": "a5779c9365f359517f8ee9455a5379d5adbe1593", "title": "Application of COReS to Compute Research Papers Similarity", "text": "Over the decades, the immense growth has been reported in research publications due to continuous developments in science. To date, various approaches have been proposed that find similarity between research papers by applying different similarity measures collectively or individually based on the content of research papers. However, the contemporary schemes are not conceptualized enough to find related research papers in a coherent manner. This paper is aimed at finding related research papers by proposing a comprehensive and conceptualized model via building ontology named COReS: Content-based Ontology for Research Paper Similarity. The ontology is built by finding the explicit relationships (i.e., super-type sub-type, disjointedness, and overlapping) between state-of-the-art similarity techniques. This paper presents the applications of the COReS model in the form of a case study followed by an experiment. The case study uses InText citation-based and vector space-based similarity measures and relationships between these measures as defined in COReS. The experiment focuses on the computation of comprehensive similarity and other content-based similarity measures and rankings of research papers according to these measures. The obtained Spearman correlation coefficient results between ranks of research papers for different similarity measures and user study-based measure, justify the application of COReS for the computation of document similarity. The COReS is in the process of evaluation for ontological errors. In the future, COReS will be enriched to provide more knowledge to improve the process of comprehensive research paper similarity computation."}
{"_id": "9dac63dc99ff7fcf4d0de3b00f59cd6822bb4563", "title": "Formalization, Annotation and Analysis of Diverse Drug and Probe Screening Assay Datasets Using the BioAssay Ontology (BAO)", "text": "Huge amounts of high-throughput screening (HTS) data for probe and drug development projects are being generated in the pharmaceutical industry and more recently in the public sector. The resulting experimental datasets are increasingly being disseminated via publically accessible repositories. However, existing repositories lack sufficient metadata to describe the experiments and are often difficult to navigate by non-experts. The lack of standardized descriptions and semantics of biological assays and screening results hinder targeted data retrieval, integration, aggregation, and analyses across different HTS datasets, for example to infer mechanisms of action of small molecule perturbagens. To address these limitations, we created the BioAssay Ontology (BAO). BAO has been developed with a focus on data integration and analysis enabling the classification of assays and screening results by concepts that relate to format, assay design, technology, target, and endpoint. Previously, we reported on the higher-level design of BAO and on the semantic querying capabilities offered by the ontology-indexed triple store of HTS data. Here, we report on our detailed design, annotation pipeline, substantially enlarged annotation knowledgebase, and analysis results. We used BAO to annotate assays from the largest public HTS data repository, PubChem, and demonstrate its utility to categorize and analyze diverse HTS results from numerous experiments. BAO is publically available from the NCBO BioPortal at http://bioportal.bioontology.org/ontologies/1533. BAO provides controlled terminology and uniform scope to report probe and drug discovery screening assays and results. BAO leverages description logic to formalize the domain knowledge and facilitate the semantic integration with diverse other resources. As a consequence, BAO offers the potential to infer new knowledge from a corpus of assay results, for example molecular mechanisms of action of perturbagens."}
{"_id": "095196a72491703cf809aff20ed977e8e92b1daf", "title": "A Fuzzy logic controller for stabilization and control of Double Inverted Pendulum ( DIP ) using different Membership functions ( MF ' s )", "text": "Double Inverted Pendulum (DIP) on cart is a highly non-linear system widely used as a testing bed for verification of newly designed control laws and controllers. In this study control of DIP is obtained using fuzzy logic controllers having different Membership functions(MF's) i.e. triangular, trapezoidal and gbell. The effects of shape of MF's on various controlling parameters i.e. stabilization time, maximum degree of overshoot and steady state error is also illustrated. Simulation results are shown with the help of graphs and tables which proves the validity of proposed method. KeywordsDouble Inverted Pendulum, Fuzzy Logic, Membership function, Matlab-Simulink. 1.0 Introduction The DIP is a multi-variable, unstable system which is difficult to stabilize in upright position [1]. It represents a kinematic joint for robotic knee and arm. It can also be considered as a model of human and of other animal postural control [2]. In this paper fuzzy logic reasoning is used for stabilization and control of DIP. Fuzzy logic controller is able to stabilize the non-linear systems effectively and increases their flexibility to a great extent [3]. This study shows a comparison between three different Membership functions (MF's) namely triangular, trapezoidal and gbell in terms of stabilization time, maximum degree of overshoot and steady state error. The affect of a particular MF's on performance of Inverted Pendulum system is illustrated in this study. There are several studies which have been done recently for the stabilization and control of DIP. Jianqiang Yi, Naoyoshi Yubazaki and Kaoru Hizota [4] proposed a new fuzzy controller AMO Advanced Modeling and Optimization. ISSN: 1841-4311 547 Ashwani Kharola, Dr Pravin Patil, Punit Gupta having six inputs and one output based Single Input Rules Modules (SIRM). Ding Chengjun, Duan Ping, Zhang Ming-lu and Zhang Yan-fang [5] used genetic algorithm to optimize the weighting coefficient and MF's parameters of fuzzy controller. Sandeep Kumar Yadav, Sachin Sharma and Mr. Narinder Singh [6] performed the analysis of DIP using Linear Quadratic Regulator (LQR) controller. I.Zamani and M.H.Zarif [3] designed a fuzzy controller based on Lyapunov theorem. Atabak Nejadfard, M.J. Yazdanpanah and Iraj Hassanzadeh [7] proposed a friction compensation of DIP using neuro-fuzzy model. Ehsan kiankhah, Mohammad Teshnelab and Mahdi Aliyari Shoorehdeli [8] designed a neuro fuzzy controller using feedback-error learning for control of DIP etc. 2.0 Double Inverted Pendulum The classical DIP system consist of a pair of rigid pendulum rods i.e. Bottom pendulum and top pendulum which are interconnected with a joint. The bottom pendulum is attached to a cart which moves in the horizontal direction [9]. Sensors are used to measure angle of each pendulum from vertical i.e. \u04e81 and \u04e82 and displacement of cart x. The final objective is to control the system such that \u04e81, \u04e82 and x should be equal to zero [1]. A view of DIP on cart is shown in figure 1.0 [6]. Figure 1.0 A view of DIP on cart 2.1 Mathematical model of DIP The mathematical equations for control of DIP are derived separately for each subsystem i.e. Cart, First Pendulum and Second Pendulum. The equations are as follows: A. 
Cart \u1e8d = 1/M[F-N1-b\u1e8b] where \u1e8d is acceleration of Cart, M is mass of Cart, F is the applied Force to Cart, N1 is the interaction force between Cart and First Pendulum and b is friction coefficient. 548 A Fuzzy logic controller for stabilization and control of Double Inverted Pendulum (DIP) using different Membership functions (MF's)"}
{"_id": "7a6896f17e815c8d4cd3d1300b1150499f8c6e91", "title": "LAYER RECURRENT NEURAL NETWORKS", "text": "In this paper, we propose a Layer-RNN (L-RNN) module that is able to learn contextual information adaptively using within-layer recurrence. Our contributions are three-fold: (i) we propose a hybrid neural network architecture that interleaves traditional convolutional layers with L-RNN module for learning longrange dependencies at multiple levels; (ii) we show that a L-RNN module can be seamlessly inserted into any convolutional layer of a pre-trained CNN, and the entire network then fine-tuned, leading to a boost in performance; (iii) we report experiments on the CIFAR-10 classification task, showing that a network with interleaved convolutional layers and L-RNN modules, achieves comparable results (5.39% top1 error) using only 15 layers and fewer parameters to ResNet-164 (5.46%); and on the PASCAL VOC2012 semantic segmentation task, we show that the performance of a pre-trained FCN network can be boosted by 5% (mean IOU) by simply inserting Layer-RNNs."}
{"_id": "c9fe4a812a5049cff28d0f69dc2800002e758d1e", "title": "Effective Multiple Feature Hashing for Large-Scale Near-Duplicate Video Retrieval", "text": "Near-duplicate video retrieval (NDVR) has recently attracted much research attention due to the exponential growth of online videos. It has many applications, such as copyright protection, automatic video tagging and online video monitoring. Many existing approaches use only a single feature to represent a video for NDVR. However, a single feature is often insufficient to characterize the video content. Moreover, while the accuracy is the main concern in previous literatures, the scalability of NDVR algorithms for large scale video datasets has been rarely addressed. In this paper, we present a novel approach-Multiple Feature Hashing (MFH) to tackle both the accuracy and the scalability issues of NDVR. MFH preserves the local structural information of each individual feature and also globally considers the local structures for all the features to learn a group of hash functions to map the video keyframes into the Hamming space and generate a series of binary codes to represent the video dataset. We evaluate our approach on a public video dataset and a large scale video dataset consisting of 132,647 videos collected from YouTube by ourselves. This dataset has been released (http://itee.uq.edu.au/shenht/UQ_VIDEO/). The experimental results show that the proposed method outperforms the state-of-the-art techniques in both accuracy and efficiency."}
{"_id": "5a4b6da5e14a61b6ed47679d6999b3d18a8f1cbb", "title": "Block-Based Neural Networks for Personalized ECG Signal Classification", "text": "This paper presents evolvable block-based neural networks (BbNNs) for personalized ECG heartbeat pattern classification. A BbNN consists of a 2-D array of modular component NNs with flexible structures and internal configurations that can be implemented using reconfigurable digital hardware such as field-programmable gate arrays (FPGAs). Signal flow between the blocks determines the internal configuration of a block as well as the overall structure of the BbNN. Network structure and the weights are optimized using local gradient-based search and evolutionary operators with the rates changing adaptively according to their effectiveness in the previous evolution period. Such adaptive operator rate update scheme ensures higher fitness on average compared to predetermined fixed operator rates. The Hermite transform coefficients and the time interval between two neighboring R-peaks of ECG signals are used as inputs to the BbNN. A BbNN optimized with the proposed evolutionary algorithm (EA) makes a personalized heartbeat pattern classifier that copes with changing operating environments caused by individual difference and time-varying characteristics of ECG signals. Simulation results using the Massachusetts Institute of Technology/Beth Israel Hospital (MIT-BIH) arrhythmia database demonstrate high average detection accuracies of ventricular ectopic beats (98.1%) and supraventricular ectopic beats (96.6%) patterns for heartbeat monitoring, being a significant improvement over previously reported electrocardiogram (ECG) classification results."}
{"_id": "40f68c65efc5282abff77d6f7fced06ec06b84e2", "title": "Unified Collaborative and Content-Based Web Service Recommendation", "text": "The last decade has witnessed a tremendous growth of web services as a major technology for sharing data, computing resources, and programs on the web. With increasing adoption and presence of web services, designing novel approaches for efficient and effective web service recommendation has become of paramount importance. Most existing web service discovery and recommendation approaches focus on either perishing UDDI registries, or keyword-dominant web service search engines, which possess many limitations such as poor recommendation performance and heavy dependence on correct and complex queries from users. It would be desirable for a system to recommend web services that align with users' interests without requiring the users to explicitly specify queries. Recent research efforts on web service recommendation center on two prominent approaches: collaborative filtering and content-based recommendation. Unfortunately, both approaches have some drawbacks, which restrict their applicability in web service recommendation. In this paper, we propose a novel approach that unifies collaborative filtering and content-based recommendations. In particular, our approach considers simultaneously both rating data (e.g., QoS) and semantic content data (e.g., functionalities) of web services using a probabilistic generative model. In our model, unobservable user preferences are represented by introducing a set of latent variables, which can be statistically estimated. To verify the proposed approach, we conduct experiments using 3,693 real-world web services. The experimental results show that our approach outperforms the state-of-the-art methods on recommendation performance."}
{"_id": "714544b7cf35a3b8bdc12fb1967624a38f257a42", "title": "Deep content-based music recommendation", "text": "Automatic music recommendation has become an increasingly relevant problem in recent years, since a lot of music is now sold and consumed digitally. Most recommender systems rely on collaborative filtering. However, this approach suffers from the cold start problem: it fails when no usage data is available, so it is not effective for recommending new and unpopular songs. In this paper, we propose to use a latent factor model for recommendation, and predict the latent factors from music audio when they cannot be obtained from usage data. We compare a traditional approach using a bag-of-words representation of the audio signals with deep convolutional neural networks, and evaluate the predictions quantitatively and qualitatively on the Million Song Dataset. We show that using predicted latent factors produces sensible recommendations, despite the fact that there is a large semantic gap between the characteristics of a song that affect user preference and the corresponding audio signal. We also show that recent advances in deep learning translate very well to the music recommendation setting, with deep convolutional neural networks significantly outperforming the traditional approach."}
{"_id": "7d20fe87fb0e483a633bdc22ff9f91a8dea4ed8b", "title": "The long tail of recommender systems and how to leverage it", "text": "The paper studies the Long Tail problem of recommender systems when many items in the Long Tail have only few ratings, thus making it hard to use them in recommender systems. The approach presented in the paper splits the whole itemset into the head and the tail parts and clusters only the tail items. Then recommendations for the tail items are based on the ratings in these clusters and for the head items on the ratings of individual items. If such partition and clustering are done properly, we show that this reduces the recommendation error rates for the tail items, while maintaining reasonable computational performance."}
{"_id": "855d0f722d75cc56a66a00ede18ace96bafee6bd", "title": "Theano: new features and speed improvements", "text": "Theano is a linear algebra compiler that optimizes a user\u2019s symbolically-specified mathematical computations to produce efficient low-level implementations. In this paper, we present new features and efficiency improvements to Theano, and benchmarks demonstrating Theano\u2019s performance relative to Torch7, a recently introduced machine learning library, and to RNNLM, a C++ library targeted at recurrent neural networks."}
{"_id": "2a61ef3947bcf8229f0798c153a67b977ec18dbe", "title": "Challenges with applying vulnerability prediction models", "text": "Vulnerability prediction models (VPM) are believed to hold promise for providing software engineers guidance on where to prioritize precious verification resources to search for vulnerabilities. However, while Microsoft product teams have adopted defect prediction models, they have not adopted vulnerability prediction models (VPMs). The goal of this research is to measure whether vulnerability prediction models built using standard recommendations perform well enough to provide actionable results for engineering resource allocation. We define 'actionable' in terms of the inspection effort required to evaluate model results. We replicated a VPM for two releases of the Windows Operating System, varying model granularity and statistical learners. We reproduced binary-level prediction precision (~0.75) and recall (~0.2). However, binaries often exceed 1 million lines of code, too large to practically inspect, and engineers expressed preference for source file level predictions. Our source file level models yield precision below 0.5 and recall below 0.2. We suggest that VPMs must be refined to achieve actionable performance, possibly through security-specific metrics."}
{"_id": "2a43d3905699927ace64e880fe9ba8a730e14be1", "title": "Distributed Event-Triggered Scheme for Economic Dispatch in Smart Grids", "text": "To reduce information exchange requirements in smart grids, an event-triggered communication-based distributed optimization is proposed for economic dispatch. In this work, the \u03b8-logarithmic barrier-based method is employed to reformulate the economic dispatch problem, and the consensus-based approach is considered for developing fully distributed technology-enabled algorithms. Specifically, a novel distributed algorithm utilizes the minimum connected dominating set (CDS), which efficiently allocates the task of balancing supply and demand for the entire power network at the beginning of economic dispatch. Further, an event-triggered communication-based method for the incremental cost of each generator is able to reach a consensus, coinciding with the global optimality of the objective function. In addition, a fast gradient-based distributed optimization method is also designed to accelerate the convergence rate of the event-triggered distributed optimization. Simulations based on the IEEE 57-bus test system demonstrate the effectiveness and good performance of proposed algorithms."}
{"_id": "f4f8eb84b22e4d9e9ef9cf7c28b849f0f2fa7abb", "title": "Mixing Coins of Different Quality: A Game-Theoretic Approach", "text": "Cryptocoins based on public distributed ledgers can differ in their quality due to different subjective values users assign to coins depending on the unique transaction history of each coin. We apply game theory to study how qualitative differentiation between coins will affect the behavior of users interested in improving their anonymity through mixing services. We present two stylized models of mixing with perfect and imperfect information and analyze them for three distinct quality propagation policies: poison, haircut, and seniority. In the game of perfect information, mixing coins of high quality remains feasible under certain conditions, while imperfect information eventually leads to a mixing market where only coins of the lowest quality are mixed."}
{"_id": "d5398517ddac70548dc8586ad9e0bdadc9b0eb33", "title": "Door recognition and deep learning algorithm for visual based robot navigation", "text": "In this paper, a new method based on deep learning for robotics autonomous navigation is presented. Different from the most traditional methods based on fixed models, a convolutional neural network (CNN) modelling technique in Deep learning is selected to extract the feature inspired by the working pattern of the biological brain. This neural network model has muti-layer features where the ambient scenes can be recognized and useful information such as the location of door can be identified. The extracted information can be used for robot navigation, so does the robot can approach the target accurately. In the field experiments, detecting doors and predicting the door poses such tasks are designed in the indoor environment to verify the proposed method. The experimental results demonstrate that the doors can be identified with good performance and the deep learning model is suitable for robot navigation."}
{"_id": "2ea2e9667ad9cc02ceb1d31886732babb7d5ab8d", "title": "Anomaly detection of malicious users' behaviors for web applications based on web logs", "text": "With more and more online services developed into web applications, security problems based on web applications become more serious now. Most intrusion detection systems are based on every single request to find the cyber-attack instead of users' behaviors, and these systems can only protect web application from known vulnerability rather than some zero-day attacks. In order to detect newly developed attacks, we analyze web logs from web servers and define users' behaviors to divide them into normal and malicious ones. The result shows that by using the feature of web resources to define users' behaviors, a higher accuracy rate and lower false alarm rate of intrusion detection can be obtained."}
{"_id": "c3b7ea473299b5ebdaa3f7b590e1d46d7ebe659c", "title": "Unsupervised Context-Sensitive Spelling Correction of English and Dutch Clinical Free-Text with Word and Character N-Gram Embeddings", "text": "We present an unsupervised context-sensitive spelling correction method for clinical free-text that uses word and character n-gram embeddings. Our method generates misspelling replacement candidates and ranks them according to their semantic fit, by calculating a weighted cosine similarity between the vectorized representation of a candidate and the misspelling context. To tune the parameters of this model, we generate self-induced spelling error corpora. We perform our experiments for two languages. For English, we greatly outperform off-the-shelf spelling correction tools on a manually annotated MIMIC-III test set, and counter the frequency bias of a noisy channel model, showing that neural embeddings can be successfully exploited to improve upon the state-of-the-art. For Dutch, we also outperform an off-the-shelf spelling correction tool on manually annotated clinical records from the Antwerp University Hospital, but can offer no empirical evidence that our method counters the frequency bias of a noisy channel model in this case as well. However, both our context-sensitive model and our implementation of the noisy channel model obtain high scores on the test set, establishing a state-of-the-art for Dutch clinical spelling correction with the noisy channel model."}
{"_id": "d041c7df26d2d212a6e37204f8615119aff56eed", "title": "Visualisation and Instructional Design", "text": "Human cognitive architecture includes a working memory of limited capacity and duration with partially separate visual and auditory channels, and an effectively infinite long-term memory holding many schemas that can vary in their degree of automation. These cognitive structures have evolved to handle information that varies in the extent to which elements can be processed successively in working memory or, because they interact, must be processed simultaneously imposing a heavy load on working memory. Cognitive load theory uses this combination of information and cognitive structures to guide instructional design. Several designs that rely heavily on visual working memory and its characteristics are discussed in this paper. Knowledge of human cognitive architecture is essential for instructional design, and visual cognition is a central aspect of human cognition. Not surprisingly, there are several instructional design effects that rely heavily on the manner in which humans visually process information. This paper discusses some relevant information structures, cognitive structures and instructional designs that rely on our knowledge of visual information processing. A. Information structures While considerable work by many researchers over several decades has been devoted to the organization of human cognitive architecture, far less effort has gone into investigating the information structures that must have driven the evolution of that architecture. Some work has been carried out by Sweller (1994) and Halford, Wilson and Phillips (1998). Sweller (1994) suggested that all information can be placed on a continuum according to the extent to which the elements that constitute the information interact. At one extreme, there is no interaction between the elements that need to be learned. They are independent. Element interactivity is low, or indeed, non-existent, and that means each element can be considered and learned serially without reference to any other element. Because elements at the low element interactivity end of the continuum do not interact with each other, there is no loss of understanding despite each element being learned individually and in isolation. Understanding is defined as the ability to process all elements that necessarily interact, simultaneously in working memory. Learning such material"}
{"_id": "e2fcd67cc559e40444f005d54ad85afa6feaa00f", "title": "Learning Explanatory Rules from Noisy Data", "text": "Artificial Neural Networks are powerful function approximators capable of modelling solutions to a wide variety of problems, both supervised and unsupervised. As their size and expressivity increases, so too does the variance of the model, yielding a nearly ubiquitous overfitting problem. Although mitigated by a variety of model regularisation methods, the common cure is to seek large amounts of training data\u2014which is not necessarily easily obtained\u2014that sufficiently approximates the data distribution of the domain we wish to test on. In contrast, logic programming methods such as Inductive Logic Programming offer an extremely data-efficient process by which models can be trained to reason on symbolic domains. However, these methods are unable to deal with the variety of domains neural networks can be applied to: they are not robust to noise in or mislabelling of inputs, and perhaps more importantly, cannot be applied to non-symbolic domains where the data is ambiguous, such as operating on raw pixels. In this paper, we propose a Differentiable Inductive Logic framework, which can not only solve tasks which traditional ILP systems are suited for, but shows a robustness to noise and error in the training data which ILP cannot cope with. Furthermore, as it is trained by backpropagation against a likelihood objective, it can be hybridised by connecting it with neural networks over ambiguous data in order to be applied to domains which ILP cannot address, while providing data efficiency and generalisation beyond what neural networks on their own can achieve."}
{"_id": "94180bd9c9392e7de99305532e36aee4700cc565", "title": "Falling for Fake News: Investigating the Consumption of News via Social Media", "text": "In the so called 'post-truth' era, characterized by a loss of public trust in various institutions, and the rise of 'fake news' disseminated via the internet and social media, individuals may face uncertainty about the veracity of information available, whether it be satire or malicious hoax. We investigate attitudes to news delivered by social media, and subsequent verification strategies applied, or not applied, by individuals. A survey reveals that two thirds of respondents regularly consumed news via Facebook, and that one third had at some point come across fake news that they initially believed to be true. An analysis task involving news presented via Facebook reveals a diverse range of judgement forming strategies, with participants relying on personal judgements as to plausibility and scepticism around sources and journalistic style. This reflects a shift away from traditional methods of accessing the news, and highlights the difficulties in combating the spread of fake news."}
{"_id": "86adcad240e1b7192118d7a7e6c67880c44b7426", "title": "Mixed Type Audio Classification with Support Vector Machine", "text": "Content-based classification of audio data is an important problem for various applications such as overall analysis of audio-visual streams, boundary detection of video story segment, extraction of speech segments from video, and content-based video retrieval. Though the classification of audio into single type such as music, speech, environmental sound and silence is well studied, classification of mixed type audio data, such as clips having speech with music as background, is still considered a difficult problem. In this paper, we present a mixed type audio classification system based on support vector machine (SVM). In order to capture characteristics of different types of audio data, besides selecting audio features, we also design four different representation formats for each feature. Our SVM-based audio classifier can classify audio data into five types: music, speech, environment sound, speech mixed with music, and music mixed with environment sound. The experimental results show that our system outperforms other classification systems using k nearest neighbor (k-NN), neural network (NN), and Naive Bayes (NB)"}
{"_id": "e4a6673ef658221fef360e6051631d65e9547b62", "title": "Enter the Matrix: A Virtual World Approach to Safely Interruptable Autonomous Systems", "text": "Robots and autonomous systems that operate around humans will likely always rely on kill switches that stop their execution and allow them to be remote-controlled for the safety of humans or to prevent damage to the system. It is theoretically possible for an autonomous system with sufficient sensor and effector capability and using reinforcement learning to learn that the kill switch deprives it of long-term reward and learn to act to disable the switch or otherwise prevent a human operator from using the switch. This is referred to as the big red button problem. We present a technique which prevents a reinforcement learning agent from learning to disable the big red button. Our technique interrupts the agent or robot by placing it in a virtual simulation where it continues to receive reward. We illustrate our technique in a simple grid world environment."}
{"_id": "d43f1538f2e5bd963658fbf2f73525aa9e5f6043", "title": "Efficient Mean-shift Clustering Using Gaussian KD-Tree", "text": "Mean shift is a popular approach for data clustering, however, the high computational complexity of the mean shift procedure limits its practical applications in high dimensional and large data set clustering. In this paper, we propose an efficient method that allows mean shift clustering performed on large data set containing tens of millions of points at interactive rate. The key in our method is a new scheme for approximating mean shift procedure using a greatly reduced feature space. This reduced feature space is adaptive clustering of the original data set, and is generated by applying adaptive KD-tree in a high-dimensional affinity space. The proposed method significantly reduces the computational cost while obtaining almost the same clustering results as the standard mean shift procedure. We present several kinds of data clustering applications to illustrate the efficiency of the proposed method, including image and video segmentation, static geometry model and time-varying sequences"}
{"_id": "7aa176ec4048bf0a89be5766b4b5e98f7b7f4955", "title": "Impact of Customer Relationship Management on Customer Loyalty , Customer Retention and Customer Profitability for Hotelier Sector", "text": "Since the entrance of strategies oriented to marketing relational in Hotelier Sector, the traditional way of travel agents and other representatives arranging hospitality services for hotel and travel reservations has changed. The strategies oriented to customer relationship management are a relatively new area of specialty loyalty marketing in the hotel and hotelier sector, with advancements being made constantly. The use of this type of strategy can allow hoteliers or companies to tailor special guest programs, services and promotions based on hotel guest preferences. The hotel can use the data collected in a program to identify the needs of particular customers across hotel chains to be able to use marketing that can be targeted at specific groups of people. It also gives hoteliers the opportunity to evaluate frequent guest programs, personalize their services and perform trend analysis. A program based in marketing relational is typically run by hotels and companies to collect guest information and transaction data for use and examining to allow hoteliers to see target groups that should be marketed too. Based on these transactions hotels are able to create and manage guest loyalty programs and reward schemes. This research approach is to appraise the impact of customer relationship management on customer profitability as mediated by customer loyalty and customer retention within the hotelier sector of Mexico, and specifically for those hoteliers classified like of three stars. A sample of 100 hotels three stars was interviewed as respondents in this study. The objective of the study was to find the impact relationship between effective customer relationship implementation, customer loyalty, and customer retention and customer profitability. The findings of the study add value to hotels three stars in Mexico, and provide some invaluable statistical results essential for hotel managers and owners to successfully enhance customer loyalty, customer retention and customer profitability. By applying a questionnaire that count with seven blocks different one each other, the results of our study identified and provide path analysis of the relevant systems; and enumerated among the relevant system, those critical inter-component relationships within this highly competitive industry. This study\u2019s findings add to the body of knowledge and enable the managers of this sector to implement customer relationship management in the best shape possible, to match it with Mexican market-needs thereby creating more loyal and repeat clientele."}
{"_id": "c5fa1558d7c1f27e35e18f2c9e29c011b062d4a5", "title": "Detection of camera artifacts from camera images", "text": "Cameras are frequently used in state-of-the-art systems in order to get detailed information of the environment. However, when cameras are used outdoors they easily get dirty or scratches on the lens which leads to image artifacts that can deteriorate a system's performance. Little work has yet been done on how to detect and cope with such artifacts. Most of the previous work has concentrated on detecting a specific artifact like rain drops on the windshield of a car. In this paper, we show that on moving systems most artifacts can be detected by analyzing the frames in a stream of images from a camera for static image parts. Based on the observation that most artifacts are temporally stable in their position in the image we compute pixel-wise correlations between images. Since the system is moving the static artifacts will lead to a high correlation value while the pixels showing only scene elements will have a low correlation value. For testing this novel algorithm, we recorded some outdoor data with the three different artifacts: raindrops, dirt and scratches. The results of our new algorithm on this data show, that it reliably detects all three artifacts. Moreover, we show that our algorithm can be implemented efficiently by means of box-filters which allows it to be used as a self-checking routine running in background on low-power systems such as autonomous field robots or advanced driver assistant systems on a vehicle."}
{"_id": "e64e988aa74d2bd96961ee4107b6ec66acc21d6b", "title": "FLAP: An End-to-End Event Log Analysis Platform for System Management", "text": "Many systems, such as distributed operating systems, complex networks, and high throughput web-based applications, are continuously generating large volume of event logs. These logs contain useful information to help system administrators to understand the system running status and to pinpoint the system failures. Generally, due to the scale and complexity of modern systems, the generated logs are beyond the analytic power of human beings. Therefore, it is imperative to develop a comprehensive log analysis system to support effective system management. Although a number of log mining techniques have been proposed to address specific log analysis use cases, few research and industrial efforts have been paid on providing integrated systems with an end-to-end solution to facilitate the log analysis routines.\n In this paper, we design and implement an integrated system, called FIU Log Analysis Platform (a.k.a. FLAP), that aims to facilitate the data analytics for system event logs. FLAP provides an end-to-end solution that utilizes advanced data mining techniques to assist log analysts to conveniently, timely, and accurately conduct event log knowledge discovery, system status investigation, and system failure diagnosis. Specifically, in FLAP, state-of-the-art template learning techniques are used to extract useful information from unstructured raw logs; advanced data transformation techniques are proposed and leveraged for event transformation and storage; effective event pattern mining, event summarization, event querying, and failure prediction techniques are designed and integrated for log analytics; and user-friendly interfaces are utilized to present the informative analysis results intuitively and vividly. Since 2016, FLAP has been used by Huawei Technologies Co. Ltd for internal event log analysis, and has provided effective support in its system operation and workflow optimization."}
{"_id": "846e0a065bbe6ed6221e10f967cb896c416b8125", "title": "Communication in reactive multiagent robotic systems", "text": "Multiple cooperating robots are able to complete many tasks more quickly and reliably than one robot alone. Communication between the robots can multiply their capabilities and e ectiveness, but to what extent? In this research, the importance of communication in robotic societies is investigated through experiments on both simulated and real robots. Performance was measured for three di erent types of communication for three di erent tasks. The levels of communication are progressively more complex and potentially more expensive to implement. For some tasks, communication can signi cantly improve performance, but for others inter-agent communication is apparently unnecessary. In cases where communication helps, the lowest level of communication is almost as e ective as the more complex type. The bulk of these results are derived from thousands of simulations run with randomly generated initial conditions. The simulation results help determine appropriate parameters for the reactive control system which was ported for tests on Denning mobile robots."}
{"_id": "62467e21f0be9461d0128b828d1632aea2586dc4", "title": "A Compact Highly Reconfigurable CMOS MMIC Directional Coupler", "text": "This paper presents a tunable CMOS directional coupler that utilizes lumped-element L-C sections. The lumped-element approach used to build the directional coupler makes it possible to integrate the coupler onto a single monolithic microwave integrated circuit (MMIC), as it occupies a small area compared to printed designs. The directional coupler uses varactors and tunable active inductors (TAIs) to synthesize the series and shunt reactances, respectively, which allows extensive electronic control over the coupling coefficient, while insuring a low return loss and a very high isolation. Furthermore, using varactors and TAIs allows the directional coupler to be reconfigured for operation over a wide range of frequencies. Moreover, the symmetric configuration of the coupler allows it to switch from forward to backward operation by simply exchanging the bias voltages applied across the series varactors. The MMIC coupler was fabricated in a standard 0.13-mum CMOS process and operates from a 1.5-V supply. The circuit occupies 730 mum times 600 mum, and is capable of achieving tunable coupling coefficients from 1.4 to 7.1 dB, while maintaining an isolation higher than 41 dB. The MMIC coupler is also capable of operating at any center frequency over the 2.1-3.1-GHz frequency range with higher than 40-dB isolation. The coupler achieves a -4.1-dBm 1-dB compression point while operating from a 1.5-V supply."}
{"_id": "d213d06053303d7f6d1550863c6f4046a52be545", "title": "Reconstructing the feedback polynomial of a linear scrambler with the method of hypothesis testing", "text": "Blind identification of the feedback polynomial in a linear scrambler is an important component in the wireless communication systems which have the function of automatic standard recognition and parameter estimation. The following research contains three parts. In the first part, promotion model is deduced based on Cluzeau\u2019s model. Comparative analysis shows the advantage of computational complexity by use of the promotion model. In the second part, the improved method of estimation for the normalised standard deviation is proposed, which plays an important role for the computation time and precision. In the final part, simulation results explain the better practical application of my algorithm compared with Cluzeau\u2019s algorithm and XiaoBei-Liu\u2019s algorithm."}
{"_id": "2ca425f88e70241daff3a6d717313a470be0fabc", "title": "An analysis of developer metrics for fault prediction", "text": "Background: Software product metrics have been widely used as independent variables for constructing a fault prediction model. However, fault injection depends not only on characteristics of the products themselves, but also on characteristics of developers involved in the project. Aims: The goal of this paper is to study the effects of developer features on software reliability. Method: This paper proposes developer metrics such as the number of code churns made by each developer, the number of commitments made by each developer and the number of developers for each module. By using the eclipse project dataset, we experimentally analyzed the relationship between the number of faults and developer metrics. Second, the effective of developer metrics for performance improvements of fault prediction models were evaluated. Results: The result revealed that the modules touched by more developer contained more faults. Compared with conventional fault prediction models, developer metrics improved the prediction performance. Conclusions: We conclude that developer metrics are good predictor of faults and we must consider the human factors for improving the software reliability."}
{"_id": "bf1a91bed0b42f9c99c580b0969599efa795518d", "title": "Happiness Economics , Eudaimonia and Positive Psychology : From Happiness Economics to Flourishing Economics", "text": "A remarkable current development, happiness economics focuses on the relevance of people\u2019s happiness in economic analyses. As this theory has been criticised for relying on an incomplete notion of happiness, this paper intends to support it with richer philosophical and psychological foundations. Specifically, it suggests that happiness economics should be based on Aristotle\u2019s philosophical eudaimonia concept and on a modified version of \u2018positive psychology\u2019 that stresses human beings\u2019 relational nature. First, this analysis describes happiness economics and its shortcomings. Next, it introduces Aristotle\u2019s eudaimonia and takes a look at positive psychology with this lens, elaborating on the need to develop a new approach that goes beyond the economics of happiness: the economics of flourishing. Finally, the paper specifies some possible socio-economic objectives of a eudaimonic economics of happiness."}
{"_id": "f97b592092377a6e3afeb2f55e2aacd8794cb8b2", "title": "Compact dual-band microstrip bandpass filter without external feeds", "text": "A compact dual-band microstrip bandpass filter is proposed and designed to operate at 2.4 and 5.2GHz without needing any external impedance-matching block. The modified half-wavelength stepped-impedance resonator with sinuous configuration is constructed to simultaneously excite the dual resonances at these two specified frequencies with miniaturized overall size. The parallel-coupled microstrip line is properly characterized to minimize the return losses within both dual passbands. The optimized results exhibit the good dual-band filtering performances with return losses higher than 20dB as well being confirmed by experimentation with a fabricated filter circuit."}
{"_id": "135ac65825f8bb23e1acdb45707ae9f57dc4953c", "title": "3D Object Proposals for Accurate Object Class Detection", "text": "In the Supplementary Material we first provide a description of the architecture of our object detection network. We then describe some details about our road estimation method. We also introduce the class-agnostic variant of our approach. We provide comprehensive results on object detection and orientation estimation, and recall statistics for both 2D and 3D box proposals. Finally, a visualization of success cases as well as failure modes are presented, followed by a detailed error analysis of our detector."}
{"_id": "248362e23fbb3303efa5004117f28e92180f2211", "title": "Geodesic weighted Bayesian model for salient object detection", "text": "In recent years, a variety of salient object detection methods under Bayesian framework have been proposed and many achieved state of the art. However, those ignore spatial relationships and thus background regions similar to the objects are also highlighted. In this paper, we propose a novel geodesic weighted Bayesian model to address this issue. We consider spatial relationships by attaching more importance to regions which are more likely to be parts of a salient object, thus suppressing background regions. First, we learn a combined similarity via multiple features to measure similarity of adjacent regions. Then, we apply the combined similarity as edge weight to construct an undirected weighted graph and compute geodesic distance. Last, we utilize the geodesic distance to weight the observation likelihood to infer a more precise saliency map. Experiments on several benchmark datasets demonstrate the effectiveness of our model."}
{"_id": "66a08ff2ea7447093624632e7069e3da16961d30", "title": "An Incremental Framework for Video-Based Traffic Sign Detection, Tracking, and Recognition", "text": "Video-based traffic sign detection, tracking, and recognition is one of the important components for the intelligent transport systems. Extensive research has shown that pretty good performance can be obtained on public data sets by various state-of-the-art approaches, especially the deep learning methods. However, deep learning methods require extensive computing resources. In addition, these approaches mostly concentrate on single image detection and recognition task, which is not applicable in real-world applications. Different from previous research, we introduce a unified incremental computational framework for traffic sign detection, tracking, and recognition task using the mono-camera mounted on a moving vehicle under non-stationary environments. The main contributions of this paper are threefold: 1) to enhance detection performance by utilizing the contextual information, this paper innovatively utilizes the spatial distribution prior of the traffic signs; 2) to improve the tracking performance and localization accuracy under non-stationary environments, a new efficient incremental framework containing off-line detector, online detector, and motion model predictor together is designed for traffic sign detection and tracking simultaneously; and 3) to get a more stable classification output, a scale-based intra-frame fusion method is proposed. We evaluate our method on two public data sets and the performance has shown that the proposed system can obtain results comparable with the deep learning method with less computing resource in a near-real-time manner."}
{"_id": "c8621c6634407aaae2a0cb3fc2d073a583dbeef5", "title": "Real time visual traffic lights recognition based on Spot Light Detection and adaptive traffic lights templates", "text": "This paper introduces a new real-time traffic light recognition system for on-vehicle camera applications. This approach has been tested with good results in urban scenes. Thanks to the use of our generic \u201cAdaptive Templates\u201d it would be possible to recognize different kinds of traffic lights from various countries."}
{"_id": "238e2e584192e7ad40be997ba2252323ae09673e", "title": "Cascaded Attention based Unsupervised Information Distillation for Compressive Summarization", "text": "When people recall and digest what they have read for writing summaries, the important content is more likely to attract their attention. Inspired by this observation, we propose a cascaded attention based unsupervised model to estimate the salience information from the text for compressive multi-document summarization. The attention weights are learned automatically by an unsupervised data reconstruction framework which can capture the sentence salience. By adding sparsity constraints on the number of output vectors, we can generate condensed information which can be treated as word salience. Fine-grained and coarse-grained sentence compression strategies are incorporated to produce compressive summaries. Experiments on some benchmark data sets show that our framework achieves better results than the state-of-the-art methods."}
{"_id": "e6beec2db218e1e8b53033025df9ce982a877719", "title": "Facial Performance Capture with Deep Neural Networks", "text": "We present a deep learning technique for facial performance capture, i.e., the transfer of video footage into a motion sequence of a 3D mesh representing an actor\u2019s face. Specifically, we build on a conventional capture pipeline based on computer vision and multiview video, and use its results to train a deep neural network to produce similar output from a monocular video sequence. Once trained, our network produces high-quality results for unseen inputs with greatly reduced effort compared to the conventional system. In practice, we have found that approximately 10 minutes worth of high-quality data is sufficient for training a network that can then automatically process as much footage from video to 3D as needed. This yields major savings in the development of modern narrativedriven video games involving digital doubles of actors and potentially hours of animated dialogue per character."}
{"_id": "32eb8d25995c40eda3f6b9cd15ad69c122d75693", "title": "1 Sex , Syntax , and Semantics", "text": "Many languages have a grammatical gender system whereby all nouns are assigned a gender (most commonly feminine, masculine, or neuter). Two studies examined whether (1) the assignment of genders to nouns is truly arbitrary (as has been claimed), and (2) whether the grammatical genders assigned to nouns have semantic consequences. In the first study, English speakers\u2019 intuitions about the genders of animals (but not artifacts) were found to correlate with the grammatical genders assigned to the names of these objects in Spanish and German. These findings suggest that the assignment of genders to nouns is not entirely arbitrary but may to some extent reflect the perceived masculine or feminine properties of the nouns\u2019 referents. Results of the second study suggested that people\u2019s ideas about the genders of objects are strongly influenced by the grammatical genders assigned to these objects in their native language. Spanish and German speakers\u2019 memory for object--name pairs (e.g., apple--Patricia) was better for pairs where the gender of the proper name was congruent with the grammatical gender of the object name (in their native language), than when the two genders were incongruent. This was true even though both groups performed the task in English. These results suggest that grammatical gender may not be as arbitrary or as purely grammatical as was previously"}
{"_id": "604b799fbaf17dc47928134ee7c54bd9622188d4", "title": "Ayahuasca, dimethyltryptamine, and psychosis: a systematic review of human studies.", "text": "Ayahuasca is a hallucinogen brew traditionally used for ritual and therapeutic purposes in Northwestern Amazon. It is rich in the tryptamine hallucinogens dimethyltryptamine (DMT), which acts as a serotonin 5-HT2A agonist. This mechanism of action is similar to other compounds such as lysergic acid diethylamide (LSD) and psilocybin. The controlled use of LSD and psilocybin in experimental settings is associated with a low incidence of psychotic episodes, and population studies corroborate these findings. Both the controlled use of DMT in experimental settings and the use of ayahuasca in experimental and ritual settings are not usually associated with psychotic episodes, but little is known regarding ayahuasca or DMT use outside these controlled contexts. Thus, we performed a systematic review of the published case reports describing psychotic episodes associated with ayahuasca and DMT intake. We found three case series and two case reports describing psychotic episodes associated with ayahuasca intake, and three case reports describing psychotic episodes associated with DMT. Several reports describe subjects with a personal and possibly a family history of psychosis (including schizophrenia, schizophreniform disorders, psychotic mania, psychotic depression), nonpsychotic mania, or concomitant use of other drugs. However, some cases also described psychotic episodes in subjects without these previous characteristics. Overall, the incidence of such episodes appears to be rare in both the ritual and the recreational/noncontrolled settings. Performance of a psychiatric screening before administration of these drugs, and other hallucinogens, in controlled settings seems to significantly reduce the possibility of adverse reactions with psychotic symptomatology. Individuals with a personal or family history of any psychotic illness or nonpsychotic mania should avoid hallucinogen intake."}
{"_id": "4b5535aa96e220c567c12597fa67932c4e881652", "title": "Identifying Potential CSCW Applications by Means of Activity Theory Concepts: A Case Example", "text": "The paper presents some novel concepts and models derived from Activity Theory for to identify a potential CSCW application. It is suggested that the six elements of the structure of the activity concept might be useful for differentiating between areas of support, and that three levels of support are needed in order to cope with both routine and emergent features of cooperative work situations. Thus a 3x6 support type classification is formed and its usefulness studied by means of a real-world example. A work situation is analyzed, problems identified and possible areas of support defined. A temporary solution is produced and, by evaluating it, possible directions for the development of a \u201creal\u201d new CSCW application and the usefulness of the classification am discussed."}
{"_id": "e8fa2e242369dcf50ab5cd1745b29bfc51aadf2a", "title": "Image Captioning with Object Detection and Localization", "text": "Automatically generating a natural language description of an image is a task close to the heart of image understanding. In this paper, we present a multi-model neural network method closely related to the human visual system that automatically learns to describe the content of images. Our model consists of two sub-models: an object detection and localization model, which extract the information of objects and their spatial relationship in images respectively; Besides, a deep recurrent neural network (RNN) based on long short-term memory (LSTM) units with attention mechanism for sentences generation. Each word of the description will be automatically aligned to different objects of the input image when it is generated. This is similar to the attention mechanism of the human visual system. Experimental results on the COCO dataset showcase the merit of the proposed method, which outperform previous benchmark models."}
{"_id": "4dc79f4540cfaee3a4593120c38b0b43e494d41a", "title": "Building Occlusion Detection From Ghost Images", "text": "This paper proposes a novel occlusion detection method for urban true orthophoto generation. In this new method, occlusion detection is performed using a ghost image; this method is therefore considerably different from the traditional Z-buffer method, in which occlusion detection is performed during the generation of a true orthophoto (to avoid ghost image occurrence). In the proposed method, a model is first established that describes the relationship between each ghost image and the boundary of the corresponding building occlusion, and then an algorithm is applied to identify the occluded areas in the ghost images using the building displacements. This theory has not previously been applied in true orthophoto generation. The experimental results demonstrate that the method proposed in this paper is capable of effectively avoiding pseudo-occlusion detection, with a success rate of 99.2%, and offers improved occlusion detection accuracy compared with the traditional Z-buffer detection method. The advantage of this method is that it avoids the shortcoming of performing occlusion detection and true orthophoto generation simultaneously, which results in false visibility and false occlusions; instead, the proposed method detects occlusions from ghost images and therefore provides simple and effective true orthophoto generation."}
{"_id": "2c636db0558838d0c24f7d4114582df6d5ea879f", "title": "Evaluation of supply chain structures through modularization and postponement", "text": "This paper introduces a conceptual framework for evaluating di\u0080erent supply chain structures in the context of modularization and postponement. In our analysis, modularization is linked to inbound logistics as the combination of di\u0080erent components (or modules) allow for the assembly of the \u00aenal product. Postponement, however, corresponds with the outbound logistics since it is through the distribution function that customers' speci\u00aec demand is satis\u00aeed. Given this perspective, we introduce a taxonomy and develop a corresponding framework for the characterization of four supply chain structures, de\u00aened according to the combined levels of modularization and postponement: Rigid, Postponed, Modularized and Flexible. We also illustrate the applicability of the resulting framework through quantifying the total cost di\u0080erential for utilizing a particular supply chain structure. By quantifying the total cost for employing a particular supply chain structure we can numerically illustrate structural results that allow for an objective comparison among them. As such, our \u00aendings substantiate the empirical evidence that vertical integration along the supply chain is not always desirable. A point of emphasis in this paper is the notion that \u00aerms may be better o\u0080 making combined modularization (outsourcing) and postponement decisions as opposed to separate ones (as they currently do). This also suggests that modularization and postponement decisions need not be independent and that their joint consideration extends operational advantages worthy of contemplation. \u00d3 2000 Elsevier Science B.V. All rights reserved."}
{"_id": "d4a8e93f004c86267eead89edecbd332518dbf21", "title": "Database Description with SDM: A Semantic Database Model", "text": "SDM is a high-level semantics-based database description and structuring formalism (database model) for databases. This database model is designed to capture more of the meaning of an application environment than is possible with contemporary database models. An SDM specification describes a database in terms of the kinds of entities that exist in the application environment, the classifications and groupings of those entities, and the structural interconnections among them. SDM provides a collection of high-level modeling primitives to capture the semantics of an application environment. By accommodating derived information in a database structural specification, SDM allows the same information to be viewed in several ways; this makes it possible to directly accommodate the variety of needs and processing requirements typically present in database applications. The design of the present SDM is based on our experience in using a preliminary version of it.\nSDM is designed to enhance the effectiveness and usability of database systems. An SDM database description can serve as a formal specification and documentation tool for a database; it can provide a basis for supporting a variety of powerful user interface facilities, it can serve as a conceptual database model in the database design process; and, it can be used as the database model for a new kind of database management system."}
{"_id": "187ecf4982bb0510f3fc4ec731c7c7ea2dcda145", "title": "The Human Genetic History of East Asia: Weaving a Complex Tapestry", "text": "East Asia encompasses a wide variety of environments, peoples, cultures and languages. Although this review focuses on East Asia, no geographic region can be considered in isolation in terms of human population history, and migrations to and from East Asia have had a major impact. Here, we review the following topics: the initial colonization of East Asia, the direction of migrations between southeast Asia and northern Asia, the genetic relationships of East Asian hunter-gatherers and the genetic impact of various social practices on East Asian populations. By necessity we focus on insights derived from mitochondrial DNA and/or Y-chromosome data; ongoing and future studies of genome-wide SNP or multi-locus re-sequencing data, combined with the use of simulation, model-based methods to infer demographic parameters, will undoubtedly provide additional insights into the population history of East Asia."}
{"_id": "8c3d0838a26c4fe0bf9ce2d6e133adbcad2c3c27", "title": "Winter 2003 COMPETENCE , CULPABILITY , AND PUNISHMENT : IMPLICATIONS OF A TKINS FOR EXECUTING AND SENTENCING ADOLESCENTS", "text": "The Supreme Court has explored the issues of culpability, proportionality, and deserved punishment most fully in the context of capital punishment. In death penalty decisions addressing developmental impairments and culpability, the Court has considered the cases of defendants with mental retardation and older adolescents, and has created an anomalous inconsistency by reaching opposite conclusions about the deserved punishment for each group of defendants. Recently, in Atkins v. Virginia, the Court relied on both empirical and normative justifications to categorically prohibit states from executing defendants with mental retardation. Atkins reasoned that mentally retarded offenders lacked the reasoning, judgment, and impulse control necessary to equate their culpability with that of other death-eligible criminal defendants. This Article contends that the same psychological and developmental * Centennial Professor of Law, University of Minnesota, B.A., University of Pennsylvania, 1966; J.D., University of Minnesota Law School, 1969; Ph.D. (Sociology), Harvard University, 1973. 1 would like to thank Steven Drizin, Richard Frase, Wayne Logan, Monte Norgaard, and Victor Streib for critical and helpful comments on an earlier draft of this article. The questions and comments of faculty and student participants at the University of Houston Law School Symposium, January, 2004, helped to sharpen the analysis. I received outstanding research assistance from Brittany Stephens, Class of 2005. 1 Feld: Competence, Culpability, and Punishment: Implications of Atkins f Published by Scholarly Commons at Hofstra Law, 2003"}
{"_id": "bcaf03ac1a46de3220edfaccaa598f62b05d6e15", "title": "WOMEN \u2019 S VIOLENCE TO MEN IN INTIMATE RELATIONSHIPS Working on a Puzzle", "text": "Different notions among researchers about the nature of intimate partner violence have long been the subjects of popular and academic debate. Research findings are contradictory and point in two directions, with some revealing that women are as likely as men to perpetrate violence against an intimate partner (symmetry) and others showing that it is overwhelmingly men who perpetrate violence against women partners (asymmetry). The puzzle about who perpetrates intimate partner violence not only concerns researchers but also policy makers and community advocates who, in differing ways, have a stake in the answer to this question, since it shapes the focus of public concern, legislation, public policy and interventions for victims and offenders. The question of who are the most usual victims and perpetrators rests, to a large extent, on \u2018what counts\u2019 as violence. It is here that we begin to try to unravel the puzzle, by focusing on concept formation, definitions, forms of measurement, context, consequences and approaches to claim-making, in order better to understand how researchers have arrived at such apparently contradictory findings and claims. The question also turns on having more detailed knowledge about the nature, extent and consequences of women\u2019s violence, in order to consider the veracity of these contradictory findings. To date, there has been very little in-depth research about women\u2019s violence to male partners and it is difficult, if not impossible, to consider this debate without such knowledge. We present quantitative and qualitative findings from 190 interviews with 95 couples in which men and women reported separately upon their own violence and upon that of their partner. Men\u2019s and women\u2019s violence are compared. The findings suggest that intimate partner violence is primarily an asymmetrical problem of men\u2019s violence to women, and women\u2019s violence does not equate to men\u2019s in terms of frequency, severity, consequences and the victim\u2019s sense of safety and well-being. But why bother about the apparent contradictions in findings of research? For those making and implementing policies and expending public and private resources, the apparent contradiction about the very nature of this problem has real consequences for what might be done for those who are its victims and those who"}
{"_id": "d003c167e9e3c8ee9f6ba6f96e298a5646e39b41", "title": "Deep joint demosaicking and denoising", "text": "Demosaicking and denoising are the key first stages of the digital imaging pipeline but they are also a severely ill-posed problem that infers three color values per pixel from a single noisy measurement. Earlier methods rely on hand-crafted filters or priors and still exhibit disturbing visual artifacts in hard cases such as moir\u00e9 or thin edges. We introduce a new data-driven approach for these challenges: we train a deep neural network on a large corpus of images instead of using hand-tuned filters. While deep learning has shown great success, its naive application using existing training datasets does not give satisfactory results for our problem because these datasets lack hard cases. To create a better training set, we present metrics to identify difficult patches and techniques for mining community photographs for such patches. Our experiments show that this network and training procedure outperform state-of-the-art both on noisy and noise-free data. Furthermore, our algorithm is an order of magnitude faster than the previous best performing techniques."}
{"_id": "1f38c11fe8511c77fb7d383126214c9e7dc28e4a", "title": "Fingerprinting Information in JavaScript Implementations", "text": "To date, many attempts have been made to fingerprint users on the web. These fingerprints allow browsing sessions to be linked together and possibly even tied to a user\u2019s identity. They can be used constructively by sites to supplement traditional means of user authentication such as passwords; and they can be used destructively to counter attempts to stay anonymous online. In this paper, we identify two new avenues for browser fingerprinting. The new fingerprints arise from the browser\u2019s JavaScript execution characteristics, making them difficult to simulate or mitigate in practice. The first uses the innate performance signature of each browser\u2019s JavaScript engine, allowing the detection of browser version, operating system and microarchitecture, even when traditional forms of system identification (such as the user-agent header) are modified or hidden. The second subverts the whitelist mechanism of the popular NoScript Firefox extension, which selectively enables web pages\u2019 scripting privileges to increase privacy by allowing a site to determine if particular domains exist in a user\u2019s NoScript whitelist. We have experimentally verified the effectiveness of our system fingerprinting technique using a 1,015-person study on Amazon\u2019s Mechanical Turk platform."}
{"_id": "e7b3e527da72febf2e141fdff59e8ce8cc69f13d", "title": "Modeling trend progression through an extension of the Polya Urn Process", "text": "Knowing how and when trends are formed is a frequently visited research goal. In our work, we focus on the progression of trends through (social) networks. We use a random graph (RG) model to mimic the progression of a trend through the network. The context of the trend is not included in our model. We show that every state of the RG model maps to a state of the Polya process. We find that the limit of the component size distribution of the RG model shows power-law behaviour. These results are also supported by simulations."}
{"_id": "cced93571a42beb3853c2d0b3217b3f57b614bb0", "title": "A Benchmark for Geometric Facial Beauty Study", "text": ""}
{"_id": "4a53b2939e2e1b906f62d6c617b50645ada725e6", "title": "WebQual: An Exploration of Web-Site Quality", "text": "The issue of web-site quality is tackled from the perspective of the \u2018voice of the customer\u2019. Quality function deployment (QFD) is adopted as a framework for identifying web-site qualities demanded by users, which are gathered through a quality workshop. From the workshop an instrument for assessing web-site quality is developed (WebQual) and tested in the domain of UK business schools. The results of the WebQual survey are presented and analyzed, leading to the generation of a WebQual Index of web-site quality. Work under way to extend and refine the WebQual instrument includes electronic commerce evaluation, where web-site service quality is proposed as a key issue."}
{"_id": "9d68bcc77f953c3ae24047b8c83b7172646845d8", "title": "Distributed Q-Learning for Interference Control in OFDMA-Based Femtocell Networks", "text": "This paper proposes a self-organized power allocation technique to solve the interference problem caused by a femtocell network operating in the same channel as an orthogonal frequency division multiple access cellular network. We model the femto network as a multi-agent system where the different femto base stations are the agents in charge of managing the radio resources to be allocated to their femtousers. We propose a form of real-time multi-agent reinforcement learning, known as decentralized Q-learning, to manage the interference generated to macro-users. By directly interacting with the surrounding environment in a distributed fashion, the multi-agent system is able to learn an optimal policy to solve the interference problem. Simulation results show that the introduction of the femto network increases the system capacity without decreasing the capacity of the macro network."}
{"_id": "ad57c05be3d20485d2d9db4e7f5322b1656ac999", "title": "Guarding Against the Erosion of Competitive Advantage: A Knowledge Leakage Mitigation Model", "text": "A critical objective of knowledge-intensive organizations is to prevent erosion of their competitive knowledge base through leakage. Our review of the literature highlights the need for a more refined conceptualization of perceived leakage risk. We propose a Knowledge Leakage Mitigation (KLM) model to explain the incongruity between perceived high-risk of leakage and lack of protective actions. We argue that an organization\u2019s perceived risk of leakage increases if competitors can benefit from leakage incidents. Further, perceived leakage risk decreases if the organization is shielded from impact due to their diversity of knowledge assets and their ability to reconfigure knowledge resources to refresh their competitive knowledge base. We describe our approach to the design of a large-scale survey instrument that has been tested and refined in two stakeholder communities: 1) knowledge managers responsible for organizational strategy, and 2) Information security management consultants."}
{"_id": "e3026f54778ce5cab106e237695a760aa09284ec", "title": "Design and Analysis of Swapped Port Coupler and Its Application in a Miniaturized Butler Matrix", "text": "This paper presents a novel size reduction technique for designing a Butler matrix using a new type of coupler. This coupler is half the length of a conventional coupler and has a swapped port characteristic wherein the locations of the isolation and coupled ports are switched. These features make the new coupler, called the \"swapped port coupler,\" effective for reducing the size of the Butler matrix. The behavior of the swapped port coupler was mathematically analyzed, and its design equation was derived. Furthermore, the proposed coupler was employed to design a miniaturized Butler matrix. The area of this Butler matrix was 70 \u00d7 70 mm2, and its operating frequency was 1 GHz. When the area was normalized to the frequency, the area of the proposed Butler matrix was 10% of a conventional Butler matrix."}
{"_id": "2e4c06dd00c4c09ad5ac6be883cc66c19d88ea79", "title": "Learning to Make Predictions on Graphs with Autoencoders", "text": "We examine two fundamental tasks associated with graph representation learning: link prediction and semi-supervised node classification. We present a novel autoencoder architecture capable of learning a joint representation of both local graph structure and available node features for the multi-task learning of link prediction and node classification. Our autoencoder architecture is efficiently trained end-to-end in a single learning stage to simultaneously perform link prediction and node classification, whereas previous related methods require multiple training steps that are difficult to optimize. We provide a comprehensive empirical evaluation of our models on nine benchmark graph-structured datasets and demonstrate significant improvement over related methods for graph representation learning. Reference code and data are available at https://github.com/vuptran/graph-representation-learning."}
{"_id": "8288c8dc52039c7430fb985decc72bfe1296a828", "title": "Culture, corruption, suicide, happiness and global social media use: a cross-cultural perspective", "text": "This study was conducted to answer this simple question: \u2018Can cultural values explain global social media use?\u2019 Along with cultural dimensions introduced by past studies we also added several demographic, socio-economic and personality variables into this study that generated quite interesting findings. We found that there are low levels of suicide, more happiness and more corruption in societies that use social media heavily. We also observed that GDP per capita and median age are negatively related with social media use. Self-esteem stood out as important variable related to social media use intensity along with emotional expressiveness, openness and conscientiousness. Contrary to the common view, nation-level social capital achievement was negatively related with social media use and there was absolutely no relationship between high-context and low-context communication characteristics and local social media use. Some other findings also indicated that conservative and collectivistic countries use social media more often than do individualistic and developed countries. Schwartz\u2019s cultural dimensions and the results of the GLOBE study accounted for a considerable amount of variation in country-level social media use where Hofstede and Trompenaars\u2019 cultural dimensions were insignificant. Since most of the cultural values failed to explain the intensity of social media use, we also developed a cross-cultural online communication framework called cross-cultural self and others\u2019 worth."}
{"_id": "4e292e60ff70e72fa66fe666ca0d956ea55467d0", "title": "Running head : SOCIAL NETWORKING SITE USE Psychological Predictors of Young Adults ' Use of Social Networking Sites", "text": "Psychological predictors of young adults': use of social networking sites. Acknowledgements: The authors would like to thank Shari Walsh for assistance in the design of the study and Eric Livingston for assistance in data collection. Abstract Young people are increasingly using social networking sites (SNSs), like Myspace and Facebook, to engage with others. The use of SNSs can have both positive and negative effects on the individual; however, few research studies identify the types of people who frequent these Internet sites. This study sought to predict young adults' use of SNSs and addictive tendency towards the use of SNSs from their personality characteristics and levels of self-esteem. University students (N = 201), aged 17 to 24 years, reported their use of SNSs and addictive tendencies for SNSs use, and completed the NEO Five-Factor Personality Inventory 1 and the Coopersmith Self-Esteem Inventory. 2 Multiple regression analyses revealed that, as a group, the personality and self-esteem factors significantly predicted both level of SNS use and addictive tendency but did not explain a large amount of variance in either outcome measure. The findings indicated that extraverted and unconscientious individuals reported higher levels of both SNS use and addictive tendencies. Future research should attempt to identify which other psychosocial characteristics explain young people's level of use and propensity for addictive tendencies for these popular Internet sites. Social networking site use 3 The proliferation of social networking sites (SNSs) has created a phenomenon that engages millions of Internet users around the world, especially young people. Given the popularity of these sites and their importance in young people's lives to facilitate communication and relationships, it is important to understand the factors influencing SNS use, especially at higher levels, and to identify those who may be prone to developing addictive tendencies towards new communication technologies. 5 As with other communication technologies, 6,7 a useful starting point may be to examine the role of personality traits and self-esteem on young people's SNS use. Researchers have confirmed repeatedly that the five-factor model of personality adequately accounts for and explains personality by taking the approach that personality consists of five traits; openness to experience (pursuing and appreciating all types of experience), conscientiousness (control, regulation and direction of goals and impulses), extraversion (amount and intensity of interpersonal interactions), agreeableness (the type of interactions a person prefers to have with others), and neuroticism (degree of emotional adjustment and instability). 8 Self esteem \u2026"}
{"_id": "0a7fb47217e6d0e3b80159bc4f9e02a50ea1f391", "title": "Seeing Stars: Exploiting Class Relationships for Sentiment Categorization with Respect to Rating Scales", "text": "We address therating-inference problem, wherein rather than simply decide whether a review is \u201cthumbs up\u201d or \u201cthumbs down\u201d, as in previous sentiment analysis work, one must determine an author\u2019s evaluation with respect to a multi-point scale (e.g., one to five \u201cstars\u201d). This task represents an interesting twist on standard multi-class text categorization because there are several different degrees of similarity between class labels; for example, \u201cthree stars\u201d is intuitively closer to \u201cfour stars\u201d than to \u201cone star\u201d. We first evaluate human performance at the task. Then, we apply a metaalgorithm, based on a metric labelingformulation of the problem, that alters a given n-ary classifier\u2019s output in an explicit attempt to ensure that similar items receive similar labels. We show that the meta-algorithm can provide significant improvements over both multi-class and regression versions of SVMs when we employ a novel similarity measure appropriate to the problem. Publication info: Proceedings of the ACL, 2005."}
{"_id": "311fcda76dc7b7cf50b17c705a2aaaaab5ed6a04", "title": "Learning Distributed Representations of Sentences from Unlabelled Data", "text": "Unsupervised methods for learning distributed representations of words are ubiquitous in today\u2019s NLP research, but far less is known about the best ways to learn distributed phrase or sentence representations from unlabelled data. This paper is a systematic comparison of models that learn such representations. We find that the optimal approach depends critically on the intended application. Deeper, more complex models are preferable for representations to be used in supervised systems, but shallow log-bilinear models work best for building representation spaces that can be decoded with simple spatial distance metrics. We also propose two new unsupervised representation-learning objectives designed to optimise the trade-off between training time, domain portability and performance."}
{"_id": "de0c56bd93ea24458e90e4cbbb0b94b32dfaa1cc", "title": "Wristband-Type Driver Vigilance Monitoring System Using Smartwatch", "text": "Studies have presented that the driver vigilance level has serious implication in the causation of road accidents. This paper focuses on integrating both the vehicle-based control behavior and physiological state to predict the driver vigilance index which is evaluated by using a smartwatch. The vehicle control behavior can be observed from the steering wheel movement. Our study utilized the smartwatch motion sensors to study the steering wheel behavior. Meanwhile, physiological state of driver reflects the driver capability of safety alert driving which is estimated by photoplethysmogram (PPG) and respiration signals in this paper. The PPG sensor is integrated in a sport wristband with a Bluetooth low energy module, transmitted the PPG signals to smartwatch in real time. The steering angle is derived by the reading from smartwatch built-in accelerometer and gyroscope sensors. On the other hand, the respiration is derived using the PPG peak baseline method. In order to utterly investigate the sleepiness-induced factors, the time, spectral, and phase space domain features are calculated. Considering the smartwatch processing capability, mutual-information technique is applied to designate the ten most descriptive features. Then, the extracted descriptive features are serve as parameters to a classifier to determine the driver aptitude status. The features are analyzed for their correlation with the subjective Koralinska sleepiness scale and through recorded video observations. The experimental results reveal that our system is capable of estimating driver hypervigilance at average of 96.5% accuracy rate by evaluating on both driving behavior and driver physiological state, provided a novel and low-cost implementation."}
{"_id": "56a9c20853dc74f7e72fda32060ba4c2b8e7255d", "title": "A Framework for Detecting MAC and IP Spoofing Attacks with Network Characteristics", "text": "This paper presents a spoofing attack detection framework based on physical network characteristics (e.g., received signal strength indicator round trip time and link quality indicator) that cannot easily be mimicked by artificial means. Unlike most previous studies that are sensitive to changes in network conditions, we propose a spoofing attack detection method, which is highly robust to the changes of network conditions over time. The proposed framework can monitor devices' physical network characteristics in real time and check if any significant changes have been made in the monitored measurements to effectively detect device spoofing attacks. To demonstrate the feasibility of the proposed framework, we analyzed how the RSSI values of packets were changed with varying physical distances in a real ZigBee (IEEE 802.15.4) network environment."}
{"_id": "3366030d7a0cebe35087c1dbb39d7bbddd17993a", "title": "Summarization from medical documents: a survey", "text": "OBJECTIVE\nThe aim of this paper is to survey the recent work in medical documents summarization.\n\n\nBACKGROUND\nDuring the last decade, documents summarization got increasing attention by the AI research community. More recently it also attracted the interest of the medical research community as well, due to the enormous growth of information that is available to the physicians and researchers in medicine, through the large and growing number of published journals, conference proceedings, medical sites and portals on the World Wide Web, electronic medical records, etc.\n\n\nMETHODOLOGY\nThis survey gives first a general background on documents summarization, presenting the factors that summarization depends upon, discussing evaluation issues and describing briefly the various types of summarization techniques. It then examines the characteristics of the medical domain through the different types of medical documents. Finally, it presents and discusses the summarization techniques used so far in the medical domain, referring to the corresponding systems and their characteristics.\n\n\nDISCUSSION AND CONCLUSIONS\nThe paper discusses thoroughly the promising paths for future research in medical documents summarization. It mainly focuses on the issue of scaling to large collections of documents in various languages and from different media, on personalization issues, on portability to new sub-domains, and on the integration of summarization technology in practical applications."}
{"_id": "84912443295516f3eba73ffcb4f644ffb7c5f8b9", "title": "TrustOTP: Transforming Smartphones into Secure One-Time Password Tokens", "text": "Two-factor authentication has been widely used due to the vulnerabilities associated with traditional text-based password. One-time password (OTP) plays an indispensable role on authenticating mobile users to critical web services that demand a high level of security. As the smartphones are increasingly gaining popularity nowadays, software-based OTP generators have been developed and installed into smartphones as software apps, which bring great convenience to the users without introducing extra burden. However, software-based OTP solutions cannot guarantee the confidentiality of the generated passwords or even the seeds when the mobile OS is compromised. Moreover, they also suffer from denial-of-service attacks when the mobile OS crashes. Hardware-based OTP tokens can solve these security problems in the software-based OTP solutions; however, it is inconvenient for the users to carry physical tokens with them, particularly, when there are more than one token to be carried. In this paper, we present TrustOTP, a secure one-time password solution that can achieve both the flexibility of software tokens and the security of hardware tokens by using ARM TrustZone technology. TrustOTP can not only protect the confidentiality of the OTPs against a malicious mobile OS, but also guarantee reliable OTP generation and trusted OTP display when the mobile OS is compromised or even crashes. It is flexible to integrate multiple OTP algorithms and instances for different application scenarios on the same smartphone platform without modifying the mobile OS. We develop a prototype of TrustOTP on Freescale i.MX53 QSB. The experimental results show that TrustOTP has small impacts on the mobile OS and its power consumption is low."}
{"_id": "94f3517c3f7550e8fef17f86a33d35dfbb7a9199", "title": "Keyword Search in Spatial Databases: Towards Searching by Document", "text": "This work addresses a novel spatial keyword query called the m-closest keywords (mCK) query. Given a database of spatial objects, each tuple is associated with some descriptive information represented in the form of keywords. The mCK query aims to find the spatially closest tuples which match m user-specified keywords. Given a set of keywords from a document, mCK query can be very useful in geotagging the document by comparing the keywords to other geotagged documents in a database. To answer mCK queries efficiently, we introduce a new index called the bR*-tree, which is an extension of the R*-tree. Based on bR*-tree, we exploit a priori-based search strategies to effectively reduce the search space. We also propose two monotone constraints, namely the distance mutex and keyword mutex, as our a priori properties to facilitate effective pruning. Our performance study demonstrates that our search strategy is indeed efficient in reducing query response time and demonstrates remarkable scalability in terms of the number of query keywords which is essential for our main application of searching by document."}
{"_id": "81147ad617c37b222121d4b2eef80dd969899a45", "title": "Free-form shape design using triangulated surfaces", "text": "We present an approach to modeling with truly mutable yet completely controllable free-form surfaces of arbitrary topology. Surfaces may be pinned down at points and along curves, cut up and smoothly welded back together, and faired and reshaped in the large. This style of control is formulated as a constrained shape optimization, with minimization of squared principal curvatures yielding graceful shapes that are free of the parameterization worries accompanying many patch-based approaches. Triangulated point sets are used to approximate these smooth variational surfaces, bridging the gap between patch-based and particle-based representations. Automatic refinement, mesh smoothing, and re-triangulation maintain a good computational mesh as the surface shape evolves, and give sample points and surface features much of the freedom to slide around in the surface that oriented particles enjoy. The resulting surface triangulations are constructed and maintained in real time."}
{"_id": "0571538dac768ee68ba7dfb2f9cadcc45646a720", "title": "Imitation and Mechanisms of Joint Attention : A Developmental Structure for Building Social Skills on a Humanoid Robot", "text": "Adults are extremely adept at recognizing social cues, such as eye direction or pointing gestures, that establish the basis of joint attention. These skills serve as the developmental basis for more complex forms of metaphor and analogy by allowing an infant to ground shared experiences and by assisting in the development of more complex communication skills. In this chapter, we review some of the evidence for the developmental course of these joint attention skills from developmental psychology, from disorders of social development such as autism, and from the evolutionary development of these social skills. We also describe an on-going research program aimed at testing existing models of joint attention development by building a human-like robot which communicates naturally with humans using joint attention. Our group has constructed an upper-torso humanoid robot, called Cog, in part to investigate how to build intelligent robotic systems by following a developmental progression of skills similar to that observed in human development. Just as a child learns social skills and conventions through interactions with its parents, our robot will learn to interact with people using natural social communication. We further consider the critical role that imitation plays in bootstrapping a system from simple visual behaviors to more complex social skills. We will present data from a face and eye finding system that serves as the basis of this developmental chain, and an example of how this system can imitate the head movements of an individual."}
{"_id": "4c3396fcdd149ac2877201068794a3fca8208840", "title": "Suicide prevention strategies: a systematic review.", "text": "CONTEXT\nIn 2002, an estimated 877,000 lives were lost worldwide through suicide. Some developed nations have implemented national suicide prevention plans. Although these plans generally propose multiple interventions, their effectiveness is rarely evaluated.\n\n\nOBJECTIVES\nTo examine evidence for the effectiveness of specific suicide-preventive interventions and to make recommendations for future prevention programs and research.\n\n\nDATA SOURCES AND STUDY SELECTION\nRelevant publications were identified via electronic searches of MEDLINE, the Cochrane Library, and PsychINFO databases using multiple search terms related to suicide prevention. Studies, published between 1966 and June 2005, included those that evaluated preventative interventions in major domains; education and awareness for the general public and for professionals; screening tools for at-risk individuals; treatment of psychiatric disorders; restricting access to lethal means; and responsible media reporting of suicide.\n\n\nDATA EXTRACTION\nData were extracted on primary outcomes of interest: suicidal behavior (completion, attempt, ideation), intermediary or secondary outcomes (treatment seeking, identification of at-risk individuals, antidepressant prescription/use rates, referrals), or both. Experts from 15 countries reviewed all studies. Included articles were those that reported on completed and attempted suicide and suicidal ideation; or, where applicable, intermediate outcomes, including help-seeking behavior, identification of at-risk individuals, entry into treatment, and antidepressant prescription rates. We included 3 major types of studies for which the research question was clearly defined: systematic reviews and meta-analyses (n = 10); quantitative studies, either randomized controlled trials (n = 18) or cohort studies (n = 24); and ecological, or population- based studies (n = 41). Heterogeneity of study populations and methodology did not permit formal meta-analysis; thus, a narrative synthesis is presented.\n\n\nDATA SYNTHESIS\nEducation of physicians and restricting access to lethal means were found to prevent suicide. Other methods including public education, screening programs, and media education need more testing.\n\n\nCONCLUSIONS\nPhysician education in depression recognition and treatment and restricting access to lethal methods reduce suicide rates. Other interventions need more evidence of efficacy. Ascertaining which components of suicide prevention programs are effective in reducing rates of suicide and suicide attempt is essential in order to optimize use of limited resources."}
{"_id": "b3c643e0b32418b66dc9128ae3db21e791ccb1c4", "title": "A Solution to Separation and Multicollinearity in Multiple Logistic Regression.", "text": "In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27-38) proposed a penalized likelihood estimator for generalized linear models and it was shown to reduce bias and the non-existence problems. The ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither solves the problems for each other. In this paper, we propose a double penalized maximum likelihood estimator combining Firth's penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using a current screening data from a community-based dementia study."}
{"_id": "7f3ad695ac38e4b248eba2eb664f73bf2be825a2", "title": "The Traveling Salesman Problem: A Survey", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "6908ab53945ebac46fcef5af5af0799b8d8c800b", "title": "Improved algorithm for mining maximum frequent patterns based on FP-Tree", "text": "Mining association rule is an important matter in data mining, in which mining maximum frequent patterns is a key problem. Many of the previous algorithms mine maximum frequent patterns by producing candidate patterns firstly, then pruning. But the cost of producing candidate patterns is very high, especially when there exists long patterns. In this paper, the structure of a FP-tree is improved, we propose a fast algorithm based on FP-Tree for mining maximum frequent patterns, the algorithm does not produce maximum frequent candidate patterns and is more effectively than other improved algorithms. The new FP-Tree is a one-way tree and only retains pointers to point its father in each node, so at least one third of memory is saved. Experiment results show that the algorithm is efficient and saves memory space. Keywordsdata mining;association rule;maximum frequent pattern;FP-Tree"}
{"_id": "22031846091d6ec78b06d79ef5d61315492a92fa", "title": "Comparison Research on Text Pre-processing Methods on Twitter Sentiment Analysis", "text": "Twitter sentiment analysis offers organizations ability to monitor public feeling towards the products and events related to them in real time. The first step of the sentiment analysis is the text pre-processing of Twitter data. Most existing researches about Twitter sentiment analysis are focused on the extraction of new sentiment features. However, to select the pre-processing method is ignored. This paper discussed the effects of text pre-processing method on sentiment classification performance in two types of classification tasks, and summed up the classification performances of six pre-processing methods using two feature models and four classifiers on five Twitter datasets. The experiments show that the accuracy and F1-measure of Twitter sentiment classification classifier are improved when using the pre-processing methods of expanding acronyms and replacing negation, but barely changes when removing URLs, removing numbers or stop words. The Naive Bayes and Random Forest classifiers are more sensitive than Logistic Regression and support vector machine classifiers when various pre-processing methods were applied."}
{"_id": "973d0f35c3c3c4dc908d0f2b9d923348df5c470d", "title": "Alzheimer's Disease and the Impact of Music Therapy: A Systematic Literature Review", "text": "Currently ranked as the sixth leading cause of death in the United States, Alzheimer\u2019s disease (AD) has become the most prevalent form of dementia, a term commonly associated with memory loss and other progressive cognitive deficits that compromise patients\u2019 lives. What may begin as a mindlessly misplaced object or momentary inability to recall newly learned information will eventually advance to a loss of personal identity, forgotten loved ones, and utter misperception of reality. Although a cure for AD has yet to be discovered, there are several nonpharmacological treatments that can improve patients\u2019 quality of life and provide temporary relief from the disabling manifestations, one of which is music therapy, the topic of this literature review. Music can be deeply connected with emotional processing and memory recall, and, when utilized as an interventional therapy for AD patients, can yield numerous cognitive and behavioral symptomatic benefits. The purpose of this project is to conduct a systematic literature review that evaluates the therapeutic relationship between AD and music therapy with a narrowed focus on familiar music therapy, the potential mechanisms of action that explain the efficacy of this intervention, and the resulting nursing implications that may be utilized in practice. MUSIC THERAPY AND ALZHEIMER\u2019S DISEASE 6 Methods An exhaustive literature review was conducted using the database CINAHL (EBSCO). The search terms included: \u201cAlzheimer\u2019s disease AND music therapy\u201d, \u201cAlzheimer\u2019s AND music therapy\u201d, \u201cAlzheimer\u2019s disease symptoms AND music therapy\u201d, \u201cAlzheimer\u2019s disease AND music therapy AND familiar music\u201d, and \u201cAlzheimer\u2019s disease AND active music therapy.\u201d The literature search included full text articles from academic journals in the English language, including international articles. Only articles published between 2007 through 2017 were reviewed, with the exception of one relevant article from 2005. From the initial search results, 35 articles matched the preliminary criteria, and were further reviewed if they addressed: a) Alzheimer\u2019s disease, b) elements and types of music therapy, c) outcomes of music therapy, d) scientific explanation of music therapy efficacy and, e) nursing implications. A total of 13 articles met the inclusion criteria and were included in the literature review. Six of these articles focused on familiar music therapy, and are summarized in table 1. MUSIC THERAPY AND ALZHEIMER\u2019S DISEASE 7 Alzheimer\u2019s Disease Alzheimer\u2019s Disease (AD), an irreversibly progressive brain disorder, has an unrelenting course, adversely impacting every aspect of a patient\u2019s reality, often beginning with the inability to perform the most mundane tasks and resulting in utter dependency on others for the totality of self care. However, what may be even more staggering than the relentless nature of this disease are the associated statistics, for the prevalence of new cases is growing at an alarming rate \u2013 every 66 seconds to be exact. Currently, AD is the sixth leading cause of death in the United States, affecting more than 5 million individuals; however, this number is estimated to more than triple by 2050 (Alzheimer\u2019s Association, 2017). 
While this disease often progresses gradually through the three stages \u2013 mild, moderate, and severe \u2013 every patient endures the associated manifestations differently, which can make it difficult to discern which stage a person is experiencing. An individual displaying the early symptoms of AD may show signs of forgetfulness, possibly misplacing objects or failing to recall a friend\u2019s name, but will likely remain independent through the first stage of the disease. As the patient\u2019s condition progresses into the middle stage, personality and behavior changes will be noted, such as frustration, anger, refusing to shower, etc., as well as a greater need for assistance with daily activities. The most disabling manifestations of the disease appear in the final stage, including a loss of environmental awareness, severe communication impairments, crippling physical limitations, and the need for constant supervision and care (Alzheimer\u2019s Association, 2017). Perhaps one of the most troubling aspects of AD lies in the inevitable imminence of death once the disease begins to unfold, despite the pharmacological options that exist. Treatment efforts for AD patient currently focus on symptom management utilizing a multidisciplinary MUSIC THERAPY AND ALZHEIMER\u2019S DISEASE 8 approach, combining both pharmacological and non-pharmacological interventions to achieve the highest possible quality of life. While somewhat beneficial, most pharmacological treatments yield limited improvement and may result in further patient deterioration due to the adverse effects associated with antipsychotic and/or psychotropic drugs (Guetin, 2013). However, various non-pharmacological therapies for symptom management have been identified and implemented to improve patients\u2019 quality of life, and universal research efforts vigorously continue in the pursuit of answers to better understand the disease, determine effective treatment options, and prevent its development entirely. MUSIC THERAPY AND ALZHEIMER\u2019S DISEASE 9 Music Therapy The use of non-pharmacological strategies as a potential treatment method for cognitive and behavioral disorders has become a more prevalent occurrence in patient care plans, with a specific emphasis on music therapy as an exceptionally effective intervention for AD, as acknowledged by the French Agency for Health Accreditation and Evaluation (Narme et al., 2013). Music therapy, as defined by the World Federation of Music Therapy, is \u201cthe use of music and/or its musical elements (sound, rhythm, melody and harmony) by a qualified music therapist, with a client or group, in a process designed to facilitate and promote communication, relationships, learning, mobilization, expression, organization and other relevant therapeutic objectives in order to meet physical, emotional, mental, social and cognitive needs\u201d (Guetin et al., 2013, p. 621). While music therapy has gained popularity in recent years as an effective tool to manage a variety of health care needs, such as procedural pain control, behavioral disorder manifestations, and physical rehabilitation, this non-pharmacological intervention has ancient roots dating as far back as the sixth century BC (Cox, Nowak, & Buettner, 2011). There is hieroglyphic evidence suggesting the Egyptians treated a variety of ailments, such as pain, depression, and sleep disorders, with melodic hymns and incantations (Guetin et al., 2013). 
Plato, during ancient Grecian times, dubbed musical training a form of \u201cmental hygiene,\u201d as well as \u201cmedicine for the soul\u201d (Guetin et al., 2013, p. 622). The 20 century brought about the scientific discovery of the physiological effects music can produce on blood pressure and heart rate, which helps explain the earlier findings detected by dental surgeons, who noticed their patients seemed to experience decreased pain and anxiety during procedures if a phonograph was playing simultaneously (Guetin et al., 2013). MUSIC THERAPY AND ALZHEIMER\u2019S DISEASE 10 Although primitive, these historical efforts laid the foundation to legitimize music therapy professionally. The National Association for Music Therapy, created in the U.S. during the mid-20 century, became the original representative professional organization and was shortly followed by the emergence of the American Association for Music Therapy in 1971. Since then, these two associations have been consolidated into the present-day American Music Therapy Association (AMTA), which is responsible for setting the educational requirements for aspiring music therapists. In order to become Music Therapist-Board Certified (MT-BC) in the United States, these individuals must receive a bachelor\u2019s, master\u2019s, or doctoral degree from an accredited program, complete a clinical internship, and partake in 1200 clinical training hours before sitting for the board certification examination (Ahn & Ashida, 2012). In order to maintain the quality of services provided by the music therapy profession, the AMTA also established professional competencies, standards of clinical practice, and a code of ethics. The standards of clinical practice outline the general order in which music therapy services are delivered: 1) referral and acceptance, 2) assessment, 3) treatment planning, 4) implementation, 5) documentation, and 6) termination (American Music Therapy Association, 2015). MUSIC THERAPY AND ALZHEIMER\u2019S DISEASE 11 Active vs. Receptive Music Therapy Although the exact physiological mechanisms responsible for the efficacy of music therapy on Alzheimer\u2019s disease are not yet fully understood, the behavioral, cognitive, and physical improvements that have been noted during both therapy sessions and research studies are undeniably significant. The degree of improvement following musical intervention is primarily influenced by music therapists\u2019 understanding of how best to utilize the distinct elements within music, which is further based on their knowledge of how each aspect will impact a patient\u2019s cognitive and behavioral status (Guetin et al., 2009). The broader concept of music therapy encompasses two fundamental methods, active and receptive. Active music therapy requires the patient to be physically engaged, such as through the use of sound-producing objects, singing, dance-like movement, or playing instruments, for example. (Guetin et al., 2013). Receptive music therapy, considered a \u201ccontrolled method for listening to music,\u201d is comprised of specifically selected songs or live music played for the patient, with song choice based on individually meaningful elements, such as generation, culture, or personal history (Guetin et al., 2009). Both active and receptive music therapy can be used to target specific AD symptoms, such as memory loss, language deficits, depression, anx"}
{"_id": "3a77e0ace75ba9dcddb9b0bdc35cda78e960ae7c", "title": "Automatic recognition of unified parkinson's disease rating from speech with acoustic, i-vector and phonotactic features", "text": "Parkinson\u2019s Disease is a neurodegenerative disease affecting millions of people globally, most of whom present difficulties producing speech sounds. In this paper, we describe a system to identify the degree to which a person suffers from the disease. We use a number of automatic phone recognition-based features and we augment these with i-vector features and utterancelevel acoustic aggregations. On the Interspeech 2015 ComParE challenge corpus, we find that these features allow for prediction well above the challenge baseline, particularly under crossvalidation evaluation."}
{"_id": "eefcc7bcc05436dac9881acb4ff4e4a0b730e175", "title": "High-dimensional signature compression for large-scale image classification", "text": "We address image classification on a large-scale, i.e. when a large number of images and classes are involved. First, we study classification accuracy as a function of the image signature dimensionality and the training set size. We show experimentally that the larger the training set, the higher the impact of the dimensionality on the accuracy. In other words, high-dimensional signatures are important to obtain state-of-the-art results on large datasets. Second, we tackle the problem of data compression on very large signatures (on the order of 105 dimensions) using two lossy compression strategies: a dimensionality reduction technique known as the hash kernel and an encoding technique based on product quantizers. We explain how the gain in storage can be traded against a loss in accuracy and/or an increase in CPU cost. We report results on two large databases \u2014 ImageNet and a dataset of lM Flickr images \u2014 showing that we can reduce the storage of our signatures by a factor 64 to 128 with little loss in accuracy. Integrating the decompression in the classifier learning yields an efficient and scalable training algorithm. On ILSVRC2010 we report a 74.3% accuracy at top-5, which corresponds to a 2.5% absolute improvement with respect to the state-of-the-art. On a subset of 10K classes of ImageNet we report a top-1 accuracy of 16.7%, a relative improvement of 160% with respect to the state-of-the-art."}
{"_id": "9dae8976d7f8697010473e33272b3569e34e9417", "title": "Using Smart Sensor Strings for Continuous Monitoring of Temperature Stratification in Large Water Bodies", "text": "A \"smart\" thermistor string for continuous long-term temperature profiling in large water bodies is described allowing highly matched yet low-cost spatial and temporal temperature measurements. The sensor uses the three-wire SDI-12 communications standard to enable a low-powered radio or data logger on supporting buoys to command measurements and retrieve high-resolution temperature data in digital form. Each \"smart\" temperature sensor integrates a thermistor element, measurement circuitry, power control, calibration coefficient storage, temperature computation, and data communications. Multiple addressable sensors at discrete vertical depths are deployed along a three-wire cable that provides power and allows data transfer at regular intervals. Circuit, manufacturing, and automated calibration techniques allow temperature measurements with a resolution of plusmn0.003degC, and with intersensor matching of plusmn0.006degC. The low cost of each sensor is achieved by using poor tolerance thermistor and circuit components in conjunction with a 15-bit charge-balance analog-to-digital converter. Sensor inaccuracies and temperature coefficients are corrected by a two-point calibration procedure made possible by a standard-curve generator within the sensor, based upon the method of finite differences. This two-point calibration process allows in-field sensor string calibration in stratified water bodies and provides a means to correct for long-term calibration drift without having to return the string to a laboratory"}
{"_id": "c55415d6b24f8eab2da2a1df266e1b7ef0e9f11e", "title": "Animating fluid sediment mixture in particle-laden flows", "text": "In this paper, we present a mixed explicit and semi-implicit Material Point Method for simulating particle-laden flows. We develop a Multigrid Preconditioned fluid solver for the Locally Averaged Navier Stokes equation. This is discretized purely on a semi-staggered standard MPM grid. Sedimentation is modeled with the Drucker-Prager elastoplasticity flow rule, enhanced by a novel particle density estimation method for converting particles between representations of either continuum or discrete points. Fluid and sediment are two-way coupled through a momentum exchange force that can be easily resolved with two MPM background grids. We present various results to demonstrate the efficacy of our method."}
{"_id": "04a953ba760845232c0f3c6e4dc3ca7b1fb8da4e", "title": "Lightweight Memory Tracing", "text": "Dynamic binary instrumentation is a technique that consists of translating the binary code of executables on-the-fly, including all the loaded libraries. The original binary code is instrumented in the process of translation. Applications of this technique range from sandboxes to debugging and profiling tools. In this thesis we describe lMem, a lightweight binary translation framework for the IA-32/x64 architectures. One of the challenges when writing instrumentation tools is that resources such as registers and memory have to be shared between the instrumented program and the instrumentation logic. A possible way to approach this problem is to translate a given machine code snippet in an intermediate representation (IR), add instrumentation code written in this IR and then recompile the snippet to machine code and run it. Tools such as Valgrind successfully make use of this technique. These IR-based methods however are heavyweight in the sense that a significant amount of work done by the compiler has to be replicated at runtime. IR-based recompilation increases the runtime overhead and the complexity of the binary translation framework itself. A second approach consists of saving and restoring some of the state before and after the inserted instrumentation code. This approach, which is used for example by the selDebug debugger, is simple but it suffers from overhead due to frequent saving and restoring of state. A third approach is to let the instrumentation code make use of hardware resources that are not used by the instrumented program. The minemu taint checking tool achieves good performance by exploiting the fact that some 32 bit machines lack SSE registers. This enables it to fool the application into thinking that it is running on such a machine while in reality these registers are used by the instrumentation code. lMem translates IA-32 programs to x64 and uses additional registers and memory for instrumentation. To gauge the performance of this approach we implement a simple application on top of lMem which enables to insert an arbitrary number of watchpoints. We observe that this lMem-based application has overheads of only 50% to 200% on SPEC CPU 2006 benchmarks."}
{"_id": "126a237502b3c55d015bf1a825b38349c200d518", "title": "An Experimental Comparison of Three Methods for Constructing Ensembles of Decision Trees: Bagging, Boosting, and Randomization", "text": "Bagging and boosting are methods that generate a diverse ensemble of classifiers by manipulating the training data given to a \u201cbase\u201d learning algorithm. Breiman has pointed out that they rely for their effectiveness on the instability of the base learning algorithm. An alternative approach to generating an ensemble is to randomize the internal decisions made by the base algorithm. This general approach has been studied previously by Ali and Pazzani and by Dietterich and Kong. This paper compares the effectiveness of randomization, bagging, and boosting for improving the performance of the decision-tree algorithm C4.5. The experiments show that in situations with little or no classification noise, randomization is competitive with (and perhaps slightly superior to) bagging but not as accurate as boosting. In situations with substantial classification noise, bagging is much better than boosting, and sometimes better than randomization."}
{"_id": "a607849a8327d36f987560653789a5f423e1bfa3", "title": "Kinematics analysis of a novel 4-dof stabilized platform", "text": "This paper presents a novel stabilized platform with three UPS-type and one RPU-type active limbs based on the shipboard helicopter's landing process and its kinematics are studied systematically. First, a prototype of this stabilized platform is constructed and its dof is analyzed, Second, analytic formulas for solving the inverse displacement kinematics are derived Third, analytic formulas for solving the forward displacement kinematics are derived and verified. Theoretical formulas and results provide a basis for the further optimization, control, and manufacturing."}
{"_id": "0ba749975aaffee5e8d0e877011414f10fd700c9", "title": "AP16-OL7: A multilingual database for oriental languages and a language recognition baseline", "text": "We present the AP16-OL7 database which was released as the training and test data for the oriental language recognition (OLR) challenge on APSIPA 2016. Based on the database, a baseline system was constructed on the basis of the i-vector model. We report the baseline results evaluated in various metrics defined by the AP16-OLR evaluation plan and demonstrate that AP16-OL7 is a reasonable data resource for multilingual research."}
{"_id": "444a20d3e883f22923df4b694212678d59ac2fc9", "title": "Accurate odometry and error modelling for a mobile robot", "text": "This paper presents a low cost novel odometry design capable of achieving high accuracy deadreckoning. It also develops a statistical error model for estimating position and orientation errors of a mobile robot using odometry. Previous work on propagating odometry error covariance relies on incrementally updating the covariance matrix in small time steps. The approach taken here sums the noise theoretically over the entire path length to produce simple closed form expressions, allowing efficient covariance matrix updating after the completion of path segments. Closed form error covariance matrix is developed for a general circular arc and two special cases : (I) straight line and (II) turning about the centre of axle of the robot. Other paths can be composed of short segments of constant curvature arcs without great loss of accuracy. The model assumes that wheel distance measurement errors are exclusively random zero mean white noise. Systematic errors due to wheel radius and wheel base measurement were first calibrated with UMBmark [BorFen94]. Experimental results show that, despite its low cost, our system\u2019s performance, with regard to dead-reckoning accuracy, is comparable to some of the best, award-winning vehicles around. The statistical error model, on the other hand, needs to be improved in light of new insights."}
{"_id": "c5d954d13c1c620d78ebaba9afa120733e90ed09", "title": "An Approximate Analysis of the LRU and FIFO Buffer Replacement Schemes", "text": "In this paper, we develop approximate analytical models for predicting the buffer hit probability under the Least Recently Used (LRU) and First In First Out (FIFO) buffer replacement policies under the independent reference model. In the case of the analysis of the LRU policy, the computational complexity for estimating the buffer hit probability is O(KB) where B is the size of the buffer and K denotes the number of items having distinct access probabilities. In the case of the FIFO policy, the solution algorithm is iterative and the computational complexity of each iteration is O(K). Results from these models are compared to exact results for models originally developed by King [KING71] for small values of the buffer size, B, and the total number of items sharing the buffer, D. Results are also compared with results from a simulation for large values of B and D. In most cases, the error is extremely small (less than 0.1%) for both LRU and FIFO, and a maximum error of 3% is observed for very small buffer size (less than 5) when the access probabilities are extremely skewed. To demonstrate the usefulness of the model, we consider two applications. In our first application, we compare the LRU and FIFO policies to an optimal static buffer allocation policy for a database consisting of two classes of data items. We observe that the performance of LRU is close to that of the optimal allocation. As the optimal allocation requires knowledge of the access probabilities, the LRU policy is preferred when this information is unavailable. We also observe that the LRU policy always performs better than the FIFO policy in our experiments. In our second application, we show that if multiple independent reference streams on mutually disjoint sets of data compete for the same buffer, it is better to partition the buffer using an optimal allocation policy than to share a common buffer."}
{"_id": "0e431052b86ba408622cbcf9728cf8423c6892fa", "title": "100 000 Frames/s 64 \u00d7 32 Single-Photon Detector Array for 2-D Imaging and 3-D Ranging", "text": "We report on the design and characterization of a multipurpose 64 \u00d7 32 CMOS single-photon avalanche diode (SPAD) array. The chip is fabricated in a high-voltage 0.35-\u03bcm CMOS technology and consists of 2048 pixels, each combining a very low noise (100 cps at 5-V excess bias) 30-\u03bcm SPAD, a prompt avalanche sensing circuit, and digital processing electronics. The array not only delivers two-dimensional intensity information through photon counting in either free-running (down to 10-\u03bcs integration time) or time-gated mode, but can also perform smart light demodulation with in-pixel background suppression. The latter feature enables phase-resolved imaging for extracting either three-dimensional depth-resolved images or decay lifetime maps, by measuring the phase shift between a modulated excitation light and the reflected photons. Pixel-level memories enable fully parallel processing and global-shutter readout, preventing motion artifacts (e.g., skew, wobble, motion blur) and partial exposure effects. The array is able to acquire very fast optical events at high frame-rate (up to 100 000 fps) and at single-photon level. Low-noise SPADs ensure high dynamic range (up to 110 dB at 100 fps) with peak photon detection efficiency of almost 50% at 410 nm. The SPAD imager provides different operating modes, thus, enabling both time-domain applications, like fluorescence lifetime imaging (FLIM) and fluorescence correlation spectroscopy, as well as frequency-domain FLIM and lock-in 3-D ranging for automotive vision and lidar."}
{"_id": "17d860c991c8fe643839ae94c639e826fc914ba0", "title": "CMOS SPADs: Design Issues and Research Challenges for Detectors, Circuits, and Arrays", "text": "Solid-state single photon detectors are playing a significant role in the development of high-performance single-photon imaging systems for fluorescence lifetime imaging, time-of-flight positron emission tomography and Raman Spectroscopy applications. The main driving factors are the unparalleled levels of miniaturization and portability, low fabrication costs, and high overall performance resulting from the integration of single-photon avalanche diodes (SPADs) with mixed-signal circuits in deep-submicron (DSM) complementary metal-oxide-semiconductor (CMOS) technology. At the heart of such imaging systems is the SPAD, capable of single-photon sensitivity and sub-nanosecond time resolution, and its associated circuitry, which in DSM CMOS, is capable of high-speed, low-power mixed-mode signal processing. In this paper, we review and discuss the most recent developments in DSM CMOS SPAD detectors, circuits and arrays and investigate issues of scalability, miniaturization and performance trade-offs involved in designing SPAD imaging systems. Design considerations, research challenges, and future directions for CMOS SPAD image sensors will be highlighted and addressed."}
{"_id": "40bbbaf3cb1459fd05fca6771f3a29d3fc9cfa19", "title": "Photon counting and direct ToF camera prototype based on CMOS SPADs", "text": "This paper presents a camera prototype for 2D/3D image capture in low illumination conditions based on single-photon avalanche-diode (SPAD) image sensor for direct time-offlight (d-ToF). The imager is a 64\u00d764 array with in-pixel TDC for high frame rate acquisition. Circuit design techniques are combined to ensure successful 3D image capturing under low sensitivity conditions and high level of uncorrelated noise such as dark count and background illumination. Among them an innovative time gated front-end for the SPAD detector, a reverse start-stop scheme and real-time image reconstruction at Ikfps are incorporated by the imager. To the best of our knowledge, this is the first ToF camera based on a SPAD sensor fabricated and proved for 3D image reconstruction in a standard CMOS process without any opto-flavor or high voltage option. It has a depth resolution of 1cm at an illumination power from less than 6nW/mm2 down to 0.1nW/mm2."}
{"_id": "62e4f3fcc41d88fbc2096cbfe3a6edc7bb3e17b5", "title": "11.5 A time-correlated single-photon-counting sensor with 14GS/S histogramming time-to-digital converter", "text": "Time-correlated single photon counting (TCSPC) is a photon-efficient technique to record ultra-fast optical waveforms found in numerous applications such as time-of-flight (ToF) range measurement (LIDAR) [1], ToF 3D imaging [2], scanning optical microscopy [3], diffuse optical tomography (DOT) and Raman sensing [4]. Typical instrumentation consists of a pulsed laser source, a discrete detector such as an avalanche photodiode (APD) or photomultiplier tube (PMT), time-to-digital converter (TDC) card and a FPGA or PC to assemble and compute histograms of photon time stamps. Cost and size restrict the number of channels of TCSPC hardware. Having few detection and conversion channels, the technique is limited to processing optical waveforms with low intensity, with less than one returned photon per laser pulse, to avoid pile-up distortion [4]. However, many ultra-fast optical waveforms exhibit high dynamic range in the number of photons emitted per laser pulse. Examples are signals observed at close range in ToF with multiple reflections, diffuse reflected photons in DOT or local variations in fluorescent dye concentration in microscopy. This paper provides a single integrated chip that reduces conventional TCSPC pile-up mechanisms by an order of magnitude through ultra-parallel realizations of both photon detection and time-resolving hardware. A TDC architecture is presented which combines the two step iterated TCSPC process of time-code generation, followed by memory lookup, increment and write, into one parallel direct-to-histogram conversion. The sensor achieves 71.4ps resolution, over 18.85ns dynamic range, with 14GS/s throughput. The sensor can process 1.7Gphoton/s and generate 21k histograms/s (with 4.6\u03bcs readout time), each capturing a total of 1.7kphotons in a 1\u03bcs exposure."}
{"_id": "fb4e3cced4b15a36bf1e4a8858d099d21a1396bd", "title": "Multiple-event direct to histogram TDC in 65nm FPGA technology", "text": "A novel multiple-event Time to Digital Converter (TDC) with direct to histogram output is implemented in a 65nm Xilinx Virtex 5 FPGA. The delay-line based architecture achieves 16.3 ps temporal accuracy over a 2.86ns dynamic range. The measured maximum conversion rate of 6.17 Gsamples/s and the sampling rate of 61.7 Gsamples/s are the highest published in the literature. The system achieves a linearity of -0.9/+3 LSB DNL and -1.5/+5 LSB INL. The TDC is demonstrated in a direct time of flight optical ranging application with 12mm error over a 350mm range."}
{"_id": "016335ce7e0a073623e1deac7138b28913dbf594", "title": "Human-level concept learning through probabilistic program induction", "text": "People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms\u2014for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world\u2019s alphabets. The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. We also present several \u201cvisual Turing tests\u201d probing the model\u2019s creative generalization abilities, which in many cases are indistinguishable from human behavior."}
{"_id": "917127973e3eb4217f62681aeeef13b79a89ca92", "title": "Analyzing Factors Affecting Users \u2019 Behavior Intention to Use Social Media : Twitter Case", "text": "Advancement of technology and the Internet proliferation have visible effects in the world. One of the most important effects is the increased social media usage among the Internet users. For this purpose, factors having impacts on users\u2019 behavior intention to social media usage are investigated within the scope of the study. A research model based on technology acceptance model 2 is proposed and revised by the taking social media platforms into consideration. The effects of perceived ease of use, perceived usefulness, social influence, facilitating conditions, playfulness, and trust are measured. Data collected from 462 respondents are analyzed by structural equation modeling technique. According to results of the measurement and structural model validities, a fit and acceptable model is achieved to measure the related effects of variables. The results of study reveal the both direct and indirect positive impacts of the factors on users\u2019 behavior intention to use social media."}
{"_id": "5df3dd8a56974c60632b7e15f5187ccfa06e28ee", "title": "Detecting Topic Drift with Compound Topic Models", "text": "The Latent Dirichlet Allocation topic model of Blei, Ng, & Jordan (2003) is well-established as an effective approach to recovering meaningful topics of conversation from a set of documents. However, a useful analysis of user-generated content is concerned not only with the recovery of topics from a static data set, but with the evolution of topics over time. We employ a compound topic model (CTM) to track topics across two distinct data sets (i.e. past and present) and to visualize trends in topics over time; we evaluate several metrics for detecting a change in the distribution of topics within a time-window; and we illustrate how our approach discovers emerging conversation topics related to current events in real data sets."}
{"_id": "e15f910f7cc2d785c3cefd37dc7ae99e46bf031f", "title": "Spatial reconstruction of single-cell gene expression data", "text": "Spatial localization is a key determinant of cellular fate and behavior, but methods for spatially resolved, transcriptome-wide gene expression profiling across complex tissues are lacking. RNA staining methods assay only a small number of transcripts, whereas single-cell RNA-seq, which measures global gene expression, separates cells from their native spatial context. Here we present Seurat, a computational strategy to infer cellular localization by integrating single-cell RNA-seq data with in situ RNA patterns. We applied Seurat to spatially map 851 single cells from dissociated zebrafish (Danio rerio) embryos and generated a transcriptome-wide map of spatial patterning. We confirmed Seurat's accuracy using several experimental approaches, then used the strategy to identify a set of archetypal expression patterns and spatial markers. Seurat correctly localizes rare subpopulations, accurately mapping both spatially restricted and scattered groups. Seurat will be applicable to mapping cellular localization within complex patterned tissues in diverse systems."}
{"_id": "1bee20577d298c3f42c5834ae79a89730ca92256", "title": "Defending against Internet worms: a signature-based approach", "text": "With the capability of infecting hundreds of thousands of hosts, worms represent a major threat to the Internet. The defense against Internet worms is largely an open problem. This paper investigates two important problems. Can a localized defense system detect new worms that were not seen before and moreover, capture the attack packets? How to identify polymorphic worms from the normal background traffic? We have two major contributions here. The first contribution is the design of a novel double-honeypot system, which is able to automatically detect new worms and isolate the attack traffic. The second contribution is the proposal of a new type of position-aware distribution signatures (PADS), which fit in the gap between the traditional signatures and the anomaly-based systems. We propose two algorithms based on expectation-maximization (EM) and Gibbs sampling for efficient computation of PADS from polymorphic worm samples. The new signature is capable of handling certain polymorphic worms. Our experiments show that the algorithms accurately separate new variants of the MSBlaster worm from the normal-traffic background."}
{"_id": "21f18913411668c4c0d0a4aabd7a5d2800922079", "title": "USim: a user behavior simulation framework for training and testing IDSes in GUI based systems", "text": "Anomaly detection systems largely depend on user profile data to be able to detect deviations from normal activity. Most of this profile data is currently based on command-line instructions/directives executed by users on a system. With the advent and extensive usage of graphical user interfaces (GUIs), command-line data can no longer fully represent user's complete behavior which is essential for effectively detecting the anomalies in these GUI based systems. Collection of user behavior data is a slow and time consuming process. In this paper, we present a new approach to automate the generation of user data by parameterizing user behavior in terms of user intention (malicious/normal), user skill level, set of applications installed on a machine, mouse movement and keyboard activity. The user behavior parameters are used to generate templates, which can be further customized. The framework is called USim which can achieve rapid generation of user behavior data based on these templates for GUI based systems. The data thus generated can be utilized for rapidly training and testing intrusion detection systems (IDSes) and improving their detection precision."}
{"_id": "d72642530cab7e8edeff459fd11b1e744e8ed03d", "title": "Learning Pulse: Using Wearable Biosensors and Learning Analytics to Investigate and Predict Learning Success in Self-regulated Learning", "text": "The Learning Pulse study aims to explore whether physiological data such as heart rate and step count correlate with learning activity data and whether they are good predictors for learning success during self-regulated learning. To verify this hypothesis an experiment was set up involving eight doctoral students at the Open University of the Netherlands. Through wearable sensors, heart rate and step count were constantly monitored and learning activity data were collected. All data were stored in a Learning Record Store in xAPI format. Additionally, with an Activity Rating Tool, the participants rated their learning and working experience by indicating the perceived levels of productivity, stress, challenge and abilities along with the type of activity. These human annotated labels can be used for supervising machine learning algorithms to discriminate the successful learning moments from the unsuccessful ones and eventually discover the attributes that most influence the learning process."}
{"_id": "91b57a3b4996447430c4b942cde01eab276febe2", "title": "Moving beyond linearity and independence in top-N recommender systems", "text": "This paper suggests a number of research directions in which the recommender systems can improve their quality, by moving beyond the assumptions of linearity and independence that are traditionally made. These assumptions, while producing effective and meaningful results, can be suboptimal, as in lots of cases they do not represent the real datasets. In this paper, we discuss three different ways to address some of the previous constraints. More specifically, we focus on the development of methods capturing higher-order relations between the items, cross-feature interactions and intra-set dependencies which can potentially lead to a considerable enhancement of the recommendation accuracy."}
{"_id": "5fa2f06b36b2c467f895dc361750edf38e764449", "title": "Adversarial Networks for the Detection of Aggressive Prostate Cancer", "text": "Semantic segmentation constitutes an integral part of medical image analyses for which breakthroughs in the field of deep learning were of high relevance. The large number of trainable parameters of deep neural networks however renders them inherently data hungry, a characteristic that heavily challenges the medical imaging community. Though interestingly, with the de facto standard training of fully convolutional networks (FCNs) for semantic segmentation being agnostic towards the \u2018structure\u2019 of the predicted label maps, valuable complementary information about the global quality of the segmentation lies idle. In order to tap into this potential, we propose utilizing an adversarial network which discriminates between expert and generated annotations in order to train FCNs for semantic segmentation. Because the adversary constitutes a learned parametrization of what makes a good segmentation at a global level, we hypothesize that the method holds particular advantages for segmentation tasks on complex structured, small datasets. This holds true in our experiments: We learn to segment aggressive prostate cancer utilizing MRI images of 152 patients and show that the proposed scheme is superior over the de facto standard in terms of the detection sensitivity and the dice-score for aggressive prostate cancer. The achieved relative gains are shown to be particularly pronounced in the small dataset limit."}
{"_id": "c77c29b3049bc39c36929d01525e94fcca14837f", "title": "Network virtualization: state of the art and research challenges", "text": "Recently network virtualization has been pushed forward by its proponents as a long-term solution to the gradual ossification problem faced by the existing Internet and proposed to be an integral part of the next-generation networking paradigm. By allowing multiple heterogeneous network architectures to cohabit on a shared physical substrate, network virtualization provides flexibility, promotes diversity, and promises security and increased manageability. However, many technical issues stand in the way of its successful realization. This article investigates the past and the state of the art in network virtualization along with the future challenges that must be addressed to realize a viable network virtualization environment."}
{"_id": "6703eefaa50f1c1228b4f4be657f2462fc83d6cd", "title": "Induction and molecular signature of pathogenic TH17 cells", "text": "Interleukin 17 (IL-17)-producing helper T cells (TH17 cells) are often present at the sites of tissue inflammation in autoimmune diseases, which has led to the conclusion that TH17 cells are main drivers of autoimmune tissue injury. However, not all TH17 cells are pathogenic; in fact, TH17 cells generated with transforming growth factor-\u03b21 (TGF-\u03b21) and IL-6 produce IL-17 but do not readily induce autoimmune disease without further exposure to IL-23. Here we found that the production of TGF-\u03b23 by developing TH17 cells was dependent on IL-23, which together with IL-6 induced very pathogenic TH17 cells. Moreover, TGF-\u03b23-induced TH17 cells were functionally and molecularly distinct from TGF-\u03b21-induced TH17 cells and had a molecular signature that defined pathogenic effector TH17 cells in autoimmune disease."}
{"_id": "551546dbc66e04a8aeaef18774a004c7dae31967", "title": "Customers Behavior Modeling by Semi-Supervised Learning in Customer Relationship Management", "text": "Leveraging the power of increasing amounts of data to analyze customer base for attracting and retaining the most valuable customers is a major problem facing companies in this information age. Data mining technologies extract hidden information and knowledge from large data stored in databases or data warehouses, thereby supporting the corporate decision making process. CRM uses data mining (one of the elements of CRM) techniques to interact with customers. This study investigates the use of a technique, semi-supervised learning, for the management and analysis of customer-related data warehouse and information. The idea of semi-supervised learning is to learn not only from the labeled training data, but to exploit also the structural information in additionally available unlabeled data. The proposed semi-supervised method is a model by means of a feed-forward neural network trained by a back propagation algorithm (multi-layer perceptron) in order to predict the category of an unknown customer (potential customers). In addition, this technique can be used with Rapid Miner tools for both labeled and unlabeled data."}
{"_id": "14d644c617b1bdf57d626d4d14f7210f99d140e6", "title": "MAUI: making smartphones last longer with code offload", "text": "This paper presents MAUI, a system that enables fine-grained energy-aware offload of mobile code to the infrastructure. Previous approaches to these problems either relied heavily on programmer support to partition an application, or they were coarse-grained requiring full process (or full VM) migration. MAUI uses the benefits of a managed code environment to offer the best of both worlds: it supports fine-grained code offload to maximize energy savings with minimal burden on the programmer. MAUI decides at run-time which methods should be remotely executed, driven by an optimization engine that achieves the best energy savings possible under the mobile device's current connectivity constrains. In our evaluation, we show that MAUI enables: 1) a resource-intensive face recognition application that consumes an order of magnitude less energy, 2) a latency-sensitive arcade game application that doubles its refresh rate, and 3) a voice-based language translation application that bypasses the limitations of the smartphone environment by executing unsupported components remotely."}
{"_id": "9e304b2f8dad82e0685c0be52a0860fc702a21af", "title": "Evaluation of Talent Management on Employees Performance in Beverage Industry : A Case of Delmonte Kenya Limited", "text": "Talent management practice within organization is an international human resource strategy that seeks to identify, develop, deploy and retain talented and high potential employees. The objectives of the study were to determine the effect of talent retention on employees performance, assess how talent attraction impacts, effect of learning and development on employees performance in beverage industry in Kenya. The study adopted a descriptive research design in which the target population of 2,500 employees of Del Monte Kenya. The study used stratified sampling method to select 83 employees according to their job cadres. Descriptive statistics such as the standard deviation, percentages and frequency distribution were used. The study established that the job retention motivated the employees of Del Monte leading to ultimate performance. The study recommended that the management should ensure the work environment was attractive to the employees so as to motivate, thus leading to better performance."}
{"_id": "42d7f8895b31ac0e1a4d30a3ec200955655d6100", "title": "Kinematics and dynamics analysis of a 2-DOF spherical parallel robot", "text": "In this paper, the kinematic and dynamic analysis of a 2-degree-of-freedom spherical parallel manipulator is presented. The mobility and motion pattern of the manipulator are analyzed using screw theory, which resulted in obtaining the direct and inverse Jacobian matrices. Position, velocity and acceleration relations between different parts of the manipulator and the actuated joint angles are obtained, which are prerequisite for dynamic analysis. Since the manipulator, belongs to a class of parallel mechanism known as over-constrained mechanism, thus to obtain a dynamical model, a modification should be applied in its kinematic arrangement by preserving the performed motion pattern. Finally, accuracy of the dynamical model is verified by comparing the results obtained from the formulated model with results obtained from a SimMechanics model of the under study manipulator."}
{"_id": "758e8d23b9336a4961e095664c247cb4cca3620e", "title": "A FRAMEWORK FOR DETERMINING CRITICAL SUCCESS FACTORS INFLUENCING CONSTRUCTION BUSINESS PERFORMANCE", "text": "Recent reports by Latham (1994) and Egan (1998) have emphasized the need for the construction industry to increase its competitiveness, and have suggested the use of performance measurement as a tool for continuous improvement. Comprehensive measurement of a company\u2019s performance and subsequent feedback to its managers is vital for business transformation. Measurement also enables businesses to be compared with each other on the basis of standardized information, allowing best practices to be identified and applied more widely. Historically, business performance has been determined principally through the use of financial performance criteria, but recently, it has been established that performance measurement needs to go beyond this. This paper reviews the various financial and non-financial factors that influence construction business. A methodology for defining the critical success factors relevant to construction businesses is further outlined. The critical success factors associated with construction business are also reviewed. Finally, it is concluded that more important than presenting a list of critical success factors (which are bound to change) is the need to have a holistic framework for identifying and implementing the required success factors."}
{"_id": "59519f319030b69d42e9b431e9f0e87987ee34c7", "title": "Interactive Visualization of Small World Graphs", "text": "Many real world graphs have small world characteristics, that is, they have a small diameter compared to the number of nodes and exhibit a local cluster structure. Examples are social networks, software structures, bibliographic references and biological neural nets. Their high connectivity makes both finding a pleasing layout and a suitable clustering hard. In this paper we present a method to create scalable, interactive visualizations of small world graphs, allowing the user to inspect local clusters while maintaining a global overview of the entire structure. The visualization method uses a combination of both semantical and geometrical distortions, while the layout is generated by a spring embedder algorithm using recently developed force model. We use a cross referenced database of 500 artists as a running example"}
{"_id": "78511f4cfe53d3436fce32c8ef87c241ff298165", "title": "PENS-wheel (one-wheeled self balancing vehicle) balancing control using PID controller", "text": "Pens-Wheel is an electric vehicle which uses one wheel that able to balance itself so the rider does not fall forward or backward while riding it. This vehicle uses one brushless DC motor as actuator which capable to rotate in both directions symmetrically. The vehicle uses a combination of accelerometer and gyroscope contained in IMU (Inertial Measurement Unit) for balancing sensor. The controlled motion on the vehicle occurs only on the x-axis (pitch angle), in forward and backward directions. The PID (Proportional Integral Derivative) control algorithm is used to maintain the balance and movement of the vehicle. From the simulation and application in the real vehicle, the use of PID control is capable driving the vehicle in maintaining the balance condition within \u00b110\u00b0 tilt angle boundary on flat surface, bumpy road, and inclining road up to 15\u00b0 slope."}
{"_id": "8efbb07f1ed1879a32574e7d3940063e9dafbadb", "title": "A Comparative Analysis of Crowdsourced Natural Language Corpora for Spoken Dialog Systems", "text": "Recent spoken dialog systems have been able to recognize freely spoken user input in restricted domains thanks to statistical methods in the automatic speech recognition. These methods require a high number of natural language utterances to train the speech recognition engine and to assess the quality of the system. Since human speech offers many variants associated with a single intent, a high number of user utterances have to be elicited. Developers are therefore turning to crowdsourcing to collect this data. This paper compares three different methods to elicit multiple utterances for given semantics via crowd sourcing, namely with pictures, with text and with semantic entities. Specifically, we compare the methods with regard to the number of valid data and linguistic variance, whereby a quantitative and qualitative approach is proposed. In our study, the method with text led to a high variance in the utterances and a relatively low rate of invalid data."}
{"_id": "bc93a5d09aad16356808843338bdd34df6b5b01b", "title": "DEEP-SEE FACE: A Mobile Face Recognition System Dedicated to Visually Impaired People", "text": "In this paper, we introduce the DEEP-SEE FACE framework, an assistive device designed to improve cognition, interaction, and communication of visually impaired (VI) people in social encounters. The proposed approach jointly exploits computer vision algorithms (region proposal networks, ATLAS tracking and global, and low-level image descriptors) and deep convolutional neural networks in order to detect, track, and recognize, in real-time, various persons existent in the video streams. The major contribution of the paper concerns a global, fixed-size face representation that takes into the account of various video frames while remaining independent of the length of the image sequence. To this purpose, we introduce an effective weight adaptation scheme that is able to determine the relevance assigned to each face instance, depending on the frame degree of motion/camera blur, scale variation, and compression artifacts. Another relevant contribution involves a hard negative mining stage that helps us differentiating between known and unknown face identities. The experimental results, carried out on a large-scale data set, validate the proposed methodology with an average accuracy and recognition rates superior to 92%. When tested in real life, indoor/outdoor scenarios, the DEEP-SEE FACE prototype proves to be effective and easy to use, allowing the VI people to access visual information during social events."}
{"_id": "41c8ff7475225c9dfe9cedc25c038e9216345cc6", "title": "Hashtag recommendation for hyperlinked tweets", "text": "Presence of hyperlink in a tweet is a strong indication of tweet being more informative. In this paper, we study the problem of hashtag recommendation for hyperlinked tweets (i.e., tweets containing links to Web pages). By recommending hashtags to hyperlinked tweets, we argue that the functions of hashtags such as providing the right context to interpret the tweets, tweet categorization, and tweet promotion, can be extended to the linked documents. The proposed solution for hashtag recommendation consists of two phases. In the first phase, we select candidate hashtags through five schemes by considering the similar tweets, the similar documents, the named entities contained in the document, and the domain of the link. In the second phase, we formulate the hashtag recommendation problem as a learning to rank problem and adopt RankSVM to aggregate and rank the candidate hashtags. Our experiments on a collection of 24 million tweets show that the proposed solution achieves promising results."}
{"_id": "cfbd8df065de6d29cb12a083521a4543a7052a7e", "title": "Students \u2019 Attitude towards Science and Technology", "text": "This study assessed attitudes towards science and technology middle school students. The population included all 3rd grade students a total of 230 students (105 female and 125 male) chose through stratified random sampling method. Research instrument was the Persian translation of the Science Education questionnaire. Data analyzed by SPSS version 17.00. Reliability of the scale calculated by Cronbach's alpha coefficient (0.91). Results indicated that there is a positive attitude towards science and technology among students. However, there was not a positive attitude towards some items of science and technology. The results also showed that there is a meaningful difference between males and females points of views in attitude towards sciences and technology. According to this result, males have higher averages than the females. The results of this research provide important information about students' attitude towards science and could be used by science teachers and educators to development of science curricula and science books."}
{"_id": "803e11535f2c499886c54379628d6346ab8de511", "title": "Intelligent fractional-order PID (FOPID) heart rate controller for cardiac pacemaker", "text": "Efficient and robust control of cardiac pacemaker is essential for providing life-saving control action to regulate Heart Rate (HR) in a dynamic environment. Several controller designs involving proportional-integral-derivative (PID) and fuzzy logic controllers (FLC) have been reported but each have their limitations to face the dynamic challenge of regulating HR. Fractional-order control (FOC) systems provide controllers that are described by fractional-order differential equations that offers fine tuning of the control parameters to provide robust and efficient performance. In this work a robust fractional-order PID (FOPID) controller is designed based on Ziegler-Nichols tuning method. The stable FOPID controller outperformed PID controllers with different tuning methods and also the FLC in terms of rise time, settling time and % overshoot. The FOPID controller also demonstrated feasibility for rate-adaptive pacing. However, the FOPID controller designed in this work is not optimal and is limited by the tuning procedure. More efficient design using optimization techniques such as particle swarm intelligence or genetic algorithm tuning can offer optimal control of the cardiac pacemaker."}
{"_id": "82a7eaddeed4ddf49a423f8d3b3079d9e39b9579", "title": "Design of charge pump circuit with consideration of gate-oxide reliability in low-voltage CMOS processes", "text": "A new charge pump circuit with consideration of gateoxide reliability is designed with two pumping branches in this paper. The charge transfer switches in the new proposed circuit can be completely turned on and turned off, so its pumping efficiency is higher than that of the traditional designs. Moreover, the maximum gate-source and gate-drain voltages of all devices in the proposed charge pump circuit do not exceed the normal operating power supply voltage (VDD). Two test chips have been implemented in a 0.35m 3.3-V CMOS process to verify the new proposed charge pump circuit. The measured output voltage of the new proposed four-stage charge pump circuit with each pumping capacitor of 2 pF to drive the capacitive output load is around 8.8 V under 3.3-V power supply (VDD = 3 3 V), which is limited by the junction breakdown voltage of the parasitic pn-junction in the given process. The new proposed circuit is suitable for applications in low-voltage CMOS processes because of its high pumping efficiency and no overstress across the gate oxide of devices."}
{"_id": "b90cec9c28f62b9ccd50d9f8a21a0166285060fb", "title": "23.8 A 34V charge pump in 65nm bulk CMOS technology", "text": "Recent advances in MEMS-based oscillators have resulted in their proliferation in timing applications that were once exclusive to quartz-based devices [1]. For applications requiring low phase noise - e.g., cellular, GPS and high-speed serial links - one possible approach is to bias the MEMS resonator at a higher DC voltage to reduce its motional impedance and increase signal energy [2]. Realizing high-voltage charge pumps in bulk CMOS technology is limited by the breakdown voltage of the well/substrate diodes shown in Fig. 23.8.1(a) and Fig. 23.8.1(b). This breakdown limit is even lower with technology scaling and is <;10V in a 22nm CMOS node. Systems with high-voltage requirements often resort to older, high-voltage-tolerant nodes or exotic technologies that limit MEMS integration into SoCs. This work demonstrates a charge pump design in 65nm technology with a three-fold increase in the output voltage range. Highvoltage tolerance is enabled by the proposed well-biasing arrangement and oxide isolation. The pump achieves 34V output by using three different charge pump cells that tradeoff achievable voltage range and power efficiency to achieve a peak efficiency of 38%. Additionally, finger capacitors are optimized to ensure reliability while maintaining efficiency."}
{"_id": "c9f7f785ecdfd86331defaeca585c0ad624c6321", "title": "Ultra-High-Voltage Charge Pump Circuit in Low-Voltage Bulk CMOS Processes With Polysilicon Diodes", "text": "An on-chip ultra-high-voltage charge pump circuit realized with the polysilicon diodes in the low-voltage bulk CMOS process is proposed in this work. Because the polysilicon diodes are fully isolated from the silicon substrate, the output voltage of the charge pump circuit is not limited by the junction breakdown voltage of MOSFETs. The polysilicon diodes can be implemented in the standard CMOS processes without extra process steps. The proposed ultra-high-voltage charge pump circuit has been fabricated in a 0.25-mum 2.5-V standard CMOS process. The output voltage of the four-stage charge pump circuit with 2.5-V power-supply voltage (VDD=2.5 V) can be pumped up to 28.08 V, which is much higher than the n-well/p-substrate breakdown voltage (~18.9 V) in a 0.25-mum 2.5-V bulk CMOS process"}
{"_id": "e962faa6692bcb9a614f35645de4bbed0beff720", "title": "Complex thermoelectric materials.", "text": "Thermoelectric materials, which can generate electricity from waste heat or be used as solid-state Peltier coolers, could play an important role in a global sustainable energy solution. Such a development is contingent on identifying materials with higher thermoelectric efficiency than available at present, which is a challenge owing to the conflicting combination of material traits that are required. Nevertheless, because of modern synthesis and characterization techniques, particularly for nanoscale materials, a new era of complex thermoelectric materials is approaching. We review recent advances in the field, highlighting the strategies used to improve the thermopower and reduce the thermal conductivity."}
{"_id": "76a72e8b93a64f14ada2bb3b76b76b7cd89923f7", "title": "AVANT-GUARD: scalable and vigilant switch flow management in software-defined networks", "text": "Among the leading reference implementations of the Software Defined Networking (SDN) paradigm is the OpenFlow framework, which decouples the control plane into a centralized application. In this paper, we consider two aspects of OpenFlow that pose security challenges, and we propose two solutions that could address these concerns. The first challenge is the inherent communication bottleneck that arises between the data plane and the control plane, which an adversary could exploit by mounting a \"control plane saturation attack\" that disrupts network operations. Indeed, even well-mined adversarial models, such as scanning or denial-of-service (DoS) activity, can produce more potent impacts on OpenFlow networks than traditional networks. To address this challenge, we introduce an extension to the OpenFlow data plane called \"connection migration\", which dramatically reduces the amount of data-to-control-plane interactions that arise during such attacks. The second challenge is that of enabling the control plane to expedite both detection of, and responses to, the changing flow dynamics within the data plane. For this, we introduce \"actuating triggers\" over the data plane's existing statistics collection services. These triggers are inserted by control layer applications to both register for asynchronous call backs, and insert conditional flow rules that are only activated when a trigger condition is detected within the data plane's statistics module. We present Avant-Guard, an implementation of our two data plane extensions, evaluate the performance impact, and examine its use for developing more scalable and resilient SDN security services."}
{"_id": "6c3877672f52dac3ed34be1614c981775d1779da", "title": "Less-structured time in children's daily lives predicts self-directed executive functioning", "text": "Executive functions (EFs) in childhood predict important life outcomes. Thus, there is great interest in attempts to improve EFs early in life. Many interventions are led by trained adults, including structured training activities in the lab, and less-structured activities implemented in schools. Such programs have yielded gains in children's externally-driven executive functioning, where they are instructed on what goal-directed actions to carry out and when. However, it is less clear how children's experiences relate to their development of self-directed executive functioning, where they must determine on their own what goal-directed actions to carry out and when. We hypothesized that time spent in less-structured activities would give children opportunities to practice self-directed executive functioning, and lead to benefits. To investigate this possibility, we collected information from parents about their 6-7 year-old children's daily, annual, and typical schedules. We categorized children's activities as \"structured\" or \"less-structured\" based on categorization schemes from prior studies on child leisure time use. We assessed children's self-directed executive functioning using a well-established verbal fluency task, in which children generate members of a category and can decide on their own when to switch from one subcategory to another. The more time that children spent in less-structured activities, the better their self-directed executive functioning. The opposite was true of structured activities, which predicted poorer self-directed executive functioning. These relationships were robust (holding across increasingly strict classifications of structured and less-structured time) and specific (time use did not predict externally-driven executive functioning). We discuss implications, caveats, and ways in which potential interpretations can be distinguished in future work, to advance an understanding of this fundamental aspect of growing up."}
{"_id": "749a6d0ff96acd02428ad8c61b5b9d53fe71ff77", "title": "SECURITY ENHANCEMENT OF ADVANCED ENCRYPTION STANDARD (AES) USING TIME-BASED DYNAMIC KEY GENERATION", "text": "Login is the first step which is conducted every time we access to a system and only granted to those who are entitled. Login is very important and it is a part of the security of a system. Registered user or guest and administrator are two kinds of users that have the privilege and access. Nowadays, wrongdoers are always on the dark side and frequently try to gain access to the system. To provide a better security, it is important to enhance the access mechanism and to evaluate the authentication process. Message-Digest 5 (MD5) is one of the algorithms that commonly used in the login system. Although it has been so popular, but the algorithm is still vulnerable to dictionary attacks and rainbow tables. In addition to the hash function algorithm, Advanced Encryption Standard (AES) algorithm alternatively can be the best choice in the login authentication process. This study is proposed to develop a dynamic key generation on the AES algorithm using the function of time. The experimental results obtained that AES key can be generated at random based on the value of the time when a user logs in with a particular active period. In this implementation, the security authentication becomes stronger because of changes in key generating chipertext changes for each encryption process. Based on the time as a valuable benchmark, the result shown that AES encryption-decryption process is relatively fast with a time average about 0.0023 s."}
{"_id": "532b273366aefbfd3ac598f11d4331603c4716b5", "title": "Efficiently solving dynamic Markov random fields using graph cuts", "text": "In this paper, we present a fast new fully dynamic algorithm for the st-mincut/max-flow problem. We show how this algorithm can be used to efficiently compute MAP estimates for dynamically changing MRF models of labeling problems in computer vision, such as image segmentation. Specifically, given the solution of the max-flow problem on a graph, we show how to efficiently compute the maximum flow in a modified version of the graph. Our experiments showed that the time taken by our algorithm is roughly proportional to the number of edges whose weights were different in the two graphs. We test the performance of our algorithm on one particular problem: the object-background segmentation problem for video and compare it with the best known st-mincut algorithm. The results show that the dynamic graph cut algorithm is much faster than its static counterpart and enables real time image segmentation. It should be noted that our method is generic and can be used to yield similar improvements in many other cases that involve dynamic change in the graph"}
{"_id": "fde20b8465ed90af100979efe92101da7e84529a", "title": "Blur-invariant copy-move forgery detection technique with improved detection accuracy utilising SWT-SVD", "text": "Majority of the existing copy-move forgery detection algorithms operate based on the principle of image block matching. However, such detection becomes complicated when an intelligent adversary blurs the edges of forged region(s). To solve this problem, the authors present a novel approach for detection of copy-move forgery using stationary wavelet transform (SWT) which, unlike most wavelet transforms (e.g. discrete wavelet transform), is shift invariant, and helps in finding the similarities, i.e. matches and dissimilarities, i.e. noise, between the blocks of an image, caused due to blurring. The blocks are represented by features extracted using singular value decomposition (SVD) of an image. Also, the concept of colour-based segmentation used in this work helps to achieve blur invariance. The authors\u2019 experimental results prove the efficiency of the proposed method in detection of copy-move forgery involving intelligent edge blurring. Also, their experimental results prove that the performance of the proposed method in terms of detection accuracy is considerably higher compared with the state-of-theart."}
{"_id": "95de1927a1c07a2125551e0d94491e981e78dc98", "title": "Automatic Service Discovery of IP Cameras over Wide Area Networks with NAT Traversal", "text": "A novel framework for remote service discovery and access of IP cameras with Network address Translation (NAT) traversal is presented in this paper. The proposed protocol, termed STDP (Service Trader Discovery Protocol), is a hybrid combination of Zeroconf and SIP (Session Initial Protocol). The Zeroconf is adopted for the discovery and/or publication of local services; whereas, the SIP is used for the delivery of local services to the remote nodes. In addition, both the SIP-ALG (Application Layer Gateway) and UPnP (Universal Plug and Play)-IGD (Internet Gateway Device) protocols are used for NAT traversal. The proposed framework is well-suited for high mobility applications where the fast deployment and low administration efforts of IP cameras are desired."}
{"_id": "ae3789827ffee3e0bdc5b1a0addb7bb4a79d42cf", "title": "Impact of Technostress on End-User Satisfaction and Performance", "text": "Organizational use of information and communications technologies (ICT) is increasingly resulting in negative cognitions in individuals, such as information overload and interruptions. recent literature has encapsulated these cognitions in 304 TarafDar, Tu, aND ragu-NaThaN the concept of technostress, which is stress caused by an inability to cope with the demands of organizational computer usage. given the critical role of the user in organizational information processing and accomplishing application-enabled workflows, understanding how these cognitions affect users\u2019 satisfaction with ICT and their performance in ICT-mediated tasks is an important step in appropriating benefits from current computing environments. The objective of this paper is to (1) understand the negative effects of technostress on the extent to which end users perceive the applications they use to be satisfactory and can utilize them to improve their performance at work and (2) identify mechanisms that can mitigate these effects. Specifically, we draw from the end-user computing and technostress literature to develop and validate a model that analyzes the effects of factors that create technostress on the individual\u2019s satisfaction with, and task performance using, ICT. The model also examines how user involvement in ICT development and support mechanisms for innovation can be used to weaken technostress-creating factors and their outcomes. The results, based on survey data analysis from 233 ICT users from two organizations, show that factors that create technostress reduce the satisfaction of individuals with the ICT they use and the extent to which they can utilize ICT for productivity and innovation in their tasks. Mechanisms that facilitate involvement of users, and encourage them to take risks, learn, explore new ideas, and experiment in the context of ICT use, diminish the factors that create technostress and increase satisfaction with the ICT they use. These mechanisms also have a positive effect on users\u2019 appropriation of ICT for productivity and innovation in their tasks. The paper contributes to emerging literature on negative outcomes of ICT use by (1) highlighting the influence of technostress on users\u2019 satisfaction and performance (i.e., productivity and innovation in ICT-mediated tasks) with ICT, (2) extending the literature on technostress, which has so far looked largely at the general behavioral and psychological domains, to include the domain of end-user computing, and (3) demonstrating the importance of user involvement and innovation support mechanisms in reducing technostress-creating conditions and their ICT use\u2013related outcomes. Key wordS and phraSeS: end-user performance, end-user satisfaction, ICT use, information overload, survey research, technostress, user involvement. organizaTional uSe of inforMaTion and coMMunicaTionS TechnologieS (icT) has become complex, real-time, ubiquitous, and functionally pervasive, often requiring users to process information simultaneously and continually from different applications and devices. Consequently, ICT users deal with a surfeit of information, experience frequent interruptions from different computing devices and applications, and engage in multitasking on them. 
further, they are increasingly frustrated and overwhelmed by continual efforts required to master the frequent introduction of new ICT. In recent times, therefore, managers have experienced negative cognitions toward ICT. recent academic literature has encapsulated these cognitions in the concept of technostress [65], which is stress caused by an inability to cope with the demands of organizational computer usage. Technostress describes the stress that users experience as a result of application multitasking, constant connectivity, information overload, frequent system upgrades and consequent uncertainty, continual relearning and consequent job-related IMpaCT Of TEChNOSTrESS ON END-uSEr SaTISfaCTION aND pErfOrMaNCE 305 insecurities, and technical problems associated with the organizational use of ICT. from the perspective of psychological outcomes, technostress reduces individuals\u2019 job satisfaction and commitment to their organization [65]. from the point of view of behavioral outcomes, it reduces individuals\u2019 productivity at work [75]. What is the potential effect of these negative cognitions on outcomes relating to the individual\u2019s use of ICT? There is research evidence [8, 35, 63] that in spite of continuing sophistication in the functional capabilities of ICT, technology overload and ICT-mediated interruptions reduce the satisfaction of users with the ICT they employ for their tasks and their ability to benefit from them. Emerging practitioner perspectives [22, 78] reinforce these findings, suggesting that excessive information, frequent upgrades, and blurring of work\u2013home boundaries induced by pervasive connectivity result in inaccurate information processing and poor task-related decision making using ICT and in dissatisfaction with ICT. at the same time, organizational computing environments have an important role for the end user in generating, accessing, analyzing, and using business information and in accomplishing application-enabled workflows. In such environments, it is critical that end users be satisfied with the applications and systems they interact with and work on and be able to effectively use them to enhance the quality and efficiency of their work tasks [16, 45, 81]. understanding how these \u201cnegative\u201d or \u201cunconstructive\u201d aspects of ICT and the negative cognitions associated with them affect user satisfaction and ICT-mediated task performance is therefore clearly an important step in managing and appropriating benefits from current commuting environments. research in this area is emerging and scarce, and there is absence of (1) a theoretical framework for understanding and (2) systematic empirical investigation for demonstrating how these aspects impact the user in his or her day-to-day application of ICT to organizational tasks. recognizing that the concept of technostress encompasses user-perceived negative cognitions about ICT, we apply the technostress lens to this research gap and enunciate a twofold objective of this paper. The first objective is to understand the (adverse) effects of technostress on the extent to which end users perceive the applications they use to be satisfactory and can utilize them to improve their performance at work. Second, we identify mechanisms that can reduce these effects. 
Specifically, we draw from the end-user computing and organizational stress literature to develop and validate a model that analyzes the effects of factors that create technostress on end-user satisfaction and end-user performance. We further study how stress-alleviating factors, such as mechanisms that facilitate user involvement in ICT development and those that support innovation, can be used to weaken technostresscreating conditions and their outcomes. The results, based on survey data analysis from 233 ICT users from two organizations, show that factors that create techno stress reduce users\u2019 satisfaction with the ICT they use and the extent to which they utilize ICT for productivity and innovation in their tasks. Mechanisms that facilitate involvement of users and encourage them to take risks, learn, explore new ideas, and experiment in the context of ICT use diminish the factors that create technostress for them and increase their satisfaction with the ICT they use. These mechanisms also have a positive effect on users\u2019 appropriation of ICT for productivity and innovation in their tasks. The paper 306 TarafDar, Tu, aND ragu-NaThaN contributes to emerging literature on \u201cnegative\u201d outcomes of organizational ICT use by (1) highlighting the importance of technostress on users\u2019 satisfaction with ICT and on their performance\u2014that is, their productivity and innovation\u2014in ICT-mediated tasks, (2) extending the literature on technostress, which has so far looked largely at the general behavioral and psychological domains, to include the domain of end-user computing, and (3) demonstrating the importance of user involvement and innovation support mechanisms in reducing technostress-creating conditions and outcomes. Next, we provide a theoretical background by reviewing studies on stress and technostress, and then we develop the research model and hypotheses. In the fourth section, we describe methods and validate the model. The paper closes with a discussion of the findings and research and managerial implications. Theoretical Background Organizational Stress STreSS iS a cogniTive STaTe experienced by an individual when there is an \u201cenvironmental situation that is perceived as presenting a demand which threatens to exceed the person\u2019s capabilities and resources for meeting it, under conditions where he or she expects a substantial differential in the rewards and costs from meeting the demand versus not meeting it\u201d [56, p. 1351]. It is a reaction to the perceived imbalance between a person and the environment [9, 19], borne out of anticipation of an inability to adequately respond to demand from the imbalance and accompanied by the expectation of negative consequences for inadequate response [56]. as shown in figure 1, the phenomenon of stress consists of three aspects\u2014stressors, strain, and situational variables [52]. Stressors represent factors or conditions that create stress. These conditions could be due to the individual\u2019s role [50] and task [56]. role stress and task stress have been widely studied. More recently, technology in general and ICT in particular have Figure 1. general relationships among Stressors, Strain, and Situational Variables IMpaCT Of TEChNOSTrESS ON END-uSEr SaTISfaCTION aND pErfOrMaNCE 307 emerged as conditions for the cause of stress [20]; this is the focus of technostress. Strain represents the outcome of stress. Individuals experience strain as a result of being ex"}
{"_id": "9d033b54f7b5278952a3c6e0aa4d9f4b08e3030d", "title": "WreckWatch: Automatic Traffic Accident Detection and Notification with Smartphones", "text": "Traffic accidents are one of the leading causes of fatalities in the US. An important indicator of survival rates after an accident is the time between the accident and when emergency medical personnel are dispatched to the scene. Eliminating the time between when an accident occurs and when first responders are dispatched to the scene decreases mortality rates by 6%. One approach to eliminating the delay between accident occurrence and first responder dispatch is to use in-vehicle automatic accident detection and notification systems, which sense when traffic accidents occur and immediately notify emergency personnel. These in-vehicle systems, however, are not available in all cars and are expensive to retrofit for older vehicles. This paper describes how smartphones, such as the iPhone and Google Android platforms, can automatically detect traffic accidents using accelerometers and accoustic data, immediately notify a central emergency dispatch server after an accident, and provide situational awareness through photographs, GPS coordinates, VOIP communication channels, and accident data recording. This paper provides the following contributions to the study of detecting traffic accidents via smartphones: (1) we present a formal model for accident detection that combines sensors and context data, (2) we show how smartphone sensors, network connections, and web services can be used to provide situational awarenss to first responders, and (3) we provide empirical results demonstrating the efficacy of different approaches employed by smartphone accident detection systems to prevent false positives. J. White and H. Turner Dept. of Electrical and Computer Engineering Virginia Tech E-mail: {julesw,hturner0}@vt.edu C. Thompson, B. Dougherty, and D.C. Schmidt Dept. of Electrical Engineering and Computer Science Vanderbilt University E-mail: {cthompson,briand,schmidt}@dre.vanderbilt.edu"}
{"_id": "588b9fae704c1964d639a5f87c3793a1ad354e69", "title": "An IT Balance Scorecard Design under Service Management Philosophy", "text": "The present study proposes a design for an Information Technology Balanced Scorecard (IT BSC) that integrates with business and environment and balances and optimizes deployment and control of the Information Technology strategy. This Strategic and Tactical IT Planning model enhances IT's role in obtaining and measuring its contribution to business value: optimizing IT operational efficiency or proposing new business ideas with a high IT component. The proposed model stimulates innovation in each of the parts of the IT Services lifecycle, is based on IT Service Management best practices and uses ideas from the ITIL v3 standard."}
{"_id": "7a6a54e8cf488ccb30b12e4bb54e40fe6176fbb0", "title": "Clustering Based Approach for Customers \u2019 Classification from Electrical Distribution Systems", "text": "The paper presents a clustering based approach for consumers\u2019 classification in representative categories characterized by typical load profiles. For determination of the consumption categories, every customer must be characterized by the following primary information: daily (monthly) energy consumption, average load and peak load. The used database contains the daily load curves corresponding to the small customers from a rural distribution system from Romania. The obtained results it demonstrated that the proposed approach can be used with the success in building specific tariff structures of the customers or in the optimal operation and planning of distribution systems."}
{"_id": "35e0110e4ec46edd06fe954c54a5dec02d36be3d", "title": "The Media ' s Influence on Body Image Disturbance and Eating Disorders : We ' ve Reviled Them , Now Can We Rehabilitate Them ?", "text": "Survey, correlational, randomized control, and covariance structure modeling investigations indicate that the media are a significant factor in the development and maintenance of eating and shape-related disorders. One specific individual difference variable, intemalization of societal pressures regarding prevailing standards of attractiveness, appears to moderate or even mediate the media's effects on women's body satisfaction and eating dysfunction. Problematic media messages inherent in existing media portrayals of eating disorders are apparent, leading researchers to pinpoint intervention strategies that might counteract such viewpoints. Social activism and social marketing approaches are suggested as methods for fighting negative media messages. The media itself is one potential vehicle for communicating productive, accurate, and deglamorized messages about eating and shape-related disorders."}
{"_id": "1fb5fa0389b506a557e9334cadbcdc56ea556eac", "title": "Statistical Edge Detection: Learning and Evaluating Edge Cues", "text": "We formulate edge detection as statistical inference. This statistical edge detection is data driven, unlike standard methods for edge detection which are model based. For any set of edge detection filters (implementing local edge cues) we use pre-segmented images to learn the probability distributions of filter responses conditioned on whether they are evaluated on or off an edge. Edge detection is formulated as a discrimination task specified by a likelihood ratio test on the filter responses. This approach emphasizes the necessity of modeling the image background (the off-edges). We represent the conditional probability distributions non-parametrically and learn them on two different datasets of 100 (Sowerby) and 50 (South Florida) images. Multiple edges cues, including chrominance and multiple-scale, are combined by using their joint distributions. Hence this cue combination is optimal in the statistical sense. We evaluate the effectiveness of different visual cues using the Chernoff information and Receiver Operator Characteristic (ROC) curves. This shows that our approach gives quantitatively better results than the Canny edge detector when the image background contains"}
{"_id": "83bcd3591d8e5d43d65e9e1e83e4c257f8431d4a", "title": "Single-Fed Low Profile Broadband Circularly Polarized Stacked Patch Antenna", "text": "A simple technique is developed in this communication to increase the axial ratio (AR) bandwidth and achieve good impedance matching of a single-fed low profile circularly polarized (CP) stacked patch antenna. The proposed antenna is composed of a driven patch layer and a parasitic patch layer. The driven patch layer consists of a truncated main patch, a parasitic patch and a probe feeding structure while the stacked patch layer is comprised of five patches. The proposed antenna combines the attractive features such as low profile, wide impedance and AR bandwidths, high gain as well as easiness of design, manufacture and integration. The antenna operating at 6 GHz band is designed and fabricated on an FR4 substrate and the overall volume is 0.8 \u03bb0 \u00d70.8 \u03bb0 \u00d70.09 \u03bb0. Measured results show that the antenna achieves an impedance bandwidth of more than 30% for dB, a 3-dB AR bandwidth of about 20.7%, and a gain level of over 7.9 dBi within the 3-dB AR bandwidth."}
{"_id": "bd8c0d98fd6995c8abec69d5e00f802fa48653d5", "title": "Man-At-The-End attacks: Analysis, taxonomy, human aspects, motivation and future directions", "text": "Man-At-The-End (MATE) attacks and fortifications are difficult to analyze, model, and evaluate predominantly for three reasons: firstly, the attacker is human and, therefore, utilizes motivation, creativity, and ingenuity. Secondly, the attacker has limitless and authorized access to the target. Thirdly, all major protections stand up to a determined attacker till a certain period of time. Digital assets range from business to personal use, from consumer devices to home networks, the public Internet, the cloud, and the Internet of Things \u2013 where traditional computer and network security are inadequate to address MATE attacks. MATE is fundamentally a hard problem. Much of the extant focus to deal with MATE attacks is purely technical; though security is more than just a technical issue. The main objective of the paper is to mitigate the consequences of MATE attacks through the human element of security and highlight the need for this element to form a part of a holistic security strategy alongside the necessary techniques and technologies. This paper contributes by taking software protection (SP) research to a new realm of challenges. Moreover, the paper elaborates the concept of MATE attacks, the different forms, and the analysis of MATE versus insider threats to present a thematic taxonomy of a MATE attack. The ensuing paper also highlights the fundamental concept of digital assets, and the core protection mechanisms and their qualitative comparison against MATE attacks. Finally, we present state-of-the-art trends and cutting-edge future research directions by taking into account only the human aspects for young researchers and professionals. & 2014 Elsevier Ltd. All rights reserved."}
{"_id": "c5b087e456d6ee373d5661ae0bb59813c1dfdc65", "title": "A multiperiod set covering location model for dynamic redeployment of ambulances", "text": "Emergency medical service (EMS) providers continually seek ways to improve system performance particularly the response time to incidents. The demand for ambulances fluctuate throughout the week, depending on the day of week, and even the time of day, therefore EMS operators can improve system performance by dynamic relocation/redeployment of ambulances in response to fluctuating demand patters. The objective of the model is to determine the minimum number of ambulances and their locations for each time cluster in which significant changes in demand pattern occur while meeting coverage requirement with a predetermined reliability. The model is further enhanced by calculating ambulance specific busy probabilities and validated by a comprehensive simulation model. Computational results on experimental data sets and data from an EMS agency are provided. 2006 Elsevier Ltd. All rights reserved."}
{"_id": "abe539ea6ce9ba3f6b9e690d2e731ebfddfbcae7", "title": "Static Scheduling of Synchronous Data Flow Programs for Digital Signal Processing", "text": "Large grain data flow (LGDF) programming is natural and convenient for describing digital signal processing (DSP) systems, but its runtime overhead is costly in real time or cost-sensitive applications. In some situations, designers are not willing to squander computing resources for the sake of programmer convenience. This is particularly true when the target machine is a programmable DSP chip. However, the runtime overhead inherent in most LGDF implementations is not required for most signal processing systems because such systems are mostly synchronous (in the DSP sense). Synchronous data flow (SDF) differs from traditional data flow in that the amount of data produced and consumed by a data flow node is specified a priori for each input and output. This is equivalent to specifying the relative sample rates in signal processing system. This means that the scheduling of SDF nodes need not be done at runtime, but can be done at compile time (statically), so the runtime overhead evaporates. The sample rates can all be different, which is not true of most current data-driven digital signal processing programming methodologies. Synchronous data flow is closely related to computation graphs, a special case of Petri nets. This self-contained paper develops the theory necessary to statically schedule SDF programs on single or multiple processors. A class of static (compile time) scheduling algorithms is proven valid, and specific algorithms are given for scheduling SDF systems onto single or multiple processors."}
{"_id": "3ba399dc9d678a852223953f57f23184204fb360", "title": "Contribution of obesity and abdominal fat mass to risk of stroke and transient ischemic attacks.", "text": "BACKGROUND AND PURPOSE\nWaist circumference has been shown to be a better predictor of cardiovascular risk than body mass index (BMI). Our case-control study aimed to evaluate the contribution of obesity and abdominal fat mass to the risk of stroke and transient ischemic attacks (TIA).\n\n\nMETHODS\nWe recruited 1137 participants: 379 cases with stroke/TIA and 758 regional controls matched for age and sex. Associations between different markers of obesity (BMI, waist-to-hip ratio, waist circumference and waist-to-stature ratio) and risk of stroke/TIA were assessed by using conditional logistic regression adjusted for other risk factors.\n\n\nRESULTS\nBMI showed a positive association with cerebrovascular risk which became nonsignificant after adjustment for physical inactivity, smoking, hypertension, and diabetes (odds ratio 1.18; 95% CI, 0.77 to 1.79, top tertile versus bottom tertile). Markers of abdominal adiposity were strongly associated with the risk of stroke/TIA. For the waist-to-hip ratio, adjusted odds ratios for every successive tertile were greater than that of the previous one (2nd tertile: 2.78, 1.57 to 4.91; 3rd tertile: 7.69, 4.53 to 13.03). Significant associations with the risk of stroke/TIA were also found for waist circumference and waist-to-stature ratio (odds ratio 4.25, 2.65 to 6.84 and odds ratio 4.67, 2.82 to 7.73, top versus bottom tertile after risk adjustment, respectively).\n\n\nCONCLUSIONS\nMarkers of abdominal adiposity showed a graded and significant association with risk of stroke/TIA, independent of other vascular risk factors. Waist circumference and related ratios can better predict cerebrovascular events than BMI."}
{"_id": "37c6a03ce66bf33d4ffeea8d2c4f098df0e91e2a", "title": "Firmaster: Analysis Tool for Home Router Firmware", "text": "As the Internet has changed the way people communicate with each other in everyday life, the number of home Wi-Fi access routers has grown up significantly over the past few years. However, the security of routers used in every household is still in the low level. The common vulnerabilities in routers can be easily exploited by an attacker in order to obtain user\u2019s sensitive information or even compromise the devices to be a part of the botnet network. Therefore, we developed one-stop service firmware analysis tool that can perform both static and dynamic analysis for the router firmware called \u201cFirmaster\u201d. textbfThe program is operated under graphical user interface (GUI) of Qt creator running on the Ubuntu Linux machine. textbfVulnerabilities of firmware analyzed by Firmaster program are based on OWASP\u2019s Top 10 IoT Vulnerabilities 2014. Firmaster contains seven main functions: password cracking, SSL scanning, web static analysis, firmware update analysis, web dynamic analysis, port scanning and the summary report."}
{"_id": "0b364b2093926ba0f4e19dc61f7009a944c21090", "title": "An Approach Based on Tree Kernels for Opinion Mining of Online Product Reviews", "text": "Opinion mining is a challenging task to identify the opinions or sentiments underlying user generated contents, such as online product reviews, blogs, discussion forums, etc. Previous studies that adopt machine learning algorithms mainly focus on designing effective features for this complex task. This paper presents our approach based on tree kernels for opinion mining of online product reviews. Tree kernels alleviate the complexity of feature selection and generate effective features to satisfy the special requirements in opinion mining. In this paper, we define several tree kernels for sentiment expression extraction and sentiment classification, which are subtasks of opinion mining. Our proposed tree kernels encode not only syntactic structure information, but also sentiment related information, such as sentiment boundary and sentiment polarity, which are important features to opinion mining. Experimental results on a benchmark data set indicate that tree kernels can significantly improve the performance of both sentiment expression extraction and sentiment classification. Besides, a linear combination of our proposed tree kernels and traditional feature vector kernel achieves the best performances using the benchmark data set."}
{"_id": "1b845a04a02df6036e34bcfb8f4a75859331b7d1", "title": "Postural hand synergies for tool use.", "text": "Subjects were asked to shape the right hand as if to grasp and use a large number of familiar objects. The chosen objects typically are held with a variety of grips, including \"precision\" and \"power\" grips. Static hand posture was measured by recording the angular position of 15 joint angles of the fingers and of the thumb. Although subjects adopted distinct hand shapes for the various objects, the joint angles of the digits did not vary independently. Principal components analysis showed that the first two components could account for >80% of the variance, implying a substantial reduction from the 15 degrees of freedom that were recorded. However, even though they were small, higher-order (more than three) principal components did not represent random variability but instead provided additional information about the object. These results suggest that the control of hand posture involves a few postural synergies, regulating the general shape of the hand, coupled with a finer control mechanism providing for small, subtle adjustments. Because the postural synergies did not coincide with grip taxonomies, the results suggest that hand posture may be regulated independently from the control of the contact forces that are used to grasp an object."}
{"_id": "ca1d8985e029527cb143278e87012ab22a081724", "title": "An evaluation of educational values of YouTube videos for academic writing", "text": "The aim is to assess the impact of YouTube videos about academic writing and its skills on the writing performance of students. Theoretical perspectives from constructivism and associated learning models are used to inform the purpose of the research. The contextual setting is matriculation students awaiting admission to higher institutions. The population is 40 students belonging to a class aimed at assisting disadvantaged students in their academic writing in Scottsville, Province of KwaZulu-Natal, South Africa. The students are broken into two groups \u2013 control/traditional teaching and the treatment/YouTube facilitated groups. Consequently, a dominant qualitative approach is adopted using focus group discussion, interviews and tests to identify underlying patterns, methods and approaches to best fit academic writing guides and videos to improve user experiences of the media for academic writing. The fundamental results show that positive characterisations of user experiences include innovation, surprise, playfulness and stimulation whereas the narratives that are not satisfying are categorised as dissatisfaction, frustration, dissolution, disappointment, anger, confusion and irritation. Ultimately, the major findings of the research have the potential to improve user experiences on the platform by highlighting how and when the positive and negative experiences of users occur and a mapping of the differences in the academic writing performance between the two groups and best practices. Finally, the results have implications for pedagogy the fitting of YouTube videos to academic writing instruction."}
{"_id": "1f8a36fbd4e9aac45ce5dbaec464003de994e0e6", "title": "BGP-RCN: improving BGP convergence through root cause notification", "text": "This paper presents a new mechanism, called BGP with root cause notification (BGP-RCN), that provides an upper bound of O(d) on routing convergence delay for BGP, where d is the network diameter as measured by the number of AS hops. BGP-RCN lets each routing update message carry the information about the specific cause which triggered the update message. Once a node v receives the first update message triggered by a link failure, v can avoid using any paths that have been obsoleted by the same failure. The basic approach in BGP-RCN is applicable to path vector routing protocols in general. Our analysis and simulation show that BGP-RCN can achieve substantial reduction in both BGP convergence time and the total number of intermediate route changes. 2004 Elsevier B.V. All rights reserved."}
{"_id": "0f3ab6835042ea45d2aab8e4a70151c11ca9a1d6", "title": "Topics in semantic representation.", "text": "Processing language requires the retrieval of concepts from memory in response to an ongoing stream of information. This retrieval is facilitated if one can infer the gist of a sentence, conversation, or document and use that gist to predict related concepts and disambiguate words. This article analyzes the abstract computational problem underlying the extraction and use of gist, formulating this problem as a rational statistical inference. This leads to a novel approach to semantic representation in which word meanings are represented in terms of a set of probabilistic topics. The topic model performs well in predicting word association and the effects of semantic association and ambiguity on a variety of language-processing and memory tasks. It also provides a foundation for developing more richly structured statistical models of language, as the generative process assumed in the topic model can easily be extended to incorporate other kinds of semantic and syntactic structure."}
{"_id": "c4193d67010a54c386bc9a2c75417cd6b0ad67c9", "title": "Performance Counters to Rescue: A Machine Learning based safeguard against Micro-architectural Side-Channel-Attacks", "text": "Micro-architectural side-channel-attacks are presently daunting threats to most mathematically elegant encryption algorithms. Even though there exist various defense mechanisms, most of them come with the extra overhead of implementation. Recent studies have prevented some particular categories of these attacks but fail to address the detection of other classes. This paper presents a generic machine learning based multi-layer detection approach targeting these micro-architectural side-channel-attacks, without concentrating on a single category. The proposed approach work by profiling low-level hardware events using Linux perf event API and then by analyzing these data with some appropriate machine learning techniques. This paper also presents a novel approach, using time-series data, to correlate the execution trace of the adversary with the secret key of encryption for dealing with false-positives and unknown attacks. The experimental results and performance of the proposed approach suggest its superiority with high detection accuracy and low performance overhead."}
{"_id": "b8deb0fcfa16a0dcb46652240c015cef93c711ed", "title": "Crohn's disease of the vulva in a 10-year-old girl.", "text": "Crohn's disease may involve all parts of the gastrointestinal tract and may often involve other organs as well. These non-intestinal affections are termed extraintestinal manifestations. Vulval involvement is an uncommon extraintestinal manifestation of Crohn's disease, and it is very rare in children. Patients with vulval CD typically present with erythema and edema of the labia majora, which progresses to extensive ulcer formation. Vulval Crohn's disease can appear before or after intestinal problems or it may occur simultaneously. We present a 10-year-old girl with intestinal Crohn's disease complicated with perianal skin tags and asymptomatic unilateral labial hypertrophy. The course of her lesion was independent of the intestinal disease and responded significantly to medical treatment including azathioprine and topical steroid. We emphasize that although vulval involvement in childhood is uncommon, Crohn's disease must be considered in the differential diagnosis of nontender, red, edematous lesions of the genital area."}
{"_id": "734c30c1d996e621d7966113ccf23fd515de0a12", "title": "Magic Mirror: A virtual handbag shopping system", "text": "We present an augmented reality system based on Kinect for on-line handbag shopping. The users can virtually try on different handbags on a TV screen at home. They can interact with the virtual handbags naturally, such as sliding a handbag to different positions on their arms and rotating a handbag to see it from different angles. The users can also see how the handbags fit them in different virtual environments other than the current real background. We describe the technical details for building such a system and demonstrate the experimental results."}
{"_id": "53907f28f819b42486db7eb4bd8c5ce5528823f3", "title": "Population genetic structure and ecological niche modelling of the leafhopper Hishimonus phycitis", "text": "Witches\u2019 broom disease of lime, caused by \u2018Candidatus Phytoplasma aurantifolia\u2019, is responsible for major losses of Mexican lime trees in Southern Iran, Oman and the United Arab Emirates. The causative phytoplasma is transmitted by the leafhopper, Hishimonus phycitis. We combined ecological niche modelling with environmental and genetic data for six populations of H. phycitis from Iran and one from Oman. The mitochondrial cytochrome c oxidase I (COI) gene and nine microsatellite DNA markers were used for the genetic analyses. Although the Oman population had specific haplotypes, the COI sequences were highly conserved among all populations studied. In contrast, the microsatellite data divided the populations from Iran and Oman into two separate clades. An analysis of molecular variance indicated a high level of variation within populations. The Mantel test showed no correlation between genetic and geographical distances. Gene flow values were small between the populations from Iran and north of Oman but significantly higher among the Iranian populations supporting the differentiation between Iran and Oman. In addition, we found that patterns of genetic divergence within Iranian populations were associated strongly with divergence in terms of their ecological niches. Data on six climatic variables, including elevation, were used to create ecological niche models. Our results suggest that the genetic differentiation of H. phycitis may be attributable to climatic conditions and/or geographical barriers."}
{"_id": "d326fc66ade9df942dfa34173be3a1c60761c507", "title": "HRM as a motivator to share knowledge-The importance of seeing the whole picture", "text": "Connecting Human Resource Management (HRM) and knowledge transfer through motivation is a new research area. Out of the few existing studies there is a predominance of quantitative studies, which are showing inconclusive results. As a response, this study uses a qualitative micro perspective to investigate how HRM practises influence intrinsicand extrinsic motivation to share knowledge. It is important to divide motivation into intrinsic and extrinsic, as it impacts knowledge sharing differently. Former studies have identified a need to study the whole HRM system, therefore, to capture differences in motivation among employees exposed to the same system, this thesis takes on a single case study approach. Qualitative interviews were held with employees at an MNC that relies on knowledge intensive activities. The findings showed that employees were motivated intrinsically through career development and extrinsically by the performance management system. The supportive climate showed to influence motivation to share knowledge, both directly and indirectly. Job design was shown to work well in combination with other practises. Finally, a key finding was the importance of having an aligned HRM system."}
{"_id": "9e1f14a8ac5d15c495674bfc4abb84649944f637", "title": "Using hybrid data mining and machine learning clustering analysis to predict the turnover rate for technology professionals", "text": "This study applies clustering analysis for data mining and machine learning to predict trends in technology professional turnover rates, including the hybrid artificial neural network and clustering analysis known as the self-organizing map (SOM). This hybrid clustering method was used to study the individual characteristics of turnover trend clusters. Using a transaction questionnaire, we studied the period of peak turnover, which occurs after the Chinese New Year, for individuals divided into various age groups. The turnover trend of technology professionals was examined in well-known Taiwanese companies. The results indicate that the high outstanding turnover trend circle was primarily caused by a lack of inner fidelity identification, leadership and management. Based on cross-verification, the clustering accuracy rate was 92.7%. This study addressed problems related to the rapid loss of key human resources and should help organizations learn how to enhance competitiveness and efficiency. 2012 Elsevier Ltd. All rights reserved."}
{"_id": "0ebf179c7d2cfb25516c74be2186ea8693add00a", "title": "Collaborative Online Examinations: Impacts on Interaction, Learning, and Student Satisfaction", "text": "This paper presents the results of a field experiment on online examinations facilitated by collaboration support systems. In particular, it examines collaborative learning and virtual teams through online examinations as an assessment procedure, compared to traditional examinations. Assessment is increasingly regarded as an important part of the learning process. Applying constructivism and collaborative-learning theories, the collaborative examination process features students' active participation in various phases of the exam process through small group activities online. A 1 times 3 field experiment evaluated the collaborative online exam compared with the traditional in-class exam, and the participatory exam, where students participated in the online exam processes without groups. Data analysis using results from 485 students indicates that collaborative examinations significantly enhance interactions and the sense of an online learning community and result in significantly higher levels of perceived learning"}
{"_id": "1c8c9a7713395e9a176c42e49bc80574a013f89f", "title": "Mining heterogeneous information networks: a structural analysis approach", "text": "Most objects and data in the real world are of multiple types, interconnected, forming complex, heterogeneous but often semi-structured information networks. However, most network science researchers are focused on homogeneous networks, without distinguishing different types of objects and links in the networks. We view interconnected, multityped data, including the typical relational database data, as heterogeneous information networks, study how to leverage the rich semantic meaning of structural types of objects and links in the networks, and develop a structural analysis approach on mining semi-structured, multi-typed heterogeneous information networks. In this article, we summarize a set of methodologies that can effectively and efficiently mine useful knowledge from such information networks, and point out some promising research directions."}
{"_id": "e56e0c5bcfb28ccc7276eb67287cf360d234e17f", "title": "Autonomous Vehicles for Smart and Sustainable Cities: An In-Depth Exploration of Privacy and Cybersecurity Implications", "text": "Amidst rapid urban development, sustainable transportation solutions are required to meet the increasing demands for mobility whilst mitigating the potentially negative social, economic, and environmental impacts. This study analyses autonomous vehicles (AVs) as a potential transportation solution for smart and sustainable development. We identified privacy and cybersecurity risks of AVs as crucial to the development of smart and sustainable cities and examined the steps taken by governments around the world to address these risks. We highlight the literature that supports why AVs are essential for smart and sustainable development. We then identify the aspects of privacy and cybersecurity in AVs that are important for smart and sustainable development. Lastly, we review the efforts taken by federal governments in the US, the UK, China, Australia, Japan, Singapore, South Korea, Germany, France, and the EU, and by US state governments to address AV-related privacy and cybersecurity risks in-depth. Overall, the actions taken by governments to address privacy risks are mainly in the form of regulations or voluntary guidelines. To address cybersecurity risks, governments have mostly resorted to regulations that are not specific to AVs and are conducting research and fostering research collaborations with the private sector."}
{"_id": "3ce30279210ec4df6b0c557a2eef2c0310e6cd21", "title": "Parallel AES algorithm for fast Data Encryption on GPU", "text": "With the improvement of cryptanalysis, More and more applications are starting to use Advanced Encryption Standard (AES) instead of Data Encryption Standard (DES) to protect their information security. However, current implementations of AES algorithm suffer from huge CPU resource consumption and low throughput. In this paper, we studied the technologies of GPU parallel computing and its optimized design for cryptography. Then, we proposed a new algorithm for AES parallel encryption, and designed and implemented a fast data encryption system based on GPU. The test proves that our approach can accelerate the speed of AES encryption significantly."}
{"_id": "7e9732b5ef3234d256761863500a99b148338c84", "title": "Ranking outlier nodes in subspaces of attributed graphs", "text": "Outlier analysis is an important data mining task that aims to detect unexpected, rare, and suspicious objects. Outlier ranking enables enhanced outlier exploration, which assists the user-driven outlier analysis. It overcomes the binary detection of outliers vs. regular objects, which is not adequate for many applications. Traditional outlier ranking techniques focus on either vector data or on graph structures. However, many of today's databases store both, multi dimensional numeric information and relations between objects in attributed graphs. An open challenge is how outlier ranking should cope with these different data types in a unified fashion. In this work, we propose a first approach for outlier ranking in subspaces of attributed graphs. We rank graph nodes according to their degree of deviation in both graph and attribute properties. We describe novel challenges induced by this combination of data types and propose subspace analysis as important method for outlier ranking on attributed graphs. Subspace clustering provides a selected subset of nodes and its relevant attributes in which deviation of nodes can be observed. Our graph outlier ranking (GOutRank) introduces scoring functions based on these selected subgraphs and subspaces. In addition to this technical contribution, we provide an attributed graph extracted from the Amazon marketplace. It includes a ground truth of real outliers labeled in a user experiment. In order to enable sustainable and comparable research results, we publish this database on our website1 as benchmark for future publications. Our experiments on this graph demonstrate the potential and the capabilities of outlier ranking in subspaces of attributed graphs."}
{"_id": "e23f1ad69084eac1e7f958586570beb4814a0f57", "title": "High aspect-ratio 3D microstructures via near-field electrospinning for energy storage applications", "text": "High-aspect-ratio, three dimensional (3D) microstructures have been constructed by a direct-write method via self-aligned near-field electrospinning process. Using papers as the collectors, the process enables both design and fabrication of a variety of 3D structures, such as walls, grids and other configurations based on layer-by-layer deposition of electrospun micro/nano fibers. Afterwards, these fabricated polymeric structures can be mechanically separated from the paper substrate for various applications. In this work, we have successfully fabricated 20-layer, ca. 100 \u03bcm in thickness, 2 \u00d7 2 cm2 3D grid-structures made of PVDF (Polyvinylidene fluoride) electrospun fibers. After a coating process using the dip-and-dry process in the CNT (carbon nanotube) ink, conductive and flexible electrodes made of CNT coated polymer grid-structures have been demonstrated and assembled as a micro-supercapacitor."}
{"_id": "a0a1b3468377da62204e448f92e22a27be2d6b09", "title": "Feature Selection using Multiobjective Optimization for Aspect based Sentiment Analysis", "text": "In this paper, we propose a system for aspect-based sentiment analysis (ABSA) by incorporating the concepts of multi-objective optimization (MOO), distributional thesaurus (DT) and unsupervised lexical induction. The task can be thought of as a sequence of processes such as aspect term extraction, opinion target expression identification and sentiment classification. We use MOO for selecting the most relevant features, and demonstrate that classification with the resulting feature set can improve classification accuracy on many datasets. As base learning algorithms we make use of Support Vector Machines (SVM) for sentiment classification and Conditional Random Fields (CRF) for aspect term and opinion target expression extraction tasks. Distributional thesaurus and unsupervised DT prove to be effective with enhanced performance. Experiments on benchmark setups of SemEval-2014 and SemEval-2016 shared tasks show that we achieve the state of the art on aspect-based sentiment analysis for several languages."}
{"_id": "dd055e906f9d35ebd696f0d883d82ebfc727b313", "title": "A hybridization of cuckoo search and particle swarm optimization for solving optimization problems", "text": "A new hybrid optimization algorithm, a hybridization of cuckoo search and particle swarm optimization (CSPSO), is proposed in this paper for the optimization of continuous functions and engineering design problems. This algorithm can be regarded as some modifications of the recently developed cuckoo search (CS). These modifications involve the construction of initial population, the dynamic adjustment of the parameter of the cuckoo search, and the incorporation of the particle swarm optimization (PSO). To cover search space with balance dispersion and neat comparability, the initial positions of cuckoo nests are constructed by using the principle of orthogonal Lation squares. To reduce the influence of fixed step size of the CS, the step size is dynamically adjusted according to the evolutionary generations. To increase the diversity of the solutions, PSO is incorporated into CS using a hybrid strategy. The proposed algorithm is tested on 20 standard benchmarking functions and 2 engineering optimization problems. The performance of the CSPSO is compared with that of several meta-heuristic algorithms based on the best solution, worst solution, average solution, standard deviation, and convergence rate. Results show that in most cases, the proposed hybrid optimization algorithm performs better than, or as well as CS, PSO, and some other exiting meta-heuristic algorithms. That means that the proposed hybrid optimization algorithm is competitive to other optimization algorithms."}
{"_id": "062d272447f974dc401e6fc2dd4fad1b8300323d", "title": "Generalized Barycentric Coordinates on Irregular Polygons", "text": "In this paper we present an easy computation of a generalized form of barycentric coordinates for irregular, convex n-sided polygons. Triangular barycentric coordinates have had many classical applications in computer graphics, from texture mapping to ray-tracing. Our new equations preserve many of the familiar properties of the triangular barycentric coordinates with an equally simple calculation, contrary to previous formulations. We illustrate the properties and behavior of these new generalized barycentric coordinates through several example applications."}
{"_id": "8bfedec4b58a5339c7bd96e367a2ed3662996c84", "title": "Neural basis of speech perception.", "text": "The functional neuroanatomy of speech processing has been difficult to characterize. One major impediment to progress has been the failure to consider task effects when mapping speech-related processing systems. We summarize a dual-stream model of speech processing that addresses this situation. In this model, a ventral stream processes speech signals for comprehension, and a dorsal stream maps acoustic speech signals to parietal and frontal-lobe articulatory networks. The model assumes that the ventral stream is largely bilaterally organized, although there are important computational differences between the left- and right-hemisphere systems, whereas the dorsal stream is strongly left-hemisphere-dominant."}
{"_id": "105982d16612b070b8e2cea2ae65ef74e95e59c9", "title": "Advances in Knowledge Discovery and Data Mining", "text": "Similarity in people to people (P2P) recommendation in social networks is not symmetric, where both entities of a relationship are involved in the reciprocal process of determining the success of the relationship. The widely used memory-based collaborative filtering (CF) has advantages of effectiveness and efficiency in traditional item to people recommendation. However, the critical step of computation of similarity between the subjects or objects of recommendation in memory-based CF is typically based on a heuristically symmetric relationship, which may be flawed in P2P recommendation. In this paper, we show that memory-based CF can be significantly improved by using a novel asymmetric model of similarity that considers the probabilities of both positive and negative behaviours, for example, in accepting or rejecting a recommended relationship. We present also a unified model of the fundamental principles of collaborative recommender systems that subsumes both user-based and item-based CF. Our experiments evaluate the proposed approach in P2P recommendation in the real world online dating application, showing significantly improved performance over traditional memory-based methods."}
{"_id": "03ac537ecb49c7c79b4335bdb9aa6e5cdb067182", "title": "Fast and Accurate Detection and Classification of Plant Diseases", "text": "We propose and experimentally evaluate a software solution for automatic detection and classification of plant leaf diseases. The proposed solution is an improvement to the solution proposed in [1] as it provides faster and more accurate solution. The developed processing scheme consists of four main phases as in [1]. The following two steps are added successively after the segmentation phase. In the first step we identify the mostlygreen colored pixels. Next, these pixels are masked based on specific threshold values that are computed using Otsu's method, then those mostly green pixels are masked. The other additional step is that the pixels with zeros red, green and blue values and the pixels on the boundaries of the infected cluster (object) were completely removed. The experimental results demonstrate that the proposed technique is a robust technique for the detection of plant leaves diseases. The developed algorithm\u201fs efficiency can successfully detect and classify the examined diseases with a precision between 83% and 94%, and can achieve 20% speedup over the approach proposed in [1]."}
{"_id": "2742509e8ebc68fd150418080e651583d04af165", "title": "Learning temporal features using LSTM-CNN architecture for face anti-spoofing", "text": "Temporal features is important for face anti-spoofing. Unfortunately existing methods have limitations to explore such temporal features. In this work, we propose a deep neural network architecture combining Long Short-Term Memory (LSTM) units with Convolutional Neural Networks (CNN). Our architecture works well for face anti-spoofing by utilizing the LSTM units' ability of finding long relation from its input sequences as well as extracting local and dense features through convolution operations. Our best model shows significant performance improvement over general CNN architecture (5.93% vs. 7.34%), and hand-crafted features (5.93% vs. 10.00%) on CASIA dataset."}
{"_id": "3c66ae1f6f16b0c2807a3f2510ad99c8b05a6772", "title": "Robust kernel density estimation", "text": "In this paper, we propose a method for robust kernel density estimation. We interpret a KDE with Gaussian kernel as the inner product between a mapped test point and the centroid of mapped training points in kernel feature space. Our robust KDE replaces the centroid with a robust estimate based on M-estimation (P. Huber, 1981), The iteratively re-weighted least squares (IRWLS) algorithm for M-estimation depends only on inner products, and can therefore be implemented using the kernel trick. We prove the IRWLS method monotonically decreases its objective value at every iteration for a broad class of robust loss functions. Our proposed method is applied to synthetic data and network traffic volumes, and the results compare favorably to the standard KDE."}
{"_id": "ce8f6ac09e284c042840e0a479538a18d22ba4b0", "title": "Risk and maintenance factors for eating pathology: a meta-analytic review.", "text": "This meta-analytic review of prospective and experimental studies reveals that several accepted risk factors for eating pathology have not received empirical support (e.g., sexual abuse) or have received contradictory support (e.g.. dieting). There was consistent support for less-accepted risk factors(e.g., thin-ideal internalization) as well as emerging evidence for variables that potentiate and mitigate the effects of risk factors(e.g., social support) and factors that predict eating pathology maintenance(e.g., negative affect). In addition, certain multivariate etiologic and maintenance models received preliminary support. However, the predictive power of individual risk and maintenance factors was limited, suggesting it will be important to search for additional risk and maintenance factors, develop more comprehensive multivariate models, and address methodological limitations that attenuate effects."}
{"_id": "a3983077d951e29278a6139fe6dfb41ade9b1df1", "title": "Principal direction analysis-based real-time 3D human pose reconstruction from a single depth image", "text": "Human pose estimation in real-time is a challenging problem in computer vision. In this paper, we present a novel approach to recover a 3D human pose in real-time from a single depth human silhouette using Principal Direction Analysis (PDA) on each recognized body part. In our work, the human body parts are first recognized from a depth human body silhouette via the trained Random Forests (RFs). On each recognized body part which is presented as a set of 3D points cloud, PDA is applied to estimate the principal direction of the body part. Finally, a 3D human pose gets recovered by mapping the principal directional vector to each body part of a 3D human body model which is created with a set of super-quadrics linked by the kinematic chains. In our experiments, we have performed quantitative and qualitative evaluations of the proposed 3D human pose reconstruction methodology. Our evaluation results show that the proposed approach performs reliably on a sequence of unconstrained poses and achieves an average reconstruction error of 7.46 degree in a few key joint angles. Our 3D pose recovery methodology should be applicable to many areas such as human computer interactions and human activity recognition."}
{"_id": "a661452ade5c8209e34f0445a828d1b203da0339", "title": "An experimental study on the applicability of evolutionary algorithms to craniofacial superimposition in forensic identification", "text": "0020-0255/$ see front matter 2009 Elsevier Inc doi:10.1016/j.ins.2008.12.029 * Corresponding author. Tel.: +34 985 456545; fa E-mail address: oscar.ibanez@softcomputing.es ( Photographic supra-projection is a forensic process that aims to identify a missing person from a photograph and a skull found. One of the crucial tasks throughout all this process is the craniofacial superimposition which tries to find a good fit between a 3D model of the skull and the 2D photo of the face. This photographic supra-projection stage is usually carried out manually by forensic anthropologists. It is thus very time consuming and presents several difficulties. In this paper, we aim to demonstrate that real-coded evolutionary algorithms are suitable approaches to tackle craniofacial superimposition. To do so, we first formulate this complex task in forensic identification as a numerical optimization problem. Then, we adapt three different evolutionary algorithms to solve it: two variants of a real-coded genetic algorithm and the state of the art evolution strategy CMA-ES. We also consider an existing binary-coded genetic algorithm as a baseline. Results on several superimposition problems of real-world identification cases solved by the Physical Anthropology lab at the University of Granada (Spain) are considered to test our proposals. 2009 Elsevier Inc. All rights reserved."}
{"_id": "9f908c018a0c57bab2a4a3ee723c818e893fcb1b", "title": "Detail-preserving median based filters in image processing", "text": ""}
{"_id": "a24581086356ea77733a9921c682cb47387a15d7", "title": "Noise adaptive soft-switching median filter", "text": "Existing state-of-the-art switching-based median filters are commonly found to be nonadaptive to noise density variations and prone to misclassifying pixel characteristics at high noise density interference. This reveals the critical need of having a sophisticated switching scheme and an adaptive weighted median filter. We propose a novel switching-based median filter with incorporation of fuzzy-set concept, called the noise adaptive soft-switching median (NASM) filter, to achieve much improved filtering performance in terms of effectiveness in removing impulse noise while preserving signal details and robustness in combating noise density variations. The proposed NASM filter consists of two stages. A soft-switching noise-detection scheme is developed to classify each pixel to be uncorrupted pixel, isolated impulse noise, nonisolated impulse noise or image object's edge pixel. \"No filtering\" (or identity filter), standard median (SM) filter or our developed fuzzy weighted median (FWM) filter will then be employed according to the respective characteristic type identified. Experimental results show that our NASM filter impressively outperforms other techniques by achieving fairly close performance to that of ideal-switching median filter across a wide range of noise densities, ranging from 10% to 70%"}
{"_id": "4bf83c0344280918424cfefbe558188f00d45cb7", "title": "Tri-state median filter for image denoising", "text": "In this work, a novel nonlinear filter, called tri-state median (TSM) filter, is proposed for preserving image details while effectively suppressing impulse noise. We incorporate the standard median (SM) filter and the center weighted median (CWM) filter into a noise detection framework to determine whether a pixel is corrupted, before applying filtering unconditionally. Extensive simulation results demonstrate that the proposed filter consistently outperforms other median filters by balancing the tradeoff between noise reduction and detail preservation."}
{"_id": "005fea45547d08c2ddef7897d77ec73efc38e2e5", "title": "Nonlinear Modeling of the Self-Oscillating Fluxgate Current Sensor", "text": "An accurate mathematical model has been developed in order to describe electrical and geometrical factors affecting operation and DC transfer characteristics of the self-oscillating fluxgate current sensor. The mathematical description of the sensor circuit is based on a continuous function that approximates magnetization curve of the used transformer core. The circuit differential equation based on the adopted transformer model is analytically solved offering a fundamental basis for understanding factors that affect the operation of the sensor. All principle parameters affecting sensor operation are clearly demonstrated by using the proposed model, and good agreements with laboratory measurements has been obtained."}
{"_id": "89bdb87e86274768a7c1ba35a2ad9e001ca345e3", "title": "Integrated bioethanol and protein production from brown seaweed Laminaria digitata.", "text": "A wild-growing glucose-rich (i.e. 56.7% glucose content) brown seaweed species Laminaria digitata, collected from the North Coast of Denmark in August 2012, was used as the feedstock for an integrated bioethanol and protein production. Glutamic acid and aspartic acid are the two most abundant amino acids in the algal protein, both with proportional content of 10% in crude protein. Only minor pretreatment of milling was used on the biomass to facilitate the subsequent enzymatic hydrolysis and fermentation. The Separate Hydrolysis and Fermentation (SHF) resulted in obviously higher ethanol yield than the Simultaneous Saccharification and Fermentation (SSF). High conversion rate at maximum of 84.1% glucose recovery by enzymatic hydrolysis and overall ethanol yield at maximum of 77.7% theoretical were achieved. Protein content in the solid residues after fermentation was enriched by 2.7 fold, with similar distributions of amino acids, due to the hydrolysis of polymers in the seaweed cell wall matrix."}
{"_id": "ac37f867cf330149bbf0866710c572eb9c941c8f", "title": "Dynamic Thermal Management for High-Performance Microprocessors", "text": "With the increasing clock rate and transistor count of today\u2019s microprocessors, power dissipation is becoming a critical component of system design complexity. Thermal and power-delivery issues are becoming especially critical for high-performance computing systems. In this work, we investigate dynamic thermal management as a technique to control CPUpower dissipation. With the increasing usage of clock gating techniques, the average power dissipation typically seen by common applications is becoming much less than the chip\u2019s rated maximum power dissipation. However; system designers still must design thermal heat sinks to withstand the worst-case scenario. We define and investigate the major components of any dynamic thermal management scheme. Specijcally we explore the tradeoffs between several mechanisms for responding to periods of thermal trauma and we consider the effects of hardware and sofnyare implementations. With appropriate dynamic thermal management, the CPU can be designed for a much lower maximum power rating, with minimal performance impact for typical applications."}
{"_id": "31d202a4ef7bd725cdf15c9ffc39d90412de22a5", "title": "Efficient elastic burst detection in data streams", "text": "Burst detection is the activity of finding abnormal aggregates in data streams. Such aggregates are based on sliding windows over data streams. In some applications, we want to monitor many sliding window sizes simultaneously and to report those windows with aggregates significantly different from other periods. We will present a general data structure for detecting interesting aggregates over such elastic windows in near linear time. We present applications of the algorithm for detecting Gamma Ray Bursts in large-scale astrophysical data. Detection of periods with high volumes of trading activities and high stock price volatility is also demonstrated using real time Trade and Quote (TAQ) data from the New York Stock Exchange (NYSE). Our algorithm beats the direct computation approach by several orders of magnitude."}
{"_id": "10a70fe5a6bbf9e23f401114b7ab8fc3d1422bcb", "title": "Learning to Assign Orientations to Feature Points", "text": "We show how to train a Convolutional Neural Network to assign a canonical orientation to feature points given an image patch centered on the feature point. Our method improves feature point matching upon the state-of-the art and can be used in conjunction with any existing rotation sensitive descriptors. To avoid the tedious and almost impossible task of finding a target orientation to learn, we propose to use Siamese networks which implicitly find the optimal orientations during training. We also propose a new type of activation function for Neural Networks that generalizes the popular ReLU, maxout, and PReLU activation functions. This novel activation performs better for our task. We validate the effectiveness of our method extensively with four existing datasets, including two non-planar datasets, as well as our own dataset. We show that we outperform the state-of-the-art without the need of retraining for each dataset."}
{"_id": "5ac5d634a42741790a87b4419b1070131162241e", "title": "Compact CPW-Fed microstrip octagonal patch antenna with H slot for WLAN and WIMAX applications", "text": "In this paper, a Coplanar Wave Guide (CPW)-Fed microstrip octagonal patch antenna for WLAN and WIMAX Applications is proposed. The studied structure is suitable for 2.3/2.5/3.3/3.5/5/5.5GHz WiMAX and for 3.6/2.4\u20132.5/4.9\u20135.9GHz WLAN applications. The octagonal shape is obtained by making a cut in the four angles of the rectangular microstrip patch antenna; in addition the using of CPW-Fed allows obtaining UWB characteristics. The miniaturization in the antenna size for lower band is achieved by introducing an H slot in the radiating element. The proposed antenna is designed on a single and a small substrate board of dimensions 46\u00d740\u00d71.6 mm3. Moreover the miniaturized antenna has a good impedance matching and a reasonable gain. All the simulations were performed in CADFEKO, a Method of Moment (MoM) based solver."}
{"_id": "2fdee22266d58ae4e891711208106ca46c8e2778", "title": "Optimized Product Quantization for Approximate Nearest Neighbor Search", "text": "Product quantization is an effective vector quantization approach to compactly encode high-dimensional vectors for fast approximate nearest neighbor (ANN) search. The essence of product quantization is to decompose the original high-dimensional space into the Cartesian product of a finite number of low-dimensional subspaces that are then quantized separately. Optimal space decomposition is important for the performance of ANN search, but still remains unaddressed. In this paper, we optimize product quantization by minimizing quantization distortions w.r.t. the space decomposition and the quantization codebooks. We present two novel methods for optimization: a non-parametric method that alternatively solves two smaller sub-problems, and a parametric method that is guaranteed to achieve the optimal solution if the input data follows some Gaussian distribution. We show by experiments that our optimized approach substantially improves the accuracy of product quantization for ANN search."}
{"_id": "67078d516a85204c016846e30c02e901ac16f142", "title": "A GPU-Based Rasterization Algorithm for Boolean Operations on Polygons", "text": "This paper presents a new GPU-based rasterization algorithm for Boolean operations that handles arbitary closed polygons. We construct an efficient data structure for interoperation of CPU and GPU and propose a fast GPU-based contour extraction method to ensure the performance of our algorithm. We then design a novel traversing strategy to achieve an error-free calculation of intersection point for correct Boolean operations. We finally give a detail evaluation and the results show that our algorithm has a higher performance than exsiting algorithms on processing polygons with large amount of vertices. key words: GPU, CPU, rasterization, Boolean operation, error-free"}
{"_id": "18d8e8d38abae3637ca885e0d88c2bb5267e5828", "title": "Experimental Analysis of Mode Switching Techniques in Touch-based User Interfaces", "text": "This paper presents the results of a 36 participant empirical comparison of touch mode-switching. Six techniques are evaluated, spanning current and future techniques: long press, non-dominant hand, two-fingers, hard press, knuckle, and thumb-on-finger. Two poses are controlled for, seated with the tablet on a desk and standing with the tablet held on the forearm. Findings indicate pose has no effect on mode switching time and little effect on error rate; using two-fingers is fastest while long press is much slower; non-preferred hand and thumb-on-finger also rate highly in subjective scores. The experiment protocol is based on Li et al.'s pen mode-switching study, enabling a comparison of touch and pen mode switching. Among the common techniques, the non-dominant hand is faster than pressure with touch, whereas no significant difference had been found for pen. Our work addresses the lack of empirical evidence comparing touch mode-switching techniques and provides guidance to practitioners when choosing techniques and to researchers when designing new mode-switching methods."}
{"_id": "32063421a091f94627fbc1c70a12d87ca0466e00", "title": "Backtracking search algorithm in CVRP models for efficient solid waste collection and route optimization.", "text": "Waste collection is an important part of waste management that involves different issues, including environmental, economic, and social, among others. Waste collection optimization can reduce the waste collection budget and environmental emissions by reducing the collection route distance. This paper presents a modified Backtracking Search Algorithm (BSA) in capacitated vehicle routing problem (CVRP) models with the smart bin concept to find the best optimized waste collection route solutions. The objective function minimizes the sum of the waste collection route distances. The study introduces the concept of the threshold waste level (TWL) of waste bins to reduce the number of bins to be emptied by finding an optimal range, thus minimizing the distance. A scheduling model is also introduced to compare the feasibility of the proposed model with that of the conventional collection system in terms of travel distance, collected waste, fuel consumption, fuel cost, efficiency and CO2 emission. The optimal TWL was found to be between 70% and 75% of the fill level of waste collection nodes and had the maximum tightness value for different problem cases. The obtained results for four days show a 36.80% distance reduction for 91.40% of the total waste collection, which eventually increases the average waste collection efficiency by 36.78% and reduces the fuel consumption, fuel cost and CO2 emission by 50%, 47.77% and 44.68%, respectively. Thus, the proposed optimization model can be considered a viable tool for optimizing waste collection routes to reduce economic costs and environmental impacts."}
{"_id": "f77f8ab7997d8ed8159633757790397630d1bf5d", "title": "AtlanticWave-SDX : An International SDX to Support Science Data Applications", "text": "New scientific instruments that are being designed and deployed in the coming years will dramatically increase the need for large, real-time data transfers among scientists throughout the world. One such instrument is the Large Synoptic Survey Telescope being built in Chile that will produce 6.4 GB images every 17 seconds. This paper describes an ongoing effort to meet the demands of these large data scientific instruments through the development of an international software defined exchange point (SDX) that will meet the provisioning needs for the scientific users. The specific planned and ongoing work in SDX architecture is described with specific consideration for policy specification and security. Keywords\u2014Software Defined Exchange; Science Data"}
{"_id": "e8c0ca9d8412e312114ba803aad7404cd0b38051", "title": "Pustulobullous variant of SDRIFE (symmetrical drug-related intertriginous and flexural exanthema).", "text": "We report on a 64-year-old patient from Sri Lanka who presented to the emergency room with a 24-hour history of pruritic, erythematous patches \u2013 with occasional pustules and blisters \u2013 in the groins, axillae, and antecubital fossae. The patient reported no pain or general symptoms. Six days earlier, he had started taking cefuroxime (500 mg BID) for the first time. The antibiotic had been prescribed following temporary ureteral stent placement due to nephrolithiasis. He denied any other medical or dermatological conditions, and no other drugs had been taken. At initial presentation, he exhibited sharply demarcated, symmetrically disseminated, dark-red patches in the groins, axillae, and antecubital fossae. Some lesions showed pustules and blisters (Figure 1a\u2013c). The physical exam was otherwise unremarkable; in particular, there was no fever. Histology revealed a superficial perivascular, spongiform dermatitis with lymphocytes, eosinophils, and neutrophils. Focally, necrotic keratinocytes and neutrophils were noted in the epidermis. Overall, the findings were consistent with a drug reaction (Figure 2). Apart from slightly elevated CRP levels (5.7 mg/dL, normal range: < 0.5) and mild thrombocytopenia (111,000, normal range: 166,000\u2013308,000), laboratory tests were within normal limits. The detection of Escherichia coli in a swab taken from a pustule in the inguinal region was thought to represent contamination; the pathological lab results were considered to be consistent with an inflammatory response to nephrolithiasis. Taking the history, clinical symptoms, and histological findings into account, the patient was diagnosed with a pustulobullous variant of SDRIFE (symmetrical drug-related intertriginous and flexural exanthema). Since 2004, the acronym SDRIFE has been used for the condition previously known as baboon syndrome. First reported in 1984, the term baboon syndrome referred to the acute development of erythematous lesions in the gluteal area following contact with mercury from broken fever thermometers [1, 2]. In particular, the new term SDRIFE takes into account that systemic drugs, too, may induce flexural skin lesions without prior sensitization [3\u20136]. In the present case, the development of skin lesions six days after the intake of cefuroxime suggests first exposure to the drug. The diagnosis of SDRIFE includes five clinical criteria that are summarized in Table 1. Atypical disease courses with pustules, papules, and blisters have also been described [4]. The onset of SDRIFE is independent of patient age, and occurs a few hours or even up to eight days after the administration/application of the triggering factor. Apart from systemic antibiotics \u2013 in particular \u03b2-lactam antibiotics \u2013 corticosteroids, psychopharmaceuticals, biologics, and many other drug classes are"}
{"_id": "e6b273c9ac785c600b760124159962e305cef168", "title": "High Efficient Interleaved Multi-channel dc/dc Converter Dedicated to Mobile Applications", "text": "In the transportation field, the empty weight of the vehicle has to be as light as possible in order to have the highest energetic efficiency. For onboard storage electrical systems, DC/DC converters are usually used to control the power flows in the vehicle. This paper focuses on two different DC/DC converter applications: high current/low voltage on the one hand and low current/high voltage on the other hand. For all power mobile applications, a standard dc/dc converter is not realizable due to mass and volume constraints. As the inductor is usually the biggest and heaviest component in the system, it is important to develop a new structure allowing the reduction of its size and weight. The chosen technology is based on interleaved channels operating in discontinuous mode, also using reverse conduction of power MOSFET transistors. For both of the cases described above, a prototype has been built composed of 8 and 4 channels respectively. The first converter is used for power assistance (250 A/18 V) using the MOS reverse conduction property. In this particular case, efficiency is up to 96.3 %. The second is developed for a mobile solar application (3.8 A/270 V). Efficiencies are up to 98.4 %"}
{"_id": "887567782cb859ecd339693589056903b0071353", "title": "Face Detection: A Survey", "text": "In this paper we present a comprehensive and critical survey of face detection algorithms. Face detection is a necessary first-step in face recognition systems, with the purpose of localizing and extracting the face region from the background. It also has several applications in areas such as content-based image retrieval, video coding, video conferencing, crowd surveillance, and intelligent human\u2013computer interfaces. However, it was not until recently that the face detection problem received considerable attention among researchers. The human face is a dynamic object and has a high degree of variability in its apperance, which makes face detection a difficult problem in computer vision. A wide variety of techniques have been proposed, ranging from simple edge-based algorithms to composite high-level approaches utilizing advanced pattern recognition methods. The algorithms presented in this paper are classified as either feature-based or image-based and are discussed in terms of their technical approach and performance. Due to the lack of standardized tests, we do not provide a comprehensive comparative evaluation, but in cases where results are reported on common datasets, comparisons are presented. We also give a presentation of some proposed applications and possible application areas. c \u00a9 2001 Academic Press"}
{"_id": "fedcd5e09dc2f960f0a4cf20ff1ea5527a6dafd7", "title": "Greedy Sampling of Graph Signals", "text": "Sampling is a fundamental topic in graph signal processing, having found applications in estimation, clustering, and video compression. In contrast to traditional signal processing, the irregularity of the signal domain makes selecting a sampling set nontrivial and hard to analyze. Indeed, though conditions for graph signal interpolation from noiseless samples exist, they do not lead to a unique sampling set. The presence of noise makes choosing among these sampling sets a hard combinatorial problem. Although greedy sampling schemes are commonly used in practice, they have no performance guarantee. This work takes a twofold approach to address this issue. First, universal performance bounds are derived for the Bayesian estimation of graph signals from noisy samples. In contrast to currently available bounds, they are not restricted to specific sampling schemes and hold for any sampling sets. Second, this paper provides near-optimal guarantees for greedy sampling by introducing the concept of approximate submodularity and updating the classical greedy bound. It then provides explicit bounds on the approximate supermodularity of the interpolation mean-square error showing that it can be optimized with worst case guarantees using greedy search even though it is not supermodular. Simulations illustrate the derived bound for different graph models and show an application of graph signal sampling to reduce the complexity of kernel principal component analysis."}
{"_id": "149d8514b026cca3b31ef8379e78aeb7c7795eb7", "title": "Development and Evaluation of a Bluetooth EKG Monitoring Sensor", "text": "With the adoption of 2G and 3G cellular network technologies, mobile phones now have the bandwidth capability to stream data back to monitoring stations in real-time. Our paper describes the design and evaluation of a Bluetooth electrocardiogram sensor that transmits medical data to a cell phone. This data is displayed and stored on the phone. Future development of the system will relay this data over a cellular GPRS network. The current system provides a low cost and lightweight alternative to existing EKG event monitors. The final GPRS connected system will provide continuous monitoring of a patient 's heart anywhere cellular coverage is available"}
{"_id": "a3d7257cf0ba5c501d2f6b8ddab931bcd588dfbf", "title": "Learning to Propagate Labels: Transductive Propagation Network for Few-shot Learning", "text": "The goal of few-shot learning is to learn a classifier that generalizes well even when trained with a limited number of training instances per class. The recently introduced meta-learning approaches tackle this problem by learning a generic classifier across a large number of multiclass classification tasks and generalizing the model to a new task. Yet, even with such meta-learning, the low-data problem in the novel classification task still remains. In this paper, we propose Transductive Propagation Network (TPN), a novel meta-learning framework for transductive inference that classifies the entire test set at once to alleviate the low-data problem. Specifically, we propose to learn to propagate labels from labeled instances to unlabeled test instances, by learning a graph construction module that exploits the manifold structure in the data. TPN jointly learns both the parameters of feature embedding and the graph construction in an end-to-end manner. We validate TPN on multiple benchmark datasets, on which it largely outperforms existing few-shot learning approaches and achieves the state-of-the-art results."}
{"_id": "7b003db87ab0ad84009a5a62c9baa5f37334fc01", "title": "Biomedical Image Processing", "text": "medical fields has led to the automated processing ofpictorial data. Here, a device called the cytocomputer searches for genetic mutations. A computer revolution has occurred not only in technical fields but also in medicine, where vast amounts of information must be processed quickly and accurately. Nowhere is the need for image processing techniques more apparent than in clinical diagnosis or mass screening applications where data take the form of digital images. New high-resolution scanning techniques such as computed tomography, nuclear magnetic resonance, po-sitron emission tomography, and digital radiography produce images containing immense amounts of relevant information for medical analysis. But as these scanning techniques become more vital to clinical diagnosis, the work for specialists who must visually examine the resultant images increases. In many cases, quantitative data in the form of measurements and counts are needed to supplement nonimage patient data, and the manual extraction of these data is a time-consuming and costly step in an otherwise automated process. Furthermore, subtle variants of shade and shape can be the earliest clues to a diagnosis, placing the additional burden of complete thoroughness on the examining specialist. For the last five years, the University of Michigan and the Environmental Research Institute of Michigan have conducted a unique series of studies that involve the processing of biomedical imagery on a highly parallel computer specifically designed for image processing. System designers have incorporated the requirements of extracting a verifiable answer from an image in a reasonable time into an integrated approach to hardware and software design. The system includes a parallel pipelined image processor, called a cytocomputer, and a high-level language specifically created for image processing, C-3PL, the cytocomputer parallel picture processing language. These studies have involved a great many people from both the medical and engineering communities and have highlighted the interdisciplinary aspects of biomedical image processing. The methods have been tested in anatomy, developmental biology, nuclear medicine, car-diology, and transplant rejection. The general consensus is that quantification by automated image analysis not only increases diagnostic accuracy but also provides significant data not obtainable from qualitative analysis alone. One study in particular, on which descriptions in this article are based, involves a joint effort by the University of Michigan's human genetics and electrical and computer engineering departments and is supported by a grant from the National Cancer Institute. Basically, automated image analysis is being applied via sophisticated biochemical and computer techniques to derive \u2026"}
{"_id": "e888535c5b8ce61f50fa7ecc1979403c18d2bafb", "title": "A Review on Automated Brain Tumor Detection and Segmentation from MRI of Brain", "text": "Tumor segmentation from magnetic resonance imaging (MRI) data is an important but time consuming manual task performed by medical experts. Automating this process is a challenging task because of the high diversity in the appearance of tumor tissues among different patients and in many cases similarity with the normal tissues. MRI is an advanced medical imaging technique providing rich information about the human soft-tissue anatomy. There are different brain tumor detection and segmentation methods to detect and segment a brain tumor from MRI images. These detection and segmentation approaches are reviewed with an importance placed on enlightening the advantages and drawbacks of these methods for brain tumor detection and segmentation. The use of MRI image detection and segmentation in different procedures are also described. Here a brief review of different segmentation for detection of brain tumor from MRI of brain has been discussed."}
{"_id": "17329d5ab7a6eb25a9d408280c3e7d5dd054895e", "title": "Survey on Cloud Computing Security", "text": "Cloud is a modern age computing paradigm which has potential to bring great benefits to information and Communication Technology (ICT) and ICT enabled business. The term Cloud comes from the graphic that was often used to show the heterogeneous networks and complex infrastructure. This graphic was adopted to describe the many aspects of Cloud Computing. In this paper, we aim to identify the security issues in cloud computing. Here we present an analysis of security issues in a cloud environment. Solutions exist to a certain extent for various issues. Keywords\u2014Cloud computing, security challenges, threats"}
{"_id": "93e69c173ff0efdc5125f458017870ce7d84b409", "title": "Review on Non-Volatile Memory with High-k Dielectrics: Flash for Generation Beyond 32 nm", "text": "Flash memory is the most widely used non-volatile memory device nowadays. In order to keep up with the demand for increased memory capacities, flash memory has been continuously scaled to smaller and smaller dimensions. The main benefits of down-scaling cell size and increasing integration are that they enable lower manufacturing cost as well as higher performance. Charge trapping memory is regarded as one of the most promising flash memory technologies as further down-scaling continues. In addition, more and more exploration is investigated with high-k dielectrics implemented in the charge trapping memory. The paper reviews the advanced research status concerning charge trapping memory with high-k dielectrics for the performance improvement. Application of high-k dielectric as charge trapping layer, blocking layer, and tunneling layer is comprehensively discussed accordingly."}
{"_id": "b98f948f841ecf123cf7dba0387b24863ed378d5", "title": "A unified theory of development: a dialectic integration of nature and nurture.", "text": "The understanding of nature and nurture within developmental science has evolved with alternating ascendance of one or the other as primary explanations for individual differences in life course trajectories of success or failure. A dialectical perspective emphasizing the interconnectedness of individual and context is suggested to interpret the evolution of developmental science in similar terms to those necessary to explain the development of individual children. A unified theory of development is proposed to integrate personal change, context, regulation, and representational models of development."}
{"_id": "16f940b4b5da79072d64a77692a876627092d39c", "title": "A framework for automated measurement of the intensity of non-posed Facial Action Units", "text": "This paper presents a framework to automatically measure the intensity of naturally occurring facial actions. Naturalistic expressions are non-posed spontaneous actions. The facial action coding system (FACS) is the gold standard technique for describing facial expressions, which are parsed as comprehensive, nonoverlapping action units (Aus). AUs have intensities ranging from absent to maximal on a six-point metric (i.e., 0 to 5). Despite the efforts in recognizing the presence of non-posed action units, measuring their intensity has not been studied comprehensively. In this paper, we develop a framework to measure the intensity of AU12 (lip corner puller) and AU6 (cheek raising) in videos captured from infant-mother live face-to-face communications. The AU12 and AU6 are the most challenging case of infant's expressions (e.g., low facial texture in infant's face). One of the problems in facial image analysis is the large dimensionality of the visual data. Our approach for solving this problem is to utilize the spectral regression technique to project high dimensionality facial images into a low dimensionality space. Represented facial images in the low dimensional space are utilized to train support vector machine classifiers to predict the intensity of action units. Analysis of 18 minutes of captured video of non-posed facial expressions of several infants and mothers shows significant agreement between a human FACS coder and our approach, which makes it an efficient approach for automated measurement of the intensity of non-posed facial action units."}
{"_id": "d0a21f94de312a0ff31657fd103d6b29db823caa", "title": "Facial Expression Analysis", "text": "The face is one of the most powerful channels of nonverbal communication. Facial expression provides cues about emotion, intention, alertness, pain, personality, regulates interpersonal behavior, and communicates psychiatric and biomedical status among other functions. Within the past 15 years, there has been increasing interest in automated facial expression analysis within the computer vision and machine learning communities. This chapter reviews fundamental approaches to facial measurement by behavioral scientists and current efforts in automated facial expression recognition. We consider challenges, review databases available to the research community, approaches to feature detection, tracking, and representation, and both supervised and unsupervised learning. keywords : Facial expression analysis, Action unit recognition, Active Appearance Models, temporal clustering."}
{"_id": "fe919ad7967765aca6f3c366ccc98640a433f58b", "title": "A Wavelet Tour of Signal Processing, 2nd Edition", "text": "Bargaining with reading habit is no need. Reading is not kind of something sold that you can take or not. It is a thing that will change your life to life better. It is the thing that will give you many things around the world and this universe, in the real world and here after. As what will be given by this a wavelet tour of signal processing 2nd edition, how can you bargain with the thing that has many benefits for you?"}
{"_id": "0289786e0d5edf663c586c8552ec3708eff62331", "title": "Detecting Faces in Images: A Survey", "text": "Images containing faces are essential to intelligent vision-based human computer interaction, and research efforts in face processing include face recognition, face tracking, pose estimation, and expression recognition. However, many reported methods assume that the faces in an image or an image sequence have been identified and localized. To build fully automated systems that analyze the information contained in face images, robust and efficient face detection algorithms are required. Given a single image, the goal of face detection is to identify all image regions which contain a face regardless of its three-dimensional position, orientation, and lighting conditions. Such a problem is challenging because faces are nonrigid and have a high degree of variability in size, shape, color, and texture. Numerous techniques have been developed to detect faces in a single image, and the purpose of this paper is to categorize and evaluate these algorithms. We also discuss relevant issues such as data collection, evaluation metrics, and benchmarking. After analyzing these algorithms and identifying their limitations, we conclude with several promising directions for"}
{"_id": "ec31c947eea6ca36b4ec67291ff7a6ad3e8d4c74", "title": "Switched Reluctance Motor Drive With External Rotor for Fan in Air Conditioner", "text": "This paper describes the developed three-phase 6/8 poles switched reluctance external rotor motor drive for a fan in air conditioner. The external rotor core structure and the internal stator core structure, the three-phase windings arrangement, the slotted claw, and the setting structure of the photoelectric transducers on the rotor position detector are illustrated. The electromagnetic field calculation results are given. The three-phase asymmetric bridge power converter was used in the drive system. The block diagram of the switched reluctance external rotor motor drive with closed-loop rotor speed control is given. The closed-loop rotor speed control is implemented using a fuzzy logic algorithm. The experimental tests of the developed prototype are made for driving the fan from 200 to 950 r/min. The comparison results of the two systems show that the input power and input phase current RMS value are lower in the developed three-phase 6/8 poles external rotor switched reluctance motor drive prototype with the fan than those in the induction motor variable-frequency variable-speed drive with the fan."}
{"_id": "105aeb68181bf2af2016c3c71f45011319e9c29d", "title": "Reflective and Refractive Objects for Mixed Reality", "text": "In this paper, we present a novel rendering method which integrates reflective or refractive objects into a differential instant radiosity (DIR) framework usable for mixed-reality (MR) applications. This kind of objects are very special from the light interaction point of view, as they reflect and refract incident rays. Therefore they may cause high-frequency lighting effects known as caustics. Using instant-radiosity (IR) methods to approximate these high-frequency lighting effects would require a large amount of virtual point lights (VPLs) and is therefore not desirable due to real-time constraints. Instead, our approach combines differential instant radiosity with three other methods. One method handles more accurate reflections compared to simple cubemaps by using impostors. Another method is able to calculate two refractions in real-time, and the third method uses small quads to create caustic effects. Our proposed method replaces parts in light paths that belong to reflective or refractive objects using these three methods and thus tightly integrates into DIR. In contrast to previous methods which introduce reflective or refractive objects into MR scenarios, our method produces caustics that also emit additional indirect light. The method runs at real-time frame rates, and the results show that reflective and refractive objects with caustics improve the overall impression for MR scenarios."}
{"_id": "889402d83c68927688f9fb441b55a68a9c919f6a", "title": "GENETAG: a tagged corpus for gene/protein named entity recognition", "text": "Named entity recognition (NER) is an important first step for text mining the biomedical literature. Evaluating the performance of biomedical NER systems is impossible without a standardized test corpus. The annotation of such a corpus for gene/protein name NER is a difficult process due to the complexity of gene/protein names. We describe the construction and annotation of GENETAG, a corpus of 20K MEDLINE\u00ae sentences for gene/protein NER. 15K GENETAG sentences were used for the BioCreAtIvE Task 1A Competition. To ensure heterogeneity of the corpus, MEDLINE sentences were first scored for term similarity to documents with known gene names, and 10K high- and 10K low-scoring sentences were chosen at random. The original 20K sentences were run through a gene/protein name tagger, and the results were modified manually to reflect a wide definition of gene/protein names subject to a specificity constraint, a rule that required the tagged entities to refer to specific entities. Each sentence in GENETAG was annotated with acceptable alternatives to the gene/protein names it contained, allowing for partial matching with semantic constraints. Semantic constraints are rules requiring the tagged entity to contain its true meaning in the sentence context. Application of these constraints results in a more meaningful measure of the performance of an NER system than unrestricted partial matching. The annotation of GENETAG required intricate manual judgments by annotators which hindered tagging consistency. The data were pre-segmented into words, to provide indices supporting comparison of system responses to the \"gold standard\". However, character-based indices would have been more robust than word-based indices. GENETAG Train, Test and Round1 data and ancillary programs are freely available at ftp://ftp.ncbi.nlm.nih.gov/pub/tanabe/GENETAG.tar.gz . A newer version of GENETAG-05, will be released later this year."}
{"_id": "90a9c02b36060834404da541eae69f20fc2cfcb6", "title": "Robust Driving Path Detection in Urban and Highway Scenarios Using a Laser Scanner and Online Occupancy Grids", "text": "Many driver assistant and safety systems depend on an accurate environmental model containing the positions of stationary objects, the states of dynamic objects and information about valid driving corridors. Therefore, a robust differentiation between moving and stationary objects is required. This is challenging for laser scanners, because these sensors are not able to measure the velocity of objects directly. Therefore, an advanced occupancy grid approach, the online map, is introduced, which enables the robust separation of moving and stationary objects. The online map is used for the robust detection of the road boundaries for the determination of driving corridors in urban and highway scenarios. An algorithm for the detection of arbitrary moving objects using the online map is proposed."}
{"_id": "7133063af6727c29c5cbd3a2f93dccd6ad46128c", "title": "Aromatherapy positively affects mood, EEG patterns of alertness and math computations.", "text": "EEG activity, alertness, and mood were assessed in 40 adults given 3 minutes of aromatherapy using two aromas, lavender (considered a relaxing odor) or rosemary (considered a stimulating odor). Participants were also given simple math computations before and after the therapy. The lavender group showed increased beta power, suggesting increased drowsiness, they had less depressed mood (POMS) and reported feeling more relaxed and performed the math computations faster and more accurately following aromatherapy. The rosemary group, on the other hand, showed decreased frontal alpha and beta power, suggesting increased alertness. They also had lower state anxiety scores, reported feeling more relaxed and alert and they were only faster, not more accurate, at completing the math computations after the aromatherapy session."}
{"_id": "4ceac711f4b158503b11434f082dbe31014d4a9a", "title": "On-Line Economic Optimization of Energy Systems Using Weather Forecast Information", "text": "We establish an on-line optimization framework to exploit weather forecast information in the operation of energy systems. We argue that anticipating the weather conditions can lead to more proactive and cost-effective operations. The framework is based on the solution of a stochastic dynamic real-time optimization (D-RTO) problem incorporating forecasts generated from a state-of-the-art weather prediction model. The necessary uncertainty information is extracted from the weather model using an ensemble approach. The accuracy of the forecast trends and uncertainty bounds are validated using real meteorological data. We present a numerical simulation study in a building system to demonstrate the developments."}
{"_id": "b1208d1271e12a9be50d68ec47d8a9adf6561edb", "title": "An Ultra-Wide-Band 0.4\u201310-GHz LNA in 0.18-$\\mu$m CMOS", "text": "A two-stage ultra-wide-band CMOS low-noise amplifier (LNA) is presented. With the common-gate configuration employed as the input stage, the broad-band input matching is obtained and the noise does not rise rapidly at higher frequency. By combining the common-gate and common-source stages, the broad-band characteristic and small area are achieved by using two inductors. This LNA has been fabricated in a 0.18-mum CMOS process. The measured power gain is 11.2-12.4 dB and noise figure is 4.4-6.5 dB with -3-dB bandwidth of 0.4-10 GHz. The measured IIP3 is -6 dBm at 6 GHz. It consumes 12 mW from a 1.8-V supply voltage and occupies only 0.42 mm2"}
{"_id": "38fcc670f7369b3ee7f402cc5b214262829c139f", "title": "Mutual information for symmetric rank-one matrix estimation: A proof of the replica formula", "text": "Factorizing low-rank matrices has many applications in machine learning and statistics. For probabilistic models in the Bayes optimal setting, a general expression for the mutual information has been proposed using heuristic statistical physics computations, and proven in few specific cases. Here, we show how to rigorously prove the conjectured formula for the symmetric rank-one case. This allows to express the minimal mean-square-error and to characterize the detectability phase transitions in a large set of estimation problems ranging from community detection to sparse PCA. We also show that for a large set of parameters, an iterative algorithm called approximate message-passing is Bayes optimal. There exists, however, a gap between what currently known polynomial algorithms can do and what is expected information theoretically. Additionally, the proof technique has an interest of its own and exploits three essential ingredients: the interpolation method introduced in statistical physics by Guerra, the analysis of the approximate message-passing algorithm and the theory of spatial coupling and threshold saturation in coding. Our approach is generic and applicable to other open problems in statistical estimation where heuristic statistical physics predictions are available. Consider the following probabilistic rank-one matrix estimation problem: one has access to noisy observations w=(wij)i,j=1 of the pair-wise product of the components of a vector s=(s1, . . . , sn)\u2208R with i.i.d components distributed as Si\u223cP0, i=1, . . . , n. The entries of w are observed through a noisy element-wise (possibly non-linear) output probabilistic channel Pout(wij |sisj/ \u221a n). The goal is to estimate the vector s from w assuming that both P0 and Pout are known and independent of n (noise is symmetric so that wij =wji). Many important problems in statistics and machine learning can be expressed in this way, such as sparse PCA [Zou et al. (2006)], the Wigner spike model [Johnstone and Lu (2012); Deshpande and Montanari (2014)], community detection [Deshpande et al. (2015)] or matrix completion [Cand\u00e8s and Recht (2009)]. Proving a result initially derived by a heuristic method from statistical physics, we give an explicit expression for the mutual information and the information theoretic minimal mean-square-error (MMSE) in the asymptotic n\u2192+\u221e limit. Our results imply that for 1 ar X iv :1 60 6. 04 14 2v 1 [ cs .I T ] 1 3 Ju n 20 16 a large region of parameters, the posterior marginal expectations of the underlying signal components (often assumed intractable to compute) can be obtained in the leading order in n using a polynomial-time algorithm called approximate message-passing (AMP) [Rangan and Fletcher (2012); Deshpande and Montanari (2014); Deshpande et al. (2015); Lesieur et al. (2015b)]. We also demonstrate the existence of a region where both AMP and spectral methods [Baik et al. (2005)] fail to provide a good answer to the estimation problem, while it is nevertheless information theoretically possible to do so. We illustrate our theorems with examples and also briefly discuss the implications in terms of computational complexity. 1. 
Setting and main results 1.1 The additive white Gaussian noise setting A standard and natural setting is the case of additive white Gaussian noise (AWGN) of known variance \u2206, wij = sisj \u221a n + zij \u221a \u2206, (1) where z=(zij)i,j=1 is a symmetric matrix with i.i.d entries Zij\u223cN (0, 1), 1\u2264 i\u2264j\u2264n. Perhaps surprisingly, it turns out that this Gaussian setting is sufficient to completely characterize all the problems discussed in the introduction, even if these have more complicated output channels. This is made possible by a theorem of channel universality [Krzakala et al. (2016)] (already proven for community detection in [Deshpande et al. (2015)] and conjectured in [Lesieur et al. (2015a)]). This theorem states that given an output channel Pout(w|y), such that logPout(w|y = 0) is three times differentiable with bounded second and third derivatives, then the mutual information satisfies I(S;W)=I(S;SST/ \u221a n+Z \u221a \u2206)+O( \u221a n), where \u2206 is the inverse Fisher information (evaluated at y = 0) of the output channel: \u2206\u22121 := EPout(w|0)[(\u2202y logPout(W |y)|y=0)]. Informally, this means that we only have to compute the mutual information for an AWGN channel to take care of a wide range of problems, which can be expressed in terms of their Fisher information. In this paper we derive rigorously, for a large class of signal distributions P0, an explicit one-letter formula for the mutual information per variable I(S;W)/n in the asymptotic limit n\u2192+\u221e."}
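The n-dimensional AMP dynamics this record discusses reduce to a scalar state-evolution recursion, whose fixed points also appear in the replica mutual-information formula. Below is a Monte Carlo sketch for a Rademacher (±1) prior with SNR written as λ = 1/Δ; scaling conventions differ across papers, so treat the exact parametrization as an assumption of this sketch:

```python
# Sketch of the scalar state-evolution fixed point for the symmetric rank-one
# model with a +/-1 prior: q_{t+1} = E[ tanh(lam*q_t + sqrt(lam*q_t)*Z) ],
# Z ~ N(0,1), with lam = 1/Delta assumed as the SNR convention.
import numpy as np

def state_evolution(delta, iters=100, mc=200000, q0=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    lam, q = 1.0 / delta, q0
    Z = rng.standard_normal(mc)
    for _ in range(iters):
        q = np.mean(np.tanh(lam * q + np.sqrt(lam * q) * Z))   # overlap update
    return q   # fixed-point overlap; under these conventions matrix MMSE ~ 1 - q**2

# Example: an informative fixed point (q > 0) appears for small enough Delta,
# while for large Delta the overlap collapses to ~0 (undetectable phase).
print(state_evolution(0.5), state_evolution(1.5))
```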
{"_id": "959accf2a92bb2677ac8dcaa00d2b7550b33f67a", "title": "Passing a Hide-and-Seek Third-Person Turing Test", "text": "Hiding and seeking are cognitive abilities frequently demonstrated by humans in both real life and video games. To determine to which extent these abilities can be replicated with AI, we introduce a specialized version of the Turing test for hiding and seeking. We then develop a computer agent that passes the test by appearing indistinguishable from human behavior to a panel of human judges. We analyze the AI techniques that enable the agent to imitate human hide-and-seek behavior and their relative contribution to the agent's performance."}
{"_id": "f9920607ca991be0c90d3f4f4f4773e64dfc7c11", "title": "Altered salience network connectivity predicts macronutrient intake after sleep deprivation", "text": "Although insufficient sleep is a well-recognized risk factor for overeating and weight gain, the neural mechanisms underlying increased caloric (particularly fat) intake after sleep deprivation remain unclear. Here we used resting-state functional magnetic resonance imaging and examined brain connectivity changes associated with macronutrient intake after one night of total sleep deprivation (TSD). Compared to the day following baseline sleep, healthy adults consumed a greater percentage of calories from fat and a lower percentage of calories from carbohydrates during the day following TSD. Subjects also exhibited increased brain connectivity in the salience network from the dorsal anterior cingulate cortex (dACC) to bilateral putamen and bilateral anterior insula (aINS) after TSD. Moreover, dACC-putamen and dACC-aINS connectivity correlated with increased fat and decreased carbohydrate intake during the day following TSD, but not during the day following baseline sleep. These findings provide a potential neural mechanism by which sleep loss leads to increased fat intake."}
{"_id": "226ceb666cdb2090fc3ab786129e83f3ced56e05", "title": "Sentence Selection and Weighting for Neural Machine Translation Domain Adaptation", "text": "Neural machine translation NMT has been prominent in many machine translation tasks. However, in some domain-specific tasks, only the corpora from similar domains can improve translation performance. If out-of-domain corpora are directly added into the in-domain corpus, the translation performance may even degrade. Therefore, domain adaptation techniques are essential to solve the NMT domain problem. Most existing methods for domain adaptation are designed for the conventional phrase-based machine translation. For NMT domain adaptation, there have been only a few studies on topics such as fine tuning, domain tags, and domain features. In this paper, we have four goals for sentence level NMT domain adaptation. First, the NMT's internal sentence embedding is exploited and the sentence embedding similarity is used to select out-of-domain sentences that are close to the in-domain corpus. Second, we propose three sentence weighting methods, i.e., sentence weighting, domain weighting, and batch weighting, to balance the data distribution during NMT training. Third, in addition, we propose dynamic training methods to adjust the sentence selection and weighting during NMT training. Fourth, to solve the multidomain problem in a real-world NMT scenario where the domain distributions of training and testing data often mismatch, we proposed a multidomain sentence weighting method to balance the domain distributions of training data and match the domain distributions of training and testing data. The proposed methods are evaluated in international workshop on spoken language translation IWSLT English-to-French/German tasks and a multidomain English-to-French task. Empirical results show that the sentence selection and weighting methods can significantly improve the NMT performance, outperforming the existing baselines."}
{"_id": "23ed493b36353ed06709a14a94dd0bf54d06bacc", "title": "Amortized Efficiency of List Update and Paging Rules", "text": "In this article we study the amortized efficiency of the \u201cmove-to-front\u201d and similar rules for dynamically maintaining a linear list. Under the assumption that accessing the ith element from the front of the list takes &thgr;(i) time, we show that move-to-front is within a constant factor of optimum among a wide class of list maintenance rules. Other natural heuristics, such as the transpose and frequency count rules, do not share this property. We generalize our results to show that move-to-front is within a constant factor of optimum as long as the access cost is a convex function. We also study paging, a setting in which the access cost is not convex. The paging rule corresponding to move-to-front is the \u201cleast recently used\u201d (LRU) replacement rule. We analyze the amortized complexity of LRU, showing that its efficiency differs from that of the off-line paging rule (Belady's MIN algorithm) by a factor that depends on the size of fast memory. No on-line paging algorithm has better amortized performance."}
{"_id": "63f37950b5adfe7d90baa6284e4fc4e2edacbcc6", "title": "Learning Multi-agent Communication under Limited-bandwidth Restriction for Internet Packet Routing", "text": "Communication is an important factor for the big multi-agent world to stay organized and productive. Recently, the AI community has applied the Deep Reinforcement Learning (DRL) to learn the communication strategy and the control policy for multiple agents. However, when implementing the communication for real-world multi-agent applications, there is a more practical limited-bandwidth restriction, which has been largely ignored by the existing DRL-based methods. Specifically, agents trained by most previous methods keep sending messages incessantly in every control cycle; due to emitting too many messages, these methods are unsuitable to be applied to the real-world systems that have a limited bandwidth to transmit the messages. To handle this problem, we propose a gating mechanism to adaptively prune unprofitable messages. Results show that the gating mechanism can prune more than 80% messages with little damage to the performance. Moreover, our method outperforms several state-of-the-art DRL-based and rule-based methods by a large margin in both the real-world packet routing tasks and four benchmark tasks."}
{"_id": "588bc92522dcd93226553f719bbe97a68c216db9", "title": "Assessing the Potential of Classical Q-learning in General Game Playing", "text": "After the recent groundbreaking results of AlphaGo and AlphaZero, we have seen strong interests in deep reinforcement learning and artificial general intelligence (AGI) in game playing. However, deep learning is resource-intensive and the theory is not yet well developed. For small games, simple classical table-based Q-learning might still be the algorithm of choice. General Game Playing (GGP) provides a good testbed for reinforcement learning to research AGI. Q-learning is one of the canonical reinforcement learning methods, and has been used by (Banerjee & Stone, IJCAI 2007) in GGP. In this paper we implement Q-learning in GGP for three small-board games (Tic-Tac-Toe, Connect Four, Hex), to allow comparison to Banerjee et al.. We find that Q-learning converges to a high win rate in GGP. For the -greedy strategy, we propose a first enhancement, the dynamic algorithm. In addition, inspired by (Gelly & Silver, ICML 2007) we combine online search (Monte Carlo Search) to enhance offline learning, and propose QM-learning for GGP. Both enhancements improve the performance of classical Q-learning. In this work, GGP allows us to show, if augmented by appropriate enhancements, that classical table-based Q-learning can perform well in small games."}
{"_id": "bd45ed21dfc2e1e4af0c3de6200b1483634a29cd", "title": "A new adaptive frequency oscillator for gait assistance", "text": "To control exoskeletons for walking gait assistance, it is of primary importance to control them to act synchronously with the gaits of users. To effectively estimate the gait cycle (or the phase within a stride) of users, we propose a new adaptive frequency oscillator (AFO). While previous AFOs successfully estimated the walking frequency from joint angles as inputs, the new AFO, called particularly-shaped adaptive oscillator (PSAO) can estimate gait cycle from the same inputs, which would have required foot contact sensors in previous approaches. To predict the effects of PSAO-based gait assistance on human walking, it has been tested with neuromuscular walking simulation. In the simulation, the gait assistance system reduced the metabolic cost of walking for some assistance patterns. The walk ratio (step length per step rate) also changed as assistance patterns shifted in phase, which is meaningful because metabolic cost of walking in general is minimal at specific walk ratio. For a prototype exoskeleton we developed, the effect of gait assistance was experimented on a human subject walking on level ground and inclining slopes to verify the predictions from the simulation: (1) physiological cost index computed from heart rate significantly decreased indicating reduction in metabolic energy expenditure; (2) walk ratio was in fact controllable to an extent."}
{"_id": "59278512a6b4fa152896de8575b8c7ceee63182e", "title": "Education 4.0 for tall thin engineer in a data driven society", "text": "The rise of digital technology creates disruptive innovation, new opportunities in communication, it transforms industrial production and it changes the way we work, learn and live. That leads to the fourth Industrial Revolution, which changes the working world enormously. New skills, future-ready curricula, new learning and training concepts as well as the flexibility in education are necessary. To deliver that, it needs a fourth revolution in the educational system with a strong active presence and most important Artificial Intelligence (AI) driven features for better performance. Education 4.0 needs also a strong partnership between industry and academicals environment in the formation of human resources for Industry 4.0. We present in this paper our answers of education regarding the challenges of fourth revolution in Industry and the seven AI driven didactic methods identified for the University of Future."}
{"_id": "587dad3d93ed1925868b8851ae3eab2d2d31f717", "title": "Fusion of Keypoint Tracking and Facial Landmark Detection for Real-Time Head Pose Estimation", "text": "In this paper, we address the problem of extreme head pose estimation from intensity images, in a monocular setup. We introduce a novel fusion pipeline to integrate into a dedicated Kalman Filter the pose estimated from a tracking scheme in the prediction stage and the pose estimated from a detection scheme in the correction stage. To that end, the measurement covariance of the Kalman Filter is updated in every frame. The tracking scheme is performed using a set of keypoints extracted in the area of the head along with a simple 3D geometric model. The detection scheme, on the other hand, relies on the alignment of facial landmarks in each frame combined with 3D features extracted on a head mesh. The head pose in each scheme is estimated by minimizing the reprojection error from the 3D-2D correspondences. By combining both frameworks, we extend the applicability of head pose estimation from facial landmarks to cases where these features are no longer visible. We compared the proposed method to other related approaches, showing that it can achieve state-of-the-art performance. We also demonstrate that our approach is suitable for cases with extreme head rotations and (self-) occlusions, besides being suitable for real time applications."}
{"_id": "23a8c0c4fe63d9f402d444e1aff39359a509aa15", "title": "Mux-Kmeans: multiplex kmeans for clustering large-scale data set", "text": "Kmeans clustering algorithm is widely used in a number of scientific applications due to its simple iterative nature and ease of implementation. The quality of clustering result highly depends on the selection of initial centroids. Different selections of initial centroids result in different clustering results. In practice, people run a series of Kmeans processes with multiple initial centroid groups serially and return the best clustering result among them. However, in the era of big data, a Kmeans process is implemented on MapReduce to scale to large data sets. Even a single Kmeans process on MapReduce requires considerable long runtime. This paper proposes Mux-Kmeans. Rather than executing multiple Kmeans processes serially, Mux-Kmeans launches these Kmeans processes concurrently with multiple centroid groups. In each iteration, Mux-Kmeans (i) evaluates these Kmeans processes, (ii) prunes the low-quality Kmeans processes, and (iii) incubates new Kmeans processes. After a certain number of iterations, it finally obtains the best among these local optimal results. We implement Mux-Kmeans on MapReduce and evaluate it on Amazon EC2. The experimental results show that starting from the same initial centroid groups, the clustering result of Mux-Kmeans is always non-worse than the best of a series of Kmeans processes. Mux-Kmeans also saves elapsed time than serial multiple Kmeans processes."}
{"_id": "aa5de1112a3e91a26e1362ae1ae0f323b6707811", "title": "A Clinical Study of 35 Cases of Pincer Nails", "text": "BACKGROUND\nPincer nail is a nail deformity characterized by transverse overcurvature of the nail plate. Pincer nail can affect a patient's quality of life due to its chronic, recurrent course; however, there have been no clinical studies on the pincer nail condition in Korean patients.\n\n\nOBJECTIVE\nThe purpose of this study was to characterize the clinical findings and treatment of pincer nail. In addition, possible etiological factors were considered, and treatment efficacy was evaluated.\n\n\nMETHODS\nThe medical records and clinical photographs of 35 patients (12 males, 23 females) who were diagnosed with pincer nail between August 1, 2005 and July 31, 2009 were studied.\n\n\nRESULTS\nPatient age ranged from 10 to 77 (52.09\u00b117.26) years, and there was a predominance of female (23 out of 35 patients, F:M=2:1). The mean duration of the disorder was 7.45 years (range 0.25~40); 85% had pincer nail for at least 1 year. In addition, 40% had a history of previous treatment and recurrence. There were 82.8% patients with the common type of pincer nails. The most commonly involved nails were both great toenails. Among 35 patients, nail grinding was started in 30 patients, and 25 patients showed clinical improvement with nail grinding. The width index increased and the height index decreased after treatment. The mean follow up period was 8.42 months (range 1~27), and 7 patients showed recurrence after 8.8 months (range 2~20). Among 35 patients, 5 patients were treated with nail extraction with matricectomy, and the symptoms resolved immediately. The mean follow up period was 7.6 months (range 0~19), and recurrence was not observed. Onychomycosis was also present in 37.1% of patients, and itraconazole pulse therapy for 3 months was added.\n\n\nCONCLUSION\nThe results of this study demonstrate the clinical features of pincer nail in Korean patients. The findings show that the common type of pincer nail was most common, and nail grinding as a conservative treatment greatly improved pincer nails despite a risk of recurrence. When onychomycosis was also present, oral antifungal therapy added to nail grinding resulted in a more rapid change in nail thickness and clinical improvement."}
{"_id": "747f35bc46c78f6f8fa051cee5b4e16a26837131", "title": "Applying a tendency to be well retweeted to false information detection", "text": "While a lot of useful information can be found in SNS, false information also diffuses through it, thereby confusing many people sometimes. In this paper, we predict a tendency of tweets to be well retweeted and consider applying the tendency to false information detection. The tendency prediction can be implemented with simple features of tweets. We examine the effect of the tendency when it is used in false information detection empirically. Our experimental results indicate that it would be valuable to take the tendency into account for the detection. We also discuss findings when applying them to tweets in Japanese."}
{"_id": "23bbea130398331084021cc895a132064219b4b1", "title": "COFI RANK - Maximum Margin Matrix Factorization for Collaborative Ranking", "text": "In this paper, we consider collaborative filtering as a ranking problem. We present a method which uses Maximum Margin Matrix Factorization and optimizes ranking instead of rating. We employ structured output prediction to optimize directly for ranking scores. Experimental results show that our method gives very good ranking scores and scales well on collaborative filtering tasks."}
{"_id": "26d9c40e8a6099ce61a5d9a6afa11814c45def01", "title": "EFFICIENT CONSTRAINED PATH PLANNING VIA SEARCH IN STATE LATTICES", "text": "We propose a novel approach to constrained path planning that is based on a special search space which efficiently encodes feasible paths. The paths are encoded implicitly as connections between states, but only feasible and local connections are included. Once this search space is developed, we systematically generate a near-minimal set of spatially distinct path primitives. This set expresses the local connectivity of constrained motions and also eliminates redundancies. The set of primitives is used to define heuristic search, and thereby create a very efficient path planner at the chosen resolution. We also discuss a wide variety of space and terrestrial robotics applications where this motion planner can be especially useful."}
{"_id": "36d81c39b0b2378432ab947b243dc56e5358298b", "title": "Path Planning using a Dynamic Vehicle Model", "text": "This paper addresses the problem of path planning using a dynamic vehicle model. Previous works which include a basic kinematic model generate paths that are only realistic at very low speed. By considering higher vehicle speed during navigation, the vehicle can significantly deviate from the planned trajectory. Consequently, the planned path becomes unusable for the mission achievement. So, to bridge a gap between planning and navigation, we propose a realistic path planner based on a dynamic vehicle model"}
{"_id": "6db6e5f104e62bc84d27142f5bbbe48b5be3e566", "title": "Autonomous local path planning for a mobile robot using a genetic algorithm", "text": "This work presents results of our work in development of a genetic algorithm based path-planning algorithm for local obstacle avoidance (local feasible path) of a mobile robot in a given search space. The method tries to find not only a valid path but also an optimal one. The objectives are to minimize the length of the path and the number of turns. The proposed path-planning method allows a free movement of the robot in any direction so that the path-planner can handle complicated search spaces."}
{"_id": "0e3d3b2d29b8c770ac993101742537fdee073737", "title": "Reactive Nonholonomic Trajectory Generation via Parametric Optimal Control", "text": "There are many situations for which a feasible nonholonomic motion plan must be generated immediately based on real-time perceptual information. Parametric trajectory representations limit computation because they reduce the search space for solutions (at the cost of potentially introducing suboptimality). The use of any parametric trajectory model converts the optimal control formulation into an equivalent nonlinear programming problem. In this paper, curvature polynomials of arbitrary order are used as the assumed form of solution. Polynomials sacrifice little in terms of spanning the set of feasible controls while permitting an expression of the general solution to the system dynamics in terms of decoupled quadratures. These quadratures are then readily linearized to express the necessary conditions for optimality. Resulting trajectories are convenient to manipulate and execute in vehicle controllers and they can be computed with a straightforward numerical procedure in real time. KEY WORDS\u2014mobile robots, car-like robots, trajectory generation, curve generation, nonholonomic, clothoid, cornu spiral, optimal control"}
{"_id": "42343570eb9cdf429832311527eca3746ff3da2a", "title": "MINDFULNESS AND TRAUMA: IMPLICATIONS FOR TREATMENT", "text": "Mindfulness, originally a construct used in Eastern spiritual and philosophical traditions, has found new utility in psychotherapy practice. Mindfulness practice has been recently applied to treatments of several psychological and health related problems, and research is showing successful outcomes in psychological interventions incorporating mindfulness practices. Several schools of psychotherapy have theorized why mindfulness may be an effective intervention. One population which would theoretically be benefited by mindfulness practice in treatment consists of those individuals who have experienced traumatic events and are exhibiting post-traumatic stress disorder and/or related correlates of past trauma. The present paper gives a general review of the application of mindfulness to clinical psychology interventions. Additionally, we explain how mindfulness is applicable to our integrative behavioral approach to treating trauma and its sequelae. Specifically, this paper will (a) give a general overview of the conceptions and applications of mindfulness to psychology and psychotherapy and provide a brief account of the concepts origins in eastern traditions; (b) discuss the theoretical conceptualization of clinical problems that may relate to the longterm correlates of trauma; (c) describe how mindfulness, acceptance and the therapeutic relationship can address trauma symptoms and discuss a modified treatment approach for trauma survivors that incorporates mindfulness and acceptance practices into traditional exposure treatment. Address correspondence to Victoria Follette, Department of Psychology/296, University of Nevada, Reno, NV, 89557, USA; e-mail: vmf@unr.edu. Journal of Rational-Emotive & Cognitive-Behavior Therapy, Vol. 24, No. 1, Spring 2006 ( 2006) DOI: 10.1007/s10942-006-0025-2 Published Online: August 2, 2006 45 2006 Springer Science+Business Media, Inc. Understanding the long-term effects of traumatic experiences and identifying appropriate treatment for trauma survivors continue to be complex and at times controversial tasks (Brewin, 2003). While cognitive-behavioral treatments are generally considered to have the most significant empirical support for the treatment of trauma (American Psychiatric Association Work Group on ASD and PTSD, 2004; Foa, Keane, & Friedman, 2000), practices associated with the treatment of the broader range of trauma related problems continue to evolve. Our understanding of trauma symptoms has expanded to include the knowledge that outcomes associated with traumatic experiences are not limited to Post-traumatic Stress Disorder (PTSD) but rather extend across a range of intraand interpersonal problems. Therefore, in our work with trauma survivors, we have conceptualized trauma symptomology in terms of functional classes of behavior, rather than focusing on a syndromal classification of symptoms (Follette & Naugle, 2006). While a great deal is known about the treatment of basic anxiety related phenomena associated with traumatic experiences, there are a number of lacunae in our knowledge about the treatment of the broader range of problems associated with a trauma history. Certainly the current repertoire of cognitive behavioral treatments for trauma survivors has demonstrated efficacy. 
However, we believe that the addition of mindfulness related treatment interventions will enhance our ability to treat people who have difficulties associated with having experienced a traumatic event. The theoretical work arising from our conceptualization of functional domains of behavior has informed our present integrative behavioral treatment approach. In our approach, we work to influence behavior change by increasing both acceptance strategies and skillful behavior. Mindfulness is a core component of both of these approaches. In the present paper, we will briefly address the historical roots of mindfulness, with its origins in eastern religious traditions. We will present a short review of the application of mindfulness to clinical psychology interventions. Finally, we will discuss our view of the utility of mindfulness in our integrative behavioral approach to treating trauma and its sequelae. MINDFULNESS IN EASTERN RELIGIOUS TRADITIONS In discussing the integration of Buddhist teachings with Western psychological practices, it should be noted that there are 46 Journal of Rational-Emotive & Cognitive-Behavior Therapy"}
{"_id": "7fd68e8d2b2680cb3a94b2f3d774de80cccba3f6", "title": "Measuring User Engagement", "text": "User engagement refers to the quality of the user experience that emphasizes the positive aspects of interacting with an online application and, in particular, the desire to use that application longer and repeatedly. User engagement is a key concept in the design of online applications (whether for desktop, tablet or mobile), motivated by the observation that successful applications are not just used, but are engaged with. Users invest time, attention, and emotion in their use of technology, and seek to satisfy pragmatic and hedonic needs. Measurement is critical for evaluating whether online applications are able to successfully engage users, and may inform the design of and use of applications. User engagement is a multifaceted, complex phenomenon; this gives rise to a number of potential measurement approaches. Common ways to evaluate user engagement include using self-report measures, e.g., questionnaires; observational methods, e.g. facial expression analysis, speech analysis; neuro-physiological signal processing methods, e.g., respiratory and cardiovascular accelerations and decelerations, muscle spasms; and web analytics, e.g., number of site visits, click depth. These methods represent various trade-offs in terms of the setting (laboratory versus \u201cin the wild\u201d), object of measurement (user behaviour, affect or cognition) and scale of data collected. For instance, small-scale user studies are deep and rich, but limited in terms of generalizability, whereas large-scale web analytic studies are powerful but negate users\u2019 motivation and context. The focus of this book is how user engagement is currently being measured and various considerations for its measurement. Our goal is to leave readers with an appreciation of the various ways in which to measure user engagement, and their associated strengths and weaknesses. We emphasize the multifaceted nature of user engagement and the unique contextual constraints that come to bear upon attempts to measure engagement in different settings, and across different user groups and web domains. At the same time, this book advocates for the development of \u201cgood\u201d measures and good measurement practices that will advance the study of user engagement and improve our understanding of this construct, which has become so vital in our wired world."}
{"_id": "8748a927f45c73645dca00ccc11f4bb4b20c25c0", "title": "A 95% Efficient Normally-Off GaN-on-Si HEMT Hybrid-IC Boost-Converter with 425-W Output Power at 1 MHz", "text": "A 2:1 351 V hard-switched boost converter was constructed using high-voltage GaN high-electron-mobility transistors grown on Si substrates and GaN Schottky diodes grown on Sapphire substrates. The high speed and low on-resistance of the GaN devices enables extremely fast switching times and low losses, resulting in a high conversion efficiency of 95% with 425-W output power at 1 MHz. The boost converter has a power density of 175 W/in3. To our knowledge, these results are the best reported on GaN devices, and the highest for 1MHz switching."}
{"_id": "a3834bb73b2189b55896f9a30753b9e3fd397e1d", "title": "Episodic Memory and Beyond: The Hippocampus and Neocortex in Transformation.", "text": "The last decade has seen dramatic technological and conceptual changes in research on episodic memory and the brain. New technologies, and increased use of more naturalistic observations, have enabled investigators to delve deeply into the structures that mediate episodic memory, particularly the hippocampus, and to track functional and structural interactions among brain regions that support it. Conceptually, episodic memory is increasingly being viewed as subject to lifelong transformations that are reflected in the neural substrates that mediate it. In keeping with this dynamic perspective, research on episodic memory (and the hippocampus) has infiltrated domains, from perception to language and from empathy to problem solving, that were once considered outside its boundaries. Using the component process model as a framework, and focusing on the hippocampus, its subfields, and specialization along its longitudinal axis, along with its interaction with other brain regions, we consider these new developments and their implications for the organization of episodic memory and its contribution to functions in other domains."}
{"_id": "5da91b8c2fba3f8b05ae8ff26d11546e221baf99", "title": "Solving sybil attacks using evolutionary game theory", "text": "Recommender systems have become quite popular recently. However, such systems are vulnerable to several types of attacks that target user ratings. One such attack is the Sybil attack where an entity masquerades as several identities with the intention of diverting user ratings. In this work, we propose evolutionary game theory as a possible solution to the Sybil attack in recommender systems. After modeling the attack, we use replicator dynamics to solve for evolutionary stable strategies. Our results show that under certain conditions that are easily achievable by a system administrator, the probability of an attack strategy drops to zero implying degraded fitness for Sybil nodes that eventually die out."}
{"_id": "aef825b71a41e2ff985553b2c2001719385e61c1", "title": "Design of a smart parking system using wireless sensor network", "text": "In this article it is included the design of a smart parking system, the base was the current system \"Sistema integrado de Estacionamiento (SIMERT)\" and the mobility in Loja - Ecuador, with the use of a sensor network and wireless technology based in Zigbee 900 and Digimesh 2.4 GHz. To the feasibility of the project it was considered a technological platform that could be adapted to the city conditions and that be friendly with the environment, furthermore the suggested solution gives us connectivity with a Web application for integrated parking and usability of administrators and final users."}
{"_id": "3a30273f96f644bbb45a086ccb212cfd6d5665c2", "title": "Automated Detection of Logical Errors in Programs", "text": "Research and industrial experience reveal that code reviews as a part of software inspection might be the most cost-effective technique a team can use to reduce defects. Tools that automate code inspection mostly focus on the detection of a priori known defect patterns and security vulnerabilities. Automated detection of logical errors, due to a faulty implementation of applications\u2019 functionality is a relatively uncharted territory. Automation can be based on profiling the intended behavior behind the source code. In this paper, we present a new code profiling method that combines an information flow analysis, the crosschecking of dynamic invariants with symbolic execution, and the use of fuzzy logic. Our goal is to detect logical errors and exploitable vulnerabilities. The theoretical underpinnings and the practical implementation of our approach are discussed. We test the APP_LogGIC tool that implements the proposed analysis on two realworld applications. The results show that profiling the intended program behavior is feasible in diverse applications. We discuss the heuristics used to overcome the problem of state space explosion and of the large data sets. Code metrics and test results are provided to demonstrate the effectiveness of the approach."}
{"_id": "0b3b5fa77fef4dd40f8255226fb448c06500fabc", "title": "An algorithm to determine peer-reviewers", "text": "The peer-review process is the most widely accepted certification mechanism for officially accepting the written results of researchers within the scientific community. An essential component of peer-review is the identification of competent referees to review a submitted manuscript. This article presents an algorithm to automatically determine the most appropriate reviewers for a manuscript by way of a co-authorship network data structure and a relative-rank particle-swarm algorithm. This approach is novel in that it is not limited to a pre-selected set of referees, is computationally efficient, requires no human-intervention, and, in some instances, can automatically identify conflict of interest situations. A useful application of this algorithm would be to open commentary peer-review systems because it provides a weighting for each referee with respects to their expertise in the domain of a manuscript. The algorithm is validated using referee bid data from the 2005 Joint Conference on Digital Libraries."}
{"_id": "6eb6f722499acb3783bcb67f01447f3c59854711", "title": "Health information systems: a survey of frameworks for developing countries.", "text": "OBJECTIVES\nThe objective of this paper is to perform a survey of excellent research on health information systems (HIS) analysis and design, and their underlying theoretical frameworks. It classifies these frameworks along major themes, and analyzes the different approaches to HIS development that are practical in resource-constrained environments.\n\n\nMETHOD\nLiterature review based on PubMed citations and conference proceedings, as well as Internet searches on information systems in general, and health information systems in particular.\n\n\nRESULTS\nThe field of health information systems development has been studied extensively. Despite this, failed implementations are still common. Theoretical frameworks for HIS development are available that can guide implementers.\n\n\nCONCLUSION\nAs awareness, acceptance, and demand for health information systems increase globally, the variety of approaches and strategies will also follow. For developing countries with scarce resources, a trial-and-error approach can be very costly. Lessons from the successes and failures of initial HIS implementations have been abstracted into theoretical frameworks. These frameworks organize complex HIS concepts into methodologies that standardize techniques in implementation. As globalization continues to impact healthcare in the developing world, demand for more responsive health systems will become urgent. More comprehensive frameworks and practical tools to guide HIS implementers will be imperative."}
{"_id": "834a3bd7caa3fa967e124d04be64f038cf8951a7", "title": "Applying Deep Learning to the Newsvendor Problem", "text": "The newsvendor problem is one of the most basic and widely applied inventory models. There are numerous extensions of this problem. If the probability distribution of the demand is known, the problem can be solved analytically. However, approximating the probability distribution is not easy and is prone to error; therefore, the resulting solution to the newsvendor problem may be not optimal. To address this issue, we propose an algorithm based on deep learning that optimizes the order quantities for all products based on features of the demand data. Our algorithm integrates the forecasting and inventory-optimization steps, rather than solving them separately, as is typically done, and does not require knowledge of the probability distributions of the demand. Numerical experiments on real-world data suggest that our algorithm outperforms other approaches, including data-driven and machine learning approaches, especially for demands with high volatility. Finally, in order to show how this approach can be used for other inventory optimization problems, we provide an extension for (r,Q) policies."}
{"_id": "e8b83d99ea6121c13df3570b4f8d3697257b1c2b", "title": "Arches: a Framework for Modeling Complex Terrains", "text": "In this paper, we present a framework for representing complex terrains with such features as overhangs, arches and caves and including different materials such as sand and rocks. Our hybrid model combines a volumetric discrete data structure that stores the different materials and an implicit representation for sculpting and reconstructing the surface of the terrain. Complex scenes can be edited and sculpted interactively with high level tools. We also propose an original rock generation technique that enables us to automatically generate complex rocky sceneries with piles of rocks without any computationally demanding physically-based simulation."}
{"_id": "377e1010e90248a2280b2f4754b8f4a04fe414f9", "title": "BERDS-BERkeley EneRgy Disaggregation Data Set", "text": "We first review some of the suggested methods for energy disaggregation. We then provide a real-life data set for researchers to help developing novel energy disaggregation algorithms. We also present a publicly available experimental data set captured from a typical building on the University of California, Berkeley campus."}
{"_id": "c0d79e6077c47d6289ab89054f2b51653d887958", "title": "Action Search: Learning to Search for Human Activities in Untrimmed Videos", "text": "Traditional approaches for action detection use trimmed data to learn sophisticated action detector models. Although these methods have achieved great success at detecting human actions, we argue that huge information is discarded when ignoring the process, through which this trimmed data is obtained. In this paper, we propose Action Search, a novel approach that mimics the way people annotate activities in video sequences. Using a Recurrent Neural Network, Action Search can efficiently explore a video and determine the time boundaries during which an action occurs. Experiments on the THUMOS14 dataset reveal that our model is not only able to explore the video efficiently but also accurately find human activities, outperforming stateof-the-art methods."}
{"_id": "aceaec2ad171f7a5ffc03f52a5598e8577506fe5", "title": "Utility-Preserving Differentially Private Data Releases Via Individual Ranking Microaggregation", "text": "Being able to release and exploit open data gathered in information systems is crucial for researchers, enterprises and the overall society. Yet, these data must be anonymized before release to protect the privacy of the subjects to whom the records relate. Differential privacy is a privacy model for anonymization that offers more robust privacy guarantees than previous models, such as k-anonymity and its extensions. However, it is often disregarded that the utility of differentially private outputs is quite limited, either because of the amount of noise that needs to be added to obtain them or because utility is only preserved for a restricted type and/or a limited number of queries. On the contrary, k-anonymity-like data releases make no assumptions on the uses of the protected data and, thus, do not restrict the number and type of doable analyses. Recently, some authors have proposed mechanisms to offer general-purpose differentially private data releases. This paper extends such works with a specific focus on the preservation of the utility of the protected data. Our proposal builds on microaggregation-based anonymization, which is more flexible and utility-preserving than alternative anonymization methods used in the literature, in order to reduce the amount of noise needed to satisfy differential privacy. In this way, we improve the utility of differentially private data releases. Moreover, the noise reduction we achieve does not depend on the size of the data set, but just on the number of attributes to be protected, which is a more desirable behavior for large data sets. The utility benefits brought by our proposal are empirically evaluated and compared with related works for several data sets and metrics."}
{"_id": "cc50c72a5120877352f9b8638a738c7085bd2c45", "title": "The neuropsychiatry of the cerebellum \u2014 insights from the clinic", "text": "A central aspect of the cerebellar cognitive affective syndrome is the dysregulation of affect that occurs when lesions involve the \u2018limbic cerebellum\u2019 (vermis and fastigial nucleus). In this case series we describe neuropsychiatric disturbances in adults and children with congenital lesions including cerebellar agenesis, dysplasia, and hypoplasia, and acquired conditions including cerebellar stroke, tumor, cerebellitis, trauma, and neurodegenerative disorders. The behaviors that we witnessed and that were described by patients and families included distractibility and hyperactivity, impulsiveness, disinhibition, anxiety, ritualistic and stereotypical behaviors, illogical thought and lack of empathy, as well as aggression and irritability. Ruminative and obsessive behaviors, dysphoria and depression, tactile defensiveness and sensory overload, apathy, childlike behavior, and inability to appreciate social boundaries and assign ulterior motives were also evident. We grouped these disparate neurobehavioral profiles into five major domains, characterized broadly as disorders of attentional control, emotional control, and social skill set as well as autism spectrum disorders, and psychosis spectrum disorders. Drawing on our dysmetria of thought hypothesis, we conceptualized the symptom complexes within each putative domain as reflecting either exaggeration (overshoot, hypermetria) or diminution (hypotonia, or hypometria) of responses to the internal or external environment. Some patients fluctuated between these two states. We consider the implications of these neurobehavioral observations for the care of patients with ataxia, discuss the broader role of the cerebellum in the pathogenesis of these neuropsychiatric symptoms, and revisit the possibility of using cerebellar stimulation to treat psychiatric disorders by enhancing cerebellar modulation of cognition and emotion."}
{"_id": "662338a7586c3e06ba684bcafd2a211eec2f7165", "title": "Neuronal Specificity of Acupuncture in Alzheimer's Disease and Mild Cognitive Impairment Patients: A Functional MRI Study", "text": "Although acupuncture is considered to be effective and safe for Alzheimer's disease (AD) and mild cognitive impairment (MCI), the mechanism underlying its therapeutic effect is still unknown. Most studies clarifying the neuronal pathway produced by acupuncture were still applied to healthy subjects with limited single acupuncture point stimulation, which was inconsistency with clinical practice. Thus, in our present study, we investigate the differences between brain activity changes in AD and MCI patients caused by multi-acupuncture point Siguan (four gates), in order to provide visualized evidence for neuronal specificity of clinical acupuncture. Forty-nine subjects were recruited, including 21 AD patients, 14 MCI patients, and 14 healthy controls (HC). AD and MCI patients were randomly divided into two groups, respectively: real acupuncture point group (14 AD and 8 MCI) and sham acupuncture point group (7 AD and 6 MCI). We adopted a 16-minute, single-block, experimental design for acquiring functional MRI images. We found, in AD and MCI patients, Siguan (four gates) elicited extensive activations and deactivations in cognitive-related areas, visual-related areas, the sensorimotor-related area, basal ganglia, and cerebellum. Compared with HC, AD and MCI patients showed similar activations in cognitive-related brain areas (inferior frontal gyrus, supramarginal gyrus, and rolandic operculum) as well as deactivations in cognitive-related areas, visual-related areas, basal ganglia, and cerebellum, which were not found in HC. Compared with sham acupuncture points, real acupuncture points produced more specific brain changes with both activated and deactivated brain activities in AD and MCI. The preliminary results in our study verified the objective evidence for neuronal specificity of acupuncture in AD and MCI patients."}
{"_id": "d8bdf111fee120bb80991f65ecd2da864fdbb6ac", "title": "Roombots\u2014Towards decentralized reconfiguration with self-reconfiguring modular robotic metamodules", "text": "This paper presents our work towards a decentralized reconfiguration strategy for self-reconfiguring modular robots, assembling furniture-like structures from Roombots (RB) metamodules. We explore how reconfiguration by locomotion from a configuration A to a configuration B can be controlled in a distributed fashion. This is done using Roombots metamodules\u2014two Roombots modules connected serially\u2014that use broadcast signals, lookup tables of their movement space, assumptions about their neighborhood, and connections to a structured surface to collectively build desired structures without the need of a centralized planner."}
{"_id": "94ea27b6e89f2587f26962b3803260adfcf7b3f1", "title": "Social Persuasion in Online and Physical Networks", "text": "Social persuasion to influence the actions, beliefs, and behaviors of individuals, embedded in a social network, has been widely studied. It has been applied to marketing, healthcare, sustainability, political campaigns, and public policy. Traditionally, there has been a separation between physical (offline) and cyber (online) worlds. While persuasion methods in the physical world focused on strong interpersonal trust and design principles, persuasion methods in the online world were rich on data-driven analysis and algorithms. Recent trends including Internet of Things, \u201cbig data,\u201d and smartphone adoption point to the blurring divide between the cyber world and the physical world in the following ways. Fine grained data about each individual's location, situation, social ties, and actions are collected and merged from different sources. The messages for persuasion can be transmitted through both worlds at suitable times and places. The impact of persuasion on each individual is measurable. Hence, we posit that the social persuasion will soon be able to span seamlessly across these worlds and will be able to employ computationally and empirically rigorous methods to understand and intervene in both cyber and physical worlds. Several early examples indicate that this will impact the fundamental facets of persuasion including who, how, where, and when, and pave way for multiple opportunities as well as research challenges."}
{"_id": "8dd4393cfc8739b65bf443fcbc849896447d7cb2", "title": "Improving Interactional Organizational Research : A Model of Person-Organization Fit", "text": "In order for researchers to understand and predict behavior, they must consider both person and situation factors and how these factors interact. Even though organization researchers have developed interactional models, many have overemphasized either person or situation components, and most have failed to consider the effects that persons have on situations. This paper presents criteria for improving interactional models and a model of person-organization fit, which satisfies these criteria. Using a Q-sort methodology, individual value profiles are compared to organizational value profiles to determine fit and to predict changes in values, norms, and behaviors."}
{"_id": "ef150d94c06c8c16d3a127d8115278e927f6ae03", "title": "Constraint Programming in OPL", "text": "OPL is a modeling language for mathematical programming and combinatorial optimization problems. It is the rst modeling language to combine high-level algebraic and set notations from model-ing languages with a rich constraint language and the ability to specify search procedures and strategies that is the essence of constraint programming. In addition, OPL models can be controlled and composed using OPLSCRIPT, a script language that simpliies the development of applications that solve sequences of models, several instances of the same model, or a combination of both as in column-generation applications. This paper illustrates some of the functionalities of OPL for constraint programming using frequency allocation, sport-scheduling, and job-shop scheduling applications. It also illustrates how OPL models can be composed using OPLSCRIPT on a simple connguration example."}
{"_id": "09b2ddbba0bf89d0f5172e8dff5090714d279d3c", "title": "Recycling ambient microwave energy with broad-band rectenna arrays", "text": "This paper presents a study of reception and rectification of broad-band statistically time-varying low-power-density microwave radiation. The applications are in wireless powering of industrial sensors and recycling of ambient RF energy. A 64-element dual-circularly-polarized spiral rectenna array is designed and characterized over a frequency range of 2-18 GHz with single-tone and multitone incident waves. The integrated design of the antenna and rectifier, using a combination of full-wave electromagnetic field analysis and harmonic balance nonlinear circuit analysis, eliminates matching and filtering circuits, allowing for a compact element design. The rectified dc power and efficiency is characterized as a function of dc load and dc circuit topology, RF frequency, polarization, and incidence angle for power densities between 10/sup -5/-10/sup -1/ mW/cm/sup 2/. In addition, the increase in rectenna efficiency for multitone input waves is presented."}
{"_id": "1d36a291318adabb5ff10f4d5f576a828f2a2490", "title": "Dynamic Difficulty Adjustment (DDA) in Computer Games: A Review", "text": "Dynamic difficulty adjustment (DDA) is a method of automatically modifying a game\u2019s features, behaviors, and scenarios in realtime, depending on the player\u2019s skill, so that the player, when the game is very simple, does not feel bored or frustrated, when it is very difficult. The intent of the DDA is to keep the player engrossed till the end and to provide him/her with a challenging experience. In traditional games, difficulty levels increase linearly or stepwise during the course of the game. The features such as frequency, starting levels, or rates can be set only at the beginning of the game by choosing a level of difficulty. This can, however, result in a negative experience for players as they try to map a predecided learning curve. DDA attempts to solve this problem by presenting a customized solution for the gamers. This paper provides a review of the current approaches to DDA."}
{"_id": "182fdcb0d6cd872a5a35c58cc2230486d2750201", "title": "Scale-Space and Edge Detection Using Anisotropic Diffusion", "text": "Abstracf-The scale-space technique introduced by Witkin involves generating coarser resolution images by convolving the original image with a Gaussian kernel. This approach has a major drawback: it is difficult to obtain accurately the locations of the \u201csemantically meaningful\u201d edges at coarse scales. In this paper we suggest a new definition of scale-space, and introduce a class of algorithms that realize it using a diffusion process. The diffusion coefficient is chosen to vary spatially in such a way as to encourage intraregion smoothing in preference to interregion smoothing. It is shown that the \u201cno new maxima should be generated at coarse scales\u201d property of conventional scale space is preserved. As the region boundaries in our approach remain sharp, we obtain a high quality edge detector which successfully exploits global information. Experimental results are shown on a number of images. The algorithm involves elementary, local operations replicated over the image making parallel hardware implementations feasible."}
{"_id": "18c626507bc53449021d9c6d9ec1dabd1dbe8337", "title": "Segmentation using Eigenvectors: A Unifying View", "text": "Automatic grouping and segmentation of images remains a challenging problem in computer vision. Recently, a number of authors have demonstrated good performance on this task using methods that are based on eigenvectors of the a nity matrix. These approaches are extremely attractive in that they are based on simple eigendecomposition algorithms whose stability is well understood. Nevertheless, the use of eigendecompositions in the context of segmentation is far from well understood. In this paper we give a unied treatment of these algorithms, and show the close connections between them while highlighting their distinguishing features. We then prove results on eigenvectors of block matrices that allow us to analyze the performance of these algorithms in simple grouping settings. Finally, we use our analysis to motivate a variation on the existing methods that combines aspects from di erent eigenvector segmentation algorithms. We illustrate our analysis with results on real and synthetic images. Human perceiving a scene can often easily segment it into coherent segments or groups. There has been a tremendous amount of e ort devoted to achieving the same level of performance in computer vision. In many cases, this is done by associating with each pixel a feature vector (e.g. color, motion, texture, position) and using a clustering or grouping algorithm on these feature vectors. Perhaps the cleanest approach to segmenting points in feature space is based on mixture models in which one assumes the data were generated by multiple processes and estimates the parameters of the processes and the number of components in the mixture. The assignment of points to clusters can then be easily performed by calculating the posterior probability of a point belonging to a cluster. Despite the elegance of this approach, the estimation process leads to a notoriously di cult optimization. The frequently used EM algorithm [3] often converges to a local maximum that depends on the initial conditions."}
{"_id": "51fea461cf3724123c888cb9184474e176c12e61", "title": "An Iterative Image Registration Technique with an Application to Stereo Vision", "text": "Image registration finds a variety of applications in computer vision. Unfortunately, traditional image registration techniques tend to be costly. We present a new image registration technique that makes use of the spatial intensity gradient of the images to find a good match using a type of Newton-Raphson iteration. Our technique is faster because it examines far fewer potential matches between the images than existing techniques. Furthermore, this registration technique can be generalized to handle rotation, scaling and shearing. We show show our technique can be adapted for use in a stereo vision system."}
{"_id": "b94c7ff9532ab26c3aedbee3988ec4c7a237c173", "title": "Normalized Cuts and Image Segmentation", "text": "w e propose Q novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the amage data, our approach aims a t extracting the global impression of an image. We treat image segmentation QS (I graph partitioning problem and propose Q novel global criterion, the normalized cut, for segmenting the graph. The normalized cut craterion measures both the total dissimilarity between the different groups QS well as the total similarity within the groups. We show that an eficient computational technique based on a generaked eigenvalue problem can be used to optimize this criterion. w e have applied this approach to segmenting static images and found results very enco u raging."}
{"_id": "1e9f671cd3c680de39db2afef69991d8f8e038ba", "title": "A First Look at a Telepresence System with Room-Sized Real-Time 3 D Capture and Life-Sized Tracked Display Wall", "text": "This paper provides a first look at a telepresence system offering room-sized, fully dynamic real-time 3D scene capture and continuous-viewpoint head-tracked display on a life-sized tiled display wall. The system is an expansion of a previous system, based an array of commodity depth sensors. We describe adjustments and improvements made to camera calibration, sensor data processing, data merger, rendering, and display, as required to scale the earlier system to room-sized."}
{"_id": "91a613ed06c4654f38f5c2e7fe6ebffeec53d887", "title": "Weighted extreme learning machine for imbalance learning", "text": "Extreme learning machine (ELM) is a competitive machine learning technique, which is simple in theory and fast in implementation. The network types are \u2018\u2018generalized\u2019\u2019 single hidden layer feedforward networks, which are quite diversified in the form of variety in feature mapping functions or kernels. To deal with data with imbalanced class distribution, a weighted ELM is proposed which is able (1) it is simple in theory and convenient in implementation; (2) a wide type of feature mapping functions or kernels are available for the proposed framework; (3) the proposed method can be applied directly into multiclass classification tasks. In addition, after integrating with the weighting scheme, (1) the weighted ELM is able to deal with data with imbalanced class distribution while maintain the good performance on well balanced data as unweighted ELM; (2) by assigning different weights for each example according to users\u2019 needs, the weighted ELM can be generalized to cost sensitive learning. & 2012 Elsevier B.V. All rights reserved."}
{"_id": "36c2cafdf3a0d39931e3ee46d7665270fca42350", "title": "Robust motion detection using histogram of oriented gradients for illumination variations", "text": "This paper proposes a robust motion detection method for illumination variations which uses histogram of oriented gradients. The detection process is divided into two phases: coarse detection and refinement. In the coarse detection phase, first, a texture-based background model is built which implements a group of adaptive histograms of oriented gradients; then, by comparing the histogram of oriented gradients of each pixel between current frame and background model, a foreground is segmented based on texture feature which is not susceptible to illumination variations, whereas some missing foreground regions exist; finally, the result based on texture is optimized by combining the pixel-wise detection result produced by Gaussian Mixture Model (GMM) algorithm, which greatly improves the detection performance by incorporating efficient morphological operations. In the refinement phase, the above detection result is refined based on the distinction in color feature to eliminate errors like shadows, noises, redundant contour, etc. Experimental results show the effectiveness and robustness of our approach in detecting moving objects in varying illumination conditions."}
{"_id": "740affe128d0d20f2d00fbab0eed877968abbc09", "title": "Renewable energy strategies for sustainable development", "text": "This paper discusses the perspective of renewable energy (wind, solar, wave and biomass) in the making of strategies for a sustainable development. Such strategies typically involve three major technological changes: energy savings on the demand side, efficiency improvements in the energy production, and replacement of fossil fuels by various sources of renewable energy. Consequently, large-scale renewable energy implementation plans must include strategies of how to integrate the renewable sources in coherent energy systems influenced by energy savings and efficiency measures. Based on the case of Denmark, this paper discusses the problems and perspectives of converting present energy systems into a 100 percent renewable energy system. The conclusion is that such development will be possible. The necessary renewable energy sources are present, if further technological improvements of the energy system are achieved. Especially technologies of converting the transportation and the introduction of flexible energy system technologies are crucial."}
{"_id": "8d6fe5700a242f90fd406a670d4574dcef436fb6", "title": "DeepSynth : Synthesizing A Musical Instrument With Video", "text": "This paper introduces DeepSynth, an end-to-end neural network model for generating the sound of a musical instrument based on a silent video of it being played. We specifically focus on building a synthesizer for the piano, but the ideas proposed in this paper are applicable to a wide range of musical instruments. At a high level, the model consists of a convolutional neural network (CNN) to extract features from the raw video frames and two stacked neural autoregressive models, one to encode the spatiotemporal features of the video and the other to generate the raw audio waveform."}
{"_id": "6df7c198635cfa22b2dbc7e80ed4bccea2480504", "title": "Implementation of page rank algorithm in Hadoop MapReduce framework", "text": "One of the most popular algorithm in processing internet data i.e. webpages is page rank algorithm which is intended to decide the importance of a webpage by assigning a weighting value based on any incoming link to those webpage. However, the large amount of internet data may lead into computational burden in processing those page rank algorithm. To take into account those burden, in this paper we present a page rank processing algorithm over distributed system using Hadoop MapReduce framework called MR PageRank. Our algorithm can be decomposed into three processes, each of which is implemented in one Map and Reduce job. We first parse the raw webpage input to produce title of page and its outgoing links as key and value pair, respectively, as well as total dangling node's weight and total amount of pages. We next calculate the probability of each page and distribute this probability to each of outgoing link evenly. Each of the outgoing weight is shuffled and aggregated based on similarity of page title to update a new weighting value of each page. Notice that in the calculation we consider the dangling node and jumping factor. In the end, all of the page are descendingly sorted based on their weighting values. From the experimental result, we show that our implementation has output with reasonable ordering result. This electronic document is a \u201clive\u201d template and already defines the components of your paper."}
{"_id": "25f55b9bf29625ba1ea3195954bc4e13896218c0", "title": "Big Data Analytics with Spark", "text": "What\u2019s more, Big Data Analytics with Spark provides an introduction to other big data technologies that are commonly used along with Spark, such as HDFS, Avro, Parquet, Ka a, Cassandra, HBase, Mesos, and so on. It also provides an introduction to machine learning and graph concepts. So the book is self-suffi cient; all the technologies that you need to know to use Spark are covered. The only thing that you are expected to have is some programming knowledge in any language."}
{"_id": "9689b1b7ddcb203b6d9a8f633d8bbef71c44a44c", "title": "Variation in the impact of social network characteristics on physical functioning in elderly persons: MacArthur Studies of Successful Aging.", "text": "OBJECTIVES\nSocial support and social networks have been shown to exert significant effects on health and functioning among elderly persons. Although theorists have speculated that the strength of these effects may differ as a function of sociodemographic characteristics and prior health status, few studies have directly tested the moderating effects of these variables.\n\n\nMETHODS\nLongitudinal data from the MacArthur Study of Successful Aging were used to examine the effects of structural and functional social support on changes in physical functioning over a 7-year period, measured by the Nagi scale, in a sample of initially high-functioning men and women aged 70 to 79 years. Multiple regression analyses were used to test the main effects of social support and social network variables, as well as their interactions with gender, income, and baseline physical performance.\n\n\nRESULTS\nAfter controlling for potential confounding effects, respondents with more social ties showed less functional decline. The beneficial effects of social ties were stronger for respondents who were male or had lower levels of baseline physical performance.\n\n\nDISCUSSION\nThe effects of social support and social networks may vary according to the individual's gender and baseline physical capabilities. Studies of functional decline among elderly persons should not ignore this population variation in the effects of social networks."}
{"_id": "055d55726d45406a6f115c4d26f510bade021be3", "title": "Design and Implementation of Autonomous Car using Raspberry Pi", "text": "The project aims to build a monocular vision autonomous car prototype using Raspberry Pi as a processing chip. An HD camera along with an ultrasonic sensor is used to provide necessary data from the real world to the car. The car is capable of reaching the given destination safely and intelligently thus avoiding the risk of human errors. Many existing algorithms like lane detection, obstacle detection are combined together to provide the necessary control to the car."}
{"_id": "4dc0a91cc4da1d15c525385bf880cd39aa3d2b71", "title": "Routing and Wavelength Assignment in Optical WDM Networks", "text": "This article discusses the routing and wavelength assignment (RWA) problem in optical networks employing wavelenength division multiplexing (WDM) technology. Two variants of the problem are studied: static RWA, whereby the traffic requirements are known in advance, and dynamic RWA in which connection requests arrive in some random fashion. Both point-topoint and multicast traffic demands are considered."}
{"_id": "93ad3279c3c4cbe02428388874aec44cf2d30f30", "title": "Low-Power High-Efficiency Class D Audio Power Amplifiers", "text": "The architecture, design, and implementation of two clock-free analog class D audio power amplifiers using 0.5 \u00bfm CMOS standard technology are introduced. The amplifiers are designed to consume significantly less power than former implementations. Both designs operate with a 2.7 V single voltage supply and deliver a maximum output power of 250 mW into an 8 \u00bf speaker. The two class-D amplifiers are based on a hysteretic sliding mode controller, which avoids the complex task of generating the highly linear triangle carrier signal used in conventional architectures. The first design generates a two-level modulated signal, known as binary modulated amplifier (BMA); the second topology implements a three-level modulated signal, also called ternary modulated amplifier (TMA). The architectures and implementations are simple and compact, providing very low quiescent power consumption. Experimental results of the BMA/TMA yield an efficiency of 89/90% and a total harmonic distortion plus noise (THD+N) of 0.02/0.03%, respectively. The efficiency and linearity is comparable to state-of-the-art amplifiers but the static power consumption is less than one tenth of previous proposed architectures. Both class D amplifiers achieve a power supply rejection ratio greater than 75 dB at 217 Hz, and a signal-to-noise ratio (SNR) higher than 90 dB within the whole audio band. Each amplifier occupies less than 1.5 mm2."}
{"_id": "b39e5f7217abae9e2c682ee5068a11309631b93b", "title": "Incremental Clustering of Mobile Objects", "text": "Moving objects are becoming increasingly attractive to the data mining community due to continuous advances in technologies like GPS, mobile computers, and wireless communication devices. Mining spatio-temporal data can benefit many different functions: marketing team managers for identifying the right customers at the right time, cellular companies for optimizing the resources allocation, web site administrators for data allocation matters, animal migration researchers for understanding migration patterns, and meteorology experts for weather forecasting. In this research we use a compact representation of a mobile trajectory and define a new similarity measure between trajectories. We also propose an incremental clustering algorithm for finding evolving groups of similar mobile objects in spatio-temporal data. The algorithm is evaluated empirically by the quality of object clusters (using Dunn and Rand indexes), memory space efficiency, execution times, and scalability (run time vs. number of objects)."}
{"_id": "81bed85b8533c6efb07757ca825fa05adad38bde", "title": "Filling Knowledge Base Gaps for Distant Supervision of Relation Extraction", "text": "Distant supervision has attracted recent interest for training information extraction systems because it does not require any human annotation but rather employs existing knowledge bases to heuristically label a training corpus. However, previous work has failed to address the problem of false negative training examples mislabeled due to the incompleteness of knowledge bases. To tackle this problem, we propose a simple yet novel framework that combines a passage retrieval model using coarse features into a state-of-the-art relation extractor using multi-instance learning with fine features. We adapt the information retrieval technique of pseudorelevance feedback to expand knowledge bases, assuming entity pairs in top-ranked passages are more likely to express a relation. Our proposed technique significantly improves the quality of distantly supervised relation extraction, boosting recall from 47.7% to 61.2% with a consistently high level of precision of around 93% in the experiments."}
{"_id": "ad5e7a4823883af5a5118ab88179a5746ec528cb", "title": "Common-Specific Multimodal Learning for Deep Belief Network", "text": "Multimodal Deep Belief Network has been widely used to extract representations for multimodal data by fusing the high-level features of each data modality into common representations. Such straightforward fusion strategy can benefit the classification and information retrieval tasks. However, it may introduce noise in case the high-level features are not naturally common hence non-fusable for different modalities. Intuitively, each modality may have its own specific features and corresponding representation capabilities thus should not be simply fused. Therefore, it is more reasonable to fuse only the common features and represent the multimodal data by both the fused features and the modality-specific features. To distinguish common features from modal-specific features is a challenging task for traditional DBN models where all features are crudely mixed. This paper proposes the Common-Specific Multimodal Deep Belief Network (CSDBN) to solve the problem. CS-DBN automatically separates common features from modal-specific features and fuses only the common ones for data representation. Experimental results demonstrate the superiority of CS-DBN for classification tasks compared with the baseline approaches."}
{"_id": "29f0a4ba7176e870ce4ab54b6b627ee7c761d890", "title": "Pastry: Scalable, distributed object location and routing for large-scale peer-to-", "text": "This paper presents the design and evaluation of Pastry, a sc al ble, distributed object location and routing scheme for wide-ar a peer-to-peer applications. Pastry performs application-level routing and ob ject location in a potentially very large overlay network of nodes connected via the Int rnet. It can be used to support a wide range of peer-to-peer applications li ke g obal data storage, global data sharing, and naming. An insert operation in Pastry stores an object at a user-defin ed number of diverse nodes within the Pastry network. A lookup operation reliabl y retrieves a copy of the requested object if one exists. Moreover, a lookup is usu ally routed to the node nearest the client issuing the lookup (by some measure of pro ximity), among the nodes storing the requested object. Pastry is completely de centralized, scalable, and self-configuring; it automatically adapts to the arriva l, departure and failure of nodes. Experimental results obtained with a prototype implementa tion on a simulated network of up to 100,000 nodes confirm Pastry\u2019s scalability, its ability to selfconfigure and adapt to node failures, and its good network loc ality properties."}
{"_id": "f14d3dde8b8c63fa64f3c715ee58142466eb0d43", "title": "Face-to-face transfer of wafer-scale graphene films", "text": "Graphene has attracted worldwide interest since its experimental discovery, but the preparation of large-area, continuous graphene film on SiO2/Si wafers, free from growth-related morphological defects or transfer-induced cracks and folds, remains a formidable challenge. Growth of graphene by chemical vapour deposition on Cu foils has emerged as a powerful technique owing to its compatibility with industrial-scale roll-to-roll technology. However, the polycrystalline nature and microscopic roughness of Cu foils means that such roll-to-roll transferred films are not devoid of cracks and folds. High-fidelity transfer or direct growth of high-quality graphene films on arbitrary substrates is needed to enable wide-ranging applications in photonics or electronics, which include devices such as optoelectronic modulators, transistors, on-chip biosensors and tunnelling barriers. The direct growth of graphene film on an insulating substrate, such as a SiO2/Si wafer, would be useful for this purpose, but current research efforts remain grounded at the proof-of-concept stage, where only discontinuous, nanometre-sized islands can be obtained. Here we develop a face-to-face transfer method for wafer-scale graphene films that is so far the only known way to accomplish both the growth and transfer steps on one wafer. This spontaneous transfer method relies on nascent gas bubbles and capillary bridges between the graphene film and the underlying substrate during etching of the metal catalyst, which is analogous to the method used by tree frogs to remain attached to submerged leaves. In contrast to the previous wet or dry transfer results, the face-to-face transfer does not have to be done by hand and is compatible with any size and shape of substrate; this approach also enjoys the benefit of a much reduced density of transfer defects compared with the conventional transfer method. Most importantly, the direct growth and spontaneous attachment of graphene on the underlying substrate is amenable to batch processing in a semiconductor production line, and thus will speed up the technological application of graphene."}
{"_id": "160631daa8ecea247014401f5429deb49883d395", "title": "Memristor-Based Material Implication (IMPLY) Logic: Design Principles and Methodologies", "text": "Memristors are novel devices, useful as memory at all hierarchies. These devices can also behave as logic circuits. In this paper, the IMPLY logic gate, a memristor-based logic circuit, is described. In this memristive logic family, each memristor is used as an input, output, computational logic element, and latch in different stages of the computing process. The logical state is determined by the resistance of the memristor. This logic family can be integrated within a memristor-based crossbar, commonly used for memory. In this paper, a methodology for designing this logic family is proposed. The design methodology is based on a general design flow, suitable for all deterministic memristive logic families, and includes some additional design constraints to support the IMPLY logic family. An IMPLY 8-bit full adder based on this design methodology is presented as a case study."}
{"_id": "2c3dffc38d40b725bbd2af80694375e6fc0b1b45", "title": "Video Super-Resolution via Bidirectional Recurrent Convolutional Networks", "text": "Super resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-Image SR deals with each video frame independently, and ignores intrinsic temporal dependency of video frames which actually plays a very important role in video SR. Multi-Frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often shows high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN named bidirectional recurrent convolutional network for efficient multi-frame SR. Different from vanilla RNNs, 1) the commonly-used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections. So they can greatly reduce the large number of network parameters and well model the temporal dependency in a finer level, i.e., patch-based rather than frame-based, and 2) connections from input layers at previous timesteps to the current hidden layer are added by 3D feedforward convolutions, which aim to capture discriminate spatio-temporal patterns for short-term fast-varying motions in local adjacent frames. Due to the cheap convolutional operations, our model has a low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With the powerful temporal dependency modeling, our model can super resolve videos with complex motions and achieve well performance."}
{"_id": "4b7f95162aa2c8523117e7e18b120b3062aaca6d", "title": "Semi-supervised time series classification", "text": "The problem of time series classification has attracted great interest in the last decade. However current research assumes the existence of large amounts of labeled training data. In reality, such data may be very difficult or expensive to obtain. For example, it may require the time and expertise of cardiologists, space launch technicians, or other domain specialists. As in many other domains, there are often copious amounts of unlabeled data available. For example, the PhysioBank archive contains gigabytes of ECG data. In this work we propose a semi-supervised technique for building time series classifiers. While such algorithms are well known in text domains, we will show that special considerations must be made to make them both efficient and effective for the time series domain. We evaluate our work with a comprehensive set of experiments on diverse data sources including electrocardiograms, handwritten documents, and video datasets. The experimental results demonstrate that our approach requires only a handful of labeled examples to construct accurate classifiers."}
{"_id": "17a0182ee8d12aa2491a1f4309afe84e6869b2a0", "title": "Luhn Revisited: Significant Words Language Models", "text": "Users tend to articulate their complex information needs in only a few keywords, making underspecified statements of request the main bottleneck for retrieval effectiveness. Taking advantage of feedback information is one of the best ways to enrich the query representation, but can also lead to loss of query focus and harm performance in particular when the initial query retrieves only little relevant information when overfitting to accidental features of the particular observed feedback documents. Inspired by the early work of Luhn [23], we propose significant words language models of feedback documents that capture all, and only, the significant shared terms from feedback documents. We adjust the weights of common terms that are already well explained by the document collection as well as the weight of rare terms that are only explained by specific feedback documents, which eventually results in having only the significant terms left in the feedback model.\n Our main contributions are the following. First, we present significant words language models as the effective models capturing the essential terms and their probabilities. Second, we apply the resulting models to the relevance feedback task, and see a better performance over the state-of-the-art methods. Third, we see that the estimation method is remarkably robust making the models in- sensitive to noisy non-relevant terms in feedback documents. Our general observation is that the significant words language models more accurately capture relevance by excluding general terms and feedback document specific terms."}
{"_id": "bda4bcaf90b11b1d88f7ec11931789b1a00461db", "title": "Performance Engineering for Microservices: Research Challenges and Directions", "text": "Microservices complement approaches like DevOps and continuous delivery in terms of software architecture. Along with this architectural style, several important deployment technologies, such as container-based virtualization and container orchestration solutions, have emerged. These technologies allow to efficiently exploit cloud platforms, providing a high degree of scalability, availability, and portability for microservices.\n Despite the obvious importance of a sufficient level of performance, there is still a lack of performance engineering approaches explicitly taking into account the particularities of microservices. In this paper, we argue why new solutions to performance engineering for microservices are needed. Furthermore, we identify open issues and outline possible research directions with regard to performance-aware testing, monitoring, and modeling of microservices."}
{"_id": "fdfdd91c18fdaa254162483c47895734f2059826", "title": "ABF + + : Fast and Robust Angle Based Flattening Temporary version-In print", "text": "Conformal parameterization of mesh models has numerous applications in geometry processing. Conformality is desirable for remeshing, surface reconstruction, and many other mesh processing applications. Subject to the conformality requirement, these applications typically bene t from parameterizations with smaller stretch. The Angle Based Flattening (ABF) method, presented a few years ago, generates provably valid conformal parameterizations with low stretch. However, it is quite time consuming and becomes error prone for large meshes due to numerical error accumulation. This work presents ABF++, a highly e cient extension of the ABF method that overcomes these drawbacks, while maintaining all the advantages of ABF. ABF++ robustly parameterizes meshes of hundreds of thousands and millions of triangles within minutes. It is based on two main components: (1) a new numerical solution technique that dramatically reduces the dimension of the linear systems solved at each iteration, speeding up the solution; (2) an e cient hierarchical solution technique. The speedup with (1) does not come at the expense of greater distortion. The hierarchical technique (2) enables parameterization of models with millions of faces in seconds, at the expense of a minor increase in parametric distortion. The parameterization computed by ABF++ are provably valid, i.e. they contain no ipped triangles. As a result of these extensions, the ABF++ method is extremely suitable for robustly and e ciently parameterizing models for geometry processing applications."}
{"_id": "6f07a10dfbd583fdda034c7d606e53148f162f2d", "title": "Steering with eyes closed: Mm-Wave beam steering without in-band measurement", "text": "Millimeter-wave communication achieves multi-Gbps data rates via highly directional beamforming to overcome pathloss and provide the desired SNR. Unfortunately, establishing communication with sufficiently narrow beamwidth to obtain the necessary link budget is a high overhead procedure in which the search space scales with device mobility and the product of the sender-receiver beam resolution. In this paper, we design, implement, and experimentally evaluate Blind Beam Steering (BBS) a novel architecture and algorithm that removes in-band overhead for directional mm-Wave link establishment. Our system architecture couples mm-Wave and legacy 2.4/5 GHz bands using out-of-band direction inference to establish (overhead-free) multi-Gbps mm-Wave communication. Further, BBS evaluates direction estimates retrieved from passively overheard 2.4/5 GHz frames to assure highest mm-Wave link quality on unobstructed direct paths. By removing in-band overhead, we leverage mm-Wave's very high throughput capabilities, beam-width scalability and provide robustness to mobility. We demonstrate that BBS achieves 97.8% accuracy estimating direction between pairing nodes using at least 5 detection band antennas. Further, BBS successfully detects unobstructed direct path conditions with an accuracy of 96.5% and reduces the IEEE 802.11ad beamforming training overhead by 81%."}
{"_id": "d60c2ac5c7b48d979a32cd74f850d3e9bd771924", "title": "Joint Resource Partitioning and Offloading in Heterogeneous Cellular Networks", "text": "In heterogeneous cellular networks (HCNs), it is desirable to offload mobile users to small cells, which are typically significantly less congested than the macrocells. To achieve sufficient load balancing, the offloaded users often have much lower SINR than they would on the macrocell. This SINR degradation can be partially alleviated through interference avoidance, for example time or frequency resource partitioning, whereby the macrocell turns off in some fraction of such resources. Naturally, the optimal offloading strategy is tightly coupled with resource partitioning; the optimal amount of which in turn depends on how many users have been offloaded. In this paper, we propose a general and tractable framework for modeling and analyzing joint resource partitioning and offloading in a two-tier cellular network. With it, we are able to derive the downlink rate distribution over the entire network, and an optimal strategy for joint resource partitioning and offloading. We show that load balancing, by itself, is insufficient, and resource partitioning is required in conjunction with offloading to improve the rate of cell edge users in co-channel heterogeneous networks."}
{"_id": "02da7de4dab6e16a2872d2c68fbc24cfa5e60a0c", "title": "IEEE 802.11ad: directional 60 GHz communication for multi-Gigabit-per-second Wi-Fi [Invited Paper]", "text": "With the ratification of the IEEE 802.11ad amendment to the 802.11 standard in December 2012, a major step has been taken to bring consumer wireless communication to the millimeter wave band. However, multi-gigabit-per-second throughput and small interference footprint come at the price of adverse signal propagation characteristics, and require a fundamental rethinking of Wi-Fi communication principles. This article describes the design assumptions taken into consideration for the IEEE 802.11ad standard and the novel techniques defined to overcome the challenges of mm-Wave communication. In particular, we study the transition from omnidirectional to highly directional communication and its impact on the design of IEEE 802.11ad."}
{"_id": "338de8d0176b9ca50609abf28100b94d0879054e", "title": "Inter-Session Modeling for Session-Based Recommendation", "text": "In recent years, research has been done on applying Recurrent Neural Networks (RNNs) as recommender systems. Results have been promising, especially in the session-based setting where RNNs have been shown to outperform state-of-the-art models. In many of these experiments, the RNN could potentially improve the recommendations by utilizing information about the user's past sessions, in addition to its own interactions in the current session. A problem for session-based recommendation, is how to produce accurate recommendations at the start of a session, before the system has learned much about the user's current interests. We propose a novel approach that extends an RNN recommender to be able to process the user's recent sessions, in order to improve recommendations. This is done by using a second RNN to learn from recent sessions, and predict the user's interest in the current session. By feeding this information to the original RNN, it is able to improve its recommendations. Our experiments on two different datasets show that the proposed approach can significantly improve recommendations throughout the sessions, compared to a single RNN working only on the current session. The proposed model especially improves recommendations at the start of sessions, and is therefore able to deal with the cold start problem within sessions."}
{"_id": "5218f2a2db6ddb814cc5af96fc2878ece86e37d9", "title": "Mood and Emotions in Small Groups and Work Teams", "text": "Affective influences abound in groups. In this article we propose an organizing model for understanding these affective influences and their effects on group life. We begin with individuallevel affective characteristics that members bring to their groups: moods, emotions, sentiments, and emotional intelligence. These affective characteristics then combine to form a group\u2019s affective composition. We discuss explicit and implicit processes through which this affective combination occurs by examining the research on emotional contagion, entrainment, modeling, and the manipulation of affect. We also explore how elements of the affective context, such as organizationwide emotion norms and the group\u2019s particular emotional history, may serve to constrain or amplify group members\u2019 emotions. The outcome, group emotion, results from the combination of the group\u2019s affective composition and the affective context in which the group is behaving. Last, we focus on the important interaction between nonaffective factors and affective factors in group life and suggest a possible agenda for future research. q 2001 Academic Press"}
{"_id": "8dc7cc939af832d071c2a050fd0284973ac70695", "title": "LoRea: A Backscatter Reader for Everyone!", "text": "Computational RFID (CRFID) platforms have enabled reconfigurable, battery-free applications for close to a decade. However, several factors have impeded their wide-spread adoption: low communication range, low throughput, and expensive infrastructure\u2014 CRFID readers usually cost upwards of $1000. This paper presents LoRea, a backscatter reader that achieves an order of magnitude higher range than existing CRFID readers, while costing a fraction of their price. LoRea achieves this by diverging from existing designs of CRFID readers and more specifically, how self-interference is tackled. LoRea builds on recent work that frequency-shifts backscatter transmissions away from the carrier signal to reduce selfinterference. LoRea also decouples the carrier generation from the reader, helping to further decrease self-interference. Decoupling carrier generation also enables the use of deployed infrastructure of smartphones and sensor nodes to provide the carrier signal. Together these methods reduce cost and complexity of the reader. LoRea also purposefully operates at lower bitrates than recent backscatter systems which enables high sensitivity and longer range. We evaluate LoRea experimentally and find that it achieves a communication range of up to 225 m in lineof-sight scenarios. In indoor environments, where the signal traverses several walls separating the reader and the backscatter tag, LoRea achieves a range of 30 m. These results illustrate how LoRea outperforms stateof-art backscatter systems and CRFID platforms."}
{"_id": "acd5d3ea9e88c96b63b6594d5c59961425986785", "title": "Sensory integration dysfunction affects efficacy of speech therapy on children with functional articulation disorders", "text": "BACKGROUND\nArticulation disorders in young children are due to defects occurring at a certain stage in sensory and motor development. Some children with functional articulation disorders may also have sensory integration dysfunction (SID). We hypothesized that speech therapy would be less efficacious in children with SID than in those without SID. Hence, the purpose of this study was to compare the efficacy of speech therapy in two groups of children with functional articulation disorders: those without and those with SID.\n\n\nMETHOD\nA total of 30 young children with functional articulation disorders were divided into two groups, the no-SID group (15 children) and the SID group (15 children). The number of pronunciation mistakes was evaluated before and after speech therapy.\n\n\nRESULTS\nThere were no statistically significant differences in age, sex, sibling order, education of parents, and pretest number of mistakes in pronunciation between the two groups (P > 0.05). The mean and standard deviation in the pre- and post-test number of mistakes in pronunciation were 10.5 \u00b1 3.2 and 3.3 \u00b1 3.3 in the no-SID group, and 10.1 \u00b1 2.9 and 6.9 \u00b1 3.5 in the SID group, respectively. Results showed great changes after speech therapy treatment (F = 70.393; P < 0.001) and interaction between the pre/post speech therapy treatment and groups (F = 11.119; P = 0.002).\n\n\nCONCLUSIONS\nSpeech therapy can improve the articulation performance of children who have functional articulation disorders whether or not they have SID, but it results in significantly greater improvement in children without SID. SID may affect the treatment efficiency of speech therapy in young children with articulation disorders."}
{"_id": "bde7d5d4ed0ad97f3d9234c547a53142283dbce4", "title": "Membrane vesicles as conveyors of immune responses", "text": "In multicellular organisms, communication between cells mainly involves the secretion of proteins that then bind to receptors on neighbouring cells. But another mode of intercellular communication \u2014 the release of membrane vesicles \u2014 has recently become the subject of increasing interest. Membrane vesicles are complex structures composed of a lipid bilayer that contains transmembrane proteins and encloses soluble hydrophilic components derived from the cytosol of the donor cell. These vesicles have been shown to affect the physiology of neighbouring recipient cells in various ways, from inducing intracellular signalling following binding to receptors to conferring new properties after the acquisition of new receptors, enzymes or even genetic material from the vesicles. This Review focuses on the role of membrane vesicles, in particular exosomes, in the communication between immune cells, and between tumour and immune cells."}
{"_id": "1f48eaf7c34a0faa9f533ed457b5e8e86cf1a15a", "title": "Global Fusion of Relative Motions for Robust, Accurate and Scalable Structure from Motion", "text": "Multi-view structure from motion (SfM) estimates the position and orientation of pictures in a common 3D coordinate frame. When views are treated incrementally, this external calibration can be subject to drift, contrary to global methods that distribute residual errors evenly. We propose a new global calibration approach based on the fusion of relative motions between image pairs. We improve an existing method for robustly computing global rotations. We present an efficient a contrario trifocal tensor estimation method, from which stable and precise translation directions can be extracted. We also define an efficient translation registration method that recovers accurate camera positions. These components are combined into an original SfM pipeline. Our experiments show that, on most datasets, it outperforms in accuracy other existing incremental and global pipelines. It also achieves strikingly good running times: it is about 20 times faster than the other global method we could compare to, and as fast as the best incremental method. More importantly, it features better scalability properties."}
{"_id": "54050af5a0a0f47bcaa6c1ee3e62025e114af3d9", "title": "SquishedNets: Squishing SqueezeNet further for edge device scenarios via deep evolutionary synthesis", "text": "While deep neural networks have been shown in recent years to outperform other machine learning methods in a wide range of applications, one of the biggest challenges with enabling deep neural networks for widespread deployment on edge devices such as mobile and other consumer devices is high computational and memory requirements. Recently, there has been greater exploration into small deep neural network architectures that are more suitable for edge devices, with one of the most popular architectures being SqueezeNet, with an incredibly small model size of 4.8MB. Taking further advantage of the notion that many applications of machine learning on edge devices are often characterized by a low number of target classes, this study explores the utility of combining architectural modifications and an evolutionary synthesis strategy for synthesizing even smaller deep neural architectures based on the more recent SqueezeNet v1.1 macroarchitecture (considered state-of-the-art in efficient architectures) for applications with fewer target classes. In particular, architectural modifications are first made to SqueezeNet v1.1 to accommodate for a 10-class ImageNet-10 dataset, and then an evolutionary synthesis strategy is leveraged to synthesize more efficient deep neural networks based on this modified macroarchitecture. The resulting SquishedNets possess model sizes ranging from 2.4MB to 0.95MB (\u223c5.17X smaller than SqueezeNet v1.1, or 253X smaller than AlexNet). Furthermore, the SquishedNets are still able to achieve accuracies ranging from 81.2% to 77%, and able to process at speeds of 156 images/sec to as much as 256 images/sec on a Nvidia Jetson TX1 embedded chip. These preliminary results show that a combination of architectural modifications and an evolutionary synthesis strategy can be a useful tool for producing very small deep neural network architectures that are well-suited for edge device scenarios without the need for compression or quantization."}
{"_id": "42481c1ceb57111b32cd7ef02488e7bd5a21a26a", "title": "Towards the Next Generation of Tabletop Gaming Experiences", "text": "In this paper we present a novel hardware and software platform (STARS) to realize computer augmented tabletop games that unify the strengths of traditional board games and computer games. STARS game applications preserve the social situation of traditional board games and provide a tangible interface with physical playing pieces to facilitate natural interaction. The virtual game components offer exciting new opportunities for game design and provide richer gaming experiences impossible to realize with traditional media. This paper describes STARS in terms of the hardware setup and the software platform used to develop and play STARS games. The interaction design within STARS is discussed and sample games are presented with regard to their contributions to enhancing user experience. Finally, realworld experiences with the platform are reported."}
{"_id": "14d418772a6490a7d2a7255ea2bd24fb58bba8e2", "title": "Integrated feature selection and higher-order spatial feature extraction for object categorization", "text": "In computer vision, the bag-of-visual words image representation has been shown to yield good results. Recent work has shown that modeling the spatial relationship between visual words further improves performance. Previous work extracts higher-order spatial features exhaustively. However, these spatial features are expensive to compute. We propose a novel method that simultaneously performs feature selection and feature extraction. Higher-order spatial features are progressively extracted based on selected lower order ones, thereby avoiding exhaustive computation. The method can be based on any additive feature selection algorithm such as boosting. Experimental results show that the method is computationally much more efficient than previous approaches, without sacrificing accuracy."}
{"_id": "5d7db185048212fb590f461628abb3ce1c968d79", "title": "A Comparison Among ARIMA, BP-NN, and MOGA-NN for Software Clone Evolution Prediction", "text": "Software evolution continues throughout the life cycle of the software. During the evolution of software system, it has been observed that the developers have a tendency to copy the modules completely or partially and modify them. This practice gives rise to identical or very similar code fragments called software clones. This paper examines the evolution of clone components by using advanced time series analysis. In the first phase, software clone components are extracted from the source repository of the software application by using the abstract syntax tree approach. Then, the evolution of software clone components is analyzed. In this paper, three models, Autoregressive Integrated Moving Average, back propagation neural network, and multi-objective genetic algorithm-based neural network, have been compared for the prediction of the evolution of software clone components. Evaluation is performed on the large open-source software application, ArgoUML. The ability to predict the clones helps the software developer to reduce the effort during software maintenance activities."}
{"_id": "ae3a29aacf4b326269d65fa90fc0101b7274f0a8", "title": "Automated detection of optic disc and blood vessel in retinal image using morphological, edge detection and feature extraction technique", "text": "Reliable, fast and efficient optic disc localization and blood-vessel detection are the primary tasks in computer analyses of retinal image. Most of the existing algorithms suffer due to inconsistent image contrast, varying individual condition, noises and computational complexity. This paper presents an algorithm to automatically detect landmark features of retinal image, such as optic disc and blood vessel. First, optic disc and blood vessel pixels are detected from blue plane of the image. Then, using OD location the vessel pixels are connected. The detection scheme utilizes basic operations like edge detection, binary thresholding and morphological operation. This method was evaluated on standard retinal image databases, such as STARE and DRIVE. Experimental results demonstrate that the high accuracy achieved by the proposed method is comparable to that reported by the most accurate methods in literature, alongside a substantial reduction of execution time. Thus the method may provide a reliable solution in automatic mass screening and diagnosis of the retinal diseases."}
{"_id": "eaa5e10061f5b87646a66637b51392a3b9aa1a37", "title": "Linked data on the web (LDOW2008)", "text": "The Web is increasingly understood as a global information space consisting not just of linked documents, but also of Linked Data. More than just a vision, the resulting Web of Data has been brought into being by the maturing of the Semantic Web technology stack, and by the publication of an increasing number of datasets according to the principles of Linked Data.\n The Linked Data on the Web (LDOW2008) workshop brings together researchers and practitioners working on all aspects of Linked Data. The workshop provides a forum to present the state of the art in the field and to discuss ongoing and future research challenges. In this workshop summary we will outline the technical context in which Linked Data is situated, describe developments in the past year through initiatives such as the Linking Open Data community project, and look ahead to the workshop itself."}
{"_id": "6878c7a109df8d74ff28cc14ede829a361d4c157", "title": "Spectrogram Enhancement Using Multiple Window Savitzky-Golay (MWSG) Filter for Robust Bird Sound Detection", "text": "Bird sound detection from real-field recordings is essential for identifying bird species in bioacoustic monitoring. Variations in the recording devices, environmental conditions, and the presence of vocalizations from other animals make the bird sound detection very challenging. In order to overcome these challenges, we propose an unsupervised algorithm comprising two main stages. In the first stage, a spectrogram enhancement technique is proposed using a multiple window Savitzky-Golay MWSG filter. We show that the spectrogram estimate using MWSG filter is unbiased and has lower variance compared with its single window counterpart. It is known that bird sounds are highly structured in the time-frequency T-F plane. We exploit these cues of prominence of T-F activity in specific directions from the enhanced spectrogram, in the second stage of the proposed method, for bird sound detection. In this regard, we use a set of four moving average filters that when applied to the enhanced spectrogram, yield directional spectrograms that capture the direction specific information. We propose a thresholding scheme on the time varying energy profile computed from each of these directional spectrograms to obtain frame-level binary decisions of bird sound activity. These individual decisions are then combined to obtain the final decision. Experiments are performed with three different datasets, with varying recording and noise conditions. Frame level F-score is used as the evaluation metric for bird sound detection. We find that the proposed method, on average, achieves higher F-score $10.24\\%$ relative compared to the best of the six baseline schemes considered in this work."}
{"_id": "f4236a996e1dc5c67a0a23ba19e62730fdc14dab", "title": "A fast and efficient python library for interfacing with the Biological Magnetic Resonance Data Bank", "text": "The Biological Magnetic Resonance Data Bank (BMRB) is a public repository of Nuclear Magnetic Resonance (NMR) spectroscopic data of biological macromolecules. It is an important resource for many researchers using NMR to study structural, biophysical, and biochemical properties of biological macromolecules. It is primarily maintained and accessed in a flat file ASCII format known as NMR-STAR. While the format is human readable, the size of most BMRB entries makes computer readability and explicit representation a practical requirement for almost any rigorous systematic analysis. To aid in the use of this public resource, we have developed a package called nmrstarlib in the popular open-source programming language Python. The nmrstarlib\u2019s implementation is very efficient, both in design and execution. The library has facilities for reading and writing both NMR-STAR version 2.1 and 3.1 formatted files, parsing them into usable Python dictionary- and list-based data structures, making access and manipulation of the experimental data very natural within Python programs (i.e. \u201csaveframe\u201d and \u201cloop\u201d records represented as individual Python dictionary data structures). Another major advantage of this design is that data stored in original NMR-STAR can be easily converted into its equivalent JavaScript Object Notation (JSON) format, a lightweight data interchange format, facilitating data access and manipulation using Python and any other programming language that implements a JSON parser/generator (i.e., all popular programming languages). We have also developed tools to visualize assigned chemical shift values and to convert between NMR-STAR and JSONized NMR-STAR formatted files. Full API Reference Documentation, User Guide and Tutorial with code examples are also available. We have tested this new library on all current BMRB entries: 100% of all entries are parsed without any errors for both NMR-STAR version 2.1 and version 3.1 formatted files. We also compared our software to three currently available Python libraries for parsing NMR-STAR formatted files: PyStarLib, NMRPyStar, and PyNMRSTAR. The nmrstarlib package\u00a0is a simple, fast, and efficient library for accessing data from the BMRB. The library provides an intuitive dictionary-based interface with which Python programs can read, edit, and write NMR-STAR formatted files and their equivalent JSONized NMR-STAR files. The nmrstarlib package\u00a0can be used as a library for accessing and manipulating data stored in NMR-STAR files and as a command-line tool to convert from NMR-STAR file format into its equivalent JSON file format and vice versa, and to visualize chemical shift values.\u00a0Furthermore, the nmrstarlib implementation provides a guide for effectively JSONizing other older scientific formats, improving the FAIRness of data in these formats."}
{"_id": "f0a764b074f889c200033e66d01943fd41778d09", "title": "Bibliometric information retrieval system (BIRS): A web search interface utilizing bibliometric research results", "text": "The aim of this article is to test whether the results obtained from a specific bibliographic research can be applied to a real search environment and enhance the level of utility of an information retrieval session for all levels of end users. In this respect, a Web-based Bibliometric Information Retrieval System (BIRS) has been designed and created, with facilities to assist the end users to get better understanding of their search domain, formulate and expand their search queries, and visualize the bibliographic research results. There are three specific features in the system design of the BIRS: the information visualization feature of the BIRS (cocitation maps) to guide the end users to identify the important research groups and capture the detailed information about the intellectual structure of the search domain; the multilevel browsing feature to allow the end users to go to different levels of interesting topics; and the common user interface feature to enable the end users to search all kinds of databases regardless of different searching systems, different working platforms, different database producer and supplier, such as different Web search engines, different library OPACs, or different on-line databases. A preliminary user evaluation study of BIRS revealed that users generally found it easy to form and expand their queries, and that BIRS helped them acquire useful background information about the search domain. They also pointed out aspects of information visualization, multilevel browsing, and common user interface as novel characteristics exhibited by BIRS."}
{"_id": "8d4f630a3de87f28f6dd676ce3dc9d316c1ced33", "title": "Elementary Siphons of Petri Nets and Deadlock Control in FMS", "text": "For responsiveness, in the Petri nets theory framework deadlock prevention policies based elementary siphons control are often utilized to deal with deadlocks caused by the sharing of resources in flexible manufacturing system (FMS) which is developing the theory of efficient strict minimal siphons of an S3PR. Analyzer of Petri net models and their P-invariant analysis, and deadlock control are presented as tools for modelling, efficiency structure analysis, control, and investigation of the FMSs when different policies can be implemented for the deadlock prevention. We are to show an effective deadlock prevention policy of a special class of Petri nets namely elementary siphons. As well, both structural analysis and reachability graph analysis and simulation are used for analysis and control of Petri nets. This work is successfully applied Petri nets to deadlock analysis using the concept of elementary siphons, for design of supervisors of some supervisory control problems of FMS and simulation of Petri net tool with MATLAB."}
{"_id": "079fbebe9aee481f08a387c78d5429bacb6c327a", "title": "Omega-3 Fatty Acids and Inflammatory Processes", "text": "Long chain fatty acids influence inflammation through a variety of mechanisms; many of these are mediated by, or at least associated with, changes in fatty acid composition of cell membranes. Changes in these compositions can modify membrane fluidity, cell signaling leading to altered gene expression, and the pattern of lipid mediator production. Cell involved in the inflammatory response are typically rich in the n-6 fatty acid arachidonic acid, but the contents of arachidonic acid and of the n-3 fatty acids eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) can be altered through oral administration of EPA and DHA. Eicosanoids produced from arachidonic acid have roles in inflammation. EPA also gives rise to eicosanoids and these often have differing properties from those of arachidonic acid-derived eicosanoids. EPA and DHA give rise to newly discovered resolvins which are anti-inflammatory and inflammation resolving. Increased membrane content of EPA and DHA (and decreased arachidonic acid content) results in a changed pattern of production of eicosanoids and resolvins. Changing the fatty acid composition of cells involved in the inflammatory response also affects production of peptide mediators of inflammation (adhesion molecules, cytokines etc.). Thus, the fatty acid composition of cells involved in the inflammatory response influences their function; the contents of arachidonic acid, EPA and DHA appear to be especially important. The anti-inflammatory effects of marine n-3 PUFAs suggest that they may be useful as therapeutic agents in disorders with an inflammatory component."}
{"_id": "464d8dd114fd0380dab1e3e3674e30a1a09a8159", "title": "Impact of mosquito gene drive on malaria elimination in a computational model with explicit spatial and temporal dynamics.", "text": "The renewed effort to eliminate malaria and permanently remove its tremendous burden highlights questions of what combination of tools would be sufficient in various settings and what new tools need to be developed. Gene drive mosquitoes constitute a promising set of tools, with multiple different possible approaches including population replacement with introduced genes limiting malaria transmission, driving-Y chromosomes to collapse a mosquito population, and gene drive disrupting a fertility gene and thereby achieving population suppression or collapse. Each of these approaches has had recent success and advances under laboratory conditions, raising the urgency for understanding how each could be deployed in the real world and the potential impacts of each. New analyses are needed as existing models of gene drive primarily focus on nonseasonal or nonspatial dynamics. We use a mechanistic, spatially explicit, stochastic, individual-based mathematical model to simulate each gene drive approach in a variety of sub-Saharan African settings. Each approach exhibits a broad region of gene construct parameter space with successful elimination of malaria transmission due to the targeted vector species. The introduction of realistic seasonality in vector population dynamics facilitates gene drive success compared with nonseasonal analyses. Spatial simulations illustrate constraints on release timing, frequency, and spatial density in the most challenging settings for construct success. Within its parameter space for success, each gene drive approach provides a tool for malaria elimination unlike anything presently available. Provided potential barriers to success are surmounted, each achieves high efficacy at reducing transmission potential and lower delivery requirements in logistically challenged settings."}
{"_id": "0f3dc90f97b7e6bcc67b7e47a5474cabb07b3a29", "title": "Accurate design of corrugated waveguide low-pass filters using exclusively closed-form expressions", "text": "A simple, quasi-analytical, and accurate method to design classical corrugated rectangular waveguide low-pass filters is presented in this paper. Unlike in other published techniques, the dimensions of the final filter are directly obtained in a quasi-analytical manner following exclusively closed-form expressions and, in particular, using neither electromagnetic simulation nor optimization. Thus, our method is amenable to be implemented using an off-the-shelf mathematical software tool such as Matlab. The proposed method is based on the synthesis of the heights and lengths of the final structure taking into account the shunt capacitance effect (produced by the excitation of higher-order modes at the height steps) over the magnitude and phase of the local reflection coefficients. The novel method has been validated with the design of a classical Chebyshev corrugated low-pass filter of order 31, which shows a remarkable accordance between the ideal and simulated frequency responses."}
{"_id": "4fd9bbc352e30a4f16258f09222f5ebbcd4af1f8", "title": "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam", "text": "Uncertainty computation in deep learning is essential to design robust and reliable systems. Variational inference (VI) is a promising approach for such computation, but requires more effort to implement and execute compared to maximumlikelihood methods. In this paper, we propose new natural-gradient algorithms to reduce such efforts for Gaussian mean-field VI. Our algorithms can be implemented within the Adam optimizer by perturbing the network weights during gradient evaluations, and uncertainty estimates can be cheaply obtained by using the vector that adapts the learning rate. This requires lower memory, computation, and implementation effort than existing VI methods, while obtaining uncertainty estimates of comparable quality. Our empirical results confirm this and further suggest that the weight-perturbation in our algorithm could be useful for exploration in reinforcement learning and stochastic optimization."}
{"_id": "0c41d52ae135fe7548d24b9673e5f352e2f86814", "title": "Mid-term electricity price forecasting using SVM", "text": "In the modern electricity market, it is very significant to have a precise electricity price forecasting in medium term time horizon. There are few studies done concentrating on the medium term forecasting of the electricity price. Medium term time horizon of electricity price forecasting is very useful and of great importance. It has many applications such as future maintenance scheduling of the power plants, risk management, plan future contracts, purchasing raw materials and determine the market pricing. To forecast the electricity price, some factors are very important, such as choosing the most useful price features that influence the market price, and choosing the proper prediction model that is able to predict the price variations behavior using the historical data. The proposed SVM method and the other employed methods are evaluated using the data from the New England ISO which are published at their official website."}
{"_id": "64f856ccccd53503a2755cc8c42c2d89e1f22afd", "title": "Autonomous Intersection Management For Semi-Autonomous Vehicles", "text": "Recent advances in autonomous vehicle technology will open the door to highly efficient transportation systems in the future, as demonstrated by Autonomous Intersection Management, an intersection control protocol designed for fully autonomous vehicles. We, however, anticipate there will be a long transition period during which most vehicles have some but not all capabilities of fully autonomous vehicles. This chapter introduces a new protocol called Semi-Autonomous Intersection Management, which allows vehicles with partiallyautonomous features such as adaptive cruise control to enter an intersection from different directions simultaneously. Our experiments show that this protocol can greatly decrease traffic delay when most vehicles are semi-autonomous. Our incremental deployment study reveals that traffic delay keeps decreasing as more and more vehicles employ features of autonomy."}
{"_id": "0ff92dd1d6e0878347be94dae6c06365247aa46b", "title": "SQL injection detection and prevention techniques", "text": "As the use of internet is increasing rapidly, the attacks on web applications are also increasing as well. Nowadays SQL injection attack is a major issue of web applications. It allows unrestricted access to the database. The successful execution of SQL injection leads to a loss of integrity and confidentiality. In this paper, a review of different types of SQL injection attacks, their detection and prevention techniques are presented. This paper will help the researchers decide the technique of interest for further studies."}
{"_id": "1c0fb6b1bbfde0f9bab6268f5609cce2bd3bc5bd", "title": "Autonomous Driving in Urban Environments: Boss and the Urban Challenge", "text": "Chris Urmson, Joshua Anhalt, Drew Bagnell, Christopher Baker, Robert Bittner, M. N. Clark, John Dolan, Dave Duggins, Tugrul Galatali, Chris Geyer, Michele Gittleman, Sam Harbaugh, Martial Hebert, Thomas M. Howard, Sascha Kolski, Alonzo Kelly, Maxim Likhachev, Matt McNaughton, Nick Miller, Kevin Peterson, Brian Pilnick, Raj Rajkumar, Paul Rybski, Bryan Salesky, Young-Woo Seo, Sanjiv Singh, Jarrod Snider, Anthony Stentz, William \u201cRed\u201d Whittaker, Ziv Wolkowicki, and Jason Ziglar Carnegie Mellon University Pittsburgh, Pennsylvania 15213 e-mail: curmson@ri.cmu.edu"}
{"_id": "68c0f511121d61ae8089be2f162246f5cac7f40f", "title": "Maneuver-Based Trajectory Planning for Highly Autonomous Vehicles on Real Road With Traffic and Driver Interaction", "text": "This paper presents the design and first test on a simulator of a vehicle trajectory-planning algorithm that adapts to traffic on a lane-structured infrastructure such as highways. The proposed algorithm is designed to run on a fail-safe embedded environment with low computational power, such as an engine control unit, to be implementable in commercial vehicles of the near future. The target platform has a clock frequency of less than 150 MHz, 150 kB RAM of memory, and a 3-MB program memory. The trajectory planning is performed by a two-step algorithm. The first step defines the feasible maneuvers with respect to the environment, aiming at minimizing the risk of a collision. The output of this step is a target group of maneuvers in the longitudinal direction (accelerating or decelerating), in the lateral direction (changing lanes), and in the combination of both directions. The second step is a more detailed evaluation of several possible trajectories within these maneuvers. The trajectories are optimized to additional performance indicators such as travel time, traffic rules, consumption, and comfort. The output of this module is a trajectory in the vehicle frame that represents the recommended vehicle state (position, heading, speed, and acceleration) for the following seconds."}
{"_id": "b348042a91beb4fa0c60fd94f27cf0366d5f9630", "title": "Model-Based Probabilistic Collision Detection in Autonomous Driving", "text": "The safety of the planned paths of autonomous cars with respect to the movement of other traffic participants is considered. Therefore, the stochastic occupancy of the road by other vehicles is predicted. The prediction considers uncertainties originating from the measurements and the possible behaviors of other traffic participants. In addition, the interaction of traffic participants, as well as the limitation of driving maneuvers due to the road geometry, is considered. The result of the presented approach is the probability of a crash for a specific trajectory of the autonomous car. The presented approach is efficient as most of the intensive computations are performed offline, which results in a lean online algorithm for real-time application."}
{"_id": "f69c83aab19183795af7612c3f224b5e116f242a", "title": "Junior: The Stanford entry in the Urban Challenge", "text": ""}
{"_id": "30ad8b14cea05d2147c3b49f9c5a6b8de01f5c7a", "title": "Digital color halftoning with generalized error diffusion and multichannel green-noise masks", "text": "In this paper, we introduce two novel techniques for digital color halftoning with green-noise--stochastic dither patterns generated by homogeneously distributing minority pixel clusters. The first technique employs error diffusion with output-dependent feedback where, unlike monochrome image halftoning, an interference term is added such that the overlapping of pixels of different colors can be regulated for increased color control. The second technique uses a green-noise mask, a dither array designed to create green-noise halftone patterns, which has been constructed to also regulate the overlapping of different colored pixels. As is the case with monochrome image halftoning, both techniques are tunable, allowing for large clusters in printers with high dot-gain characteristics, and small clusters in printers with low dot-gain characteristics."}
{"_id": "9badd26e11967f16adf3eaa77cb9e86b90d24741", "title": "Marrying Uncertainty and Time in Knowledge Graphs", "text": "The management of uncertainty is crucial when harvesting structured content from unstructured and noisy sources. Knowledge Graphs (KGs) are a prominent example. KGs maintain both numerical and non-numerical facts, with the support of an underlying schema. These facts are usually accompanied by a confidence score that witnesses how likely is for them to hold. Despite their popularity, most of existing KGs focus on static data thus impeding the availability of timewise knowledge. What is missing is a comprehensive solution for the management of uncertain and temporal data in KGs. The goal of this paper is to fill this gap. We rely on two main ingredients. The first is a numerical extension of Markov Logic Networks (MLNs) that provide the necessary underpinning to formalize the syntax and semantics of uncertain temporal KGs. The second is a set of Datalog constraints with inequalities that extend the underlying schema of the KGs and help to detect inconsistencies. From a theoretical point of view, we discuss the complexity of two important classes of queries for uncertain temporal KGs: maximum a-posteriori and conditional probability inference. Due to the hardness of these problems and the fact that MLN solvers do not scale well, we also explore the usage of Probabilistic Soft Logics (PSL) as a practical tool to support our reasoning tasks. We report on an experimental evaluation comparing the MLN and PSL approaches."}
{"_id": "3f08b47c049a84fb440caaf6ee3a44c0af4e3fef", "title": "Business Models : A Discovery Driven Approach", "text": "The business model concept offers strategists a fresh way to consider their options in uncertain, fast-moving and unpredictable environments. In contrast to conventional assumptions, recognizing that more new business models are both feasible and actionable than ever before is creating unprecedented opportunities for today\u2019s organizations. However, unlike conventional strategies that emphasize analysis, strategies that aim to discover and exploit new models must engage in significant experimentation and learning e a \u2018discovery driven,\u2019 rather than analytical approach."}
{"_id": "a7ff420f069fdf42d8a487a95618a0ccf0e46527", "title": "A Wideband Differential-Fed Slot Antenna Using Integrated Compact Balun With Matching Capability", "text": "This communication proposes a wideband differential-fed slot antenna that works in its dominant mode ( \u03bb/2 mode). The antenna has a near omnidirectional radiation pattern with vertical polarization, which is designed for near-ground wireless sensor node applications. A compact integrated T-slot balun and a loop to T-slot feeding structure are introduced to realize the differential feeding; the bandwidth is expanded to 30.7% (2.2 GHz-3.0 GHz) with a 3.8 dB peak gain. A detailed matching method without using lumped elements is proposed, which can be applied to various types of RFIC chips. A prototype is built and measured. Its differential impedance, antenna patterns, and gain are consistent with the simulation results."}
{"_id": "32b694d14d716c524723a38a9cbc115db67aa2eb", "title": "Cross-Point Resistive RAM Based on Field-Assisted Superlinear Threshold Selector", "text": "We report a 3-D-stackable 1S1R passive cross-point resistive random access memory (RRAM). The sneak (leakage) current challenge in the cross-point RRAM integration has been overcome utilizing a field-assisted superlinear threshold selector. The selector offers high selectivity of >107, sharp switching slope of <;5 mV/decade, ability to tune the threshold voltage, stable operation at 125\u00b0C, and endurance of >1011. Furthermore, we demonstrate 1S1R integration in which the selector-subthreshold current is <;10 pA while offering >102 memory ON/OFF ratio and >106 selectivity during cycling. Combined with self-current-controlled RRAM, the 1S1R enables high-density and high-performance memory applications."}
{"_id": "2deaf4147c037875662e7b2f10145beffb097538", "title": "Augmented reality-based exergames for rehabilitation", "text": "Rehabilitation for stroke afflicted patients, through exercises tailored for individual needs, aims at relearning basic motor skills, especially in the extremities. Rehabilitation through Augmented Reality (AR) based games engage and motivate patients to perform exercises which, otherwise, maybe boring and monotonic. Also, mirror therapy allows users to observe one's own movements in the game providing them with good visual feedback. This paper presents an augmented reality based system for rehabilitation by playing four interactive, cognitive and fun Exergames (exercise and gaming).\n The system uses low-cost RGB-D cameras such as Microsoft Kinect V2 to capture and generate 3D model of the person by extracting him/her from the entire captured data and immersing it in different interactive virtual environments. Animation based limb movement enhancement along with cognitive aspects incorporated in the game can help in positive reinforcement, progressive challenges and motion improvement. Recording module of the toolkit allows future reference and facilitates feedback from the physician. 10 able-bodied users, 2 psychological experts and 2 Physical Medicine and Rehabilitation physicians evaluated the user experience and usability aspects of the exergames. Results obtained shows the games to be fun and realistic, and at the same time engaging and motivating for performing exercises."}
{"_id": "5f685fa84901898b5259713d1881c21d4995e7f3", "title": "Commonsense Reasoning in and Over Natural Language", "text": "ConceptNet is a very large semantic network of commonsense knowledge suitable for making various kinds of practical inferences over text. ConceptNet captures a wide range of commonsense concepts and relations like those in Cyc, while its simple semantic network structure lends it an ease-of-use comparable to WordNet. To meet the dual challenge of having to encode complex higher-order concepts, and maintaining ease-of-use, we introduce a novel use of semi-structured natural language fragments as the knowledge representation of commonsense concepts. In this paper, we present a methodology for reasoning flexibly about these semi-structured natural language fragments. We also examine the tradeoffs associated with representing commonsense knowledge in formal logic versus in natural language. We conclude that the flexibility of natural language makes it a highly suitable representation for achieving practical inferences over text, such as context finding, inference chaining, and conceptual analogy. 1 What is ConceptNet? ConceptNet (www.conceptnet.org) is the largest freely available, machine-useable commonsense resource. Structured as a network of semi-structured natural language fragments, ConceptNet presently consists of over 250,000 elements of commonsense knowledge. We were inspired dually by the range of commonsense concepts and relations in Cyc (Lenat, 1995), and by the ease-of-use of WordNet (Fellbaum, 1998), and hoped to combine the best of both worlds. As a result, we adopted the semantic network representation of WordNet, but extended the representation in several key ways. First, we extended WordNet\u2019s lexical notion of nodes to a conceptual notion of nodes, but we kept the nodes in natural language, because one of the primary strengths of WordNet in the textual domain is that its knowledge representation is itself textual. ConceptNet\u2019s nodes are thus natural language fragments which are semi-structured according to an ontology of allowable syntactic patterns, and accommodate both first-order concepts given as noun phrases (e.g. \u201cpotato chips\u201d), and second-order concepts given as verb phrases (e.g. \u201cbuy potato chips\u201d). Second, we extended WordNet\u2019s small ontology of semantic relations, which are primarily taxonomic in nature, to include a richer set of relations appropriate to concept-level nodes. At present there are 19 semantic relations used in ConceptNet, representing categories of, inter alia, temporal, spatial, causal, and functional knowledge. By combining higher order nodes with this relational ontology, it is possible to represent richer kinds of knowledge in ConceptNet beyond what can be represented in WordNet (Fig 1.). For example, we can represent a layman\u2019s common sense observation that \u201cyou may be hurt if you get into an accident\u201d in ConceptNet as EffectOf(\u201cget into accident\u201d, \u201cbe hurt\u201d). Note that because the knowledge representation is semi-structured natural language, there are often various ways to represent the same knowledge. This is a source of ambiguity, but as we will argue in this paper, maintaining some ambiguity lends us greater flexibility for reasoning."}
{"_id": "09c762a06155821218e5eb00ce69f4f7a179355e", "title": "Who falls for phish?: a demographic analysis of phishing susceptibility and effectiveness of interventions", "text": "In this paper we present the results of a roleplay survey instrument administered to 1001 online survey respondents to study both the relationship between demographics and phishing susceptibility and the effectiveness of several anti-phishing educational materials. Our results suggest that women are more susceptible than men to phishing and participants between the ages of 18 and 25 are more susceptible to phishing than other age groups. We explain these demographic factors through a mediation analysis. Educational materials reduced users' tendency to enter information into phishing webpages by 40% percent; however, some of the educational materials we tested also slightly decreased participants' tendency to click on legitimate links."}
{"_id": "682b1e56a38d9d7580aa264edb12cf4ed0932485", "title": "An Ensemble-of-Classifiers Based Approach for Early Diagnosis of Alzheimer's Disease: Classification Using Structural Features of Brain Images", "text": "Structural brain imaging is playing a vital role in identification of changes that occur in brain associated with Alzheimer's disease. This paper proposes an automated image processing based approach for the identification of AD from MRI of the brain. The proposed approach is novel in a sense that it has higher specificity/accuracy values despite the use of smaller feature set as compared to existing approaches. Moreover, the proposed approach is capable of identifying AD patients in early stages. The dataset selected consists of 85 age and gender matched individuals from OASIS database. The features selected are volume of GM, WM, and CSF and size of hippocampus. Three different classification models (SVM, MLP, and J48) are used for identification of patients and controls. In addition, an ensemble of classifiers, based on majority voting, is adopted to overcome the error caused by an independent base classifier. Ten-fold cross validation strategy is applied for the evaluation of our scheme. Moreover, to evaluate the performance of proposed approach, individual features and combination of features are fed to individual classifiers and ensemble based classifier. Using size of left hippocampus as feature, the accuracy achieved with ensemble of classifiers is 93.75%, with 100% specificity and 87.5% sensitivity."}
{"_id": "a20f5b5b6355df7214a65e9981afcd49f2f4803b", "title": "Family issues in child anxiety: attachment, family functioning, parental rearing and beliefs.", "text": "Family studies have found a large overlap between anxiety disorders in family members. In addition to genetic heritability, a range of family factors may also be involved in the intergenerational transmission of anxiety. Evidence for a relationship between family factors and childhood as well as parental anxiety is reviewed. Four groups of family variables are considered: (I) attachment; (II), aspects of family functioning, such as marital conflict, co-parenting, functioning of the family as a whole, and sibling relationships; (III) parental rearing strategies; and (IV) beliefs that parents hold about their child. The reviewed literature provides evidence for an association between each of these family factors and child anxiety. However, there is little evidence as yet that identified family factors are specific to child anxiety, rather than to child psychopathology in general. Moreover, evidence for a relationship between child anxiety and family factors is predominantly cross-sectional. Therefore, whether the identified family factors cause childhood anxiety still needs to be investigated. Further research that investigates mechanisms mediating the relationship between family factors and child anxiety is also called for. Finally, parental beliefs are identified as important predictors of parental behaviour that have largely not been investigated in relation to child anxiety disorders."}
{"_id": "1e4d081c6fa2103ccd0b9d977d98dffaff3a6f3c", "title": "Computer Architecture with Associative Processor Replacing Last-Level Cache and SIMD Accelerator", "text": "This study presents a computer architecture, where a last-level cache and a SIMD accelerator are replaced by an associative processor. Associative processor combines data storage and data processing, and functions as a massively parallel SIMD processor and a memory at the same time. An analytic performance model of this computer architecture is introduced. Comparative analysis supported by cycle-accurate simulation and emulation shows that this architecture may outperform a conventional computer architecture comprising a SIMD coprocessor and a shared last-level cache while consuming less power."}
{"_id": "d59f6576398c977906d12e570ce5812435bec4d8", "title": "Improved image-based classification of Alzheimer \u2019 s disease using multimodality brain imaging data", "text": "Purpose Amyloid-PET is an accurate marker of Alzheimer's disease. However, this technique is invasive and not widely available. Here we investigated whether other neuroimaging techniques can yield similar diagnostic accuracy as PET imaging. This proof of concept study presents two approaches of analyzing multi-modality combinations of metabolic, functional and anatomical brain imaging data using kernel-based pattern recognition techniques. Methods Input images were taken from a systematically collected database of previous studies. Data included [ 18 F]FDG-PET, rs-fMRI and structural MR data of 15 controls and 25 AD patients. [ 11 C]PIB was used as reference test for classification of AD using imaging data. From rs-fMRI data, both eigenvector centrality (EC) images and default mode network (DMN) regression maps were calculated. Structural MR data was used to create gray matter (GM) maps. Intermediate results were combined and analyzed with two types of existing pattern classification software; 1) using spatial concatenation of voxel data and, 2) a regional multi-kernel approach."}
{"_id": "9331df99ae353ddac8054eb0b6b6fd93429db4de", "title": "What to expect from teleconferencing.", "text": "Like other marvels of the electronic age, teleconferencing has been both oversold and underused. Though it has many potential uses, what managers know, or think they know, about it is generally based on misconceptions. Rather than relying only on vendors of teleconferencing, potential purchasers should first decide what their communication needs are, then choose the system that suits them best. These authors explain the new teleconferencing options and give guidelines for adapting them to a particular company."}
{"_id": "4b9cb3ed58f5c6752605ff87f609496a1d41cfb3", "title": "Exploring motivations for contributing to open source initiatives: The roles of contribution context and personal values", "text": "We explore contextual and dispositional correlates of the motivation to contribute to open source initiatives. We examine how the context of the open source project, and the personal values of contributors, are related to the types of motivations for contributing. A web-based survey was administered to 300 contributors in two prominent open source contexts: software and content. As hypothesized, software contributors placed a greater emphasis on reputation-gaining and self-development motivations, compared with content contributors, who placed a greater emphasis on altruistic motives. Furthermore, the hypothesized relationships were found between contributors\u2019 personal values and their motivations for contributing. 2007 Elsevier Ltd. All rights reserved."}
{"_id": "fda1e13a2eaeaa0b4434833d3ee0eb8e79b0ba94", "title": "On the cognitive process of human problem solving", "text": "One of the fundamental human cognitive processes is problem solving. As a higher-layer cognitive process, problem solving interacts with many other cognitive processes such as abstraction, searching, learning, decision making, inference, analysis, and synthesis on the basis of internal knowledge representation by the object\u2013attribute-relation (OAR) model. Problem solving is a cognitive process of the brain that searches a solution for a given problem or finds a path to reach a given goal. When a problem object is identified, problem solving can be perceived as a search process in the memory space for finding a relationship between a set of solution goals and a set of alternative paths. This paper presents both a cognitive model and a mathematical model of the problem solving process. The cognitive structures of the brain and the mechanisms of internal knowledge representation behind the cognitive process of problem solving are explained. The cognitive process is formally described using real-time process algebra (RTPA) and concept algebra. This work is a part of the cognitive computing project that designed to reveal and simulate the fundamental mechanisms and processes of the brain according to Wang\u2019s layered reference model of the brain (LRMB), which is expected to lead to the development of future generation methodologies for cognitive computing and novel cognitive computers that are capable of think, learn, and perceive. 2008 Published by Elsevier B.V."}
{"_id": "127f464c2dc8d85b7612a6924495f79e5458710f", "title": "Move Evaluation in Go Using Deep Convolutional Neural Networks", "text": "The game of Go is more challenging than other board games, due to the difficulty of constructing a position or move evaluation function. In this paper we investigate whether deep convolutional networks can be used to directly represent and learn this knowledge. We train a large 12-layer convolutional neural network by supervised learning from a database of human professional games. The network correctly predicts the expert move in 55% of positions, equalling the accuracy of a 6 dan human player. When the trained convolutional network was used directly to play games of Go, without any search, it beat the traditional-search program GnuGo in 97% of games, and matched the performance of a state-of-the-art Monte-Carlo tree search that simulates two million positions per move."}
{"_id": "a97d78cb8682971289cc824a37232065112a9558", "title": "Summarizability in OLAP and Statistical Data Bases", "text": "Summarizability of OLAP and Statistical Databases is an a extremely important property because violating this condition can lead to erroneous conclusions and decisions. In this paper we explore the conditions for summarizability. We introduce a framework for specifying precisely the context in which statistical objects are defined. We use a three step process to define normalized statistical objects. Using this framework, we identify three necessary conditions for summarizability. We provided specific tests for each of the conditions that can be verified either from semantic knowledge, or by checking the statistical database itself. We also provide the reasoning for our belief that these three summarizability conditions are sufficient as well."}
{"_id": "61c5b3f7e5fbabe40c1b1f4cdcd382d5e76dff02", "title": "Andromeda: Performance, Isolation, and Velocity at Scale in Cloud Network Virtualization", "text": "This paper presents our design and experience with Andromeda, Google Cloud Platform\u2019s network virtualization stack. Our production deployment poses several challenging requirements, including performance isolation among customer virtual networks, scalability, rapid provisioning of large numbers of virtual hosts, bandwidth and latency largely indistinguishable from the underlying hardware, and high feature velocity combined with high availability. Andromeda is designed around a flexible hierarchy of flow processing paths. Flows are mapped to a programming path dynamically based on feature and performance requirements. We introduce the Hoverboard programming model, which uses gateways for the long tail of low bandwidth flows, and enables the control plane to program network connectivity for tens of thousands of VMs in seconds. The on-host dataplane is based around a highperformance OS bypass software packet processing path. CPU-intensive per packet operations with higher latency targets are executed on coprocessor threads. This architecture allows Andromeda to decouple feature growth from fast path performance, as many features can be implemented solely on the coprocessor path. We demonstrate that the Andromeda datapath achieves performance that is competitive with hardware while maintaining the flexibility and velocity of a software-based architecture."}
{"_id": "e18e148c747697f79045842c10bd54f3cf15b038", "title": "Microwave antenna applications of metasurfaces", "text": "This paper reviews the past and current research progress in microwave antenna applications using 2D planar metamaterials or metasurfaces. Firstly, we revisit the definitions of metamaterial and metasurface, and the applications of the ultra-thin metasurfaces for low-profile printed antennas in microwave frequency regime, for the purposes of size reduction and performance enhancement. Then we summarize the different types of metasurfaces and their applications. Finally, we provide the future lookout into microwave antenna and EM related applications using metasurfaces."}
{"_id": "0bd4b36c79871b445aba444f38a2d23c3a239adf", "title": "A Modified Node2vec Method for Disappearing Link Prediction", "text": "Disappearing link prediction aims to predict the possibility of the links disappearing in the future. This paper describes the disappearing link prediction problem in scientific collaboration networks based on network embedding. We propose a novel network embedding method called TDL2vec, which is an extension of node2vec algorithm. TDL2vec generates the link embdeddings considering with the time factor. In this paper, the disappearing link prediction problem is considered as a binary classification problem, and support vector machine (SVM) is used as the classifier after link embedding. To evaluate the performance in disappearing link prediction, this paper tests the proposed method and several baseline methods on a real-world network. The experimental results show that TDL2vec achieves better performance than other baselines."}
{"_id": "f6284d750cf12669ca3bc12a1b485545af776239", "title": "EdgeConnect: Generative Image Inpainting with Adversarial Edge Learning", "text": "Over the last few years, deep learning techniques have yielded significant improvements in image inpainting. However, many of these techniques fail to reconstruct reasonable structures as they are commonly over-smoothed and/or blurry. This paper develops a new approach for image inpainting that does a better job of reproducing filled regions exhibiting fine details. We propose a two-stage adversarial model EdgeConnect that comprises of an edge generator followed by an image completion network. The edge generator hallucinates edges of the missing region (both regular and irregular) of the image, and the image completion network fills in the missing regions using hallucinated edges as a priori. We evaluate our model end-to-end over the publicly available datasets CelebA, Places2, and Paris StreetView, and show that it outperforms current state-ofthe-art techniques quantitatively and qualitatively."}
{"_id": "5fb50e750f700f920f06b3982bd16ea920d11f68", "title": "Learning Temporal Transformations From Time-Lapse Videos", "text": "Based on life-long observations of physical, chemical, and biologic phenomena in the natural world, humans can often easily picture in their minds what an object will look like in the future. But, what about computers? In this paper, we learn computational models of object transformations from time-lapse videos. In particular, we explore the use of generative models to create depictions of objects at future times. These models explore several different prediction tasks: generating a future state given a single depiction of an object, generating a future state given two depictions of an object at different times, and generating future states recursively in a recurrent framework. We provide both qualitative and quantitative evaluations of the generated results, and also conduct a human evaluation to compare variations of our models."}
{"_id": "c0e4b6a993a35ded0d17a5e751bd135b795244ae", "title": "Large-Scale Training of Shadow Detectors with Noisily-Annotated Shadow Examples", "text": "This paper introduces training of shadow detectors under the large-scale dataset paradigm. This was previously impossible due to the high cost of precise shadow annotation. Instead, we advocate the use of quickly but imperfectly labeled images. Our novel label recovery method automatically corrects a portion of the erroneous annotations such that the trained classifiers perform at state-of-the-art level. We apply our method to improve the accuracy of the labels of a new dataset that is 20 times larger than existing datasets and contains a large variety of scenes and image types. Naturally, such a large dataset is appropriate for training deep learning methods. Thus, we propose a semantic-aware patch level Convolutional Neural Network architecture that efficiently trains on patch level shadow examples while incorporating image level semantic information. This means that the detected shadow patches are refined based on image semantics. Our proposed pipeline can be a useful baseline for future advances in shadow detection."}
{"_id": "fcabf1c0f4a26431d4df95ddeec2b1dff9b3e928", "title": "Semantic Segmentation using Adversarial Networks", "text": "Adversarial training has been shown to produce state of the art results for generative image modeling. In this paper we propose an adversarial training approach to train semantic segmentation models. We train a convolutional semantic segmentation network along with an adversarial network that discriminates segmentation maps coming either from the ground truth or from the segmentation network. The motivation for our approach is that it can detect and correct higher-order inconsistencies between ground truth segmentation maps and the ones produced by the segmentation net. Our experiments show that our adversarial training approach leads to improved accuracy on the Stanford Background and PASCAL VOC 2012 datasets."}
{"_id": "1d94b9bec8319c3b15e1a1868175506b9a2104fa", "title": "Exploring Time-Sensitive Variational Bayesian Inference LDA for Social Media Data", "text": "There is considerable interest among both researchers and the mass public in understanding the topics of discussion on social media as they occur over time. Scholars have thoroughly analysed samplingbased topic modelling approaches for various text corpora including social media; however, another LDA topic modelling implementation\u2014 Variational Bayesian (VB)\u2014has not been well studied, despite its known e ciency and its adaptability to the volume and dynamics of social media data. In this paper, we examine the performance of the VB-based topic modelling approach for producing coherent topics, and further, we extend the VB approach by proposing a novel time-sensitive Variational Bayesian implementation, denoted as TVB. Our newly proposed TVB approach incorporates time so as to increase the quality of the generated topics. Using a Twitter dataset covering 8 events, our empirical results show that the coherence of the topics in our TVB model is improved by the integration of time. In particular, through a user study, we find that our TVB approach generates less mixed topics than state-of-the-art topic modelling approaches. Moreover, our proposed TVB approach can more accurately estimate topical trends, making it particularly suitable to assist end-users in tracking emerging topics on social media."}
{"_id": "575039bf87823fcccd1d2597baf296367d13b7b0", "title": "The effect of experimental presentation of thin media images on body satisfaction: a meta-analytic review.", "text": "OBJECTIVE\nThe effect of experimental manipulations of the thin beauty ideal, as portrayed in the mass media, on female body image was evaluated using meta-analysis.\n\n\nMETHOD\nData from 25 studies (43 effect sizes) were used to examine the main effect of mass media images of the slender ideal, as well as the moderating effects of pre-existing body image problems, the age of the participants, the number of stimulus presentations, and the type of research design.\n\n\nRESULTS\nBody image was significantly more negative after viewing thin media images than after viewing images of either average size models, plus size models, or inanimate objects. This effect was stronger for between-subjects designs, participants less than 19 years of age, and for participants who are vulnerable to activation of a thinness schema.\n\n\nCONCLUSION\nResults support the sociocultural perspective that mass media promulgate a slender ideal that elicits body dissatisfaction. Implications for prevention and research on social comparison processes are considered."}
{"_id": "2e631bb61e889ac7b1892ffb9bb3ad64b3095b54", "title": "Motivated recruitment of autobiographical memories.", "text": "We hypothesized that people motivated to believe that they possess a given trait search for autobiographical memories that reflect that trait, so as to justify their desired self-view. We led subjects to believe that either extraversion or introversion was desirable, and obtained convergent evidence from open-ended memory-listing tasks as well as from reaction-time tasks measuring the speed with which memories could be generated that this manipulation enhanced the accessibility of memories reflecting the desired trait. If people rely on their memories to construct desired self-concepts, motivated changes in self-concepts should be constrained by the content of available memories. Our final study demonstrates such constraints."}
{"_id": "07fe0ddb6cef7ed8946d416c093452c1d0db0c34", "title": "Evading android runtime analysis via sandbox detection", "text": "The large amounts of malware, and its diversity, have made it necessary for the security community to use automated dynamic analysis systems. These systems often rely on virtualization or emulation, and have recently started to be available to process mobile malware. Conversely, malware authors seek to detect such systems and evade analysis. In this paper, we present techniques for detecting Android runtime analysis systems. Our techniques are classified into four broad classes showing the ability to detect systems based on differences in behavior, performance, hardware and software components, and those resulting from analysis system design choices. We also evaluate our techniques against current publicly accessible systems, all of which are easily identified and can therefore be hindered by a motivated adversary. Our results show some fundamental limitations in the viability of dynamic mobile malware analysis platforms purely based on virtualization."}
{"_id": "04f4679765d2f71576dd77c1b00a2fd92e5c6da4", "title": "Part Detector Discovery in Deep Convolutional Neural Networks", "text": "Current fine-grained classification approaches often rely on a robust localization of object parts to extract localized feature representations suitable for discrimination. However, part localization is a challenging task due to the large variation of appearance and pose. In this paper, we show how pre-trained convolutional neural networks can be used for robust and efficient object part discovery and localization without the necessity to actually train the network on the current dataset. Our approach called \u201cpart detector discovery\u201d (PDD) is based on analyzing the gradient maps of the network outputs and finding activation centers spatially related to annotated semantic parts or bounding boxes. This allows us not just to obtain excellent performance on the CUB2002011 dataset, but in contrast to previous approaches also to perform detection and bird classification jointly without requiring a given bounding box annotation during testing and ground-truth parts during training. The code is available at http://www.inf-cv.uni-jena.de/part_ discovery and https://github.com/cvjena/PartDetectorDisovery."}
{"_id": "12b2dc13a1ef31dd69acc8a4c1c51753868f25ac", "title": "Flexible dual TCP/UDP streaming for H.264 HD video over WLANs", "text": "High Definition video streaming over WLANs faces many challenges because video data requires not only data integrity but also frames have strict playout deadline. Traditional streaming methods that rely solely on either UDP or TCP have difficulties meeting both requirements because UDP incurs packet loss while TCP incurs delay. This paper proposed a new streaming method called Flexible Dual-TCP/UDP Streaming Protocol (FDSP) that utilizes the benefit of both UDP and TCP. The FDSP takes advantage of the hierarchical structure of the H.264/AVC syntax and uses TCP to transmit important syntax elements of H.264/AVC video and UDP to transmit non-important elements. The proposed FDSP is implemented and validated under different wireless network conditions. Both visual quality and delay results are compared against pure-UDP and pure-TCP streaming methods. Our results show that FDSP effectively achieves a balance between delay and visual quality, thus it has advantage over traditional pure-UDP and pure-TCP methods."}
{"_id": "498eae3e72d039c58faf5806e97c2367c22e47df", "title": "Probabilistic models for discovering e-communities", "text": "The increasing amount of communication between individuals in e-formats (e.g. email, Instant messaging and the Web) has motivated computational research in social network analysis (SNA). Previous work in SNA has emphasized the social network (SN) topology measured by communication frequencies while ignoring the semantic information in SNs. In this paper, we propose two generative Bayesian models for semantic community discovery in SNs, combining probabilistic modeling with community detection in SNs. To simulate the generative models, an EnF-Gibbs sampling algorithm is proposed to address the efficiency and performance problems of traditional methods. Experimental studies on Enron email corpus show that our approach successfully detects the communities of individuals and in addition provides semantic topic descriptions of these communities."}
{"_id": "4cc54596ef0d37d66b10d9ba8c5a8e2e755369fe", "title": "Ideal Ratio Mask Estimation Using Deep Neural Networks for Monaural Speech Segregation in Noisy Reverberant Conditions", "text": "Monaural speech segregation is an important problem in robust speech processing and has been formulated as a supervised learning problem. In supervised learning methods, the ideal binary mask (IBM) is usually used as the target because of its simplicity and large speech intelligibility gains. Recently, the ideal ratio mask (IRM) has been found to improve the speech quality over the IBM. However, the IRM was originally defined in anechoic conditions and did not consider the effect of reverberation. In this paper, the IRM is extended to reverberant conditions where the direct sound and early reflections of target speech are regarded as the desired signal. Deep neural networks (DNNs) is employed to estimate the extended IRM in the noisy reverberant conditions. The estimated IRM is then applied to the noisy reverberant mixture for speech segregation. Experimental results show that the estimated IRM provides substantial improvements in speech intelligibility and speech quality over the unprocessed mixture signals under various noisy and reverberant conditions."}
{"_id": "9481c0920dbf184dcce2ecf13e73c3b6edcd48db", "title": "Analyzing the Performance of Mutation Operators to Solve the Travelling Salesman Problem", "text": "The genetic algorithm includes some parameters that should be adjusted, so as to get reliable results. Choosing a representation of the problem addressed, an initial population, a method of selection, a crossover operator, mutation operator, the probabilities of crossover and mutation, and the insertion method creates a variant of genetic algorithms. Our work is part of the answer to this perspective to find a solution for this combinatorial problem. What are the best parameters to select for a genetic algorithm that creates a variety efficient to solve the Traveling Salesman Problem (TSP)? In this paper, we present a comparative analysis of different mutation operators, surrounded by a dilated discussion that justifying the relevance of genetic operators chosen to solving the TSP problem."}
{"_id": "85dd6464b1efc310c701bb70de48eaf520d8524b", "title": "Multi-Column Convolutional Neural Networks with Causality-Attention for Why-Question Answering", "text": "Why-question answering (why-QA) is a task to retrieve answers (or answer passages) to why-questions (e.g., \"why are tsunamis generated?\") from a text archive. Several previously proposed methods for why-QA improved their performance by automatically recognizing causalities that are expressed with such explicit cues as \"because\" in answer passages and using the recognized causalities as a clue for finding proper answers. However, in answer passages, causalities might be implicitly expressed, (i.e., without any explicit cues): \"An earthquake suddenly displaced sea water and a tsunami was generated.\" The previous works did not deal with such implicitly expressed causalities and failed to find proper answers that included the causalities. We improve why-QA based on the following two ideas. First, implicitly expressed causalities in one text might be expressed in other texts with explicit cues. If we can automatically recognize such explicitly expressed causalities from a text archive and use them to complement the implicitly expressed causalities in an answer passage, we can improve why-QA. Second, the causes of similar events tend to be described with a similar set of words (e.g., \"seismic energy\" and \"tectonic plates\" for \"the Great East Japan Earthquake\" and \"the 1906 San Francisco Earthquake\"). As such, even if we cannot find in a text archive any explicitly expressed cause of an event (e.g., \"the Great East Japan Earthquake\") expressed in a question (e.g., \"Why did the Great East Japan earthquake happen?\"), we might be able to identify its implicitly expressed causes with a set of words (e.g., \"tectonic plates\") that appear in the explicitly expressed cause of a similar event (e.g., \"the 1906 San Francisco Earthquake\").\n We implemented these two ideas in our multi-column convolutional neural networks with a novel attention mechanism, which we call causality attention. Through experiments on Japanese why-QA, we confirmed that our proposed method outperformed the state-of-the-art systems."}
{"_id": "c2eac6110861617601b9db1240e9f60754bcf954", "title": "Analysis and design of compact two-way Wilkinson power dividers using coupled lines", "text": "This paper analyzes the two-way Wilkinson power dividers using coupled lines as \u03bb/4 impedance transformer to have more compact layout. The analysis shows that the input port matching is only influenced by the even mode impedance of the coupled lines. Once the even mode impedance is fixed, all other specifications of the power divider are determined by the odd mode impedance of the coupled lines. Design tradeoffs among the output matching, isolation and the odd mode impedance are shown. By using smaller odd mode impedance, layout constraint can be relaxed. To validate the analysis, Wilkinson power dividers using coupled lines with areas of 1.5 cm2 and 1.25 cm2 are designed together with a conventional divider with an area of 3.8 cm2. The experimental results of the three designs are comparable. When coupled lines are used, the layout is more compact with a reduction in size of more than 50% compared to the conventional design."}
{"_id": "ab208d4fe216c95d9ae43163c2337a277269f45e", "title": "Organizational Chart Inference", "text": "Nowadays, to facilitate the communication and cooperation among employees, a new family of online social networks has been adopted in many companies, which are called the \"enterprise social networks\" (ESNs). ESNs can provide employees with various professional services to help them deal with daily work issues. Meanwhile, employees in companies are usually organized into different hierarchies according to the relative ranks of their positions. The company internal management structure can be outlined with the organizational chart visually, which is normally confidential to the public out of the privacy and security concerns. In this paper, we want to study the IOC (Inference of Organizational Chart) problem to identify company internal organizational chart based on the heterogeneous online ESN launched in it. IOC is very challenging to address as, to guarantee smooth operations, the internal organizational charts of companies need to meet certain structural requirements (about its depth and width). To solve the IOC problem, a novel unsupervised method Create (ChArT REcovEr) is proposed in this paper, which consists of 3 steps: (1) social stratification of ESN users into different social classes, (2) supervision link inference from managers to subordinates, and (3) consecutive social classes matching to prune the redundant supervision links. Extensive experiments conducted on real-world online ESN dataset demonstrate that Create can perform very well in addressing the IOC problem."}
{"_id": "306990eae07d178a5791def2103f5981c666d16a", "title": "The curvelet transform for image denoising", "text": "We describe approximate digital implementations of two new mathematical transforms, namely, the ridgelet transform and the curvelet transform. Our implementations offer exact reconstruction, stability against perturbations, ease of implementation, and low computational complexity. A central tool is Fourier-domain computation of an approximate digital Radon transform. We introduce a very simple interpolation in the Fourier space which takes Cartesian samples and yields samples on a rectopolar grid, which is a pseudo-polar sampling set based on a concentric squares geometry. Despite the crudeness of our interpolation, the visual performance is surprisingly good. Our ridgelet transform applies to the Radon transform a special overcomplete wavelet pyramid whose wavelets have compact support in the frequency domain. Our curvelet transform uses our ridgelet transform as a component step, and implements curvelet subbands using a filter bank of a; trous wavelet filters. Our philosophy throughout is that transforms should be overcomplete, rather than critically sampled. We apply these digital transforms to the denoising of some standard images embedded in white noise. In the tests reported here, simple thresholding of the curvelet coefficients is very competitive with \"state of the art\" techniques based on wavelets, including thresholding of decimated or undecimated wavelet transforms and also including tree-based Bayesian posterior mean methods. Moreover, the curvelet reconstructions exhibit higher perceptual quality than wavelet-based reconstructions, offering visually sharper images and, in particular, higher quality recovery of edges and of faint linear and curvilinear features. Existing theory for curvelet and ridgelet transforms suggests that these new approaches can outperform wavelet methods in certain image reconstruction problems. The empirical results reported here are in encouraging agreement."}
{"_id": "0f368620bbc2a1f41b77d10356af72e55b3ef376", "title": "Embodied Construction Grammar", "text": "Construction-based approaches to grammar are driven by the insight that meaning plays a crucial role in answering this question: the meanings of both words and grammatical structures can affect grammaticality, and they should therefore be incorporated into grammatical representations. To wit, the various approaches presented in this volume take this tack in predicting the grammaticality of utterances of all sorts, from the creative to the conventional. But focusing on utterance grammaticality removes utterances from their physical and social contexts of use. The result is an approach to language that resembles traditional anatomy, in which utterances are dissected, categorized, and explained in terms of their structural and distributional properties. Insights from such an approach can be both useful and informative in accounting for what utterances are or potentially could be. They do not, however, explain human language use\u2014that is, the mechanisms and representations that human beings use to learn, produce, and understand utterances. If we are interested in these latter issues, issues more properly situated within cognitive science than analytical linguistics, then we find our investigations of grammar (and language more generally) guided by a different sort of question:"}
{"_id": "16cdf3178d5e674ba5bb7650823781e33ca3d5e3", "title": "Cloud Computing \u2013 A Comprehensive Definition", "text": "Cloud computing is an evolving technology that is consistently generating impact in IT industry and academia. It performs all computational tasks over Internet by using virtualization techniques and remains isolated from intricate vast hardware and software infrastructures. The aim of this paper is to glance through the background and evolutions of cloud computing, its architecture and services to develop cumulative knowledge for future research extension and evaluation."}
{"_id": "9f3f6a33eb412d508da319bb270112075344abd0", "title": "Discovering Diverse and Salient Threads in Document Collections", "text": "We propose a novel probabilistic technique for modeling and extracting salient structure from large document collections. As in clustering and topic modeling, our goal is to provide an organizing perspective into otherwise overwhelming amounts of information. We are particularly interested in revealing and exploiting relationships between documents. To this end, we focus on extracting diverse sets of threads\u2014singlylinked, coherent chains of important documents. To illustrate, we extract research threads from citation graphs and construct timelines from news articles. Our method is highly scalable, running on a corpus of over 30 million words in about four minutes, more than 75 times faster than a dynamic topic model. Finally, the results from our model more closely resemble human news summaries according to several metrics and are also preferred by human judges."}
{"_id": "ade4087abf35522f1945ff1c4ed8d7bc5d460577", "title": "Current review of prepubertal vaginal bleeding.", "text": "PURPOSE OF REVIEW\nPrepubertal vaginal bleeding raises many concerns and evaluation and diagnosis may prove difficult for many providers. We aim to provide a comprehensive review and recent updates for those practitioners who care for these patients.\n\n\nRECENT FINDINGS\nPrompt management in the case of prepubertal vaginal bleeding is indicated, especially to rule out malignancy or abuse. If a child is reluctant to undergo examination, or if the extent of injury or source of bleeding cannot be determined, examination under anesthesia and vaginoscopy is recommended. Use of vaginoscopy allows for clear visualization of the vagina and cervix without distorting hymenal anatomy, as well as diagnosis and removal of a foreign body and evaluation of mucosal damage caused. In the case of sexual abuse, providers specifically trained in pediatrics need to be present, and safety of the patient should always be ensured.\n\n\nSUMMARY\nCareful history taking and targeted examination may lead to diagnosis in the case of prepubertal vaginal bleeding. However, in more difficult cases, practitioners should not hesitate to examine a patient in the operating room using general anesthesia to elicit the cause. Although sexual abuse and malignancy are always on the differential, most causes of bleeding are benign and easily treated."}
{"_id": "7862ec68c19b34984192e60adfa5612c86895d87", "title": "Hypothesizing the Aptness of Social Media and the Information Richness Requirements of Disaster Management", "text": "In this article, the author first analyzes the social presence theory, media richness theory and taskmedia fit to investigate the suitability of various types of Social Media in disaster management. Then, on the basis of this analysis, use of social media is proposed to facilitate the communication tasks involved in the interaction between disaster management agencies and communities during disaster management. Next the author adapt a conceptual framework that integrates three types of communication (involving disaster management agencies and communities). The framework is further used as a springboard to develop a number of hypotheses to predict the aptness of rich and lean types of Social Media against the media richness requirements of disaster management tasks."}
{"_id": "4214fbf7b0f28bc5b1c26ca8ee6bfb84759e436a", "title": "Natural Image Bases to Represent Neuroimaging Data", "text": "Visual inspection of neuroimagery is susceptible to human eye limitations. Computerized methods have been shown to be equally or more e\u21b5ective than human clinicians in diagnosing dementia from neuroimages. Nevertheless, much of the work involves the use of domain expertise to extract hand\u2013crafted features. The key technique in this paper is the use of cross\u2013domain features to represent MRI data. We used a sparse autoencoder to learn a set of bases from natural images and then applied convolution to extract features from the Alzheimer\u2019s Disease Neuroimaging Initiative (ADNI) dataset. Using this new representation, we classify MRI instances into three categories: Alzheimer\u2019s Disease (AD), Mild Cognitive Impairment (MCI) and Healthy Control (HC). Our approach, in spite of being very simple, achieved high classification performance, which is competitive with or better than other approaches."}
{"_id": "407880a5f6d2f530e690e00c93f30cf766bd3b14", "title": "Doing good or doing well ? Image motivation and monetary incentives in behaving prosocially", "text": "Doing Good or Doing Well? Image Motivation and Monetary Incentives in Behaving Prosocially This paper experimentally examines image motivation \u2013 the desire to be liked and wellregarded by others \u2013 as a driver in prosocial behavior (doing good), and asks whether extrinsic monetary incentives (doing well) have a detrimental effect on prosocial behavior due to crowding out of image motivation. By definition, image depends on one\u2019s behavior being visible to other people. Using this unique property we show that image is indeed an important part of the motivation to behave prosocially. Moreover, we show that extrinsic incentives interact with image motivation and are therefore less effective in public than in private. Together, these results imply that image motivation is crowded out by monetary incentives; which in turn means that monetary incentives are more likely to be counterproductive for public prosocial activities than for private ones. JEL Classification: D64, C90, H41"}
{"_id": "325ca887bf16f857f47d1a65d5979ed6890f3bc8", "title": "Accurate Prediction of Phase Transitions in Compressed Sensing via a Connection to Minimax Denoising", "text": "Compressed sensing posits that, within limits, one can undersample a sparse signal and yet reconstruct it accurately. Knowing the precise limits to such undersampling is important both for theory and practice. We present a formula that characterizes the allowed undersampling of generalized sparse objects. The formula applies to approximate message passing (AMP) algorithms for compressed sensing, which are here generalized to employ denoising operators besides the traditional scalar soft thresholding denoiser. This paper gives several examples including scalar denoisers not derived from convex penalization-the firm shrinkage nonlinearity and the minimax nonlinearity-and also nonscalar denoisers-block thresholding, monotone regression, and total variation minimization. Let the variables \u03b5 = k/N and \u03b4 = n/N denote the generalized sparsity and undersampling fractions for sampling the k-generalized-sparse N-vector x0 according to y=Ax0. Here, A is an n\u00d7N measurement matrix whose entries are iid standard Gaussian. The formula states that the phase transition curve \u03b4 = \u03b4(\u03b5) separating successful from unsuccessful reconstruction of x0 by AMP is given by \u03b4 = M(\u03b5|Denoiser) where M(\u03b5|Denoiser) denotes the per-coordinate minimax mean squared error (MSE) of the specified, optimally tuned denoiser in the directly observed problem y = x + z. In short, the phase transition of a noiseless undersampling problem is identical to the minimax MSE in a denoising problem. We prove that this formula follows from state evolution and present numerical results validating it in a wide range of settings. The above formula generates numerous new insights, both in the scalar and in the nonscalar cases."}
{"_id": "1f1fdac34d5b3fd459693e6a60051e3cdb08397d", "title": "ENVIRONMENT AND CRIME IN THE INNER CITY Does Vegetation Reduce Crime?", "text": "Although vegetation has been positively linked to fear of crime and crime in a number of settings, recent findings in urban residential areas have hinted at a possible negative relationship: Residents living in \u201cgreener\u201d surroundings report lower levels of fear, fewer incivilities, and less aggressive and violent behavior. This study used police crime reports to examine the relationship between vegetation and crime in an inner-city neighborhood. Crime rates for 98 apartment buildings with varying levels of nearby vegetation were compared. Results indicate that although residents were randomly assigned to different levels of nearby vegetation, the greener a building\u2019s surroundings were, the fewer crimes reported. Furthermore, this pattern held for both property crimes and violent crimes. The relationship of vegetation to crime held after the number of apartments per building, building height, vacancy rate, and number of occupied units per building were accounted for. The highway from one merchant town to another shall be cleared so that no cover for malefactors should be allowed for a width of two hundred feet on either side; landlords who do not effect this clearance will be answerable for robberies committed in consequence of their default, and in case of murder they will be in the king\u2019s mercy. \u2014Statute of Winchester of 1285, Chapter V, King Edward I 343 AUTHORS\u2019 NOTE: A portion of these findings was presented in invited testimony to the National Urban and Community Forestry Advisory Council (NUCFAC). This ENVIRONMENT AND BEHAVIOR, Vol. 33 No. 3, May 2001 343-367 \u00a9 2001 Sage Publications, Inc. There is a long tradition of addressing crime in problem areas by removing vegetation. As early as 1285, the English King Edward I sought to reduce highway robbery by forcing property owners to clear highway edges of trees and shrubs (Pluncknett, 1960). Today, that tradition continues as park authorities, universities, and municipalities across North America engage in active programs to remove vegetation because it is thought to conceal and facilitate criminal acts (Michael & Hull, 1994; Nasar & Fisher, 1993; Weisel, Gouvis, & Harrell, 1994). One of the settings in which crime is of greatest concern today is the inner-city neighborhood. To combat crime in this setting, should vegetation be removed? This article suggests the opposite. We present theory and evidence to suggest that far from abetting crime, high-canopy trees and grass may actually work to deter crime in poor inner-city neighborhoods. COULD THERE BE EXCEPTIONS TO THE RULE? As a rule, the belief is that vegetation facilitates crime because it hides perpetrators and criminal activity from view. Here, we review the evidence in support of this \u201crule\u201d and suggest conditions under which it might not apply. Although no studies to date have examined whether crime rates are actually higher in the presence of dense vegetation, a variety of evidence links dense vegetation with fear, fear of crime, and possibly crime itself. It is certainly the case that many people fear densely vegetated areas. In research on urban parks, densely wooded areas have consistently been associated with fear. 
In one study, safety ratings for 180 scenes of urban parks showed that individuals felt most vulnerable in densely forested areas and safest in open, mowed areas (Schroeder & Anderson, 1984). And in another study, individuals who were asked for their open-ended responses to photo344 ENVIRONMENT AND BEHAVIOR / May 2001 work was also supported by the Cooperative State Research, Education and Extension Service, U.S. Department of Agriculture, under Project No. ILLU-65-0387. Weare grateful for the assistance of many individuals and other institutions as well. John Potter and Liesette Brunson assisted in data entry and data analysis in the initial stages of this project. A reviewer\u2019s suggestion substantially strengthened the analyses presented here. The Chicago Housing Authority and the management of Ida B. Wells were helpful in many ways, and the Chicago Police Department graciously gave us access to their year-end crime reports. Jerry Barrett helped produce the figures, and Helicopter Transport of Chicago donated the helicopter flight over Ida B. Wells. Correspondence concerning this article should be addressed to Frances E. Kuo, HumanEnvironment Research Laboratory, University of Illinois, 1103 S. Dorner, Urbana, IL, 61801; e-mail: f-kuo@uiuc.edu. graphs of urban parks indicated that heavily vegetated areas seemed dangerous (Talbot & Kaplan, 1984). Although neither of these studies specifically probed fear of crime (as opposed to more general fear), it was clear that at least some participants had crime in mind; one respondent specifically suggested that weedy areas gave muggers good hiding places (Talbot & Kaplan, 1984). Dense vegetation has also been linked specifically to fear of crime. In safety ratings for 180 scenes of parking lots, the more a photo was covered by vegetation, the lower the perceived security (Shaffer & Anderson, 1985). And in research examining fear of crime on a university campus, dense understories that reduced views into areas where criminals might hide were associated with fear of crime (Nasar & Fisher, 1993). In these and other studies, view distance seems to be an important factor. Fear of crime is higher where vegetation blocks views (Fisher & Nasar, 1992; Kuo, Bacaicoa, & Sullivan, 1998; Michael & Hull, 1994). Not only has dense vegetation been linked to general fears and to fear of crime in particular, but two studies have pointed more directly at a facilitative role of vegetation in crime. In the first study, park managers and park police indicated that dense vegetation is regularly used by criminals to conceal their activities (Michael & Hull, 1994). In the second, burglars themselves lent support to this notion. In this study, automobile burglars described how they used dense vegetation in a variety of ways, including to conceal their selection of a target and their escape from the scene, to shield their examination of stolen goods, and finally, in the disposal of unwanted goods (Michael, Hull, & Zahm, 1999). At the same time, Michael and his coauthors made it clear that vegetation was neither necessary nor sufficient for a crime to take place. The clear theme in all these studies is that dense vegetation provides potential cover for criminal activities, possibly increasing the likelihood of crime and certainly increasing the fear of crime. Large shrubs, underbrush, and dense woods all substantially diminish visibility and therefore are capable of supporting criminal activity. But, not all vegetation blocks views. 
A well-maintained grassy area certainly does not block views; widely spaced, high-canopy trees have minimal effect on visibility; and flowers and low-growing shrubs seem unlikely to provide cover for criminal activities. We suggest that although the rule that vegetation aids crime may hold for visibility-decreasing forms of vegetation, there are systematic exceptions to this rule. To wit, we propose that widely spaced, high-canopy trees and other visibility-preserving forms of vegetation do not promote crime. Kuo, Sullivan / VEGETATION AND CRIME 345"}
{"_id": "2a16325d502dbf82e6feea943379b5eb0f79947f", "title": "Structural modeling and analysis of dengue-mediated inhibition of interferon signaling pathway.", "text": "Dengue virus (DENV) belongs to the family Flaviviridae and can cause major health problems worldwide, including dengue fever and dengue shock syndrome. DENV replicon in human cells inhibits interferon \u03b1 and \u03b2 with the help of its non-structural proteins. Non-structural protein 5 (NS5) of DENV is responsible for the proteasome-mediated degradation of signal transducer and activator of transcription (STAT) 2 protein, which has been implicated in the development of resistance against interferon-mediated antiviral effect. This degradation of STAT2 primarily occurs with the help of E3 ubiquitin ligases. Seven in absentia homologue (SIAH) 2 is a host protein that can mediate the ubiquitination of proteins and is known for its interaction with NS5. In this study, comprehensive computational analysis was performed to characterize the protein-protein interactions between NS5, SIAH2, and STAT2 to gain insight into the residues and sites of interaction between these proteins. The objective of the study was to structurally characterize the NS5-STAT2, SIAH2-STAT2, and NS5-SIAH2 interactions along with the determination of the possible reaction pattern for the degradation of STAT2. Docking and physicochemical studies indicated that DENV NS5 may first interact with the host SIAH2, which can then proceed towards binding with STAT2 from the side of SIAH2. These implications are reported for the first time and require validation by wet-lab studies."}
{"_id": "2b0bfff8bd0e4b5a57c8a82d143b7c0dcae8a046", "title": "Visualizing cyber security: Usable workspaces", "text": "The goal of cyber security visualization is to help analysts increase the safety and soundness of our digital infrastructures by providing effective tools and workspaces. Visualization researchers must make visual tools more usable and compelling than the text-based tools that currently dominate cyber analysts' tool chests. A cyber analytics work environment should enable multiple, simultaneous investigations and information foraging, as well as provide a solution space for organizing data. We describe our study of cyber-security professionals and visualizations in a large, high-resolution display work environment and the analytic tasks this environment can support. We articulate a set of design principles for usable cyber analytic workspaces that our studies have brought to light. Finally, we present prototypes designed to meet our guidelines and a usability evaluation of the environment."}
{"_id": "3834b27a7c684afabf4464a15cb5d3a0c4d4918d", "title": "Visual Analytics: Scope and Challenges", "text": "In today\u2019s applications data is produced at unprecedented rates. While the capacity to collect and store new data rapidly grows, the ability to analyze these data volumes increases at much lower rates. This gap leads to new challenges in the analysis process, since analysts, decision makers, engineers, or emergency response teams depend on information hidden in the data. The emerging field of visual analytics focuses on handling these massive, heterogenous, and dynamic volumes of information by integrating human judgement by means of visual representations and interaction techniques in the analysis process. Furthermore, it is the combination of related research areas including visualization, data mining, and statistics that turns visual analytics into a promising field of research. This paper aims at providing an overview of visual analytics, its scope and concepts, addresses the most important research challenges and presents use cases from a wide variety of application scenarios."}
{"_id": "095b05f6f0803bb1871b677cf3c3d4b41dbe6d18", "title": "The base-rate fallacy and the difficulty of intrusion detection", "text": "Many different demands can be made of intrusion detection systems. An important requirement is that an intrusion detection system be effective; that is, it should detect a substantial percentage of intrusions into the supervised system, while still keeping the false alarm rate at an acceptable level. This article demonstrates that, for a reasonable set of assumptions, the false alarm rate is the limiting factor for the performance of an intrusion detection system. This is due to the base-rate fallacy phenomenon, that in order to achieve substantial values of the Bayesian detection rate P(Intrusion***Alarm), we have to achieve a (perhaps in some cases unattainably) low false alarm rate. A selection of reports of intrusion detection performance are reviewed, and the conclusion is reached that there are indications that at least some types of intrusion detection have far to go before they can attain such low false alarm rates."}
{"_id": "4f2a1f53c2a326d94cb4962c73cf60a9dbaa240d", "title": "Generalized fisheye views", "text": "In many contexts, humans often represent their own \u201cneighborhood\u201d in great detail, yet only major landmarks further away. This suggests that such views (\u201cfisheye views\u201d) might be useful for the computer display of large information structures like programs, data bases, online text, etc. This paper explores fisheye views presenting, in turn, naturalistic studies, a general formalism, a specific instantiation, a resulting computer program, example displays and an evaluation."}
{"_id": "2f69a87647007e9bc028874d7c21e2e840be2c5b", "title": "Automatic Root Cause Analysis for LTE Networks Based on Unsupervised Techniques", "text": "The increase in the size and complexity of current cellular networks is complicating their operation and maintenance tasks. While the end-to-end user experience in terms of throughput and latency has been significantly improved, cellular networks have also become more prone to failures. In this context, mobile operators start to concentrate their efforts on creating self-healing networks, i.e., those networks capable of troubleshooting in an automatic way, making the network more reliable and reducing costs. In this paper, an automatic diagnosis system based on unsupervised techniques for Long-Term Evolution (LTE) networks is proposed. In particular, this system is built through an iterative process, using self-organizing maps (SOMs) and Ward's hierarchical method, to guarantee the quality of the solution. Furthermore, to obtain a number of relevant clusters and label them properly from a technical point of view, an approach based on the analysis of the statistical behavior of each cluster is proposed. Moreover, with the aim of increasing the accuracy of the system, a novel adjustment process is presented. It intends to refine the diagnosis solution provided by the traditional SOM according to the so-called silhouette index and the most similar cause on the basis of the minimum Xth percentile of all distances. The effectiveness of the developed diagnosis system is validated using real and simulated LTE data by analyzing its performance and comparing it with reference mechanisms."}
{"_id": "2b0501ba0b9791548495f78798bbc416771498a8", "title": "A journey to highly dynamic, self-adaptive service-based applications", "text": "Future software systems will operate in a highly dynamic world. Systems will need to operate correctly despite of unespected changes in factors such as environmental conditions, user requirements, technology, legal regulations, and market opportunities. They will have to operate in a constantly evolving environment that includes people, content, electronic devices, and legacy systems. They will thus need the ability to continuously adapt themselves in an automated manner to react to those changes. To realize dynamic, self-adaptive systems, the service concept has emerged as a suitable abstraction mechanism. Together with the concept of the service-oriented architecture (SOA), this led to the development of technologies, standards, and methods to build service-based applications by flexibly aggregating individual services. This article discusses how those concepts came to be by taking two complementary viewpoints. On the one hand, it evaluates the progress in software technologies and methodologies that led to the service concept and SOA. On the other hand, it discusses how the evolution of the requirements, and in particular business goals, influenced the progress towards highly dynamic self-adaptive systems. Finally, based on a discussion of the current state of the art, this article points out the possible future evolution of the field."}
{"_id": "0be87c8cfe8fd3770af835525ef53dd5253b80a1", "title": "An Efficient Synchronization Mechanism for Mirrored Game Architectures", "text": "Existing online multiplayer games typically use a client-server model, which introduces added latency as well as a single bottleneck and single point of failure to the game. Distributed multiplayer games minimize latency and remove the bottleneck, but require special synchronization mechanisms to provide a consistent game for all players. Current synchronization methods have been borrowed from distributed military simulations and are not optimized for the requirements of fast-paced multiplayer games. In this paper we present a new synchronization mechanism, trailing state synchronization (TSS), which is designed around the requirements of distributed first-person shooter games. We look at TSS in the environment of a mirrored game architecture, which is a hybrid between traditional centralized architectures and the more scalable peer-to-peer architectures. Mirrored architectures allow for improved performance compared to client-server architectures while at the same time allowing for a greater degree of centralized administration than peer-to-peer architectures. We evaluate the performance of TSS and other synchronization methods through simulation and examine heuristics for selecting the synchronization delays needed for TSS."}
{"_id": "55a3c14180b1fe91569c9629091d76323d0c6ec2", "title": "Algorithmic Design of Low-Power Variable-Stiffness Mechanisms", "text": "Compliant actuators enabling low-power stiffness adaptation are missing ingredients and key enablers of next generation robotic systems. One of the key components of these actuators is the mechanism implementing stiffness adaptation that requires sophisticated control and nontrivial mechanical design. However, despite recent advances in controlling these systems, their design remains experience based and not well understood. In this paper, we present an optimization-based computational framework for the design of intrinsically low-power compliant variable stiffness mechanisms. The core ingredient of this framework is the mathematical formulation of the design problem\u2014provided by a constrained nonlinear parameter optimization\u2014which is computationally solved here to identify optimal variable stiffness designs. We show the basic capability of this formulation in finding parameters for variable stiffness mechanisms that require the least power by design. Further, we demonstrate the generality of this method in cross-comparing mechanisms with different kinematic topology to identify the one that requires the least power by design."}
{"_id": "97c9f9a74b6decc130010e2394983f0ccd94ff73", "title": "Computer- vs. paper-based tasks: are they equivalent?", "text": "In 1992, Dillon published his critical review of the empirical literature on reading from paper vs. screen. However, the debate concerning the equivalence of computer- and paper-based tasks continues, especially with the growing interest in online assessment. The current paper reviews the literature over the last 15 years and contrasts the results of these more recent studies with Dillon's findings. It is concluded that total equivalence is not possible to achieve, although developments in computer technology, more sophisticated comparative measures and more positive user attitudes have resulted in a continuing move towards achieving this goal. Many paper-based tasks used for assessment or evaluation have been transferred directly onto computers with little regard for any implications. This paper considers equivalence issues between the media by reviewing performance measures. While equivalence seems impossible, the importance of any differences appears specific to the task and required outcomes."}
{"_id": "4f485509e7b7db2c1a0a356ff1e929f474650606", "title": "Cellular commitment in the developing cerebellum", "text": "The mammalian cerebellum is located in the posterior cranial fossa and is critical for motor coordination and non-motor functions including cognitive and emotional processes. The anatomical structure of cerebellum is distinct with a three-layered cortex. During development, neurogenesis and fate decisions of cerebellar primordium cells are orchestrated through tightly controlled molecular events involving multiple genetic pathways. In this review, we will highlight the anatomical structure of human and mouse cerebellum, the cellular composition of developing cerebellum, and the underlying gene expression programs involved in cell fate commitments in the cerebellum. A critical evaluation of the cell death literature suggests that apoptosis occurs in ~5% of cerebellar cells, most shortly after mitosis. Apoptosis and cellular autophagy likely play significant roles in cerebellar development, we provide a comprehensive discussion of their role in cerebellar development and organization. We also address the possible function of unfolded protein response in regulation of cerebellar neurogenesis. We discuss recent advancements in understanding the epigenetic signature of cerebellar compartments and possible connections between DNA methylation, microRNAs and cerebellar neurodegeneration. Finally, we discuss genetic diseases associated with cerebellar dysfunction and their role in the aging cerebellum."}
{"_id": "057d5f66a873ec80f8ae2603f937b671030035e6", "title": "Newtonian Image Understanding: Unfolding the Dynamics of Objects in Static Images", "text": "In this paper, we study the challenging problem of predicting the dynamics of objects in static images. Given a query object in an image, our goal is to provide a physical understanding of the object in terms of the forces acting upon it and its long term motion as response to those forces. Direct and explicit estimation of the forces and the motion of objects from a single image is extremely challenging. We define intermediate physical abstractions called Newtonian scenarios and introduce Newtonian Neural Network (N3) that learns to map a single image to a state in a Newtonian scenario. Our evaluations show that our method can reliably predict dynamics of a query object from a single image. In addition, our approach can provide physical reasoning that supports the predicted dynamics in terms of velocity and force vectors. To spur research in this direction we compiled Visual Newtonian Dynamics (VIND) dataset that includes more than 6000 videos aligned with Newtonian scenarios represented using game engines, and more than 4500 still images with their ground truth dynamics."}
{"_id": "588202cdbc7f56a56961ed22f4dd673d8e8505a6", "title": "Neural networks in building QSAR models.", "text": "This chapter critically reviews some of the important methods being used for building quantitative structure-activity relationship (QSAR) models using the artificial neural networks (ANNs). It attends predominantly to the use of multilayer ANNs in the regression analysis of structure-activity data. The highlighted topics cover the approximating ability of ANNs, the interpretability of the resulting models, the issues of generalization and memorization, the problems of overfitting and overtraining, the learning dynamics, regularization, and the use of neural network ensembles. The next part of the chapter focuses attention on the use of descriptors. It reviews different descriptor selection and preprocessing techniques; considers the use of the substituent, substructural, and superstructural descriptors in building common QSAR models; the use of molecular field descriptors in three-dimensional QSAR studies; along with the prospects of \"direct\" graph-based QSAR analysis. The chapter starts with a short historical survey of the main milestones in this area."}
{"_id": "8d9b83d213b172582fcdb930ebe3d33ada3c5a8e", "title": "Complexity of vehicle routing and scheduling problems", "text": "The complexity of a class of vehicle routing and scheduling problems is investigated. We review known NP-hardness results and compile the results on the worst-case performance of approximation algorithms. Some directions for future research are suggested. The presentation is based on two discussion sessions during the Workshop to Investigate Future Directions in Routing and Scheduling of Vehicles and Crews, held at the University of Maryland at College Park, June 4-6, 1979."}
{"_id": "0530f1d4d2283676e1f20bfb100753b164a44313", "title": "Flat-Region Detection and False Contour Removal in the Digital TV Display", "text": "Bit-depth reduction in digital displays results in false contours in the image. Moreover some of video enhancement processing in the digital TV display, such as histogram equalization, contrast enhancement, and increasing sharpness, etc., make false contours more visible. Bit-depth reduction comes from various display limitations such as video memory constraints, physical characteristics of the display, display drivers, and coarse MPEG quantization, etc. Daly, SJ et al., (2004). We present an efficient method for detecting and segmenting flat-region in the image, and a technique for bit-depth extension to effectively remove false contours. We have simulated bit-depth reduction followed by video enhancement processing that cause false contours with various video image sequences including simple patterns. Our result shows that false contours are effectively removed in the flat-region in the image while the sharpness of object edges is preserved"}
{"_id": "102b71923e42d2d046567163ece35a898f9c8ccd", "title": "BakerSFIeld : Bringing software fault isolation to x 64", "text": "We present BakerSFIeld, a Software Fault Isolation (SFI) framework for the x64 instruction set architecture. BakerSFIeld is derived from PittSFIeld, and retains the assembly language rewriting style of sandboxing, but has been substantially re-architected for greater clarity, separating the concerns of rewriting the program for memory safety, and re-aligning the code for control-flow safety. BakerSFIeld also corrects several significant security flaws we found in PittSFIeld. We measure the performance of BakerSFIeld on a modern 64-bit desktop machine and determine that it has significantly lower overhead than its 32-bit cousin."}
{"_id": "5516b99b30d93e58d628f5af671ca50a07c2effb", "title": "Leveraging Multiple Domains for Sentiment Classification", "text": "Sentiment classification becomes more and more important with the rapid growth of usergenerated content. However, sentiment classification task usually comes with two challenges: first, sentiment classification is highly domain-dependent and training sentiment classifier for every domain is inefficient and often impractical; second, since the quantity of labeled data is important for assessing the quality of classifier, it is hard to evaluate classifiers when labeled data is limited for certain domains. To address the challenges mentioned above, we focus on learning high-level features that are able to generalize across domains, so a global classifier can benefit with a simple combination of documents from multiple domains. In this paper, the proposed model incorporates both labeled and unlabeled data from multiple domains and learns new feature representations. Our model doesn\u2019t require labels from every domain, which means the learned feature representation can be generalized for sentiment domain adaptation. In addition, the learned feature representation can be used as classifier since our model defines the meaning of feature value and arranges high-level features in a prefixed order, so it is not necessary to train another classifier on top of the new features. Empirical evaluations demonstrate our model outperforms baselines and yields competitive results to other state-of-the-art works on the benchmark dataset."}
{"_id": "a37aeb94b56be9943198cb57021251cc0668ef3d", "title": "Factors that relate to good and poor handwriting.", "text": "OBJECTIVE\nThis study investigated the relationships between specific performance components, eye-hand coordination, visuomotor integration, in-hand manipulation, and handwriting skill.\n\n\nMETHOD\nA sample of 48 typical first grade students were identified as good and poor handwriters by their teachers. Each child completed the Motor Accuracy Test; the Developmental Test of Visual-Motor Integration (VMI); two tests of in-hand manipulation, including a rotation and a translation task; and the Minnesota Handwriting Test (MHT).\n\n\nRESULTS\nAll test scores for the subjects with good handwriting were significantly higher than those of the subjects with poor handwriting. Each performance component test was significantly correlated to MHT scores. Translation, VMI, and rotation scores were significant predictors of MHT scores, accounting for almost 73% of variance. A discriminant analysis using the performance components correctly classified 98% of the students as good or poor handwriters.\n\n\nCONCLUSION\nIn-hand manipulation has significant association to handwriting skill."}
{"_id": "6f2e21685cad7b269d754d386e30de8bddd43489", "title": "Cloud Log Forensics: Foundations, State of the Art, and Future Directions", "text": "Cloud log forensics (CLF) mitigates the investigation process by identifying the malicious behavior of attackers through profound cloud log analysis. However, the accessibility attributes of cloud logs obstruct accomplishment of the goal to investigate cloud logs for various susceptibilities. Accessibility involves the issues of cloud log access, selection of proper cloud log file, cloud log data integrity, and trustworthiness of cloud logs. Therefore, forensic investigators of cloud log files are dependent on cloud service providers (CSPs) to get access of different cloud logs. Accessing cloud logs from outside the cloud without depending on the CSP is a challenging research area, whereas the increase in cloud attacks has increased the need for CLF to investigate the malicious activities of attackers. This paper reviews the state of the art of CLF and highlights different challenges and issues involved in investigating cloud log data. The logging mode, the importance of CLF, and cloud log-as-a-service are introduced. Moreover, case studies related to CLF are explained to highlight the practical implementation of cloud log investigation for analyzing malicious behaviors. The CLF security requirements, vulnerability points, and challenges are identified to tolerate different cloud log susceptibilities. We identify and introduce challenges and future directions to highlight open research areas of CLF for motivating investigators, academicians, and researchers to investigate them."}
{"_id": "0de7b3211469c8e653420412285b7eaec6ef02e8", "title": "Relations among positive parenting, children's effortful control, and externalizing problems: a three-wave longitudinal study.", "text": "In a 3-wave longitudinal study (with assessments 2 years apart) involving 186 early adolescents (M ages of approximately 9.3, 11.4, and 13.4), the hypothesis that parental warmth/positive expressivity predicts children's effortful control (EC) (a temperamental characteristic contributing to emotion regulation) 2 years later, which in turn predicts low levels of externalizing problems another 2 years later, was examined. The hypothesis that children's EC predicts parenting over time was also examined. Parents were observed interacting with their children; parents and teachers reported children's EC and externalizing problems; and children's persistence was assessed behaviorally. Children's EC mediated the relation between positive parenting and low levels of externalizing problems (whereas there was no evidence that children's EC predicted parenting)."}
{"_id": "c4f7d2ca3105152e5be77d36add2582977649b1d", "title": "Uninvited Connections: A Study of Vulnerable Devices on the Internet of Things (IoT)", "text": "The Internet of Things (IoT) continues to grow as uniquely identifiable objects are added to the internet. The addition of these devices, and their remote connectivity, has brought a new level of efficiency into our lives. However, the security of these devices has come into question. While many may be secure, the sheer number creates an environment where even a small percentage of insecure devices may create significant vulnerabilities. This paper evaluates some of the emerging vulnerabilities that exist and puts some figures to the scale of the threat."}
{"_id": "77aaa90f7f6814b140d48bd3797a12da46900dd7", "title": "Adaptive Current Control for Grid-Connected Converters With LCL Filter", "text": "This paper presents a discrete-time adaptive current controller for grid-connected pulse width modulation voltage source converters with LCL filter. The main attribute of the proposed current controller is that, in steady state, the damping of the LCL resonance does not depend on the grid characteristic since the adaptive feedback gains ensure a predefined behavior for the closed-loop current control. An overview of model reference adaptive state feedback theory is presented aiming to give the reader the required background for the adaptive current control design. The digital implementation delay is included in the model, and the stability concerning the variation of the grid parameters is analyzed in detail. Furthermore, current distortions due to the grid background voltage are rejected without using the conventional stationary resonant controllers or the synchronous proportional-plus-integral controllers. Simulation and experimental results are presented to validate the analysis and to demonstrate the good performance of the proposed controller for grid-connected converters subjected to large grid impedance variation and grid voltage disturbances."}
{"_id": "bb95223053244ca6e418e5077005e4fee4777d59", "title": "Domain Adaptation for Relation Extraction with Domain Adversarial Neural Network", "text": "Relations are expressed in many domains such as newswire, weblogs and phone conversations. Trained on a source domain, a relation extractor\u2019s performance degrades when applied to target domains other than the source. A common yet labor-intensive method for domain adaptation is to construct a target-domainspecific labeled dataset for adapting the extractor. In response, we present an unsupervised domain adaptation method which only requires labels from the source domain. Our method is a joint model consisting of a CNN-based relation classifier and a domain-adversarial classifier. The two components are optimized jointly to learn a domain-independent representation for prediction on the target domain. Our model outperforms the state-of-the-art on all three test domains of ACE 2005."}
{"_id": "cf8a1e69fb0736874ab1251575335abfad143a23", "title": "What is Your Organization 'Like'?: A Study of Liking Activity in the Enterprise", "text": "The 'like' button, introduced by Facebook several years ago, has become one of the most prominent icons of social media. Similarly to other popular social media features on the web, enterprises have also recently adopted it. In this paper, we present a first comprehensive study of liking activity in the enterprise. We studied the logs of an enterprise social media platform within a large global organization along a period of seven months, in which 393,720 'likes' were performed. In addition, we conducted a survey of 571 users of the platform's 'like' button. Our evaluation combines quantitative and qualitative analysis to inspect what employees like, why they use the 'like' button, and to whom they give their 'likes'."}
{"_id": "f43c26208ce55c6bc97114e4a8ae9a24b3753c71", "title": "Time series contextual anomaly detection for detecting market manipulation in stock market", "text": "Anomaly detection in time series is one of the fundamental issues in data mining that addresses various problems in different domains such as intrusion detection in computer networks, irregularity detection in healthcare sensory data and fraud detection in insurance or securities. Although, there has been extensive work on anomaly detection, majority of the techniques look for individual objects that are different from normal objects but do not take the temporal aspect of data into consideration. We are particularly interested in contextual outlier detection methods for time series that are applicable to fraud detection in securities. This has significant impacts on national and international securities markets. In this paper, we propose a prediction-based Contextual Anomaly Detection (CAD) method for complex time series that are not described through deterministic models. The proposed method improves the recall from 7% to 33% compared to kNN and Random Walk without compromising the precision."}
{"_id": "d75a4d66e3c24d1f6efa6543a6bb208aae5ca21e", "title": "Sketching Interfaces: Toward More Human Interface Design", "text": "A s computers grow more powerful, less expensive , and more widely available, people are expecting them not only to perform obvious computational tasks, but also to assist in people-oriented tasks, such as writing , drawing, and designing. This shift is causing some user-interface (UI) researchers to rethink the traditional reliance on methods that are more machine-oriented and to look at ways to support properties like ambiguity, creativity, and informal communication. The idea is to bend computers to people's way of interacting , not the other way around. This flexibility is particularly important in the early stages of UI design itself, when designers need the freedom to sketch rough design ideas quickly, the ability to test designs by interacting with them, and the flexibility to fill in the design details as they make choices. 1 Tools at this stage must support conceptual design, which is characterized by ambiguity and the need to create several design variations quickly, as the \" Why Sketching Is Important \" sidebar describes. Unfortunately , with current UI tools, designers tend to focus on issues such as colors, fonts, and alignment, which are more appropriate later in the design. Thus, most UI designers resort to sketching ideas on paper, but these are hard to edit and inconvenient for user evaluations. (CMU) have designed, implemented, and evaluated SILK (Sketching Interfaces Like Krazy), an informal sketching tool that combines many of the benefits of paper-based sketching with the merits of current electronic tools. With SILK, designers can quickly sketch an interface using an electronic pad and stylus, and SILK recognizes widgets and other interface elements as the designer draws them. Unlike paper-based sketching, however, designers can exercise these elements in their sketchy state. For example, a sketched scrollbar is likely to contain an elevator or thumbnail, the small rectangle a user drags with a mouse. In a paper sketch, the elevator would just sit there, but in a SILK sketch, designers can drag it up and down, which lets them test the components' or widgets' behavior. SILK also supports the creation of storyboards\u2014the arrangement of sketches to show how design elements behave, such as how a dialog box appears when the user activates a button. Storyboards are important because they give designers a way to show colleagues, customers, or end users early on how an interface will behave. Designers can test the interface at any point, not just \u2026"}
{"_id": "144ac03e8b30920bd4f8eaf5363c3f69bc7989a0", "title": "Interactive multiresolution mesh editing", "text": "We describe a multiresolution representation for meshes based on subdivision, which is a natural extension of the existing patch-based surface representations. Combining subdivision and the smoothing algorithms of Taubin [26] allows us to construct a set of algorithms for interactive multiresolution editing of complex hierarchical meshes of arbitrary topology. The simplicity of the underlying algorithms for refinement and coarsification enables us to make them local and adaptive, thereby considerably improving their efficiency. We have built a scalable interactive multiresolution editing system based on such algorithms."}
{"_id": "3e88d6f867ec3d1c0347b20698d2e82117485219", "title": "Suede: a Wizard of Oz prototyping tool for speech user interfaces", "text": "Speech-based user interfaces are growing in popularity. Unfortunately, the technology expertise required to build speech UIs precludes many individuals from participating in the speech interface design process. Furthermore, the time and knowledge costs of building even simple speech systems make it difficult for designers to iteratively design speech UIs. SUEDE, the speech interface prototyping tool we describe in this paper, allows designers to rapidly create prompt/response speech interfaces. It offers an electronically supported Wizard of Oz (WOz) technique that captures test data, allowing designers to analyze the interface after testing. This informal tool enables speech user interface designers, even non-experts, to quickly create, test, and analyze speech user interface prototypes."}
{"_id": "60aecc883a77bf0865d62dcf803703fbfa2ceb7e", "title": "Ambiguous Intentions: A Paper-like Interface for Creative Design", "text": "Interfaces for conceptual and creative design should recognize and interpret drawings. They should also capture users\u2019 intended ambiguity, vagueness, and imprecision and convey these qualities visually and through interactive behavior. Freehand drawing can provide this information and it is a natural input mode for design. We describe a pen-based interface that acquires information about ambiguity and precision from freehand input, represents it internally, and echoes it to users visually and through constraint based"}
{"_id": "8671518a43bc7c9d5446b49640ee8783d5b580d7", "title": "Deformation transfer for triangle meshes", "text": ""}
{"_id": "e5f67b995b09e750bc1a32293d5a528de7f601a9", "title": "A Customizable Framework for Prioritizing Systems Security Engineering Processes, Activities, and Tasks", "text": "As modern systems become increasingly complex, current security practices lack effective methodologies to adequately address the system security. This paper proposes a repeatable and tailorable framework to assist in the application of systems security engineering (SSE) processes, activities, and tasks as defined in the recently released National Institute of Standards and Technology (NIST) Special Publication 800\u2013160. First, a brief survey of systems-oriented security methodologies is provided. Next, an examination of the relationships between the NIST-defined SSE processes is conducted to provide context for the engineering problem space. These findings inform a mapping of the NIST SSE processes to seven system-agnostic security domains which enable prioritization for three types of systems (conventional IT, cyber-physical, and defense). These concrete examples provide further understanding for applying and prioritizing the SSE effort. The goal of this paper is assist practitioners by informing the efficient application of the 30 processes, 111 activities, and 428 tasks defined in NIST SP 800\u2013160. The customizable framework tool is available online for developers to employ, modify, and tailor to meet their needs."}
{"_id": "8bb78bf5b55a908ead0f1bc16f53ca9e4ec4a1c2", "title": "Internet of Things : Architectural Framework for eHealth Security", "text": "The Internet of Things (IoT) holds big promises for Health Care, especially in Proactive Personal eHealth. To fulfil the promises major challenges must be overcome, particularly regarding privacy and security. This paper explores these issues, discusses use case scenarios, and advances a secure architecture framework. We close the paper with a discussion of the state of various standard organizations, in the conviction of the critical role they will play in making eHealth bloom."}
{"_id": "3615f566f33c4c0f1a9276b8fd77feb3bff89c1a", "title": "Performance assessment of Multilateration Systems - a solution to nextgen surveillance", "text": "Multilateration (MLAT) surveillance is now being used in all types of airspace for air traffic management. MLAT can be used for airport surface movement surveillance as well as for terminal and en route surveillance, using Wide Area Multilateration (WAM). MLAT is a low-cost technology that not only has major advantages in operations and maintenance, but also provides excellent performance under all conditions, especially for countries with large geographic areas or mountainous terrain to cover. This paper focuses on performance metrics from operational system data to demonstrate that the performance of Era's MLAT systems meet surveillance requirements for all surveillance applications. This includes the key performance requirements for the various surveillance applications that are evaluated, including accuracy, coverage, update rate, integrity and availability."}
{"_id": "50f9c565fc2cbd67185a2dfa39b8af05caf8b15c", "title": "Beyond flat surface computing: challenges of depth-aware and curved interfaces", "text": "In the past decade, multi-touch-sensitive interactive surfaces have transitioned from pure research prototypes in the lab, to commercial products with wide-spread adoption. One of the longer term visions of this research follows the idea of ubiquitous computing, where everyday surfaces in our environment are made interactive. However, most of current interfaces remain firmly tied to the traditional flat rectangular displays of the today's computers and while they benefit from the directness and the ease of use, they are often not much more than touch-enabled standard desktop interfaces.\n In this paper, we argue for explorations that transcend the traditional notion of the flat display, and envision interfaces that are curved, three-dimensional, or that cross the boundary between the digital and physical world. In particular, we present two research directions that explore this idea: (a) exploring the three-dimensional interaction space above the display and (b) enabling gestural and touch interactions on curved devices for novel interaction possibilities. To illustrate both of these, we draw examples from our own work and the work of others, and guide the reader through several case studies that highlight the challenges and benefits of such novel interfaces. The implications on media requirements and collaboration aspects are discussed in detail, and, whenever possible, we highlight promising directions of future research. We believe that the compelling application design for future non-flat user interfaces will greatly depend on exploiting the unique characteristics of the given form factor."}
{"_id": "ea969a3fa9529873b17403ed941cbd499b93b2c1", "title": "Body Area Sensor Networks: Requirements, Operations, and Challenges", "text": "This article provides an overview of body area sensor networks (BASNs), an application of wireless technology that changed health care to suit the comfort of the population. There are various applications of BASNs and attempts have been made to make the human body a channel for wireless communication. BASNs employ a three-tiered architectural system that requires various technical requirements for its optimal and efficient operation. Energy consumption is one of the major issues that is currently being addressed through self-harvesting and many other techniques. Even with the benefits at hand, there are various issues such as interference and eavesdropping that BASNs have to tackle. Biometrics is a widely used solution. Researchers are also working on various ambitious projects that deal with improving deep brain simulation, heart regulation, drug delivery, and prosthetic actuation to use BASN effectively."}
{"_id": "9a1eba52439b3a8ba7445d7f30aea7f456385589", "title": "DeepMind Lab", "text": "DeepMind Lab is a first-person 3D game platform designed for research and development of general artificial intelligence and machine learning systems. DeepMind Lab can be used to study how autonomous artificial agents may learn complex tasks in large, partially observed, and visually diverse worlds. DeepMind Lab has a simple and flexible API enabling creative task-designs and novel AI-designs to be explored and quickly iterated upon. It is powered by a fast and widely recognised game engine, and tailored for effective use by the research community."}
{"_id": "146337bef79ce443142626f3b320e29d97f957fd", "title": "Deep Learning for Practical Image Recognition: Case Study on Kaggle Competitions", "text": "In past years, deep convolutional neural networks (DCNN) have achieved big successes in image classification and object detection, as demonstrated on ImageNet in academic field. However, There are some unique practical challenges remain for real-world image recognition applications, e.g., small size of the objects, imbalanced data distributions, limited labeled data samples, etc. In this work, we are making efforts to deal with these challenges through a computational framework by incorporating latest developments in deep learning. In terms of two-stage detection scheme, pseudo labeling, data augmentation, cross-validation and ensemble learning, the proposed framework aims to achieve better performances for practical image recognition applications as compared to using standard deep learning methods. The proposed framework has recently been deployed as the key kernel for several image recognition competitions organized by Kaggle. The performance is promising as our final private scores were ranked 4 out of 2293 teams for fish recognition on the challenge \"The Nature Conservancy Fisheries Monitoring\" and 3 out of 834 teams for cervix recognition on the challenge \"Intel &MobileODT Cervical Cancer Screening\", and several others. We believe that by sharing the solutions, we can further promote the applications of deep learning techniques."}
{"_id": "d63c953aa3a7327202d7af9baeda6fd0a646d5a5", "title": "Overview of the EVS codec architecture", "text": "The recently standardized 3GPP codec for Enhanced Voice Services (EVS) offers new features and improvements for low-delay real-time communication systems. Based on a novel, switched low-delay speech/audio codec, the EVS codec contains various tools for better compression efficiency and higher quality for clean/noisy speech, mixed content and music, including support for wideband, super-wideband and full-band content. The EVS codec operates in a broad range of bitrates, is highly robust against packet loss and provides an AMR-WB interoperable mode for compatibility with existing systems. This paper gives an overview of the underlying architecture as well as the novel technologies in the EVS codec and presents listening test results showing the performance of the new codec in terms of compression and speech/audio quality."}
{"_id": "cbc7329b4576d4459514e9875f8f85e71961cbb4", "title": "Interaction techniques using head gaze for virtual reality", "text": "Virtual Reality (VR) has finally caught the attention of major technology companies, giving it a significant boost towards becoming market-ready. VR has the capacity to create profoundly engaging experiences by synthesizing highly detailed sensory input in real time, making games the most obvious applications of VR. While the promises of VR may be exciting due to the problems it can solve and new experiences it can offer, there are still numerous issues to be addressed in order to achieve maximal immersion in VR. In this paper, we present the issues encountered during open testing of our applications on commercially available VR goggles and hand input device. The inability to effectively use the hands and feet to control and navigate the environment degrades the level of immersion of VR. With users preferring reliability over intuitiveness, we explore the use of gaze as a substitute input device. Results of experiments conducted on numerous volunteers showed significant improvement in terms of user control."}
{"_id": "5747e99b367e991d1371d37fffa84ff9a5f285cb", "title": "Classification for Fraud Detection with Social Network Analysis", "text": "Worldwide fraud conducts to big losses to states\u2019 treasuries and to private companies. Because of that, motivations to detect and fight fraud are high, but despite continuous efforts, it is far from being accomplished. The problems faced when trying to characterize fraud activities are many, with the specificities of fraud on each business domain leading the list. Despite the differences, building a classifier for fraud detection almost always requires to deal with unbalanced datasets, since fraudulent records are usually in a small number when compared with the nonfraudulent ones. This work describes two types of techniques to deal with fraud detection: techniques at a preprocessing level where the goal is to balance the dataset, and techniques at a processing level where the objective is to apply different errors costs to fraudulent and non-fraudulent cases. Besides that, as organizations and people more often do associations in order to commit fraud, is proposed a new method to make use of that information to improve the training of classifiers for fraud detection. In particular, this new method identifies patterns among the social networks for fraudulent organizations, and uses them to enrich the description of its entity. The enriched data will then be used jointly with balancing techniques to produce a better classifier to identify fraud."}
{"_id": "01229b6ee115841a28ae27ba49b667ccf1f0c2c0", "title": "Sentiment analysis of students' comment using lexicon based approach", "text": "In education system, students' feedback is important to measure the quality of teaching. Students' feedback can be analyzed using lexicon based approach to identify the students' positive or negative attitude. In most of the existing teaching evaluation system, the intensifier words and blind negation words are not considered. The level of opinion result isn't displayed: whether positive or negative opinion. To address this problem, we propose to analyze the students' text feedback automatically using lexicon based approach to predict the level of teaching performance. A database of English sentiment words is created as a lexical source to get the polarity of words. By analyzing the sentiment information including intensifier words extracting from students' feedback, we are able to determine opinion result of teachers, describing the level of positive or negative opinions. This system shows the opinion result of teachers that is represented as to whether strongly positive, moderately positive, weakly positive, strongly negative, moderately negative, weakly negative or neutral."}
{"_id": "1815f026176b471c424779808d972f3c687ea14e", "title": "The task-based approach : some questions and suggestions", "text": "This article >rst addresses the question of what tasks are. It suggests that rather than accept the common \u2018communicative\u2019 de>nition, we should return to a broader de>nition and then focus on key dimensions that distinguish (from the learner\u2019s perspective) di=erent types of task, notably degrees of taskinvolvement and degrees of focus on form or meaning. This approach helps us to conceptualize the complementary roles of form-focused and meaningfocused tasks in our methodology. It also shows the continuity between taskbased language teaching and the broader communicative approach within which it is a development. Finally the article asks whether \u2018task-based approach\u2019 is really the most appropriate term at all for describing these developments in language pedagogy."}
{"_id": "2d22e31f8ada5ba5eb02458e94a5019cdf191314", "title": "Go with the Flow: When Listeners Use Music as Technology", "text": "Music has been shown to have a profound effect on listeners\u2019 internal states as evidenced by neuroscience research. Listeners report selecting and listening to music with specific intent, thereby using music as a tool to achieve desired psychological effects within a given context. In light of these observations, we argue that music information retrieval research must revisit the dominant assumption that listening to music is only an end unto itself. Instead, researchers should embrace the idea that music is also a technology used by listeners to achieve a specific desired internal state, given a particular set of circumstances and a desired goal. This paper focuses on listening to music in isolation (i.e., when the user listens to music by themselves with headphones) and surveys research from the fields of social psychology and neuroscience to build a case for a new line of research in music information retrieval on the ability of music to produce flow states in listeners. We argue that interdisciplinary collaboration is necessary in order to develop the understanding and techniques necessary to allow listeners to exploit the full potential of music as psychological technology."}
{"_id": "f451d97f1f403a3043752abb552d15a6568fcfd9", "title": "A single neuron PID control for twin rotor MIMO system", "text": "This paper presents an intelligent control scheme which utilizes a single neuron PID controller to an experimental propeller setup called the twin rotor multi-input multi-output system (TRMS). The control objective is to make the TRMS move quickly and accurately to the desired attitudes. The pitch angle and the azimuth angle in the conditions of decoupled and cross-coupled between vertical and horizontal axes are considered. It is difficult to design a suitable controller because of the influence between two axes and nonlinear movement. For easy demonstrate the movement on the vertical plane and horizontal plane separately, the TRMS is decoupled first by the main rotor and tail rotor. After successful simulations in decoupled condition, the more difficult simulations of cross-coupled condition are performed. Simulation results show that the new approach can improve the tracking performance."}
{"_id": "a885f2c4ebe3bc552fbad68cb133128f7d2710c6", "title": "Affinity Studies between Drugs and Clays as Adsorbent Material", "text": "Pharmaceuticals, with veterinary and human usage, have continuously been launched into the environment and their presence has been frequently detected in water bodies. The inefficacy of conventional water treatment processes for the removal of drugs, added to their potential adverse effects to human health and environment, suggest that new separation processes should be studied. Adsorption is highlighted as a promising method and the use of alternative adsorbents is encouraged due to the high costs of activated carbon. Different clay materials were evaluated in the present work for the removal of amoxicillin, caffeine, propranolol, and diclofenac sodium from aqueous solutions. The removal efficiency depended on the drug and adsorbent material used and varied between 23-98 % for amoxicillin, 21-89 % for caffeine, 29-100 % for propranolol, and 2-99 % for diclofenac sodium. The results showed that clays may be used successfully as alternative adsorbent material on the removal of selected emergent contaminants."}
{"_id": "944d7fd308122614a7a4203c7a809b5d333f1e19", "title": "Some Digital Communication Fundamentals for Physicists and Others", "text": "Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission."}
{"_id": "213cb58f013874f14d3eb39f80e28152b8f133cb", "title": "Some Fundamental Limits on Cognitive Radio", "text": "Cognitive radio refers to wireless architectures in which a communication system does not operate in a fixed assigned band, but rather searches and finds an appropriate band in which to operate. In this paper we explore, from first principles, the fundamental requirements for such system that tries to avoid interference to potential primary users of a band. We first show that in order to deliver real gains, cognitive radios must be able to detect undecodable signals. This is done by showing how to evaluate the tradeoff between secondary user power, available space for secondary operation, and interference protection for the primary receivers. We prove that in general, the performance of the optimal detector for detecting a weak unknown signal from a known zero-mean constellation is like that of the energy detector (radiometer). However we show that the presence of a known pilot signal can help greatly. We further motivate the need for pilot signals by showing that the radiometer is rendered useless by just moderate noise uncertainty. Finally, we show that quantization combined with noise uncertainty can make the detection of signals by any detector absolutely impossible below a certain SNR threshold. 1 Description of problem and some fundamental geographic tradeoffs The enormous success of the ISM bands has strengthened criticism of the FCC\u2019s traditional process which allocates bands to a single use, issues exclusive licenses to a single entity within a geographical area, and prohibits other devices from transmitting significant power within these bands. As a result, the FCC is considering revising its spectrum allocation policies [1], and is moving ahead with the process. Cognitive radios are one proposed idea to take advantage of a more open spectrum policy.[2, 3]. A cognitive radio would be designed to dynamically adapt its transmissions to find and utilize frequencies while minimizing interference. This is inspired by actual measurements showing that most of the allocated spectrum is vastly underutilized [4]. Cognitive radios promise great societal benefits by allowing new applications that can flexibly use what spectrum happens to be available. However, fundamental theoretical questions remain as to the exact requirements for engineering a practical cognitive radio system. For illustrative purposes, consider the world depicted in (1a). Each of the two antennas represents a bustling metropolis. Under current regulations, users in the middle of nowhere are prohibited from using these spectrum bands, even if there are no receivers in the vicinity that might be interfered with. However, cognitive radios may someday be allowed to share the spectrum, on the condition that they not interfere with primary users. These primary users may be providing more socially important services, or they might simply be legacy systems that are unable to change. Figure (1b) depicts the primary transmitter in one metropolis. The dotted circle represents the boundary of decodability for a single-user system. That is, in the absence of all interference, 1"}
{"_id": "925b2dfe2e6e13f5d42b4e56552ebf7425b2fe59", "title": "To Click or Not To Click: Automatic Selection of Beautiful Thumbnails from Videos", "text": "Thumbnails play such an important role in online videos. As the most representative snapshot, they capture the essence of a video and provide the first impression to the viewers; ultimately, a great thumbnail makes a video more attractive to click and watch. We present an automatic thumbnail selection system that exploits two important characteristics commonly associated with meaningful and attractive thumbnails: high relevance to video content and superior visual aesthetic quality. Our system selects attractive thumbnails by analyzing various visual quality and aesthetic metrics of video frames, and performs a clustering analysis to determine the relevance to video content, thus making the resulting thumbnails more representative of the video. On the task of predicting thumbnails chosen by professional video editors, we demonstrate the effectiveness of our system against six baseline methods, using a real-world dataset of 1,118 videos collected from Yahoo Screen. In addition, we study what makes a frame a good thumbnail by analyzing the statistical relationship between thumbnail frames and non-thumbnail frames in terms of various image quality features. Our study suggests that the selection of a good thumbnail is highly correlated with objective visual quality metrics, such as the frame texture and sharpness, implying the possibility of building an automatic thumbnail selection system based on visual aesthetics."}
{"_id": "382903a745ae23518c1e98899e00bafd2c4ac076", "title": "Improving homograph disambiguation with supervised machine learning", "text": "We describe a pre-existing rule-based homograph disambiguation system used for text-to-speech synthesis at Google, and compare it to a novel system which performs disambiguation using classifiers trained on a small amount of labeled data. An evaluation of these systems, using a new, freely available English data set, finds that hybrid systems (making use of both rules and machine learning) are significantly more accurate than either hand-written rules or machine learning alone. The evaluation also finds minimal performance degradation when the hybrid system is configured to run on limited-resource mobile devices rather than on production servers. The two best systems described here are used for homograph disambiguation on all US English text-to-speech traffic at Google."}
{"_id": "86a50bd2ee1c4beb7c33e08e9073d5fa8d804b9a", "title": "Online LCL filter compensation using embedded FRA", "text": "LCL filter control design is challenging because of presence of a resonant pole in the LCL filter transfer function. This paper presents a guideline for LCL filter design, which are often used in grid connected applications, with compensation based on complex poles and zeros. A Software Frequency Response Analyzer (SFRA) is integrated in the control software of the power converter. This enables periodical collection of frequency response data, which can be used to monitor movement of resonant poles due to aging, drifts and degradation of components. The information obtained from embedded SFRA can then be used to adjust the compensation. A feedback linearization method is illustrated to account for the grid voltage disturbances, which cannot be compensated by traditional PI controller. A solar micro inverter board is designed to prove the proposed scheme. Discussions about the embedded design challenges in integration of multiple power stage control, for a micro inverter using a single processor, are included and solutions presented."}
{"_id": "782a6a7ede0fa942de25b233bd1447062ea9254f", "title": "TCP / IP Stack Implementation for Communication over IP with AUTOSAR Ethernet Specification", "text": "The new generation vehicle will provide connectivity and telematics services for enabling vehicle communication. The communication over IP in TCP/IP means enables a connection between external node and invehicle nodes using IP protocols. In order to bring IP into the vehicle communication Ethernet technology is needed. Ethernet technology will provide high speed data transmission. A hardware independent Ethernet driver is necessary for providing Ethernet services. In vehicle diagnostics, the diagnostic tools and vehicles are separated by an internetwork. The main aim of using IP into the family of automotive communication protocol is that the development of new in-vehicle network has led to the need for communication between external equipment and onboard Electronic Control Units (ECUs) using many data link layer technologies. Diagnostic Over IP (DoIP) is a protocol mainly used for communication between off-board and on-board diagnostic system. This will improve the opportunities of interconnecting in-vehicle networks with internet for many new applications, including online, remote automotive diagnostics. Due to the limited resource availability in embedded devices, lightweight TCP/IP implementation is adopted. AutoSAR (Automotive Open System Architecture) is a standardized architecture introduced by a group of automobile manufacturers, suppliers and tool developers for developing and integrating software modules from different vendors."}
{"_id": "6b8f26b245887d7c6111e6778947a81257c69a6e", "title": "Mechanism of collapse of tall steel moment frame buildings under earthquake excitation", "text": "The mechanism of collapse of tall steel moment frame buildings is explored through three-dimensional nonlinear analyses of two 18-story steel moment frame buildings under earthquake excitation. Both fracture-susceptible as well as perfect-connection conditions are investigated. Classical energy balance analysis shows that only longperiod excitation imparts energy to tall buildings large enough to cause collapse. Under such long-period motion, the shear beam analogy alludes to the existence of a characteristic mechanism of collapse or a few preferred mechanisms of collapse for these buildings. Numerical evidence from parametric analyses of the buildings under a suite of idealized sawtooth-like ground motion time histories, with varying period (T ), amplitude (peak ground velocity, PGV ), and duration (number of cycles, N ), is presented to support this hypothesis. Damage localizes to form a \u201cquasi-shear\u201d band over a few stories. When the band is destabilized, sidesway collapse is initiated and gravity takes over. Only one to five collapse mechanisms occur out of a possible 153 mechanisms in either principal direction of the buildings considered. Where two or more preferred mechanisms do exist, they have significant story-overlap, typically separated by just one story. It is shown that a simple work-energy relation applied to all possible quasi-shear bands, combined with plastic analysis principles can systematically identify all the preferred collapse mechanisms."}
{"_id": "0236a9d8e9d3a648a5a945524f79b75dd5d01edb", "title": "Alterations in brain and immune function produced by mindfulness meditation.", "text": "OBJECTIVE\nThe underlying changes in biological processes that are associated with reported changes in mental and physical health in response to meditation have not been systematically explored. We performed a randomized, controlled study on the effects on brain and immune function of a well-known and widely used 8-week clinical training program in mindfulness meditation applied in a work environment with healthy employees.\n\n\nMETHODS\nWe measured brain electrical activity before and immediately after, and then 4 months after an 8-week training program in mindfulness meditation. Twenty-five subjects were tested in the meditation group. A wait-list control group (N = 16) was tested at the same points in time as the meditators. At the end of the 8-week period, subjects in both groups were vaccinated with influenza vaccine.\n\n\nRESULTS\nWe report for the first time significant increases in left-sided anterior activation, a pattern previously associated with positive affect, in the meditators compared with the nonmeditators. We also found significant increases in antibody titers to influenza vaccine among subjects in the meditation compared with those in the wait-list control group. Finally, the magnitude of increase in left-sided activation predicted the magnitude of antibody titer rise to the vaccine.\n\n\nCONCLUSIONS\nThese findings demonstrate that a short program in mindfulness meditation produces demonstrable effects on brain and immune function. These findings suggest that meditation may change brain and immune function in positive ways and underscore the need for additional research."}
{"_id": "1beeb25756ea352634e0c78ed653496a3474925e", "title": "Attentional and affective concomitants of meditation: a cross-sectional study.", "text": ""}
{"_id": "7dd98cd1dac8babb66c58c63c5aab3e4b98d0fe3", "title": "Attention training and attention state training", "text": "The ability to attend and to exercise cognitive control are vital aspects of human adaptability. Several studies indicate that attention training using computer based exercises can lead to improved attention in children and adults. Randomized control studies of exposure to nature, mindfulness and integrative body-mind training (IBMT) yield improved attention and self-regulation. Here, we ask how attention training and attention state training might be similar and different in their training methods, neural mechanisms and behavioral outcomes. Together these various methods lead to practical ways of improving attention and self-regulation."}
{"_id": "8de327d3ec7b2b3a9ec007cdf532360482bd8017", "title": "Mindfulness-based psychotherapies: a review of conceptual foundations, empirical evidence and practical considerations.", "text": "OBJECTIVE\nThis paper, composed by an interest group of clinicians and researchers based in Melbourne, presents some background to the practice of mindfulness-based therapies as relevant to the general professional reader. We address the empirical evidence for these therapies, the principles through which they might operate, some practical questions facing those wishing to commence practice in this area or to refer patients into mindfulness-based therapies, and some considerations relevant to the conduct and interpretation of research into the therapeutic application of mindfulness.\n\n\nMETHOD\nDatabases (e.g. PsycINFO, MEDLINE) were searched for literature on the impact of mindfulness interventions, and the psychological and biological mechanisms that underpin the effects of mindfulness practice. This paper also draws upon the clinical experience of the author group.\n\n\nRESULTS\nMindfulness practice and principles have their origins in many contemplative and philosophical traditions but individuals can effectively adopt the training and practice of mindfulness in the absence of such traditions or vocabulary. A recent surge of interest regarding mindfulness in therapeutic techniques can be attributed to the publication of some well-designed empirical evaluations of mindfulness-based cognitive therapy. Arising from this as well as a broader history of clinical integration of mindfulness and Western psychotherapies, a growing number of clinicians have interest and enthusiasm to learn the techniques of mindfulness and to integrate them into their therapeutic work. This review highlights the importance of accurate professional awareness and understanding of mindfulness and its therapeutic applications.\n\n\nCONCLUSIONS\nThe theoretical and empirical literatures on therapeutic applications of mindfulness are in states of significant growth and development. This group suggests, based on this review, that the combination of some well-developed conceptual models for the therapeutic action of mindfulness and a developing empirical base, justifies a degree of optimism that mindfulness-based approaches will become helpful strategies to offer in the care of patients with a wide range of mental and physical health problems."}
{"_id": "c3da1095536f2fb5dd69ad86e39c7bd8d665e100", "title": "The neurobiology of Meditation and its clinical effectiveness in psychiatric disorders", "text": "This paper reviews the evidence for changes of Meditation on body and brain physiology and for clinical effectiveness in disorders of psychiatry. The aim of Meditation is to reduce or eliminate irrelevant thought processes through training of internalised attention, thought to lead to physical and mental relaxation, stress reduction, psycho-emotional stability and enhanced concentration. Physiological evidence shows a reduction with Meditation of stress-related autonomic and endocrine measures, while neuroimaging studies demonstrate the functional up-regulation of brain regions of affect regulation and attention control. Clinical studies show some evidence for the effectiveness of Meditation in disorders of affect, anxiety and attention. The combined evidence from neurobiological and clinical studies seems promising. However, a more thorough understanding of the neurobiological mechanisms of action and clinical effectiveness of the different Meditative practices is needed before Meditative practices can be leveraged in the prevention and intervention of mental illness."}
{"_id": "dc12776d80c545ea320da23e536845b41bbf2d2b", "title": "Design and Analysis of Slotless Brushless DC Motor", "text": "This paper presents a design method for a small-sized slotless brushless dc motor. Distributed hexagonal windings are employed for achieving a high torque density and for ease of manufacture. Numerical approaches with an analytic model are used to analyze the magnetic and output characteristics of the motor. The proposed motor is manufactured, and the empirical results are compared with the results of the simulation."}
{"_id": "fac5a9a18157962cff38df6d4ae69f8a7da1cfa8", "title": "Face recognition from a single training image under arbitrary unknown lighting using spherical harmonics", "text": "In this paper, we propose two novel methods for face recognition under arbitrary unknown lighting by using spherical harmonics illumination representation, which require only one training image per subject and no 3D shape information. Our methods are based on the result which demonstrated that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace. We provide two methods to estimate the spherical harmonic basis images spanning this space from just one image. Our first method builds the statistical model based on a collection of 2D basis images. We demonstrate that, by using the learned statistics, we can estimate the spherical harmonic basis images from just one image taken under arbitrary illumination conditions if there is no pose variation. Compared to the first method, the second method builds the statistical models directly in 3D spaces by combining the spherical harmonic illumination representation and a 3D morphable model of human faces to recover basis images from images across both poses and illuminations. After estimating the basis images, we use the same recognition scheme for both methods: we recognize the face for which there exists a weighted combination of basis images that is the closest to the test face image. We provide a series of experiments that achieve high recognition rates, under a wide range of illumination conditions, including multiple sources of illumination. Our methods achieve comparable levels of accuracy with methods that have much more onerous training data requirements. Comparison of the two methods is also provided."}
{"_id": "5bc8b5d860b8a05ee7f894c5be3068fb59fa5f4a", "title": "Level Lines based Disocclusion", "text": "Object recognition, robotic vision, occluding noise removal or photograph design require the ability to perform disocclusion. We call disocclusion the recovery of hidden parts of objects in a digital image by interpolation from the vicinity of the occluded area. It is shown in this paper how disocclusion can be performed by means of level lines structure, which offers a reliable, complete and contrast-invariant representation of image, in contrast to edges. Level lines based disocclusion yields a solution that may have strong discontinuities, which is not possible with PDE-based interpolation. Moreover, the proposed method is fully compatible with Kanizsa\u2019s theory of \u201camodal completion\u201d."}
{"_id": "488fff23542ff397cdb1ced64db2c96320afc560", "title": "Weakly supervised localization of novel objects using appearance transfer", "text": "We consider the problem of localizing unseen objects in weakly labeled image collections. Given a set of images annotated at the image level, our goal is to localize the object in each image. The novelty of our proposed work is that, in addition to building object appearance model from the weakly labeled data, we also make use of existing detectors of some other object classes (which we call \u201cfamiliar objects\u201d). We propose a method for transferring the appearance models of the familiar objects to the unseen object. Our experimental results on both image and video datasets demonstrate the effectiveness of our approach."}
{"_id": "86e7ada570fdb001115a56176adeebcbc1afd722", "title": "Peer-to-Peer Lending \u2013 A ( Financial Stability ) Risk Perspective", "text": "The aim of this paper is to discuss P2P lending, a subcategory of crowdfunding, from a (financial stability) risk perspective. The discussion focuses on a number of dimensions such as the role of soft information, herding, platform default risk, liquidity risk, and the institutionalization of P2P markets. Overall, we conclude that P2P lending is more risky than traditional banking. However, it is important to recognize that a constant conclusion would be misleading. P2P platforms have evolved and changed their appearance markedly over time, which implies that although our final conclusion of increased riskiness through P2P markets remains valid over time, it is based on different arguments at different points in time. In addition, we discuss that acting on P2P online platforms satisfies most possible definitions of shadow banking and shows significant similarities with many observed aspects of shadow banking. We thus infer that P2P lending can be considered part of the shadow banking sector. JEL Classification: F34, G21, G23"}
{"_id": "d152dc9f111c39ab424a8049eb050ca67678656d", "title": "Basic aspects of food preservation by hurdle technology.", "text": "Hurdle technology is used in industrialized as well as in developing countries for the gentle but effective preservation of foods. Previously hurdle technology, i.e., a combination of preservation methods, was used empirically without much knowledge of the governing principles. Since about 20 years the intelligent application of hurdle technology became more prevalent, because the principles of major preservative factors for foods (e.g., temperature, pH, a(w), Eh, competitive flora), and their interactions, became better known. Recently, the influence of food preservation methods on the physiology and behaviour of microorganisms in foods, i.e. their homeostasis, metabolic exhaustion, stress reactions, are taken into account, and the novel concept of multitarget food preservation emerged. In the present contribution a brief introduction is given on the potential hurdles for foods, the hurdle effect, and the hurdle technology. However, emphasis is placed on the homeostasis, metabolic exhaustion, and stress reactions of microorganisms related to hurdle technology, and the prospects of the future goal of a multitarget preservation of foods."}
{"_id": "6831db33ea9db905b66b09f476c429f085ebb45f", "title": "A triaxial accelerometer and portable data processing unit for the assessment of daily physical activity", "text": "The present study describes the development of a triaxial accelerometer (TA) and a portable data processing unit for the assessment of daily physical activity. The TA is composed of three orthogonally mounted uniaxial piezoresistive accelerometers and can be used to register accelerations covering the amplitude and frequency ranges of human body acceleration. Interinstrument and test-retest experiments showed that the offset and the sensitivity of the TA were equal for each measurement direction and remained constant on two measurement days. Transverse sensitivity was significantly different for each measurement direction, but did not influence accelerometer output (<3% of the sensitivity along the main axis). The data unit enables the on-line processing of accelerometer output to a reliable estimator of physical activity over eight-day periods. Preliminary evaluation of the system in 13 male subjects during standardized activities in the laboratory demonstrated a significant relationship between accelerometer output and energy expenditure due to physical activity, the standard reference for physical activity (r=0.89). Shortcomings of the system are its low sensitivity to sedentary activities and the inability to register static exercise. The validity of the system for the assessment of normal daily physical activity and specific activities outside the laboratory should be studied in free-living subjects."}
{"_id": "d9b826bf3fe005554fe05b5d219e34ec160d9270", "title": "Hybridly Connected Structure for Hybrid Beamforming in mmWave Massive MIMO Systems", "text": "In this paper, we propose a hybridly connected structure for hybrid beamforming in millimeter-wave (mmWave) massive MIMO systems, where the antenna arrays at the transmitter and receiver consist of multiple sub-arrays, each of which connects to multiple radio frequency (RF) chains, and each RF chain connects to all the antennas corresponding to the sub-array. In this structure, through successive interference cancelation, we decompose the precoding matrix optimization problem into multiple precoding sub-matrix optimization problems. Then, near-optimal hybrid digital and analog precoders are designed through factorizing the precoding sub-matrix for each sub-array. Furthermore, we compare the performance of the proposed hybridly connected structure with the existing fully and partially connected structures in terms of spectral efficiency, the required number of phase shifters, and energy efficiency. Finally, simulation results are presented to demonstrate that the spectral efficiency of the hybridly connected structure is better than that of the partially connected structure and that its spectral efficiency can approach that of the fully connected structure with the increase in the number of RF chains. Moreover, the proposed algorithm for the hybridly connected structure is capable of achieving higher energy efficiency than existing algorithms for the fully and partially connected structures."}
{"_id": "7c6e7d3ee5904ad940426bbb3e473cc9fbafdbe0", "title": "Critical Clearing Time and Wind Power in Small Isolated Power Systems Considering Inertia Emulation", "text": "The stability and security of small and isolated power systems can be compromised when large amounts of wind power enter them. Wind power integration depends on such factors as power generation capacity, conventional generation technology or grid topology. Another issue that can be considered is critical clearing time (CCT). In this paper, wind power and CCT are studied in a small isolated power system. Two types of wind turbines are considered: a squirrel cage induction generator (SCIG) and a full converter. Moreover, the full converter wind turbine\u2019s inertia emulation capability is considered, and its impact on CCT is discussed. Voltage is taken into account because of its importance in power systems of this kind. The study focuses on the small, isolated Lanzarote-Fuerteventura power system, which is expected to be in operation by 2020."}
{"_id": "65aa21f154b32635eee0c35799b23ab86ec1c799", "title": "Analyzing and retrieving illicit drug-related posts from social media", "text": "Illicit drug use is a serious problem around the world. Social media has increasingly become an important tool for analyzing drug use patterns and monitoring emerging drug abuse trends. Accurately retrieving illicit drug-related social media posts is an important step in this research. Frequently, hashtags are used to identify and retrieve posts on a specific topic. However hashtags are highly ambiguous. Posts with the same hashtags are not always on the same topic. Moreover, hashtags are evolving, especially those related to illicit drugs. New street names are introduced constantly to avoid detection. In this paper, we employ topic modeling to disambiguate hashtags and track the changes of hashtags using semantic word embedding. Our preliminary evaluation shows the promise of these methods."}
{"_id": "53588e00d970eda07b1d9109306f4a72a0c5e455", "title": "The Effects of Hyperparameters on SGD Training of Neural Networks", "text": "The performance of neural network classifiers is determined by a number of hyperparameters, including learning rate, batch size, and depth. A number of attempts have been made to explore these parameters in the literature, and at times, to develop methods for optimizing them. However, exploration of parameter spaces has often been limited. In this note, I report the results of large scale experiments exploring these different parameters and their interactions. 1 Datasets and Libraries All experiments reported here were carried out using the Torch library [1] and CUDA (some of the experiments have been reproduced on a smaller scale with other libraries). The dataset for all the experiments is MNIST [3, 2]. Characters were deskewed prior to all experiments. Deskewing significantly reduces error rates in nearest neighbor classifiers. Skew corresponds to a simple one-parameter family of linear transformations in feature space and causes decision regions to become highly anisotropic. Without deskewing, differences in performance between different architectures might primarily reduce to their ability to \u201clearn deskewing\u201d. With deskewing, MNIST character classification become more of an instance of a typical classification problem. Prior results on classifying deskewed MNIST data both with neural networks and with other methods are shown in the table below. 2 Logistic vs Softmax Outputs Multi-Layer Perceptrons (MLPs) used for classification usually attempt to approximate posterior probabilities and use those as their discriminant function. Two common approaches to this are the use of least square regression with logistic output units trained with a least square error measure (\u201clogistic outputs\u201d) 1 ar X iv :1 50 8. 02 78 8v 1 [ cs .N E ] 1 2 A ug 2 01 5 Method Test Error Preprocessing Reference Reduced Set SVM deg 5 polynomial 1 deskewing LeCun et al. 1998 SVM deg 4 polynomial 1.1 deskewing LeCun et al. 1998 K-nearest-neighbors, L3 1.22 deskewing, noise removal, blurring, 2 pixel shift Kenneth Wilder, U. Chicago K-nearest-neighbors, L3 1.33 deskewing, noise removal, blurring, 1 pixel shift Kenneth Wilder, U. Chicago 2-layer NN, 300 HU 1.6 deskewing LeCun et al. 1998 Table 1: Other previously reported results on the MNIST database. and a softmax output layer (\u201csoftmax outputs\u201d). In the limit of infinite amounts of training data, both approaches converge to true posterior probability estimates. Softmax output layers have the property that they are guaranteed to produce a normalized posterior probability distribution across all classes, while least square regression with logistic output units generates independent probability estimates for each class membership without any guarantees that these probabilities sum up to one. Softmax is often preferred, although there is no obvious theoretical reason why it should yield better discriminant functions or lower classification error for finite training sets. In OCR and speech recognition, some practitioners have observed that logistic outputs yield better posterior probability estimates and better results when combined with probabilistic language models. In addition, when the sum of the posterior probability estimates derived from logistic outputs differs significantly from unity, that is a strong indication that the input lies outside the training set and should be rejected. 
Figure 1 shows a scatterplot of test vs training error for a large number of MLPs with one hidden layer at different learning rates, different number of hidden units, and different batch sizes. Such scatterplots show what error rates are achievable by the different architectures, hyperparameter choices, initializations, and order of sample presentations. The lowest points in the vertical direction indicate the lowest test set error achievable by the architecture in this set of experiments. The scatterplot shows that logistic outputs achieve test set error rates of about 1.0% vs 1.1% for softmax outputs. At the same time, logistic outputs never achieve zero percent training set error, while softmax outputs frequently do. In order to ascertain that the difference in test set error between the two"}
{"_id": "11469352326248de37b299d6f49b0b5b107a083b", "title": "Building an Ontology of Cyber Security", "text": "Situation awareness depends on a reliable perception of the environment and comprehension of its semantic structures. In this respect, cyberspace presents a unique challenge to the situation awareness of users and analysts, since it is a unique combination of human and machine elements, whose complex interactions occur in a global communication network. Accordingly, we outline the underpinnings of an ontology of secure operations in cyberspace, presenting the ontology framework and providing two modeling examples. We make the case for adopting a rigorous semantic model of cyber security to overcome the current limits of the state of the art. Keywords\u2014 cyber security, ontology, situation awareness,"}
{"_id": "3feb26730e17d76156dc22cbf4654ad17e2576dd", "title": "Review and performance comparison of VANET protocols: AODV, DSR, OLSR, DYMO, DSDV & ZRP", "text": "In this paper a fast review and comparative study of the adhoc routing protocols used in VANET will be presented. The research filed in Vehicular adhoc network (VANET) is developing very fast. An extensive variety of utilizations has been served under various situation (Highway, urban, and Cities). Many protocols have been adopted to serve different topology and scenarios, these protocols faced various challenges. VANETs provide communication between vehicles moving on the roads. The routing protocols in VANET are affected by the vehicle high speed which leads to frequently link breaks between the communicated vehicles, so the adhoc routing protocols are adapted with the VANET characteristics to deliver the data between vehicles in short time. The main goal of VANET is to assemble a data system among vehicles that are moving on the roads, which enables the vehicles to communicate with each other for the safety manners. In this paper, an endeavor has been made to compare six surely understood protocols AODV, DSR, OLSR, DSDV, ZRP and DSDV. Used in VANET from the (packet deliver ratio, end to end delay, throughput, routing algorithm load, received packets, routing packets, dropped packets, ratio of packet loss, Average Jitter) point of view."}
{"_id": "e24876e5b81381415eb39f2ee8655c1ea732b1ef", "title": "Human Activity Recognition from Sensor Data", "text": "Human Activity Recognition (HAR) refers to the problem of automatically identifying human activities by learning from data collected either from the individual or from the environment surrounding that individual. The form of the data might be inertial measurements, physiological signals from devices like wearable sensors or those such as image, audio and video data collected from environmental devices/sensors. HAR has been a well studied problem for its potential capability to mine human behavioral patterns and for monitoring human activity for security and health purposes."}
{"_id": "01dbd4ceafcd428bbb7b15db10d53952d0ad8ae7", "title": "Designing the whyline: a debugging interface for asking questions about program behavior", "text": "Debugging is still among the most common and costly of programming activities. One reason is that current debugging tools do not directly support the inquisitive nature of the activity. Interrogative Debugging is a new debugging paradigm in which programmers can ask why did and even why didn't questions directly about their program's runtime failures. The Whyline is a prototype Interrogative Debugging interface for the Alice programming environment that visualizes answers in terms of runtime events directly relevant to a programmer's question. Comparisons of identical debugging scenarios from user tests with and without the Whyline showed that the Whyline reduced debugging time by nearly a factor of 8, and helped programmers complete 40% more tasks."}
{"_id": "6e675106f43d519a1afbc87fef836e7a445b26ef", "title": "Age effects on wayfinding and route learning skills", "text": "While existing evidence suggests that older adults have compromised spatial navigation abilities, the effects of age on specific aspects of navigational skill are less well specified. The current study examined age effects on spatial navigation abilities considering the multiple cognitive and neural factors that contribute to successful navigation. Young and older adults completed wayfinding and route learning tasks in a virtual environment and aspects of environmental knowledge were assessed. Prefrontal, caudate and hippocampal volumes were obtained in a subset of older adults. Age differences were observed in both wayfinding and route learning. For wayfinding, there were age effects in recalling landmarks, and recognizing environmental scenes. In the route learning condition, older adults evidenced difficulty with the location, temporal order and directional information of landmarks. In both conditions, there was evidence of age-related differences in the acquisition of configural knowledge. Wayfinding was associated with the hippocampus whereas route learning was associated with the caudate nucleus. These results provide indications of specific aspects of navigational learning that may contribute to age-related declines and potential neural substrates."}
{"_id": "8f8686e0bd7db0fc2f9eddda2f24aea71f85d863", "title": "On some applications of finite-state automata theory to natural language processing", "text": "We describe new applications of the theory of automata to natural language processing: the representation of very large scale dictionaries and the indexation of natural language texts. They are based on new algorithms that we introduce and describe in detail. In particular, we give pseudocodes for the de-terminization of string to string transducers, the deterministic union of p-subsequential string to string transducers, and the indexation by automata. We report several experiments illustrating the applications."}
{"_id": "4820cba0c8428defee3edc89fe0a7105b517940c", "title": "Wechsler Adult Intelligence Scale-IV Dyads for Estimating Global Intelligence.", "text": "All possible two-subtest combinations of the core Wechsler Adult Intelligence Scale-IV (WAIS-IV) subtests were evaluated as possible viable short forms for estimating full-scale IQ (FSIQ). Validity of the dyads was evaluated relative to FSIQ in a large clinical sample (N = 482) referred for neuropsychological assessment. Sample validity measures included correlations, mean discrepancies, and levels of agreement between dyad estimates and FSIQ scores. In addition, reliability and validity coefficients were derived from WAIS-IV standardization data. The Coding + Information dyad had the strongest combination of reliability and validity data. However, several other dyads yielded comparable psychometric performance, albeit with some variability in their particular strengths. We also observed heterogeneity between validity coefficients from the clinical and standardization-based estimates for several dyads. Thus, readers are encouraged to also consider the individual psychometric attributes, their clinical or research goals, and client or sample characteristics when selecting among the dyadic short forms."}
{"_id": "2b5cea3171e911c444f8253c68312d93c1545572", "title": "Modelling strategic relationships for process reengineering", "text": "Existing models for describing a process such as a business process or a software development process tend to focus on the what or the how of the process For example a health insurance claim process would typically be described in terms of a number of steps for assessing and approving a claim In trying to improve or redesign a process however one also needs to have an understanding of the why for example why do physicians submit treatment plans to insurance companies before giving treatment and why do claims managers seek medical opinions when assessing treatment plans An understanding of the motivations and interests of process participants is often crucial to the successful redesign of processes This thesis proposes a modelling framework i pronounced i star consisting of two mod elling components The Strategic Dependency SD model describes a process in terms of in tentional dependency relationships among agents Agents depend on each other for goals to be achieved tasks to be performed and resources to be furnished Agents are intentional in that they have desires and wants and strategic in that they are concerned about opportunities and vulnerabilities The Strategic Rationale SR model describes the issues and concerns that agents have about existing processes and proposed alternatives and how they might be addressed in terms of a network of means ends relationships An agent s routines for carrying out a process can be analyzed for their ability workability viability and believability Means ends rules are used to suggest methods for addressing issues related issues to be raised and assumptions to be challenged The models are represented in the conceptual modelling language Telos The modelling concepts are axiomatically characterized The utility of the framework is illustrated in each of four application areas requirements engineering business process reengineering organizational impacts analysis and software pro cess modelling Advantages of i over existing modelling techniques in each of these areas are described"}
{"_id": "531673beb6db0a1e439da5920769be52af7be228", "title": "Intelligent accident detection classification using mobile phones", "text": "Road accidents are one of the leading causes of mortality. While most accidents merely affect the exterior of the cars of the drivers involved, some of them have led to serious and fatal injuries. It is imperative that the Emergency Medical Services (EMS) are given as much information about the crash site as possible before their arrival at the scene. In this paper, a mobile phone application is developed that, when placed inside a car, intelligently classifies the type of accident it is involved in and notifies the EMS team of this classification along with the car's GPS location. The classification mechanism is built through a collection of data sets from a simulation of three types of collisions, which creates a knowledge base for an artificial intelligence-based classifier software. The experimental setup for data collection and the functionality of the mobile phone application called \u2018Crash Detect\u2019 are explored."}
{"_id": "655207daf6b6594a1dd13b00f9334fb78972d348", "title": "Segmentation of edges into lines and arcs", "text": ".4 long standing problem in computer vision is the extraction of meaningfulfeatures from images. This paper describes a method of segmenting curves in images into a combination of circular arcs and straight lines. This uses a recursive algorithm that first analyses lists of connected edge points and finds a polygonal description, and then analyses this description fitting arcs to groups of connected lines. The result is a description of image edges consisting of circular arcs and lines. The algorithm uses no thresholding. Instead the best option is chosen at each decision stage."}
{"_id": "fe4886dcc8132fc19f1bb25cde4fee08292915ce", "title": "Ethical frontiers of ICT and older users: cultural, pragmatic and ethical issues", "text": "The reality of an ageing Europe has called attention to the importance of e-inclusion for a growing population of senior citizens. For some, this may mean closing the digital divide by providing access and support to technologies that increase citizen participation; for others, e-inclusion means access to assistive technologies to facilitate and extend their living independently. These initiatives address a social need and provide economic opportunities for European industry. While undoubtedly desirable, and supported by European Union initiatives, several cultural assumptions or issues related to the initiatives could benefit from fuller examination, as could their practical and ethical implications. This paper begins to consider these theoretical and practical concerns. The first part of the paper examines cultural issues and assumptions relevant to adopting e-technologies, and the ethical principles applied to them. These include (1) the persistence of ageism, even in e-inclusion; (2) different approaches to, and implications of independent living; and (3) the values associated with different ethical principles, given their implications for accountability to older users. The paper then discusses practical issues and ethical concerns that have been raised by the use of smart home and monitoring technologies with older persons. Understanding these assumptions and their implications will allow for more informed choices in promoting ethical application of e-solutions for older persons."}
{"_id": "283dedcdfa3e065146cb8649a7dd8a9ac6ab581d", "title": "Instance Weighting for Domain Adaptation in NLP", "text": "Domain adaptation is an important problem in natural language processing (NLP) due to the lack of labeled data in novel domains. In this paper, we study the domain adaptation problem from the instance weighting perspective. We formally analyze and characterize the domain adaptation problem from a distributional view, and show that there are two distinct needs for adaptation, corresponding to the different distributions of instances and classification functions in the source and the target domains. We then propose a general instance weighting framework for domain adaptation. Our empirical results on three NLP tasks show that incorporating and exploiting more information from the target domain through instance weighting is effective."}
{"_id": "477a1ef18cf51686786c9ea223ff26a2dc364f91", "title": "End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF", "text": "State-of-the-art sequence labeling systems traditionally require large amounts of taskspecific knowledge in the form of handcrafted features and data pre-processing. In this paper, we introduce a novel neutral network architecture that benefits from both wordand character-level representations automatically, by using combination of bidirectional LSTM, CNN and CRF. Our system is truly end-to-end, requiring no feature engineering or data preprocessing, thus making it applicable to a wide range of sequence labeling tasks. We evaluate our system on two data sets for two sequence labeling tasks \u2014 Penn Treebank WSJ corpus for part-of-speech (POS) tagging and CoNLL 2003 corpus for named entity recognition (NER). We obtain state-of-the-art performance on both datasets \u2014 97.55% accuracy for POS tagging and 91.21% F1 for NER."}
{"_id": "88a87a17072d7670522bff791544ee8740edb705", "title": "Improving Named Entity Recognition for Morphologically Rich Languages Using Word Embeddings", "text": "In this paper, we addressed the Named Entity Recognition (NER) problem for morphologically rich languages by employing a semi-supervised learning approach based on neural networks. We adopted a fast unsupervised method for learning continuous vector representations of words, and used these representations along with language independent features to develop a NER system. We evaluated our system for the highly inflectional Turkish and Czech languages. We improved the state-of-the-art F-score obtained for Turkish without using gazetteers by 2.26% and for Czech by 1.53%. Unlike the previous state-of-the-art systems developed for these languages, our system does not make use of any language dependent features. Therefore, we believe it can easily be applied to other morphologically rich languages."}
{"_id": "4e3e391a4c8dd7f5db9372d711f28a1219fab7c7", "title": "Bayesian Landmark Learning for Mobile Robot Localization", "text": "To operate successfully in indoor environments, mobile robots must be able to localize themselves. Most current localization algorithms lack flexibility, autonomy, and often optimality, since they rely on a human to determine what aspects of the sensor data to use in localization (e.g., what landmarks to use). This paper describes a learning algorithm, called BaLL, that enables mobile robots to learn what features/landmarks are best suited for localization, and also to train artificial neural networks for extracting them from the sensor data. A rigorous Bayesian analysis of probabilistic localization is presented, which produces a rational argument for evaluating features, for selecting them optimally, and for training the networks that approximate the optimal solution. In a systematic experimental study, BaLL outperforms two other recent approaches to mobile robot localization."}
{"_id": "aef0e4f1612cba103ad8bf056a637404368abec1", "title": "Knowledge Engineering and Knowledge Management", "text": "Recently, experts and practitioners in language resources have started recognizing the benefits of the linked data (LD) paradigm for the representation and exploitation of linguistic data on the Web. The adoption of the LD principles is leading to an emerging ecosystem of multilingual open resources that conform to the Linguistic Linked Open Data Cloud, in which datasets of linguistic data are interconnected and represented following common vocabularies, which facilitates linguistic information discovery, integration and access. In order to contribute to this initiative, this paper summarizes several key aspects of the representation of linguistic information as linked data from a practical perspective. The main goal of this document is to provide the basic ideas and tools for migrating language resources (lexicons, corpora, etc.) as LD on the Web and to develop some useful NLP tasks with them (e.g., word sense disambiguation). Such material was the basis of a tutorial imparted at the EKAW\u201914 conference, which is also reported in the paper."}
{"_id": "f5452733121e8964945d76a55e34df339269b57b", "title": "Hydraulic erosion", "text": "This paper presents a generalized solution to modelling hydraulic erosion using ideas from fluid mechanics. The model is based on the Navier\u2013Stokes equations, which provide the dynamics of velocity and pressure. These equations form the basis for the model to balance erosion and deposition that determine changes in the layers between water and erosion material. The eroded material is captured and relocated by water according to a material transport equation. The resulting model is fully 3D and is able to simulate a variety of phenomena including river meanders, low hill sediment wash, natural water springs and receding waterfalls. The simulations show the terrain morphogenesis and can be used for animations as well as for static scene generation. Copyright# 2006 JohnWiley & Sons, Ltd."}
{"_id": "60ab2ad61ccc10015081c8fa9bd1cb4744713030", "title": "Aspect-level opinion mining of online customer reviews", "text": "This paper focuses on how to improve aspect-level opinion mining for online customer reviews. We first propose a novel generative topic model, the Joint Aspect/Sentiment (JAS) model, to jointly extract aspects and aspect-dependent sentiment lexicons from online customer reviews. An aspect-dependent sentiment lexicon refers to the aspect-specific opinion words along with their aspect-aware sentiment polarities with respect to a specific aspect. We then apply the extracted aspect-dependent sentiment lexicons to a series of aspect-level opinion mining tasks, including implicit aspect identification, aspect-based extractive opinion summarization, and aspect-level sentiment classification. Experimental results demonstrate the effectiveness of the JAS model in learning aspect- dependent sentiment lexicons and the practical values of the extracted lexicons when applied to these practical tasks."}
{"_id": "71a2cac80f612a7d550cbd7dbb794d47a1fb7d41", "title": "Cluster validity functions for categorical data: a solution-space perspective", "text": "For categorical data, there are three widely-used internal validity functions: the $$k$$ k -modes objective function, the category utility function and the information entropy function, which are defined based on within-cluster information only. Many clustering algorithms have been developed to use them as objective functions and find their optimal solutions. In this paper, we study the generalization, effectiveness and normalization of the three validity functions from a solution-space perspective. First, we present a generalized validity function for categorical data. Based on it, we analyze the generality and difference of the three validity functions in the solution space. Furthermore, we address the problem whether the between-cluster information is ignored when these validity functions are used to evaluate clustering results. To the end, we analyze the upper and lower bounds of the three validity functions for a given data set, which can help us estimate the clustering difficulty on a data set and compare the performance of a clustering algorithm on different data sets."}
{"_id": "ce0ab4df47bc22e885a67b4d31ca85d5ca74f3e6", "title": "Soft labeling by Distilling Anatomical knowledge for Improved MS Lesion Segmentation", "text": "This paper explores the use of a soft ground-truth mask (\u201csoft mask\u201d) to train a Fully Convolutional Neural Network (FCNN) for segmentation of Multiple Sclerosis (MS) lesions. Detection and segmentation of MS lesions is a complex task largely due to the extreme unbalanced data, with very small number of lesion pixels that can be used for training. Utilizing the anatomical knowledge that the lesion surrounding pixels may also include some lesion level information, we suggest to increase the data set of the lesion class with neighboring pixel data with a reduced confidence weight. A soft mask is constructed by morphological dilation of the binary segmentation mask provided by a given expert, where expert-marked voxels receive label 1 and voxels of the dilated region are assigned a soft label. In the methodology proposed, the FCNN is trained using the soft mask. On the ISBI 2015 challenge dataset, this is shown to provide a better precision-recall tradeoff and to achieve a higher average Dice similarity coefficient. We also show that by using this soft mask scheme we can improve the network segmentation performance when compared to a second independent expert."}
{"_id": "f6cd444c939c0b5c08b07bb35fd694a45e07b97e", "title": "Low kickback noise techniques for CMOS latched comparators", "text": "The latched comparator is utilized in virtually all analog-to-digital converter architectures. It uses a positive feedback mechanism to regenerate the analog input signal into a full-scale digital level. Such high voltage variations in the regeneration nodes are coupled to the input voltage - kickback noise. This paper reviews existing solutions to minimize the kickback noise and proposes two new ones. HSPICE simulations verify the effectiveness of our techniques."}
{"_id": "a4e63ac464c426b6159a399706ec17a00239278a", "title": "Characterization and Modeling of Nonfilamentary Ta/TaOx/TiO2/Ti Analog Synaptic Device", "text": "A two-terminal analog synaptic device that precisely emulates biological synaptic features is expected to be a critical component for future hardware-based neuromorphic computing. Typical synaptic devices based on filamentary resistive switching face severe limitations on the implementation of concurrent inhibitory and excitatory synapses with low conductance and state fluctuation. For overcoming these limitations, we propose a Ta/TaOx/TiO2/Ti device with superior analog synaptic features. A physical simulation based on the homogeneous (nonfilamentary) barrier modulation induced by oxygen ion migration accurately reproduces various DC and AC evolutions of synaptic states, including the spike-timing-dependent plasticity and paired-pulse facilitation. Furthermore, a physics-based compact model for facilitating circuit-level design is proposed on the basis of the general definition of memristor devices. This comprehensive experimental and theoretical study of the promising electronic synapse can facilitate realizing large-scale neuromorphic systems."}
{"_id": "ff6b73217db2b24732c136c5c501e7d200f11001", "title": "Customer Lifetime Value ( CLV ) Measurement Based on RFM Model", "text": "Nowadays companies increasingly derive revenue from the creation and sustenance of long-term relationships with their customers. In such an environment, marketing serves the purpose of maximizing customer lifetime value (CLV) and customer equity, which is the sum of the lifetime values of the company\u2019s customers. A frequently-encountered difficulty for companies wishing to measure customer profitability is that management accounting and reporting systems have tended to reflect product profitability rather than customer profitability. But in spite of these difficulties, Companies looking for methods to know how calculate their customers's CLV. In this paper, we used K-Mean clustering approach to determine customers's CLV and segment them based on recency, frequency and monetary (RFM) measures. We also used Discriminant analysis to approve clustering results. Data required applying this method gathered from one branch of an Iranian private bank which is established newly. Finally, in terms of this segmentation, we proposed customer retention strategies for treating with the bank customers."}
{"_id": "6c85e50c86b6857d1ef5099de4cffb7ad8cbc8ad", "title": "Getting users to pay attention to anti-phishing education: evaluation of retention and transfer", "text": "Educational materials designed to teach users not to fall for phishing attacks are widely available but are often ignored by users. In this paper, we extend an embedded training methodology using learning science principles in which phishing education is made part of a primary task for users. The goal is to motivate users to pay attention to the training materials. In embedded training, users are sent simulated phishing attacks and trained after they fall for the attacks. Prior studies tested users immediately after training and demonstrated that embedded training improved users' ability to identify phishing emails and websites. In the present study, we tested users to determine how well they retained knowledge gained through embedded training and how well they transferred this knowledge to identify other types of phishing emails. We also compared the effectiveness of the same training materials delivered via embedded training and delivered as regular email messages. In our experiments, we found that: (a) users learn more effectively when the training materials are presented after users fall for the attack (embedded) than when the same training materials are sent by email (non-embedded); (b) users retain and transfer more knowledge after embedded training than after non-embedded training; and (c) users with higher Cognitive Reflection Test (CRT) scores are more likely than users with lower CRT scores to click on the links in the phishing emails from companies with which they have no account."}
{"_id": "d25ceed5ff16bff36e0c91549e51501e6ca4ddb0", "title": "CO2 bio-mitigation using microalgae", "text": "Microalgae are a group of unicellular or simple multicellular photosynthetic microorganisms that can fix CO2 efficiently from different sources, including the atmosphere, industrial exhaust gases, and soluble carbonate salts. Combination of CO2 fixation, biofuel production, and wastewater treatment may provide a very promising alternative to current CO2 mitigation strategies."}
{"_id": "b29b52d355ed21bf0cac4c665384831fba429657", "title": "A Low-Cost High-Performance Digital Radar Test Bed", "text": "This paper describes the design of a dual-channel S-band digital radar test bed. The test bed combines stretch processing with a novel and cost-effective hardware architecture that enables it to achieve an in-band dynamic range of 60 dB over 600 MHz of instantaneous bandwidth. The dual digital receiver channels allow for adaptive digital beamforming which can be used to mitigate a directional source of interference. Experimental test and verification results are presented to demonstrate system performance."}
{"_id": "4876756ac236f941dd9572b41e20c21c2e953777", "title": "CS principles goes to middle school: learning how to teach \"Big Data\"", "text": "Spurred by evidence that students' future studies are highly influenced during middle school, recent efforts have seen a growing emphasis on introducing computer science to middle school learners. This paper reports on the in-progress development of a new middle school curricular module for Big Data, situated as part of a new CS Principles-based middle school curriculum. Big Data is of widespread societal importance and holds increasing implications for the computer science workforce. It also has appeal as a focus for middle school computer science because of its rich interplay with other important computer science principles. This paper examines three key aspects of a Big Data unit for middle school: its alignment with emerging curricular standards; the perspectives of middle school classroom teachers in mathematics, science, and language arts; and student feedback as explored during a middle school pilot study with a small subset of the planned curriculum. The results indicate that a Big Data unit holds great promise as part of a middle school computer science curriculum."}
{"_id": "3811d3832b0fef86e6e45d5ed58f1bec20abf39c", "title": "A low-power capacitor switching scheme with low common-mode voltage variation for successive approximation ADC", "text": "In this paper, a new low-energy switching technique with low common-mode voltage variation is proposed for successive-approximation analog-to-digital converters (SA-ADCs). In the proposed scheme, not only the switching energy consumed within the first three comparisons is less than zero, but also other comparisons are made with the low-power monotonic method. Therefore, the switching energy of the capacitive array, including the consumed energy during the sampling phase, is reduced by 90.68% compared with the conventional counterpart. Moreover, since the variation of the input common-mode voltage of the employed comparator is only 0.125Vref, where Vref is the reference voltage of the ADC, the required comparator\u2019s performance can be much more relaxed leading to more power saving. Post-layout simulation results of a 10-bit 1-MS/s SA-ADC in a 0.18-\u03bcm CMOS technology show a signal-to-noise-and-distortion ratio (SNDR) of 61 dB, a spurious-free dynamic range (SFDR) of 79.8 dB, and an effective number of 9.84 bits. The ADC consumes 35.3 \u03bcW with a 1.8-V supply and achieves a Figure-of-Merit (FoM) of 38.5 fJ/conversion-step."}
{"_id": "41c07725a4e7d715162f10d974ac086250922652", "title": "A comprehensive and conservative approach for the restoration of abrasion and erosion. Part I: concepts and clinical rationale for early intervention using adhesive techniques.", "text": "Tooth wear represents a frequent pathology with multifactorial origins. Behavioral changes, unbalanced diet, various medical conditions and medications inducing acid regurgitation or influencing saliva composition and flow rate, trigger tooth erosion. Awake and sleep bruxism, which are widespread nowadays with functional disorders, induce attrition. It has become increasingly important to diagnose early signs of tooth wear so that proper preventive, and if needed, restorative measures are taken. Such disorders have biological, functional, and also esthetic consequences. Following a comprehensive clinical evaluation, treatment objectives, such as a proper occlusal and anatomical scheme as well as a pleasing smile line, are usually set on models with an anterior teeth full-mouth waxup, depending on the severity of tissue loss. Based on the new vertical dimension of occlusion (VDO), combinations of direct and indirect restorations can then help to reestablish anatomy and function. The use of adhesive techniques and resin composites has demonstrated its potential, in particular for the treatment of moderate tooth wear. Part I of this article reviews recent knowledge and clinical concepts dealing with the various forms of early restorative interventions and their potential to restrict ongoing tissue destruction."}
{"_id": "dccf47646bea3dae2b062c77d0770e1713910526", "title": "Kernel Clustering: Density Biases and Solutions", "text": "Kernel methods are popular in clustering due to their generality and discriminating power. However, we show that many kernel clustering criteria have density biases theoretically explaining some practically significant artifacts empirically observed in the past. For example, we provide conditions and formally prove the density mode isolation bias in kernel K-means for a common class of kernels. We call it Breiman\u2019s bias due to its similarity to the histogram mode isolation previously discovered by Breiman in decision tree learning with Gini impurity. We also extend our analysis to other popular kernel clustering methods, e.g.,\u00a0average/normalized cut or dominant sets, where density biases can take different forms. For example, splitting isolated points by cut-based criteria is essentially the sparsest subset bias, which is the opposite of the density mode bias. Our findings suggest that a principled solution for density biases in kernel clustering should directly address data inhomogeneity. We show that density equalization can be implicitly achieved using either locally adaptive weights or locally adaptive kernels. Moreover, density equalization makes many popular kernel clustering objectives equivalent. Our synthetic and real data experiments illustrate density biases and proposed solutions. We anticipate that theoretical understanding of kernel clustering limitations and their principled solutions will be important for a broad spectrum of data analysis applications across the disciplines."}
{"_id": "e92ebdf49fedbba5c231a2e38c2cd332caa9a664", "title": "A Framework for Validating Models of Evasion Attacks on Machine Learning, with Application to PDF Malware Detection", "text": "Machine learning (ML) techniques are increasingly common in security applications, such as malware and intrusion detection. However, there is increasing evidence that machine learning models are susceptible to evasion attacks, in which an adversary makes small changes to the input (such as malware) in order to cause erroneous predictions (for example, to avoid being detected). Evasion attacks on ML fall into two broad categories: 1) those which generate actual malicious instances and demonstrate both evasion of ML and efficacy of attack (we call these problem space attacks), and 2) attacks which directly manipulate features used by ML, abstracting efficacy of attack into a mathematical cost function (we call these feature space attacks). Central to our inquiry is the following fundamental question: are feature space models of attacks useful proxies for real attacks? In the process of answering this question, we make two major contributions: 1) a general methodology for evaluating validity of mathematical models of ML evasion attacks, and 2) an application of this methodology as a systematic hypothesis-driven evaluation of feature space evasion attacks on ML-based PDF malware detectors. Specific to our case study, we find that a) feature space evasion models are in general not adequate in representing real attacks, b) such models can be significantly improved by identifying conserved features (features that are invariant in real attacks) whenever these exist, and c) ML hardened using the improved feature space models remains robust to alternative attacks, in contrast to ML hardened using a very powerful class of problem space attacks, which does not."}
{"_id": "55871b1fd0d2546647e41ba1b832f98b9efeded4", "title": "Vitamin D: An overview of vitamin D status and intake in Europe", "text": "In recent years, there have been reports suggesting a high prevalence of low vitamin D intakes and vitamin D deficiency or inadequate vitamin D status in Europe. Coupled with growing concern about the health risks associated with low vitamin D status, this has resulted in increased interest in the topic of vitamin D from healthcare professionals, the media and the public. Adequate vitamin D status has a key role in skeletal health. Prevention of the well-described vitamin D deficiency disorders of rickets and osteomalacia are clearly important, but there may also be an implication of low vitamin D status in bone loss, muscle weakness and falls and fragility fractures in older people, and these are highly significant public health issues in terms of morbidity, quality of life and costs to health services in Europe. Although there is no agreement on optimal plasma levels of vitamin D, it is apparent that blood 25-hydroxyvitamin D [25(OH)D] levels are often below recommended ranges for the general population and are particularly low in some subgroups of the population, such as those in institutions or who are housebound and non-Western immigrants. Reported estimates of vitamin D status within different European countries show large variation. However, comparison of studies across Europe is limited by their use of different methodologies. The prevalence of vitamin D deficiency [often defined as plasma 25(OH)D <25\u2009nmol/l] may be more common in populations with a higher proportion of at-risk groups, and/or that have low consumption of foods rich in vitamin D (naturally rich or fortified) and low use of vitamin D supplements. The definition of an adequate or optimal vitamin D status is key in determining recommendations for a vitamin D intake that will enable satisfactory status to be maintained all year round, including the winter months. In most European countries, there seems to be a shortfall in achieving current vitamin D recommendations. An exception is Finland, where dietary survey data indicate that recent national policies that include fortification and supplementation, coupled with a high habitual intake of oil-rich fish, have resulted in an increase in vitamin D intakes, but this may not be a suitable strategy for all European populations. The ongoing standardisation of measurements in vitamin D research will facilitate a stronger evidence base on which policies can be determined. These policies may include promotion of dietary recommendations, food fortification, vitamin D supplementation and judicious sun exposure, but should take into account national, cultural and dietary habits. For European nations with supplementation policies, it is important that relevant parties ensure satisfactory uptake of these particularly in the most vulnerable groups of the population."}
{"_id": "5847433be1247a48852730bd183f88edd097573e", "title": "A Survey of Incentive Mechanisms for Participatory Sensing", "text": "Participatory sensing is now becoming more popular and has shown its great potential in various applications. It was originally proposed to recruit ordinary citizens to collect and share massive amounts of sensory data using their portable smart devices. By attracting participants and paying rewards as a return, incentive mechanisms play an important role to guarantee a stable scale of participants and to improve the accuracy/coverage/timeliness of the sensing results. Along this direction, a considerable amount of research activities have been conducted recently, ranging from experimental studies to theoretical solutions and practical applications, aiming at providing more comprehensive incentive procedures and/or protecting benefits of different system stakeholders. To this end, this paper surveys the literature over the period of 2004-2014 from the state of the art of theoretical frameworks, applications and system implementations, and experimental studies of the incentive strategies used in participatory sensing by providing up-to-date research in the literature. We also point out future directions of incentive strategies used in participatory sensing."}
{"_id": "8173411609b20d2ff2380e53573069e8aaac1b95", "title": "Transferring sentiment knowledge between words and tweets", "text": "Message-level and word-level polarity classification are two popular tasks in Twitter sentiment analysis. They have been commonly addressed by training supervised models from labelled data. The main limitation of these models is the high cost of data annotation. Transferring existing labels from a related problem domain is one possible solution for this problem. In this paper, we study how to transfer sentiment labels from the word domain to the tweet domain and vice versa by making their corresponding instances compatible. We model instances of these two domains as the aggregation of instances from the other (i.e., tweets are treated as collections of the words they contain and words are treated as collections of the tweets in which they occur) and perform aggregation by averaging the corresponding constituents. We study two different setups for averaging tweet and word vectors: 1) representing tweets by standard NLP features such as unigrams and part-of-speech tags and words by averaging the vectors of the tweets in which they occur, and 2) representing words using skip-gram embeddings and tweets as the average embedding vector of their words. A consequence of our approach is that instances of both domains reside in the same feature space. Thus, a sentiment classifier trained on labelled data from one domain can be used to classify instances from the other one. We evaluate this approach in two transfer learning tasks: 1) sentiment classification of tweets by applying a word-level sentiment classifier, and 2) induction of a polarity lexicon by applying a tweet-level polarity classifier. Our results show that the proposed model can successfully classify words and tweets after transfer."}
{"_id": "428940fa3f81f8d415c26661de797a77d8af4d43", "title": "The Fog computing paradigm: Scenarios and security issues", "text": "Fog Computing is a paradigm that extends Cloud computing and services to the edge of the network. Similar to Cloud, Fog provides data, compute, storage, and application services to end-users. In this article, we elaborate the motivation and advantages of Fog computing, and analyse its applications in a series of real scenarios, such as Smart Grid, smart traffic lights in vehicular networks and software defined networks. We discuss the state-of-the-art of Fog computing and similar work under the same umbrella. Security and privacy issues are further disclosed according to current Fog computing paradigm. As an example, we study a typical attack, man-in-the-middle attack, for the discussion of security in Fog computing. We investigate the stealthy features of this attack by examining its CPU and memory consumption on Fog device."}
{"_id": "9551d21143dcc267268962de5d8d0bae4157a092", "title": "IoT Considerations, Requirements, and Architectures for Smart Buildings\u2014Energy Optimization and Next-Generation Building Management Systems", "text": "The Internet of Things (IoT) is entering the daily operation of many industries; applications include but are not limited to smart cities, smart grids, smart homes, physical security, e-health, asset management, and logistics. For example, the concept of smart cities is emerging in multiple continents, where enhanced street lighting controls, infrastructure monitoring, public safety and surveillance, physical security, gunshot detection, meter reading, and transportation analysis and optimization systems are being deployed on a city-wide scale. A related and cost-effective user-level IoT application is the support of IoT-enabled smart buildings. Commercial space has substantial requirements in terms of comfort, usability, security, and energy management. IoT-based systems can support these requirements in an organic manner. In particular, power over Ethernet, as part of an IoT-based solution, offers disruptive opportunities in revolutionizing the in-building connectivity of a large swath of devices. However, a number of deployment-limiting issues currently impact the scope of IoT utilization, including lack of comprehensive end-to-end standards, fragmented cybersecurity solutions, and a relative dearth of fully-developed vertical applications. This paper reviews some of the technical opportunities offered and the technical challenges faced by the IoT in the smart building arena."}
{"_id": "0b458ce6c0d6d7fd20499e5b64a46132d7c380f2", "title": "Cyber security in the Smart Grid: Survey and challenges", "text": "The Smart Grid, generally referred to as the next-generation power system, is considered as a revolutionary and evolutionary regime of existing power grids. More importantly, with the integration of advanced computing and communication technologies, the Smart Grid is expected to greatly enhance efficiency and reliability of future power systems with renewable energy resources, as well as distributed intelligence and demand response. Along with the silent features of the Smart Grid, cyber security emerges to be a critical issue because millions of electronic devices are inter-connected via communication networks throughout critical power facilities, which has an immediate impact on reliability of such a widespread infrastructure. In this paper, we present a comprehensive survey of cyber security issues for the Smart Grid. Specifically, we focus on reviewing and discussing security requirements, network vulnerabilities, attack countermeasures, secure communication protocols and architectures in the Smart Grid. We aim to provide a deep understanding of security vulnerabilities and solutions in the Smart Grid and shed light on future research directions for Smart Grid security. 2013 Elsevier B.V. All rights reserved."}
{"_id": "2126d7558c128dbbfc73f6b87d915ae1c4bfb92a", "title": "EPPA: An Efficient and Privacy-Preserving Aggregation Scheme for Secure Smart Grid Communications", "text": "The concept of smart grid has emerged as a convergence of traditional power system engineering and information and communication technology. It is vital to the success of next generation of power grid, which is expected to be featuring reliable, efficient, flexible, clean, friendly, and secure characteristics. In this paper, we propose an efficient and privacy-preserving aggregation scheme, named EPPA, for smart grid communications. EPPA uses a superincreasing sequence to structure multidimensional data and encrypt the structured data by the homomorphic Paillier cryptosystem technique. For data communications from user to smart grid operation center, data aggregation is performed directly on ciphertext at local gateways without decryption, and the aggregation result of the original data can be obtained at the operation center. EPPA also adopts the batch verification technique to reduce authentication cost. Through extensive analysis, we demonstrate that EPPA resists various security threats and preserve user privacy, and has significantly less computation and communication overhead than existing competing approaches."}
{"_id": "402984edc885acc62c8703ed7f746fb44a8e4822", "title": "MigCEP: operator migration for mobility driven distributed complex event processing", "text": "A recent trend in communication networks --- sometimes referred to as fog computing --- offers to execute computational tasks close to the access points of the networks. This enables real-time applications, like mobile Complex Event Processing (CEP), to significantly reduce end-to-end latencies and bandwidth usage. Most work studying the placement of operators in such an environment completely disregards the migration costs. However, the mobility of users requires frequent migration of operators, together with possibly large state information, to meet latency restrictions and save bandwidth in the infrastructure.\n This paper presents a placement and migration method for providers of infrastructures that incorporate cloud and fog resources. It ensures application-defined end-to-end latency restrictions and reduces the network utilization by planning the migration ahead of time. Furthermore, we present how the application knowledge of the CEP system can be used to improve current live migration techniques for Virtual Machines to reduce the required bandwidth during the migration. Our evaluations show that we safe up to 49% of the network utilization with perfect knowledge about a users mobility pattern and up to 27% of the network utilization when considering the uncertainty of those patterns."}
{"_id": "1c2c978faad6f27e05cf9e3fca509132ace8fae4", "title": "Beyond PASCAL: A benchmark for 3D object detection in the wild", "text": "3D object detection and pose estimation methods have become popular in recent years since they can handle ambiguities in 2D images and also provide a richer description for objects compared to 2D object detectors. However, most of the datasets for 3D recognition are limited to a small amount of images per category or are captured in controlled environments. In this paper, we contribute PASCAL3D+ dataset, which is a novel and challenging dataset for 3D object detection and pose estimation. PASCAL3D+ augments 12 rigid categories of the PASCAL VOC 2012 [4] with 3D annotations. Furthermore, more images are added for each category from ImageNet [3]. PASCAL3D+ images exhibit much more variability compared to the existing 3D datasets, and on average there are more than 3,000 object instances per category. We believe this dataset will provide a rich testbed to study 3D detection and pose estimation and will help to significantly push forward research in this area. We provide the results of variations of DPM [6] on our new dataset for object detection and viewpoint estimation in different scenarios, which can be used as baselines for the community. Our benchmark is available online at http://cvgl.stanford.edu/projects/pascal3d."}
{"_id": "831861f098640b972a342433d72db6d7135d3099", "title": "Improving retention and graduate recruitment through immersive research experiences for undergraduates", "text": "Research experiences for undergraduates are considered an effective means for increasing student retention and encouraging undergraduate students to continue on to graduate school. However, managing a cohort of undergraduate researchers, with varying skill levels, can be daunting for faculty advisors. We have developed a program to engage students in research and outreach in visualization, virtual reality, networked robotics, and interactive games. Our program immerses students into the life of a lab, employing a situated learning approach that includes tiered mentoring and collaboration to enable students at all levels to contribute to research. Students work in research teams comprised of other undergraduates, graduate students and faculty, and participate in professional development and social gatherings within the larger cohort. Results from our first two years indicate this approach is manageable and effective for increasing students' ability and desire to conduct research."}
{"_id": "6abc445016d5f57bae83d913446e4343fea7c259", "title": "Silicon carbide Schottky diodes and MOSFETs: Solutions to performance problems", "text": "Silicon carbide has long been hailed as the successor to silicon in many power electronics applications. Its superior electrical and thermal properties have delivered devices that operate at higher voltages, higher temperatures and with lower on-resistances than silicon devices. However, SiC Schottky diodes are still the only devices commercially available today. Though SiC Schottkys are now being used with silicon IGBTs in dasiahybridpsila inverter modules, the real advantages will be seen when silicon switching devices can be replaced by SiC. This paper describes the current state of SiC diode and MOSFET technology, discussing possible solutions to making these devices commercially viable."}
{"_id": "524f702ee63de2aff659cddf94333df797938b05", "title": "Remaining trouble spots with computational thinking", "text": "Addressing unresolved questions concerning computational thinking."}
{"_id": "18427d980e196c34fd0a82f1cb1bcf4d54b6bfa1", "title": "How do software engineers understand code changes?: an exploratory study in industry", "text": "Software evolves with continuous source-code changes. These code changes usually need to be understood by software engineers when performing their daily development and maintenance tasks. However, despite its high importance, such change-understanding practice has not been systematically studied. Such lack of empirical knowledge hinders attempts to evaluate this fundamental practice and improve the corresponding tool support.\n To address this issue, in this paper, we present a large-scale quantitative and qualitative study at Microsoft. The study investigates the role of understanding code changes during software-development process, explores engineers' information needs for understanding changes and their requirements for the corresponding tool support. The study results reinforce our beliefs that understanding code changes is an indispensable task performed by engineers in software-development process. A number of insufficiencies in the current practice also emerge from the study results. For example, it is difficult to acquire important information needs such as a change's completeness, consistency, and especially the risk imposed by it on other software components. In addition, for understanding a composite change, it is valuable to decompose it into sub-changes that are aligned with individual development issues; however, currently such decomposition lacks tool support."}
{"_id": "db6e349f17ad892584c3dbcfc510253a4f0085c1", "title": "Text Coherence Analysis Based on Deep Neural Network", "text": "In this paper, we propose a novel deep coherence model (DCM) using a convolutional neural network architecture to capture the text coherence. The text coherence problem is investigated with a new perspective of learning sentence distributional representation and text coherence modeling simultaneously. In particular, the model captures the interactions between sentences by computing the similarities of their distributional representations. Further, it can be easily trained in an end-to-end fashion. The proposed model is evaluated on a standard Sentence Ordering task. The experimental results demonstrate its effectiveness and promise in coherence assessment showing a significant improvement over the state-of-the-art by a wide margin."}
{"_id": "12a9f3469cda36e99e40188091d041c982aaaa82", "title": "Minimum mean square error detector for multimessage spread spectrum embedding", "text": "In this paper, a new minimum mean square error (MMSE) based detector is presented for multi-message detection in both conventional spread spectrum (SS) watermarking and improved spread spectrum (ISS) watermarking. The proposed detector yields an improved detection performance in terms of bit error rate (BER) for both SS and ISS multiple message embedding. Simulation results demonstrate that the proposed MMSE detector outperforms the conventional detector, the matched filter detector, by orders of magnitude."}
{"_id": "3d14714a0657dbd653ee8bee5325706ac098a889", "title": "Dissociation Following Traumatic Stress Etiology and Treatment", "text": "We postulate that the cascade \u2018\u2018Freeze-Flight-Fight-Fright-Flag-Faint\u2019\u2019 is a coherent sequence of six fear responses that escalate as a function of defense possibilities and proximity to danger during life-threat. The actual sequence of trauma-related response dispositions acted out in an extremely dangerous situation therefore depends on the appraisal of the threat by the organism in relation to her/his own power to act (e.g., age and gender) as well as the perceived characteristics of threat and perpetrator. These reaction patterns provide optimal adaption for particular stages of imminence. Subsequent to the traumatic threats, portions of the experience may be replayed. The actual individual cascade of defense stages a survivor has gone through during the traumatic event will repeat itself every time the fear network, which has evolved peritraumatically, is activated again (i.e., through internal or external triggers or, e.g., during exposure therapy).When a parasympathetically dominated \u2018\u2018shutdown\u2019\u2019 was the prominent peri-traumatic response during the traumatic incident, comparable dissociative responses may dominate responding to subsequently experienced threat and may also reappear when the traumatic memory is reactivated. Repeated experience of traumatic stress forms a fear network that can become pathologically detached from contextual cues such as time and location of the danger, a condition which manifests itself as posttraumatic stress disorder (PTSD). Intrusions, for example, can therefore be understood as repetitive displays of fragments of the event, which would then, depending on the dominant physiological response during the threat, elicit a corresponding combination of hyperarousal and dissociation. We suggest that trauma treatment must therefore differentiate between patients on two dimensions: those with peritraumatic sympathetic activation versus those who went down the whole defense cascade, which leads to parasympathetic dominance during the trauma and a corresponding replay of physiological and dissociative responding, when reminded. The differential management of dissociative stages (\u2018\u2018fright\u2019\u2019 and \u2018\u2018faint\u2019\u2019) has important treatment implications."}
{"_id": "62be8736747856270c757670be80eef659b3c464", "title": "Classifying heart sounds using peak location for segmentation and feature construction", "text": "In this paper we describe our methodology and results for the Classifying Heart Sounds PASCAL Challenge. We present our algoritm and results for segmentation and classification of S1 and S2 heart sounds."}
{"_id": "47e9b11a3064583bfee76a88e6ddb70baecce2d2", "title": "SORT: A Self-ORganizing Trust Model for Peer-to-Peer Systems", "text": "Open nature of peer-to-peer systems exposes them to malicious activity. Building trust relationships among peers can mitigate attacks of malicious peers. This paper presents distributed algorithms that enable a peer to reason about trustworthiness of other peers based on past interactions and recommendations. Peers create their own trust network in their proximity by using local information available and do not try to learn global trust information. Two contexts of trust, service, and recommendation contexts, are defined to measure trustworthiness in providing services and giving recommendations. Interactions and recommendations are evaluated based on importance, recentness, and peer satisfaction parameters. Additionally, recommender's trustworthiness and confidence about a recommendation are considered while evaluating recommendations. Simulation experiments on a file sharing application show that the proposed model can mitigate attacks on 16 different malicious behavior models. In the experiments, good peers were able to form trust relationships in their proximity and isolate malicious peers."}
{"_id": "4f251e32b7fb57eaf5ac5f0141c68d8ee90c88ed", "title": "Deep Learning meets Semantic Web: A feasibility study with the Cardiovascular Disease Ontology and PubMed citations", "text": "Background: Automatic identification of gene and protein names from biomedical publications can help curators and researchers to keep up with the findings published in the scientific literature. As of today, this is a challenging task related to information retrieval, and in the realm of Big Data Analytics. Objectives: To investigate the feasibility of using word embed-dings (i.e. distributed word representations) from Deep Learning algorithms together with terms from the Cardiovascular Disease Ontology (CVDO) as a step to identifying omics information encoded in the biomedical literature. Methods: Word embeddings were generated using the neural language models CBOW and Skip-gram with an input of more than 14 million PubMed citations (titles and abstracts) corresponding to articles published between 2000 and 2016. Then the abstracts of selected papers from the sysVASC systematic review were manually annotated with gene/protein names. We set up two experiments that used the word embeddings to produce term variants for gene/protein names: the first experiment used the terms manually annotated from the papers; the second experiment enriched/expanded the annotated terms using terms from the human-readable labels of key classes (gene/proteins) from the CVDO ontology. CVDO is formalised in the W3C Web Ontology Language (OWL) and contains 172,121 UniProt Knowledgebase protein classes related to human and 86,792 UniProtKB protein classes related to mouse. The hypothesis is that by enriching the original annotated terms, a better context is provided, and therefore, it is easier to obtain suitable (full and/or partial) term variants for gene/protein names from word embed-dings. Results: From the papers manually annotated, a list of 107 terms (gene/protein names) was acquired. As part of the word embeddings generated from CBOW and Skip-gram, a lexicon with more than 9 million terms was created. Using the cosine similarity metric, a list of the 12 top-ranked terms was generated from word embeddings for query terms present in the generated lexicon. Domain experts evaluated a total of 1968 pairs of terms and classified the retrieved terms as: TV (term variant); PTV (partial term variant); and NTV (non term variant, meaning none of the previous two categories). In experiment I, Skip-gram finds the double amount of (full and/or partial) term variants for gene/protein names as compared with CBOW. Using Skip-gram, the weighted Cohen's Kappa inter-annotator agreement for two domain experts was 0.80 for the first experiment and 0.74 for the second experiment. In the first experiment, suitable (full and/or partial) term variants were found \u2026"}
{"_id": "2cb72b036d11c13ea07ce95a9b946b4d88d26ea5", "title": "A computational approach to motion perception", "text": "In this paper it is shown that the computation of the optical flow from a sequence of timevarying images is not, in general, an underconstrained problem. A local algorithm for the computation of the optical flow which uses second order derivatives of the image brightness pattern, and that avoids the aperture problem, is presented. The obtained optical flow is very similar to the true motion field \u2014 which is the vector field associated with moving features on the image plane \u2014 and can be used to recover 3D motion information. Experimental results on sequences of real images, together with estimates of relevant motion parameters, like time-to-crash for translation and angular velocity for rotation, are presented and discussed. Due to the remarkable accuracy which can be achieved in estimating motion parameters, the proposed method is likely to be very useful in a number of computer vision applications."}
{"_id": "b482b919755257fa4b69bb2475dd32b6a6e61b8f", "title": "A novel booster antenna on flexible substrates for metal proximity NFC applications", "text": "In this article a novel booster antenna is proposed in order to increase the read range of a small (1.1\u00d710-4 m2) metal mounted HF (13.56 MHz) RFID. The highly inductive coupling between the reader and booster, and between the RFID and booster is the main propagation method. The communication between the reader and RFID realized through the booster antenna. For this purpose two coils are connected in parallel to each other resonating at the same frequency. The greater coil is coupled to the reader antenna and the smaller coil is coupled to the RFID. The simulation and measurement results show the reliability of the proposed structure."}
{"_id": "65e63ea1658cc2caa4b485b792f3bfdd6e5909ea", "title": "Microbial co-operation in the rhizosphere.", "text": "Soil microbial populations are immersed in a framework of interactions known to affect plant fitness and soil quality. They are involved in fundamental activities that ensure the stability and productivity of both agricultural systems and natural ecosystems. Strategic and applied research has demonstrated that certain co-operative microbial activities can be exploited, as a low-input biotechnology, to help sustainable, environmentally-friendly, agro-technological practices. Much research is addressed at improving understanding of the diversity, dynamics, and significance of rhizosphere microbial populations and their co-operative activities. An analysis of the co-operative microbial activities known to affect plant development is the general aim of this review. In particular, this article summarizes and discusses significant aspects of this general topic, including (i) the analysis of the key activities carried out by the diverse trophic and functional groups of micro-organisms involved in co-operative rhizosphere interactions; (ii) a critical discussion of the direct microbe-microbe interactions which results in processes benefiting sustainable agro-ecosystem development; and (iii) beneficial microbial interactions involving arbuscular mycorrhiza, the omnipresent fungus-plant beneficial symbiosis. The trends of this thematic area will be outlined, from molecular biology and ecophysiological issues to the biotechnological developments for integrated management, to indicate where research is needed in the future."}
{"_id": "904b96d0691c62cacd1d9456e623ab24901763a0", "title": "Facade Segmentation in the Wild", "text": "Urban fa\u00e7ade segmentation from automatically acquired imagery, in contrast to traditional image segmentation, poses several unique challenges. 360\u25e6 photospheres captured from vehicles are an effective way to capture a large number of images, but this data presents difficult-to-model warping and stitching artifacts. In addition, each pixel can belong to multiple fa\u00e7ade elements, and different facade elements (e.g., window, balcony, sill, etc.) are correlated and vary wildly in their characteristics. In this paper, we propose three network architectures of varying complexity to achieve multilabel semantic segmentation of fa\u00e7ade images while exploiting their unique characteristics. Specifically, we propose a MULTIFACSEGNET architecture to assign multiple labels to each pixel, a SEPARABLE architecture as a low-rank formulation that encourages extraction of rectangular elements, and a COMPATIBILITY network that simultaneously seeks segmentation across facade element types allowing the network to \u2018see\u2019 intermediate output probabilities of the various fa\u00e7ade element classes. Our results on benchmark datasets show significant improvements over existing fa\u00e7ade segmentation approaches for the typical fa\u00e7ade elements. For example, on one commonly used dataset the accuracy scores for window (the most important architectural element) increases from 0.91 to 0.97 percent compared to the best competing method, and comparable improvements on other element types."}
{"_id": "bc8f96fd2b96874f66196c3fdfc1f2a946fb2344", "title": "Application of Neural Networks to Stock Market Prediction", "text": "All rights reserved. Material in this report may not be reproduced in any form. This material is made available for learning and may not be used in any manner for any profit activities without the permission of the author."}
{"_id": "1c46e5ba1a9964ddacfdde406777c2fa838273d1", "title": "Using Big Data as a window into consumers\u2019 psychology", "text": "The rise of \u2018Big Data\u2019 had a big impact on marketing research and practice. In this article, we first highlight sources of useful consumer information that are now available at large scale and very little or no cost. We subsequently discuss how this information \u2013 with the help of new analytical techniques \u2013 can be translated into valuable insights on consumers\u2019 psychological states and traits that can, in turn, be used to inform marketing strategy. Finally, we discuss opportunities and challenges related to the use of Big Data as a window into consumers\u2019 psychology, and provide recommendations for how to implement related technologies in a way that benefits both businesses and consumers."}
{"_id": "49b9b1200dfbc182d477b1dda0a58a41c4142814", "title": "Maintenance 4.0: Intelligent and Predictive Maintenance System Architecture", "text": "In the current manufacturing world, the role of maintenance has been receiving increasingly more attention while companies understand that maintenance, when well performed, can be a strategic factor to achieve the corporate goals. The latest trends of maintenance leans towards the predictive approach, exemplified by the Prognosis and Health Management (PHM) and the Condition-based Maintenance (CBM) techniques. The implementation of such approaches demands a well structured architecture and can be boosted through the use of emergent ICT technologies, namely Internet of Things (IoT), cloud computing, advanced data analytics and augmented reality. Therefore, this paper describes the architecture of an intelligent and predictive maintenance system, aligned with Industry 4.0 principles, that considers advanced and online analysis of the collected data for the earlier detection of the occurrence of possible machine failures, and supports technicians during the maintenance interventions by providing a guided intelligent decision support."}
{"_id": "72dedddd28ad14f91f1ec4bea3bd323b9492c0dd", "title": "Memory training impacts short-term changes in aging white matter: a longitudinal diffusion tensor imaging study.", "text": "A growing body of research indicates benefits of cognitive training in older adults, but the neuronal mechanisms underlying the effect of cognitive intervention remains largely unexplored. Neuroimaging methods are sensitive to subtle changes in brain structure and show potential for enhancing our understanding of both aging- and training-related neuronal plasticity. Specifically, studies using diffusion tensor imaging (DTI) suggest substantial changes in white matter (WM) in aging, but it is not known whether cognitive training might modulate these structural alterations. We used tract-based spatial statistics (TBSS) optimized for longitudinal analysis to delineate the effects of 8 weeks intensive memory training on WM microstructure. 41 participants (mean age 61 years) matched for age, sex and education were randomly assigned to an intervention or control group. All participants underwent MRI-scanning and neuropsychological assessments at the beginning and end of the study. Longitudinal analysis across groups revealed significant increase in frontal mean diffusivity (MD), indicating that DTI is sensitive to WM structural alterations over a 10-week interval. Further, group analysis demonstrated positive effects of training on the short-term changes. Participants in the training group showed a relative increase in fractional anisotropy (FA) compared with controls. Further, a significant relationship between memory improvement and change in FA was found, suggesting a possible functional significance of the reported changes. The training effect on FA seemed to be driven by a relative decrease in radial diffusivity, which might indicate a role for myelin-related processes in WM plasticity."}
{"_id": "174b4cb435c87e421c973ce59ccf5b06e09aa8af", "title": "Anatomy of GPU Memory System for Multi-Application Execution", "text": "As GPUs make headway in the computing landscape spanning mobile platforms, supercomputers, cloud and virtual desktop platforms, supporting concurrent execution of multiple applications in GPUs becomes essential for unlocking their full potential. However, unlike CPUs, multi-application execution in GPUs is little explored. In this paper, we study the memory system of GPUs in a concurrently executing multi-application environment. We first present an analytical performance model for many-threaded architectures and show that the common use of misses-per-kilo-instruction (MPKI) as a proxy for performance is not accurate without considering the bandwidth usage of applications. We characterize the memory interference of applications and discuss the limitations of existing memory schedulers in mitigating this interference. We extend the analytical model to multiple applications and identify the key metrics to control various performance metrics. We conduct extensive simulations using an enhanced version of GPGPU-Sim targeted for concurrently executing multiple applications, and show that memory scheduling decisions based on MPKI and bandwidth information are more effective in enhancing throughput compared to the traditional FR-FCFS and the recently proposed RR FR-FCFS policies."}
{"_id": "50de0f6a952131dfe562c5b3836e5d934b39b939", "title": "Application-aware prioritization mechanisms for on-chip networks", "text": "Network-on-Chips (NoCs) are likely to become a critical shared resource in future many-core processors. The challenge is to develop policies and mechanisms that enable multiple applications to efficiently and fairly share the network, to improve system performance. Existing local packet scheduling policies in the routers fail to fully achieve this goal, because they treat every packet equally, regardless of which application issued the packet.\n This paper proposes prioritization policies and architectural extensions to NoC routers that improve the overall application-level throughput, while ensuring fairness in the network. Our prioritization policies are application-aware, distinguishing applications based on the stall-time criticality of their packets. The idea is to divide processor execution time into phases, rank applications within a phase based on stall-time criticality, and have all routers in the network prioritize packets based on their applications' ranks. Our scheme also includes techniques that ensure starvation freedom and enable the enforcement of system-level application priorities.\n We evaluate the proposed prioritization policies on a 64-core CMP with an 8x8 mesh NoC, using a suite of 35 diverse applications. For a representative set of case studies, our proposed policy increases average system throughput by 25.6% over age-based arbitration and 18.4% over round-robin arbitration. Averaged over 96 randomly-generated multiprogrammed workload mixes, the proposed policy improves system throughput by 9.1% over the best existing prioritization policy, while also reducing application-level unfairness."}
{"_id": "043afbd936c95d0e33c4a391365893bd4102f1a7", "title": "Project Adam: Building an Efficient and Scalable Deep Learning Training System", "text": "Large deep neural network models have recently demonstrated state-of-the-art accuracy on hard visual recognition tasks. Unfortunately such models are extremely time consuming to train and require large amount of compute cycles. We describe the design and implementation of a distributed system called Adam comprised of commodity server machines to train such models that exhibits world-class performance, scaling and task accuracy on visual recognition tasks. Adam achieves high efficiency and scalability through whole system co-design that optimizes and balances workload computation and communication. We exploit asynchrony throughout the system to improve performance and show that it additionally improves the accuracy of trained models. Adam is significantly more efficient and scalable than was previously thought possible and used 30x fewer machines to train a large 2 billion connection model to 2x higher accuracy in comparable time on the ImageNet 22,000 category image classification task than the system that previously held the record for this benchmark. We also show that task accuracy improves with larger models. Our results provide compelling evidence that a distributed systems-driven approach to deep learning using current training algorithms is worth pursuing."}
{"_id": "b0b157b2233d1aa5ecf2cd0ce97d6515a08f5ede", "title": "TunePad: Computational Thinking Through Sound Composition", "text": "Computational thinking skills will be important for the next generation of students. However, there is a disparity in the populations joining the field. Integrating computational thinking into artistic fields has shown to increase participation in computer science. In this paper, we present our initial design prototype for TunePad, a sound composition tablet application controlled by a block-based programming environment. TunePad is designed to introduce learners to computational thinking and to prepare them for text-based coding environments. From our preliminary testing, with children ages 7-14, we observed that our design actively engages learners and communicates how the programming blocks control the sounds being played. This testing is a prelude to more formal studies as we continue to improve the design and interface of TunePad. With this work, we intend to engage students in computational thinking who may not have otherwise been exposed, giving the opportunity to more people to enter the computer science field."}
{"_id": "b91d39175b1c8baca0975645f0e6896b1f607b54", "title": "Privacy-Preserving Online Task Assignment in Spatial Crowdsourcing with Untrusted Server", "text": "With spatial crowdsourcing (SC), requesters outsource their spatiotemporal tasks (tasks associated with location and time) to a set of workers, who will perform the tasks by physically traveling to the tasks' locations. However, current solutions require the locations of the workers and/or the tasks to be disclosed to untrusted parties (SC server) for effective assignments of tasks to workers. In this paper we propose a framework for assigning tasks to workers in an online manner without compromising the location privacy of workers and tasks. We perturb the locations of both tasks and workers based on geo-indistinguishability and then devise techniques to quantify the probability of reachability between a task and a worker, given their perturbed locations. We investigate both analytical and empirical models for quantifying the worker-task pair reachability and propose task assignment strategies that strike a balance among various metrics such as the number of completed tasks, worker travel distance and system overhead. Extensive experiments on real-world datasets show that our proposed techniques result in minimal disclosure of task locations and no disclosure of worker locations without significantly sacrificing the total number of assigned tasks."}
{"_id": "ccaa4e57c5bb4be7c981ae7dace179db7a94a94a", "title": "BareDroid: Large-Scale Analysis of Android Apps on Real Devices", "text": "To protect Android users, researchers have been analyzing unknown, potentially-malicious applications by using systems based on emulators, such as the Google's Bouncer and Andrubis. Emulators are the go-to choice because of their convenience: they can scale horizontally over multiple hosts, and can be reverted to a known, clean state in a matter of seconds. Emulators, however, are fundamentally different from real devices, and previous research has shown how it is possible to automatically develop heuristics to identify an emulated environment, ranging from simple flag checks and unrealistic sensor input, to fingerprinting the hypervisor's handling of basic blocks of instructions. Aware of this aspect, malware authors are starting to exploit this fundamental weakness to evade current detection systems. Unfortunately, analyzing apps directly on bare metal at scale has been so far unfeasible, because the time to restore a device to a clean snapshot is prohibitive: with the same budget, one can analyze an order of magnitude less apps on a physical device than on an emulator.\n In this paper, we propose BareDroid, a system that makes bare-metal analysis of Android apps feasible by quickly restoring real devices to a clean snapshot. We show how BareDroid is not detected as an emulated analysis environment by emulator-aware malware or by heuristics from prior research, allowing BareDroid to observe more potentially malicious activity generated by apps. Moreover, we provide a cost analysis, which shows that replacing emulators with BareDroid requires a financial investment of less than twice the cost of the servers that would be running the emulators. Finally, we release BareDroid as an open source project, in the hope it can be useful to other researchers to strengthen their analysis systems."}
{"_id": "4266a9fcef31d35de5ba6bb5162da6faf21771bc", "title": "Adtranz: A Mobile Computing System for Maintenance and Collaboration", "text": "The paper describes the mobile information and communication aspects of a next generation train maintenance and diagnosis system, discusses the working prototype features, and research results. Wearable/ Mobile computers combined with the wireless technology improve efficiency and accuracy of the maintenance work. This technology enables maintenance personnel at the site to communicate with a remote helpdesk / expertise center through digital data, audio, and image."}
{"_id": "c7fcfe09f56256f8f802c4af9fb2a066100e881a", "title": "Multi-layer FSS for gain improvement of a wide-band stacked printed antenna", "text": "In this paper, an attempt has been made to improve the gain of a wide-band antenna using two layers of frequency-selective surfaces. It is seen that using two layers it is possible to achieve flat transmission characteristics over a reasonable bandwidth. However with an actual antenna, the gain improvement has at present been achieved over only a small band. It is expected that with further optimization and possibly increasing the number of layers, wide-band gain enhancement can be achieved."}
{"_id": "3ee47780011ee618bd5a64624a662375e1958e0a", "title": "Power-Management Architecture of the Intel Microarchitecture Code-Named Sandy Bridge", "text": "Modern microprocessors are evolving into system-on-a-chip designs with high integration levels, catering to ever-shrinking form factors. Portability without compromising performance is a driving market need. An architectural approach that's adaptive to and cognizant of workload behavior and platform physical constraints is indispensable to meeting these performance and efficiency goals. This article describes power-management innovations introduced on Intel's Sandy Bridge microprocessor."}
{"_id": "63d630482d59e83449f73b51c0efb608e662d3ef", "title": "All-printed diode operating at 1.6 GHz.", "text": "Printed electronics are considered for wireless electronic tags and sensors within the future Internet-of-things (IoT) concept. As a consequence of the low charge carrier mobility of present printable organic and inorganic semiconductors, the operational frequency of printed rectifiers is not high enough to enable direct communication and powering between mobile phones and printed e-tags. Here, we report an all-printed diode operating up to 1.6 GHz. The device, based on two stacked layers of Si and NbSi2 particles, is manufactured on a flexible substrate at low temperature and in ambient atmosphere. The high charge carrier mobility of the Si microparticles allows device operation to occur in the charge injection-limited regime. The asymmetry of the oxide layers in the resulting device stack leads to rectification of tunneling current. Printed diodes were combined with antennas and electrochromic displays to form an all-printed e-tag. The harvested signal from a Global System for Mobile Communications mobile phone was used to update the display. Our findings demonstrate a new communication pathway for printed electronics within IoT applications."}
{"_id": "d01ac7040c253b941192dcd7710635d47d837609", "title": "AllConcur: Leaderless Concurrent Atomic Broadcast (Extended Version)", "text": "Many distributed systems require coordination between the components involved. With the steady growth of such systems, the probability of failures increases, which necessitates scalable fault-tolerant agreement protocols. The most common practical agreement protocol, for such scenarios, is leader-based atomic broadcast. In this work, we propose ALLCONCUR, a distributed system that provides agreement through a leaderless concurrent atomic broadcast algorithm, thus, not suffering from the bottleneck of a central coordinator. In ALLCONCUR, all components exchange messages concurrently through a logical overlay network that employs early termination to minimize the agreement latency. Our implementation of ALLCONCUR supports standard sockets-based TCP as well as high-performance InfiniBand Verbs communications. ALLCONCUR can handle up to 135 million requests per second and achieves 17\u00d7 higher throughput than today\u2019s standard leader-based protocols, such as Libpaxos. Thus, ALLCONCUR is highly competitive with regard to existing solutions and, due to its decentralized approach, enables hitherto unattainable system designs in a variety of fields."}
{"_id": "6ee90eb04d420ab5ee4e72dba207139b34ffa9c5", "title": "Short-Term Power Load Forecasting with Deep Belief Network and Copula Models", "text": "The complexity and uncertainty in scheduling and operation of the power system are prominently increasing with the penetration of smart grid. An essential task for the effective operation of power systems is the power load forecasting. In this paper, a tandem data-driven method is studied in this research based on deep learning. A deep belief network (DBN) embedded with parametric Copula models is proposed to forecast the hourly load of a power grid. Data collected over a whole year from an urbanized area in Texas, United States is utilized. Forecasting hourly power load in four different seasons in a selected year is examined. Two forecasting scenarios, day-ahead and week-ahead forecasting are conducted using the proposed methods and compared with classical neural networks (NN), support vector regression machine (SVR), extreme learning machine (ELM), and classical deep belief networks (DBN). The accuracy of the forecasted power load is assessed by mean absolute percentage error (MAPE) and root mean square error (RMSE). Computational results confirm the effectiveness of the proposed semi-parametric data-driven method."}
{"_id": "0220354bd4467f28bf0e9a83f2663d01ed3899bd", "title": "Text Classification with Topic-based Word Embedding and Convolutional Neural Networks", "text": "Recently, distributed word embeddings trained by neural language models are commonly used for text classification with Convolutional Neural Networks (CNNs). In this paper, we propose a novel neural language model, Topic-based Skip-gram, to learn topic-based word embeddings for biomedical literature indexing with CNNs. Topic-based Skip-gram leverages textual content with topic models, e.g., Latent Dirichlet Allocation (LDA), to capture precise topic-based word relationship and then integrate it into distributed word embedding learning. We then describe two multimodal CNN architectures, which are able to employ different kinds of word embeddings at the same time for text classification. Through extensive experiments conducted on several real-world datasets, we demonstrate that combination of our Topic-based Skip-gram and multimodal CNN architectures outperforms state-of-the-art methods in biomedical literature indexing, clinical note annotation and general textual benchmark dataset classification."}
{"_id": "07529c6cac9427efab237cd20f7aa54237e74511", "title": "EROS: a fast capability system", "text": "EROS is a capability-based operating system for commodity processors which uses a single level storage model. The single level store's persistence is transparent to applications. The performance consequences of support for transparent persistence and capability-based architectures are generally believed to be negative. Surprisingly, the basic operations of EROS (such as IPC) are generally comparable in cost to similar operations in conventional systems. This is demonstrated with a set of microbenchmark measurements of semantically similar operations in Linux.The EROS system achieves its performance by coupling well-chosen abstract objects with caching techniques for those objects. The objects (processes, nodes, and pages) are well-supported by conventional hardware, reducing the overhead of capabilities. Software-managed caching techniques for these objects reduce the cost of persistence. The resulting performance suggests that composing protected subsystems may be less costly than commonly believed."}
{"_id": "04ffdf2e15e7d8189e7516aaedc484ed6a03f0a3", "title": "Simple Method for High-Performance Digit Recognition Based on Sparse Coding", "text": "In this brief paper, we propose a method of feature extraction for digit recognition that is inspired by vision research: a sparse-coding strategy and a local maximum operation. We show that our method, despite its simplicity, yields state-of-the-art classification results on a highly competitive digit-recognition benchmark. We first employ the unsupervised Sparsenet algorithm to learn a basis for representing patches of handwritten digit images. We then use this basis to extract local coefficients. In a second step, we apply a local maximum operation to implement local shift invariance. Finally, we train a support vector machine (SVM) on the resulting feature vectors and obtain state-of-the-art classification performance in the digit recognition task defined by the MNIST benchmark. We compare the different classification performances obtained with sparse coding, Gabor wavelets, and principal component analysis (PCA). We conclude that the learning of a sparse representation of local image patches combined with a local maximum operation for feature extraction can significantly improve recognition performance."}
{"_id": "c15c068ac4b639646a74ad14fc994016f8925901", "title": "Standard cell library and automated design flow for circuits on flexible substrates", "text": "A-Si:H TFTs are traditionally used in backplane arrays for active matrix displays and occasionally in row or column drive electronics with current efforts focusing on flexible displays and drivers. This paper extends flexible electronics to complex digital circuitry by designing a standard cell library for a-Si:H TFTs on flexible stainless steel and plastic substrates. The standard cell library enables layout automation with a standard cell place and route tool, significantly speeding the layout of a-Si:H digital circuits on the backplane to enhance display functionality. Since only n-channel transistors are available, the gates are designed with a bootstrap pull-up network to ensure good output voltage swings. The library developed consists of 7 gates: 5 combinational gates (Inverter, NAND2, NOR2, NOR3, and MUX2) and 2 sequential gates (latch and \u2018D\u2019 flip-flop). Test structures have been fabricated to experimentally characterize the delay vs. fan-out of the standard cells. Automatic extraction of electrical interconnections from layout, enabling layout versus schematic (LVS) has also incorporated into the existing tool suite for bottom gate a-Si:H TFTs. A 3bit counter was designed, fabricated and tested to demonstrate the standard cell library is described."}
{"_id": "5b3cef8664c91433b7baa233be4d7fdfc5147adb", "title": "Challenges Facing the Industrial Implementation of Fog Computing", "text": "Recently, the industries converge to the integration of the industry 4.0 paradigm to keep responding to the variable market demands. This integration is realized by the adoption of several components of the industry 4.0 such as IoT, Big Data and Cloud Computing, etc. Several difficulties concerning the integration of data management were encountered during first level of Industry 4.0 integration because of the unexpected quantity of data generated by IoT devices. The Fog computing can be considered as a new component of Industry 4.0 to resolve this kind of problem. However its implementation in the industrial field faces several challenges from different natures. This paper explains the role of Fog Computing solution to enhance the Cloud layer (distribution, low latency, real-time,. . . ) and studies its ability to be implemented in manufacturing systems. The Fog Manufacturing is introduced as the new industrial Fog vision. The challenges preventing the Fog Manufacturing implementation are studied and the links between each other are justified. A future use case is described to carry out the solutions given to satisfy the Fog Manufacturing challenges."}
{"_id": "3fde6a074930e2669550f380b26da3a1a6bc1ac9", "title": "An empirical study of dormant bugs", "text": "Over the past decade, several research efforts have studied the quality of software systems by looking at post-release bugs. However, these studies do not account for bugs that remain dormant (i.e., introduced in a version of the software system, but are not found until much later) for years and across many versions. Such dormant bugs skew our under- standing of the software quality. In this paper we study dormant bugs against non-dormant bugs using data from 20 different open-source Apache foundation software systems. We find that 33% of the bugs introduced in a version are not reported till much later (i.e., they are reported in future versions as dormant bugs). Moreover, we find that 18.9% of the reported bugs in a version are not even introduced in that version (i.e., they are dormant bugs from prior versions). In short, the use of reported bugs to judge the quality of a specific version might be misleading. Exploring the fix process for dormant bugs, we find that they are fixed faster (median fix time of 5 days) than non- dormant bugs (median fix time of 8 days), and are fixed by more experienced developers (median commit counts of developers who fix dormant bug is 169% higher). Our results highlight that dormant bugs are different from non-dormant bugs in many perspectives and that future research in software quality should carefully study and consider dormant bugs."}
{"_id": "21c32bb07aca6ea3f79dce7ef27c8fb937e99b93", "title": "Chronic ulcerative stomatitis: a distinct clinical entity?", "text": "Chronic ulcerative stomatitis (CUS) is a mucocutaneous disorder which is characterised by persistent oral mucosal ulceration. The clinical appearance is often reminiscent of oral lichen planus (OLP) leading to erroneous diagnoses. The immune mediated inhibition of the CUS protein (CUSP) is implicated in the pathogenesis of CUS. CUSP acts as an anti-apoptotic protein and when its action is prevented it may result in significant epithelial injury. The objective or this article is to present the first documented case of CUS in South Africa, with relevant reference to current international literature. CUS should be considered in patients previously diagnosed with OLP but who are unresponsive to glucocorticosteroid therapy. The condition can be successfully managed using hydroxychloroquine."}
{"_id": "b7afb8adc591e9ece348617b5e21348dc933580c", "title": "Small and fast moving object detection and tracking in sports video sequences", "text": "We propose an algorithm for detection and tracking of small and fast moving objects, like a ping pong ball or a cricket ball, in sports video sequences. For detection, the proposed method uses only motion as a cue; moreover it does not use any texture information. Our method is able to detect the object with very low contrast and negligible texture content. Along with detection, we also propose a tracking algorithm using the multiple filter bank approach. Thus we provide a complete solution. The tracking algorithm is able to track maneuvering as well as non-maneuvering movements of the object without using any a priori information about the target dynamics"}
{"_id": "d1bf0962711517cff15205b1844d6b8d625ca7da", "title": "Social entrepreneurship research : A source of explanation , prediction , and delight", "text": "Social entrepreneurship, as a practice and a field for scholarly investigation, provides a unique opportunity to challenge, question, and rethink concepts and assumptions from different fields of management and business research. This article puts forward a view of social entrepreneurship as a process that catalyzes social change and addresses important social needs in a way that is not dominated by direct financial benefits for the entrepreneurs. Social entrepreneurship is seen as differing from other forms of entrepreneurship in the relatively higher priority given to promoting social value and development versus capturing economic value. To stimulate future research the authors introduce the concept of embeddedness as a nexus between theoretical perspectives for the study of social entrepreneurship. # 2005 Elsevier Inc. All rights reserved."}
{"_id": "4a415f9b5eb605d39d47ac863a21ab65aeb3dcbe", "title": "Ultrasound measurements of axillary recess capsule thickness in unilateral frozen shoulder: study of correlation with MRI measurements", "text": "The aims of this study were to compare the ultrasound thickness of the affected axillary recess capsule (ARC) with that of the unaffected ARC in patients with frozen shoulder (FS), to analyze whether the ultrasound measurements of the ARC thickness correlate with those obtained using MRI, and to assess whether the ultrasound thickness of the ARC correlates with the patterns of range of motion limitation. Forty-four patients with clinically diagnosed unilateral FS and MRI evaluation performed ultrasound measurement of ARC. The ultrasound measurement of the ARC thickness was performed with the patients in a supine position with their shoulder abducted by 40\u00b0. The ARC thickness was also measured by MRI on oblique coronal images by another physician blinded to the ultrasound measurements. With both ultrasound and MRI, ARC thickness was determined at the widest portion of the capsule. The ultrasound thickness of ARC was significantly higher in the affected shoulder (4.4\u2009\u00b1\u20091.1\u00a0mm) than in the unaffected one (2.2\u2009\u00b1\u20090.5\u00a0mm) (p\u2009<\u20090.001). The ultrasound thickness of the ARC in the affected shoulder correlated with that measured by MRI (8.9\u2009\u00b1\u20091.9\u00a0mm) (p\u2009<\u20090.001, r\u2009=\u20090.83). The ARC thickness, whether measured by ultrasound or MRI, was not significantly related to the limitation of movement in specific directions. Ultrasound can demonstrate the difference in ARC thickness between affected and unaffected shoulders in patients with unilateral FS. The ARC thickness measured by ultrasound correlates with that measured by MRI."}
{"_id": "8bc2fc891291579dcc0da5ace37a5a9b6057e086", "title": "A scaling normalization method for differential expression analysis of RNA-seq data", "text": "The fine detail provided by sequencing-based transcriptome surveys suggests that RNA-seq is likely to become the platform of choice for interrogating steady state RNA. In order to discover biologically important changes in expression, we show that normalization continues to be an essential step in the analysis. We outline a simple and effective method for performing normalization and show dramatically improved results for inferring differential expression in simulated and publicly available data sets."}
{"_id": "abd11d8542ae3931ac94970257c10fb92423fe57", "title": "Image-based Human Character Modeling and Reconstruction for Virtual Reality Exposure Therapy", "text": "We design a modeling platform to construct virtual environments for Virtual Reality Exposure Therapy. In this platform, the scene is built by image-based reconstruction followed by manual refinement (when needed). The main technical contribution of this paper is on enhanced human character modeling, which is done through reconstructing 3D face geometry, body shape and posture from provided 2D photos."}
{"_id": "4747d169a5d6b48febfa111a8b28680159eb3bb2", "title": "Detecting People in Artwork with CNNs", "text": "CNNs have massively improved performance in object detection in photographs. However research into object detection in artwork remains limited. We show state-of-the-art performance on a challenging dataset, People-Art, which contains people from photos, cartoons and 41 different artwork movements. We achieve this high performance by fine-tuning a CNN for this task, thus also demonstrating that training CNNs on photos results in overfitting for photos: only the first three or four layers transfer from photos to artwork. Although the CNN\u2019s performance is the highest yet, it remains less than 60% AP, suggesting further work is needed for the cross-depiction problem."}
{"_id": "a5c8585ee4a55e4775cbb23719f1061fb41fff3e", "title": "Introduction to Space-Time Wireless Communications", "text": "published by the press syndicate of the university of cambridge A catalog record for this book is available from the British Library ISBN 0 521 82615 2 hardback Contents List of figures page xiv List of tables xxii Preface xxiii List of abbreviations xxvi List of symbols xxix vii viii Contents 2.6 Polarization and field diverse channels 27 2.7 Antenna array topology 28 2.8 Degenerate channels 29 2.9 Reciprocity and its implications 31 3 ST channel and signal models"}
{"_id": "e60e9cd0f06975dae2fa009703ee663a5a40ea63", "title": "A quasi-orthogonal space-time block code", "text": "It has been shown that a complex orthogonal design that provides full diversity and full transmission rate for a space\u2013time block code is not possible for more than two antennas. Previous attempts have been concentrated in generalizing orthogonal designs which provide space\u2013time block codes with full diversity and a high transmission rate. In this work, we design rate one codes which are quasi-orthogonal and provide partial diversity. The decoder of the proposed codes works with pairs of transmitted symbols instead of single symbols."}
{"_id": "58461d01e8b6bd177d26ee17f9cf332cb8ca286a", "title": "Space-Time block codes from orthogonal designs", "text": "We introduce space\u2013time block coding , a new paradigm for communication over Rayleigh fading channels using multiple transmit antennas. Data is encoded using a space\u2013time block code and the encoded data is split inton streams which are simultaneously transmitted usingn transmit antennas. The received signal at each receive antenna is a linear superposition of the n transmitted signals perturbed by noise. Maximumlikelihood decoding is achieved in a simple way through decoupling of the signals transmitted from different antennas rather than joint detection. This uses the orthogonal structure of the space\u2013time block code and gives a maximum-likelihood decoding algorithm which is based only on linear processing at the receiver. Space\u2013time block codes are designed to achieve the maximum diversity order for a given number of transmit and receive antennas subject to the constraint of having a simple decoding algorithm. The classical mathematical framework of orthogonal designs is applied to construct space\u2013time block codes. It is shown that space\u2013time block codes constructed in this way only exist for few sporadic values ofn. Subsequently, a generalization of orthogonal designs is shown to provide space\u2013time block codes for both real and complex constellations for any number of transmit antennas. These codes achieve the maximum possible transmission rate for any number of transmit antennas using any arbitrary real constellation such as PAM. For an arbitrary complex constellation such as PSK and QAM, space\u2013time block codes are designed that achieve1=2 of the maximum possible transmission rate for any number of transmit antennas. For the specific cases of two, three, and four transmit antennas, space\u2013time block codes are designed that achieve, respectively, all, 3=4, and 3=4 of maximum possible transmission rate using arbitrary complex constellations. The best tradeoff between the decoding delay and the number of transmit antennas is also computed and it is shown that many of the codes presented here are optimal in this sense as well."}
{"_id": "8052c6fe35b5f5660fcfcf11edffedbeeabf0f60", "title": "Space-Time Codes for High Data Rate Wireless Communications", "text": "Space-time codes (STC) are a class of signaling techniques, offering coding and diversity gains along with improved spectral efficiency. These codes exploit both the spatial and the temporal diversity of the wireless link by combining the design of the error correction code, modulation scheme and array processing. STC are well suited for improving the downlink performance, which is the bottleneck in asymmetric applications such as downstream Internet. Three original contributions to the area of STC are presented in this dissertation. First, the development of analytic tools that determine the fundamental limits on the performance of STC in a variety of channel conditions. For trellis-type STC, transfer function based techniques are applied to derive performance bounds over Rayleigh, Rician and correlated fading environments. For block-type STC, an analytic framework that supports various complex orthogonal designs with arbitrary signal cardinalities and array configurations is developed. In the second part of the dissertation, the Virginia Tech Space-Time Advanced Radio (VT-STAR) is designed, introducing a multi-antenna hardware laboratory test bed, which facilitates characterization of the multiple-input multiple-output (MIMO) channel and validation of various space-time approaches. In the third part of the dissertation, two novel space-time architectures paired with iterative processing principles are proposed. The first scheme extends the suitability of STC to outdoor wireless communications by employing iterative equalization/decoding for time dispersive channels and the second scheme employs iterative interference cancellation/decoding to solve the error propagation problem of Bell-Labs Layered Space-Time Architecture (BLAST). Results show that remarkable energy and spectral efficiencies are achievable by combining concepts drawn from space-time coding, multiuser detection, array processing and iterative decoding."}
{"_id": "4593273d72193f4ea67c1227c0eb9308bea7fcdb", "title": "Haplogroup Prediction from Y-STR Values Using an Allele-Frequency Approach", "text": "A new approach to predicting the Y-chromosome haplogroup from a set of Y-STR marker values is presented and compared to other approaches. The method has been implemented in an Excel-based program, where an arbitrary number of STR markers may be input and a \u201cgoodness of fit\u201d score for 10 haplogroups (E3a, E3b, G, I1a, I1b, I1c, J2, N3, R1a, and R1b) is returned. This method has been applied to 101 R1b haplotypes and 50 I1a haplotypes (all having 37 STR markers available), and the distribution of results is presented. In the case of I1a, the results are compared with the predictions of another method."}
{"_id": "9a7e21e18ddf24e3e734021ede367e44ccf52d0d", "title": "A design of miniaturization LPDA antenna with ultra wideband", "text": "This paper designed a Log-periodic miniaturization antenna working in 8GHz to 18GHz. It satisfies the characteristic of high gain, good radiation pattern, and a large proportion of the size reduction which other antenna don't possess. The lengthwise is 80mm, it reduces more than 50% of the transverse dimension with the original specification. A \u201cmountain\u201d terminal loaded design is used to shrink size of antenna under the premise of guaranteeing directivity of antenna. By optimizing backward feeding structure and antenna array to improve antenna performance. The final result is superior to other same type antenna distinctly."}
{"_id": "41f7cd6f53f9c9c151f762cbc41a89a8b1133c4e", "title": "Learning styles and performance in the introductory programming sequence", "text": "This paper reports on the implication of different preferred learning styles on students' performance in the introductory programming sequence and on work in progress on how to accommodate these different styles.Students were given a learning styles preference test and then their preferred learning styles were compared to their performance on the exam and the practical programming part of the introductory programming module. There were significant differences in performance between groups of students.This result could lead one to two possible conclusions. One might be that some students' learning styles are more suited to learning programming than others.An alternative explanation is that our current methods of teaching advantage students with certain learning preference styles. We are at present in the process of testing this second assumption by providing students with a wider range of learning materials. We will then see if student performance is improved by using our current results as a baseline for comparison"}
{"_id": "17fad2cc826d2223e882c9fda0715fcd5475acf3", "title": "Human facial expressions as adaptations: Evolutionary questions in facial expression research.", "text": "The importance of the face in social interaction and social intelligence is widely recognized in anthropology. Yet the adaptive functions of human facial expression remain largely unknown. An evolutionary model of human facial expression as behavioral adaptation can be constructed, given the current knowledge of the phenotypic variation, ecological contexts, and fitness consequences of facial behavior. Studies of facial expression are available, but results are not typically framed in an evolutionary perspective. This review identifies the relevant physical phenomena of facial expression and integrates the study of this behavior with the anthropological study of communication and sociality in general. Anthropological issues with relevance to the evolutionary study of facial expression include: facial expressions as coordinated, stereotyped behavioral phenotypes, the unique contexts and functions of different facial expressions, the relationship of facial expression to speech, the value of facial expressions as signals, and the relationship of facial expression to social intelligence in humans and in nonhuman primates. Human smiling is used as an example of adaptation, and testable hypotheses concerning the human smile, as well as other expressions, are proposed."}
{"_id": "b8de61c6e7ebea5186ac52f98ba64d7b33524c6f", "title": "A Comparison Between Single-Phase Quasi- $Z$-Source and Quasi-Switched Boost Inverters", "text": "The properties of a single-phase quasi Z-source inverter (qZSI) and a single-phase quasi-switched boost inverter (qSBI), both of which are single-stage buck-boost inverters, are investigated and compared. For the same operating conditions, qSBI has the following advantages over qZSI: 1) Three capacitors are saved; 2) the current rating on both of its switches and diodes is lower; 3) its boost factor is higher with an equivalent parasitic effect; and 4) its efficiency is higher. However, qSBI has one more active switch and one more diode than Z-source/ qZSIs. In addition, the capacitor voltage stress of qSBI is higher than that of qZSI. The dc and ac component circuit analysis, impedance design with low-frequency and high-frequency ripples, component stresses, and power loss calculation are presented. A prototype based on a TMS320F28335 DSP is built in order to compare the operating principle of qSBI and qZSI."}
{"_id": "55b6449b2e8e183f3f4522a8fed1b86c2b56c9e8", "title": "Panel 1 : Terms used in qualitative", "text": "A broad base of medical and scientific knowledge is needed if medicine is to maintain its identity as a discipline founded on scientific knowledge. However, interpretive action must also be included in medical knowledge. In my first article, 1 I investigated the nature of clinical knowledge in medicine, exposed some of the shortcomings of quantitative research methods, and briefly introduced qualitative methods as an approach for improved understanding. Here, I shall discuss how scientific quality can be maintained when qualitative research methods are applied. I present some overall standards, describe specific challenges met when the medical researcher uses qualitative research methods, and subsequently propose guidelines for qualitative inquiry in medical research. I do not intend to provide comprehensive guidance for the inexperienced qualitative researcher, who must be prepared to acquire basic skills of qualitative research from the relevant literature. Some of the specific terms that I use are presented in panel 1. Standards Qualitative research methods involve the systematic collection, organisation, and interpretation of textual material derived from talk or observation. It is used in the exploration of meanings of social phenomena as experienced by individuals themselves, in their natural context. Qualitative research is still regarded with scepticism by the medical community, accused of its subjective nature and the absence of facts. Although the adequacy of guidelines has been vigorously debated within this cross-disciplinary field, 6,7 scientific standards, criteria, and checklists do exist. 3,8\u201311 However, as Chapple and Rogers 12 point out, medical researchers often encounter difficulties when they try to apply guidelines designed by social scientists, which deal with issues important in their own discipline, but which are not necessarily generically valid as scientific standards. Hamberg and colleagues, 13 for example, claim that the established criteria for scientific rigour in quantitative research cannot be applied to qualitative studies. Referring to Lincoln and Guba, 2 they suggest alternative criteria: credibility, dependability, confirmability, and trans-ferability. They admit that these criteria correspond with traditional ones in some ways, comparing credibility with internal validity, confirmability with objectivity, and transferability with generalisability. Mays and Pope, 7 however, maintain that qualitative research can be assessed with reference to the same broad criteria as quantitative research, albeit used in a different way. Referring to Hammersley, 14 they suggest that validity and relevance are essential. Neither of these criteria are straightforward to assess though, and each requires judgments to be made. To improve validity, Mays and Pope \u2026"}
{"_id": "5b7c45e321427bff232745449fa67af29289e4df", "title": "A Neural Schema Architecture for Autonomous Robots", "text": "As autonomous robots become more complex in their behavior, more sophisticated software architectures are required to support the ever more sophisticated robotics software. These software architectures must support complex behaviors involving adaptation and learning, implemented, in particular, by neural networks. We present in this paper a neural based schema [2] software architecture for the development and execution of autonomous robots in both simulated and real worlds. This architecture has been developed in the context of adaptive robotic agents, ecological robots [6], cooperating and competing with each other in adapting to their environment. The architecture is the result of integrating a number of development and execution systems: NSL, a neural simulation language; ASL, an abstract schema language; and MissionLab, a schema-based mission-oriented simulation and robot system. This work contributes to modeling in Brain Theory (BT) and Cogniti ve Psychology, with applications in Distributed Artificial Intell igence (DAI), Autonomous Agents and Robotics. Areas: Robotics, Agent-oriented programming, Neural Nets"}
{"_id": "ecd613273dc573de060eac9e1eabe1d790b1faa9", "title": "Seed priming to alleviate salinity stress in germinating seeds.", "text": "Salinity is one of the major abiotic stresses that affect crop production in arid and semiarid areas. Seed germination and seedling growth are the stages most sensitive to salinity. Salt stress causes adverse physiological and biochemical changes in germinating seeds. It can affect the seed germination and stand establishment through osmotic stress, ion-specific effects and oxidative stress. The salinity delays or prevents the seed germination through various factors, such as a reduction in water availability, changes in the mobilization of stored reserves and affecting the structural organization of proteins. Various techniques can improve emergence and stand establishment under salt conditions. One of the most frequently utilized is seed priming. The process of seed priming involves prior exposure to an abiotic stress, making a seed more resistant to future exposure. Seed priming stimulates the pre-germination metabolic processes and makes the seed ready for radicle protrusion. It increases the antioxidant system activity and the repair of membranes. These changes promote seed vigor during germination and emergence under salinity stress. The aim of this paper is to review the recent literature on the response of plants to seed priming under salinity stress. The mechanism of the effect of salinity on seed germination is discussed and the seed priming process is summarized. Physiological, biochemical and molecular changes induced by priming that lead to seed enhancement are covered. Plants' responses to some priming agents under salinity stress are reported based on the best available data. For a great number of crops, little information exists and further research is needed."}
{"_id": "aee6e3fbaf14ed5da8275769d1c8597d75677764", "title": "Signal quality measures for pulse oximetry through waveform morphology analysis.", "text": "Pulse oximetry has been extensively used to estimate oxygen saturation in blood, a vital physiological parameter commonly used when monitoring a subject's health status. However, accurate estimation of this parameter is difficult to achieve when the fundamental signal from which it is derived, the photoplethysmograph (PPG), is contaminated with noise artifact induced by movement of the subject or the measurement apparatus. This study presents a novel method for automatic rejection of artifact contaminated pulse oximetry waveforms, based on waveform morphology analysis. The performance of the proposed algorithm is compared to a manually annotated gold standard. The creation of the gold standard involved two experts identifying sections of the PPG signal containing good quality PPG pulses and/or noise, in 104 fingertip PPG signals, using a simultaneous electrocardiograph (ECG) signal as a reference signal. The fingertip PPG signals were each 1 min in duration and were acquired from 13 healthy subjects (10 males and 3 females). Each signal contained approximately 20 s of purposely induced artifact noise from a variety of activities involving subject movement. Some unique waveform morphology features were extracted from the PPG signals, which were believed to be correlated with signal quality. A simple decision-tree classifier was employed to arrive at a classification decision, at a pulse-by-pulse resolution, of whether a pulse was of acceptable quality for use or not. The performance of the algorithm was assessed using Cohen's kappa coefficient (\u03ba), sensitivity, specificity and accuracy measures. A mean \u03ba of 0.64 \u00b1 0.22 was obtained, while the mean sensitivity, specificity and accuracy were 89 \u00b1 10%, 77 \u00b1 19% and 83 \u00b1 11%, respectively. Furthermore, a heart rate estimate, extracted from uncontaminated sections of PPG, as identified by the algorithm, was compared with the heart rate derived from an uncontaminated simultaneous ECG signal. The mean error between both heart rate readings was 0.49 \u00b1 0.66 beats per minute (BPM), in comparison to an error value observed without using the artifact detection algorithm of 7.23 \u00b1 5.78 BPM. These results demonstrate that automated identification of signal artifact in the PPG signal through waveform morphology analysis is achievable. In addition, a clear improvement in the accuracy of the derived heart rate is also evident when such methods are employed."}
{"_id": "0a07543807e63a76d0d4b63aec80c15d6de30394", "title": "Modeling Ambiguity, Subjectivity, and Diverging Viewpoints in Opinion Question Answering Systems", "text": "Product review websites provide an incredible lens into the wide variety of opinions and experiences of different people, and play a critical role in helping users discover products that match their personal needs and preferences. To help address questions that can't easily be answered by reading others' reviews, some review websites also allow users to pose questions to the community via a question-answering (QA) system. As one would expect, just as opinions diverge among different reviewers, answers to such questions may also be subjective, opinionated, and divergent. This means that answering such questions automatically is quite different from traditional QA tasks, where it is assumed that a single 'correct' answer is available. While recent work introduced the idea of question-answering using product reviews, it did not account for two aspects that we consider in this paper: (1) Questions have multiple, often divergent, answers, and this full spectrum of answers should somehow be used to train the system, and (2) What makes a 'good' answer depends on the asker and the answerer, and these factors should be incorporated in order for the system to be more personalized. Here we build a new QA dataset with 800 thousand questions\u2014and over 3.1 million answers\u2014and show that explicitly accounting for personalization and ambiguity leads both to quantitatively better answers, but also a more nuanced view of the range of supporting, but subjective, opinions."}
{"_id": "b612ce278763c43f1a8ac37293fee4e5981cf4b7", "title": "Dual wideband CPW-fed split-ring monopole antenna for WLAN applications", "text": "This paper presents a dual wideband CPW-fed monopole antenna based on split-ring elements for WLAN (2.4/5 GHz) applications. The proposed split-ring monopole antenna (SRMA) has a compact novel design which provides 13.8% and 55.5% impedance bandwidth at operational frequency bands. Moreover, the antenna exhibits good radiation pattern with predicted gains of 4\u20136 dBi in the related bands. The analysis and design of the proposed microstrip antenna have been carried out by means of Ansoft HFSS and an antenna prototype was then constructed and tested. Numerical analyses using the simulator and measurement results are also presented in this paper."}
{"_id": "56e7557fc2612884d4787dd42e47f4d9c59e5069", "title": "Machine learning in the Internet of Things: A semantic-enhanced approach", "text": "Novel Internet of Things (IoT) applications and services rely on an intelligent understanding of the environment leveraging data gathered via heterogeneous sensors and micro-devices. Though increasingly effective, Machine Learning (ML) techniques generally do not go beyond classification of events with opaque labels, lacking machine-understandable representation and explanation of taxonomies. This paper proposes a framework for semantic-enhanced data mining on sensor streams, amenable to resource-constrained pervasive contexts. It merges an ontology-based characterization of data distributions with non-standard reasoning for a fine-grained event detection. The typical classification problem of ML is treated as a resource discovery by exploiting semantic matchmaking. Outputs of classification are endowed with computer-processable descriptions in standard Semantic Web languages, while explanation of matchmaking outcomes motivates confidence on results. A case study on road and traffic analysis has allowed to validate the proposal and achieve an assessment with respect to state-of-the-art ML algorithms."}
{"_id": "56ceac6ed3baf689f59d8995d31aae8bf20011ca", "title": "Architecture and Implementation of a Java Package for Multiple Input Devices ( MID )", "text": "A major difficulty in writing Single Display Groupware (co-present collaborative) applications is getting input from multiple devices. We introduce MID, a Java package that addresses this problem and offers an architecture to access advanced events through Java. In this paper, we describe the features, architecture and limitations of MID. We also briefly describe an application that uses MID to get input from multiple mice: KidPad."}
{"_id": "e807dfd97a0847398a3ef73ccc2bc0b5f4bbe37a", "title": "Series WP-1207 A Source of Bias in Public Opinion Stability", "text": "A long acknowledged but seldom addressed problem with political communication experiments concerns the use of captive participants. Study participants rarely have the opportunity to choose information themselves, instead receiving whatever information the experimenter provides. Druckman and his co-authors relax this assumption in the context of an over-time framing experiment focused on opinions about health care policy. Their results dramatically deviate from extant understandings of over-time communication effects. Allowing individuals to choose information for themselves\u2014a common situation on many political issues\u2014leads to the preeminence of early frames and the rejection of later frames. Instead of opinion decay, they find dogmatic adherence to opinions formed in response to the first frame to which participants were exposed (i.e., staunch opinion stability). The effects match those that occur when early frames are repeated multiple times. The results suggest that opinion stability may often reflect biased information seeking. Moreover, the findings have implications for a range of topics including the micro-macro disconnect in studies of public opinion, political polarization, normative evaluations of public opinion, the role of inequality considerations in the debate about health care, and perhaps most importantly, the design of experimental studies of public opinion. *The authors thank Cengiz Erisen, Larry Jacobs, Jon Krosnick, Heather Madonia, Leslie McCall, Spencer Piston, David Sterrett, and seminar participants in Northwestern\u2019s Applied Quantitative Methods Workshop for helpful comments and advice."}
{"_id": "be1b0b6ab3148262840d17f7a4ea50e367489cb6", "title": "Choice of intra-articular injection in treatment of knee osteoarthritis: platelet-rich plasma, hyaluronic acid or ozone options", "text": "This study was performed to compare the efficacy of treatment in three groups of patients with knee osteoarthritis (OA) given an intra-articular injection of platelet-rich plasma (PRP), hyaluronic acid (HA) or ozone gas. A total of 102 patients with mild\u2013moderate and moderate knee OA who presented at the polyclinic with at least a 1-year history of knee pain and VAS score \u22654 were randomly separated into three groups. Group 1 (PRP group) received intra-articular injection of PRP\u00a0\u00d7\u00a02 doses, Group 2 (HA group) received a single dose of HA, and Group 3 (Ozone group) received ozone\u00a0\u00d7\u00a0four doses. Weight-bearing anteroposterior\u2013lateral and Merchant\u2019s radiographs of both knees were evaluated. WOMAC and VAS scores were applied to all patients on first presentation and at 1, 3, 6 and 12\u00a0months. At the end of the 1st month after injection, significant improvements were seen in all groups. In the 3rd month, the improvements in WOMAC and VAS scores were similar in Groups 1 and 2, while those in Group 3 were lower (p\u00a0<\u00a00.001). At the 6th month, while the clinical efficacies of PRP and HA were similar and continued, the clinical effect of ozone had disappeared (p\u00a0<\u00a00.001). At the end of the 12th month, PRP was determined to be both statistically and clinically superior to HA (p\u00a0<\u00a00.001). In the treatment of mild\u2013moderate knee OA, PRP was more successful than HA and ozone injections, as the application alone was sufficient to provide at least 12\u00a0months of pain-free daily living activities. Therapeutic study, Level I."}
{"_id": "223694d5aa8de0e96de70bcbcf5973db568770dc", "title": "Polyphonic Music Modelling with LSTM-RTRBM", "text": "Recent interest in music information retrieval and related technologies is exploding. However, very few of the existing techniques take advantage of the recent advancements in neural networks. The challenges of developing effective browsing, searching and organization techniques for the growing bodies of music collections call for more powerful statistical models. In this paper, we present LSTM-RTRBM, a new neural network model for the problem of creating accurate yet flexible models of polyphonic music. Our model integrates the ability of Long Short-Term Memory (LSTM) in memorizing and retrieving useful history information, together with the advantage of Restricted Boltzmann Machine (RBM) in high dimensional data modelling. Our approach greatly improves the performance of polyphonic music sequence modelling, achieving the state-of-the-art results on multiple datasets."}
{"_id": "7fcbab7f684a4b7906a289c1cf95ef3224af9eb0", "title": "Improving the effectiveness of peer feedback for learning", "text": "The present study examined the effectiveness of (a) peer feedback for learning, more specifically of certain characteristics of the content and style of the provided feedback, and (b) a particular instructional intervention to support the use of the feedback. A quasi-experimental repeated measures design was adopted. Writing assignments of 43 students of Grade 7 in secondary education showed that receiving \u2018justified\u2019 comments in feedback improves performance, but this effect diminishes for students with better pretest performance. Justification was superior to the accuracy of comments. The instructional intervention of asking assessees to reflect upon feedback after peer assessment did not increase learning gains significantly. 2009 Published by Elsevier Ltd."}
{"_id": "26e9eb44ed8065122d37b0c429a8d341bfeea9a5", "title": "Reference-Aware Language Models", "text": "We propose a general class of language models that treat reference as explicit stochastic latent variables. This architecture allows models to create mentions of entities and their attributes by accessing external databases (required by, e.g., dialogue generation and recipe generation) and internal state (required by, e.g. language models which are aware of coreference). This facilitates the incorporation of information that can be accessed in predictable locations in databases or discourse context, even when the targets of the reference may be rare words. Experiments on three representative applications show our model variants outperform models based on deterministic attention."}
{"_id": "8ba0f4c3f8864fc9bcf3cc1e2a6b883169f9ab0f", "title": "Cluster analysis using data mining approach to develop CRM methodology to assess the customer loyalty", "text": "Data mining (DM) methodology has a tremendous contribution for researchers to extract the hidden knowledge and information which have been inherited in the data used by researchers. This study has proposed a new procedure, based on expanded RFM model by including one additional parameter, joining WRFM-based method to K-means algorithm applied in DM with K-optimum according to Davies\u2013 Bouldin Index, and then classifying customer product loyalty in under B2B concept. The developed methodology has been implemented for SAPCO Co. in Iran. The result shows a tremendous capability to the firm to assess his customer loyalty in marketing strategy designed by this company in comparing with random selection commonly used by most companies in Iran. 2009 Elsevier Ltd. All rights reserved."}
{"_id": "25a7b5d2db857cd86692c45d0e5376088f51aa12", "title": "Rationale for the RBAC96 family of access control models", "text": "A family of role-based access control (RBAC) models, referred to here as the RBAC96 models, was recently published by the author and his colleagues. This paper gives our rationale for the major decisions in developing these models and discusses alternatives that were considered."}
{"_id": "24c8ec0acee0a38a14aa98ba2317816bb7dba70c", "title": "Document dewarping via text-line based optimization", "text": "This paper presents a new document image dewarping method that removes geometric distortions in camera-captured document images. The proposed method does not directly use the text-line which has been the most widely used feature for the document dewarping. Instead, we use the discrete representation of text-lines and text-blocks which are the sets of connected components. Also, we model the geometric distortions caused by page curl and perspective view as the generalized cylindrical surfaces and camera rotation respectively. With these distortion models and the discrete representation of the features, we design a cost function whose minimization yields the parameters of the distortion model. In the cost function, we encode the properties of the pages such as text-block alignment, linespacing, and the straightness of text-lines. By describing the text features using the sets of discrete points, the cost function can be easily defined and efficiently solved by Levenberg\u2013Marquadt algorithm. Experiments show that the proposed method works well for the various layouts and curved surfaces, and compares favorably with the conventional methods on the standard dataset. & 2015 Elsevier Ltd. All rights reserved."}
{"_id": "97899af067fbccaac6431bd33d822026044056d3", "title": "Discipline and practices of TDD: (test driven development)", "text": "This panel brings together practitioners with experience in Agile and XP methodologies to discuss the approaches and benefits of applying Test Driven Development (TDD). The goal of TDD is clean code that works. The mantra of TDD is: write a test; make it run; and make it right. Open questions to be addressed by the panel include: - How are TDD approaches to be applied to databases, GUIs, and distributed systems? What are the quantitative benchmarks that can demonstrate the value of TDD, and what are the best approaches to solve the ubiquitous issue of scalability."}
{"_id": "22be3a5dac2c67a71c78e6bc70f7f4649c636826", "title": "Ontology-based decision making on uncontrolled intersections and narrow roads", "text": "Many Advanced Driver Assistance Systems (ADAS) have been developed to improve car safety. However, it is still a challenging problem to make autonomous vehicles to drive safely on urban streets such as uncontrolled intersections (without traffic lights) and narrow roads. In this paper, we introduce a decision making system that can assist autonomous vehicles at uncontrolled intersections and narrow roads. We constructed a machine understandable ontology-based Knowledge Base, which contains maps and traffic regulations. The system makes decisions in comply with traffic regulations such as Right-Of-Way rules when it receives a collision warning signal. The decisions are sent to a path planning system to change the route or stop to avoid collisions."}
{"_id": "cf4e9530c1aaab41b1ce414f7535c2510598a980", "title": "Using Ontology-Based Traffic Models for More Efficient Decision Making of Autonomous Vehicles", "text": "The paper describes how a high-level abstract world model can be used to support the decision-making process of an autonomous driving system. The approach uses a hierarchical world model and distinguishes between a low-level model for the trajectory planning and a high-level model for solving the traffic coordination problem. The abstract world model used in the CyberCars-2 project is presented. It is based on a topological lane segmentation and introduces relations to represent the semantic context of the traffic scenario. This makes it much easier to realize a consistent and complete driving control system, and to analyze, evaluate and simulate such a system."}
{"_id": "13314ec85469e3b6025cb2dc07d97a4930b3326c", "title": "The Semantic Web Revisited", "text": "The article included many scenarios in which intelligent agents and bots undertook tasks on behalf of their human or corporate owners. Of course, shopbots and auction bots abound on the Web, but these are essentially handcrafted for particular tasks: they have little ability to interact with heterogeneous data and information types. Because we haven't yet delivered large-scale, agent-based mediation, some commentators argue that the semantic Web has failed to deliver. We argue that agents can only flourish when standards are well established and that the Web standards for expressing shared meaning have progressed steadily over the past five years"}
{"_id": "17d41e6fd35629a9cc14f203b0035985f44500d5", "title": "Pellet: A practical OWL-DL reasoner", "text": "In this report, we present Pellet: a complete and capable OWL-DL reasoner with acceptable to very good performance, extensive middleware, and a number of unique features. Pellet is written in Java and is open source under a very liberal license. It is used in a number of projects, from pure research to industrial settings. Pellet is the first sound and complete OWL-DL reasoner with extensive support for rea-soning with individuals (including nominal support and conjunctive query), user-defined datatypes, and debugging support for ontologies. It implements several extensions to OWL-DL including a combination formalism for OWL-DL ontologies, a non-monotonic operator, and preliminary support for OWL/Rule hybrid reasoning. It has proven to be a reliable tool for working with OWL-DL ontologies and experimenting with OWL extensions. In this paper we describe Pellet\u2019s features, architecture and special capabilities, along with an empirical comparison of its performance against other leading OWL-DL reasoners."}
{"_id": "294bb01b6e402b5d37ed0842bbf08f7e3d2358fd", "title": "RoadGraph: High level sensor data fusion between objects and street network", "text": "The RoadGraph is a graph based environmental model for driver assistance systems. It integrates information from different sources like digital maps, onboard sensors and V2X communication into one single model about the vehicle's environment. At the moment of information aggregation some function independent situation analysis is done. In this paper we look at techniques for lane-precise map-matching even with moderate GPS reception using distinct information sources. We also analyze the concepts of aggregating objects from different sources with a-priori knowledge of the street-layout. Results of this novelty approach are shown."}
{"_id": "3a6bde1bea5e954ada2fa6ab1dcdbefa737f37f8", "title": "Defeasible Logic Programming: An Argumentative Approach", "text": "The work reported here introduces Defeasible Logic Programming (DeLP), a formalism that combines results of Logic Programming and Defeasible Argumentation. DeLP provides the possibility of representing information in the form of weak rules in a declarative manner, and a defeasible argumentation inference mechanism for warranting the entailed conclusions. In DeLP an argumentation formalism will be used for deciding between contradictory goals. Queries will be supported by arguments that could be defeated by other arguments. A query q will succeed when there is an argument A for q that is warranted, i. e. the argument A that supports q is found undefeated by a warrant procedure that implements a dialectical analysis. The defeasible argumentation basis of DeLP allows to build applications that deal with incomplete and contradictory information in dynamic domains. Thus, the resulting approach is suitable for representing agent\u2019s knowledge and for providing an argumentation based reasoning mechanism to agents."}
{"_id": "bc62c888e7cefc5f04e82ff705c17b2faf2d6991", "title": "How Positive Informational Social Influence Affects Consumers Decision of Internet Shopping?", "text": "Given the amount of perceived risk involved in Internet shopping, many potential Internet shoppers tend to wait and observe the experiences of others who have tried it before considering adopting it. This study explores how positive informational social influence affects consumers\u2019 decision of Internet shopping using a laboratory experiment. The research results reveal that positive informational social influence reinforces the relationships between perceived ease of use and consumer attitude toward Internet shopping, as well as attitude and their intention to shop. Implications for the current investigation and future research directions are provided."}
{"_id": "2dcefbd7c6571c7653ede9fb95fefbec6cc96f0c", "title": "Prisoner's dilemma game model for e-commerce", "text": "This study investigates how the user\u2019s exchanges affect people\u2019s trust in electronic commerce and empower them to interact with foreign customers. Online exchanges often occur between strangers who cannot rely on past behavior or the prospect of future interactions to establish mutual trust. Game theorists have formalized this problem as a prisoner\u2019s dilemma and predict mutual noncooperation. In this paper, we introduce a social network based model to promote the cooperation in the prisoner\u2019s dilemma game. For this study, we implemented an agent-based simulation framework, which models different types of behaviors in online auctions. Agents who represent buyers or sellers are located in the nodes of a small world graph. Each link weight between the agent and its neighbor symbolizes how much it trusts this neighbor. A link with trust inferior to some cut-link threshold will be removed from the graph. Our results show that the evolved structure of the graph induces considerable variation in the level of cooperation and the profit in the e-commerce system. This shows that the outcome of our co-evolutionary game, in terms of cooperative behavior, strongly depends on topology but also on the update rule used in the trust between users. Co-evolution of game dynamics in interconnected networks is also studied, where two networks are interconnected, and players have interactions not only with others in the same network, but also with players in the other network. We will demonstrate that the interdependence between networks promotes the cooperation in both networks. But, the degree of promotion changes as a function of interdependence"}
{"_id": "a5ba3ae8135a0f8a7d7ee065f0b544eac31ad69f", "title": "Flexible design of a wearable lower limb exoskeleton robot", "text": "Wearable exoskeleton robot is a kind of humanoid service robot to help the elderly and the patients with walking dysfunction, it is also an effective medical rehabilitation method to help patients who have walking disorders due to central neural system damage. This paper focuses on the flexible mechanism design of wearable lower limb exoskeleton robot. A wearable exoskeleton robot prototype was developed which can assist human walking. This paper analyzes the role of the major joints of walking human by experimental studies based on bionic design methods from human anatomy and bone surgery. We first analyze parameters of joint movements in a gait cycle, then we design a preliminary bionic model, finally we proposed a control strategy including a pair of electric crutches for the exoskeleton robot."}
{"_id": "23d88502dc3f9a718b5a155d3b78f6dac11b3d30", "title": "The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity", "text": "Recently, there has been an upsurge of attention focused on bias and its impact on specialized artificial intelligence (AI) applications. Allegations of racism and sexism have permeated the conversation as stories surface about search engines delivering job postings for well-paying technical jobs to men and not women, or providing arrest mugshots when keywords such as \"black teenagers\" are entered. Learning algorithms are evolving; they are often created from parsing through large datasets of online information while having truth labels bestowed on them by crowd-sourced masses. These specialized AI algorithms have been liberated from the minds of researchers and startups, and released onto the public. Yet intelligent though they may be, these algorithms maintain some of the same biases that permeate society. They find patterns within datasets that reflect implicit biases and, in so doing, emphasize and reinforce these biases as global truth. This paper describes specific examples of how bias has infused itself into current AI and robotic systems, and how it may affect the future design of such systems. More specifically, we draw attention to how bias may affect the functioning of (1) a robot peacekeeper, (2) a self-driving car, and (3) a medical robot. We conclude with an overview of measures that could be taken to mitigate or halt bias from permeating robotic technology."}
{"_id": "771b52e7c7d0a4ac8b8ee0cdeed209d1c4114480", "title": "A monad for deterministic parallelism", "text": "We present a new programming model for deterministic parallel computation in a pure functional language. The model is monadic and has explicit granularity, but allows dynamic construction of dataflow networks that are scheduled at runtime, while remaining deterministic and pure. The implementation is based on monadic concurrency, which has until now only been used to simulate concurrency in functional languages, rather than to provide parallelism. We present the API with its semantics, and argue that parallel execution is deterministic. Furthermore, we present a complete work-stealing scheduler implemented as a Haskell library, and we show that it performs at least as well as the existing parallel programming models in Haskell."}
{"_id": "01dc98beaf13fb5d92458da021843776947a6af5", "title": "A Lightweight Semantic Web-based Approach for Data Annotation on IoT Gateways", "text": "Internet of Things (IoT) applications rely on networks composed of set of heterogeneous sensors and smart devices, which have the capability to constantly, observe the surroundings and gather data. This heterogeneity is reflected on raw data gathered by such type of systems. Consequently, the task of high-level IoT applications to interpret such data and detect events in the real world is more complex. Moreover, data heterogeneity leads to the lack of interoperability between IoT applications. Semantic Web (SW) technologies have been widely adopted to model and integrate data from different sources on the web; extending them to the IoT domain can be used to mitigate the aforementioned challenges. Semantically annotating IoT data is a fundamental step toward developing smarter and interoperable IoT applications. However, this type of process requires a large amount of computing resources, especially in scenarios where a large number of sensors is expected to be involved such as smart city. To address these challenges, we propose a lightweight semantic annotation approach that can be implemented on resource-constrained IoT gateways connected to a limited number of sensors. To evaluate the feasibility of the proposed approach, we have carried out a set of experimentations using a middleware prototype implementation. Several benchmarks are considered such as: Data size, response time, and resource utilization. c \u00a9 2017 The Authors. Published by Elsevier B.V. ."}
{"_id": "19eba94485350d5db23f078e03a8997c7ebb6ed2", "title": "MI&T Lab at SemEval-2017 task 4: An Integrated Training Method of Word Vector for Sentiment Classification", "text": "A CNN method for sentiment classification task in Task 4A of SemEval 2017 is presented. To solve the problem of word2vec training word vector slowly, a method of training word vector by integrating word2vec and Convolutional Neural Network (CNN) is proposed. This training method not only improves the training speed of word2vec, but also makes the word vector more effective for the target task. Furthermore, the word2vec adopts a full connection between the input layer and the projection layer of the Continuous Bag-of-Words (CBOW) for acquiring the semantic information of the original sentence."}
{"_id": "403b3b53c6d5b9e5bc88b5d960b6a42ff6fb25ba", "title": "Data Gloves for Sign Language Recognition System", "text": "Communication between deaf-dumb and a normal person have always been a challenging task . About 9 billion people in the world come into this category which is quite large number to be ignored. As deaf-dumb people use sign language for their communication which is difficult to understand by the normal people. This paper aims at eradicating the communication barrier between them by developing an embedded system which will translate the hand gestures into"}
{"_id": "84bd344370d4bb24005dc6886860e711a6749ef1", "title": "A Comparative Evaluation of Log-Based Process Performance Analysis Techniques", "text": "Process mining has gained traction over the past decade and an impressive body of research has resulted in the introduction of a variety of process mining approaches measuring process performance. Having this set of techniques available, organizations might find it difficult to identify which approach is best suited considering context, performance indicator, and data availability. In light of this challenge, this paper aims at introducing a framework for categorizing and selecting performance analysis approaches based on existing research. We start from a systematic literature review for identifying the existing works discussing how to measure process performance based on information retrieved from event logs. Then, the proposed framework is built starting from the information retrieved from these studies taking into consideration different aspects of performance analysis."}
{"_id": "cf6d8ceb80ee5057da178ac2ef9e237070901e18", "title": "Benchmarking Web API Quality", "text": "Web APIs are increasingly becoming an integral part of web or mobile applications. As a consequence, performance characteristics and availability of the APIs used directly impact the user experience of end users. Still, quality of web APIs is largely ignored and simply assumed to be sufficiently good and stable. Especially considering geo-mobility of today\u2019s client devices, this can lead to negative surprises at runtime. In this work, we present an approach and toolkit for benchmarking the quality of web APIs considering geo-mobility of clients. Using our benchmarking tool, we then present the surprising results of a geo-distributed 3-month benchmark run for 15 web APIs and discuss how application developers can deal with volatile quality both from an architectural and engineering point of view."}
{"_id": "d9e87addc2a841882bc601c7296bbdc3ff7ae421", "title": "Iterative localization refinement in convolutional neural networks for improved object detection", "text": "Accurate region proposals are of importance to facilitate object localization in the existing convolutional neural network (CNN)-based object detection methods. This paper presents a novel iterative localization refinement (ILR) method which, undertaken at a mid-layer of a CNN architecture, iteratively refines region proposals in order to match as much ground-truth as possible. The search for the desired bounding box in each iteration is first formulated as a statistical hypothesis testing problem and then solved by a divide-and-conquer paradigm. The proposed ILR is not only data-driven, free of learning, but also compatible with a variety of CNNs. Furthermore, to reduce complexity, an approximate variant based on a refined sampling strategy using linear interpolation is addressed. Simulations show that the proposed method improves the main state-of-the-art works on the PASCAL VOC 2007 dataset."}
{"_id": "77db2bc08df285e7b84ada136f6c56cc5693b64e", "title": "A Multimodal Analysis of Floor Control in Meetings", "text": "The participant in a human-to-human communication who controls the floor bears the burden of moving the communication process along. Change in control of the floor can happen through a number of mechanisms, including interruptions, delegation of the floor, and so on. This paper investigates floor control in multiparty meetings that are both audio and video taped; hence, we analyze patterns of speech (e.g., the use of discourse markers) and visual cues (e.g, eye gaze exchanges) that are often involved in floor control changes. Identifying who has control of the floor provides an important focus for information retrieval and summarization of meetings. Additionally, without understanding who has control of the floor, it is impossible to identify important events such as challenges for the floor. In this paper, we analyze multimodal cues related to floor control in two different meetings involving five participants each."}
{"_id": "c1ae4425cdd92aa3eaca11e455a156780898d0d4", "title": "Transaction-Cost Economics: The Governance of Contractual Relations", "text": "Transaction-Cost Economics: The Governance of Contractual Relations Author(s): Oliver E. Williamson Reviewed work(s): Source: Journal of Law and Economics, Vol. 22, No. 2 (Oct., 1979), pp. 233-261 Published by: The University of Chicago Press for The Booth School of Business of the University of Chicago and The University of Chicago Law School Stable URL: http://www.jstor.org/stable/725118 . Accessed: 07/01/2013 16:49"}
{"_id": "3a04f3cbb7c7c37e95c8f5945a0e33a435f50adb", "title": "Product Classification in E-Commerce using Distributional Semantics", "text": "Product classification is the task of automatically predicting a taxonomy path for a product in a predefined taxonomy hierarchy given a textual product description or title. For efficient product classification we require a suitable representation for a document (the textual description of a product) feature vector and efficient and fast algorithms for prediction. To address the above challenges, we propose a new distributional semantics representation for document vector formation. We also develop a new two-level ensemble approach utilizing (with respect to the taxonomy tree) a path-wise, node-wise and depth-wise classifiers for error reduction in the final product classification. Our experiments show the effectiveness of the distributional representation and the ensemble approach on data sets from a leading e-commerce platform and achieve better results on various evaluation metrics compared to earlier approaches."}
{"_id": "c31834b6c2f5dd90740dfb2d7ed3d6d4b3e96309", "title": "Missing data analyses: a hybrid multiple imputation algorithm using Gray System Theory and entropy based on clustering", "text": "Researchers and practitioners who use databases usually feel that it is cumbersome in knowledge discovery or application development due to the issue of missing data. Though some approaches can work with a certain rate of incomplete data, a large portion of them demands high data quality with completeness. Therefore, a great number of strategies have been designed to process missingness particularly in the way of imputation. Single imputation methods initially succeeded in predicting the missing values for specific types of distributions. Yet, the multiple imputation algorithms have maintained prevalent because of the further promotion of validity by minimizing the bias iteratively and less requirement on prior knowledge to the distributions. This article carefully reviews the state of the art and proposes a hybrid missing data completion method named Multiple Imputation using Gray-system-theory and Entropy based on Clustering (MIGEC). Firstly, the non-missing data instances are separated into several clusters. Then, the imputed value is obtained after multiple calculations by utilizing the information entropy of the proximal category for each incomplete instance in terms of the similarity metric based on Gray System Theory (GST). Experimental results on University of California Irvine (UCI) datasets illustrate the superiority of MIGEC to other current achievements on accuracy for either numeric or categorical attributes under different missing mechanisms. Further discussion on real aerospace datasets states MIGEC is also applicable for the specific area with both more precise inference and faster convergence than other multiple imputation methods in general."}
{"_id": "77dae6361257c4da4d015fb6212715dc756dbde9", "title": "Analysing and improving the diagnosis of ischaemic heart disease with machine learning", "text": "Ischaemic heart disease is one of the world's most important causes of mortality, so improvements and rationalization of diagnostic procedures would be very useful. The four diagnostic levels consist of evaluation of signs and symptoms of the disease and ECG (electrocardiogram) at rest, sequential ECG testing during the controlled exercise, myocardial scintigraphy, and finally coronary angiography (which is considered to be the reference method). Machine learning methods may enable objective interpretation of all available results for the same patient and in this way may increase the diagnostic accuracy of each step. We conducted many experiments with various learning algorithms and achieved the performance level comparable to that of clinicians. We also extended the algorithms to deal with non-uniform misclassification costs in order to perform ROC analysis and control the trade-off between sensitivity and specificity. The ROC analysis shows significant improvements of sensitivity and specificity compared to the performance of the clinicians. We further compare the predictive power of standard tests with that of machine learning techniques and show that it can be significantly improved in this way."}
{"_id": "b29297857de55a47b3885d2b81157fee63fbe428", "title": "IUI workshop on interactive machine learning", "text": "Many applications of Machine Learning (ML) involve interactions with humans. Humans may provide input to a learning algorithm (in the form of labels, demonstrations, corrections, rankings or evaluations) while observing its outputs (in the form of feedback, predictions or executions). Although humans are an integral part of the learning process, traditional ML systems used in these applications are agnostic to the fact that inputs/outputs are from/for humans.\n However, a growing community of researchers at the intersection of ML and human-computer interaction are making interaction with humans a central part of developing ML systems. These efforts include applying interaction design principles to ML systems, using human-subject testing to evaluate ML systems and inspire new methods, and changing the input and output channels of ML systems to better leverage human capabilities. With this Interactive Machine Learning (IML) workshop at IUI 2013 we aim to bring this community together to share ideas, get up-to-date on recent advances, progress towards a common framework and terminology for the field, and discuss the open questions and challenges of IML."}
{"_id": "9df3e4ebbc0754484d4ec936de334a2f47c48a82", "title": "NLAST: A natural language assistant for students", "text": "This paper presents a system that works as an assistant for students in their learning process. The assistant system has two main parts: an Android application and a server platform. The Android application is a chatterbot (i.e., an agent intended to conduct a conversation in natural language with a human being) based on AIML, one of the more successful languages for developing conversational agents. The chatterbot acts as an intermediation agent between a student and the server platform. The server platform contains four repositories and a recommender (which are part of a bigger professor assessment system). The final objective of the assistant system is to make a student able to carry out several actions related to his/her learning and assessment processes, such as: to consult exam questions from a repository, to receive recommendations about learning material, to ask questions about a course, and to check his/her own assessed exams. These actions are carried out through and Android application using a natural language interface (by voice or typing). The purpose of this development is to facilitate the access to this information through a friendly interface."}
{"_id": "28d99dc2d673d62118658f8375b414e5192eac6f", "title": "Using Ranking-CNN for Age Estimation", "text": "Human age is considered an important biometric trait for human identification or search. Recent research shows that the aging features deeply learned from large-scale data lead to significant performance improvement on facial image-based age estimation. However, age-related ordinal information is totally ignored in these approaches. In this paper, we propose a novel Convolutional Neural Network (CNN)-based framework, ranking-CNN, for age estimation. Ranking-CNN contains a series of basic CNNs, each of which is trained with ordinal age labels. Then, their binary outputs are aggregated for the final age prediction. We theoretically obtain a much tighter error bound for ranking-based age estimation. Moreover, we rigorously prove that ranking-CNN is more likely to get smaller estimation errors when compared with multi-class classification approaches. Through extensive experiments, we show that statistically, ranking-CNN significantly outperforms other state-of-the-art age estimation models on benchmark datasets."}
{"_id": "ea949df68de667b25adfd68caebde203a4d48b01", "title": "The 500MHz band low power rectenna for DTV in the Tokyo area", "text": "In this paper, the 500MHz band low power rectenna with high sensitivity characteristic is described. The rectenna is consisting of the folded dipole antenna with a 1.6k\u03a9 impedance and the Cockcroft-Walton type rectifier for improvement of the rectification efficiency and the output dc voltage. Furthermore an L-Type LPF is added to reduce the negative influence due to the junction capacitance of the SBD. The developed rectennas (stage number: 1, 2) achieve the rectification efficiency of 48.9 % (stage number: 1) and 43.1 % (stage number: 2) at -15dBm of the input power. In addition to above, the top level efficiency of 8.7 % (stage number:1) at the input power of -30dBm can be achieved. In the Tokyo are, the output dc voltage of 0.22 V can be measured at the 25 km point from the antenna tower."}
{"_id": "27dc15a7f2370d45a2aa4084b69dbfb0c9a74569", "title": "Cochrane handbook for systematic reviews of interventions", "text": "Chapter 1: Introduction .......................................................................... 3 Key points .................................................................................. 3 1.1 Cochrane ................................................................................ 3 1.1.1 What is Cochrane? ......................................................................... 4 1.1.2 A brief history of Cochrane .................................................................... 4 1.1.3 Cochrane organization and structure .............................................................. 5 1.2 Systematic reviews .......................................................................... 5 1.2.1 The need for systematic reviews ................................................................. 5 1.2.2 What is a systematic review? ................................................................... 5 1.3 Cochrane Reviews .......................................................................... 6 1.3.1 Reviews of the effects of interventions ............................................................. 6 1.3.2 Reviews of diagnostic test accuracy .............................................................. 6 1.3.3 Overviews of Reviews ....................................................................... 6 1.3.4 Reviews of methodology ...................................................................... 7 1.4 Publication of Cochrane Reviews ................................................................. 7 1.4.1 The Cochrane Library ....................................................................... 7 1.5 Handbook structure .......................................................................... 8 1.6 Chapter information .......................................................................... 8 1.7 References ............................................................................... 8 Chapter 3: Reporting the review ..................................................................... 9 3."}
{"_id": "8fbfe4cf21d2e28e29d5e0d99b58cdc99208dc85", "title": "Simplified markov random fields for efficient semantic labeling of 3D point clouds", "text": "In this paper, we focus on 3D point cloud classification by assigning semantic labels to each point in the scene. We propose to use simplified Markov networks to model the contextual relations between points, where the node potentials are calculated from point-wise classification results using off-the-shelf classifiers, such as Random Forest and Support Vector Machines, and the edge potentials are set by physical distance between points. Our experimental results show that this approach yields comparable if not better results with improved speed compared with state-of-the-art methods. We also propose a novel robust neighborhood filtering method to exclude outliers in the neighborhood of points, in order to reduce noise in local geometric statistics when extracting features and also to reduce number of false edges when constructing Markov networks. We show that applying robust neighborhood filtering improves the results when classifying point clouds with more object categories."}
{"_id": "d940d19edf0e7b9257dbf42966b14acd6191288e", "title": "Contactless vital signs measurement for self-service healthcare kiosk in intelligent building", "text": "Contactless healthcare system is becoming a new trend to solve the existing healthcare system limitations. High flexibility and accessibility are the main advantages of contactless measurements. In this paper, we proposed a contactless healthcare kiosk for intelligent building application. The Remote Imaging Photoplethysmography (rIPPG) based framework to measure multiple human's vital signs, including pulse rate (PR), respiratory rate (RR), and systolic blood pressure (SBP) was developed to fulfill the performance requirement with a personal computer. Our experimental results show the Mean Absolute Error (MAE) of each measured parameters are 1.7 BPM, 0.41 BPM and 8.15 mmHg for PR, RR, and SBP respectively compared with clinical apparatus. These statistical numbers show the feasibility of using the contactless based system to measure numerous vital signs and also meet the requirements of standard healthcare device."}
{"_id": "92fc1ee8cf89106a3ed76cff4875edabd159f686", "title": "Surviving Sepsis Campaign: International Guidelines for Management of Severe Sepsis and Septic Shock, 2012", "text": "To provide an update to the \u201cSurviving Sepsis Campaign Guidelines for Management of Severe Sepsis and Septic Shock,\u201d last published in 2008. A consensus committee of 68 international experts representing 30 international organizations was convened. Nominal groups were assembled at key international meetings (for those committee members attending the conference). A formal conflict of interest policy was developed at the onset of the process and enforced throughout. The entire guidelines process was conducted independent of any industry funding. A stand-alone meeting was held for all subgroup heads, co- and vice-chairs, and selected individuals. Teleconferences and electronic-based discussion among subgroups and among the entire committee served as an integral part of the development. The authors were advised to follow the principles of the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system to guide assessment of quality of evidence from high (A) to very low (D) and to determine the strength of recommendations as strong (1) or weak (2). The potential drawbacks of making strong recommendations in the presence of low-quality evidence were emphasized. Recommendations were classified into three groups: (1) those directly targeting severe sepsis; (2) those targeting general care of the critically ill patient and considered high priority in severe sepsis; and (3) pediatric considerations. Key recommendations and suggestions, listed by category, include: early quantitative resuscitation of the septic patient during the first 6\u00a0h after recognition (1C); blood cultures before antibiotic therapy (1C); imaging studies performed promptly to confirm a potential source of infection (UG); administration of broad-spectrum antimicrobials therapy within 1\u00a0h of the recognition of septic shock (1B) and severe sepsis without septic shock (1C) as the goal of therapy; reassessment of antimicrobial therapy daily for de-escalation, when appropriate (1B); infection source control with attention to the balance of risks and benefits of the chosen method within 12\u00a0h of diagnosis (1C); initial fluid resuscitation with crystalloid (1B) and consideration of the addition of albumin in patients who continue to require substantial amounts of crystalloid to maintain adequate mean arterial pressure (2C) and the avoidance of hetastarch formulations (1B); initial fluid challenge in patients with sepsis-induced tissue hypoperfusion and suspicion of hypovolemia to achieve a minimum of 30\u00a0mL/kg of crystalloids (more rapid administration and greater amounts of fluid may be needed in some patients (1C); fluid challenge technique continued as long as hemodynamic improvement is based on either dynamic or static variables (UG); norepinephrine as the first-choice vasopressor to maintain mean arterial pressure \u226565\u00a0mmHg (1B); epinephrine when an additional agent is needed to maintain adequate blood pressure (2B); vasopressin (0.03\u00a0U/min) can be added to norepinephrine to either raise mean arterial pressure to target or to decrease norepinephrine dose but should not be used as the initial vasopressor (UG); dopamine is not recommended except in highly selected circumstances (2C); dobutamine infusion administered or added to vasopressor in the presence of (a) myocardial dysfunction as 
suggested by elevated cardiac filling pressures and low cardiac output, or (b) ongoing signs of hypoperfusion despite achieving adequate intravascular volume and adequate mean arterial pressure (1C); avoiding use of intravenous hydrocortisone in adult septic shock patients if adequate fluid resuscitation and vasopressor therapy are able to restore hemodynamic stability (2C); hemoglobin target of 7\u20139\u00a0g/dL in the absence of tissue hypoperfusion, ischemic coronary artery disease, or acute hemorrhage (1B); low tidal volume (1A) and limitation of inspiratory plateau pressure (1B) for acute respiratory distress syndrome (ARDS); application of at least a minimal amount of positive end-expiratory pressure (PEEP) in ARDS (1B); higher rather than lower level of PEEP for patients with sepsis-induced moderate or severe ARDS (2C); recruitment maneuvers in sepsis patients with severe refractory hypoxemia due to ARDS (2C); prone positioning in sepsis-induced ARDS patients with a Pao 2/Fio 2 ratio of \u2264100\u00a0mm\u00a0Hg in facilities that have experience with such practices (2C); head-of-bed elevation in mechanically ventilated patients unless contraindicated (1B); a conservative fluid strategy for patients with established ARDS who do not have evidence of tissue hypoperfusion (1C); protocols for weaning and sedation (1A); minimizing use of either intermittent bolus sedation or continuous infusion sedation targeting specific titration endpoints (1B); avoidance of neuromuscular blockers if possible in the septic patient without ARDS (1C); a short course of neuromuscular blocker (no longer than 48\u00a0h) for patients with early ARDS and a Pao 2/Fi o 2 <150 mm Hg (2C); a protocolized approach to blood glucose management commencing insulin dosing when two consecutive blood glucose levels are >180\u00a0mg/dL, targeting an upper blood glucose \u2264180\u00a0mg/dL (1A); equivalency of continuous veno-venous hemofiltration or intermittent hemodialysis (2B); prophylaxis for deep vein thrombosis (1B); use of stress ulcer prophylaxis to prevent upper gastrointestinal bleeding in patients with bleeding risk factors (1B); oral or enteral (if necessary) feedings, as tolerated, rather than either complete fasting or provision of only intravenous glucose within the first 48\u00a0h after a diagnosis of severe sepsis/septic shock (2C); and addressing goals of care, including treatment plans and end-of-life planning (as appropriate) (1B), as early as feasible, but within 72\u00a0h of intensive care unit admission (2C). Recommendations specific to pediatric severe sepsis include: therapy with face mask oxygen, high flow nasal cannula oxygen, or nasopharyngeal continuous PEEP in the presence of respiratory distress and hypoxemia (2C), use of physical examination therapeutic endpoints such as capillary refill (2C); for septic shock associated with hypovolemia, the use of crystalloids or albumin to deliver a bolus of 20\u00a0mL/kg of crystalloids (or albumin equivalent) over 5\u201310\u00a0min (2C); more common use of inotropes and vasodilators for low cardiac output septic shock associated with elevated systemic vascular resistance (2C); and use of hydrocortisone only in children with suspected or proven \u201cabsolute\u201d\u2019 adrenal insufficiency (2C). Strong agreement existed among a large cohort of international experts regarding many level 1 recommendations for the best care of patients with severe sepsis. 
Although a significant number of aspects of care have relatively weak support, evidence-based recommendations regarding the acute management of sepsis and septic shock are the foundation of improved outcomes for this important group of critically ill patients."}
{"_id": "28a2166112318413647605a87d5baae5badf5c3a", "title": "Recommending citations for academic papers", "text": "We approach the problem of academic literature search by considering an unpublished manuscript as a query to a search system. We use the text of previous literature as well as the citation graph that connects it to find relevant related material. We evaluate our technique with manual and automatic evaluation methods, and find an order of magnitude improvement in mean average precision as compared to a text similarity baseline."}
{"_id": "1b4a45c05620fa7ee4b8e8a9a2c4583f02a02a9c", "title": "Intelligent Conversational Bot for Massive Online Open Courses (MOOCs)", "text": "Massive Online Open Courses (MOOCs) which were introduced in 2008 has since drawn attention around the world for both its advantages as well as criticism on its drawbacks. One of the issues in MOOCs which is the lack of interactivity with the instructor has brought conversational bot into the picture to fill in this gap. In this study, a prototype of MOOCs conversational bot, MOOC-bot is being developed and integrated into MOOCs website to respond to the learner\u2019s enquiries using text or speech input. MOOC-bot is using the popular Artificial Intelligence Markup Language (AIML) to develop its knowledge base, leverage from AIML\u2019s capability to deliver appropriate responses and can be quickly adapted to new knowledge domains. The system architecture of MOOC-bot consists of knowledge base along with AIML interpreter, chat interface, MOOCs website and Web Speech API to provide speech recognition and speech synthesis capability. The initial MOOC-bot prototype has the general knowledge from the past Loebner Prize winner ALICE, course\u2019s frequent asked questions, and a course\u2019s content offered by Universiti Teknikal Malaysia Melaka (UTeM). The evaluation of MOOC-bot based on the past competition questions from Chatterbox Challenge (CBC) and Loebner Prize has shown that it was able to provide correct answers most of the time during the test and demonstrated the capability to prolong the conversation. The advantages of MOOCbot such as able to provide 24-hour service that can serve different time zones, able to have knowledge in multiple domains, and can be shared by multiple sites simultaneously have outweighed its existing limitations."}
{"_id": "998a18da106d8065ba4765e7ba381555d4016fd4", "title": "Executive Involvement and Participation in the Management of Information Technology", "text": "Executive support is often prescribed as critical for fully tapping the benefits of information technology (IT). However, few investigations have attempted to determine what type of executive support is likely or organizationally appropriate. This article puts forward alternative models of executive support. The models are tested by examining chief executive officers\u2019 behaviors in and perceptions of IT activities. CEOs and information systems executives are surveyed and further data collected from industry handbooks and from chairmen\u2019s annual letters to shareholders. The results suggest that executive involvement (a psychological state) is more strongly associated with the firm\u2019s progressive use of IT than executive participation (actual behaviors) in IT activities. Executive involvement is influenced by a CEO\u2019s participation, prevailing organizational conditions, and the executive\u2019s functional background. CEO\u2019s perceptions about the importance of IT in their firms were generally positive, although they participated in IT activities rather infrequently."}
{"_id": "25128668964e006fcea3f0ff5061c5fdd755c2e3", "title": "Multiscale Markov Decision Problems: Compression, Solution, and Transfer Learning", "text": "Many problems in sequential decision making and stochastic control often have natural multiscale structure: sub-tasks are assembled together to accomplish complex goals. Systematically inferring and leveraging hierarchical structure, particularly beyond a single level of abstraction, has remained a longstanding challenge. We describe a fast multiscale procedure for repeatedly compressing, or homogenizing, Markov decision processes (MDPs), wherein a hierarchy of sub-problems at different scales is automatically determined. Coarsened MDPs are themselves independent, deterministic MDPs, and may be solved using existing algorithms. The multiscale representation delivered by this procedure decouples sub-tasks from each other and can lead to substantial improvements in convergence rates both locally within sub-problems and globally across sub-problems, yielding significant computational savings. A second fundamental aspect of this work is that these multiscale decompositions yield new transfer opportunities across different problems, where solutions of sub-tasks at different levels of the hierarchy may be amenable to transfer to new problems. Localized transfer of policies and potential operators at arbitrary scales is emphasized. Finally, we demonstrate compression and transfer in a collection of illustrative domains, including examples involving discrete and continuous statespaces."}
{"_id": "b661882fefc9d9e59f26417d2e134e15e57bbccd", "title": "Three-phase power factor correction rectifier applied to wind energy conversion systems", "text": "This paper proposes a topology of three-phase controlled rectifier feasible for small wind energy conversion systems. This rectifier uses six wires of the generator and allows for the operation with high power factor, increasing the generator efficiency. One Cycle Control (OCC) was used in the rectifier control. This study is concerned with the operating principle, a design example, and some simulation and experimental results for a 5 kW prototype."}
{"_id": "ba60b642a558858325f50d38a345b6bb85114ce1", "title": "Imbalanced Deep Learning by Minority Class Incremental Rectification", "text": "Model learning from class imbalanced training data is a long-standing and significant challenge for machine learning. In particular, existing deep learning methods consider mostly either class balanced data or moderately imbalanced data in model training, and ignore the challenge of learning from significantly imbalanced training data. To address this problem, we formulate a class imbalanced deep learning model based on batch-wise incremental minority (sparsely sampled) class rectification by hard sample mining in majority (frequently sampled) classes during model training. This model is designed to minimise the dominant effect of majority classes by discovering sparsely sampled boundaries of minority classes in an iterative batch-wise learning process. To that end, we introduce a Class Rectification Loss (CRL) function that can be deployed readily in deep network architectures. Extensive experimental evaluations are conducted on three imbalanced person attribute benchmark datasets (CelebA, X-Domain, DeepFashion) and one balanced object category benchmark dataset (CIFAR-100). These experimental results demonstrate the performance advantages and model scalability of the proposed batch-wise incremental minority class rectification model over the existing state-of-the-art models for addressing the problem of imbalanced data learning."}
{"_id": "c40174aeb1be3998a2f8faae28d6689611bb7aad", "title": "Learning a dense multi-view representation for detection, viewpoint classification and synthesis of object categories", "text": "Recognizing object classes and their 3D viewpoints is an important problem in computer vision. Based on a part-based probabilistic representation [31], we propose a new 3D object class model that is capable of recognizing unseen views by pose estimation and synthesis. We achieve this by using a dense, multiview representation of the viewing sphere parameterized by a triangular mesh of viewpoints. Each triangle of viewpoints can be morphed to synthesize new viewpoints. By incorporating 3D geometrical constraints, our model establishes explicit correspondences among object parts across viewpoints. We propose an incremental learning algorithm to train the generative model. A cellphone video clip of an object is first used to initialize model learning. Then the model is updated by a set of unsorted training images without viewpoint labels. We demonstrate the robustness of our model on object detection, viewpoint classification and synthesis tasks. Our model performs superiorly to and on par with state-of-the-art algorithms on the Savarese et al. 2007 and PASCAL datasets in object detection. It outperforms all previous work in viewpoint classification and offers promising results in viewpoint synthesis."}
{"_id": "11cb21527a463cad75c2334f0829532ec14b51cb", "title": "Data-swapping --a Technique for Disclosure Control", "text": "In recent years there has been increasing concern about the confidentiality of computerized databases. This has led to a fast growing interest in the development of techniques for controlling the disclosure of information from such databases about individuals, both natural and legal persons. 3 Much of this interest has been focused on the release of statistical tabulations and microdata files. In this paper, we introduce a new technique, data-swapping, that addresses these interests. This technique can be used to both produce microdata and release statistical tabulations so that confidentiality is not violated. The ideas presented in this report are tentative and have not been tested on real-life data. The illustrations provided in the text are based on small-scale numerical experiments, and serve to illustrate the use of data-swapping for disclosure control. Before the proposed techniques can be used on a large scale, more research will be needed. Directions for this research are pointed out throughout the report. This report is organized in two parts. In Part I, we present the underlying idea of the technique. In Part II, we present some theory and methods."}
{"_id": "dfba505f7e15d53318e5edb74a7f0547f5c683de", "title": "Sequence to sequence learning for unconstrained scene text recognition", "text": "In this work we present a state-of-the-art approach for unconstrained natural scene text recognition. We propose a cascade approach that incorporates a convolutional neural network (CNN) architecture followed by a long short term memory model (LSTM). The CNN learns visual features for the characters and uses them with a softmax layer to detect sequence of characters. While the CNN gives very good recognition results, it does not model relation between characters, hence gives rise to false positive and false negative cases (confusing characters due to visual similarities like\"g\"and\"9\", or confusing background patches with characters; either removing existing characters or adding non-existing ones) To alleviate these problems we leverage recent developments in LSTM architectures to encode contextual information. We show that the LSTM can dramatically reduce such errors and achieve state-of-the-art accuracy in the task of unconstrained natural scene text recognition. Moreover we manually remove all occurrences of the words that exist in the test set from our training set to test whether our approach will generalize to unseen data. We use the ICDAR 13 test set for evaluation and compare the results with the state of the art approaches [11, 18]. We finally present an application of the work in the domain of for traffic monitoring."}
{"_id": "5642eb6d333ca0d9e3772ca007f020f6e533c46c", "title": "Persuasive System Design Does Matter: A Systematic Review of Adherence to Web-Based Interventions", "text": "BACKGROUND\nAlthough web-based interventions for promoting health and health-related behavior can be effective, poor adherence is a common issue that needs to be addressed. Technology as a means to communicate the content in web-based interventions has been neglected in research. Indeed, technology is often seen as a black-box, a mere tool that has no effect or value and serves only as a vehicle to deliver intervention content. In this paper we examine technology from a holistic perspective. We see it as a vital and inseparable aspect of web-based interventions to help explain and understand adherence.\n\n\nOBJECTIVE\nThis study aims to review the literature on web-based health interventions to investigate whether intervention characteristics and persuasive design affect adherence to a web-based intervention.\n\n\nMETHODS\nWe conducted a systematic review of studies into web-based health interventions. Per intervention, intervention characteristics, persuasive technology elements and adherence were coded. We performed a multiple regression analysis to investigate whether these variables could predict adherence.\n\n\nRESULTS\nWe included 101 articles on 83 interventions. The typical web-based intervention is meant to be used once a week, is modular in set-up, is updated once a week, lasts for 10 weeks, includes interaction with the system and a counselor and peers on the web, includes some persuasive technology elements, and about 50% of the participants adhere to the intervention. Regarding persuasive technology, we see that primary task support elements are most commonly employed (mean 2.9 out of a possible 7.0). Dialogue support and social support are less commonly employed (mean 1.5 and 1.2 out of a possible 7.0, respectively). When comparing the interventions of the different health care areas, we find significant differences in intended usage (p=.004), setup (p<.001), updates (p<.001), frequency of interaction with a counselor (p<.001), the system (p=.003) and peers (p=.017), duration (F=6.068, p=.004), adherence (F=4.833, p=.010) and the number of primary task support elements (F=5.631, p=.005). Our final regression model explained 55% of the variance in adherence. In this model, a RCT study as opposed to an observational study, increased interaction with a counselor, more frequent intended usage, more frequent updates and more extensive employment of dialogue support significantly predicted better adherence.\n\n\nCONCLUSIONS\nUsing intervention characteristics and persuasive technology elements, a substantial amount of variance in adherence can be explained. Although there are differences between health care areas on intervention characteristics, health care area per se does not predict adherence. Rather, the differences in technology and interaction predict adherence. The results of this study can be used to make an informed decision about how to design a web-based intervention to which patients are more likely to adhere."}
{"_id": "fdb15e965c4d0c7a667b37651f63aef10b376d7e", "title": "Age-related changes in grey and white matter structure throughout adulthood", "text": "Normal ageing is associated with gradual brain atrophy. Determining spatial and temporal patterns of change can help shed light on underlying mechanisms. Neuroimaging provides various measures of brain structure that can be used to assess such age-related change but studies to date have typically considered single imaging measures. Although there is consensus on the notion that brain structure deteriorates with age, evidence on the precise time course and spatial distribution of changes is mixed. We assessed grey matter (GM) and white matter (WM) structure in a group of 66 adults aged between 23 and 81. Multimodal imaging measures included voxel-based morphometry (VBM)-style analysis of GM and WM volume and diffusion tensor imaging (DTI) metrics of WM microstructure. We found widespread reductions in GM volume from middle age onwards but earlier reductions in GM were detected in frontal cortex. Widespread age-related deterioration in WM microstructure was detected from young adulthood onwards. WM decline was detected earlier and more sensitively using DTI-based measures of microstructure than using markers of WM volume derived from conventional T1-weighted imaging."}
{"_id": "2490306ed749acb8976155e8ba8b1fc3461b6c9c", "title": "The effects of a 6-week plyometric training program on agility.", "text": "The purpose of the study was to determine if six weeks of plyometric training can improve an athlete's agility. Subjects were divided into two groups, a plyometric training and a control group. The plyometric training group performed in a six week plyometric training program and the control group did not perform any plyometric training techniques. All subjects participated in two agility tests: T-test and Illinois Agility Test, and a force plate test for ground reaction times both pre and post testing. Univariate ANCOVAs were conducted to analyze the change scores (post - pre) in the independent variables by group (training or control) with pre scores as covariates. The Univariate ANCOVA revealed a significant group effect F2,26 = 25.42, p=0.0000 for the T-test agility measure. For the Illinois Agility test, a significant group effect F2,26 = 27.24, p = 0.000 was also found. The plyometric training group had quicker posttest times compared to the control group for the agility tests. A significant group effect F2,26 = 7.81, p = 0.002 was found for the Force Plate test. The plyometric training group reduced time on the ground on the posttest compared to the control group. The results of this study show that plyometric training can be an effective training technique to improve an athlete's agility. Key PointsPlyometric training can enhance agility of athletes.6 weeks of plyometric training is sufficient to see agility results.Ground reaction times are decreased with plyometric training."}
{"_id": "dea9b9619936bb00cf845cdf4478435da5363057", "title": "Shift: A Zero FLOP, Zero Parameter Alternative to Spatial Convolutions", "text": "Neural networks rely on convolutions to aggregate spatial information. However, spatial convolutions are expensive in terms of model size and computation, both of which grow quadratically with respect to kernel size. In this paper, we present a parameter-free, FLOP-free \"shift\" operation as an alternative to spatial convolutions. We fuse shifts and point-wise convolutions to construct end-to-end trainable shift-based modules, with a hyperparameter characterizing the tradeoff between accuracy and efficiency. To demonstrate the operation's efficacy, we replace ResNet's 3x3 convolutions with shift-based modules for improved CIFAR10 and CIFAR100 accuracy using 60% fewer parameters; we additionally demonstrate the operation's resilience to parameter reduction on ImageNet, outperforming ResNet family members. We finally show the shift operation's applicability across domains, achieving strong performance with fewer parameters on image classification, face verification and style transfer."}
{"_id": "360256f07d4239964d929dcd3d8df0415cf3fbf7", "title": "Weighted-to-Spherically-Uniform Quality Evaluation for Omnidirectional Video", "text": "Omnidirectional video records a scene in all directions around one central position. It allows users to select viewing content freely in all directions. Assuming that viewing directions are uniformly distributed, the isotropic observation space can be regarded as a sphere. Omnidirectional video is commonly represented by different projection formats with one or multiple planes. To measure objective quality of omnidirectional video in observation space more accurately, a weighted-to-spherically-uniform quality evaluation method is proposed in this letter. The error of each pixel on projection planes is multiplied by a weight to ensure the equivalent spherical area in observation space, in which pixels with equal mapped spherical area have the same influence on distortion measurement. Our method makes the quality evaluation results more accurate and reliable since it avoids error propagation caused by the conversion from resampling representation space to observation space. As an example of such quality evaluation method, weighted-to-spherically-uniform peak signal-to-noise ratio is described and its performance is experimentally analyzed."}
{"_id": "ba1637705ac683f77a10c8112ed9b44449bd859e", "title": "Comparative Study and Enhancement of Camera Tampering Detection Algorithms", "text": "Recently the use of video surveillance systems is widely increasing. Different places are equipped by camera surveillances such as hospitals, schools, airports, museums and military places in order to ensure the safety and security of the persons and their property. Therefore it becomes significant to guarantee the proper working of these systems. Intelligent video surveillance systems equipped by sophisticated digital camera can analyze video information's and automatically detect doubtful actions. The camera tampering detection algorithms may indicate that accidental or suspicious activities have occurred and that causes the abnormality works of the video surveillance. It uses several techniques based on image processing and computer vision. In this paper, comparative study of performance of four algorithms that can detect abnormal disturbance for video surveillance is presented also a new proposed method is exposed."}
{"_id": "449f0be722c178f3d0a3ee4454c4d1cb0fe31b73", "title": "Deception : The Role of Consequences", "text": "Deception is part of many economic interactions. Business people, politicians, diplomats, lawyers, and students in the experimental laboratory who make use of private information do not always do so honestly. This observation indicates that behavior often rejects the moral approach to deception. As St. Augustine wrote, \u201cTo me, however, it seems certain that every lie is a sin. . . \u201d (St. Augustine, 421). Later, philosophers like Immanuel Kant (1787) again adopted this uncompromising moral stance when arguing against lying. At the other extreme, economic theory is built on the assumption of \u201chomo economicus,\u201d a figure who acts selfishly and is unconcerned about the well-being of others. An implication of this assumption is that lies will be told whenever it is beneficial for the liar, regardless of their effect on the other party. Another implication is that there is no negative outcome associated with lying per se. This assumption is very useful in many economic models. Consider contract theory, where it is assumed that without an explicit contract, neither side will fulfill its respective obligations. For example, George Akerlof\u2019s (1970) paper on asymmetric information and the market for lemons assumes that sellers of used cars will always lie if it is in their benefit to do so. In the mechanism design literature (e.g., Bengt Holmstrom, 1979), the standard assumption is that people will tell the truth only if this is incentive-compatible given material outcomes. In the literature on tax evasion, the choice of whether to avoid paying taxes is considered a decision under uncertainty; cost is treated as a product of the probability of being caught and the cost of punishment, whereas benefit is simply the money saved by avoiding payment. However, there is no cost associated with the very act of lying (Michael Alingham and Agnar Sandmo, 1972). Another example is the game theoretic treatment of \u201ccheap talk\u201d (Crawford and Joel Sobel, 1982). An intermediate approach is taken by utilitarian philosophers (e.g., Jeremy Bentham, 1789). Utilitarianism prescribes that, when choosing whether to lie, one should weigh benefits against harm, and happiness against unhappiness. As Martin Luther stated, \u201cWhat harm would it do, if a man told a good strong lie for the sake of the good and for the Christian church. . . a lie out of necessity, a useful lie, a helpful lie, such lies would not be against God, he would accept them.\u201d Similarly to the economic theory approach, this type of calculation implies that lies, apart from their resultant harm and benefit, are in themselves neutral. A lie and a truthful statement that achieve the same monetary payoffs (for both sides) are considered * Graduate School of Business, University of Chicago, 5807 South Woodlawn Avenue, Chicago, IL 60637 (e-mail: uri.gneezy.gsb.uchicago.edu). I thank Douglas Bernheim and two anonymous reviewers for insightful comments that considerably improved the paper. I also thank Andreas Blume, Gary Charness, Rachel Croson, Martin Dufwenberg, Georg Kirchsteiger, David Levine, Muriel Niederle, Yuval Rottenstreich, Maurice Schweitzer, Richard Thaler, George Wu, and seminar participants at numerous universities for their comments and suggestions. Ira Leybman provided valuable help in running the experiment. 
I became interested in deception when my father was terminally ill and his physicians created in him a belief that they considered to be untrue. I dedicate this paper to his memory. 1 Important deviations from this assumption in economic modeling are found in Kenneth Arrow\u2019s (1972) discussion of trust, Gary Becker\u2019s (1976) modeling of altruistic preferences, and Akerlof\u2019s (1982) study of the fair-wage hypothesis. For a general discussion, see Becker (1993): \u201cThe economic approach I refer to does not assume that individuals are motivated solely by selfishness or material gain. It is a method of analysis, not an assumption about particular motivations. Along with others, I have tried to pry economists away from narrow assumptions about self-interest. Behavior is driven by a much richer set of values and preferences\u201d (p. 385). 2 Note that this does not mean that a completely selfish person will always lie. There may be strategic reasons not to lie. For example, see the David Kreps and Robert Wilson (1982) discussion of reputation and imperfect information; see also Vincent P. Crawford (2003). 3 Cited by his secretary, in a letter in Max Lenz, ed., Briefwechsel Landgraf Phillips des Grossmuthigen von Hessen mit Bucer, Vol. 1."}
{"_id": "6d5faca2dc2613067d0399451c0a91c05ed5f5bd", "title": "Ensemble forecasting of species distributions.", "text": "Concern over implications of climate change for biodiversity has led to the use of bioclimatic models to forecast the range shifts of species under future climate-change scenarios. Recent studies have demonstrated that projections by alternative models can be so variable as to compromise their usefulness for guiding policy decisions. Here, we advocate the use of multiple models within an ensemble forecasting framework and describe alternative approaches to the analysis of bioclimatic ensembles, including bounding box, consensus and probabilistic techniques. We argue that, although improved accuracy can be delivered through the traditional tasks of trying to build better models with improved data, more robust forecasts can also be achieved if ensemble forecasts are produced and analysed appropriately."}
{"_id": "23d4acbd427488dc1f1cdd8d5decc5881e44d7df", "title": "Robot dynamics and control", "text": "This chapter presents an introduction to the dynamics and control of robot manipulators. We derive the equations of motion for a general open-chain manipulator and, using the structure present in the dynamics , construct control laws for asymptotic tracking of a desired trajectory. In deriving the dynamics, we will make explicit use of twists for representing the kinematics of the manipulator and explore the role that the kinematics play in the equations of motion. We assume some familiarity with dynamics and control of physical systems."}
{"_id": "612114263de5f9bfa8a3b27d943814a9a4a3000b", "title": "Glove Based User Interaction Techniques for Augmented Reality in an Outdoor Environment", "text": "This paper presents a set of pinch glove-based user interface tools for an outdoor wearable augmented reality computer system. The main form of user interaction is the use of hand and head gestures. We have developed a set of augmented reality information presentation techniques. To support direct manipulation, the following three selection techniques have been implemented: two-handed framing, line of sight and laser beam. A new glove-based text entry mechanism has been developed to support symbolic manipulation. A scenario for a military logistics task is described to illustrate the functionality of this form of interaction."}
{"_id": "5b1b03749b6ec4d6140e519555f0d77441cd35af", "title": "A Survey of Augmented Reality", "text": "This paper surveys the field of augmented reality (AR), in which 3D virtual objects are integrated into a 3D real environment in real time. It describes the medical, manufacturing, visualization, path planning, entertainment, and military applications that have been explored. This paper describes the characteristics of augmented reality systems, including a detailed discussion of the tradeoffs between optical and video blending approaches. Registration and sensing errors are two of the biggest problems in building effective augmented reality systems, so this paper summarizes current efforts to overcome these problems. Future directions and areas requiring further research are discussed. This survey provides a starting point for anyone interested in researching or using augmented reality."}
{"_id": "ece63ca6b755ba04bda38d37550f432123f3c3dc", "title": "Design of High Performance Permanent-Magnet Synchronous Wind Generators", "text": "This paper is devoted to the analysis and design of high performance permanent-magnet synchronous wind generators (PSWGs). A systematic and sequential methodology for the design of PMSGs is proposed with a high performance wind generator as a design model. Aiming at high induced voltage, low harmonic distortion as well as high generator efficiency, optimal generator parameters such as pole-arc to pole-pitch ratio and stator-slot-shoes dimension, etc. are determined with the proposed technique using Maxwell 2-D, Matlab software and the Taguchi method. The proposed double three-phase and six-phase winding configurations, which consist of six windings in the stator, can provide evenly distributed current for versatile applications regarding the voltage and current demands for practical consideration. Specifically, windings are connected in series to increase the output voltage at low wind speed, and in parallel during high wind speed to generate electricity even when either one winding fails, thereby enhancing the reliability as well. A PMSG is designed and implemented based on the proposed method. When the simulation is performed with a 6 \u03a9 load, the output power for the double three-phase winding and six-phase winding are correspondingly 10.64 and 11.13 kW. In addition, 24 \u03a9 load experiments show that the efficiencies of double three-phase winding and six-phase winding are 96.56% and 98.54%, respectively, verifying the proposed high performance operation. OPEN ACCESS Energies 2014, 7 7106"}
{"_id": "e72bef4333c4655376c5c1b38b5c9eae42aa6d64", "title": "Fog Computing for the Internet of Mobile Things: Issues and Challenges", "text": "The Internet of Things (IoT) conceives a world where everyday objects are able to join the Internet and exchange data as well as process, store, collect them from the surrounding environment, and actively intervene on it. An unprecedented number of services may be envisioned by exploiting the Internet of Things. Fog Computing, which is also known as Edge Computing, was proposed in 2012 as the ideal paradigm to support the resource-constrained IoT devices in data processing and information delivery. Indeed, the Fog, which does not replace the centralized Cloud but cooperates with it, distributes Cloud Computing technologies and principles anywhere along the Cloud-to-Things continuum and particularly at the network edge. The Fog proximity to the IoT devices allows for several advantages that must be continuously guaranteed, also when end devices move from one place to another. In this paper, we aim at examining in depth what it means to provide mobility support in a Fog environment and at investigating what are the main challenges to be faced. Besides, in order to highlight the importance of this topic in everyday life, we provide the reader with three scenarios where there is an integration between the IoT and Fog Computing, and in which mobility support is essential. We finally point out the main research directions in the field."}
{"_id": "b2f05880574b5369ba449698ad960094dc60578c", "title": "Evaluating CNN and LSTM for Web Attack Detection", "text": "Web attack detection is the key task for network security. To tackle this hard problem, this paper explores the deep learning methods, and evaluates convolutional neural network, long-short term memory and their combination method. By comparing with the traditional methods, experimental results show that deep learning methods can greatly outperform the traditional methods. Besides, we also analyze the different factors influencing the performance. This work can be a good guidance for researcher to design network architecture and parameters for detecting web attack in practice."}
{"_id": "3a9e3301d0c0322579b319d81ea7f3e1ca9812dc", "title": "Optimising ring oscillator-based true random number generators concept on FPGA", "text": "Random Number Generators are widely used in cryptographic applications (symmetric and asymmetric encryption, digital certificate generation, etc.) and other related areas. True Random Number Generators (TRNG) are based on physical non-deterministic phenomenon. Most of these generators are based on a large number of interconnected high frequency ring oscillators (RO), which are most commonly implemented in FPGA. This paper describes an optimized solution which increases the speed and the complexity of these ring oscillator-based TRNG. The proposed solution combines a personalized approach on the Ring Oscillator scheme, based on preprocessed raw data and a well-known one, based on the desynchronization of a large number of free running ROs, therefore minimizing the resource used within the FPGA. The paper covers the most known statistical testes (provided by the National Institute of Standards and Technology - NIST) applied on a large amount of data and proves the stability of a generator by providing positive experimental results that were obtained from measurements using different operating frequencies."}
{"_id": "4529c536c241e0e1deaf6f8d03c9e340684e9412", "title": "Profanity in media associated with attitudes and behavior regarding profanity use and aggression.", "text": "OBJECTIVE\nWe hypothesized that exposure to profanity in media would be directly related to beliefs and behavior regarding profanity and indirectly to aggressive behavior.\n\n\nMETHODS\nWe examined these associations among 223 adolescents attending a large Midwestern middle school. Participants completed a number of questionnaires examining their exposure to media, attitudes and behavior regarding profanity, and aggressive behavior.\n\n\nRESULTS\nResults revealed a positive association between exposure to profanity in multiple forms of media and beliefs about profanity, profanity use, and engagement in physical and relational aggression. Specifically, attitudes toward profanity use mediated the relationship between exposure to profanity in media and subsequent behavior involving profanity use and aggression.\n\n\nCONCLUSIONS\nThe main hypothesis was confirmed, and implications for the rating industry and research field are discussed."}
{"_id": "be3e0615b6a14cdc0f6a73f324722affe8d088da", "title": "An architecture pattern for safety critical automated driving applications: Design and analysis", "text": "Introduction of automated driving increases complexity of automotive systems. As a result, architecture design becomes a major concern for ensuring non-functional requirements such as safety, and modifiability. In the ISO 26262 standard, architecture patterns are recommended for system development. However, the existing architecture patterns may not be able to answer requirements of automated driving completely. When applying these patterns in the automated driving context, modification and analysis of these patterns are needed. In this paper, we present a novel architecture pattern for safety critical automated driving functions. In addition, we propose a generic approach to compare our pattern with a number of existing ones. The comparison results can be used as a basis for project specific architectural decisions. Our Safety Channel pattern is validated by its implementation for a real-life truck platooning application."}
{"_id": "d098c03cd4e487766f7737c7abb0b4a740779682", "title": "Visual Tracking via Boolean Map Representations", "text": "In this paper, we present a simple yet effective Boolean map based representation that exploits connectivity cues for visual tracking. We describe a target object with histogram of oriented gradients and raw color features, of which each one is characterized by a set of Boolean maps generated by uniformly thresholding their values. The Boolean maps effectively encode multi-scale connectivity cues of the target with different granularities. The fine-grained Boolean maps capture spatially structural details that are effective for precise target localization while the coarse-grained ones encode global shape information that are robust to large target appearance variations. Finally, all the Boolean maps form together a robust representation that can be approximated by an explicit feature map of the intersection kernel, which is fed into a logistic regression classifier with online update, and the target location is estimated within a particle filter framework. The proposed representation scheme is computationally efficient and facilitates achieving favorable performance in terms of accuracy and robustness against the state-of-the-art tracking methods on a large benchmark dataset of 50 image sequences."}
{"_id": "ba2c3ee8422f7cb28d184d0abd7ca6c27669ce8a", "title": "Dynamic Computation Offloading for Mobile-Edge Computing With Energy Harvesting Devices", "text": "Mobile-edge computing (MEC) is an emerging paradigm to meet the ever-increasing computation demands from mobile applications. By offloading the computationally intensive workloads to the MEC server, the quality of computation experience, e.g., the execution latency, could be greatly improved. Nevertheless, as the on-device battery capacities are limited, computation would be interrupted when the battery energy runs out. To provide satisfactory computation performance as well as achieving green computing, it is of significant importance to seek renewable energy sources to power mobile devices via energy harvesting (EH) technologies. In this paper, we will investigate a green MEC system with EH devices and develop an effective computation offloading strategy. The execution cost, which addresses both the execution latency and task failure, is adopted as the performance metric. A low-complexity online algorithm is proposed, namely, the Lyapunov optimization-based dynamic computation offloading algorithm, which jointly decides the offloading decision, the CPU-cycle frequencies for mobile execution, and the transmit power for computation offloading. A unique advantage of this algorithm is that the decisions depend only on the current system state without requiring distribution information of the computation task request, wireless channel, and EH processes. The implementation of the algorithm only requires to solve a deterministic problem in each time slot, for which the optimal solution can be obtained either in closed form or by bisection search. Moreover, the proposed algorithm is shown to be asymptotically optimal via rigorous analysis. Sample simulation results shall be presented to corroborate the theoretical analysis as well as validate the effectiveness of the proposed algorithm."}
{"_id": "0f15095ad20a2f83842d95cc9d29f147f0e119bd", "title": "On Fast Dropout and its Applicability to Recurrent Networks", "text": "Recurrent Neural Networks (RNNs) are rich models for the processing of sequential data. Recent work on advancing the state of the art has been focused on the optimization or modelling of RNNs, mostly motivated by adressing the problems of the vanishing and exploding gradients. The control of overfitting has seen considerably less attention. This paper contributes to that by analyzing fast dropout, a recent regularization method for generalized linear models and neural networks from a back-propagation inspired perspective. We show that fast dropout implements a quadratic form of an adaptive, per-parameter regularizer, which rewards large weights in the light of underfitting, penalizes them for overconfident predictions and vanishes at minima of an unregularized training loss. The derivatives of that regularizer are exclusively based on the training error signal. One consequence of this is the absence of a global weight attractor, which is particularly appealing for RNNs, since the dynamics are not biased towards a certain regime. We positively test the hypothesis that this improves the performance of RNNs on four musical data sets."}
{"_id": "550ff91d102b2493d3443566ec811639bb3e2f9c", "title": "The structure of creative cognition in the human brain", "text": "Creativity is a vast construct, seemingly intractable to scientific inquiry-perhaps due to the vague concepts applied to the field of research. One attempt to limit the purview of creative cognition formulates the construct in terms of evolutionary constraints, namely that of blind variation and selective retention (BVSR). Behaviorally, one can limit the \"blind variation\" component to idea generation tests as manifested by measures of divergent thinking. The \"selective retention\" component can be represented by measures of convergent thinking, as represented by measures of remote associates. We summarize results from measures of creative cognition, correlated with structural neuroimaging measures including structural magnetic resonance imaging (sMRI), diffusion tensor imaging (DTI), and proton magnetic resonance spectroscopy (1H-MRS). We also review lesion studies, considered to be the \"gold standard\" of brain-behavioral studies. What emerges is a picture consistent with theories of disinhibitory brain features subserving creative cognition, as described previously (Martindale, 1981). We provide a perspective, involving aspects of the default mode network (DMN), which might provide a \"first approximation\" regarding how creative cognition might map on to the human brain."}
{"_id": "1e63054b3836795a7424c3684f7f0846f9a20d46", "title": "From Partition-Based Clustering to Density-Based Clustering: Fast Find Clusters With Diverse Shapes and Densities in Spatial Databases", "text": "Spatial data clustering has played an important role in the knowledge discovery in spatial databases. However, due to the increasing volume and diversity of data, conventional spatial clustering methods are inefficient even on moderately large data sets, and usually fail to discover clusters with diverse shapes and densities. To address these challenges, we propose a two-phase clustering method named KMDD (clustering by combining K-means with density and distance-based method) to fast find clusters with diverse shapes and densities in spatial databases. In the first phase, KMDD uses a partition-based algorithm (K-means) to cluster the data set into several relatively small spherical or ball-shaped subclusters. After that, each subcluster is given a local density; to merge subclusters, KMDD utilizes the idea that genuine cluster cores are characterized by a higher density than their neighbor subclusters and by a relatively large distance from subclusters with higher densities. Extensive experiments on both synthetic and real-world data sets demonstrate that the proposed algorithm has a near-linear time complexity with respect to the data set size and dimension, and has the capability to find clusters with diverse shapes and densities."}
{"_id": "f3c509e14c9294174bd5a109b86fdc5e861722c7", "title": "Fair Client Puzzles from the Bitcoin Blockchain", "text": "Client puzzles have been proposed as a mechanism for proving legitimate intentions by providing \u201cproofs of work\u201d, which can be applied to discourage malicious usage of resources. A typical problem of puzzle constructions is the difference in expected solving time on different computing platforms. We call puzzles which can be solved independently of client computing resources fair client puzzles. We propose a construction for client puzzles requiring widely distributed computational effort for their solution. These puzzles can be solved using the mining process of Bitcoin, or similar cryptocurrencies. Adapting existing definitions, we show that our puzzle construction satisfies formal requirements of client puzzles under reasonable assumptions. We describe a way of transforming our client puzzles for use in denial of service scenarios and demonstrate a practical construction."}
{"_id": "19937e5e7accc90d00334fcd6199ce3bf7bf2273", "title": "Topology and Analysis of Three-Phalanx COSA Finger Based on Linkages for Humanoid Robot Hands", "text": "The design of a novel linkage three-phalanx finger was proposed in this paper. The finger is based on the concept of COSA (coupling and self-adaptation) and also two-phalanx COSA finger. As a combination of rigid coupled finger and self-adaptive under-actuated finger, the COSA finger has comprehensive grasping ability with human-like motion. Available topology designs were discussed in this paper and then mechanical design in details is given. Motion and grasping patterns of the finger were illustrated and compared with traditional coupled finger and under-actuated finger. Moreover, stability of grasp was simulated by static analysis. The three-phalanx finger with advantages of coupling and self-adaptation can be widely applied in hands for prosthetic limbs or humanoid robots."}
{"_id": "29cd61d634786dd3b075eeeb06349a98ea0535c6", "title": "Food insufficiency and American school-aged children's cognitive, academic, and psychosocial development.", "text": "OBJECTIVE\nThis study investigates associations between food insufficiency and cognitive, academic, and psychosocial outcomes for US children and teenagers ages 6 to 11 and 12 to 16 years.\n\n\nMETHODS\nData from the Third National Health and Nutrition Examination Survey (NHANES III) were analyzed. Children were classified as food-insufficient if the family respondent reported that his or her family sometimes or often did not get enough food to eat. Regression analyses were conducted to test for associations between food insufficiency and cognitive, academic, and psychosocial measures in general and then within lower-risk and higher-risk groups. Regression coefficients and odds ratios for food insufficiency are reported, adjusted for poverty status and other potential confounding factors.\n\n\nRESULTS\nAfter adjusting for confounding variables, 6- to 11-year-old food-insufficient children had significantly lower arithmetic scores and were more likely to have repeated a grade, have seen a psychologist, and have had difficulty getting along with other children. Food-insufficient teenagers were more likely to have seen a psychologist, have been suspended from school, and have had difficulty getting along with other children. Further analyses divided children into lower-risk and higher-risk groups. The associations between food insufficiency and children's outcomes varied by level of risk.\n\n\nCONCLUSIONS\nThe results demonstrate that negative academic and psychosocial outcomes are associated with family-level food insufficiency and provide support for public health efforts to increase the food security of American families."}
{"_id": "379bc0407dc302da7a8a975ed98a5c74492301ef", "title": "Exploring the Influence of Online Consumers \u2019 Perception on Purchase Intention as Exemplified with an Online Bookstore", "text": "The purpose of this study is to use structural equation modeling (SEM) to explore the influence of online bookstore consumers\u2019 perception on their purchase intention. Through literature review, four constructs were used to establish a causal relationship between perception of online shopping and consumers\u2019 purchase intention. Questionnaires based on these four constructs were designed and distributed to the customers of an online bookstore; AMOS software as analytical tool was used to build and confirm a SEM model. Results of this study show that product perception, shopping experience, and service quality have positive and significant influence on consumers\u2019 purchase intention, but perceived risk has negative influence on consumers\u2019 purchase intention, and shopping experience is most important."}
{"_id": "d06d7c076d61ea47ce473a3b60c156bdc148b190", "title": "Application of a Lightweight Encryption Algorithm to a Quantized Speech Image for Secure IoT", "text": "The IoT Internet of Things being a promising technology of the future. It is expected to connect billions of devices. The increased communication number is expected to generate data mountain and the data security can be a threat. The devices in the architecture are fundamentally smaller in size and low powered. In general, classical encryption algorithms are computationally expensive and this due to their complexity and needs numerous rounds for encrypting, basically wasting the constrained energy of the gadgets. Less complex algorithm, though, may compromise the desired integrity. In this paper we apply a lightweight encryption algorithm named as Secure IoT (SIT) to a quantized speech image for Secure IoT. It is a 64-bit block cipher and requires 64-bit key to encrypt the data. This quantized speech image is constructed by first quantizing a speech signal and then splitting the quantized signal into frames. Then each of these frames is transposed for obtaining the different columns of this quantized speech image. Simulations result shows the algorithm provides substantial security in just five encryption rounds. Keywords\u2014IoT, Security, Encryption, quantized speech image, SNR, PESQ, Histogram, Entropy, Correlation."}
{"_id": "c8920cc159c4db10653c69906e4b954c79dcd4dc", "title": "Conceptual data structures for personal knowledge management", "text": "Structured Abstract Purpose \u2013 Designing a model and tools that are (a) capable of representing and handling personal knowledge in different degrees of structuredness and formalisation and (b) usable and extensible by end-users. Design/methodology/approach \u2013 This paper presents the result of analysing literature and various data-models and formalism used to structure information on the desktop. Findings \u2013 The unified data model (CDS) is capable of representing structures from various information tools, e. g. documents, file system, hypertext, tagging, and mind maps. The five knowledge axes of CDS are: identity, order, hierarchy, annotation and linking. Research limitations / implications \u2013 The CDS model is based on text. Extensions for multimedia annotations have not been investigated. Practical implications \u2013 Future PKM tools should take the mentioned shortcoming of existing PKM tools into account. Implementing the CDS model can be a way to make PKM tools interoperable. Originality/value \u2013 This paper presents research combining cognitive psychology, personal knowledge management and semantic web technologies. The CDS model shows a way to let end-users work on different levels of granularity and different levels of formality in one environment. Introduction \" The most important contribution of management in the 20th century was to increase manual worker productivity fifty-fold. The most important contribution of management in the 21st century will be to increase knowledge worker productivity \u2013 hopefully by the same percentage. [...] The methods, however, are totally different from those that increased the productivity of manual workers. \" Drucker (1999a, p. 79) What might these methods be? The field of knowledge management investigates since 1995 (Stankosky, 2005) how people and knowledge work together. North (2007) defines knowledge work as work based on knowledge with an immaterial result; value creation is based on processing, generating and communicating knowledge. Polanyi (1966) makes a distinction between explicit knowledge encoded in artefacts such as books or web pages, and tacit knowledge which resides in the individual. The SECI-model of Nonaka (1994) describes knowledge transfers between humans and artefacts. The field of knowledge management has focused on knowledge transfer between people, either via socialisation or via externalised artefacts. The high expectations of central enterprise knowledge repositories have often not been met (Braganza and Mollenkramer, 2002). The following wave of expert finders and corporate white pages focused mostly on connecting the right people and let them communicate. Today, knowledge workers are flooded with information (Alvarado et al., 2003). The field of Personal \u2026"}
{"_id": "7fab17ef7e25626643f1d55257a3e13348e435bd", "title": "Age Progression/Regression by Conditional Adversarial Autoencoder", "text": "If I provide you a face image of mine (without telling you the actual age when I took the picture) and a large amount of face images that I crawled (containing labeled faces of different ages but not necessarily paired), can you show me what I would look like when I am 80 or what I was like when I was 5? The answer is probably a No. Most existing face aging works attempt to learn the transformation between age groups and thus would require the paired samples as well as the labeled query image. In this paper, we look at the problem from a generative modeling perspective such that no paired samples is required. In addition, given an unlabeled image, the generative model can directly produce the image with desired age attribute. We propose a conditional adversarial autoencoder (CAAE) that learns a face manifold, traversing on which smooth age progression and regression can be realized simultaneously. In CAAE, the face is first mapped to a latent vector through a convolutional encoder, and then the vector is projected to the face manifold conditional on age through a deconvolutional generator. The latent vector preserves personalized face features (i.e., personality) and the age condition controls progression vs. regression. Two adversarial networks are imposed on the encoder and generator, respectively, forcing to generate more photo-realistic faces. Experimental results demonstrate the appealing performance and flexibility of the proposed framework by comparing with the state-of-the-art and ground truth."}
{"_id": "6fa6677bed721213ed9a56689be710f71468e776", "title": "Social Recommendation in Location Based Social Network Using Text Mining", "text": "The rapid growth of location-based social networks (LBSNs) has greatlyenriched people\u2019s urban experience through social media and attracted increasingnumber of users in recent years. Typical location-based social networking sites allowusers to \u201ccheck in\u201d at a physical place and share the location with their onlinefriends, and therefore bridge the gap between the real world and online social networks.The purpose is that on the finalextraction of information, the results are worked for SocialRecommendation. However, the recommender systems presentsome failures in the filtering of the results and the way theyaresuggested to users.The method used here intendsto create an application thatperforms the Data Mining with textual information linked togeo-location data. The availability of large amounts of geographical and social data on LBSNsprovides an unprecedented opportunity to study human mobile behaviour throughdata analysis in a spatial-temporal-social context, enabling a variety of locationbasedservices, from mobile marketing to disaster relief.Data mining (sometimes called data or knowledge discovery) is the process of analyzing data from different perspectives and summarizing it into useful information information that can be used to increase revenue, cuts costs, or both."}
{"_id": "965b93aefe3a7877f4b400c4cad9dea047c445b7", "title": "A deep neuro-fuzzy method for multi-label malware classification and fuzzy rules extraction", "text": "Modern malware represents an emerging threat not only to individual privacy, but also to organizational security. Over the last few decades, a number of malware families have been codified, differentiated by their behaviour and functionality. The majority of machine learning methods work well with the general benign-malicious classification yet fail to distinguish among many classes. Deep learning is known to be successful for such applications, Deep Neural Networks in particular. There is no way however to extract an understandable model in order to characterize each malware family separately. Thus, the goal of this paper is many-fold: we investigated multinomial malware categorization using static and dynamic behavioural features extracted from Windows Portable Executables (PE32) malware collection as of the end of 2015. Furthermore, we proposed a novel Deep Neuro-Fuzzy architecture, a Hybrid Intelligence approach capable of boosting the performance of conventional Neuro-Fuzzy method targeting rules extraction. It proved capable of achieving a 49.335% better result than classical Neuro-Fuzzy and 7.347% improvement over the same complexity Deep Neural Network (DNN). Moreover, we studied the Deep Neural Network behaviour when handling such non-trivial problems as multinomial malware categorization. We believe that this model contributes to Windows PE32 malware categorization by improving the similarity-based detection of zero-days malware. It can serve as a stepping stone for malware analysts to speed up the detection and dissemination."}
{"_id": "100857eb8b35bbc06443d20ad2e3e2362cb46fb4", "title": "Realization by Biped Leg-wheeled Robot of Biped Walking and Wheel-driven Locomotion", "text": "Biped walking is easily adaptable to rough terrain such as stairs and stony paths, but the walk speed and energy efficiency on the flat ground is not so effective compared with wheeled locomotion. In this paper, we propose a biped robot to be able to walk or wheel according to the ground conditions. For wheeled locomotion, WS-2 (Waseda Shoes -No. 2) is developed which is composed of a DC motor, a spherical caster and two rubber pads on each foot. WS-2 is attached to the feet of WL-16 (Waseda Leg -No. 16) that is the world's first biped-walking robot capable of carrying a human. Also, a path planning for wheeled locomotion is presented. Through hardware experiments, the effectiveness of this foot module is confirmed."}
{"_id": "f91e6248d252ae9eca5c8db03f3298c5c691439d", "title": "Usability, quality, value and e-learning continuance decisions", "text": "Previous research suggests that an eventual information technology (IT) success depend on both its initial adoption (acceptance) and subsequent continued usage (continuance). Expectancy disconfirmation theory (EDT) has been successfully used to predict users intention to continue using information technologies. This study proposed a decomposed EDT model to examine cognitive beliefs and affect that influence users continuance decision in the context of e-learning service. The proposed model extended EDT by decomposing the perceived performance component into usability, quality, and value. Research hypotheses derived from this model are empirically validated using the responses to a survey on e-learning usage among 183 users. The results suggest that users continuance intention is determined by satisfaction, which in turn is jointly determined by perceived usability, perceived quality, perceived value, and usability disconfirmation. 2004 Elsevier Ltd. All rights reserved."}
{"_id": "fce8cda50e2f0cd752bc5d2e4169111f7c70714c", "title": "Measuring information systems success: models, dimensions, measures, and interrelationships", "text": "Received: 30 December 2006 Revised: 12 July 2007 2nd Revision: 16 December 2007 3rd Revision: 29 April 2008 Accepted: 15 May 2008 Abstract Since DeLone and McLean (D&M) developed their model of IS success, there has been much research on the topic of success as well as extensions and tests of their model. Using the technique of a qualitative literature review, this research reviews 180 papers found in the academic literature for the period 1992\u20132007 dealing with some aspect of IS success. Using the six dimensions of the D&M model \u2013 system quality, information quality, service quality, use, user satisfaction, and net benefits \u2013 90 empirical studies were examined and the results summarized. Measures for the six success constructs are described and 15 pairwise associations between the success constructs are analyzed. This work builds on the prior research related to IS success by summarizing the measures applied to the evaluation of IS success and by examining the relationships that comprise the D&M IS success model in both individual and organizational contexts. European Journal of Information Systems (2008) 17, 236\u2013263. doi:10.1057/ejis.2008.15"}
{"_id": "001ffbeb63dfa6d52e9379dae46e68aea2d9407e", "title": "The Measurement of Web-Customer Satisfaction: An Expectation and Disconfirmation Approach", "text": ""}
{"_id": "261d07faef13311820fa9a39614ae10c05cee5c3", "title": "Broadcasting Intermediate Blocks as a Defense Mechanism Against Selfish-Mine in Bitcoin", "text": "Although adopted by many cryptocurrencies, the Bitcoin mining protocol is not incentive-compatible, as the selfish mining strategy enables a miner to gain unfair mining rewards. Existing defenses either demand fundamental changes to block validity rules or have little effect on an attacker with more than one third of the total mining power. This paper proposes an effective defense mechanism against resourceful selfish miners. Our defense requires miners to publish intermediate blocks, the by-products of the mining process, then build on each other\u2019s work. By adding transparency to the mining process, block forks are resolved by comparing the amount of work of each branch. Moreover, this mechanism has the advantages of backward compatibility, low communication and computational costs, accelerating block propagation, and mitigating double-spending attacks on fast payments. To evaluate our design, we computed the most profitable mining strategy within our defense with analysis and simulation. Our simulation showed that within our defense, a selfish miner with almost half of the total mining power can only gain marginal unfair block rewards."}
{"_id": "cd9a34420f6d0cb3e6adfdae0deccbc5f924c8ef", "title": "Beyond information seeking: towards a general model of information behaviour", "text": "Introduction. The aim of the paper is to propose new models of information behaviour that extend the concept beyond simply information seeking to consider other modes of behaviour. The models chiefly explored are those of Wilson and Dervin. Argument A shortcoming of some models of information behaviour is that they present a sequence of stages where it is evident that actual behaviour is not always sequential. In addition, information behaviour models tend to confine themselves to depictions of information seeking. Development. A model of 'multi-directionality' is explored, to overcome the notion of sequential stages. Inspired by authors such as Chatman, Krikelas, and Savolainen, modes of information behaviour such as creating, destroying and avoiding information are included. Conclusion. New models of information behaviour are presented that replace the notion of 'barriers' with the concept of 'gap', as a means of integrating the views of Wilson and Dervin. The proposed models incorporate the notion of multi-directionality and identify ways in which an individual may navigate 'gap' using modes of information behaviour beyond information seeking."}
{"_id": "58bd1664187117c8f69d904d827e63f9da10b7d9", "title": "Mention-anomaly-based Event Detection and tracking in Twitter", "text": "The ever-growing number of people using Twitter makes it a valuable source of timely information. However, detecting events in Twitter is a difficult task, because tweets that report interesting events are overwhelmed by a large volume of tweets on unrelated topics. Existing methods focus on the textual content of tweets and ignore the social aspect of Twitter. In this paper we propose MABED (Mention-Anomaly-Based Event Detection), a novel method that leverages the creation frequency of dynamic links (i.e. mentions) that users insert in tweets to detect important events and estimate the magnitude of their impact over the crowd. The main advantages of MABED over prior works are that (i) it relies solely on tweets, meaning no external knowledge is required, and that (ii) it dynamically estimates the period of time during which each event is discussed rather than assuming a predefined fixed duration. The experiments we conducted on both English and French Twitter data show that the mention-anomaly-based approach leads to more accurate event detection and improved robustness in presence of noisy Twitter content. Last, we show that MABED helps with the interpretation of detected events by providing clear and precise descriptions."}
{"_id": "37188c693b7845b43408ed5e26dd13ac90656aca", "title": "Working Memory: A View from Neuroimaging", "text": "We have used neuroimaging techniques, mainly positron emission tomography (PET), to study cognitively driven issues about working memory. Two kinds of experiments are described. In the first kind, we employ standard subtraction logic to uncover the basic components of working memory. These studies indicate that: (a) there are different working-memory systems for spatial, object, and verbal information (with the spatial system localized more in the right hemisphere, and the verbal system more in the left hemisphere); (b) within at least the spatial and verbal systems, separable components seem to be responsible for the passive storage of information and the active maintenance of information (with the storage component being localized more in the back of the brain, and the maintenance component in the front); and (c) there may be separate components responsible for processing the contents of working memory (localized in prefrontal cortex). In our second kind of experiment we have focused on verbal working memory and incrementally varied one task parameter-memory load-in an effort to obtain a more fine-grained analysis of the system's operations. The results indicate that all relevant components of the system show some increase in activity with increasing memory load (e.g., the frontal regions responsible for verbal rehearsal show incremental increases in activation with increasing memory load). In contrast, brain regions that are not part of the working-memory system show no effect of memory load. Furthermore, the time courses of activation may differ for regions that are sensitive to load versus those that are not. Taken together, our results provide support for certain cognitive models of working memory (e.g., Baddeley, 1992) and also suggest some distinctions that these models have not emphasized. And more fundamentally, the results provide a neural base for cognitive models of working memory."}
{"_id": "7e383307edacb0bb53e57772fdc1ffa2825eba91", "title": "Learning Deep Structured Models", "text": "Many problems in real-world applications involve predicting several random variables that are statistically related. Markov random fields (MRFs) are a great mathematical tool to encode such dependencies. The goal of this paper is to combine MRFs with deep learning to estimate complex representations while taking into account the dependencies between the output random variables. Towards this goal, we propose a training algorithm that is able to learn structured models jointly with deep features that form the MRF potentials. Our approach is efficient as it blends learning and inference and makes use of GPU acceleration. We demonstrate the effectiveness of our algorithm in the tasks of predicting words from noisy images, as well as tagging of Flickr photographs. We show that joint learning of the deep features and the MRF parameters results in significant performance gains."}
{"_id": "1d2dab7790303bbe7894d0ff08ecf87d57b1fbca", "title": "A codebook-free and annotation-free approach for fine-grained image categorization", "text": "Fine-grained categorization refers to the task of classifying objects that belong to the same basic-level class (e.g. different bird species) and share similar shape or visual appearances. Most of the state-of-the-art basic-level object classification algorithms have difficulties in this challenging problem. One reason for this can be attributed to the popular codebook-based image representation, often resulting in loss of subtle image information that are critical for fine-grained classification. Another way to address this problem is to introduce human annotations of object attributes or key points, a tedious process that is also difficult to generalize to new tasks. In this work, we propose a codebook-free and annotation-free approach for fine-grained image categorization. Instead of using vector-quantized codewords, we obtain an image representation by running a high throughput template matching process using a large number of randomly generated image templates. We then propose a novel bagging-based algorithm to build a final classifier by aggregating a set of discriminative yet largely uncorrelated classifiers. Experimental results show that our method outperforms state-of-the-art classification approaches on the Caltech-UCSD Birds dataset."}
{"_id": "484b466095c522bd935b91a2ddc4ea0da2a6cd4e", "title": "Prospect theory on the brain? Toward a cognitive neuroscience of decision under risk.", "text": "Most decisions must be made without advance knowledge of their consequences. Economists and psychologists have devoted much attention to modeling decisions made under conditions of risk in which options can be characterized by a known probability distribution over possible outcomes. The descriptive shortcomings of classical economic models motivated the development of prospect theory (D. Kahneman, A. Tversky, Prospect theory: An analysis of decision under risk. Econometrica, 4 (1979) 263-291; A. Tversky, D. Kahneman, Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5 (4) (1992) 297-323) the most successful behavioral model of decision under risk. In the prospect theory, subjective value is modeled by a value function that is concave for gains, convex for losses, and steeper for losses than for gains; the impact of probabilities are characterized by a weighting function that overweights low probabilities and underweights moderate to high probabilities. We outline the possible neural bases of the components of prospect theory, surveying evidence from human imaging, lesion, and neuropharmacology studies as well as animal neurophysiology studies. These results provide preliminary suggestions concerning the neural bases of prospect theory that include a broad set of brain regions and neuromodulatory systems. These data suggest that focused studies of decision making in the context of quantitative models may provide substantial leverage towards a fuller understanding of the cognitive neuroscience of decision making."}
{"_id": "452b51ae387d9403661163270e090c41cfef0e0b", "title": "Pachyonychia congenita type 2 (Jackson-Lawler syndrome) or PC-17: case report.", "text": "Pachyonychia congenita (PC) is a rare genodermatosis caused by mutations in any of the four genes KRT6A, KRT6B, KRT16, or KRT17, which can lead to dystrophic, thickened nails and focal palmoplantar keratoderma, among other manifestations. Although classically subdivided into two major variants, PC-1 (Jadassohn-Lewandowski syndrome) and PC-2 (Jackson-Lawler syndrome), according to the localization of the mutations in the KRT6A/KRT16 or KRT6B/KRT17 genes, respectively, a classification system based on the mutant gene (PC-6a, PC-6b, PC-16 and PC-17) has been recently proposed. We report a 2-year-old female patient with a history of thickened and discolored nails, small cystic papulonodules on the central face, dry, unruly and curly hair, slight palmoplantar hyperkeratosis, and natal teeth. Both her father and paternal grandfather presented onychodystrophy, palmoplantar keratoderma, and previous excision of \"sebaceous\" cysts. Molecular genetic analysis of the patient revealed a missense mutation (c.1163T>C) in heterozygosity in exon 6 of the KRT17 gene, confirming the diagnosis of PC-2 (Jackson-Lawler type), or PC-17. We conclude that PC is a relatively easy and consistent clinical diagnosis, but a high index of suspicion is required if the diagnosis is to be made correctly. With this case, the authors intend to draw attention to this condition and the role of the dermatologist in the diagnosis."}
{"_id": "fb1db8d9c610f4c2316986992551cf5419024a68", "title": "Kinematics and Dynamics of a New 16 DOF Humanoid Biped Robot with Active Toe Joint", "text": "Humanoid biped robots are typically complex in design, having numerous Degrees\u2010of\u2010Freedom (DOF) due to the ambitious goal of mimicking the human gait. The paper proposes a new architecture for a biped robot with seven DOF per each leg and one DOF corresponding to the toe joint. Furthermore, this work presents close equations for the forward and inverse kinematics by dividing the walking gait into the Sagittal and Frontal planes. This paper explains the mathematical model of the dynamics equations for the legs into the Sagittal and Frontal planes by further applying the principle of Lagrangian dynamics. Finally, a control approach using a PD control law with gravity compensation was recurred in order to control the desired trajectories and finding the required torque by the joints. The paper contains several simulations and numerical examples to prove the analytical results, using SimMechanics of MATLAB toolbox and SolidWorks to verify the analytical results."}
{"_id": "832530c970157cb98850383f11266e083ddfdaad", "title": "Residual Highway Convolutional Neural Networks for in-loop Filtering in HEVC", "text": "High efficiency video coding (HEVC) standard achieves half bit-rate reduction while keeping the same quality compared with AVC. However, it still cannot satisfy the demand of higher quality in real applications, especially at low bit rates. To further improve the quality of reconstructed frame while reducing the bitrates, a residual highway convolutional neural network (RHCNN) is proposed in this paper for in-loop filtering in HEVC. The RHCNN is composed of several residual highway units and convolutional layers. In the highway units, there are some paths that could allow unimpeded information across several layers. Moreover, there also exists one identity skip connection (shortcut) from the beginning to the end, which is followed by one small convolutional layer. Without conflicting with deblocking filter (DF) and sample adaptive offset (SAO) filter in HEVC, RHCNN is employed as a high-dimension filter following DF and SAO to enhance the quality of reconstructed frames. To facilitate the real application, we apply the proposed method to I frame, P frame, and B frame, respectively. For obtaining better performance, the entire quantization parameter (QP) range is divided into several QP bands, where a dedicated RHCNN is trained for each QP band. Furthermore, we adopt a progressive training scheme for the RHCNN where the QP band with lower value is used for early training and their weights are used as initial weights for QP band of higher values in a progressive manner. Experimental results demonstrate that the proposed method is able to not only raise the PSNR of reconstructed frame but also prominently reduce the bit-rate compared with HEVC reference software."}
{"_id": "9fff24d89f0d1fa5a996ede4c1d151923e798803", "title": "Understanding heart rate sharing: towards unpacking physiosocial space", "text": "Advances in biosensing make it possible to include heart rate monitoring in applications and several studies have suggested that heart rate communication has potential for improving social connectedness. However, it is not known how people understand heart rate feedback, or what issues need to be taken into account when designing technologies including heart rate feedback. To explore this, we created a heart rate communication probe that was used in two qualitative in-lab studies and a two-week field trial in participants' homes. Results show that heart rate feedback is a strong connectedness cue that affects the interaction in various ways, depending on a number of interrelated factors. In particular, we found two distinct categories of effects: heart rate as information and heart rate as connection. We propose two mechanisms that could explain these observations and draw out the implications they have for future use of heartbeat communication to support social connectedness or other aspects of social interaction."}
{"_id": "28a98726a4ce95f0cd2d76f0ba353788dadea41b", "title": "Designing Twister 2 : Efficient Programming Environment Toolkit for Big Data", "text": "Data-driven applications are required to adapt to the everincreasing volume, velocity and veracity of data generated by a variety of sources including the Web and Internet of Things devices. At the same time, an event-driven computational paradigm is emerging as the core of modern systems designed for both database queries, data analytics and on-demand applications. MapReduce has been generalized to Map Collective and shown to be very effective in machine learning. However one often uses a dataflow computing model, which has been adopted by most major big data processing runtimes. The HPC community has also developed several asynchronous many tasks (AMT) systems according to the dataflow model. From a different point of view, the services community is moving to an increasingly event-driven model where (micro)services are composed of small functions driven by events in the form of Function as a Service(Faas) and serverless computing. Such designs allow the applications to scale quickly as well as be cost effective in cloud environments. An event-driven runtime designed for data processing consists of well-understood components such as communication, scheduling, and fault tolerance. One can make different design decisions for these components that will determine the type of applications a system can support efficiently. We find that modern systems are designed in a monolithic approach with a fixed set of choices that cannot be changed easily afterwards. Because of these design choices their functionality is limited to specific sets of applications. In this paper we study existing systems (candidate event-driven runtimes), the design choices they have made for each component, and how this affects the type of applications they can support. Further we propose a loosely coupled component-based approach for designing a big data toolkit where each component can have different implementations to support various applications. We believe such a polymorphic design would allow services and data analytics to be integrated seamlessly and expand from edge to cloud to high performance computing environments."}
{"_id": "41cbbea3e8aef717d021411fbe5c12e7729247a7", "title": "A comparative study between honey and povidone iodine as dressing solution for Wagner type II diabetic foot ulcers.", "text": "Honey dressing has been used to promote wound healing for years but scanty scientific studies did not provide enough evidences to justify it benefits in the treatment of diabetic foot ulcers. We conducted a prospective study to compare the effect of honey dressing for Wagner's grade-II diabetic foot ulcers with controlled dressing group (povidone iodine followed by normal saline). Surgical debridement and appropriate antibiotics were prescribed in all patients. There were 30 patients age between 31 to 65-years-old (mean of 52.1 years). The mean healing time in the standard dressing group was 15.4 days (range 9-36 days) compared to 14.4 days (range 7-26 days) in the honey group (p < 0.005). In conclusion, ulcer healing was not significantly different in both study groups. Honey dressing is a safe alternative dressing for Wagner grade-II diabetic foot ulcers."}
{"_id": "1805e19e9fa6a4c140c531bc0dca8016ee75257b", "title": "A Tutorial on Support Vector Machines for Pattern Recognition", "text": "The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light."}
{"_id": "714f8ba2b712dabdb0d55408586d40a86511a4dc", "title": "Characterization of the Impact of Hardware Islands on OLTP", "text": "Modern hardware is abundantly parallel and increasingly heterogeneous. The numerous processing cores have non-uniform access latencies to the main memory and processor caches, which causes variability in the communication costs. Unfortunately, database systems mostly assume that all processing cores are the same and that microarchitecture differences are not significant enough to appear in critical database execution paths. As we demonstrate in this paper, however, non-uniform core topology does appear in the critical path and conventional database architectures achieve suboptimal and even worse, unpredictable performance. We perform a detailed performance analysis of OLTP deployments in servers with multiple cores per CPU (multicore) and multiple CPUs per server (multisocket). We compare different database deployment strategies where we vary the number and size of independent database instances running on a single server, from a single shared-everything instance to fine-grained shared-nothing configurations. We quantify the impact of non-uniform hardware on various deployments by (a) examining how efficiently each deployment uses the available hardware resources and (b) measuring the impact of distributed transactions and skewed requests on different workloads. We show that no strategy is optimal for all cases and that the best choice depends on the combination of hardware topology and workload characteristics. Finally, we argue that transaction processing systems must be aware of the hardware topology in order to achieve predictably high performance."}
{"_id": "017eb2deb11f48ef7350873e81c19391aa61b8e3", "title": "An evaluation study of BigData frameworks for graph processing", "text": "When Google first introduced the Map/Reduce paradigm in 2004, no comparable system had been available to the general public. The situation has changed since then. The Map/Reduce paradigm has become increasingly popular and there is no shortage of Map/Reduce implementations in today's computing world. The predominant solution is currently Apache Hadoop, started by Yahoo. Besides employing custom Map/Reduce installations, customers of cloud services can now exploit ready-made made installations (e.g. the Elastic Map/Reduce System). In the mean time, other, second generation frameworks have started to appear. They either fine tune the Map/Reduce model for specific scenarios, or change the paradigm altogether, such as Google's Pregel. In this paper, we present a comparison between these second generation frameworks and the current de-facto standard Hadoop, by focusing on a specific scenario: large-scale graph analysis. We analyze the different means of fine-tuning those systems by exploiting their unique features. We base our analysis on the k-core decomposition problem, whose goal is to compute the centrality of each node in a given graph; we tested our implementation in a cluster of Amazon EC2 nodes with realistic datasets made publicly available by the SNAP project."}
{"_id": "042d6d7025cf43cbcf0bb52970a0120a21bb4635", "title": "Optimistic Optimization of a Deterministic Function without the Knowledge of its Smoothness", "text": "We consider a global optimization problem of a deterministic function f in a semimetric space, given a finite budget of n evaluations. The function f is assumed to be locally smooth (around one of its global maxima) with respect to a semi-metric l. We describe two algorithms based on optimistic exploration that use a hierarchical partitioning of the space at all scales. A first contribution is an algorithm, DOO, that requires the knowledge of l. We report a finite-sample performance bound in terms of a measure of the quantity of near-optimal states. We then define a second algorithm, SOO, which does not require the knowledge of the semimetric l under which f is smooth, and whose performance is almost as good as DOO optimally-fitted."}
{"_id": "8a497d5aed6d239824c77b33a57a8aed682d9e99", "title": "Classification ability of single hidden layer feedforward neural networks", "text": "Multilayer perceptrons with hard-limiting (signum) activation functions can form complex decision regions. It is well known that a three-layer perceptron (two hidden layers) can form arbitrary disjoint decision regions and a two-layer perceptron (one hidden layer) can form single convex decision regions. This paper further proves that single hidden layer feedforward neural networks (SLFN's) with any continuous bounded nonconstant activation function or any arbitrary bounded (continuous or not continuous) activation function which has unequal limits at infinities (not just perceptrons) can form disjoint decision regions with arbitrary shapes in multidimensional cases. SLFN's with some unbounded activation function can also form disjoint decision regions with arbitrary shapes."}
{"_id": "1cb755b56e9ea808f707a01ddd5a4f7d53dddf39", "title": "Reactions to discrimination, stigmatization, ostracism, and other forms of interpersonal rejection: a multimotive model.", "text": "This article describes a new model that provides a framework for understanding people's reactions to threats to social acceptance and belonging as they occur in the context of diverse phenomena such as rejection, discrimination, ostracism, betrayal, and stigmatization. People's immediate reactions are quite similar across different forms of rejection in terms of negative affect and lowered self-esteem. However, following these immediate responses, people's reactions are influenced by construals of the rejection experience that predict 3 distinct motives for prosocial, antisocial, and socially avoidant behavioral responses. The authors describe the relational, contextual, and dispositional factors that affect which motives determine people's reactions to a rejection experience and the ways in which these 3 motives may work at cross-purposes. The multimotive model accounts for the myriad ways in which responses to rejection unfold over time and offers a basis for the next generation of research on interpersonal rejection."}
{"_id": "95d03985bd995eeaa9c483437f15554f7579e2aa", "title": "An Energy-Efficient , Wireless Top-of-Rack to Top-of-Rack Datacenter Network using 60 GHz Links", "text": "Datacenters have become the digital backbone of the modern society and consume enormous amounts of power. Significant portion of the power consumption is due to the power hungry switching fabric necessary for communication in the datacenter. Additionally, the complex cabling in traditional datacenters pose design and maintenance challenges and increase the energy cost of the cooling infrastructure by obstructing the flow of chilled air. Recent research on wireless datacenters have proposed interconnecting rows of racks at the top-of-rack (ToR) level via wireless links to eliminate the need for complex network of power hungry routers and cables. Links are established using either highly directional phased array antennas or narrow-beam horned antennas between those ToR entities. ToR-to-ToR wireless links have also been used to augment existing wired networks, improving overall performance characteristics. All these wireless approaches advocate the use of 60GHz line-of-sight (LoS) communication paths between antennas for the establishment of reliable wireless channels. In this work, we explore the feasibility of a ToR-to-ToR wireless network for a small to medium-scale datacenter from the perspective of system-level performance. We evaluate a ToR-to-ToR wireless datacenter network (DCN) for network-level data rate and overall power consumption and compare it to a traditional fat-tree based DCN. We find that the ToR-to-ToR wireless DCN can sustain similar data rates for typical query based applications and consume less power compared to that of traditional datacenters. KeywordsWireless datacenter; Energy Efficiency; 60GHz; IEEE 802.11ad; Top-of-Rack;"}
{"_id": "892217128022dd5e5773fc44975794c8839132dd", "title": "2D diffuse optical imaging using clustered sparsity", "text": "Diffuse optical imaging (DOI) is a non-invasive technique which measures hemodynamic changes in the tissue with near infrared light, and has been increasingly used to study brain functions. Due to the nature of light propagation in the tissue, the reconstruction problem is severely ill-posed. Sparsity-regularization has achieved promising results in recent works for linearized DOI problem. In this paper, we exploit more prior information to improve DOI besides sparsity. Based on the functional specialization of the brain, the in vivo absorption changes caused by specific brain function can be clustered in certain region(s) and not randomly distributed. Thus, a new algorithm is proposed to utilize this prior in reconstruction. Results of numerical simulations and phantom experiments have demonstrated the superiority of the proposed method over the state-of-the-art."}
{"_id": "6c75a54ecfabb7361df09357822c4763900cb56a", "title": "The ethics of representation and action in virtual reality", "text": "This essay addresses ethical aspects of the design and use of virtual reality (VR) systems, focusing on the behavioral options made available in such systems and the manner in which reality is represented or simulated in them. An assessment is made of the morality of \u2018immoral\u2019 behavior in virtual reality, and of the virtual modeling of such behavior. Thereafter, the ethical aspects of misrepresentation and biased representation in VR applications are discussed."}
{"_id": "5a86036172aa4e09c70e187b9dcc6be93de13c89", "title": "Caricature synthesis with feature deviation matching under example-based framework", "text": "Example-based caricature synthesis techniques have been attracting large attentions for being able to generate attractive caricatures of various styles. This paper proposes a new example-based caricature synthesis system using a feature deviation matching method as a cross-modal distance metric. It employs the deviation values from average features across different feature spaces rather than the values of features themselves to search for similar components from caricature examples directly. Compared with traditional example-based systems, the proposed system can generate various styles of caricatures without requiring paired photograph\u2013caricature example databases. The newly designed features can effectively capture visual characteristics of the hairstyles and facial components in input portrait images. In addition, this system can control the exaggeration of individual facial components and provide several similarity-based candidates to satisfy users\u2019 different preferences. Experiments are conducted to prove the above ideas."}
{"_id": "1c32125cc9fb052b881a6dec812b62ed998915d7", "title": "Lessons from applying the systematic literature review process within the software engineering domain", "text": "A consequence of the growing number of empirical studies in software engineering is the need to adopt systematic approaches to assessing and aggregating research outcomes in order to provide a balanced and objective summary of research evidence for a particular topic. The paper reports experiences with applying one such approach, the practice of systematic literature review, to the published studies relevant to topics within the software engineering domain. The systematic literature review process is summarised, a number of reviews being undertaken by the authors and others are described and some lessons about the applicability of this practice to software engineering are extracted. The basic systematic literature review process seems appropriate to software engineering and the preparation and validation of a review protocol in advance of a review activity is especially valuable. The paper highlights areas where some adaptation of the process to accommodate the domain-specific characteristics of software engineering is needed as well as areas where improvements to current software engineering infrastructure and practices would enhance its applicability. In particular, infrastructure support provided by software engineering indexing databases is inadequate. Also, the quality of abstracts is poor; it is usually not possible to judge the relevance of a study from a review of the abstract alone. 2006 Elsevier Inc. All rights reserved."}
{"_id": "90089d3ec1c73857e7fde0d8a234d0e147c5a477", "title": "SLR-Tool - A Tool for Performing Systematic Literature Reviews", "text": "Systematic literature reviews (SLRs) have been gaining a significant amount of attention from Software Engineering researchers since 2004. SLRs are considered to be a new research methodology in Software Engineering, which allow evidence to be gathered with regard to the usefulness or effectiveness of the technology proposed in Software Engineering for the development and maintenance of software products. This is demonstrated by the growing number of publications related to SLRs that have appeared in recent years. While some tools exist that can support some or all of the activities of the SLR processes defined in (Kitchenham & Charters, 2007), these are not free. The objective of this paper is to present the SLR-Tool, which is a free tool and is available on the following website: http://alarcosj.esi.uclm.es/SLRTool/, to be used by researchers from any discipline, and not only Software Engineering. SLR-Tool not only supports the process of performing SLRs proposed in (Kitchenham & Charters, 2007), but also provides additional functionalities such as: refining searches within the documents by applying text mining techniques; defining a classification schema in order to facilitate data synthesis; exporting the results obtained to the format of tables and charts; and exporting the references from the primary studies to the formats used in bibliographic packages such as EndNote, BibTeX or Ris. This tool has, to date, been used by members of the Alarcos Research Group and PhD students, and their perception of it is that it is both highly necessary and useful. Our purpose now is to circulate the use of SLR-Tool throughout the entire research community in order to obtain feedback from other users."}
{"_id": "444b9f2fff2132251a43dc4a4f8bd213e7763634", "title": "Evidence-based software engineering", "text": "Objective: Our objective is to describe how softwareengineering might benefit from an evidence-basedapproach and to identify the potential difficultiesassociated with the approach.Method: We compared the organisation and technicalinfrastructure supporting evidence-based medicine (EBM)with the situation in software engineering. We consideredthe impact that factors peculiar to software engineering(i.e. the skill factor and the lifecycle factor) would haveon our ability to practice evidence-based softwareengineering (EBSE).Results: EBSE promises a number of benefits byencouraging integration of research results with a view tosupporting the needs of many different stakeholdergroups. However, we do not currently have theinfrastructure needed for widespread adoption of EBSE.The skill factor means software engineering experimentsare vulnerable to subject and experimenter bias. Thelifecycle factor means it is difficult to determine howtechnologies will behave once deployed.Conclusions: Software engineering would benefit fromadopting what it can of the evidence approach providedthat it deals with the specific problems that arise from thenature of software engineering."}
{"_id": "66392ca9b0fc8bb8c8d27312ddd90ca3b418516a", "title": "Factors Influencing the Usage of Websites: The Case of a Generic Portal in the Netherlands", "text": "In this paper, we empirically investigate an extension of the Technology Acceptance Model (TAM, Davis, 1989) to explain the individual acceptance and usage of websites. Conceptually, we examine perceived ease-of-use, usefulness, enjoyment, and their impact on attitude towards using, intention to use and actual use. The paper also introduces a new construct, \u201cperceived visual attractiveness\u201d of the website and suggest that it influences usefulness, enjoyment, and ease-of-use. For our empirical research we partnered with a Dutch generic portal site with over 300 000 subscribers at the time the research was conducted. The websurvey resulted in sample size of 825 respondents. The results confirmed all of the 12 hypotheses formulated. Three findings are worth mentioning in particular: (1) intention is most dominantly influenced by attitude (\u03b2 = 0.51), (2) ease-of-use, enjoyment, and usefulness contribute equally to attitude towards using (\u03b2 = 0.23, 0.23, and 0.17 respectively) and (3) visual attractiveness contributes remarkably well to both ease-of-use, enjoyment, and usefulness (\u03b2 = 0.41, 0.35, and 0.21). Although this is not the first research to apply TAM to an internet context, we claim three major contributions: (1) a single website as the unit of analysis, (2) the introduction of visual attractiveness, and (3) the use of \u201creal\u201d website visitors rather than student samples. Promising future research lies in the conceptual connection between actual website features and website use, a connection for which the TAM framework provides a meaningful bridge. Factors Influencing the Usage of Websites: The Case of a Generic Portal in the Netherlands"}
{"_id": "3a9c17cc59386433e743d537abcb7b8076827061", "title": "A Comparative Study of Data Clustering Algorithms", "text": "Data clustering is a process of partitioning data points into meaningful clusters such that a cluster holds similar data and different clusters hold dissimilar data. It is an unsupervised approach to classify data into different patterns. In general, the clustering algorithms can be classified into the following two categories: firstly, hard clustering, where a data object can belong to a single and distinct cluster and secondly, soft clustering, where a data object can belong to different clusters. In this report we have made a comparative study of three major data clustering algorithms highlighting their merits and demerits. These algorithms are: k-means, fuzzy c-means and K-NN clustering algorithm. Choosing an appropriate clustering algorithm for grouping the data takes various factors into account for illustration one is the size of data to be partitioned."}
{"_id": "fa20723e6d7eaa64ba3fd64d0f505adda3ec18ea", "title": "A low-Cost, Open-Source, BCI- VR Game Control Development Environment Prototype for Game Based Neurorehabilitation", "text": "In this paper we present a low-cost and open-source brain-computer interface (BCI) virtual-reality (VR) Game Control Development Environment prototype, which we demonstrate using real-time signal processing of Electroencephalography (EEG) event-related desynchronization and synchronization changes (ERD/ERS) within the Precentral Gyrus (Motor Cortex), allowing a user to control a 3D object within a Virtual Reality Environment. This BCI-VR system prototype was functionally tested on multiple participants and demonstrated live before an audience during the 2017 \u2018Hack the Brain\u2019 at the Dublin Science Gallery. The availability of such an open-source, effective, BCI-VR Game Control Development Environment, at a level acceptable for industry experimentation, has the potential to open up this field to a much wider range of researchers and games developers and to assist the investigation of gaming experiences which both incorporate the specific control features available through BCI as a core element of the game play and the potential for its use in neurorehabilitation."}
{"_id": "e145012743605857e89755703d82de1a9dc2df94", "title": "Self-esteem and perceived regard: how I see myself affects my relationship satisfaction.", "text": "The authors explored the relations among self-esteem, perceived regard, and satisfaction in dating relationships. On the basis of the dependency regulation model (T. DeHart, B. Pelham, & S. Murray, 2004), the authors hypothesized that self-esteem would influence individuals' self-perceptions and views of how their partners perceive them (metaperception). They also hypothesized that perceived regard (self-perception minus metaperception) would predict relationship satisfaction. Regression analyses indicated that for moderate relationship-relevant traits (i.e., caring, loving), high self-esteem was associated with self-enhancement (idealization), whereas low self-esteem was associated with self-deprecation. For low relationship-relevant traits (i.e., quiet, reserved), both low and high self-esteem individuals self-enhanced. Hierarchical regression analyses indicated that self-esteem and perceived regard for moderate relationship-relevant traits predicted satisfaction. The authors discuss the implications of idealization, self-verification, and self-deprecation for the perceivers and their relationships."}
{"_id": "3d277e7f6c1821aa3af961f359925c26d5167601", "title": "A systematic survey on automated concurrency bug detection, exposing, avoidance, and fixing techniques", "text": "Currently, concurrent programs are becoming increasingly widespread to meet the demands of the rapid development of multi-core hardware. However, it could be quite expensive and challenging to guarantee the correctness and efficiency of concurrent programs. In this paper, we provide a systematic review of the existing research on fighting against concurrency bugs, including automated concurrency bug exposing, detection, avoidance, and fixing. These four categories cover the different aspects of concurrency bug problems and are complementary to each other. For each category, we survey the motivation, key issues, solutions, and the current state of the art. In addition, we summarize the classical benchmarks widely used in previous empirical studies and the contribution of active research groups. Finally, some future research directions on concurrency bugs are recommended. We believe this survey would be useful for concurrency programmers and researchers."}
{"_id": "bb45e05baf716632e885f39be67ead141c9d0a55", "title": "[Skin signs in child abuse].", "text": "Child abuse is far more prevalent today than is generally recognized. Up to 90% of victims suffer physical abuse that can be observed in signs on the skin. Dermatologists are particularly qualified to identify these signs and distinguish them from other conditions that can mimic abuse. This review covers the signs of child abuse that can be observed on the skin. We discuss clues that can help differentiate between lesions caused by abuse and those that are accidental, and we describe the skin conditions that mimic physical abuse."}
{"_id": "7e02c5b0e83cd6bba4c94c458bdb7079e97c36cd", "title": "Entity Resolution: Theory, Practice & Open Challenges", "text": "This tutorial brings together perspectives on ER from a variety of fields, including databases, machine learning, natural language processing and information retrieval, to provide, in one setting, a survey of a large body of work. We discuss both the practical aspects and theoretical underpinnings of ER. We describe existing solutions, current challenges, and open research problems."}
{"_id": "cf234668399ff2d7e5e5a54039907b0fa7cf36d3", "title": "Survey on 3D Hand Gesture Recognition", "text": "Three-dimensional hand gesture recognition has attracted increasing research interests in computer vision, pattern recognition, and human-computer interaction. The emerging depth sensors greatly inspired various hand gesture recognition approaches and applications, which were severely limited in the 2D domain with conventional cameras. This paper presents a survey of some recent works on hand gesture recognition using 3D depth sensors. We first review the commercial depth sensors and public data sets that are widely used in this field. Then, we review the state-of-the-art research for 3D hand gesture recognition in four aspects: 1) 3D hand modeling; 2) static hand gesture recognition; 3) hand trajectory gesture recognition; and 4) continuous hand gesture recognition. While the emphasis is on 3D hand gesture recognition approaches, the related applications and typical systems are also briefly summarized for practitioners."}
{"_id": "be8d148b7f20690c3cc08d516d9ad6d9316311a1", "title": "Model-Driven ERP Implementation", "text": "Enterprise Resource Planning (ERP) implementations are very complex. To obtain a fair level of understanding of the system, it is then necessary to model the supported business processes. However, the problem is the accuracy of the mapping between this model and the actual technical implementation. A solution is to make use of the OMG\u2019s Model-Driven Architecture (MDA) framework. In fact, this framework lets the developer model his system at a high abstraction level and allows the MDA tool to generate the implementation details. This paper presents our results in applying the MDA framework to ERP implementation based on a high level model of the business processes. Then, we show how our prototype is structured and implemented in the IBM/Rational XDE environment"}
{"_id": "5386ad8a18b76c85840385a4d208e239208079cd", "title": "A General Divide and Conquer Approach for Process Mining", "text": "Operational processes leave trails in the information systems supporting them. Such event data are the starting point for process mining \u2013 an emerging scientific discipline relating modeled and observed behavior. The relevance of process mining is increasing as more and more event data become available. The increasing volume of such data (\u201cBig Data\u201d) provides both opportunities and challenges for process mining. In this paper we focus on two particular types of process mining: process discovery (learning a process model from example behavior recorded in an event log) and conformance checking (diagnosing and quantifying discrepancies between observed behavior and modeled behavior). These tasks become challenging when there are hundreds or even thousands of different activities and millions of cases. Typically, process mining algorithms are linear in the number of cases and exponential in the number of different activities. This paper proposes a very general divide-and-conquer approach that decomposes the event log based on a partitioning of activities. Unlike existing approaches, this paper does not assume a particular process representation (e.g., Petri nets or BPMN) and allows for various decomposition strategies (e.g., SESEor passage-based decomposition). Moreover, the generic divide-andconquer approach reveals the core requirements for decomposing process discovery and conformance checking problems."}
{"_id": "1d323cff44e52dfe7be117b39b7974e61cecdd43", "title": "Survey of Consistent Network Updates", "text": "Computer networks have become a critical infrastructure. As such, networks should not only meet strict requirements in terms of correctness, availability, and performance, but they should also be very flexible and support fast updates, e.g., due to a change in the security policy, increasing traffic, or failures. In this paper, we present a structured survey of mechanisms and protocols to update computer networks in a fast and consistent manner. In particular, we identify and discuss the different desirable consistency properties that should be provided throughout a network update, the algorithmic techniques which are needed to meet these consistency properties, and the implications on the speed and costs at which updates can be performed. We also explain the relationship between consistent network update problems and classic algorithmic optimization ones. While our survey is mainly motivated by the advent of Software-Defined Networks (SDNs) and their primary need for correct and efficient update techniques, the fundamental underlying problems are not new, and we provide a historical perspective of the subject as well. CCS Concepts: rGeneral and reference \u2192 Surveys and overviews; rNetworks \u2192 Network algorithms; rComputer systems organization\u2192 Distributed architectures;"}
{"_id": "082bcec29d021ef844736565277cfc3a0c9c3c43", "title": "Supervised Bilingual Lexicon Induction with Multiple Monolingual Signals", "text": "Prior research into learning translations from source and target language monolingual texts has treated the task as an unsupervised learning problem. Although many techniques take advantage of a seed bilingual lexicon, this work is the first to use that data for supervised learning to combine a diverse set of signals derived from a pair of monolingual corpora into a single discriminative model. Even in a low resource machine translation setting, where induced translations have the potential to improve performance substantially, it is reasonable to assume access to some amount of data to perform this kind of optimization. Our work shows that only a few hundred translation pairs are needed to achieve strong performance on the bilingual lexicon induction task, and our approach yields an average relative gain in accuracy of nearly 50% over an unsupervised baseline. Large gains in accuracy hold for all 22 languages (low and high resource) that we investigate."}
{"_id": "6b66495ecca056c0ca118c1577bb17beca4be8ab", "title": "HEVC/H.265 vs. VP9 state-of-the-art video coding comparison for HD and UHD applications", "text": "High Efficiency Video Coding (HEVC) and VP9 are the latest state-of-the-art high-efficiency video coding standards. Both standards aim to decrease the bit-rate while improving the compression efficiency. HEVC/H.265 video standard was collaboratively developed by ITU-T and ISO/IEC with up to 50% bit-rate reduction as compared to its predecessor H.264/AVC. On the other hand, VP9 is an open source video standard developed by Google. While maintaining the same video quality, VP9 has achieved 50% bit-rate reduction as compared to its predecessor VP8. This research paper focuses on the comparison of HEVC and VP9 based on both subjective as well as objective assessments. First, detailed differences in design and specification of the two standards are discussed. Later experiment results are provided, which compare the coding efficiency of both standards on various high-definition 720p, 1090p, and 2160p sample test bench videos. According to our experimental results, it has been empirically proved that HEVC/H.265 outperforms VP9 in bit-rate savings."}
{"_id": "946390bd206d9b92c1ba930e8faaa62343da4a6e", "title": "The Narrative Advantage : Gender and the Language of Crowdfunding", "text": "In this study, we set out to examine the role of language in the success of online fundraising\u2014a new form of entrepreneurial project financing. In particular, we evaluate the influence of linguistic content on fundraising outcomes, above and beyond type of product or service offered. Online fundraising settings pose an interesting empirical puzzle: women are systematically more successful than men, an outcome contrary to offline gender inequality. We propose that this outcome is partially explained by linguistic differences between men and women in terms of language they use, and we test this mechanism using data from the online crowdfunding platform Indiegogo. The results support our theory, suggesting a link between micro-level linguistic choices and macro level outcomes: the institution of crowdfunding may reduce gender inequalities in the fundraising arena by benefitting the communication style of women."}
{"_id": "24b904d6d0db56a9e4e659ad977af7b116ab2a09", "title": "Low voltage sub-nanosecond pulsed current driver IC for high-resolution LIDAR applications", "text": "This paper introduces a new low voltage sub-nanosecond monolithic pulsed current driver for light detection and ranging (LIDAR) applications. Unique architecture based on a controlled current source and Vernier activation sequence, combined with a monolithic implementation that allows operation with low input voltage levels, high-resolution pulse width and sub-nanosecond rise and fall times. An on-chip low voltage pulsed driver sub-nanosecond prototype has been implemented in a TS 0.18-\u03bcm 5V-gated power management process. It incorporates an integrated wide range sesnseFET based current sensor and a rail-to-rail comparator for current regulation. To characterize the avalanche capabilities of the integrated lateral MOSFET power devices required for the driver IC, a separate line of investigation has been carried out. Several lateral diffused MOS (LDMOS) power devices have been custom designed and experimentally evaluated for a life-cycle performance characterization. Post-layout analysis of the power driver IC is in a good agreement with the theoretical predictions. For a 5V input voltage, rise and fall times of the laser pulse light output are on the order of hundreds of picoseconds, with currents up to 5A. To validate the concept of high-resolution pulse width generation and short fall time, a discrete prototype has been constructed and experimentally tested."}
{"_id": "60240fc5d76df560f0ef87ced4528c815119b0df", "title": "Semi-Paired Discrete Hashing: Learning Latent Hash Codes for Semi-Paired Cross-View Retrieval", "text": "Due to the significant reduction in computational cost and storage, hashing techniques have gained increasing interests in facilitating large-scale cross-view retrieval tasks. Most cross-view hashing methods are developed by assuming that data from different views are well paired, e.g., text-image pairs. In real-world applications, however, this fully-paired multiview setting may not be practical. The more practical yet challenging semi-paired cross-view retrieval problem, where pairwise correspondences are only partially provided, has less been studied. In this paper, we propose an unsupervised hashing method for semi-paired cross-view retrieval, dubbed semi-paired discrete hashing (SPDH). In specific, SPDH explores the underlying structure of the constructed common latent subspace, where both paired and unpaired samples are well aligned. To effectively preserve the similarities of semi-paired data in the latent subspace, we construct the cross-view similarity graph with the help of anchor data pairs. SPDH jointly learns the latent features and hash codes with a factorization-based coding scheme. For the formulated objective function, we devise an efficient alternating optimization algorithm, where the key binary code learning problem is solved in a bit-by-bit manner with each bit generated with a closed-form solution. The proposed method is extensively evaluated on four benchmark datasets with both fully-paired and semi-paired settings and the results demonstrate the superiority of SPDH over several other state-of-the-art methods in term of both accuracy and scalability."}
{"_id": "3b76c51beceb5057b1285bd7d709817cda17adc0", "title": "Exploration and apprenticeship learning in reinforcement learning", "text": "We consider reinforcement learning in systems with unknown dynamics. Algorithms such as E3 (Kearns and Singh, 2002) learn near-optimal policies by using \"exploration policies\" to drive the system towards poorly modeled states, so as to encourage exploration. But this makes these algorithms impractical for many systems; for example, on an autonomous helicopter, overly aggressive exploration may well result in a crash. In this paper, we consider the apprenticeship learning setting in which a teacher demonstration of the task is available. We show that, given the initial demonstration, no explicit exploration is necessary, and we can attain near-optimal performance (compared to the teacher) simply by repeatedly executing \"exploitation policies\" that try to maximize rewards. In finite-state MDPs, our algorithm scales polynomially in the number of states; in continuous-state linear dynamical systems, it scales polynomially in the dimension of the state. These results are proved using a martingale construction over relative losses."}
{"_id": "4f1e9c218336ce5945078291d56a2133c444e088", "title": "Simple, Accurate, and Robust Nonparametric Blind Super-Resolution", "text": "This paper proposes a simple, accurate, and robust approach to single image nonparametric blind Super-Resolution (SR). This task is formulated as a functional to be minimized with respect to both an intermediate super-resolved image and a nonparametric blur-kernel. The proposed approach includes a convolution consistency constraint which uses a non-blind learning-based SR result to better guide the estimation process. Another key component is the unnatural bi-l0-l2-norm regularization imposed on the super-resolved, sharp image and the blur-kernel, which is shown to be quite beneficial for estimating the blur-kernel accurately. The numerical optimization is implemented by coupling the splitting augmented Lagrangian and the conjugate gradient (CG). Using the pre-estimated blur-kernel, we finally reconstruct the SR image by a very simple non-blind SR method that uses a natural image prior. The proposed approach is demonstrated to achieve better performance than the recent method by Michaeli and Irani [2] in both terms of the kernel estimation accuracy and image SR quality."}
{"_id": "ed28ce5a2d74e28949f33ae9bb521c4795b53df2", "title": "The Role of the Gut Microbiota in Childhood Obesity.", "text": "BACKGROUND\nChildhood and adolescent obesity has reached epidemic proportions worldwide. The pathogenesis of obesity is complex and multifactorial, in which genetic and environmental contributions seem important. The gut microbiota is increasingly documented to be involved in the dysmetabolism associated with obesity.\n\n\nMETHODS\nWe conducted a systematic search for literature available before October 2015 in the PubMed and Scopus databases, focusing on the interplay between the gut microbiota, childhood obesity, and metabolism.\n\n\nRESULTS\nThe review discusses the potential role of the bacterial component of the human gut microbiota in childhood and adolescent-onset obesity, with a special focus on the factors involved in the early development of the gut bacterial ecosystem, and how modulation of this microbial community might serve as a basis for new therapeutic strategies in combating childhood obesity. A vast number of variables are influencing the gut microbial ecology (e.g., the host genetics, delivery method, diet, age, environment, and the use of pre-, pro-, and antibiotics); but the exact physiological processes behind these relationships need to be clarified.\n\n\nCONCLUSIONS\nExploring the role of the gut microbiota in the development of childhood obesity may potentially reveal new strategies for obesity prevention and treatment."}
{"_id": "a6ce6adcce4c716f4fd0ead92c2db38b4f04b0b3", "title": "Determination of the plan of the A Famosa Fortress , Malaysia", "text": "The \u201cA Famosa Fortress\u201d is one of the oldest partially extant European buildings in Malaysia. It was built in 1511 by the Portuguese and went through several architectural developments and changes before being largely destroyed during the British occupation in 1824. With the subsequent overbuilding of the site by Melaka city today, it is impossible to fully reconstruct this fortress in its physical context. In this paper, we focus on determining the fortress layout based on various textual descriptions and old drawings and plans in preparation to building a detailed 3-D digital model. We have identified several important issues arising from the lack of any authoritative documentation. Such plans as exist not only differ in their depiction of the fort, but also use various ancient measurement systems. The paper gives examples of these problems and shows how a verifiable conjectural layout has been constructed. This is then compared against such archaeological evidence as is currently available. We are not aware of any previously published attempt to verify the consistency, similarity and integrity of the documentary data."}
{"_id": "5ed5f368aa5ee237828f058d883eec5d489e650b", "title": "Learning effective Gait features using LSTM", "text": "Human gait is an important biometric feature for person identification in surveillance videos because it can be collected at a distance without subject cooperation. Most existing gait recognition methods are based on Gait Energy Image (GEI). Although the spatial information in one gait sequence can be well represented by GEI, the temporal information is lost. To solve this problem, we propose a new feature learning method for gait recognition. Not only can the learned feature preserve temporal information in a gait sequence, but it can also be applied to cross-view gait recognition. Heatmaps extracted by a convolutional neutral network (CNN) based pose estimate method are used to describe the gait information in one frame. To model a gait sequence, the LSTM recurrent neural network is naturally adopted. Our LSTM model can be trained with unlabeled data, where the identity of the subject in a gait sequence is unknown. When labeled data are available, our LSTM works as a frame to frame view transformation model (VTM). Experiments on a gait benchmark demonstrate the efficacy of our method."}
{"_id": "70b7de43aaed79f769c0fa53a5a34f195984f5da", "title": "Intervention Studies on Forgiveness : A Meta-Analysis", "text": "79 A promising area of counseling research that emerged in the 1990s is the scientific investigation of forgiveness interventions. Although the notion of forgiving is ancient (Enright & the Human Development Study Group, 1991), it has not been systematically studied until relatively recently (Enright, Santos, & Al-Mabuk, 1989). Significant to counseling because of its interpersonal nature, forgiveness issues are relevant to the contexts of marriage and dating relationships, parent\u2013child relationships, friendships, professional relationships, and others. In addition, forgiveness is integral to emotional constructs such as anger. As forgiveness therapies (Ferch, 1998; Fitzgibbons, 1986) and the empirical study of these therapies (Freedman & Enright, 1996) begin to unfold, it is important to ask if these interventions can consistently demonstrate salient positive effects on levels of forgiveness and on the mental health of targeted clients. The purpose of this article is to analyze via meta-analysis the existing published interventions on forgiveness. Meta-analysis is a popular vehicle of synthesizing results across multiple studies. Recent successful uses of this method include the study by McCullough (1999), who analyzed five studies that compared the efficacy for depression of standard approaches with counseling with religion-accommo-dative approaches. Furthermore, in order to reach conclusions about the influence of hypnotherapy on treatment for clients with obesity, Allison and Faith (1996) used meta-analysis to examine six studies that compared the efficacy of using cognitive-behavioral therapy (CBT) alone with the use of CBT combined with hypnotherapy. Finally, Morris, Audet, Angelillo, Chalmers, and Mosteller (1992) used meta-analysis to combine the results of 10 studies with contradictory findings to show that the benefits of chlorinating drinking water far outweighed the risks. Although there may be some concern that using forgiveness as an intervention in counseling is in too early a stage of development and that too few studies exist for a proper meta-analysis, the effectiveness of these recent meta-analyses supports this meta-analytic investigation. Certainly any findings must be tempered with due caution. However, this analysis may serve as important guidance for the structure and development of future counseling studies of forgiveness. We first examine the early work in forgiveness interventions by examining the early case studies. From there, we define forgiveness, discuss the models of forgiveness in counseling and the empirically based interventions, and then turn to the meta-analysis. The early clinical case studies suggested that forgiveness might be helpful for people who have experienced deep emotional pain because of unjust treatment. For \u2026"}
{"_id": "54702c9e7fa1aea1ce2802e0d7c26d0db0b48cf4", "title": "Supervised Representation Learning for Audio Scene Classification", "text": "This paper investigates the use of supervised feature learning approaches for extracting relevant and discriminative features from acoustic scene recordings. Owing to the recent release of open datasets for acoustic scene classification problems, representation learning techniques can now be envisioned for solving the problem of feature extraction. This paper makes a step toward this goal by first introducing a supervised nonnegative matrix factorization SNMF. Our goal through this SNMF is to induce the matrix decomposition to carry out discriminative information in addition to the usual generative ones. We achieve this objective by augmenting the nonnegative matrix factorization optimization problem with a novel loss function related to class labels of each column of the matrix to decompose. While the scale of the datasets available is still small compared to those available in computer vision, we have studied models based on convolutional neural networks. We have analyzed the performances of these models on the DCASE-16 dataset and a corrected version of the LITIS Rouen one. Our experiments show that despite the small-scale setting, supervised feature learning is favorably competitive compared to the current state-of-the-art features. We also point out that for smaller scale dataset, SNMF is indeed slightly less prone to overfitting than convolutional neural networks. While the performances of these learned features are interesting per se, a deeper analysis of their behavior in the acoustic scene problem context raises open and difficult questions that we believe, need to be addressed for further performance breakthroughs."}
{"_id": "0b15b4fec6e98aa94bebe37d001cd006c4138c47", "title": "Cross-domain video concept detection using adaptive svms", "text": "Many multimedia applications can benefit from techniques for adapting existing classifiers to data with different distributions. One example is cross-domain video concept detection which aims to adapt concept classifiers across various video domains. In this paper, we explore two key problems for classifier adaptation: (1) how to transform existing classifier(s) into an effective classifier for a new dataset that only has a limited number of labeled examples, and (2) how to select the best existing classifier(s) for adaptation. For the first problem, we propose Adaptive Support Vector Machines (A-SVMs) as a general method to adapt one or more existing classifiers of any type to the new dataset. It aims to learn the \"delta function\" between the original and adapted classifier using an objective function similar to SVMs. For the second problem, we estimate the performance of each existing classifier on the sparsely-labeled new dataset by analyzing its score distribution and other meta features, and select the classifiers with the best estimated performance. The proposed method outperforms several baseline and competing methods in terms of classification accuracy and efficiency in cross-domain concept detection in the TRECVID corpus."}
{"_id": "18c84e6b5f1d6da3c670454db3f0fa61266ab1e3", "title": "Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit", "text": "This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m2) measurements. The new results for OMP are comparable with recent results for another approach called basis pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems."}
{"_id": "33da83b54410af11d0cd18fd07c74e1a99f67e84", "title": "DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition", "text": "We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be repurposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms."}
{"_id": "8e11a2a6f780c9f5f0acef7efb85cc592f47793b", "title": "Fall detection using ceiling-mounted 3D depth camera", "text": "This paper proposes an algorithm for fall detection using a ceiling-mounted 3D depth camera. The lying pose is separated from common daily activities by a k-NN classifier, which was trained on features expressing head-floor distance, person area and shape's major length to width. In order to distinguish between intentional lying postures and accidental falls the algorithm also employs motion between static postures. The experimental validation of the algorithm was conducted on realistic depth image sequences of daily activities and simulated falls. It was evaluated on more than 45000 depth images and gave 0% error. To reduce the processing overload an accelerometer was used to indicate the potential impact of the person and to start an analysis of depth images."}
{"_id": "fb488de498471771503aa3b3c2ae1323d3ce83d7", "title": "Improving Dermoscopic Image Segmentation With Enhanced Convolutional-Deconvolutional Networks", "text": "Automatic skin lesion segmentation on dermoscopic images is an essential step in computer-aided diagnosis of melanoma. However, this task is challenging due to significant variations of lesion appearances across different patients. This challenge is further exacerbated when dealing with a large amount of image data. In this paper, we extended our previous work by developing a deeper network architecture with smaller kernels to enhance its discriminant capacity. In addition, we explicitly included color information from multiple color spaces to facilitate network training and thus to further improve the segmentation performance. We participated and extensively evaluated our method on the ISBI 2017 skin lesion segmentation challenge. By training with the 2000 challenge training images, our method achieved an average Jaccard Index of 0.765 on the 600 challenge testing images, which ranked itself in the first place among 21 final submissions in the challenge."}
{"_id": "1f8564d458f2e36f5d1ad756c4a92485e45cf7ff", "title": "Retinal vessel segmentation via deep learning network and fully-connected conditional random fields", "text": "Vessel segmentation is a key step for various medical applications. This paper introduces the deep learning architecture to improve the performance of retinal vessel segmentation. Deep learning architecture has been demonstrated having the powerful ability in automatically learning the rich hierarchical representations. In this paper, we formulate the vessel segmentation to a boundary detection problem, and utilize the fully convolutional neural networks (CNNs) to generate a vessel probability map. Our vessel probability map distinguishes the vessels and background in the inadequate contrast region, and has robustness to the pathological regions in the fundus image. Moreover, a fully-connected Conditional Random Fields (CRFs) is also employed to combine the discriminative vessel probability map and long-range interactions between pixels. Finally, a binary vessel segmentation result is obtained by our method. We show that our proposed method achieve a state-of-the-art vessel segmentation performance on the DRIVE and STARE datasets."}
{"_id": "1ef86a58c9a4490683b58c95784a5ce07df368bb", "title": "Learning a non-linear knowledge transfer model for cross-view action recognition", "text": "This paper concerns action recognition from unseen and unknown views. We propose unsupervised learning of a non-linear model that transfers knowledge from multiple views to a canonical view. The proposed Non-linear Knowledge Transfer Model (NKTM) is a deep network, with weight decay and sparsity constraints, which finds a shared high-level virtual path from videos captured from different unknown viewpoints to the same canonical view. The strength of our technique is that we learn a single NKTM for all actions and all camera viewing directions. Thus, NKTM does not require action labels during learning and knowledge of the camera viewpoints during training or testing. NKTM is learned once only from dense trajectories of synthetic points fitted to mocap data and then applied to real video data. Trajectories are coded with a general codebook learned from the same mocap data. NKTM is scalable to new action classes and training data as it does not require re-learning. Experiments on the IXMAS and N-UCLA datasets show that NKTM outperforms existing state-of-the-art methods for cross-view action recognition."}
{"_id": "92c17da290d575caac28280c27427bc06140c456", "title": "Realtime ray tracing and interactive global illumination", "text": "Interaktive 3D-Computergraphik basiert heutzutage fast ausschliesslich auf dem Rasterisierungsverfahren. Dieses ist heute sehr effizient und g\u00fcnstig realisierbar, st\u00f6sst aber zunehmend an seine Grenzen hinsichtlich unterst\u00fctzter Szenenkomplexit\u00e4t und erreichbarer Darstellungsqualit\u00e4t. Eine Alternative zum Rasterisierungsverfahren ist das Ray Tracing Verfahren, welches zwar allgemein f\u00fcr bessere Bildqualit\u00e4t bekannt ist, aber aufgrund hoher Rechenanforderungen bisher als unvereinbar mit interaktiver Performanz galt. In dieser Arbeit geht es um die Entwicklung der Echtzeit Ray Tracing Technologie, mit der das Ray Tracing Verfahren auch f\u00fcr interaktive Anwendungen erm\u00f6glicht wird. Die im Rahmen dieser Dissertation entwickelten Techniken erm\u00f6glichen nun erstmalig interaktive Ray Tracing Performanz selbst auf Standard PCs. Die dazu entwickelten Methoden beinhalten unter anderem einen extrem schnellen Ray Tracing Kern, ein effizientes Parallelisierungsframework, die Unterst\u00fctzung dynamischer und extrem komplexer Modelle, sowie eine geeignete Programmierschnittstelle. Darauf aufbauend wurden dann weiterhin Verfahren entwickelt, die es erstmalig erm\u00f6glichen, auch die globale Lichtausbreitung interaktiv zu simulieren. Die im Rahmen der Dissertation entwickelten Techniken formen ein komplettes Framework f\u00fcr Echtzeit-Ray Tracing und interaktive Beleuchtungssimulation, welches interaktive Performanz selbst f\u00fcr massiv komplexe Szenen liefert, sowie erstmalig physikalisch basierte Computergraphik unter Echtzeitbedingungen erm\u00f6glicht."}
{"_id": "30b5bc9c3df6ecd97c1ecab972275dbc610882e9", "title": "Supervised clustering of streaming data for email batch detection", "text": "We address the problem of detecting batches of emails that have been created according to the same template. This problem is motivated by the desire to filter spam more effectively by exploiting collective information about entire batches of jointly generated messages. The application matches the problem setting of supervised clustering, because examples of correct clusterings can be collected. Known decoding procedures for supervised clustering are cubic in the number of instances. When decisions cannot be reconsidered once they have been made --- owing to the streaming nature of the data --- then the decoding problem can be solved in linear time. We devise a sequential decoding procedure and derive the corresponding optimization problem of supervised clustering. We study the impact of collective attributes of email batches on the effectiveness of recognizing spam emails."}
{"_id": "a3f6e614b7add731261347c1dfab9f3e64c28c24", "title": "VSkin: Sensing Touch Gestures on Surfaces of Mobile Devices Using Acoustic Signals", "text": "Enabling touch gesture sensing on all surfaces of the mobile device, not limited to the touchscreen area, leads to new user interaction experiences. In this paper, we propose VSkin, a system that supports fine-grained gesture-sensing on the back of mobile devices based on acoustic signals. VSkin utilizes both the structure-borne sounds, i.e., sounds propagating through the structure of the device, and the air-borne sounds, i.e., sounds propagating through the air, to sense finger tapping and movements. By measuring both the amplitude and the phase of each path of sound signals, VSkin detects tapping events with an accuracy of 99.65% and captures finger movements with an accuracy of 3.59mm."}
{"_id": "0b566c60021347fd1ffbfe04d036e3c13b58fbb5", "title": "An Acoustic ECHO Suppressor Based on a Frequency-Domain Model of Highly Nonlinear Residual ECHO", "text": "This paper proposes a new acoustic echo suppressor based on a frequency-domain model of highly nonlinear residual echo. The proposed echo suppressor controls the gain in each frequency bin based on the new model where the highly nonlinear residual echo is approximated as a product of a regression coefficient and the echo replica in the frequency domain. To reduce annoying modulation by the error of the model, a flooring operation of estimated near-end signal level is introduced to the gain control. Simulation results with speech data recorded by a hands-free cellphone show that the proposed echo suppressor reduces the highly nonlinear residual echo to an almost inaudible level"}
{"_id": "46dab5eb9c11bd49893e2dafa7d1b720a0aa2b3d", "title": "W ORDS OR C HARACTERS ? F INE-GRAINED", "text": "Previous work combines word-level and character-level representations using concatenation or scalar weighting, which is suboptimal for high-level tasks like reading comprehension. We present a fine-grained gating mechanism to dynamically combine word-level and character-level representations based on properties of the words. We also extend the idea of fine-grained gating to modeling the interaction between questions and paragraphs for reading comprehension. Experiments show that our approach can improve the performance on reading comprehension tasks, achieving new state-of-the-art results on the Children\u2019s Book Test and Who Did What datasets. To demonstrate the generality of our gating mechanism, we also show improved results on a social media tag prediction task.1"}
{"_id": "1c426bfa87f9a7d2839deb5784a609967e8f376f", "title": "Are Metamorphic Viruses Really Invincible?", "text": "Metamorphic viruses enjoy the apparent invincibility because a virus writer has the advantage of knowing the weak spots of AV technologies. We could turn the tables if we can identify similar weak spots in a metamorphic virus. Indeed, Lakhotia and Singh close their otherwise gloomy article with one bright spot: \" The good news is that a virus writer is confronted with the same theoretical limit as anti-virus technologies\u2026 It may be worth contemplating how this could be used to the advantage of anti-virus technologies. \" SUMMARY In the game of \" hide and seek, \" where a virus tries to hide and the AV scanners tries to seek, the winner is the one that can take advantage of the other's weak spot. So far a virus writer has enjoyed the upper hand for she could exploit the limitations of AV technologies. Metamorphic viruses are particularly insidious in taking such advantage. A metamorphic virus thwarts detection by signature-based (static) AV technologies by morphing its code as it propagates. A virus can also thwart detection by emulation-based (dynamic) technologies. To do so it needs to detect whether it is running in an emulator and change its behavior. So are metamorphic viruses invincible? This paper investigates the above remark and identifies what promises to be the Achilles' heel of a metamorphic virus. The key observation is that in order to mutate its code, generations after generations, a metamorphic virus must analyze its own code. Thus, it too must face the limits of static and dynamic analyses. Beyond that a metamorphic virus has another constraint: it must be able to re-analyze the mutated code that it generates. Thus, the analysis within the virus, of how to transform the code in current generation, depends upon the complexity of transformations in the previous generation. To overcome the challenges of static and dynamic analyses, the virus has the following options: do not obfuscate the transformed code in any generation; use some coding conventions that can aid it in detecting its own obfuscations; or develop smart algorithms to detect its specific obfuscations. This paper uncovers the Achilles' heel of a metamorphic virus."}
{"_id": "b6011390c08d7982bdaecb60822e72ed7c751ea4", "title": "MapReduce network enabled algorithms for classification based on association rules", "text": ""}
{"_id": "c516d505dcee2faa0eea6b6a456fefa9451af12e", "title": "Optimizing memory transactions", "text": "Atomic blocks allow programmers to delimit sections of code as 'atomic', leaving the language's implementation to enforce atomicity. Existing work has shown how to implement atomic blocks over word-based transactional memory that provides scalable multi-processor performance without requiring changes to the basic structure of objects in the heap. However, these implementations perform poorly because they interpose on all accesses to shared memory in the atomic block, redirecting updates to a thread-private log which must be searched by reads in the block and later reconciled with the heap when leaving the block.This paper takes a four-pronged approach to improving performance: (1) we introduce a new 'direct access' implementation that avoids searching thread-private logs, (2) we develop compiler optimizations to reduce the amount of logging (e.g. when a thread accesses the same data repeatedly in an atomic block), (3) we use runtime filtering to detect duplicate log entries that are missed statically, and (4) we present a series of GC-time techniques to compact the logs generated by long-running atomic blocks.Our implementation supports short-running scalable concurrent benchmarks with less than 50\\% overhead over a non-thread-safe baseline. We support long atomic blocks containing millions of shared memory accesses with a 2.5-4.5x slowdown."}
{"_id": "cb84d3910fe18b399a5831e50f6545107d6af379", "title": "Contextual gaps: privacy issues on Facebook", "text": "Social networking sites like Facebook are rapidly gaining in popularity. At the same time, they seem to present significant privacy issues for their users. We analyze two of Facebooks\u2019s more recent features, Applications and News Feed, from the perspective enabled by Helen Nissenbaum\u2019s treatment of privacy as \u201ccontextual integrity.\u201d Offline, privacy is mediated by highly granular social contexts. Online contexts, including social networking sites, lack much of this granularity. These contextual gaps are at the root of many of the sites\u2019 privacy issues. Applications, which nearly invisibly shares not just a users\u2019, but a user\u2019s friends\u2019 information with third parties, clearly violates standard norms of information flow. News Feed is a more complex case, because it involves not just questions of privacy, but also of program interface and of the meaning of \u201cfriendship\u201d online. In both cases, many of the privacy issues on Facebook are primarily design issues, which could be ameliorated by an interface that made the flows of information more transparent to users."}
{"_id": "822160590349053d529d4404d06e77c07d8a584f", "title": "Map-based navigation in mobile robots:: II. A review of map-learning and path-planning strategies", "text": "This article reviews map-learning and path-planning strategies within the context of mapbased navigation in mobile robots. Concerning map-learning, it distinguishes metric maps from topological maps and describes procedures that help maintain the coherency of these maps. Concerning path-planning, it distinguishes continuous from discretized spaces and describes procedures applicable when the execution of a plan fails. It insists on the need for an integrated conception of such procedures, that must be tightly tailored to the specific robot that is used notably to the capacities and limitations of its sensory-motor equipment and to the specific environment that is experienced. A hierarchy of navigation strategies is outlined in the discussion, together with the sort of adaptive capacities each affords to cope with unexpected obstacles or dangers encountered on an animat or robot\u2019s way to its goal."}
{"_id": "a74eb421855e032a4267d8a5cbb98f45cfd49cb2", "title": "Effective Defense Schemes for Phishing Attacks on Mobile Computing Platforms", "text": "Recent years have witnessed the increasing threat of phishing attacks on mobile computing platforms. In fact, mobile phishing is particularly dangerous due to the hardware limitations of mobile devices and the habits of mobile users. In this paper, we did a comprehensive study on the security vulnerabilities caused by mobile phishing attacks, including web page phishing attacks, application phishing attacks, and account registry phishing attacks. Existing schemes designed for web phishing attacks on personal computers (PCs) cannot effectively address the various phishing attacks on mobile devices. Hence, we propose MobiFish, which is a novel automated lightweight antiphishing scheme for mobile platforms. MobiFish verifies the validity of web pages, applications, and persistent accounts by comparing the actual identity to the claimed identity. MobiFish has been implemented on a Nexus 4 smartphone running the Android 4.2 operating system. We experimentally evaluate the performance of MobiFish with 100 phishing URLs and corresponding legitimate URLs, as well as phishing apps. The results show that MobiFish is very effective in detecting phishing attacks on mobile phones."}
{"_id": "db5458476f3fd850d9b4c947e3aff04bf8ba3edc", "title": "Knowledge actionability: satisfying technical and business interestingness", "text": "Traditionally, knowledge actionability has been investigated mainly by developing and improving technical interestingness. Recently, initial work on technical subjective interestingness and business-oriented profit mining presents general potential, while it is a long-term mission to bridge the gap between technical significance and business expectation. In this paper, we propose a two-way significance framework for measuring knowledge actionability, which highlights both technical interestingness and domain-specific expectations. We further develop a fuzzy interestingness aggregation mechanism to generate a ranked final pattern set balancing technical and business interests. Real-life data mining applications show the proposed knowledge actionability framework can complement technical interestingness while satisfy real user needs."}
{"_id": "4d49410cb1c4b4c6b05cbc83cc652984adf68fe3", "title": "Results of syndactyly release using a modification of the Flatt technique.", "text": "The results of 144 congenital syndactyly releases over a 12-year period by a single surgeon using a modified Flatt technique (dorsal hourglass flap, interdigitating zigzag flaps, and full-thickness skin grafts) are analyzed considering the association of skin grafts and web creep. The mean follow-up was 5 years. There were seven cases of graft failure, only one of which developed web creep. Web creep occurred in 4.2% of web releases. The results suggest that avoiding longitudinal straight-line scars across the web space may be an important factor in avoiding web creep when performing the modified Flatt technique described."}
{"_id": "32d00b42eaa2fc8bac4e9d0be7b95fea2c05a808", "title": "A hybrid medium access control protocol for underwater wireless networks", "text": "Underwater networks allow investigation of many areas of the world not easily accessed by humans, but offer interesting challenges to device and protocol designers due to the unique channel conditions present when using acoustic communications. The high transmit power of acoustic transceivers makes the medium access protocol a primary focus point for reducing energy consumption in energy limited underwater devices. Scheduled protocols use very little power by eliminating collisions, but cannot adapt to changing traffic conditions in the same way as random protocols. We attempt to bridge these two ideas by dividing time into scheduled and unscheduled access periods in order to yield the benefits of each protocol. We show that this technique increases the bits delivered per energy unit in many cases of interest. Additionally, the hybrid technique provides low latency for a wider range of traffic rates than either of the two protocols when considered individually. We also investigate some of the design tradeoffs to consider when using a hybrid protocol."}
{"_id": "847fd4428705785972bbf0d3be9575ba9a36f516", "title": "NativeGuard: protecting android applications from third-party native libraries", "text": "Android applications often include third-party libraries written in native code. However, current native components are not well managed by Android's security architecture. We present NativeGuard, a security framework that isolates native libraries from other components in Android applications. Leveraging the process-based protection in Android, NativeGuard isolates native libraries of an Android application into a second application where unnecessary privileges are eliminated. NativeGuard requires neither modifications to Android nor access to the source code of an application. It addresses multiple technical issues to support various interfaces that Android provides to the native world. Experimental results demonstrate that our framework works well with a set of real-world applications, and incurs only modest overhead on benchmark programs."}
{"_id": "2725ae621bf24e7f64a92d10bab82499fff541cc", "title": "Working memory deficits in adults with attention-deficit/hyperactivity disorder (ADHD): an examination of central executive and storage/rehearsal processes.", "text": "The current study was the first to use a regression approach to examine the unique contributions of central executive (CE) and storage/rehearsal processes to working memory (WM) deficits in adults with ADHD. Thirty-seven adults (ADHD = 21, HC = 16) completed phonological (PH) and visuospatial (VS) working memory tasks. While both groups performed significantly better during the PH task relative to the VS task, adults with ADHD exhibited significant deficits across both working memory modalities. Further, the ADHD group recalled disproportionately fewer PH and VS stimuli as set-size demands increased. Overall, the CE and PH storage/rehearsal processes of adults with ADHD were both significantly impaired relative to those of the healthy control adults; however, the magnitude of the CE effect size was much smaller compared to previous studies of children with the disorder. Collectively, results provide support for a lifelong trajectory of WM deficits in ADHD."}
{"_id": "0fe14361f45b8197185c54549fbaf105d9d3e3aa", "title": "Line-of-sight guidance for formation control of quadrotor", "text": "In the broad class of unmanned aerial vehicles (UAV), Quadrotor is an important member. Cooperative control of UAVs are currently being researched for a wide range of applications. An active and important research area of cooperative control of multiple UAVs is the formation control. Formation flying is the disciplined flight of two or more aircraft under the command of a flight leader. The main goal is to coordinate a group of UAVs to achieve a desired formation pattern and avoid inter-vehicle collisions at the same time. Formation flight of a team of UAVs has broad applications such as aerial mapping, traffic monitoring, terrain and utilities inspection, search and rescue operations, perimeter planning, disaster monitoring, reconnaissance mission, surveillance etc. Here we propose a Line Of Sight (LOS) guidance based flight formation control strategy for multiple quad rotor UAVs in leader-follower framework. The guidance laws for 2D formation are implemented with the help of LOS angles, between adjacent quadrotors to keep the current quadrotor's position in formation. The full dynamic model of quadrotor incorporating LOS guidance law is considered for designing PID controller and the simulation analysis was done using MATLAB/ Simulink."}
{"_id": "2c0e8ab3879c0a729e610cf02144318ae45d7dde", "title": "Hand Gesture Recognition With Multiscale Weighted Histogram of Contour Direction Normalization for Wearable Applications", "text": "This paper proposes a static hand gesture recognition method with low computation and memory consumption for wearable applications. The hand contour is chosen as the hand gesture feature and support vector machine is used to classify the feature. A multiscale weighted histogram of contour direction-based direction normalization is proposed to ensure good recognition performance. In order to improve efficiency, the proposed histogram only counts the direction of the contour point to focus on the most significant hand feature in the first-person view of wearable devices. Based on the hand\u2019s anatomy, the proposed histogram is weighted by considering each contour point\u2019s position and direction jointly using the direction-angle map, to ensure robustness. Experimental results show that the proposed method can give a recognition accuracy of 97.1% with a frame rate of 30 fps on a PC."}
{"_id": "ffcea157b792edc13390f9cc2795a35d480c4867", "title": "Prototype of automatic accident detection and management in vehicular environment using VANET and IoT", "text": "The increase in the population of cities and the number of vehicles leads to increase in congestion on roads and accidents. The lack of quick medical service on road accidents is the major cause of death. In such situation, an automatic accident detection can help to save the loss of life. In this paper, a prototype is designed for an automatic accident detection using Vehicular Adhoc Network (VANET) and Internet of Things (IoT). The application is able to detect accident and severity of the emergency level with the help of mechanical and medical sensors deployed in the vehicle. In case of emergency, the message is sent to a hospital using VANET, where our central server is able to find out the location of the accident and nearest medical center. It sends a message for an ambulance after detecting basic information. In order to clear the path on the way to accident's location, the client application on the ambulance generates alert messages."}
{"_id": "79f23b49b32bed6071993a4fade5e5926ad5feda", "title": "Author ' s personal copy Narcissism and recollections of early life experiences", "text": "Recent studies have found associations between narcissistic personality features and retrospective accounts of early experiences. The current study sought to extend these previous findings by examining whether adaptive and maladaptive features of narcissism were associated with recollections of early life experiences in a non-clinical sample of undergraduate students (N = 334). Results revealed that the Entitlement/Exploitativeness feature of narcissism was associated with low security, high parental discipline, and high threats of separation. Narcissistic Grandiosity was positively associated with peer affectional support and parental discipline, whereas Narcissistic Vulnerability was not uniquely associated with memories of early life experiences. The results provide partial support for models of narcissism in which parents are recalled as failing to provide a secure base while inducing threats of separation and discipline. 2011 Elsevier Ltd. All rights reserved."}
{"_id": "609ab78579f2f51e4677715c32d3370899bfd3a7", "title": "Mixing Qualitative and Quantitative Methods : Triangulation in Action *", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a joumal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "7abeaf172af1129556ee8b3fcbb2139172e50bdf", "title": "STRATEGIC DECISION PROCESSES IN HIGH VELOCITY ENVIRONMENTS : FOUR CASES IN THE MICROCOMPUTER INDUSTRY *", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "93dbcdc45336f4d26575e8273b3d70f7a1a260b2", "title": "Research Commentary: Desperately Seeking the \"IT\" in IT Research - A Call to Theorizing the IT Artifact", "text": "The field of information systems is premised on the centrality of information technology in everyday socio-economic life. Yet, drawing on a review of the full set of articles published in Information Systems Research (ISR) over the past ten years, we argue that the field has not deeply engaged its core subject matter\u2014the information technology (IT) artifact. Instead, we find that IS researchers tend to give central theoretical significance to the context (within which some usually unspecified technology is seen to operate), the discrete processing capabilities of the artifact (as separable from its context or use), or the dependent variable (that which is posited to be affected or changed as technology is developed, implemented, and used). The IT artifact itself tends to disappear from view, be taken for granted, or is presumed to be unproblematic once it is built and installed. After discussing the implications of our findings, we propose a research direction for the IS field that begins to take technology as seriously as its effects, context, and capabilities. In particular, we propose that IS researchers begin to theorize specifically about IT artifacts, and then incorporate these theories explicitly into their studies. We believe that such a research direction is critical if IS research is to make a significant contribution to the understanding of a world increasingly suffused with ubiquitous, interdependent, and emergent information technologies. (Information Systems Research; Information Technology; IT Research; IT Theory; Technological Artifacts; Technology Change)"}
{"_id": "bc5e20c9e950a5dcedbe1caacc39afe097e3a6b0", "title": "Generalizing Generalizability in Information Systems Research", "text": "Generalizability is a major concern to those who do, and use, research. Statistical, sampling-based generalizability is well known, but methodologists have long been aware of conceptions of generalizability beyond the statistical. The purpose of this essay is to clarify the concept of generalizability by critically examining its nature, illustrating its use and misuse, and presenting a framework for classifying its different forms. The framework organizes the different forms into four types, which are defined by the distinction between empirical and theoretical kinds of statements. On the one hand, the framework affirms the bounds within which statistical, sampling-based generalizability is legitimate. On the other hand, the framework indicates ways in which researchers in information systems and other fields may properly lay claim to generalizability, and thereby broader relevance, even when their inquiry falls outside the bounds of sampling-based research. (Research Methodology; Positivist Research; Interpretive Research; Quantitative Research; Qualitative Research; Case Studies; Research Design; Generalizability )"}
{"_id": "5261431c078f1c83740b9597e5263df7d218baf5", "title": "Ranking Through Clustering: An Integrated Approach to Multi-Document Summarization", "text": "Multi-document summarization aims to create a condensed summary while retaining the main characteristics of the original set of documents. Under such background, sentence ranking has hitherto been the issue of most concern. Since documents often cover a number of topic themes with each theme represented by a cluster of highly related sentences, sentence clustering has been explored in the literature in order to provide more informative summaries. For each topic theme, the rank of terms conditional on this topic theme should be very distinct, and quite different from the rank of terms in other topic themes. Existing cluster-based summarization approaches apply clustering and ranking in isolation, which leads to incomplete, or sometimes rather biased, analytical results. A newly emerged framework uses sentence clustering results to improve or refine the sentence ranking results. Under this framework, we propose a novel approach that directly generates clusters integrated with ranking in this paper. The basic idea of the approach is that ranking distribution of sentences in each cluster should be quite different from each other, which may serve as features of clusters and new clustering measures of sentences can be calculated accordingly. Meanwhile, better clustering results can achieve better ranking results. As a result, ranking and clustering by mutually and simultaneously updating each other so that the performance of both can be improved. The effectiveness of the proposed approach is demonstrated by both the cluster quality analysis and the summarization evaluation conducted on the DUC 2004-2007 datasets."}
{"_id": "a677c44e5650f364a16eaee871dc9a4073966c86", "title": "The optimal training load for the development of dynamic athletic performance.", "text": "This study was performed to determine which of three theoretically optimal resistance training modalities resulted in the greatest enhancement in the performance of a series of dynamic athletic activities. The three training modalities included 1) traditional weight training, 2) plyometric training, and 3) explosive weight training at the load that maximized mechanical power output. Sixty-four previously trained subjects were randomly allocated to four groups that included the above three training modalities and a control group. The experimental groups trained for 10 wk performing either heavy squat lifts, depth jumps, or weighted squat jumps. All subjects were tested prior to training, after 5 wk of training and at the completion of the training period. The test items included 1) 30-m sprint, 2) vertical jumps performed with and without a countermovement, 3) maximal cycle test, 4) isokinetic leg extension test, and 5) a maximal isometric test. The experimental group which trained with the load that maximized mechanical power achieved the best overall results in enhancing dynamic athletic performance recording statistically significant (P < 0.05) improvements on most test items and producing statistically superior results to the two other training modalities on the jumping and isokinetic tests."}
{"_id": "f14eb2a9af434328ea458b8d6aedba2979aede12", "title": "Hate Online : A Content Analysis of Extremist Internet Sites", "text": "Extremists, such as hate groups espousing racial supremacy or separation, have established an online presence. A content analysis of 157 extremist web sites selected through purposive sampling was conducted using two raters per site. The sample represented a variety of extremist groups and included both organized groups and sites maintained by apparently unaffiliated individuals. Among the findings were that the majority of sites contained external links to other extremist sites (including international sites), that roughly half the sites included multimedia content, and that half contained racist symbols. A third of the sites disavowed racism or hatred, yet one third contained material from supremacist literature. A small percentage of sites specifically urged violence. These and other findings suggest that the Internet may be an especially powerful tool for extremists as a means of reaching an international audience, recruiting members, linking diverse extremist groups, and allowing maximum image control."}
{"_id": "4bd837a78ed8a217b6da10adbb391a61ac42ebc6", "title": "Treatment of oral mucocele-scalpel versus CO2 laser.", "text": "OBJECTIVE\nTo compare the results obtained after oral mucocele resection with the scalpel versus the CO2 laser, based on the complications and recurrences after surgery.\n\n\nPATIENTS AND METHODS\nOf the 68 patients we studied who have mucocele, 38 were resected with a scalpel and the remaining 30 with the CO2 laser (5-7 W). Patient sex and age were documented, along with location of the lesion as well as size, symptoms, duration, etiological factors, type of treatment, complications and recurrences after surgical removal.\n\n\nRESULTS\nThe sample comprised 40 males and 28 females, aged between 6-65 years. The histological diagnosis was extravasation mucocele in 95% of the cases. The most frequent location was the lower lip (73.5%). The mean lesion diameter was 9 mm, and in most cases no evident etiological factor was recorded. The mean duration of the lesion was 4 months. Among the cases of conventional surgical removal of mucocele, recurrence was recorded in 8.8% of the cases, and 13.2% of the patients suffered postoperative complications--the most frequent being the presence of fibrous scars. There were no complications or relapses after a minimum follow-up of 12 months in the cases subjected to CO2 laser treatment.\n\n\nCONCLUSIONS\nOral mucocele ablation with the CO2 laser offers more predictable results and fewer complications and recurrences than conventional resection with the scalpel."}
{"_id": "639a12a90a88641575d62bafbfce637b835f218a", "title": "A neuromorphic VLSI design for spike timing and rate based synaptic plasticity", "text": "Triplet-based Spike Timing Dependent Plasticity (TSTDP) is a powerful synaptic plasticity rule that acts beyond conventional pair-based STDP (PSTDP). Here, the TSTDP is capable of reproducing the outcomes from a variety of biological experiments, while the PSTDP rule fails to reproduce them. Additionally, it has been shown that the behaviour inherent to the spike rate-based Bienenstock-Cooper-Munro (BCM) synaptic plasticity rule can also emerge from the TSTDP rule. This paper proposes an analogue implementation of the TSTDP rule. The proposed VLSI circuit has been designed using the AMS 0.35\u00a0\u03bcm CMOS process and has been simulated using design kits for Synopsys and Cadence tools. Simulation results demonstrate how well the proposed circuit can alter synaptic weights according to the timing difference amongst a set of different patterns of spikes. Furthermore, the circuit is shown to give rise to a BCM-like learning rule, which is a rate-based rule. To mimic an implementation environment, a 1000 run Monte Carlo (MC) analysis was conducted on the proposed circuit. The presented MC simulation analysis and the simulation result from fine-tuned circuits show that it is possible to mitigate the effect of process variations in the proof of concept circuit; however, a practical variation aware design technique is required to promise a high circuit performance in a large scale neural network. We believe that the proposed design can play a significant role in future VLSI implementations of both spike timing and rate based neuromorphic learning systems."}
{"_id": "7ea6a57b28d7f827ae5ad1a631aac31f3dabf275", "title": "A Comparative Study on English and Chinese Word Uses with LIWC", "text": "This paper compared the linguistic and psychological word uses in English and Chinese languages with LIWC (Linguistic Inquiry and Word Count) programs. A Principal Component Analysis uncovered six linguistic and psychological components, among which five components were significantly correlated. The correlated components were ranked as Negative Valence (r=.92), Embodiment (r=.88), Narrative (r=.68), Achievement (r=.65), and Social Relation (r=.64). However, the results showed the order of the representative features differs in two languages and certain word categories co-occurred with different components in English and Chinese. The differences were interpreted from the perspective of distinctive eastern and western cultures."}
{"_id": "d706fb1165318986f72e28b58c8047fc9a7201a8", "title": "Enhancing credit scoring model performance by a hybrid scoring matrix", "text": "Competition of the consumer credit market in Taiwan has become severe recently. Therefore, most financial institutions actively develop credit scoring models based on assessments of the credit approval of new customers and the credit risk management of existing customers. This study uses a genetic algorithm for feature selection and decision trees for customer segmentation. Moreover, it utilizes logistic regression to build the application and credit bureau scoring models where the two scoring models are combined for constructing the scoring matrix. The scoring matrix undergoes more accurate risk judgment and segmentation to further identify the parts required enhanced management or control within a personal loan portfolio. The analytical results demonstrate that the predictive ability of the scoring matrix outperforms both the application and credit bureau scoring models. Regarding the K-S value, the scoring matrix increases the prediction accuracy compared to the application and credit bureau scoring models by 18.40 and 5.70%, respectively. Regarding the AUC value, the scoring matrix increases the prediction accuracy compared to the application and credit bureau scoring models by 10.90 and 6.40%, respectively. Furthermore, this study applies the scoring matrix to the credit approval decisions for corresponding risk groups to strengthen bank\u2019s risk management practices."}
{"_id": "8fc1e559e837172e5409d4efaaa9dd6ea02577e6", "title": "Tracking in cell and developmental biology.", "text": "The past decade has seen an unprecedented data explosion in biology. It has become evident that in order to take full advantage of the potential wealth of information hidden in the data produced by even a single experiment, visual inspection and manual analysis are no longer adequate. To ensure efficiency, consistency, and completeness in data processing and analysis, computational tools are essential. Of particular importance to many modern live-cell imaging experiments is the ability to automatically track and analyze the motion of objects in time-lapse microscopy images. This article surveys the recent literature in this area. Covering all scales of microscopic observation, from cells, down to molecules, and up to entire organisms, it discusses the latest trends and successes in the development and application of computerized tracking methods in cell and developmental biology."}
{"_id": "39a4d785cc3e1124c2af637617b0d3977da23ffd", "title": "Measurement-Based Harmonic Modeling of an Electric Vehicle Charging Station Using a Three-Phase Uncontrolled Rectifier", "text": "The harmonic pollution of electric vehicle chargers is increasing rapidly as the scale of electric vehicle charging stations enlarges to meet the rising demand for electric vehicles. This paper investigates the operating characteristics of a three-phase uncontrolled rectification electric vehicle charger with passive power factor correction. A method for estimating the equivalent circuit parameters of chargers is proposed based on the measured feature data of voltage and current at the ac side and on the circuit constraint during the conduction charging process. A harmonic analytical model of the charging station is then established by dividing the same charger types into groups. The parameter estimation method and the harmonic model of the charging station are verified through simulation, experiment, and field test. The parameter sensitivities of the equivalent circuit to the charging current harmonic are also studied."}
{"_id": "a80fedd6746790ae95dd30387aaf64ea0b265f91", "title": "Lemmatization and Morphological Tagging in German and Latin: A Comparison and a Survey of the State-of-the-art", "text": "This paper relates to the challenge of morphological tagging and lemmatization in morphologically rich languages by example of German and Latin. We focus on the question what a practitioner can expect when using state-of-the-art solutions out of the box. Moreover, we contrast these with old(er) methods and implementations for POS tagging. We examine to what degree recent efforts in tagger development pay out in improved accuracies \u2014 and at what cost, in terms of training and processing time. We also conduct in-domain vs. out-domain evaluation. Out-domain evaluations are particularly insightful because the distribution of the data which is being tagged by a user will typically differ from the distribution on which the tagger has been trained. Furthermore, two lemmatization techniques are evaluated. Finally, we compare pipeline tagging vs. a tagging approach that acknowledges dependencies between inflectional categories."}
{"_id": "50eaaa6d4b27b69de01e9186990cb5cdfad3d330", "title": "Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in Neocortex", "text": "Pyramidal neurons represent the majority of excitatory neurons in the neocortex. Each pyramidal neuron receives input from thousands of excitatory synapses that are segregated onto dendritic branches. The dendrites themselves are segregated into apical, basal, and proximal integration zones, which have different properties. It is a mystery how pyramidal neurons integrate the input from thousands of synapses, what role the different dendrites play in this integration, and what kind of network behavior this enables in cortical tissue. It has been previously proposed that non-linear properties of dendrites enable cortical neurons to recognize multiple independent patterns. In this paper we extend this idea in multiple ways. First we show that a neuron with several thousand synapses segregated on active dendrites can recognize hundreds of independent patterns of cellular activity even in the presence of large amounts of noise and pattern variation. We then propose a neuron model where patterns detected on proximal dendrites lead to action potentials, defining the classic receptive field of the neuron, and patterns detected on basal and apical dendrites act as predictions by slightly depolarizing the neuron without generating an action potential. By this mechanism, a neuron can predict its activation in hundreds of independent contexts. We then present a network model based on neurons with these properties that learns time-based sequences. The network relies on fast local inhibition to preferentially activate neurons that are slightly depolarized. Through simulation we show that the network scales well and operates robustly over a wide range of parameters as long as the network uses a sparse distributed code of cellular activations. We contrast the properties of the new network model with several other neural network models to illustrate the relative capabilities of each. We conclude that pyramidal neurons with thousands of synapses, active dendrites, and multiple integration zones create a robust and powerful sequence memory. Given the prevalence and similarity of excitatory neurons throughout the neocortex and the importance of sequence memory in inference and behavior, we propose that this form of sequence memory may be a universal property of neocortical tissue."}
{"_id": "2e5f2b57f4c476dd69dc22ccdf547e48f40a994c", "title": "Gradient Flow in Recurrent Nets : the Difficulty of Learning Long-Term Dependencies", "text": ""}
{"_id": "290fdb5d932f75395f6ab668b774a2f62efaf056", "title": "Defining Creativity: Finding Keywords for Creativity Using Corpus Linguistics Techniques", "text": "A computational system that evaluates creativity needs guidance on what creativity actually is. It is by no means straightforward to provide a computer with a formal definition of creativity; no such definition yet exists and viewpoints in creativity literature vary as to what the key components of creativity are considered to be. This work combines several viewpoints for a more general consensus of how we define creativity, using a corpus linguistics approach. 30 academic papers from various academic disciplines were analysed to extract the most frequently used words and their frequencies in the papers. This data was statistically compared with general word usage in written English. The results form a list of words that are significantly more likely to appear when talking about creativity in academic texts. Such words can be considered keywords for creativity, guiding us in uncovering key sub-components of creativity which can be used for computational assessment of creativity."}
{"_id": "9fc8addac99bda488f8318721078f39bdd9c6c73", "title": "Weighted Task Regularization for Multitask Learning", "text": "Multitask Learning has been proven to be more effective than the traditional single task learning on many real-world problems by simultaneously transferring knowledge among different tasks which may suffer from limited labeled data. However, in order to build a reliable multitask learning model, nontrivial effort to construct the relatedness between different tasks is critical. When the number of tasks is not large, the learning outcome may suffer if there exists outlier tasks that inappropriately bias majority. Rather than identifying or discarding such outlier tasks, we present a weighted regularized multitask learning framework based on regularized multitask learning, which uses statistical metrics, such as Kullback-Leibler divergence, to assign weights prior to regularization process that robustly reduces the impact of outlier tasks and results in better learned models for all tasks. We then show that this formulation can be solved using dual form like optimizing a standard support vector machine with varied kernels. We perform experiments using both synthetic dataset and real-world dataset from petroleum industry which shows that our methodology outperforms existing methods."}
{"_id": "02d1105bec3877ed8cd2d28f76b67ae8ba3f2331", "title": "A contextual-bandit approach to personalized news article recommendation", "text": "Personalized web services strive to adapt their services (advertisements, news articles, etc.) to individual users by making use of both content and user information. Despite a few recent advances, this problem remains challenging for at least two reasons. First, web service is featured with dynamically changing pools of content, rendering traditional collaborative filtering methods inapplicable. Second, the scale of most web services of practical interest calls for solutions that are both fast in learning and computation.\n In this work, we model personalized recommendation of news articles as a contextual bandit problem, a principled approach in which a learning algorithm sequentially selects articles to serve users based on contextual information about the users and articles, while simultaneously adapting its article-selection strategy based on user-click feedback to maximize total user clicks.\n The contributions of this work are three-fold. First, we propose a new, general contextual bandit algorithm that is computationally efficient and well motivated from learning theory. Second, we argue that any bandit algorithm can be reliably evaluated offline using previously recorded random traffic. Finally, using this offline evaluation method, we successfully applied our new algorithm to a Yahoo! Front Page Today Module dataset containing over 33 million events. Results showed a 12.5% click lift compared to a standard context-free bandit algorithm, and the advantage becomes even greater when data gets more scarce."}
{"_id": "a6ec6739325a324d047f2c2ca27ce419f356d0d3", "title": "3D hand tracking using Kalman filter in depth space", "text": "Hand gestures are an important type of natural language used in many research areas such as human-computer interaction and computer vision. Hand gestures recognition requires the prior determination of the hand position through detection and tracking. One of the most efficient strategies for hand tracking is to use 2D visual information such as color and shape. However, visual-sensor-based hand tracking methods are very sensitive when tracking is performed under variable light conditions. Also, as hand movements are made in 3D space, the recognition performance of hand gestures using 2D information is inherently limited. In this article, we propose a novel real-time 3D hand tracking method in depth space using a 3D depth sensor and employing Kalman filter. We detect hand candidates using motion clusters and predefined wave motion, and track hand locations using Kalman filter. To verify the effectiveness of the proposed method, we compare the performance of the proposed method with the visual-based method. Experimental results show that the performance of the proposed method out performs visual-based method."}
{"_id": "7da955395515887257d8bdad47311e827ccf4c0f", "title": "Classification of Monte Carlo tree search variants", "text": "Many variations of Monte Carlo tree search have been proposed and tested but relatively little comparison of these variants have occurred. In this study an Agent Case Embedding analysis and agglomorative hierarchical clustering was performed using eight variants of Monte Carlo Tree Search as agents and eight games as cases. This allowed us to compare the variant's abilities on each of the games to determine the type of games each are good at handling as well as which variants are similar to others. This comparison of variants exploits the ability of ACEs to compare different types of objects based on their behavior. By looking at the behavior of MCTS variants on a variety of games we obtain a good notion of the degree to which different MCTS variants exhibit different capabilities. A side effect of comparing MCTS variants with agent-case embeddings is that we also are able to compare the games used to test the MCTS variants."}
{"_id": "9a0e6e80bbbbe541230fe06ed4f072e6aa2a80ef", "title": "Group awareness in distributed software development", "text": "Open-source software development projects are almost always collaborative and distributed. Despite the difficulties imposed by distance, these projects have managed to produce large, complex, and successful systems. However, there is still little known about how open-source teams manage their collaboration. In this paper we look at one aspect of this issue: how distributed developers maintain group awareness. We interviewed developers, read project communication, and looked at project artifacts from three successful open source projects. We found that distributed developers do need to maintain awareness of one another, and that they maintain both a general awareness of the entire team and more detailed knowledge of people that they plan to work with. Although there are several sources of information, this awareness is maintained primarily through text-based communication (mailing lists and chat systems). These textual channels have several characteristics that help to support the maintenance of awareness, as long as developers are committed to reading the lists and to making their project communication public."}
{"_id": "d44c1e41821dff02a96d72ef5141120b1870aa38", "title": "Gland Segmentation and Computerized Gleason Grading of Prostate Histology by Integrating Low-, High-level and Domain Specific Information", "text": "In this paper we present a method of automatically detecting and segmenting glands in digitized images of prostate histology and to use features derived from gland morphology to distinguish between intermediate Gleason grades. Gleason grading is a method of describing prostate cancer malignancy on a numerical scale from grade 1 (early stage cancer) through grade 5 (highly infiltrative cancer). Studies have shown that gland morphology plays a significant role in discriminating Gleason grades. We present a method of automated detection and segmentation of prostate gland regions. A Bayesian classifier is used to detect candidate gland regions by utilizing low-level image features to find the lumen, epithelial cell cytoplasm, and epithelial nuclei of the tissue. False positive regions identified as glands are eliminated via use of domain-specific knowledge constraints. Following candidate region detection via low-level and empirical domain information, the lumen area is used to initialize a level-set curve, which is evolved to lie at the interior boundary of the nuclei surrounding the gland structure. Features are calculated from the boundaries that characterize the morphology of the lumen and the gland regions, including area overlap ratio, distance ratio, standard deviation and variance of distance, perimeter ratio, compactness, smoothness, and area. The feature space is reduced using a manifold learning scheme (Graph Embedding) that is used to embed objects that are adjacent to each other in the high dimensional feature space into a lower dimensional embedding space. Objects embedded in this low dimensional embedding space are then classified via a support vector machine (SVM) classifier as belonging to Gleason grade 3, grade 4 cancer, or benign epithelium. We evaluate the efficacy of the automated segmentation algorithm by comparing the classification accuracy obtained using the automated segmentation scheme to the accuracy obtained via a user assisted segmentation scheme. Using the automated scheme, the system achieves accuracies of 86.35% when distinguishing Gleason grade 3 from benign epithelium, 92.90% distinguishing grade 4 from benign epithelium, and 95.19% distinguishing between Gleason grades 3 and 4. The manual scheme returns accuracies of 95.14%, 95.35%, and 80.76% for the respective classification tasks, indicating that the automated segmentation algorithm and the manual scheme are comparable in terms of achieving the overall objective of grade classification."}
{"_id": "a9f74cbf03b70611d441a426dfb7eb656e1a3172", "title": "VHDL Implementation of 8-Bit ALU", "text": "In this paper VHDL implementation of 8-bit arithmetic logic unit (ALU) is presented. The design was implemented using VHDL Xilinx Synthesis tool ISE 13.1 and targeted for Spartan device. ALU was designed to perform arithmetic operations such as addition and subtraction using 8-bit fast adder, logical operations such as AND, OR, XOR and NOT operations, 1\u2019s and 2\u2019s complement operations and compare. ALU consist of two input registers to hold the data during operation, one output register to hold the result of operation, 8-bit fast adder with 2\u2019s complement circuit to perform subtraction and logic gates to perform logical operation. The maximum propagation delay is 13.588ns and power dissipation is 38mW. The ALU was designed for controller used in network interface card."}
{"_id": "49c309bbe7e9c68e9fe5ba863655523d3ae04b6e", "title": "String similarity measures and joins with synonyms", "text": "A string similarity measure quantifies the similarity between two text strings for approximate string matching or comparison. For example, the strings \"Sam\" and \"Samuel\" can be considered similar. Most existing work that computes the similarity of two strings only considers syntactic similarities, e.g., number of common words or q-grams. While these are indeed indicators of similarity, there are many important cases where syntactically different strings can represent the same real-world object. For example, \"Bill\" is a short form of \"William\". Given a collection of predefined synonyms, the purpose of the paper is to explore such existing knowledge to evaluate string similarity measures more effectively and efficiently, thereby boosting the quality of string matching.\n In particular, we first present an expansion-based framework to measure string similarities efficiently while considering synonyms. Because using synonyms in similarity measures is, while expressive, computationally expensive (NP-hard), we propose an efficient algorithm, called selective-expansion, which guarantees the optimality in many real scenarios. We then study a novel indexing structure called SI-tree, which combines both signature and length filtering strategies, for efficient string similarity joins with synonyms. We develop an estimator to approximate the size of candidates to enable an online selection of signature filters to further improve the efficiency. This estimator provides strong low-error, high-confidence guarantees while requiring only logarithmic space and time costs, thus making our method attractive both in theory and in practice. Finally, the results from an empirical study of the algorithms verify the effectiveness and efficiency of our approach."}
{"_id": "3238e8a63989f8f4fc82e95fd6926adf7e9c0ce2", "title": "Dynamic 2D/3D registration for the Kinect", "text": "Image and geometry registration algorithms are essential components of many computer graphics and computer vision systems. With recent technological advances in RGB-D sensors, robust algorithms that combine 2D image and 3D geometry registration have become an active area of research.\n This course introduces the basics of 2D/3D registration algorithms and provides theoretical explanations and practical tools for designing computer vision and computer graphics systems based on RGB-D devices such as the Microsoft Kinect or Asus Xtion Live. To illustrate the theory and demonstrate practical relevance, the course briefly discusses three applications: rigid scanning, non-rigid modeling, and real-time face tracking."}
{"_id": "9d6f79588b68b4a464b9d9badfdac59fba0fe7db", "title": "Stylometry with R: a suite of tools", "text": "Stylometry today uses either stand-alone dedicated programs, custom-made by stylometrists, or applies existing software, often one for each stage of the analysis. Stylometry with R can be placed somewhere in-between, as the powerful open-source statistical programming environment provides, on the one hand, the opportunity of building statistical applications from scratch, and, on the other, allows less advanced researchers to use ready-made scripts and libraries. In our own stylometric adventure with R, one of the aims was to build a tool (or a set of tools) that would combine sophisticated state-of-the-art algorithms of classification and/or clustering with a user-friendly interface. In particular, we wanted to implement a number of multidimensional methods that could be used by scholars without programming skills. And more: it soon became evident that once our R scripts are made, provided with a graphic user interface and more or less documented, they are highly usable in class; experience shows that this is an excellent way to work around R\u2019s normally steep learning curve without losing anything of the environment\u2019s considerable computing power and speed."}
{"_id": "31d585499a494273b04b807ba5564f057a2b2e96", "title": "A first simulation on optimizing EDS for cabin baggage screening regarding throughput", "text": "Airport security screening is vital for secure air transportation. Screening of cabin baggage heavily relies on human operators reviewing X-ray images. Explosive detection systems (EDS) developed for cabin baggage screening can be a very valuable addition security-wise. Depending on the EDS machine and settings, false alarm rates increase, which could reduce throughput. A discrete event simulation was used to investigate how different machine settings of EDS, different groups of X-ray screeners, and different durations of alarm resolution with explosives trace detection (ETD) influence throughput of a specific cabin baggage screening process. For the modelling of screening behavior in the context of EDS and for the estimation of model parameters, data was borrowed from a human-machine interaction experiment and a work analysis. In a second step, certain adaptations were tested for their potential to reduce the impact of EDS on throughput. The results imply that moderate increases in the false alarm rate by EDS can be buffered by employing more experienced and trained X-ray screeners. Larger increases of the false alarm rate require a fast alarm resolution and additional resources for the manual search task."}
{"_id": "7835d6bf615e9f7d5d2f4f3e232e9ca41fa142bf", "title": "Misuse of automated decision aids: Complacency, automation bias and the impact of training experience", "text": "The present study investigates automation misuse based on complacency and automation bias in interacting with a decision aid in a process control system. The effect of a preventive training intervention which includes exposing participants to rare automation failures is examined. Complacency is reflected in an inappropriate checking and monitoring of automated functions. In interaction with automated decision aids complacency might result in commission errors, i.e., following automatically generated recommendations even though they are false. Yet, empirical evidence proving this kind of relationship is still lacking. A laboratory experiment (N 1\u20444 24) was conducted using a process control simulation. An automated decision aid provided advice for fault diagnosis and management. Complacency was directly measured by the participants\u2019 information sampling behavior, i.e., the amount of information sampled in order to verify the automated recommendations. Possible commission errors were assessed when the aid provided false recommendations. The results provide clear evidence for complacency, reflected in an insufficient verification of the automation, while commission errors were associated with high levels of complacency. Hence, commission errors seem to be a possible, albeit not an inevitable consequence of complacency. Furthermore, exposing operators to automation failures during training significantly decreased complacency and thus represents a suitable means to reduce this risk, even though it might not avoid it completely. Potential applications of this research include the design of training protocols in order to prevent automation misuse in interaction with automated decision aids. r 2008 Elsevier Ltd. All rights reserved."}
{"_id": "fc306dbbcb851fcb50723009a1f068369c224a11", "title": "Airport Security Screener Competency : A Cross-Sectional and Longitudinal Analysis", "text": "The performance of 5,717 aviation security screeners in detecting prohibited items in x-ray images of passenger bags was analyzed over a period of 4 years. The measure used in this study was detection performance on the X-Ray Competency Assessment Test (X-Ray CAT) and performance on this measure was obtained on an annual basis. Between tests, screeners performed varying amounts of training in the task of prohibited item detection using an adaptive computer-based training system called X-Ray Tutor (XRT). For both XRT and X-Ray CAT, prohibited items are categorized into guns, knives, improvised explosive devices (IEDs), and other prohibited items. Cross-sectional and longitudinal analyses of the training and test data were performed. Both types of analysis revealed a strong improvement of X-Ray CAT performance as a result of XRT training for all 4 item categories. The results of the study indicate that screener competency in detecting prohibited items in x-ray images of passenger bags can be greatly improved through routine XRT training."}
{"_id": "7037813bb025161c109ccc6f5e3c716408d2d158", "title": "Fleet-sizing for multi-depot and periodic vehicle routing problems using a modular heuristic algorithm", "text": "In this paper, we address the problem of determining the optimal fleet size for three vehicle routing problems, i.e., multi-depot VRP, periodic VRP and multidepot periodic VRP. In each of these problems, we consider three kinds of constraints that are often found in reality, i.e., vehicle capacity, route duration and budget constraints. To tackle the problems, we propose a new Modular Heuristic Algorithm (MHA) whose exploration and exploitation strategies enable the algorithm to produce promising results. Extensive computational experiments show that MHA performs impressively well, in terms of solution quality and computational time, for the three problem classes."}
{"_id": "40ec4c93b8fcf7e85385e9cf0d498411b3f31aca", "title": "Review of Semantic-Free Utterances in Social Human-Robot Interaction", "text": "Review of Semantic-Free Utterances in Social Human\u2013Robot Interaction Selma Yilmazyildiz, Robin Read, Tony Belpeame & Werner Verhelst To cite this article: Selma Yilmazyildiz, Robin Read, Tony Belpeame & Werner Verhelst (2016) Review of Semantic-Free Utterances in Social Human\u2013Robot Interaction, International Journal of Human\u2013Computer Interaction, 32:1, 63-85, DOI: 10.1080/10447318.2015.1093856 To link to this article: http://dx.doi.org/10.1080/10447318.2015.1093856"}
{"_id": "f218d75744791bb070a6690b2977cafe026610ca", "title": "Gender Differences in Undergraduate Engineering Applicants: A Text Mining Approach", "text": "It is well known that post-secondary science and engineering programs attract fewer female students. In this paper, we analyze gender differences through text mining of over 30,000 applications to the engineering faculty of a large North American university. We use syntactic and semantic analysis methods to highlight differences in motivation, interests and background. Our analysis leads to three main findings. First, female applicants demonstrate a wider breadth of experience, whereas male applicants put a greater emphasis on technical depth. Second, more female applicants demonstrate a greater desire to serve society. Third, female applicants are more likely to mention personal influences for studying engineering."}
{"_id": "1f2c92c53cc5ad80bc929ff3b0ad746da0bb5f30", "title": "Building Competitive Direct Acoustics-to-Word Models for English Conversational Speech Recognition", "text": "Direct acoustics-to-word (A2W) models in the end-to-end paradigm have received increasing attention compared to conventional subword based automatic speech recognition models using phones, characters, or context-dependent hidden Markov model states. This is because A2W models recognize words from speech without any decoder, pronunciation lexicon, or externally-trained language model, making training and decoding with such models simple. Prior work has shown that A2W models require orders of magnitude more training data in order to perform comparably to conventional models. Our work also showed this accuracy gap when using the English Switchboard-Fisher data set. This paper describes a recipe to train an A2W model that closes this gap and is at-par with state-of-the-art sub-word based models. We achieve a word error rate of 8.8.8%/13.9% on the Hub5-2000 Switchboard/CallHome test sets without any decoder or language model. We find that model initialization, training data order, and regularization have the most impact on the A2W model performance. Next, we present a joint word-character A2W model that learns to first spell the word and then recognize it. This model provides a rich output to the user instead of simple word hypotheses, making it especially useful in the case of words unseen or rarely-seen during training."}
{"_id": "62c7918a228d4afc60422d2307616b69c5e5af43", "title": "Crime analysis and prediction using data mining", "text": "Crime analysis and prevention is a systematic approach for identifying and analyzing patterns and trends in crime. Our system can predict regions which have high probability for crime occurrence and can visualize crime prone areas. With the increasing advent of computerized systems, crime data analysts can help the Law enforcement officers to speed up the process of solving crimes. Using the concept of data mining we can extract previously unknown, useful information from an unstructured data. Here we have an approach between computer science and criminal justice to develop a data mining procedure that can help solve crimes faster. Instead of focusing on causes of crime occurrence like criminal background of offender, political enmity etc we are focusing mainly on crime factors of each day."}
{"_id": "02a32c38798af97af5d579e565164c898b39d6f8", "title": "A 27.6 MHz 297 \u03bcW 33 ppm/\u00b0C CMOS relaxation oscillator with an adjustable temperature compensation scheme", "text": "A 27.6 MHz 297 \u03bcW relaxation oscillator is presented in this paper by using an 180-nm CMOS technology. The proposed oscillator employs an adjustable temperature compensation feedforward scheme, in which the charging current can be set steady by a four-bit digital trimming signal. We have demonstrated a frequency variation lower than 33.5 ppm/\u00b0C which could be close to 0 ppm/\u00b0C in theory if the precision is high enough. In practical production, it is effective to calibrate the mismatching and deviation of fabrication because of the novel adjustable temperature compensation scheme."}
{"_id": "98b6e6e3c67443928bda3b92a833753288f2a0b8", "title": "A modified antipodal Vivaldi antenna (AVA) with elliptical slotting edge (ESE) for ultra-wideband (UWB) applications", "text": "As the demand of UWB system increases nowadays, several types of hybrid technique on antenna design are studied, modified, and proposed for UWB applications. In this work, an enhanced performance of modified antipodal Vivaldi antenna (AVA) for targeting ultra-wideband (UWB) frequency range between 2.17 GHz and 10.6 GHz has been proposed. This proposed UWB antenna is using FR-4 substrate with dielectric constant, \u03b5r = 4.4 and the electrical conductivity tangent loss, tan\u0394 = 0.019. This antenna design is based on the modified from the dual exponential tapered slot antenna (DETSA) with varying of the exponential flares design for obtain the best return loss of the antenna within the frequency bandwidth of the UWB. It has two exponential flares that follow the characteristic of exponential curve. Next, an elliptical slotting edge (ESE) with fixed horizontal radius and varied vertical radius is implemented to the antenna which effectively increases the performance of the antenna at lower frequencies. These modifications are done without changing the antenna size, which is 70 mm \u00d7 90 mm in dimension, which is approximately 0.5\u03bb \u00d7 0.6\u03bb, where \u03bb is the wavelength of 2 GHz. The effects of the parameters of the antenna characteristic are also studied. In this case, the parametric study on varying distance between elliptical slots, Ks. It shows that, the higher value of Ks will effect to shift to the resonant frequency and increase the return loss of the antenna. At 9.104 GHz of resonant frequencies, it shows of the best return loss of 48.809 dB. Rather than that, the side lobe levels of the radiation pattern of the antenna are reduced in the lower frequencies."}
{"_id": "f997e433057f9e84e2dc6cf36da79458a4f4557f", "title": "Visual-Quality-Driven Learning for Underwater Vision Enhancement", "text": "The image processing community has witnessed remarkable advances in enhancing and restoring images. Nevertheless, restoring the visual quality of underwater images remains a great challenge. End-to-end frameworks might fail to enhance the visual quality of underwater images since in several scenarios it is not feasible to provide the ground truth of the scene radiance. In this work, we propose a CNN-based approach that does not require ground truth data since it uses a set of image quality metrics to guide the restoration learning process. The experiments showed that our method improved the visual quality of underwater images preserving their edges and also performed well considering the UCIQE metric."}
{"_id": "99ba3facd967004a540a12e0889eee76124747a9", "title": "Implementation of power efficient 8-bit reversible linear feedback shift register for BIST", "text": "Built-in Self Test (BIST) is more appropriate for testing any VLSI circuits as it provides a larger range for low power applications. Test pattern generator is the vital module in BIST. Linear Feedback Shift Registers (LFSR) is broadly used in BIST for generating test vectors as it produces highly random test pattern. This paper mainly aims at design and implementation of power efficient LFSR using reversible logic for low power applications. Pareek Gate is used to implement reversible D Flip Flop (DFF). 8 bit reversible LFSR is implemented using 8 DFFs, 8 Feynman Gates (FG) and 3 Double Feynman Gates (DFG). It is found from the analysis that, the proposed approach of designing reversible LFSR reduces the power by 10% when compared to conventional design. Thus the proposed design can be used for designing BIST in low power applications."}
{"_id": "93b4fce37b35c0fe0abe1030692205c2edc961de", "title": "Graphical Fisheye Views of Graphs", "text": "A fisheye lens is a very wide angle lens that shows places nearby in detail while also showing remote regions in successively less detail. This paper describes a system for viewing and browsing planar graphs using a software analog of a fisheye lens. We first show how to implement such a view using solely geometric transformations. We then describe a more general transformation that allows hierarchical, structured information about the graph to modify the views. Our general transformation is a fundamental extension to the previous research in fisheye views."}
{"_id": "2f5e9ff244ddafc0316fd549b995c487391ef3d0", "title": "Gravity compensation in robotics", "text": "The actuator power required to resist joint torque caused by the weight of robot links can be a significant problem. Gravity compensation is a well-known technique in robot design to achieve equilibrium throughout the range of motion and as a result to reduce the loads on the actuator. Therefore, it is desirable and commonly implemented in many situations. Various design concepts for gravity compensation are available in the literature. This paper proposes an overview of gravity compensation methods applied in robotics. The examined properties of the gravity compensation are disclosed and illustrated via kinematic schemes. In order to classify the considered balancing schemes three principal groups are distinguished due to the nature of the compensation force: counterweight, spring or active force developed by an auxiliary actuator. Then, each group is reviewed through sub-groups organized via structural features of balancing schemes. The author believes that such an arrangement of gravity compensation methods allows one to carry out a systematized analysis and provides a comprehensive view on the problem."}
{"_id": "a4741625b0c6769995beec18e39e9d25da312dc7", "title": "of the Association of Management RETAINING TALENT : ASSESSING JOB SATISFACTION FACETS MOST SIGNIFICANTLY RELATED TO SOFTWARE DEVELOPER TURNOVER INTENTIONS", "text": "Retaining information technology employees has been a problem in many organizations for decades. When key software developers quit, they depart with critical knowledge of business processes and systems that are essential for maintaining a competitive advantage. The primary aim of this study was to assess facets of job satisfaction that are most significantly correlated with software developer turnover intentions. Surveys were collected from a sample of software developers across the United States. Correlations were assessed through multiple linear regression and parametric measures of association. The results indicated a significant predicting relationship between the software developers\u2019 turnover intentions and nine facets of job satisfaction. Also found was a significant negative relationship between satisfaction with the nature of work and turnover intentions when controlling for the effects of the other independent variables. Implications of these findings are discussed along with recommendations for IT professionals and researchers."}
{"_id": "3b1dc83672b4b7f6031eeef1aae776ee103df170", "title": "Wrist Pulse Rate Monitor Using Self-Injection-Locked Radar Technology", "text": "To achieve sensitivity, comfort, and durability in vital sign monitoring, this study explores the use of radar technologies in wearable devices. The study first detected the respiratory rates and heart rates of a subject at a one-meter distance using a self-injection-locked (SIL) radar and a conventional continuous-wave (CW) radar to compare the sensitivity versus power consumption between the two radars. Then, a pulse rate monitor was constructed based on a bistatic SIL radar architecture. This monitor uses an active antenna that is composed of a SIL oscillator (SILO) and a patch antenna. When attached to a band worn on the subject's wrist, the active antenna can monitor the pulse on the subject's wrist by modulating the SILO with the associated Doppler signal. Subsequently, the SILO's output signal is received and demodulated by a remote frequency discriminator to obtain the pulse rate information."}
{"_id": "56c0b6c9397d7b986d71d95d442458afbfdbb35e", "title": "On the Use of Automated Text Summarization Techniques for Summarizing Source Code", "text": "During maintenance developers cannot read the entire code of large systems. They need a way to get a quick understanding of source code entities (such as, classes, methods, packages, etc.), so they can efficiently identify and then focus on the ones related to their task at hand. Sometimes reading just a method header or a class name does not tell enough about its purpose and meaning, while reading the entire implementation takes too long. We study a solution which mitigates the two approaches, i.e., short and accurate textual descriptions that illustrate the software entities without having to read the details of the implementation. We create such descriptions using techniques from automatic text summarization. The paper presents a study that investigates the suitability of various such techniques for generating source code summaries. The results indicate that a combination of text summarization techniques is most appropriate for source code summarization and that developers generally agree with the summaries produced."}
{"_id": "471b8e9b2fea73f6cee2b15364611b2d3e5ae471", "title": "Pattern recognition and knowledge discovery from road traffic accident data in Ethiopia: Implications for improving road safety", "text": "This research tries to view accident data collection and analysis as a system that requires a special view towards understanding the whole and making sense out of it for improved decision making in the effort of reducing the problem of road safety. Under the umbrella of an information architecture research for road safety in developing countries, the objective of this machine learning experimental research is to explore and predict the role of road users on possible injury risks. The research employed Classification and Adaptive Regression Trees (CART) and RandomForest approaches. To identify relevant patterns and illustrate the performance of the techniques for the road safety domain, road accident data collected from Addis Ababa Traffic Office is exposed to many sided analyses. Empirical results showed that the models could classify accidents with promising accuracy."}
{"_id": "6d27f3debb474a518917460ecf49b3d0a37292c6", "title": "Paraplay: Exploring Playfulness Around Physical Console Gaming", "text": "We present the concept of paraplay: playful activities that take place within the context of an interactive game or other play activity, but outside the activity itself. By critically examining work related to gaming and play goals and motivations we argue that the concept of playfulness should have a stronger role in our understanding of gaming sessions, and particularly social gaming sessions. In order to further understand the role of playfulness in social gaming we conducted an empirical field study of physical console gaming. Six families with a total of 32 participants were provided with an Xbox 360 console, Kinect sensor, and three casual physical video games to play together for a period of approximately two weeks. Participants were instructed to record their social gaming sessions. We conducted video analysis on these recordings as well as interviews with many of the participants. We found numerous types and examples of playfulness within the gaming session even from those who were not actively participating in the game. Drawing on the results of this study we present a taxonomy of paraplay and discuss the ways that playfulness can be exhibited in a social play session. We show that participants in a game situation act within a wider context of playfulness, according to a variety of significant roles ranging from active player through to audience member. We explore these roles and their attributes to provide a rich account of paraplay and its importance in understanding playful activities broadly."}
{"_id": "8aa20e4a193f40df29e96561a56bfe459f79edeb", "title": "Multimodal visualization of the optomechanical response of silicon cantilevers with ultrafast electron microscopy", "text": "The manner in which structure at the mesoscale affects emergent collective dynamics has become the focus of much attention owing, in part, to new insights into how morphology on these spatial scales can be exploited for enhancement and optimization of macroscopic properties. Key to advancements in this area is development of multimodal characterization tools, wherein access to a large parameter space (energy, space, and time) is achieved (ideally) with a single instrument. Here, we describe the study of optomechanical responses of single-crystal Si cantilevers with an ultrafast electron microscope. By conducting structural-dynamics studies in both real and reciprocal space, we are able to visualize MHz vibrational responses from atomicto micrometerscale dimensions. With nanosecond selected-area and convergent-beam diffraction, we demonstrate the effects of spatial signal averaging on the isolation and identification of eigenmodes of the cantilever. We find that the reciprocal-space methods reveal eigenmodes mainly below 5 MHz, indicative of the first five vibrational eigenvalues for the cantilever geometry studied here. With nanosecond real-space imaging, however, we are able to visualize local vibrational frequencies exceeding 30 MHz. The heterogeneously-distributed vibrational response is mapped via generation of pixel-by-pixel time-dependent Fourier spectra, which reveal the localized highfrequency modes, whose presence is not detected with parallel-beam diffraction. By correlating the transient response of the three modalities, the oscillation, and dissipation of the optomechanical response can be compared to a linear-elastic model to isolate and identify the spatial threedimensional dynamics."}
{"_id": "27fc71f87ba6985bb84ef6fb2023d067ac15e28f", "title": "Stress aware layout optimization", "text": "Process-induced mechanical stress is used to enhance carrier transport and achieve higher drive currents in current CMOS technologies. In this paper, we study how stress-induced performance enhancements are affected by layout properties and suggest guidelines for improving layouts so that performance gains are maximized. All MOS devices in this work include STI and nitride stress liners as sources of stress. Additionally, the PMOS devices incorporate the stress effects caused by the embedded SiGe S/D layer common in today's processes. First, we study how stress and drive current depend on layout parameters such as active area length and contact placement. We develop an intuition for the drive current dependency on these parameters and propose simple guidelines to improve a layout while considering mechanical stress effects. We then use these guidelines to improve the standard cell layouts in a 65nm industrial library. Experimental results show that we can enhance NMOS and PMOS drive currents by ~5% and ~12%, respectively, while only increasing NMOS leakage current by 1.48X and PMOS leakage current by 3.78X. By applying our guidelines to a 3-input NOR gate and a 3-input NAND gate, we are able to achieve a ~13.5% PMOS drive current improvement in the NOR gate and a ~7% NMOS drive current improvement in the NAND gate, without increasing cell area in either case"}
{"_id": "e64e97ed7f540d08a2009a12581824bd3e9d7324", "title": "Pomegranate juice, total pomegranate ellagitannins, and punicalagin suppress inflammatory cell signaling in colon cancer cells.", "text": "Phytochemicals from fruits such as the pomegranate (Punica granatum L) may inhibit cancer cell proliferation and apoptosis through the modulation of cellular transcription factors and signaling proteins. In previous studies, pomegranate juice (PJ) and its ellagitannins inhibited proliferation and induced apoptosis in HT-29 colon cancer cells. The present study examined the effects of PJ on inflammatory cell signaling proteins in the HT-29 human colon cancer cell line. At a concentration of 50 mg/L PJ significantly suppressed TNFalpha-induced COX-2 protein expression by 79% (SE = 0.042), total pomegranate tannin extract (TPT) 55% (SE = 0.049), and punicalagin 48% (SE = 0.022). Additionally, PJ reduced phosphorylation of the p65 subunit and binding to the NFkappaB response element 6.4-fold. TPT suppressed NFkappaB binding 10-fold, punicalagin 3.6-fold, whereas ellagic acid (EA) (another pomegranate polyphenol) was ineffective. PJ also abolished TNFalpha-induced AKT activation, needed for NFkappaB activity. Therefore, the polyphenolic phytochemicals in the pomegranate can play an important role in the modulation of inflammatory cell signaling in colon cancer cells."}
{"_id": "79485d122d2176e4faaa77bae083698e94ca08c8", "title": "Channel Equalization and Interference Analysis for Uplink Narrowband Internet of Things (NB-IoT)", "text": "We derive the uplink system model for In-band and Guard-band narrowband Internet of Things (NB-IoT). The results reveal that the actual channel frequency response (CFR) is not a simple Fourier transform of the channel impulse response, due to sampling rate mismatch between the NB-IoT user and long term evolution (LTE) base station. Consequently, a new channel equalization algorithm is proposed based on the derived effective CFR. In addition, the interference is derived analytically to facilitate the co-existence of NB-IoT and LTE signals. This letter provides an example and guidance to support network slicing and service multiplexing in the physical layer."}
{"_id": "b4a3a828c91fa5203257774ba5628614da7704e5", "title": "The Social Impact of Natural Language Processing", "text": "Medical sciences have long since established an ethics code for experiments, to minimize the risk of harm to subjects. Natural language processing (NLP) used to involve mostly anonymous corpora, with the goal of enriching linguistic analysis, and was therefore unlikely to raise ethical concerns. As NLP becomes increasingly wide-spread and uses more data from social media, however, the situation has changed: the outcome of NLP experiments and applications can now have a direct effect on individual users\u2019 lives. Until now, the discourse on this topic in the field has not followed the technological development, while public discourse was often focused on exaggerated dangers. This position paper tries to take back the initiative and start a discussion. We identify a number of social implications of NLP and discuss their ethical significance, as well as ways to address them."}
{"_id": "f4f3f60133469d48499d04cd6c4570561ddbeeac", "title": "Optimizing Multipath Routing With Guaranteed Fault Tolerance in Internet of Things", "text": "Internet of Things (IoTs) refers to the rapidly growing network of connected objects and people that are able to collect and exchange data using embedded sensors. To guarantee the connectivity among these objects and people, fault tolerance routing has to be significantly considered. In this paper, we propose a bio-inspired particle multi-swarm optimization (PMSO) routing algorithm to construct, recover, and select $k$ -disjoint paths that tolerates the failure while satisfying the quality of service parameters. Multi-swarm strategy enables determining the optimal directions in selecting the multipath routing while exchanging messages from all positions in the network. The validity of the proposed algorithm is assessed and results demonstrate high-quality solutions compared with the canonical particle swarm optimization (CPSO). Our results indicate the superiority of the multi-swarm and fully PMSO with constriction coefficient, which record an average improvement over CPSO equal to 88.45% in terms of sensors\u2019 count, and 89.15% and 86.51% under the ring and mesh topologies, respectively."}
{"_id": "d242c065734ce75419318ddc9f40f461ba180ff5", "title": "Ancient Ka\u1e6dapay\u0101di System Sanskrit Encryption Technique Unified", "text": "Computers today, generate enormous amount of data/information with each moment passing by. With the production of such huge amount of information comes its indispensable part of information security. Encryption Algorithms today drastically increase the file size. Hence the secure transmission of data requires extra bandwidth. Here in this paper we propose a system AKS - SETU, which is also the abbreviation to the title of this paper. Using the ancient technique of encryption from Sanskrit, AKS - SETU not only encrypts the information but also attempts on decreasing of the file size. AKS - SETU performs Sanskrit encryption, which we propose to be termed as Sanscryption."}
{"_id": "14b5e8ba23860f440ea83ed4770e662b2a111119", "title": "Visualizing and Understanding Convolutional Networks", "text": "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark (Krizhevsky et al., 2012). However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets."}
{"_id": "1e314666d2c2bdcedac32aeb1d42402ad664c74d", "title": "Vehicle Logo Recognition Using a SIFT-Based Enhanced Matching Scheme", "text": "In this paper, a new algorithm for vehicle logo recognition on the basis of an enhanced scale-invariant feature transform (SIFT)-based feature-matching scheme is proposed. This algorithm is assessed on a set of 1200 logo images that belong to ten distinctive vehicle manufacturers. A series of experiments are conducted, splitting the 1200 images to a training set and a testing set, respectively. It is shown that the enhanced matching approach proposed in this paper boosts the recognition accuracy compared with the standard SIFT-based feature-matching method. The reported results indicate a high recognition rate in vehicle logos and a fast processing time, making it suitable for real-time applications."}
{"_id": "424561d8585ff8ebce7d5d07de8dbf7aae5e7270", "title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", "text": "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features\u2014using the recently popular terminology of neural networks with \u2019attention\u2019 mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available."}
{"_id": "4d89e4aa54cd116f7393671a9af15a9869ccec5b", "title": "Scalable triangulation-based logo recognition", "text": "We propose a scalable logo recognition approach that extends the common bag-of-words model and incorporates local geometry in the indexing process. Given a query image and a large logo database, the goal is to recognize the logo contained in the query, if any. We locally group features in triples using multi-scale Delaunay triangulation and represent triangles by signatures capturing both visual appearance and local geometry. Each class is represented by the union of such signatures over all instances in the class. We see large scale recognition as a sub-linear search problem where signatures of the query image are looked up in an inverted index structure of the class models. We evaluate our approach on a large-scale logo recognition dataset with more than four thousand classes."}
{"_id": "6a801e1a1fb37d5907b47bd1a230ada9ba6772e9", "title": "Algorithm 777: HOMPACK90: A Suite of Fortran 90 Codes for Globally Convergent Homotopy Algorithms", "text": "HOMPACK90 is a Fortran 90 version of the Fortran 77 package HOMPACK (Algorithm 652), a collection of codes for finding zeros or fixed points of nonlinear systems using globally convergent probability-one homotopy algorithms. Three qualitatively different algorithms\u2014 ordinary differential equation based, normal flow, quasi-Newton augmented Jacobian matrix\u2014are provided for tracking homotopy zero curves, as well as separate routines for dense and sparse Jacobian matrices. A high level driver for the special case of polynomial systems is also provided. Changes to HOMPACK include numerous minor improvements, simpler and more elegant interfaces, use of modules, new end games, support for several sparse matrix data structures, and new iterative algorithms for large sparse Jacobian matrices."}
{"_id": "1577620a67dd712620b88a2d45b548393b264ee3", "title": "Evaluation of the Spiritual Well-Being Scale in a Sample of Korean Adults", "text": "This study explored the psychometric qualities and construct validity of the Spiritual Well-Being Scale (SWBS; Ellison in J Psychol Theol 11:330\u2013340, 1983) using a sample of 470 Korean adults. Two factor analyses, exploratory factor analysis and confirmatory factor analysis, were conducted in order to test the validity of the SWBS. The results of the factor analyses supported the original two-dimensional structure of the SWBS\u2014religious well-being (RWB) and existential well-being (EWB) with method effects associated with negatively worded items. By controlling for method effects, the evaluation of the two-factor structure of SWBS is confirmed with clarity. Further, the differential pattern and magnitude of correlations between the SWB subscales and the religious and psychological variables suggested that two factors of the SWBS were valid for Protestant, Catholic, and religiously unaffiliated groups except Buddhists. The Protestant group scored higher in RWB compared to the Buddhist, Catholic, and unaffiliated groups. The Protestant group scored higher in EWB compared to the unaffiliated groups. Future studies may need to include more Buddhist samples to gain solid evidence for validity of the SWBS on a non-Western religious tradition."}
{"_id": "a31dfa5eb97fced9494dfa1d88578da6827bf78d", "title": "Fuzzy Set Theory \u2015 and Its Applications", "text": "Spend your time even for only few minutes to read a book. Reading a book will never reduce and waste your time to be useless. Reading, for some people become a need that is to do every day such as spending time for eating. Now, what about you? Do you like to read a book? Now, we will show you a new book enPDFd fuzzy set theory and its applications that can be a new way to explore the knowledge. When reading this book, you can get one thing to always remember in every reading time, even step by step."}
{"_id": "e057a9bd9ab7ec650730472460da9039d07674b8", "title": "Towards Text Generation with Adversarially Learned Neural Outlines", "text": "Recent progress in deep generative models has been fueled by two paradigms \u2013 autoregressive and adversarial models. We propose a combination of both approaches with the goal of learning generative models of text. Our method first produces a high-level sentence outline and then generates words sequentially, conditioning on both the outline and the previous outputs. We generate outlines with an adversarial model trained to approximate the distribution of sentences in a latent space induced by general-purpose sentence encoders. This provides strong, informative conditioning for the autoregressive stage. Our quantitative evaluations suggests that conditioning information from generated outlines is able to guide the autoregressive model to produce realistic samples, comparable to maximum-likelihood trained language models, even at high temperatures with multinomial sampling. Qualitative results also demonstrate that this generative procedure yields natural-looking sentences and interpolations."}
{"_id": "3d1e26f734b3cf9afdd77d490eac6893e0e0b6fb", "title": "Service-oriented Communities: Models and Concepts Towards Fractal Social Organizations", "text": "It is described an abstract model for the definition and the dynamic evolution of \"communities of actants\" originating from a given reference society of roles. Multiple representations are provided, showing how communities evolve with respect to their reference societies. In particular we show how such representations are self-similar and factorisable into \"prime\" constituents. An operating model is then introduced that describes the life-cycle of the communities of actants. After this a software component is presented -- the service-oriented community -- and its features are described in terms of the above mentioned models. Finally it is shown how such component can constitute the building block of a novel architecture for the design of fractal social organizations."}
{"_id": "8f2006ac47e66df6da42356b5b6791cb7958248d", "title": "Distressed Stocks in Distressed Times", "text": "Financially distressed stocks do not underperform healthy stocks when the entire economy is in distress. The asset beta and financial leverage of distressed stocks rise significantly after major market downturns, resulting in a dramatic increase in equity beta. Hence, a long/short healthyminus-distressed trading strategy leads to significant losses when the market rebounds. Managing this risk mitigates the severe losses of financial distress strategies, and significantly improves their Sharpe ratios."}
{"_id": "afb80b31aa7cc6d26d1f6dbefd1e90c67bdcfdb9", "title": "Real-time three-dimensional smooth path planning for unmanned aerial vehicles in completely unknown cluttered environments", "text": "This paper presents a real-time three-dimensional path planning algorithm for improving autonomous navigation of Unmanned Aerial Vehicles (UAVs) operating in completely unknown cluttered environments. The algorithm generates smooth paths consisting of continuous piecewise B\u00e9zier curves in real time. Specifically, a RRT-based waypoint generation algorithm is firstly proposed for the exploration of collision-free waypoints successively during flight. Besides, a novel real-time path smoothing technique is developed to generate continuous collision-free paths that satisfy the motion constraints of UAVs. This is achieved by fitting B\u00e9zier curves between consecutive waypoints based on the particle swarm optimization (PSO) algorithm. Lastly, a path selection strategy is also introduced to seek for an optimum path when multiple solutions are available. The simulation results demonstrate the superiority of the proposed real-time three-dimensional smooth path planning algorithm."}
{"_id": "4321cbc4c07e5d69fb5756261b42851bf14882aa", "title": "Application of data mining methods for link prediction in social networks", "text": "Using social networking services is becoming more popular day by day. Social network analysis views relationships in terms of nodes (people) and edges (links or connections\u2014the relationship between the people). The websites of the social networks such as Facebook currently are among the most popular internet services just after giant portals such as Yahoo, MSN and search engines such as Google. One of the main problems in analyzing these networks is the prediction of relationships between people in the network. It is hard to find one method that can identify relation between people in the social network. The purpose of this paper is to forecast the friendship relation between individuals among a social network, especially the likelihood of a relation between an existing member with a new member. For this purpose, we used a few hypotheses to make the graph of relationships between members of social network, and we used the method of logistic regression to complete the graph. Test data from Flickr website are used to evaluate the proposed method. The results show that the method has achieved 99\u00a0% accuracy in prediction of friendship relationships."}
{"_id": "6e8b441a460aee60bae9937969ada3cbd04ca87a", "title": "Energy Management of Fuel Cell/Battery/Supercapacitor Hybrid Power Sources Using Model Predictive Control", "text": "Well known as an efficient and eco-friendly power source, fuel cell, unfortunately, offers slow dynamics. When attached as primary energy source in a vehicle, fuel cell would not be able to respond to abrupt load variations. Supplementing battery and/or supercapacitor to the system will provide a solution to this shortcoming. On the other hand, a current regulation that is vital for lengthening time span of the energy storage system is needed. This can be accomplished by keeping fuel cell's and batteries' current slope in reference to certain values, as well as attaining a stable dc output voltage. For that purpose, a feedback control system for regulating the hybrid of fuel cell, batteries, and supercapacitor was constructed for this study. Output voltage of the studied hybrid power sources (HPS) was administered by assembling three dc-dc converters comprising two bidirectional converters and one boost converter. Current/voltage output of fuel cell was regulated by boost converter, whereas the bidirectional converters regulated battery and supercapacitor. Reference current for each converter was produced using Model Predictive Control (MPC) and subsequently tracked using hysteresis control. These functions were done on a controller board of a dSPACE DS1104. Subsequently, on a test bench made up from 6 V, 4.5 Ah battery and 7.5 V, 120 F supercapacitor together with a fuel cell of 50 W, 10 A, experiment was conducted. Results show that constructing a control system to restrict fuel cell's and batteries' current slope and maintaining dc bus voltage in accordance with the reference values using MPC was feasible and effectively done."}
{"_id": "6b5f392e398f96b94bc5b42d4ebdb5aed5fa22fa", "title": "Interventions shown to aid executive function development in children 4 to 12 years old.", "text": "To be successful takes creativity, flexibility, self-control, and discipline. Central to all those are executive functions, including mentally playing with ideas, giving a considered rather than an impulsive response, and staying focused. Diverse activities have been shown to improve children's executive functions: computerized training, noncomputerized games, aerobics, martial arts, yoga, mindfulness, and school curricula. All successful programs involve repeated practice and progressively increase the challenge to executive functions. Children with worse executive functions benefit most from these activities; thus, early executive-function training may avert widening achievement gaps later. To improve executive functions, focusing narrowly on them may not be as effective as also addressing emotional and social development (as do curricula that improve executive functions) and physical development (shown by positive effects of aerobics, martial arts, and yoga)."}
{"_id": "a937f17d156b5bb7c46f57e5afad4cf19a297d44", "title": "Jointly Embedding Entities and Text with Distant Supervision", "text": "Learning representations for knowledge base entities and concepts is becoming increasingly important for NLP applications. However, recent entity embedding methods have relied on structured resources that are expensive to create for new domains and corpora. We present a distantly-supervised method for jointly learning embeddings of entities and text from an unnanotated corpus, using only a list of mappings between entities and surface forms. We learn embeddings from open-domain and biomedical corpora, and compare against prior methods that rely on human-annotated text or large knowledge graph structure. Our embeddings capture entity similarity and relatedness better than prior work, both in existing biomedical datasets and a new Wikipedia-based dataset that we release to the community. Results on analogy completion and entity sense disambiguation indicate that entities and words capture complementary information that can be effectively combined for downstream use."}
{"_id": "1e077413b25c4d34945cc2707e17e46ed4fe784a", "title": "Universal Language Model Fine-tuning for Text Classification", "text": "Inductive transfer learning has greatly impacted computer vision, but existing approaches in NLP still require task-specific modifications and training from scratch. We propose Universal Language Model Fine-tuning (ULMFiT), an effective transfer learning method that can be applied to any task in NLP, and introduce techniques that are key for fine-tuning a language model. Our method significantly outperforms the state-of-the-art on six text classification tasks, reducing the error by 1824% on the majority of datasets. Furthermore, with only 100 labeled examples, it matches the performance of training from scratch on 100\u00d7 more data. We opensource our pretrained models and code1."}
{"_id": "9a55434c23b2c9505a4157343251b691ddb7be2f", "title": "Normative Social Influence in Persuasive Technology: Intensity versus Effectiveness", "text": "Persuasion is a form of social influence that implies intentional but voluntary change of a person\u2019s behaviours, feelings or thoughts about an issue, object or action. Successful persuasion by means of persuasive technology relies on a number of factors and motivators. Recently, the so called social acceptance or rejection motivator has formed the backbone of a new type of persuasion, called Mass Interpersonal Persuasion, or MIP for short. This type of persuasion uses the social influence that is generated on online social networks to persuade people. Though it has been established that normative social influence can be used effectively in persuasive technology, it is unknown if the application of more social pressure also makes it more effective. In order to test the hypothesis that the effectiveness of persuasion increases when the persuasion becomes more intense, a quantitative experiment was conducted on the online social network of Facebook. Although evidence to support the hypothesis was found, it cannot be concluded from this experiment that when utilizing normative social influence in persuasive technology more intense persuasion is also more effective."}
{"_id": "e9b973253fed4ad02c4ef856d651609723001f63", "title": "Hadoop Oriented Smart Cities Architecture", "text": "A smart city implies a consistent use of technology for the benefit of the community. As the city develops over time, components and subsystems such as smart grids, smart water management, smart traffic and transportation systems, smart waste management systems, smart security systems, or e-governance are added. These components ingest and generate a multitude of structured, semi-structured or unstructured data that may be processed using a variety of algorithms in batches, micro batches or in real-time. The ICT architecture must be able to handle the increased storage and processing needs. When vertical scaling is no longer a viable solution, Hadoop can offer efficient linear horizontal scaling, solving storage, processing, and data analyses problems in many ways. This enables architects and developers to choose a stack according to their needs and skill-levels. In this paper, we propose a Hadoop-based architectural stack that can provide the ICT backbone for efficiently managing a smart city. On the one hand, Hadoop, together with Spark and the plethora of NoSQL databases and accompanying Apache projects, is a mature ecosystem. This is one of the reasons why it is an attractive option for a Smart City architecture. On the other hand, it is also very dynamic; things can change very quickly, and many new frameworks, products and options continue to emerge as others decline. To construct an optimized, modern architecture, we discuss and compare various products and engines based on a process that takes into consideration how the products perform and scale, as well as the reusability of the code, innovations, features, and support and interest in online communities."}
{"_id": "58ccbcff078c8dd4ce8777870603bde174c70acf", "title": "Customer Churn Prediction Modelling Based on Behavioural Patterns Analysis using Deep Learning", "text": "Customer churn refers to when a customer ceases their relationship with a company. A churn rate, used to estimate growth, is now considered as important a metric as financial profit. With growing competition in the market, companies are desperate to keep the churn rate as low as possible. Thus, churn prediction has gained critical importance, not just for existing customers, but also for predicting trends of future customers. This paper demonstrates prediction of churn on a Telco dataset using a Deep Learning Approach. A multilayered Neural Network was designed to build a non-linear classification model. The churn prediction model works on customer features, support features, usage features and contextual features. The possibility of churn as well as the determining factors are predicted. The trained model then applies the final weights on these features and predict the possibility of churn for that customer. An accuracy of 80.03% was achieved. Since the model also provides the churn factors, it can be used by companies to analyze the reasons for these factors and take steps to eliminate them."}
{"_id": "3ece9a0cefd81b775ddb8a24cda29d7adbfdca8a", "title": "' s personal copy Comments and Controversies Ten ironic rules for non-statistical reviewers", "text": "In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier's archiving and manuscript policies are encouraged to visit: a b s t r a c t a r t i c l e i n f o Keywords: Statistical testing Sample-size Effect size Power Classical inference As an expert reviewer, it is sometimes necessary to ensure a paper is rejected. This can sometimes be achieved by highlighting improper statistical practice. This technical note provides guidance on how to critique the statistical analysis of neuroimaging studies to maximise the chance that the paper will be declined. We will review a series of critiques that can be applied universally to any neuroimaging paper and consider responses to potential rebuttals that reviewers might encounter from authors or editors. Introduction This technical note is written for reviewers who may not have sufficient statistical expertise to provide an informed critique during the peer-reviewed process, but would like to recommend rejection on the basis of inappropriate or invalid statistical analysis. This guidance follows the 10 simple rules format and hopes to provide useful tips and criticisms for reviewers who find themselves in this difficult position. These rules are presented for reviewers in an ironic way 1 that makes it easier (and hopefully more entertaining) to discuss the issues from the point of view of both the reviewer and author \u2014 and to caricature both sides of the arguments. Some key issues are presented more formally in (non-ironic) appendices. There is a perceived need to reject peer-reviewed papers with the advent of open access publishing and the large number of journals available to authors. Clearly, there may be idiosyncratic reasons to block a paper \u2013 to ensure your precedence in the literature, personal rivalry etc. \u2013 however, we will assume that there is an imperative to reject papers for the good of the community: handling editors are often happy to receive recommendations to decline a paper. This is because they are placed under pressure to maintain a high rejection rate. This pressure is usually exerted by the editorial board (and publishers) and enforced by circulating quantitative information about their rejection rates (i.e., naming and shaming lenient editors). All journals want to maximise rejection rates, because this increases the quality of submissions, increases their impact factor \u2026"}
{"_id": "ebdfa2a44932929f37089f4ffb4ce51ee4bb1619", "title": "From Visual Data Exploration to Visual Data Mining: A Survey", "text": "We survey work on the different uses of graphical mapping and interaction techniques for visual data mining of large data sets represented as table data. Basic terminology related to data mining, data sets, and visualization is introduced. Previous work on information visualization is reviewed in light of different categorizations of techniques and systems. The role of interaction techniques is discussed, in addition to work addressing the question of selecting and evaluating visualization techniques. We review some representative work on the use of information visualization techniques in the context of mining data. This includes both visual data exploration and visually expressing the outcome of specific mining algorithms. We also review recent innovative approaches that attempt to integrate visualization into the DM/KDD process, using it to enhance user interaction and comprehension."}
{"_id": "b5dfbd00a582346b5d4fd03d35978e39295380c8", "title": "LaserStroke: Mid-air Tactile Experiences on Contours Using Indirect Laser Radiation", "text": "This demonstration presents a novel form of mid-air tactile display, LaserStroke, that makes use of a laser irradiated on the elastic medium attached to the skin. LaserStroke extends a laser device with an orientation control platform and a magnetic tracker so that it can elicit tapping and stroking sensations to a user's palm from a distance. LaserStroke offers unique tactile experiences while a user freely moves his/her hand in midair."}
{"_id": "9798777816b233fb1b64157e0a70f541a7bcd941", "title": "Cannabis - from cultivar to chemovar.", "text": "The medicinal use of Cannabis is increasing as countries worldwide are setting up official programs to provide patients with access to safe sources of medicinal-grade Cannabis. An important question that remains to be answered is which of the many varieties of Cannabis should be made available for medicinal use. Drug varieties of Cannabis are commonly distinguished through the use of popular names, with a major distinction being made between Indica and Sativa types. Although more than 700 different cultivars have already been described, it is unclear whether such classification reflects any relevant differences in chemical composition. Some attempts have been made to classify Cannabis varieties based on chemical composition, but they have mainly been useful for forensic applications, distinguishing drug varieties, with high THC content, from the non-drug hemp varieties. The biologically active terpenoids have not been included in these approaches. For a clearer understanding of the medicinal properties of the Cannabis plant, a better classification system, based on a range of potentially active constituents, is needed. The cannabinoids and terpenoids, present in high concentrations in Cannabis flowers, are the main candidates. In this study, we compared cultivars obtained from multiple sources. Based on the analysis of 28 major compounds present in these samples, followed by principal component analysis (PCA) of the quantitative data, we were able to identify the Cannabis constituents that defined the samples into distinct chemovar groups. The study indicates the usefulness of a PCA approach for chemotaxonomic classification of Cannabis varieties."}
{"_id": "c69e2b1b1222abf3cd9f079f54da12f207db5636", "title": "Shunt active filter for harmonic and reactive power compensation using p-q theory", "text": "Now-a-days, decadence of power quality is foremost issue of electrical society. In practice, utility of switching device is increased in industrial as well as in domestic applications. Non-linearity causes adverse effects on system efficiency, utility of the power supply, power factor, etc. As reduction in power factor increases reactive power which does not have any contribution in energy transfer so its compensation is needed. So efforts are made to upgrade power quality, concept of Filter is in demand. In contrast to passive filter, active filters are popular due to its smaller size and weight. In this paper, work is done on shunt active power filter (SAPF) using P-Q theory for current harmonic mitigation and reactive power compensation. Here the simulation as well as its parameters, with or without active filter is well presented. We commence the paper with power quality issues then better understanding of PQ theory, simulation circuits, results, comparison and finally ended with conclusion. An effort is made to achieve THD value of source current below 5% to meet the required IEEE standards."}
{"_id": "d71551efa514c8d5d713493d250f24b2c09eeaa3", "title": "Generating timeline summaries with social media attention", "text": "Timeline generation is an important research task which can help users to have a quick understanding of the overall evolution of one given topic. Previous methods simply split the time span into fixed, equal time intervals without studying the role of the evolutionary patterns of the underlying topic in timeline generation. In addition, few of these methods take users\u2019 collective interests into considerations to generate timelines. We consider utilizing social media attention to address these two problems due to the facts: 1) social media is an important pool of real users\u2019 collective interests; 2) the information cascades generated in it might be good indicators for boundaries of topic phases. Employing Twitter as a basis, we propose to incorporate topic phases and user\u2019s collective interests which are learnt from social media into a unified timeline generation algorithm.We construct both one informativeness-oriented and three interestingness-oriented evaluation sets over five topics.We demonstrate that it is very effective to generate both informative and interesting timelines. In addition, our idea naturally leads to a novel presentation of timelines, i.e., phase based timelines, which can potentially improve user experience."}
{"_id": "b1f4213299ae3e55213b9a24cfaf36cc65146931", "title": "THE EVOLUTION OF ACCOUNTING INFORMATION SYSTEMS", "text": "Technological evolution becomes more and more a daily reality for businesses and individuals who use information systems as for supporting their operational activities. This article focuses on the way technological evolution changes the accounting practices, starting from the analysis of the traditional model and trying to determine future trends and arising challenges to face. From data input to consolidation and reporting, accountants\u2019 function and operations are dissected in order to identify to what extent the development of new concepts, such as cloud computing, cloud accounting, real-time accounting or mobile accounting may affect the financial-accounting process, as well as the challenges that arise from the changing environment. SEA Practical Application of Science Volume III, Issue 1 (7) / 2015 92 Introduction Technology evolves rapidly as for responding to customers demand (Wei\u00df and Leimeister, 2012). From a business perspective, more and more companies acknowledge the fact that technology may support process optimisation in terms of costs, lead time and involved resources. The actual market context is driving companies to continuously search for new ways to optimise their processes and increase their financial indicators (Christauskas and Miseviciene, 2012). The company\u2019s efficiency is in direct relation with the objective and timely financial information provided by the accounting system, the main purpose of which is to collect and record information regarding transactions or events with economic impact upon the company, as well as to process and provide relevant, fair information to stakeholders (both internal and external) (Girdzijauskas, et al., 2008; Kundeliene, 2011). The accounting system is thus essential in providing reliable, relevant, significant and useful information for the users of financial data (Kalcinskaite, 2009). From the deployment of basic accounting program or the use of large ERP systems (Enterprise Resource Planning), to the creation of World-Wide Web and development of web-based communication, technology increased its development in speed and implications. Today, concepts such as cloud computing, cloud accounting or real-time reporting are more and more a daily reality among companies. But as technology evolves at an incredible speed, it is necessary for companies to adapt their processes and practices to the new trends. However, that could not be possible without the decision factors to completely acknowledge the implications of the new technology, how it can be used to better manage their businesses and what challenges it also brings along. The present study is based on the theoretical and empirical analysis of accounting process from the appearance of the first accounting information systems up to nowadays\u2019 services and techniques available for supporting the accounting function of a company. The research comprised of analysing accounting operations, activities and practices as they followed the technological evolution for over than 20 years, focusing on a general level, applicable to companies of all sizes and business sectors. Although the geographic extent was limited to the European Union, the study may as well be considered globally applicable, considering the internationality of today\u2019s technology (e.g. 
cloud computing services may be used by a company in Japan while the server is physically located in the USA). Traditional practices and future trends The accounting systems may be seen as aiming to support businesses in collecting, understanding and analysing the financial data (Chytilova et al., 2011). The evolution of software accounting generation may be split, according toPhillips (2012) into three major categories: \u25aa 90\u2019s era \u2013 marked by the apparition of the first accounting software programs under what is known as \u2018the Windows age\u2019; applications were solid, but only supporting basic accounting operations. \u25aa 00\u2019s era \u2013 \u2018integration\u2019 and \u2018SaaS\u2019 concepts took birth, bringing along more developed systems that would allow more complex accounting operations and data processing, as well as concurrent access to files and programs. \u25aa 2010 \u2013 on-going \u2013 \u2018Mobile\u2019 accounting era, marked by real-time accounting, financial dashboards and other mobile applications supporting financial processing and reporting. The same author outlines the evolution of communication \u2013 if the traditional accounting model was based on e-mail or .ftp files communication, the technological evolution now allows sharing and concurrent access to data, through virtual platforms provided by cloud computing technology. Based on the types of accounting services available on the market, three major categories may be defined: \u25aa On-premises accounting: a dedicated accounting software program is purchased by the company and installed using its own infrastructure. Investment in the software and hardware equipment is required for such programs. \u25aa Hosted solutions: the logical access is remotely performed through the company\u2019s installed programs; however the data centre is physically located in a different place, managed by a dedicated third party. Infrastructure costs are reduced for the company, as hardware is administered and maintained by the service provider. \u25aa Cloud computing: the service could prove even more cost efficient for companies, as the data is managed through virtual platforms, and administered by a dedicated third party, allowing multi-tenancy of services in order to split fixed infrastructure costs between companies. Traditional accounting practices used to focus on bookkeeping and financial reporting, having as a final purpose the preparation and presentation of financial statements. The activities were driven by the need of financial information users (both internal and external) to gain a \u2018fair view\u2019 of the company. The objective was often SEA Practical Application of Science Volume III, Issue 1 (7) / 2015 93 reached through the use of small, atomic systems meant to support the reporting; nevertheless, collection of documents, processing of data and operations, posting of journal entries, as well as consolidation and final reporting operations were mostly manual, and manual controls (reconciliations, validations, etc.) were performed as systems did not communicate through automated interfaces. In early 1920s, the first outsourcing agreement was signed by British Petroleum with Accenture. Ever since, the accounting started changing its meaning within the companies, turning from the bookkeeping function to the management strategic and decision-making support function. 
The technological evolution gave birth in the late 80s to ERP (Enterprise Resource Planning) systems, used to incorporate and connect various organisational functions (accounting, asset management, operations, procurement, human resources, etc.) (Ziemba and Oblak, 2013). Ustas\u00fcleyman and Percin (2010) define the ERP systems as \u2018software packages enabling the integration of business processes throughout an organisation\u2019, while Salmeron andLopez (2010) see the ERP system as \u2018a single software system allowing complete integration of information flow from all functional areas in companies by means of a single database, and accessible through a unified interface and communication channel\u2019. The ERP systems became of common use among large companies who managed to reduce the process lead time and involved resources by automation of data transfer between ERP modules, processes within the ERP system, as well as validation and reconciliation controls. From an accounting perspective, the deployment of ERP systems represented a major change, offering support in bookkeeping (the operations performed through different modules would generate automated journal entries posting), processing and transfer of data (through automated interfaces between the ERP modules), as well as consolidation and reporting. This progress took the accountants one step away from the traditional accounting\u2019s bookkeeping practices, and one step closer towards today\u2019s role more close to management strategy and decision support. Further on, as a consequence of the financial-economic crisis started in 2008, the role of accountants within the company changed drastically from bookkeeping and reporting to playing an active and essential role in the management strategy and decision-making process. Thus, it was technology\u2019s role to take in the traditional tasks and operations. More and more companies implemented automated controls for processing data, posting journal entries, consolidation and final reporting under the financial statements and other internal management reports. Soon after automating the accounting operations, technology evolved into also automating document collection and management, through development of concepts such as einvoicing, e-archiving, e-payments, etc. (Burinskiene&Burinskas, 2010). Technology proved once again responsive to the market\u2019s demand, and thus accounting software easily customisable for each client\u2019s particularities regarding the activity profile, accounting practices and chart of accounts, were built as for supporting the automation of accounting process. With the automation of the process, implementation of certain controls was also required as for ensuring the correctness and completeness of reported information. Technology also took into account the need for robust, transparent processes, and it was only a matter of time until cloud computing, cloud accounting or real-time reporting concepts became a daily reality among companies of all sizes, activity sectors or region/state. The access to financial information previously physically limited to the company\u2019s premises (where the network and infrastructure would be located) was fairly improved through cloud computing to an extent where internet connection was the only condition users needed to respect in order to gain access to the financial programs and data. Cloud computing start"}
{"_id": "6496d2abd1ba9550b552194769fa4c9c2e4b702e", "title": "Interactive Topic Modeling for Exploring Asynchronous Online Conversations: Design and Evaluation of ConVisIT", "text": "Since the mid-2000s, there has been exponential growth of asynchronous online conversations, thanks to the rise of social media. Analyzing and gaining insights from such conversations can be quite challenging for a user, especially when the discussion becomes very long. A promising solution to this problem is topic modeling, since it may help the user to understand quickly what was discussed in a long conversation and to explore the comments of interest. However, the results of topic modeling can be noisy, and they may not match the user\u2019s current information needs. To address this problem, we propose a novel topic modeling system for asynchronous conversations that revises the model on the fly on the basis of users\u2019 feedback. We then integrate this system with interactive visualization techniques to support the user in exploring long conversations, as well as in revising the topic model when the current results are not adequate to fulfill the user\u2019s information needs. Finally, we report on an evaluation with real users that compared the resulting system with both a traditional interface and an interactive visual interface that does not support human-in-the-loop topic modeling. Both the quantitative results and the subjective feedback from the participants illustrate the potential benefits of our interactive topic modeling approach for exploring conversations, relative to its counterparts."}
{"_id": "cddb9e0effbc56594049c9e7d788b0df2247b1e5", "title": "ACCEPTANCE AND COMMITMENT THERAPY ( ACT ) IN PANIC DISORDER WITH AGORAPHOBIA : A CASE STUDY", "text": ""}
{"_id": "b91180d8853d00e8f2df7ee3532e07d3d0cce2af", "title": "Visual Categorization with Bags of Keypoints", "text": "We present a novel method for generic visual categorization: the problem of identifying the object content of natural images while generalizing across variations inherent to the object class. This bag of keypoints method is based on vector quantization of affine invariant descriptors of image patches. We propose and compare two alternative implementations using different classifiers: Na\u00efve Bayes and SVM. The main advantages of the method are that it is simple, computationally efficient and intrinsically invariant. We present results for simultaneously classifying seven semantic visual categories. These results clearly demonstrate that the method is robust to background clutter and produces good categorization accuracy even without exploiting geometric information."}
{"_id": "ca7b979c61be9140961fa6dfa56b12e4f98b0c2b", "title": "CSIFT: A SIFT Descriptor with Color Invariant Characteristics", "text": "SIFT has been proven to be the most robust local invariant feature descriptor. SIFT is designed mainly for gray images. However, color provides valuable information in object description and matching tasks. Many objects can be misclassified if their color contents are ignored. This paper addresses this problem and proposes a novel colored local invariant feature descriptor. Instead of using the gray space to represent the input image, the proposed approach builds the SIFT descriptors in a color invariant space. The built Colored SIFT (CSIFT) is more robust than the conventional SIFT with respect to color and photometrical variations. The evaluation results support the potential of the proposed approach."}
{"_id": "0765780521cb2c056ca2f10080e8b5ce34ac0d29", "title": "Short-Term Load Forecasting Using EMD-LSTM Neural Networks with a Xgboost Algorithm for Feature Importance Evaluation", "text": "Accurate load forecasting is an important issue for the reliable and efficient operation of a power system. This study presents a hybrid algorithm that combines similar days (SD) selection, empirical mode decomposition (EMD), and long short-term memory (LSTM) neural networks to construct a prediction model (i.e., SD-EMD-LSTM) for short-term load forecasting. The extreme gradient boosting-based weighted k-means algorithm is used to evaluate the similarity between the forecasting and historical days. The EMD method is employed to decompose the SD load to several intrinsic mode functions (IMFs) and residual. Separated LSTM neural networks were also employed to forecast each IMF and residual. Lastly, the forecasting values from each LSTM model were reconstructed. Numerical testing demonstrates that the SD-EMD-LSTM method can accurately forecast the electric load."}
{"_id": "217159a0df79af168e7b3e0a2907f9e7bc69fca3", "title": "Short-term memory, working memory, and executive functioning in preschoolers: longitudinal predictors of mathematical achievement at age 7 years.", "text": "This study examined whether measures of short-term memory, working memory, and executive functioning in preschool children predict later proficiency in academic achievement at 7 years of age (third year of primary school). Children were tested in preschool (M age = 4 years, 6 months) on a battery of cognitive measures, and mathematics and reading outcomes (from standardized, norm-referenced school-based assessments) were taken on entry to primary school, and at the end of the first and third year of primary school. Growth curve analyses examined predictors of math and reading achievement across the duration of the study and revealed that better digit span and executive function skills provided children with an immediate head start in math and reading that they maintained throughout the first three years of primary school. Visual-spatial short-term memory span was found to be a predictor specifically of math ability. Correlational and regression analyses revealed that visual short-term and working memory were found to specifically predict math achievement at each time point, while executive function skills predicted learning in general rather than learning in one specific domain. The implications of the findings are discussed in relation to further understanding the role of cognitive skills in different mathematical tasks, and in relation to the impact of limited cognitive skills in the classroom environment."}
{"_id": "88f80245745204ede6573e88471b57f731b3d844", "title": "Energy-Brushes: Interactive Tools for Illustrating Stylized Elemental Dynamics", "text": "Dynamic effects such as waves, splashes, fire, smoke, and explosions are an integral part of stylized animations. However, such dynamics are challenging to produce, as manually sketching key-frames requires significant effort and artistic expertise while physical simulation tools lack sufficient expressiveness and user control. We present an interactive interface for designing these elemental dynamics for animated illustrations. Users draw with coarse-scale energy brushes which serve as control gestures to drive detailed flow particles which represent local velocity fields. These fields can convey both realistic and artistic effects based on user specification. This painting metaphor for creating elemental dynamics simplifies the process, providing artistic control, and preserves the fluidity of sketching. Our system is fast, stable, and intuitive. An initial user evaluation shows that even novice users with no prior animation experience can create intriguing dynamics using our system."}
{"_id": "8ba873e379f49aa6f73923206722b2d963c433bd", "title": "The short-form McGill pain questionnaire", "text": "A short form of the McGill Pain Questionnaire (SF-MPQ) has been developed. The main component of the SF-MPQ consists of 15 descriptors (11 sensory; 4 affective) which are rated on an intensity scale as 0 = none, 1 = mild, 2 = moderate or 3 = severe. Three pain scores are derived from the sum of the intensity rank values of the words chosen for sensory, affective and total descriptors. The SF-MPQ also includes the Present Pain Intensity (PPI) index of the standard MPQ and a visual analogue scale (VAS). The SF-MPQ scores obtained from patients in post-surgical and obstetrical wards and physiotherapy and dental departments were compared to the scores obtained with the standard MPQ. The correlations were consistently high and significant. The SF-MPQ was also shown to be sufficiently sensitive to demonstrate differences due to treatment at statistical levels comparable to those obtained with the standard form. The SF-MPQ shows promise as a useful tool in situations in which the standard MPQ takes too long to administer, yet qualitative information is desired and the PPI and VAS are inadequate."}
{"_id": "2042b469be68653afcb2b7b38490c16369b4501a", "title": "X10: an object-oriented approach to non-uniform cluster computing", "text": "It is now well established that the device scaling predicted by Moore's Law is no longer a viable option for increasing the clock frequency of future uniprocessor systems at the rate that had been sustained during the last two decades. As a result, future systems are rapidly moving from uniprocessor to multiprocessor configurations, so as to use parallelism instead of frequency scaling as the foundation for increased compute capacity. The dominant emerging multiprocessor structure for the future is a Non-Uniform Cluster Computing (NUCC) system with nodes that are built out of multi-core SMP chips with non-uniform memory hierarchies, and interconnected in horizontally scalable cluster configurations such as blade servers. Unlike previous generations of hardware evolution, this shift will have a major impact on existing software. Current OO language facilities for concurrent and distributed programming are inadequate for addressing the needs of NUCC systems because they do not support the notions of non-uniform data access within a node, or of tight coupling of distributed nodes.We have designed a modern object-oriented programming language, X10, for high performance, high productivity programming of NUCC systems. A member of the partitioned global address space family of languages, X10 highlights the explicit reification of locality in the form of places}; lightweight activities embodied in async, future, foreach, and ateach constructs; a construct for termination detection (finish); the use of lock-free synchronization (atomic blocks); and the manipulation of cluster-wide global data structures. We present an overview of the X10 programming model and language, experience with our reference implementation, and results from some initial productivity comparisons between the X10 and Java\u2122 languages."}
{"_id": "fd15502f613ae4214a4db94682e61790490ac3fc", "title": "Musculoskeletal symptoms among mobile hand-held device users and their relationship to device use: A preliminary study in a Canadian university population.", "text": "The study aims were, in a population of university students, staff, and faculty (n = 140), to: 1) determine the distribution of seven measures of mobile device use; 2) determine the distribution of musculoskeletal symptoms of the upper extremity, upper back and neck; and 3) assess the relationship between device use and symptoms. 137 of 140 participants (98%) reported using a mobile device. Most participants (84%) reported pain in at least one body part. Right hand pain was most common at the base of the thumb. Significant associations found included time spent internet browsing and pain in the base of the right thumb (odds ratio 2.21, 95% confidence interval 1.02-4.78), and total time spent using a mobile device and pain in the right shoulder (2.55, 1.25-5.21) and neck (2.72, 1.24-5.96). Although this research is preliminary, the observed associations, together with the rising use of these devices, raise concern for heavy users."}
{"_id": "4a672ab6b0cc1d02c6cc93b7a051a367ebf6682d", "title": "The heroism of women and men.", "text": "Heroism consists of actions undertaken to help others, despite the possibility that they may result in the helper's death or injury. The authors examine heroism by women and men in 2 extremely dangerous settings: the emergency situations in which Carnegie medalists rescued others and the holocaust in which some non-Jews risked their lives to rescue Jews. The authors also consider 3 risky but less dangerous prosocial actions: living kidney donations, volunteering for the Peace Corps, and volunteering for Doctors of the World. Although the Carnegie medalists were disproportionately men, the other actions yielded representations of women that were at least equal to and in most cases higher than those of men. These findings have important implications for the psychology of heroism and of gender."}
{"_id": "32ce532a30ddfebd3e809c27aba9056862b20aa7", "title": "Ischiofemoral Impingement Syndrome", "text": "Ischiofemoral impingement syndrome is known as one of the causes of hip pain due to impingement of ischium and femur, and usually correlated with trauma or operation. We report a rare case of ischiofemoral impingement syndrome that has no history of trauma or surgery. A 48-year-old female patient was referred for 2 months history of the left hip pain, radiating to lower extremity with a hip snapping sensation. She had no history of trauma or surgery at or around the hip joint and femur. The magnetic resonance imaging (MRI) of the lumbar spine showed no abnormality, except diffuse bulging disc without cord compression at the lumbosacral area. Electrophysiologic study was normal, and there were no neurologic abnormalities compatible with the lumbosacral radiculopathy or spinal stenosis. Hip MRI revealed quadratus femoris muscle edema with concurrent narrowing of the ischiofemoral space. The distance of ischiofemoral space and quadratus femoris space were narrow. It was compatible with ischiofemoral impingement syndrome. After treatment with nonsteroidal anti-inflammatory drugs, physical therapy, and exercise program, the patient's pain was relieved and the snapping was improved. To our knowledge, this is the first reported case of a nontraumatic, noniatrogenic ischiofemoral impingement syndrome, and also the first case to be treated by a nonsurgical method in the Republic of Korea."}
{"_id": "c1654cd6a11884e3fcfddae933dab24d28c6e684", "title": "Statistical and Machine Learning approach in forex prediction based on empirical data", "text": "This study proposed a new insight in comparing common methods used in predicting based on data series i.e statistical method and machine learning. The corresponding techniques are use in predicting Forex (Foreign Exchange) rates. The Statistical method used in this paper is Adaptive Spline Threshold Autoregression (ASTAR), while for machine learning, Support Vector Machine (SVM) and hybrid form of Genetic Algorithm-Neural Network (GA-NN) are chosen. The comparison among the three methods accurate rate is measured in root mean squared error (RMSE). It is found that ASTAR and GA-NN method has advantages depend on the period time intervals."}
{"_id": "2b775c23848e70945e471b08f6d994bfe2294f28", "title": "Novel 2-D MMSE Subpixel-Based Image Down-Sampling", "text": "Subpixel-based down-sampling is a method that can potentially improve apparent resolution of a down-scaled image on LCD by controlling individual subpixels rather than pixels. However, the increased luminance resolution comes at price of chrominance distortion. A major challenge is to suppress color fringing artifacts while maintaining sharpness. We propose a new subpixel-based down-sampling pattern called diagonal direct subpixel-based down-sampling (DDSD) for which we design a 2-D image reconstruction model. Then, we formulate subpixel-based down-sampling as a MMSE problem and derive the optimal solution called minimum mean square error for subpixel-based down-sampling (MMSE-SD). Unfortunately, straightforward implementation of MMSE-SD is computational intensive. We thus prove that the solution is equivalent to a 2-D linear filter followed by DDSD, which is much simpler. We further reduce computational complexity using a small k \u00d7 k filter to approximate the much larger MMSE-SD filter. To compare the performances of pixel and subpixel-based down-sampling methods, we propose two novel objective measures: normalized l1 high frequency energy for apparent luminance sharpness and PSNRU(V) for chrominance distortion. Simulation results show that both MMSE-SD and MMSE-SD(k) can give sharper images compared with conventional down-sampling methods, with little color fringing artifacts."}
{"_id": "bf1b72f833f5734f19fd70edbf25d2e6b51e9418", "title": "A Technique for Measuring Data Persistence Using the Ext4 File System Journal", "text": "In this paper, we propose a method of measuring data persistence using the Ext4 journal. Digital Forensic tools and techniques are commonly used to extract data from media. A great deal of research has been dedicated to the recovery of deleted data, however, there is a lack of information on quantifying the chance that an investigator will be successful in this endeavor. To that end, we suggest the file system journal be used as a source to gather empirical evidence of data persistence, which can later be used to formulate the probability of recovering deleted data under various conditions. Knowing this probability can help investigators decide where to best invest their resources. We have implemented a proof of concept system that interrogates the Ext4 file system journal and logs relevant data. We then detail how this information can be used to track the reuse of data blocks from the examination of file system metadata structures. This preliminary design contributes a novel method of tracking deleted data persistence that can be used to generate the information necessary to formulate probability models regarding the full and/or partial recovery of deleted data."}
{"_id": "33e94daa8d5e1e6969f28503a12181bd022d8df6", "title": "Joint Haar-like features for face detection", "text": "In this paper, we propose a new distinctive feature, called joint Haar-like feature, for detecting faces in images. This is based on co-occurrence of multiple Haar-like features. Feature co-occurrence, which captures the structural similarities within the face class, makes it possible to construct an effective classifier. The joint Haar-like feature can be calculated very fast and has robustness against addition of noise and change in illumination. A face detector is learned by stagewise selection of the joint Haar-like features using AdaBoost. A small number of distinctive features achieve both computational efficiency and accuracy. Experimental results with 5, 676 face images and 30,000 nonface images show that our detector yields higher classification performance than Viola and Jones' detector; which uses a single feature for each weak classifier. Given the same number of features, our method reduces the error by 37%. Our detector is 2.6 times as fast as Viola and Jones' detector to achieve the same performance"}
{"_id": "3f52800002bd9960fd0e0be285dd1ccff9a1c256", "title": "A sub-pW timer using gate leakage for ultra low-power sub-Hz monitoring systems", "text": "In this work, we present a novel ultra-low power timer designed using the gate leakage of MOS capacitors. The test chip was fabricated in a 0.13 mum CMOS technology and the total circuit area is 480 mum2. Measurement results show that the circuit functions correctly at a wide range of supply voltages from 300 mV to 1.2 V, making it particularly suitable for subthreshold systems. The temperature sensitivity is 0.16%/degC at 600 mV and 0.6%/degC at 300 mV. The power dissipation is less than 1pW running at 20degC and 300 mV."}
{"_id": "7ce1db2e70919dc12c4683687711c0cc788457fd", "title": "A concise introduction to autonomic computing", "text": "School of Computing and Mathematics, Faculty of Engineering, University of Ulster at Jordanstown, Shore Road, Newtownabbey, County Antrim BT37 0QB, Northern Ireland, UK The Applied Software Systems Laboratory (TASSL), Electrical and Computer Engineering, Rutgers, The State University of New Jersey, 94 Brett Road, Piscataway, NJ 08854-8058, USA School of Computing and Mathematical Sciences, The SRIF/SHEFC Centre for Virtual Organization Technology Enabling Research (VOTER), Glasgow Caledonian University, 70 Cowcaddens Road, Glasgow G4 0BA, UK Institute for Computer Science and Business Information Systems (ICB), University of Duisburg-Essen, Sch\u00fctzenbahn 70, 45117 Essen, Germany"}
{"_id": "4e8613e4522264cf9065b393ae3a6c3644f21155", "title": "A 1.7\u00a0ps Equivalent Bin Size and 4.2\u00a0ps RMS FPGA TDC Based on Multichain Measurements Averaging Method", "text": "A high precision and high resolution time-to-digital converter (TDC) based on multichain measurements averaging method is implemented in a 40 nm fabrication process Virtex-6 FPGA. The results of the detailed theoretical analysis and the simulation with the MATLAB tool based on a complete TDC module show that the resolution limitation determined by the intrinsic cell delay of plain tapped-delay chain can be overcame, which results in an improvement on both resolution and precision without increasing the dead time. The test results agree with the simulation results quite well. In such a TDC, the input signal is connected to multiple tapped-delay chains simultaneously (the number of the chains is M), and each chain is just a plain TDC and generates a timestamp for a hit signal. Therefore, M timestamps should be obtained in total, which, after averaging, give the final timestamp. A TDC with 1.7 ps equivalent bin size, 1.5 ps averaged bin size and 4.2 ps RMS has been implemented with M being 16, which performs much better than the plain TDC constructed of a single tapped delay chain having 42.3 ps equivalent bin size, 24.0 ps averaged bin size resolution and 13.2 ps RMS precision. The comparisons of equivalent bin size and averaged bin size show that the nonlinearity is improved with a larger M. Due to the real time integral nonlinearity (INL) calibration and averaging calculation, the multichain TDC is almost insensitive to the process voltage and temperature (PVT) variations."}
{"_id": "72744c9f42bb6f6240e21669ee1e5e8495081a68", "title": "Recognition of handwritten Roman Numerals using Tesseract open source OCR engine", "text": "The objective of the paper is to recognize handwritten samples of Roman numerals using Tesseract open source Optical Character Recognition (OCR) engine. Tesseract is trained with data samples of different persons to generate one user-independent language model, representing the handwritten Roman digit-set. The system is trained with 1226 digit samples collected form the different users. The performance is tested on two different datasets, one consisting of samples collected from the known users (those who prepared the training data samples) and the other consisting of handwritten data samples of unknown users. The overall recognition accuracy is obtained as 92.1% and 86.59% on these test datasets respectively."}
{"_id": "f67883708ffdccd1376271b2629256af7bdfb761", "title": "Design, Modeling, Control, and Evaluation of a Hybrid Hip Joint Miniature Climbing Robot", "text": "The subject of this paper is the design, control, and evaluation of a biped\u2013climbing robot featuring a new hybrid hip joint. The hybrid hip provides both prismatic and revolute motion, discretely, to the robot, using a single actuator. This is intended to improve its adaptability in confined environments and its capability to maneuver over and around obstacles. Optimization of the hybrid hip joint relative to robot size, weight, and actuation limits is considered while maximizing range of motion. The mechanical structure of the robot is discussed, as well as forward and inverse kinematics for motion planning. Reachable robot workspace in both revolute and prismatic phases of the hybrid hip joint is analyzed. Different robot locomotion strides are developed and dynamic controller requirements are considered. Experimental evaluation of the robot walking and climbing on and between surfaces with different inclinations is conducted to evaluate performance and repeatability of the system. KEY WORDS\u2014biped robot, climbing robot, robot design, kinematics, locomotion strides"}
{"_id": "a3d30bfd3e1c058186bc2eac3a0302075176d319", "title": "A Machine Learning Approach to Predict Gene Regulatory Networks in Seed Development in Arabidopsis", "text": "Gene regulatory networks (GRNs) provide a representation of relationships between regulators and their target genes. Several methods for GRN inference, both unsupervised and supervised, have been developed to date. Because regulatory relationships consistently reprogram in diverse tissues or under different conditions, GRNs inferred without specific biological contexts are of limited applicability. In this report, a machine learning approach is presented to predict GRNs specific to developing Arabidopsis thaliana embryos. We developed the Beacon GRN inference tool to predict GRNs occurring during seed development in Arabidopsis based on a support vector machine (SVM) model. We developed both global and local inference models and compared their performance, demonstrating that local models are generally superior for our application. Using both the expression levels of the genes expressed in developing embryos and prior known regulatory relationships, GRNs were predicted for specific embryonic developmental stages. The targets that are strongly positively correlated with their regulators are mostly expressed at the beginning of seed development. Potential direct targets were identified based on a match between the promoter regions of these inferred targets and the cis elements recognized by specific regulators. Our analysis also provides evidence for previously unknown inhibitory effects of three positive regulators of gene expression. The Beacon GRN inference tool provides a valuable model system for context-specific GRN inference and is freely available at https://github.com/BeaconProjectAtVirginiaTech/beacon_network_inference.git."}
{"_id": "653f60d4370e79c2cbce4eb72fdd7659775bbb2e", "title": "Rotor Failures in Squirrel Cage Induction Motors", "text": "During the past two decades, tremendous improvement has been made in the design and manufacture of stator windings. This has been accomplished primarily through the development of improved insulation materials and treatment processes. As a result, the expected life from a thermal, dielectrical, mechanical, and environmental standpoint has been significantly increased. The rotor design and manufacturing remains basically unchanged. A survey of warranty data and service facilities suggests that rotor failures now account for a larger percentage of the total induction motor failures. The majority of these failures are caused by various stresses which act on the rotor assembly. These various stresses and how they affect the life of the motor and contribute to premature failure are discussed."}
{"_id": "d07b767213c247c63ff8b68dca58e7340554671d", "title": "Annual Report Readability , Earnings , and Stock Returns \u2217", "text": "This paper examines the relationship between annual report readability and firm performance. This is motivated by the Securities and Exchange Commission\u2019s plain English disclosure regulations that attempt to make corporate disclosures easier to read for ordinary investors. I measure the readability of public company annual reports using the Fog Index and the Kincaid Index from computational linguistics. I find that the annual reports of firms with lower earnings are harder to read. Moreover, the positive earnings of firms with annual reports that are easier to read are more persistent in the next one to four years. This suggests that, consistent with the motivation behind the plain English disclosure regulation of the Securities and Exchange Commission, managers may be opportunistically choosing the readability of annual reports to hide adverse information from investors. However, I do not find any correlation between annual report readability and future stock returns, suggesting that the market does impound the implications of disclosure readability into stock prices."}
{"_id": "2e855ed319df1cf3667888e2c8d3f702971be279", "title": "Applying Business Intelligence Concepts to Medicaid Claim Fraud Detection", "text": "U.S. governmental agencies are striving to do more with less. Controlling the costs of delivering healthcare services such as Medicaid is especially critical at a time of increasing program enrollment and decreasing state budgets. Fraud is estimated to steal up to ten percent of the taxpayer dollars used to fund governmentally supported healthcare, making it critical for government authorities to find cost effective methods to detect fraudulent transactions. This paper explores the use of a business intelligence system relying on statistical methods to detect fraud in one state\u2019s existing Medicaid claim payment data. This study shows that Medicaid claim transactions that have been collected for payment purposes can be reformatted and analyzed to detect fraud and provide input for decision makers charged with making the best use of available funding. The results illustrate the efficacy of using unsupervised statistical methods to detect fraud in healthcare-related data."}
{"_id": "db904e7ad984cc402729564a281110a13370f385", "title": "High-Resistivity SOI CMOS Cellular Antenna Switches", "text": "Results for cellular antenna switches using highresistivity silicon-on-insulator (SOI) CMOS technology are presented. The performance of SOI RF switch FETs is presented and compared to a production GaAs pHEMT technology. Data from prototype high-resistivity SOI RF switch designs is presented and compared to pHEMT based designs. Index Terms \u2014 RF CMOS, SOI, silicon-on-insulator, RF SOI, cellular switch, signal isolation"}
{"_id": "304047c5b35f199ab91f4ea2f458371d920b5828", "title": "Controlling tensegrity robots through evolution", "text": "Tensegrity structures (built from interconnected rods and cables) have the potential to offer a revolutionary new robotic design that is light-weight, energy-efficient, robust to failures, capable of unique modes of locomotion, impact tolerant, and compliant (reducing damage between the robot and its environment). Unfortunately robots built from tensegrity structures are difficult to control with traditional methods due to their oscillatory nature, nonlinear coupling between components and overall complexity. Fortunately this formidable control challenge can be overcome through the use of evolutionary algorithms. In this paper we show that evolutionary algorithms can be used to efficiently control a ball shaped tensegrity robot. Experimental results performed with a variety of evolutionary algorithms in a detailed soft-body physics simulator show that a centralized evolutionary algorithm performs 400% better than a hand-coded solution, while the multiagent evolution performs 800% better. In addition, evolution is able to discover diverse control solutions (both crawling and rolling) that are robust against structural failures and can be adapted to a wide range of energy and actuation constraints. These successful controls will form the basis for building high-performance tensegrity robots in the near future."}
{"_id": "f48e7c627bf5626511985f07111dcb4e05f5e5ee", "title": "A sensor fusion approach for recognizing continuous human grasping sequences using hidden Markov models", "text": "The Programming by Demonstration (PbD) technique aims at teaching a robot to accomplish a task by learning from a human demonstration. In a manipulation context, recognizing the demonstrator's hand gestures, specifically when and how objects are grasped, plays a significant role. Here, a system is presented that uses both hand shape and contact-point information obtained from a data glove and tactile sensors to recognize continuous human-grasp sequences. The sensor fusion, grasp classification, and task segmentation are made by a hidden Markov model recognizer. Twelve different grasp types from a general, task-independent taxonomy are recognized. An accuracy of up to 95% could be achieved for a multiple-user system."}
{"_id": "09d1a6f5a50a8c3e066fb05a8833bc00663ada0e", "title": "Dynamo: amazon's highly available key-value store", "text": "Reliability at massive scale is one of the biggest challenges we face at Amazon.com, one of the largest e-commerce operations in the world; even the slightest outage has significant financial consequences and impacts customer trust. The Amazon.com platform, which provides services for many web sites worldwide, is implemented on top of an infrastructure of tens of thousands of servers and network components located in many datacenters around the world. At this scale, small and large components fail continuously and the way persistent state is managed in the face of these failures drives the reliability and scalability of the software systems.\n This paper presents the design and implementation of Dynamo, a highly available key-value storage system that some of Amazon's core services use to provide an \"always-on\" experience. To achieve this level of availability, Dynamo sacrifices consistency under certain failure scenarios. It makes extensive use of object versioning and application-assisted conflict resolution in a manner that provides a novel interface for developers to use."}
{"_id": "72ec19ead4007e130786bde139c56c980f5466c5", "title": "HadoopDB: An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads", "text": "The production environment for analytical data management applications is rapidly changing. Many enterprises are shifting away from deploying their analytical databases on high-end proprietary machines, and moving towards cheaper, lower-end, commodity hardware, typically arranged in a shared-nothing MPP architecture, often in a virtualized environment inside public or private \u201cclouds\u201d. At the same time, the amount of data that needs to be analyzed is exploding, requiring hundreds to thousands of machines to work in parallel to perform the analysis. There tend to be two schools of thought regarding what technology to use for data analysis in such an environment. Proponents of parallel databases argue that the strong emphasis on performance and efficiency of parallel databases makes them wellsuited to perform such analysis. On the other hand, others argue that MapReduce-based systems are better suited due to their superior scalability, fault tolerance, and flexibility to handle unstructured data. In this paper, we explore the feasibility of building a hybrid system that takes the best features from both technologies; the prototype we built approaches parallel databases in performance and efficiency, yet still yields the scalability, fault tolerance, and flexibility of MapReduce-based systems."}
{"_id": "3b3367f8f0d4692abe6c111a14bb702f30ab20a3", "title": "C-Store: A Column-oriented DBMS", "text": "This paper presents the design of a read-optimized relational DBMS that contrasts sharply with most current systems, which are write-optimized. Among the many differences in its design are: storage of data by column rather than by row, careful coding and packing of objects into storage including main memory during query processing, storing an overlapping collection of columnoriented projections, rather than the current fare of tables and indexes, a non-traditional implementation of transactions which includes high availability and snapshot isolation for read-only transactions, and the extensive use of bitmap indexes to complement B-tree structures. We present preliminary performance data on a subset of TPC-H and show that the system we are building, C-Store, is substantially faster than popular commercial products. Hence, the architecture looks very encouraging."}
{"_id": "340e55a44793226a51ad06612f340f2c520e3575", "title": "G2DeNet: Global Gaussian Distribution Embedding Network and Its Application to Visual Recognition", "text": "Recently, plugging trainable structural layers into deep convolutional neural networks (CNNs) as image representations has made promising progress. However, there has been little work on inserting parametric probability distributions, which can effectively model feature statistics, into deep CNNs in an end-to-end manner. This paper proposes a Global Gaussian Distribution embedding Network (G2DeNet) to take a step towards addressing this problem. The core of G2DeNet is a novel trainable layer of a global Gaussian as an image representation plugged into deep CNNs for end-to-end learning. The challenge is that the proposed layer involves Gaussian distributions whose space is not a linear space, which makes its forward and backward propagations be non-intuitive and non-trivial. To tackle this issue, we employ a Gaussian embedding strategy which respects the structures of both Riemannian manifold and smooth group of Gaussians. Based on this strategy, we construct the proposed global Gaussian embedding layer and decompose it into two sub-layers: the matrix partition sub-layer decoupling the mean vector and covariance matrix entangled in the embedding matrix, and the square-rooted, symmetric positive definite matrix sub-layer. In this way, we can derive the partial derivatives associated with the proposed structural layer and thus allow backpropagation of gradients. Experimental results on large scale region classification and fine-grained recognition tasks show that G2DeNet is superior to its counterparts, capable of achieving state-of-the-art performance."}
{"_id": "0f5597b9f042877211ce71b107132fd590ad7e0d", "title": "A grounded theory analysis of modern web applications: knowledge, skills, and abilities for DevOps", "text": "Since 2009, DevOps, the combination of development and operation, has been adopted within organizations in industry, such as Netflix, Flickr, and Fotopedia. Configuration management tools have been used to support DevOps. However, in this paper we investigate which Knowledge, Skills, and Abilities (KSA) have been employed in developing and deploying modern web applications and how these KSAs support DevOps. By applying a qualitative analysis approach, namely grounded theory, to three web application development projects, we discover that the KSAs for both Software Development and IT Operator practitioners support the four perspectives of DevOps: collaboration culture, automation, measurement, and sharing."}
{"_id": "2e75f61d8bb0d960bbb997cbe8c56f1d7b131470", "title": "Reducing restraint use for older adults in acute care.", "text": "Although no evidence shows that restraints are effective for maintaining safety, preventing disruption of treatment, or controlling behavior, they're still commonly used in acute care facilities (especially in critical care) in the United States, where the reported prevalence of their use ranges from 7.4% to 17%.[1\u20133] They may be used to protect patients from falls or to prevent them from inadvertently removing tubes and other devices.[4] Older adults are three times more likely to be restrained during an acute hospital admission than younger patients, even though this practice is associated with poor outcomes.[1] (See Adverse effects of physical restraints.)"}
{"_id": "c641facdb4a994e0292b94a3a269b6585473fb0b", "title": "The Impact of Information Communication and Technology on Students ' Academic Performance : Evidence from Indonesian EFL Classrooms", "text": "The present study examined the impact of Information Communication Technology [ICT] on a group of university students' academic performance in an Indonesian English as a Foreign Language (EFL) classroom. As the platform of the ICT usage in this current study, English learning websites was used as the independent variable. Academic performance (students' score on pre and post test) was used the dependent variable. The participants in the study were 60 students of the Department of Public Health at the State University of Gorontalo, Indonesia, i.e an experimental group of 30 students (n=30) and a control group of 30 students (n=30). They took English courses as one of the compulsory subjects in the university curriculum. This study used a mixed method of a quasi-experimental and a qualitative interview approaches. Based on the result of the quantitative method of data collection, ttests of this study indicated that students in the experiment group performed significantly better than the students in the control group. Test results also showed that there was a significant difference between students' score on the preand post-test. The students' score in the post test and post test in the control group, however, were not significantly different. As interview results showed, participants expressed their positive learning experience with technologies, growing motivation in English learning, and positive feeling and confidence on their language performance."}
{"_id": "1f664383207be85c134d78fd95e4ec77e916331e", "title": "Evaluation of Features Detectors and Descriptors based on 3D Objects", "text": "We explore the performance of a number of popular feature detectors and descriptors in matching 3D object features across viewpoints and lighting conditions. To this end we design a method, based on intersecting epipolar constraints, for providing ground truth correspondence automatically. These correspondences are based purely on geometric information, and do not rely on the choice of a specific feature appearance descriptor. We test detector-descriptor combinations on a database of 100 objects viewed from 144 calibrated viewpoints under three different lighting conditions. We find that the combination of Hessian-affine feature finder and SIFT features is most robust to viewpoint change. Harris-affine combined with SIFT and Hessian-affine combined with shape context descriptors were best respectively for lighting change and change in camera focal length. We also find that no detector-descriptor combination performs well with viewpoint changes of more than 25\u201330\u2218."}
{"_id": "abeaf3f4cb5ee4acf9d13b5dfa0ee1e04ef1c620", "title": "Logic-Based Models for the Analysis of Cell Signaling Networks\u2020", "text": "Computational models are increasingly used to analyze the operation of complex biochemical networks, including those involved in cell signaling networks. Here we review recent advances in applying logic-based modeling to mammalian cell biology. Logic-based models represent biomolecular networks in a simple and intuitive manner without describing the detailed biochemistry of each interaction. A brief description of several logic-based modeling methods is followed by six case studies that demonstrate biological questions recently addressed using logic-based models and point to potential advances in model formalisms and training procedures that promise to enhance the utility of logic-based methods for studying the relationship between environmental inputs and phenotypic or signaling state outputs of complex signaling networks."}
{"_id": "3c91fbc95a1b127e858c93d75f4284e66825a70f", "title": "Engineering design review of stance-control knee-ankle-foot orthoses.", "text": "Persons with quadriceps muscle weakness are often prescribed a knee-ankle-foot orthosis that locks the knee in full extension during both stance and swing phases of gait. Locking the knee results in abnormal gait patterns characterized by hip hiking and leg circumduction during swing. The stance-control knee-ankle-foot orthosis (SCKAFO), a new type of orthosis, has emerged that permits free knee motion during swing while resisting knee flexion during stance, thereby supporting the limb during weight-bearing. This article examines various SCKAFO designs, discuss the existing design limitations, and identify remaining design challenges. Several commercial SCKAFOs have been released that incorporate different locking mechanisms. Preliminary gait studies have shown some devices to be promising; however, an important functional limitation in some SCKAFOs is dependence on specific joint angles to switch between stance and swing modes. Important design factors such as size, weight, and noise must be considered in new orthosis designs to ensure wide consumer acceptance."}
{"_id": "69f1e19e697778fceae1cacb7440fcf5bd5dbff5", "title": "A Novel Cross Layer Intrusion Detection System in MANET", "text": "Intrusion detection System forms a vital component of internet security. To keep pace with the growing trends, there is a critical need to replace single layer detection technology with multi layer detection. Different types of Denial of Service (DoS) attacks thwart authorized users from gaining access to the networks and we tried to detect as well as alleviate some of those attacks. In this paper, we have proposed a novel cross layer intrusion detection architecture to discover the malicious nodes and different types of DoS attacks by exploiting the information available across different layers of protocol stack in order to improve the accuracy of detection. We have used cooperative anomaly intrusion detection with data mining technique to enhance the proposed architecture. We have implemented fixed width clustering algorithm for efficient detection of the anomalies in the MANET traffic and also generated different types of attacks in the network. The simulation of the proposed architecture is performed in OPNET simulator and we got the result as we expected."}
{"_id": "05af71f63c1f2b552858ebf623a7b7a77deb72cb", "title": "Towards online anti-opinion spam: Spotting fake reviews from the review sequence", "text": "Detecting review spam is important for current e-commerce applications. However, the posted order of review has been neglected by the former work. In this paper, we explore the issue on fake review detection in review sequence, which is crucial for implementing online anti-opinion spam. We analyze the characteristics of fake reviews firstly. Based on review contents and reviewer behaviors, six time sensitive features are proposed to highlight the fake reviews. And then, we devise supervised solutions and a threshold-based solution to spot the fake reviews as early as possible. The experimental results show that our methods can identify the fake reviews orderly with high precision and recall."}
{"_id": "66e1dfffa651889c95b083a1dfb567f42053f859", "title": "Knowledge management strategy and its link to knowledge creation process", "text": "Knowledge has become to be considered as valuable strategic assets that can provide proprietary competitive advantages. It is more important for companies to distinguish themselves through knowledge management strategies. Without a constant creation of knowledge, a business is condemned to poor performance. However, it is still unclear how these strategies affect knowledge creation. Knowledge management strategies can be categorized as being either human or system oriented. This paper proposes a model to illustrate the link between the strategies and its creating process. The model is derived on the basis of samples from 58 Korean firms. The model depicts how companies should align the strategies with four knowledge creation modes such as socialization, externalization, combination, and internalization. It is found that human strategy is more likely to be effective for socialization while system strategy is more likely to be effective for combination. Furthermore, the survey result suggests that managers should adjust knowledge management strategies in view of the characteristics of their departments. q 2002 Elsevier Science Ltd. All rights reserved."}
{"_id": "2b4e3ba4e2dd87eeca3e86270b5a8babf754f17a", "title": "What makes a chair a chair?", "text": "Many object classes are primarily defined by their functions. However, this fact has been left largely unexploited by visual object categorization or detection systems. We propose a method to learn an affordance detector. It identifies locations in the 3d space which \u201csupport\u201d the particular function. Our novel approach \u201cimagines\u201d an actor performing an action typical for the target object class, instead of relying purely on the visual object appearance. So, function is handled as a cue complementary to appearance, rather than being a consideration after appearance-based detection. Experimental results are given for the functional category \u201csitting\u201d. Such affordance is tested on a 3d representation of the scene, as can be realistically obtained through SfM or depth cameras. In contrast to appearance-based object detectors, affordance detection requires only very few training examples and generalizes very well to other sittable objects like benches or sofas when trained on a few chairs."}
{"_id": "5569d8022a8e218fd55afe480ccf2449d1a00ba3", "title": "High-Frequency Behavior of Graphene-Based Interconnects\u2014Part II: Impedance Analysis and Implications for Inductor Design", "text": "This paper provides the first detailed insights into the ultrahigh-frequency behavior of graphene ribbons (GRs) and analyzes their consequences in designing interconnects and low-loss on-chip inductors. In the companion paper (part I), an accurate impedance modeling methodology has been developed based on the Boltzmann equation with the magnetic vector potential Green's function approach incorporating the dependency of current on the nonlocal electric field. Based on the developed methodology, this paper for the first time embarks on the rigorous investigation of the intricate processes occurring at high frequencies in GRs, such as anomalous skin effect (ASE), high-frequency resistance and inductance saturation, intercoupled relation between edge specularity and ASE, and the influence of the linear dimensions on impedance. A comparative study of the high-frequency response of GRs with that of carbon nanotubes (CNTs) and Cu is made to highlight the potential of GR interconnects for high-frequency applications. Subsequently, the high-frequency performance of GR inductors is analyzed, and it is shown that they can achieve 32% and 50% improvements in maximum Q-factor compared to Cu and single-walled CNT inductors with 1/3 metallic fraction, respectively."}
{"_id": "db24a2c27656db88486479b26f99d8754a44f4b8", "title": "Age estimation via face images: a survey", "text": "Facial aging adversely impacts performance of face recognition and face verification and authentication using facial features. This stochastic personalized inevitable process poses dynamic theoretical and practical challenge to the computer vision and pattern recognition community. Age estimation is labeling a face image with exact real age or age group. How do humans recognize faces across ages? Do they learn the pattern or use age-invariant features? What are these age-invariant features that uniquely identify one across ages? These questions and others have attracted significant interest in the computer vision and pattern recognition research community. In this paper, we present a thorough analysis of recent research in aging and age estimation. We discuss popular algorithms used in age estimation, existing models, and how they compare with each other; we compare performance of various systems and how they are evaluated, age estimation challenges, and insights for future research."}
{"_id": "89d3ba75aaa04244ed2ebda1966c396aab8b49c4", "title": "Machine Learning Based Missing Value Imputation Method for Clinical Datasets", "text": "Missing value imputation is one of the biggest tasks of data pre-processing when performing data mining. Most medical datasets are usually incomplete. Simply removing the incomplete cases from the original datasets can bring more problems than solutions. A suitable method for missing value imputation can help to produce good quality datasets for better analysing clinical trials. In this paper we explore the use of a machine learning technique as a missing value imputation method for incomplete cardiovascular data. Mean/mode imputation, fuzzy unordered rule induction algorithm imputation, decision tree imputation and other machine learning algorithms are used as missing value imputation and the final datasets are classified using decision tree, fuzzy unordered rule induction, KNN and K-Mean clustering. The experiment shows that final classifier performance is improved when the fuzzy unordered rule induction algorithm is used to predict missing attribute values for K-Mean clustering and in most cases, the machine learning techniques were found to perform better than the standard mean imputation technique."}
{"_id": "a63b0a75bf93d9ce7655f6dc6430e1be4f553d26", "title": "Multi-terminal transport measurements of MoS2 using a van der Waals heterostructure device platform.", "text": "Atomically thin two-dimensional semiconductors such as MoS2 hold great promise for electrical, optical and mechanical devices and display novel physical phenomena. However, the electron mobility of mono- and few-layer MoS2 has so far been substantially below theoretically predicted limits, which has hampered efforts to observe its intrinsic quantum transport behaviours. Potential sources of disorder and scattering include defects such as sulphur vacancies in the MoS2 itself as well as extrinsic sources such as charged impurities and remote optical phonons from oxide dielectrics. To reduce extrinsic scattering, we have developed here a van der Waals heterostructure device platform where MoS2 layers are fully encapsulated within hexagonal boron nitride and electrically contacted in a multi-terminal geometry using gate-tunable graphene electrodes. Magneto-transport measurements show dramatic improvements in performance, including a record-high Hall mobility reaching 34,000\u2005cm(2)\u2005V(-1)\u2005s(-1) for six-layer MoS2 at low temperature, confirming that low-temperature performance in previous studies was limited by extrinsic interfacial impurities rather than bulk defects in the MoS2. We also observed Shubnikov-de Haas oscillations in high-mobility monolayer and few-layer MoS2. Modelling of potential scattering sources and quantum lifetime analysis indicate that a combination of short-range and long-range interfacial scattering limits the low-temperature mobility of MoS2."}
{"_id": "e0e55d5fd5c02557790b621a50af5a0f963c957f", "title": "Test-potentiated learning: three independent replications, a disconfirmed hypothesis, and an unexpected boundary condition.", "text": "Arnold and McDermott [(2013). Test-potentiated learning: Distinguishing between direct and indirect effects of testing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39, 940-945] isolated the indirect effects of testing and concluded that encoding is enhanced to a greater extent following more versus fewer practice tests, referred to as test-potentiated learning. The current research provided further evidence for test-potentiated learning and evaluated the covert retrieval hypothesis as an alternative explanation for the observed effect. Learners initially studied foreign language word pairs and then completed either one or five practice tests before restudy occurred. Results of greatest interest concern performance on test trials following restudy for items that were not correctly recalled on the test trials that preceded restudy. Results replicate Arnold and McDermott (2013) by demonstrating that more versus fewer tests potentiate learning when trial time is limited. Results also provide strong evidence against the covert retrieval hypothesis concerning why the effect occurs (i.e., it does not reflect differential covert retrieval during pre-restudy trials). In addition, outcomes indicate that the magnitude of the test-potentiated learning effect decreases as trial length increases, revealing an unexpected boundary condition to test-potentiated learning."}
{"_id": "7908a8d73c9164ddfa6eb3f355494dfc849dc98f", "title": "An Improved Dijkstra Algorithm for Firefighting", "text": "In this paper, briefly the efforts going toward a mathematical calculation to validate a new-Dijkstra algorithm from the original normal Dijkstra algorithm [1] as an improvised Dijkstra. The result of this improvised Dijkstra was used to find the shortest path for firefighting unit to reach the location of the fire exactly. The idea depends on the influence of turns on the path, while there are two equal paths\u2019, the more number of turns take more time to pass through it and the less number turns takes less time to pass through it. To apply this scenario practically we take a small real area in south Khartoum. The result gives strong justification that proves and verifies our methodology with a clear contribution in Improved Dijkstra algorithm for firefighting like Geo-Dijkstra. Furthermore, an evaluation of the above-mentioned algorithms has been done showing very promising and realistic results."}
{"_id": "0c4f7b6e4a59a28bbb9775219e54d9e2d067735c", "title": "Computers and Creativity", "text": "The Painting Fool is software that we hope will one day be taken seriously as a creative artist in its own right. This aim is being pursued as an Artificial Intelligence (AI) project, with the hope that the technical difficulties overcome along the way will lead to new and improved generic AI techniques. It is also being pursued as a sociological project, where the effect of software which might be deemed as creative is tested in the art world and the wider public. In this chapter, we summarise our progress so far in The Painting Fool project. To do this, we first compare and contrast The Painting Fool with software of a similar nature arising from AI and graphics projects. We follow this with a discussion of the guiding principles from Computational Creativity research that we adhere to in building the software. We then describe five projects with The Painting Fool where our aim has been to produce increasingly interesting and culturally valuable pieces of art. We end by discussing the issues raised in building an automated painter, and describe further work and future prospects for the project. By studying both the technical difficulties and sociological issues involved in engineering software for creative purposes, we hope to help usher in a new era where computers routinely act as our creative collaborators, as well as independent and creative artists, musicians, writers, designers, engineers and scientists, and contribute in meaningful and interesting ways to human culture."}
{"_id": "140889e6eb281b3a993af343d72e142ad1224873", "title": "AID: A Benchmark Data Set for Performance Evaluation of Aerial Scene Classification", "text": "Aerial scene classification, which aims to automatically label an aerial image with a specific semantic category, is a fundamental problem for understanding high-resolution remote sensing imagery. In recent years, it has become an active task in the remote sensing area, and numerous algorithms have been proposed for this task, including many machine learning and data-driven approaches. However, the existing data sets for aerial scene classification, such as UC-Merced data set and WHU-RS19, contain relatively small sizes, and the results on them are already saturated. This largely limits the development of scene classification algorithms. This paper describes the Aerial Image data set (AID): a large-scale data set for aerial scene classification. The goal of AID is to advance the state of the arts in scene classification of remote sensing images. For creating AID, we collect and annotate more than 10000 aerial scene images. In addition, a comprehensive review of the existing aerial scene classification techniques as well as recent widely used deep learning methods is given. Finally, we provide a performance analysis of typical aerial scene classification and deep learning approaches on AID, which can be served as the baseline results on this benchmark."}
{"_id": "67f76b41786c2fa7e155e9896bc5f7b6ae279a2f", "title": "300mm low k wafer dicing saw study", "text": "With the farther shrink of the IC dimension, the low k material has been widely used to replace the traditional SiO/sub 2/ ILD in order to reduce the interconnect delay. The introduction of low k material into silicon imposed challenges on dicing saw process, ILD and metal layers peeling and its penetration into the sealing ring of the die during dicing saw are the most common defects. In this paper, the low k material structure and its impact on wafer dicing were elaborated. A practical dicing quality inspection matrix was developed to assess the cutting process variation. A 300mm CMOS 90nm dual damascene low k wafer was chosen as a test vehicle to develop robust low k dicing saw process. The critical factors (dicing blade, index speed, spindle speed, cut in depth, test pattern in the saw street...) affecting cutting quality were studied and optimized. The selected C90 dual damascene low k device passed package reliability test with the optimized low k dicing saw recipe and process. The further improvement and solutions in eliminating the low k dicing saw peeling from both wafer fab and packaging assembly were also explored."}
{"_id": "add0d4cf1f120a8cd56e03a1c4ea1aeecc5c63e1", "title": "Evidence for perceptual deficits in associative visual (prosop)agnosia: a single-case study", "text": "Associative visual agnosia is classically defined as normal visual perception stripped of its meaning [Archiv f\u00fcr Psychiatrie und Nervenkrankheiten 21 (1890) 22/English translation: Cognitive Neuropsychol. 5 (1988) 155]: these patients cannot access to their stored visual memories to categorize the objects nonetheless perceived correctly. However, according to an influential theory of visual agnosia [Farah, Visual Agnosia: Disorders of Object Recognition and What They Tell Us about Normal Vision, MIT Press, Cambridge, MA, 1990], visual associative agnosics necessarily present perceptual deficits that are the cause of their impairment at object recognition Here we report a detailed investigation of a patient with bilateral occipito-temporal lesions strongly impaired at object and face recognition. NS presents normal drawing copy, and normal performance at object and face matching tasks as used in classical neuropsychological tests. However, when tested with several computer tasks using carefully controlled visual stimuli and taking both his accuracy rate and response times into account, NS was found to have abnormal performances at high-level visual processing of objects and faces. Albeit presenting a different pattern of deficits than previously described in integrative agnosic patients such as HJA and LH, his deficits were characterized by an inability to integrate individual parts into a whole percept, as suggested by his failure at processing structurally impossible three-dimensional (3D) objects, an absence of face inversion effects and an advantage at detecting and matching single parts. Taken together, these observations question the idea of separate visual representations for object/face perception and object/face knowledge derived from investigations of visual associative (prosop)agnosia, and they raise some methodological issues in the analysis of single-case studies of (prosop)agnosic patients."}
{"_id": "760c492a41b234e43a2218a363d0fa9feb597583", "title": "An Infrastructure Approach to Context-Aware Computing", "text": "The Context Toolkit (Dey, Salber, and Abowd 2001 [this special issue]) is only one of many possible architectures for supporting context-aware applications. In this essay, we look at the trade-offs involved with a service infrastructure approach to context-aware computing. We describe the advantages that a service infrastructure for contextawareness has over other approaches, outline some of the core technical challenges that must be addressed before such an infrastructure can be built, and point out promising research directions for overcoming these challenges."}
{"_id": "0fccd6c005fc60153afa8d454e056e80cca3102e", "title": "Intrusion Detection in Computer Networks based on Machine Learning Algorithms", "text": "Network security technology has become crucial in protecting government and industry computing infrastructure. Modern intrusion detection applications face complex requirements; they need to be reliable, extensible, easy to manage, and have low maintenance cost. In recent years, machine learning-based intrusion detection systems have demonstrated high accuracy, good generalization to novel types of intrusion, and robust behavior in a changing environment. This work aims to compare efficiency of machine learning methods in intrusion detection system, including artificial neural networks and support vector machine, with the hope of providing reference for establishing intrusion detection system in future. Compared with other related works in machine learning-based intrusion detectors, we propose to calculate the mean value via sampling different ratios of normal data for each measurement, which lead us to reach a better accuracy rate for observation data in real world. We compare the accuracy, detection rate, false alarm rate for 4 attack types. The extensive experimental results on the KDD-cup intrusion detection benchmark dataset demonstrate that the proposed approach produces higher performance than KDD Winner, especially for U2R and U2L type attacks."}
{"_id": "7603a0946d9e44ef6bca77190f1045f9299057b2", "title": "BA-NET: DENSE BUNDLE ADJUSTMENT NETWORKS", "text": "This paper introduces a network architecture to solve the structure-from-motion (SfM) problem via feature-metric bundle adjustment (BA), which explicitly enforces multi-view geometry constraints in the form of feature-metric error. The whole pipeline is differentiable, so that the network can learn suitable features that make the BA problem more tractable. Furthermore, this work introduces a novel depth parameterization to recover dense per-pixel depth. The network first generates several basis depth maps according to the input image, and optimizes the final depth as a linear combination of these basis depth maps via feature-metric BA. The basis depth maps generator is also learned via end-to-end training. The whole system nicely combines domain knowledge (i.e. hard-coded multi-view geometry constraints) and deep learning (i.e. feature learning and basis depth maps learning) to address the challenging dense SfM problem. Experiments on large scale real data prove the success of the proposed method."}
{"_id": "3cfe9fd38b1239b1e6714dd6aa326b26b456bbe3", "title": "Elastic textures for additive fabrication", "text": "We introduce elastic textures: a set of parametric, tileable, printable, cubic patterns achieving a broad range of isotropic elastic material properties: the softest pattern is over a thousand times softer than the stiffest, and the Poisson's ratios range from below zero to nearly 0.5. Using a combinatorial search over topologies followed by shape optimization, we explore a wide space of truss-like, symmetric 3D patterns to obtain a small family. This pattern family can be printed without internal support structure on a single-material 3D printer and can be used to fabricate objects with prescribed mechanical behavior. The family can be extended easily to create anisotropic patterns with target orthotropic properties. We demonstrate that our elastic textures are able to achieve a user-supplied varying material property distribution. We also present a material optimization algorithm to choose material properties at each point within an object to best fit a target deformation under a prescribed scenario. We show that, by fabricating these spatially varying materials with elastic textures, the desired behavior is achieved."}
{"_id": "379f61c6f01b70848c89e49370ce999631007d61", "title": "Real Time Localization and 3D Reconstruction", "text": "In this paper we describe a method that estimates the motion of a calibrated camera (settled on an experimental vehicle) and the tridimensional geometry of the environment. The only data used is a video input. In fact, interest points are tracked and matched between frames at video rate. Robust estimates of the camera motion are computed in real-time, key-frames are selected and permit the features 3D reconstruction. The algorithm is particularly appropriate to the reconstruction of long images sequences thanks to the introduction of a fast and local bundle adjustment method that ensures both good accuracy and consistency of the estimated camera poses along the sequence. It also largely reduces computational complexity compared to a global bundle adjustment. Experiments on real data were carried out to evaluate speed and robustness of the method for a sequence of about one kilometer long. Results are also compared to the ground truth measured with a differential GPS."}
{"_id": "c8965cc5c62a245593dbc679aebdf3338bb945fc", "title": "Visual odometry for ground vehicle applications", "text": "We present a system that estimates the motion of a stereo head or a single moving camera based on video input. The system operates in real-time with low delay and the motion estimates are used for navigational purposes. The front end of the system is a feature tracker. Point features are matched between pairs of frames and linked into image trajectories at video rate. Robust estimates of the camera motion are then produced from the feature tracks using a geometric hypothesize-and-test architecture. This generates motion estimates from visual input alone. No prior knowledge of the scene nor the motion is necessary. The visual estimates can also be used in conjunction with information from other sources such as GPS, inertia sensors, wheel encoders, etc. The pose estimation method has been applied successfully to video from aerial, automotive and handheld platforms. We focus on results obtained with a stereo-head mounted on an autonomous ground vehicle. We give examples of camera trajectories estimated in real-time purely from images over previously unseen distances (600 meters) and periods of time ."}
{"_id": "190fc0e9a39c3fee6a54a37ba856b4b880a393bd", "title": "Hybrid retinal image registration", "text": "This work studies retinal image registration in the context of the National Institutes of Health (NIH) Early Treatment Diabetic Retinopathy Study (ETDRS) standard. The ETDRS imaging protocol specifies seven fields of each retina and presents three major challenges for the image registration task. First, small overlaps between adjacent fields lead to inadequate landmark points for feature-based methods. Second, the non-uniform contrast/intensity distributions due to imperfect data acquisition will deteriorate the performance of area-based techniques. Third, high-resolution images contain large homogeneous nonvascular/texureless regions that weaken the capabilities of both feature-based and area-based techniques. In this work, we propose a hybrid retinal image registration approach for ETDRS images that effectively combines both area-based and feature-based methods. Four major steps are involved. First, the vascular tree is extracted by using an efficient local entropy-based thresholding technique. Next, zeroth-order translation is estimated by maximizing mutual information based on the binary image pair (area-based). Then image quality assessment regarding the ETDRS field definition is performed based on the translation model. If the image pair is accepted, higher-order transformations will be involved. Specifically, we use two types of features, landmark points and sampling points, for affine/quadratic model estimation. Three empirical conditions are derived experimentally to control the algorithm progress, so that we can achieve the lowest registration error and the highest success rate. Simulation results on 504 pairs of ETDRS images show the effectiveness and robustness of the proposed algorithm"}
{"_id": "e5a7ac37d542349ae19281f1e2a571f7030b789c", "title": "Vehicle Localization along a Previously Driven Route Using Image Database", "text": "In most autonomous driving applications, such as parking and commuting, a vehicle follows a previously taken route, or almost the same route. In this paper, we propose a method to localize a vehicle along a previously driven route using images. The proposed method consists of two stages: offline creation of a database, and online localization. In the offline stage, a database is created from images that are captured when the vehicle drives a route for the first time. The database consists of images, 3D positions of feature points estimated by structure-from-motion, and a topological graph. In the online stage, the method first identifies the database image that is most similar to the current image by topometric localization, which considers topological information on a metric scale. The vehicle poses are then estimated from the 3D-2D correspondences of matching feature points between the current image and the identified database image. In an experiment, we estimated vehicle poses using images captured in an indoor parking lot."}
{"_id": "038b2f91b29f6a74e709bfaccb04ddb1141dc8c1", "title": "Keyword extraction for social snippets", "text": "Today, a huge amount of text is being generated for social purposes on social networking services on the Web. Unlike traditional documents, such text is usually extremely short and tends to be informal. Analysis of such text benefit many applications such as advertising, search, and content filtering. In this work, we study one traditional text mining task on such new form of text, that is extraction of meaningful keywords. We propose several intuitive yet useful features and experiment with various classification models. Evaluation is conducted on Facebook data. Performances of various features and models are reported and compared."}
{"_id": "84b72076b2849ea770f45ed3f3abbd56a34f6df1", "title": "UNIK: unsupervised social network spam detection", "text": "Social network spam increases explosively with the rapid development and wide usage of various social networks on the Internet. To timely detect spam in large social network sites, it is desirable to discover unsupervised schemes that can save the training cost of supervised schemes. In this work, we first show several limitations of existing unsupervised detection schemes. The main reason behind the limitations is that existing schemes heavily rely on spamming patterns that are constantly changing to avoid detection. Motivated by our observations, we first propose a sybil defense based spam detection scheme SD2 that remarkably outperforms existing schemes by taking the social network relationship into consideration. In order to make it highly robust in facing an increased level of spam attacks, we further design an unsupervised spam detection scheme, called UNIK. Instead of detecting spammers directly, UNIK works by deliberately removing non-spammers from the network, leveraging both the social graph and the user-link graph. The underpinning of UNIK is that while spammers constantly change their patterns to evade detection, non-spammers do not have to do so and thus have a relatively non-volatile pattern. UNIK has comparable performance to SD2 when it is applied to a large social network site, and outperforms SD2 significantly when the level of spam attacks increases. Based on detection results of UNIK, we further analyze several identified spam campaigns in this social network site. The result shows that different spammer clusters demonstrate distinct characteristics, implying the volatility of spamming patterns and the ability of UNIK to automatically extract spam signatures."}
{"_id": "484c4eec34e985d8ca0c20bf83efc56881180709", "title": "Efficient semantic image segmentation with superpixel pooling", "text": "In this work, we evaluate the use of superpixel pooling layers in deep network architectures for semantic segmentation. Superpixel pooling is a flexible and efficient replacement for other pooling strategies that incorporates spatial prior information. We propose a simple and efficient GPU-implementation of the layer and explore several designs for the integration of the layer into existing network architectures. We provide experimental results on the IBSR and Cityscapes dataset, demonstrating that superpixel pooling can be leveraged to consistently increase network accuracy with minimal computational overhead. Source code is available at https://github.com/bermanmaxim/superpixPool."}
{"_id": "f6f864260526fb823f5be5f12d111864b70f29ed", "title": "Collaborative matrix factorization mechanism for group recommendation in big data-based library systems", "text": "Purpose \u2013Academic groups are designed specifically for researchers. A group recommendation procedure is essential to support scholars\u2019 research-based social activities. However, group recommendation methods are rarely applied in online libraries and they often suffer from scalability problem in big data context. The purpose of this paper is to facilitate academic group activities in big data-based library systems by recommending satisfying articles for academic groups. Design/methodology/approach \u2013 The authors propose a collaborative matrix factorization (CoMF) mechanism and implement paralleled CoMF under Hadoop framework. Its rationale is collaboratively decomposing researcher-article interaction matrix and group-article interaction matrix. Furthermore, three extended models of CoMF are proposed. Findings \u2013 Empirical studies on CiteULike data set demonstrate that CoMF and three variants outperform baseline algorithms in terms of accuracy and robustness. The scalability evaluation of paralleled CoMF shows its potential value in scholarly big data environment. Research limitations/implications \u2013 The proposed methods fill the gap of group-article recommendation in online libraries domain. The proposed methods have enriched the group recommendation methods by considering the interaction effects between groups and members. The proposed methods are the first attempt to implement group recommendation methods in big data contexts. Practical implications \u2013 The proposed methods can improve group activity effectiveness and information shareability in academic groups, which are beneficial to membership retention and enhance the service quality of online library systems. Furthermore, the proposed methods are applicable to big data contexts and make library system services more efficient. Social implications \u2013 The proposed methods have potential value to improve scientific collaboration and research innovation. Originality/value \u2013 The proposed CoMF method is a novel group recommendation method based on the collaboratively decomposition of researcher-article matrix and group-article matrix. The process indirectly reflects the interaction between groups and members, which accords with actual library environments and provides an interpretable recommendation result."}
{"_id": "d9ca156e63d069ece4d8253f5106254d1424f46e", "title": "High-Resolution Frequency-Wavenumber Spectrum Analysis", "text": "The output of an array bfoensors is considered to be a homogeneous random field. In this case there is a spectral representation for this field, similar to that for stationary random processes, which consists of a superposition of traveling waves. The frequencywavenumber power spectral density provides the mean-square value for the amplitudes of these waves and is of considerable importance in the analysis of propagating waves by means of an array of sensors. The conventional method of frequency-wavenumber power spectral density estimation uses a fixed wavenumber window and its resolution is determined esserrtially by& beam pattern of the array of sensors. A high-resolution method-of estimation is introduced which employs a wavenumber windoar whose shape changes and is a function of the wavenumber at which an estimate is obtained. It is shown that the wavenumber resolution of this method is considerably better than that of the conventional method. Application of these results is given to seismic data obtained from the large apenure seismic array located in eastern Montana. In addition, the application of the high-resolution method to other areas, such as radar, sonar, and radio astronomy, is indicated."}
{"_id": "86ab4cae682fbd49c5a5bedb630e5a40fa7529f6", "title": "Handwritten Digit Recognition with a Back-Propagation Network", "text": "We present an application of back-propagation networks to handwritten digit recognition. Minimal preprocessing of the data was required, but architecture of the network was highly constrained and specifically designed for the task. The input of the network consists of normalized images of isolated digits. The method has 1 % error rate and about a 9% reject rate on zipcode digits provided by the U.S. Postal Service."}
{"_id": "2a38079ad772dabb03871f40d33b78b102c56ba8", "title": "An RIG-I-Like RNA Helicase Mediates Antiviral RNAi Downstream of Viral siRNA Biogenesis in Caenorhabditis elegans", "text": "Dicer ribonucleases of plants and invertebrate animals including Caenorhabditis elegans recognize and process a viral RNA trigger into virus-derived small interfering RNAs (siRNAs) to guide specific viral immunity by Argonaute-dependent RNA interference (RNAi). C. elegans also encodes three Dicer-related helicase (drh) genes closely related to the RIG-I-like RNA helicase receptors which initiate broad-spectrum innate immunity against RNA viruses in mammals. Here we developed a transgenic C. elegans strain that expressed intense green fluorescence from a chromosomally integrated flock house virus replicon only after knockdown or knockout of a gene required for antiviral RNAi. Use of the reporter nematode strain in a feeding RNAi screen identified drh-1 as an essential component of the antiviral RNAi pathway. However, RNAi induced by either exogenous dsRNA or the viral replicon was enhanced in drh-2 mutant nematodes, whereas exogenous RNAi was essentially unaltered in drh-1 mutant nematodes, indicating that exogenous and antiviral RNAi pathways are genetically distinct. Genetic epistatic analysis shows that drh-1 acts downstream of virus sensing and viral siRNA biogenesis to mediate specific antiviral RNAi. Notably, we found that two members of the substantially expanded subfamily of Argonautes specific to C. elegans control parallel antiviral RNAi pathways. These findings demonstrate both conserved and unique strategies of C. elegans in antiviral defense."}
{"_id": "bfc3ff21f8418e415548529083cd68e9f3dfa488", "title": "A Case Study on Implementing False Data Injection Attacks Against Nonlinear State Estimation", "text": "Smart grid aims to improve control and monitoring routines to ensure reliable and efficient supply of electricity. The rapid advancements in information and communication technologies of Supervisory Control And Data Acquisition (SCADA) networks, however, have resulted in complex cyber physical systems. This added complexity has broadened the attack surface of power-related applications, amplifying their susceptibility to cyber threats. A particular class of system integrity attacks against the smart grid is False Data Injection (FDI). In a successful FDI attack, an adversary compromises the readings of grid sensors in such a way that errors introduced into estimates of state variables remain undetected. This paper presents an end-to-end case study of how to instantiate real FDI attacks to the Alternating Current (AC) --nonlinear-- State Estimation (SE) process. The attack is realized through firmware modifications of the microprocessor-based remote terminal systems, falsifying the data transmitted to the SE routine, and proceeds regardless of perfect or imperfect knowledge of the current system state. The case study concludes with an investigation of an attack on the IEEE 14 bus system using load data from the New York Independent System Operator (NYISO)."}
{"_id": "82ab56ad713aa4ee2b31ee0584293ef93a2aa508", "title": "A wall climbing robot for tank inspection. An autonomous prototype", "text": "This paper describes a solution to a mobile climbing robot on magnetic wheels, designed for inspecting exterior oil tank surfaces made of metal sheets. A mechanical design has been developed which presents a practical solution without an umbilical cord. The inspection system has been developed based on client/server architecture. The robot runs a client application and a remote PC executes the server functions. They will both allow any necessary inspections to be performed simultaneously by more than one robot. A sensorial system and a data fusion strategy to estimate the absolute robot position is proposed to allow the robot to navigate autonomously. The graphical monitoring of the robot position in the remote PC (server application) provides the operator with the possibility of controlling the robot, even in situations in which the operator visibility of an area tank is very low or inexistent. Previous experiments have demonstrated the mechanical system's robustness. These experiments consist of robot trajectory measurements and the comparison to a motion kinematic model."}
{"_id": "6ad23f13da7669ec3905a15c0deb0dff0e3d5c84", "title": "Discovering Parallel Text from the World Wide Web", "text": "Parallel corpus is a rich linguistic resource for various multilingual text management tasks, including crosslingual text retrieval, multilingual computational linguistics and multilingual text mining. Constructing a parallel corpus requires effective alignment of parallel documents. In this paper, we develop a parallel page identification system for identifying and aligning parallel documents from the World Wide Web. The system crawls the Web to fetch potentially parallel multilingual Web documents using a Web spider. To determine the parallelism between potential document pairs, two modules are developed. First, a filename comparison module is used to check filename resemblance. Second, a content analysis module is used to measure the semantic similarity. The experiment conducted to a multilingual Web site shows the effectiveness of the system."}
{"_id": "3c5ba48d25fbe24691ed060fa8f2099cc9eba14f", "title": "Racial Faces in-the-Wild: Reducing Racial Bias by Deep Unsupervised Domain Adaptation", "text": "Despite of the progress achieved by deep learning in face recognition (FR), more and more people find that racial bias explicitly degrades the performance in realistic FR systems. Facing the fact that existing training and testing databases consist of almost Caucasian subjects, there are still no independent testing databases to evaluate racial bias and even no training databases and methods to reduce it. To facilitate the research towards conquering those unfair issues, this paper contributes a new dataset called Racial Faces in-the-Wild (RFW) database with two important uses, 1) racial bias testing: four testing subsets, namely Caucasian, Asian, Indian and African, are constructed, and each contains about 3000 individuals with 6000 image pairs for face verification, 2) racial bias reducing: one labeled training subset with Caucasians and three unlabeled training subsets with Asians, Indians and Africans are offered to encourage FR algorithms to transfer recognition knowledge from Caucasians to other races. For we all know, RFW is the first database for measuring racial bias in FR algorithms. After proving the existence of domain gap among different races and the existence of racial bias in FR algorithms, we further propose a deep information maximization adaptation network (IMAN) to bridge the domain gap, and comprehensive experiments show that the racial bias could be narrowed-down by our algorithm."}
{"_id": "2a4bb73d7a630ce1fed85c7b8819d1fba3fc6418", "title": "PUSH: A Pipelined Reconstruction I/Of or Erasure-Coded Storage Clusters", "text": "A key design goal of erasure-coded storage clusters is to minimize reconstruction time, which in turn leads to high reliability by reducing vulnerability window size. PULL-Rep and PULL-Sur are two existing reconstruction schemes based on PULL-type transmission, where a rebuilding node initiates reconstruction by sending a set of read requests to surviving nodes to retrieve surviving blocks. To eliminate the transmission bottleneck of replacement nodes in PULL-Rep and mitigate the extra overhead caused by noncontiguous disk access in PULL-Sur, we incorporate PUSH-type transmissions to node reconstruction, where the reconstruction procedure is divided into multiple tasks accomplished by surviving nodes in a pipelining manner. We also propose two PUSH-based reconstruction schemes (i.e., PUSH-Rep and PUSH-Sur), which can not only exploit the I/O parallelism of PULL-Sur, but also maintain sequential I/O accesses inherited from PULL-Rep. We build four reconstruction-time models to study the reconstruction process and estimate the reconstruction time of the four schemes in large-scale storage clusters. We implement a proof-of-concept prototype where the four reconstruction schemes are deployed and quantitatively evaluated. Experimental results show that the PUSH-based reconstruction schemes outperform the PULL-based counterparts. In a real-world (9,6)RS-coded storage cluster, PUSH-Rep speeds up the reconstruction time by a factor of 5.76 compared with PULL-Rep; PUSH-Sur accelerates the reconstruction by a factor of 1.85 relative to PULL-Sur."}
{"_id": "8d86ea5bf053aec40dcf28c414802844d0c3b1ef", "title": "Writing a narrative biomedical review: considerations for authors, peer reviewers, and editors", "text": "Review articles comprehensively covering a specific topic are crucial for successful research and academic projects. Most editors consider review articles for special and regular issues of journals. Writing a review requires deep knowledge and understanding of a field. The aim of this review is to analyze the main steps in writing a narrative biomedical review and to consider points that may increase the chances of success. We performed a comprehensive search through MEDLINE, EMBASE, Scopus, and Web of Science using the following keywords: review of the literature, narrative review, title, abstract, authorship, ethics, peer review, research methods, medical writing, scientific writing, and writing standards. Opinions expressed in the review are also based on personal experience as authors, peer reviewers, and editors."}
{"_id": "b95d96bc223961d34083705111b29df984565ed1", "title": "Ranking-Based Automatic Seed Selection and Noise Reduction for Weakly Supervised Relation Extraction", "text": "This paper addresses the tasks of automatic seed selection for bootstrapping relation extraction, and noise reduction for distantly supervised relation extraction. We first point out that these tasks are related. Then, inspired by ranking relation instances and patterns computed by the HITS algorithm, and selecting cluster centroids using the K-means, LSA, or NMF method, we propose methods for selecting the initial seeds from an existing resource, or reducing the level of noise in the distantly labeled data. Experiments show that our proposed methods achieve a better performance than the baseline systems in both tasks."}
{"_id": "312ecc16ff1ac3d5fe896bb29df5fd8da2a50573", "title": "Biomedical question answering: A survey", "text": "OBJECTIVES\nIn this survey, we reviewed the current state of the art in biomedical QA (Question Answering), within a broader framework of semantic knowledge-based QA approaches, and projected directions for the future research development in this critical area of intersection between Artificial Intelligence, Information Retrieval, and Biomedical Informatics.\n\n\nMATERIALS AND METHODS\nWe devised a conceptual framework within which to categorize current QA approaches. In particular, we used \"semantic knowledge-based QA\" as a category under which to subsume QA techniques and approaches, both corpus-based and knowledge base (KB)-based, that utilize semantic knowledge-informed techniques in the QA process, and we further classified those approaches into three subcategories: (1) semantics-based, (2) inference-based, and (3) logic-based. Based on the framework, we first conducted a survey of open-domain or non-biomedical-domain QA approaches that belong to each of the three subcategories. We then conducted an in-depth review of biomedical QA, by first noting the characteristics of, and resources available for, biomedical QA and then reviewing medical QA approaches and biological QA approaches, in turn. The research articles reviewed in this paper were found and selected through online searches.\n\n\nRESULTS\nOur review suggested the following tasks ahead for the future research development in this area: (1) Construction of domain-specific typology and taxonomy of questions (biological QA), (2) Development of more sophisticated techniques for natural language (NL) question analysis and classification, (3) Development of effective methods for answer generation from potentially conflicting evidences, (4) More extensive and integrated utilization of semantic knowledge throughout the QA process, and (5) Incorporation of logic and reasoning mechanisms for answer inference.\n\n\nCONCLUSION\nCorresponding to the growth of biomedical information, there is a growing need for QA systems that can help users better utilize the ever-accumulating information. Continued research toward development of more sophisticated techniques for processing NL text, for utilizing semantic knowledge, and for incorporating logic and reasoning mechanisms, will lead to more useful QA systems."}
{"_id": "7e3ec2dd42d90768637cafc34f6c58b8bec51453", "title": "Automatic license plate recognition using extracted features", "text": "Automatic License Plate Recognition (ALPR) systems are employed for detection and recognition of license plate/number plate of vehicles. The performance of existing systems is well below the desired level. In this perspective, there is a definite need to propose a system to overcome the limitations of currently available systems. A new approach is being introduced in this paper for fast and efficient implementation of ALPR system. In this approach, the vertical edge detection algorithm is applied and removes unwanted edges by image normalization technique. The LP region is extracted by incorporating statistical and morphological image processing techniques. For character recognition, the template matching is employed for optical character recognition (OCR). The algorithm is tested on 500 real time images, which are acquired under different illumination conditions and from different scenes. Overall efficiency of the proposed method is 84.8% and the execution time is less than 0.5sec."}
{"_id": "f50ba0051b57df419dc464daa475bc08cb879092", "title": "Analysis and design of a high-efficiency multistage Doherty power amplifier for wireless communications", "text": "A comprehensive analysis of a multistage Doherty amplifier, which can be used to achieve higher efficiency at a lower output power level compared to the classical Doherty amplifier, is presented. Generalized design equations that explain the operation of a three-stage Doherty amplifier, which can be easily extended to an N-stage Doherty amplifier, are derived. In addition, the optimum device periphery, which minimizes AM-AM distortion for perfect Doherty amplifier operation, is analyzed. For the first time, a multistage Doherty power amplifier that meets wide-band code-division multiple-access (WCDMA) requirements is demonstrated. The designed power amplifier exhibits a power-added efficiency (PAE) of 42% at 6-dB output power backoff and 27% at 12-dB output power backoff. These PAEs are more than 2/spl times/ and 7/spl times/ better, respectively, than that of a single-stage linear power amplifier at the same output power backoff levels. The power amplifier is capable of delivering up to 33 dBm of output power, and has a maximum adjacent channel power leakage ratio of -35 and -47 dBc at 5- and 10-MHz offset, respectively. To the best of the authors' knowledge, these represent the best reported results of a Doherty amplifier for WCDMA application in the 1.95-GHz band to date."}
{"_id": "46ec3ee9ed3f7919fb3e2a6efb3e5b4216d96d84", "title": "Multicast Outage Probability and Transmission Capacity of Multihop Wireless Networks", "text": "Multicast transmission, wherein the same packet must be delivered to multiple receivers, is an important aspect of sensor and tactical networks and has several distinctive traits as opposed to more commonly studied unicast networks. Specially, these include 1) identical packets must be delivered successfully to several nodes, 2) outage at any receiver requires the packet to be retransmitted at least to that receiver, and 3) the multicast rate is dominated by the receiver with the weakest link in order to minimize outage and retransmission. A first contribution of this paper is the development of a tractable multicast model and throughput metric that captures each of these key traits in a multicast wireless network. We utilize a Poisson cluster process (PCP) consisting of a distinct Poisson point process (PPP) for the transmitters and receivers, and then define the multicast transmission capacity (MTC) as the maximum achievable multicast rate per transmission attempt times the maximum intensity of multicast clusters under decoding delay and multicast outage constraints. A multicast cluster is a contiguous area over which a packet is multicasted, and to reduce outage it can be tessellated into v smaller regions of multicast. The second contribution of the paper is the analysis of several key aspects of this model, for which we develop the following main result. Assuming \u03c4/v transmission attempts are allowed for each tessellated region in a multicast cluster, we show that the MTC is \u0398(\u03c1kxlog(k)vy) where \u03c1, x and y are functions of \u03c4 and v depending on the network size and intensity, and k is the average number of the intended receivers in a cluster. We derive {\u03c1, x, y} for a number of regimes of interest, and also show that an appropriate number of retransmissions can significantly enhance the MTC."}
{"_id": "c85aac3b57fc3d5ddd728826ae495cfc40fc60db", "title": "An Ultra Low Energy 12-bit Rate-Resolution Scalable SAR ADC for Wireless Sensor Nodes", "text": "A resolution-rate scalable ADC for micro-sensor networks is described. Based on the successive approximation register (SAR) architecture, this ADC has two resolution modes: 12 bit and 8 bit, and its sampling rate is scalable, at a constant figure-of-merit, from 0-100 kS/s and 0-200 kS/s, respectively. At the highest performance point (i.e., 12 bit, 100 kS/s), the entire ADC (including digital, analog, and reference power) consumes 25 muW from a 1-V supply. The ADC's CMRR is enhanced by common-mode independent sampling and passive auto-zero reference generation. The efficiency of the comparator is improved by an analog offset calibrating latch, and the preamplifier settling time is relaxed by self-timing the bit-decisions. Prototyped in a 0.18-mum, 5M2P CMOS process, the ADC, at 12 bit, 100 kS/s, achieves a Nyquist SNDR of 65 dB (10.55 ENOB) and an SFDR of 71 dB. Its INL and DNL are 0.68 LSB and 0.66 LSB, respectively"}
{"_id": "556f37f050c72c8d4a96829e04bf97656075dcbc", "title": "Discovery of Collocation Patterns: from Visual Words to Visual Phrases", "text": "A visual word lexicon can be constructed by clustering primitive visual features, and a visual object can be described by a set of visual words. Such a \"bag-of-words\" representation has led to many significant results in various vision tasks including object recognition and categorization. However, in practice, the clustering of primitive visual features tends to result in synonymous visual words that over-represent visual patterns, as well as polysemous visual words that bring large uncertainties and ambiguities in the representation. This paper aims at generating a higher-level lexicon, i.e. visual phrase lexicon, where a visual phrase is a meaningful spatially co-occurrent pattern of visual words. This higher-level lexicon is much less ambiguous than the lower-level one. The contributions of this paper include: (1) a fast and principled solution to the discovery of significant spatial co-occurrent patterns using frequent itemset mining; (2) a pattern summarization method that deals with the compositional uncertainties in visual phrases; and (3) a top-down refinement scheme of the visual word lexicon by feeding back discovered phrases to tune the similarity measure through metric learning."}
{"_id": "21922f216002ed2bc44b886b5e8e3042da45c40c", "title": "Topic Detection and Tracking Pilot Study Final Report", "text": "Topic Detection and Tracking (TDT) is a DARPA-sponsored initiative to investigate the state of the art in finding and following new events in a stream of broadcast news stories. The TDT problem consists of three major tasks: (1) segmenting a stream of data, especially recognized speech, into distinct stories; (2) identifying those news stories that are the first to discuss a new event occurring in the news; and (3) given a small number of sample news stories about an event, finding all following stories in the stream. The TDT Pilot Study ran from September 1996 through October 1997. The primary participants were DARPA, Carnegie Mellon University, Dragon Systems, and the University of Massachusetts at Amherst. This report summarizes the findings of the pilot study. The TDT work continues in a new project involving larger training and test corpora, more active participants, and a more broadly defined notion of \u201ctopic\u201d than was used in the pilot study. The following individuals participated in the research reported."}
{"_id": "6204fa63d8a60aea36a40cb171a696a74c22c6cd", "title": "Event Extraction with Complex Event Classification Using Rich Features", "text": "Biomedical Natural Language Processing (BioNLP) attempts to capture biomedical phenomena from texts by extracting relations between biomedical entities (i.e. proteins and genes). Traditionally, only binary relations have been extracted from large numbers of published papers. Recently, more complex relations (biomolecular events) have also been extracted. Such events may include several entities or other relations. To evaluate the performance of the text mining systems, several shared task challenges have been arranged for the BioNLP community. With a common and consistent task setting, the BioNLP'09 shared task evaluated complex biomolecular events such as binding and regulation.Finding these events automatically is important in order to improve biomedical event extraction systems. In the present paper, we propose an automatic event extraction system, which contains a model for complex events, by solving a classification problem with rich features. The main contributions of the present paper are: (1) the proposal of an effective bio-event detection method using machine learning, (2) provision of a high-performance event extraction system, and (3) the execution of a quantitative error analysis. The proposed complex (binding and regulation) event detector outperforms the best system from the BioNLP'09 shared task challenge."}
{"_id": "6723dda58e5e09089ec78ba42827b65859f030e2", "title": "Message Understanding Conference- 6: A Brief History", "text": "We have recently completed the sixth in a series of \"Message Understanding Conferences\" which are designed to promote and evaluate research in information extraction. MUC-6 introduced several innovations over prior MUCs, most notably in the range of different tasks for which evaluations were conducted. We describe some of the motivations for the new format and briefly discuss some of the results of the evaluations. 1 The M U C Evaluations We have just completed the sixth in a series of Message Understanding Conferences, which have been organized by NRAD, the RDT&E division of the Naval Command, Control and Ocean Surveillance Center (formerly NOSC, the Naval Ocean Systems Center) with the support of DARPA, the Defense Advanced Research Projects Agency. This paper looks briefly at the history of these Conferences and then examines the considerations which led to the structure of MUC-6} The Message Understanding Conferences were initiated by NOSC to assess and to foster research on the automated analysis of mili tary messages containing textual information. Although called \"conferences\", the distinguishing characteristic of the MUCs are not the conferences themselves, but the evaluations to which participants must submit in order to be permit ted to at tend the conference. For each MUC, participating groups have been given sample messages and instructions on the type of information to be extracted, and have developed a system to process such messages. Then, shortly before the conference, participants are given a set of test messages to be run through their system (without making any changes to the system); the output of each part icipant 's system 1The full proceedings of the conference are to be distributed by Morgan Kaufmann Publishers, San Mateo, California; earlier MUC proeeedings~ for MUC-3, 4, and 5, are also available from Morgan Kaufmann. Beth Sundhe im Naval Command, Control and Ocean Surveillance Center Research, Development, Test and Evaluation Division (NRaD) Code 44208 53140 Gatchell Road San Diego, CMifornia 92152-7420 s u n d h e i m @ p o j k e . n o s c . m i l is then evaluated against a manual ly-prepared answer key. The MUCs are remarkable in part because of the degree to which these evaluations have defined a prograin of research and development. DARPA has a number of information science and technology programs which are driven in large part, by regular evaluations. The MUCs are notable, however, in that they in large par t have shaped the research program in information extraction and brought it to its current s ta te}"}
{"_id": "0f9b608cd19afeb083e0244df4cd0db1a00e029b", "title": "Inducing Features of Random Fields", "text": "We present a technique for constructing random elds from a set of training samples. The learning paradigm builds increasingly complex elds by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the Kullback-Leibler divergence between the model and the empirical distribution of the training data. A greedy algorithm determines how features are incrementally added to the eld and an iterative scaling algorithm is used to estimate the optimal values of the weights. The random eld models and techniques introduced in this paper di er from those common to much of the computer vision literature in that the underlying random elds are nonMarkovian and have a large number of parameters that must be estimated. Relations to other learning approaches including decision trees and Boltzmann machines are given. As a demonstration of the method, we describe its application to the problem of automatic word classi cation in natural language processing."}
{"_id": "1b7ca711dd2a6126936c9bf1eb6636632ebdaab6", "title": "Memory consolidation and the hippocampus: further evidence from studies of autobiographical memory in semantic dementia and frontal variant frontotemporal dementia", "text": "Studies of autobiographical memory in semantic dementia have found relative preservation of memories for recent rather than remote events. As semantic dementia is associated with progressive atrophy to temporal neocortex, with early asymmetric sparing of the hippocampus, this neuropsychological pattern suggests that the hippocampal complex plays a role in the acquisition and retrieval of recent memories, but is not necessary for the recall of older episodic events. In an alternative view of memory consolidation, however, the hippocampus plays a role in the retrieval of all autobiographical memories, regardless of the age of the memory [Curr. Opin. Neurobiol. 7(1997)217]. This 'multiple trace theory' predicts that patients with semantic dementia should show no effects of time in their autobiographical recall. In this article, we ask whether it is possible to reconcile the data from semantic dementia with the multiple trace theory by investigating whether the time-dependent pattern of autobiographical retrieval seen in the disease is due to (i) patients showing this effect being exceptional in their presentation; and/or (ii) patients with semantic dementia exhibiting impaired strategic retrieval from concomitant frontal damage. A series of experiments in patients with semantic dementia, the frontal variant of frontotemporal dementia and Alzheimer's disease clearly demonstrates that neither of these two factors can explain the documented effect of time seen in semantic dementia. Nonetheless, we discuss how damage to semantic knowledge could result in an autobiographical memory deficit and suggest that data from semantic dementia may be consistent with both views of hippocampal involvement in long-term memory."}
{"_id": "1d286a264b233125b681e522e8f5fed596a8608c", "title": "An integrated GPU power and performance model", "text": "GPU architectures are increasingly important in the multi-core era due to their high number of parallel processors. Performance optimization for multi-core processors has been a challenge for programmers. Furthermore, optimizing for power consumption is even more difficult. Unfortunately, as a result of the high number of processors, the power consumption of many-core processors such as GPUs has increased significantly.\n Hence, in this paper, we propose an integrated power and performance (IPP) prediction model for a GPU architecture to predict the optimal number of active processors for a given application. The basic intuition is that when an application reaches the peak memory bandwidth, using more cores does not result in performance improvement.\n We develop an empirical power model for the GPU. Unlike most previous models, which require measured execution times, hardware performance counters, or architectural simulations, IPP predicts execution times to calculate dynamic power events. We then use the outcome of IPP to control the number of running cores. We also model the increases in power consumption that resulted from the increases in temperature.\n With the predicted optimal number of active cores, we show that we can save up to 22.09%of runtime GPU energy consumption and on average 10.99% of that for the five memory bandwidth-limited benchmarks."}
{"_id": "2d7e04e61aab62e37c007c60efdfa05819a8724b", "title": "Growth of the flickr social network", "text": "Online social networking sites like MySpace, Orkut, and Flickr are among the most popular sites on the Web and continue to experience dramatic growth in their user population. The popularity of these sites offers a unique opportunity to study the dynamics of social networks at scale. Having a proper understanding of how online social networks grow can provide insights into the network structure, allow predictions of future growth, and enable simulation of systems on networks of arbitrary size. However, to date, most empirical studies have focused on static network snapshots rather than growth dynamics.\n In this paper, we collect and examine detailed growth data from the Flickr online social network, focusing on the ways in which new links are formed. Our study makes two contributions. First, we collect detailed data covering three months of growth, encompassing 950,143 new users and over 9.7 million new links, and we make this data available to the research community. Second, we use a first-principles approach to investigate the link formation process. In short, we find that links tend to be created by users who already have many links, that users tend to respond to incoming links by creating links back to the source, and that users link to other users who are already close in the network."}
{"_id": "11f8c51cf54e0ea2dec517a3635771e4bb946fb4", "title": "Characterizing Data Structures for Volatile Forensics", "text": "Volatile memory forensic tools can extract valuable evidence from latent data structures present in memory dumps. However, current techniques are generally limited by a lack of understanding of the underlying data without the use of expert knowledge. In this paper, we characterize the nature of such evidence by using deep analysis techniques to better understand the life-cycle and recoverability of latent program data in memory. We have developed Cafegrind, a tool that can systematically build an object map and track the use of data structures as a program is running. Statistics collected by our tool can show which data structures are the most numerous, which structures are the most frequently accessed and provide summary statistics to guide forensic analysts in the evidence gathering process. As programs grow increasingly complex and numerous, the ability to pinpoint specific evidence in memory dumps will be increasingly helpful. Cafegrind has been tested on a number of real-world applications and we have shown that it can successfully map up to 96% of heap accesses."}
{"_id": "0ffab19326002ada57c0a22b1801439a8040ac54", "title": "Sparse Model-based Image Alignment Feature Alignment Pose & Structure Refinement Motion Estimation Thread New Image Last Frame Map Frame Queue Feature Extraction Initialize Depth-Filters", "text": "Direct methods for Visual Odometry (VO) have gained popularity due to their capability to exploit information from all intensity gradients in the image. However, low computational speed as well as missing guarantees for optimality and consistency are limiting factors of direct methods, where established feature-based methods instead succeed at. Based on these considerations, we propose a Semi-direct VO (SVO) that uses direct methods to track and triangulate pixels that are characterized by high image gradients but relies on proven feature-based methods for joint optimization of structure and motion. Together with a robust probabilistic depth estimation algorithm, this enables us to efficiently track pixels lying on weak corners and edges in environments with little or high-frequency texture. We further demonstrate that the algorithm can easily be extended to multiple cameras, to track edges, to include motion priors, and to enable the use of very large field of view cameras, such as fisheye and catadioptric ones. Experimental evaluation on benchmark datasets shows that the algorithm is significantly faster than the state of the art while achieving highly competitive accuracy. SUPPLEMENTARY MATERIAL Video of the experiments: https://youtu.be/hR8uq1RTUfA"}
{"_id": "562b3a980bbe291390ac1bc7ec81ca6be9d06be5", "title": "Autonomic nervous system activity in emotion: A review", "text": "Autonomic nervous system (ANS) activity is viewed as a major component of the emotion response in many recent theories of emotion. Positions on the degree of specificity of ANS activation in emotion, however, greatly diverge, ranging from undifferentiated arousal, over acknowledgment of strong response idiosyncrasies, to highly specific predictions of autonomic response patterns for certain emotions. A review of 134 publications that report experimental investigations of emotional effects on peripheral physiological responding in healthy individuals suggests considerable ANS response specificity in emotion when considering subtypes of distinct emotions. The importance of sound terminology of investigated affective states as well as of choice of physiological measures in assessing ANS reactivity is discussed."}
{"_id": "5a0e84b72d161ce978bba66bfb0e337b80ea1708", "title": "Dude, where's my card?: RFID positioning that works with multipath and non-line of sight", "text": "RFIDs are emerging as a vital component of the Internet of Things. In 2012, billions of RFIDs have been deployed to locate equipment, track drugs, tag retail goods, etc. Current RFID systems, however, can only identify whether a tagged object is within radio range (which could be up to tens of meters), but cannot pinpoint its exact location. Past proposals for addressing this limitation rely on a line-of-sight model and hence perform poorly when faced with multipath effects or non-line-of-sight, which are typical in real-world deployments. This paper introduces the first fine-grained RFID positioning system that is robust to multipath and non-line-of-sight scenarios. Unlike past work, which considers multipath as detrimental, our design exploits multipath to accurately locate RFIDs. The intuition underlying our design is that nearby RFIDs experience a similar multipath environment (e.g., reflectors in the environment) and thus exhibit similar multipath profiles. We capture and extract these multipath profiles by using a synthetic aperture radar (SAR) created via antenna motion. We then adapt dynamic time warping (DTW) techniques to pinpoint a tag's location. We built a prototype of our design using USRP software radios. Results from a deployment of 200 commercial RFIDs in our university library demonstrate that the new design can locate misplaced books with a median accuracy of 11~cm."}
{"_id": "5df4c94a9b191a5041dd0efd5bfb07ff263d530a", "title": "Performance evaluation and model checking join forces", "text": "A call for the perfect marriage between classical performance evaluation and state-of-the-art verification techniques."}
{"_id": "9e1ad0f8ceb47ed9c5e47149fad5cfcdfa4e5737", "title": "Activity Recognition with Smartphone Sensors Second Exam Literature Review", "text": "The ubiquity of smartphones together with their ever-growing computing, networking and sensing powers have been changing the landscape of people\u2019s daily life. Among others, activity recoginition, which takes the raw sensor reading as inputs and predicts a user\u2019s motion activity, has become an active research area in recent years. It is the core building block in many high-impact applications, ranging from health and fitness monitoring, personal biometric signature, urban computing, assistive technology and elder-care, to indoor localization and navigation, etc. This paper presents a comprehensive survey of the recent advances in activity recognition with smartphones\u2019 sensors. We start with the basic concepts such as sensors, activity types, etc. We review the core data mining techniques behind the main stream activity recognition algorithms, analyze their major challenges and introduce a variety of real applications enabled by activity recognition. Professor: Dr. Hanghang Tong Signature: Date: 06-12-2014 Professor: Dr. Ping Ji Signature: Date: 06-12-2014 Professor: Dr. Ted Brown Signature: Date: 06-12-2014 Advisor: Dr. Hanghang Tong"}
{"_id": "431310dd980475b0c1572dcda6ec1e2ff4d08508", "title": "Chinese character CAPTCHA recognition based on convolution neural network", "text": "CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart) are increasingly used in many applications for machine and human identification. Compared with traditional English and digital characters based CAPTCHAs, Chinese characters contain more complicated characters which greatly enhance difficulty of automatic recognition. To solve that problem, we proposed a Convolution Neural Network (CNN) based approach. This approach greatly improves the recognition accuracy of Chinese Character CAPTCHAs with distortion, rotation and background noise. Our experiment results show that this approach achieves more than 95% accuracy for single character and 84% accuracy for three types of Chinese Character CAPTCHAs with four characters. This encouraging result indicates that deep neural network is useful in complicated structure perception of Chinese Character CAPTCHAs."}
{"_id": "1ba1de0f143bd3166c9961acc869e123651d9836", "title": "Deep learning with support vector data description", "text": "One of the most critical problems for machine learning methods is overfitting. The overfitting problem is a phenomenon in which the accuracy of the model on unseen data is poor whereas the training accuracy is nearly perfect. This problem is particularly severe in complex models that have a large set of parameters. In this paper, we propose a deep learning neural network model that adopts the support vector data description (SVDD). The SVDD is a variant of the support vector machine, which has high generalization performance by acquiring a maximal margin in one-class classification problems. The proposed model strives to obtain the representational power of deep learning. Generalization performance is maintained using the SVDD. The experimental results showed that the proposed model can learn multiclass data without severe overfitting problems."}
{"_id": "620af2a56b4ff2c76c88c6b942a63e685e5a0412", "title": "Building Footprints Extraction from Oblique Imagery", "text": "Nowadays, multi-camera aerial platforms combining nadir and oblique cameras are experiencing a revival and several companies have proposed new image acquisition systems. Due to their various advantages, oblique imagery have found their place in numerous companies and civil applications. However, the automatic processing of such image blocks still remains a topic of research. Camera configuration indeed poses a challenge on the traditional photogrammetric pipeline used in commercial software but, on the other hand, gives the opportunity to exploit the additional information provided by the oblique views and allows a more reliable feature extraction. In particular, the information that can be provided in correspondence of building fa\u00e7ades can open new possibilities for the building detection and footprint extraction. In this paper, a methodology for the automated extraction of building footprints from oblique imagery is presented. The extraction is performed using dense point clouds generated using an image matching algorithm. The developed methodology and the achieved results are described in detail showing the advantages and opportunities offered by oblique aerial systems for cartographic and mapping purposes."}
{"_id": "0d4775270584b2e3ab0c19ad1cd98f29bcc8e773", "title": "Multi-View Dimensionality Reduction via Canonical Correlation Analysis", "text": "We analyze the multi-view regression problem where we have two views X = (X, X) of the input data and a target variableY of interest. We provide sufficient conditions under which we can reduce the dimensionality of X (via a projection) without loosing predictive power of Y . Crucially, this projection can be computed via a Canonical Correlation Analysis only on the unlabeled data. The algorithmic template is as f ollows: with unlabeled data, perform CCA and construct a certain projection; with the labeled data, do least squa res regression in this lower dimensional space. We show how, under certain natural assumptions, the number o f labeled samples could be significantly reduced (in comparison to the single view setting) \u2014 in particular, we show how this dimen sionality reduction does not loose predictive power ofY (thus it only introduces little bias but could drastically reduce the variance). We explore two separate assumptions under which this is possible and show how, under either assumption alone, dimensionality reduction could reduce the labeled sample complexity. The two assumptions we consider are a conditional independenceassumption and a redundancyassumption. The typical conditional independence assumption is that conditioned onY the viewsX andX are independent \u2014 we relax this assumption to: conditioned on some hidden state H the viewsX andX are independent. Under the redundancy assumption, we show that the best predictor from each view is roughly as good as the best predictor u sing both views."}
{"_id": "fd5f266135942c4d9d4cd681047cf967fba07d49", "title": "High Performance Ternary Multiplier Using CNTFET", "text": "Ternary logic is a promising alternative to the conventional binary logic in VLSI design as it provides the advantages of reduced interconnects, higher operating speeds and smaller chip area. This work presents a ternary multiplier using carbon nanotube field effect transistors (CNTFETs). The proposed designs use in-depth analysis of addition required for designing a two trit multiplier. Based on this analysis two efficient adders are proposed. These adders are used to optimize the multiplier design. The proposed circuits are extensively simulated using HSPICE to obtain power, delay and power delay product. The circuit performances are compared with designs reported in recent literature. This circuit demonstrates a power delay product improvement up to 16.8%, with lesser transistor count of 16%. So, the use of these circuits in complex arithmetic circuits will be advantageous."}
{"_id": "09f64d88010c9b31e971b697dc6d64349285b788", "title": "Informative Path Planning and Mapping with Multiple UAVs in Wind Fields", "text": "Informative path planning (IPP) is used to design paths for robotic sensor platforms to extract the best/maximum possible information about a quantity of interest while operating under a set of constraints, such as dynamic feasibility of vehicles. The key challenges of IPP are the strong coupling in multiple layers of decisions: the selection of locations to visit, the allocation of sensor platforms to those locations; and the processing of the gathered information along the paths. This paper presents an systematic procedure for IPP and environmental mapping using multiple UAV sensor platforms. It (a) selects the best locations to observe, (b) calculates the cost and finds the best paths for each UAV, and (c) estimates the measurement value within a given region using the Gaussian process (GP) regression framework. An illustrative example of RF intensity field mapping is presented to demonstrate the validity and applicability of the proposed approach."}
{"_id": "8f5e4fc6b92bcc50ed5f91104b69a58ac98ede9d", "title": "A new pedestrian detection method based on combined HOG and LSS features", "text": "Pedestrian detection is a critical issue in computer vision, with several feature descriptors can be adopted. Since the ability of various kinds of feature descriptor is different in pedestrian detection and there is no basis in feature selection, we analyze the commonly used features in theory and compare them in experiments. It is desired to find a new feature with the strongest description ability from their pair-wise combinations. In experiments, INRIA database and Daimler database are adopted as the training and testing set. By theoretic analysis, we find the HOG\u2013LSS combined feature have more comprehensive description ability. At first, Adaboost is regarded as classifier and the experimental results show that the description ability of the new combination features is improved on the basis of the single feature and HOG\u2013LSS combined feature has the strongest description ability. For further verifying this conclusion, SVM classifier is used in the experiment. The detection performance is evaluated by miss rate, the false positives per window, and the false positives per image. The results of these indicators further prove that description ability of HOG\u2013LSS feature is better than other combination of these"}
{"_id": "5a8da94888feb243b7f4a64de01d3528e5ba573d", "title": "Visfer: Camera-based visual data transfer for cross-device visualization", "text": "Going beyond the desktop to leverage novel devices\u2014such as smartphones, tablets, or large displays\u2014for visual sensemaking typically requires supporting extraneous operations for device discovery, interaction sharing, and view management. Such operations can be time-consuming and tedious, and distract the user from the actual analysis. Embodied interaction models in these multi-device environments can take advantage of the natural interaction and physicality afforded by multimodal devices and help effectively carry out these operations in visual sensemaking. In this paper, we present cross-device interaction models for visualization spaces, that are embodied in nature, by conducting a user study to elicit actions from participants that could trigger a portrayed effect of sharing visualizations (and therefore information) across devices. We then explore one common interaction style from this design elicitation called Visfer, a technique for effortlessly sharing visualizations across devices using the visual medium. More specifically, this technique involves taking pictures of visualizations, or rather the QR codes augmenting them, on a display using the built-in camera on a handheld device. Our contributions include a conceptual framework for cross-device interaction and the Visfer technique itself, as well as transformation guidelines to exploit the capabilities of each specific device and a web framework for encoding visualization components into animated QR codes, which capture multiple frames of QR codes to embed more information. Beyond this, we also present the results from a performance evaluation for the visual data transfer enabled by Visfer. We end the paper by presenting the application examples of our Visfer framework."}
{"_id": "1934b909471ed388936fcf79cde66aa3711c2acb", "title": "CS Teacher Experiences with Educational Technology, Problem-BasedLearning, and a CS Principles Curriculum", "text": "Little is known about how K-12 Computer Science (CS) teachers use technology and problem-based learning (PBL) to teach CS content in the context of CS Principles curricula. Significantly, little qualitative research has been conducted in these areas in computer science education, so we lack an in-depth understanding of the complicated realities of CS teachers' experiences. This paper describes the practices and experiences of six teachers' use of technology that was implemented to support PBL in the context of a dual enrollment CS Principles course.\n Results from an early offering of this course suggest that (1) while CS teachers used technology, they did not appear to use it to support student inquiry, (2) local adaptations to the curriculum were largely teacher-centric, and (3) the simultaneous adoption of new instructional practices, technologies, and curricula was overwhelming to teachers. This paper then describes how these results were used to modify the curriculum and professional development, leading to increased teacher satisfaction and student success in the course."}
{"_id": "a54559bc351726d481e94e47094af11287c01ea6", "title": "Resin composite--state of the art.", "text": "OBJECTIVES\nThe objective is to review the current state of the art of dental composite materials.\n\n\nMETHODS\nAn outline of the most important aspects of dental composites was created, and a subsequent literature search for articles related to their formulation, properties and clinical considerations was conducted using PubMed followed by hand searching citations from relevant articles.\n\n\nRESULTS\nThe current state of the art of dental composites includes a wide variety of materials with a broad range of mechanical properties, handling characteristics, and esthetic possibilities. This highly competitive market continues to evolve, with the major emphasis in the past being to produce materials with adequate strength, and high wear resistance and polishability retention. The more recent research and development efforts have addressed the issue of polymerization shrinkage and its accompanying stress, which may have a deleterious effect on the composite/tooth interfacial bond. Current efforts are focused on the delivery of materials with potentially therapeutic benefits and self-adhesive properties, the latter leading to truly simplified placement in the mouth.\n\n\nSIGNIFICANCE\nThere is no one ideal material available to the clinician, but the commercial materials that comprise the current armamentarium are of high quality and when used appropriately, have proven to deliver excellent clinical outcomes of adequate longevity."}
{"_id": "cf328bb777c6b3701e6b40e4c2a2f06f2d538513", "title": "On-road driver eye movement tracking using head-mounted devices", "text": "It is now evident from anecdotal evidence and preliminary research that distractions can hinder the task of operating a vehicle, and consequently reduce driver safety. However with increasing wireless connectivity and the portability of office devices, the vehicle of the future is visualized as an extension of the static work place - i.e. an office-on-the-move, with a phone, a fax machine and a computer all within the reach of the vehicle operator. For this research a Head mounted Eye-tracking Device (HED), is used for tracking the eye movements of a driver navigating a test route in an automobible while completing various driving tasks. Issues arising from data collection of eye movements during the completion of various driving tasks as well as the analysis of this data are discussed. Methods for collecting video and scan-path data, as well as difficulties and limitations are also reported."}
{"_id": "1b7aef72aacbef7602f7c5c1a77de941074303fb", "title": "State-space model with deep learning for functional dynamics estimation in resting-state fMRI", "text": "Studies on resting-state functional Magnetic Resonance Imaging (rs-fMRI) have shown that different brain regions still actively interact with each other while a subject is at rest, and such functional interaction is not stationary but changes over time. In terms of a large-scale brain network, in this paper, we focus on time-varying patterns of functional networks, i.e., functional dynamics, inherent in rs-fMRI, which is one of the emerging issues along with the network modelling. Specifically, we propose a novel methodological architecture that combines deep learning and state-space modelling, and apply it to rs-fMRI based Mild Cognitive Impairment (MCI) diagnosis. We first devise a Deep Auto-Encoder (DAE) to discover hierarchical non-linear functional relations among regions, by which we transform the regional features into an embedding space, whose bases are complex functional networks. Given the embedded functional features, we then use a Hidden Markov Model (HMM) to estimate dynamic characteristics of functional networks inherent in rs-fMRI via internal states, which are unobservable but can be inferred from observations statistically. By building a generative model with an HMM, we estimate the likelihood of the input features of rs-fMRI as belonging to the corresponding status, i.e., MCI or normal healthy control, based on which we identify the clinical label of a testing subject. In order to validate the effectiveness of the proposed method, we performed experiments on two different datasets and compared with state-of-the-art methods in the literature. We also analyzed the functional networks learned by DAE, estimated the functional connectivities by decoding hidden states in HMM, and investigated the estimated functional connectivities by means of a graph-theoretic approach."}
{"_id": "43814f683616ff3db83488f73df44b01d2e67392", "title": "Computationally modeling human emotion", "text": "Computer models of emotion inform theories of human intelligence and advance human-centric applications."}
{"_id": "3e93e41ccd22596ca478f336727f6340b12e7572", "title": "LEARNING PHYSICAL DYNAMICS", "text": "We present the Neural Physics Engine (NPE), a framework for learning simulators of intuitive physics that naturally generalize across variable object count and different scene configurations. We propose a factorization of a physical scene into composable object-based representations and a neural network architecture whose compositional structure factorizes object dynamics into pairwise interactions. Like a symbolic physics engine, the NPE is endowed with generic notions of objects and their interactions; realized as a neural network, it can be trained via stochastic gradient descent to adapt to specific object properties and dynamics of different worlds. We evaluate the efficacy of our approach on simple rigid body dynamics in two-dimensional worlds. By comparing to less structured architectures, we show that the NPE\u2019s compositional representation of the structure in physical interactions improves its ability to predict movement, generalize across variable object count and different scene configurations, and infer latent properties of objects such as mass."}
{"_id": "f63ebe0ced9ca61ff1dd42b592f365db821e1f7e", "title": "An Ontology-based Context Model in Intelligent Environments", "text": "Computing becomes increasingly mobile and pervasive today; these changes imply that applications and services must be aware of and adapt to their changing contexts in highly dynamic environments. Today, building context-aware systems is a complex task due to lack of an appropriate infrastructure support in intelligent environments. A context-aware infrastructure requires an appropriate context model to represent, manipulate and access context information. In this paper, we propose a formal context model based on ontology using OWL to address issues including semantic context representation, context reasoning and knowledge sharing, context classification, context dependency and quality of context. The main benefit of this model is the ability to reason about various contexts. Based on our context model, we also present a Service-Oriented Context-Aware Middleware (SOCAM) architecture for building of context-aware services."}
{"_id": "981414fb045bf10380e3a154da9b7555b8334ad8", "title": "Parental vaccine safety concerns in 2009.", "text": "OBJECTIVE\nVaccine safety concerns can diminish parents' willingness to vaccinate their children. The objective of this study was to characterize the current prevalence of parental vaccine refusal and specific vaccine safety concerns and to determine whether such concerns were more common in specific population groups.\n\n\nMETHODS\nIn January 2009, as part of a larger study of parents and nonparents, 2521 online surveys were sent to a nationally representative sample of parents of children who were aged =17 years. The main outcome measures were parental opinions on vaccine safety and whether the parent had ever refused a vaccine that a doctor recommended for his or her child.\n\n\nRESULTS\nThe response rate was 62%. Most parents agreed that vaccines protect their child(ren) from diseases; however, more than half of the respondents also expressed concerns regarding serious adverse effects. Overall, 11.5% of the parents had refused at least 1 recommended vaccine. Women were more likely to be concerned about serious adverse effects, to believe that some vaccines cause autism, and to have ever refused a vaccine for their child(ren). Hispanic parents were more likely than white or black parents to report that they generally follow their doctor's recommendations about vaccines for their children and less likely to have ever refused a vaccine. Hispanic parents were also more likely to be concerned about serious adverse effects of vaccines and to believe that some vaccines cause autism.\n\n\nCONCLUSIONS\nAlthough parents overwhelmingly share the belief that vaccines are a good way to protect their children from disease, these same parents express concerns regarding the potential adverse effects and especially seem to question the safety of newer vaccines. Although information is available to address many vaccine safety concerns, such information is not reaching many parents in an effective or convincing manner."}
{"_id": "2c9b78203f61d30cf6c3b04a979554d3e59a3f61", "title": "Testing Intrusion detection systems: a critique of the 1998 and 1999 DARPA intrusion detection system evaluations as performed by Lincoln Laboratory", "text": "In 1998 and again in 1999, the Lincoln Laboratory of MIT conducted a comparative evaluation of intrusion detection systems (IDSs) developed under DARPA funding. While this evaluation represents a significant and monumental undertaking, there are a number of issues associated with its design and execution that remain unsettled. Some methodologies used in the evaluation are questionable and may have biased its results. One problem is that the evaluators have published relatively little concerning some of the more critical aspects of their work, such as validation of their test data. The appropriateness of the evaluation techniques used needs further investigation. The purpose of this article is to attempt to identify the shortcomings of the Lincoln Lab effort in the hope that future efforts of this kind will be placed on a sounder footing. Some of the problems that the article points out might well be resolved if the evaluators were to publish a detailed description of their procedures and the rationale that led to their adoption, but other problems would clearly remain./par>"}
{"_id": "7f750853c849dbdf08a17fcd91ab077fb6d8a791", "title": "Aberrant Behavior Detection in Time Series for Network Monitoring", "text": "The open-source software RRDtool and Cricket provide a solution to the problem of collecting, storing, and visualizing service network time series data for the real-time monitoring task. However, simultaneously monitoring all service network time series of interest is an impossible task even for the accomplished network technician. The solution is to integrate a mathematical model for automatic aberrant behavior detection in time series into the monitoring software. While there are many such models one might choose, the primary goal should be a model compatible with real-time monitoring. At WebTV, the solution was to integrate a model based on exponential smoothing and Holt-Winters forecasting into the Cricket/RRDtool architecture. While perhaps not optimal, this solution is flexible, efficient, and effective as a tool for automatic aberrant behavior detection."}
{"_id": "fbcd7840a2159a2128b28187b52ca4e4421437ab", "title": "Unsupervised Anomaly Detection in Network Intrusion Detection Using Clusters", "text": "Most current network intrusion detection systems employ signature-based methods or data mining-based methods which rely on labelled training data. This training data is typically expensive to produce. Moreover, these methods have difficulty in detecting new types of attack. Using unsupervised anomaly detection techniques, however, the system can be trained with unlabelled data and is capable of detecting previously \u201cunseen\u201d attacks. In this paper, we present a new density-based and grid-based clustering algorithm that is suitable for unsupervised anomaly detection. We evaluated our methods using the 1999 KDD Cup data set. Our evaluation shows that the accuracy of our approach is close to that of existing techniques reported in the literature, and has several advantages in terms of computational complexity."}
{"_id": "1a76f0539a9badf317b0b35ea92f734c62467138", "title": "Data Mining Approaches for Intrusion Detection", "text": "In this paper we discuss our research in developing general and systematic methods for intrusion detection. The key ideas are to use data mining techniques to discover consistent and useful patterns of system features that describe program and user behavior, and use the set of relevant system features to compute (inductively learned) classifiers that can recognize anomalies and known intrusions. Using experiments on the s ndmailsystem call data and the network tcpdumpdata, we demonstrate that we can construct concise and accurate classifiers to detect anomalies. We provide an overview on two general data mining algorithms that we have implemented: the association rules algorithm and the frequent episodes algorithm. These algorithms can be used to compute the intraand interaudit record patterns, which are essential in describing program or user behavior. The discovered patterns can guide the audit data gathering process and facilitate feature selection. To meet the challenges of both efficient learning (mining) and real-time detection, we propose an agent-based architecture for intrusion detection systems where the learning agents continuously compute and provide the updated (detection) models to the detection agents."}
{"_id": "c42639a324930275df4baad710b465f15439dfb1", "title": "Improving Sentiment Analysis with Document-Level Semantic Relationships from Rhetoric Discourse Structures", "text": "Conventional sentiment analysis usually neglects semantic information between (sub-)clauses, as it merely implements so-called bag-of-words approaches, where the sentiment of individual words is aggregated independently of the document structure. Instead, we advance sentiment analysis by the use of rhetoric structure theory (RST), which provides a hierarchical representation of texts at document level. For this purpose, texts are split into elementary discourse units (EDU). These EDUs span a hierarchical structure in the form of a binary tree, where the branches are labeled according to their semantic discourse. Accordingly, this paper proposes a novel combination of weighting and grid search to aggregate sentiment scores from the RST tree, as well as feature engineering for machine learning. We apply our algorithms to the especially hard task of predicting stock returns subsequent to financial disclosures. As a result, machine learning improves the balanced accuracy by 8.6 percent compared to the baseline."}
{"_id": "66326f2e81838c23f92d427f69ebc16c33e1006b", "title": "Measurement of preoperative and postoperative nasal tip projection and rotation.", "text": "OBJECTIVE\nTo measure the effect of columellar struts and cephalic trim on tip projection and tip rotation using digitized photographs.\n\n\nMETHODS\nUsing photographs of 62 patients who underwent external rhinoplasty, we retrospectively analyzed nasal tip projection (the Goode method) and rotation (nasolabial angle) before and after surgery. A cartilaginous strut was used in 36 patients, whereas 26 patients did not receive a strut. Patients were categorized into 4 subgroups, depending on the placement of a strut (placement, strut+ vs nonplacement, strut-) and the removal of the cephalic margin (removal, cephalic+ vs nonremoval, cephalic-) of the lateral crus: strut-/cephalic-, n = 17; strut+/cephalic-, n = 23; strut-/cephalic+, n = 9; strut+/cephalic+, n = 12.\n\n\nRESULTS\nNasal tip projection, measured with the Goode method, increased from 0.58 to 0.60 (P = .02) in the strut+ group; in the strut- group, nasal tip projection did not change significantly. Nasolabial angle increased from 93.96 degrees to 100.92 degrees in the strut+/cephalic- group and from 88.30 degrees to 95.06 degrees in the strut+/cephalic+ group. Removal of the cephalic margin alone (strut-/cephalic+) hardly affected tip rotation (P = .05).\n\n\nCONCLUSIONS\nThe external rhinoplasty approach did not lead to a decrease in nasal tip projection. A cartilaginous strut slightly increased nasal tip projection and also increased nasal tip rotation. This effect was accentuated by the removal of the cephalic margin of the lateral crus."}
{"_id": "7de6e81d775e9cd7becbfd1bd685f4e2a5eebb22", "title": "Labeled Faces in the Wild: A Survey", "text": "In 2007, Labeled Faces in the Wild was released in an effort to spur research in face recognition, specifically for the problem of face verification with unconstrained images. Since that time, more than 50 papers have been published that improve upon this benchmark in some respect. A remarkably wide variety of innovative methods have been developed to overcome the challenges presented in this database. As performance on some aspects of the benchmark approaches 100% accuracy, it seems appropriate to review this progress, derive what general principles we can from these works, and identify key future challenges in face recognition. In this survey, we review the contributions to LFW for which the authors have provided results to the curators (results found on the LFW results web page). We also review the cross cutting topic of alignment and how it is used in various methods. We end with a brief discussion of recent databases designed to challenge the next generation of face recognition algorithms. Erik Learned-Miller University of Massachusetts, Amherst, Massachusetts, e-mail: elm@cs.umass.edu Gary B. Huang Howard Hughes Medical Institute, Janelia Research Campus, e-mail: gbhuang@cs.umass.edu Aruni RoyChowdhury University of Massachusetts, Amherst, Massachusetts, e-mail: aruni@cs.umass.edu Haoxiang Li Stevens Institute of Technology, Hoboken, New Jersey, e-mail: hli18@stevens.edu Gang Hua Stevens Institute of Technology, Hoboken, New Jersey, e-mail: ganghua@gmail.com"}
{"_id": "fffd38961b9b78cfb4add0a67fda76e83932bdf8", "title": "Open production: scientific foundation for co-creative product realization", "text": "Globalization and the use of technology call for an adaptation of value creation strategies. As the potential for rationalization and achieving flexibility within companies is to the greatest possible extent exhausted, approaches to the corporate reorganization of value creation are becoming increasingly important. In this process, the spread and further development of information and communication technology often provide the basis for a reorganization of cross-company value nets and lead to a redistribution of roles and tasks between the actors involved in value creation. While cooperative, decentralized and self-organizing value creation processes are in fact being promoted, the associated potential for development and production engineering is being underestimated and hence not implemented sufficiently. This contribution will introduce a value creation taxonomy and then, using its notion and structure, describe the emerging transformations in value creation on the basis of case studies. Finally an adequate framework for analysing and configuring value creation will be presented."}
{"_id": "385c8f8a4589940d59697e1e89830f4fee8a468b", "title": "Partisan Media Exposure and Attitudes Toward the Opposition", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the \u201cContent\u201d) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content."}
{"_id": "979279e200ab8dc71445c571a5e26444a0307f9a", "title": "Steerable interfaces for pervasive computing spaces", "text": "This paper introduces a new class of interactive interfaces that can be moved around to appear on ordinary objects and surfaces anywhere in a space. By dynamically adapting the form, function, and location of an interface to suit the context of the user, such steerable interfaces have the potential to offer radically new and powerful styles of interaction in intelligent pervasive computing spaces. We propose defining characteristics of steerable interfaces and present the first steerable interface system that combines projection, gesture recognition, user tracking, environment modeling and geometric reasoning components within a system architecture. Our work suggests that there is great promise and rich potential for further research on steerable interfaces."}
{"_id": "72b06ad725fdc4066667134b394c92c96540acb7", "title": "Computational Intelligence Models for Insurance Fraud Detection : A Review of a Decade of Research", "text": "This paper presents a review of the literature on the application of data mining techniques for the detection of insurance fraud. Academic literature were analyzed and classified into three types of insurance fraud (automobile insurance, crop insurance and healthcare insurance) and six classes of data mining techniques (classification, regression, clustering, prediction, outlier detection, and visualization). The findings of this review clearly show that automobile insurance fraud detection have also attracted a great deal of attention in recent years. The main data mining techniques used for insurance fraud detection are logistic models, Decision tree, the Na\u00efve Bayes, and support vector machine."}
{"_id": "f51cdbcfed1822811c3e114b317f270e28c89275", "title": "Fast Optimal Transport Averaging of Neuroimaging Data", "text": "Knowing how the Human brain is anatomically and functionally organized at the level of a group of healthy individuals or patients is the primary goal of neuroimaging research. Yet computing an average of brain imaging data defined over a voxel grid or a triangulation remains a challenge. Data are large, the geometry of the brain is complex and the between subjects variability leads to spatially or temporally non-overlapping effects of interest. To address the problem of variability, data are commonly smoothed before performing a linear group averaging. In this work we build on ideas originally introduced by Kantorovich to propose a new algorithm that can average efficiently non-normalized data defined over arbitrary discrete domains using transportation metrics. We show how Kantorovich means can be linked to Wasserstein barycenters in order to take advantage of the entropic smoothing approach used by. It leads to a smooth convex optimization problem and an algorithm with strong convergence guarantees. We illustrate the versatility of this tool and its empirical behavior on functional neuroimaging data, functional MRI and magnetoencephalography (MEG) source estimates, defined on voxel grids and triangulations of the folded cortical surface."}
{"_id": "76f427542150941395ef7b1c8a8043b7a3a5ea5f", "title": "Deep Learning with Edge Computing for Localization of Epileptogenicity Using Multimodal rs-fMRI and EEG Big Data", "text": "Epilepsy is a chronic brain disorder characterized by the occurrence of spontaneous seizures of which about 30 percent of patients remain medically intractable and may undergo surgical intervention; despite the latter, some may still fail to attain a seizure-free outcome. Functional changes may precede structural ones in the epileptic brain and may be detectable using existing noninvasive modalities. Functional connectivity analysis through electroencephalography (EEG) and resting state-functional magnetic resonance imaging (rs-fMRI), complemented by diffusion tensor imaging (DTI), has provided such meaningful input in cases of temporal lobe epilepsy (TLE). Recently, the emergence of edge computing has provided competent solutions enabling context-aware and real-time response services for users. By leveraging the potential of autonomic edge computing in epilepsy, we develop and deploy both noninvasive and invasive methods for the monitoring, evaluation and regulation of the epileptic brain, with responsive neurostimulation (RNS; Neuropace). First, an autonomic edge computing framework is proposed for processing of big data as part of a decision support system for surgical candidacy. Second, an optimized model for estimation of the epileptogenic network using independently acquired EEG and rs-fMRI is presented. Third, an unsupervised feature extraction model is developed based on a convolutional deep learning structure for distinguishing interictal epileptic discharge (IED) periods from nonIED periods using electrographic signals from electrocorticography (ECoG). Experimental and simulation results from actual patient data validate the effectiveness of the proposed methods."}
{"_id": "e64d6db47ed60e8c6f446ec7a34d869a05c44e0f", "title": "The sounds of silence: language, cognition, and anxiety in selective mutism.", "text": "OBJECTIVES\nTo determine whether oral language, working memory, and social anxiety differentiate children with selective mutism (SM), children with anxiety disorders (ANX), and normal controls (NCs) and explore predictors of mutism severity.\n\n\nMETHOD\nChildren ages 6 to 10 years with SM (n = 44) were compared with children with ANX (n = 28) and NCs (n = 19) of similar age on standardized measures of language, nonverbal working memory, and social anxiety. Variables correlating with mutism severity were entered in stepwise regressions to determine predictors of mute behavior in SM.\n\n\nRESULTS\nChildren with SM scored significantly lower on standardized language measures than children with ANX and NCs and showed greater visual memory deficits and social anxiety relative to these two groups. Age and receptive grammar ability predicted less severe mutism, whereas social anxiety predicted more severe mutism. These factors accounted for 38% of the variance in mutism severity.\n\n\nCONCLUSIONS\nSocial anxiety and language deficits are evident in SM, may predict mutism severity, and should be evaluated in clinical assessment. Replication is indicated, as are further studies of cognition and of intervention in SM, using large, diverse samples."}
{"_id": "af229f71430f5184de0817b02776acbdf53fb754", "title": "Collaborative Filtering Service Recommendation Based on a Novel Similarity Computation Method", "text": "Recently, collaborative filtering-based methods are widely used for service recommendation. QoS attribute value-based collaborative filtering service recommendation mainly includes two important steps. One is the similarity computation, and the other is the prediction for QoS attribute value, which the user has not experienced. In previous studies, the performances of some methods need to be improved. In this paper, we propose a ratio-based method to calculate the similarity. We can get the similarity between users or between items by comparing the attribute values directly. Based on our similarity computation method, we propose a new method to predict the unknown value. By comparing the values of a similar service and the current service that are invoked by common users, we can obtain the final prediction result. The performance of the proposed method is evaluated through a large data set of real web services. Experimental results show that our method obtains better prediction precision, lower mean absolute error ( $MAE$ ) and faster computation time than various reference schemes considered."}
{"_id": "61a074d96571554aecef5a9bcaa7264b0601c768", "title": "Relative Physical Knowledge of Actions and Objects", "text": "Learning commonsense knowledge from natural language text is nontrivial due to reporting bias: people rarely state the obvious, e.g., \u201cMy house is bigger than me.\u201d However, while rarely stated explicitly, this trivial everyday knowledge does influence the way people talk about the world, which provides indirect clues to reason about the world. For example, a statement like, \u201cTyler entered his house\u201d implies that his house is bigger than Tyler. In this paper, we present an approach to infer relative physical knowledge of actions and objects along five dimensions (e.g., size, weight, and strength) from unstructured natural language text. We frame knowledge acquisition as joint inference over two closely related problems: learning (1) relative physical knowledge of object pairs and (2) physical implications of actions when applied to those object pairs. Empirical results demonstrate that it is possible to extract knowledge of actions and objects from language and that joint inference over different types of knowledge improves performance."}
{"_id": "197a26f9665fc4f33c0e98a0a86e12c44139f7c6", "title": "Graph Databases", "text": "Story Slides Slide 1 Graph Databases and the Semantic Web Slide 2 Background Slide 3 Graph Databases: The Definitive Book Slide 4 The Graph Database Space Slide 5 Gartner Magic Quadrant for BI & Analytics Platforms Slide 6 Foreword and Preface Slide 7"}
{"_id": "62af2a997ec4db11cb698096c851009a1de2a142", "title": "Smart cities based on web semantic technologies", "text": "A good communication and interaction between citizens and the administration is important and crucial, also can greatly help in improving the quality of urban life of citizen. In this paper, we propose a semantic data model for managing and resolving the problems that exist in cities such as water leak, street faults, broken street lights, and potholes. The main idea is to focus on the best practices of linked open data to describe all issues, and then integrate them in the dataset provided by DBpedia. Hence, our approach is based on the standards of the World Wide Web Consortium."}
{"_id": "ddedb83a585a53c8cd6f3277cdeaa367b526173f", "title": "Examining strategic ambidexterity as an antecedent of functional and cross-functional ambidexterity", "text": ""}
{"_id": "79fe72080be951cf096524fd54c33402387c8e8f", "title": "Bitcoin's Academic Pedigree", "text": "The concept of cryptocurrencies is built from forgotten ideas in research literature."}
{"_id": "303c34acc2f38742d8a5138339ce1a838f55d902", "title": "A High-Voltage Pulse Generator With Continuously Variable Pulsewidth Based on a Modified PFN", "text": "The amount of stored energy is an important issue in the low-droop and high-precision pulse generation. Pulse forming network (PFN) simulating an open-ended transmission line seems to be useful due to its minimum stored energy. The drawbacks of the PFN include low-output pulse quality and constant pulsewidth, which limit their applications. Another limitation of generating rectangular pulses using PFN is that half of the charged voltage appears in the output when delivering the entire stored energy to the matched load. This paper proposes a structure of PFN-based pulse generator that overcomes the above-mentioned limitations. For continuous pulsewidth adjustment, a lossless scheme is presented. By increasing the ratio of the load magnitude to the characteristic impedance, the amplitude of the output pulse voltage would be approximately equal to that of the charged voltage. Therefore, the semiconductor switches can be utilized in the PFN pulse generator easily. In addition, the pulse quality is enhanced and faster rising time is obtained in comparison with the conventional PFNs. In this proposal, the stored energy quantity is limited to three times the energy needed for the output pulse in the case of $R_{L}=10Z_{0}$ . To verify the proposed structure, a prototype is simulated using PSPICE software, and the experimental results are presented."}
{"_id": "21d83202678668d1fc1c1e852f1e5179173f1ef5", "title": "Is it a tool suitable for learning? A critical review of the literature on Facebook as a technology-enhanced learning environment", "text": "Despite its continuing popularity as the social network site par excellence, the educational value of Facebook has not been fully determined, and results from the mainstream educational paradigms are contradictory, with some scholars emphasizing its pedagogical affordances (e.g., widening context of learning, mixing information and learning resources, hybridization of expertise) and others cautioning against its use for educational purposes. Moreover, systematic reviews about documented educational usage of Facebook as a learning environment are lacking. This article attempts to provide a critical overview of current studies focusing on the use of Facebook as a technology-enhanced learning environment, with the aim of exploring the extent to which its pedagogical potential is actually translated into practice. Only empirical studies published in peer-reviewed academic journals with a specific focus on Facebook as a learning environment have been considered for the review. The authors conducted a comprehensive literature search that identified 23 relevant articles that were subsequently analysed according to a simplified list of guidelines. These articles were further analysed and recoded through a set of emerging categories. The results show that pedagogical affordances of Facebook have only been partially implemented and that there are still many obstacles that may prevent a full adoption of Facebook as a learning environment such as implicit institutional, teacher and student pedagogies, and cultural issues. Finally, a broad observation on the implications of the study is developed with some suggestions for future research."}
{"_id": "ff932a8313d6e983aff1aeb1e9052c8035808401", "title": "A Next-Generation Secure Cloud-Based Deep Learning License Plate Recognition for Smart Cities", "text": "License Plate Recognition System (LPRS) plays a vital role in smart city initiatives such as traffic control, smart parking, toll management and security. In this article, a cloud-based LPRS is addressed in the context of efficiency where accuracy and speed of processing plays a critical role towards its success. Signature-based features technique as a deep convolutional neural network in a cloud platform is proposed for plate localization, character detection and segmentation. Extracting significant features makes the LPRS to adequately recognize the license plate in a challenging situation such as i) congested traffic with multiple plates in the image ii) plate orientation towards brightness, iii) extra information on the plate, iv) distortion due to wear and tear and v) distortion about captured images in bad weather like as hazy images. Furthermore, the deep learning algorithm computed using bare-metal cloud servers with kernels optimized for NVIDIA GPUs, which speed up the training phase of the CNN LPDS algorithm. The experiments and results show the superiority of the performance in both recall and precision and accuracy in comparison with traditional LP detecting systems."}
{"_id": "b1419674f1cacf74e5fc27255fa18b75a739a1bb", "title": "Pornographic image detection utilizing deep convolutional neural networks", "text": "Many internet users are potential victims of the pornographic images and a large part of them are underage children. Thus, content-based pornographic images detection is an important task in computer vision and multimedia research. Previous solutions usually rely on hand-engineered visual features that are hard to analyze and select. In this paper, to detect pornographic images in any style accurately and efficiently with a single model, a novel scheme utilizing the deep convolutional neural networks (CNN) is proposed. The training data are obtained from internet followed by an improved sliding window method and some novel data augmentation approaches. Then a highly efficient training algorithm is proposed based on two strategies. The first is the pre-trained midlevel representations non-fixed fine-tuning strategy. The second is adjusting the training data at the appropriate time on the basis of the performance of the proposed network on the validation set. Furthermore, we introduce a fast image scanning method which is also based on the sliding window approach in the test. We further propose a fast forward pass method based on the \u201cfixedpoint algorithm\u201d. So our CNN could detect all scale images so fast by one forward pass. The effectiveness of the proposed method is demonstrated in experiments on the proposed dataset and the comparative results show that our \u2217Corresponding author Email address: tenglwy@gmail.com (Teng Li) Preprint submitted to Neurocomputing June 9, 2016 method lead to state-of-the-art detection performance."}
{"_id": "7168ac216098548fdf5a1e1985f1b31ccd7f9f28", "title": "A Novel DC-Side Zero-Voltage Switching (ZVS) Three-Phase Boost PWM Rectifier Controlled by an Improved SVM Method", "text": "A novel active clamping zero-voltage switching three-phase boost pulsewidth modulation (PWM) rectifier is analyzed and a modified minimum-loss space vector modulation (SVM) strategy suitable for the novel zero-voltage switching (ZVS) rectifier is proposed in this paper. The topology of the novel ZVS rectifier only adds one auxiliary active switch, one resonant inductor, and one clamping capacitor to the traditional hard-switched three-phase boost PWM rectifier. With the proposed SVM strategy, the novel ZVS rectifier can achieve ZVS for all the main and auxiliary switches. In addition, the antiparallel diodes can be turned OFF softly, so the reverse recovery current is eliminated. Besides, the voltage stress of all the switches is equal to the dc-link voltage. The operation principle and soft-switching condition of the novel ZVS rectifier are analyzed. The design guidelines of the soft switched circuit parameters are described in detail. A DSP controlled 30 kW prototype is implemented to verify the theory."}
{"_id": "8abe812f2e01845f0ea3b4c062f897d5ca9bfa0c", "title": "Using heuristics to estimate an appropriate number of latent topics in source code analysis", "text": "Latent Dirichlet Allocation (LDA) is a data clustering algorithm that performs especially well for text documents. In natural-language applications it automatically finds groups of related words (called \u201clatent topics\u201d) and clusters the documents into sets that are about the same \u201ctopic\u201d. LDA has also been applied to source code, where the documents are natural source code units such as methods or classes, and the words are the keywords, operators, and programmer-defined names in the code. The problem of determining a topic count that most appropriately describes a set of source code documents is an open problem. We address this empirically by constructing clusterings with different numbers of topics for a large number of software systems, and then use a pair of measures based on source code locality and topic model similarity to assess how well the topic structure identifies related source code units. Results suggest that the topic count required can be closely approximated using the number of software code fragments in the system. We extend these results to recommend appropriate topic counts for arbitrary software systems based on an analysis of a set of open source systems."}
{"_id": "628c204c303ebe2a36de58e85eadfbb502b96ef0", "title": "Standard-Compliant Multiview Video Coding and Streaming for Virtual Reality Applications", "text": "Virtual reality (VR) systems employ multiview cameras or camera rigs to capture a scene from the entire 360-degree perspective. Due to computational or latency constraints, it might not be possible to stitch multiview videos into a single video sequence prior to encoding. In this paper we investigate the coding and streaming of multiview VR video content. We present a standard-compliant method where we first divide the camera views into two types: Primary views represent a subset of camera views with lower resolution and non-overlapping (minimally overlapping) content which cover the entire 360-degree field-of-view to guarantee immediate monoscopic viewing during very rapid head movements. Auxiliary views consist of remaining camera views with higher resolution which produce overlapping content with the primary views and are additionally used for stereoscopic viewing. Based on this categorization, we propose a coding arrangement in which, the primary views are independently coded in the base layer and the additional auxiliary views are coded as an enhancement layer, using inter-layer prediction from primary views. The proposed system not only meets the low latency requirements of VR systems, but also conforms to the existing multilayer extensions of the High Efficiency Video Coding standard. Simulation results show that the coding and streaming performance of the proposed scheme is significantly improved compared to earlier methods."}
{"_id": "5858a8b27a5efa3c62a1adb1ede1f8cef9a80e88", "title": "Audio-based context recognition", "text": "The aim of this paper is to investigate the feasibility of an audio-based context recognition system. Here, context recognition refers to the automatic classification of the context or an environment around a device. A system is developed and compared to the accuracy of human listeners in the same task. Particular emphasis is placed on the computational complexity of the methods, since the application is of particular interest in resource-constrained portable devices. Simplistic low-dimensional feature vectors are evaluated against more standard spectral features. Using discriminative training, competitive recognition accuracies are achieved with very low-order hidden Markov models (1-3 Gaussian components). Slight improvement in recognition accuracy is observed when linear data-driven feature transformations are applied to mel-cepstral features. The recognition rate of the system as a function of the test sequence length appears to converge only after about 30 to 60 s. Some degree of accuracy can be achieved even with less than 1-s test sequence lengths. The average reaction time of the human listeners was 14 s, i.e., somewhat smaller, but of the same order as that of the system. The average recognition accuracy of the system was 58% against 69%, obtained in the listening tests in recognizing between 24 everyday contexts. The accuracies in recognizing six high-level classes were 82% for the system and 88% for the subjects."}
{"_id": "705da905174287789c57013c5d53beb22ba2f3a7", "title": "Hidden semi-Markov models", "text": "Article history: Received 14 April 2009 Available online 17 November 2009"}
{"_id": "4f2e08bc94e09b30cc361147c5f63b1e96338906", "title": "An overview of audio information retrieval", "text": "The problem of audio information retrieval is familiar to anyone who has returned from vacation to find an answering machine full of messages. While there is not yet an \u201cAltaVista\u201d for the audio data type, many workers are finding ways to automatically locate, index, and browse audio using recent advances in speech recognition and machine listening. This paper reviews the state of the art in audio information retrieval, and presents recent advances in automatic speech recognition, word spotting, speaker and music identification, and audio similarity with a view towards making audio less \u201copaque\u201d. A special section addresses intelligent interfaces for navigating and browsing audio and multimedia documents, using automatically derived information to go beyond the tape recorder metaphor."}
{"_id": "9cecd9eaeff409ccaa63aa6d25ad7ba4658a994a", "title": "Fast and robust fixed-point algorithms for independent component analysis", "text": "Independent component analysis (ICA) is a statistical method for transforming an observed multidimensional random vector into components that are statistically as independent from each other as possible. In this paper, we use a combination of two different approaches for linear ICA: Comon's information-theoretic approach and the projection pursuit approach. Using maximum entropy approximations of differential entropy, we introduce a family of new contrast (objective) functions for ICA. These contrast functions enable both the estimation of the whole decomposition by minimizing mutual information, and estimation of individual independent components as projection pursuit directions. The statistical properties of the estimators based on such contrast functions are analyzed under the assumption of the linear mixture model, and it is shown how to choose contrast functions that are robust and/or of minimum variance. Finally, we introduce simple fixed-point algorithms for practical optimization of the contrast functions. These algorithms optimize the contrast functions very fast and reliably."}
{"_id": "1b5072e402ae534014c985b2089abd9eba9154c8", "title": "Programming language evolution", "text": "Programming languages are the way developers communicate with computers\u2014just like natural languages let us communicate with one another. In both cases multiple languages have evolved. However, the programming language ecosystem changes at a much higher rate compared to natural languages. In fact, programming languages need to constantly evolve in response to user needs, hardware advances and research developments or they are otherwise displaced by newcomers. As a result, existing code written by developers may stop working with a newer language version. Consequently, developers need to search (analyse) and replace (refactor) code in their code bases to support new language versions. Traditionally, tools have focused on the replace aspect (refactoring) to support developers evolving their code bases. This dissertation argues that developers also need machine support focused on the search aspect. This dissertation starts by characterising factors driving programming language evolution based on external versus internal forces. Next, it introduces a classification of changes introduced by language designers that affect developers. It then contributes three techniques to support developers in analysing their code bases. First, we show a source code query system based on graphs that express code queries at a mixture of syntax-tree, type, control-flow graph and data-flow levels. We demonstrate how this system can support developers and language designers in locating various code patterns relevant in evolution. Second, we design an optional run-time type system for Python, that lets developers manually specify contracts to identify semantic incompatibilities between Python 2 and Python 3. Third, recognising that existing codebases do not have such contracts, we describe a dynamic analysis to automatically generate them."}
{"_id": "667e1e6de637841a01a334a7a701fa5a1ae0017d", "title": "Compressive Spectrum Sensing for Cognitive Radio Networks", "text": "............................................................................................................................... 3 R\u00c9SUME .................................................................................................................................... 5 ACKNOWLEDGEMENT .......................................................................................................... 7 TABLE OF CONTENTS ............................................................................................................ 9 LIST OF FIGURES .................................................................................................................. 12 LIST OF TABLES .................................................................................................................... 13 NOTATION .............................................................................................................................. 14 LIST OF ABBREVIATION ..................................................................................................... 15 Chapter I .................................................................................................................................... 16 INTRODUCTION .................................................................................................................... 16 I.1 Spectrum Management and Cognitive Radio .................................................................. 16 I.2 Cognitive Radio Cycle ..................................................................................................... 18 I.3 Compressive Sensing ....................................................................................................... 19 I.4 Dissertation Objectives .................................................................................................... 19 I.5 Dissertation Contributions ............................................................................................... 20 I.6 List of Publications .......................................................................................................... 21 I.7 Dissertation Organization ................................................................................................ 23 Chapter II .................................................................................................................................. 24 SPECTRUM SENSING ........................................................................................................... 24 II.1 Spectrum Sensing Model ............................................................................................... 24 II.2 Spectrum Sensing Techniques ....................................................................................... 26 II.2.1 Energy detection ...................................................................................................... 26 II.2.2 Autocorrelation based Detection ............................................................................. 28 II.2.3 Euclidian Distance based Detection ........................................................................ 30 II.2.4 Wavelet based Sensing ............................................................................................ 31 II.2.5 Matched Filter Detection ......................................................................................... 
33 II.2.6 Evaluation Metrics .................................................................................................. 34 II.3 Conclusion ...................................................................................................................... 35 Chapter III ................................................................................................................................. 36 COMPRESSIVE SENSING ..................................................................................................... 36 III."}
{"_id": "0884a1b616d64e5b2c252f5980acf2049b2d8b80", "title": "Unsupervised Transfer Learning for Spoken Language Understanding in Intelligent Agents", "text": "User interaction with voice-powered agents generates large amounts of unlabeled utterances. In this paper, we explore techniques to efficiently transfer the knowledge from these unlabeled utterances to improve model performance on Spoken Language Understanding (SLU) tasks. We use Embeddings from Language Model (ELMo) to take advantage of unlabeled data by learning contextualized word representations. Additionally, we propose ELMo-Light (ELMoL), a faster and simpler unsupervised pre-training method for SLU. Our findings suggest unsupervised pre-training on a large corpora of unlabeled utterances leads to significantly better SLU performance compared to training from scratch and it can even outperform conventional supervised transfer. Additionally, we show that the gains from unsupervised transfer techniques can be further improved by supervised transfer. The improvements are more pronounced in low resource settings and when using only 1000 labeled in-domain samples, our techniques match the performance of training from scratch on 10-15x more labeled in-domain data."}
{"_id": "563f2b60598d4b07124f970ac86eb4d5932509b6", "title": "Through the eyes of love: reality and illusion in intimate relationships.", "text": "This article reviews the research literature and theory concerned with accuracy of judgments in romantic relationships. We initially propose a model of cognition in (romantic) relationships that distinguishes between 2 forms of accuracy: mean-level bias and tracking accuracy. We then report the results of meta-analyses of research on heterosexual, romantic relationships, which used external benchmarks and reported levels of tracking accuracy (98 studies) and/or mean-level bias (48 studies). The results revealed robust overall effect sizes for both tracking accuracy (r = .47) and positive mean-level bias (r = .09). As expected, the effects were substantial and positive for tracking accuracy across 6 judgmental categories, whereas signed mean-level bias was negative for the interaction attributions (e.g., love, communication). The results showed, as expected, that these 2 forms of accuracy were independent-the 2 kinds of effect size derived from the same set of 38 studies were uncorrelated. As expected, gender, relationship length, and relationship evaluations moderated mean-level bias across studies but (unexpectedly) not for tracking accuracy. In the Discussion we evaluate the prior model in light of the findings, other research, moderating variables (such as self-esteem), the role of projection, the early stages of mate selection, metacognition, and the rationality and nature of motivated cognition. We conclude that our model, findings, and analyses help to resolve the apparent paradox that love is both riven with illusions and rooted in reality, and support both evolutionary and social psychological approaches to understanding cognition in romantic relationships."}
{"_id": "223ecc1582a59147377fb0da23654cf20b9d4ae3", "title": "Aligning movies with scripts by exploiting temporal ordering constraints", "text": "Scripts provide rich textual annotation of movies, including dialogs, character names, and other situational descriptions. Exploiting such rich annotations requires aligning the sentences in the scripts with the corresponding video frames. Previous work on aligning movies with scripts predominantly relies on time-aligned closed-captions or subtitles, which are not always available. In this paper, we focus on automatically aligning faces in movies with their corresponding character names in scripts without requiring closed-captions/subtitles. We utilize the intuition that faces in a movie generally appear in the same sequential order as their names are mentioned in the script. We first apply standard techniques for face detection and tracking, and cluster similar face tracks together. Next, we apply a generative Hidden Markov Model (HMM) and a discriminative Latent Conditional Random Field (LCRF) to align the clusters of face tracks with the corresponding character names. Our alignment models (especially LCRF) significantly outperform the previous state-of-the-art on two different movie datasets and for a wide range of face clustering algorithms."}
{"_id": "59d28c47c994c043aa970d3d26a0fbe714b19dfa", "title": "Traffic sign detection and recognition for intelligent vehicle", "text": "In this paper, we propose a computer vision based system for real-time robust traffic sign detection and recognition, especially developed for intelligent vehicle. In detection phase, a color-based segmentation method is used to scan the scene in order to quickly establish regions of interest (ROI). Sign candidates within ROIs are detected by a set of Haar wavelet features obtained from AdaBoost training. Then, the Speeded Up Robust Features (SURF) is applied for the sign recognition. SURF finds local invariant features in a candidate sign and matches these features to the features of template images that exist in data set. The recognition is performed by finding out the template image that gives the maximum number of matches. We have evaluated the proposed system on our intelligent vehicle SmartVII. A recognition accuracy of over 90% in real-time has been achieved."}
{"_id": "24575456ce609f0a1b10c2cd87a1d3b454ab533a", "title": "An effective approach for active tracking with a PTZ camera", "text": "The concept of active tracking is presented to simulate the characteristics of human vision in intelligent visual surveillance. The Pan/Tilt/Zoom (PTZ) camera is generally used for active tracking. In this paper, we present a novel and effective approach for active object tracking with a PTZ camera, and construct a near real-time system for indoor and outdoor scenes. The tracking algorithm of our system is based on the feature matching, with the PID control to drive the camera. The feature extracted from moving people is described as a region covariance matrix which combines the spatial and statistical properties of the targets (e.g. coordinates, color, and gradient). Results from indoor and outdoor experiments demonstrate the effectiveness and accuracy of our approach."}
{"_id": "691e9e6f09e8a98b6e81c9d9986605d21c56ca21", "title": "The prerepresentational self and its affective core.", "text": ""}
{"_id": "97fd22c526a9c7463d508c954dd3cea58855dd06", "title": "Autonomous vehicles : Pedestrian heaven or pedestrian hell ?", "text": "Autonomous vehicles are an emerging technology that will play a significant role in shaping transport systems in the decades to come. Consequently, implications are anticipated for societal issues ranging from transport safety to congestion, from energy consumption to the allocation of public and private lands. Driverless technology will undoubtedly influence other transport modes, too. With this in mind, the implications for pedestrians have hardly been researched. If the introduction of driverless cars could ultimately result in the disappearance of on-street parking spaces and in crossing streets without so much as glancing towards approaching cars, this could potentially mean that walking becomes much more attractive. At the same time, one could imagine the opposite outcome if society decides that the ability of being picked up and dropped off at any front door or location imaginable renders walking as a state-supported transport mode superfluous. The purpose of this paper is to discuss the impact of autonomous vehicles on pedestrian transport in all its aspects, including the range of possible outcomes, interdependencies, and opposing effects. With the state-of-the-art of pedestrian transport research as a starting point, how can the needs of pedestrians be taken into consideration when fully autonomous vehicles become ubiquitous in daily life?"}
{"_id": "bb9419100e83f9257b1ba1658b95554783a13141", "title": "Controller design and fault tolerance analysis of 4-phase floating interleaved boost converter for fuel cell electric vehicles", "text": "Owing to the energy shortage and the increasingly serious environmental pollution, fuel cell electric vehicles (FCEV) with zero-emission and high-efficiency have been expected to be the most potential candidate to substitute the conventional vehicles. The DC/DC converter is the interface between the fuel cell (FC) and the driveline of FCEV. It not only needs high voltage gain to convert the wide FC voltage into an appropriate voltage level, but also needs the capacity of fault tolerance to enhance the reliability of the system. For this reason, floating interleaved boost converters (FIBC) seem to be the optimal selection. Despite this topology can continue operating without interruption under the action of the proper control scheme in the case of power switch open circuit fault (OCF), operating in degraded mode has adverse impacts on the component stress and the input current ripple. Hence, this paper aims to design an effective controller to maintain the dc bus voltage constant and to demonstrate thorough theoretical analysis and simulation verification of these undesirable effects."}
{"_id": "7c86dbd467ed0066eb13d1e111c8823a2b043968", "title": "Cross-Genre Author Profile Prediction Using Stylometry-Based Approach", "text": "Author profiling task aims to identify different traits of an author by analyzing his/her written text. This study presents a Stylometry-based approach for detection of author traits (gender and age) for cross-genre author profiles. In our proposed approach, we used different types of stylistic features including 7 lexical features, 16 syntactic features, 26 character-based features and 6 vocabulary richness (total 56 stylistic features). On the training corpus, the proposed approach obtained promising results with an accuracy of 0.787 for gender, 0.983 for age and 0.780 for both (jointly detecting age and gender). On the test corpus, proposed system gave an accuracy of 0.576 for gender, 0.371 for age and 0.256 for both."}
{"_id": "8a9502f8ddf89869e1ae57b43e4b62f889b8df1c", "title": "Building Simple Annotation Tools", "text": "OF THE THESIS Building Simple Annotation Tools"}
{"_id": "bef329a889d593e5df903189782b9bedfb2bd397", "title": "Optical interconnects for extreme scale computing systems", "text": "Large-scale high performance computing is permeating nearly every corner of modern applications spanning from scientific research and business operations, to medical diagnostics, and national security. All these communities rely on computer systems to process vast volumes of data quickly and efficiently, yet progress toward increased computing power has experienced a slowdown in the last number of years. The sheer cost and scale, stemming from the need for extreme parallelism, are among the reasons behind this stall. In particular, very large-scale, ultra-high bandwidth interconnects, essential for maintaining computation performance, represent an increasing portion of the total cost budget. Photonic systems are often cited as ways to break through the energy-bandwidth limitations of conventional electrical wires toward drastically improving interconnect performance. This paper presents an overview of the challenges associated with large-scale interconnects, and reviews how photonic technologies can contribute to addressing these challenges. We review some important aspects of photonics that should not be underestimated in order to truly reap the benefits of cost and power reduction. \u00a9 2017 Elsevier B.V. All rights reserved."}
{"_id": "3813d74ddf2540c06aa48fc42468bd0d97f51708", "title": "Asynchronous Multi-task Learning", "text": "Many real-world machine learning applications involveseveral learning tasks which are inter-related. For example, in healthcare domain, we need to learn a predictive model of a certain disease for many hospitals. The models for each hospital may be different because of the inherent differences in the distributions of the patient populations. However, the models are also closely related because of the nature of the learning tasks modeling the same disease. By simultaneously learning all the tasks, multi-task learning (MTL) paradigm performs inductive knowledge transfer among tasks to improve the generalization performance. When datasets for the learning tasks are stored at different locations, it may not always be feasible to transfer the data to provide a data centralized computing environment due to various practical issues such as high data volume and privacy. In this paper, we propose a principled MTL framework for distributed and asynchronous optimization to address the aforementioned challenges. In our framework, gradient update does not wait for collecting the gradient information from all the tasks. Therefore, the proposed method is very efficient when the communication delay is too high for some task nodes. We show that many regularized MTL formulations can benefit from this framework, including the low-rank MTL for sharedsubspace learning. Empirical studies on both synthetic and realworld datasets demonstrate the efficiency and effectiveness of the proposed framework."}
{"_id": "c301012283549f7866654a81c36c1d9cf1280eb5", "title": "A Survey of Evaluation Methods and Measures for Interpretable Machine Learning", "text": "The need for interpretable and accountable intelligent system gets sensible as artificial intelligence plays more role in human life. Explainable artificial intelligence systems can be a solution by self-explaining the reasoning behind the decisions and predictions of the intelligent system. Researchers from different disciplines work together to define, design and evaluate interpretable intelligent systems for the user. Our work supports the different evaluation goals in interpretable machine learning research by a thorough review of evaluation methodologies used in machine-explanation research across the fields of human-computer interaction, visual analytics, and machine learning. We present a 2D categorization of interpretable machine learning evaluation methods and show a mapping between user groups and evaluation measures. Further, we address the essential factors and steps for a right evaluation plan by proposing a nested model for design and evaluation of explainable artificial intelligence systems."}
{"_id": "734212b92d5afe704d6aaaed5f18fccbe62e2cc7", "title": "BENCHMARKING DEEP LEARNING FRAMEWORKS FOR THE CLASSIFICATION OF VERY HIGH RESOLUTION SATELLITE MULTISPECTRAL DATA", "text": "In this paper we evaluated deep-learning frameworks based on Convolutional Neural Networks for the accurate classification of multispectral remote sensing data. Certain state-of-the-art models have been tested on the publicly available SAT-4 and SAT-6 high resolution satellite multispectral datasets. In particular, the performed benchmark included the AlexNet, AlexNet-small and VGG models which had been trained and applied to both datasets exploiting all the available spectral information. Deep Belief Networks, Autoencoders and other semi-supervised frameworks have been, also, compared. The high level features that were calculated from the tested models managed to classify the different land cover classes with significantly high accuracy rates i.e., above 99.9%. The experimental results demonstrate the great potentials of advanced deep-learning frameworks for the supervised classification of high resolution multispectral remote sensing data."}
{"_id": "d7b2d863ff9067bc90a514db972bedb0f92dd75d", "title": "The economic costs of pain in the United States.", "text": "UNLABELLED\nIn 2008, according to the Medical Expenditure Panel Survey (MEPS), about 100 million adults in the United States were affected by chronic pain, including joint pain or arthritis. Pain is costly to the nation because it requires medical treatment and complicates treatment for other ailments. Also, pain lowers worker productivity. Using the 2008 MEPS, we estimated 1) the portion of total U.S. health care costs attributable to pain; and 2) the annual costs of pain associated with lower worker productivity. We found that the total costs ranged from $560 to $635 billion in 2010 dollars. The additional health care costs due to pain ranged from $261 to $300 billion. This represents an increase in annual per person health care costs ranging from $261 to $300 compared to a base of about $4,250 for persons without pain. The value of lost productivity due to pain ranged from $299 to $335 billion. We found that the annual cost of pain was greater than the annual costs of heart disease ($309 billion), cancer ($243 billion), and diabetes ($188 billion). Our estimates are conservative because they do not include costs associated with pain for nursing home residents, children, military personnel, and persons who are incarcerated.\n\n\nPERSPECTIVE\nThis study estimates that the national cost of pain ranges from $560 to $635 billion, larger than the cost of the nation's priority health conditions. Because of its economic toll on society, the nation should invest in research, education, and training to advocate the successful treatment, management, and prevention of pain."}
{"_id": "55561f291fb12cb86c8fe24dd875a190d2ed6d23", "title": "Towards Automatic Animated Storyboarding", "text": "In this paper, we propose a machine learning-based NLP system for automatically creating animated storyboards using the action descriptions of movie scripts. We focus particularly on the importance of verb semantics when generating graphics commands, and find that semantic role labelling boosts performance and is relatively robust to the effects of"}
{"_id": "be288d94576df608ed7f0840898d99766210a27f", "title": "The characteristics of persistent sexual offenders: a meta-analysis of recidivism studies.", "text": "A meta-analysis of 82 recidivism studies (1,620 findings from 29,450 sexual offenders) identified deviant sexual preferences and antisocial orientation as the major predictors of sexual recidivism for both adult and adolescent sexual offenders. Antisocial orientation was the major predictor of violent recidivism and general (any) recidivism. The review also identified some dynamic risk factors that have the potential of being useful treatment targets (e.g., sexual preoccupations, general self-regulation problems). Many of the variables commonly addressed in sex offender treatment programs (e.g., psychological distress, denial of sex crime, victim empathy, stated motivation for treatment) had little or no relationship with sexual or violent recidivism."}
{"_id": "92429398e0c65e488f7d02b0128f99609eae6a9b", "title": "A Novel Compact UHF RFID Tag Antenna Designed With Series Connected Open Complementary Split Ring Resonator (OCSRR) Particles", "text": "A novel compact planar antenna for passive UHF radio frequency identification (RFID) tags is presented. Instead of using meander-line sections, much smaller open complementary split ring resonator (OCSRR) particles are connected in series to create a small dipole with a conjugate match to the power harvesting circuit on the passive RFID tag. The manufactured (prototype) OCSRR RFID tag presented here has an antenna input impedance of 15.8 + J142.5 \u03a9 at a frequency of 920 MHz and a max read range of 5.48 m. This performance is achieved with overall tag dimensions of 0.036\u03bb0 \u00d7 0.17\u03bb0 where \u03bb0 is the free space wavelength of 920 MHz."}
{"_id": "7b6620f2b4e507a0bea226837a017e45d8267f4f", "title": "Path Planning Based on B\u00e9zier Curve for Autonomous Ground Vehicles", "text": "In this paper we present two path planning algorithms based on Bezier curves for autonomous vehicles with way points and corridor constraints. Bezier curves have useful properties for the path generation problem. The paper describes how the algorithms apply these properties to generate the reference trajectory for vehicles to satisfy the path constraints. Both algorithms join cubic Bezier curve segments smoothly to generate the path. Additionally, we discuss the constrained optimization problem that optimizes the resulting path for a user-defined cost function. The simulation shows the generation of successful routes for autonomous vehicles using these algorithms as well as control results for a simple kinematic vehicle. Extensions of these algorithms towards navigating through an unstructured environment with limited sensor range are discussed."}
{"_id": "288dc26897865d428012c8c94b7cfe2edef55f30", "title": "A multi-DoF anthropomorphic transradial prosthetic arm", "text": "An anthropomorphic transradial prosthetic arm is proposed in this paper. In order to generate the wrist flexion/extension and ulna/radial deviation, a novel wrist mechanism is proposed based on the parallel prismatic manipulators. It is expected to realize high speed operation, higher positional accuracy and anthropomorphic features using the proposed mechanism. The prosthetic arm consists of an under-actuated hand as the terminal device. The hand mechanism is capable of providing the grasping adaptation. With the intention of verifying the effectiveness of the mechanisms in motion generation, motion simulation and kinematic analysis are carried out."}
{"_id": "4f31ab4f8a8c0fe29f39bafdc93cda5e485d0855", "title": "SemEval-2007 Task 02: Evaluating Word Sense Induction and Discrimination Systems", "text": "Word Sense Disambiguation (WSD) is a key enabling-technology. Supervised WSD techniques are the best performing in public evaluations, but need large amounts of hand-tagging data. Existing hand-annotated corpora like SemCor (Miller et al., 1993), which is annotated with WordNet senses (Fellbaum, 1998) allow for a small improvement over the simple most frequent sense heuristic, as attested in the all-words track of the last Senseval competition (Snyder and Palmer, 2004). In theory, larger amounts of training data (SemCor has approx. 500M words) would improve the performance of supervised WSD, but no current project exists to provide such an expensive resource. Another problem of the supervised approach is that the inventory and distribution of senses changes dramatically from one domain to the other, requiring additional hand-tagging of corpora (Mart\u0131\u0301nez and Agirre, 2000; Koeling et al., 2005). Supervised WSD is based on the \u201cfixed-list of senses\u201d paradigm, where the senses for a target word are a closed list coming from a dictionary or lexicon. Lexicographers and semanticists have long warned about the problems of such an approach, where senses are listed separately as discrete entities, and have argued in favor of more complex representations, where, for instance, senses are dense regions in a continuum (Cruse, 2000). Unsupervised Word Sense Induction and Discrimination (WSID, also known as corpus-based unsupervised systems) has followed this line of thinking, and tries to induce word senses directly from the corpus. Typical WSID systems involve clustering techniques, which group together similar examples. Given a set of induced clusters (which represent word uses or senses1), each new occurrence of the target word will be compared to the clusters and the most similar cluster will be selected as its sense. One of the problems of unsupervised systems is that of managing to do a fair evaluation. Most of current unsupervised systems are evaluated in-house, with a brief comparison to a re-implementation of a former system, leading to a proliferation of unsupervised systems with little ground to compare among them. The goal of this task is to allow for comparison across sense-induction and discrimination systems, and also to compare these systems to other supervised and knowledge-based systems. The paper is organizes as follows. Section 2 presents the evaluation framework used in this task. Section 3 presents the systems that participated in the task, and the official results. Finally, Section 4 draws the conclusions."}
{"_id": "accbdecb0ff697f52c608bce57a255b8201a6559", "title": "Fossils, genes and the evolution of animal limbs", "text": "The morphological and functional evolution of appendages has played a crucial role in the adaptive radiation of tetrapods, arthropods and winged insects. The origin and diversification of fins, wings and other structures, long a focus of palaeontology, can now be approached through developmental genetics. Modifications of appendage number and architecture in each phylum are correlated with regulatory changes in specific patterning genes. Although their respective evolutionary histories are unique, vertebrate, insect and other animal appendages are organized by a similar genetic regulatory system that may have been established in a common ancestor."}
{"_id": "0946163c1464c18b52d8f7783e0b984cd18b4655", "title": "The DL-Lite Family and Relations", "text": "The recently introduced series of description logics under the common moniker \u2018DLLite\u2019 has attracted attention of the description logic and semantic web communities due to the low computational complexity of inference, on the one hand, and the ability to represent conceptual modeling formalisms, on the other. The main aim of this article is to carry out a thorough and systematic investigation of inference in extensions of the original DL-Lite logics along five axes: by (i) adding the Boolean connectives and (ii) number restrictions to concept constructs, (iii) allowing role hierarchies, (iv) allowing role disjointness, symmetry, asymmetry, reflexivity, irreflexivity and transitivity constraints, and (v) adopting or dropping the unique name assumption. We analyze the combined complexity of satisfiability for the resulting logics, as well as the data complexity of instance checking and answering positive existential queries. Our approach is based on embedding DL-Lite logics in suitable fragments of the one-variable first-order logic, which provides useful insights into their properties and, in particular, computational behavior."}
{"_id": "8b3cb3d5dd580bcccd079edd9b47e20e45dfdec3", "title": "Generalized schema-mappings: from termination to tractability", "text": "Data-Exchange is the problem of creating new databases according to a high-level specification called a schema-mapping while preserving the information encoded in a source database. This paper introduces a notion of generalized schema-mapping that enriches the standard schema-mappings (as defined by Fagin et al) with more expressive power. It then proposes a more general and arguably more intuitive notion of semantics that rely on three criteria: Soundness, Completeness and Laconicity (non-redundancy and minimal size). These semantics are shown to coincide precisely with the notion of cores of universal solutions in the framework of Fagin, Kolaitis and Popa. It is also well-defined and of interest for larger classes of schema-mappings and more expressive source databases (with null-values and equality constraints). After an investigation of the key properties of generalized schema-mappings and their semantics, a criterion called Termination of the Oblivious Chase (TOC) is identified that ensures polynomial data-complexity. This criterion strictly generalizes the previously known criterion of Weak-Acyclicity. To prove the tractability of TOC schema-mappings, a new polynomial time algorithm is provided that, unlike the algorithm of Gottlob and Nash from which it is inspired, does not rely on the syntactic property of Weak-Acyclicity. As the problem of deciding whether a Schema-mapping satisfies the TOC criterion is only recursively enumerable, a more restrictive criterion called Super-weak Acylicity (SwA) is identified that can be decided in Polynomial-time while generalizing substantially the notion of Weak-Acyclicity."}
{"_id": "ea2e47ce45ee31fe3abb139f1515e57c8cf16dc4", "title": "Ontology-Based Data Access: Ontop of Databases", "text": "We present the architecture and technologies underpinning the OBDA system Ontop and taking full advantage of storing data in relational databases. We discuss the theoretical foundations of Ontop: the tree-witness query rewriting, T -mappings and optimisations based on database integrity constraints and SQL features. We analyse the performance of Ontop in a series of experiments and demonstrate that, for standard ontologies, queries and data stored in relational databases, Ontop is fast, efficient and produces SQL rewritings of high quality."}
{"_id": "407748e97d8d3878535f6371ad324708915bf6d9", "title": "Linking Data to Ontologies", "text": "Many organizations nowadays face the problem of accessing existing data sources by means of flexible mechanisms that are both powerful and efficient. Ontologies are widely considered as a suitable formal tool for sophisticated data access. The ontology expresses the domain of interest of the information system at a high level of abstraction, and the relationship between data at the sources and instances of concepts and roles in the ontology is expressed by means of mappings. In this paper we present a solution to the problem of designing effective systems for ontology-based data access. Our solution is based on three main ingredients. First, we present a new ontology language, based on Description Logics, that is particularly suited to reason with large amounts of instances. The second ingredient is a novel mapping language that is able to deal with the so-called impedance mismatch problem, i.e., the problem arising from the difference between the basic elements managed by the sources, namely data, and the elements managed by the ontology, namely objects. The third ingredient is the query answering method, that combines reasoning at the level of the ontology with specific mechanisms for both taking into account the mappings and efficiently accessing the data at the sources."}
{"_id": "6b23b81b1a9ef6aeaef872ca322b8f06ff2932b6", "title": "Data Integration: A Theoretical Perspective", "text": "Data integration is the problem of combining data residing at different sources, and providing the user with a unified view of these data. The problem of designing data integration systems is important in current real world applications, and is characterized by a number of issues that are interesting from a theoretical point of view. This document presents on overview of the material to be presented in a tutorial on data integration. The tutorial is focused on some of the theoretical issues that are relevant for data integration. Special attention will be devoted to the following aspects: modeling a data integration application, processing queries in data integration, dealing with inconsistent data sources, and reasoning on queries."}
{"_id": "3e8dc933e24126e95bf1394528744e5864044d07", "title": "Self-Adaptive Systems in Organic Computing: Strategies for Self-Improvement", "text": "With the intensified use of intelligent things, the demands on the technological systems are increasing permanently. A possible approach to meet the continuously changing challenges is to shift the system integration from design to run-time by using adaptive systems. Diverse adaptivity properties \u2013 so-called self* properties \u2013 form the basis of these systems and one of the properties is self-improvement. It describes the ability of a system not only to adapt to a changing environment according to a predefined model, but also the capability to adapt the adaptation logic of the whole system. In this paper, a closer look is taken at the structure of self-adaptive systems. Additionally, the systems\u2019 ability to improve themselves during run-time is described from the perspective of Organic Computing. Furthermore, four different strategies for self-improvement are presented, following the taxonomy of self-adaptation suggested by Christian Krupitzer et al."}
{"_id": "d48e29fc2c6a8663b35899d5fa98caa13f8f9fef", "title": "Design and Evaluation of a High-Frequency LTCC Inductor Substrate for a Three-Dimensional Integrated DC/DC Converter", "text": "High operating frequency and integration technique are two main approaches to achieve high power density for the switching mode power supply. The emerging gallium nitride (GaN)-based power device enables a multimegahertz high-efficiency point-of-load (POL) converter with high current capability. The low-temperature cofire ceramic (LTCC)-based integration technique successfully extends the three-dimensional (3-D) integrated POL module from the low current level to the high current level (>10 A). This paper presents the low-profile LTCC inductor substrate design and evaluation for a multimegahertz 3-D integrated POL converter with large output current. The detailed study about the impact of frequency on the LTCC inductor shows that the high frequency not only shrinks the volume of the inductor, but also simplifies the inductor structure. The comparison between the LTCC inductor and the discrete inductor demonstrates that the LTCC inductor dramatically boosts the converter light-load efficiency due to its nonlinear inductance. Because of the low-profile design, the power density of the single-phase POL module with LTCC inductor achieves 1.1 kW/in3 at 5 MHz. The performance of the LTCC inductor can be further improved by the inverse coupling, which results in more than 40% core thickness and core loss reduction. Therefore, the power density of a two-phase integrated POL module is pushed to 1.5 kW/in3, which is around ten times of the power density of state-of-the-art industry products with the same current level."}
{"_id": "0d1e474186099e124544686c5f82cfc5f38339b7", "title": "Which Predictability Measures Affect Content Word Durations", "text": "The pronunciation of a word can vary widely, and many factors are known to affect this variation. This paper focuses on the role of predictability on word duration. Previous research has suggested that more frequent words are shorter, as are words which are more predictable from neighboring words. This research has tended to focus only on extremely high frequency function words; previous research on content words has not been able to examine natural speech and control for key confounding factors like accent and rate of speech. We examined 1401 content words from the Switchboard corpus and studied the role of word frequency, conditional and joint probabilities with neighboring words and a measure of word semantic association, Latent Semantic Analysis. Using multiple regression to control for confounding factors, we show that two predictability variables principally affect a word\u2019s duration\u2014word frequency, and the conditional probability of a word given the following word. Other predictability variables do not make an additional significant contribution. We discuss the implications for pronunciation modeling."}
{"_id": "40446df47af60cbf989b216f556a64f63b3dbc79", "title": "A GENETIC ALGORITHM FOR STRUCTURAL OPTIMIZATION OF STEEL TRUSS ROOFS", "text": "A common structural design problem is the weight minimization of structures which is formulated selecting a set of design variables that represent the structural and architectural configuration of the system. The structures are usually subject to stress and displacement constraints and the design variables can be continuous or discrete. In practice, it is often desirable to choose design variables (such as cross-sectional areas) from commercially available sizes. The use of a continuous optimization procedure \u2013 although usually more straightforward \u2013 will lead to non-available sizes and any attempt to substitute those values by the closest available commercial sizes can make the design unfeasible or unnecessarily heavier. In this paper a genetic algorithm is proposed to evolve the structural configuration for weight minimization of industrial buildings, with rectangular geometry projection, made up of uniform planar structures (steel truss roofs) along the longitudinal direction which are interconnected by purlins. The planar structures are trusses and their number, shape, and topology are allowed to change during the optimization process."}
{"_id": "24befe76fbd4fa47f4650d14ece3c7d1f562ee10", "title": "C-OPT: Coverage-Aware Trajectory Optimization Under Uncertainty", "text": "We introduce a new problem of continuous, coverage-aware trajectory optimization under localization and sensing uncertainty. In this problem, the goal is to plan a path from a start state to a goal state that maximizes the coverage of a user-specified region while minimizing the control costs of the robot and the probability of collision with the environment. We present a principled method for quantifying the coverage sensing uncertainty of the robot. We use this sensing uncertainty along with the uncertainty in robot localization to develop C-OPT, a coverage-optimization algorithm which optimizes trajectories over belief-space to find locally optimal coverage paths. We highlight the applicability of our approach in multiple simulated scenarios inspired by surveillance, UAV crop analysis, and search-and-rescue tasks. We also present a case study on a physical, differential-drive robot. We also provide quantitative and qualitative analysis of the paths generated by our approach."}
{"_id": "064bc62b407ad0fd4715b44884daa5d0d5e68aac", "title": "Compact Ultrawideband MIMO Antenna With Half-Slot Structure", "text": "In this letter, a multiple-input\u2013multiple-output (MIMO) antenna with very compact size of only $\\text{23}\\times \\text{18}\\; \\text{mm}^{2}$ is proposed for ultrawideband (UWB) applications. The proposed MIMO antenna consists of two symmetrical half-slot antenna elements with coplanar waveguide-fed structures and a Y-shaped slot that is cut at the bottom center of the common ground plane. The slot efficiently prevents the current from directly flowing between two ports at low UWB frequency. For such a compact-size antenna, the ground plane works as a radiator as well as a reflector that reflects the radiation from radiators at high frequency. The measured impedance bandwidth for $S_{11}, S_{22}$ < \u221210\u00a0dB is from 3 to 12.4\u00a0GHz. The measured mutual coupling for $S_{12}, S_{21}$ < \u221215\u00a0dB is from 3 to 4\u00a0GHz and for $S_{12}, S_{21}$ < \u221220\u00a0dB is from 4 to 12.4\u00a0GHz. The proposed antenna also contains relatively stable radiation patterns and gains. These performances indicate that the proposed antenna is one competitive candidate for UWB applications."}
{"_id": "adfd7cdb7a5259034a10395a6e58bdb382214f0d", "title": "AUV SLAM using forward/downward looking cameras and artificial landmarks", "text": "Autonomous underwater vehicles (AUVs) are usually equipped with one or more optical cameras to obtain visual data of underwater environments. The camera can also be used to estimate the AUV's pose information, along with various navigation sensors such as inertial measurement unit (IMU), Doppler velocity log (DVL), depth sensor, and so on. In this paper, we propose a vision-based simultaneous localization and mapping (SLAM) of AUVs, where underwater artificial landmarks are used to help visual sensing of forward and downward looking cameras. Three types of landmarks are introduced and their detection algorithms are organized in a framework of conventional extended Kalman filter (EKF) SLAM to estimate both robot and landmark states. The proposed method is validated by an experiment performed in a engineering basin. Since DVL suffers from noises in a real ocean environment, we generated synthetic noisy data based on the real sensor data. With this data we verify that the proposed SLAM approach can recover from the erroneous dead reckoning position."}
{"_id": "73e1f7df789390c7f135a15dba50f3c3ee517578", "title": "A 1.8V CMOS chopper four-quadrant analog multiplier", "text": "A 1.8V CMOS chopper four-quadrant analog multiplier, intending to serve as an autonomous IC block for low-frequency signal processing, is presented. Particular emphasis is laid upon achieving low output noise by means of chopper stabilization, while the multiplier's operation is based on the MOS Translinear Principle. The proposed design has been implemented and simulated in TSMC 0.18 \u00b5m CMOS process."}
{"_id": "ede851351f658426e77c72e7d1989dda970c995a", "title": "A broadband compact low-loss 4\u00d74 Butler Matrix in CMOS with stacked transformer based quadrature couplers", "text": "This paper presents an ultra-broadband ultracompact Butler Matrix design scheme. The design employs stacked transformer based couplers and lumped LC \u03c0-network phase shifters for substantial size reduction. As a proof-of-concept design, a 4\u00d74 Butler Matrix is implemented in a standard 130nm bulk CMOS process at a center frequency of 2.0 GHz. Compared with reported fully integrated 2.0 GHz 4\u00d74 Butler Matrix designs in CMOS, the proposed design achieves the lowest insertion loss of 1.10dB, the smallest amplitude mismatch of 0.3 dB, the largest fractional bandwidth of 34.6%, and the smallest chip core area of 0.635\u00d71.122 mm2. Based on the measured S-parameters, the four concurrent electrical array patterns of the Butler Matrix achieve array peak-to-null ratio (PNR) of 29.5 dB at 2.0 GHz and better than 15.0 dB between 1.55 GHz and 2.50 GHz."}
{"_id": "1470c07ac3b1f60f640bb70f8c64b4d7fcc50f69", "title": "A high accuracy solver for RTE in underwater optical communication path loss prediction", "text": "In this paper, we present a new improved numerical framework to evaluate the time-dependent radiative transfer equation (RTE) for underwater optical wireless communication (UOWC) systems. The RTE predicts the optical path-loss of light in an underwater channel, as a function of the inherent optical properties (IOPs) related to the water type, namely the absorption and scattering coefficients as well as the phase scattering function (PSF). We reach the simulation performance based on an improvement of the finite difference scheme proposed in [1] as well as an enhancement of the quadrature method aiming to calculate the integral term of the RTE [2]. Additionally, we evaluate the received power at the receiver plane in three dimensions by considering a given receiver aperture and a field of view (FOV). Finally, we evaluate the UOWC system's bit error rate performance metric as a function of the propagation distance, and time."}
{"_id": "eba845e239e3f96d0dd9d7bd77f7c65464cf0353", "title": "Interlocking obfuscation for anti-tamper hardware", "text": "Tampering and Reverse Engineering of a chip to extract the hardware Intellectual Property (IP) core or to inject malicious alterations is a major concern. Digital systems susceptible to tampering are of immense concern to defense organizations. First, offshore chip manufacturing allows the design secrets of the IP cores to be transparent to the foundry and other entities along the production chain. Second, small malicious modifications to the design may not be detectable after fabrication without anti-tamper mechanisms. Some techniques have been developed in the past to improve the defense against such attacks but they tend to fall prey to the increasing power of the attacker. We present a new way to protect against tampering by a clever obfuscation of the design, which can be unlocked with a specific, dynamic path traversal. Hence, the functional mode of the controller is hidden with the help of obfuscated states, and the functional mode is made operational only on the formation of a specific interlocked Code-Word during state transition. No comparator is needed as the obfuscation is embedded within the transition function of the state machine itself. A side benefit is that any small alteration will be magnified via the obfuscated design. In other words, an alteration to the design will manifest itself as a large difference in the circuit's functionality. Experimental results on an Advanced Encryption Standard (AES) circuit from the open-source IP-cores suite suggest that the proposed method provides better active defense mechanisms against attacks with nominal (7.8%) area overhead."}
{"_id": "ea01e636742f563d1376da8e00d1699733a6faf0", "title": "A printed antenna array fed by balanced Schiffman shifter used for UWB location system", "text": "A double-sided printed bow-tie antenna array fed by balanced Schiffman phase shifter, which used as a reader antenna in ultra-wideband (UWB) location system, is proposed in this paper. The presented antenna has a impedance bandwidth of 3.4-4.4 GHz and a high gain of more than 5dB within the whole bandwidth. Good omnidirectional radiation pattern is obtained in the plane of \u03b8 = 15\u00b0. Good simulation results are achieved by HFSS."}
{"_id": "295fb635d777568c0bd05f912f97cfb6c56b8b34", "title": "Building event-centric knowledge graphs from news", "text": "Knowledge graphs have gained increasing popularity in the past couple of years, thanks to their adoption in everyday search engines. Typically, they consist of fairly static and encyclopedic facts about persons and organizations \u2013 e.g. a celebrity\u2019s birth date, occupation and family members \u2013 obtained from large repositories such as Freebase or Wikipedia. In this paper, we present a method and tools to automatically build knowledge graphs from news articles. As news articles describe changes in the world through the events they report, we present an approach to create Event-Centric Knowledge Graphs (ECKGs) using state-of-the-art natural language processing and semantic web techniques. Such ECKGs capture long-term developments and histories on hundreds of thousands of entities and are complementary to the static encyclopedic information in traditional knowledge graphs. We describe our event-centric representation schema, the challenges in extracting event information from news, our open source pipeline, and the knowledge graphs we have extracted from four different news corpora: general news (Wikinews), the FIFA world cup, the Global Automotive Industry, and Airbus A380 airplanes. Furthermore, we present an assessment on the accuracy of the pipeline in extracting the triples of the knowledge graphs. Moreover, through an event-centered browser and visualization tool we show how approaching information from news in an event-centric manner can increase the user\u2019s understanding of the domain, facilitates the reconstruction of news story lines, and enable to perform exploratory investigation of news hidden facts. \u00a9 2016 Elsevier B.V. All rights reserved."}
{"_id": "1dbc48d9ed4e987b7d26796922e7442a9907fd31", "title": "Information Bottleneck Inspired Method For Chat Text Segmentation", "text": "We present a novel technique for segmenting chat conversations using the information bottleneck method (Tishby et al., 2000), augmented with sequential continuity constraints. Furthermore, we utilize critical non-textual clues such as time between two consecutive posts and people mentions within the posts. To ascertain the effectiveness of the proposed method, we have collected data from public Slack conversations and Fresco, a proprietary platform deployed inside our organization. Experiments demonstrate that the proposed method yields an absolute (relative) improvement of as high as 3.23% (11.25%). To facilitate future research, we are releasing manual annotations for segmentation on public Slack conversations."}
{"_id": "18d15d46cd7d888195092c1b53da7b7c90b4a2f5", "title": "Neurofeedback treatment for attention-deficit/hyperactivity disorder in children: a comparison with methylphenidate.", "text": "Clinical trials have suggested that neurofeedback may be efficient in treating attention-deficit/hyperactivity disorder (ADHD). We compared the effects of a 3-month electroencephalographic feedback program providing reinforcement contingent on the production of cortical sensorimotor rhythm (12-15 Hz) and betal activity (15-18 Hz) with stimulant medication. Participants were N = 34 children aged 8-12 years, 22 of which were assigned to the neurofeedback group and 12 to the methylphenidate group according to their parents' preference. Both neurofeedback and methylphenidate were associated with improvements on all subscales of the Test of Variables of Attention, and on the speed and accuracy measures of the d2 Attention Endurance Test. Furthermore, behaviors related to the disorder were rated as significantly reduced in both groups by both teachers and parents on the IOWA-Conners Behavior Rating Scale. These findings suggest that neurofeedback was efficient in improving some of the behavioral concomitants of ADHD in children whose parents favored a nonpharmacological treatment."}
{"_id": "9b05437b69e5b67bc2a34c76c9521d5fb8d85080", "title": "An ultra-wideband absorber based on multiple resonators in 3-D frequency-selective structure", "text": "An ultra-wideband absorber based on employing multiple resonators in the unit cell of a 3-D frequency-selective structure is proposed. The original design consists of two-dimensional periodic array of three-dimensional unit cells and lumped components to provide matching with free space and absorb the incident electromagnetic wave. By introducing one more resonator in the unit cell, an ultra-wideband absorber can be constructed. It has a fractional bandwidth of 155.7 % for \u221210 dB of reflection coefficient under the normal incidence with an absorber thickness of 0.12 \u03bb0 at the lowest absorption frequency. This absorber works for TE polarization at which two quasi-TEM modes are excited in the air and substrate regions. The proposed absorber shows a stable performance under oblique angles of incidence."}
{"_id": "111e964951c7e3d068860237d462111f768781d7", "title": "Electrical and thermal processes of HEV induction machines taking into account stator winding form", "text": "Internal combustion engine start processes in Hybrid Electric Vehicles (HEVs) are accompanied by large currents in the windings of electrical machines. It can lead to insulation overheating and fault operation mode. Induction squirrel-cage machine is one of the best AC machines for HEVs. The high efficiency of HEV induction machine with compact stator winding form is proposed. Its stator electrical and thermal processes are considered. Irregular cross-section of stator winding leads to irregular current density in particular winding parts. To decrease temperature gradients within induction machine and improve heat transfer, air gap between overhang parts and stator should be increased. Numerical and computer models for researching are based on finite element method."}
{"_id": "2b8228ac424cb8ee964f6880c21b9e49d3be869c", "title": "TRESNEI, a Matlab trust-region solver for systems of nonlinear equalities and inequalities", "text": "The Matlab implementation of a trust-region Gauss-Newton method for boundconstrained nonlinear least-squares problems is presented. The solver, called TRESNEI, is adequate for zero and small-residual problems and handles the solution of nonlinear systems of equalities and inequalities. The structure and the usage of the solver are described and an extensive numerical comparison with functions from the Matlab Optimization Toolbox is carried out."}
{"_id": "33b04c2ca92aac756b221e96c1d2b4b714cca409", "title": "A wearable heart rate belt for ambulant ECG monitoring", "text": "Long-term ECG monitoring is desirable in many daily healthcare situations where a wearable device that can continuously record ECG signals is needed. In this work, we proposed a wearable heart rate belt for ambulant ECG monitoring which can be comfortably worn on the chest or the waist. Active textile electrodes were designed for ECG recording. And a battery powered circuit board was developed consisting of ECG signal conditioning circuits, a 3-axis accelerator for body motion detection, a 12-bit AD converter, a DSP for signal processing and a SD card for data storage. The system also includes a wireless communication module that can transmit heart rate data to a sport watch for displaying. Experiments were carried out which shows that the proposed system is unobtrusive and can be comfortably worn by the user during daily activities. When wearing on the waist, ECG signals with reasonably good quality were acquired in rest and walking situations. The proposed system shows promise for long-term ambulant ECG monitoring."}
{"_id": "8c250dfbdeb96976a3f8be1a90f7052c5d8d9130", "title": "Effects of elastic therapeutic taping on motor function in children with motor impairments: a systematic review.", "text": "BACKGROUND\nThe elastic therapeutic taping has been considered a promising resource for disabled children.\n\n\nOBJECTIVE\nTo systematically review the evidence of the effects of elastic therapeutic taping on motor function in children with motor impairments.\n\n\nMETHOD\nThree independent evaluators conducted searches in electronic databases (MEDLINE/PubMed, Scopus, LILACS, BIREME/BVS, Science Direct, SciELO, and PEDro). Clinical studies design, published until 2016, involving elastic therapeutic taping and children aged 0-12 years with motor impairments were included. The variables considered were the methodological aspects (study design, participants, outcome measurements, and experimental conditions); results presented in the studies, and also the methodological quality of studies.\n\n\nRESULTS\nFinal selection was composed by 12 manuscripts (five randomized controlled trials), published in the last 10 years. Among them, cerebral palsy (CP) was the most recurrent disorder (n\u2009=\u20097), followed by congenital muscular torticollis (n\u2009=\u20092) and brachial plexus palsy (n\u2009=\u20092). Positive results were associated with taping application: improvement in the upper limb function, gross motor skills, postural control, muscular balance, and performance in the dynamics functional and daily activities.\n\n\nLIMITATIONS\nLower quality of the studies, clinical and population heterogeneity existed across studies.\n\n\nCONCLUSIONS\nThe elastic therapeutic taping has been shown to be a promising adjunct resource to the conventional rehabilitation in children with motor impairments. However, high methodological studies about its efficacy in this population are already scarce. Implications for Rehabilitation Elastic therapeutic taping has been shown to be a promising adjunct resource to the conventional rehabilitation in disabled children. Clinical trials have indicated improvement in the postural control and functional activities with both, upper and lower limbs, and increase in the functional independency resulting from the taping use. Randomized control trials and well-established protocols are needed to increase the confidence in applying elastic therapeutic taping to specific clinical conditions."}
{"_id": "845111f92b5719197a74d20dd0e050c65d4b8635", "title": "Novelty detection for time series data analysis in water distribution systems using support vector machines", "text": "The sampling frequency and quantity of time series data collected from water distribution systems has been increasing in recent years, giving rise to the potential for improving system knowledge if suitable automated techniques can be applied, in particular, machine learning. Novelty (or anomaly) detection refers to the automatic identification of novel or abnormal patterns embedded in large amounts of \u2018\u2018normal\u2019\u2019 data. When dealing with time series data (transformed into vectors), this means abnormal events embedded amongst many normal time series points. The support vector machine is a data-driven statistical technique that has been developed as a tool for classification and regression. The key features include statistical robustness with respect to non-Gaussian errors and outliers, the selection of the decision boundary in a principled way, and the introduction of nonlinearity in the feature space without explicitly requiring a nonlinear algorithm by means of kernel functions. In this research, support vector regression is used as a learning method for anomaly detection from water flow and pressure time series data. No use is made of past event histories collected through other information sources. The support vector regression methodology, whose robustness derives from the training error function, is applied"}
{"_id": "1bdcf9e7a9a5e10acebb0ff37656badb6abadc4c", "title": "Domain Adaptation From Multiple Sources: A Domain-Dependent Regularization Approach", "text": "In this paper, we propose a new framework called domain adaptation machine (DAM) for the multiple source domain adaption problem. Under this framework, we learn a robust decision function (referred to as target classifier) for label prediction of instances from the target domain by leveraging a set of base classifiers which are prelearned by using labeled instances either from the source domains or from the source domains and the target domain. With the base classifiers, we propose a new domain-dependent regularizer based on smoothness assumption, which enforces that the target classifier shares similar decision values with the relevant base classifiers on the unlabeled instances from the target domain. This newly proposed regularizer can be readily incorporated into many kernel methods (e.g., support vector machines (SVM), support vector regression, and least-squares SVM (LS-SVM)). For domain adaptation, we also develop two new domain adaptation methods referred to as FastDAM and UniverDAM. In FastDAM, we introduce our proposed domain-dependent regularizer into LS-SVM as well as employ a sparsity regularizer to learn a sparse target classifier with the support vectors only from the target domain, which thus makes the label prediction on any test instance very fast. In UniverDAM, we additionally make use of the instances from the source domains as Universum to further enhance the generalization ability of the target classifier. We evaluate our two methods on the challenging TRECIVD 2005 dataset for the large-scale video concept detection task as well as on the 20 newsgroups and email spam datasets for document retrieval. Comprehensive experiments demonstrate that FastDAM and UniverDAM outperform the existing multiple source domain adaptation methods for the two applications."}
{"_id": "5d9a3036181676e187c9c0ff995d8bed1db3557d", "title": "Adapting Visual Category Models to New Domains", "text": "Domain adaptation is an important emerging topic in computer vision. In this paper, we present one of the first studies of domain shift in the context of object recognition. We introduce a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution. The transformation is learned in a supervised manner and can be applied to categories for which there are no labeled examples in the new domain. While we focus our evaluation on object recognition tasks, the transform-based adaptation technique we develop is general and could be applied to non-image data. Another contribution is a new multi-domain object database, freely available for download. We experimentally demonstrate the ability of our method to improve recognition on categories with few or no target domain labels and moderate to large changes in the imaging conditions."}
{"_id": "818e627defcd36477b6d39ec5fc4d02a479e7c37", "title": "Trends in extreme learning machines: A review", "text": "Extreme learning machine (ELM) has gained increasing interest from various research fields recently. In this review, we aim to report the current state of the theoretical research and practical advances on this subject. We first give an overview of ELM from the theoretical perspective, including the interpolation theory, universal approximation capability, and generalization ability. Then we focus on the various improvements made to ELM which further improve its stability, sparsity and accuracy under general or specific conditions. Apart from classification and regression, ELM has recently been extended for clustering, feature selection, representational learning and many other learning tasks. These newly emerging algorithms greatly expand the applications of ELM. From implementation aspect, hardware implementation and parallel computation techniques have substantially sped up the training of ELM, making it feasible for big data processing and real-time reasoning. Due to its remarkable efficiency, simplicity, and impressive generalization performance, ELM have been applied in a variety of domains, such as biomedical engineering, computer vision, system identification, and control and robotics. In this review, we try to provide a comprehensive view of these advances in ELM together with its future perspectives."}
{"_id": "9ccca698224219cf48068a0d1df111ce209a0ebe", "title": "Semi-Supervised and Unsupervised Extreme Learning Machines", "text": "Extreme learning machines (ELMs) have proven to be efficient and effective learning mechanisms for pattern classification and regression. However, ELMs are primarily applied to supervised learning problems. Only a few existing research papers have used ELMs to explore unlabeled data. In this paper, we extend ELMs for both semi-supervised and unsupervised tasks based on the manifold regularization, thus greatly expanding the applicability of ELMs. The key advantages of the proposed algorithms are as follows: 1) both the semi-supervised ELM (SS-ELM) and the unsupervised ELM (US-ELM) exhibit learning capability and computational efficiency of ELMs; 2) both algorithms naturally handle multiclass classification or multicluster clustering; and 3) both algorithms are inductive and can handle unseen data at test time directly. Moreover, it is shown in this paper that all the supervised, semi-supervised, and unsupervised ELMs can actually be put into a unified framework. This provides new perspectives for understanding the mechanism of random feature mapping, which is the key concept in ELM theory. Empirical study on a wide range of data sets demonstrates that the proposed algorithms are competitive with the state-of-the-art semi-supervised or unsupervised learning algorithms in terms of accuracy and efficiency."}
{"_id": "3acabf9198bbb6a050ff6adf7cc70940de6da127", "title": "Faster reinforcement learning after pretraining deep networks to predict state dynamics", "text": "Deep learning algorithms have recently appeared that pretrain hidden layers of neural networks in unsupervised ways, leading to state-of-the-art performance on large classification problems. These methods can also pretrain networks used for reinforcement learning. However, this ignores the additional information that exists in a reinforcement learning paradigm via the ongoing sequence of state, action, new state tuples. This paper demonstrates that learning a predictive model of state dynamics can result in a pretrained hidden layer structure that reduces the time needed to solve reinforcement learning problems."}
{"_id": "451eed7fd8ae281d1cc76ca8cdecbaf47816e55a", "title": "Close Yet Distinctive Domain Adaptation", "text": "Domain adaptation is transfer learning which aims to generalize a learning model across training and testing data with different distributions. Most previous research tackle this problem in seeking a shared feature representation between source and target domains while reducing the mismatch of their data distributions. In this paper, we propose a close yet discriminative domain adaptation method, namely CDDA, which generates a latent feature representation with two interesting properties. First, the discrepancy between the source and target domain, measured in terms of both marginal and conditional probability distribution via Maximum Mean Discrepancy is minimized so as to attract two domains close to each other. More importantly, we also design a repulsive force term, which maximizes the distances between each label dependent sub-domain to all others so as to drag different class dependent sub-domains far away from each other and thereby increase the discriminative power of the adapted domain. Moreover, given the fact that the underlying data manifold could have complex geometric structure, we further propose the constraints of label smoothness and geometric structure consistency for label propagation. Extensive experiments are conducted on 36 cross-domain image classification tasks over four public datasets. The Comprehensive results show that the proposed method consistently outperforms the state-of-the-art methods with significant margins. \u2217These first two authors contributed equally."}
{"_id": "5d5319e1133ee105d83e07da567ee288b7851d09", "title": "Alterations of the human gut microbiome in liver cirrhosis", "text": "Liver cirrhosis occurs as a consequence of many chronic liver diseases that are prevalent worldwide. Here we characterize the gut microbiome in liver cirrhosis by comparing 98 patients and 83 healthy control individuals. We build a reference gene set for the cohort containing 2.69\u00a0million genes, 36.1% of which are novel. Quantitative metagenomics reveals 75,245 genes that differ in abundance between the patients and healthy individuals (false discovery rate\u00a0<\u00a00.0001) and can be grouped into 66 clusters representing cognate bacterial species; 28 are enriched in patients and 38 in control individuals. Most (54%) of the patient-enriched, taxonomically assigned species are of buccal origin, suggesting an invasion of the gut from the mouth in liver cirrhosis. Biomarkers specific to liver cirrhosis at gene and function levels are revealed by a comparison with those for type 2 diabetes and inflammatory bowel disease. On the basis of only 15 biomarkers, a highly accurate patient discrimination index is created and validated on an independent cohort. Thus microbiota-targeted biomarkers may be a powerful tool for diagnosis of different diseases."}
{"_id": "5cfaa2674e087a0131a64f6443f0dbb7abb80810", "title": "IOT Based Air And Sound Pollution Monitoring System", "text": "The growing air and sound pollution is one of the serious issues these days. This large amount of increasing pollution has made human life prone to large number of diseases. Therefore, it has now become necessary to control the pollution to ensure healthy livelihood and better future. The Air and Sound Pollution Monitoring device can be accessed by the authorities and the common people belonging to the area. The device will be installed through a mobile application which will show the live updates of the pollution level of the area. This device is also capable of detecting the fire in its area and notify the same to the fire brigade authorities so that they could take necessary actions accordingly, and also the mobile applications will be installed in the fire brigades itself so that if a fire is taking place nearby, it could be controlled in time to reduce loss of people and property. This system works on the methods of IOT which is a rising technology based on the fusion of electronics and computer science. The embedded sensors in the system help to detect major air polluting gases such as CO2, SO2 and CO and level of sound pollution. The concept of IOT helps to access data from remote locations and save it in database so that we don\u2019t need to actually be present in that area."}
{"_id": "f9cb84486b815e09b5cb037bd255aef96fd0bfef", "title": "High-Quality Real-Time Video Stabilization Using Trajectory Smoothing and Mesh-Based Warping", "text": "Some state-of-the-art video stabilization methods can achieve quite good visual effect, but they always cost a lot of time. On the other hand, current real-time video stabilization methods cannot generate satisfactory results. In this paper, we propose a novel trajectory-based video stabilization method which can generate high-quality results in real time. Our method runs very fast, because many techniques are proposed for acceleration. In the trajectory smoothing step, trajectories are extracted, pre-processed, and smoothed. A video splitting algorithm is proposed for pre-processing, and binomial filtering is used for smoothing. Both of them are simple and fast. In the frame warping step, we calculate a spatially varying warp that is directed by a grid mesh for each frame. Instead of solving time consuming global optimization problems, the transformation matrix of each grid is calculated using nearby trajectories in our method, leading to very high speed. We implement our method and run it on a variety of videos. Experiments show that while the stabilization effect is comparable with state-of-the-art methods, our algorithm can run in real time."}
{"_id": "88064f0a67015777e1021862a1bf2e49613ebfb7", "title": "Fusion of facial expressions and EEG for implicit affective tagging", "text": "The explosion of user-generated, untagged multimedia data in recent years, generates a strong need for efficient search and retrieval of this data. The predominant method for content-based tagging is through slow, labour-intensive manual annotation. Consequently, automatic tagging is currently a subject of intensive research. However, it is clear that the process will not be fully automated in the foreseeable future. We propose to involve the user and investigate methods for implicit tagging, wherein users\u2019 responses to the interaction with the multimedia content are analysed in order to generate descriptive tags. Here, we present a multi-modal approach that analyses both facial expressions and Electroencephalography(EEG) signals for the generation of affective tags. We perform classification and regression in the valence-arousal space and present results for both feature-level and decision-level fusion. We demonstrate improvement in the results when using both modalities, suggesting the modalities contain complementary information."}
{"_id": "4c80d19b98c73c40039abc385e493083f91e3bb9", "title": "Three-dimensional bin packing problem with variable bin height", "text": "0377-2217/$ see front matter 2009 Elsevier B.V. A doi:10.1016/j.ejor.2009.05.040 * Corresponding author. Address: Institute for Logi fax: +61 3 9689 4859. E-mail addresses: wuyong@pmail.ntu.edu.sg (Y. W This paper studies a variant of the three-dimensional bin packing problem (3D-BPP), where the bin height can be adjusted to the cartons it packs. The bins and cartons to be packed are assumed rectangular in shape. The cartons are allowed to be rotated into any one of the six positions that keep the carton edges parallel to the bin edges. This greatly increases the difficulty of finding a good solution since the search space expands significantly comparing to the 3D-BPP where the cartons have fixed orientations. A mathematical (mixed integer programming) approach is modified based on [Chen, C. S., Lee, S. M., Shen, Q. S., 1995. An analytical model for the container loading problem. European Journal of Operational Research 80 (1), 68\u201376] and numerical experiments indicate that the mathematical approach is not suitable for the variable bin height 3D-BPP. A special bin packing algorithm based on packing index is designed to utilize the special problem feature and is used as a building block for a genetic algorithm designed for the 3DBPP. The paper also investigates the situation where more than one type of bin are used and provides a heuristic for packing a batch of cartons using the genetic algorithm. Numerical experiments show that our proposed method yields quick and satisfactory results when benchmarked against the actual packing practice and the MIP model with the latest version of CPLEX. 2009 Elsevier B.V. All rights reserved."}
{"_id": "a7909d353012175b6cbdea4158a9c5833803652c", "title": "The Role of Adult Attachment Styles in Psychopathology and Psychotherapy Outcomes", "text": "Attachment theory provides a model for understanding how the attachment styles formed in infancy systematically affect subsequent psychological functioning across the life span. Attachment styles provide the cognitive schemas, or working models, through which individuals perceive and relate to their worlds. In turn, these schemas predispose the development of psychopathologies and influence outcomes when people undergo psychotherapy. After reviewing recent empirical findings, the authors conclude that an understanding of attachment theory facilitates the conceptualization of clients\u2019 problems and the selection of appropriate interventions. Accordingly, attachment styles should be assessed as a standard part of treatment planning. Furthermore, the authors propose that attachment styles should be assessed as individual difference variables in psychotherapy outcome research because adult attachment styles dictate how people perceive and respond to their environments and, therefore, how clients respond differentially to various treatments."}
{"_id": "32266dd1ff4716bb34747cc325ac3f23fa9aa8d6", "title": "Self-similarity of complex networks", "text": "Complex networks have been studied extensively owing to their relevance to many real systems such as the world-wide web, the Internet, energy landscapes and biological and social networks. A large number of real networks are referred to as \u2018scale-free\u2019 because they show a power-law distribution of the number of links per node. However, it is widely believed that complex networks are not invariant or self-similar under a length-scale transformation. This conclusion originates from the \u2018small-world\u2019 property of these networks, which implies that the number of nodes increases exponentially with the \u2018diameter\u2019 of the network, rather than the power-law relation expected for a self-similar structure. Here we analyse a variety of real complex networks and find that, on the contrary, they consist of self-repeating patterns on all length scales. This result is achieved by the application of a renormalization procedure that coarse-grains the system into boxes containing nodes within a given \u2018size\u2019. We identify a power-law relation between the number of boxes needed to cover the network and the size of the box, defining a finite self-similar exponent. These fundamental properties help to explain the scale-free nature of complex networks and suggest a common self-organization dynamics."}
{"_id": "734e856b6b7eaec56e9e3936536a50c1b18b5d3f", "title": "Legal Compliance by Design (LCbD) and through Design (LCtD): Preliminary Survey", "text": "The purpose of this paper is twofold: (i) carrying out a preliminary survey of the literature and research projects on Compliance by Design (CbD); and (ii) clarifying the double process of (a) extending business managing techniques to other regulatory fields, and (b) converging trends in legal theory, legal technology and Artificial Intelligence. The paper highlights the connections and differences we found across different domains and proposals. We distinguish three different policydriven types of CbD: (i) business, (ii) regulatory, (iii) and legal. The recent deployment of ethical views, and the implementation of general principles of privacy and data protection lead to the conclusion that, in order to appropriately define legal compliance, Compliance through Design (CtD) should be differentiated from CbD."}
{"_id": "3743a306ad623aac459b3e3e589f1e5f6db87291", "title": "Microformats: a pragmatic path to the semantic web", "text": "Microformats are a clever adaptation of semantic XHTML that makes it easier to publish, index, and extract semi-structured information such as tags, calendar entries, contact information, and reviews on the Web. This makes it a pragmatic path towards achieving the vision set forth for the Semantic Web.Even though it sidesteps the existing \"technology stack\" of RDF, ontologies, and Artificial Intelligence-inspired processing tools, various microformats have emerged that parallel the goals of several well-known Semantic Web projects. This poster compares their prospects to the Semantic Web according to Rogers' Diffusion of Innovation model."}
{"_id": "3274d8a1a2fabc3d95ce9adb76462bec7d4a7b1f", "title": "Efficient Name Disambiguation for Large-Scale Databases", "text": "Name disambiguation can occur when one is seeking a list of publications of an author who has used different name variations and when there are multiple other authors with the same name. We present an efficient integrative framework for solving the name disambiguation problem: a blocking method retrieves candidate classes of authors with similar names and a clustering method, DBSCAN, clusters papers by author. The distance metric between papers used in DBSCAN is calculated by an online active selection support vector machine algorithm (LASVM), yielding a simpler model, lower test errors and faster prediction time than a standard SVM. We prove that by recasting transitivity as density reachability in DBSCAN, transitivity is guaranteed for core points. For evaluation, we manually annotated 3,355 papers yielding 490 authors and achieved 90.6% pairwise-F1. For scalability, authors in the entire CiteSeer dataset, over 700,000 papers, were readily disambiguated."}
{"_id": "560d4588f6719d0acf4a43a66010a10106bed26a", "title": "Automatic Segmentation of Liver Tumor in CT Images with Deep Convolutional Neural Networks", "text": "Liver tumors segmentation from computed tomography (CT) images is an essential task for diagnosis and treatments of liver cancer. However, it is difficult owing to the variability of appearances, fuzzy boundaries, heterogeneous densities, shapes and sizes of lesions. In this paper, an automatic method based on convolutional neural networks (CNNs) is presented to segment lesions from CT images. The CNNs is one of deep learning models with some convolutional filters which can learn hierarchical features from data. We compared the CNNs model to popular machine learning algorithms: AdaBoost, Random Forests (RF), and support vector machine (SVM). These classifiers were trained by handcrafted features containing mean, variance, and contextual features. Experimental evaluation was performed on 30 portal phase enhanced CT images using leave-one-out cross validation. The average Dice Similarity Coefficient (DSC), precision, and recall achieved of 80.06% \u00b1 1.63%, 82.67% \u00b1 1.43%, and 84.34% \u00b1 1.61%, respectively. The results show that the CNNs method has better performance than other methods and is promising in liver tumor segmentation."}
{"_id": "6116e17653e9a23c2ee19b9c4477a5319d52d42c", "title": "Comparison of Nanostring nCounter\u00ae Data on FFPE Colon Cancer Samples and Affymetrix Microarray Data on Matched Frozen Tissues", "text": "The prognosis of colorectal cancer (CRC) stage II and III patients remains a challenge due to the difficulties of finding robust biomarkers suitable for testing clinical samples. The majority of published gene signatures of CRC have been generated on fresh frozen colorectal tissues. Because collection of frozen tissue is not practical for routine surgical pathology practice, a clinical test that improves prognostic capabilities beyond standard pathological staging of colon cancer will need to be designed for formalin-fixed paraffin-embedded (FFPE) tissues. The NanoString nCounter\u00ae platform is a gene expression analysis tool developed for use with FFPE-derived samples. We designed a custom nCounter\u00ae codeset based on elements from multiple published fresh frozen tissue microarray-based prognostic gene signatures for colon cancer, and we used this platform to systematically compare gene expression data from FFPE with matched microarray array data from frozen tissues. Our results show moderate correlation of gene expression between two platforms and discovery of a small subset of genes as candidate biomarkers for colon cancer prognosis that are detectable and quantifiable in FFPE tissue sections."}
{"_id": "cc0a4093698ff16920f6b2ec0542c6fbed13e4af", "title": "A Generic Layered Architecture for Context Aware Applications", "text": "As pervasive computing is at its preliminary stage a number of computing solutions were proposed and more are still under experimentation. In this paper a four layered generic architecture is proposed with a recommended components so as to support context awareness for pervasive applications. Detail of each layer is described sufficiently to insight the process of context generation from low level contextual data up to high level context reporting scheme in a given pervasive environment. The implementation section of this article describes how the context reasoning component employee different level of reasoning techniques such as knowledge acquisition, context based rule execution and the application of ontology in pervasive computing. \u00a9 2014 The Authors. Published by Elsevier B.V. Peer-review under responsibility of the Program Chairs of EICM-2014."}
{"_id": "6d0f5d4a43e60400b888d473387190a13882824b", "title": "CTR prediction for contextual advertising: learning-to-rank approach", "text": "Contextual advertising is a textual advertising displayed within the content of a generic web page. Predicting the probability that users will click on ads plays a crucial role in contextual advertising because it influences ranking, filtering, placement, and pricing of ads. In this paper, we introduce a click-through rate prediction algorithm based on the learning-to-rank approach. Focusing on the fact that some of the past click data are noisy and ads are ranked as lists, we build a ranking model by using partial click logs and then a regression model on it. We evaluated this approach offline on a data set based on logs from an ad network. Our method is observed to achieve better results than other baselines in our three metrics."}
{"_id": "95ded03f3eb9d60b3e3d51931147d5049be4ba5e", "title": "Design of Novel Reconfigurable Reflectarrays With Single-Bit Phase Resolution for Ku-Band Satellite Antenna Applications", "text": "Reconfigurable reflectarray antennas operating in Ku-band are presented in this paper. First, a novel multilayer unit-cell based on polarization turning concept is proposed to achieve the single-bit phase shift required for reconfigurable reflectarray applications. The principle of the unit-cell is discussed using the current model and the space match condition, along with simulations to corroborate the design and performance criteria. Then, an offset-fed configuration is developed to verify performance of the unit-cell in antenna application, and its polarization transformation property is elaborated. Finally, an offset-fed reflectarray with 10 \u00d7 10 elements is developed and fabricated. The dual-polarized antenna utilizes the control code matrices to accomplish a wide angle beam-scanning. A full-wave analysis is applied to the reflectarray, and detailed results are presented and discussed. This electronically steerable reflectarray antenna has significant potential for satellite applications, due to its wide operating band, simple control and beam-scanning capability."}
{"_id": "8a02b0055a935b30c9f2a3d5fe50c8b14b0c8099", "title": "An agile process model for product derivation in software product line engineering", "text": "Software Product Lines (SPL) and Agile practices have emerged as new paradigms for developing software. Both approaches share common goals; such as improving productivity, reducing time to market, decreasing development costs and increasing customer satisfaction. These common goals provide the motivation for this research. We believe that integrating Agile practices into SPL can bring a balance between agility and formalism. However, there has been little research on such integration. We have been researching the potential of integrating Agile approaches in one of the key SPL process areas, product derivation. In this paper we present an outline of our Agile process model for product derivation that was developed through industry based case study research."}
{"_id": "2b55e2e74d3feae904ca178a39aac826b11ae5cd", "title": "A Proximity Authentication System for Smartphones", "text": "Authenticating whether two smartphones are in close proximity is important in smartphone security. For example, the authentication result can be used to pair two devices and construct a secure communication channel between them. Many existing proximity authentication systems rely on short range networks-the communication is usually restricted in short range networks. However, this approach is inadequate when we want to verify whether the communication distance is within a few centimeters, i.e. near field. To address this challenge, many other techniques construct systems based on the near field communication (NFC) system. Unfortunately, only a small portion of smart devices in the current market are equipped with NFC chips. The purpose of this paper is to provide a close proximity authentication system which does not depend on NFC chips. We devise a system to achieve close proximity authentication by using correlated finger movements on the two smartphones. Human input usually contains errors and is of low entropy, which affects the usability and security of our system. We solve these issues in an efficient way, considering the limited computational resources on smart devices. Our system does not need any prior secret information shared between the two devices, and generates the same high-entropy cryptographic key for both devices in a successful authentication. The efficiency of the system is validated by evaluations on Motorola Droid smartphones."}
{"_id": "fb39b1a73b37170d22d2588fd5af460c17156ce2", "title": "Survey on Computer Vision for UAVs: Current Developments and Trends", "text": "During last decade the scientific research on Unmanned Aerial Vehicless (UAVs) increased spectacularly and led to the design of multiple types of aerial platforms. The major challenge today is the development of autonomously operating aerial agents capable of completing missions independently of human interaction. To this extent, visual sensing techniques have been integrated in the control pipeline of the UAVs in order to enhance their navigation and guidance skills. The aim of this article is to present a comprehensive literature review on vision based applications for UAVs focusing mainly on current developments and trends. These applications are sorted in different categories according to the research topics among various research groups. More specifically vision based position-attitude control, pose estimation and mapping, obstacle detection as well as target tracking are the identified components towards autonomous agents. Aerial platforms could reach greater level of autonomy by integrating all these technologies onboard. Additionally, throughout this article the concept of fusion multiple sensors is highlighted, while an overview on the challenges C. Kanellakis ( ) \u00b7 G. Nikolakopoulos Lule\u00e5 University of Technology, Lule\u00e5, Sweden e-mail: christoforos.kanellakis@ltu.se G. Nikolakopoulos e-mail: geonik@ltu.se addressed and future trends in autonomous agent development will be also provided."}
{"_id": "0024559c0758fd680a5ab777348f4a740b8c7323", "title": "Persistent Point Feature Histograms for 3 D Point Clouds", "text": "This paper proposes a novel way of characterizing the local geometry of 3D points, using persistent feature histograms. The relationships between the neighbors of a point are analyzed and the resulted values are stored in a 16-bin histogram. The histograms are pose and point cloud density invariant and cope well with noisy datasets. We show that geometric primitives have unique signatures in this feature space, preserved even in the presence of additive noise. To extract a compact subset of points which characterizes a point cloud dataset, we perform an in-depth analysis of all point feature histograms using different distance metrics. Preliminary results show that point clouds can be roughly segmented based on the uniqueness of geometric primitives feature histograms. We validate our approach on datasets acquired from laser sensors in indoor (kitchen) environments."}
{"_id": "318cb91c41307135781a0a01bc9e0b6a6e123b0f", "title": "Towards a benchmark for RGB-D SLAM evaluation", "text": "We provide a large dataset containing RGB-D image sequences and the ground-truth camera trajectories with the goal to establish a benchmark for the evaluation of visual SLAM systems. Our dataset contains the color and depth images of a Microsoft Kinect sensor and the groundtruth trajectory of camera poses. The data was recorded at full frame rate (30 Hz) and sensor resolution (640x480). The ground-truth trajectory was obtained from a high-accuracy motion-capture system with eight high-speed tracking cameras (100 Hz). Further, we provide the accelerometer data from the Kinect. Finally, we propose an evaluation criterion for measuring the quality of the estimated camera trajectory of visual SLAM systems."}
{"_id": "34b5065af120cc339c4788ee0ba0223a2accc3b9", "title": "Surface feature detection and description with applications to mesh matching", "text": "In this paper we revisit local feature detectors/descriptors developed for 2D images and extend them to the more general framework of scalar fields defined on 2D manifolds. We provide methods and tools to detect and describe features on surfaces equiped with scalar functions, such as photometric information. This is motivated by the growing need for matching and tracking photometric surfaces over temporal sequences, due to recent advancements in multiple camera 3D reconstruction. We propose a 3D feature detector (MeshDOG) and a 3D feature descriptor (MeshHOG) for uniformly triangulated meshes, invariant to changes in rotation, translation, and scale. The descriptor is able to capture the local geometric and/or photometric properties in a succinct fashion. Moreover, the method is defined generically for any scalar function, e.g., local curvature. Results with matching rigid and non-rigid meshes demonstrate the interest of the proposed framework."}
{"_id": "47c3c0273c010115cd1d5ee90210937f47658d4e", "title": "Fast Point Feature Histograms (FPFH) for 3D registration", "text": "In our recent work [1], [2], we proposed Point Feature Histograms (PFH) as robust multi-dimensional features which describe the local geometry around a point p for 3D point cloud datasets. In this paper, we modify their mathematical expressions and perform a rigorous analysis on their robustness and complexity for the problem of 3D registration for overlapping point cloud views. More concretely, we present several optimizations that reduce their computation times drastically by either caching previously computed values or by revising their theoretical formulations. The latter results in a new type of local features, called Fast Point Feature Histograms (FPFH), which retain most of the discriminative power of the PFH. Moreover, we propose an algorithm for the online computation of FPFH features for realtime applications. To validate our results we demonstrate their efficiency for 3D registration and propose a new sample consensus based method for bringing two datasets into the convergence basin of a local non-linear optimizer: SAC-IA (SAmple Consensus Initial Alignment)."}
{"_id": "cbe8fa1b7d7d602049a186c9340fb46f8b791a23", "title": "GENERATIVE ADVERSARIAL NETWORK BASED ACOUSTIC SCENE TRAINING SET AUGMENTATION AND SELECTION USING SVM HYPERPLANE", "text": "Although it is typically expected that using a large amount of labeled training data would lead to improve performance in deep learning, it is generally difficult to obtain such DataBase (DB). In competitions such as the Detection and Classification of Acoustic Scenes and Events (DCASE) challenge Task 1, participants are constrained to use a relatively small DB as a rule, which is similar to the aforementioned issue. To improve Acoustic Scene Classification (ASC) performance without employing additional DB, this paper proposes to use Generative Adversarial Networks (GAN) based method for generating additional training DB. Since it is not clear whether every sample generated by GAN would have equal impact in classification performance, this paper proposes to use Support Vector Machine (SVM) hyper plane for each class as reference for selecting samples, which have class discriminative information. Based on the crossvalidated experiments on development DB, the usage of the generated features could improve ASC performance."}
{"_id": "3b2d99a40844b84b644481662041427e9a277a2e", "title": "Visual tree detection for autonomous navigation in forest environment", "text": "This paper describes a classification based tree detection method for autonomous navigation of forest vehicles in forest environment. Fusion of color, and texture cues has been used to segment the image into tree trunk and background objects. The segmentation of images into tree trunk and background objects is a challenging task due to high variations of illumination, effect of different color shades, non-homogeneous bark texture, shadows and foreshortening. To accomplish this, the approach has been to find the best combinations of color, and texture descriptors, and classification techniques. An additional task has been to estimate the distance between forest vehicle and the base of segmented trees using monocular vision. A simple heuristic distance measurement method is proposed that is based on pixel height and a reference width. The performance of various color and texture operators, and accuracy of classifiers has been evaluated using cross validation techniques."}
{"_id": "6e73703654b645c7dab15a977434e9ee43d92e77", "title": "Learning Semantic Correspondences with Less Supervision", "text": "A central problem in grounded language acquisition is learning the correspondences between a rich world state and a stream of text which references that world state. To deal with the high degree of ambiguity present in this setting, we present a generative model that simultaneously segments the text into utterances and maps each utterance to a meaning representation grounded in the world state. We show that our model generalizes across three domains of increasing difficulty\u2014Robocup sportscasting, weather forecasts (a new domain), and NFL recaps."}
{"_id": "0a96cdec9f5d3c5eec71959437f9dc16d0e416b7", "title": "A Primer on Memory Consistency and Cache Coherence", "text": "Many modern computer systems and most multicore chips (chip multiprocessors) support shared memory in hardware. In a shared memory system, each of the processor cores may read and write to a single shared address space. For a shared memory machine, the memory consistency model defines the architecturally visible behavior of its memory system. Consistency definitions provide rules about loads and stores (or memory reads and writes) and how they act upon memory. As part of supporting a memory consistency model, many machines also provide cache coherence protocols that ensure that multiple cached copies of data are kept up-to-date. The goal of this primer is to provide readers with a basic understanding of consistency and coherence. This understanding includes both the issues that must be solved as well as a variety of solutions. We present both highlevel concepts as well as specific, concrete examples from real-world systems. vi"}
{"_id": "57f1c73fcac9e6fe405501a5706c2e683d0c6af4", "title": "Improving Movie Gross Prediction through News Analysis", "text": "Traditional movie gross predictions are based on numerical and categorical movie data from The Internet Movie Database (IMDB). In this paper, we use the quantitative news data generated by Lydia, our system for large-scale news analysis, to help people to predict movie grosses. By analyzing two different models (regression and k-nearest neighbor models), we find models using only news data can achieve similar performance to those using IMDB data. Moreover, we can achieve better performance by using the combination of IMDB data and news data. Further, the improvement is statistically significant."}
{"_id": "4332413cfd2e540d2980a673cb657df2216332db", "title": "Prevalence of chronic pain with neuropathic characteristics in the general population", "text": "We conducted a large nationwide postal survey to estimate the prevalence of chronic pain with or without neuropathic characteristics in the French general population. A questionnaire aimed at identifying chronic pain (defined as daily pain for at least 3 months), evaluating its intensity, duration and body locations, was sent to a representative sample of 30,155 subjects. The DN4 questionnaire was used to identify neuropathic characteristics. Of the questionnaires, 24,497 (81.2%) were returned and 23,712 (96.8%) could be assessed. Seven thousand five hundred and twenty-two respondents reported chronic pain (prevalence=31.7%; [95%CI: 31.1-32.3]) and 4709 said the pain intensity was moderate to severe (prevalence=19.9%; [95%CI: 19.5-20.4]). Neuropathic characteristics were reported by 1631 respondents with chronic pain (prevalence=6.9%; [95%CI: 6.6-7.2]), which was moderate to severe in 1209 (prevalence=5.1% [95%CI: 4.8-5.4]). A higher prevalence of chronic pain with neuropathic characteristics was associated with middle age (50-64 years), manual professions and those living in rural areas. It was more frequently located in the lower limbs and its intensity and duration were higher in comparison with chronic pain without neuropathic characteristics. This large national population-based study indicates that a significant proportion of chronic pain patients report neuropathic characteristics. We identified distinctive socio-demographic profile and clinical features indicating that chronic pain with neuropathic characteristics is a specific health problem."}
{"_id": "12f806ec40bc0d4199fe90867195397b7a41a22a", "title": "You had me at hello: How phrasing affects memorability", "text": "Understanding the ways in which information achieves widespread public awareness is a research question of significant interest. We consider whether, and how, the way in which the information is phrased \u2014 the choice of words and sentence structure \u2014 can affect this process. To this end, we develop an analysis framework and build a corpus of movie quotes, annotated with memorability information, in which we are able to control for both the speaker and the setting of the quotes. We find that there are significant differences between memorable and non-memorable quotes in several key dimensions, even after controlling for situational and contextual factors. One is lexical distinctiveness: in aggregate, memorable quotes use less common word choices, but at the same time are built upon a scaffolding of common syntactic patterns. Another is that memorable quotes tend to be more general in ways that make them easy to apply in new contexts \u2014 that is, more portable. We also show how the concept of \u201cmemorable language\u201d can be extended across domains. To appear at ACL 2012; final version 1 Hello. My name is Inigo Montoya. Understanding what items will be retained in the public consciousness, and why, is a question of fundamental interest in many domains, including marketing, politics, entertainment, and social media; as we all know, many items barely register, whereas others catch on and take hold in many people\u2019s minds. An active line of recent computational work has employed a variety of perspectives on this question. Building on a foundation in the sociology of diffusion [27, 31], researchers have explored the ways in which network structure affects the way information spreads, with domains of interest including blogs [1, 11], email [37], on-line commerce [22], and social media [2, 28, 33, 38]. There has also been recent research addressing temporal aspects of how different media sources convey information [23, 30, 39] and ways in which people react differently to information on different topics [28, 36]. Beyond all these factors, however, one\u2019s everyday experience with these domains suggests that the way in which a piece of information is expressed \u2014 the choice of words, the way it is phrased \u2014 might also have a fundamental effect on the extent to which it takes hold in people\u2019s minds. Concepts that attain wide reach are often carried in messages such as political slogans, marketing phrases, or aphorisms whose language seems intuitively to be memorable, \u201ccatchy,\u201d or otherwise compelling. Our first challenge in exploring this hypothesis is to develop a notion of \u201csuccessful\u201d language that is precise enough to allow for quantitative evaluation. We also face the challenge of devising an evaluation setting that separates the phrasing of a message from the conditions in which it was delivered \u2014 highlycited quotes tend to have been delivered under compelling circumstances or fit an existing cultural, political, or social narrative, and potentially what appeals to us about the quote is really just its invocation of these extra-linguistic contexts. Is the form of the language adding an effect beyond or indepenar X iv :1 20 3. 63 60 v2 [ cs .C L ] 3 0 A pr 2 01 2 dent of these (obviously very crucial) factors? 
To investigate the question, one needs a way of controlling \u2014 as much as possible \u2014 for the role that the surrounding context of the language plays. The present work (i): Evaluating language-based memorability Defining what makes an utterance memorable is subtle, and scholars in several domains have written about this question. There is a rough consensus that an appropriate definition involves elements of both recognition \u2014 people should be able to retain the quote and recognize it when they hear it invoked \u2014 and production \u2014 people should be motivated to refer to it in relevant situations [15]. One suggested reason for why some memes succeed is their ability to provoke emotions [16]. Alternatively, memorable quotes can be good for expressing the feelings, mood, or situation of an individual, a group, or a culture (the zeitgeist): \u201cCertain quotes exquisitely capture the mood or feeling we wish to communicate to someone. We hear them ... and store them away for future use\u201d [10]. None of these observations, however, serve as definitions, and indeed, we believe it desirable to not pre-commit to an abstract definition, but rather to adopt an operational formulation based on external human judgments. In designing our study, we focus on a domain in which (i) there is rich use of language, some of which has achieved deep cultural penetration; (ii) there already exist a large number of external human judgments \u2014 perhaps implicit, but in a form we can extract; and (iii) we can control for the setting in which the text was used. Specifically, we use the complete scripts of roughly 1000 movies, representing diverse genres, eras, and levels of popularity, and consider which lines are the most \u201cmemorable\u201d. To acquire memorability labels, for each sentence in each script, we determine whether it has been listed as a \u201cmemorable quote\u201d by users of the widely-known IMDb (the Internet Movie Database), and also estimate the number of times it appears on the Web. Both of these serve as memorability metrics for our purposes. When we evaluate properties of memorable quotes, we compare them with quotes that are not assessed as memorable, but were spoken by the same character, at approximately the same point in the same movie. This enables us to control in a fairly fine-grained way for the confounding effects of context discussed above: we can observe differences that persist even after taking into account both the speaker and the setting. In a pilot validation study, we find that human subjects are effective at recognizing the more IMDbmemorable of two quotes, even for movies they have not seen. This motivates a search for features intrinsic to the text of quotes that signal memorability. In fact, comments provided by the human subjects as part of the task suggested two basic forms that such textual signals could take: subjects felt that (i) memorable quotes often involve a distinctive turn of phrase; and (ii) memorable quotes tend to invoke general themes that aren\u2019t tied to the specific setting they came from, and hence can be more easily invoked for future (out of context) uses. We test both of these principles in our analysis of the data. 
The present work (ii): What distinguishes memorable quotes Under the controlled-comparison setting sketched above, we find that memorable quotes exhibit significant differences from nonmemorable quotes in several fundamental respects, and these differences in the data reinforce the two main principles from the human pilot study. First, we show a concrete sense in which memorable quotes are indeed distinctive: with respect to lexical language models trained on the newswire portions of the Brown corpus [21], memorable quotes have significantly lower likelihood than their nonmemorable counterparts. Interestingly, this distinctiveness takes place at the level of words, but not at the level of other syntactic features: the part-ofspeech composition of memorable quotes is in fact more likely with respect to newswire. Thus, we can think of memorable quotes as consisting, in an aggregate sense, of unusual word choices built on a scaffolding of common part-of-speech patterns. We also identify a number of ways in which memorable quotes convey greater generality. In their patterns of verb tenses, personal pronouns, and determiners, memorable quotes are structured so as to be more \u201cfree-standing,\u201d containing fewer markers that indicate references to nearby text. Memorable quotes differ in other interesting aspects as well, such as sound distributions. Our analysis of memorable movie quotes suggests a framework by which the memorability of text in a range of different domains could be investigated. We provide evidence that such cross-domain properties may hold, guided by one of our motivating applications in marketing. In particular, we analyze a corpus of advertising slogans, and we show that these slogans have significantly greater likelihood at both the word level and the part-of-speech level with respect to a language model trained on memorable movie quotes, compared to a corresponding language model trained on non-memorable movie quotes. This suggests that some of the principles underlying memorable text have the potential to apply across different areas. Roadmap \u00a72 lays the empirical foundations of our work: the design and creation of our movie-quotes dataset, which we make publicly available (\u00a72.1), a pilot study with human subjects validating IMDbbased memorability labels (\u00a72.2), and further study of incorporating search-engine counts (\u00a72.3). \u00a73 details our analysis and prediction experiments, using both movie-quotes data and, as an exploration of cross-domain applicability, slogans data. \u00a74 surveys related work across a variety of fields. \u00a75 briefly summarizes and indicates some future directions. 2 I\u2019m ready for my close-up."}
{"_id": "7b8dffb11e74c5d78181ea4c8c475ed473905519", "title": "Self-Oscillating Fluxgate-Based Quasi-Digital Sensor for DC High-Current Measurement", "text": "A quasi-digital dc current sensor with rated current of \u00b1600 A while overload current of about \u00b1750 A is proposed in this paper. The new sensor is based on the open-loop self-oscillating fluxgate technology, but its originality is using a microcontroller unit to detect the duty cycle of the exciting voltage of the fluxgate. Compared with the published similar method, the whole signal chain of the new sensor is quasi-digital and without low-pass filter and analog-to-digital converter required when connected to digital systems. A precisely theoretical equation with respect to the linear dependence between the duty cycle and the current to be measured is established. Based on the equation, factors affecting the sensor sensitivity, accuracy, and resolution are determined, which constitutes the theoretical basis on the optimization design of the new sensor. The sensor linearity is improved using the least-squares polynomial fitting method. Some key specifications including the linearity, repeatability, and power supply effect of the sensor are characterized. The measurement results show that the linearity of the new sensor with the theoretical equation is better than 1.7% in the full scale of \u00b1600 A, and can be improved to 0.3% when the fifth-order polynomial fitting method is used."}
{"_id": "318ada827c5273a6998cfa84e57801121ce04ddc", "title": "Applying Organizational Routines in understanding organizational change", "text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L\u2019archive ouverte pluridisciplinaire HAL, est destin\u00e9e au d\u00e9p\u00f4t et \u00e0 la diffusion de documents scientifiques de niveau recherche, publi\u00e9s ou non, \u00e9manant des \u00e9tablissements d\u2019enseignement et de recherche fran\u00e7ais ou \u00e9trangers, des laboratoires publics ou priv\u00e9s. Applying Organizational Routines in understanding organizational change Markus Becker, Nathalie Lazaric, Richard Nelson, Sidney G. Winter"}
{"_id": "3dc5ad13cb3a8140a2950bde081a5e22360ef304", "title": "Text and Data Mining Techniques in Adverse Drug Reaction Detection", "text": "We review data mining and related computer science techniques that have been studied in the area of drug safety to identify signals of adverse drug reactions from different data sources, such as spontaneous reporting databases, electronic health records, and medical literature. Development of such techniques has become more crucial for public heath, especially with the growth of data repositories that include either reports of adverse drug reactions, which require fast processing for discovering signals of adverse reactions, or data sources that may contain such signals but require data or text mining techniques to discover them. In order to highlight the importance of contributions made by computer scientists in this area so far, we categorize and review the existing approaches, and most importantly, we identify areas where more research should be undertaken."}
{"_id": "717a4ebff4f6d61bab0fcd07456b2561c43e7111", "title": "FIRST AND SECOND ORDER METHODS FOR ONLINE CONVOLUTIONAL", "text": "Convolutional sparse representations are a form of sparse representation with a structured, translation invariant dictionary. Most convolutional dictionary learning algorithms to date operate in batch mode, requiring simultaneous access to all training images during the learning process, which results in very high memory usage, and severely limits the training data that can be used. Very recently, however, a number of authors have considered the design of online convolutional dictionary learning algorithms that offer far better scaling of memory and computational cost with training set size than batch methods. This paper extends our prior work, improving a number of aspects of our previous algorithm; proposing an entirely new one, with better performance, and that supports the inclusion of a spatial mask for learning from incomplete data; and providing a rigorous theoretical analysis of these methods."}
{"_id": "5d30e5bd29174ea3a82019187d98eae72454800b", "title": "Interactive multiple anisotropic scattering in clouds", "text": "We propose an algorithm for the real time realistic simulation of multiple anisotropic scattering of light in a volume. Contrary to previous real-time methods we account for all kinds of light paths through the medium and preserve their anisotropic behavior.\n Our approach consists of estimating the energy transport from the illuminated cloud surface to the rendered cloud pixel for each separate order of multiple scattering. We represent the distribution of light paths reaching a given viewed cloud pixel with the mean and standard deviation of their entry points on the lit surface, which we call the collector area. At rendering time for each pixel we determine the collector area on the lit cloud surface for different sets of scattering orders, then we infer the associated light transport. The fast computation of the collector area and light transport is possible thanks to a preliminary analysis of multiple scattering in plane-parallel slabs and does not require slicing or marching through the volume.\n Rendering is done efficiently in a shader on the GPU, relying on a cloud surface mesh augmented with a Hypertexture to enrich the shape and silhouette. We demonstrate our model with the interactive rendering of detailed animated cumulus and cloudy sky at 2--10 frames per second."}
{"_id": "0a4eabea3f727a49f414a036f253ea28d1652104", "title": "On label dependence and loss minimization in multi-label classification", "text": "Most of the multi-label classification (MLC) methods proposed in recent years intended to exploit, in one way or the other, dependencies between the class labels. Comparing to simple binary relevance learning as a baseline, any gain in performance is normally explained by the fact that this method is ignoring such dependencies. Without questioning the correctness of such studies, one has to admit that a blanket explanation of that kind is hiding many subtle details, and indeed, the underlying mechanisms and true reasons for the improvements reported in experimental studies are rarely laid bare. Rather than proposing yet another MLC algorithm, the aim of this paper is to elaborate more closely on the idea of exploiting label dependence, thereby contributing to a better understanding of MLC. Adopting a statistical perspective, we claim that two types of label dependence should be distinguished, namely conditional and marginal dependence. Subsequently, we present three scenarios in which the exploitation of one of these types of dependence may boost the predictive performance of a classifier. In this regard, a close connection with loss minimization is established, showing that the benefit of exploiting label dependence does also depend on the type of loss to be minimized. Concrete theoretical results are presented for two representative loss functions, namely the Hamming loss and the subset 0/1 loss. In addition, we give an overview of state-of-the-art decomposition algorithms for MLC and we try to reveal the reasons for their effectiveness. Our conclusions are supported by carefully designed experiments on synthetic and benchmark data."}
{"_id": "114a84f82d1b312b3e920fbef35e5c93d7c3bd16", "title": "Automated and safe vulnerability assessment", "text": "As the number of system vulnerabilities multiplies in recent years, vulnerability assessment has emerged as a powerful system security administration tool that can identify vulnerabilities in existing systems before they are exploited. Although there are many commercial vulnerability assessment tools in the market, none of them can formally guarantee that the assessment process never compromises the computer systems being tested. This paper proposes a featherweight virtual machine (FVM) technology to address the safety issue associated with vulnerability testing. Compared with other virtual machine technologies, FVM is designed to facilitate sharing between virtual machines but still provides strong protection between them. The FVM technology allows a vulnerability assessment tool to test an exact replica of a production-mode network service, including both hardware and system software components, while guaranteeing that the production-mode network service is fully isolated from the testing process. In addition to safety, the vulnerability assessment support system described in this paper can also automate the entire process of vulnerability testing and thus for the first time makes it feasible to run vulnerability testing autonomously and frequently. Experiments on a Windows-based prototype show that Nessus assessment results against an FVM virtual machine are identical to those against a real machine. Furthermore, modifications to the file system and registry state made by vulnerability assessment runs are completely isolated from the host machine. Finally, the performance impact of vulnerability assessment runs on production network services is as low as 3%"}
{"_id": "3417d3ea8c58e3101f1d8a76bd7453c951588b1c", "title": "Understanding Performance Interference of I/O Workload in Virtualized Cloud Environments", "text": "Server virtualization offers the ability to slice large, underutilized physical servers into smaller, parallel virtual machines (VMs), enabling diverse applications to run in isolated environments on a shared hardware platform. Effective management of virtualized cloud environments introduces new and unique challenges, such as efficient CPU scheduling for virtual machines, effective allocation of virtual machines to handle both CPU intensive and I/O intensive workloads. Although a fair number of research projects have dedicated to measuring, scheduling, and resource management of virtual machines, there still lacks of in-depth understanding of the performance factors that can impact the efficiency and effectiveness of resource multiplexing and resource scheduling among virtual machines. In this paper, we present our experimental study on the performance interference in parallel processing of CPU and network intensive workloads in the Xen Virtual Machine Monitors (VMMs). We conduct extensive experiments to measure the performance interference among VMs running network I/O workloads that are either CPU bound or network bound. Based on our experiments and observations, we conclude with four key findings that are critical to effective management of virtualized cloud environments for both cloud service providers and cloud consumers. First, running network-intensive workloads in isolated environments on a shared hardware platform can lead to high overheads due to extensive context switches and events in driver domain and VMM. Second, co-locating CPU-intensive workloads in isolated environments on a shared hardware platform can incur high CPU contention due to the demand for fast memory pages exchanges in I/O channel. Third, running CPU-intensive workloads and network-intensive workloads in conjunction incurs the least resource contention, delivering higher aggregate performance. Last but not the least, identifying factors that impact the total demand of the exchanged memory pages is critical to the in-depth understanding of the interference overheads in I/O channel in the driver domain and VMM."}
{"_id": "dfaafdc85710125a0641e7bafeb849e0daf9fd48", "title": "Efficient Algorithms for Citation Network Analysis", "text": "In the paper very efficient, linear in number of arcs, algorithms for determining Hummon and Doreian\u2019s arc weights SPLC and SPNP in citation network are proposed, and some theoretical properties of these weights are presented. The nonacyclicity problem in citation networks is discussed. An approach to identify on the basis of arc weights an important small subnetwork is proposed and illustrated on the citation networks of SOM (self organizing maps) literature and US patents."}
{"_id": "7ce27aa6eceeaa247927b828dbab960ffcbb117e", "title": "Upvoting hurricane Sandy: event-based news production processes on a social news site", "text": "This paper uses the case of Hurricane Sandy and reddit's topical community (subreddit) /r/sandy to examine the production and curation of news content around events on a social news site. Through qualitative analysis, we provide a coded topology of produced content and describe how types of networked gatekeeping impact the framing of a crisis situation. This study also examines, through quantitative modeling, what kind of information becomes negotiated and voted as relevant. We suggest that highly scored content shared in a social news setting focused more on human-interest media and perspective-based citizen journalism than professional news reports. We conclude by discussing how the mechanisms of social news sites conflict with the social norms and culture of reddit to produce differing expectations around news."}
{"_id": "80ffad0d381b2b538fea1c100c159ad106363596", "title": "Kuijk Bandgap Voltage Reference With High Immunity to EMI", "text": "This brief evaluates the effect of conducted electromagnetic interference (EMI) that is injected in the power supply of a classic Kuijk bandgap reference voltage circuit. Two modified Kuijk bandgap topologies with high immunity to EMI are introduced and compared to the original structure. Measurements of a test IC confirm the theoretical analyses."}
{"_id": "2a329bfb7906e722a23e593a30a116584ff83ea9", "title": "Containerization and the PaaS Cloud", "text": "Containerization is widely discussed as a lightweight virtualization solution. Apart from exhibiting benefits over traditional virtual machines in the cloud, containers are especially relevant for platform-as-a-service (PaaS) clouds to manage and orchestrate applications through containers as an application packaging mechanism. This article discusses the requirements that arise from having to facilitate applications through distributed multicloud platforms."}
{"_id": "b803b8ede21f9b170f05b17bcda4c6d7a0aa9b73", "title": "Vehicle detection with sub-class training using R-CNN for the UA-DETRAC benchmark", "text": "Different types of vehicles, such as buses and cars, can be quite different in shapes and details. This makes it more difficult to try to learn a single feature vector that can detect all types of vehicles using a single object class. We proposed an approach to perform vehicle detection with Sub-Classes categories learning using R-CNN in order to improve the performance of vehicle detection. Instead of using a single object class, which is a vehicle in this experiment, to train on the R-CNN, we used multiple sub-classes of vehicles so that the network can better learn the features of each individual type. In the experiment, we also evaluated the result of using a transfer learning approach to use a pre-trained weights on a new dataset."}
{"_id": "d92df1783f6883f5151f566c3e892713e392fc2e", "title": "An Energy-Efficient Micropower Neural Recording Amplifier", "text": "This paper describes an ultralow-power neural recording amplifier. The amplifier appears to be the lowest power and most energy-efficient neural recording amplifier reported to date. We describe low-noise design techniques that help the neural amplifier achieve input-referred noise that is near the theoretical limit of any amplifier using a differential pair as an input stage. Since neural amplifiers must include differential input pairs in practice to allow robust rejection of common-mode and power supply noise, our design appears to be near the optimum allowed by theory. The bandwidth of the amplifier can be adjusted for recording either neural spikes or local field potentials (LFPs). When configured for recording neural spikes, the amplifier yielded a midband gain of 40.8 dB and a -3-dB bandwidth from 45 Hz to 5.32 kHz; the amplifier's input-referred noise was measured to be 3.06 muVrms while consuming 7.56 muW of power from a 2.8-V supply corresponding to a noise efficiency factor (NEF) of 2.67 with the theoretical limit being 2.02. When configured for recording LFPs, the amplifier achieved a midband gain of 40.9 dB and a -3-dB bandwidth from 392 mHz to 295 Hz; the input-referred noise was 1.66 muVrms while consuming 2.08 muW from a 2.8-V supply corresponding to an NEF of 3.21. The amplifier was fabricated in AMI's 0.5-mum CMOS process and occupies 0.16 mm2 of chip area. We obtained successful recordings of action potentials from the robust nucleus of the arcopallium (RA) of an anesthesized zebra finch brain with the amplifier. Our experimental measurements of the amplifier's performance including its noise were in good accord with theory and circuit simulations."}
{"_id": "8a8430804ae6b6bf43b344347d46eda2b2995390", "title": "A Survey of Algorithmic Shapes", "text": "In the context of computer-aided design, computer graphics and geometry processing, the idea of generative modeling is to allow the generation of highly complex objects based on a set of formal construction rules. Using these construction rules, a shape is described by a sequence of processing steps, rather than just by the result of all applied operations: shape design becomes rule design. Due to its very general nature, this approach can be applied to any domain and to any shape representation that provides a set of generating functions. The aim of this survey is to give an overview of the concepts and techniques of procedural and generative modeling, as well as their applications with a special focus on archeology and architecture."}
{"_id": "327acefe53c09b40ae15bfac9165b5c8f812d158", "title": "Trust in leadership: meta-analytic findings and implications for research and practice.", "text": "In this study, the authors examined the findings and implications of the research on trust in leadership that has been conducted during the past 4 decades. First, the study provides estimates of the primary relationships between trust in leadership and key outcomes, antecedents, and correlates (k = 106). Second, the study explores how specifying the construct with alternative leadership referents (direct leaders vs. organizational leadership) and definitions (types of trust) results in systematically different relationships between trust in leadership and outcomes and antecedents. Direct leaders (e.g., supervisors) appear to be a particularly important referent of trust. Last, a theoretical framework is offered to provide parsimony to the expansive literature and to clarify the different perspectives on the construct of trust in leadership and its operation."}
{"_id": "6edc4db7bfa467f7ab7029b699522beff1fbf672", "title": "An Empirical Assessment of Organizational Commitment and Organizational Effectiveness", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "4e2f7400a73bb375bbc2f72702ba181460350aa2", "title": "Social Exchange in Organizations : Perceived Organizational Support , Leader-Member Exchange , and Employee Reciprocity", "text": "Social exchange (P. Blau, 1964) and the norm of reciprocity (A. W. Gouldner, 1960) have been used to explain the relationship of perceived organizational support and leader-member exchange with employee attitudes and behavior. Recent empirical research suggests that individuals engage in different reciprocation efforts depending on the exchange partner (e.g., B. L. McNeely & B. M. Meglino, 1994). The purpose of the present study was to further investigate these relationships by examining the relative contribution of indicators of employee-organization exchange and subordinate-supervisor exchange. Structural equation modeling was used to compare nested models. Results indicate that perceived organizational support is associated with organizational commitment, whereas leader-member exchange is associated with citizenship and in-role behavior."}
{"_id": "a7052f29f15cb4c8c637f0dc0b505793b37575d7", "title": "What we know about leadership. Effectiveness and personality.", "text": "Although psychologists know a great deal about leadership, persons who make decisions about real leaders seem largely to ignore their accumulated wisdom. In an effort to make past research more accessible, interpretable, and relevant to decision makers, this article defines leadership and then answers nine questions that routinely come up when practical decisions are made about leadership (e.g., whom to appoint, how to evaluate them, when to terminate them."}
{"_id": "cc4b0d54996ade0c6b2fdfdcf1f64c8b64085686", "title": "Justice at the millennium: a meta-analytic review of 25 years of organizational justice research.", "text": "The field of organizational justice continues to be marked by several important research questions, including the size of relationships among justice dimensions, the relative importance of different justice criteria, and the unique effects of justice dimensions on key outcomes. To address such questions, the authors conducted a meta-analytic review of 183 justice studies. The results suggest that although different justice dimensions are moderately to highly related, they contribute incremental variance explained in fairness perceptions. The results also illustrate the overall and unique relationships among distributive, procedural, interpersonal, and informational justice and several organizational outcomes (e.g., job satisfaction, organizational commitment, evaluation of authority, organizational citizenship behavior, withdrawal, performance). These findings are reviewed in terms of their implications for future research on organizational justice."}
{"_id": "ac9c460a6f7952ac7e4f1340fd06a18b5bd1559f", "title": "Comparison of Feedback Control Techniques for Torque-Vectoring Control of Fully Electric Vehicles", "text": "Fully electric vehicles (FEVs) with individually controlled powertrains can significantly enhance vehicle response to steering-wheel inputs in both steady-state and transient conditions, thereby improving vehicle handling and, thus, active safety and the fun-to-drive element. This paper presents a comparison between different torque-vectoring control structures for the yaw moment control of FEVs. Two second-order sliding-mode controllers are evaluated against a feedforward controller combined with either a conventional or an adaptive proportional-integral-derivative (PID) controller. Furthermore, the potential performance and robustness benefits arising from the integration of a body sideslip controller with the yaw rate feedback control system are assessed. The results show that all the evaluated controllers are able to significantly change the understeer behavior with respect to the baseline vehicle. The PID-based controllers achieve very good vehicle performance in steady-state and transient conditions, whereas the controllers based on the sliding-mode approach demonstrate a high level of robustness against variations in the vehicle parameters. The integrated sideslip controller effectively maintains the sideslip angle within acceptable limits in the case of an erroneous estimation of the tire-road friction coefficient."}
{"_id": "be9cde5dd5899e4b24a85261364beb8544159a14", "title": "Using the Delphi method", "text": "The reliability and validity of the selected research method are subjects to which every researcher is bound to address himself when representing the findings and conclusions of his/her work. In this study we will discuss the reliability, validity and philosophical aspects of the Delphi method, first with a small literature review and then by representing two different surveys conducted using the Delphi method. The point of view in our report is the usability of Delphi in collecting qualitative data for software engineering research. The most significant features of the Delphi method are its recursion and the possibility to get immediate feedback and evaluate one's own answer. Although there are many forms of Delphi techniques, these features exist in one form or the other in all Delphi variations. In Delphi-based surveys, the minimum number of participants is smaller and participants are not selected at random but because of their particular expertise. Among the traditional research methods this is seen to cause the risk of bias and endangering both reliability and validity. In the Delphi method, the recursion produced by three or more rounds, the expertise and - in most cases - anonymity of participants, and the opportunity to evaluate and argue one's own answer after having seen the other opinions and arguments are thought to guarantee the quality of well-planned and well-conducted research."}
{"_id": "1c80ef2b740a1ccd15f26efadf02be966ad458d5", "title": "Analysis on 2-element insertion sort algorithm", "text": "This paper proposes a 2-element insertion sort algorithm, an improved algorithm on the direct insertion sort, gives the algorithm design idea and the description in C. After analyzes comparatively the time complexity and space complexity of the algorithms, the paper summarizes the pros and cons of the 2-element insertion sort algorithm compared with the original one, so as to giving a theoretical basis for the direct insertion sort algorithm optimization, and playing a guiding role in teaching the relevant chapters in \u201cData Structure\u201d curriculum."}
{"_id": "2eff738019e00c4c72155a35686f220b478521ff", "title": "OPTIMAL PATHS FOR A CAR THAT GOES BOTH FORWARDS AND BACKWARDS", "text": "The path taken by a car with a given minimum turning radius has a lower bound on its radius of curvature at each point, but the path has cusps if the car shifts into or out of reverse gear. What is the shortest such path a car can travel between two points if its starting and ending directions are specified? One need consider only paths with at most 2 cusps or reversals. We give a set of paths which is sufficient in the sense that it always contains a shortest path and small in the sense that there are at most 68, but usually many fewer paths in the set for any pair of endpoints and directions. We give these paths by explicit formula. Calculating the length of each of these paths and selecting the (not necessarily unique) path with smallest length yields a simple algorithm for a shortest path in each case. These optimal paths or geodesies may be described as follows: If C is an arc of a circle of the minimal turning radius and S is a line segment, then it is sufficient to consider only certain paths of the form CCSCC where arcs and segments fit smoothly, one or more of the arcs or segments may vanish, and where reversals, or equivalently cusps, between arcs or segments are allowed. This contrasts with the case when cusps are not allowed, where Dubins (1957) has shown that paths of the form CCC and CSC suffice."}
{"_id": "532c2c340cdf50c5e387462176ee4ee317882fc5", "title": "Improving classification with latent variable models by sequential constraint optimization", "text": "In this paper we propose a method to use multiple generative models with latent variables for classi cation tasks. The standard approach to use generative models for classi cation is to train a separate model for each class. A novel data point is then classi ed by the model that attributes the highest probability. The algorithm we propose modi es the parameters of the models to improve the classi cation accuracy. Our approach is made computationally tractable by assuming that each of the models is deterministic, by which we mean that a data-point is associated to only a single latent state. The resulting algorithm is a variant of the support vector machine learning algorithm and in a limiting case the method is similar to the standard perceptron learning algorithm. We apply the method to two types of latent variable models. The rst has a discrete latent state space and the second, principal component analysis, has a continuous latent state space. We compare the e6ectiveness of both approaches on a handwritten digit recognition problem and on a satellite image recognition problem. c \u00a9 2003 Elsevier B.V. All rights reserved."}
{"_id": "5ccfbaab9951068d0b5156596ea9a9b0b5a628d6", "title": "Personalized PageRank Based Multi-document Summarization", "text": "This paper presents a novel multi-document summarization approach based on personalized pagerank (PPRSum). In this algorithm, we uniformly integrate various kinds of information in the corpus. At first, we train a salience model of sentence global features based on Naive Bayes Model. Secondly, we generate a relevance model for each corpus utilizing the query of it. Then, we compute the personalized prior probability for each sentence in the corpus utilizing the salience model and the relevance model both. With the help of personalized prior probability, a Personalized PageRank ranking process is performed depending on the relationships among all sentences in the corpus. Additionally, the redundancy penalty is imposed on each sentence. The summary is produced by choosing the sentences with both high query-focused information richness and high information novelty. Experiments on DUC2007 are performed and the ROUGE evaluation results show that PPRSum ranks between the 1st and the 2nd systems on DUC2007 main task."}
{"_id": "50300afbfb781a2c79cd71106727489d85036e3f", "title": "A Foundation for Multi-dimensional Databases", "text": "We present a multi-dimensional database model, which we believe can serve as a conceptual model for On-Line Analytical Processing (OLAP)-based applications. Apart from providing the functionalities necessary for OLAP-based applications, the main feature of the model we propose is a clear separation between structural aspects and the contents. This separation of concerns allows us to define data manipulation languages in a reasonably simple, transparent way. In particular, we show that the data cube operator can be expressed easily. Concretely, we define an algebra and a calculus and show them to be equivalent. We conclude by comparing our approach to related work. The conceptual multi-dimensional database model developed here is orthogonal to its implementation, which is not a subject of the present paper."}
{"_id": "cbe39b3982489ad20a2f62469fcffe8ee64068ba", "title": "Regional differences in synaptogenesis in human cerebral cortex.", "text": "The formation of synaptic contacts in human cerebral cortex was compared in two cortical regions: auditory cortex (Heschl's gyrus) and prefrontal cortex (middle frontal gyrus). Synapse formation in both cortical regions begins in the fetus, before conceptual age 27 weeks. Synaptic density increases more rapidly in auditory cortex, where the maximum is reached near postnatal age 3 months. Maximum synaptic density in middle frontal gyrus is not reached until after age 15 months. Synaptogenesis occurs concurrently with dendritic and axonal growth and with myelination of the subcortical white matter. A phase of net synapse elimination occurs late in childhood, earlier in auditory cortex, where it has ended by age 12 years, than in prefrontal cortex, where it extends to midadolescence. Synaptogenesis and synapse elimination in humans appear to be heterochronous in different cortical regions and, in that respect, appears to differ from the rhesus monkey, where they are concurrent. In other respects, including overproduction of synaptic contacts in infancy, persistence of high levels of synaptic density to late childhood or adolescence, the absolute values of maximum and adult synaptic density, and layer specific differences, findings in the human resemble those in rhesus monkeys."}
{"_id": "a2ffc574cda6f1348a98db66533b0d4bb01bf684", "title": "On shared understanding in software engineering: an essay", "text": "Shared understanding is essential for efficient software engineering when the risk of unsatisfactory out-come and rework of project results shall be low. Today, however, shared understanding is used mostly in an unreflected, ad-hoc way. This affects the quality of the engineered software solutions and generates re-work once the quality problems are discovered. In this article, we investigate the role, value, and usage of shared understanding in software engineering. We contribute a reflected analysis of the problem, in particular of how to rely on shared understanding that is implicit, rather than explicit. After an overview of the state of the art we discuss forms and value of shared understanding in software engineering, survey enablers and obstacles, compile existing practices for dealing with shared understanding, and present a roadmap for improving knowledge and practice in this area."}
{"_id": "11034aebfaa8ab24ea189e17a51bbbac33f95470", "title": "Too Much of a Good Thing? The Relationship Between Number of Friends and Interpersonal Impressions on Facebook", "text": "A central feature of the online social networking system, Facebook, is the connection to and links among friends. The sum of the number of one\u2019s friends is a feature displayed on users\u2019 profiles as a vestige of the friend connections a user has accrued. In contrast to offline social networks, individuals in online network systems frequently accrue friends numbering several hundred. The uncertain meaning of friend status in these systems raises questions about whether and how sociometric popularity conveys attractiveness in non-traditional, non-linear ways. An experiment examined the relationship between the number of friends a Facebook profile featured and observers\u2019 ratings of attractiveness and extraversion. A curvilinear effect of sociometric popularity and social attractiveness emerged, as did a quartic relationship between friend count and perceived extraversion. These results suggest that an overabundance of friend connections raises doubts about Facebook users\u2019 popularity and desirability. Zu viel des Guten? Zur Beziehung zwischen der Anzahl der Freunde und interpersonalen Eindr\u00fccken bei Facebook Eine zentrale Eigenschaft des sozialen Online-Netzwerks Facebook ist die Verbindung von Freunden. Die Gesamtanzahl der Freunde eines Nutzers wird als Merkmal im Benutzerprofil angezeigt und dient als eine Statistik der Freundeverbindungen, die ein Nutzer gesammelt hat. Im Gegensatz zu Offline-Netzwerken, haben Personen in OnlineNetzwerken oft mehrere Hundert Freunde. Die unklare Bedeutung des Freundestatus in diesem System wirft die Frage auf, ob und wie soziometrische Popularit\u00e4t die Attraktivit\u00e4t auf nicht-traditionelle, nichtlineare Weise ausdr\u00fcckt. In einem Experiment wurde die Beziehung zwischen der Anzahl der Freunde im Facebook-Profil und der Einsch\u00e4tzung von Attraktivit\u00e4t und Extraversion durch den Beobachter untersucht. Es zeigten sich ein kurvilinearer Effekt von soziometrischer Popularit\u00e4t und sozialer Attraktivit\u00e4t, sowie eine biquatratische Beziehung zwischen der Anzahl der Freunde und wahrgenommener Extraversion. Diese Ergebnisse deuten an, dass eine \u00fcberm\u00e4\u00dfig hohe Zahl an Freunden Zweifel an der Popularit\u00e4t und Attraktivit\u00e4t des Facebook-Nutzers"}
{"_id": "ab147e27d85fb69d528fe9e07d625276ddc77ac0", "title": "Crossbar RRAM Arrays: Selector Device Requirements During Write Operation", "text": "A comprehensive analysis of write operations (SET and RESET) in a resistance-change memory (resistive random access memory) crossbar array is carried out. Three types of resistive switching memory cells-nonlinear, rectifying-SET, and rectifying-RESET-are compared with each other in terms of voltage delivery, current delivery, and power consumption. Two different write schemes, V/2 and V/3, were considered, and the V/2 write scheme is preferred due to much lower power consumption. A simple numerical method was developed that simulates entire current flows and node voltages within a crossbar array and provides a quantitative tool for the accurate analysis of crossbar arrays and guidelines for developing reliable write operation."}
{"_id": "2466f58fa3e2894660c0586dbe8050212b8c04bc", "title": "Segmentation quality evaluation based on multi-scale convolutional neural networks", "text": "Segmentation quality evaluation is an important task in image segmentation. The existing evaluation methods formulate segmentation quality as regression model, and recent Convolutional Neural Network (CNN) based evaluation methods show superior performance. However, designing efficient CNN-based segmentation evaluation model is still under exploited. In this paper, we propose two types of CNN structures such as double-net and multi-scale network for segmentation quality evaluation. We observe that learning the local and global information and considering multi-scale image are useful for segmentation quality evaluation. To train and verify the proposed networks, we construct a novel objective segmentation quality evaluation dataset with large amount of data by combining several proposal generation methods. The experimental results demonstrate that the proposed method obtains larger Linear Correlation Coefficient (LCC) value than several state-of-art segmentation quality evaluation methods."}
{"_id": "cb0278e152f843fd96fffa5ea5ed916b8b13a5c1", "title": "Exploring Hypergraph Representation on Face Anti-spoofing Beyond 2D Attacks", "text": "Face anti-spoofing plays a crucial role in protecting face recognition systems from various attacks. Previous modelbased and deep learning approaches achieve satisfactory performance for 2D face spoofs, but remain limited for more advanced 3D attacks such as vivid masks. In this paper, we address 3D face anti-spoofing via the proposed Hypergraph Convolutional Neural Networks (HGCNN). Firstly, we construct a computation-efficient and posture-invariant face representation with only a few key points on hypergraphs. The hypergraph representation is then fed into the designed HGCNN with hypergraph convolution for feature extraction, while the depth auxiliary is also exploited for 3D mask anti-spoofing. Further, we build a 3D face attack database with color, depth and infrared light information to overcome the deficiency of 3D face anti-spoofing data. Experiments show that our method achieves the state-of-theart performance over widely used 3D and 2D databases as well as the proposed one under various tests."}
{"_id": "6627756d57204169be2e9891321aabb6f86716e3", "title": "A review of eye-tracking applications as tools for training", "text": "Substantial literature exists regarding how eye-tracking systems can be used to measure cognitive load and how these measurements can be useful for adapting training in real time. Much of the published literature discusses the applications and limitations of the research and typically provides recommendations for improvement. This review assesses these articles collectively to provide a clearer solution for implementing eye-tracking systems into a training environment. Although limitations exist for using eye tracking as an interface tool, gaze and pupillary response have been successfully used to reflect changes in cognitive load and are starting to be incorporated into adaptive training systems, although issues are still present with differentiating pupil responses from simultaneous psychological effects. Additionally, current eye-tracking systems and data analysis software have proven accurate enough for general use, but issues including system cost and software integration prevent this technology from becoming commercialized for use in common instructional settings."}
{"_id": "1aa8f67589dc15983789a70ec33531aa95633375", "title": "Waveguide slot antennas for circularly polarized radiated field", "text": "We present a new waveguide slot configuration able to radiate a circular polarization with a very low axial ratio. This configuration is mainly intended as a circularly polarized radiating element to be used in an array. Both left-hand and right-hand circular polarization can be independently obtained and therefore this configuration could be used to realize a polarization agile antenna. The proposed configuration consists of two very closely spaced radiating slots, so a specialized method of moments code has been developed to analyze, in a very effective way, the near-field interaction between such slots. Our method of moments code has been validated against a commercial finite-element method code, and then used to design a number of different circularly polarized radiating elements."}
{"_id": "14f42741ce2024b301148995afb8b2fa8ef1018d", "title": "Evaluating the Readability of Text Simplification Output for Readers with Cognitive Disabilities", "text": "This paper presents an approach for automatic evaluation of the readability of text simplification output for readers with cognitive disabilities. First, we present our work towards the development of the EasyRead corpus, which contains easy-to-read documents created especially for people with cognitive disabilities. We then compare the EasyRead corpus to the simplified output contained in the LocalNews corpus (Feng, 2009), the accessibility of which has been evaluated through reading comprehension experiments including 20 adults with mild intellectual disability. This comparison is made on the basis of 13 disability-specific linguistic features. The comparison reveals that there are no major differences between the two corpora, which shows that the EasyRead corpus is to a similar reading level as the user-evaluated texts. We also discuss the role of Simple Wikipedia (Zhu et al., 2010) as a widely-used accessibility benchmark, in light of our finding that it is significantly more complex than both the EasyRead and the LocalNews corpora."}
{"_id": "70b851e8f63440a88b484b761c8d3a5999ddcd24", "title": "Wafer-scale VLSI implementations of pulse coupled neural networks", "text": "In this paper, we present a system architecture currently under development that will allow very large (>10 neurons, >10 synapses) reconfigurable networks to be built, in the form of interlinked dies on a single wafer. Reconfigurable routing and complex adaptation/plasticity across several timescales in neurons and synapses allow for the implementation of large-scale biologically realistic neuronal networks and behaviors. Compared to biology the network dynamics will be about 10 to 10 times faster, so that developmental plasticity can be studied in detail. Keywords\u2014neural VLSI hardware, system simulation, mapping algorithms, parallelization."}
{"_id": "944e840c2cc7ab9d8e4eea149c7633390fe891c8", "title": "Escher sphere construction kit", "text": "M.C. Escher created a myriad of amazing planar tessellations, yet only a few three-dimensional ones such as his wooden fish ball and dodecahedral flower. We have developed an interactive program to design and manufacture \u201cEscher Spheres\u201d sets of tiles that can be assembled into spherical balls. The user chooses from a set of predefined symmetry groups and then deforms the boundaries of the basic domain tile; all corresponding points based on the chosen symmetry class move concurrently, instantly showing the overall result. The interior of the tile can be embellished with a bas-relief. Finally the tile is radially extruded and output as a solid model suitable for free-form fabrication."}
{"_id": "2cf9542f007a49d74c9c4c824464df08cf320d96", "title": "Comparative study of algorithms for Atrial Fibrillation detection", "text": "Automatic detection of Atrial Fibrillation (AF) is necessary for the long-term monitoring of patients who are suspected to have AF. Several methods for AF detection exist in the literature. These methods are mainly based on two different characteristics of AF ECGs: the irregularity of RR intervals (RRI) and the fibrillatory electrical Atrial Activity (AA). The electrical AA is characterized by the absence of the P-wave (PWA) and special frequency properties (FSA). Nine AF detection algorithms were selected from literature and evaluated with the same protocol in order to study their performance under different conditions. Results showed that the highest sensitivity (Se=97.64%) and specificity (Sp=96.08%) was achieved with methods based on analysis of irregularity of RR interval, while combining RR and atrial activity analysis gave the highest positive predictive value (PPV=92.75%). Algorithms based on RR irregularity were also the most robust against noise (Se=85.79% and Sp=81.90% for SNR=0dB; and Se=82.52% and Sp=40.47% for SNR=\u22125dB)."}
{"_id": "9264c0585e0b731868b945185cc7edbf42789b39", "title": "Resource management in a changing and uncertain climate", "text": "\u00a9 The Ecological Society of America www.frontiersinecology.org C change has already had important effects on ecological systems (Parmesan 2006; Root and Schneider 2006; IPCC 2007a; Rosenzweig et al. 2008). Projected changes in climate for the coming century are all greater than the climatic changes the Earth has experienced in the past 100 years (IPCC 2007b). Consequently, future changes in climate are likely to result in even more dramatic ecological responses, including declines in particularly sensitive species (eg corals), continued shifts in species distributions, and substantial changes in ecosystem processes (IPCC 2007a). Changes in hydrologic and fire regimes will fundamentally alter ecological systems. Sea-level rise, in particular, will have dramatic effects on coastal systems (Watson et al. 1996). Changes in phenology will affect the delicate relationships between pollinators and plants, parasites and hosts, foragers and forage, and predators and prey (eg Memmott et al. 2007). Despite the pervasiveness of climate change, most land, water, and resource managers are still following management plans that were developed before there was a scientific consensus that climate-change impacts were both real and substantial (Pyke et al. 2008). Climate change poses difficult challenges for many already overstretched natural resource managers, who must deal with day-to-day crises and who have little access to climate experts. Much of the widely available information about climate change focuses on global or regional scales that are often too broad to fully inform the management of specific nature reserves or regional forests. Moreover, much of the information that is available has a high level of uncertainty, and thus can be difficult to interpret. Finally, although climate change is having impacts today, the largest impacts are still decades in the future. Envisioning these impacts and acting well in advance of their realization will require a new level of proactive management. Several articles have provided general recommendations for managing particular systems in a changing climate (Noss 2001; Hannah et al. 2002; West and Salm REVIEWS REVIEWS REVIEWS"}
{"_id": "54bbf7ab72d7e73d002a682d5cfbaac36eb38276", "title": "A Smart Fridge with an Ability to Enhance Health and Enable Better Nutrition", "text": "Intelligent appliances with multimedia capability have been emerging into our daily life, thanks to the fast advance of computing technology and the wide use of the Interne. Smart home is one of the most prominent areas of intelligent appliances. Kitchen is one of the places where such intelligent appliances have been used. Since modern life style is driving people spending less time on cooking healthy food at home, an enjoyable and healthy life style can be assisted with an intelligent kitchenware such as a smart fridge. In this paper we introduce a novel idea of the smart fridge which can enable better nutrition and enhance health. It is designed for managing items stored in it and advising its users with cooking methods depending on what kind of food is stored. More importantly, it can perform other actions such as dietary control, nutrition monitoring, eating habit analysis, etc. The characteristics, functions and design details of the smart fridge are presented in details. We are confident that such a smart fridge will be an important component in future smart home."}
{"_id": "6b8fe767239a34e25e71e99bd8b8a64f8279d7f4", "title": "Review: A Review of Culture in Information Systems Research: Toward a Theory of Information Technology Culture Conflict", "text": "An understanding of culture is important to the study of information technologies in that culture at various levels, including national, organizational, and group, can influence the successful implementation and use of information technology. Culture also plays a role in managerial processes that may directly, or indirectly, influence IT. Culture is a challenging variable to research, in part because of the multiple divergent definitions and measures of culture. Notwithstanding, a wide body of literature has emerged that sheds light on the relationship of IT and culture. This paper sets out to provide a review of this literature in order to lend insights into our understanding of the linkages between IT and culture. We begin by conceptualizing culture and laying the groundwork for a values-based approach to the examination of IT and culture. Using this approach, we then provide a comprehensive review of the organizational and cross-cultural IT literature that conceptually links these two traditionally separate streams of research. From our analysis, we develop six themes of IT-culture research emphasizing culture's impact on IT, IT's impact on culture, and IT culture. Building upon these themes, we then develop a theory of IT, values, and conflict. Based upon the theory, we develop propositions concerning three types of cultural conflict and the results of these conflicts. Ultimately, the theory suggests that the reconciliation of these conflicts results in a reorientation of values. We conclude with the particular research challenges posed in this line of inquiry."}
{"_id": "958534ee647a5b83f30453ecb3ea605fce280e5d", "title": "Multivariate data analysis of NMR data.", "text": "Multivariate methods based on principal components (PCA and PLS) have been used to reduce NMR spectral information, to predict NMR parameters of complicated structures, and to relate shift data sets to dependent descriptors of biological significance. Noise reduction and elimination of instrumental artifacts are easily performed on 2D NMR data. Configurational classification of triterpenes and shift predictions in disubstituted benzenes can be obtained using PCA and PLS analysis. Finally, the shift predictions of tripeptides from descriptors of amino acids open the possibility of automatic analysis of multidimensional data of complex structures."}
{"_id": "9f720a880fe4c99557c4bdfe0e3595ea60902055", "title": "IT Consumerization: When Gadgets Turn Into Enterprise IT Tools", "text": ""}
{"_id": "9aefb1c6a467b795f72d61d82ae0be1a47d298b4", "title": "Preserving differential privacy in convolutional deep belief networks", "text": "The remarkable development of deep learning in medicine and healthcare domain presents obvious privacy issues, when deep neural networks are built on users\u2019 personal and highly sensitive data, e.g., clinical records, user profiles, biomedical images, etc. However, only a few scientific studies on preserving privacy in deep learning have been conducted. In this paper, we focus on developing a private convolutional deep belief network (pCDBN), which essentially is a convolutional deep belief network (CDBN) under differential privacy. Our main idea of enforcing $$\\epsilon $$ \u03f5 -differential privacy is to leverage the functional mechanism to perturb the energy-based objective functions of traditional CDBNs, rather than their results. One key contribution of this work is that we propose the use of Chebyshev expansion to derive the approximate polynomial representation of objective functions. Our theoretical analysis shows that we can further derive the sensitivity and error bounds of the approximate polynomial representation. As a result, preserving differential privacy in CDBNs is feasible. We applied our model in a health social network, i.e., YesiWell data, and in a handwriting digit dataset, i.e., MNIST data, for human behavior prediction, human behavior classification, and handwriting digit recognition tasks. Theoretical analysis and rigorous experimental evaluations show that the pCDBN is highly effective. It significantly outperforms existing solutions."}
{"_id": "3351477b943300c334356c3148c7b8f9320e6e5a", "title": "Gate sizing: finFETs vs 32nm bulk MOSFETs", "text": "FinFET devices promise to replace traditional MOSFETs because of superior ability in controlling leakage and minimizing short channel effects while delivering a strong drive current. We investigate in this paper gate sizing of finFET devices, and we provide a comparison with 32nm bulk CMOS. Wider finFET devices are built utilizing multiple parallel fins between the source and drain. Independent gating of the finFET's double gates allows significant reduction in leakage current. We perform temperature-aware circuit optimization by modeling delay using temperature-dependent parameters, and by imposing constraints that limit the maximum allowable number of parallel fins. We show that finFET circuits are superior in performance and produce less static power when compared to 32nm circuits."}
{"_id": "bbbe5535472e8c8e019c72c1dade4eedeef333c3", "title": "An evolutionary artificial neural networks approach for breast cancer diagnosis", "text": "This paper presents an evolutionary artificial neural network (EANN) approach based on the pareto-differential evolution (PDE) algorithm augmented with local search for the prediction of breast cancer. The approach is named memetic pareto artificial neural network (MPANN). Artificial neural networks (ANNs) could be used to improve the work of medical practitioners in the diagnosis of breast cancer. Their abilities to approximate nonlinear functions and capture complex relationships in the data are instrumental abilities which could support the medical domain. We compare our results against an evolutionary programming approach and standard backpropagation (BP), and we show experimentally that MPANN has better generalization and much lower computational cost."}
{"_id": "f63faee06ee84dc4c19746f9ddc865302de4fa70", "title": "pypet: A Python Toolkit for Data Management of Parameter Explorations", "text": "pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines."}
{"_id": "5507060ad65205536d1361b93b45278955792707", "title": "Foreground object detection from videos containing complex background", "text": "This paper proposes a novel method for detection and segmentation of foreground objects from a video which contains both stationary and moving background objects and undergoes both gradual and sudden \"once-off\" changes. A Bayes decision rule for classification of background and foreground from selected feature vectors is formulated. Under this rule, different types of background objects will be classified from foreground objects by choosing a proper feature vector. The stationary background object is described by the color feature, and the moving background object is represented by the color co-occurrence feature. Foreground objects are extracted by fusing the classification results from both stationary and moving pixels. Learning strategies for the gradual and sudden \"once-off\" background changes are proposed to adapt to various changes in background through the video. The convergence of the learning process is proved and a formula to select a proper learning rate is also derived. Experiments have shown promising results in extracting foreground objects from many complex backgrounds including wavering tree branches, flickering screens and water surfaces, moving escalators, opening and closing doors, switching lights and shadows of moving objects."}
{"_id": "88ed5906b566461a49f62ba759bafe3fbe834188", "title": "Amorphous silicon thin film transistor circuit integration for organic LED displays on glass and plastic", "text": "This paper presents design considerations along with measurement results pertinent to hydrogenated amorphous silicon (a-Si:H) thin film transistor (TFT) drive circuits for active matrix organic light emitting diode (AMOLED) displays. We describe both pixel architectures and TFT circuit topologies that are amenable for vertically integrated, high aperture ratio pixels. Here, the OLED layer is integrated directly above the TFT circuit layer, to provide an active pixel area that is at least 90% of the total pixel area with an aperture ratio that remains virtually independent of scaling. Both voltage-programmed and current-programmed drive circuits are considered. The latter provides compensation for shifts in device characteristics due to metastable shifts in the threshold voltage of the TFT. Various drive circuits on glass and plastic were fabricated and tested. Integration of on-panel gate drivers is also discussed where we present the architecture of an a-Si:H based gate de-multiplexer that is threshold voltage shift invariant. In addition, a programmable current mirror with good linearity and stability is presented. Programmable current sources are an essential requirement in the design of source driver output stages."}
{"_id": "b02afe6694b5f08c1100fdbb14e03a8aa1d13611", "title": "Nonreciprocal Phase-Shift Composite Right/Left Handed Microstrip Lines Using Ferrite-Rod-Embedded Substrate", "text": "Nonreciprocal phase-shift composite right/left handed transmission lines with small nonreciprocity in the amplitude of the transmission coefficients are proposed. The present structures are composed of a microstrip line periodically loaded with series capacitance and shunt inductance, and a ferrite rectangular rod is embedded in the dielectric substrate just below the center strip. The balanced composite right/left handed transmission line with nonreciprocal phase characteristics is designed. The numerical simulation results validate that the nonreciprocity in magnitude of the transmission coefficients is negligibly small whereas the nonreciprocity in the phase is still kept relatively large for the present configuration."}
{"_id": "f29df56481a5cc8d808328481ed2ec4a5caa7f6e", "title": "EXPLORING THE CONSUMERS \u2019 ACCEPTANCE OF ELECTRONIC RETAILING USING TECHNOLOGY ACCEPTANCE MODEL", "text": "Following the approach of Pavlou (2003), we investigate key drivers for consumers\u0300 electronic retailing (e-tailing) acceptance that are integrated into TAM (technology acceptance model). TAM is one of the better known models for explaining the intention to use technology and it integrates two perceptions: usefulness and ease of use. Th e main purpose of this paper is to explore the relationship between perceived ease of use and perceived usefulness in accepting e-tailing among Croatian consumers. In particular, the focus of the paper is on the areas which infl uenced customer commitment and loyalty in e-tailing. Th e results of the quantitative study among Croatian consumers show that electronic retailing, e.g. purchasing via the Internet, would enhance consumers\u0300 eff ectiveness in getting product information and their eff ectiveness in purchasing products."}
{"_id": "1a8c33f9e51ba01e1cdade7029f96892c7c7087b", "title": "Large-scale learning of word relatedness with constraints", "text": "Prior work on computing semantic relatedness of words focused on representing their meaning in isolation, effectively disregarding inter-word affinities. We propose a large-scale data mining approach to learning word-word relatedness, where known pairs of related words impose constraints on the learning process. We learn for each word a low-dimensional representation, which strives to maximize the likelihood of a word given the contexts in which it appears. Our method, called CLEAR, is shown to significantly outperform previously published approaches. The proposed method is based on first principles, and is generic enough to exploit diverse types of text corpora, while having the flexibility to impose constraints on the derived word similarities. We also make publicly available a new labeled dataset for evaluating word relatedness algorithms, which we believe to be the largest such dataset to date."}
{"_id": "b9bd19e0ced15ce32a9fe42484ae885a181ddad3", "title": "The Use of Information Retrieval Techniques for Intrusion Detection", "text": "Intrusion detection is a broad problem, and we need a greater range of tools than is currently available. In this article, we report a new approach. We have applied information retrieval techniques to index audit trails. These indexes can be extremely e cient at detecting attacks whose signature is an unusual combination of events, and they may consume only a very small additional amount of storage. This approach allows the intrusion detection community to adopt a wide range of techniques developed in applications ranging from library science to web search engines."}
{"_id": "439f7671affcd8ae7f666041838958bd523d63aa", "title": "Image-based ghostings for single layer occlusions in augmented reality", "text": "In augmented reality displays, X-Ray visualization techniques make hidden objects visible through combining the physical view with an artificial rendering of the hidden information. An important step in X-Ray visualization is to decide which parts of the physical scene should be kept and which should be replaced by overlays. The combination should provide users with essential perceptual cues to understand the relationship of depth between hidden information and the physical scene. In this paper we present an approach that addresses this decision in unknown environments by analyzing camera images of the physical scene and using the extracted information for occlusion management. Pixels are grouped into perceptually coherent image regions and a set of parameters is determined for each region. The parameters change the X-Ray visualization for either preserving existing structures or generating synthetic structures. Finally, users can customize the overall opacity of foreground regions to adapt the visualization."}
{"_id": "a7901e3047fcc6a1d118478e07c7516d7788deed", "title": "Twin gate rectangular recessed channel (TG-RRC) MOSFET for digital-logic applications", "text": "In this paper, twin gate rectangular recessed channel (TG-RRC) MOSFET with independent gate control is used to realize its application in digital electronics by using it as two input logic. The input logic is controlled by the independent gates which have different work functions (\u03a61 for gate 1 and \u03a62 gate 2) which are separated by oxide layer of 2 nm, thus controlling various electrical parameters such as current density, potential etc. across the channel in a swift manner and also increasing the switching characteristics of the device. It can give the full functionality of \u201cAND\u201d and \u201cNAND\u201d gate. We have shown the structure and functioning of the gates with a simulation on Silvaco TCAD. Since it is one of the fundamental gates has much applicability in digital logics starting from basic Flip-flops to sequential circuits in SRAM or DRAM cells."}
{"_id": "0b5fede45a29792fd4a748f55e026599f63a6ef7", "title": "Robot path planning : an object-oriented approach", "text": "Path planning has important applications in many areas, for example industrial robotics, autonomous systems, virtual prototyping, and computer-aided drug design. This thesis presents a new framework for developing and evaluating path planning algorithms. The framework is named CoPP (Components for Path Planning). It consists of loosely coupled and reusable components that are useful for building path planning applications. The framework is especially designed to make it easy to do fair comparisons between different path planning algorithms. CoPP is also designed to allow almost any user-defined moving system. The default type of moving system is a robot class, which is capable of describing tree-like kinematic chains. Additional features of this robot class are: joint couplings, numerical or closed-form inverse kinematics, and hierarchical robot representations. The last feature is useful when planning for complex systems like a mobile platform equipped with an arm and a hand. During the last six years, Rapidly-exploring Random Trees (RRTs) have become a popular framework for developing randomized path planning algorithms. This thesis presents a method for augmenting bidirectional RRT-planners with local trees. For problems where the solution trajectory has to pass through several narrow passages, local trees help to reduce the required planning time. To reduce the work needed for programming of industrial robots, it is desirable to allow task specifications at a very high level, leaving it up to the robot system to figure out what to do. Here we present a fast and flexible pick-and-place planner. Given an object that has to be moved to another position, the planner chooses a suitable grasp of the object and finds motions that bring the object to the desired position. The planner can also handle constraints on, e.g., the orientation of the manipulated object. For planning of pick-and-place tasks it is necessary to choose a grasp suitable to the task. Unless the grasp is given, some sort of grasp planning has to be performed. This thesis presents a fast grasp planner for a threefingered robot hand. The grasp planner could be used in an industrial setting, where a robot is to pick up irregularly shaped objects from a conveyor belt. In conjunction with grasp planning, a new method for evaluating grasp stability is presented."}
{"_id": "1cdc044cc8a9565b753dfe14b87904d770f3de17", "title": "A virtual reality based exercise system for hand rehabilitation post-stroke: transfer to function", "text": "This paper presents preliminary results from a virtual reality (VR)-based system for hand rehabilitation that uses a CyberGlove and a Rutgers Master II-ND haptic glove. This computerized system trains finger range of motion, finger flexion speed, independence of finger motion, and finger strength using specific VR simulation exercises. A remote Web-based monitoring station was developed to allow telerehabilitation interventions. The remote therapist observes simplified versions of the patient exercises that are updated in real time. Patient data is stored transparently in an Oracle database, which is also Web accessible through a portal GUI. Thus the remote therapist or attending physician can graph exercise outcomes and thus evaluate patient outcomes at a distance. Data from the VR simulations is complemented by clinical measurements of hand function and strength. Eight chronic post-stroke subjects participated in a pilot study of the above system. In keeping with variability in both their lesion size and site and in their initial upper extremity function, each subject showed improvement on a unique combination of movement parameters in VR training. Importantly, these improvements transferred to gains on clinical tests, as well as to significant reductions in task-completion times for the prehension of real objects. These results are indicative of the potential feasibility of this exercise system for rehabilitation in patients with hand dysfunction resulting from neurological impairment."}
{"_id": "24fc6065ddb01cb4d6ab4a2100170e5a68becb39", "title": "Research challenges in legal-rule and QoS-aware cloud service brokerage", "text": "The ICT industry and specifically critical sectors, such as healthcare, transportation, energy and government, require as mandatory the compliance of ICT systems and services with legislation and regulation, as well as with standards. In the era of cloud computing, this compliance management issue is exacerbated by the distributed nature of the system and by the limited control that customers have on the services. Today, the cloud industry is aware of this problem (as evidenced by the compliance program of many cloud service providers), and the research community is addressing the many facets of the legal-rule compliance checking and quality assurance problem. Cloud service brokerage plays an important role in legislation compliance and QoS management of cloud services. In this paper we discuss our experience in designing a legal-rule and QoS-aware cloud service broker, and we explore relate research issues. Specifically we provide three main contributions to the literature: first, we describe the detailed design architecture of the legal-rule and QoS-aware broker. Second, we discuss our design choices which rely on the state of the art solutions available in literature. We cover four main research areas: cloud broker service deployment, seamless cloud service migration, cloud service monitoring, and legal rule compliance checking. Finally, from the literature review in these research areas, we identify and discuss research challenges."}
{"_id": "8e30c56627d80bd1bb7b337bf9ee3d6ccb157682", "title": "Arc Flash Calculations for a 1.3-MW Photovoltaic System", "text": "When planning for the installation of a 1.3-MW photovoltaic (PV) system and its integration into an existing facility's electrical system, an arc flash study was performed to determine the best protective device settings to minimize the arc flash energy in the PV system and to determine the effect of the additional short-circuit current on arc flash calculations for the existing system. The existing system has distribution at 13.8 kV and two 10-MVA transformers to a 2.4-kV feeder system. The PV system adds the hazard of 500-Vdc subarray collection wiring and terminations. The dc arc flash exposure of PV systems, including those in the inverter enclosures, can be complicated, and this paper outlines the basic data collected for the study, the steps taken to complete the calculations, the methodology used, the results, and the team's learnings."}
{"_id": "8202c5481306f8797bf7b78f411d2d28895461e6", "title": "Joint Optimization of Constellation With Mapping Matrix for SCMA Codebook Design", "text": "Sparse code multiple access (SCMA) is being considered as a promising multiple access solution for 5G systems. A distinguishing feature of SCMA is that it combines the procedures of bit to constellation symbol mapping and subsequent spreading using multidimensional codebooks differentiated by users. Such codebooks dominate the system implementation as a main source of not only performance gain but also design complexity. This letter presents a joint constellation with mapping matrix design for SCMA codebooks, which formulates the constellations optimization as a nonconvex quadratically constrained quadratic programming problem based on a set of well-constructed mapping matrices. We elaborately solve the problem to achieve outperformance over existing SCMA design in terms of bit error rate (BER). For improving practicality, an approximate approach is further proposed to reduce the complexity significantly with a limited BER loss."}
{"_id": "df2f8cf3f16c4e0a467bee97f753e56c31b612c6", "title": "Exploring the Interconnectedness of Cryptocurrencies using Correlation Networks", "text": "Correlation networks were used to detect characteristics which, although fixed over time, have an important influence on the evolution of prices over time. Potentially important features were identified using the websites and whitepapers of cryptocurrencies with the largest userbases. These were assessed using two datasets to enhance robustness: one with fourteen cryptocurrencies beginning from 9 November 2017, and a subset with nine cryptocurrencies starting 9 September 2016, both ending 6 March 2018. Separately analysing the subset of cryptocurrencies raised the number of data points from 115 to 537, and improved robustness to changes in relationships over time. Excluding USD Tether, the results showed a positive association between different cryptocurrencies that was statistically significant. Robust, strong positive associations were observed for six cryptocurrencies where one was a fork of the other; Bitcoin / Bitcoin Cash was an exception. There was evidence for the existence of a group of cryptocurrencies particularly associated with Cardano, and a separate group correlated with Ethereum. The data was not consistent with a token\u2019s functionality or creation mechanism being the dominant determinants of the evolution of prices over time but did suggest that factors other than speculation contributed to the price."}
{"_id": "85d8b1b3483c7f4db999e7cf6b3e6231954c43dc", "title": "Learning to Play in a Day: Faster Deep Reinforcement Learning by Optimality Tightening", "text": "We propose a novel training algorithm for reinforcement learning which combines the strength of deep Q-learning with a constrained optimization approach to tighten optimality and encourage faster reward propagation. Our novel technique makes deep reinforcement learning more practical by drastically reducing the training time. We evaluate the performance of our approach on the 49 games of the challenging Arcade Learning Environment, and report significant improvements in both training time and accuracy."}
{"_id": "3705da054c46880b1d0493795993614f5142819b", "title": "Ako: Decentralised Deep Learning with Partial Gradient Exchange", "text": "Distributed systems for the training of deep neural networks (DNNs) with large amounts of data have vastly improved the accuracy of machine learning models for image and speech recognition. DNN systems scale to large cluster deployments by having worker nodes train many model replicas in parallel; to ensure model convergence, parameter servers periodically synchronise the replicas. This raises the challenge of how to split resources between workers and parameter servers so that the cluster CPU and network resources are fully utilised without introducing bottlenecks. In practice, this requires manual tuning for each model configuration or hardware type.\n We describe Ako, a decentralised dataflow-based DNN system without parameter servers that is designed to saturate cluster resources. All nodes execute workers that fully use the CPU resources to update model replicas. To synchronise replicas as often as possible subject to the available network bandwidth, workers exchange partitioned gradient updates directly with each other. The number of partitions is chosen so that the used network bandwidth remains constant, independently of cluster size. Since workers eventually receive all gradient partitions after several rounds, convergence is unaffected. For the ImageNet benchmark on a 64-node cluster, Ako does not require any resource allocation decisions, yet converges faster than deployments with parameter servers."}
{"_id": "7ae0c54752082e397c73c60328df579aa3a4e064", "title": "Positional asphyxia or diabetic ketoacidosis? A case report.", "text": "We describe an autopsy case in which a patient with diabetic ketoacidosis (DKA) was found in a head-down position. A female in her late 70s was found dead in her home in a supine position on the kitchen floor. The upper part of her body was hanging down over the edge of the kitchen floor to the backyard through the open window. External examination revealed congestion of the head and upper region of the face and neck. There were numerous petechiae on the superior palpebral conjunctivae and upper part of the oral mucosa. On internal examination, extensive hemorrhages in the subcutaneous fat tissues and muscles were observed at the upper part of the neck, although there were no external injuries on the neck. Histopathological examination revealed that hemorrhages were accompanied with infiltration of polymorphonuclear leukocytes both within and around the hemorrhages on the neck skin. Nodular glomerulosclerosis and many fat droplets in the cytoplasm of proximal tubule cells were found in the kidney. Postmortem blood analysis showed acetone (204.2 \u03bcg/ml), HbA1c (10.8%), acetoacetate (<2.0 \u03bcmol/l), 3-hydroxybutyrate (11,844 \u03bcmol/l), blood urea nitrogen (128.9 mg/dl), and creatinine (3.11 mg/dl). The glucose and acetone levels in the urine were 876.7 mg/dl and 201.4 \u03bcg/ml, respectively, suggesting that she suffered severe DKA. However, since hemorrhage of the neck could have developed only when she was still alive, asphyxia should have arisen antemortem. Based on these findings, we concluded that the direct cause of her death is positional asphyxia, which was resulted from DKA. It is difficult to diagnose the cause of death when the victim is in an unusual posture. To confirm a suspicion of positional asphyxia, photographs of the undisturbed scene are useful in addition to a precise autopsy and accurate examinations."}
{"_id": "937c4f3d90626a2b53ca51402889b03af5452e30", "title": "The impact of staff turnover on software projects: the importance of understanding what makes software practitioners tick", "text": "In this paper we investigate the impact of staff turnover on software projects. In particular we investigate whether high staff turnover damages project success. We analyse data from an empirical study of 89 software practitioners to show that projects with high staff turnover are less successful. Furthermore our empirical data suggests a relationship between high staff turnover on projects and low staff motivation levels. We discuss factors which have been previously found to improve motivation levels and conclude that improving motivation levels can reduce staff turnover, which in turn increases project success."}
{"_id": "7f7948942a3064c053d7b85f178994eddaabff87", "title": "Fast Global Registration", "text": "We present an algorithm for fast global registration of partially overlapping 3D surfaces. The algorithm operates on candidate matches that cover the surfaces. A single objective is optimized to align the surfaces and disable false matches. The objective is defined densely over the surfaces and the optimization achieves tight alignment with no initialization. No correspondence updates or closest-point queries are performed in the inner loop. An extension of the algorithm can perform joint global registration of many partially overlapping surfaces. Extensive experiments demonstrate that the presented approach matches or exceeds the accuracy of state-of-the-art global registration pipelines, while being at least an order of magnitude faster. Remarkably, the presented approach is also faster than local refinement algorithms such as ICP. It provides the accuracy achieved by well-initialized local refinement algorithms, without requiring an initialization and at lower computational cost."}
{"_id": "46144f313a8df01fecdf03147d4a680b08c5f2a2", "title": "Surfel-Based Next Best View Planning", "text": "Next best view (NBV) planning is a central task for automated three-dimensional (3-D) reconstruction in robotics. The most expensive phase of NBV computation is the view simulation step, where the information gain of a large number of candidate sensor poses are estimated. Usually, information gain is related to the visibility of unknown space from the simulated viewpoint. A well-established technique is to adopt a volumetric representation of the environment and to compute the NBV from ray casting by maximizing the number of unknown visible voxels. This letter explores a novel approach for NBV planning based on surfel representation of the environment. Surfels are oriented surface elements, such as circular disks, without explicit connectivity. A new kind of surfel is introduced to represent the frontier between empty and unknown space. Surfels are extracted during 3-D reconstruction, with minimal overhead, from a KinectFusion volumetric representation. Surfel rendering is used to generate images from each simulated sensor pose. Experiments in a real robot setup are reported. The proposed approach achieves better performance than volumetric algorithms based on ray casting implemented on graphics processing unit (GPU), with comparable results in terms of reconstruction quality. Moreover, surfel-based NBV planning can be applied in larger environments as a volumetric representation is limited by GPU memory."}
{"_id": "2c90cf37144656775a7f48f70f908f72bdb58ed8", "title": "Green Internet of Things for Smart World", "text": "Smart world is envisioned as an era in which objects (e.g., watches, mobile phones, computers, cars, buses, and trains) can automatically and intelligently serve people in a collaborative manner. Paving the way for smart world, Internet of Things (IoT) connects everything in the smart world. Motivated by achieving a sustainable smart world, this paper discusses various technologies and issues regarding green IoT, which further reduces the energy consumption of IoT. Particularly, an overview regarding IoT and green IoT is performed first. Then, the hot green information and communications technologies (ICTs) (e.g., green radio-frequency identification, green wireless sensor network, green cloud computing, green machine to machine, and green data center) enabling green IoT are studied, and general green ICT principles are summarized. Furthermore, the latest developments and future vision about sensor cloud, which is a novel paradigm in green IoT, are reviewed and introduced, respectively. Finally, future research directions and open problems about green IoT are presented. Our work targets to be an enlightening and latest guidance for research with respect to green IoT and smart world."}
{"_id": "e3ca7797b4b5e7bca630318fc268ae4f51c12076", "title": "Learning patterns for discovering domain-oriented opinion words", "text": "Sentiment analysis is a challenging task that attracted increasing interest during the last years. The availability of online data along with the business interest to keep up with consumer feedback generates a constant demand for online analysis of user-generated content. A key role to this task plays the utilization of domain-specific lexicons of opinion words that enables algorithms to classify short snippets of text into sentiment classes (positive, negative). This process is known as dictionary-based sentiment analysis. The related work tends to solve this lexicon identification problem by either exploiting a corpus and a thesaurus or by manually defining a set of patterns that will extract opinion words. In this work, we propose an unsupervised approach for discovering patterns that will extract domain-specific dictionary. Our approach (DidaxTo) utilizes opinion modifiers, sentiment consistency theories, polarity assignment graphs and pattern similarity metrics. The outcome is compared against lexicons extracted by the state-of-the-art approaches on a sentiment analysis task. Experiments on user reviews coming from a diverse set of products demonstrate the utility of the proposed method. An implementation of the proposed approach in an easy to use application for extracting opinion words from any domain and evaluate their quality is also presented."}
{"_id": "dc799bb259d6d1d1bf781d7191fe108d41a47913", "title": "Relativity of remembering: why the laws of memory vanished.", "text": "For 120 years, cognitive psychologists have sought general laws of learning and memory. In this review I conclude that none has stood the test of time. No empirical law withstands manipulation across the four sets of factors that Jenkins (1979) identified as critical to memory experiments: types of subjects, kinds of events to be remembered, manipulation of encoding conditions, and variations in test conditions. Another factor affecting many phenomena is whether a manipulation of conditions occurs in randomized, within-subjects designs rather than between-subjects (or within-subject, blocked) designs. The fact that simple laws do not hold reveals the complex, interactive nature of memory phenomena. Nonetheless, the science of memory is robust, with most findings easily replicated under the same conditions as originally used, but when other variables are manipulated, effects may disappear or reverse. These same points are probably true of psychological research in most, if not all, domains."}
{"_id": "a00454d93f6c4bb9e8452a377b51cfeff6bbaa1d", "title": "Decision quality and satisfaction: the effects of online information sources and self-efficacy", "text": "Purpose \u2013 Digital libraries and social media are two sources of online information with different characteristics. This study integrates self-efficacy into the analysis of the relationship between information sources and decision-making, with the aim of exploring the effect of self-efficacy on decision-making, as well as the interacting effect of self-efficacy and information sources on decision-making. Design/methodology/approach \u2013Survey data were collected and the partial least squares (PLS) structural equation modelling (SEM) was employed to verify the research model. Findings \u2013 The effect of digital library usage for acquiring information on perceived decision quality is larger than that of social media usage for acquiring information on perceived decision quality. Self-efficacy in acquiring information stands out as the key determinant for perceived decision quality. The effect of social media usage for acquiring information on perceived decision quality is positively moderated by self-efficacy in acquiring information. Practical implications \u2013 Decision-making is a fundamental activity for individuals, but human decision-making is often subject to biases. The findings of this study provide useful insights into decision quality improvement, highlighting the importance of self-efficacy in acquiring information in the face of information overload. Originality/value \u2013 This study integrates self-efficacy into the analysis of the relationship between information sources and decision-making, presenting a new perspective for decision-making research and practice alike."}
{"_id": "bf5e64480afc22232b9d2f2df33ed8d0e1e1a09b", "title": "BTS guidelines for the insertion of a chest drain.", "text": "D Laws, E Neville, J Duffy, on behalf of the British Thoracic Society Pleural Disease Group, a subgroup of the British Thoracic Society Standards of Care Committee . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ."}
{"_id": "d7260b8cf64aca3f538080369390490830a1e248", "title": "LOCOMO satcom terminal: A switchable RHCP/LHCP Array Antenna for on-the-move applications in Ka-band", "text": "This paper describes the main features of a multilayer antenna panel for a low cost Ka-band Array Antenna for on-the-move applications. The LOCOMO satcom terminal is based on a dual-polarized low-profile antenna with trasmit/receive and RHCP/LHCP switching capability."}
{"_id": "d9368c2c6d030e19bc43690d9a5623427722b5f0", "title": "Multi-labeled Relation Extraction with Attentive Capsule Network", "text": "To disclose overlapped multiple relations from a sentence still keeps challenging. Most current works in terms of neural models inconveniently assuming that each sentence is explicitly mapped to a relation label, cannot handle multiple relations properly as the overlapped features of the relations are either ignored or very difficult to identify. To tackle with the new issue, we propose a novel approach for multi-labeled relation extraction with capsule network which acts considerably better than current convolutional or recurrent net in identifying the highly overlapped relations within an individual sentence. To better cluster the features and precisely extract the relations, we further devise attention-based routing algorithm and sliding-margin loss function, and embed them into our capsule network. The experimental results show that the proposed approach can indeed extract the highly overlapped features and achieve significant performance improvement for relation extraction comparing to the state-of-the-art works. Introduction Relation extraction plays a crucial role in many natural language processing (NLP) tasks. It aims to identify relation facts for pairs of entities in a sentence to construct triples like [Arthur Lee, place born, Memphis]. Relation extraction has received renewed interest in the neural network era, when neural models are effective to extract semantic meanings of relations. Compared with traditional approaches which focus on manually designed features, neural methods such as Convolutional Neural Network (CNN) (Liu et al. 2013; Zeng et al. 2014) and Recurrent Neural Network (RNN) (Zhang and Wang 2015; Zhou et al. 2016) have achieved significant improvement in relation classification. However, previous neural models are unlikely to scale in the scenario where a sentence has multiple relation labels and face the challenges in extracting highly overlapped and discrete relation features due to the following two drawbacks. First, one entity pair can express multiple relations in a sentence, which will confuse relation extractor seriously. For example, as in Figure 1, the entity pair [Arthur Lee, Memphis] keeps three possible relations which are place birth, \u2217Corresponding authors: Weijia Jia, Hai Zhao, {jia-wj, zhaohai}@cs.sjtu.edu.cn Copyright c \u00a9 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. place death and place lived. The sentence S1 and S2 can both express two relations, and the sentence S3 represents another two relations. These sentences contain multiple kinds of relation features which are difficult to be identified clearly. The existing neural models tendentiously merge low-level semantic meanings to one high-level relation representation vector with methods such as max-pooling (Zeng et al. 2014; Zhang, Zhao, and Qin 2016) and word-level attention (Zhou et al. 2016). However, one high-level relation vector is still insufficient to express multiple relations precisely. Second, current methods are neglecting of the discretization of relation features. For instance, as shown in Figure 1, all the sentences express their relations with a few significant words (labeled italic in the figure) distributed discretely in the sentences. 
However, common neural methods handle sentences with fixed structures, which are difficult to gather relation features of different positions. For example, being spatially sensitive, CNNs adopt convolutional feature detectors to extract local patterns from a sliding window of vector sequences and use the max-pooling to select the prominent ones. Besides, the feature distribution of \u201cno relation (NA, others)\u201d in a dataset is different from that of definite relations. A sentence can be classified to \u201cno relation\u201d only when it does not contain any features of other relations. In this paper, to extract overlapped and discrete relation features, we propose a novel approach for multi-labeled relation extraction with an attentive capsule network. As shown in Figure 1, the relation extractor of the proposed method is constructed with three major layers that are feature extracting, feature clustering and relation predicting. The first one extracts low-level semantic meanings. The second layer clusters low-level features to high-level relation representations, and the final one predicts relation types for each relation representation. The low-level features are extracted with traditional neural models such as Bidirectional Long ShortTerm Memory (Bi-LSTM) and CNN. For the feature clustering layer, we utilize an attentive capsule network inspired by Sabour, Frosst, and Hinton (2017). Capsule (vector) is a small group of neurons used to express features. Its overall length indicates the significance of features, and the direction of a capsule suggests the specific property of the feature. The low-level semantic meanings from the first layer are embedded to amounts of low-level capsules, which will ar X iv :1 81 1. 04 35 4v 1 [ cs .C L ] 1 1 N ov 2 01 8 ID Instances Relations S1 [Arthur Lee], the leader of Love, died on Thursday in [Memphis]. person/place_death"}
{"_id": "ecda3cc93064bb274eecd94d06b47945bb672ca4", "title": "A design for an electronically-steerable holographic antenna with polarization control", "text": "We present a design for an electronic-steerable holographic antenna with polarization control composed of a radial array of Ku-band, electronically-steerable, surface-wave waveguide (SWG) artificial-impedance-surface antennas (AISA). The antenna operates by launching surface waves into each of the SWGs via a central feed network. The surface-wave impedance is electronically controlled with varactor-tuned impedance patches. The impedance is adjusted to scan the antenna in elevation, azimuth and polarization. The radial symmetry allows for 360\u00b0 azimuthal steering. If constructed with previously-demonstrated SWG AISAs, it is capable of scanning in elevation from -75\u00b0 to 75\u00b0 with gain variation of less than 3 dB. The polarization can be switched among V-Pol, H-Pol, LHCP and RHCP at will."}
{"_id": "31632b27bc8a31b1fbb7867656b8e3ca840376e0", "title": "Progressive Visual Analytics: User-Driven Visual Exploration of In-Progress Analytics", "text": "As datasets grow and analytic algorithms become more complex, the typical workflow of analysts launching an analytic, waiting for it to complete, inspecting the results, and then re-Iaunching the computation with adjusted parameters is not realistic for many real-world tasks. This paper presents an alternative workflow, progressive visual analytics, which enables an analyst to inspect partial results of an algorithm as they become available and interact with the algorithm to prioritize subspaces of interest. Progressive visual analytics depends on adapting analytical algorithms to produce meaningful partial results and enable analyst intervention without sacrificing computational speed. The paradigm also depends on adapting information visualization techniques to incorporate the constantly refining results without overwhelming analysts and provide interactions to support an analyst directing the analytic. The contributions of this paper include: a description of the progressive visual analytics paradigm; design goals for both the algorithms and visualizations in progressive visual analytics systems; an example progressive visual analytics system (Progressive Insights) for analyzing common patterns in a collection of event sequences; and an evaluation of Progressive Insights and the progressive visual analytics paradigm by clinical researchers analyzing electronic medical records."}
{"_id": "e00b5c0fea6fb05b1370cc3dc537ed2e821033b7", "title": "\u21131-penalized linear mixed-effects models for high dimensional data with application to BCI", "text": "Recently, a novel statistical model has been proposed to estimate population effects and individual variability between subgroups simultaneously, by extending Lasso methods. We will for the first time apply this so-called \u2113(1)-penalized linear regression mixed-effects model for a large scale real world problem: we study a large set of brain computer interface data and through the novel estimator are able to obtain a subject-independent classifier that compares favorably with prior zero-training algorithms. This unifying model inherently compensates shifts in the input space attributed to the individuality of a subject. In particular we are now for the first time able to differentiate within-subject and between-subject variability. Thus a deeper understanding both of the underlying statistical and physiological structures of the data is gained."}
{"_id": "4f4161e5ad83f8d31d387c5ff0d7b51fd17fe073", "title": "BotCop: An Online Botnet Traffic Classifier", "text": "A botnet is a network of compromised computers infected with malicious code that can be controlled remotely under a common command and control (C&C) channel. As one the most serious security threats to the Internet, a botnet cannot only be implemented with existing network applications (e.g. IRC, HTTP, or Peer-to-Peer) but also can be constructed by unknown or creative applications, thus making the botnet detection a challenging problem. In this paper, we propose a new online botnet traffic classification system, called BotCop, in which the network traffic are fully classified into different application communities by using payload signatures and a novel decision tree model, and then on each obtained application community, the temporal-frequent characteristic of flows is studied and analyzed to differentiate the malicious communication traffic created by bots from normal traffic generated by human beings. We evaluate our approach with about 30 million flows collected over one day on a large-scale WiFi ISP network and results show that the proposed approach successfully detects an IRC botnet from about 30 million flows with a high detection rate and a low false alarm rate."}
{"_id": "0d57d7cba347c6b8929a04f7391a25398ded096c", "title": "On the compression of recurrent neural networks with an application to LVCSR acoustic modeling for embedded speech recognition", "text": "We study the problem of compressing recurrent neural networks (RNNs). In particular, we focus on the compression of RNN acoustic models, which are motivated by the goal of building compact and accurate speech recognition systems which can be run efficiently on mobile devices. In this work, we present a technique for general recurrent model compression that jointly compresses both recurrent and non-recurrent inter-layer weight matrices. We find that the proposed technique allows us to reduce the size of our Long Short-Term Memory (LSTM) acoustic model to a third of its original size with negligible loss in accuracy."}
{"_id": "3fcb84758e328d49e6e0c8115da594fd8267bee7", "title": "Efficient Object Instance Search Using Fuzzy Objects Matching", "text": "Recently, global features aggregated from local convolutional features of the convolutional neural network have shown to be much more effective in comparison with hand-crafted features for image retrieval. However, the global feature might not effectively capture the relevance between the query object and reference images in the object instance search task, especially when the query object is relatively small and there exist multiple types of objects in reference images. Moreover, the object instance search requires to localize the object in the reference image, which may not be achieved through global representations. In this paper, we propose a Fuzzy Objects Matching (FOM) framework to effectively and efficiently capture the relevance between the query object and reference images in the dataset. In the proposed FOM scheme, object proposals are utilized to detect the potential regions of the query object in reference images. To achieve high search efficiency, we factorize the feature matrix of all the object proposals from one reference image into the product of a set of fuzzy objects and sparse codes. In addition, we refine the feature of the generated fuzzy objects according to its neighborhood in the feature space to generate more robust representation. The experimental results demonstrate that the proposed FOM framework significantly outperforms the state-of-theart methods in precision with less memory and computational cost on three public datasets. The task of object instance search, is to retrieve all the images containing a specific object query and localize the query object in the reference images. It has received a sustained attention over the last decade, leading to many object instance search systems (Meng et al. 2010; Jiang, Meng, and Yuan 2012; Jiang et al. 2015; Tolias, Avrithis, and J\u00e9gou 2013; Tao et al. 2014; Razavian et al. 2014a; 2014b; Tolias, Sicre, and J\u00e9gou 2016; Meng et al. 2016; Bhattacharjee et al. 2016b; 2016a; Mohedano et al. 2016; Cao et al. 2016; Wu et al. 2016). Since the query object only occupies a small part of an image, the global representation may not be effective to capture the relevance between the query object with reference image. Therefore, the relevance between the query object and one reference image is not determined by the overall similarity between the query and the reference image. Copyright c \u00a9 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. ..."}
{"_id": "779cbb350c11a5b24a8a17114cff0c26fe3747e6", "title": "Parsing English into Abstract Meaning Representation Using Syntax-Based Machine Translation", "text": "We present a parser for Abstract Meaning Representation (AMR). We treat Englishto-AMR conversion within the framework of string-to-tree, syntax-based machine translation (SBMT). To make this work, we transform the AMR structure into a form suitable for the mechanics of SBMT and useful for modeling. We introduce an AMR-specific language model and add data and features drawn from semantic resources. Our resulting AMR parser significantly improves upon state-of-the-art results."}
{"_id": "d8d160d9f6e987aa9e95a2480e97105fdeebca8f", "title": "An image encryption scheme based on elliptic curve pseudo random and Advanced Encryption System", "text": "Elliptic curve cryptography (ECC) has proven to be an effective cryptography. ECC has its own advantages such as efficient key size compared to other public key infrastructures. This paper exploits the Elliptic curve random generator defined by National Institute of Standards and Technology (NIST) to generate a sequence of arbitrary numbers based on curves. The random generation phase is based on public shared key and a changing point G, which is a generator of a curve to obtain random sequences. Then, Advanced Encryption System is applied to these sequences acquiring arbitrary keys for encrypting image. Using AES alongside well distributed randoms provides a prominent encryption technique. Our experiments show that the proposed method fulfills the basics of cryptography including simpleness and correctness. Moreover, the results of the evaluation prove the effectiveness and security of the proposed method."}
{"_id": "9110a3b64b6175a52c78088c52de32c576e9a60a", "title": "Black Box Testing: Techniques for Functional Testing of Software and Systems", "text": "Editor: Peter Anderson Rochester Institute of Technology 1 Lomb Memorial Dr. Rochester, NY 14623 pga@cs.rit.edu Each technique receives similar coverage, including a description of its new and prerequisite vocabulary, its model, the bug assumptlons that underlie its effectiveness, typical applications, examples (almost all based on standard IRS tax forms), caveats and limitations, automation and tools for the technique, a self-evaluation quiz, and exercises. Common threads in the presentation show the role"}
{"_id": "ba3b96863f0e6fd1c678aacc0837e654b317ef81", "title": "Introduction to Neurolinguistics", "text": "attitude is lacking in patients with anomia (inability to name), since they cannot categorize, and in patients with agrammatism (difficulties using grammatical morphemes and function words), since they cannot use elements that \u201chave no meaning in isolation.\u201d In his description of abstract attitude, Goldstein mixes psychological factors, such as the ability to separate figure and ground, with physiological factors, like spreading and activation (i.e., the fact that stimuli can activate an abnormally large area in the cortex if thresholds of activation for neurons are changed by brain damage, which makes the patient unable to detect, for example, exactly where somebody is touching his skin). An abstract attitude is required for the fully efficient use of language and therefore the linguistic disorder arises secondarily. Words lose their function to organize the external world. Goldstein assumed different positions with respect to localization, but he was mainly an anti-localist. He claimed that the cortex could be separated into a \u201cperipheral part\u201d with structural localization and a \u201ccentral part\u201d which is equipotential and constructs \u201cdynamic structures\u201d against a \u201cdynamic background.\u201d Lesions in the central part cause a loss of abstract attitude. It was Karl Lashley (1933, 1950), a physiologist, who presented the theory of cerebral equipotentiality and mass effect. Equipotentiality means that all parts of the cortex have the same \u201cability\u201d and mass effect means that functions are dependent on how much of the cortex is used. Lashley claimed that studies of humans were too unreliable, due to humans\u2019 complex behavior, and therefore concentrated on experiments with rats in mazes. He accepted specified functions only for motor and sensory projection areas and objected to attempts to localize mental functions. He emphasized that psychological unity does not guarantee neurological unity; for example, many structures participate in intentional movements. Lashley tried to identify different integrative mechanisms, such as intensity, spatial orientation and serial ordering. Inspired by Marie, Bay (1962) suggested that speech is dependent on a large number of \u201cextralinguistic\u201d factors, which are disturbed by brain damage. In aphasia, the transmission of information is disturbed. This disorder always involves a reduced intellect. The aphasic patient\u2019s concepts become pathologically distorted, which can be seen in tasks like drawing or sculpting. The same defect is noticed in speech: concepts become fuzzy, deformed, and poorly differentiated. Jason Brown\u2019s theory of aphasia is inspired by Jackson, von Monakow, and Freud. It contains, among other things, evolutionary aspects and a regression hypothesis combined with a functional view of aphasia. He raises the connections between emotions, degree of consciousness and intentionality, creativity, and language. Brown\u2019s theory is presented in several books, including Mind, brain and consciousness from 1977. (See also Chapter 10.) Chapter 2. The development of theories about brain and language 27 Localism and associationism Localism continued to survive but it did not attract much attention during the first half of the 20th century. Some examples are mentioned below. 
Swedish neurologist Salomon Henschen (1926) formulated a detailed model on the basis of all the clinical pathological cases of aphasia that had been described to that point. The model contained a large number of separate centers for different aspects of language function. Korbinian Brodmann (1909), another neurologist, divided the cortex into numerous histologically defined areas, each of which had a determined physiological function. His numbering system is still used to give exact references to areas of the brain (see Figure 14.6 in Chapter 14). Karl Kleist (1916) investigated war injuries from the First World War and tried to find a psychological explanation for each cytoand myeloarchitectonic (related to cellular patterns and myelinated nerve cell patterns, respectively) area. Speech and language were localized according to tradition. Johannes Nielsen (1947) devoted himself to the anatomical-physiological classification of cortical functions. He distinguished between receptive and afferent signals, and between primary identification through one sense and secondary identification via associations of different senses at a higher level. According to Nielsen, recognition means receiving impressions and comparing them to engrams (physical alterations of nerve tissue in response to stimuli). Remembering, on the other hand, is starting from the concept and identifying it with an engram, which takes place at a higher level. Two types of aphasia can then occur: (a) a type that changes the connection between language and recognition, and (b) a type that changes the \u201csignification\u201d (what Head called \u201csemantic aphasia\u201d). The latter is not localized in specific areas; rather, any lesion can affect the whole system. The destruction of a projection area causes a total loss of function, while the destruction of an adjacent zone causes agnosia (inability to recognize) and the destruction of association zones that are not localized results in memory disorders. Thus, this theory is strongly inspired by associationism. Associationism is rediscovered by Geschwind Associationism continues to be very influential today. It was Norman Geschwind who rediscovered associationism and made it known as \u201cconnectionism,\u201d or the neoclassical school of neurolinguistics, in his 1965 book Disconnection syndromes in animals and man. (For more information, see Chapter 3.) Introduction to Neurolinguistics 28 Dynamic localization of function: Pavlov, Vygotsky, and Luria Dynamic localization of function is a tradition originating in Russia which developed during the 20th century in parallel with the other traditions. Today, it also has considerable influence in Western Europe and the U.S. The well-known physiologist Ivan Pavlov (1949) claimed that complex behavior could not be handled by isolated and fixed brain structures, but must emerge during the ontogenetic development of dynamic systems. These systems were said to consist of connected structures in different places within the nervous system. Psychologist Lev Vygotsky emphasized that it is necessary to first investigate what is to be localized, before asking the question where. Functions must be analyzed with respect to ontogenetic development. Vygotsky (1970) saw a function as a complex activity whereby the whole organism adapts to a task. The activity can be performed in different ways, because of the cooperation of several organs. 
This dynamic cooperation is controlled by neural structures that monitor different organs and are localized in different places. Vygotsky also formulated a theory about the development of language and thinking that became very influential. The noted psychologist and aphasiologist Alexander Luria studied aphasia within the framework of the theory of dynamic localization of function. Luria\u2019s work has been extremely influential in aphasia research, and especially in clinical work. (It will be further discussed in Chapter 3.) The test psychological tradition Theodore Weisenburg and Katherine McBride investigated cognitive factors in persons with aphasia, persons with right-hemisphere lesions, and control subjects, using a large test battery of verbal and nonverbal tasks. Their methodology became the model for a large number of studies. The results showed that persons with aphasia varied a great deal cognitively. In general, aphasia was associated with a cognitive disorder, but there was no necessary correlation. Certain persons with severe aphasia had excellent results on nonverbal tests. Thus, there was no support for Head\u2019s theories. Weisenburg and McBride claimed that the importance of language for thinking must differ in different individuals and that the intellect before the lesion must be determinant. Many psychologists were inspired by Weisenburg and McBride and by Goldstein and developed experiments for testing \u201csymbolic thinking\u201d in aphasics. Subjects with aphasia performed classification tasks, Raven\u2019s matrices (analogy tests), and similar tests. The results most often showed considerable variability, and the general conclusion was that the cognitive disorder, where such a disorder existed, could not be attributed to the linguistic disorder. Nevertheless, certain types of cognitive disorders were often found in persons with aphasia, such as difficulties in discriminating faces and Chapter 2. The development of theories about brain and language 29 interpreting mimed actions, as well as construction apraxia (i.e., problems performing spatial construction tasks). Linguists and the linguistic influence on aphasiology Aphasia research was traditionally an area for neurologists, but philosophers and psychologists have also had considerable influence on the area. In the middle of the 20th century, linguists joined this group. They saw studies of aphasia as a means of testing general theories about language. By functionally analyzing different disorders using the descriptive methods of linguistics, they aimed to gain better understanding of the role of the brain in normal speech. The term neurolinguistics was eventually adopted for these studies. In 1941, Roman Jakobson published the book Kindersprache, Aphasie und allgemeine Lautgesetze (\u201cChild language, aphasia and phonological universals\u201d), in which he pointed out parallels between linguistic universals, children\u2019s language development, and symptoms of aphasia. At this point, he was the only linguist to systematically describe aphasia within the structuralist framework. Jakobson applied Ferdinand de Saussure\u2019s principle of paradigmatic and syntagmatic relations in language. If language consisted of two types of operations \u2014 choice of units and combination of units \u2014 then two types of aphasia should be possible: (a) a"}
{"_id": "fa72ad1e31c2a8c0fdc9a6ccd6db392bb883c684", "title": "An\u00e1lisis de la Simplificaci\u00f3n de Expresiones Num\u00e9ricas en Espa\u00f1ol mediante un Estudio Emp\u00edrico", "text": "In this paper we present the results of an empirical study carried out on a parallel corpus of original and manually simplified texts in Spanish and a subsequent survey, with the aim of targeting simplification operations concerning numerical expressions. For the purpose of the study, a \u201cnumerical expression\u201d is understood as any phrase expressing quantity possibly modified with a numerical hedge, such as almost a quarter. Data is analyzed both in context and in isolation, and attention is paid to the difference the target reader makes to simplification. Our future work aims at computational implementation of the transformation rules extracted so far."}
{"_id": "40eb320dd27fb6414d350d4ef20806ed393c28be", "title": "Sistema de Conversi\u00f3n Texto a Voz de C\u00f3digo Abierto Para Lenguas Ib\u00e9ricas", "text": "This paper presents a text-to-speech system based on statistical synthesis which, for the first time, allows generating speech in any of the four official languages of Spain as well as English. Using the AhoTTS system already developed for Spanish and Basque as a starting point, we have added support for Catalan, Galician and English using the code of available open-source modules. The resulting system, named multilingual AhoTTS, has also been released as open-source and it is already being used in real applications."}
{"_id": "a566bcd3435e21210fa91f95c1537f0f94a44460", "title": "The RB and p 53 pathways in cancer", "text": "The life history of cancer cells encompasses a series of genetic missteps in which normal cells are progressively transformed into tumor cells that invade surrounding tissues and become malignant. Most prominent among the regulators disrupted in cancer cells are two tumor suppressors, the retinoblastoma protein (RB) and the p53 transcription factor. Here, we discuss interconnecting signaling pathways controlled by RB and p53, attempting to explain their potentially universal involvement in the etiology of cancer. Pinpointing the various ways by which the functions of RB and p53 are subverted in individual tumors should provide a rational basis for developing more refined tumor-specific therapies."}
{"_id": "52f40003253c1301f78d0f95e77f44e48c80cb9e", "title": "Creativity, the Turing Test, and the (Better) Lovelace Test", "text": "The Turing Test (TT) is claimed by many to be a way to test for the presence, in computers, of such ``deep'' phenomena as thought and consciousness. Unfortunately, attempts to build computational systems able to pass TT (or at least restricted versions of this test) have devolved into shallow symbol manipulation designed to, by hook or by crook, trick. The human creators of such systems know all too well that they have merely tried to fool those people who interact with their systems into believing that these systems really have minds. And the problem is fundamental: the structure of the TT is such as to cultivate tricksters. A better test is one that insists on a certain restrictive epistemic relation between an artificial agent (or system) A, its output o, and the human architect H of A \u2013 a relation which, roughly speaking, obtains when H cannot account for how A produced o. We call this test the ``Lovelace Test'' in honor of Lady Lovelace, who believed that only when computers originate things should they be believed to have minds."}
{"_id": "199006c69026ed2dde8a25e676b67bdcb76f171b", "title": "Social network analysis: a powerful strategy, also for the information sciences", "text": "Social network analysis (SNA) is not a formal theory in sociology but rather a strategy for investigating social structures. As it is an idea that can be applied in many fields, we study, in particular, its influence in the information sciences. Information scientists study publication, citation and co-citation networks, collaboration structures and other forms of social interaction networks. Moreover, the Internet represents a social network of an unprecedented scale. We are convinced that in all these studies social network analysis can successfully be applied. SNA is further related to recent theories concerning the free market economy, geography and transport networks. The growth of SNA is documented and a co-author network of SNA is drawn. Centrality measures of this network are calculated. * To whom all correspondence should be addressed."}
{"_id": "16c43db3f50e714d4a6d773cd8b709a34b338167", "title": "Classifying the Semantic Relations in Noun Compounds via a Domain-Specific Lexical Hierarchy", "text": "We are developing corpus-based techniques for identifying semantic relations at an intermediate level of description (more specific than those used in case frames, but more general than those used in traditional knowledge representation systems). In this paper we describe a classification algorithm for identifying relationships between two-word noun compounds. We find that a very simple approach using a machine learning algorithm and a domain-specific lexical hierarchy successfully generalizes from training instances, performing better on previously unseen words than a baseline consisting of training on the words themselves."}
{"_id": "3100d6b90ca7385e09c6b59478fc817f314492a4", "title": "Studying Online Social Networks", "text": "What is Social Network Analysis? The Social Network Approach Units of Analysis Relations Ties Multiplexity Composition Beyond the Tie Social Networks Two Views: Ego-Centered and Whole Networks Network Characteristics Partitioning Networks Groups STUDYING ONLINE SOCIAL NETWORKS, by Laura Garton... http://jcmc.indiana.edu/vol3/issue1/garton.html 1 of 32 9/21/11 11:24 AM Positional Analysis Networks of Networks Placing CMC in Context Collecting Network Data for CMC Studies Selecting a Sample Collecting Data How are Network Data Analyzed? Ego-Centered Analysis Whole Network Analysis Conclusions References About the Authors"}
{"_id": "51a01688f35a5570de3fb5e7926f828ca2c672fe", "title": "MAPLSC: A novel multi-class classifier for medical diagnosis", "text": "Analysis of clinical records contributes to the Traditional Chinese Medicine (TCM) experience expansion and techniques promotion. More than two diagnostic classes (diagnostic syndromes) in the clinical records raise a popular data mining problem: multi-value classification. In this paper, we propose a novel multi-class classifier, named Multiple Asymmetric Partial Least Squares Classifier (MAPLSC). MAPLSC attempts to be robust facing imbalanced data distribution in the multi-value classification. Elaborated comparisons with other seven state-of-the-art methods on two TCM clinical datasets and four public microarray datasets demonstrate MAPLSC's remarkable improvements."}
{"_id": "0f238588114dea88a3eeefe9186427436fe377e9", "title": "Template-Based Information Extraction without the Templates", "text": "Standard algorithms for template-based information extraction (IE) require predefined template schemas, and often labeled data, to learn to extract their slot fillers (e.g., an embassy is the Target of a Bombing template). This paper describes an approach to template-based IE that removes this requirement and performs extraction without knowing the template structure in advance. Our algorithm instead learns the template structure automatically from raw text, inducing template schemas as sets of linked events (e.g., bombings include detonate, set off, and destroy events) associated with semantic roles. We also solve the standard IE task, using the induced syntactic patterns to extract role fillers from specific documents. We evaluate on the MUC-4 terrorism dataset and show that we induce template structure very similar to handcreated gold structure, and we extract role fillers with an F1 score of .40, approaching the performance of algorithms that require full knowledge of the templates."}
{"_id": "d36b1e032270148c68be3817bbd5e58b54148017", "title": "Emerging Trends in Smart Home Security, Privacy, and Digital Forensics", "text": "Technology integration is becoming an impetus to everyday lives. This new interconnected world can be found from our most private spaces to the public ones. Smart homes, which is the use of Internet of Things (IoT) within a home, has become the utmost concern in the security and privacy domain. In this paper, we review the current literature from both research and practice and offer five research trends for security and privacy in smart homes: potential for remote security breaches, risks for smart home devices, privacy violations, infrastructure vulnerabilities, and digital forensics challenges. We integrated these five trends in a conceptual model showcasing their roles in the rapidly changing IoT landscape. Combining the trends and smart forensics, we elucidate the bimodal lenses in security and privacy: preventive and investigative. These discussions offer research directions and practitioners\u2019 implications for further development of a safe and secured smart home."}
{"_id": "256f1312201a95c24280b8549a75fda32536ad55", "title": "COA: finding novel patents through text analysis", "text": "In recent years, the number of patents filed by the business enterprises in the technology industry are growing rapidly, thus providing unprecedented opportunities for knowledge discovery in patent data. One important task in this regard is to employ data mining techniques to rank patents in terms of their potential to earn money through licensing. Availability of such ranking can substantially reduce enterprise IP (Intellectual Property) management costs. Unfortunately, the existing software systems in the IP domain do not address this task directly. Through our research, we build a patent ranking software, named COA (Claim Originality Analysis) that rates a patent based on its value by measuring the recency and the impact of the important phrases that appear in the \"claims\" section of a patent. Experiments show that COA produces meaningful ranking when comparing it with other indirect patent evaluation metrics--citation count, patent status, and attorney's rating. In reallife settings, this tool was used by beta-testers in the IBM IP department. Lawyers found it very useful in patent rating, specifically, in highlighting potentially valuable patents in a patent cluster. In this article, we describe the ranking techniques and system architecture of COA. We also present the results that validate its effectiveness."}
{"_id": "10999a5c8711a39acefc0484afe73f478f5377a4", "title": "Honey: A Biologic Wound Dressing.", "text": "Honey has been used as a wound dressing for thousands of years, but only in more recent times has a scientific explanation become available for its effectiveness. It is now realized that honey is a biologic wound dressing with multiple bioactivities that work in concert to expedite the healing process. The physical properties of honey also expedite the healing process: its acidity increases the release of oxygen from hemoglobin thereby making the wound environment less favorable for the activity of destructive proteases, and the high osmolarity of honey draws fluid out of the wound bed to create an outflow of lymph as occurs with negative pressure wound therapy. Honey has a broad-spectrum antibacterial activity, but there is much variation in potency between different honeys. There are 2 types of antibacterial activity. In most honeys the activity is due to hydrogen peroxide, but much of this is inactivated by the enzyme catalase that is present in blood, serum, and wound tissues. In manuka honey, the activity is due to methylglyoxal which is not inactivated. The manuka honey used in wound-care products can withstand dilution with substantial amounts of wound exudate and still maintain enough activity to inhibit the growth of bacteria. There is good evidence for honey also having bioactivities that stimulate the immune response (thus promoting the growth of tissues for wound repair), suppress inflammation, and bring about rapid autolytic debridement. There is clinical evidence for these actions, and research is providing scientific explanations for them."}
{"_id": "19f6217d4f8de723c497ad55f025d818ce64c1db", "title": "Fast Component-Based QR Code Detection in Arbitrarily Acquired Images", "text": "Quick Response (QR) codes are a type of 2D barcode that is becoming very popular, with several application possibilities. Since they can encode alphanumeric characters, a rich set of information can be made available through encoded URL addresses. In particular, QR codes could be used to aid visually impaired and blind people to access web based voice information systems and services, and autonomous robots to acquire context-relevant information. However, in order to be decoded, QR codes need to be properly framed, something that robots, visually impaired and blind people will not be able to do easily without guidance. Therefore, any application that aims assisting robots or visually impaired people must have the capability to detect QR codes and guide them to properly frame the code. A\u00a0fast component-based two-stage approach for detecting QR codes in arbitrarily acquired images is proposed in this work. In the first stage, regular components present at three corners of the code are detected, and in the second stage geometrical restrictions among detected components are verified to confirm the presence of a code. Experimental results show a high detection rate, superior to 90\u00a0%, at a fast speed compatible with real-time applications."}
{"_id": "34ab424fba4b49d0b7b058045b3b5b2b8a4950fd", "title": "Detection and Analysis of 2016 US Presidential Election Related Rumors on Twitter", "text": "The 2016 U.S. presidential election has witnessed the major role of Twitter in the year\u2019s most important political event. Candidates used this social media platform extensively for online campaigns. Meanwhile, social media has been filled with rumors, which might have had huge impacts on voters\u2019 decisions. In this paper, we present a thorough analysis of rumor tweets from the followers of two presidential candidates: Hillary Clinton and Donald Trump. To overcome the difficulty of labeling a large amount of tweets as training data, we detect rumor tweets by matching them with verified rumor articles. We analyze over 8 million tweets collected from the followers of the two candidates. Our results provide answers to several primary concerns about rumors in this election, including: which side of the followers posted the most rumors, who posted these rumors, what rumors they posted, and when they posted these rumors. The insights of this paper can help us understand the online rumor behaviors in American politics."}
{"_id": "399df5d25bb7208ab83efb6e7a3f22afb9ea8290", "title": "Static hand gesture recognition based on HOG characters and support vector machines", "text": "Gesture recognition technology has important significance in the field of human-computer interaction (HCI), the gesture recognition technology which is based on visual is sensitive to the impact of the experimental environment lighting, and so, the recognition result will produce a greater change; it makes this technology one of the most challenging topics. HOG feature which is successfully applied to pedestrian detection is operating on the local grid unit of image, so it can maintain a good invariance on geometric and optical deformation. In this paper, we extracted the gradient direction histogram (HOG) features of gestures, then, a Support Vector Machines is used to train these feature vectors, at testing time, a decision is taken using the previously learned SVMs, and compared the same gesture recognition rate in different light conditions. Experimental results show that the HOG feature extraction and multivariate SVM classification methods has a high recognition rate, and the system has a better robustness for the illumination."}
{"_id": "42a0fbc1aa854de4bb9f0c376ad8bfcf29445e58", "title": "A Surface-Growing Approach to Multi-View Stereo Reconstruction", "text": "We present a new approach to reconstruct the shape of a 3D object or scene from a set of calibrated images. The central idea of our method is to combine the topological flexibility of a point-based geometry representation with the robust reconstruction properties of scene-aligned planar primitives. This can be achieved by approximating the shape with a set of surface elements (surfels) in the form of planar disks which are independently fitted such that their footprint in the input images matches. Instead of using an artificial energy functional to promote the smoothness of the recovered surface during fitting, we use the smoothness assumption only to initialize planar primitives and to check the feasibility of the fitting result. After an initial disk has been found, the recovered region is iteratively expanded by growing further disks in tangent direction. The expansion stops when a disk rotates by more than a given threshold during the fitting step. A global sampling strategy guarantees that eventually the whole surface is covered. Our technique does not depend on a shape prior or silhouette information for the initialization and it can automatically and simultaneously recover the geometry, topology, and visibility information which makes it superior to other state-of-the-art techniques. We demonstrate with several high-quality reconstruction examples that our algorithm performs highly robustly and is tolerant to a wide range of image capture modalities."}
{"_id": "893abc7489cf80aa9ce7f39c2f20d680584ee20f", "title": "Neural evidence for the interplay between language, gesture, and action: A review", "text": "Co-speech gestures embody a form of manual action that is tightly coupled to the language system. As such, the co-occurrence of speech and co-speech gestures is an excellent example of the interplay between language and action. There are, however, other ways in which language and action can be thought of as closely related. In this paper we will give an overview of studies in cognitive neuroscience that examine the neural underpinnings of links between language and action. Topics include neurocognitive studies of motor representations of speech sounds, action-related language, sign language and co-speech gestures. It will be concluded that there is strong evidence on the interaction between speech and gestures in the brain. This interaction however shares general properties with other domains in which there is interplay between language and action."}
{"_id": "edb02471b67e50b228e169997b8670fdff54e99e", "title": "Rubinstein Taybi Syndrome in an Indian Child due to EP300 Gene Mutation.", "text": "To the Editor : A 10-y-old girl was diagnosed with Rubinstein Taybi Syndrome (RTS) based on distinctive facial features (low hanging columella, high palate, grimacing smile, talon cusps), broad, angulated thumbs and great toes, short stature and moderate intellectual disability (Fig. 1). Her younger sister, who died at 11 mo of age due to acute myeloid leukemia, did not have clinical features of RTS. Complete sequencing of CREBBP and EP300 genes was done for the proband as previously described [1, 2]. No pathogenic mutation was noted in CREBBP gene. She was found to be heterozygous for c.1282C > T or p.P428S mutation in exon five of EP300 gene (Fig. 1). The mutation is predicted to affect the donor splicing [3]. The patient\u2019s mother was heterozygous for the same mutation. Prenatal diagnosis was offered to the couple and the fetus did not carry this mutation, and was confirmed to be unaffected postnatally. This mutation was not identified in 100 normal unrelated individuals. Deletion/duplication analysis of CREBBP and EP300 gene was not available. Our patient had horizontal palpebral fissures as described with EP300 related RTS, compared with CREBBP related RTS, who have anti-mongoloid slant [4]. Although patients with EP300 mutation have broad thumbs or toes, radial deviation has not been reported [2] (Fig. 1). This mutation was inherited from her mother who had mild hypertelorism, broad nasal bridge, normal intelligence and hands/feet. Negri et al. reported a patient with mutation in EP300 gene (p.N1511T) that was inherited from a healthy mother. Low penetrance in EP300 related RTS may present a challenge during genetic counseling and prenatal diagnosis [2]. EP300 gene somatic mutations are reported in acute leukemia and RTS patients have an increased incidence of cancer [2, 5]. Thus, we report an Indian child with Rubinstein Taybi Syndrome with familial EP300 gene mutation."}
{"_id": "d45359ef03aea64f06c4b3df16d29e157137fb20", "title": "Optimized Edge Detection Algorithm for Face Recognition", "text": "Face recognition is one of the most challenging tasks in the field of image processing. This paper presents an optimized edge detection algorithm for the task of face recognition. In this method a gradient based filter using a wide convolution kernel is applied on the image to extract the edges. Later a thinning algorithm optimized for the wide convolution kernel is applied on the extracted edges. The advantages of this method over other gradient based methods is its ability to find the missing edges more successfully and boost the significant edges of the overall facial contour."}
{"_id": "acacd5e9a15c32d1688ead1c771418bd27799ea7", "title": "Modeling and control strategy for the transition of a convertible tail-sitter UAV", "text": "This paper addresses the problem of the transition between rotary-wing and fixed-wing flight of a tail-sitter unmanned aerial vehicle (UAV). A nonlinear control design is presented to regulate the vertical-flight dynamics of the vehicle. We present the dynamic and aerodynamic equations that model the behavior of the vehicle before (vertical flight), during and after (forward flight) the transition. A low-cost embedded system, including an homemade inertial measurement unit (IMU), is used to perform autonomous attitude-stabilized flight in vertical mode."}
{"_id": "78515ff75593b82180598a834b54af2bdf41c9a3", "title": "Towards Knowledge-Driven Annotation", "text": "While the Web of data is attracting increasing interest and rapidly growing in size, the major support of information on the surface Web are still multimedia documents. Semantic annotation of texts is one of the main processes that are intended to facilitate meaning-based information exchange between computational agents. However, such annotation faces several challenges such as the heterogeneity of natural language expressions, the heterogeneity of documents structure and context dependencies. While a broad range of annotation approaches rely mainly or partly on the target textual context to disambiguate the extracted entities, in this paper we present an approach that relies mainly on formalized-knowledge expressed in RDF datasets to categorize and disambiguate noun phrases. In the proposed method, we represent the reference knowledge bases as co-occurrence matrices and the disambiguation problem as a 0-1 Integer Linear Programming (ILP) problem. The proposed approach is unsupervised and can be ported to any RDF knowledge base. The system implementing this approach, called KODA, shows very promising results w.r.t. state-of-the-art annotation tools in cross-domain experimentations."}
{"_id": "3672a6c497d6bd51948e110f92e961d849457aed", "title": "Time - motion analysis of professional rugby union players during match-play.", "text": "The aim of this study was to quantify the movement patterns of various playing positions during professional rugby union match-play, such that the relative importance of aerobic and anaerobic energy pathways to performance could be estimated. Video analysis was conducted of individual players (n=29) from the Otago Highlanders during six \"Super 12\" representative fixtures. Each movement was coded as one of six speeds of locomotion (standing still, walking, jogging, cruising, sprinting, and utility), three states of non-running intensive exertion (rucking/mauling, tackling, and scrummaging), and three discrete activities (kicking, jumping, passing). The results indicated significant demands on all energy systems in all playing positions, yet implied a greater reliance on anaerobic glycolytic metabolism in forwards, due primarily to their regular involvement in non-running intense activities such as rucking, mauling, scrummaging, and tackling. Positional group comparisons indicated that while the greatest differences existed between forwards and backs, each positional group had its own unique demands. Front row forwards were mostly involved in activities involving gaining/retaining possession, back row forwards tended to play more of a pseudo back-line role, performing less rucking/mauling than front row forwards, yet being more involved in aspects of broken play such as sprinting and tackling. While outside backs tended to specialize in the running aspects of play, inside backs tended to show greater involvement in confrontational aspects of play such as rucking/mauling and tackling. These results suggest that rugby training and fitness testing should be tailored specifically to positional groups rather than simply differentiating between forwards and backs."}
{"_id": "52928cc42540e031a02ca0788a785a8824d0f591", "title": "The motivations and experiences of the on-demand mobile workforce", "text": "On-demand mobile workforce applications match physical world tasks and willing workers. These systems offer to help conserve resources, streamline courses of action, and increase market efficiency for micro- and mid-level tasks, from verifying the existence of a pothole to walking a neighbor's dog. This study reports on the motivations and experiences of individuals who regularly complete physical world tasks posted in on-demand mobile workforce marketplaces. Data collection included semi-structured interviews with members (workers) of two different services. The analysis revealed the main drivers for participating in an on-demand mobile workforce, including desires for monetary compensation and control over schedules and task selection. We also reveal main reasons for task selection, which involve situational factors, convenient physical locations, and task requester profile information. Finally, we discuss the key characteristics of the most worthwhile tasks and offer implications for novel crowdsourcing systems for physical world tasks."}
{"_id": "4f127806a889f02fbef6a87ae78d58b9c9f3ca63", "title": "Properties of cutaneous mechanoreceptors in the human hand-related to touch sensation", "text": "Recordings from single peripheral nerve fibres made it possible to analyse the functional properties of tactile afferent units supplying the glabrous skin of the human hand and to assess directly the relation between impulse discharge and perceptive experiences. The 17,000 tactile units in this skin area of the human hand are of four different types: two fast adapting types, FA I and FA I1 (formerly RA and PC), and two slowly adapting types, SA I and SA 11. The receptive field characteristics and the densities in the skin of the type I units (FA I and SA I) indicate that these account for the detailed spatial resolution that is of paramount importance for the motor skill and the explorative role of the hand. The relationship between the stimulus amplitude and perceived intensity during sustained skin indentations did not match the corresponding stimulus response functions of SA units suggesting non-linear transformations within the central nervous system. These transformations, in turn, appear to vary between subjects. A single impulse in a single FA I unit may be felt when originating from the most important tactile regions of the hand, indicating that the psychophysical detection may be set by the threshold of the sense organs. Moreover, no significant noise seems to be superimposed in the respective central sensory pathways."}
{"_id": "6df2322faf3a46d74abb61ca5b46d486eba4686a", "title": "A functional imaging study of translation and language switching.", "text": "The neural systems underlying translation and language switching were investigated using PET. Proficient German-English adult bilinguals were scanned whilst either translating or reading visually presented words in German (L1), English (L2) or alternating L1/L2. We refer to alternating L1/L2 as 'switching'. The results revealed contrasting patterns of activation for translation and switching, suggesting at least partially independent mechanisms. Translation, but not switching, increased activity in the anterior cingulate and subcortical structures whilst decreasing activation in several other temporal and parietal language areas associated with the meaning of words. Translation also increased activation in regions associated with articulation (the anterior insula, cerebellum and supplementary motor area) arguably because the reading response to the stimulus must be inhibited whilst a response in a different language is activated. In contrast, switching the input language resulted in activation of Broca's area and the supramarginal gyri, areas associated with phonological recoding. The results are discussed in terms of the cognitive control of language processes."}
{"_id": "7c5497bead5a13e1499678aed1e574411747882e", "title": "The comparability of measurements of attitudes toward immigration in the European Social Survey : exact versus approximate measurement equivalence", "text": "International survey datasets are analyzed with increasing frequency to investigate and compare attitudes toward immigration and to examine the contextual factors that shape these attitudes. However, international comparisons of abstract, psychological constructs require the measurements to be equivalent\u2013i.e., they should measure the same concept on the same measurement scale. Traditional approaches to assessing measurement equivalence quite often lead to the conclusion that measurements are cross-nationally incomparable but have been criticized for being overly strict. In the current study, we present an alternative Bayesian approach that assesses whether measurements are approximately (rather than exactly) equivalent. This approach allows small variations in measurement parameters across groups. Taking a multiple group confirmatory factor analysis framework as a starting point, this study applies approximate and exact equivalence tests to the anti-immigration attitudes scale that was implemented in the European Social Survey (ESS). Measurement equivalence is tested across the full set of 271,220 individuals in 35 ESS countries over six rounds. The results of the exact and the approximate approaches are quite different. Approximate scalar measurement equivalence is established in all ESS rounds, thus allowing researchers to meaningfully compare these mean scores and their relationships with other theoretical constructs of interest. The exact approach, however, eventually proves to be overly strict and leads to the conclusion that measurements are incomparable for a large number of countries and time points. DOI: https://doi.org/10.1093/poq/nfv008 Posted at the Zurich Open Repository and Archive, University of Zurich ZORA URL: https://doi.org/10.5167/uzh-110767 Published Version Originally published at: Davidov, Eldad; Cieciuch, Jan; Meuleman, Bart; Schmidt, Peter; Algesheimer, Ren\u00e9; Hausherr, Mirjam (2015). The comparability of measurements of attitudes toward immigration in the European Social Survey: exact versus approximate measurement equivalence. Public Opinion Quarterly, 79(S1):244-266. DOI: https://doi.org/10.1093/poq/nfv008 \u00a9 The Author 2015. Published by Oxford University Press on behalf of the American Association for Public Opinion Research. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com THE COMPARABILITY OF MEASUREMENTS OF ATTITUDES TOWARD IMMIGRATION IN THE EUROPEAN SOCIAL SURVEY EXACT VERSUS APPROXIMATE MEASUREMENT EQUIVALENCE"}
{"_id": "852094207ef6083d807a5215028e46e50685acbb", "title": "Mining Invariants from Console Logs for System Problem Detection", "text": "Detecting execution anomalies is very important to the maintenance and monitoring of large-scale distributed systems. People often use console logs that are produced by distributed systems for troubleshooting and problem diagnosis. However, manually inspecting console logs for the detection of anomalies is unfeasible due to the increasing scale and complexity of distributed systems. Therefore, there is great demand for automatic anomaly detection techniques based on log analysis. In this paper, we propose an unstructured log analysis technique for anomaly detection, with a novel algorithm to automatically discover program invariants in logs. At first, a log parser is used to convert the unstructured logs to structured logs. Then, the structured log messages are further grouped to log message groups according to the relationship among log parameters. After that, the program invariants are automatically mined from the log message groups. The mined invariants can reveal the inherent linear characteristics of program work flows. With these learned invariants, our technique can automatically detect anomalies in logs. Experiments on Hadoop show that the technique can effectively detect execution anomalies. Compared with the state of art, our approach can not only detect numerous real problems with high accuracy but also provide intuitive insight into the problems."}
{"_id": "2576fb2e04d93b63089cb293ee4f8038376df248", "title": "Deep analysis for development of RDF, RDFS and OWL ontologies with protege", "text": "In SW (SW), Ontology is an important layer architecture that is develop with the help of web ontology language (OWL) and makes the web semantic. OWL is designs and creates metadata that required for developing SW applications. It is an essential part of SW. In this paper we design an education and non-education domain ontology framework which is developed by using prote\u0301ge\u0301 an ontology editor tool developed by Stanford University. We also show the output for both ontologies by using pellet reasoner based on the proposed ontology frameworks. We analyze from our result that OWL improves the inference compare to Resource Description Framework (RDF) and Resource Description Framework Schema (RDFS). OWL also provides more relevant information compare to RDF and RDFS and OWL precision and recall is better compare to RDF and RDFS. In our paper we used Pellet reasoner that retrieve relevant information from the education and non education ontology and prote\u0301ge\u0301 4.3 alpha tool for the construction of education and non education domain ontology."}
{"_id": "d7e8a79ca355c512084431607f5371c9f966f160", "title": "Teleoperation control of Baxter robot using body motion tracking", "text": "In this paper, we use Kinect Xbox 360 sensor to implement the motion control of Baxter robot, a semi humanoid robot with limbs of 7 DOF joints with collision avoidance capabilities. Two different methods using vector approach and inverse kinematics approach have been designed to achieve a given task. Primitive experiments have been carried out to verify the effectiveness of the developed methods. Human motions is captured by Kinect sensor and calculated with Processing Software using SimpleOpenNI wrapper for OpenNI and NITE. UDP protocol is adopted to send reference motion to Baxter robot. Python and RosPy script programming kit is used to calculate joint angles of Baxter robot based on vector approach and inverse kinematics approach. The experimental results demonstrate that both of our proposed approaches have achieved satisfactory performance."}
{"_id": "40f6207b722c739c04ba5a41f7b22d472aeb08ec", "title": "PFID: Pittsburgh fast-food image dataset", "text": "We introduce the first visual dataset of fast foods with a total of 4,545 still images, 606 stereo pairs, 303 360\u00b0 videos for structure from motion, and 27 privacy-preserving videos of eating events of volunteers. This work was motivated by research on fast food recognition for dietary assessment. The data was collected by obtaining three instances of 101 foods from 11 popular fast food chains, and capturing images and videos in both restaurant conditions and a controlled lab setting. We benchmark the dataset using two standard approaches, color histogram and bag of SIFT features in conjunction with a discriminative classifier. Our dataset and the benchmarks are designed to stimulate research in this area and will be released freely to the research community."}
{"_id": "54dd77bd7b904a6a69609c9f3af11b42f654ab5d", "title": "ImageNet: A large-scale hierarchical image database", "text": ""}
{"_id": "62f9c50666152cca170619bab5f2b4da17bc15e1", "title": "Food image recognition with deep convolutional features", "text": "In this paper, we report the feature obtained from the Deep Convolutional Neural Network boosts food recognition accuracy greatly by integrating it with conventional hand-crafted image features, Fisher Vectors with HoG and Color patches. In the experiments, we have achieved 72.26% as the top-1 accuracy and 92.00% as the top-5 accuracy for the 100-class food dataset, UEC-FOOD100, which outperforms the best classification accuracy of this dataset reported so far, 59.6%, greatly."}
{"_id": "61c1fd5ca7d775edc54ade31f562f893fcafcbfb", "title": "Graphical Models and Belief Propagation-hierarchy for Optimal Physics-Constrained Network Flows", "text": "In this manuscript we review new ideas and first results on application of the Graphical Models approach, originated from Statistical Physics, Information Theory, Computer Science and Machine Learning, to optimization problems of network flow type with additional constraints related to the physics of the flow. We illustrate the general concepts on a number of enabling examples from power system and natural gas transmission (continental scale) and distribution (district scale) systems. 1.1 Introductory remarks In this chapter we discuss optimization problems which appears naturally in the classical settings describing flows over networks constrained by the physical nature Michael Chertkov Theoretical Division, T-4 & CNLS, Los Alamos National Laboratory Los Alamos, NM 87545, USA and Energy System Center, Skoltech, Moscow, 143026, Russia, e-mail: chertkov@lanl. gov Sidhant Misra Theoretical Division, T-5, Los Alamos National Laboratory Los Alamos, NM 87545, USA, e-mail: sidhant@lanl.gov Marc Vuffray Theoretical Division, T-4, Los Alamos National Laboratory Los Alamos, NM 87545, USA, e-mail: sidhant@lanl.gov Krishnamurthy Dvijotham Pacific Northwest National Laboratory, PO Box 999, Richland, WA 99352, USA e-mail: krishnamurthy.dvijotham@pnnl.gov Pascal Van Hentenryck University of Michigan, Department of Industrial & Operations Engineering Ann Arbor, MI 48109, USA, e-mail: pvanhent@umich.edu 1 ar X iv :1 70 2. 01 89 0v 1 [ cs .S Y ] 7 F eb 2 01 7 2 M. Chertkov, S. Misra, M. Vuffray, D. Krishnamurthy, and P. Van Hentenryck of the flows which appear in the context of electric power systems, see e.g. [27, 44], and natural gas application, see e.g. [13] and references there in. Other examples of physical flows where similar optimization problem arise include pipe-flow systems, such as district heating [75, 1] and water [54], as well as traffic systems [40]. We aim to show that the network flow optimization problem can be stated naturally in terms of the so-called Graphical Models (GM). In general, GMs for optimization and inference are wide spread in statistical disciplines such as Applied Probability, Machine Learning and Artificial Intelligence [53, 29, 16, 12, 32, 50], Information Theory [55] and Statistical Physics [47]. Main benefit of adopting GM methodology to the physics-constrained network flows is in modularity and flexibility of the approach \u2013 any new constraints, set of new variables, and any modification of the optimization objective can be incorporated in the GM formulation with an ease. Besides, if all (or at least majority of) constraints and modifications are factorized, i.e. can be stated in terms of a small subset of variables, underlying GM optimization or GM statistical inference problems can be solved exactly or approximately with the help of an emerging set of techniques, algorithms and computational approaches coined collectively Belief Propagation (BP), see e.g. an important original paper [74] and recent reviews [47, 55, 66]. It is also important to emphasize that an additional benefit of the GM formulation is in its principal readiness for generalization. 
Even though we limit our discussion to application of the GM and BP framework to deterministic optimizations, many probabilistic and/or mixed generalizations (largely not discussed in this paper) fit very naturally in this universal framework as well. We will focus on optimization problems associated with Physics-Constrained Newtork Flow (PCNF) problems. Structure of the networks will obviously be inherited in the GM formulation, however indirectly through graphand variabletransformations and modifications. Specifically, next Section 1.2 is devoted solely to stating a number of exemplary energy system formulations in GM terms. Thus, in Section 1.2.1 and Section 1.2.2 we consider dissipation optimal and respectively general physics-constrained network flow problems. In particular, Section 1.2.2 includes discussion of power flow problems in both power-voltage, Section 1.2.2.1, and current-voltage, Section 1.2.2.2, formats, as well as discussion of the gas flow formulation in Section 1.2.2.3 and general k-component physics-constrained network flow problem in Section 1.2.2.4. Section 1.2.3 describes problems of the next level of complexity \u2013 these including optimization over resources. In particular, general optimal physics-controlled network flow problem is discussed in Section 1.2.3.1 and more specific cases of optimal flows, involving optimal power flow (in both power-flow and current-voltage formulations) and gas flows are discussed in Sections 1.2.3.2,1.2.3.3,1.2.2.3, respectively. Section 1.2.4 introduces a number of feasibility problems, all stated as special kinds of optimizations. Here we discuss the so-called instanton, Section 1.2.4.1, containment Section 1.2.4.2, and state estimation, Section 1.2.4.3, formulation. The long introductory section concludes with a discussion in Section 1.2.5 of an exemplary (and even more) complex optimization involving split of resources between participants/aggregators. 1 Graphical Models for Optimal Flows 3 In Section 1.3 we describe how any of the aforementioned PCNF and optimal PCNF problems can be re-stated in the universal Graphical Model format. Then, in Section 1.4, we take advantage of the factorized form of the PCNF GM and illustrate how BP methodology can be used to solve the optimization problems exactly and/or approximately. Specifically, in Section 1.4.1 we restate the optimization (Maximum Likelihood) GM problem as a Linear Programming (LP) in the space of beliefs (proxies for probabilities). The resulting LP is generally difficult as working with all variables in a combination. We take advantage of the GM factorization and introduce in Section 1.4.2 the so-called Linear Programming Belief Propagation (LP-BP) relaxation, providing a provable lower bound for the optimal. Finally, in Section 1.4.3 we construct a tractable relaxation of LP-BP based on an interval partitioning of the underlying space. Section 1.5 discuss hierarchies which allow to generalize, and thus improve LPBP. The so-called LP-BP hierarchies, related to earlier papers on the subject [65, 30, 63] are discussed in Section 1.5.1. Then, relation between the LP-BP hierarchies and classic LP-based Sherali-Adams [59] and Semi-Definite-Programming based Lasserre hierarchies [41, 36, 52, 37] are discussed in Section 1.5.2. Section 1.6 discuss the special case of a GM defined over a tree (graph without loops). 
In this case LP-BP is exac, equivalent to the so-called Dynamic Programming approach, and as such it provides a distributed alternative to the global optimization through a sequence of graph-element-local optimizations. However, even in the tree case the exact LP-BP and/or DP are not tractable for GM stated in terms of physical variables, such as flows, voltages and/or pressures, drawn from a continuous set. Following, [18] we discuss here how the problem can be resolved with a proper interval-partitioning (discretization). We conclude the manuscript presenting summary and discussing path forward in Section 1.7. 1.2 Problems of Interest: Formulations In this Section we formulate a number of physics-constrained network flow problems which we will then attempt to analyze and solve with the help of Graphical Model (GM)/Belief Propagation (BP) approaches/techniques in the following Sections. 1.2.1 Dissipation-Optimal Network Flow We start introducing/discussing Network Flows constrained by a minimum dissipation principle, i.e. one which can be expressed as an unconstrained optimization/minimization of an energy function (potential). 4 M. Chertkov, S. Misra, M. Vuffray, D. Krishnamurthy, and P. Van Hentenryck Consider a static flow of a commodity over an undirected graph, G = (V ,E ) described through the following network flow equations i \u2208 V : qi = \u2211 j:(i, j)\u2208E \u03c6i j, (1.1) where qi stand for injection, qi > 0, or consumption, qi < 0, of the flow at the node i and \u03c6i j =\u2212\u03c6 ji stands for the value of the flow through the directed edge (i, j) \u2013 in the direction from i to j 1. We consider a balanced network, \u2211i\u2208V qi = 0. We constraint the flow requiring that the minimum dissipation principle is obeyed min \u03c6 \u2211 {i, j}\u2208E Ei j(\u03c6i j) \u2223\u2223\u2223\u2223 Eq. (1.1) , (1.2) where \u03c6 . = (\u03c6i j = \u2212\u03c6 ji|{i, j} \u2208 E ), and Ei j(x) are local (energy) functions of their arguments for all {i, j} \u2208 E . The local energy functions Ei j(x) are required to be convex at least on a restricted domain. We call the sum of local energy functions E(\u03c6) = \u2211{i, j}\u2208E Ei j(\u03c6i j) the global energy function or simply the energy function. Versions of this problem appear in the context of the feasibility analysis of the dissipative network flows, that is flows whose redistribution over the network is constrained by potentials, e.g. voltages or pressures in the context of resistive electric networks and gas flow networks, respectively [22, 48, 64]. Note, that the formulation (1.2) can also be supplemented by additional flow or potential constraints. Requiring Karush-Kuhn-Tucker (KKT) stationary point conditions on the optimization problem stated in Eq. (1.2) leads to the following set of equations \u2200{i, j} \u2208 E : E \u2032 i j(\u03c6i j) = \u03bbi\u2212\u03bb j, (1.3) where \u03bbi is a Lagrangian multiplier corresponding to the i\u2019s equation (1.1). The problem becomes fully defined by the pair of Eqs. (1.1,1.3), which can also be restated solely in terms of the \u03bb -variables i \u2208 V : qi = \u2211 j:{i, j}\u2208E ( E \u2032 i j )\u22121 (\u03bbi\u2212\u03bb j). (1.4) 1.2.2 General Physics-Constrained Network Flows We call \u201cunconstrained\u201d a network flow for which only conservation of flow(s), described by Eq. (1.1), is enforced. 
Contrariwise we call \u201cPhysics-constrained\u201d a 1 In the following we will use notation {i, j} for the undirected graph and (i, j) for the respective directed graph. When the meaning is clear we slightly abuse notations denoting by E both the set of undirected and directed edges. 1 Graphical Models for Optimal Flows 5 network flow t"}
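A worked special case of Eqs. (1.1)-(1.4) may help make the record above concrete: with quadratic edge energies E_ij(φ) = r_ij φ²/2 (the resistive case), the KKT condition (1.3) gives φ_ij = (λ_i − λ_j)/r_ij, and Eq. (1.4) reduces to a weighted graph-Laplacian system. The sketch below solves it with NumPy; the quadratic-energy assumption, the four-node graph, and all numeric values are illustrative, not taken from the paper.

```python
import numpy as np

# Dissipation-optimal flow, resistive special case: E_ij(phi) = r_ij * phi^2 / 2,
# so Eq. (1.4) becomes a weighted graph-Laplacian system L @ lam = q.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]   # undirected edges {i, j}
r = {e: 1.0 for e in edges}                # edge "resistances" r_ij > 0
n = 4
q = np.array([1.0, 0.0, 0.0, -1.0])        # balanced injections, sum(q) = 0

# Build the weighted Laplacian with weights 1 / r_ij.
L = np.zeros((n, n))
for (i, j), rij in r.items():
    w = 1.0 / rij
    L[i, i] += w; L[j, j] += w
    L[i, j] -= w; L[j, i] -= w

# L is singular (constant nullspace), so fix the gauge lambda_0 = 0.
lam = np.concatenate(([0.0], np.linalg.solve(L[1:, 1:], q[1:])))

# Recover the optimal flows from the KKT condition (1.3).
phi = {(i, j): (lam[i] - lam[j]) / r[(i, j)] for (i, j) in edges}

# Check flow conservation, Eq. (1.1), at every node.
residual = q.copy()
for (i, j), f in phi.items():
    residual[i] -= f
    residual[j] += f
assert np.allclose(residual, 0.0)
print(phi)
```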
{"_id": "570c2d6678d6e6d9c080cd07c2c38364e5ac8907", "title": "Improved direct back EMF detection for sensorless brushless DC (BLDC) motor drives", "text": "Improved back EMF detection circuits for low voltage/low speed and high voltage sensorless BLDC motor drives are presented in this paper. The improvements are based on the direct back EMF sensing method from our previous research work described in reference, which describes a technique for directly extracting phase back EMF information without the need to sense or re-construct the motor neutral. The reference method is not sensitive to switching noise and requires no filtering, achieving much better performance than traditional back EMF sensing scheme. A complementary PWM (synchronous rectification) is proposed to reduce the power dissipation in power devices for low voltage applications. In order to further extend the sensorless BLDC system to lower speed, a pre-conditioning circuit is proposed to amplify the back EMFs at very low speed. As a result, the brushless DC motor can run at lower speed with the improved back EMF sensing scheme. On the other hand, another improved detection circuit is presented for high voltage applications to overcome the delaying problem caused by large sensing resistors. The detailed circuit models are analyzed and experimental results verify the analysis."}
{"_id": "124c0ee6a3730a9266dae59d94a90124760f1a5c", "title": "Alignment of continuous video onto 3D point clouds", "text": "We propose a general framework for aligning continuous (oblique) video onto 3D sensor data. We align a point cloud computed from the video onto the point cloud directly obtained from a 3D sensor. This is in contrast to existing techniques where the 2D images are aligned to a 3D model derived from the 3D sensor data. Using point clouds enables the alignment for scenes full of objects that are difficult to model; for example, trees. To compute 3D point clouds from video, motion stereo is used along with a state-of-the-art algorithm for camera pose estimation. Our experiments with real data demonstrate the advantages of the proposed registration algorithm for texturing models in large-scale semi-urban environments. The capability to align video before a 3D model is built from the 3D sensor data offers new practical opportunities for 3D modeling. We introduce a novel modeling-through-registration approach that fuses 3D information from both the 3D sensor and the video. Initial experiments with real data illustrate the potential of the proposed approach."}
{"_id": "9dc9030fc91b591b4fb94dbe1a6d83b3abdb4c50", "title": "Neuronal Circuits for Fear Expression and Recovery: Recent Advances and Potential Therapeutic Strategies", "text": "Recent technological developments, such as single unit recordings coupled to optogenetic approaches, have provided unprecedented knowledge about the precise neuronal circuits contributing to the expression and recovery of conditioned fear behavior. These data have provided an understanding of the contributions of distinct brain regions such as the amygdala, prefrontal cortex, hippocampus, and periaqueductal gray matter to the control of conditioned fear behavior. Notably, the precise manipulation and identification of specific cell types by optogenetic techniques have provided novel avenues to establish causal links between changes in neuronal activity that develop in dedicated neuronal structures and the short and long-lasting expression of conditioned fear memories. In this review, we provide an update on the key neuronal circuits and cell types mediating conditioned fear expression and recovery and how these new discoveries might refine therapeutic approaches for psychiatric conditions such as anxiety disorders and posttraumatic stress disorder."}
{"_id": "009d2d73e40395dfa4d4c19f3dab0f0d911763ee", "title": "In-situ vehicular antenna integration and design aspects for vehicle-to-vehicle communications", "text": "Vehicle-to-vehicle (V2V) communications aim to enhance driver safety and traffic efficiency by using the recently designated frequency bands in the 5.9 GHz range in Europe. Due to the time-frequency selective fading behavior of the vehicular communication channel, multi-antenna techniques can provide enhanced link conditions by means of diversity processing. This paper highlights the integration of a four-element (N =4) linear array antenna into the roof-top compartment of a vehicle to conduct Multiple-Input Multiple-Output (MIMO) high-resolution mobile-to-mobile channel measurements."}
{"_id": "61b59d34d4323b84e1a0328024e61cb4ae5eae2c", "title": "Musicmapper: interactive 2d representations of music samples for in-browser remixing and exploration", "text": "Much of the challenge and appeal in remixing music comes from manipulating samples. Typically, identifying distinct samples of a song requires expertise in music production software. Additionally, it is di cult to visualize similarities and di\u21b5erences between all samples of a song simultaneously and use this to select samples. MusicMapper is a web application that allows nonexpert users to find and visualize distinctive samples from a song without any manual intervention, and enables remixing by having users play back clusterings of such samples. This is accomplished by splitting audio from the Soundcloud API into appropriately-sized spectrograms, and applying the tSNE algorithm to visualize these spectrograms in two dimensions. Next, we apply k-means to guide the user\u2019s eye toward related clusters and set k = 26 to enable playback of the clusters by pressing keys on an ordinary keyboard. We present the source code and a demo video of the MusicMapper web application that can be run in most modern browsers."}
{"_id": "4fda3b5f93c7be582d32b2e2654f28d32a8766c8", "title": "Power without wires", "text": "This article presents the history of WPT and the various technologies and applications of this exciting technology. In the near future, standardization and regulation will be of importance to realizing WPT based products for commercial applications. The WPC has defined a standard for inductive coupling and members of this group have released conforming products. There are, as yet, no standards or regulation for resonant coupling and MPT technologies. In Japan, a technical forum known as Broadband Wireless Forum has been established to discuss the future of WPT. SPS researchers have also submitted a proposal for WPT to the International Telecommunication Union (ITU). In the IEEE MTT Society, the Technical Committee MTT-26 Wireless Energy Transfer and Conversion was established in June, 2011 to discuss the future of WPT."}
{"_id": "ea33172af0532d7bf9688ad58a86d567e7b84904", "title": "How Not to Do a Mindset Intervention: Learning from a Mindset Intervention among Students with Good Grades", "text": "The present study examined the effectiveness of a Growth Mindset intervention based on Dweck et al.'s (1995) theory in the Hungarian educational context. A cluster randomized controlled trial classroom experiment was carried out within the framework of a train-the-trainer intervention among 55 Hungarian 10th grade students with high Grade Point Average (GPA). The results suggest that students' IQ and personality mindset beliefs were more incremental in the intervention group than in the control group 3 weeks after the intervention. Furthermore, compared to both the baseline measure and the control group, students' amotivation decreased. However, no intrinsic and extrinsic motivation change was found. Students with low grit scores reported lower amotivation following the intervention. However, in the second follow-up measurement-the end of the semester-all positive changes disappeared; and students' GPA did not change compared to the previous semester. These results show that mindset beliefs are temporarily malleable and in given circumstances, they can change back to their pre-intervention state. The potential explanation is discussed in the light of previous mindset intervention studies and recent findings on wise social psychological interventions."}
{"_id": "f6c133017cf45779aa12d403559c807f4f296f55", "title": "Adults and children processing music: An fMRI study", "text": "The present study investigates the functional neuroanatomy of music perception with functional magnetic resonance imaging (fMRI). Three different subject groups were investigated to examine developmental aspects and effects of musical training: 10-year-old children with varying degrees of musical training, adults without formal musical training (nonmusicians), and adult musicians. Subjects made judgements on sequences that ended on chords that were music-syntactically either regular or irregular. In adults, irregular chords activated the inferior frontal gyrus, orbital frontolateral cortex, the anterior insula, ventrolateral premotor cortex, anterior and posterior areas of the superior temporal gyrus, the superior temporal sulcus, and the supramarginal gyrus. These structures presumably form different networks mediating cognitive aspects of music processing (such as processing of musical syntax and musical meaning, as well as auditory working memory), and possibly emotional aspects of music processing. In the right hemisphere, the activation pattern of children was similar to that of adults. In the left hemisphere, adults showed larger activations than children in prefrontal areas, in the supramarginal gyrus, and in temporal areas. In both adults and children, musical training was correlated with stronger activations in the frontal operculum and the anterior portion of the superior temporal gyrus."}
{"_id": "2b8ff930f433179b5082d945d43d05dfed577245", "title": "Multi-robot SLAM with Unknown Initial Correspondence: The Robot Rendezvous Case", "text": "This paper presents a new approach to the multi-robot map-alignment problem that enables teams of robots to build joint maps without initial knowledge of their relative poses. The key contribution of this work is an optimal algorithm for merging (not necessarily overlapping) maps that are created by different robots independently. Relative pose measurements between pairs of robots are processed to compute the coordinate transformation between any two maps. Noise in the robot-to-robot observations, propagated through the map-alignment process, increases the error in the position estimates of the transformed landmarks, and reduces the overall accuracy of the merged map. When there is overlap between the two maps, landmarks that appear twice provide additional information, in the form of constraints, which increases the alignment accuracy. Landmark duplicates are identified through a fast nearest-neighbor matching algorithm. In order to reduce the computational complexity of this search process, a kd-tree is used to represent the landmarks in the original map. The criterion employed for matching any two landmarks is the Mahalanobis distance. As a means of validation, we present experimental results obtained from two robots mapping an area of 4,800 m2"}
{"_id": "398284c451d26de66c97c5177f45f6e238354233", "title": "Improved lossless intra coding for H.264/MPEG-4 AVC", "text": "A new lossless intra coding method based on sample-by-sample differential pulse code modulation (DPCM) is presented as an enhancement of the H.264/MPEG-4 AVC standard. The H.264/AVC design includes a multidirectional spatial prediction method to reduce spatial redundancy by using neighboring samples as a prediction for the samples in a block of data to be encoded. In the new lossless intra coding method, the spatial prediction is performed based on samplewise DPCM instead of in the block-based manner used in the current H.264/AVC standard, while the block structure is retained for the residual difference entropy coding process. We show that the new method, based on samplewise DPCM, does not have a major complexity penalty, despite its apparent pipeline dependencies. Experiments show that the new lossless intra coding method reduces the bit rate by approximately 12% in comparison with the lossless intra coding method previously included in the H.264/AVC standard. As a result, the new method is currently being adopted into the H.264/AVC standard in a new enhancement project"}
{"_id": "90285c99f7518d8a53f8105b8bf6be30937546b4", "title": "Facial recognition with PCA and machine learning methods", "text": "Facial recognition is a challenging problem in image processing and machine learning areas. Since widespread applications of facial recognition make it a valuable research topic, this work tries to develop some new facial recognition systems that have both high recognition accuracy and fast running speed. Efforts are made to design facial recognition systems by combining different algorithms. Comparisons and evaluations of recognition accuracy and running speed show that PCA + SVM achieves the best recognition result, which is over 95% for certain training data and eigenface sizes. Also, PCA + KNN achieves the balance between recognition accuracy and running speed."}
{"_id": "79d855d4aeaf22f58b61bc7bfa02e2b5fa12296e", "title": "The description\u2013experience gap in risky choice", "text": "According to a common conception in behavioral decision research, two cognitive processes-overestimation and overweighting-operate to increase the impact of rare events on people's choices. Supportive findings stem primarily from investigations in which people learn about options via descriptions thereof. Recently, a number of researchers have begun to investigate risky choice in settings in which people learn about options by experiential sampling over time. This article reviews work across three experiential paradigms. Converging findings show that when people make decisions based on experience, rare events tend to have less impact than they deserve according to their objective probabilities. Striking similarities in human and animal experience-based choices, ways of modeling these choices, and their implications for risk and precautionary behavior are discussed."}
{"_id": "b1059d25d092e0e872a1d2db01b24c73eb869ad9", "title": "Machine learning for neural decoding", "text": "Despite rapid advances in machine learning tools, the majority of neural decoding approaches still use traditional methods. Improving the performance of neural decoding algorithms allows us to better understand the information contained in a neural population, and can help advance engineering applications such as brain machine interfaces. Here, we apply modern machine learning techniques, including neural networks and gradient boosting, to decode from spiking activity in 1) motor cortex, 2) somatosensory cortex, and 3) hippocampus. We compare the predictive ability of these modern methods with traditional decoding methods such as Wiener and Kalman filters. Modern methods, in particular neural networks and ensembles, significantly outperformed the traditional approaches. For instance, for all of the three brain areas, an LSTM decoder explained over 40% of the unexplained variance from a Wiener filter. These results suggest that modern machine learning techniques should become the standard methodology for neural decoding. We provide a tutorial and code to facilitate wider implementation of these methods. Introduction: Neural decoding uses activity recorded from the brain to make predictions about variables in the outside world. For example, researchers predict movements based on activity in motor cortex [1, 2], predict decisions based on activity in prefrontal and parietal cortices [3, 4], and predict locations based on activity in the hippocampus [5, 6]. There are two primary purposes of decoding. First, it is an increasingly critical tool for understanding how neural signals relate to the outside world. It can be used to determine how much information the brain contains about an external variable (e.g., sensation or movement) [7-9], and how this information differs across brain areas [10-12], experimental conditions [13, 14], disease states [15], and more. Second, it is useful in engineering contexts, such as for brain machine interfaces (BMIs), where signals from motor cortex are used to control computer cursors [1], robotic arms [16], and muscles [2]. Decoding is a central tool for neural data analysis. When predicting a continuous variable, decoding is simply a regression problem and when predicting a discrete variable, decoding is simply a classification problem. Thus, there are many methods that can be used for neural decoding. However, despite the recent advances in machine learning techniques, it is still common to use traditional methods such as linear regression. Using modern machine learning tools for neural decoding would likely significantly boost performance, and might allow deeper insights into neural function. Here, we first give a brief tutorial so that readers can get started with using standard machine learning methods for decoding. We provide companion code so that readers can easily use a variety of decoding methods. Next, we compare the performance of many different machine learning methods to decode information from neural spiking activity. We predict movement velocities from macaque motor cortex and sensorimotor cortex, and locations in space from rat hippocampus. In all brain regions, modern methods, in particular neural networks and ensembles, led to the highest accuracy decoding, even for limited amounts of data. 
Tutorial for getting started with using machine learning for decoding: Code We have made Python code available at https://github.com/KordingLab/Neural_Decoding, which accompanies the tutorial below. This includes code that will correctly format the neural and output data for decoding, a tutorial for hyperparameter optimization, and examples of using many different decoders. We go into more detail on these topics below. General framework for decoding The decoding problem we are considering can be summarized as follows. We have N neurons whose spiking activity is recorded for a period of time, T (Fig. 1a). While we focus here on spiking neurons, the same methods could be used with other forms of neural data, such as the BOLD activity of N voxels, or the power in particular frequency bands of N LFP or EEG signals. We have also recorded outputs that we are trying to predict over that same time period (Fig. 1a). Here, we focus on output variables that are continuous (e.g., velocity, position), rather than discrete (e.g., choice). However, the general framework is very similar for discrete output variables. The first choice we need to make is to decide the temporal resolution, R, for decoding. That is, do we want to make a prediction every 50ms, 100ms, etc? We need to put the input and output into bins of length R (Fig. 1a). It is common (although not necessary) to use the same bin size for the neural data and output data, and we do so here. Thus, we will have approximately T/R total bins of neural activity and outputs. Within each bin, we compute the average activity of all neurons and the average value of the output. Next, we need to choose the time period of neural activity used to predict a given output. In the simplest case, the activity from all neurons in a given time bin would be used to predict the output in that same time bin. However, it is often the case that we want the neural data to precede the output (e.g., in the case of movements) or follow the decoder output (e.g., in the case of sensation). Plus, we often want to use neural data from more than one bin (e.g., using 500 ms of preceding neural data to predict a movement in the current 50 ms bin). In the following, we use the nomenclature that B time bins of neural activity are being used to predict a given output. For example, if we use one bin preceding the output, one concurrent bin, and one following bin, then B=3 (Fig. 1a). Note that when multiple bins of neural data are used to predict an output (B>1), then overlapping neural data will be used to predict different output times (Fig. 1a). When multiple bins of neural data are used to predict an output, then we will need to exclude some output bins. For instance, if we are using one bin of neural data preceding the output, then we cannot predict the first output bin, and if we are using one bin of neural data following the output, then we cannot predict the final output bin (Fig. 1a). Thus, we will be predicting K total output bins, where K is less than the total number of bins (T/R). To summarize, our decoders will be predicting each of these K outputs using B surrounding bins of activity from N neurons. Below, we describe how to format the neural data and output variables for use in different types of decoders. Non-recurrent decoders: For many \u201cnon-recurrent\u201d decoders, we are just solving a standard machine learning regression problem. We have N x B features (the firing rates of each neuron in each relevant time bin) that are used to predict each output (Fig. 1b). 
If there is a single output that is being predicted, it can be put in a vector, Y, of length K. Note that for many decoders, if there are multiple outputs, each is independently decoded. If multiple outputs are being simultaneously predicted, which can occur with neural network decoders, the outputs can be put in a matrix Y that has K rows and d columns, where d is the number of outputs being predicted. The input covariate matrix, X, has N x B columns (one for each feature) and K rows (corresponding to each output being predicted). This is now the format of a standard regression problem. Linear regression simply finds a linear combination of these features that predicts the output. More sophisticated forms of regression use nonlinear combinations of features for predictions. In general, this format is beneficial because there are many machine learning regression techniques that can easily be substituted for one another. We provide code for a Wiener filter (linear regression), a Wiener cascade (a linear-nonlinear model), support vector regression, XGBoost (gradient boosted trees), and feedforward neural networks (see Methods). We test the performance of these decoders in Results. Recurrent neural network decoders: When using recurrent neural networks (RNNs) for decoding, we need to put the inputs in a different format. Recurrent neural networks explicitly model temporal transitions across time (Fig. 1c). In the non-recurrent decoders, there were N x B features that were equivalently used for prediction, regardless of the time bin they came from. However, with a recurrent decoder, at each time bin, N features (the firing rates of all neurons in that time bin) are used for predicting the hidden state of the system at that time. Along with being a function of the N features, the hidden state at a time bin is also a function of the hidden state at the previous time bin (Fig. 1c). After transitioning through all B bins, the hidden state in this final bin is used to predict the output. This architecture allows the decoder to take advantage of temporal structure in the data, allowing it (via its hidden state) to integrate the effect of neural inputs over an extended period of time. For use in this type of decoder, the input can be formatted as a 3-dimensional matrix of size K x N x B (Fig. 1c). That is, for each row (corresponding to the output that is predicted), there will be N features (2nd matrix dimension) over B bins (3rd matrix dimension) used for prediction. Within this format, different types of RNNs, including those more sophisticated than the standard RNN shown in Fig. 1c, can be easily switched for one another. We provide code for a standard recurrent network, a gated recurrent unit (GRU) network, and a long short-term memory (LSTM) network. In Results, we test the performance of these decoders. Decoders with additional information: While the focus of this tutorial is on decoders that fit into standard machine learning frameworks, we want to briefly mention two other commonly used decoders. The Kalman filter and its variants have frequently been used in the brain computer interface field fo"}
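The covariate-matrix construction described in this record is easy to make concrete. Below is a minimal sketch with synthetic data; N, B, the bin counts, the Poisson surrogate spikes, and the plain linear readout are illustrative assumptions (the authors' full code lives at the GitHub repository linked in the record above).

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for binned spike counts and a 1-D output (e.g., velocity).
rng = np.random.default_rng(0)
N, T_bins, B = 30, 1000, 3            # neurons, total time bins, bins per prediction
rates = rng.poisson(2.0, size=(T_bins, N)).astype(float)
true_w = rng.normal(size=N)
velocity = rates @ true_w + rng.normal(scale=0.5, size=T_bins)

# Build the K x (N*B) covariate matrix: one preceding, one concurrent,
# and one following bin of all neurons for each predicted output bin.
K = T_bins - (B - 1)
X = np.hstack([rates[off:off + K] for off in range(B)])   # shape (K, N*B)
Y = velocity[1:1 + K]                                     # align to the middle bin

# Fit a Wiener filter (linear regression) and score on held-out bins.
split = int(0.8 * K)
dec = LinearRegression().fit(X[:split], Y[:split])
print(f"Wiener-filter test R^2: {dec.score(X[split:], Y[split:]):.3f}")
```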
{"_id": "063b7146dbbff239707a93dda672eb87ddffc8a5", "title": "OF A NOVEL SUPER WIDE BAND CIRCULAR-HEXAGONAL FRACTAL ANTENNA", "text": "In this paper, a novel circular-hexagonal Fractal antenna is investigated for super wide band applications. The proposed antenna is made of iterations of a hexagonal slot inside a circular metallic patch with a transmission line. A partial ground plane and asymmetrical patch toward the substrate are used for designing the antenna to achieve a super wide bandwidth ranging from 2.18 GHz to 44.5GHz with a bandwidth ratio of 20.4 : 1. The impedance bandwidth and gain of the proposed antenna are much better than the recently reported super wideband antennas which make it appropriate for many wireless communications systems such as ISM, Wi-Fi, GPS, Bluetooth, WLAN and UWB. Details of the proposed antenna design are presented and discussed."}
{"_id": "46639ab055d377cfb9a99bf9963329c8fe02d0c2", "title": "SphereFlow: 6 DoF Scene Flow from RGB-D Pairs", "text": "We take a new approach to computing dense scene flow between a pair of consecutive RGB-D frames. We exploit the availability of depth data by seeking correspondences with respect to patches specified not as the pixels inside square windows, but as the 3D points that are the inliers of spheres in world space. Our primary contribution is to show that by reasoning in terms of such patches under 6 DoF rigid body motions in 3D, we succeed in obtaining compelling results at displacements large and small without relying on either of two simplifying assumptions that pervade much of the earlier literature: brightness constancy or local surface planarity. As a consequence of our approach, our output is a dense field of 3D rigid body motions, in contrast to the 3D translations that are the norm in scene flow. Reasoning in our manner additionally allows us to carry out occlusion handling using a 6 DoF consistency check for the flow computed in both directions and a patchwise silhouette check to help reason about alignments in occlusion areas, and to promote smoothness of the flow fields using an intuitive local rigidity prior. We carry out our optimization in two steps, obtaining a first correspondence field using an adaptation of PatchMatch, and subsequently using alpha-expansion to jointly handle occlusions and perform regularization. We show attractive flow results on challenging synthetic and real-world scenes that push the practical limits of the aforementioned assumptions."}
{"_id": "9098342a8c4481d5fd0707a5d3baaad10455f9fc", "title": "Fast and Robust High Dynamic Range Image Generation with Camera and Object Movement", "text": "High dynamic range (HDR) images play an important role in todays computer graphics: Typical applications are the improved display of a photograph with a tone mapper and the relighting of a virtual object in a natural environment using an HDR environment map. Generating an HDR image usually requires a sequence of photographs with different exposure times which are combined to a single image. In practice two problems occur when taking an image sequence: The camera is moving or objects are moving during the sequence. This results in a blurry HDR image due to the camera movement and moving objects appear multiple times as ghost images in the combined image. In this paper, we present a simple and fast algorithm for a robust HDR generation that removes these artifacts without prior knowledge of the camera response function."}
{"_id": "1e924873a4bf4a8c6f954506d43564be0c159d7e", "title": "What storytelling can do for information visualization", "text": "A well-told story conveys great quantities of information in relatively few words in a format that is easily assimilated by the listener or viewer. People usually find it easier to understand information integrated into stories than information spelled out in serial lists (such as bulleted items in an overhead slide). Stories are also just more compelling. For example, despite its sketchiness, the story fragment in Figure 1 is loaded with information, following an analysis similar to that of John Thomas of IBM Research [5]. We find that Jim uses technology (a pager and the Internet) and is dedicated to his job. Many other pieces of information can be deduced about Jim and his work, as well as about his relationships with his coworkers, as noted in the right side of the figure. The story does not express all this information explicitly; some is only implied; for example, we can surmise that Jim is probably not at the gym and his attendance at the meeting is important to his boss and coworkers, as well as to his company\u2019s business performance. As in most stories, this one involves uncerFor as long as people have been around, they have used stories to convey information, cultural values, and experiences. Since the invention of writing and the printing press until today, technology and culture have constantly provided new and increasingly sophisticated means to tell stories. More recently, technology, entertainment, and art have converged in the computer. The ancient art of storytelling and its adaptation in film and video can now be used to efficiently convey information in our increasingly computerized world. What Storytelling Can Do for Information Visualization"}
{"_id": "2e02d25a3b7f9e287a572900e6f5c4800e94ca2f", "title": "[Internet- and computer game addiction: phenomenology, comorbidity, etiology, diagnostics and therapeutic implications for the addictives and their relatives].", "text": "OBJECTIVE\nExcessive and addictive internet use and computer game playing is reported as an increasing problem in outpatient care. The aim of this paper is to give an overview about the current scientific discussion of the overuse and addiction of internet and computer game playing.\n\n\nMETHODS\nPubmed was used for a systematic literature research considering original papers and review articles dealing with Internet/computer game addiction.\n\n\nRESULTS\nRecent epidemiological data from Germany suggest that 1.5-3.5 % of adolescent computer and internet users show signs of an overuse or addictive use of computer and video games. Moreover there is evidence that the disorder is associated with higher rates of depression, anxiety, as well as lower achievements e. g. at school. Although the nosological assignment still remains unclear there is some evidence from neurobiological data that the disorder can be conceptualized as behavioral addiction. As treatment strategy CBT-techniques have been proposed, but there is still a lack of controlled clinical trials concerning their efficacy.\n\n\nCONCLUSIONS\nSince the addicted persons often show little motivation for a behavioural change we consider it a promising approach to treat and train their relatives with the aim of increasing the motivation for a behavioural change of the addicted person."}
{"_id": "2120b039b8cfeb013400dfa1ead44229e582bbb7", "title": "SOCIAL DIALOGUE WITH EMBODIED CONVERSATIONAL AGENTS", "text": "The functions of social dialogue between people in the context of performing a task is discussed, as well as approaches to modelling such dialogue in embodied conversational agents. A study of an agent\u2019s use of social dialogue is presented, comparing embodied interactions with similar interactions conducted over the phone, assessing the impact these media have on a wide range of behavioural, task and subjective measures. Results indicate that subjects\u2019 perceptions of the agent are sensitive to both interaction style (social vs. task-only dialogue) and"}
{"_id": "2517387c7eea1eedc774bce38e64a1405f5c66cc", "title": "A Random-Finite-Set Approach to Bayesian SLAM", "text": "This paper proposes an integrated Bayesian frame work for feature-based simultaneous localization and map building (SLAM) in the general case of uncertain feature number and data association. By modeling the measurements and feature map as random finite sets (RFSs), a formulation of the feature-based SLAM problem is presented that jointly estimates the number and location of the features, as well as the vehicle trajectory. More concisely, the joint posterior distribution of the set-valued map and vehicle trajectory is propagated forward in time as measurements arrive, thereby incorporating both data association and feature management into a single recursion. Furthermore, the Bayes optimality of the proposed approach is established. A first-order solution, which is coined as the probability hypothesis density (PHD) SLAM filter, is derived, which jointly propagates the posterior PHD of the map and the posterior distribution of the vehicle trajectory. A Rao-Blackwellized (RB) implementation of the PHD-SLAM filter is proposed based on the Gaussian-mixture PHD filter (for the map) and a particle filter (for the vehicle trajectory). Simulated and experimental results demonstrate the merits of the proposed approach, particularly in situations of high clutter and data association ambiguity."}
{"_id": "7234ac8de0a7e658db6344a8e27590e8af9c85dd", "title": "Anisotropic Ambient Volume Shading", "text": "We present a novel method to compute anisotropic shading for direct volume rendering to improve the perception of the orientation and shape of surface-like structures. We determine the scale-aware anisotropy of a shading point by analyzing its ambient region. We sample adjacent points with similar scalar values to perform a principal component analysis by computing the eigenvectors and eigenvalues of the covariance matrix. In particular, we estimate the tangent directions, which serve as the tangent frame for anisotropic bidirectional reflectance distribution functions. Moreover, we exploit the ratio of the eigenvalues to measure the magnitude of the anisotropy at each shading point. Altogether, this allows us to model a data-driven, smooth transition from isotropic to strongly anisotropic volume shading. In this way, the shape of volumetric features can be enhanced significantly by aligning specular highlights along the principal direction of anisotropy. Our algorithm is independent of the transfer function, which allows us to compute all shading parameters once and store them with the data set. We integrated our method in a GPU-based volume renderer, which offers interactive control of the transfer function, light source positions, and viewpoint. Our results demonstrate the benefit of anisotropic shading for visualization to achieve data-driven local illumination for improved perception compared to isotropic shading."}
{"_id": "ef3357d3b1d9a04d50955a560210b11cfb548e4b", "title": "Variants of travelling salesman problem: A survey", "text": "The Travelling Salesman Problem (TSP) is a well-known NP-hard problem exceedingly studied in the fields of operations research and computer science. In TSP, a salesman wants to visit each of a set of cities exactly once and return to the starting city with minimal distance travelled. The significance of the TSP is that it can be pertained on many practical applications in real life scenario. But, it is not always possible to apply TSP for all the real world applications because of different constraints and also variations of TSP might be desired in such real-life scenarios. Therefore, several variants of TSP have been proposed to manage with the application specific constraints. In this work, a comprehensive study on various categories of TSP variants such as Profit Based, Time Windows based, Maximal Based and Kinetic Based has been studied with respect to the problem formulation and applications."}
{"_id": "398bb20caeca8280243e75fefe36bce8c33d8f3d", "title": "Detection of and compensation for shadows in colored urban aerial images", "text": "A new method for shadow detection in colored urban aerial images is proposed. First, a new imaging model of shadows is presented, which indicates that hue values of shadowed image areas are larger than those of these areas non-shadowed. Based on this model, a thresholding technique is employed to detect shadowed areas. After detection, the Retinex technique is applied to shadowed and non-shadowed areas individually to compensate for shadows. Experiment results show the effectiveness of the proposed method."}
{"_id": "bda256469290374c63e1ca4fd417eade6ff5d989", "title": "A framework of sensor-cloud integration opportunities and challenges", "text": "In the past few years, wireless sensor networks (WSNs) have been gaining increasing attention because of their potential of enabling of novel and attractive solutions in areas such as industrial automation, environmental monitoring, transportation business, health-care etc. If we add this collection of sensor derived data to various Web-based social networks or virtual communities, blogs etc., we can have a remarkable transformation in our ability to \"see\" ourselves and our planet. Our primary goal is to facilitate connecting sensors, people and software objects to build community-centric sensing applications. However, the computational tools needed to launch this exploration may be more appropriately built from the data center \"Cloud\" computing model than the traditional HPC approaches. In this paper, we propose a framework to enable this exploration by integrating sensor networks to the emerging data center \"cloud\" model of computing. But there are many challenges to enable this framework. We propose a pub-sub based model which simplifies the integration of sensor networks with cloud based community-centric applications. Also there is a need for internetworking cloud providers in case of violation of service level agreement with users. We discussed these issues and proposed reasonable solutions."}
{"_id": "46319a2732e38172d17a3a2f0bb218729a76e4ec", "title": "Activity Recognition in the Home Using Simple and Ubiquitous Sensors", "text": "In this work, a system for recognizing activities in the home setting using a set of small and simple state-change sensors is introduced. The sensors are designed to be \u201ctape on and forget\u201d devices that can be quickly and ubiquitously installed in home environments. The proposed sensing system presents an alternative to sensors that are sometimes perceived as invasive, such as cameras and microphones. Unlike prior work, the system has been deployed in multiple residential environments with non-researcher occupants. Preliminary results on a small dataset show that it is possible to recognize activities of interest to medical professionals such as toileting, bathing, and grooming with detection accuracies ranging from 25% to 89% depending on the evaluation criteria used ."}
{"_id": "5766f1f8b5e1b7b00d1401bc7e71106c3452022a", "title": "TWO-WAY COMMUNICATION CHANNELS", "text": "input at terminal 2 and Y2 the corresponding output. Once each second, say, new inputs xi and x2 may be chosen from corresponding input alphabets and put into the channel; outputs yi and Y2 may then be observed. These outputs will be related statistically to the inputs and perhaps historically to previous inputs and outputs if the channel has memory. The problem is to communicate in both directions through the channel as effectively as possible. Particularly, we wish to determine what pairs of signalling rates R1 and R2 for the two directions can be approached with arbitrarily small error probabilities."}
{"_id": "220bdd265e6721e1d7ec1c4252aa41825147e61b", "title": "Apprenticeship learning via inverse reinforcement learning", "text": "We consider learning in a Markov decision process where we are not explicitly given a reward function, but where instead we can observe an expert demonstrating the task that we want to learn to perform. This setting is useful in applications (such as the task of driving) where it may be difficult to write down an explicit reward function specifying exactly how different desiderata should be traded off. We think of the expert as trying to maximize a reward function that is expressible as a linear combination of known features, and give an algorithm for learning the task demonstrated by the expert. Our algorithm is based on using \"inverse reinforcement learning\" to try to recover the unknown reward function. We show that our algorithm terminates in a small number of iterations, and that even though we may never recover the expert's reward function, the policy output by the algorithm will attain performance close to that of the expert, where here performance is measured with respect to the expert's unknown reward function."}
{"_id": "c8e8b54f87f43c4a6f85695712dff55e0edec760", "title": "A simple procedure for pruning back-propagation trained neural networks", "text": "The sensitivity of the global error (cost) function to the inclusion/exclusion of each synapse in the artificial neural network is estimated. Introduced are shadow arrays which keep track of the incremental changes to the synaptic weights during a single pass of back-propagating learning. The synapses are then ordered by decreasing sensitivity numbers so that the network can be efficiently pruned by discarding the last items of the sorted list. Unlike previous approaches, this simple procedure does not require a modification of the cost function, does not interfere with the learning process, and demands a negligible computational overhead."}
{"_id": "998532a5f7c882b79f058ad7607001bae3075553", "title": "A MAV that flies like an airplane and hovers like a helicopter", "text": "Near-earth environments, such as forests, caves, tunnels, and urban structures make reconnaissance, surveillance and search-and-rescue missions difficult and dangerous to accomplish. Micro-air-vehicles (MAVs), equipped with wireless cameras, can assist in such missions by providing real-time situational awareness. This paper describes an additional flight modality enabling fixed-wing MAVs to supplement existing endurance superiority with hovering capabilities. This secondary flight mode can also be used to avoid imminent collisions by quickly transitioning from cruise to hover flight. A sensor suite which allows for autonomous hovering by regulating the aircraft's yaw. pitch and roll angles is also described"}
{"_id": "5041610131cb077b5700b60fcb2ae7089f34cfe0", "title": "Conversion of Natural Language Query to SQL Query", "text": "This paper present an approach to automate the conversion of Natural Language Query to SQL Query effectively. Structured Query Language is a powerful tool for managing data held in a relational database management system. To retrieve or manage data user have to enter the correct SQL Query. But the users who don't have any knowledge about SQL are unable to retrieve the required data. To overcome this we proposed a model in Natural Language Processing for converting the Natural Language Query to SQL query. This helps novice user to get required content without knowing any complex details about SQL. This system can also deal with complex queries. This system is designed for Training and Placement cell officers who work on student database but don't have any knowledge about SQL. In this system, user can also enter the query using speech. System will convert speech into the text format. This query will get transformed to SQL query. System will execute the query and gives output to the user."}
{"_id": "52b28d7e1c39c86ceb2c9dd93216695f3735263e", "title": "Secure Multiparty Computation and Secret Sharing", "text": "In a data-driven society, individuals and companies encounter numerous situations where private information is an important resource. How can parties handle confidential data if they do not trust everyone involved? This text is the first to present a comprehensive treatment of unconditionally secure techniques for multiparty computation (MPC) and secret sharing. In a secure MPC, each party possesses some private data, whereas secret sharing provides a way for one party to spread information on a secret such that all parties together hold full information, yet no single party has all the information. The authors present basic feasibility results from the last thirty years, generalizations to arbitrary access structures using linear secret sharing, some recent techniques for efficiency improvements, and a general treatment of the theory of secret sharing, focusing on asymptotic results with interesting applications related to MPC."}
{"_id": "3521f4c21e4609aa9e88160f6d1f25f4cba84ad5", "title": "Improved Performance of Serially Connected Li-Ion Batteries With Active Cell Balancing in Electric Vehicles", "text": "This paper presents an active cell balancing method for lithium-ion battery stacks using a flyback dc/dc converter topology. The method is described in detail, and a simulation is performed to estimate the energy gain for ten serially connected cells during one discharging cycle. The simulation is validated with measurements on a balancing prototype with ten cells. It is then shown how the active balancing method with respect to the cell voltages can be improved using the capacity and the state of charge rather than the voltage as the balancing criterion. For both charging and discharging, an improvement in performance is gained when having the state of charge and the capacity of the cells as information. A battery stack with three single cells is modeled, and a realistic driving cycle is applied to compare the difference between both methods in terms of usable energy. Simulations are also validated with measurements."}
{"_id": "e0239139acedba9fe22668e5c8b94b51c2bb3d12", "title": "Adaptive Optimal Control of Highly Dissipative Nonlinear Spatially Distributed Processes With Neuro-Dynamic Programming", "text": "Highly dissipative nonlinear partial differential equations (PDEs) are widely employed to describe the system dynamics of industrial spatially distributed processes (SDPs). In this paper, we consider the optimal control problem of the general highly dissipative SDPs, and propose an adaptive optimal control approach based on neuro-dynamic programming (NDP). Initially, Karhunen-Loe\u0300ve decomposition is employed to compute empirical eigenfunctions (EEFs) of the SDP based on the method of snapshots. These EEFs together with singular perturbation technique are then used to obtain a finite-dimensional slow subsystem of ordinary differential equations that accurately describes the dominant dynamics of the PDE system. Subsequently, the optimal control problem is reformulated on the basis of the slow subsystem, which is further converted to solve a Hamilton-Jacobi-Bellman (HJB) equation. HJB equation is a nonlinear PDE that has proven to be impossible to solve analytically. Thus, an adaptive optimal control method is developed via NDP that solves the HJB equation online using neural network (NN) for approximating the value function; and an online NN weight tuning law is proposed without requiring an initial stabilizing control policy. Moreover, by involving the NN estimation error, we prove that the original closed-loop PDE system with the adaptive optimal control policy is semiglobally uniformly ultimately bounded. Finally, the developed method is tested on a nonlinear diffusion-convection-reaction process and applied to a temperature cooling fin of high-speed aerospace vehicle, and the achieved results show its effectiveness."}
{"_id": "9c3e1e9f6bec554a2741d27f5588de4aaba6724a", "title": "Test-cost optimization in a scan-compression architecture using support-vector regression", "text": "Scan compression is widely used in high-volume testing of complex integrated circuits. With an increase in design complexity, the increased density of unknown (X) values from output responses reduces compression efficiency. In order to effectively block X values and maximize the effectiveness of test compression, a scan-compression architecture has recently been proposed, in which deterministic test patterns can be loaded into selected scan cells by controlling the initial state of the pseudo-random pattern generator (PRPG). A careful selection of the PRPG length is however essential to reduce test cost. We propose an optimization method based on support-vector regression to determine the PRPG length for test-cost reduction in a given scan-compression architecture. A correlation-based feature selection methodology is also proposed to reduce the amount of data needed for the accurate selection of the PRPG length. Experimental results on industrial designs highlight the effectiveness of the proposed method."}
{"_id": "fac3f6c063711d86236d17dcbc71164e9d2a1aeb", "title": "3D Soft Body Simulation Using Mass-spring System with Internal Pressure Force and Simplified Implicit Integration", "text": "In this paper, we propose a method to simulate soft bodies by using gravitational force, spring and damping forces between surface points, and internal molecular pressure forces. We consider a 3D soft body model composed of mesh points that define the body\u2019s surface such that the points are connected by springs and influenced by internal molecular pressure forces. These pressure forces have been modeled on gaseous molecular interactions. Simulation of soft body with internal pressure forces is known to become unstable when high constants are used and is averted using an implicit integration method. We propose an approximation to this implicit integration method that considerably reduces the number of computations in the algorithm. Our results show that the proposed method realistically simulates soft bodies and improves performance of the implicit integration method."}
{"_id": "39d900da87fa2f8987567d22a924fb7674f9be67", "title": "Generating Notifications for Missing Actions: Don't Forget to Turn the Lights Off!", "text": "We all have experienced forgetting habitual actions among our daily activities. For example, we probably have forgotten to turn the lights off before leaving a room or turn the stove off after cooking. In this paper, we propose a solution to the problem of issuing notifications on actions that may be missed. This involves learning about interdependencies between actions and being able to predict an ongoing action while segmenting the input video stream. In order to show a proof of concept, we collected a new egocentric dataset, in which people wear a camera while making lattes. We show promising results on the extremely challenging task of issuing correct and timely reminders. We also show that our model reliably segments the actions, while predicting the ongoing one when only a few frames from the beginning of the action are observed. The overall prediction accuracy is 46.2% when only 10 frames of an action are seen (2/3 of a sec). Moreover, the overall recognition and segmentation accuracy is shown to be 72.7% when the whole activity sequence is observed. Finally, the online prediction and segmentation accuracy is 68.3% when the prediction is made at every time step."}
{"_id": "56cf75f8e34284a9f022e9c49d330d3fc3d18862", "title": "Grammatical Error Correction", "text": "Grammatical error correction (GEC) is the task of automatically correcting grammatical errors in written text. Earlier attempts to grammatical error correction involve rule-based and classifier approaches which are limited to correcting only some particular type of errors in a sentence. As sentences may contain multiple errors of different types, a practical error correction system should be able to detect and correct all errors. In this report, we investigate GEC as a translation task from incorrect to correct English and explore some machine translation approaches for developing end-to-end GEC systems for all error types. We apply Statistical Machine Translation (SMT) and Neural Machine Translation (NMT) approaches to GEC and show that they can correct multiple errors of different types in a sentence when compared to the earlier methods which focus on individual errors. We also discuss some of the weakness of machine translation approaches. Finally, we also experiment on a candidate re-ranking technique to re-rank the hypotheses generated by machine translation systems. With regression models, we try to predict a grammaticality score for each candidate hypothesis and re-rank them according to the score."}
{"_id": "c5b1351042b0743fbf888c2ac4a2dd79475163cb", "title": "Stacked Convolutional Denoising Auto-Encoders for Feature Representation", "text": "Deep networks have achieved excellent performance in learning representation from visual data. However, the supervised deep models like convolutional neural network require large quantities of labeled data, which are very expensive to obtain. To solve this problem, this paper proposes an unsupervised deep network, called the stacked convolutional denoising auto-encoders, which can map images to hierarchical representations without any label information. The network, optimized by layer-wise training, is constructed by stacking layers of denoising auto-encoders in a convolutional way. In each layer, high dimensional feature maps are generated by convolving features of the lower layer with kernels learned by a denoising auto-encoder. The auto-encoder is trained on patches extracted from feature maps in the lower layer to learn robust feature detectors. To better train the large network, a layer-wise whitening technique is introduced into the model. Before each convolutional layer, a whitening layer is embedded to sphere the input data. By layers of mapping, raw images are transformed into high-level feature representations which would boost the performance of the subsequent support vector machine classifier. The proposed algorithm is evaluated by extensive experimentations and demonstrates superior classification performance to state-of-the-art unsupervised networks."}
{"_id": "8dcfc54c6173f669d1de1b36bdc70b8b430bd2b5", "title": "Moving from syntactic to semantic organizations using JXML2OWL", "text": "Today\u2019s enterprises face critical needs in integrating disparate information spread over several data sources inside and even outside the organization. Most organizations already rely on XML standard to define their data models. Unfortunately, even when using XML to represent data, problems arise when it is necessary to integrate different data sources. Emerging Semantic Web technologies, such as ontologies, RDF, RDFS, and OWL, can play an important role in the semantic definition and integration of data. The purpose of our study is to present a framework to assist organizations to move from a syntactic data infrastructure defined in XML to a semantic data infrastructure using OWL. The framework supports mappings and fully automated instance transformation from syntactic data sources in XML format to a common shared global model defined by an ontology using Semantic Web technologies. The presented framework, JXML2OWL, allows organizations to automatically convert their XML data sources to a semantic model defined in OWL."}
{"_id": "0ecfeff95ac1499c2f74575871656d5f71bb24a3", "title": "Towards Frequent Subgraph Mining on Single Large Uncertain Graphs", "text": "Uncertainty is intrinsic to a wide spectrum of real-life applications, which inevitably applies to graph data. Representative uncertain graphs are seen in bio-informatics, social networks, etc. This paper motivates the problem of frequent subgraph mining on single uncertain graphs. We present an enumeration-evaluation algorithm to solve the problem. By showing support computation on an uncertain graph is #P-hard, we develop an approximation algorithm with accuracy guarantee for this purpose. To enhance the solution, we devise optimization techniques to achieve better mining performance. Experiment results on real-life data confirm the usability of the algorithm."}
{"_id": "27a5264f4e1e985b6d21b88e598ad39434c00b7d", "title": "Use of RFID Based Real Time Location Tracking System to Curb Diversion of Transit Goods in East Africa", "text": "In this paper, RFID based Real Time Location Tracking System (RTLS) for tracking transit goods is discussed. The system was designed to address the problem of diversions of transit wet cargo destined for hinterland countries into the local markets, so as to evade payment of duties and taxes. Transit goods are currently tracked manually and by use of manual physical escorts. Reliance on manual methods has rendered the system open to abuse by unscrupulous traders and clearing agents; at times in collusion with government officers. To be effective in tracking wet cargo, the required solution was established to be one that not only provides information in real time but also with features for intervention when cargo is perceived to be at risk. This paper discusses the design and implementation of RTLS for tracking wet cargo, and points out the strenghts, benefits, weaknesses and limitations of the solution. The effectiveness of the solution was demonstrated in addressing the weaknesses of the current manual tracking system in curbing diversion for wet cargo. In addition, the use of RTLS led to a reduction in turn around times by between thirty to forty percent compared to those which are manualy tracked. KeywordsElectronic Seals, RFID, Transit goods, Tanker Tracking, Real Time Location Tracking Systems, Cargo Tracking"}
{"_id": "859439df0a0290289e54dd1f8a6a79f310a74947", "title": "Review of Face Recognition Techniques", "text": "Use of Biometrics and widespread acceptability for person authentication has instigated several techniques. Now a day\u2019s security is very important. Biometrics has received a great attention. Face biometrics, is useful person\u2019s authentication that recognizes face. In this paper first we represent an overview of face recognition and discuss the methodology and functioning. Thereafter we represent face recognition techniques which are recently used including their advantages and disadvantages. A large number of face recognition algorithms have been developed. In this paper an attempt is made to review a wide range of methods including PCA, Eigen face, LDA for recognition and hybrid combination if this techniques. Under the various illumination and expression condition of face images some techniques specified here also improves the efficiency of face recognition. Keywords\u2014 Face Recognition, Principal Component Analysis, Eigen Face, Linear Discriminant Analysis"}
{"_id": "4a3e54370c8a199d29196132ba8a049c1e27a46d", "title": "Experiences of System-Level Model-Based GUI Testing of an Android Application", "text": "This paper presents experiences in model-based graphical user interface testing of Android applications. We present how model-based testing and test automation was implemented with Android, including how applications were modeled, how tests were designed and executed, and what kind of problems were found in the tested application during the whole process. The main focus is on a case study that was performed with an Android application, the BBC News Widget. Our goal is to present actual data on the experiences and to discuss if advantages can be gained using model-based testing when compared with traditional graphical user interface testing. Another contribution of this paper is a description of a keyword-based test automation tool that was implemented for the Android emulator during the case study. All the models and the tools created or used in this case study are available as open source."}
{"_id": "8e6c3870ef6797cd9f00b6ab5c7aa275cab3c5df", "title": "Network Slicing in 5G Mobile Communication Architecture, Profit Modeling, and Challenges", "text": "Efficient flexibility and higher system scalability call for enhanced network performance, better energy consumption, lower infrastructure cost, and effective resource utilization. To accomplish this, an architectural optimization and reconstruction of existing cellular network is required. Network slicing is considered to be one of the key enablers and an architectural answer of communication system of 2020 and beyond. Traditional mobile operators provide all types of services to various kinds of customers through a single network, however, with the deployment of network slicing operators are now able to divide entire network into different slices each with its own configuration and specific Quality of Service (QoS) requirements. In a slice-based network, each slice will be considered as a separate logical network. In this way, the infrastructure utilization and resource allocation will be much more energy and cost efficient in comparison to traditional network. In this paper, we provided a comprehensive discussion on concept and system architecture of network slicing with particular focus on its business aspect and profit modeling. We throughly discussed two different dimensions of profit modeling, so called Own-Slice Implementation and Resource Leasing for Outsourced Slices. We further addressed open research directions and existing challenges with the purpose of motivating new advances and adding realistic solutions to this emerging technology."}
{"_id": "185a3f6d4965bb9922894c58dd15ad4e98e25b02", "title": "A 0.8V, 560fJ/bit, 14Gb/s injection-locked receiver with input duty-cycle distortion tolerable edge-rotating 5/4X sub-rate CDR in 65nm CMOS", "text": "A quarter-rate forwarded-clock receiver utilizes an edge-rotating 5/4X sub-rate CDR for improved jitter tolerance with low power overhead relative to conventional 2X oversampling CDR systems. Low-voltage operation is achieved with efficient quarter-rate clock generation from an injection-locked oscillator (ILO) and through automatic independent phase rotator control that optimizes timing margin of each input quantizer in the presence of receive-side clock static phase errors and transmitter duty-cycle distortion (DCD). Fabricated in GP 65nm CMOS, the receiver operates up to 16Gb/s with a BER<;10-12, achieves a 1MHz phase tracking bandwidth, tolerates \u00b150%UIpp DCD on input data, and has 14Gb/s energy efficiency of 560fJ/bit at VDD=0.8V."}
{"_id": "1efcbef39332603ad8493888adfa8da4f7f04a6e", "title": "Evolution of the mid range theory of comfort for outcomes research.", "text": "constructs. VOLUME 49 \u0095 NUMBER 2 NURSING OUTLOOK 88 Evolution of the Mid Range Theory of Comfort for Outcomes Research Kolcaba Figure 2. MR theory of comfort. Reprinted with permission from Aspen Publishers, Inc. Copyright \u00a9 1992 Aspen Publishers, Inc. Adv Nurs Sci 1992;15:1. The work of psychologist Murray in 1938 met these criteria. Because his theory was about human needs, it was applicable to patients who experience multiple stimuli in stressful health care situations. This was the deductive stage of theory development: beginning with an abstract, general theoretic construction and substructing downward to more specific levels that included concepts for nursing practice. Each nursing concept then could be operationalized relative to a specific research setting. Murray's intent was to synthesize a grand theory for psychology from existing lesser psychologic theories of his time. His concepts are found in Figure 2, lines 1, 2, and 3. Because comfort was perceived by patients, it was logically substructed under Murray's concept of \"perception.\" \"Obstructing forces\" were substructed for nursing as health care needs; \"facilitating forces\" were nursing interventions, and \"interacting forces\" were intervening variables (line 4). This was the first and altruistic part of the theory, which stated that nurses identified unmet comfort needs of their patients, designed interventions to address those needs, and sought to enhance their patients' comfort, the immediate desired outcome. The second and practical part of the theory addressed the question? \"Why comfort?\" For nursing, unitary trend was substructed to health thema, which was further substructed to HSBs. HSB was Schlotfeldt's concept and represented the broad category of subsequent desired outcomes. She stated that HSBs could be internal, external, or a peaceful death. Some examples of HSBs are decreased length of stay, improved functional status, better response (or effort) to therapy, faster healing, or increased patient satisfaction. PATIENT COMFORT IN OUTCOMES RESEARCH: THE RETRODUCTIVE STAGE Retroduction is a form of reasoning that originates ideas. It is useful for the selection of phenomena that can be developed further and tested. This type of reasoning is applied in fields in which there are few available theories. Such is the case with outcomes research that, to date, is centered on collecting large databases for selected outcomes and relating those outcomes to types of nursing, medical, or institutional protocols. Adding a nursing theoretic framework to outcomes research would enhance this area of nursing investigation because theory-based practice enables nurses to design interventions that are congruent with desired outcomes, thus increasing the likelihood of finding significant results. Significant results on desired outcomes would provide data to respective institutions about nursing's \"productivity\" and the importance of nursing in the present competitive market. Murray's 20th century framework could not account for 21st century emphasis on institutional outcomes. However, with the use of retroduction, the concept of institutional integrity was added to the MR theory of comfort (Figure 3). Institutional integrity is conceptualized as the quality or state of health care corporations being complete, whole, sound, upright, honest, and sincere. 
The term has normative and descriptive components. Adding the term to the theory of comfort extends the theory to a consideration of the relationships between HSBs and institutional integrity. NURSING OUTLOOK MARCH/APRIL 2001 89 Figure 3. Comfort theory adapted for outcomes research. Box 1. Propositions in Theory of Comfort 1. Nurses identify patients' comfort needs that have not been met by existing support systems. 2. Nurses design interventions to address those needs. 3. Intervening variables are taken into account in designing interventions and mutually agreeing on reasonable immediate (enhanced comfort) and/or subsequent (HSBs) outcomes. 4. If enhanced comfort is achieved, patients are strengthened to engage in health-seeking behaviors. 5. When patients engage in health-seeking behaviors as a result of being strengthened by comforting actions, nurses and patients are more satisfied with their health care. 6. When patiencs are satisfied with their health care in a specific institution, that institution retains its integrity; institutional integrity has a normative and descriptive component. Box 2. Assumptions underpinning the Theory of Comfort 1. Human beings have holistic responses to complex stimuli. 2. Comfort is a desirable holistic outcome that is germane to the discipline of nursing. 3. Human beings strive to meet, or to have met, their basic comfort needs; it is an active endeavor. 4. Institutional integrity has a normative and descriptive component that is based on a patient-oriented value system. The theory now predicts that when patients engage fully in essential (comfort care). The concepts are (1) health care needs HSBs, such as their rehabilitation program or medical that include physical, psychospiritual, social, and environregimen, institutional integrity is enhanced also. Institutional mental needs that arise for patients in stressful health care situintegrity can be operationalized as patient satisfaction, ations; (2) nursing interventions, an umbrella term for successful discharges, cost-benefit ratios, or other outcomes commitment by nurses and institutions to promote comfort that are essential to institutional integrity. All of these concepts care (intentional care by nurses directed to meeting comfort are indicators of the integrity of the institution. The definineeds of patients); (3) intervening variables that will affect tions of underlined concepts (Figure 2) will be given; proposioutcomes (for example, institutions that are committed to tions that link the concepts are in Box 1, and assumptions that achieving improved outcomes through comfort care must underpin this theory are in Box 2. provide adequate staffing of registered nurses to meet comfort This theory describes traditional nursing practice as needs associated with existing patient acuity on any given unit); humanistic, needs-related, and holistic. Further, it relates prac(4) patient comfort, defined as the immediate state of being tice to institutional outcomes that make those nursing actions strengthened by having needs met in 4 contexts of human that promote soundness of health care institutions visible and experience (physical, psychospiritual, social, and environ90 VOLUME 49 \u0095 NUMBER 2 NURSING OUTLOOK Evolution of the Mid Range Theory of Comfort for Outcomes Research Kolcaba Evolution of the Mid Range Theory of Comfort for Outcomes Research Kolcaba Box 3. Evaluation of the Teory of Comfort 1. Concepts are specific for nursing. 1 Comfort has been called the essence of nursing. 2. 
Concepts and propositions are readily operationalized. The theory has been tested in many settings. The outcome of comfort is opera-tionaiized easily using the taxonomic structure of comfort as a guide for item generation. 11 3. The theory can be applied to many situations. Through The Comfort Line, students and researchers are working with the author to adapt the theory to micro-level situations. 4. Propositions range from causal to associative. Propositions generated from the theory are shown in Box 1 and have the desired range. 5. Assumptions fit the theory. The theory is holistic and needs based and theoretically empowers patients to engage in health-seeking behaviors. Institutional integrity is an important theoretic link to outcomes research. 6. The theory is relevant for potential users of the theory. Nursing students learn this theory easily and apply it to practice and research. 7. The theory is oriented to an outcome chat is important to patients. Qualitative research indicates chat patients want, and often need, their nurses to assist them in meeting their comfort needs. 8. The theory entails a nursing-sensitive outcome. A traditional goal of nursing has been to attend to patient comfort. Patients expect this from nurses and give them credit when comfort is delivered. Through deliberate actions of nurses, patients receive what they need and want from their nurses. This theory explicates how and why to do so. mental) that can be operationalized by the general comfort questionnaire; (5) HSBs, defined as patient actions of which they may or may not be aware and which may or may not be observed that are predictors or indicators of improved health (categorized as internal [eg, healing, immune function], as external [eg, functional status, perception of health], or as a peaceful death; HSBs are more accurate indicator of nurse productivity than the number of patients cared for; and (6) institutional integrity (previously defined). The definition of comfort has grown from its early definition9 to one that incorporates the strengthening component of comfort, the immediate desired outcome of nursing care. It is this strengthening component that facilitates patients' increased engagement in HSBs, the subsequent outcome. These subsequent HSBs, as indicators of nurse productivity, are of great interest to health care administrators because they facilitate decreased lengths of stay, successful discharges, and improved public relations when patients and families are happy with their health care. The 3 parts of the theory can be tested separately, or all concepts can be tested in one study. Path analysis can indicate which variables have direct or indirect influences on desired outcomes. Now, by linking HSBs to institutional integrity in an explicit way, outcomes research is theoretically based in nursing. CRITERIA FOR ADEQUACY OF MR THEORIES The expanded theory of comfort meets the following criteria for MR theory: (1) its concepts and propositions are specific to nursing, (2) it is readily operationalized, (3) it can be applied to many situati"}
{"_id": "4a6ebcf0ef4367a87edb4fe925615e93f82191b8", "title": "Sentiment Analysis of Citations Using Word2vec", "text": "Citation sentiment analysis is an important task in scientific paper analysis. Existing machine learning techniques for citation sentiment analysis are focusing on labor-intensive feature engineering, which requires large annotated corpus. As an automatic feature extraction tool, word2vec has been successfully applied to sentiment analysis of short texts. In this work, I conducted empirical research with the question: how well does word2vec work on the sentiment analysis of citations? The proposed method constructed sentence vectors (sent2vec) by averaging the word embeddings, which were learned from Anthology Collections (ACL-Embeddings). I also investigated polarity-specific word embeddings (PS-Embeddings) for classifying positive and negative citations. The sentence vectors formed a feature space, to which the examined citation sentence was mapped to. Those features were input into classifiers (support vector machines) for supervised classification. Using 10-cross-validation scheme, evaluation was conducted on a set of annotated citations. The results showed that word embeddings are effective on classifying positive and negative citations. However, hand-crafted features performed better for the overall classification."}
{"_id": "293e411e2e7820219c3bb49e34b302132a8cff39", "title": "Design, simulation and implementation of a self-oscillating control circuit to drive series resonant inverter feeding a brazing induction furnace", "text": "This research deals with the design and simulation of induction furnace power source (inverter) using MATLAB package. This source designed to lock on the resonant frequency of the load by using self-oscillating technique, also it has the capability to control the power supplied to the load using phase shift pulse width modulation (PSPWM) technique. These characteristics used to overcome the load nonlinear behavior during the brazing process and to achieve soft switching of the inverter elements. Also, the inverter has the capability to operate with or without load (workpiece). The implemented prototype operates at a frequency range (50-100)kHz and 10kW was successfully used for brazing two copper workpieces."}
{"_id": "dd162df4afd1275b2eb78cfd80e44e4bb541f4c1", "title": "A complete solution to the inverse kinematics problem for 4-DOF manipulator robot", "text": "A complete closed-form solution to the inverse kinematics problem for a 4-DOF manipulator robot is proposed. With the discussion of the existence of the offset and the distance between the axis of 1st joint and 2nd joint, all the possible solutions and singular configurations are presented. \u201cTask attitude\u201d is defined to describe the orientation of the end effector, which is convenient to solve the inverse kinematics problems in engineering. Finally, simulation of path-planning is performed, and the efficiency of the proposed method is verified through Mathematica."}
{"_id": "2e60c997eef6a37a8af87659798817d3eae2aa36", "title": "Stochastic Gradient Hamiltonian Monte Carlo", "text": "Hamiltonian Monte Carlo (HMC) sampling methods provide a mechanism for defining distant proposals with high acceptance probabilities in a Metropolis-Hastings framework, enabling more efficient exploration of the state space than standard random-walk proposals. The popularity of such methods has grown significantly in recent years. However, a limitation of HMC methods is the required gradient computation for simulation of the Hamiltonian dynamical system\u2014such computation is infeasible in problems involving a large sample size or streaming data. Instead, we must rely on a noisy gradient estimate computed from a subset of the data. In this paper, we explore the properties of such a stochastic gradient HMC approach. Surprisingly, the natural implementation of the stochastic approximation can be arbitrarily bad. To address this problem we introduce a variant that uses second-order Langevin dynamics with a friction term that counteracts the effects of the noisy gradient, maintaining the desired target distribution as the invariant distribution. Results on simulated data validate our theory. We also provide an application of our methods to a classification task using neural networks and to online Bayesian matrix factorization."}
{"_id": "49c6c08709d3cbf4b58477375d7c04bcd4da4520", "title": "Bayesian Learning via Stochastic Gradient Langevin Dynamics", "text": "In this paper we propose a new framework for learning from large scale datasets based on iterative learning from small mini-batches. By adding the right amount of noise to a standard stochastic gradient optimization algorithm we show that the iterates will converge to samples from the true posterior distribution as we anneal the stepsize. This seamless transition between optimization and Bayesian posterior sampling provides an inbuilt protection against overfitting. We also propose a practical method for Monte Carlo estimates of posterior statistics which monitors a \u201csampling threshold\u201d and collects samples after it has been surpassed. We apply the method to three models: a mixture of Gaussians, logistic regression and ICA with natural gradients."}
{"_id": "a290b2b99eb74ffb6d56c470959deccc4ca2ab1e", "title": "Performance-oriented DevOps: A Research Agenda", "text": "DevOps is a trend towards a tighter integration between development (Dev) and operations (Ops) teams. The need for such an integration is driven by the requirement to continuously adapt enterprise applications (EAs) to changes in the business environment. As of today, DevOps concepts have been primarily introduced to ensure a constant flow of features and bug fixes into new releases from a functional perspective. In order to integrate a non-functional perspective into these DevOps concepts this report focuses on tools, activities, and processes to ensure one of the most important quality attributes of a software system, namely performance. Performance describes system properties concerning its timeliness and use of resources. Common metrics are response time, throughput, and resource utilization. Performance goals for EAs are typically defined by setting upper and/or lower bounds for these metrics and specific business transactions. In order to ensure that such performance goals can be met, several activities are required during development and operation of these systems as well as during the transition from Dev to Ops. Activities during development are typically summarized by the term Software Performance Engineering (SPE), whereas activities during operations are called Application Performance Management (APM). SPE and APM were historically tackled independently from each other, but the newly emerging DevOps concepts require and enable a tighter integration between both activity streams. This report presents existing solutions to support this integration as well as open research challenges in this area. The report starts by defining EAs and summarizes their characteristics that make performance evaluations for these systems particularly challenging. It continues by describing our understanding of DevOps and explaining the roots of this trend to set the context for the remaining parts of the report. Afterwards, performance management activities that are common in both life cycle phases are explained, until the particularities of SPE and APM are discussed in separate sections. Finally, the report concludes by outlining activities and challenges to support the rapid iteration between Dev and Ops."}
{"_id": "f997b973e85f48a0a14907ab3c5ff2b852236ab0", "title": "Programs, life cycles, and laws of software evolution", "text": "By classifying programs according to their relationship to the environment in which they are executed, the paper identifies the sources of evolutionary pressure on computer applications and programs and shows why this results in a process of never ending maintenance activity. The resultant life cycle processes are then briefly discussed. The paper then introduces laws of Program Evolution that have been formulated following quantitative studies of the evolution of a number of different systems. Finally an example is provided of the application of Evolution Dynamics models to program release planning."}
{"_id": "68328eb662cf088a900d5d3ffd40d7f78c3541f6", "title": "BFC: High-performance distributed big-file cloud storage based on key-value store", "text": "Nowadays, cloud-based storage services are rapidly growing and becoming an emerging trend in data storage field. There are many problems when designing an efficient storage engine for cloud-based systems with some requirements such as big-file processing, lightweight meta-data, low latency, parallel I/O, deduplication, distributed, high scalability. Key-value stores played an important role and showed many advantages when solving those problems. This paper presents about Big File Cloud (BFC) with its algorithms and architecture to handle most of problems in a big-file cloud storage system based on key-value store. It is done by proposing low-complicated, fixed-size meta-data design, which supports fast and highly-concurrent, distributed file I/O, several algorithms for resumable upload, download and simple data deduplication method for static data. This research applied the advantages of ZDB - an in-house key-value store which was optimized with auto-increment integer keys for solving big-file storage problems efficiently. The results can be used for building scalable distributed data cloud storage that support big-file with size up to several terabytes."}
{"_id": "9034818161d015f9bea80b493cbaafd91e2086fd", "title": "A Meta-plugin for Bespoke Data Management in WordPress", "text": "WordPress is a powerful and extensible platform for webbased information publishing and management. While the WordPress core is targeted to the publication of chronologically ordered textual articles typical of blogs, users have developed plugins as well as themes to support the data management requirements of specific domains such as e-commerce or e-learning. However, the creation of such plugins requires development skills and effort. We present a meta-plugin that automatically generates bespoke plugins for data management based on user-defined ER models. We illustrate the approach using an example of creating a WordPress site for managing information about courses."}
{"_id": "d8455a1c020af245e2c6a21d39079a2d19aa11e6", "title": "Dendrites, deep learning, and sequences in the hippocampus.", "text": "The hippocampus places us both in time and space. It does so over remarkably large spans: milliseconds to years, and centimeters to kilometers. This works for sensory representations, for memory, and for behavioral context. How does it fit in such wide ranges of time and space scales, and keep order among the many dimensions of stimulus context? A key organizing principle for a wide sweep of scales and stimulus dimensions is that of order in time, or sequences. Sequences of neuronal activity are ubiquitous in sensory processing, in motor control, in planning actions, and in memory. Against this strong evidence for the phenomenon, there are currently more models than definite experiments about how the brain generates ordered activity. The flip side of sequence generation is discrimination. Discrimination of sequences has been extensively studied at the behavioral, systems, and modeling level, but again physiological mechanisms are fewer. It is against this backdrop that I discuss two recent developments in neural sequence computation, that at face value share little beyond the label \"neural.\" These are dendritic sequence discrimination, and deep learning. One derives from channel physiology and molecular signaling, the other from applied neural network theory - apparently extreme ends of the spectrum of neural circuit detail. I suggest that each of these topics has deep lessons about the possible mechanisms, scales, and capabilities of hippocampal sequence computation."}
{"_id": "404c3777c518b8c364395518d9b9aa4e84597279", "title": "Multivariate Network Exploration and Presentation: From Detail to Overview via Selections and Aggregations", "text": "Network data is ubiquitous; e-mail traffic between persons, telecommunication, transport and financial networks are some examples. Often these networks are large and multivariate, besides the topological structure of the network, multivariate data on the nodes and links is available. Currently, exploration and analysis methods are focused on a single aspect; the network topology or the multivariate data. In addition, tools and techniques are highly domain specific and require expert knowledge. We focus on the non-expert user and propose a novel solution for multivariate network exploration and analysis that tightly couples structural and multivariate analysis. In short, we go from Detail to Overview via Selections and Aggregations (DOSA): users are enabled to gain insights through the creation of selections of interest (manually or automatically), and producing high-level, infographic-style overviews simultaneously. Finally, we present example explorations on real-world datasets that demonstrate the effectiveness of our method for the exploration and understanding of multivariate networks where presentation of findings comes for free."}
{"_id": "ca67cbb7a740425da127c11466ce8b09ef46958c", "title": "Evaluating and Developing Theories in the Information Systems Discipline", "text": "This paper articulates a framework and criteria that can be used to evaluate the quality of theories. While the framework and criteria have general applicability, my focus is the evaluation of theories within the information systems discipline. To illustrate the usefulness of the framework and criteria, I show how they can be employed to pinpoint the strengths and weaknesses of a theory which, based upon citation evidence, has had a significant impact on other researchers within the information systems discipline. Because the evaluation of existing theories often provides the basis for refining existing theories or building new theories, I also show how the framework and criteria can be used to inform the development of high-quality theory."}
{"_id": "d257ba76407a13bbfddef211a5e3eb00409dc7b6", "title": "Boosting Algorithms for Parallel and Distributed Learning", "text": "The growing amount of available information and its distributed and heterogeneous nature has a major impact on the field of data mining. In this paper, we propose a framework for parallel and distributed boosting algorithms intended for efficient integrating specialized classifiers learned over very large, distributed and possibly heterogeneous databases that cannot fit into main computer memory. Boosting is a popular technique for constructing highly accurate classifier ensembles, where the classifiers are trained serially, with the weights on the training instances adaptively set according to the performance of previous classifiers. Our parallel boosting algorithm is designed for tightly coupled shared memory systems with a small number of processors, with an objective of achieving the maximal prediction accuracy in fewer iterations than boosting on a single processor. After all processors learn classifiers in parallel at each boosting round, they are combined according to the confidence of their prediction. Our distributed boosting algorithm is proposed primarily for learning from several disjoint data sites when the data cannot be merged together, although it can also be used for parallel learning where a massive data set is partitioned into several disjoint subsets for a more efficient analysis. At each boosting round, the proposed method combines classifiers from all sites and creates a classifier ensemble on each site. The final classifier is constructed as an ensemble of all classifier ensembles built on disjoint data sets. The new proposed methods applied to several data sets have shown that parallel boosting can achieve the same or even better prediction accuracy considerably faster than the standard sequential boosting. Results from the experiments also indicate that distributed boosting has comparable or slightly improved classification accuracy over standard boosting, while requiring much less memory and computational time since it uses smaller data sets."}
{"_id": "49449797e2c7bd486b4c1c90a162a88e3d2bcd20", "title": "Fast exact string matching algorithms", "text": "String matching is the problem of finding all the occurrences of a pattern in a text. We propose a very fast new family of string matching algorithms based on hashing q-grams. The new algorithms are the fastest on many cases, in particular, on small size alphabets. \u00a9 2007 Elsevier B.V. All rights reserved."}
{"_id": "fbceba388bebf2701481be5411ad3978e85cddde", "title": "Intensive Preprocessing of KDD Cup 99 for Network Intrusion Classification Using Machine Learning Techniques", "text": "Network security engineers work to keep services available all the time by handling intruder attacks. Intrusion Detection System (IDS) is one of the obtainable mechanism that used to sense and classify any abnormal actions. Therefore, the IDS must be always up to date with the latest intruder attacks signatures to preserve confidentiality, integrity and availability of the services. The speed of the IDS is very important issue as well learning the new attacks. This research work illustrates how the Knowledge Discovery and Data Mining (or Knowledge Discovery in Databases) KDD dataset is very handy for testing and evaluating different Machine Learning Techniques. It mainly focuses on the KDD preprocess part in order to prepare a decent and fair experimental data set. The techniques J48, Random Forest, Random Tree, MLP, Na\u00efve Bayes and Bayes Network classifiers have been chosen for this study. It has been proven that the Random forest classifier has achieved the highest accuracy rate for detecting and classifying all KDD dataset attacks, which are of type (DOS, R2L, U2R, and PROBE). Keywords\u2014 IDS, DDoS, MLP, Na\u00efve Bayes, Random Forest"}
{"_id": "0fe13bfef337492d9e47e48da5c0fe5edb712149", "title": "Automated coverage-driven testing: combining symbolic execution and model checking", "text": "\u8f6f\u4ef6\u6d4b\u8bd5\u662f\u5de5\u4e1a\u754c\u6700\u5e38\u7528\u7684\u8f6f\u4ef6\u9a8c\u8bc1\u6280\u672f, \u800c\u767d\u76d2\u6d4b\u8bd5\u662f\u6700\u57fa\u7840\u7684\u8f6f\u4ef6\u6d4b\u8bd5\u65b9\u6cd5\u3002 \u4e3a\u4e86\u63d0\u9ad8\u767d\u76d2\u6d4b\u8bd5\u6709\u6548\u6027, \u672c\u6587\u521b\u65b0\u6027\u5730\u63d0\u51fa\u4e86\u4e00\u79cd\u6df7\u5408\u7684\u8986\u76d6\u9a71\u52a8\u6d4b\u8bd5\u65b9\u6cd5, \u5b83\u63a5\u53d7\u4e00\u4e2a\u5f85\u6d4b\u7a0b\u5e8f\u548c\u76ee\u6807\u8986\u76d6\u51c6\u5219\u4e3a\u8f93\u5165, \u7136\u540e\u4e3a\u53ef\u8fbe\u7684\u6d4b\u8bd5\u5bf9\u8c61\u81ea\u52a8\u751f\u6210\u6d4b\u8bd5\u7528\u4f8b, \u540c\u65f6\u4e5f\u68c0\u6d4b\u51fa\u4e0d\u53ef\u8fbe\u7684\u6d4b\u8bd5\u5bf9\u8c61\u3002 \u672c\u6587\u5c06\u9a71\u52a8\u6d4b\u8bd5\u95ee\u9898\u8f6c\u5316\u4e3a\u7279\u5b9a\u8def\u5f84\u7684\u5bfb\u627e\u95ee\u9898, \u63d0\u51fa\u4e86\u5e26\u5f15\u5bfc\u7684\u7b26\u53f7\u6267\u884c\u6280\u672f\u548c\u589e\u5f3a\u7684\u6a21\u578b\u68c0\u67e5\u6280\u672f, \u6700\u7ec8\u53c8\u5c06\u4e24\u9879\u6280\u672f\u7ed3\u5408\u5728\u4e00\u8d77, \u8fdb\u4e00\u6b65\u63d0\u9ad8\u6d4b\u8bd5\u8986\u76d6\u7387\u5e76\u964d\u4f4e\u6d4b\u8bd5\u65f6\u95f4, \u5b9e\u73b0\u4f18\u52bf\u4e92\u8865\u3002"}
{"_id": "e1d9f97416986524f65733742021d9f02c8f7d0d", "title": "Semantic assessment of shopping behavior using trajectories, shopping related actions, and context information", "text": "The possibility of automatic understanding of customers\u2019 shopping behavior and acting according to their needs is relevant in the marketing domain, attracting a lot of attention lately. In this work, we focus on the task of automatic assessment of customers\u2019 shopping behavior, by proposing a multilevel framework. The framework is supported at low-level by different types of cameras, which are synchronized, facilitating efficient processing of information. A fish-eye camera is used for tracking, while a high-definition one serves for the action recognition task. The experiments are performed on both laboratory and real-life recordings in a supermarket. From the video recordings, we extract features related to the spatio-temporal behavior of trajectories, the dynamics and the time spent in each region of interest (ROI) in the shop and regarding the customer-products interaction patterns. Next \u2217Corresponding Author. Phone: +31(0)15 2781537. Fax: +31(0)15 2787141. Email address: m.c.popa@tudelft.nl (M.C. Popa) Preprint submitted to Pattern Recognition Letters December 24, 2011 we analyze the shopping sequences using a Hidden Markov Model (HMM). We conclude that it is possible to accurately classify trajectories (93%), discriminate between different shopping related actions (91.6%), and recognize shopping behavioral types by means of the proposed reasoning model in 95% of the cases."}
{"_id": "6b7f27cff688d5305c65fbd90ae18f3c6190f762", "title": "Generative networks as inverse problems with Scattering transforms", "text": "Generative Adversarial Nets (GANs) and Variational Auto-Encoders (VAEs) provide impressive image generations from Gaussian white noise, but the underlying mathematics are not well understood. We compute deep convolutional network generators by inverting a fixed embedding operator. Therefore, they do not require to be optimized with a discriminator or an encoder. The embedding is Lipschitz continuous to deformations so that generators transform linear interpolations between input white noise vectors into deformations between output images. This embedding is computed with a wavelet Scattering transform. Numerical experiments demonstrate that the resulting Scattering generators have similar properties as GANs or VAEs, without learning a discriminative network or an encoder."}
{"_id": "f7a7adc42f1ecaddf8a7c5738107be09b38e9195", "title": "Graph Clustering Bandits for Recommendation", "text": "We investigate an efficient context-dependent clustering technique for recommender systems based on exploration-exploitation strategies through multi-armed bandits over multiple users. Our algorithm dynamically groups users based on their observed behavioral similarity during a sequence of logged activities. In doing so, the algorithm reacts to the currently served user by shaping clusters around him/her but, at the same time, it explores the generation of clusters over users which are not currently engaged. We motivate the effectiveness of this clustering policy, and provide an extensive empirical analysis on real-world datasets, showing scalability and improved prediction performance over state-ofthe-art methods for sequential clustering of users in multi-armed bandit scenarios."}
{"_id": "88c53dfbb1902b68c0e985d48074d248f4604838", "title": "Entity Matching in Online Social Networks", "text": "In recent years, Online Social Networks (OSNs) have essentially become an integral part of our daily lives. There are hundreds of OSNs, each with its own focus and offers for particular services and functionalities. To take advantage of the full range of services and functionalities that OSNs offer, users often create several accounts on various OSNs using the same or different personal information. Retrieving all available data about an individual from several OSNs and merging it into one profile can be useful for many purposes. In this paper, we present a method for solving the Entity Resolution (ER), problem for matching user profiles across multiple OSNs. Our algorithm is able to match two user profiles from two different OSNs based on machine learning techniques, which uses features extracted from each one of the user profiles. Using supervised learning techniques and extracted features, we constructed different classifiers, which were then trained and used to rank the probability that two user profiles from two different OSNs belong to the same individual. These classifiers utilized 27 features of mainly three types: name based features (i.e., the Soundex value of two names), general user info based features (i.e., the cosine similarity between two user profiles), and social network topological based features (i.e., the number of mutual friends between two users' friends list). This experimental study uses real-life data collected from two popular OSNs, Facebook and Xing. The proposed algorithm was evaluated and its classification performance measured by AUC was 0.982 in identifying user profiles across two OSNs."}
{"_id": "c7fc1f2087cd6e990a5e58c8158f3ad480c30af6", "title": "Assessment of Motor Functioning in the Preschool Period", "text": "The assessment of motor functioning in young children has become increasingly important in recent years with the acknowledgement that motor impairment is linked with cognitive, language, social and emotional difficulties. However, there is no one gold standard assessment tool to investigate motor ability in children. The aim of the current paper was to discuss the issues related to the assessment of motor ability in young pre-school children and to provide guidelines on the best approach for motor assessment. The paper discusses the maturational changes in brain development at the preschool level in relation to motor ability. Other issues include sex differences in motor ability at this young age, and evidence for this in relation to sociological versus biological influences. From the previous literature it is unclear what needs to be assessed in relation to motor functioning. Should the focus be underlying motor processes or movement skill assessment? Several key assessment tools are discussed that produce a general measure of motor performance followed by a description of tools that assess specific skills, such as fine and gross motor, ball and graphomotor skills. The paper concludes with recommendations on the best approach in assessing motor function in pre-school children."}
{"_id": "d0dba40ebe54beea86ed185f627b69c84957ffed", "title": "The Analysis of the Mozart Effect on Visual Search", "text": "Over twenty-five years ago, researchers began to study the effects of music on the brain and cognition. The Mozart Effect is a term that was first coined by Dr. Alfred Tomatis in 1991 in his book titled \u201cPorquoi Mozart\u201d in which Tomatis advocated the use of Mozart music as alternative medicine for those suffering from dyslexia, autism, and other learning disorders. [Sorensen 2008] This effect, originally called \u201cThe Tomatis Effect\u201d was observed by Tomatis during therapy sessions with his patients. From his observations, Tomatis claimed that Mozart music helped promote healing and could even cure depression. The definition of the term evolved over time; in the present day, the term \u201cMozart Effect\u201d is typically used to refer to the phenomena of improved spatial reasoning after exposure to Mozart music."}
{"_id": "0ed598886c0d4bc792adf1479b480d7f01fec98a", "title": "A two-dimensional model for the study of interpersonal attraction.", "text": "We describe a model for understanding interpersonal attraction in which attraction can be understood as a product of the initial evaluations we make about others. The model posits that targets are evaluated on two basic dimensions, capacity and willingness, such that affective and behavioral attraction result from evaluations of (a) a target's capacity to facilitate the perceiver's goals/needs and (b) a target's potential willingness to facilitate those goals/needs. The plausibility of the two-dimensional model of attraction is evaluated vis-\u00e0-vis the extant literature on various attraction phenomena including the reciprocity of liking effect, pratfall effect, matching hypothesis, arousal effects, and similarity effect. We conclude that considerable evidence across a wide range of phenomena supports the idea that interpersonal attraction is principally determined by inferences about the target's capacity and willingness."}
{"_id": "7297a2196f1d82db54b26129a0fe5602d8ab8d8d", "title": "Designing mobile interfaces for novice and low-literacy users", "text": "While mobile phones have found broad application in bringing health, financial, and other services to the developing world, usability remains a major hurdle for novice and low-literacy populations. In this article, we take two steps to evaluate and improve the usability of mobile interfaces for such users. First, we offer an ethnographic study of the usability barriers facing 90 low-literacy subjects in India, Kenya, the Philippines, and South Africa. Then, via two studies involving over 70 subjects in India, we quantitatively compare the usability of different points in the mobile design space. In addition to text interfaces such as electronic forms, SMS, and USSD, we consider three text-free interfaces: a spoken dialog system, a graphical interface, and a live operator.\n Our results confirm that textual interfaces are unusable by first-time low-literacy users, and error prone for literate but novice users. In the context of healthcare, we find that a live operator is up to ten times more accurate than text-based interfaces, and can also be cost effective in countries such as India. In the context of mobile banking, we find that task completion is highest with a graphical interface, but those who understand the spoken dialog system can use it more quickly due to their comfort and familiarity with speech. We synthesize our findings into a set of design recommendations."}
{"_id": "a570d877766f0f6933d4b974a167758787ae763c", "title": "Proxemic Transitions: Designing Shape-Changing Furniture for Informal Meetings", "text": "The field of Shape-Changing Interfaces explores the qualities of physically dynamic artifacts. At furniture-scale, such artifacts have the potential of changing the ways we collaborate and engage with interiors and physical spaces. Informed by theories of proxemics, empirical studies of informal meetings and design work with shape-changing furniture, we develop the notion of proxemic transitions. We present three design aspects of proxemic transitions: transition speed, stepwise reconfiguration, and radical shifts. The design aspects focus on how to balance between physical and digital transformations in designing for proxemic transitions. Our contribution is three-fold: 1) the notion of proxemic transitions, 2) three design aspects to consider in designing for proxemic transitions, and 3) initial exploration of how these design aspects might generate designs of dynamic furniture. These contributions outline important aspects to consider when designing shape-changing furniture for informal workplace meetings."}
{"_id": "2f88b7d26f08c25ee336168fba0e37772c06ca6e", "title": "Analysis Operator Learning and its Application to Image Reconstruction", "text": "Exploiting a priori known structural information lies at the core of many image reconstruction methods that can be stated as inverse problems. The synthesis model, which assumes that images can be decomposed into a linear combination of very few atoms of some dictionary, is now a well established tool for the design of image reconstruction algorithms. An interesting alternative is the analysis model, where the signal is multiplied by an analysis operator and the outcome is assumed to be sparse. This approach has only recently gained increasing interest. The quality of reconstruction methods based on an analysis model severely depends on the right choice of the suitable operator. In this paper, we present an algorithm for learning an analysis operator from training images. Our method is based on lp-norm minimization on the set of full rank matrices with normalized columns. We carefully introduce the employed conjugate gradient method on manifolds, and explain the underlying geometry of the constraints. Moreover, we compare our approach to state-of-the-art methods for image denoising, inpainting, and single image super-resolution. Our numerical results show competitive performance of our general approach in all presented applications compared to the specialized state-of-the-art techniques."}
{"_id": "77c970bb9a0f57ed2474da22284c2c133f4970b7", "title": "Deep, complex, invertible networks for inversion of transmission effects in multimode optical fibres", "text": "We use complex-weighted, deep networks to invert the effects of multimode optical fibre distortion of a coherent input image. We generated experimental data based on collections of optical fibre responses to greyscale input images generated with coherent light, by measuring only image amplitude (not amplitude and phase as is typical) at the output of 1m and 10m long, 105 \u03bcm diameter multimode fibre. This data is made available as the Optical fibre inverse problem Benchmark collection. The experimental data is used to train complex-weighted models with a range of regularisation approaches. A unitary regularisation approach for complexweighted networks is proposed which performs well in robustly inverting the fibre transmission matrix, which is compatible with the physical theory. A benefit of the unitary constraint is that it allows us to learn a forward unitary model and analytically invert it to solve the inverse problem. We demonstrate this approach, and outline how it has the potential to improve performance by incorporating knowledge of the phase shift induced by the spatial light modulator."}
{"_id": "b59abcfd7dbed8e284c4237c405f912adb1ad4ea", "title": "Predicting the Semantic Orientation of Adjectives", "text": "We identify and validate from a large corpus constraints from conjunctions on the positive or negative semantic orientation of the conjoined adjectives. A log-linear regression model uses these constraints to predict whether conjoined adjectives are of same or different orientations, achieving 82% accuracy in this task when each conjunction is considered independently. Combining the constraints across many adjectives, a clustering algorithm separates the adjectives into groups of different orientations, and finally, adjectives are labeled positive or negative. Evaluations on real data and simulation experiments indicate high levels of performance: classification precision is more than 90% for adjectives that occur in a modest number of conjunctions in the corpus. 1 I n t r o d u c t i o n The semantic orientation or polarity of a word indicates the direction the word deviates from the norm for its semantic group or lezical field (Lehrer, 1974). It also constrains the word's usage in the language (Lyons, 1977), due to its evaluative characteristics (Battistella, 1990). For example, some nearly synonymous words differ in orientation because one implies desirability and the other does not (e.g., simple versus simplisfic). In linguistic constructs such as conjunctions, which impose constraints on the semantic orientation of their arguments (Anscombre and Ducrot, 1983; Elhadad and McKeown, 1990), the choices of arguments and connective are mutually constrained, as illustrated by: The tax proposal was simple and well-received } simplistic but well-received *simplistic and well-received by the public. In addition, almost all antonyms have different semantic orientations3 If we know that two words relate to the same property (for example, members of the same scalar group such as hot and cold) but have different orientations, we can usually infer that they are antonyms. Given that semantically similar words can be identified automatically on the basis of distributional properties and linguistic cues (Brown et al., 1992; Pereira et al., 1993; Hatzivassiloglou and McKeown, 1993), identifying the semantic orientation of words would allow a system to further refine the retrieved semantic similarity relationships, extracting antonyms. Unfortunately, dictionaries and similar sources (theusari, WordNet (Miller et al., 1990)) do not include semantic orientation information. 2 Explicit links between antonyms and synonyms may also be lacking, particularly when they depend on the domain of discourse; for example, the opposition bearbull appears only in stock market reports, where the two words take specialized meanings. In this paper, we present and evaluate a method that automatically retrieves semantic orientation information using indirect information collected from a large corpus. Because the method relies on the corpus, it extracts domain-dependent information and automatically adapts to a new domain when the corpus is changed. Our method achieves high precision (more than 90%), and, while our focus to date has been on adjectives, it can be directly applied to other word classes. Ultimately, our goal is to use this method in a larger system to automatically identify antonyms and distinguish near synonyms. 
2 O v e r v i e w o f O u r A p p r o a c h Our approach relies on an analysis of textual corpora that correlates linguistic features, or indicators, with 1 Exceptions include a small number of terms that are both negative from a pragmatic viewpoint and yet stand in all antonymic relationship; such terms frequently lexicalize two unwanted extremes, e.g., verbose-terse. 2 Except implicitly, in the form of definitions and usage examples."}
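The record above ends by describing how pairwise same/different-orientation constraints from conjunctions are combined and clustered into two polarity groups. As a hedged illustration of that clustering step (not the paper's exact log-linear pipeline), the sketch below builds a signed constraint matrix over a handful of invented adjectives and splits it with the leading eigenvector; all words and links are made up.

```python
import numpy as np

# Toy same/different-orientation constraints, as might be extracted from
# conjunctions ("simple and clear" -> same; "simplistic but well-received"
# -> different). Every word and link here is an invented example.
adjectives = ["simple", "clear", "helpful", "simplistic", "verbose", "confusing"]
same = [("simple", "clear"), ("clear", "helpful"), ("simplistic", "verbose")]
different = [("simple", "simplistic"), ("helpful", "confusing"), ("clear", "verbose")]

idx = {w: i for i, w in enumerate(adjectives)}
n = len(adjectives)

# Signed constraint matrix: +1 for same orientation, -1 for different.
A = np.zeros((n, n))
for u, v in same:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0
for u, v in different:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = -1.0

# The leading eigenvector of the signed matrix gives a two-way split that
# tends to agree with the constraints; its sign pattern labels the groups.
vals, vecs = np.linalg.eigh(A)
labels = np.sign(vecs[:, np.argmax(vals)])

for w in adjectives:
    print(w, "group", "+" if labels[idx[w]] > 0 else "-")
```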
{"_id": "35bc3b88d20098869a2e5cdb8cb83ed926627af0", "title": "SC-DCNN: Highly-Scalable Deep Convolutional Neural Network using Stochastic Computing", "text": "With the recent advance of wearable devices and Internet of Things (IoTs), it becomes attractive to implement the Deep Convolutional Neural Networks (DCNNs) in embedded and portable systems. Currently, executing the software-based DCNNs requires high-performance servers, restricting the widespread deployment on embedded and mobile IoT devices. To overcome this obstacle, considerable research efforts have been made to develop highly-parallel and specialized DCNN accelerators using GPGPUs, FPGAs or ASICs.\n Stochastic Computing (SC), which uses a bit-stream to represent a number within [-1, 1] by counting the number of ones in the bit-stream, has high potential for implementing DCNNs with high scalability and ultra-low hardware footprint. Since multiplications and additions can be calculated using AND gates and multiplexers in SC, significant reductions in power (energy) and hardware footprint can be achieved compared to the conventional binary arithmetic implementations. The tremendous savings in power (energy) and hardware resources allow immense design space for enhancing scalability and robustness for hardware DCNNs.\n This paper presents SC-DCNN, the first comprehensive design and optimization framework of SC-based DCNNs, using a bottom-up approach. We first present the designs of function blocks that perform the basic operations in DCNN, including inner product, pooling, and activation function. Then we propose four designs of feature extraction blocks, which are in charge of extracting features from input feature maps, by connecting different basic function blocks with joint optimization. Moreover, the efficient weight storage methods are proposed to reduce the area and power (energy) consumption. Putting all together, with feature extraction blocks carefully selected, SC-DCNN is holistically optimized to minimize area and power (energy) consumption while maintaining high network accuracy. Experimental results demonstrate that the LeNet5 implemented in SC-DCNN consumes only 17 mm2 area and 1.53 W power, achieves throughput of 781250 images/s, area efficiency of 45946 images/s/mm2, and energy efficiency of 510734 images/J."}
{"_id": "068f658a3ce19e4109f5284468396551345f07c0", "title": "Look who's talking: speech style and social context in language input to infants are linked to concurrent and future speech development.", "text": "Language input is necessary for language learning, yet little is known about whether, in natural environments, the speech style and social context of language input to children impacts language development. In the present study we investigated the relationship between language input and language development, examining both the style of parental speech, comparing 'parentese' speech to standard speech, and the social context in which speech is directed to children, comparing one-on-one (1:1) to group social interactions. Importantly, the language input variables were assessed at home using digital first-person perspective recordings of the infants' auditory environment as they went about their daily lives (N =26, 11- and 14-months-old). We measured language development using (a) concurrent speech utterances, and (b) word production at 24 months. Parentese speech in 1:1 contexts is positively correlated with both concurrent speech and later word production. Mediation analyses further show that the effect of parentese speech-1:1 on infants' later language is mediated by concurrent speech. Our results suggest that both the social context and the style of speech in language addressed to children are strongly linked to a child's future language development."}
{"_id": "ddacb13730e0bd4e8d97d86401c5fc479c364836", "title": "The Internet of Things: a security point of view", "text": "ed as devices to provide services at low-levels as network discovery services, metadata exchange services, and asynchronous publish and subscribe event in (Kranenburg et al. 2011; Sundmaeker et al. 2010); In (Peris-Lopez et al. 2006), a representational state transfer (REST) is defined to increase interoperability between Page 14 of 32 http://mc.manuscriptcentral.com/intr Internet Research 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 or Peer Rview loosely coupled services and distributed applications. In (Hernandez-Castro et al. 2013), the services layer introduced a service provisioning process (SPP) that can provide the interaction between applications and services. It is important to design an effective security strategy to protect services against attacks in the service layer. The security requirements in the service layer include: \u2022 Authorization, service authentication, group authentication, privacy protection, integrity, integrity, security of keys, non-repudiation, anti-replay, availability,"}
{"_id": "5cc51fb6ecadc853cb4017a43fb644ad1b852bc1", "title": "Chest pathology detection using deep learning with non-medical training", "text": "In this work, we examine the strength of deep learning approaches for pathology detection in chest radiographs. Convolutional neural networks (CNN) deep architecture classification approaches have gained popularity due to their ability to learn mid and high level image representations. We explore the ability of CNN learned from a non-medical dataset to identify different types of pathologies in chest x-rays. We tested our algorithm on a 433 image dataset. The best performance was achieved using CNN and GIST features. We obtained an area under curve (AUC) of 0.87-0.94 for the different pathologies. The results demonstrate the feasibility of detecting pathology in chest x-rays using deep learning approaches based on non-medical learning. This is a first-of-its-kind experiment that shows that Deep learning with ImageNet, a large scale non-medical image database may be a good substitute to domain specific representations, which are yet to be available, for general medical image recognition tasks."}
{"_id": "74d67d2b4c6fdc9f6498ac546b9685a14a9c02ee", "title": "Crowdsourcing of Pollution Data using Smartphones", "text": "In this paper we present our research into participatory sensing based solutions for the collection of data on urban pollution and nuisance. In the past 2 years we have been involved in the NoiseTube project which explores a crowdsourcing approach to measuring and mapping urban noise pollution using smartphones. By involving the general public and using off-the-shelf smartphones as noise sensors, we seek to provide a low cost solution for citizens to measure their personal exposure to noise in their everyday environment and participate in the creation of collective noise maps by sharing their geo-localized and annotated measurements with the community. We believe our work represents an interesting example of the novel mobile crowdsourcing applications which are enabled by ubiquitous computing systems. Furthermore we believe the NoiseTube system, and the currently ongoing validation experiments, provide an illustrative context for some of the open challenges faced by creators of ubiquitous crowdsourcing applications and services in general. We will also take the opportunity to present the insights we gained into some of the challenges."}
{"_id": "44df79541fa068c54cafd50357ab78d626170365", "title": "Architecture, the Backbone of Robotic Systems", "text": "Architectures form the backbone of complete robotic systems. The right choice of architecture can go a long way in facilitating the specification, implementation and validation of robotic systems. Conversely, of course, the wrong choice can make one's life miserable. We present some of the needs of robotic systems, describe some general classes of robot architectures, and discuss how different architectural styles can help in addressing those needs. The paper, like the field itself, is somewhat preliminary, yet it is hoped that it will provide guidance for those who use, or develop, robot architectures."}
{"_id": "69150ca557440b1cb384505e26e8144306159308", "title": "CLASSIFICATION OF DEFECTS IN SOFTWARE USING DECISION TREE ALGORITHM", "text": "Software defects due to coding errors continue to plague the industry with disastrous impact, especially in the enterprise application software category. Identifying how much of these defects are specifically due to coding errors is a challenging problem. Defect prevention is the most vivid but usually neglected aspect of software quality assurance in any project. If functional at all stages of software development, it can condense the time, overheads and wherewithal entailed to engineer a high quality product. In order to reduce the time and cost, we will focus on finding the total number of defects if the test case shows that the software process not executing properly. That has occurred in the software development process. The proposed system classifying various defects using decision tree based defect classification technique, which is used to group the defects after identification. The classification can be done by employing algorithms such as ID3 or C4.5 etc. After the classification the defect patterns will be measured by employing pattern mining technique. Finally the quality will be assured by using various quality metrics such as defect density, etc. The proposed system will be implemented in JAVA."}
{"_id": "45969603820a845d531cdf3fb8dc7bd0c4198dc2", "title": "SympGraph: a framework for mining clinical notes through symptom relation graphs", "text": "As an integral part of Electronic Health Records (EHRs), clinical notes pose special challenges for analyzing EHRs due to their unstructured nature. In this paper, we present a general mining framework SympGraph for modeling and analyzing symptom relationships in clinical notes.\n A SympGraph has symptoms as nodes and co-occurrence relations between symptoms as edges, and can be constructed automatically through extracting symptoms over sequences of clinical notes for a large number of patients. We present an important clinical application of SympGraph: symptom expansion, which can expand a given set of symptoms to other related symptoms by analyzing the underlying SympGraph structure. We further propose a matrix update algorithm which provides a significant computational saving for dynamic updates to the graph. Comprehensive evaluation on 1 million longitudinal clinical notes over 13K patients shows that static symptom expansion can successfully expand a set of known symptoms to a disease with high agreement rate with physician input (average precision 0.46), a 31% improvement over baseline co-occurrence based methods. The experimental results also show that the expanded symptoms can serve as useful features for improving AUC measure for disease diagnosis prediction, thus confirming the potential clinical value of our work."}
{"_id": "7c17b29d1b6d1dc35ff7f0a3cee172d28728a59e", "title": "Using Design Science Research to Incorporate Gamification into Learning Activities", "text": "Gamifying learning activities can be beneficial as it can better engage students and result in improved learning. However, incorporating game elements into learning activities can be difficult because it requires an appropriate mix of science, art, and experience. Design science research can help to address this issue. In particular, its search process for determining an appropriate solution for a given problem is useful as it allows for a number of iterative cycles over which the solution is incrementally refined, and ultimately resulting in a successful implementation. Our objective in this study is to develop a non-discipline-specific instantiation that can be embedded into a learning activity to motivate students and improve the quality of their learning. We provide a detailed explanation of how we designed and developed a gamified multiple choice quiz software tool over multiple iterations. The tool was trialled in three undergraduate IT-related courses and evaluated using a questionnaire survey. Results showed that the tool was well received as 76 per cent of students believed it was effective for learning and would for it to be used in their other courses."}
{"_id": "03eba58bd1d1f86a341ed5ac27ad94dc4a9a2ced", "title": "Network failure detection and graph connectivity", "text": "We consider a model for monitoring the connectivity of a network subject to node or edge failures. In particular, we are concerned with detecting (\u03b5, k)-failures: events in which an adversary deletes up to network elements (nodes or edges), after which there are two sets of nodes A and B, each at least an \u03b5 fraction of the network, that are disconnected from one another. We say that a set D of nodes is an (\u03b5 k)-detection set if, for any (\u03b5 k)-failure of the network, some two nodes in D are no longer able to communicate; in this way, D \"witnesses\" any such failure. Recent results show that for any graph G, there is an is (\u03b5 k)-detection set of size bounded by a polynomial in k and \u03b5, independent of the size of G.In this paper, we expose some relationships between bounds on detection sets and the edge-connectivity \u03bb and node-connectivity \u03ba of the underlying graph. Specifically, we show that detection set bounds can be made considerably stronger when parameterized by these connectivity values. We show that for an adversary that can delete \u03ba\u03bb edges, there is always a detection set of size O((\u03ba/\u03b5) log (1/\u03b5)) which can be found by random sampling. Moreover, an (\u03b5, &lambda)-detection set of minimum size (which is at most 1/\u03b5\u03b5) can be computed in polynomial time. A crucial point is that these bounds are independent not just of the size of G but also of the value of \u03bb.Extending these bounds to node failures is much more challenging. The most technically difficult result of this paper is that a random sample of O((\u03ba/\u03b5) log (1/\u03b5)) nodes is a detection set for adversaries that can delete a number of nodes up to \u03ba, the node-connectivity.For the case of edge-failures we use VC-dimension techniques and the cactus representation of all minimum edge-cuts of a graph; for node failures, we develop a novel approach for working with the much more complex set of all minimum node-cuts of a graph."}
{"_id": "9eacc0de031c0b7c94927bb112ab939dc81a73a5", "title": "Multi-View 3D Entangled Forest for Semantic Segmentation and Mapping", "text": "Applications that provide location related services need to understand the environment in which humans live such that verbal references and human interaction are possible. We formulate this semantic labelling task as the problem of learning the semantic labels from the perceived 3D structure. In this contribution we propose a batch approach and a novel multi-view frame fusion technique to exploit multiple views for improving the semantic labelling results. The batch approach works offline and is the direct application of an existing single-view method to scene reconstructions with multiple views. The multi-view frame fusion works in an incremental fashion accumulating the single-view results, hence allowing the online multi-view semantic segmentation of single frames and the offline reconstruction of semantic maps. Our experiments show the superiority of the approaches based on our fusion scheme, which leads to a more accurate semantic labelling."}
{"_id": "033fb6c30817e36dce787928cd821d1e6adaad8d", "title": "Coupled hidden Markov models for complex action recognition", "text": "We present algorithms for coupling and training hidden Markov models ( HMMs) to model interacting processes, and demonstrate their superiority to conventional HMMs in a vision task classifying two-handed actions. HMMs are perhaps the most successful framework in perceptual computing for modeling and classifying dynamic behaviors, popular because they offer dynamic time warping, a training algorithm, and a clear Bayesian semantics. However, the Markovian framework makes strong restrictive assumptions about the system generating the signal\u2014that it is a single process having a small number of states and an extremely limited state memory. The single-process model is often inappropriate for vision (and speech) applications, resulting in low ceilings on model performance. Coupled HMMs provide an efficient way to resolve many of these problems, and offer superior training speeds, model likelihoods, and robustness to initial conditions."}
{"_id": "598eadcca8ac9365d188157d585e076dc2ef60d9", "title": "UMAN WAYFINDING IN UNFAMILIAR BUILDINGS : A SIMULATION WITH A COGNIZING AGENT", "text": "Existing cognitively based computational models for wayfinding focus primarily on the exploration of mental representations rather than the information needs for wayfinding. It is important to consider information needs because people trying to find their ways in unfamiliar environments do not have a previously acquired mental representation but depend on external information. The fundamental tenet of this work is that all information must be presented to the wayfinder at each decision point as \u201cknowledge in the world.\u201d Simulating people\u2019s wayfinding behavior in a cognitively plausible way requires the integration of structures for information perception and cognition in the underlying model. In this work we use a cognizing agent to simulate people\u2019s wayfinding processes in unfamiliar buildings. The agent-based model consists of two tiers: simulated states of the environment and simulated beliefs of the agent. The agent is modeled with state, an observation schema, a specific wayfinding strategy, and commonsense knowledge. The environment is modeled as a graph, where nodes represent decision points and edges represent lines of movement. The perceptual wayfinding model integrates the agent and its environment within a \u201cSense-Plan-Act\u201d framework. It focuses on knowledge in the world to explain actions of the agent. The concepts of affordance and information are used to describe the kinds of knowledge the agent derives from the world by means of visual perception. Affordances are possibilities for action with reference to the agent. Information is necessary for the agent to decide which affordances to utilize. During the navigation process the agent accumulates beliefs about the environment by observing task-relevant affordances and information at decision points. The utilization of a \u201cgo-to\u201d affordance leads the agent from one node to another where it is again provided with percepts. A successful navigation corresponds to the agent\u2019s traversal from a start to a goal node. The proposed formal specifications of the agent-based model can be used to simulate people\u2019s wayfinding behavior in spatial information and design systems in a cognitively plausible way. Such simulation helps to determine where and why people face wayfinding difficulties and what needs to be done to avoid them. The case of wayfinding in an airport, in which the signage in the airport is tested, is employed to demonstrate the perceptual wayfinding model."}
{"_id": "179c5137c6e6122862ed776eea8e0ffa23ad93d4", "title": "A multi-view intelligent editor for digital video libraries", "text": "Silver is an authoring tool that aims to allow novice users to edit di gital video. The goal is to make editing of digital video as easy as text editing. Silver provides multiple coordinated views, including project, source, outline, subject, storyboard, textual transcript and timeline views. Selections and edits in any view are synchronized with all other views. A variety of recognition algorithms are applied to the video and audio content and then are used to aid in the editing tasks. The Informedia Digital Library supplies the recognition algorithms and metadata used to support intelligent editing, and Informedia also provides search and a repository. The metadata includes shot boundaries and a time-synchronized transcript, which are used to support intelligent selection and intelligent cut/copy/paste."}
{"_id": "1a6b8d832037f1565f6b81e3743fdacd227c6009", "title": "Recurrent Residual Learning for Sequence Classification", "text": "In this paper, we explore the possibility of leveraging Residual Networks (ResNet), a powerful structure in constructing extremely deep neural network for image understanding, to improve recurrent neural networks (RNN) for modeling sequential data. We show that for sequence classification tasks, incorporating residual connections into recurrent structures yields similar accuracy to Long Short Term Memory (LSTM) RNN with much fewer model parameters. In addition, we propose two novel models which combine the best of both residual learning and LSTM. Experiments show that the new models significantly outperform LSTM."}
{"_id": "ccdc2766b7c0d2c20d2db5abf243926e269ee106", "title": "Design and Evaluation of Visual Interpenetration Cues in Virtual Grasping", "text": "We present design and impact studies of visual feedback for virtual grasping. The studies suggest new or updated guidelines for feedback. Recent grasping techniques incorporate visual cues to help resolve undesirable visual or performance artifacts encountered after real fingers enter a virtual object. Prior guidelines about such visuals are based largely on other interaction types and provide inconsistent and potentially-misleading information when applied to grasping. We address this with a two-stage study. In the first stage, users adjusted parameters of various feedback types, including some novel aspects, to identify promising settings and to give insight into preferences regarding the parameters. In the next stage, the tuned feedback techniques were evaluated in terms of objective performance (finger penetration, release time, and precision) and subjective rankings (visual quality, perceived behavior impact, and overall preference). Additionally, subjects commented on the techniques while reviewing them in a final session. Performance wise, the most promising techniques directly reveal penetrating hand configuration in some way. Subjectively, subjects appreciated visual cues about interpenetration or grasp force, and color changes are most promising. The results enable selection of the best cues based on understanding the relevant tradeoffs and reasonable parameter values. The results also provide a needed basis for more focused studies of specific visual cues and for choosing conditions in comparisons to other feedback modes, such as haptic, audio, or multimodal. Considering results, we propose that 3D interaction guidelines must be updated to capture the importance of interpenetration cues, possible performance benefits of direct representations, and tradeoffs involved in cue selection."}
{"_id": "ed61f1ae043764e78627fff0af93b9425fd608f4", "title": "An Attribute-aware Neural Attentive Model for Next Basket Recommendation", "text": "Next basket recommendation is a new type of recommendation, which recommends a set of items, or a basket, to the user. Purchase in basket is a common behavior of consumers. Recently, deep neural networks have been applied to model sequential transactions of baskets in next basket recommendation. However, current methods do not track the user's evolving appetite for items explicitly, and they ignore important item attributes such as product category. In this paper, we propose a novel Attribute-aware Neural Attentive Model (ANAM) to address these problems. ANAM adopts an attention mechanism to explicitly model user's evolving appetite for items, and utilizes a hierarchical architecture to incorporate the attribute information. In specific, ANAM utilizes a recurrent neural network to model the user's sequential behavior over time, and relays the user's appetite toward items and their attributes to next basket through attention weights shared across baskets on the two different hierarchies. Experiment results on two public datasets (\u0131e Ta-Feng and JingDong) demonstrate the effectiveness of our ANAM model for next basket recommendation."}
{"_id": "615757ef091d745900172cb9446375618eee1420", "title": "A Context-Based Detection Framework for Advanced Persistent Threats", "text": "Besides a large set of malware categories such as worms and Trojan horses, Advanced Persistent Threat (APT) is another more sophisticated attack entity emerging in the cyber threats environment. In this paper we propose a model of the APT detection problem as well as a methodology to implement it on a generic organization network. From our knowledge, the proposed method is the first to address the problem of modeling an APT and to provide a possible detection framework."}
{"_id": "5426559cc4d668ee105ec0894b77493e91c5c4d3", "title": "Data Semantics", "text": ""}
{"_id": "143d057ea9ff4a508b772d80496c8e838da9b156", "title": "On the write energy of non-volatile resistive crossbar arrays with selectors", "text": "Crossbar arrays based on non-volatile resistive devices such as resistive RAM and phase change memory have become an important technology due to the applications to memory systems. The energy consumption of integrated circuits has become a primary issue due to thermal constraints in high performance systems and limited battery time in mobile and IoT applications. In this paper, the energy efficiency of a crossbar array of a one-selector-one-resistor (1S1R) configuration during a write operation is explored for the V/2 and V/3 bias schemes. The characteristics that affect the most energy efficient bias scheme are demystified. The write energy of a crossbar array is modeled in terms of the array size, number of selected cells, and the nonlinearity factor. For a specific array size and selector technology, the number of selected cells during a write operation can affect the choice of bias scheme. Moreover, the effect of leakage current due to partially biased unselected cells is explored."}
{"_id": "8d3490c7954ed9b9395e459ccdb2f89af588fdfa", "title": "Interactions between location and task affect the spatial and directional firing of hippocampal neurons.", "text": "When rats forage for randomly dispersed food in a high walled cylinder the firing of their hippocampal \"place\" cells exhibits little dependence on the direction faced by the rat. On radial arm mazes and similar tasks, place cells are strongly directionally selective within their fields. These tasks differ in several respects, including the visual environment, configuration of the traversable space, motor behavior (e.g., linear and angular velocities), and behavioral context (e.g., presence of specific, consistent goal locations within the environment). The contributions of these factors to spatial and directional tuning of hippocampal neurons was systematically examined in rats performing several tasks in either an enriched or a sparse visual environment, and on different apparati. Place fields were more spatially and directionally selective on a radial maze than on an open, circular platform, regardless of the visual environment. On the platform, fields were more directional when the rat searched for food at fixed locations, in a stereotypic and directed manner, than when the food was scattered randomly. Thus, it seems that place fields are more directional when the animal is planning or following a route between points of special significance. This might be related to the spatial focus of the rat's attention (e.g., a particular reference point). Changing the behavioral task was also accompanied by a change in firing location in about one-third of the cells. Thus, hippocampal neuronal activity appears to encode a complex interaction between locations, their significance and the behaviors the rat is called upon to execute."}
{"_id": "fbf1abc9f708704689eae63ac8420282a0d1e8bf", "title": "Design of a Novel Broadband Skeletal Discone Antenna With a Compact Configuration", "text": "A novel broadband skeletal discone antenna with a compact configuration is proposed in this letter. The antenna consists of three components including a circular metal disc, a small upside-down cone, and a skeletal cone. The operating principle of this antenna is investigated by simulation, and the corresponding simulated results are also demonstrated in this letter. According to the simulation, the antenna was fabricated and followed by the measurement. The comparison between the simulated and measured results is made for validating the design strategy. Although some discrepancies can be obviously found at some discrete frequencies, both the simulated and measured results indicate that this antenna is capable of operating in the frequency range of 400 MHz to almost 16.4 GHz with VSWR below 2.5, while maintaining stable omnidirectional radiation. Meanwhile, this antenna also has a distinct advantage over the traditional discone antenna for its compact configuration, which makes it a more competitive candidate for applications as wireless communication."}
{"_id": "926a62a19fb38b0211b2343e285439b383d50f5e", "title": "Distributed Data Clustering in Peer -to-Peer Networks: A Technical Review", "text": "Clustering as one of the main branches of data mining, has gained an important place in the different applied fields. On the other hand, Peer-to-Peer (P2P) networks with features such as simplicity, low cost communication, and high availability resources, have gained a worldwide popularity in the present days. In P2P network, high volumes of data are distributed between dispersed data sources. In such distributed systems, sending data to a central site is not possible, due to processing, storage and handling costs. This issue makes the need for specific methods for distributed data clustering in P2P networks. This paper makes a technical review of distributed data clustering approaches in P2P network, to understand the research trends in this area. Keywords\u2014Peer to Peer Networks, Distributed Data Mining, Distributed Data Clustering"}
{"_id": "86e3c2ecc0f5fcf56cbcbac71814f4c7aa76e3c1", "title": "Omni-Directional Mobility Using Active Split Offset Castors", "text": "An omni-directional mobility platform design concept using two active split offset cas (ASOC) and one or more conventional castors is presented. An ASOC module cons two coaxial conventional wheels driven independently and connected to the platfor an offset link. The kinematics and implementation of the omni-directional platform described and analyzed. Particular attention is paid to the system performance on u floors. The fundamental mechanics of the ASOC wheel scrubbing, which is critic system wear and energy use, is analyzed and compared to conventional active designs. The effectiveness of the design is shown experimentally using an inte mobility aid for the elderly.@DOI: 10.1115/1.1767181 #"}
{"_id": "05df8861bc5cb956d330ee8abf18289b9a44a802", "title": "Momentum and Credit Rating", "text": "This paper establishes a strong link between momentum and firm credit rating. Momentum profitability is statistically significant and economically large among low-grade firms, but it is nonexistent among high-grade firms. The momentum payoffs documented in the literature are generated by low rated firms that account for less than four percent of the overall market capitalization of rated firms. For loser (winner) stocks in the low rating category, profit margins, sales growth, operating cash flows, and interest coverage decrease (increase) over the formation and holdings periods, while illiquidity and volatility increase (decrease). This operating performance of the winner and loser stocks is not anticipated by the market as evidenced by the earnings surprises and analyst forecast revisions. As the market observes the deteriorating (improving) conditions there is a pressure to sell (buy) losers (winners). Indeed, we show that institutional investors buy winners and sell losers during one year into the holding period. Such buying and selling activity accompanied with high illiquidity precipitates the gains among high credit-risk winners and losses among high credit-risk losers."}
{"_id": "3edb5a16553226936438a8e92dda0c66f5872ad6", "title": "A Unified Theory of Underreaction , Momentum Trading , and Overreaction in Asset Markets", "text": "We model a market populated by two groups of boundedly rational agents: \u201cnewswatchers\u201d and \u201cmomentum traders.\u201d Each newswatcher observes some private information, but fails to extract other newswatchers\u2019 information from prices. If information diffuses gradually across the population, prices underreact in the short run. The underreaction means that the momentum traders can profit by trendchasing. However, if they can only implement simple ~i.e., univariate! strategies, their attempts at arbitrage must inevitably lead to overreaction at long horizons. In addition to providing a unified account of underand overreactions, the model generates several other distinctive implications. OVER THE LAST SEVERAL YEARS, a large volume of empirical work has documented a variety of ways in which asset returns can be predicted based on publicly available information. Although different studies have used a host of different predictive variables, many of the results can be thought of as belonging to one of two broad categories of phenomena. On the one hand, returns appear to exhibit continuation, or momentum, in the short to medium run. On the other hand, there is also a tendency toward reversals, or fundamental reversion, in the long run.1 It is becoming increasingly clear that traditional asset-pricing models\u2014 such as the capital asset pricing model ~CAPM! of Sharpe ~1964! and Lintner ~1965!, Ross\u2019s ~1976! arbitrage pricing theory ~APT!, or Merton\u2019s ~1973! intertemporal capital asset pricing model ~ICAPM!\u2014have a hard time explaining the growing set of stylized facts. In the context of these models, all of the predictable patterns in asset returns, at both short and long horizons, must ultimately be traced to differences in loadings on economically meaningful risk factors. And there is little affirmative evidence at this point to suggest that this can be done. * Stanford Business School and MIT Sloan School of Management and NBER. This research is supported by the National Science Foundation and the Finance Research Center at MIT. We are grateful to Denis Gromb, Ren\u00e9 Stulz, an anonymous referee, and seminar participants at MIT, Michigan, Wharton, Duke, UCLA, Berkeley, Stanford, and Illinois for helpful comments and suggestions. Thanks also to Melissa Cunniffe for help in preparing the manuscript. 1 We discuss this empirical work in detail and provide references in Section I below. THE JOURNAL OF FINANCE \u2022 VOL. LIV, NO. 6 \u2022 DECEMBER 1999"}
{"_id": "d0e60dec5bb676a2431786c683a2253aa12f3dea", "title": "ON THE PRICING OF CORPORATE DEBT : THE RISK STRUCTURE OF INTEREST RATES", "text": "On the Pricing of Corporate Debt: The Risk Structure of Interest Rates Author(s): Robert C. Merton Source: The Journal of Finance, Vol. 29, No. 2, Papers and Proceedings of the Thirty-Second Annual Meeting of the American Finance Association, New York, New York, December 28-30, 1973 (May, 1974), pp. 449-470 Published by: Blackwell Publishing for the American Finance Association Stable URL: http://www.jstor.org/stable/2978814 Accessed: 16/10/2008 10:09"}
{"_id": "06490212e42b3ed2d99ffb9d88a6ba437c0bf423", "title": "Momentum , Business Cycle and Time-Varying Expected Returns By", "text": "Momentum, Business Cycle and Time -Varying Expected Returns In recent years there has been a dramatic growth in academic interest in the predictability of asset returns based on past history. A growing number of researchers argue that time-series patterns in returns are due to investor irrationality, and thus can be translated into abnormal profits. Continuation of shortterm returns or momentum is one such pattern that has defied any rational explanation, and is at odds with market efficiency. This paper shows that profits to momentum strategies can be explained by a set of lagged macroeconomic variables and payoffs to momentum strategies disappear once stock returns are adjusted for their predictability based on these macroeconomic variables. Our results provide a possible role for time-varying expected returns as an explanation for momentum payoffs."}
{"_id": "3185c6c092fa0f2482a0d7142767e65a78a222c8", "title": "Disclosure , Liquidity , and the Cost of Capital", "text": "This paper shows that revealing public information to reduce information asymmetry can reduce a firm's cost of capital by attracting increased demand from large investors due to increased liquidity of its securities. Large firms will disclose more information since they benefit most. Disclosure also reduces the risk bearing capacity available through market makers. If initial information asymmetry is large, reducing it will increase the current price of the security. However, the maximum current price occurs with some asymmetry of information: further reduction of information asymmetry accentuates the undesirable effects of exit from market making. THIS PAPER STUDIES THE causes and consequences of a security's liquidity, especially the effect of future liquidity on the security's current price-equivalently the effect on its required expected rate of return, its cost of capital. Under conditions that we identify, reducing information asymmetry reduces the cost of capital. Under other (less typical) conditions, this reduced information asymmetry can have the opposite effect. We use public disclosure of information as the means of changing information asymmetry, but the points are more general. Our model is related to those of Kyle (1985), Glosten and Milgrom (1985), and Admati and Pfleiderer (1988). They assume that there is unlimited risk bearing capacity devoted to market making, which implies that changes in future liquidity never influence a security's cost of capital.1 In contrast, we develop a model of trade in an illiquid market with limited risk bearing capacity of risk-averse market makers and examine the effects of private information on the incentives of market makers to provide risk bearing *The University of Chicago and the University of Pennsylvania, respectively. Diamond gratefully acknowledges support from NSF grant SES-8896223 and from a gift to the University of Chicago from Dimensional Fund Advisors. Verrecchia gratefully acknowledges financial assistance from Ernst & Young and the Wharton School of the University of Pennsylvania. We are grateful to Franklin Allen, Ren6 Stulz, Robert Vishny, an anonymous referee, and workshop participants at the University of Chicago, the Federal Reserve Bank of Richmond, Georgetown University, and UCLA for helpful comments on a previous draft. 'Given the assumption, one could even close the secondary market for an arbitrarily long time in the future without influencing the current price. As a security becomes less liquid in these models, more of the security is permanently held by the market makers who do not care about liquidity."}
{"_id": "04f73d3c07cfe7788b2a25ff4b47f0d425da6f32", "title": "Robust sensor-based grasp primitive for a three-finger robot hand", "text": "This paper addresses the problem of robot grasping in conditions of uncertainty. We propose a grasp controller that deals robustly with this uncertainty using feedback from different contact-based sensors. This controller assumes a description of grasp consisting of a primitive that only determines the initial configuration of the hand and the control law to be used. We exhaustively validate the controller by carrying out a large number of tests with different degrees of inaccuracy in the pose of the target objects and by comparing it with results of a naive grasp controller."}
{"_id": "60d2a3bc5c4e80e0442564a8e7dd7c97802c43df", "title": "Integrating Out Multinomial Parameters in Latent Dirichlet Allocation and Naive Bayes for Collapsed Gibbs Sampling", "text": "This note shows how to integrate out the multinomial parameters for latent Dirichlet allocation (LDA) and naive Bayes (NB) models. This allows us to perform Gibbs sampling without taking multinomial parameter samples. Although the conjugacy of the Dirichlet priors makes sampling the multinomial parameters relatively straightforward, sampling on a topic-by-topic basis provides two advantages. First, it means that all samples are drawn from simple discrete distributions with easily calculated parameters. Second, and more importantly, collapsing supports fully stochastic Gibbs sampling where the model is updated after each word (in LDA) or document (in NB) is assigned a topic. Typically, more stochastic sampling leads to quicker convergence to the stationary state of the Markov chain made up of the Gibbs samples."}
{"_id": "ceb709d8be647b7fa089a63d0ba9d82b2eede1f4", "title": "A 6.8GHz low-profile H-plane ridged SIW horn antenna", "text": "The substrate integrated waveguide (SIW) allows the construct of some types of planar antennas which cannot be conventionally integrated on the substrate. However, owing to some technology restrictions, designed SIW horn antennas normally work in the frequency above 10 GHz. This paper proposes a 6.8GHz low-profile H-plane horn antenna based on the ridged SIW, which allows the substrates thinner than \u03bb0/10. Far field radiation patterns are reported to reveal the good performance of proposed antenna. Comparisons among the ridged SIW horn antennas with different number of ridges are also made to shown the matching improvements."}
{"_id": "c06cdfb3e6561e8083de73da6858a778b0cd046c", "title": "Neural Networks applied to wireless communications", "text": "This paper presents a time-delayed neural network (TDNN) model that has the capability of learning and predicting the dynamic behavior of nonlinear elements that compose a wireless communication system. This model could help speeding up system deployment by reducing modeling time. This paper presents results of effective application of the TDNN model to an amplifier, part of a wireless transmitter."}
{"_id": "0116899fce00ffa4afee08b505300bb3968faf9f", "title": "Long Text Generation via Adversarial Training with Leaked Information", "text": "Automatically generating coherent and semantically meaningful text has many applications in machine translation, dialogue systems, image captioning, etc. Recently, by combining with policy gradient, Generative Adversarial Nets (GAN) that use a discriminative model to guide the training of the generative model as a reinforcement learning policy has shown promising results in text generation. However, the scalar guiding signal is only available after the entire text has been generated and lacks intermediate information about text structure during the generative process. As such, it limits its success when the length of the generated text samples is long (more than 20 words). In this paper, we propose a new framework, called LeakGAN, to address the problem for long text generation. We allow the discriminative net to leak its own high-level extracted features to the generative net to further help the guidance. The generator incorporates such informative signals into all generation steps through an additional MANAGER module, which takes the extracted features of current generated words and outputs a latent vector to guide the WORKER module for next-word generation. Our extensive experiments on synthetic data and various realworld tasks with Turing test demonstrate that LeakGAN is highly effective in long text generation and also improves the performance in short text generation scenarios. More importantly, without any supervision, LeakGAN would be able to implicitly learn sentence structures only through the interaction between MANAGER and WORKER."}
{"_id": "08b9aaab78afbe916a708c69f4740fb5c55ef76f", "title": "Pvclust: an R package for assessing the uncertainty in hierarchical clustering", "text": "SUMMARY\nPvclust is an add-on package for a statistical software R to assess the uncertainty in hierarchical cluster analysis. Pvclust can be used easily for general statistical problems, such as DNA microarray analysis, to perform the bootstrap analysis of clustering, which has been popular in phylogenetic analysis. Pvclust calculates probability values (p-values) for each cluster using bootstrap resampling techniques. Two types of p-values are available: approximately unbiased (AU) p-value and bootstrap probability (BP) value. Multiscale bootstrap resampling is used for the calculation of AU p-value, which has superiority in bias over BP value calculated by the ordinary bootstrap resampling. In addition the computation time can be enormously decreased with parallel computing option."}
{"_id": "d7de5b2531d609d293c441bd21fc9ac34cb413b9", "title": "Robust single particle tracking in live cell time-lapse sequences", "text": "Single-particle tracking (SPT) is often the rate-limiting step in live-cell imaging studies of subcellular dynamics. Here we present a tracking algorithm that addresses the principal challenges of SPT, namely high particle density, particle motion heterogeneity, temporary particle disappearance, and particle merging and splitting. The algorithm first links particles between consecutive frames and then links the resulting track segments into complete trajectories. Both steps are formulated as global combinatorial optimization problems whose solution identifies the overall most likely set of particle trajectories throughout a movie. Using this approach, we show that the GTPase dynamin differentially affects the kinetics of long- and short-lived endocytic structures and that the motion of CD36 receptors along cytoskeleton-mediated linear tracks increases their aggregation probability. Both applications indicate the requirement for robust and complete tracking of dense particle fields to dissect the mechanisms of receptor organization at the level of the plasma membrane."}
{"_id": "a70e0ae7407d6ba9f2bf576dde69a0e109114af0", "title": "Improving the resolution performance of eigenstructure-based direction-finding algorithms", "text": ""}
{"_id": "363fc41f35a783944fe7fa2915c7b1ef0a4fe9fe", "title": "A Semi-Automatic Approach to Handle Data Warehouse Schema Evolution using Ontology", "text": "In recent years, the number of digital information storage and retrieval systems has increased immensely. Data warehousing has been found to be an extremely useful technology for integrating such heterogeneous and autonomous information sources. Data within the data warehouse is modelled in the form of a star or snowflake schema which facilitates business analysis in a multidimensional perspective. The data warehouse schema is derived from these sources and business requirements. Due to changing business needs the information sources not only change their data but as well change their schema structure. In addition to the source changes the business requirements for data warehouse may also change. Both these changes results in data warehouse schema evolution. Existing approaches either deal with source changes or requirements changes in a manual way and the changes to the data warehouse schema is carried out at the physical level. This may induce high maintenance costs and complex OLAP server administration. As ontology seems to be promising technique in data warehousing research, in this paper we propose an ontological approach to automate the evolution of a data warehouse schema. Our method assists the designer of the data warehouse to handle evolution at the conceptual level and after validation the physical schema is constructed automatically."}
{"_id": "de8b37623b6a1ea2c3d103d2cf4182856227aead", "title": "A Survey on Semantic Web Search Engine", "text": "The tremendous growth in the volume of data and with the terrific growth of number of web pages, traditional search engines now a days are not appropriate and not suitable anymore. Search engine is the most important tool to discover any information in World Wide Web. Semantic Search Engine is born of traditional search engine to overcome the above problem. The Semantic Web is an extension of the current web in which information is given well-defined meaning. Semantic web technologies are playing a crucial role in enhancing traditional web search, as it is working to create machine readable data. but it will not replace traditional search engine. In this paper we made a brief survey on various promising features of some of the best semantic search engines developed so far and we have discussed the various approaches to semantic search. We have summarized the techniques, advantages of some important semantic web search engines that are developed so far.The most prominent part is that how the semantic search engines differ from the traditional searches and their results are shown by giving a sample query as input."}
{"_id": "13595eb7573cf828734d7059f4f23f2bdb6a2c6f", "title": "Distributed Mutual Exclusion Algorithms for Intersection Traffic Control", "text": "Traffic control at intersections is a key issue and hot research topic in intelligent transportation systems. Existing approaches, including traffic light scheduling and trajectory maneuver, are either inaccurate and inflexible or complicated and costly. More importantly, due to the dynamics of traffic, it is really difficult to obtain the optimal solution in a real-time way. Inspired by the emergence of vehicular ad hoc network, we propose a novel approach to traffic control at intersections. Via vehicle to vehicle or vehicle to infrastructure communications, vehicles can compete for the privilege of passing the intersection, i.e., traffic is controlled via coordination among vehicles. Such an approach is flexible and efficient. To realize the coordination among vehicles, we first model the problem as a new variant of the classic mutual exclusion problem, and then design algorithms to solve new problem. Both centralized and distributed algorithms are. We conduct extensive simulations to evaluate the performance of our proposed algorithms. The results show that, our approach is efficient and outperforms a reference algorithm based on optimal traffic light scheduling. Moreover, our approach does not rely on traffic light or intersection controller facilities, which makes it flexible and applicable to various kinds of intersections."}
{"_id": "5dd05625e6f84ae4718760d54a6ae65ef04d7a98", "title": "Combining Machine Learning Techniques and Natural Language Processing to Infer Emotions Using Spanish Twitter Corpus", "text": "In the recent years, microblogging services, as Twitter, have become a popular tool for expressing feelings, opinions, broadcasting news, and communicating with friends. Twitter users produced more than 340 million tweets per day which may be consider a rich source of user information. We take a supervised approach to the problem, but leverage existing hashtags in Twitter for building our training data. Finally, we tested the Spanish emotional corpus applying two different machine learning algorithms for emotion identification reaching about 65% accuracy."}
{"_id": "2dd734db863834b1270e7d8b4793ccee50114f12", "title": "Learning to Create Jazz Melodies Using a Product of Experts", "text": "We describe a neural network architecture designed to learn the musical structure of jazz melodies over chord progressions, then to create new melodies over arbitrary chord progressions from the resulting connectome (representation of neural network structure). Our architecture consists of two sub-networks, the interval expert and the chord expert, each being LSTM (long short-term memory) recurrent networks. These two sub-networks jointly learn to predict a probability distribution over future notes conditioned on past notes in the melody. We describe a training procedure for the network and an implementation as part of the opensource Impro-Visor (Improvisation Advisor) application, and demonstrate our method by providing improvised melodies based on a variety of training sets."}
{"_id": "edda91dd4a82f410523bad42990c37d9b5037954", "title": "Ship Classification in TerraSAR-X Images With Feature Space Based Sparse Representation", "text": "Ship classification is the key step in maritime surveillance using synthetic aperture radar (SAR) imagery. In this letter, we develop a new ship classification method in TerraSAR-X images based on sparse representation in feature space, in which the sparse representation classification (SRC) method is exploited. In particular, to describe the ship more accurately and to reduce the dimension of the dictionary in SRC, we propose to employ a representative feature vector to construct the dictionary instead of utilizing the image pixels directly. By testing on a ship data set collected from TerraSAR-X images, we show that the proposed method is superior to traditional methods such as the template matching (TM), K-nearest neighbor (K-NN), Bayes and Support Vector Machines (SVM)."}
{"_id": "4e628aa75f319acf3c5b7f5108d80feb2c69fbb5", "title": "Robust Real-Time Periodic Motion Detection, Analysis, and Applications", "text": "We describe new techniques to detect and analyze periodic mo t on as seen from both a static and moving camera. By tracking objects of interest, we compute a n object\u2019s self-similarity as it evolves in time. For periodic motion, the self-similarity measure is a l o periodic, and we apply Time-Frequency analysis to detect and characterize the periodic motion. Th e periodicity is also analyzed robustly using the 2-D lattice structures inherent in similarity matrices . A real-time system has been implemented to track and classify objects using periodicity. Examples of o bject classification (people, running dogs, vehicles), person counting, and non-stationary periodici ty are provided."}
{"_id": "0c68fefbed415a76cf7b87c2e8cefaf8695b3b8c", "title": "Finding scientific gems with Google's PageRank algorithm", "text": "We apply the Google PageRank algorithm to assess the relative importance of all publications in the Physical Review family of journals from 1893 to 2003. While the Google number and the number of citations for each publication are positively correlated, outliers from this linear relation identify some exceptional papers or \u201cgems\u201d that are universally familiar to physicists. \u00a9 2006 Published by Elsevier Ltd. PACS: 02.50.Ey; 05.40.-a; 05.50.+q; 89.65.-s"}
{"_id": "36cfe2e94b7b99653e6565642236e0127d43ef5a", "title": "Automatic Driving on Ill-defined Roads: An Adaptive, Shape-constrained, Color-based Method", "text": "Autonomous following of ill-defined roads is an important part of visual navigation systems. This paper presents an adaptive method that uses a statistical model of the colour of the road surface within a trapezoidal shape that approximately corresponds to the projection of the road on the image plane. The method does not perform an explicit segmentation of the images but instead expands the shape sideways until the match between shape and road worsens, simultaneously computing the colour statistics. Results show that the method is capable of reactively following roads, at driving speeds typical of the robots used, in a variety of situations while coping with variable conditions of the road such as surface type, puddles and shadows. We extensively evaluate the proposed method using a large number of datasets with ground truth (that will be made public on our server once the paper is published). We moreover evaluate many colour spaces in the context of road following and find that the colour spaces that separate luminance from colour information perform best, especially if the luminance information is discarded."}
{"_id": "0f060ec52c0f7ea2dde6b23921a766e7b8bf4822", "title": "Mapping the knowledge domain and the theme evolution of appropriability research between 1986 and 2016: a scientometric review", "text": "The scholars in the research domains of innovation and strategic management concerned about the appropriability for about 30\u00a0years or more. They focused on appropriability research and constantly evolving. In this paper, we analyze 30\u00a0years (1986\u20132016) literature on appropriability studies from Web of Science Core Collection database. A cited reference clustering map of different periods and terms co-occurrence map have been generated using bibliometric analysis and content analysis. Based on this, we study the evolutionary trajectory, mechanisms and theoretical architecture of appropriability research and explore further research directions. The results indicate that the essence of the appropriability research evolution is the perception changes in opening and sharing, value creation and value growth, and future research is focusing on role of appropriability in the platform governance, generative appropriability and the evolution of the problem-solving mechanisms."}
{"_id": "1da28f17e4df1f45056d6b8e76c08252ee909333", "title": "Reidentification by Relative Distance Comparison", "text": "Matching people across nonoverlapping camera views at different locations and different times, known as person reidentification, is both a hard and important problem for associating behavior of people observed in a large distributed space over a prolonged period of time. Person reidentification is fundamentally challenging because of the large visual appearance changes caused by variations in view angle, lighting, background clutter, and occlusion. To address these challenges, most previous approaches aim to model and extract distinctive and reliable visual features. However, seeking an optimal and robust similarity measure that quantifies a wide range of features against realistic viewing conditions from a distance is still an open and unsolved problem for person reidentification. In this paper, we formulate person reidentification as a relative distance comparison (RDC) learning problem in order to learn the optimal similarity measure between a pair of person images. This approach avoids treating all features indiscriminately and does not assume the existence of some universally distinctive and reliable features. To that end, a novel relative distance comparison model is introduced. The model is formulated to maximize the likelihood of a pair of true matches having a relatively smaller distance than that of a wrong match pair in a soft discriminant manner. Moreover, in order to maintain the tractability of the model in large scale learning, we further develop an ensemble RDC model. Extensive experiments on three publicly available benchmarking datasets are carried out to demonstrate the clear superiority of the proposed RDC models over related popular person reidentification techniques. The results also show that the new RDC models are more robust against visual appearance changes and less susceptible to model overfitting compared to other related existing models."}
{"_id": "60d3a9ceb6dc359a8e547b42fa039016d129acdb", "title": "The implications of embodiment for behavior and cognition: animal and robotic case studies", "text": "In this paper, we will argue that if we want to understand the function of the brain (or the control in the case of robots), we must understand how the brain is embedded into the physical system, and how the organism interacts with the real world. While embodiment has often been used in its trivial meaning, i.e. \u2018intelligence requires a body\u2019, the concept has deeper and more important implications, concerned with the relation between physical and information (neural, control) processes. A number of case studies are presented to illustrate the concept. These involve animals and robots and are concentrated around locomotion, grasping, and visual perception. A theoretical scheme that can be used to embed the diverse case studies will be presented. Finally, we will establish a link between the low-level sensory-motor processes and cognition. We will present an embodied view on categorization, and propose the concepts of \u2018body schema\u2019 and \u2018forward models\u2019 as a natural extension of the embodied approach toward first representations."}
{"_id": "e1427d72a7f1bd9658c01d0cf3c2358262ef0c5d", "title": "Double-Hypergraph Based Sentence Ranking for Query-Focused Multi-document Summarizaton", "text": "Traditional graph based sentence ranking approaches modeled the documents as a text graph where vertices represent sentences and edges represent pairwise similarity relationships between two sentences. Such modeling cannot capture complex group relationships shared among multiple sentences which can be useful for sentence ranking. In this paper, we propose two different group relationships (sentence-topic relationship and document-topic relationship) shared among sentences, and construct a double-hypergraph integrating these relationships into a unified framework. Then, a double-hypergraph based sentence ranking algorithm is developed for query-focused multi-document summarization, in which Markov random walk is defined on each hypergraph and the mixture Markov chains are formed so as to perform transductive learning in the double-hypergraph. When evaluated on DUC datasets, performance of the proposed approach is remarkable."}
{"_id": "4eb4c82eb0dfc4f1fb5ebbf76cfad0549dde3d11", "title": "Connectionist speech recognition of Broadcast News", "text": "This paper describes connectionist techniques for recogni ti n of Broadcast News. The fundamental difference between connectionist systems and more convent ional mixture-of-Gaussian systems is that connectionist models directly estimate posterior probabi lities as opposed to likelihoods. Access to posterior probabilities has enabled us to develop a number of no vel approaches to confidence estimation, pronunciation modelling and search. In addition we have inv stigated a new feature extraction technique based on the modulation-filtered spectrogram, and methods f or combining multiple information sources. We have incorporated all of these techniques into a system fo r the transcription of Broadcast News, and we present results on the 1998 DARPA Hub-4E Broadcast News ev aluation data."}
{"_id": "59c51fddb09f899f298d45a32cccb9760b8465c1", "title": "Application of Pretrained Deep Neural Networks to Large Vocabulary Speech Recognition", "text": "The use of Deep Belief Networks (DBN) to pretrain Neural Networks has recently led to a resurgence in the use of Artificial Neural Network Hidden Markov Model (ANN/HMM) hybrid systems for Automatic Speech Recognition (ASR). In this paper we report results of a DBN-pretrained context-dependent ANN/HMM system trained on two datasets that are much larger than any reported previously with DBN-pretrained ANN/HMM systems 5870 hours of Voice Search and 1400 hours of YouTube data. On the first dataset, the pretrained ANN/HMM system outperforms the best Gaussian Mixture Model Hidden Markov Model (GMM/HMM) baseline, built with a much larger dataset by 3.7% absolute WER, while on the second dataset, it outperforms the GMM/HMM baseline by 4.7% absolute. Maximum Mutual Information (MMI) fine tuning and model combination using Segmental Conditional Random Fields (SCARF) give additional gains of 0.1% and 0.4% on the first dataset and 0.5% and 0.9% absolute on the second dataset."}
{"_id": "7d558433925a5ec7203a6d19b754c45f4b959173", "title": "Ml.lib: robust, cross-platform, open-source machine learning for max and pure data", "text": "This paper documents the development of ml.lib: a set of opensource tools designed for employing a wide range of machine learning techniques within two popular real-time programming environments, namely Max and Pure Data. ml.lib is a crossplatform, lightweight wrapper around Nick Gillian\u2019s Gesture Recognition Toolkit, a C++ library that includes a wide range of data processing and machine learning techniques. ml.lib adapts these techniques for real-time use within popular dataflow IDEs, allowing instrument designers and performers to integrate robust learning, classification and mapping approaches within their existing workflows. ml.lib has been carefully designed to allow users to experiment with and incorporate machine learning techniques within an interactive arts context with minimal prior knowledge. A simple, logical and consistent, scalable interface has been provided across over sixteen externals in order to maximize learnability and discoverability. A focus on portability and maintainability has enabled ml.lib to support a range of computing architectures\u2014including ARM\u2014 and operating systems such as Mac OS, GNU/Linux and Windows, making it the most comprehensive machine learning implementation available for Max and Pure Data."}
{"_id": "30492d91c8ac2e476a25bf7ddf445be00b3d2fe3", "title": "CrowdProbe: Non-invasive Crowd Monitoring with Wi-Fi Probe", "text": "Devices with integrated Wi-Fi chips broadcast beacons for network connection management purposes. Such information can be captured with inexpensive monitors and used to extract user behavior. To understand the behavior of visitors, we deployed our passive monitoring system---CrowdProbe, in a multi-floor museum for six months. We used a Hidden Markov Models (HMM) based trajectory inference algorithm to infer crowd movement using more than 1.7 million opportunistically obtained probe request frames.\n However, as more devices adopt schemes to randomize their MAC addresses in the passive probe session to protect user privacy, it becomes more difficult to track crowd and understand their behavior. In this paper, we try to make use of historical transition probability to reason about the movement of those randomized devices with spatial and temporal constraints. With CrowdProbe, we are able to achieve sufficient accuracy to understand the movement of visitors carrying devices with randomized MAC addresses."}
{"_id": "6938196e63ff09c25d1e1366aaec7135a6720216", "title": "Boosting Named Entity Recognition with Neural Character Embeddings", "text": "Most state-of-the-art named entity recognition (NER) systems rely on handcrafted features and on the output of other NLP tasks such as part-of-speech (POS) tagging and text chunking. In this work we propose a language-independent NER system that uses automatically learned features only. Our approach is based on the CharWNN deep neural network, which uses word-level and character-level representations (embeddings) to perform sequential classification. We perform an extensive number of experiments using two annotated corpora in two different languages: HAREM I corpus, which contains texts in Portuguese; and the SPA CoNLL2002 corpus, which contains texts in Spanish. Our experimental results shade light on the contribution of neural character embeddings for NER. Moreover, we demonstrate that the same neural network which has been successfully applied to POS tagging can also achieve state-of-the-art results for language-independet NER, using the same hyperparameters, and without any handcrafted features. For the HAREM I corpus, CharWNN outperforms the state-of-the-art system by 7.9 points in the F1-score for the total scenario (ten NE classes), and by 7.2 points in the F1 for the selective scenario (five NE classes)."}
{"_id": "57678c40090eed6a6024cc6879c92c9b18826e34", "title": "Treatment for an endosseous implant migrated into the maxillary sinus not causing maxillary sinusitis: case report.", "text": "Placement of endosseous implants in the maxilla has been proven to be a reliable treatment modality. If there is lack of supporting bone, the placed implant may not have enough primary stability and may migrate into the maxillary sinus. Displaced implants must be removed. If there are no signs of maxillary sinusitis, augmentation of the resulting alveolar defect can be performed during the same procedure."}
{"_id": "5459cd31ff0f182da81d5c58026546b995118676", "title": "The SLAM project: debugging system software via static analysis", "text": "The goal of the SLAM project is to check whether or not a program obeys \"API usage rules\" that specify what it means to be a good client of an API. The SLAM toolkit statically analyzes a C program to determine whether or not it violates given usage rules. The toolkit has two unique aspects: it does not require the programmer to annotate the source program (invariants are inferred); it minimizes noise (false error messages) through a process known as \"counterexample-driven refinement\". SLAM exploits and extends results from program analysis, model checking and automated deduction. We have successfully applied the SLAM toolkit to Windows XP device drivers, to both validate behavior and find defects in their usage of kernel APIs."}
{"_id": "4c9546e1169e1e47dd98d36a5501d4cdbc6b5133", "title": "ACC radar sensor technology, test requirements, and test solutions", "text": "This paper presents an overview of the radar sensor technology used in adaptive cruise control (ACC) applications. The performance tradeoffs of the different modulation schemes and scanning antenna types used by these radar sensors are discussed. The test requirements of these millimeter wave radar sensors during production, installation on a vehicle, and during after-market service are also discussed. Different test methods used for characterizing and aligning these radar sensors are presented."}
{"_id": "63c410b5ff4b32d651648dbe8cd19c93cc124310", "title": "IntEx: A Syntactic Role Driven Protein-Protein Interaction Extractor for Bio-Medical Text", "text": "In this paper, we present a fully automated extraction system, named IntEx, to identify gene and protein interactions in biomedical text. Our approach is based on first splitting complex sentences into simple clausal structures made up of syntactic roles. Then, tagging biological entities with the help of biomedical and linguistic ontologies. Finally, extracting complete interactions by analyzing the matching contents of syntactic roles and their linguistically significant combinations. Our extraction system handles complex sentences and extracts multiple and nested interactions specified in a sentence. Experimental evaluations with two other state of the art extraction systems indicate that the IntEx system achieves better performance without the labor intensive pattern engineering requirement. \u2217"}
{"_id": "67a4d21928f9541adab5c95f0a05da5f2fcd80d8", "title": "Estimating the Tensor of Curvature of a Surface from a Polyhedral Approximation", "text": "Estimating principal curvatures and principal directions of a surface from a polyhedral approximation with a large number of small faces, such as those produced by iso-surface construction algorithms, has become a basic step in many computer vision algorithms. Particularly in those targeted at medical applications. In this paper we describe a method to estimate the tensor of curvature of a surface at the vertices of a polyhedral approximation. Principal curvatures and principal directions are obtained by computing in closed form the eigenvalues and eigenvectors of certain 3 3 symmetric matrices defined by integral formulas, and closely related to the matrix representation of the tensor of curvature. The resulting algorithm is linear, both in time and in space, as a function of the number of vertices and faces of the polyhedral surface."}
{"_id": "ac67016546d4f50981e0f9da3b3fcfb759553362", "title": "Omnidirectional Circularly Polarized Antenna Combining Monopole and Loop Radiators", "text": "In this letter, a new omnidirectional circularly polarized (CP) antenna is proposed. By adopting a coaxial probe, in which a metal sleeve acts as a monopole, and printed spoke-like metal strips fabricated on two substrates, which act as a loop, this antenna exhibits an omnidirectional circularly polarized radiation in the azimuthal plane. Theory, design, simulation, and experiment of the antenna are presented. Measurement results show that the proposed antenna can achieve a -10-dB impedance bandwidth of 13.8% (ranges from 1.55 to 1.78 GHz) and a 3-dB axial-ratio (AR) bandwidth of 17.5% (ranges from 1.46 to 1.74 GHz) in the azimuthal plane. At the frequency of 1.575 GHz, the measured average axial ratio and right-hand CP (RHCP) gain are 1.91 dB and 0.86 dBi, respectively, in the azimuthal plane."}
{"_id": "0e6f0e8445b2fbdec390b7b91b5b8cd61519fa1c", "title": "Topology and Prediction Focused Research on Graph Convolutional Neural Networks", "text": "Important advances have been made using convolutional neural network (CNN) approaches to solve complicated problems in areas that rely on grid structured data such as image processing and object classification. Recently, research on graph convolutional neural networks (GCNN) has increased dramatically as researchers try to replicate the success of CNN for graph structured data. Unfortunately, traditional CNN methods are not readily transferable to GCNN, given the irregularity and geometric complexity of graphs. The emerging field of GCNN is further complicated by research papers that differ greatly in their scope, detail, and level of academic sophistication needed by the reader. The present paper provides a review of some basic properties of GCNN. As a guide to the interested reader, recent examples of GCNN research are then grouped according to techniques that attempt to uncover the underlying topology of the graph model and those that seek to generalize traditional CNN methods on graph data to improve prediction of class membership. Discrete Signal Processing on Graphs (DSPG) is used as a theoretical framework to better understand some of the performance gains and limitations of these recent GCNN approaches. A brief discussion of Topology Adaptive Graph Convolutional Networks (TAGCN) is presented as an approach motivated by DSPG and future research directions using this approach are briefly discussed."}
{"_id": "96af12f384a998ca05db74d3f36487ca1fa7eee4", "title": "Block-Matching Convolutional Neural Network for Image Denoising", "text": "There are two main streams in up-to-date image denoising algorithms: non-local self similarity (NSS) prior based methods and convolutional neural network (CNN) based methods. The NSS based methods are favorable on images with regular and repetitive patterns while the CNN based methods perform better on irregular structures. In this paper, we propose a blockmatching convolutional neural network (BMCNN) method that combines NSS prior and CNN. Initially, similar local patches in the input image are integrated into a 3D block. In order to prevent the noise from messing up the block matching, we first apply an existing denoising algorithm on the noisy image. The denoised image is employed as a pilot signal for the block matching, and then denoising function for the block is learned by a CNN structure. Experimental results show that the proposed BMCNN algorithm achieves state-of-the-art performance. In detail, BMCNN can restore both repetitive and irregular structures."}
{"_id": "248743cee130f36d54d19f9dd3b9687de1c6a5d8", "title": "Towards a Machine-Learning Approach for Sickness Prediction in 360\u00b0 Stereoscopic Videos", "text": "Virtual reality systems are widely believed to be the next major computing platform. There are, however, some barriers to adoption that must be addressed, such as that of motion sickness \u2014 which can lead to undesirable symptoms including postural instability, headaches, and nausea. Motion sickness in virtual reality occurs as a result of moving visual stimuli that cause users to perceive self-motion while they remain stationary in the real world. There are several contributing factors to both this perception of motion and the subsequent onset of sickness, including field of view, motion velocity, and stimulus depth. We verify first that differences in vection due to relative stimulus depth remain correlated with sickness. Then, we build a dataset of stereoscopic 3D videos and their corresponding sickness ratings in order to quantify their nauseogenicity, which we make available for future use. Using this dataset, we train a machine learning algorithm on hand-crafted features (quantifying speed, direction, and depth as functions of time) from each video, learning the contributions of these various features to the sickness ratings. Our predictor generally outperforms a na\u00efve estimate, but is ultimately limited by the size of the dataset. However, our result is promising and opens the door to future work with more extensive datasets. This and further advances in this space have the potential to alleviate developer and end user concerns about motion sickness in the increasingly commonplace virtual world."}
{"_id": "220b3067f4976b64dda1b86ff6bf09c322345893", "title": "Towards Security-Aware Virtual Environments for Digital Twins", "text": "Digital twins open up new possibilities in terms of monitoring, simulating, optimizing and predicting the state of cyber-physical systems (CPSs). Furthermore, we argue that a fully functional, virtual replica of a CPS can also play an important role in securing the system. In this work, we present a framework that allows users to create and execute digital twins, closely matching their physical counterparts. We focus on a novel approach to automatically generate the virtual environment from specification, taking advantage of engineering data exchange formats. From a security perspective, an identical (in terms of the system's specification), simulated environment can be freely explored and tested by security professionals, without risking negative impacts on live systems. Going a step further, security modules on top of the framework support security analysts in monitoring the current state of CPSs. We demonstrate the viability of the framework in a proof of concept, including the automated generation of digital twins and the monitoring of security and safety rules."}
{"_id": "1e6bdc40419c2c628b0ba75ee9226a8be24848fd", "title": "Joint learning of ontology and semantic parser from text", "text": "Semantic parsing methods are used for capturing and representing semantic meaning of text. Meaning representation capturing all the concepts in the text may not always be available or may not be sufficiently complete. Ontologies provide a structured and reasoning-capable way to model the content of a collection of texts. In this work, we present a novel approach to joint learning of ontology and semantic parser from text. The method is based on semi-automatic induction of a context-free grammar from semantically annotated text. The grammar parses the text into semantic trees. Both, the grammar and the semantic trees are used to learn the ontology on several levels \u2013 classes, instances, taxonomic and non-taxonomic relations. The approach was evaluated on the first sentences of Wikipedia pages describing people."}
{"_id": "f6493f96f05318c554076b72520811759972fb6c", "title": "Differential effects of attentional focus strategies during long-term resistance training.", "text": "The purpose of this study was to investigate the effects of using an internal versus external focus of attention during resistance training on muscular adaptations. Thirty untrained college-aged men were randomly assigned to an internal focus group (INTERNAL) that focused on contracting the target muscle during training (n\u2009=\u200915) or an external focus group (EXTERNAL) that focused on the outcome of the lift (n\u2009=\u200915). Training for both routines consisted of 3 weekly sessions performed on non-consecutive days for 8 weeks. Subjects performed 4 sets of 8-12 repetitions per exercise. Changes in strength were assessed by six repetition maximum in the biceps curl and isometric maximal voluntary contraction in knee extension and elbow flexion. Changes in muscle thickness for the elbow flexors and quadriceps were assessed by ultrasound. Results show significantly greater increases in elbow flexor thickness in INTERNAL versus EXTERNAL (12.4% vs. 6.9%, respectively); similar changes were noted in quadriceps thickness. Isometric elbow flexion strength was greater for INTERNAL while isometric knee extension strength was greater for EXTERNAL, although neither reached\u00a0statistical significance. The findings lend support to the use of a mind-muscle connection to enhance muscle hypertrophy."}
{"_id": "5cc777b278cfc5587a175fbb4a8becbb9b1271fc", "title": "Design of an Optical Character Recognition System for Camera-based Handheld Devices", "text": "This paper presents a complete Optical Character Recognition (OCR) system for camera captured image/graphics embedded textual documents for handheld devices. At first, text regions are extracted and skew corrected. Then, these regions are binarized and segmented into lines and characters. Characters are passed into the recognition module. Experimenting with a set of 100 business card images, captured by cell phone camera, we have achieved a maximum recognition accuracy of 92.74%. Compared to Tesseract, an open source desktop-based powerful OCR engine, present recognition accuracy is worth contributing. Moreover, the developed technique is computationally efficient and consumes low memory so as to be applicable on handheld devices."}
{"_id": "cd108ed4f69b754cf0a5f3eb74d6c1949ea6674d", "title": "Super-Resolution with Deep Convolutional Sufficient Statistics", "text": "Inverse problems in image and audio, and super-resolution in particular, can be seen as high-dimensional structured prediction problems, where the goal is to characterize the conditional distribution of a high-resolution output given its lowresolution corrupted observation. When the scaling ratio is small, point estimates achieve impressive performance, but soon they suffer from the regression-to-themean problem, result of their inability to capture the multi-modality of this conditional distribution. Modeling high-dimensional image and audio distributions is a hard task, requiring both the ability to model complex geometrical structures and textured regions. In this paper, we propose to use as conditional model a Gibbs distribution, where its sufficient statistics are given by deep convolutional neural networks. The features computed by the network are stable to local deformation, and have reduced variance when the input is a stationary texture. These properties imply that the resulting sufficient statistics minimize the uncertainty of the target signals given the degraded observations, while being highly informative. The filters of the CNN are initialized by multiscale complex wavelets, and then we propose an algorithm to fine-tune them by estimating the gradient of the conditional log-likelihood, which bears some similarities with Generative Adversarial Networks. We evaluate experimentally the proposed approach in the image superresolution task, but the approach is general and could be used in other challenging ill-posed problems such as audio bandwidth extension."}
{"_id": "9e003d71f413b8b2e2e7118d83f6da1edd77199f", "title": "Development of the Conformity to Masculine Norms Inventory", "text": "This article describes the construction of the Conformity to Masculine Norms Inventory (CMNI), and 5 studies that examined its psychometric properties. Factor analysis indicated 11 distinct factors: Winning, Emotional Control, Risk-Taking, Violence, Dominance, Playboy, SelfReliance, Primacy of Work, Power Over Women, Disdain for Homosexuals, and Pursuit of Status. Results from Studies 2\u20135 indicated that the CMNI had strong internal consistency estimates and good differential validity comparing men with women and groups of men on healthrelated questions; all of the CMNI subscales were significantly and positively related to other masculinity-related measures, with several subscales being related significantly and positively to psychological distress, social dominance, aggression, and the desire to be more muscular, and significantly and negatively to attitudes toward psychological help seeking and social desirability; and CMNI scores had high test\u2013retest estimates for a 2\u20133 week period."}
{"_id": "0dc75ef88f936c966b9daa24afcd40aaa9ab4e0a", "title": "Visual Transfer between Atari Games using Competitive Reinforcement Learning", "text": "This paper explores the use of deep reinforcement learning agents to transfer knowledge from one environment to another. More specifically, the method takes advantage of asynchronous advantage actor critic (A3C) architecture to generalize a target game using an agent trained on a source game in Atari. Instead of fine-tuning a pre-trained model for the target game, we propose a learning approach to update the model using multiple agents trained in parallel with different representations of the target game. Visual mapping between video sequences of transfer pairs is used to derive new representations of the target game; training on these visual representations of the target game improves model updates in terms of performance, data efficiency and stability. In order to demonstrate the functionality of the architecture, Atari games Pong-v0 and Breakout-v0 are being used from the OpenAI gym environment; as the source and target environment."}
{"_id": "c5f29fd458bbed6f9941787b611cc84f04f6ace7", "title": "Approximate Gaussian Elimination", "text": "Approximate Gaussian Elimination"}
{"_id": "22f7fe1ea5e983aee091e75ae13be1e832222c51", "title": "A two-view based multilayer feature graph for robot navigation", "text": "To facilitate scene understanding and robot navigation in a modern urban area, we design a multilayer feature graph (MFG) based on two views from an on-board camera. The nodes of an MFG are features such as scale invariant feature transformation (SIFT) feature points, line segments, lines, and planes while edges of the MFG represent different geometric relationships such as adjacency, parallelism, collinearity, and coplanarity. MFG also connects the features in two views and the corresponding 3D coordinate system. Building on SIFT feature points and line segments, MFG is constructed using feature fusion which incrementally, iteratively, and extensively verifies the aforementioned geometric relationships using random sample consensus (RANSAC) framework. Physical experiments show that MFG can be successfully constructed in urban area and the construction method is demonstrated to be very robust in identifying feature correspondence."}
{"_id": "57c270a9f468f7129643852945cf3562cbb76e07", "title": "Multi-label convolutional neural network based pedestrian attribute classification", "text": "Recently, pedestrian attributes like gender, age, clothing etc., have been used as soft biometric traits for recognizing people. Unlike existing methods that assume the independence of attributes during their prediction, we propose a multi-label convolutional neural network (MLCNN) to predict multiple attributes together in a unified framework. Firstly, a pedestrian image is roughly divided into multiple overlapping body parts, which are simultaneously integrated in the multi-label convolutional neural network. Secondly, these parts are filtered independently and aggregated in the cost layer. The cost function is a combination of multiple binary attribute classification cost functions. Experiments show that the proposed method significantly outperforms the SVM based method on the PETA database."}
{"_id": "18ad2478014dd61d38e8197ff7060e9997e7a989", "title": "Evolving neural networks to play checkers without relying on expert knowledge", "text": "An experiment was conducted where neural networks compete for survival in an evolving population based on their ability to play checkers. More specifically, multilayer feedforward neural networks were used to evaluate alternative board positions and games were played using a minimax search strategy. At each generation, the extant neural networks were paired in competitions and selection was used to eliminate those that performed poorly relative to other networks. Offspring neural networks were created from the survivors using random variation of all weights and bias terms. After a series of 250 generations, the best-evolved neural network was played against human opponents in a series of 90 games on an internet website. The neural network was able to defeat two expert-level players and played to a draw against a master. The final rating of the neural network placed it in the \"Class A\" category using a standard rating system. Of particular importance in the design of the experiment was the fact that no features beyond the piece differential were given to the neural networks as a priori knowledge. The process of evolution was able to extract all of the additional information required to play at this level of competency. It accomplished this based almost solely on the feedback offered in the final aggregated outcome of each game played (i.e., win, lose, or draw). This procedure stands in marked contrast to the typical artifice of explicitly injecting expert knowledge into a game-playing program."}
{"_id": "40fe245fd4ccdb9eb6b61528d7e88f564784caf3", "title": "Verifying Anaconda's expert rating by competing against Chinook: experiments in co-evolving a neural checkers player", "text": "Since the early days of arti5cial intelligence, there has been interest in having a computer teach itself how to play a game of skill, like checkers, at a level that is competitive with human experts. To be truly noteworthy, such e7orts should minimize the amount of human intervention in the learning process. Recently, co-evolution has been used to evolve a neural network (called Anaconda) that, when coupled with a minimax search, can evaluate checker-boards and play to the level of a human expert, as indicated by its rating of 2045 on an international web site for playing checkers. The neural network uses only the location, type, and number of pieces on the board as input. No other features that would require human expertise are included. Experiments were conducted to verify the neural network\u2019s expert rating by competing it in 10 games against a \u201cnovice-level\u201d version of Chinook, a world-champion checkers program. The neural network had 2 wins, 4 losses, and 4 draws in the 10-game match. Based on an estimated rating of Chinook at the novice level, the results corroborate Anaconda\u2019s expert rating. c \u00a9 2002 Elsevier Science B.V. All rights reserved."}
{"_id": "61b9de524afa0b134052c50b40a4b48ee60bb410", "title": "Decentralized Decision Making in the Game of Tic-tac-toe", "text": "Traditionally, the game of Tic-tac-toe is a pencil and paper game played by two people who take turn to place their pieces on a 3times3 grid with the objective of being the first player to fill a horizontal, vertical, or diagonal row with their pieces. What if instead of having one person playing against another, one person plays against a team of nine players, each of whom is responsible for one cell in the 3times3 grid? In this new way of playing the game, the team has to coordinate its players, who are acting independently based on their limited information. In this paper, we present a solution that can be extended to the case where two such teams play against each other, and also to other board games. Essentially, the solution uses a decentralized decision making, which at first seems to complicate the solution. However, surprisingly, we show that in this mode, an equivalent level of decision making ability comes from simple components that reduce system complexity"}
{"_id": "82d02f4782bd208b65fd4e3dea06abe95cc48b04", "title": "A self-learning evolutionary chess program", "text": "A central challenge of artificial intelligence is to create machines that can learn from their own experience and perform at the level of human experts. Using an evolutionary algorithm, a computer program has learned to play chess by playing games against itself. The program learned to evaluate chessboard configurations by using the positions of pieces, material and positional values, and neural networks to assess specific sections of the chessboard. During evolution, the program improved its play by almost 400 rating points. Testing under simulated tournament conditions against Pocket Fritz 2.0 indicated that the evolved program performs above the master level."}
{"_id": "b6df5c2ac2f91d71b1d08d76135e2a470ac1ad1e", "title": "Machine learning - an artificial intelligence approach", "text": "Research in the area of learning structural descriptions from examples is reviewed, giving primary attention to methods of learning characteristic descrip\u00ad tions of single concepts. In particular, we examine methods for finding the maximally-specific conjunctive generalizations (MSC-generalizations) that cover all of the training examples of a given concept. Various important aspects of structural learning in general are examined, and several criteria for evaluating structural learning methods are presented. Briefly, these criteria include (i) ade\u00ad quacy of the representation language, (ii) generalization rules employed, computational efficiency, and (iv) flexibility and extensibility. Selected learning methods developed by Buchanan, et al., Hayes-Roth, Vere, Winston, and the authors are analyzed according to these criteria. Finally, some goals are sug\u00ad gested for future research."}
{"_id": "55688a3f5596ec42636f4bcfd5f11c97c8ae2a8f", "title": "Implementing the Chinese Wall Security Model in Workflow Management Systems", "text": "The Chinese wall security model (CWSM) was designed to provide access controls that mitigate conflict of interest in commercial organizations, and is especially important for large-scale interenterprise workflow applications. This paper describes how to implement the CWSM in a WfMS. We first demonstrate situations in which the role-based access control model is not sufficient for this, and we then propose a security policy language to solve this problem, also providing support for the intrinsic dynamic access control mechanism defined in the CWSM (i.e., the dynamic binding of subjects and elements in the company data set). This language can also specify several requirements of the dynamic security policy that arise when applying the CWSM in WfMSs. Finally we discuss how to implement a run-time system to implement CWSM policies specified by this language in a WfMS."}
{"_id": "3ff5ff2c5a333add8cac95ed7b8cb422e5f4ebbb", "title": "Filter Bank Common Spatial Pattern (FBCSP) in Brain-Computer Interface", "text": "In motor imagery-based brain computer interfaces (BCI), discriminative patterns can be extracted from the electroencephalogram (EEG) using the common spatial pattern (CSP) algorithm. However, the performance of this spatial filter depends on the operational frequency band of the EEG. Thus, setting a broad frequency range, or manually selecting a subject-specific frequency range, are commonly used with the CSP algorithm. To address this problem, this paper proposes a novel filter bank common spatial pattern (FBCSP) to perform autonomous selection of key temporal-spatial discriminative EEG characteristics. After the EEG measurements have been bandpass-filtered into multiple frequency bands, CSP features are extracted from each of these bands. A feature selection algorithm is then used to automatically select discriminative pairs of frequency bands and corresponding CSP features. A classification algorithm is subsequently used to classify the CSP features. A study is conducted to assess the performance of a selection of feature selection and classification algorithms for use with the FBCSP. Extensive experimental results are presented on a publicly available dataset as well as data collected from healthy subjects and unilaterally paralyzed stroke patients. The results show that FBCSP, using a particular combination feature selection and classification algorithm, yields relatively higher cross-validation accuracies compared to prevailing approaches."}
{"_id": "0f6af4ad085494b94c6ac66d187ccfd91800ba55", "title": "Coping with child sexual abuse among college students and post-traumatic stress disorder: the role of continuity of abuse and relationship with the perpetrator.", "text": "OBJECTIVE\nThe purpose of this study was to examine the effects of child sexual abuse (CSA) on the use of coping strategies and post-traumatic stress disorder (PTSD) scores in young adults, as well as the role of avoidance and approach coping strategies in those PTSD scores in CSA victims. The role of coping strategies was studied by considering their possible interactive effect with the continuity of abuse and the relationship with the perpetrator; the effect of coping strategies on PTSD was also compared between CSA victim and non-CSA victim participants.\n\n\nMETHOD\nThe sample was comprised of 138 victims of CSA and another 138 participants selected as a comparison group. Data about child sexual abuse were obtained from a questionnaire developed for this purpose. Coping strategies were assessed with the How I Deal with Things Scale (Burt & Katz, 1987), while PTSD scores were assessed with the \"Escala de Gravedad de S\u00edntomas del Trastorno de Estr\u00e9s Postraum\u00e1tico\" (Severity of Symptoms of PTSD Scale; Echebur\u00faa et al., 1997).\n\n\nRESULTS\nParticipants who had been victims of CSA showed significantly higher PTSD scores and lower approach coping strategies scores. However, differences in avoidance coping strategies between groups were not consistent and did not always follow the expected direction. Only the use of avoidance coping strategies was related to PTSD, participants who used these showing higher scores. The effects of avoidance strategies were stronger in continued than in isolated abuse, in intrafamilial than in extrafamilial abuse and in CSA victims than in non-victims.\n\n\nCONCLUSIONS\nThese results confirm the idea of CSA as a high-risk experience that can affect the victim's coping strategies and lead to PTSD to a lesser or greater extent depending on the coping strategy used. Moreover, the role of these strategies varies depending on whether or not the participant is a victim of CSA and on the characteristics of abuse (continuity and relationship with the perpetrator).\n\n\nPRACTICE IMPLICATIONS\nIn terms of intervention, a reduction of avoidance-type strategies appears to have a beneficial effect, especially in the case of intrafamilial and/or continued CSA victims. The encouragement of \"spontaneous\" approach strategies (devised by the victim herself, without counseling) would probably not lead to more positive outcomes in terms of PTSD symptomatology. However, encouraging CSA survivors to engage in therapy aimed at developing effective approach strategies, as other studies have suggested, may help reduce PTSD symptoms."}
{"_id": "de1fa7d3eb801dd316cd1e40cb029cd5eb309070", "title": "An approch for monitoring and smart planning of urban solid waste management using smart-M3 platform", "text": "Solid waste management is one of the most important challenges in urban areas throughout the world and it is becoming a critical issue in developing countries where a rapid increase in population has been observed. Waste collection is a complex process that requires the use of large amount of money and an elaborate management of logistics. In this paper an approch to smart waste collection is proposed able to improve and optimize the handling of solid urban waste. Context of smart waste management requires interconnection among heterogeneous devices and data sharing involving a large amount of people. Smart-M3 platform solves these problems offering a high degree of decoupling and scalability.Waste collection is made by real-time monitoring the level of bin's fullness through sensors placed inside the containers. This method enables to exempt from collecting semi-empty bins. Furthermore, incoming data can be provided to decisional algorithms in order to determine the optimal number of waste vehicles or bins to distribute in the territory. The presented solution gives important advantages for both service providers and consumers. The formers could obtain a sensible cost reduction. On the other hand, users may benefit from a higher level of service quality. In order to make users feel closer to their community, they can interact with the system to be aware about the fulness state of the nearest bins. Finally, a mechanism for collecting \u201cgreen points\u201d was introduced for encouraging citizens to recycle."}
{"_id": "0aa86c5d58b77415364622ce56646ba89c30cb63", "title": "Evolutionary algorithms for multiobjective optimization: methods and applications", "text": "Many real-world problems involve two types of problem difficulty: i) multiple, conflicting objectives and ii) a highly complex search space. On the one hand, instead of a single optimal solution competing goals give rise to a set of compromise solutions, generally denoted as Pareto-optimal. In the absence of preference information, none of the corresponding trade-offs can be said to be better than the others. On the other hand, the search space can be too large and too complex to be solved by exact methods. Thus, efficient optimization strategies are required that are able to deal with both difficulties. Evolutionary algorithms possess several characteristics that are desirable for this kind of problem and make them preferable to classical optimization methods. In fact, various evolutionary approaches to multiobjective optimization have been proposed since 1985, capable of searching for multiple Paretooptimal solutions concurrently in a single simulation run. However, in spite of this variety, there is a lack of extensive comparative studies in the literature. Therefore, it has remained open up to now: \u2022 whether some techniques are in general superior to others, \u2022 which algorithms are suited to which kind of problem, and \u2022 what the specific advantages and drawbacks of certain methods are. The subject of this work is the comparison and the improvement of existing multiobjective evolutionary algorithms and their application to system design problems in computer engineering. In detail, the major contributions are: \u2022 An experimental methodology to compare multiobjective optimizers is developed. In particular, quantitative measures to assess the quality of trade-off fronts are introduced and a set of general test problems is defined, which are i) easy to formulate, ii) represent essential aspects of real-world problems, and iii) test for different types of problem difficulty. \u2022 On the basis of this methodology, an extensive comparison of numerous evolutionary techniques is performed in which further aspects such as the influence of elitism and the population size are also investigated. \u2022 A novel approach to multiobjective optimization, the strength Pareto evolutionary algorithm, is proposed. It combines both established and new techniques in a unique manner. \u2022 Two complex multicriteria applications are addressed using evolutionary algorithms: i) the automatic synthesis of heterogeneous hardware/systems and ii) the multidimensional exploration of software implementations for digital signal processors. Zusammenfassung Viele praktische Optimierungsprobleme sind durch zwei Eigenschaften charakterisiert: a) mehrere, teilweise im Konflikt stehende Zielfunktionen sind involviert, und b) der Suchraum ist hochgradig komplex. Einerseits f \u0308 uhren widerspr\u00fcchliche Optimierungskriterien dazu, dass es statt eines klar definierten Optimums eine Menge von Kompromissl \u0308 o ungen, allgemein als Pareto-optimal bezeichnet, gibt. Insofern keine Gewichtung der Kriterien vorliegt, m \u0308 ussen die entsprechenden Alternativen als gleichwertig betrachtet werden. Andererseits kann der Suchraum eine bestimmte Gr \u0308 osse und Komplexit \u0308 at \u00fcberschreiten, so dass exakte Optimierungsverfahren nicht mehr anwendbar sind. Erforderlich sind demnach effiziente Suchstrategien, die beiden Aspekten gerecht werden. 
Evolution\u00e4re Algorithmen sind aufgrund mehrerer Merkmale f \u0308 ur diese Art von Problem besonders geeignet; vor allem im Vergleich zu klassischen Methoden weisen sie gewisse Vorteile auf. Doch obwohl seit 1985 verschiedenste evolution\u00e4re Ans\u00e4tze entwickelt wurden, die mehrere Pareto-optimale L \u0308 osungen in einem einzigen Simulationslauf generieren k \u0308 onnen, mangelt es in der Literatur an umfassenden Vergleichsstudien. Folglich blieb bislang ungekl \u0308 art, \u2022 ob bestimmte Techniken anderen Methoden generell \u0308 ub rlegen sind, \u2022 welche Algorithmen f \u0308 ur welche Art von Problem geeignet sind und \u2022 wo die spezifischen Vorund Nachteile einzelner Verfahren liegen. Die vorliegende Arbeit hat zum Gegenstand, bestehende evolution \u0308 are Mehrzieloptimierungsverfahren zu vergleichen, zu verbessern und auf Entwurfsprobleme im Bereich der Technischen Informatik anzuwenden. Im Einzelnen werden folgende Themen behandelt: \u2022 Eine Methodik zum experimentellen Vergleich von Mehrzieloptimierungsverfahren wird entwickelt. Unter anderem werden quantitative Qualit \u0308 atsmasse f\u00fcr Mengen von Kompromissl \u0308 osungen eingef \u0308 uhrt und mehrere Testfunktionen definiert, die a) eine einfache Problembeschreibung besitzen, b) wesentliche Merkmale realer Optimierungsprobleme repr \u0308 asentieren und c) erlauben, verschiedene Einflussfaktoren separat zu \u0308 uberpr\u00fcfen. \u2022 Auf der Basis dieser Methodik wird ein umfangreicher Vergleich diverser evolution\u00e4rer Techniken durchgef \u0308 uhrt, wobei auch weitere Aspekte wie die Auswirkungen von Elitism und der Populationsgr \u0308 osse auf den Optimierungsprozess untersucht werden. \u2022 Ein neues Verfahren, der Strength-Pareto-Evolutionary-Algorithm, wird vorgestellt. Es kombiniert auf spezielle Art und Weise bew \u0308 ahrte und neue Konzepte miteinander. \u2022 Zwei komplexe Mehrzielprobleme werden auf der Basis evolution \u0308 arer Methoden untersucht: a) die automatische Synthese von heterogenen Hardware/Software-Systemen und b) die mehrdimensionale Exploration von Softwareimplementierungen f \u0308 ur digitale Signalverarbeitungsprozessoren. I would like to thank Prof. Dr. Lothar Thiele for the valuable discussions concerning this research, Prof. Dr. Kalyanmoy Deb for his willingness to be the co-examiner of my thesis,"}
{"_id": "8e1990cbbc636a66ea04db146a65706efea494f6", "title": "Wearable Glove-Type Driver Stress Detection Using a Motion Sensor", "text": "Increased driver stress is generally recognized as one of the major factors leading to road accidents and loss of life. Even though physiological signals are reported as the most reliable means to measure driver stresses, they often require the use of unique and expensive sensors, which produce dynamic and varying readings within individuals. This paper presents a novel means to predict a driver\u2019s stress level by evaluating the movement pattern of the steering wheel. This is accomplished by using an inertial motion unit sensor, which is placed on a glove worn by the driver. The motion sensor selected for this paper was chosen because for its low cost and the fact that it is least affected by environmental factors as compared with a physiological signal. Experiments were conducted in three different environmental scenarios. The scenarios were classified as \u201curban,\u201d \u201chighway,\u201d and \u201crural,\u201d and they were chosen to simulate contrasting stress conditions experienced by the driver. In this paper, skin conductance and driver self-reports served as a reference stress to predict the driver\u2019s stress level. Galvanic skin response, a well-known stress indicator, was captured along the driver\u2019s palm and the readings were transmitted to a mobile device via low energy Bluetooth for further processing. The results revealed that indirect measurement of steering wheel movement with an inertial motion sensor could obtain accuracies up to an average rate of 94.78%. This demonstrates the opportunity for inclusion of motion sensors in wireless driver assistance systems for ambulatory monitoring of stress levels."}
{"_id": "82e0749092b5b0730a7394ed2a67d23774f88034", "title": "The Role of Mental Health Professionals in Gender Reassignment Surgeries: Unjust Discrimination or Responsible Care?", "text": "Recent literature has raised an important ethical concern relating to the way in which surgeons approach people with gender dysphoria (GD): it has been suggested that referring transsexual patients to mental assessment can constitute a form of unjust discrimination. The aim of this paper is to examine some of the ethical issues concerning the role of the mental health professional in gender reassignment surgeries (GRS). The role of the mental health professional in GRS is analyzed by presenting the Standards of Care by the World Professional Association of Transgender Health, and discussing the principles of autonomy and non-discrimination. Purposes of psychotherapy are exploring gender identity; addressing the negative impact of GD on mental health; alleviating internalized transphobia; enhancing social and peer support; improving body image; promoting resilience; and assisting the surgeons with the preparation prior to the surgery and the patient\u2019s follow-up. Offering or requesting psychological assistance is in no way a form of negative discrimination or an attack to the patient\u2019s autonomy. Contrarily, it might improve transsexual patients\u2019 care, and thus at the most may represent a form of positive discrimination. To treat people as equal does not mean that they should be treated in the same way, but with the same concern and respect, so that their unique needs and goals can be achieved. Offering or requesting psychological assistance to individuals with GD is a form of responsible care, and not unjust discrimination. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 ."}
{"_id": "e3d6fc079ac78b27cc0cd00e1ea1f67603c5a75b", "title": "Semantics and service technologies for the automatic generation of online MCQ tests", "text": "Active learning requires that students receive a continuous feedback about their understanding. Multiple-Choice Questions (MCQ) tests have been frequently used to provide students the required feedback and to measure the effectiveness of this learning model. To construct a test is a challenging task, which is time consuming and requires experience. For these reasons, research efforts have been focused on the automatic generation of well-constructed tests. The semantic technologies have played a relevant role in the implementation of these testgeneration systems. Nevertheless, the existing proposals present a set of drawbacks that restrict their applicability to different learning domains and the type of test to be composed. In this paper, we propose a service-oriented and semantic-based system that solves these drawbacks. The system consists of a dynamic strategy of generating candidate distractors (alternatives to the correct answer), a set of heuristics for scoring the distractors' suitability, and a selection of distractors that considers the difficulty level of tests. Besides, the final version of tests is created using the Google Form service, a de-facto standard for elaborating online questionnaires."}
{"_id": "4a4cea4421ff0be7bcc06e92179cd2d5f1102ff8", "title": "Solving traveling salesman problems via artificial intelligent search techniques", "text": "The traveling salesman problem (TSP) is one of the most intensively studied problems in computational mathematics and combinatorial optimization. It is also considered as the class of the NPcomplete combinatorial optimization problems. By literatures, many algorithms and approaches have been launched to solve such the TSP. However, no current algorithms that can provide the exactly optimal solution of the TSP problem are available. This paper proposes the application of AI search techniques to solve the TSP problems. Three AI search methods, i.e. genetic algorithms (GA), tabu search (TS), and adaptive tabu search (ATS), are conducted. They are tested against ten benchmark real-world TSP problems. As results compared with the exactly optimal solutions, the AI search techniques can provide very satisfactory solutions for all TSP problems. Key-Words: Traveling Salesman Problem, Genetic Algorithm, Tabu Search, Adaptive Tabu Search"}
{"_id": "37f133901021b8f2183158ded92595cb1fa9a19b", "title": "Conjunctive representations in learning and memory: principles of cortical and hippocampal function.", "text": "The authors present a theoretical framework for understanding the roles of the hippocampus and neocortex in learning and memory. This framework incorporates a theme found in many theories of hippocampal function: that the hippocampus is responsible for developing conjunctive representations binding together stimulus elements into a unitary representation that can later be recalled from partial input cues. This idea is contradicted by the fact that hippocampally lesioned rats can learn nonlinear discrimination problems that require conjunctive representations. The authors' framework accommodates this finding by establishing a principled division of labor, where the cortex is responsible for slow learning that integrates over multiple experiences to extract generalities whereas the hippocampus performs rapid learning of the arbitrary contents of individual experiences. This framework suggests that tasks involving rapid, incidental conjunctive learning are better tests of hippocampal function. The authors implement this framework in a computational neural network model and show that it can account for a wide range of data in animal learning."}
{"_id": "26891f40f3d7850c6aa340c2dc02c9e2c2b8de1d", "title": "Estimation of Detection Thresholds for Redirected Walking Techniques", "text": "In immersive virtual environments (IVEs), users can control their virtual viewpoint by moving their tracked head and walking through the real world. Usually, movements in the real world are mapped one-to-one to virtual camera motions. With redirection techniques, the virtual camera is manipulated by applying gains to user motion so that the virtual world moves differently than the real world. Thus, users can walk through large-scale IVEs while physically remaining in a reasonably small workspace. In psychophysical experiments with a two-alternative forced-choice task, we have quantified how much humans can unknowingly be redirected on physical paths that are different from the visually perceived paths. We tested 12 subjects in three different experiments: (E1) discrimination between virtual and physical rotations, (E2) discrimination between virtual and physical straightforward movements, and (E3) discrimination of path curvature. In experiment E1, subjects performed rotations with different gains, and then had to choose whether the visually perceived rotation was smaller or greater than the physical rotation. In experiment E2, subjects chose whether the physical walk was shorter or longer than the visually perceived scaled travel distance. In experiment E3, subjects estimate the path curvature when walking a curved path in the real world while the visual display shows a straight path in the virtual world. Our results show that users can be turned physically about 49 percent more or 20 percent less than the perceived virtual rotation, distances can be downscaled by 14 percent and upscaled by 26 percent, and users can be redirected on a circular arc with a radius greater than 22 m while they believe that they are walking straight."}
{"_id": "3a6d9e74fe6e02bc5e2430fe4ad7317a7c00527a", "title": "Managing and Mining Graph Data", "text": "Will reading habit influence your life? Many say yes. Reading managing and mining graph data is a good habit; you can develop this habit to be such interesting way. Yeah, reading habit will not only make you have any favourite activity. It will be one of guidance of your life. When reading has become a habit, you will not make it as disturbing activities or as boring activity. You can gain many benefits and importances of reading."}
{"_id": "750b9e868e1b68bdfc8141b93c639d289892fe25", "title": "Story Cloze Ending Selection Baselines and Data Examination", "text": "This paper describes two supervised baseline systems for the Story Cloze Test Shared Task (Mostafazadeh et al., 2016a). We first build a classifier using features based on word embeddings and semantic similarity computation. We further implement a neural LSTM system with different encoding strategies that try to model the relation between the story and the provided endings. Our experiments show that a model using representation features based on average word embedding vectors over the given story words and the candidate ending sentences words, joint with similarity features between the story and candidate ending representations performed better than the neural models. Our best model achieves an accuracy of 72.42, ranking 3rd in the official evaluation."}
{"_id": "bf341864280ae25a35d2db236bfff05b3a3b8e2e", "title": "Comparison of a 1450-nm diode laser and a 1320-nm Nd:YAG laser in the treatment of atrophic facial scars: a prospective clinical and histologic study.", "text": "BACKGROUND\nAtrophic scar revision techniques, although numerous, have been hampered by inadequate clinical responses and prolonged postoperative recovery periods. Nonablative laser treatment has been shown to effect significant dermal collagen remodeling with minimal posttreatment sequelae. Although many studies have been published regarding the effectiveness of these nonablative lasers on rhytides, there are limited data demonstrating their specific effects on atrophic scars.\n\n\nOBJECTIVE\nTo evaluate and compare the efficacy and safety of long-pulsed 1320-nm Nd:YAG and 1450-nm diode lasers in the treatment of atrophic facial scarring.\n\n\nMETHODS\nA series of 20 patients (skin phototypes I-V) with mild to moderate atrophic facial acne scars randomly received three successive monthly treatments with a long-pulsed 1320-nm Nd:YAG laser on one facial half and a long-pulsed 1450-nm diode laser on the contralateral facial half. Patients were evaluated using digital photography and three-dimensional in vivo microtopography measurements at each treatment visit and at 1, 3, 6, and 12 months postoperatively. Histologic evaluations of cutaneous biopsies obtained before treatment, immediately after the first treatment, and at 1, 3, 6, and 12 months after the third treatment were performed. Clinical assessment scores were determined at each treatment session and follow-up visit. Patient satisfaction surveys were obtained at the end of the study.\n\n\nRESULTS\nMild to moderate clinical improvement was observed after the series of three treatments in the majority of patients studied. Patient satisfaction scores and in vivo microtopography measurements paralleled the photographic and histopathologic changes seen. Side effects of treatment were limited to mild transient erythema, edema, and hyperpigmentation. No scarring or adverse textural changes resulted from the use of either laser system.\n\n\nCONCLUSIONS\nNonablative long-pulsed 1320-nm Nd:YAG and 1450-nm diode lasers each offer clinical improvement for patients with atrophic scarring without significant side effects or complications. The 1450-nm diode laser showed greater clinical scar response at the parameters studied. The use of nonablative laser systems is a good treatment alternative for patients with atrophic scarring who are unable or unwilling to endure the prolonged postoperative recovery process associated with ablative laser skin resurfacing procedures."}
{"_id": "efb802f5cb3926f9bee6c41ea898dc7134297a0e", "title": "Best practices for missing data management in counseling psychology.", "text": "This article urges counseling psychology researchers to recognize and report how missing data are handled, because consumers of research cannot accurately interpret findings without knowing the amount and pattern of missing data or the strategies that were used to handle those data. Patterns of missing data are reviewed, and some of the common strategies for dealing with them are described. The authors provide an illustration in which data were simulated and evaluate 3 methods of handling missing data: mean substitution, multiple imputation, and full information maximum likelihood. Results suggest that mean substitution is a poor method for handling missing data, whereas both multiple imputation and full information maximum likelihood are recommended alternatives to this approach. The authors suggest that researchers fully consider and report the amount and pattern of missing data and the strategy for handling those data in counseling psychology research and that editors advise researchers of this expectation."}
{"_id": "794d1f355d052efbdcbfd09df93e885e85150fd6", "title": "Dynamic Impact of Online Word-of-Mouth and Advertising on Supply Chain Performance", "text": "Cooperative (co-op) advertising investments benefit brand goodwill and further improve supply chain performance. Meanwhile, online word-of-mouth (OWOM) can also play an important role in supply chain performance. On the basis of co-op advertising, this paper considers a single supply chain structure led by a manufacturer and examines a fundamental issue concerning the impact of OWOM on supply chain performance. Firstly, by the method of differential game, this paper analyzes the dynamic impact of OWOM and advertising on supply chain performance (i.e., brand goodwill, sales, and profits) under three different supply chain decisions (i.e., only advertising, and manufacturers with and without sharing cost of OWOM with retailers). We compare and analyze the optimal strategies of advertising and OWOM under the above different supply chain decisions. Secondly, the system dynamics model is established to reflect the dynamic impact of OWOM and advertising on supply chain performance. Finally, three supply chain decisions under two scenarios, strong brand and weak brand, are analyzed through the system dynamics simulation. The results show that the input of OWOM can enhance brand goodwill and improve earnings. It further promotes the OWOM reputation and improves the supply chain performance if manufacturers share the cost of OWOM with retailers. Then, in order to eliminate the retailers from word-of-mouth fraud and establish a fair competition mechanism, the third parties (i.e., regulators or e-commerce platforms) should take appropriate punitive measures against retailers. Furthermore, the effect of OWOM on supply chain performance under a strong brand differed from those under a weak brand. Last but not least, if OWOM is improved, there would be more remarkable performance for the weak brand than that for the strong brand in the supply chain."}
{"_id": "3c045560f824473172027c89eaeefa46260afe55", "title": "Genetic heritability and shared environmental factors among twin pairs with autism.", "text": "CONTEXT\nAutism is considered the most heritable of neurodevelopmental disorders, mainly because of the large difference in concordance rates between monozygotic and dizygotic twins.\n\n\nOBJECTIVE\nTo provide rigorous quantitative estimates of genetic heritability of autism and the effects of shared environment.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nTwin pairs with at least 1 twin with an autism spectrum disorder (ASD) born between 1987 and 2004 were identified through the California Department of Developmental Services.\n\n\nMAIN OUTCOME MEASURES\nStructured diagnostic assessments (Autism Diagnostic Interview-Revised and Autism Diagnostic Observation Schedule) were completed on 192 twin pairs. Concordance rates were calculated and parametric models were fitted for 2 definitions, 1 narrow (strict autism) and 1 broad (ASD).\n\n\nRESULTS\nFor strict autism, probandwise concordance for male twins was 0.58 for 40 monozygotic pairs (95% confidence interval [CI], 0.42-0.74) and 0.21 for 31 dizygotic pairs (95% CI, 0.09-0.43); for female twins, the concordance was 0.60 for 7 monozygotic pairs (95% CI, 0.28-0.90) and 0.27 for 10 dizygotic pairs (95% CI, 0.09-0.69). For ASD, the probandwise concordance for male twins was 0.77 for 45 monozygotic pairs (95% CI, 0.65-0.86) and 0.31 for 45 dizygotic pairs (95% CI, 0.16-0.46); for female twins, the concordance was 0.50 for 9 monozygotic pairs (95% CI, 0.16-0.84) and 0.36 for 13 dizygotic pairs (95% CI, 0.11-0.60). A large proportion of the variance in liability can be explained by shared environmental factors (55%; 95% CI, 9%-81% for autism and 58%; 95% CI, 30%-80% for ASD) in addition to moderate genetic heritability (37%; 95% CI, 8%-84% for autism and 38%; 95% CI, 14%-67% for ASD).\n\n\nCONCLUSION\nSusceptibility to ASD has moderate genetic heritability and a substantial shared twin environmental component."}
{"_id": "653cc5363cfc8a28c577dc7cd2ddb9c8f24cec48", "title": "A least-squares approach to anomaly detection in static and sequential data", "text": "We describe a probabilistic, nonparametric method for anomaly detection, based on a squared-loss objective function which has a simple analytical solution. The method emerges from extending recent work in nonparametric leastsquares classification to include a \u201cnone-of-the-above\u201d class which models anomalies in terms of non-anamalous training data. The method shares the flexibility of other kernel-based anomaly detection methods, yet is typically much faster to train and test. It can also be used to distinguish between multiple inlier classes and anomalies. The probabilistic nature of the output makes it straightforward to apply even when test data has structural dependencies; we show how a hidden Markov model framework can be incorporated in order to identify anomalous subsequences in a test sequence. Empirical results on datasets from several domains show the method to have comparable discriminative performance to popular alternatives, but with a clear speed advantage."}
{"_id": "5d78afb70c52401b1d8d4ca269cf69b07124862f", "title": "Online Social Network Dependency: Theoretical Development and Testing of Competing Models", "text": "The proliferation of new social media technologies has changed the behavioral patterns of online users. This study aims at investigating the structure and dimensionality of the Online Social Network Dependency (OSN Dependency). We tested the competing models built upon the cognitive-behavioral model and the biopsychogical framework of addiction. Our findings suggested that OSN Dependency can be explained by a higher-order factor model with seven first-order factors (i.e. mood alternation, social benefit, negative outcomes, compulsivity, excessive time, withdrawal, and interpersonal control) and two correlated second-order factors (i.e. social components and intrapersonal components). The model provides a good-fit to the data, reflecting logical consistency. Implications of the current investigation for practice and research are provided."}
{"_id": "589f8271d1ab4767822418a570a60c0e70acbb5b", "title": "Map API - scalable decentralized map building for robots", "text": "Large scale, long-term, distributed mapping is a core challenge to modern field robotics. Using the sensory output of multiple robots and fusing it in an efficient way enables the creation of globally accurate and consistent metric maps. To combine data from multiple agents into a global map, most existing approaches use a central entity that collects and manages the information from all agents. Often, the raw sensor data of one robot needs to be made available to processing algorithms on other agents due to the lack of computational resources on that robot. Unfortunately, network latency and low bandwidth in the field limit the generality of such an approach and make multi-robot map building a tedious task. In this paper, we present a distributed and decentralized back-end for concurrent and consistent robotic mapping. We propose a set of novel approaches that reduce the bandwidth usage and increase the effectiveness of inter-robot communication for distributed mapping. Instead of locking access to the map during operations, we define a version control system which allows concurrent and consistent access to the map data. Updates to the map are then shared asynchronously with agents which previously registered notifications. A technique for data lookup is provided by state-of-the-art algorithms from distributed computing. We validate our approach on real-world datasets and demonstrate the effectiveness of the proposed algorithms."}
{"_id": "4658c87f099cf001ef53c47e5b88b758e4e2b050", "title": "AMR-to-text Generation with Synchronous Node Replacement Grammar", "text": "EDUCATION PhD 2001 Computer Science, University of California, Berkeley. Directed by Nelson Morgan, Daniel Jurafsky, Charles Fillmore, Jerome Feldman. Dissertation: Statistical Language Understanding Using Frame Semantics MS 1999 Computer Science, University of California, Berkeley. Thesis: Topic-Based Language Models Using EM BA 1995 Double major in Linguistics and Computer Science, University of California, Berkeley."}
{"_id": "e242a542de4ca01fa4c391abe1381093e9e5c9c9", "title": "PHD: A Probabilistic Model of Hybrid Deep Collaborative Filtering for Recommender Systems", "text": "Collaborative Filtering (CF), a well-known approach in producing recommender systems, has achieved wide use and excellent performance not only in research but also in industry. However, problems related to cold start and data sparsity have caused CF to attract an increasing amount of attention in efforts to solve these problems. Traditional approaches adopt side information to extract effective latent factors but still have some room for growth. Due to the strong characteristic of feature extraction in deep learning, many researchers have employed it with CF to extract effective representations and to enhance its performance in rating prediction. Based on this previous work, we propose a probabilistic model that combines a stacked denoising autoencoder and a convolutional neural network together with auxiliary side information (i.e, both from users and items) to extract users and items\u2019 latent factors, respectively. Extensive experiments for four datasets demonstrate that our proposed model outperforms other traditional approaches and deep learning models making it state of the art."}
{"_id": "0d1d0900cf862f11d3d7812c01d28be27c71a6c7", "title": "Compositional dynamic test generation", "text": "Dynamic test generation is a form of dynamic program analysis that attempts to compute test inputs to drive a program along a specific program path. Directed Automated Random Testing, or DART for short, blends dynamic test generation with model checking techniques with the goal of systematically executing all feasible program paths of a program while detecting various types of errors using run-time checking tools (like Purify, for instance). Unfortunately, systematically executing all feasible program paths does not scale to large, realistic programs.This paper addresses this major limitation and proposes to perform dynamic test generation compositionally, by adapting known techniques for interprocedural static analysis. Specifically, we introduce a new algorithm, dubbed SMART for Systematic Modular Automated Random Testing, that extends DART by testing functions in isolation, encoding test results as function summaries expressed using input preconditions and output postconditions, and then re-using those summaries when testing higher-level functions. We show that, for a fixed reasoning capability, our compositional approach to dynamic test generation (SMART) is both sound and complete compared to monolithic dynamic test generation (DART). In other words, SMART can perform dynamic test generation compositionally without any reduction in program path coverage. We also show that, given a bound on the maximum number of feasible paths in individual program functions, the number of program executions explored by SMART is linear in that bound, while the number of program executions explored by DART can be exponential in that bound. We present examples of C programs and preliminary experimental results that illustrate and validate empirically these properties."}
{"_id": "6421e02baee72c531cb044760338b314c7161406", "title": "Contextual Phrase-Level Polarity Analysis Using Lexical Affect Scoring and Syntactic N-Grams", "text": "We present a classifier to predict contextual polarity of subjective phrases in a sentence. Our approach features lexical scoring derived from the Dictionary of Affect in Language (DAL) and extended through WordNet, allowing us to automatically score the vast majority of words in our input avoiding the need for manual labeling. We augment lexical scoring with n-gram analysis to capture the effect of context. We combine DAL scores with syntactic constituents and then extract ngrams of constituents from all sentences. We also use the polarity of all syntactic constituents within the sentence as features. Our results show significant improvement over a majority class baseline as well as a more difficult baseline consisting of lexical n-grams."}
{"_id": "b21aaef087ec31e7c869278f123d222720dab8d9", "title": "Industrial Big Data in an Industry 4.0 Environment: Challenges, Schemes, and Applications for Predictive Maintenance", "text": "Industry 4.0 can make a factory smart by applying intelligent information processing approaches, communication systems, future-oriented techniques, and more. However, the high complexity, automation, and flexibility of an intelligent factory bring new challenges to reliability and safety. Industrial big data generated by multisource sensors, intercommunication within the system and external-related information, and so on, might provide new solutions for predictive maintenance to improve system reliability. This paper puts forth attributes of industrial big data processing and actively explores industrial big data processing-based predictive maintenance. A novel framework is proposed for structuring multisource heterogeneous information, characterizing structured data with consideration of the spatiotemporal property, and modeling invisible factors, which would make the production process transparent and eventually implement predictive maintenance on facilities and energy saving in the industry 4.0 era. The effectiveness of the proposed scheme was verified by analyzing multisource heterogeneous industrial data for the remaining life prediction of key components of machining equipment."}
{"_id": "9a1a677caf0b50c7d50d737bfe921085a007db99", "title": "Fault Classification using Pseudomodal Energies and Neuro-fuzzy modelling", "text": "This paper presents a fault classification method which makes use of a Takagi-Sugeno neuro-fuzzy model and Pseudomodal energies calculated from the vibration signals of cylindrical shells. The calculation of Pseudomodal Energies, for the purposes of condition monitoring, has previously been found to be an accurate method of extracting features from vibration signals. This calculation is therefore used to extract features from vibration signals obtained from a diverse population of cylindrical shells. Some of the cylinders in the population have faults in different substructures. The pseudomodal energies calculated from the vibration signals are then used as inputs to a neuro-fuzzy model. A leave-one-out cross-validation process is used to test the performance of the model. It is found that the neuro-fuzzy model is able to classify faults with an accuracy of 91.62%, which is higher than the previously used multilayer perceptron."}
{"_id": "24d91de65d6ee19975e9cb8e0d7ed7466a403e72", "title": "Analysis on credit card fraud detection methods", "text": "Due to the rise and rapid growth of E-Commerce, use of credit cards for online purchases has dramatically increased and it caused an explosion in the credit card fraud. As credit card becomes the most popular mode of payment for both online as well as regular purchase, cases of fraud associated with it are also rising. In real life, fraudulent transactions are scattered with genuine transactions and simple pattern matching techniques are not often sufficient to detect those frauds accurately. Implementation of efficient fraud detection systems has thus become imperative for all credit card issuing banks to minimize their losses. Many modern techniques based on Artificial Intelligence, Data mining, Fuzzy logic, Machine learning, Sequence Alignment, Genetic Programming etc., has evolved in detecting various credit card fraudulent transactions. A clear understanding on all these approaches will certainly lead to an efficient credit card fraud detection system. This paper presents a survey of various techniques used in credit card fraud detection mechanisms and evaluates each methodology based on certain design criteria."}
{"_id": "3486d7f6dff3e78813dcf19d47c93e1204c61acc", "title": "Automatic Annotation of Semantic Term Types in the Complete ACL Anthology Reference Corpus", "text": "In the present paper, we present an automated tagging approach aimed at enhancing a well-known resource, the ACL Anthology Reference Corpus, with semantic class labels for more than 20,000 technical terms that are relevant to the domain of computational linguistics. We use state-of-the-art classification techniques to assign semantic class labels to technical terms extracted from several reference term lists. We also sketch a set of research questions and approaches directed towards the integrated analysis of scientific corpora. To this end, we query the data set resulting from our annotation effort on both the term and the semantic class level level."}
{"_id": "86558ab54bdbfa1c6ec69f464ed7da959db81693", "title": "Eddy Current Losses in Transformer Windings and Circuit Wiring", "text": "Skin Effect Figure 1 shows the magnetic field (flux lines) in and around a conductor carrying dc or low frequency current I. The field is radially symmetrical, as shown, only if the return current with its associated field is at a great distance. At low frequency, the energy in the magnetic field is trivial compared to the energy loss in the resistance of the wire. Hence the current Introduction As switching power supply operating frequencies increase, eddy current losses and parasitic inductances can greatly impair circuit performance. These high frequency effects are caused by the magnetic field resulting from current flow in transformer windings and circuit wiring. This paper is intended to provide insight into these phenomena so that improved high frequency. performance can be achieved. Among other things, it explains (I) why eddy current losses increase so dramatically with more winding layers, (2) why parallelling thin strips doesn't work, (3) how passive conductors (Faraday shields and C. T .windings) have high losses, and (4) why increasing conductor surface area will actually worsen losses and parasitic inductance if the configuration is not correct."}
{"_id": "767f4e4cca1882e2b6b5ff1e54e3a27366f7c3ea", "title": "Collaborating between Local and Global Learning for Distributed Online Multiple Tasks", "text": "This paper studies the novel learning scenarios of Distributed Online Multi-tasks (DOM), where the learning individuals with continuously arriving data are distributed separately and meanwhile they need to learn individual models collaboratively. It has three characteristics: distributed learning, online learning and multi-task learning. It is motivated by the emerging applications of wearable devices, which aim to provide intelligent monitoring services, such as health emergency alarming and movement recognition.\n To the best of our knowledge, no previous work has been done for this kind of problems. Thus, in this paper a collaborative learning scheme is proposed for this problem. Specifically, it performs local learning and global learning alternately. First, each client performs online learning using the increasing data locally. Then, DOM switches to global learning on the server side when some condition is triggered by clients. Here, an asynchronous online multi-task learning method is proposed for global learning. In this step, only this client's model, which triggers the global learning, is updated with the support of the difficult local data instances and the other clients' models. The experiments from 4 applications show that the proposed method of global learning can improve local learning significantly. DOM framework is effective, since it can share knowledge among distributed tasks and obtain better models than learning them separately. It is also communication efficient, which only requires the clients send a small portion of raw data to the server."}
{"_id": "6bbab30198ed942748f155d3ca7ea4807ab62714", "title": "Detection of obstructive sleep apnea through ECG signal features", "text": "Obstructive sleep apnea (OSA) is a common disorder in which individuals stop breathing during their sleep. Most of sleep apnea cases are currently undiagnosed because of expenses and practicality limitations of overnight polysomnography (PSG) at sleep labs, where an expert human observer is needed to work over night. New techniques for sleep apnea classification are being developed by bioengineers for most comfortable and timely detection. In this paper, an automated classification algorithm is presented which processes short duration epochs of the electrocardiogram (ECG) data. The automated classification algorithm is based on support vector machines (SVM) and has been trained and tested on sleep apnea recordings from subjects with and without OSA. The results show that our automated classification system can recognize epochs of sleep disorders with a high degree of accuracy, approximately 96.5%. Moreover, the system we developed can be used as a basis for future development of a tool for OSA screening."}
{"_id": "95f1188fd6d031893ecd823c33fb0908a7cf79dc", "title": "A Deep Learning Approach to Contract Element Extraction", "text": "We explore how deep learning methods can be used for contract element extraction. We show that a BILSTM operating on word, POS tag, and tokenshape embeddings outperforms the linear sliding-window classifiers of our previous work, without any manually written rules. Further improvements are observed by stacking an additional LSTM on top of the BILSTM, or by adding a CRF layer on top of the BILSTM. The stacked BILSTM-LSTM misclassifies fewer tokens, but the BILSTM-CRF combination performs better when methods are evaluated for their ability to extract entire, possibly multi-token contract elements."}
{"_id": "c7d279de948facf6a9bb3673f8933750f525d19f", "title": "A Miniaturized Dual-Band Frequency Selective Surface (FSS) With Closed Loop and Its Complementary Pattern", "text": "A single-layer substrate frequency selective surface (FSS) made of miniaturized elements is proposed, with two controllable passbands obtained. Each FSS element consists of a loop wire on the top metal layer and its complementary pattern etched at the bottom one, which provides two transmission poles separated by a transmission zero. An equivalent circuit model is given for predicting the characteristics of the designed FSS, and a good agreement between the simulated and measured transmission coefficients is obtained. Furthermore, the cases of oblique wave incidence and cascading FSSs are also measured and examined."}
{"_id": "9770a7231b095c0ebcd69936ebf5155e76fe05ed", "title": "57.5GHz bandwidth 4.8Vpp swing linear modulator driver for 64GBaud m-PAM systems", "text": "A novel series-stacked large swing push-pull MOS-HBT driver was implemented in 55nm SiGe BiCMOS technology. The circuit achieves 4.8Vpp differential swing, 57.5GHz band-width and has an output compression point of 12 dBm per side. 4-PAM and 8-PAM eye diagrams were measured at 56 GBaud for a record data rate of 168 Gb/s. 4-PAM 64GBaud eye diagrams were also demonstrated. The circuit consumes 820/600 mW with/without the predriver, for an energy efficiency of 4.88/3.57 pJ/b."}
{"_id": "2f95e11768517edfe4015e53f3817fd6922d9e87", "title": "Novel Wideband Transition Between Coplanar Waveguide and Microstrip Line", "text": "A novel wideband vertical transition for connecting the coplanar waveguide (CPW) to the microstrip line is proposed. This transition can be very useful for millimeter-wave packaging and vertical interconnects. It is multilayered, partly tapered, and consists of only one via interconnect. Two different transitions are designed. The first transition allows connectivity of a CPW with Zc=50 \u03a9 to a microstrip line with Zc=16 \u03a9 with a bandwidth of 10-60 GHz. The second transition has the same characteristic impedance, Zc=50 \u03a9, at the two ports. In this case, the operating frequency is from 40 MHz to 60 GHz. The return losses of both transitions are generally lower than -10 dB over their indicated frequency ranges, while the maximum measured insertion losses are 1.8 and 2.4 dB for the first and second transition, respectively. To extract the S-parameters of the transitions, a new thru-line technique, based on the standard thru-reflect-line two-tier calibration is introduced. Simulation and experimental results, showing good agreement, are presented and discussed."}
{"_id": "0c7ca6a7e9268d17f4677c408406cd039c27af42", "title": "Geographic and Genetic Population Differentiation of the Amazonian Chocolate Tree (Theobroma cacao L)", "text": "Numerous collecting expeditions of Theobroma cacao L. germplasm have been undertaken in Latin-America. However, most of this germplasm has not contributed to cacao improvement because its relationship to cultivated selections was poorly understood. Germplasm labeling errors have impeded breeding and confounded the interpretation of diversity analyses. To improve the understanding of the origin, classification, and population differentiation within the species, 1241 accessions covering a large geographic sampling were genotyped with 106 microsatellite markers. After discarding mislabeled samples, 10 genetic clusters, as opposed to the two genetic groups traditionally recognized within T. cacao, were found by applying Bayesian statistics. This leads us to propose a new classification of the cacao germplasm that will enhance its management. The results also provide new insights into the diversification of Amazon species in general, with the pattern of differentiation of the populations studied supporting the palaeoarches hypothesis of species diversification. The origin of the traditional cacao cultivars is also enlightened in this study."}
{"_id": "9215560fe16396445a1e84b50d184f88662a60b4", "title": "In-Store Gamification: Testing a Location-Based Treasure Hunt App in a Real Retailing Environment", "text": "Traditional retailers are facing strong competition from e-commerce. One way to meet this challenge is to follow the marketing movement of focusing on customer experiences. This transformation is based on the notion of engaging customers and one way to drive this engagement is through gamification to support value creation. In this study, we have identified variables affecting intentions to use gamified services and in what ways. For this purpose, we developed an app that generated different levels of gamification by varying the number of game elements. The data from a survey distributed during a field experiment indicates that an increasing level of gamification and technology experience have direct positive associations with intrinsic motivation. Furthermore, intrinsic motivation has a positive direct association with satisfaction, although this is partly mediated by mood. Finally, satisfaction has a positive direct relation with intention to use."}
{"_id": "8c7056a7eb9d2cffc5bcbcc20e7d9b8ea797f5de", "title": "A comparison between Chinese and Caucasian head shapes.", "text": "Univariate anthropometric data have long documented a difference in head shape proportion between Chinese and Caucasian populations. This difference has made it impossible to create eyewear, helmets and facemasks that fit both groups well. However, it has been unknown to what extend and precisely how the two populations differ from each other in form. In this study, we applied geometric morphometrics to dense surface data to quantify and characterize the shape differences using a large data set from two recent 3D anthropometric surveys, one in North America and Europe, and one in China. The comparison showed the significant variations between head shapes of the two groups and results demonstrated that Chinese heads were rounder than Caucasian counterparts, with a flatter back and forehead. The quantitative measurements and analyses of these shape differences may be applied in many fields, including anthropometrics, product design, cranial surgery and cranial therapy."}
{"_id": "547242ed248a57a726c212a307669aedadaa862e", "title": "Evolution of Communication Technologies for Smart Grid applications", "text": "The idea of Smart Grid has started to evolve more rapidly with the enhancement in Communication Technologies. Two way communication is a key aspect in realizing Smart Grids and is easily possible with the help of modern day advancements in both wired and wireless communication technologies. This paper discusses some of the major communication technologies which include IEEE specified ZigBee, WiMAX and Wireless LAN (Wi-Fi) technologies, GSM 3G/4G Cellular, DASH 7 and PLC (Power Line Communications), with special focus on their applications in Smart Grids. The Smart Grid environments and domains such as Home Area Automation, Substation Automation, Automated Metering Infrastructure, Vehicle-to-Grid Communications, etc. are considered as priority areas for developing smarter grids. The advancements, challenges and the opportunities present in these priority areas are discussed in this paper. & 2012 Elsevier Ltd. All rights reserved."}
{"_id": "5c9d22ac7a75967a41788181474a84d2f52af033", "title": "Valuating privacy", "text": "In several experimental auctions, participants put a dollar value on private information before revealing it to a group. An analysis of results show that a trait's desirability in relation to the group played a key role in the amount people demanded to publicize private information. Because people can easily obtain, aggregate, and disperse personal data electronically, privacy is a central concern in the information age. This concern is clear in relation to financial data and genetic information, both of which can lead to identity abuse and discrimination. However, other relatively harmless information can also be abused, including a person's gender, salary, age, marital status, or shopping preferences. What's unclear is whether it's the fear of such abuse that actually causes people's stated hesitance to reveal their data. Our hypothesis - and the motivation for our study - is that people reveal information when they feel that they're somewhat typical or positively atypical compared to the target group. To test this hypothesis, we conducted experiments that elicit the value people place on their private data. We found, with great significance (more than 95 percent statistical confidence) that a linear relationship exists between an individual's belief about a trait and the value he or she places on it. That is, the less desirable the trait, the greater the price a person demands for releasing the information. Furthermore, we found that small deviations in a socially positive direction are associated with a lower asking price."}
{"_id": "a4b7b9e127f2dabdfd89153ac88a2939fcbca3f2", "title": "VMTP: a transport protocol for the next generation of communication systems", "text": "The Versatile Message Transaction Protocol (VMTP) is a transport-level protocol designed to support remote procedure call, multicast and real-time communication. The protocol is optimized for efficient page-level network file access in particular.\nIn this paper, we describe the significant aspects of the VMTP design, including the VMTP treatment of sessions, addressing, duplicate suppression, flow control and retransmissions plus its provision for multicast. The VMTP design reflects a change in the use of computer communication as well as a change in the underlying hardware base for the next generation of communication systems. It also challenges certain established notions in the design of protocols."}
{"_id": "cd14bffcea4165b8bda586a79c328267099f70d6", "title": "The synchronous data flow programming language LUSTRE", "text": "This paper describes the language LUSTRE which is a data flow synchronous language, designed for programming reactive systems-uch as automatic control and monitoring sy s t emsas well as for describing hardware. The data flow aspect of LUSTRE makes it very close to usual description tools in these domains (blockdiagrams, networks of operators, dynamical sample-systems, etc.), and its synchronous interpretation makes it well suited for handling time in programs. Moreover, this synchronous interpretation allows it to be compiled into an efficient sequential program. Finally, the LUSTRE formalism is very similar to temporal logics. This allows the language to be used for both writing programs and expressing program properties, which results in an original program verification methodology."}
{"_id": "542f5fd2387e07e70ddb0dbcb52666aeb1d1efe4", "title": "Self-Similarity Driven Color Demosaicking", "text": "Demosaicking is the process by which from a matrix of colored pixels measuring only one color component per pixel, red, green, or blue, one can infer a whole color information at each pixel. This inference requires a deep understanding of the interaction between colors, and the involvement of image local geometry. Although quite successful in making such inferences with very small relative error, state-of-the-art demosaicking methods fail when the local geometry cannot be inferred from the neighboring pixels. In such a case, which occurs when thin structures or fine periodic patterns were present in the original, state-of-the-art methods can create disturbing artifacts, known as zipper effect, blur, and color spots. The aim of this paper is to show that these artifacts can be avoided by involving the image self-similarity to infer missing colors. Detailed experiments show that a satisfactory solution can be found, even for the most critical cases. Extensive comparisons with state-of-the-art algorithms will be performed on two different classic image databases."}
{"_id": "262aa1bfe9b9ba413e8feed5fe9f9723c26104b5", "title": "Input/Output Devices and Interaction Techniques", "text": "The computing literature often draws a sharp distinction between input and output; computer scientists are used to regarding a screen as a passive output device and a mouse as a pure input device. However, nearly all examples of human-computer interaction require both input and output to do anything useful. For example, what good would a mouse be without the corresponding feedback embodied by the cursor on the screen, as well as the sound and feel of the buttons when they are clicked? The distinction between output devices and input devices becomes even more blurred in the real world. A sheet of paper can be used to both record ideas (input) and display them (output). Clay reacts to the sculptor\u2019s fingers yet also provides feedback through the curvature and texture of its surface. Indeed, the complete and seamless integration of input and output is becoming a common research theme in advanced computer interfaces such as ubiquitous computing (Weiser, 1991) and tangible interaction (Ishii & Ullmer, 1997)."}
{"_id": "de29c51d169cdb5066f6832e9a8878900c43f100", "title": "Interactions with big data analytics", "text": "Increasingly in the 21st century, our daily lives leave behind a detailed digital record: our shifting thoughts and opinions shared on Twitter, our social relationships, our purchasing habits, our information seeking, our photos and videos\u2014even the movements of our bodies and cars. Naturally, for those interested in human behavior, this bounty of personal data is irresistible. Decision makers of all kinds, from company executives to government agencies to researchers and scientists, would like to base their decisions and actions on this data. In response, a new discipline of big data analytics is forming. Fundamentally, big data analytics is a workflow that distills terabytes of low-value data (e.g., every tweet) down to, in some cases, a single bit of high-value data (Should Company X acquire Company Y? Can we reject the null hypothesis?). The goal is to see the big picture from the minutia of our digital lives. It is no surprise today that big data is useful for HCI researchers and user interface design. As one example, A/B testing is a standard practice in the usability community to help determine relative differences in user performance using different interfaces. For many years, we have used strict laboratory conditions to evaluate interfaces, but more recently we have seen the ability to implement those tests quickly and on a large population by running controlled Interactions with Big Data Analytics"}
{"_id": "34ba24f51f8e1f15b9345382c3c3917a08b20325", "title": "ClausIE: clause-based open information extraction", "text": "We propose ClausIE, a novel, clause-based approach to open information extraction, which extracts relations and their arguments from natural language text. ClausIE fundamentally differs from previous approaches in that it separates the detection of ``useful'' pieces of information expressed in a sentence from their representation in terms of extractions. In more detail, ClausIE exploits linguistic knowledge about the grammar of the English language to first detect clauses in an input sentence and to subsequently identify the type of each clause according to the grammatical function of its constituents. Based on this information, ClausIE is able to generate high-precision extractions; the representation of these extractions can be flexibly customized to the underlying application. ClausIE is based on dependency parsing and a small set of domain-independent lexica, operates sentence by sentence without any post-processing, and requires no training data (whether labeled or unlabeled). Our experimental study on various real-world datasets suggests that ClausIE obtains higher recall and higher precision than existing approaches, both on high-quality text as well as on noisy text as found in the web."}
{"_id": "13a2d317431d51e980dc03c9bdeebe8a15d79f66", "title": "Global Learning of Typed Entailment Rules", "text": "Extensive knowledge bases of entailment rules between predicates are crucial for applied semantic inference. In this paper we propose an algorithm that utilizes transitivity constraints to learn a globally-optimal set of entailment rules for typed predicates. We model the task as a graph learning problem and suggest methods that scale the algorithm to larger graphs. We apply the algorithm over a large data set of extracted predicate instances, from which a resource of typed entailment rules has been recently released (Schoenmackers et al., 2010). Our results show that using global transitivity information substantially improves performance over this resource and several baselines, and that our scaling methods allow us to increase the scope of global learning of entailment-rule graphs."}
{"_id": "18b534c7207a1376fa92e87fe0d2cfb358d98c51", "title": "Accurate Unlexicalized Parsing", "text": "We demonstrate that an unlexicalized PCFG can parse much more accurately than previously shown, by making use of simple, linguistically motivated state splits, which break down false independence assumptions latent in a vanilla treebank grammar. Indeed, its performance of 86.36% (LP/LR F 1) is better than that of earlylexicalizedPCFG models, and surprisingly close to the current state-of-theart. This result has potential uses beyond establishing a strong lower bound on the maximum possible accuracy of unlexicalized models: an unlexicalized PCFG is much more compact, easier to replicate, and easier to interpret than more complex lexical models, and the parsing algorithms are simpler, more widely understood, of lower asymptotic complexity, and easier to optimize. In the early 1990s, as probabilistic methods swept NLP, parsing work revived the investigation of probabilistic context-free grammars ( PCFGs) (Booth and Thomson, 1973; Baker, 1979). However, early results on the utility ofPCFGs for parse disambiguation and language modeling were somewhat disappointing. A conviction arose that lexicalizedPCFGs (where head words annotate phrasal nodes) were the key tool for high performancePCFG parsing. This approach was congruent with the great success of word n-gram models in speech recognition, and drew strength from a broader interest in lexicalized grammars, as well as demonstrations that lexical dependencies were a key tool for resolving ambiguities such asPPattachments (Ford et al., 1982; Hindle and Rooth, 1993). In the following decade, great success in terms of parse disambiguation and even language modeling was achieved by various lexicalized PCFG models (Magerman, 1995; Charniak, 1997; Collins, 1999; Charniak, 2000; Charniak, 2001). However, several results have brought into question how large a role lexicalization plays in such parsers. Johnson (1998) showed that the performance of anunlexicalizedPCFGover the Penn treebank could be improved enormously simply by annotating each node by its parent category. The Penn treebank coveringPCFGis a poor tool for parsing because the context-freedom assumptions it embodies are far too strong, and weakening them in this way makes the model much better. More recently, Gildea (2001) discusses how taking the bilexical probabilities out of a good current lexicalized PCFG parser hurts performance hardly at all: by at most 0.5% for test text from the same domain as the training data, and not at all for test text from a different domain. 1 But it is precisely these bilexical dependencies that backed the intuition that lexicalized PCFGs should be very successful, for example in Hindle and Rooth\u2019s demonstration fromPPattachment. We take this as a reflection of the fundamental sparseness of the lexical dependency information available in the Penn Treebank. As a speech person would say, one million words of training data just isn\u2019t enough. 
Even for topics central to the treebank\u2019s Wall Street Journal text, such as stocks, many very plausible dependencies occur only once, for example stocks stabilized, while many others occur not at all, for example stocks skyrocketed .2 The best-performing lexicalized PCFGs have increasingly made use of subcategorization 3 of the 1There are minor differences, but all the current best-known lexicalized PCFGs employ bothmonolexicalstatistics, which describe the phrasal categories of arguments and adjuncts t hat appear around a head lexical item, and bilexicalstatistics, or dependencies, which describe the likelihood of a head word tak ing as a dependent a phrase headed by a certain other word. 2This observation motivates various classor similaritybased approaches to combating sparseness, and this remains a promising avenue of work, but success in this area has proven somewhat elusive, and, at any rate, current lexicalized PCFGs do simply use exact word matches if available, and interpola te with syntactic category-based estimates when they are not. 3In this paper we use the term subcategorizationin the original general sense of Chomsky (1965), for where a syntactic ca tcategories appearing in the Penn treebank. Charniak (2000) shows the value his parser gains from parentannotation of nodes, suggesting that this information is at least partly complementary to information derivable from lexicalization, and Collins (1999) uses a range of linguistically motivated and carefully hand-engineered subcategorizations to break down wrong context-freedom assumptions of the naive Penn treebank covering PCFG, such as differentiating \u201cbaseNPs\u201d from noun phrases with phrasal modifiers, and distinguishing sentences with empty subjects from those where there is an overt subject NP. While he gives incomplete experimental results as to their efficacy, we can assume that these features were incorporated because of beneficial effects on parsing that were complementary to lexicalization. In this paper, we show that the parsing performance that can be achieved by an unlexicalized PCFG is far higher than has previously been demonstrated, and is, indeed, much higher than community wisdom has thought possible. We describe several simple, linguistically motivated annotations which do much to close the gap between a vanilla PCFG and state-of-the-art lexicalized models. Specifically, we construct anunlexicalizedPCFG which outperforms the lexicalized PCFGs of Magerman (1995) and Collins (1996) (though not more recent models, such as Charniak (1997) or Collins (1999)). One benefit of this result is a much-strengthened lower bound on the capacity of an unlexicalized PCFG. To the extent that no such strong baseline has been provided, the community has tended to greatly overestimate the beneficial effect of lexicalization in probabilistic parsing, rather than looking critically at where lexicalized probabilities are both neededto make the right decision and availablein the training data. Secondly, this result affirms the value of linguistic analysis for feature discovery. The result has other uses and advantages: an unlexicalized PCFGis easier to interpret, reason about, and improve than the more complex lexicalized models. The grammar representation is much more compact, no longer requiring large structures that store lexicalized probabilities. 
The parsing algorithms have lower asymptotic complexity4 and have much smaller grammar egory is divided into several subcategories, for example di viding verb phrases into finite and non-finite verb phrases, rath e than in the modern restricted usage where the term refers onl y to the syntactic argument frames of predicators. 4O(n3) vs. O(n5) for a naive implementation, or vs. O(n4) if using the clever approach of Eisner and Satta (1999). constants. An unlexicalizedPCFG parser is much simpler to build and optimize, including both standard code optimization techniques and the investigation of methods for search space pruning (Caraballo and Charniak, 1998; Charniak et al., 1998). It is not our goal to argue against the use of lexicalized probabilities in high-performance probabilistic parsing. It has been comprehensively demonstrated that lexical dependencies are useful in resolving major classes of sentence ambiguities, and a parser should make use of such information where possible. We focus here on using unlexicalized, tructural context because we feel that this information has been underexploited and underappreciated. We see this investigation as only one part of the foundation for state-of-the-art parsing which employsboth lexical and structural conditioning. 1 Experimental Setup To facilitate comparison with previous work, we trained our models on sections 2\u201321 of the WSJsection of the Penn treebank. We used the first 20 files (393 sentences) of section 22 as a development set (devset ). This set is small enough that there is noticeable variance in individual results, but it allowed rapid search for good features via continually reparsing the devset in a partially manual hill-climb. All of section 23 was used as a test set for the final model. For each model, input trees were annotated or transformed in some way, as in Johnson (1998). Given a set of transformed trees, we viewed the local trees as grammar rewrite rules in the standard way, and used (unsmoothed) maximum-likelihood estimates for rule probabilities. 5 To parse the grammar, we used a simple array-based Java implementation of a generalizedCKY parser, which, for our final best model, was able to exhaustively parse all sentences in section 23 in 1GB of memory, taking approximately 3 sec for average length sentences. 6 5The tagging probabilities were smoothed to accommodate unknown words. The quantityP(tag|word) was estimated as follows: words were split into one of several categories wordclass, based on capitalization, suffix, digit, and other character features. For each of these categories, we took th e maximum-likelihood estimate of P(tag|wordclass). This distribution was used as a prior against which observed tagging s, if any, were taken, givingP(tag|word) = [c(tag, word) + \u03ba P(tag|wordclass)]/[c(word)+\u03ba]. This was then inverted to give P(word|tag). The quality of this tagging model impacts all numbers; for example the raw treebank grammar\u2019s devset F 1 is 72.62 with it and 72.09 without it. 6The parser is available for download as open source at: http://nlp.stanford.edu/downloads/lex-parser.shtml"}
{"_id": "684d28fd923e19f1d1ff8cb8827b7acadb25b16d", "title": "TUNABLE AND SWITCHABLE BANDPASS FILTERS USING SLOT-LINE RESONATORS", "text": "In this paper, novel uniplanar tunable and switchable bandpass filters are designed by using the centrally-loaded slot-line resonator. From the voltage-wave distribution along the resonator, the appropriate location for the loading element is determined to be the center of the slot-line resonator, where the voltages of the fundamental signal and second harmonic are maximum and zero, respectively. As a result, the fundamental frequency can be tuned while the second harmonic remains almost unchanged. For the first time, the properties of the centrally-loaded slot-line resonator are analyzed by using the evenand odd-mode method, and their respective resonant frequencies are derived. The demonstrated tunable bandpass filter can give a 30.9% frequency tuning range with acceptable insertion loss when a varactor is used as the loading element. By replacing the loading varactors with PIN diodes, a switchable bandpass filter is realized in which the attenuation in the fundamental passband can be controlled. In experiments, the switchable bandpass filter exhibits a 2.13 dB insertion loss in the fundamental passband when the PIN diodes are off and more than 49 dB isolation across the passband when the PIN diodes are on. Received 8 October 2010, Accepted 5 November 2010, Scheduled 27 November 2010 Corresponding author: Zhi-Hua Bao (bao.zh@ntu.edu.cn)."}
{"_id": "f987c85716db71e6fc249480508f08cde57c76d0", "title": "ViziQuer: A Tool to Explore and Query SPARQL Endpoints", "text": "The presented tool uses a novel approach to explore and query a SPARQL endpoint. The tool is simple to use as a user needs only to enter an address of a SPARQL endpoint of one\u2019s interest. The tool will extract and visualize graphically the data schema of the endpoint. The user will be able to overview the data schema and use it to construct a SPARQL query according to the data schema. The tool can be downloaded from http://viziquer.lumii.lv. There is also additional information and help on how to use it in practice."}
{"_id": "fa0bf04c7a0ef74f746891c37ddcc29805c04033", "title": "Knowledge Discovery in Manufacturing Simulations", "text": "Discrete event simulation studies in a manufacturing context are a powerful instrument when modeling and evaluating processes of various industries. Usually simulation experts conduct simulation experiments for a predetermined system specification by manually varying parameters through educated assumptions and according to a prior defined goal. Moreover, simulation experts try to reduce complexity and number of simulation runs by excluding parameters that they consider as not influential regarding the simulation project scope. On the other hand, today's world of big data technology enables us to handle huge amounts of data. We therefore investigate the potential benefits of designing large scale experiments with a much broader coverage of possible system behavior. In this paper, we propose an approach for applying data mining methods on simulation data in combination with suitable visualization methods in order to uncover relationships in model behavior to discover knowledge that otherwise would have remained hidden. For a prototypical demonstration we used a clustering algorithm to divide large amounts of simulation output datasets into groups of similar performance values and depict those groups through visualizations to conduct a visual investigation process of the simulation data."}
{"_id": "d8f3fc3c5853cb0c2d23be6e05dc6f48349b2275", "title": "Toward Delay-Efficient Game-Aware Data Centers for Cloud Gaming", "text": "Gaming on demand is an emerging service that has recently started to garner prominence in the gaming industry. Cloud-based video games provide affordable, flexible, and high-performance solutions for end-users with constrained computing resources and enables them to play high-end graphic games on low-end thin clients. Despite its advantages, cloud gaming's Quality of Experience (QoE) suffers from high and varying end-to-end delay. Since the significant part of computational processing, including game rendering and video compression, is performed in data centers, controlling the transfer of information within the cloud has an important impact on the quality of cloud gaming services. In this article, a novel method for minimizing the end-to-end latency within a cloud gaming data center is proposed. We formulate an optimization problem for reducing delay, and propose a Lagrangian Relaxation (LR) time-efficient heuristic algorithm as a practical solution. Simulation results indicate that the heuristic method can provide close-to-optimal solutions. Also, the proposed model reduces end-to-end delay and delay variation by almost 11% and 13.5%, respectively, and outperforms the existing server-centric and network-centric models. As a byproduct, our proposed method also achieves better fairness among multiple competing players by almost 45%, on average, in comparison with existing methods."}
{"_id": "4e4a76df3123c24ccc61650cc8d27bff2df5d876", "title": "The Application of Non-parametric Method and Time Series Analysis to Predict the Machine Repair Times", "text": "This study focuses on the utilization of non-parametric to assess the distribution of repair times of a machine part as well as the prediction of the future values. There are two folds of objectives, namely, the distribution assessment of the repair times. The diagnostic graph, i.e., histogram and normal probability plot, as well as a non-parametric test, Kolmogorov-Smirnov (KS) method, is utilized to assess the distribution of data. According to the KS test, it can be used effectively to test the distribution of the repair times of a machine which are selected as a case study. Another objective is the prediction of the future repair time required to fix a designated part. The time series analysis based on autoregressive integrated moving average (ARIMA) model is deployed in order to forecast the repair times. It turned out that one of the simplest models of ARIMA, ARIMA (0, 1, 0) or random walk, is the most appropriate model for the prediction and this indicates that the pattern of repair times is non-stationary."}
{"_id": "bd7ced628f897eecdd2ceac24475a3f480bc6497", "title": "Towards an automated system for intelligent screening of candidates for recruitment using ontology mapping (EXPERT)", "text": "Many e-recruitment tools for recruiting candidates for jobs have significantly spread in recent years. Companies often receive resumes from candidates using e-recruitment tools or via job portal for each job posting and manually short list qualified applicants. The existing e-recruitment tools have been mainly used for the storage of applicant contact data. In this paper, we present EXPERT, an intelligent tool for screening candidates for recruitment using ontology mapping. EXPERT has three phases in screening candidates for recruitment. In first phase, the system collects candidates\u2019 resumes and constructs ontology document for the features of the candidates. Job openings/job requirements are represented as ontology in the second phase and in third phase, EXPERT maps the job requirement ontology onto the candidate ontology document and retrieves the eligible candidates. Experiment results show that this model improves the accuracy of matching candidates with job requirement."}
{"_id": "5ed48309e6dfa6972407ec76d86fb76675f7fe1c", "title": "Social Internet of Things: The Potential of the Internet of Things for Defining Human Behaviours", "text": "The potential of the artificial intelligence, data mining and the so-called Big Data to build solutions based on the understanding of human behaviours is being extended with the capabilities of the Future Internet and the Internet of Things (IoT). The described potential of data and networks have been analyzed by the complex network theory, offering a specialized branch, the defined as Human Dynamics. The goal with the Internet of Things in the social area is to describe in real-time the human behaviours and activities. These goals are starting to be feasible through the quantity of data provided by the personal devices such as smart phone, and the smart environments such as Smart cities that makes more intelligent the environment and offer a smart space that sense our movements, actions, and the evolution of the ecosystem. This work analyses the ecosystem defined through the triangle formed by Big Data, Smart Cities and Personal/Wearable Computing to determinate human behaviours and human dynamics. For this purpose, the life-cycle of human dynamics determinations in Smart Cities have been analysed in order to determinate the current status of the technology, challenges, and opportunities."}
{"_id": "6b6e54af4c4d96cb9f25f970380adb1d9cf5dac6", "title": "The State of Solving Large Incomplete-Information Games, and Application to Poker", "text": "ion algorithms take as input a description of the game and output a smaller but strategically similar \u2014 or even equivalent \u2014 game. The abstraction algorithms discussed here work with any finite number of players and do not assume a zerosum game. Information Abstraction The most popular kind of abstraction is information abstraction. The game is abstracted so that the agents cannot distinguish some of the states that they can distinguish in the actual game. For example in an abstracted poker hand, an agent is not able to observe all the nuances of the cards that she would normally observe. Lossless Information Abstraction. It turns out that it is possible to do lossless information abstraction, which may seem like an oxymoron at first. The method I will describe (Gilpin and Sandholm 2007b) is for a class of games that we call games with ordered signals. It is structured, but still general enough to capture a wide range of strategic situations. A game with ordered signals consists of a finite number of rounds. Within a round, the players play a game on a directed tree (the tree can be different in different rounds). The only uncertainty players face stems from private signals the other players have received and from the unknown future signals. In other words, players observe each Articles 14 AI MAGAZINE Nash equilibrium Nash equilibrium Original game"}
{"_id": "4fac79f00c260d93e823f87776a341001be8f91f", "title": "A case study on propagating and updating provenance information using the CIDOC CRM", "text": "Provenance information of digital objects maintained by digital libraries and archives is crucial for authenticity assessment, reproducibility and accountability. Such information is commonly stored on metadata placed in various Metadata Repositories (MRs) or Knowledge Bases (KBs). Nevertheless, in various settings it is prohibitive to store the provenance of each digital object due to the high storage space requirements that are needed for having complete provenance. In this paper, we introduce provenance-based inference rules as a means to complete the provenance information, to reduce the amount of provenance information that has to be stored, and to ease quality control (e.g., corrections). Roughly, we show how provenance information can be propagated by identifying a number of basic inference rules over a core conceptual model for representing provenance. The propagation of provenance concerns fundamental modelling concepts such as actors, activities, events, devices and information objects, and their associations. However, since a MR/KB is not static but changes over time due to several factors, the question that arises is how we can satisfy update requests while still supporting the aforementioned inference rules. Towards this end, we elaborate on the specification of the required add/delete operations, consider two different semantics for deletion of information, and provide the corresponding update algorithms. Finally, we report extensive comparative results for different repository policies regarding the derivation of new knowledge, in datasets containing up to one million RDF triples. The results allow us to understand the tradeoffs related to the use of inference rules on storage space and performance of queries and updates."}
{"_id": "ebd2fae814efc15cca83c75f7618c71617151de0", "title": "Development of a Whole-Sensitive Teleoperated Robot Arm using Torque Sensing Technique", "text": "In this paper, we concentrate on the design of a new whole-sensitive robot arm enabling torque measurement in each joint by means of developed optical torque sensors. When the contact of arm with an object occurs, local impedance algorithm provides active compliance of corresponding robot arm joint. Thus, the whole structure of the manipulator can safely interact with unstructured environment. The detailed design procedure of the 4-DOF robot arm and optical torque sensors is described in the paper. The gravity compensation algorithm was elaborated and verified by implementation of the local impedance control of a simple two-link arm manipulator"}
{"_id": "fd5f5b6b28ff9ef1c400b4f44a35fb7087eb65ba", "title": "The kNN-TD Reinforcement Learning Algorithm", "text": "A reinforcement learning algorithm called kNN-TD is introduced. This algorithm has been developed using the classical formulation of temporal difference methods and a k-nearest neighbors scheme as its expectations memory. By means of this kind of memory the algorithm is able to generalize properly over continuous state spaces and also take benefits from collective action selection and learning processes. Furthermore, with the addition of probability traces, we obtain the kNN-TD(\u03bb) algorithm which exhibits a state of the art performance. Finally the proposed algorithm has been tested on a series of well known reinforcement learning problems and also at the Second Annual RL Competition with excellent results."}
{"_id": "5289e63d93ad5a84226512bcf504ea48910858c3", "title": "The Evolution and Future of National Customer Satisfaction Index Models", "text": "A number of both national and international customer satisfaction barometers or indices have been introduced in the last decade. For the most part, these satisfaction indices are embedded within a system of cause and effect relationships or satisfaction model. Yet there has been little in the way of model development. Of critical importance to the validity and reliability of such indices is that the models and methods used to measure customer satisfaction and related constructs continue to learn, adapt, and improve over time. The primary goal of this research is to propose and test a number of modifications and improvements to the national index models. Using survey data from the Norwegian Customer Satisfaction Barometer (NCSB), we find general support for the proposed modifications."}
{"_id": "84d48648feef13cf1053bb0490377101c3c8059b", "title": "Virtual Reality Exposure Therapy for Anxiety and Specific Phobias", "text": "There is a growing body of research indicating the multiple ways that affective dysregulation (e.g., anxiety disorders, specific phobias, panic disorder, and posttraumatic stress disorder (PTSD)) may lead to significant impairments in normal life functioning. Anxiety and fear are concentrated emotional experiences that serve critical functions in organizing necessary survival responses (Fendt & Fanselow, 1999). In properly functioning affective systems, the responses are adaptive. LeDoux (2012a) posits survival circuits that enable humans to adapt to feared stimuli by organizing brain functions. The fear induced arousal and activation of survival circuits allows for adaptive responses to take priority and other responses are inhibited. Further, attentional processing focuses on pertinent environmental stimuli and learning occurs (Scherer, 2000; LeDoux, 2012b). Hence, adaptive survival circuits are optimized to detect threatening stimuli and relay the information as environmental challenges and opportunities. The adaptive survival circuits use this information to adjust behavioral and psychophysiological responses for appropriate adaptation and resolution. Excessive fear responses, however, can be restrictive and may be a sign of dysregulated anxiety. When exposure to stress occurs early in development and is repeated in persons with a particular genetic disposition, a decreased threshold for developing anxiety may result (Heim & Nemeroff, 1999). Further, over-excitation and deprivation can influence the affective system and may induce changes in the emotional circuitry of the brain that can contribute to stress-related psychopathology (Davidson, Jackson, & Kalin, 2000). A good deal of research has shown that exposure therapy is effective for reducing negative affective symptoms (Rothbaum & Schwartz, 2002). In vivo exposure therapy has been found to have greater efficacy when compared to imaginal exposure, especially in the treatment of specific phobias (Emmelkamp, 2003). Exposure to emotional situations and prolonged rehearsal result in the regular activation of cerebral metabolism in brain areas associated with inhibition of maladaptive associative processes (Schwartz, 1998). Identical neural circuits have been found to be involved in affective regulation across affective disorders (De Raedt, 2006; Mineka, Watson, & Clark, 1998). Systematic and controlled therapeutic exposure to phobic stimuli may enhance emotional regulation through adjustments of inhibitory processes on the amygdala by the medial prefrontal cortex during exposure and structural changes in the hippocampus after successful therapy (Hariri, Bookheimer, & Mazziotta, 2000). A novel tool for conducting exposure therapy is virtual reality exposure therapy (VRET), in which users are immersed within a computer-generated simulation or virtual environment (VE) that updates in a natural way to the user\u2019s psychophysiological arousal, head and/ or body motion (Parsons and Courtney, 2011, Parsons and Reinebold, 2012). 
Virtual environment applications that focus on treatment of affective (see Powers & Emmelkamp, 2008; Parsons et al., 2008a; Opris et al., 2012) and cognitive disorders (see Rose et al., 2005; Parsons 2009a) as well as assessment of component cognitive processes are now being developed and tested: attention (Parsons, et al., 2007, 2011) spatial abilities (Beck et al., 2010; Goodrich-Hunsaker and Hopkins, 2010; Parsons, et al., 2013), memory (Moffat, 2009; Parsons & Rizzo, 2008b; Parsons et al., 2013; Knight & Titov, 2009), spatial memory (Parsons et al., 2013); and executive functions (Armstrong et al., 2013; Henry et al., 2012; Parsons et al., 2012, 2013, 2014). The increased ecological validity of virtual scenarios may aid differential diagnosis and treatment planning. Within a virtual world, it is possible to systematically present cognitive tasks targeting neuropsychological performance beyond what are currently available using traditional methods (Parsons, 2011, 2012). Thomas D. Parsons University of North Texas, USA"}
{"_id": "2309d6dd93244c3daa3878e1f71a925079e83a20", "title": "BYOD security challenges: control and protect your most sensitive data", "text": "According to a recent survey by Harris Interactive and ESET, more than 80% of employed adults use some kind of personally owned electronic device for work-related functions.1 One of the primary advantages in allowing employees to bring and use their personal devices for work is that it increases efficiency and flexibility. By giving access to the corporate network, and therefore corporate information via unmanaged devices, work can be done from anywhere, at any time, using any endpoint. It also provides cost savings to organisations, since personal device usage means they don\u2019t need to provide or manage mobile devices for their employees. The Harris survey also determined that 47% of employees use personal desktop computers to access or store company information, while 41% do this with personal laptops, 24% with smartphones and 10% with tablets. However, with less than half of these devices being protected by basic security measures, some organisations may start to worry that the security challenges associated with BYOD far outweigh the benefits."}
{"_id": "6e46dec206180c0278f7327559b608105acaea4f", "title": "Enabling the IoT Paradigm in E-health Solutions through the VIRTUS Middleware", "text": "In Europe, in a context of growing population and decreasing resources, ageing related diseases represent one of the most relevant challenges in terms of healthcare organization complexity, plus care levels and system financing balancing. Nowadays there are several researches that are studying the application of the IoT (Internet of Things) paradigm in the e-health field. This paper explains how a solution, built on the top of the VIRTUS IoT middleware, provides a valid alternative to current IoT solutions, which are mainly based on SOA (Service Oriented Architecture). VIRTUS leverage an Instant Messaging protocol (XMPP) to guarantee a (near) real-time, secure and reliable communication channel among heterogeneous devices. The presented development has been exploited in a healthcare case study: an implementation of a cost-savvy remote body movement monitoring system, aimed at classify daily patients' activities, designed as a modular architecture and deployed in a large scale scenario. The paper analyzes the features offered by the VIRTUS middleware, if used within a e-health solution, providing a comparison with other well-known systems."}
{"_id": "deb3a10011bf9a6da10769bcd5c10ececb4541ca", "title": "Feasibility Study on the Use of Biometrics in an Entitlement Scheme", "text": "This report examines the feasibility of using biometrics as a means of establishing a unique identity, to support the proposed entitlement scheme under development by the United Kingdom Passport Service (UKPS) and Driver and Vehicle Licensing Agency (DVLA). Biometrics identification systems measure physiological and behavioural characteristics of a person, and use these measurements to reliably distinguish one person from another. The main examples considered in this study are fingerprint, iris and face image recognition. Biometric identification can assist in the issue of entitlement cards, passports, driving licences and other identity documents in two ways. On the initial issue of such documents, biometrics can be used to check that applicants are not erroneously issued documents using two different identity details; this is the main focus of this study. Secondly, when an entitlement card, passport or driving licence is being used, biometrics can help confirm that it is being used by the correct person. The purpose of the study is to assess the feasibility of fingerprint, iris and face recognition technologies for these applications, to identify unknowns and the risks associated with the use of biometrics in such a national identity scheme, and to make recommendations for how some of these risks might be addressed should such a scheme proceed. Biometric methods do not offer 100% certainty of authentication of individuals. The success of any deployment of a system using biometric methods depends, therefore, on many factors such as: the degree of the 'uniqueness' of the biometric measure, technical and social factors, user interfaces, and provision of secure backup systems for those situations and individual s where the biometric will not work effectively. The main findings of the study are: a In principle, fingerprint or iris recognition can provide the identification performance required for unique identification over the entire UK adult population. In the case of fingerprint recognition, the system would require the enrolment of at least four fingers, whereas for iris recognition both irises should be registered. However, the practicalities of deploying either iris or fingerprint recognition in such a scheme are far from straightforward. b Such a system would be a groundbreaking deployment for this kind of biometric application. Not only would it be one of the largest deployments to date, but aspects of its performance would be far more demanding than those of similarly sized systems; such existing systems are either not applied in the civil sector, or operate \u2026"}
{"_id": "3201d4dc96199101f11fb4ef2468875557f98225", "title": "Model predictive hovering-translation control of an unmanned Tri-TiltRotor", "text": "The experimental translational hovering control of a Tri-TiltRotor Unmanned Aerial Vehicle is the subject of this paper. This novel UAV is developed to possess the capability to perform autonomous conversion between the Vertical Take-Off and Landing, and the Fixed-Wing flight modes. Via this design's implemented features however, the capability for additional control authority on the UAV's longitudinal translational motion arises: The rotor-tilting servos are utilized in performing thrust vectoring of the main rotors, thus exploiting their fast response characteristics in directly providing translation-controlling forces. The system's hovering translation is handled by a Model Predictive Control scheme, following the aforementioned actuation principles. While performing experimental studies of the overall controlled system's efficiency, the advantageous effects of this novel control authority are clearly noted. Additionally, in this article the considerations and requirements for operational autonomy-related on-board-only state estimation are addressed."}
{"_id": "0aac231f1f73bfaabb89ec8b7fdd47dcb288e237", "title": "Regularized Off-Policy TD-Learning", "text": "We present a novel l1 regularized off-policy convergent TD-learning method (termed RO-TD), which is able to learn sparse representations of value functions with low computational complexity. The algorithmic framework underlying ROTD integrates two key ideas: off-policy convergent gradient TD methods, such as TDC, and a convex-concave saddle-point formulation of non-smooth convex optimization, which enables first-order solvers and feature selection using online convex regularization. A detailed theoretical and experimental analysis of RO-TD is presented. A variety of experiments are presented to illustrate the off-policy convergence, sparse feature selection capability and low computational cost of the RO-TD algorithm."}
{"_id": "3ee5bb89e10b5365d4e3cdcb8dc461b058ddcfa4", "title": "Understanding neurocognitive developmental disorders can improve education for all.", "text": "Specific learning disabilities (SLDs) are estimated to affect up to 10% of the population, and they co-occur far more often than would be expected, given their prevalences. We need to understand the complex etiology of SLDs and their co-occurrences in order to underpin the training of teachers, school psychologists, and clinicians, so that they can reliably recognize SLDs and optimize the learning contexts for individual learners."}
{"_id": "105ef0aaca927ab66295ce5e9c2a1f4ceba98152", "title": "An efficient k-means clustering algorithm", "text": "In this paper, we present a novel algorithm for performing k-means clustering. It organizes all the patterns in a k-d tree structure such that one can find all the patterns which are closest to a given prototype efficiently. The main intuition behind our approach is as follows. All the prototypes are potential candidates for the closest prototype at the root level. However, for the children of the root node, we may be able to prune the candidate set by using simple geometrical constraints. This approach can be applied recursively until the size of the candidate set is one for each node. Our experimental results demonstrate that our scheme can improve the computational speed of the direct k-means algorithm by an order to two orders of magnitude in the total number of distance calculations and the overall time of computation."}
{"_id": "b13618851ec9a4764b465ed3048d73e4e573ed46", "title": "Miniaturized Wilkinson power dividers utilizing capacitive loading", "text": "The authors report the miniaturization of a planar Wilkinson power divider by capacitive loading of the quarter wave transmission lines employed in conventional Wilkinson power dividers. Reduction of the transmission line segments from /spl lambda//4 to between /spl lambda//5 and /spl lambda//12 are reported here. The input and output lines at the three ports and the lines comprising the divider itself are coplanar waveguide (CPW) and asymmetric coplanar stripline (ACPS), respectively. The 10 GHz power dividers are fabricated on high resistivity silicon (HRS) and alumina wafers. These miniaturized dividers are 74% smaller than conventional Wilkinson power dividers, and have a return loss better than +30 dB and an insertion loss less than 0.55 dB. Design equations and a discussion about the effect of parasitic reactance on the isolation are presented for the first time."}
{"_id": "e881439705f383468b276415b9d01d0059c1d3e5", "title": "A Randomized Singular Value Decomposition Algorithm for Image Processing Applications", "text": "The main contribution of this paper is to demonstrate that a new randomized SVD algorithm, proposed by Drineas et. al. in [4], is not only of theoretical interest but also a viable and fast alternative to traditional SVD algorithms in applications (e.g. image processing). This algorithm samples a constant number of rows (or columns) of the matrix, scales them appropriately to form a small matrix, say S, and then computes the SVD of S (which is a good approximation to the SVD of the original matrix). We experimentally evaluate the accuracy and speed of this algorithm for image matrices, using various probability distributions to perform the sampling."}
{"_id": "7c8eb2391ae965c5f897a36451eab5062403aa7f", "title": "Design and experimental evaluation of the hydraulically actuated prototype leg of the HyQ robot", "text": "This paper focuses on the design and experimental evaluation of a hydraulically actuated robot leg. The evaluation of the leg prototype is an important milestone in the development of HyQ, a Hydraulically actuated Quadruped robot. The prototype features two rotary joints actuated by hydraulic cylinders and has a mass of 4.5kg. We performed several experiments with the leg prototype attached to a vertical slider to tests the robustness of the mechanical design and the hydraulic actuation system. Besides the experimental evaluation of the hydraulic components, we also extensively studied the sensor data of the leg during periodic hopping. The results show that hydraulic actuation is suitable for legged robots because of its high power-to-weight ratio, fast response and ability to cope with high impact force peaks. Furthermore, we compare the cylinder force data obtained by the load cell with the calculated value based on the cylinder pressures to analyze if it is possible to eliminate this sensory system redundancy in the future. Through these studies, weaknesses of the design were identified and suggestions on how to improve them are presented."}
{"_id": "1a10508b6fdc44a2f1cc0a62659b0c87e46f8642", "title": "Cortical coordination dynamics and cognition", "text": "New imaging techniques in cognitive neuroscience have produced a deluge of information correlating cognitive and neural phenomena. Yet our understanding of the inter-relationship between brain and mind remains hampered by the lack of a theoretical language for expressing cognitive functions in neural terms. We propose an approach to understanding operational laws in cognition based on principles of coordination dynamics that are derived from a simple and experimentally verified theoretical model. When applied to the dynamical properties of cortical areas and their coordination, these principles support a mechanism of adaptive inter-area pattern constraint that we postulate underlies cognitive operations generally."}
{"_id": "1b2cb27c5144d4bd2135fa150cd4ecb90b77418f", "title": "Cerebral mechanisms of word masking and unconscious repetition priming", "text": "We used functional magnetic resonance imaging (fMRI) and event-related potentials (ERPs) to visualize the cerebral processing of unseen masked words. Within the areas associated with conscious reading, masked words activated left extrastriate, fusiform and precentral areas. Furthermore, masked words reduced the amount of activation evoked by a subsequent conscious presentation of the same word. In the left fusiform gyrus, this repetition suppression phenomenon was independent of whether the prime and target shared the same case, indicating that case-independent information about letter strings was extracted unconsciously. In comparison to an unmasked situation, however, the activation evoked by masked words was drastically reduced and was undetectable in prefrontal and parietal areas, correlating with participants' inability to report the masked words."}
{"_id": "3f262d230ef401e4981cf73b4c6b8946a807995a", "title": "Neural Darwinism and consciousness", "text": "Neural Darwinism (ND) is a large scale selectionist theory of brain development and function that has been hypothesized to relate to consciousness. According to ND, consciousness is entailed by reentrant interactions among neuronal populations in the thalamocortical system (the 'dynamic core'). These interactions, which permit high-order discriminations among possible core states, confer selective advantages on organisms possessing them by linking current perceptual events to a past history of value-dependent learning. Here, we assess the consistency of ND with 16 widely recognized properties of consciousness, both physiological (for example, consciousness is associated with widespread, relatively fast, low amplitude interactions in the thalamocortical system), and phenomenal (for example, consciousness involves the existence of a private flow of events available only to the experiencing subject). While no theory accounts fully for all of these properties at present, we find that ND and its recent extensions fare well."}
{"_id": "7985e2f58c03ad5ab51a4daf8fbb276c6ab87116", "title": "An information integration theory of consciousness", "text": "Consciousness poses two main problems. The first is understanding the conditions that determine to what extent a system has conscious experience. For instance, why is our consciousness generated by certain parts of our brain, such as the thalamocortical system, and not by other parts, such as the cerebellum? And why are we conscious during wakefulness and much less so during dreamless sleep? The second problem is understanding the conditions that determine what kind of consciousness a system has. For example, why do specific parts of the brain contribute specific qualities to our conscious experience, such as vision and audition? This paper presents a theory about what consciousness is and how it can be measured. According to the theory, consciousness corresponds to the capacity of a system to integrate information. This claim is motivated by two key phenomenological properties of consciousness: differentiation \u2013 the availability of a very large number of conscious experiences; and integration \u2013 the unity of each such experience. The theory states that the quantity of consciousness available to a system can be measured as the \u03a6 value of a complex of elements. \u03a6 is the amount of causally effective information that can be integrated across the informational weakest link of a subset of elements. A complex is a subset of elements with \u03a6>0 that is not part of a subset of higher \u03a6. The theory also claims that the quality of consciousness is determined by the informational relationships among the elements of a complex, which are specified by the values of effective information among them. Finally, each particular conscious experience is specified by the value, at any given time, of the variables mediating informational interactions among the elements of a complex. The information integration theory accounts, in a principled manner, for several neurobiological observations concerning consciousness. As shown here, these include the association of consciousness with certain neural systems rather than with others; the fact that neural processes underlying consciousness can influence or be influenced by neural processes that remain unconscious; the reduction of consciousness during dreamless sleep and generalized seizures; and the time requirements on neural interactions that support consciousness. The theory entails that consciousness is a fundamental quantity, that it is graded, that it is present in infants and animals, and that it should be possible to build conscious artifacts."}
{"_id": "322fb758814a900c5a5500b01eff5fa99f4cb69d", "title": "Interaction as performance : cases of configuring physical interfaces in mixed media", "text": "Mixed media, as artful assemblages of digital objects and physical artefacts, provide distinctive opportunities for experiential, presentational and representational interaction. In project-based learning of architecture design, participants staged spatial narratives with multiple projections, performed mixed objects and artefacts, and exploited bodily movements in mixed representations. These cases show how physical interfaces in mixed media acquire a spatial dimension, integrate physical artefacts and bodily movements and propose configurability as a central feature. A perspective based on anthropological concepts of performance makes it possible to address these aspects in a coherent way, pointing to sense experience, the individuality and collective emergence of expression and its diachronic and event-like character. From this perspective, interaction is part of expressive events aimed at generating new insights for participants (interchangeable performers and spectators) privileging sense experience. Events are the outcome of configurations of space, artefacts and digital media, and are characterised by a simultaneousness of doing and undergoing, of bodily presence and representation. More importantly, the performance perspective suggests a particular temporal view of interaction, based on the concept of event, addressing a neglected granularity of analysis between the moment-by-moment unfolding of interaction and the longer term co-evolution of technology and practice. Implications of interaction as performance contribute to a wider program of interaction design, thereby providing alternatives to established human-computer interaction tenets: the notion of event is an alternative to the notion of task; perception in Dewey's terms replaces recognition proposing expression as an alternative to accountability and usability. Implications include looking at how space can be configured and staged instead of measured or simulated, and how situations can be staged instead of sensed and recognised, privileging the sensing human over the"}
{"_id": "a6ce2f0795839d9c2543d64a08e043695887e0eb", "title": "Driver Gaze Region Estimation Without Using Eye Movement", "text": "Postmaster: Send undelivered copies and address changes to IEEE Intelligent Systems, Membership Processing Dept., IEEE Service Center, 445 Hoes Lane, Piscataway, NJ 08854-4141. Periodicals Postage Paid at New York, NY, and at additional mailing offices. Canadian GST #125634188. Canada Post Publications Mail Agreement Number 40013885. Return undeliverable Canadian addresses to 4960-2 Walker Rd., Windsor, ON N9A 6J3. Printed in the USA. Reuse Rights and Reprint Permissions: Educational or personal use of this material is permitted without fee, provided such use 1) is not made for profit, 2) includes this notice and a full citation to the original work on the first page of the copy, and 3) does not imply IEEE endorsement of any third-party products or services. Authors and their companies are permitted to post the accepted version of IEEE-copyrighted material on their own Web servers without permission, provided that the IEEE copyright notice and a full citation to the original work appear on the first screen of the posted copy. An accepted manuscript is a version that has been revised by the author to incorporate review suggestions but not the published version with copyediting, proofreading, and formatting added by IEEE. For more information, please go to http://www. ieee.org/publications_standards/publications/rights/ paperversionpolicy.html. Permission to reprint/republish this material for commercial, advertising, or promotional purposes or for creating new collective works for resale or redistribution must be obtained from IEEE by writing to the IEEE Intellectual Property Rights Office, 445 Hoes Lane, Piscataway, NJ 08854-4141 or pubs-permissions@ ieee.org. Copyright \u00a9 2016 IEEE. All rights reserved. Abstracting and Library Use: Abstracting is permitted with credit to the source. Libraries are permitted to photocopy for private use of patrons, provided the per-copy fee indicated in the code at the bottom of the first page is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923. ARTICLES"}
{"_id": "25cb62fb7d99e29f1d5ba1a6228952c9676b334c", "title": "Controlling privacy in recommender systems", "text": "Recommender systems involve an inherent trade-off between accuracy of recommendations and the extent to which users are willing to release information about their preferences. In this paper, we explore a two-tiered notion of privacy where there is a small set of \u201cpublic\u201d users who are willing to share their preferences openly, and a large set of \u201cprivate\u201d users who require privacy guarantees. We show theoretically and demonstrate empirically that a moderate number of public users with no access to private user information already suffices for reasonable accuracy. Moreover, we introduce a new privacy concept for gleaning relational information from private users while maintaining a first order deniability. We demonstrate gains from controlled access to private user preferences."}
{"_id": "5bc1d57c05e86c57797be7c221f3043fa05cb39e", "title": "On fast surface reconstruction methods for large and noisy point clouds", "text": "In this paper we present a method for fast surface reconstruction from large noisy datasets. Given an unorganized 3D point cloud, our algorithm recreates the underlying surface's geometrical properties using data resampling and a robust triangulation algorithm in near realtime. For resulting smooth surfaces, the data is resampled with variable densities according to previously estimated surface curvatures. Incremental scans are easily incorporated into an existing surface mesh, by determining the respective overlapping area and reconstructing only the updated part of the surface mesh. The proposed framework is flexible enough to be integrated with additional point label information, where groups of points sharing the same label are clustered together and can be reconstructed separately, thus allowing fast updates via triangular mesh decoupling. To validate our approach, we present results obtained from laser scans acquired in both indoor and outdoor environments."}
{"_id": "bb152c9d7896d3bc125d5a871f8bcef7f4c98563", "title": "Distant Supervision for Relation Extraction with Hierarchical Attention and Entity Descriptions", "text": "Distant supervision for relation extraction is an effective method to find novel relational facts from plain text. However, distant supervision inevitably suffers from wrong label problem, and most existing methods generally focus on direct sentences containing entity pairs, but ignore massive information from background knowledge. To tackle these problems, we propose a novel hierarchical attention model to select valid instances and capture vital semantic information in them. Furthermore, we incorporate entity descriptions extracted from Wikipedia into the hierarchical attention model to provide supplementary background knowledge. The proposed architecture can not only combat the noise introduced by distant supervision, but also adequately extract latent and helpful background information. The experimental results on both Chinese and English datasets show that the proposed approach consistently achieves significant improvements on relation extraction as compared with strong baselines."}
{"_id": "b0b1b04f4d1a66f7208b1482725e01ff8de85e0c", "title": "2018 Low-Power Image Recognition Challenge", "text": "The Low-Power Image Recognition Challenge (LPIRC, \u200bhttps://rebootingcomputing.ieee.org/lpirc\u200b) is an annual competition started in 2015. The competition identifies the best technologies that can classify and detect objects in images efficiently (short execution time and low energy consumption) and accurately (high precision). Over the four years, the winners\u2019 scores have improved more than 24 times. As computer vision is widely used in many battery-powered systems (such as drones and mobile phones), the need for low-power computer vision will become increasingly important. This paper summarizes LPIRC 2018 by describing the three different tracks and the winners\u2019 solutions."}
{"_id": "0d579708c77a490de0e132876f8632362797b511", "title": "A layered naming architecture for the internet", "text": "Currently the Internet has only one level of name resolution, DNS, which converts user-level domain names into IP addresses. In this paper we borrow liberally from the literature to argue that there should be three levels of name resolution: from user-level descriptors to service identifiers; from service identifiers to endpoint identifiers; and from endpoint identifiers to IP addresses. These additional levels of naming and resolution (1) allow services and data to be first class Internet objects (in that they can be directly and persistently named), (2) seamlessly accommodate mobility and multi-homing and (3) integrate middleboxes (such as NATs and firewalls) into the Internet architecture. We further argue that flat names are a natural choice for the service and endpoint identifiers. Hence, this architecture requires scalable resolution of flat names, a capability that distributed hash tables (DHTs) can provide."}
{"_id": "2c914f3d0b5552f1f7f100719ed57ff5c2627a76", "title": "Pathophysiology of human visceral obesity: an update.", "text": "Excess intra-abdominal adipose tissue accumulation, often termed visceral obesity, is part of a phenotype including dysfunctional subcutaneous adipose tissue expansion and ectopic triglyceride storage closely related to clustering cardiometabolic risk factors. Hypertriglyceridemia; increased free fatty acid availability; adipose tissue release of proinflammatory cytokines; liver insulin resistance and inflammation; increased liver VLDL synthesis and secretion; reduced clearance of triglyceride-rich lipoproteins; presence of small, dense LDL particles; and reduced HDL cholesterol levels are among the many metabolic alterations closely related to this condition. Age, gender, genetics, and ethnicity are broad etiological factors contributing to variation in visceral adipose tissue accumulation. Specific mechanisms responsible for proportionally increased visceral fat storage when facing positive energy balance and weight gain may involve sex hormones, local cortisol production in abdominal adipose tissues, endocannabinoids, growth hormone, and dietary fructose. Physiological characteristics of abdominal adipose tissues such as adipocyte size and number, lipolytic responsiveness, lipid storage capacity, and inflammatory cytokine production are significant correlates and even possible determinants of the increased cardiometabolic risk associated with visceral obesity. Thiazolidinediones, estrogen replacement in postmenopausal women, and testosterone replacement in androgen-deficient men have been shown to favorably modulate body fat distribution and cardiometabolic risk to various degrees. However, some of these therapies must now be considered in the context of their serious side effects. Lifestyle interventions leading to weight loss generally induce preferential mobilization of visceral fat. In clinical practice, measuring waist circumference in addition to the body mass index could be helpful for the identification and management of a subgroup of overweight or obese patients at high cardiometabolic risk."}
{"_id": "38c4109938aaf5fedc3d165bcfa83f67cb4513bd", "title": "Linked Data and multimedia: the state of affairs", "text": "Linked Data is a way of exposing and sharing data as resources on the Web and interlinking them with semantically related resources. In the last three years significant amounts of data have been generated, increasingly forming a globally connected, distributed data space. For multimedia content, metadata are a key factor for efficient management, organization, and retrieval. However, the relationship between multimedia and Linked Data has been rarely studied, leading to a lack of mutual awareness and, as a consequence thereof, technological deficiencies. This article introduces the basic concepts of Linked Data in the context of multimedia metadata, and discusses techniques to generate, expose, discover, and consume Linked Data. It shows that a large amount of data sources exist, which are ready to be exploited by multimedia applications. The benefit of Linked Data in two multimedia-related applications is discussed and open research issues are outlined with the goal of bringing the research fields of multimedia and Linked Data closer together in order to facilitate mutual benefit."}
{"_id": "703d5435ce7699350f9eeaa606564e3d71357b32", "title": "Cheek defects.", "text": "The challenge surgeons face when reconstructing cheek defects varies significantly depending on the location and depth of the defect and the distensibility of the surrounding tissues. The cheek is a large aesthetic unit characterized in most areas with a convex surface and inherent transitions in color and texture. These characteristics demand the surgeon's attention to achieve superior results during reconstruction. Surgeons must also recognize the free margins of the adjacent structures, including the lower eyelid, nasal ala, and lip, to minimize distortion of these areas during healing. With these challenges in mind, this article discusses several approaches to reconstruction of various types of cheek defects."}
{"_id": "6145be7ccac159d4c261c803728444acf8bd2c14", "title": "Role of self-care in management of diabetes mellitus", "text": "Diabetes mellitus (DM) is a chronic progressive metabolic disorder characterized by hyperglycemia mainly due to absolute (Type 1 DM) or relative (Type 2 DM) deficiency of insulin hormone. World Health Organization estimates that more than 346 million people worldwide have DM. This number is likely to more than double by 2030 without any intervention. The needs of diabetic patients are not only limited to adequate glycemic control but also correspond with preventing complications; disability limitation and rehabilitation. There are seven essential self-care behaviors in people with diabetes which predict good outcomes namely healthy eating, being physically active, monitoring of blood sugar, compliant with medications, good problem-solving skills, healthy coping skills and risk-reduction behaviors. All these seven behaviors have been found to be positively correlated with good glycemic control, reduction of complications and improvement in quality of life. Individuals with diabetes have been shown to make a dramatic impact on the progression and development of their disease by participating in their own care. Despite this fact, compliance or adherence to these activities has been found to be low, especially when looking at long-term changes. Though multiple demographic, socio-economic and social support factors can be considered as positive contributors in facilitating self-care activities in diabetic patients, role of clinicians in promoting self-care is vital and has to be emphasized. Realizing the multi-faceted nature of the problem, a systematic, multi-pronged and an integrated approach is required for promoting self-care practices among diabetic patients to avert any long-term complications."}
{"_id": "40e5a40ae66d44e6c00d562d068d35db6922715d", "title": "Improving the Accuracy and Speed of Support Vector Machines", "text": "Bernhard Scholkopf\" Max-Planck-Institut fur biologische Kybernetik , Spemannstr. 38 72076 Tubingen, Germany bs@mpik-tueb.mpg.de Support Vector Learning Machines (SVM) are finding application in pattern recognition , regression estimation , and operator inversion for ill-posed problems. Against this very general backdrop , any methods for improving the generalization performance, or for improving the speed in test phase, of SVMs are of increasing interest. In this paper we combine two such techniques on a pattern recognition problem. The method for improving generalization performance (the \"virtual support vector\" method) does so by incorporating known invariances of the problem. This method achieves a drop in the error rate on 10,000 NIST test digit images of 1.4% to 1.0%. The method for improving the speed (the \"reduced set\" method) does so by approximating the support vector decision surface. We apply this method to achieve a factor of fifty speedup in test phase over the virtual support vector machine. The combined approach yields a machine which is both 22 times faster than the original machine, and which has better generalization performance, achieving 1.1 % error. The virtual support vector method is applicable to any SVM problem with known invariances. The reduced set method is applicable to any support vector machine."}
{"_id": "59ff18c4c8d41bfb9baf487f186a699a8f03f44e", "title": "Online continuous stereo extrinsic parameter estimation", "text": "Stereo visual odometry and dense scene reconstruction depend critically on accurate calibration of the extrinsic (relative) stereo camera poses. We present an algorithm for continuous, online stereo extrinsic re-calibration operating only on sparse stereo correspondences on a per-frame basis. We obtain the 5 degree of freedom extrinsic pose for each frame, with a fixed baseline, making it possible to model time-dependent variations. The initial extrinsic estimates are found by minimizing epipolar errors, and are refined via a Kalman Filter (KF). Observation covariances are derived from the Cramer-Rao lower bound of the solution uncertainty. The algorithm operates at frame rate with unoptimized Matlab code with over 1000 correspondences per frame. We validate its performance using a variety of real stereo datasets and simulations."}
{"_id": "594010eaaf2b62361a605417f515907d5aedd49a", "title": "ActiveStereoNet: End-to-End Self-supervised Learning for Active Stereo Systems", "text": "In this paper we present ActiveStereoNet, the first deep learning solution for active stereo systems. Due to the lack of ground truth, our method is fully self-supervised, yet it produces precise depth with a subpixel precision of 1/30th of a pixel; it does not suffer from the common over-smoothing issues; it preserves the edges; and it explicitly handles occlusions. We introduce a novel reconstruction loss that is more robust to noise and texture-less patches, and is invariant to illumination changes. The proposed loss is optimized using a window-based cost aggregation with an adaptive support weight scheme. This cost aggregation is edge-preserving and smooths the loss function, which is key to allow the network to reach compelling results. Finally we show how the task of predicting invalid regions, such as occlusions, can be trained end-toend without ground-truth. This component is crucial to reduce blur and particularly improves predictions along depth discontinuities. Extensive quantitatively and qualitatively evaluations on real and synthetic data demonstrate state of the art results in many challenging scenes."}
{"_id": "61a70dd3553a0f05efb4cf5cd7660c4209034bd9", "title": "Towards 5G Security", "text": "This paper discusses potential security requirements and mechanisms for 5G mobile networks. It does not intend to do so exhaustively, but rather aims at initiating and spurring the work towards a 5G security architecture."}
{"_id": "2c1dad38b47fdbb77838fbe5d382923c5f6d8af1", "title": "Cyclic codes over the ring \ud835\udd3dp[u, v] / \u3008uk, v2, uv-vu\u3009", "text": "Article history: Received 4 June 2014 Received in revised form 29 December 2014 Accepted 13 January 2015 Available online xxxx Communicated by W. Cary Huffman"}
{"_id": "b5ae59a6c123b5af8152f5a741b10156c5363649", "title": "The internet of energy: a web-enabled smart grid system", "text": "The quest for sustainable energy models is the main factor driving research on smart grid technology. SGs represent the bridging paradigm to enable highly efficient energy production, transport, and consumption along the whole chain, from the source to the user. Although this concept promises to be very fruitful, the research on how to deploy it in the real world has just begun. A discussion on the enabling technologies for SGs and a possible roadmap for the profitable evolution thereof is the focus of this article. After introducing the recent trends that are pushing the SG paradigm, we will discuss various key scenarios for the SG, and briefly introduce some of its key requirements. We will then provide an analysis of how current and future standard solutions in the areas of communications and networking can be engineered into a system that fulfills the needs of the SG vision. We advocate the use of small, cheap, and resource-constrained devices with pervasive computing capabilities as the key component to deploy a ubiquitous energy control system. To this end, the recent efforts carried out by Internet standardization bodies such as the IETF and W3C toward the vision of the Internet of Things (IoT) are especially relevant. The various components of the proposed solution have been successfully showcased in real-world implementations, and relevant actors such as ETSI, ZigBee, and IPSO are already evaluating their potential for future IoT applications, making the Internet-based smart grid vision considered in this article practically achievable in the not too distant future."}
{"_id": "0bb7b5098d30b10422c2e52462f5152696390c3a", "title": "Power laws , Pareto distributions and Zipf \u2019 s law", "text": "When the probability of measuring a particular value of some quantity varies inversely as a power of that value, the quantity is said to follow a power law, also known variously as Zipf\u2019s law or the Pareto distribution. Power laws appear widely in physics, biology, earth and planetary sciences, economics and finance, computer science, demography and the social sciences. For instance, the distributions of the sizes of cities, earthquakes, forest fires, solar flares, moon craters and people\u2019s personal fortunes all appear to follow power laws. The origin of power-law behaviour has been a topic of debate in the scientific community for more than a century. Here we review some of the empirical evidence for the existence of power-law forms and the theories proposed to explain them."}
{"_id": "cd44e696b2bfa9538a48095e57c60db61bc8af89", "title": "LED driver circuit with inherent PFC", "text": "A buck-boost topology operating in discontinuous conduction mode (DCM) is used as an off line LED driver for lighting applications. Operating from a full wave rectified ac-voltage with minimum input capacitance, with a constant switching frequency and constant on-time, the utility current has near unity power factor. This LED driver is suitable for cost sensitive applications because this circuit has a minimal parts count and a very simple control circuit. Disadvantages are that the LED drive current is modulated at twice the utility frequency and DCM operation increases component stress levels."}
{"_id": "7c63785382c6b75d673c553bae09bfe505aef8be", "title": "Retargeting Semantically-Rich Photos", "text": "Semantically-rich photos contain a rich variety of semantic objects (e.g., pedestrians and bicycles). Retargeting these photos is a challenging task since each semantic object has fixed geometric characteristics. Shrinking these objects simultaneously during retargeting is prone to distortion. In this paper, we propose to retarget semantically-rich photos by detecting photo semantics from image tags, which are predicted by a multi-label SVM. The key technique is a generative model termed latent stability discovery (LSD). It can robustly localize various semantic objects in a photo by making use of the predicted noisy image tags. Based on LSD, a feature fusion algorithm is proposed to detect salient regions at both the low-level and high-level. These salient regions are linked into a path sequentially to simulate human visual perception . Finally, we learn the prior distribution of such paths from aesthetically pleasing training photos. The prior enforces the path of a retargeted photo to be maximally similar to those from the training photos. In the experiment, we collect 217 1600 \u00d71200 photos, each containing over seven salient objects. Comprehensive user studies demonstrate the competitiveness of our method."}
{"_id": "de60bd369d2ff07ab84d757dca606d7e943cc70e", "title": "Euler Number and Connectivity Indexes of a Three Dimensional Digital Picture", "text": "Fundamental properties of topological structure of a 3D digitized picture are presented including the concept of neighborhood and connectivity among volume cells (voxels) of 3D digitized binary pictures defined on a cubic grid, the concept of simplicial decomposition of a 3D digitized object, and two algorithms for calculating the Euler number (genus). First we define four types of connectivity. Second we present two algorithms to calculate the Euler number of a 3D figure. Thirdly we introduce new local features called the connectivity number (CN) and the connectivity index to study topological properties of a 3D object in a 3D digitized picture. The CN at a 1-voxel (=a voxel with the value 1) x is defined as the change in the Euler number of the object caused by changing the value of x into zero, that is, caused by deleting x. Finally, by using them, we prove a necessary and sufficient condition that a 1-voxel is deletable. A 1-voxel x is said to be deletable if deletion of x causes no decrease and no increase in the numbers of connected components, holes and cavities in a given 3D picture."}
{"_id": "80df49b1d5d0ce2aa2b03bb0d74afdb19eee9822", "title": "Necessary Conditions for Existence of Some Designs in Polynomial Metric Spaces", "text": "Let M be a metric space with a finite diameter D and a finite normalized measure \u03bcM. Let the Hilbert space L2(M) of complex-valued functions be decomposed into a countable (whenM is infinite) or finite (with D+ 1 members whenM is finite) direct sum of mutually orthogonal subspaces L2(M) = V0 \u2295 V1 \u2295 \u00b7 \u00b7 \u00b7 (V0 is the space of constant functions). We denote N = D + 1 ifM is finite and N = \u221e otherwise. In definition and description of the notion of polynomial metric spaces we follow [26, 27]."}
{"_id": "94f2b0a8a4bf74324c4d4ea97f471ceda4037304", "title": "On Maximal Spherical Codes I", "text": "We investigate the possibilities for attaining two Levenshtein upper bounds for spherical codes. We find the distance distributions of all codes meeting these bounds. Then we show that the fourth Levenshtein bound can be attained in some very special cases only. We prove that no codes with an irrational maximal scalar product meet the third Levenshtein bound. So in dimensions 3 \u2264 n \u2264 100 exactly seven codes are known to attain this bound and ten cases remain undecided. Moreover, the first two codes (in dimensions 5 and 6) are unique up to isometry. Nonexistence of maximal codes in all dimensions n with cardinalities between 2n+1 and 2n+ [7 \u221a n ] is shown as well. We prove nonexistence of several infinite families of maximal codes whose maximal scalar product is rational. The distance distributions of the only known nontrivial infinite family of maximal codes (due to Levenshtein) are given."}
{"_id": "fa4c72540d19204317c3f06474a6ce9b38f5bb5d", "title": "Interpolation and approximation in L2(gamma)", "text": "Assume a standard Brownian motion W = (Wt )t\u2208[0,1], a Borel function f : R \u2192 R such that f (W1) \u2208 L2, and the standard Gaussian measure on the real line. We characterize that f belongs to the Besov space B 2,q ( ) := (L2( ), D1,2( )) ,q , obtained via the real interpolation method, by the behavior of aX(f (X1); ) := \u2016f (W1)\u2212P Xf (W1)\u2016L2 , where =(ti )ni=0 is a deterministic time net and P X : L2 \u2192 L2 the orthogonal projection onto a subspace of \u2018discrete\u2019stochastic integrals x0+ \u2211n i=1 vi\u22121(Xti \u2212Xti\u22121) with X being the Brownian motion or the geometric Brownian motion. By using Hermite polynomial expansions the problem is reduced to a deterministic one. The approximation numbers aX(f (X1); ) can be used to describe the L2-error in discrete time simulations of the martingale generated by f (W1) and (in stochastic finance) to describe the minimal quadratic hedging error of certain discretely adjusted portfolios. \u00a9 2006 Elsevier Inc. All rights reserved."}
{"_id": "33fa11ba676f317b73f963ade7226762b4f3f9c2", "title": "On Maximal Codes in Polynominal Metric Spaces", "text": ""}
{"_id": "6d5866cbc027bdf72693685c26344d4365433d72", "title": "Nonexistence of Certain Spherical Designs of Odd Strengths and Cardinalities", "text": "A spherical\u03c4 -design onSn\u22121 is a finite set such that, for all polynomials f of degree at most \u03c4 , the average of over the set is equal to the average of f over the sphere Sn\u22121. In this paper we obtain some necessary conditions for the existence of designs of odd strengths and cardinalities. This gives nonexistence results in many cases. Asymptotically, we derive a bound which is better than the corresponding estimation ensured by the Delsarte\u2013 Goethals\u2013Seidel bound. We consider in detail the strengths \u03c4 = 3 and\u03c4 = 5 and obtain further nonexistence results in these cases. When the nonexistence argument does not work, we obtain bounds on the minimum distance of such designs."}
{"_id": "6b68a0c00a1a6d2ffa96402065378b1dae28f0e2", "title": "Image Steganography based on a Parameterized Canny Edge Detection Algorithm", "text": "Steganography is the science of hiding digital information in such a way that no one can suspect its existence. Unlike cryptography which may arouse suspicions, steganography is a stealthy method that enables data communication in total secrecy. Steganography has many requirements, the foremost one is irrecoverability which refers to how hard it is for someone apart from the original communicating parties to detect and recover the hidden data out of the secret communication. A good strategy to guaranteeirrecoverability is to cover the secret data not usinga trivial method based on a predictable algorithm, but using a specific random pattern based on a mathematical algorithm. This paper proposes an image steganography technique based on theCanny edge detection algorithm. It is designed to hide secret data into a digital image within the pixels that make up the boundaries of objects detected in the image. More specifically, bits of the secret data replace the three LSBs of every color channel of the pixels detected by the Canny edge detection algorithm as part of the edges in the carrier image. Besides, the algorithm is parameterized by three parameters: The size of the Gaussian filter, a low threshold value, and a high threshold value. These parameters can yield to different outputs for the same input image and secret data. As a result, discovering the inner-workings of the algorithm would be considerably ambiguous, misguiding steganalysts from the exact location of the covert data. Experiments showed a simulation tool codenamed GhostBit, meant to cover and uncover secret data using the proposed algorithm. As future work, examining how other image processing techniques such as brightness and contrast adjustment can be taken"}
{"_id": "3ff5284e97c511ff15f588a2ebdd3f9220faa8af", "title": "Comparative analysis of master-slave latches and flip-flops for high-performance and low-power systems", "text": "In this paper, we propose a set of rules for consistent estimation of the real performance and power features of the flip-flop and master\u2013slave latch structures. A new simulation and optimization approach is presented, targeting both highperformance and power budget issues. The analysis approach reveals the sources of performance and power-consumption bottlenecks in different design styles. Certain misleading parameters have been properly modified and weighted to reflect the real properties of the compared structures. Furthermore, the results of the comparison of representative master\u2013slave latches and flipflops illustrate the advantages of our approach and the suitability of different design styles for high-performance and low-power applications."}
{"_id": "352ea65a8a912f254a16dea7f6c25543e5e2af77", "title": "Dual-band microstrip antenna for WiMAX applications using complementary split ring resonators", "text": "A new design of dual-band microstrip antenna is presented in this paper using metamaterial concept. By etching a metamaterial unit cells that have a resonance frequency of 5.3 GHz within the ground plane of a conventional patch resonates at 3.6 GHz, a dual resonance response is created to satisfy the requirement of covering the middle and upper bands of Wi-MAX.A detailed study of the placement of the unit cell is introduced and the most suitable one is chosen to be the place of our unit cell. Multi-unit cell concept is used for performance enhancement of the fabricated design to achieve a good matching with the simulated one."}
{"_id": "5988871eacdf73804bd2f3ad8e680431c4272d61", "title": "A Two-Level Investigation of Information Systems Outsourcing", "text": "O UTSOURCING is the contracting of various systems to outside information systems (IS) vendors. Ever since the Eastman Kodak--IBM partnership was reported in 1989 [15], outsourcing has emerged and has been recognized as a key method of managing IS. In previous studies of outsourcing [4, 5, 12, 15, 17, 20], two gaps are noticeable. The first relates to the fact that requirements for outsourcing are not uniform, and managers have different approaches to the process. Yet most previous research studies have phrased their inquiries unidimensionally in terms of the extent or degree of outsourcing: either as a binary variable or as size of contract in terms of the percentage of total IS budget [5, 15]. Neither unidimensional gauge is sufficiently expressive; neither allows representation of diverse patterns of outsourcing. Second, even though the IS industry had not used the term \u201coutsourcing\u201d explicitly in the past, outsourcing is not a new concept and has existed for many years in one form or another. Even though firms have repetitively used outside vendors for years, researchers have neither considered prior relationships in their studies of IS outsourcing nor studied the intentions of client firms to continue the partnership with the outsourcing vendors in the future. In this study, IS outsourcing decisions are investigated at two levels. The first level deals with the initial outsourcing decision of client firms. The second level pertains to the intention to continue the relationships with current outsourcing vendors in the future. The following three research questions are explored at these two levels (in contrast to the singlelevel approach taken by most researchers): \u2022 What are the dimensions of outsourcing decisions? Two dimensions, extent of substitution by vendors and strategic impact of IS applications, are proposed in order to conceptualize the diverse types of outsourcing relationships between clients and vendors. Based on these two dimensions, four types of outsourcing relationships are proposed. \u2022 What are the determinants that affect the dimensions of outsourcing decisions at the first level? The concepts derived from incomplete contracts [2, 9] and transactions cost economics [23, 24] theory are used with information technology (IT) organizational contexts and processes as a foundation to study the determinants of the two dimensions of outsourcing decisions. Information Syst"}
{"_id": "f04778053628446775733ef25536358c0f26924f", "title": "Design of Low Actuation Voltage RF MEMS Switch", "text": "Low-loss microwave microelectromechanical systems (MEMS) shunt switches are reported that utilize highly compliant serpentine spring folded suspensions together with large area capacitive actuators to achieve low actuation voltages while maintaining sufficient off-state isolation. The RF MEMS switches were fabricated via a surface micromachining process using PI2545 Polyimide 1 as the sacrificial layer. The switch structure was composed of electroplated nickel and the serpentine folded suspensions had varying number of meanders from 1 to 5. DC measurements indicate actuation voltages as low as 9 V with an on-to-off capacitance ratio of 48. Power handling measurement results showed no \u201dselfbiasing\u201d or failure of the MEMS switches for power levels up to 6.6W. RF measurements demonstrate an isolation of -26 dB at 40 GHz."}
{"_id": "9b76da8963efc95a16ce1de33ecd1866ac73a913", "title": "Comparison of different methods to cancel offset capacitance in capacitive displacement sensors", "text": "This paper presents a comparative study of the power consumption of two methods to cancel the offset capacitance in high-resolution capacitive displacement sensors. One method is applied based on correlated-double-sampling (CDS) technique and the other method is applied using oversampling technique. The analytical comparison predicts similar power consumption, if the same resolution for the same conversion time need to be reached. Experimental results with implementations of the two methods confirmed the analytical prediction."}
{"_id": "913864ff9e0ff021bddd4266f3f322b5156aa4c6", "title": "From IT Addiction to Discontinued Use: A Cognitive Dissonance Perspective", "text": "One of the main topics discussed within the realm of the dark side of IT is addiction. IT addiction has been found to bring adverse consequences on users\u2019 lives. In order to overcome the difficulties associated with IT addiction, interrupting and quitting addiction has become an important research agenda. Recent research findings have shown that IT addicts do not always feel guilty about their usage, and in many cases, they do not even perceive their usage as problematic. In this study, we draw on cognitive dissonance theory to theorize and propose a model showing that the degree of users\u2019 cognitive dissonance can make a difference in their willingness to quit their IT addiction. We tested the model using data collected from 226 social network sites users. The analysis provided empirical support for our model and shed light on the mediation and moderation effects of cognitive dissonance in this process. 1"}
{"_id": "7352925e375549e4a001990ff93af72b7805f149", "title": "Comparison of Internet addicts and non-addicts in Taiwanese high school", "text": "This study investigated the difference between Internet addicts and non-addicts in Taiwanese high schools, and focused specifically on their Internet usage patterns, and gratification and communication pleasures. A total of 1708 valid data samples of high school adolescents were collected. Among this sample, 236 subjects (13.8%) were identified as addicts using the eight-item Internet addiction Diagnostic Questionnaire designed by Young [Internet addiction survey [Online]. Available: http://www.pitt.edu/_ksy/survey.htm]. The analytical results revealed that Internet addicts spent almost twice as many hours on line on average than the non-addicts. Notably, surfing with a social/entertainment motivation and gratification was positively correlated with Internet addiction. Furthermore, Internet addicts obtained markedly higher overall PIUST scores and scored higher than non-addicts on four subscales (tolerance; compulsive use and withdrawal; related problems, including family, school, health, and other problems; interpersonal and financial problems). While Internet addicts perceived the Internet to have significantly more negative influences on daily routines, school performance, teacher and parental relation than non-addicts, both Internet addicts and non-addicts viewed Internet use as enhancing peer relations. Moreover, students with personalities characterized by dependence, shyness, depression and low self-esteem had a high tendency to become addicted. 2004 Elsevier Ltd. All rights reserved."}
{"_id": "0377692f15151de63bbb919ba949f5a035cc19ef", "title": "Scheduling in the Dark ( Improved Result )", "text": "We considered non-clairvoyant multiprocessor scheduling of jobs with arbitrary arrival times and changing execution characteristics. The problem has been studied extensively when either the jobs all arrive at time zero, or when all the jobs are fully parallelizable, or when the scheduler has considerable knowledge about the jobs. This paper considers for the first time this problem without any of these three restrictions yet when our algorithm is given more resources than the adversary. We provide new upper and lower bound techniques applicable in this more difficult scenario. The results are of both theoretical and practical interest. In our model, a job can arrive at any arbitrary time and its execution characteristics can change through the life of the job from being anywhere from fully parallelizable to completely sequential. We assume that the scheduler has no knowledge about the jobs except for knowing when a job arrives and knowing when it completes. (This is why we say that the scheduler is completely in the dark.) Given all this, we prove that the scheduler algorithm Equi-partition, though simple, performs within a constant factor as well as the optimal scheduler as long as it is given at least twice as many processors. More over, we prove that if none of the jobs are \u201dstrictly\u201d fully parallelizable, then Equi-partition performs competitively with no extra processors. Author is supported by NSERC Canada."}
{"_id": "072f64cf386f47fff76abb9c6dc809413861bb48", "title": "Optimal Lower Bounds on Regular Expression Size Using Communication Complexity", "text": "The problem of converting deterministic finite automata into (short) regular expressions is considered. It is known that the required expression size is 2 in the worst case for infinite languages, and for finite languages it is n log n) and n , if the alphabet size grows with the number of states n of the given automaton. A new lower bound method based on communication complexity for regular expression size is developed to show that the required size is indeed n . For constant alphabet size the best lower bound known to date is \u03a9(n), even when allowing infinite languages and nondeterministic finite automata. As the technique developed here works equally well for deterministic finite automata over binary alphabets, the lower bound is improved to n ."}
{"_id": "faec856e8bd4567fcd9d25e87d4fbce5ce8154b2", "title": "Quantifying the Security Posture of Containerized Mission Critical Systems", "text": "Determining the security posture of containerized mission-critical systems is difficult given the vast number of parameters that determine a system's ability to withstand cyber-attacks. In many cases, technical audits can be performed to determine a system's security posture and to evaluate how well they are configured to protect against known cyber-threats. Properly configuring systems can lead to higher security, however, the configuration and auditing process can be time-consuming and error-prone. In addition, the results obtained from these audits can be difficult to summarize into one meaningful metric that accurately characterizes system's security posture as guided by customer needs. In this work, we propose an approach for computing a security-posture metric for containerized systems that supports operators during the sense-making process that follows traditional security audits. The results of this work can be used on a per-deployment case, taking into account what matters to operators of containerized mission-critical systems."}
{"_id": "bb6ff9b72e2e02da62ffdfa5e476614b03dca40a", "title": "Intelligent Heart Disease Prediction System Using Probabilistic Neural Network", "text": "The diagnosis of diseases is a crucial and difficult job in medicine. The recognition of heart disease from diverse features or signs is a multi-layered problem that is not free from false assumptions and is frequently accompanied by hasty effects [1]. Thus an attempt to exploit knowledge and experience of several specialists and clinical screening data of patients composed in databases to assist the diagnosis procedure is regarded as a great challenge. The healthcare industry gathers enormous amounts of heart disease data that unfortunately, are not mined to determine concealed information for effective diagnosing [2]. In this paper, an efficient approach for the intelligent heart disease prediction based on Probabilistic Neural Network (PNN) technique is proposed. Initially, the data set containing 13 medical attributes were obtained from the Cleveland heart disease database. The dataset is clustered with the aid of k-means clustering algorithm. The PNN with radial basis function is trained using the selected data sets. The comparison between existing approaches and proposed approach with respect to accuracy of prediction, sensitivity and specificity is recorded in this paper. The results from experiments returned with diminishing fact that there is considerable improvement in classification and prediction. The proposed PNN works as promising tool for prediction of heart disease."}
{"_id": "8385d35fe4dae9416f1efad6e38f9ae54548361c", "title": "Requirements for Safe Robots : Measurements , Analysis and New Insights", "text": "Physical human\u2013robot interaction and cooperation has become a topic of increasing importance and of major focus in robotics research. An essential requirement of a robot designed for high mobility and direct interaction with human users or uncertain environments is that it must in no case pose a threat to the human. Until recently, quite a few attempts were made to investigate real-world threats via collision tests and use the outcome to considerably improve safety during physical human\u2013robot interaction. In this paper, we give an overview of our systematic evaluation of safety in human\u2013robot interaction, covering various aspects of the most significant injury mechanisms. In order to quantify the potential injury risk emanating from such a manipulator, impact tests with the DLR-Lightweight Robot III were carried out using standard automobile crash test facilities at the German Automobile Club (ADAC). Based on these tests, several industrial robots of different weight have been evaluated and the influence of the robot mass and velocity have been investigated. The evaluated non-constrained impacts would only partially capture the nature of human\u2013robot safety. A possibly constrained environment and its effect on the resulting human injuries are discussed and evaluated from different perspectives. As well as such impact tests and simulations, we have analyzed the problem of the quasi-static constrained impact, which could pose a serious threat to the human even for low-inertia robots under certain circumstances. Finally, possible injuries relevant in robotics are summarized and systematically classified. KEY WORDS\u2014Physical Human-Robot Interaction, Simulation, Flexible Arms, Mechanics, Design and Control, Force Control The International Journal of Robotics Research Vol. 00, No. 00, Xxxxxxxx 2009, pp. 000\u2013000 DOI: 10.1177/0278364909343970 c The Author(s), 2009. Reprints and permissions: http://www.sagepub.co.uk/journalsPermissions.nav Figures 1\u201310, 12\u201318 appear in color online: http://ijr.sagepub.com"}
{"_id": "3b5b9cbd2f0cfd390eeb968c99266115cb2c9597", "title": "Sego: Pervasive Trusted Metadata for Efficiently Verified Untrusted System Services", "text": "Sego is a hypervisor-based system that gives strong privacy and integrity guarantees to trusted applications, even when the guest operating system is compromised or hostile. Sego verifies operating system services, like the file system, instead of replacing them. By associating trusted metadata with user data across all system devices, Sego verifies system services more efficiently than previous systems, especially services that depend on data contents. We extensively evaluate Sego's performance on real workloads and implement a kernel fault injector to validate Sego's file system-agnostic crash consistency and recovery protocol."}
{"_id": "3811b03c4ebbd93dfc602eef422fc8237fda8654", "title": "A Data Driven Approach for Algebraic Loop Invariants", "text": "We describe a new algorithm GUESS-AND-CHECK for computing algebraic equation invariants. These invariants are of the form \u2227ifi(x1, . . . , xn) = 0, where each fi is a polynomial over the variables x1, . . . , xn of the program. Two novel features of our algorithm are:(1) it is data driven, that is, invariants are derived from data generated from concrete executions of the program, and (2) it is sound and complete, that is, the algorithm terminates with the correct invariant if it exists. The GUESS-AND-CHECK algorithm proceeds iteratively in two phases. The \u201cguess\u201d phase uses linear algebra techniques to efficiently derive a candidate invariant from data. This candidate invariant is subsequently validated in a \u201ccheck\u201d phase. If the check phase fails to validate the candidate invariant, then the proof of invalidity is used to generate more data and this is used to find better candidate invariants in a subsequent guess phase. Therefore, GUESS-AND-CHECK iteratively invokes the guess and check phases until it finds the desired invariant. In terms of implementation, off-the-shelf linear algebra solvers are used to implement the guess phase, and off-the-shelf decision procedures are used to implement the check phase. We have evaluated our technique on a number of benchmark programs from recent papers on invariant generation. Our results are encouraging \u2013 we are able to efficiently compute algebraic invariants in all cases."}
{"_id": "95e8c75daa1c65264b948392356a86596ff0b695", "title": "Nanowire Ultraviolet Photodetectors and Optical Switches", "text": "Nanowires and nanotubes may become important building blocks for nanoscale optoelectronics, since they can function as miniaturized devices as well as electrical interconnects. Nano-devices such as field-effect transistors, single-electron transistors, metal\u00b1semiconductor junctions, and intermolecular crossed junctions have been demonstrated. Many of these devices rely on binary switching, which is critical for important applications such as memory storage and logic circuits. Switching on the nanometer and molecular level has been predominantly achieved through proper electrical gating, as exemplified by nanotube transistors. However, no attention has been given to the photoconducting properties of nanowires despite the exciting possibilities for use in optoelectronic circuits. Here, we show the possibility of creating highly sensitive nanowire switches by exploring the photoconducting properties of individual semiconductor nanowires. The conductivity of the ZnO nanowires is extremely sensitive to ultraviolet light exposure. The light-induced conductivity increase allows us to reversibly switch the nanowires between aOFFo and aONo states, an optical gating phenomenon analogous to the commonly used electrical gating. The ZnO nanowires used in the experiments were grown by a vapor phase transport process developed in our lab. The diameters of these wires range from 50 to 300 nm. To characterize their photoconducting properties, the nanowires were dispersed directly on pre-fabricated gold electrodes. Alternatively, electron-beam lithography was used to fabricate gold electrodes on top of the nanowires. Field-emission scanning electron microscopy (FE-SEM) was used to image the ZnO nanowire devices. Electrical resistivity measurements were performed in a four-terminal configuration in air, nitrogen, or vacuum environments. Four-terminal measurements of individual ZnO nanowires indicate that they are highly insulating in the dark with a resistivity above 3.5 MX cm. When the nanowires are exposed to ultraviolet (UV)-light with wavelengths below 380 nm (handheld UV-lamp, 0.3 mW cm, 365 nm), the nanowire resistivity decreases by typically 4 to 6 orders of magnitude. Figure 1 compares the current\u00b1voltage (I\u00b1V) curves measured on a 60 nm nanowire in the dark and upon UV-light exposure. A larger photoresponse was detected at higher bias. We notice that the I\u00b1V curve for the UV-exposed nanowire exhibits nonlinear behavior. The same nonlinear I\u00b1V has been observed for both the wire-on-electrode and electrode-on-wire configurations. The four-terminal and two-terminal measurements show essentially identical resistivity values, which suggests that the Au/ZnO contacts may not contribute to the I\u00b1V nonlinearity. The exact reason for this nonlinearity remains unknown at this stage. The high sensitivity of the nanowire photoconductors can be seen in Figure 2, which shows the power dependence of the photoresponse. The third harmonic of a Nd:YAG laser was used as the UV light source. Neutral density filters were used to change the incident UV light power. It was found that the photoresponse (Ipc) can be expressed by a simple power law"}
{"_id": "6db97f77c1f6f5636cf0ae7650f1718e143840a2", "title": "The Dual-State Theory of Prefrontal Cortex Dopamine Function with Relevance to Catechol-O-Methyltransferase Genotypes and Schizophrenia", "text": "There is now general consensus that at least some of the cognitive deficits in schizophrenia are related to dysfunctions in the prefrontal cortex (PFC) dopamine (DA) system. At the cellular and synaptic level, the effects of DA in PFC via D1- and D2-class receptors are highly complex, often apparently opposing, and hence difficult to understand with regard to their functional implications. Biophysically realistic computational models have provided valuable insights into how the effects of DA on PFC neurons and synaptic currents as measured in vitro link up to the neural network and cognitive levels. They suggest the existence of two discrete dynamical regimes, a D1-dominated state characterized by a high energy barrier among different network patterns that favors robust online maintenance of information and a D2-dominated state characterized by a low energy barrier that is beneficial for flexible and fast switching among representational states. These predictions are consistent with a variety of electrophysiological, neuroimaging, and behavioral results in humans and nonhuman species. Moreover, these biophysically based models predict that imbalanced D1:D2 receptor activation causing extremely low or extremely high energy barriers among activity states could lead to the emergence of cognitive, positive, and negative symptoms observed in schizophrenia. Thus, combined experimental and computational approaches hold the promise of allowing a detailed mechanistic understanding of how DA alters information processing in normal and pathological conditions, thereby potentially providing new routes for the development of pharmacological treatments for schizophrenia."}
{"_id": "716a6d5706305697a4b22d5dd694a7ee668b842f", "title": "KEEPING YOUR HEADCOUNT WHEN ALL ABOUT YOU ARE LOSING THEIRS : DOWNSIZING , VOLUNTARY TURNOVER RATES , AND THE MODERATING ROLE OF HR PRACTICES", "text": "Although both downsizing and voluntary turnover have been topics of great interest in the organizational literature, little research has addressed the possible relationship between the two. Using organization-level data from multiple industries, we first investigate whether downsizing predicts voluntary turnover rates. Second, to lend support to our causal model, we examine whether this relationship is mediated by aggregated levels of organizational commitment. Third, we test whether the downsizing-turnover rate relationship: (1) is mitigated by HR practices that tend to embed employees in the organization or convey procedural fairness; and (2) is strengthened by HR practices that enhance career development. Results support the hypothesized main, mediated, and moderated effects."}
{"_id": "c5c5f3a4f177af2de8ec61039ebc6e43bb377041", "title": "Bias in Wikipedia", "text": "While studies have shown that Wikipedia articles exhibit quality that is comparable to conventional encyclopedias, research still proves that Wikipedia, overall, is prone to many different types of Neutral Point of View (NPOV) violations that are explicitly or implicitly caused by bias from its editors. Related work focuses on political, cultural and gender bias. We are developing an approach for detecting both explicit and implicit bias in Wikipedia articles and observing its evolution over time. Our approach is based on different factors of bias, with the most important ones being language style, editors, and citations. In this paper we present the approach, methodology and a first analysis."}
{"_id": "446f2361283f627639a14ad5776b8502cffe15a5", "title": "Touching the void: direct-touch interaction for intangible displays", "text": "In this paper, we explore the challenges in applying and investigate methodologies to improve direct-touch interaction on intangible displays. Direct-touch interaction simplifies object manipulation, because it combines the input and display into a single integrated interface. While traditional tangible display-based direct-touch technology is commonplace, similar direct-touch interaction within an intangible display paradigm presents many challenges. Given the lack of tactile feedback, direct-touch interaction on an intangible display may show poor performance even on the simplest of target acquisition tasks. In order to study this problem, we have created a prototype of an intangible display. In the initial study, we collected user discrepancy data corresponding to the interpretation of 3D location of targets shown on our intangible display. The result showed that participants performed poorly in determining the z-coordinate of the targets and were imprecise in their execution of screen touches within the system. Thirty percent of positioning operations showed errors larger than 30mm from the actual surface. This finding triggered our interest to design a second study, in which we quantified task time in the presence of visual and audio feedback. The pseudo-shadow visual feedback was shown to be helpful both in improving user performance and satisfaction."}
{"_id": "3764717e9e2ed08d2a9f8d096adf2730cfba3902", "title": "Urban computing with taxicabs", "text": "Urban computing for city planning is one of the most significant applications in Ubiquitous computing. In this paper we detect flawed urban planning using the GPS trajectories of taxicabs traveling in urban areas. The detected results consist of 1) pairs of regions with salient traffic problems and 2) the linking structure as well as correlation among them. These results can evaluate the effectiveness of the carried out planning, such as a newly built road and subway lines in a city, and remind city planners of a problem that has not been recognized when they conceive future plans. We conduct our method using the trajectories generated by 30,000 taxis from March to May in 2009 and 2010 in Beijing, and evaluate our results with the real urban planning of Beijing."}
{"_id": "af1abcaaa1736605f2aca4dcbd09984902f4edf6", "title": "Network diversity and economic development.", "text": "Social networks form the backbone of social and economic life. Until recently, however, data have not been available to study the social impact of a national network structure. To that end, we combined the most complete record of a national communication network with national census data on the socioeconomic well-being of communities. These data make possible a population-level investigation of the relation between the structure of social networks and access to socioeconomic opportunity. We find that the diversity of individuals' relationships is strongly correlated with the economic development of communities."}
{"_id": "c928c2d99656b0e7b1c654c63e736edc7725b58a", "title": "Tracking \"gross community happiness\" from tweets", "text": "Policy makers are calling for new socio-economic measures that reflect subjective well-being, to complement traditional measures of material welfare as the Gross Domestic Product (GDP). Self-reporting has been found to be reasonably accurate in measuring one's well-being and conveniently tallies with sentiment expressed on social media (e.g., those satisfied with life use more positive than negative words in their Facebook status updates). Social media content can thus be used to track well-being of individuals. A question left unexplored is whether such content can be used to track well-being of entire physical communities as well. To this end, we consider Twitter users based in a variety of London census communities, and study the relationship between sentiment expressed in tweets and community socio-economic well-being. We find that the two are highly correlated: the higher the normalized sentiment score of a community's tweets, the higher the community's socio-economic well-being. This suggests that monitoring tweets is an effective way of tracking community well-being too."}
{"_id": "04f6189268a78f07b80cdb89b829079bf21a61b6", "title": "Modeling Dominance in Group Conversations Using Nonverbal Activity Cues", "text": "Dominance - a behavioral expression of power - is a fundamental mechanism of social interaction, expressed and perceived in conversations through spoken words and audiovisual nonverbal cues. The automatic modeling of dominance patterns from sensor data represents a relevant problem in social computing. In this paper, we present a systematic study on dominance modeling in group meetings from fully automatic nonverbal activity cues, in a multi-camera, multi-microphone setting. We investigate efficient audio and visual activity cues for the characterization of dominant behavior, analyzing single and joint modalities. Unsupervised and supervised approaches for dominance modeling are also investigated. Activity cues and models are objectively evaluated on a set of dominance-related classification tasks, derived from an analysis of the variability of human judgment of perceived dominance in group discussions. Our investigation highlights the power of relatively simple yet efficient approaches and the challenges of audiovisual integration. This constitutes the most detailed study on automatic dominance modeling in meetings to date."}
{"_id": "b58251c96564f42b1e484b9e624dd4e546e6f61b", "title": "Rule-Based Neural Networks for Classification and Probability Estimation", "text": "In this paper we propose a network architecture that combines a rule-based approach with that of the neural network paradigm. Our primary motivation for this is to ensure that the knowledge embodied in the network is explicitly encoded in the form of understandable rules. This enables the network's decision to be understood, and provides an audit trail of how that decision was arrived at. We utilize an information theoretic approach to learning a model of the domain knowledge from examples. This model takes the form of a set of probabilistic conjunctive rules between discrete input evidence variables and output class variables. These rules are then mapped onto the weights and nodes of a feedforward neural network resulting in a directly specified architecture. The network acts as parallel Bayesian classifier, but more importantly, can also output posterior probability estimates of the class variables. Empirical tests on a number of data sets show that the rule-based classifier performs comparably with standard neural network classifiers, while possessing unique advantages in terms of knowledge representation and probability estimation."}
{"_id": "5ea1fb4aa64f6aaba57a7ef29c716e8b4092166d", "title": "Wind Speed and Rotor Position Sensorless Control for Direct-Drive PMG Wind Turbines", "text": "This paper proposes a wind speed and rotor position sensorless control for wind turbines directly driving permanent magnetic generators (PMGs). A sliding-mode observer is designed to estimate the rotor position of the PMG by using the measured stator currents and the commanded stator voltages obtained from the control scheme of the machine-side converter of the PMG wind turbine. The rotor speed of the PMG (i.e., the turbine shaft speed) is estimated from its back electromotive force using a model adaptive reference system observer. Based on the measured output electrical power and estimated rotor speed of the PMG, the mechanical power of the turbine is estimated by taking into account the power losses of the wind turbine generator system. A back-propagation artificial neural network is then designed to estimate the wind speed in real time by using the estimated turbine shaft speed and mechanical power. The estimated wind speed is used to determine the optimal shaft speed reference for the PMG control system. Finally, a sensorless control is developed for the PMG wind turbines to continuously generate the maximum electrical power without using any wind speed or rotor position sensors. The validity of the proposed estimation and control algorithms are shown by simulation studies on a 3-kW PMG wind turbine and are further demonstrated by experimental results on a 300-W practical PMG wind turbine."}
{"_id": "08f410a5d6b2770e4630e3f90fb6f3e6b5bfc285", "title": "A Survey of Arabic Text Representation and Classification Methods", "text": "In this paper we have presented a brief current state of the Art for Arabic text representation and classification methods. First we describe some algorithms applied to classification on Arabic text. Secondly, we cite all major works when comparing classification algorithms applied on Arabic text, after this, we mention some authors who proposing new classification methods and finally we investigate the impact of preprocessing on Arabic TC."}
{"_id": "49929ca34d65302238bb5953f91f2879ea10eb58", "title": "Cloud-Based Big Data Management and Analytics for Scholarly Resources: Current Trends, Challenges and Scope for Future Research", "text": "With the shifting focus of organizations and governments towards digitization of academic and technical documents, there has been an increasing need to use this reserve of scholarly documents for developing applications that can facilitate and aid in better management of research. In addition to this, the evolving nature of research problems has made them essentially interdisciplinary. As a result, there is a growing need for scholarly applications like collaborator discovery, expert finding and research recommendation systems. This research paper reviews the current trends and identifies the challenges existing in the architecture, services and applications of big scholarly data platform with a specific focus on directions for future research."}
{"_id": "31e4e28e1c5b1f56ae84ad8962854821f77720eb", "title": "An effect analysis of industry 4.0 to higher education", "text": "From history to the present day, especially in the field of technology is provided a wide range of developments. The most important of these developments has achieved the realization of the industrial revolution, has led to play a leading role in the production is done by human beings. The effects on production of industry 4.0 emerging with this mind is very important. The qualified employees should be trained to make the necessary preparations for the company in this period, it is experiencing the fourth industrial revolution. Therefore, the impacts on higher education of industry 4.0 are examined in this paper. The importance in education of industry 4.0 has revealed with statistical data presented in this study."}
{"_id": "d8fc1a1a6506446e2fe1ad7cccde78358e53bb2c", "title": "Locating the fovea center position in digital fundus images using thresholding and feature extraction techniques", "text": "A new methodology for detecting the fovea center position in digital retinal images is presented in this paper. A pixel is firstly searched for within the foveal region according to its known anatomical position relative to the optic disc and vascular tree. Then, this pixel is used to extract a fovea-containing subimage on which thresholding and feature extraction techniques are applied so as to find fovea center. The methodology was evaluated on 1200 fundus images from the publicly available MESSIDOR database, 660 of which present signs of diabetic retinopathy. In 93.92% of these images, the distance between the methodology-provided and actual fovea center position remained below 1/4 of one standard optic disc radius (i.e., 17, 26, and 27 pixels for MESSIDOR retinas of 910, 1380 and 1455 pixels in size, respectively). These results outperform all the reviewed methodologies available in literature. Its effectiveness and robustness with different illness conditions makes this proposal suitable for retinal image computer analyses such as automated screening for early diabetic retinopathy detection."}
{"_id": "9baaec67e59de85dc0d5efec96d80be028be557d", "title": "Modular ATRON: modules for a self-reconfigurable robot", "text": "This paper describes the mechanical and electrical design of a new lattice based self-reconfigurable robot, called the ATRON. The ATRON system consists of several fully self-contained robot modules, each having their own processing power, power supply, sensors and actuators. The ATRON modules are roughly spheres with equatorial rotation. Each module can be connected to up to eight neighbors through four male and four female connectors. In this paper, we describe the realization of the design, both the mechanics and the electronics. Details on power sharing and power consumption is given. Finally, this paper includes a brief outline of our future work on the ATRON system."}
{"_id": "30f6e7715c1b808c38edee543796be257338979b", "title": "A performance comparison of polar codes and Reed-Muller codes", "text": "Polar coding is a code construction method that can be used to construct capacity-achieving codes for binary-input channels with certain symmetries. Polar coding may be considered as a generalization of Reed-Muller (RM) coding. Here, we demonstrate the performance advantages of polar codes over RM codes under belief-propagation decoding."}
{"_id": "0f25e83603f90ec259aae2c1a67c43658223ab49", "title": "Consumer Intention to Shop Online : B 2 C E-Commerce in Developing Countries", "text": "The development of dot com companies in 90s opened a new door of sales and revenue generation for the businesses world wide. The number of online shoppers increased dramatically within a very short span of time. While some people found it a convenient and sophisticated way of shopping, others remained reluctant to adopt this medium. Different factors are considered responsible for this variation in online behavior. Most of the researches on this topic are conducted in the developed countries so there is a need to study the phenomenon from the developing countries perspective. Based on existing literature on the topic a research model was developed which was further tested by means of a survey."}
{"_id": "a912747dc2bf694c3029aac3366dba8d9e33e066", "title": "Coordinated Charging of Plug-In Hybrid Electric Vehicles to Minimize Distribution System Losses", "text": "As the number of plug-in hybrid vehicles (PHEVs) increases, so might the impacts on the power system performance, such as overloading, reduced efficiency, power quality, and voltage regulation particularly at the distribution level. Coordinated charging of PHEVs is a possible solution to these problems. In this work, the relationship between feeder losses, load factor, and load variance is explored in the context of coordinated PHEV charging. From these relationships, three optimal charging algorithms are developed which minimize the impacts of PHEV charging on the connected distribution system. The application of the algorithms to two test systems verifies these relationships approximately hold independent of system topology. They also show the additional benefits of reduced computation time and problem convexity when using load factor or load variance as the objective function rather than system losses. This is important for real-time dispatching of PHEVs."}
{"_id": "6ef78fdb3c54a847d665d006cf812d69326e70ed", "title": "Analysis of PWM nonlinearity in non-inverting buck-boost power converters", "text": "This paper focuses on the non-inverting buck- boost converter operated as either buck or boost converter. It is shown that a pulse-width modulation (PWM) discontinuity around buck/boost mode transition can result in substantial increases in output voltage ripple. The effect of the PWM nonlinearity is studied using periodic steady state analysis to quantify the worst case ripple voltage in terms of design parameters. Furthermore, a bifurcation analysis shows that the PWM discontinuity leads to quasi- periodic route to chaos, which results in erratic operation around the buck/boost mode transition. The increased ripple is a very significant problem when the converter is used as a power supply for an RF power amplifier, as is the case in WCDMA handsets. An approach is proposed to remove the discontinuity which results in reduced output voltage ripple, at the expense of reduced efficiency, as demonstrated on an experimental prototype."}
{"_id": "bd386854f1acd751d45ac1a898131e5ccde86e88", "title": "A cortical\u2013hippocampal system for declarative memory", "text": "Recent neurobiological studies have begun to reveal the cognitive and neural coding mechanisms that underlie declarative memory \u2014 our ability to recollect everyday events and factual knowledge. These studies indicate that the critical circuitry involves bidirectional connections between the neocortex, the parahippocampal region and the hippocampus. Each of these areas makes a unique contribution to memory processing. Widespread high-order neocortical areas provide dedicated processors for perceptual, motor or cognitive information that is influenced by other components of the system. The parahippocampal region mediates convergence of this information and extends the persistence of neocortical memory representations. The hippocampus encodes the sequences of places and events that compose episodic memories, and links them together through their common elements. Here I describe how these mechanisms work together to create and re-create fully networked representations of previous experiences and knowledge about the world."}
{"_id": "4bd68866ba88bbc349c8e36ba98fe528699d0797", "title": "Plans and resource-bounded practical reasoning", "text": "An architecture for a rational agent must allow for means end reason ing for the weighing of competing alternatives and for interactions between these two forms of reasoning Such an architecture must also address the problem of resource boundedness We sketch a solution of the rst problem that points the way to a solution of the second In particular we present a high level speci cation of the practical reasoning component of an archi tecture for a resource bounded rational agent In this architecture a major role of the agent s plans is to constrain the amount of further practical reasoning she must perform Bratman Israel and Pollack"}
{"_id": "9aed0787a5abf4afd7bc4a9b1ae0bf1be1886aa4", "title": "A survey of autonomic communications", "text": "Autonomic communications seek to improve the ability of network and services to cope with unpredicted change, including changes in topology, load, task, the physical and logical characteristics of the networks that can be accessed, and so forth. Broad-ranging autonomic solutions require designers to account for a range of end-to-end issues affecting programming models, network and contextual modeling and reasoning, decentralised algorithms, trust acquisition and maintenance---issues whose solutions may draw on approaches and results from a surprisingly broad range of disciplines. We survey the current state of autonomic communications research and identify significant emerging trends and techniques."}
{"_id": "fe226df22c446980c942db1b64ad3c8adad512f4", "title": "Design of Hybrid Excited Synchronous Machine for Electrical Vehicles", "text": "This paper deals with the optimal design of a new synchronous motor particularly suitable for electric vehicles. The unique advantage of this motor is its possibility to realize field weakening at high speeds, which is the most important demand for such drives. The main design goal is to identify the dimensions of certain crucial regions of the machine in order to avoid saturation of the iron and to minimize the mass to volume ratio, while simultaneously maximizing the torque. The originality of the contribution is manifold: formulating the design problem in multiobjective terms, solving it by means of evolutionary computing, solving the associated direct problem based on a three-dimensional finite element analysis (3-D-FEA) field model, considering a reduced set of design variables by means of response surface method, referring to a real-life problem."}
{"_id": "c9bfcf161e4858d200366ee663acb2ab8a5c876d", "title": "Criteria-Based Requirements Prioritization for Software Product Management", "text": "Meeting stakeholders requirements and expectations becomes one of the critical aspects on which any software organization in market-driven environment focus on, and pays a lot of efforts and expenses to maximize the satisfaction of their stakeholders. Therefore identifying the software product release contents becomes one of the critical decisions for software product success. Requirements prioritization refers to that activity through which product releases contents that maximize stakeholder satisfaction can be identified. This makes it one of the most important component of software requirement decision support in incremental software development [8]. This paper illustrates the Criteria-Based requirement prioritization approach for software product management. The technique proposed on this paper designed based on the Hierarchical Cumulative Voting (HCV) and Value-Oriented Prioritization (VOP) techniques. The proposed technique, Value-Oriented HCV (VOHCV) can select the best candidate requirements for each release based on the stakeholders input values for each requirement. These values reflect the importance of each requirement in terms of associated anticipated cost, technical risk, relative impact, market-related aspects and perceived value to the stakeholder. The VOHCV inherits the strengths of the HCV and VOP techniques. It also provides a mechanism that takes different stakeholders\u2019 aspects into account while selecting the best candidate release requirement, to maximize stakeholders\u2019 value and satisfaction [11]."}
{"_id": "7ebfa8f1c92ac213ff35fa27287dee94ae5735a1", "title": "A Novel Transient Wrinkle Detection Algorithm and Its Application for Expression Synthesis", "text": "Because facial wrinkle is a representative feature of facial expression, automatic wrinkle detection has been an important and challenging topic for expression simulation, recognition, and animation. Recently, most works about wrinkle detection have focused on permanent wrinkles (e.g., age wrinkles), which are usually linear shapes, whereas the detection of transient wrinkles (e.g., expression wrinkles) has not been sufficiently studied because of their shape diversity and complexity. In this work, a novel algorithm for automatic detection of transient wrinkles with linear, fixed, and chaotic shapes is proposed, which largely consists of edge pair matching, active-appearance-model-based wrinkle structure location, and support-vector-machine-based wrinkle classification. The proposed wrinkle detector is applied for expression synthesis and an improved Poisson wrinkle mapping approach is proposed. Experimental results illustrate the competitiveness of the proposed wrinkle detector in detecting different transient wrinkles. Compared with state-of-the-art algorithms, the proposed approach yields complete and accurate wrinkle centers. The expression synthesized by the improved wrinkle mapping is also much more realistic."}
{"_id": "1b755f9db6b8c1d5030248bf9a13cdb86d7aae20", "title": "A survey on algorithms of hole filling in 3D surface reconstruction", "text": "The surface reconstruction of 3D objects has attracted more and more attention for its widespread application in many areas, such as computer science, cultural heritage restoration, medical facilities, entertainment. However, due to occlusion, reflectance, the scanning angle, raw data preprocessing, it is inevitable to lose some point data, which leads to holes in the reconstruction surface, making it undesirable for various applications. Therefore, methods for filling holes in the process of surface reconstruction are critical to the final results of reconstruction. This paper makes a survey of existing well-known hole-filling algorithms, classifies the algorithms into two main categories, analyzes and compares these algorithms from the viewpoints of theories and experimental results to make a clear introduction of their performance. At the end, the paper points out the possible development direction of hole filling in the future and hopes to be a good guide for other researchers."}
{"_id": "6c6ca490a19955b58317a49cbf85af4f4ae31682", "title": "Experiential Design of Shared Information Spaces", "text": "5."}
{"_id": "7c0c9ab92d49941089979c1e344fe66efc873bdd", "title": "Generative Adversarial Examples", "text": "Adversarial examples are typically constructed by perturbing an existing data point, and current defense methods are focused on guarding against this type of attack. In this paper, we propose a new class of adversarial examples that are synthesized entirely from scratch using a conditional generative model. We first train an Auxiliary Classifier Generative Adversarial Network (AC-GAN) to model the class-conditional distribution over inputs. Then, conditioned on a desired class, we search over the AC-GAN latent space to find images that are likely under the generative model and are misclassified by a target classifier. We demonstrate through human evaluation that this new kind of adversarial inputs, which we call Generative Adversarial Examples, are legitimate and belong to the desired class. Our empirical results on the MNIST, SVHN, and CelebA datasets show that generative adversarial examples can easily bypass strong adversarial training and certified defense methods which can foil existing adversarial attacks."}
{"_id": "f9f8dc979727e3a31c4cedcbdfad9523c28c009f", "title": "ORCHID : Thai Part-Of-Speech Tagged Corpus", "text": "This paper presents a procedure in building a Thai part-of-speech (POS) tagged corpus named ORCHID [1]. It is a collaboration project between Communications Research Laboratory (CRL) of Japan and National Electronics and Computer Technology Center (NECTEC) of Thailand. We proposed a new tagset based on the previous research on Thai parts-of-speech for using in a multi-lingual machine translation project. We marked the corpus in three levels:paragraph, sentence and word. The corpus keeps text information in text information line and numbering line which are necessary in retrieving process. Since there are no explicit word/sentence boundary, punctuation and inflection in Thai text, we have to separate a paragraph into sentences before tagging the POS. We applied a probabili stic trigram model for simultaneously word segmenting and POS tagging. Rule for syllable construction is additionally used to reduce the number of candidates for computing the probabili ty. The problems in POS assignment are formalized to reduce the ambiguity occurring in case of the similar POSs."}
{"_id": "62cd8203c11cb464e73b4b2be9619192222a5716", "title": "Low- and high-anxious hypermobile Ehlers\u2013Danlos syndrome patients: comparison of psychosocial and health variables", "text": "Despite the frequent co-ocurrence of hypermobile Ehler\u2013Danlos syndrome (hEDS) and pathological anxiety, little is known about the psychosocial and health implications of such comorbidity. Our aim was to explore the association between high levels of anxiety and psychosocial (catastrophizing, kinesiophobia, somatosensory amplification, social support and functioning), health (pain, fatigue, BMI, tobacco/alcohol use, depression, diagnosis delay, general health), and sociodemographic factors in people with hEDS. In this cross-sectional study, 80 hEDS patients were divided into two groups according to self-reported anxiety levels: low and high. Psychosocial, sociodemographic and health variables were compared between the groups. Forty-one participants reported a high level of anxiety (51.2%). No differences were found in the sociodemographic variables between high-anxious and low-anxious patients. The percentage of participants with severe fatigue and high depressive symptomatology was significantly higher in the high-anxious group (80.5 vs 56.4; 26.8 vs 12.8%, respectively). High-anxious hEDS patients also showed significantly higher levels of pain catastrophizing, somatosensory amplification as well as a poorer social functioning and general health. Multivariate analyses showed that somatosensory amplification, pain catastrophizing and poor social functioning are variables that increase the probability of belonging to the high-anxious group. Despite limitations, this first study comparing high-anxious versus low-anxious hEDS patients with respect to health aspects, highlight the importance of considering the psychosocial factors (many susceptible to modification), to improve the adjustment to this chronic condition and provide support to those affected through a biopsychosocial approach."}
{"_id": "d0827319adca59dda5a06cc943a9c43f5a25e169", "title": "Combining multispectral aerial imagery and digital surface models to extract urban buildings", "text": "This paper presents an automated classification of buildings in Coleraine, Northern Ireland. The classification was generated using very high spatial resolution data (10 cm) from a Digital Mapping Camera (DMC) for March 2009. The visible to near infrared (VNIR) bands of the DMC enabled a supervised classification to be performed to extract buildings from vegetation. A Digital Surface Model (DSM) was also created from the image to differentiate between buildings and other land classes with similar spectral profiles, such as roads. The supervised classification had the lowest classification accuracy (50%) while the DSM had an accuracy of 81%. The combination of the DSM and the supervised classification achieved an overall classification accuracy of 95%. Two spatial metrics (percentage of the landscape and number of patches) were also used to test the level of agreement between the classification and digitised building data. The results suggest that fine resolution multispectral aerial imagery can automatically detect buildings to a very high level of accuracy. Current space borne sensors, such as IKONOS and QuickBird, lag behind airborne sensors with VNIR bands provided at a much coarser spatial resolution (4m and 2.4m respectively). Techniques must be developed from current airborne sensors that can be applied to new space borne sensors in the future. The ability to generate DSMs from high resolution aerial imagery will afford new insights into the three-dimensional aspects of urban areas which will in turn inform future urban planning. (Received 19th August 2010; Revised 26th January 2011; Accepted 9th February 2011) ISSN 1744-5647 doi:10.4113/jom.2011.1152 51 Journal of Maps, 2011, 51-59 McNally, A.J.D. & McKenzie, S.J.P."}
{"_id": "61811fc14b159cbaadbdbc64b79ba652a49e6801", "title": "Suffocation using plastic bags: a retrospective study of suicides in Ontario, Canada.", "text": "One hundred and ten cases of suicidal suffocation using a plastic bag were identified in the files of the Office of the Chief Coroner of Ontario, Canada, between 1993 and 1997. The records were reviewed to determine the demographic characteristics of this group compared with all cases of suicide in Ontario, the scene information, autopsy findings and toxicology results. Most suicides occurred in people over 60 years of age, with older women making up a considerable proportion of cases as compared with other methods of suicide. In 40% of cases the deceased was suffering from a serious illness. Autopsy findings were usually minimal, with facial, conjunctival and visceral petechiae present in a minority of cases. One or more drugs were detected in the blood in 92.6% of cases where toxicologic testing was performed. Benzodiazepines, diphenhydramine and antidepressants were the most common drugs found, with diphenhydramine the most common drug present at an elevated concentration. Information at the scene from \"right to die\" societies was uncommon. One quarter of decedents took additional measures, besides the use of drugs or alcohol, to ensure the rapidity, certainty or comfort of their death. This study further elucidates the characteristics of this uncommon method of suicide. It emphasizes additional scene findings, such as the presence of dust masks, physical restraints and modification of the plastic bag that may be of use to death investigators in determining the correct manner of death."}
{"_id": "14b750a0fd5a13f7494e4abf9b97718ff558f508", "title": "Bugs as deviant behavior: a general approach to inferring errors in systems code", "text": "A major obstacle to finding program errors in a real system is knowing what correctness rules the system must obey. These rules are often undocumented or specified in an ad hoc manner. This paper demonstrates techniques that automatically extract such checking information from the source code itself, rather than the programmer, thereby avoiding the need for a priori knowledge of system rules. The cornerstone of our approach is inferring programmer \u201cbeliefs\u201d that we then cross-check for contradictions. Beliefs are facts implied by code: a dereference of a pointer, p, implies a belief that p is non-null, a call to \u201cunlock(l)\u201d implies that l was locked, etc. For beliefs we know the programmer must hold, such as the pointer dereference above, we immediately flag contradictions as errors. For beliefs that the programmer may hold, we can assume these beliefs hold and use a statistical analysis to rank the resulting errors from most to least likely. For example, a call to \u201cspin lock\u201d followed once by a call to \u201cspin unlock\u201d implies that the programmer may have paired these calls by coincidence. If the pairing happens 999 out of 1000 times, though, then it is probably a valid belief and the sole deviation a probable error. The key feature of this approach is that it requires no a priori knowledge of truth: if two beliefs contradict, we know that one is an error without knowing what the correct belief is. Conceptually, our checkers extract beliefs by tailoring rule \u201ctemplates\u201d to a system \u2013 for example, finding all functions that fit the rule template \u201c must be paired with .\u201d We have developed six checkers that follow this conceptual framework. They find hundreds of bugs in real systems such as Linux and OpenBSD. From our experience, they give a dramatic reduction in the manual effort needed to check a large system. Compared to our previous work [9], these template checkers find ten to one hundred times more rule instances and derive properties we found impractical to specify manually."}
{"_id": "56b9100895e9855c2d9e72f81bb5933ac8a5c17c", "title": "Scaling regression testing to large software systems", "text": "When software is modified, during development and maintenance, it is regression tested to provide confidence that the changes did not introduce unexpected errors and that new features behave as expected. One important problem in regression testing is how to select a subset of test cases, from the test suite used for the original version of the software, when testing a modified version of the software. Regression-test-selection techniques address this problem. Safe regression-test-selection techniques select every test case in the test suite that may behave differently in the original and modified versions of the software. Among existing safe regression testing techniques, efficient techniques are often too imprecise and achieve little savings in testing effort, whereas precise techniques are too expensive when used on large systems. This paper presents a new regression-test-selection technique for Java programs that is safe, precise, and yet scales to large systems. It also presents a tool that implements the technique and studies performed on a set of subjects ranging from 70 to over 500 KLOC. The studies show that our technique can efficiently reduce the regression testing effort and, thus, achieve considerable savings."}
{"_id": "d3c1106c4e16a915bed14c1f7bce8a0211d4b836", "title": "Predicting risk of software changes", "text": "Reducing the number of software failures is one of the most challenging problems of software production. We assume that software development proceeds as a series of changes and model the probability that a change to software will cause a failure. We use predictors based on the properties of a change itself. Such predictors include size in lines of code added, deleted, and unmodified; diffusion of the change and its component subchanges, as reflected in the number of files, modules, and subsystems touched, or changed; several measures of developer experience; and the type of change and its subchanges (fault fixes or new code). The model is built on historic information and is used to predict the risk of new changes. In this paper we apply the model to 5ESS\u00ae software updates and find that change diffusion and developer experience are essential to predicting failures. The predictive model is implemented as a Web-based tool to allow timely prediction of change quality. The ability to predict the quality of change enables us to make appropriate decisions regarding inspection, testing, and delivery. Historic information on software changes is recorded in many commercial software projects, suggesting that our results can be easily and widely applied in practice."}
{"_id": "0519437d7dfa393963ff1e61911302b716b2be93", "title": "Predicting Fault Incidence Using Software Change History", "text": "\u00d0This paper is an attempt to understand the processes by which software ages. We define code to be aged or decayed if its structure makes it unnecessarily difficult to understand or change and we measure the extent of decay by counting the number of faults in code in a period of time. Using change management data from a very large, long-lived software system, we explore the extent to which measurements from the change history are successful in predicting the distribution over modules of these incidences of faults. In general, process measures based on the change history are more useful in predicting fault rates than product metrics of the code: For instance, the number of times code has been changed is a better indication of how many faults it will contain than is its length. We also compare the fault rates of code of various ages, finding that if a module is, on the average, a year older than an otherwise similar module, the older module will have roughly a third fewer faults. Our most successful model measures the fault potential of a module as the sum of contributions from all of the times the module has been changed, with large, recent changes receiving the most weight. Index Terms\u00d0Fault potential, code decay, change management data, metrics, statistical analysis, generalized linear models."}
{"_id": "17ce0382776df1f8414555a9bb6f9cb35b791a17", "title": "Chianti: a tool for change impact analysis of java programs", "text": "This paper reports on the design and implementation of Chianti, a change impact analysis tool for Java that is implemented in the context of the Eclipse environment. Chianti analyzes two versions of an application and decomposes their difference into a set of atomic changes. Change impact is then reported in terms of affected (regression or unit) tests whose execution behavior may have been modified by the applied changes. For each affected test, Chianti also determines a set of affecting changes that were responsible for the test's modified behavior. This latter step of isolating the changes that induce the failure of one specific test from those changes that only affect other tests can be used as a debugging technique in situations where a test fails unexpectedly after a long editing session. We evaluated Chianti on a year (2002) of CVS data from M. Ernst's Daikon system, and found that, on average, 52% of Daikon's unit tests are affected. Furthermore, each affected unit test, on average, is affected by only 3.95% of the atomic changes. These findings suggest that our change impact analysis is a promising technique for assisting developers with program understanding and debugging."}
{"_id": "bbebd3b5e67abaf641f5994b014e306dbba9fa6a", "title": "Accurate dB-Linear Variable Gain Amplifier With Gain Error Compensation", "text": "This paper describes use of a novel exponential approximation for designing dB-linear variable gain amplifiers (VGAs). The exponential function is accurately generated using a simple error-compensation technique. The dB-linear gain is controlled linearly by the gate voltage, resulting in a simple and robust VGA. The proposed dB-linear VGA fabricated in a 65-nm CMOS process achieves a total variable gain range of 76 dB and dB-linear range greater than 50 dB with \u00b10.5-dB gain error. Under a 1.2-V supply voltage, the current consumption of the VGA is 1.8 mA and that of the output buffer is 1.4 mA. The input-referred in-band noise density is 3.5 nV/\u221a{Hz} and the in-band OIP3 is 11.5 dBm. Due to the very simple circuit topology, the total active area of the VGA and the output buffer is extremely small, 0.01 mm2."}
{"_id": "c3ee8642c596015f705eaefab326c558da1168ab", "title": "Goubau-Line Leaky-Wave Antenna for Wide-Angle Beam Scanning From Backfire to Endfire", "text": "A Goubau-line leaky-wave antenna (LWA) with a large scanning angle is presented in this letter. In contrast to the conventional Goubau-line leaky wave with a small scanning angle range, this letter employed a periodically bending Goubau line, which not only brings in a periodic perturbation for leaky-wave radiation, but also enhances the scanning range due to the increased delay for each line element. The simulation and experimental results show that the proposed LWA provides $90\\%$ radiation efficiency and 7\u201310 dBi radiation gain from backfire to endfire through broadside as frequency changes. The proposed antenna features good radiation performance and has a compact and low-profile configuration."}
{"_id": "2112172d79789cbe6175f9671604d02144655a86", "title": "Autonomous NAT Traversal", "text": "Traditional NAT traversal methods require the help of a third party for signalling. This paper investigates a new autonomous method for establishing connections to peers behind NAT. The proposed method for autonomous NAT traversal uses fake ICMP messages to initially contact the NATed peer. This paper presents how the method is supposed to work in theory, discusses some possible variations, introduces various concrete implementations of the proposed approach and evaluates empirical results of a measurement study designed to evaluate the efficacy of the idea in practice."}
{"_id": "1a27eba363239f2ec6487b337e4387394c6a28f0", "title": "Synthesis Design of a Wideband Bandpass Filter With Inductively Coupled Short-Circuited Multi-Mode Resonator", "text": "In this letter, a class of wideband bandpass filters (BPFs) with short-circuited multi-mode resonators (MMRs) is proposed and designed by a synthesis approach. The MMRs are formed by n sections of cascaded transmission lines, which are all set to be one quarter wavelength long with respect to the center frequency of the passband. Short-ended stubs are then added at the two ends of the MMR as inductive coupling elements. For the filter design, a transfer function is derived to characterize the frequency response of the proposed BPF, and it is enforced to apply to the equal-ripple in-band response. As such, the impedance value for each section is determined once the specifications are given. With this method, design curves of different orders and ripple factors are provided to aid the design of a class of wideband BPFs. As a design example, a fifth-order ultra-wide filter is designed, fabricated and measured in the end. The experimental results verify the design approach."}
{"_id": "4f22dc9084ce1b99bf171502174a992e502e32e1", "title": "A SELF-RATING DEPRESSION SCALE.", "text": ""}
{"_id": "c7bd6ff231f5ca6051ebfe9fac1ecf209868bff6", "title": "Inertial sensor-based knee flexion/extension angle estimation.", "text": "A new method for estimating knee joint flexion/extension angles from segment acceleration and angular velocity data is described. The approach uses a combination of Kalman filters and biomechanical constraints based on anatomical knowledge. In contrast to many recently published methods, the proposed approach does not make use of the earth's magnetic field and hence is insensitive to the complex field distortions commonly found in modern buildings. The method was validated experimentally by calculating knee angle from measurements taken from two IMUs placed on adjacent body segments. In contrast to many previous studies which have validated their approach during relatively slow activities or over short durations, the performance of the algorithm was evaluated during both walking and running over 5 minute periods. Seven healthy subjects were tested at various speeds from 1 to 5 mile/h. Errors were estimated by comparing the results against data obtained simultaneously from a 10 camera motion tracking system (Qualysis). The average measurement error ranged from 0.7 degrees for slow walking (1 mph) to 3.4 degrees for running (5 mph). The joint constraint used in the IMU analysis was derived from the Qualysis data. Limitations of the method, its clinical application and its possible extension are discussed."}
{"_id": "129707a3e577c8fe0f491a63ee628700874e3ed5", "title": "Suggesting accurate method and class names", "text": "Descriptive names are a vital part of readable, and hence maintainable, code. Recent progress on automatically suggesting names for local variables tantalizes with the prospect of replicating that success with method and class names. However, suggesting names for methods and classes is much more difficult. This is because good method and class names need to be functionally descriptive, but suggesting such names requires that the model goes beyond local context. We introduce a neural probabilistic language model for source code that is specifically designed for the method naming problem. Our model learns which names are semantically similar by assigning them to locations, called embeddings, in a high-dimensional continuous space, in such a way that names with similar embeddings tend to be used in similar contexts. These embeddings seem to contain semantic information about tokens, even though they are learned only from statistical co-occurrences of tokens. Furthermore, we introduce a variant of our model that is, to our knowledge, the first that can propose neologisms, names that have not appeared in the training corpus. We obtain state of the art results on the method, class, and even the simpler variable naming tasks. More broadly, the continuous embeddings that are learned by our model have the potential for wide application within software engineering."}
{"_id": "76e91f0c6936b654e0b53afa91e7d455570d4e9d", "title": "Predicting Facial Beauty without Landmarks", "text": "A fundamental task in artificial intelligence and computer vision is to build machines that can behave like a human in recognizing a broad range of visual concepts. This paper aims to investigate and develop intelligent systems for learning the concept of female facial beauty and producing human-like predictors. Artists and social scientists have long been fascinated by the notion of facial beauty, but study by computer scientists has only begun in the last few years. Our work is notably different from and goes beyond previous works in several aspects: 1) we focus on fully-automatic learning approaches that do not require costly manual annotation of landmark facial features but simply take the raw pixels as inputs; 2) our study is based on a collection of data that is an order of magnitude larger than that of any previous study; 3) we imposed no restrictions in terms of pose, lighting, background, expression, age, and ethnicity on the face images used for training and testing. These factors significantly increased the difficulty of the learning task. We show that a biologically-inspired model with multiple layers of trainable feature extractors can produce results that are much more human-like than the previously used eigenface approach. Finally, we develop a novel visualization method to interpret the learned model and revealed the existence of several beautiful features that go beyond the current averageness and symmetry hypotheses."}
{"_id": "82f6824b0070de354f7d528074c7b31d97af11b1", "title": "Distributed Hessian-Free Optimization for Deep Neural Network", "text": "Training deep neural network is a high dimensional and a highly non-convex optimization problem. In this paper, we revisit Hessian-free optimization method for deep networks with negative curvature direction detection. We also develop its distributed variant and demonstrate superior scaling potential to SGD, which allows more efficiently utilizing larger computing resources thus enabling large models and faster time to obtain desired solution. We show that these techniques accelerate the training process for both the standard MNIST dataset and also the TIMIT speech recognition problem, demonstrating robust performance with upto an order of magnitude larger batch sizes. This increased scaling potential is illustrated with near l inear speed-up on upto 32 CPU nodes for a simple 4-layer network."}
{"_id": "eedbae3bf07ea7cf0da9f14b2ae12975ac941b22", "title": "Deep Multi-User Reinforcement Learning for Distributed Dynamic Spectrum Access", "text": "We consider the problem of dynamic spectrum access for network utility maximization in multichannel wireless networks. The shared bandwidth is divided into $K$ orthogonal channels. In the beginning of each time slot, each user selects a channel and transmits a packet with a certain transmission probability. After each time slot, each user that has transmitted a packet receives a local observation indicating whether its packet was successfully delivered or not (i.e., ACK signal). The objective is a multi-user strategy for accessing the spectrum that maximizes a certain network utility in a distributed manner without online coordination or message exchanges between users. Obtaining an optimal solution for the spectrum access problem is computationally expensive, in general, due to the large-state space and partial observability of the states. To tackle this problem, we develop a novel distributed dynamic spectrum access algorithm based on deep multi-user reinforcement leaning. Specifically, at each time slot, each user maps its current state to the spectrum access actions based on a trained deep-Q network used to maximize the objective function. Game theoretic analysis of the system dynamics is developed for establishing design principles for the implementation of the algorithm. The experimental results demonstrate the strong performance of the algorithm."}
{"_id": "ea6e230f000b778b64c6febcad54b37731105c35", "title": "Multiple memory systems as substrates for multiple decision systems", "text": "It has recently become widely appreciated that value-based decision making is supported by multiple computational strategies. In particular, animal and human behavior in learning tasks appears to include habitual responses described by prominent model-free reinforcement learning (RL) theories, but also more deliberative or goal-directed actions that can be characterized by a different class of theories, model-based RL. The latter theories evaluate actions by using a representation of the contingencies of the task (as with a learned map of a spatial maze), called an \"internal model.\" Given the evidence of behavioral and neural dissociations between these approaches, they are often characterized as dissociable learning systems, though they likely interact and share common mechanisms. In many respects, this division parallels a longstanding dissociation in cognitive neuroscience between multiple memory systems, describing, at the broadest level, separate systems for declarative and procedural learning. Procedural learning has notable parallels with model-free RL: both involve learning of habits and both are known to depend on parts of the striatum. Declarative memory, by contrast, supports memory for single events or episodes and depends on the hippocampus. The hippocampus is thought to support declarative memory by encoding temporal and spatial relations among stimuli and thus is often referred to as a relational memory system. Such relational encoding is likely to play an important role in learning an internal model, the representation that is central to model-based RL. Thus, insofar as the memory systems represent more general-purpose cognitive mechanisms that might subserve performance on many sorts of tasks including decision making, these parallels raise the question whether the multiple decision systems are served by multiple memory systems, such that one dissociation is grounded in the other. Here we investigated the relationship between model-based RL and relational memory by comparing individual differences across behavioral tasks designed to measure either capacity. Human subjects performed two tasks, a learning and generalization task (acquired equivalence) which involves relational encoding and depends on the hippocampus; and a sequential RL task that could be solved by either a model-based or model-free strategy. We assessed the correlation between subjects' use of flexible, relational memory, as measured by generalization in the acquired equivalence task, and their differential reliance on either RL strategy in the decision task. We observed a significant positive relationship between generalization and model-based, but not model-free, choice strategies. These results are consistent with the hypothesis that model-based RL, like acquired equivalence, relies on a more general-purpose relational memory system."}
{"_id": "712ea226efcdca0105f1e40dcf4410e2057d72f9", "title": "Mass detection on mammogram images : A first assessment of deep learning techniques", "text": "Deep Learning approaches have gathered a lot of attention lately. In this work, we study their application to the breast cancer field, in particular for mass detection in mammograms. Several experiments were made on a real mammogram benchmark dataset. Deep Learning approaches were compared to other classification methodologies. It was concluded that, although useful, the implementation used does not outperform SVMs. Further study and adjustment of the method for this application is needed."}
{"_id": "ea4db92fe67c2a8d7d24e9941111eea6dee4e0a7", "title": "Open-set Plant Identification Using an Ensemble of Deep Convolutional Neural Networks", "text": "Open-set recognition, a challenging problem in computer vision, is concerned with identification or verification tasks where queries may belong to unknown classes. This work describes a fine-grained plant identification system consisting of an ensemble of deep convolutional neural networks, within an open-set identification framework. Two wellknown deep learning architectures of VGGNet and GoogLeNet, pretrained on the object recognition dataset of ILSVRC 2012, are fine-tuned using the plant dataset of LifeCLEF 2015. Moreover, GoogLeNet is finetuned using plant and non-plant images, to serve for rejecting samples from non-plant classes. Our systems have been evaluated on the test dataset of PlantCLEF 2016 by the campaign organizers and the best proposed model has achieved an official score of 0.738 in terms of the mean average precision, while the best official score was announced to be 0.742."}
{"_id": "df74f8ee027cf7050aade9f7bc541a526c5cf079", "title": "Real time hand pose estimation using depth sensors", "text": "This paper describes a depth image based real-time skeleton fitting algorithm for the hand, using an object recognition by parts approach, and the use of this hand modeler in an American Sign Language (ASL) digit recognition application. In particular, we created a realistic 3D hand model that represents the hand with 21 different parts. Random decision forests (RDF) are trained on synthetic depth images generated by animating the hand model, which are then used to perform per pixel classification and assign each pixel to a hand part. The classification results are fed into a local mode finding algorithm to estimate the joint locations for the hand skeleton. The system can process depth images retrieved from Kinect in real-time at 30 fps. As an application of the system, we also describe a support vector machine (SVM) based recognition module for the ten digits of ASL based on our method, which attains a recognition rate of 99.9% on live depth images in real-time1."}
{"_id": "50b3a59cef3fc66a9655094164714c90f80c51cc", "title": "A Microstrip Ultra-Wideband Bandpass Filter With Cascaded Broadband Bandpass and Bandstop Filters", "text": "This paper develops a novel ultra-wideband bandpass filter by cascading a broadband bandpass filter with another broadband bandstop filter. Properly selected impedances of transmission lines achieve broadband bandpass and bandstop filters and make independent designs possible. Detailed design and synthesis procedures are provided; moreover, agreement between measured and theoretically predicted results demonstrates feasibility of the proposed filter. Due to its simple structure, the ultra-wideband bandpass filter newly introduced in this paper is suitable for integration in the single-chipped circuit or implementation on printed circuit boards."}
{"_id": "d6f073762c744bff5fe7562936d3aae4c2f7b67d", "title": "A Review of Window Query Processing for Data Streams", "text": "In recent years, progress in hardware technology has resulted in the possibility of monitoring many events in real time. The volume of incoming data may be so large, that monitoring all individual data might be intractable. Revisiting any particular record can also be impossible in this environment. Therefore, many database schemes, such as aggregation, join, frequent pattern mining, and indexing, become more challenging in this context. This paper surveys the previous efforts to resolve these issues in processing data streams. The emphasis is on specifying and processing sliding window queries, which are supported in many stream processing engines. We also review the related work on stream query processing, including synopsis structures, plan sharing, operator scheduling, load shedding, and disorder control. Category: Ubiquitous computing"}
{"_id": "8bef87ef4ad3cc65238bf96433fe4d813aa243e0", "title": "Current-mode sensorless control of single-phase brushless DC fan motors", "text": "This paper proposes a new control scheme for the implementation of a low-cost and high efficiency sensorless speed control IC for single-phase brushless dc (BLDC) fan motors. The proposed control scheme detects the zero-crossing-point (ZCP) of the measured back-EMF to generate commutation signals without Hall sensor. A current mode soft switching scheme is used to smooth the current spikes induced by the reduction of the back-EMF when the rotor is crossing the commutation boundary. An adaptive blanking time control strategy is used to adjust the time interval to ensure correct detection of the ZCP. An open-loop constant Ampere/Hertz ramping control scheme is developed for the startup control from zero speed and switching to sensorless mode once the ZCP is detected. The proposed sensorless control scheme has been verified by using computer simulation based on a developed single-phase BLDC fan motor model. Experimental verification of the proposed control scheme has been carried out by using digital implementation technique. Experimental results reveal the superior performance of the proposed control scheme."}
{"_id": "ced52371b589900df5b54cf7d71f57904d1e0fba", "title": "High-Power Pulse Generator With Flexible Output Pattern", "text": "This paper presents a high-voltage bipolar rectangular pulse generator using a solid-state boosting front-end and an H-bridge output stage. The topology generates rectangular pulses with fast enough rise time and allows easy step-up input voltage. In addition, the circuit is able to adjust positive or negative pulsewidth, dead time between two pulses, and operating frequency. The topology can also be controlled to produce unipolar pulses and other pulse patterns without changing its configuration. With an appropriate dc source, the output voltage can also be adjusted to requirements of different applications. The intended application for such a circuit is algal cell membrane rupture for oil extraction, although additional applications, include biotechnology and plasma sciences, medicine, and food industry. A 1 kV/200 A bipolar solid-state pulse generator was fabricated to validate the theoretical analysis presented in this paper. In addition, to validate the analysis with simulations and prototype tests, biological test were conducted in order to examine the technical value of the proposed circuit. These evaluations seem to suggest that oil production rate from bipolar pulses may double that of an equivalent process with unipolar pulses."}
{"_id": "cb745fd78fc7613f95bf5bed1fb125d2e7e39708", "title": "X CLAIM : Interoperability with Cryptocurrency-Backed Tokens ?", "text": "Building trustless cross-blockchain trading protocols is challenging. Therefore, centralized liquidity providers remain the preferred route to execute transfers across chains \u2014 which fundamentally contradicts the purpose of permissionless ledgers to replace trusted intermediaries. Enabling crossblockchain trades could not only enable currently competing blockchain projects to better collaborate, but seems of particular importance to decentralized exchanges as those are currently limited to the trade of digital assets within their respective blockchain ecosystem. In this paper we systematize the notion of cryptocurrencybacked tokens, an approach towards trustless cross-chain communication. We propose XCLAIM, a protocol for issuing, trading, and redeeming e.g. Bitcoin-backed tokens on Ethereum. We provide implementations for three possible protocol versions and evaluate their security and on-chain costs. With XCLAIM, it costs at most USD 1.17 to issue an arbitrary amount of Bitcoinbacked tokens on Ethereum, given current blockchain transaction fees. Our protocol requires no modifications to Bitcoin\u2019s and Ethereum\u2019s consensus rules and is general enough to support other cryptocurrencies."}
{"_id": "5edb3151747d0d323ca9f071d8366b0779ab99e2", "title": "Reading comprehension and its underlying components in second-language learners: A meta-analysis of studies comparing first- and second-language learners.", "text": "We report a systematic meta-analytic review of studies comparing reading comprehension and its underlying components (language comprehension, decoding, and phonological awareness) in first- and second-language learners. The review included 82 studies, and 576 effect sizes were calculated for reading comprehension and underlying components. Key findings were that, compared to first-language learners, second-language learners display a medium-sized deficit in reading comprehension (pooled effect size d = -0.62), a large deficit in language comprehension (pooled effect size d = -1.12), but only small differences in phonological awareness (pooled effect size d = -0.08) and decoding (pooled effect size d = -0.12). A moderator analysis showed that characteristics related to the type of reading comprehension test reliably explained the variation in the differences in reading comprehension between first- and second-language learners. For language comprehension, studies of samples from low socioeconomic backgrounds and samples where only the first language was used at home generated the largest group differences in favor of first-language learners. Test characteristics and study origin reliably contributed to the variations between the studies of language comprehension. For decoding, Canadian studies showed group differences in favor of second-language learners, whereas the opposite was the case for U.S. studies. Regarding implications, unless specific decoding problems are detected, interventions that aim to ameliorate reading comprehension problems among second-language learners should focus on language comprehension skills."}
{"_id": "708f339a2d7a5bb827149448fd3b37385ba9b873", "title": "Exploring the factors associated with Web site success in the context of electronic commerce", "text": "Web sites are being widely deployed commercially. As the widespread use and dependency on Web technology increases, so does the need to assess factors associated with Web site success. The objective is to explore these factors in the context of electronic commerce (EC). The research framework was derived from information systems and marketing literature. Webmasters from Fortune 1000 companies were used as the target group for a survey. Four factors that are critical to Web site success in EC were identi\u00aeed: (1) information and service quality, (2) system use, (3) playfulness, and (4) system design quality. An analysis of the data provides valuable managerial implications for Web site success in the context of electronic commerce. # 2000 Elsevier Science B.V. All rights reserved."}
{"_id": "df805da2bb2a7e830b615636ee7cd22368a63563", "title": "Web Site Usability, Design, and Performance Metrics", "text": "Websites provide the key interface for consumer use of the Internet. This research reports on a series of three studies that develop and validate Web site usability, design and performance metrics, including download delay, navigability, site content, interactivity, and responsiveness. The performance metric that was developed includes the subconstructs user satisfaction, the likelihood of return, and the frequency of use. Data was collected in 1997, 1999, and 2000 from corporate Web sites via three methods, namely, a jury, third-party ratings, and a software agent. Significant associations betweenWeb site design elements and Web site performance indicate that the constructs demonstrate good nomological validity. Together, the three studies provide a set of measures with acceptable validity and reliability. The findings also suggest lack of significant common methods biases across the jury-collected data, third-party data, and agent-collected data. Results suggest that Web site success is a first-order construct. Moreover, Web site success is significantly associated with Web site download delay (speed of access and display rate within the Web site), navigation (organization, arrangement, layout, and sequencing), content (amount and variety of product information), interactivity (customization and interactivity), and responsiveness (feedback options and FAQs). (e-Commerce, Web Metrics, or Measurement;Web Site Usability;Design and Performance Constructs; Construct Validity; Nomological Validity)"}
{"_id": "0b990a9c6000b80dc00b69b68f6091844b898215", "title": "Marketing in hypermedia computer-mediated environment: Conceptual foundations", "text": "This paper addresses the role of marketing in hypermedia computer-mediated environments (CMEs). Our approach considers hypermedia CMEs to be large-scale (i.e. national or global) networked environments, of which the World Wide Web on the Internet is the first and current global implementation. We introduce marketers to this revolutionary new medium, and propose two structural models of consumer behavior in a CME. Then we examine the set of consequent testable research propositions and marketing implications that flow from the models. Marketing in Hypermedia Computer-Mediated Environments: Conceptual Foundations 1) Introduction Firms communicate with their customers through various media. Traditionally, these media follow a passive one-to-many communication model whereby a firm reaches many current and potential customers, segmented or not, through marketing efforts that allow only limited forms of feedback on the part of the customer. For several years now, a revolution has been developing that is dramatically altering this traditional view of advertising and communication media. This revolution is the Internet, the massive global network of interconnected packet-switched computer networks, and as a new marketing medium, has the potential to radically change the way firms do business with their customers. The Internet operationalizes a model of distributed computing that facilitates interactive multimedia many-to-many communication. As such, the Internet supports discussion groups (e.g. USENET news and moderated and unmoderated mailing lists), multi-player games and communications systems (e.g. MUDs, irc, chat, MUSEs), file transfer, electronic mail, and global information access and retrieval systems (e.g. archie, gopher, and the World Wide Web). The business implications of this model \"[where] the engine of democratization sitting on so many desktops is already out of control, is already creating new players in a new game\" (Carroll 1994), will be played out in as yet unknown ways for years to come. This paper is concerned with the marketing implications of commercializing hypermedia computer-mediated environments (CMEs), of which the World Wide Web (Berners-Lee et. al. 1992, 1993) on the Internet is the first and current networked global implementation. While we provide a formal definition subsequently, at this point we informally define a hypermedia CME as a distributed computer network used to access and provide hypermedia content (i.e., multimedia content connected across the network with hypertext links). Though other CMEs are relevant to marketers, including private bulletin board systems (Bunch 1994); public conferencing systems such as the WELL (Figallo 1993; Rheingold 1992, 1993) and ECHO; and commercial online services such as America On-Line, Prodigy, and CompuServe, we restrict our current focus to marketing activities in hypermedia CMEs accessible via the \"Web\" on the Internet. The Internet is an important focus for marketers because consumers and firms are conducting business on the Internet in proportions that dwarf the commercial provider base of the other CMEs combined. There are over 21,700 commercial Internet addressess (Verity and Hof 1994), and an increasing percentage of these commercial addresses are providing Web services. 
As of December 28, 1994, 1465 firms were listed in Open Market\u2019s (1994) directory of \"Commercial Services on the Net,\" and there were 6370 entries in the \"Business/Corporations\" directory of the Yahoo Guide to WWW (Filo and Yang 1994). The central thesis driving this research is that hypermedia CMEs, such as but not limited to the World Wide Web on the Internet, require the development and application of new marketing concepts and models. This is because hypermedia CMEs possess unique characteristics, including machine-interactivity, telepresence, hypermedia, and network navigation, which distinguish them from traditional media and some interactive multimedia, on which conventional concepts and models are based. Hoffman & Novak (1995), \"Marketing in Hypermedia CMEs: Conceptual Foundations\" page 1"}
{"_id": "3073eda62f8391db0e695acb69bcb8c68b34c7b4", "title": "Data Integration: After the Teenage Years", "text": "The field of data integration has expanded significantly over the years, from providing a uniform query and update interface to structured databases within an enterprise to the ability to search, ex- change, and even update, structured or unstructured data that are within or external to the enterprise. This paper describes the evolution in the landscape of data integration since the work on rewriting queries using views in the mid-1990's. In addition, we describe two important challenges for the field going forward. The first challenge is to develop good open-source tools for different components of data integration pipelines. The second challenge is to provide practitioners with viable solutions for the long-standing problem of systematically combining structured and unstructured data."}
{"_id": "8ebbfae66b020d6927784d2e59fdc2ddf3585cd3", "title": "A synaptically controlled, associative signal for Hebbian plasticity in hippocampal neurons.", "text": "The role of back-propagating dendritic action potentials in the induction of long-term potentiation (LTP) was investigated in CA1 neurons by means of dendritic patch recordings and simultaneous calcium imaging. Pairing of subthreshold excitatory postsynaptic potentials (EPSPs) with back-propagating action potentials resulted in an amplification of dendritic action potentials and evoked calcium influx near the site of synaptic input. This pairing also induced a robust LTP, which was reduced when EPSPs were paired with non-back-propagating action potentials or when stimuli were unpaired. Action potentials thus provide a synaptically controlled, associative signal to the dendrites for Hebbian modifications of synaptic strength."}
{"_id": "8ff35ce40d89b02bff71ed03da24bbd3ce383df3", "title": "Screening for trisomy 18 by maternal age, fetal nuchal translucency, free beta-human chorionic gonadotropin and pregnancy-associated plasma protein-A.", "text": "OBJECTIVES\nTo derive a model and examine the performance of first-trimester screening for trisomy 18 by maternal age, fetal nuchal translucency (NT) thickness, and maternal serum free beta-human chorionic gonadotropin (beta-hCG) and pregnancy-associated plasma protein-A (PAPP-A).\n\n\nMETHODS\nProspective combined screening for trisomy 21 was performed at 11 + 0 to 13 + 6 weeks in 56 893 singleton pregnancies, including 56 376 cases of euploid fetuses, 395 with trisomy 21 and 122 with trisomy 18. The measured free beta-hCG and PAPP-A were converted into a multiple of the median (MoM) and then into likelihood ratios (LR). Similarly, the measured NT was transformed into LRs using the mixture model of NT distributions. In each case the LRs for NT and the biochemical markers were multiplied by the age and gestation-related risk to derive the risk for trisomy 21 and trisomy 18. Detection rates (DRs) and false-positive rates (FPRs) were calculated by taking the proportions with risks above a given risk threshold.\n\n\nRESULTS\nIn screening with the algorithm for trisomy 21, at a FPR of 3%, the estimated DRs of trisomies 21 and 18 were 89% and 82%, respectively. The use of an algorithm for trisomy 18 identified 93% of affected fetuses at a FPR of 0.2%. When the algorithm for trisomy 21 was used and screen positivity was fixed at a FPR of 3%, and in addition the algorithm for trisomy 18 was used and screen positivity was fixed at a FPR of 0.2%, the overall FPR was 3.1% and the DRs of trisomies 21 and 18 were 90% and 97%, respectively.\n\n\nCONCLUSIONS\nA beneficial side effect of first-trimester combined screening for trisomy 21 is the detection of a high proportion of fetuses with trisomy 18. If an algorithm for trisomy 18 in addition to the one for trisomy 21 is used, more than 95% of trisomy 18 fetuses can be detected with a minor increase of 0.1% in the overall FPR."}
{"_id": "b2acf82e565826149af5ea291261a498c05215eb", "title": "Classification and adulteration detection of vegetable oils based on fatty acid profiles.", "text": "The detection of adulteration of high priced oils is a particular concern in food quality and safety. Therefore, it is necessary to develop authenticity detection method for protecting the health of customers. In this study, fatty acid profiles of five edible oils were established by gas chromatography coupled with mass spectrometry (GC/MS) in selected ion monitoring mode. Using mass spectral characteristics of selected ions and equivalent chain length (ECL), 28 fatty acids were identified and employed to classify five kinds of edible oils by using unsupervised (principal component analysis and hierarchical clustering analysis), supervised (random forests) multivariate statistical methods. The results indicated that fatty acid profiles of these edible oils could classify five kinds of edible vegetable oils into five groups and are therefore employed to authenticity assessment. Moreover, adulterated oils were simulated by Monte Carlo method to establish simultaneous adulteration detection model for five kinds of edible oils by random forests. As a result, this model could identify five kinds of edible oils and sensitively detect adulteration of edible oil with other vegetable oils about the level of 10%."}
{"_id": "a4ca2f47bebd8762b34da074e5f638d96583d9de", "title": "The active layer morphology of organic solar cells probed with grazing incidence scattering techniques.", "text": "Grazing incidence X-ray scattering (GIXS) provides unique insights into the morphology of active materials and thin film layers used in organic photovoltaic devices. With grazing incidence wide angle X-ray scattering (GIWAXS) the molecular arrangement of the material is probed. GIWAXS is sensitive to the crystalline parts and allows for the determination of the crystal structure and the orientation of the crystalline regions with respect to the electrodes. With grazing incidence small angle X-ray scattering (GISAXS) the nano-scale structure inside the films is probed. As GISAXS is sensitive to length scales from nanometers to several hundred nanometers, all relevant length scales of organic solar cells are detectable. After an introduction to GISAXS and GIWAXS, selected examples for application of both techniques to active layer materials are reviewed. The particular focus is on conjugated polymers, such as poly(3-hexylthiophene) (P3HT)."}
{"_id": "1c47e12744c82d7ca658732e1e272b409f464440", "title": "Medial temporal lobe activations in fMRI and PET studies of episodic encoding and retrieval.", "text": "Early neuroimaging studies often failed to obtain evidence of medial temporal lobe (MTL) activation during episodic encoding or retrieval, but a growing number of studies using functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) have provided such evidence. We review data from fMRI studies that converge on the conclusion that posterior MTL is associated with episodic encoding; too few fMRI studies of retrieval have reported MTL activations to allow firm conclusions about their exact locations. We then turn to a recent meta-analysis of PET studies (Lepage et al., Hippocampus 1998;8:313-322) that appears to contradict the fMRI encoding data. Based on their analysis of the rostrocaudal distribution of activations reported during episodic encoding or retrieval, Lepage et al. (1998) concluded that anterior MTL is strongly associated with episodic encoding, whereas posterior MTL is strongly associated with episodic retrieval. After considering the evidence reviewed by Lepage et al. (1998) along with additional studies, we conclude that PET studies of encoding reveal both anterior and posterior MTL activations. These observations indicate that the contradiction between fMRI and PET studies of encoding was more apparent than real. However, PET studies have reported anterior MTL encoding activations more frequently than have fMRI studies. We consider possible sources of these differences."}
{"_id": "b04bb8fe1ebdaab87e0c3352823644f64467ec47", "title": "Dynamic simultaneous fare proration for large-scale network revenue management", "text": "Network revenue management is concerned with managing demand for products that require inventory from one or several resources by controlling product availability and/or prices in order to maximize expected revenues subject to the available resource capacities. One can tackle this problem by decomposing it into resource-level subproblems that can be solved efficiently, e.g. by dynamic programming (DP). We propose a new dynamic fare proration method specifically having large-scale applications in mind. It decomposes the network problem by fare proration and solves the resource-level dynamic programs simultaneously using simple, endogenously obtained dynamic marginal capacity value estimates to update fare prorations over time. An extensive numerical simulation study demonstrates that the method results in tightened upper bounds on the optimal expected revenue, and that the obtained policies are very effective with regard to achieved revenues and required runtime."}
{"_id": "51937487039de8d10e93be941bb1d9a8c7e2de9c", "title": "IP covert timing channels: design and detection", "text": "A network covert channel is a mechanism that can be used to leak information across a network in violation of a security policy and in a manner that can be difficult to detect. In this paper, we describe our implementation of a covert network timing channel, discuss the subtle issues that arose in its design, and present performance data for the channel. We then use our implementation as the basis for our experiments in its detection. We show that the regularity of a timing channel can be used to differentiate it from other traffic and present two methods of doing so and measures of their efficiency. We also investigate mechanisms that attackers might use to disrupt the regularity of the timing channel, and demonstrate methods of detection that are effective against them."}
{"_id": "041b31bfc95ba05789bf0af91244c6adde584413", "title": "Education, Signaling and Mismatch", "text": "We assess the importance education as a signal of workers\u0092skills and the e\u00a4ects of poor signaling quality on labor market outcomes. We do so by merging a frictional labor market model with a signaling setup where there is a privately observed idiosyncratic component in the cost of education. Given that highly skilled workers cannot correctly signal their abilities, their wages will be lower and they will not be matched to the \"right\" vacancies, or may be unemployed. Skilled workers will then have lower incentives to move to high productivity markets. Furthermore, fewer vacancies will be created in labor markets where skills matter, and incentives for workers to invest in education will be lower. Overall, an economy where education is a noisier signal generates lower educational attainment, higher unemployment and lower productivity. In addition, we provide evidence suggesting that education plays a poor signaling role in Latin American countries. We then calibrate our model using Peruvian data, and through a quantitative exercise we show that this mechanism could be relevant to explain the relatively bad performance of labor markets in Latin American countries. 1E-mail: larozamena@utdt.edu, hru\u00a4o@utdt.edu. We are grateful to Fernando \u00c1lvarez Parra for his comments and suggestions."}
{"_id": "24b3785a3ffae660a60e9ac4eb104b8adfc5871b", "title": "A Fuzzy Logic Controller for Autonomous Wheeled Vehicles", "text": "Autonomous vehicles have potential applications in many fields, such as replacing humans in hazardous environments, conducting military missions, and performing routine tasks for industry. Driving ground vehicles is an area where human performance has proven to be reliable. Drivers typically respond quickly to sudden changes in their environment. While other control techniques may be used to control a vehicle, fuzzy logic has certain advantages in this area; one of them is its ability to incorporate human knowledge and experience, via language, into relationships among the given quantities. Fuzzy logic controllers for autonomous vehicles have been successfully applied to address various (and sometimes simultaneous) navigational issues,including: \u2022 reaching a static target (Baturone, et al., 2004, Boada, et al., 2005, El Hajjaji & Bentalba, 2003, Maeda et al., 1991, Chen & Ozguner, 2005), \u2022 tracking moving targets (Ollero, et al., 2001), \u2022 maintaining stable vehicular velocity (Holzmann et al., 1998, Nobe & Wang, 2001), \u2022 following a lane or a wall (Rosa & Garcia-Alegre, 1990, Hessburg & Tomizuka, 1994, Peng & Tomizuka, 1993) and, \u2022 avoiding collision with static and dynamic obstacles (Baturone, et al., 2004, Murphy, 2001, Godjevac, et al., 2001, Seraji, 2005, Lee & Wang, 1994, Wheeler & Shoureshi, 1994, Ye & Wang, 2001). Several researchers combined fuzzy logic controllers with various learning techniques, such as: \u2022 supervised learning method (Godjevac, 2001), \u2022 evolutionary method (Hoffman, 2001, Kim, et al., 2001), \u2022 neural network (Pasquier, et al., 2001, Cang, et al., 2003), \u2022 reinforcement learning (Dai, et al., 2005) and, \u2022 optimization methods (Hong, 1997, Sanchez, et al., 1999)."}
{"_id": "7f1f24f8f003bd3536647690f048e90560c26b79", "title": "A Case Study on User Experience (UX) Evaluation of Mobile Augmented Reality Prototypes", "text": "Mobile Augmented Reality (MAR) blends the real world with digital objects especially in ubiquitous devices such as smartphones. The MAR applications provide an intelligent interface for users. In this, valuable digital information is advertised in physical spaces. However, the success of these applications is tied directly to the degree of user acceptance. This makes understanding the needs and expectations of the MAR\u2019s potential users of paramount importance for designing and building the proper application. The objective of the paper is to expose an important gap in the development of novel applications in the virtual world. Previous research has shown that it is essential to study and understand the needs and expectations of the potential users of the upcoming application or system. Studying user needs and expectations before offering the developed application ensures a minimum level of acceptance and, of course, success. This paper presents a detailed study comprising of a userexperience (UX) evaluation of different prototypes through the use of three different UX evaluation methods. This kind of evaluation allows new developments to offer systems, which do not fail. The main contributions of this study are that it: 1) solicits expectations when consumers use MAR applications, 2) assesses the UX over different prototypes using three different metrics, 3) provides methodological insights on UX evaluation experiments and, 4) is useful for anyone who wants to develop handheld applications after understanding user expectations and how his experience should progress. The results of the study show that users value concreteness, realizability, personalization, novelty, intuitiveness and the usefulness of presented information. Paying attention to these factors can help develop more acceptable MAR applications and lead to more novel future designs."}
{"_id": "6ccb4bec7491333d6323f4732aef389e9deb6d27", "title": "A Computational Analysis of Mahabharata", "text": "Indian epics have not been analyzed computationally to the extent that Greek epics have. In this paper, we show how interesting insights can be derived from the ancient epic Mahabharata by applying a variety of analytical techniques based on a combination of natural language processing, sentiment/emotion analysis and social network analysis methods. One of our key findings is the pattern of significant changes in the overall sentiment of the epic story across its eighteen chapters and the corresponding characterization of the primary protagonists in terms of their sentiments, emotions, centrality and leadership attributes in the epic saga."}
{"_id": "e85b25b092e6d7f5cd372338feb3126706b7c3f0", "title": "Looking Beyond the Simple Scenarios: Combining Learners and Optimizers in 3D Temporal Tracking", "text": "3D object temporal trackers estimate the 3D rotation and 3D translation of a rigid object by propagating the transformation from one frame to the next. To confront this task, algorithms either learn the transformation between two consecutive frames or optimize an energy function to align the object to the scene. The motivation behind our approach stems from a consideration on the nature of learners and optimizers. Throughout the evaluation of different types of objects and working conditions, we observe their complementary nature \u2014 on one hand, learners are more robust when undergoing challenging scenarios, while optimizers are prone to tracking failures due to the entrapment at local minima; on the other, optimizers can converge to a better accuracy and minimize jitter. Therefore, we propose to bridge the gap between learners and optimizers to attain a robust and accurate RGB-D temporal tracker that runs at approximately 2 ms per frame using one CPU core. Our work is highly suitable for Augmented Reality (AR), Mixed Reality (MR) and Virtual Reality (VR) applications due to its robustness, accuracy, efficiency and low latency. Aiming at stepping beyond the simple scenarios used by current systems, often constrained by having a single object in the absence of clutter, averting to touch the object to prevent close-range partial occlusion or selecting brightly colored objects to easily segment them individually, we demonstrate the capacity to handle challenging cases under clutter, partial occlusion and varying lighting conditions."}
{"_id": "bc1aa3af96192db18426d3ccc950c197311bfa73", "title": "A Machine Learning-based Forensic Discriminator of Pornographic and Bikini Images", "text": "The increased use of microcomputers and smart-phones has contributed to social progress, but it has also facilitated the exchange of illegal files, such as child pornography photographs, increasing the demand for digital forensic examination in these devices, which can store more than 300,000 images. Based on the large number of files to be analyzed, it is necessary to use capable algorithms to perform this detection, especially in the most challenging scenarios, such as the distinction between pornographic and bikini images. In this work, we present an approach that improves the ``Algorithm for detection of Nudity proposed by Ap-Apid. Our method improved the performance of this algorithm by using machine learning in extracted features from detected skin regions instead of using static rules to classify this kind of images. In addition, we used detected faces as features in order to mitigate false positives in portrait photos. For conducting the training, validation and testing phases, we used the AIIA-PID4 pornographic data set. Furthermore, we also present a statistical analysis by comparing our approach to two algorithms, one based on the \u201cAlgorithm for detection of Nudity\u201d and another proposed by the AIIA-PID4 pornographic data set author. The experimental results showed that we achieved an accuracy of 96.96% and 94.94 in the F1-score metric, increasing the accuracy by 79.19% and 18.21% compared to the referred works, respectively."}
{"_id": "52b6d0c61fe83bf4ee1f74307e9a1aa9e60bf19e", "title": "Mobile technology habits:\u00a0patterns of association among device usage, intertemporal preference, impulse control, and reward sensitivity.", "text": "Mobile electronic devices are playing an increasingly pervasive role in our daily activities. Yet, there has been very little empirical research investigating how mobile technology habits might relate to individual differences in cognition and affect. The research presented in this paper provides evidence that heavier investment in mobile devices is correlated with a relatively weaker tendency to delay gratification (as measured by a delay discounting task) and a greater inclination toward impulsive behavior (i.e., weaker impulse control, assessed behaviorally and through self-report) but is not related to individual differences in sensitivity to reward. Analyses further demonstrated that individual variation in impulse control mediates the relationship between mobile technology usage and delay of gratification. Although based on correlational results, these findings lend some backing to concerns that increased use of portable electronic devices could have negative impacts on impulse control and the ability to appropriately valuate delayed rewards."}
{"_id": "169476ccd90c41054a78a38192b1138599f4ddc0", "title": "Model predictive control for three-level boost converter in photovoltaic systems", "text": "In this paper, a three-level Boost (TLB) converter maximum power point tracking (MPPT) control strategy for a two-stage photovoltaic (PV) system based on model predictive control (MPC) is proposed. This method realizes fast and precise control through establishment of a prediction model and minimization of a cost function. In the work, first a three-level predictive model for the boost converter is extracted, then a predictive control algorithm using the Perturb and Observe (P&O) method is proposed, and subsequently voltage balance of the neutral point of the converter is added to the control loop. Based on the proposed method, since the inverter is not obligated to control the DC-link neutral point voltage, its modulation can be used to realize other controls on the AC side to improve grid operation. Results of the control scheme have been verified through Matlab / Simulink, and the results have been in agreement with the expectations."}
{"_id": "97902bc7485dc991200848094a69f44c56458b7a", "title": "Comparison of Leaf Recognition by Moments and Fourier Descriptors", "text": "We test various features for recognition of leaves of wooden species. We compare Fourier descriptors, Zernike moments, Legendre moments and Chebyshev moments. All the features are computed from the leaf boundary only. Experimental evaluation on real data indicates that Fourier descriptors slightly outperform the other tested features."}
{"_id": "abcf3c4232f9d4eb8d0161b32fde2f5624fb97ad", "title": "Using Distributed Scrum for Supporting an Online Community-A Qualitative Descriptive Study of Students \u2019 Perceptions", "text": "One purpose of higher education is to prepare students for a modern and ever-changing global society characterized by increasing complexity and collaborative environments. Scrum is an agile, widely used framework for project management dealing with the development of complex products. Scrum projects are conducted in small, empowered teams with intense communication, interaction and collaboration between the team members, facilitated by a servant-leader Scrum master. Scrum has been commonly used in professional software development and is also now being adopted in other areas, including education. There have been few studies of the application of Scrum in higher education and very few of them have studied distributed Scrum in an online context. An online learning community has several positive effects for students such as increased learning, engagement, retention and lower risks for isolation and dropouts. Participating in and contributing to a team is dependent on a sense of community, which can be difficult to build up in a distributed environment where members are geographically dispersed and do not have the possibility to meet and communicate face to face. This study examines to what extent and how distributed Scrum can support building an online learning community, from a student perspective. Twenty students, enrolled in an online course in distributed software development, participated in four Scrum projects as members of distributed Scrum teams, each team consisting of five students. Students\u2019 perceptions were investigated by conducting semi-structured interviews. The interview transcripts were analyzed according to Rovai\u2019s four dimensions of a classroom community. The results indicate that students were very satisfied with their distributed Scrum projects and that they experienced a high degree of flexibility during the projects. The Scrum process promoted and initiated communication and interaction among students and they learned how to communicate and collaborate effectively in an online environment. The transparency in Scrum was perceived as a key factor to open communication and effective collaboration and also contributed to increasing their motivation and engagement in the projects. Another interesting outcome of this study was understanding the importance of creating a team with members who are similar regarding competence level, ambition and preferences in working schedule."}
{"_id": "ffb73de64c9c769ae534201768d272c519991d1f", "title": "Path-tracking of a tractor-trailer vehicle along rectilinear and circular paths: a Lyapunov-based approach", "text": "The problem of asymptotic stabilization for straight and circular forward/backward motions of a tractor-trailer system is addressed using Lyapunov techniques. Smooth, bounded, nonlinear control laws achieving asymptotic stability along the desired path are designed, and explicit bounds on the region of attraction are provided. The problem of asymptotic controllability with bounded control is also addressed."}
{"_id": "7e80397d3dcb359761d163aaf10bf60c696642d1", "title": "Bloom Filter Performance on Graphics Engines", "text": "Bloom filters are a probabilistic technique for large-scale set membership tests. They exhibit no false negative test results but are susceptible to false positive results. They are well-suited to both large sets and large numbers of membership tests. We implement the Bloom filters present in an accelerated version of BLAST, a genome biosequence alignment application, on NVIDIA GPUs and develop an analytic performance model that helps potential users of Bloom filters to quantify the inherent tradeoffs between throughput and false positive rates."}
{"_id": "07b27e79099f00a8d50f9e529e6b325ed827ead2", "title": "Attribute bagging: improving accuracy of classifier ensembles by using random feature subsets", "text": "We present attribute bagging (AB), a technique for improving the accuracy and stability of classi#er ensembles induced using random subsets of features. AB is a wrapper method that can be used with any learning algorithm. It establishes an appropriate attribute subset size and then randomly selects subsets of features, creating projections of the training set on which the ensemble classi#ers are built. The induced classi#ers are then used for voting. This article compares the performance of our AB method with bagging and other algorithms on a hand-pose recognition dataset. It is shown that AB gives consistently better results than bagging, both in accuracy and stability. The performance of ensemble voting in bagging and the AB method as a function of the attribute subset size and the number of voters for both weighted and unweighted voting is tested and discussed. We also demonstrate that ranking the attribute subsets by their classi#cation accuracy and voting using only the best subsets further improves the resulting performance of the ensemble. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved."}
{"_id": "38676c77a8f5f24ed3e5357b83185064baa4cb6f", "title": "Your liking is my curiosity : a social popularity intervention to induce curiosity", "text": "Our actions and decisions are regularly influenced by the social environment around us. Can social environment be leveraged to induce curiosity and facilitate subsequent learning? Across two experiments, we show that curiosity is contagious: social environment can influence people\u2019s curiosity about the answers to scientific questions. Our findings show that people are more likely to become curious about the answers to more popular questions, which in turn influences the information they choose to reveal. Given that curiosity has been linked to better learning, these findings have important implications for education."}
{"_id": "365859840ba0b560a6e82e0e808ee26a7734af51", "title": "Android permissions: a perspective combining risks and benefits", "text": "The phenomenal growth of the Android platform in the past few years has made it a lucrative target of malicious application (app) developers. There are numerous instances of malware apps that send premium rate SMS messages, track users' private data, or apps that, even if not characterized as malware, conduct questionable actions affecting the user's privacy or costing them money. In this paper, we investigate the feasibility of using both the permissions an app requests, the category of the app, and what permissions are requested by other apps in the same category to better inform users whether the risks of installing an app is commensurate with its expected benefit. Existing approaches consider only the risks of the permissions requested by an app and ignore both the benefits and what permissions are requested by other apps, thus having a limited effect. We propose several risk signals that and evaluate them using two datasets, one consists of 158,062 Android apps from the Android Market, and another consists of 121 malicious apps. We demonstrate the effectiveness of our proposal through extensive data analysis."}
{"_id": "922a324c99e97c511477637f3cfe16c7f06ce382", "title": "DroidAPIMiner: Mining API-Level Features for Robust Malware Detection in Android", "text": "The increasing popularity of Android apps makes them the target of malware authors. To defend against this severe increase of Android malwares and help users make a better evaluation of apps at install time, several approaches have been proposed. However, most of these solutions suffer from some shortcomings; computationally expensive, not general or not robust enough. In this paper, we aim to mitigate Android malware installation through providing robust and lightweight classifiers. We have conducted a thorough analysis to extract relevant features to malware behavior captured at API level, and evaluated different classifiers using the generated feature set. Our results show that we are able to achieve an accuracy as high as 99% and a false positive rate as low as 2.2% using KNN classifier."}
{"_id": "e5ad332e2b23727ae842875674b60a99f490fed8", "title": "Detecting Malware for Android Platform: An SVM-Based Approach", "text": "In recent years, Android has become one of the most popular mobile operating systems because of numerous mobile applications (apps) it provides. However, the malicious Android applications (malware) downloaded from third-party markets have significantly threatened users' security and privacy, and most of them remain undetected due to the lack of efficient and accurate malware detection techniques. In this paper, we study a malware detection scheme for Android platform using an SVM-based approach, which integrates both risky permission combinations and vulnerable API calls and use them as features in the SVM algorithm. To validate the performance of the proposed approach, extensive experiments have been conducted, which show that the proposed malware detection scheme is able to identify malicious Android applications effectively and efficiently."}
{"_id": "a4714774f451eef9755ef3914dc87d488607982e", "title": "Security in Infancy , Childhood , and Adulthood : A Move to the Level of Representation", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "18493175642909909196e99b90a6af0bf3ef803b", "title": "Statecharts: A Visual Formalism for Complex Systems", "text": "We present a broad extension of the conventional formalism of state machines and state diagrams, that is relevant to the specification and design of complex discrete-event systems, such as multi-computer real-time systems, communication protocols and digital control units. Our diagrams, which we call statecharts, extend conventional state-transition diagrams with essentially three elements, dealing, respectively, with the notions of hierarchy, concurrency and communication. These transform the language of state diagrams into a highly structured and economical description language. Statecharts are thus compact and expressiv-small diagrams can express complex behavior-as well as compositional and modular. When coupled with the capabilities of computerized graphics, statecharts enable viewing the description at different levels of detail, and make even very large specifications manageable and comprehensible. In fact, we intend to demonstrate here that statecharts counter many of the objections raised against conventional state diagrams, and thus appear to render specification by diagrams an attractive and plausible approach. Statecharts can be used either as a stand-alone behavioral description or as part of a more general design methodology that deals also with the system\u2019s other aspects, such as functional decomposition and data-flow specification. We also discuss some practical experience that was gained over the last three years in applying the statechart formalism to the specification of a particularly complex system."}
{"_id": "28a2addad0adc8523490105c17d36a96cc55c5a9", "title": "Performance Evaluation of a High-Power-Density PMASynRM With Ferrite Magnets", "text": "Although motors that use rare-earth permanent magnets (PMs) typically exhibit high performance, high costs and concerns about the stability of raw material supplies are leading to their decreased production. However, the performance of such motors is not easily matched without the use of rare-earth PMs. This paper proposes and examines a PM-assisted synchronous reluctance motor with ferrite magnets that has the same power density as rare-earth PM synchronous motors. A suitable rotor structure for high torque and high power is discussed with respect to the demagnetization of ferrite magnets and the mechanical strength."}
{"_id": "109f6c2ea18058cca7b9cc734b30dea6dd64cac0", "title": "Supervised Weighting-Online Learning Algorithm for Short-Term Traffic Flow Prediction", "text": "Prediction of short-term traffic flow has become one of the major research fields in intelligent transportation systems. Accurately estimated traffic flow forecasts are important for operating effective and proactive traffic management systems in the context of dynamic traffic assignment. For predicting short-term traffic flows, recent traffic information is clearly a more significant indicator of the near-future traffic flow. In other words, the relative significance depending on the time difference between traffic flow data should be considered. Although there have been several research works for short-term traffic flow predictions, they are offline methods. This paper presents a novel prediction model, called online learning weighted support-vector regression (OLWSVR), for short-term traffic flow predictions. The OLWSVR model is compared with several well-known prediction models, including artificial neural network models, locally weighted regression, conventional support-vector regression, and online learning support-vector regression. The results show that the performance of the proposed model is superior to that of existing models."}
{"_id": "c83bfb21bbff0d46595eab2c979bc40d4fd8c851", "title": "A comparative assessment of classification methods", "text": "Classification systems play an important role in business decision-making tasks by classifying the available information based on some criteria. The objective of this research is to assess the relative performance of some well-known classification methods. We consider classification techniques that are based on statistical and AI techniques. We use synthetic data to perform a controlled experiment in which the data characteristics are systematically altered to introduce imperfections such as nonlinearity, multicollinearity, unequal covariance, etc. Our experiments suggest that data characteristics considerably impact the classification performance of the methods. The results of the study can aid in the design of classification systems in which several classification methods can be employed to increase the reliability and consistency of the classification. D 2002 Elsevier Science B.V. All rights reserved."}
{"_id": "2924e3afe0a0c07bd1f02fbe1089dcb8b4516212", "title": "Islamic star patterns from polygons in contact", "text": "We present a simple method for rendering Islamic star patterns based on Hankin\u2019s \u201cpolygons-in-contact\u201d technique. The method builds star patterns from a tiling of the plane and a small number of intuitive parameters. We show how this method can be adapted to construct Islamic designs reminiscent of Huff\u2019s parquet deformations. Finally, we introduce a geometric transformation on tilings that expands the range of patterns accessible using our method. This transformation simplifies construction techniques given in previous work, and clarifies previously unexplained relationships between certain classes of star patterns."}
{"_id": "6da4f296950456023105b62f285983c81df69b04", "title": "Exact clothing retrieval approach based on deep neural network", "text": "In recent years, the trend of shopping for clothing on an onscreen character has been raising. Unfortunately, clothes that appear on screen could be difficult to track down, much less to purchase directly. It is a challenging task to find out exactly what the clothing in a TV show is from massive amounts of clothing items in online clothing retails due to the clearly noticeable visual discrepancies of the same clothing in video and in online shop photos. In order to tackle the problem, we adopt the deep neural network to learn recognizable clothing-specific feature for finding the exact clothing item in the video scenario among a large collection of online shop images. The experiments shows the clothing-specific feature is more effective in retrieval task and color filtering improve the retrieval performance."}
{"_id": "6e176436c677af3f6c95edb329b4b5e015d695ed", "title": "Multi-objective optimisation algorithms for GIS-based multi-criteria decision analysis : an application for evacuation planning", "text": "Geographic Information Systems have gradually acquired greater relevance as tools to support decision-making processes, and during the last decades they have been used in conjunction with Multi-Criteria Decision Analysis techniques (GIS-MCDA) to solve real-world spatial problems. GIS-MCDA can be generally divided in two main approaches: Multi-Attribute and MultiObjective techniques. Until now most of the applications of GIS-MCDA have been focused only on using the multi-attribute approach, and less than 10% of the research has been related to a specific type of multi-objective technique: the use of heuristic/meta-heuristic algorithms. The present study explores how different heuristic/meta-heuristic methods perform on solving a spatial multi-objective optimisation problem. To achieve this, four algorithms representing different types of heuristics methods were implemented, and applied to solve the same spatial multi-objective optimisation problem related with an evacuation planning situation. The implemented algorithms were Standard Particle Swarm Optimisation (SPSO), Non-dominated Sorting Genetic Algorithm II (NSGA-II), Archived Multi-Objective Simulated Annealing (AMOSA) and Multi-Objective Grey Wolf Optimiser (MOGWO). The results show that the four algorithms were effective on solving the given problem, although in general AMOSA and MOGWO had a higher performance for the evaluated aspects (number of solutions, effectiveness of the optimisation, diversity, execution time and repeatability). However, the differences in the results obtained from each algorithm were not clear enough to state that one type of heuristic is superior than others. Since AMOSA and MOGWO are the most recent algorithms among the implemented ones, they include several improvements achieved by the latest research, and their superior performance could be connected to these improvements more than to the specific type of algorithms they belong to. Further research is suggested to explore the suitability of these methods for many-objectives spatial problems, to consider the temporal variability and dynamism of real-world situations, to create a standard set of algorithms to be used for benchmarking, and to integrate them with the currently available GIS-MCDA tools. Despite this, from the performed research it is possible to conclude that heuristics methods are reliable techniques for solving spatial problems with multiple and conflictive objectives, and future research and practical implementations in this field can strengthen the capacities of GIS as a multi-criteria decisionmaking support tool."}
{"_id": "d203f5734f5ee9d49c0adff31805ed93034ca60e", "title": "Cosine similarity based fingerprinting algorithm in WLAN indoor positioning against device diversity", "text": "The fingerprinting location method is commonly used in WLAN indoor positioning system. Device diversity (DD) which leads to Received Signal Strength (RSS) value difference between the users' device and the reference device is becoming an increasingly important factor impacting the positioning accuracy. Thus, the device diversity is a key problem gained more and more attention in fingerprinting location system recently, which introduces many uncertainties to the positioning result. Traditionally, the Euclidean distance is widely adopted in fingerprinting method. However, when encountering with RSS value difference caused by device diversity, the localization performance is degraded significantly. Due to this problem, our paper proposes a method employing cosine similarity instead of the Euclidean distance to improve the positioning accuracy about 13.15% higher within 2 meters when device diversity exists in the positioning. The experiment results show that the proposed method presents a good performance without the expenses of computation caused by calibration method which is employed in many previous works."}
{"_id": "c404992349e537bf6100a4835fec8fb2ebbe25cb", "title": "Design comparison of NdFeB and ferrite radial flux magnetic gears", "text": "Magnetic gears promise the benefits of mechanical gears with added advantages from contactless power transfer. Although most literature focuses on minimizing the size of magnetic gears, their material costs must also be reduced to achieve economic feasibility. This work compares the active material costs of NdFeB and ferrite radial flux magnetic gears with surface permanent magnets through a parametric 2D and 3D finite element analysis (FEA) study. Differences in optimal design trends such as pole count and magnet thicknesses are illustrated for the two materials. The results demonstrate that, for most historical price combination scenarios, NdFeB gear designs are capable of achieving lower active material costs than ferrite gear designs, and they are always capable of achieving much higher torque densities. Based on the selected design constraints, relative to a nominal ferrite cost of $10/kg, NdFeB must cost more than $90/kg before ferrite is cost competitive. Additionally, contour plots are provided to show the impact of material price rate variation on the cost break points."}
{"_id": "790709abb93c0de4e02576993ab94d4f3b7143a5", "title": "iPCA : An Interactive System for PCA-based Visual Analytics", "text": "Principle Component Analysis (PCA) is a widely used mathematical technique in many fields for factor and trend analysis, dimension reduction, etc. However, it is often considered to be a \u201cblack box\u201d operation whose results are difficult to interpret and sometimes counter-intuitive to the user. In order to assist the user in better understanding and utilizing PCA, we have developed a system that visualizes the results of principal component analysis using multiple coordinated views and a rich set of user interactions. Our design philosophy is to support analysis of multivariate datasets through extensive interaction with the PCA output. To demonstrate the usefulness of our system, we performed a comparative user study with a known commercial system, SAS/INSIGHT\u2019s Interactive Data Analysis. Participants in our study solved a number of high-level analysis tasks with each interface and rated the systems on ease of learning and usefulness. Based on the participants\u2019 accuracy, speed, and qualitative feedback, we observe that our system helps users to better understand relationships between the data and the calculated eigenspace, which allows the participants to more accurately analyze the data. User feedback suggests that the interactivity and transparency of our system are the key strengths of our approach."}
{"_id": "6c316c08a7912f6427c27edbfbd2f53cbbad50a5", "title": "Fusion of time-of-flight depth and stereo for high accuracy depth maps", "text": "Time-of-flight range sensors have error characteristics which are complementary to passive stereo. They provide real time depth estimates in conditions where passive stereo does not work well, such as on white walls. In contrast, these sensors are noisy and often perform poorly on the textured scenes for which stereo excels. We introduce a method for combining the results from both methods that performs better than either alone. A depth probability distribution function from each method is calculated and then merged. In addition, stereo methods have long used global methods such as belief propagation and graph cuts to improve results, and we apply these methods to this sensor. Since time-of-flight devices have primarily been used as individual sensors, they are typically poorly calibrated. We introduce a method that substantially improves upon the manufacturerpsilas calibration. We show that these techniques lead to improved accuracy and robustness."}
{"_id": "bdc7f9989d33004b79cbfe66cc5317c7d70b19e2", "title": "An adaptive and adaptable learning platform with realtime eye-tracking support: lessons learned", "text": "For a learning platform to be adapti ve, it needs to monitor the user in r ealtime. At the ScienceCampus T\u00fcbingen, we conduct empirical research on learning strategies, in particular with regard to system adaptivity. Our experimental Webbased adaptiv e multimedia learning environment is able to communicate with an eyetracking system in real-time, and l inks gaze data to browser-based learning content. By processing simple evaluation scripts in the browser, we can control the learning process by adapting the learning content in response to predefined conditions and user inputs. The experimental system serves as a basis for a series of psyc hologi al studies in the learning context. We have already conducted some studies with our experimental learning platform, but its devel opment is still ongoing. In particular, we are adding new feature s required for future studies. In this paper, we desc ribe the actual state of our experimental learning environment, lessons learned from its a pplication in the studies, and our plans for future devel opment."}
{"_id": "5910f05c7f938c70181253031afe541a1434bf95", "title": "Evaluation of 3D Feature Descriptors for Multi-modal Data Registration", "text": "We propose a framework for 2D/3D multi-modal data registration and evaluate 3D feature descriptors for registration of 3D datasets from different sources. 3D datasets of outdoor environments can be acquired using a variety of active and passive sensor technologies. Registration of these datasets into a common coordinate frame is required for subsequent modelling and visualisation. 2D images are converted into 3D structure by stereo or multiview reconstruction techniques and registered to a unified 3D domain with other datasets in a 3D world. Multi-modal datasets have different density, noise, and types of errors in geometry. This paper provides a performance benchmark for existing 3D feature descriptors across multi-modal datasets. This analysis highlights the limitations of existing 3D feature detectors and descriptors which need to be addressed for robust multi-modal data registration. We analyse and discuss the performance of existing methods in registering various types of datasets then identify future directions required to achieve robust multi-modal data registration."}
{"_id": "8228b780429acc22ffb48e64147f58b4dea291f1", "title": "2005: Tutorial on Agent-Based Modeling and Simulation", "text": "Agent-based modeling and simulation (ABMS) is a new approach to modeling systems comprised of autonomous, interacting agents. ABMS promises to have far-reaching effects on the way that businesses use computers to support decision-making and researchers use electronic laboratories to support their research. Some have gone so far as to contend that ABMS is a third way of doing science besides deductive and inductive reasoning. Computational advances have made possible a growing number of agentbased applications in a variety of fields. Applications range from modeling agent behavior in the stock market and supply chains, to predicting the spread of epidemics and the threat of bio-warfare, from modeling consumer behavior to understanding the fall of ancient civilizations, to name a few. This tutorial describes the theoretical and practical foundations of ABMS, identifies toolkits and methods for developing ABMS models, and provides some thoughts on the relationship between ABMS and traditional modeling techniques."}
{"_id": "0ea4e3d8cc656aaee6e4cb6c3ef8366facdba9b5", "title": "A Dempster-Shafer theoretic conditional approach to evidence updating for fusion of hard and soft data", "text": "Fusion of hard data with soft data is an issue that has attracted recent attention. An effective fusion strategy requires an analytical framework that can capture the uncertainty inherent in hard and soft data. For instance, computational linguistic parsing of text-based data generates logical propositions that inherently possess significant semantic ambiguity. An effective fusion framework must exploit the respective advantages of hard and soft data while mitigating their particular weaknesses. In this paper, we describe a Dempster-Shafer theoretic approach to hard and soft data fusion that relies upon the novel conditional approach to updating. The conditional approach engenders a more flexible method that allows for tuning and adapting update strategies. When computational complexity concerns are taken into account, it also provides guidance on how evidence could be ordered for updating. This has important implications in working with models that convert propositional logic statements from text into Dempster-Shafer theoretic form."}
{"_id": "e0c1c27c369d1c1f1b76f0ae8fded19a0b3a8de4", "title": "Guidelines for critique of a research report.", "text": "Before findings reported in a research article are used to change clinical practice, the research must be evaluated for evidence of credibility, integrity, and potential for replication. This paper provides guidelines for use in evaluating deductive, empirical research. The standards presented in this paper are derived from the tenets of the scientific method, measurement theory, statistical principles, and research ethics. The guidelines may be used to evaluate each section of a research report, from the title to the interpretation of findings."}
{"_id": "6df3dc585e32f3b1cb49228d94a5469c30d79d2b", "title": "High Performance Computer Acoustic Data Accelerator: A New System for Exploring Marine Mammal Acoustics for Big Data Applications", "text": "This paper presents a new software model designed for distributed sonic signal detection runtime using machine learning algorithms called DeLMA. A new algorithm--Acoustic Data-mining Accelerator (ADA)--is also presented. ADA is a robust yet scalable solution for efficiently processing big sound archives using distributing computing technologies. Together, DeLMA and the ADA algorithm provide a powerful tool currently being used by the Bioacoustics Research Program (BRP) at the Cornell Lab of Ornithology, Cornell University. This paper provides a high level technical overview of the system, and discusses various aspects of the design. Basic runtime performance and project summary are presented. The DeLMA-ADA baseline performance comparing desktop serial configuration to a 64 core distributed HPC system shows as much as a 44 times faster increase in runtime execution. Performance tests using 48 cores on the HPC shows a 9x to 12x efficiency over a 4 core desktop solution. Project summary results for 19 east coast deployments show that the DeLMA-ADA solution has processed over three million channel hours of sound to date."}
{"_id": "f34b673052bfb29391b493cb00408e4d00f39604", "title": "Effects of small-sided games on physical conditioning and performance in young soccer players.", "text": "The purpose of this study was to examine, first, the movement actions performed during two different small-sided games and, second, their effects on a series of field endurance and technical tests. Thirty-four young soccer players (age: 13 \u00b1 0.9 yrs; body mass: 62.3 \u00b1 15.1 kg; height: 1.65 \u00b1 0.06 m) participated in the study. Small-sided games included three-a-side (3 versus 3 players) and six-a-side (6 versus 6 players) games consisting of 10 bouts of 4 min duration with 3 min active recovery between bouts. Soccer player performance was evaluated using five field tests: a) 30m sprint, b) throw-in for distance, c) Illinois Agility Test, d) dribbling the ball and e) horizontal jump before, in the middle and after the implementation of both game situations. Heart rate was monitored during the entire testing session. Each game was also filmed to measure soccer movements within the game. The ANOVA analysis indicated that the three-a- side games displayed significantly higher heart rate values compared with the six-a-side games (p < 0.05). The number of short passes, kicks, tackles, dribbles and scoring goals were significantly higher during the three-a-side compared with the six-a-side game condition (p < 0. 05) while players performed more long passes and headed the ball more often during the six-a-side (p < 0.05). After the three-a-side games, there was a significant decline in sprint and agility performance (p < 0.05), while after both game conditions significant alterations in the throw-in and the horizontal jump performance were observed (p < 0.05). The results of the present study indicated that three-a-side games provide higher stimulus for physical conditioning and technical improvement than six-a-side games and their use for training young soccer players is recommended. Key pointsThree-a-side games display higher HR compared with six-a-side games.In the three-a-side games players performed more short passes, kicks, dribbles, tackles and scored more goals compared with the six-a-side games.Impairment in endurance and field test performance was observed mainly after three-a-side games.The use of the three-a-side games to develop physical fitness and technique in young soccer players is recommended."}
{"_id": "a205303bb8e66cf8588b1f8c555f1ffcefbed107", "title": "Modular Monad Transformers", "text": "During the last two decades, monads have become an indispensable tool for structuring functional programs with computational effects. In this setting, the mathematical notion of a monad is extended with operations that allow programmers to manipulate these effects. When several effects are involved, monad transformers can be used to build up the required monad one effect at a time. Although this seems to be modularity nirvana, there is a catch: in addition to the construction of a monad, the effect-manipulating operations need to be lifted to the resulting monad. The traditional approach for lifting operations is nonmodular and ad-hoc. We solve this problem with a principled technique for lifting operations that makes monad transformers truly modular."}
{"_id": "6b4081a0b5d775172269854c07dadb0c07977806", "title": "A novel objective function for improved phoneme recognition using time delay neural networks", "text": "The authors present single- and multispeaker recognition results for the voiced stop consonants /b, d, g/ using time-delay neural networks (TDNN), a new objective function for training these networks, and a simple arbitration scheme for improved classification accuracy. With these enhancements a median 24% reduction in the number of misclassifications made by TDNNs trained with the traditional backpropagation objective function is achieved. This redundant results in /b, d, g/ recognition rates that consistently exceed 98% for TDNNs trained with individual speakers; it yields a 98.1% recognition rate for a TDNN trained with three male speakers.<>"}
{"_id": "8130cefaab7dc16ad84220ea74fcb133a4266c05", "title": "Combined long short-term memory based network employing wavelet coefficients for MI-EEG recognition", "text": "Motor Imagery Electroencephalography (MI-EEG) plays an important role in brain computer interface (BCI) based rehabilitation robot, and its recognition is the key problem. The Discrete Wavelet Transform (DWT) has been applied to extract the time-frequency features of MI-EEG. However, the existing EEG classifiers, such as support vector machine (SVM), linear discriminant analysis (LDA) and BP network, did not make full use of the time sequence information in time-frequency features, the resulting recognition performance were not very ideal. In this paper, a Long Short-Term Memory (LSTM) based recurrent Neural Network (RNN) is integrated with Discrete Wavelet Transform (DWT) to yield a novel recognition method, denoted as DWT-LSTM. DWT is applied to analyze the each channel of MI-EEG and extract its effective wavelet coefficients, representing the time-frequency features. Then a LSTM based RNN is used as a classifier for the patten recognition of observed MI-EEG data. Experiments are conducted on a publicly available dataset, and the 5-fold cross validation experimental results show that DWT-LSTM yields relatively higher classification accuracies compared to the existing approaches. This is helpful for the further research and application of RNN in processing of MI-EEG."}
{"_id": "6dc8b7fd326a4e2037bb002b3420a3af45e20947", "title": "A review of soft computing applications in supply chain management", "text": "It is broadly recognised by global companies that supply chain management is one of the major core competencies for an organisation to compete in the marketplace. Organisational strategies are mainly concentrated on improvement of customer service levels as well as reduction of operational costs in order to maintain profit margins. Therefore supply chain performance has attracted researchers\u2019 attention. A variety of soft computing techniques including fuzzy logic and genetic algorithms have been employed to improve effectiveness and efficiency in various aspects of supply chain management. Meanwhile, an increasing number of papers have been published to address related issues. The aim of this paper is to summarise the findings by a systematic review of existing research papers concerning the application of soft computing techniques to supply chain management. Some areas in supply chain management that have rarely been exposed in existing papers, such as customer relationship management and reverse logistics, are therefore suggested for future research. 2009 Elsevier B.V. All rights reserved."}
{"_id": "49859cb966f4571c355595d7a614aff20154bfae", "title": "Consumer attitudes and behavior: the theory of planned behavior applied to food consumption decisions", "text": "The author compares the multi-attribute and subjective expected utility (SEU) models popular in research on consumer behavior to the approach offered by the theory of planed behavior (TPB). Unlike the multi-attribute and SEU models, the TPB relies not on revealed preferences to infer the underlying decision process but instead on direct assessment of its theoretical constructs. According to the theory, the consumer\u2019s behavior is a function of intention to perform the behavior in question; the intention is based on attitude, subjective norm, and perceived behavioral control with respect to the behavior; and these factors are determined, respectively, by behavioral, normative, and control beliefs. The theory allows us to predict intentions and behavior with respect to the purchase or use of a single brand or product as well as in relation to choice among different brands or products."}
{"_id": "cd94b967179d26ec485348df0f35bde3336e98b4", "title": "Parent \u2013 School Involvement and School Performance : Mediated Pathways Among Socioeconomically Comparable African American and Euro-American Families", "text": "Children\u2019s academic and social competencies were examined as mediators to explain the often positive relation between parent\u2013school involvement and achievement. Ethnic variations in the relation between parent\u2013school involvement and early achievement and the mediated pathways were examined. Because much of the comparative research confounds ethnicity with socioeconomic status, the relations were examined among socioeconomically comparable samples of African American and Euro-American kindergarten children and their mothers. For reading achievement, academic skills mediated the relation between involvement and achievement for African Americans and Euro-Americans. For math achievement, the underlying process differed across ethnic groups. For African Americans, academic skills mediated the relation between school involvement and math performance. For Euro-Americans, social competence mediated the impact of home involvement on school achievement."}
{"_id": "23b08abfd6352a3c500e3a0db56431e890c40050", "title": "The framing of decisions and the psychology of choice.", "text": "The psychological principles that govern the perception of decision problems and the evaluation of probabilities and outcomes produce predictable shifts of preference when the same problem is framed in different ways. Reversals of preference are demonstrated in choices regarding monetary outcomes, both hypothetical and real, and in questions pertaining to the loss of human lives. The effects of frames on preferences are compared to the effects of perspectives on perceptual appearance. The dependence of preferences on the formulation of decision problems is a significant concern for the theory of rational choice."}
{"_id": "114ed9f07c9b8760b7c59b69653e86307507cd82", "title": "Android Code Protection via Obfuscation Techniques: Past, Present and Future Directions", "text": "Mobile devices have become ubiquitous due to centralization of private user information, contacts, messages and multiple sensors. Google Android, an open-source mobile Operating System (OS), is currently the market leader. Android popularity has motivated the malware authors to employ set of cyber attacks leveraging code obfuscation techniques. Obfuscation is an action that modifies an application (app) code, preserving the original semantics and functionality to evade anti-malware. Code obfuscation is a contentious issue. Theoretical code analysis techniques indicate that, attaining a verifiable and secure obfuscation is impossible. However, obfuscation tools and techniques are popular both among malware developers (to evade anti-malware) and commercial software developers (protect intellectual rights). We conducted a survey to uncover answers to concrete and relevant questions concerning Android code obfuscation and protection techniques. The purpose of this paper is to review code obfuscation and code protection practices, and evaluate efficacy of existing code de-obfuscation tools. In particular, we discuss Android code obfuscation methods, custom app protection techniques, and various de-obfuscation methods. Furthermore, we review and analyze the obfuscation techniques used by malware authors to evade analysis efforts. We believe that, there is a need to investigate efficiency of the defense techniques used for code protection. This survey would be beneficial to the researchers and practitioners, to understand obfuscation and de-obfuscation techniques to propose novel solutions on Android."}
{"_id": "5539249a24f7446b11f0e49f5a47610e5a38d63c", "title": "Automated Plant Disease Analysis (APDA): Performance Comparison of Machine Learning Techniques", "text": "Plant disease analysis is one of the critical tasks in the field of agriculture. Automatic identification and classification of plant diseases can be supportive to agriculture yield maximization. In this paper we compare performance of several Machine Learning techniques for identifying and classifying plant disease patterns from leaf images. A three-phase framework has been implemented for this purpose. First, image segmentation is performed to identify the diseased regions. Then, features are extracted from segmented regions using standard feature extraction techniques. These features are then used for classification into disease type. Experimental results indicate that our proposed technique is significantly better than other techniques used for Plant Disease Identification and Support Vector Machines outperforms other techniques for classification of diseases."}
{"_id": "ab63550b8d19ddd1fe2d7fff97d0d33fa0331bf7", "title": "Adaptive content generation for games", "text": "From a simple entertainment activity to a learning tool, it is undeniable that video games are one of the most active and relevant areas in today's society. Since their inception, video games experienced an evolution unmatched by almost any other area and today's video games are complex pieces of software built by expensive multidisciplinary teams. Procedural Content Generation (PCG) offers an alternative to manual design of video game content, optimizing the process of development and thus reducing its cost. However, the content that is generated by traditional PCG techniques is usually very generic and it struggles to offer meaningful game experiences to a diverse player base. Recent years have brought some new PCG techniques that try to solve this problem by dynamically adjusting the generation of content to suit the needs of each individual player. The work presented in this paper focuses on the development of a new PCG methodology that aims to close the gap between game developers and their players. This is achieved by providing the developers with relevant real time player and playing context information and thus creating an easier way for developers to adjust their content generation process in run time to better suit the needs of each player. This was achieved by first designing a methodology that models the player, their context and the game. It was also developed a simple game to showcase the potential of such methodology. The end result was a game that adapts some of its content to different types of players and contexts."}
{"_id": "44abde153c7507451f073ae361f75edd9d7010dd", "title": "eSOLHotel: Generaci\u00f3n de un lexic\u00f3n de opini\u00f3n en espa\u00f1ol adaptado al dominio tur\u00edstico", "text": "Since Web 2.0 is the largest container for subjective expressions about different topics or issues expressed in all languages, the study of Sentiment Analysis has grown exponentially. In this work, we focus on Spanish polarity classification of hotel reviews and a new domain-dependent lexical resource (eSOLHotel) is presented. This new lexicon has been compiled following a corpus-based approach. We have carried out several experiments using an unsupervised approach for the polarity classification over the category of hotels from corpus SFU. The results obtained with the new lexicon eSOLHotel outperform the results with other general purpose lexicon."}
{"_id": "658d97c2ea8a6ed1d9de4ef0f85da21a8816d29a", "title": "Exact matrix completion via convex optimization", "text": "Suppose that one observes an incomplete subset of entries selected from a low-rank matrix. When is it possible to complete the matrix and recover the entries that have not been seen? We demonstrate that in very general settings, one can perfectly recover all of the missing entries from most sufficiently large subsets by solving a convex programming problem that finds the matrix with the minimum nuclear norm agreeing with the observed entries. The techniques used in this analysis draw upon parallels in the field of compressed sensing, demonstrating that objects other than signals and images can be perfectly reconstructed from very limited information."}
{"_id": "2f976aa22e08e4233c8d1dd82343bfd3a124d9ac", "title": "A Survey of Software-Defined Networking: Past, Present, and Future of Programmable Networks", "text": "The idea of programmable networks has recently re-gained considerable momentum due to the emergence of the Software-Defined Networking (SDN) paradigm. SDN, often referred to as a \"radical new idea in networking\", promises to dramatically simplify network management and enable innovation through network programmability. This paper surveys the state-of-the-art in programmable networks with an emphasis on SDN. We provide a historic perspective of programmable networks from early ideas to recent developments. Then we present the SDN architecture and the OpenFlow standard in particular, discuss current alternatives for implementation and testing of SDN-based protocols and services, examine current and future SDN applications, and explore promising research directions based on the SDN paradigm."}
{"_id": "18e2202311580318b91f2b4e6073c3afb3db8d7b", "title": "Evaluation of Local Spatio-temporal Features for Action Recognition", "text": "Local space-time features have recently become a popular video representation for action recognition. Several methods for feature localization and description have been proposed in the literature, and promising recognition results were demonstrated for different action datasets. The comparison of those methods, however, is limited given the different experimental settings and various recognition methods used. The purpose of this paper is first to define a common evaluation setup to compare local space-time detectors and descriptors. All experiments are reported for the same bag-of-features SVM recognition framework. Second, we provide a systematic evaluation of different spatio-temporal features. We evaluate the performance of several space-time interest point detectors and descriptors along with their combinations on datasets with varying degree of difficulty. We also include a comparison with dense features obtained by regular sampling of local space-time patches. Feature detectors. In our experimental evaluation, we consider the following feature detectors. (1) The Harris3D detector [3] extends the Harris detector for images to image sequences. At each video point, a spatio-temporal second-moment matrix \u03bc is computed using a separable Gaussian smoothing function and space-time gradients. Interest points are located at local maxima of H = det(\u03bc)\u2212 k trace3(\u03bc). (2) The Cuboid detector [1] is based on temporal Gabor filters. The response function has the form: R = (I \u2217 g \u2217 hev) + (I \u2217 g \u2217 hod), where g(x,y;\u03c3) is the 2D Gaussian smoothing kernel, and hev and hod are 1D Gabor filters. Interest points are detected at local maxima of R. (3) The Hessian detector [6] is a spatio-temporal extension of the Hessian saliency measure. The determinant of the 3D Hessian matrix is used to measure saliency. The determinant of the Hessian is computed over several spatial and temporal scales. A non-maximum suppression algorithm selects extrema as interest points. (4) Dense sampling extracts multi-scale video blocks at regular positions in space and time and for varying scales. In our experiments, we sample cuboids with 50% spatial and temporal overlap. Feature descriptors. The following feature descriptors are investigated. (1) For the Cuboid descriptor [1], gradients computed for each pixel in a cuboid region are concatenated into a single vector. PCA projects vectors to a lower dimensional space. (2) The HOG/HOF descriptors [4] divide a cuboid region into a grid of cells. For each cell, 4-bin histograms of gradient orientations (HOG) and 5-bin histograms of optic flow (HOF) are computed. Normalized histograms are concatenated into HOG, HOF as well as HOG/HOF descriptor vectors. (3) The HOG3D descriptor [2] is based on histograms of 3D gradient orientations. Gradients are computed via an integral video representations. Regular polyhedrons are used to uniformly quantize the orientation of spatio-temporal gradients. A given 3D volume is divided into a grid of cells. The corresponding descriptor concatenates gradient histograms of all cells. (4) The extended SURF (ESURF) descriptor [6] extends the image SURF descriptor to videos. Again 3D cuboids are divided into a grid of cells. Each cell is represented by a weighted sum of uniformly sampled responses of Haar-wavelets aligned with the three axes. Experimental Setup. 
We represent video sequences as a bag of local spatio-temporal features [5]. Spatio-temporal features are first quantized HOG3D HOG/HOF HOG HOF Cuboids ESURF Harris3D 89.0% 91.8% 80.9% 92.1% \u2013 \u2013 Cuboids 90.0% 88.7% 82.3% 88.2% 89.1% \u2013 Hessian 84.6% 88.7% 7767% 88.6% \u2013 81.4% Dense 85.3% 86.1% 79.0% 88.0% \u2013 \u2013 Table 1: Average accuracy on the KTH actions dataset."}
{"_id": "11ef2155eb99bb608702303bfc5464acd5524355", "title": "Gender Recognition Based on Sift Features", "text": "This paper proposes a robust approach for face detection and gender classification in color images. Previous researches about gender recognition suppose an expensive computational and time-consuming pre-processing step in order to alignment in which face images are aligned so that facial landmarks like eyes, nose, lips, chin are placed in uniform locations in image. In this paper, a novel technique based on mathematical analysis is represented in three stages that eliminates alignment step. First, a new color based face detection method is represented with a better result and more robustness in complex backgrounds. Next, the features which are invariant to affine transformations are extracted from each face using scale invariant feature transform (SIFT) method. To evaluate the performance of the proposed algorithm, experiments have been conducted by employing a SVM classifier on a database of face images which contains 500 images from distinct people with equal ratio of male and female."}
{"_id": "9439871c45158dd4cd42109342780de5226fe4dd", "title": "Data integration driven ontology design, case study smart city", "text": "Methods to design of formal ontologies have been in focus of research since the early nineties when their importance and conceivable practical application in engineering sciences had been understood. However, often significant customization of generic methodologies is required when they are applied in tangible scenarios. In this paper, we present a methodology for ontology design developed in the context of data integration. In this scenario, a targeting ontology is applied as a mediator for distinct schemas of individual data sources and, furthermore, as a reference schema for federated data queries. The methodology has been used and evaluated in a case study aiming at integration of buildings' energy and carbon emission related data. We claim that we have made the design process much more efficient and that there is a high potential to reuse the methodology."}
{"_id": "e93b51b7599365c540308e4271034a4802dd99d8", "title": "Hand Gesture Recognition using Computer Vision", "text": "The use of the gesture system in our daily life as a natural human-human interaction has inspired the researchers to simulate and utilize this gift in human-machine interaction which is appealing and can take place the bore interaction ones that existed such as television, radio, and various home appliances as well as virtual reality will worth and deserve its name. This kind of interaction ensures promising and satisfying outcomes if applied in systematic approach, and supports unadorned human hand when transferring the message to these devices which is easiest, comfort and desired rather than the communication that requires frills to deliver the message to such devices. With the rapid emergence of 3d applications and virtual environments in computer system the need for a new type of interaction device arises. This is because the traditional devices such as mouse, keyboard and joystick become inefficient and cumbersome within this virtual environments.in other words evolution of user interfaces shapes the change in the HumanComputer Interaction (HCI).Intuitive and naturalness characteristics of \u201cHand Gestures\u201d in the HCI have been the driving force and motivation to develop an interaction device which can replace current unwieldy tools. A survey on the methods of analysing, modelling and recognizing hand gestures in the context of the HCI is provided in this paper."}
{"_id": "9e9ebfba5740abb00ffd6582826a1869a0c67e65", "title": "A Theoretic Framework of K-Means-Based Consensus Clustering", "text": "Consensus clustering emerges as a promising solution to find cluster structures from data. As an efficient approach for consensus clustering, the Kmeans based method has garnered attention in the literature, but the existing research is still preliminary and fragmented. In this paper, we provide a systematic study on the framework of K-meansbased Consensus Clustering (KCC). We first formulate the general definition of KCC, and then reveal a necessary and sufficient condition for utility functions that work for KCC, on both complete and incomplete basic partitionings. Experimental results on various real-world data sets demonstrate that KCC is highly efficient and is comparable to the state-of-the-art methods in terms of clustering quality. In addition, KCC shows high robustness to incomplete basic partitionings with substantial missing values."}
{"_id": "8189e4f5fc09ae691c77bbd0d4e09b8853b02edf", "title": "Pose estimation of anime/manga characters: a case for synthetic data", "text": "2D articulated pose estimation is the task of locating the body joint positions of a human figure in an image. A pose estimator that works on anime/manga images could be an important component of an automatic system to create 3D animation from existing manga or anime, which could significantly the lower cost of media production. To create an accurate pose estimator, however, a sizable and high-quality dataset is needed, and such a dataset can be expensive to create.\n To alleviate data scarcity, we propose using a database of 3D character models and poses to generate synthetic training data for 2D pose estimators of anime/manga characters based on convolutional neural networks (CNN). We demonstrate that a high-performing estimator can be obtained by pretraining a network on a large synthetic dataset and then fine-tuning it on a small dataset of drawings. We also show that the approach yields a pose estimator competitive with many previous works when applied to a photograph-based dataset, establishing our synthetic data's usefulness beyond the intended domain."}
{"_id": "44607270754f8521d6c4d42297aa881393f4f8e0", "title": "A universal algorithm for sequential data compression", "text": "A universal algorithm for sequential data compression is presented. Its performance is investigated with respect to a nonprobabilistic model of constrained sources. The compression ratio achieved by the proposed universal code uniformly approaches the lower bounds on the compression ratios attainable by block-to-variable codes and variable-to-block codes designed to match a completely specified source."}
{"_id": "a63e0f8a433942b514f493421b09a517bf15c7d6", "title": "RoboADS: Anomaly Detection Against Sensor and Actuator Misbehaviors in Mobile Robots", "text": "Mobile robots such as unmanned vehicles integrate heterogeneous capabilities in sensing, computation, and control. They are representative cyber-physical systems where the cyberspace and the physical world are strongly coupled. However, the safety of mobile robots is significantly threatened by cyber/physical attacks and software/hardware failures. These threats can thwart normal robot operations and cause robot misbehaviors. In this paper, we propose a novel anomaly detection system, which leverages physical dynamics of mobile robots to detect misbehaviors in sensors and actuators. We explore issues raised in real-world implementations, e.g., distinctive robot dynamic models, sensor quantity and quality, decision parameters, etc., for practicality purposes. We implement the detection system on two types of mobile robots and evaluate the detection performance against various misbehavior scenarios, including signal interference, sensor spoofing, logic bomb and physical jamming. The experiments show detection effectiveness and small detection delays."}
{"_id": "2e282093f9ddc226c94f3215daeff1be578dfa50", "title": "Loginson: a transform and load system for very large-scale log analysis in large IT infrastructures", "text": "Nowadays, most systems and applications produce log records that are useful for security and monitoring purposes such as debugging programming errors, checking system status, and detecting configuration problems or even attacks. To this end, a log repository becomes necessary whereby logs can be accessed and visualized in a timely manner. This paper presents Loginson, a high-performance log centralization system for large-scale log collection and processing in large IT infrastructures. Besides log collection, Loginson provides high-level analytics through a visual interface for the purpose of troubleshooting critical incidents. We note that Loginson outperforms all of the other log centralization solutions by taking full advantage of the vertical scalability, and therefore decreasing Capital Expenditure (CAPEX) and Operating Expense (OPEX) costs for deployment scenarios with a huge volume of log data."}
{"_id": "5a0e5b936bfb86e063818af1e3699c365a0fc03c", "title": "Factors Influencing Consumers \u2019 E-Commerce Commodity Purchases", "text": "The 2002 North America Online Report published by eMarketer estimates that almost 24 million pre-college age students (ages 9-17) are already shopping online and gaining valuable e-commerce purchasing experience. Estimates of online shopping usage project steady growth, the number of young adults buying online will increase proportionally. This research seeks to develop a better understanding of the factors motivating young people to select e-commerce vendors for commodity purchases by exploring attitudes, demographic characteristics and purchase decision perceptions (i.e., the product, shopping experience, customer service, and consumer risk). Findings indicate that young adults with a history of e-commerce purchasing experience have a more positive attitude towards online buying than do young adults without e-commerce purchasing experience. In a related finding, a history of e-commerce purchasing experience serves as a good predictor of future e-commerce commodity purchases. Additionally, consumer risk and shopping experience perceptions were found to influence experienced e-commerce shoppers\u2019 commodity purchase decisions more than customer service or consumer risk."}
{"_id": "9f2e9d7fb13d2f494198c46362aa319856b3f4c3", "title": "Orbitofrontal cortex, decision-making and drug addiction", "text": "The orbitofrontal cortex, as a part of prefrontal cortex, is implicated in executive function. However, within this broad region, the orbitofrontal cortex is distinguished by its unique pattern of connections with crucial subcortical associative learning nodes, such as basolateral amygdala and nucleus accumbens. By virtue of these connections, the orbitofrontal cortex is uniquely positioned to use associative information to project into the future, and to use the value of perceived or expected outcomes to guide decisions. This review will discuss recent evidence that supports this proposal and will examine evidence that loss of this signal, as the result of drug-induced changes in these brain circuits, might account for the maladaptive decision-making that characterizes drug addiction."}
{"_id": "c2e4df54d8f241d7e6a0c04f4299956b7748950f", "title": "Deep Underwater Image Enhancement", "text": "In an underwater scene, wavelength-dependent light absorption and scattering degrade the visibility of images, causing low contrast and distorted color casts. To address this problem, we propose a convolutional neural network based image enhancement model, i.e., UWCNN, which is trained efficiently using a synthetic underwater image database. Unlike the existing works that require the parameters of underwater imaging model estimation or impose inflexible frameworks applicable only for specific scenes, our model directly reconstructs the clear latent underwater image by leveraging on an automatic end-to-end and data-driven training mechanism. Compliant with underwater imaging models and optical properties of underwater scenes, we first synthesize ten different marine image databases. Then, we separately train multiple UWCNN models for each underwater image formation type. Experimental results on real-world and synthetic underwater images demonstrate that the presented method generalizes well on different underwater scenes and outperforms the existing methods both qualitatively and quantitatively. Besides, we conduct an ablation study to demonstrate the effect of each component in our network."}
{"_id": "0be0d781305750b37acb35fa187febd8db67bfcc", "title": "A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection", "text": "We review accuracy estimation methods and compare the two most commonmethods cross validation and bootstrap Recent experimen tal results on arti cial data and theoretical re sults in restricted settings have shown that for selecting a good classi er from a set of classi ers model selection ten fold cross validation may be better than the more expensive leave one out cross validation We report on a large scale experiment over half a million runs of C and a Naive Bayes algorithm to estimate the e ects of di erent parameters on these al gorithms on real world datasets For cross validation we vary the number of folds and whether the folds are strati ed or not for boot strap we vary the number of bootstrap sam ples Our results indicate that for real word datasets similar to ours the best method to use for model selection is ten fold strati ed cross validation even if computation power allows using more folds"}
{"_id": "00e26133551f152f4c8913c5442238d30a63ef79", "title": "Natural language Interface for Database: A Brief review", "text": "Information is playing an important role in our lives. One of the major sources of information is databases. Databases and database technology are having major impact on the growing use of computers. Almost all IT applications are storing and retrieving information from databases. Retrieving information database requires knowledge of database languages like SQL. The Structured Query Language (SQL) norms are been pursued in almost all languages for relational database systems. However, not everybody is able to write SQL queries as they may not be aware of the structure of the database. So this has led to the development of Intelligent Database System ( IDBS) . There is an overwhelming need for non-expert users to query relational databases in their natural language instead of working with the values of the attributes. As a result many intelligent natural language interfaces to databases have been developed, which provides flexible options for manipulating queries. The idea of using Natural Language instead of SQL has prompted the development of new type of processing called Natural language Interface to Database. NLIDB is a step towards the development of intelligent database systems (IDBS) to enhance the users in performing flexible querying in databases. This paper is an introduction to Intelligent Database System and Natural Language Interface to Databases. Then a brief overview of NLIDB subcomponents is given and then discussion then moves on to NLIDB architectures and various approaches for the development of NLIDB systems."}
{"_id": "377177bb82105c35e6e26ebad1698a20688473bd", "title": "Dynamic circular work-stealing deque", "text": "The non-blocking work-stealing algorithm of Arora, Blumofe, and Plaxton (henceforth ABP work-stealing) is on its way to becoming the multiprocessor load balancing technology of choice in both industry and academia. This highly efficient scheme is based on a collection of array-based double-ended queues (deques) with low cost synchronization among local and stealing processes. Unfortunately, the algorithm's synchronization protocol is strongly based on the use of fixed size arrays, which are prone to overflows, especially in the multiprogrammed environments for which they are designed. We present a work-stealing deque that does not have the overflow problem.The only ABP-style work-stealing algorithm that eliminates the overflow problem is the list-based one presented by Hendler, Lev and Shavit. Their algorithm indeed deals with the overflow problem, but it is complicated, and introduces a trade-off between the space and time complexity, due to the extra work required to maintain the list.Our new algorithm presents a simple lock-free work-stealing deque, which stores the elements in a cyclic array that can grow when it overflows. The algorithm has no limit other than integer overflow (and the system's memory size) on the number of elements that may be on the deque, and the total memory required is linear in the number of elements in the deque."}
{"_id": "2c10a1ee5039c2f145abab6d5cc335d58f161ef0", "title": "Information Systems and Healthcare XXXII: Understanding the Multidimensionality of Information Systems Use: A Study of Nurses' Use of a Mandated Electronic Medical Record System", "text": ""}
{"_id": "160285998b31b11788182da282a1dc6f1e1b40f2", "title": "Data-Intensive Question Answering", "text": "Microsoft Research Redmond participated for the first time in TREC this year, focusing on the question answering track. There is a separate report in this volume on the Microsoft Research Cambridge submissions for the filtering and Web tracks (Robertson et al., 2002). We have been exploring data-driven techniques for Web question answering, and modified our system somewhat for participation in TREC QA. We submitted two runs for the main QA track (AskMSR and AskMSR2)."}
{"_id": "8213dbed4db44e113af3ed17d6dad57471a0c048", "title": "The Nature of Statistical Learning Theory", "text": ""}
{"_id": "0521ffc1c02c6a4898d02b4afcc7da162fc3ded3", "title": "Compact Microstrip-to-CPS Transition for UWB Application", "text": "A novel ultra-wideband (UWB) microstrip-to-CPS (coplanar stripline) transition has been developed. This transition or balun structure has several attractive advantages such as good impedance transformation, compact size and wide bandwidth. After the parallel-coupled line section between the microstrip line and CPS is investigated under varied traversal dimensions, a wide transmitting band is well achieved with the emergence of two transmission poles. Next, such a single transition circuit is optimally designed to cover the whole UWB band (3.1 GHz to 10.6 GHz). To verify the predicted results in experiment, the two back-to-back transitions with the same 50 Omega microstrip feed lines are fabricated and tested. Measured results exhibit the return loss close to 10.0 dB over a band from 3.5 GHz to 10.0 GHz."}
{"_id": "851a1732881a7cc7feed063064d447854084b154", "title": "Retrieval of Song Lyrics from Sung Queries", "text": "Retrieving the lyrics of a sung recording from a database of text documents is a research topic that has not received much attention so far. Such a retrieval system has many practical applications, e.g. for karaoke applications or for indexing large song databases by their lyric content. We present a new method for lyrics retrieval. An acoustic model trained on singing is used to obtain phoneme probabilities from sung queries, which are then mapped to phoneme sequences. These are compared against lines of textual lyrics in a large corpus in order to retrieve the best-matching song. The approach is tested on three sung datasets. Lyrics are retrieved from a set of 300 possible songs (12,000 lines of lyrics). The results are highly encouraging and could be used further to perform automatic lyrics alignment and keyword spotting for large databases of songs, or for retrieving lyrics from the internet."}
{"_id": "a8927a4e38f332513ad649b789b7da54f66bbca6", "title": "IT worker turnover: an empirical examination of intrinsic motivation", "text": "This study examined intrinsic motivation's influence on information technology (IT) workers' attitudes and intentions. Drawing on Human Resource management research (Eby & Freeman, 1999), we model Intrinsic motivation as mediating the influence of motivators (i.e., intrinsic job characteristics) and hygiene factors (i.e., pay and supervisory satisfaction) on workplace attitudes (i.e., job satisfaction and affective organizational commitment). In turn, workplace attitudes mediate the influence of intrinsic motivation on turnover intention. The model was tested using data collected from public sector IT workers in the Southeastern United States. Although intrinsic motivation did not fully mediate the influence of motivators and hygiene factors, findings suggest that intrinsic motivation positively influences workplace attitudes and has a mediated influence on turnover intent. Implications for research and practice are offered."}
{"_id": "82f158082a611807cd7fbb24667561c0f61875b5", "title": "DSCarver: decompose-and-spiral-carve for subtractive manufacturing", "text": "We present an automatic algorithm for subtractive manufacturing of freeform 3D objects using high-speed machining (HSM) via CNC. A CNC machine operates a cylindrical cutter to carve off material from a 3D shape stock, following a tool path, to \"expose\" the target object. Our method decomposes the input object's surface into a small number of patches each of which is fully accessible and machinable by the CNC machine, in continuous fashion, under a fixed cutter-object setup configuration. This is achieved by covering the input surface with a minimum number of accessible regions and then extracting a set of machinable patches from each accessible region. For each patch obtained, we compute a continuous, space-filling, and iso-scallop tool path which conforms to the patch boundary, enabling efficient carving with high-quality surface finishing. The tool path is generated in the form of connected Fermat spirals, which have been generalized from a 2D fill pattern for layered manufacturing to work for curved surfaces. Furthermore, we develop a novel method to control the spacing of Fermat spirals based on directional surface curvature and adapt the heat method to obtain iso-scallop carving. We demonstrate automatic generation of accessible and machinable surface decompositions and iso-scallop Fermat spiral carving paths for freeform 3D objects. Comparisons are made to tool paths generated by commercial software in terms of real machining time and surface quality."}
{"_id": "1f1940fa524e2a762fbd67fe09ea1acec05470eb", "title": "ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models", "text": "While deep learning models have achieved state-of-the-art accuracies for many prediction tasks, understanding these models remains a challenge. Despite the recent interest in developing visual tools to help users interpret deep learning models, the complexity and wide variety of models deployed in industry, and the large-scale datasets that they used, pose unique design challenges that are inadequately addressed by existing work. Through participatory design sessions with over 15 researchers and engineers at Facebook, we have developed, deployed, and iteratively improved ActiVis, an interactive visualization system for interpreting large-scale deep learning models and results. By tightly integrating multiple coordinated views, such as a computation graph overview of the model architecture, and a neuron activation view for pattern discovery and comparison, users can explore complex deep neural network models at both the instance-and subset-level. ActiVis has been deployed on Facebook's machine learning platform. We present case studies with Facebook researchers and engineers, and usage scenarios of how ActiVis may work with different models."}
{"_id": "3629ace050c32b62e771a9a390d96f36186f33d7", "title": "Efficient and robust computation of an approximated medial axis", "text": "The medial axis can be viewed as a compact representation for an arbitrary model; it is an essential geometric structure in many applications. A number of practical algorithms for its computation have been aimed at speeding up its computation and at addressing its instabilities. In this paper we propose a new algorithm to compute the medial axis with arbitrary precision. It exhibits several desirable properties not previously combined in a practical and efficient algorithm. First, it allows for a tradeoff between computation time and accuracy, making it well-suited for applications in which an approximation of the medial axis suffices, but computational efficiency is of particular concern. Second, it is output sensitive: the computation complexity of the algorithm does not depend on the size of the representation of a model, but on the size of the representation of the resulting medial axis. Third, the densities of the approximated medial axis points in different areas are adaptive to local free space volumes, based on the assumption that a coarser approximation in wide open area can still suffice the requirements of the applications. We present theoretical results, bounding the error introduced by the approximation process. The algorithm has been implemented and experimental results are presented that illustrate its computational efficiency and robustness."}
{"_id": "260850ce2353c25d9ab125f9820b3185087cac4a", "title": "Can you hear me now?!: it must be BGP", "text": "Industry observers expect VoIP to eventually replace most of the existing land-line telephone connections. Currently however, quality and reliability concerns largely limit VoIP usage to either personal calls on cross-domain services such as Skype and Vonage, or to single-domain services such as trunking, where a core ISP carries long-distance voice as VoIP only within its backbone, to save cost with a unified voice/data infrastructure. This paper investigates the factors that prevent cross-domain VoIP deployments from achieving the quality and reliability of existing land-line telephony (PSTN). We ran over 50,000 VoIP phone calls between 24 locations in US and Europe for a three-week period. Our results indicate that VoIP usability is hindered as much by BGP's slow convergence as network congestion. In fact, about half of the unintelligible VoIP samples in our data occur within 10 minutes of a BGP update"}
{"_id": "5912cc523fc7583e24ac2985b67d1d4c24b11a07", "title": "Out-of-bag Estimation", "text": "In bagging, predictors are constructed using bootstrap samples from the training set and then aggregated to form a bagged predictor. Each bootstrap sample leaves out about 37% of the examples. These left-out examples can be used to form accurate estimates of important quantities. For instance, they can be used to give much improved estimates of node probabilities and node error rates in decision trees. Using estimated outputs instead of the observed outputs improves accuracy in regression trees. They can also be used to give nearly optimal estimates of generalization errors for bagged predictors. Introduction: We assume that there is a training set T= {(y n ,x n), n=1, ... ,N} and a method for constructing a predictor Q(x,T) using the given training set. The output variable y can either be a class label (classification) or numerical (regression). In bagging (Breiman[1996a]) a sequence of training sets T B,1 , ... , T B,K are generated of the same size as T by bootstrap selection from T. Then K predictors are constructed such that the kth predictor Q(x,T k,B) is based on the kth bootstrap training set. It was shown that if these predictors are aggregated-averaging in regression or voting in classification, then the resultant predictor can be considerably more accurate than the original predictor."}
{"_id": "22ee2316b96c41f743082bd9de679104d79c683a", "title": "Intelligent Human Robot Interaction (iHRI)", "text": ""}
{"_id": "fb4d55e909081d78320bf9f9bde4ce08e8b18453", "title": "Near-Field Microwave Imaging Based on Aperture Raster Scanning With TEM Horn Antennas", "text": "The design, fabrication, and characterization of an ultrawideband (UWB) antenna for near-field microwave imaging of dielectric objects are presented together with the imaging setup. The focus is on an application in microwave breast tumor detection. The new antenna operates as a sensor with the following properties: 1) direct contact with the imaged body; 2) more than 90% of the microwave power is coupled directly into the tissue; 3) UWB performance; 4) excellent de-coupling from the outside environment; 5) small size; and 6) simple fabrication. The antenna characterization includes return loss, total efficiency, near-field directivity, fidelity, and group velocity. The near-field imaging setup employs planar aperture raster scanning. It consists of two antennas aligned along each other's boresight and moving together to scan two parallel apertures. The imaged object lies between the two apertures. With a blind de-convolution algorithm, the images are de-blurred. Simulation and experimental results confirm the satisfactory performance of the antenna as an UWB sensor for near-field imaging."}
{"_id": "59c752f88015604bf5070c56879ee1ac168d6c8b", "title": "Predicting retweet count using visual cues", "text": "Social media platforms allow rapid information diffusion, and serve as a source of information to many of the users. Particularly, in Twitter information provided by tweets diffuses over the users through retweets. Hence, being able to predict the retweet count of a given tweet is important for understanding and controlling information diffusion on Twitter. Since the length of a tweet is limited to 140 characters, extracting relevant features to predict the retweet count is a challenging task. However, visual features of images linked in tweets may provide predictive features. In this study, we focus on predicting the expected retweet count of a tweet by using visual cues of an image linked in that tweet in addition to content and structure-based features."}
{"_id": "43aa9d6b73bd0bdb0e910ac5a6ac8970186c0745", "title": "The circulatory system of amphioxus (Branchiostoma floridae)", "text": "The endothelial cell of Amphioxus has an endocytotic capability since both ferritin and horseradish peroxidase, when injected into the endostylar artery, became localized within endothelial cell cytoplasmic vesicles. The endothelial cells also contain vesicles within which the lysosomal enzyme acid phosphatase has been localized. Because of the presence of an endocytotic capability and lysosomes, these cells possess both the means and the method for the acquisition and disposal of exogenous proteins."}
{"_id": "bca4f8b6605999481b2fe1bba73db75273b29fb9", "title": "Algebraic MACs and Keyed-Verification Anonymous Credentials", "text": "We consider the problem of constructing anonymous credentials for use in a setting where the issuer of credentials is also the verifier, or more generally where the issuer and verifier have a shared key. In this setting we can use message authentication codes (MACs) instead of public key signatures as the basis for the credential system.\n To this end, we construct two algebraic MACs in prime-order groups, along with efficient protocols for issuing credentials, asserting possession of a credential, and proving statements about hidden attributes (e.g., the age of the credential owner). We prove the security of the first scheme in the generic group model, and prove the security of the second scheme\\dash using a dual-system-based approach\\dash under decisional Diffie-Hellman (DDH). Our MACs are of independent interest, as they are the only uf-cmva-secure MACs with efficient proofs of knowledge.\n Finally, we compare the efficiency of our new systems to two existing constructions of anonymous credentials: U-Prove and Idemix. We show that the performance of the new schemes is competitive with U-Prove (which does not have multi-show unlinkability), and many times faster than Idemix."}
{"_id": "747773ea59e466927abf6505800c506286e82b60", "title": "Predicting User Behavior in Display Advertising via Dynamic Collective Matrix Factorization", "text": "Conversion prediction and click prediction are two important and intertwined problems in display advertising, but existing approaches usually look at them in isolation. In this paper, we aim to predict the conversion response of users by jointly examining the past purchase behavior and the click response behavior. Additionally, we model the temporal dynamics between the click response and purchase activity into a unified framework. In particular, a novel matrix factorization approach named dynamic collective matrix factorization (DCMF) is proposed to address this problem. Our model considers temporal dynamics of post-click conversions and also takes advantages of the side information of users, advertisements, and items. Experiments on a real-world marketing dataset show that our model achieves significant improvements over several baselines."}
{"_id": "35ac10ea6e39fe773e80c33bbac84e158ff6fe5a", "title": "Malicious web content detection using machine leaning", "text": "Naive users using a browser have no idea about the back-end of the page. The users might be tricked into giving away their credentials or downloading malicious data. Our aim is to create an extension for Chrome which will act as middleware between the users and the malicious websites, and mitigate the risk of users succumbing to such websites. Further, all harmful content cannot be exhaustively collected as even that is bound to continuous development. To counter this we are using machine learning-to train the tool and categorize the new content it sees every time into the particular categories so that corresponding action can be taken."}
{"_id": "70560383cbf7c0dc5e9be1f2fd9efba905377095", "title": "Accelerating Online CP Decompositions for Higher Order Tensors", "text": "Tensors are a natural representation for multidimensional data. In recent years, CANDECOMP/PARAFAC (CP) decomposition, one of the most popular tools for analyzing multi-way data, has been extensively studied and widely applied. However, today's datasets are often dynamically changing over time. Tracking the CP decomposition for such dynamic tensors is a crucial but challenging task, due to the large scale of the tensor and the velocity of new data arriving. Traditional techniques, such as Alternating Least Squares (ALS), cannot be directly applied to this problem because of their poor scalability in terms of time and memory. Additionally, existing online approaches have only partially addressed this problem and can only be deployed on third-order tensors. To fill this gap, we propose an efficient online algorithm that can incrementally track the CP decompositions of dynamic tensors with an arbitrary number of dimensions. In terms of effectiveness, our algorithm demonstrates comparable results with the most accurate algorithm, ALS, whilst being computationally much more efficient. Specifically, on small and moderate datasets, our approach is tens to hundreds of times faster than ALS, while for large-scale datasets, the speedup can be more than 3,000 times. Compared to other state-of-the-art online approaches, our method shows not only significantly better decomposition quality, but also better performance in terms of stability, efficiency and scalability."}
{"_id": "3eae95390ead5b7f2b89c7088b6a1d3e382e6307", "title": "A reconfigurable low-noise dynamic comparator with offset calibration in 90nm CMOS", "text": "This paper presents a reconfigurable, low offset, low noise and high speed dynamic clocked-comparator for medium to high resolution Analog to Digital Converters (ADCs). The proposed comparator reduces the input referred noise by half and shows a better output driving capability when compared with the previous work. The offset, noise and power consumption can be controlled by a clock delay which allows simple reconfiguration. Moreover, the proposed offset calibration technique improves the offset voltage from 11.6mV to 533\u03bcV at 1 sigma. A prototype of the comparator is implemented in 90nm 1P8M CMOS with experimental results showing 320\u03bcV input referred noise at 1.5GHz with 1.2V supply."}
{"_id": "f86e89362164efa5da71b4b80c96e6f31838ee0c", "title": "Educational data mining applications and tasks: A survey of the last 10 years", "text": "Educational Data Mining (EDM) is the field of using data mining techniques in educational environments. There exist various methods and applications in EDM which can follow both applied research objectives such as improving and enhancing learning quality, as well as pure research objectives, which tend to improve our understanding of the learning process. In this study we have studied various tasks and applications existing in the field of EDM and categorized them based on their purposes. We have compared our study with other existing surveys about EDM and reported a taxonomy of task."}
{"_id": "39c8e0e9e7f640773d661246beda7af4df6d2dcc", "title": "A CSRR Loaded MIMO Antenna System for ISM Band Operation", "text": "A 2 \u00d7 2 (four-element) multiple-input multiple-output (MIMO) patch antenna system is designed and fabricated for a 2.45-GHz ISM band operation. It uses complementary split-ring resonator (CSRR) loading on its ground plane for antenna miniaturization. This reduces the single-element antenna size by 76%. The total board size of the proposed MIMO antenna system, including the GND plane is 100 \u00d7 50 \u00d7 0.8 mm3, while the single-patch antenna element has a size of 14 \u00d7 18 mm2. The antenna is fabricated and tested. Measured results are in good agreement with simulations. A minimum measured isolation of 10 dB is obtained given the close interelement spacing of 0.17\u03bb. The maximum measured gain for a single operating element is -0.8 dBi."}
{"_id": "c6b43c352de668a43949546de3016ef9cbc5fe74", "title": "Asymptotic Confidence Regions for High-Dimensional Structured Sparsity", "text": "In the setting of high-dimensional linear regression models, we propose two frameworks for constructing pointwise and group confidence sets for penalized estimators, which incorporate prior knowledge about the organization of the nonzero coefficients. This is done by desparsifying the estimator by S. van de Geer and B. Stucky and S. van de Geer et\u00a0al., then using an appropriate estimator for the precision matrix $\\Theta$. In order to estimate the precision matrix a corresponding structured matrix norm penalty has to be introduced. After normalization the result is an asymptotic pivot. The asymptotic behavior is studied and simulations are added to study the differences between the two schemes."}
{"_id": "09f6fa1869be4d3d9188d1313061602038cb97d4", "title": "YAGO: A Large Ontology from Wikipedia and WordNet", "text": "This article presents YAGO, a large ontology with high coverage and precision. YAGO has been automatically derived from Wikipedia and WordNet. It comprises entities and relations, and currently contains more than 1.7 million entities and 15 million facts. These include the taxonomic Is-A hierarchy as well as semantic relations between entities. The facts for YAGO have been extracted from the category system and the infoboxes of Wikipedia and have been combined with taxonomic relations from WordNet. Type Ontologies Information extraction K checking techniques help us keep YAGO\u2019s precision at 95%\u2014as proven by an extensive evaluation study. YAGO is based on a clean logical model with a decidable consistency. Furthermore, it allows representing n-ary relations in a natural way while maintaining compatibility with RDFS. A powerful query model s data"}
{"_id": "696385d0b5b3ad778cc85f189bd2ade5a109736d", "title": "Knowledge-Rich Word Sense Disambiguation Rivaling Supervised Systems", "text": "One of the main obstacles to highperformance Word Sense Disambiguation (WSD) is the knowledge acquisition bottleneck. In this paper, we present a methodology to automatically extend WordNet with large amounts of semantic relations from an encyclopedic resource, namely Wikipedia. We show that, when provided with a vast amount of high-quality semantic relations, simple knowledge-lean disambiguation algorithms compete with state-of-the-art supervised WSD systems in a coarse-grained all-words setting and outperform them on gold-standard domain-specific datasets."}
{"_id": "16e8fd6f9ba842530e71e4d20446f3ba42b06842", "title": "Arabic Tokenization, Part-of-Speech Tagging and Morphological Disambiguation in One Fell Swoop", "text": "We present an approach to using a morphological analyzer for tokenizing and morphologically tagging (including partof-speech tagging) Arabic words in one process. We learn classifiers for individual morphological features, as well as ways of using these classifiers to choose among entries from the output of the analyzer. We obtain accuracy rates on all tasks in the high nineties."}
{"_id": "c9ca8771762607d8a90f2a4953458eb0687c87f6", "title": "The curvelet transform for fusion of very-high resolution multi-spectral and panchromatic images", "text": "This paper presents a novel image fusion method, suitable for pan-sharpening of multispectral (MS) bands, based on multi-resolution analysis (MRA). The low-resolution MS bands are sharpened by injecting high-pass directional details extracted from the high-resolution panchromatic (Pan) image by means of the curvelet transform, which is a non-separable MRA, whose basis function are directional edges with progressively increasing resolution. The advantage with respect to conventional separable MRA, either decimated or not, is twofold: directional detail coefficients matching image edges may be preliminarily soft-thresholded to achieve denoising better than in the separable wavelet domain; modeling of the relationships between high-resolution detail coefficients of MS bands and of the Pan image is more fitting, being carried out in a directional wavelet domain. Experiments carried out on a very-high resolution MS + Pan QuickBird image show that the proposed curvelet method quantitatively outperforms state-of-the art image fusion methods, in terms of geometric, radiometric, and spectral fidelity."}
{"_id": "4e7bee71d10905bc0dc96b079791fcdced38ac9c", "title": "Motivation and Second Language Acquisition 1", "text": "This paper posits four stages of language acquisition, identified as elemental, consolidation, conscious expression, and automaticity and thought, and considers the role of motivation in this process. It distinguishes between two types of motivation, language learning motivation and classroom motivation, indicating how these relate to two distinct contexts, the cultural and the educational through their influence on integrativeness and attitudes toward the learning situation. It discusses how the two types of motivation are differentially involved in the four stages, and empirical support for this perspective is presented in the form of path analyses of two samples of students from Catalonia."}
{"_id": "e69fd8723290b98d3ea203fb3a35e9b1aee81eaf", "title": "Leveraging Augmented Reality Training Tool for Medical Education: a Case Study in Central Venous Catheterization", "text": "Central venous catheterization is a relatively common bedside medical procedure that involves the placement of a catheter into a patient\u00bbs internal jugular vein in order to administer medication or fluids. To learn this technique, medical students traditionally practice on training mannequins under the guidance of a clinical instructor. The objective of this project was to co-develop a standardized augmented reality solution for teaching medical students this procedure, which would enable them to practice independently and at their own pace. Following an iterative design and prototyping process, we compiled a comprehensive set of usability heuristics specific to augmented reality healthcare applications, used to identify unique usability issues associated with augmented reality software. This approach offers a better strategy to improve the usability of augmented reality system and increases the potential to standardize and render medical education more accessible. The benefits of applying augmented reality to simulated medical education come with heavy consequences in the event of poor learning outcomes. The usability of these systems is paramount to ensure the development of clinical competence is facilitated and not hindered."}
{"_id": "275315ca82ce2ef52c2654e19714773b59a9d1e7", "title": "Manipulator state estimation with low cost accelerometers and gyroscopes", "text": "Robot manipulator designs are increasingly focused on low cost approaches, especially those envisioned for use in unstructured environments such as households, office spaces and hazardous environments. The cost of angular sensors varies based on the precision offered. For tasks in these environments, millimeter order manipulation errors are unlikely to cause drastic reduction in performance. In this paper, estimates the joint angles of a manipulator using low cost triaxial accelerometers by taking the difference between consecutive acceleration vectors. The accelerometer-based angle is compensated with a uniaxial gyroscope using a complementary filter to give robust measurements. Three compensation strategies are compared: complementary filter, time varying complementary filter, and extended Kalman filter. This sensor setup can also accurately track the joint angle even when the joint axis is parallel to gravity and the accelerometer data does not provide useful information. In order to analyze this strategy, accelerometers and gyroscopes were mounted on one arm of a PR2 robot. The arm was manually moved smoothly through different trajectories in its workspace while the joint angle readings from the on-board optical encoders were compared against the joint angle estimates from the accelerometers and gyroscopes. The low cost angle estimation strategy has a mean error 1.3\u00b0 over the three joints estimated, resulting in mean end effector position errors of 6.1 mm or less. This system provides an effective angular measurement as an alternative to high precision encoders in low cost manipulators and as redundant measurements for safety in other manipulators."}
{"_id": "a453f5cf8e3677150ec1ef2a17952492f2389df1", "title": "Agents of Things (AoT): An intelligent operational concept of the Internet of Things (IoT)", "text": "In this conceptual paper, we review the definitions, characteristics, and architecture of the Internet of Things (IoT) concept. We then identify the deficiencies of the IoT concept, analyze them, and discover the issue of lack of reasoning and intelligence in the IoT concept. We propose a solution to augment the IoT with intelligent software agents resulting in a new concept called the Agents of Things (AoT). The paper presents the AoT architecture and a scenario for one application of the proposed concept. Finally, it discusses the benefits of implementing AoT concept to solve real world issues and the future work."}
{"_id": "ac5de4a32acc64e8616ee6f203e8eaed9cbd98c8", "title": "Natural history and evolutionary principles of gene duplication in fungi", "text": "Gene duplication and loss is a powerful source of functional innovation. However, the general principles that govern this process are still largely unknown. With the growing number of sequenced genomes, it is now possible to examine these events in a comprehensive and unbiased manner. Here, we develop a procedure that resolves the evolutionary history of all genes in a large group of species. We apply our procedure to seventeen fungal genomes to create a genome-wide catalogue of gene trees that determine precise orthology and paralogy relations across these species. We show that gene duplication and loss is highly constrained by the functional properties and interacting partners of genes. In particular, stress-related genes exhibit many duplications and losses, whereas growth-related genes show selection against such changes. Whole-genome duplication circumvents this constraint and relaxes the dichotomy, resulting in an expanded functional scope of gene duplication. By characterizing the functional fate of duplicate genes we show that duplicated genes rarely diverge with respect to biochemical function, but typically diverge with respect to regulatory control. Surprisingly, paralogous modules of genes rarely arise, even after whole-genome duplication. Rather, gene duplication may drive the modularization of functional networks through specialization, thereby disentangling cellular systems."}
{"_id": "75041575e3a9fa92af93111fb0a93565efed4858", "title": "Implementation of potential field method for mobile robot navigation in greenhouse environment with WSN support", "text": "This paper deals with the implementation of mobile measuring station in greenhouse environment navigated using potential field method. The function of a greenhouse is to create the optimal growing conditions for the full life of the plants. Using autonomous measuring systems helps to monitor all the necessary parameters for creating the optimal environment in the greenhouse. The robot equipped with sensors is capable of driving to the end and back along crop rows inside the greenhouse. It introduces a wireless sensor network that was used for the purpose of measuring and controlling the greenhouse application. Continuous advancements in wireless technology and miniaturization have made the deployment of sensor networks to monitor various aspects of the environment increasingly flexible."}
{"_id": "ebebfef4557f9628f44b265ee32bacb75966a3a2", "title": "Automatic common-centroid layout generation for binary-weighted capacitors in charge-scaling DAC", "text": "As the precision of the capacitance ratios among binary-weighted capacitors is the key to accuracy/performance of charge-scaling digital-to-analog converters, it is very important to generate a highly matched common-centroid layout with minimum routing-induced parasitics. However, most of the previous works only focused on common-centroid placement optimization with the consideration of random and systematic mismatch. This paper introduces a novel common-centroid capacitor layout generation approach to minimize the parasitic impact on circuit accuracy/performance. Experimental results show that, compared with the manual layout, the layout generated by the presented approach can achieve even smaller layout area and better circuit accuracy/performance within much shorter time."}
{"_id": "a920c32587dc3c1a29ced1045717781654482597", "title": "Scaffolds in tissue engineering bone and cartilage.", "text": "Musculoskeletal tissue, bone and cartilage are under extensive investigation in tissue engineering research. A number of biodegradable and bioresorbable materials, as well as scaffold designs, have been experimentally and/or clinically studied. Ideally, a scaffold should have the following characteristics: (i) three-dimensional and highly porous with an interconnected pore network for cell growth and flow transport of nutrients and metabolic waste; (ii) biocompatible and bioresorbable with a controllable degradation and resorption rate to match cell/tissue growth in vitro and/or in vivo; (iii) suitable surface chemistry for cell attachment, proliferation, and differentiation and (iv) mechanical properties to match those of the tissues at the site of implantation. This paper reviews research on the tissue engineering of bone and cartilage from the polymeric scaffold point of view."}
{"_id": "bd9823202db4aa53f1a35f6a35f56843d0d0c71e", "title": "3D Ego-Pose Estimation via Imitation Learning", "text": "Ego-pose estimation, i.e., estimating a person\u2019s 3D pose with a single wearable camera, has many potential applications in activity monitoring. For these applications, both accurate and physically plausible estimates are desired, with the latter often overlooked by existing work. Traditional computer vision-based approaches using temporal smoothing only take into account the kinematics of the motion without considering the physics that underlies the dynamics of motion, which leads to pose estimates that are physically invalid. Motivated by this, we propose a novel control-based approach to model human motion with physics simulation and use imitation learning to learn a videoconditioned control policy for ego-pose estimation. Our imitation learning framework allows us to perform domain adaption to transfer our policy trained on simulation data to real-world data. Our experiments with real egocentric videos show that our method can estimate both accurate and physically plausible 3D ego-pose sequences without observing the cameras wearer\u2019s body."}
{"_id": "7895dad6f44faaecb765226b03459604dc95c339", "title": "Jointless structure and under-actuation mechanism for compact hand exoskeleton", "text": "It is important for a wearable robot to be compact and sufficiently light for use as an assistive device. Since human fingers are arranged in a row in dense space, the concept of traditional wearable robots using a rigid frame and a pin joint result in size and complexity problems. A structure without a conventional pin joint, called a jointless structure, has the potential to be used as a wearable robotic hand because the human skeleton and joint can replace the robot's conventional structure. Another way to reduce the weight of the system is to use under-actuation. Under-actuation enables adaptive grasping with less number of actuators for robotic hands. Differential mechanisms are widely used for multi-finger under-actuation; however, they require additional working space. We propose a design with a jointless structure and a novel under-actuation mechanism to reduce the size and weight of a hand exoskeleton. Using these concepts, we developed a prototype that weighs only 80 grams. To evaluate the prototype, fingertip force and blocked force are measured. Fingertip force is the force that can be applied by the finger of the hand exoskeleton on the object surface. The fingertip force is about 18 N when actuated by a tension force of 35 N from the motor. 18 N is sufficient for simple pinch motion in daily activities. Another factor related to performance of the under-actuation mechanism is blocked force, which is a force required to stop one finger while the other finger keeps on moving. It is measured to be 0.5 N, which is sufficiently small. With these experiments, the feasibility of the new hand exoskeleton has been shown."}
{"_id": "7a59cea2d03ddbcc8a74a262c205533dcd66f061", "title": "Multiband Handset Antenna Combining a PIFA, Slots, and Ground Plane Modes", "text": "A multiband handset antenna combining a PIFA and multiple slots on a ground plane is presented. It is shown by means of simulations that the slots on the ground plane have a double function: to tune the ground plane resonance at low frequencies (f \u00bf 900 MHz) and to act as parasitic radiators at high frequencies (f \u00bf 1800 MHz). A prototype is designed and built featuring a behavior suitable for low frequencies (GSM850 and GSM900) and for high frequencies spanning from DCS1800 to Bluetooth, and including, for instance, PCS1900, UMTS2000, and other possible systems. Reflection coefficient, efficiency, and radiation patterns are measured and compared with a design without slots to prove the advantages of the slotted ground plane. The component effect is investigated to determine critical areas where the placement is not recommended. Besides, the effect of the slot of the ground plane on SAR is investigated, by discussing the effect of the ground plane and slot modes for two phone positions. The total antenna volume of the proposed design is 40 \u00d7 15 \u00d7 6 mm3."}
{"_id": "35c46d95b51f174db7a4114c9678c6dea148f5a6", "title": "3D Face Recognition Using Shapes of Facial Curves", "text": "Recognition of human beings using shapes of their full facial surfaces is a difficult problem. Our approach is to approximate a facial surface using a collection of (closed) facial curves, and to compare surfaces by comparing their corresponding curves. The differences between shapes of curves are quantified using lengths of geodesic paths between them on a pre-defined curve shape space. The metric for comparing facial surfaces is a composition of the metric involving individual facial curves. These ideas are demonstrated in the context of face recognition using the nearest-neighbor classifier"}
{"_id": "7d21c8681571ec3d46b401b932db328f7659fc81", "title": "CSRR loaded Sierpinski gasket fractal antenna for multiband applications", "text": "The CSRR loaded Sierpinski fractal antenna for multiband applications are presented. This model fairs that the multiband presence of the Sierpinski fractal antenna is a fallout of its fractal nature. The antenna consists of iterations of sierpinski gasket with a complementary split ring resonator. In plan to provoke multi resonant frequency band aspect, these CSRR are implanted back side of the substrate. The simulated results offers a three operating bands, covering C band at 5.72 GHz, Ku band at 14.3 GHz and 16.06 GHz. The size of the CSRR loaded Gasket antenna is 14 mm \u00d7 12 mm \u00d7 1.6 mm3 printed on FR4 substrate. The parametric review and simulated radiation component are presented."}
{"_id": "a83ece75211d80fe7436de13796e79186ee76667", "title": "A Lie Never Lives to be Old: The Effects of Fake Social Information on Consumer Decision-Making in Crowdfunding", "text": "The growing success of social media led to a strong presence of social information such as customer product reviews and product ratings in electronic markets. While this information helps consumers to better assess the quality of goods before purchase, its impact on consumer decision-making also incentivizes sellers to game the system by creating fake data in favor of specific goods in order to deliberately mislead consumers. As a consequence, consumers could make suboptimal choices or could choose to disregard social information altogether. In this exploratory study, we assess the effects nongenuine social information has on the consumer\u2019s decision-making in the context of reward-based crowdfunding. Specifically, we capture unnatural peaks in the number of Facebook Likes that a specific crowdfunding campaign receives on the platforms Kickstarter and Indiegogo and observe subsequent campaign performance. Our results show that fake Facebook Likes have a very short-term positive effect on the number of backers participating in the respective crowdfunding campaign. However, this peak in participation is then followed by a period in which participation is lower than prior to the existence of the non-genuine social information. We further discuss circumstances that foster this artificial manipulation of quality signals."}
{"_id": "ae3a79dfee20e4a3c4a919049079b0ab153e0f45", "title": "Opening black box Data Mining models using Sensitivity Analysis", "text": "There are several supervised learning Data Mining (DM) methods, such as Neural Networks (NN), Support Vector Machines (SVM) and ensembles, that often attain high quality predictions, although the obtained models are difficult to interpret by humans. In this paper, we open these black box DM models by using a novel visualization approach that is based on a Sensitivity Analysis (SA) method. In particular, we propose a Global SA (GSA), which extends the applicability of previous SA methods (e.g. to classification tasks), and several visualization techniques (e.g. variable effect characteristic curve), for assessing input relevance and effects on the model's responses. We show the GSA capabilities by conducting several experiments, using a NN ensemble and SVM model, in both synthetic and real-world datasets."}
{"_id": "071036460c07283adec2bd262c526aeafeca502c", "title": "Learning to predict slip for ground robots", "text": "In this paper we predict the amount of slip an exploration rover would experience using stereo imagery by learning from previous examples of traversing similar terrain. To do that, the information of terrain appearance and geometry regarding some location is correlated to the slip measured by the rover while this location is being traversed. This relationship is learned from previous experience, so slip can be predicted later at a distance from visual information only. The advantages of the approach are: 1) learning from examples allows the system to adapt to unknown terrains rather than using fixed heuristics or predefined rules; 2) the feedback about the observed slip is received from the vehicle's own sensors which can fully automate the process; 3) learning slip from previous experience can replace complex mechanical modeling of vehicle or terrain, which is time consuming and not necessarily feasible. Predicting slip is motivated by the need to assess the risk of getting trapped before entering a particular terrain. For example, a planning algorithm can utilize slip information by taking into consideration that a slippery terrain is costly or hazardous to traverse. A generic nonlinear regression framework is proposed in which the terrain type is determined from appearance and then a nonlinear model of slip is learned for a particular terrain type. In this paper we focus only on the latter problem and provide slip learning and prediction results for terrain types, such as soil, sand, gravel, and asphalt. The slip prediction error achieved is about 15% which is comparable to the measurement errors for slip itself"}
{"_id": "9cb7c36728cd09cabf30a3bb04d5daefd2a86e9d", "title": "Bartender: a fast and accurate clustering algorithm to count barcode reads", "text": "Motivation\nBarcode sequencing (bar-seq) is a high-throughput, and cost effective method to assay large numbers of cell lineages or genotypes in complex cell pools. Because of its advantages, applications for bar-seq are quickly growing-from using neutral random barcodes to study the evolution of microbes or cancer, to using pseudo-barcodes, such as shRNAs or sgRNAs to simultaneously screen large numbers of cell perturbations. However, the computational pipelines for bar-seq clustering are not well developed. Available methods often yield a high frequency of under-clustering artifacts that result in spurious barcodes, or over-clustering artifacts that group distinct barcodes together. Here, we developed Bartender, an accurate clustering algorithm to detect barcodes and their abundances from raw next-generation sequencing data.\n\n\nResults\nIn contrast with existing methods that cluster based on sequence similarity alone, Bartender uses a modified two-sample proportion test that also considers cluster size. This modification results in higher accuracy and lower rates of under- and over-clustering artifacts. Additionally, Bartender includes unique molecular identifier handling and a 'multiple time point' mode that matches barcode clusters between different clustering runs for seamless handling of time course data. Bartender is a set of simple-to-use command line tools that can be performed on a laptop at comparable run times to existing methods.\n\n\nAvailability and implementation\nBartender is available at no charge for non-commercial use at https://github.com/LaoZZZZZ/bartender-1.1.\n\n\nContact\nsasha.levy@stonybrook.edu or song.wu@stonybrook.edu.\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online."}
{"_id": "30e8abde846ceebe652e644ace7c5770ff343cd8", "title": "The comparation of text mining with Naive Bayes classifier, nearest neighbor, and decision tree to detect Indonesian swear words on Twitter", "text": "Twitter is one of world most famous social media. There are many statement expresed in Twitter like happiness, sadness, public information, etc. Unfortunately, people may got angry to each other and write it down as a tweet on Twitter. Some tweet may contain Indonesian swear words. It's serious problem because many Indonesians may not tolerated swear words. Some Indonesian swear words may have multiple means, not always an Indonesian swear word means insulting. Twitter has provide tweet's data by account, trending topics, and advance keyword. This work try to analyze many tweet about political news, political event, and some Indonesian famous person because the tweet assumed contains many Indonesian swear word. The derived tweets will process in text mining and then analyzed by classification process using Naive Bayes Classifier, Nearest Neighbor, and Decision Tree to detect Indonesian swear word. This work expected to discover the high accurate classification model. It means, the model can differentiate the real meaning of Indonesian swear word contained in tweet."}
{"_id": "2b920fe2d038571693bd96dafd3ed0dbadc4cb67", "title": "Bayesian Symbol-Refined Tree Substitution Grammars for Syntactic Parsing", "text": "We propose Symbol-Refined Tree Substitution Grammars (SR-TSGs) for syntactic parsing. An SR-TSG is an extension of the conventional TSG model where each nonterminal symbol can be refined (subcategorized) to fit the training data. We aim to provide a unified model where TSG rules and symbol refinement are learned from training data in a fully automatic and consistent fashion. We present a novel probabilistic SR-TSG model based on the hierarchical Pitman-Yor Process to encode backoff smoothing from a fine-grained SR-TSG to simpler CFG rules, and develop an efficient training method based on Markov Chain Monte Carlo (MCMC) sampling. Our SR-TSG parser achieves an F1 score of 92.4% in the Wall Street Journal (WSJ) English Penn Treebank parsing task, which is a 7.7 point improvement over a conventional Bayesian TSG parser, and better than state-of-the-art discriminative reranking parsers."}
{"_id": "39f1b108687f643015f96a0c800585a44621f99c", "title": "Parsing as Language Modeling", "text": "We recast syntactic parsing as a language modeling problem and use recent advances in neural network language modeling to achieve a new state of the art for constituency Penn Treebank parsing \u2014 93.8 F1 on section 23, using 2-21 as training, 24 as development, plus tri-training. When trees are converted to Stanford dependencies, UAS and LAS are 95.9% and 94.1%."}
{"_id": "715dde17239c52b3f9924a5a35edc32b0f27830b", "title": "Span-Based Constituency Parsing with a Structure-Label System and Provably Optimal Dynamic Oracles", "text": "Parsing accuracy using efficient greedy transition systems has improved dramatically in recent years thanks to neural networks. Despite striking results in dependency parsing, however, neural models have not surpassed stateof-the-art approaches in constituency parsing. To remedy this, we introduce a new shiftreduce system whose stack contains merely sentence spans, represented by a bare minimum of LSTM features. We also design the first provably optimal dynamic oracle for constituency parsing, which runs in amortized O(1) time, compared to O(n) oracles for standard dependency parsing. Training with this oracle, we achieve the best F1 scores on both English and French of any parser that does not use reranking or external data."}
{"_id": "033eb044ef6a865a53878397633876827b7a8f20", "title": "Character-Aware Neural Language Models", "text": "We describe a simple neural language model that relies only on character-level inputs. Predictions are still made at the word-level. Our model employs a convolutional neural network (CNN) and a highway network over characters, whose output is given to a long shortterm memory (LSTM) recurrent neural network language model (RNN-LM). On the English Penn Treebank the model is on par with the existing state-ofthe-art despite having 60% fewer parameters. On languages with rich morphology (Czech, German, French, Spanish, Russian), the model consistently outperforms a Kneser-Ney baseline and word-level/morpheme-level LSTM baselines, again with far fewer parameters. Our results suggest that on many languages, character inputs are sufficient for language modeling."}
{"_id": "0874f9274a471819b5ed82a07e67003a91913b65", "title": "A Hierarchical Bayesian Language Model Based On Pitman-Yor Processes", "text": "We propose a new hierarchical Bayesian n-gram model of natural languages. Our model makes use of a generalization of the commonly used Dirichlet distributions called Pitman-Yor processes which produce power-law distributions more closely resembling those in natural languages. We show that an approximation to the hierarchical Pitman-Yor language model recovers the exact formulation of interpolated Kneser-Ney, one of the best smoothing methods forn-gram language models. Experiments verify that our model gives cross entropy results superior to interpolated Kneser-Ney and comparable to modified Kneser-Ney."}
{"_id": "5a3306619ab02d8b9b022946f3ddac16af16bcce", "title": "Modelling language evolution: Examples and predictions.", "text": "We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines."}
{"_id": "6c5ffaadd87c3f6caebc4e1dc0a5e793315dcaaa", "title": "Workload Patterns for Quality-Driven Dynamic Cloud Service Configuration and Auto-Scaling", "text": "Cloud service providers negotiate SLAs for customer services they offer based on the reliability of performance and availability of their lower-level platform infrastructure. While availability management is more mature, performance management is less reliable. In order to support an iterative approach that supports the initial static infrastructure configuration as well as dynamic reconfiguration and auto-scaling, an accurate and efficient solution is required. We propose a prediction-based technique that combines a pattern matching approach with a traditional collaborative filtering solution to meet the accuracy and efficiency requirements. Service workload patterns abstract common infrastructure workloads from monitoring logs and act as a part of a first-stage high-perform ant configuration mechanism before more complex traditional methods are considered. This enhances current reactive rule-based scalability approaches and basic prediction techniques based on for example exponential smoothing."}
{"_id": "4778d44e048b3f70ec1a92cf1290622347d00249", "title": "Towards a Methodology for Building Ontologies", "text": "We outline some requirements for a comprehensive methodology for building ontologies, and review some important work that has been done in the area which could contribute to this goal. We describe our own experiences in constructing a signi cant ontology, emphasising the ontology capture phase. We rst consider the very general issue of categorisation in modelling, and relate it to the process of ontology capture. We then describe the procedure that we used to identify the terms and produce de nitions. We describe a successful way to handle ambiguous terms, which can be an enormous obstacle to reaching a shared understanding. Other important ndings include: it may not be necessary to identify competency questions before building the ontology; the meta-ontology can be chosen after detailed text de nitions are produced; de ning terms which are 'cognitively basic' rst can lead to less re-work. AIAI-TR-183 Page 1 of 13"}
{"_id": "74511d4c8ceba09193e87102b824126ec4783f6c", "title": "Practical Byzantine Fault Tolerance", "text": "Our growing reliance on online services accessible on the Internet demands highly-available systems that provide correct service without interruptions. Byzantine faults such as software bugs, operator mistakes, and malicious attacks are the major cause of service interruptions. This thesis describes a new replication algorithm, BFT, that can be used to build highly-available systems that tolerate Byzantine faults. It shows, for the first time, how to build Byzantine-fault-tolerant systems that can be used in practice to implement real services because they do not rely on unrealistic assumptions and they perform well. BFT works in asynchronous environments like the Internet, it incorporates mechanisms to defend against Byzantine-faulty clients, and it recovers replicas proactively. The recovery mechanism allows the algorithm to tolerate any number of faults over the lifetime of the system provided fewer than 1/3 of the replicas become faulty within a small window of vulnerability. The window may increase under a denial-of-service attack but the algorithm can detect and respond to such attacks and it can also detect when the state of a replica is corrupted by an attacker. BFT has been implemented as a generic program library with a simple interface. The BFT library provides a complete solution to the problem of building real services that tolerate Byzantine faults. We used the library to implement the first Byzantine-fault-tolerant NFS file system, BFS. The BFT library and BFS perform well because the library incorporates several important optimizations. The most important optimization is the use of symmetric cryptography to authenticate messages. Public-key cryptography, which was the major bottleneck in previous systems, is used only to exchange the symmetric keys. The performance results show that BFS performs 2% faster to 24% slower than production implementations of the NFS protocol that are not replicated. Therefore, we believe that the BFT library can be used to build practical systems that tolerate Byzantine faults. Thesis Supervisor: Barbara H. Liskov Title: Ford Professor of Engineering"}
{"_id": "2084db2b57901fb489a1740bfdc813147df008a8", "title": "The huggable: a therapeutic robotic companion for relational, affective touch", "text": "Numerous studies have shown the positive benefits of companion animal therapy. Unfortunately, companion animals are not always available. The Huggable is a new type of robotic companion being designed specifically for such cases. It features a full body \u0093sensitive skin\u0094 for relational affective touch, silent, muscle-like, voice coil actuators, an embedded PC with data collection and networking capabilities. In this paper we briefly describe the Huggable and propose a live demonstration of the robot."}
{"_id": "1432a4ec41da1c76446263ffbe8803a1b7779e63", "title": "Scap: stream-oriented network traffic capture and analysis for high-speed networks", "text": "Many network monitoring applications must analyze traffic beyond the network layer to allow for connection-oriented analysis, and achieve resilience to evasion attempts based on TCP segmentation. However, existing network traffic capture frameworks provide applications with just raw packets, and leave complex operations like flow tracking and TCP stream reassembly to application developers. This gap leads to increased application complexity, longer development time, and most importantly, reduced performance due to excessive data copies between the packet capture subsystem and the stream processing module. This paper presents the Stream capture library (Scap), a network monitoring framework built from the ground up for stream-oriented traffic processing. Based on a kernel module that directly handles flow tracking and TCP stream reassembly, Scap delivers to user-level applications flow-level statistics and reassembled streams by minimizing data movement operations and discarding uninteresting traffic at early stages, while it inherently supports parallel processing on multi-core architectures, and uses advanced capabilities of modern network cards. Our experimental evaluation shows that Scap can capture all streams for traffic rates two times higher than other stream reassembly libraries, and can process more than five times higher traffic loads when eight cores are used for parallel stream processing in a pattern matching application."}
{"_id": "7f99509fc97fbbb776212565a62de2e03814303d", "title": "Network navigability in the social Internet of Things", "text": "The Internet of Things is expected to be overpopulated by a very large number of objects, with intensive interactions, heterogeneous communications and millions of services. Consequently, scalability issues will arise from the search of the right object that can provide the desired service. A new paradigm known as Social Internet of Things has been introduced and proposes the integration of social networking concepts into the Internet of Things. The underneath idea is that every object can look for the desired service using its friendships, in a distributed manner. However, in the resulting network, every object will still have to manage a large number of friends, slowing down the search of the services. In this work, we intend to address this issue by analyzing possible strategies to drive the objects to select the appropriate links for the benefit of overall network navigability."}
{"_id": "77eb52603c62437e0d40479f964f9ebca2c0fa19", "title": "Affect recognition from face and body: early fusion vs. late fusion", "text": "This paper presents an approach to automatic visual emotion recognition from two modalities: face and body. Firstly, individual classifiers are trained from individual modalities. Secondly, we fuse facial expression and affective body gesture information first at a feature-level, in which the data from both modalities are combined before classification, and later at a decision-level, in which we integrate the outputs of the monomodal systems by the use of suitable criteria. We then evaluate these two fusion approaches, in terms of performance over monomodal emotion recognition based on facial expression modality only. In the experiments performed the emotion classification using the two modalities achieved a better recognition accuracy outperforming the classification using the individual facial modality. Moreover, fusion at the feature-level proved better recognition than fusion at the decision-level."}
{"_id": "7cb1627415e98a58b28bb64ff1e0a12920266bba", "title": "Low Complexity Damped Gauss-Newton Algorithms for CANDECOMP/PARAFAC", "text": "The damped Gauss-Newton (dGN) algorithm for CANDECOMP /PARAFAC (CP) decomposition has been successfully applied for di fficult tensor factorization such as collinearity of factors, different magnitudes of factors. Nevertheless, for factorization of an N-D tensor of sizeI1 \u00d7 . . . \u00d7 IN with rank R, the algorithm is computationally demanding due to construction of large app roximate Hessian of size (( R \u2211 n In) \u00d7 (R \u2211 n In)) and its inversion. In this paper, we propose a fast implementation o f the dGN algorithm which is based on novel expressions of the inverse approximate Hessian in block form. The new imp le entation has a lower computational complexity, which beside computation of the gradient (this part is commo n to both methods) involves inversion of a matrix of the sizeNR2 \u00d7 NR2, much smaller than the Hessian, if n In \u226b NR. In addition, the implementation has a lower memory requirements, because neither the Hessian nor its in verse need to be stored entire in one time. A variant of the algorithm working with complex valued data is proposed a s well. Complexity and performance of the proposed algorithm is compared with those of dGN and ALS with line sear ch on examples with di fficult benchmark tensors."}
{"_id": "8f393e8510787d55da62a9ee044e3168098962fa", "title": "Capsule Networks for Hyperspectral Image Classification", "text": "Convolutional neural networks (CNNs) have recently exhibited an excellent performance in hyperspectral image classification tasks. However, the straightforward CNNbased network architecture still finds obstacles when effectively exploiting the relationships between hyperspectral imaging (HSI) features in the spectral\u2013spatial domain, which is a key factor to deal with the high level of complexity present in remotely sensed HSI data. Despite the fact that deeper architectures try to mitigate these limitations, they also find challenges with the convergence of the network parameters, which eventually limit the classification performance under highly demanding scenarios. In this paper, we propose a new CNN architecture based on spectral\u2013spatial capsule networks in order to achieve a highly accurate classification of HSIs while significantly reducing the network design complexity. Specifically, based on Hinton\u2019s capsule networks, we develop a CNN model extension that redefines the concept of capsule units to become spectral\u2013 spatial units specialized in classifying remotely sensed HSI data. The proposed model is composed by several building blocks, called spectral\u2013spatial capsules, which are able to learn HSI spectral\u2013spatial features considering their corresponding spatial positions in the scene, their associated spectral signatures, and also their possible transformations. Our experiments, conducted Manuscript received June 9, 2018; revised August 22, 2018; accepted September 19, 2018. This work was supported in part by the Ministerio de Educaci\u00f3n (Resoluci\u00f3n de 26 de diciembre de 2014 y de 19 de noviembre de 2015, de la Secretar\u00eda de Estado de Educaci\u00f3n, Formaci\u00f3n Profesional y Universidades, por la que se convocan ayudas para la formaci\u00f3n de profesorado universitario, de los subprogramas de Formaci\u00f3n y de Movilidad incluidos en el Programa Estatal de Promoci\u00f3n del Talento y su Empleabilidad, and en el marco del Plan Estatal de Investigaci\u00f3n Cient\u00edfica y T\u00e9cnica y de Innovaci\u00f3n 2013\u20132016), in part by the Junta de Extremadura (decreto 297/2014, ayudas para la realizaci\u00f3n de actividades de investigaci\u00f3n y desarrollo tecnol\u00f3gico, and de divulgaci\u00f3n y de transferencia de conocimiento por los Grupos de Investigaci\u00f3n de Extremadura) under Grant GR15005, in part by the Generalitat Valenciana under Contract APOSTD/2017/007, in part by the Spanish Ministry of Economy under Project ESP2016-79503-C2-2P and Project TIN2015-63646-C5-5-R, in part by the National Natural Science Foundation of China under Grant 61771496, in part by the National Key Research and Development Program of China under Grant 2017YFB0502900, and in part by the Guangdong Provincial Natural Science Foundation under Grant 2016A030313254. (Corresponding author: Jun Li.) M. E. Paoletti, J. M. Haut, J. Plaza, and A. Plaza are with the Hyperspectral Computing Laboratory, Department of Technology of Computers and Communications, Escuela Polit\u00e9cnica, University of Extremadura, 10003 C\u00e1ceres, Spain (e-mail: mpaoletti@unex.es; juanmariohaut@unex.es; jplaza@unex.es; aplaza@unex.es). R. Fernandez-Beltran and F. Pla are with the Institute of New Imaging Technologies, Universitat Jaume I, 12071 Castell\u00f3n de la Plana, Spain (e-mail: rufernan@uji.es; pla@uji.es). J. 
Li is with the Guangdong Provincial Key Laboratory of Urbanization and Geosimulation, Center of Integrated Geographic Information Analysis, School of Geography and Planning, Sun Yat-sen University, Guangzhou 510275, China (e-mail: lijun48@mail.sysu.edu.cn). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TGRS.2018.2871782 using five well-known HSI data sets and several state-of-theart classification methods, reveal that our HSI classification approach based on spectral\u2013spatial capsules is able to provide competitive advantages in terms of both classification accuracy and computational time."}
{"_id": "599b6d58b8d72b037319c1dae4853ef37be21ddc", "title": "A randomized trial of two different doses of a SHR-5 Rhodiola rosea extract versus placebo and control of capacity for mental work.", "text": "A randomized, double-blind, placebo-controlled, parallel-group clinical study with an extra non-treatment group was performed to measure the effect of a single dose of standardized SHR-5 Rhodiola rosea extract on capacity for mental work against a background of fatigue and stress. An additional objective was to investigate a possible difference between two doses, one dose being chosen as the standard mean dose in accordance with well-established medicinal use as a psychostimulant/adaptogen, the other dose being 50% higher. Some physiological parameters, e.g. pulse rate, systolic and diastolic blood pressure, were also measured. The study was carried out on a highly uniform population comprising 161 cadets aged from 19 to 21 years. All groups were found to have very similar initial data, with no significant difference with regard to any parameter. The study showed a pronounced antifatigue effect reflected in an antifatigue index defined as a ratio called AFI. The verum groups had AFI mean values of 1.0385 and 1.0195, 2 and 3 capsules respectively, whilst the figure for the placebo group was 0.9046. This was statistically highly significant (p < 0.001) for both doses (verum groups), whilst no significant difference between the two dosage groups was observed. There was a possible trend in favour of the lower dose in the psychometric tests. No such trend was found in the physiological tests."}
{"_id": "493a9d3929b846371516d340617509b0325e31ee", "title": "Euthanasia and physician-assisted suicide: focus on the data.", "text": "1 MBritain and the United States began in the late 19th century. Legislation was periodically proposed only to be defeated until, in 1942, Switzerland decriminalised assistance in suicide for cases when there were no \u201cselfish motives\u201d. In 2002, euthanasia was legalised in the Netherlands and Belgium, then in Luxembourg in 2009, and most recently, in 2015 in Colombia and in 2016 in Canada. PAS, but not euthanasia, has been legalised in five US states. In Oregon, PAS was legalised by popular referendum in 1997. In addition, in 2009, Washington State legalised PAS by referendum and Montana by court ruling; Vermont in 2013 and California in 2015 also legalised PAS by legislation."}
{"_id": "b05faf0ae510cbd7510a6242aafdda7de3088282", "title": "Encoder Semantic Decoder Input Image MultiTask Loss Instance Decoder Depth Decoder Semantic Task Uncertainty Instance Task Uncertainty Depth Task Uncertainty \u03a3", "text": "Numerous deep learning applications benefit from multi-task learning with multiple regression and classification objectives. In this paper we make the observation that the performance of such systems is strongly dependent on the relative weighting between each task\u2019s loss. Tuning these weights by hand is a difficult and expensive process, making multi-task learning prohibitive in practice. We propose a principled approach to multi-task deep learning which weighs multiple loss functions by considering the homoscedastic uncertainty of each task. This allows us to simultaneously learn various quantities with different units or scales in both classification and regression settings. We demonstrate our model learning per-pixel depth regression, semantic and instance segmentation from a monocular input image. Perhaps surprisingly, we show our model can learn multi-task weightings and outperform separate models trained individually on each task."}
{"_id": "c33d36344ca8b751e5d1fc062a54f8294ee0e58c", "title": "Comparing software and hardware schemes for reducing the cost of branches", "text": "Pipelining has become a common technique to increase throughput of the instruction fetch, instruction decode, and instruction execution portions of modern computers. Branch instructions disrupt the flow of instructions through the pipeline, increasing the overall execution cost of branch instructions. Three schemes to reduce the cost of branches are presented in the context of a general pipeline model. Ten realistic Unix domain programs are used to directly compare the cost and performance of the three schemes and the results are in favor of the software-based scheme. For example, the software-based scheme has a cost of 1.65 cycles/branch vs. a cost of 1.68 cycles/branch of the best hardware scheme for a highly pipelined processor (11-stage pipeline). The results are 1.19 (software scheme) vs. 1.23 cycles/branch (best hardware scheme) for a moderately pipelined processor (5-stage pipeline)."}
{"_id": "f0cb57879456b1b8132896fabeccd972b2c500e6", "title": "Comparative Evaluation of Direct Torque Control Strategies for Permanent Magnet Synchronous Machines", "text": "This paper presents a comprehensive evaluation of several direct torque control (DTC) strategies for permanent magnet synchronous machines (PMSMs), namely DTC, model predictive DTC, and duty ratio modulated DTC. Moreover, field-oriented control is also included in this study. The aforementioned control strategies are reviewed and their control performances are analyzed and compared. The comparison is carried out through simulation, finite-element analysis, and experimental results of a PMSM fed by a two-level voltage source inverter. With the intent to fully reveal the advantages and disadvantages of each control strategy, critical evaluation has been conducted on the basis of several criteria: Torque ripple, stator flux ripple, switching frequency of inverter, steady-state control performance, dynamic response, machine losses, parameter sensitivity, algorithm complexity, and stator current total harmonic distortion."}
{"_id": "3c6a8b9766dcb863140ac77ce9323cb77abbbd79", "title": "A Social-Neuroscience Perspective on Empathy", "text": "In recent years, abundant evidence from behavioral and cognitive studies and functional-imaging experiments has indicated that individuals come to understand the emotional and affective states expressed by others with the help of the neural architecture that produces such states in themselves. Such a mechanism gives rise to shared representations, which constitutes one important aspect of empathy, although not the sole one. We suggest that other components, including people\u2019s ability to monitor and regulate cognitive and emotional processes to prevent confusion between self and other, are equally necessary parts of a functional model of empathy. We discuss data from recent functional-imaging studies in support of such amodel and highlight the role of specific brain regions, notably the insula, the anterior cingulate cortex, and the right temporo-parietal region. Because this model assumes that empathy relies on dissociable informationprocessing mechanisms, it predicts a variety of structural or functional dysfunctions, depending on which mechanism is disrupted. KEYWORDS\u2014empathy; intersubjectivity; affective sharing; perspective taking; emotion regulation Empathy refers to the capacity to understand and respond to the unique affective experiences of another person. At an experiential level of description, this psychological construct denotes a sense of similarity between one\u2019s own feelings and those expressed by another person. At a basic level of description, empathy can be conceived of as an interaction between any two individuals, with one experiencing and sharing the feeling of the other. This sharing of feelings does not necessarily imply that one will act or even feel impelled to act in a supportive or sympathetic way. The social and emotional situations eliciting empathy can be quite complex, depending on the feelings experienced by the observed person (target), the relationship of the target to the observer, and the context in which they socially interact. In recent years, there has been a growing interest in the cognitive-affective neuroscience of empathy. In this article, we first discuss what the components of this psychological construct are and then present empirical data that can cast some light on the neurocognitive mechanisms subserving empathy, with a special emphasis on the perception of pain in others. THE MAJOR COMPONENTS OF EMPATHY Despite the various definitions of empathy, there is broad agreement on three primary components: (a) an affective response to another person, which often, but not always, entails sharing that person\u2019s emotional state; (b) a cognitive capacity to take the perspective of the other person; and (c) emotion regulation. Some scholars favor a particular aspect over the others in their definitions. For instance, Hoffman (1981) views empathy as a largely involuntary vicarious response to affective cues from another person, while Batson et al. (1997) emphasize people\u2019s intentional role-taking ability, which taps mainly into cognitive resources. These two aspects represent the opposite sides of the same coin: Depending on how empathy is triggered, the automatic tendency to mimic the expressions of others (bottom-up processing) and the capacity for the imaginative transposing of oneself into the feeling and thinking of another (top-down processing) are differentially involved. 
Moreover, both aspects tap, to some extent, similar neural mechanisms that underpin emotion processing. It is unlikely, however, that the overlap between self- and other representations is absolute. Such a complete overlapping between self and other could lead to personal distress (i.e., a self-focused, aversive response to another\u2019s emotional state). This would consequently hamper the ability to toggle between self- and other perspectives and would not constitute an adaptive behavior. Therefore, self-regulatory processes are at play to prevent confusion between self- and other feelings. Affective Sharing Between Self and Others A number of theorists have pointed out that empathy involves resonating with another person\u2019s unconscious affect and experiencing that person\u2019s experience along with him or her while keeping one\u2019s own self-integrity intact. Notably, Basch (1983) speculated that, because their respective autonomic nervous systems are genetically programmed to respond in a like fashion, a given affective expression by a member of a particular species sometimes triggers a similar response in other members of that species. The view that unconscious automatic mimicry of a target generates in the observer the autonomic response associated with that bodily state and facial expression subsequently received empirical support from a variety of studies as well as observations from ethologists (Preston & de Waal, 2002). For instance, viewing facial expressions triggers similar expressions on one\u2019s own face, even in the absence of conscious recognition of the stimulus. It was proposed that people may \u201ccatch\u201d the emotions of others as a result of feedback generated by elementary motor mimicry of others\u2019 expressive behavior, producing a simultaneous matching emotional experience. Interestingly, Levenson and Ruef (1992) found that a perceiver\u2019s accuracy in inferring a target\u2019s negative emotional states is related to the degree of physiological synchrony between the perceiver and the target. In other words, when the physiological state (e.g., heart rate, muscle activity) of two individuals is more closely matched, they are more accurate at perceiving each other\u2019s feelings. Recently a functional magnetic resonance imaging (fMRI) experiment confirmed these results by showing that when participants are required to observe or to imitate facial expressions of various emotions, increased neurodynamic activity is detected in the brain regions that are implicated in the facial expression of these emotions, including the superior temporal sulcus, the anterior insula, and the amygdala, as well as specific areas of the premotor cortex (Carr, Iacoboni, Dubeau, Mazziotta, & Lenzi, 2003). The similarity between the expression of an emotion and the perception of that emotion has also been demonstrated for disgust. Damage to the insula, a region crucial in monitoring body state, can impair both the experience of disgust and the recognition of social signals (e.g., facial and emotional expression) that convey disgust.
Functional neuroimaging studies (see Decety & Jackson, 2004, for review) have later shown that observing facial expressions of disgust and feelings of disgust activated very similar sites in the anterior insula and anterior cingulate cortex (ACC). Altogether these results point to one basic mechanism for social interaction: the direct link between perception and action. Such a system automatically prompts the observer to resonate with the emotional state of another individual, with the observer emulating the motor representations and associated autonomic and somatic responses of the observed target (Preston & de Waal, 2002). This covert mimicry process is responsible for shared affects and feelings between self and other. Developmental research has demonstrated that motor and affective mimicry are active already in the earliest interactions between infants and caregivers, raising the possibility that these processes are hardwired. The expression of pain provides a crucial signal that can motivate helping behaviors in others. Finding out how individuals perceive others in pain is thus an interesting way to decipher the underlying neural mechanisms of empathy. Recently, a handful of fMRI studies have indicated that the observation of pain in others is mediated by several brain areas that are implicated in processing the affective and motivational aspects of one\u2019s own pain. In one study, participants received painful stimuli and observed signals indicating that their partner, who was present in the same room, had received the same stimuli (Singer et al., 2004). The rostral (or anterior) ACC, the insula, and the cerebellum were active during both conditions. In another study, participants were shown photographs depicting body parts in painful or neutral everyday-life situations, and were asked to imagine the level of pain that these situations would produce (Jackson, Meltzoff, & Decety, 2005). In comparison to neutral situations, painful conditions elicited significant activation in regions involved in the affective aspects of pain processing, notably the ACC and the anterior insula. Moreover, analyses taking into account the behavioral responses of participants revealed that the level of activity within the ACC correlated with ratings of pain that subjects ascribed to the different situations. This finding strongly suggests that this region plays a crucial role in affective modulation, which is triggered by the assessment of painful situations. Altogether, these results lend support to the idea that common neural circuits are involved in representing one\u2019s own and others\u2019 affective pain-related states (Fig. 1). Adopting the Perspective of the Other Humans have the capacity to intentionally adopt the subjective perspective of others by putting themselves into other people\u2019s shoes and imagining what they feel. Such a capacity requires that one mentally simulate the other\u2019s perspective using one\u2019s own neural machinery. In one neuroimaging study, the participants were presented with short written sentences that depicted real-life situations likely to induce social emotions (e.g., shame) or other situations that were emotionally neutral (Ruby & Decety, 2004). Participants were each asked to imagine how they would feel if they were in those situations."}
{"_id": "0a904f9d6b47dfc574f681f4d3b41bd840871b6f", "title": "Author Attribution with CNN\u2019s", "text": "In this report, the results from my CS224D final project are given and explained. The project was based on the application of relatively new neural network architectures, namely convolutional neural networks over word embeddings, to the task of authorship identification. The problem was posed as a classification task and models were evaluated over two datasets, a baseline of my own collection and a competition dataset with pre-existing, published results. Despite robust results on the baseline dataset, the CNN architecture failed to outmatch the best competition submission, although it did outscore most of the competitors\u2019 submissions."}
{"_id": "dcbb2c33c082fedf009139a9456bd90e549aac3d", "title": "iKnow how you walk \u2014 A smartphone based personalized gait diagnosing system", "text": "Humans, due to aging and hormonal changes are prone to pains in their limbs. As a result of which, the fundamental activity of humans i.e., movement pattern of limbs also known as gait is affected. By exerting unequal weight on both limbs in-order to avoid pain in one leg, humans slowly develop an abnormal gait pattern consisting of limping and sideways bend in the posture. This often goes unnoticed for a long time. We propose a system using smartphones that can sense and detect abnormal walking patterns. The sensing of limb movement is performed by an embedded accelerometer in a smartphone and detection of abnormal walk pattern is performed by classifying different features such as stride length, walk velocity etc. By incorporating Naive Bayes and Decision trees classifiers, we obtained close to 89% accuracies in classifying different levels of abnormalaties. Further validation was done by implementing a decision tree based gait variation detector smartphone application across five users which resulted in 90% accuracy."}
{"_id": "7d5aedecdfc4e8f83638bac47eb7cf2f860ec51c", "title": "Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm", "text": "The finite mixture (FM) model is the most commonly used model for statistical segmentation of brain magnetic resonance (MR) images because of its simple mathematical form and the piecewise constant nature of ideal brain MR images. However, being a histogram-based model, the FM has an intrinsic limitation-no spatial information is taken into account. This causes the FM model to work only on well-defined images with low levels of noise; unfortunately, this is often not the the case due to artifacts such as partial volume effect and bias field distortion. Under these conditions, FM model-based methods produce unreliable results. Here, the authors propose a novel hidden Markov random field (HMRF) model, which is a stochastic process generated by a MRF whose state sequence cannot be observed directly but which can be indirectly estimated through observations. Mathematically, it can be shown that the FM model is a degenerate version of the HMRF model. The advantage of the HMRF model derives from the way in which the spatial information is encoded through the mutual influences of neighboring sites. Although MRF modeling has been employed in MR image segmentation by other researchers, most reported methods are limited to using MRF as a general prior in an FM model-based approach. To fit the HMRF model, an EM algorithm is used. The authors show that by incorporating both the HMRF model and the EM algorithm into a HMRF-EM framework, an accurate and robust segmentation can be achieved. More importantly, the HMRF-EM framework can easily be combined with other techniques. As an example, the authors show how the bias field correction algorithm of Guillemaud and Brady (1997) can be incorporated into this framework to achieve a three-dimensional fully automated approach for brain MR image segmentation."}
{"_id": "9314ea06d58532d85f270b398ec24946e0f90452", "title": "The Influence of IS Affordances on Work Practices in Health Care: A Relational Coordination Approach", "text": "High-reliability healthcare organizations, such as Intensive Care Units, operate in challenging high-velocity environments regarding decision-making, uncertainty, and time constraints. Further, these organizations are multidisciplinary, professionally driven, and characterized by a strong hierarchical structure. Given such conditions, effective cross-functional communication is critical for ensuring successful work practices. We propose a theoretical model of IS affordances and Relational Coordination in the presence of functional status differences. We establish a set of propositions regarding how use patterns and types of IS affordances enable or constrain potential effects on Relational Coordination. We are currently conducting a case study of an ICU in a major University hospital, with the goal to iteratively refine our theory and propositions. We hope that this research will offer an in-depth understanding of the types of opportunities related to information systems, and the role of individual-level interpretations and actions across functional groups for these opportunities."}
{"_id": "84c08d24012f5af14589c371d34b146202850c96", "title": "Learning Kernels for Unsupervised Domain Adaptation with Applications to Visual Object Recognition", "text": "Domain adaptation aims to correct the mismatch in statistical properties between the source domain on which a classifier is trained and the target domain to which the classifier is to be applied. In this paper, we address the challenging scenario of unsupervised domain adaptation, where the target domain does not provide any annotated data to assist in adapting the classifier. Our strategy is to learn robust features which are resilient to the mismatch across domains and then use them to construct classifiers that will perform well on the target domain. To this end, we propose novel kernel learning approaches to infer such features for adaptation. Concretely, we explore two closely related directions. In the first direction, we propose unsupervised learning of a geodesic flow kernel (GFK). The GFK summarizes the inner products in an infinite sequence of feature subspaces that smoothly interpolates between the source and target domains. In the second direction, we propose supervised learning of a kernel that discriminatively combines multiple base GFKs. Those base kernels model the source and the target domains at fine-grained granularities. In particular, each base kernel pivots on a different set of landmarks\u2014the most useful data instances that reveal the similarity between the source and the target domains, thus bridging them to achieve adaptation. Our approaches are computationally convenient, automatically infer important hyper-parameters, and are capable of learning features and classifiers discriminatively without demanding labeled data from the target domain. In extensive empirical studies on standard benchmark recognition datasets, our appraches yield state-of-the-art results compared to a variety of competing methods."}
{"_id": "504138c904588fb0548ab4d7b79c8b4c5208bf2d", "title": "Dissecting Contributions of Prefrontal Cortex and Fusiform Face Area to Face Working Memory", "text": "Interactions between prefrontal cortex (PFC) and stimulusspecific visual cortical association areas are hypothesized to mediate visual working memory in behaving monkeys. To clarify the roles for homologous regions in humans, event-related fMRI was used to assess neural activity in PFC and fusiform face area (FFA) of subjects performing a delay-recognition task for faces. In both PFC and FFA, activity increased parametrically with memory load during encoding and maintenance of face stimuli, despite quantitative differences in the magnitude of activation. Moreover, timing differences in PFC and FFA activation during memory encoding and retrieval implied a context dependence in the flow of neural information. These results support existing neurophysiological models of visual working memory developed in the nonhuman primate."}
{"_id": "6e88d09b2adc7a3d9230d324387929ec54a9d886", "title": "Data-intensive applications, challenges, techniques and technologies: A survey on Big Data", "text": "It is already true that Big Data has drawn huge attention from researchers in information sciences, policy and decision makers in governments and enterprises. As the speed of information growth exceeds Moore\u2019s Law at the beginning of this new century, excessive data is making great troubles to human beings. However, there are so much potential and highly useful values hidden in the huge volume of data. A new scientific paradigm is born as dataintensive scientific discovery (DISD), also known as Big Data problems. A large number of fields and sectors, ranging from economic and business activities to public administration, from national security to scientific researches in many areas, involve with Big Data problems. On the one hand, Big Data is extremely valuable to produce productivity in businesses and evolutionary breakthroughs in scientific disciplines, which give us a lot of opportunities to make great progresses in many fields. There is no doubt that the future competitions in business productivity and technologies will surely converge into the Big Data explorations. On the other hand, Big Data also arises with many challenges, such as difficulties in data capture, data storage, data analysis and data visualization. This paper is aimed to demonstrate a close-up view about Big Data, including Big Data applications, Big Data opportunities and challenges, as well as the state-of-the-art techniques and technologies we currently adopt to deal with the Big Data problems. We also discuss several underlying methodologies to handle the data deluge, for example, granular computing, cloud computing, bio-inspired computing, and quantum computing. 2014 Elsevier Inc. All rights reserved."}
{"_id": "f19983b3e9b7fe8106c0375ebbd9f73a53295a28", "title": "Data quality management, data usage experience and acquisition intention of big data analytics", "text": "Big data analytics associated with database searching, mining, and analysis can be seen as an innovative IT capability that can improve firm performance. Even though some leading companies are actively adopting big data analytics to strengthen market competition and to open up new business opportunities, many firms are still in the early stage of the adoption curve due to lack of understanding of and experience with big data. Hence, it is interesting and timely to understand issues relevant to big data adoption. In this study, a research model is proposed to explain the acquisition intention of big data analytics"}
{"_id": "478fbef8568a021c3d91c13128efa19ad719dd88", "title": "The 8 requirements of real-time stream processing", "text": "Applications that require real-time processing of high-volume data steams are pushing the limits of traditional data processing infrastructures. These stream-based applications include market feed processing and electronic trading on Wall Street, network and infrastructure monitoring, fraud detection, and command and control in military environments. Furthermore, as the \"sea change\" caused by cheap micro-sensor technology takes hold, we expect to see everything of material significance on the planet get \"sensor-tagged\" and report its state or location in real time. This sensorization of the real world will lead to a \"green field\" of novel monitoring and control applications with high-volume and low-latency processing requirements.Recently, several technologies have emerged---including off-the-shelf stream processing engines---specifically to address the challenges of processing high-volume, real-time data without requiring the use of custom code. At the same time, some existing software technologies, such as main memory DBMSs and rule engines, are also being \"repurposed\" by marketing departments to address these applications.In this paper, we outline eight requirements that a system software should meet to excel at a variety of real-time stream processing applications. Our goal is to provide high-level guidance to information technologists so that they will know what to look for when evaluation alternative stream processing solutions. As such, this paper serves a purpose comparable to the requirements papers in relational DBMSs and on-line analytical processing. We also briefly review alternative system software technologies in the context of our requirements.The paper attempts to be vendor neutral, so no specific commercial products are mentioned."}
{"_id": "6834913a76b686957c0b8c755d1ca6ef3bd76914", "title": "Data privacy through optimal k-anonymization", "text": "Data de-identification reconciles the demand for release of data for research purposes and the demand for privacy from individuals. This paper proposes and evaluates an optimization algorithm for the powerful de-identification procedure known as k-anonymization. A k-anonymized dataset has the property that each record is indistinguishable from at least k - 1 others. Even simple restrictions of optimized k-anonymity are NP-hard, leading to significant computational challenges. We present a new approach to exploring the space of possible anonymizations that tames the combinatorics of the problem, and develop data-management strategies to reduce reliance on expensive operations such as sorting. Through experiments on real census data, we show the resulting algorithm can find optimal k-anonymizations under two representative cost measures and a wide range of k. We also show that the algorithm can produce good anonymizations in circumstances where the input data or input parameters preclude finding an optimal solution in reasonable time. Finally, we use the algorithm to explore the effects of different coding approaches and problem variations on anonymization quality and performance. To our knowledge, this is the first result demonstrating optimal k-anonymization of a non-trivial dataset under a general model of the problem."}
{"_id": "76c5db9edf820433eae631383f08b4e89e90fffa", "title": "Privacy-preserving trajectory data publishing by local suppression", "text": "The pervasiveness of location-aware devices has spawned extensive research in trajectory data mining, resulting in many important real-life applications. Yet, the privacy issue in sharing trajectory data among different parties often creates an obstacle for effective data mining. In this paper, we study the challenges of anonymizing trajectory data: high dimensionality, sparseness, and sequentiality. Employing traditional privacy models and anonymization methods often leads to low data utility in the resulting data and ineffective data mining. In addressing these challenges, this is the first paper to introduce local suppression to achieve a tailored privacy model for trajectory data anonymization. The framework allows the adoption of various data utility metrics for different data mining tasks. As an illustration, we aim at preserving both instances of location-time doublets and frequent sequences in a trajectory database, both being the foundation of many trajectory data \u2217Corresponding author Email addresses: ru_che@encs.concordia.ca (Rui Chen), fung@ciise.concordia.ca (Benjamin C. M. Fung), no_moham@encs.concordia.ca (Noman Mohammed), bcdesai@cs.concordia.ca (Bipin C. Desai) Preprint submitted to Information Sciences April 10, 2011 mining tasks. Our experiments on both synthetic and real-life data sets suggest that the framework is effective and efficient to overcome the challenges in trajectory data anonymization. In particular, compared with the previous works in the literature, our proposed local suppression method can significantly improve the data utility in anonymous trajectory data."}
{"_id": "b2aaa63f6f5f0540da1b530853e427574c0848a0", "title": "Scholarly use of information: graduate students' information seeking behaviour", "text": "Introduction. This study explored graduate students' information behaviour related to their process of inquiry and scholarly activities. Method. In depth, semi-structured interviews were conducted with one hundred graduate students representing all disciplines and departments from Carnegie Mellon University. Analysis. Working in pairs, we coded transcripts of interviews into meaningful categories using ATLAS.ti software. The combined use of quantitative and qualitative analysis aimed to reduce subjectivity. Results. Graduate students often begin with a meeting with professors who provide direction, recommend and provide resources. Other students help to shape graduate students' research activities, and university library personnel provide guidance in finding resources. The Internet plays a major role, although students continue to use print resources. Convenience, lack of sophistication in finding and using resources and course requirements affect their information behaviour. Findings vary across disciplines and between programmes. Conclusion. Libraries can influence students' information behaviour by re-evaluating their instructional programmes and provision of resources and services. They can take a lead by working with academic staff to guide students."}
{"_id": "8a84ea00fc22fb7d4b6d6f0b450e43058cb4113f", "title": "Music and language side by side in the brain: a PET study of the generation of melodies and sentences.", "text": "Parallel generational tasks for music and language were compared using positron emission tomography. Amateur musicians vocally improvised melodic or linguistic phrases in response to unfamiliar, auditorily presented melodies or phrases. Core areas for generating melodic phrases appeared to be in left Brodmann area (BA) 45, right BA 44, bilateral temporal planum polare, lateral BA 6, and pre-SMA. Core areas for generating sentences seemed to be in bilateral posterior superior and middle temporal cortex (BA 22, 21), left BA 39, bilateral superior frontal (BA 8, 9), left inferior frontal (BA 44, 45), anterior cingulate, and pre-SMA. Direct comparisons of the two tasks revealed activations in nearly identical functional brain areas, including the primary motor cortex, supplementary motor area, Broca's area, anterior insula, primary and secondary auditory cortices, temporal pole, basal ganglia, ventral thalamus, and posterior cerebellum. Most of the differences between melodic and sentential generation were seen in lateralization tendencies, with the language task favouring the left hemisphere. However, many of the activations for each modality were bilateral, and so there was significant overlap. While clarification of this overlapping activity awaits higher-resolution measurements and interventional assessments, plausible accounts for it include component sharing, interleaved representations, and adaptive coding. With these and related findings, we outline a comparative model of shared, parallel, and distinctive features of the neural systems supporting music and language. The model assumes that music and language show parallel combinatoric generativity for complex sound structures (phonology) but distinctly different informational content (semantics)."}
{"_id": "a476d62237bd84719b77190f9008e0745b2d1e27", "title": "Scalable Algorithms for Molecular Dynamics Simulations on Commodity Clusters", "text": "Although molecular dynamics (MD) simulations of biomolecular systems often run for days to months, many events of great scientific interest and pharmaceutical relevance occur on long time scales that remain beyond reach. We present several new algorithms and implementation techniques that significantly accelerate parallel MD simulations compared with current state-of-the-art codes. These include a novel parallel decomposition method and message-passing techniques that reduce communication requirements, as well as novel communication primitives that further reduce communication time. We have also developed numerical techniques that maintain high accuracy while using single precision computation in order to exploit processor-level vector instructions. These methods are embodied in a newly developed MD code called Desmond that achieves unprecedented simulation throughput and parallel scalability on commodity clusters. Our results suggest that Desmond's parallel performance substantially surpasses that of any previously described code. For example, on a standard benchmark, Desmond's performance on a conventional Opteron cluster with 2K processors slightly exceeded the reported performance of IBM's Blue Gene/L machine with 32K processors running its Blue Matter MD code."}
{"_id": "5dee13efc6090823d9cb7eaab441b0427575f6f6", "title": "Cellular IP: a new approach to Internet host mobility", "text": "This paper describes a new approach to Internet host mobility. We argue that by separating local and wide area mobility, the performance of existing mobile host protocols (e.g. Mobile IP) can be significantly improved. We propose Cellular IP, a new lightweight and robust protocol that is optimized to support local mobility but efficiently interworks with Mobile IP to provide wide area mobility support. Cellular IP shows great benefit in comparison to existing host mobility proposals for environments where mobile hosts migrate frequently, which we argue, will be the rule rather than the exception as Internet wireless access becomes ubiquitous. Cellular IP maintains distributed cache for location management and routing purposes. Distributed paging cache coarsely maintains the position of 'idle' mobile hosts in a service area. Cellular IP uses this paging cache to quickly and efficiently pinpoint 'idle' mobile hosts that wish to engage in 'active' communications. This approach is beneficial because it can accommodate a large number of users attached to the network without overloading the location management system. Distributed routing cache maintains the position of active mobile hosts in the service area and dynamically refreshes the routing state in response to the handoff of active mobile hosts. These distributed location management and routing algorithms lend themselves to a simple and low cost implementation of Internet host mobility requiring no new packet formats, encapsulations or address space allocation beyond what is present in IP."}
{"_id": "446545c2a1043e1732b6cbaad01fe7089ad3f36e", "title": "Real-time pen-and-ink illustration of landscapes", "text": "Non-photorealistic rendering has been proven to be particularly efficient in conveying and transmitting selected visual information. Our paper presents a NPR rendering pipeline that supports pen-and-ink illustration for, but not limited to, complex landscape scenes in real time. This encompasses a simplification framework using clustering which enables new approaches to efficient and coherent rendering of stylized silhouettes, hatching and abstract shading. Silhouette stylization is performed in image-space. This avoids explicit computation of connected lines. Further, coherent hatching of the tree foliage is performed using an approximate view-dependent parameterization computed on-the-fly within the same simplification framework. All NPR algorithms are integrated with photorealistic rendering, allowing seamless transition and combination between a variety of photorealistic and non-photorealistic drawing styles."}
{"_id": "c6fabf4a6208f3024efeb0b5247aa27158bba65f", "title": "Planar Array Antennas with Travelling-Wave Excitation in Millimeter-Wave Band", "text": "\u3042\u3089\u307e\u3057 \u30df\u30ea\u6ce2\u306e\u5468\u6ce2\u6570\u5e2f\u306b\u304a\u3044\u3066\u6307\u5411\u6027\u8d70\u67fb\u53ef\u80fd\u306a\u5e73\u9762\u30a2\u30f3\u30c6\u30ca\u304c\u671b\u307e\u308c\u3066\u3044\u308b.\u672c\u8ad6\u6587\u3067\u306f,\u30df\u30ea\u6ce2\u306e \u4ee3\u8868\u7684\u306a\u4e8c\u3064\u306e\u5e73\u9762\u30a2\u30f3\u30c6\u30ca\u3067\u3042\u308b\u5c0e\u6ce2\u7ba1\u30a2\u30f3\u30c6\u30ca\u3068\u30de\u30a4\u30af\u30ed\u30b9\u30c8\u30ea\u30c3\u30d7\u30a2\u30f3\u30c6\u30ca\u306b\u3064\u3044\u3066,\u958b\u767a\u3057\u305f\u5e73\u9762\u30a2\u30f3 \u30c6\u30ca\u3092\u7d39\u4ecb\u3059\u308b.\u30da\u30f3\u30b7\u30eb\u30d3\u30fc\u30e0\u304c\u5fc5\u8981\u306a\u6a5f\u68b0\u8d70\u67fb\u578b\u30a2\u30f3\u30c6\u30ca\u306b\u9069\u3057\u305f\u4e8c\u6b21\u5143\u5c0e\u6ce2\u7ba1\u30a2\u30ec\u30fc\u30a2\u30f3\u30c6\u30ca\u3068,\u8907\u6570\u306e \u30b5\u30d6\u30a2\u30ec\u30fc\u306e\u914d\u5217\u304b\u3089\u306a\u308b\u96fb\u5b50\u8d70\u67fb\u65b9\u5f0f\u306e\u30b5\u30d6\u30a2\u30ec\u30fc\u5e73\u9762\u30a2\u30f3\u30c6\u30ca\u7528\u306b,\u5c0e\u6ce2\u7ba1\u30a2\u30ec\u30fc\u30a2\u30f3\u30c6\u30ca\u3068\u30de\u30a4\u30af\u30ed\u30b9\u30c8 \u30ea\u30c3\u30d7\u30b3\u30e0\u30e9\u30a4\u30f3\u30a2\u30f3\u30c6\u30ca\u306b\u3064\u3044\u3066\u958b\u767a\u3057\u305f\u4f8b\u3092\u793a\u3059.\u3053\u308c\u3089\u306b\u3064\u3044\u3066,\u5404\u653e\u5c04\u7d20\u5b50\u3067\u53cd\u5c04\u7279\u6027\u3092\u4f4e\u6e1b\u3059\u308b\u3053\u3068 \u304c,\u7279\u6027\u306e\u5411\u4e0a\u30fb\u8a2d\u8a08\u81ea\u7531\u5ea6\u306e\u62e1\u5927\u306b\u6709\u52b9\u3067\u3042\u308b\u3053\u3068\u3092\u793a\u3059.\u307e\u305f,\u3053\u308c\u3089\u306e\u30a2\u30f3\u30c6\u30ca\u306b\u3064\u3044\u3066\u4f4e\u640d\u5931\u306b\u7d66\u96fb\u3059 \u308b\u69cb\u9020\u3092\u793a\u3059\u3068\u540c\u6642\u306b,\u7247\u7aef\u304b\u3089\u7d66\u96fb\u3059\u308b\u3088\u308a\u3082,\u4e2d\u592e\u304b\u3089\u7d66\u96fb\u3059\u308b\u65b9\u304c,\u5468\u6ce2\u6570\u5e2f\u57df\u5e45\u304c\u5e83\u304f\u306a\u308b\u3053\u3068\u3092\u793a\u3059. \u66f4\u306b,\u3053\u308c\u3089\u306e\u30a2\u30f3\u30c6\u30ca\u5b9f\u73fe\u306e\u305f\u3081\u306e\u30ad\u30fc\u3068\u306a\u308b\u9032\u884c\u6ce2\u52b1\u632f\u30a2\u30ec\u30fc\u8a2d\u8a08\u6280\u8853\u306b\u3064\u3044\u3066\u89e3\u8aac\u3059\u308b. \u30ad\u30fc\u30ef\u30fc\u30c9 \u30df\u30ea\u6ce2,\u30a2\u30ec\u30fc\u30a2\u30f3\u30c6\u30ca,\u5e73\u9762\u30a2\u30f3\u30c6\u30ca,\u5c0e\u6ce2\u7ba1\u30a2\u30f3\u30c6\u30ca,\u30de\u30a4\u30af\u30ed\u30b9\u30c8\u30ea\u30c3\u30d7\u30a2\u30f3\u30c6\u30ca"}
{"_id": "58ca5ac14af2765ce1d25c3a82d6f9312437ded0", "title": "Multiview Clustering via Adaptively Weighted Procrustes", "text": "In this paper, we make a multiview extension of the spectral rotation technique raised in single view spectral clustering research. Since spectral rotation is closely related to the Procrustes Analysis for points matching, we point out that classical Procrustes Average approach can be used for multiview clustering. Besides, we show that direct applying Procrustes Average (PA) in multiview tasks may not be optimal theoretically and empirically, since it does not take the clustering capacity differences of different views into consideration. Other than that, we propose an Adaptively Weighted Procrustes (AWP) approach to overcome the aforementioned deficiency. Our new AWP weights views with their clustering capacities and forms a weighted Procrustes Average problem accordingly. The optimization algorithm to solve the new model is computational complexity analyzed and convergence guaranteed. Experiments on five real-world datasets demonstrate the effectiveness and efficiency of the new models."}
{"_id": "3365109a45c7874049fd858602b66bbe8d75f680", "title": "Accelerating Comparative Genomics Workflows in a Distributed Environment with Optimized Data Partitioning", "text": "The advent of new sequencing technology has generated massive amounts of biological data at unprecedented rates. High-throughput bioinformatics tools are required to keep pace with this. Here, we implement a workflow-based model for parallelizing the data intensive task of genome alignment and variant calling with BWA and GATK's Haplotype Caller. We explore different approaches of partitioning data and how each affect the run time. We observe granularity-based partitioning for BWA and alignment-based partitioning for Halo type Caller to be the optimal choices for the pipeline. We identify the various challenges encountered while developing such an application and provide an insight into addressing them. We report significant performance improvements, from 12 days to 4 hours, while running the BWA-GATK pipeline using 100 nodes for analyzing high-coverage oak tree data."}
{"_id": "ab8fb76765c698070d62554378bc09e4c7517447", "title": "ARTINO: A New High Resolution 3D Imaging Radar System on an Autonomous Airborne Platform", "text": "The new radar system ARTINO (Airborne Radar for Three-dimensional Imaging and Nadir Observation), developed at FGAN-FHR, allows to image a direct overflown scene in three dimensions. Integrated in a small, mobile, and dismountable UAV (Unmanned Aerial Vehicle) it will be an ideal tool for various applications. This paper gives an overview about the ARTINO principle, the raw data simulation, the image formation, the technical realisation, and the status of the experimental system. I. THE ARTINO PRINCIPLE ARTINO is a new radar system integrated in a small and dismountable low-wing UAV, which allows to image the direct overflown scene in three dimensions (Figure 1). Fig. 1. Artist impression of an imaging mission using ARTINO. This new system can image the direct overflown scene in three dimensions. General side-looking SAR systems are constraint by shading effects which can hide essential information in the explored scene. The downward-looking concept of ARTINO overcomes this restriction and enables imaging of street canyons and deep terrain in mountainous areas. Moreover, the 3D imaging capability together with the small and mobile platform is an ideal tool of close in time data acquisitions of fast changing terrains, like snow slopes (danger of avalanches) and active volcanoes. This new system could be used for various applications, like DEM (Digital Elevation Model) generation, surveying, city planing, environmental monitoring, disaster relief, surveillance, and reconnaissance. In contrary to similar concepts (e.g. [1]\u2013[3]) the ARTINO principle works with a sparse antenna array distributed along the wings, with the transmitting elements at the tips and the receiving elements in between. Virtual antenna elements are formed by the mean positions of every couple of single transmit and receive elements. Finally, one gets a fully distributed virtual antenna array. The 3D resolution cells are formed by the appliance of the synthetic aperture and a beamforming operation. A detailed description of the ARTINO principle, the used UAV (Figure 2), and the simulation of raw data can be found in [4]. The image formation using the ARTINO principle is extensively discussed in [5]. A detailed description of the technical realization and its status is given in [6]. This paper gives an overview of the concept, the processing, some first simulation results, and the technical realisation. Fig. 2. Photo of the low wing UAV ARTINO. The new radar system will be integrated in the fuselage and the wings. II. MODEL OF THE ARTINO CONCEPT A. Geometrical consideration and signal model ARTINO is supposed to fly at the altitude h along the x-axis with the velocity v. The virtual antenna is composed of Nvirt elements, which are centered at the y-axis and regularly spaced along this axis. The position of the i-th virtual antenna element with i \u2208 [\u2212Nvirt\u22121 2 ; Nvirt\u22121 2 ] is given by \u03b7i = (x, yi, h). The transpose operator is denoted by the superscript . T denotes the pulse-to-pulse time and t the fast time. The antenna position along the x-axis at time T is given by x = v\u00b7T . Figure 3 shows the geometry of the ARTINO principle. The distance d between the virtual antenna elements was determined by simulations in order to optimize the antenna beam (reduction of grating lobes) of the whole array (Figure 4). 
For the demonstration of the feasibility of this new radar concept, a pulse radar is assumed. To obtain a distinct assignment of each virtual antenna element, it will be necessary for the experimental system that the real antenna elements transmit with a time multiplex from pulse to pulse. In the simulation, all virtual antenna elements transmit simultaneously. A point scatterer P is positioned at \u03be = (\u03bex, \u03bey, \u03bez) with the reflectivity \u03b1(\u03be) (Figure 3). The signal assigned to the 0-7803-9510-7/06/$20.00 \u00a9 2006 IEEE 3825 Fig. 3. Geometry for the ARTINO principle \u22123 \u22122 \u22121 0 1 2 3 \u221240 \u221230 \u221220 \u221210 0"}
{"_id": "393f75566fc90724852c4d259159c9ed1438c8dd", "title": "An artificial neural network to estimate physical activity energy expenditure and identify physical activity type from an accelerometer.", "text": "The aim of this investigation was to develop and test two artificial neural networks (ANN) to apply to physical activity data collected with a commonly used uniaxial accelerometer. The first ANN model estimated physical activity metabolic equivalents (METs), and the second ANN identified activity type. Subjects (n = 24 men and 24 women, mean age = 35 yr) completed a menu of activities that included sedentary, light, moderate, and vigorous intensities, and each activity was performed for 10 min. There were three different activity menus, and 20 participants completed each menu. Oxygen consumption (in ml x kg(-1) x min(-1)) was measured continuously, and the average of minutes 4-9 was used to represent the oxygen cost of each activity. To calculate METs, activity oxygen consumption was divided by 3.5 ml x kg(-1) x min(-1) (1 MET). Accelerometer data were collected second by second using the Actigraph model 7164. For the analysis, we used the distribution of counts (10th, 25th, 50th, 75th, and 90th percentiles of a minute's second-by-second counts) and temporal dynamics of counts (lag, one autocorrelation) as the accelerometer feature inputs to the ANN. To examine model performance, we used the leave-one-out cross-validation technique. The ANN prediction of METs root-mean-squared error was 1.22 METs (confidence interval: 1.14-1.30). For the prediction of activity type, the ANN correctly classified activity type 88.8% of the time (confidence interval: 86.4-91.2%). Activity types were low-level activities, locomotion, vigorous sports, and household activities/other activities. This novel approach of applying ANNs for processing Actigraph accelerometer data is promising and shows that we can successfully estimate activity METs and identify activity type using ANN analytic procedures."}
{"_id": "10d3f77225eca1d576268ba84ed83f230a5e47c4", "title": "Crafting a multi-task CNN for viewpoint estimation", "text": "Convolutional Neural Networks (CNNs) were recently shown to provide state-of-theart results for object category viewpoint estimation. However different ways of formulating this problem have been proposed and the competing approaches have been explored with very different design choices. This paper presents a comparison of these approaches in a unified setting as well as a detailed analysis of the key factors that impact performance. Followingly, we present a new joint training method with the detection task and demonstrate its benefit. We also highlight the superiority of classification approaches over regression approaches, quantify the benefits of deeper architectures and extended training data, and demonstrate that synthetic data is beneficial even when using ImageNet training data. By combining all these elements, we demonstrate an improvement of approximately 5% mAVP over previous state-of-the-art results on the Pascal3D+ dataset [28]. In particular for their most challenging 24 view classification task we improve the results from 31.1% to 36.1% mAVP."}
{"_id": "a8dbadf2c893337cae38de3c898d6181a5c9fe98", "title": "Analysis on Credit Card Fraud Detection Methods 1", "text": "Due to the theatrical increase of fraud which results in loss of dollars worldwide each year, several modern techniques in detecting fraud are persistently evolved and applied to many business fields. Fraud detection involves monitoring the activities of populations of users in order to estimate, perceive or avoid undesirable behavior. Undesirable behavior is a broad term including delinquency, fraud, intrusion, and account defaulting. This paper presents a survey of current techniques used in credit card fraud detection and telecommunication fraud. The goal of this paper is to provide a comprehensive review of different techniques to detect fraud."}
{"_id": "dda8b34e2532c0f54e0ae9ad3be50085935e6439", "title": "Children \u2019 s Eye Movements during Listening : Developmental Evidence for a Constraint-Based Theory of Sentence Processing", "text": "Many comprehension studies of grammatical development have focused on the ultimate interpretation that children assign to sentences and phrases, yielding somewhat static snapshots of children's emerging grammatical knowledge. Studies of the dynamic processes underlying children's language comprehension have to date been rare, owing in part to the lack of online sentence processing techniques suitable for use with children. In this chapter, we describe recent work from our research group, which examines the moment-by-moment interpretation decisions of children (age 4 to 6 years) while they listen to spoken sentences. These real-time measures were obtained by recording the children's eye movements as they visually interrogated and manipulated objects in response to spoken instructions. The first of these studies established some striking developmental differences in processing ability, with the youngest children showing an inability to use relevant properties of the referential scene to resolve temporary grammatical ambiguities (Trueswell, Sekerina, Hill, & Logrip, 1999). This finding could be interpreted as support for an early encapsulated syntactic processor that has difficulty using non-syntactic information to revise parsing commitments. However, we will review evidence from a series of follow-up experiments which suggest that this pattern arises from a developing interactive parsing system. Under this account, adult and child sentence comprehension is a \" perceptual guessing game \" in which multiple statistical cues are used to recover detailed linguistic structure. These cues, which include lexical-distribution evidence, verb semantic biases, and referential scene information, come \" online \" (become automated) at different points in the course of development. The developmental timing of these effects is related to their differential reliability and ease of detection in the input."}
{"_id": "41b0d9399573a6ca0d48f9ae06fa14697e80073d", "title": "Development and natural history of mood disorders", "text": "To expand and accelerate research on mood disorders, the National Institute of Mental Health (NIMH) developed a project to formulate a strategic research plan for mood disorder research. One of the areas selected for review concerns the development and natural history of these disorders. The NIMH convened a multidisciplinary Workgroup of scientists to review the field and the NIMH portfolio and to generate specific recommendations. To encourage a balanced and creative set of proposals, experts were included within and outside this area of research, as well as public stakeholders. The Workgroup identified the need for expanded knowledge of mood disorders in children and adolescents, noting important gaps in understanding the onset, course, and recurrence of early-onset unipolar and bipolar disorder. Recommendations included the need for a multidisciplinary research initiative on the pathogenesis of unipolar depression encompassing genetic and environmental risk and protective factors. Specifically, we encourage the NIMH to convene a panel of experts and advocates to review the findings concerning children at high risk for unipolar depression. Joint analyses of existing data sets should examine specific risk factors to refine models of pathogenesis in preparation for the next era of multidisciplinary research. Other priority areas include the need to assess the long-term impact of successful treatment of juvenile depression and known precursors of depression, in particular, childhood anxiety disorders. Expanded knowledge of pediatric-onset bipolar disorder was identified as a particularly pressing issue because of the severity of the disorder, the controversies surrounding its diagnosis and treatment, and the possibility that widespread use of psychotropic medications in vulnerable children may precipitate the condition. The Workgroup recommends that the NIMH establish a collaborative multisite multidisciplinary Network of Research Programs on Pediatric-Onset Bipolar Disorder to achieve a better understanding of its causes, course, treatment, and prevention. The NIMH should develop a capacity-building plan to ensure the availability of trained investigators in the child and adolescent field. Mood disorders are among the most prevalent, recurrent, and disabling of all illnesses. They are often disorders of early onset. Although the NIMH has made important strides in mood disorders research, more data, beginning with at-risk infants, children, and adolescents, are needed concerning the etiology and developmental course of these disorders. A diverse program of multidisciplinary research is recommended to reduce the burden on children and families affected with these conditions."}
{"_id": "127db6c733f2882754f56835ebc43e58016d8083", "title": "Clique Graphs and Overlapping Communities", "text": "It is shown how to construct a clique graph in which properties of cliques of a fixed order in a given graph are represented by vertices in a weighted graph. Various definitions and motivations for these weights are given. The detection of communities or clusters is used to illustrate how a clique graph may be exploited. In particular a benchmark network is shown where clique graphs find the overlapping communities accurately while vertex partition methods fail. PACS numbers: 89.75.Hc, 89.75.Fb, 89.75.-k"}
{"_id": "0970c10b5af6c5bbaeedf5d696c3c355aaa97959", "title": "Document Image Quality Assessment: A Brief Survey", "text": "To maintain, control and enhance the quality of document images and minimize the negative impact of degradations on various analysis and processing systems, it is critical to understand the types and sources of degradations and develop reliable methods for estimating the levels of degradations. This paper provides a brief survey of research on the topic of document image quality assessment. We first present a detailed analysis of the types and sources of document degradations. We then review techniques for document image degradation modeling. Finally, we discuss objective measures and subjective experiments that are used to characterize document image quality."}
{"_id": "13ae3c8afef5a0d6f4c9e684da9fc1fa96caaeb6", "title": "Online Anomaly Detection in Crowd Scenes via Structure Analysis", "text": "Abnormal behavior detection in crowd scenes is continuously a challenge in the field of computer vision. For tackling this problem, this paper starts from a novel structure modeling of crowd behavior. We first propose an informative structural context descriptor (SCD) for describing the crowd individual, which originally introduces the potential energy function of particle's interforce in solid-state physics to intuitively conduct vision contextual cueing. For computing the crowd SCD variation effectively, we then design a robust multi-object tracker to associate the targets in different frames, which employs the incremental analytical ability of the 3-D discrete cosine transform (DCT). By online spatial-temporal analyzing the SCD variation of the crowd, the abnormality is finally localized. Our contribution mainly lies on three aspects: 1) the new exploration of abnormal detection from structure modeling where the motion difference between individuals is computed by a novel selective histogram of optical flow that makes the proposed method can deal with more kinds of anomalies; 2) the SCD description that can effectively represent the relationship among the individuals; and 3) the 3-D DCT multi-object tracker that can robustly associate the limited number of (instead of all) targets which makes the tracking analysis in high density crowd situation feasible. Experimental results on several publicly available crowd video datasets verify the effectiveness of the proposed method."}
{"_id": "91dfbc19a2bb88b00fd258d36d0f099f5ceb5772", "title": "BER analysis of optical eU-OFDM transmission over AWGN", "text": "Enhanced unipolar orthogonal frequency-division multiplexing (eU-OFDM) was recently introduced to improve the power-spectral efficiency trade-off of intensity-modulated direct-detection (IM/DD) optical OFDM transmission schemes. This is accomplished by superimposing unipolar signals over several layers. Bit-error-rate (BER) analysis for each layer however, is eluded by error-propagation from previous layers in the eU-OFDM receiver. This paper presents a general mathematical model for eU-OFDM transmission. From the model, we introduce successive interference cancellation using soft-symbol estimates, which leads to slight performance improvement compared to the original hard-decision counterpart. More importantly, by using soft-symbol estimates we derive analytically tractable expressions that accurately predict the system's performance."}
{"_id": "7022082e8974d54d5bf79c782702f885feb1e47f", "title": "A comprehensive study on security attacks on SSL/TLS protocol", "text": "Secure Socket Layer (SSL) protocol was introduced in 1994 and was later renamed as transport layer security (TLS) protocol for securing transport layer. SSL/TLS protocol is used for securing communication on the network by ensuring data confidentiality, data integrity and authenticity between the communicating party. Authentication of the communicating party and securing transfer of data is done through certificates, key exchange and cipher suites. Security issues were found during evolutionary development of SSL/TLS protocol. The paper gives a detailed chronological order of attacks of past 22 years on SSL/TLS protocol."}
{"_id": "e08088b490881afabb9b2298ae1d0702dcf7ba9d", "title": "Post-Punching Behavior of Flat Slabs by", "text": "Reinforced concrete flat slabs are a common structural system for cast-in-place concrete slabs. Failures in punching shear near the column regions are typically governing at ultimate. In case no punching shear or integrity reinforcement is placed, failures in punching develop normally in a brittle manner with almost no warning signs. Furthermore, the residual strength after punching is, in general, significantly lower than the punching load. Thus, punching of a single column of a flat slab overloads adjacent columns and can potentially lead to their failure on punching, thus triggering the progressive collapse of the structure. Over the past decades, several collapses have been reported due to punching shear failures, resulting in human casualties and extensive damage. Other than placing conventional punching shear reinforcement, the deformation capacity and residual strength after punching can also be enhanced by placing integrity reinforcement to avoid progressive collapses of flat slabs. This paper presents the main results of an extensive experimental campaign performed at the Ecole Polytechnique F\u00e9d\u00e9rale de Lausanne (EPFL) on the role of integrity reinforcement by means of 20 slabs with dimensions of 1500 x 1500 x 125 mm (\u22485 ft x 5 ft x 5 in.) and various integrity reinforcement layouts. The performance and robustness of the various solutions is investigated to obtain physical explanations and a consistent design model for the load-carrying mechanisms and strength after punching failures. INTRODUCTION Over the past decades, several collapses due to punching shear failure have been reported in Europe and America,"}
{"_id": "9d7af4b0f9a42f914f4d2a3bdafee776407a30d3", "title": "User Interface Menu Design Performance and User Preferences: A Review and Ways Forward", "text": "This review paper is about menus on web pages and applications and their positioning on the user screen. The paper aims to provide the reader with a succinct summary of the major research in this area along with an easy to read tabulation of the most important studies. Furthermore, the paper concludes with some suggestions for future research regarding how menus and their positioning on the screen could be improved. The two principal suggestions concern trying to use more qualitative methods for investigating the issues and to develop in the future more universally designed menus. Keywords\u2014Menus; navigation of interfaces; universal design;"}
{"_id": "1b7690012a25bb33b429dbd72eca7459b9f50653", "title": "PEGASUS: A Policy Search Method for Large MDPs and POMDPs", "text": "We propose a new approach to the problem of searching a space of policies for a Markov decision process (MDP) or a partially observable Markov decision process (POMDP), given a model. Our approach is based on the following observation: Any (PO)MDP can be transformed into an \u201cequivalent\u201d POMDP in which all state transitions (given the current state and action) are deterministic. This reduces the general problem of policy search to one in which we need only consider POMDPs with deterministic transitions. We give a natural way of estimating the value of all policies in these transformed POMDPs. Policy search is then simply performed by searching for a policy with high estimated value. We also establish conditions under which our value estimates will be good, recovering theoretical results similar to those of Kearns, Mansour and Ng [7], but with \u201csample complexity\u201d bounds that have only a polynomial rather than exponential dependence on the horizon time. Our method applies to arbitrary POMDPs, including ones with infinite state and action spaces. We also present empirical results for our approach on a small discrete problem, and on a complex continuous state/continuous action problem involving learning to ride a bicycle."}
{"_id": "55b9587b8557f17ceec341a9310f89fa39f62330", "title": "Simulation-based optimization of Markov reward processes", "text": "We propose a simulation based algorithm for optimizing the average reward in a Markov Reward Process that depends on a set of parameters As a special case the method applies to Markov Decision Processes where optimization takes place within a parametrized set of policies The algorithm involves the simulation of a single sample path and can be implemented on line A convergence result with probability is provided This research was supported by contracts with Siemens AG Munich Germany and Alcatel Bell Belgium and by contract DMI with the National Science Foundation Introduction Markov Decision Processes and the associated dynamic programming DP methodology Ber a Put provide a general framework for posing and analyzing problems of se quential decision making under uncertainty DP methods rely on a suitably de ned value function that has to be computed for every state in the state space However many inter esting problems involve very large state spaces curse of dimensionality In addition DP assumes the availability of an exact model in the form of transition probabilities In many practical situations such a model is not available and one must resort to simulation or experimentation with an actual system For all of these reasons dynamic programming in its pure form may be inapplicable The e orts to overcome the aforementioned di culties involve two main ideas The use of simulation to estimate quantities of interest thus avoiding model based computations The use of parametric representations to overcome the curse of dimensionality Parametric representations and the associated algorithms can be broadly classi ed into three main categories a Parametrized value functions Instead of associating a value V i with each state i one uses a parametric form V i r where r is a vector of tunable parameters weights and V is a so called approximation architecture For example V i r could be the output of a multilayer perceptron with weights r when the input is i Other representations are possible e g involving polynomials linear combina tions of feature vectors state aggregation etc When the main ideas from DP are combined with such parametric representations one obtains methods that go un der the names of reinforcement learning or neuro dynamic programming see BT SB for textbook expositions as well as the references therein A key char acteristic is that policy optimization is carried out in an indirect fashion one tries to obtain a good approximation of the optimal value function of dynamic programming and uses it to construct policies that are close to optimal Such methods are reason ably well though not fully understood and there have been some notable practical successes see BT SB for an overview including the world class backgammon player by Tesauro Tes b Parametrized policies In an alternative approach which is the one considered in this paper the tuning of a parametrized value function is bypassed Instead one considers a class of policies described in terms of a parameter vector Simulation is employed to estimate the gradient of the performance metric with respect to and the policy is improved by updating in a gradient direction In some cases the re quired gradient can be estimated using IPA in nitesimal perturbation analysis see e g HC Gla CR and the references therein For general Markov processes and in the absence of special structure IPA is inapplicable but gradient 
estimation is still possible using likelihood ratio methods Gly Gly GG LEc GI c Actor critic methods A third approach which is a combination of the rst two includes parametrizations of the policy actor and of the value function critic BSA While such methods seem particularly promising theoretical understand ing has been limited to the impractical case of lookup representations one parameter per state KB This paper concentrates on methods based on policy parametrization and approx imate gradient improvement in the spirit of item b above While we are primarily interested in the case of Markov Decision Processes almost everything applies to the more general case of Markov Reward Processes that depend on a parameter vector and we proceed within this broader context We start with a formula for the gradient of the performance metric that has been presented in di erent forms and for various contexts in Gly CC FH JSJ TH CW We then suggest a method for estimating the terms that appear in that formula This leads to a simulation based method that updates the parameter vector at every regeneration time in an approximate gradient direction Furthermore we show how to construct an on line method that updates the parameter vector at each time step The resulting method has some conceptual similarities with those described in CR that reference assumes however the availability of an IPA estimator with certain guaranteed properties that are absent in our context and in JSJ which however does not contain convergence results The method that we propose only keeps in memory and updates K numbers where K is the dimension of Other than itself this includes a vector similar to the eligibility trace in Sutton s temporal di erence methods and as in JSJ an estimate of the average reward under the current value of If that estimate was accurate our method would be a standard stochastic gradient algorithm However as keeps changing is generally a biased estimate of the true average reward and the mathematical structure of our method is more complex than that of stochastic gradient algorithms For reasons that will become clearer later standard approaches e g martingale arguments or the ODE approach do not seem to su ce for establishing convergence and a more elaborate proof is necessary Our gradient estimator can also be derived or interpreted in terms of likelihood ratios Gly GG It takes the same form as the one presented in p of Gly but it is used di erently The development in Gly leads to a consistent estimator of the gradient assuming that a very large number of regenerative cycles are estimated while keeping the policy parameter at a xed value Presumably would be then updated after such a long simulation In contrast our method updates much more frequently and retains the desired convergence properties despite the fact that any single cycle results in a biased gradient estimate An alternative simulation based stochastic gradient method again based on a likeli hood ratio formula has been provided in Gly and uses the simulation of two regen erative cycles to construct an unbiased estimate of the gradient We note some of the di erences with the latter work First the methods in Gly involve a larger number of auxiliary quantities that are propagated in the course of a regenerative cycle Second our method admits a modi cation see Sections that can make it applicable even if the time until the next regeneration is excessive in which case likelihood ratio based methods su er from excessive variance Third our estimate \u0003 of 
the average reward is obtained as a weighted average of all past rewards not just over the last regenerative cycle In contrast an approach such as the one in Gly would construct an independent estimate of \u0003 during each regenerative cycle which should result in higher variance Finally our method brings forth and makes crucial use of the value di erential reward function of dy namic programming This is important because it paves the way for actor critic methods in which the variance associated with the estimates of the di erential rewards is poten tially reduced by means of learning value function approximation Indeed subsequent to the rst writing of this paper this latter approach has been pursued in KT SMS In summary the main contributions of this paper are as follows We introduce a new algorithm for updating the parameters of a Markov Reward Process on the basis of a single sample path The parameter updates can take place either during visits to a certain recurrent state or at every time step We also specialize the method to Markov Decision Processes with parametrically represented policies In this case the method does not require the transition probabilities to be known We establish that the gradient with respect to the parameter vector of the perfor mance metric converges to zero with probability which is the strongest possible result for gradient related stochastic approximation algorithms The method admits approximate variants with reduced variance such as the one described in Section or various types of actor critic methods The remainder of this paper is organized as follows In Section we introduce our framework and assumptions and state some background results including a formula for the gradient of the performance metric In Section we present an algorithm that per forms updates during visits to a certain recurrent state present our main convergence result and provide a heuristic argument Sections and deal with variants of the algo rithm that perform updates at every time step In Section we specialize our methods to the case of Markov Decision Processes that are optimized within a possibly restricted set of parametrically represented randomized policies We present some numerical results in Section and conclude in Section The lengthy proof of our main results is developed in the appendices Markov Reward Processes Depending on a Parameter In this section we present our general framework make a few assumptions and state some basic results that will be needed later We consider a discrete time nite state Markov chain fing with state space S f Ng whose transition probabilities depend on a parameter vector K and are denoted by pij P in j j in i Whenever the state is equal to i we receive a one stage reward that also depends on and is denoted by gi For every K let P be the stochastic matrix with entries pij Let P fP j Kg be the set of all such matrices and let P be its closure Note that every element of P is also a stochastic matrix and therefore de nes a Markov chain on the same s"}
{"_id": "5e884f51916d37b91c35bae2a45b28d12b7e20d2", "title": "Using EM for Reinforcement Learning", "text": "We discsus Hinton\u2019s (1989) relative payoff procedure (RPP), a static reinforcement learning algorithm whose foundation is not stochastic gradient ascent. We show circumstances under which applying the RPP is guaranteed to increase the mean return, even though it can make large changes in the values of the parameters. The proof is based on a mapping between the RPP and a form of the expectation-maximisation procedure of Dempster, Laird & Rubin (1976)."}
{"_id": "71b77ce0c0b36ae64c82ea0ba901db2e997fbb3c", "title": "Learning to update Auto-associative Memory in Recurrent Neural Networks for Improving Sequence Memorization", "text": "Learning to remember long sequences remains a challenging task for recurrent neural networks. Register memory and attention mechanisms were both proposed to resolve the issue with either high computational cost to retain memory differentiability, or by discounting the RNN representation learning towards encoding shorter local contexts than encouraging long sequence encoding. Associative memory, which studies the compression of multiple patterns in a fixed size memory, were rarely considered in recent years. Although some recent work tries to introduce associative memory in RNN and mimic the energy decay process in Hopfield nets, it inherits the shortcoming of rule-based memory updates, and the memory capacity is limited. This paper proposes a method to learn the memory update rule jointly with task objective to improve memory capacity for remembering long sequences. Also, we propose an architecture that uses multiple such associative memory for more complex input encoding. We observed some interesting facts when compared to other RNN architectures on some well-studied sequence learning tasks."}
{"_id": "58816e61b3a412d197b96d943dcf3d175be2a93e", "title": "Image Retargeting Quality Assessment: A Study of Subjective Scores and Objective Metrics", "text": "This paper presents the result of a recent large-scale subjective study of image retargeting quality on a collection of images generated by several representative image retargeting methods. Owning to many approaches to image retargeting that have been developed, there is a need for a diverse independent public database of the retargeted images and the corresponding subjective scores to be freely available. We build an image retargeting quality database, in which 171 retargeted images (obtained from 57 natural source images of different contents) were created by several representative image retargeting methods. And the perceptual quality of each image is subjectively rated by at least 30 viewers, meanwhile the mean opinion scores (MOS) were obtained. It is revealed that the subject viewers have arrived at a reasonable agreement on the perceptual quality of the retargeted image. Therefore, the MOS values obtained can be regarded as the ground truth for evaluating the quality metric performances. The database is made publicly available (Image Retargeting Subjective Database, [Online]. Available: http://ivp.ee.cuhk.edu.hk/projects/demo/retargeting/index.html) to the research community in order to further research on the perceptual quality assessment of the retargeted images. Moreover, the built image retargeting database is analyzed from the perspectives of the retargeting scale, the retargeting method, and the source image content. We discuss how to retarget the images according to the scale requirement and the source image attribute information. Furthermore, several publicly available quality metrics for the retargeted images are evaluated on the built database. How to develop an effective quality metric for retargeted images is discussed through a specifically designed subjective testing process. It is demonstrated that the metric performance can be further improved, by fusing the descriptors of shape distortion and content information loss."}
{"_id": "91d0c579f080087a494713a2736015c15d0b8e73", "title": "The ecology of human performance: a framework for considering the effect of context.", "text": "In theory and in practice, context (as an area of concern to occupational therapists) has not received the same attention as performance components and performance areas. The Ecology of Human Performance serves as a framework for considering the effect of context. Context is described as a lens from which persons view their world. The interrelationship of person and context determines which tasks fall within the person's performance range. The Ecology of Human Performance framework provides guidelines for encompassing context in occupational therapy theory, practice, and research."}
{"_id": "eb39688f697a858128e88e49253d5684403104f8", "title": "Particle swarm optimization for hyper-parameter selection in deep neural networks", "text": "Deep neural networks (DNNs) have achieved unprecedented success in a wide array of tasks. However, the performance of these systems depends directly on their hyper-parameters which often must be selected by an expert. Optimizing the hyper-parameters remains a substantial obstacle in designing DNNs in practice. In this work, we propose to select them using particle swarm optimization (PSO). Such biologically-inspired approaches have not been extensively exploited for this task. We demonstrate that PSO efficiently explores the solution space, allowing DNNs of a minimal topology to obtain competitive classification performance over the MNIST dataset. We showed that very small DNNs optimized by PSO retrieve promising classification accuracy for CIFAR-10. Also, PSO improves the performance of existing architectures. Extensive experimental study, backed-up with the statistical tests, revealed that PSO is an effective technique for automating hyper-parameter selection and efficiently exploits computational resources."}
{"_id": "04b0ed77fd9e233eddd3d5364c981547ba5d6def", "title": "Learning user-specific parameters in a multibiometric system", "text": "Biometric systems that use a single biometric trait have to contend with noisy data, restricted degrees of freedom, failureto-enroll problems, spoof attacks, and unacceptable error rates. Multibiometric systems that use multiple traits of an individual for authentication, alleviate some of these problems while improving verification performance. We demonstrate that the performance of multibiometric systems can be further improved by learning user-specific parameters. Two types of parameters are considered here. (i) Thresholds that are used to decide if a matching score indicates a genuine user or an impostor, and (ii) weights that are used to indicate the importance of matching scores output by each biometric trait. User-specific thresholds are computed using the cumulative histogram of impostor matching scores corresponding to each user. The user-specific weights associated with each biometric are estimated by searching for that set of weights which minimizes the total verification error. The tests were conducted on a database of 50 users who provided fingerprint, face and hand geometry data, with 10 of these users providing data over a period of two months. We observed that user-specific thresholds improved system performance by \u223c 2%, while user-specific weights improved performance by \u223c 3%."}
{"_id": "17ed959ece2bc6ee416f214bb4aaf534d27da133", "title": "Effects of level of meditation experience on attentional focus: is the efficiency of executive or orientation networks improved?", "text": "The present investigation examined the contributions of specific attentional networks to long-term trait effects of meditation. It was hypothesized that meditation could improve the efficiency of executive processing (inhibits prepotent/incorrect responses) or orientational processing (orients to specific objects in the attentional field). Participants (50 meditators and 10 controls) were given the Stroop (measures executive attention) and Global-Local Letters (measures orientational attention) tasks. Results showed that meditation experience was associated with reduced interference on the Stroop task (p < 0.03), in contrast with a lack of effect on interference in the Global-Local Letters task. This suggests that meditation produces long-term increases in the efficiency of the executive attentional network (anterior cingulate/prefrontal cortex) but no effect on the orientation network (parietal systems). The amount of time participants spent meditating each day, rather than the total number of hours of meditative practice over their lifetime, was negatively correlated with interference on the Stroop task (r = -0.31, p < 0.005)."}
{"_id": "b41a46752e865bcd7dda6e7557187eb2e13106e7", "title": "Sarcasm detection of tweets: A comparative study", "text": "Sarcasm is a nuanced form of communication where the individual states opposite of what is implied. One of the major challenges of sarcasm detection is its ambiguous nature. There is no prescribed definition of sarcasm. Another major challenge is the growing size of the languages. Every day hundreds of new slang words are being created and used on these sites. Hence, the existing corpus of positive and negative sentiments may not prove to be accurate in detecting sarcasm. Also, the recent developments in online social networks allow its users to use varied kind of emoticons with the text. These emoticons may change the polarity of the text and make it sarcastic. Due to these difficulties and the inherently tricky nature of sarcasm it is generally ignored during social network analysis. As a result the results of such analysis are affected adversely. Thus, sarcasm detection poses to be one of the most critical problems which we need to overcome. Detection of sarcastic content is vital to various NLP based systems such as text summarization and sentiment analysis. In this paper we address the problem of sarcasm detection by leveraging the most common expression of sarcasm \u2014 \u201cpositive sentiment attached to a negative situation\u201d. Our work uses two ensemble based approaches \u2014 voted ensemble classifier and random forest classifier. Unlike current approaches to sarcasm detection which rely on existing corpus of positive and negative sentiments for training the classifiers, we use a seeding algorithm to generate training corpus. The proposed model also uses a pragmatic classifier to detect emoticon based sarcasm."}
{"_id": "e60778347ffb55b8b42ba831ffb8f8f7269182a5", "title": "Components for parametric urban design in Grasshopper from street network to building geometry", "text": "The main contribution of our work is in combining the methods for parametric urban design of highly specialized software such as CityEngine and general-purpose parametric modeling platform such as Grasshopper. Our work facilitates and prompts the use of parametric tools by architects and planners for urban design. In this paper we present a custom grasshopper component for street network generation and block subdivision. The component was developed in C# using the RhinoCommon SDK. We used Grasshopper for the development of an urban design proposal at a teaching exercise. To meet the requirements of the urban design project, additional functionalities had to be added to the range of existing Grasshopper components. In particular, we needed components for street network generation and block subdivision. To develop the component we implemented the street expansion strategies described in (Weber et al., 2009) and the methods for block subdivision described in (Vanegas et al., 2009). Additionally, we adapted and enhanced the strategies to meet the NURBS modeling capabilities of Rhinoceros."}
{"_id": "20a1ac0c7bccc7818c8ebb98c368720dc732b286", "title": "Elicitation of Security requirements for E-Health system by applying Model Oriented Security Requirements Engineering (MOSRE) Framework", "text": "E-health is a health care system which is supported by electronic process and communication. The information that is kept in the system must be accurate. In case of false information, it may cause harm to human life. So this system needs more security to protect the credential information. E-Health system is the most security sensitive process handled electronically. The highest achievable security is never too much for an E-Health system. So when system is being built, tasks such as Security Requirements Elicitation, Specification and Validation are essential to assure the Quality of the resulting secure E-Health system. By considering the Security requirements as functional requirements in the Requirement phase, the completeness of Security Requirements for E-Health system can be developed. In this paper we propose Model Oriented Security Requirements Engineering (MOSRE) Framework in the early phases of E-Health system development, to identify assets, threats and vulnerabilities. This helps in standardizing the Security Requirements for secure E-Health system without any security issues."}
{"_id": "9c222a6d7ca4b305be445c73d89a6f2485b04f31", "title": "Delta-9-tetrahydrocannabinol may palliate altered chemosensory perception in cancer patients: results of a randomized, double-blind, placebo-controlled pilot trial.", "text": "BACKGROUND\nA pilot study (NCT00316563) to determine if delta-9-tetrahydrocannabinol (THC) can improve taste and smell (chemosensory) perception as well as appetite, caloric intake, and quality of life (QOL) for cancer patients with chemosensory alterations.\n\n\nPATIENTS AND METHODS\nAdult advanced cancer patients, with poor appetite and chemosensory alterations, were recruited from two sites and randomized in a double-blinded manner to receive either THC (2.5 mg, Marinol(\u00ae); Solvay Pharma Inc., n = 24) or placebo oral capsules (n = 22) twice daily for 18 days. Twenty-one patients completed the trial. At baseline and posttreatment, patients completed a panel of patient-reported outcomes: Taste and Smell Survey, 3-day food record, appetite and macronutrient preference assessments, QOL questionnaire, and an interview.\n\n\nRESULTS\nTHC and placebo groups were comparable at baseline. Compared with placebo, THC-treated patients reported improved (P = 0.026) and enhanced (P < 0.001) chemosensory perception and food 'tasted better' (P = 0.04). Premeal appetite (P = 0.05) and proportion of calories consumed as protein increased compared with placebo (P = 0.008). THC-treated patients reported increased quality of sleep (P = 0.025) and relaxation (P = 0.045). QOL scores and total caloric intake were improved in both THC and placebo groups.\n\n\nCONCLUSIONS\nTHC may be useful in the palliation of chemosensory alterations and to improve food enjoyment for cancer patients."}
{"_id": "9b956e094c3aa5a831c9b916dde35d1ca9abf066", "title": "Completely Derandomized Self-Adaptation in Evolution Strategies", "text": "This paper puts forward two useful methods for self-adaptation of the mutation distribution - the concepts of derandomization and cumulation. Principle shortcomings of the concept of mutative strategy parameter control and two levels of derandomization are reviewed. Basic demands on the self-adaptation of arbitrary (normal) mutation distributions are developed. Applying arbitrary, normal mutation distributions is equiv-alent to applying a general, linear problem encoding. The underlying objective of mutative strategy parameter control is roughly to favor previously selected mutation steps in the future. If this objective is pursued rigor-ously, a completely derandomized self-adaptation scheme results, which adapts arbitrary normal mutation distributions. This scheme, called covariance matrix adaptation (CMA), meets the previously stated demands. It can still be considerably improved by cumulation - utilizing an evolution path rather than single search steps. Simulations on various test functions reveal local and global search properties of the evolution strategy with and without covariance matrix adaptation. Their performances are comparable only on perfectly scaled functions. On badly scaled, non-separable functions usually a speed up factor of several orders of magnitude is ob-served. On moderately mis-scaled functions a speed up factor of three to ten can be expected."}
{"_id": "e62a01862475aabafb16015a4601375626c74a99", "title": "Handbook of Multibiometrics (International Series on Biometrics)", "text": "Will reading habit influence your life? Many say yes. Reading handbook of multibiometrics 6 international series on biometrics is a good habit; you can develop this habit to be such interesting way. Yeah, reading habit will not only make you have any favourite activity. It will be one of guidance of your life. When reading has become a habit, you will not make it as disturbing activities or as boring activity. You can gain many benefits and importances of reading."}
{"_id": "ec4560bc781944dc74723771d422f9a37241ad29", "title": "Photonic ADC: overcoming the bottleneck of electronic jitter.", "text": "Accurate conversion of wideband multi-GHz analog signals into the digital domain has long been a target of analog-to-digital converter (ADC) developers, driven by applications in radar systems, software radio, medical imaging, and communication systems. Aperture jitter has been a major bottleneck on the way towards higher speeds and better accuracy. Photonic ADCs, which perform sampling using ultra-stable optical pulse trains generated by mode-locked lasers, have been investigated for many years as a promising approach to overcome the jitter problem and bring ADC performance to new levels. This work demonstrates that the photonic approach can deliver on its promise by digitizing a 41 GHz signal with 7.0 effective bits using a photonic ADC built from discrete components. This accuracy corresponds to a timing jitter of 15 fs - a 4-5 times improvement over the performance of the best electronic ADCs which exist today. On the way towards an integrated photonic ADC, a silicon photonic chip with core photonic components was fabricated and used to digitize a 10 GHz signal with 3.5 effective bits. In these experiments, two wavelength channels were implemented, providing the overall sampling rate of 2.1 GSa/s. To show that photonic ADCs with larger channel counts are possible, a dual 20-channel silicon filter bank has been demonstrated."}
{"_id": "5d3707c2c4c98882d2b474a4a7fe3687f84d4ba6", "title": "Full-body illusions and minimal phenomenal selfhood", "text": "We highlight the latest research on body perception and self-consciousness, but argue that despite these achievements, central aspects have remained unexplored, namely, global aspects of bodily self-consciousness. Researchers investigated central representations of body parts and actions involving these, but neglected the global and unitary character of self-consciousness, the 'I' of experience and behaviour. We ask, what are the minimally sufficient conditions for the appearance of a phenomenal self, that is, the fundamental conscious experience of being someone? What are necessary conditions for self-consciousness in any type of system? We offer conceptual clarifications, discuss recent empirical evidence from neurology and cognitive science and argue that these findings offer a new entry point for the systematic study of global and more fundamental aspects of self-consciousness."}
{"_id": "f094b074deeafb43523281bdcd13e23903c0c667", "title": "A Fuzzy-based approach to programming language independent source-code plagiarism detection", "text": "Source-code plagiarism detection in programming, concerns the identification of source-code files that contain similar and/or identical source-code fragments. Fuzzy clustering approaches are a suitable solution to detecting source-code plagiarism due to their capability to capture the qualitative and semantic elements of similarity. This paper proposes a novel Fuzzy-based approach to source-code plagiarism detection, based on Fuzzy C-Means and the Adaptive-Neuro Fuzzy Inference System (ANFIS). In addition, performance of the proposed approach is compared to the Self- Organising Map (SOM) and the state-of-the-art plagiarism detection Running Karp-Rabin Greedy-String-Tiling (RKR-GST) algorithms. The advantages of the proposed approach are that it is programming language independent, and hence there is no need to develop any parsers or compilers in order for the fuzzy-based predictor to provide detection in different programming languages. The results demonstrate that the performance of the proposed fuzzy-based approach overcomes all other approaches on well-known source code datasets, and reveals promising results as an efficient and reliable approach to source-code plagiarism detection."}
{"_id": "bf6a087e7b6fb84dd4f54f3b16950ad577e591e0", "title": "Unsupervised local deep feature for image recognition", "text": "Unsupervised feature learning is an important problem in computer vision and machine learning. Various unsupervised learning methods, e.g., PCA, KMeans, autoencoder, have been applied for visual recognition tasks. Especially, autoencoder has superior image recognition performance, which is usually used as a global feature extractor. To make better use of autoencoder for image recognition, we propose unsupervised local deep feature (ULDF). We collect patches from training images and use autoencoder to transform vectorized patches into short codes. Then these codes are encoded using Fisher Vector with spatial pyramid. Image recognition with ULDF only requires an efficient linear SVM classifier. Finally, we perform multi-scale voting to achieve more accurate classification performance. In the experiments, we evaluate ULDF on the handwritten digit recognition task and the shape classification task, and show that it achieves highly competitive performance comparing to the state-of-the-art image recognition methods. The results suggest that the proposed local deep feature is superior than traditional global autoencoder. \u00a9 2016 Elsevier Inc. All rights reserved."}
{"_id": "51530111c0a33ac3a7f00e34028798f1dae6f69d", "title": "Transanal endoscopic total mesorectal excision: technical aspects of approaching the mesorectal plane from below\u2014a preliminary report", "text": "Laparoscopic total mesorectal excision (TME) for low rectal cancer can be technically challenging. This report describes our initial experience with a hybrid laparoscopic and transanal endoscopic technique for TME in low rectal cancer. Between December 2012 and October 2013, we identified patients with rectal cancer\u00a0<\u00a05\u00a0cm from the anorectal junction (ARJ) who underwent laparoscopic-assisted TME with a transanal minimally invasive surgery (TAMIS) technique. A standardized stepwise approach was used in all patients. Resection specimens were examined for completeness and measurement of margins. Preoperative magnetic resonance imaging (MRI) characteristics and short-term postoperative outcomes were examined. All values are mean\u00a0\u00b1\u00a0standard deviation. Ten patients (8 males; median age: 60.5 (range 36\u201370)\u00a0years) were included. On initial MRI, all tumors were T2 or T3, mean tumor height from the ARJ was 28.9\u00a0\u00b1\u00a012.2\u00a0mm, mean circumferential resection margin was 5.3\u00a0\u00b1\u00a03.1\u00a0mm , and the mean angle between the anal canal and the levator ani was 83.9\u00b0\u00a0\u00b1\u00a09.7\u00b0. All patients had had preoperative chemoradiotherapy, TME via TAMIS, and distal anastomosis. There were no intraoperative complications, anastomotic leaks, or 30-day mortality. The pathologic quality of all mesorectal specimens was excellent. The distal resection margin was 19.4\u00a0\u00b1\u00a010.4\u00a0mm, the mean circumferential resection margin was 13.8\u00a0\u00b1\u00a05.1\u00a0mm, and the median lymph node harvest was 10.5 (range 5\u201315) nodes. A combined laparoscopic and transanal approach can achieve a safe and oncologically complete TME dissection for low rectal tumors. This approach may improve clinical outcomes in these technically difficult cases, but larger prospective studies are needed."}
{"_id": "a3e3493bfc56ec35a229eb072813f16936137a4b", "title": "TreeSegNet: Adaptive Tree CNNs for Subdecimeter Aerial Image Segmentation.", "text": "For the task of subdecimeter aerial imagery segmentation, fine-grained semantic segmentation results are usually difficult to obtain because of complex remote sensing content and optical conditions. Recently, convolutional neural networks (CNNs) have shown outstanding performance on this task. Although many deep neural network structures and techniques have been applied to improve the accuracy, few have paid attention to better differentiating the easily confused classes. In this paper, we propose TreeSegNet which adopts an adaptive network to increase the classification rate at the pixelwise level. Specifically, based on the infrastructure of DeepUNet, a Tree-CNN block in which each node represents a ResNeXt unit is constructed adaptively according to the confusion matrix and the proposed TreeCutting algorithm. By transporting feature maps through concatenating connections, the Tree-CNN block fuses multiscale features and learns best weights for the model. In experiments on the ISPRS 2D semantic labeling Potsdam dataset, the results obtained by TreeSegNet are better than those of other \uf02a Corresponding author. Tel.: +86 18911938368. E-mail address: ilydouble@gmail.com TreeSegNet: Adaptive Tree CNNs for Subdecimeter Aerial Image Segmentation 2 published state-of-the-art methods. Detailed comparison and analysis show that the improvement brought by the adaptive Tree-CNN block is significant."}
{"_id": "654489b82fd0402114d9114f596ba43ddd11ea6a", "title": "A sub-\u00b5W, 1.0V CMOS temperature sensor circuit insensitive to device parameters", "text": "In this paper, we firstly propose the CMOS temperature sensor circuit operating in the subthreshold region. The circuit has advantages of 1.0 V operation and sub-\u00b5W power consumption. Furthermore, it is insensitive to the device parameter of the fabrication process. Next, we design the high sensitivity temperature sensor circuit that can be realized by cascade-connection of the proposed temperature sensor circuits. The proposed circuits were evaluated through HSPICE with a standard 0.35 \u00b5m CMOS device parameter. The simulation demonstrate the maximum power consumption is 0.27 \u00b5W under the condition that the temperature range is from 0 \u00b0C to 100 \u00b0C, the sensitivity is 1.01 mV/ \u00b0C and the maximum differential error is 0.87 %."}
{"_id": "bd3551ca36b945425abe3783512c1189376017da", "title": "Fresh apps: an empirical study of frequently-updated mobile apps in the Google play store", "text": "Mobile app stores provide a unique platform for developers to rapidly deploy new updates of their apps. We studied the frequency of updates of 10,713 mobile apps (the top free 400 apps at the start of 2014 in each of the 30 categories in the Google Play store). We find that a small subset of these apps (98 apps representing \u02dc1 % of the studied apps) are updated at a very frequent rate \u2014 more than one update per week and 14 % of the studied apps are updated on a bi-weekly basis (or more frequently). We observed that 45 % of the frequently-updated apps do not provide the users with any information about the rationale for the new updates and updates exhibit a median growth in size of 6 %. This paper provides information regarding the update strategies employed by the top mobile apps. The results of our study show that 1) developers should not shy away from updating their apps very frequently, however the frequency varies across store categories. 2) Developers do not need to be too concerned about detailing the content of new updates. It appears that users are not too concerned about such information. 3) Users highly rank frequently-updated apps instead of being annoyed about the high update frequency."}
{"_id": "3f4f346052bdb8700cd70845baf5a5daf98b29a3", "title": "Energy Efficient Clustering Algorithms for Wireless Sensor Networks", "text": "Energy efficiency is a major concern in wireless sensor networks (WSNs). Many clustering algorithms have been proposed for such a purpose. This paper investigates the existing clustering algorithms. The algorithms have been classified and some representatives are described in each category. After analyzing the strengths and the weaknesses of each category, an important characteristic of WSNs is pointed out for further improvement of energy efficiency for WSNs."}
{"_id": "4aa68739ca016f8ce45eb007942d8d6695bf312f", "title": "Graded Entailment for Compositional Distributional Semantics", "text": "The categorical compositional distributional model of natural language provides a conceptually motivated procedure to compute the meaning of sentences, given grammatical structure and the meanings of its words. This approach has outperformed other models in mainstream empirical language processing tasks. However, until now it has lacked the crucial feature of lexical entailment \u2013 as do other distributional models of meaning. In this paper we solve the problem of entailment for categorical compositional distributional semantics. Taking advantage of the abstract categorical framework allows us to vary our choice of model. This enables the introduction of a notion of entailment, exploiting ideas from the categorical semantics of partial knowledge in quantum computation. The new model of language uses density matrices, on which we introduce a novel robust graded order capturing the entailment strength between concepts. This graded measure emerges from a general framework for approximate entailment, induced by any commutative monoid. Quantum logic embeds in our graded order. Our main theorem shows that entailment strength lifts compositionally to the sentence level, giving a lower bound on sentence entailment. We describe the essential properties of graded entailment such as continuity, and provide a procedure for calculating entailment strength."}
{"_id": "0f279b97e9e318b6db58da8da66a565505a0fab6", "title": "Detecting Offensive Language in Social Media to Protect Adolescent Online Safety", "text": "Since the textual contents on online social media are highly unstructured, informal, and often misspelled, existing research on message-level offensive language detection cannot accurately detect offensive content. Meanwhile, user-level offensiveness detection seems a more feasible approach but it is an under researched area. To bridge this gap, we propose the Lexical Syntactic Feature (LSF) architecture to detect offensive content and identify potential offensive users in social media. We distinguish the contribution of pejoratives/profanities and obscenities in determining offensive content, and introduce hand-authoring syntactic rules in identifying name-calling harassments. In particular, we incorporate a user's writing style, structure and specific cyber bullying content as features to predict the user's potentiality to send out offensive content. Results from experiments showed that our LSF framework performed significantly better than existing methods in offensive content detection. It achieves precision of 98.24% and recall of 94.34% in sentence offensive detection, as well as precision of 77.9% and recall of 77.8% in user offensive detection. Meanwhile, the processing speed of LSF is approximately 10msec per sentence, suggesting the potential for effective deployment in social media."}
{"_id": "58f7accbe36aadd0ef83fd2746c879079eb816bc", "title": "A survey on opinion mining and sentiment analysis: Tasks, approaches and applications", "text": "With the advent of Web 2.0, people became more eager to express and share their opinions on web regarding day-to-day activities and global issues as well. Evolution of social media has also contributed immensely to these activities, thereby providing us a transparent platform to share views across the world. These electronic Word of Mouth (eWOM) statements expressed on the web are much prevalent in business and service industry to enable customer to share his/her point of view. In the last one and half decades, research communities, academia, public and service industries are working rigorously on sentiment analysis, also known as, opinion mining, to extract and analyze public mood and views. In this regard, this paper presents a rigorous survey on sentiment analysis, which portrays views presented by over one hundred articles published in the last decade regarding necessary tasks, approaches, and applications of sentiment analysis. Several sub-tasks need to be performed for sentiment analysis which in turn can be accomplished using various approaches and techniques. This survey covering published literature during 2002-2015, is organized on the basis of sub-tasks to be performed, machine learning and natural language processing techniques used and applications of sentiment analysis. The paper also presents open issues and along with a summary table of a hundred and sixty one articles."}
{"_id": "9f9d4a94008ddfe2d9db72c1dd74727ffbbeb5b0", "title": "Offensive Language Detection Using Multi-level Classification", "text": "Text messaging through the Internet or cellular phones has become a major medium of personal and commercial communication. In the same time, flames (such as rants, taunts, and squalid phrases) are offensive/abusive phrases which might attack or offend the users for a variety of reasons. An automatic discriminative software with a sensitivity parameter for flame or abusive language detection would be a useful tool. Although a human could recognize these sorts of useless annoying texts among the useful ones, it is not an easy task for computer programs. In this paper, we describe an automatic flame detection method which extracts features at different conceptual levels and applies multilevel classification for flame detection. While the system is taking advantage of a variety of statistical models and rule-based patterns, there is an auxiliary weighted pattern repository which improves accuracy by matching the text to its graded entries."}
{"_id": "e2efefbba8bf3e76605db24da0ba15df7b0adc9e", "title": "Movie-DiC: a Movie Dialogue Corpus for Research and Development", "text": "This paper describes Movie-DiC a Movie Dialogue Corpus recently collected for research and development purposes. The collected dataset comprises 132,229 dialogues containing a total of 764,146 turns that have been extracted from 753 movies. Details on how the data collection has been created and how it is structured are provided along with its main statistics and characteristics."}
{"_id": "b441a7ea2bbed4c2a05b94903539786137872bbf", "title": "A bibliometric analysis on rural studies in human geography and related disciplines", "text": "Although the world has experienced rapid urbanization, rural areas have always been and are still an important research field in human geography. This paper performed a bibliometric analysis on rural geography studies based on the peer-reviewed articles concerning rural geography published in the SSCI-listed journals from 1990 to 2012. Our analysis examines publication patterns (document types and publishing languages, article outputs and their categories, major journals and their publication, most productive authors, geographic distribution and international collaboration) and demonstrates the evolution of intellectual development of rural geography by studying highly cited papers and their citation networks and temporal evolution of keywords. Our research findings include: The article number has been increasing since the 1900s, and went through three phases, and the rural geography research is dominated in size by UK and USA. The USA is the most productive in rural geography, but the UK had more impact than other countries in the terms of the average citation of articles. Three distinct but loosely linked research streams of rural geography were identified and predominated by the UK rural geographers. The keywords frequencies evolved according to contexts of rural development and academic advances of human geography, but they were loosely and scattered since the rural researches in different regions or different systems faced with different problems."}
{"_id": "6a09c9f2947271c7b9a48739947eadf1bfa31e81", "title": "ColorBars: increasing data rate of LED-to-camera communication using color shift keying", "text": "LED-to-camera communication allows LEDs deployed for illumination purposes to modulate and transmit data which can be received by camera sensors available in mobile devices like smartphones, wearable smart-glasses etc. Such communication has a unique property that a user can visually identify a transmitter (i.e. LED) and specifically receive information from the transmitter. It can support a variety of novel applications such as augmented reality through mobile devices, navigation using smart signs, fine-grained location specific advertisement etc. However, the achievable data rate in current LED-to-camera communication techniques remains very low (\u2248 12 bytes per second) to support any practical application. In this paper, we present ColorBars, an LED-to-camera communication system that utilizes Color Shift Keying (CSK) to modulate data using different colors transmitted by the LED. It exploits the increasing popularity of Tri-LEDs (RGB) that can emit a wide range of colors. We show that commodity cameras can efficiently and accurately demodulate the color symbols. ColorBars ensures flicker-free and reliable communication even in the presence of inter-frame loss and diversity of rolling shutter cameras. We implement ColorBars on embedded platform and evaluate it with Android and iOS smartphones as receivers. Our evaluation shows that ColorBars can achieve a data rate of 5.2 Kbps on Nexus 5 and 2.5 Kbps on iPhone 5S, which is significantly higher than previous approaches. It is also shown that lower CSK modulations (e.g. 4 and 8 CSK) provide extremely low symbol error rates (< 10--3), making them a desirable choice for reliable LED-to-camera communication."}
{"_id": "354b0f3412fbaf6c8cebb0763180e00ab124fee0", "title": "Accelerating ODE-Based Simulation of General and Heterogeneous Biophysical Models Using a GPU", "text": "Flint is a simulator that numerically integrates heterogeneous biophysical models described by a large set of ordinary differential equations. It uses an internal bytecode representation of simulation-related expressions to handle various biophysical models built for general purposes. We propose two acceleration methods for Flint using a graphics processing unit (GPU). The first method interprets multiple bytecodes in parallel on the GPU. It automatically parallelizes the simulation using a level scheduling algorithm. We implement an interpreter of the Flint bytecode that is suited for running on the GPU, which reduces both the number of memory accesses and divergent branches to achieve higher performance. The second method translates a model into a source code for both the CPU and the GPU through the internal bytecode, which speeds up the compilation of the generated source codes, because the code size is diminished because of bytecode unification. With large models such that tens of thousands or more expressions can be evaluated simultaneously, the translated code running on the GPU achieves computational performance of up to 2.7 higher than that running on a CPU. Otherwise, with small models, the CPU is faster than the GPU. Therefore, the translated code dynamically determines on which to run either the CPU or the GPU by profiling initial few iterations of the simulation."}
{"_id": "6f6602d260479077991313dd26a0f8ac54a2e307", "title": "BatTracker: High Precision Infrastructure-free Mobile Device Tracking in Indoor Environments", "text": "Continuous tracking of the device location in 3D space is a popular form of user input, especially for virtual/augmented reality (VR/AR), video games and health rehabilitation. Conventional inertial based approaches are well known for inaccuracy caused by large error drifts. Computer vision approaches can produce accuracy tracking but have privacy concerns and are subject to lighting conditions and computation complexity. Recent work exploits accurate acoustic distance measurements for high precision tracking. However, they require additional hardware (e.g., multiple external speakers), which adds to the costs and installation efforts, thus limiting the convenience and usability. In this paper, we propose BatTracker, which incorporates inertial and acoustic data for robust, high precision and infrastructure-free tracking in indoor environments. BatTracker leverages echoes from nearby objects and uses distance measurements from them to correct error accumulation in inertial based device position prediction. It incorporates Doppler shifts and echo amplitudes to reliably identify the association between echoes and objects despite noisy signals from multi-path reflection and cluttered environment. A probabilistic algorithm creates, prunes and evolves multiple hypotheses based on measurement evidences to accommodate uncertainty in device position. Experiments in real environments show that BatTracker can track a mobile device's movements in 3D space at sub-cm level accuracy, comparable to the state-of-the-art infrastructure based approaches, while eliminating the needs of any additional hardware."}
{"_id": "4cea60c30d404abfd4044a6367d436fa6f67bb89", "title": "ConTagNet: Exploiting User Context for Image Tag Recommendation", "text": "In recent years, deep convolutional neural networks have shown great success in single-label image classification. However, images usually have multiple labels associated with them which may correspond to different objects or actions present in the image. In addition, a user assigns tags to a photo not merely based on the visual content but also the context in which the photo has been captured. Inspired by this, we propose a deep neural network which can predict multiple tags for an image based on the content as well as the context in which the image is captured. The proposed model can be trained end-to-end and solves a multi-label classification problem. We evaluate the model on a dataset of 1,965,232 images which is drawn from the YFCC100M dataset provided by the organizers of Yahoo-Flickr Grand Challenge. We observe a significant improvement in the prediction accuracy after integrating user-context and the proposed model performs very well in the Grand Challenge."}
{"_id": "d43ff32d37803e8b18b2d2bc909101709212499f", "title": "Reveal, a general reverse engineering algorithm for inference of genetic network architectures.", "text": "Given the immanent gene expression mapping covering whole genomes during development, health and disease, we seek computational methods to maximize functional inference from such large data sets. Is it possible, in principle, to completely infer a complex regulatory network architecture from input/output patterns of its variables? We investigated this possibility using binary models of genetic networks. Trajectories, or state transition tables of Boolean nets, resemble time series of gene expression. By systematically analyzing the mutual information between input states and output states, one is able to infer the sets of input elements controlling each element or gene in the network. This process is unequivocal and exact for complete state transition tables. We implemented this REVerse Engineering ALgorithm (REVEAL) in a C program, and found the problem to be tractable within the conditions tested so far. For n = 50 (elements) and k = 3 (inputs per element), the analysis of incomplete state transition tables (100 state transition pairs out of a possible 10(15)) reliably produced the original rule and wiring sets. While this study is limited to synchronous Boolean networks, the algorithm is generalizable to include multi-state models, essentially allowing direct application to realistic biological data sets. The ability to adequately solve the inverse problem may enable in-depth analysis of complex dynamic systems in biology and other fields."}
{"_id": "32dc07d3f24653c74c073ae59674070fc321df50", "title": "Barrel menu: a new mobile phone menu for feature rich devices", "text": "Mobile phones have ceased to be devices that people merely use to make phone calls. Today, modern mobile phones offer their users a large selection of features which are accessed via hierarchical menus. These hierarchical menus typically result in deep nested menus that require numerous clicks to navigate through, often resulting in usability issues. This paper presents a prototype of a new menu style (Barrel menu) for mobile phones, and compares the usability of this menu style with that of a traditional hierarchical (Hub-and-Spoke) design. The Barrel menu was found to be as efficient as the Hub-and-Spoke menu in terms of time-on-task and key presses, but was found to cause fewer user errors. In addition, the Barrel menu was found to be better in terms of ease of use, orientation, user satisfaction and learnability. Thus, the Barrel menu shows the potential to be a feasible alternative for mobile phone manufacturers to overcome some of the usability issues associated with Hub-and-Spoke menus."}
{"_id": "7a596a618f11e7d5ac3e3c273e87915679ff023a", "title": "Simultaneous robot-world and hand-eye calibration", "text": "1 Simultaneous Robot-World and Hand-Eye Calibration Fadi Dornaika and Radu Horaud, Member, IEEE Abstract|Recently, Zhuang, Roth, & Sudhakar [1] proposed a method that allows simultaneous computation of the rigid transformations from world frame to robot base frame and from hand frame to camera frame. Their method attempts to solve a homogeneous matrix equation of the form AX = ZB. They use quaternions to derive explicit linear solutions for X and Z. In this short paper, we present two new solutions that attempt to solve the homogeneous matrix equation mentioned above: (i) a closed-form method which uses quaternion algebra and a positive quadratic error function associated with this representation and (ii) a method based on non-linear constrained minimization and which simultaneously solves for rotations and translations. These results may be useful to other problems that can be formulated in the same mathematical form. We perform a sensitivity analysis for both our two methods and the linear method developed by Zhuang et al. [1]. This analysis allows the comparison of the three methods. In the light of this comparison the non-linear optimization method, which solves for rotations and translations simultaneously, seems to be the most stable one with respect to noise and to measurement errors. Keywords| hand/eye calibration, robot/world calibration, quaternion algebra."}
{"_id": "81e06c9fc1af2adfeff04758188f454a1d3e3f68", "title": "Modern electrical machine analysis and design techniques applied to hybrid vehicle drive machines", "text": "In this paper a review is made of the different analysis and design techniques that can be used in modern electrical machine design, and references them to the developing area of hybrid vehicle drives. These machines are characterized by high transient torque requirement, compactness and forced cooling. While rare-earth magnet machines are commonly used in these applications, there is an increasing interest in motors without magnets. An example is made of the published Toyota Prius permanent-magnet machine specification. An alternative induction motor design is addressed."}
{"_id": "b0e65c7087d5c584886fa7e209628446cbc6a6b5", "title": "BP: Profiling Vulnerabilities on the Attack Surface", "text": "Security practitioners use the attack surface of software systems to prioritize areas of systems to test and analyze. To date, approaches for predicting which code artifacts are vulnerable have utilized a binary classification of code as vulnerable or not vulnerable. To better understand the strengths and weaknesses of vulnerability prediction approaches, vulnerability datasets with classification and severity data are needed. The goal of this paper is to help researchers and practitioners make security effort prioritization decisions by evaluating which classifications and severities of vulnerabilities are on an attack surface approximated using crash dump stack traces. In this work, we use crash dump stack traces to approximate the attack surface of Mozilla Firefox. We then generate a dataset of 271 vulnerable files in Firefox, classified using the Common Weakness Enumeration (CWE) system. We use these files as an oracle for the evaluation of the attack surface generated using crash data. In the Firefox vulnerability dataset, 14 different classifications of vulnerabilities appeared at least once. In our study, 85.3% of vulnerable files were on the attack surface generated using crash data. We found no difference between the severity of vulnerabilities found on the attack surface generated using crash data and vulnerabilities not occurring on the attack surface. Additionally, we discuss lessons learned during the development of this vulnerability dataset."}
{"_id": "00b202871ec41b8049e8393e463660525ecb61b5", "title": "Subspace clustering based on low rank representation and weighted nuclear norm minimization", "text": "Subspace clustering refers to the problem of segmenting a set of data points approximately drawn from a union of multiple linear subspaces. Aiming at the subspace clustering problem, various subspace clustering algorithms have been proposed and low rank representation based subspace clustering is a very promising and efficient subspace clustering algorithm. Low rank representation method seeks the lowest rank representation among all the candidates that can represent the data points as linear combinations of the bases in a given dictionary. Nuclear norm minimization is adopted to minimize the rank of the representation matrix. However, nuclear norm is not a very good approximation of the rank of a matrix and the representation matrix thus obtained can be of high rank which will affect the final clustering accuracy. Weighted nuclear norm (WNN) is a better approximation of the rank of a matrix and WNN is adopted in this paper to describe the rank of the representation matrix. The convex program is solved via conventional alternation direction method of multipliers (ADMM) and linearized alternating direction method of multipliers (LADMM) and they are respectively refer to as WNNM-LRR and WNNM-LRR(L). Experimental results show that, compared with low rank representation method and several other state-of-the-art subspace clustering methods, WNNM-LRR and WNNM-LRR(L) can get higher clustering accuracy."}
{"_id": "a76077422be2d2f495be52b23092d8f2a1f10709", "title": "iHadoop: Asynchronous Iterations for MapReduce", "text": "MapReduce is a distributed programming framework designed to ease the development of scalable data-intensive applications for large clusters of commodity machines. Most machine learning and data mining applications involve iterative computations over large datasets, such as the Web hyperlink structures and social network graphs. Yet, the MapReduce model does not efficiently support this important class of applications. The architecture of MapReduce, most critically its dataflow techniques and task scheduling, is completely unaware of the nature of iterative applications, tasks are scheduled according to a policy that optimizes the execution for a single iteration which wastes bandwidth, I/O, and CPU cycles when compared with an optimal execution for a consecutive set of iterations. This work presents iHadoop, a modified MapReduce model, and an associated implementation, optimized for iterative computations. The iHadoop model schedules iterations asynchronously. It connects the output of one iteration to the next, allowing both to process their data concurrently. iHadoop's task scheduler exploits inter-iteration data locality by scheduling tasks that exhibit a producer/consumer relation on the same physical machine allowing a fast local data transfer. For those iterative applications that require satisfying certain criteria before termination, iHadoop runs the check concurrently during the execution of the subsequent iteration to further reduce the application's latency. This paper also describes our implementation of the iHadoop model, and evaluates its performance against Hadoop, the widely used open source implementation of MapReduce. Experiments using different data analysis applications over real-world and synthetic datasets show that iHadoop performs better than Hadoop for iterative algorithms, reducing execution time of iterative applications by 25% on average. Furthermore, integrating iHadoop with HaLoop, a variant Hadoop implementation that caches invariant data between iterations, reduces execution time by 38% on average."}
{"_id": "3085aa4779c04898685c1b2d50cdafa98b132d3f", "title": "A Quaternion-based Unscented Kalman Filter for Orientation Tracking", "text": "This paper describes a Kalman filter for the real-time estimation of a rigid body orientation from measurements of acceleration, angular velocity and magnetic field strength. A quaternion representation of the orientation is computationally effective and avoids problems with singularities. The nonlinear relationship between estima ed orientation and expected measurement prevent the usage of a classical Kalman filter. This problem is solved by an Unscented Kalman filter which allows nonlinear process and measurement models and is more accurate and less costly than the common Extended Kalman filter. Several extensions to the original Unscented Kalman filter are necessary to treat the inherent properties of unit quaternions. Resul ts with simulated and measured data are discussed."}
{"_id": "3979a1adf2f12443436870cb7219be87f6b209cc", "title": "Graph-wavelet filterbanks for edge-aware image processing", "text": "In our recent work, we proposed the construction of critically-sampled wavelet filterbanks for analyzing functions defined on the vertices of arbitrary undirected graphs. These graph based functions, referred to as graph-signals, provide a flexible model for representing many datasets with arbitrary location and connectivity. An application area considered in that work is image-processing, where pixels can be connected with their neighbors to form undirected graphs. In this paper, we propose various graph-formulations of images, which capture both directionality and intrinsic edge-information. The proposed graph-wavelet filterbanks provide a sparse, edge-aware representation of image-signals. Our preliminary results in non-linear approximation and denoising using graphs show promising gains over standard separable wavelet filterbank designs."}
{"_id": "f0d6579c5a550a27c8af75a0651e6fd6ebae5d40", "title": "Mixed Methods Research Designs in Counseling Psychology", "text": "With the increased popularity of qualitative research, researchers in counseling psychology are expanding their methodologies to include mixed methods designs. These designs involve the collection, analysis, and integration of quantitative and qualitative data in a single or multiphase study. This article presents an overview of mixed methods research designs. It defines mixed methods research, discusses its origins and philosophical basis, advances steps and procedures used in these designs, and identifies 6 different types of designs. Important design features are illustrated using studies published in the counseling literature. Finally, the article ends with recommendations for designing, implementing, and reporting mixed methods studies in the literature and for discussing their viability and continued usefulness in the field of counseling psychology."}
{"_id": "10195a163ab6348eef37213a46f60a3d87f289c5", "title": "Deep Expectation of Real and Apparent Age from a Single Image Without Facial Landmarks", "text": "In this paper we propose a deep learning solution to age estimation from a single face image without the use of facial landmarks and introduce the IMDB-WIKI dataset, the largest public dataset of face images with age and gender labels. If the real age estimation research spans over decades, the study of apparent age estimation or the age as perceived by other humans from a face image is a recent endeavor. We tackle both tasks with our convolutional neural networks (CNNs) of VGG-16 architecture which are pre-trained on ImageNet for image classification. We pose the age estimation problem as a deep classification problem followed by a softmax expected value refinement. The key factors of our solution are: deep learned models from large data, robust face alignment, and expected value formulation for age regression. We validate our methods on standard benchmarks and achieve state-of-the-art results for both real and apparent age estimation."}
{"_id": "c763bf953a3381d7631ccad11843cb35e8a37441", "title": "Content-boosted Matrix Factorization Techniques for Recommender Systems", "text": "Many businesses are using recommender systems for marketing outreach. Recommendation algorithms can be either based on content or driven by collaborative filtering. We study different ways to incorporate content information directly into the matrix factorization approach of collaborative filtering. These content-boosted matrix factorization algorithms not only improve recommendation accuracy, but also provide useful insights about the contents, as well as make recommendations more easily interpretable."}
{"_id": "714732b758b14dcaa063b85364fb7b2b81f4032a", "title": "NE-Rank: A Novel Graph-Based Keyphrase Extraction in Twitter", "text": "The massive growth of the micro-blogging service Twitter has shed the light on the challenging problem of summarizing a collection of large number of tweets. This paper attempts to extract topical key phrases that would represent topics in tweets. Due to the informality, noise, and short length of tweets, such research is nontrivial. We tackle such challenges with extensive preprocessing approach. Followed by, introduction of new features that improve topical key phrase extraction in Twitter. We start by proposing a novel unsupervised graph-based keyword ranking method, called NE-Rank, that considers word weights in addition to edge weights when calculating the ranking. Then we introduce a new approach of leveraging hash tags when extracting key phrases. We have conducted a set of experiments showing the potential of both approaches with 16% to 39% improvement for NE-Rank and 20% improvement for hash tag enhanced extraction."}
{"_id": "2e1325b331ff65644dfb1b8d74c08245a3c45f3f", "title": "Defending continuous collision detection against errors", "text": "Numerical errors and rounding errors in continuous collision detection (CCD) can easily cause collision detection failures if they are not handled properly. A simple and effective approach is to use error tolerances, as shown in many existing CCD systems. Unfortunately, finding the optimal tolerance values is a difficult problem for users. Larger tolerance values will introduce false positive artifacts, while smaller tolerance values may cause collisions to be undetected. The biggest issue here is that we do not know whether or when CCD will fail, even though failures are extremely rare. In this paper, we demonstrate a set of simple modifications to make a basic CCD implementation failure-proof. Using error analysis, we prove the safety of this method and we formulate suggested tolerance values to reduce false positives. The resulting algorithms are safe, automatic, efficient, and easy to implement."}
{"_id": "17e073138c9a8e6daebfb75695df17eb672a8688", "title": "BGPmon: A Real-Time, Scalable, Extensible Monitoring System", "text": "This paper presents a new system, called BGPmon, for monitoring the Border Gateway Protocol (BGP). BGP is the routing protocol for the global Internet. Monitoring BGP is important for both operations and research; a number of public and private BGP monitors are deployed and widely used. These existing monitors typically collect data using a full implementation of a BGP router. In contrast, BGPmon eliminates the unnecessary functions of route selection and data forwarding to focus solely on the monitoring function. BGPmon uses a publish/subscribe overlay network to provide real-time access to vast numbers of peers and clients. All routing events are consolidated into a single XML stream. XML allows us to add additional features such as labeling updates to allow easy identification of useful data by clients. Clients subscribe to BGPmon and receive the XML stream, performing tasks such as archiving, filtering, or real-time data analysis. BGPmon enables scalable real-time monitoring data distribution by allowing monitors to peer with each other and form an overlay network to provide new services and features without modifying the monitors. We illustrate the effectiveness of the BGPmon data using the Cyclops route monitoring system."}
{"_id": "a158e943b31d9abe9bdcc63f344c5fd91ab8edbe", "title": "Smart Irrigation Decision Support Based on Fuzzy Logic Using Wireless Sensor Network", "text": "Smart Irrigation Decision Support (SIDS) system based on fuzzy logic using Wireless Sensor Network (WSN) is considered for precision water irrigation. WSN consists of limited-energy sensor nodes which are equipped with sensing, wireless communication and processing capabilities. SIDS aims to measure the agricultural parameters including the soil temperature and soil moisture using the sensor nodes. The rate of soil moisture reduction is calculated from the current soil moisture reading and the previous one. These soil temperate and the rate of soil moisture reduction are employed as input parameters for fuzzy logic controller to produce the amount of irrigation time as output parameter. The fuzzy logic rules and linguistic values for the input and output parameters of fuzzy logic are carefully selected with the guide of agricultural experts including the farmers. The experimental results show that by using the SIDS system, the amount of irrigation time is precisely calculated based on the measured agricultural parameters. In addition, the water irrigation utilization is improved."}
{"_id": "39086c5c29eaf385bf39fcdcff07c2458eb93314", "title": "The \u2018 dark side \u2019 of leadership personality and transformational leadership : An exploratory study", "text": "There is growing interest in dysfunctional dispositions in the workplace, in particular relating to leaders and managers within organizations. This paper reports an exploratory study investigating the relationship between the \u2018dark side\u2019 of leadership personality and transformational leadership. Data were collected from a sample of New Zealand business leaders and senior managers (n = 80) from scores on the Hogan Development Survey (HDS; Hogan & Hogan, 1997) and the Multifactor Leadership Questionnaire (MLQ; Bass & Avolio, 1995). Regression analyses revealed the \u2018Colorful\u2019 (histrionic) dimension of the HDS to be a positive predictor of transformational leadership, while the \u2018Cautious\u2019 (avoidant) and \u2018Bold\u2019 (narcissistic) dimensions of the HDS were revealed as negative predictors of transformational leadership. This is one of the first reports to suggest a relationship between histrionic personality and transformational leadership, in a literature preoccupied with the relationship between leadership and narcissism. 2007 Elsevier Ltd. All rights reserved."}
{"_id": "4f05bc999323d7ba611d7dca480075a3885b44b4", "title": "Shedding Light on the Dark Triad of Personality : Narcissism , Machiavellianism , and Psychopathy", "text": "Of the offensive yet non-pathological personalities in the literature, three are especially prominent: Machiavellianism, subclinical narcissism, and subclinical psychopathy. We evaluated the recent contention that, in normal samples, this Dark Triad of constructs are one and the same. In a sample of 245 students, we measured the three constructs with standard measures and examined a variety of laboratory and self-report correlates. The measures were moderately inter-correlated, but certainly were not equivalent. Their only common Big Five correlate was disagreeableness. Subclinical psychopaths were distinguished by low neuroticism; Machiavellians, and psychopaths were low in conscientiousness; narcissism showed small positive associations with cognitive ability. Narcissists and, to a lesser extent, psychopaths exhibited self-enhancement on two objectively scored indexes. We conclude that the Dark Triad of personalities, as currently measured, are overlapping but distinct constructs. 2002 Elsevier Science (USA). All rights reserved."}
{"_id": "1a50c235a12739fd10a02ff99175de1f2c5822b2", "title": "The Narcissistic Personality Inventory: alternative form reliability and further evidence of construct validity.", "text": "The aim of this study was twofold: (a)to measure the alternate form reliability of the Narcissistic Personality Inventory, and (b)to determine its construct validity by correlating it with the four scales of the Eysenck Personality Questionnaire (EPQ). The alternate form reliability was .72. The Extraversion and Psychoticism scales of the EPQ were positively and significantly correlated with the narcissism measure, and the Lie scale showed a significant negative correlation. The Neuroticism scale showed a nonsignificant relationship with narcissism. In addition, the combined Extraversion and Psychoticism scales produced[ a Multiple R with the narcissism measure that accounted for significantly more of the variance in narcissism than did either measure alone."}
{"_id": "25eb626f0024f9733f0381d6c907c31a3f75c9c5", "title": "Loving yourself abundantly: relationship of the narcissistic personality to self- and other perceptions of workplace deviance, leadership, and task and contextual performance.", "text": "The authors report results from 2 studies assessing the extent to which narcissism is related to self- and other ratings of leadership, workplace deviance, and task and contextual performance. Study 1 results revealed that narcissism was related to enhanced self-ratings of leadership, even when controlling for the Big Five traits. Study 2 results also revealed that narcissism was related to enhanced leadership self-perceptions; indeed, whereas narcissism was significantly positively correlated with self-ratings of leadership, it was significantly negatively related to other ratings of leadership. Study 2 also revealed that narcissism was related to more favorable self-ratings of workplace deviance and contextual performance compared to other (supervisor) ratings. Finally, as hypothesized, narcissism was more strongly negatively related to contextual performance than to task performance."}
{"_id": "cbd6e91afffa0fdd461452a90842241a6d44bf70", "title": "Integrating the 3+1 SysML view model with safety engineering", "text": "System safety is the property of the system that characterizes its ability to prevent from hazards, which may lead to accidents or losses. Traditionally, system developers are not familiar with system safety analysis processes which are performed by safety engineers. One reason for this is the gap that exists between the traditional development processes, methodologies, notations and tools and the ones used in safety engineering. This gap makes the development of safety aware systems a very complicated task. Several approaches based on UML have been proposed to address this gap. In this paper, an approach to integrate safety engineering with a SysML based development process that is expressed in the form of the V-model, is presented. Preliminary hazard analysis is adopted and applied to a SysML based requirements specification of the mechatronic system that exploits essential use cases. A case study from the railway domain is used to illustrate the proposed approach."}
{"_id": "a589f88848dd8b21648b1f8831de7c89b4d6a888", "title": "The Search for Quasi-Periodicity in Islamic 5-fold Ornament", "text": "T he Penrose tilings are remarkable in that they are non-periodic (have no translational symmetry) but are clearly organised. Their structure, called quasiperiodicity, can be described in several ways, including via self-similar subdivision, tiles with matching rules, and projection of a slice of a cubic lattice in R. The tilings are also unusual for their many centres of local 5-fold and 10fold rotational symmetry, features shared by some Islamic geometric patterns. This resemblance has prompted comparison, and has led some to see precursors of the Penrose tilings and even evidence of quasi-periodicity in traditional Islamic designs. Bonner [2] identified three styles of self-similarity; Makovicky [20] was inspired to develop new variants of the Penrose tiles and later, with colleagues [24], overlaid Penrose-type tilings on traditional Moorish designs; more recently, Lu and Steinhardt [17] observed the use of subdivision in traditional Islamic design systems and overlaid Penrose kites and darts on Iranian designs. The latter article received widespread exposure in the world\u2019s press, although some of the coverage overstated and misrepresented the actual findings. The desire to search for examples of quasi-periodicity in traditional Islamic patterns is understandable, but we must take care not to project modern motivations and abstractions into the past. An intuitive knowledge of group theory is sometimes attributed to any culture that has produced repeating patterns displaying a wide range of symmetry types, even though they had no abstract notion of a group. There are two fallacies to avoid: \u2022 abstraction: P knew about X and X is an example of Y therefore P knew Y. \u2022 deduction: P knew X and X implies Y therefore P knew Y."}
{"_id": "22dbe3d73538361b07d2f29dda56547afc3f9642", "title": "Social Choice for Partial Preferences Using Imputation", "text": "Within the field of multiagent systems, the area of computational social choice considers the problems arising when decisions must be made collectively by a group of agents. Usually such systems collect a ranking of the alternatives from each member of the group in turn, and aggregate these individual rankings to arrive at a collective decision. However, when there are many alternatives to consider, individual agents may be unwilling, or unable, to rank all of them, leading to decisions that must be made on the basis of incomplete information. While earlier approaches attempt to work with the provided rankings by making assumptions about the nature of the missing information, this can lead to undesirable outcomes when the assumptions do not hold, and is ill-suited to certain problem domains. In this thesis, we propose a new approach that uses machine learning algorithms (both conventional and purpose-built) to generate plausible completions of each agent\u2019s rankings on the basis of the partial rankings the agent provided (imputations), in a way that reflects the agents\u2019 true preferences. We show that the combination of existing social choice functions with certain classes of imputation algorithms, which forms the core of our proposed solution, is equivalent to a form of social choice. Our system then undergoes an extensive empirical validation under 40 different test conditions, involving more than 50,000 group decision problems generated from real-world electoral data, and is found to outperform existing competitors significantly, leading to better group decisions overall. Detailed empirical findings are also used to characterize the behaviour of the system, and illustrate the circumstances in which it is most advantageous. A general testbed for comparing solutions using real-world and artificial data (Prefmine) is then described, in conjunction with results that justify its design decisions. We move on to propose a new machine learning algorithm intended specifically to learn and impute the preferences of agents, and validate its effectiveness. This Markov-Tree approach is demonstrated to be superior to imputation using conventional machine learning, and has a simple interpretation that characterizes the problems on which it will perform well. Later chapters contain an axiomatic validation of both of our new approaches, as well as techniques for mitigating their manipulability. The thesis concludes with a discussion of the applicability of its contributions, both for multiagent systems and for settings involving human elections. In all, we reveal an interesting connection between machine learning and computational social choice, and introduce a testbed which facilitates future research efforts on computational social choice for partial preferences, by allowing empirical comparisons between competing approaches to be conducted easily, accurately, and quickly. Perhaps most importantly, we offer an important and effective new direction for enabling group decision making when preferences are not completely specified, using imputation methods."}
{"_id": "86111dd33decaf1c8beb723eb9999a4c24fba1c2", "title": "DRAPE: DRessing Any PErson", "text": "We describe a complete system for animating realistic clothing on synthetic bodies of any shape and pose without manual intervention. The key component of the method is a model of clothing called DRAPE (DRessing Any PErson) that is learned from a physics-based simulation of clothing on bodies of different shapes and poses. The DRAPE model has the desirable property of \"factoring\" clothing deformations due to body shape from those due to pose variation. This factorization provides an approximation to the physical clothing deformation and greatly simplifies clothing synthesis. Given a parameterized model of the human body with known shape and pose parameters, we describe an algorithm that dresses the body with a garment that is customized to fit and possesses realistic wrinkles. DRAPE can be used to dress static bodies or animated sequences with a learned model of the cloth dynamics. Since the method is fully automated, it is appropriate for dressing large numbers of virtual characters of varying shape. The method is significantly more efficient than physical simulation."}
{"_id": "fb2b0b020b4aa690068c576b34d7fe78f6866abe", "title": "Cooperation and Competition", "text": "Some time ago, in the garden of a friend\u2019s house, my 5-year-old son and his chum were struggling over possession of a water hose. (They were in conflict.) Each wanted to use it first to water the garden. (They had a competitive orientation.) Each was trying to tug it away from the other and both were crying. Each was very frustrated, and neither was able to use the hose to sprinkle the flowers as he\u2019d desired. After reaching a deadlock in this tug-of-war, they began to punch one another and call each other names. (As a result of their competitive approach, the conflict took a destructive course for both of them\u2014producing frustration, crying, and violence.) Now imagine a different scenario. The garden consists mainly of two sections, flowers and vegetables. Each kid wants to use the hose first. Let\u2019s suppose they want to resolve their conflict amicably. (They have a cooperative orientation.) One says to the other, \u201cLet\u2019s flip a coin to see who uses the hose first.\u201d (A fair procedure for resolving the conflict is suggested.) The other agrees and suggests that the loser be given the right to select which section of the garden he waters. They both agree to the suggestion. (A cooperative, win-win agreement is reached.) Their agreements are implemented and both kids feel happy and good about one another. (These are common effects of a cooperative or constructive approach to a conflict.) As this example illustrates, whether the participants in a conflict have a cooperative orientation or a competitive one is decisive in determining its course and outcomes. This chapter is concerned with understanding the processes involved in cooperation and competition, their effects, and the factors that contribute to developing a cooperative or competitive relationship. It is important to understand the nature of cooperation and competition since almost all conflicts are mixed-motive, containing elements of both cooperation and competition."}
{"_id": "241614097085a68b8ecc0fa2d7ff72026f33f209", "title": "Review of ontology matching with background knowledge", "text": "The ontology matching process with background knowledge is more suitable to match heterogeneous ontologies, since background knowledge is used as a mediator or a reference to identify relation between two concepts being matched. This method is called indirect matching and the system is called indirect matching system. This paper reviews the motivation that described the urgency of ontology matching, the various background knowledge and their strengths, also indirect matching process. At the end we provide the comparison of indirect matching system. Based on the comparison, mapping repair function were added to improve the quality of mapping. The purpose of this paper is to help in guiding new practitioners get a general idea on the ontology matching field and to determine possible research lines."}
{"_id": "9ad55d5a0b7218d863f3b53118627748ad236f8c", "title": "Development of sEMG-driven assistive devices based on twisted string actuation", "text": "A twisted string actuation module with an integrated force sensor based on optoelectronic components is presented in this paper. This solution is especially suited for very compact, light-weight and wearable robotic devices, such as exoskeletons, but is appropriate for various robotic applications. An in-depth presentation of the proposed actuation module, the description and the basic sensor working principle of the integrated force sensor are portrayed and discussed. Extensive experimental measures have been carried out to verify the actuation module compliant frame design and the related finite element analysis results. Therefore, the proposed actuation module has been used for the implementation of a simple assistive application where surface-electromyographic signals are used to detect the user's muscle activity in order to control the support action provided by the actuator, thus reducing his/her effort."}
{"_id": "1cff17f4ba4522e4373abe91ef97eed755345894", "title": "Variational inference in nonconjugate models", "text": "Mean-field variational methods are widely used for approximate posterior inference in many probabilistic models. In a typical application, mean-field methods approximately compute the posterior with a coordinate-ascent optimization algorithm. When the model is conditionally conjugate, the coordinate updates are easily derived and in closed form. However, many models of interest\u2014like the correlated topic model and Bayesian logistic regression\u2014are nonconjugate. In these models, mean-field methods cannot be directly applied and practitioners have had to develop variational algorithms on a case-by-case basis. In this paper, we develop two generic methods for nonconjugate models, Laplace variational inference and delta method variational inference. Our methods have several advantages: they allow for easily derived variational algorithms with a wide class of nonconjugate models; they extend and unify some of the existing algorithms that have been derived for specific models; and they work well on real-world data sets. We studied our methods on the correlated topic model, Bayesian logistic regression, and hierarchical Bayesian logistic regression."}
{"_id": "1406f6b5ed4034b72ed2dccc3fcfa4c5c0810924", "title": "Effective mapping of biomedical text to the UMLS Metathesaurus: the MetaMap program", "text": "The UMLS Metathesaurus, the largest thesaurus in the biomedical domain, provides a representation of biomedical knowledge consisting of concepts classified by semantic type and both hierarchical and non-hierarchical relationships among the concepts. This knowledge has proved useful for many applications including decision support systems, management of patient records, information retrieval (IR) and data mining. Gaining effective access to the knowledge is critical to the success of these applications. This paper describes MetaMap, a program developed at the National Library of Medicine (NLM) to map biomedical text to the Metathesaurus or, equivalently, to discover Metathesaurus concepts referred to in text. MetaMap uses a knowledge intensive approach based on symbolic, natural language processing (NLP) and computational linguistic techniques. Besides being applied for both IR and data mining applications, MetaMap is one of the foundations of NLM's Indexing Initiative System which is being applied to both semi-automatic and fully automatic indexing of the biomedical literature at the library."}
{"_id": "2d948d7f1187db65f9b0306327563bc0526d76d9", "title": "Contrast-Guided Image Interpolation", "text": "In this paper a contrast-guided image interpolation method is proposed that incorporates contrast information into the image interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45\u00b0 and 135\u00b0 CDMs for interpolating the diagonal pixels and 2) the 0\u00b0 and 90\u00b0 CDMs for interpolating the row and column pixels. After applying edge detection to the input image, the generation of a CDM lies in evaluating those nearby non-edge pixels of each detected edge for re-classifying them possibly as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields using a derived numerical approach to diffuse or spread the contrast boundaries or edges, respectively. The amount of diffusion or spreading is proportional to the amount of local contrast measured at each detected edge. The diffused DV fields are then thresholded for yielding the binary CDMs, respectively. Therefore, the decision bands with variable widths will be created on each CDM. The two CDMs generated in each stage will be exploited as the guidance maps to conduct the interpolation process: for each declared edge pixel on the CDM, a 1-D directional filtering will be applied to estimate its associated to-be-interpolated pixel along the direction as indicated by the respective CDM; otherwise, a 2-D directionless or isotropic filtering will be used instead to estimate the associated missing pixels for each declared non-edge pixel. Extensive simulation results have clearly shown that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, the computational complexity is relatively low when compared with existing methods; hence, it is fairly attractive for real-time image applications."}
{"_id": "5be3d215c55e246196a09c977d77d9d51170db5f", "title": "Increased frontal and paralimbic activation following ayahuasca, the pan-amazonian inebriant", "text": "Ayahuasca is a South American psychoactive plant tea which contains the serotonergic psychedelic N,N-dimethyltryptamine (DMT) and monoamine-oxidase inhibitors that render DMT orally active. Previous investigations with ayahuasca have highlighted a psychotropic effect profile characterized by enhanced introspective attention, with individuals reporting altered somatic perceptions and intense emotional modifications, frequently accompanied by visual imagery. Despite recent advances in the study of ayahuasca pharmacology, the neural correlates of acute ayahuasca intoxication remain largely unknown. To investigate the effects of ayahuasca administration on regional cerebral blood flow. Fifteen male volunteers with prior experience in the use of psychedelics received a single oral dose of encapsulated freeze-dried ayahuasca equivalent to 1.0\u00a0mg DMT/kg body weight and a placebo in a randomized double-blind clinical trial. Regional cerebral blood flow was measured 100\u2013110\u00a0min after drug administration by means of single photon emission tomography (SPECT). Ayahuasca administration led to significant activation of frontal and paralimbic brain regions. Increased blood perfusion was observed bilaterally in the anterior insula, with greater intensity in the right hemisphere, and in the anterior cingulate/frontomedial cortex of the right hemisphere, areas previously implicated in somatic awareness, subjective feeling states, and emotional arousal. Additional increases were observed in the left amygdala/parahippocampal gyrus, a structure also involved in emotional arousal. The present results suggest that ayahuasca interacts with neural systems that are central to interoception and emotional processing and point to a modulatory role of serotonergic neurotransmission in these processes."}
{"_id": "0ad0518637d61e8f4b151657797b067ec74418e4", "title": "Semi-supervised deep learning by metric embedding", "text": "Deep networks are successfully used as classification models yielding state-ofthe-art results when trained on a large number of labeled samples. These models, however, are usually much less suited for semi-supervised problems because of their tendency to overfit easily when trained on small amounts of data. In this work we will explore a new training objective that is targeting a semi-supervised regime with only a small subset of labeled data. This criterion is based on a deep metric embedding over distance relations within the set of labeled samples, together with constraints over the embeddings of the unlabeled set. The final learned representations are discriminative in euclidean space, and hence can be used with subsequent nearest-neighbor classification using the labeled samples."}
{"_id": "45cb1ff9e1221b52ba3c26e33e98550f6117ae5a", "title": "Deep Belief Network-Based Approaches for Link Prediction in Signed Social Networks", "text": "In some online social network services (SNSs), the members are allowed to label their relationships with others, and such relationships can be represented as the links with signed values (positive or negative). The networks containing such relations are named signed social networks (SSNs), and some real-world complex systems can be also modeled with SSNs. Given the information of the observed structure of an SSN, the link prediction aims to estimate the values of the unobserved links. Noticing that most of the previous approaches for link prediction are based on the members\u2019 similarity and the supervised learning method, however, research work on the investigation of the hidden principles that drive the behaviors of social members are rarely conducted. In this paper, the deep belief network (DBN)-based approaches for link prediction are proposed. Including an unsupervised link prediction model, a feature representation method and a DBN-based link prediction method are introduced. The experiments are done on the datasets from three SNSs (social networking services) in different domains, and the results show that our methods can predict the values of the links with high performance and have a good generalization ability across these datasets."}
{"_id": "834e7f6982b44d9eaf12fa7008305ccfe70bddba", "title": "Impact of reducing STI-induced stress on layout dependence of MOSFET characteristics", "text": "Active-area layout dependence of MOSFET parametric characteristics and its reduction by reducing shallow trench isolation (STI)-induced mechanical stress were investigated. Threshold voltages (V/sub th/) and saturation drain currents (I/sub ds/) become sensitive to the active-area layout of MOSFET in scaled-down technology. This phenomenon is the effect of mechanical stress from STI edge, which reduces impurity diffusion in channel region and enhances carrier mobility. To reduce the STI-induced stress, we examined STI-wall-oxide nitridation and STI gap-fill-oxide densifying in pure N/sub 2/ ambient. These processes reduced the reoxidation of the STI wall oxide, therefore reduced the STI-induced stress. According to the new STI process, the active-area layout dependence of V/sub th/ and I/sub ds/ were reduced successfully."}
{"_id": "57de40bd7ec54c48602d8905012b67a924f0e389", "title": "Kijken naar porno: subcorticale hersenactiviteit weerspiegelt negatieve maar niet positieve seksuele associaties", "text": "Aan de ene kant kunnen mensen visuele seksuele stimulatie (VSS) ervaren als plezierig, maar aan de andere kant kunnen mensen VSS ook ervaren als iets \u201cviezigs\u201d en negatief. Dit onderzoek richt zich op de vraag hoe het brein bij vrouwen reageert op VSS en hoe de hersenactiviteit samenhangt met hun impliciete en expliciete attitudes ten aanzien van VSS. Vrouwen zonder seksuele klachten (N = 20) bekeken een reeks afbeeldingen uit diverse categorie\u00ebn (seksuele penetratie, walging, neutraal) in een fMRI-scanner. Met behulp van een reactietijd-taak (Implicit Association Test) werd vervolgens de sterkte van hun impliciete seks-vies versus seks-lekker associaties gemeten. Daarnaast beoordeelden ze alle afbeeldingen op een visuele analoge schaal als meting van hun expliciete attitude. De hersenactiviteit in reactie op afbeeldingen van seksuele penetratie vertoonde grote overeenkomst met die op afbeeldingen van walgelijke plaatjes. Specifiek de hersenactiviteit in reactie op de seksuele penetratieafbeeldingen bleek te vari\u00ebren als functie van de impliciete attitude ten aanzien van de seksuele penetratie-plaatjes. Hoe sterker de impliciete associatie van seks met walging des te sterker de activiteit in de basale voorhersenen (inclusief de nucleus accumbens en de bed nucleus van de stria terminalis), de middenhersenen, en amygdala in reactie op het zien van de seksuele penetratie plaatjes. De betrokken hersengebieden zijn veelal geassocieerd met visuele seksuele verwerkingsprocessen. De huidige resultaten laten zien dat de activatie van deze gebieden niet zondermeer een positieve appreciatie van seksuele stimuli impliceert, maar ook kan duiden op een negatieve of ambivalente houding ten aanzien van seksuele stimuli."}
{"_id": "e23ba0d9f36c32ee18a06f4e5026fa2a6414e8ea", "title": "Automatic detection of motion artifacts in MR images using CNNS", "text": "Considerable practical interest exists in being able to automatically determine whether a recorded magnetic resonance image is affected by motion artifacts caused by patient movements during scanning. Existing approaches usually rely on the use of navigators or external sensors to detect and track patient motion during image acquisition. In this work, we present an algorithm based on convolutional neural networks that enables fully automated detection of motion artifacts in MR scans without special hardware requirements. The approach is data driven and uses the magnitude of MR images in the spatial domain as input. We evaluate the performance of our algorithm on both synthetic and real data and observe adequate performance in terms of accuracy and generalization to different types of data. Our proposed approach could potentially be used in clinical practice to tag an MR image as motion-free or motion-corrupted immediately after a scan is finished. This process would facilitate the acquisition of high-quality MR images that are often indispensable for accurate medical diagnosis."}
{"_id": "e629bbeba17b8312fde05b0334760e6743d11a4d", "title": "D Region Proposal U-Net with Dense and Residual Learning for Lung Nodule Detection", "text": ""}
{"_id": "a577285e59adb772913140dd0a8bcce8a258b3dc", "title": "Semi-supervised Deep Learning with Memory", "text": "We consider the semi-supervised multi-class classification problem of learning from sparse labelled and abundant unlabelled training data. To address this problem, existing semi-supervised deep learning methods often rely on the up-to-date \u201cnetwork-in-training\u201d to formulate the semi-supervised learning objective. This ignores both the discriminative feature representation and the model inference uncertainty revealed by the network in the preceding learning iterations, referred to as the memory of model learning. In this work, we propose a novel Memory-Assisted Deep Neural Network (MA-DNN) capable of exploiting the memory of model learning to enable semi-supervised learning. Specifically, we introduce a memory mechanism into the network training process as an assimilation-accommodation interaction between the network and an external memory module. Experiments demonstrate the advantages of the proposed MA-DNN model over the state-of-the-art semisupervised deep learning methods on three image classification benchmark datasets: SVHN, CIFAR10, and CIFAR100."}
{"_id": "099c7a437ae47bc5ea1ba9bee41119789d1282b7", "title": "A real-time object detecting and tracking system for outdoor night surveillance", "text": "Autonomous video surveillance and monitoring has a rich history. Many deployed systems are able to reliably track human motion in indoor and controlled outdoor environments. However, object detection and tracking at night remain very important problems for visual surveillance. The objects are often distant, small and their signatures have low contrast against the background. Traditional methods based on the analysis of the difference between successive frames and a background frame will do not work. In this paper, a novel real time object detection algorithm is proposed for night-time visual surveillance. The algorithm is based on contrast analysis. In the first stage, the contrast in local change over time is used to detect potential moving objects. Then motion prediction and spatial nearest neighbor data association are used to suppress false alarms. Experiments on real scenes show that the algorithm is effective for night-time object detection and tracking. 2007 Published by Elsevier Ltd on behalf of Pattern Recognition Society."}
{"_id": "bcc5ec72dbb446da90498568d3ce2879f1f1da70", "title": "Thermal management and cooling of windings in electrical machines for electric vehicle and traction application", "text": "Conventional electrical machine cooling includes housing fins, shaft-mounted fan, and water jacket. In these cases, heat from the copper loss of windings needs to pass through either the stator core or the air between windings and the housing. Because of the large thermal resistance in the path, these methods are sometimes not efficient enough for high torque density machines. Overheat in windings causes failure in the insulation and damages the machine. Many technologies that facilitate winding cooling have been investigated in the literature, such as winding topologies with more efficient heat dissipation capability, impregnation material with high thermal conductivity, and advanced direct winding. This paper reviews and classifies thermal management and cooling methods to provide a guideline for high torque density electrical machine design for a better winding thermal management."}
{"_id": "9f5e78b9b7b2a3792c3453c3439c3dc3de9a662b", "title": "Multisensor data fusion: Target tracking with a doppler radar and an Electro-Optic camera", "text": "This paper addresses the problem of multisensor data fusion for target tracking using a Doppler radar with range rate measurements and an Electro-Optic (EO) camera. We present three fusion architectures, named FA1-FA3, to perform data fusion using the above mentioned sensors. FA1 and FA2 are distributed fusion architectures employing the information matrix fusion method with dynamic feedback. In FA1, radar and camera pseudo measurements are formed that allow us to make use of a linear Kalman Filter (KF) for the radar local filter and an Extended Kalman Filter (EKF) for the EO camera local filter. In FA2, the radar and camera measurements are used directly and therefore the system comprises two EKFs. FA3 is a centralised architecture where the data fusion is performed by way of the measurement fusion method. The final contribution of this paper is a performance comparison of these sensor data fusion techniques when making use of range rate measurements. In order to evaluate the performance of the fusion architectures, Monte Carlo simulations are performed and two filter metrics are presented: an absolute metric - the root mean squared error (RMSE) and a performance metric - the average normalised estimation error squared (ANEES). The results show that the fusion architectures presented are accurate, stable and credible."}
{"_id": "1345c39bea4802e20cc7e3adbc3e3287c1839c8a", "title": "Two Bitcoins at the Price of One? Double-Spending Attacks on Fast Payments in Bitcoin", "text": "Bitcoin is a decentralized payment system that is based on Proof-of-Work. Bitcoin is currently gaining popularity as a digital currency; several businesses are starting to accept Bitcoin transactions. An example case of the growing use of Bitcoin was recently reported in the media; here, Bitcoins were used as a form of fast payment in a local fast-food restaurant. In this paper, we analyze the security of using Bitcoin for fast payments, where the time between the exchange of currency and goods is short (i.e., in the order of few seconds). We focus on doublespending attacks on fast payments and demonstrate that these attacks can be mounted at low cost on currently deployed versions of Bitcoin. We further show that the measures recommended by Bitcoin developers for the use of Bitcoin in fast transactions are not always effective in resisting double-spending; we show that if those recommendations are integrated in future Bitcoin implementations, double-spending attacks on Bitcoin will still be possible. Finally, we leverage on our findings and propose a lightweight countermeasure that enables the detection of doublespending attacks in fast transactions."}
{"_id": "308da0c615e83a4616c8a4d1cf8159bb35897dfe", "title": "Empirical Analysis of Denial-of-Service Attacks in the Bitcoin Ecosystem", "text": "We present an empirical investigation into the prevalence and impact of distributed denial-of-service (DDoS) attacks on operators in the Bitcoin economy. To that end, we gather and analyze posts mentioning \u201cDDoS\u201d on the popular Bitcoin forum bitcointalk.org. Starting from around 3 000 different posts made between May 2011 and October 2013, we document 142 unique DDoS attacks on 40 Bitcoin services. We find that 7% of all known operators have been attacked, but that currency exchanges, mining pools, gambling operators, eWallets, and financial services are much more likely to be attacked than other services. Not coincidentally, we find currency exchanges and mining pools are much more likely to have DDoS protection such as CloudFlare, Incapsula, or Amazon Cloud. We show that those services that have been attacked are more than three times as likely to buy anti-DDoS services than operators who have not been attacked. We find that big mining pools (those with historical hashrate shares of at least 5%) are much more likely to be DDoSed than small pools. We investigate Mt. Gox as a case study for DDoS attacks on currency exchanges and find a disproportionate amount of DDoS reports made during the large spike in trading volume and exchange rates in spring 2013. We conclude by outlining future opportunities for researching DDoS attacks on Bitcoin."}
{"_id": "30acc826a84919f6474e2f91f9eba81c4b1824be", "title": "Beware the Middleman: Empirical Analysis of Bitcoin-Exchange Risk", "text": "Bitcoin has enjoyed wider adoption than any previous crypto-currency; yet its success has also attracted the attention of fraudsters who have taken advantage of operational insecurity and transaction irreversibility. We study the risk investors face from Bitcoin exchanges, which convert between Bitcoins and hard currency. We examine the track record of 40 Bitcoin exchanges established over the past three years, and find that 18 have since closed, with customer account balances often wiped out. Fraudsters are sometimes to blame, but not always. Using a proportional hazards model, we find that an exchange\u2019s transaction volume indicates whether or not it is likely to close. Less popular exchanges are more likely to be shut than popular ones. We also present a logistic regression showing that popular exchanges are more likely to suffer a security breach."}
{"_id": "33189adda43ee49005373f9dbbb351ff78a45199", "title": "Game-Theoretic Analysis of DDoS Attacks Against Bitcoin Mining Pools", "text": "One of the unique features of the digital currency Bitcoin is that new cash is introduced by so-called miners carrying out resourceintensive proof-of-work operations. To increase their chances of obtaining freshly minted bitcoins, miners typically join pools to collaborate on the computations. However, intense competition among mining pools has recently manifested in two ways. Miners may invest in additional computing resources to increase the likelihood of winning the next mining race. But, at times, a more sinister tactic is also employed: a mining pool may trigger a costly distributed denial-of-service (DDoS) attack to lower the expected success outlook of a competing mining pool. We explore the trade-off between these strategies with a series of game-theoretical models of competition between two pools of varying sizes. We consider differences in costs of investment and attack, as well as uncertainty over whether a DDoS attack will succeed. By characterizing the game\u2019s equilibria, we can draw a number of conclusions. In particular, we find that pools have a greater incentive to attack large pools than small ones. We also observe that larger mining pools have a greater incentive to attack than smaller ones."}
{"_id": "d0fa9fa9db95d3cfe8b4c42205ea9fff29a19115", "title": "Towards Risk Scoring of Bitcoin Transactions", "text": "If Bitcoin becomes the prevalent payment system on the Internet, crime fighters will join forces with regulators and enforce blacklisting of transaction prefixes at the parties who offer real products and services in exchange for bitcoin. Blacklisted bitcoins will be hard to spend and therefore less liquid and less valuable. This requires every recipient of Bitcoin payments not only to check all incoming transactions for possible blacklistings, but also to assess the risk of a transaction being blacklisted in the future. We elaborate this scenario, specify a risk model, devise a prediction approach using public knowledge, and present preliminary results using data from selected known thefts. We discuss the implications on markets where bitcoins are traded and critically revisit Bitcoin\u2019s ability to serve as a unit of account."}
{"_id": "b4497c0922adc29ab1f4863acd5263459b971ab6", "title": "Mastering Digital Transformation: the Path of a Financial Services Provider towards a Digital Transformation Strategy", "text": "To master the challenges of a digital transformation and to systematically address IT\u2019s multifaceted transformative impacts on an organization\u2019s inner and outer environments, top management is increasingly formulating and implementing a digital transformation strategy (DTS). To date, there have been few details of DTS formation concerning its underlying processes and activities. In this study, an interpretive case study approach is employed and DTS formation is investigated from a process/activity perspective. By using an activity-based process model that builds on IS strategizing, an in-depth case study at a large financial services provider was conducted. The results show that this DTS was predominantly shaped by a diversity of emergent strategizing activities through a bottom-up process and prior to the introduction of a holistic approach by top management. Top management then sought to formalize emergent strategy contents by formulating and implementing a DTS that comprised a shared target picture, distinct digital transformation governance, and measures to increase the share of deliberate strategy contents. Besides providing practical implications for DTS formulation and implementation, this study contributes to the literature on digital transformation, IS strategy, and IS strategizing."}
{"_id": "955a83f8b52c5d6e9865b5464ded1b1490951f93", "title": "A comprehensive physical model for light reflection", "text": "A new general reflectance model for computer graphics is presented. The model is based on physical optics and describes specular, directional diffuse, and uniform diffuse reflection by a surface. The reflected light pattern depends on wavelength, incidence angle, two surface roughness parameters, and surface refractive index. The formulation is self consistent in terms of polarization, surface roughness, masking/shadowing, and energy. The model applies to a wide range of materials and surface finishes and provides a smooth transition from diffuse-like to specular reflection as the wavelength and incidence angle are increased or the surface roughness is decreased. The model is analytic and suitable for Computer Graphics applications. Predicted reflectance distributions compare favorably with experiment. The model is applied to metallic, nonmetallic, and plastic materials, with smooth and rough surfaces."}
{"_id": "47d6f907a38e897983d245a91ffc5422606b2552", "title": "Labor Costs and Foreign Direct Investment : A Panel VAR Approach", "text": "This paper examines the endogenous interaction between labor costs and Foreign Direct Investment (FDI) in the OECD countries via the Panel VAR approach under system GMM estimates for the period 1995\u20132009. The available data allows identifying the relevance of the components of labor costs, and allows a detailed analysis across different sectors. Empirical findings have revealed that sectoral composition of FDI and the decomposition of labor costs play a significant role in investigating the dynamic association between labor costs and FDI. Further, results suggest that labor market policies should focus on productivity-enhancing tools in addition to price hindering tools."}
{"_id": "ccf3cac7dbdaecfc7a60b5d34e2d5c7cf8843873", "title": "Hairstyle recommendation system for women", "text": "A perfect hairstyle enhanced anyone's self-confidence, especially women. However, in order to choose a good hairstyle, one was limited to rely on knowledge of a beauty expert. This paper presented a hairstyle recommendation system for women based on hairstyle experts' knowledge and a face shape classification scheme that the authors devised in a previous study. The system showed a user's face with a recommended or not recommended hairstyle on a monitor."}
{"_id": "2d0e81dfe372e3e04ce2529a03c4dcfc2dab379b", "title": "Harmonic Resonance Evaluation for Hub Traction Substation Consisting of Multiple High-Speed Railways", "text": "In many hub traction power-supply systems (TPSSs), the 27.5 kV busbars in the traction substation (TSS) have supplied several railway lines in a local area for reducing the number of TSSs. However, the multiple railway lines connected to the busbar may affect the resonance behaviors of the entire system. It is therefore necessary to investigate the resonance behaviors under such hub TPSS conditions since the harmonic current collection of all railway lines will aggravate the resonance problem. Thus, resonance-mode assessment and its sensitivity method is adopted for investigating the resonance behaviors of hub TPSS affected by different numbers of connected lines, lengths of supply lines, system capacities, etc. The numerical results of modal sensitivity indices are compared to investigate and quantify the harmonic resonance issue caused by multiple electrical railway lines. In addition, the resonance frequency-shift technique and modal impedance sensitivity index-based method are also employed to mitigate the critical harmonic resonances. The results illustrate that it is effective to analyze resonance issues by adopting the harmonic model and modal methods in this paper. The two types of resonance behaviors (i.e., primary resonance and resonance band) caused from the hub TPSS have been fully investigated, and the resulting harmonic distortion of hub TPSS is suppressed."}
{"_id": "4414a35093721c9f8aa73cd26bae895879de84d2", "title": "Communication-Efficient Distributed Statistical Inference", "text": "We present a Communication-efficient Surrogate Likelihood (CSL) framework for solving distributed statistical inference problems. CSL provides a communication-efficient surrogate to the global likelihood that can be used for low-dimensional estimation, high-dimensional regularized estimation and Bayesian inference. For low-dimensional estimation, CSL provably improves upon naive averaging schemes and facilitates the construction of confidence intervals. For high-dimensional regularized estimation, CSL leads to a minimax-optimal estimator with controlled communication cost. For Bayesian inference, CSL can be used to form a communication-efficient quasi-posterior distribution that converges to the true posterior. This quasi-posterior procedure significantly improves the computational efficiency of MCMC algorithms even in a non-distributed setting. We present both theoretical analysis and experiments to explore the properties of the CSL approximation."}
{"_id": "83587062d3f0bbc1184cc7e015526c17f0411466", "title": "Social Interactions in Online Gaming", "text": "This paper briefly overviews five studies examining massively multiplayer online role-playing games (MMORPGs). The first study surveyed 540 gamers and showed that the social aspects of the game were the most important factor for many gamers. The second study explored the social interactions of 912 MMORPG players and showed they created strong friendships and emotional relationships. A third study examined the effect of online socializing in the lives of 119 online gamers. Significantly more male gamers than female gamers said that they found it easier to converse online than offline, and 57% of gamers had engaged in gender swapping. A fourth study surveyed 7,069 gamers and found that 12% of gamers fulfilled at least three diagnostic criteria of addiction. Finally, an interview study of 71 gamers explored attitudes, experiences, and feelings about online gaming. They provided detailed descriptions of personal problems that had arisen due to playing MMORPGs. an individualistic character (Griffiths, Davies, & Chappell, 2004). This is the only setting where millions of users voluntarily immerse themselves in a graphical virtual environment and interact with each other through avatars on a daily basis (Yee, 2007). Research suggests that the game play within these virtual worlds is enhanced because players use them as traditional games as well as arenas in which to explore DOI: 10.4018/ijgbl.2011100103 International Journal of Game-Based Learning, 1(4), 20-36, October-December 2011 21 Copyright \u00a9 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. new relationships, new places and themselves (Krotoski, 2004). Despite the massive amounts of money spent on online gaming, very little research has been carried out regarding the positive social aspects of these games. Much of the debate over the last 30 years has focused on the dangers of computer gaming in the adolescent population, including increased aggression and addiction. Research has also been carried out examining the potentially harmful effects playing computer games may have on social development, self-esteem, social inadequacy, and social anxiety. MMORPGs are very (virtually) socially interactive but little social interaction in the real world is needed when playing them as only one person can play them at any one time from a single computer, unlike some popular two-player console games such as Mortal Kombat. Yee (2001, 2006, 2007) has carried out research into MMORPGs and notes that they allow new forms of social identity and social interaction. Yee\u2019s research has shown that MMORPGs appeal to adults and teenagers from a wide range of backgrounds, and they spend on average more than half a working week in these environments. In a study by Utz (2000), it was found that 77% of respondents reported that they had some sort of relation with other Multi-User Dungeon (MUD) Gamers. It has also been suggested that college students can develop compulsions to play MMORPGs leading to social isolation, poor academic performance, and sleep deprivation. In 2004, a survey of over 54,000 American students found 11% of females and 20% of males said their recreational computer use had significantly hindered their performance at college and University (American College Health Association, 2005). 
Players can become fixated on their virtual characters, striving to obtain the best armour, experience and reputation in the game, ignoring the fact that their grades are dropping and their friends have drifted apart. It is clear to see that computer games appear to play a role in the socialisation of heavy game players particularly for those who play MMORPGs. Krotoski (2004) maintains that MMORPGs encourage group interaction and involvement, flexibility and mastery, resulting in significant friendships and personal empowerment. It is important to realise that gaming has shown elements of being a compulsive behaviour, with players feeling addicted, experiencing a strong impulse to play the games and finding it hard to resist the games (Griffiths & Davies, 2005). Positive social interaction is paramount in MMORPGs because they require a large number of players to cooperate together and work as a team at the same time. MMORPGs also have multiple tasks that require different characters with different skills in order to complete a challenge or quest. This teaches gamers to be dependent on one another that reinforce their relationships, providing a good understanding of teamwork. The purpose of the research here is to examine the social interactions that occur both within and outside of MMORPGs. The development of virtual friendships can be very enjoyable for gamers, and anecdotal evidence has suggested they sometimes develop into serious real-life friendships and relationships. Not only do MMORPGs facilitate formation of relationships, they are also windows into and catalysts in existing relationships. To date, there has been relatively little research into massively multiplayer online roleplaying games (MMORPGs). This paper briefly overviews five studies by the authors (Griffiths, Davies & Chappell, 2004; Cole & Griffiths, 2007; Hussain & Griffiths, 200, 2009; Gr\u00fcsser, Thalemann, & Griffiths, 2007). In all studies outlined below, participants were informed that participation was entirely voluntary and that the research was conducted according to the British Psychological Society\u2019s Ethical Code of Conduct for Psychologists. In the case of the surveys, if participants no longer wished to take part they simply had to close the Internet browser. All players were guaranteed anonymity and confidentiality. 15 more pages are available in the full version of this document, which may be purchased using the \"Add to Cart\" button on the product's webpage: www.igi-global.com/article/social-interactions-onlinegaming/60132?camid=4v1 This title is available in InfoSci-Journals, InfoSci-Journal Disciplines Library Science, Information Studies, and Education, InfoSci-Select, InfoSci-Select, InfoSciCommunications, Online Engagement, and Media eJournal Collection, InfoSci-Educational Leadership, Administration, and Technologies eJournal Collection, InfoSci-Select, InfoSci-Select, InfoSci-Journal Disciplines Communications and Social Science. Recommend this product to your"}
{"_id": "5ebfab75eaf08e323ca28eee09b43452b9d7a5a8", "title": "THE AUDIT RISK MODEL UNDER THE RISK OF FRAUD", "text": "In this article, we derive an audit risk formula for a simple situation. This formula closely resembles the SAS 47 model when we assume that no material misstatement due to fraud exists. A simple case illustrates how the risk of material misstatement due to management fraud impacts audit risk and how performing special audit procedures to detect such irregularities can decrease overall audit risk. While SAS 53 requires auditors to assess the risk of errors and irregularities and plan the audit to provide reasonable assurance of detecting errors and irregularities, the audit risk model of SAS 47 does not directly address the risk of management fraud. This article develops an audit risk model using the belief-function framework that considers the risks faced by auditors due to random errors, defalcations (employee fraud) and management fraud. We consider two cases. In the first, we consider only affirmative items of evidence and derive an analytical formula for the audit risk. In the second, we consider mixed items of evidence (both affirmative and negative), which models the situation more frequently faced by auditors. We demonstrate that a serious underestimation of audit risk can occur if the audit risk model is used without specifically considering the risks associated with management fraud."}
{"_id": "bb6b4d13cc3fbabb4b3392ce86c4740490a4590e", "title": "A Three-Winding Coupled-Inductor DC\u2013DC Converter Topology With High Voltage Gain and Reduced Switch Stress", "text": "This paper presents a dc\u2013dc boost converter topology for low input and high output voltage applications, such as photovoltaic systems, fuel cell systems, high-intensity discharge lamp, and electric vehicles. The suggested configuration consists of a three-winding coupled-inductor, a single switch and two hybrid voltage multiplier cells. Furthermore, two independent hybrid voltage multiplier cells are in parallel when the single switch S is turned on, and they are in series when the switch S is turned off. So, the advantages of the proposed converter structure are summarized as follows: 1) A coupled inductor with three windings is introduced in the presented converter structure. The two secondary windings of the coupled inductor are, respectively, used to form a hybrid multiplier cell on the one hand, on the other hand, it increases the control freedom of the voltage gain, enhances the utility rate of magnetic core and power density, and reduces the stress of power components to provide a stable constant dc output voltage. 2) The two hybrid multiplier cells can absorb synchronously the energy of stray inductance, which not only reduces the current stress of corresponding diodes, but also greatly alleviates the spike voltage of the main switch, which improves the efficiency. 3) The two hybrid multiplier cells are connected in series to supply power energy for the load, so the voltage gain is extended greatly due to this particular structure. Thus, low-voltage low-conduction-loss devices can be selected and the reverse-recovery currents within the diodes are inhibited. The operating principles and the steady state analyses of the proposed converter are discussed in detail. Finally, a test prototype has been implemented in the laboratory, and the simulated and experimental results show satisfactory agreement with the theoretical analysis."}
{"_id": "b2fa2d6de9c2d5a5edbf36edcb8871bec0d1604c", "title": "Twitter Sentiment Analysis: A Bootstrap Ensemble Framework", "text": "Twitter sentiment analysis has become widely popular. However, stable Twitter sentiment classification performance remains elusive due to several issues: heavy class imbalance in a multi-class problem, representational richness issues for sentiment cues, and the use of diverse colloquial linguistic patterns. These issues are problematic since many forms of social media analytics rely on accurate underlying Twitter sentiments. Accordingly, a text analytics framework is proposed for Twitter sentiment analysis. The framework uses an elaborate bootstrapping ensemble to quell class imbalance, sparsity, and representational richness issues. Experiment results reveal that the proposed approach is more accurate and balanced in its predictions across sentiment classes, as compared to various comparison tools and algorithms. Consequently, the bootstrapping ensemble framework is able to build sentiment time series that are better able to reflect events eliciting strong positive and negative sentiments from users. Considering the importance of Twitter as one of the premiere social media platforms, the results have important implications for social media analytics and social intelligence."}
{"_id": "3d3fbbba74f6322f4bccbefcb977581eab79e436", "title": "DeFlaker: Automatically Detecting Flaky Tests", "text": "Developers often run tests to check that their latest changes to a code repository did not break any previously working functionality. Ideally, any new test failures would indicate regressions caused by the latest changes. However, some test failures may not be due to the latest changes but due to non-determinism in the tests, popularly called flaky tests. The typical way to detect flaky tests is to rerun failing tests repeatedly. Unfortunately, rerunning failing tests can be costly and can slow down the development cycle.\n We present the first extensive evaluation of rerunning failing tests and propose a new technique, called DeFlaker, that detects if a test failure is due to a flaky test without rerunning and with very low runtime overhead. DeFlaker monitors the coverage of latest code changes and marks as flaky any newly failing test that did not execute any of the changes. We deployed DeFlaker live, in the build process of 96 Java projects on TravisCI, and found 87 previously unknown flaky tests in 10 of these projects. We also ran experiments on project histories, where DeFlaker detected 1, 874 flaky tests from 4, 846 failures, with a low false alarm rate (1.5%). DeFlaker had a higher recall (95.5% vs. 23%) of confirmed flaky tests than Maven's default flaky test detector."}
{"_id": "6c2e20a55373a83dd9ad3d8ecc9f51880837c939", "title": "Understanding the influence and service type of trusted third party on consumers' online trust: evidence from Australian B2C marketplace", "text": "In this study, the trusted third party (TTP) in Australia's B2C marketplace is studied and the factors influencing consumers' trust behaviour are examined from the perspective of consumers' online trust. Based on the literature review and combined with the development status and background of Australia's e-commerce, underpinned by the Theory of Planned Behaviour (TPB) and a conceptual trust model, this paper expatiates the specific factors and influence mechanism of TTP on consumers' trust behaviour. Also this paper explains two different functions of TTP to solve the online trust problem faced by consumers. Meanwhile, this paper summarizes five different types of services provided by TTPs during the establishment of the trust relationship. Finally, the present study selects 100 B2C enterprises by the simple random sampling method and makes a detailed analysis of their TTPs, to verify the services and functions of the proposed TTP in the trust model. This study is of some significance for comprehending the influence mechanism, functions and services of TTPs on consumers' trust behaviour in the realistic Australian B2C environment."}
{"_id": "9bf0e62b5b2343a0b509a1ac7a658be587a5c37d", "title": "Exploration and Mapping with Autonomous Robot Teams Results from the MAGIC 2010 Competition", "text": "The potential impact of autonomous robotics is magnified when those robots are deployed in teams: a team of cooperating robots can greatly increase the effectiveness of a human working alone, making short work of search-and-rescue and reconnaissance tasks. To achieve this potential, however, a number of challenging problems ranging from multi-robot planning, state estimation, object detection, and human-robot interfaces must first be solved. The MAGIC 2010 competition, like the DARPA grand challenges that preceded it, presented a formidable robotics problem designed to foster fundamental advances in these difficult areas. MAGIC asked teams of robots to collaboratively explore and map a 500\u00d7 500m area, detect and track benign and dangerous objects, and collaborate with human commanders while respecting their cognitive limits. This paper describes our winning entry in the MAGIC contest, where we fielded a team of 14 autonomous robots supervised by two human operators. While the challenges in MAGIC were diverse, we believe that cooperative multi-robot state estimation is ultimately the critical factor in building a successful system. In this paper, we describe our system and some of the technological advances that we believe were responsible for our success. We also contrast our approach to those of other teams. a)"}
{"_id": "d4b11787ca323794d3ecb8f2a4b492cf309da9e9", "title": "A segmented claw-pole motor for traction applications considering recycling aspects", "text": "With the expansion of the fleet of electric and hybrid electric vehicles worldwide, it is of interest to consider recycling aspects of the parts that are introduced in these new vehicles. This paper focuses on the design of an electrical machine, a claw-pole motor, considering recycling of its components. The suggested design has segmented core parts of soft magnetic composites which yield a design very suitable for recycling as the core material is brittle and it thus simplifies the access of the copper winding. The windings are pre-formed ring-coils with a high fill-factor and with no end-windings. Through the use of direct water cooling, employing space between the claws, the current density can be high, and the machine can be compact. The results of finite element simulations of the electromagnetic properties (previously validated against measurements) show that a high performance recyclable motor with ratings above 10 kW can be made. In comparison with a commercial motor, the claw-pole motor has similar performance, a higher core and magnet weight and a lower copper weight. It also has an expected advantage regarding manufacturing cost and cost of recycling and it has lower copper loss."}
{"_id": "7ff3bef25ea255a3902deedeb3e8376ed8f6ce17", "title": "Force Jacket: Pneumatically-Actuated Jacket for Embodied Haptic Experiences", "text": "Immersive experiences seek to engage the full sensory system in ways that words, pictures, or touch alone cannot. With respect to the haptic system, however, physical feedback has been provided primarily with handheld tactile experiences or vibration-based designs, largely ignoring both pressure receptors and the full upper-body area as conduits for expressing meaning that is consistent with sight and sound. We extend the potential for immersion along these dimensions with the Force Jacket, a novel array of pneumatically-actuated airbags and force sensors that provide precisely directed force and high frequency vibrations to the upper body. We describe the pneumatic hardware and force control algorithms, user studies to verify perception of airbag location and pressure magnitude, and subsequent studies to define full-torso, pressure and vibration-based feel effects such as punch, hug, and snake moving across the body. We also discuss the use of those effects in prototype virtual reality applications."}
{"_id": "e96e339497c45ad5b14af81611afa8c61174b9c6", "title": "Adoption of business continuity planning processes in IT service management", "text": "For any fault of the same severity level, traditional fault discovery and notification tools provide equal weighting from business points of view. To improve the fault correlation from business perspectives, we proposed a framework to automate network and system alerts with respect to its business service impact for proactive notification to IT operations management. This paper outlines the value of business continuity planning (BCP) during the course of service impact analysis, placing particular emphasis on the business perspective in the processes of IT service management. The framework explicitly employs BCP relevant processes in order to identify the relationships between business services and IT resources A practical case in IT operations to illustrate the concept was then conducted."}
{"_id": "a0d9ec58fde6ffb1a0db982adff3d6b9925af8b8", "title": "ADMIRAL: A Data Mining Based Financial Trading System", "text": "This paper presents a novel framework for predicting stock trends and making financial trading decisions based on a combination of data and text mining techniques. The prediction models of the proposed system are based on the textual content of time-stamped Web documents in addition to traditional numerical time series data, which is also available from the Web. The financial trading system based on the model predictions (ADMIRAL) is using three different trading strategies. In this paper, the ADMIRAL system is simulated and evaluated on real-world series of news stories and stocks data using the C4.5 decision tree induction algorithm. The main performance measures are the predictive accuracy of the induced models and, more importantly, the profitability of each trading strategy using these predictions"}
{"_id": "c24495b0c14bf6b563e4b8c9656a4072d6f83995", "title": "Linear cross-modal hashing for efficient multimedia search", "text": "Most existing cross-modal hashing methods suffer from the scalability issue in the training phase. In this paper, we propose a novel cross-modal hashing approach with a linear time complexity to the training data size, to enable scalable indexing for multimedia search across multiple modals. Taking both the intra-similarity in each modal and the inter-similarity across different modals into consideration, the proposed approach aims at effectively learning hash functions from large-scale training datasets. More specifically, for each modal, we first partition the training data into $k$ clusters and then represent each training data point with its distances to $k$ centroids of the clusters. Interestingly, such a k-dimensional data representation can reduce the time complexity of the training phase from traditional O(n2) or higher to O(n), where $n$ is the training data size, leading to practical learning on large-scale datasets. We further prove that this new representation preserves the intra-similarity in each modal. To preserve the inter-similarity among data points across different modals, we transform the derived data representations into a common binary subspace in which binary codes from all the modals are \"consistent\" and comparable. nThe transformation simultaneously outputs the hash functions for all modals, which are used to convert unseen data into binary codes. Given a query of one modal, it is first mapped into the binary codes using the modal's hash functions, followed by matching the database binary codes of any other modals. Experimental results on two benchmark datasets confirm the scalability and the effectiveness of the proposed approach in comparison with the state of the art."}
{"_id": "4fd74b807b47a5975e9b0ab354bfd780e0d921d2", "title": "Armadillo: An Open Source C++ Linear Algebra Library for Fast Prototyping and Computationally Intensive Experiments", "text": "In this report we provide an overview of the open source Armadillo C++ linear algebra library (matrix maths). The library aims to have a good balance between speed and ease of use, and is useful if C++ is the language of choice (due to speed and/or integration capabilities), rather than another language like Matlab or Octave. In particular, Armadillo can be used for fast prototyping and computationally intensive experiments, while at the same time allowing for relatively painless transition of research code into production environments. It is distributed under a license that is applicable in both open source and proprietary software development contexts. The library supports integer, floating point and complex numbers, as well as a subset of trigonometric and statistics functions. Various matrix decompositions are provided through optional integration with LAPACK, or one its high-performance drop-in replacements, such as MKL from Intel or ACML from AMD. A delayed evaluation approach is employed (during compile time) to combine several operations into one and reduce (or eliminate) the need for temporaries. This is accomplished through C++ template meta-programming. Performance comparisons suggest that the library is considerably faster than Matlab and Octave, as well as previous C++ libraries such as IT++ and Newmat. This report reflects a subset of the functionality present in Armadillo v2.4. Armadillo can be downloaded from: \u2022 http://arma.sourceforge.net If you use Armadillo in your research and/or software, we would appreciate a citation to this document. Please cite as: \u2022 Conrad Sanderson. Armadillo: An Open Source C++ Linear Algebra Library for Fast Prototyping and Computationally Intensive Experiments. Technical Report, NICTA, 2010. Armadillo: An Open Source C++ Linear Algebra Library NICTA Technical Report"}
{"_id": "1621f05894ad5fd6a8fcb8827a8c7aca36c81775", "title": "An optimal method for stochastic composite optimization", "text": "This paper considers an important class of convex programming (CP) problems, namely, the stochastic composite optimization (SCO), whose objective function is given by the summation of general nonsmooth and smooth stochastic components. Since SCO covers non-smooth, smooth and stochastic CP as certain special cases, a valid lower bound on the rate of convergence for solving these problems is known from the classic complexity theory of convex programming. Note however that the optimization algorithms that can achieve this lower bound had never been developed. In this paper, we show that the simple mirror-descent stochastic approximation method exhibits the best-known rate of convergence for solving these problems. Our major contribution is to introduce the accelerated stochastic approximation (AC-SA) algorithm based on Nesterov\u2019s optimal method for smooth CP [32,34], and show that the AC-SA algorithm can achieve the aforementioned lower bound on the rate of convergence for SCO. To the best of our knowledge, it is also the first universally optimal algorithm in the literature for solving non-smooth, smooth and stochastic CP problems. We illustrate the significant advantages of the AC-SA algorithm over existing methods in the context of solving a special but broad class of stochastic programming problems."}
{"_id": "04113e8974341f97258800126d05fd8df2751b7e", "title": "Universal approximation bounds for superpositions of a sigmoidal function", "text": "Approximation properties of a class of artificial neural networks are established. It is shown that feedforward networks with one layer of sigmoidal nonlinearities achieve inte\u00ad grated squared error of order O(l/n), where n is the number of nodes. The function appruximated is assumed to have a bound on the first moment of the magnitude distribution of the Fourier transform. The nonlinear parameters associated with the sigmoidal nodes, as well as the parameters of linear combination, are adjusted in the approximation. In contrast, it is shown that for series expansions with n terms, in which only the parameters of linear combination are adjusted, the integrated squared approximation error cannot be made smaller than order 1/n2/d uniformly for functions satisfying the same smoothness assumption, where d is the dimension of the input to the function. For the class of functions examined here, the approximation rate and the parsimony of the parameterization of the networks are surprisingly advantageous in high-dimensional settings."}
{"_id": "1281c90b462a82dcebf609cfd02f418b1c60beab", "title": "Greed is good: algorithmic results for sparse approximation", "text": "This article presents new results on using a greedy algorithm, orthogonal matching pursuit (OMP), to solve the sparse approximation problem over redundant dictionaries. It provides a sufficient condition under which both OMP and Donoho's basis pursuit (BP) paradigm can recover the optimal representation of an exactly sparse signal. It leverages this theory to show that both OMP and BP succeed for every sparse input signal from a wide class of dictionaries. These quasi-incoherent dictionaries offer a natural generalization of incoherent dictionaries, and the cumulative coherence function is introduced to quantify the level of incoherence. This analysis unifies all the recent results on BP and extends them to OMP. Furthermore, the paper develops a sufficient condition under which OMP can identify atoms from an optimal approximation of a nonsparse signal. From there, it argues that OMP is an approximation algorithm for the sparse problem over a quasi-incoherent dictionary. That is, for every input signal, OMP calculates a sparse approximant whose error is only a small factor worse than the minimal error that can be attained with the same number of terms."}
{"_id": "73f583aad5195324ee75eb981b8b5f1fed6f9d38", "title": "Primal-dual subgradient methods for convex problems", "text": "In this paper we present a new approach for constructing subgradient schemes for different types of nonsmooth problems with convex structure. Our methods are primaldual since they are always able to generate a feasible approximation to the optimum of an appropriately formulated dual problem. Besides other advantages, this useful feature provides the methods with a reliable stopping criterion. The proposed schemes differ from the classical approaches (divergent series methods, mirror descent methods) by presence of two control sequences. The first sequence is responsible for aggregating the support functions in the dual space, and the second one establishes a dynamically updated scale between the primal and dual spaces. This additional flexibility allows to guarantee a boundedness of the sequence of primal test points even in the case of unbounded feasible set. We present the variants of subgradient schemes for nonsmooth convex minimization, minimax problems, saddle point problems, variational inequalities, and stochastic optimization. In all situations our methods are proved to be optimal from the view point of worst-case black-box lower complexity bounds."}
{"_id": "b5602b44fd8d5e32eb2499f273effa9eb2f916fb", "title": "Modelling Radiological Language with Bidirectional Long Short-Term Memory Networks", "text": "Motivated by the need to automate medical information extraction from free-text radiological reports, we present a bi-directional long short-term memory (BiLSTM) neural network architecture for modelling radiological language. The model has been used to address two NLP tasks: medical named-entity recognition (NER) and negation detection. We investigate whether learning several types of word embeddings improves BiLSTM\u2019s performance on those tasks. Using a large dataset of chest x-ray reports, we compare the proposed model to a baseline dictionary-based NER system and a negation detection system that leverages the hand-crafted rules of the NegEx algorithm and the grammatical relations obtained from the Stanford Dependency Parser. Compared to these more traditional rule-based systems, we argue that BiLSTM offers a strong alternative for both our tasks."}
{"_id": "62f21bb224673c70e0ca73a5bbd1abd1f9e09b78", "title": "Marine debris in central California: quantifying type and abundance of beach litter in Monterey Bay, CA.", "text": "Monitoring beach litter is essential for reducing ecological threats towards humans and wildlife. In Monterey Bay, CA information on seasonal and spatial patterns is understudied. Central California's coastal managers require reliable information on debris abundance, distribution, and type, to support policy aimed at reducing litter. We developed a survey method that allowed for trained citizen scientists to quantify the types and abundance of beach litter. Sampling occurred from July 2009-June 2010. Litter abundance ranged from 0.03 to 17.1 items m(-2). Using a mixed model approach, we found season and location have the greatest effect on litter abundance. Styrofoam, the most numerically abundant item, made up 41% of the total amount of litter. Unexpected items included fertilizer pellets. The results of this study provide a baseline on the types and abundance of litter on the central coast and have directly supported policy banning Styrofoam take out containers from local municipalities."}
{"_id": "b77c7e37577e08a355030eecfc34347776034218", "title": "Issues and Perspectives in Meditation Research: In Search for a Definition", "text": "Despite the growing interest in the neurobiological correlates of meditation, most research has omitted to take into account the underlying philosophical aspects of meditation and its wider implications. This, in turn, is reflected in issues surrounding definition, study design, and outcomes. Here, I highlight the often ignored but important aspect of definition in the existing scholarship on neuroscience and meditation practice. For a satisfactory account of a neuroscience of meditation, we must aim to retrieve an operational definition that is inclusive of a traditional ontological description as well as the modern neurocognitive account of the phenomena. Moving beyond examining the effects of meditation practice, to take a potential step forward in the direction to establish how meditation works, it becomes crucial to appraise the philosophical positions that underlie the phenomenology of meditation in the originating traditions. This endeavor may challenge our intuitions and concepts in either directions, but issues pertaining to definition, design, and validity of response measures are extremely important for the evolution of the field and will provide a much-needed context and framework for meditation based interventions."}
{"_id": "a817c4630d5b18ef90b1d942aeaff64a62a3696a", "title": "Patent classifications as indicators of intellectual organization", "text": "Using the 138,751 patents filed in 2006 under the Patent Cooperation Treaty, coclassification analysis is pursued on the basis of threeand four-digit codes in the International Patent Classification (IPC, 8 edition). The co-classifications among the patents enable us to analyze and visualize the relations among technologies at different levels of aggregation. The hypothesis that classifications might be considered as the organizers of patents into classes, and that therefore co-classification patterns\u2014more than co-citation patterns\u2014might be useful for mapping, is not corroborated. The classifications hang weakly together, even at the four-digit level; at the country level, more specificity can be made visible. However, countries are not the appropriate units of analysis because patent portfolios are largely similar in many advanced countries in terms of the classes attributed. Instead of classes, one may wish to explore the mapping of title words as a better approach to visualize the intellectual organization of patents."}
{"_id": "6691e4fd5e6c53897051c4b0ab01979581551f38", "title": "Genetically Improved PSO Algorithm for Efficient Data Clustering", "text": "Clustering is an important research topic in data mining that appears in a wide range of unsupervised classification applications. Partitional clustering algorithms such as the k-means algorithm are the most popular for clustering large datasets. The major problem with the k-means algorithm is that it is sensitive to the selection of the initial partitions and it may converge to local optima. In this paper, we present a hybrid two-phase GAI-PSO+k-means data clustering algorithm that performs fast data clustering and can avoid premature convergence to local optima. In the first phase we utilize the new genetically improved particle swarm optimization algorithm (GAI-PSO) which is a population-based heuristic search technique modeled on the hybrid of cultural and social rules derived from the analysis of the swarm intelligence (PSO) and the concepts of natural selection and evolution (GA). The GAI-PSO combines the standard velocity and position update rules of PSOs with the ideas of selection, mutation and crossover from GAs. The GAI-PSO algorithm searches the solution space to find the optimal initial cluster centroids for the next phase. The second phase is a local refining stage utilizing the k-means algorithm which can efficiently converge to the optimal solution. The proposed algorithm combines the ability of the globalized searching of the evolutionary algorithms and the fast convergence of the k-means algorithm and can avoid the drawback of both. The performance of the proposed algorithm is evaluated through several benchmark datasets. The experimental results show that the proposed algorithm is highly forceful and outperforms the previous approaches such as SA, ACO, PSO and k-means for the partitional clustering problem."}
{"_id": "9fdb8e2eaa77d10d7fff751d9e7afa2810a8a3a3", "title": "s t )-r technologies social responses to communication c Defining Virtual Reality : Dimensions Determining Telepresence", "text": "Virtual reality (VR) is typically defined in terms of technological hardware. This paper attempts to cast a new, variable-based definition of virtual reality that can be used to classify virtual reality in relation to other media. The defintion of virtual reality is based on concepts of \u201cpresence\u201d and \u201ctelepresence,\u201d which refer to the sense of being in an environment, generated by natural or mediated means, respectively. Two technological dimensions that contribute to telepresence, vividness and interactivity, are discussed. A variety of media are classified according to these dimensions. Suggestions are made for the application of the new definition of virtual reality within the field of communication research."}
{"_id": "31e52cca457bcb96f2d7d46236dc31587c8994de", "title": "Advances in ultrasound biomicroscopy.", "text": "The visualisation of living tissues at microscopic resolution is attracting attention in several fields. In medicine, the goals are to image healthy and diseased tissue with the aim of providing information previously only available from biopsy samples. In basic biology, the goal may be to image biological models of human disease or to conduct longitudinal studies of small-animal development. High-frequency ultrasonic imaging (ultrasound biomicroscopy) offers unique advantages for these applications. In this paper, the development of ultrasound biomicroscopy is reviewed. Aspects of transducer development, systems design and tissue properties are presented to provide a foundation for medical and biological applications. The majority of applications appear to be developing in the 40-60-MHz frequency range, where resolution on the order of 50 microm can be achieved. Doppler processing in this frequency range is beginning to emerge and some examples of current achievements will be highlighted. The current state of the art is reviewed for medical applications in ophthalmology, intravascular ultrasound, dermatology, and cartilage imaging. Ultrasound biomicroscopic studies of mouse embryonic development and tumour biology are presented. Speculation on the continuing evolution of ultrasound biomicroscopy will be discussed."}
{"_id": "e4487429764b8c087b2613e2f8b04eb8aba4a502", "title": "A practical approach for content mining of Tweets.", "text": "Use of data generated through social media for health studies is gradually increasing. Twitter is a short-text message system developed 6 years ago, now with more than 100 million users generating over 300 million Tweets every day. Twitter may be used to gain real-world insights to promote healthy behaviors. The purposes of this paper are to describe a practical approach to analyzing Tweet contents and to illustrate an application of the approach to the topic of physical activity. The approach includes five steps: (1) selecting keywords to gather an initial set of Tweets to analyze; (2) importing data; (3) preparing data; (4) analyzing data (topic, sentiment, and ecologic context); and (5) interpreting data. The steps are implemented using tools that are publically available and free of charge and designed for use by researchers with limited programming skills. Content mining of Tweets can contribute to addressing challenges in health behavior research."}
{"_id": "6d778ec897507edb08df3b58ee33a5517bb895f3", "title": "60 GHz system-on-package antenna array with parasitic microstrip antenna single element", "text": "In this paper microstrip antenna for 60 GHz on LTCC is presented. Special techniques are used to satisfy the antenna specification in wide bandwidth (7 GHz, from 57 GHz to 64 GHz) and high gain (15 dBi). An increase in bandwidth is achieved by using aperture the coupling and the multilayer structure. Gain is increased by using parasitic patches which increase the total antenna aperture. In paper was presented 60 GHz parasitic microstrip antenna with two layers of parasitic patches. High gain is obtained as the result of the optimization. However, presented return loss results can not be treated as wide bandwidth."}
{"_id": "cbf71f27890d65985c7bea257ebdaad5aab7a585", "title": "Checking Business Process Evolution", "text": "Business processes support the modeling and the implementation of software as workflows of local and inter-process activities. Taking over structuring and composition, evolution has become a central concern in software development. We advocate it should be taken into account as soon as the modeling of business processes, which can thereafter be made executable using process engines or model-to-code transformations. We show here that business process evolution needs formal analysis in order to compare different versions of processes, identify precisely the differences between them, and ensure the desired consistency. To reach this objective, we first present a model transformation from the BPMN standard notation to the LNT process algebra. We then propose a set of relations for comparing business processes at the formal model level. With reference to related work, we propose a richer set of comparison primitives supporting renaming, refinement, propertyand context-awareness. Thanks to an implementation of our approach that can be used through a Web application, we put the checking of evolution within the reach of business process designers."}
{"_id": "be25bf323157ca9893f2d7366d5598eabc1ffa6a", "title": "On the number of response regions of deep feed forward networks with piece-wise linear activations", "text": "This paper explores the complexity of deep feedforward networks with linear presynaptic couplings and rectified linear activations. This is a contribution to the growing body of work contrasting the representational power of deep and shallow network architectures. In particular, we offer a framework for comparing deep and shallow models that belong to the family of piecewise linear functions based on computational geometry. We look at a deep rectifier multi-layer perceptron (MLP) with linear outputs units and compare it with a single layer version of the model. In the asymptotic regime, when the number of inputs stays constant, if the shallow model has kn hidden units and n0 inputs, then the number of linear regions is O(k0n0). For a k layer model with n hidden units on each layer it is \u03a9(bn/n0c n0). The number bn/n0c grows faster than k0 when n tends to infinity or when k tends to infinity and n \u2265 2n0. Additionally, even when k is small, if we restrict n to be 2n0, we can show that a deep model has considerably more linear regions that a shallow one. We consider this as a first step towards understanding the complexity of these models and specifically towards providing suitable mathematical tools for future analysis."}
{"_id": "07bc34058ddded67279207aee205486cc2bced82", "title": "Business Process Design by Reusing Business Process Fragments from the Cloud", "text": "The constant development of technologies forces companies to be more innovative in order to stay competitive. In fact, designing a process from scratch is time consuming, error prone and costly. In this context, companies are heading to reuse process fragments when designing a new process to ensure a high degree of efficiency, with respect to delivery deadlines. However, reusing these fragments may disclose sensitive business activities, especially if these latter are deployed in an untrusted environment. In addition, companies are concerned about their user's privacy. To address these issues, we investigate how to build a new business process by reusing the safest existing fragments coming from various cloud servers, i.e. The ones that comply at best with company's preferences and policies, and offer an appropriate level of safety."}
{"_id": "721a016c57c5400a92c44a04ebfb1585cd875764", "title": "A Communication Theoretical Analysis of Multiple-Access Channel Capacity in Magneto-Inductive Wireless Networks", "text": "Magneto-inductive (MI) wireless communications is an emerging subject with a rich set of applications, including local area networks for the Internet-of-Things, wireless body area networks, in-body and on-chip communications, and underwater and underground sensor networks as a low-cost alternative to radio frequency, acoustic or optical methods. Practical MI networks include multiple access channel (MAC) mechanisms for connecting a random number of coils without any specific topology or coil orientation assumptions covering both short and long ranges. However, there is not any information theoretical modeling of MI MAC (MIMAC) capacity of such universal networks with fully coupled frequency selective channel models and exact 3-D coupling model of circular coils instead of long range dipole approximations. In this paper, K-user MIMAC capacity is information theoretically modeled and analyzed, and two-user MIMACs are modeled with explicitly detailed channel responses, bandwidths and coupled thermal noise. K-user MIMAC capacity is achieved through Lagrangian solution with K-user water-filling optimization. Optimum orientations maximizing capacity and received power are theoretically analyzed, and numerically simulated for two-user MIMACs. Constructive gain and destructive interference mechanisms on MIMACs are introduced in comparison with the classical interference based approaches. The theoretical basis promises the utilization of MIMACs in 5G architectures."}
{"_id": "980e96300f63564683fc1d27850a6f3555c641c1", "title": "A Space Vector Modulation Scheme of the Quasi-Z-Source Three-Level T-Type Inverter for Common-Mode Voltage Reduction", "text": "The conventional three-level inverter suffers the limitation of voltage buck operation. In order to give both voltage buck and boost operation capability, the quasi-Z-source three-level T-type inverter (3LT $^2$I) has been proposed. This paper further proposes a space vector modulation (SVM) scheme for the quasi-Z-source 3LT$^2$ I to reduce the magnitude and slew rate of common-mode voltage (CMV). By properly selecting the shoot-through phase, the shoot-through states are inserted within zero vector in order not to affect the active states and output voltage. Doing so, the CMV generated by the quasi-Z-source 3LT $^2$I is restricted within one-sixth of dc-link voltage, and voltage boosting and CMV reduction can be simultaneously realized. In addition, high dc-link voltage utilization can be maintained. The proposed scheme has been verified in both simulations and experiments. Comparisons are conducted with the conventional SVM method and the phase-shifted sinusoidal PWM method."}
{"_id": "88f07086254df5e1a459c6beac59171c642e1f80", "title": "User experience: challenges and opportunities", "text": "User expectations, motivations and feelings when using a product or encountering a system have urged the need to investigate beyond the traditional functionality and usability concerns by assessing and designing for the user experience. Having this realized, we can ensure positive user experience as well as desirable products. This paper aims at providing an overview of the main concepts and principles related to user experience. The topics discussed in this paper cover both theories from the academia as well as empirical studies from the industry. Due to misconception of user experience, it was found that industry depends on and uses the traditional usability methods in the product development as user experience practices. As a result, this paper discusses and elaborates the relationship between conventional usability and user experience in addition to the variety of methods which can be used for user experience evaluation during different time spans. Furthermore, user experience extends over different areas such as human computer interaction, and product design and development. Knowledge and concepts related to user experience are scattered due to the dynamic concepts of user experience. In other words, authors tend to discuss the user experience from their own perspective and interest. Hence, this paper is trying to provide and establish a holistic understanding of the user experience by covering different topics related to user experience. The output of this paper may provide and assist researchers and practitioners to establish a base to aid in proper understanding of user experience, its concepts and principles."}
{"_id": "4f3c28c02f6d52d799c4fab76918e9fb1502e1db", "title": "High performance communication architecture for smart distribution power grid in developing nations", "text": "In a smart distribution power grid, cost efficient and reliable communication architecture plays a crucial role in achieving complete functionality. There are different sets of Quality of Services (QoS) requirements for different data packets transmitting inside the microgrid (a regionally limited smart distribution grid), making it challenging to derive optimal communication architecture. The objective of this research work is to determine the optimal communication technologies for each data packet based on its QoS requirement. In this paper, we have proposed an architecture for a smart distribution power grid with Cyber Physical System enabled microgrids, which accommodate almost all functional requirements of a smart distribution power grid. For easy transition towards optimal communication architecture, we have presented a six-tier communication topology, which is derived from the architecture for a smart distribution power grid. The optimization formulations for each packet structure presented in this paper minimize the overall cost and consider the QoS requirements for each packet. Based on the simulation results, we have made recommendations for optimal communication technologies for each packet and thereby developed a heterogeneous communication architecture for"}
{"_id": "868df20cf3128730018641f220303cee5dbc78ec", "title": "Sexual Dimorphism in the Parietal Substrate Associated with Visuospatial Cognition Independent of General Intelligence", "text": "Sex differences in visuospatial cognition (VSC) with male advantage are frequently reported in the literature. There is evidence for sexual dimorphisms in the human brain, one of which postulates more gray matter (GM) in females and more white matter (WM) in males relative to total intracranial volume. We investigated the neuroanatomy of VSC independent of general intelligence (g) in sex-separated populations, homogenous in age, education, memory performance, a memory- and brain morphology-related gene, and g. VSC and g were assessed with the Wechsler adult intelligence scale. The influence of g on VSC was removed using a hierarchical factor analysis and the Schmid\u2013Leiman solution. Structural high-resolution magnetic resonance images were acquired and analyzed with voxel-based morphometry. As hypothesized, the clusters of positive correlations between local volumes and VSC performance independent of g were found mainly in parietal areas, but also in pre- and postcentral regions, predominantly in the WM in males, whereas in females these correlations were located in parietal and superior temporal areas, predominantly in the GM. Our results suggest that VSC depends more strongly on parietal WM structures in males and on parietal GM structures in females. This sex difference might have to do with the increased axonal and decreased somatodendritic tissue in males relative to females. Whether such sex-specific implementations of the VSC network can be explained genetically as suggested in investigations into the Turner syndrome or as a result of structural neural plasticity upon different experience and usage remains to be shown."}
{"_id": "c23602290222a32fee70b2b44eddee2e45dcf968", "title": "The effect of flow experience on mobile SNS users' loyalty", "text": "Purpose \u2013 The purpose of this research is to examine the effect of flow experience on mobile social networking service (SNS) users\u2019 loyalty. Design/methodology/approach \u2013 Based on 305 valid responses collected from a survey questionnaire, structural equation modeling (SEM) technology was employed to examine the research model. Findings \u2013 The results show that both information quality and system quality significantly affect users\u2019 trust and flow experiences, which further determine their loyalty. The results indicate that flow experience is the strongest determinant of users\u2019 loyalty. Practical implications \u2013 Mobile SNS providers need to consider user experience when seeking users\u2019 loyalty. They should enhance information quality and system quality in order to improve user trust and flow experience. Originality/value \u2013 Although much research has been conducted to explore the effects of extrinsic motivations, such as perceived usefulness, on mobile commerce user behavior, the effect of intrinsic motivations, such as flow experience, is seldom tested. This research found the effect of flow experience on mobile SNS users\u2019 loyalty to be significant."}
{"_id": "22d099551059cbe15eadbdb7d43f360695ca82ec", "title": "Engineering Time-Expanded Graphs for Faster Timetable Information", "text": "We present an extension of the well-known time-expanded approach for timetable information. By remodeling unimportant stations, we are able to obtain faster query times with less space consumption than the original model. Moreover, we show that our extensions harmonize well with speed-up techniques whose adaption to timetable networks is more challenging than one might expect."}
{"_id": "245a303367a0f0b1900c85eff06eefc65f47314c", "title": "FingerScanner: Embedding a Fingerprint Scanner in a Raspberry Pi", "text": "Nowadays, researchers are paying increasing attention to embedding systems. Cost reduction has lead to an increase in the number of platforms supporting the operating system Linux, jointly with the Raspberry Pi motherboard. Thus, embedding devices on Raspberry-Linux systems is a goal in order to make competitive commercial products. This paper presents a low-cost fingerprint recognition system embedded into a Raspberry Pi with Linux."}
{"_id": "caf0dff8d4008ad634821e7a3061d59cd1fd7fde", "title": "Depth driven people counting using deep region proposal network", "text": "People counting is a crucial subject in video surveillance application. Factors such as severe occlusions, scene perspective distortions in real application scenario make this task challenging. In this paper, we carefully designed a deep detection framework based on depth information for people counting in crowded environments. Our system performs head detection on depth images collected by an overhead vertical Kinect sensor. To the best of our knowledge, this is the first attempt to use the deep convolutional neural networks on depth images for people counting. We explored the impact of the number and quality of RPN positive anchors on the performance of Faster R-CNN and proposed a solution. Our method is very simple but effective, not only showing promising results but also efficiency as it runs in real-time at a frame rate of about 110 frames per second on a GPU."}
{"_id": "1ea4517925d212f30018ae2d7cf60b1a1200affb", "title": "A Sparse Matrix Library in C + + for High PerformanceArchitectures", "text": "We describe an object oriented sparse matrix library in C++ built upon the Level 3 Sparse BLAS proposal [5] for portability and performance across a wide class of machine architectures. The C++ library includes algorithms for various iterative methods and supports the most common sparse data storage formats used in practice. Besides simplifying the subroutine interface, the object oriented design allows the same driving code to be used for various sparse matrix formats, thus addressing many of the di culties encountered with the typical approach to sparse matrix libraries. We emphasize the fundamental design issues of the C++ sparse matrix classes and illustrate their usage with the preconditioned conjugate gradient (PCG) method as an example. Performance results illustrate that these codes are competitive with optimized Fortran 77. We discuss the impact of our design on elegance, performance, maintainability, portability, and robustness."}
{"_id": "35734e8724559fb0d494e5cba6a28ad7a3d5dd4d", "title": "Explaining and Harnessing Adversarial Examples", "text": "Several machine learning models, including neural networks, consistently misclassify adversarial examples\u2014inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks\u2019 vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset."}
{"_id": "35ee0d59bf0e38e73c87ba9c0feead0ed164c193", "title": "Towards Deep Neural Network Architectures Robust to Adversarial Examples", "text": "Recent work has shown deep neural networks (DNNs) to be highly susceptible to well-designed, small perturbations at the input layer, or so-called adversarial examples. Taking images as an example, such distortions are often imperceptible, but can result in 100% mis-classification for a state of the art DNN. We study the structure of adversarial examples and explore network topology, pre-processing and training strategies to improve the robustness of DNNs. We perform various experiments to assess the removability of adversarial examples by corrupting with additional noise and pre-processing with denoising autoencoders (DAEs). We find that DAEs can remove substantial amounts of the adversarial noise. However, when stacking the DAE with the original DNN, the resulting network can again be attacked by new adversarial examples with even smaller distortion. As a solution, we propose Deep Contractive Network, a model with a new end-to-end training procedure that includes a smoothness penalty inspired by the contractive autoencoder (CAE). This increases the network robustness to adversarial examples, without a significant performance penalty."}
{"_id": "38502c84f76aaebc436317fb1ec086c66b158d40", "title": "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images", "text": "Deep neural networks (DNNs) have recently been achieving state-of-the-art performance on a variety of pattern-recognition tasks, most notably visual classification problems. Given that DNNs are now able to classify objects in images with near-human-level performance, questions naturally arise as to what differences remain between computer and human vision. A recent study [30] revealed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a DNN to label the image as something else entirely (e.g. mislabeling a lion a library). Here we show a related result: it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise static is a lion). Specifically, we take convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and then find images with evolutionary algorithms or gradient ascent that DNNs label with high confidence as belonging to each dataset class. It is possible to produce images totally unrecognizable to human eyes that DNNs believe with near certainty are familiar objects, which we call \u201cfooling images\u201d (more generally, fooling examples). Our results shed light on interesting differences between human vision and current DNNs, and raise questions about the generality of DNN computer vision."}
{"_id": "6272f3f6456a0324f6f9348d6db01eeded9d4b17", "title": "Analysis of classifiers\u2019 robustness to adversarial perturbations", "text": "The goal of this paper is to analyze the intriguing instability of classifiers to adversarial perturbations (Szegedy et al., in: International conference on learning representations (ICLR), 2014). We provide a theoretical framework for analyzing the robustness of classifiers to adversarial perturbations, and show fundamental upper bounds on the robustness of classifiers. Specifically, we establish a general upper bound on the robustness of classifiers to adversarial perturbations, and then illustrate the obtained upper bound on two practical classes of classifiers, namely the linear and quadratic classifiers. In both cases, our upper bound depends on a distinguishability measure that captures the notion of difficulty of the classification task. Our results for both classes imply that in tasks involving small distinguishability, no classifier in the considered set will be robust to adversarial perturbations, even if a good accuracy is achieved. Our theoretical framework moreover suggests that the phenomenon of adversarial instability is due to the low flexibility of classifiers, compared to the difficulty of the classification task (captured mathematically by the distinguishability measure). We further show the existence of a clear distinction between the robustness of a classifier to random noise and its robustness to adversarial perturbations. Specifically, the former is shown to be larger than the latter by a factor that is proportional to $$\\sqrt{d}$$ d (with d being the signal dimension) for linear classifiers. This result gives a theoretical explanation for the discrepancy between the two robustness properties in high dimensional problems, which was empirically observed by Szegedy et al. in the context of neural networks. We finally show experimental results on controlled and real-world data that confirm the theoretical analysis and extend its spirit to more complex classification schemes."}
{"_id": "b91d2354302da4db06841a474ce37ee98656bcda", "title": "The evolution of Crew Resource Management training in commercial aviation.", "text": "In this study, we describe changes in the nature of Crew Resource Management (CRM) training in commercial aviation, including its shift from cockpit to crew resource management. Validation of the impact of CRM is discussed. Limitations of CRM, including lack of cross-cultural generality are considered. An overarching framework that stresses error management to increase acceptance of CRM concepts is presented. The error management approach defines behavioral strategies taught in CRM as error countermeasures that are employed to avoid error, to trap errors committed, and to mitigate the consequences of error."}
{"_id": "3df5459065196e517091bfaca8b34d723c9486b7", "title": "Understanding Types of Cyberbullying in an Anonymous Messaging Application", "text": "The possibility of anonymity and lack of effective ways to identify inappropriate messages have resulted in a significant amount of online interaction data that attempt to harass, bully, or offend the recipient. In this work, we perform a preliminary linguistic study on messages exchanged using one such popular web/smartphone application\u2014Sarahah, that allows friends to exchange messages anonymously. Since messages exchanged via Sarahah are private, we collect them when the recipient shares it on Twitter. We then perform an analysis of the different kinds of messages exchanged through this application. Our linguistic analysis reveals that a significant number of these messages (\u223c 20%) include inappropriate, hurtful, or profane language intended to embarrass, offend, or bully the recipient. Our analysis helps in understanding the different ways in which anonymous message exchange platforms are used and the different types of bullying present in such exchanges."}
{"_id": "648902017c8d42ed6a922c4380509a87b8cab399", "title": "Transient response and dynamic power dissipation comparison of various Dickson charge pump configurations based on charge transfer switches", "text": "Dickson charge pump is a switches-capacitor network providing a voltage gain and having open-circuit resistance dependent on the topology frequency and the size of storage capacitors. In this paper, the current status of research and development in the field of Dickson charge pump is extensively reviewed. Dickson charge pump is inductor-less DC to DC converter which uses a capacitor for its energy storage. The aim of this paper is to compare various techniques of Dickson charge pump currently explored in industry and academia. The parameters especially efficiency and voltage are compared. We have also discussed and evaluated the optimised criteria of capacitor sizing according to clock signals provided such that the charge pump performance could be properly optimised. We have simulated the transient response of DCP and provided the comprehensive study."}
{"_id": "5d93482d2d4a28937307a9b25bb9ce35d9c92b15", "title": "From GPS traces to a routable road map", "text": "This paper presents a method for automatically converting raw GPS traces from everyday vehicles into a routable road network. The method begins by smoothing raw GPS traces using a novel aggregation technique. This technique pulls together traces that belong on the same road in response to simulated potential energy wells created around each trace. After the traces are moved in response to the potential fields, they tend to coalesce into smooth paths. To help adjust the parameters of the constituent potential fields, we present a theoretical analysis of the behavior of our algorithm on a few different road configurations. With the resulting smooth traces, we apply a custom clustering algorithm to create a graph of nodes and edges representing the road network. We show how this network can be used to plan reasonable driving routes, much like consumer-oriented mapping Web sites. We demonstrate our algorithms using real GPS data collected on public roads, and we evaluate the effectiveness of our approach on public roads, and we evaluate the effectiveness of our approach by comparing the route planning results suggested by our generated graph to a commercial route planner."}
{"_id": "d1264a0352859e8319e1fa3def07c02486cf3027", "title": "Modeling and control of a hexacopter with a rotating seesaw", "text": "This paper presents a novel multicomputer type with a unique structure. This structure allows decoupling angular motion from translational motion along one of the body axes, and by that, five of the body degrees of freedom (DoF) can be controlled independently. The proposed aerial vehicle is a hexacopter with six propellers, such that four propellers are attached to the vehicle body and the other two attached to a seesaw. The seesaw is free to rotate around a single axis that is fixed in the body frame. Compared to a standard hexacopter, this unique design improves the maneuverability of the aerial vehicle, without the price of additional actuators. As opposed to tilting-rotor aerial-vehicle, in the proposed design all actuators generate lift. The paper describes the dynamical model of the novel vehicle and suggests a nonlinear controller to track desired trajectories along five of its DoF. The performances of the design are demonstrated numerically."}
{"_id": "3dbc3541609c97dce9b3d991604689fdbcfd6964", "title": "Peer victimization, psychosocial adjustment, and physical activity in overweight and at-risk-for-overweight youth.", "text": "OBJECTIVE\nTo examine the relationship between peer victimization and child and parent reports of psychosocial adjustment and physical activity in a clinical sample of at-risk-for-overweight and overweight children and adolescents.\n\n\nMETHODS\nThe Schwartz Peer Victimization Scale, Children's Depression Inventory-Short Form, Multidimensional Anxiety Scale for Children, Social Physique Anxiety Scale, PACE+ Adolescent Physical Activity Measure, and Asher Loneliness Scale were administered to 92 children and adolescents (54 females) aged 8-18 years. The youth's parent/guardian completed the Child Behavior Checklist.\n\n\nRESULTS\nPeer victimization was positively related to child-reported depression, anxiety, social physique anxiety, and loneliness, and parent-reported internalizing and externalizing symptoms. Peer victimization was negatively related to physical activity. Depressive symptoms and loneliness mediated the relations between peer victimization and physical activity.\n\n\nCONCLUSION\nRecognition of the magnitude of the problem and the means of evaluating for peer victimization is important for clinicians who work with overweight youth. Assessing peer experiences may assist in understanding rates of physical activity and/or past nonadherence to clinician recommendations."}
{"_id": "a8ca975845a3c69dd9c234f0ea3104748443a2eb", "title": "Investigations on Knowledge Base Embedding for Relation Prediction and Extraction", "text": "We report an evaluation of the effectiveness of the existing knowledge base embedding models for relation prediction and for relation extraction on a wide range of benchmarks. We also describe a new benchmark, which is much larger and complex than previous ones, which we introduce to help validate the effectiveness of both tasks. The results demonstrate that knowledge base embedding models are generally effective for relation prediction but unable to give improvements for the state-of-art neural relation extraction model with the existing strategies, while pointing limitations of existing methods."}
{"_id": "64abe99b54ff8bd1f5aaa47c65ebffe0d48e9b03", "title": "Agents on Stage: Advancing the State of the Art of AI", "text": "Intelligent computer agents are both the original goal and the ultimate goal of artificial intelligence research. In striving toward that goal, our community has followed a practical research strategy of \"divide-and-conquer,\" with different sub-communities attacking important component functions of intelligence, such as planning, search, knowledge representation, vision, and natural language. This strategy has been almost too successful, yielding both challenging theoretical problems that have come to dominate the inquiries of many researchers and a spate of practical techniques that have enabled other researchers to move into profitable commercial enterprises. While each of these pursuits is worthy in its own right, together they have had the unfortunate sideeffect of fragmenting the field and almost completely diverting it from efforts to build intelligent agents."}
{"_id": "af6c7c0b1c9f6815efd65ea3e070616516e73eb0", "title": "An Effective CU Size Decision Method for HEVC Encoders", "text": "The emerging high efficiency video coding standard (HEVC) adopts the quadtree-structured coding unit (CU). Each CU allows recursive splitting into four equal sub-CUs. At each depth level (CU size), the test model of HEVC (HM) performs motion estimation (ME) with different sizes including 2N \u00d7 2N, 2N \u00d7 N, N \u00d7 2N and N \u00d7 N. ME process in HM is performed using all the possible depth levels and prediction modes to find the one with the least rate distortion (RD) cost using Lagrange multiplier. This achieves the highest coding efficiency but requires a very high computational complexity. In this paper, we propose a fast CU size decision algorithm for HM. Since the optimal depth level is highly content-dependent, it is not efficient to use all levels. We can determine CU depth range (including the minimum depth level and the maximum depth level) and skip some specific depth levels rarely used in the previous frame and neighboring CUs. Besides, the proposed algorithm also introduces early termination methods based on motion homogeneity checking, RD cost checking and SKIP mode checking to skip ME on unnecessary CU sizes. Experimental results demonstrate that the proposed algorithm can significantly reduce computational complexity while maintaining almost the same RD performance as the original HEVC encoder."}
{"_id": "86e0d2d7a234d58df96f169fee9e03c6324329b5", "title": "Vectorization vs. compilation in query execution", "text": "Compiling database queries into executable (sub-) programs provides substantial benefits comparing to traditional interpreted execution. Many of these benefits, such as reduced interpretation overhead, better instruction code locality, and providing opportunities to use SIMD instructions, have previously been provided by redesigning query processors to use a vectorized execution model. In this paper, we try to shed light on the question of how state-of-the-art compilation strategies relate to vectorized execution for analytical database workloads on modern CPUs. For this purpose, we carefully investigate the behavior of vectorized and compiled strategies inside the Ingres VectorWise database system in three use cases: Project, Select and Hash Join. One of the findings is that compilation should always be combined with block-wise query execution. Another contribution is identifying three cases where \"loop-compilation\" strategies are inferior to vectorized execution. As such, a careful merging of these two strategies is proposed for optimal performance: either by incorporating vectorized execution principles into compiled query plans or using query compilation to create building blocks for vectorized processing."}
{"_id": "02cdf5f8077a778d3b746d1c9db8b118ed7b9cc5", "title": "\"PASS\" principles for predictable bone regeneration.", "text": "Guided bone regeneration is a well-established technique used for augmentation of deficient alveolar ridges. Predictable regeneration requires both a high level of technical skill and a thorough understanding of underlying principles of wound healing. This article describes the 4 major biologic principles (i.e., PASS) necessary for predictable bone regeneration: primary wound closure to ensure undisturbed and uninterrupted wound healing, angiogenesis to provide necessary blood supply and undifferentiated mesenchymal cells, space maintenance/creation to facilitate adequate space for bone ingrowth, and stability of wound and implant to induce blood clot formation and uneventful healing events. In addition, a novel flap design and clinical cases using this principle are presented."}
{"_id": "60299a93b2a44d34a22061c07b91b814d6c4d244", "title": "Expression Profiling of Developing Zebrafish Retinal Cells.", "text": "During retinal development, a variety of different types of neurons are produced. Understanding how each of these types of retinal nerve cells is generated is important from a developmental biology perspective. It is equally important if one is interested in how to regenerate cells after an injury or a disease. To gain more insight into how retinal neurons develop in the zebrafish, we performed single-cell mRNA profiling and in situ hybridizations (ISHs) on retinal sections and whole-mount zebrafish. Through the series of ISHs, designed and performed solely by undergraduate students in the laboratory, we were able to retrospectively identify our single-cell mRNA profiles as most likely coming from developing amacrine cells. Further analysis of these profiles will reveal genes that can be mutated using genome editing techniques. Together these studies increase our knowledge of the genes driving development of different cell types in the zebrafish retina."}
{"_id": "a1c77da415e5b7cdb957c00dc7df2ef7a7dfd327", "title": "Context and behavioral processes in extinction.", "text": "This article provides a selective review and integration of the behavioral literature on Pavlovian extinction. The first part reviews evidence that extinction does not destroy the original learning, but instead generates new learning that is especially context-dependent. The second part examines insights provided by research on several related behavioral phenomena (the interference paradigms, conditioned inhibition, and inhibition despite reinforcement). The final part examines four potential causes of extinction: the discrimination of a new reinforcement rate, generalization decrement, response inhibition, and violation of a reinforcer expectation. The data are consistent with behavioral models that emphasize the role of generalization decrement and expectation violation, but would be more so if those models were expanded to better accommodate the finding that extinction involves a context-modulated form of inhibitory learning."}
{"_id": "4f1b66c68d52f38448b52e07420899f5d4249acb", "title": "Forecasting seasonal time series with computational intelligence: On recent methods and the potential of their combinations", "text": "Accurate time series forecasting is a key issue to support individual and organizational decision making. In this paper, we introduce novel methods for multi-step seasonal time series forecasting. All the presented methods stem from computational intelligence techniques: evolutionary artificial neural networks, support vector machines and genuine linguistic fuzzy rules. Performance of the suggested methods is experimentally justified on seasonal time series from distinct domains on three forecasting horizons. The most important contribution is the introduction of a new hybrid combination using linguistic fuzzy rules and the other computational intelligence methods. This hybrid combination presents competitive forecasts, when compared with the popular ARIMA method. Moreover, such hybrid model is more easy to interpret by decision-makers when modeling trended series. 2012 Elsevier Ltd. All rights reserved."}
{"_id": "bc9e94794338b2cadaf7ac15f91aa3e63f6f92b4", "title": "A Wide-Range Model for Metal-Oxide Surge Arrester", "text": "This paper presents an electric model of a metal-oxide surge arrester (MOSA). The proposed electric model accurately represents the MOSA in a wide range of frequencies and amplitudes. The model was developed and validated based on MOSA electrical behavior in each one of the three operating regions of the zinc-oxide (ZnO) surge arresters, and in a database composed of voltage and current waveforms measured from tests performed in 12 ZnO varistors having different physical dimensions and electrical characteristics\u2014from five different manufacturers. These varistors were subjected to different voltage levels in the low current region, and multilevel amplitude of switching current impulses (30/60\u00a0$\\mu$s), lightning current impulses (8/20\u00a0$\\mu$s), high current impulses (4/10\u00a0$\\mu$s), and fast-front current impulses (1.5/26\u00a0$\\mu$s and 3/6\u00a0$\\mu$s) encompass the three regions of operation and a wide range of frequencies and amplitudes. The results provided by the MOSA wide-range (MWR) model were compared with those obtained in the laboratory. The MWR model has shown good agreement in terms of waveform, peak value, and absorbed energy for the evaluated cases."}
{"_id": "5738fd9f88adcd9cf910b46d969e1a68aabb81b0", "title": "Perceptually based brush strokes for nonphotorealistic visualization", "text": "An important problem in the area of computer graphics is the visualization of large, complex information spaces. Datasets of this type have grown rapidly in recent years, both in number and in size. Images of the data stored in these collections must support rapid and accurate exploration and analysis. This article presents a method for constructing visualizations that are both effective and aesthetic. Our approach uses techniques from master paintings and human perception to visualize a multidimensional dataset. Individual data elements are drawn with one or more brush strokes that vary their appearance to represent the element's attribute values. The result is a nonphotorealistic visualization of information stored in the dataset. Our research extends existing glyph-based and nonphotorealistic techniques by applying perceptual guidelines to build an effective representation of the underlying data. The nonphotorealistic properties the strokes employ are selected from studies of the history and theory of Impressionist art. We show that these properties are similar to visual features that are detected by the low-level human visual system. This correspondence allows us to manage the strokes to produce perceptually salient visualizations. Psychophysical experiments confirm a strong relationship between the expressive power of our nonphotorealistic properties and previous findings on the use of perceptual color and texture patterns for data display. Results from these studies are used to produce effective nonphotorealistic visualizations. We conclude by applying our techniques to a large, multidimensional weather dataset to demonstrate their viability in a practical, real-world setting."}
{"_id": "6e0bbef92b6dfb2a37d124ce462059d01ebf5444", "title": "TEXT MINING : PROMISES AND CHALLENGES", "text": "Text mining, also known as knowledge discovery from text, and document information mining, refers to the process of extracting interesting patterns from very large text corpus for the purposes of discovering knowledge. Text mining is an interdisciplinary field involving information retrieval, text understanding, information extraction, clustering, categorization, visualization, database technology, machine learning, and data mining. Regarded by many as the next wave of knowledge discovery, text mining has a very high commercial value. This talk presents a general framework for text mining, consisting of two stages: text refining that transforms unstructured text documents into an intermediate form; and knowledge distill ation that deduces patterns or knowledge from the intermediate form. We then survey the state-of-the-art text mining approaches, products, and applications by aligning them based on the text refining and knowledge distill ation functions as well as the intermediate form that they adopt. In conclusion, we highlight the upcoming challenges of text mining and the opportunities it offers."}
{"_id": "df6f04fb119908ebf6d12574b7658efd522c7707", "title": "Soft-Switching Zeta\u2013Flyback Converter With a Buck\u2013Boost Type of Active Clamp", "text": "This paper presents the system analysis, design consideration, and implementation of a soft-switching zeta-flyback converter to achieve zero-voltage switching (ZVS). In the proposed converter, the zeta and flyback topologies are adopted in the output side to achieve the following features: to share the power components in the transformer primary side, to achieve the partial magnetizing flux reset, and to share the output power. The buck-boost type of active clamp is connected in parallel with the primary side of the isolation transformer to recycle the energy stored in the leakage inductor of isolated transformer and to limit the peak voltage stress of switching devices due to the transformer leakage inductor when the main switch is turned off. The active clamp circuit can make the switching devices to turn on at ZVS. Experimental results taken from a laboratory prototype rated at 240 W, input voltage of 150 V, output voltage of 12 V, and switching frequency of 150 kHz are presented to demonstrate the converter performance. Based on the experimental results, the circuit efficiency is about 90.5% at rated output power, and the output voltage variation is about 1%."}
{"_id": "b288c1f9464fde9354629345b47db2047ba9d00f", "title": "Incomplete Multisource Transfer Learning", "text": "Transfer learning is generally exploited to adapt well-established source knowledge for learning tasks in weakly labeled or unlabeled target domain. Nowadays, it is common to see multiple sources available for knowledge transfer, each of which, however, may not include complete classes information of the target domain. Naively merging multiple sources together would lead to inferior results due to the large divergence among multiple sources. In this paper, we attempt to utilize incomplete multiple sources for effective knowledge transfer to facilitate the learning task in target domain. To this end, we propose an incomplete multisource transfer learning through two directional knowledge transfer, i.e., cross-domain transfer from each source to target, and cross-source transfer. In particular, in cross-domain direction, we deploy latent low-rank transfer learning guided by iterative structure learning to transfer knowledge from each single source to target domain. This practice reinforces to compensate for any missing data in each source by the complete target data. While in cross-source direction, unsupervised manifold regularizer and effective multisource alignment are explored to jointly compensate for missing data from one portion of source to another. In this way, both marginal and conditional distribution discrepancy in two directions would be mitigated. Experimental results on standard cross-domain benchmarks and synthetic data sets demonstrate the effectiveness of our proposed model in knowledge transfer from incomplete multiple sources."}
{"_id": "18b0e7079a2f194824ffa6df571a53ec6f9db730", "title": "General Survey on Security Issues on Internet of Things", "text": "Internet of Things is the integration of a variety of technologies. The Internet of Things incorporates transparently and impeccably large number of assorted end systems, providing open access to selected data for digital services. Internet of things is a promising research in commerce, industry, and education applications. The abundance of sensors and actuators motivates sensing and actuate devices in communication scenarios thus enabling sharing of information in Internet of Things. Advances in sensor data collection technology and Radio Frequency Identification technology has led large number of smart devices connected to the Internet, continuously transmitting data over time.In the context of security, due to different communication overloads and standards conventional security services are not applicable on Internet of Things as a result of which the technological loopholes leads to the generation of malicious data, devices are compromised and so on. Hence a flexible mechanism can deal with the security threats in the dynamic environment of Internet of Things and continuous researches and new ideas needs to be regulated periodically for various upcoming challenges. This paper basically tries to cover up the security issues and challenges of Internet of Things along with a"}
{"_id": "7c025c6eac14d0f183d0aa0aa4dbfea9bf5eab20", "title": "Toward Characterizing the Productivity Benefits of Very Large Displays", "text": "Larger display surfaces are becoming increasingly available due to multi-monitor capability built into many systems, in addition to the rapid decrease in their costs. However, little is known about the performance benefits of using these larger surfaces compared to traditional single-monitor displays. In addition, it is not clear that current software designs and interaction techniques have been properly tuned for these larger surfaces. A preliminary user study was carried out to provide some initial evidence about the benefits of large versus small display surfaces for complex, multi-application office work. Significant benefits were observed in the use of a prototype, larger display, in addition to significant positive user preference and satisfaction with its use over a small display. In addition, design guidelines for enhancing user interaction across large display surfaces were identified. User productivity could be significantly enhanced in future graphical user interface designs if developed with these findings in mind."}
{"_id": "5764472d5d3ff1c67c3a7f6a4024d08a05a27ff8", "title": "Enhancing Knowledge Graph Completion By Embedding Correlations", "text": "Despite their large sizes, modern Knowledge Graphs (KGs) are still highly incomplete. Statistical relational learning methods can detect missing links by \"embedding\" the nodes and relations into latent feature tensors. Unfortunately, these methods are unable to learn good embeddings if the nodes are not well-connected. Our proposal is to learn embeddings for correlations between subgraphs and add a post-prediction phase to counter the lack of training data. This technique, applied on top of methods like TransE or HolE, can significantly increase the predictions on realistic KGs."}
{"_id": "0dd72887465046b0f8fc655793c6eaaac9c03a3d", "title": "Real-Time Head Orientation from a Monocular Camera Using Deep Neural Network", "text": "[1] J. Xiao, S. Baker, I. Matthews, T. Kanade, \u201cReal-Time Combined 2D + 3D Active Appearance Models,\u201d CVPR, pp.535-542, 2004. [2] N. Gourier, D. Hall, J. L. Crowley, \u201cEstimating Face Orientation from Robust Detection of Salient Facial Features\u201d, Proceedings of Pointing 2004, ICPR, International Workshop on Visual Observation of Deictic Gestures, 2004. [3] G. Fanelli, M. Dantone, J. Gall, A. Jssati, L. Van Gool, \u201cRandom Forests for Real Time 3D Face Analysis,\u201d IJCV, pp.437-458, 2013. [4] Y. LeCun, R. Bottou, Y. Bengio, P. Haffner, \u201cGradient-based Learning Applied to Document Recognition,\u201d Proc. IEEE, vol. 86, no. 11, pp.2278-2324, 1998. [5] A. Krizhevsky, I. Sutskever, G. E. Hinton, \u201cImageNet Classification with Deep Convolutional Neural Networks,\u201d NIPS, pp.1106-1114, 2012. This work was supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIP) (No. 2010-0028680). Proposed Method \uf0a7 Conventional Head Pose Estimation Approaches"}
{"_id": "93ffe14e172976135d167fb593c4b97e0ff14faa", "title": "Driver head pose estimation using efficient descriptor fusion", "text": "A great interest is focused on driver assistance systems using the head pose as an indicator of the visual focus of attention and the mental state. In fact, the head pose estimation is a technique allowing to deduce head orientation relatively to a view of camera and could be performed by model-based or appearance-based approaches. Modelbased approaches use a face geometrical model usually obtained from facial features, whereas appearance-based techniques use the whole face image characterized by a descriptor and generally consider the pose estimation as a classification problem. Appearance-based methods are faster andmore adapted to discrete pose estimation. However, their performance depends strongly on the head descriptor, which should be well chosen in order to reduce the information about identity and lighting contained in the face appearance. In this paper, we propose an appearancebased discrete head pose estimation aiming to determine the driver attention level from monocular visible spectrum images, even if the facial features are not visible. Explicitly, we first propose a novel descriptor resulting from the fusion of four most relevant orientation-based head descriptors, namely the steerable filters, the histogram of oriented gradients (HOG), the Haar features, and an adapted version of speeded up robust feature (SURF) descriptor. Second, in order to derive a compact, relevant, and consistent subset of descriptor\u2019s features, a comparative study is conducted on some well-known feature selection algorithms. Finally, the obtained subset is subject to the classification process, performed by the support vector machine (SVM), to learn head pose variations. As we show in experiments with the public database (Pointing\u201904) as well as with our real-world sequence, our approach describes the head with a high accuracy and provides robust estimation of the head pose, compared to state-of-the-art methods."}
{"_id": "b40f176684ada07faac259aa3a8d46121543dc75", "title": "Real time head pose estimation with random regression forests", "text": "Fast and reliable algorithms for estimating the head pose are essential for many applications and higher-level face analysis tasks. We address the problem of head pose estimation from depth data, which can be captured using the ever more affordable 3D sensing technologies available today. To achieve robustness, we formulate pose estimation as a regression problem. While detecting specific face parts like the nose is sensitive to occlusions, learning the regression on rather generic surface patches requires enormous amount of training data in order to achieve accurate estimates. We propose to use random regression forests for the task at hand, given their capability to handle large training datasets. Moreover, we synthesize a great amount of annotated training data using a statistical model of the human face. In our experiments, we show that our approach can handle real data presenting large pose changes, partial occlusions, and facial expressions, even though it is trained only on synthetic neutral face data. We have thoroughly evaluated our system on a publicly available database on which we achieve state-of-the-art performance without having to resort to the graphics card."}
{"_id": "e4511fd8a4cafcb270df76e594017dd70806f29d", "title": "3D head pose estimation with convolutional neural network trained on synthetic images", "text": "In this paper, we propose a method to estimate head pose with convolutional neural network, which is trained on synthetic head images. We formulate head pose estimation as a regression problem. A convolutional neural network is trained to learn head features and solve the regression problem. To provide annotated head poses in the training process, we generate a realistic head pose dataset by rendering techniques, in which we consider the variation of gender, age, race and expression. Our dataset includes 74000 head poses rendered from 37 head models. For each head pose, RGB image and annotated pose parameters are given. We evaluate our method on both synthetic and real data. The experiments show that our method improves the accuracy of head pose estimation."}
{"_id": "2561e613038865e8904e34f021c8dd9760e7dfb4", "title": "Multilingual Speech Recognition with a Single End-to-End Model", "text": "Training a conventional automatic speech recognition (ASR) system to support multiple languages is challenging because the sub-word unit, lexicon and word inventories are typically language specific. In contrast, sequence-to-sequence models are well suited for multilingual ASR because they encapsulate an acoustic, pronunciation and language model jointly in a single network. In this work we present a single sequence-to-sequence ASR model trained on 9 different Indian languages, which have very little overlap in their scripts. Specifically, we take a union of language-specific grapheme sets and train a grapheme-based sequence-to-sequence model jointly on data from all languages. We find that this model, which is not explicitly given any information about language identity, improves recognition performance by 21% relative compared to analogous sequence-to-sequence models trained on each language individually. By modifying the model to accept a language identifier as an additional input feature, we further improve performance by an additional 7% relative and eliminate confusion between different languages."}
{"_id": "dff9874951549d2afdc60b7695c9322a510972cd", "title": "The coevolution of cultural groups and ingroup favoritism.", "text": "Cultural boundaries have often been the basis for discrimination, nationalism, religious wars, and genocide. Little is known, however, about how cultural groups form or the evolutionary forces behind group affiliation and ingroup favoritism. Hence, we examine these forces experimentally and show that arbitrary symbolic markers, though initially meaningless, evolve to play a key role in cultural group formation and ingroup favoritism because they enable a population of heterogeneous individuals to solve important coordination problems. This process requires that individuals differ in some critical but unobservable way and that their markers be freely and flexibly chosen. If these conditions are met, markers become accurate predictors of behavior. The resulting social environment includes strong incentives to bias interactions toward others with the same marker, and subjects accordingly show strong ingroup favoritism. When markers do not acquire meaning as accurate predictors of behavior, players show a markedly reduced taste for ingroup favoritism. Our results support the prominent evolutionary hypothesis that cultural processes can reshape the selective pressures facing individuals and so favor the evolution of behavioral traits not previously advantaged."}
{"_id": "54c0c2e3832ef3dd0da0025312b7f774a8117e6b", "title": "Efficiency-Oriented Design of ZVS Half-Bridge Series Resonant Inverter With Variable Frequency Duty Cycle Control", "text": "The efficiency of zero voltage switching half-bridge series resonant inverter can be decreased under certain load conditions due to the high switching frequencies required. The proposed variable frequency duty cycle (VFDC) control is intended to improve the efficiency in the medium and low output power levels because of the decreased switching frequencies. The study performed in this letter includes, in a first step, a theoretical analysis of power balance as a function of control parameters. In addition, restrictions due to snubber capacitors and deadtime, and variability of the loads have been considered. Afterward, an efficiency analysis has been carried out to determine the optimum operation point. Switching and conduction losses have been calculated to examine the overall efficiency improvement. VFDC strategy efficiency improvement is achieved by means of a switching-frequency reduction, mainly at low-medium power range, and with low-quality factor loads. Domestic induction heating application is suitable for the use of VFDC strategy due to its special load characteristics. For this reason, the simulation results have been validated using an induction heating inverter with a specially designed load."}
{"_id": "db2b8de5dd455f8dc90b6c2961fb3e0cd462a9f2", "title": "Implementation of RFID on Computer Based Test (RF-CBT) system", "text": "The paper presents an improvement of assessment model using RFID (Radio Frequency Identification) technology which implemented on Computer Based Test (CBT). The RFID worked at frequency 13.56 MHz. The RF-CBT system is constructed with User-PC and Server-PC. A RFID reader module (Mifare RC522, Microcontroller ATMEGA328) and LabVIEW program are installed on User-PC. An RFID tag information from User-PC was sent into Server-PC which is installed with MySQL database and PHP. The registered RFID tag on Server-PC could log in the CBT automatically. Our system is proposed not only for auto login to CBT but also for developing a random question and scoring test result on CBT. The system could reduce the opportunity to cheat in test as anyone cannot login to the test except have the authentic RFID tag. This system is feasible to measure students' cognitive abilities at once because it uses multiple choices test type."}
{"_id": "678300de239a0ad2095793e4c72ca467637aad46", "title": "Social influence on the use of Clinical Decision Support Systems: Revisiting the Unified Theory of Acceptance and Use of Technology by the fuzzy DEMATEL technique", "text": "The aim of study is to examine whether social influence affects medical professionals\u2019 behavioral intention to use while introducing a new Clinical Decision Support System (CDSS). The series of Technology Acceptance Models (TAMs) have been widely applied to examine new technology acceptance by scholars; nevertheless, these models omit system diversity and the user\u2019s profession. On the other hand, causal analysis greatly affects the efficiency of decision-making, and it is usually analyzed by Structural Equation Modeling (SEM); however, the method is often misapplied. This research applies the Decision-Making Trial and Evaluation Laboratory (DEMATEL) technique to explore the causal relationship between the significant Unified Theory of Acceptance and Use of Technology (UTAUT) variables. Fuzzy concept is applied to illustrate human vague judgment. It is significant that, in contrary with UTAUT, this study found that social influence does not matter in the behavioral intention to use the CDSS for medical professionals. 2011 Elsevier Ltd. All rights reserved."}
{"_id": "97790de891bd5ac30a76551f848482dab687bf85", "title": "An in-pipe robot with underactuated parallelogram crawler modules", "text": "In this paper, we present a new in-pipe robot with independent underactuated parallelogram crawler modules, which can automatically overcome inner obstacles in the pipes. The parallelogram crawler modules are adopted to maintain the anterior-posterior symmetry of forward and backward movements, and a simple differential mechanism based on a pair of spur gears is installed to provide underactuated mechanisms. A central base unit connects each crawler module through foldable pantograph mechanisms. To verify the basic behavior of this robot, primary experiments in pipes with different diameters and at partial steps were conducted."}
{"_id": "de0009e4c79b1b9cb702d06f3725a7011985de17", "title": "Evaluating the influence of cultural competence education on students' transcultural self-efficacy perceptions.", "text": "Guided by the cultural competence and confidence (CCC) model, the Transcultural Self-Efficacy Tool (TSET) was used to evaluate the influence of cultural competence education on the transcultural self-efficacy (TSE) perceptions of undergraduate nursing students following an integrated approach to cultural competence education. Results continue to support that TSE is influenced by formalized education and other learning experiences. As hypothesized, compared with novice students, advanced students' scores were higher for all subscales in both cross-sectional (n = 147) and longitudinal (n = 36) study designs. Results from analysis of variance and covariance demonstrated that none of the demographic variables predicted change; semester was the sole predictor, lending additional support that (a) the educational interventions throughout the four semesters influenced TSE changes and (b) all students regardless of background benefit (and require) formalized cultural competence education. Implications for nurse educators conclude the article."}
{"_id": "672a7c8aef5fb745894af13cb5ed9d1afd388849", "title": "Feature extraction approaches based on matrix pattern: MatPCA and MatFLDA", "text": "Feature Extraction Approaches Based On Matrix Pattern: MatPCA and MatFLDA Songcan Chen 1* Yulian Zhu 1 Daoqiang Zhang 1 Jing-Yu Yang 2 Dept. of Computer Science and Engineering, Nanjing University of Aeronautics & Astronautics, Nanjing 210016, People\u2019s Republic of China. Dept. of Computer Science, Nanjing University of Science & Technology, Nanjing 210094, People\u2019s Republic of China Abstract Principle component analysis (PCA) and Fisher linear discriminant analysis (FLDA), as two popular feature extraction approaches in Pattern recognition and data analysis, extract so-needed features directly based on vector patterns, i.e., before applying them, any non-vector pattern such as an image is first vectorized into a vector pattern by some technique like concatenation. However, such a vectorization has been proved not to be beneficial for image recognition due to consequences of both the algebraic feature extraction approach and 2DPCA. In this paper, inspired by the above two approaches, we try an opposite direction to extract features for any vector pattern by first matrixizing it into a matrix pattern and then applying the matrixized versions of PCA and FLDA, MatPCA and MatFLDA, to the pattern. MatFLDA uses, in essence, the same principle as the algebraic feature extraction approach and is constructed in terms of similar objective function to FLDA while MatPCA uses a minimization of the reconstructed error for the training samples like PCA to obtain a set of projection vectors, which is somewhat different derivation from 2DPCA despite of equivalence. Finally experiments on 10 publicly obtainable datasets show that both MatPCA and MatFLDA gain performance improvement in different degrees respectively on 7 and 5"}
{"_id": "787a8793732e088eec2c8711012b229ba36befec", "title": "Performance analysis and security dependence of on paper digital signature using random and critical content", "text": "The content of physical paper documents is a major concern, after the development of modern scanning and printing technologies. The tendency for the document fraud has been growing and the paper-based documents require authenticating the sender identity and ensuring the content integrity. This paper presents a reliable and simple method for authenticating the content of a paper document by implementing an on paper digital signature. This method randomly selects the words at random positions of the document along with critical content and creates on paper digital signature. This method employs an online document verification system and the unauthorized access is denied by the user authentication mechanism. Performance analysis and security dependence on the system is also studied in this paper."}
{"_id": "4f5a1fe4a162df4d4c431ede80a6d7daad4af5c1", "title": "Population-Level Prediction of Type 2 Diabetes From Claims Data and Analysis of Risk Factors.", "text": "We present a new approach to population health, in which data-driven predictive models are learned for outcomes such as type 2 diabetes. Our approach enables risk assessment from readily available electronic claims data on large populations, without additional screening cost. Proposed model uncovers early and late-stage risk factors. Using administrative claims, pharmacy records, healthcare utilization, and laboratory results of 4.1 million individuals between 2005 and 2009, an initial set of 42,000 variables were derived that together describe the full health status and history of every individual. Machine learning was then used to methodically enhance predictive variable set and fit models predicting onset of type 2 diabetes in 2009-2011, 2010-2012, and 2011-2013. We compared the enhanced model with a parsimonious model consisting of known diabetes risk factors in a real-world environment, where missing values are common and prevalent. Furthermore, we analyzed novel and known risk factors emerging from the model at different age groups at different stages before the onset. Parsimonious model using 21 classic diabetes risk factors resulted in area under ROC curve (AUC) of 0.75 for diabetes prediction within a 2-year window following the baseline. The enhanced model increased the AUC to 0.80, with about 900 variables selected as predictive (p\u2009<\u20090.0001 for differences between AUCs). Similar improvements were observed for models predicting diabetes onset 1-3 years and 2-4 years after baseline. The enhanced model improved positive predictive value by at least 50% and identified novel surrogate risk factors for type 2 diabetes, such as chronic liver disease (odds ratio [OR] 3.71), high alanine aminotransferase (OR 2.26), esophageal reflux (OR 1.85), and history of acute bronchitis (OR 1.45). Liver risk factors emerge later in the process of diabetes development compared with obesity-related factors such as hypertension and high hemoglobin A1c. In conclusion, population-level risk prediction for type 2 diabetes using readily available administrative data is feasible and has better prediction performance than classical diabetes risk prediction algorithms on very large populations with missing data. The new model enables intervention allocation at national scale quickly and accurately and recovers potentially novel risk factors at different stages before the disease onset."}
{"_id": "2eea41136efeea6e1426c631fa1ac0a221fe6978", "title": "Deep learning approach for Network Intrusion Detection in Software Defined Networking", "text": "Software Defined Networking (SDN) has recently emerged to become one of the promising solutions for the future Internet. With the logical centralization of controllers and a global network overview, SDN brings us a chance to strengthen our network security. However, SDN also brings us a dangerous increase in potential threats. In this paper, we apply a deep learning approach for flow-based anomaly detection in an SDN environment. We build a Deep Neural Network (DNN) model for an intrusion detection system and train the model with the NSL-KDD Dataset. In this work, we just use six basic features (that can be easily obtained in an SDN environment) taken from the forty-one features of NSL-KDD Dataset. Through experiments, we confirm that the deep learning approach shows strong potential to be used for flow-based anomaly detection in SDN environments."}
{"_id": "9bd94b36f4a5fabdd2db8d2ebd20bba7c1c2d9e6", "title": "Shot boundary detection from videos using entropy and local descriptor", "text": "Video shot segmentation is an important step in key frame selection, video copy detection, video summarization, and video indexing for retrieval. Although some types of video data, e.g., live sports coverage, have abrupt shot boundaries that are easy to identify using simple heuristics, it is much more difficult to identify shot boundaries in other types such as cinematic movies. We propose an algorithm for shot boundary detection able to accurately identify not only abrupt shot boundaries, but also the fade-in and fade-out boundaries typical of cinematic movies. The algorithm is based on analysis of changes in the entropy of the gray scale intensity over consecutive frames and analysis of correspondences between SURF features over consecutive frames. In an experimental evaluation on the TRECVID-2007 shot boundary test set, the algorithm achieves substantial improvements over state of the art methods, with a precision of 97.8% and a recall of 99.3%."}
{"_id": "faccce1a55c0c0ac767b74782c862a3eed0d1065", "title": "SIGNet: Semantic Instance Aided Unsupervised 3D Geometry Perception", "text": "Unsupervised learning for visual perception of 3D geometry is of great interest to autonomous systems. Recent works on unsupervised learning have made considerable progress on geometry perception; however, they perform poorly on dynamic objects and scenarios with dark and noisy environments. In contrast, supervised learning algorithms, which are robust, require large labeled geometric dataset. This paper introduces SIGNet, a novel framework that provides robust geometry perception without requiring geometrically informative labels. Specifically, SIGNet integrates semantic information to make unsupervised robust geometric predictions for dynamic objects in low lighting and noisy environments. SIGNet is shown to improve upon the state-of-the-art unsupervised learning for geometry perception by 30% (in squared relative error for depth prediction). In particular, SIGNet improves the dynamic object class performance by 39% in depth prediction and 29% in flow prediction."}
{"_id": "7bdaefdd9954b75fdf305135b27105214c8eac66", "title": "Analysis of hidden units in a layered network trained to classify sonar targets", "text": "-A neural network learning procedure has been applied to the classification ~/sonar returns [kom two undersea targets, a metal cylinder and a similarly shaped rock. Networks with an intermediate layer ~/ hidden processing units achieved a classification accuracy as high as 100% on a training set of l04 returns. These net~orks correctly classified up to 90.4% of 104 test returns not contained in the training set. This perfi~rmance was better than that of a nearest neighbor classifier, which was 82.7%. and was close to that of an optimal Bayes classifie~ Specific signal features extracted by hidden units in a trained network were identified and related to coding schemes in the pattern of connection strengths between the input and the hidden units. Network perlbrmance and class[/~cation strategy was comparable to that of trained human listeners. Keywords--Learning algorithms, Hidden units. Multilayered neural network, Sonar, Signal processing."}
{"_id": "169e0f24be436ea480b77fbeed92f019c8dba3be", "title": "Pushover Analysis of 4 Storey \u2019 s Reinforced Concrete Building", "text": "The earthquakes in the Indian subcontinent have led to an increase in the seismic zoning factor over many parts of the country. Also, ductility has become an issue for all building that was designed and detailed using earlier versions of the codes. Under such circumstances, seismic qualification of building has become extremely important. The structural engineering profession has been using the nonlinear static procedure (NSP) or pushover analysis. Modeling for such analysis requires the determination of the nonlinear properties of each component in the structure, quantified by strength and deformation capacities, which depend on the modeling assumptions. Pushover analysis is carried out for either user-defined nonlinear hinge properties or default-hinge properties, available in some programs based on the FEMA-356 and ATC-40 guidelines. This paper aims to evaluate the zone \u2013IV selected reinforced concrete building to conduct the non-linear static analysis (Pushover Analysis). The pushover analysis shows the pushover curves, capacity spectrum, plastic hinges and performance level of the building. The non-linear static analysis gives better understanding and more accurate seismic performance of buildings of the damage or failure element."}
{"_id": "a6c04f3ba5a59bdeedb042835c8278e0d80e81ff", "title": "A Low-Profile Wideband Substrate-Integrated Waveguide Cavity-Backed E-Shaped Patch Antenna for the Q-LINKPAN Applications", "text": "A low-profile substrate-integrated waveguide cavity-backed E-shaped patch antenna is proposed for the Q-LINKPAN application in this paper. In order to expand the operating bandwidth, a co-planar waveguide (CPW) is used to feed the proposed antenna with a metallized via for generating one more resonant mode. In addition, a substrate-integrated cavity is employed to suppress the surface wave and improve the radiation efficiency. A differential feeding network is used in the design to improve the symmetry of the E-plane radiation pattern and the H-plane cross polarization. A $2 \\times 2$ prototype is designed, fabricated, and measured for a demonstration. The measured results show that the prototype has a 10 dB impedance bandwidth of 34.4%, a gain of around 12.5 dBi with a narrow E-plane radiation beam within 37.5\u201346 GHz for long distance applications, and a gain of around 8 dBi with a broad E-plane radiation beam within 47\u201353 GHz for short distance applications. The proposed technique can be used to develop compact planar antenna for meeting both the short- and long-rang communication requirements of the emerging Q-LINKPAN wireless system."}
{"_id": "75e72ceac5b6ca66a42a132de2df552a4b3a3da3", "title": "Knowledge Collaboration in Online Communities", "text": "O communities (OCs) are a virtual organizational form in which knowledge collaboration can occur in unparalleled scale and scope, in ways not heretofore theorized. For example, collaboration can occur among people not known to each other, who share different interests and without dialogue. An exploration of this organizational form can fundamentally change how we theorize about knowledge collaboration among members of organizations. We argue that a fundamental characteristic of OCs that affords collaboration is their fluidity. This fluidity engenders a dynamic flow of resources in and out of the community\u2014resources such as passion, time, identity, social disembodiment of ideas, socially ambiguous identities, and temporary convergence. With each resource comes both a negative and positive consequence, creating a tension that fluctuates with changes in the resource. We argue that the fluctuations in tensions can provide an opportunity for knowledge collaboration when the community responds to these tensions in ways that encourage interactions to be generative rather than constrained. After offering numerous examples of such generative responses, we suggest that this form of theorizing\u2014induced by online communities\u2014has implications for theorizing about the more general case of knowledge collaboration in organizations."}
{"_id": "16e239cff1be186c21e25064d7abba92bd444263", "title": "IP Lookup using Two-level Indexing and B-Trees", "text": "Networks are expanding very fast and the number of clients is increasing dramatically, this causes the router forwarding table to become very large and present more demand on faster router operations. In this paper, we address the problem of packet forwarding in the routers aiming to increase the speed of address lookup and minimize the memory required for storing the forwarding table. We propose a new algorithm that makes use of two-level indexing and Btrees. We test the approach and compare it to other famous IP lookup approaches. The preliminary simulations show 20% less memory requirements and the lookup speed scaling linearly with increasing table size."}
{"_id": "488257dcbc7bcb56836f10a410e69c2c283989e5", "title": "mTOR Signaling in Growth Control and Disease", "text": "The mechanistic target of rapamycin (mTOR) signaling pathway senses and integrates a variety of environmental cues to regulate organismal growth and homeostasis. The pathway regulates many major cellular processes and is implicated in an increasing number of pathological conditions, including cancer, obesity, type 2 diabetes, and neurodegeneration. Here, we review recent advances in our understanding of the mTOR pathway and its role in health, disease, and aging. We further discuss pharmacological approaches to treat human pathologies linked to mTOR deregulation."}
{"_id": "683b6d7aac522ce3eb54b2aee5982bdf256e5ebf", "title": "Counterintuitive Behavior of Social Systems", "text": "This paper addresses several social concerns: population trends; quality of urban life; policies for urban growth; and the unexpected, ineffective, or detrimental results often generated by government programs. Society becomes frustrated as repeated attacks on deficiencies in social systems lead only to worse symptoms. Legislation is debated and passed with great hope, but many programs prove to be ineffective. Results are often far short of expectations. Because dynamic behavior of social systems is not understood, government programs often cause exactly the reverse of desired results. The field of system dynamics now can explain how such contrary results happen. Fundamental reasons cause people to misjudge behavior of social systems. Orderly processes in creating human judgment and intuition lead people to wrong decisions when faced with complex and highly interacting systems. Until we reach a much better public understanding of social systems, attempts to develop corrective programs for social troubles will continue to be disappointing. This paper cautions against continuing to depend on the same past approaches that have led to present feelings of frustration. New methods developed over the last 30 years will lead to a better understanding of social systems and thereby to more effective policies for guiding the future. 1 Updated March, 1995. This paper was first copyrighted \u00a9 1971 by Jay W. Forrester. It is based on testimony for the Subcommittee on Urban Growth of the Committee on Banking and Currency, U.S. House of Representatives, on October 7, 1970. The original text appeared in the January, 1971, issue of the Technology Review published by the Alumni Association of the Massachusetts Institute of Technology. All figures are taken from World Dynamics by Jay W. Forrester, Productivity Press, Portland, Oregon. 2 Germeshausen Professor Emeritus and Senior Lecturer, Massachusetts Institute of Technology, Cambridge, MA, USA Copyright \u00a9 1995 Jay W. Forrester"}
{"_id": "9ac5843db25e5859b77d90cba929bbe4fb2bec50", "title": "Overview of narrowband IoT in LTE Rel-13", "text": "In 3GPP Rel-13, a narrowband system, named Narrowband Internet of Things (NB-IoT), has been introduced to provide low-cost, low-power, wide-area cellular connectivity for the Internet of Things. This system, based on Long Term Evolution (LTE) technology, supports most LTE functionalities albeit with essential simplifications to reduce device complexity. Further optimizations to increase coverage, reduce overhead and reduce power consumption while increasing capacity have been introduced as well. The design objectives of NB-IoT include low-complexity devices, high coverage, long device battery life, and massive capacity. Latency is relaxed although a delay budget of 10 seconds is the target for exception reports. This paper provides an overview of NB-IoT design, including salient features from the physical and higher layers. Illustrative results with respect to performance objectives are also provided. Finally, NB-IoT enhancements in LTE Rel-14 are briefly outlined."}
{"_id": "767998816838f0b08d3ee1dc97acdc8113d71d88", "title": "Parenting stress in mothers and fathers of toddlers with autism spectrum disorders: associations with child characteristics.", "text": "Elevated parenting stress is observed among mothers of older children with autism spectrum disorders (ASD), but little is known about parents of young newly-diagnosed children. Associations between child behavior and parenting stress were examined in mothers and fathers of 54 toddlers with ASD (mean age = 26.9 months). Parents reported elevated parenting stress. Deficits/delays in children's social relatedness were associated with overall parenting stress, parent-child relationship problems, and distress for mothers and fathers. Regulatory problems were associated with maternal stress, whereas externalizing behaviors were associated with paternal stress. Cognitive functioning, communication deficits, and atypical behaviors were not uniquely associated with parenting stress. Clinical assessment of parental stress, acknowledging differences in parenting experiences for mothers and fathers of young children with ASD, is needed."}
{"_id": "46d99d58304c3774c0eabec4d93948c6be36ddd0", "title": "Dynamic time warping constraint learning for large margin nearest neighbor classification", "text": "Nearest neighbor (NN) classifier with dynamic time warping (DTW) is considered to be an effective method for time series classification. The performance of NN-DTW is dependent on the DTW constraints because the NN classifier is sensitive to the used distance function. For time series classification, the global path constraint of DTW is learned for optimization of the alignment of time series by maximizing the nearest neighbor hypothesis margin. In addition, a reduction technique is combined with a search process to condense the prototypes. The approach is implemented and tested on UCR datasets. Experimental results show the effectiveness of the proposed method. ! 2011 Elsevier Inc. All rights reserved."}
{"_id": "017f3cc6c4b25e6aa4e48c4ef204d0f558a6a12a", "title": "DreamSketch: Early Stage 3D Design Explorations with Sketching and Generative Design", "text": "We present DreamSketch, a novel 3D design interface that combines the free-form and expressive qualities of sketching with the computational power of generative design algorithms. In DreamSketch, a user coarsely defines the problem by sketching the design context. Then, a generative design algorithm produces multiple solutions that are augmented as 3D objects in the sketched context. The user can interact with the scene to navigate through the generated solutions. The combination of sketching and generative algorithms enables designers to explore multiple ideas and make better informed design decisions during the early stages of design. Design study sessions with designers and mechanical engineers demonstrate the expressive nature and creative possibilities of DreamSketch."}
{"_id": "1001c09821f6910b5b8038a3c5993456ba966946", "title": "Practical Bayesian Optimization of Machine Learning Algorithms", "text": "The use of machine learning algorithms frequently involves careful tuning of learning parameters and model hyperparameters. Unfortunately, this tuning is often a \u201cblack art\u201d requiring expert experience, rules of thumb, or sometimes bruteforce search. There is therefore great appeal for automatic approaches that can optimize the performance of any given learning algorithm to the problem at hand. In this work, we consider this problem through the framework of Bayesian optimization, in which a learning algorithm\u2019s generalization performance is modeled as a sample from a Gaussian process (GP). We show that certain choices for the nature of the GP, such as the type of kernel and the treatment of its hyperparameters, can play a crucial role in obtaining a good optimizer that can achieve expertlevel performance. We describe new algorithms that take into account the variable cost (duration) of learning algorithm experiments and that can leverage the presence of multiple cores for parallel experimentation. We show that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms including latent Dirichlet allocation, structured SVMs and convolutional neural networks."}
{"_id": "2dec5b671af983b1e57418434932f0320f51e9ca", "title": "Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid", "text": "Naive Bayes induction algorithms were previously shown to be surprisingly accurate on many classi cation tasks even when the conditional independence assumption on which they are based is violated How ever most studies were done on small databases We show that in some larger databases the accuracy of Naive Bayes does not scale up as well as decision trees We then propose a new algorithm NBTree which in duces a hybrid of decision tree classi ers and Naive Bayes classi ers the decision tree nodes contain uni variate splits as regular decision trees but the leaves contain Naive Bayesian classi ers The approach re tains the interpretability of Naive Bayes and decision trees while resulting in classi ers that frequently out perform both constituents especially in the larger databases tested"}
{"_id": "30423f985355d74295546f1d14ed2ddd33cdef99", "title": "Bayesian Optimization with Robust Bayesian Neural Networks", "text": "Bayesian optimization is a prominent method for optimizing expensive-to-evaluate black-box functions that is widely applied to tuning the hyperparameters of machine learning algorithms. Despite its successes, the prototypical Bayesian optimization approach \u2013 using Gaussian process models \u2013 does not scale well to either many hyperparameters or many function evaluations. Attacking this lack of scalability and flexibility is thus one of the key challenges of the field. We present a general approach for using flexible parametric models (neural networks) for Bayesian optimization, staying as close to a truly Bayesian treatment as possible. We obtain scalability through stochastic gradient Hamiltonian Monte Carlo, whose robustness we improve via a scale adaptation. Experiments including multi-task Bayesian optimization with 21 tasks, parallel optimization of deep neural networks and deep reinforcement learning show the power and flexibility of this approach."}
{"_id": "5ba6dcdbf846abb56bf9c8a060d98875ae70dbc8", "title": "Taking the Human Out of the Loop: A Review of Bayesian Optimization", "text": "Big Data applications are typically associated with systems involving large numbers of users, massive complex software systems, and large-scale heterogeneous computing and storage architectures. The construction of such systems involves many distributed design choices. The end products (e.g., recommendation systems, medical analysis tools, real-time game engines, speech recognizers) thus involve many tunable configuration parameters. These parameters are often specified and hard-coded into the software by various developers or teams. If optimized jointly, these parameters can result in significant improvements. Bayesian optimization is a powerful tool for the joint optimization of design choices that is gaining great popularity in recent years. It promises greater automation so as to increase both product quality and human productivity. This review paper introduces Bayesian optimization, highlights some of its methodological aspects, and showcases a wide range of applications."}
{"_id": "775a4e375cc79b53b94e37fa3eedff481823e4a6", "title": "Efficient and Robust Automated Machine Learning", "text": "The success of machine learning in a broad range of applications has led to an ever-growing demand for machine learning systems that can be used off the shelf by non-experts. To be effective in practice, such systems need to automatically choose a good algorithm and feature preprocessing steps for a new dataset at hand, and also set their respective hyperparameters. Recent work has started to tackle this automated machine learning (AutoML) problem with the help of efficient Bayesian optimization methods. Building on this, we introduce a robust new AutoML system based on scikit-learn (using 15 classifiers, 14 feature preprocessing methods, and 4 data preprocessing methods, giving rise to a structured hypothesis space with 110 hyperparameters). This system, which we dub AUTO-SKLEARN, improves on existing AutoML methods by automatically taking into account past performance on similar datasets, and by constructing ensembles from the models evaluated during the optimization. Our system won the first phase of the ongoing ChaLearn AutoML challenge, and our comprehensive analysis on over 100 diverse datasets shows that it substantially outperforms the previous state of the art in AutoML. We also demonstrate the performance gains due to each of our contributions and derive insights into the effectiveness of the individual components of AUTO-SKLEARN."}
{"_id": "2c13b73ce6123966a9f8c82b8f26d9e3fbc312b7", "title": "The biology of cancer: metabolic reprogramming fuels cell growth and proliferation.", "text": "Cell proliferation requires nutrients, energy, and biosynthetic activity to duplicate all macromolecular components during each passage through the cell cycle. It is therefore not surprising that metabolic activities in proliferating cells are fundamentally different from those in nonproliferating cells. This review examines the idea that several core fluxes, including aerobic glycolysis, de novo lipid biosynthesis, and glutamine-dependent anaplerosis, form a stereotyped platform supporting proliferation of diverse cell types. We also consider regulation of these fluxes by cellular mediators of signal transduction and gene expression, including the phosphatidylinositol 3-kinase (PI3K)/Akt/mTOR system, hypoxia-inducible factor 1 (HIF-1), and Myc, during physiologic cell proliferation and tumorigenesis."}
{"_id": "8c964cbb6e211062f9e45f513cc8b4573b8b40c6", "title": "Plantar fasciitis: evidence-based review of diagnosis and therapy.", "text": "Plantar fasciitis causes heel pain in active as well as sedentary adults of all ages. The condition is more likely to occur in persons who are obese or in those who are on their feet most of the day. A diagnosis of plantar fasciitis is based on the patient's history and physical findings. The accuracy of radiologic studies in diagnosing plantar heel pain is unknown. Most interventions used to manage plantar fasciitis have not been studied adequately; however, shoe inserts, stretching exercises, steroid injection, and custom-made night splints may be beneficial. Extracorporeal shock wave therapy may effectively treat runners with chronic heel pain but is ineffective in other patients. Limited evidence suggests that casting or surgery may be beneficial when conservative measures fail."}
{"_id": "899835a31035bbfb7eaa803b6e58eebbcdee03f3", "title": "Penetration state transition analysis: A rule-based intrusion detection approach", "text": "| This paper presents a new approach to representing and detecting computer penetrations in real-time. The approach, called state transition analysis, models penetrations as a series of state changes that lead from an initial secure state to a target compromised state. State transition diagrams, the graphical representation of penetrations, identify precisely the requirements for and the compromise of a penetration and present only the critical events that must occur for the successful completion of the penetration. State transition diagrams are written to correspond to the states of an actual computer system, and these diagrams form the basis of a rule-based expert system for detecting penetrations, called the State Transition Analysis Tool (STAT). The design and implementation of a UNIX-speci c prototype of this expert system, called USTAT, is also presented. This prototype provides a further illustration of the overall design and functionality of this intrusion detection approach. Lastly, STAT is compared to the functionality of comparable intrusion detection tools. Keywords|Security, Intrusion Detection, Expert Systems"}
{"_id": "3086170c79b7c9770f3a9d174e122170db852d09", "title": "Detection and counting of pothole using image processing techniques", "text": "Pothole are the primary cause of accidents, hence identification and classification using image processing techniques is very important. In this paper image pre-processing based on difference of Gaussian-Filtering and clustering based image segmentation methods are implemented for better results. From the results the K-Means clustering based segmentation was preferred for its fastest computing time and edge detection based segmentation is preferred for its specificity. The main goal of this paper is to identify a better method which is highly efficient and accurate compared to the conventional methods. Different image pre-processing and segmentation methods for pothole detection where reviewed using performance measures."}
{"_id": "a79704c1ce7bf10c8753a8f51437ccbc61947d03", "title": "Robust facial expression recognition using local binary patterns", "text": "A novel low-computation discriminative feature space is introduced for facial expression recognition capable of robust performance over a rang of image resolutions. Our approach is based on the simple local binary patterns (LBP) for representing salient micro-patterns of face images. Compared to Gabor wavelets, the LBP features can be extracted faster in a single scan through the raw image and lie in a lower dimensional space, whilst still retaining facial information efficiently. Template matching with weighted Chi square statistic and support vector machine are adopted to classify facial expressions. Extensive experiments on the Cohn-Kanade Database illustrate that the LBP features are effective and efficient for facial expression discrimination. Additionally, experiments on face images with different resolutions show that the LBP features are robust to low-resolution images, which is critical in real-world applications where only low-resolution video input is available."}
{"_id": "39b1c4bf2409ac4fd21c611f732745329e118e0b", "title": "1 FEEDFORWARD NEURAL NETWORKS : AN INTRODUCTION", "text": "1 A neural network is a massively parallel distributed processor that has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects (Haykin 1998): 1. Knowledge is acquired by the network through a learning process. 2. Interconnection strengths known as synaptic weights are used to store the knowledge. Basically, learning is a process by which the free parameters (i.e., synaptic weights and bias levels) of a neural network are adapted through a continuing process of stimulation by the environment in which the network is embedded. The type of learning is determined by the manner in which the parameter changes take place. In a general sense, the learning process may be classified as follows:"}
{"_id": "43c0ff1070def3d98f548b7cbf523fdd4a83827a", "title": "Attentive Collaborative Filtering: Multimedia Recommendation with Item- and Component-Level Attention", "text": "Multimedia content is dominating today's Web information. The nature of multimedia user-item interactions is 1/0 binary implicit feedback (e.g., photo likes, video views, song downloads, etc.), which can be collected at a larger scale with a much lower cost than explicit feedback (e.g., product ratings). However, the majority of existing collaborative filtering (CF) systems are not well-designed for multimedia recommendation, since they ignore the implicitness in users' interactions with multimedia content. We argue that, in multimedia recommendation, there exists item- and component-level implicitness which blurs the underlying users' preferences. The item-level implicitness means that users' preferences on items (e.g. photos, videos, songs, etc.) are unknown, while the component-level implicitness means that inside each item users' preferences on different components (e.g. regions in an image, frames of a video, etc.) are unknown. For example, a 'view'' on a video does not provide any specific information about how the user likes the video (i.e.item-level) and which parts of the video the user is interested in (i.e.component-level). In this paper, we introduce a novel attention mechanism in CF to address the challenging item- and component-level implicit feedback in multimedia recommendation, dubbed Attentive Collaborative Filtering (ACF). Specifically, our attention model is a neural network that consists of two attention modules: the component-level attention module, starting from any content feature extraction network (e.g. CNN for images/videos), which learns to select informative components of multimedia items, and the item-level attention module, which learns to score the item preferences. ACF can be seamlessly incorporated into classic CF models with implicit feedback, such as BPR and SVD++, and efficiently trained using SGD. Through extensive experiments on two real-world multimedia Web services: Vine and Pinterest, we show that ACF significantly outperforms state-of-the-art CF methods."}
{"_id": "771063a4b9c166e2f87c30342c7953dab566c406", "title": "3D Mesh Compression: Survey, Comparisons, and Emerging Trends", "text": "3D meshes are commonly used to represent virtual surface and volumes. However, their raw data representations take a large amount of space. Hence, 3D mesh compression has been an active research topic since the mid 1990s. In 2005, two very good review articles describing the pioneering works were published. Yet, new technologies have emerged since then. In this article, we summarize the early works and put the focus on these novel approaches. We classify and describe the algorithms, evaluate their performance, and provide synthetic comparisons. We also outline the emerging trends for future research."}
{"_id": "e0ef5fe2fefc7d6df1964a890a4474e13b8162b3", "title": "Prescribed performance control of quadruped robot", "text": "Due to the complex and high nonlinear structure of the quadruped robots, the dynamic control of quadruped robots has long been a big challenge for the researchers. In this paper, a guaranteed performance adaptive control algorithm is proposed for tracking control of a quadruped robot with parameter uncertainties. The controller is designed based on the sliding mode theory, and incorporates a transformed error, which includes the performance indices. Parameter uncertainties are estimated by a linear network. By choosing a proper Lyapunov function, the prescribed performance with the proposed controller is proven in the scenes of the parameter uncertainties and the environment constraint. Numerical simulations demonstrate the effectiveness and superiority of the proposed controller."}
{"_id": "4c8a5035757d06e4af104a04b2ac4bc83a1cdb2d", "title": "Largest inscribed rectangles in convex polygons", "text": "We consider approximation algorithms for the problem of computing an inscribed rectangle having largest area in a convex polygon on n vertices. If the order of the vertices of the polygon is given, we present a randomized algorithm that computes an inscribed rectangle with area at least (1\u2212 ) times the optimum with probability t in time O( 1 log n) for any constant t < 1. We further give a deterministic approximation algorithm that computes an inscribed rectangle of area at least (1\u2212 ) times the optimum in running time O( 1 2 log n) and show how this running time can be slightly improved."}
{"_id": "e28a0c509ef43f253ee475d30d5b419debefce05", "title": "Self-enucleation in a young schizophrenic patient--a case report.", "text": "Self-enucleation represents an extreme but fortunately rare form of deliberate self-harm. Case reports of patients who self-enucleate reveal some common features. A case of auto-enucleation in a young schizophrenic patient and a short discussion on deliberate self-harm are presented."}
{"_id": "40f4bed6f7706e2c60685b0bb22ffdd527310094", "title": "Random Forests and VGG-NET: An Algorithm for the ISIC 2017 Skin Lesion Classification Challenge", "text": "This manuscript briefly describes an algorithm developed for the ISIC 2017 Skin Lesion Classification Competition. In this task, participants are asked to complete two independent binary image classification tasks that involve three unique diagnoses of skin lesions (melanoma, nevus, and seborrheic keratosis). In the first binary classification task, participants are asked to distinguish between (a) melanoma and (b) nevus and seborrheic keratosis. In the second binary classification task, participants are asked to distinguish between (a) seborrheic keratosis and (b) nevus and melanoma. The other phases of the competition are not considered. Our proposed algorithm consists of three steps: preprocessing, classification using VGG-NET [2] and Random Forests [3], and calculation of a final score."}
{"_id": "749818fb681ac52db30b2424042ebd90050d26c7", "title": "A MATLAB-based simulation program for indoor visible light communication system", "text": "We report a simulation program for indoor visible light communication environment based on MATLAB and Simulink. The program considers the positions of the transmitters and the reflections at each wall. For visible light communication environment, the illumination light-emitting diode is used not only as a lighting device, but also as a communication device. Using the simulation program, the distributions of illuminance and root-mean-square delay spread are analyzed at bottom surface."}
{"_id": "421703f469c46e06bddf0642224e75189b208af6", "title": "Cognitive Biases in Visualizations", "text": ""}
{"_id": "54cf3614c31e6f150cc712d9cb988d3663bf8e80", "title": "Automatic creation of WordNets from parallel corpora", "text": "In this paper we present the evaluation results for the creation of WordNets for five languages (Spanish, French, German, Italian and Portuguese) using an approach based on parallel corpora. We have used three very large parallel corpora for our experiments: DGT-TM, EMEA and ECB. The English part of each corpus is semantically tagged using Freeling and UKB. After this step, the process of WordNet creation is converted into a word alignment problem, where we want to align WordNet synsets in the English part of the corpus with lemmata on the target language part of the corpus. The word alignment algorithm used in these experiments is a simple most frequent translation algorithm implemented into the WN-Toolkit. The obtained precision values are quite satisfactory, but the overall number of extracted synset-variant pairs is too low, leading into very poor recall values. In the conclusions, the use of more advanced word alignment algorithms, such as Giza++, Fast Align or Berkeley aligner is suggested."}
{"_id": "7fdf15c10225409ab9f1df387aecb7bafe8f3965", "title": "Preliminary safety assessment for a sectorless ATM concept", "text": "In a sectorless air traffic management concept the airspace is no longer divided into sectors but regarded as one piece. A number of aircraft, not necessarily in the same airspace region, are assigned to each air traffic controller who is then responsible for these aircraft from their entry into the airspace to their exit. These individually assigned flights can be anywhere in the airspace and therefore in different traffic situations. This means that air traffic controllers will manage flights which may not be geographically connected. Such a concept change will necessitate also a significant change in the controllers' routines and the support tools. Naturally, the question of safety arises regarding new procedures and systems. This paper provides a preliminary safety assessment for a sectorless air traffic management concept. The assessment is based on the Single European Sky ATM Research (SESAR) Safety Reference Material which was originally developed for SESAR purposes. This success-based approach stresses the positive contribution of a new concept while traditional approaches mainly consider the negative effect of possible hazards and failures. Based on validation activities including realtime simulations we have developed safety acceptance criteria and safety objectives for a sectorless air traffic management (ATM) concept. Starting from these we have sketched the safety performance requirement model and deduce the first safety requirements for normal conditions, abnormal conditions and in the case of internal system failures."}
{"_id": "d37042fda4de547a2e1eb22f5771f22e805ff777", "title": "Two decades of terror management theory: a meta-analysis of mortality salience research.", "text": "A meta-analysis was conducted on empirical trials investigating the mortality salience (MS) hypothesis of terror management theory (TMT). TMT postulates that investment in cultural worldviews and self-esteem serves to buffer the potential for death anxiety; the MS hypothesis states that, as a consequence, accessibility of death-related thought (MS) should instigate increased worldview and self-esteem defense and striving. Overall, 164 articles with 277 experiments were included. MS yielded moderate effects (r = .35) on a range of worldview- and self-esteem-related dependent variables (DVs), with effects increased for experiments using (a) American participants, (b) college students, (c) a longer delay between MS and the DV, and (d) people-related attitudes as the DV. Gender and self-esteem may moderate MS effects differently than previously thought. Results are compared to other reviews and examined with regard to alternative explanations of TMT. Finally, suggestions for future research are offered."}
{"_id": "3b8ad1f2335fc755e5cd75ee5922b8a0d432018a", "title": "A Fast and Compact Saliency Score Regression Network Based on Fully Convolutional Network", "text": "Visual saliency detection aims at identifying the most visually distinctive parts in an image, and serves as a pre-processing step for a variety of computer vision and image processing tasks. To this end, the saliency detection procedure must be as fast and compact as possible and optimally processes input images in a real time manner. It is an essential application requirement for the saliency detection task. However, contemporary detection methods often utilize some complicated procedures to pursue feeble improvements on the detection precession, which always take hundreds of milliseconds and make them not easy to be applied practically. In this paper, we tackle this problem by proposing a fast and compact saliency score regression network which employs fully convolutional network, a special deep convolutional neural network, to estimate the saliency of objects in images. It is an extremely simplified endto-end deep neural network without any pre-processings and post-processings. When given an image, the network can directly predict a dense full-resolution saliency map (image-to-image prediction). It works like a compact pipeline which effectively simplifies the detection procedure. Our method is evaluated on six public datasets, and experimental results show that it can achieve comparable or better precision performance than the state-of-the-art methods while get a significant improvement in detection speed (35 FPS, processing in real time)."}
{"_id": "cf5ef59ae935085513d70b4eecea3809c80bbcfb", "title": "The Evolution to Modern Phased Array Architectures", "text": "Phased array technology has been evolving steadily with advances in solid-state microwave integrated circuits, analysis and design tools, and reliable fabrication practices. With significant government investments, the technologies have matured to a point where phased arrays are widely used in military systems. Next-generation phased arrays will employ high levels of digitization, which enables a wide range of improvements in capability and performance. Digital arrays leverage the rapid commercial evolution of digital processor technology. The cost of phased arrays can be minimized by utilizing high-volume commercial microwave manufacturing and packaging techniques. Dramatic cost reductions are achieved by employing a tile array architecture, which greatly reduces the number of printed circuit boards and connectors in the array."}
{"_id": "476ee4dfb5640702fece5cbd680b839c206f4b15", "title": "Spectrum Survey in Singapore: Occupancy Measurements and Analyses", "text": "We study the 24-hour spectrum usage pattern in Singapore in the frequency bands ranging from 80 MHz to 5850 MHz. The objectives are to find how the scarce radio spectrum allocated to different services is utilized in Singapore and identify the bands that could be accessed for future opportunistic use due to their low or no active utilization. The results from the spectrum measurements taken over 12 weekday periods reveal that a significant amount of spectrum in Singapore has very low occupancy all the time. The occupancy is quantified as the amount of spectrum detected above a certain received power threshold. The outcome of this study suggests that Singapore has a great potential for employing emerging spectrum sharing technology such as the cognitive radio technology to accommodate enormous demands for future wireless services. However, this study of spectrum survey is preliminary in its nature and future long term studies need to be performed to determine any potential secondary usage on those channels that have low or no active utilization."}
{"_id": "25ac694fa23f733679496a139e9168472e267865", "title": "Convolutional Neural Networks vs. Convolution Kernels: Feature Engineering for Answer Sentence Reranking", "text": "In this paper, we study, compare and combine two state-of-the-art approaches to automatic feature engineering: Convolution Tree Kernels (CTKs) and Convolutional Neural Networks (CNNs) for learning to rank answer sentences in a Question Answering (QA) setting. When dealing with QA, the key aspect is to encode relational information between the constituents of question and answer in learning algorithms. For this purpose, we propose novel CNNs using relational information and combined them with relational CTKs. The results show that (i) both approaches achieve the state of the art on a question answering task, where CTKs produce higher accuracy and (ii) combining such methods leads to unprecedented high results."}
{"_id": "54cec6788cbde31a271290d65bb7532f635d0e9c", "title": "Electronic textiles: a platform for pervasive computing", "text": "In our era of converging technologies, it is significant that the second Industrial Revolution should meet the first on common ground: textiles. Modern data processing is, after all, an offshoot of technology first introduced in the 18th century to automate the production of woven textiles. The punched-card system developed by J.-M. Jacquard in 1804 to regulate the weaving of patterned fabrics was later adapted by the English inventor C. Babbage for his proposed analytical engine, the precursor of all binary-based digital computers. Now researchers are delving into a new technology that literally weaves a computing platform into wearable fabrics, creating electronic textiles, or e-textiles, that offer the wearer ready access to information anytime, anywhere, through the medium of an unobtrusive interface. This new field of research draws together an interdisciplinary cadre of specialists in information technology, computing and communications, microtechnology, textiles, and application domains. Their objective is to develop an economical process for the manufacture of large-area, flexible, conformable information systems that can be used in such diverse areas as medicine, sports, space research, firefighting, and law enforcement. In this paper, the authors explore the synergies accruing from wedding the capabilities of textiles with those of computing. In addition to the twin dimensions of functionality and aesthetics, clothing can now be woven to accommodate a third dimension: intelligence. Clothing, the authors point out, meets the user\u2019s requirements for interactivity, connectivity, individuality, and adaptability. It is also probably the most universal of human\u2013computer interfaces: it is familiar; it can be tailored to individual sizes, tastes, and budgets; and it is adaptable to varying climatic conditions. Textiles also provide the ultimate flexibility in systems design by offering a broad range of fibers, yarns, fabrics, and manufacturing techniques. Fabrics also have the spatial capacity needed for hosting the large numbers of sensors and processors demanded for covering broad terrains, such as a battlefield. The size of clothing further enables designers the latitude to build in redundant systems to ensure fault tolerance."}
{"_id": "a5265b79c86c4506eb5aa9394dceab6898ded96a", "title": "Synchromesh Torque Estimation in an Electric Vehicle's Clutchless Automated Manual Transmission Using Unknown Input Observer", "text": "This paper studies the estimation of the frictional torque of the synchromesh during the gear shifting operation in an electric vehicle equipped with a clutchless automated manual transmission (AMT). The clutchless drivetrain of the electric vehicle is discussed and the dynamical model of the powertrain from the electric traction motor to the synchromesh system of a two-speed AMT is developed. In order to estimate the frictional torque of the synchromesh, which is indeed an unknown input to the dynamical system, it is assumed to be generated by a fictitious autonomous system. Thereafter, the augmented state- space representation of the actual and fictitious state variables, which forms the basis for the observer design, is provided and the observability of this augmented system is discussed. A deterministic Luenberger observer and a stochastic Kalman-Bucy filter are designed in order to estimate the frictional torque of the synchromesh. The estimation is based on the measuring angular velocities of the electric motor and synchro ring together with the known electromagnetic torque of the traction motor. The performance of the observers is assessed experimentally by means of a test rig. The results demonstrate the satisfactory performance of the stochastic observer when the system encounters process and measurement noise which are likely to happen in practice."}
{"_id": "ff5285edaf3bf118d0039141e5352246c9f7c5aa", "title": "Automatic Cloud I/O Configurator for I/O Intensive Parallel Applications", "text": "As the cloud platform becomes a promising alternative to traditional HPC (high performance computing) centers or in-house clusters, the I/O bottleneck problem is highlighted in this new environment, typically with top-of-the-line compute instances but sub-par communication and I/O facilities. It has been observed that changing the cloud I/O system configurations, such as choices of file systems, number of I/O servers and their placement strategies, etc., will lead to a considerable variation in the performance and cost efficiency of I/O intensive parallel applications. However, storage system configuration is tedious and error-prone to do manually, even for expert users, leading to solutions that are grossly over-provisioned (low cost inefficiency), substantially under-performing (poor performance) or, in the worst case, both. This paper proposes ACIC, a system which automatically searches for optimized I/O system configurations from many candidates for each individual application running on a given cloud platform. ACIC takes advantage of machine learning models to perform performance/cost predictions. To tackle the high-dimensional parameter exploration space, we enable affordable, reusable, and incremental training on cloud platforms, guided by the Plackett and Burman Matrices for experiment design. Our evaluation results with four representative parallel applications indicate that ACIC consistently identifies optimal or near-optimal configurations among a large group of candidate settings. The top ACIC-recommended configuration is capable of improving the applications' performance by a factor of up to 10.5 (3.1 on average), and cost saving of up to 89 percent (51 percent on average), compared with a commonly used baseline I/O configuration. In addition, we carried out a small-scale user study for one of the test applications, which found that ACIC consistently beat the user and even the application's developer, often by a significant margin, in selecting optimized configurations."}
{"_id": "52a2e27a59a7c989033dc422f914393eef9e9a01", "title": "Evolutionary algorithm in Forex trade strategy generation", "text": "This paper shows an evolutionary algorithm application to generate profitable strategies to trade futures contracts on foreign exchange market (Forex). Strategy model in approach is based on two decision trees, responsible for taking the decisions of opening long or short positions on Euro/US Dollar currency pair. Trees take into consideration only technical analysis indicators, which are connected by logic operators to identify border values of these indicators for taking profitable decision(s). We have tested the efficiency of presented approach on learning and test time-frames of various characteristics."}
{"_id": "651e0d845d42ef8d3d4e0c7b25370e2e35f56318", "title": "Enhanced 3GPP system for Machine Type Communications and Internet of Things", "text": "The Third Generation Partnership Project (3GPP) has been working on developing specifications on Machine Type Communications (MTC), also known as Machine to Machine Communications (M2M), which is all part of Internet of Things (IoT), a technology that enables machines and devices to be inter-connected via the internet. This paper presents recent M2M/IoT feature enhancements in 3GPP. These features include architectural enhancements for inclusion of a service exposure framework, group based IoT management, efficient monitoring mechanisms for IoT devices, optimization of IoT device power consumption, and high latency communications. These M2M/IoT related functionalities and enhancements are a continuing effort within 3GPP to make the mobile network fulfil the ever changing M2M/IoT requirements, and in so doing allow operators/service providers to offer unique services."}
{"_id": "c04c3e9ca649e5709a33b53cf35fd20862297426", "title": "On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators", "text": "This paper shows, by means of an operator called a splitting operator, that the Douglas-Rachford splitting method for finding a zero of the sum of two monotone operators is a special case of the proximal point algorithm, Therefore, applications of Douglas -Rachford splitting, such as the alternating direction method of multipliers for convex programming decomposit ion, are also special cases of the proximal point algorithm. This observation allows the unification and generalization of a variety of convex programming algorithms. By introducing a modified version of the proximal point algorithm, we derive a new, generalized alternating direction method of multipliers for convex programming. Advances of this sort illustrate the power and generality gained by adopting monotone operator theory as a conceptual framework."}
{"_id": "5884bb205781106c7bb970533cf61a6074e2c7bb", "title": "Mikro Bloglardaki Finans Topluluklar\u0131 i\u00e7in Kullan\u0131c\u0131 A\u011f\u0131rl\u0131kland\u0131r\u0131lm\u0131\u015f Duygu Analizi Y\u00f6ntemi", "text": "Nowadays, sentiment analysis is a popular research area in computer science. which aims to determine a person\u2019s or a group\u2019s mood, behaviour and opinion about any topic with using textual documents. With the proliferation of social micro-blogging sites, digital opinion text data is increased. Thus, many sentiment analysis researches are performed on these public data in different sociological fields, such as finance, economy and politics. In this paper, a novel sentiment analysis method is proposed on micro-blogging sites which uses new user metrics. Proposed method is used to measure financial community\u2019s sentiment polarity on micro-blogging sites. In addition to that we analyze the correlation between the mood of financial community and the behavior of the Borsa Istanbul 100 index weekly by Pearson correlation coefficient method. Our test results show that this novel sentiment analysis method improves to the accuracy of linear relationship between the behavior of the stock market and the sentiment polarity of the financial community."}
{"_id": "98104ed5c5b464358adb525ab9d8b4e42be055e8", "title": "Testing predictions from personality neuroscience. Brain structure and the big five.", "text": "We used a new theory of the biological basis of the Big Five personality traits to generate hypotheses about the association of each trait with the volume of different brain regions. Controlling for age, sex, and whole-brain volume, results from structural magnetic resonance imaging of 116 healthy adults supported our hypotheses for four of the five traits: Extraversion, Neuroticism, Agreeableness, and Conscientiousness. Extraversion covaried with volume of medial orbitofrontal cortex, a brain region involved in processing reward information. Neuroticism covaried with volume of brain regions associated with threat, punishment, and negative affect. Agreeableness covaried with volume in regions that process information about the intentions and mental states of other individuals. Conscientiousness covaried with volume in lateral prefrontal cortex, a region involved in planning and the voluntary control of behavior. These findings support our biologically based, explanatory model of the Big Five and demonstrate the potential of personality neuroscience (i.e., the systematic study of individual differences in personality using neuroscience methods) as a discipline."}
{"_id": "515ec15f06ab5ab31324f7e5fb94b47195d6bb71", "title": "Hand Gesture Recognition for Real Time Human Machine Interaction System", "text": "Real Time Human-machine Interaction system using hand gesture Recognition to handle the mouse event , media player , image viewer .Users have to repeat same mouse and keyboard actions, inducing waste of time. Gestures have long been considered as an interaction technique that can potentially deliver more natural. A fast gesture recognition scheme is proposed to be an interface for the human-machine interaction (HMI) of systems. The system presents some lowcomplexity algorithms and gestures to reduce the gesture recognition complexity and be more suitable for controlling real-time computer systems. In this paper we use the webcam for capturing the image. After capturing the image it converts into the binary image. A gesture is a specific combination of hand position."}
{"_id": "9d88a3565d94463eb9f197b92a95d9140c3e3576", "title": "Solution aversion: On the relation between ideology and motivated disbelief.", "text": "There is often a curious distinction between what the scientific community and the general population believe to be true of dire scientific issues, and this skepticism tends to vary markedly across groups. For instance, in the case of climate change, Republicans (conservatives) are especially skeptical of the relevant science, particularly when they are compared with Democrats (liberals). What causes such radical group differences? We suggest, as have previous accounts, that this phenomenon is often motivated. However, the source of this motivation is not necessarily an aversion to the problem, per se, but an aversion to the solutions associated with the problem. This difference in underlying process holds important implications for understanding, predicting, and influencing motivated skepticism. In 4 studies, we tested this solution aversion explanation for why people are often so divided over evidence and why this divide often occurs so saliently across political party lines. Studies 1, 2, and 3-using correlational and experimental methodologies-demonstrated that Republicans' increased skepticism toward environmental sciences may be partly attributable to a conflict between specific ideological values and the most popularly discussed environmental solutions. Study 4 found that, in a different domain (crime), those holding a more liberal ideology (support for gun control) also show skepticism motivated by solution aversion."}
{"_id": "46aac855784e894242488bcfc1f0779503042f8b", "title": "Lava: Hardware Design in Haskell", "text": "Lava is a tool to assist circuit designers in specifying, designing, verifying and implementing hardware. It is a collection of Haskell modules. The system design exploits functional programming language features, such as monads and type classes, to provide multiple interpretations of circuit descriptions. These interpretations implement standard circuit analyses such as simulation, formal verification and the generation of code for the production of real circuits.Lava also uses polymorphism and higher order functions to provide more abstract and general descriptions than are possible in traditional hardware description languages. Two Fast Fourier Transform circuit examples illustrate this."}
{"_id": "69990527b9a463f6e3456dcae67d0810ac9f55d4", "title": "Locality preserving indexing for document representation", "text": "Document representation and indexing is a key problem for document analysis and processing, such as clustering, classification and retrieval. Conventionally, Latent Semantic Indexing (LSI) is considered effective in deriving such an indexing. LSI essentially detects the most representative features for document representation rather than the most discriminative features. Therefore, LSI might not be optimal in discriminating documents with different semantics. In this paper, a novel algorithm called Locality Preserving Indexing (LPI) is proposed for document indexing. Each document is represented by a vector with low dimensionality. In contrast to LSI which discovers the global structure of the document space, LPI discovers the local structure and obtains a compact document representation subspace that best detects the essential semantic structure. We compare the proposed LPI approach with LSI on two standard databases. Experimental results show that LPI provides better representation in the sense of semantic structure."}
{"_id": "35b2154900e2a5009015eed7f23a0eed13157664", "title": "Dear Sir or Madam, May I Introduce the GYAFC Dataset: Corpus, Benchmarks and Metrics for Formality Style Transfer", "text": "Style transfer is the task of automatically transforming a piece of text in one particular style into another. A major barrier to progress in this field has been a lack of training and evaluation datasets, as well as benchmarks and automatic metrics. In this work, we create the largest corpus for a particular stylistic transfer (formality) and show that techniques from the machine translation community can serve as strong baselines for future work. We also discuss challenges of using automatic metrics."}
{"_id": "75f5b8b01ddc45751def412b0f4b9b5c892dae8b", "title": "A Deep Policy Inference Q-Network for Multi-Agent Systems", "text": "We present DPIQN, a deep policy inference Q-network that targets multi-agent systems composed of controllable agents, collaborators, and opponents that interact with each other. We focus on one challenging issue in such systems\u2014modeling agents with varying strategies\u2014and propose to employ \u201cpolicy features\u201d learned from raw observations (e.g., raw images) of collaborators and opponents by inferring their policies. DPIQN incorporates the learned policy features as a hidden vector into its own deep Q-network (DQN), such that it is able to predict better Q values for the controllable agents than the state-of-the-art deep reinforcement learningmodels. We further propose an enhanced version of DPIQN, called deep recurrent policy inference Q-network (DRPIQN), for handling partial observability. Both DPIQN and DRPIQN are trained by an adaptive training procedure, which adjusts the network\u2019s attention to learn the policy features and its own Q-values at different phases of the training process. We present a comprehensive analysis of DPIQN and DRPIQN, and highlight their effectiveness and generalizability in various multi-agent settings. Our models are evaluated in a classic soccer game involving both competitive and collaborative scenarios. Experimental results performed on 1 vs. 1 and 2 vs. 2 games show that DPIQN and DRPIQN demonstrate superior performance to the baseline DQN and deep recurrent Q-network (DRQN) models. We also explore scenarios in which collaborators or opponents dynamically change their policies, and show that DPIQN and DRPIQN do lead to better overall performance in terms of stability and mean scores."}
{"_id": "774e560a2cadcb84f4b1def7b152e5398b062efb", "title": "Scalable Modified Kneser-Ney Language Model Estimation", "text": "We present an efficient algorithm to estimate large modified Kneser-Ney models including interpolation. Streaming and sorting enables the algorithm to scale to much larger models by using a fixed amount of RAM and variable amount of disk. Using one machine with 140 GB RAM for 2.8 days, we built an unpruned model on 126 billion tokens. Machine translation experiments with this model show improvement of 0.8 BLEU point over constrained systems for the 2013 Workshop on Machine Translation task in three language pairs. Our algorithm is also faster for small models: we estimated a model on 302 million tokens using 7.7% of the RAM and 14.0% of the wall time taken by SRILM. The code is open source as part of KenLM."}
{"_id": "255bdac58c73b6e9e09b77348dd15eda4b9f645e", "title": "A dual-band bagley power divider using modified \u03a0-network", "text": "A new methodology to design a dual-band 3-way Bagley power divider is presented. The general method of using \u03c0-network as well as the proposed modified \u03c0-network has been discussed. The proposed \u03c0-network mimics a \u03bb/2 line at two different frequencies. Design equations in closed form are obtained by using transmission line concepts and simple network analysis techniques. The proposed design is validated using Agilent ADS. This modification also leads to wider bandwidth as observed by a careful simulation."}
{"_id": "7e6429291b65b4984a461350f7a07a3af1af7029", "title": "Self-organizing semantic maps", "text": "Self-organized formation of topographic maps for abstract data, such as words, is demonstrated in this work. The semantic relationships in the data are reflected by their relative distances in the map. Two different simulations, both based on a neural network model that implements the algorithm of the selforganizing feature maps, are given. For both, an essential, new ingredient is the inclusion of the contexts, in which each symbol appears, into the input data. This enables the network to detect the \u201clogical similarity\u201d between words from the statistics of their contexts. In the first demonstration, the context simply consists of a set of attribute values that occur in conjunction with the words. In the second demonstration, the context is defined by the sequences in which the words occur, without consideration of any associated attributes. Simple verbal statements consisting of nouns, verbs, and adverbs have been analyzed in this way. Such phrases or clauses involve some of the abstractions that appear in thinking, namely, the most common categories, into which the words are then automatically grouped in both of our simulations. We also argue that a similar process may be at work in the brain."}
{"_id": "2ea8d4cfb92d6354caa76a7070a3a5e053e1b066", "title": "Multi-View Unsupervised Feature Selection with Adaptive Similarity and View Weight", "text": "With the advent of multi-view data, multi-view learning has become an important research direction in both machine learning and data mining. Considering the difficulty of obtaining labeled data in many real applications, we focus on the multi-view unsupervised feature selection problem. Traditional approaches all characterize the similarity by fixed and pre-defined graph Laplacian in each view separately and ignore the underlying common structures across different views. In this paper, we propose an algorithm named Multi-view Unsupervised Feature Selection with Adaptive Similarity and View Weight (ASVW) to overcome the above mentioned problems. Specifically, by leveraging the learning mechanism to characterize the common structures adaptively, we formulate the objective function by a common graph Laplacian across different views, together with the sparse $\\ell _{2,p}$ -norm constraint designed for feature selection. We develop an efficient algorithm to address the non-smooth minimization problem and prove that the algorithm will converge. To validate the effectiveness of ASVW, comparisons are made with some benchmark methods on real-world datasets. We also evaluate our method in the real sports action recognition task. The experimental results demonstrate the effectiveness of our proposed algorithm."}
{"_id": "2c648da4d3b0fb7f175111ddad58ba912b37d9c6", "title": "Evolution of Trajectories: A Novel Representation for Deep Action Recognition", "text": "Achieving high classification accuracy remains a challenge for human action recognition approaches that are based on convolutional neural networks (CNN). CNN-based action recognition methods on resource-constrained edge systems are currently unable to train on whole videos due to infeasible computational and memory requirements. On the other hand, approaches that utilize video-level supervision with sparse-sampling designs run the risk of learning local features shared with multiple similar classes. Additionally, features captured by motion estimation algorithms for temporal stream CNN's are already reduced in dimension, increasing the possibility of label mismatch in methods that rely on short-term features. To address the aforementioned points, we design a novel temporal representation to capture a 1) long-term interval of motion and 2) integrate the trajectory of motion captured therein. We compare our approach with current motion representations and demonstrate its efficacy for examples containing local features with high inter-class similarity. We implement our representation as part of two and three-stream CNN's and conduct experiments on one of the popular and most difficult action recognition datasets: HMDB51. Our results show that, for methods employing sparse-sampling designs, our approach surpasses the current state-of-the-art approaches achieving 71.76% on HMDB51."}
{"_id": "e3d8c6cf9109b262d4c5275f2254e2427ee48a88", "title": "BranchScope: A New Side-Channel Attack on Directional Branch Predictor", "text": "We present BranchScope - a new side-channel attack where the attacker infers the direction of an arbitrary conditional branch instruction in a victim program by manipulating the shared directional branch predictor. The directional component of the branch predictor stores the prediction on a given branch (taken or not-taken) and is a different component from the branch target buffer (BTB) attacked by previous work. BranchScope is the first fine-grained attack on the directional branch predictor, expanding our understanding of the side channel vulnerability of the branch prediction unit. Our attack targets complex hybrid branch predictors with unknown organization. We demonstrate how an attacker can force these predictors to switch to a simple 1-level mode to simplify the direction recovery. We carry out BranchScope on several recent Intel CPUs and also demonstrate the attack against an SGX enclave."}
{"_id": "b2585a34326e46a0af4d33e7a2062b930c6c5b6b", "title": "How GaN Power Transistors Drive High-Performance Lidar: Generating ultrafast pulsed power with GaN FETs", "text": "Light detection and ranging (lidar) is a versatile light-based remote sensing technology that recently has been the subject of great attention. It has shown up in a number of media venues and has even led to public debate about the engineering choices of a well-known electric car company, Tesla Motors [1]. While this article is not going to enter the fray, it will provide some background on lidar and discuss its strong connection to power electronics technologies."}
{"_id": "aa8d9fa4ee083eb927e42485c071942109c12f09", "title": "Color Transform Based Approach for Disease Spot Detection on Plant Leaf", "text": "In this research, an algorithm for disease spot segmentation using image processing techniques in plant leaf is implemented. This is the first and important phase for automatic detection and classification of plant diseases. Disease spots are different in color but not in intensity, in comparison with plant leaf color. So we color transform of RGB image can be used for better segmentation of disease spots. In this paper a comparison of the effect of CIELAB, HSI and YCbCr color space in the process of disease spot detection is done. Median filter is used for image smoothing. Finally threshold can be calculated by applying Otsu method on color component to detect the disease spot. An algorithm which is independent of background noise, plant type and disease spot color was developed and experiments were carried out on different \u201cMonocot\u201d and \u201cDicot\u201d family plant leaves with both, noise free (white) and noisy background."}
{"_id": "b8737d6ec1b033f6185e1da0f40a14fa44808d3f", "title": "Review of control algorithms for robotic ankle systems in lower-limb orthoses, prostheses, and exoskeletons.", "text": "This review focuses on control strategies for robotic ankle systems in active and semiactive lower-limb orthoses, prostheses, and exoskeletons. Special attention is paid to algorithms for gait phase identification, adaptation to different walking conditions, and motion intention recognition. The relevant aspects of hardware configuration and hardware-level controllers are discussed as well. Control algorithms proposed for other actuated lower-limb joints (knee and/or hip), with potential applicability to the development of ankle devices, are also included."}
{"_id": "d6c899b3cfc70d1d31f2d2cdf696ff7567ff0e02", "title": "Exoskeleton robot for rehabilitation of elbow and forearm movements", "text": "To perform essential daily activities the movement of shoulder, elbow, and wrist play a vital role and therefore proper functioning of upper-limb is very much essential. We therefore have been developing an exoskeleton robot (ExoRob) to rehabilitate and to ease upper limb motion. Toward to make a complete (i.e., 7DOF) upper-arm motion assisted robotic exoskeleton this paper focused on the development of a 2DOF exoskeleton robot to rehabilitate the elbow and forearm movements. The proposed 2DOF ExoRob is supposed to be worn on the lateral side of forearm and provide naturalistic range movements of elbow (flexion/extension) and forearm (pronation/supination) motions. This paper also focuses on the modeling and control of the proposed ExoRob. A kinematic model of the ExoRob has been developed based on modified Denavit-Hartenberg notations. Nonlinear sliding mode control technique is employed in dynamic simulation of the proposed ExoRob, where trajectory tracking that corresponds to typical rehab (passive) exercises has been carried out to evaluate the effectiveness of the developed model and controller. Simulated results show that the controller is able to maneuver the ExoRob efficiently to track the desired trajectories, which in this case consisted in passive arm movements. These movements are widely used in rehab therapy and could be performed efficiently with the developed ExoRob and the controller."}
{"_id": "6882dcb241f5aaefe85025bf754f8dd1c1502df1", "title": "Robot-aided neurorehabilitation.", "text": "Our goal is to apply robotics and automation technology to assist, enhance, quantify, and document neurorehabilitation. This paper reviews a clinical trial involving 20 stroke patients with a prototype robot-aided rehabilitation facility developed at the Massachusetts Institute of Technology, Cambridge, (MIT) and tested at Burke Rehabilitation Hospital, White Plains, NY. It also presents our approach to analyze kinematic data collected in the robot-aided assessment procedure. In particular, we present evidence 1) that robot-aided therapy does not have adverse effects, 2) that patients tolerate the procedure, and 3) that peripheral manipulation of the impaired limb may influence brain recovery. These results are based on standard clinical assessment procedures. We also present one approach using kinematic data in a robot-aided assessment procedure."}
{"_id": "03dd188eb4a34f2723d8e23a5a29ff86894c78f7", "title": "An Alternative Way to Analyze Workflow Graphs", "text": "At the CAiSE conference in Heidelberg in 1999, Wasim Sadiq and Maria Orlowska presented an algorithm to verify workflow graphs [19]. The algorithm uses a set of reduction rules to detect structural conflicts. This paper shows that the set of reduction rules presented in [19] is not complete and proposes an alternative algorithm. The algorithm translates workflow graphs into so-called WF-nets. WF-nets are a class of Petri nets tailored towards workflow analysis. As a result, Petri-net theory and tools can be used to verify workflow graphs. In particular, our workflow verification tool Woflan [21] can be used to detect design errors. It is shown that the absence of structural conflicts, i.e., deadlocks and lack of synchronization, conforms to soundness of the corresponding WF-net [2]. In contrast to the algorithm presented in [19], the algorithm presented in this paper is complete. Moreover, the complexity of this alternative algorithm is given."}
{"_id": "69e70b8101da9f794c33ee35740344461f262e8e", "title": "Smart-Contract Based System Operations for Permissioned Blockchain", "text": "Enterprises have paid attention to blockchain (BC), recently permissioned BC characterized with smart-contract, where busi-ness transactions among inter-authorized companies (forming consortium) can automatically be executed based on distributed consensus protocol over user-defined business logics pre-built with program codes. A single BC system will be built across mul-tiple management domains having different operational policies, e.g., datacenter of each organization; this will trigger a problem that its system operations (e.g., backup) will become time-consuming and costly due to the difficulty in unifying and/or adjusting operational policy, schedule, etc. Toward solving the problem, we propose an operations execution method for BC systems; a primary idea is to define operations as smart-contract so that unified and synchronized cross-organizational operations can be executed effectively by using BC-native features. We de-sign the proposed method as hybrid architecture including in-BC consensus establishment and out-BC event-based instruction execution, in order to be adaptable to the recent heterogeneous BC architecture. Performance evaluation using a prototype with Hyperledger Fabric v1.0 shows that the proposed method can start executing operations within 5 seconds. Furthermore, cost evaluation using model-based estimation shows that the total yearly cost of monthly operations on a 5-organizational BC sys-tem could be reduced by 61 percent compared to a conventional manual method."}
{"_id": "3ee01ec27e4e66e089b72a9989724be611c2ad90", "title": "Neural Map: Structured Memory for Deep Reinforcement Learning", "text": "A critical component to enabling intelligent reasoning in partially observable environments is memory. Despite this importance, Deep Reinforcement Learning (DRL) agents have so far used relatively simple memory architectures, with the main methods to overcome partial observability being either a temporal convolution over the past k frames or an LSTM layer. More recent work (Oh et al., 2016) has went beyond these architectures by using memory networks which can allow more sophisticated addressing schemes over the past k frames. But even these architectures are unsatisfactory due to the reason that they are limited to only remembering information from the last k frames. In this paper, we develop a memory system with an adaptable write operator that is customized to the sorts of 3D environments that DRL agents typically interact with. This architecture, called the Neural Map, uses a spatially structured 2D memory image to learn to store arbitrary information about the environment over long time lags. We demonstrate empirically that the Neural Map surpasses previous DRL memories on a set of challenging 2D and 3D maze environments and show that it is capable of generalizing to environments that were not seen during training."}
{"_id": "106804244aeca715094e12266e3233adca5b78af", "title": "A portable powered ankle-foot orthosis for rehabilitation.", "text": "Innovative technological advancements in the field of orthotics, such as portable powered orthotic systems, could create new treatment modalities to improve the functional out come of rehabilitation. In this article, we present a novel portable powered ankle-foot orthosis (PPAFO) to provide untethered assistance during gait. The PPAFO provides both plantar flexor and dorsiflexor torque assistance by way of a bidirectional pneumatic rotary actuator. The system uses a portable pneumatic power source (compressed carbon dioxide bottle) and embedded electronics to control the actuation of the foot. We collected pilot experimental data from one impaired and three nondisabled subjects to demonstrate design functionality. The impaired subject had bilateral impairment of the lower legs due to cauda equina syndrome. We found that data from nondisabled walkers demonstrated the PPAFO's capability to provide correctly timed plantar flexor and dorsiflexor assistance during gait. Reduced activation of the tibialis anterior during stance and swing was also seen during assisted nondisabled walking trials. An increase in the vertical ground reaction force during the second half of stance was present during assisted trials for the impaired subject. Data from nondisabled walkers demonstrated functionality, and data from an impaired walker demonstrated the ability to provide functional plantar flexor assistance."}
{"_id": "9a52172fb96a7ad91969a947587cf3db9b8180bf", "title": "Learning Semantic Composition to Detect Non-compositionality of Multiword Expressions", "text": "Non-compositionality of multiword expressions is an intriguing problem that can be the source of error in a variety of NLP tasks such as language generation, machine translation and word sense disambiguation. We present methods of non-compositionality detection for English noun compounds using the unsupervised learning of a semantic composition function. Compounds which are not well modeled by the learned semantic composition function are considered noncompositional. We explore a range of distributional vector-space models for semantic composition, empirically evaluate these models, and propose additional methods which improve results further. We show that a complex function such as polynomial projection can learn semantic composition and identify non-compositionality in an unsupervised way, beating all other baselines ranging from simple to complex. We show that enforcing sparsity is a useful regularizer in learning complex composition functions. We show further improvements by training a decomposition function in addition to the composition function. Finally, we propose an EM algorithm over latent compositionality annotations that also improves the performance."}
{"_id": "f5d6a1f6581b097e06620063e364f0c538956c3c", "title": "Usability Problem Description and the Evaluator Effect in Usability Testing", "text": "Previous usability evaluation method (UEM) comparison studies have noted an evaluator effect on problem detection in heuristic evaluation, with evaluators differing in problems found and problem severity judgments. There have been few studies of the evaluator effect in usability testing (UT), task-based testing with end-use rs. UEM comparison studies focus on counting usability problems detected, but we also need to assess the content of usability problem descriptions (UPDs) to more fully mea sure evaluation effectiveness. The goals of this research were to develop UPD gui delines, explore the evaluator effect in UT, and evaluate the usefulness of the guidelines for grading UPD content. Ten guidelines for writing UPDs were developed by consulting usability practitioners through two questionnaires and a card sort. These guidelines are ( briefly): be clear and avoid jargon, describe problem severity, provide backing data, describe problem causes, describe user actions, provide a solution, consider politics and diplomacy, be professional and scientific, describe your methodology, and help the re ader sympathize with the user. A fourth study compared usability reports collect ed from 44 evaluators, both practitioners and graduate students, watching the same 10-minute UT session recording. Three judges measured problem detection for each evaluator a nd graded the reports for following 6 of the UPD guidelines. There was support for existence of an evaluator effect, even when watching prerecorded sessions, with low to moderate individual thoroughness of problem detection across all/severe problems (22%/34%), reliability of problem detection (37%/50%) and reliability of severity judgments (57% for severe ratings). Practiti oners received higher grades averaged across the 6 guidelines than students did, suggesting that the guide lin s may be useful for grading reports. The grades for the guidelines were not cor related with thoroughness, suggesting that the guideline grades complement measures of problem detection. A simulation of evaluators working in groups found a 34% increase in severe problems found by adding a second evaluator. The simulation also found that thoroughness of individual evaluators would have been overestimated if the study had included a small number of evaluators. The final recommendations are to use multipl e evaluators in UT, and to assess both problem detection and description when measuring evaluation effectiveness. iii ACKNOWLEDGEMENTS I would like to thank my advisory committee, Tonya Smith-Jackson, John Burton, Rex Hartson, Brian Kleiner, and Maury Nussbaum. In particular, Tonya Smith -Jackson provided suggestions for experimental design and statistical analysis, and Rex Hartson and Tonya Smith-Jackson provided suggestions for the judging procedure used to analyze the usability problems collected from evaluators. I also owe a special thanks to Laurian Hobby, John Howarth, and Pardha Pyla, each of whom generously donated over 50 hours of their time to read and evaluate hundreds of usability problems. The final study would not have been possible without them. Thanks also to my thesis advisor, Bob Williges, who guided me when I began studying usability methods. 
This dissertation would not have been possible without the support of my husband, Rob Capra, who shared the journey with me as we both completed our dissertations within a month of each other. I might not have discovered Human Factors or pursued a PhD without him. He is my partner in research, and many important ideas gre w out of discussions of our research at school, at home, and over meals. He is my partner in life, and I love him with all my heart. Many other people contributed to this research. Terence Andre shared the usability movies and reports from studies run by him and his students. Rob Capra assi sted with data coding in the first and fourth studies. Suzanne Aref suggested using factor analysis to cluster the items in the card sort. Joe Dumas shared his experience s analyzing the CUE-4 reports, provided criteria for identifying descriptions that discuss t he same usability problem, and helped recruit practitioners. Rolf Molich gave permiss ion to use the report template and severity rating scales from the CUE studies. The co mparison of usability diagnosis to medical diagnosis was refined through discussions with Rex Hartson and Steve Belz. Thanks to the dozens of usability practitioners and graduate students who volunteered to participate in my studies or provided feedback as pilot participants, especially the practitioners that took extra time above and beyond the study requirem nts to discuss their usability practices and reporting habits. Thanks to my famil and friends for years of encouragement and patience, and fellow students for support and commiseration along the way. Thanks to my former colleagues at what is now AT &T Labs for introducing me to Human Factors and sparking my interest in the field. Thanks to IMDb, Inc. for permission to include screen shots of their website in this document. Information courtesy of: The Internet Movie Database (http://www.imdb.com/ ). Used with permission. Portions of this research were conducted while I was supported by the Alexander E. Walter Fellowship from the Grado Department of Industrial and Systems Eng ineering (2001-2004). iv TABLE OF CONTENTS ABSTRACT . ................................................................................................................. ii ACKNOWLEDGEMENTS............................................................................................. iii TABLE OF CONTENTS................................................................................................. iv LIST OF TABLES .........................................................................................................viii LIST OF FIGURES .......................................................................................................... x LIST OF EQUATIONS ..................................................................................................xii CHAPTER"}
{"_id": "ecc7607622c202f5a4e83a9a233930d1ba2f6648", "title": "Driver fatigue detection based on saccadic eye movements", "text": "The correct determination of driver's level of fatigue has been of vital importance for the safety of driving. There are various methods, such as analyzing facial expression, eyelid activity, and head movements to assess the fatigue level of drivers. This paper describes the design and prototype implementation of a driver fatigue level determination system based on detection of saccadic eye movements. Driver's eye movement speed is used to assess driver's fatigue level. The information about eyes is obtained via infrared led camera device. Movements of pupils were recorded in two driving scenarios with different traffic density. In the first scenario, the traffic density was set to low while the second scenario was based on high density and aggressive traffic. Based on the movements of pupils, the data on saccadic eye movement was analyzed to determine fatigue level of the driver. Acceleration, speed, and size of pupils at both traffic scenarios were compared with data mining techniques, such as segmentation adaptive peak, entropy, and data distribution analyses. Significantly different levels of fatigue were found between the tired and vigorous driver for the different types of scenarios."}
{"_id": "21db1334a75c6e12979d16de2e996c01e95006f5", "title": "Emotion, plasticity, context, and regulation: perspectives from affective neuroscience.", "text": "The authors present an overview of the neural bases of emotion. They underscore the role of the prefrontal cortex (PFC) and amygdala in 2 broad approach- and withdrawal-related emotion systems. Components and measures of affective style are identified. Emphasis is given to affective chronometry and a role for the PFC in this process is proposed. Plasticity in the central circuitry of emotion is considered, and implications of data showing experience-induced changes in the hippocampus for understanding psychopathology and stress-related symptoms are discussed. Two key forms of affective plasticity are described--context and regulation. A role for the hippocampus in context-dependent normal and dysfunctional emotional responding is proposed. Finally, implications of these data for understanding the impact on neural circuitry of interventions to promote positive affect and on mechanisms that govern health and disease are considered."}
{"_id": "7849a9929595b29caa1503d8acaa9c475b43fae2", "title": "On Analyzing User Topic-Specific Platform Preferences Across Multiple Social Media Sites", "text": "Topic modeling has traditionally been studied for single text collections and applied to social media data represented in the form of text documents. With the emergence of many social media platforms, users find themselves using different social media for posting content and for social interaction. While many topics may be shared across social media platforms, users typically show preferences of certain social media platform(s) over others for certain topics. Such platform preferences may even be found at the individual level. To model social media topics as well as platform preferences of users, we propose a new topic model known as MultiPlatform-LDA (MultiLDA). Instead of just merging all posts from different social media platforms into a single text collection, MultiLDA keeps one text collection for each social media platform but allowing these platforms to share a common set of topics. MultiLDA further learns the user-specific platform preferences for each topic. We evaluate MultiLDA against TwitterLDA, the state-of-the-art method for social media content modeling, on two aspects: (i) the effectiveness in modeling topics across social media platforms, and (ii) the ability to predict platform choices for each post. We conduct experiments on three real-world datasets from Twitter, Instagram and Tumblr sharing a set of common users. Our experiments results show that the MultiLDA outperforms in both topic modeling and platform choice prediction tasks. We also show empirically that among the three social media platforms, \u201cDaily matters\u201d and \u201cRelationship matters\u201d are dominant topics in Twitter, \u201cSocial gathering\u201d, \u201cOuting\u201d and \u201cFashion\u201d are dominant topics in Instagram, and \u201cMusic\u201d, \u201cEntertainment\u201d and \u201cFashion\u201d are dominant topics in Tumblr."}
{"_id": "1a89144b9d5518e3a5efca27087d8796476fbc61", "title": "A3: a coding guideline for HCI+autism research using video annotation", "text": "Due to the profile of strengths and weaknesses indicative of autism spectrum disorders (ASD), technology may play a key role in ameliorating communication difficulties with this population. This paper documents coding guidelines established through cross-disciplinary work focused on facilitating communication development in children with ASD using computerized feedback. The guidelines, referred to as A3 (pronounced A-Cubed) or Annotation for ASD Analysis, define and operationalize a set of dependent variables coded via video annotation. Inter-rater reliability data are also presented from a study currently in-progress, as well as related discussion to help guide future work in this area. The design of the A3 methodology is well-suited for the examination and evaluation of the behavior of low-functioning subjects with ASD who interact with technology."}
{"_id": "76ea56d814249d0636670687e335c495476572f8", "title": "River flow time series prediction with a range-dependent neural network", "text": "Artificial neural networks provide a promising alternative to hydrological time series modelling. However, there are still many fundamental problems requiring further analyses, such as structure identification, parameter estimation, generalization, performance improvement, etc. Based on a proposed clustering algorithm for the training pairs, a new neural network, namely the range-dependent neural network (RDNN) has been developed for better accuracy in hydrological time series prediction. The applicability and potentials of the RDNN in daily streamflow and annual reservoir inflow prediction are examined using data from two watersheds in China. Empirical comparisons of the predictive accuracy, in terms of the model efficiency R and absolute relative errors (ARE), between the RDNN, backpropagation (BP) networks and the threshold auto-regressive (TAR) model are made. The case studies demonstrated that the RDNN network performed significantly better than the BP network, especially for reproducing low-flow events."}
{"_id": "5fc795b50caef136178ca76a889eb51baee0ccea", "title": "Short circuit III in high power IGBTs", "text": "Short circuit III is the occurrence of a short circuit across the load during the conducting mode of the freewheeling diode. This has the same importance for the application of high power IGBTs as the known short circuit II. Different to short circuit II, the IGBT of the same module is now turned-on starting from very low voltage across its terminals and showing a forward recovery voltage before saturating. After that a dynamic short circuit peak current occurs similar to the one seen in SC II. During the short circuit, a reverse recovery process occurs at the freewheeling diode with a very high voltage slope."}
{"_id": "525caa4f62e14d5579dca7b61604fb8ae6d3a340", "title": "Cancer phenotype as the outcome of an evolutionary game between normal and malignant cells", "text": "Background:There is variability in the cancer phenotype across individuals: two patients with the same tumour may experience different disease life histories, resulting from genetic variation within the tumour and from the interaction between tumour and host. Until now, phenotypic variability has precluded a clear-cut identification of the fundamental characteristics of a given tumour type.Methods:Using multiple myeloma as an example, we apply the principles of evolutionary game theory to determine the fundamental characteristics that define the phenotypic variability of a tumour.Results:Tumour dynamics is determined by the frequency-dependent fitness of different cell populations, resulting from the benefits and costs accrued by each cell type in the presence of others. Our study shows how the phenotypic variability in multiple myeloma bone disease can be understood through the theoretical approach of a game that allows the identification of key genotypic features in a tumour and provides a natural explanation for phenotypic variability. This analysis also illustrates how complex biochemical signals can be translated into cell fitness that determines disease dynamics.Conclusion:The present paradigm is general and extends well beyond multiple myeloma, and even to non-neoplastic disorders. Furthermore, it provides a new perspective in dealing with cancer eradication. Instead of trying to kill all cancer cells, therapies should aim at reducing the fitness of malignant cells compared with normal cells, allowing natural selection to eradicate the tumour."}
{"_id": "8ef2a5e3dffb0a155a14575c8333b175b61e0675", "title": "Machine Learning Applied to Cyber Operations", "text": ""}
{"_id": "1f000f4346d9fdf0b801dff81fe4873c18fcab4d", "title": "Demanding Customers : Consumerist Patients and Quality of Care", "text": "Consumerism arises when patients acquire and use medical information from sources other than their physicians. This practice has been hailed as a means of improving quality. This need not be the result. Our theoretical model identifies a channel through which consumerism may reduce quality: consumerist patients place additional demands on their doctors\u2019 time, thus imposing a negative externality on other patients. Relative to a world in which consumerism does not exist, consumerism may harm other consumerists, non-consumerists, or both. Data from a large national survey of physicians confirm the negative effects of consumerism: high levels of consumerist patients are associated with lower perceived quality among physicians."}
{"_id": "7b1cb08db30b4b223ac5601d1ca5baa23c6e9904", "title": "Sequential Optimization and Reliability Assessment Method for Efficient Probabilistic Design", "text": "Probabilistic design, such as reliability-based design and robust design, offers tool making reliable decisions with the consideration of uncertainty associated with de variables/parameters and simulation models. Since a probabilistic optimization ofte volves a double-loop procedure for the overall optimization and iterative probabili assessment, the computational demand is extremely high. In this paper, the seq optimization and reliability assessment (SORA) is developed to improve the efficien probabilistic optimization. The SORA method employs a single-loop strategy with a s of cycles of deterministic optimization and reliability assessment. In each cycle, op zation and reliability assessment are decoupled from each other; the reliability asses is only conducted after the deterministic optimization to verify constraint feasibility un uncertainty. The key to the proposed method is to shift the boundaries of violated straints (with low reliability) to the feasible direction based on the reliability informati obtained in the previous cycle. The design is quickly improved from cycle to cycle an computational efficiency is improved significantly. Two engineering applications, reliability-based design for vehicle crashworthiness of side impact and the integr reliability and robust design of a speed reducer, are presented to demonstrate the tiveness of the SORA method. @DOI: 10.1115/1.1649968 #"}
{"_id": "8c43cf593531013ebb819f3fb9a4c45453a23657", "title": "LIPN at SemEval-2017 Task 10: Filtering Candidate Keyphrases from Scientific Publications with Part-of-Speech Tag Sequences to Train a Sequence Labeling Model", "text": "This paper describes the system used by the team LIPN in SemEval 2017 Task 10: Extracting Keyphrases and Relations from Scientific Publications. The team participated in Scenario 1, that includes three subtasks, Identification of keyphrases (Subtask A), Classification of identified keyphrases (Subtask B) and Extraction of relationships between two identified keyphrases (Subtask C). The presented system was mainly focused on the use of part-of-speech tag sequences to filter candidate keyphrases for Subtask A. Subtasks A and B were addressed as a sequence labeling problem using Conditional Random Fields (CRFs) and even though Subtask C was out of the scope of this approach, one rule was included to identify synonyms."}
{"_id": "8d2dd62b1784794e545d44332a5cb66649af0eca", "title": "Network densification: the dominant theme for wireless evolution into 5G", "text": "This article explores network densification as the key mechanism for wireless evolution over the next decade. Network densification includes densification over space (e.g, dense deployment of small cells) and frequency (utilizing larger portions of radio spectrum in diverse bands). Large-scale cost-effective spatial densification is facilitated by self-organizing networks and intercell interference management. Full benefits of network densification can be realized only if it is complemented by backhaul densification, and advanced receivers capable of interference cancellation."}
{"_id": "4cc1b2abb3be1286389800eadbbf1490a2c66fc1", "title": "Ripple-Based Control of Switching Regulators\u2014An Overview", "text": "Switching regulators with ripple-based control (i.e., \u00bfripple regulators\u00bf) are conceptually simple, have fast transient responses to both line and load perturbations, and some versions operate with a switching frequency that is proportional to the load current under the discontinuous conduction mode. These characteristics make the ripple regulators well-suited, especially for power management applications in computers and portable electronic devices. Ripple regulators also have some drawbacks, including (in some versions) a poorly defined switching frequency, noise-induced jitter, inadequate dc regulation, and a tendency for fast-scale instability. This paper presents an overview of the various ripple-based control techniques, discusses their merits and limitations, and introduces techniques for reducing the noise sensitivity and the sensitivity to capacitor parameters, improving the frequency stability and the dc regulation, and avoiding fast-scale instability."}
{"_id": "192ed3a93e493c7d7b228ee1bc22d23513cffe35", "title": "Maximum likelihood analysis of conflicting observations in social sensing", "text": "This article addresses the challenge of truth discovery from noisy social sensing data. The work is motivated by the emergence of social sensing as a data collection paradigm of growing interest, where humans perform sensory data collection tasks. Unlike the case with well-calibrated and well-tested infrastructure sensors, humans are less reliable, and the likelihood that participants' measurements are correct is often unknown a priori. Given a set of human participants of unknown trustworthiness together with their sensory measurements, we pose the question of whether one can use this information alone to determine, in an analytically founded manner, the probability that a given measurement is true. In our previous conference paper, we offered the first maximum likelihood solution to the aforesaid truth discovery problem for corroborating observations only. In contrast, this article extends the conference paper and provides the first maximum likelihood solution to handle the cases where measurements from different participants may be conflicting. The article focuses on binary measurements. The approach is shown to outperform our previous work used for corroborating observations, the state-of-the-art fact-finding baselines, as well as simple heuristics such as majority voting."}
{"_id": "4515d542d825ecc543c270cddfac7821ec340134", "title": "GENETIC ALGORITHM APPLIED TO OPTIMIZATION OF THE SHIP HULL FORM WITH RESPECT TO SEAKEEPING PERFORMANCE", "text": "Hull form optimization from a hydrodynamic performance point of view is an important aspect in preliminary ship design. This study presents a computational method to estimate the ship seakeeping in regular head waves. In the optimization process, the genetic algorithm (GA) is linked to the computational method to obtain an optimum hull form by taking into account the displacement as a design constraint. New hull forms are obtained from the wellknown S60 hull and the classical Wigley hull taken as initial hulls in the optimization process at two Froude numbers (Fn=0.2 and Fn=0.3). The optimization variables are a combination of ship hull offsets and main dimensions. The objective function of the optimization procedure includes the peak values for vertical absolute motion at the centre of gravity (CG) and the bow point (0.15Lwl) behind the forward perpendicular (FP)."}
{"_id": "f3d84a0b5a4c5de47d7da9ec9514e3f9931bb6f6", "title": "User experiences with web-based 3D virtual travel destination marketing portals: the need for visual indication of interactive 3D elements", "text": "The tourism sector has found virtual reality technology to be a good way to market travel destinations for consumers. In this paper, we describe two user studies with three web-based 3D virtual travel destination marketing portals. These three portals were developed to support and attract wintertime tourism into the region by offering a possibility to experience in advance, a virtual snowy scenery with different activities, for example downhill skiing. In both user studies with 21 subjects the focus was on user experience with the 3D virtual travel destination marketing portals. In the second study also the visual design aspects within these portals were studied. Our studies indicate that 3D virtual travel destination marketing portals can enhance 2D web pages, if they offer the possibility to explore the location freely and through different kinds of virtual activities. Also our studies support prior findings of the efficiency of glow effect for indicating interactive 3D elements within a 3D virtual environment."}
{"_id": "adeef1a00a7dcee03a9de566e6c53ab134becdd9", "title": "Frontomaxillary facial angle in trisomy 21 fetuses at 16-24 weeks of gestation.", "text": "OBJECTIVES\nTo establish a normal range for the frontomaxillary facial (FMF) angle by three-dimensional (3D) ultrasound imaging and to examine the FMF angle in trisomy 21 fetuses at 16-24 weeks of gestation.\n\n\nMETHODS\nWe measured the FMF angle using 3D volumes of the fetal profile obtained with the transducer parallel to the long axis of the nose and at 45 degrees to the palate, which had been acquired from 150 normal fetuses and 23 fetuses with trisomy 21.\n\n\nRESULTS\nIn the normal group there was no significant association between the FMF angle and gestational age; the mean FMF angle was 83.9 degrees (range, 76.9-90.2 degrees ) and the 95(th) centile was 88.5 degrees . In 15 (65.2%) of the fetuses with trisomy 21 the FMF angle was greater than 88.5 degrees . Repeatability studies demonstrated that in 95% of cases the difference between two measurements of FMF angle by the same operator and different operators was less than 5 degrees .\n\n\nCONCLUSIONS\nIn the majority of second-trimester fetuses with trisomy 21 the FMF angle is increased."}
{"_id": "0bfa23178121f8fed03e6fc67608da74c8c4d9b1", "title": "Multi-Product Utility Maximization for Economic Recommendation", "text": "Basic economic relations such as substitutability and complementarity between products are crucial for recommendation tasks, since the utility of one product may depend on whether or not other products are purchased. For example, the utility of a camera lens could be high if the user possesses the right camera (complementarity), while the utility of another camera could be low because the user has already purchased one (substitutability). We propose \\emph{multi-product utility maximization} (MPUM) as a general approach to recommendation driven by economic principles. MPUM integrates the economic theory of consumer choice with personalized recommendation, and focuses on the utility of \\textit{sets} of products for individual users. MPUM considers what the users already have when recommending additional products. We evaluate MPUM against several popular recommendation algorithms on two real-world E-commerce datasets. Results confirm the underlying economic intuition, and show that MPUM significantly outperforms the comparison algorithms under top-K evaluation metrics."}
{"_id": "8523f37425199d72e79467104022277e0175b381", "title": "Towards IP over LPWANs technologies: LoRaWAN, DASH7, NB-IoT", "text": "In this paper, we discuss a set of solutions that are proposed in order to run IPv6 over IoT and we investigate its applicability over the three Low Power Wide Area Networks (LPWANs) technologies: LoRaWAN, DASH7, NB-IoT. LPWANs are wireless technologies that are used to connect things to the Internet. These technologies are characterized by their large coverage area, long battery operation, low bandwidth, small frame payload size, and the use of asymmetric and non-synchronized communication. Based on this investigation, we highlight the schemes that can be adopted for IP-based LPWANs technologies."}
{"_id": "656e91af74e2ad12071a1232b27a580e1b11914c", "title": "The SIMON and SPECK lightweight block ciphers", "text": "The Simon and Speck families of block ciphers were designed specifically to offer security on constrained devices, where simplicity of design is crucial. However, the intended use cases are diverse and demand flexibility in implementation. Simplicity, security, and flexibility are ever-present yet conflicting goals in cryptographic design. This paper outlines how these goals were balanced in the design of Simon and Speck."}
{"_id": "980fef72a502338685c6b4ad2aedf424b7560691", "title": "FELICS - Fair Evaluation of Lightweight Cryptographic Systems", "text": "In this paper we introduce FELICS, a free and open-source benchmarking framework designed for fair and consistent evaluation of software implementations of lightweight cryptographic primitives for embedded devices. The framework is very flexible thanks to its modular structure, which allows for an easy integration of new metrics, target devices and evaluation scenarios. It consists of two modules that can currently asses the performance of lightweight block and stream ciphers on three widely used microcontrollers: 8-bit AVR, 16-bit MSP and 32-bit ARM. The metrics extracted are execution time, RAM consumption and binary code size. FELICS has a simple user interface and is intended to be used by cipher designers to compare new primitives with the state of the art. The extracted metrics are very detailed and assist embedded software engineers in selecting the best cipher to match the requirements of a particular application. The tool aims to increase the transparency and trust in benchmarking results of lightweight primitives and facilitates a fair comparison between different primitives using the same evaluation conditions."}
{"_id": "4c9af4b266ed108c04847241ed101ff4cdf79382", "title": "Block Cipher Speed and Energy Efficiency Records on the MSP430: System Design Trade-Offs for 16-bit Embedded Applications", "text": "Embedded microcontroller applications often experience multiple limiting constraints: memory, speed, and for a wide range of portable devices, power. Applications requiring encrypted data must simultaneously optimize the block cipher algorithm and implementation choice against these limitations. To this end we investigate block cipher implementations that are optimized for speed and energy efficiency, the primary metrics of devices such as the MSP430 where constrained memory resources nevertheless allow a range of implementation choices. The results set speed and energy efficiency records for the MSP430 device at 132 cycles/byte and 2.18 \u03bcJ/block for AES-128 and 103 cycles/byte and 1.44 \u03bcJ/block for equivalent block and key sizes using the lightweight block cipher SPECK. We provide a comprehensive analysis of size, speed, and energy consumption for 24 different variations of AES and 20 different variations of SPECK, to aid system designers of microcontroller platforms optimize the memory and energy usage of secure applications."}
{"_id": "8d9175b071b4a6bcd02040c9d6668035582f8ee7", "title": "The LED Block Cipher", "text": "We present a new block cipher LED. While dedicated to compact hardware implementation, and offering the smallest silicon footprint among comparable block ciphers, the cipher has been designed to simultaneously tackle three additional goals. First, we explore the role of an ultra-light (in fact non-existent) key schedule. Second, we consider the resistance of ciphers, and LED in particular, to related-key attacks: we are able to derive simple yet interesting AES-like security proofs for LED regarding relatedor single-key attacks. And third, while we provide a block cipher that is very compact in hardware, we aim to maintain a reasonable performance profile for software implementation."}
{"_id": "b5f235b168f3b4364afa84fba11ee0ddd701afe0", "title": "Linear (Hull) and Algebraic Cryptanalysis of the Block Cipher PRESENT", "text": "The contributions of this paper include the first linear hull and a revisit of the algebraic cryptanalysis of reduced-round variants of the block cipher PRESENT, under known-plaintext and ciphertextonly settings. We introduce a pure algebraic cryptanalysis of 5-round PRESENT and in one of our attacks we recover half of the bits of the key in less than three minutes using an ordinary desktop PC. The PRESENT block cipher is a design by Bogdanov et al., announced in CHES 2007 and aimed at RFID tags and sensor networks. For our linear attacks, we can attack 25-round PRESENT with the whole code book, 2 25round PRESENT encryptions, 2 blocks of memory and 0.61 success rate. Further we can extend the linear attack to 26-round with small success rate. As a further contribution of this paper we computed linear hulls in practice for the original PRESENT cipher, which corroborated and even improved on the predicted bias (and the corresponding attack complexities) of conventional linear relations based on a single linear"}
{"_id": "9c8b880f59f96a6f1971816068342a153ccdd073", "title": "Domain Modelling in Computational Persuasion for Behaviour Change in Healthcare", "text": "The aim of behaviour change is to help people to change aspects of their behaviour for the better (e.g., to decrease calorie intake, to drink in moderation, to take more exercise, to complete a course of antibiotics once started, etc.). In current persuasion technology for behaviour change, the emphasis is on helping people to explore their issues (e.g., through questionnaires or game playing) or to remember to follow a behaviour change plan (e.g., diaries and email reminders). However, recent developments in computational persuasion are leading to an argument-centric approach to persuasion that can potentially be harnessed in behaviour change applications. In this paper, we review developments in computational persuasion, and then focus on domain modelling as a key component. We present a multi-dimensional approach to domain modelling. At the core of this proposal is an ontology which provides a representation of key factors, in particular kinds of belief, which we have identified in the behaviour change literature as being important in diverse behaviour change initiatives. Our proposal for domain modelling is intended to facilitate the acquisition and representation of the arguments that can be used in persuasion dialogues, together with meta-level information about them which can be used by the persuader to make strategic choices of argument to present."}
{"_id": "cd5002a8aeec3111c43d33f4c1594580d795b09c", "title": "Towards a Multimedia Knowledge-Based Agent with Social Competence and Human Interaction Capabilities", "text": "We present work in progress on an intelligent embodied conversation agent in the basic care and healthcare domain. In contrast to most of the existing agents, the presented agent is aimed to have linguistic cultural, social and emotional competence needed to interact with elderly and migrants. It is composed of an ontology-based and reasoning-driven dialogue manager, multimodal communication analysis and generation modules and a search engine for the retrieval of multimedia background content from the web needed for conducting a conversation on a given topic."}
{"_id": "8617830f4560e5287d4e4648f45aed2d11d0b690", "title": "Temporal multi-hierarchy smoothing for estimating rates of rare events", "text": "We consider the problem of estimating rates of rare events obtained through interactions among several categorical variables that are heavy-tailed and hierarchical. In our previous work, we proposed a scalable log-linear model called LMMH (Log-Linear Models for Multiple Hierarchies) that combats data sparsity at granular levels through small sample size corrections that borrow strength from rate estimates at coarser resolutions. This paper extends our previous work in two directions. First, we model excess heterogeneity by fitting local LMMH models to relatively homogeneous subsets of the data. To ensure scalable computation, these subsets are induced through a decision tree, we call this Treed-LMMH. Second, the Treed-LMMH method is coupled with temporal smoothing procedure based on a fast Kalman filter style algorithm. We show that simultaneously performing hierarchical and temporal smoothing leads to significant improvement in predictive accuracy. Our methods are illustrated on a large scale computational advertising dataset consisting of billions of observations and hundreds of millions of attribute combinations(cells)."}
{"_id": "cdee17ad8dc9cf6cf074512d8c2e776e0ff4d18c", "title": "Security in the Internet of Things: A Survey on Application Layer Protocols", "text": "The rapid development of technology nowadays led people to a new and revolutionary concept, named the Internet of Things. This model imposes that all \"objects\", such as personal objects (smartphones, notebooks, smart watches, tablets etc), electronic equipment embed with sensors and other environmental elements are always connected to a common network. Therefore, one can access any resource at any time, by using a device recognized in the network. While the IoT may be economically and socially beneficial, the implementation of such a system poses many difficulties, risks and security issues that must be taken into consideration. Nowadays, the architecture of the Internet must be updated and rethought in order to interconnect trillions of devices and to ensure interoperability between them. Nevertheless, the most important problem is the security requirements of the IoT, which is probably one of the main reasons of the relatively slow development of this field. This paper presents the most important application layer protocols that are currently used in the IoT context: CoAP, MQTT, XMPP. We discuss them both separately and by comparison, with focus on the security provided by these protocols. Finally, we provide some future research opportunities and conclusions."}
{"_id": "901d741c3b72bb5eef25790ccfdbe52354b9cd74", "title": "Antioxidant defense responses: physiological plasticity in higher plants under abiotic constraints", "text": "Environmental stresses (salinity, drought, heat/cold, light and other hostile conditions) may trigger in plants oxidative stress, generating the formation of reactive oxygen species (ROS). These species are partially reduced or activated derivatives of oxygen, comprising both free radical $$ ( {\\text{O}}_{2}^{\\cdot - } ,{\\text{OH}} \\cdot , {\\text{OH}}_{ 2} \\cdot ) $$ and non-radical (H2O2) forms, leading to cellular damage, metabolic disorders and senescence processes. In order to overcome oxidative stress, plants have developed two main antioxidants defense mechanisms that can be classified as non-enzymatic and enzymatic systems. The first class (non-enzymatic) consists of small molecules such as vitamin (A, C and E), glutathione, carotenoids and phenolics that can react directly with the ROS by scavenging them. Second class is represented by enzymes among them superoxide dismutase, peroxidase and catalase which have the capacity to eliminate superoxide and hydrogen peroxide. In this review, we have tried to explore the related works, which have revealed the changes in the basic antioxidant metabolism of plants under various abiotic constraints."}
{"_id": "5b6387df63091f13c9cad4457ca4582a2c73f90e", "title": "Image analysis of blood microscopic images for acute leukemia detection", "text": "Acute lymphoblastic leukemia (ALL) is an serious hematological neoplasia of childhood which is characterized by abnormal growth and development of immature white blood cells (lymphoblasts). ALL makes around 80% of childhood leukemia and it mostly occur in the age group of 3\u20137. The nonspecific nature of the signs and symptoms of ALL often leads to wrong diagnosis. Diagnostic confusion is also posed due to imitation of similar signs by other disorders. Careful microscopic examination of stained blood smear or bone marrow aspirate is the only way to effective diagnosis of leukemia. Techniques such as fluorescence in situ hybridization (FISH), immunophenotyping, cytogenetic analysis and cytochemistry are also employed for specific leukemia detection. The need for automation of leukemia detection arises since the above specific tests are time consuming and costly. Morphological analysis of blood slides are influenced by factors such as hematologists experience and tiredness, resulting in non standardized reports. A low cost and efficient solution is to use image analysis for quantitative examination of stained blood microscopic images for leukemia detection. A fuzzy clustering based two stage color segmentation strategy is employed for segregating leukocytes or white blood cells (WBC) from other blood components. Discriminative features i.e. nucleus shape, texture are used for final detection of leukemia. In the present paper two novel shape features i.e., Hausdorff Dimension and contour signature is implemented for classifying a lymphocytic cell nucleus. Support Vector Machine (SVM) is employed for classification. A total of 108 blood smear images were considered for feature extraction and final performance evaluation is validated with the results of a hematologist."}
{"_id": "b2c2277fd8cdea79c2f3fa87bc508c789085e4fd", "title": "Beats: Tapping Gestures for Smart Watches", "text": "Interacting with smartwatches poses new challenges. Although capable of displaying complex content, their extremely small screens poorly match many of the touchscreen interaction techniques dominant on larger mobile devices. Addressing this problem, this paper presents beating gestures, a novel form of input based on pairs of simultaneous or rapidly sequential and overlapping screen taps made by the index and middle finger of one hand. Distinguished simply by their temporal sequence and relative left/right position these gestures are designed explicitly for the very small screens (approx. 40mm square) of smartwatches and to operate without interfering with regular single touch input. This paper presents the design of beating gestures and a rigorous empirical study that characterizes how users perform them -- in a mean of 355ms and with an error rate of 5.5%. We also derive thresholds for reliably distinguishing between simultaneous (under 30ms) and sequential (under 400ms) pairs of screen touches or releases. We then present five interface designs and evaluate them in a qualitative study in which users report valuing the speed and ready availability of beating gestures."}
{"_id": "29672a46afbf64d572a7efcf034d66e3d3a266a6", "title": "A Game Theoretic Framework for Analyzing Re-Identification Risk", "text": "Given the potential wealth of insights in personal data the big databases can provide, many organizations aim to share data while protecting privacy by sharing de-identified data, but are concerned because various demonstrations show such data can be re-identified. Yet these investigations focus on how attacks can be perpetrated, not the likelihood they will be realized. This paper introduces a game theoretic framework that enables a publisher to balance re-identification risk with the value of sharing data, leveraging a natural assumption that a recipient only attempts re-identification if its potential gains outweigh the costs. We apply the framework to a real case study, where the value of the data to the publisher is the actual grant funding dollar amounts from a national sponsor and the re-identification gain of the recipient is the fine paid to a regulator for violation of federal privacy rules. There are three notable findings: 1) it is possible to achieve zero risk, in that the recipient never gains from re-identification, while sharing almost as much data as the optimal solution that allows for a small amount of risk; 2) the zero-risk solution enables sharing much more data than a commonly invoked de-identification policy of the U.S. Health Insurance Portability and Accountability Act (HIPAA); and 3) a sensitivity analysis demonstrates these findings are robust to order-of-magnitude changes in player losses and gains. In combination, these findings provide support that such a framework can enable pragmatic policy decisions about de-identified data sharing."}
{"_id": "5164d01ff133ef3aeb04b44d0e10efb19316267b", "title": "The hidden treasure in nursing leadership: informal leaders.", "text": "AIM\nThe goal of the present article was to generate awareness of characteristics of informal leaders in healthcare with the emphasis on nurses in acute care settings. There is limited research or literature regarding informal leaders in nursing and how they positively impact nursing management, the organization and, ultimately, patient care. Identification of nurses with leadership characteristics is important so that leadership development and mentoring can occur within the nursing profession.\n\n\nBACKGROUND\nMore than ever, nursing needs energetic, committed and dedicated leaders to meet the challenges of the healthcare climate and the nursing shortage. This requires nurse leaders to consider all avenues to ensure the ongoing profitability and viability of their healthcare facility.\n\n\nKEY ISSUES\nThis paper discusses clinical nurses as informal leaders; characteristics of the informal nurse leader, the role they play, how they impact their unit and how they shape the organization.\n\n\nIMPLICATION FOR NURSING MANAGEMENT\n\u2002 Informal nurse leaders are an underutilized asset in health care. If identified early, these nurses can be developed and empowered to impact unit performance, efficiency and environmental culture in a positive manner."}
{"_id": "b6f5b165928c590eec7614fc23120ebe10d1e733", "title": "An Algorithm for image stitching and blending", "text": "In many clinical studies, including those of cancer, it is highly desirable to acquire images of whole tumour sections whilst retaining a microscopic resolution. A usual approach to this is to create a composite image by appropriately overlapping individual images acquired at high magnification under a microscope. A mosaic of these images can be accurately formed by applying image registration, overlap removal and blending techniques. We describe an optimised, automated, fast and reliable method for both image joining and blending. These algorithms can be applied to most types of light microscopy imaging. Examples from histology, from in vivo vascular imaging and from fluorescence applications are shown, both in 2D and 3D. The algorithms are robust to the varying image overlap of a manually moved stage, though examples of composite images acquired both with manually-driven and computer-controlled stages are presented. The overlap-removal algorithm is based on the cross-correlation method; this is used to determine and select the best correlation point between any new image and the previous composite image. A complementary image blending algorithm, based on a gradient method, is used to eliminate sharp intensity changes at the image joins, thus gradually blending one image onto the adjacent \u2018composite\u2019. The details of the algorithm to overcome both intensity discrepancies and geometric misalignments between the stitched images will be presented and illustrated with several examples."}
{"_id": "cc0d259a1e1beac4534c5d5cbcc37e7fdb324b8e", "title": "INTELLIGENT SYSTEMS IN TOURISM A Social Science Perspective", "text": "Intelligent systems sense their environment and learn from the actions they implement to reach specific goals. They are increasingly used to support tourist information search and decision making as well as work processes. In order to model the tourism domain, these systems require a profound understanding of its nature. Looking at existing literature in tourism, this paper discusses critical gaps in the knowledge of the field to be filled so that intelligent system design can be informed and impacts understood. Specifically, it discusses the need to better conceptualize technology in tourism research and argues for a focus on uses and interactions. It challenges simplistic views of tourist information search and decisionmaking processes and calls for more research on potential impacts."}
{"_id": "e4f7706213ee2bc9b1255c82c88990992ed0fddc", "title": "Broadband Capacitively Coupled Stacked Patch Antenna for GNSS Applications", "text": "A broadband stacked circular patch antenna is presented in this letter. The broadband characteristic is achieved by employing capacitively coupled feed structure. The antenna is fed by a wideband four-output-ports feed network with equal magnitude and consistent 90\u00b0 phase shift. The final antenna provides very good circularly polarized radiation for Global Navigation Satellite System applications including GPS, GLONASS, Galileo, and Compass."}
{"_id": "1afee4edc9a9a799f2804ec1845e7a394a27dcb4", "title": "Secondary confessions: the influence (or lack thereof) of incentive size and scientific expert testimony on jurors' perceptions of informant testimony.", "text": "The goal of this research was to determine whether the size of the incentive (none, small, medium, or large, in terms of sentence reduction) a jailhouse informant receives for testifying, as well as scientific expert testimony regarding the fundamental attribution error, would influence mock juror decision-making in a criminal trial involving a secondary confession. Participants read a murder trial transcript involving informant testimony in which incentive size and expert testimony were manipulated and then provided verdict judgments, made attributions for the informant's decision to testify, and rated the informant and expert on a number of dimensions. Neither expert testimony nor size of incentive had a direct influence on verdicts. However, contrary to previous research on the influence of incentives on jurors' perceptions of secondary confessions, the presence of an incentive did influence verdict decisions, informant ratings, and attributional responses. Results imply that jury-eligible community members may be becoming aware of the issues with informant testimony as a function of incentive but that they are insensitive to the size of the incentive, and expert testimony may not sensitize them to the limitations of such testimony."}
{"_id": "80d45f972fb9f9c07947edce2c90ac492b2e9789", "title": "Variational Recurrent Neural Networks for Speech Separation", "text": "We present a new stochastic learning machine for speech separation based on the variational recurrent neural network (VRNN). This VRNN is constructed from the perspectives of generative stochastic network and variational auto-encoder. The idea is to faithfully characterize the randomness of hidden state of a recurrent neural network through variational learning. The neural parameters under this latent variable model are estimated by maximizing the variational lower bound of log marginal likelihood. An inference network driven by the variational distribution is trained from a set of mixed signals and the associated source targets. A novel supervised VRNN is developed for speech separation. The proposed VRNN provides a stochastic point of view which accommodates the uncertainty in hidden states and facilitates the analysis of model construction. The masking function is further employed in network outputs for speech separation. The benefit of using VRNN is demonstrated by the experiments on monaural speech separation."}
{"_id": "bbd79864aa337f536db16c52bdc70dd1e385f57b", "title": "Physiological-based emotion recognition with IRS model", "text": "A major challenge in physiology-based emotion recognition is to establish an effective emotion recognizer for multi-users in the user-independent scenario. The recognition result is not satisfied because it ignores the difference in individual response pattern, which can be attributed to IRS (Individual Response Specificity) and SRS(Stimuli Response Specificity) in psychophysiology. To improve the performance of emotion recognition, this paper proposes a Group-Based IRS model by adaptively matching a suitable recognizer in accordance with user's IRS level. Specifically, the users are put into distinct groups by using cluster analysis techniques, where users within the same group have similar IRS level than other groups. Then physiological data of users from each group is utilized to build the corresponding emotion recognizers. After categorizing a new user into one group according to his IRS level, the new user's emotion state is predicted by the corresponding emotion recognizer. To validate our model, the affective physiological data was collected from 11 subjects in four induced emotions(neutral, sadness, fear and pleasure), three-channel bio-sensors were used to measure users electrocardiogram (ECG), galvanic skin response (GSR) and photo plethysmography (PPG). The results show that the recognition precision in Group-based IRS model is higher than general model."}
{"_id": "7ba039e6892a8fb01478189ab6c1377194e8b08d", "title": "Conceptual Integration of Enterprise Architecture Management and Security Risk Management", "text": "Enterprise Architecture Management (EAM) is considered to provide the mechanism for, amongst others, governing enterprise transformations required by changes in the environment. In this paper, we focus on changes that result from the analysis of information security risks and of their impacts on the services delivered by an enterprise. We present how the concepts of an information system security risks management domain can be mapped into the ArchiMate enterprise architecture modeling language. We illustrate the application of the proposed approach through the handling of a lab case."}
{"_id": "be522b149aab5b672fd949326dcb3da817b3982b", "title": "Particle Belief Propagation", "text": "The popularity of particle filtering for inference in Markov chain models defined over random variables with very large or continuous domains makes it natural to consider sample\u2013based versions of belief propagation (BP) for more general (tree\u2013structured or loopy) graphs. Already, several such algorithms have been proposed in the literature. However, many questions remain open about the behavior of particle\u2013based BP algorithms, and little theory has been developed to analyze their performance. In this paper, we describe a generic particle belief propagation (PBP) algorithm which is closely related to previously proposed methods. We prove that this algorithm is consistent, approaching the true BP messages as the number of samples grows large. We then use concentration bounds to analyze the finitesample behavior and give a convergence rate for the algorithm on tree\u2013structured graphs. Our convergence rate is O(1/ \u221a n) where n is the number of samples, independent of the domain size of the variables."}
{"_id": "804f96f4ed40bf2f45849b1eb446c0ffa73a9277", "title": "A 3D Coarse-to-Fine Framework for Automatic Pancreas Segmentation", "text": "In this paper, we adopt 3D CNNs to segment the pancreas in CT images. Although deep neural networks have been proven to be very effective on many 2D vision tasks, it is still challenging to apply them to 3D applications due to the limited amount of annotated 3D data and limited computational resources. We propose a novel 3D-based coarseto-fine framework for volumetric pancreas segmentation to tackle these challenges. The proposed 3D-based framework outperforms the 2D counterpart to a large margin since it can leverage the rich spatial information along all three axes. We conduct experiments on two datasets which include healthy and pathological pancreases respectively, and achieve the state-of-the-art in terms of Dice-S\u00f8rensen Coefficient (DSC). Moreover, the worst case of DSC on the NIH dataset was improved by 7% to reach almost 70%, which indicates the reliability of our framework in clinical applications."}
{"_id": "3cbad29cc2d0970d4c00d0d6194ee6c289d5b593", "title": "Changes in excitability of the cortical projections to the human tibialis anterior after paired associative stimulation.", "text": "Paired associative stimulation (PAS) based on Hebb's law of association can induce plastic changes in the intact human. The optimal interstimulus interval (ISI) between the peripheral nerve and transcranial magnetic stimulus is not known for muscles of the lower leg. The aims of this study were to investigate the effect of PAS for a variety of ISIs and to explore the efficacy of PAS when applied during dynamic activation of the target muscle. PAS was applied at 0.2 Hz for 30 min with the tibialis anterior (TA) at rest. The ISI was varied randomly in seven sessions (n = 5). Subsequently, PAS was applied (n = 14, ISI = 55 ms) with the TA relaxed or dorsi-flexing. Finally, an optimized ISI based on the subject somatosensory evoked potential (SEP) latency plus a central processing delay (6 ms) was used (n = 13). Motor-evoked potentials (MEPs) were elicited in the TA before and after the intervention, and the size of the TA MEP was extracted. ISIs of 45, 50, and 55 ms increased and 40 ms decreased TA MEP significantly (P = 0.01). PAS during dorsi-flexion increased TA MEP size by 92% (P = 0.001). PAS delivered at rest resulted in a nonsignificant increase; however, when the ISI was optimized from SEP latency recordings, all subjects showed significant increases (P = 0.002). No changes in MEP size occurred in the antagonist. Results confirm that the excitability of the corticospinal projections to the TA but not the antagonist can be increased after PAS. This is strongly dependent on the individualized ISI and on the activation state of the muscle."}
{"_id": "f860806451ec7c6d3cd7c282e5155f340b5514d9", "title": "Characterization of Polymeric Microneedle Arrays for Transdermal Drug Delivery", "text": "Microfabrication of dissolvable, swellable, and biodegradable polymeric microneedle arrays (MNs) were extensively investigated based in a nano sensitive fabrication style known as micromilling that is then combined with conventional micromolding technique. The aim of this study was to describe the polymer selection, and optimize formulation compounding parameters for various polymeric MNs. Inverse replication of micromilled master MNs reproduced with polydimethylsiloxane (PDMS), where solid out of plane polymeric MNs were subsequently assembled, and physicochemically characterized. Dissolvable, swellable, and biodegradable MNs were constructed to depth of less than 1 mm with an aspect ratio of 3.6, and 1/2 mm of both inter needle tip and base spacing. Micromolding step also enabled to replicate the MNs very precisely and accurate. Polymeric microneedles (MN) precision was ranging from \u00b1 0.18 to \u00b1 1.82% for microneedle height, \u00b1 0.45 to \u00b1 1.42% for base diameter, and \u00b1 0.22 to \u00b1 0.95% for interbase spacing. Although dissolvable sodium alginate MN showed less physical robustness than biodegradable polylactic-co-glycolic acid MN, their thermogravimetric analysis is of promise for constructing these polymeric types of matrix devices."}
{"_id": "31134761e0678cbae30168a5adc3a5c08cc295be", "title": "Mobile robot navigation based on lidar", "text": "A wheeled mobile robot based on Arduino was described in this article. Hardware selection, Simultaneous Localization and Mapping (SLAM), Path Planning and the principle of the algorithm applied in this process are introduced in detail. The algorithm used in ROS is further verified by experiments. In order to verify the feasibility of two different 2D SLAM algorithms, we use the wheeled robot built in the lab to do the experiment in the same indoor environment and compare their results."}
{"_id": "bd0b7181eb8b2cb09a4e1318649d442986cf9697", "title": "Analysis and modelling of a bidirectional push-pull converter with PWM plus phase-shift control", "text": "This paper presents analysis and modeling for a bidirectional push-pull dc-dc converter. In particular, attention is paid to leakage current reduction with PWM plus phase-shift control. The leakage current in different operating modes has been compared to identify the efficient modes for various operation conditions. The state space averaging method is adopted to derive the small signal and large signal models of the most efficient mode. A 30~70V/300V prototype was built. Both simulation results and experimental results prove the validity of the model."}
{"_id": "6cd4f7cf35ce89d9b8e62432ffe7df1b86fee68c", "title": "SystemT: a system for declarative information extraction", "text": "As applications within and outside the enterprise encounter increasing volumes of unstructured data, there has been renewed interest in the area of information extraction (IE) -- the discipline concerned with extracting structured information from unstructured text. Classical IE techniques developed by the NLP community were based on cascading grammars and regular expressions. However, due to the inherent limitations of grammarbased extraction, these techniques are unable to: (i) scale to large data sets, and (ii) support the expressivity requirements of complex information tasks. At the IBM Almaden Research Center, we are developing SystemT, an IE system that addresses these limitations by adopting an algebraic approach. By leveraging well-understood database concepts such as declarative queries and costbased optimization, SystemT enables scalable execution of complex information extraction tasks. In this paper, we motivate the SystemT approach to information extraction. We describe our extraction algebra and demonstrate the effectiveness of our optimization techniques in providing orders of magnitude reduction in the running time of complex extraction tasks."}
{"_id": "a117b7b488fea7c7730fc575f9fc319027eea548", "title": "A Fractional-N Sub-Sampling PLL using a Pipelined Phase-Interpolator With an FoM of -250\u00a0dB", "text": "A fractional-N sub-sampling PLL architecture based on pipelined phase-interpolator and Digital-to-Time-Converter (DTC) is presented in this paper. The combination of pipelined phase-interpolator and DTC enables efficient design of the multi-phase generation mechanism required for the fractional operation. This technique can be used for designing a fractional-N PLL with low in-band phase noise and low spurious tones with low power consumption. The short-current-free pipelined phase-interpolator used in this work is capable of achieving high-linearity with low-power while minimizing the intrinsic jitter. A number of other circuit techniques and layout techniques are also employed in this design for ensuring high-performance operation with minimal chip area and power consumption. The proposed fractional-N PLL is implemented in standard 65 nm CMOS technology. The PLL has an operating range of 600 MHz from 4.34 GHz to 4.94 GHz. In fractional-N mode, the proposed PLL achieves -249.5 dB FoM and less than -59 dBc fractional spurs."}
{"_id": "45d2f411151a4a9c05cd0f1fba0746570dbd7708", "title": "Structural Design of Convolutional Neural Networks for Steganalysis", "text": "Recent studies have indicated that the architectures of convolutional neural networks (CNNs) tailored for computer vision may not be best suited to image steganalysis. In this letter, we report a CNN architecture that takes into account knowledge of steganalysis. In the detailed architecture, we take absolute values of elements in the feature maps generated from the first convolutional layer to facilitate and improve statistical modeling in the subsequent layers; to prevent overfitting, we constrain the range of data values with the saturation regions of hyperbolic tangent (TanH) at early stages of the networks and reduce the strength of modeling using 1\u00d71 convolutions in deeper layers. Although it learns from only one type of noise residual, the proposed CNN is competitive in terms of detection performance compared with the SRM with ensemble classifiers on the BOSSbase for detecting S-UNIWARD and HILL. The results have implied that well-designed CNNs have the potential to provide a better detection performance in the future."}
{"_id": "32677c508f6482e1947c2c1a23b001ff595d254c", "title": "Towards Automated Factchecking: Developing an Annotation Schema and Benchmark for Consistent Automated Claim Detection", "text": "In an effort to assist factcheckers in the process of factchecking, we tackle the claim detection task, one of the necessary stages prior to determining the veracity of a claim. It consists of identifying the set of sentences, out of a long text, deemed capable of being factchecked. This paper is a collaborative work between Full Fact, an independent factchecking charity, and academic partners. Leveraging the expertise of professional factcheckers, we develop an annotation schema and a benchmark for automated claim detection that is more consistent across time, topics and annotators than previous approaches. Our annotation schema has been used to crowdsource the annotation of a dataset with sentences from UK political TV shows. We introduce an approach based on universal sentence representations to perform the classification, achieving an F1 score of 0.83, with over 5% relative improvement over the state-of-the-art methods ClaimBuster and ClaimRank. The system was deployed in production and received positive user feedback."}
{"_id": "1411357f3a4c807dc15f51375dadb1cf975d2f78", "title": "A Low power and area efficient CLA adder design using Full swing GDI technique", "text": "The low power VLSI design has an important role in designing of many electronic systems. While designing any combinational or sequential circuits, the important parameters like power consumption, implementation area, voltage leakage and performance of the circuit are to be considered. Design of area, high speed and powerefficient data path logic systems forms the largest areas of research in VLSI system design. This paper presents a low power Carry look ahead adder design using Full swing Gate diffusion (FS-GDI) technique. The proposed CLA implementation utilizes improved full-swing GDI F1 and F2 gates, which are the counterparts of standard CMOS NAND and NOR gates. The basic Gate Diffusion Input (GDI) logic style suffers from some practical limitations like swing degradation, fabrication complexity in standard CMOS process and bulk connections. These limitations can be overcome by Full swing GDI technique. The proposed technique utilizes a single swing restoration (SR) transistor to improve the output swing of F1 and F2 GDI gates. A 16-bit CLA is designed and Simulations are performed by Mentor graphics 130nm CMOS technology ELDO simulator. Simulation results have shown a greater reduction in delay, power dissipation and area."}
{"_id": "93f93ca88c09e109c989bf28f1c108786dc0f163", "title": "A 300 nW , 15 ppm / C , 20 ppm / V CMOS Voltage Reference Circuit Consisting of Subthreshold MOSFETs", "text": "A low-power CMOS voltage reference was developed using a 0.35 m standard CMOS process technology. The device consists of MOSFET circuits operated in the subthreshold region and uses no resistors. It generates two voltages having opposite temperature coefficients and adds them to produce an output voltage with a near-zero temperature coefficient. The resulting voltage is equal to the extrapolated threshold voltage of a MOSFET at absolute zero temperature, which was about 745 mV for the MOSFETs we used. The temperature coefficient of the voltage was 7 ppm/ C at best and 15 ppm/ C on average, in a range from 20 to 80 C. The line sensitivity was 20 ppm/V in a supply voltage range of 1.4\u20133 V, and the power supply rejection ratio (PSRR) was 45 dB at 100 Hz. The power dissipation was 0.3 W at 80 C. The chip area was 0.05 mm . Our device would be suitable for use in subthreshold-operated, power-aware LSIs."}
{"_id": "fc73090889036a0e42ea40827ac835cd5e135b16", "title": "Deep Learning based Large Scale Visual Recommendation and Search for E-Commerce", "text": "In this paper, we present a uni\u0080ed end-to-end approach to build a large scale Visual Search and Recommendation system for ecommerce. Previous works have targeted these problems in isolation. We believe a more e\u0082ective and elegant solution could be obtained by tackling them together. We propose a uni\u0080ed Deep Convolutional Neural Network architecture, called VisNet 1, to learn embeddings to capture the notion of visual similarity, across several semantic granularities. We demonstrate the superiority of our approach for the task of image retrieval, by comparing against the state-of-the-art on the Exact Street2Shop [14] dataset. We then share the design decisions and trade-o\u0082s made while deploying the model to power Visual Recommendations across a catalog of 50M products, supporting 2K queries a second at Flipkart, India\u2019s largest e-commerce company. \u008ce deployment of our solution has yielded a signi\u0080cant business impact, as measured by the conversion-rate."}
{"_id": "b63cf071a624d170f21cf01003146b4119aaffed", "title": "CMOS Amperometric Instrumentation and Packaging for Biosensor Array Applications", "text": "An integrated CMOS amperometric instrument with on-chip electrodes and packaging for biosensor arrays is presented. The mixed-signal integrated circuit supports a variety of electrochemical measurement techniques including linear sweep, constant potential, cyclic and pulse voltammetry. Implemented in 0.5 \u03bcm CMOS, the 3 \u00d7 mm2 chip dissipates 22.5 mW for a 200 kHz clock. The highly programmable chip provides a wide range of user-controlled stimulus rate and amplitude settings with a maximum scan range of 2 V and scan rates between 1 mV/sec and 400 V/sec. The amperometric readout circuit provides \u00b1500 fA linear resolution and supports inputs up to \u00b147 \u03bcA. A 2 \u00d7 2 gold electrode array was fabricated on the surface of the CMOS instrumentation chip. An all-parylene packaging scheme was developed for compatibility with liquid test environments as well as a harsh piranha electrode cleaning process. The chip was tested using cyclic voltammetry of different concentrations of potassium ferricyanide at 100 mV/s and 200 mV/s, and results were identical to measurements using commercial instruments."}
{"_id": "a8ef6c8aaf83ce9ad41ed092cc07e21676fc7bad", "title": "Performance Enhancement of Boost Converter Based on PID Controller Plus Linear-to-Nonlinear Translator", "text": "Here, A PID controller plus a novel linear-to-nonlinear translator is proposed and applied to the boost converter to improve the operating performance of the boost converter under large transient variations in load all over the quiescent dc input voltage range. The proposed translator is inserted between the PID controller and the main power stage such that the PID controller considers the behavior of the boost converter to be quasi-linear, thereby suppressing the effect of nonlinear behavior of the boost converter on the PID controller. Besides, variations in dc gain of the control-to-output transfer function with the translator are smaller than those without the translator, thus making system control easy to attain. Also, one-comparator counter-based sampling without any A-D converter is presented herein to realize the proposed control strategy. As mentioned above, the proposed control strategy can make the boost converter stably operated under large load transient responses all over the quiescent dc input voltage range."}
{"_id": "faefb00d195c1e7d99c5b21a3ab739f2ccef2925", "title": "A five level cascaded H-bridge inverter based on space vector pulse width modulation technique", "text": "Multilevel inverters brought a tremendous revolution in high power superior performance industrial drive applications. Minimizing harmonic distortion is a challenging issue for power electronic drive. This problem can be solved by developing suitable modulation technique. There are number of modulation techniques established in the last decade. Among them, the space vector pulse width modulation (SVPWM) is extensively used because of their easier digital realization, better DC bus utilization and lower THD. Conventional SVPWM requires different number of logical computations of on time calculations and different number of lookup table for each triangle due to having their different number of switching states. In this paper, a simple SVPWM is presented where proposed only four logical computations of on time calculation, four active switching states and four lookup table for each triangle. Five level cascaded h-bridge inverter (CHBI) based on SVPWM technique are modeled and analyzed by using MATLAB/Simulink and origin 6.1 with a passive R-L load that can be extended to any level. These results are compared with other lower levels of inverter and with previous established results. From these results, five level CHBI based on SVPWM shows better performance compared to others in terms of THD."}
{"_id": "7e6497cd7320678ba57ecc816251492a28be53c1", "title": "TA-COS 2018 : 2 nd Workshop on Text Analytics for Cybersecurity and Online Safety", "text": "In this study, we present toxicity annotation for a Thai Twitter Corpus as a preliminary exploration for toxicity analysis in the Thai language. We construct a Thai toxic word dictionary and select 3,300 tweets for annotation using the 44 keywords from our dictionary. We obtained 2,027 and 1,273 toxic and non-toxic tweets, respectively; these were labeled by three annotators. The result of corpus analysis indicates that tweets that include toxic words are not always toxic. Further, it is more likely that a tweet is toxic, if it contains toxic words indicating their original meaning. Moreover, disagreements in annotation are primarily because of sarcasm, unclear existing target, and word sense ambiguity. Finally, we conducted supervised classification using our corpus as a dataset and obtained an accuracy of 0.80, which is comparable with the inter-annotator agreement of this dataset. Our dataset is available on GitHub."}
{"_id": "bf0c632fd2aa24c92ceb2625346c89497d478df2", "title": "Correlation Hashing Network for Efficient Cross-Modal Retrieval", "text": "Due to the storage and retrieval efficiency, hashing has been widely deployed to approximate nearest neighbor search for large-scale multimedia retrieval. Cross-modal hashing, which improves the quality of hash coding by exploiting the semantic correlation across different modalities, has received increasing attention recently. For most existing cross-modal hashing methods, an object is first represented as of vector of hand-crafted or machine-learned features, followed by another separate quantization step that generates binary codes. However, suboptimal hash coding may be produced, because the quantization error is not statistically minimized and the feature representation is not optimally compatible with the binary coding. In this paper, we propose a novel Correlation Hashing Network (CHN) architecture for cross-modal hashing, in which we jointly learn good data representation tailored to hash coding and formally control the quantization error. The CHN model is a hybrid deep architecture constituting four key components: (1) an image network with multiple convolution-pooling layers to extract good image representations, and a text network with several fully-connected layers to extract good text representations; (2) a fully-connected hashing layer to generate modality-specific compact hash codes; (3) a squared cosine loss layer for capturing both cross-modal correlation and within-modal correlation; and (4) a new cosine quantization loss for controlling the quality of the binarized hash codes. Extensive experiments on standard cross-modal retrieval datasets show the proposed CHN model yields substantial boosts over latest state-of-the-art hashing methods."}
{"_id": "fee1d4cfd912ed176f373e8552eeaf3b779f85fb", "title": "ontoAGA: Ontology to Support Educational Systems Interoperability", "text": "Looking forward to promote the exchange and integration of information in order to build a unique reference model for the universities it is essencial to assure commom concepts of higher education institutions in Brazil. This paper proposes an University Support Management Ontology, named ontoAGA, to help the operation of legacy data in Semantic Web in a standardized manner and to allow the integration of databases with different technologies.To evaluate the research questions two usages cenarios were defined and the results point to the proposal viability. Resumo. Para promover o interc\u00e2mbio e a integra\u00e7\u00e3o de informa\u00e7\u00f5es de forma a se construir um modelo de refer\u00eancia para as universidades h\u00e1 necessidade de se estruturar conceitos comuns \u00e0s institui\u00e7\u00f5es de ensino superior no Brasil. Este artigo prop\u00f5e uma ontologia, no dom\u00ednio da gest\u00e3o acad\u00eamica nas universidades, denominada ontoAGA, para ajudar a explora\u00e7\u00e3o de dados legados na web Sem\u00e2ntica de forma padronizada e para possibilitar a integra\u00e7\u00e3o de bases de dados de diferentes tecnologias. Para avalia\u00e7\u00e3o das quest\u00f5es de pesquisa foram definidos dois cen\u00e1rios de uso, cujos resultados apontam para a viabilidade da proposta."}
{"_id": "17c7fb511cb754e259a78f97b3644ded7d87d00d", "title": "Safer smart contracts through type-driven development", "text": "We show how dependent and polymorphic types can make smart contract development safer. This is demonstrated by using the functional language Idris to describe smart contracts on the Ethereum platform. In particular, we show how one class of common errors can be captured at compile time using dependent types and algebraic side effects. We also bring type annotations to the realm of smart contracts, helping developers to circumvent another class of common errors. To demonstrate the feasibility of our solutions, we have extended the Idris compiler with a backend for the Ethereum Virtual Machine. While we find that the functional paradigm might not be the most suitable for the domain, our approach solves the identified problems and provides advantages over the languages in current use."}
{"_id": "4c0c97adb9753d937890545ba4fb78462f824247", "title": "Complexities in Internet peering: Understanding the \u201cBlack\u201d in the \u201cBlack Art\u201d", "text": "Peering in the Internet interdomain network has long been considered a \u201cblack art\u201d, understood in-depth only by a select few peering experts while the majority of the network operator community only scratches the surface employing conventional rules-of-thumb to form peering links through ad hoc personal interactions. Why is peering considered a black art? What are the main sources of complexity in identifying potential peers, negotiating a stable peering relationship, and utility optimization through peering? How do contemporary operational practices approach these problems? In this work we address these questions for Tier-2 Network Service Providers. We identify and explore three major sources of complexity in peering: (a) inability to predict traffic flows prior to link formation (b) inability to predict economic utility owing to a complex transit and peering pricing structure (c) computational infeasibility of identifying the optimal set of peers because of the network structure. We show that framing optimal peer selection as a formal optimization problem and solving it is rendered infeasible by the nature of these problems. Our results for traffic complexity show that 15% NSPs lose some fraction of customer traffic after peering. Additionally, our results for economic complexity show that 15% NSPs lose utility after peering, approximately, 50% NSPs end up with higher cumulative costs with peering than transit only, and only 10% NSPs get paid-peering customers."}
{"_id": "342001a13bbe6cf25069d827b77886065af98a42", "title": "Learning from mistakes: towards a correctable learning algorithm", "text": "Many learning algorithms generate complex models that are difficult for a human to interpret, debug, and extend. In this paper, we address this challenge by proposing a new learning paradigm called correctable learning, where the learning algorithm receives external feedback about which data examples are incorrectly learned. We define a set of metrics which measure the correctability of a learning algorithm. We then propose a simple and efficient correctable learning algorithm which learns local models for different regions of the data space. Given an incorrect example, our method samples data in the neighborhood of that example and learns a new, more correct local model over that region. Experiments over multiple classification and ranking datasets show that our correctable learning algorithm offers significant improvements over the state-of-the-art techniques."}
{"_id": "0b6fcefe9d52b8cd6e663ff3e388dc6ecb8559b2", "title": "Wrapper Induction for Information Extraction", "text": "Many Internet information resources present relational data|telephone directories, product catalogs, etc. Because these sites are formatted for people, mechanically extracting their content is di cult. Systems using such resources typically use hand-coded wrappers, procedures to extract data from information resources. We introduce wrapper induction, a method for automatically constructing wrappers, and identify hlrt, a wrapper class that is e ciently learnable, yet expressive enough to handle 48% of a recently surveyed sample of Internet resources. We use PAC analysis to bound the problem's sample complexity, and show that the system degrades gracefully with imperfect labeling knowledge."}
{"_id": "7e48e740eba48783100c114181cf9ea80f5e0c59", "title": "Achieving Secure, Universal, and Fine-Grained Query Results Verification for Secure Search Scheme over Encrypted Cloud Data", "text": "Secure search techniques over encrypted cloud data allow an authorized user to query data files of interest by submitting encrypted query keywords to the cloud server in a privacy-preserving manner. However, in practice, the returned query results may be incorrect or incomplete in the dishonest cloud environment. For example, the cloud server may intentionally omit some qualified results to save computational resources and communication overhead. Thus, a well-functioning secure query system should provide a query results verification mechanism that allows the data user to verify results. In this paper, we design a secure, easily integrated, and fine-grained query results verification mechanism, by which, given an encrypted query results set, the query user not only can verify the correctness of each data file in the set but also can further check how many or which qualified data files are not returned if the set is incomplete before decryption. The verification scheme is loose-coupling to concrete secure search techniques and can be very easily integrated into any secure query scheme. We achieve the goal by constructing secure verification object for encrypted cloud data. Furthermore, a short signature technique with extremely small storage cost is proposed to guarantee the authenticity of verification object and a verification object request technique is presented to allow the query user to securely obtain the desired verification object. Performance evaluation shows that the proposed schemes are practical and efficient."}
{"_id": "60e3f74c98407e362560edbcb10a094e2a64c3ce", "title": "Collective Opinion Target Extraction in Chinese Microblogs", "text": "Microblog messages pose severe challenges for current sentiment analysis techniques due to some inherent characteristics such as the length limit and informal writing style. In this paper, we study the problem of extracting opinion targets of Chinese microblog messages. Such fine-grained word-level task has not been well investigated in microblogs yet. We propose an unsupervised label propagation algorithm to address the problem. The opinion targets of all messages in a topic are collectively extracted based on the assumption that similar messages may focus on similar opinion targets. Topics in microblogs are identified by hashtags or using clustering algorithms. Experimental results on Chinese microblogs show the effectiveness of our framework and algorithms."}
{"_id": "044e02d1c2a13cfa8c52fc29935e7fa76ac57300", "title": "Voting models in random networks", "text": "A crucial problem of Social Sciences is under what conditions agreement, or disagreement, emerge in a network of interacting agents. This topic has application in many contexts, including business and marketing decisions, with potential impact on information and technological networks. In this paper we consider a particular model of interaction between a group of individuals connected through a network of acquaintances. In the first model, a node waits an exponentially time with parameter one, and when it expires it chooses one of its neighbors' at random and adopts its decision. In the second one, the node chooses the opinion which is the most adopted by its neighbors (hence, majority rule). We show how different updating rule of the agent' state lead to different emerging patterns, namely, agreement and disagreement. In addition, in the case of agreement, we provide bounds on the time to convergence for various types of graphs."}
{"_id": "8efb188642a284752ad4842f77f2b83604783c57", "title": "Evolving Graphs and Networks with Edge Encoding: Preliminary Report", "text": "We present an alternative to the cellular encoding technique [Gruau 1992] for evolving graph and network structures via genetic programming. The new technique, called edge encoding, uses edge operators rather than the node operators of cellular encoding. While both cellular encoding and edge encoding can produce all possible graphs, the two encodings bias the genetic search process in different ways; each may therefore be most useful for a different set of problems. The problems for which these techniques may be used, and for which we think edge encoding may be particularly useful, include the evolution of recurrent neural networks, finite automata, and graph-based queries to symbolic knowledge bases. In this preliminary report we present a technical description of edge encoding and an initial comparison to cellular encoding. Experimental investigation of the relative merits of these encoding schemes is currently in progress."}
{"_id": "59b0b6a08909664bdd1e110435eaaeac36473625", "title": "EMD-L1: An Efficient and Robust Algorithm for Comparing Histogram-Based Descriptors", "text": "We propose a fast algorithm, EMD-L1, for computing the Earth Mover\u2019s Distance (EMD) between a pair of histograms. Compared to the original formulation, EMD-L1 has a largely simplified structure. The number of unknown variables in EMD-L1 is O(N) that is significantly less than O(N) of the original EMD for a histogram with N bins. In addition, the number of constraints is reduced by half and the objective function is also simplified. We prove that the EMD-L1 is formally equivalent to the original EMD with L1 ground distance without approximation. Exploiting the L1 metric structure, an efficient tree-based algorithm is designed to solve the EMD-L1 computation. An empirical study demonstrates that the new algorithm has the time complexity of O(N), which is much faster than previously reported algorithms with super-cubic complexities. The proposed algorithm thus allows the EMD to be applied for comparing histogram-based features, which is practically impossible with previous algorithms. We conducted experiments for shape recognition and interest point matching. EMD-L1 is applied to compare shape contexts on the widely tested MPEG7 shape dataset and SIFT image descriptors on a set of images with large deformation, illumination change and heavy noise. The results show that our EMD-L1based solutions outperform previously reported state-of-the-art features and distance measures in solving the two tasks."}
{"_id": "04e34e689386604ab37780c48797352321f95102", "title": "Boxlets: A Fast Convolution Algorithm for Signal Processing and Neural Networks", "text": "Signal processing and pattern recognition algorithms make extensive use of convolution. In many cases, computational accuracy is not as important as computational speed. In feature extraction, for instance, the features of interest in a signal are usually quite distorted. This form of noise justi es some level of quantization in order to achieve faster feature extraction. Our approach consists of approximating regions of the signal with low degree polynomials, and then di erentiating the resulting signals in order to obtain impulse functions (or derivatives of impulse functions). With this representation, convolution becomes extremely simple and can be implemented quite e ectively. The true convolution can be recovered by integrating the result of the convolution. This method yields substantial speed up in feature extraction and is applicable to convolutional neural networks."}
{"_id": "0fb1b0ce8b93abcfd30a4bb41d4d9b266b1c0f64", "title": "Robust Real-time Object Detection", "text": "This paper describes a visual object detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the \u201cIntegral Image\u201d which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features and yields extremely efficient classifiers [6]. The third contribution is a method for combining classifiers in a \u201ccascade\u201d which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. A set of experiments in the domain of face detection are presented. The system yields face detection performace comparable to the best previous systems [18, 13, 16, 12, 1]. Implemented on a conventional desktop, face detection proceeds at 15 frames per second."}
{"_id": "f4cb70ace6c61a81f90970551974e70d3ddc0b33", "title": "Modeling of a Vertical Axis Wind Turbine with Permanent Magnet Synchronous Generator for Nigeria", "text": "The performances of a vertical axis wind turbine (VAWT) with permanent magnet synchronous generator (PMSG) under low and unsteady wind speed is investigated and Modelled in this paper. The ever increasing demand for electrical energy has led to the continuous search for the most readily available means of providing electricity. Wind turbine is an alternative source of electricity. To implement a wind turbine, careful and extensive investigation is required since different environments or geographical locations call for different wind speed. Nigeria is a typical example of locations with low and unsteady wind speed. The wind turbine system consists of three main parts; the wind speed, the turbine and the generator. A three phase PMSG is chosen in this work due to its higher efficiency and less need for maintenance when compared to other types of generators. It does not require rotor windings which imply a reduction of weight and cost. These elements and entire idea of this work have been modelled and simulated using MATLAB/ SIMULINK. Results obtained showed good system performance. The methodology used in this work, if fully implemented, will not only reduce the cost of design but will also reduce the cost of maintenance, making it economical and affordable."}
{"_id": "4a5641cd6b537ecfdf6d54496a09d5ff7cd97adf", "title": "PUF-FSM: A Controlled Strong PUF", "text": "Existing strong controlled physical unclonable function (PUF) designs are built to resist modeling attacks and they deal with noisy PUF responses by exploiting error correction logic. These designs are burdened by the costs of the error correction logic and information shown to leak through the associated helper data for assisting error corrections; leaving the design vulnerable to fault attacks or reliability-based attacks. We present a hybrid PUF\u2013finite state machine (PUF-FSM) construction to realize a controlled strong PUF. The PUF-FSM design removes the need for error correction logic and related computation, storage of the helper data and loading it on-chip by only employing error-free responses judiciously determined on demand in the absence of the underlying PUF\u2014an Arbiter PUF\u2014with a large challenge response pair space. The PUF-FSM demonstrates improved security, especially to reliability-based attacks and is able to support a range of applications from authentication to more advanced cryptographic applications built upon shared keys. We experimentally validate the practicability of the PUF-FSM."}
{"_id": "13f29041672d9877e59546624b2135d4e41bb1e7", "title": "A Machine Learning Based Intrusion Detection System for Software Defined 5G Network", "text": "As an inevitable trend of future 5G networks, Software Defined architecture has many advantages in providing centralized control and flexible resource management. But it is also confronted with various security challenges and potential threats with emerging services and technologies. As the focus of network security, Intrusion Detection Systems (IDS) are usually deployed separately without collaboration. They are also unable to detect novel attacks with limited intelligent abilities, which are hard to meet the needs of software defined 5G. In this paper, we propose an intelligent intrusion system taking the advances of software defined technology and artificial intelligence based on Software Defined 5G architecture. It flexibly combines security function modules which are adaptively invoked under centralized management and control with a globle view. It can also deal with unknown intrusions by using machine learning algorithms. Evaluation results prove that the intelligent intrusion detection system achieves a better performance."}
{"_id": "2916fd514c69c1b3141c377c1c97d957bdc86c5e", "title": "Medusa: Simplified Graph Processing on GPUs", "text": "Graphs are common data structures for many applications, and efficient graph processing is a must for application performance. Recently, the graphics processing unit (GPU) has been adopted to accelerate various graph processing algorithms such as BFS and shortest paths. However, it is difficult to write correct and efficient GPU programs and even more difficult for graph processing due to the irregularities of graph structures. To simplify graph processing on GPUs, we propose a programming framework called Medusa which enables developers to leverage the capabilities of GPUs by writing sequential C/C++ code. Medusa offers a small set of user-defined APIs and embraces a runtime system to automatically execute those APIs in parallel on the GPU. We develop a series of graph-centric optimizations based on the architecture features of GPUs for efficiency. Additionally, Medusa is extended to execute on multiple GPUs within a machine. Our experiments show that 1) Medusa greatly simplifies implementation of GPGPU programs for graph processing, with many fewer lines of source code written by developers and 2) the optimization techniques significantly improve the performance of the runtime system, making its performance comparable with or better than manually tuned GPU graph operations."}
{"_id": "1fda076db531e56feb00b4c256f270eff97c3da7", "title": "Efficacy of Topical Combination of 0.25% Finasteride and 3% Minoxidil Versus 3% Minoxidil Solution in Female Pattern Hair Loss: A Randomized, Double-Blind, Controlled Study.", "text": "BACKGROUND\nThe relationship between female pattern hair loss (FPHL) and androgenic hormones is not well established, but some evidence indicates oral finasteride may be efficacious in FPHL. Use of a topical formulation has been proposed to minimize unwanted effects.\n\n\nOBJECTIVES\nOur objective was to compare the efficacy and safety of topical 0.25% finasteride combined with 3% minoxidil solution and 3% minoxidil solution as monotherapy in the treatment of FPHL.\n\n\nMETHODS\nThis was a prospective, randomized, double-blind study in 30 postmenopausal women with FPHL. Each participant was randomized to receive either topical 0.25% finasteride combined with topical 3% minoxidil or topical 3% minoxidil solution as monotherapy for 24\u00a0weeks. To determine efficacy, the hair density and diameter was measured and global photographic assessment was conducted at baseline and 8, 16, and 24\u00a0weeks. Side effects and serum dihydrotestosterone levels were also evaluated.\n\n\nRESULTS\nBy 24\u00a0weeks, hair density and diameter had increased in both groups, and finasteride/minoxidil was significantly superior to minoxidil solution in terms of hair diameter (p\u2009=\u20090.039). No systemic side effects were reported. However, serum dihydrotestosterone levels in the finasteride/minoxidil group significantly decreased from baseline (p\u2009=\u20090.016).\n\n\nCONCLUSION\nA topical combination of 0.25% finasteride and 3% minoxidil may be a promising option in the treatment of FPHL with an additional benefit of increasing hair diameter. Nevertheless, as it may be absorbed percutaneously, it should be reserved for postmenopausal women.\n\n\nTRIAL REGISTRATION\nclinicaltrials.in.th; identifier TCTR20160912002."}
{"_id": "9c3ca80ef875a356e642a33b19c954da3b217e01", "title": "Exercise treatment for major depression: maintenance of therapeutic benefit at 10 months.", "text": "OBJECTIVE\nThe purpose of this study was to assess the status of 156 adult volunteers with major depressive disorder (MDD) 6 months after completion of a study in which they were randomly assigned to a 4-month course of aerobic exercise, sertraline therapy, or a combination of exercise and sertraline.\n\n\nMETHODS\nThe presence and severity of depression were assessed by clinical interview using the Diagnostic Interview Schedule and the Hamilton Rating Scale for Depression (HRSD) and by self-report using the Beck Depression Inventory. Assessments were performed at baseline, after 4 months of treatment, and 6 months after treatment was concluded (ie, after 10 months).\n\n\nRESULTS\nAfter 4 months patients in all three groups exhibited significant improvement; the proportion of remitted participants (ie, those who no longer met diagnostic criteria for MDD and had an HRSD score <8) was comparable across the three treatment conditions. After 10 months, however, remitted subjects in the exercise group had significantly lower relapse rates (p = .01) than subjects in the medication group. Exercising on one's own during the follow-up period was associated with a reduced probability of depression diagnosis at the end of that period (odds ratio = 0.49, p = .0009).\n\n\nCONCLUSIONS\nAmong individuals with MDD, exercise therapy is feasible and is associated with significant therapeutic benefit, especially if exercise is continued over time."}
{"_id": "9840ce26a49f46159b880588c508074aa9a82620", "title": "Hardware Intrinsic Security from Physically Unclonable Functions", "text": "Counterfeiting of goods in general and of electronic goods in particular is a growing concern with a huge impact on the global economy, the society, and the security of its critical infrastructure. Various examples are known where companies suffer from economic and brand damage due to competition with counterfeit goods. In some cases the use of counterfeit components has even led to tragic accidents in which lives were lost. It has also recently become clear that counterfeit products can penetrate the critical and security infrastructure of our modern societies and hence cause a threat to national security. One of the difficulties to deal with this problem stems from the fact that counterfeit goods can originate from sources that are able to make copies that are very hard to distinguish from their legitimate counterpart. A first well-known aspect of counterfeiting is product cloning. A second much less known but increasingly dangerous aspect consists of overproduction of goods. A special, but modern, case of counterfeiting is theft of Intellectual Property such as software and designs. The attractive part from the attackers\u2019 point of view is that it is relatively easy to steal and has a high value without having to do huge investments in research and development. From a high-level point of view one can state that the attack can be thwarted by using encryption and authentication techniques. Device configuration data or embedded software can, for example, be encrypted such that it will only run on the device possessing the correct cryptographic key. Since encrypted data is still easy to copy, it now becomes essential that the secret key is well protected against copying or cloning. In order to deal with these two aspects of counterfeiting, a secret unclonable identifier is required together with strong cryptographic protocols. In this chapter we focus on a new way to address these problems: Hardware Intrinsic Security. It is based on the implementation and generation of secret physically unclonable identifiers used in conjunction with cryptographic techniques such as encryption and"}
{"_id": "e796b6083fa2bdcca5ac26fe6d7019c90f4909fd", "title": "Development and Applications of CRISPR-Cas9 for Genome Engineering", "text": "Recent advances in genome engineering technologies based on the CRISPR-associated RNA-guided endonuclease Cas9 are enabling the systematic interrogation of mammalian genome function. Analogous to the search function in modern word processors, Cas9 can be guided to specific locations within complex genomes by a short RNA search string. Using this system, DNA sequences within the endogenous genome and their functional outputs are now easily edited or modulated in virtually any organism of choice. Cas9-mediated genetic perturbation is simple and scalable, empowering researchers to elucidate the functional organization of the genome at the systems level and establish causal linkages between genetic variations and biological phenotypes. In this Review, we describe the development and applications of Cas9 for a variety of research or translational applications while highlighting challenges as well as future directions. Derived from a remarkable microbial defense system, Cas9 is driving innovative applications from basic biology to biotechnology and medicine."}
{"_id": "4ffe6540cade360db97d316fbb50e63616f36f5d", "title": "A waveguide slot array antenna with integrated T-shaped filters in the corporate-feed circuit", "text": "In this paper, we demonstrate a waveguide slot array antenna by integration of T-shaped filters using the diffusion bonding of laminated thin metal plates. The excellent performance of the T-shaped filter has been experimentally verified by the in-band transmission and the out-of-band rejection of the fabricated antenna."}
{"_id": "d814c27616f7171a4267102e743defeba49eda19", "title": "Information overload: why some people seem to suffer more than others", "text": "We studied information overload among senior managers in an industrial company. We used the critical incident collection technique to gather specific examples of information overload and coping strategies. We then used textual interpretation and the affinity diagram technique to interpret the interviews and to categorize our respondents, the critical incidents they described, and the coping strategies they mentioned. Our results show that the extent to which people suffer from information overload is closely related to the strategies they use to deal with it."}
{"_id": "846b9fdefdc100b234e943e803760b7f0beaceee", "title": "Do you care if a computer says sorry?: user experience design through affective messages", "text": "While traditional HCI research emphasizes usability based on models of cognition, user experience (UX) focuses on affect and emotion through the provision of positive interactive experiences. Providing affective cues, such as apologetic on-screen display messages, appears to be a way to influence users' affective states as well as their perceptions toward an information retrieval system. A study was designed to determine whether users' affect and perceptions differ between three types of systems: neutral, apologetic, and non-apologetic. Our results revealed that the users perceived the apologetic system as more aesthetically appealing and usable than the neutral or non-apologetic system. The result also showed that users' frustration was the lowest when using the apologetic system. We discuss the implications of these results in designing a more experience-centered system."}
{"_id": "e0a051ad689963e33eaa854ed4bf849b91240f34", "title": "Parsing sewing patterns into 3D garments", "text": "We present techniques for automatically parsing existing sewing patterns and converting them into 3D garment models. Our parser takes a sewing pattern in PDF format as input and starts by extracting the set of panels and styling elements (e.g. darts, pleats and hemlines) contained in the pattern. It then applies a combination of machine learning and integer programming to infer how the panels must be stitched together to form the garment. Our system includes an interactive garment simulator that takes the parsed result and generates the corresponding 3D model. Our fully automatic approach correctly parses 68% of the sewing patterns in our collection. Most of the remaining patterns contain only a few errors that can be quickly corrected within the garment simulator. Finally we present two applications that take advantage of our collection of parsed sewing patterns. Our garment hybrids application lets users smoothly interpolate multiple garments in the 2D space of patterns. Our sketch-based search application allows users to navigate the pattern collection by drawing the shape of panels."}
{"_id": "7aebab236759a7a3682335d7f20ad9dba2678669", "title": "Easy Questions First? A Case Study on Curriculum Learning for Question Answering", "text": "Cognitive science researchers have emphasized the importance of ordering a complex task into a sequence of easy to hard problems. Such an ordering provides an easier path to learning and increases the speed of acquisition of the task compared to conventional learning. Recent works in machine learning have explored a curriculum learning approach called selfpaced learning which orders data samples on the easiness scale so that easy samples can be introduced to the learning algorithm first and harder samples can be introduced successively. We introduce a number of heuristics that improve upon selfpaced learning. Then, we argue that incorporating easy, yet, a diverse set of samples can further improve learning. We compare these curriculum learning proposals in the context of four non-convex models for QA and show that they lead to real improvements in each of them."}
{"_id": "a37609517767206a906856f9a45f2a61abc64ed1", "title": "Improving additive manufacturing by image processing and robotic milling", "text": "This paper describes an improvement of Additive Manufacturing (AM) to process products with complex inner and outer geometries such as turbine blades by using a combination of AM, optical measuring techniques and robotics. One problem of AM is the rough surface caused by the layer-by-layer 3D printing with granular material, so the geometries need to be post-processed with milling techniques. To overcome this problem, we implement an inline quality control management to post-process several manufactured layers by a integrated imaging and robotic techniques, especially while the geometries are still accessible. The produced surfaces are measured by an inline stereo vision system and are compared with the real 3D CAD model data. Detected differences are translated to movements of a robot arm which then follows the contours to smoothen the surfaces with a milling tool. We show our current state on the 3D imaging, on developing the driver for the deployed robot arm and on a 3D simulation using the Robot Operating System (ROS)."}
{"_id": "64e431814db6e7078d78c45c4fe20e6867c02551", "title": "Intelligent Detection System for e-banking Phishing websites using Fuzzy Data Mining", "text": "Detecting and identifying e-banking Phishing websites is really a complex and dynamic problem involving many factors and criteria. Because of the subjective considerations and the ambiguities involved in the detection, Fuzzy Data Mining Techniques can be an effective tool in assessing and identifying e-banking phishing websites since it offers a more natural way of dealing with quality factors rather than exact values. In this paper, we present novel approach to overcome the \u201efuzziness\u201f in the e-banking phishing website assessment and propose an intelligent resilient and effective model for detecting e-banking phishing websites. The proposed model is based on Fuzzy logic combined with Data Mining algorithms to characterize the e-banking phishing website factors and to investigate its techniques by classifying there phishing types and defining six e-banking phishing website attack criteria\u201fs with a layer structure. A Case study was applied to illustrate and simulate the phishing process. Our experimental results showed the significance and importance of the e-banking phishing website criteria (URL & Domain Identity) represented by layer one, and the variety influence of the phishing characteristic layers on the final e-banking phishing website rate. KeywordsPhishing, Fuzzy Logic, Data Mining, Classification, association, apriori, e-banking risk assessment"}
{"_id": "2d0f96c27695b213a55bcd5681bf1dba9dc5ad94", "title": "Watermarking, Tamper-Proofing, and Obfuscation-Tools for Software Protection", "text": "We identify three types of attack on the intellectual property contained in software, and three corresponding technical defenses. A potent defense against reverse engineering is obfuscation, a process that renders software unintelligible but still functional. A defense against software piracy is watermarking, a process that makes it possible to determine the origin of software. A defense against tampering is tamper-proofing, so that unauthorized modifications to software (for example to remove a watermark) will result in non-functional code. We briefly survey the available technology for each type of defense."}
{"_id": "59ad775077e687699411f7073d1e264a2e6171da", "title": "A multidisciplinary approach to 3D survey and reconstruction of historical buildings", "text": "The aim of this research is to suggest a methodology based on 3D survey and reconstructive modeling, suitable to increase the actual knowledge of an historical building and supporting its historical interpretation. The case study used for testing the proposed methodology is the huge Chartreuse of Pavia, with a special focus on a relatively unexplored portion of the monument. The survey, based on 3D laser scanning and orthoimages, integrated by historical studies and other complementary information (thermoluminescence dating, IR imaging, hystorical analysis), allowed to read all the architectural aspects hidden in this highly architecturally stratified monument, improving in this way the comprehension of the building's transformations in time. A 3D reconstruction approach was then suggested, merging several information of different nature, from the actual geometry of the building to the interpretation of historical documents, suggesting a sequence of diachronic models as virtual narration of the historical evolution. On other hand the 3D models were used to obtain a cross-validation of the historical evolution hypotheses developed by experts in the various disciplines involved in the project. The data collected were exploited through a web portal in order to enhance the readability of tangible and intangible heritage associated to that Chartreuse portion, nowadays not accessible to common public."}
{"_id": "bdf174423e25f03c9e774de129131ba05392a04a", "title": "Design and Validation of Computer Protocols", "text": "This report aims at giving a short overview on rank function analysis of authentication protocols concentrating on the results of Schneider in [4] and Heather in [2]. Therefore it shows a standard form of protocol implementation in CSP, shows how authentication properties can be captured as trace specifications, gives basic definitions necessary to reason about the unbounded set of messages, reformulates the central rank function theorem and restates it proof. Additionally it gives hints on applying this approach to verify secrecy protocols as well, a short overview on tool support available and some hints on interesting properties of rank functions. This report will use notations introduced in earlier reports [11-16] throughout and only reintroduce notations where they are considered vital."}
{"_id": "a0ff427a1b832ae3617ae8d95e71d34082ab850b", "title": "Entry into Platform-Based Markets \u2217", "text": "We develop a theoretical model to examine the relative importance of platform quality, indirect network effects, and consumer expectations on the success of entrants in platform-based markets. We find that an entrant\u2019s success depends on the strength of indirect network effects and on the consumers\u2019 discount factor for future applications. We illustrate the model\u2019s applicability by examining Xbox\u2019s entry into the video game industry. We find that Xbox had a small quality advantage over the incumbent, PlayStation 2, and the strength of indirect network effects and the consumers\u2019 discount factor, while statistically significant, fall in the region where PlayStation 2\u2019s position is unsustainable."}
{"_id": "feab13692036096e673738624fc736d9d8f32e6c", "title": "Learning Styles and Learning Spaces : A Review of the Multidisciplinary Application of Experiential Learning Theory in Higher Education", "text": "Using the concept of learning style and learning space to describe the interface between learners and the learning environment in higher education, in this chapter we review studies addressing how learning style information and the experiential learning model have been applied to improve teaching and learning in sixteen different academic fields and professions. Studies suggest that experiential learning affords educators a way to design and implement teaching and learning strategies conducive to creating a learning environment beneficial for both faculty and students with possibilities for institutional wide dissemination of its core principles and practices. Drawing from the findings generated from the studies, we propose guiding principles for creating growth promoting learning spaces throughout higher education."}
{"_id": "3b70fbcd6c0fdc7697c93d0c3fb845066cf34487", "title": "A First Look at Music Composition using LSTM Recurrent Neural Networks", "text": "In general music composed by recurrent neural networks (RNNs) suffers from a lack of global structure. Though networks can learn note-by-note transition probabilities and even reproduce phrases, attempts at learning an entire musical form and using that knowledge to guide composition have been unsuccessful. The reason for this failure seems to be that RNNs cannot keep track of temporally distant events that indicate global music structure. Long Short-Term Memory (LSTM) has succeeded in similar domains where other RNNs have failed, such as timing & counting and CSL learning. In the current study I show that LSTM is also a good mechanism for learning to compose music. I compare this approach to previous attempts, with particular focus on issues of data representation. I present experimental results showing that LSTM successfully learns a form of blues music and is able to compose novel (and I believe pleasing) melodies in that style. Remarkably, once the network has found the relevant structure it does not drift from it: LSTM is able to play the blues with good timing and proper structure as long as one is willing to listen."}
{"_id": "e260c71d3e1da03ae874c9d479307637d1b0d045", "title": "Stock Trend Prediction Using Regression Analysis \u2013 A Data Mining Approach", "text": "Organizations have been collecting data for decades, building massive data warehouses in which to store the data. Even though this data is available, very few of these organizations have been able to realize the actual value stored in it. The question these organizations are asking is how to extract meaningful data and uncover patterns and relationship from their databases. This paper presents a study of regression analysis for use in stock price prediction. Data were obtained from the daily official list of the prices of all shares traded on the stock exchange published by the Nigerian Stock Exchange using banking sector of Nigerian economy with three banks namely:First Bank of Nigeria Plc, Zenith Bank Plc, and Skye Bank Plc to build a database. A data mining software tool was used to uncover patterns and relationships and also to extract values of variables from the database to predict the future values of other variables through the use of time series data that employed moving average method. The tools were found capable technique to describe the trends of stock market prices and predict the future stock market prices of three banks sampled."}
{"_id": "ad8ad53f33799cf346b8b6ea76f1d91e498f63be", "title": "Dual-Band Loop-Dipole Composite Unidirectional Antenna for Broadband Wireless Communications", "text": "The design of a novel dual-band loop-dipole composite unidirectional antenna for broadband wireless applications is proposed. First, design guidelines of the antenna are presented, and then, a step-by-step derivation of the antenna geometry is drawn. The antenna's operation principle is analyzed by carrying out parametric studies of surface current distribution, impedance and radiation patterns. The reflection coefficient, radiation patterns and gain of the antenna are numerically and experimentally studied to verify the operation principle and the design approach. It is demonstrated that two pass-bands, from 1.54 to 3.24 and from 4.88 to 6.80 GHz, of reflection coefficient smaller than -10 dB are obtained. The antenna has stable and unidirectional radiation pattern with low cross polarization within its pass-bands. The average gain of the antenna is 6.7 dBi at low frequency band and 6.5 dBi at high frequency band."}
{"_id": "51fe1d8999b48a499fc711df1a27ce6966fd2f65", "title": "Fundamental Limitations of Small Antennas", "text": "A capacitor or inductor operating as a small antenna is theoretically capable of intercepting a certain amount of power, independent of its size, on the assumption of tuning without circuit loss. The practical efficiency relative to this ideal is limited by the \"radiation power factor\" of the antenna as compared with the power factor and bandwidth of the antenna tuning. The radiation power factor of either kind of antenna is somewhat greater than (1/6\u03c0) (Ab/l2) in which Ab is the cylindrical volume occupied by the antenna, and l is the radianlength (defined as 1/2\u03c0 wavelength) at the operating frequency. The efficiency is further limited by the closeness of coupling of the antenna with its tuner. Other simple formulas are given for the more fundamental properties of small antennas and their behavior in a simple circuit. Examples for 1-Mc. operation in typical circuits indicate a loss of about 35 db for the I.R.E. standard capacitive antenna, 43 db for a large loop occupying a volume of 1 meter square by 0.5 meter axial length, and 64 db for a loop of 1/5 these dimensions."}
{"_id": "555bec05549f4d6688f9566e04b46c5e14020af6", "title": "Design of a Compact Dual-Band Annular-Ring Slot Antenna", "text": "This letter aims at miniaturizing an annular-ring slot antenna that is suitable for the 2.4/5-GHz dual-band operations. The miniaturization purpose was achieved by embedding in the center patch of the antenna a pair of slots to excite three resonant modes. The resonant band of the first excited resonant mode was lowered from that of the unperturbed annular-ring slot antenna, whereas those of the second and third excited resonant modes were combined to form a wide upper operating band by appropriately adjusting the dimensions of the embedded slots. The proposed antenna proves to have very similar copolarization radiation patterns in its two operating bands and have enough antenna gains for practical applications."}
{"_id": "7e972fadea51b28b37f036f2641bb34eacee2665", "title": "The Magnetoelectric Dipole\u2014A Wideband Antenna for Base Stations in Mobile Communications", "text": "In this paper, stringent requirements imposed on the design of base station antennas for mobile communications are summarized. Conventional techniques for implementing base station antennas are reviewed. The complementary antenna concept of combining an electric dipole with a magnetic dipole is reconsidered. Recently, this kind of antenna has been commonly called a \u201cHuygen's source.\u201d The purpose is to develop wideband unidirectional antennas with stable frequency characteristics and low back radiation. Based on this concept, the magnetoelectric dipole was invented by integrating an electric dipole with an -probe fed shorted quarter-wave patch antenna. A number of magnetoelectric dipoles with different radiation patterns and different polarizations have been developed in recent years. An overview of the characteristics of this new class of complementary antennas is presented. Major design challenges are explained. Finally, a new magnetoelectric dipole that is low in profile and robust in structure is presented. The magnetic dipole part of this antenna is realized by a triangular-shaped loop antenna. The antenna is inherently direct current (dc) grounded, which satisfies the requirement for outdoor applications."}
{"_id": "b3a7ef30218832c1ab69cf18bee5aebb39dc746f", "title": "A UWB Unidirectional Antenna With Dual-Polarization", "text": "A novel \u00b145\u00b0 dual-polarized unidirectional antenna element is presented, consisting of two cross center-fed tapered mono-loops and two cross electric dipoles located against a reflector for ultrawideband applications. The operation principle of the antenna including the use of elliptically tapered transmission line for transiting the unbalanced energy to the balanced energy is described. Designs with different reflectors-planar or conical-are investigated. A measured overlapped impedance bandwidth of 126% (SWR <; 2) is demonstrated. Due to the complementary nature of the structure, the antenna has a relatively stable broadside radiation pattern with low cross polarization and low back lobe radiation over the operating band. The measured gain of the proposed antenna varies from 4 to 13 dBi and 7 to 14.5 dBi for port 1 and port 2, respectively, over the operating band, when mounted against a conical backed reflector. The measured coupling between the two ports is below -25 dB over the operating band."}
{"_id": "6f53f8488ad22d8d8424d595bd56630a230a96fa", "title": "Digital Ordering System for Restaurant Using Android", "text": "Nowadays web services technology is widely used to integrate heterogeneous systems and develop new applications. Here an application of integration of hotel management systems by web services technology is presented. Digital Hotel Management integrates lots of systems of hotel industry such as Ordering System Kitchen Order Ticket (KOT), Billing System, Customer Relationship Management system (CRM) together. This integration solution can add or expand hotel software system in any size of hotel chains environment. This system increases quality and speed of service. This system also increases attraction of place for large range of customers. Implementing this system gives a cost-efficient opportunity to give your customers a personalized service experience where they are in control choosing what they want, when they want it \u2013 from dining to ordering to payment and feedback. We are implementing this system using android application for Tablet PC\u2019s. The front end will be developed using JAVA Android and the backend will work on MySQL database. Index TermsDFD: Data Flow Diagram. DOSRUA: Digital Ordering System for Restaurant Using KOT: Kitchen Order Ticket Android UML: Unified Modeling Language."}
{"_id": "8a59945d2940efa4502a06b13f12dedbda0425a3", "title": "Acute Effects of Cocaine on Human Brain Activity and Emotion", "text": "We investigated brain circuitry mediating cocaine-induced euphoria and craving using functional MRI (fMRI). During double-blind cocaine (0.6 mg/kg) and saline infusions in cocaine-dependent subjects, the entire brain was imaged for 5 min before and 13 min after infusion while subjects rated scales for rush, high, low, and craving. Cocaine induced focal signal increases in nucleus accumbens/subcallosal cortex (NAc/SCC), caudate, putamen, basal forebrain, thalamus, insula, hippocampus, parahippocampal gyrus, cingulate, lateral prefrontal and temporal cortices, parietal cortex, striate/extrastriate cortices, ventral tegmentum, and pons and produced signal decreases in amygdala, temporal pole, and medial frontal cortex. Saline produced few positive or negative activations, which were localized to lateral prefrontal cortex and temporo-occipital cortex. Subjects who underwent repeat studies showed good replication of the regional fMRI activation pattern following cocaine and saline infusions, with activations on saline retest that might reflect expectancy. Brain regions that exhibited early and short duration signal maxima showed a higher correlation with rush ratings. These included the ventral tegmentum, pons, basal forebrain, caudate, cingulate, and most regions of lateral prefrontal cortex. In contrast, regions that demonstrated early but sustained signal maxima were more correlated with craving than with rush ratings; such regions included the NAc/SCC, right parahippocampal gyrus, and some regions of lateral prefrontal cortex. Sustained negative signal change was noted in the amygdala, which correlated with craving ratings. Our data demonstrate the ability of fMRI to map dynamic patterns of brain activation following cocaine infusion in cocaine-dependent subjects and provide evidence of dynamically changing brain networks associated with cocaine-induced euphoria and cocaine-induced craving."}
{"_id": "a2bf50825812b2f5d59e11b38345c4dca02279ac", "title": "Interaction in the segmentation of medical images: A survey", "text": "Segmentation of the object of interest is a difficult step in the analysis of digital images. Fully automatic methods sometimes fail, producing incorrect results and requiring the intervention of a human operator. This is often true in medical applications, where image segmentation is particularly difficult due to restrictions imposed by image acquisition, pathology and biological variation. In this paper we present an early review of the largely unknown territory of human-computer interaction in image segmentation. The purpose is to identify patterns in the use of interaction and to develop qualitative criteria to evaluate interactive segmentation methods. We discuss existing interactive methods with respect to the following aspects: the type of information provided by the user, how this information affects the computational part, and the purpose of interaction in the segmentation process. The discussion is based on the potential impact of each strategy on the accuracy, repeatability and interaction efficiency. Among others, these are important aspects to characterise and understand the implications of interaction to the results generated by an interactive segmentation method. This survey is focused on medical imaging, however similar patterns are expected to hold for other applications as well."}
{"_id": "093acf67883f50d5c9c8b1d9b3ef58029218287c", "title": "Mathematical Models of Fads Explain the Temporal Dynamics of Internet Memes", "text": "Internet memes are a pervasive phenomenon on the social Web. They typically consist of viral catch phrases, images, or videos that spread through instant messaging, (micro) blogs, forums, and social networking sites. Due to their popularity and proliferation, Internet memes attract interest in areas as diverse as marketing, sociology, or computer science and have been dubbed a new form of communication or artistic expression. In this paper, we examine the merits of such claims and analyze how collective attention into Internet memes evolves over time. We introduce and discuss statistical models of the dynamics of fads and fit them to meme related time series obtained from Google Trends. Given data as to more than 200 memes, we find that our models provide more accurate descriptions of the dynamics of growth and decline of collective attention to individual Internet memes than previous approaches from the literature. In short, our results suggest that Internet memes are nothing but fads."}
{"_id": "868cf1e77b2b242ed29e7309684bfcbacdd323c5", "title": "Targeting Horror via Level and Soundscape Generation", "text": "Horror games form a peculiar niche within game design paradigms, as they entertain by eliciting negative emotions such as fear and unease to their audience during play. This genre often follows a specific progression of tension culminating at a metaphorical peak, which is defined by the designer. A player\u2019s tension is elicited by several facets of the game, including its mechanics, its sounds, and the placement of enemies in its levels. This paper investigates how designers can control and guide the automated generation of levels and their soundscapes by authoring the intended tension of a player traversing them. Procedural content generation (PCG) is an extensive area of game research, and is often used as an effective method to reduce content creation costs and increase game longevity (Togelius et al. 2011). However, research in procedural audio is uncommon in the field (Collins 2013), likely due to the additional demands that sound often requires. Digital games are multi-faceted creative domains (Liapis, Yannakakis, and Togelius 2014), where facets such as audio, visuals, levels and game mechanics work in conjunction to create interactive digital experiences (Lopes and Yannakakis 2014). This paper investigates the interplay between level design and sound, where designers define the player experience while the system generates levels and their respective soundscapes to accommodate the designer\u2019s intentions. The survival horror genre is unique in its heavy reliance on sound to convey negative affective states such as shock, disgust, ecstasy, fear and relief (Ekman and Lankoski 2009). It also focuses on exploration and hiding as players have limited combat ability (e.g. no weapons or limited ammunition). These complex characteristics of player affect raise important challenges for the generation of levels and soundscapes, where the focus is in evoking these types of emotions. For instance, an interesting question is how a level generator can anticipate and influence the affective state of a player, while consistently balancing feelings such as stress and relief; or how players navigate through a level under the effects of stress caused by previously encountered monsters. This paper tackles some of these challenges by exploring the generation of levels and soundscapes in the survival horror genre Copyright c \u00a9 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. while also operating on a simplified player model, simulating horror gameplay during level traversal. This paper presents an extension to the Sonancia prototype (Lopes, Liapis, and Yannakakis 2015), a system capable of generating multiple facets of horror games. Levels consist of rooms in a haunted mansion that are procedurally generated via genetic search, while audio snippets are distributed throughout the rooms (i.e. level sonification). Earlier work focused on generating the architecture of the level autonomously based on the distance of a singular path. This paper focuses on the generation of levels which include added gameplay elements (i.e. monsters, quest items), and on the generation of their soundscapes, where notions of tension and suspense are used to drive the level and soundscape generative process respectively. Sonancia allows designers to define the flow of relaxation and tension through the generated level. 
To do this, the system requires two tension curves: the desired (designerspecified) tension curve (DTC) and the actual (level-based) tension curve (LTC). The system searches, via a genetic algorithm, for room setups where the actual tension curve more closely matches the desired one, so that generated levels and their respective soundscapes follow the designer specifications as closely as possible, while still maintaining a degree of variability."}
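The DTC/LTC matching that drives the genetic search can be sketched as a fitness function: score a candidate room layout by how closely the tension curve it induces tracks the designer's curve. The toy tension dynamics below (a fixed increment per monster, decay in empty rooms) are invented for illustration and are not Sonancia's actual model.

```python
import numpy as np

def level_tension_curve(rooms):
    """Toy LTC: tension accumulates with each monster met along the
    level's path and decays in empty rooms (relief)."""
    tension, curve = 0.0, []
    for monsters in rooms:
        tension = min(1.0, tension + 0.25 * monsters) if monsters else tension * 0.6
        curve.append(tension)
    return np.array(curve)

def fitness(rooms, desired):
    """GA fitness: negative mean distance between the designer's
    tension curve (DTC) and the one the candidate level produces."""
    return -np.abs(level_tension_curve(rooms) - desired).mean()

desired = np.array([0.1, 0.3, 0.5, 0.3, 0.8, 1.0])  # designer's DTC
candidate = [0, 1, 1, 0, 2, 3]                      # monsters per room
print(fitness(candidate, desired))                  # -0.025, a close match
```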
{"_id": "496b885e79c4247f2b977a56668e4ec3f0037a61", "title": "The Pervasive Effects of an Antibiotic on the Human Gut Microbiota, as Revealed by Deep 16S rRNA Sequencing", "text": "The human intestinal microbiota is essential to the health of the host and plays a role in nutrition, development, metabolism, pathogen resistance, and regulation of immune responses. Antibiotics may disrupt these coevolved interactions, leading to acute or chronic disease in some individuals. Our understanding of antibiotic-associated disturbance of the microbiota has been limited by the poor sensitivity, inadequate resolution, and significant cost of current research methods. The use of pyrosequencing technology to generate large numbers of 16S rDNA sequence tags circumvents these limitations and has been shown to reveal previously unexplored aspects of the \"rare biosphere.\" We investigated the distal gut bacterial communities of three healthy humans before and after treatment with ciprofloxacin, obtaining more than 7,000 full-length rRNA sequences and over 900,000 pyrosequencing reads from two hypervariable regions of the rRNA gene. A companion paper in PLoS Genetics (see Huse et al., doi: 10.1371/journal.pgen.1000255) shows that the taxonomic information obtained with these methods is concordant. Pyrosequencing of the V6 and V3 variable regions identified 3,300-5,700 taxa that collectively accounted for over 99% of the variable region sequence tags that could be obtained from these samples. Ciprofloxacin treatment influenced the abundance of about a third of the bacterial taxa in the gut, decreasing the taxonomic richness, diversity, and evenness of the community. However, the magnitude of this effect varied among individuals, and some taxa showed interindividual variation in the response to ciprofloxacin. While differences of community composition between individuals were the largest source of variability between samples, we found that two unrelated individuals shared a surprising degree of community similarity. In all three individuals, the taxonomic composition of the community closely resembled its pretreatment state by 4 weeks after the end of treatment, but several taxa failed to recover within 6 months. These pervasive effects of ciprofloxacin on community composition contrast with the reports by participants of normal intestinal function and with prior assumptions of only modest effects of ciprofloxacin on the intestinal microbiota. These observations support the hypothesis of functional redundancy in the human gut microbiota. The rapid return to the pretreatment community composition is indicative of factors promoting community resilience, the nature of which deserves future investigation."}
{"_id": "975452a2cf9999997ed1b6eca253d4ea66eabfbd", "title": "An FPGA-based processor for training convolutional neural networks", "text": "Convolutional neural networks (CNNs) have gained great success in various computer vision applications. However, training a CNN model is computation-intensive and time-consuming. Hence training is mainly processed on large clusters of high-performance processors like server CPUs and GPUs. In this paper, we propose an FPGA-based processor design to accelerate the training process of CNNs. We first analyze the operations in all types of CNN layers in the training process. A uniform computation engine design is proposed to efficiently carry out all kinds of operations based on the analysis. Then a scalable accelerator framework is presented that exploits the parallelism further by unrolling the loops in two levels. The proposed accelerator design is demonstrated by implementing a processor on the Xilinx ZU19EG FPGA working at 200 MHz. The evaluation results on a group of CNN models show that our processor is 5.7 to 10.7-fold faster than the software implementations on the Intel Core i5-4440 CPU(@3.10GHz)."}
{"_id": "3fac2cfdbaafad06a511a96cfd40be6263c24182", "title": "Transformer Health Condition Monitoring Through GSM Technology", "text": ". Transformers are a vital part of the transmission and distribution system. Monitoring transformers for problems before they occur can prevent faults that are costly to repair and result in a loss of service. Current systems can provide information about the state of a transformer, but are either offline or very expensive to implement. Transformers being the essential part of power transmission system are expensive, as is the cost of power interruptions. Because of the cost of scheduled and unscheduled maintenance, especially at remote sites, the utility industry has begun investing in instrumentation and monitoring of transformer. On-line transformer diagnostics using conventional technologies like carrier power line communication, Radio frequency based control system, and Supervisory control and data acquiring systems, Distributed control systems and Internet based communications are having their own limitations. GSM is an open, digital cellular technology used for transmitting mobile voice and data services. This project objective is to develop low cost solution for monitoring health condition of remotely located distribution transformers using GSM technology to preent premature failure of distribution transformers and improving reliability of services to the customers. An Embedded based hardware design is developed to acquire data from electrical sensing system. It consists of a sensing system, signal conditioning electronic circuits, advanced embedded hardware for middle level computing, a powerful computer network for further transmission of data to various places. A powerful GSM networking is designed to send data from a network to other network for proper corrective action at the earliest. Any change in parameters of transmission is sensed to protect the entire transmission and distribution. The performance of prototype model developed is tested at laboratory for monitoring various parameters like transformer over load, voltage fluctuations, over temperature, oil quality and level etc."}
{"_id": "ef7d230c4f24ccc1c70fe9fbfb8936488aea92ba", "title": "Learning Joint Multilingual Sentence Representations with Neural Machine Translation", "text": "In this paper, we use the framework of neural machine translation to learn joint sentence representations across six very different languages. Our aim is that a representation which is independent of the language, is likely to capture the underlying semantics. We define a new crosslingual similarity measure, compare up to 1.4M sentence representations and study the characteristics of close sentences. We provide experimental evidence that sentences that are close in embedding space are indeed semantically highly related, but often have quite different structure and syntax. These relations also hold when comparing sentences in different languages."}
{"_id": "c4a3619d7cd10160b1d15599330011f3f9f046a3", "title": "Periodic Leaky-Wave Antenna for Millimeter Wave Applications Based on Substrate Integrated Waveguide", "text": "Substrate integrated waveguides (SIW) are built up of periodically arranged metallic via-holes or via-slots. The leakage loss of SIW structures increases with the distance between the via-holes or via-slots. An open periodic waveguide with a large via distance supports the propagation of leaky-wave modes and can thus be used for the design of a leaky-wave antenna. In this paper, this leakage loss is studied in detail and used to design a periodic leaky-wave antenna. The proposed concept represents an excellent choice for applications in the millimeter-wave band. Due to its versatility, the finite difference frequency domain method for periodic guided-wave or leaky-wave structures is used to analyze the characteristics of the proposed periodic leaky-wave antenna. Two modes (TE10 and TE20) are investigated and their different leaky-wave properties are analyzed. Based on the proposed leaky-mode analysis method, a novel periodic leaky-wave antenna at 28-34 GHz is designed and fabricated."}
{"_id": "b2e31539f68c2c278f507fd6ff8da0d75ac67157", "title": "Open Information Extraction on Scientific Text: An Evaluation", "text": "Open Information Extraction (OIE) is the task of the unsupervised creation of structured information from text. OIE is often used as a starting point for a number of downstream tasks including knowledge base construction, relation extraction, and question answering. While OIE methods are targeted at being domain independent, they have been evaluated primarily on newspaper, encyclopedic or general web text. In this article, we evaluate the performance of OIE on scientific texts originating from 10 different disciplines. To do so, we use two state-of-the-art OIE systems using a crowd-sourcing approach. We find that OIE systems perform significantly worse on scientific text than encyclopedic text. We also provide an error analysis and suggest areas of work to reduce errors. Our corpus of sentences and judgments are made available."}
{"_id": "cb52554f0b28304e8a2de813ac572267ecd7ce46", "title": "Complications of botulinum toxin A use in facial rejuvenation.", "text": "The esthetic application of botulinum toxin type A (Botox) is a safe treatment modality; nevertheless complications can occur as a result of patient-and physician-related factors. Fortunately, adverse effects and undesirable sequelae after Botox injections are temporary. Complications may be more serious in patients who have more severe rhytids (which require more Botox), previous facial plastic surgery (altered anatomy), and those who have pre-existing neuromuscular disease. The physician can reduce complications by using proper injection techniques, appropriate regional Botox dosing, and by being conservative in the overall approach to Botox-mediated facial rejuvenation."}
{"_id": "b01fc869ca410830b009adbd2f747f1f41bc6287", "title": "Adipose Hypertrophy Following Cryolipolysis.", "text": "Cryolipolysis has become a popular non-invasive treatment for unwanted fatty collections. The benefits are minimal down time, short recovery time, and decreased pain, although pain is the most frequent complaint. The author recently treated a patient who presented with an enlarging adipose collection on his lower abdomen following cryolipolysis with CoolSculpting(\u00ae) (ZELTIQ Aesthetics, Inc., Pleasanton, CA)."}
{"_id": "620b7d0d5e2ceeba58d808dc3d7b09a9fb57831c", "title": "Frequency-following (microphonic-like) neural responses evoked by sound.", "text": ""}
{"_id": "44561e7a54649b2b7aa2ba08f2842d754f648e23", "title": "Action recognition with deep neural networks", "text": "In this work, we have invesigated the action recognition problem using the Charades Action Recognition Dataset with 157 action classes. We have compared the results of different techniques such as extreme learning machines, support vector machines, and decision trees, applied on the features extracted with deep neural networks and the scene-action conditional probabilities."}
{"_id": "5ddf0c1bcf8d2da4beb10799805ab670fc26a7c5", "title": "Real time personalized search on social networks", "text": "Internet users are shifting from searching on traditional media to social network platforms (SNPs) to retrieve up-to-date and valuable information. SNPs have two unique characteristics: frequent content update and small world phenomenon. However, existing works are not able to support these two features simultaneously. To address this problem, we develop a general framework to enable real time personalized top-k query. Our framework is based on a general ranking function that incorporates time freshness, social relevance and textual similarity. To ensure efficient update and query processing, there are two key challenges. The first is to design an index structure that is update-friendly while supporting instant query processing. The second is to efficiently compute the social relevance in a complex graph. To address these challenges, we first design a novel 3D cube inverted index to support efficient pruning on the three dimensions simultaneously. Then we devise a cube based threshold algorithm to retrieve the top-k results, and propose several pruning techniques to optimize the social distance computation, whose cost dominates the query processing. Furthermore, we optimize the 3D index via a hierarchical partition method to enhance our pruning on the social dimension. Extensive experimental results on two real world large datasets demonstrate the efficiency and the robustness of our proposed solution."}
{"_id": "793dd7d95a90215165707864c6d09729e33f89cd", "title": "D.L.D., Dynamic Lighting Design: Parametric Interactive Lighting Software in Urban Public Space", "text": "D.L.D., Dynamic Lighting Design, is a three year, E.U. funded, ongoing research project, concerning ways of designing dynamic luminous environments for pedestrians in open public spaces. This paper presents a set of ideas about how space and the surface of architecture are defined in a dynamic luminous environment, which enhance pedestrian lighting and encourage people to walk and visit public spaces. The D.L.D., parametric interactive lighting software in urban public space, will also be outlined in this paper."}
{"_id": "9b73ff2de1a8fde14e485ea587108de22c0dc7d3", "title": "Motivated Reinforcement Learning", "text": "The standard reinforcement learning view of the involvement of neuromodulatory systems in instrumental conditioning includes a rather straightforward conception of motivation as prediction of sum future reward. Competition between actions is based on the motivating characteristics of their consequent states in this sense. Substantial, careful, experiments reviewed in Dickinson & Balleine, 12,13 into the neurobiology and psychology of motivation shows that this view is incomplete. In many cases, animals are faced with the choice not between many different actions at a given state, but rather whether a single response is worth executing at all. Evidence suggests that the motivational process underlying this choice has different psychological and neural properties from that underlying action choice. We describe and model these motivational systems, and consider the way they interact."}
{"_id": "dc89011f7c6ee838c33faccbb85405e656170d17", "title": "CleanM: An Optimizable Query Language for Unified Scale-Out Data Cleaning", "text": "Data cleaning has become an indispensable part of data analysis due to the increasing amount of dirty data. Data scientists spend most of their time preparing dirty data before it can be used for data analysis. At the same time, the existing tools that attempt to automate the data cleaning procedure typically focus on a specific use case and operation. Still, even such specialized tools exhibit long running times or fail to process large datasets. Therefore, from a user\u2019s perspective, one is forced to use a different, potentially inefficient tool for each category of errors. This paper addresses the coverage and efficiency problems of data cleaning. It introduces CleanM (pronounced clean\u2019em), a language which can express multiple types of cleaning operations. CleanM goes through a three-level translation process for optimization purposes; a different family of optimizations is applied in each abstraction level. Thus, CleanM can express complex data cleaning tasks, optimize them in a unified way, and deploy them in a scaleout fashion. We validate the applicability of CleanM by using it on top of CleanDB, a newly designed and implemented framework which can query heterogeneous data. When compared to existing data cleaning solutions, CleanDB a) covers more data corruption cases, b) scales better, and can handle cases for which its competitors are unable to terminate, and c) uses a single interface for querying and for data cleaning."}
{"_id": "2f9f80840e89f99998273747b1c364980d02aee8", "title": "A Teachable-Agent Arithmetic Game's Effects on Mathematics Understanding, Attitude and Self-efficacy", "text": "A teachable-agent arithmetic game is presented and evaluated in terms of student performance, attitude and self-eff icacy. An experimental prepost study design was used, enrolling 153 3 rd and 5 grade students in Sweden. The playing group showed significantly larger gains i math performance and self-efficacy beliefs, but not in general attitude towards math, compared to control groups. The contributions in relation to pr evious work include a novel educational game being evaluated, and an emphasis o n elf-efficacy in the study as a strong predictor of math achievements."}
{"_id": "171a4ef673e40d09d7091082c7fd23b3758fc3c2", "title": "Video-based face recognition using ensemble of haar-like deep convolutional neural networks", "text": "Growing number of surveillance and biometric applications seek to recognize the face of individuals appearing in the viewpoint of video cameras. Systems for video-based FR can be subjected to challenging operational environments, where the appearance of faces captured with video cameras varies significantly due to changes in pose, illumination, scale, blur, expression, occlusion, etc. In particular, with still-to-video FR, a limited number of high-quality facial images are typically captured for enrollment of an individual to the system, whereas an abundance facial trajectories can be captured using video cameras during operations, under different viewpoints and uncontrolled conditions. This paper presents a deep learning architecture that can learn a robust facial representation for each target individual during enrollment, and then accurately compare the facial regions of interest (ROIs) extracted from a still reference image (of the target individual) with ROIs extracted from live or archived videos. An ensemble of deep convolutional neural networks (DCNNs) named HaarNet is proposed, where a trunk network first extracts features from the global appearance of the facial ROIs (holistic representation). Then, three branch networks effectively embed asymmetrical and complex facial features (local representations) based on Haar-like features. In order to increase the discriminativness of face representations, a novel regularized triplet-loss function is proposed that reduces the intra-class variations, while increasing the inter-class variations. Given the single reference still per target individual, the robustness of the proposed DCNN is further improved by fine-tuning the HaarNet with synthetically-generated facial still ROIs that emulate capture conditions found in operational environments. The proposed system is evaluated on stills and videos from the challenging COX Face and Chokepoint datasets according to accuracy and complexity. Experimental results indicate that the proposed method can significantly improve performance with respect to state-of-the-art systems for video-based FR."}
{"_id": "12c5179c88f6ebf42a5b965a7440461ff67eaed8", "title": "Double Refinement Network for Efficient Indoor Monocular Depth Estimation", "text": "Monocular Depth Estimation is an important problem of Computer Vision that may be solved with Neural Networks and Deep Learning nowadays. Though recent works in this area have shown significant improvement in accuracy, state-of-the-art methods require large memory and time resources. The main purpose of this paper is to improve performance of the latest solutions with no decrease in accuracy. To achieve this, we propose a Double Refinement Network architecture. We evaluate the results using the standard benchmark RGB-D dataset NYU Depth v2. The results are equal to the current state-of-the-art, while frames per second rate of our approach is significantly higher (up to 15 times speedup per image with batch size 1), RAM per image is significantly lower."}
{"_id": "64b50b400222676965e5876bde3a6a0eccac8bb7", "title": "The Possibility of an Epidemic Meme Analogy for Web Community Population Analysis", "text": "The aim of this paper is to discuss the possibility of understanding human social interaction in web communities by analogy with a disease propagation model from epidemiology. When an article is submitted by an individual to a social web service, it is potentially influenced by other participants. The submission sometimes starts a long and argumentative chain of articles, but often does not. This complex behavior makes management of server resources difficult and a more theoretical methodology is required. This paper tries to express these complex human dynamics by analogy with infection by a virus. In this first report, by fitting an epidemiological model to Bulletin Board System (BBS) logs in terms of a numerical triple, we show that the analogy is reasonable and beneficial because the analogy can estimate the community size despite the submitter\u2019s information alone being observable."}
{"_id": "cf46c0a735634f0a80cc6c2ab549cdcfe9b85b18", "title": "Millimeter-wave Gbps Broadband Evolution towards 5G: Fixed Access and Backhaul", "text": "\u2014 As wireless communication evolves towards 5G, both fixed broadband and mobile broadband will play a crucial part in providing the Gbps infrastructure for a connected society. This paper proposes a Millimeter-wave Gbps Broadband (MGB) system as the solution to two critical problems in this evolution: last-mile access for fixed broadband and small cell backhaul for mobile broadband. The key idea is to use spectrum that is already available in the millimeter wave bands for fixed wireless access with optimized dynamic beamforming and massive MIMO infrastructure to achieve high capacity with wide area coverage. This paper explains the MGB concept and describes potential array architectures for realizing the system. Simulations demonstrate that with 500 MHz of bandwidth (at 39 GHz band) and 28 dBm transmission power (55 dBm EIRP), it is possible to provide more than 11 Gbps backhaul capacity for 96 small cells within 1-km radius."}
{"_id": "0ee93827fad11535a8bed25b8d00eea35f41770e", "title": "Predicting Elections with Twitter: What 140 Characters Reveal about Political Sentiment", "text": "Twitter is a microblogging website where users read and write millions of short messages on a variety of topics every day. This study uses the context of the German federal election to investigate whether Twitter is used as a forum for political deliberation and whether online messages on Twitter validly mirror offline political sentiment. Using LIWC text analysis software, we conducted a content analysis of over 100,000 messages containing a reference to either a political party or a politician. Our results show that Twitter is indeed used extensively for political deliberation. We find that the mere number of messages mentioning a party reflects the election result. Moreover, joint mentions of two parties are in line with real world political ties and coalitions. An analysis of the tweets\u2019 political sentiment demonstrates close correspondence to the parties' and politicians\u2019 political positions indicating that the content of Twitter messages plausibly reflects the offline political landscape. We discuss the use of microblogging message content as a valid indicator of political sentiment and derive suggestions for further research."}
{"_id": "42212cf0a0bd477bf8a2abb668e0189e3d5d21d2", "title": "Science and society. Social media and the elections.", "text": "C R E D IT : J O E S U T IL F F /J O E S U T L IF F .C O M I n the United States, social media sites\u2014such as Facebook, Twitter, and YouTube\u2014are currently being used by two out of three people ( 1), and search engines are used daily ( 2). Monitoring what users share or search for in social media and on the Web has led to greater insights into what people care about or pay attention to at any moment in time. Furthermore, it is also helping segments of the world population to be informed, to organize, and to react rapidly. However, social media and search results can be readily manipulated, which is something that has been underappreciated by the press and the general public. In times of political elections, the stakes are high, and advocates may try to support their cause by active manipulation of social media. For example, altering the number of followers can affect a viewer\u2019s conclusion about candidate popularity. Recently, it was noted that the number of followers for a presidential candidate in the United States surged by over 110 thousand within one single day, and analysis showed that most of these followers are unlikely to be real people ( 3). We can model propaganda efforts in graph-theoretic terms, as attempts to alter our \u201ctrust network\u201d: Each of us keeps a mental trust network that helps us decide what and what not to believe ( 4). The nodes in this weighted network are entities that we are already familiar with (people, institutions, and ideas), and the arcs are our perceived connections between these entities. The weights on the nodes are values of trust and distrust that we implicitly assign to every entity we know. A propagandist is trying to make us alter connections and values in our trust network, i.e., trying to infl uence our perception about the candidates for the coming elections, and thus \u201chelp us\u201d decide on candidates of their choice. The Web, as seen by search engines ( 5), is similarly a weighted network that is used to rank search results. The hyperlinks are considered \u201cvotes of support,\u201d and the weights are a computed measurement of importance assigned to Web pages (the nodes in the graph). It is also the target of propaganda attacks, known as \u201cWeb spam\u201d ( 6). A Web spammer is trying to alter the weighted Web network by adding connections and values that support his or her cause, aimed at affecting the search engine\u2019s ranking decisions and thus the number of viewers who see the page and consider it important ( 4). \u201cGoogle bomb\u201d is a type of Web spam that is widely known and applicable to all major search engines today. Exploiting the descriptive power of anchor text (the phrase directly associated with a hyperlink), Web spammers create associations between anchor words or phrases and linked Web pages. These associations force a search engine to give high relevancy to results that would otherwise be unrelated, sending them to the \u201ctop 10\u201d search results. A well-known Google bomb was the association of the phrase \u201cmiserable failure\u201d with the Web page of President G. W. Bush initially and later with those of Michael Moore, Hillary Clinton, and Jimmy Carter ( 7). Another Google bomb associated candidate John Kerry with the word \u201cwaffl es\u201d in 2004. A cluster of Google bombs was used in an effort to infl uence the 2006 congressional elections. 
Google has adjusted its ranking algorithm to defuse Google bombs on congressional candidates by restricting the selection of the top search results when querying their names ( 8). During the 2008 and 2010 elections, it proved impossible to launch any successful Google bombs on politicians, and it is hoped that the trend will continue. During the 2010 Massachusetts Special Election (MASEN) to fi ll the seat vacated by the death of Senator Ted Kennedy, we saw attempts to infl uence voters just before the elections, launched by out-of-state political groups ( 9). Propagandists exploited a loophole introduced by the feature of including real-time information from social networks in the \u201ctop 10\u201d results of Web searches. They ensured that their message was often visible by repeatedly posting the same tweet. A third of all election-related tweets sent during the week before the 2010 MASEN were tweet repeats ( 9). All search engines have since reacted by moving real-time results out of the organic results (results selected purely by information retrieval algorithms) and into a separate search category. \u201cTwitter bombs,\u201d however, are likely to be launched within days of the elections. A Twitter bomb is the act of sending unsolicited replies to specifi c users via Twitter in order to get them to pay attention to one\u2019s cause. Typically, it is done effectively by means of \u201cbots,\u201d short-lived programs that can send a large quantity of tweets automatically. Twitter is good at shutting most of them down because of their activity patterns and/or users\u2019 complaints. However, bombers have used fake \u201creplies\u201d to spam real users who are not aware of their existence. For example, in the 2010 MASEN, political spammers created nine fake accounts that were used to send about 1000 tweets before being blocked by Twitter for spamming ( 9). Their messages were carefully focused, however, targeting users who in the previous hours were discussing the elections. With the retweeting help of similarly minded users, >60,000 Twitter accounts were reached within a day at essentially no cost. Twitter bombs, unfortunately, have become common practice. Department of Computer Science, Wellesley College, 106 Central Street, Wellesley, MA 02481, USA. E-mail: pmetaxas@wellesley. edu; emustafa@wellesley.edu US"}
{"_id": "5045d1e1204d754e9cae6d999f1fc03f7d990575", "title": "Information network or social network?: the structure of the twitter follow graph", "text": "In this paper, we provide a characterization of the topological features of the Twitter follow graph, analyzing properties such as degree distributions, connected components, shortest path lengths, clustering coefficients, and degree assortativity. For each of these properties, we compare and contrast with available data from other social networks. These analyses provide a set of authoritative statistics that the community can reference. In addition, we use these data to investigate an often-posed question: Is Twitter a social network or an information network? The \"follow\" relationship in Twitter is primarily about information consumption, yet many follows are built on social ties. Not surprisingly, we find that the Twitter follow graph exhibits structural characteristics of both an information network and a social network. Going beyond descriptive characterizations, we hypothesize that from an individual user's perspective, Twitter starts off more like an information network, but evolves to behave more like a social network. We provide preliminary evidence that may serve as a formal model of how a hybrid network like Twitter evolves."}
{"_id": "71d55130ae9a96564ec176ff081ac5cba574db5c", "title": "Does Twitter motivate involvement in politics? Tweeting, opinion leadership, and political engagement", "text": ""}
{"_id": "4323ea81a04766acc38287254e0c3ff30985dbe5", "title": "Rule Based Machine Translation Combined with Statistical Post Editor for Japanese to English Patent Translation", "text": "Since sentences in patent texts are long, they are difficult to translate by a machine. Although statistical machine translation is one of the major streams of the field, long patent sentences are difficult to translate not using syntactic analysis. We propose the combination of a rule based method and a statistical method. It is a rule based machine translation (RMT) with a statistical based post editor (SPE). The evaluation by the NIST score shows RMT+SPE is more accurate than RMT only. Manual checks, however, show the outputs of RMT+SPE often have strange expressions in the target language. So we propose a new evaluation measure NMG (normalized mean grams). Although NMG is based on n-gram, it counts the number of words in the longest word sequence matches between the test sentence and the target language reference corpus. We use two reference corpora. One is the reference translation only the other is a large scaled target language corpus. In the former case, RMT+SPE wins in the later case, RMT wins."}
{"_id": "e6cb179de4964a3f572d5cd458eabe62aa8b8d3c", "title": "Designing web usability", "text": "Where you can find the designing web usability easily? Is it in the book store? On-line book store? are you sure? Keep in mind that you will find the book in this site. This book is very referred for you because it gives not only the experience but also lesson. The lessons are very valuable to serve for you, that's not about who are reading this designing web usability book. It is about this book that will give wellness for all people from many societies."}
{"_id": "0b0a2604e2452bcdd82613e3fa760aeb332e9d84", "title": "Reliable Circuit Techniques for Low-Voltage Analog Design in Deep Submicron Standard CMOS: A Tutorial", "text": "We present in this paper an overview of circuit techniques dedicated to design reliable low-voltage (1-V and below) analog functions in deep submicron standard CMOS processes. The challenges of designing such low-voltage and reliable analog building blocks are addressed both at circuit and physical layout levels. State-ofthe-art circuit topologies and techniques (input level shifting, bulk and current driven, DTMOS), used to build main analog modules (operational amplifier, analog CMOS switches) are covered with the implementation of MOS capacitors."}
{"_id": "81b1991bfc9e32a3af7bd00bb84a3b4a60007f19", "title": "Scrum Metrics for Hyperproductive Teams: How They Fly like Fighter Aircraft", "text": "Scrum Teams use lightweight tools like Story Points, the Burn down chart, and Team Velocity. While essential, these tools alone provide insufficient information to maintain a high energy state that yields Hyper productivity. More data is required, but data collection itself can slow Teams. This effect must be avoided when productivity is the primary marker of success. Here we describe nine metrics that can develop and sustain Hyper productive Teams -- Velocity, Work Capacity, Focus Factor, Percentage of Adopted Work, Percentage of Found Work, Accuracy of Estimation, Accuracy of Forecast, Targeted Value Increase, Success at Scale, and the Win/Loss Record of the Team. The unique contribution of this paper is to demonstrate how a light touch and lightweight strategy can be used to compare Teams with different Story Point reference scales."}
{"_id": "521c6c94d52b405b7d5fdf733abb82fdfca267ce", "title": "A fast nearest neighbor search algorithm by nonlinear embedding", "text": "We propose an efficient algorithm to find the exact nearest neighbor based on the Euclidean distance for large-scale computer vision problems. We embed data points nonlinearly onto a low-dimensional space by simple computations and prove that the distance between two points in the embedded space is bounded by the distance in the original space. Instead of computing the distances in the high-dimensional original space to find the nearest neighbor, a lot of candidates are to be rejected based on the distances in the low-dimensional embedded space; due to this property, our algorithm is well-suited for high-dimensional and large-scale problems. We also show that our algorithm is improved further by partitioning input vectors recursively. Contrary to most of existing fast nearest neighbor search algorithms, our technique reports the exact nearest neighbor - not an approximate one - and requires a very simple preprocessing with no sophisticated data structures. We provide the theoretical analysis of our algorithm and evaluate its performance in synthetic and real data."}
{"_id": "963ff9c54e09dc1eb52717df50cd06225cf739aa", "title": "Nostalgia and the emotional tone and content of song lyrics.", "text": "Emotion and topic were manipulated in original song lyrics. Participants completed Batcho's and Holbrook's nostalgia surveys and rated 6 sets of lyrics for happiness, sadness, anger, nostalgia, meaning, liking, and relevance. Nostalgic lyrics were characterized by bittersweet reverie, loss of the past, identity, and meaning. Contrary to theories linking nostalgia to pathology, participants who scored high on Batcho's measure of personal nostalgia preferred happy lyrics, found them more meaningful, and related more closely to them. Consistent with theories relating nostalgia to social connectedness, high-nostalgia respondents preferred other-directed to solitary themes. Historical nostalgia was associated with relating more closely to sad lyrics."}
{"_id": "d6c3474859835d23b605a0fe65f8208dd24478a7", "title": "Assistive Robotic Exoskeleton for Helping Limb Girdle Muscular Dystrophy", "text": "The Limb Girdle Muscular Dystrophy (LGMD) is a heterogeneous group of muscle diseases, this is manifested by a progressive loss of the strength of the muscles of the pelvic and shoulders girdles. Several devices that can compensate for the loss of some motors skills have been developed in support for the activities of patients presenting such dystrophy. Those devices include the pole or treadmill that can be used with an exoskeleton to improve the phase of displacement among other activities. This work shows a new mechanical design for a robotic-exoskeleton with seven movement axis, having as purpose to help people, with certain LGMD dystrophy or diseases characteristics, in generate movements of flexion/extension in the lower extremity joints and realize the gait cycle. All of the design movement's analysis is simulated in Solidworks\u00ae and OpenSim\u00ae software."}
{"_id": "0cd6ed1c6d5be7d75ff53a8778ab62115b87cab9", "title": "CNN-based modulation classification in the complicated communication channel", "text": "Machine Learning are gradually applied in Automatic Modulation Classification (AMC) domain these years. However, in the past researches, the experiment conditions are mostly idealized or simplified. This paper proposes a modified structure of Convolutional Neural Network (CNN), which is proved to have strong capability of classification. The wavelet denoising technology is exploited to restrain the high frequency noise for input signal. As an analysis of the feasibility of this method for complicated channel, we compare the proposed method with the commonly used AMC methods through a large number of signal sequences, taking consider of various distractors. The result shows that the method proposed outperforms on most rugged conditions."}
{"_id": "61dc7b3bcdff4e0fec03880ee079ae06f8fe32f9", "title": "An Integrated Bayesian Approach for Effective Multi-Truth Discovery", "text": "Truth-finding is the fundamental technique for corroborating reports from multiple sources in both data integration and collective intelligent applications. Traditional truth-finding methods assume a single true value for each data item and therefore cannot deal will multiple true values (i.e., the multi-truth-finding problem). So far, the existing approaches handle the multi-truth-finding problem in the same way as the single-truth-finding problems. Unfortunately, the multi-truth-finding problem has its unique features, such as the involvement of sets of values in claims, different implications of inter-value mutual exclusion, and larger source profiles. Considering these features could provide new opportunities for obtaining more accurate truth-finding results. Based on this insight, we propose an integrated Bayesian approach to the multi-truth-finding problem, by taking these features into account. To improve the truth-finding efficiency, we reformulate the multi-truth-finding problem model based on the mappings between sources and (sets of) values. New mutual exclusive relations are defined to reflect the possible co-existence of multiple true values. A finer-grained copy detection method is also proposed to deal with sources with large profiles. The experimental results on three real-world datasets show the effectiveness of our approach."}
{"_id": "87c631d44c288c3596da809d2e55061c48036cdb", "title": "SLAM and Static Driver Verifier: Technology Transfer of Formal Methods inside Microsoft", "text": "A tutorial introduction to designs in unifying theories of programming p. 40 An integration of program analysis and automated theorem proving p. 67 Verifying controlled components p. 87 Efficient CSP[subscript Z] data abstraction p. 108 State/event-based software model checking p. 128 Formalising behaviour trees with CSP p. 148 Generating MSCs from an integrated formal specification language p. 168 UML to B : formal verification of object-oriented models p. 187 Software verification with integrated data type refinement for integer arithmetic p. 207"}
{"_id": "3b5dd53e94efe51d17056b46fabe50715f9dcbba", "title": "Replicator neural networks for universal optimal source coding.", "text": "Replicator neural networks self-organize by using their inputs as desired outputs; they internally form a compressed representation for the input data. A theorem shows that a class of replicator networks can, through the minimization of mean squared reconstruction error (for instance, by training on raw data examples), carry out optimal data compression for arbitrary data vector sources. Data manifolds, a new general model of data sources, are then introduced and a second theorem shows that, in a practically important limiting case, optimal-compression replicator networks operate by creating an essentially unique natural coordinate system for the manifold."}
{"_id": "c877261e8f268cadefd0087345ac6f7b2ec825f2", "title": "Coarse to Fine: Diffusing Categories in Wikipedia", "text": "Automatic taxonomy construction aims to build a categorization system without human efforts. Traditional textual pattern based methods extract hyponymy relation in raw texts. However, these methods usually yield low precision and recall. In this paper, we propose a method to automatically find diffusing attributes to a category from Wikipedia infoboxes. We use the diffusing attribute to diffuse a coarsegrained category into several fine-grained subcategories and generate a finer-grained taxonomy. Experiments show our method can find proper diffusing attributes to categories across various domains."}
{"_id": "26b16f5815407ab59d2cc4589459bd71c92ae32e", "title": "Conditional Generators of Words Definitions", "text": "We explore recently introduced definition modeling technique that provided the tool for evaluation of different distributed vector representations of words through modeling dictionary definitions of words. In this work, we study the problem of word ambiguities in definition modeling and propose a possible solution by employing latent variable modeling and soft attention mechanisms. Our quantitative and qualitative evaluation and analysis of the model shows that taking into account words ambiguity and polysemy leads to performance improvement."}
{"_id": "534c940868bdc3af1ae9e3ce0751ce616f07d3a4", "title": "Probabilistic wavelet transform for partial discharge measurement of transformer", "text": "Partial discharge (PD) measurement provides a means for online monitoring and diagnosis of transformers. However, extensive interferences and noise can significantly jeopardize the measured PD signals and cause ambiguity in PD measurement interpretation. Necessary PD signal de-noising techniques need to be adopted and wavelet transform is one of such techniques. Mother wavelet selection, decomposition level determination and thresholding are important processes for effective PD extraction using wavelet transform. Various methods have been proposed in the literature to improve the above processes of wavelet transform. In these methods a single threshold is normally adopted at each decomposition level and a binary decision is made to indicate whether an extracted signal is PD signal or noise. However, in online PD measurements it is difficult to find a threshold, which can be used for extracting only PD signals without including any noise. As such, the signals determined by a single threshold cannot be assured as PD signals with certainty. To address the limitations caused by the single thresholding method in wavelet transform for PD signals extraction, this paper proposes quantile based multi-scale thresholding method at each decomposition level, which can thus provide probability indexes for the extracted signals evaluating the likelihood of these signals to be PD signals. To evaluate the proposed method, PD measurements have been conducted on both experimental PD models and inservice transformers at substation. The results are provided in the paper."}
{"_id": "65424ccfbf1ae9498e909aa94d42d2af2a111268", "title": "Comparisons of waist circumferences measured at 4 sites.", "text": "BACKGROUND\nWaist circumference (WC) is now accepted as a practical measure of adipose tissue distribution. Four body sites for WC measurements are commonly used, as follows: immediately below the lowest ribs (WC1), the narrowest waist (WC2), the midpoint between the lowest rib and the iliac crest (WC3), and immediately above the iliac crest (WC4).\n\n\nOBJECTIVE\nWe sought to compare the magnitude and reliability of WC measured at these 4 sites in males and females.\n\n\nDESIGN\nWC was measured at each site 1 time in all subjects [49 males and 62 females, aged 7-83 y, with a body mass index (in kg/m(2)) of 9-43] and 3 times in a subgroup (n = 93) by one experienced observer using a heavy-duty inelastic tape. Body fat was measured in a subgroup (n = 74) with the use of dual-energy X-ray absorptiometry.\n\n\nRESULTS\nThe mean values of WC were WC2 < WC1 < WC3 < WC4 (P < 0.01) in females and WC2 < WC1, WC3, and WC4 (P < 0.01) in males. For all 4 sites, measurement reproducibility was high, with intraclass correlation (r) values > 0.99. WC values were significantly correlated with fatness; correlations with trunk fat were higher than correlations with total body fat in both sexes.\n\n\nCONCLUSIONS\nWC values at the 4 commonly used anatomic sites differ in magnitude depending on sex, are highly reproducible, and are correlated with total body and trunk adiposity in a sex-dependent manner. These observations have implications for the use of WC measurements in clinical practice and patient-oriented research."}
{"_id": "cac52342ddf353ec45981663beae278560ab8b45", "title": "Making the Best of Mifare Classic Update", "text": "What would you do if you would be instructed to make a secure application built on the Mifare Classic? Arguably, due to the vulnerabilities shown in [5] and hinted at on [6], this is rather difficult and it may be easier to use to another chip. This document explores what the best is you can get, if the only option is Mifare Classic. We propose countermeasures against state restoration and against cloning. The effectiveness of these countermeasures depends on the absense of other vulnerabilities of"}
{"_id": "4afed47c94910622498f33fbd780bbe9dd1202ba", "title": "Reaching out: involving Users in Innovation Tasks through Social Media", "text": "Integrating social media into the innovation process can open up the potential for organizations to utilize the collective creativity of consumers from all over the world. The research in this paper sets out to identify how social media can facilitate innovation. By taking a Design Science Research approach this research presents the Social Media Innovation Method for matching innovation tasks with social media characteristics. This supports the selection of best suitable social media and can help organizations to achieve their innovation goals. At the core of the method is the honeycomb model which describes seven social media characteristics on three dimensions: audience, content and time. The method has been evaluated by using an approach called scenario walkthrough that is applied in a real-life spatial planning project. This research concludes that there is no one-size-fits-all answer to the question how social media can be of value for the innovation process. However, organizations that want to know how it can benefit their own innovation process can use the Social Media Innovation Method presented in this research as a way to provide an answer to that question, uniquely tailored to each innovation task for which social media is to be used."}
{"_id": "c4dadc1b8e1bac024d42307d95627b91b18028b6", "title": "A study on the bending mechanism of the flexible ureteroscope", "text": "Flexible ureteroscope is increasingly used for urologic treatment today. The key mechanism of the ureteroscope is the bending mechanism. It consists of a bending head and a control lever connected through a set of wire. The lighting and imaging optics as well as the surgical tools all go through the bending head. With the demand for improved performance and reduced size, more research on its design is still needed. In this paper, the modeling and simulation of an ureteroscope are presented, which pave the road for design optimization in the future."}
{"_id": "2c8d5fc54717f4e910d1dedea622d1ffbb14333a", "title": "Setting our bibliographic references free: towards open citation data", "text": "Purpose. Citation BLOCKIN BLOCKIN data BLOCKIN BLOCKIN needs BLOCKIN BLOCKIN to BLOCKIN BLOCKIN be BLOCKIN BLOCKIN recognized BLOCKIN BLOCKIN as BLOCKIN BLOCKIN a BLOCKIN BLOCKIN part BLOCKIN BLOCKIN of BLOCKIN BLOCKIN the BLOCKIN BLOCKIN Commons BLOCKIN BLOCKIN \u2013 BLOCKIN BLOCKIN those BLOCKIN BLOCKIN works BLOCKIN BLOCKIN that are freely and legally available for sharing \u2013 and placed in an open repository. Design/methodology/approach. The Open Citation Corpus is a new open repository of scholarly citation data, made available under a Creative Commons CC0 1.0 public domain dedication and encoded as Open Linked Data using the SPAR Ontologies. BLOCKIN Annotation Tools and CiTalO, to facilitate the semantic enhancement of the references in scholarly papers according to Semantic Publishing models and technologies. BLOCKIN publishers BLOCKIN BLOCKIN and BLOCKIN BLOCKIN institutions BLOCKIN BLOCKIN may BLOCKIN BLOCKIN freely BLOCKIN BLOCKIN build BLOCKIN BLOCKIN upon, BLOCKIN BLOCKIN enhance BLOCKIN BLOCKIN and reuse the open citation data for any purpose, without restriction under copyright or database law."}
{"_id": "155448563c354b01d12610b5864b511644cfeb27", "title": "Mapping Images to Sentiment Adjective Noun Pairs with Factorized Neural Nets", "text": "We consider the visual sentiment task of mapping an image to an adjective noun pair (ANP) such as \u201dcute baby\u201d. To capture the two-factor structure of our ANP semantics as well as to overcome annotation noise and ambiguity, we propose a novel factorized CNN model which learns separate representations for adjectives and nouns but optimizes the classification performance over their product. Our experiments on the publicly available SentiBank dataset show that our model significantly outperforms not only independent ANP classifiers on unseen ANPs and on retrieving images of novel ANPs, but also image captioning models which capture word semantics from co-occurrence of natural text; the latter turn out to be surprisingly poor at capturing the sentiment evoked by pure visual experience. That is, our factorized ANP CNN not only trains better from noisy labels, generalizes better to new images, but can also expands the ANP vocabulary on its own."}
{"_id": "ec6790251ff0c4c687203f9c4e62a4c5836d0ac1", "title": "Pattern electrical stimulation of the human retina", "text": "Experiments were conducted to study if electrical stimulation of the retinal surface can elicit visual sensation in individuals blind from end-stage retinitis pigmentosa (RP) or age-related macular degeneration (AMD). Under local anesthesia, different stimulating electrodes were inserted through the eyewall and positioned over the surface of the retina. Subjects' psychophysical responses to electrical stimulation were recorded. Subjects perceived simple forms in response to pattern electrical stimulation of the retina. A non-flickering perception was created with stimulating frequencies between 40 and 50 Hz. The stimulation threshold was dependent on the targeted retinal area (macular versus extramacular)."}
{"_id": "1877a83c46ca9bf54194e460d628c67c370a4076", "title": "Top-down facilitation of visual recognition.", "text": "Cortical analysis related to visual object recognition is traditionally thought to propagate serially along a bottom-up hierarchy of ventral areas. Recent proposals gradually promote the role of top-down processing in recognition, but how such facilitation is triggered remains a puzzle. We tested a specific model, proposing that low spatial frequencies facilitate visual object recognition by initiating top-down processes projected from orbitofrontal to visual cortex. The present study combined magnetoencephalography, which has superior temporal resolution, functional magnetic resonance imaging, and a behavioral task that yields successful recognition with stimulus repetitions. Object recognition elicited differential activity that developed in the left orbitofrontal cortex 50 ms earlier than it did in recognition-related areas in the temporal cortex. This early orbitofrontal activity was directly modulated by the presence of low spatial frequencies in the image. Taken together, the dynamics we revealed provide strong support for the proposal of how top-down facilitation of object recognition is initiated, and our observations are used to derive predictions for future research."}
{"_id": "e7bc4f036ff905080773b6767e4d9610a83a9503", "title": "Models and Statistical Inference : The Controversy between Fisher and Neyman \u2013 Pearson", "text": "The main thesis of the paper is that in the case of modern statistics, the differences between the various concepts of models were the key to its formative controversies. The mathematical theory of statistical inference was mainly developed by Ronald A. Fisher, Jerzy Neyman, and Egon S. Pearson. Fisher on the one side and Neyman\u2013Pearson on the other were involved often in a polemic controversy. The common view is that Neyman and Pearson made Fisher\u2019s account more stringent mathematically. It is argued, however, that there is a profound theoretical basis for the controversy: both sides held conflicting views about the role of mathematical modelling. At the end, the influential programme of Exploratory Data Analysis is considered to be advocating another, more instrumental conception of models."}
{"_id": "4c0675bd4ca1355ffaef105299bb57e7131696f9", "title": "Learning how to match fresco fragments", "text": "One of the main problems faced during reconstruction of fractured archaeological artifacts is sorting through a large number of candidate matches between fragments to find the relatively few that are correct. Previous computer methods for this task provided scoring functions based on a variety of properties of potential matches, including color and geometric compatibility across fracture surfaces. However, they usually consider only one or at most a few properties at once, and therefore provide match predictions with very low precision. In this article, we investigate a machine learning approach that computes the probability that a match is correct based on the combination of many features. We explore this machine learning approach for ranking matches in three different sets of fresco fragments, finding that classifiers based on many match properties can be significantly more effective at ranking proposed matches than scores based on any single property alone. Our results suggest that it is possible to train a classifier on match properties in one dataset and then use it to rank predicted matches in another dataset effectively. We believe that this approach could be helpful in a variety of cultural heritage reconstruction systems."}
{"_id": "04a20cd0199d0a24fea8e6bf0e0cc61b26c1f3ac", "title": "Boosting the margin: A new explanation for the effectiveness of voting methods", "text": "One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik\u2019s support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance decomposition."}
{"_id": "0b6e98a6a8cf8283fd76fe1100b23f11f4cfa711", "title": "Matching pursuits with time-frequency dictionaries", "text": "We introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions a matching pursuit defines an adaptive time-frequency transform. We derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. We compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser."}
{"_id": "c9c9b50b51dc677ff83f58f1a5433b2a41321ec3", "title": "Support-Vector Networks", "text": "The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensures high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition."}
{"_id": "15febc703b7a537d9e2bedb2b2504bed9913cc0b", "title": "Self-Oscillating Flyback Converter for Mobile Batteries Charging Applications", "text": "The self-oscillating flyback converter is a popular circuit for cost-sensitive applications due to its simplicity and low component count. This paper presents self-oscillating fly-back converter for mobile batteries charging applications. The converter is designed to step-down the high voltage input to an acceptable level from 220 VAC to 7VDC. By adding indictor and capacitor the power factor and input currant increased, and became 500mA, where the old input currant was 250mA, Diode was connected between Emitter and collector, so the output voltage was improved, and has less ripple, in addition to the soft switching was obtained, simulation model is developed by using MULTISIM software. The operation principle of this converter is described using some operating waveforms based on simulation results and a detailed circuit operation analysis. Analytical results are verified by laboratory experiment. With an input of 220 VAC, 500mA and an output of 7V DC, the efficiency of circuit was75%."}
{"_id": "c7890690e6db6915b5d876b316f978d2e7dcddc6", "title": "Efficient and Low Latency Detection of Intruders in Mobile Active Authentication", "text": "Active authentication (AA) refers to the problem of continuously verifying the identity of a mobile device user for the purpose of securing the device. We address the problem of quickly detecting intrusions with lower false detection rates in mobile AA systems with higher resource efficiency. Bayesian and MiniMax versions of the quickest change detection (QCD) algorithms are introduced to quickly detect intrusions in mobile AA systems. These algorithms are extended with an update rule to facilitate low-frequency sensing which leads to low utilization of resources. Effectiveness of the proposed framework is demonstrated using three publicly available unconstrained face and touch gesture-based AA datasets. It is shown that the proposed QCD-based intrusion detection methods can perform better than many state-of-the-art AA methods in terms of latency and low false detection rates. Furthermore, it is shown that employing the proposed resource-efficient extension further improves the performance of the QCD-based setup."}
{"_id": "7173e6e30fe2e223c4443594bb0817f85fb3bef1", "title": "Don't Be Lazy, Be Consistent: Postgres-R, A New Way to Implement Database Replication", "text": "\u008c\u008e\u008d\u0090\u008f\u0091\u008d\u0090\u0092 \u008d\u0088\u0093\\\u0094\u0096\u0095 \u0094N\u0093\\\u0097|\u0098\u0088\u0099 \u0094N\u009a\\\u0093w\u009bO\u009cE\u008f\u0091\u0094N\u0099\u009e\u009d^\u009b\u0090\u0097|\u0099<\u008f\u007f\u009b\u0088\u009f \u008fe\u008f\\ \u008d\u0088\u008fe\u0094N\u008d\u0090\u0098\u0088\u0094N\u009aN\u00a1 \u009f \u009d^\u0095 \u008d\u0090\u008f\u0091\u0094 \u0094N\u00a2\u0090\u0094X\u009a\\\u00a3<\u00a4 \u0094N\u009a\u0091\u0094\u00a5\u009a\u0091\u0094N\u009d \u00a6|\u0097|\u00a7N\u008d\u0090\u008f\u0091\u0097|\u009b\u0090\u0099 \u0308\u0093\\\u009f<\u00a9R\u0094N\u009a\\\u0093d\u009cV\u009a\\\u009b\u0090a \u0097|\u0098\u0090 \u0095a\u0094N\u008d\u0090\u0095 \u00a6|\u009b<\u00a7 \u00ab \u009a\\\u008d\u0088\u008f\\\u0094N\u0093X\u00a1;aU\u0094N\u0093\\\u0093\u0091\u008d\u0090\u0098\u0088\u0094 \u009bO\u00a2\u0090\u0094X\u009a\\ \u0094N\u008d\u0088\u0095\u00ac\u008d\u0088\u0099 \u0095 \u009d^\u009b \u009b\u0088\u009a \u009a\\\u0094N\u0093\u0091\u009dW\u009b\u0088\u0099 \u0093\\\u00940\u008f\\\u0097|aU\u0094N\u0093N\u00ad@\u00aeu\u00997\u008f\u0091 \u0097|\u0093 \u009da\u008d\u0090\u009d^\u0094N\u009aN\u00a1e\u00a4o\u0094 \u0093\u0091 \u009bO\u00a4 \u008f\\ a\u008d\u0090\u008f \u008f\\ a\u0094N\u0093\\\u0094`\u00a6|\u0097|a \u0097|\u008f\u0091\u008d\u0090\u008f\\\u0097 \u0304\u009b\u0090\u0099 \u00930\u00a7N\u008d\u0088\u0099 \u0092W\u0094J\u00a7N\u0097|\u009a\\\u00a7X\u009f a \u00a2\u0090\u0094N\u0099<\u008f\u0091\u0094N\u0095 \u0092<\u00a3 \u009f \u0093\u0091\u0097|\u0099 \u00987\u008d \u00a7X\u009b\u0090a \u0092 \u0097|\u0099 \u008d\u0088\u008f\\\u0097|\u009b\u0090\u0099 \u009bO\u009c \u00ab \u0099a\u009bO\u00a4 \u0099I\u008d\u0088\u0099 \u0095J\u0099 \u009bO\u00a2\u0090\u0094X\u00a6 \u008f\\\u0094X\u00a7\u00b0 a\u0099 \u0097|\u00b1 \u009fa\u0094N\u0093N\u00ad 20\u009b\u0088\u009a\\\u0094N\u009bO\u00a2\u0088\u0094N\u009aN\u00a1\u007f\u00a4\u0096\u0094@\u0093\\ \u009bO\u00a43 \u009bO\u00a4\u00ac\u008f\u0091 \u0094\u009e\u009da\u009a\\\u009bO3 \u009d^\u009b\u0090\u0093\\\u0094X\u0095 \u0093\\\u009b\u0088\u00a6|\u009f \u008f\\\u0097|\u009b\u0090\u0099 \u00a7X\u008d\u0090\u0099 \u0092^\u00949\u0097|\u0099 \u00a7N\u009b\u0088\u009a\\\u009d^\u009b\u0090\u009a\\\u008d\u0088\u008f\\\u0094N\u0095U\u0097|\u0099 \u008f\u0091\u009b&\u008d&\u009a\u0091\u0094N\u008d\u0090\u00a6 \u0095 \u008d\u0088\u008f\\\u008d\u0090\u0092a\u008d\u0090\u0093\\\u0094w\u0093\\\u00a3<\u0093\\\u008f\u0091\u0094Na4\u00ade \u0301 \u0094*\u009d \u008d\u0088\u009dW\u0094X\u009aj\u0095a\u0097\u03bc\u0093\u0091\u00a7N\u009f \u0093\u0091\u0093\\\u0094N\u0093*\u008f\u0091 \u0094\u0096\u0099 \u0094N\u00a4 \u009d \u009a\u0091\u009b\u0090\u008f\\\u009b<\u00a7N\u009b\u0088\u00a6|\u0093 \u008d\u0090\u0099 \u0095`\u008f\u0091 \u0094N\u0097|\u009a \u0097|a \u009d \u00a6|\u0094NaU\u0094N\u0099<\u008f\\\u008d\u0088\u008f\\\u0097|\u009b\u0090\u0099 \u0097|\u0099\u00b7\u00b6[\u009b\u0088\u0093\\\u008fu3 \u0098\u0090\u009a\u0091\u0094_ \u0327 1 oe\u00adR\u00aeu\u008f\u008e\u008d\u0090\u00a6|\u0093\\\u009b \u009d \u009a\u0091\u009bO\u00a2<\u0097\u03bc\u0095a\u0094N\u00939\u0094N\u00bb<\u009d^\u0094N\u009a\\\u0097|aU\u0094N\u0099<\u008f\\\u008d\u0090\u00a6[\u009a\\\u0094N\u0093\u0091\u009f \u00a6|\u008f\\\u0093 \u009d \u009a\u0091\u009bO\u00a2 \u0097|\u0099 \u0098@\u008f\\ a\u008d\u0090\u008f a 
\u008d\u0088\u0099<\u00a3\u009e\u009b1\u20444\u009cj\u008f\\ a\u0094&\u0095 \u008d\u0090\u0099a\u0098\u0090\u0094N\u009a\u0091\u0093 \u008d\u0090\u0099 \u00950\u00a6|\u0097|a \u0097|\u008f\\\u008d1\u204443 \u008f\\\u0097|\u009b\u0088\u0099 \u0093 \u009bO\u009c \u009a\\\u0094N\u009d \u00a6|\u0097|\u00a7N\u008d\u0088\u008f\\\u0097|\u009b\u0090\u0099 \u00a7N\u008d\u0090\u0099J\u0092^\u00944\u008dO\u00a2\u0090\u009b\u0088\u0097\u03bc\u0095a\u0094N\u0095 \u0092<\u00a3 \u009f \u0093\u0091\u0097\u03bc\u0099a\u0098 \u008f\\ a\u0094 \u008d\u0088\u009d \u009d \u009a\u0091\u009b\u0090\u009d \u009a\u0091\u0097\u03bc\u008d\u0088\u008f\\\u0094 \u008f\\\u0094X\u00a7\u00b0 a\u0099 \u0097|\u00b1 \u009fa\u0094N\u0093N\u00ad"}
{"_id": "4f1147e0eecd463a4aceb710dcb58e7621b37d99", "title": "A neural network students' performance prediction model (NNSPPM)", "text": "In the academic industry, students' early performance prediction is important to academic communities so that strategic intervention can be planned before students reach the final semester. This paper presents a study on Artificial Neural Network (ANN) model development in predicting academic performance of engineering students. Cumulative Grade Point Average (CGPA) was used to measure the academic achievement at semester eight. The study was conducted at the Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM), Malaysia. Students' results for the fundamental subjects in the first semester were used as independent variables or input predictor variables while CGPA at semester eight was used as the output or the dependent variable. The study was done for two different entry points namely Matriculation and Diploma intakes. Performances of the models were measured using the coefficient of Correlation R and Mean Square Error (MSE). The outcomes from the study showed that fundamental subjects at semester one and three have strong influence in the final CGPA upon graduation."}
{"_id": "ffdceba3805493828cf9b65f29edd4d29eee9622", "title": "Deep Image Prior", "text": ""}
{"_id": "bb4cf037d8a5adbb3f08a3405d926d022b8c27c5", "title": "OpenCL: A Parallel Programming Standard for Heterogeneous Computing Systems", "text": "The OpenCL standard offers a common API for program execution on systems composed of different types of computational devices such as multicore CPUs, GPUs, or other accelerators."}
{"_id": "2b8de7727693b19e59c69bc70eb706f085e97346", "title": "E-LOYALTY IS NOT ALL ABOUT TRUST , PRICE ALSO MATTERS : EXTENDING EXPECTATION-CONFIRMATION THEORY IN BOOKSELLING WEBSITES", "text": "Identifying factors that influence customers\u2019 e-loyalty is paramount for practitioners and academics to develop successful marketing strategies and behavioral models. Online bookselling is a rapid growing industry in the UK, where e-loyalty models have yet to reach a conclusive argument. This paper aims to explore factors influencing customers\u2019 e-loyalty to five online bookselling websites in the UK by testing a theoretical model, based on expectation-confirmation theory. A quantitative approach was employed using questionnaires; the sample consisted of 290 respondents (50% males, age range: 18 to over 54). The questionnaire was pretested and confirmatory factor analysis was performed to assess the measurement model. Structural equation modeling and MANOVA were employed to examine the association between latent constructs. Eleven hypotheses were formulated examining different independent variables in the theoretical model; results showed a significant direct and positive association between satisfaction and e-loyalty. Web design affected e-loyalty significantly on all bookselling websites, perceived value was a significant predictor of satisfaction while price notably influenced e-trust development. Etrust was not associated with e-loyalty. This study introduces new variables that affect e-loyalty as well as illuminates new associations between existing factors \u2013 perceived value, price, and trust are new aspects which practitioners and academics should take into account for marketing strategies and behavioral models, respectively. Hence, managers will likely increase customer satisfaction and loyalty by improving web design and balancing quality with price, predisposing positively customer\u2019s attitudes towards the website."}
{"_id": "4b4db3df66f98ad764faefc3cd6ee0b7573d90a9", "title": "A Storage Solution for Massive IoT Data Based on NoSQL", "text": "Storage is an important research direction of the data management of the Internet of Things. Massive and heterogeneous data of the Internet of Things brings the storage huge challenges. Based on the analysis of the IoT data characteristics, this paper proposed a storage management solution called IOTMDB based on NoSQL as current storage solutions are not well support storing massive and heterogeneous IoT data. Besides, the storage strategies about expressing and organizing IoT data in a uniform manner were proposed, some evaluations were carried out. Furthermore, we not only just concerned about descripting the data itself, but also cared for sharing of the data, so a data sharing mechanism based on ontology was proposed. Finally, index was studied and a set of query syntaxes to meet the needs of different kinds of IoT queries based NoSQL was given."}
{"_id": "97b3386dd7c06841a1addc5903f6e251575264c2", "title": "Automatic Definition of Modular Neural Networks", "text": "This paper illustrates an artiicial developmental system that is a computationally ee-cient technique for the automatic generation of complex Artiicial Neural Networks (ANN). Artiicial developmental system can develop a graph grammar into a modular ANN made of a combination of more simple subnetworks. A genetic algorithm is used to evolve coded grammars that generates ANNs for controlling a six-legged robot locomotion. A mechanism for the automatic deenition of sub-neural networks is incorporated. Using this mechanism, the genetic algorithm can automatically decompose a problem into subproblems, generate a subANN for solving the subproblem, and instantiate copies of this subANN to build a higher level ANN that solves the problem. We report some simulation results showing that the same problem cannot be solved if the mechanism for automatic deenition of sub-networks is suppressed. We support our argumentation with pictures describing the steps of development, how ANN structures are evolved, and how the ANNs compute.matic deention of sub-neural networks."}
{"_id": "48a9997cd9a838aaed5bfb94b8237d1e7c6c7546", "title": "Analyzing the transitional region in low power wireless links", "text": "The wireless sensor networks community, has now an increased understanding of the need for realistic link layer models. Recent experimental studies have shown that real deployments have a \"transitional region\" with highly unreliable links, and that therefore the idealized perfect-reception-within-range models used in common network simulation tools can be very misleading. In this paper, we use mathematical techniques from communication theory to model and analyze the low power wireless links. The primary contribution of this work is the identification of the causes of the transitional region, and a quantification of their influence. Specifically, we derive expressions for the packet reception rate as a function of distance, and for the width of the transitional region. These expressions incorporate important channel and radio parameters such as the path loss exponent and shadowing variance of the channel; and the modulation and encoding of the radio. A key finding is that for radios using narrow-band modulation, the transitional region is not an artifact of the radio non-ideality, as it would exist even with perfect-threshold receivers because of multi-path fading. However, we hypothesize that radios with mechanisms to combat multi-path effects, such as spread-spectrum and diversity techniques, can reduce the transitional region."}
{"_id": "0ff9a593b7dbf6c85635dbbd91adca43dafdea59", "title": "A New Approach Layered Architecture Based Clustering for Prolong Life of Wireless Sensor Network (WSN)", "text": "Sensor nodes are the tiny particles which have to rely on limited power of energy. Sensor nodes with limited battery power are deployed to gather information in wireless environment throughout the field. Due to the small amount of energy in a node of sensor web, energy balancing among nodes is quite important. Secondly, due to lack of energy in a particular node we are also concerned to the power saving of a sensor node. We are also concerned here to prolong the life of sensor node so that it can sense the field for longer period of time. All these features are the prime necessity to introduce a new energy efficient routing algorithm in this paper."}
{"_id": "3115d42d3a2a7ac8a0148d93511bd282613b8396", "title": "Discrete Content-aware Matrix Factorization", "text": "Precisely recommending relevant items from massive candidates to a large number of users is an indispensable yet computationally expensive task in many online platforms (e.g., Amazon.com and Netflix.com). A promising way is to project users and items into a Hamming space and then recommend items via Hamming distance. However, previous studies didn't address the cold-start challenges and couldn't make the best use of preference data like implicit feedback. To fill this gap, we propose a Discrete Content-aware Matrix Factorization (DCMF) model, 1) to derive compact yet informative binary codes at the presence of user/item content information; 2) to support the classification task based on a local upper bound of logit loss; 3) to introduce an interaction regularization for dealing with the sparsity issue. We further develop an efficient discrete optimization algorithm for parameter learning. Based on extensive experiments on three real-world datasets, we show that DCFM outperforms the state-of-the-arts on both regression and classification tasks."}
{"_id": "4d5c246f1341874c7f1ccd13bc2bda47cab8d214", "title": "Application of Representation Theory to Dual-Mode Microwave Bandpass Filters", "text": "This paper discusses the physics behind the operation of dual-mode bandpass filters from a field theoretical point of view. It is argued that the two degenerate modes of the empty dual-mode cavity, commonly taken as the vertical and horizontal polarizations, become nonphysical when coupling and tuning elements are inserted. Instead, the original degenerate modes are rotated, or modified in a complex way, to generate two new modes whose characteristics depend on the coupling and tuning elements. It is shown that the tuning elements, as placed in existing dual-mode filter designs, act as both tuning and coupling elements. A working dual-mode filter can be designed with only ldquotuningrdquo elements present. A physical representation of dual-mode filters in terms of the eigenresonances of the dual-mode cavities, with the tuning and coupling elements present, is introduced. Two fourth-order dual-mode rectangular cavity filters with the same response in the passband and its vicinity are also presented to demonstrate the similar role played by ldquotuningrdquo and coupling elements in dual-mode cavities. The first filter uses only ldquotuningrdquo elements, while the second is based only on ldquocouplingrdquo elements."}
{"_id": "69c11c1d0614cc86ca20179f3e54cc7d3165ddbc", "title": "Indra: A Word Embedding and Semantic Relatedness Server", "text": "In recent years word embedding/distributional semantic models evolved to become a fundamental component in many natural language processing (NLP) architectures due to their ability of capturing and quantifying semantic associations at scale. Word embedding models can be used to satisfy recurrent tasks in NLP such as lexical and semantic generalisation in machine learning tasks, finding similar or related words and computing semantic relatedness of terms. However, building and consuming specific word embedding models require the setting of a large set of configurations, such as corpus-dependant parameters, distance measures as well as compositional models. Despite their increasing relevance as a component in NLP architectures, existing frameworks provide limited options in their ability to systematically build, parametrise, compare and evaluate different models. To answer this demand, this paper describes INDRA, a multi-lingual word embedding/distributional semantics framework which supports the creation, use and evaluation of word embedding models. In addition to the tool, INDRA also shares more than 65 pre-computed models in 14 languages."}
{"_id": "5a1b193774124fc77cd28287963e42f0eacecd42", "title": "Association Rule Mining: A Survey", "text": "Data mining [Chen et al. 1996] is the process of extracting interesting (non-trivial, implicit, previously unknown and potentially useful) information or patterns from large information repositories such as: relational database, data warehouses, XML repository, etc. Also data mining is known as one of the core processes of Knowledge Discovery in Database (KDD). Many people take data mining as a synonym for another popular term, Knowledge Discovery in Database (KDD). Alternatively other people treat Data Mining as the core process of KDD. The KDD processes are shown in Figure 1 [Han and Kamber 2000]. Usually there are three processes. One is called preprocessing, which is executed before data mining techniques are applied to the right data. The preprocessing includes data cleaning, integration, selection and transformation. The main process of KDD is the data mining process, in this process different algorithms are applied to produce hidden knowledge. After that comes another process called postprocessing, which evaluates the mining result according to users\u2019 requirements and domain knowledge. Regarding the evaluation results, the knowledge can be presented if the result is satisfactory, otherwise we have to run some or all of those processes again until we get the satisfactory result. The actually processes work as follows. First we need to clean and integrate the databases. Since the data source may come from different databases, which may have some inconsistences and duplications, we must clean the data source by removing those noises or make some compromises. Suppose we have two different databases, different words are used to refer the same thing in their schema. When we try to integrate the two sources we can only choose one of them, if we know that they denote the same thing. And also real world data tend to be incomplete and noisy due to the manual input mistakes. The integrated data sources can be stored in a database, data warehouse or other repositories. As not all the data in the database are related to our mining task, the second process is to select task related data from the integrated resources and transform them into a format that is ready to be mined. Suppose we want to find which items are often purchased together in a supermarket, while the database that records the purchase history may contains customer ID, items bought, transaction time, prices, number of each items and so on, but for this specific task we only need items bought. After selection of relevant data, the database that we are going to apply our data mining techniques to will be much smaller, consequently the whole process will be"}
{"_id": "5916bd2d642765864b473025070e8e087920e35e", "title": "Mobile robot positioning: Sensors and techniques", "text": "Exact knowledge of the position of a vehicle is a fundamental problem in mobile robot applications. In search for a solution, researchers and engineers have developed a variety of systems, sensors, and techniques for mobile robot positioning. This paper provides a review of relevant mobile robot positioning technologies. The paper defines seven categories for positioning systems: 1. Odometry; 2. Inertial Navigation; 3. Magnetic Compasses; 4. Active Beacons; 5. Global Positioning Systems; 6. Landmark Navigation; and 7. Model Matching. The characteristics of each category are discussed and examples of existing technologies are given for each category. The field of mobile robot navigation is active and vibrant, with more great systems and ideas being developed continuously. For this reason the examples presented in this paper serve only to represent their respective categories, but they do not represent a judgment by the authors. Many ingenious approaches can be found in the literature, although, for reasons of brevity, not all could be cited in this paper. 1) (Corresponding Author) The University of Michigan, Advanced Technologies Lab, 1101 Beal Avenue, Ann Arbor, MI 48109-2110, Ph.: 313-763-1560, Fax: 313-944-1113. Email: johannb@umich.edu 2) Naval Command, Control, and Ocean Surveillance Center, RDT&E Division 5303, 271 Catalina Boulevard, San Diego, CA 92152-5001, Email: Everett@NOSC.MIL 3) The University of Michigan, Advanced Technologies Lab, 1101 Beal Avenue, Ann Arbor, MI 481092110, Email: Feng@engin.umich.edu 4) The University of Michigan, Dept. of Nuclear Engineering and Radiological Sciences, 239 Cooley Bldg., Ann Arbor, MI 48109, Email: dkw@umich.edu"}
{"_id": "b1cb18067659a3d5de6677828d3aebe2a3563280", "title": "On the role of linguistic descriptions of data in the building of natural language generation systems", "text": "This paper explores the current state of the task of generating easily understandable information from data for people using natural language, which is currently addressed by two independent research fields: the natural language generation field \u2014 and, more specifically, the data-to-text sub-field \u2014 and the linguistic descriptions of data field. Both approaches are explained in a detailed description which includes: i) a methodological revision of both fields including basic concepts and definitions, models and evaluation procedures; ii) the most relevant systems, use cases and real applications described in the literature. Some reflections about the current state and future trends of each field are also provided, followed by several remarks that conclude by hinting at some potential points of mutual interest and convergence between both fields. \u00a9 2015 Elsevier B.V. All rights reserved."}
{"_id": "48327aaf21902c09a92b90b1122f5bf2de62f56e", "title": "A Survey on Ambient-Assisted Living Tools for Older Adults", "text": "In recent years, we have witnessed a rapid surge in assisted living technologies due to a rapidly aging society. The aging population, the increasing cost of formal health care, the caregiver burden, and the importance that the individuals place on living independently, all motivate development of innovative-assisted living technologies for safe and independent aging. In this survey, we will summarize the emergence of `ambient-assisted living\u201d (AAL) tools for older adults based on ambient intelligence paradigm. We will summarize the state-of-the-art AAL technologies, tools, and techniques, and we will look at current and future challenges."}
{"_id": "5cf52b914bfed5aa5babf340b489392c0d961d38", "title": "Behavioral Patterns of Older Adults in Assisted Living", "text": "In this paper, we examine at-home activity rhythms and present a dozen of behavioral patterns obtained from an activity monitoring pilot study of 22 residents in an assisted living setting with four case studies. Established behavioral patterns have been captured using custom software based on a statistical predictive algorithm that models circadian activity rhythms (CARs) and their deviations. The CAR was statistically estimated based on the average amount of time a resident spent in each room within their assisted living apartment, and also on the activity level given by the average n.umber of motion events per room. A validated in-home monitoring system (IMS) recorded the monitored resident's movement data and established the occupancy period and activity level for each room. Using these data, residents' circadian behaviors were extracted, deviations indicating anomalies were detected, and the latter were correlated to activity reports generated by the IMS as well as notes of the facility's professional caregivers on the monitored residents. The system could be used to detect deviations in activity patterns and to warn caregivers of such deviations, which could reflect changes in health status, thus providing caregivers with the opportunity to apply standard of care diagnostics and to intervene in a timely manner."}
{"_id": "20faa2ef4bb4e84b1d68750cda28d0a45fb16075", "title": "Clustering of time series data - a survey", "text": "Time series clustering has been shown effective in providing useful information in various domains. There seems to be an increased interest in time series clustering as part of the effort in temporal data mining research. To provide an overview, this paper surveys and summarizes previous works that investigated the clustering of time series data in various application domains. The basics of time series clustering are presented, including general-purpose clustering algorithms commonly used in time series clustering studies, the criteria for evaluating the performance of the clustering results, and the measures to determine the similarity/dissimilarity between two time series being compared, either in the forms of raw data, extracted features, or some model parameters. The past researchs are organized into three groups depending upon whether they work directly with the raw data either in the time or frequency domain, indirectly with features extracted from the raw data, or indirectly with models built from the raw data. The uniqueness and limitation of previous research are discussed and several possible topics for future research are identified. Moreover, the areas that time series clustering have been applied to are also summarized, including the sources of data used. It is hoped that this review will serve as the steppingstone for those interested in advancing this area of research. 2005 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved."}
{"_id": "363601765e56e4a68c22da3760a2e4f8d7db3e68", "title": "Miniature Ultra-Wideband Power Divider Using Bridged T-Coils", "text": "In this work, a miniature ultra-wideband (UWB) power divider (PD) is proposed. By implementing the transmission lines of a two-stage Wilkinson PD using bridged T-coils, very compact size with no reduction in bandwidth can be achieved. Specifically, a proposed UWB two-way PD with a center frequency f0=5.5 GHz is implemented using the commercial GaAs pHEMT process. The circuit size without pads is only 1.45 mm \u00d7 0.84 mm, which is about 0.027\u03bb0 \u00d7 0.016\u03bb0 at f0. The fractional bandwidth for 15-dB input/output return loss and isolation is 110%, and the in-band insertion loss is within 1.3 \u00b1 0.36 dB."}
{"_id": "d2b6ef05bc644975302dd9fe6956dded74406a4b", "title": "Using Syntactic Features for Phishing Detection", "text": "This paper reports on the comparison of the subject and object of verbs in their usage between phishing emails and legitimate emails. The purpose of this research is to explore whether the syntactic structures and subjects and objects of verbs can be distinguishable features for phishing detection. To achieve the objective, we have conducted two series of experiments: the syntactic similarity for sentences, and the subject and object of verb comparison. The results of the experiments indicated that both features can be used for some verbs, but more work has to be done for others. Keywords\u2014phishing detection; syntactic similarity; parse tree path."}
{"_id": "fd864f394bc0992b1c2ac1c1204836a1b072b8f0", "title": "LTE-advanced in 3GPP Rel -13/14: an evolution toward 5G", "text": "As the fourth generation (4G) LTE-Advanced network becomes a commercial success, technologies for beyond 4G and 5G are being actively investigated from the research perspective as well as from the standardization perspective. While 5G will integrate the latest technology breakthroughs to achieve the best possible performance, it is expected that LTE-Advanced will continue to evolve, as a part of 5G technologies, in a backward compatible manner to maximize the benefit from the massive economies of scale established around the 3rd Generation Partnership Project (3GPP) LTE/LTE-Advanced ecosystem from Release 8 to Release 12. In this article we introduce a set of key technologies expected for 3GPP Release 13 and 14 with a focus on air interface aspects, as part of the continued evolution of LTE-Advanced and as a bridge from 4G to 5G."}
{"_id": "710433deb919011cc3c294bf0ad0e2ecb62e24b5", "title": "Search-Oriented Conversational AI (SCAI)", "text": "The aim of SCAI@ICTIR2017 is to bring together IR and AI communities to instigate future direction of search-oriented conversational systems. We identified the number of research areas related to conversational AI which is of actual interest to both communities and which have not been fully explored yet. We think it's beneficial to exchange our visions. We solicit the paper submissions and more importantly proposals for panel discussions where researchers can exchange opinions and experiences. We believe that the proposed workshop is relevant to ICTIR since we look for novel contributions to search-oriented conversational systems which are a new and promising area."}
{"_id": "5647d77ed5e92817a502b0a19e92b47c99dd45fe", "title": "Fabrication of extended-release patient-tailored prednisolone tablets via fused deposition modelling (FDM) 3D printing.", "text": "Rapid and reliable tailoring of the dose of controlled release tablets to suit an individual patient is a major challenge for personalized medicine. The aim of this work was to investigate the feasibility of using a fused deposition modelling (FDM) based 3D printer to fabricate extended release tablet using prednisolone loaded poly(vinyl alcohol) (PVA) filaments and to control its dose. Prednisolone was loaded into a PVA-based (1.75 mm) filament at approximately 1.9% w/w via incubation in a saturated methanolic solution of prednisolone. The physical form of the drug was assessed using differential scanning calorimetry (DSC) and X-ray powder diffraction (XRPD). Dose accuracy and in vitro drug release patterns were assessed using HPLC and pH change flow-through dissolution test. Prednisolone loaded PVA filament demonstrated an ability to be fabricated into regular ellipse-shaped solid tablets using the FDM-based 3D printer. It was possible to control the mass of printed tablet through manipulating the volume of the design (R(2) = 0.9983). On printing tablets with target drug contents of 2, 3, 4, 5, 7.5 and 10mg, a good correlation between target and achieved dose was obtained (R(2) = 0.9904) with a dose accuracy range of 88.7-107%. Thermal analysis and XRPD indicated that the majority of prednisolone existed in amorphous form within the tablets. In vitro drug release from 3D printed tablets was extended up to 24h. FDM based 3D printing is a promising method to produce and control the dose of extended release tablets, providing a highly adjustable, affordable, minimally sized, digitally controlled platform for producing patient-tailored medicines."}
{"_id": "26d025ea030ab355bb8456064f155506988b5c78", "title": "Leave-one-out error and stability of learning algorithms with applications", "text": "The leave-one-out error is an important statistical estimator of the performance of a learning algorithm. Unlike the empirical error, it is almost unbiased and is frequently used for model selection. We review attempts aiming at justifying the use of leave-one-out error in Machine Learning. We especially focus on the concept of stability of a learning algorithm and show how this can be used to formally link leaveone-out error to the generalization error. Stability has also motivated recent work on averaging techniques similar to bagging which we briefly summarize in the paper. The ideas we develop are illustrated in some details in the context of kernel-based learning algorithms."}
{"_id": "72869ab0aaadfbb380872330db4129f92e521c6a", "title": "Distortion correction formulas for pre-objective dual galvanometer laser scanning.", "text": "For many applications a laser beam needs to be scanned across a plane. Several methods exist to accomplish this.One method, dual galvanometer pre-objective scanning, is particularly well suited to precise laser marking applications. The laser beam impinges on a first mirror mounted on a galvanometer motor, then on a second mirror with its rota\u00ad tion axis parallel to the laser axis, and passes then through the flat-field objective lens, which focuses the beam onto the image plane. Figure 1 shows a photograph of the SILAMATIK laser marker. This letter investigates what amount of distortion is intro\u00ad duced and how it can be corrected electronically. We as\u00ad sume that the lens is axial symmetric. The geometry is that of Fig. 2. Say the first mirror is rotated a mechanical angle a/2 from its rest position and the second mirror an angle 6/2. The direction vector lR of the beam that falls on the lens is"}
{"_id": "64edd2c5c41e856695e1dcb950c0512c3a87edec", "title": "Improving recommendation lists through topic diversification", "text": "In this work we present topic diversification, a novel method designed to balance and diversify personalized recommendation lists in order to reflect the user's complete spectrum of interests. Though being detrimental to average accuracy, we show that our method improves user satisfaction with recommendation lists, in particular for lists generated using the common item-based collaborative filtering algorithm.Our work builds upon prior research on recommender systems, looking at properties of recommendation lists as entities in their own right rather than specifically focusing on the accuracy of individual recommendations. We introduce the intra-list similarity metric to assess the topical diversity of recommendation lists and the topic diversification approach for decreasing the intra-list similarity. We evaluate our method using book recommendation data, including offline analysis on 361, !, 349 ratings and an online study involving more than 2, !, 100 subjects."}
{"_id": "0724ce21fc53d6fff9a1527d1f32fbe30342f0b8", "title": "Healthy Menus Recommendation: Optimizing the Use of the Pantry", "text": "We are often unable to plan menus ahead, thus making poor and unhealthy choices of meals. Besides healthy, one may want menus in which ingredients harmonize and cover well the available ingredients in the pantry. In this paper, we propose a novel multi-objective-based recommender of menus that features an optimal balance between nutritional aspects, harmony and coverage of available ingredients. We conduct experiments on real-world and synthetic datasets and show that our approach achieves the desired levels of nutrients, harmonization and coverage of ingredients."}
{"_id": "1105cf4931ddc49631abba0e6e022056d1ec352f", "title": "Learning a convolutional neural network for non-uniform motion blur removal", "text": "In this paper, we address the problem of estimating and removing non-uniform motion blur from a single blurry image. We propose a deep learning approach to predicting the probabilistic distribution of motion blur at the patch level using a convolutional neural network (CNN). We further extend the candidate set of motion kernels predicted by the CNN using carefully designed image rotations. A Markov random field model is then used to infer a dense non-uniform motion blur field enforcing motion smoothness. Finally, motion blur is removed by a non-uniform deblurring model using patch-level image prior. Experimental evaluations show that our approach can effectively estimate and remove complex non-uniform motion blur that is not handled well by previous approaches."}
{"_id": "0f42befba8435c7e7aad8ea3d150504304eb3695", "title": "A Simple Compact Reconfigurable Slot Antenna With a Very Wide Tuning Range", "text": "A simple and compact slot antenna with a very wide tuning range is proposed. A 25 mm (roughly equal to \u03bbH/8, where \u03bbH corresponds to the highest frequency of the tuning range) open slot is etched at the edge of the ground. To achieve the tunability, only two lumped elements, namely, a PIN diode and a varactor diode are used in the structure. By switching the PIN diode placed at the open end of the slot, the slot antenna can resonate as a standard slot (when the switch is on) or a half slot (when the switch is off). Continuous tuning over a wide frequency range in those two modes can be achieved by adjusting the reverse bias (giving different capacitances) of the varactor diode loaded in the slot. Through optimal design, the tuning bands of the two modes are stitched together to form a very wide tuning range. The fabricated prototype has a tuning frequency range from 0.42 GHz to 1.48 GHz with Sll better than -10 dB, giving a frequency ratio (fR = fu/fL) of 3.52:1. The measured full spherical radiation patterns show consistent radiation characteristics of the proposed antenna within the whole tuning range."}
{"_id": "47ae587c3668ee61cc35ee948551f84462bad7a7", "title": "Cellular and synaptic mechanisms of nicotine addiction.", "text": "The tragic health effects of nicotine addiction highlight the importance of investigating the cellular mechanisms of this complex behavioral phenomenon. The chain of cause and effect of nicotine addiction starts with the interaction of this tobacco alkaloid with nicotinic acetylcholine receptors (nAChRs). This interaction leads to activation of reward centers in the CNS, including the mesoaccumbens DA system, which ultimately leads to behavioral reinforcement and addiction. Recent findings from a number of laboratories have provided new insights into the biologic processes that contribute to nicotine self-administration. Examination of the nAChR subtypes expressed within the reward centers has identified potential roles for these receptors in normal physiology, as well as the effects of nicotine exposure. The high nicotine sensitivity of some nAChR subtypes leads to rapid activation followed in many cases by rapid desensitization. Assessing the relative importance of these molecular phenomena in the behavioral effects of nicotine presents an exciting challenge for future research efforts."}
{"_id": "04b0a2d8f98bce01a345af0931a713330467eb84", "title": "A survey of point-based POMDP solvers", "text": "The past decade has seen a significant breakthrough in research on solving partially observable Markov decision processes (POMDPs). Where past solvers could not scale beyond perhaps a dozen states, modern solvers can handle complex domains with many thousands of states. This breakthrough was mainly due to the idea of restricting value function computations to a finite subset of the belief space, permitting only local value updates for this subset. This approach, known as point-based value iteration, avoids the exponential growth of the value function, and is thus applicable for domains with longer horizons, even with relatively large state spaces. Many extensions were suggested to this basic idea, focusing on various aspects of the algorithm\u2014mainly the selection of the belief space subset, and the order of value function updates. In this survey, we walk the reader through the fundamentals of point-based value iteration, explaining the main concepts and ideas. Then, we survey the major extensions to the basic algorithm, discussing their merits. Finally, we include an extensive empirical analysis using well known benchmarks, in order to shed light on the strengths and limitations of the various approaches."}
{"_id": "bf77b3a8fa0ec0e4ef5f40089ca86001a6901f2d", "title": "Plant disease detection based on data fusion of hyper-spectral and multi-spectral fluorescence imaging using Kohonen maps", "text": ""}
{"_id": "d2c4e319a7351f1091ae08a6fc870309003ace31", "title": "Smartphone Addiction in University Students and Its Implication for Learning", "text": ""}
{"_id": "9fca0badfdb81e10995f7415de6b823586d91404", "title": "Design and Implementation of a Wearable Gas Sensor Network for Oil and Gas Industry Workers", "text": "Industrial environment usually involves some types of hazardous substances including toxic and/or flammable gases. Accidental gas leakage can cause potential dangers to a plant, its employees and surrounding neighborhoods. Around 64% of accidents that happen in the oil fields are due to combustibles and/or toxic gases. The safety plan of most industries includes measures to reduce risk to humans and plants by incorporating early-warning devices, such as gas detectors. Most existing tools for monitoring gases are stationary and incapable of accurately measuring individual exposures that depend on personal lifestyles and environment. This paper provides a design and implementation of a wearable gas sensor network by building sensor nodes with wireless communication modules which communicate their data along the network. The system is designed to be flexible, low cost, low maintenance and with accurate performance to detect toxic gases in a timely fashion to warn employees before an existence of a disaster."}
{"_id": "a42345ba892bb0fea5518f9c4ccc9989ad3547c7", "title": "Estimating relatedness via data compression", "text": "We show that it is possible to use data compression on independently obtained hypotheses from various tasks to algorithmically provide guarantees that the tasks are sufficiently related to benefit from multitask learning. We give uniform bounds in terms of the empirical average error for the true average error of the n hypotheses provided by deterministic learning algorithms drawing independent samples from a set of n unknown computable task distributions over finite sets."}
{"_id": "b7d4306f4e59070852e9e67af964640d3a96152d", "title": "Sharing Personal Content Online: Exploring Channel Choice and Multi-Channel Behaviors", "text": "People share personal content online with varied audiences, as part of tasks ranging from conversational-style content sharing to collaborative activities. We use an interview- and diary-based study to explore: 1) what factors impact channel choice for sharing with particular audiences; and 2) what behavioral patterns emerge from the ability to combine or switch between channels. We find that in the context of different tasks, participants match channel features to selective-sharing and other task-based needs, shaped by recipient attributes and communication dynamics. Participants also combine multiple channels to create composite sharing features or reach broader audiences when one channel is insufficient. We discuss design implications of these channel dynamics."}
{"_id": "5d22542d370653d9c52d09eb79b96bf61f6af786", "title": "Comparing Task Models for User Interface Design", "text": "Many task models, task analysis methods, and supporting tools have been introduced in the literature and are widely used in practice. With this comes need to understand their scopes and their differences. This chapter provides a thorough review of selected, significant task models along with their method and supporting tools. For this purpose, a meta-model of each task model is expressed as an Entity-Relationship-Attribute schema (ERA) and discussed. This leads to a comparative analysis of task models according to aims and goals, discipline, concepts and relationships, expressiveness of static and dynamic structures. Following is discussion of the model with respect to developing life cycle steps, tool support, advantages, and shortcomings. This comparative analysis provides a reference framework against which task models can be understood with respect to each other. The appreciation of the similarities and the differences allows practioners to identify a task model that fits a situation\u2019s given requirements. It shows how a similar concept or relationship translates in different task usage models."}
{"_id": "0fe66f19ceb1e0712dcc69eaad090ec97012ad78", "title": "Regression from patch-kernel", "text": "In this paper, we present a patch-based regression framework for addressing the human age and head pose estimation problems. Firstly, each image is encoded as an ensemble of orderless coordinate patches, the global distribution of which is described by Gaussian mixture models (GMM), and then each image is further expressed as a specific distribution model by Maximum a Posteriori adaptation from the global GMM. Then the patch-kernel is designed for characterizing the Kullback-Leibler divergence between the derived models for any two images, and its discriminating power is further enhanced by a weak learning process, called inter-modality similarity synchronization. Finally, kernel regression is employed for ultimate human age or head pose estimation. These three stages are complementary to each other, and jointly minimize the regression error. The effectiveness of this regression framework is validated by three experiments: 1) on the YAMAHA aging database, our solution brings a more than 50% reduction in age estimation error compared with the best reported results; 2) on the FG-NET aging database, our solution based on raw image features performs even better than the state-of-the-art algorithms which require fine face alignment for extracting warped appearance features; and 3) on the CHIL head pose database, our solution significantly outperforms the best one reported in the CLEAR07 evaluation."}
{"_id": "0abb49fe138e8fb7332c26b148a48d0db39724fc", "title": "Stochastic Pooling for Regularization of Deep Convolutional Neural Networks", "text": "We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within the pooling region. The approach is hyper-parameter free and can be combined with other regularization approaches, such as dropout and data augmentation. We achieve state-of-the-art performance on four image datasets, relative to other approaches that do not utilize data augmentation."}
{"_id": "23ae5fa0e8d581b184a8749d764d2ded128fd87e", "title": "Generalizing Pooling Functions in Convolutional Neural Networks: Mixed, Gated, and Tree", "text": "Abstract We seek to improve deep neural networks by generalizing the pooling operations that play a central role in current architectures. We pursue a careful exploration of approaches to allow pooling to learn and to adapt to complex and variable patterns. The two primary directions lie in (1) learning a pooling function via (two strategies of) combining of max and average pooling, and (2) learning a pooling function in the form of a tree-structured fusion of pooling filters that are themselves learned. In our experiments every generalized pooling operation we explore improves performance when used in place of average or max pooling. We experimentally demonstrate that the proposed pooling operations provide a boost in invariance properties relative to conventional pooling and set the state of the art on several widely adopted benchmark datasets; they are also easy to implement, and can be applied within various deep neural network architectures. These benefits come with only a light increase in computational overhead during training (ranging from 5% to 15% in timing experiments) and a very modest increase in the number of model parameters. For example, using 45 additional parameters, we improve AlexNet performance on ImageNet by 6% relative (top-5, single-view)."}
{"_id": "fd6d101967259f9f8c86f5f5a9871e34d22c63e6", "title": "A survey of frequent subgraph mining algorithms", "text": "Graph mining is an important research area within the domain of data mining. The field of study concentrates on the identification of frequent subgraphs within graph data sets. The research goals are directed at: (i) effective mechanisms for generating candidate subgraphs (without generating duplicates) and (ii) how best to process the generated candidate subgraphs so as to identify the desired frequent subgraphs in a way that is computationally efficient and procedurally effective. This paper presents a survey of current research in the field of frequent subgraph mining, and proposed solutions to address the main research issues."}
{"_id": "0c3ecffd5e356755e02c29ac2473a88ad3d4b7dd", "title": "Notions of Video Game Addiction and Their Relation to Self-Reported Addiction Among Players of World of Warcraft", "text": "In this study, 438 players of the online video game, World of Warcraft, completed a survey about video game addiction and answered an open-ended question about behaviors they considered characteristic of video game addiction. Responses were coded and correlated with players\u2019 self-reports of being addicted to games and scores on a modified video game addiction scale. The behaviors most frequently mentioned as characteristic of addiction included playing a lot and games\u2019 interfering with other activities, especially socializing or work. Few players mentioned such signs of addiction as withdrawal symptoms or tolerance, and some thought it was not possible to become addicted to video games. Self-reported addiction to video games correlated positively with perceptions that video game addiction involved playing a lot or playing to escape problems, and correlated negatively with perceptions that addiction involved games\u2019 interfering with other activities or not being able to stop play. Implications for assessment are discussed."}
{"_id": "0ad8098658f0b25e126fda2cae6b9b84afd6baa8", "title": "Consideration of 10kW in-wheel type axial-gap motor using ferrite permanent magnets", "text": "In general, the in-wheel type permanent magnet synchronous motor (PMSM) for electric city commuters uses powerful rare earth permanent magnets (PMs). However, the employment of the rare earth PMs should be reduced due to high prices. Therefore, it is important to develop an in-wheel PMSM that does not use the rare earth PMs. We proposed 5kW in-wheel type axial gap motor that can generate high torque density and has features such as a coreless rotor structure with ferrite PMs and a reduction gearbox in the inner side of the stator. And we have presented details of experimental results on a prototype. So, in this paper, in order to achieve further higher power output, we considered a 10 kW in-wheel motor with the proposed structure. The motor with 10-kW output was examined by means of using 3D-FEM and experimental results in order to attain the further higher power as a rare earth free motor."}
{"_id": "6d461ac9c8df76b338465c6630c70476c9f9b019", "title": "A Personalized Markov Clustering and Deep Learning Approach for Arabic Text Categorization", "text": "Text categorization has become a key research field in the NLP community. However, most works in this area are focused on Western languages ignoring other Semitic languages like Arabic. These languages are of immense political and social importance necessitating robust categorization techniques. In this paper, we present a novel three-stage technique to efficiently classify Arabic documents into different categories based on the words they contain. We leverage the significance of root-words in Arabic and incorporate a combination of Markov clustering and Deep Belief Networks to classify Arabic words into separate groups (clusters). Our approach is tested on two public datasets giving a F-Measure of 91.02%."}
{"_id": "21f389aabe2491d620ce920e3bad2b12521fa025", "title": "Algorithms on strings, trees, and sequences", "text": "Linear-Time Construction of Suffix Trees We will present two methods for constructing suffix trees in detail, Ukkonen\u2019s method and Weiner\u2019s method. Weiner was the first to show that suffix trees can be built in linear time, and his method is presented both for its historical importance and for some different technical ideas that it contains. However, lJkkonen\u2019s method is equally fast and uses far less space (i.e., memory) in practice than Weiner\u2019s method Hence Ukkonen is the method of choice for most problems requiring the construction of a suffix tree. We also believe that Ukkonen\u2019s method is easier to understand. Therefore, it will be presented first A reader who wishes to study only one method is advised to concentrate on it. However, our development of Weiner\u2019s method does not depend on understanding Ukkonen\u2019s algorithm, and the two algorithms can be read independently (with one small shared section noted in the description of Weiner\u2019s method)."}
{"_id": "2b67e5926e5bfced275910c810df02d082cb93f6", "title": "Enterprise Software System Integration : An Architectural Perspective", "text": "The present thesis is concerned with enterprise software systems of companies in the Swedish electricity industry, an industry that for the past few years has been exposed to a fairly tumultuous change process as a consequence of legislative reforms. Previously, the business operations of electric utilities \u2013 as well as those of most companies in the computerized world \u2013 were supported by a number of isolated software systems performing specific tasks. In recent years, these systems have been extended, and more importantly, integrated into a company-wide system in its own right, in this thesis referred to as the enterprise software system. As enterprise software systems have evolved, so has a need for strategies, methods and techniques for their management. Enterprise software system management involves a number of concerns; of these concerns, software integration is one of the most prominent. The discipline of software architecture is concerned with the modeling of large-scale structures of software systems. While the generated models are employed for a number of purposes , their perhaps most significant function is to serve as a base for reasoning about the represented system. Several methods for analysis of architectures have been proposed in recent years, and although software architecture analysis has displayed considerable success for a number of systems, enterprise software systems have to a large extent been ignored. In the empirical context of the Scandinavian electricity industry, this thesis explores the applicability of software architecture analysis to enterprise software system integration. Two conceptually different architectural analysis methods \u2013 deduction-and induction-based approaches \u2013 are considered, as well as the engineering process in which architectural analysis is performed. As a result of the investigations, the thesis proposes a modified process for architectural analysis, presents an evaluation of deduction-based analysis methods , and proposes an adaptation of induction-based analysis methods to the enterprise software system context."}
{"_id": "97862a468d375d6fbd83ed1baf2bd8d74ffefdee", "title": "Wireless Sensor Network for Precise Agriculture Monitoring", "text": "Precision Agriculture Monitor System (PAMS) is an intelligent system which can monitor the agricultural environments of crops and provides service to farmers. PAMS based on the wireless sensor network (WSN) technique attracts increasing attention in recent years. The purpose of such systems is to improve the outputs of crops by means of managing and monitoring the growth period. This paper presents the design of a WSN for PAMS, shares our real-world experience, and discusses the research and engineering challenges in implementation and deployments."}
{"_id": "f480c56b45a43ecc1249d2e4528b2824a33ede75", "title": "Synthesis and Application of Monolayer Semiconductors (June 2015)", "text": "Recently, semiconducting monolayers, such as MoS2 and WSe2, have been highlighted for their spin-valley coupling, diverse band structures, bendability, and excellent optoelectronic performances. With a subnanometer thickness of atomic layers, the transition metal dichalcogenides (TMDc) atomic layers demonstrate a significant photoresponse, considerable absorption to incident sunlight and favorable transport performances, leading to applications in the electronic circuit requiring low stand-by power, diverse optoelectronic devices, and next-generation nanoelectronics. Therefore, the class of monolayer TMDc offers a burgeoning field in materials science, fundamental physics, and optoelectronics. A feasible synthetic process to realize controlled synthesis of large area and high quality of TMDc monolayers is in demands. In this review, we will introduce the progress on synthesis and applications of the TMDc atomic layers."}
{"_id": "754cfe48a4cb16c80bf2e5a9795188b769a78d96", "title": "Human skin as data transmission medium for improved privacy and usability in wearable electronics", "text": "Wearable electronics have multiple benefits in medical applications, such as feedback, monitoring and prevention. The number of these devices in circulation is rapidly growing, emphasizing increasing problems in their use caused by one of two standard communication types - wireless communications and wired communications. These problems include obtrusiveness and inconvenient usage, limited bandwidth, privacy and security issues and relatively high energy consumption. Wireless communications are less secure, with a more limited bandwidth but more unobtrusive and convenient than wired ones. On the other hand, wired communications don\u2019t need so many batteries, thereby devices can be charged more rarely and provide more privacy. A third, emerging technology for wearable sensor network communication using the body itself as communication medium tries to solve these problems. The body coupled communications benefit from the communication privacy of the wired networks, while not requiring obtrusive wiring. Unfortunately this type of communication requires complex signal processing for reliable communications as the signal changes depending on the body parts connected. To solve this problem and allow more rapid adoption of body coupled communications in wearable electronics we develop a transfer function for calculating the parameters of human skin as data transmission medium, allowing more usable and privacy oriented communications between the wearable nodes. Additionally this has the potential of reducing the energy consumption of wearable devices making them more practical to use for a prolonged period of time as well as opening new applications as secure personal data exchange using touch."}
{"_id": "9ca542d744149f0efc8b8aac8289f5e38e6d200c", "title": "Gender and Smile Classification Using Deep Convolutional Neural Networks", "text": "Facial gender and smile classification in unconstrained environment is challenging due to the invertible and large variations of face images. In this paper, we propose a deep model composed of GNet and SNet for these two tasks. We leverage the multi-task learning and the general-to-specific fine-tuning scheme to enhance the performance of our model. Our strategies exploit the inherent correlation between face identity, smile, gender and other face attributes to relieve the problem of over-fitting on small training set and improve the classification performance. We also propose the tasks-aware face cropping scheme to extract attribute-specific regions. The experimental results on the ChaLearn 16 FotW dataset for gender and smile classification demonstrate the effectiveness of our proposed methods."}
{"_id": "360bc5aa1333fa51a70a130d7052ac7769ab3121", "title": "Evidence-Based Psychological Assessment.", "text": "In recent years there has been increasing emphasis on evidence-based practice in psychology (EBPP), and as is true in most health care professions, the primary focus of EBPP has been on treatment. Comparatively little attention has been devoted to applying the principles of EBPP to psychological assessment, despite the fact that assessment plays a central role in myriad domains of empirical and applied psychology (e.g., research, forensics, behavioral health, risk management, diagnosis and classification in mental health settings, documentation of neuropsychological impairment and recovery, personnel selection and placement in organizational contexts). This article outlines the central elements of evidence-based psychological assessment (EBPA), using the American Psychological Association's tripartite definition of EBPP as integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences. After discussing strategies for conceptualizing and operationalizing evidence-based testing and evidence-based assessment, 6 core skills and 3 meta-skills that underlie proficiency in psychological assessment are described. The integration of patient characteristics, culture, and preferences is discussed in terms of the complex interaction of patient and assessor identities and values throughout the assessment process. A preliminary framework for implementing EBPA is offered, and avenues for continued refinement and growth are described."}
{"_id": "50669a8f145dfc1d84a0ea104a091556a563072d", "title": "Very low-voltage fully differential amplifier for switched-capacitor applications", "text": "A fully differential opamp suitable for very-low voltage switched-capacitor circuits in standard CMOS technologies is introduced. The proposed two stage opamp needs a simple low voltage CMFB switched-capacitor circuit only for the second stage. Due to the reduced supply voltage, the CMFB circuit is implemented using boot-strapped switches. Minor modifications allow to use chop-per stabilization for flicker noise reduction. Two different compensation schemes are discussed and compared using an example for 1V operation of the amplifier."}
{"_id": "e70061fdfd19356178a3fda0998344b30d1ed947", "title": "Optimization of 6Slots-7Poles & 12Slots-14Poles flux-switching permanent magnet machines for plug-in HEV", "text": "Plug-in hybrid electric vehicle (PHEV) is independent of the internal combustion engine but acts as the auxiliary unit mostly on the electric drive system. Main while, the Flux-switching permanent magnet machine (FSPMM) has been studied in terms of structure with its operation principle been analyzed. To achieve a higher torque with a corresponding high power density and lower torque ripple, FSPMM with 6Slots-7Poles and 12Slots-14Poles, according investigations, are optimized based on their objective functions. Moreover, it analyzes several typical performance curves, such as cogging torque, flux linkage and back-EMF. Finally, the torque and corresponding power, efficiency and rotor mechanical strength analysis of optimized designed are obtained. The results indicate that FSPMM is a viable candidate for PHEV and also has good mechanical robustness with strong flux-weakening ability."}
{"_id": "38d6dbc089a0a9b7f05995bb5850c87087fa3183", "title": "Magnetic Equivalent Circuit Modeling of the AC Homopolar Machine for Flywheel Energy Storage", "text": "This paper develops a magnetic equivalent circuit model suitable to the design and optimization of the synchronous ac homopolar machine. The ac homopolar machine is of particular interest in the application of grid-based flywheel energy storage, where it has the potential to significantly reduce self-discharge associated with magnetic losses. The ac homopolar machine features both axial and radial magnetizing flux paths, which requires finite element analysis to be conducted in 3-D. The computation time associated with 3-D finite element modeling is highly prohibitive in the design process. The magnetic equivalent circuit model developed in this paper is shown to be a viable alternative for calculating several design performance parameters and has a computation time which is orders of magnitude less than that of 3-D finite element analysis. Results obtained from the developed model are shown to be in good agreement with finite element and experimental results for varying levels of saturation."}
{"_id": "329d95024e754a75f7ce547cea961afc31a3baea", "title": "Analysis of brushless permanent magnet synchronous motors", "text": "A brief design review of the Permanent Magnet Synchronous Motors has been presented. A procedure has been developed to predict the steady state and dynamic performances of a brushless permanent magnet synchronous motor. Finite element analysis has been combined with a lumped parameter circuit model in order to provide satisfactory engineering information. To this end, two coordinated steps are involved. One is to develop a unified lumped parameter circuit model for both steady state and dynamic analysis. The second step is to extract the individual lumped parameters from finite element solutions based on corresponding equivalent circuits, each with a pre-determined topology. The proposed techniques have been experimentally verified in a laboratory permanent magnet synchronous motor."}
{"_id": "b5ba36842545b7b1c307f03e8a5ca34a15ac763c", "title": "Mail versus Mall: A Strategic Analysis of Competition between Direct Marketers and Conventional Retailers", "text": "Consumers now purchase several offerings from direct sellers, including catalog and Internet marketers. These direct channels exist in parallel with the conventional retail stores. The availability of multiple channels has significant implications for the performance of consumer markets. The literature in marketing and economics has, however, been dominated by a focus on the conventional retail sector. This paper is an effort toward modeling competition in the multiplechannel environment from a strategic viewpoint. At the outset, a parsimonious model that accommodates the following consumer and market characteristics is introduced. First, the relative attractiveness of retail shopping varies across consumers. Second, the fit with the direct channel varies across product categories. Third, the strength of existing retail presence in local markets moderates competition. Fourth, in contrast with the fixed location of the retail store that anchors its localized market power, the location of the direct marketer is irrelevant to the competitive outcome. The model is first applied in a setting where consumers have complete knowledge of product availability and prices in all channels. In the resulting equilibrium, the direct marketer acts as a competitive wedge between retail stores. The direct presence is so strong that each retailer competes against the remotely located direct marketer, rather than against neighboring retailers. This outcome has implications for the marketing mix of retailers, which has traditionally been tuned to attract consumers choosing between retail stores. In the context of market entry, conditions under which a direct channel can access a local market in retail entry equilibrium are derived. Our analysis suggests that the traditional focus on retail entry equilibria may not yield informative or relevant findings when direct channels are a strong presence. Next, the role of information in multiple-channel markets is modeled. This issue is particularly relevant in the context of direct marketing where the seller can typically control the level of information in the marketplace, sometimes on a customer-by-customer basis (e.g., by deciding on the mailing list for a catalog campaign). When a certain fraction of consumers does not receive information from the direct marketer, the retailers compete with each other for that fraction of the market. The retailer\u2019s marketing mix has to be tuned, in this case, to jointly address direct and neighboring retail competition. The level of information disseminated by the direct marketer is shown to have strategic implications, and the use of market coverage as a lever to control competition is described. Even with zero information costs, providing information to all consumers may not be optimal under some circumstances. In particular, when the product is not well adapted to the direct channel, the level of market information about the direct option should ideally be lowered. The only way to compete with retailers on a larger scale with a poorly adapted product is by lowering direct prices, which lowers profits. Lowering market information levels and allowing retailers to compete more with each other facilitates a higher equilibrium retail price. In turn, this allows a higher direct price to be charged and improves overall direct profit. 
On the other hand, when the product is well adapted, increasing direct market presence and engaging in greater competition with the retail sector yields higher returns. The finding that high market coverage may depress profits raises some issues for further exploration. First, implementing the optimal coverage is straightforward when the seller controls the information mechanism, as in the case of catalog marketing. The Internet, in contrast, is an efficient mechanism to transmit information, but does not provide the sellers with such control over the level of market information. A key reason is that the initiative to gather information on the Internet lies largely with consumers. The design and implementation of mechanisms to control aggregate information levels in electronic markets can, therefore, be an important theme for research and managerial interest. Second, direct marketers have traditionally relied on the statistical analysis of customer records to decide on contact policies. The analysis in this paper reveals that these policies can have significant strategic implications as well. Research that integrates the statistical and strategic aspects could make a valuable contribution. The paper concludes with a discussion of issues for future research in multiple-channel markets, including avenues to model competition in settings with multiple direct marketers. (Channels of Distribution; Catalog Marketing; Direct Marketing; Electronic Marketing; Internet Marketing; Competitive Strategy; Game Theory)"}
{"_id": "a041d575fda4c12b1edbe85fbf05006660b91a47", "title": "How to derive priorities in AHP: a comparative study", "text": "A heated discussion has arisen over the \u201cbest\u201d priorities derivation method. Using a Monte Carlo simulation this article compares and evaluates the solutions of four AHP ratio scaling methods: the right eigenvalue method, the left eigenvalue method, the geometric mean and the mean of normalized values. Matrices with different dimensions and degree of impurities are randomly constructed. We observe a high level of agreement between the different scaling techniques. The number of ranking contradictions increases with the dimension of the matrix and the inconsistencies. However these contradictions affect only close priorities."}
{"_id": "94f27fe399197d05b1920b49f10ad7fcd617f1bb", "title": "An unusual case of asymptomatic spontaneous umbilical endometriosis treated with skin-sparing excision", "text": "Spontaneous umbilical endometriosis is a rare extrapelvic manifestation of endometriosis. Very few such cases have been previously reported, almost always associated with a variety of symptoms, usually during menstruation. We present a case of asymptomatic umbilical endometriosis treated with skin-sparing excision. Differential diagnoses relevant to the clinician, as well as treatment options, are also presented. Surgeons should always consider umbilical endometriosis in their diagnostic approach when confronted with atypical umbilical nodules, regardless of whether they are symptomatic or not."}
{"_id": "97ccaae0495a1900a9868e59eb4ead6454ba33d1", "title": "Bayesian Nonparametric Methods for Learning Markov Switching Processes", "text": "In this article, we explored a Bayesian nonparametric approach to learning Markov switching processes. This framework requires one to make fewer assumptions about the underlying dynamics, and thereby allows the data to drive the complexity of the inferred model. We began by examining a Bayesian nonparametric HMM, the sticky HDPHMM, that uses a hierarchical DP prior to regularize an unbounded mode space. We then considered extensions to Markov switching processes with richer, conditionally linear dynamics, including the HDP-AR-HMM and HDP-SLDS. We concluded by considering methods for transferring knowledge among multiple related time series. We argued that a featural representation is more appropriate than a rigid global clustering, as it encourages sharing of behaviors among objects while still allowing sequence-specific variability. In this context, the beta process provides an appealing alternative to the DP."}
{"_id": "9e2261d0b30b445625a2a631543a0f8925d90739", "title": "Cetus: A Source-to-Source Compiler Infrastructure for Multicores", "text": "The Cetus tool provides an infrastructure for research on multicore compiler optimizations that emphasizes automatic parallelization. The compiler infrastructure, which targets C programs, supports source-to-source transformations, is user-oriented and easy to handle, and provides the most important parallelization passes as well as the underlying enabling techniques."}
{"_id": "c564ceca0040bacbbfff9af23cc315a7c7bb2323", "title": "A NEW GENERALIZATION OF FIBONACCI SEQUENCE AND EXTENDED BINET\u2019S FORMULA", "text": "Consider the Fibonacci sequence {Fn}n=0 with initial conditions F0 = 0, F1 = 1 and recurrence relation Fn = Fn\u22121 + Fn\u22122 (n \u2265 2). The Fibonacci sequence has been generalized in many ways, some by preserving the initial conditions, and others by preserving the recurrence relation. In this article, we study a new generalization {qn}, with initial conditions q0 = 0 and q1 = 1, which is generated by the recurrence relation qn = aqn\u22121 + qn\u22122 (when n is even) or qn = bqn\u22121 + qn\u22122 (when n is odd), where a and b are nonzero real numbers. Some well-known sequences are special cases of this generalization. The Fibonacci sequence is a special case of {qn} with a = b = 1. Pell\u2019s sequence is {qn} with a = b = 2 while k-Fibonacci sequence has a = b = k. We produce an extended Binet\u2019s formula for {qn} and, thereby, identities such as Cassini\u2019s, Catalan\u2019s, d\u2019Ocagne\u2019s, etc."}
{"_id": "5a53e6ce20e4d591b9ab02c6645c9dafc2480c61", "title": "RHex: A Simple and Highly Mobile Hexapod Robot", "text": "In this paper, the authors describe the design and control of RHex, a power autonomous, untethered, compliant-legged hexapod robot. RHex has only six actuators\u2014one motor located at each hip\u2014achieving mechanical simplicity that promotes reliable and robust operation in real-world tasks. Empirically stable and highly maneuverable locomotion arises from a very simple clock-driven, openloop tripod gait. The legs rotate full circle, thereby preventing the common problem of toe stubbing in the protraction (swing) phase. An extensive suite of experimental results documents the robot\u2019s significant \u201cintrinsic mobility\u201d\u2014the traversal of rugged, broken, and obstacle-ridden ground without any terrain sensing or actively controlled adaptation. RHex achieves fast and robust forward locomotion traveling at speeds up to one body length per second and traversing height variations well exceeding its body clearance."}
{"_id": "675a0a5d9b2031498870a3dc4941aae7c0614151", "title": "Track--Stair Interaction Analysis and Online Tipover Prediction for a Self-Reconfigurable Tracked Mobile Robot Climbing Stairs", "text": "This paper analyzes track-stair interactions and develops an online tipover prediction algorithm for a self-reconfigurable tracked mobile robot climbing stairs, which is vulnerable to tipping-over. Tipover prediction and prevention for a tracked mobile robot in stair climbing are intractable problems because of the complex track--stair interactions. Unlike the wheeled mobile robots, which are normally assumed to obey the nonholonomic constraints, slippage is unavoidable for a tracked mobile robot, especially in stair climbing. Furthermore, the track-stair interactive forces are complicated, which may take the forms of grouser-tread hooking force, track--stair edge frictional force, grouser-riser clutching force, and even their compositions. In this paper, the track--stair interactions are analyzed systematically, and tipover stability criteria are derived for a tracked mobile robot climbing stairs. An online tipover prediction algorithm is also developed, which forms an essential part for autonomous and semiautonomous stair-climbing control. The effectiveness of the proposed algorithms are verified by experiments."}
{"_id": "cda3765f31158815a2bdef40717203ae4c20a8c3", "title": "Multi-Modal Locomotion Robotic Platform Using Leg-Track-Wheel Articulations", "text": "Other than from its sensing and processing capabilities, a mobile robotic platform can be limited in its use by its ability to move in the environment. Legs, tracks and wheels are all efficient means of ground locomotion that are most suitable in different situations. Legs allow to climb over obstacles and change the height of the robot, modifying its viewpoint of the world. Tracks are efficient on uneven terrains or on soft surfaces (snow, mud, etc.), while wheels are optimal on flat surfaces. Our objective is to work on a new concept capable of combining different locomotion mechanisms to increase the locomotion capabilities of the robotic platform. The design we came up with, called AZIMUT, is symmetrical and is made of four independent leg-track-wheel articulations. It can move with its articulations up, down or straight, allowing the robot to deal with three-dimensional environments. AZIMUT is also capable of moving sideways without changing its orientation, making it omnidirectional. By putting sensors on these articulations, the robot can also actively perceive its environment by changing the orientation of its articulations. Designing a robot with such capabilities requires addressing difficult design compromises, with measurable impacts seen only after integrating all of the components together. Modularity at the structural, hardware and embedded software levels, all considered concurrently in an iterative design process, reveals to be key in the design of sophisticated mobile robotic platforms."}
{"_id": "35a7a07cb8447254c3ea3d26da37dfdda50fabd5", "title": "Autonomous Stair Climbing for Tracked Vehicles", "text": "In this paper, we present an algorithm for autonomous stair climbing with a tracked vehicle. The proposed method achieves robust performance under real-world conditions, without assuming prior knowledge of the stair geometry, the dynamics of the vehicle\u2019s interaction with the stair surface, or lighting conditions. Our approach relies on fast and accurate estimation of the robot\u2019s heading and its position relative to the stair boundaries. An extended Kalman filter is used for quaternion-based attitude estimation, fusing rotational velocity measurements from a 3-axial gyroscope, and measurements of the stair edges acquired with an onboard camera. A twotiered controller, comprised of a centeringand a headingcontrol module, utilizes the estimates to guide the robot fast, safely, and accurately upstairs. Both the theoretical analysis and implementation of the algorithm are presented in detail, and extensive experimental results demonstrating the algorithm\u2019s performance are described."}
{"_id": "40550c1728adbe4a9580702d40045cb7c2395abe", "title": "Dante II: Technical Description, Results, and Lessons Learned", "text": "Dante II is a unique walking robot that provides important insight into high-mobility robotic locomotion and remote robotic exploration. Dante II\u2019s uniqueness stems from its combined legged and rappelling mobility system, its scanning-laser rangefinder, and its multilevel control scheme. In 1994 Dante II was deployed and successfully tested in a remote Alaskan volcano, as a demonstration of the fieldworthiness of these technologies. For more than five days the robot explored alone in the volcano crater using a combination of supervised autonomous control and teleoperated control. Human operators were located 120 km distant during the mission. This article first describes in detail the robot, support systems, control techniques, and user interfaces. We then describe results from the battery of field tests leading up to and including the volcanic mission. Finally, we put forth important lessons which comprise the legacy of this project. We show that framewalkers are appropriate for rappelling in severe terrain, though tether systems have limitations. We also discuss the importance of future \u201cautonomous\u201d systems to realize when they require human support rather than relying on humans for constant oversight. KEY WORDS\u2014rappelling, walking robot, supervised autonomy, behavior-based control, terrain visualization, volcano exploration"}
{"_id": "c2186cdef76ad37ddf55f6e85ce42f2cbc603d44", "title": "Improving Indoor Localization Using Convolutional Neural Networks on Computationally Restricted Devices", "text": "Indoor localization is one of the key enablers for various application and service areas that rely on precise locations of people, goods, and assets, ranging from home automation and assisted living to increased automation of production and logistic processes and wireless network optimization. Existing solutions provide various levels of precision, which also depends on the complexity of the indoor radio environment. In this paper, we propose two methods for reducing the localization error in indoor non-line-of-sight (NLoS) conditions using raw channel impulse response (CIR) information obtained from ultra-wide band radios requiring no prior knowledge about the radio environment. The methods are based on NLoS channel classification and ranging error regression models, both using convolutional neural networks (CNNs) and implemented in the TensorFlow computational framework. We first show that NLoS channel classification using raw CIR data outperforms existing approaches that are based on derived input signal features. We further demonstrate that the predicted NLoS channel state and predicted ranging error information, used in combination with least squares (LS) and weighted LS location estimation algorithms, significantly improve indoor localization performance. We also evaluate the computational performance and suitability of the proposed CNN-based algorithms on various computing platforms with a wide range of different capabilities and show that in a distributed localization system, they can also be used on computationally restricted devices."}
{"_id": "4dff129a6f988d78c457ece463b774c3d81ac5c7", "title": "Emotion recognition in the wild from videos using images", "text": "This paper presents the implementation details of the proposed solution to the Emotion Recognition in the Wild 2016 Challenge, in the category of video-based emotion recognition. The proposed approach takes the video stream from the audio-video trimmed clips provided by the challenge as input and produces the emotion label corresponding to this video sequence. This output is encoded as one out of seven classes: the six basic emotions (Anger, Disgust, Fear, Happiness, Sad, Surprise) and Neutral. Overall, the system consists of several pipelined modules: face detection, image pre-processing, deep feature extraction, feature encoding and, finally, an SVM classification. \n This system achieves 59.42% validation accuracy, surpassing the competition baseline of 38.81%. With regard to test data, our system achieves 56.66% recognition rate, also improving the competition baseline of 40.47%."}
{"_id": "2fae7cd81be6beed8be65cbda8473603137b0deb", "title": "Capturing Regional Variation with Distributed Place Representations and Geographic Retrofitting", "text": "Dialects are one of the main drivers of language variation, a major challenge for natural language processing tools. In most languages, dialects exist along a continuum, and are commonly discretized by combining the extent of several preselected linguistic variables. However, the selection of these variables is theorydriven and itself insensitive to change. We use Doc2Vec on a corpus of 16.8M anonymous online posts in the German-speaking area to learn continuous document representations of cities. These representations capture continuous regional linguistic distinctions, and can serve as input to downstream NLP tasks sensitive to regional variation. By incorporating geographic information via retrofitting and agglomerative clustering with structure, we recover dialect areas at various levels of granularity. Evaluating these clusters against an existing dialect map, we achieve a match of up to 0.77 V-score (harmonic mean of cluster completeness and homogeneity). Our results show that representation learning with retrofitting offers a robust general method to automatically expose dialectal differences and regional variation at a finer granularity than was previously possible."}
{"_id": "e5ddf76d52ea98e4d54202ef4a7ef82b5b6f912d", "title": "Sky Detection in Hazy Image", "text": "Sky detection plays an essential role in various computer vision applications. Most existing sky detection approaches, being trained on ideal dataset, may lose efficacy when facing unfavorable conditions like the effects of weather and lighting conditions. In this paper, a novel algorithm for sky detection in hazy images is proposed from the perspective of probing the density of haze. We address the problem by an image segmentation and a region-level classification. To characterize the sky of hazy scenes, we unprecedentedly introduce several haze-relevant features that reflect the perceptual hazy density and the scene depth. Based on these features, the sky is separated by two imbalance SVM classifiers and a similarity measurement. Moreover, a sky dataset (named HazySky) with 500 annotated hazy images is built for model training and performance evaluation. To evaluate the performance of our method, we conducted extensive experiments both on our HazySky dataset and the SkyFinder dataset. The results demonstrate that our method performs better on the detection accuracy than previous methods, not only under hazy scenes, but also under other weather conditions."}
{"_id": "0e6e72c40f438c5c0e5c7ca47448d57e0a8c6e54", "title": "A Study and Comparison of Sentiment Analysis Methods for Reputation Evaluation", "text": "The aim of this paper is to study and compare some of the methods used to evaluate the reputation of items using sentiment analysis. We explain the challenge of the increasing amount of data available on the Internet and the role of sentiment analysis in mining this information. We classify recent solutions into different categories based on techniques, document approach, and rating methods. We present six methods corresponding to different categories and analyze them based on the technique used, advances and results."}
{"_id": "81ac71d5fefcda72ee3c6b63db37335934137441", "title": "Target Detection in SFCW Ground Penetrating Radar with C3 Algorithm and Hough Transform based on GPRMAX Simulation and Experimental Data", "text": "We perform target detection in B-scan images obtained using stepped frequency continuous wave (SFCW) ground-penetrating radar (GPR) simulation with Matlab and the open-source software for electromagnetic propagation simulation gprMax and using experimental data from measurements available on the Web. We perform hyperbola detection and target depth estimation using the Hough transform. Additionally, we reduce the complexity of Hough transform using the Column-Connection Clustering (C3) algorithm. The results show that gprMax can be used with SFCW modulated signals, besides its use with conventional impulse signals (impulse GPR). The proposed combination of C3 and Hough transform shows very good detection capability. These algorithms are also tested on experimental GPR data available on the Web to show their applicability in real soil conditions. Mean subtraction method and Singular Value Decomposition (SVD) algorithms are used to reduce the effects of the ground reflected wave."}
{"_id": "5476797b6be75b27b7e2780a6cd61dab3e3acf87", "title": "StackMap: Low-Latency Networking with the OS Stack and Dedicated NICs", "text": "StackMap leverages the best aspects of kernel-bypass networking into a new low-latency Linux network service based on the full-featured TCP kernel implementation, by dedicating network interfaces to applications and offering an extended version of the netmap API as a zero-copy, lowoverhead data path while retaining the socket API for the control path. For small-message, transactional workloads, StackMap outperforms baseline Linux by 4 to 80 % in latency and 4 to 391 % in throughput. It also achieves comparable performance with Seastar, a highly-optimized user-level TCP/IP stack for DPDK."}
{"_id": "1a736409c7711f8673f31d366f583ddc8759547f", "title": "Kalman filters improve LSTM network performance in problems unsolvable by traditional recurrent nets", "text": "The long short-term memory (LSTM) network trained by gradient descent solves difficult problems which traditional recurrent neural networks in general cannot. We have recently observed that the decoupled extended Kalman filter training algorithm allows for even better performance, reducing significantly the number of training steps when compared to the original gradient descent training algorithm. In this paper we present a set of experiments which are unsolvable by classical recurrent networks but which are solved elegantly and robustly and quickly by LSTM combined with Kalman filters."}
{"_id": "0bbb99b053d21f66485c7b58487c4995fe5f3be1", "title": "Free-form deformation of solid geometric models", "text": "A technique is presented for deforming solid geometric models in a free-form manner. The technique can be used with any solid modeling system, such as CSG or B-rep. It can deform surface primitives of any type or degree: planes, quadrics, parametric surface patches, or implicitly defined surfaces, for example. The deformation can be applied either globally or locally. Local deformations can be imposed with any desired degree of derivative continuity. It is also possible to deform a solid model in such a way that its volume is preserved.The scheme is based on trivariate Bernstein polynomials, and provides the designer with an intuitive appreciation for its effects."}
{"_id": "d776ef17169f2a19417221d6844710c192fb67a1", "title": "Conscious and unconscious perception: Experiments on visual masking and word recognition", "text": "Five experiments are presented which explore the relation of masking to consciousness and visual word processing. In Experiment 1 a single word or blank field was followed by a pattern mask. Subjects had to make one of three decisions: Did anything precede the mask? To which of two probe words was what preceded the mask more similar graphically? To which of two probe words was it more similar semantically? As word-mask stimulus onset asynchrony (SOA) was reduced, subjects reached chance performance on the detection, graphic, and semantic decisions in that order. In Experiment 2, subjects again had to choose which of two words was more similar either graphically or semantically to a nondetectable masked word, but the forced-choice stimuli now covaried negatively on graphic and semantic similarity. Subjects were now unable to choose selectively on each dimension, suggesting that their ability to choose in Experiment 1 was passively rather than intentionally mediated. In Experiment 3 subjects had to make manual identification responses to color patches which were either accompanied or preceded by words masked to prevent awareness. Color-congruent words facilitated reaction time (RT), color-incongruent words delayed RT. Experiment 4 used a lexical decision task where a trial consisted of the critical letter string following another not requiring a response. When both were words they were either semantically associated or not. The first letter string was either left unmasked, energy masked monoptically, or pattern masked dichoptically to prevent awareness. The effect of association was equal in the unmasked and pattern masked cases, but absent with energy masking. In Experiment 5 repeating a word-plus-mask (where the SOA precluded detection) from 1 to 20 times (a) increased the association effect on a subsequent lexical decision, but had no effect on (b) detectability or(c) the semantic relatedness of forced guesses of the masked word. It is proposed that central pattern masking has little effect on visual processing itself (while peripheral energy masking does), but affects availability of records of the results of those processes to consciousness. Perceptual processing itself is unconscious and automatically proceeds to all levels of analysis and redescription available to the perceiver. The general importance of these findings is to cast doubt on the paradigm assumption that representations yielded by perceptual analysis are identical to and directly reflected by phenomenal percepts."}
{"_id": "bc8014d2fea5f2a57f282fe0d0ae3b504f7be7de", "title": "Automatic Tempo-based Classification of Argentine Tango Music", "text": "Tempo-based classification is a popular technique to classify music, with applications in media-art projects, systematic study of music etc. The success of this technique using dance music genres has typically been found to be over 70%, yet it was mostly evaluated using standardized music genres, such as ballroom music. Argentine tango is a music genre, originating in Buenos Aires at the end of the 19th century, which has gained a lot of popularity since its creation and is currently being danced in social settings all over the world (see side Photo taken in San Francisco, courtesy of Ref. [10]). There are three music types which define the Argentine tango played in social settings; although these types have different meters from one another, their tempi feature variations influenced by the trends of the era of composition and the composers\u02bc personal style. Therefore, Argentine tango seemingly makes a good, yet interesting candidate to explore for tempo-based classification schemes. In this report, we construct and evaluate an automatic, bayes tempo classifier with application to Argentine tango music, using a library with more than 2,000 pieces. We find that the accuracy of our classifier stands at 79%, which is well within the bound set by similar schemes applied to ballroom music, proving our original point that tempo is a promising feature in the classification of Argentine tango. We also come up with interesting insights about how to improve classification schemes for this particular musical genre."}
{"_id": "9a4cc46532acafa5918bf7ffcadc2552e814042c", "title": "A rule based bengali stemmer", "text": "One of the biggest challenges in doing word lookups is to derive the appropriate base word for any given word in Bengali. The basic concept to the solution of the problem is to eliminate inflections from a given word to derive its stem word. Stemmers attempt to reduce a word to its root form using stemming process, which reduces an inflected or derived word to its stem or root form. Existing works in the literature use lookup tables either for stem words or suffixes, increasing the overheads in terms of memory and time. This paper develops a rule-based algorithm that eliminates inflections stepwise without continuously searching for the desired root in the dictionary. To the best of our knowledge, this paper first investigates that, in Bengali morphology, for a large set of inflections, the stems can be computed algorithmically cutting down the inflections step by step. The proposed algorithm is independent of inflected word lengths and our evaluation shows around 88% accuracy."}
{"_id": "fe0995400b0bb91d2d7afb759c5cd20ba92f4899", "title": "Data quality management in the public domain: a case study within the Dutch justice system", "text": "The need for anonymity preservation within the justice domain requires the introduction of a trusted third party as an intermediary, while integrating individual databases within its boundaries. After the trusted third party encrypts records, it is no longer possible to perform checks on the data quality and correct data anomalies. Therefore, this research examines the concepts of data quality, data integration, record linkage and trusted third party and, then, combines these with four expert interviews in order to identify ways to assess and improve data quality while linking privacy-sensitive data. Next, the trusted data linkage framework (TDLF) is presented to aid data quality management while combining citizens\u2019 privacy-sensitive data from different organisations. Finally, we evaluate the framework in a case study to demonstrate how the quality of structured judicial data can be managed prior to its encryption, while using multiple databases as sources of data and a different final recipient."}
{"_id": "fbfb084872f40d5c2a1cc37ff883d9f8620e19a4", "title": "Discrete-Time Methods for the Analysis of Event Histories", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "7396b51dd7e1bbfe4ab23b8eb96d69d4f8275f63", "title": "Contactless Energy Transfer to a Moving Actuator", "text": "In this paper a new topology for contactless energy transfer is proposed and tested that can transfer energy to a moving actuator using inductive coupling. The proposed topology provides long-stroke contactless energy transfer capability in a plane and a short-stroke movement of a few millimeters perpendicular to the plane. In addition, it is tolerant to small rotations. The experimental setup consists of a platform with one secondary coil, which is attached to a linear actuator and a 3-phase brushless electromotor. Underneath the platform is an array of primary coils, which are each connected to a half-bridge square wave power supply. The energy transfer to the electromotor is measured while the platform is moved over the array of primary coils by the linear actuator. The secondary coil moves with a stroke of 18 cm at speeds over 1 m/s, while up to 33 W power is transferred with 90% efficiency"}
{"_id": "a6a51d35eac9975423eb00c2b4b5c635bdb69dff", "title": "UK Biobank: An Open Access Resource for Identifying the Causes of a Wide Range of Complex Diseases of Middle and Old Age", "text": "Cathie Sudlow and colleagues describe the UK Biobank, a large population-based prospective study, established to allow investigation of the genetic and non-genetic determinants of the diseases of middle and old age."}
{"_id": "91e772d7139e1708b6fb30a404886c9eb121de53", "title": "Bayesian Estimation of Beta Mixture Models with Variational Inference", "text": "Bayesian estimation of the parameters in beta mixture models (BMM) is analytically intractable. The numerical solutions to simulate the posterior distribution are available, but incur high computational cost. In this paper, we introduce an approximation to the prior/posterior distribution of the parameters in the beta distribution and propose an analytically tractable (closed form) Bayesian approach to the parameter estimation. The approach is based on the variational inference (VI) framework. Following the principles of the VI framework and utilizing the relative convexity bound, the extended factorized approximation method is applied to approximate the distribution of the parameters in BMM. In a fully Bayesian model where all of the parameters of the BMM are considered as variables and assigned proper distributions, our approach can asymptotically find the optimal estimate of the parameters posterior distribution. Also, the model complexity can be determined based on the data. The closed-form solution is proposed so that no iterative numerical calculation is required. Meanwhile, our approach avoids the drawback of overfitting in the conventional expectation maximization algorithm. The good performance of this approach is verified by experiments with both synthetic and real data."}
{"_id": "ed3a90b13b962a199a29dc6495d7dd73f199181f", "title": "Action recognition in depth videos using hierarchical gaussian descriptor", "text": "In this paper, we propose a new approach based on distribution descriptors for action recognition in depth videos. Our local features are computed from binary patterns which incorporate the shape and motion cues for effective action recognition. Given pixel-level features, our approach estimates video local statistics in a hierarchical manner, where the distribution of pixel-level features and that of frame-level descriptors are modeled using single Gaussians. In this way, our approach constructs video descriptors directly from low-level features without resorting to codebook learning required by Bag-of-features (BoF) based approaches. In order to capture the spatial geometry and temporal order of a video, we use a spatio-temporal pyramid representation for each video. Our approach is validated on six benchmark datasets, i.e. MSRAction3D, MSRGesture3D, DHA, SKIG, UTD-MHAD and CAD-120. The experimental results show that our approach gives good performance on all the datasets. In particular, it achieves state-of-the-art accuracies on DHA, SKIG and UTD-MHAD datasets."}
{"_id": "77dcb851f7b7d0e478d6e89f22c0726d8ba4949b", "title": "Experiences of integration and performance testing of multilingual OCR for printed Indian scripts", "text": "This paper presents integration and testing scheme for managing a large Multilingual OCR Project. The project is an attempt to implement an integrated platform for OCR of different Indian languages. Software engineering, workflow management and testing processes have been discussed in this paper. The OCR has now been experimentally deployed for some specific applications and currently is being enhanced for handling the space and time constraints, achieving higher recognition accuracies and adding new functionalities."}
{"_id": "46d2e0b490e032b711b634730a156005a4227106", "title": "A 5GHz Digital Fractional- $N$ PLL Using a 1-bit Delta\u2013Sigma Frequency-to-Digital Converter in 65 nm CMOS", "text": "A highly digital two-stage fractional- $N$ phase-locked loop (PLL) architecture utilizing a first-order 1-bit $ \\Delta \\!\\Sigma $ frequency-to-digital converter (FDC) is proposed and implemented in a 65nm CMOS process. Performance of the first-order 1-bit $ \\Delta \\!\\Sigma $ FDC is improved by using a phase interpolator-based fractional divider that reduces phase quantizer input span and by using a multiplying delay-locked loop that increases its oversampling ratio. We also describe an analogy between a time-to-digital converter (TDC) and a $ \\Delta \\!\\Sigma $ FDC followed by an accumulator that allows us to leverage the TDC-based PLL analysis techniques to study the impact of $ \\Delta \\!\\Sigma $ FDC characteristics on $ \\Delta \\!\\Sigma $ FDC-based fractional- $N$ PLL (FDCPLL) performance. Utilizing proposed techniques, a prototype PLL achieves 1 MHz bandwidth, \u2212101.6 dBc/Hz in-band phase noise, and $ {\\textrm {1.22 ps}_{\\textrm {rms}}}$ (1 kHz\u201340 MHz) jitter while generating 5.031GHz output from 31.25MHz reference clock input. For the same output frequency, the stand-alone second-stage fractional- $N$ FDCPLL achieves 1MHz bandwidth, \u2212106.1dBc/Hz in-band phase noise, and $ {\\textrm {403 fs}_{\\textrm {rms}}}$ jitter with a 500MHz reference clock input. The two-stage PLL consumes 10.1mW power from a 1V supply, out of which 7.1 mW is consumed by the second-stage FDCPLL."}
{"_id": "0e316f76dac185ee2d922e64d4659b2e36842196", "title": "Terra: a virtual machine-based platform for trusted computing", "text": "We present a flexible architecture for trusted computing, called Terra, that allows applications with a wide range of security requirements to run simultaneously on commodity hardware. Applications on Terra enjoy the semantics of running on a separate, dedicated, tamper-resistant hardware platform, while retaining the ability to run side-by-side with normal applications on a general-purpose computing platform. Terra achieves this synthesis by use of a trusted virtual machine monitor (TVMM) that partitions a tamper-resistant hardware platform into multiple, isolated virtual machines (VM), providing the appearance of multiple boxes on a single, general-purpose platform. To each VM, the TVMM provides the semantics of either an \"open box,\" i.e. a general-purpose hardware platform like today's PCs and workstations, or a \"closed box,\" an opaque special-purpose platform that protects the privacy and integrity of its contents like today's game consoles and cellular phones. The software stack in each VM can be tailored from the hardware interface up to meet the security requirements of its application(s). The hardware and TVMM can act as a trusted party to allow closed-box VMs to cryptographically identify the software they run, i.e. what is in the box, to remote parties. We explore the strengths and limitations of this architecture by describing our prototype implementation and several applications that we developed for it."}
{"_id": "0e851f49432767888b6ef4421beb268b9f2fc057", "title": "Breaking up is hard to do: security and functionality in a commodity hypervisor", "text": "Cloud computing uses virtualization to lease small slices of large-scale datacenter facilities to individual paying customers. These multi-tenant environments, on which numerous large and popular web-based applications run today, are founded on the belief that the virtualization platform is sufficiently secure to prevent breaches of isolation between different users who are co-located on the same host. Hypervisors are believed to be trustworthy in this role because of their small size and narrow interfaces.\n We observe that despite the modest footprint of the hypervisor itself, these platforms have a large aggregate trusted computing base (TCB) that includes a monolithic control VM with numerous interfaces exposed to VMs. We present Xoar, a modified version of Xen that retrofits the modularity and isolation principles used in micro-kernels onto a mature virtualization platform. Xoar breaks the control VM into single-purpose components called service VMs. We show that this componentized abstraction brings a number of benefits: sharing of service components by guests is configurable and auditable, making exposure to risk explicit, and access to the hypervisor is restricted to the least privilege required for each component. Microrebooting components at configurable frequencies reduces the temporal attack surface of individual components. Our approach incurs little performance overhead, and does not require functionality to be sacrificed or components to be rewritten from scratch."}
{"_id": "18f2d484c7722f4fcd21e1e2a3ae6ea5641dd104", "title": "Principles of remote attestation", "text": "Remote attestation is the activity of making a claim about properties of a target by supplying evidence to an appraiser over a network. We identify five central principles to guide development of attestation systems. We argue that (i) attestation must be able to deliver temporally fresh evidence; (ii) comprehensive information about the target should be accessible; (iii) the target, or its owner, should be able to constrain disclosure of information about the target; (iv) attestation claims should have explicit semantics to allow decisions to be derived from several claims; and (v) the underlying attestation mechanism must be trustworthy. We illustrate how to acquire evidence from a running system, and how to transport it via protocols to remote appraisers. We propose an architecture for attestation guided by these principles. Virtualized platforms, which are increasingly well supported on stock hardware, provide a natural basis for our attestation architecture."}
{"_id": "089895ef5f96bdb7eed9dd54f482c22350c2f30d", "title": "seL4: formal verification of an OS kernel", "text": "Complete formal verification is the only known way to guarantee that a system is free of programming errors.\n We present our experience in performing the formal, machine-checked verification of the seL4 microkernel from an abstract specification down to its C implementation. We assume correctness of compiler, assembly code, and hardware, and we used a unique design approach that fuses formal and operating systems techniques. To our knowledge, this is the first formal proof of functional correctness of a complete, general-purpose operating-system kernel. Functional correctness means here that the implementation always strictly follows our high-level abstract specification of kernel behaviour. This encompasses traditional design and implementation safety properties such as the kernel will never crash, and it will never perform an unsafe operation. It also proves much more: we can predict precisely how the kernel will behave in every possible situation.\n seL4, a third-generation microkernel of L4 provenance, comprises 8,700 lines of C code and 600 lines of assembler. Its performance is comparable to other high-performance L4 kernels."}
{"_id": "52b0c5495e8341c7a6f0afe4bba6b2e0c0dc3a68", "title": "Linux kernel integrity measurement using contextual inspection", "text": "This paper introduces the Linux Kernel Integrity Monitor (LKIM) as an improvement over conventional methods of software integrity measurement. LKIM employs contextual inspection as a means to more completely characterize the operational integrity of a running kernel. In addition to cryptographically hashing static code and data in the kernel, dynamic data structures are examined to provide improved integrity measurement. The base approach examines structures that control the execution flow of the kernel through the use of function pointers as well as other data that affect the operation of the kernel. Such structures provide an efficient means of extending the kernel operations, but they are also a means of inserting malicious code without modifying the static parts. The LKIM implementation is discussed and initial performance data is presented to show that contextual inspection is practical"}
{"_id": "d168c2bd29fcad2083586430dd76f54da69bc8a6", "title": "Person Re-Identification by Iterative Re-Weighted Sparse Ranking", "text": "In this paper we introduce a method for person re-identification based on discriminative, sparse basis expansions of targets in terms of a labeled gallery of known individuals. We propose an iterative extension to sparse discriminative classifiers capable of ranking many candidate targets. The approach makes use of soft- and hard- re-weighting to redistribute energy among the most relevant contributing elements and to ensure that the best candidates are ranked at each iteration. Our approach also leverages a novel visual descriptor which we show to be discriminative while remaining robust to pose and illumination variations. An extensive comparative evaluation is given demonstrating that our approach achieves state-of-the-art performance on single- and multi-shot person re-identification scenarios on the VIPeR, i-LIDS, ETHZ, and CAVIAR4REID datasets. The combination of our descriptor and iterative sparse basis expansion improves state-of-the-art rank-1 performance by six percentage points on VIPeR and by 20 on CAVIAR4REID compared to other methods with a single gallery image per person. With multiple gallery and probe images per person our approach improves by 17 percentage points the state-of-the-art on i-LIDS and by 72 on CAVIAR4REID at rank-1. The approach is also quite efficient, capable of single-shot person re-identification over galleries containing hundreds of individuals at about 30 re-identifications per second."}
{"_id": "589d06db45e2319b29fc96582ea6c8be369f57ed", "title": "Convolutional LSTM Networks for Video-based Person Re-identification", "text": "In this paper, we study the problem of video-based person re-identification. This is more challenging, and of greater practical interest, than conventional image-based person re-identification. To address this problem, we propose the use of convolutional Long Short Term Memory (LSTM) based networks to learn a video-based representation for person re-identification. To this end, we propose to jointly leverage deep Convolutional Neural Networks (CNNs) and LSTM networks. Given sequential video frames of a person, the spatial information encoded in the frames is first extracted by a set of CNNs. An encoderdecoder framework derived from LSTMs is employed to encode the resulting temporal of CNN outputs. This approach leads to a refined feature representation that is able to explicitly model the video as an ordered sequence, while preserving the spatial information. Comparative experiments demonstrate that our approach achieves the state-of-the-art performance for video-based person re-identification on iLIDS-VID and PRID 2011, the two primary public datasets for this purpose."}
{"_id": "84534c6fa2dab4768617205b302bfbce9b3bb376", "title": "A Review of Ensemble Methods in Bioinformatics", "text": "Ensemble learning is an intensively studies technique in machine learning and pattern recognition. Recent work in computational biology has seen an increasing use of ensemble learning methods due to their unique advantages in dealing with small sample size, high-dimensionality, and complexity data structures. The aim of this article is two-fold. First, it is to provide a review of the most widely used ensemble learning methods and their application in various bioinformatics problems, including the main topics of gene expression, mass spectrometry-based proteomics, gene-gene interaction identification from genome-wide association studies, and prediction of regulatory elements from DNA and protein sequences. Second, we try to identify and summarize future trends of ensemble methods in bioinformatics. Promising directions such as ensemble of support vector machine, meta-ensemble, and ensemble based feature selection are discussed."}
{"_id": "f1e011307ccc62ce526849fc8ffd8487f452b5d4", "title": "Integration of independent component analysis and neural networks for ECG beat classification", "text": "In this paper, we propose a scheme to integrate independent component analysis (ICA) and neural networks for electrocardiogram (ECG) beat classification. The ICA is used to decompose ECG signals into weighted sum of basic components that are statistically mutual independent. The projections on these components, together with the RR interval, then constitute a feature vector for the following classifier. Two neural networks, including a probabilistic neural network (PNN) and a back-propagation neural network (BPNN), are employed as classifiers. ECG samples attributing to eight different beat types were sampled from the MIT-BIH arrhythmia database for experiments. The results show high classification accuracy of over 98% with either of the two classifiers. Between them, the PNN shows a slightly better performance than BPNN in terms of accuracy and robustness to the number of ICA-bases. The impressive results prove that the integration of independent component analysis and neural networks, especially PNN, is a promising scheme for the computer-aided diagnosis of heart diseases based on ECG. 2007 Elsevier Ltd. All rights reserved."}
{"_id": "3580ebc04b2ee9fe95652436f1b4c9a2779e1733", "title": "Union simulation on lower limbs rehabilitation robot based on MATLAB and ADAMS", "text": "For people who suffers the lower limbs motor dysfunction, using lower limbs exoskeleton rehabilitation devices is a very effective method to restore the motor ability of lower limbs. However, it works only if the device has a suitable mechanical structure and appropriate design parameters for the specific individual. To get this work completed, we carry out the modeling and simulation of a new proposed lower limbs rehabilitation robot with joint help of two kinds of software, i.e. the MATLAB and the ADAMS. Firstly, the three-dimensional model of the robot is created with the SolidWorks software. After importing the model into ADAMS, some essential kinematic pairs are added into the movable joints. Meanwhile, the control system model is established using the MATLAB/SIMULINK software. Then, the union simulation of the lower limbs rehabilitation robot is carried out by using the interface modular ADAMS/CONTROL. The results show that the proposed rehabilitation robot has a good kinematic performance, which means that the device can be used in the actual rehabilitation process. Besides, the results also indicate that the control method is of good trajectory tracking ability. These simulated results will be significant for the development of physical prototype of lower limbs rehabilitation robot."}
{"_id": "87c69cdc26590970e08fed0d8f08af096963122f", "title": "Efficient and Consistent Flow Update for Software Defined Networks", "text": "Software defined network (SDN) provides flexible and scalable routing by separating control plane and data plane. With centralized control, SDN has been widely used in traffic engineering, link failure recovery, and load balancing. This work considers the flow update problem, where a set of flows need to be migrated or rearranged due to change of network status. During flow update, efficiency and consistency are two main challenges. Efficiency refers to how fast these updates are completed, while consistency refers to prevention of blackholes, loops, and network congestions during updates. This paper proposes a scheme that maintains all these properties. It works in four phases. The first phase partitions flows into shorter routing segments to increase update parallelism. The second phase generates a global dependency graph of these segments to be updated. The third phase conducts actual updates and then adjusts dependency graphs accordingly. The last phase deals with deadlocks, if any, and then loops back to phase three if necessary. Through simulations, we validate that our scheme not only ensures freedom of blackholes, loops, congestions, and deadlocks during flow updates, but is also faster than existing schemes."}
{"_id": "65643341ba8172057229074fe2eb31258650adb2", "title": "Realization of three-dimensional walking of a cheetah-modeled bio-inspired quadruped robot", "text": "Adaptability of quadruped animals is not solely reached by brain control, but by the interaction between its body, environment, and control. Especially, morphology of the body is supposed to contribute largely to the adaptability. We have tried to understand quadrupedal locomotion by building a bio-inspired quadruped robot named \u201cPneupard\u201d, which has a feline-like muscular-skeletal structure. In our previous study, we successfully realized alternative gait of hindlimbs by reflex control based on the sole touch information, which is called an unloading rule, and that of forelimbs as well. In this paper, we finally connect forelimbs and hindlimbs by a rigid spine, and conduct 3D walking experiments only with the simple unloading rule. Through several preliminary experiments, we realize that the touch information on the sole is the most critical for stable 3D walking."}
{"_id": "4ab50e9055d08365db3ae56187d3aba56a82a36d", "title": "Development of an exoskeleton haptic interface for virtual task training", "text": "An exoskeleton haptic interface is developed for functional training in virtual environments. A composite control scheme enables a variety of tasks to be implemented, and a \u201cQt\u201d graphics library is used to generate the virtual environment for the haptic interface at the hand and graphical user interfaces for input and telemetry. Inter-process communications convert telemetry from the exoskeleton into motion commands for objects in the virtual environment. A second haptic interface at the upper arm is used to control the elbow orbit self-motion of the arm during tasks. Preliminary results are reviewed for a wall-painting task in which the virtual wall stiffness and viscosity are generated using an admittance controller."}
{"_id": "d85a5e4f1d3108a96490259540973f4032f43ce1", "title": "Centrifugal fan design for permanent magnet synchronous motor in a traction application", "text": "The requirements of high torque density and high efficiency, which are particularly pronounced in electric traction applications, often result in substantial thermal loading of electric machines for driving trams, electric multiple units (EMU) or electric cars. Permanent magnet synchronous machines are suitable candidates for traction applications due to their inherently high torque density and high efficiency. At the same time they are sensitive to temperature rise, especially in permanent magnets, highlighting the need for implementation of efficient cooling system. The performance of the cooling system and its ability to remove heat directly affect the attainable torque and efficiency of the electric machine. In this paper, the selection and sizing of the cooling system for an interior permanent magnet motor designed to drive a low-floor tram is presented. The procedure for selecting the basic dimensions of the centrifugal fan according to the analytical formulas in combination with computational fluid dynamics (CFD) analysis are explained. In addition to the geometry of the centrifugal fan itself, the geometry of the passive system components (e.g. air flow router) which have a significant impact on the performance of the cooling system, are also considered. The results of computer aided CFD analysis, which is taken as a benchmark of system performance in the design stage of the cooling system, have been confirmed with measurements on the machine prototype."}
{"_id": "9fc7bec27774eb786feced2d0fe522092bc553fe", "title": "Sentiment Analysis using Deep Convolutional Neural Networks with Distant Supervision Master Thesis", "text": "This thesis addresses the problem of predicting message-level sentiments of English micro-blog messages from Twitter. Convolutional neural networks (CNN) have shown great promise in the task of sentiment classification. Here we expand the CNN proposed by [31, 32] and perform an in-depth analysis to deepen the understanding of these systems. In a first step we compare the performance of different architectures with focus on deeper architectures. We show how different choices of hyper-parameters impact the results and present the benefits and challenges of using deeper architectures. Second, we introduce the procedure for achieving a high quality initialization of the CNN and show the massive impact it has on its predictive capabilities. This procedure consists of creating custom word embeddings followed by pre-training the network using 90M tweets where the polarity of a tweet is inferred by noisy labels. Third, we present an in-depth analysis of the supervised phase. We show the impact of the pre-training on the supervised phase. We show how the choice of the optimization algorithm and the step size impact the variance of the score during the supervised phase. The analysis is concluded by an inspection of the errors made by the system. This gives us insights into the inner workings of the CNN and hints as to what is needed for further improvement. The CNN presented in this thesis is part of an ensemble which ranked 1st in the SemEval-2016 competition, Task 4a [22]. The task consists of classifying a given tweet as either positive, negative or neutral. This further demonstrates the potential these systems have in the field of sentiment analysis."}
{"_id": "6ed760ce183fad4c40f05b68b637095b6ff5d245", "title": "Climbing gaits of a modular biped climbing robot", "text": "For high-rise work in fields such as agriculture, forestry and architecture, and inspired by the climbing motion of inchworms, chimpanzees, and sloths, we have developed a biped climbing robot - Climbot. Consisting of five 1-DOF joint modules in series and two special grippers at the ends, Climbot is capable of grasping objects and climbing poles, trees and trusses. In this paper, we first introduce this novel robot, and then present three climbing gaits. We perform simulation to illustrate, verify and compare the proposed climbing gaits."}
{"_id": "ab41136c0a73b42463db10f9dd2913f8d8735155", "title": "Efficient algorithm ?", "text": ""}
{"_id": "c0bf57b798b7350ab8ace771faad6320663300bd", "title": "A Quantitative Comparison of Calibration Methods for RGB-D Sensors Using Different Technologies", "text": "RGB-D (Red Green Blue and Depth) sensors are devices that can provide color and depth information from a scene at the same time. Recently, they have been widely used in many solutions due to their commercial growth from the entertainment market to many diverse areas (e.g., robotics, CAD, etc.). In the research community, these devices have had good uptake due to their acceptable levelofaccuracyformanyapplicationsandtheirlowcost,butinsomecases,theyworkatthelimitof their sensitivity, near to the minimum feature size that can be perceived. For this reason, calibration processes are critical in order to increase their accuracy and enable them to meet the requirements of such kinds of applications. To the best of our knowledge, there is not a comparative study of calibration algorithms evaluating its results in multiple RGB-D sensors. Speci\ufb01cally, in this paper, a comparison of the three most used calibration methods have been applied to three different RGB-D sensors based on structured light and time-of-\ufb02ight. The comparison of methods has been carried out by a set of experiments to evaluate the accuracy of depth measurements. Additionally, an object reconstruction application has been used as example of an application for which the sensor works at the limit of its sensitivity. The obtained results of reconstruction have been evaluated through visual inspection and quantitative measurements."}
{"_id": "f30fe8b675bd0d11dc4c2bd4dcaa05b8b7599010", "title": "Interactive ramp merging planning in autonomous driving: Multi-merging leading PGM (MML-PGM)", "text": "Cooperative driving behavior is essential for driving in traffic, especially for ramp merging, lane changing or navigating intersections. Autonomous vehicles should also manage these situations by behaving cooperatively and naturally. The challenge of cooperative driving is estimating other vehicles' intentions. In this paper, we present a novel method to estimate other human-driven vehicles' intentions with the aim of achieving a natural and amenable cooperative driving behavior, without using wireless communication. The new approach allows the autonomous vehicle to cooperate with multiple observable merging vehicles on the ramp with a leading vehicle ahead of the autonomous vehicle in the same lane. To avoid calculating trajectories, simplify computation, and take advantage of mature Level-3 components, the new method reacts to merging cars by determining a following target for an off-the-shelf distance keeping module (ACC) which governs speed control of the autonomous vehicle. We train and evaluate the proposed model using real traffic data. Results show that the new approach has a lower collision rate than previous methods and generates more human driver-like behaviors in terms of trajectory similarity and time-to-collision to leading vehicles."}
{"_id": "c67a8f1da910152dcfda706bda25a37164a75110", "title": "Religious meaning and subjective well-being in late life.", "text": "OBJECTIVES\nThe purpose of this study is to examine the relationship between religious meaning and subjective well-being. A major emphasis is placed on assessing race differences in the relationship between these constructs.\n\n\nMETHODS\nInterviews were conducted with a nationwide sample of older White and older Black adults. Survey items were administered to assess a sense of meaning in life that is derived specifically from religion. Subjective well-being was measured with indices of life satisfaction, self-esteem, and optimism.\n\n\nRESULTS\nThe findings suggest that older adults who derive a sense of meaning in life from religion tend to have higher levels of life satisfaction, self-esteem, and optimism. The data further reveal that older Black adults are more likely to find meaning in religion than older White adults. In addition, the relationships among religious meaning, life satisfaction, self-esteem, and optimism tend to be stronger for older African Americans persons than older White persons.\n\n\nDISCUSSION\nResearchers have argued for some time that religion may be an important source of resilience for older Black adults, but it is not clear how these beneficial effects arise. The data from this study suggest that religious meaning may be an important factor."}
{"_id": "6a636ea58ecf86cba2f25579c3e9806bbd5ddce9", "title": "A New Rectenna With All-Polarization-Receiving Capability for Wireless Power Transmission", "text": "This letter presents a new rectenna with all-polarization-receiving capability for wireless power transmission applications. By utilizing a dual linearly-polarized antenna, the incident wave of arbitrary polarization can be totally collected at its two ports. To be connected with the antenna as a rectenna, a dual-input rectifier is designed so that the RF power supplied by the two ports of the antenna can be efficiently rectified. With a proper polarization of the incident wave, the proposed rectenna exhibits a maximum efficiency of 78% under the input power density of 295.3 \u03bcW/cm2. Moreover, under the same power density, the rectenna's efficiency can always stay higher than 61% regardless of the incident wave's polarization."}
{"_id": "23a0a9b16462b96bf7b5aa6bb4709e919d7d7626", "title": "Linear vs Nonlinear MPC for Trajectory Tracking Applied to Rotary Wing Micro Aerial Vehicles", "text": "Precise trajectory tracking is a crucial property for Micro Air Vehicles (MAVs) to operate in cluttered environment or under disturbances. In this paper we present a detailed comparison between two state-of-the-art model-based control techniques for MAV trajectory tracking. A classical Linear Model Predictive Controller (LMPC) is presented and compared against a more advanced Nonlinear Model Predictive Controller (NMPC) that considers the full system model. In a careful analysis we show the advantages and disadvantages of the two implementations in terms of speed and tracking performance. This is achieved by evaluating hovering performance, step response, and aggressive trajectory tracking under nominal conditions and under external wind disturbances."}
{"_id": "6d1ec802547fbf40f34b4ef5380d76a0c6321d14", "title": "Extraction of Multi-word Expressions from Small Parallel Corpora", "text": "M.Sc. in Computer Science Cum Laude, advised by Shuly Wintner. 2007-2010 \u2022 Research topics: semi-supervised multiword expression extraction from parallel and monolingual corpora. \u2022 Thesis: \u201cExtraction of Multi-word Expressions from Small Parallel Corpora\u201d. \u2022 Awards: Gutwirth award for excellence (awarded top 50 graduate students in Israel); excellence scholarship in honor of Captain Uri Akavia."}
{"_id": "fddfd37d19c22809556dc9a6566f49dee80696a9", "title": "Optically transparent conductive polymer RFID meandering dipole antenna", "text": "In this paper, we present optically transparent flexible conductive polymer antennas for radio frequency identification systems. The designs for these antennas are presented along with simulated and measured results of antenna radiating properties. These conductive polymer antennas are compared to antennas with the same design fabricated out of copper. Finally, we include an analysis of the optical transparency of the conductive polymer antennas."}
{"_id": "17d00c5f11e0fb83946ccb681b9f7ea1b403f26b", "title": "Cryptonite: a programmable crypto processor architecture for high bandwidth applications", "text": "Cryptographic methods are widely used within networking and digital rights management. Numerous algorithms exist, e.g. spanning VPNs or distributing sensitive data over a shared network infrastructure. While these algorithms can be run with moderate performance on general purpose processors, such processors do not meet typical embedded systems requirements (e.g. area, cost and power consumption). Instead, specialized cores dedicated to one or a combination of algorithms are typically used. These cores provide very high bandwidth data transmission and meet the needs of embedded systems. However, with such cores changing the algorithm is not possible without replacing the hardware. This paper describes a fully programmable processor architecture which has been tailored for the needs of a spectrum of cryptographic algorithms and has been explicitly designed to run at high clock rates while maintaining a significantly better performance/area/power tradeoff than general purpose processors. Both the architecture and instruction set have been developed to achieve a bits-per-clock rate of greater than one, even with complex algorithms. This performance will be demonstrated with standard cryptographic algorithms (AES and DES) and a widely used hash algorithm (MD5)."}
{"_id": "46609ce9194f801bc65afe73f581f753bb8563d0", "title": "Potentials and Challenges of C-RAN Supporting Multi-RATs Toward 5G Mobile Networks", "text": "This paper presents an overview of the cloud radio access network (C-RAN), which is a key enabler for future mobile networks in order to meet the explosive capacity demand of mobile traffic, and reduce the capital and operating expenditure burden faced by operators. We start by reviewing the requirements of future mobile networks, called 5G, followed by a discussion on emerging network concepts for 5G network architecture. Then, an overview of C-RAN and related works are presented. As a significant scenario of a 5G system, the ultra dense network deployment based on C-RAN is discussed with focuses on flexible backhauling, automated network organization, and advanced mobility management. Another import feature of a 5G system is the long-term coexistence of multiple radio access technologies (multi-RATs). Therefore, we present some directions and preliminary thoughts for future C-RAN-supporting Multi-RATs, including joint resource allocation, mobility management, as well as traffic steering and service mapping."}
{"_id": "efe133717899b41cd4c0b0c999da312d3af60a6e", "title": "Depth-Based Hand Pose Estimation: Methods, Data, and Challenges", "text": "Hand pose estimation has matured rapidly in recent years. The introduction of commodity depth sensors and a multitude of practical applications have spurred new advances. We provide an extensive analysis of the state-of-the-art, focusing on hand pose estimation from a single depth frame. To do so, we have implemented a considerable number of systems, and have released software and evaluation code. We summarize important conclusions here: (1) Coarse pose estimation appears viable for scenes with isolated hands. However, high precision pose estimation [required for immersive virtual reality and cluttered scenes (where hands may be interacting with nearby objects and surfaces) remain a challenge. To spur further progress we introduce a challenging new dataset with diverse, cluttered scenes. (2) Many methods evaluate themselves with disparate criteria, making comparisons difficult. We define a consistent evaluation criteria, rigorously motivated by human experiments. (3) We introduce a simple nearest-neighbor baseline that outperforms most existing systems. This implies that most systems do not generalize beyond their training sets. This also reinforces the under-appreciated point that training data is as important as the model itself. We conclude with directions for future progress."}
{"_id": "240d5390af19bb43761f112b0209771f19bfb696", "title": "Towards an intelligent framework for multimodal affective data analysis", "text": "An increasingly large amount of multimodal content is posted on social media websites such as YouTube and Facebook everyday. In order to cope with the growth of such so much multimodal data, there is an urgent need to develop an intelligent multi-modal analysis framework that can effectively extract information from multiple modalities. In this paper, we propose a novel multimodal information extraction agent, which infers and aggregates the semantic and affective information associated with user-generated multimodal data in contexts such as e-learning, e-health, automatic video content tagging and human-computer interaction. In particular, the developed intelligent agent adopts an ensemble feature extraction approach by exploiting the joint use of tri-modal (text, audio and video) features to enhance the multimodal information extraction process. In preliminary experiments using the eNTERFACE dataset, our proposed multi-modal system is shown to achieve an accuracy of 87.95%, outperforming the best state-of-the-art system by more than 10%, or in relative terms, a 56% reduction in error rate."}
{"_id": "438219194cedac00974ad28604b63a66e0b6f436", "title": "Opensmile: the munich versatile and fast open-source audio feature extractor", "text": "We introduce the openSMILE feature extraction toolkit, which unites feature extraction algorithms from the speech processing and the Music Information Retrieval communities. Audio low-level descriptors such as CHROMA and CENS features, loudness, Mel-frequency cepstral coefficients, perceptual linear predictive cepstral coefficients, linear predictive coefficients, line spectral frequencies, fundamental frequency, and formant frequencies are supported. Delta regression and various statistical functionals can be applied to the low-level descriptors. openSMILE is implemented in C++ with no third-party dependencies for the core functionality. It is fast, runs on Unix and Windows platforms, and has a modular, component based architecture which makes extensions via plug-ins easy. It supports on-line incremental processing for all implemented features as well as off-line and batch processing. Numeric compatibility with future versions is ensured by means of unit tests. openSMILE can be downloaded from http://opensmile.sourceforge.net/."}
{"_id": "8626531209284ba1eaba9e41422ef45337d88f01", "title": "A Deeper Look into Sarcastic Tweets Using Deep Convolutional Neural Networks", "text": "Sarcasm detection is a key task for many natural language processing tasks. In sentiment analysis, for example, sarcasm can flip the polarity of an \u201capparently positive\u201d sentence and, hence, negatively affect polarity detection performance. To date, most approaches to sarcasm detection have treated the task primarily as a text categorization problem. Sarcasm, however, can be expressed in very subtle ways and requires a deeper understanding of natural language that standard text categorization techniques cannot grasp. In this work, we develop models based on a pre-trained convolutional neural network for extracting sentiment, emotion and personality features for sarcasm detection. Such features, along with the network\u2019s baseline features, allow the proposed models to outperform the state of the art on benchmark datasets. We also address the often ignored generalizability issue of classifying data that have not been seen by the models at learning phase."}
{"_id": "ad3b3d00d3601190dd04155d69c343fb03bfe704", "title": "Convolutional recurrent neural networks: Learning spatial dependencies for image representation", "text": "In existing convolutional neural networks (CNNs), both convolution and pooling are locally performed for image regions separately, no contextual dependencies between different image regions have been taken into consideration. Such dependencies represent useful spatial structure information in images. Whereas recurrent neural networks (RNNs) are designed for learning contextual dependencies among sequential data by using the recurrent (feedback) connections. In this work, we propose the convolutional recurrent neural network (C-RNN), which learns the spatial dependencies between image regions to enhance the discriminative power of image representation. The C-RNN is trained in an end-to-end manner from raw pixel images. CNN layers are firstly processed to generate middle level features. RNN layer is then learned to encode spatial dependencies. The C-RNN can learn better image representation, especially for images with obvious spatial contextual dependencies. Our method achieves competitive performance on ILSVRC 2012, SUN 397, and MIT indoor."}
{"_id": "1ff86893508bfa418f9da6830453a05b419bc453", "title": "Traffic Light Control System for Emergency Vehicles Using Radio Frequency", "text": "Traffic congestion problem is a phenomena which contributed huge impact to the transportation system in country. This causes many problems especially when there are emergency cases at traffic light intersections which are always busy with many vehicles. A traffic light controller system is designed in order to solve these problems. This system was designed to be operated when it received signal from emergency vehicles based on radio frequency (RF) transmission and used the Programmable Integrated Circuit (PIC) 16F877A microcontroller to change the sequence back to the normal sequence before the emergency mode was triggered. This system will reduce accidents which often happen at the traffic light intersections because of other vehicle had to huddle for given a special route to emergency vehicle. As the result, this project successful analyzing and implementing the wireless communication; the radio frequency (RF) transmission in the traffic light control system for emergency vehicles. The prototype of this project is using the frequency of 434 MHz and function with the sequence mode of traffic light when emergency vehicles passing by an intersection and changing the sequence back to the normal sequence before the emergency mode was triggered. In future, this prototype system can be improved by controlling the real traffic situation, in fact improving present traffic light system technology."}
{"_id": "f390476f22c5962585e7c2cc7a77de770a34b60a", "title": "The Ease of Doing Business Index as a tool for Investment location decisions", "text": "The Ease of Doing Business Index (EDBI) uses 41 variables to compare the business environment of different countries. It is widely used by policy makers, researchers and multinational companies. This paper aims to assess EDBI\u2019s consistency and validity in representing the business environment by using factor analysis. It is found that the EDBI presents a limited consistency and descriptive power of a country\u2019s business environment. The consequence of these findings is that multinational firms should handle carefully the EDBI in their investment decisions. GEE The Ease of Doing Business Index as a tool for Investment location decisions \u2013 Jo\u00e3o Zambujal-Oliveira, Ricardo Pinheiro-Alves"}
{"_id": "4e19c13e368b8e8d23bb50004f9be687273df93f", "title": "Literature Review on Automatic Text Summarization: Single and Multiple Summarizations", "text": "The online information available on world wide web is in enormous amount. Search engines like Google, Yahoo were developed to retrieve information from the databases. But actual results were not obtained as the electronic information is increasing day by day. Thus automatic summarization came into demand. Automatic summarization gathers several documents as input and provides the shorter summarized version as output which is informative, unambiguous, save valuable time. Research was done on a single document and moved towards multiple documents. This review categorizes single and multiple summarization methods."}
{"_id": "87a2987b5fe3d6e7a63446ab1ceb1dd4e06eaa57", "title": "AN OVERVIEW OF MASSIVE MIMO SYSTEM IN 5 G", "text": "4G is proving good speeds up to 1Gbps. Then why do we need anything more. The problem is that it is not able to provide real time applications. 5G is the name given to the next generation of mobile data connectivity. It will definitely provide great speeds between 10Gbps to 100Gbps and it will have enough capacity. But the thing that separated 5G from 4G is latency; the latency provided by 4G is between 40ms to 60ms, whereas in 5G it will provide ultra latency between 1ms to 10ms. The standards for 5G will be set till 2020 and it will be applicable by 2022/23. In this paper we have discussed about 5G and its advantages, a probable architecture for 5G and some challenges in 5G. The main technology that maybe used in 5G are massive MIMO, millimetre wave communication, device to device communication, beam division multiple access etc. In this paper we have discussed about massive MIMO, channel estimate in massive MIMO, beam division multiple access technique to be used in massive MIMO, antenna selection in massive MIMO, capacity and energy efficiency in massive MIMO. In future 5G is going to be a technology which will be invisible, I will be just there everywhere just like electricity. It is a very good area for research as standards and frequency band for 5G are yet to be standardised."}
{"_id": "3e6a2ec424a8d7229298ffe4cd2343ae56361176", "title": "Contextual Dueling Bandits", "text": "We consider the problem of learning to choose actions using contextual information when provided with limited feedback in the form of relative pairwise comparisons. We study this problem in the dueling-bandits framework of Yue et al. (2009), which we extend to incorporate context. Roughly, the learner\u2019s goal is to find the best policy, or way of behaving, in some space of policies, although \u201cbest\u201d is not always so clearly defined. Here, we propose a new and natural solution concept, rooted in game theory, called a von Neumann winner, a randomized policy that beats or ties every other policy. We show that this notion overcomes important limitations of existing solutions, particularly the Condorcet winner which has typically been used in the past, but which requires strong and often unrealistic assumptions. We then present three efficient algorithms for online learning in our setting, and for approximating a von Neumann winner from batch-like data. The first of these algorithms achieves particularly low regret, even when data is adversarial, although its time and space requirements are linear in the size of the policy space. The other two algorithms require time and space only logarithmic in the size of the policy space when provided access to an oracle for solving classification problems on the space."}
{"_id": "25ec069b381d7c9faacc5ce54225d6aebb998812", "title": "Beyond Slideware: How a Free-form Presentation Medium Stimulates Free-form Thinking in the Classroom", "text": "We investigate how presentation in a free-form medium stimulates free-form thinking and discussion in the classroom. Most classroom presentations utilize slideware (e.g. PowerPoint). Yet, slides add intrusive segregations that obstruct the flow of information. In contrast, in a free-form medium of presentation, content is not separated into rigid slide compartments. Instead, it is visually arranged and transformed in a continuous space.\n We develop a case study that investigates student experiences authoring, presenting, viewing, and discussing free-form presentations in a graduate seminar class. We analyze interviews, present a sampling of student presentations, and develop findings: free-form presentation stimulates free-form thinking, spontaneous discussion, and emergent ideation."}
{"_id": "89f253d8b1e80528eaccec801b34410df76a3700", "title": "Efficient Restoration Method for Images Corrupted with Impulse Noise", "text": "This paper proposes a two-stage adaptive method for restoration of images corrupted with impulse noise. In the first stage, the pixels which are most likely contaminated by noise are detected based on their intensity values. In the second stage, an efficient average filtering algorithm is used to remove those noisy pixels from the image. Only pixels which are determined to be noisy in the first stage are processed in the second stage. The remaining pixels of the first stage are not processed further and are just copied to their corresponding locations in the restored image. The experimental results for the proposed method demonstrate that it is faster and simpler than even median filtering, and it is very efficient for images corrupted with a wide range of impulse noise densities varying from 10% to 90%. Because of its simplicity, high speed, and low computational complexity, the proposed method can be used in realtime digital image applications, e.g., in consumer electronic products such as digital televisions and cameras."}
{"_id": "c9dbf0a8e177c49d7345f44d2c6cd8f8db098677", "title": "A Simple Parallel Algorithm for the Maximal Independent Set Problem", "text": "Simple parallel algorithms for the maximal independent set (MIS) problem are presented. The first algorithm is a Monte Carlo algorithm with a very local property. The local property of this algorithm may make it a useful protocol design tool in distributed computing environments and artificial intelligence. One of the main contributions of this paper is the development of powerful and general techniques for converting Monte Carlo algorithms into deterministic algorithms. These techniques are used to convert the Monte Carlo algorithm for the MIS problem into a simple deterministic algorithm with the same parallel running time."}
{"_id": "32e98f8b38aefea9518e0035be5bf30ee68685cd", "title": "Interruptions on software teams: a comparison of paired and solo programmers", "text": "This study explores interruption patterns among software developers who program in pairs versus those who program solo. Ethnographic observations indicate that interruption length, content, type, occurrence time, and interrupter and interruptee strategies differed markedly for radically collocated pair programmers versus the programmers who primarily worked alone. After presenting an analysis of 242 interruptions drawn from more than 40 hours of observation data, we discuss how team configuration and work setting influenced how and when developers handled interruptions. We then suggest ways that CSCW systems might better support pair programming and, more broadly, provide interruption-handling support for workers in knowledge-intensive occupations."}
{"_id": "293878f70c72c0497e0bc62781c6d8a80ecabc10", "title": "Metric Learning and Manifolds Metric Learning and Manifolds : Preserving the Intrinsic Geometry", "text": "A variety of algorithms exist for performing non-linear dimension reduction, but these algorithms do not preserve the original geometry of the data except in special cases. In general, in the low-dimensional representations obtained, distances are distorted, as well as angles, areas, etc. This paper proposes a generic method to estimate the distortion incurred at each point of an embedding, and subsequently to correct distances and other intrinsic geometric quantities back to their original values (up to sampling noise). Our approach is based on augmenting the output of an embedding algorithm with geometric information embodied in the Riemannian metric of the manifold. The Riemannian metric allows one to compute geometric quantities (such as angle, length, or volume) for any coordinate system or embedding of the manifold. In this work, we provide an algorithm for estimating the Riemannian metric from data, consider its consistency, and demonstrate the uses of our approach in a variety of examples."}
{"_id": "5170a88889fabd660cf337c7887eb490bbc313bd", "title": "Gaze- vs. hand-based pointing in virtual environments", "text": "This paper contributes to the nascent body of literature on pointing performance in Virtual Environments (VEs), comparing gaze- and hand-based pointing. Contrary to previous findings, preliminary results indicate that gaze-based pointing is slower than hand-based pointing for distant objects."}
{"_id": "bac6ed8196e3c293d67e383acc63c78c095ea9b5", "title": "Toward preprototype user acceptance testing of new information systems: implications for software project management", "text": "Errors in requirements specifications have been identified as a major contributor to costly software project failures. It would be highly beneficial if information systems developers could verify requirements by predicting workplace acceptance of a new system based on user evaluations of its specifications measured during the earliest stages of the development project, ideally before building a working prototype. However, conventional wisdom among system developers asserts that prospective users must have direct hands-on experience with at least a working prototype of a new system before they can provide assessments that accurately reflect future usage behavior after workplace implementation. The present research demonstrates that this assumption is only partially true. Specifically, it is true that stable and predictive assessments of a system's perceived ease of use should be based on direct behavioral experience using the system. However, stable and behaviorally predictive measures of perceived usefulness can be captured from target users who have received information about a system's functionality, but have not had direct hands-on usage experience. This distinction is key because, compared to ease of use, usefulness is generally much more strongly linked to future usage intentions and behaviors in the workplace. Two longitudinal field experiments show that preprototype usefulness measures can closely approximate hands-on based usefulness measures, and are significantly predictive of usage intentions and behavior up to six months after workplace implementation. The present findings open the door toward research on how user acceptance testing may be done much earlier in the system development process than has traditionally been the case. Such preprototype user acceptance tests have greater informational value than their postprototype counterparts because they are captured when only a relatively small proportion of project costs have been incurred and there is greater flexibility to modify a new system's design attributes. Implications are discussed for future research to confirm the robustness of the present findings and to better understand the practical potential and limitations of preprototype user acceptance testing."}
{"_id": "a136d45f1071ed8f68f5c20075576e4c3ee4ef01", "title": "Value co-creation between firms and customers: The role of big data-based cooperative assets", "text": "To better understand how big data interconnects firms and customers in promoting value co-creation, we propose a theoretical framework of big data-based cooperative assets based on evidence of multiple case studies. We identify four types of big data resources and four types of associated digital platforms, and we explore how firms develop the cooperative assets by transforming big data resources via the theoretical lens of service-dominant logic. This study offers a new theoretical perspective on value co-creation and an alternative competitive strategy in the era of big data for firms. \u00e3 2016 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)."}
{"_id": "962e86ac97388780cd4b006e428af31a583cc690", "title": "The TypTop System: Personalized Typo-Tolerant Password Checking", "text": "Password checking systems traditionally allow login only if the correct password is submitted. Recent work on typo-tolerant password checking suggests that usability can be improved, with negligible security loss, by allowing a small number of typographical errors. Existing systems, however, can only correct a handful of errors, such as accidentally leaving caps lock on or incorrect capitalization of the first letter in a password. This leaves out numerous kinds of typos made by users, such as transposition errors, substitutions, or capitalization errors elsewhere in a password. Some users therefore receive no benefit from existing typo-tolerance mechanisms.\n We introduce personalized typo-tolerant password checking. In our approach, the authentication system learns over time the typos made by a specific user. In experiments using Mechanical Turk, we show that 45% of users would benefit from personalization. Therefore, we design a system, called TypTop, that securely implements personalized typo-tolerance. Underlying TypTop is a new stateful password-based encryption scheme that can be used to store recent failed login attempts. Our formal analysis shows that security in the face of an attacker that obtains the state of the system reduces to the difficulty of a brute-force dictionary attack against the real password. We implement TypTop for Linux and Mac OS login and report on a proof-of-concept deployment."}
{"_id": "9e821b309ed983f90362965a90ba80b02b302caf", "title": "Markov Logic Networks for Natural Language Question Answering", "text": "Our goal is to answer elementary-level science questions using knowledge extracted automatically from science textbooks, expressed in a subset of first-order logic. Given the incomplete and noisy nature of these automatically extracted rules, Markov Logic Networks (MLNs) seem a natural model to use, but the exact way of leveraging MLNs is by no means obvious. We investigate three ways of applying MLNs to our task. In the first, we simply use the extracted science rules directly as MLN clauses. Unlike typical MLN applications, our domain has long and complex rules, leading to an unmanageable number of groundings. We exploit the structure present in hard constraints to improve tractability, but the formulation remains ineffective. In the second approach, we instead interpret science rules as describing prototypical entities, thus mapping rules directly to grounded MLN assertions, whose constants are then clustered using existing entity resolution methods. This drastically simplifies the network, but still suffers from brittleness. Finally, our third approach, called Praline, uses MLNs to align the lexical elements as well as define and control how inference should be performed in this task. Our experiments, demonstrating a 15% accuracy boost and a 10x reduction in runtime, suggest that the flexibility and different inference semantics of Praline are a better fit for the natural language question answering task."}
{"_id": "2542752d40cd5152b0658d17ec8434725858aa53", "title": "The Acquisition , Transfer , and Depreciation of Knowledge in Service Organizations : Productivity in Franchises", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at . http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "11c116b2750e064ce32b3b8de8760234de508314", "title": "Massively Parallel NUMA-aware Hash Joins", "text": "Driven by the two main hardware trends increasing main memory and massively parallel multi-core processing in the past few years, there has been much research e ort in parallelizing well-known join algorithms. However, the non-uniform memory access (NUMA) of these architectures to main memory has only gained limited attention in the design of these algorithms. We study recent proposals of main memory hash join implementations and identify their major performance problems on NUMA architectures. We then develop a NUMA-aware hash join for massively parallel environments, and show how the speci c implementation details a ect the performance on a NUMA system. Our experimental evaluation shows that a carefully engineered hash join implementation outperforms previous high performance hash joins by a factor of more than two, resulting in an unprecedented throughput of 3/4 billion join argument tuples per second."}
{"_id": "43dca3f6bce492ae074dac0a2aa27d414b0e6259", "title": "Reducing the effect of email interruptions on employees", "text": "It is generally assumed that because it is not necessary to react to email messages when they arrive, employees will read their messages in their own time with minimum interruption to their work. This research has shown that email messages do have some disruptive effect by interrupting the user. Employees at the Danwood Group in the UK were monitored to see how they used email. It was found that most employees had their email software check for incoming messages every 5min and responded to the arrival of a message within 6 s. A recovery time between finishing reading the email and returning to normal work also existed though it was shorter than published recovery times for a telephone interrupt. This analysis has suggested that a number of methods can be employed to reduce this interrupt effect. Employee training, changing the settings and modes of using the email software and the introduction of a one line email facility are all shown to have beneficial effects. This has led to a series of recommendations that will enable the Danwood Group to make better use of email communication and increase employee effectiveness. r 2003 Elsevier Science Ltd. All rights reserved."}
{"_id": "513b0d34c52d4f8e27eed9a13cb3bad160cfc221", "title": "Immersion, presence and performance in virtual environments: an experiment with tri-dimensional chess", "text": "This paper describes an experiment to assess the influence of immersion on performance in immersive virtual environments. The task involved Tri-Dimensional Chess, and required subjects to reproduce on a real chess board the state of the board learned from a sequence of moves witnessed in a virtual environment. Twenty four subjects were allocated to a factorial design consisting of two levels of immersion (exocentric screen based, and egocentric HMD based), and two kinds of environment (plain and realistic. The results suggest that egocentric subjects performed better than exocentric, and those in the more realistic environment performed better than those in the less realistic environment. Previous knowledge of chess, and amount of virtual practice were also significant, and may be considered as control variables to equalise these factors amongst the subjects. Other things being equal, males remembered the moves better than females, although female performance improved with higher spatial ability test score. The paper also attempts to clarify the relationship between immersion, presence and performance, and locates the experiment within such a theoretical framework."}
{"_id": "41761effdfc136061fde3695e2551323d6a3a92d", "title": "Probabilistic graph and hypergraph matching", "text": "We consider the problem of finding a matching between two sets of features, given complex relations among them, going beyond pairwise. Each feature set is modeled by a hypergraph where the complex relations are represented by hyper-edges. A match between the feature sets is then modeled as a hypergraph matching problem. We derive the hyper-graph matching problem in a probabilistic setting represented by a convex optimization. First, we formalize a soft matching criterion that emerges from a probabilistic interpretation of the problem input and output, as opposed to previous methods that treat soft matching as a mere relaxation of the hard matching problem. Second, the model induces an algebraic relation between the hyper-edge weight matrix and the desired vertex-to-vertex probabilistic matching. Third, the model explains some of the graph matching normalization proposed in the past on a heuristic basis such as doubly stochastic normalizations of the edge weights. A key benefit of the model is that the global optimum of the matching criteria can be found via an iterative successive projection algorithm. The algorithm reduces to the well known Sinkhorn [15] row/column matrix normalization procedure in the special case when the two graphs have the same number of vertices and a complete matching is desired. Another benefit of our model is the straight-forward scalability from graphs to hyper-graphs."}
{"_id": "acfb93ebd449ba2ef5f696034bc6250abff3d3a7", "title": "A Review of Non-Invasive Sensory Feedback Methods for Transradial Prosthetic Hands", "text": "Any implant or prosthesis replacing a function or functions of an organ or group of organs should be biologically and sensorily integrated with the human body in order to increase their acceptance with their user. If this replacement is for a human hand, which is an important interface between humans and their environment, the acceptance issue and developing sensory-motor embodiment will be more challenging. Despite progress in prosthesis technologies, 50\u201360% of hand amputees wear a prosthetic device. One primary reason for the rejection of the prosthetic hands is that there is no or negligibly small feedback or tactile sensation from the hand to the user, making the hands less functional. In fact, the loss of a hand means interrupting the closed-loop sensory feedback between the brain (motor control) and the hand (sensory feedback through the nerves). The lack of feedback requires significant cognitive efforts from the user in order to do basic gestures and daily activities. To this aim, recently, there has been significant development in the provision of sensory feedback from transradial prosthetic hands, to enable the user take part in the control loop and improve user embodiment. Sensory feedback to the hand users can be provided via invasive and non-invasive methods. The latter includes the use of temperature, vibration, mechanical pressure and skin stretching, electrotactile stimulation, phantom limb stimulation, audio feedback, and augmented reality. This paper provides a comprehensive review of the non-invasive methods, performs their critical evaluation, and presents challenges and opportunities associated with the non-invasive sensory feedback methods."}
{"_id": "5678ade24d18e146574ad5c74b64b9030ed8cf44", "title": "Intrusion Detection with Neural Networks", "text": "With the rapid expansion of computer networks during the past few years, security has become a crucial issue for modern computer systems. A good way to detect illegitimate use is through monitoring unusual user activity. Methods of intrusion detection based on hand-coded rule sets or predicting commands on-line are laborous to build or not very reliable. This paper proposes a new way of applying neural networks to detect intrusions. We believe that a user leaves a \u2019print\u2019 when using the system; a neural network can be used to learn this print and identify each user much like detectives use thumbprints to place people at crime scenes. If a user\u2019s behavior does not match his/her print, the system administrator can be alerted of a possible security breech. A backpropagation neural network called NNID (Neural Network Intrusion Detector) was trained in the identification task and tested experimentally on a system of 10 users. The system was 96% accurate in detecting unusual activity, with 7% false alarm rate. These results suggest that learning user profiles is an effective way for detecting intrusions."}
{"_id": "2ce369cdbd656fd3a6df6e643657376e63d1aee2", "title": "Network intrusion detection", "text": "Intrusion detection is a new, retrofit approach for providing a sense of security in existing computers and data networks, while allowing them to operate in their current \"open\" mode. The goal of intrusion detection is to identify unauthorized use, misuse, and abuse of computer systems by both system insiders and external penetrators. The intrusion detection problem is becoming a challenging task due to the proliferation of heterogeneous computer networks since the increased connectivity of computer systems gives greater access to outsiders and makes it easier for intruders to avoid identification. Intrusion detection systems (IDSs) are based on the beliefs that an intruder's behavior will be noticeably different from that of a legitimate user and that many unauthorized actions are detectable. Typically, IDSs employ statistical anomaly and rulebased misuse models in order to detect intrusions. A number of prototype IDSs have been developed at several institutions, and some of them have also been deployed on an experimental basis in operational systems. In the present paper, several host-based and network-based IDSs are surveyed, and the characteristics of the corresponding systems are identified. The host-based systems employ the host operating system's audit trails as the main source of input to detect intrusive activity, while most of the network-based IDSs build their detection mechanism on monitored network traffic, and some employ host audit trails as well. An outline of a statistical anomaly detection algorithm employed in a typical IDS is also included.<>"}
{"_id": "741de9043b6c9d4548012ae7392d1e8257515a55", "title": "Comparison of learning algorithms for handwritten digit recognition", "text": "This paper compares the performance of several classi er algorithms on a standard database of handwritten digits. We consider not only raw accuracy, but also rejection, training time, recognition time, and memory requirements."}
{"_id": "7a00a26cdc93436db47882e2f3b81b2859977c0e", "title": "Efficient Progressive Sampling", "text": "Having access to massive amounts of data does not necessarily imply that induction algorithms must use them all. Samples often provide the same accuracy with far less computational cost. However, the correct sample size rarely is obvious. We analyze methods for progressive samplingusing progressively larger samples as long as model accuracy improves. We explore several notions of efficient progressive sampling. We analyze efficiency relative to induction with all instances; we show that a simple, geometric sampling schedule is asymptotically optimal, and we describe how best to take into account prior expectations of accuracy convergence. We then describe the issues involved in instantiating an efficient progressive sampler, including how to detect convergence. Finally, we provide empirical results comparing a variety of progressive sampling methods. We conclude that progressive sampling can be remarkably efficient ."}
{"_id": "9f25df8427a38ac45f227c5e40ee6a2b4f43cd17", "title": "Feature Extraction Methods for Time Series Data in SAS \u00ae Enterprise Miner TM", "text": "Because time series data have a unique data structure, it is not easy to apply some existing data mining tools directly to the data. For example, in classification and clustering problems, each time point is often considered a variable and each time series is considered an observation. As the time dimension increases, the number of variables also increases, in proportion to the time dimension. Therefore, data mining tasks require some feature extraction techniques to summarize each time series in a form that has a significantly lower dimension. This paper describes various feature extraction methods for time series data that are implemented in SAS Enterprise MinerTM. INTRODUCTION Time series data mining has four major tasks: clustering, indexing, classification, and segmentation. Clustering finds groups of time series that have similar patterns. Indexing finds similar time series in order, given a query series. Classification assigns each time series to a known category by using a trained model. Segmentation partitions time series. Time series data can be considered multidimensional data; this means that there is one observation per time unit, and each time unit makes one dimension. Some common data mining techniques are used for the tasks. However, in the real world, each time series is usually a high-dimensional sequence, such as stock market data whose prices change over time or data collected by a medical device. When the time unit is seconds, the dimensions of data that accumulate in just one hour are 3,600. Moreover, the dimensions are highly correlated to one another. The number of variables in training data increases proportionally as the time dimension increases. Most existing data mining tools cannot be used efficiently on time series data without a dimension reduction. Therefore, a dimension reduction is required through feature extraction techniques that map each time series to a lower-dimensional space. The most common techniques of dimension reduction in time series are singular value decomposition (SVD), discrete Fourier transformation (DFT), discrete wavelet transformation (DWT), and line segment methods. Some classical time series analyses, such as seasonal, trend, seasonal decomposition, and correlation analyses, are also used for feature extraction. This paper shows how to use SAS Enterprise Miner to implement the feature extraction techniques. But it does not attempt to explain these time series techniques in detail and instead invites the interested reader to explore some of the literature on the subject: Keogh and Pazzani (2000a), Keogh et al. (2000), and Gavrilov et al. (2000) for time series dimensionality reduction and the TIMESERIES procedure documentation in SAS/ETS for the classical time series analyses. This paper is organized as follows. First, it explains feature extraction and dimension reduction techniques that use classical time series analysis, and then it describes feature extraction techniques that use some well-known mathematical functions for dimension reduction. Next, it demonstrates the performance of dimension reduction techniques by presenting some examples. The paper uses SAS Enterprise Miner 13.1 to demonstrate these techniques."}
{"_id": "cf528f9fe6588b71efa94c219979ce111fc9c1c9", "title": "On Evaluation of 6D Object Pose Estimation", "text": "A pose of a rigid object has 6 degrees of freedom and its full knowledge is required in many robotic and scene understanding applications. Evaluation of 6D object pose estimates is not straightforward. Object pose may be ambiguous due to object symmetries and occlusions, i.e. there can be multiple object poses that are indistinguishable in the given image and should be therefore treated as equivalent. The paper defines 6D object pose estimation problems, proposes an evaluation methodology and introduces three new pose error functions that deal with pose ambiguity. The new error functions are compared with functions commonly used in the literature and shown to remove certain types of non-intuitive outcomes. Evaluation tools are provided at: https : //github.com/thodan/obj pose eval"}
{"_id": "5bd4017ea60f3f9e2d4ebe2e3cff7fde814295f6", "title": "The Lone-Actor Terrorist and the TRAP-18", "text": "An open source sample of 111 lone-actor terrorists from the United States and Europe were studied through the lens of the Terrorist Radicalization Assessment Protocol (TRAP-18). This investigative template consists of 8 proximal warning behaviors and 10 distal characteristics for active risk management or active monitoring, respectively, by national security threat assessors. Several aspects of criterion validity were tested in this known outcome sample. Seventy percent of the terrorists were positive for at least half or more of the indicators. When the sample was divided into Islamic extremists, right-wing extremists, and single-issue terrorists, there were no significant differences across all 18 indicators except for 4. When the sample was divided according to successful versus thwarted attackers, the successful attackers were significantly more fixated, creative, and innovative, and failed to have a prior sexually intimate pair bond. They were significantly less likely to have displayed pathway warning behavior and be dependent on a virtual community of likeminded true believers. Effect sizes were small to medium (! \" 0.190\u20130.317). The TRAP-18 appears to have promise as an eventual structured professional judgment risk assessment instrument according to some of the individual terrorist content domains outlined by Monahan (2012, 2016)."}
{"_id": "92349b09337460277d384bf077e06b25408058d4", "title": "Word and Phrase Translation with word2vec", "text": "Word and phrase tables are key inputs to machine translations, but costly to produce. New unsupervised learning methods represent words and phrases in a high-dimensional vector space, and these monolingual embeddings have been shown to encode syntactic and semantic relationships between language elements. The information captured by these embeddings can be exploited for bilingual translation by learning a transformation matrix that allows matching relative positions across two monolingual vector spaces. This method aims to identify high-quality candidates for word and phrase translation more costeffectively from unlabeled data. This paper expands the scope of previous attempts of bilingual translation to four languages (English, German, Spanish, and French). It shows how to process the source data, train a neural network to learn the high-dimensional embeddings for individual languages and expands the framework for testing their quality beyond the English language. Furthermore, it shows how to learn bilingual transformation matrices and obtain candidates for word and phrase translation, and assess their quality."}
{"_id": "7e1b20245eeb49f5715d21f2c67f62540a6ea135", "title": "Feature Selection Methods: Genetic Algorithms vs. Greedy-like Search", "text": "This paper presents a comparison between two feature selection methods, the Importance Score (IS) which is based on a greedy-like search and a genetic algorithm-based (GA) method, in order to better understand their strengths and limitations and their area of application. The results of our experiments show a very strong relation between the nature of the data and the behavior of both systems. The Importance Score method is more efficient when dealing with little noise and small number of interacting features, while the genetic algorithms can provide a more robust solution at the expense of increased computational effort."}
{"_id": "356827905c70ef763e3aa373f966fe6d8cf753f9", "title": "Fast texture synthesis using tree-structured vector quantization", "text": "Texture synthesis is important for many applications in computer graphics, vision, and image processing. However, it remains difficult to design an algorithm that is both efficient and capable of generating high quality results. In this paper, we present an efficient algorithm for realistic texture synthesis. The algorithm is easy to use and requires only a sample texture as input. It generates textures with perceived quality equal to or better than those produced by previous techniques, but runs two orders of magnitude faster. This permits us to apply texture synthesis to problems where it has traditionally been considered impractical. In particular, we have applied it to constrained synthesis for image editing and temporal texture generation. Our algorithm is derived from Markov Random Field texture models and generates textures through a deterministic searching process. We accelerate this synthesis process using tree-structured vector quantization."}
{"_id": "c0de99c5f15898e2d28f9946436fec2b831d4eae", "title": "ClothCap: seamless 4D clothing capture and retargeting", "text": "Designing and simulating realistic clothing is challenging. Previous methods addressing the capture of clothing from 3D scans have been limited to single garments and simple motions, lack detail, or require specialized texture patterns. Here we address the problem of capturing regular clothing on fully dressed people in motion. People typically wear multiple pieces of clothing at a time. To estimate the shape of such clothing, track it over time, and render it believably, each garment must be segmented from the others and the body. Our ClothCap approach uses a new multi-part 3D model of clothed bodies, automatically segments each piece of clothing, estimates the minimally clothed body shape and pose under the clothing, and tracks the 3D deformations of the clothing over time. We estimate the garments and their motion from 4D scans; that is, high-resolution 3D scans of the subject in motion at 60 fps. ClothCap is able to capture a clothed person in motion, extract their clothing, and retarget the clothing to new body shapes; this provides a step towards virtual try-on."}
{"_id": "54a30890cd4838fc685fe311bef5829338d7bda1", "title": "Representation and Mimesis in Generative Art : Creating Fifty Sisters", "text": "Fifty Sisters is a generative artwork commissioned for the Ars Electronica Museum in Linz. The work consists of fifty 1m \u221e 1m images of computer-synthesized plant-forms, algorithmically \u2018grown\u2019 from computer code using artificial evolution and generative grammars. Each plant-like form is derived from the primitive graphic elements of oil company logos. The title of the work refers to the original \u2018Seven Sisters\u2019 \u2014 a cartel of seven oil companies that dominated the global petrochemical industry and Middle East oil production from the mid\u20131940s until the oil crisis of the 1970s. In this paper I discuss the issue of representation in generative art and how dialogues in mimesis inform the production of a generative artwork, using Fifty Sisters as an example. I also provide information on how these concepts translate into the technical and how issues of representation necessarily pervade all computer-based generative art. C o m p u t i n g C o m m u n i c a t i o n A e s t h e t i c s a n d X . B e r g a m o , I t a l y . x c o a x . o r g"}
{"_id": "30a46f686d0831cfddf8c69ed4ac56300198d7e0", "title": "Arrays in Blitz++", "text": "Numeric arrays in Blitz++ rival the eeciency of Fortran, but without any extensions to the C++ language. Blitz++ has features unavailable in Fortran 90/95, such as arbitrary transpose operations, array renaming, tensor notation, partial reductions, multicomponent arrays and stencil operators. The library handles parsing and analysis of array expressions on its own using the expression templates technique, and performs optimizations (such as loop transformations) which have until now been the responsibility of compilers."}
{"_id": "3d99d2963cd0b3b78a16ddf47c761e8e676ec937", "title": "Integration of a Wireless I/O Interface for PROFIBUS and PROFINET for Factory Automation", "text": "Cost related to the wiring of sensors and actuators in the manufacturing industry is rapidly increasing with the number of field devices. Complex engineering as well as cable wear and tear are important factors. For over a decade, fieldbuses have been the dominating approach to simplify installation, reduce engineering efforts, and increase reliability at the field level. The next step in this process is to introduce wireless communication in certain problematic parts of the field network. In order to do this, a reliable real-time communication system, designed to cope with the harsh environments where factory automation systems are employed, is required. The wireless interface for sensors and actuators (WISA), with its two subconcepts WISA-power and WISA-com, is such a system. This paper discusses how the WISA concept can be efficiently integrated into wired field networks for factory automation-both state-of-the-art [process fieldbus decentral peripherals (PROFIBUS DP)] and new (PROFINET IO) networks. We propose amendments to WISA, which will improve the 802.11b/g coexistence and harmonize the integration of WISA for sensor/actuator communication in, e.g., PROFINET IO systems. The suggested improvements relate to both the radio protocol in order to improve coexistence with other wireless technologies as well as concepts for the integration of WISA into the field network of PROFINET/PROFIBUS."}
{"_id": "8c49d995e0b39732c46609db78b0f4ad73e4e4ba", "title": "Dual-band circularly polarized microstrip antenna", "text": "A new dual-band circularly polarized microstrip antenna is presented experimentally. In this paper, using dual-radiator a new design dual-band antenna can be obtained. The frequency ratio of this design is flexible which more than 1.4."}
{"_id": "ba849ec8faa208660560b5ffa50db6a341748319", "title": "A Transceiver Module for FMCW Radar Sensors Using 94-GHz Dot-Type Schottky Diode Mixer", "text": "In this study, we fabricated a 94-GHz transceiver module for a millimeter-wave (MMW) frequency modulation continuous wave (FMCW) radar sensor. The transceiver modules consist of a waveguide voltage-controlled oscillator (VCO) and Rx module using a single balanced mixer. We designed a mixer with a conversion loss of 6.4 dB, without using an amplifier. Also, the waveguide VCO consisted of an InP Gunn diode, a varactor diode, two bias posts with LPF, and a Magic Tee for the MMW radar transceiver. The fabricated VCO has a tuning range of 1280 MHz by a varactor bias of 0~20 V, 1.69% linearity range of 680 MHz, and current consumption of 154 to 157 mA. The completed module has a good conversion loss of 10.6 dB with an LO power of 11.4 dBm at 94 GHz. With this RF and LO input power, the conversion loss was maintained between 10.2-11.5 dB in the RF frequency range of 93.67-94.95 GHz."}
{"_id": "69131e9d67c04c8aac3bd08cf7b9388e276d9d3c", "title": "A half-bridge resonant inverter with three-phase PWM rectifier for induction heating", "text": "A half-bridge resonant inverter with three-phase PWM rectifier for induction heating is proposed in this paper. An adjustable frequency limit is introduced to maintain the inverter's switching frequency at a value higher than the resonant frequency for zero voltage operation. In case of no work piece in the induction coil, the dc bus can be adjusted while the switching frequency of the inverter remains at the limit. The important advantages of the proposed system are reduced switching loss, small size of filter and faster response at a no load condition. The proposed technique is implemented on the created hardware setup. The inverter operates at in a frequency range of 20\u201340 kHz, while the three-phase PWM rectifier runs at 23.8 kHz to deliver output power at 1.5 kW."}
{"_id": "e19d0a0d498905a6bd66357de89b8dc6aa274530", "title": "Predicting Friends and Foes in Signed Networks Using Inductive Inference and Social Balance Theory", "text": "Besides the notion of friendship, trust or support in social networking sites (SNSs), quite often social interactions also reflect users' antagonistic attitude towards each other. Thus, the hidden knowledge contained in social network data can be considered as an important resource to discover the formation of such positive and negative links. In this work, an inductive learning framework is presented to suggest 'friends' and 'foes' links to individuals which envisage the social balance among users in the corresponding friends and foes networks (FFN). First we learn a model by applying C4.5, the most widely adopted decision tree based classification algorithm, to exploit the feature patterns presented in the users' FFN and utilizing it to further predict friend/foe relationship of unknown links. Secondly, a quantitative measure of social balance, balance index, is used to support our decision on the recommendation of new friends and foes links (FFL) to avoid possible imbalance in the extended FFN with newly suggested links. The proposed scheme ensures that the recommendation of new FFLs either maintains or enhances the balancing factor of the existing FFN of an individual. Experimental results show the effectiveness of our proposed schemes."}
{"_id": "0021150934ac5e73a917e9e633fc992f9402b43d", "title": "A review on 3 D micro-additive manufacturing technologies", "text": "New microproducts need the utilization of a diversity of materials and have complicated three-dimensional (3D) microstructures with high aspect ratios. To date, many micromanufacturing processes have been developed but specific class of such processes are applicable for fabrication of functional and true 3D microcomponents/assemblies. The aptitude to process a broad range of materials and the ability to fabricate functional and geometrically complicated 3D microstructures provides the additive manufacturing (AM) processes some profits over traditional methods, such as lithography-based or micromachining approaches investigated widely in the past. In this paper, 3D micro-AM processes have been classified into three main groups, including scalable micro-AM systems, 3D direct writing, and hybrid processes, and the key processes have been reviewed comprehensively. Principle and recent progress of each 3D micro-AM process has been described, and the advantages and disadvantages of each process have been presented."}
{"_id": "9987d626a2c2e784f0f64bb3b66e85c1b0d79f95", "title": "Semantic audio content-based music recommendation and visualization based on user preference examples", "text": "Preference elicitation is a challenging fundamental problem when designing recommender systems. In the present work we propose a content-based technique to automatically generate a semantic representation of the user\u2019s musical preferences directly from audio. Starting from an explicit set of music tracks provided by the user as evidence of his/her preferences, we infer high-level semantic descriptors for each track obtaining a user model. To prove the benefits of our proposal, we present two applications of our technique. In the first one, we consider three approaches to music recommendation, two of them based on a semantic music similarity measure, and one based on a semantic probabilistic model. In the second application, we address the visualization of the user\u2019s musical preferences by creating a humanoid cartoon-like character \u2013 the Musical Avatar \u2013 automatically inferred from the semantic representation. We conducted a preliminary evaluation of the proposed technique in the context of these applications with 12 subjects. The results are promising: the recommendations were positively evaluated and close to those coming from state-ofthe-art metadata-based systems, and the subjects judged the generated visualizations to capture their core preferences. Finally, we highlight the advantages of the proposed semantic user model for enhancing the user interfaces of information filtering systems. 2012 Elsevier Ltd. All rights reserved."}
{"_id": "41c3dfda0d261070d84a2070c8b70b1f86cab52d", "title": "An estimation of the number of cells in the human body.", "text": "BACKGROUND\nAll living organisms are made of individual and identifiable cells, whose number, together with their size and type, ultimately defines the structure and functions of an organism. While the total cell number of lower organisms is often known, it has not yet been defined in higher organisms. In particular, the reported total cell number of a human being ranges between 10(12) and 10(16) and it is widely mentioned without a proper reference.\n\n\nAIM\nTo study and discuss the theoretical issue of the total number of cells that compose the standard human adult organism.\n\n\nSUBJECTS AND METHODS\nA systematic calculation of the total cell number of the whole human body and of the single organs was carried out using bibliographical and/or mathematical approaches.\n\n\nRESULTS\nA current estimation of human total cell number calculated for a variety of organs and cell types is presented. These partial data correspond to a total number of 3.72\u2009\u00d7\u200910(13).\n\n\nCONCLUSIONS\nKnowing the total cell number of the human body as well as of individual organs is important from a cultural, biological, medical and comparative modelling point of view. The presented cell count could be a starting point for a common effort to complete the total calculation."}
{"_id": "777e82b87b212fe131f1e35872b6fbcda5472543", "title": "Galvanic Skin Response : A Physiological Sensor System for Affective Computing", "text": "system is the change in electrical properties of skin due to variation in physiological and psychological conditions. The change is caused by the degree to which a person's sweat glands are active. Psychological status of a person tends to make the glands active and this change the skin resistance. Drier the skin is, higher will be the skin resistance. This variation in skin resistance ranges from 5k\u2126 to 25k\u2126. In the current work, a subject whose Galvanic skin response (GSR) is to be measured, has been shown/played a movie clipping, images or recorded audio signals. Depending upon the theme, emotion will be evoked in the subject. Due to the change in emotion, GSR varies. This variation in GSR is recorded for a fixed time interval. In the present work, a total of 75 subjects are selected. People from different age groups and social background, both male and female have been carefully considered. Data acquisition and analysis is carried out by using LabVIEW. The results obtained are convincing and follows the ground reality with a high level of accuracy. As the subject also gives the feedback about the emotions he/she experienced, the results obtained are validated."}
{"_id": "94648e10bb0c020f9469f1a895d8ccb9b4e82d7e", "title": "VANET-cloud: a generic cloud computing model for vehicular Ad Hoc networks", "text": "Cloud computing is a network access model that aims to transparently and ubiquitously share a large number of computing resources. These are leased by a service provider to digital customers, usually through the Internet. Due to the increasing number of traffic accidents and dissatisfaction of road users in vehicular networks, the major focus of current solutions provided by intelligent transportation systems is on improving road safety and ensuring passenger comfort. Cloud computing technologies have the potential to improve road safety and traveling experience in ITSs by providing flexible solutions (i.e., alternative routes, synchronization of traffic lights, etc.) needed by various road safety actors such as police, and disaster and emergency services. In order to improve traffic safety and provide computational services to road users, a new cloud computing model called VANET-Cloud applied to vehicular ad hoc networks is proposed. Various transportation services provided by VANET-Cloud are reviewed, and some future research directions are highlighted, including security and privacy, data aggregation, energy efficiency, interoperability, and resource management."}
{"_id": "8930cc1d4383f29afac85aa3d8c131e69015904a", "title": "A dual-polarity grappa kernel for the robust reconstruction of accelerated EPI data", "text": "The quality of high-resolution Echo Planar images of the human brain has improved greatly in recent years, enabled by novel multi-channel receiver coil arrays and parallel imaging. However, in regions with local field inhomogeneity, EPI artifacts limit which parts of the brain can be imaged successfully. In this work, we present evidence that certain image artifacts can be attributed to nonlinear phase errors that are present in regions of local susceptibility gradients and certain coil array elements. Because these phase errors cannot be corrected with conventional Nyquist ghost correction, we propose a new method that integrates ghost correction with parallel imaging reconstruction. The proposed Dual-Polarity GRAPPA method operates directly on raw EPI data to estimate k-space data from the under-sampled acquisition while simultaneously correcting inherent EPI phase errors. We present examples of this method successfully removing strong phase-error artifacts in high-resolution 7T EPI data."}
{"_id": "4b3643e5436a8b8430361e021a3c863765bab3fb", "title": "Marble: high-throughput phenotyping from electronic health records via sparse nonnegative tensor factorization", "text": "The rapidly increasing availability of electronic health records (EHRs) from multiple heterogeneous sources has spearheaded the adoption of data-driven approaches for improved clinical research, decision making, prognosis, and patient management. Unfortunately, EHR data do not always directly and reliably map to phenotypes, or medical concepts, that clinical researchers need or use. Existing phenotyping approaches typically require labor intensive supervision from medical experts. We propose Marble, a novel sparse non-negative tensor factorization method to derive phenotype candidates with virtually no human supervision. Marble decomposes the observed tensor into two terms, a bias tensor and an interaction tensor. The bias tensor represents the baseline characteristics common amongst the overall population and the interaction tensor defines the phenotypes. We demonstrate the capability of our proposed model on both simulated and patient data from a publicly available clinical database. Our results show that Marble derived phenotypes provide at least a 42.8% reduction in the number of non-zero element and also retains predictive power for classification purposes. Furthermore, the resulting phenotypes and baseline characteristics from real EHR data are consistent with known characteristics of the patient population. Thus it can potentially be used to rapidly characterize, predict, and manage a large number of diseases, thereby promising a novel, data-driven solution that can benefit very large segments of the population."}
{"_id": "239a012830a0d43c73eba4549a2980dc9e59ba4d", "title": "Using Augmented Reality to Visualise Architecture Designs in an Outdoor Environment", "text": "This paper presents the use of a wearable computer system to visualise outdoor architectural features using augmented reality. The paper examines the question How does one visualise a design for a building, modification to a building, or extension to an existing building relative to its physical surroundings? The solution presented to this problem is to use a mobile augmented reality platform to visualise the design in spatial context of its final physical surroundings. The paper describes the mobile augmented reality platform TINMITH2 used in the investigation. The operation of the system is described through a detailed example of the system in operation. The system was used to visualise a simple extension to a building on one of the University of South Australia campuses."}
{"_id": "322525892b839524f58d4df4a7b29c4f9a3b74de", "title": "Technologies for Augmented Reality Systems: Realizing Ultrasound-Guided Needle Biopsies", "text": "We present a real-time stereoscopic video-see-through augmented reality (AR) system applied to the medical procedure known as ultrasound-guided needle biopsy of the breast. The AR system was used by a physician during procedures on breast models and during non-invasive examinations of human subjects. The system merges rendered live ultrasound data and geometric elements with stereo images of the patient acquired through head-mounted video cameras and presents these merged images to the physician in a head-mounted display. The physician sees a volume visualization of the ultrasound data directly under the ultrasound probe, properly registered within the patient and with the biopsy needle. Using this system, a physician successfully guided a needle into an artificial tumor within a training phantom of a human breast. We discuss the construction of the AR system and the issues and decisions which led to the system architecture and the design of the video see-through head-mounted display. We designed methods to properly resolve occlusion of the real and synthetic image elements. We developed techniques for realtime volume visualization of timeand position-varying ultrasound data. We devised a hybrid tracking system which achieves improved registration of synthetic and real imagery and we improved on previous techniques for calibration of a magnetic tracker. CR"}
{"_id": "2516f21e4ae17a58bd437b8d16c5551178a5f8fb", "title": "Improving static and dynamic registration in an optical see-through HMD", "text": "In Augmented Reality, see-through HMDs superimpose virtual 3D objects on the real world. This technology has the potential to enhance a user's perception and interaction with the real world. However, many Augmented Reality applications will not be accepted until we can accurately register virtual objects with their real counterparts. In previous systems, such registration was achieved only from a limited range of viewpoints, when the user kept his head still. This paper offers improved registration in two areas. First, our system demonstrates accurate static registration across a wide variety of viewing angles and positions. An optoelectronic tracker provides the required range and accuracy. Three calibration steps determine the viewing parameters. Second, dynamic errors that occur when the user moves his head are reduced by predicting future head locations. Inertial sensors mounted on the HMD aid head-motion prediction. Accurate determination of prediction distances requires low-overhead operating systems and eliminating unpredictable sources of latency. On average, prediction with inertial sensors produces errors 2-3 times lower than prediction without inertial sensors and 5-10 times lower than using no prediction at all. Future steps that may further improve registration are outlined."}
{"_id": "37bec4e4edf7e251312ee849c0c74cf9d4576db7", "title": "A wearable computer system with augmented reality to support terrestrial navigation", "text": "To date augmented realities are typically operated in only a small defined area, in the order of a large room. This paper reports on our investigation into expanding augmented realities to outdoor environments. The project entails providing visual navigation aids to users. A wearable computer system with a see-through display, digital compass, and a differential GPS are used to provide visual cues while performing a standard orienteering task. This paper reports the outcomes of a set of trials using an off the shelf wearable computer, equipped with a custom built navigation software package, \"map-in-the-hat\"."}
{"_id": "2f149870ba8e9356fd76450ffff0ab46c593b074", "title": "Benchmarking of Document Image Analysis Tasks for Palm Leaf Manuscripts from Southeast Asia", "text": "This paper presents a comprehensive test of the principal tasks in document image analysis (DIA), starting with binarization, text line segmentation, and isolated character/glyph recognition, and continuing on to word recognition and transliteration for a new and challenging collection of palm leaf manuscripts from Southeast Asia. This research presents and is performed on a complete dataset collection of Southeast Asian palm leaf manuscripts. It contains three different scripts: Khmer script from Cambodia, and Balinese script and Sundanese script from Indonesia. The binarization task is evaluated on many methods up to the latest in some binarization competitions. The seam carving method is evaluated for the text line segmentation task, compared to a recently new text line segmentation method for palm leaf manuscripts. For the isolated character/glyph recognition task, the evaluation is reported from the handcrafted feature extraction method, the neural network with unsupervised learning feature, and the Convolutional Neural Network (CNN) based method. Finally, the Recurrent Neural Network-Long Short-Term Memory (RNN-LSTM) based method is used to analyze the word recognition and transliteration task for the palm leaf manuscripts. The results from all experiments provide the latest findings and a quantitative benchmark for palm leaf manuscripts analysis for researchers in the DIA community."}
{"_id": "6f77626160f3366e5ea72d636f6a80825ebf5881", "title": "Frostbite following cryolipolysis treatment in a beauty salon: a case study.", "text": "This case study describes frostbite, a previously unreported complication following cryolipolysis, which resulted in substantial necrosis of the flank. Medical attention was not sought until one week after treatment. On examination, two distinct areas of significant frostbite in the left flank with surrounding erythema were revealed. Surgical intervention was avoided, as is recommended in cases of frostbite, and conservative treatment resulted in recovery of the affected area. Here, the authors highlight the adverse effects related to cryolipolysis, analysing the pathogenesis, clinical manifestations and management of this injury. The necessity of regulation within the cosmetic sector and the challenges associated with its implementation are also described. The authors believe emphasis must be placed on increasing patient awareness on the potential hazards of seeking cosmetic treatment from unregulated providers."}
{"_id": "c87f54d96002e8859d73f1d1d8a2b857c9f80e29", "title": "Generation of Pencil Sketch Drawing", "text": "This paper proposes a pencil sketch generating system that can create various pencil sketching styles for images, to satisfy different needs of users. The pencil sketch generating process includes process of three parts: line drawing, tone adjustment, and texture rendering. Some operations needed are first proposed, and various pencil sketching styles that generated by the proposed system are illustrated."}
{"_id": "8cafc53716505314c49fe46fe44dfa9f5542fc66", "title": "Security Risk Assessment of Cloud Carrier", "text": "Cloud computing based delivery model has been adopted by end-users and enterprises to reduce IT costs and complexities. The ability to offload user software and data to cloud data centers has raised many security and privacy concerns over the cloud computing model. Significant research efforts have focused on hyper visor security and low-layer operating system implementations in cloud data centers. Unfortunately, the role of cloud carrier in the security and privacy of user software and data has not been well studied. Cloud carrier represents the wide area network that provides the connectivity and transport of cloud services between cloud consumers and cloud providers. In this paper, we present a risk assessment framework to study the security risk of the cloud carrier between cloud consumers and cloud providers. The risk assessment framework leverages the National Vulnerability Database (NVD) to examine the security vulnerabilities of operating systems of routers within the cloud carrier. This framework provides quantifiable security metrics for cloud carrier, which enables cloud consumers to establish the quality of security services among cloud providers. Such security metric information is very useful in the Service Level Agreement (SLA) negotiation between a cloud consumer and a cloud provider. It can be also be used to build a tool to verify SLA compliance. Furthermore, we implement this framework for the cloud carriers of Amazon Web Services and Windows Azure Platform. Our experiments show that the security risks of cloud carriers on these two commercial clouds are significantly different. This finding provides guidance for a network provider to improve the security of cloud carriers."}
{"_id": "1b068f70592632081e0b4c1864c38c2b36591d20", "title": "Embedded 3D printing of strain sensors within highly stretchable elastomers.", "text": "A new method, embedded-3D printing (e-3DP), is reported for fabricating strain sensors within highly conformal and extensible elastomeric matrices. e-3DP allows soft sensors to be created in nearly arbitrary planar and 3D motifs in a highly programmable and seamless manner. Several embodiments are demonstrated and sensor performance is characterized."}
{"_id": "4138b0035b164b4e2486f34fd9c4da76820bb1f7", "title": "Free viewpoint video (FVV) survey and future research direction", "text": "Free viewpoint video (FVV) is one of the new trends in the development of advanced visual media type that aims to provide a new immersive user experience and interactivity that goes beyond higher image quality (HD/4K TV) and higher realism (3D TV). Potential applications include interactive personal visualization and free viewpoint navigation. The goal of this paper is to provide an overview of the FVV system and some target application scenarios. Associated standardization activities and technological barriers to overcome are also described. This paper is organized as follows: a general description of the FVV system and functionalities is given in Section I. Since an FVV system is composed of a chain of processing modules, an in-depth functional description of each module is provided in Section II. Examples of emerging FVV applications and use cases are given in Section III. A summary of technical challenges to overcome for wider usage and market penetration of FVV is given in Section IV."}
{"_id": "6b32e4d2b7ab7614ffd7f53bcc29dfd1e3014419", "title": "Semisupervised and Weakly Supervised Road Detection Based on Generative Adversarial Networks", "text": "Road detection is a key component of autonomous driving; however, most fully supervised learning road detection methods suffer from either insufficient training data or high costs of manual annotation. To overcome these problems, we propose a semisupervised learning (SSL) road detection method based on generative adversarial networks (GANs) and a weakly supervised learning (WSL) method based on conditional GANs. Specifically, in our SSL method, the generator generates the road detection results of labeled and unlabeled images, and then they are fed into the discriminator, which assigns a label on each input to judge whether it is labeled. Additionally, in WSL method we add another network to predict road shapes of input images and use them in both generator and discriminator to constrain the learning progress. By training under these frameworks, the discriminators can guide a latent annotation process on the unlabeled data; therefore, the networks can learn better representations of road areas and leverage the feature distributions on both labeled and unlabeled data. The experiments are carried out on KITTI ROAD benchmark, and the results show our methods achieve the state-of-the-art performances."}
{"_id": "c77adcb014b0e7990c3a61c49869b3804d1ecfdd", "title": "High performance CMOS dual supply level shifter for a 0.5V input and 1V output in standard 1.2V 65nm technology process", "text": "This paper presents the design of a highly efficient CMOS level shifter qc-level shifter. Unlike many recent level shifters, the proposed qc-level shifter does not use bootstrap capacitors to minimize active area. When implemented on a 65nm CMOS technology, under the large capacitive loading condition (2pF), qc-level shifter has a lower active area (94%), and energy-delay product (21.4%) than the reference bootstrap level shifter circuit (ts-level shifter). In comparison to a conventional shifter (c-level shifter)the corresponding reductions are 9.5% and 55%, respectively. Also qc-level shifter has very small effective input capacitance in comparison with ts-level shifter as it does not need a bootstrap capacitor connected to its input."}
{"_id": "58d3c99aa2710a3f204e7604c52b1df88be88e5b", "title": "Application of Social Media Analytics in the Banking Sector to Drive Growth and Sustainability: A Proposed Integrated Framework", "text": "Large amounts of data sets being generated from social media platforms have led to surge in demand for social media analytics (SMA) use in business operations. Strategic operation of SMA yields a positive impact on marketing activities, customer engagement, risk analysis and assessment product or service design, credit rating of customer profile, customer education and competitive analysis. Banks have started tapping into advanced prescriptive and predictive analytics into a bid to develop insights, managing high costs of compliance and non-compliance such as financial risks and reputational risks thereby generating a significant impact in business operations. Optimisation modelling of business portfolios, products and services offerings are contributing significantly towards achieving a sustainable and profitable growth. This is however, against the background of high volatility and weakening demand for traditional products in the banking industry. Systematic literature review was used in examining how banks are applying social media analytics to improve their business operations. Findings reveals that social media analytics provide concrete solutions which helps to improve revenue streams of banks, providing a guarantee for compliance, survival, sustainability and growth objectives. As a result, a integrated framework is proposed to address the gap existing in literature which lacks a focus on an integrated framework that can assist decision makers on the social media analytics to employ in the banking operations."}
{"_id": "959a011f024f66bfafa64dc36d2968e933cf8147", "title": "Digital Heritage. Progress in Cultural Heritage: Documentation, Preservation, and Protection", "text": "Raising audience awareness over the creation and evolution of a cultural participatory digital platform is a critical point for its acceptance. The proposed platform adopts user involvement in the content collection level through the implementation of a mobile application easily downloadable to the user\u2019s smartphone and the use of a web portal application. Complementary web portal permits the management of the collected content in a trustworthy manner adopting an extended role-based access control model for authorization purposes. Users can formulate private groups to contribute and share content. Platform guarantees the soundness of contributed content through an auditing procedure requested by the contributors and conducted by experts selected randomly. In order to stress the applicability of our platform to various cultural environments, we present a number of usage scenarios targeting various stakeholders from specialists and museum curators to students, teachers and simple enthusiasts aiming in the devel\u2010 opment of coherent narrations."}
{"_id": "6b0355ffdd9130eb6ce0be3dbf338286507cd64b", "title": "Topic- and Time-Oriented Visual Text Analysis", "text": "In this article, we first present the benefit of combing text analysis (topic models in particular) with interactive visualization. We then highlight examples from prior work on topic and time-oriented visual text analysis. Lastly, we present four challenges that warrant additional future research in the field of visual text analysis."}
{"_id": "0af0b8fdde60e8237e55c301007565c53b4d0f1f", "title": "SWEET: Serving the Web by Exploiting Email Tunnels", "text": "Open communications over the Internet pose serious threats to countries with repressive regimes, leading them to develop and deploy censorship mechanisms within their networks. Unfortunately, existing censorship circumvention systems do not provide high availability guarantees to their users, as censors can easily identify, hence disrupt, the traffic belonging to these systems using today\u2019s advanced censorship technologies. In this paper, we propose Serving the Web by Exploiting Email Tunnels SWEET, a highly available censorship-resistant infrastructure. SWEET works by encapsulating a censored user\u2019s traffic inside email messages that are carried over public email services like Gmail and Yahoo Mail. As the operation of SWEET is not bound to any specific email provider, we argue that a censor will need to block email communications all together in order to disrupt SWEET, which is unlikely as email constitutes an important part of today\u2019s Internet. Through experiments with a prototype of our system, we find that SWEET\u2019s performance is sufficient for Web browsing. In particular, regular Websites are downloaded within couple of seconds."}
{"_id": "ec614c4f636aeeb6ea10accddfcb1a2f7a1ce603", "title": "Social Entrepreneurship: A Critique and Future Directions", "text": "Work on social entrepreneurship constitutes a field of study that intersects a number of domains, including entrepreneurial studies, social innovation, and nonprofit management. Scholars are beginning to contribute to the development of this new discipline through efforts that attempt to trace the emergence of social entrepreneurship as well as by comparing it to other organizational activities such as conventional entrepreneurship. However, as a nascent field, social entrepreneurship scholars are in the midst of a number of debates involving definitional and conceptual clarity, boundaries of the field, and a struggle to arrive at a set of relevant and meaningful research questions. This paper examines the promise of social entrepreneurship as a domain of inquiry and suggests a number of research areas and research questions for future study."}
{"_id": "d56bb2c87e7fcd906fe9da90d896af26561b5e34", "title": "Management of trust in the e-marketplace: the role of the buyer's experience in building trust", "text": "The e-marketplace, or broker-managed online market, is one of today\u2019s most profitable e-business models. In these marketplaces, buyers routinely engage with businesses and individual sellers with whom they have had little or no prior interaction, making trust one of the most important issues in the e-marketplaces. Therefore, a clear understanding of the trust building process in the e-marketplace is important for the success in the market. In this study, we analyze the process by which e-marketplace customers develop trust in both a market-maker and sellers. Additionally, this analysis provides a theoretical framework to identify the antecedents of trust. In the framework, we suggest that the impact of trust on transaction intention is moderated by the customer\u2019s previous experiences in the e-marketplace. The theoretical model presented here was tested on survey and transaction data collected from 692 respondents. The results indicate that customers value the importance of trust in the market-maker and sellers differentially by the level of transaction experience. It is also shown that market-maker\u2019s characteristics (reputation, website\u2019s usability and security) and seller\u2019s characteristics (expertise) play an important role in the formation and development of trust toward a market-maker and sellers, respectively. Journal of Information Technology (2007) 22, 119\u2013132. doi:10.1057/palgrave.jit.2000095 Published online 23 January 2007"}
{"_id": "ffb560ea184edca96e04d0f40f80791af38ecb75", "title": "A note on the maximum flow through a network", "text": "-This note discusses the problem of maximizing the rate of flow from one terminal to another, through a network which consists of a number of branches, each of which has a !imited capacity. The main result is a theorem: The maximum possible flow from left to right through a network is equal to the minimum value among all simple cut-sets. This theorem is applied to solve a more general problem, in which a number of input nodes and a number of output nodes are used."}
{"_id": "06c2f564920f2489b61f0ed288f85ea177072dfa", "title": "The Design of High-Level Features for Photo Quality Assessment", "text": "We propose a principled method for designing high level features forphoto quality assessment. Our resulting system can classify between high quality professional photos and low quality snapshots. Instead of using the bag of low-level features approach, we first determine the perceptual factors that distinguish between professional photos and snapshots. Then, we design high level semantic features to measure the perceptual differences. We test our features on a large and diverse dataset and our system is able to achieve a classification rate of 72% on this difficult task. Since our system is able to achieve a precision of over 90% in low recall scenarios, we show excellent results in a web image search application."}
{"_id": "f17d3a34b349c3521ba6e85d17d6b77ec343743a", "title": "Proposing measures of flicker in the low frequencies for lighting applications", "text": "The IEEE Standards Working Group, IEEE PAR1789 \"Recommending practices for modulating current in High Brightness LEDs for mitigating health risks to viewers\" has been formed to advise the lighting industry and standards groups about the emerging concern of flicker in LED lighting. This paper intends to introduce new measures and definitions of lamp flicker in lighting. The discussion represents on-going work in IEEE PAR1789 that is vital to designing safe LED lamp drivers."}
{"_id": "8f4fbec6053b15603e7ba750fde132bbefd81b5e", "title": "DataPlay: interactive tweaking and example-driven correction of graphical database queries", "text": "Writing complex queries in SQL is a challenge for users. Prior work has developed several techniques to ease query specification but none of these techniques are applicable to a particularly difficult class of queries: quantified queries. Our hypothesis is that users prefer to specify quantified queries interactively by trial-and-error. We identify two impediments to this form of interactive trial-and-error query specification in SQL: (i) changing quantifiers often requires global syntactical query restructuring, and (ii) the absence of non-answers from SQL's results makes verifying query correctness difficult. We remedy these issues with DataPlay, a query tool with an underlying graphical query language, a unique data model and a graphical interface. DataPlay provides two interaction features that support trial-and-error query specification. First, DataPlay allows users to directly manipulate a graphical query by changing quantifiers and modifying dependencies between constraints. Users receive real-time feedback in the form of updated answers and non-answers. Second, DataPlay can auto-correct a user's query, based on user feedback about which tuples to keep or drop from the answers and non-answers. We evaluated the effectiveness of each interaction feature with a user study and we found that direct query manipulation is more effective than auto-correction for simple queries but auto-correction is more effective than direct query manipulation for more complex queries."}
{"_id": "1d76607a7b8f996dfa2562bd28322b6c1544f32b", "title": "Reliability of lower limb motor evoked potentials in stroke and healthy populations: How many responses are needed?", "text": "OBJECTIVE\nTo determine the intra- and inter-session reliability of motor evoked potential (MEP) size parameters in the lower limb of patients with stroke, focussing on the number of MEPs collected and the method of measuring MEP size.\n\n\nMETHODS\nTranscranial magnetic stimulation was used to elicit MEPs in the soleus muscle of patients with stroke (n=13) and age-matched healthy participants (n=13) during low level muscle activation. Two sets of 10 responses were collected in the first session and a further 10 responses collected in a second session held 7 days later. Four MEP size measurements were made using 4, 6, 8, or all 10 of the MEPs collected. Intra- and inter-session reliability was examined using intraclass correlation coefficients (ICC) and typical percentage error.\n\n\nRESULTS\nIntrasession ICC statistics using 6 or more MEPs were >0.85 in the stroke group but intersession values were all <0.5. Reliability was best when measuring parameters from individual MEPs rather than averaged responses.\n\n\nCONCLUSIONS\nReliability of intrasession MEP size is excellent in the lower limb of patients with stroke using as few as 6 MEPs but intersession reliability is poor.\n\n\nSIGNIFICANCE\nComparing MEP size measures across two or more sessions is questionable in the lower limb of patients with stroke."}
{"_id": "61b33a4ed23f09b87da2f5060b4e765d755a7297", "title": "The Future of Memory: Remembering, Imagining, and the Brain", "text": "During the past few years, there has been a dramatic increase in research examining the role of memory in imagination and future thinking. This work has revealed striking similarities between remembering the past and imagining or simulating the future, including the finding that a common brain network underlies both memory and imagination. Here, we discuss a number of key points that have emerged during recent years, focusing in particular on the importance of distinguishing between temporal and nontemporal factors in analyses of memory and imagination, the nature of differences between remembering the past and imagining the future, the identification of component processes that comprise the default network supporting memory-based simulations, and the finding that this network can couple flexibly with other networks to support complex goal-directed simulations. This growing area of research has broadened our conception of memory by highlighting the many ways in which memory supports adaptive functioning."}
{"_id": "40e153460564ed0dbb2523394438c7a1172ca9dc", "title": "A Literature Review and Classification of Recommender Systems on Academic Journals", "text": "Recommender systems have become an important research field since the emergence of the first paper on collaborative filtering in the mid\u20101990s. In general, recommender systems are defined as the supporting systems which help users to find information, products, or services (such as books, movies, music, digital products, web sites, and TV programs) by aggregating and analyzing suggestions from other users, which mean reviews from various authorities, and user attributes. However, as academic researches on recommender systems have increased significantly over the last ten years, more researches are required to be applicable in the real world situation. Because research field on recommender systems is still wide and less mature than other research fields. Accordingly, the existing articles on recommender systems need to be reviewed toward the next generation of recommender systems. However, it would be not easy to confine the recommender system researches to specific disciplines, considering the nature of the recommender system researches. So, we reviewed all articles on recommender systems from 37 journals which were published from 2001 to 2010. The 37 journals are selected from top 125 journals of the MIS Journal Rankings. Also, the literature search was based on the descriptors \u201cRecommender system\u201d, \u201cRecommendation system\u201d, \u201cPersonalization system\u201d, \u201cCollaborative filtering\u201d and \u201c"}
{"_id": "e3dc697462166288aecf36ff7fc7fe7fe3458d05", "title": "Subpixel rendering without color distortions for diamond-shaped PenTile displays", "text": "A novel subpixel rendering algorithm for diamond-shaped PenTile displays, which improves the apparent display resolution and reduces color fringing artifacts, is proposed in this work. We develop two types of filters: main filters and nonnegative filters. First, the main filters are derived to minimize the error between an original input image and a perceived image. Second, the nonnegative filters are modified from the main filters. While the main filters are optimized for improving the resolution, the nonnegative filters suppress color distortions effectively. We analyze local characteristics of an image and estimate the amount of color distortions in each region. Based on the color distortion analysis, we render the image by applying adaptive combinations of the main filters and the nonnegative filters. Experimental results demonstrate that the proposed algorithm outperforms conventional algorithms, by not only improving the apparent resolution but also suppressing color distortions effectively."}
{"_id": "5b324e2db551a7903cb4d867980c011e85cf8071", "title": "Can Scientific Impact Be Predicted?", "text": "A widely used measure of scientific impact is citations. However, due to their heavy-tailed distribution, citations are fundamentally difficult to predict. Instead, to characterize scientific impact, we address two analogous questions asked by many scientific researchers: \u201cHow will my h-index evolve over time, and which of my previously or newly published papers will contribute to it?\u201d To answer these questions, we perform two related tasks. First, we develop a model to predict authors' future h-indices based on their current scientific impact. Second, we examine the factors that drive papers-either previously or newly published-to increase their authors' predicted future h-indices. By leveraging relevant factors, we can predict an author's h-index in five years with an R2 value of 0.92 and whether a previously (newly) published paper will contribute to this future h-index with an F1 score of 0.99 (0.77). We find that topical authority and publication venue are crucial to these effective predictions, while topic popularity is surprisingly inconsequential. Further, we develop an online tool that allows users to generate informed h-index predictions. Our work demonstrates the predictability of scientific impact, and can help researchers to effectively leverage their scholarly position of \u201cstanding on the shoulders of giants\u201d."}
{"_id": "8aeed4d5204e44695e13def6ce2850cfb01cf1aa", "title": "Early prediction of scholar popularity", "text": "Prediction of scholar popularity has become an important research topic for a number of reasons. In this paper, we tackle the problem of predicting the popularity {\\it trend} of scholars by concentrating on making predictions both as earlier and accurate as possible. In order to perform the prediction task, we first extract the popularity trends of scholars from a training set. To that end, we apply a time series clustering algorithm called K-Spectral Clustering (K-SC) to identify the popularity trends as cluster centroids. We then predict trends for scholars in a test set by solving a classification problem. Specifically, we first compute a set of measures for individual scholars based on the distance between earlier points in her particular popularity curve and the identified centroids. We then combine those distance measures with a set of academic features (e.g., number of publications, number of venues, etc) collected during the same monitoring period, and use them as input to a classification method. One aspect that distinguishes our method from other approaches is that the monitoring period, during which we gather information on each scholar popularity and academic features, is determined on a per scholar basis, as part of our approach. Using total citation count as measure of scientific popularity, we evaluate our solution on the popularity time series of more than 500,000 Computer Science scholars, gathered from Microsoft Azure Marketplace (https://datamarket.azure.com/dataset/mrc/microsoftacademic). The experimental results show that the our prediction method outperforms other alternative prediction methods. We also show how to apply our method jointly with regression models to improve the prediction of scholar popularity values (e.g., number of citations) at a given future time."}
{"_id": "09c610de9435726ca8b1a4ae86dbf326b7e575bf", "title": "Bringing PageRank to the citation analysis", "text": "The paper attempts to provide an alternative method for measuring the importance of scientific papers based on the Google\u2019s PageRank. The method is a meaningful extension of the common integer counting of citations and is then experimented for bringing PageRank to the citation analysis in a large citation network. It offers a more integrated picture of the publications\u2019 influence in a specific field. We firstly calculate the PageRanks of scientific papers. The distributional characteristics and comparison with the traditionally used number of citations are then analyzed in detail. Furthermore, the PageRank is implemented in the evaluation of research influence for several countries in the field of Biochemistry and Molecular Biology during the time period of 2000\u20132005. Finally, some advantages of bringing PageRank to the citation analysis are concluded. 2007 Elsevier Ltd. All rights reserved."}
{"_id": "fdd3cd1b0208f6603df035944cfc4b96dfa5c101", "title": "An Efficient Design Methodology for CNFET Based Ternary Logic Circuits", "text": "Ternary logic is a promising alternative to conventional binary logic. Implementation of ternary logic circuits however requires devices with multiple thresholds, which is a complex task with current CMOS technology. Carbon Nanotube based FETs (CNFETs) present an easier alternative for the implementation of ternary logic circuits since their threshold voltages can be varied by varying the diameter. In existing design methodology of ternary logic circuits, ternary signals are first converted to binary signals using ternary decoders which are then passed through binary gates and an encoder to get the final ternary output. Since every input signal requires a ternary decoder, the resulting circuits lead to increased area, power and delay. In this paper, authors suggest a new transistor-level methodology for designing CNFET-based ternary circuits without using ternary decoders. Elimination of decoders results in reduced area, power and delay. Simulation results indicate that up to 88% reduction in power consumption, 64% reduction in transistor count and 12 % reduction in delay can be achieved using the proposed approach when compared to the existing one."}
{"_id": "e3e2219e1b03c7f3385ee17faddaffdb06affa5f", "title": "Adaptive Control of an Inverted Pendulum Using Adaptive PID Neural Network", "text": "Abstract- In this paper, a new method for adaptive control of nonlinear systems using neural networks and Proportional-Integral-Derivative (PID) methodology is proposed. In this method, a PID control and adaptive linear neural network is used to control a inverted pendulum with uncertainities and external disturbances. The system consists of an inverted pole hinged on a cart which is free to move in the x direction. The performance of the proposed method is demonstrated via simulations."}
{"_id": "41e494275eb24b248bddc19c4f7185c9abba803d", "title": "Scaling HDFS to More Than 1 Million Operations Per Second with HopsFS", "text": "HopsFS is an open-source, next generation distribution of the Apache Hadoop Distributed File System(HDFS) that replaces the main scalability bottleneck in HDFS, single node in-memory metadata service, with a no-sharedstate distributed system built on a NewSQL database. By removing the metadata bottleneck in Apache HDFS, HopsFS enables significantly larger cluster sizes, more than an order of magnitude higher throughput, and significantly lower clientlatencies for large clusters. In this paper, we detail the techniques and optimizations that enable HopsFS to surpass 1 million file system operations per second - at least 16 times higher throughput than HDFS. In particular, we discuss how we exploit recent high performance features from NewSQL databases, such as application defined partitioning, partition-pruned index scans, and distribution aware transactions. Together with more traditional techniques, such as batching and write-ahead caches, we show how many incremental optimizations have enabled a revolution in distributed hierarchical file system performance."}
{"_id": "dd5ec545315814204d4aae64647347caa92493f6", "title": "Inpainting of Continuous Frames of Old Movies Based on Deep Neural Network", "text": "With the changes of eras, traditional movies may suffer continuous frame damage after digits due to improper preservation. In order to solve this problem, we proposed a new inpainting technique for continuous video sequences based on deep neural networks. We introduced the latest image restoration techniques for inpainting key frames of new scenes. Then we use the deep neural network interpolation algorithm to interpolate the intermediate frames, so that the video can achieve a coherent effect in time. In order to preserve the original information as much as possible, we only replace the damaged areas and preserve most of undamaged areas, and finally perform image blending. We test different types of videos and compare them with other methods. Our method has higher quality video inpainting than existing methods."}
{"_id": "d9291740b644fc5feb4999c76ec2f50457ef3a77", "title": "Non-Autoregressive Machine Translation with Auxiliary Regularization", "text": "As a new neural machine translation approach, NonAutoregressive machine Translation (NAT) has attracted attention recently due to its high efficiency in inference. However, the high efficiency has come at the cost of not capturing the sequential dependency on the target side of translation, which causes NAT to suffer from two kinds of translation errors: 1) repeated translations (due to indistinguishable adjacent decoder hidden states), and 2) incomplete translations (due to incomplete transfer of source side information via the decoder hidden states). In this paper, we propose to address these two problems by improving the quality of decoder hidden representations via two auxiliary regularization terms in the training process of an NAT model. First, to make the hidden states more distinguishable, we regularize the similarity between consecutive hidden states based on the corresponding target tokens. Second, to force the hidden states to contain all the information in the source sentence, we leverage the dual nature of translation tasks (e.g., English to German and German to English) and minimize a backward reconstruction error to ensure that the hidden states of the NAT decoder are able to recover the source side sentence. Extensive experiments conducted on several benchmark datasets show that both regularization strategies are effective and can alleviate the issues of repeated translations and incomplete translations in NAT models. The accuracy of NAT models is therefore improved significantly over the state-of-the-art NAT models with even better efficiency for inference."}
{"_id": "2b352a475a16b3e12b1af1cf9231ba824db9f937", "title": "Turing Patterns in Memristive Cellular Nonlinear Networks", "text": "The formation of ordered structures, in particular Turing patterns, in complex spatially extended systems has been observed in many different contexts, spanning from natural sciences (chemistry, physics, and biology) to technology (mechanics and electronics). In this paper, it is shown that the use of memristors in a simple cell of a spatially-extended circuit architecture allows us to design systems able to generate Turing patterns. In addition, the memristor parameters play a key role in the selection of the type and characteristics of the emerging pattern, which is also influenced by the initial conditions. The problem of finding the regions of parameters where Turing patterns may emerge in the proposed cellular architecture is solved in an analytic way, and numerical results are shown to illustrate the system behavior with respect to its parameters."}
{"_id": "aa0b03716596832e93f67f99d5c3fed553f89f44", "title": "Integrated multilayered triboelectric nanogenerator for harvesting biomechanical energy from human motions.", "text": "We demonstrate a new flexible multilayered triboelectric nanogenerator (TENG) with extremely low cost, simple structure, small size (3.8 cm\u00d73.8 cm\u00d70.95 cm) and lightweight (7 g) by innovatively integrating five layers of units on a single flexible substrate. Owing to the unique structure and nanopore-based surface modification on the metal surface, the instantaneous short-circuit current (Isc) and the open-circuit voltage (Voc) could reach 0.66 mA and 215 V with an instantaneous maximum power density of 9.8 mW/cm2 and 10.24 mW/cm3. This is the first 3D integrated TENG for enhancing the output power. Triggered by press from normal walking, the TENG attached onto a shoe pad was able to instantaneously drive multiple commercial LED bulbs. With the flexible structure, the TENG can be further integrated into clothes or even attached onto human body without introducing sensible obstruction and discomfort to human motions. The novel design of TENG demonstrated here can be applied to potentially achieve self-powered portable electronics."}
{"_id": "8a5a203905ed5e3e7ef78c26a63519842fde3dbc", "title": "An Operational Amplifier with Recycling Folded Cascode Topology and Adaptive Biasing", "text": "This paper presents a highly adaptive operational amplifier with high gain, high bandwidth, high speed and low power consumption. By adopting the recycling folded cascode topology along with an adaptivebiasing circuit, this design achieves high performance in terms of gain-bandwidth product (GBW) and slew rate (SR). This single stage op-amp has been designed in 0.18\u03bcm technology with a power supply of 1.8V and a 5pF load. The simulation results show that the amplifier achieved a GBW of 335.5MHz, Unity Gain Bandwidth of 247.1MHz and a slew rate of 92.8V/\u03bcs."}
{"_id": "ad943b2c2b859e46481308786c6aea9063dd49a9", "title": "Building Recognition From Multi-Aspect High-Resolution InSAR Data in Urban Areas", "text": "The improved ground resolution of state-of-the-art synthetic aperture radar (SAR) sensors suggests utilizing SAR data for the analysis of urban areas. The appearance of buildings in SAR or interferometric SAR (InSAR) data is characterized by the consequences of the inherent oblique scene illumination, such as layover, occlusion by radar shadow, and multipath signal propagation. Therefore, particularly in dense built-up areas, building reconstruction is often impossible from a single SAR or InSAR measurement alone. But, the reconstruction quality can be significantly improved by a combined analysis of multi-aspect data. In this paper, two approaches are proposed to detect and reconstruct buildings of different size from multi-aspect high-resolution InSAR data sets. Both approaches focus on the recognition of buildings supported by knowledge-based analysis considering the mentioned SAR-specific effects observed in urban areas. Building features are extracted independently for each direction from the magnitude and phase information of the interferometric data. Initial primitives are segmented and afterward projected from slant-range into the world coordinate system. From the fused set of primitives of both flight directions, building hypotheses are generated. The first approach exploits the frequently observed lines of bright double-bounce scattering, which are used for building reconstruction in residential districts. In the case of larger buildings, such as industrial halls, often additional features of roof and facade elements are visible. Therefore, in a second approach, extended buildings are extracted by grouping primitives of different kinds. The two approaches are demonstrated in an urban environment for an InSAR data set, which has spatial resolution of about 30 cm and was taken from two orthogonal flight directions."}
{"_id": "6acd6e321be7ed50c586df028449d649c24b44ca", "title": "Ridge gap waveguide antenna array using integrated coaxial power divider", "text": "A slot array antenna is designed using ridge gap waveguide technology in two sub arrays. Each sub array is fed through a power divider which is designed on a separate ridge- groove gap waveguide. The coupling between two gap waveguides is conducted using coaxial transmission lines. The power divider of ridge-groove gap is fed using coaxial line that can provide a bandwidth of 22% at 14 GHz. The integrated antenna operates within the frequency range of 13.5-15.6 GHz with a gain of 13.9-15.5 dB. Design of the transmission line, power divider and sub-array antenna is presented."}
{"_id": "37e084cdf8b8704b497b2d8e7547380c09468c7b", "title": "Video abstraction: A systematic review and classification", "text": "The demand for various multimedia applications is rapidly increasing due to the recent advance in the computing and network infrastructure, together with the widespread use of digital video technology. Among the key elements for the success of these applications is how to effectively and efficiently manage and store a huge amount of audio visual information, while at the same time providing user-friendly access to the stored data. This has fueled a quickly evolving research area known as video abstraction. As the name implies, video abstraction is a mechanism for generating a short summary of a video, which can either be a sequence of stationary images (keyframes) or moving images (video skims). In terms of browsing and navigation, a good video abstract will enable the user to gain maximum information about the target video sequence in a specified time constraint or sufficient information in the minimum time. Over past years, various ideas and techniques have been proposed towards the effective abstraction of video contents. The purpose of this article is to provide a systematic classification of these works. We identify and detail, for each approach, the underlying components and how they are addressed in specific works."}
{"_id": "afa4b0a6b0c2200a0119b21157a83ff0d57aef3c", "title": "BC-PDS: Protecting Privacy and Self-Sovereignty through BlockChains for OpenPDS", "text": "In the Big Data era, personal metadata may will become a new type of corporate asset, however there have already been a growing public concern about user\u2019s privacy mined from metadata. In this paper we address the problem of implementing the self-sovereignty of personal metadata on the existing OpenPDS/SafeAnswers framework according to the Windhover Principle. In order to do that, we propose a new framework, called BlocakChain-based Personal Data Store (BCPDS), to realize two basic properties: notary and autonomy. This framework, firstly introduces the BlockChain as a notary, into OpenPDS/SafeAnswers for secure storage of personal meta-data instead of the original database. Next, we present an AutoNomybased Access Control (ANAC) to improve the SafeAnswers module, where ANAC is a new mechanism that enforces access based on the relationship among all authorized users and metadata\u2019s owner. In addition, we also propose General Access Structure (GAS) and threshold secret sharing scheme in BlockChain as an implementation method for our BC-PDS framework."}
{"_id": "2c38ef0c12c0a3dabb7015c77638d171609654f5", "title": "Feature Based Sentiment Analysis for Service Reviews", "text": "Sentiment Analysis deals with the analysis of emotions, opinions and facts in the sentences which are expressed by the people. It allows us to track attitudes and feelings of the people by analyzing blogs, comments, reviews and tweets about all the aspects. The development of Internet has strong influence in all types of industries like tourism, healthcare and any business. The availability of Internet has changed the way of accessing the information and sharing their experience among users. Social media provide this information and these comments are trusted by other users. This paper recognizes the use and impact of social media on healthcare industry by analyzing the users\u2019 feelings expressed in the form of free text, thereby gives the quality indicators of services or features related with them. In this paper, a sentiment classifier model using improved Term Frequency Inverse Document Frequency (TFIDF) method and linear regression model has been proposed to classify online reviews, tweets or customer feedback for various features. The model involves the process of gathering online user reviews about hospitals and analyzes those reviews in terms of sentiments expressed. Information Extraction process filters irrelevant reviews, extracts sentimental words of features identified and quantifies the sentiment of features using sentiment dictionary. Emotionally expressed positive or negative words are assigned weights using the classification prescribed in the dictionary. The sentiment analysis on tweets/reviews is done for various features using Natural Language Processing (NLP) and Information Retrieval (IR) techniques. The proposed linear regression model using the senti-score predicts the star rating of the feature of service. The statistical results show that improved TF-IDF method gives better accuracy when compared with TF and TF-IDF methods, used for representing the text. The senti-score obtained as a result of text analysis (user feedback) on features gives not only the opinion summarization but also the comparative results on various features of different competitors. This information can be used by business to focus on the low scored features so as to improve their business and ensure a very high level of user satisfaction."}
{"_id": "cbb901ad1f28911f214465d3da99a76591bb8109", "title": "An AOI algorithm for PCB based on feature extraction", "text": "With the development of the micro-electronic industry, electronical components assembled on the printed circuit board (PCB) become more and more microsize and its mounted density is increasing. It is out of date to depend on manual inspection to assure the joints quality. Instead, automated optical inspection (AOI) for solder joints based on the machine vision has become more and more important. In this paper, based on the image acquired from a 3-CCD color camera and a 3-color hemispherical LED arrays light resource (red, green, and blue), the place, shape, and logical features of solder joints of chip components are extracted. The place features are related to solder joint place. As for shape features, we divide solder joint into several regions and extract its shape features by its color, occupancy ratio of area, center of gravity and continuous pixels. The logical features come from their close relationships of different regions of shape features, color distributing and place features. On the basis of the features, an AOI algorithm is developed. The defects of lacking solder, surplus solder, no solder, pseudo joints, wrong component, damaged component, component absent, shift, tomb stone, wrong polarity, etc. can be identified properly by using the proposed algorithm. Finally, some experiment results are presented to show the validity of the algorithm."}
{"_id": "1bb6718460253bf4c0c80bae657d6ff4a49dd27e", "title": "Robust Hand Detection and Classification in Vehicles and in the Wild", "text": "Robust hand detection and classification is one of the most crucial pre-processing steps to support human computer interaction, driver behavior monitoring, virtual reality, etc. This problem, however, is very challenging due to numerous variations of hand images in real-world scenarios. This work presents a novel approach named Multiple Scale Region-based Fully Convolutional Networks (MSRFCN) to robustly detect and classify human hand regions under various challenging conditions, e.g. occlusions, illumination, low-resolutions. In this approach, the whole image is passed through the proposed fully convolutional network to compute score maps. Those score maps with their position-sensitive properties can help to efficiently address a dilemma between translation-invariance in classification and detection. The method is evaluated on the challenging hand databases, i.e. the Vision for Intelligent Vehicles and Applications (VIVA) Challenge, Oxford hand dataset and compared against various recent hand detection methods. The experimental results show that our proposed MS-FRCN approach consistently achieves the state-of-the-art hand detection results, i.e. Average Precision (AP) / Average Recall (AR) of 95.1% / 94.5% at level 1 and 86.0% / 83.4% at level 2, on the VIVA challenge. In addition, the proposed method achieves the state-of-the-art results for left/right hand and driver/passenger classification tasks on the VIVA database with a significant improvement on AP/AR of ~7% and ~13% for both classification tasks, respectively. The hand detection performance of MS-RFCN reaches to 75.1% of AP and 77.8% of AR on Oxford database."}
{"_id": "bc67a6e3d4dfa1d463a69db0b697298d86daea1d", "title": "Interventions in Small Food Stores to Change the Food Environment, Improve Diet, and Reduce Risk of Chronic Disease", "text": "INTRODUCTION\nMany small-store intervention trials have been conducted in the United States and other countries to improve the food environment and dietary behaviors associated with chronic disease risk. However, no systematic reviews of the methods and outcomes of these trials have been published. The objective of this study was to identify small-store interventions and to determine their impact on food availability, dietary behaviors, and psychosocial factors that influence chronic disease risk.\n\n\nMETHODS\nFrom May 2009 through September 2010, we used PubMed, web-based searches, and listservs to identify small-store interventions that met the following criteria: 1) a focus on small food stores, 2) a completed impact evaluation, and 3) English-written documentation (peer-reviewed articles or other trial documents). We initially identified 28 trials; 16 met inclusion criteria and were used for analysis. We conducted interviews with project staff to obtain additional information. Reviewers extracted and reported data in a table format to ensure comparability between data.\n\n\nRESULTS\nReviewed trials were implemented in rural and urban settings in 6 countries and primarily targeted low-income racial/ethnic minority populations. Common intervention strategies included increasing the availability of healthier foods (particularly produce), point-of-purchase promotions (shelf labels, posters), and community engagement. Less common strategies included business training and nutrition education. We found significant effects for increased availability of healthy foods, improved sales of healthy foods, and improved consumer knowledge and dietary behaviors.\n\n\nCONCLUSION\nTrial impact appeared to be linked to the increased provision of both healthy foods (supply) and health communications designed to increase consumption (demand)."}
{"_id": "e2c8e40bf40b388cec952aedb46d10f110f24c4b", "title": "Detection of money laundering groups using supervised learning in networks", "text": "Money laundering is a major global problem, enabling criminal organisations to hide their ill-gotten gains and to finance further operations. Prevention of money laundering is seen as a high priority by many governments, however detection of money laundering without prior knowledge of predicate crimes remains a significant challenge. Previous detection systems have tended to focus on individuals, considering transaction histories and applying anomaly detection to identify suspicious behaviour. However, money laundering involves groups of collaborating individuals, and evidence of money laundering may only be apparent when the collective behaviour of these groups is considered. In this paper we describe a detection system that is capable of analysing group behaviour, using a combination of network analysis and supervised learning. This system is designed for real-world application and operates on networks consisting of millions of interacting parties. Evaluation of the system using real-world data indicates that suspicious activity is successfully detected. Importantly, the system exhibits a low rate of false positives, and is therefore suitable for use in a live intelligence environment."}
{"_id": "8127d36a763567191c1f862e8c1b984fdeaa0fc1", "title": "The CL-SciSumm Shared Task 2018: Results and Key Insights", "text": "This overview describes the official results of the CL-SciSumm Shared Task 2018 \u2013 the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain. This year, the dataset comprised 60 annotated sets of citing and reference papers from the open access research papers in the CL domain. The Shared Task was organized as a part of the 41 Annual Conference of the Special Interest Group in Information Retrieval (SIGIR), held in Ann Arbor, USA in July 2018. We compare the participating systems in terms of two evaluation metrics. The annotated dataset and evaluation scripts can be accessed and used by the community from: https://github.com/WING-NUS/scisumm-corpus."}
{"_id": "a33e53b447d54804bef529f49e9c8469516dd7e3", "title": "Monocyte and macrophage plasticity in tissue repair and regeneration.", "text": "Heterogeneity and high versatility are the characteristic features of the cells of monocyte-macrophage lineage. The mononuclear phagocyte system, derived from the bone marrow progenitor cells, is primarily composed of monocytes, macrophages, and dendritic cells. In regenerative tissues, a central role of monocyte-derived macrophages and paracrine factors secreted by these cells is indisputable. Macrophages are highly plastic cells. On the basis of environmental cues and molecular mediators, these cells differentiate to proinflammatory type I macrophage (M1) or anti-inflammatory or proreparative type II macrophage (M2) phenotypes and transdifferentiate into other cell types. Given a central role in tissue repair and regeneration, the review focuses on the heterogeneity of monocytes and macrophages with current known mechanisms of differentiation and plasticity, including microenvironmental cues and molecular mediators, such as noncoding RNAs."}
{"_id": "90bb8ed63a181c82274f0080bfb49191739897d5", "title": "A survey of traffic sign recognition", "text": "Advanced Driver Assistance Systems (ADAS) refer to various high-tech in-vehicle systems that are designed to increase road traffic safety by helping drivers gain better awareness of the road and its potential hazards as well as other drivers around them. The design of traffic sign recognition, one important subsystem of ADAS, has been a challenge problem for many years and hence become an important and active research topic in the area of intelligent transport systems. The realization of a real-time traffic sign recognition system is usually divided into three stages: detection, tracking and classification. This paper introduces the main difficulties in road sign recognition and briefly surveys the state-of-the-art technologies in this field with further discussions on the potential trend of development of road sign recognition."}
{"_id": "bea0a6d6f98af0d6e730068fddac893ac3d080a3", "title": "Analysis and design of proximity fuze based on waveguide discontinuity", "text": "In order to satisfy the special requirement of antenna directivity base on proximity Fuze, the concept of the waveguide discontinuity was introduced which could be a method of designing antenna. The beam of radiation should deviate from axis direction of bomb in a certain angle The frequency was between 24\u00a0GHz and 25\u00a0GHz.Circular waveguide was connected to coaxial waveguide. This form could inspire high model in discontinuity place. Electric field of coupling model could lead to a oblique radiation pattern. In the meanwhile, two symmetrical metal column was loaded in waveguide in purpose of changing impedance of antenna. The test results demonstrated the veracity of the analysis."}
{"_id": "027a8f5b114a1d3e8cc27a5eddc528d039bcc51b", "title": "Advanced Engineering Electromagnetics (Balanis, C.A.; 2012) [Book Review]", "text": "Since 2012 the second edition of this book is available. This new edition has all the attractive features of the first one and in my opinion it contains some new features and additions that are very useful in the EMC playground. In particular: A section on metamaterials (Section 5.7) is helpful for electromagnetic shielding oriented studies; A section on artificial impedance surfaces (AIS, EBG, PBG, HIS, AMC, PMC) (Section 8.8), is a solid base for any issue related to EBGs; and, in general, Additional smaller inserts throughout the book make it easier and enjoyable to understand; and new figures, photos, and tables have been added. As in the first edition, this second edition is a thorough and detailed engineer-oriented book. The analytical detail, rigor, and thoroughness allow many of the topics to be traced to their origin, and they are presented in sufficient detail so that the reader will follow the analytical developments. The material is presented in a methodical, sequential, and unified manner, and each chapter is subdivided into sections or subsections whose individual headings clearly identify the topics discussed, examined, or illustrated. An exhaustive list of references is included at the end of each chapter to allow the interested reader to trace each topic. A number of appendixes of mathematical identities and special functions, some represented also in tabular and graphical forms, are included. A very useful feature is a set of fortyeight basic MATLAB\u00ae computer programs developed and included in the website that is part of the book. They are an aid, also for the skilled reader, for a conceptual fruition of the results based upon the analytical formulations included throughout the book."}
{"_id": "84d1600a724c4d04c7148d518838cfe1cffc011a", "title": "Stable and Flexible Materials to Mimic the Dielectric Properties of Human Soft Tissues", "text": "Emerging biomedical applications require realistic phantoms for validation and testing of prototype systems. These phantoms require stable and flexible tissue-mimicking materials with realistic dielectric properties in order to properly model human tissues. To create a tissue-mimicking material to fulfill these needs, carbon powder and urethane rubber mixtures were created, and the dielectric properties were measured using a dielectric probe. Both graphite and carbon black were tested. Mixtures of graphite and urethane (0% to 50% by weight) provided relatively low permittivity and conductivity, suitable for mimicking fatty tissues. Mixtures of carbon black and urethane (0% to 15% by weight) provided a broad range of suitable properties. Samples with 15% carbon black had permittivity and conductivity similar to higher-water-content tissues, however the cured samples were not mechanically suitable for moulding into complex shapes. Finally, mixtures of graphite, carbon black, and urethane were created. These exhibited a range of dielectric properties and can be used to mimic a variety of soft tissues. The mechanical properties of these samples were tested and presented properties that exceed typical phantom requirements. This tissue-mimicking material will be useful when creating thin, flexible, and robust structures such as skin layers."}
{"_id": "fe91d6963010701ab8b3da7910c784fd01a9c5eb", "title": "Confocal Microwave Imaging for Breast Cancer Detection: Delay-Multiply-and-Sum Image Reconstruction Algorithm", "text": "A new image reconstruction algorithm, termed as delay-multiply-and-sum (DMAS), for breast cancer detection using an ultra-wideband confocal microwave imaging technique is proposed. In DMAS algorithm, the backscattered signals received from numerical breast phantoms simulated using the finite-difference time-domain method are time shifted, multiplied in pair, and the products are summed to form a synthetic focal point. The effectiveness of the DMAS algorithm is shown by applying it to backscattered signals received from a variety of numerical breast phantoms. The reconstructed images illustrate improvement in identification of embedded malignant tumors over the delay-and-sum algorithm. Successful detection and localization of tumors as small as 2 mm in diameter are also demonstrated."}
{"_id": "3d4f973896b5f0f00cd614a02cb2e4a1fe2b984b", "title": "Microwave Breast Imaging With a Monostatic Radar-Based System: A Study of Application to Patients", "text": "A prototype microwave breast imaging system is used to scan a small group of patients. The prototype implements a monostatic radar-based approach to microwave imaging and utilizes ultra-wideband signals. Eight patients were successfully scanned, and several of the resulting images show responses consistent with the clinical patient histories. These encouraging results motivate further studies of microwave imaging for breast health assessment."}
{"_id": "c8037f66c757cd127d1794f2f6a431a78d1f903d", "title": "Precision open-ended coaxial probes for in vivo and ex vivo dielectric spectroscopy of biological tissues at microwave frequencies", "text": "Hermetic stainless-steel open-ended coaxial probes have been designed for precision dielectric spectroscopy of biological tissue, such as breast tissue, over the 0.5-20-GHz frequency range. Robust data-processing techniques have also been developed for extracting the unknown permittivity of the tissue under test from the reflection coefficient measured with the precision probe and a vector network analyzer. The first technique, referred to as a reflection-coefficient deembedding method, converts the reflection coefficient measured at the probe's calibration plane to the desired aperture-plane reflection coefficient. The second technique uses a rational function model to solve the inverse problem, i.e., to convert the aperture-plane reflection coefficient to the tissue permittivity. The results of the characterization and validation studies demonstrate that these precision probes, used with the prescribed measurement protocols and data-processing techniques, provide highly accurate and reliable in vivo and ex vivo biological tissue measurements, including breast tissue spectroscopy."}
{"_id": "7f938500a588132a9e734471bc253246ef1ecf01", "title": "Painting Outside the Box: Image Outpainting with GANs", "text": "The challenging task of image outpainting (extrapolation) has received comparatively little attention in relation to its cousin, image inpainting (completion). Accordingly, we present a deep learning approach based on [4] for adversarially training a network to hallucinate past image boundaries. We use a three-phase training schedule to stably train a DCGAN architecture on a subset of the Places365 dataset. In line with [4], we also use local discriminators to enhance the quality of our output. Once trained, our model is able to outpaint 128 \u00d7 128 color images relatively realistically, thus allowing for recursive outpainting. Our results show that deep learning approaches to image outpainting are both feasible and promising."}
{"_id": "061c5909dd7b41aacf8f6f6700959ee096be36e3", "title": "CSLIM: contextual SLIM recommendation algorithms", "text": "Context-aware recommender systems (CARS) take contextual conditions into account when providing item recommendations. In recent years, context-aware matrix factorization (CAMF) has emerged as an extension of the matrix factorization technique that also incorporates contextual conditions. In this paper, we introduce another matrix factorization approach for contextual recommendations, the contextual SLIM (CSLIM) recommendation approach. It is derived from the sparse linear method (SLIM) which was designed for Top-N recommendations in traditional recommender systems. Based on the experimental evaluations over several context-aware data sets, we demonstrate that CLSIM can be an effective approach for context-aware recommendations, in many cases outperforming state-of-the-art CARS algorithms in the Top-N recommendation task."}
{"_id": "6c3b2fd0cb23ddb6ed707d6c9986a78d6b76bf43", "title": "Primal-Dual Group Convolutions for Deep Neural Networks", "text": "In this paper, we present a simple and modularized neural network architecture, named primal-dual group convolutional neural networks (PDGCNets). The main point lies in a novel building block, a pair of two successive group convolutions: primal group convolution and dual group convolution. The two group convolutions are complementary: (i) the convolution on each primal partition in primal group convolution is a spatial convolution, while on each dual partition in dual group convolution, the convolution is a point-wise convolution; (ii) the channels in the same dual partition come from different primal partitions. We discuss one representative advantage: Wider than a regular convolution with the number of parameters and the computation complexity preserved. We also show that regular convolutions, group convolution with summation fusion (as used in ResNeXt), and the Xception block are special cases of primal-dual group convolutions. Empirical results over standard benchmarks, CIFAR-10, CIFAR-100, SVHN and ImageNet demonstrate that our networks are more efficient in using parameters and computation complexity with similar or higher accuracy."}
{"_id": "5511e24f89ed6b63e4aad72c88e20761160b123f", "title": "DC-DC converters for electric vehicle applications", "text": "An integral part of any modern day electric vehicle is power electronic circuits (PECs) comprising of DC-AC inverters and DC-DC converters. A DC-AC inverter supplies the high power electric motor and utility loads such as air-conditioning system, whereas a DC-DC converter supplies conventional low-power, low-voltage loads. However, the need for high power bidirectional DC- DC converters in future electric vehicles has led to the development of many new topologies of DC-DC converters. This paper presents an overview of state-of-the-art DC-DC converters used in battery electric vehicles (BEVs), hybrid electric vehicles (HEVs), and fuel cell vehicles (FCVs). Several DC-DC converters such as isolated, nonisolated, half-bridge, full-bridge, unidirectional and bidirectional topologies, and their applications in electric vehicles are presented."}
{"_id": "308cc1b5156facdf69cb658c56b9233b5cfdd144", "title": "Deep Learning and Wavelets for High-Frequency Price Forecasting", "text": "This paper presents improvements in financial time series prediction using a Deep Neural Network (DNN) in conjunction with a Discrete Wavelet Transform (DWT). When comparing our model to other three alternatives, including ARIMA and other deep learning topologies, ours has a better performance. All of the experiments were conducted on High-Frequency Data (HFD). Given the fact that DWT decomposes signals in terms of frequency and time, we expect this transformation will make a better representation of the sequential behavior of high-frequency data. The input data for every experiment consists of 27 variables: The last 3 one-minute pseudo-log-returns and last 3 oneminute compressed tick-by-tick wavelet vectors, each vector is a product of compressing the tick-by-tick transactions inside a particular minute using a DWT with length 8. Furthermore, the DNN predicts the next one-minute pseudo-log-return that can be transformed into the next predicted one-minute average price. For testing purposes, we use tick-by-tick data of 19 companies in the Dow Jones Industrial Average Index (DJIA), from January 2015 to July 2017. The proposed DNN\u2019s Directional Accuracy (DA) presents a remarkable forecasting performance ranging from 64% to 72%."}
{"_id": "c5bc2a38192c60d00e5160caabfd4a242e699b74", "title": "Multipath TCP path schedulers for streaming video", "text": "Modern internet traffic is heavily composed of some form of video communication. In addition, video streaming is slowly replacing regular phone connections. Given that most video transport utilizes Video over Hypertext Transfer Protocol/Transmission Control Protocol (HTTP/TCP), it is important to understand TCP performance in transporting video streams. Recently, multipath transport protocols have allowed video streaming over multiple paths to become a reality. In this paper, we investigate packet scheduling disciplines for injecting video stream packets into multiple paths at the video server. We study video streaming performance when subjected to these schedulers in conjunction with various TCP variants. We utilize network performance measures, as well as video quality metrics, to characterize the performance and interactions between network and application layers of video streams for various network scenarios."}
{"_id": "4c9ecc72ded229ca344f043db45b275d8bdf4c67", "title": "Time-Dependent Trajectory Regression on Road Networks via Multi-Task Learning", "text": "Road travel costs are important knowledge hidden in largescale GPS trajectory data sets, the discovery of which can benefit many applications such as intelligent route planning and automatic driving navigation. While there are previous studies which tackled this task by modeling it as a regression problem with spatial smoothness taken into account, they unreasonably assumed that the latent cost of each road remains unchanged over time. Other works on route planning and recommendation that have considered temporal factors simply assumed that the temporal dynamics be known in advance as a parametric function over time, which is not faithful to reality. To overcome these limitations, in this paper, we propose an extension to a previous static trajectory regression framework by learning the temporal dynamics of road travel costs in an innovative non-parametric manner which can effectively overcome the temporal sparsity problem. In particular, we unify multiple different trajectory regression problems in a multi-task framework by introducing a novel crosstask regularization which encourages temporal smoothness on the change of road travel costs. We then propose an efficient block coordinate descent method to solve the resulting problem by exploiting its separable structures and prove its convergence to global optimum. Experiments conducted on both synthetic and real data sets demonstrate the effectiveness of our method and its improved accuracy on travel time"}
{"_id": "1c01a3943745e91544ccd6afe6869833ca14e388", "title": "Choosing Regularization Parameters in Iterative Methods for Ill-Posed Problems", "text": "Numerical solution of ill-posed problems is often accomplished by discretization (projection onto a finite dimensional subspace) followed by regularization. If the discrete problem has high dimension, though, typically we compute an approximate solution by projecting the discrete problem onto an even smaller dimensional space, via iterative methods based on Krylov subspaces. In this work we present a common framework for efficient algorithms that regularize after this second projection rather than before it. We show that determining regularization parameters based on the final projected problem rather than on the original discretization has firmer justification and often involves less computational expense. We prove some results on the approximate equivalence of this approach to other forms of regularization, and we present numerical examples."}
{"_id": "c43826e860dfd9365aa8905397393d96513d1daa", "title": "Tapping into knowledge base for concept feedback: leveraging conceptnet to improve search results for difficult queries", "text": "Query expansion is an important and commonly used technique for improving Web search results. Existing methods for query expansion have mostly relied on global or local analysis of document collection, click-through data, or simple ontologies such as WordNet. In this paper, we present the results of a systematic study of the methods leveraging the ConceptNet knowledge base, an emerging new Web resource, for query expansion. Specifically, we focus on the methods leveraging ConceptNet to improve the search results for poorly performing (or difficult) queries. Unlike other lexico-semantic resources, such as WordNet and Wikipedia, which have been extensively studied in the past, ConceptNet features a graph-based representation model of commonsense knowledge, in which the terms are conceptually related through rich relational ontology. Such representation structure enables complex, multi-step inferences between the concepts, which can be applied to query expansion. We first demonstrate through simulation experiments that expanding queries with the related concepts from ConceptNet has great potential for improving the search results for difficult queries. We then propose and study several supervised and unsupervised methods for selecting the concepts from ConceptNet for automatic query expansion. The experimental results on multiple data sets indicate that the proposed methods can effectively leverage ConceptNet to improve the retrieval performance of difficult queries both when used in isolation as well as in combination with pseudo-relevance feedback."}
{"_id": "bf8516966cd00efbc3c8ac0b6a360d406971388c", "title": "Development of an Electromagnetic Actuated Microdisplacement Module", "text": "This paper presents the mechanical design and control system design of an electromagnetic actuator-based microdisplacement module. The microdisplacement module composed of a symmetrical leaf-spring parallelogram mechanism and an electromagnetic actuator. The characteristics of the mechanism in terms of stiffness and natural frequencies are derived and verified. Both leakage flux and core inductance are taken into consideration during modeling the mathematic model of the electromagnetic actuator, and based on which, the system dynamic model is established. Due to the nonlinearity characteristics of the system, a dynamic sliding-mode controller is designed without linearizing the system dynamics. A prototype of the microdisplacement module is fabricated, and the parameters of the system are identified and calibrated. Finally, the designed dynamic sliding-mode controller is applied; step response and tracking performance are studied. Experimental results demonstrate that a submicrometer accuracy can be achieved by the module, which validate the effectiveness of the proposed mechanism and controller design as well. The research results show that the electromagnetic actuator-based module can be extended to wide applications in the field of micro/nanopositioning and manipulation."}
{"_id": "841a5de1d71a0b51957d9be9d9bebed33fb5d9fa", "title": "PCANet: A Simple Deep Learning Baseline for Image Classification?", "text": "In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus called the PCA network (PCANet) and can be extremely easily and efficiently designed and learned. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state-of-the-art features either prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition."}
{"_id": "bc267465eafd8fa1f6dacf827649560cf5fb1809", "title": "Development of Computer-Aided Diagnostic (CADx) System for Distinguishing Neoplastic from Nonneoplastic Lesions in CT Colonography (CTC): Toward CTC beyond Detection", "text": "Half of the polyps surgically removed during conventional colonoscopy are benign with no malignant potential. Our purpose was to develop a CADx system for distinction between neoplastic and non-neoplastic lesions in CTC to reduce \"unnecessary\" colonoscopic polypectomy. Although computer aided detection (CADe) systems have been developed, less attention was given to the development of CADx systems. Our CADx system consists of shape-index-based coarse segmentation of lesions, 3D volume growing and sub-voxel refinement for fine segmentation of lesions, morphologic and texture feature analysis, Wilks' lambda-based stepwise feature selection, linear discriminant analysis for providing an integrated imaging biomarker for diagnosis of neoplastic lesions. Our database contained biopsy-confirmed 54 neoplastic lesions in 29 patients and 14 non-neoplastic lesions in 10 patients. Our CADx system integrating the selected features was able to determine an accurate likelihood of being a neoplasm and distinguish 87% (47/54) neoplastic lesions from 57% (8/14) non-neoplastic lesions correctly only using computed tomography (CT) images, achieving an area under the receiver operating characteristic curve (AUC) of 0.82. This study showed the potential of the use of CTC as a diagnostic tool beyond already accepted detection, thus, CTC with CADx would be potentially useful for reducing \"unnecessary\" polypectomy."}
{"_id": "efdf5b611764da357e881140c76f9af3e85d4093", "title": "Exercise with blood flow restriction: an updated evidence-based approach for enhanced muscular development.", "text": "A growing body of evidence supports the use of moderate blood flow restriction (BFR) combined with low-load resistance exercise to enhance hypertrophic and strength responses in skeletal muscle. Research also suggests that BFR during low-workload aerobic exercise can result in small but significant morphological and strength gains, and BFR alone may attenuate atrophy during periods of unloading. While BFR appears to be beneficial for both clinical and athletic cohorts, there is currently no common consensus amongst scientists and practitioners regarding the best practice for implementing BFR methods. If BFR is not employed appropriately, there is a risk of injury to the participant. It is also important to understand how variations in the cuff application can affect the physiological responses and subsequent adaptation to BFR training. The optimal way to manipulate acute exercise variables, such as exercise type, load, volume, inter-set rest periods and training frequency, must also be considered prior to designing a BFR training programme. The purpose of this review is to provide an evidence-based approach to implementing BFR exercise. These guidelines could be useful for practitioners using BFR training in either clinical or athletic settings, or for researchers in the design of future studies investigating BFR exercise."}
{"_id": "f4b381f4482586dbdd15fc92bee81ce68bcb6898", "title": "POINTWISE: Predicting Points and Valuing Decisions in Real Time with NBA Optical Tracking Data", "text": "Basketball is a game of decisions; at any moment, a player can change the character of a possession by choosing to pass, dribble, or shoot. The current state of basketball analytics, however, provides no way to quantitatively evaluate the vast majority of decisions that players make, as most metrics are driven by events that occur at or near the end of a possession, such as points, turnovers, and assists. We propose a framework for using player-tracking data to assign a point value to each moment of a possession by computing how many points the offense is expected to score by the end of the possession, a quantity we call expected possession value (EPV). EPV allows analysts to evaluate every decision made during a basketball game \u2013 whether it is to pass, dribble, or shoot \u2013 opening the door for a multitude of new metrics and analyses of basketball that quantify value in terms of points. In this paper, we propose a modeling framework for estimating EPV, present results of EPV computations performed using playertracking data from the 2012-13 season, and provide several examples of EPV-derived metrics that answer real basketball questions. A new microeconomics for the NBA Basketball players, coaches, and fans often compare a possession to a high-speed chess match, where teams employ tactics that do not necessarily generate points immediately, but can yield higher-value opportunities several \u201cmoves\u201d down the line. Watching the game in this way can reveal that the decisive moment in a given possession may not have been the open shot at the end, but the pass that led to the open shot, or even the preceding drive that collapsed the defense. These ideas lie at the heart of offensive strategies and the decisions that players make over the course of a possession. Unfortunately, contemporary basketball analytics fail to account for this core idea. Despite many recent innovations, most advanced metrics (PER [1] and +/variations [2], for example) remain based on simple tallies relating to the terminal states of possessions like points, rebounds, and turnovers. While these have shed light on the game, they are akin to analyzing a chess match based only on the move that resulted in checkmate, leaving unexplored the possibility that the key move occurred several turns before. This leaves a major gap to be filled, as an understanding of how players contribute to the whole possession \u2013 not just the events that end it \u2013 can be critical in evaluating players, assessing the quality of their decision-making, and predicting the success of particular in-game tactics. The major obstacle to closing this gap is the current inability to evaluate the individual tactical decisions that form the substructure of every possession of every basketball game. For example, there is no current method to estimate the value of a dribble penetration or to compare the option of taking a contested shot to the option of passing to an open teammate. In this paper, we propose and implement a framework that removes this obstacle. Using player-tracking data, we develop a coherent, quantitative representation of a whole possession that summarizes each moment of the possession in terms of the number of points the offense is expected to score \u2013 a quantity we call expected possession value, or EPV (see Figure 1 for an illustration of EPV). 
We accomplish this by specifying and fitting a probabilistic model that encodes how ball handlers make decisions based on the spatial configuration of the players on the court."}
{"_id": "e0e15a2bfc63a9212226611004526f833583ff34", "title": "Mobile Phishing Attacks and Mitigation Techniques", "text": "Health Information Technology (HIT) professionals are in increasing demand as healthcare providers need help in the adoption and meaningful use of Electronic Health Record (EHR) systems while the HIT industry needs workforce skilled in HIT and EHR development. To respond to this increasing demand, the School of Computing and Software Engineering at Southern Polytechnic State University designed and implemented a series of HIT educational programs. This paper summarizes our experience in the HIT curriculum development and provides an overview of HIT workforce development initiatives and major HIT and health information management (HIM) educational resources. It also provides instructional implications and experiences for positioning HIT programs and enhancing curriculum."}
{"_id": "ae675bdff2ae0895223cedfaa636ef88308c5a6b", "title": "Learning Orthographic Features in Bi-directional LSTM for Biomedical Named Entity Recognition", "text": "End-to-end neural network models for named entity recognition (NER) have shown to achieve effective performances on general domain datasets (e.g. newswire), without requiring additional hand-crafted features. However, in biomedical domain, recent studies have shown that handengineered features (e.g. orthographic features) should be used to attain effective performance, due to the complexity of biomedical terminology (e.g. the use of acronyms and complex gene names). In this work, we propose a novel approach that allows a neural network model based on a long short-term memory (LSTM) to automatically learn orthographic features and incorporate them into a model for biomedical NER. Importantly, our bi-directional LSTM model learns and leverages orthographic features on an end-to-end basis. We evaluate our approach by comparing against existing neural network models for NER using three well-established biomedical datasets. Our experimental results show that the proposed approach consistently outperforms these strong baselines across all of the three datasets."}
{"_id": "66ecbdb79ce324a4e1c6f44353e5aaee0f164ad9", "title": "The attention system of the human brain.", "text": "The concept of attention as central to human performance extends back to the start of experimental psychology (James 1890), yet even a few years ago, it would not have been possible to outline in even a preliminary form a functional anatomy of the human attentional system. New developments in neuroscience (Hillyard & Picton 1987, Raichle 1983, Wurtz et al 1980) have opened the study of higher cognition to physiological analysis, and have revealed a system of anatomical areas that appear to be basic to the selection of information for focal (conscious) processing. The importance of attention is its unique role in connecting the mental level of description of processes used in cognitive science with the anatomical level common i neuroscience. Sperry (1988, p. 609) describes the central role that mental concepts play in understanding brain function as follows:"}
{"_id": "5627bcd7b6859735d2479c62ae4326dbb4a59e3f", "title": "Improving Collocated Robot Teleoperation with Augmented Reality", "text": "Robot teleoperation can be a challenging task, often requiring a great deal of user training and expertise, especially for platforms with high degrees-of-freedom (e.g., industrial manipulators and aerial robots). Users often struggle to synthesize information robots collect (e.g., a camera stream) with contextual knowledge of how the robot is moving in the environment. We explore how advances in augmented reality (AR) technologies are creating a new design space for mediating robot teleoperation by enabling novel forms of intuitive, visual feedback. We prototype several aerial robot teleoperation interfaces using AR, which we evaluate in a 48-participant user study where participants completed an environmental inspection task. Our new interface designs provided several objective and subjective performance benefits over existing systems, which often force users into an undesirable paradigm that divides user attention between monitoring the robot and monitoring the robot\u00bbs camera feed(s)."}
{"_id": "d10ffc7a9f94c757f68a1aab8e7ed69ec974c831", "title": "Holistically-Nested Edge Detection", "text": ""}
{"_id": "643df5d5c122774e77938a922985f66055e02d4a", "title": "Supplementary Material : Video Summarization with Long Short-term Memory", "text": "\u2013 Section 1: converting between different formats of ground-truth annotations (Section 3.1 in the main text) \u2013 Section 2: details of the datasets (Section 4.1 in the main text) \u2013 Section 3: details of our LSTM-based models, including the learning objective for dppLSTM and the generating process of shot-based summaries for both vsLSTM and dppLSTM (Section 3.4 and 3.5 in the main text) \u2013 Section 4: comparing different network structures for dppLSTM (Section 3.4 in the main text) \u2013 Section 5: Other implementation details \u2013 Section 6: Additional discussions on video summarization"}
{"_id": "114ef0ee39bfe835f7df778f36a8ad60571f5449", "title": "A multiscale retinex for bridging the gap between color images and the human observation of scenes", "text": "Direct observation and recorded color images of the same scenes are often strikingly different because human visual perception computes the conscious representation with vivid color and detail in shadows, and with resistance to spectral shifts in the scene illuminant. A computation for color images that approaches fidelity to scene observation must combine dynamic range compression, color consistency-a computational analog for human vision color constancy-and color and lightness tonal rendition. In this paper, we extend a previously designed single-scale center/surround retinex to a multiscale version that achieves simultaneous dynamic range compression/color consistency/lightness rendition. This extension fails to produce good color rendition for a class of images that contain violations of the gray-world assumption implicit to the theoretical foundation of the retinex. Therefore, we define a method of color restoration that corrects for this deficiency at the cost of a modest dilution in color consistency. Extensive testing of the multiscale retinex with color restoration on several test scenes and over a hundred images did not reveal any pathological behaviour."}
{"_id": "6f00ceddb3e7f407c555a546bf3db354b400443d", "title": "When Professionals Become Mothers , Warmth Doesn \u2019 t Cut the Ice", "text": "Working moms risk being reduced to one of two subtypes: homemakers\u2014viewed as warm but incompetent, or female professionals\u2014characterized as competent but cold. The current study ( N = 122 college students) presents four important findings. First, when working women become mothers, they trade perceived competence for perceived warmth. Second, working men don\u2019t make this trade; when they become fathers, they gain perceived warmth and maintain perceived competence. Third, people report less interest in hiring, promoting, and educating working moms relative to working dads and childless employees. Finally, competence ratings predict interest in hiring, promoting, and educating workers. Thus, working moms\u2019 gain in perceived warmth does not help them, but their loss in perceived competence does hurt them."}
{"_id": "bb0975460b27ddbb96e0210a7e6f3a71b31b6193", "title": "Dealing with the new user cold-start problem in recommender systems: A comparative review", "text": "The Recommender System (RS) is an efficient tool for decision makers that assists in the selection of appropriate items according to their preferences and interests. This system has been applied to various domains to personalize applications by recommending items such as books, movies, songs, restaurants, news articles and jokes, among others. An important issue problem, which occurs when there is a new user that has been registered to the system and no prior rating of this user is found in the rating table. In this paper, we first present a classification that divides the relevant studies addressing the new user cold-start problem into three major groups and summarize their advantages and disadvantages in a tabular format. Next, some typical algorithms of these groups, such as MIPFGWC-CS, NHSM, FARAMS and HU\u2013FCF, are described. Finally, these algorithms are implemented and validated on some benchmark RS datasets under various settings of the new user cold start. The experimental results indicate that NHSM achieves better accuracy and computational time than the relevant methods. & 2014 Elsevier Ltd. All rights reserved."}
{"_id": "7bfb1641bf9f7f862d7363ad8f085c891a56ae62", "title": "Tracking-assisted Weakly Supervised Online Visual Object Segmentation in Unconstrained Videos", "text": "This paper tackles the task of online video object segmentation with weak supervision, i.e., labeling the target object and background with pixel-level accuracy in unconstrained videos, given only one bounding box information in the first frame. We present a novel tracking-assisted visual object segmentation framework to achieve this. On the one hand, initialized with a given bounding box in the first frame, the auxiliary object tracking module guides the segmentation module frame by frame by providing motion and region information, which is usually missing in semi-supervised methods. Moreover, compared with the unsupervised approach, our approach with such minimum supervision can focus on the target object without bringing unrelated objects into the final results. On the other hand, the video object segmentation module also improves the robustness of the visual object tracking module by pixel-level localization and objectness information. Thus, segmentation and tracking in our framework can mutually help each other in an online manner. To verify the generality and effectiveness of the proposed framework, we evaluate our weakly supervised method on two cross-domain datasets, i.e., the DAVIS and VOT2016 datasets, with the same configuration and parameter setting. Experimental results show the top performance of our method, which is even better than the leading semi-supervised methods. Furthermore, we conduct the extensive ablation study on our approach to investigate the influence of each component and main parameters."}
{"_id": "2a6f573528eb7010dfeeead151b3bd942ff005fa", "title": "A 77-GHz MMIC power amplifier for automotive radar applications", "text": "A MMIC 77-GHz two-stage power amplifier (PA) is reported in this letter. This MMIC chip demonstrated a measured small signal gain of over 10 dB from 75 GHz to 80 GHz with 18.5-dBm output power at 1 dB compression. The maximum small signal gain is above 12 dB from 77 to 78 GHz. The saturated output power is better than 21.5 dBm and the maximum power added efficiency is 10% between 75 GHz and 78 GHz. This chip is fabricated using 0.1-/spl mu/m AlGaAs/InGaAs/GaAs PHEMT MMIC process on 4-mil GaAs substrate. The output power performance is the highest among the reported 4-mil MMIC GaAs HEMT PAs at this frequency and therefore it is suitable for the 77-GHz automotive radar systems and related transmitter applications in W-band."}
{"_id": "d420b012f78e2cd2145b555a826b358b1f4eb6e9", "title": "Paralinguistics in speech and language - State-of-the-art and the challenge", "text": "Paralinguistic analysis is increasingly turning into a mainstream topic in speech and language processing. This article aims to provide a broad overview of the constantly growing field by defining the field, introducing typical applications, presenting exemplary resources, and sharing a unified view of the chain of processing. It then presents the first broader Paralinguistic Challenge organised at INTERSPEECH 2010 by the authors including a historical overview of the Challenge tasks of recognising age, gender, and affect, a summary of methods used by the participants, and their results. In addition, we present the new benchmark obtained by fusion of participants\u2019 predictions and conclude by discussing ten recent and emerging trends in the analysis of paralinguistics in speech and language. \u00a9 2012 Elsevier Ltd. All rights reserved."}
{"_id": "89f0d7cca06055847ceae6f1b560def6e9f0ad2f", "title": "The Ensembl gene annotation system", "text": "The Ensembl gene annotation system has been used to annotate over 70 different vertebrate species across a wide range of genome projects. Furthermore, it generates the automatic alignment-based annotation for the human and mouse GENCODE gene sets. The system is based on the alignment of biological sequences, including cDNAs, proteins and RNA-seq reads, to the target genome in order to construct candidate transcript models. Careful assessment and filtering of these candidate transcripts ultimately leads to the final gene set, which is made available on the Ensembl website. Here, we describe the annotation process in detail.Database URL: http://www.ensembl.org/index.html."}
{"_id": "087095f4825c0adb1ec747990c8bc9e12385d939", "title": "LocationSpark: A Distributed In-Memory Data Management System for Big Spatial Data", "text": "We present LocationSpark, a spatial data processing system built on top of Apache Spark, a widely used distributed data processing system. LocationSpark offers a rich set of spatial query operators, e.g., range search, kNN, spatio-textual operation, spatial-join, and kNN-join. To achieve high performance, LocationSpark employs various spatial indexes for in-memory data, and guarantees that immutable spatial indexes have low overhead with fault tolerance. In addition, we build two new layers over Spark, namely a query scheduler and a query executor. The query scheduler is responsible for mitigating skew in spatial queries, while the query executor selects the best plan based on the indexes and the nature of the spatial queries. Furthermore, to avoid unnecessary network communication overhead when processing overlapped spatial data, We embed an efficient spatial Bloom filter into LocationSpark\u2019s indexes. Finally, LocationSpark tracks frequently accessed spatial data, and dynamically flushes less frequently accessed data into disk. We evaluate our system on real workloads and demonstrate that it achieves an order of magnitude performance gain over a baseline framework."}
{"_id": "54386f531d48af772ccbd7854dc04f8b8b90bc33", "title": "Deep Gaussian Mixture-Hidden Markov Model for Classification of EEG Signals", "text": "Electroencephalography (EEG) signals are complex dynamic phenomena that exhibit nonlinear and nonstationary behaviors. These characteristics tend to undermine the reliability of existing hand-crafted EEG features that ignore time-varying information and impair the performances of classification models. In this paper, we propose a novel method that can automatically capture the nonstationary dynamics of EEG signals for diverse classification tasks. It consists of two components. The first component uses an autoregressive-deep variational autoencoder model for automatic feature extraction, and the second component uses a Gaussian mixture-hidden Markov model for EEG classification with the extracted features. We compare the performance of our proposed method and the state-of-the-art methods in two EEG classification tasks, subject, and event classification. Results show that our approach outperforms the others by averages of $\\text{15}\\%\\pm \\text{6.3}$ (p-value $<\\text{0.05}$) and $\\text{22}\\%\\pm \\text{5.7}$ (p-value $<\\text{0.05}$) for subject and event classifications, respectively."}
{"_id": "387404b5f5fa7158ab19a466ce925eea72e735c6", "title": "Collaboration with Lean Media: how open-source software succeeds", "text": "Open-source software, usually created by volunteer programmers dispersed worldwide, now competes with that developed by software firms. This achievement is particularly impressive as open-source programmers rarely meet. They rely heavily on electronic media, which preclude the benefits of face-to-face contact that programmers enjoy within firms. In this paper, we describe findings that address this paradox based on observation, interviews and quantitative analyses of two open-source projects. The findings suggest that spontaneous work coordinated afterward is effective, rational organizational culture helps achieve agreement among members and communications media moderately support spontaneous work. These findings can imply a new model of dispersed collaboration."}
{"_id": "acffb7c5ea7a82853024d92114154c22db343d0c", "title": "Face Detection By Finding The Facial Features And The Angle of Inclination of Tilted Face", "text": "Face is a popular topic of research in computer vision, image analysis and understanding. This paper proposes a method to detect straight posed face as well as tilted face in colour images. Algorithm first detects skin colour region and then skin region is further processed by finding connected components and holes. Each connected components is tested to extract facial features and Y-value of each holes. If features exist in the connected component then the component is concluded as face. The Y-value of both eyes are compared to find whether the face is straight or tilted. For tilted face, the angle of tilt is determined and the input image is rotated by the angle of tilt to straighten up the face. After straightening the face, the rectangle is drawn to enclose the face. The dimension of drawn rectangle is obtained by using the distance parameter of the line between two eyes."}
{"_id": "ea5f224ba8f122ee6e8bfe0d732c55c1c5280cd4", "title": "Quality Attributes of Web Software Applications", "text": "In only four or five years, the world wide web has changed from a static collection of HTML web pages to a dynamic engine that powers e-commerce, collaborative work, and distribution of information and entertainment. These exciting changes have been fueled by many changes in software technology, the software development process, and how software is deployed. Although the word \u201cheterogeneous\u201d is commonly used to describe web software, we might easily forget to notice in how many ways it can be applied. In fact, the synonymous term \u201cdiverse\u201d is more general and familiar, and may be more appropriate. Web software applications use diverse types of hardware, they include a diverse collection of types of implementation languages (including traditional programs, HTML, interpreted scripts, and databases), they are composed of software written in diverse languages, and they are built by collections of people with very diverse sets of skills. Although these changes in how web applications are built are interesting and fascinating, one of the most unique aspects of web software applications is in terms of the needs they must satisfy. Web applications have very high requirements for a number of quality attributes. Some of these quality attributes have been important in other (mostly relatively small) segments of the industry, but some of them are relatively new. This paper discusses some of the unique technological aspects of building web software applications, the unique requirements of quality attributes, and how they can be achieved."}
{"_id": "e3f2c0e4053c06342663873937f3617e5ef2ff60", "title": "Small-Size Hybrid Loop/Open-Slot Antenna for the LTE Smartphone", "text": "A small-size hybrid loop/open-slot antenna for the LTE smartphone is presented. The antenna is disposed in a small notch of 7 \u00d7 40 mm2 in the system ground plane of the smartphone and has a thickness of 5.8 mm to be capable of embedding within most modern smartphones. The antenna mainly comprises a capacitively fed loop antenna and a ground strip embedded therein to create an open-slot antenna so as to generate additional resonant modes for bandwidth enhancement. The antenna can provide two wide operating bands to cover the LTE low band (698-960 MHz) and the LTE middle/high bands (1710-2690/3400-3800 MHz). In this study, the creation of the open-slot antenna using a ground strip embedded in the ground clearance region for the coupled-fed loop antenna is addressed. The proposed antenna is also fabricated and tested. Both the simulation study and the experimental results are presented."}
{"_id": "c0727e3757f4d7f7cd1969070ae3ae3929774df6", "title": "Word segmentation for the Myanmar language", "text": "This study reports the development of a Myanmar word segmentation method using Unicode standard encoding. Word segmentation is an essential step prior to natural language processing in the Myanmar language, because a Myanmar text is a string of characters without explicit word boundary delimiters. The proposed method has two phases: syllable segmentation and syllable merging. A rule-based heuristic approach was adopted for syllable segmentation, and a dictionary-based statistical approach for syllable merging. Evaluation of test results showed that the method is very effective for the Myanmar language."}
{"_id": "59af2ae950569b60fa88749ec95f86d8685f8016", "title": "Quasi-Resonant (QR) Controller With Adaptive Switching Frequency Reduction Scheme for Flyback Converter", "text": "A quasi-resonant (QR) controller with an adaptive frequency reduction scheme has been developed for a flyback converter. While maintaining the valley switching, the QR controller reduces the switching frequency for lighter load by skipping some valleys to reduce the power loss and thereby achieve better light-load efficiency. If the QR controller cannot detect any valley due to the damped oscillation of switch voltage, then the valley switching is given up and the nonvalley switching is employed to keep reducing the switching frequency for lighter load. The proposed QR controller has been implemented in a 0.35-\u03bcm 700-V BCDMOS process and applied to a 40-W flyback converter. The power efficiency of the flyback converter is improved by up to 3.0% when the proposed QR controller is employed compared to the one employing the conventional QR controller."}
{"_id": "4080cc361e61a3a1145b47c407e4fce88aa280f6", "title": "Accelerating Big Data Analytics Using FPGAs", "text": "Emerging big data analytics applications require a significant amount of server computational power. As chips are hitting power limits, computing systems are moving away from general-purpose designs and toward greater specialization. Hardware acceleration through specialization has received renewed interest in recent years, mainly due to the dark silicon challenge. To address the computing requirements of big data, and based on the benchmarking and characterization results, we envision a data-driven heterogeneous architecture for next generation big data server platforms that leverage the power of field-programmable gate array (FPGA) to build custom accelerators in a Hadoop MapReduce framework. Unlike a full and dedicated implementation of Hadoop MapReduce algorithm on FPGA, we propose the hardware/software (HW/SW) co-design of the algorithm, which trades some speedup at a benefit of less hardware. Considering communication overhead with FPGA and other overheads involved in Hadoop MapReduce environment such as compression and decompression, shuffling and sorting, our experimental results show significant potential for accelerating Hadoop MapReduce machine learning kernels using HW/SW co-design methodology."}
{"_id": "355633f6ea97194384216629e0c528e5ba32615c", "title": "The Romance of Learning from Disagreement. The Effect of Cohesiveness and Disagreement on Knowledge Sharing Behavior and Individual Performance Within Teams", "text": "PURPOSE: The purpose of this study was to explore the effects of disagreement and cohesiveness on knowledge sharing in teams, and on the performance of individual team members. DESIGN/METHODOLOGY/APPROACH: Data were obtained from a survey among 1,354 employees working in 126 teams in 17 organizations. FINDINGS: The results show that cohesiveness has a positive effect on the exchange of advice between team members and on openness for sharing opinions, whereas disagreement has a negative effect on openness for sharing opinions. Furthermore, the exchange of advice in a team has a positive effect on the performance of individual team members and acts as a mediator between cohesiveness and individual performance. IMPLICATIONS: Managers who want to stimulate knowledge sharing processes and performance within work teams may be advised to take measures to prevent disagreement between team members and to enhance team cohesiveness. ORIGINALITY/VALUE: Although some gurus in organizational learning claim that disagreement has a positive effect on group processes such as knowledge sharing and team learning, this study does not support this claim."}
{"_id": "087828dc4ea75bf6fc788ead2048103928cb6210", "title": "Measurement of carotid blood pressure and local pulse wave velocity changes during cuff induced hyperemia", "text": "We present a prototype design of dual element photoplethysmograph (PPG) probe along with associated measurement system for carotid local pulse wave velocity (PWV) evaluation in a non-invasive and continuous manner. The PPG probe consists of two identical sensing modules placed 23 mm apart. Simultaneously measured blood pulse waveforms from these arterial sites were processed and the pulse transit time delay was resolved using the developed application-specific software. The ability of developed PPG probe and associated measurement system to detect acute changes in carotid local PWV due to blood pressure (BP) variations was experimentally validated by an in-vivo study. Intra-subject carotid BP elevation was achieved by an upper arm cuff based occlusion, which offered a controlled way of local PWV escalation. The elevated carotid BP values were also recorded by a calibrated pressure tonometer prior to the study, and was used as a reference. A significant increment (1.0 \u2013 2.6 m/s) in local PWV was observed and was proportional to the BP increment induced by the occlusive reactive hyperemia. Study results demonstrated the feasibility of real-time signal acquisition and reliable local PWV evaluation under normal and elevated BP conditions using the developed measurement system."}
{"_id": "84011fcb09e7d0d01d5f9aec53852c14f03c681f", "title": "Distributed Compressed Sensing off the Grid", "text": "This letter investigates the joint recovery of a frequency-sparse signal ensemble sharing a common frequency-sparse component from the collection of their compressed measurements. Unlike conventional arts in compressed sensing, the frequencies follow an off-the-grid formulation and are continuously valued in [0, 1]. As an extension of atomic norm, the concatenated atomic norm minimization approach is proposed to handle the exact recovery of signals, which is reformulated as a computationally tractable semidefinite program. The optimality of the proposed approach is characterized using a dual certificate. Numerical experiments are performed to illustrate the effectiveness of the proposed approach and its advantage over separate recovery."}
{"_id": "64e0ba0d941e4ee60cd5fa8bce4b100adfe6f2f7", "title": "CallRank: Combating SPIT Using Call Duration, Social Networks and Global Reputation", "text": "The growing popularity of IP telephony systems has made them attractive targets for spammers. Voice call spam, also known as Spam over Internet Telephony (SPIT), is potentially a more serious problem than email spam because of the real time processing requirements of voice packets. We explore a novel mechanism that uses duration of calls between users to combat SPIT. CallRank, the scheme proposed by us, uses call duration to establish social network linkages and global reputations for callers, based on which call recipients can decide whether the caller is legitimate or not. CallRank has been implemented within a VoIP system simulation and our results show that we are able to achieve a false negative rate of 10% and a false positive rate of 3% even in the presence of a significant fraction of spammers."}
{"_id": "98ff43f27e4985d828b1ebead6b16d3d7ba6c2ae", "title": "Soft gelatin capsules (softgels).", "text": "It is estimated that more than 40% of new chemical entities (NCEs) coming out of the current drug discovery process have poor biopharmaceutical properties, such as low aqueous solubility and/or permeability. These suboptimal properties pose significant challenges for the oral absorption of the compounds and for the development of orally bioavailable dosage forms. Development of soft gelatin capsule (softgel) dosage form is of growing interest for the oral delivery of poorly water soluble compounds (BCS class II or class IV). The softgel dosage form offers several advantages over other oral dosage forms, such as delivering a liquid matrix designed to solubilize and improve the oral bioavailability of a poorly soluble compound as a unit dose solid dosage form, delivering low and ultra-low doses of a compound, delivering a low melting compound, and minimizing potential generation of dust during manufacturing and thereby improving the safety of production personnel. However, due to the very dynamic nature of the softgel dosage form, its development and stability during its shelf-life are fraught with several challenges. The goal of the current review is to provide an in-depth discussion on the softgel dosage form to formulation scientists who are considering developing softgels for therapeutic compounds."}
{"_id": "ba73efbaabc225f3de48f637e5eb165f81a05007", "title": "Fast human segmentation using color and depth", "text": "Accurate segmentation of humans from live videos is an important problem to be solved in developing immersive video experience. We propose to extract the human segmentation information from color and depth cues in a video using multiple modeling techniques. The prior information from human skeleton data is also fused along with the depth and color models to obtain the final segmentation inside a graph-cut framework. The proposed method runs real time on live videos using single CPU and is shown to be quantitatively outperforming the methods that directly fuse color and depth data."}
{"_id": "08639cd6b89ac8f375cdc1076b9485ac9d657083", "title": "Multi-Core, Main-Memory Joins: Sort vs. Hash Revisited", "text": "In this paper we experimentally study the performance of main-memory, parallel, multi-core join algorithms, focusing on sort-merge and (radix-)hash join. The relative performance of these two join approaches have been a topic of discussion for a long time. With the advent of modern multicore architectures, it has been argued that sort-merge join is now a better choice than radix-hash join. This claim is justified based on the width of SIMD instructions (sort-merge outperforms radix-hash join once SIMD is sufficiently wide), and NUMA awareness (sort-merge is superior to hash join in NUMA architectures). We conduct extensive experiments on the original and optimized versions of these algorithms. The experiments show that, contrary to these claims, radixhash join is still clearly superior, and sort-merge approaches to performance of radix only when very large amounts of data are involved. The paper also provides the fastest implementations of these algorithms, and covers many aspects of modern hardware architectures relevant not only for joins but for any parallel data processing operator."}
{"_id": "0a5033c0b2bb2421f8c46e196fb0fb1464a636b6", "title": "Implementing database operations using SIMD instructions", "text": "Modern CPUs have instructions that allow basic operations to be performed on several data elements in parallel. These instructions are called SIMD instructions, since they apply a single instruction to multiple data elements. SIMD technology was initially built into commodity processors in order to accelerate the performance of multimedia applications. SIMD instructions provide new opportunities for database engine design and implementation. We study various kinds of operations in a database context, and show how the inner loop of the operations can be accelerated using SIMD instructions. The use of SIMD instructions has two immediate performance benefits: It allows a degree of parallelism, so that many operands can be processed at once. It also often leads to the elimination of conditional branch instructions, reducing branch mispredictions.We consider the most important database operations, including sequential scans, aggregation, index operations, and joins. We present techniques for implementing these using SIMD instructions. We show that there are significant benefits in redesigning traditional query processing algorithms so that they can make better use of SIMD technology. Our study shows that using a SIMD parallelism of four, the CPU time for the new algorithms is from 10% to more than four times less than for the traditional algorithms. Superlinear speedups are obtained as a result of the elimination of branch misprediction effects."}
{"_id": "30b1293e39c52ddd0e2a617de47c1ad843621258", "title": "Row-wise parallel predicate evaluation", "text": "Table scans have become more interesting recently due to greater use of ad-hoc queries and greater availability of multicore, vector-enabled hardware. Table scan performance is limited by value representation, table layout, and processing techniques. In this paper we propose a new layout and processing technique for efficient one-pass predicate evaluation. Starting with a set of rows with a fixed number of bits per column, we append columns to form a set of banks and then pad each bank to a supported machine word length, typically 16, 32, or 64 bits. We then evaluate partial predicates on the columns of each bank, using a novel evaluation strategy that evaluates column level equality, range tests, IN-list predicates, and conjuncts of these predicates, simultaneously on multiple columns within a bank, and on multiple rows within a machine register. This approach outperforms pure column stores, which must evaluate the partial predicates one column at a time. We evaluate and compare the performance and representation overhead of this new approach and several proposed alternatives."}
{"_id": "313e8120c31fda6877ea426d8a3be9bcf1b6e088", "title": "The Vertica Analytic Database: C-Store 7 Years Later", "text": "This paper describes the system architecture of the Vertica Analytic Database (Vertica), a commercialization of the design of the C-Store research prototype. Vertica demonstrates a modern commercial RDBMS system that presents a classical relational interface while at the same time achieving the high performance expected from modern \u201cweb scale\u201d analytic systems by making appropriate architectural choices. Vertica is also an instructive lesson in how academic systems research can be directly commercialized into a successful product."}
{"_id": "370e1fcea7074072fe5946d3e728affd582a9a44", "title": "MonetDB: Two Decades of Research in Column-oriented Database Architectures", "text": "MonetDB is a state-of-the-art open-source column-store database management system targeting applications in need for analytics over large collections of data. MonetDB is actively used nowadays in health care, in telecommunications as well as in scientific databases and in data management research, accumulating on average more than 10,000 downloads on a monthly basis. This paper gives a brief overview of the MonetDB technology as it developed over the past two decades and the main research highlights which drive the current MonetDB design and form the basis for its future evolution."}
{"_id": "9da507ed2162a406d41a574103e241e52fdd39e8", "title": "Calf stretching in correct alignment. An important consideration in plantar fasciopathies.", "text": "Stretching of the calf muscles is important in the treatment of plantar fasciopathy. In order to correctly stretch the calf muscles without strain on the plantar fascia the correct alignment of the lower limb should be maintained. A clinical method of achieving this is presented along with a practical guide to assisting the patient to become familiar with correct lower limb alignment."}
{"_id": "e0b650b586095f9dd88108563cfb5b09a4fa2237", "title": "Towards Multi-Stakeholder Utility Evaluation of Recommender Systems", "text": "A core value in recommender systems is personalization, the idea that the recommendations produced are those that match the user\u2019s preferences. However, in many real-world recommendation contexts, the concerns of additional stakeholders may come into play, such as the producers of items or those of the system owner. Some researchers have examined special cases of such concerns, for example, in reciprocal recommendation. However, there has not been a comprehensive treatment of the integration of multiple stakeholders into recommendation calculations. The paper suggests a utility-based framework for representing stakeholder values in recommendation actions and calculating a multidimensional utility. We demonstrate how a standard algorithm performs in a simulation of a multi-stakeholder recommendation task requiring on-line optimization, and show that a simple greedy approach can lead to enhanced overall utility with minimal loss of accuracy for users."}
{"_id": "8af3f2d52bfa69ab57ee31e6515e4990935afe42", "title": "SPARTan: Scalable PARAFAC2 for Large & Sparse Data", "text": "In exploratory tensor mining, a common problem is how to analyze a set of variables across a set of subjects whose observations do not align naturally. For example, when modeling medical features across a set of patients, the number and duration of treatments may vary widely in time, meaning there is no meaningful way to align their clinical records across time points for analysis purposes. To handle such data, the state-of-the-art tensor model is the so-called PARAFAC2, which yields interpretable and robust output and can naturally handle sparse data. However, its main limitation up to now has been the lack of efficient algorithms that can handle large-scale datasets.\n In this work, we fill this gap by developing a scalable method to compute the PARAFAC2 decomposition of large and sparse datasets, called SPARTan. Our method exploits special structure within PARAFAC2, leading to a novel algorithmic reformulation that is both faster (in absolute time) and more memory-efficient than prior work. We evaluate SPARTan on both synthetic and real datasets, showing 22X performance gains over the best previous implementation and also handling larger problem instances for which the baseline fails. Furthermore, we are able to apply SPARTan to the mining of temporally-evolving phenotypes on data taken from real and medically complex pediatric patients. The clinical meaningfulness of the phenotypes identified in this process, as well as their temporal evolution over time for several patients, have been endorsed by clinical experts."}
{"_id": "1b8c932573a5d8f35ecb4b9e5387abf6f2f4e403", "title": "mwetoolkit: a Framework for Multiword Expression Identification", "text": "This paper presents the Multiword Expression Toolkit (mwetoolkit), an environment for type and language-independent MWE identification from corpora. The mwetoolkit provides a targeted list of MWE candidates, extracted and filtered according to a number of user-defined criteria and a set of standard statistical association measures. For generating corpus counts, the toolkit provides both a corpus indexation facility and a tool for integration with web search engines, while for evaluation, it provides validation and annotation facilities. The mwetoolkit also allows easy integration with a machine learning tool for the creation and application of supervised MWE extraction models if annotated data is available. In our experiments, the mwetoolkit was tested and evaluated in the context of MWE extraction in the biomedical domain. Our preliminary results show that the toolkit performs better than other approaches, especially concerning recall. Moreover, this first version can be extended in several ways in order to improve the quality of the results."}
{"_id": "e39988d4ee80bd91aa3c125f885ce3c0382767ef", "title": "Path planning algorithm development for autonomous vacuum cleaner robots", "text": "A vacuum cleaner robot, generally called a robovac, is an autonomous robot that is controlled by intelligent program. Autonomous vacuum cleaning robot will perform task like sweeping and vacuuming in a single pass. The DVR-1 vacuum cleaning robot consists of two DC motor operated wheels that allow 360 degree rotation, a castor wheel, side spinning brushes, a front bumper and a miniature vacuum pump. Sensors in the bumper are used for generating binary information of obstacle detection then they are processed by some controlling algorithms. These algorithms are used for path planning and navigation. The robot's bumper prevents them from bumping into walls and furniture by reversing or changing path accordingly."}
{"_id": "8b18277204c7648b33c568ed6cc7806f04d2cccd", "title": "Pupillary Responses to Robotic and Human Emotions: The Uncanny Valley and Media Equation Confirmed", "text": "Physiological responses during human-robots interaction are useful alternatives to subjective measures of uncanny feelings for nearly humanlike robots (uncanny valley) and comparable emotional responses between humans and robots (media equation). However, no studies have employed the easily accessible measure of pupillometry to confirm the uncanny valley and media equation hypotheses, evidence in favor of the existence of these hypotheses in interaction with emotional robots is scarce, and previous studies have not controlled for low level image statistics across robot appearances. We therefore recorded pupil size of 40 participants that viewed and rated pictures of robotic and human faces that expressed a variety of basic emotions. The robotic faces varied along the dimension of human likeness from cartoonish to humanlike. We strictly controlled for confounding factors by removing backgrounds, hair, and color, and by equalizing low level image statistics. After the presentation phase, participants indicated to what extent the robots appeared uncanny and humanlike, and whether they could imagine social interaction with the robots in real life situations. The results show that robots rated as nearly humanlike scored higher on uncanniness, scored lower on imagined social interaction, evoked weaker pupil dilations, and their emotional expressions were more difficult to recognize. Pupils dilated most strongly to negative expressions and the pattern of pupil responses across emotions was highly similar between robot and human stimuli. These results highlight the usefulness of pupillometry in emotion studies and robot design by confirming the uncanny valley and media equation hypotheses."}
{"_id": "600b53b34d9cdc8a0bf5f6cc2b755c2cbcdf88be", "title": "A Miniaturized Hexagonal-Triangular Fractal Antenna for Wide-Band Applications [Antenna Applications Corner]", "text": "Defense and aerospace communication systems need compact wide-band antennas suitable for multiple-frequency-band operation. In this article, a miniaturized hexagonaltriangular fractal antenna is examined for wide-band applications. The iterations are made from hexagonal rings, where the sides of the rings are connected by triangular elements inside a hexagonal metal patch. Transmissionline-feed technique is used to feed the signal. A triangular slotted symmetrical defective ground structure (DGS) with a rectangular slit at the center is used to obtain a bandwidth ratio of 8.4:1 with an operating bandwidth ranging from 3 to 25.2 GHz. The percentage of impedance bandwidth and gain of the proposed antenna are much superior to the recently reported wide-band antennas, which makes it suitable for numerous wireless applications, such as wireless local area networks, Wi-Fi, ultrawideband (UWB), X band, and Ku band. The performance of the antenna is analyzed by various parametric studies and field and current distribution analysis. The corresponding design details and comparison of simulated and practical results are presented."}
{"_id": "25c8c243f74c13ab18668ea109dfd2c389b0f273", "title": "A model-based method for the computation of fingerprints' orientation field", "text": "As a global feature of fingerprints, the orientation field is very important for automatic fingerprint recognition. Many algorithms have been proposed for orientation field estimation, but their results are unsatisfactory, especially for poor quality fingerprint images. In this paper, a model-based method for the computation of orientation field is proposed. First a combination model is established for the representation of the orientation field by considering its smoothness except for several singular points, in which a polynomial model is used to describe the orientation field globally and a point-charge model is taken to improve the accuracy locally at each singular point. When the coarse field is computed by using the gradient-based algorithm, a further result can be gained by using the model for a weighted approximation. Due to the global approximation, this model-based orientation field estimation algorithm has a robust performance on different fingerprint images. A further experiment shows that the performance of a whole fingerprint recognition system can be improved by applying this algorithm instead of previous orientation estimation methods."}
{"_id": "0fdc52fd773f5fa34ff4d4f66e1c4dffee884ff0", "title": "Skin Detection Using Color, Texture and Space Information", "text": "Among various skin detection methods, Skin Probability Map (SPM) method is an effective one. Though SPM method possesses high true acceptance rate (TAR), its false acceptance rate (FAR) is unacceptable in some cases. The reason is that SPM method only use pixel-level color information. This paper proposes an improved skin detection method that integrates color, texture and space information. After color filter, a texture filter is constructed based on texture features extracted form Gabor wavelet transform. Texture filter will further filter non-skin pixels, meanwhile, it may also filter some skin pixel. To compensate the loss, after texture filter, a marker driven watershed transform is then used to grow already obtained skin regions. Experimental results show that the proposed method can achieve better performance than that of SPM."}
{"_id": "5a155d1655c3de620bd73095a7a953b197b6356d", "title": "Information interaction: Providing a framework for information architecture", "text": "Information interaction is the process that people use in interacting with the content of an information system. Information architecture is a blueprint and navigational aid to the content of information-rich systems. As such information architecture performs an important supporting role in information interactivity. This article elaborates on a model of information interactivity that crosses the \u201cno-man\u2019s land\u201d between user and computer articulating a model that includes user, content and system, illustrating the context for information architecture."}
{"_id": "516243c4c2b28d168c72dd6b0b7e3921b40e6d03", "title": "Thematic development for measuring cohesion and coherence between sentences in English paragraph", "text": "Writing is a skill of marking coherent words on a paper and composing a text. There are several criteria of good writing, two of them are cohesion and coherence. Research of cohesion and coherence writing has been conducted by using Centering Theory and Entity Transition Value method. However, the result can still be improved. Therefore, in this research, we tried to use Thematic Development approach which focused on the use of Theme and Rheme as a method to analyze coherence level of a paragraph, combined with CogNIAC rules and Centering Theory to analyze its cohesion. Besides improving the result of previous methods, this research aims to help users in evaluating and assessing their writing text. To achieve these objectives, the proposed method is compared to previous works as well as human judge. Based on the experiment, the proposed method yields average result of 91% which is nearly equivalent to human judge which is 92%. Thematic Development also yields better result compared to Centering Theory which is 29% and Entity Transition Value which is 0%, given the same data set of beginner and intermediate simple writing."}
{"_id": "54c2e0b7907e273a6e31deb0bf37d064a2f5603c", "title": "A brain-computer interface for high-level remote control of an autonomous, reinforcement-learning-based robotic system for reaching and grasping", "text": "We present an Internet-based brain-computer interface (BCI) for controlling an intelligent robotic device with autonomous reinforcement-learning. BCI control was achieved through dry-electrode electroencephalography (EEG) obtained during imaginary movements. Rather than using low-level direct motor control, we employed a high-level control scheme of the robot, acquired via reinforcement learning, to keep the users cognitive load low while allowing control a reaching-grasping task with multiple degrees of freedom. High-level commands were obtained by classification of EEG responses using an artificial neural network approach utilizing time-frequency features and conveyed through an intuitive user interface. The novel ombination of a rapidly operational dry electrode setup, autonomous control and Internet connectivity made it possible to conveniently interface subjects in an EEG laboratory with remote robotic devices in a closed-loop setup with online visual feedback of the robots actions to the subject. The same approach is also suitable to provide home-bound patients with the possibility to control state-of-the-art robotic devices currently confined to a research environment. Thereby, our BCI approach could help severely paralyzed patients by facilitating patient-centered research of new means of communication, mobility and independence."}
{"_id": "8a7fb0daa368e5a15142bf1ab7f8e6c9872ba8a1", "title": "An End-to-End Neural Network for Polyphonic Piano Music Transcription", "text": "We present a supervised neural network model for polyphonic piano music transcription. The architecture of the proposed model is analogous to speech recognition systems and comprises an acoustic model and a music language model. The acoustic model is a neural network used for estimating the probabilities of pitches in a frame of audio. The language model is a recurrent neural network that models the correlations between pitch combinations over time. The proposed model is general and can be used to transcribe polyphonic music without imposing any constraints on the polyphony. The acoustic and language model predictions are combined using a probabilistic graphical model. Inference over the output variables is performed using the beam search algorithm. We perform two sets of experiments. We investigate various neural network architectures for the acoustic models and also investigate the effect of combining acoustic and music language model predictions using the proposed architecture. We compare performance of the neural network-based acoustic models with two popular unsupervised acoustic models. Results show that convolutional neural network acoustic models yield the best performance across all evaluation metrics. We also observe improved performance with the application of the music language models. Finally, we present an efficient variant of beam search that improves performance and reduces run-times by an order of magnitude, making the model suitable for real-time applications."}
{"_id": "0b5c93e5cd886b1b59b05bc87cf9063539987785", "title": "A survey of code-based change impact analysis techniques", "text": "Software change impact analysis (CIA) is a technique for identifying the effects of a change, or estimating what needs to be modified to accomplish a change. Since the 1980s, there have been many investigations on CIA, especially for code-based CIA techniques. However, there have been very few surveys on this topic. This article tries to fill this gap. And 30 papers that provide empirical evaluation on 23 code-based CIA techniques are identified. Then, data was synthesized against four research questions. The study presents a comparative framework including seven properties, which characterize the CIA techniques, and identifies key applications of CIA techniques in software maintenance. In addition, the need for further research is also presented in the following areas: evaluating existing CIA techniques and proposing new CIA techniques under the proposed framework, developing more mature tools to support CIA, comparing current CIA techniques empirically with unified metrics and common benchmarks, and applying the CIA more extensively and effectively in the software maintenance phase. Copyright \u00a9 2012 John Wiley & Sons, Ltd."}
{"_id": "47ac1f89567d41388db9293e43ff4fe29c51fa53", "title": "Color Correction of Underwater Images for Aquatic Robot Inspection", "text": "In this paper, we consider the problem of color restoration using statistical priors. This is applied to color recovery for underwater images, using an energy minimization formulation. Underwater images present a challenge when trying to correct the blue-green monochrome look to bring out the color we know marine life has. For aquatic robot tasks, the quality of the images is crucial and needed in real-time. Our method enhances the color of the images by using a Markov Random Field (MRF) to represent the relationship between color depleted and color images. The parameters of the MRF model are learned from the training data and then the most probable color assignment for each pixel in the given color depleted image is inferred by using belief propagation (BP). This allows the system to adapt the color restoration algorithm to the current environmental conditions and also to the task requirements. Experimental results on a variety of underwater scenes demonstrate the feasibility of our method."}
{"_id": "ec1dc35fb383fdc2efd4b226f7377a24018871ff", "title": "Semantics-Based Online Malware Detection: Towards Efficient Real-Time Protection Against Malware", "text": "Recently, malware has increasingly become a critical threat to embedded systems, while the conventional software solutions, such as antivirus and patches, have not been so successful in defending the ever-evolving and advanced malicious programs. In this paper, we propose a hardware-enhanced architecture, GuardOL, to perform online malware detection. GuardOL is a combined approach using processor and field-programmable gate array (FPGA). Our approach aims to capture the malicious behavior (i.e., high-level semantics) of malware. To this end, we first propose the frequency-centric model for feature construction using system call patterns of known malware and benign samples. We then develop a machine learning approach (using multilayer perceptron) in FPGA to train classifier using these features. At runtime, the trained classifier is used to classify the unknown samples as malware or benign, with early prediction. The experimental results show that our solution can achieve high classification accuracy, fast detection, low power consumption, and flexibility for easy functionality upgrade to adapt to new malware samples. One of the main advantages of our design is the support of early prediction-detecting 46% of malware within first 30% of their execution, while 97% of the samples at 100% of their execution, with <;3% false positives."}
{"_id": "2c9ce2eeac19c7a602f848a460e4f916028b900d", "title": "A 14mW 6.25Gb/s Transceiver in 90nm CMOS for Serial Chip-to-Chip Communications", "text": "A power-efficient 6.25Gb/s transceiver in 90nm CMOS for chip-to-chip communication is presented, it dissipates 2.2mW/Gb/s operating at a BER of <10-15 over a channel with -15dB attenuation at 3.125GHz. A shared LC-PLL, resonant clock distribution, a low-swing voltage-mode transmitter, a low-power phase rotator, and a software-based CDR and an adaptive equalizer are used to reduce power"}
{"_id": "9ec7c1b271cc1aec43d1cdc36a94bc39af81ec57", "title": "Hepta-Band Internal Antenna for Personal Communication Handsets", "text": "Modern personal communication handsets are shrinking in size and are required to operate at multiple frequency bands for enhanced functionality and performance. This poses an important challenge for antenna designers to build multiband antennas within the limited allowable space. In this paper, an internal antenna covering seven frequency bands is presented for personal communication handsets. The proposed antenna operates at GSM (880-960 MHz), DCS (1710-1880 MHz), PCS (1880-1990 MHz), UMTS (1900-2170 MHz), WiBro (2300-2390 MHz), Bluetooth (2.4-2.48 GHz), and WLAN (5.0-5.5 GHz) frequency bands. Measured input return loss of the antenna is better than dB at all the frequency bands with reasonable radiation performance. Antenna volume is 30 mm times15 mm times 4.0 mm (1.8 cm) that makes it attractive for modern multiband and multifunctional slim handsets."}
{"_id": "50a0c42c38c670222d73d527f7e4f868651f32be", "title": "An Introduction to Independent Component Analysis: InfoMax and FastICA algorithms", "text": "This paper presents an introduction to independent component analysis (ICA). Unlike principal component analysis, which is based on the assumptions of uncorrelatedness and normality, ICA is rooted in the assumption of statistical independence. Foundations and basic knowledge necessary to understand the technique are provided hereafter. Also included is a short tutorial illustrating the implementation of two ICA algorithms (FastICA and InfoMax) with the use of the Mathematica software."}
{"_id": "736c18830cf9a696800f0e4382256ac1e9ae5bd6", "title": "Simulating Urban Growth Using the SLEUTH Model in a Coastal Peri-Urban District in China", "text": "China\u2019s southeast coastal areas have witnessed rapid growth in the last two decades, owing mostly to their economic and social attractions. In this paper, we chose Jimei, a coastal peri-urban district of Xiamen city on the southeast coast of China, as a study area to explore the district\u2019s growth dynamics, to predict future sprawl during the next decade and to provide a basis for urban planning. The SLEUTH urban growth model was calibrated against historical data derived from a series of Landsat TM 5 satellite images taken between 1992 and 2007. A Lee-Sallee value of 0.48 was calculated for the district, which is a satisfactory result compared with related studies. Five coefficients of urban growth, diffusion, spread, breed, slope resistance and road gravity had values of 25, 68, 86, 24 and 23, respectively, in 2007. The growth coefficients (i.e., urban character) can capture urban growth characteristics in Jimei district. The urban DNA revealed that, over the study period, urban growth in the district was dominated both by urbanization through establishment of new urban centers, and by expansion outward from existing urban centers. In contrast to interior cities, in which expansions are dramatically shaped by actual road patterns, urban expansion in the district was likely constrained by the nearby coastline. Future urban growth patterns were predicted to 2020 assuming three different development scenarios. The first scenario simulated a continuation of historical urban growth without changing current conditions. The second scenario projected managed growth in which OPEN ACCESS Sustainability 2014, 6 3900 urban growth is limited by a layer with areas excluded from urbanization, which is the future development plan for Jimei district and Xiamen city. The third scenario depicted a growth with maximum protection in which growth was allowed to continue, similar to the second scenario, but with lower diffusion and spread coefficients applied to the growth pattern. The third scenario demonstrated that valuable land could be saved, which is the most desirable outcome for Jimei urban development. The study showed that SLEUTH can be an extremely useful tool for coastal city managers to explore the likely outcomes of their city development plans."}
{"_id": "7687a755350038675f0a663e7ff5e7a8f6274bdb", "title": "Reuters Tracer: A Large Scale System of Detecting & Verifying Real-Time News Events from Twitter", "text": "News professionals are facing the challenge of discovering news from more diverse and unreliable information in the age of social media. More and more news events break on social media first and are picked up by news media subsequently. The recent Brussels attack is such an example. At Reuters, a global news agency, we have observed the necessity of providing a more effective tool that can help our journalists to quickly discover news on social media, verify them and then inform the public.\n In this paper, we describe Reuters Tracer, a system for sifting through all noise to detect news events on Twitter and assessing their veracity. We disclose the architecture of our system and discuss the various design strategies that facilitate the implementation of machine learning models for noise filtering and event detection. These techniques have been implemented at large scale and successfully discovered breaking news faster than traditional journalism"}
{"_id": "b6f21b21f775a71f3e330d06922ce87d02ce9e3a", "title": "Facial Landmark Tracking and Facial Expression Recognition", "text": "In this paper, we address the challenging computer vision problem of obtaining a reliable facial expression analysis from a naturally interacting person. We propose a system that combines a 3D generic face model, 3D head tracking, and 2D tracker to track facial landmarks and recognize expressions. First, we extract facial landmarks from a neutral frontal face, and then we deform a 3D generic face to fit the input face. Next, we use our real-time 3D head tracking module to track a person\u2019s head in 3D and predict facial landmark positions in 2D using the projection from the updated 3D face model. Finally, we use tracked 2D landmarks to update the 3D landmarks. This integrated tracking loop enables efficient tracking of the non-rigid parts of a face in the presence of large 3D head motion. We conducted experiments for facial expression recognition using both framebased and sequence-based approaches. Our method provides a 75.9% recognition rate in 8 subjects with 7 key expressions. Our approach provides a considerable step forward toward new applications including human-computer interactions, behavioral science, robotics, and game applications."}
{"_id": "0c940b02f01bd2f219b2a1ad93e990adb78aa531", "title": "Increasing the breakdown capability of superjunction power MOSFETs at the edge of the active region", "text": "When superjunction power MOSFETs operate near the rim of the safe operating area, avalanche breakdown can occur in the transition region between the active cell array and the edge termination. Numerical device simulations confirmed this and revealed local charge imbalances, created by irregularities in the superjunction doping pattern, as major cause. Based on the simulation results, we proposed optimized transitions of the superjunction doping pattern from the active cell array to the edge termination. Numerical device simulations as well as experiments demonstrated the enhanced breakdown capability of these transition regions."}
{"_id": "7924ebaa10d9a2496f86879f7f7addc3cee5eabc", "title": "Implementation of virtual fitting room using image processing", "text": "There has been a great increase in interests towards online shopping. In case of purchase of products like apparels which always require a sense of knowledge on how cloths would fit upon a person. This is the major reason why less number of apparels are being shopped online. Hence, a virtual dressing room which would make people know how cloths personally fits in would be a great luxury for the online sellers which could give a wide choice for customers. For online marketers, this would be a great tool for enhancing its market."}
{"_id": "71255ceae7c78200e9c37675b39c7d3436771e6f", "title": "Multiple hypothesis tracking for multiple target tracking", "text": "Multiple hypothesis tracking (MHT) is generally accepted as the preferred method for solving the data association problem in modern multiple target tracking (MTT) systems. This paper summarizes the motivations for MHT, the basic principles behind MHT and the alternative implementations in common use. It discusses the manner in which the multiple data association hypotheses formed by MHT can be combined with multiple filter models, such as used by the interacting multiple model (IMM) method. An overview of the studies that show the advantages of MHT over the conventional single hypothesis approach is given. Important current applications and areas of future research and development for MHT are discussed."}
{"_id": "84fd16dda54b68f21ea9042e913e46b3d3d85ecf", "title": "A Metric for Performance Evaluation of Multi-Target Tracking Algorithms", "text": "Performance evaluation of multi-target tracking algorithms is of great practical importance in the design, parameter optimization and comparison of tracking systems. The goal of performance evaluation is to measure the distance between two sets of tracks: the ground truth tracks and the set of estimated tracks. This paper proposes a mathematically rigorous metric for this purpose. The basis of the proposed distance measure is the recently formulated consistent metric for performance evaluation of multi-target filters, referred to as the OSPA metric. Multi-target filters sequentially estimate the number of targets and their position in the state space. The OSPA metric is therefore defined on the space of finite sets of vectors. The distinction between filtering and tracking is that tracking algorithms output tracks and a track represents a labeled temporal sequence of state estimates, associated with the same target. The metric proposed in this paper is therefore defined on the space of finite sets of tracks and incorporates the labeling error. Numerical examples demonstrate that the proposed metric behaves in a manner consistent with our expectations."}
{"_id": "22b26297e0cc5df3efdba54a45714e4e27b59e17", "title": "A Consistent Metric for Performance Evaluation of Multi-Object Filters", "text": "The concept of a miss-distance, or error, between a reference quantity and its estimated/controlled value, plays a fundamental role in any filtering/control problem. Yet there is no satisfactory notion of a miss-distance in the well-established field of multi-object filtering. In this paper, we outline the inconsistencies of existing metrics in the context of multi-object miss-distances for performance evaluation. We then propose a new mathematically and intuitively consistent metric that addresses the drawbacks of current multi-object performance evaluation metrics."}
{"_id": "9ab68269bf4164b08cfea158d9d459bf80ea311c", "title": "Performance evaluation of multi-target tracking using the OSPA metric", "text": "Performance evaluation of multi-target tracking algorithms is of great practical importance in the design and comparison of tracking systems. Recently a consistent metric for performance evaluation of multi-object filters (referred to as OSPA metric) has been proposed. In this paper we describe how the OSPA metric can be adapted to evaluate the performance of multi-target tracking algorithms. The main idea is to introduce the track label error into consideration in order to capture the data association performance, in addition to the existing cardinality and localisation errors. The paper demonstrates the proposed method by assessing and comparing the performance of two particle filters for multi-target tracking."}
{"_id": "9fbf7bb9f8bd898cfb2f2164c269518359ef5f18", "title": "Multitarget miss distance via optimal assignment", "text": "The concept of miss distance-Euclidean, Mahalanobis, etc.-is a fundamental, far-reaching, and taken-for-granted element of the engineering theory and practice of single-target systems. In this paper we introduce a comprehensive L/sub p/-type theory of distance metrics for multitarget (and, more generally, multiobject) systems. We show that this theory extends, and provides a rigorous theoretical basis for, an intuitively appealing optimal-assignment approach proposed by Drummond for evaluating the performance of multitarget tracking algorithms. We describe tractable computational approaches for computing such metrics based on standard optimal assignment or convex optimization techniques. We describe the potentially far-reaching implications of these metrics for applications such as performance evaluation and sensor management. In the former case, we demonstrate the application of multitarget miss-distance metrics as measures of effectiveness (MoEs) for multitarget tracking algorithms."}
{"_id": "576296d739f61fcd4f4433f09a91f350a0c9598d", "title": "Solving dynamic vehicle routing problem via evolutionary search with learning capability", "text": "To date, dynamic vehicle routing problem (DVRP) has attracted great research attentions due to its wide range of real world applications. In contrast to traditional static vehicle routing problem, the whole routing information in DVRP is usually unknown and obtained dynamically during the routing execution process. To solve DVRP, many heuristic and metaheuristic methods have been proposed in the literature. In this paper, we present a novel evolutionary search paradigm with learning capability for solving DVRP. In particular, we propose to capture the structured knowledge from optimized routing solution in early time slot, which can be further reused to bias the customer-vehicle assignment when dynamic occurs. By extending our previous research work, the learning of useful knowledge, and the scheduling of dynamic customer requests are detailed here. Further, to evaluate the efficacy of the proposed search paradigm, comprehensive empirical studies on 21 commonly used DVRP instances with diverse properties are also reported."}
{"_id": "94274d44e3cf956e494c452abe8bd845f63e4505", "title": "Rank Correlation for Low-Rate DDoS Attack Detection: An Empirical Evaluation", "text": "A low-rate distributed denial of service (DDoS) attack has the ability to obscure its traffic because it is very similar to legitimate traffic. It can easily evade current detection mechanisms. Rank correlation measures can quantify significant differences between attack traffic and legitimate traffic based on their rank values. In this paper, we use two rank correlation measures, namely, Spearmen Rank Correlation (SRC) and Partial Rank Correlation (PRC) to detect low-rate DDoS attacks. These measures are empirically evaluated using three real-life datasets. Experimental results show that both measures can effectively discriminate legitimate traffic from attack traffic. We find that PRC performs better than SRC in detection of lowrate DDoS attacks in terms of spacing between malicious and legitimate traffic."}
{"_id": "bff3017da3ee11814f7d5ddccab25990218fdf4f", "title": "Nutritional support for wound healing.", "text": "Healing of wounds, whether from accidental injury or surgical intervention, involves the activity of an intricate network of blood cells, tissue types, cytokines, and growth factors. This results in increased cellular activity, which causes an intensified metabolic demand for nutrients. Nutritional deficiencies can impede wound healing, and several nutritional factors required for wound repair may improve healing time and wound outcome. Vitamin A is required for epithelial and bone formation, cellular differentiation, and immune function. Vitamin C is necessary for collagen formation, proper immune function, and as a tissue antioxidant. Vitamin E is the major lipid-soluble antioxidant in the skin; however, the effect of vitamin E on surgical wounds is inconclusive. Bromelain reduces edema, bruising, pain, and healing time following trauma and surgical procedures. Glucosamine appears to be the rate-limiting substrate for hyaluronic acid production in the wound. Adequate dietary protein is absolutely essential for proper wound healing, and tissue levels of the amino acids arginine and glutamine may influence wound repair and immune function. The botanical medicines Centella asiatica and Aloe vera have been used for decades, both topically and internally, to enhance wound repair, and scientific studies are now beginning to validate efficacy and explore mechanisms of action for these botanicals. To promote wound healing in the shortest time possible, with minimal pain, discomfort, and scarring to the patient, it is important to explore nutritional and botanical influences on wound outcome."}
{"_id": "6c417ddbd84a51f66b375812dfde79ee8920d623", "title": "Semantic Object Segmentation in Tagged Videos via Detection", "text": "Semantic object segmentation (SOS) is a challenging task in computer vision that aims to detect and segment all pixels of the objects within predefined semantic categories. In image-based SOS, many supervised models have been proposed and achieved impressive performances due to the rapid advances of well-annotated training images and machine learning theories. However, in video-based SOS it is often difficult to directly train a supervised model since most videos are weakly annotated by tags. To handle such tagged videos, this paper proposes a novel approach that adopts a segmentation-by-detection framework. In this framework, object detection and segment proposals are first generated using the models pre-trained on still images, which provide useful cues to roughly localize the semantic objects. Based on these proposals, we propose an efficient algorithm to initialize object tracks by solving a joint assignment problem. As such tracks provide rough spatiotemporal configurations of the semantic objects, a voting-based refinement algorithm is further proposed to improve their spatiotemporal consistency. Extensive experiments demonstrate that the proposed framework can robustly and effectively segment semantic objects in tagged videos, even when the image-based object detectors provide inaccurate proposals. On various public benchmarks, the proposed approach obtains substantial improvements over the state-of-the-arts."}
{"_id": "d91e5834a5c8ec6ec217c29e16bf990499db3068", "title": "EmailProfiler: Spearphishing Filtering with Header and Stylometric Features of Emails", "text": "Spearphishing is a prominent targeted attack vector in today's Internet. By impersonating trusted email senders through carefully crafted messages and spoofed metadata, adversaries can trick victims into launching attachments containing malicious code or into clicking on malicious links that grant attackers a foothold into otherwise well-protected networks. Spearphishing is effective because it is fundamentally difficult for users to distinguish legitimate emails from spearphishing emails without additional defensive mechanisms. However, such mechanisms, such as cryptographic signatures, have found limited use in practice due to their perceived difficulty of use for normal users. In this paper, we present a novel automated approach to defending users against spearphishing attacks. The approach first builds probabilistic models of both email metadata and stylometric features of email content. Then, subsequent emails are compared to these models to detect characteristic indicators of spearphishing attacks. Several instantiations of this approach are possible, including performing model learning and evaluation solely on the receiving side, or senders publishing models that can be checked remotely by the receiver. Our evaluation of a real data set drawn from 20 email users demonstrates that the approach effectively discriminates spearphishing attacks from legitimate email while providing significant ease-of-use benefits over traditional defenses."}
{"_id": "4bd1b846997a76f3733344499c88638cd934525a", "title": "Distributed Incremental Pattern Matching on Streaming Graphs", "text": "Big data has shifted the computing paradigm of data analysis. While some of the data can be treated as simple texts or independent data records, many other applications have data with structural patterns which are modeled as a graph, such as social media, road network traffic and smart grid, etc. However, there is still limited amount of work has been done to address the velocity problem of graph processing. In this work, we aim to develop a distributed processing system for solving pattern matching queries on streaming graphs where graphs evolve over time upon the arrives of streaming graph update events. To achieve the goal, we proposed an incremental pattern matching algorithm and implemented it on GPS, a vertex centric distributed graph computing framework. We also extended the GPS framework to support streaming graph, and adapted a subgraphcentric data model to further reduce communication overhead and system performance. Our evaluation using real wiki trace shows that our approach achieves a 3x -- 10x speedup over the batch algorithm, and significantly reduces network and memory usage."}
{"_id": "40fe35ad80100a66ce386ecf2b6c1b76c90e2537", "title": "Design of Kernels in Convolutional Neural Networks for Image Classification", "text": "Despite the effectiveness of convolutional neural networks (CNNs) for image classification, our understanding of the effect of shape of convolution kernels on learned representations is limited. In this work, we explore and employ the relationship between shape of kernels which define receptive fields (RFs) in CNNs for learning of feature representations and image classification. For this purpose, we present a feature visualization method for visualization of pixel-wise classification score maps of learned features. Motivated by our experimental results, and observations reported in the literature for modeling of visual systems, we propose a novel design of shape of kernels for learning of representations in CNNs. In the experimental results, the proposed models achieved an outstanding performance in the classification task, comparing to a base CNN models that introduces more parameters and computational time. Additionally, we examined the region of interest (ROI) of different models in the classification task and analyzed the robustness of the proposed method to occluded images. Our results indicate the effectiveness of the proposed approach."}
{"_id": "4ff9d40a76adb7b889600f8045e3233195023933", "title": "Efficient Edge Detection on Low-Cost FPGAs", "text": "Improving the efficiency of edge detection in embedded applications, such as UAV control, is critical for reducing system cost and power dissipation. Field programmable gate arrays (FPGA) are a good platform for making improvements because of their specialised internal structure. However, current FPGA edge detectors do not exploit this structure well. A new edge detection architecture is proposed that is better optimised for FPGAs. The basis of the architecture is the Sobel edge kernels that are shown to be the most suitable because of their separability and absence of multiplications. Edge intensities are calculated with a new 4:2 compressor that consists of two custom-designed 3:2 compressors. Addition speed is increased by breaking carry propagation chains with look-ahead logic. Testing of the design showed it gives a 28% increase in speed and 4.4% reduction in area over previous equivalent designs, which demonstrated that it will lower the cost of edge detection systems, dissipate less power and still maintain highspeed control. Keywords\u2014edge detection; FPGA; compressor; low-cost; UAV"}
{"_id": "15639e3b5dc0c163a06e1432af773b6a2d69baff", "title": "CSI-MIMO: Indoor Wi-Fi fingerprinting system", "text": "Wi-Fi based fingerprinting systems, mostly utilize the Received Signal Strength Indicator (RSSI), which is known to be unreliable due to environmental and hardware effects. In this paper, we present a novel Wi-Fi fingerprinting system, exploiting the fine-grained information known as Channel State Information (CSI). The frequency diversity of CSI can be effectively utilized to represent a location in both frequency and spatial domain resulting in more accurate indoor localization. We propose a novel location signature CSI-MIMO that incorporates Multiple Input Multiple Output (MIMO) information and use both the magnitude and the phase of CSI of each sub-carrier. We experimentally evaluate the performance of CSI-MIMO fingerprinting using the k-nearest neighbor and the Bayes algorithm. The accuracy of the proposed CSI-MIMO is compared with Finegrained Indoor Fingerprinting System (FIFS) and a simple CSI-based system. The experimental result shows an accuracy improvement of 57% over FIFS with an accuracy of 0.95 meters."}
{"_id": "859af6e67aec769c58ec1ea6a971108a60df0b9d", "title": "Structured Perceptron with Inexact Search", "text": "Structured learning with inexact inference is a fundamental problem. We propose variants of the structured perceptron algorithm under a general \u201cviolation-fixing\u201d framework that guarantees convergence. This framework subsumes previous remedies including \u201cearly update\u201d as special cases, and also explains why standard perceptron may fail with inexact search. We also propose new update methods within this framework which learn better models with dramatically reduced training times on state-of-the-art part-of-speech tagging and incremental parsing systems."}
{"_id": "3be7891b11ef50c7275042b72c262061a1114682", "title": "ON RANDOM DEEP AUTOENCODERS: EXACT ASYMP-", "text": "We study the behavior of weight-tied multilayer vanilla autoencoders under the assumption of random weights. Via an exact characterization in the limit of large dimensions, our analysis reveals interesting phase transition phenomena when the depth becomes large. This, in particular, provides quantitative answers and insights to three questions that were yet fully understood in the literature. Firstly, we provide a precise answer on how the random deep weight-tied autoencoder model performs \u201capproximate inference\u201d as posed by Scellier et al. (2018), and its connection to reversibility considered by several theoretical studies. Secondly, we show that deep autoencoders display a higher degree of sensitivity to perturbations in the parameters, distinct from the shallow counterparts. Thirdly, we obtain insights on pitfalls in training initialization practice, and demonstrate experimentally that it is possible to train a deep autoencoder, even with the tanh activation and a depth as large as 200 layers, without resorting to techniques such as layer-wise pre-training or batch normalization. Our analysis is not specific to any depths or any Lipschitz activations, and our analytical techniques may have broader applicability."}
{"_id": "942b67b78f7f3e1c4bdd71a115bf083f7a5417ee", "title": "Review on fraud detection methods in credit card transactions", "text": "Cashless transactions such as online transactions, credit card transactions, and mobile wallet are becoming more popular in financial transactions nowadays. With increased number of such cashless transaction, number of fraudulent transactions are also increasing. Fraud can be distinguished by analyzing spending behavior of customers (users) from previous transaction data. If any deviation is noticed in spending behavior from available patterns, it is possibly of fraudulent transaction. To detect fraud behavior, bank and credit card companies are using various methods of data mining such as decision tree, rule based mining, neural network, fuzzy clustering approach, hidden markov model or hybrid approach of these methods. Any of these methods is applied to find out normal usage pattern of customers (users) based on their past activities. The objective of this paper is to provide comparative study of different techniques to detect fraud."}
{"_id": "f0e7e39e702c73a185abd5215ad18d9d57d3b82f", "title": "Learning partial differential equations via data discovery and sparse optimization.", "text": "We investigate the problem of learning an evolution equation directly from some given data. This work develops a learning algorithm to identify the terms in the underlying partial differential equations and to approximate the coefficients of the terms only using data. The algorithm uses sparse optimization in order to perform feature selection and parameter estimation. The features are data driven in the sense that they are constructed using nonlinear algebraic equations on the spatial derivatives of the data. Several numerical experiments show the proposed method's robustness to data noise and size, its ability to capture the true features of the data, and its capability of performing additional analytics. Examples include shock equations, pattern formation, fluid flow and turbulence, and oscillatory convection."}
{"_id": "7cc4e7427f66326e2432883917334ddea81a2292", "title": "Biomimetic walking robot SCORPION: Control and modeling", "text": "We present the biomimetic control scheme for the walking robot SCORPION. We used a concept of Basic Motion Patterns, which can be combined in a very flexible manner. In addition our modeling and simulation approach is described, which has been done based on the ADAMS(TM) simulator. Especially the motion patterns of real scorpions were analyzed and used for walking patterns and acceleration of the robot."}
{"_id": "83683138d872e34c08374975f1732d830d899e90", "title": "A Machine Learning Approach to Test Data Generation: A Case Study in Evaluation of Gene Finders", "text": "Programs for gene prediction in computational biology are examples of systems for which the acquisition of authentic test data is difficult as these require years of extensive research. This has lead to test methods based on semiartificially produced test data, often produced by ad hoc techniques complemented by statistical models such as Hidden Markov Models (HMM). The quality of such a test method depends on how well the test data reflect the regularities in known data and how well they generalize these regularities. So far only very simplified and generalized, artificial datasets have been tested, and a more thorough statistical foundation is required. We propose to use logic-statistical modelling methods for machine-learning for analyzing existing and manually marked up data, integrated with the generation of new, artificial data. More specifically, we suggest to use the PRISM system developed by Sato and Kameya. Based on logic programming extended with random variables and parameter learning, PRISM appears as a powerful modelling environment, which subsumes HMMs and a wide range of other methods, all embedded in a declarative language. We illustrate these principles here, showing parts of a model under development for genetic sequences and indicate firstinitial experiments producing test data for evaluation of existing gene finders, exemplified by GENSCAN. The advantage of the approach is the relative ease and flexibility with which these probabilistic models can be developed."}
{"_id": "6356a18fe5136c448f14af8d1775ead09d39adac", "title": "Dynamic Transfer Learning for Named Entity Recognition", "text": "State-of-the-art named entity recognition (NER) systems have been improving continuously using neural architectures over the past several years. However, many tasks including NER require large sets of annotated data to achieve such performance. In particular, we focus on NER from clinical notes, which is one of the most fundamental and critical problems for medical text analysis. Our work centers on effectively adapting these neural architectures towards lowresource settings using parameter transfer methods. We complement a standard hierarchical NER model with a general transfer learning framework consisting of parameter sharing between the source and target tasks, and showcase scores significantly above the baseline architecture. These sharing schemes require an exponential search over tied parameter sets to generate an optimal configuration. To mitigate the problem of exhaustively searching for model optimization, we propose the Dynamic Transfer Networks (DTN), a gated architecture which learns the appropriate parameter sharing scheme between source and target datasets. DTN achieves the improvements of the optimized transfer learning framework with just a single training setting, effectively removing the need for exponential search."}
{"_id": "fcd2898fb259afb070d84456cbe2efd0f9a07d02", "title": "Design a single band microstrip patch antenna at 60 GHz millimeter wave for 5G application", "text": "This proposed paper, a single band microstrip patch antenna for 5G wireless application is presented. This proposed antenna is suitable for the millimeter wave frequency. The single band antenna consist of new H slot and E slot loaded on the radiating patch with the 50 ohms microstrip line feeding used. This single band antenna is simulated on a Rogers RT5880 dielectric substrate have relative permittivity 2.2, loss tangent 0.0009, and height 1.6mm. The antenna is simulated by Electromagnetic simulation, computer software technology Microwave studio. The proposed single band antenna and simulated result on return loss, VSWR, surface current and 3D radiation pattern is presented. The simulated antenna shows the return loss \u221240.99dB at 60 GHz millimeter wave 5G wireless application presented."}
{"_id": "347ca233b5658c6992664484e2f00833fbbcafa7", "title": "Data Analytics Using Ontologies of Management Theories: Towards Implementing \"From Theory to Practice\" (Application Paper)", "text": "We explore how computational ontologies can be impactful vis-a\u0300-vis the developing discipline of \"data science.\" We posit an approach wherein management theories are represented as formal axioms, and then applied to draw inferences about data that reside in corporate databases. That is, management theories would be implemented as rules within a data analytics engine. We demonstrate a case study development of such an ontology by formally representing an accounting theory in First-Order Logic. Though quite preliminary, the idea that an information technology, namely ontologies, can potentially actualize the academic cliche\u0301, \"From Theory to Practice,\" and be applicable to the burgeoning domain of data analytics is novel and exciting."}
{"_id": "538cbb9f78824230b9f4733d69ac1cbdb0dca4c6", "title": "The effects of social media on emotions , brand relationship quality , and word of mouth : An empirical study of music festival attendees", "text": "This article focuses on two under-researched areas of tourism management e the management of music festivals, and the influence of social media on customer relationships. In the new digital era of marketing communications little is known about how social media interactions with tourism brands affect how consumers think and feel about those brands, and consequently how those interactions affect desired marketing outcomes such as word mouth. Based on the literature, a conceptual model was developed and was tested using structural equation modeling. The results show that social media does indeed have a significant influence on emotions and attachments to festival brands, and that social media-based relationships lead to desired outcomes such as positive word of mouth. \u00a9 2014 Elsevier Ltd. All rights reserved."}
{"_id": "464e40375248f8b59331c19949594e71e4711812", "title": "Computational Sociolinguistics: A Survey", "text": "Language is a social phenomenon and variation is inherent to its social nature. Recently, there has been a surge of interest within the computational linguistics (CL) community in the social dimension of language. In this article we present a survey of the emerging field of \u201ccomputational sociolinguistics\u201d that reflects this increased interest. We aim to provide a comprehensive overview of CL research on sociolinguistic themes, featuring topics such as the relation between language and social identity, language use in social interaction, and multilingual communication. Moreover, we demonstrate the potential for synergy between the research communities involved, by showing how the large-scale data-driven methods that are widely used in CL can complement existing sociolinguistic studies, and how sociolinguistics can inform and challenge the methods and assumptions used in CL studies. We hope to convey the possible benefits of a closer collaboration between the two communities and conclude with a discussion of open challenges."}
{"_id": "0e8d3f68c0a0eb9dab241c63aab319dbf596e697", "title": "Probabilistic Latent Semantic Analysis", "text": "Probabilistic Latent Semantic Analysis is a novel statistical technique for the analysis of two-mode and co-occurrence data, which has applications in information retrieval and filtering, natural language processing, ma\u00ad chine learning from text, and in related ar\u00ad eas. Compared to standard Latent Semantic Analysis which stems from linear algebra and performs a Singular Value Decomposition of co-occurrence tables, the proposed method is based on a mixture decomposition derived from a latent class model. This results. in a more principled approach which has a solid foundation in statistics. In order to avoid overfitting, we propose a widely applicable generalization of maximum likelihood model fitting by tempered EM. Our approach yields substantial and consistent improvements over Latent Semantic Analysis in a number of ex\u00ad periments."}
{"_id": "1ef4aac0ebc34e76123f848c256840d89ff728d0", "title": "Rapid Synthesis of Massive Face Sets for Improved Face Recognition", "text": "Recent work demonstrated that computer graphics techniques can be used to improve face recognition performances by synthesizing multiple new views of faces available in existing face collections. By so doing, more images and more appearance variations are available for training, thereby improving the deep models trained on these images. Similar rendering techniques were also applied at test time to align faces in 3D and reduce appearance variations when comparing faces. These previous results, however, did not consider the computational cost of rendering: At training, rendering millions of face images can be prohibitive; at test time, rendering can quickly become a bottleneck, particularly when multiple images represent a subject. This paper builds on a number of observations which, under certain circumstances, allow rendering new 3D views of faces at a computational cost which is equivalent to simple 2D image warping. We demonstrate this by showing that the run-time of an optimized OpenGL rendering engine is slower than the simple Python implementation we designed for the same purpose. The proposed rendering is used in a face recognition pipeline and tested on the challenging IJB-A and Janus CS2 benchmarks. Our results show that our rendering is not only fast, but improves recognition accuracy."}
{"_id": "aa344c313e2e2820ecfe8b6edfc24eda34e0a120", "title": "One-Class Convolutional Neural Network", "text": "We present a novel convolutional neural network (CNN) based approach for one-class classification. The idea is to use a zero centered Gaussian noise in the latent space as the pseudo-negative class and train the network using the cross-entropy loss to learn a good representation as well as the decision boundary for the given class. A key feature of the proposed approach is that any pre-trained CNN can be used as the base network for one-class classification. The proposed one-class CNN is evaluated on the UMDAA-02 Face, Abnormality-1001, and FounderType-200 datasets. These datasets are related to a variety of one-class application problems such as user authentication, abnormality detection, and novelty detection. Extensive experiments demonstrate that the proposed method achieves significant improvements over the recent state-of-the-art methods. The source code is available at: github.com/otkupjnoz/oc-cnn."}
{"_id": "aa8a88177cf44f51712518f295c5cb36def65d5d", "title": "Unsupervised Training Set Generation for Automatic Acquisition of Technical Terminology in Patents", "text": "NLP methods for automatic information access to rich technological knowledge sources like patents are of great value. One important resource for accessing this knowledge is the technical terminology of the patent domain. In this paper, we address the problem of automatic terminology acquisition (ATA), i.e., the problem of automatically identifying all technical terms in a document. We analyze technical terminology in patents and define the concept of technical term based on the analysis. We present a novel method for labeling large amounts of high-quality training data for ATA in an unsupervised fashion. We train two ATA methods on this training data, a term candidate classifier and a conditional random field (CRF), and investigate the utility of different types of features. Finally, we show that our method of automatically generating training data is effective and the two ATA methods successfully generalize, considerably increasing recall while preserving high precision relative to a state-of-the-art baseline."}
{"_id": "0d8f77b74460f5abcb8e7a885b677a000a2656be", "title": "Habanero-Java library: a Java 8 framework for multicore programming", "text": "With the advent of the multicore era, it is clear that future growth in application performance will primarily come from increased parallelism. We believe parallelism should be introduced early into the Computer Science curriculum to educate students on the fundamentals of parallel computation. In this paper, we introduce the newly-created Habanero-Java library (HJlib), a pure Java 8 library implementation of the pedagogic parallel programming model [12]. HJlib has been used in teaching a sophomore-level course titled \"Fundamentals of Parallel Programming\" at Rice University.\n HJlib adds to the Java ecosystem a powerful and portable task parallel programming model that can be used to parallelize both regular and irregular applications. By relying on simple orthogonal parallel constructs with important safety properties, HJlib allows programmers with a basic knowledge of Java to get started with parallel programming concepts by writing or refactoring applications to harness the power of multicore architecture. The HJlib APIs make extensive use of lambda expressions and can run on any Java 8 JVM. HJlib runtime feedback capabilities, such as the abstract execution metrics and the deadlock detector, help the programmer to obtain feedback on theoretical performance as well as the presence of potential bugs in their program.\n Being an implementation of a pedagogic programming model, HJlib is also an attractive tool for both educators and researchers. HJlib is actively being used in multiple research projects at Rice and also by external independent collaborators. These projects include exploitation of homogeneous and heterogeneous multicore parallelism in big data applications written for the Hadoop platform [20, 43]."}
{"_id": "3d985a05e4a49be71d497e7a2ff3fcbeb74c4bc8", "title": "A lightweight infrastructure for graph analytics", "text": "Several domain-specific languages (DSLs) for parallel graph analytics have been proposed recently. In this paper, we argue that existing DSLs can be implemented on top of a general-purpose infrastructure that (i) supports very fine-grain tasks, (ii) implements autonomous, speculative execution of these tasks, and (iii) allows application-specific control of task scheduling policies. To support this claim, we describe such an implementation called the Galois system.\n We demonstrate the capabilities of this infrastructure in three ways. First, we implement more sophisticated algorithms for some of the graph analytics problems tackled by previous DSLs and show that end-to-end performance can be improved by orders of magnitude even on power-law graphs, thanks to the better algorithms facilitated by a more general programming model. Second, we show that, even when an algorithm can be expressed in existing DSLs, the implementation of that algorithm in the more general system can be orders of magnitude faster when the input graphs are road networks and similar graphs with high diameter, thanks to more sophisticated scheduling. Third, we implement the APIs of three existing graph DSLs on top of the common infrastructure in a few hundred lines of code and show that even for power-law graphs, the performance of the resulting implementations often exceeds that of the original DSL systems, thanks to the lightweight infrastructure."}
{"_id": "b78c04c7f29ddaeaeb208d4eae684ffccd71e04f", "title": "A Bridging Model for Parallel Computation", "text": "The success of the von Neumann model of sequential computation is attributable to the fact that it is an efficient bridge between software and hardware: high-level languages can be efficiently compiled on to this model; yet it can be effeciently implemented in hardware. The author argues that an analogous bridge between software and hardware in required for parallel computation if that is to become as widely used. This article introduces the bulk-synchronous parallel (BSP) model as a candidate for this role, and gives results quantifying its efficiency both in implementing high-level language features and algorithms, as well as in being implemented in hardware."}
{"_id": "c7e4ca76c31ffe2a63ea3d1fc33b9a39824cb90a", "title": "Comparison of Two \u03b2-Alanine Dosing Protocols on Muscle Carnosine Elevations.", "text": "OBJECTIVE\n\u03b2-alanine (BA) is a nonproteogenic amino acid that combines with histidine to form carnosine. The amount taken orally in individual doses, however, is limited due to symptoms of paresthesia that are associated with higher doses. The use of a sustained-release formulation has been reported to reduce the symptoms of paresthesia, suggesting that a greater daily dose may be possible. The purpose of the present study was to determine whether increasing the daily dose of BA can result in a similar increase in muscle carnosine in a reduced time.\n\n\nMETHODS\nEighteen men and twelve women were randomized into either a placebo (PLC), 6-g BA (6G), or 12-g BA (12G) groups. PLC and 6G were supplemented for 4\u00a0weeks, while 12G was supplemented for 2\u00a0weeks. A resting blood draw and muscle biopsy were obtained prior to (PRE) and following (POST) supplementation. Plasma and muscle metabolites were measured by high-performance liquid chromatography. The loss in peak torque (\u0394PT) was calculated from maximal isometric contractions before and after 250 isokinetic kicks at 180\u00b0\u00b7sec-1 PRE and POST.\n\n\nRESULTS\nBoth 12G (p = 0.026) and 6G (p = 0.004) increased muscle carnosine compared to PLC. Plasma histidine was decreased from PRE to POST in 12G compared to PLC (p = 0.002) and 6G (p = 0.001), but no group x time interaction (p = 0.662) was observed for muscle histidine. No differences were observed for any hematological measure (e.g., complete blood counts) or in symptoms of paresthesia among the groups. Although no interaction was noted in \u0394PT, a trend (p = 0.073) was observed.\n\n\nCONCLUSION\nResults of this investigation indicate that a BA supplementation protocol of 12\u00a0g/d-1, using a sustained-release formulation, can accelerate the increase in carnosine content in skeletal muscle while attenuating paresthesia."}
{"_id": "baff72ec4a6af8a5e47407ec11ce34ab7853b67e", "title": "Meta-awareness, perceptual decoupling and the wandering mind", "text": "Mind wandering (i.e. engaging in cognitions unrelated to the current demands of the external environment) reflects the cyclic activity of two core processes: the capacity to disengage attention from perception (known as perceptual decoupling) and the ability to take explicit note of the current contents of consciousness (known as meta-awareness). Research on perceptual decoupling demonstrates that mental events that arise without any external precedent (known as stimulus independent thoughts) often interfere with the online processing of sensory information. Findings regarding meta-awareness reveal that the mind is only intermittently aware of engaging in mind wandering. These basic aspects of mind wandering are considered with respect to the activity of the default network, the role of executive processes, the contributions of meta-awareness and the functionality of mind wandering."}
{"_id": "b94734cae410afda30da9c6b168b17a8cdd0267a", "title": "Yoga as an adjunctive treatment for posttraumatic stress disorder: a randomized controlled trial.", "text": "BACKGROUND\nMore than a third of the approximately 10 million women with histories of interpersonal violence in the United States develop posttraumatic stress disorder (PTSD). Currently available treatments for this population have a high rate of incomplete response, in part because problems in affect and impulse regulation are major obstacles to resolving PTSD. This study explored the efficacy of yoga to increase affect tolerance and to decrease PTSD symptomatology.\n\n\nMETHOD\nSixty-four women with chronic, treatment-resistant PTSD were randomly assigned to either trauma-informed yoga or supportive women's health education, each as a weekly 1-hour class for 10 weeks. Assessments were conducted at pretreatment, midtreatment, and posttreatment and included measures of DSM-IV PTSD, affect regulation, and depression. The study ran from 2008 through 2011.\n\n\nRESULTS\nThe primary outcome measure was the Clinician-Administered PTSD Scale (CAPS). At the end of the study, 16 of 31 participants (52%) in the yoga group no longer met criteria for PTSD compared to 6 of 29 (21%) in the control group (n = 60, \u03c7\u00b2\u2081 = 6.17, P = .013). Both groups exhibited significant decreases on the CAPS, with the decrease falling in the large effect size range for the yoga group (d = 1.07) and the medium to large effect size decrease for the control group (d = 0.66). Both the yoga (b = -9.21, t = -2.34, P = .02, d = -0.37) and control (b = -22.12, t = -3.39, P = .001, d = -0.54) groups exhibited significant decreases from pretreatment to the midtreatment assessment. However, a significant group \u00d7 quadratic trend interaction (d = -0.34) showed that the pattern of change in Davidson Trauma Scale significantly differed across groups. The yoga group exhibited a significant medium effect size linear (d = -0.52) trend. In contrast, the control group exhibited only a significant medium effect size quadratic trend (d = 0.46) but did not exhibit a significant linear trend (d = -0.29). Thus, both groups exhibited significant decreases in PTSD symptoms during the first half of treatment, but these improvements were maintained in the yoga group, while the control group relapsed after its initial improvement.\n\n\nDISCUSSION\nYoga significantly reduced PTSD symptomatology, with effect sizes comparable to well-researched psychotherapeutic and psychopharmacologic approaches. Yoga may improve the functioning of traumatized individuals by helping them to tolerate physical and sensory experiences associated with fear and helplessness and to increase emotional awareness and affect tolerance.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov identifier: NCT00839813."}
{"_id": "400078e734d068b8986136ca9e3bbaa01c2bd52b", "title": "ZOZZLE: Fast and Precise In-Browser JavaScript Malware Detection", "text": "JavaScript malware-based attacks account for a large fraction of successful mass-scale exploitation happening today. Attackers like JavaScript-based attacks because they can be mounted against an unsuspecting user visiting a seemingly innocent web page. While several techniques for addressing these types of exploits have been proposed, in-browser adoption has been slow, in part because of the performance overhead these methods incur. In this paper, we propose ZOZZLE, a low-overhead solution for detecting and preventing JavaScript malware that is fast enough to be deployed in the browser. Our approach uses Bayesian classification of hierarchical features of the JavaScript abstract syntax tree to identify syntax elements that are highly predictive of malware. Our experimental evaluation shows that ZOZZLE is able to detect JavaScript malware through mostly static code analysis effectively. ZOZZLE has an extremely low false positive rate of 0.0003%, which is less than one in a quarter million. Despite this high accuracy, the ZOZZLE classifier is fast, with a throughput of over one megabyte of JavaScript code per second."}
{"_id": "5f36dd5c4963b775b9289475ae1833bf9c753b86", "title": "Novel Word-sense Identification", "text": "Automatic lexical acquisition has been an active area of research in computational linguistics for over two decades, but the automatic identification of new word-senses has received attention only very recently. Previous work on this topic has been limited by the availability of appropriate evaluation resources. In this paper we present the largest corpus-based dataset of diachronic sense differences to date, which we believe will encourage further work in this area. We then describe several extensions to a state-of-the-art topic modelling approach for identifying new word-senses. This adapted method shows superior performance on our dataset of two different corpus pairs to that of the original method for both: (a) types having taken on a novel sense over time; and (b) the token instances of such novel senses."}
{"_id": "77e15c790722323b505efe4ab0855ab5b16482f9", "title": "Remaining Useful Life Estimation of Critical Components With Application to Bearings", "text": "Prognostics activity deals with the estimation of the Remaining Useful Life (RUL) of physical systems based on their current health state and their future operating conditions. RUL estimation can be done by using two main approaches, namely model-based and data-driven approaches. The first approach is based on the utilization of physics of failure models of the degradation, while the second approach is based on the transformation of the data provided by the sensors into models that represent the behavior of the degradation. This paper deals with a data-driven prognostics method, where the RUL of the physical system is assessed depending on its critical component. Once the critical component is identified, and the appropriate sensors installed, the data provided by these sensors are exploited to model the degradation's behavior. For this purpose, Mixture of Gaussians Hidden Markov Models (MoG-HMMs), represented by Dynamic Bayesian Networks (DBNs), are used as a modeling tool. MoG-HMMs allow us to represent the evolution of the component's health condition by hidden states by using temporal or frequency features extracted from the raw signals provided by the sensors. The prognostics process is then done in two phases: a learning phase to generate the behavior model, and an exploitation phase to estimate the current health state and calculate the RUL. Furthermore, the performance of the proposed method is verified by implementing prognostics performance metrics, such as accuracy, precision, and prediction horizon. Finally, the proposed method is applied to real data corresponding to the accelerated life of bearings, and experimental results are discussed."}
{"_id": "1d5a7c0bd3b6c445127c6861cea6c69f2291d9d8", "title": "Can you infect me now?: malware propagation in mobile phone networks", "text": "In this paper we evaluate the effects of malware propagating usingcommunication services in mobile phone networks. Although self-propagating malware is well understood in the Internet, mobile phone networks have very different characteristics in terms of topologies, services, provisioning and capacity, devices, and communication patterns. To investigate malware in this new environment, we have developed an event-driver simulator that captures the characteristics and constraints of mobile phone networks. In particular, the simulator models realistic topologies and provisioned capacities of the network infrastructure, as well as the contactgraphs determined by cell phone address books. We evaluate the speedand severity of random contact worms in mobile phone networks, characterize the denial-of-service effects such worms could have on the network, investigate approaches to accelerate malware propagation, and discuss the implications of defending networks against such attacks."}
{"_id": "48e7bf0b315186f0a852030ccf2c53b09b89d6cc", "title": "THE EXPOSURE OF YOUTH TO UNWANTED SEXUAL MATERIAL ON THE INTERNET A National Survey of Risk , Impact , and Prevention", "text": "This national survey of youth, ages 10 to 17, and their caretakers has several implications for the current debate about young people and Internet pornography. Twentyfive percent of youth had unwanted exposure to sexual pictures on the Internet in the past year, challenging the prevalent assumption that the problem is primarily about young people motivated to actively seek out pornography. Most youth had no negative reactions to their unwanted exposure, but one quarter said they were very or extremely upset, suggesting a priority need for more research on and interventions directed toward such negative effects. The use of filtering and blocking software was associated with a modest reduction in unwanted exposure, suggesting that it may help but is far from foolproof. Various forms of parental supervision were not associated with any reduction in exposure. The authors urge that social scientific research be undertaken to inform this highly contentious public policy controversy."}
{"_id": "f411d15fa899df64c1e34d6c1623d10009abee73", "title": "IT Security Governance: A Framework based on ISO 38500", "text": "ISO 38500 is an international standard for IT governance. The guidelines of ISO 38500 can also be applied at the IT security functional level in order to guide the governance of IT security. This paper proposes the use of a strategic information security management (ISM) framework to implement guidelines of ISO 38500. This approach provides several strategic advantages to the organization by 1) aligning IT security initiatives to business strategy; 2) providing a mechanism for establishing and tracking security metrics; and 3) enhancing the overall maturity of business, IT and IT security processes. The framework also leverages tools such as COBIT, the Balanced Scorecard and SSE-CMM in order to implement IT security governance and continuous improvement practices. Using extant literature, this paper identifies certain challenges and solutions with respect to the governance of IT security. For practitioners, it highlights relevant links between principles of ISO 38500 and IT governance, provides an over-arching contextual framework to drive IT security governance, and demonstrates mitigation solutions for IT security governance challenges. For academics, the paper makes theoretical contributions, by relating IT security governance to business strategy and proposing that firms develop dynamic governance capabilities (Pavlou and El Sawy, 2010) or organizational learning ladders (Ciborra and Andreu, 2010)."}
{"_id": "d35b40340094e15a71502cc5971d61d26639719f", "title": "Hierarchical Tree Long Short-Term Memory for Sentence Representations", "text": "A fixed-length feature vector is required for many machine learning algorithms in NLP field. Word embeddings have been very successful at learning lexical information. However, they can\u2019t capture the compositional meaning of sentences, which prevents them from a deeper understanding of language. In this paper, we introduce a novel hierarchical tree long short-term memory (HTLSTM) model that learns vector representations for sentences of arbitrary syntactic type and length. We propose to split one sentence into three hierarchies: short phrase, long phrase and full sentence level. The HTLSTM model gives our algorithm the potential to fully consider the hierarchical information and longterm dependencies of language. We design the experiments on both English and Chinese corpus to evaluate our model on sentiment analysis task. And the results show that our model outperforms several existing state of the art approaches significantly."}
{"_id": "0fa1911622a6c0a3dd43fefbdf2695ebdb7e10fa", "title": "1 Basic model for Speech-to-Speech Translation", "text": "This paper reviews the technology used in Speech-to-Speech Translation that is the phrases spoken in one language are immediately spoken in another language by the device. Speech-to-Speech Translation is a three step software process which includes Automatic speech Recognition, Machine Translation and voice synthesis. This paper includes the major speech translation projects using different approaches for speech recognition, translation and text to speech synthesis highlighting the major pros and cons for the approach being used."}
{"_id": "485f71a007a87c3d8df8e7d22dbc7ba386b7122d", "title": "Hyperbolic Plykin Attractor Can Exist in Neuron Models", "text": "Hyperbolic Plykin attractor can exist in neuron models Strange hyperbolic attractors are hard to find in real physical systems. This paper provides the first example of a realistic system, a canonical three-dimensional (3D) model of bursting neurons, that is likely to have a strange hyperbolic attractor. Using a geometrical approach to the study of the neuron model, we derive a flow-defined Poincare map giving ail accurate account of the system's dynamics. In a parameter region where the neuron system undergoes bifurcations causing transitions between tonic spiking and bursting, this two-dimensional map becomes a map of a disk with several periodic holes. A particular case is the map of a disk with three holes, matching the Plykin example of a planar hyperbolic attractor. The corresponding attractor of the 3D neuron model appears to be hyperbolic (this property is not verified in the present paper) and arises as a result of a two-loop (secondary) homoclinic bifurcation of a saddle. This type of bifurcation, and the complex behavior it can produce, have not been previously examined."}
{"_id": "0656a4fa7b3636ca5122f83510c25f93a21e03c0", "title": "Challenges and Best Practices for Mobile Application Development: Review Paper", "text": "Over the last ten years or so, mobile devices technology has changed significantly, with these devices and operating systems becoming more sophisticated. These developments have led to a huge variety of mobile applications designed for mobile operating systems. These mobile applications are typically harder to design and build because of several factors such as screen size and limited processing power and so forth. Therefore, it is important to clearly identify the characteristics of mobile application development and the issues and challenges related to it, as well as, the key features that characterize a great mobile application which make them valuable and useful. This paper has reviewed existing literature of the challenge and best practices of mobile application development. This study contributes towards a great understanding of the characteristics of mobile application development process, examines real challenges faced and explores the best practices that can be effectively applied to improve the development of mobile application."}
{"_id": "c183fd56d618bc60938c83a100c2f506b7b71683", "title": "Friend or foe: The effect of implicit trustworthiness judgments in social decision-making", "text": "The human face appears to play a key role in signaling social intentions and usually people form reliable and strong impressions on the basis of someone's facial appearance. Therefore, facial signals could have a substantial influence on how people evaluate and behave towards another person in a social interaction, such as an interactive risky decision-making game. Indeed, there is a growing body of evidence that demonstrates that social behavior plays a crucial role in human decision-making. Although previous research has demonstrated that explicit social information about one's partner can influence decision-making behavior, such as knowledge about the partner's moral status, much less is known about how implicit facial social cues affect strategic decision-making. One particular social cue that may be especially important in assessing how to interact with a partner is facial trustworthiness, a rapid, implicit assessment of the likelihood that the partner will reciprocate a generous gesture. In this experiment, we tested the hypothesis that implicit processing of trustworthiness is related to the degree to which participants cooperate with previously unknown partners. Participants played a Trust Game with 79 hypothetical partners who were previously rated on subjective trustworthiness. In each game, participants made a decision about how much to trust their partner, as measured by how much money they invested with that partner, with no guarantee of return. As predicted, people invested more money in partners who were subjectively rated as more trustworthy, despite no objective relationship between these factors. Moreover, the relationship between the amount of money offered seemed to be stronger for trustworthy faces as compared to untrustworthy faces. Overall, these data indicate that the perceived trustworthiness is a strong and important social cue that influences decision-making."}
{"_id": "013dfaac7508a46d6781cb58e7be0ddad2920a23", "title": "Assessing automotive functional safety microprocessor with ISO 26262 hardware requirements", "text": "This paper provides a step-by-step guideline for the assessment of an automotive safety microprocessor with ISO 26262 hardware requirements. ISO 26262 part 5 - Product development at the hardware level - specifies the safety activities during the phase of the automotive hardware development. In this phase, hardware safety design is derived (from the results of ISO 26262 part 3 and 4), implemented, integrated, and tested. To prove the compliance with ISO 26262 hardware development process, quantitative evaluations on the hardware are indispensable. These quantitative evaluations are known as hardware architecture metrics and probabilistic hardware metrics. The assessment results qualify a design with an automotive safety integrity level (ASIL) which ranges from ASIL-A (lowest) to ASIL-D (highest). In this paper, we implemented an exemplary safety microprocessor to demonstrate the ISO 26262 hardware assessment process. The derivation procedures of the ASIL level from the hardware architecture metrics and probabilistic hardware metrics are fully discussed. Based on the evaluation results, we also provide design suggestions for the ISO 26262 safety hardware design."}
{"_id": "b09ebdcd143588efdf995da0d6e16f9127c2da62", "title": "Video retargeting: automating pan and scan", "text": "When a video is displayed on a smaller display than originally intended, some of the information in the video is necessarily lost. In this paper, we introduce Video Retargeting that adapts video to better suit the target display, minimizing the important information lost. We define a framework that measures the preservation of the source material, and methods for estimating the important information in the video. Video retargeting crops each frame and scales it to fit the target display. An optimization process minimizes information loss by balancing the loss of detail due to scaling with the loss of content and composition due to cropping. The cropping window can be moved during a shot to introduce virtual pans and cuts, subject to constraints that ensure cinematic plausibility. We demonstrate results of adapting a variety of source videos to small display sizes."}
{"_id": "434b553f9d0176871cc52a9767904f74e50ae9df", "title": "Tutorial on agent-based modeling and simulation", "text": "Agent-based modeling and simulation (ABMS) is a new approach to modeling systems comprised of autonomous, interacting agents. ABMS promises to have far-reaching effects on the way that businesses use computers to support decision-making and researchers use electronic laboratories to support their research. Some have gone so far as to contend that ABMS is a third way of doing science besides deductive and inductive reasoning. Computational advances have made possible a growing number of agent-based applications in a variety of fields. Applications range from modeling agent behavior in the stock market and supply chains, to predicting the spread of epidemics and the threat of bio-warfare, from modeling consumer behavior to understanding the fall of ancient civilizations, to name a few. This tutorial describes the theoretical and practical foundations of ABMS, identifies toolkits and methods for developing ABMS models, and provides some thoughts on the relationship between ABMS and traditional modeling techniques."}
{"_id": "6915e21f2bdfe41bdc719a3080db900524603666", "title": "Clickstream clustering using weighted longest common subsequences", "text": "Categorizing visitors based on their interactions with a we bsite is a key problem in web usage mining. The clickstreams generated by various users often follow disti nct patterns, the knowledge of which may help in providing customized content. In this paper, we propose a novel and eff ective algorithm for clustering webusers based on a function of the longest common subsequence of their clickst reams that takes into account both the trajectory taken through a website and the time spent at each page. Results are presented on weblogs of www.sulekha.com to illustrate the techniques. keywords : web usage mining, clickstream, subsequence, clustering."}
{"_id": "c28ae286f0324f78905d35430b7b4bb9de7ff046", "title": "Multimodal language grounding for improved human-robot collaboration: exploring spatial semantic representations in the shared space of attention", "text": "There is an increased interest in artificially intelligent technology that surrounds us and takes decisions on our behalf. This creates the need for such technology to be able to communicate with humans and understand natural language and non-verbal behaviour that may carry information about our complex physical world. Artificial agents today still have little knowledge about the physical space that surrounds us and about the objects or concepts within our attention. We are still lacking computational methods in understanding the context of human conversation that involves objects and locations around us. Can we use multimodal cues from human perception of the real world as an example of language learning for robots? Can artificial agents and robots learn about the physical world by observing how humans interact with it and how they refer to it and attend during their conversations? This PhD project\u2019s focus is on combining spoken language and non-verbal behaviour extracted by multi-party dialogue in order to increase context awareness and spatial understanding for artificial agents."}
{"_id": "c4bf2e73271f17b6b3e6b197b3447550a087244d", "title": "Predicting football match results with logistic regression", "text": "Many efforts has been made in order to predict football matches result and selecting significant variables in football. Prediction is very useful in helping managers and clubs make the right decision to win leagues and tournaments. In this paper a logistic regression model is built to predict matches results of Barclays' Premier League season 2015/2016 for home win or away win and to determine what are the significant variable to win matches. Our work is different from others as we use only significant variables gathered from researches in the same field rather than making our own guess on what are the significant variables. We also used data gathered from video game FIFA, as Shin and Gasparyan [8] showed us that including data from the video game could improve prediction quality. The model was built using variations of training data from 2010/2011 season until 2015/2016. Logistic regression is a classification method which can be used to predict sports results and it can gives additional knowledge through regression coefficients. The variables used are \u201cHome Offense\u201d, \u201cHome Defense\u201d, \u201cAway Offense\u201d, and \u201cAway Defense\u201d. We conducted experiments by altering seasons of training data used. Prediction accuracy of built model is 69.5%. We found our work improved significantly compared to the one Snyder [9] did. We concluded that the significant variables are \u201cHome Defense\u201d, and \u201cAway Defense\u201d."}
{"_id": "a6fdb6332d680390e5112adda8889488c466504e", "title": "An Analysis on a Dynamic Amplifier and Calibration Methods for a Pseudo-Differential Dynamic Comparator", "text": "This paper analyzes a pseudo-differential dynamic comparator with a dynamic pre-amplifier. The transient gain of a dynamic preamplifier is derived and applied to equations of the thermal noise and the regeneration time of a comparator. This analysis enhances understanding of the roles of transistor\u2019s parameters in pre-amplifier\u2019s gain. Based on the calculated gain, two calibration methods are also analyzed. One is calibration of a load capacitance and the other is calibration of a bypass current. The analysis helps designers\u2019 estimation for the accuracy of calibration, dead-zone of a comparator with a calibration circuit, and the influence of PVT variation. The analyzed comparator uses 90-nm CMOS technology as an example and each estimation is compared with simulation results. key words: dynamic amplifier, dynamic comparator, load capacitance, bypass current, calibration, and PVT variation"}
{"_id": "ec0576074d8605fc06a1d56ed53ec3b204868535", "title": "Development of a Wheelchair Skills Home Program for Older Adults Using a Participatory Action Design Approach", "text": "Restricted mobility is the most common impairment among older adults and a manual wheelchair is often prescribed to address these limitations. However, limited access to rehabilitation services results in older adults typically receiving little or no mobility training when they receive a wheelchair. As an alternative and novel approach, we developed a therapist-monitored wheelchair skills home training program delivered via a computer tablet. To optimize efficacy and adherence, principles of self-efficacy and adult learning theory were foundational in the program design. A participatory action design approach was used to engage older adult wheelchair users, care providers, and prescribing clinicians in an iterative design and development process. A series of prototypes were fabricated and revised, based on feedback from eight stakeholder focus groups, until a final version was ready for evaluation in a clinical trial. Stakeholder contributions affirmed and enhanced the foundational theoretical principles and provided validation of the final product for the target population."}
{"_id": "4e7ea58c2153f31f56cb98e90d5a5f0cee5868b4", "title": "Mediated parent-child contact in work-separated families", "text": "Parents and children in families living with regular separation due to work develop strategies to manage being apart. We interviewed 14 pairs of parents and children (ages 7 -- 13) from work-separated families to understand their experiences and the strategies that they use to keep their family together. Parents focus on combining scheduled synchronous and spontaneous asynchronous communication to maintain a constant presence in the life of the child. Children, on the other hand, focus on other sources of support, on other activities, and on the eventual reunion. Both the remote parent and the child rely heavily on a collocated adult to maintain aware-ness and contact. We compare work-separated families with other types of separation and highlight opportunities for new designs."}
{"_id": "10af1dfe7f080d1756011e0321bc46e58a672522", "title": "L\u00e9vy flights, non-local search and simulated annealing", "text": "We solve a problem of non-convex stochastic optimisation with help of simulated annealing of L\u00e9vy flights of a variable stability index. The search of the ground state of an unknown potential is non-local due to big jumps of the Levy flights process. The convergence to the ground state is fast due to a polynomial decrease rate of the temperature."}
{"_id": "3220f54aa9078eeae189b54253319de6502f8faf", "title": "Cuckoo Search via L\u00e9vy flights", "text": "In this paper, we intend to formulate a new meta-heuristic algorithm, called Cuckoo Search (CS), for solving optimization problems. This algorithm is based on the obligate brood parasitic behaviour of some cuckoo species in combination with the L\u00e9vy flight behaviour of some birds and fruit flies. We validate the proposed algorithm against test functions and then compare its performance with those of genetic algorithms and particle swarm optimization. Finally, we discuss the implication of the results and suggestion for further research."}
{"_id": "91de962e115bcf65eaf8579471a818ba8c5b0ea6", "title": "Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems", "text": "In this study, a new metaheuristic optimization algorithm, called cuckoo search (CS), is introduced for solving structural optimization tasks. The new CS algorithm in combination with L\u00e9vy flights is first verified using a benchmark nonlinear constrained optimization problem. For the validation against structural engineering optimization problems, CS is subsequently applied to 13 design problems reported in the specialized literature. The performance of the CS algorithm is further compared with various algorithms representative of the state of the art in the area. The optimal solutions obtained by CS are mostly far better than the best solutions obtained by the existing methods. The unique search features used in CS and the implications for future research are finally discussed in detail."}
{"_id": "a57139fb209ef2f6afed08f66b24212404e42ed6", "title": "Visual Concepts and Compositional Voting", "text": "It is very attractive to formulate vision in terms of pattern theory [26], where patterns are defined hierarchically by compositions of elementary building blocks. But applying pattern theory to real world images is currently less successful than discriminative methods such as deep networks. Deep networks, however, are black-boxes which are hard to interpret and can easily be fooled by adding occluding objects. It is natural to wonder whether by better understanding deep networks we can extract building blocks which can be used to develop pattern theoretic models. This motivates us to study the internal representations of a deep network using vehicle images from the PASCAL3D+ dataset. We use clustering algorithms to study the population activities of the features and extract a set of visual concepts which we show are visually tight and correspond to semantic parts of vehicles. To analyze this we annotate these vehicles by their semantic parts to create a new dataset, VehicleSemanticParts, and evaluate visual concepts as unsupervised part detectors. We show that visual concepts perform fairly well but are outperformed by supervised discriminative methods such as Support Vector Machines (SVM). We next give a more detailed analysis of visual concepts and how they relate to semantic parts. Following this, we use the visual concepts as building blocks for a simple pattern theoretical model, which we call compositional voting. In this model several visual concepts combine to detect semantic parts. We show that this approach is significantly better than discriminative methods like SVM and deep networks trained specifically for semantic part detection. Finally, we return to studying occlusion by creating an annotated dataset with occlusion, called VehicleOcclusion, and show that compositional voting outperforms even deep networks when the amount of occlusion becomes large."}
{"_id": "dec4696ce7122669e4be281b4f2bef9d6791a6c9", "title": "Universal coding, information, prediction, and estimation", "text": "A connection between universal codes and the problems of prediction and statistical estimation is established. A\u2018known lower bound for the mean length of universal codes is sharpened and generalized, and optimum universal codes constructed. The bound is defined to give the information in strings relative to the considered class of processes. The earlier derived minimum description length criterion for estimation of parameters, including their number, is given a fundamental information, theoretic justification by showing that its estimators achieve the information in the strings. It is also shown that one cannot do prediction in Manuscript received July 13,1983; revised January 16,1984. This work was presented in part at the IEEE International Symposium on Information Theory, St. Jovite, Canada, September 26-30, 1983. This work was done while the author was Visiting Professor at the Department of System Science, University of California, Los Angeles, while on leave from the IBM Research Laboratory, San Jose, CA 95193. Gaussian autoregressive moving average (ARMA) processes below a bound, which is determined by the information in the data."}
{"_id": "1e7d7f76b3a7b494122f40c22487e60a51a2d1be", "title": "Social Collaborative Viewpoint Regression with Explainable Recommendations", "text": "A recommendation is called explainable if it not only predicts a numerical rating for an item, but also generates explanations for users' preferences. Most existing methods for explainable recommendation apply topic models to analyze user reviews to provide descriptions along with the recommendations they produce. So far, such methods have neglected user opinions and influences from social relations as a source of information for recommendations, even though these are known to improve the rating prediction.\n In this paper, we propose a latent variable model, called social collaborative viewpoint regression (sCVR), for predicting item ratings based on user opinions and social relations. To this end, we use so-called viewpoints, represented as tuples of a concept, topic, and a sentiment label from both user reviews and trusted social relations. In addition, such viewpoints can be used as explanations. We apply a Gibbs EM sampler to infer posterior distributions of sCVR. Experiments conducted on three large benchmark datasets show the effectiveness of our proposed method for predicting item ratings and for generating explanations."}
{"_id": "c8be347065aa512c366ffc89a478a068be547c03", "title": "Continuous Queries over Append-Only Databases", "text": "In a database to which data is continually added, users may wish to issue a permanent query and be notified whenever data matches the query. If such continuous queries examine only single records, this can be implemented by examining each record as it arrives. This is very efficient because only the incoming record needs to be scanned. This simple approach does not work for queries involving joins or time. The Tapestry system allows users to issue such queries over a database of mail and bulletin board messages. The user issues a static query, such as \u201cshow me all messages that have been replied to by Jones,\u201d as though the database were fixed and unchanging. Tapestry converts the query into an incremental query that efficiently finds new matches to the original query as new messages are added to the database. This paper describes the techniques used in Tapestry, which do not depend on triggers and thus be implemented on any commercial database that supports SQL. Although Tapestry is designed for filtering mail and news messages, its techniques are applicable to any append-only database."}
{"_id": "c29ab0e872e647cec4cef5fe1bf743f3dad071ec", "title": "Data Governance Maturity Model for Micro Financial Organizations in Peru", "text": "Micro finance organizations play an important role since they facilitate integration of all social classes to sustained economic growth. Against this background, exponential growth of data, resulting from transactions and operations carried out with these companies on a daily basis, becomes imminent. Appropriate management of this data is therefore necessary because, otherwise, it will result in a competitive disadvantage due to the lack of valuable and quality information for decision-making and process improvement. Data Governance provides a different approach to data management, as seen from the perspective of business assets. In this regard, it is necessary that the organization have the ability to assess the extent to which that management is correct or is generating expected results. This paper proposes a data governance maturity model for micro finance organizations, which frames a series of formal requirements and criteria providing an objective diagnosis. This model was implemented based on the information of a Peruvian micro finance organization. Four domains, out of the seven listed in the model, were evaluated. Finally, after validation of the proposed model, it was evidenced that it serves as a means for identifying the gap between data management and objectives set."}
{"_id": "998aa2b0065497f56d4922b08e1102fedc89e9c4", "title": "The Use of Case-Based Reasoning for the Monitoring of Financial Fraud Transactions", "text": "Financial transaction fraud constitutes an acute problem domain for detection and early warning systems. Despite that different branches of AI have been addressing the problem since the late 90s, CBR approaches have seldom been applied. This paper provides a proof of concept approach and experimental investigation for the use of a CBR Intelligent Monitoring System for detecting abnormal patterns in financial transaction flows. The representation of workflow related knowledge in this research using graphs is explained. The workflow process is orchestrated by a software system using BPEL technologies within a service-oriented architecture. Workflow cases are represented in terms of events and their corresponding temporal relationships. The matching and CBR retrieval mechanisms used in this research are explained and a simple evaluation of the approach is provided using simulation data. Further work on the system and the extension to a full intelligent monitoring and process optimisation system is finally presented."}
{"_id": "93d0d9cf57d11131688232dfad4db46754858183", "title": "Light at the End of the Tunnel: High-Speed LiDAR-Based Train Localization in Challenging Underground Environments", "text": "In this paper, we present an infrastructure-free mapping and localization framework for rail vehicles using only a lidar sensor. Our method is designed to handle the pathological environment found in modern underground tunnels: narrow, parallel, and relatively smooth concrete walls with very little infrastructure to break up the empty spaces in the tunnel. By using an RQE-based, point-cloud alignment approach, we are able to implement a sliding-window algorithm, used for both mapping and localization. We demonstrate the proposed method with datasets gathered on a subway train travelling at high speeds (up to 70 km/h) in an underground tunnel for a total of 20 km across 6 runs. Our method is capable of mapping the tunnel with less than 0.6% error over the total length of the generated map. It is capable of continuously localizing, relative to the generated map, to within 10 cm in stations and at crossovers, and 1.8 m in pathological sections of tunnel. This method improves rail-based localization in a tunnel, which can be used to increase capacity on existing railways and for automated trains."}
{"_id": "a4803d2f97a89b0ddb60fa1a0c3fa47991830426", "title": "TaxoFinder: A Graph-Based Approach for Taxonomy Learning", "text": "Taxonomy learning is an important task for knowledge acquisition, sharing, and classification as well as application development and utilization in various domains. To reduce human effort to build a taxonomy from scratch and improve the quality of the learned taxonomy, we propose a new taxonomy learning approach, named TaxoFinder. TaxoFinder takes three steps to automatically build a taxonomy. First, it identifies domain-specific concepts from a domain text corpus. Second, it builds a graph representing how such concepts are associated together based on their co-occurrences. As the key method in TaxoFinder, we propose a method for measuring associative strengths among the concepts, which quantify how strongly they are associated in the graph, using similarities between sentences and spatial distances between sentences. Lastly, TaxoFinder induces a taxonomy from the graph using a graph analytic algorithm. TaxoFinder aims to build a taxonomy in such a way that it maximizes the overall associative strengths among the concepts in the graph to build a taxonomy. We evaluate TaxoFinder using gold-standard evaluation on three different domains: emergency management for mass gatherings, autism research, and disease domains. In our evaluation, we compare TaxoFinder with a state-of-the-art subsumption method and show that TaxoFinder is an effective approach significantly outperforming the subsumption method."}
{"_id": "a86c573322ed38db84f7d57a1ea41dae5955034b", "title": "A Survey of Caching Mechanisms in Information-Centric Networking", "text": "Information-Centric Networking (ICN) is a novel networking paradigm that attracts increasing research interests in recent years. In-network caching has been viewed as an attractive feature of ICN because it can reduce network traffic, alleviate server bottleneck, and reduce the user access latencies. Because of this, the network community has proposed many in-network caching mechanisms that aim at optimizing various performance metrics such as cache hit ratio and cache hit distance. In this survey, we present a comprehensive overview of the recently proposed in-network caching mechanisms for ICN. For each caching mechanism, we describe it in detail, present examples to illustrate how it works, and analyze its possible benefits and drawbacks. We also compare some typical in-network caching mechanisms through extensive simulations and discuss the remaining research challenges."}
{"_id": "aa935b19f4f1e6ebb3d6e6e8e795682b5c296bb4", "title": "Convolutional Self Organizing Map", "text": "Recently, deep learning became very popular, and was applied to many fields. The convolutional neural networks are often used for representing the layers for deep learning. In this paper, we propose Convolutional Self Organizing Map, which can be applicable to deep learning. Conventional Self Organizing Map uses single layered architecture, and can visualizes and classifies the input data on 2 dimensional map. SOMs which uses multiple layers are already proposed. In this paper, we propose Self Organizing Map algorithms which include convolutional layers. 2 types of convolution methods, which are based on conventional method and inspired from Self Organizing Map algorithm are proposed, and the performance of both method is examined in the experiments of clustering image data."}
{"_id": "6ec5443ffcf1d5666ac28e78e709b9f613fdf36d", "title": "Probabilistic neural network and its application", "text": "Nuclear marine apparatus is a huge complicated system, most of which equipments are of nonlin-earity, time varying, coupling and inexactness. Neuralnet is widely applied in nuclear fault diagnosis for its approaching any kinds of nonlinearity mapping. At present, BP neural net is used more widely, but the layers of the net and the neurones on each layer can not be delimited easily; such net may fall into the minimum point in the course of training. In this essay, PNN proves effective in diagnosing faults on nuclearmarine apparatus."}
{"_id": "55ad25867444553c28087482f7811d491f7c7cc4", "title": "Cardboard semiotics: reconfigurable symbols as a means for narrative prototyping in game design", "text": "In this paper, we propose the technique of cardboard semiotics. We explain the importance of symbolic analysis as a tool for building narrative prototypes in videogames. Borrowing from the participatory design work in the early 1990s, we suggest a means for adapting and extending this work based on the implicit participation of gamers' immediate-level stories (i.e., the gameplay with narrative implications). Our paper first introduces the concept of semiotics and explains how cardboard semiotics can function as an applied technique within the domain of videogame design and development. Next, we propose a theoretical basis for our work using a simple three act narrative structure and explore some basic concepts from narrative game design. Finally, we conclude with some simple examples of how cardboard semiotics might function in a design environment."}
{"_id": "2c1dcbc6da0c6e9a60b6a985f63b413b4d91385a", "title": "Role of the amygdala in decision-making.", "text": "The somatic marker hypothesis proposes that both the amygdala and the orbitofrontal cortex are parts of a neural circuit critical for judgment and decision-making. Although both structures couple exteroceptive sensory information with interoceptive information concerning somatic/emotional states, they do so at different levels, thus making different contributions to the process. We define \"primary inducers\" as stimuli that unconditionally, or through learning (e.g., conditioning and semantic knowledge), can (perceptually or subliminally) produce states that are pleasurable or aversive. Encountering a fear object (e.g., a snake), a stimulus predictive of a snake, or semantic information such as winning or losing a large sum of money are all examples of primary inducers. \"Secondary inducers\" are entities generated by the recall of a personal or hypothetical emotional event or perceiving a primary inducer that generates \"thoughts\" and \"memories\" about the inducer, all of which, when they are brought to memory, elicit a somatic state. The episodic memory of encountering a snake, losing a large sum of money, imagining the gain of a large sum of money, or hearing or looking at primary inducers that bring to memory \"thoughts\" pertaining to an emotional event are all examples of secondary inducers. We present evidence in support of the hypothesis that the amygdala is a critical substrate in the neural system necessary for triggering somatic states from primary inducers. The ventromedial cortex is a critical substrate in the neural system necessary for the triggering of somatic states from secondary inducers. The amygdala system is a priori a necessary step for the normal development of the orbitofrontal system for triggering somatic states from secondary inducers. However, once this orbitofrontal system is developed, the induction of somatic states by secondary inducers via the orbitofrontal system is less dependent on the amygdala system. Perhaps the amygdala is equivalent to the hippocampus with regard to emotions, that is, necessary for acquiring new emotional attributes (anterograde emotions), but not for retrieving old emotional attributes (retrograde emotions). Given the numerous lesion and functional neuroimaging studies illustrating the involvement of the amygdala in complex cognitive and behavioral functions, including \"social cognition,\" we suggest that this involvement is a manifestation of a more fundamental function mediated by the amygdala, which is to couple stimuli/entities with their emotional attributes, that is, the processing of somatic states from primary inducers."}
{"_id": "93bbb97c42077bc373b498b1967f3a491af33419", "title": "Bazaar-Extension: A CloudSim Extension for Simulating Negotiation Based Resource Allocations", "text": "Negotiation processes taking place between two or more parties need to agree on a viable execution mechanism. Auctioning protocols have proven useful as electronic negotiation mechanisms in the past. Auctions are a way of determining the price of a resource in a dynamic way. Additionally, auctions have well defined rules such as winner and loser determination, time restrictions or minimum price increment. These restrictions are necessary to ensure fair and transparent resource allocation. However, these rules are limiting flexibility of consumers and providers. In this paper we introduce a novel negotiation based resource allocation mechanisms using the offer-counteroffer negotiation protocol paradigm. This allocation mechanism shows similarities to the supermarket approach as consumer and provider are able to communicate directly. Further, the price is determined in a dynamic way similar to auctioning. We developed a Bazaar-Extension for CloudSim which simulates negotiation processes with different strategies. In this paper we introduce and analyze a specific Genetic Algorithm based negotiation strategy. For the comparison of the efficiency of resource allocations a novel Bazaar-Score was used."}
{"_id": "3fd46ca896d023df8c8af2b3951730d8c38defdd", "title": "Training neural nets with the reactive tabu search", "text": "In this paper the task of training subsymbolic systems is considered as a combinatorial optimization problem and solved with the heuristic scheme of the reactive tabu search (RTS). An iterative optimization process based on a \"modified local search\" component is complemented with a meta-strategy to realize a discrete dynamical system that discourages limit cycles and the confinement of the search trajectory in a limited portion of the search space. The possible cycles are discouraged by prohibiting (i.e., making tabu) the execution of moves that reverse the ones applied in the most recent part of the search. The prohibition period is adapted in an automated way. The confinement is avoided and a proper exploration is obtained by activating a diversification strategy when too many configurations are repeated excessively often. The RTS method is applicable to nondifferentiable functions, is robust with respect to the random initialization, and effective in continuing the search after local minima. Three tests of the technique on feedforward and feedback systems are presented."}
{"_id": "16053eba481ab5b72032ef5e48195d1ce31bb2be", "title": "SDN for network security", "text": "Software Defined Networking, SDN, is the programmable separation of control and forwarding elements of networking that enables software control of network forwarding that can be logically and/or physically separated from physical switches and routers. The following important question is considered in this paper: to what extent can SDN-based networks address network security management problems? The new opportunities for enhancing network security brought by this separation are considered in the paper."}
{"_id": "f42f8599d1733b4141c8b414079bbb571d6fac67", "title": "Computerized classification of intraductal breast lesions using histopathological images", "text": "In the diagnosis of preinvasive breast cancer, some of the intraductal proliferations pose a special challenge. The continuum of intraductal breast lesions includes the usual ductal hyperplasia (UDH), atypical ductal hyperplasia (ADH), and ductal carcinoma in situ (DCIS). The current standard of care is to perform percutaneous needle biopsies for diagnosis of palpable and image-detected breast abnormalities. UDH is considered benign and patients diagnosed UDH undergo routine follow-up, whereas ADH and DCIS are considered actionable and patients diagnosed with these two subtypes get additional surgical procedures. About 250 000 new cases of intraductal breast lesions are diagnosed every year. A conservative estimate would suggest that at least 50% of these patients are needlessly undergoing unnecessary surgeries. Thus, improvement in the diagnostic reproducibility and accuracy is critically important for effective clinical management of these patients. In this study, a prototype system for automatically classifying breast microscopic tissues to distinguish between UDH and actionable subtypes (ADH and DCIS) is introduced. This system automatically evaluates digitized slides of tissues for certain cytological criteria and classifies the tissues based on the quantitative features derived from the images. The system is trained using a total of 327 regions of interest (ROIs) collected across 62 patient cases and tested with a sequestered set of 149 ROIs collected across 33 patient cases. An overall accuracy of 87.9% is achieved on the entire test data. The test accuracy of 84.6% is obtained with borderline cases (26 of the 33 test cases) only, when compared against the diagnostic accuracies of nine pathologists on the same set (81.2% average), indicates that the system is highly competitive with the expert pathologists as a stand-alone diagnostic tool and has a great potential in improving diagnostic accuracy and reproducibility when used as a \u201csecond reader\u201d in conjunction with the pathologists."}
{"_id": "c6241e6fc94192df2380d178c4c96cf071e7a3ac", "title": "Action recognition with trajectory-pooled deep-convolutional descriptors", "text": "Visual features are of vital importance for human action understanding in videos. This paper presents a new video representation, called trajectory-pooled deep-convolutional descriptor (TDD), which shares the merits of both hand-crafted features [31] and deep-learned features [24]. Specifically, we utilize deep architectures to learn discriminative convolutional feature maps, and conduct trajectory-constrained pooling to aggregate these convolutional features into effective descriptors. To enhance the robustness of TDDs, we design two normalization methods to transform convolutional feature maps, namely spatiotemporal normalization and channel normalization. The advantages of our features come from (i) TDDs are automatically learned and contain high discriminative capacity compared with those hand-crafted features; (ii) TDDs take account of the intrinsic characteristics of temporal dimension and introduce the strategies of trajectory-constrained sampling and pooling for aggregating deep-learned features. We conduct experiments on two challenging datasets: HMD-B51 and UCF101. Experimental results show that TDDs outperform previous hand-crafted features [31] and deep-learned features [24]. Our method also achieves superior performance to the state of the art on these datasets."}
{"_id": "b7f3dd82e64d6f99188a9e2ea844b4fd01bb0eca", "title": "Designing A Virtual Empowerment Mobile App for the Blinds", "text": "The Virtual Empowerment App for the Blinds is an empowerment mobile application for the blind. It is developed based on the modules of the existing Latihan Maya Kejayaan Orang Kurang Upaya app, which was intended to empower persons with physical disabilities. As the requirements of persons with physical disabilities greatly differ from that of vision disabilities, this study aims to develop an app that would help blind users to navigate the module of Latihan Maya Kejayaan OKU by themselves based on the requirements observed from the blind users. An interface design to develop a functional and practical app for the blinds is proposed. A comparison between different gesture platforms was also compared. Three tests were conducted, namely the preliminary testing, pilot testing and final testing to analyze the interaction of users when using the app. Results suggested that the extended app developed based on the design proposed in this study were easily navigated and adapted by persons with vision disabilities."}
{"_id": "78cec49ca0acd3b961021bc27d5cf78cbbbafc7e", "title": "Is face recognition really a Compressive Sensing problem?", "text": "Compressive Sensing has become one of the standard methods of face recognition within the literature. We show, however, that the sparsity assumption which underpins much of this work is not supported by the data. This lack of sparsity in the data means that compressive sensing approach cannot be guaranteed to recover the exact signal, and therefore that sparse approximations may not deliver the robustness or performance desired. In this vein we show that a simple \u00a32 approach to the face recognition problem is not only significantly more accurate than the state-of-the-art approach, it is also more robust, and much faster. These results are demonstrated on the publicly available YaleB and AR face datasets but have implications for the application of Compressive Sensing more broadly."}
{"_id": "ee8f2130f4142c020ae76187606ab96bf589ef70", "title": "Fully Statistical Neural Belief Tracking", "text": "This paper proposes an improvement to the existing data-driven Neural Belief Tracking (NBT) framework for Dialogue State Tracking (DST). The existing NBT model uses a hand-crafted belief state update mechanism which involves an expensive manual retuning step whenever the model is deployed to a new dialogue domain. We show that this update mechanism can be learned jointly with the semantic decoding and context modelling parts of the NBT model, eliminating the last rule-based module from this DST framework. We propose two different statistical update mechanisms and show that dialogue dynamics can be modelled with a very small number of additional model parameters. In our DST evaluation over three languages, we show that this model achieves competitive performance and provides a robust framework for building resource-light DST models."}
{"_id": "0b3a0710031be11b2ef50437c7d9eb52c91d6a33", "title": "Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems", "text": "Natural language generation (NLG) is a critical component of spoken dialogue and it has a significant impact both on usability and perceived quality. Most NLG systems in common use employ rules and heuristics and tend to generate rigid and stylised responses without the natural variation of human language. They are also not easily scaled to systems covering multiple domains and languages. This paper presents a statistical language generator based on a semantically controlled Long Short-term Memory (LSTM) structure. The LSTM generator can learn from unaligned data by jointly optimising sentence planning and surface realisation using a simple cross entropy training criterion, and language variation can be easily achieved by sampling from output candidates. With fewer heuristics, an objective evaluation in two differing test domains showed the proposed method improved performance compared to previous methods. Human judges scored the LSTM system higher on informativeness and naturalness and overall preferred it to the other systems."}
{"_id": "11c0d36b008980eb03ef0802bba305c089726cac", "title": "Incorporating Loose-Structured Knowledge into LSTM with Recall Gate for Conversation Modeling", "text": "Modeling human conversations is the essence for building satisfying chat-bots with multi-turn dialog ability. Conversation modeling will notably benefit from domain knowledge since the relationships between sentences can be clarified due to semantic hints introduced by knowledge. In this paper, a deep neural network is proposed to incorporate background knowledge for conversation modeling. Through a specially designed Recall gate, domain knowledge can be transformed into the extra global memory of Long Short-Term Memory (LSTM), so as to enhance LSTM by cooperating with its local memory to capture the implicit semantic relevance between sentences within conversations. In addition, this paper introduces the loose structured domain knowledge base, which can be built with slight amount of manual work and easily adopted by the Recall gate. Our model is evaluated on the context-oriented response selecting task, and experimental results on both two datasets have shown that our approach is promising for modeling human conversations and building key components of automatic chatting systems."}
{"_id": "1b9d8e45250717b9b5a62ae92ef18e3b77d59327", "title": "A Diversity-Promoting Objective Function for Neural Conversation Models", "text": "Sequence-to-sequence neural network models for generation of conversational responses tend to generate safe, commonplace responses (e.g.,I don\u2019t know) regardless of the input. We suggest that the traditional objective function, i.e., the likelihood of output (responses) given input (messages) is unsuited to response generation tasks. Instead we propose using Maximum Mutual Information (MMI) as objective function in neural models. Experimental results demonstrate that the proposed objective function produces more diverse, interesting, and appropriate responses, yielding substantive gains in BLEU scores on two conversational datasets."}
{"_id": "1d1861764141b0255389fecfc309ef74151033fc", "title": "Are We There Yet? Research in Commercial Spoken Dialog Systems", "text": "In this paper we discuss the recent evolution of spoken dialog systems in commercial deployments. Yet based on a simple finite state machine design paradigm, dialog systems reached today a higher level of complexity. The availability of massive amounts of data during deployment led to the development of continuous optimization strategy pushing the design and development of spoken dialog applications from an art to science. At the same time new methods for evaluating the subjective caller experience are available. Finally we describe the inevitable evolution for spoken dialog applications from speech only to multimodal interaction."}
{"_id": "555f5848923c83e2e7a7c21e0a5b7dce7cf2394c", "title": "Tonsillectomy and risk of Parkinson's disease: A danish nationwide population-based cohort study.", "text": "BACKGROUND\nWe hypothesized that tonsillectomy modifies the risk of PD.\n\n\nOBJECTIVES\nTo test the hypothesis in a nationwide population-based cohort study.\n\n\nMETHODS\nWe used Danish medical registries to construct a cohort of all patients in Denmark with an operation code of tonsillectomy 1980-2010 (n\u2009=\u2009195,169) and a matched age and sex general population comparison cohort (n\u2009=\u2009975,845). Patients were followed until PD diagnosis, death, censoring, or end of follow-up 30 November 2013. Using Cox regression, we computed hazard ratios for PD and corresponding 95% confidence intervals, adjusting for age and sex by study design, and potential confounders.\n\n\nRESULTS\nWe identified 100 and 568 patients diagnosed with PD among the tonsillectomy and general population comparison cohort, respectively, finding similar risks of PD (adjusted hazard ratio\u2009=\u20090.95 [95% confidence interval: 0.76-1.19]; for\u2009>\u200920 years' follow-up (adjusted hazard ratio\u2009=\u20090.96 [95% confidence interval: 0.64-1.41]).\n\n\nCONCLUSION\nTonsillectomy is not associated with risk of PD, especially early-onset PD. \u00a9 2017 International Parkinson and Movement Disorder Society."}
{"_id": "2c5ec2fd82aa3f2390011945c21f67aacb2b6efb", "title": "Acceptance of homegrown enterprise resource planning (ERP) systems in Ethiopia", "text": "In the current competitive global market, organizations are implementing information and communication technology (ICT) that could add value to their products, processes, and satisfaction of their users. The adoption, implementation and use of homegrown enterprise resource planning (ERP) systems is one of these mechanisms being globally used for recording, processing, storing, and exchanging organizational information anytime anywhere. Although organizations have been utilizing ERP systems, the acceptance of homegrown ERP systems is given less attention as compared to commercial off-the shelf (COTS) software. Hence, this research studied factors that determine acceptance of homegrown ERP through the extension of unified theory of acceptance and use of technology (UTAUT) model. The finding revealed that performance expectancy, effort expectancy, social influence, competitive advantage, cost effectiveness, and facilitations functions are determinants of homegrown ERP system acceptance in Ethiopia. Moreover, experience and voluntariness are found to be significant moderators of the study."}
{"_id": "3be47a4dffacb033bc1055834a4db9dd7d041a23", "title": "Explaining contradictory relations between risk perception and risk taking.", "text": "Different studies have documented opposite relations between perceived risk and behavior. The present study tested a theoretical explanation that reconciles these conflicting results. Adolescents (N= 596) completed alternative measures of risk perception that differed in cue specificity and response format. As predicted by fuzzy-trace theory, measures that emphasized verbatim retrieval and quantitative processing produced positive correlations between perceived risk and risky behavior; risk perceptions reflected the extent to which adolescents engaged in risky behavior. In contrast, measures that assessed global, gist-based judgments of risk produced negative correlations; higher risk perceptions were associated with less risk taking, a protective rather than reflective relation. Endorsement of simple values and principles provided the greatest protection against risk taking. Results support a dual-processes interpretation of the relation between risk perception and risk taking according to which observed relations depend on whether the cues in questions trigger verbatim or gist processing."}
{"_id": "74d15b56e6d19a7d527e1c2626c76dd1418d798c", "title": "Local optima smoothing for global optimization", "text": "It is widely believed that in order to solve large scale global optimization problems an appropriate mixture of local approximation and global exploration is necessary. Local approximation, if first order information on the objective function is available, is efficiently performed by means of local optimization methods. Unfortunately, global exploration, in absence of some kind of global information on the problem, is a \u201cblind\u201d procedure, aimed at placing observations as evenly as possible in the search domain. Often this procedure reduces to uniform random sampling (like in Multistart algorithms, or in techniques based on clustering). In this paper we propose a new framework for global exploration which tries to guide random exploration towards the region of attraction of low-level local optima. The main idea originated by the use of smoothing techniques (based on gaussian convolutions): the possibility of applying a smoothing transformation not to the objective function but to the result of local searches seems to have never been explored yet. Although an exact smoothing of the results of local searches is impossible to implement, in this paper we propose a computational approximation scheme which has proven to be very efficient and (maybe more important) extremely robust in solving large scale global optimization problems with huge numbers of local optima. \u2217Email: b.addis@ing.unifi.it Dip. Sistemi e Informatica Universit\u00e0 di Firenze \u2020Email: locatell@di.unito.it Dip. Informatica Universit\u00e0 di Torino \u2021Email: schoen@ing.unifi.it Dip. Sistemi e Informatica Universit\u00e0 di Firenze"}
{"_id": "9447b4502eed007a117e4ba87278407ca3d7b354", "title": "Efficient Subgraph Matching on Billion Node Graphs", "text": "The ability to handle large scale graph data is crucial to an increasing number of applications. Much work has been dedicated to supporting basic graph operations such as subgraph matching, reachability, regular expression matching, etc. In many cases, graph indices are employed to speed up query processing. Typically, most indices require either super-linear indexing time or super-linear indexing space. Unfortunately, for very large graphs, super-linear approaches are almost always infeasible. In this paper, we study the problem of subgraph matching on billion-node graphs. We present a novel algorithm that supports efficient subgraph matching for graphs deployed on a distributed memory store. Instead of relying on super-linear indices, we use efficient graph exploration and massive parallel computing for query processing. Our experimental results demonstrate the feasibility of performing subgraph matching on web-scale graph data."}
{"_id": "2381646c5f8f05b430fae68093060d752a5113d9", "title": "A Temporal Logic of Nested Calls and Returns", "text": "Model checking of linear temporal logic (LTL) specifications with respect to pushdown systems has been shown to be a useful tool for analysis of programs with potentially recursive procedures. LTL, however, can specify only regular properties, and properties such as correctness of procedures with respect to pre and post conditions, that require matching of calls and returns, are not regular. We introduce a temporal logic of calls and returns (CARET) for specification and algorithmic verification of correctness requirements of structured programs. The formulas of CARET are interpreted over sequences of propositional valuations tagged with special symbols call and ret. Besides the standard global temporal modalities, CARET admits the abstract-next operator that allows a path to jump from a call to the matching return. This operator can be used to specify a variety of non-regular properties such as partial and total correctness of program blocks with respect to pre and post conditions. The abstract versions of the other temporal modalities can be used to specify regular properties of local paths within a procedure that skip over calls to other procedures. CARET also admits the caller modality that jumps to the most recent pending call, and such caller modalities allow specification of a variety of security properties that involve inspection of the call-stack. Even though verifying context-free properties of pushdown systems is undecidable, we show that model checking CARET formulas against a pushdown model is decidable. We present a tableau construction that reduces our model checking problem to the emptiness problem for a B\u00fcchi pushdown system. The complexity of model checking CARET formulas is the same as that of checking LTL formulas, namely, polynomial in the model and singly exponential in the size of the specification. Comments From the 10th International Conference, TACAS 2004, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2004, Barcelona, Spain, March 29 April 2, 2004. This conference paper is available at ScholarlyCommons: http://repository.upenn.edu/cis_papers/203 A Temporal Logic of Nested Calls and Returns ? Rajeev Alur, Kousha Etessami, and P. Madhusudan 1 University of Pennsylvania 2 University of Edinburgh Abstract. Model checking of linear temporal logic (LTL) speci cations Model checking of linear temporal logic (LTL) speci cations with respect to pushdown systems has been shown to be a useful tool for analysis of programs with potentially recursive procedures. LTL, however, can specify only regular properties, and properties such as correctness of procedures with respect to pre and post conditions, that require matching of calls and returns, are not regular. We introduce a temporal logic of calls and returns (CaRet) for speci cation and algorithmic veri cation of correctness requirements of structured programs. The formulas of CaRet are interpreted over sequences of propositional valuations tagged with special symbols call and ret. Besides the standard global temporal modalities, CaRet admits the abstract-next operator that allows a path to jump from a call to the matching return. This operator can be used to specify a variety of non-regular properties such as partial and total correctness of program blocks with respect to pre and post conditions. 
The abstract versions of the other temporal modalities can be used to specify regular properties of local paths within a procedure that skip over calls to other procedures. CaRet also admits the caller modality that jumps to the most recent pending call, and such caller modalities allow speci cation of a variety of security properties that involve inspection of the call-stack. Even though verifying contextfree properties of pushdown systems is undecidable, we show that model checking CaRet formulas against a pushdown model is decidable. We present a tableau construction that reduces our model checking problem to the emptiness problem for a B\u007f uchi pushdown system. The complexity of model checking CaRet formulas is the same as that of checking LTL formulas, namely, polynomial in the model and singly exponential in the size of the speci cation."}
{"_id": "318b7aa53be9bd451e7948fb341223c745fabe71", "title": "Q # : A Quantum Programming Language by Microsoft", "text": "The multi-paradigm quantum programming language Q# was analysed and used to study and create novel programs that were able to go beyond the capabilities of any classical program. This language was built by Microsoft to succeed LIQUi|> and is a Domain-Specific Language (DSL) that can be used within Microsoft\u2019s Quantum Development Kit (QDK). The quantum programs are run on a quantum simulator which possesses properties that a real quantum computer would from a behavioural aspect. It uses the .NET Code SDK, allowing for easy creation, building and running of quantum projects via the command line. Initially, the main features and libraries available were studied and experimented with by analysing implementations of the Quantum Teleportation and Deutsch-Jozsa algorithms. Thereafter, an algorithm to solve an arbitrary 2 \u00d7 2 matrix system of linear equations was implemented, demonstrating a theoretical quantum speed-up with time complexities of order O(logN) compared to the classical O(N) for sparse matrices. Running the algorithm for a particular matrix achieved results that were within the range of theoretical predictions. Subsequently, as an extension to the project, concepts within Quantum Game Theory were explored. This led to the Mermin-Peres Magic Square game successfully being simulated in Q#; a game where no classical winning strategy exists, yet a quantum strategy is able to win in all possible cases of the game."}
{"_id": "b1096ac529a3ef10c9b4ae6dc5dba1e87c2dc8cf", "title": "Nuclei segmentation of microscopic breast cancer image using Gram-Schmidt and cluster validation algorithm", "text": "A combination of Gram-Schmidt method and cluster validation algorithm based Bayesian is proposed for nuclei segmentation on microscopic breast cancer image. Gram-Schmidt is applied to identify the cell nuclei on a microscopic breast cancer image and the cluster validation algorithm based Bayesian method is used for separating the touching nuclei. The microscopic image of the breast cancer cells are used as dataset. The segmented cell nuclei results on microscopic breast cancer images using Gram-Schmidt method shows that the most of MSE values are below 0.1 and the average MSE of segmented cell nuclei results is 0.08. The average accuracy of separated cell nuclei counting using cluster validation algorithm is 73% compares with the manual counting."}
{"_id": "574222fa516c90d89a5d3293cfd104016ca5adab", "title": "TCS-TRA-1260 TCS Technical Report ZDD-Based Computation of the Number of Paths in a Graph by", "text": "(Abstract) Counting the number of paths in a graph, for example the number of nonintersecting (or self-avoiding) rook paths joining opposite corners of an n \u00d7 n grid, is not an easy problem since no mathematical formula is found and the number becomes too large to enumerate one by one except for small graphs. We have implemented software based on the ZDD technique in Knuth's \" The Art of Computer Programming \" carefully so as to achieve better memory efficiency, and have succeeded in computing the exact numbers for some graphs that were not known until now."}
{"_id": "fa5c4256ec213dfc936d02e932b9c8f6ff33555c", "title": "Approximate model for interactive-tendon driven mechanism of a multiple-DoFs myoelectric prosthetic hand", "text": "For practical use, a myoelectric prosthetic hand needs to (1) have a human-like structure, (2) be lightweight, (3) have multiple degrees of freedom (DoFs), and (4) have a high grip force. We have developed a myoelectric prosthetic hand with an interactive-tendon driven mechanism. This paper describes the control method by which the interactive-tendon driven mechanism produces fine and precise actions, as well as an approximate model for the control method. The approximate model was developed based on a geometry model and an equilibrium model for the joint torque. Experimental results show that the joint motions of the actual robotic hand are controlled with errors of between 9 and 15% using the approximate model."}
{"_id": "4aefcf5926d784178a42f28d26667c89db206605", "title": "Modeling punctuation prediction as machine translation", "text": "Punctuation prediction is an important task in Spoken Language Translation. The output of speech recognition systems does not typically contain punctuation marks. In this paper we analyze different methods for punctuation prediction and show improvements in the quality of the final translation output. In our experiments we compare the different approaches and show improvements of up to 0.8 BLEU points on the IWSLT 2011 English French Speech Translation of Talks task using a translation system to translate from unpunctuated to punctuated text instead of a language model based punctuation prediction method. Furthermore, we do a system combination of the hypotheses of all our different approaches and get an additional improvement of 0.4 points in BLEU."}
{"_id": "5a4d04b22225b35c541e0e68c1fc00933031515a", "title": "Sneaking around concatMap: efficient combinators for dynamic programming", "text": "We present a framework of dynamic programming combinators that provides a high-level environment to describe the recursions typical of dynamic programming over sequence data in a style very similar to algebraic dynamic programming (ADP). Using a combination of type-level programming and stream fusion leads to a substantial increase in performance, without sacrificing much of the convenience and theoretical underpinnings of ADP.\n We draw examples from the field of computational biology, more specifically RNA secondary structure prediction, to demonstrate how to use these combinators and what differences exist between this library, ADP, and other approaches.\n The final version of the combinator library allows writing algorithms with performance close to hand-optimized C code."}
{"_id": "3c3b468b901236e75c261e29b652719d3f2c462c", "title": "Resting state networks and consciousness Alterations of multiple resting state network connectivity in physiological , pharmacological , and pathological consciousness states", "text": "*Correspondence: Athena Demertzi , Coma Science Group, Cyclotron Research Center, All\u00e9e du 6 ao\u00fbt no 8, Sart Tilman B30, University of Li\u00e8ge, 4000 Li\u00e8ge, Belgium. e-mail: a.demertzi@ulg.ac.be In order to better understand the functional contribution of resting state activity to conscious cognition, we aimed to review increases and decreases in functional magnetic resonance imaging (fMRI) functional connectivity under physiological (sleep), pharmacological (anesthesia), and pathological altered states of consciousness, such as brain death, coma, vegetative state/unresponsive wakefulness syndrome, and minimally conscious state.The reviewed resting state networks were the DMN, left and right executive control, salience, sensorimotor, auditory, and visual networks. We highlight some methodological issues concerning resting state analyses in severely injured brains mainly in terms of hypothesis-driven seed-based correlation analysis and data-driven independent components analysis approaches. Finally, we attempt to contextualize our discussion within theoretical frameworks of conscious processes.We think that this \u201clesion\u201d approach allows us to better determine the necessary conditions under which normal conscious cognition takes place. At the clinical level, we acknowledge the technical merits of the resting state paradigm. Indeed, fast and easy acquisitions are preferable to activation paradigms in clinical populations. Finally, we emphasize the need to validate the diagnostic and prognostic value of fMRI resting state measurements in non-communicating brain damaged patients."}
{"_id": "19fce403e4e4e72bf45176420a7e54830a053e76", "title": "tress and anxiety detection using facial cues from videos", "text": "This study develops a framework for the detection and analysis of stress/anxiety emotional states through video-recorded facial cues. A thorough experimental protocol was established to induce systematic variability in affective states (neutral, relaxed and stressed/anxious) through a variety of external and internal stressors. The analysis was focused mainly on non-voluntary and semi-voluntary facial cues in order to estimate the emotion representation more objectively. Features under investigation included eye-related events, mouth activity, head motion parameters and heart rate estimated through camerabased photoplethysmography. A feature selection procedure was employed to select the most robust features followed by classification schemes discriminating between stress/anxiety and neutral states nxiety tress link rate ead motion with reference to a relaxed state in each experimental phase. In addition, a ranking transformation was proposed utilizing self reports in order to investigate the correlation of facial parameters with a participant perceived amount of stress/anxiety. The results indicated that, specific facial cues, derived from eye activity, mouth activity, head movements and camera based heart activity achieve good accuracy and are suitable as discriminative indicators of stress and anxiety."}
{"_id": "c8cf4d0d4d6276391ac98ef34f85cebcafbb328c", "title": "Unsupervised Concept Categorization and Extraction from Scientific Document Titles", "text": "This paper studies the automated categorization and extraction of scientific concepts from titles of scientific articles, in order to gain a deeper understanding of their key contributions and facilitate the construction of a generic academic knowledgebase. Towards this goal, we propose an unsupervised, domain-independent, and scalable two-phase algorithm to type and extract key concept mentions into aspects of interest (e.g., Techniques, Applications, etc.). In the first phase of our algorithm we proposePhraseType, a probabilistic generative model which exploits textual features and limited POS tags to broadly segment text snippets into aspect-typed phrases. We extend this model to simultaneously learn aspect-specific features and identify academic domains in multi-domain corpora, since the two tasks mutually enhance each other. In the second phase, we propose an approach based on adaptor grammars to extract fine grained concept mentions from the aspect-typed phrases without the need for any external resources or human effort, in a purely data-driven manner. We apply our technique to study literature from diverse scientific domains and show significant gains over state-of-the-art concept extraction techniques. We also present a qualitative analysis of the results obtained."}
{"_id": "7c112ef986eec3ac5649679474a53749fa34e711", "title": "Recover Corrupted Data in Sensor Networks: A Matrix Completion Solution", "text": "Affected by hardware and wireless conditions in WSNs, raw sensory data usually have notable data loss and corruption. Existing studies mainly consider the interpolation of random missing data in the absence of the data corruption. There is also no strategy to handle the successive missing data. To address these problems, this paper proposes a novel approach based on matrix completion (MC) to recover the successive missing and corrupted data. By analyzing a large set of weather data collected from 196 sensors in Zhu Zhou, China, we verify that weather data have the features of low-rank, temporal stability, and spatial correlation. Moreover, from simulations on the real weather data, we also discover that successive data corruption not only seriously affects the accuracy of missing and corrupted data recovery but even pollutes the normal data when applying the matrix completion in a traditional way. Motivated by these observations, we propose a novel Principal Component Analysis (PCA)-based scheme to efficiently identify the existence of data corruption. We further propose a two-phase MC-based data recovery scheme, named MC-Two-Phase, which applies the matrix completion technique to fully exploit the inherent features of environmental data to recover the data matrix due to either data missing or corruption. Finally, the extensive simulations with real-world sensory data demonstrate that the proposed MC-Two-Phase approach can achieve very high recovery accuracy in the presence of successively missing and corrupted data."}
{"_id": "416e5a996bd82fa8291e85d368eda43baf4473a1", "title": "A Secure and Efficient Uniqueness-and-Anonymity-Preserving Remote User Authentication Scheme for Connected Health Care", "text": "Connected health care has several applications including telecare medicine information system, personally controlled health records system, and patient monitoring. In such applications, user authentication can ensure the legality of patients. In user authentication for such applications, only the legal user/patient himself/herself is allowed to access the remote server, and no one can trace him/her according to transmitted data. Chang et al. proposed a uniqueness-and-anonymity-preserving remote user authentication scheme for connected health care (Chang et al., J Med Syst 37:9902,\u00a02013). Their scheme uses the user\u2019s personal biometrics along with his/her password with the help of the smart card. The user\u2019s biometrics is verified using BioHashing. Their scheme is efficient due to usage of one-way hash function and exclusive-or (XOR) operations. In this paper, we show that though their scheme is very efficient, their scheme has several security weaknesses such as (1) it has design flaws in login and authentication phases, (2) it has design flaws in password change phase, (3) it fails to protect privileged insider attack, (4) it fails to protect the man-in-the middle attack, and (5) it fails to provide proper authentication. In order to remedy these security weaknesses in Chang et al.\u2019s scheme, we propose an improvement of their scheme while retaining the original merit of their scheme. We show that our scheme is efficient as compared to Chang et al.\u2019s scheme. Through the security analysis, we show that our scheme is secure against possible attacks. Further, we simulate our scheme for the formal security verification using the widely-accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to ensure that our scheme is secure against passive and active attacks. In addition, after successful authentication between the user and the server, they establish a secret session key shared between them for future secure communication."}
{"_id": "3d933f007baa84865f609a154d2a378b3d62442f", "title": "S3: A Symbolic String Solver for Vulnerability Detection in Web Applications", "text": "Motivated by the vulnerability analysis of web programs which work on string inputs, we present S3, a new symbolic string solver. Our solver employs a new algorithm for a constraint language that is expressive enough for widespread applicability. Specifically, our language covers all the main string operations, such as those in JavaScript. The algorithm first makes use of a symbolic representation so that membership in a set defined by a regular expression can be encoded as string equations. Secondly, there is a constraint-based generation of instances from these symbolic expressions so that the total number of instances can be limited. We evaluate S3 on a well-known set of practical benchmarks, demonstrating both its robustness (more definitive answers) and its efficiency (about 20 times faster) against the state-of-the-art."}
{"_id": "edef03f8673f8629f0c3b5e121a08b50e55dab91", "title": "NANOFIBRILLATED CELLULOSE AS PAPER ADDITIVE IN EUCALYPTUS PULPS", "text": "In this work, the physical and mechanical properties of bleached Eucalyptus pulp reinforced with nanofibrillated cellulose (NFC) are compared with those of traditional beaten pulp used in the making of writing/printing and offset printing papers. For this purpose, three different types of hardwood slurries were prepared: beaten pulps, unbeaten pulps reinforced with NFC, and slightly beaten pulps also reinforced with NFC. Physical and mechanical tests were performed on handsheets from these different slurries. The results showed that adding NFC to unbeaten pulps results in physical and mechanical properties similar to those in pulps used for printing/writing papers. Nevertheless, the best results were obtained in slurries previously beaten at slight conditions and subsequently reinforced with NFC. These results demonstrate that the addition of NFC allows a reduction in beating intensity without decreasing the desired mechanical properties for this specific purpose."}
{"_id": "cc88ec75abf66afda1ca15c8e022a09b86315848", "title": "Make Evasion Harder: An Intelligent Android Malware Detection System", "text": "To combat the evolving Android malware attacks, in this paper, instead of only using Application Programming Interface (API) calls, we further analyze the different relationships between them and create higher-level semantics which require more efforts for attackers to evade the detection. We represent the Android applications (apps), related APIs, and their rich relationships as a structured heterogeneous information network (HIN). Then we use a meta-path based approach to characterize the semantic relatedness of apps and APIs. We use each meta-path to formulate a similarity measure over Android apps, and aggregate different similarities using multi-kernel learning to make predictions. Promising experimental results based on real sample collections from Comodo Cloud Security Center demonstrate that our developed system HinDroid outperforms other alternative Android malware detection techniques."}
{"_id": "17ce7734904a6162caa24e13a9454c1239924744", "title": "Learning Latent Vector Spaces for Product Search", "text": "We introduce a novel latent vector space model that jointly learns the latent representations of words, e-commerce products and a mapping between the two without the need for explicit annotations. The power of the model lies in its ability to directly model the discriminative relation between products and a particular word. We compare our method to existing latent vector space models (LSI, LDA and word2vec) and evaluate it as a feature in a learning to rank setting. Our latent vector space model achieves its enhanced performance as it learns better product representations. Furthermore, the mapping from words to products and the representations of words benefit directly from the errors propagated back from the product representations during parameter estimation. We provide an in-depth analysis of the performance of our model and analyze the structure of the learned representations."}
{"_id": "d630b7be2d89ca0980386bae4d5232abf7be07f3", "title": "Improving Login Authorization by Providing Graphical Password (Security)", "text": "Usable security has unique usability challenges because the need for security often means that standard humancomputer-interaction approaches cannot be directly applied. An important usability goal for authentication systems is to support users in selecting better passwords. Users often create memorable passwords that are easy for attackers to guess, but strong system-assigned passwords are difficult for users to remember. So researchers of modern days have gone for alternative methods where in graphical pictures are used as passwords. Graphical passwords essentially use images or representation of images as passwords. Human brain is good in remembering picture than textual character. There are various graphical password schemes or graphical password software in the market. However, very little research has been done to analyze graphical passwords that are still immature. There for, this project work merges persuasive selective click points and password guessing resistant protocol. The major goal of this work is to reduce the guessing attacks as well as encouraging users to select more random, and difficult passwords to guess. Well known security threats like brute force attacks and dictionary attacks can be successfully abolished using this method."}
{"_id": "aed5bfab8266b10aaf63be7d00364d0f7d94ec73", "title": "A novel corner-fed patch to reduce cross-polarization for a microstrip antenna array", "text": "This paper presents a novel corner-fed microstrip square patch. On the patch there is a slot along the diagonal. With this slot, the current distribution on the patch is changed. The currents related to co-polarization become more dominant and those related to cross-polarization become more symmetrical. These two factors result in a reduction in the cross-polarization level. An eight element shunt connected series-fed linear array has been designed, manufactured and tested. The measured results show that the cross-polarization level is reduced by 10.5 dB, while the other performances of the array are nearly the same as those of a standard one."}
{"_id": "d93e57320bb16f6439ad984a4768345f056f59d0", "title": "Advertising Keyword Recommendation based on Supervised Link Prediction in Multi-Relational Network", "text": "In the sponsored search system, advertisers bid on keywords that are related to their products or services to participate in repeated auctions and display their ads on the search result pages. Since there is a huge inventory of possible terms, instructive keyword recommendation is an important component to help advertisers optimize their campaigns and improve ad monetization. In this paper, by constructing a heterogeneous network which contains four types of links between advertisers and keywords based on different data resources and mining complex representations of network structure and task-guided attributes of nodes, we propose an approach to keyword recommendation based on supervised link prediction in multi-relational network. This method can retrieve ample candidates and provide informative ranking scores for recommended list of keywords, experimental results with real sponsored search data validate the effectiveness of the proposed algorithm in producing valuable keywords for advertisers."}
{"_id": "1c168275c59ba382588350ee1443537f59978183", "title": "Mean Shift, Mode Seeking, and Clustering", "text": "Mean shift, a simple iterative procedure that shifts each data point to the average of data points in its neighborhood, is generalized and analyzed in this paper. This generalization makes some k-means like clustering algorithms its special cases. It is shown that mean shift is a mode-seeking process on a surface constructed with a \u201cshadow\u201d kernel. For Gaussian kernels, mean shift is a gradient mapping. Convergence is studied for mean shift iterations. Cluster analysis is treated as a deterministic problem of finding a fixed point of mean shift that characterizes the data. Applications in clustering and Hough transform are demonstrated. Mean shift is also considered as an evolutionary strategy that performs multistart global optimization."}
{"_id": "1dc1adaf4dabf7a3afb8339934a3d86375155a35", "title": "Information Theoretic-Learning auto-encoder", "text": "We propose Information Theoretic-Learning (ITL) divergence measures for variational regularization of neural networks. We also explore ITL-regularized autoencoders as an alternative to variational autoencoding bayes, adversarial autoencoders and generative adversarial networks for randomly generating sample data without explicitly defining a paritition function. This paper also formalizes, generative moment matching networks under the ITL framework."}
{"_id": "357776cd7ee889af954f0dfdbaee71477c09ac18", "title": "Adversarial Autoencoders", "text": "In this paper we propose a new method for regularizing autoencoders by imposing an arbitrary prior on the latent representation of the autoencoder. Our method, named \u201cadversarial autoencoder\u201d, uses the recently proposed generative adversarial networks (GAN) in order to match the aggregated posterior of the hidden code vector of the autoencoder with an arbitrary prior. Matching the aggregated posterior to the prior ensures that there are no \u201choles\u201d in the prior, and generating from any part of prior space results in meaningful samples. As a result, the decoder of the adversarial autoencoder learns a deep generative model that maps the imposed prior to the data distribution. We show how adversarial autoencoders can be used to disentangle style and content of images and achieve competitive generative performance on MNIST, Street View House Numbers and Toronto Face datasets."}
{"_id": "c530fbb3950ccdc88db60eff6627feb9200f6bda", "title": "Information Theory, Inference, and Learning Algorithms", "text": ""}
{"_id": "ce4126d64dc547f8101af4f1dead36f8df474fc8", "title": "EPIC: Efficient Private Image Classification (or: Learning from the Masters)", "text": "Outsourcing an image classification task raises privacy concerns, both from the image provider\u2019s perspective, who wishes to keep their images confidential, and from the classification algorithm provider\u2019s perspective, who wishes to protect the intellectual property of their classifier. We propose EPIC, an efficient private image classification system based on support vector machine (SVM) learning, secure against malicious adversaries. EPIC builds upon transfer learning techniques known from the Machine Learning (ML) literature and minimizes the load on the privacy-preserving part. Our solution is based on Multiparty Computation (MPC), it is 34 times faster than Gazelle (USENIX\u201918) \u2013the state-of-the-art in private image classification\u2013 and it improves the communication cost by 50 times, with a 7% higher accuracy on CIFAR-10 dataset. For the same accuracy as Gazelle achieves on CIFAR-10, EPIC is 700 times faster and the communication cost is reduced by 500 times."}
{"_id": "050b2048c2e102219cf41c1a7189365857d014f1", "title": "Transformation of generalized chebyshev lowpass filter prototype to Suspended Stripline Structure highpass filter for wideband communication systems", "text": "This paper presents the transformation of generalized Chebyshev lowpass filter prototype to highpass filter using Suspended Stripline Structure (SSS) technology. The study involves circuit analysis to determine generalized Chebyshev responses with a transmission zero at finite frequency. The transformation of the highpass filter from the lowpass filter prototype provides a cutoff frequency of 3.1 GHz with a return loss better than -20 dB. The design is simulated on a Roger Duroid RO4350 with a dielectric constant, \u03b5r of 3.48 and a thickness of 0.168 mm. The simulation performance results show promising results that could be further examined during the experimental works. This class of generalized Chebyshev highpass filter with finite transmission zero would be useful in any RF/ microwave communication systems particularly in wideband applications where the reduction of overall physical volume and weight as well as cost very important, while maintaining its excellent performance."}
{"_id": "e9cb3d8e7cc3a13e3d88b4137d244937467f1192", "title": "Coupled IGMM-GANs for deep multimodal anomaly detection in human mobility data", "text": "Detecting anomalous activity in human mobility data has a number of applications including road hazard sensing, telematic based insurance, and fraud detection in taxi services and ride sharing. In this paper we address two challenges that arise in the study of anomalous human trajectories: 1) a lack of ground truth data on what defines an anomaly and 2) the dependence of existing methods on significant pre-processing and feature engineering. While generative adversarial networks seem like a natural fit for addressing these challenges, we find that existing GAN based anomaly detection algorithms perform poorly due to their inability to handle multimodal patterns. For this purpose we introduce an infinite Gaussian mixture model coupled with (bi-directional) generative adversarial networks, IGMM-GAN, that is able to generate synthetic, yet realistic, human mobility data and simultaneously facilitates multimodal anomaly detection. Through estimation of a generative probability density on the space of human trajectories, we are able to generate realistic synthetic datasets that can be used to benchmark existing anomaly detection methods. The estimated multimodal density also allows for a natural definition of outlier that we use for detecting anomalous trajectories. We illustrate our methodology and its improvement over existing GAN anomaly detection on several human mobility datasets, along with MNIST."}
{"_id": "e4d8153665be3eb3410c5cccd46b46c784801ca2", "title": "An Italian Twitter Corpus of Hate Speech against Immigrants", "text": "The paper describes a recently-created Twitter corpus of about 6,000 tweets, annotated for hate speech against immigrants, and developed to be a reference dataset for an automatic system of hate speech monitoring. The annotation scheme was therefore specifically designed to account for the multiplicity of factors that can contribute to the definition of a hate speech notion, and to offer a broader tagset capable of better representing all those factors, which may increase, or rather mitigate, the impact of the message. This resulted in a scheme that includes, besides hate speech, the following categories: aggressiveness, offensiveness, irony, stereotype, and (on an experimental basis) intensity. The paper hereby presented namely focuses on how this annotation scheme was designed and applied to the corpus. In particular, also comparing the annotation produced by CrowdFlower contributors and by expert annotators, we make some remarks about the value of the novel resource as gold standard, which stems from a preliminary qualitative analysis of the annotated data and on future corpus development."}
{"_id": "b55a7ff710dfd618d95e54a59164fc446a9863e8", "title": "Towards reducing the attack surface of software backdoors", "text": "Backdoors in software systems probably exist since the very first access control mechanisms were implemented and they are a well-known security problem. Despite a wave of public discoveries of such backdoors over the last few years, this threat has only rarely been tackled so far.\n In this paper, we present an approach to reduce the attack surface for this kind of attacks and we strive for an automated identification and elimination of backdoors in binary applications. We limit our focus on the examination of server applications within a client-server model. At the core, we apply variations of the delta debugging technique and introduce several novel heuristics for the identification of those regions in binary application that backdoors are typically installed in (i.e., authentication and command processing functions). We demonstrate the practical feasibility of our approach on several real-world backdoors found in modified versions of the popular software tools ProFTPD and OpenSSH. Furthermore, we evaluate our implementation not only on common instruction set architectures such as x86-64, but also on commercial off-the-shelf embedded devices powered by a MIPS32 processor."}
{"_id": "a08f7413297e68ac3ce4756f3e9c3ff94ed57731", "title": "Ovariohysterectomy versus ovariectomy for elective sterilization of female dogs and cats: is removal of the uterus necessary?", "text": "Views: Commentary 1409 E sterilization of female dogs and cats is one of the most common procedures performed in veterinary practice and is considered by private veterinary practitioners as one of the most important skills required of new graduates. Potential benefits of sterilization include population control, prevention of diseases of the reproductive tract, and elimination of undesirable behaviors associated with hormonal cycling. The AVMA and the Association of Shelter Veterinarians both promote elective sterilization of female dogs and cats as integral to reducing euthanasia of unwanted dogs and cats. Sterilization of female dogs and cats can be accomplished by removing both the ovaries and uterus (ovariohysterectomy) or by removing the ovaries alone (ovariectomy). Ovariohysterectomy has historically been recommended in the United States and Canada and is currently emphasized at schools and colleges of veterinary medicine in these countries. To our knowledge, only one of the most frequently used surgical textbooks in the United States and Canada describes ovariectomy, and ovariectomy is generally not taught to veterinary students during their surgical laboratories. The Clinical Proficiency Examination, which is the final step in the Educational Commission for Foreign Veterinary Graduates\u2019 program for establishing the educational equivalence of graduates of nonaccredited veterinary schools seeking licensure in the United States or Canada, requires \u201ccomplete removal of both ovaries and removal of the majority of the uterus\u201d to obtain a passing grade. Despite the apparent preference for ovariohysterectomy in the United States and Canada, ovariectomy appears to have become the standard of care in many European countries. In addition, with the development of minimally invasive surgical techniques, laparoscopic ovariectomy has gained popularity. Thus, it may be helpful to review the scientific evidence comparing ovariohysterectomy and ovariectomy for elective sterilization of healthy female dogs and cats."}
{"_id": "12803b5b57031954ac6c01d3d4eb79d0816e80e0", "title": "Discovering Texture Regularity as a Higher-Order Correspondence Problem", "text": "Understanding texture regularity in real images is a challenging computer vision task. We propose a higher-order feature matching algorithm to discover the lattices of near-regular textures in real images. The underlying lattice of a near-regular texture identifies all of the texels as well as the global topology among the texels. A key contribution of this paper is to formulate lattice-finding as a correspondence problem. The algorithm finds a plausible lattice by iteratively proposing texels and assigning neighbors between the texels. Our matching algorithm seeks assignments that maximize both pair-wise visual similarity and higher-order geometric consistency. We approximate the optimal assignment using a recently developed spectral method. We successfully discover the lattices of a diverse set of unsegmented, real-world textures with significant geometric warping and large appearance variation among"}
{"_id": "55da19ed055cb0e4534af936c2ee72c8c6e06380", "title": "ltm : An R Package for Latent Variable Modeling and Item Response Theory Analyses", "text": "The R package ltm has been developed for the analysis of multivariate dichotomous and polytomous data using latent variable models, under the Item Response Theory approach. For dichotomous data the Rasch, the Two-Parameter Logistic, and Birnbaum\u2019s Three-Parameter models have been implemented, whereas for polytomous data Semejima\u2019s Graded Response model is available. Parameter estimates are obtained under marginal maximum likelihood using the Gauss-Hermite quadrature rule. The capabilities and features of the package are illustrated using two real data examples."}
{"_id": "d2ca8bbb1c6c835b1105586914739208a2177e7c", "title": "A Model-Driven Framework to Develop Personalized Health Monitoring", "text": "Both distributed healthcare systems and the Internet of Things (IoT) are currently hot topics. The latter is a new computing paradigm to enable advanced capabilities in engineering various applications, including those for healthcare. For such systems, the core social requirement is the privacy/security of the patient information along with the technical requirements (e.g., energy consumption) and capabilities for adaptability and personalization. Typically, the functionality of the systems is predefined by the patient\u2019s data collected using sensor networks along with medical instrumentation; then, the data is transferred through the Internet for treatment and decision-making. Therefore, systems creation is indeed challenging. In this paper, we propose a model-driven framework to develop the IoT-based prototype and its reference architecture for personalized health monitoring (PHM) applications. The framework contains a multi-layered structure with feature-based modeling and feature model transformations at the top and the application software generation at the bottom. We have validated the framework using available tools and developed an experimental PHM to test some aspects of the functionality of the reference architecture in real time. The main contribution of the paper is the development of the model-driven computational framework with emphasis on the synergistic effect of security and energy issues."}
{"_id": "be515b4070f746cc39e49e9e80ef0e419cadb1f0", "title": "Recognizing human activity in smart home using deep learning algorithm", "text": "There is a topic of interest for many researchers about human activities recognition in recent years. In this paper, we propose the research to recognize human activities by deep learning (DL) algorithm. We collected the data from participants performing activities in order to evaluate the human activities recognition results. After pre-training the deep network, the fine-tuning process begins. Compared with Hidden Markov Model (HMM) and nai\u0308ve Bayes Classifier (NBC), the experiment results show that the proposed deep learning algorithm is an effective way for recognizing human activities in smart home."}
{"_id": "978ec9eb44b7913933fd9b18c77d34e64c06a3b9", "title": "Real User Evaluation of Spoken Dialogue Systems Using Amazon Mechanical Turk", "text": "This paper describes a framework for evaluation of spoken dialogue systems. Typically, evaluation of dialogue systems is performed in a controlled test environment with carefully selected and instructed users. However, this approach is very demanding. An alternative is to recruit a large group of users who evaluate the dialogue systems in a remote setting under virtually no supervision. Crowdsourcing technology, for example Amazon Mechanical Turk (AMT), provides an efficient way of recruiting subjects. This paper describes an evaluation framework for spoken dialogue systems using AMT users and compares the obtained results with a recent trial in which the systems were tested by locally recruited users. The results suggest that the use of crowdsourcing technology is feasible and it can provide reliable results."}
{"_id": "6ea826f8b8d5b092e639f54adbeb8d65b32a3725", "title": "Pocket reflectometry", "text": "We present a simple, fast solution for reflectance acquisition using tools that fit into a pocket. Our method captures video of a flat target surface from a fixed video camera lit by a hand-held, moving, linear light source. After processing, we obtain an SVBRDF.\n We introduce a BRDF chart, analogous to a color \"checker\" chart, which arranges a set of known-BRDF reference tiles over a small card. A sequence of light responses from the chart tiles as well as from points on the target is captured and matched to reconstruct the target's appearance.\n We develop a new algorithm for BRDF reconstruction which works directly on these LDR responses, without knowing the light or camera position, or acquiring HDR lighting. It compensates for spatial variation caused by the local (finite distance) camera and light position by warping responses over time to align them to a specular reference. After alignment, we find an optimal linear combination of the Lambertian and purely specular reference responses to match each target point's response. The same weights are then applied to the corresponding (known) reference BRDFs to reconstruct the target point's BRDF. We extend the basic algorithm to also recover varying surface normals by adding two spherical caps for diffuse and specular references to the BRDF chart.\n We demonstrate convincing results obtained after less than 30 seconds of data capture, using commercial mobile phone cameras in a casual environment."}
{"_id": "1fb9d129523eec8b33eab860378a06e732885c83", "title": "Un M\u00e9todo para definir la Arquitectura de Procesos", "text": "In this paper we present a method for systematically defining the process architecture of any organization. We begin by presenting a detailed interpretation of the upper levels of Zachman\u2019s framework, providing an answer to the cell formed by the intersection of the perspective of the Planner (row 1) and the dimension of Function (column 2). Our method consists of four steps: capturing the organizational structure; exhaustively analyzing the flow of information between the different organizational units, customers, and providers, allowing for a high-level understanding of the organization\u2019s operation; identifying and modeling the configuration of value that best adapts itself to the creation of value for the organization (value chain, workshop, or network); and, given the specified value configuration, identifying, defining, and interrelating the different essential processes."}
{"_id": "acc1d2a8f1cb0833c26cfdad0f3543f8e2d8ca5d", "title": "Development of Self Balance Transporter : A Segway", "text": "This paper deals with constructing the vehicle which is fully functional to transport the one person only and movement of it is achieve by naturally. Forward and backwards motion is achieved by leaning forwards and backwards. Segway uses handlebar for turning purpose. In Segway technique uses principle of inverted pendulum in which it keeps an angle of Zero degrees with vertical at all times. The rider motion enables this technique to accelerate, brake or steer the vehicle, for this it uses Gyroscopic Sensor and it is Eco-friendly and non polluting."}
{"_id": "5ef91102f8f9c288fa9743ad78cee8339f85ba52", "title": "Can Active Memory Replace Attention?", "text": "Several mechanisms to focus attention of a neural network on selected parts of its input or memory have been used successfully in deep learning models in recent years. Attention has improved image classification, image captioning, speech recognition, generative models, and learning algorithmic tasks, but it had probably the largest impact on neural machine translation. Recently, similar improvements have been obtained using alternative mechanisms that do not focus on a single part of a memory but operate on all of it in parallel, in a uniform way. Such mechanism, which we call active memory, improved over attention in algorithmic tasks, image processing, and in generative modelling. So far, however, active memory has not improved over attention for most natural language processing tasks, in particular for machine translation. We analyze this shortcoming in this paper and propose an extended model of active memory that matches existing attention models on neural machine translation and generalizes better to longer sentences. We investigate this model and explain why previous active memory models did not succeed. Finally, we discuss when active memory brings most benefits and where attention can be a better choice."}
{"_id": "92d27bf70234b305b3df7b665c67a7a1e1a11c18", "title": "Magnetic resonance imaging findings associated with surgically proven rotator interval lesions", "text": "To identify shoulder magnetic resonance imaging (MRI) findings associated with surgically proven rotator interval abnormalities. The preoperative MRI examinations of five patients with surgically proven rotator interval (RI) lesions requiring closure were retrospectively evaluated by three musculoskeletal-trained radiologists in consensus. We assessed the structures in the RI, including the coracohumeral ligament, superior glenohumeral ligament, fat tissue, biceps tendon, and capsule for variations in size and signal alteration. In addition, we noted associated findings of rotator cuff and labral pathology. Three of three of the MR arthrogram studies demonstrated extension of gadolinium to the cortex of the undersurface of the coracoid process compared with the control images, seen best on the sagittal oblique images. Four of five of the studies demonstrated subjective thickening of the coracohumeral ligament, and three of five of the studies demonstrated subjective thickening of the superior glenohumeral ligament. Five of five of the studies demonstrated a labral tear. The MRI arthrogram finding of gadolinium extending to the cortex of the undersurface of the coracoid process was noted on the studies of those patients with rotator interval lesions at surgery in this series. Noting this finding\u2014especially in the presence of a labral tear and/or thickening of the coracohumeral ligament or superior glenohumeral ligament\u2014may be helpful in the preoperative diagnosis of rotator interval lesions."}
{"_id": "c79a608694c3d9a75ef06ed6baa80c6d1ce71bd4", "title": "A Tidy Data Model for Natural Language Processing using cleanNLP", "text": "Recent advances in natural language processing have produced libraries that extract lowlevel features from a collection of raw texts. These features, known as annotations, are usually stored internally in hierarchical, tree-based data structures. This paper proposes a data model to represent annotations as a collection of normalized relational data tables optimized for exploratory data analysis and predictive modeling. The R package cleanNLP, which calls one of two state of the art NLP libraries (CoreNLP or spaCy), is presented as an implementation of this data model. It takes raw text as an input and returns a list of normalized tables. Specific annotations provided include tokenization, part of speech tagging, named entity recognition, sentiment analysis, dependency parsing, coreference resolution, and word embeddings. The package currently supports input text in English, German, French, and Spanish."}
{"_id": "f5968a401ecd76f12e5cad55b3ceb83dec8f36f2", "title": "Learning Anisotropic RBF Kernels", "text": "We present an approach for learning an anisotropic RBF kernel in a game theoretical setting where the value of the game is the degree of separation between positive and negative training examples. The method extends a previously proposed method (KOMD) to perform feature re-weighting and distance metric learning in a kernel-based classification setting. Experiments on several benchmark datasets demonstrate that our method generally outperforms stateof-the-art distance metric learning methods, including the Large Margin Nearest Neighbor Classification family of methods."}
{"_id": "3f7a6b561ac1519856c3c4214d25f8144ba4eb46", "title": "Evaluation of Consumer Understanding of Different Front-of-Package Nutrition Labels, 2010\u20132011", "text": "INTRODUCTION\nGovernments throughout the world are using or considering various front-of-package (FOP) food labeling systems to provide nutrition information to consumers. Our web-based study tested consumer understanding of different FOP labeling systems.\n\n\nMETHODS\nAdult participants (N = 480) were randomized to 1 of 5 groups to evaluate FOP labels: 1) no label; 2) multiple traffic light (MTL); 3) MTL plus daily caloric requirement icon (MTL+caloric intake); 4) traffic light with specific nutrients to limit based on food category (TL+SNL); or 5) the Choices logo. Total percentage correct quiz scores were created reflecting participants' ability to select the healthier of 2 foods and estimate amounts of saturated fat, sugar, and sodium in foods. Participants also rated products on taste, healthfulness, and how likely they were to purchase the product. Quiz scores and product perceptions were compared with 1-way analysis of variance followed by post-hoc Tukey tests.\n\n\nRESULTS\nThe MTL+caloric intake group (mean [standard deviation], 73.3% [6.9%]) and Choices group (72.5% [13.2%]) significantly outperformed the no label group (67.8% [10.3%]) and the TL+SNL group (65.8% [7.3%]) in selecting the more healthful product on the healthier product quiz. The MTL and MTL+caloric intake groups achieved average scores of more than 90% on the saturated fat, sugar, and sodium quizzes, which were significantly better than the no label and Choices group average scores, which were between 34% and 47%.\n\n\nCONCLUSION\nAn MTL+caloric intake label and the Choices symbol hold promise as FOP labeling systems and require further testing in different environments and population subgroups."}
{"_id": "7128b8d8311ee33c05c4573b58626085c462b3d2", "title": "Does High Self-Esteem Cause Better Performance, Interpersonal Success, Happiness, or Healthier Lifestyles?", "text": "Self-esteem has become a household word. Teachers, parents, therapists, and others have focused efforts on boosting self-esteem, on the assumption that high self-esteem will cause many positive outcomes and benefits-an assumption that is critically evaluated in this review. Appraisal of the effects of self-esteem is complicated by several factors. Because many people with high self-esteem exaggerate their successes and good traits, we emphasize objective measures of outcomes. High self-esteem is also a heterogeneous category, encompassing people who frankly accept their good qualities along with narcissistic, defensive, and conceited individuals. The modest correlations between self-esteem and school performance do not indicate that high self-esteem leads to good performance. Instead, high self-esteem is partly the result of good school performance. Efforts to boost the self-esteem of pupils have not been shown to improve academic performance and may sometimes be counterproductive. Job performance in adults is sometimes related to self-esteem, although the correlations vary widely, and the direction of causality has not been established. Occupational success may boost self-esteem rather than the reverse. Alternatively, self-esteem may be helpful only in some job contexts. Laboratory studies have generally failed to find that self-esteem causes good task performance, with the important exception that high self-esteem facilitates persistence after failure. People high in self-esteem claim to be more likable and attractive, to have better relationships, and to make better impressions on others than people with low self-esteem, but objective measures disconfirm most of these beliefs. Narcissists are charming at first but tend to alienate others eventually. Self-esteem has not been shown to predict the quality or duration of relationships. High self-esteem makes people more willing to speak up in groups and to criticize the group's approach. Leadership does not stem directly from self-esteem, but self-esteem may have indirect effects. Relative to people with low self-esteem, those with high self-esteem show stronger in-group favoritism, which may increase prejudice and discrimination. Neither high nor low self-esteem is a direct cause of violence. Narcissism leads to increased aggression in retaliation for wounded pride. Low self-esteem may contribute to externalizing behavior and delinquency, although some studies have found that there are no effects or that the effect of self-esteem vanishes when other variables are controlled. The highest and lowest rates of cheating and bullying are found in different subcategories of high self-esteem. Self-esteem has a strong relation to happiness. Although the research has not clearly established causation, we are persuaded that high self-esteem does lead to greater happiness. Low self-esteem is more likely than high to lead to depression under some circumstances. Some studies support the buffer hypothesis, which is that high self-esteem mitigates the effects of stress, but other studies come to the opposite conclusion, indicating that the negative effects of low self-esteem are mainly felt in good times. Still others find that high self-esteem leads to happier outcomes regardless of stress or other circumstances. 
High self-esteem does not prevent children from smoking, drinking, taking drugs, or engaging in early sex. If anything, high self-esteem fosters experimentation, which may increase early sexual activity or drinking, but in general effects of self-esteem are negligible. One important exception is that high self-esteem reduces the chances of bulimia in females. Overall, the benefits of high self-esteem fall into two categories: enhanced initiative and pleasant feelings. We have not found evidence that boosting self-esteem (by therapeutic interventions or school programs) causes benefits. Our findings do not support continued widespread efforts to boost self-esteem in the hope that it will by itself foster improved outcomes. In view of the heterogeneity of high self-esteem, indiscriminate praise might just as easily promote narcissism, with its less desirable consequences. Instead, we recommend using praise to boost self-esteem as a reward for socially desirable behavior and self-improvement."}
{"_id": "5854f613c1617e8f27f3406f9319b88e200783ca", "title": "Phantom vibrations among undergraduates: Prevalence and associated psychological characteristics", "text": ""}
{"_id": "95609bd3a8a23bcf43871b486d352e51a60689a5", "title": "Dynamic Adaptive Streaming over HTTP/2.0", "text": "MPEG Dynamic Adaptive Streaming over HTTP (DASH) is a new streaming standard that has been recently ratified as an international standard (IS). In comparison to other streaming systems, e.g., HTTP progressive download, DASH is able to handle varying bandwidth conditions providing smooth streaming. Furthermore, it enables NAT and Firewall traversal, flexible and scalable deployment as well as reduced infrastructure costs due to the reuse of existing Internet infrastructure components, e.g., proxies, caches, and Content Distribution Networks (CDN). Recently, the Hypertext Transfer Protocol Bis (httpbis) working group of the IETF has officially started the development of HTTP 2.0. Initially three major proposals have been submitted to the IETF i.e., Googles' SPDY, Microsofts' HTTP Speed+Mobility and Network-Friendly HTTP Upgrade, but SPDY has been chosen as working draft for HTTP 2.0. In this paper we implemented MPEG-DASH over HTTP 2.0 (i.e., SPDY), demonstrating its potential benefits and drawbacks. Moreover, several experimental evaluations have been performed that compare HTTP 2.0 with HTTP 1.1 and HTTP 1.0 in the context of DASH. In particular, the protocol overhead, the performance for different round trip times, and DASH with HTTP 2.0 in a lab test scenario has been evaluated in detail."}
{"_id": "48289b7d57e43bb5a001c334cf96694e933f7001", "title": "An experimental evaluation of rate-adaptation algorithms in adaptive streaming over HTTP", "text": "Adaptive (video) streaming over HTTP is gradually being adopted, as it offers significant advantages in terms of both user-perceived quality and resource utilization for content and network service providers. In this paper, we focus on the rate-adaptation mechanisms of adaptive streaming and experimentally evaluate two major commercial players (Smooth Streaming, Netflix) and one open source player (OSMF). Our experiments cover three important operating conditions. First, how does an adaptive video player react to either persistent or short-term changes in the underlying network available bandwidth. Can the player quickly converge to the maximum sustainable bitrate? Second, what happens when two adaptive video players compete for available bandwidth in the bottleneck link? Can they share the resources in a stable and fair manner? And third, how does adaptive streaming perform with live content? Is the player able to sustain a short playback delay? We identify major differences between the three players, and significant inefficiencies in each of them."}
{"_id": "79ea39418cd05977830cbcf2d07a546cec0ab129", "title": "Dynamic adaptive streaming over HTTP dataset", "text": "The delivery of audio-visual content over the Hypertext Transfer Protocol (HTTP) got lot of attention in recent years and with dynamic adaptive streaming over HTTP (DASH) a standard is now available. Many papers cover this topic and present their research results, but unfortunately all of them use their own private dataset which -- in most cases -- is not publicly available. Hence, it is difficult to compare, e.g., adaptation algorithms in an objective way due to the lack of a common dataset which shall be used as basis for such experiments. In this paper, we present our DASH dataset including our DASHEncoder, an open source DASH content generation tool. We also provide basic evaluations of the different segment lengths, the influence of HTTP server settings, and, in this context, we show some of the advantages as well as problems of shorter segment lengths."}
{"_id": "a8d36f04fbd33088d02206a32b16a627e2792286", "title": "A VLC media player plugin enabling dynamic adaptive streaming over HTTP", "text": "This paper describes the implementation of a VLC media player plugin enabling Dynamic Adaptive Streaming over HTTP (DASH). DASH is an emerging ISO/IEC MPEG and 3GPP standard for HTTP streaming. It aims to standardize formats enabling segmented progressive download by exploiting existing Internet infrastructure as such. Our implementation of these formats as described in this paper is based on the well-known VLC. Hence, it is fully integrated into the VLC structure and has been also submitted to the VLC development team for consideration in future releases of VLC. Therefore, it is licensed under the GNU Lesser General Public License (LGPL). The plugin provides a very flexible structure that could be easily extended with respect to different adaptation logics or profiles of the DASH standard. As a consequence, the plugin enables the integration of a variety of adaptation logics and comparison thereof, making it attractive for the research community."}
{"_id": "178c3df6059d9fc20f8b643254289642c47b5d9d", "title": "Microcontroller Based Smart Helmet Using GSM & GPRS", "text": "The thought of developing this proposed model comes from social responsibility towards the society. Lack of immediate First Aid and Emergency medical services during accident are prime cause of death in majority of cases of accidents. The one of the main reasons for this may be late arrival of ambulance, no person at place of accident to give information to the ambulance. This thought of taking responsibility of society resulted in our proposed model \u201cMicrocontroller based smart helmet using GSM & GPRS\u201d. The aim of this proposed model design is to inform the responsible persons at the earliest about the accident so that they can take required actions to save the life of the injured person. Our proposed system consists of Arduino as Microcontroller, GSM for calling purpose, GPRS for tracking purpose and mainly Sensors to detect the accident. Our proposed system detects the accident and sends text message along with a voice message within a minute to the registered number."}
{"_id": "c31382156fe9f4e3096a390a475361b72f9f3e78", "title": "Optimal Beaconing Control for Epidemic Routing in Delay-Tolerant Networks", "text": "Owing to the uncertainty of transmission opportunities between mobile nodes, the routing in delay-tolerant networks (DTNs) exploits the mechanism of opportunistic forwarding. Energy-efficient algorithms and policies for DTN are crucial to maximizing the message delivery probability while reducing the delivery cost. In this contribution, we investigate the problem of energy-efficient optimal beaconing control in a DTN. We model the message dissemination under variable beaconing rate with a continuous-time Markov model. Based on this model, we then formulate the optimization problem of the optimal beaconing control for epidemic routing and obtain the optimal threshold policy from the solution of this optimization problem. Furthermore, through extensive numerical results, we demonstrate that the proposed optimal threshold policy significantly outperforms the static policy with constant beaconing rate in terms of system energy consumption savings."}
{"_id": "8b73edd9a803e83bad3f2de0f4e659db12868b85", "title": "Linking Transformational Leadership and Knowledge Sharing: The Mediating Roles of Perceived Team Goal Commitment and Perceived Team Identification", "text": "It is widely assumed that transformational leadership can effectively facilitate the sharing of knowledge among followers, but most investigations of the underlying mechanisms were based on the social exchange perspective. Based on a sensegiving theory perspective, this article attempts to uncover the mechanisms by which transformational leadership has its impact on employee knowledge sharing behavior by proposing two team-directed mediating mechanisms: perceived team goal commitment and perceived team identification. Results of multi-source and time-lagged data from 186 leader-follower pairs supported the proposed mediating effects. Implications and limitations are discussed."}
{"_id": "9ed3afe3736419dfcb96b738cb51f4d30bd265f2", "title": "Flattening Gamma: Radiometric Terrain Correction for SAR Imagery", "text": "Enabling intercomparison of synthetic aperture radar (SAR) imagery acquired from different sensors or acquisition modes requires accurate modeling of not only the geometry of each scene, but also of systematic influences on the radiometry of individual scenes. Terrain variations affect not only the position of a given point on the Earth's surface but also the brightness of the radar return as expressed in radar geometry. Without treatment, the hill-slope modulations of the radiometry threaten to overwhelm weaker thematic land cover induced backscatter differences, and comparison of backscatter from multiple satellites, modes, or tracks loses meaning. The ASAR & PALSAR sensors provide state vectors and timing with higher absolute accuracy than was previously available, allowing them to directly support accurate tie-point-free geolocation and radiometric normalization of their imagery. Given accurate knowledge of the acquisition geometry of a SAR image together with a digital height model (DHM) of the area imaged, radiometric image simulation is applied to estimate the local illuminated area for each point in the image. Ellipsoid-based or sigma naught (\u03c30) based incident angle approximations that fail to reproduce the effect of topographic variation in their sensor model are contrasted with a new method that integrates terrain variations with the concept of gamma naught (\u03b30) backscatter, converting directly from beta naught (\u03b20) to a newly introduced terrain-flattened \u03b30 normalization convention. The interpretability of imagery treated in this manner is improved in comparison to processing based on conventional ellipsoid or local incident angle based \u03c30 normalization."}
{"_id": "530b755651ddd136a55effc5aeb58239b7b49df6", "title": "Low Data Drug Discovery with One-Shot Learning", "text": "Recent advances in machine learning have made significant contributions to drug discovery. Deep neural networks in particular have been demonstrated to provide significant boosts in predictive power when inferring the properties and activities of small-molecule compounds (Ma, J. et al. J. Chem. Inf.\n\n\nMODEL\n2015, 55, 263-274). However, the applicability of these techniques has been limited by the requirement for large amounts of training data. In this work, we demonstrate how one-shot learning can be used to significantly lower the amounts of data required to make meaningful predictions in drug discovery applications. We introduce a new architecture, the iterative refinement long short-term memory, that, when combined with graph convolutional neural networks, significantly improves learning of meaningful distance metrics over small-molecules. We open source all models introduced in this work as part of DeepChem, an open-source framework for deep-learning in drug discovery (Ramsundar, B. deepchem.io. https://github.com/deepchem/deepchem, 2016)."}
{"_id": "2878bd8a17c2ba7c4df445cd88dd7fc2cb44e15d", "title": "Bayesian Modeling with Gaussian Processes using the GPstuff Toolbox", "text": "Gaussian processes (GP) are powerful tools for probabilistic modeling purposes. They can be used to define prior distributions over latent functions in hierarchical Bayesian models. The prior over functions is defined implicitly by the mean and covariance function, which determine the smoothness and variability of the function. The inference can then be conducted directly in the function space by evaluating or approximating the posterior process. Despite their attractive theoretical properties GPs provide practical challenges in their implementation. GPstuff is a versatile collection of computational tools for GP models compatible with Linux and Windows MATLAB and Octave. It includes, among others, various inference methods, sparse approximations and tools for model assessment. In this work, we review these tools and demonstrate the use of GPstuff in several models."}
{"_id": "accbf0cd5f19a9ec5bbdd97f11db8c3bfdbb4913", "title": "Noncontact Proximity Vital Sign Sensor Based on PLL for Sensitivity Enhancement", "text": "In this paper, a noncontact proximity vital sign sensor, using a phase locked loop (PLL) incorporated with voltage controlled oscillator (VCO) built-in planar type circular resonator, is proposed to enhance sensitivity in severe environments. The planar type circular resonator acts as a series feedback element of the VCO as well as a near-field receiving antenna. The frequency deviation of the VCO related to the body proximity effect ranges from 0.07 MHz/mm to 1.8 MHz/mm (6.8 mV/mm to 205 mV/mm in sensitivity) up to a distance of 50 mm, while the amount of VCO drift is about 21 MHz in the condition of 60 \u00b0C temperature range and discrete component tolerance of \u00b15%. Total frequency variation occurs in the capture range of the PLL which is 60 MHz. Thus, its loop control voltage converts the amount of frequency deviation into a difference of direct current (DC) voltage, which is utilized to extract vital signs regardless of the ambient temperature. The experimental results reveal that the proposed sensor placed 50 mm away from a subject can reliably detect respiration and heartbeat signals without the ambiguity of harmonic signals caused by respiration signal at an operating frequency of 2.4 GHz."}
{"_id": "fa1f10e6fd12698f7dd128396e24ce442b5dbfad", "title": "Principles of micro-RNA production and maturation", "text": "Micro-RNAs (miRNAs) are a class of approximately 22-nucleotide non-coding RNAs expressed in multicellular organisms. They are first transcribed in a similar manner to pre-mRNAs. The transcripts then go through a series of processing steps, including endonucleolytic cleavage, nuclear export and a strand selection procedure, to yield the single-stranded mature miRNA products. The transcription and processing of miRNAs determines the abundance and the sequence of mature miRNAs and has important implications for the function of miRNAs."}
{"_id": "fd21cfb3b3a573c9ba0de366605e194b14d00d46", "title": "A survey on nature inspired metaheuristic algorithms for partitional clustering", "text": "The partitional clustering concept started with K-means algorithm which was published in 1957. Since then many classical partitional clustering algorithms have been reported based on gradient descent approach. The 1990 kick started a new era in cluster analysis with the application of nature inspired metaheuristics. After initial formulation nearly two decades have passed and researchers have developed numerous new algorithms in this field. This paper embodies an up-to-date review of all major nature inspired metaheuristic algorithms employed till date for partitional clustering. Further, key issues involved during formulation of various metaheuristics as a clustering problem and major application areas are discussed. & 2014 Published by Elsevier B.V."}
{"_id": "6b7c8e174acd350677094c113d3d066dd368f77c", "title": "Scaling and criticality in a stochastic multi-agent model of a financial market", "text": "Financial prices have been found to exhibit some universal characteristics that resemble the scaling laws characterizing physical systems in which large numbers of units interact. This raises the question of whether scaling in finance emerges in a similar way \u2014 from the interactions of a large ensemble of market participants. However, such an explanation is in contradiction to the prevalent \u2018efficient market hypothesis\u2019 in economics, which assumes that the movements of financial prices are an immediate and unbiased reflection of incoming news about future earning prospects. Within this hypothesis, scaling in price changes would simply reflect similar scaling in the \u2018input\u2019 signals that influence them. Here we describe a multi-agent model of financial markets which supports the idea that scaling arises from mutual interactions of participants. Although the \u2018news arrival process\u2019 in our model lacks both power-law scaling and any temporal dependence in volatility, we find that it generates such behaviour as a result of interactions between agents."}
{"_id": "4455383c2139b1a9d12f9d8aa8bbbb18fe1558f2", "title": "From genetic privacy to open consent", "text": "Recent advances in high-throughput genomic technologies are showing concrete results in the form of an increasing number of genome-wide association studies and in the publication of comprehensive individual genome\u2013phenome data sets. As a consequence of this flood of information the established concepts of research ethics are stretched to their limits, and issues of privacy, confidentiality and consent for research are being re-examined. Here, we show the feasibility of the co-development of scientific innovation and ethics, using the open-consent framework that was implemented in the Personal Genome Project as an example."}
{"_id": "8fcc6d3c6fb68fcb409def9a385a1a65e39c30c1", "title": "Classification of Heart Disease Dataset using Multilayer Feed forward backpropogation Algorithm", "text": "In this study, for the classification of heart diseases dataset, multilayer feed forward network with backpropogation algorithm is introduced. Artificial Neural Network (ANN) is widely used data mining method to extract patterns. Data mining is the process of automating information discovery .Its main aim is to find relationships in data and to predict outcomes. Classification is one of the important data mining techniques for classifying given set of input data. Many real world problems in various fields can be solved by using classification approach such as business, science, industry and medicine. For analysis the medical data related to Heart diseases is considered and analyzed using artificial neural network (ANN). To perform classification task of medical data, the neural network is trained using back propagation algorithm with momentum. To increase the efficiency of the classification process parallel processing approach is also applies on each neuron in different layers."}
{"_id": "7e23f2bb61e32f50cda4014b404ab69b23cf5c56", "title": "Lightweight Authentication Scheme with Dynamic Group Members in IoT Environments", "text": "In IoT environments, the user may have many devices to connect each other and share the data. Also, the device will not have the powerful computation and storage ability. Many studies have focused on the lightweight authentication between the cloud server and the client in this environment. They can use the cloud server to help sensors or proxies to finish the authentication. But in the client side, how to create the group session key without the cloud capability is the most important issue in IoT environments. The most popular application network of IoT environments is the wireless body area network (WBAN). In WBAN, the proxy usually needs to control and monitor user's health data transmitted from the sensors. In this situation, the group authentication and group session key generation is needed. In this paper, in order to provide an efficient and robust group authentication and group session key generation in the client side of IoT environments, we propose a lightweight authentication scheme with dynamic group members in IoT environments. Our proposed scheme can satisfy the properties including the flexible generation of shared group keys, the dynamic participation, the active revocation, the low communication and computation cost, and no time synchronization problem. Also our scheme can achieve the security requirements including the mutual authentication, the group session key agreement, and prevent all various well-known attacks."}
{"_id": "3c3b48c93ece99f9ae237e80d431c7a6d5071fb0", "title": "Training a Fully Convolutional Neural Network to Route Integrated Circuits", "text": "We present a deep, fully convolutional neural network that learns to route a circuit layout \u2018net\u2019 with appropriate choice of metal tracks and wire class combinations. Inputs to the network are the encoded layouts containing spatial location of pins to be routed. After 15 fully convolutional stages followed by a score comparator, the network outputs 8 layout layers (corresponding to 4 route layers, 3 via layers and an identity-mapped pin layer) which are then decoded to obtain the routed layouts. We formulate this as a binary segmentation problem on a per-pixel per-layer basis, where the network is trained to correctly classify pixels in each layout layer to be \u2018on\u2019 or \u2018off\u2019. To demonstrate learnability of layout design rules, we train the network on a dataset of 50,000 train and 10,000 validation samples that we generate based on certain pre-defined layout constraints. Precision, recall and F1 score metrics are used to track the training progress. Our network achieves F1 \u2248 97% on the train set and F1 \u2248 92% on the validation set. We use PyTorch for implementing our model. Code is made publicly available1."}
{"_id": "e48bc680645fb85226fa3ce2a958f21441c8dd18", "title": "Smart home for elderly living using Wireless Sensor Networks and an Android application", "text": "A Smart Home (SH) is a house or an apartment equipped with advanced automation technologies to provide the occupants with intelligent monitoring and actionable information that can be situation specific. It allows for improvements in the way we live or work, and for improved energy efficiencies. The smart home is especially relevant for the elderly because the intelligent sensing systems would allow for remote monitoring and possibly control of health and environmental parameters according to their health status and living needs. The burgeoning elderly population and rising costs of elderly healthcare and home needs have led to a rapid evolution of the elderly smart living environment. However, the lack of proper eldercare smart home technologies has inspired us to develop Information and Communication Technologies (ICT) based smart home for the elderly using the Android platform. We have proposed a model for the elderly using Wireless Sensor Networks (WSNs) implemented as an Android application. Our smart home research potentially allows the elderly to continue to live in their own homes while being monitored non-invasively, seamlessly and economically according to their healthcare needs and status."}
{"_id": "ee24da4eadc2fb1bd4fa881de108ff8ede664ff5", "title": "E-GOVERNMENT EVALUATION FACTORS: CITIZEN\u2019S PERSPECTIVE", "text": "The e-government field is growing to a considerable size, both in its contents and position with respect to other research fields. The government to citizen segment of egovernment is taking the lead in terms of its importance and size. Like the evaluation of all other information systems initiatives, the evaluation of egovernments in both theory and practice has proved to be important but complex. The complexity of evaluation is mostly due to the multiple perspectives involved, the difficulties of quantifying benefits, and the social and technical context of use. The importance of e-government evaluation is due to the enormous investment of governments on delivering e-government services, and to the considerable pace of growing in the e-government field. However, despite the importance of the evaluation of e-government services, literature shows that e-government evaluation is still an immature area in terms of development and management. This work is part of a research effort that aims to develop a holistic evaluation framework for e-government systems. The main aim of this paper is to investigate the citizen\u2019 perspective in evaluating e-government services, and present a set of evaluating factors that influence citizens\u2019 utilization of e-government services. These evaluation factors can serve as part of an e-government evaluation framework. Moreover, the evaluation factors can also be used as means of providing valuable feedback for the planning of future egovernment initiatives."}
{"_id": "519da94369c1d87e09c592f239b55cc9486b5b7c", "title": "Attentive Moment Retrieval in Videos", "text": "In the past few years, language-based video retrieval has attracted a lot of attention. However, as a natural extension, localizing the specific video moments within a video given a description query is seldom explored. Although these two tasks look similar, the latter is more challenging due to two main reasons: 1) The former task only needs to judge whether the query occurs in a video and returns an entire video, but the latter is expected to judge which moment within a video matches the query and accurately returns the start and end points of the moment. Due to the fact that different moments in a video have varying durations and diverse spatial-temporal characteristics, uncovering the underlying moments is highly challenging. 2) As for the key component of relevance estimation, the former usually embeds a video and the query into a common space to compute the relevance score. However, the later task concerns moment localization where not only the features of a specific moment matter, but the context information of the moment also contributes a lot. For example, the query may contain temporal constraint words, such as \"first'', therefore need temporal context to properly comprehend them. To address these issues, we develop an Attentive Cross-Modal Retrieval Network. In particular, we design a memory attention mechanism to emphasize the visual features mentioned in the query and simultaneously incorporate their context. In the light of this, we obtain the augmented moment representation. Meanwhile, a cross-modal fusion sub-network learns both the intra-modality and inter-modality dynamics, which can enhance the learning of moment-query representation. We evaluate our method on two datasets: DiDeMo and TACoS. Extensive experiments show the effectiveness of our model as compared to the state-of-the-art methods."}
{"_id": "c67192cb7c82d2a0516b656909985823a5b2aba0", "title": "First-person hyper-lapse videos", "text": "We present a method for converting first-person videos, for example, captured with a helmet camera during activities such as rock climbing or bicycling, into hyper-lapse videos, i.e., time-lapse videos with a smoothly moving camera. At high speed-up rates, simple frame sub-sampling coupled with existing video stabilization methods does not work, because the erratic camera shake present in first-person videos is amplified by the speed-up. Our algorithm first reconstructs the 3D input camera path as well as dense, per-frame proxy geometries. We then optimize a novel camera path for the output video that passes near the input cameras while ensuring that the virtual camera looks in directions that can be rendered well from the input. Finally, we generate the novel smoothed, time-lapse video by rendering, stitching, and blending appropriately selected source frames for each output frame. We present a number of results for challenging videos that cannot be processed using traditional techniques."}
{"_id": "6a7fbe1f98ae98c93f1cc47f1900c5da59d26738", "title": "CubeNet: Equivariance to 3D Rotation and Translation", "text": "3D Convolutional Neural Networks are sensitive to transformations applied to their input. This is a problem because a voxelized version of a 3D object, and its rotated clone, will look unrelated to each other after passing through to the last layer of a network. Instead, an idealized model would preserve a meaningful representation of the voxelized object, while explaining the pose-difference between the two inputs. An equivariant representation vector has two components: the invariant identity part, and a discernable encoding of the transformation. Models that can\u2019t explain pose-differences risk \u201cdiluting\u201d the representation, in pursuit of optimizing a classification or regression loss function. We introduce a Group Convolutional Neural Network with linear equivariance to translations and right angle rotations in three dimensions. We call this network CubeNet, reflecting its cube-like symmetry. By construction, this network helps preserve a 3D shape\u2019s global and local signature, as it is transformed through successive layers. We apply this network to a variety of 3D inference problems, achieving state-of-the-art on the ModelNet10 classification challenge, and comparable performance on the ISBI 2012 Connectome Segmentation Benchmark. To the best of our knowledge, this is the first 3D rotation equivariant CNN for voxel representations."}
{"_id": "330f38ea4a5f21d82f320965b8c4acce54c5bbfb", "title": "Like: The Discourse Particle and Semantics", "text": "Using data from interviews with high school students, I first adduce evidence that lends support to Schourup\u2019s (1985) claim that the United States English adolescent hedge like is a discourse particle signalling a possible slight mismatch between words and meaning. Such a particle would generally be included in a grammar in a post-compositional pragmatic component, but, surprisingly, like also affects basic semantic attributes. These include both truth-conditions and the weak/strong distinction\u2014though only in existential there and sluicing sentences. I argue that the differential behaviour of like in various constructions selecting weak NP\u2019s stems from the restricted free variable it introduces, a variable which only there and sluicing require. This variable is available for binding, quantifier interpretation and other syntactic-semantic processes, yet is pragmatically conditioned. Indeed, I show that, due to its formal properties, like can be interpreted only during the assignment of model-theoretic denotations to expressions, along the lines of Lasersohn\u2019s (1999) pragmatic haloes. These results support the idea that weak/strong is not a unitary distinction and suggest that the various components of grammars must be organized to allow information from pragmatic/discourse elements to affect basic compositional semantics."}
{"_id": "a3cc257983108c9e5e0452f87991ec42de909339", "title": "A Maturity Model for Green ICT: The Case of the SURF Green ICT Maturity Model", "text": "This paper discusses the development and evaluation of a maturity model for Green ICT. We describe how the model was developed with the input of a number of Green ICT experts before it was released to the general public. The model consists of three domains with attributes on Green ICT that encompass both Greening of ICT as well as Greening by ICT. The quality of the model and its accuracy to capture the full scope of Green ICT has been evaluated through an online survey. We evaluated the quality of the model on relevancy of attributes, whether the attributes were well defined and whether the domains were complete. Twenty participants contributed meaningfully. Two attributes were considered to be irrelevant and six new attributes were suggested. With these results the quality of the maturity model can be improved. Our next step is to test the usefulness of the model by seeing how it is used in practice. We hope this paper inspires more work on testing the quality and usefulness of models and frameworks on Green ICT."}
{"_id": "1f715fbecfa66968181ea78d506c8dc0a93c7ae4", "title": "Zero-Shot Cross-lingual Classification Using Multilingual Neural Machine Translation", "text": "Transferring representations from large supervised tasks to downstream tasks has shown promising results in AI fields such as Computer Vision and Natural Language Processing (NLP). In parallel, the recent progress in Machine Translation (MT) has enabled one to train multilingual Neural MT (NMT) systems that can translate between multiple languages and are also capable of performing zero-shot translation. However, little attention has been paid to leveraging representations learned by a multilingual NMT system to enable zeroshot multilinguality in other NLP tasks. In this paper, we demonstrate a simple framework, a multilingual EncoderClassifier, for cross-lingual transfer learning by reusing the encoder from a multilingual NMT system and stitching it with a task-specific classifier component. Our proposed model achieves significant improvements in the English setup on three benchmark tasks Amazon Reviews, SST and SNLI. Further, our system can perform classification in a new language for which no classification data was seen during training, showing that zero-shot classification is possible and remarkably competitive. In order to understand the underlying factors contributing to this finding, we conducted a series of analyses on the effect of the shared vocabulary, the training data type for NMT, classifier complexity, encoder representation power, and model generalization on zero-shot performance. Our results provide strong evidence that the representations learned from multilingual NMT systems are widely applicable across languages and tasks."}
{"_id": "14e4bc9b66da64bae88474a77ccf903809eda8ca", "title": "Dynamic Units of Visual Speech", "text": "We present a new method for generating a dynamic, concatenative, unit of visual speech that can generate realistic visual speech animation. We redefine visemes as temporal units that describe distinctive speech movements of the visual speech articulators. Traditionally visemes have been surmized as the set of static mouth shapes representing clusters of contrastive phonemes (e.g. /p, b, m/, and /f, v/). In this work, the motion of the visual speech articulators are used to generate discrete, dynamic visual speech gestures. These gestures are clustered, providing a finite set of movements that describe visual speech, the visemes. Dynamic visemes are applied to speech animation by simply concatenating viseme units. We compare to static visemes using subjective evaluation. We find that dynamic visemes are able to produce more accurate and visually pleasing speech animation given phonetically annotated audio, reducing the amount of time that an animator needs to spend manually refining the animation."}
{"_id": "9eab375b7add5f02a0ae1293d0aa7880ea70233a", "title": "Statistical Parametric Speech Synthesis", "text": ""}
{"_id": "21160691043c73d65f491d100d35c80f6d0aba21", "title": "Photo-Realistic Talking-Heads from Image Samples", "text": "This paper describes a system for creating a photo-realistic model of the human head that can be animated and lip-synched from phonetic transcripts of text. Combined with a state-of-the-art text-to-speech synthesizer (TTS), it generates video animations of talking heads that closely resemble real people. To obtain a naturally looking head, we choose a \u201cdata-driven\u201d approach. We record a talking person and apply image recognition to extract automatically bitmaps of facial parts. These bitmaps are normalized and parameterized before being entered into a database. For synthesis, the TTS provides the audio track, as well as the phonetic transcript from which trajectories in the space of parameterized bitmaps are computed for all facial parts. Sampling these trajectories and retrieving the corresponding bitmaps from the database produces animated facial parts. These facial parts are then projected and blended onto an image of the whole head using its pose information. This talking head model can produce new, never recorded speech of the person who was originally recorded. Talking-head animations of this type are useful as a front-end for agents and avatars in multimedia applications such as virtual operators, virtual announcers, help desks, educational, and expert systems."}
{"_id": "2131860a0782b09d19edf6ed59b5569cac0dd7f6", "title": "Expressive speech-driven facial animation", "text": "Speech-driven facial motion synthesis is a well explored research topic. However, little has been done to model expressive visual behavior during speech. We address this issue using a machine learning approach that relies on a database of speech-related high-fidelity facial motions. From this training set, we derive a generative model of expressive facial motion that incorporates emotion control, while maintaining accurate lip-synching. The emotional content of the input speech can be manually specified by the user or automatically extracted from the audio signal using a Support Vector Machine classifier."}
{"_id": "f74c0634481026d6726765d1db2b3b4fc982879c", "title": "Comparison of main control strategies for DC/DC stage of bidirectional vehicle charger", "text": "This paper presents comparison of two control algorithms for DC/DC converters in vehicle chargers. It presents operation in phase-shifting algorithm and resonant LLC algorithm, their implementation, and test results."}
{"_id": "a969e666e0ad6c13960657ed75ada5902243673d", "title": "Feature Reduction for Document Clustering and Classification", "text": "Often users receive search results which contain a wide range of documents, only some of which are relevant to their information needs. To address this problem, ever more systems not only locate information for users, but also organise that information on their behalf. We look at two main automatic approaches to information organisation: interactive clustering of search results and pre-categorising documents to provide hierarchical browsing structures. To be feasible in real world applications, both of these approaches require accurate yet efficient algorithms. Yet, both suffer from the curse of dimensionality \u2014 documents are typically represented by hundreds or thousands of words (features) which must be analysed and processed during clustering or classification. In this paper, we discuss feature reduction techniques and their application to document clustering and classification, showing that feature reduction improves efficiency as well as accuracy. We validate these algorithms using human relevance assignments and categorisation."}
{"_id": "244ab650b25eb90be76fbbf0c4f8bd9121abe86f", "title": "The \"wake-sleep\" algorithm for unsupervised neural networks.", "text": "An unsupervised learning algorithm for a multilayer network of stochastic neurons is described. Bottom-up \"recognition\" connections convert the input into representations in successive hidden layers, and top-down \"generative\" connections reconstruct the representation in one layer from the representation in the layer above. In the \"wake\" phase, neurons are driven by recognition connections, and generative connections are adapted to increase the probability that they would reconstruct the correct activity vector in the layer below. In the \"sleep\" phase, neurons are driven by generative connections, and recognition connections are adapted to increase the probability that they would produce the correct activity vector in the layer above."}
{"_id": "f45447f428b9b90c9164b1756296e5c48d1506a8", "title": "A Novel Wideband Microstrip Line to Ridge Gap Waveguide Transition Using Defected Ground Slot", "text": "In this letter design of a novel, wideband and low loss microstrip line transition to ridge gap waveguide is described. The transition is made up of two sections. The first section is a microstrip line that feeds a slot on the ground plane. The microstrip line is placed inside a metal box. The second part is the slot to ridge gap waveguide matching section. The proposed design has a broadband coupling to ridge gap waveguide and covers the whole stop-band of the periodic gap structure. The design optimization is performed for the microstrip line, slot size and ridge gap waveguide matching ridges. A prototype in Ku-band is manufactured and measured that shows a return loss less than 14 dB and a back-to-back insertion loss smaller than 0.5 dB for the given bandwidth. The microstrip transition enables integration of monolithic microwave integrated circuits (MMICs) in ridge gap waveguide technology."}
{"_id": "33dd212f71d194394024ac7ea9e20b06759b2287", "title": "Technological Developments in Batteries: A Survey of Principal Roles, Types, and Management Needs", "text": "Battery energy storage effectively staBIlizes the electric grid and aids renewable integration by balancing supply and demand in real time. The importance of such storage is especially crucial in densely populated urban areas, where traditional storage techniques such as pumped hydroelectric energy storage and compressed-air energy storage are often not feasible."}
{"_id": "0d1141aed3b02027846bb88022212ba498a0be59", "title": "TRUSTED CODE EXECUTION ON UNTRUSTED PLATFORMS USING INTEL SGX", "text": "Today, isolated trusted computation and code execution is of paramount importance to protect sensitive information and workfl ows from other malicious privileged or unprivileged software. Intel Software Guard Extensions (SGX) is a set of security architecture extensions fi rst introduced in the Skylake microarchitecture that enables a Trusted Execution Environment (TEE). It provides an \u2018inverse sandbox\u2019, for sensitive programs, and guarantees the integrity and confi dentiality of secure computations, even from the most privileged malicious software (e.g. OS, hypervisor). SGX-capable CPUs only became available in production systems in Q3 2015, and they are not yet fully supported and adopted in systems. Besides the capability in the CPU, the BIOS also needs to provide support for the enclaves, and not many vendors have released the required updates for the system support. This has led to many wrong assumptions being made about the capabilities, features, and ultimately dangers of secure enclaves. By having access to resources and publications such as white papers, patents and the actual SGX-capable hardware and software development environment, we are in a privileged position to be able to investigate and demystify SGX. In this paper, we fi rst review the previous trusted execution technologies, such as ARM Trust Zone and Intel TXT, to better understand and appreciate the new innovations of SGX. Then, we look at the details of SGX technology, cryptographic primitives and the underlying concepts that power it, namely the sealing, attestation, and the Memory Encryption Engine (MEE). We also consider use cases such as trusted and secure code execution on an untrusted cloud platform, and digital rights management (DRM). This is followed by an overview of the software development environment and the available libraries."}
{"_id": "53b717444c90c1e4e47aad3be054a316007b7072", "title": "Getting stress out of stressed-out stress granules.", "text": "Amyotrophic lateral sclerosis (ALS) pathology is linked to the aberrant aggregation of specific proteins, including TDP-43, FUS, and SOD1, but it is not clear why these aggregation events cause ALS. In this issue of The EMBO Journal, Mateju et al (2017) report a direct link between misfolded proteins accumulating in stress granules and the phase transition of these stress granules from liquid to solid. This discovery provides a model connecting protein aggregation to stress granule dysfunction."}
{"_id": "a18205d645096f40bb9844293a4ca414e283ac64", "title": "Targeting the right students using data mining", "text": "The education domain offers a fertile ground for many interesting and challenging data mining applications. These applications can help both educators and students, and improve the quality of education. In this paper, we present a real-life application for the Gifted Education Programme (GEP) of the Ministry of Education (MOE) in Singapore. The application involves many data mining tasks. This paper focuses only on one task, namely, selecting students for remedial classes. Traditionally, a cut-off mark for each subject is used to select the weak students. That is, those students whose scores in a subject fall below the cut-off mark for the subject are advised to take further classes in the subject. In this paper, we show that this traditional method requires too many students to take part in the remedial classes. This not only increases the teaching load of the teachers, but also gives unnecessary burdens to students, which is particularly undesirable in our case because the GEP students are generally taking more subjects than non-GEP students, and the GEP students are encouraged to have more time to explore advanced topics. With the help of data mining, we are able to select the targeted students much more precisely."}
{"_id": "df8c04e4eb7671c70837c0cc972dc8a2f9679c45", "title": "Cultural diversity among nursing students: reanalysis of the cultural awareness scale.", "text": "Nurses are educated to provide culturally competent care. Cultural competence begins with cultural awareness, a concept previously measured with the Cultural Awareness Scale (CAS). The purpose of this study was to reanalyze the CAS to determine construct validity and differences in cultural awareness among students of varying educational levels and experiences. The sample consisted of 150 nursing students (92% female, 33.6% racial minorities). Confirmatory factor analysis yielded three factors (CFI = 0.868, TLI = 0.854, RMSEA = 0.065, and SRMR = 0.086). Cronbach's alpha ranged from 0.70 to 0.89. There were significant differences among educational levels, with lower division BSN students generally scoring higher than upper division and master's of science in nursing students. Students who had taken courses on cultural diversity or global health generally outscored those who had not taken such courses. Findings support the validity of the CAS and its applicability to research studies of cultural awareness in nursing."}
{"_id": "94c78fdc6908e7b6c29f39cd95b7cb6aca9edf95", "title": "A Hybrid Data Mining Approach for Intrusion Detection on Imbalanced NSL-KDD Dataset", "text": "Intrusion detection systems aim to detect malicious viruses from computer and network traffic, which is not possible using common firewall. Most intrusion detection systems are developed based on machine learning techniques. Since datasets which used in intrusion detection are imbalanced, in the previous methods, the accuracy of detecting two attack classes, R2L and U2R, is lower than that of the normal and other attack classes. In order to overcome this issue, this study employs a hybrid approach. This hybrid approach is a combination of synthetic minority oversampling technique (SMOTE) and cluster center and nearest neighbor (CANN). Important features are selected using leave one out method (LOO). Moreover, this study employs NSL KDD dataset. Results indicate that the proposed method improves the accuracy of detecting U2R and R2L attacks in comparison to the baseline paper by 94% and 50%, respectively. Keywords\u2014intrusion detection system; feature selection; imbalanced dataset; SMOTE; NSL KDD"}
{"_id": "e676d0952c2a729adb1606368ab60c79fe2a69fd", "title": "Fog Computing architecture to enable consumer centric Internet of Things services", "text": "Fog Computing is a recent computing paradigm that is extending cloud computing towards the edge of network. Due to its proximity to end-users, dense geographical distribution, open platform and support for high mobility, Fog Computing platforms can provide services with reduced latency and improved QoS. Thus it is becoming an important enabler for consumer centric Internet of Things based applications and services that require real time operations e.g. connected vehicles, smart road intersection management and smart grid. The paper discusses one such architecture for connected vehicles with Road Side Units (RSUs) and M2M gateways including the Fog Computing Platform. M2M data processing with semantics, discovery and management of connected vehicles are briefly discussed as consumer centric IoT services enabled by the distinct characteristics of Fog Computing."}
{"_id": "3485b01f9aa38d33d740994c306357ec83a660e7", "title": "Karma: A System for Mapping Structured Sources into the Semantic Web", "text": "The Linked Data cloud contains large amounts of RDF data generated from databases. Much of this RDF data, generated using tools such as D2R, is expressed in terms of vocabularies automatically derived from the schema of the original database. The generated RDF would be significantly more useful if it were expressed in terms of commonly used vocabularies. Using today\u2019s tools, it is labor-intensive to do this. For example, one can first use D2R to automatically generate RDF from a database and then use R2R to translate the automatically generated RDF into RDF expressed in a new vocabulary. The problem is that defining the R2R mappings is difficult and labor intensive because one needs to write the mapping rules in terms of SPARQL graph patterns. In this work, we present a semi-automatic approach for building mappings that translate data in structured sources to RDF expressed in terms of a vocabulary of the user\u2019s choice. Our system, Karma, automatically derives these mappings, and provides an easy to use interface that enables users to control the automated process to guide the system to produce the desired mappings. In our evaluation, users need to interact with the system less than once per column (on average) in order to construct the desired mapping rules. The system then uses these mapping rules to generate semantically rich RDF for the data sources. We demonstrate Karma using a bioinformatics example and contrast it with other approaches used in that community. Bio2RDF [7] and Semantic MediaWiki Linked Data Extension (SMW-LDE) [2] are examples of efforts that integrate bioinformatics datasets by mapping them to a common vocabulary. We applied our approach to a scenario used in the SMW-LDE that integrate ABA, Uniprot, KEGG Pathway, PharmGKB and Linking Open Drug Data datasets using a"}
{"_id": "0b189fbd1e0d4329f4808b2de6325441c7652998", "title": "Path-tracking for autonomous vehicles at the limit of friction", "text": "The ability to use all of the available tire force is essential to road vehicles for emergency maneuvers and racing. As the front tires of an understeering vehicle saturate while cornering at the limit of tire-road friction, steering is lost as a control input for path-tracking. Experimental data from an autonomous vehicle show that for path-tracking at the limit of friction through steering the value of friction needs to be known to within approximately 2%. This requirement exceeds the capabilities of existing real-time friction estimation algorithms. Data collected with a professional race car driver inspire a novel control framework, with a slip angle-based control strategy of maintaining the front tires at the slip angle for which maximum tire force is attained, and longitudinal speed control for path-tracking. This approach has significantly less demanding requirements on the accuracy of friction estimation. A controller is presented to explore this concept, and experimental results demonstrate successful tracking of a circular path at the friction limit without a priori friction information."}
{"_id": "bfddf2c4078b58aefd05b8ba7000aca1338f16d8", "title": "BEYOND STUDENT PERCEPTIONS: ISSUES OF INTERACTION, PRESENCE, AND PERFORMANCE IN AN ONLINE COURSE", "text": "The research literature on Web-based learning supports the assumption that interaction is important for a successful course, yet questions exist regarding the nature and extent of the interaction and its effects on student performance. Much of the research is based on student perceptions of the quality and quantity of their interactions and how much they have learned in an online course. The purpose of this study is to examine performance in an online course in relationship to student interaction and sense of presence in the course. Data on multiple independent (measures of interaction and presence) and dependent (measures of performance) variables were collected and subjected to analysis. An attempt was made to go beyond typical institutional performance measures such as grades and withdrawal rates and to examine measures specifically related to course objectives."}
{"_id": "0899bb286721ad5161573ce0c002c93af716f794", "title": "Grey Wolf Optimizer", "text": "This work proposes a new meta-heuristic called Grey Wolf Optimizer (GWO) inspired by grey wolves (Canis lupus). The GWO algorithm mimics the leadership hierarchy and hunting mechanism of grey wolves in nature. Four types of grey wolves such as alpha, beta, delta, and omega are employed for simulating the leadership hierarchy. In addition, the three main steps of hunting, searching for prey, encircling prey, and attacking prey, are implemented. The algorithm is then benchmarked on 29 well-known test functions, and the results are verified by a comparative study with Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), Differential Evolution (DE), Evolutionary Programming (EP), and Evolution Strategy (ES). The results show that the GWO algorithm is able to provide very competitive results compared to these wellknown meta-heuristics. The paper also considers solving three classical engineering design problems (tension/compression spring, welded beam, and pressure vessel designs) and presents a real application of the proposed method in the field of optical engineering. The results of the classical engineering design problems and real application prove that the proposed algorithm is applicable to challenging problems with unknown search spaces."}
{"_id": "15f8cfdc9b7342de99a7d25d64d5245c964b3781", "title": "Object detection in VHR optical remote sensing images via learning rotation-invariant HOG feature", "text": "Object detection in very high resolution (VHR) optical remote sensing images is one of the most fundamental but challenging problems in the field of remote sensing image analysis. As object detection is usually carried out in feature space, effective feature representation is very important to construct a high-performance object detection system. During the last decades, a great deal of effort has been made to develop various feature representations for the detection of different types of objects. Among various features developed for visual object detection, the histogram of oriented gradients (HOG) feature is maybe one of the most popular features that has been successfully applied to computer vision community. However, although the HOG feature has achieved great success in nature scene images, it is problematic to directly use it for object detection in optical remote sensing images because it is difficult to effectively handle the problem of object rotation variations. To explore a possible solution to the problem, this paper proposes a novel method to learn rotation-invariant HOG (RIHOG) features for object detection in optical remote sensing images. This is achieved by learning a rotation-invariant transformation model via optimizing a new objective function, which constrains the training samples before and after rotation to share the similar features to achieve rotation-invariance. In the experiments, we evaluate the proposed method on a publicly available 10-class VHR geospatial object detection dataset and comprehensive comparisons with state-of-the-arts demonstrate the effectiveness the proposed method."}
{"_id": "4ee39164c6c7b4d5484b706e475cd7f5768e8d87", "title": "An intelligent decision support system for skin cancer detection from dermoscopic images", "text": "It is challenging to develop an intelligent agent-based or robotic system to conduct long-term automatic health monitoring and robust efficient disease diagnosis as autonomous e-Carers in real-world applications. In this research, we aim to deal with such challenges by presenting an intelligent decision support system for skin lesion recognition as the initial step, which could be embedded into an intelligent service robot for health monitoring in home environments to promote early diagnosis. The system is developed to identify benign and malignant skin lesions using multiple steps, including pre-processing such as noise removal, segmentation, feature extraction from lesion regions, feature selection and classification. After extracting thousands of raw shape, colour and texture features from the lesion areas, a Genetic Algorithm (GA) is used to identify the most discriminating significant feature subsets for healthy and cancerous cases. A Support Vector Machine classifier has been employed to perform benign and malignant lesion recognition. Evaluated with 1300 images from the Dermofit dermoscopy image database, the empirical results indicate that our approach achieves superior performance in comparison to other related research reported in the literature."}
{"_id": "5b606354cc102f981616ee14d7ed710294870815", "title": "Missing Modalities Imputation via Cascaded Residual Autoencoder", "text": "Affordable sensors lead to an increasing interest in acquiring and modeling data with multiple modalities. Learning from multiple modalities has shown to significantly improve performance in object recognition. However, in practice it is common that the sensing equipment experiences unforeseeable malfunction or configuration issues, leading to corrupted data with missing modalities. Most existing multi-modal learning algorithms could not handle missing modalities, and would discard either all modalities with missing values or all corrupted data. To leverage the valuable information in the corrupted data, we propose to impute the missing data by leveraging the relatedness among different modalities. Specifically, we propose a novel Cascaded Residual Autoencoder (CRA) to impute missing modalities. By stacking residual autoencoders, CRA grows iteratively to model the residual between the current prediction and original data. Extensive experiments demonstrate the superior performance of CRA on both the data imputation and the object recognition task on imputed data."}
{"_id": "751d8c740ea2dc1587a26ac492ccd0bbbff13bc1", "title": "Convergence of Stochastic Gradient Descent for PCA", "text": "We consider the problem of principal component analysis (PC A) in a streaming stochastic setting, where our goal is to find a direction of approximate maximal va riance, based on a stream of i.i.d. data points inR. A simple and computationally cheap algorithm for this is st ochastic gradient descent (SGD), which incrementally updates its estimate based on ea ch new data point. However, due to the non-convex nature of the problem, analyzing its performanc e has been a challenge. In particular, existing guarantees rely on a non-trivial eigengap assumption on the covariance matrix, which is intuitively unnecessary. In this paper, we provide (to the best of our kno wledge) the first eigengap-free convergence guarantees for SGD in the context of PCA. This also part ially resolves an open problem posed in [10]. Moreover, under an eigengap assumption, we show tha t the same techniques lead to new SGD convergence guarantees with better dependence on the eigen gap."}
{"_id": "69e3cdc35529d52a331d3d9dce986793c0eaaa0e", "title": "Feeling cybervictims' pain-The effect of empathy training on cyberbullying.", "text": "As the world's population increasingly relies on the use of modern technology, cyberbullying becomes an omnipresent risk for children and adolescents and demands counteraction to prevent negative (online) experiences. The classroom-based German preventive intervention \"Medienhelden\" (engl.: \"Media Heroes\") builds on previous knowledge about links between cognitive empathy, affective empathy, and cyberbullying, among others. For an evaluation study, longitudinal data were available from 722 high school students aged 11-17 years (M\u2009=\u200913.36, SD\u2009=\u20091.00, 51.8% female) before and six months after the implementation of the program. A 10-week version and a 1-day version were conducted and compared with a control group (controlled pre-long-term-follow-up study). Schools were asked to randomly assign their participating classes to the intervention conditions. Multi-group structural equation modeling (SEM) showed a significant effect of the short intervention on cognitive empathy and significant effects of the long intervention on affective empathy and cyberbullying reduction. The results suggest the long-term intervention to be more effective in reducing cyberbullying and promoting affective empathy. Without any intervention, cyberbullying increased and affective empathy decreased across the study period. Empathy change was not generally directly linked to change in cyberbullying behavior. \"Media Heroes\" provides effective teaching materials and empowers schools to address the important topic of cyberbullying in classroom settings without costly support from the outside."}
{"_id": "fcc0086762312d6707feebaa31cc260b451d82dc", "title": "Item recommending by item co-purchasing network and user preference", "text": "A novel recommendation approach based on multiple-objectives optimization is introduced. It utilizes histories of item co-purchases as represented by directed graphs and histories of user preferences for items as represented by user-item rating matrices to create models of user behavior. Herein, multiple objectives for item recommendations are considering to maximize the ratings and rankings of each item as derived from user preferences and co-purchasing networks of items, respectively. Finally, optimal items, called Pareto optimal solutions, will be recommended."}
{"_id": "977f48fe151c06176049b832f2574d8c41309c3a", "title": "Text Classification using the Concept of Association Rule of Data Mining", "text": "As the amount of online text increases, the demand for text classification to aid the analysis and management of text is increasing. Text is cheap, but information, in the form of knowing what classes a text belongs to, is expensive. Automatic classification of text can provide this information at low cost, but the classifiers themselves must be built with expensive human effort, or trained from texts which have themselves been manually classified. In this paper we will discuss a procedure of classifying text using the concept of association rule of data mining. Association rule mining technique has been used to derive feature set from pre-classified text documents. Na\u00efve Bayes classifier is then used on derived features for final classification."}
{"_id": "0647b93b29158c219f84fefda41f7a4b33a7598f", "title": "Scene Text Detection via Connected Component Clustering and Nontext Filtering", "text": "In this paper, we present a new scene text detection algorithm based on two machine learning classifiers: one allows us to generate candidate word regions and the other filters out nontext ones. To be precise, we extract connected components (CCs) in images by using the maximally stable extremal region algorithm. These extracted CCs are partitioned into clusters so that we can generate candidate regions. Unlike conventional methods relying on heuristic rules in clustering, we train an AdaBoost classifier that determines the adjacency relationship and cluster CCs by using their pairwise relations. Then we normalize candidate word regions and determine whether each region contains text or not. Since the scale, skew, and color of each candidate can be estimated from CCs, we develop a text/nontext classifier for normalized images. This classifier is based on multilayer perceptrons and we can control recall and precision rates with a single free parameter. Finally, we extend our approach to exploit multichannel information. Experimental results on ICDAR 2005 and 2011 robust reading competition datasets show that our method yields the state-of-the-art performance both in speed and accuracy."}
{"_id": "22064d411a128de1a91ad87f86055e254c9c5321", "title": "Scene text detection using graph model built upon maximally stable extremal regions", "text": "Scene text detection could be formulated as a bi-label (text and non-text regions) segmentation problem. However, due to the high degree of intraclass variation of scene characters as well as the limited number of training samples, single information source or classifier is not enough to segment text from non-text background. Thus, in this paper, we propose a novel scene text detection approach using graph model built upon Maximally Stable Extremal Regions (MSERs) to incorporate various information sources into one framework. Concretely, after detecting MSERs in the original image, an irregular graph whose nodes are MSERs, is constructed to label MSERs as text regions or non-text ones. Carefully designed features contribute to the unary potential to assess the individual penalties for labeling a MSER node as text or non-text, and color and geometric features are used to define the pairwise potential to punish the likely discontinuities. By minimizing the cost function via graph cut algorithm, different information carried by the cost function could be optimally balanced to get the final MSERs labeling result. The proposed method is naturally context-relevant and scale-insensitive. Experimental results on the ICDAR 2011 competition dataset show that the proposed approach outperforms state-of-the-art methods both in recall and"}
{"_id": "64ad80fe2e375ae4a36f943f0056a4bea526e116", "title": "Scene Text Localization and Recognition with Oriented Stroke Detection", "text": "An unconstrained end-to-end text localization and recognition method is presented. The method introduces a novel approach for character detection and recognition which combines the advantages of sliding-window and connected component methods. Characters are detected and recognized as image regions which contain strokes of specific orientations in a specific relative position, where the strokes are efficiently detected by convolving the image gradient field with a set of oriented bar filters. Additionally, a novel character representation efficiently calculated from the values obtained in the stroke detection phase is introduced. The representation is robust to shift at the stroke level, which makes it less sensitive to intra-class variations and the noise induced by normalizing character size and positioning. The effectiveness of the representation is demonstrated by the results achieved in the classification of real-world characters using an euclidian nearest-neighbor classifier trained on synthetic data in a plain form. The method was evaluated on a standard dataset, where it achieves state-of-the-art results in both text localization and recognition."}
{"_id": "bac9024e49b9df65f08fac5ee39a2a54d96d94b4", "title": "Clustering by Local Gravitation", "text": "The objective of cluster analysis is to partition a set of data points into several groups based on a suitable distance measure. We first propose a model called local gravitation among data points. In this model, each data point is viewed as an object with mass, and associated with a local resultant force (LRF) generated by its neighbors. The motivation of this paper is that there exist distinct differences between the LRFs (including magnitudes and directions) of the data points close to the cluster centers and at the boundary of the clusters. To capture these differences efficiently, two new local measures named centrality and coordination are further investigated. Based on empirical observations, two new clustering methods called local gravitation clustering and communication with local agents are designed, and several test cases are conducted to verify their effectiveness. The experiments on synthetic data sets and real-world data sets indicate that both clustering approaches achieve good performance on most of the data sets."}
{"_id": "bbc013685fa1213dc43e3a4f47289dd0d8cba0b2", "title": "Priming and human memory systems.", "text": "Priming is a nonconscious form of human memory, which is concerned with perceptual identification of words and objects and which has only recently been recognized as separate from other forms of memory or memory systems. It is currently under intense experimental scrutiny. Evidence is converging for the proposition that priming is an expression of a perceptual representation system that operates at a pre-semantic level; it emerges early in development, and access to it lacks the kind of flexibility characteristic of other cognitive memory systems. Conceptual priming, however, seems to be based on the operations of semantic memory."}
{"_id": "2e6d28d44016a6cfab7f949f74fc129e27960575", "title": "UWB antipodal vivaldi antennas with protruded dielectric rods for higher gain, symmetric patterns and minimal phase center variations", "text": "In this paper, the authors proposed a technique to improve the gain, narrow the H-plane beamwidth, and minimize the phase center variations with frequency by utilizing Vivaldi antenna with a protruded dielectric rod. A sample antenna was fabricated and measured, and preliminary measured results are very promising, and in good agreement with the simulation results."}
{"_id": "0abfb1828decb712109b0520ecb957bd928e00c0", "title": "A theory of causal learning in children: causal maps and Bayes nets.", "text": "The authors outline a cognitive and computational account of causal learning in children. They propose that children use specialized cognitive systems that allow them to recover an accurate \"causal map\" of the world: an abstract, coherent, learned representation of the causal relations among events. This kind of knowledge can be perspicuously understood in terms of the formalism of directed graphical causal models, or Bayes nets. Children's causal learning and inference may involve computations similar to those for learning causal Bayes nets and for predicting with them. Experimental results suggest that 2- to 4-year-old children construct new causal maps and that their learning is consistent with the Bayes net formalism."}
{"_id": "63c6aa77e7c1d40f70223a73b08239423236899a", "title": "Investigating Personalized Search in E-Commerce", "text": "Personalized recommendations have become a common feature of many modern online services. In particular on ecommerce sites, one value of such recommendations is that they help consumers find items of interest in large product assortments more quickly. Many of today\u2019s sites take advantage of modern recommendation technologies to create personalized item suggestions for consumers navigating the site. However, limited research exists on the use of personalization and recommendation technology when consumers rely on the site\u2019s catalog search functionality to discover relevant items. In this work we explore the value of personalizing search results on e-commerce sites using recommendation technology. We design and evaluate different personalization strategies using log data of an online retail site. Our results show that considering several item relevance signals within the recommendation process in parallel leads to the best ranking of the search results. Specifically, the factors taken into account include the users\u2019 general interests, their most recent browsing behavior, as well as the consideration of current sales trends."}
{"_id": "8d8a9dcc403838e1ccb72ceb223ae8203fbbb521", "title": "Liquids permeation into HTV silicone rubber under AC/DC electric field", "text": "As shed and sheath materials, high temperature vulcanization (HTV) silicone rubber suffers from various types of environmental stresses in actual use. Liquids permeation, as a kind of silicone rubber aging factor, has been investigated by many researchers. However, most of the studies are conducted under the condition of no electric field. Since that composite insulators, especially the high voltage (HV) side, are under high electric field, it is worthy of attention on liquids permeation under AC/DC voltages. In this study, we utilized a new-designed liquids permeation test unit under AC/DC electric field to measure the weight of HTV silicone rubber specimens during the test process. Deionized water, as well as basic fuchsine ethanol solutions were used to verify the effects of electric field on liquids permeation. Besides, the influence of ATH content on the liquids permeation was involved."}
{"_id": "c47d151f09c567013761632c89e237431c6291a2", "title": "Efficient Randomized Pattern-Matching Algorithms", "text": "We present randomized algorithms to solve the following string-matching problem and some of its generalizations: Given a string X of length n (the pattern) and a string Y (the text), find the first occurrence of X as a consecutive block within Y. The algorithms represent strings of length n by much shorter strings called fingerprints, and achieve their efficiency by manipulating fingerprints instead of longer strings. The algorithms require a constant number of storage locations, and essentially run in real time. They are conceptually simple and easy to implement. The method readily generalizes to higher-dimensional patternmatching problems."}
{"_id": "6f163733669e3404afe44907f3a13c40324b120d", "title": "On-Line Multi-View Video Summarization for Wireless Video Sensor Network", "text": "Battery lifetime is critical for wireless video sensors. To enable battery-powered wireless video sensors, low-power design is required. In this paper, we consider applying multi-view summarization to wireless video sensors to remove redundant contents such that the compression and transmission power can be reduced. A low-complexity online multi-view video summarization scheme is proposed. Experiments show that the proposed summarization method successfully reduces the video content while keeping important events. A power analysis of the system also shows that a significant amount of energy can be saved."}
{"_id": "ba799b63c5224910a788bfd5f6520316f8b770f5", "title": "Publication bias in studies of an applied behavior-analytic intervention: an initial analysis.", "text": "Publication bias arises when studies with favorable results are more likely to be reported than are studies with null findings. If this bias occurs in studies with single-subject experimental designs(SSEDs) on applied behavior-analytic (ABA) interventions, it could lead to exaggerated estimates of intervention effects. Therefore, we conducted an initial test of bias by comparing effect sizes, measured by percentage of nonoverlapping data (PND), in published SSED studies (n=21) and unpublished dissertations (n=10) on 1 well-established intervention for children with autism, pivotal response treatment (PRT). Although published and unpublished studies had similar methodologies, the mean PND in published studies was 22% higher than in unpublished studies, 95% confidence interval (4%, 38%). Even when unpublished studies are included, PRT appeared to be effective (PNDM=62%). Nevertheless, the disparity between published and unpublished studies suggests a need for further assessment of publication bias in the ABA literature."}
{"_id": "663be2e0836fefe0fda59067ad4a775e6f260731", "title": "MLMD: Multi-Layered Visualization for Multi-Dimensional Data", "text": "In this paper, we propose and implement a multi-layered visualization technique in 3D space called Multi-Layered Visualization for Multi-Dimensional Data (MLMD) with its corresponding interaction techniques, for visualizing multi-dimensional data. Layers of point based plots are stacked and connected in a virtual visualization cube for comparison between different dimension settings. Viewed from the side, the plot layers intrinsically form parallel coordinates plots. MLMD integrates point based plots and parallel coordinates compactly so as to present more information at a time to help data investigation. Pertinent user interactions for MLMD method to enable convenient manipulation are designed. By using MLMD and its matching interaction techniques, proper dimension settings and in-depth data perception can be achieved."}
{"_id": "b592dc957029eb3a752e7bb35cd65012d7df6487", "title": "MQTT-S \u2014 A publish/subscribe protocol for Wireless Sensor Networks", "text": "Wireless sensor networks (WSNs) pose novel challenges compared with traditional networks. To answer such challenges a new communication paradigm, data-centric communication, is emerging. One form of data-centric communication is the publish/subscribe messaging system. Compared with other data-centric variants, publish/subscribe systems are common and wide-spread in distributed computing. Thus, extending publish/subscribe systems intoWSNs will simplify the integration of sensor applications with other distributed applications. This paper describes MQTT-S [1], an extension of the open publish/subscribe protocol message queuing telemetry transport (MQTT) [2] to WSNs. MQTT-S is designed in such a way that it can be run on low-end and battery-operated sensor/actuator devices and operate over bandwidth-constraint WSNs such as ZigBee-based networks. Various protocol design points are discussed and compared. MQTT-S has been implemented and is currently being tested on the IBM wireless sensor networking testbed [3]. Implementation aspects, open challenges and future work are also presented."}
{"_id": "714c17193b110e12f254dc03c06b46dd8d15bd61", "title": "An analysis of review content and reviewer variables that contribute to review helpfulness", "text": "Review helpfulness is attracting increasing attention of practitioners and academics. It helps in reducing risks and uncertainty faced by users in online shopping. This study examines uninvestigated variables by looking at not only the review characteristics but also important indicators of reviewers. Several significant review content and two reviewer variables are proposed and an effective review helpfulness prediction model is built using stochastic gradient boosting learning method. This study derived a mechanism to extract novel review content variables from review text. Six popular machine learning models and three real-life Amazon review data sets are used for analysis. Our results are robust to several product categories and along three Amazon review data sets. The results show that review content variables deliver the best performance as compared to the reviewer and state-of-the-art baseline as a standalone model. This study finds that reviewer helpfulness per day and syllables in review text strongly relates to review helpfulness. Moreover, the number of space, aux verb, drives words in review text and productivity score of a reviewer are also effective predictors of review helpfulness. The findings will help customers to write better reviews, help retailers to manage their websites intelligently and aid customers in their product purchasing decisions."}
{"_id": "d03d7fc0b9016f8fbfa294d7822e0f002e9b0143", "title": "SAM: Semantic Attribute Modulated Language Modeling", "text": "As a fundamental task in the natural language processing field, language modeling aims to estimate the distribution of the word sequences. However, the most existing algorithms have focused on the main texts while often ignoring the vastly-accessible semantic attributes of the documents, e.g., titles, authors, sentiments and tags. To address this issue, we propose Semantic Attribute Modulated (SAM) language modeling, a novel language modeling framework that incorporates the various semantic attributes. Attributes are selected automatically with an attribute attention mechanism. We build three text datasets with a diversity of semantic attributes. On the three text datasets, we empirically examine the language model perplexities of several typical corpora, and then demonstrate the superiority of our model with the different combinations of the attributes. Extensive qualitative results, including word semantic analysis, attention values and an interesting lyric generation, further demonstrate the effectiveness of our SAMmethod."}
{"_id": "66194fdf9083ea3a0f417f6ed32637365028514b", "title": "COBRAS: Fast, Iterative, Active Clustering with Pairwise Constraints", "text": "Constraint-based clustering algorithms exploit background knowledge to construct clusterings that are aligned with the interests of a particular user. This background knowledge is often obtained by allowing the clustering system to pose pairwise queries to the user: should these two elements be in the same cluster or not? Active clustering methods aim to minimize the number of queries needed to obtain a good clustering by querying the most informative pairs first. Ideally, a user should be able to answer a couple of these queries, inspect the resulting clustering, and repeat these two steps until a satisfactory result is obtained. We present COBRAS, an approach to active clustering with pairwise constraints that is suited for such an interactive clustering process. A core concept in COBRAS is that of a super-instance: a local region in the data in which all instances are assumed to belong to the same cluster. COBRAS constructs such superinstances in a top-down manner to produce highquality results early on in the clustering process, and keeps refining these super-instances as more pairwise queries are given to get more detailed clusterings later on. We experimentally demonstrate that COBRAS produces good clusterings at fast run times, making it an excellent candidate for the iterative clustering scenario outlined above."}
{"_id": "2ce1b73cee6ddf4dbff391e29b73b35e3fb67685", "title": "TSDF Manifolds: A Scalable and Consistent Dense Mapping Approach", "text": "In many applications, maintaining a consistent dense map of the environment is key to enabling robotic platforms to perform higher level decision making. Several works have addressed the challenge of creating precise dense 3D maps. However, during operation over longer missions, reconstructions can easily become inconsistent due to accumulated camera tracking error and delayed loop closure. Without explicitly addressing the problem of map consistency, recovery from such distortions tends to be difficult. We present a novel system for dense 3D mapping which addresses the challenge of building consistent maps while dealing with scalability. Central to our approach is the representation of the environment as a manifold map, comprised of overlapping Truncated Signed Distance Field (TSDF) volumes. These volumes are localized through featurebased bundle adjustment. Our main contribution is to use a probabilistic measure to identify stable regions in the map, and fuse the contributing subvolumes. This approach allows us to reduce map growth while still maintaining consistency. We demonstrate the proposed system on publicly available datasets, as well as on a number of our own datasets, and demonstrate the efficacy of the proposed approach for building consistent and scalable maps."}
{"_id": "2199be750c972389102f192ca742ef2b3a10a2eb", "title": "Local Rademacher Complexity for Multi-Label Learning", "text": "We analyze the local Rademacher complexity of empirical risk minimization-based multi-label learning algorithms, and in doing so propose a new algorithm for multi-label learning. Rather than using the trace norm to regularize the multi-label predictor, we instead minimize the tail sum of the singular values of the predictor in multi-label learning. Benefiting from the use of the local Rademacher complexity, our algorithm, therefore, has a sharper generalization error bound. Compared with methods that minimize over all singular values, concentrating on the tail singular values results in better recovery of the low-rank structure of the multi-label predictor, which plays an important role in exploiting label correlations. We propose a new conditional singular value thresholding algorithm to solve the resulting objective function. Moreover, a variance control strategy is employed to reduce the variance of variables in optimization. Empirical studies on real-world data sets validate our theoretical results and demonstrate the effectiveness of the proposed algorithm for multi-label learning."}
{"_id": "bfdda57de28e9149d51f9276fc94e8e1381d67a6", "title": "A Novel UWB Monopole Antenna With Tunable Notched Behavior Using Varactor Diode", "text": "This letter presents a novel ultrawideband (UWB) antenna with tunable notched band. The antenna is assembled on an FR4 substrate with thickness of 0.8 mm and \u03b5r = 4.4. By inserting a \u03c0-shaped slot on the radiating patch, band-notch function is achieved. By loading the slot with lumped varactor, tunability of the created notch would be possible. Based on this technique, an electronically controlled notched-band antenna is designed and fabricated including a single varactor diode with a varying capacitance value of 0.63-2.67 pF. Using the cited varactor, notched band tunability of 2.7-7.2 GHz is obtained."}
{"_id": "465a2d29d1c3b2a81e79b816d0ce25f5d6ad9bea", "title": "A multilevel relaxation algorithm for simultaneous localization and mapping", "text": "This paper addresses the problem of simultaneous localization and mapping (SLAM) by a mobile robot. An incremental SLAM algorithm is introduced that is derived from multigrid methods used for solving partial differential equations. The approach improves on the performance of previous relaxation methods for robot mapping, because it optimizes the map at multiple levels of resolution. The resulting algorithm has an update time that is linear in the number of estimated features for typical indoor environments, even when closing very large loops, and offers advantages in handling nonlinearities compared with other SLAM algorithms. Experimental comparisons with alternative algorithms using two well-known data sets and mapping results on a real robot are also presented."}
{"_id": "2f11a5686253b2089dcaef773ee508f8ef63ab42", "title": "Efficient Learning for Undirected Topic Models", "text": "Replicated Softmax model, a well-known undirected topic model, is powerful in extracting semantic representations of documents. Traditional learning strategies such as Contrastive Divergence are very inefficient. This paper provides a novel estimator to speed up the learning based on Noise Contrastive Estimate, extended for documents of variant lengths and weighted inputs. Experiments on two benchmarks show that the new estimator achieves great learning efficiency and high accuracy on document retrieval and classification."}
{"_id": "9b06f4292a18c88681fe394fde9748d915b04ab1", "title": "Heterogeneity of multiple sclerosis lesions: implications for the pathogenesis of demyelination.", "text": "Multiple sclerosis (MS) is a disease with profound heterogeneity in clinical course, neuroradiological appearance of the lesions, involvement of susceptibility gene loci, and response to therapy. These features are supported by experimental evidence, which demonstrates that fundamentally different processes, such as autoimmunity or virus infection, may induce MS-like inflammatory demyelinating plaques and suggest that MS may be a disease with heterogeneous pathogenetic mechanisms. From a large pathology sample of MS, collected in three international centers, we selected 51 biopsies and 32 autopsies that contained actively demyelinating lesions defined by stringent criteria. The pathology of the lesions was analyzed using a broad spectrum of immunological and neurobiological markers. Four fundamentally different patterns of demyelination were found, defined on the basis of myelin protein loss, the geography and extension of plaques, the patterns of oligodendrocyte destruction, and the immunopathological evidence of complement activation. Two patterns (I and II) showed close similarities to T-cell-mediated or T-cell plus antibody-mediated autoimmune encephalomyelitis, respectively. The other patterns (III and IV) were highly suggestive of a primary oligodendrocyte dystrophy, reminiscent of virus- or toxin-induced demyelination rather than autoimmunity. At a given time point of the disease--as reflected in autopsy cases--the patterns of demyelination were heterogeneous between patients, but were homogenous within multiple active lesions from the same patient. This pathogenetic heterogeneity of plaques from different MS patients may have fundamental implications for the diagnosis and therapy of this disease."}
{"_id": "3c37515e7037925c3f0a475b03be72dc853b8533", "title": "Augmenting web pages and search results to support credibility assessment", "text": "The presence (and, sometimes, prominence) of incorrect and misleading content on the Web can have serious consequences for people who increasingly rely on the internet as their information source for topics such as health, politics, and financial advice. In this paper, we identify and collect several page features (such as popularity among specialized user groups) that are currently difficult or impossible for end users to assess, yet provide valuable signals regarding credibility. We then present visualizations designed to augment search results and Web pages with the most promising of these features. Our lab evaluation finds that our augmented search results are particularly effective at increasing the accuracy of users'\" credibility assessments, highlighting the potential of data aggregation and simple interventions to help people make more informed decisions as they search for information online."}
{"_id": "7740bc0f8afdcf2199b797c904b07cddb401682a", "title": "High-Frequency Behavior of Graphene-Based Interconnects\u2014Part I: Impedance Modeling", "text": "This paper presents the first detailed methodology for the accurate evaluation of high-frequency impedance of graphene-based structures relevant to on-chip interconnect and inductor applications. Going beyond the simplifying assumptions of Ohm's law, the effects of electric-field variation within a mean free path and current dependency on the nonlocal electric-field are taken into account to accurately capture the high-frequency behavior of graphene ribbons (GRs). At the same time, a simplified approach that may be adopted at lower frequencies is also explained. Starting from the basic Boltzmann equation and combining with the unique dispersion relation for graphene in its hexagonal Brillouin zone, the current density across the GR structure is derived. First, a semi-infinite slab of GR is analyzed using the theory of Fourier integrals, which is followed by the development of a rigorous methodology for practical finite structures based on a self-consistent numerical calculation of the derived current density using the Green's function approach."}
{"_id": "21a1654b856cf0c64e60e58258669b374cb05539", "title": "You Only Look Once: Unified, Real-Time Object Detection", "text": "We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork."}
{"_id": "2dc9b005e936c9c303386caacc8d41cabdb1a0a1", "title": "Return of the Devil in the Details: Delving Deep into Convolutional Nets", "text": "The latest generation of Convolutional Neural Networks (CNN) have achieved impressive results in challenging benchmarks on image recognition and object detection, significantly raising the interest of the community in these methods. Nevertheless, it is still unclear how different CNN methods compare with each other and with previous state-of-the-art shallow representations such as the Bag-of-Visual-Words and the Improved Fisher Vector. This paper conducts a rigorous evaluation of these new techniques, exploring different deep architectures and comparing them on a common ground, identifying and disclosing important implementation details. We identify several useful properties of CNN-based representations, including the fact that the dimensionality of the CNN output layer can be reduced significantly without having an adverse effect on performance. We also identify aspects of deep and shallow methods that can be successfully shared. In particular, we show that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods, and result in an analogous performance boost. Source code and models to reproduce the experiments in the paper is made publicly available."}
{"_id": "5a39d6c1bb04737cc81634f3ea2e81d3bc1ee6dd", "title": "Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks", "text": "Recognizing arbitrary multi-character text in unconstrained natural photographs is a hard problem. In this paper, we address an equally hard sub-problem in this domain viz. recognizing arbitrary multi-digit numbers from Street View imagery. Traditional approaches to solve this problem typically separate out the localization, segmentation, and recognition steps. In this paper we propose a unified approach that integrates these three steps via the use of a deep convolutional neural network that operates directly on the image pixels. We employ the DistBelief (Dean et al., 2012) implementation of deep neural networks in order to train large, distributed neural networks on high quality images. We find that the performance of this approach increases with the depth of the convolutional network, with the best performance occurring in the deepest architecture we trained, with eleven hidden layers. We evaluate this approach on the publicly available SVHN dataset and achieve over 96% accuracy in recognizing complete street numbers. We show that on a per-digit recognition task, we improve upon the state-of-the-art and achieve 97.84% accuracy. We also evaluate this approach on an even more challenging dataset generated from Street View imagery containing several tens of millions of street number annotations and achieve over 90% accuracy. Our evaluations further indicate that at specific operating thresholds, the performance of the proposed system is comparable to that of human operators. To date, our system has helped us extract close to 100 million physical street numbers from Street View imagery worldwide."}
{"_id": "746add3fde7424f55d8424894e663eee51dc8f1c", "title": "Delay-constrained throughput maximization in UAV-enabled OFDM systems", "text": "The use of unmanned aerial vehicles (UAVs) as aerial base stations (BSs) is of great practical significance in future wireless networks, especially for on-demand deployment during a temporary event and emergency situation. Although prior works have demonstrated the performance improvement brought by the UAV mobility, they mainly focus on the delay-tolerant applications such as file transfer and data collection. As such, it is unknown if the UAV mobility is able to provide performance gain for delay-constrained applications, such as video conferencing and online gaming. Motivated by this, we study in this paper a UAV-enabled downlink orthogonal division multiple access (OFDMA) network where a UAV is dispatched to serve two ground users within a given flight period. By taking into account the delay-specified minimum-rate-ratio constraints of the users, our goal is to maximize the minimum user throughput by jointly optimizing the UAV trajectory and communication resource allocation. We show that the max-min user throughput in general decreases as the minimum-rate-ratio constraints become more stringent, which reveals a fundamental tradeoff between the throughput gain by exploiting the UAV mobility and the user delay requirement. Simulation results verify our theoretical findings and also demonstrate the effectiveness of our proposed design."}
{"_id": "286e9d6b0fa61c77e25b73315a4989f43501ae4f", "title": "Data Encryption Scheme Based on Rules of Cellular Automata and Chaotic Map Function for Information Security", "text": "Cryptography has recently played a significant role in secure data transmissions and storages. Most conventional data encryption schemes are relatively complicated and complexity in encrypted keys is insufficient, resulting in long computational time and low degree of security against all types of attacks. Consequently, a highlysecured and robust data encryption scheme is necessary. This paper therefore presents the data encryption scheme based on a combination of Cellular Automata (CA) and a robust chaotic system that employs absolutevalue piecewise-linear nonlinearity. The proposed encryption scheme is not only applicable to image encryption but also extendable to text and Excel files. Simulation results reveal that the entropy of the encrypted image is nearly 8 corresponding to the image entropy theory, and the correlation coefficients of before-and-after encrypted images are close to zero. Besides, the histogram of the encrypted image of pixel values in the range (0-255) is apparently flat, indicating that the color in image is fully distributed. Such results indicate that the proposed data encryption scheme offers a high level of security with fast processing time duration. The implementation of image encryption Android application is demonstrated. Comparisons to other related approaches are also included."}
{"_id": "fbc2db56170a1edd090fdeda890ed057f833b84b", "title": "LogKV: Exploiting Key-Value Stores for Log Processing", "text": "Event log processing and analysis play a key role in applications ranging from security management, IT trouble shooting, to user behavior analysis. Recent years have seen a rapid growth in system scales and the corresponding rapid increase in the amount of log event data. At the same time, as logs are found to be a valuable information source, log analysis tasks have become more sophisticated demanding both interactive exploratory query processing and batch computation. Desirable query types include selection with time ranges and value filtering criteria, join within time windows, join between log data and reference tables, and various aggregation types. In such a situation, parallel solutions are necessary, but existing parallel and distributed solutions either support limited query types or perform only batch computations on logs. With a system called LogKV, this paper reports a first study of using Key-Value stores to support log processing and analysis, exploiting the scalability, reliability, and efficiency commonly found in Key-Value store systems. LogKV contains a number of unique techniques that are needed to handle log data in terms of event ingestion, load balancing, storage optimization, and query processing. Preliminary experimental results show that LogKV is a promising solution."}
{"_id": "02ec02786d87a88853dc76598d4c02a6e515e7ba", "title": "TCP Ordo: The cost of ordered processing in TCP servers", "text": "To achieve scalable, high-throughput, low-latency packet processing, TCP implementations are parallelized across cores in multicore platforms. This, however, significantly affects the order in which packets from different flows are delivered to application processing. Our measurements record up to 75% of the packets are delivered to applications in a way that does not match the order in which they are received on the network interface. For many important classes of applications, such as financial services, bidding and trading engines, game engines, this cross-flow packet reordering affects their ability to provide fairness guarantees. To address this gap, we propose TCP-Ordo - a TCP stack which provides strict ordering of packets across multiple flows, as well as flexibility to control the degree to which ordering is enforced. TCP-Ordo outperforms existing TCP implementations in both latency and throughput. The current prototype is implemented as a user-level TCP stack for Mellanox Connectx3 NICs with 40Gbps Ethernet interfaces. Without ordering guarantees TCP-Ordo delivers one way latency of 4.75usec (a 5x improvement over the Linux kernel) for 150B packets and throughput of 22Gbps (a 2x improvement over mTCP) for 1500B packets. Furthermore, with TCP-Ordo applications are provided with strict packet order delivery, with performance that continues to be superior to the state-of-the-art, even when enforcing packet order across 800,000 connections on a 12-core platform. Finally, TCP-Ordo supports the notion of `ordered domains' that offer flexibility in the degree of ordering that an application will experience, and pay for."}
{"_id": "413d49118e821c68fc69a26b88f69ad7b63119af", "title": "An Ultralow-Power Low-Noise CMOS Biopotential Amplifier for Neural Recording", "text": "This brief presents a design strategy for a neural recording amplifier array with ultralow-power low-noise operation that is suitable for large-scale integration. The topology combines a highly efficient but supply-sensitive single-ended first stage with a shared reference channel and a differential second stage to effect feedforward supply noise cancellation, combining the low power of single-ended amplifiers with improved supply rejection. For a two-channel amplifier, the measurements show a midband gain of 58.7 dB and a passband from 490 mHz to 10.5 kHz. The amplifier consumes 2.85 \u03bcA per channel from a 1-V supply and exhibits an input-referred noise of 3.04 \u03bcVrms from 0.1 Hz to 100 kHz, corresponding to a noise efficiency factor of 1.93. The power supply rejection ratio is better than 50 dB in the passband. The amplifier is fabricated in a 90-nm CMOS process and occupies 0.137 mm2 of chip area."}
{"_id": "4f43825eb0e31bc9c58f2ed305cf4d45f6f12be5", "title": "Design and Verification of Secure Systems", "text": "This paper reviews some of the difficulties that arise in the verification of kernelized secure systems and suggests new techniques for their resolution.\n It is proposed that secure systems should be conceived as distributed systems in which security is achieved partly through the physical separation of its individual components and partly through the mediation of trusted functions performed within some of those components. The purpose of a security kernel is simply to allow such a 'distributed' system to actually run within a single processor; policy enforcement is not the concern of a security kernel.\n This approach decouples verification of components which perform trusted functions from verification of the security kernel. This latter task may be accomplished by a new verification technique called 'proof of separability' which explicitly addresses the security relevant aspects of interrupt handling and other issues ignored by present methods."}
{"_id": "929bb4a0088a0b420cbf08684e374761690b19f2", "title": "Critical review of the e-loyalty literature: a purchase-centred framework", "text": ""}
{"_id": "e8b0a33837e8ffda6bd95e8fa50d5b9745a729f7", "title": "A new word sense similarity measure in wordnet", "text": "Recognizing similarities between words is a basic element of computational linguistics and artificial intelligence applications. This paper presents a new approach for measuring semantic similarity between words via concepts. Our proposed measure is a hybrid system based on using a new Information content metric and edge counting-based tuning function. In proposed system, hierarchical structure is used to present information content instead of text corpus and our result will be improved by edge counting-based tuning function. The result of the system is evaluated against human similarity ratings demonstration and shows significant improvement in compare with traditional similarity measures."}
{"_id": "fe38277161347bfa700d114d3b38ea9d6b3a487b", "title": "Performance evaluation of pattern classifiers for handwritten character recognition", "text": "This paper describes a performance evaluation study in which some efficient classifiers are tested in handwritten digit recognition. The evaluated classifiers include a statistical classifier (modified quadratic discriminant function, MQDF), three neural classifiers, and an LVQ (learning vector quantization) classifier. They are efficient in that high accuracies can be achieved at moderate memory space and computation cost. The performance is measured in terms of classification accuracy, sensitivity to training sample size, ambiguity rejection, and outlier resistance. The outlier resistance of neural classifiers is enhanced by training with synthesized outlier data. The classifiers are tested on a large data set extracted from NIST SD19. As results, the test accuracies of the evaluated classifiers are comparable to or higher than those of the nearest neighbor (1-NN) rule and regularized discriminant analysis (RDA). It is shown that neural classifiers are more susceptible to small sample size than MQDF, although they yield higher accuracies on large sample size. As a neural classifier, the polynomial classifier (PC) gives the highest accuracy and performs best in ambiguity rejection. On the other hand, MQDF is superior in outlier rejection even though it is not trained with outlier data. The results indicate that pattern classifiers have complementary advantages and they should be appropriately combined to achieve higher performance."}
{"_id": "24ecee4f907f388b85db3c7342b487bda5b83b42", "title": "Optimizing Regular Path Expressions Using Graph Schemas", "text": "Several languages, such as LOREL and UnQL, support querying of semi-structured data. Others, such as WebSQL and WebLog, query Web sites. All these languages model data as labeled graphs and use regular path expressions to express queries that traverse arbitrary paths in graphs. Naive execution of path expressions is ineecient, however, because it often requires exhaustive graph search. We describe two optimization techniques for queries with regular path expressions, which we call regular queries. Both rely on graph schemas, which specify partial knowledge of a graph's structure. Query pruning restricts search to a fragment of the graph; we give an eecient algorithm for rewriting any regular query into a pruned one. Query rewriting using state extents can entirely eliminate or substantially reduce graph traversal; it is reminiscent of optimizing relational queries using indices. There may be several ways to optimize a query using state extents ; we give an exponential-time algorithm that nds all such optimizations. On special forms of regular queries, the algorithm is provably eecient. We also give an eecient approximation algorithm that works on all regular queries. Our target databases are \\intranet\" sites, i.e., Web sites associated with a company or organization. Intranet sites are good targets for our techniques, because they often reeect the structure of their organization, hence their structure is partially known. Preliminary results show that regular queries applied to an intranet site produce more precise results than keyword-only queries and that our optimizations substantially reduce the cost of evaluating regular queries, indicating regular queries could be evaluated eeciently by Web-site search engines."}
{"_id": "f0d766a6dbd2c094fb5c67f0a15d10dbf311967e", "title": "Performance Evaluation of SVM and K-Nearest Neighbor Algorithm over Medical Data set", "text": "In this age of computer science each and every thing becomes intelligent and perform task as human. For that purpose there are various tools, techniques and methods are proposed. Support vector machine is a model for statistics and computer science, to perform supervised learning, methods that are used to make analysis of data and recognize patterns. SVM is mostly used for classification and regression analysis. And in the same way k-nearest neighbor algorithm is a classification algorithm used to classify data using training examples. In this paper we use SVM and KNN algorithm to classify data and get prediction (find hidden patterns) for target. Here we use medical patients nominal data to classify and discover the data pattern to predict future disease, Uses data mining which is use to classify text analysis in future."}
{"_id": "6269c027bb8707c292ef5364bb6c6e983c68da63", "title": "2-Point-based outlier rejection for camera-IMU systems with applications to micro aerial vehicles", "text": "This paper presents a novel method to perform the outlier rejection task between two different views of a camera rigidly attached to an Inertial Measurement Unit (IMU). Only two feature correspondences and gyroscopic data from IMU measurerments are used to compute the motion hypothesis. By exploiting this 2-point motion parametrization, we propose two algorithms to remove wrong data associations in the feature-matching process for case of a 6DoF motion. We show that in the case of a monocular camera mounted on a quadrotor vehicle, motion priors from IMU can be used to discard wrong estimations in the framework of a 2-point-RANSAC based approach. The proposed methods are evaluated on both synthetic and real data."}
{"_id": "867940a08ad9e4260f9dab8d5ea4695ffc435a80", "title": "Phoneme visem mapping for Marathi language using linguistic approach", "text": "Visual speech is transcribed using visems. Visem is a set of visibly different phonemes in a language. There is high correlation between Phoneme and visem and phoneme to visem mapping is many to one because of the fact that many phonemes cannot be distinguished visually. Since each language has its own phonetic set, visem classification becomes language dependant process. Most of the phoneme visem maps available in literature are developed for English language. This paper presents phoneme visem mapping of consonants and vowels of Marathi language based on linguistic approach. This is the first attempt towards Marathi phoneme visem mapping and will be useful for visual speech recognition in Marathi language."}
{"_id": "018fea354b8702eebbe6797405654322bb1b6da0", "title": "Adoption and use of Java generics", "text": "Support for generic programming was added to the Java language in 2004, representing perhaps the most significant change to one of the most widely used programming languages today. Researchers and language designers anticipated this addition would relieve many long-standing problems plaguing developers, but surprisingly, no one has yet measured how generics have been adopted and used in practice. In this paper, we report on the first empirical investigation into how Java generics have been integrated into open source software by automatically mining the history of 40 popular open source Java programs, traversing more than 650 million lines of code in the process. We evaluate five hypotheses and research questions about how Java developers use generics. For example, our results suggest that generics sometimes reduce the number of type casts and that generics are usually adopted by a single champion in a project, rather than all committers. We also offer insights into why some features may be adopted sooner and others features may be held back."}
{"_id": "f5129871011d9aebeab333ea04e2f753678054ca", "title": "SVM classifier on chip for melanoma detection", "text": "Support Vector Machine (SVM) is a common classifier used for efficient classification with high accuracy. SVM shows high accuracy for classifying melanoma (skin cancer) clinical images within computer-aided diagnosis systems used by skin cancer specialists to detect melanoma early and save lives. We aim to develop a medical low-cost handheld device that runs a real-time embedded SVM-based diagnosis system for use in primary care for early detection of melanoma. In this paper, an optimized SVM classifier is implemented onto a recent FPGA platform using the latest design methodology to be embedded into the proposed device for realizing online efficient melanoma detection on a single system on chip/device. The hardware implementation results demonstrate a high classification accuracy of 97.9% and a significant acceleration factor of 26 from equivalent software implementation on an embedded processor, with 34% of resources utilization and 2 watts for power consumption. Consequently, the implemented system meets crucial embedded systems constraints of high performance and low cost, resources utilization and power consumption, while achieving high classification accuracy."}
{"_id": "f331b6490588cbe8fae89ba67c11a18e2ca0c795", "title": "Hardware Trojan Detection by Multiple-Parameter Side-Channel Analysis", "text": "Hardware Trojan attack in the form of malicious modification of a design has emerged as a major security threat. Sidechannel analysis has been investigated as an alternative to conventional logic testing to detect the presence of hardware Trojans. However, these techniques suffer from decreased sensitivity toward small Trojans, especially because of the large process variations present in modern nanometer technologies. In this paper, we propose a novel noninvasive, multiple-parameter side-channel analysisbased Trojan detection approach. We use the intrinsic relationship between dynamic current and maximum operating frequency of a circuit to isolate the effect of a Trojan circuit from process noise. We propose a vector generation approach and several design/test techniques to improve the detection sensitivity. Simulation results with two large circuits, a 32-bit integer execution unit (IEU) and a 128-bit advanced encryption standard (AES) cipher, show a detection resolution of 1.12 percent amidst \u00b120 percent parameter variations. The approach is also validated with experimental results. Finally, the use of a combined side-channel analysis and logic testing approach is shown to provide high overall detection coverage for hardware Trojan circuits of varying types and sizes."}
{"_id": "9637933765f969cda170391d95ee3ff792299665", "title": "Sentiment Analyzer with Rich Features for Ironic and Sarcastic Tweets", "text": "Sentiment Analysis of tweets is a complex task, because these short messages employ unconventional language to increase the expressiveness. This task becomes even more difficult when people use figurative language (e.g. irony, sarcasm and metaphors) because it causes a mismatch between the literal meaning and the actual expressed sentiment. In this paper, we describe a sentiment analysis system designed for handling ironic and sarcastic tweets. Features grounded on several linguistic levels are proposed and used to classify the tweets in a 11-scale range, using a decision tree. The system is evaluated on the dataset released by the organizers of the SemEval 2015, task 11. The results show that our method largely outperforms the systems proposed by the participants of the task on ironic and sarcastic tweets."}
{"_id": "10dedb59fdbe635aa12c850f83e4bb8aa9c3f9eb", "title": "Using neural network ensembles for bankruptcy prediction and credit scoring", "text": "Bankruptcy prediction and credit scoring have long been regarded as critical topics and have been studied extensively in the accounting and finance literature. Artificial intelligence and machine learning techniques have been used to solve these financial decision-making problems. The multilayer perceptron (MLP) network trained by the back-propagation learning algorithm is the mostly used technique for financial decision-making problems. In addition, it is usually superior to other traditional statistical models. Recent studies suggest combining multiple classifiers (or classifier ensembles) should be better than single classifiers. However, the performance of multiple classifiers in bankruptcy prediction and credit scoring is not fully understood. In this paper, we investigate the performance of a single classifier as the baseline classifier to compare with multiple classifiers and diversified multiple classifiers by using neural networks based on three datasets. By comparing with the single classifier as the benchmark in terms of average prediction accuracy, the multiple classifiers only perform better in one of the three datasets. The diversified multiple classifiers trained by not only different classifier parameters but also different sets of training data perform worse in all datasets. However, for the Type I and Type II errors, there is no exact winner. We suggest that it is better to consider these three classifier architectures to make the optimal financial decision. 2007 Elsevier Ltd. All rights reserved."}
{"_id": "75b5d382276853496d7bd5a24381d4cdbd2202d1", "title": "Simplified Optimal Trajectory Control (SOTC) for LLC Resonant Converters", "text": "In this paper, a simplified optimal trajectory control (SOTC) for the LLC resonant converter is proposed. During the steady state, a linear compensator such as proportional-integral or proportional-integral-derivative (PID) is used, controlling the switching frequency fs to eliminate the steady-state error. However, during load transients, the SOTC method takes over, immediately changing the pulse widths of the gate-driving signals. Using the state-plane analysis, the pulse widths are estimated, letting the state variables track the optimal trajectory locus in the minimum period of time. The proposed solution is implemented in a digital controller, and experimental results show that while the digital logic requirement is very small, the performance improvement is significant."}
{"_id": "261e9b82bf60e2b70965b0c1bfdea793aedc0a2e", "title": "PARS: A Uniform and Open-source Password Analysis and Research System", "text": "In this paper, we introduce an open-source and modular password analysis and research system, PARS, which provides a uniform, comprehensive and scalable research platform for password security. To the best of our knowledge, PARS is the first such system that enables researchers to conduct fair and comparable password security research. PARS contains 12 state-of-the-art cracking algorithms, 15 intra-site and cross-site password strength metrics, 8 academic password meters, and 15 of the 24 commercial password meters from the top-150 websites ranked by Alexa. Also, detailed taxonomies and large-scale evaluations of the PARS modules are presented in the paper."}
{"_id": "ae492dac20f8bd50f42cb6f3eccb9a9be37d8876", "title": "Design and fabrication of a lightweight additive-manufactured Ka-band horn antenna array", "text": "In this paper a 2\u00d72 Ka-band lightweight horn antenna array demonstrator is designed and fabricated by means of an additive manufacturing process optimized for complex RF, microwave and mm-wave antenna components. The proposed waveguide feeding network aims to exploit the flexibility offered by this novel fabrication technique and can be easily translated to antenna arrays with more elements. The electromagnetic performance of the featured structure is validated by means of full-wave numerical simulations. The horn antenna array is fabricated and under experimental RF characterization."}
{"_id": "48c42d87cd32c4ec613d103eb6ed3d6adf9f790a", "title": "Microblog Entity Linking with Social Temporal Context", "text": "Nowadays microblogging sites, such as Twitter and Chinese Sina Weibo, have established themselves as an invaluable information source, which provides a huge collection of manually-generated tweets with broad range of topics from daily life to breaking news. Entity linking is indispensable for understanding and maintaining such information, which in turn facilitates many real-world applications such as tweet clustering and classification, personalized microblog search, and so forth. However, tweets are short, informal and error-prone, rendering traditional approaches for entity linking in documents largely inapplicable. Recent work addresses this problem by utilising information from other tweets and linking entities in a batch manner. Nevertheless, the high computational complexity makes this approach infeasible for real-time applications given the high arrival rate of tweets. In this paper, we propose an efficient solution to link entities in tweets by analyzing their social and temporal context. Our proposed framework takes into consideration three features, namely entity popularity, entity recency, and user interest information embedded in social interactions to assist the entity linking task. Effective indexing structures along with incremental algorithms have also been developed to reduce the computation and maintenance costs of our approach. Experimental results based on real tweet datasets verify the effectiveness and efficiency of our proposals."}
{"_id": "32f140fbb9514fd3ead5177025c467b50896db30", "title": "A Synopsis of Linguistic Theory 1930-1955\" in Studies in Linguistic Analysis", "text": ""}
{"_id": "5b9c2b3f85920bc0e160b484ffa7a5f0a9d8f22a", "title": "Efficient and Expressive Knowledge Base Completion Using Subgraph Feature Extraction", "text": "We explore some of the practicalities of using random walk inference methods, such as the Path Ranking Algorithm (PRA), for the task of knowledge base completion. We show that the random walk probabilities computed (at great expense) by PRA provide no discernible benefit to performance on this task, so they can safely be dropped. This allows us to define a simpler algorithm for generating feature matrices from graphs, which we call subgraph feature extraction (SFE). In addition to being conceptually simpler than PRA, SFE is much more efficient, reducing computation by an order of magnitude, and more expressive, allowing for much richer features than paths between two nodes in a graph. We show experimentally that this technique gives substantially better performance than PRA and its variants, improving mean average precision from .432 to .528 on a knowledge base completion task using the NELL KB."}
{"_id": "5d3158674e1a0fedf69299a905151949fb8b01a5", "title": "The RDF-3X engine for scalable management of RDF data", "text": "RDF is a data model for schema-free structured information that is gaining momentum in the context of Semantic-Web data, life sciences, and also Web 2.0 platforms. The \u201cpay-as-you-go\u201d nature of RDF and the flexible pattern-matching capabilities of its query language SPARQL entail efficiency and scalability challenges for complex queries including long join paths. This paper presents the RDF-3X engine, an implementation of SPARQL that achieves excellent performance by pursuing a RISC-style architecture with streamlined indexing and query processing. The physical design is identical for all RDF-3X databases regardless of their workloads, and completely eliminates the need for index tuning by exhaustive indexes for all permutations of subject-property-object triples and their binary and unary projections. These indexes are highly compressed, and the query processor can aggressively leverage fast merge joins with excellent performance of processor caches. The query optimizer is able to choose optimal join orders even for complex queries, with a cost model that includes statistical synopses for entire join paths. Although RDF-3X is optimized for queries, it also provides good support for efficient online updates by means of a staging architecture: direct updates to the main database indexes are deferred, and instead applied to compact differential indexes which are later merged into the main indexes in a batched manner. Experimental studies with several large-scale datasets with more than 50 million RDF triples and benchmark queries that include pattern matching, manyway star-joins, and long path-joins demonstrate that RDF-3X can outperform the previously best alternatives by one or two orders of magnitude."}
{"_id": "98aa1650071dd259f45742dae4b97ef13a9de08a", "title": "Yago: a core of semantic knowledge", "text": ""}
{"_id": "9ef09e98abb60f45db5f69a65354b34be6437885", "title": "Evaluating Hive and Spark SQL with BigBench", "text": "Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission."}
{"_id": "2ed916ec785f3aeb716cbe757e6daa894cf27bda", "title": "Picosecond imaging circuit analysis", "text": "A newly developed optical method for noninvasively measuring the switching activity of operating CMOS integrated circuit chips is described. The method, denoted as picosecond imaging circuit analysis (PICA) can be used to characterize the gate-level performance of such chips and identify the locations and nature of their operational faults. The principles underlying PICA and examples of its use are discussed."}
{"_id": "4d0d38637c74b56bb57af6d108762642eb53f23c", "title": "Robotic Forceps Manipulator With a Novel Bending Mechanism", "text": "This paper proposes a new bending technique with a screwdrive mechanism that allows for omnidirectional bending motion by rotating two linkages, each consisting of a right-handed screw, a universal joint, and a left-handed screw. The new screwdrive mechanism, termed double-screw-drive (DSD) mechanism, is utilized in a robotic forceps manipulator for laparoscopic surgery. A robotic forceps manipulator incorporating the DSD mechanism (DSD forceps) can bend without using wires. Without wires, it has high rigidity, and can bend at 90\u00b0 in any arbitrary direction. In addition, the gripper of the DSD forceps can perform rotational motion, which is achieved by rotating a third linkage in the DSD mechanism. Opening and closing motions of the gripper are attained by wire actuation. Fundamental experiments examining the bending force and the accuracy of the DSD forceps were conducted, and an analysis of the accuracy was performed. Control of the DSD forceps through a teleoperation system was achieved via a joystick-type manipulator. A servo system was constructed for each linkage and the wire actuation mechanism, and tracking control experiments as well as a suturing experiment were conducted. The results of the experiments showed that the required design specifications were fulfilled. Thus, the validity of the DSD forceps was demonstrated."}
{"_id": "345bbd344177815dfb9214c61403cb7eac6de450", "title": "SEARNN: Training RNNs with Global-Local Losses", "text": "We propose SEARNN, a novel training algorithm for recurrent neural networks (RNNs) inspired by the \u201clearning to search\u201d (L2S) approach to structured prediction. RNNs have been widely successful in structured prediction applications such as machine translation or parsing, and are commonly trained using maximum likelihood estimation (MLE). Unfortunately, this training loss is not always an appropriate surrogate for the test error: by only maximizing the ground truth probability, it fails to exploit the wealth of information offered by structured losses. Further, it introduces discrepancies between training and predicting (such as exposure bias) that may hurt test performance. Instead, SEARNN leverages test-alike search space exploration to introduce global-local losses that are closer to the test error. We demonstrate improved performance over MLE on three different tasks: OCR, spelling correction and text chunking. Finally, we propose a subsampling strategy to enable SEARNN to scale to large vocabulary sizes."}
{"_id": "66fb5bbeb5a8e9cd60b683bc9a0e1d74490562f9", "title": "Ecology-driven stereotypes override race stereotypes.", "text": "Why do race stereotypes take the forms they do? Life history theory posits that features of the ecology shape individuals' behavior. Harsh and unpredictable (\"desperate\") ecologies induce fast strategy behaviors such as impulsivity, whereas resource-sufficient and predictable (\"hopeful\") ecologies induce slow strategy behaviors such as future focus. We suggest that individuals possess a lay understanding of ecology's influence on behavior, resulting in ecology-driven stereotypes. Importantly, because race is confounded with ecology in the United States, we propose that Americans' stereotypes about racial groups actually reflect stereotypes about these groups' presumed home ecologies. Study 1 demonstrates that individuals hold ecology stereotypes, stereotyping people from desperate ecologies as possessing faster life history strategies than people from hopeful ecologies. Studies 2-4 rule out alternative explanations for those findings. Study 5, which independently manipulates race and ecology information, demonstrates that when provided with information about a person's race (but not ecology), individuals' inferences about blacks track stereotypes of people from desperate ecologies, and individuals' inferences about whites track stereotypes of people from hopeful ecologies. However, when provided with information about both the race and ecology of others, individuals' inferences reflect the targets' ecology rather than their race: black and white targets from desperate ecologies are stereotyped as equally fast life history strategists, whereas black and white targets from hopeful ecologies are stereotyped as equally slow life history strategists. These findings suggest that the content of several predominant race stereotypes may not reflect race, per se, but rather inferences about how one's ecology influences behavior."}
{"_id": "37e5503217f38216fae5d408d67551b8cd004831", "title": "A compact magnetic directional proximity sensor for spherical robots", "text": "Spherical robots have recently attracted significant interest due to their ability to offer high speed motion with excellent locomotion efficiency. As a result of the presence of a sealed outer shell, its obstacle avoidance strategy has been simply \u201chit and run\u201d. While this is convenient due to the specific geometry of the spherical robots, it however could pose serious issues when the robots are small and light. For portable spherical robots with on-board cameras, a high speed collision with a hard surface may damage the robot or the camera. This paper proposes a novel and compact proximity sensor that utilizes passive magnetic field to detect the ferromagnetic obstacles through perturbation of the magnetic field. Compared with the existing works that utilize the Earth's weak magnetic field as a means of detection, the approach undertaken here seeks to harness the same principle but uses an intelligently designed magnetic assembly. It efficiently amplifies the perturbation and therefore improves the detection performance. The presented method is able to simultaneously determine both the distance and direction of the nearby ferromagnetic obstacles. Both simulation and experimental results are presented to validate the sensing principle and operational performance."}
{"_id": "153a854241d306d10f81fe11444e2d8643681a0b", "title": "A Provable Time and Space Efficient Implementation of NESL", "text": "In this paper we prove time and space bounds for the implementation of the programming language NESL on various parallel machine models. NESL is a sugared typed λ-calculus with a set of array primitives and an explicit parallel map over arrays. Our results extend previous work on provable implementation bounds for functional languages by considering space and by including arrays. For modeling the cost of NESL we augment a standard call-by-value operational semantics to return two cost measures: a DAG representing the sequential dependence in the computation, and a measure of the space taken by a sequential implementation. We show that a NESL program with w work (nodes in the DAG), d depth (levels in the DAG), and s sequential space can be implemented on a p processor butterfly network, hypercube, or CRCW PRAM using O(w/p + d log p) time and O(s + dp log p) reachable space.1 For programs with sufficient parallelism these bounds are optimal in that they give linear speedup and use space within a constant factor of the sequential space."}
{"_id": "3ca321241e57b2f2ef0ab2872aaae46e5c570b40", "title": "Modeling, parameter identification and model-based control of a lightweight robotic manipulator", "text": "Nowadays many robotic tasks require close and compliant tracking of desired positions, paths, or forces. To achieve these goals, model-based control schemes provide a possible solution, allowing to directly consider the nonlinear dynamics. One of the key challenges, however, is the derivation of suitable models, which allow sufficiently fast evaluation, as well as the parameterization of these models based on measurements. In this work we outline and review a structured approach for model-based controller design for robots. In a first step we derive suitable models for multi-link robots. In a second step we review how such models can be parameterized and how optimal identification experiments can be designed. Based on the model we then derive a simple model based controller to experimentally validate the results considering a lightweight robot. The single steps of the derivation and controller design are supported by a newly developed freely available model toolbox for the considered lightweight robot."}
{"_id": "6b6f187b136616c0514f384ee17dc167498ef619", "title": "Using Part-of-Speech N-grams for Sensitive-Text Classification", "text": "Freedom of Information legislations in many western democracies, including the United Kingdom (UK) and the United States of America (USA), state that citizens have typically the right to access government documents. However, certain sensitive information is exempt from release into the public domain. For example, in the UK, FOIA Exemption 27 (International Relations) excludes the release of Information that might damage the interests of the UK abroad. Therefore, the process of reviewing government documents for sensitivity is essential to determine if a document must be redacted before it is archived, or closed until the information is no longer sensitive. With the increased volume of digital government documents in recent years, there is a need for new tools to assist the digital sensitivity review process. Therefore, in this paper we propose an automatic approach for identifying sensitive text in documents by measuring the amount of sensitivity in sequences of text. Using government documents reviewed by trained sensitivity reviewers, we focus on an aspect of FOIA Exemption 27 which can have a major impact on international relations, namely, information supplied in confidence. We show that our approach leads to markedly increased recall of sensitive text, while achieving a very high level of precision, when compared to a baseline that has been shown to be effective at identifying sensitive text in other domains."}
{"_id": "6b6d357fb4ef19f2330596183ce00d2f3797740d", "title": "Algorithms for the Longest Common Subsequence Problem", "text": "We start by def ining conven t ions and t e rmino logy that will be used th roughou t this paper . String C = clc~ ... cp is a subsequence of string A = aja2 \"'\" am if there is a mapp ing F : {1, 2 . . . . , p} ~ {1, 2, ... , m} such that F(i) = k only if c~ = ak and F is a m o n o t o n e strictly increasing funct ion (i .e. F(i) = u, F(]) = v, and i < j imply that u < v). C can be fo rmed by delet ing m p (not necessari ly ad jacen t ) symbols f rom A . F o r example , \" c o u r s e \" is a subsequence of \" c o m p u t e r sc ience . \" Str ing C is a c o m m o n s ubs equence of strings A and B if C is a s u b s e q u e n c e of A and also a subsequence of B. String C is a longest c o m m o n subsequence (abbrev ia ted LCS) of string A and B if C is a c o m m o n subsequence of A and B of maximal length , i .e. there is no c o m m o n subsequence of A and B that has grea te r length. Th roughou t this paper , we assume that A and B are strings of lengths m and n , m _< n , that have an LCS C of (unknown) length p . We assume that the symbols that may appea r in these strings c o m e f rom some a lphabet of size t . A symbol can be s tored in m e m o r y by using log t bits, which we assume will fit in one word of memory . Symbols can be c o m p a r e d (a -< b?) in one t ime unit . The n u m b e r of di f ferent symbols that actual ly appear in string B is def ined to be s (which must be less than n and t). The longest c o m m o n s u b s e q u e n c e prob lem has been solved by using a recurs ion re la t ionship on the length of the solut ion [7, 12, 16, 21]. These are general ly appl icable a lgor i thms that take O ( m n ) t ime for any input strings o f lengths m and n even though the lower bound on t ime of O ( m n ) need not apply to all inputs [2]. We present a lgor i thms that , depend ing on the na ture of the Input, may not requ i re quadra t ic t ime to r ecove r an LCS. The first a lgor i thm is appl icable in the genera l case and requi res O ( p n + n log n) t ime. T h e second a lgor i thm requi res t ime b o u n d e d by O((m + 1 p )p log n). In the c o m m o n special case where p is close to m , this a lgor i thm takes t ime"}
{"_id": "87c70927282f201908a60673e8169fbef3f2a26b", "title": "Acquiring evolvability through adaptive representations", "text": "Adaptive representations allow evolution to explore the space of phenotypes by choosing the most suitable set of genotypic parameters. Although such an approach is believed to be efficient on complex problems, few empirical studieshave been conducted in such domains. In this paper, three neural network representations, a direct encoding, a complexifying encoding, and an implicit encoding capable of adapting the genotype-phenotype mapping are compared on Nothello, a complex game playing domain from the AAAI General Game Playing Competition. Implicit encoding makes the search more efficient and uses several times fewer parameters. Random mutation leads to highly structured phenotypic variation that is acquired during the course of evolution rather than built into the representation itself. Thus, adaptive representations learn to become evolvable, and furthermore do so in a way that makes search efficient on difficult coevolutionary problems."}
{"_id": "142b87ca518221b009ad8871b8aa652ffd9682aa", "title": "Algorithm for efficient seating plan for centralized exam system", "text": "Exam seat allocation is one of the major concerns in quality education. With the increasing number of students, subjects, departments and rooms, exam seat management becomes complex. Maintaining a decent exam environment with the proper seating arrangement is one of the difficult jobs for authority. This research offers solution for exam seating arrangement problems that can be achieved through the sequential execution of three proposed algorithms. This research offers a solution for preventing some exam hall cheating by arranging seats for large students and it also finds out the best combination of rooms to be assigned for the exam to organize perfect seating based on the room orientation and size, number of students, differentiation of subjects. To do this research, we have collected data and methods from a university those are being used for their exam seating arrangement. By using university exam information we test our algorithms. It provides better seating plan then the manual system used by the university."}
{"_id": "0c4d9cf9398f7a416e438a9734fe0bb73871d98d", "title": "CRMA: collision-resistant multiple access", "text": "Efficiently sharing spectrum among multiple users is critical to wireless network performance. In this paper, we propose a novel spectrum sharing protocol called Collision-Resistant Multiple Access (CRMA) to achieve high efficiency. In CRMA, each transmitter views the OFDM physical layer as multiple orthogonal but sharable channels, and independently selects a few channels for transmission. The transmissions that share the same channel naturally add up in the air. The receiver extracts the received signals from all the channels and efficiently decodes the transmissions by solving a simple linear system. We implement our approach in the Qualnet simulator and show that it yields significant improvement over existing spectrum sharing schemes. We also demonstrate the feasibility of our approach using implementation and experiments on GNU Radios."}
{"_id": "653fbfbad9d565dd5e5e0d48b6bb32dd02e8f157", "title": "SybilInfer: Detecting Sybil Nodes using Social Networks", "text": "SybilInfer is an algorithm for labelling nodes in a social network as honest users or Sybils controlled by an adversary. At the heart of SybilInfer lies a probabilistic model of honest social networks, and an inference engine that returns potential regions of dishonest nodes. The Bayesian inference approach to Sybil detection comes with the advantage label has an assigned probability, indicating its degree of certainty. We prove through analytical results as well as experiments on simulated and real-world network topologies that, given standard constraints on the adversary, SybilInfer is secure, in that it successfully distinguishes between honest and dishonest nodes and is not susceptible to manipulation by the adversary. Furthermore, our results show that SybilInfer outperforms state of the art algorithms, both in being more widely applicable, as well as providing vastly more accurate results."}
{"_id": "644e911d66a6bbc668391fd7a6d5cae5c9161158", "title": "Blind detection for MIMO systems with low-resolution ADCs using supervised learning", "text": "This paper considers a multiple-input-multiple-output (MIMO) system with low-resolution analog-to-digital converters (ADCs). In this system, we propose a novel detection framework that performs data symbol detection without explicitly knowing channel state information at a receiver. The underlying idea of the proposed framework is to exploit supervised learning. Specifically, during channel training, the proposed approach sends a sequence of data symbols as pilots so that the receiver learns a nonlinear function that is determined by both a channel matrix and a quantization function of the ADCs. During data transmission, the receiver uses the learned nonlinear function to detect which data symbols were transmitted. In this context, we propose two blind detection methods to determine the nonlinear function from the training-data set. We also provide an analytical expression for the symbol-vector-error probability of the MIMO systems with one-bit ADCs when employing the proposed framework. Simulations demonstrate the performance improvement of the proposed framework compared to existing detection techniques."}
{"_id": "96a26df6dffedb401b170e9b37723850c5dfa023", "title": "An Ultra-Low Power 1.7-2.7 GHz Fractional-N Sub-Sampling Digital Frequency Synthesizer and Modulator for IoT Applications in 40 nm CMOS", "text": "This paper introduces an ultra-low power 1.7-2.7-GHz fractional-N sub-sampling digital PLL (SS-DPLL) for Internet-of-Things (IoT) applications targeting compliance with Bluetooth Low Energy (BLE) and IEEE802.15.4 standards. A snapshot time-to-digital converter (TDC) acts as a digital sub-sampler featuring an increased out-of-range gain and without any assistance from the traditional counting of DCO edges, thus further reducing power consumption. With a proposed DCO-divider phase rotation in the feedback path, the impact of the digital-to-time converter\u2019s (DTC\u2019s) non-linearity on the PLL is reduced and improves fractional spurs by at least 8 dB across BLE channels. Moreover, a \u201cvariable-preconditioned LMS\u201d calibration algorithm is introduced to dynamically correct the DTC gain error with fractional frequency control word (FCW) down to 1/16384. Fabricated in 40 nm CMOS, the SS-DPLL achieves phase noise performance of \u2212109 dBc/Hz at 1 MHz offset, while consuming a record-low power of 1.19 mW."}
{"_id": "9c337fd9ee994ffc30331a1514ed9330e2648bbe", "title": "Point cloud labeling using 3D Convolutional Neural Network", "text": "In this paper, we tackle the labeling problem for 3D point clouds. We introduce a 3D point cloud labeling scheme based on 3D Convolutional Neural Network. Our approach minimizes the prior knowledge of the labeling problem and does not require a segmentation step or hand-crafted features as most previous approaches did. Particularly, we present solutions for large data handling during the training and testing process. Experiments performed on the urban point cloud dataset containing 7 categories of objects show the robustness of our approach."}
{"_id": "ddeabfbe4c0377fa97db0e7524dffdc1a64e5f53", "title": "Another Stemmer", "text": "In natural language processing, conflation is the process of merging or lumping together nonidentical words which refer to the same principal concept. This can relate both to words which are entirely different in form (e.g., \"group\" and \"collection\"), and to words which share some common root (e.g., \"group\", \"grouping\", \"subgroups\"). In the former case the words can only be mapped by referring to a dictionary or thesaurus, but in the latter case use can be made of the orthographic similarities between the forms. One popular approach is to remove affixes from the input words, thus reducing them to a stem; if this could be done correctly, all the variant forms of a word would be converted to the same standard form. Since the process is aimed at mapping for retrieval purposes, the stem need not be a linguistically correct lemma or root (see also Frakes 1982)."}
{"_id": "2105d6e014290cd0fd093479cc32cece51477a5a", "title": "Global Ridge Orientation Modeling for Partial Fingerprint Identification", "text": "Identifying incomplete or partial fingerprints from a large fingerprint database remains a difficult challenge today. Existing studies on partial fingerprints focus on one-to-one matching using local ridge details. In this paper, we investigate the problem of retrieving candidate lists for matching partial fingerprints by exploiting global topological features. Specifically, we propose an analytical approach for reconstructing the global topology representation from a partial fingerprint. First, we present an inverse orientation model for describing the reconstruction problem. Then, we provide a general expression for all valid solutions to the inverse model. This allows us to preserve data fidelity in the existing segments while exploring missing structures in the unknown parts. We have further developed algorithms for estimating the missing orientation structures based on some a priori knowledge of ridge topology features. Our statistical experiments show that our proposed model-based approach can effectively reduce the number of candidates for pair-wised fingerprint matching, and thus significantly improve the system retrieval performance for partial fingerprint identification."}
{"_id": "8415c8d237398cd39f6ce34df9f7ab6e74bcb813", "title": "Exploring the therapeutic potential of Ayahuasca: acute intake increases mindfulness-related capacities", "text": "Ayahuasca is a psychotropic plant tea used for ritual purposes by the indigenous populations of the Amazon. In the last two decades, its use has expanded worldwide. The tea contains the psychedelic 5-HT2A receptor agonist N,N-dimethyltryptamine (DMT), plus \u03b2-carboline alkaloids with monoamine-oxidase-inhibiting properties. Acute administration induces an introspective dream-like experience characterized by visions and autobiographic and emotional memories. Studies of long-term users have suggested its therapeutic potential, reporting that its use has helped individuals abandon the consumption of addictive drugs. Furthermore, recent open-label studies in patients with treatment-resistant depression found that a single ayahuasca dose induced a rapid antidepressant effect that was maintained weeks after administration. Here, we conducted an exploratory study of the psychological mechanisms that could underlie the beneficial effects of ayahuasca. We assessed a group of 25 individuals before and 24\u00a0h after an ayahuasca session using two instruments designed to measure mindfulness capacities: The Five Facets Mindfulness Questionnaire (FFMQ) and the Experiences Questionnaire (EQ). Ayahuasca intake led to significant increases in two facets of the FFMQ indicating a reduction in judgmental processing of experiences and in inner reactivity. It also led to a significant increase in decentering ability as measured by the EQ. These changes are classic goals of conventional mindfulness training, and the scores obtained are in the range of those observed after extensive mindfulness practice. The present findings support the claim that ayahuasca has therapeutic potential and suggest that this potential is due to an increase in mindfulness capacities."}
{"_id": "6a01ca68b5d5a989004e4690cab36a0c8de0b50a", "title": "Sex differences in the human olfactory system", "text": "The olfactory system (accessory) implicated in reproductive physiology and behavior in mammals is sexually dimorphic. These brain sex differences present two main characteristics: they are seen in neural circuits related to sexual behavior and sexual physiology and they take one of two opposite morphological patterns (male>female or female>male). The present work reports sex differences in the olfactory system in a large homogeneous sample of men (40) and women (51) using of voxel-based morphology. Gray matter concentration showed sexual dimorphism in several olfactory regions. Women have a higher concentration in the orbitofrontal cortex involving Brodmann's areas 10, 11 and 25 and temporomedial cortex (bilateral hippocampus and right amygdala), as well as their left basal insular cortex. In contrast, men show a higher gray matter concentration in the left entorhinal cortex (Brodmann's area 28), right ventral pallidum, dorsal left insular cortex and a region of the orbitofrontal cortex (Brodmann's area 25). This study supports the hypothesis that the mammalian olfactory system is a sexually dimorphic network and provides a theoretical framework for the morphofunctional approach to sex differences in the human brain."}
{"_id": "699d6877808794dc4097d1fd9dca87989f83be07", "title": "Embedding a Mathematical OCR Module into OCRopus", "text": "This paper describes embedding a mathematical formula recognition module into the OCR system OCRopus aiming at developing a OCR system for scientific and technical documents which include mathematical formulas. OCRopus is a open source OCR system emphasizing modularity, easy extensibility, and reuse. This system has several basic components such as preprocessing, layout analysis, and text line recognition, so it is a challenging project to embed the mathematical formula recognition module into the OCRopus system. We have developed the math OCR module, then report how to embed our module into the OCRopus system in order to realize a math OCR which can deal with wide variety of documents including mathematical formulas."}
{"_id": "3b4cad3989b36160a6b0e18363dc2ae4decc2660", "title": "Automatic extraction of printed mathematical formulas using fuzzy logic and propagation of context", "text": "This paper describes a new method to segment printed mathematical documents precisely and extract formulas automatically from their images. Unlike classical methods, it is more directed towards segmentation rather than recognition, isolating mathematical formulas outside and inside text-lines. Our ultimate goal is to delimit parts of text that could disturb OCR applications, not yet trained for formula recognition and restructuring. The method is based on a global and a local segmentation. The global segmentation separates isolated formulas from the text lines using a primary labeling. The local segmentation propagates the context around the mathematical operators met to discard embedded formulas from plain text. The primary labeling identifies some mathematical symbols by models created at a learning step using fuzzy logic. The secondary labeling reinforces the results of the primary labeling and locates the subscripts and the superscripts inside the text. A heuristic has been defined that guides this automatic process. In this paper, the different modules making up the automated segmentation of mathematical document system are presented with examples of results. Experiments carried out on some commonly seen mathematical documents show that our proposed method can achieve quite satisfactory rates, making mathematical formula extraction more feasible for real-world applications. The average rate of primary labeling of mathematical operators is about 95.3% and their secondary labeling can improve the rate by about 4%. The formula extraction rate, evaluated with 300 formulas and 100 mathematical documents having variable complexity, is close to 93%"}
{"_id": "5b83bd7f61c673fecd6d4d9151961ad0432e7721", "title": "Mathematical Formulas Extraction", "text": "As a universal technical language, mathematics has been widely applied in many fields, and it is more accurate than any other languages in describing information. Therefore, numerous mathematical formulas exist in all kinds of documents. There is no doubt that automatic mathematical formulas processing is very important and necessary, of which extract formulas from document images is the first step. In this paper, formulas extraction methods which are not based on recognition results are presented: isolated formulas are extracted based on Parzen window and embedded expressions are extracted based on 2-D structures detection. Experiments show that our methods are very effective in formulas extraction."}
{"_id": "64ee1a00b500cc1924c58d2f3073f7ba653359bd", "title": "INFTY: an integrated OCR system for mathematical documents", "text": "An integrated OCR system for mathematical documents, called INFTY, is presented. INFTY consists of four procedures, i.e., layout analysis, character recognition, structure analysis of mathematical expressions, and manual error correction. In those procedures, several novel techniques are utilized for better recognition performance. Experimental results on about 500 pages of mathematical documents showed high character recognition rates on both mathematical expressions and ordinary texts, and sufficient performance on the structure analysis of the mathematical expressions."}
{"_id": "eceb4785515d875a8e66e935815548c0feca3918", "title": "Automated Segmentation of Math-Zones from Document Images", "text": "With an aim to high-level understanding of the mathematical contents in a document image the requirement of math-zone extraction and recognition technique is obvious. In this paper we present fully auotmatic segmentation of displayed-math zones from the document image, using only the spatial layout information of math-formulas and equations, so as to help commercial OCR systems which cannot discern math-zones and also for the identification and arrangement of math symbols by others."}
{"_id": "8330f94b28d35e67bffa5b7ba2d5c2bac0400f0b", "title": "Exposing digital forgeries in video by detecting double MPEG compression", "text": "With the advent of sophisticated and low-cost video editing software, it is becoming increasingly easier to tamper with digital video. In addition,an ever-growing number of video surveillance cameras is giving rise to an enormous amount of video data. The ability to ensure the integrity and authenticity of this data poses considerable challenges. Here we begin to explore techniques for detecting traces of tampering in digital video. Specifically, we show how a doublycompressed MPEG video sequence introduces specific static and temporal statistical perturbations whose presence can be used as evidence of tampering."}
{"_id": "7b71acff127c9bc736185343221f05aac4768ac0", "title": "Sparse Bayesian Methods for Low-Rank Matrix Estimation", "text": "Recovery of low-rank matrices has recently seen significant activity in many areas of science and engineering, motivated by recent theoretical results for exact reconstruction guarantees and interesting practical applications. In this paper, we present novel recovery algorithms for estimating low-rank matrices in matrix completion and robust principal component analysis based on sparse Bayesian learning (SBL) principles. Starting from a matrix factorization formulation and enforcing the low-rank constraint in the estimates as a sparsity constraint, we develop an approach that is very effective in determining the correct rank while providing high recovery performance. We provide connections with existing methods in other similar problems and empirical results and comparisons with current state-of-the-art methods that illustrate the effectiveness of this approach."}
{"_id": "ed96fbe86a80ab3cf72d1637279f8e2c300bc711", "title": "Early Fault Detection of Machine Tools Based on Deep Learning and Dynamic Identification", "text": "In modern digital manufacturing, nearly 79.6% of the downtime of a machine tool is caused by its mechanical failures. Predictive maintenance (PdM) is a useful way to minimize the machine downtime and the associated costs. One of the challenges with PdM is early fault detection under time-varying operational conditions, which means mining sensitive fault features from condition signals in long-term running. However, fault features are often weakened and disturbed by the time-varying harmonics and noise during a machining process. Existing analysis methods of these complex and diverse data are inefficient and time-consuming. This paper proposes a novel method for early fault detection under time-varying conditions. In this study, a deep learning model is constructed to automatically select the impulse responses from the vibration signals in long-term running of 288 days. Then, dynamic properties are identified from the selected impulse responses to detect the early mechanical fault under time-varying conditions. Compared to traditional methods, the experimental results in this paper have proved that our method was not affected by time-varying conditions and showed considerable potential for early fault detection in manufacturing."}
{"_id": "fd7b38076f69785352d654a0665f165933d049e8", "title": "Exploring New Backbone and Attention Module for Semantic Segmentation in Street Scenes", "text": "Semantic segmentation, as dense pixel-wise classification task, played an important tache in scene understanding. There are two main challenges in many state-of-the-art works: 1) most backbone of segmentation models that often were extracted from pretrained classification models generated poor performance in small categories because they were lacking in spatial information and 2) the gap of combination between high-level and low-level features in segmentation models has led to inaccurate predictions. To handle these challenges, in this paper, we proposed a new tailored backbone and attention select module for segmentation tasks. Specifically, our new backbone was modified from the original Resnet, which can yield better segmentation performance. Attention select module employed spatial and channel self-attention mechanism to reinforce the propagation of contextual features, which can aggregate semantic and spatial information simultaneously. In addition, based on our new backbone and attention select module, we further proposed our segmentation model for street scenes understanding. We conducted a series of ablation studies on two public benchmarks, including Cityscapes and CamVid dataset to demonstrate the effectiveness of our proposals. Our model achieved a mIoU score of 71.5% on the Cityscapes test set with only fine annotation data and 60.1% on the CamVid test set."}
{"_id": "1fc3873f1eaddda17425e7d87b5b9c4597479881", "title": "Tag recommendations in social bookmarking systems", "text": "Collaborative tagging systems allow users to assign keywords\u2014so called \u201ctags\u201d\u2014to resources. Tags are used for navigation, finding resources and serendipitous browsing and thus provide an immediate benefit for users. These systems usually include tag recommendation mechanisms easing the process of finding good tags for a resource, but also consolidating the tag vocabulary across users. In practice, however, only very basic recommendation strategies are applied. In this paper we evaluate and compare several recommendation algorithms on large-scale real life datasets: an adaptation of user-based collaborative filtering, a graph-based recommender built on top of the FolkRank algorithm, and simple methods based on counting tag occurences. We show that both FolkRank and Collaborative Filtering provide better results than non-personalized baseline methods. Moreover, since methods based on counting tag occurrences are computationally cheap, and thus usually preferable for real time scenarios, we discuss simple approaches for improving the performance of such methods. We show, how a simple recommender based on counting tags from users and resources can perform almost as good as the best recommender."}
{"_id": "888e901f49ac0031939e6374c2435eedac5182e1", "title": "Theme Based Clustering of Tweets", "text": "In this paper, we present overview of our approach for clustering tweets. Due to short text of tweets, traditional text clustering mechanisms alone may not produce optimal results. We believe that there is an underlying theme/topic present in majority of tweets which is evident in growing usage of hashtag feature in the Twitter network. Clustering tweets based on these themes seems a more natural way for grouping. We propose to use Wikipedia topic taxonomy to discover the themes from the tweets and use the themes along with traditional word based similarity metric for clustering. We show some of our initial results to demonstrate the effectiveness of our approach."}
{"_id": "0b8051bfcd20e86e8bf454d96f495eec796dd951", "title": "Sexual harassment on the internet", "text": "Sexual harassment offline is a well-known, highly prevalent, extensively investigated, and intensively treated social problem. An accepted model classifies sexual harassment behaviors into the categories of gender harassment, unwanted sexual attention, and sexual coercion. Theory and research show that sexual harassment behavior occurs as a product of person \u00d7 situation characteristics and has substantial personal and organizational costs. This article reviews the available information on sexual harassment in cyberspace, equates this phenomenon with what has been learned about sexual harassment offline, points to specific characteristics of online culture and technology that reinforce the behavior, and proposes ways of promoting prevention."}
{"_id": "569804d3ca477524045fc0c4945f32a5f3156b3d", "title": "Automatic Extraction of Acronym-meaning Pairs from MEDLINE Databases", "text": "Acronyms are widely used in biomedical and other technical texts. Understanding their meaning constitutes an important problem in the automatic extraction and mining of information from text. Here we present a system called ACROMED that is part of a set of Information Extraction tools designed for processing and extracting information from abstracts in the Medline database. In this paper, we present the results of two strategies for finding the long forms for acronyms in biomedical texts. These strategies differ from previous automated acronym extraction methods by being tuned to the complex phrase structures of the biomedical lexicon and by incorporating shallow parsing of the text into the acronym recognition algorithm. The performance of our system was tested with several data sets obtaining a performance of 72 % recall with 97 % precision. These results are found to be better for biomedical texts than the performance of other acronym extraction systems designed for unrestricted text."}
{"_id": "606a20dda7bb43cd3503e2bb37e7c14358c35e6c", "title": "Near-lighting Photometric Stereo for unknown scene distance and medium attenuation", "text": "Photometric Stereo in murky water is subject to light attenuation and near-field illumination, and the resulting image formation model is complex. Apart from the scene normals and albedo, the incident illumination varies per-pixel and it depends on the scene depth and the attenuation coefficient of the medium. When these are unknown, e.g. in a realistic scenario where a robotic platform explores an underwater scene (unknown shape and distance) within the dynamic subsea environment (unknown scattering level), Photometric Stereo becomes ambiguous. Previous approaches have tackled the problem by assuming distant-lighting and resorting to external hardware for estimating the unknown model variables. In our work, we show that the Photometric Stereo problem can be determined as soon as some additional constraints regarding the scene albedo and the presence of pixels with local intensity maxima within the image are incorporated into the optimization framework. Our proposed solution leads to effective Photometric Stereo and yields detailed 3D reconstruction of objects in murky water when the scene distance and the medium attenuation are unknown. We evaluate our work using both numerical simulations and real experiments in the controlled environment of a water tank and real port water using a remotely operated vehicle. \u00a9 2016 Elsevier B.V. All rights reserved."}
{"_id": "2a4b9eeb7db42fee480ed0d75abfdbd3921ec93f", "title": "LSTM-based Encoder-Decoder for Multi-sensor Anomaly Detection", "text": "Mechanical devices such as engines, vehicles, aircrafts, etc., are typically instrumented with numerous sensors to capture the behavior and health of the machine. However, there are often external factors or variables which are not captured by sensors leading to time-series which are inherently unpredictable. For instance, manual controls and/or unmonitored environmental conditions or load may lead to inherently unpredictable time-series. Detecting anomalies in such scenarios becomes challenging using standard approaches based on mathematical models that rely on stationarity, or prediction models that utilize prediction errors to detect anomalies. We propose a Long Short Term Memory Networks based Encoder-Decoder scheme for Anomaly Detection (EncDec-AD) that learns to reconstruct \u2018normal\u2019 time-series behavior, and thereafter uses reconstruction error to detect anomalies. We experiment with three publicly available quasi predictable time-series datasets: power demand, space shuttle, and ECG, and two realworld engine datasets with both predictive and unpredictable behavior. We show that EncDecAD is robust and can detect anomalies from predictable, unpredictable, periodic, aperiodic, and quasi-periodic time-series. Further, we show that EncDec-AD is able to detect anomalies from short time-series (length as small as 30) as well as long time-series (length as large as 500)."}
{"_id": "93ed6511a0ae5b13ccf445081ab829d415ca47df", "title": "Neural Graph Machines: Learning Neural Networks Using Graphs", "text": "Label propagation is a powerful and flexible semi-supervised learning technique on graphs. Neural networks, on the other hand, have proven track records in many supervised learning tasks. In this work, we propose a training framework with a graph-regularised objective, namely Neural Graph Machines, that can combine the power of neural networks and label propagation. This work generalises previous literature on graphaugmented training of neural networks, enabling it to be applied to multiple neural architectures (Feed-forward NNs, CNNs and LSTM RNNs) and a wide range of graphs. The new objective allows the neural networks to harness both labeled and unlabeled data by: (a) allowing the network to train using labeled data as in the supervised setting, (b) biasing the network to learn similar hidden representations for neighboring nodes on a graph, in the same vein as label propagation. Such architectures with the proposed objective can be trained efficiently using stochastic gradient descent and scaled to large graphs, with a runtime that is linear in the number of edges. The proposed joint training approach convincingly outperforms many existing methods on a wide range of tasks (multi-label classification on social graphs, news categorization, document classification and semantic intent classification), with multiple forms of graph inputs (including graphs with and without node-level features) and using different types of neural networks. Work done during an internship at Google."}
{"_id": "a32e74d41a066d3dad15b020cce36cc1e3170e49", "title": "A Variational Principle for Graphical Models", "text": "Graphical models bring together graph theory and probability theory in a powerful formalism for multivariate statistical modeling. In statistical signal processing\u2014 as well as in related fields such as communication theory, control theory and bioinformatics\u2014statistical models have long been formulated in terms of graphs, and algorithms for computing basic statistical quantities such as likelihoods and marginal probabilities have often been expressed in terms of recursions operating on these graphs. Examples include hidden Markov models, Markov random fields, the forward-backward algorithm and Kalman filtering [ Rabiner and Juang (1993); Pearl (1988); Kailath et al. (2000)]. These ideas can be understood, unified and generalized within the formalism of graphical models. Indeed, graphical models provide a natural framework for formulating variations on these classical architectures, and for exploring entirely new families of statistical models. The recursive algorithms cited above are all instances of a general recursive algorithm known as the junction tree algorithm [ Lauritzen and Spiegelhalter, 1988]. The junction tree algorithm takes advantage of factorization properties of the joint probability distribution that are encoded by the pattern of missing edges in a graphical model. For suitably sparse graphs, the junction tree algorithm provides a systematic and practical solution to the general problem of computing likelihoods and other statistical quantities associated with a graphical model. Unfortunately, many graphical models of practical interest are not \" suitably sparse, \" so that the junction tree algorithm no longer provides a viable computational solution to the problem of computing marginal probabilities and other expectations. One popular source of methods for attempting to cope with such cases is the Markov chain Monte Carlo (MCMC) framework, and indeed there is a significant literature on"}
{"_id": "5b8358754a92cf648f3c5ccea7f34df62bba1e74", "title": "A distributed representation of temporal context", "text": "The author acknowledge support from National Institute of Health Research Grants MH55687 and AG15685. Marc Howard is now at the Department of Psychology, Boston University, 64 Cunningham Street, Boston, MA 02215. Address correspondence and reprint requests to Michael J. Kahana, Volen Center for Complex Systems, Brandeis University, MS 013, Waltham, MA 02254. E-mail: kahana@brandeis.edu. Marc W. Howard and Michael J. Kahana"}
{"_id": "ea0881b825b5e1f90b1af66104480516f336e220", "title": "AdaScale: Towards Real-time Video Object Detection Using Adaptive Scaling", "text": "In vision-enabled autonomous systems such as robots and autonomous cars, video object detection plays a crucial role, and both its speed and accuracy are important factors to provide reliable operation. The key insight we show in this paper is that speed and accuracy are not necessarily a trade-off when it comes to image scaling. Our results show that re-scaling the image to a lower resolution will sometimes produce better accuracy. Based on this observation, we propose a novel approach, dubbed AdaScale, which adaptively selects the input image scale that improves both accuracy and speed for video object detection. To this end, our results on ImageNet VID and mini YouTube-BoundingBoxes datasets demonstrate 1.3 points and 2.7 points mAP improvement with 1.6\u00d7 and 1.8\u00d7 speedup, respectively. Additionally, we improve state-of-the-art video acceleration work by an extra 1.25\u00d7 speedup with slightly better mAP on ImageNet VID dataset."}
{"_id": "8a89bdfcc5129736db15848d8c430ac054e6b5ed", "title": "Designing of Robust Image Steganography Technique Based on LSB Insertion and Encryption", "text": "This paper discusses the design of a Robust image steganography technique based on LSB (Least Significant Bit) insertion and RSA encryption technique. Steganography is the term used to describe the hiding of data in images to avoid detection by attackers. Steganalysis is the method used by attackers to determine if images have hidden data and to recover that data. The application discussed in this paper ranks images in a users library based on their suitability as cover objects for some data. By matching data to an image, there is less chance of an attacker being able to use steganalysis to recover the data. Before hiding the data in an image the application first encrypts it. The steganography method proposed in this paper and illustrated by the application is superior to that used by current steganography tools."}
{"_id": "5bbfd8dd4786ae50c3e65f9a04d23f93c755fa72", "title": "Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy", "text": "We discuss unique features of lens-free computational imaging tools and report some of their emerging results for wide-field on-chip microscopy, such as the achievement of a numerical aperture (NA) of \u223c0.8\u20130.9 across a field of view (FOV) of more than 20 mm2 or an NA of \u223c0.1 across a FOV of \u223c18 cm2, which corresponds to an image with more than 1.5 gigapixels. We also discuss the current challenges that these computational on-chip microscopes face, shedding light on their future directions and applications."}
{"_id": "1aebc5e907aa355dd9a19b37c63bd6d2f54cea01", "title": "Task offloading decision in fog computing system", "text": "Fog computing is an emerging paradigm of cloud computing which to meet the growing computation demand of mobile application. It can help mobile devices to overcome resource constraints by offloading the computationally intensive tasks to cloud servers. The challenge of the cloud is to minimize the time of data transfer and task execution to the user, whose location changes owing to mobility, and the energy consumption for the mobile device. To provide satisfactory computation performance is particularly challenging in the fog computing environment. In this paper, we propose a novel fog computing model and offloading policy which can effectively bring the fog computing power closer to the mobile user. The fog computing model consist of remote cloud nodes and local cloud nodes, which is attached to wireless access infrastructure. And we give task offloading policy taking into account executi+on, energy consumption and other expenses. We finally evaluate the performance of our method through experimental simulations. The experimental results show that this method has a significant effect on reducing the execution time of tasks and energy consumption of mobile devices."}
{"_id": "469f6e3063c98969e95a9a70525e5017e177f637", "title": "A distributed attenuator for K-band using standard SMD thin-film chip resistors", "text": "This paper presents a distributed attenuator with standard SMD chip components. For low frequencies simple Pi and T-networks with chip resistors offer a good flexibility to design low cost attenuators in a microwave circuit. Nevertheless, with increasing frequency the parasitics of the chip resistors as well as the short additional line sections in the Pi and T network cause the design to fail. Here, a model of an 0201 chip resistor has been setup to show the limited operating range of the Pi and T-networks. Using a distributed attenuator design, two resistors separated by a quarter wavelength transmission line, can overcome these problems and extent the frequency range to K-band."}
{"_id": "1d16975402e5a35c7e33b9a97fa85c689f840ded", "title": "LSHDB: a parallel and distributed engine for record linkage and similarity search", "text": "In this paper, we present LSHDB, the first parallel and distributed engine for record linkage and similarity search. LSHDB materializes an abstraction layer to hide the mechanics of the Locality-Sensitive Hashing (a popular method for detecting similar items in high dimensions) which is used as the underlying similarity search engine. LSHDB creates the appropriate data structures from the input data and persists these structures on disk using a noSQL engine. It inherently supports the parallel processing of distributed queries, is highly extensible, and is easy to use.We will demonstrate LSHDB both as the underlying system for detecting similar records in the context of Record Linkage (and of Privacy-Preserving Record Linkage) tasks, as well as a search engine for identifying string values that are similar to submitted queries."}
{"_id": "2820fe12f286700ca9e7937e4cf3d082fb6d1a23", "title": "Peer-to-Peer Botnets: Overview and Case Study", "text": "Botnets have recently been identified as one of the most important threats to the security of the Internet. Traditionally, botnets organize themselves in an hierarchical manner with a central command and control location. This location can be statically defined in the bot, or it can be dynamically defined based on a directory server. Presently, the centralized characteristic of botnets is us eful to security professionals because it offers a central point of failure for the botnet. In the near future, we believe attackers will move to more resilient architectures. In particular, one class of botnet structure that has entered initial stages of development is peer-to-peer based architectures. In this paper, we present an overview of peer-to-peer botnets. We also present a case study of a Kademlia-based Trojan.Peacomm bot."}
{"_id": "0f57d32eefe31758dbbfb464451e8568171ec025", "title": "Gatekeeper: Monitoring Auto-Start Extensibility Points (ASEPs) for Spyware Management", "text": "Spyware is a rapidly spreading problem for PC users causing significant impact on system stability and privacy concerns. It attaches to extensibility points in the system to ensure the spyware will be instantiated when the system starts. Users may willingly install free versions of software containing spyware as an alternative to paying for it. Traditional anti-virus techniques are less effective in this scenario because they lack the context to decide if the spyware should be removed. In this paper, we introduce Auto-Start Extensibility Points (ASEPs) as the key concept for modeling the spyware problem. By monitoring and grouping \u2018\u2018hooking\u2019\u2019 operations made to the ASEPs, our Gatekeeper solution complements the traditional signature-based approach and provides a comprehensive framework for spyware management. We present ASEP hooking statistics for 120 real-world spyware programs. We also describe several techniques for discovering new ASEPs to further enhance the effectiveness of our solution."}
{"_id": "2c359b75fdac4b74d68309b78a8599731a029ae9", "title": "Detecting stealth software with Strider GhostBuster", "text": "Stealth malware programs that silently infect enterprise and consumer machines are becoming a major threat to the future of the Internet. Resource hiding is a powerful stealth technique commonly used by malware to evade detection by computer users and anti-malware scanners. In this paper, we focus on a subclass of malware, termed \"ghostware\", which hide files, configuration settings, processes, and loaded modules from the operating system's query and enumeration application programming interfaces (APIs). Instead of targeting individual stealth implementations, we describe a systematic framework for detecting multiple types of hidden resources by leveraging the hiding behavior as a detection mechanism. Specifically, we adopt a cross-view diff-based approach to ghostware detection by comparing a high-level infected scan with a low-level clean scan and alternatively comparing an inside-the-box infected scan with an outside-the-box clean scan. We describe the design and implementation of the Strider GhostBuster tool and demonstrate its efficiency and effectiveness in detecting resources hidden by real-world malware such as rootkits, Trojans, and key-loggers."}
{"_id": "3325cdfb66a03aad5a6d0b19840f6bdb713d0e7a", "title": "A multifaceted approach to understanding the botnet phenomenon", "text": "The academic community has long acknowledged the existence of malicious botnets, however to date, very little is known about the behavior of these distributed computing platforms. To the best of our knowledge, botnet behavior has never been methodically studied, botnet prevalence on the Internet is mostly a mystery, and the botnet life cycle has yet to be modeled. Uncertainty abounds. In this paper, we attempt to clear the fog surrounding botnets by constructing a multifaceted and distributed measurement infrastructure. Throughout a period of more than three months, we used this infrastructure to track 192 unique IRC botnets of size ranging from a few hundred to several thousand infected end-hosts. Our results show that botnets represent a major contributor to unwanted Internet traffic - 27% of all malicious connection attempts observed from our distributed darknet can be directly attributed to botnet-related spreading activity. Furthermore, we discovered evidence of botnet infections in 11% of the 800,000 DNS domains we examined, indicating a high diversity among botnet victims. Taken as a whole, these results not only highlight the prominence of botnets, but also provide deep insights that may facilitate further research to curtail this phenomenon."}
{"_id": "99c511c7b6858aac49c960464c6756ad239587cc", "title": "Modeling Botnet Propagation Using Time Zones", "text": "Time zones play an important and unexplored role in malware epidemics. To understand how time and location affect malware spread dynamics, we studied botnets, or large coordinated collections of victim machines (zombies) controlled by attackers. Over a six month period we observed dozens of botnets representing millions of victims. We noted diurnal properties in botnet activity, which we suspect occurs because victims turn their computers off at night. Through binary analysis, we also confirmed that some botnets demonstrated a bias in infecting regional populations. Clearly, computers that are offline are not infectious, and any regional bias in infections will affect the overall growth of the botnet. We therefore created a diurnal propagation model. The model uses diurnal shaping functions to capture regional variations in online vulnerable populations. The diurnal model also lets one compare propagation rates for different botnets, and prioritize response. Because of variations in release times and diurnal shaping functions particular to an infection, botnets released later in time may actually surpass other botnets that have an advanced start. Since response times for malware outbreaks is now measured in hours, being able to predict short-term propagation dynamics lets us allocate resources more intelligently. We used empirical data from botnets to evaluate the analytical model."}
{"_id": "832c3a11df8e944b639f0b7ea90e48dd01fa4858", "title": "Structural description and classification of signature images", "text": "-This, paper introduces a new method for describing signature images. The description involves features of the signatures and relations among its parts organized in a hierarchical structure. The description is used to classify the signatures and globally analyse their structure. Description Segmentation Feature extraction Classification Signature analysis I. I N T R O D U C T I O N Although a lot of research on signatures has been reported in the past 20 years (t) they are mainly confined to verification in which a set of goal oriented features is used to give the decision about the authenticity of the claimed signature. However, it is interesting to be able to access a signature database using a compact and meaningful description derived from its pictorial content. Constructing such a signature database, requires an automatic description of signature images. Researches on signature description are very limited and appeared implicitly in verification researches. In on-line systems, Lamarche and Plamondon (z) reconsidered the theoretical model of the handwriting process introduced by Verdenberg and Koster <3) to describe the pen tip speed and orientation and used this description for feature extraction. In off-line systems, Nagel and Rosenfeld (\u0300 ~ modified the model introduced by Eden (~) for describing upright handwriting by adding the slant parameter and used the parameters of the modified model as a basis for choosing their features for verification, and Brocklehurst (6) used data reduction techniques to encode the signatures into a list of 7 features (called encoded description) and used them for verification. In all cases, neither a structural description of the signature as a 2-D image nor a conceptualization of the features was proposed. In this paper, we introduce a method for signature description and a reliable approach that can describe digitized signature images by this method. The description uses a hierarchical approach, i.e. the signature is at the top level and each lower level gives more detail about the higher one. Since the range of the natural variation of signature writing will change with time as found by Kapoor <~ and Evett, (s) and the ratios of the writing parameters for different samples of the same individual's writing remain essentially constant over a wide range of the writing surface and space available, ('~ features and relations of o u r description are extracted in a relative way (e.g. the length is measured by its ratio to the width, and so on) to attempt to make it insensitive to the natural variation in the genuine signatures. The signature is described using a global description suitable for purposes like classifying the signature data and studying its nature, and a local description convenient for applications which require more detailed information about the signature like signature analysis or similarity oriented retrieval. The global description is represented by a character string, and the local description is represented by a tree structure. In Section 2, the general approach for describing signature images is introduced, and Section 3 introduces the practical method of the description. Section 4 presents the application of the global description to classifying and analysing the signature data, and finally, Section 5 contains the closing remarks. 2. 
SIGNATURE DESCRIPTION Signatures are a special case of handwriting in which special characters and flourishes are available, and the ratio between the middle zone width and the letter heights is sometimes abnormal. In many cases, the signature is not readable even by a human. Therefore, we deal with the signature as an image that has the principal zones (upper, middle and lower zones) of the handwriting, not as strokes, letters and words. As indicated by Rosenfeld, (9) description of a picture involves properties of the picture or its parts and relationships among them. Our description involves features like slant, and relations among signature properties like the comparison between the widths of the zones. The description is not reversible, that is, the signature cannot be reconstructed from its description. However, the description involves sufficient information to characterize the signature and its parts. Features and relations are extracted quantitatively, and then mapped into a conceptual"}
{"_id": "4b0457533681f9c232ebfcfa77bcb51295a64c91", "title": "Analysis of the wrench-closure workspace of planar parallel cable-driven mechanisms", "text": "The mobile platform of a parallel cable-driven mechanism is connected in parallel to a base by lightweight links, such as cables. Since the cables can only work in tension, the set of poses of the mobile platform for which the cables can balance any external wrench, i.e., for which the platform of the mechanism is fully constrained, is often limited or even nonexistent. Thus, the study and determination of this set of poses, called the wrench-closure workspace (WCW), is an important issue for parallel cable-driven mechanisms. In this paper, the case of planar parallel cable-driven mechanisms is addressed. Theorems that characterize the poses of the WCW are proposed. Then, these theorems are used to disclose the parts of the reachable workspace which belong to the WCW. Finally, an efficient algorithm that determines the constant-orientation cross-sections of these parts is introduced."}
{"_id": "2ee496112cedd44e204ebd1cd0d48ebbb70440d6", "title": "Additive lattice kirigami", "text": "Kirigami uses bending, folding, cutting, and pasting to create complex three-dimensional (3D) structures from a flat sheet. In the case of lattice kirigami, this cutting and rejoining introduces defects into an underlying 2D lattice in the form of points of nonzero Gaussian curvature. A set of simple rules was previously used to generate a wide variety of stepped structures; we now pare back these rules to their minimum. This allows us to describe a set of techniques that unify a wide variety of cut-and-paste actions under the rubric of lattice kirigami, including adding new material and rejoining material across arbitrary cuts in the sheet. We also explore the use of more complex lattices and the different structures that consequently arise. Regardless of the choice of lattice, creating complex structures may require multiple overlapping kirigami cuts, where subsequent cuts are not performed on a locally flat lattice. Our additive kirigami method describes such cuts, providing a simple methodology and a set of techniques to build a huge variety of complex 3D shapes."}
{"_id": "e5857a9a0c09a92624a466a063e03104c1c3a605", "title": "Confidence intervals in within-subject designs: A simpler solution to Loftus and Masson's method", "text": "Within-subject ANOVAs are a powerful tool to analyze data because the variance associated to differences between the participants is removed from the analysis. Hence, small differences, when present for most of the participants, can be significant even when the participants are very different from one another. Yet, graphs showing standard error or confidence interval bars are misleading since these bars include the between-subject variability. Loftus and Masson (1994) noticed this fact and proposed an alternate method to compute the error bars. However, i) their approach requires that the ANOVA be performed first, which is paradoxical since a graph is an aid to decide whether to perform analyses or not; ii) their method provides a single error bar for all the conditions, masking information such as the heterogeneity of variances across conditions; iii) the method proposed is difficult to implement in commonly-used graphing software. Here we propose a simple alternative and sow how it can be implemented in SPSS."}
{"_id": "4f665d1a5e37ac95a5152df46433f99dc3e0a943", "title": "Decision Support Systems (DSS) in Construction Tendering Processes", "text": "The successful execution of a construction project is heavily impacted by making the right decision during tendering processes. Managing tender procedures is very complex and uncertain involving coordination of many tasks and individuals with different priorities and objectives. Bias and inconsistent decision are inevitable if the decision-making process is totally depends on intuition, subjective judgement or emotion. In making transparent decision and healthy competition tendering, there exists a need for flexible guidance tool for decision support. Aim of this paper is to give a review on current practices of Decision Support Systems (DSS) technology in construction tendering processes. Current practices of general tendering processes as applied to the most countries in different regions such as United States, Europe, Middle East and Asia are comprehensively discussed. Applications of Web-based tendering processes is also summarised in terms of its properties. Besides that, a summary of Decision Support System (DSS) components is included in the next section. Furthermore, prior researches on implementation of DSS approaches in tendering processes are discussed in details. Current issues arise from both of paper-based and Web-based tendering processes are outlined. Finally, conclusion is included at the end of this paper."}
{"_id": "43ebca078795b52f5ae6eceb27ed8938a02f58e0", "title": "An empirical resource for discovering cognitive principles of discourse organisation: the ANNODIS corpus", "text": "This paper describes the ANNODIS resource, a discourse-level annotated corpus for French. The corpus combines two perspectives on discourse: a bottom-up approach and a top-down approach. The bottom-up view incrementally builds a structure from elementary discourse units, while the top-down view focuses on the selective annotation of multi-level discourse structures. The corpus is composed of texts that are diversified with respect to genre, length and type of discursive organisation. The methodology followed here involves an iterative design of annotation guidelines in order to reach satisfactory inter-annotator agreement levels. This allows us to raise a few issues relevant for the comparison of such complex objects as discourse structures. The corpus also serves as a source of empirical evidence for discourse theories. We present here two first analyses taking advantage of this new annotated corpus, one that tested hypotheses on constraints governing discourse structure, and another that studied the variations in composition and signalling of multi-level discourse structures."}
{"_id": "31d981dca3c81ef55594fab22b37f419f33bfc08", "title": "The slot-coupled hemispherical dielectric resonator antenna with a parasitic patch: applications to the circularly polarized antenna and wide-band antenna", "text": "The aperture-coupled hemispherical dielectric resonator antenna (DRA) with a parasitic patch is studied rigorously. Using the Green's function approach, integral equations for the unknown patch and slot currents are formulated and solved using the method of moments. The theory is utilized to design a circularly polarized (CP) DRA and a wide-band linearly polarized (LP) DRA. In the former, the CP frequency and axial ratio (AR) can easily be controlled by the patch location and patch size, respectively, with the impedance matched by varying the slot length and microstrip stub length. It is important that the AR will not be affected when the input impedance is tuned, and the CP design is therefore greatly facilitated. For the wide-band LP antenna, a maximum bandwidth of 22% can be obtained, which is much wider than the previous bandwidth of 7.5% with no parasitic patches. Finally, the frequency-tuning characteristics of the proposed antenna are discussed. Since the parasitic patch can be applied to any DRAs, the method will find applications in practical DRA designs."}
{"_id": "119fc2790db038cd1fc456712ac0a98ea7e7a904", "title": "Measuring Urban Social Diversity Using Interconnected Geo-Social Networks", "text": "Large metropolitan cities bring together diverse individuals, creating opportunities for cultural and intellectual exchanges, which can ultimately lead to social and economic enrichment. In this work, we present a novel network perspective on the interconnected nature of people and places, allowing us to capture the social diversity of urban locations through the social network and mobility patterns of their visitors. We use a dataset of approximately 37K users and 42K venues in London to build a network of Foursquare places and the parallel Twitter social network of visitors through check-ins. We define four metrics of the social diversity of places which relate to their social brokerage role, their entropy, the homogeneity of their visitors and the amount of serendipitous encounters they are able to induce. This allows us to distinguish between places that bring together strangers versus those which tend to bring together friends, as well as places that attract diverse individuals as opposed to those which attract regulars. We correlate these properties with wellbeing indicators for London neighbourhoods and discover signals of gentrification in deprived areas with high entropy and brokerage, where an influx of more affluent and diverse visitors points to an overall improvement of their rank according to the UK Index of Multiple Deprivation for the area over the five-year census period. Our analysis sheds light on the relationship between the prosperity of people and places, distinguishing between different categories and urban geographies of consequence to the development of urban policy and the next generation of socially-aware location-based applications."}
{"_id": "860468e9ac4c314bfe30e21a7f35a0aad67d6b49", "title": "SLING: A framework for frame semantic parsing", "text": "We describe SLING, a framework for parsing natural language into semantic frames. SLING supports general transition-based, neural-network parsing with bidirectional LSTM input encoding and a Transition Based Recurrent Unit (TBRU) for output decoding. The parsing model is trained end-to-end using only the text tokens as input. The transition system has been designed to output frame graphs directly without any intervening symbolic representation. The SLING framework includes an efficient and scalable frame store implementation as well as a neural network JIT compiler for fast inference during parsing. SLING is implemented in C++ and it is available for download on GitHub."}
{"_id": "d5c3c8e425920979442048ffbe3a36162405f806", "title": "Context-Sensitive Flow Graph and Projective Single Assignment Form for Resolving Context-Dependency of Binary Code", "text": "Program analysis on binary code is considered as difficult because one has to resolve destinations of indirect jumps. However, there is another difficulty of context-dependency that matters when one processes binary programs that are not compiler generated. In this paper, we propose a novel approach for tackling these difficulties and describe a way to reconstruct a control flow from a binary program with no extra assumptions than the operational meaning of machine instructions."}
{"_id": "72a06022fcd4b5d97c1832808f124b655916a74e", "title": "Human-in-the-Loop Challenges for Entity Matching: A Midterm Report", "text": "Entity matching (EM) has been a long-standing challenge in data management. In the past few years we have started two major projects on EM (Magellan and Corleone/Falcon). These projects have raised many human-in-the-loop (HIL) challenges. In this paper we discuss these challenges. In particular, we show how these challenges forced us to revise our solution architecture, from a typical RDBMS-style architecture to a very human-centric one, in which human users are first-class objects driving the EM process, using tools at pain-point places. We discuss how such solution architectures can be viewed as combining \"tools in the loop\" with \"human in the loop\". Finally, we discuss lessons learned which can potentially be applied to other problem settings. We also hope that more researchers will investigate EM, as it can be a rich \"playground\" for HIL research."}
{"_id": "5f263a5fe7ce37b0f7b370fc1010a59c24de922a", "title": "ReticularSpaces: activity-based computing support for physically distributed and collaborative smart spaces", "text": "Smart spaces research focuses on technology for multiple displays and devices for collocated participants. In most approaches, however, users have to cope with heterogeneous interfaces and information organization, as well as a lack of support for collaboration with mobile and remote users outside the smart space. In this paper, we present ReticularSpaces; a multi-display smart space system built on the principles of activity-based computing. The focus of ReticularSpaces is to support: (i) unified interaction with applications and documents through ReticularUI, a novel distributed user interfaces design; (ii) management of the complexity of tasks between users and displays; (iii) mobile users in a local, remote or 'nomadic' settings; and (iv) collaboration among local and remote users. We describe the motivation, design, and architecture of ReticularSpaces, and report from a preliminary feasibility study. The study shows that participants found ReticularSpaces useful and effective, but at the same time reveals new areas for research on smart environments."}
{"_id": "f2be9b0472c316e6d8249f02fe18ccc3668c83ce", "title": "Advanced control strategies for balancing LED usage of AC LED driver", "text": "This paper presents two enhanced methods to balance LEDs energy usage in the AC LED Driver. While conventional AC LED driver has energy consumption imbalance, the proposed control methods provide equally distributed energy usage. First method is to apply the appropriate firing voltage that is the threshold voltage for equalizing energy consumption of LEDs. This method improves balancing energy consumption within a rectified line cycle. The other method is to change the order of LEDs turning on sequence by every two rectified line cycle. These two approaches are compared with mathematical analysis in terms of the energy consumption of each LED. A prototype driver with three 12 W-LED strings is built. The proposed method 1 and method 2 achieve 0.0030 J and 0.0033 J standard deviation of energy consumption, respectively, compared to the 0.0070 J deviation for the conventional approach."}
{"_id": "88627daf351e12d78513e92d9dcd64c9822bf480", "title": "Deep convolutional neural network based large-scale oil palm tree detection for high-resolution remote sensing images", "text": "This paper proposed a deep convolutional neural network (DCNN) based framework for large-scale oil palm tree detection using high-resolution remote sensing images in Malaysia. Different from the previous palm tree or tree crown detection studies, the palm trees in our study area are very crowded and their crowns often overlap. Moreover, there are various land cover types in our study area, e.g. impervious, bare land, and other vegetation, etc. The main steps of our proposed method include large-scale and multi-class sample collection, AlexNet-based DCNN training and optimization, sliding window-based label prediction, and post-processing. Compared with the manually interpreted ground truth, our proposed method achieves detection accuracies of 92%\u201397% in our study area, which are greatly higher than the accuracies obtained from another two detection methods used in this paper."}
{"_id": "34e09d8b632ca4b95d0c6b3ad4979de9d0cf9625", "title": "Multiple asynchronous stimulus- and task-dependent hierarchies (STDH) within the visual brain's parallel processing systems.", "text": "Results from a variety of sources, some many years old, lead ineluctably to a re-appraisal of the twin strategies of hierarchical and parallel processing used by the brain to construct an image of the visual world. Contrary to common supposition, there are at least three 'feed-forward' anatomical hierarchies that reach the primary visual cortex (V1) and the specialized visual areas outside it, in parallel. These anatomical hierarchies do not conform to the temporal order with which visual signals reach the specialized visual areas through V1. Furthermore, neither the anatomical hierarchies nor the temporal order of activation through V1 predict the perceptual hierarchies. The latter shows that we see (and become aware of) different visual attributes at different times, with colour leading form (orientation) and directional visual motion, even though signals from fast-moving, high-contrast stimuli are among the earliest to reach the visual cortex (of area V5). Parallel processing, on the other hand, is much more ubiquitous than commonly supposed but is subject to a barely noticed but fundamental aspect of brain operations, namely that different parallel systems operate asynchronously with respect to each other and reach perceptual endpoints at different times. This re-assessment leads to the conclusion that the visual brain is constituted of multiple, parallel and asynchronously operating task- and stimulus-dependent hierarchies (STDH); which of these parallel anatomical hierarchies have temporal and perceptual precedence at any given moment is stimulus and task related, and dependent on the visual brain's ability to undertake multiple operations asynchronously."}
{"_id": "5ce1caf9463a7e5c5b5cb98d98a6baa9d12c6732", "title": "From Theory to Experimental Evaluation: Resource Management in Software-Defined Vehicular Networks", "text": "Managing resources in dynamic vehicular environments is a tough task, which is becoming more challenging with the increased number of access technologies today available in connected cars (e.g., IEEE 802.11, LTE), in the variety of applications provided on the road (e.g., safety, traffic efficiency, and infotainment), in the amount of driving awareness/coordination required (e.g., local, context, and cooperative awareness), and in the level of automation toward zero-accident driving (e.g., platooning and autonomous driving). The open programmability and logically centralized control features of the software\u2013defined networking (SDN) paradigm offer an attractive means to manage communication and networking resources in the vehicular environment and promise improved performance. In this paper, we enumerate the potentials of software-defined vehicular networks, analyze the need to rethink the traditional SDN approach from theoretical and practical standpoints when applied in this application context, and present an emulation approach based on the proposed node car architecture in Mininet-WiFi to showcase the applicability and some expected benefits of SDN in a selected use case scenario."}
{"_id": "6323bd74916c4f3b220cb917d377942668813540", "title": "Violent and nonviolent video games differentially affect physical aggression for individuals high vs. low in dispositional anger.", "text": "Although numerous experiments have shown that exposure to violent video games (VVG) causes increases in aggression, relatively few studies have investigated the extent to which this effect differs as a function of theoretically relevant individual difference factors. This study investigated whether video game content differentially influences aggression as a function of individual differences in trait anger. Participants were randomly assigned to play a violent or nonviolent video game before completing a task in which they could behave aggressively. Results showed that participants high in trait anger were the most aggressive, but only if they first played a VVG. This relationship held while statistically controlling for dimensions other than violent content on which game conditions differed (e.g. frustration, arousal). Implications of these findings for models explaining the effects of video games on behavior are discussed."}
{"_id": "e991d4bddfecacb6c4cdf323b99343af1298f3a5", "title": "Avoiding, finding and fixing spreadsheet errors - A survey of automated approaches for spreadsheet QA", "text": "Spreadsheet programs can be found everywhere in organizations and they are used for a variety of purposes, including financial calculations, planning, data aggregation and decision making tasks. A number of research surveys have however shown that such programs are particularly prone to errors. Some reasons for the error-proneness of spreadsheets are that spreadsheets are developed by end users and that standard software quality assurance processes are mostly not applied. Correspondingly, during the last two decades, researchers have proposed a number of techniques and automated tools aimed at supporting the end user in the development of error-free spreadsheets. In this paper, we provide a review of the research literature and develop a classification of automated spreadsheet quality assurance (QA) approaches, which range from spreadsheet visualization, static analysis and quality reports, over testing and support to model-based spreadsheet development. Based on this review, we outline possible opportunities for future work in the area of automated spreadsheet QA."}
{"_id": "c23435782aabd187cabaa9f654461be159807a59", "title": "Applying High-Resolution Visible Imagery to Satellite Melt Pond Fraction Retrieval: A Neural Network Approach", "text": "During summer, melt ponds have a significant influence on Arctic sea-ice albedo. The melt pond fraction (MPF) also has the ability to forecast the Arctic sea-ice in a certain period. It is important to retrieve accurate melt pond fraction (MPF) from satellite data for Arctic research. This paper proposes a satellite MPF retrieval model based on the multi-layer neural network, named MPF-NN. Our model uses multi-spectral satellite data as model input and MPF information from multi-site and multiperiod visible imagery as prior knowledge for modeling. It can effectively model melt ponds evolution of different regions and periods over the Arctic. Evaluation results show that the MPF retrieved from MODIS data using the proposed model has an RMSE of 3.91% and a correlation coefficient of 0.73. The seasonal distribution of MPF is also consistent with previous results."}
{"_id": "dd2ee0bda0a1b1d0d1c6944938dca6d06d0aedaf", "title": "Multimaterial 3D Printing of Graphene-Based Electrodes for Electrochemical Energy Storage Using Thermoresponsive Inks.", "text": "The current lifestyles, increasing population, and limited resources result in energy research being at the forefront of worldwide grand challenges, increasing the demand for sustainable and more efficient energy devices. In this context, additive manufacturing brings the possibility of making electrodes and electrical energy storage devices in any desired three-dimensional (3D) shape and dimensions, while preserving the multifunctional properties of the active materials in terms of surface area and conductivity. This paves the way to optimized and more efficient designs for energy devices. Here, we describe how three-dimensional (3D) printing will allow the fabrication of bespoke devices, with complex geometries, tailored to fit specific requirements and applications, by designing water-based thermoresponsive inks to 3D-print different materials in one step, for example, printing the active material precursor (reduced chemically modified graphene (rCMG)) and the current collector (copper) for supercapacitors or anodes for lithium-ion batteries. The formulation of thermoresponsive inks using Pluronic F127 provides an aqueous-based, robust, flexible, and easily upscalable approach. The devices are designed to provide low resistance interface, enhanced electrical properties, mechanical performance, packing of rCMG, and low active material density while facilitating the postprocessing of the multicomponent 3D-printed structures. The electrode materials are selected to match postprocessing conditions. The reduction of the active material (rCMG) and sintering of the current collector (Cu) take place simultaneously. The electrochemical performance of the rCMG-based self-standing binder-free electrode and the two materials coupled rCMG/Cu printed electrode prove the potential of multimaterial printing in energy applications."}
{"_id": "5bad388456dc5ab43379044207eaae05cf8d852f", "title": "Knowledge management and organizational culture: a theoretical integrative framework", "text": "Purpose \u2013 Organizational culture is a critical factor in building and reinforcing knowledge management in organizations. However, there is no theoretical framework that comprehensively explains the effect of organizational culture on knowledge management in organizations. This paper endeavors to develop a theoretical integrative framework for organizational knowledge management and organizational culture. Design/methodology/approach \u2013 This is a conceptual paper. It modifies the \u2018\u2018competing value framework\u2019\u2019 by adding a new dimension representing ethical and trusting culture, and then integrates it with the SECI model of knowledge creation and conversion by identifying the conceptual parallels between the two frameworks and then analyzing the interaction effects among the dimensions. Findings \u2013 Based on the congruity between the modified competing values framework and the knowledge creation and conversion framework, the paper formulates six propositions about the propensity of organizations of different dominant cultural styles to engage in the four processes of knowledge creation and conversion. Research limitations/implications \u2013 The dynamic nature of the framework presented in the paper points to the importance of longitudinal and comparative research in understanding the effects of organizational culture on organizational knowledge management systems in organizations. Practical implications \u2013 The proposed integrative framework would facilitate organizational learning and lead to the improvement of knowledge management practices in organizations as it helps managers to understand the linkages between culture and knowledge management. Originality/value \u2013 This paper presents a new framework linking organizational culture to knowledge management. It moves away from analyzing culture only in terms of its positive and negative influences on knowledge management. Instead, it suggests a typology of the kind of knowledge management processes that organizations are likely to focus on depending on the culture that prevails in an organization."}
{"_id": "3e9600a8322f7eb5e406f6c25f9a8396017c9470", "title": "Improving lip-reading performance for robust audiovisual speech recognition using DNNs", "text": "This paper presents preliminary experiments using the Kaldi toolkit [1] to investigate audiovisual speech recognition (AVSR) in noisy environments using deep neural networks (DNNs). In particular we use a single-speaker large vocabulary, continuous audiovisual speech corpus to compare the performance of visual-only, audio-only and audiovisual speech recognition. The models trained using the Kaldi toolkit are compared with the performance of models trained using conventional hidden Markov models (HMMs). In addition, we compare the performance of a speech recognizer both with and without visual features over nine different SNR levels of babble noise ranging from 20dB down to -20dB. The results show that the DNN outperforms conventional HMMs in all experimental conditions, especially for the lip-reading only system, which achieves a gain of 37.19% accuracy (84.67% absolute word accuracy). Moreover, the DNN provides an effective improvement of 10 and 12dB SNR respectively for both the single modal and bimodal speech recognition systems. However, integrating the visual features using simple feature fusion is only effective in SNRs at 5dB and above. Below this the degradion in accuracy of an audiovisual system is similar to the audio only recognizer."}
{"_id": "6f9ead16c5464a989dee4cc337d473e353ce54c7", "title": "Real-Time Classification of Dance Gesturesfrom Skeleton Animation", "text": "We present a real-time gesture classification system for skeletal wireframe motion. Its key components include an angular representation of the skeleton designed for recognition robustness under noisy input, a cascaded correlation-based classifier for multivariate time-series data, and a distance metric based on dynamic time-warping to evaluate the difference in motion between an acquired gesture and an oracle for the matching gesture. While the first and last tools are generic in nature and could be applied to any gesture-matching scenario, the classifier is conceived based on the assumption that the input motion adheres to a known, canonical time-base: a musical beat. On a benchmark comprising 28 gesture classes, hundreds of gesture instances recorded using the XBOX Kinect platform and performed by dozens of subjects for each gesture class, our classifier has an average accuracy of 96:9%, for approximately 4-second skeletal motion recordings. This accuracy is remarkable given the input noise from the real-time depth sensor."}
{"_id": "8abbc8a8bdb2c1d7193ecb2a49cb8f9344ce4141", "title": "Passive in-home measurement of stride-to-stride gait variability comparing vision and Kinect sensing", "text": "We present an analysis of measuring stride-to-stride gait variability passively, in a home setting using two vision based monitoring techniques: anonymized video data from a system of two web-cameras, and depth imagery from a single Microsoft Kinect. Millions of older adults fall every year. The ability to assess the fall risk of elderly individuals is essential to allowing them to continue living safely in independent settings as they age. Studies have shown that measures of stride-to-stride gait variability are predictive of falls in older adults. For this analysis, a set of participants were asked to perform a number of short walks while being monitored by the two vision based systems, along with a marker based Vicon motion capture system for ground truth. Measures of stride-to-stride gait variability were computed using each of the systems and compared against those obtained from the Vicon."}
{"_id": "386a8419dfb6a52e522cdab70ee8449422e529ba", "title": "Older adults' attitudes towards and perceptions of \"smart home\" technologies: a pilot study.", "text": "PRIMARY OBJECTIVE\nThe study aim is to explore the perceptions and expectations of seniors in regard to \"smart home\" technology installed and operated in their homes with the purpose of improving their quality of life and/or monitoring their health status.\n\n\nRESEARCH DESIGN AND METHODS\nThree focus group sessions were conducted within this pilot study to assess older adults' perceptions of the technology and ways they believe technology can improve their daily lives. Themes discussed in these groups included participants' perceptions of the usefulness of devices and sensors in health-related issues such as preventing or detecting falls, assisting with visual or hearing impairments, improving mobility, reducing isolation, managing medications, and monitoring of physiological parameters. The audiotapes were transcribed and a content analysis was performed.\n\n\nRESULTS\nA total of 15 older adults participated in three focus group sessions. Areas where advanced technologies would benefit older adult residents included emergency help, prevention and detection of falls, monitoring of physiological parameters, etc. Concerns were expressed about the user-friendliness of the devices, lack of human response and the need for training tailored to older learners.\n\n\nCONCLUSIONS\nAll participants had an overall positive attitude towards devices and sensors that can be installed in their homes in order to enhance their lives."}
{"_id": "3b8a4cc6bb32b50b29943ceb7248f318e589cd79", "title": "Fast Subsequence Matching in Time-Series Databases", "text": "We present an efficient indexing method to locate 1-dimensional subsequences within a collection of sequences, such that the subsequences match a given (query) pattern within a specified tolerance. The idea is to map each data sequences into a small set of multidimensional rectangles in feature space. Then, these rectangles can be readily indexed using traditional spatial access methods, like the R*-tree [9]. In more detail, we use a sliding window over the data sequence and extract its features; the result is a trail in feature space. We propose an efficient and effective algorithm to divide such trails into sub-trails, which are subsequently represented by their Minimum Bounding Rectangles (MBRs). We also examine queries of varying lengths, and we show how to handle each case efficiently. We implemented our method and carried out experiments on synthetic and real data (stock price movements). We compared the method to sequential scanning, which is the only obvious competitor. The results were excellent: our method accelerated the search time from 3 times up to 100 times."}
{"_id": "886431a362bfdbcc6dd518f844eb374950b9de86", "title": "The Recognition of Human Movement Using Temporal Templates", "text": "\u00d0A new view-based approach to the representation and recognition of human movement is presented. The basis of the representation is a temporal template\u00d0a static vector-image where the vector value at each point is a function of the motion properties at the corresponding spatial location in an image sequence. Using aerobics exercises as a test domain, we explore the representational power of a simple, two component version of the templates: The first value is a binary value indicating the presence of motion and the second value is a function of the recency of motion in a sequence. We then develop a recognition method matching temporal templates against stored instances of views of known actions. The method automatically performs temporal segmentation, is invariant to linear changes in speed, and runs in real-time on standard platforms. Index Terms\u00d0Motion recognition, computer vision."}
{"_id": "dbf15b7c5f766ef9f84ba83127c626d79b2087b2", "title": "Churn Prediction", "text": "The rapid growth of the market in every sector is leading to a bigger subscriber base for service providers. More competitors, new and innovative business models and better services are increasing the cost of customer acquisition. In this environment service providers have realized the importance of the retention of existing customers. Therefore, providers are forced to put more efforts for prediction and prevention of churn. This paper aims to present commonly used data mining techniques for the identification of churn. Based on historical data these methods try to find patterns which can point out possible churners. Well-known techniques used for this are Regression analysis, Decision Trees, Neural Networks and Rule based learning. In section 1 we give a short introduction describing the current state of the market, then in section 2 a definition of customer churn, its\u2019 types and the imporance of identification of churners is being discussed. Section 3 reviews different techniques used, pointing out advantages and disadvantages. Finally, current state of research and new emerging algorithms are being presented."}
{"_id": "9f8b0a325584272878d3189f9c7cb418b5f6ec3a", "title": "High speed operation of electrical machines, a review on technology, benefits and challenges", "text": "The High Speed (HS) technology, in electrical machine based drive systems, is reviewed in this paper by investigating around 200 different machine's data that are reported in the literature or open sources. This study includes information regarding clarifying the maximum power vs. speed capability, potential applications, different machine types, design limitation issues consisting of thermal, electro-magnetic, mechanical, control, inverter and power electronics. The limitations of electrical machines in HS technology are classified by introducing HS-index, which is the product of the machine nominal power and nominal speed (= n \u00d7 Pout). It is shown that HS-index behavior changes by speed, because limitation factors for HS power capability take different physics/nature, as speed increases. It is found that in relatively low speeds, n <; 20 krpm, maximum mechanical stress in the rotor, and in high speeds, n > 20 krpm, additional losses in the machine dictates the maximum power capability. The survey shows that Induction Machine (IM) and Permanent Magnet (PM) machine have relatively superior performance. The maximum rotor surface speed, vmax, is limited to 250-300 m/s (can reach 550 m/s) and is independent of n and output power. The HS-index is a function of vmax2 and almost independent of n for n <; 10-20 krpm. However, for n > 20 krpm is proportional to vmax3/n2 due to thermal limitations. The maximum power is almost cubically (Pout \u221d d2.6) proportional to the rotor diameter."}
{"_id": "70da97f4fb509689837818956db40dcf63cf9564", "title": "Limitations of the approximation capabilities of neural networks with one hidden layer", "text": "Let s/> 1 be an integer and W be the class of all functions having integrable partial derivatives on [0, 1] s. We are interested in the minimum number of neurons in a neural network with a single hidden layer required in order to provide a mean approximation order of a preassigned e > 0 to each function in W. We prove that this number cannot be O(e -s log(I/e)) if a spline-like localization is required. This cannot be improved even if one allows different neurons to evaluate different activation functions, even depending upon the target function. Nevertheless, for any 6 > 0, a network with O(c -s-6) neurons can be constructed to provide this order of approximation, with localization. Analogous results are also valid for other L p n o r m s ."}
{"_id": "c9e9eb090facc4e70c96cdd95f38329a7969a4dc", "title": "A 2.5-Gb/s 15-mW clock recovery circuit", "text": "This paper describes the design of a 2.5-Gb/s 15mW clock recovery circuit based on the quadricorrelator architecture. Employing both phase and frequency detection, the circuit combines high-speed operations such as differentiation, full-wave rectification, and mixing in one stage to lower the power dissipation. In addition, a two-stage voltage-controlled oscillator is utilized that incorporates both phase shift elements to provide a wide tuning range and isolation techniques to suppress the feedthrough due to input data transitions. Fabricated in a 20GHz 1m BiCMOS technology, the circuit exhibits an rms jitter of 9.5 ps and a capture range of 300 MHz."}
{"_id": "846a1a0136e69554923301ea445372a57c6afd9d", "title": "A Survey of Learning in Multiagent Environments: Dealing with Non-Stationarity", "text": "The key challenge in multiagent learning is learning a best response to the behaviour of other agents, which may be non-stationary: if the other agents adapt their strategy as well, the learning target moves. Disparate streams of research have approached nonstationarity from several angles, which make a variety of implicit assumptions that make it hard to keep an overview of the state of the art and to validate the innovation and significance of new works. This survey presents a coherent overview of work that addresses opponent-induced non-stationarity with tools from game theory, reinforcement learning and multi-armed bandits. Further, we reflect on the principle approaches how algorithms model and cope with this non-stationarity, arriving at a new framework and five categories (in increasing order of sophistication): ignore, forget, respond to target models, learn models, and theory of mind. A wide range of state-of-the-art algorithms is classified into a taxonomy, using these categories and key characteristics of the environment (e.g., observability) and adaptation behaviour of the opponents (e.g., smooth, abrupt). To clarify even further we present illustrative variations of one domain, contrasting the strengths and limitations of each category. Finally, we discuss in which environments the different approaches yield most merit, and point to promising avenues of future research."}
{"_id": "a3ee76b20df36976aceb16e9df93817255e26fd4", "title": "Neural Machine Translation for Query Construction and Composition", "text": "Research on question answering with knowledge base has recently seen an increasing use of deep architectures. In this extended abstract, we study the application of the neural machine translation paradigm for question parsing. We employ a sequence-to-sequence model to learn graph patterns in the SPARQL graph query language and their compositions. Instead of inducing the programs through question-answer pairs, we expect a semi-supervised approach, where alignments between questions and queries are built through templates. We argue that the coverage of language utterances can be expanded using late notable works in natural language generation."}
{"_id": "4928aee4b9a558d8faaa6126201a45b7aaea7bb6", "title": "E-government implementation: A bird's eye view of issues relating to costs, opportunities, benefits and risks", "text": "After more than a decade of comprehensive research work in the area of electronic government (e-government), no attempt has yet been made to undertake a systematic literature review on the costs, opportunities, benefits and risks that influence the implementation of e-government. This is particularly significant given the various related challenges that governments have faced over the years when implementing e-government initiatives. Hence, the aim of this paper is to undertake a comprehensive analysis of relevant literature addressing these issues using a systematic review of 132 studies identified from the Scopus online database and Google Scholar together with a manual review of relevant papers from journals dedicated to electronic government research such as Electronic Government, an International Journal (EGIJ), International Journal of Electronic Government Research (IJEGR) and Transforming Government: People, Process, and Policy (TGPPP). The overall review indicated that although a large number of papers discuss costs, opportunities, benefits and risks, treatment of these issues have tended to be superficial.Moreover, there is a lack of empirical studies which can statistically evaluate the performance of these constructs in relation to the various egovernment systems. Therefore, this research would help governments to better analyse the impact of costs, opportunities, benefits and risks on the success of e-government systems and its pre-adoption from an implementation perspective."}
{"_id": "e001b51acb757d6c23a31f75d71ac125c01a7a0b", "title": "VOLAP: A Scalable Distributed Real-Time OLAP System for High-Velocity Data", "text": "This paper presents VelocityOLAP (VOLAP), a distributed real-time OLAP system for high-velocity data. VOLAP makes use of dimension hierarchies, is highly scalable, exploits both multi-core and multi-processor parallelism, and can guarantee serializable execution of insert and query operations. In contrast to other high performance OLAP systems such as SAP HANA or IBM Netezza that rely on vertical scaling or special purpose hardware, VOLAP supports cost-efficient horizontal scaling on commodity hardware or modest cloud instances. Experiments on 20 Amazon EC2 nodes with TPC-DS data show that VOLAP is capable of bulk ingesting data at over 600\u00a0thousand items per second, and processing streams of interspersed insertions and aggregate queries at a rate of approximately 50\u00a0thousand insertions and 20\u00a0thousand aggregate queries per second with a database of 1\u00a0billion items. VOLAP is designed to support applications that perform large aggregate queries, and provides similar high performance for aggregations ranging from a few items to nearly the entire database."}
{"_id": "c82e4f9cb2aab5091c780c01328d04f501bf6940", "title": "Dynamics co-simulation of a type of spot welding robot by RecurDyn and Simulink", "text": "A co-simulation on virtual prototype of a type of spot welding robot has been accomplished by combining MATLAB/Simulink with RecurDyn. The control algorithm is executed under MATLAB to generate command forces, which are then exported to RecurDyn environment and applied to actuators of the virtual prototype at each cycle of time. Dynamic control on joint angles was accomplished through the co-simulation. Torques of each joint and the velocity of the clamp end were also acquired by co-simulation, which can give useful information for optimizing robot design."}
{"_id": "d990e96bff845b3c4005e20629c613bf6e2c5c40", "title": "A MATLAB-based Convolutional Neural Network Approach for Face Recognition System", "text": ""}
{"_id": "4a242497a1dc78c6ef5746bbff631704c3da9b38", "title": "DeepGaze II: Reading fixations from deep features trained on object recognition", "text": "Here we present DeepGaze II, a model that predicts where people look in images. The model uses the features from the VGG-19 deep neural network trained to identify objects in images. Contrary to other saliency models that use deep features, here we use the VGG features for saliency prediction with no additional fine-tuning (rather, a few readout layers are trained on top of the VGG features to predict saliency). The model is therefore a strong test of transfer learning. After conservative crossvalidation, DeepGaze II explains about 87% of the explainable information gain in the patterns of fixations and achieves top performance in area under the curve metrics on the MIT300 hold-out benchmark. These results corroborate the finding from DeepGaze I (which explained 56% of the explainable information gain), that deep features trained on object recognition provide a versatile feature space for performing related visual tasks. We explore the factors that contribute to this success and present several informative image examples. A web service is available to compute model predictions at http://deepgaze.bethgelab.org."}
{"_id": "2169acce9014fd4ce462da494b71a3d2ef1c8191", "title": "Identifying Generalization Properties in Neural Networks", "text": "While it has not yet been proven, empirical evidence suggests that model generalization is related to local properties of the optima which can be described via the Hessian. We connect model generalization with the local property of a solution under the PAC-Bayes paradigm. In particular, we prove that model generalization ability is related to the Hessian, the higher-order \u201csmoothness\u201d terms characterized by the Lipschitz constant of the Hessian, and the scales of the parameters. Guided by the proof, we propose a metric to score the generalization capability of the model, as well as an algorithm that optimizes the perturbed model accordingly."}
{"_id": "931760c25b22578bb0071117e231aaaf1d94d414", "title": "Evolution of Memory Architecture", "text": "Computer memories continue to serve the role that they first served in the electronic discrete variable automatic computer (EDVAC) machine documented by John von Neumann, namely that of supplying instructions and operands for calculations in a timely manner. As technology has made possible significantly larger and faster machines with multiple processors, the relative distance in processor cycles of this memory has increased considerably. Microarchitectural techniques have evolved to share this memory across ever-larger systems of processors with deep cache hierarchies and have managed to hide this latency for many applications, but are proving to be expensive and energy-inefficient for newer types of problems working on massive amounts of data. New paradigms include scale-out systems distributed across hundreds and even thousands of nodes, in-memory databases that keep data in memory much longer than the duration of a single task, and near-data computation, where some of the computation is off-loaded to the location of the data to avoid wasting energy in the movement of data. This paper provides a historical perspective on the evolution of memory architecture, and suggests that the requirements of new problems and new applications are likely to fundamentally change processor and system architecture away from the currently established von Neumann model."}
{"_id": "6f5d69a37262d95e4c5bb6d1d66b25a3440411f5", "title": "Data Mining for Cyber Security", "text": "This chapter provides an overview of the Minnesota Intrusion Detection System (MINDS), which uses a suite of data mining based algorithms to address different aspects of cyber security. The various components of MINDS such as the scan detector, anomaly detector and the profiling module detect different types of attacks and intrusions on a computer network. The scan detector aims at detecting scans which are the percusors to any network attack. The anomaly detection algorithm is very effective in detecting behavioral anomalies in the network traffic which typically translate to malicious activities such as denial-of-service (DoS) traffic, worms, policy violations and inside abuse. The profiling module helps a network analyst to understand the characteristics of the network traffic and detect any deviations from the normal profile. Our analysis shows that the intrusions detected by MINDS are complementary to those of traditional signature based systems, such as SNORT, which implies that they both can be combined to increase overall attack coverage. MINDS has shown great operational success in detecting network intrusions in two live deployments at the University of Minnesota and as a part of the Interrogator architecture at the US Army Research Labs Center for Intrusion Monitoring and Protection (ARL-CIMP)."}
{"_id": "5a36b944b339ce19874882b20859121158f90b27", "title": "Bachelor Degree Project Hierarchical Temporal Memory Software Agent", "text": "Artificial general intelligence is not well defined, but attempts such as the recent list of \u201cIngredients for building machines that think and learn like humans\u201d are a starting point for building a system considered as such [1]. Numenta is attempting to lead the new era of machine intelligence with their research to re-engineer principles of the neocortex. It is to be explored how the ingredients are in line with the design principles of their algorithms. Inspired by Deep Minds commentary about an autonomyingredient, this project created a combination of Numentas Hierarchical Temporal Memory theory and Temporal Difference learning to solve simple tasks defined in a browser environment. An open source software, based on Numentas intelligent computing platform NUPIC and Open AIs framework Universe, was developed to allow further research of HTM based agents on customized browser tasks. The analysis and evaluation of the results show that the agent is capable of learning simple tasks and there is potential for generalization inherent to sparse representations. However, they also reveal the infancy of the algorithms, not capable of learning dynamic complex problems, and that much future research is needed to explore if they can create scalable solutions towards a more general intelligent system."}
{"_id": "520388eb86852a260f38467bbe7ef6e6cb1cda9a", "title": "Hashcash - Amortizable Publicly Auditable Cost-Functions", "text": "We present a distributed efficiently amortizable CPU cost-function with no trap\u2013door. The absence of a trap\u2013door allows us to avoid needing to trust any party. Applications for such cost-functions are in distributed document popularity estimation, and metering of web advertising. None of the servers involved have any advantage over users in computing the cost function. The amortized token has a small fixed sized representation independent of the amortized value. The valuation function is efficient. Limits can be placed on the resources which can be expended in computing the cost-function to prevent the user inflating the value of his contribution by using more CPU time than the expected token value. The amortized part of the cost result can be blinded so that clients can not obtain service but avoid contributing to the amortization by expending more resources than the expected token value. Interactive and non-interactive variants of the cost-function can be constructed which can be used in situations where the server can issue a challenge (connection oriented interactive protocol), and where it can't (where the communication is store\u2013and\u2013forward, or packet oriented) respectively."}
{"_id": "cf02ad2ffca4ec10b2db1c3e91f4015226745c34", "title": "Linear permanent magnet motor for reciprocating compressor applications", "text": "This paper investigates the performance of a three designs of tubular linear permanent magnet motors (TLPMMs) with quasi-Halbach magnetized moving-magnet armature and slotted stator with a single-slot. The motors were developed for reciprocating compressor applications such as household refrigerators. The three designs have different magnet shape structures, and the stator core of all designs have employed a soft magnetic composite (SMC) material, Somaloy 700. Based on finite element analysis (FEA) an extensively analysis on the opencircuit magnetic field distribution, back-EMF, thrust force, fluxlinkage, winding inductance and cogging force in the three designs have been established. The simulation results indicate the effectiveness of the designs to drive a linear reciprocating compressor in household refrigerators, and also the simulation results have shown the superiority of the two designs with T-shape and trapezoidal magnet configurations, over the TLPMM with conventional rectangular magnet. Moreover, the permanent magnets (PMs) array arrangement has significant effects on the motor performance."}
{"_id": "c6a9ca56c93323c0199dd22631d1cf731bdd7ec1", "title": "Automatic Detection of Fake News", "text": "The proliferation of misleading information in everyday access media outlets such as social media feeds, news blogs, and online newspapers have made it challenging to identify trustworthy news sources, thus increasing the need for computational tools able to provide insights into the reliability of online content. In this paper, we focus on the automatic identification of fake content in online news. Our contribution is twofold. First, we introduce two novel datasets for the task of fake news detection, covering seven different news domains. We describe the collection, annotation, and validation process in detail and present several exploratory analyses on the identification of linguistic differences in fake and legitimate news content. Second, we conduct a set of learning experiments to build accurate fake news detectors, and show that we can achieve accuracies of up to 76%. In addition, we provide comparative analyses of the automatic and manual identification of fake news."}
{"_id": "85b97b5dd6c7f86843cc638bde178ecdde70a4d0", "title": "Recovery of Interdependent Networks.", "text": "Recent network research has focused on the cascading failures in a system of interdependent networks and the necessary preconditions for system collapse. An important question that has not been addressed is how to repair a failing system before it suffers total breakdown. Here we introduce a recovery strategy for nodes and develop an analytic and numerical framework for studying the concurrent failure and recovery of a system of interdependent networks based on an efficient and practically reasonable strategy. Our strategy consists of repairing a fraction of failed nodes, with probability of recovery \u03b3, that are neighbors of the largest connected component of each constituent network. We find that, for a given initial failure of a fraction 1\u2009-\u2009p of nodes, there is a critical probability of recovery above which the cascade is halted and the system fully restores to its initial state and below which the system abruptly collapses. As a consequence we find in the plane \u03b3\u2009-\u2009p of the phase diagram three distinct phases. A phase in which the system never collapses without being restored, another phase in which the recovery strategy avoids the breakdown, and a phase in which even the repairing process cannot prevent system collapse."}
{"_id": "39ed372026adaf052d9c40613386da296ee552dc", "title": "Locality Sensitive Hashing based deepmatching for optical flow estimation", "text": "DeepMatching (DM) is one of the state-of-art matching algorithms to compute quasi-dense correspondences between images. Recent optical flow methods use DeepMatching to find initial image correspondences and achieves outstanding performance. However, the key building block of DeepMatching, the correlation map computation, is time-consuming. In this paper, we propose a new algorithm, LSHDM, which addresses the problem by employing Locality Sensitive Hashing (LSH) to DeepMatching. The computational complexity is greatly reduced for the correlation map computation step. Experiments show that image matching can be accelerated by our approach in ten times or more compared to DeepMatching, while retaining comparable accuracy for optical flow estimation."}
{"_id": "8a0c0213447e82a456e98fc63a0f126f4566767a", "title": "How Small Labels Create Big Improvements", "text": "It is widely believed that identifying communities in an ad hoc mobile communications system, such as a pocket switched network, can reduce the amount of traffic created when forwarding messages, but there has not been any empirical evidence available to support this assumption to date. In this paper, we show through use of real experimental human mobility data, how using a small label, identifying users according to their affiliation, can bring a large improvement in forwarding performance, in term of both delivery ratio and cost."}
{"_id": "972a2201d2b33fb586bacf33f7d036784c48913b", "title": "Reinforcement learning of clothing assistance with a dual-arm robot", "text": "This study aims at robotic clothing assistance as it is yet an open field for robotics despite it is one of the basic and important assistance activities in daily life of elderly as well as disabled people. The clothing assistance is a challenging problem since robots must interact with non-rigid clothes generally represented in a high-dimensional space, and with the assisted person whose posture can vary during the assistance. Thus, the robot is required to manage two difficulties to perform the task of the clothing assistance: 1) handling of non-rigid materials and 2) adaptation of the assisting movements to the assisted person's posture. To overcome these difficulties, we propose to use reinforcement learning with the cloth's state which is low-dimensionally represented in topology coordinates, and with the reward defined in the low-dimensional coordinates. With our developed experimental system, for T-shirt clothing assistance, including an anthropomorphic dual-arm robot and a soft mannequin, we demonstrate the robot quickly learns a suitable arm motion for putting the mannequin's head into a T-shirt."}
{"_id": "2fb8c7faf5e42dba3993a2b7cd8c6fd1b90d29ef", "title": "Virtual peers as partners in storytelling and literacy learning", "text": "Literacy learning learning how to read and write, begins long before children enter school. One of the key skills to reading and writing is the ability to represent thoughts symbolically and share them in language with an audience who may not necessarily share the same temporal and spatial context for the story. Children learn and practice these important language skills everyday, telling stories with the peers and adults around them. In particular, storytelling in the context of peer collaboration provides a key environment for children to learn language skills important for literacy. In light of this, we designed Sam, an embodied conversational agent who tells stories collaboratively with children. Sam was designed to look like a peer for preschool children, but to tell stories in a developmentally advanced way: modeling narrative skills important for literacy. Results demonstrated that children who played with the virtual peer told stories that more closely resembled the virtual peer\u2019s linguistically advanced stories: using more quoted speech and temporal and spatial expressions. In addition, children listened to Sam's stories carefully, assisting her and suggesting improvements. The potential benefits of having technology play a social role in young children\u2019s literacy learning is discussed."}
{"_id": "10205ce087b9190ac18ade8be02a660d92a6ea52", "title": "Diversifying search results", "text": "We study the problem of answering ambiguous web queries in a setting where there exists a taxonomy of information, and that both queries and documents may belong to more than one category according to this taxonomy. We present a systematic approach to diversifying results that aims to minimize the risk of dissatisfaction of the average user. We propose an algorithm that well approximates this objective in general, and is provably optimal for a natural special case. Furthermore, we generalize several classical IR metrics, including NDCG, MRR, and MAP, to explicitly account for the value of diversification. We demonstrate empirically that our algorithm scores higher in these generalized metrics compared to results produced by commercial search engines."}
{"_id": "2b85fb16bc42709dbd9defffee20935282f04d65", "title": "Automatic Extraction of Useful Facet Hierarchies from Text Databases", "text": "Databases of text and text-annotated data constitute a significant fraction of the information available in electronic form. Searching and browsing are the typical ways that users locate items of interest in such databases. Faceted interfaces represent a new powerful paradigm that proved to be a successful complement to keyword searching. Thus far, the identification of the facets was either a manual procedure, or relied on apriori knowledge of the facets that can potentially appear in the underlying collection. In this paper, we present an unsupervised technique for automatic extraction of facets useful for browsing text databases. In particular, we observe, through a pilot study, that facet terms rarely appear in text documents, showing that we need external resources to identify useful facet terms. For this, we first identify important phrases in each document. Then, we expand each phrase with \";context\"; phrases using external resources, such as WordNet and Wikipedia, causing facet terms to appear in the expanded database. Finally, we compare the term distributions in the original database and the expanded database to identify the terms that can be used to construct browsing facets. Our extensive user studies, using the Amazon Mechanical Turk service, show that our techniques produce facets with high precision and recall that are superior to existing approaches and help users locate interesting items faster."}
{"_id": "ffc17dba701eeb4ce5cdbb286403d8f1d5b33e12", "title": "SWAM: Stuxnet Worm Analysis in Metasploit", "text": "Nowadays cyber security is becoming a great challenge. Attacker's community is progressing towards making smart and intelligent malwares (viruses, worms and Root kits). They stealth their existence and also use administrator rights without knowing legal user. Stuxnet worm is an example of a recent malware first detected in July 2010. Its variants were also detected earlier. It is the first type of worm that affects the normal functionality of industrial control systems (ICS) having programmable logic controllers (PLC) through PLC Root kit. Its main goal is to modify ICS behavior by changing the code of PLC and make it to behave in a way that attacker wants. It is a complex piece of malware having different operations and functionalities which are achieved by exploiting zero day vulnerabilities. Stuxnet exploits various vulnerable services in Microsoft Windows. In this paper we will show real time simulation of first three vulnerabilities of these through Metasploit Framework 3.2 and analyze results. A real time scenario is established based on some assumptions. We assumed Proteus design (pressure sensor) as PLC and showed after exploitation that the pressure value drops to an unacceptable level by changing Keil code of this design."}
{"_id": "7ced0987c410c60c030acf02f62e544fb8155800", "title": "The Quest Data Mining System", "text": "The goal of the Quest project at the IBM Almaden Research center is to develop technology to enable a new breed of da&intensive decision-support applications. This paper is a capsule summary of the current functionality and architecture of the Quest data mining System. Our overall approach has been to identify basic data mining operations that cut across applications and develop fast, scalable algorithms for their execution (Agrawal, Imielinski, & Swami 1993a). We wanted our algorithms to:"}
{"_id": "41862f33a6960dc36e495c98a127b26830bad1ed", "title": "Hierarchical Active Transfer Learning", "text": "We describe a unified active transfer learning framework called Hierarchical Active Transfer Learning (HATL). HATL exploits cluster structure shared between different data domains to perform transfer learning by imputing labels for unlabeled target data and to generate effective label queries during active learning. The resulting framework is flexible enough to perform not only adaptive transfer learning and accelerated active learning but also unsupervised and semi-supervised transfer learning. We derive an intuitive and useful upper bound on HATL\u2019s error when used to infer labels for unlabeled target points. We also present results on synthetic data that confirm both intuition and our analysis. Finally, we demonstrate HATL\u2019s empirical effectiveness on a benchmark data set for sentiment classification."}
{"_id": "2f80c49ba75a0e4dd854ebbcad2f84a4044efbfb", "title": "An Interactive Web Mapping Visualization of Urban Air Quality Monitoring Data of China", "text": "In recent years, main cities in China have been suffering from hazy weather, which is gaining great attention among the public, government managers and researchers in different areas. Many studies have been conducted on the topic of urban air quality to reveal different aspects of the air quality problem in China. This paper focuses on the visualization problem of the big air quality monitoring data of all main cities on a nationwide scale. To achieve the intuitive visualization of this dataset, this study develops two novel visualization tools for multi-granularity time series visualization (timezoom.js) and a dynamic symbol declutter map mashup layer for thematic mapping (symadpative.js). With the two invented tools, we develops an interactive web map visualization application of urban air quality data of all main cities in China. This application shows us significant air pollution findings at the nationwide scale. These results give us clues for further studies on air pollutant characteristics, forecasting and control in China. As the tools are invented for general visualization purposes of geo-referenced time series data, they can be applied to other environmental monitoring data (temperature, precipitation, etc.) through some configurations."}
{"_id": "0ae476484e5311e2ec2bedac533977bed941d7ae", "title": "Dynamic Motions by a Quadruped Musculoskeletal Robot with Angle-Dependent Moment Arms", "text": "Current robotics have failed to replicate the great dexterity of the animal\u2019s motions. We believe that it is because most of the works have focused on the controller part rather than on its body structure. Indeed, the musculoskeletal system is essential to produce the animal\u2019s motions and several researchers have attempted to exploit it [1][2][3]. However, most of them have used simple mono-articular muscles and torque control which cannot replicate the nonlinearity of the living tissues, muscles and nerves [4]. In contrast, bipedal robots with monoand bi-articular muscles for performing adaptive motions [5][6][7] have been produced. To further investigate the role of the musculoskeletal system for animal motions, we desgned a quadruped robot with monoand bi-articular muscles. We used artificial pneumatic muscles as actuators due to their high mass/power ratio and their high compliance. Although these actuators replicate several characteristics of biological muscles in terms of damping and elasticity, the length-tension relation differs from them [8][9]. In the result, the angle-torque relation of the robot with pneumatic muscles differed substantially from the one of animals. Thus, we tried to bring the angle-torque relation as close as the one of living tissues by devising the angle-moment arm relation."}
{"_id": "d37a34c204a8beefcaef4dddddb7a90c16e973d4", "title": "Learning Dexterous In-Hand Manipulation", "text": "We use reinforcement learning (RL) to learn dexterous in-hand manipulation policies which can perform vision-based object reorientation on a physical Shadow Dexterous Hand. The training is performed in a simulated environment in which we randomize many of the physical properties of the system like friction coefficients and an object\u2019s appearance. Our policies transfer to the physical robot despite being trained entirely in simulation. Our method does not rely on any human demonstrations, but many behaviors found in human manipulation emerge naturally, including finger gaiting, multi-finger coordination, and the controlled use of gravity. Our results were obtained using the same distributed RL system that was used to train OpenAI Five [43]. We also include a video of our results: https://youtu. be/jwSbzNHGflM."}
{"_id": "c592a4184204f9d65670f4b7bb9aab34cb983492", "title": "First response fire combat: Deep leaning based visible fire detection", "text": "Visible spectrum video based fire detection using non-stationary cameras has been an overlooked research problem. While many authors have successfully developed algorithms to identify and measure the proportions of uncontrolled fire using thermal or stationary surveillance cameras, the development of non-stationary systems provides a much larger application scope. We present a deep learning based approach, based on Google's Inception v3 and evaluate how the use of different optimizers, learning rates and reduction functions impact the convergence time and the results obtained. The experiment used a balanced two-class dataset, composed of images from videos captured through hand-held and drone attached cameras on real fire hazard situations. The results show that using any choice of algorithms, the Inception v3 architecture is able to converge and obtain results that surpass the state-of-the-art fire detection methods. We show that using a proper choice of learning rate, the network is able to achieve a 99% accuracy in less than one hundred interactions."}
{"_id": "40f2479545c8e3f44a09b3cc7bb2bcf60f0a37d1", "title": "Accurate Identification of Ontology Alignments at Different Granularity Levels", "text": "As more and more ontologies are defined with different terms, ontology matching plays a crucial role in addressing the semantic heterogeneity problem in many different disciplines. Many efforts have been made to discover correspondences among terms in different ontologies. Most studies directly match two ontologies by utilizing terminological and structural methods that rely on ontologies themselves only. However, the decentralized characteristic of ontologies raises the uncertainty in ontology matching. To address this problem, we propose a four-stage ontology matching framework (FOMF) to enhance ontology matching performance. It is built upon the commonly accepted claim that an external comprehensive knowledge base can be used as a semantic bridge between domain ontologies for ontology matching. First, FOMF semantically maps domain ontologies to a knowledge base and then produces different types of alignments, including equivalence, subclass, sameas, and instance alignments. Similarities between two domain ontologies are next employed to enhance the equivalence and sameas alignments discovery. Finally, based on acquired alignments, inferred alignments are deduced to guarantee the completeness of matching results. Our experimental results show the superiority of the proposed method over the existing ones."}
{"_id": "2d6177244636f892c1620e3e5c2870c5e3902b55", "title": "Average cost temporal-difference learning", "text": "We propose a variant of temporal-di!erence learning that approximates average and di!erential costs of an irreducible aperiodic Markov chain. Approximations are comprised of linear combinations of \"xed basis functions whose weights are incrementally updated during a single endless trajectory of the Markov chain. We present a proof of convergence (with probability 1) and a characterization of the limit of convergence. We also provide a bound on the resulting approximation error that exhibits an interesting dependence on the `mixing timea of the Markov chain. The results parallel previous work by the authors, involving approximations of discounted cost-to-go. ! 1999 Elsevier Science Ltd. All rights reserved."}
{"_id": "96e1b777e307086426a993081619b89e700f7f93", "title": "Game Development for Computer Science Education", "text": "Educators have long used digital games as platforms for teaching. Games tend to have several qualities that aren't typically found in homework: they often situate problems within a compelling alternate reality that unfolds through intriguing narrative, they often draw more upon a player's intrinsic motivations than extrinsic ones, they can facilitate deliberate low intensity practice, and they often emphasize a spirit of play instead of work.\n At ITiCSE 2016, this working group convened to survey the landscape of existing digital games that have been used to teach and learn computer science concepts. Our group discovered that these games lacked explicitly defined learning goals and even less evaluation of whether or not the games achieved these goals. As part of this process, we identified and played over 120 games that have been released or described in literature as means for learning computer science concepts. In our report, we classified how these games support the learning objectives outlined in the ACM/IEEE Computer Science Curricula 2013.\n While we found more games than we expected, few games explicitly stated their learning goals and even fewer were evaluated for their capacity to meet these goals. Most of the games we surveyed fell into two categories: short-lived proof-of-concept projects built by academics or closed-source games built by professional developers. Gathering adequate learning data is challenging in either situation. Our original intent for the second year of our working group was to prepare a comprehensive framework for collecting and analyzing learning data from computer science learning games. Upon further discussion, however, we decided that a better next step is to validate the design and development guidelines that we put forth in our final report for ITiCSE 2016.\n We extend this working group to a second year---with a mission to collaboratively develop a game with clearly defined learning objectives and define a methodology for evaluating its capacity to meet its goals."}
{"_id": "fa95c592b091e23c5e01b067381916b15c2afc69", "title": "ITIL-based IT service support process reengineering", "text": "The Information Technology Infrastructure Library (ITIL) supports best practices, reengineering activities and IT service support processes. ITIL framework only provides recommendations, and companies need to utilize this framework to improve their IT service support processes and establish best practices. This study provides a methodology on how to apply the ITIL framework for evaluating the IT service support processes, its reengineering and alignment to best practices, and subsequent integration into a decision support system framework. A case study approach was used to identify a set of Key Performance Indicators (KPI) which were monitored by a decision support system (DSS) for triggering on-going reengineering of IT service support processes. This paper focuses on the implementation of the ITIL guidelines at the operational level, improvement of the service desk, and incident, problem, change, release, and configuration management. It also presents the implementation of the ITIL guidelines at the tactical level for the improvement of the service level, capacity, IT service continuity, service availability, and security management. We conclude by providing recommendations for future research."}
{"_id": "5694cb74177dab60abdf95010256e1e21377a97e", "title": "Small footprint implementation of dual-microphone delay-and-sum beamforming for in-car speech enhancement", "text": "For effective speech processing in an automotive environment, speech enhancement is necessary due to significant levels of background noise. In this paper, we present a cost effective small footprint implementation of one particular speech enhancement technique: dual microphone delay-and-sum beamforming. In order to save resources, the implementation utilizes the overlapping frame property used in speech processing systems. The implementation also exhibits a simple interconnection structure leading to even greater resource saving. Experiment results show that the proposed design can produce enhanced output very close to that generated by a theoretical (floating-point) model while only requiring a modest hardware resource usage."}
{"_id": "ae18f67c10a5d4da91f128ec7a6cf7c784122cd5", "title": "Deep Recurrent Neural Networks for Acoustic Modelling", "text": "We present a novel deep Recurrent Neural Network (RNN) model for acoustic modelling in Automatic Speech Recognition (ASR). We term our contribution as a TC-DNN-BLSTM-DNN model, the model combines a Deep Neural Network (DNN) with Time Convolution (TC), followed by a Bidirectional LongShort Term Memory (BLSTM), and a final DNN. The first DNN acts as a feature processor to our model, the BLSTM then generates a context from the sequence acoustic signal, and the final DNN takes the context and models the posterior probabilities of the acoustic states. We achieve a 3.47 WER on the Wall Street Journal (WSJ) eval92 task or more than 8% relative improvement over the baseline DNN models"}
{"_id": "4ba9a40e2bfccbbff61e7937b02480784b627079", "title": "Gaining Access with Social Engineering: An Empirical Study of the Threat", "text": "AbstrAct Recently, research on information security has expanded from its purely technological orientation into striving to understand and explain the role of human behavior in security breaches. However, an area that has been lacking theory-grounded empirical study is in social engineering attacks. While there exists an extensive body of anecdotal literature, factors that account for attack success remains largely speculative. To better understand this increasing phenomenon, we developed a theoretical framework and conducted an empirical field study to investigate social engineering attacks, and from these results, we make recommendations for practice and further research."}
{"_id": "4e4311a5fd99b17bed31b7006a572d29a58cdcf3", "title": "A support vector machine classifier with automatic confidence and its application to gender classification", "text": "In this paper, we propose a support vector machine with automatic confidence (SVMAC) for pattern classification. The main contributions of this work to learning machines are twofold. One is that we develop an algorithm for calculating the label confidence value of each training sample. Thus, the label confidence values of all of the training samples can be considered in training support vector machines. The other one is that we propose a method for incorporating the label confidence value of each training sample into learning and derive the corresponding quadratic programming problems. To demonstrate the e ff ctiveness of the proposed SVMACs, a series of experiments are performed on three benchmarking pattern classification problems and a challenging gender classification problem. Experimental results show that the generalization performance of our SVMACs is superior to that of traditional SVMs."}
{"_id": "d54a9e8b0112cf464a3552ebc2d9b8c33053e0de", "title": "Mixing Confidential Transactions: Comprehensive Transaction Privacy for Bitcoin", "text": "The public nature of the blockchain has been shown to be a severe threat for the privacy of Bitcoin users. Even worse, since funds can be tracked and tainted, no two coins are equal, and fungibility, a fundamental property required in every currency, is at risk. With these threats in mind, several privacyenhancing technologies have been proposed to make Bitcoin more private. However, they either require a deep redesign of the currency, breaking many currently deployed features, or they address only specific privacy issues and consequently provide only very limited guarantees when deployed separately. The goal of this work is to overcome this trade-off. Building on CoinJoin, we design ValueShuffle, the first coin mixing protocol compatible with Confidential Transactions, a proposal to hide payment values in transactions. ValueShuffle ensures a mixing participant\u2019s anonymity and the confidentiality of her payment values not only against an attacker observing the blockchain but also against the other possibly malicious mixing participants and against network attackers. By combining ValueShuffle with the proposal for Confidential Transactions and additionally Stealth Addresses, our solution provides comprehensive privacy (payer anonymity, payee anonymity, and payment value privacy) without breaking with the design or the features of Bitcoin. We demonstrate that the combination of these three privacyenhancing technologies creates synergies that overcome the two major obstacles which so far have prohibited the deployment of coin mixing in practice, namely that users need to mix funds of the same value, and need to do so before they can actually spend the funds. As a result, our approach unleashes the full potential of coin mixing as a privacy solution for Bitcoin."}
{"_id": "1fe93f4143d17dc3f9057f965a058c6b5d942e05", "title": "Compressed sensing radar", "text": "A stylized compressed sensing radar is proposed in which the time- frequency plane is discretized into an N times N grid. Assuming the number of targets K is small (i.e., K Lt N2), then we can transmit a sufficiently \"incoherent\" pulse and employ the techniques of compressed sensing to reconstruct the target scene. A theoretical upper bound on the sparsity K is presented. Numerical simulations verify that even better performance can be achieved in practice. This novel compressed sensing approach offers great potential for better resolution over classical radar."}
{"_id": "3cd0b8a9d322acd3c928e9f208d59e9b22f8f3a0", "title": "ASR-based corrective feedback on pronunciation: does it really work?", "text": "We studied a group of immigrants who were following regular, teacher-fronted Dutch classes, and who were assigned to three groups using either a) Dutch CAPT, an ASR-based Computer Assisted Pronunciation Training (CAPT) system that provides feedback on a number of Dutch speech sounds that are problematic for L2 learners b) a CAPT system without feedback c) no CAPT system. Participants were tested before and after the training. The results show that the ASR-based feedback was effective in correcting the errors addressed in the training."}
{"_id": "6fdd6ed847b8591979dd273bd2be85198140f9e3", "title": "Plant classification system based on leaf features", "text": "This paper presents a classification approach based on Random Forests (RF) and Linear Discriminant Analysis (LDA) algorithms for classifying the different types of plants. The proposed approach consists of three phases that are pre-processing, feature extraction, and classification phases. Since most types of plants have unique leaves, so the classification approach presented in this research depends on plants leave. Leaves are different from each other by characteristics such as the shape, color, texture and the margin. The used dataset for this experiments is a database of different plant species with total of only 340 leaf images, was downloaded from UCI- Machine Learning Repository. It was used for both training and testing datasets with 10-fold cross-validation. Experimental results showed that LDA achieved classification accuracy of (92.65%) against the RF that achieved accuracy of (88.82%) with combination of shape, first order texture, Gray Level Co-occurrence Matrix (GLCM), HSV color moments, and vein features."}
{"_id": "1c9da6cef6b1be9c116b26dd52c341c0adcf7db2", "title": "Interactive Perception: Leveraging Action in Perception and Perception in Action", "text": "Recent approaches in robot perception follow the insight that perception is facilitated by interaction with the environment. These approaches are subsumed under the term Interactive Perception (IP). This view of perception provides the following benefits. First, interaction with the environment creates a rich sensory signal that would otherwise not be present. Second, knowledge of the regularity in the combined space of sensory data and action parameters facilitates the prediction and interpretation of the sensory signal. In this survey, we postulate this as a principle for robot perception and collect evidence in its support by analyzing and categorizing existing work in this area. We also provide an overview of the most important applications of IP. We close this survey by discussing remaining open questions. With this survey, we hope to help define the field of Interactive Perception and to provide a valuable resource for future research."}
{"_id": "ba81b31598b85259b20399e485a1ab156db5511f", "title": "An Unsupervised Approach to Recognizing Discourse Relations", "text": "We presentan unsupervisedapproachto recognizingdiscourserelationsof CONTRAST, EXPLANATION-EVIDENCE, CONDITION andELABORATION that hold betweenarbitraryspansof texts. We show that discourserelation classifierstrained on examples that are automaticallyextractedfrom massi ve amountsof text can be usedto distinguishbetweensomeof theserelationswith accuraciesashigh as 93%, even whenthe relationsarenot explicitly markedby cuephrases."}
{"_id": "9758f1cd6850c4131af304ec8ae57c8f148da8b3", "title": "ANCESTRY-CONSTRAINED PHYLOGENETIC ANALYSIS SUPPORTS THE INDO-EUROPEAN STEPPE HYPOTHESIS", "text": "Discussion of Indo-European origins and dispersal focuses on two hypotheses. Qualitative evidence from reconstructed vocabulary and correlations with archaeological data suggest that IndoEuropean languages originated in the Pontic-Caspian steppe and spread together with cultural innovations associated with pastoralism, beginning c. 6500\u20135500 bp. An alternative hypothesis, according to which Indo-European languages spread with the diffusion of farming from Anatolia, beginning c. 9500\u20138000 bp, is supported by statistical phylogenetic and phylogeographic analyses of lexical traits. The time and place of the Indo-European ancestor language therefore remain disputed. Here we present a phylogenetic analysis in which ancestry constraints permit more accurate inference of rates of change, based on observed changes between ancient or medieval languages and their modern descendants, and we show that the result strongly supports the steppe hypothesis. Positing ancestry constraints also reveals that homoplasy is common in lexical traits, contrary to the assumptions of previous work. We show that lexical traits undergo recurrent evolution due to recurring patterns of semantic and morphological change.*"}
{"_id": "336779e60b48443bfd5f45f24191616213cbaf81", "title": "The business model concept: theoretical underpinnings and empirical illustrations", "text": "Received: 13 December 2001 Revised: 27 March 2002 : 26 July 2002 Accepted:15 October 2002 Abstract The business model concept is becoming increasingly popular within IS, management and strategy literature. It is used within many fields of research, including both traditional strategy theory and in the emergent body of literature on e-business. However, the concept is often used independently from theory, meaning model components and their interrelations are relatively obscure. Nonetheless, we believe that the business model concept is useful in explaining the relation between IS and strategy. This paper offers an outline for a conceptual business model, and proposes that it should include customers and competitors, the offering, activities and organisation, resources and factor market interactions. The causal inter-relations and the longitudinal processes by which business models evolve should also be included. The model criticises yet draws on traditional strategy theory and on the literature that addresses business models directly. The business model is illustrated by an ERP implementation in a European multi-national company. European Journal of Information Systems (2003) 12, 49\u201359. doi:10.1057/ palgrave.ejis.3000446"}
{"_id": "4623accb0524d3b000866709ec27f1692cc9b15a", "title": "Design science in information systems research", "text": ""}
{"_id": "add1d5602e7aa2bae889ef554e2da7f589117d5b", "title": "Balancing customer and network value in business models for mobile services", "text": "Designing business models for mobile services is complex. A business model can be seen as a blueprint of four interrelated components: service offering, technical architecture, and organisational and financial arrangements. In this paper the connections among these components are explored by analysing the critical design issues in business models for mobile services, e.g., targeting and branding in the service domain, security and quality of service in the technology domain, network governance in the organisation domain, and revenue sharing in the finance domain. A causal framework is developed linking these critical design issues to expected customer value and expected network value, and hence, to business model viability."}
{"_id": "dd5df1147ae291917515a7956a6bc3d7b845f288", "title": "Evaluation in Design-Oriented Research", "text": "Design has been recognized for a long time both as art and as science. In the sixties of the previous century design-oriented research began to draw the attention of scientific researchers and methodologists, not only in technical engineering but also in the social sciences. However, a rather limited methodology for design-oriented research has been developed, especially as to the social sciences. In this article we introduce evaluation methodology and research methodology as a systematic input in the process of designing. A designing cycle is formulated with six stages, and for each of these stages operations, guidelines and criteria for evaluation are defined. All this may be used for a considerable improvement of the process and product of designing."}
{"_id": "26f9114b2fdc34696c483e0f29da3b3d89482741", "title": "Business Models for Electronic Markets", "text": "Introduction Electronic commerce can be defined loosely as \u201cdoing business electronically\u201d (European Commission 1997). Electronic commerce includes electronic trading of physical goods and of intangibles such as information. This encompasses all the trading steps such as online marketing, ordering, payment, and support for delivery. Electronic commerce includes the electronic provision of services, such as aftersales support or online legal advice. Finally it also includes electronic support for collaboration between companies, such as collaborative design."}
{"_id": "9f100217fbca173fe0d64d4527956d272735e41c", "title": "Geometric mechanics of curved crease origami.", "text": "Folding a sheet of paper along a curve can lead to structures seen in decorative art and utilitarian packing boxes. Here we present a theory for the simplest such structure: an annular circular strip that is folded along a central circular curve to form a three-dimensional buckled structure driven by geometrical frustration. We quantify this shape in terms of the radius of the circle, the dihedral angle of the fold, and the mechanical properties of the sheet of paper and the fold itself. When the sheet is isometrically deformed everywhere except along the fold itself, stiff folds result in creases with constant curvature and oscillatory torsion. However, relatively softer folds inherit the broken symmetry of the buckled shape with oscillatory curvature and torsion. Our asymptotic analysis of the isometrically deformed state is corroborated by numerical simulations that allow us to generalize our analysis to study structures with multiple curved creases."}
{"_id": "c979efe1f0a8b0b343ea332368e5b51dc153c522", "title": "Policy Optimization with Demonstrations", "text": "Exploration remains a significant challenge to reinforcement learning methods, especially in environments where reward signals are sparse. Recent methods of learning from demonstrations have shown to be promising in overcoming exploration difficulties but typically require considerable highquality demonstrations that are difficult to collect. We propose to effectively leverage available demonstrations to guide exploration through enforcing occupancy measure matching between the learned policy and current demonstrations, and develop a novel Policy Optimization from Demonstration (POfD) method. We show that POfD induces implicit dynamic reward shaping and brings provable benefits for policy improvement. Furthermore, it can be combined with policy gradient methods to produce state-of-the-art results, as demonstrated experimentally on a range of popular benchmark sparse-reward tasks, even when the demonstrations are few and imperfect."}
{"_id": "17ee773e6b2baecba638e83ffbcbb4f103231236", "title": "Are You Asking the Right Questions? Teaching Machines to Ask Clarification Questions", "text": "Inquiry is fundamental to communication, and machines cannot effectively collaborate with humans unless they can ask questions. In this thesis work, we explore how can we teach machines to ask clarification questions when faced with uncertainty, a goal of increasing importance in today\u2019s automated society. We do a preliminary study using data from StackExchange, a plentiful online resource where people routinely ask clarifying questions to posts so that they can better offer assistance to the original poster. We build neural network models inspired by the idea of the expected value of perfect information: a good question is one whose expected answer is going to be most useful. To build generalizable systems, we propose two future research directions: a template-based model and a sequence-to-sequence based neural generative model."}
{"_id": "b305233e5b300d7fc77fb9595c2c3ff7a6a9b91a", "title": "Sentiment Adaptive End-to-End Dialog Systems", "text": "v This work focuses on incorporating sentiment information into task-oriented dialogue systems. v Current end-to-end approaches only consider user semantic inputs in learning and under-utilize other user information. v But the ultimate evaluator of dialog systems is the end-users and their sentiment is a direct reflection of user satisfaction and should be taken into consideration. v Therefore, we propose to include user sentiment obtained through multimodal information (acoustic, dialogic and textual), in the end-to-end learning framework to make systems more user-adaptive and effective. v We incorporated user sentiment information in both supervised and reinforcement learning settings. v In both settings, adding sentiment information reduced the dialog length and improved the task success rate on a bus information search task. Multimodal Sentiment Detector v We manually annotated 50 dialogs with 517 conversation turns to train this sentiment detector. The annotated set is open to public. v Prediction made by the detector will be used in the supervised learning and reinforcement learning. v Three sets of features: 1) Acoustic features; 2) Dialogic features; 3) Textual features. v Dialogic features include: 1) Interruption; 2) Button usage; 3) Repetitions; 4) Start over. These four categories of dialog features are chosen based on the previous literature and the observed statistics in the dataset."}
{"_id": "e3adb12fbd126ce9abbbec86c6ac1642e61ded34", "title": "Social Media for Opioid Addiction Epidemiology: Automatic Detection of Opioid Addicts from Twitter and Case Studies", "text": "Opioid (e.g., heroin and morphine) addiction has become one of the largest and deadliest epidemics in the United States. To combat such deadly epidemic, there is an urgent need for novel tools and methodologies to gain new insights into the behavioral processes of opioid abuse and addiction. The role of social media in biomedical knowledge mining has turned into increasingly significant in recent years. In this paper, we propose a novel framework named AutoDOA to automatically detect the opioid addicts from Twitter, which can potentially assist in sharpening our understanding toward the behavioral process of opioid abuse and addiction. In AutoDOA, to model the users and posted tweets as well as their rich relationships, a structured heterogeneous information network (HIN) is first constructed. Then meta-path based approach is used to formulate similarity measures over users and different similarities are aggregated using Laplacian scores. Based on HIN and the combined meta-path, to reduce the cost of acquiring labeled examples for supervised learning, a transductive classification model is built for automatic opioid addict detection. To the best of our knowledge, this is the first work to apply transductive classification in HIN into drug-addiction domain. Comprehensive experiments on real sample collections from Twitter are conducted to validate the effectiveness of our developed system AutoDOA in opioid addict detection by comparisons with other alternate methods. The results and case studies also demonstrate that knowledge from daily-life social media data mining could support a better practice of opioid addiction prevention and treatment."}
{"_id": "5c6f5ed9a3d7b1754d26157844511a8f8a8e76f3", "title": "Integrated Wideband Self-Interference Cancellation in the RF Domain for FDD and Full-Duplex Wireless", "text": "A fully integrated technique for wideband cancellation of transmitter (TX) self-interference (SI) in the RF domain is proposed for multiband frequency-division duplexing (FDD) and full-duplex (FD) wireless applications. Integrated wideband SI cancellation (SIC) in the RF domain is accomplished through: 1) a bank of tunable, reconfigurable second-order high-Q RF bandpass filters in the canceller that emulate the antenna interface's isolation (essentially frequency-domain equalization in the RF domain) and 2) a linear N-path Gm-C filter implementation with embedded variable attenuation and phase shifting. A 0.8-1.4 GHz receiver (RX) with the proposed wideband SIC circuits is implemented in a 65 nm CMOS process. In measurement, >20 MHz 20 dB cancellation bandwidth (BW) is achieved across frequency-selective antenna interfaces: 1) a custom-designed LTElike 0.780/0.895 GHz duplexer with TX/RX isolation peak magnitude of 30 dB, peak group delay of 11 ns, and 7 dB magnitude variation across the TX band for FDD and 2) a 1.4 GHz antenna pair for FD wireless with TX/RX isolation peak magnitude of 32 dB, peak group delay of 9 ns, and 3 dB magnitude variation over 1.36-1.38 GHz. For FDD, SIC enhances the effective outof-band (OOB) IIP3 and IIP2 to +25-27 dBm and +90 dBm, respectively (enhancements of 8-10 and 29 dB, respectively). For FD, SIC eliminates RX gain compression for as high as -8 dBm of peak in-band (IB) SI, and enhances effective IB IIP3 and IIP2 by 22 and 58 dB."}
{"_id": "2dc1775eefe3dc9f43ae25d3c7335378615fa33d", "title": "The analyse of the various methods for location of Data Matrix codes in images", "text": "Data Matrix codes belong to the group of 2-D bar codes, which are widely used in marking the products in storing, production, distribution and sales processes. Their compact matrix structure enables to store big amount of information in a very small area compared to conventional 1-D bar codes. In following paper we compare several methods for detection of Data Matrix codes in images. When locating the position of the code we start from typical bordering of Data Matrix code \u2014 which forms \u201cL\u201d so called Finder Pattern and to it the parallel dotting so called Timing Pattern. On the first stage we try to locate Finder Pattern using the edge detection or adaptive thresholding, then continuing with connecting the points into the continuous regions, we are searching for the regions, which could represent the Finder Pattern. On the second stage we verify Timing Pattern where the number of crossing between the background and foreground must be even."}
{"_id": "24bbccc21bd4826365460939a5ca295f690862c5", "title": "Quorum responses and consensus decision making.", "text": "Animal groups are said to make consensus decisions when group members come to agree on the same option. Consensus decisions are taxonomically widespread and potentially offer three key benefits: maintenance of group cohesion, enhancement of decision accuracy compared with lone individuals and improvement in decision speed. In the absence of centralized control, arriving at a consensus depends on local interactions in which each individual's likelihood of choosing an option increases with the number of others already committed to that option. The resulting positive feedback can effectively direct most or all group members to the best available choice. In this paper, we examine the functional form of the individual response to others' behaviour that lies at the heart of this process. We review recent theoretical and empirical work on consensus decisions, and we develop a simple mathematical model to show the central importance to speedy and accurate decisions of quorum responses, in which an animal's probability of exhibiting a behaviour is a sharply nonlinear function of the number of other individuals already performing this behaviour. We argue that systems relying on such quorum rules can achieve cohesive choice of the best option while also permitting adaptive tuning of the trade-off between decision speed and accuracy."}
{"_id": "d663e0f387bc9dc826b2a8594f7967c3d6a0e804", "title": "Single-unit pattern generators for quadruped locomotion", "text": "Legged robots can potentially venture beyond the limits of wheeled vehicles. While creating controllers for such robots by hand is possible, evolutionary algorithms are an alternative that can reduce the burden of hand-crafting robotic controllers. Although major evolutionary approaches to legged locomotion can generate oscillations through popular techniques such as continuous time recurrent neural networks (CTRNNs) or sinusoidal input, they typically face a challenge in maintaining long-term stability. The aim of this paper is to address this challenge by introducing an effective alternative based on a new type of neuron called a single-unit pattern generator (SUPG). The SUPG, which is indirectly encoded by a compositional pattern producing network (CPPN) evolved by HyperNEAT, produces a flexible temporal activation pattern that can be reset and repeated at any time through an explicit trigger input, thereby allowing it to dynamically recalibrate over time to maintain stability. The SUPG approach, which is compared to CTRNNs and sinusoidal input, is shown to produce natural-looking gaits that exhibit superior stability over time, thereby providing a new alternative for evolving oscillatory locomotion."}
{"_id": "cb602795e6bdcf0938761ca08e77df4e9842ba9f", "title": "A modular control scheme for PMSM speed control with pulsating torque minimization", "text": "In this paper, a modular control approach is applied to a permanent-magnet synchronous motor (PMSM) speed control. Based on the functioning of the individual module, the modular approach enables the powerfully intelligent and robust control modules to easily replace any existing module which does not perform well, meanwhile retaining other existing modules which are still effective. Property analysis is first conducted for the existing function modules in a conventional PMSM control system: proportional-integral (PI) speed control module, reference current-generating module, and PI current control module. Next, it is shown that the conventional PMSM controller is not able to reject the torque pulsation which is the main hurdle when PMSM is used as a high-performance servo. By virtue of the internal model, to nullify the torque pulsation it is imperative to incorporate an internal model in the feed-through path. This is achieved by replacing the reference current-generating module with an iterative learning control (ILC) module. The ILC module records the cyclic torque and reference current signals over one entire cycle, and then uses those signals to update the reference current for the next cycle. As a consequence, the torque pulsation can be reduced significantly. In order to estimate the torque ripples which may exceed certain bandwidth of a torque transducer, a novel torque estimation module using a gain-shaped sliding-mode observer is further developed to facilitate the implementation of torque learning control. The proposed control system is evaluated through real-time implementation and experimental results validate the effectiveness."}
{"_id": "f2d9f9beb61f61edc50257292c63dc9e7c6339c4", "title": "LIP ACTIVITY DETECTION FOR TALKING FACES CLASSIFICATION IN TV-CONTENT", "text": "Our objective is to index people in a TV-Content. In this context, because of multi-face shots and non-speaking face shots, it is difficult to determine which face is speaking. There is no guaranteed synchronization between sequences of a person\u2019s appearance and sequences of his or her speech. In this work, we want to separate talking and non-talking faces by detecting lip motion. We propose a method to detect the lip motion by measuring the degree of disorder of pixel directions around the lip. Results of experiments on a TV-Show database show that a high correct classification rate can be achieved by the proposed method."}
{"_id": "f9791399e87bba3f911fd8f570443cf721cf7b1e", "title": "Modelling, Visualising and Summarising Documents with a Single Convolutional Neural Network", "text": "Capturing the compositional process which maps the meaning of words to that of documents is a central challenge for researchers in Natural Language Processing and Information Retrieval. We introduce a model that is able to represent the meaning of documents by embedding them in a low dimensional vector space, while preserving distinctions of word and sentence order crucial for capturing nuanced semantics. Our model is based on an extended Dynamic Convolution Neural Network, which learns convolution filters at both the sentence and document level, hierarchically learning to capture and compose low level lexical features into high level semantic concepts. We demonstrate the effectiveness of this model on a range of document modelling tasks, achieving strong results with no feature engineering and with a more compact model. Inspired by recent advances in visualising deep convolution networks for computer vision, we present a novel visualisation technique for our document networks which not only provides insight into their learning process, but also can be interpreted to produce a compelling automatic summarisation system for texts."}
{"_id": "6f2cdce2eb8e6afdfd9e81316ff08f80e972cc47", "title": "The Computable News project: Research in the Newsroom", "text": "We report on a four year academic research project to build a natural language processing platform in support of a large media company. The Computable News platform processes news stories, producing a layer of structured data that can be used to build rich applications. We describe the underlying platform and the research tasks that we explored building it. The platform supports a wide range of prototype applications designed to support different newsroom functions. We hope that this qualitative review provides some insight into the challenges involved in this type of project."}
{"_id": "1cc7013247056e45264de9817171d72690181692", "title": "A language modeling framework for resource selection and results merging", "text": "Statistical language models have been proposed recently for several information retrieval tasks, including the resource selection task in distributed information retrieval. This paper extends the language modeling approach to integrate resource selection, ad-hoc searching, and merging of results from different text databases into a single probabilistic retrieval model. This new approach is designed primarily for Intranet environments, where it is reasonable to assume that resource providers are relatively homogeneous and can adopt the same kind of search engine. Experiments demonstrate that this new, integrated approach is at least as effective as the prior state-of-the-art in distributed IR."}
{"_id": "27d205314a4e416a685d153a8c7bd65966f9f7d1", "title": "Frontal fibrosing alopecia: a clinical review of 36 patients.", "text": "BACKGROUND\nFrontal fibrosing alopecia (FFA) is a primary lymphocytic cicatricial alopecia with a distinctive clinical pattern of progressive frontotemporal hairline recession. Currently, there are no evidence-based studies to guide treatment for patients with FFA; thus, treatment options vary among clinicians.\n\n\nOBJECTIVES\nWe report clinical findings and treatment outcomes of 36 patients with FFA, the largest cohort to date. Further, we report the first evidence-based study of the efficacy of hydroxychloroquine in FFA using a quantitative clinical score, the Lichen Planopilaris Activity Index (LPPAI).\n\n\nMETHODS\nA retrospective case note review was performed of 36 adult patients with FFA. Data were collected on demographics and clinical findings. Treatment responses to hydroxychloroquine, doxycycline and mycophenolate mofetil were assessed using the LPPAI. Adverse events were monitored.\n\n\nRESULTS\nMost patients in our cohort were female (97%), white (92%) and postmenopausal (83%). Apart from hairline recession, 75% also reported eyebrow loss. Scalp pruritus (67%) and perifollicular erythema (86%) were the most common presenting symptom and sign, respectively. A statistically significant reduction in signs and symptoms in subjects treated with hydroxychloroquine (P < 0\u00b705) was found at both 6- and 12-month follow up.\n\n\nCONCLUSIONS\nIn FFA, hairline recession, scalp pruritus, perifollicular erythema and eyebrow loss are common at presentation. Despite the limitations of a retrospective review, our data reveal that hydroxychloroquine is significantly effective in reducing signs and symptoms of FFA after both 6 and 12 months of treatment. However, the lack of a significant reduction in signs and symptoms between 6 and 12 months indicates that the maximal benefits of hydroxychloroquine are evident within the first 6 months of use."}
{"_id": "94ee91bc1ec67bb2fe8a1d5b48713f19db98be54", "title": "Incorporating Entity Correlation Knowledge into Topic Modeling", "text": "Latent Dirichlet Allocation (LDA) is a popular topic modeling technique for exploring hidden topics in text corpora. Standard LDA model suffers the problem that the topic assignment of each word is independent and lacks the mechanism to utilize the rich prior background knowledge to learn semantically coherent topics. To address this problem, in this paper, we propose a model called Entity Correlation Latent Dirichlet Allocation (EC-LDA) by incorporating constraints derived from entity correlations as the prior knowledge into LDA topic model. Different from other knowledge-based topic models which extract the knowledge information directly from the train dataset itself or even from the human judgements, for our work, we take advantage of the prior knowledge from the external knowledge base (Freebase 1, in our experiment). Hence, our approach is more suitable to widely kinds of text corpora in different scenarios. We fit our proposed model using Gibbs sampling. Experiment results demonstrate the effectiveness of our model compared with standard LDA."}
{"_id": "8b024d8b0b62593d44306613860a4bedc857a021", "title": "Learning by Googling", "text": "The goal of giving a well-defined meaning to information is currently shared by endeavors such as the Semantic Web as well as by current trends within Knowledge Management. They all depend on the large-scale formalization of knowledge and on the availability of formal metadata about information resources. However, the question how to provide the necessary formal metadata in an effective and efficient way is still not solved to a satisfactory extent. Certainly, the most effective way to provide such metadata as well as formalized knowledge is to let humans encode them directly into the system, but this is neither efficient nor feasible. Furthermore, as current social studies show, individual knowledge is often less powerful than the collective knowledge of a certain community.As a potential way out of the knowledge acquisition bottleneck, we present a novel methodology that acquires collective knowledge from the World Wide Web using the GoogleTM API. In particular, we present PANKOW, a concrete instantiation of this methodology which is evaluated in two experiments: one with the aim of classifying novel instances with regard to an existing ontology and one with the aim of learning sub-/superconcept relations."}
{"_id": "6ab5acb5f32ef2d28f91109d40e5e859a9c101bf", "title": "The challenge problem for automated detection of 101 semantic concepts in multimedia", "text": "We introduce the challenge problem for generic video indexing to gain insight in intermediate steps that affect performance of multimedia analysis methods, while at the same time fostering repeatability of experiments. To arrive at a challenge problem, we provide a general scheme for the systematic examination of automated concept detection methods, by decomposing the generic video indexing problem into 2 unimodal analysis experiments, 2 multimodal analysis experiments, and 1 combined analysis experiment. For each experiment, we evaluate generic video indexing performance on 85 hours of international broadcast news data, from the TRECVID 2005/2006 benchmark, using a lexicon of 101 semantic concepts. By establishing a minimum performance on each experiment, the challenge problem allows for component-based optimization of the generic indexing issue, while simultaneously offering other researchers a reference for comparison during indexing methodology development. To stimulate further investigations in intermediate analysis steps that inuence video indexing performance, the challenge offers to the research community a manually annotated concept lexicon, pre-computed low-level multimedia features, trained classifier models, and five experiments together with baseline performance, which are all available at http://www.mediamill.nl/challenge/."}
{"_id": "dc9681dbb3c9cc83b4636ec97680aa3326a7e7d0", "title": "Robust video denoising using low rank matrix completion", "text": "Most existing video denoising algorithms assume a single statistical model of image noise, e.g. additive Gaussian white noise, which often is violated in practice. In this paper, we present a new patch-based video denoising algorithm capable of removing serious mixed noise from the video data. By grouping similar patches in both spatial and temporal domain, we formulate the problem of removing mixed noise as a low-rank matrix completion problem, which leads to a denoising scheme without strong assumptions on the statistical properties of noise. The resulting nuclear norm related minimization problem can be efficiently solved by many recently developed methods. The robustness and effectiveness of our proposed denoising algorithm on removing mixed noise, e.g. heavy Gaussian noise mixed with impulsive noise, is validated in the experiments and our proposed approach compares favorably against some existing video denoising algorithms."}
{"_id": "005aea80a403da18f95fcb9944236a976d83580e", "title": "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information", "text": "This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f/spl isin/C/sup N/ and a randomly chosen set of frequencies /spl Omega/. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set /spl Omega/? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t)=/spl sigma//sub /spl tau//spl isin/T/f(/spl tau/)/spl delta/(t-/spl tau/) obeying |T|/spl les/C/sub M//spl middot/(log N)/sup -1/ /spl middot/ |/spl Omega/| for some constant C/sub M/>0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1-O(N/sup -M/), f can be reconstructed exactly as the solution to the /spl lscr//sub 1/ minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C/sub M/ which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T|/spl middot/logN). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1-O(N/sup -M/) would in general require a number of frequency samples at least proportional to |T|/spl middot/logN. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f."}
{"_id": "023f6fc69fe1f6498e35dbf85932ecb549d36ca4", "title": "A Singular Value Thresholding Algorithm for Matrix Completion", "text": "This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices {Xk ,Y k}, and at each step mainly performs a soft-thresholding operation on the singular values of the matrix Y k. There are two remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates {Xk} is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. On the theoretical side, we provide a convergence analysis showing that the sequence of iterates converges. On the practical side, we provide numerical examples in which 1, 000 \u00d7 1, 000 matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are connected with the recent literature on linearized Bregman iterations for 1 minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms."}
{"_id": "715f389c2879cd4a01d315efe09c94c5c0979176", "title": "Prevalence and Predictors of Maternal Anemia during Pregnancy in Gondar, Northwest Ethiopia: An Institutional Based Cross-Sectional Study", "text": "Background. Anaemia is a global public health problem which has an eminence impact on pregnant mother. The aim of this study was to assess the prevalence and predictors of maternal anemia. Method. A cross-sectional study was conducted from March 1 to April 30, 2012, on 302 pregnant women who attended antenatal care at Gondar University Hospital. Interview-based questionnaire, clinical history, and laboratory tests were used to obtain data. Bivariate and multivariate logistic regression was used to identify predictors. Result. The prevalence of anemia was 16.6%. Majority were mild type (64%) and morphologically normocytic normochromic (76%) anemia. Anemia was high at third trimester (18.9%). Low family income (AOR [95% CI] = 3.1 [1.19, 8.33]), large family size (AOR [95% CI] = 4.14 [4.13, 10.52]), hookworm infection (AOR [95% CI] = 2.72 [1.04, 7.25]), and HIV infection (AOR [95% CI] = 5.75 [2.40, 13.69]) were independent predictors of anemia. Conclusion. The prevalence of anemia was high; mild type and normocytic normochromic anemia was dominant. Low income, large family size, hookworm infection, and HIV infection were associated with anemia. Hence, efforts should be made for early diagnosis and management of HIV and hookworm infection with special emphasis on those having low income and large family size."}
{"_id": "119a5ffb74649f1c0248b336bcd7928b36f4438f", "title": "An IoT based reference architecture for smart water management processes", "text": "Water is a vital resource for life, and its management is a key issue nowadays. Information and communications technology systems for water control are currently facing interoperability problems due to the lack of support of standardization in monitory and control equipment. This problem affects various processes in water management, such as water consumption, distribution, system identification and equipment maintenance. OPC UA (Object Linking and Embedding for Process Control Unified Architecture) is a platform independent service-oriented architecture for the control of processes in the logistic and manufacturing sectors. Based on this standard we propose a smart water management model combining Internet of Things technologies with business processes coordination and decision support systems. We provide an architecture for sub-system interaction and a detailed description of the physical scenario in which we will test our implementation, allowing specific vendor equipment to be manageable and interoperable in the specific context of water management processes."}
{"_id": "e9839782321e25ed87f1a14abf7107e0d037e630", "title": "Multimodal Word Distributions", "text": "Word embeddings provide point representations of words containing useful semantic information. We introduce multimodal word distributions formed from Gaussian mixtures, for multiple word meanings, entailment, and rich uncertainty information. To learn these distributions, we propose an energy-based max-margin objective. We show that the resulting approach captures uniquely expressive semantic information, and outperforms alternatives, such as word2vec skip-grams, and Gaussian embeddings, on benchmark datasets such as word similarity and entailment."}
{"_id": "d5a14f0808d6f564b6219369986447f3a777972b", "title": "Learning to dress: synthesizing human dressing motion via deep reinforcement learning", "text": "Creating animation of a character putting on clothing is challenging due to the complex interactions between the character and the simulated garment. We take a model-free deep reinforcement learning (deepRL) approach to automatically discovering robust dressing control policies represented by neural networks. While deepRL has demonstrated several successes in learning complex motor skills, the data-demanding nature of the learning algorithms is at odds with the computationally costly cloth simulation required by the dressing task. This paper is the first to demonstrate that, with an appropriately designed input state space and a reward function, it is possible to incorporate cloth simulation in the deepRL framework to learn a robust dressing control policy. We introduce a salient representation of haptic information to guide the dressing process and utilize it in the reward function to provide learning signals during training. In order to learn a prolonged sequence of motion involving a diverse set of manipulation skills, such as grasping the edge of the shirt or pulling on a sleeve, we find it necessary to separate the dressing task into several subtasks and learn a control policy for each subtask. We introduce a policy sequencing algorithm that matches the distribution of output states from one task to the input distribution for the next task in the sequence. We have used this approach to produce character controllers for several dressing tasks: putting on a t-shirt, putting on a jacket, and robot-assisted dressing of a sleeve."}
{"_id": "14bc4279dd1d7b1ff48dbced4634983343ee6dc6", "title": "OmniPres project IST-2001-39237 Deliverable 5 Measuring Presence : A Guide to Current Measurement Approaches", "text": "This compendium constitutes a comprehensive overview of presence measures described in the literature so far. It contains both subjective and objective approaches to presence measurement. The theoretical basis of measures is described, along with research in which they have been applied, and other relevant literature. Authors: Joy van Baren and Wijnand IJsselsteijn Project funded by the European Community under the \"Information Society Technologies\" Programme"}
{"_id": "e63ade93d75bc8f34639e16e2b15dc018ec9c208", "title": "Clinical applications of the Miniscrew Anchorage System.", "text": ""}
{"_id": "e51e557cf4701510916bb1444074f62e6a04f889", "title": "Anonymous Walk Embeddings", "text": "The task of representing entire graphs has seen a surge of prominent results, mainly due to learning convolutional neural networks (CNNs) on graphstructured data. While CNNs demonstrate stateof-the-art performance in graph classification task, such methods are supervised and therefore steer away from the original problem of network representation in task-agnostic manner. Here, we coherently propose an approach for embedding entire graphs and show that our feature representations with SVM classifier increase classification accuracy of CNN algorithms and traditional graph kernels. For this we describe a recently discovered graph object, anonymous walk, on which we design task-independent algorithms for learning graph representations in explicit and distributed way. Overall, our work represents a new scalable unsupervised learning of state-of-the-art representations of entire graphs."}
{"_id": "d7231356f1d2d87fd02d12c5da88f9a7f0caf541", "title": "Evaluation Of Linear Interpolation Smoothing On Naive Bayes Spam Classifier", "text": "The inconvenience associated with spams and the cost of having an important mail misclassified as spam have made all efforts at improving spam filtering worthwhile. The Naive Bayes algorithm has been found to be successful in properly classifying mails. However, they are not perfect. Recent researches have introduced the idea of smoothing into the Naive Bayes algorithm and they have been found to produce better classification. This study applies the concept of linear interpolation smoothing to Naive Bayes spam classification. The resulting classifier did well at improving spam classification and also reducing false positives."}
{"_id": "66b0d9c3b89d3088ac05102880e18e47f4afcd79", "title": "CS Unplugged and Middle-School Students' Views, Attitudes, and Intentions Regarding CS", "text": "Many students hold incorrect ideas and negative attitudes about computer science (CS). In order to address these difficulties, a series of learning activities called Computer Science Unplugged was developed by Tim Bell and his colleagues. These activities expose young people to central concepts in CS in an entertaining way without requiring a computer. The CS Unplugged activities have become more and more popular among CS educators and several activities are recommended in the ACM K-12 curriculum for elementary schools. CS Unplugged is used worldwide and has been translated into many languages.\n We examined the effect of the CS Unplugged activities on middle-school students\u2019 ideas about CS and their desire to consider and study it in high school. The results indicate that following the activities the ideas of the students on what CS is about were partially improved, but their desire to study CS lessened.\n In order to provide possible explanations to these results, we analyzed the CS Unplugged activities to determine to what extent the objectives of CS Unplugged were addressed in the activities. In addition, we checked whether the activities were designed according to constructivist principles and whether they were explicitly linked to central concepts in CS. We found that only some of the objectives were addressed in the activities, that the activities do not engage with the students\u2019 prior knowledge and that most of the activities are not explicitly linked to central concepts in CS. We offer suggestions for modifying the CS Unplugged activities so that they will be more likely to achieve their objectives."}
{"_id": "44925e155f7b15b573e6f3a0bcf023b7623d5e90", "title": "Assistive Clothing Pattern Recognition for Visually Impaired People", "text": "Choosing clothes with complex patterns and colors is a challenging task for visually impaired people. Automatic clothing pattern recognition is also a challenging research problem due to rotation, scaling, illumination, and especially large intraclass pattern variations. We have developed a camera-based prototype system that recognizes clothing patterns in four categories (plaid, striped, patternless, and irregular) and identifies 11 clothing colors. The system integrates a camera, a microphone, a computer, and a Bluetooth earpiece for audio description of clothing patterns and colors. A camera mounted upon a pair of sunglasses is used to capture clothing images. The clothing patterns and colors are described to blind users verbally. This system can be controlled by speech input through microphone. To recognize clothing patterns, we propose a novel Radon Signature descriptor and a schema to extract statistical properties from wavelet subbands to capture global features of clothing patterns. They are combined with local features to recognize complex clothing patterns. To evaluate the effectiveness of the proposed approach, we used the CCNY Clothing Pattern dataset. Our approach achieves 92.55% recognition accuracy which significantly outperforms the state-of-the-art texture analysis methods on clothing pattern recognition. The prototype was also used by ten visually impaired participants. Most thought such a system would support more independence in their daily life but they also made suggestions for improvements."}
{"_id": "8f84fd69ea302f28136a756b433ad9a4711571c2", "title": "An O(sqrt(|v|) |E|) Algorithm for Finding Maximum Matching in General Graphs", "text": ""}
{"_id": "47ff9ef572e8ae8ce8cb1ede47d0b9f215fd30b0", "title": "Automatic prediction of frustration", "text": "Predicting when a person might be frustrated can provide an intelligent system with important information about when to initiate interaction. For example, an automated Learning Companion or Intelligent Tutoring System might use this information to intervene, providing support to the learner who is likely to otherwise quit, while leaving engaged learners free to discover things without interruption. This paper presents the first automated method that assesses, using multiple channels of affect-related information, whether a learner is about to click on a button saying \u2018\u2018I\u2019m frustrated.\u2019\u2019 The new method was tested on data gathered from 24 participants using an automated Learning Companion. Their indication of frustration was automatically predicted from the collected data with 79% accuracy \u00f0chance 1\u20444 58%\u00de. The new assessment method is based on Gaussian process classification and Bayesian inference. Its performance suggests that non-verbal channels carrying affective cues can help provide important information to a system for formulating a more intelligent response. r 2007 Elsevier Ltd. All rights reserved."}
{"_id": "54588be2456c4974d113f066c978feef04b6dacd", "title": "utomatic detection of skin defects in citrus fruits using a multivariate image nalysis approach", "text": "One of the main problems in the post-harvest processing of citrus is the detection of visual defects in order to classify the fruit depending on their appearance. Species and cultivars of citrus present a high rate of unpredictability in texture and colour that makes it difficult to develop a general, unsupervised method able of perform this task. In this paper we study the use of a general approach that was originally developed for the detection of defects in random colour textures. It is based on a Multivariate Image Analysis strategy and uses Principal Component Analysis to extract a reference eigenspace from a matrix built by unfolding colour and spatial data from samples of defect-free peel. Test images are also unfolded and projected onto the reference eigenspace and the result is a score matrix which is used to compute defective maps based on the T2 statistic. In addition, a multiresolution scheme is introduced nsupervised Methods in the original method to speed up the process. Unlike the techniques commonly used for the detection of defects in fruits, this is an unsupervised method that only needs a few samples to be trained. It is also a simple approach that is suitable for real-time compliance. Experimental work was performed on 120 samples of oranges and mandarins from four different cultivars: Clemenules, Marisol, Fortune, and Valencia. The success ratio for the detection of individual defects was 91.5%, while the classification ratio of damaged/sound samples was 94.2%. These results show that the studied method can be suitable for on. the task of citrus inspecti"}
{"_id": "f2020cc2d672f46fb1ad2311c19a82516d57c50d", "title": "Improved random redundant iterative HDPC decoding", "text": "An iterative algorithm for soft-input soft-output (SISO) decoding of classical algebraic cyclic block codes is presented below. Inspired by other approaches for high performance belief propagation (BP) decoding, this algorithm requires up to 10 times less computational complexity than other methods that achieve similar performance. By utilizing multiple BP decoders, and using random permutation taken from the permutation group of the code, this algorithm reaches near maximum likelihood performance. A computational complexity comparison of the proposed algorithm versus other methods is presented as well. This includes complexity versus performance analysis, allowing one to trade between the former and the latter, according to ones needs."}
{"_id": "d764fd52625933692843941825845fcd5ee27f0d", "title": "Unsupervised Video Object Segmentation with Motion-Based Bilateral Networks", "text": "In this work, we study the unsupervised video object segmentation problem where moving objects are segmented without prior knowledge of these objects. First, we propose a motion-based bilateral network to estimate the background based on the motion pattern of non-object regions. The bilateral network reduces false positive regions by accurately identifying background objects. Then, we integrate the background estimate from the bilateral network with instance embeddings into a graph, which allows multiple frame reasoning with graph edges linking pixels from different frames. We classify graph nodes by defining and minimizing a cost function, and segment the video frames based on the node labels. The proposed method outperforms previous state-of-the-art unsupervised video object segmentation methods against the DAVIS 2016 and the FBMS-59 datasets."}
{"_id": "a970352e2a6c3c998c4e483e2d78c4b3643c7809", "title": "Moving Target Defense - Creating Asymmetric Uncertainty for Cyber Threats", "text": "Excellent book is always being the best friend for spending little time in your office, night time, bus, and everywhere. It will be a good way to just look, open, and read the book while in that time. As known, experience and skill don't always come with the much money to acquire them. Reading this book with the PDF moving target defense creating asymmetric uncertainty for cyber threats will let you know more things."}
{"_id": "353ec611af298e87dd8baf1ac7fb0038182f0fa7", "title": "Decade-bandwidth planar balun using CPW-to-slotline transition for UHF applications", "text": "This paper presents an improved broadband balun using a coplanar-waveguide(CPW)-to-slotline field transformation. It operates at very wide frequency, and is of compact size since it does not depend on a resonant structure. The measured results show a passband of 200 MHz to 2 GHz, insertion loss less than 0.75 dB and a size of 20 mm \u00d7 14 mm. The amplitude imbalance is approximately 0.3 dB and the phase imbalance is less than 6o over the entire operation range."}
{"_id": "c0213566808583943c1fcb8b77192d04dd6e749f", "title": "GHz UWB automotive short range radar \u2013 Spectrum allocation and technology trends", "text": "Automotive UWB (Ultra-Wideband) short range radar (SSR) is on the market as a key technology for novel comfort and safety systems. SiGe based 79 GHz UWB SRR will be a definite candidate for the long term substitution of the 24 GHz UWB SRR. This paper will give an overview of the finished BMBF joint project KOKON and the recently started successing project RoCC, which concentrate on the development of this technology and sensor demonstrators. In both projects, the responsibilities of Daimler AG deal with application based sensor specification, test and evaluation of realized sensor demonstrators. Recent UWB SRR frequency regulation approaches and activitites will be introduced. Furthermore, some first results of Daimler activities within RoCC will be presented, dealing with the packaging and operation of these sensors within the complex car environment."}
{"_id": "3a1a9e15d42b67bfdee7761311aea9b699cd0d5f", "title": "JackIn: integrating first-person view with out-of-body vision generation for human-human augmentation", "text": "JackIn is a new human-human communication framework for connecting two or more people. With first-person view video streaming from a person (called Body) wearing a transparent head-mounted display and a head-mounted camera, the other person (called Ghost) participates in the shared first-person view. With JackIn, people's activities can be shared and assistance or guidance can be given through other peoples expertise. This can be applied to daily activities such as cooking lessons, shopping navigation, education in craft-work or electrical work, and sharing experiences of sporting and live events. For a better viewing experience with frist-person view, we developed the out-of-body view in which first-person images are integrated to construct a scene around a Body, and a Ghost can virtually control the viewpoint to look around the space surrounding the Body. We also developed a tele-pointing gesture interface. We conducted an experiment to evaluate how effective this framework is and found that Ghosts can understand the spatial situation of the Body."}
{"_id": "ba371bce57d2c67133ff121cc1c33c314f7ac59e", "title": "Adaptive Control of an Electrically Driven Nonholonomic Mobile Robot via Backstepping and Fuzzy Approach", "text": "This paper investigates the tracking control of an electrically driven nonholonomic mobile robot with model uncertainties in the robot kinematics, the robot dynamics, and the wheel actuator dynamics. A robust adaptive controller is proposed with the utilization of adaptive control, backstepping and fuzzy logic techniques. The proposed control scheme employs the adaptive control approach to design an auxiliary wheel velocity controller to make the tracking error as small as possible in consideration of uncertainties in the kinematics of the robot, and makes use of the fuzzy logic systems to learn the behaviors of the unknown dynamics of the robot and the wheel actuators. The approximation errors and external disturbances can be efficiently counteracted by employing smooth robust compensators. A major advantage of the proposed method is that previous knowledge of the robot kinematics and the dynamics of the robot and wheel actuators is no longer necessary. This is because the controller learns both the robot kinematics and the robot and wheel actuator dynamics online. Most importantly, all signals in the closed-loop system can be guaranteed to be uniformly ultimately bounded. For the dynamic uncertainties of robot and actuator, the assumption of ldquolinearity in the unknown parametersrdquo and tedious analysis of determining the ldquoregression matricesrdquo in the standard adaptive robust controllers are no longer necessary. The performance of the proposed approach is demonstrated through a simulation example."}
{"_id": "8c5f41936704429ede0bacff65c67119cb5dcc08", "title": "Fast and accurate analysis of scanning slotted waveguide arrays", "text": "An iterative procedure is presented for a fast and accurate analysis of scanning slotted waveguide arrays including the effects of mutual couplings and their dependence on the scanning angle. The analysis takes also into account the input transitions from the feeding network; each slotted waveguide being excited either at one end or at the centre. The procedure is applied to the simple case of a linear array of four slots, for which a full-wave analysis could be carried out for comparison. The proposed method is shown to provide excellent agreement with the full-wave computation still requiring extremely short computation times."}
{"_id": "a0403302f171cfdc3a89f64d50936f6fcfc1b94f", "title": "One Sentence One Model for Neural Machine Translation", "text": "Neural machine translation (NMT) becomes a new state-ofthe-art and achieves promising translation results using a simple encoder-decoder neural network. This neural network is trained once on the parallel corpus and the fixed network is used to translate all the test sentences. We argue that the general fixed network cannot best fit the specific test sentences. In this paper, we propose the dynamic NMT which learns a general network as usual, and then fine-tunes the network for each test sentence. The fine-tune work is done on a small set of the bilingual training data that is obtained through similarity search according to the test sentence. Extensive experiments demonstrate that this method can significantly improve the translation performance, especially when highly similar sentences are available."}
{"_id": "e4eaf2eaf4a4a59be1b8a1ae19ca5a9ea58d250f", "title": "Natural Image Stitching with the Global Similarity Prior", "text": "This paper proposes a method for stitching multiple images together so that the stitched image looks as natural as possible. Our method adopts the local warp model and guides the warping of each image with a grid mesh. An objective function is designed for specifying the desired characteristics of the warps. In addition to good alignment and minimal local distortion, we add a global similarity prior in the objective function. This prior constrains the warp of each image so that it resembles a similarity transformation as a whole. The selection of the similarity transformation is crucial to the naturalness of the results. We propose methods for selecting the proper scale and rotation for each image. The warps of all images are solved together for minimizing the distortion globally. A comprehensive evaluation shows that the proposed method consistently outperforms several state-of-the-art methods, including AutoStitch, APAP, SPHP and ANNAP."}
{"_id": "3729a9a140aa13b3b26210d333fd19659fc21471", "title": "A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks", "text": "Transfer and multi-task learning have traditionally focused on either a single source-target pair or very few, similar tasks. Ideally, the linguistic levels of morphology, syntax and semantics would benefit each other by being trained in a single model. We introduce a joint many-task model together with a strategy for successively growing its depth to solve increasingly complex tasks. Higher layers include shortcut connections to lower-level task predictions to reflect linguistic hierarchies. We use a simple regularization term to allow for optimizing all model weights to improve one task\u2019s loss without exhibiting catastrophic interference of the other tasks. Our single end-to-end model obtains state-of-the-art or competitive results on five different tasks from tagging, parsing, relatedness, and entailment tasks."}
{"_id": "562c09c112df56c5696c010d90a815d6018a86c8", "title": "Word Translation Without Parallel Data", "text": "State-of-the-art methods for learning cross-lingual word embeddings have relied on bilingual dictionaries or parallel corpora. Recent studies showed that the need for parallel data supervision can be alleviated with character-level information. While these methods showed encouraging results, they are not on par with their supervised counterparts and are limited to pairs of languages sharing a common alphabet. In this work, we show that we can build a bilingual dictionary between two languages without using any parallel corpora, by aligning monolingual word embedding spaces in an unsupervised way. Without using any character information, our model even outperforms existing supervised methods on cross-lingual tasks for some language pairs. Our experiments demonstrate that our method works very well also for distant language pairs, like English-Russian or EnglishChinese. We finally describe experiments on the English-Esperanto low-resource language pair, on which there only exists a limited amount of parallel data, to show the potential impact of our method in fully unsupervised machine translation. Our code, embeddings and dictionaries are publicly available1."}
{"_id": "8cbef23c9ee2ae7c35cc691a0c1d713a6377c9f2", "title": "Deep Biaffine Attention for Neural Dependency Parsing", "text": "Andor, D., Alberti, C., Weiss, D., Severyn, A., Presta, A., Ganchev, K., Petrov, S., and Collins, M. (2016). Globally normalized transition-based neural networks. In Association for Computational Linguistics. Ballesteros, M., Goldberg, Y., Dyer, C., and Smith, N. A. (2016). Training with exploration improves a greedy stack-LSTM parser. Proceedings of the conference on empirical methods in natural language processing. Chen, D. and Manning, C. D. (2014). A fast and accurate dependency parser using neural networks. In Proceedings of the conference on empirical methods in natural language processing, pages 740\u2013750. Cheng, H., Fang, H., He, X., Gao, J., and Deng, L. (2016). Bi-directional attention with agreement for dependency parsing. arXiv preprint arXiv:1608.02076. Hashimoto, K., Xiong, C., Tsuruoka, Y., and Socher, R. (2016). A joint many-task model: Growing a neural network for multiple nlp tasks. arXiv preprint arXiv:1611.01587. Kiperwasser, E. and Goldberg, Y. (2016). Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of the Association for Computational Linguistics, 4:313\u2013327. Kuncoro, A., Ballesteros, M., Kong, L., Dyer, C., Neubig, G., and Smith, N. A. (2016). What do recurrent neural network grammars learn about syntax? CoRR, abs/1611.05774. McDonald, R. T. and Pereira, F. C. (2006). Online learning of approximate dependency parsing algorithms. In EACL. Nivre, J., Hall, J., and Nilsson, J. (2006). Maltparser: A data-driven parser-generator for dependency parsing. In Proceedings of LREC, volume 6, pages 2216\u20132219."}
{"_id": "916ac71109a4c59640ba2de167acab7276d33429", "title": "Dependency-Based Open Information Extraction", "text": "Building shallow semantic representations from text corpora is the first step to perform more complex tasks such as text entailment, enrichment of knowledge bases, or question answering. Open Information Extraction (OIE) is a recent unsupervised strategy to extract billions of basic assertions from massive corpora, which can be considered as being a shallow semantic representation of those corpora. In this paper, we propose a new multilingual OIE system based on robust and fast rule-based dependency parsing. It permits to extract more precise assertions (verb-based triples) from text than state of the art OIE systems, keeping a crucial property of those systems: scaling to Web-size document collections."}
{"_id": "928f9dccb806a3278d20d82cc53781c5f44e2bb1", "title": "Constituency Parsing with a Self-Attentive Encoder", "text": "We demonstrate that replacing an LSTM encoder with a self-attentive architecture can lead to improvements to a state-ofthe-art discriminative constituency parser. The use of attention makes explicit the manner in which information is propagated between different locations in the sentence, which we use to both analyze our model and propose potential improvements. For example, we find that separating positional and content information in the encoder can lead to improved parsing accuracy. Additionally, we evaluate different approaches for lexical representation. Our parser achieves new state-ofthe-art results for single models trained on the Penn Treebank: 93.55 F1 without the use of any external data, and 95.13 F1 when using pre-trained word representations. Our parser also outperforms the previous best-published accuracy figures on 8 of the 9 languages in the SPMRL dataset."}
{"_id": "fe9fcaaeaf1fbd97b219e854b3f025a3e19238d1", "title": "A novel wireless power transfer for in-motion EV/PHEV charging", "text": "Wireless power transfer (WPT) is a convenient, safe, and autonomous means for electric and plug-in hybrid electric vehicle charging that has seen rapid growth in recent years for stationary applications. WPT does not require bulky contacts, plugs, and wires, is not affected by dirt or weather conditions, and is as efficient as conventional charging systems. When applied in-motion, WPT additionally relives range anxiety, adds further convenience, reduces battery size, and may help increase the battery life through charge sustaining approach. This study summarizes some of the recent activities of Oak Ridge National Laboratory (ORNL) in WPT charging of EV and PHEV's inmotion. Laboratory experimental results that highlight the wireless transfer of power to a moving receiver coil as it passes a pair of transmit coils and investigation of results of insertion loss due to roadway surfacing materials. Some of the experimental lessons learned are also included in this study."}
{"_id": "11f3421ee2e0fe35aca0b7b8ad3379eb4675d50f", "title": "Minoxidil-induced trichostasis spinulosa of terminal hair.", "text": "depression after pressure application to the skin and a fold after stretching of the skin are clinical signs of CL. To our knowledge, the exclusive acral involvement of CL has been reported in only 7 patients, and 4 of them also had multiple myeloma amyloidosis. Unlike those 4 patients, our patient manifested elastolytic cutaneous lesions as the only clinical sign of his multiple myeloma. The histopathologic findings were the clue for the diagnosis of underlying monoclonal gammopathy. Destruction of elastic fibers could be immune mediated by deposition of abnormal proteins. Wewould like toemphasize the importanceof recognizing cutaneous laxity on fingertips associated with temporary dimpling of the skin for minutes after pressure is applied, which can be the clue for the diagnosis of a monoclonalgammopathy.Clinicopathologiccorrelationisessential; demonstration of amyloid deposits with monotypic expression of light chains and diminution of dermal elastic fibers are the histopathologic clues of these forms of acrolocalized acquired CL associated with multiple myeloma."}
{"_id": "8645a7ff78dc321e08dea6576c04f02a3ce158f9", "title": "Sequential Deep Learning for Disaster-Related Video Classification", "text": "Videos serve to convey complex semantic information and ease the understanding of new knowledge. However, when mixed semantic meanings from different modalities (i.e., image, video, text) are involved, it is more difficult for a computer model to detect and classify the concepts (such as flood, storm, and animals). This paper presents a multimodal deep learning framework to improve video concept classification by leveraging recent advances in transfer learning and sequential deep learning models. Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNN) models are then used to obtain the sequential semantics for both audio and textual models. The proposed framework is applied to a disaster-related video dataset that includes not only disaster scenes, but also the activities that took place during the disaster event. The experimental results show the effectiveness of the proposed framework."}
{"_id": "77febaeb483dc8d145a2c897f743ee46b11266ad", "title": "Treatment of Hyaluronic Acid Filler-Induced Impending Necrosis With Hyaluronidase: Consensus Recommendations.", "text": "Injection-induced necrosis is a rare but dreaded consequence of soft tissue augmentation with filler agents. It usually occurs as a result of injection of filler directly into an artery, but can also result from compression or injury. We provide recommendations on the use of hyaluronidase when vascular compromise is suspected. Consensus recommendations were developed by thorough discussion and debate amongst the authors at a roundtable meeting on Wednesday June 18, 2014 in Las Vegas, NV as well as significant ongoing written and verbal communications amongst the authors in the months prior to journal submission. All authors are experienced tertiary care providers. A prompt diagnosis and immediate treatment with high doses of hyaluronidase (at least 200 U) are critically important. It is not felt necessary to do a skin test in cases of impending necrosis. Some experts recommend dilution with saline to increase dispersion or lidocaine to aid vasodilation. Additional hyaluronidase should be injected if improvement is not seen within 60 minutes. A warm compress also aids vasodilation, and massage has been shown to help. Some experts advocate the use of nitroglycerin paste, although this area is controversial. Introducing an oral aspirin regimen should help prevent further clot formation due to vascular compromise. In our experience, patients who are diagnosed promptly and treated within 24 hours will usually have the best outcomes."}
{"_id": "7e0d8c44695523e009748eb5f9709579162a183d", "title": "What is the amygdala?", "text": "'Amygdala' and 'amygdalar complex' are terms that now refer to a highly differentiated region near the temporal pole of the mammalian cerebral hemisphere. Cell groups within it appear to be differentiated parts of the traditional cortex, the claustrum, or the striatum, and these parts belong to four obvious functional systems--accessory olfactory, main olfactory, autonomic and frontotemporal cortical. In rats, the central nucleus is a specialized autonomic-projecting motor region of the striatum, whereas the lateral and anterior basolateral nuclei together are a ventromedial extension of the claustrum for major regions of the temporal and frontal lobes. The rest of the amygdala forms association parts of the olfactory system (accessory and main), with cortical, claustral and striatal parts. Terms such as 'amygdala' and 'lenticular nucleus' combine cell groups arbitrarily rather than according to the structural and functional units to which they now seem to belong. The amygdala is neither a structural nor a functional unit."}
{"_id": "c131f2b65169e3162e2d6430019bad81c7919ed5", "title": "Mitigating YARN Container Overhead with Input Splits", "text": "We analyze YARN container overhead and present early results of reducing its overhead by dynamically adjusting the input split size. YARN is designed as a generic resource manager that decouples programming models from resource management infrastructures. We demonstrate that YARN\u2019s generic design incurs significant overhead because each con- tainer must perform various initialization steps, including authentication. To reduce container overhead without changing the existing YARN framework significantly, we propose leverag- ing the input split, which is the logical representation of physical HDFS blocks. With input splits, we can combine multiple HDFS blocks and increase the input size of each container, thereby enabling a single map wave and reducing the number of containers and their initialization overhead. Experimental results shows that we can avoid recurring container overhead by selecting the right size for input splits and reducing the number of containers."}
{"_id": "c08549d77b291b720e66392378ff9917d5a1a498", "title": "Online-specific fear of missing out and Internet-use expectancies contribute to symptoms of Internet-communication disorder", "text": "Some of the most frequently used online applications are Facebook, WhatsApp, and Twitter. These applications allow individuals to communicate with other users, to share information or pictures, and to stay in contact with friends all over the world. However, a growing number of users suffer from negative consequences due to their excessive use of these applications, which can be referred to as Internet-communication disorder. The frequent use and easy access of these applications may also trigger the individual's fear of missing out on content when not accessing these applications. Using a sample of 270 participants, a structural equation model was analyzed to investigate the role of psychopathological symptoms and the fear of missing out on expectancies towards Internet-communication applications in the development of symptoms of an Internet-communication disorder. The results suggest that psychopathological symptoms predict higher fear of missing out on the individual's Internet-communication applications and higher expectancies to use these applications as a helpful tool to escape from negative feelings. These specific cognitions mediate the effect of psychopathological symptoms on Internet-communication disorder. Our results are in line with the theoretical model by Brand et al. (2016) as they show how Internet-related cognitive bias mediates the relationship between a person's core characteristics (e.g., psychopathological symptoms) and Internet-communication disorder. However, further studies should investigate the role of the fear of missing out as a specific predisposition, as well as specific cognition in the online context."}
{"_id": "182a48ae7deb6e78bcb206414b56be6a52609c42", "title": "Joint Representation Learning for Top-N Recommendation with Heterogeneous Information Sources", "text": "The Web has accumulated a rich source of information, such as text, image, rating, etc, which represent different aspects of user preferences. However, the heterogeneous nature of this information makes it difficult for recommender systems to leverage in a unified framework to boost the performance. Recently, the rapid development of representation learning techniques provides an approach to this problem. By translating the various information sources into a unified representation space, it becomes possible to integrate heterogeneous information for informed recommendation. \n In this work, we propose a Joint Representation Learning (JRL) framework for top-N recommendation. In this framework, each type of information source (review text, product image, numerical rating, etc) is adopted to learn the corresponding user and item representations based on available (deep) representation learning architectures. Representations from different sources are integrated with an extra layer to obtain the joint representations for users and items. In the end, both the per-source and the joint representations are trained as a whole using pair-wise learning to rank for top-N recommendation. We analyze how information propagates among different information sources in a gradient-descent learning paradigm, based on which we further propose an extendable version of the JRL framework (eJRL), which is rigorously extendable to new information sources to avoid model re-training in practice. \n By representing users and items into embeddings offline, and using a simple vector multiplication for ranking score calculation online, our framework also has the advantage of fast online prediction compared with other deep learning approaches to recommendation that learn a complex prediction network for online calculation."}
{"_id": "07deac012e4cc97cbc4ef0cb96cbe7e9fc76926e", "title": "The price of power: power seeking and backlash against female politicians.", "text": "Two experimental studies examined the effect of power-seeking intentions on backlash toward women in political office. It was hypothesized that a female politician's career progress may be hindered by the belief that she seeks power, as this desire may violate prescribed communal expectations for women and thereby elicit interpersonal penalties. Results suggested that voting preferences for female candidates were negatively influenced by her power-seeking intentions (actual or perceived) but that preferences for male candidates were unaffected by power-seeking intentions. These differential reactions were partly explained by the perceived lack of communality implied by women's power-seeking intentions, resulting in lower perceived competence and feelings of moral outrage. The presence of moral-emotional reactions suggests that backlash arises from the violation of communal prescriptions rather than normative deviations more generally. These findings illuminate one potential source of gender bias in politics."}
{"_id": "e291622a815c6af3c00e06171f6d8ae72021893c", "title": "Recognition of Car Makes and Models From a Single Traffic-Camera Image", "text": "This paper proposes the recognition framework of car makes and models from a single image captured by a traffic camera. Due to various configurations of traffic cameras, a traffic image may be captured in different viewpoints and lighting conditions, and the image quality varies in resolution and color depth. In the framework, cars are first detected using a part-based detector, and license plates and headlamps are detected as cardinal anchor points to rectify projective distortion. Car features are extracted, normalized, and classified using an ensemble of neural-network classifiers. In the experiment, the performance of the proposed method is evaluated on a data set of practical traffic images. The results prove the effectiveness of the proposed method in vehicle detection and model recognition."}
{"_id": "66ea9389b7ecd7f8070cddc8c6c3ecbf301e3577", "title": "The turing test track of the 2012 Mario AI Championship: Entries and evaluation", "text": "The Turing Test Track of the Mario AI Championship focused on developing human-like controllers for a clone of the popular game Super Mario Bros. Competitors participated by submitting AI agents that imitate human playing style. This paper presents the rules of the competition, the software used, the voting interface, the scoring procedure, the submitted controllers and the recent results of the competition for the year 2012. We also discuss what can be learnt from this competition in terms of believability in platform games. The discussion is supported by a statistical analysis of behavioural similarities and differences among the agents, and between agents and humans. The paper is co-authored by the organizers of the competition (the first three authors) and the competitors."}
{"_id": "6bd17ab8f24f6c117cf842773aec99e9c3b5fa8b", "title": "A cross-national profile of bullying and victimization among adolescents in 40 countries", "text": "(1) To compare the prevalence of bullying and victimization among boys and girls and by age in 40 countries. (2) In 6 countries, to compare rates of direct physical, direct verbal, and indirect bullying by gender, age, and country. Cross-sectional self-report surveys including items on bullying and being bullied were obtained from nationally representative samples of 11, 13 and 15\u00a0year old school children in 40 countries, N = 202,056. Six countries (N = 29,127 students) included questions about specific types of bullying (e. g., direct physical, direct verbal, indirect). Exposure to bullying varied across countries, with estimates ranging from 8.6% to 45.2% among boys, and from 4.8% to 35.8% among girls. Adolescents in Baltic countries reported higher rates of bullying and victimization, whereas northern European countries reported the lowest prevalence. Boys reported higher rates of bullying in all countries. Rates of victimization were higher for girls in 29 of 40 countries. Rates of victimization decreased by age in 30 of 40 (boys) and 25 of 39 (girls) countries. There are lessons to be learned from the current research conducted in countries where the prevalence is low that could be adapted for use in countries with higher prevalence."}
{"_id": "daa3b94cdc7b52b3381a7c7e21022a7a8c005f84", "title": "The Flipped Classroom : An Opportunity To Engage Millennial Students Through Active Learning Strategies", "text": "\"Flipping\" the classroom employs easy-to-use, readily accessihle technology in order to free class time from lecture. This allows for an expanded range of learning activities during class time. Using class time for active learning versus lecture provides opportunities for greater teacher-to-student mentoring, peer-to-peer collaboration and cross-disciplinary engagement. This review of literature addresses the challenges of engaging today's students in lecture-based classrooms and presents an argument for application of the \"flipped classroom\" model hy educators in the disciplines of family and consumer sciences."}
{"_id": "8534c9b03ea00647e7bb3cb85133e1897d913d23", "title": "A stepped-carrier 77-GHz OFDM MIMO radar system with 4 GHz bandwidth", "text": "This paper presents a 77-GHz orthogonal frequency-division multiplexing (OFDM) multiple-input multiple-output (MIMO) radar system with 4 GHz bandwidth. Due to the limited bandwidth of the analog-to-digital and digital-to-analog converters a stepped-carrier approach is used. OFDM radar measurements with a bandwidth of 200MHz at 20 different carrier frequencies are combined in signal processing to achieve a high range resolution OFDM MIMO radar. Spectrally-interleaved OFDM signals are used to allow for simultaneous transmission and thus MIMO operation. The proposed approach is verified in practical measurements of a static target scenario and a slowly moving target scenario. Therefore a self-developed software defined radar evaluation platform with four transmitters and four receivers is used. The expected high range resolution could be achieved and range/angle/velocity measurements show the suitability for detecting slowly moving targets like pedestrians."}
{"_id": "80dd62d02e1b0766859382c00309ca4e21fee6d9", "title": "Machine Learning Paradigms for Speech Recognition: An Overview", "text": "Automatic Speech Recognition (ASR) has historically been a driving force behind many machine learning (ML) techniques, including the ubiquitously used hidden Markov model, discriminative learning, structured sequence learning, Bayesian learning, and adaptive learning. Moreover, ML can and occasionally does use ASR as a large-scale, realistic application to rigorously test the effectiveness of a given technique, and to inspire new problems arising from the inherently sequential and dynamic nature of speech. On the other hand, even though ASR is available commercially for some applications, it is largely an unsolved problem - for almost all applications, the performance of ASR is not on par with human performance. New insight from modern ML methodology shows great promise to advance the state-of-the-art in ASR technology. This overview article provides readers with an overview of modern ML techniques as utilized in the current and as relevant to future ASR research and systems. The intent is to foster further cross-pollination between the ML and ASR communities than has occurred in the past. The article is organized according to the major ML paradigms that are either popular already or have potential for making significant contributions to ASR technology. The paradigms presented and elaborated in this overview include: generative and discriminative learning; supervised, unsupervised, semi-supervised, and active learning; adaptive and multi-task learning; and Bayesian learning. These learning paradigms are motivated and discussed in the context of ASR technology and applications. We finally present and analyze recent developments of deep learning and learning with sparse representations, focusing on their direct relevance to advancing ASR technology."}
{"_id": "c64f1bd3a1bd7f7fe4cadc469b4b94c45ad12b5d", "title": "Spell Checking Techniques in NLP: A Survey", "text": "Spell checkers in Indian languages are the basic tools that need to be developed. A spell checker is a software tool that identifies and corrects any spelling mistakes in a text. Spell checkers can be combined with other applications or they can be distributed individually. In this paper the authors are discussing both the approaches and their roles in various applications."}
{"_id": "199a71f02c63b4b84a00e9b73fd538d28ed92362", "title": "2CAuth: A New Two Factor Authentication Scheme Using QR-Code", "text": "Password based schemes has been the standard means of authentication over decades. Enhancements use entities like ownership (something one possess), knowledge (something one knows), and inherence (something one is) as first factor and mobile phones as token less second factor, in combinations, to offer different levels of security assurances, trading off usability. In this paper we present \u20182CAuth\u2019 a new two factor authentication scheme that enhances secure usage of application information and preserves usability, without sacrificing user\u2019s privacy. A significant feature of the scheme is that it DOES NOT call for any synchronization between Mobile Network Operator (MNO) and users. The analysis of the scheme clearly brings out its effectiveness in terms of its usability even at times of peak loads on mobile networks. The scheme has the dual advantage of carrying out the verification of transactions which involve the physical presence of the user as well as those to be done in his absence."}
{"_id": "233866c68491176ea2cf15c8a72e8f2c3090004b", "title": "Ultraflat Interleaved Triangular Current Mode (TCM) Single-Phase PFC Rectifier", "text": "This paper presents the analysis and realization of a topology suitable to realize a power factor correction (PFC) rectifier with a thickness of only a few millimeters. The low height of the converter requires all components to be integrated into the printed circuit board (PCB). Still reasonable dimensions of the converter PCB are feasible (221 mm \u00d7 157 mm for a 200 W PFC rectifier), since PCB-integrated inductors and capacitors allow for high energy densities due to their large surface area which facilitates a low thermal resistance to ambient. A multicell totem-pole PFC rectifier employing a soft-switching modulation scheme over the complete mains period is identified as an adequate topology. The mode of operation is entitled triangular current mode (TCM) due to the triangular-shaped inductor currents. The modulation technique requires a reliable description of the switching transition of a half-bridge in order to provide accurate timing parameters. For this purpose, a simplified model of the nonlinear MOSFETs' output capacitances facilitates closed-form analytical expressions for duty cycle and switching frequency. Furthermore, this paper details the control of three interleaved converter cells which yields a reduction of the input current ripple. A 200 W TCM PFC rectifier with a low height of 5 mm has been realized and measurement results are provided in order to validate the theoretical considerations. The presented TCM PFC rectifier achieves an efficiency of 94.6% and a power factor of 99.3% at nominal power."}
{"_id": "125e8c9e923442ebdf6a9f90267089b571caecd5", "title": "Detecting False Data Injection Attacks on DC State Estimation", "text": "\u2022 Y. Liu, M. K. Reiter, and P. Ning. False Data Injection Attacks Against State Estimation in Electric Power Grids. In 16th ACM Conference on Computer and Communications Security (CCS \u201909). Defense Against False Data Injection Attacks on DC State Estimation Miao Lu, University of Illinois at Urbana-Champaign ADVISORS: Rakesh Bobba, Kate Morrow I T I S U M M E R U N D E R G R A D U A T E I N T E R N P R O G R A M, 2 0 1 1"}
{"_id": "27e13389203b2f8f6138afed867965a3a38cbd8e", "title": "EEG-GAN: Generative adversarial networks for electroencephalograhic (EEG) brain signals", "text": "Generative adversarial networks (GANs) are recently highly successful in generative applications involving images and start being applied to time series data. Here we describe EEG-GAN as a framework to generate electroencephalographic (EEG) brain signals. We introduce a modification to the improved training of Wasserstein GANs to stabilize training and investigate a range of architectural choices critical for time series generation (most notably upand down-sampling). For evaluation we consider and compare different metrics such as Inception score, Frechet inception distance and sliced Wasserstein distance, together showing that our EEG-GAN framework generated naturalistic EEG examples. It thus opens up a range of new generative application scenarios in the neuroscientific and neurological context, such as data augmentation in brain-computer interfacing tasks, EEG super-sampling, or restoration of corrupted data segments. The possibility to generate signals of a certain class and/or with specific properties may also open a new avenue for research into the underlying structure of brain signals."}
{"_id": "3711625f7f22a59a9ac5251a99bb8e3298048ae4", "title": "Image inpainting", "text": "Inpainting, the technique of modifying an image in an undetectable form, is as ancient as art itself. The goals and applications of inpainting are numerous, from the restoration of damaged paintings and photographs to the removal/replacement of selected objects. In this paper, we introduce a novel algorithm for digital inpainting of still images that attempts to replicate the basic techniques used by professional restorators. After the user selects the regions to be restored, the algorithm automatically fills-in these regions with information surrounding them. The fill-in is done in such a way that isophote lines arriving at the regions' boundaries are completed inside. In contrast with previous approaches, the technique here introduced does not require the user to specify where the novel information comes from. This is automatically done (and in a fast way), thereby allowing to simultaneously fill-in numerous regions containing completely different structures and surrounding backgrounds. In addition, no limitations are imposed on the topology of the region to be inpainted. Applications of this technique include the restoration of old photographs and damaged film; removal of superimposed text like dates, subtitles, or publicity; and the removal of entire objects from the image like microphones or wires in special effects."}
{"_id": "119bb251cff0292cbf6bed27acdcad424ed9f9d0", "title": "An Introduction to Variational Methods for Graphical Models", "text": "This paper presents a tutorial introduction to the use of variational methods for inference and learning in graphical models (Bayesian networks and Markov random fields). We present a number of examples of graphical models, including the QMR-DT database, the sigmoid belief network, the Boltzmann machine, and several variants of hidden Markov models, in which it is infeasible to run exact inference algorithms. We then introduce variational methods, which exploit laws of large numbers to transform the original graphical model into a simplified graphical model in which inference is efficient. Inference in the simpified model provides bounds on probabilities of interest in the original model. We describe a general framework for generating variational transformations based on convex duality. Finally we return to the examples and demonstrate how variational algorithms can be formulated in each case."}
{"_id": "17e9d3ba861db8a6d323e1410fe5ca0986d5ad6a", "title": "Texture Synthesis by Non-parametric Sampling", "text": "A non-parametric method for texture synthesis is proposed. The texture synthesis process grows a new image outward from an initial seed, one pixel at a time. A Markov random field model is assumed, and the conditional distribution of a pixel given all its neighbors synthesized so far is estimated by querying the sample image and finding all similar neighborhoods. The degree of randomness is controlled by a single perceptually intuitive parameter. The method aims at preserving as much local structure as possible and produces good results for a wide variety of synthetic and real-world textures."}
{"_id": "54205667c1f65a320f667d73c354ed8e86f1b9d9", "title": "Nonlinear total variation based noise removal algorithms", "text": "A constrained optimization type of numerical algorithm for removing noise from images is presented. The total variation of the image is minimized subject to constraints involving the statistics of the noise. The constraints are imposed using Lagrange multipliers. The solution is obtained using the gradient-projection method. This amounts to solving a time dependent partial differential equation on a manifold determined by the constraints. As t \u2192 \u221e the solution converges to a steady state which is the denoised image. The numerical algorithm is simple and relatively fast. The results appear to be state-of-the-art for very noisy images. The method is noninvasive, yielding sharp edges in the image. The technique could be interpreted as a first step of moving each level set of the image normal to itself with velocity equal to the curvature of the level set divided by the magnitude of the gradient of the image, and a second step which projects the image back onto the constraint set."}
{"_id": "c29270f3f2b38a237b9f3b48e0c09c5be244910d", "title": "Google and the scholar: the role of Google in scientists' information-seeking behaviour", "text": "Purpose \u2013 This paper aims to demonstrate the role that the Google general search engine plays in the information-seeking behaviour of scientists, particularly physicists and astronomers. Design/methodology/approach \u2013 The paper is based on a mixed-methods study including 56 semi-structured interviews, a questionnaire survey of 114 respondents (47 per cent response rate) and the use of information-event cards to collect critical incident data. The study was conducted at the Department of Physics and Astronomy at University College, London. Findings \u2013 The results show that Google is the tool most used for problem-specific information seeking. The results also show the growing reliance of scientists on general search engines, particularly Google, for finding scholarly articles. Initially, finding scholarly articles was a by-product of general searching for information rather than focused searches for papers. However, a growing number of articles read by scientists are identified through the Google general search engine and, as scientists are becoming more aware of the quantity of scholarly papers searchable by Google, they are increasingly relying on Google for finding scholarly literature. Research limitations/implications \u2013 As the only fields covered in the study were physics and astronomy, and the research participants were sourced from just one department of one institution, caution should be taken in generalising the findings. Originality/value \u2013 The data are based on a mixed-methods in-depth study of scientists\u2019 information-seeking behaviour which sheds some light on a question raised in past studies relating to the reason for the high number of articles identified through Google."}
{"_id": "cc6a972b3ce231aa86757ecfe6af7997e6623a13", "title": "Character-level incremental speech recognition with recurrent neural networks", "text": "In real-time speech recognition applications, the latency is an important issue. We have developed a character-level incremental speech recognition (ISR) system that responds quickly even during the speech, where the hypotheses are gradually improved while the speaking proceeds. The algorithm employs a speech-to-character unidirectional recurrent neural network (RNN), which is end-to-end trained with connectionist temporal classification (CTC), and an RNN-based character-level language model (LM). The output values of the CTC-trained RNN are character-level probabilities, which are processed by beam search decoding. The RNN LM augments the decoding by providing long-term dependency information. We propose tree-based online beam search with additional depth-pruning, which enables the system to process infinitely long input speech with low latency. This system not only responds quickly on speech but also can dictate out-of-vocabulary (OOV) words according to pronunciation. The proposed model achieves the word error rate (WER) of 8.90% on the Wall Street Journal (WSJ) Nov'92 20K evaluation set when trained on the WSJ SI-284 training set."}
{"_id": "3463c43df6ecafd52cfe6e1bb5ceba8eaa8e6d4c", "title": "A novel approach to CAD system for the detection of lung nodules in CT images", "text": "Detection of pulmonary nodule plays a significant role in the diagnosis of lung cancer in early stage that improves the chances of survival of an individual. In this paper, a computer aided nodule detection method is proposed for the segmentation and detection of challenging nodules like juxtavascular and juxtapleural nodules. Lungs are segmented from computed tomography (CT) images using intensity thresholding; brief analysis of CT image histogram is done to select a suitable threshold value for better segmentation results. Simple morphological closing is used to include juxtapleural nodules in segmented lung regions. K-means clustering is applied for the initial detection and segmentation of potential nodules; shape specific morphological opening is implemented to refine segmentation outcomes. These segmented potential nodules are then divided into six groups on the basis of their thickness and percentage connectivity with lung walls. Grouping not only helped in improving system's efficiency but also reduced computational time, otherwise consumed in calculating and analyzing unnecessary features for all nodules. Different sets of 2D and 3D features are extracted from nodules in each group to eliminate false positives. Small size nodules are differentiated from false positives (FPs) on the basis of their salient features; sensitivity of the system for small nodules is 83.33%. SVM classifier is used for the classification of large nodules, for which the sensitivity of the proposed system is 93.8% applying 10-fold cross-validation. Receiver Operating Characteristic (ROC) curve is used for the analysis of CAD system. Overall sensitivity of the system is 91.65% with 3.19 FPs per case, and accuracy is 96.22%. The system took 3.8 seconds to analyze each image."}
{"_id": "be1bf03960f0337a8a4d5430b0c87f66f76b79f1", "title": "Optimal Online Cyberbullying Detection", "text": "Cyberbullying has emerged as a serious societal and public health problem that demands accurate methods for the detection of cyber-bullying instances in an effort to mitigate the consequences. While techniques to automatically detect cyberbullying incidents have been developed, the scalability and timeliness of existing cyberbullying detection approaches have largely been ignored. We address this gap by formulating cyberbullying detection as a sequential hypothesis testing problem. Based on this formulation, we propose a novel algorithm designed to reduce the time to raise a cyberbullying alert by drastically reducing the number of feature evaluations necessary for a decision to be made. We demonstrate the effectiveness of our approach using a real-world dataset from Twitter, one of the top five networks with the highest percentage of users reporting cyberbullying instances. We show that our approach is highly scalable while not sacrificing accuracy for scalability."}
{"_id": "79c2f5df834f05473bf7ab22b83117d761cce8a3", "title": "Oral-motor function and feeding intervention.", "text": "This article presents the elements of the Oral Motor Intervention section of the Infant Care Path for Physical Therapy in the Neonatal Intensive Care Unit (NICU). The types of physical therapy interventions presented in this path are evidence based as well as infant driven and family focused. In the context of anticipated maturation of suck-swallow-breathe coordination, the timing and methods for initiation of oral feedings and transition from gavage to full breast or bottle-feedings are presented with supporting evidence."}
{"_id": "72657b0428f9b8f705546eb5a9147203a534d8f6", "title": "Transparent, lightweight application execution replay on commodity multiprocessor operating systems", "text": "We present Scribe, the first system to provide transparent, low-overhead application record-replay and the ability to go live from replayed execution. Scribe introduces new lightweight operating system mechanisms, rendezvous and sync points, to efficiently record nondeterministic interactions such as related system calls, signals, and shared memory accesses. Rendezvous points make a partial ordering of execution based on system call dependencies sufficient for replay, avoiding the recording overhead of maintaining an exact execution ordering. Sync points convert asynchronous interactions that can occur at arbitrary times into synchronous events that are much easier to record and replay.\n We have implemented Scribe without changing, relinking, or recompiling applications, libraries, or operating system kernels, and without any specialized hardware support such as hardware performance counters. It works on commodity Linux operating systems, and commodity multi-core and multiprocessor hardware. Our results show for the first time that an operating system mechanism can correctly and transparently record and replay multi-process and multi-threaded applications on commodity multiprocessors. Scribe recording overhead is less than 2.5% for server applications including Apache and MySQL, and less than 15% for desktop applications including Firefox, Acrobat, OpenOffice, parallel kernel compilation, and movie playback."}
{"_id": "5112085c76e4b77158ab2ba3840b4212c3fcd5d9", "title": "The Jalview Java alignment editor", "text": "Multiple sequence alignment remains a crucial method for understanding the function of groups of related nucleic acid and protein sequences. However, it is known that automatic multiple sequence alignments can often be improved by manual editing. Therefore, tools are needed to view and edit multiple sequence alignments. Due to growth in the sequence databases, multiple sequence alignments can often be large and difficult to view efficiently. The Jalview Java alignment editor is presented here, which enables fast viewing and editing of large multiple sequence alignments."}
{"_id": "d8c3f59f5e0dd5ad669068f6a6d04d7a2fe930d7", "title": "Improvement of pixel electrode circuit for active-matrix OLED by application of reversed-biased voltage", "text": "In this paper, an improved ac pixel electrode circuit for active-matrix organic light-emitting display (AMOLED) has been proposed by adding a thin-film transistor. This circuit can provide an ac driving mode for AMOLED and makes the OLED in a reversed-biased voltage during the reverse cycle. And a circuit design for understanding ac driving mode was presented. The circuit simulation results indicate that this circuit is feasible. The circuit structure is practical for the AMOLED pixel driver; it can improve the performance of OLED."}
{"_id": "0f3e5190b4c5463bec873d1a262b7d34fce9d338", "title": "The theory of multidimensional persistence", "text": "Persistent homology captures the topology of a filtration - a one-parameter family of increasing spaces - in terms of a complete discrete invariant. This invariant is a multiset of intervals that denote the lifetimes of the topological entities within the filtration. In many applications of topology, we need to study a multifiltration: a family of spaces parameterized along multiple geometric dimensions. In this paper, we show that no similar complete discrete invariant exists for multidimensional persistence. Instead, we propose the rank invariant, a discrete invariant for the robust estimation of Betti numbers in a multifiltration, and prove its completeness in one dimension."}
{"_id": "fb816384f7e25fc9fbd44ecb897884093fd4986f", "title": "Analysis of a Quadrotor with a Two-Degree-of-Freedom Robotic Arm", "text": "The classical kinematic approach used in analyzing systems of a small number of interconnected bodies is unsuitable for researching complex kinematics. Moreover, solving nonlinear algebraic equations that involve trigonometric functions is difficult. However, the computational kinematic approach involves using a set of algebraic kinematic equations that describe the joint connectivity between the bodies of a system as well as the specified motion trajectories. This paper presents an analytical approach for a quadrotor with a two-degree-of-freedom (DOF) robotic arm. The approach, which is based on the computational kinematic approach, was adopted to model algebraic equations to obtain the values of position, velocity, acceleration, and the trajectories of motion of the joint and end effector. MATLAB was employed to simulate and solve the kinematic equations. The results showed that the analysis approach is feasible for the quadrotor with a two-DOF robotic arm."}
{"_id": "57044431b148c2fb365953899337b593f2adba5c", "title": "Emerging tobacco hazards in China: 1. Retrospective proportional mortality study of one million deaths.", "text": "OBJECTIVE\nTo assess the hazards at an early phase of the growing epidemic of deaths from tobacco in China.\n\n\nDESIGN\nSmoking habits before 1980 (obtained from family or other informants) of 0.7 million adults who had died of neoplastic, respiratory, or vascular causes were compared with those of a reference group of 0.2 million who had died of other causes.\n\n\nSETTING\n24 urban and 74 rural areas of China.\n\n\nSUBJECTS\nOne million people who had died during 1986-8 and whose families could be interviewed.\n\n\nMAIN OUTCOME MEASURES\nTobacco attributable mortality in middle or old age from neoplastic, respiratory, or vascular disease.\n\n\nRESULTS\nAmong male smokers aged 35-69 there was a 51% (SE 2) excess of neoplastic deaths, a 31% (2) excess of respiratory deaths, and a 15% (2) excess of vascular deaths. All three excesses were significant (P<0.0001). Among male smokers aged >/70 there was a 39% (3) excess of neoplastic deaths, a 54% (2) excess of respiratory deaths, and a 6% (2) excess of vascular deaths. Fewer women smoked, but those who did had tobacco attributable risks of lung cancer and respiratory disease about the same as men. For both sexes, the lung cancer rates at ages 35-69 were about three times as great in smokers as in non-smokers, but because the rates among non-smokers in different parts of China varied widely the absolute excesses of lung cancer in smokers also varied. Of all deaths attributed to tobacco, 45% were due to chronic obstructive pulmonary disease and 15% to lung cancer; oesophageal cancer, stomach cancer, liver cancer, tuberculosis, stroke, and ischaemic heart disease each caused 5-8%. Tobacco caused about 0.6 million Chinese deaths in 1990 (0.5 million men). This will rise to 0.8 million in 2000 (0.4 million at ages 35-69) or to more if the tobacco attributed fractions increase.\n\n\nCONCLUSIONS\nAt current age specific death rates in smokers and non-smokers one in four smokers would be killed by tobacco, but as the epidemic grows this proportion will roughly double. If current smoking uptake rates persist in China (where about two thirds of men but few women become smokers) tobacco will kill about 100 million of the 0.3 billion males now aged 0-29, with half these deaths in middle age and half in old age."}
{"_id": "250f1496d3b889e145c77cf15d4c8b12b0226777", "title": "Chapter 3 Eye Tracking and Eye-Based Human \u2013 Computer Interaction", "text": "Eye tracking has a long history in medical and psychological research as a tool for recording and studying human visual behavior. Real-time gaze-based text entry can also be a powerful means of communication and control for people with physical disabilities. Following recent technological advances and the advent of affordable eye trackers, there is a growing interest in pervasive attention-aware systems and interfaces that have the potential to revolutionize mainstream humantechnology interaction. In this chapter, we provide an introduction to the state-of-the art in eye tracking technology and gaze estimation. We discuss challenges involved in using a perceptual organ, the eye, as an input modality. Examples of real life applications are reviewed, together with design solutions derived from research results. We also discuss how to match the user requirements and key features of different eye tracking systems to find the best system for each task and application."}
{"_id": "3089b4085607c2c005005d776ec3b2ed44a72d1a", "title": "Emerging ethical issues in neuroscience", "text": "There is growing public awareness of the ethical issues raised by progress in many areas of neuroscience. This commentary reviews the issues, which are triaged in terms of their novelty and their imminence, with an exploration of the relevant ethical principles in each case."}
{"_id": "752569e2cb6d0204543326af7369cbd52717f12f", "title": "Age, executive function, and social decision making: a dorsolateral prefrontal theory of cognitive aging.", "text": "Current neuropsychological models propose that some age-related cognitive changes are due to frontal-lobe deterioration. However, these models have not considered the possible subdivision of the frontal lobes into the dorsolateral and ventromedial regions. This study assessed the age effects on 3 tasks of executive function and working memory, tasks dependent on dorsolateral prefrontal dysfunction; and 3 tasks of emotion and social decision making, tasks dependent on ventromedial prefrontal dysfunction. Age-related differences in performance were found on all tasks dependent on dorsolateral prefrontal dysfunction. In contrast, age-related differences were not found on the majority of the tasks dependent on ventromedial prefrontal dysfunction. The results support a specific dorsolateral prefrontal theory of cognitive changes with age, rather than a global decline in frontal-lobe function."}
{"_id": "4a8839d1b36b614b35ced8c0db42c89f6598fa6c", "title": "Efficient multiple-click models in web search", "text": "Many tasks that leverage web search users' implicit feedback rely on a proper and unbiased interpretation of user clicks. Previous eye-tracking experiments and studies on explaining position-bias of user clicks provide a spectrum of hypotheses and models on how an average user examines and possibly clicks web documents returned by a search engine with respect to the submitted query. In this paper, we attempt to close the gap between previous work, which studied how to model a single click, and the reality that multiple clicks on web documents in a single result page are not uncommon. Specifically, we present two multiple-click models: the independent click model (ICM) which is reformulated from previous work, and the dependent click model (DCM) which takes into consideration dependencies between multiple clicks. Both models can be efficiently learned with linear time and space complexities. More importantly, they can be incrementally updated as new click logs flow in. These are well-demanded properties in reality.\n We systematically evaluate the two models on click logs obtained in July 2008 from a major commercial search engine. The data set, after preprocessing, contains over 110 thousand distinct queries and 8.8 million query sessions. Extensive experimental studies demonstrate the gain of modeling multiple clicks and their dependencies. Finally, we note that since our experimental setup does not rely on tweaking search result rankings, it can be easily adopted by future studies."}
{"_id": "9e894cda6d4d7b2766b6fc5c7822e5d5d24472c8", "title": "A Systematic Review of Virtual Reality in Education.", "text": "Virtual reality has existed in the realm of education for over half a century. However, its widespread adoption is still yet to occur. This is a result of a myriad of limitations to both the technologies themselves, and the costs and logistics required to deploy them. In order to gain a better understanding of what these issues are, and what it is that educators hope to gain by using these technologies in the first place, we have performed both a systematic review of the use of virtual reality in education, as well as two distinct thematic analyses. The first analysis investigated the applications and reported motivations provided by educators in academic literature for developing virtual reality educational systems, while the second investigated the reported problems associated with doing so. These analyses indicate that the majority of researchers use virtual reality to increase the intrinsic motivation of students, and refer to a narrow range of factors such as constructivist pedagogy, collaboration, and gamification in the design of their experiences. Similarly, a small number of educational areas account for the vast majority of educational virtual reality implementations identified in our analyses. Next, we introduced and compared a multitude of recent virtual reality technologies, discussing their potential to overcome several of the problems identified in our analyses, including cost, user experience and interactivity. However, these technologies are not without their own issues, thus we conclude this paper by providing several novel techniques to potentially address them, as well as potential directions for future researchers wishing to apply these emerging technologies to education."}
{"_id": "6142b4a07af3a909dffb87f44445da81ca85d53a", "title": "Clustering-based selection for evolutionary multi-objective optimization", "text": "In this study, a novel clustering-based selection strategy of nondominated individuals for evolutionary multi-objective optimization is proposed. The new strategy partitions the nondominated individuals in current Pareto front adaptively into desired clusters. Then one representative individual will be selected in each cluster for pruning nondominated individuals. In order to evaluate the validity of the new strategy, we apply it into one state of the art multi-objective evolutionary algorithm. The experimental results based on thirteen benchmark problems show that the new strategy improves the performance obviously in terms of breadth and uniformity of nondominated solutions."}
{"_id": "6e953a5caa643ef2e310473b680ef23262bb80ce", "title": "Characterizing Email Search using Large-scale Behavioral Logs and Surveys", "text": "As the number of email users and messages continues to grow, search is becoming more important for finding information in personal archives. In spite of its importance, email search is much less studied than web search, particularly using large-scale behavioral log analysis. In this paper we report the results of a large-scale log analysis of email search and complement this with a survey to better understand email search intent and success. We characterize email search behaviors and highlight differences from web search. When searching for email, people know many attributes about what they are looking for; they often look for specific known items; their queries are shorter and they click on fewer items than in web search. Although repeat queries are common in both email and web search, repeat visits to the same search result are much less common in email search suggesting that the same query is used for different search intents over time. We consider search intent from multiple angles. In email search logs, we find that people use email search not just to find information but also to perform tasks such as cleanup or organization, and that the distribution of actions they perform depends on the type of query. In our survey, people reported that they looked for specific information in both email search and web search, but they were much less likely to search for general information on a topic in email. The differences in overall behavior, re-finding patterns and search intents we observed between email and web search have important implications for the design of email search algorithms and interfaces."}
{"_id": "62f50ddebe9db1f12851e5cf56960c3fec46cb2f", "title": "Accurate Approximations for Posterior Moments and Marginal Densities Author ( s ) :", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "60f47276891841fff6a0c1ffdeb71b4e1082f985", "title": "Review of battery cell balancing techniques", "text": "A highly reliable and efficient battery management system (BMS) is crucial for applications that are powered by electrochemical power. Cell balancing is one of the most important features of a BMS. Cell balancing techniques help to distribute energy evenly among battery cells. Without cell balancing, a portion of the capacity or energy in the battery bank will be wasted, especially for long battery string which operates in frequent recycling condition. In this paper, some popular cell balancing techniques are described and categorized according to the way of processing redundant energy in battery cells."}
{"_id": "2bb6ab1cb16edea5c71174f8e277d001c0084b58", "title": "Transductive Face Sketch-Photo Synthesis", "text": "Face sketch-photo synthesis plays a critical role in many applications, such as law enforcement and digital entertainment. Recently, many face sketch-photo synthesis methods have been proposed under the framework of inductive learning, and these have obtained promising performance. However, these inductive learning-based face sketch-photo synthesis methods may result in high losses for test samples, because inductive learning minimizes the empirical loss for training samples. This paper presents a novel transductive face sketch-photo synthesis method that incorporates the given test samples into the learning process and optimizes the performance on these test samples. In particular, it defines a probabilistic model to optimize both the reconstruction fidelity of the input photo (sketch) and the synthesis fidelity of the target output sketch (photo), and efficiently optimizes this probabilistic model by alternating optimization. The proposed transductive method significantly reduces the expected high loss and improves the synthesis performance for test samples. Experimental results on the Chinese University of Hong Kong face sketch data set demonstrate the effectiveness of the proposed method by comparing it with representative inductive learning-based face sketch-photo synthesis methods."}
{"_id": "197e3a315c57c9278876d95b7e522700aa112886", "title": "Integrating constraints and metric learning in semi-supervised clustering", "text": "Semi-supervised clustering employs a small amount of labeled data to aid unsupervised learning. Previous work in the area has utilized supervised data in one of two approaches: 1) constraint-based methods that guide the clustering algorithm towards a better grouping of the data, and 2) distance-function learning methods that adapt the underlying similarity metric used by the clustering algorithm. This paper provides new methods for the two approaches as well as presents a new semi-supervised clustering algorithm that integrates both of these techniques in a uniform, principled framework. Experimental results demonstrate that the unified approach produces better clusters than both individual approaches as well as previously proposed semi-supervised clustering algorithms."}
{"_id": "f4f3a10d96e0b6d134e7e347e1727b7438d4006f", "title": "Semi-supervised Clustering by Seeding", "text": ""}
{"_id": "41e71f20d0251883846b67bd8a67d5b4986b4b1b", "title": "Introduction: attachment theory and psychotherapy.", "text": "In this introduction to the JCLP: In Session 69(11) issue on attachment theory and psychotherapy, the key points of attachment theory (Bowlby, , , 1981) and its relevance to psychotherapy are briefly described. The aim of this issue is to provide case illustrations of how an attachment theory perspective and principles can expand our understanding of psychotherapy practice."}
{"_id": "0ab5e90d14264b7faf406c15615a4f08cc945f27", "title": "Neural Networks - A Systematic Introduction", "text": "In what case do you like reading so much? What about the type of the neural networks a systematic introduction book? The needs to read? Well, everybody has their own reason why should read some books. Mostly, it will relate to their necessity to get knowledge from the book and want to read just to get entertainment. Novels, story book, and other entertaining books become so popular this day. Besides, the scientific books will also be the best reason to choose, especially for the students, teachers, doctors, businessman, and other professions who are fond of reading."}
{"_id": "bff1e1ecf00c37ec91edc7c5c85c1390726c3687", "title": "Constrained Deep Metric Learning for Person Re-identification", "text": "Person re-identification aims to re-identify the probe image from a given set of images under different camera views. It is challenging due to large variations of pose, illumination, occlusion and camera view. Since the convolutional neural networks (CNN) have excellent capability of feature extraction, certain deep learning methods have been recently applied in person re-identification. However, in person re-identification, the deep networks often suffer from the over-fitting problem. In this paper, we propose a novel CNN-based method to learn a discriminative metric with good robustness to the over-fitting problem in person re-identification. Firstly, a novel deep architecture is built where the Mahalanobis metric is learned with a weight constraint. This weight constraint is used to regularize the learning, so that the learned metric has a better generalization ability. Secondly, we find that the selection of intraclass sample pairs is crucial for learning but has received little attention. To cope with the large intra-class variations in pedestrian images, we propose a novel training strategy named moderate positive mining to prevent the training process from over-fitting to the extreme samples in intraclass pairs. Experiments show that our approach significantly outperforms state-of-the-art methods on several benchmarks of person re-identification."}
{"_id": "a4c7eed69558ba96ea2d0e8cb2810fc359b43175", "title": "SQL-to-NoSQL Schema Denormalization and Migration: A Study on Content Management Systems", "text": "Content management systems (CMSs) are able to let people, who have no technical skill to manage websites, rapidly create, edit, organize and publish online contents. For example, CMSs are often used to establish e-commerce online shops, community portals, personal blogs, or organization websites. Thus, CMSs play the role of collecting data and then can be easily extended to become intelligent Internet systems. Most popular CMSs, e.g., Word Press and Joomla, rely on relational databases or SQL databases, e.g., MySQL, PostgreSQL, etc. However, due to the explosive growth of huge data, the well-structured characteristic of SQL databases may limit the scalability to handle horizontal scaling. Therefore, regarding the flexibility and the feasibility of parallel processing, cloud computing takes Not Only SQL (NoSQL) databases into consideration. How to migrate the original CMS software from SQL databases to NoSQL databases becomes one emerging and critical research issue. This paper is motivated to propose an autonomous SQL-to-NoSQL schema denormalization and migration. At the same time, this paper also evaluates the proposed mechanism on the realistic CMS software. Based on our experimental results, our mechanism not only helps the current CMS software migrate into the cloud platform without re-designing their database schemas, but also improves at least 45% access performance after schema migration."}
{"_id": "ef840fa97be8c9425052672240ae0cdea7820c89", "title": "Testing Unconstrained Optimization Software", "text": "Much of the testing of optimization software is inadequate because the number of test functmns is small or the starting points are close to the solution. In addition, there has been too much emphasm on measurmg the efficmncy of the software and not enough on testing reliability and robustness. To address this need, we have produced a relatwely large but easy-to-use collection of test functions and designed gmdelines for testing the reliability and robustness of unconstrained optimization software."}
{"_id": "2ed164a624809c4dc339f973a7e12c8dc847da47", "title": "Short Text Similarity with Word Embeddings", "text": "Determining semantic similarity between texts is important in many tasks in information retrieval such as search, query suggestion, automatic summarization and image finding. Many approaches have been suggested, based on lexical matching, handcrafted patterns, syntactic parse trees, external sources of structured semantic knowledge and distributional semantics. However, lexical features, like string matching, do not capture semantic similarity beyond a trivial level. Furthermore, handcrafted patterns and external sources of structured semantic knowledge cannot be assumed to be available in all circumstances and for all domains. Lastly, approaches depending on parse trees are restricted to syntactically well-formed texts, typically of one sentence in length.\n We investigate whether determining short text similarity is possible using only semantic features---where by semantic we mean, pertaining to a representation of meaning---rather than relying on similarity in lexical or syntactic representations. We use word embeddings, vector representations of terms, computed from unlabelled data, that represent terms in a semantic space in which proximity of vectors can be interpreted as semantic similarity.\n We propose to go from word-level to text-level semantics by combining insights from methods based on external sources of semantic knowledge with word embeddings. A novel feature of our approach is that an arbitrary number of word embedding sets can be incorporated. We derive multiple types of meta-features from the comparison of the word vectors for short text pairs, and from the vector means of their respective word embeddings. The features representing labelled short text pairs are used to train a supervised learning algorithm. We use the trained model at testing time to predict the semantic similarity of new, unlabelled pairs of short texts \n We show on a publicly available evaluation set commonly used for the task of semantic similarity that our method outperforms baseline methods that work under the same conditions."}
{"_id": "10bed48e1697f70683da81317986f8341278a0d4", "title": "Securing vulnerable home IoT devices with an in-hub security manager", "text": "The proliferation of consumer Internet of Things (IoT) devices offers as many convenient benefits as it poses significant vulnerabilities. Patching or otherwise mitigating these vulnerabilities will be difficult for the existing home security ecosystem. This paper proposes a central security manager that is built on top of the smarthome's hub or gateway router and positioned to intercept all traffic to and from devices. Aware of the status of all devices in the home and of reported vulnerabilities, the security manager could intervene as needed to deter or alleviate many types of security risks. Modules built atop this manager could offer convenient installation of software updates, filter traffic that might otherwise exploit devices, and strengthen authentication for both legacy and future devices. We believe that this design offers the potential to increase security for smarthome IoT devices, and we encourage other researchers and regulators to explore and extend our ideas."}
{"_id": "d222a25921181a8072d8cbb1f06a522a2a3df8d8", "title": "Nutritional Ecology and Human Health.", "text": "In contrast to the spectacular advances in the first half of the twentieth century with micronutrient-related diseases, human nutrition science has failed to stem the more recent rise of obesity and associated cardiometabolic disease (OACD). This failure has triggered debate on the problems and limitations of the field and what change is needed to address these. We briefly review the two broad historical phases of human nutrition science and then provide an overview of the main problems that have been implicated in the poor progress of the field with solving OACD. We next introduce the field of nutritional ecology and show how its ecological-evolutionary foundations can enrich human nutrition science by providing the theory to help address its limitations. We end by introducing a modeling approach from nutritional ecology, termed nutritional geometry, and demonstrate how it can help to implement ecological and evolutionary theory in human nutrition to provide new direction and to better understand and manage OACD."}
{"_id": "6971ca062d8e8a77e12c1624329a8dc754255910", "title": "Innovative Cost Effective Approach for Cell Phone Based Remote Controlled Embedded System for Irrigation", "text": "This work describes the development of innovative low cost cell phone based remote control application for induction motor-pump based irrigation in agriculture. Rural areas in many states of India are plagued by frequent power cuts and abnormal voltage conditions. The developed system ensures that water is distributed to field whenever normal conditions exist based on task specified. A novel concept of miscall for specified duration has been used to reduce the operational cost of the system and for the convenience of farmers facing difficulty in typing messages. Information is exchanged in form of miscalls / message between the system and the user cell phones. The system is based on AVR ATMega32 micro-controller and includes protection against single phasing, over-current, dry running and other desirable features. DS1307 and DS18S20 are used for time and temperature measurement respectively. It is expected that system will relieve hardships of farmers relating water distribution to a great extent."}
{"_id": "1ecffbf969e0d46acfcdfb213e47027227d8667b", "title": "Predicting Domain-Specific Risk Taking With the HEXACO Personality Structure", "text": "Although risk taking traditionally has been viewed as a unitary, stable individual difference variable, emerging evidence in behavioral decision-making research suggests that risk taking is a domain-specific construct. Utilizing a psychological risk-return framework that regresses risk taking on the perceived benefits and perceived riskiness of an activity (Weber & Milliman, 1997), this study examined the relations between risk attitude and broad personality dimensions using the new HEXACO personality framework (Lee &Ashton, 2004) across four risk domains. This personality framework, which has been replicated in lexical studies in over 12 natural languages, assess personality over six broad personality dimensions, as opposed to the traditional Five-Factor Model, or \u2018\u2018Big Five.\u2019\u2019 Through path analysis, we regressed risk taking in four separate domains on risk perceptions, perceived benefits, and the six HEXACO dimensions. Across all risk domains, we found that the emotionality dimension was associated with heightened risk perceptions and high conscientiousness was associated with less perceived benefits. We also report several unique patterns of domain-specific relations between the HEXACO dimensions and risk attitude. Specifically, openness was associated with risk taking and perceived benefits for social and recreational risks, whereas lower honesty/humility was associated with greater health/safety and ethical risk taking. These findings extend our understanding of how individuals approach risk across a variety of contexts, and further highlight the utility of honesty/humility, a dimension not recovered in Big Five models, in individual differences research. Copyright # 2010 John Wiley & Sons, Ltd. key words risk taking; risk perception; risk-return framework; personality; HEXACO; honesty/humility Risk taking has traditionally been viewed as an enduring, stable, and domain-invariant construct in both behavioral decision making and personality research (e.g., Eysenck & Eysenck, 1977; Kahneman & Tversky, 1979; Paunonen & Jackson, 1996; Tellegen, 1985). However, recent advances suggest that risk taking is content, or domain, specific (Blais &Weber, 2006; Hanoch, Johnson, &Wilke, 2006; Soane & Chmiel, 2005; Weber, Blais, & Betz, 2002). In light of this knowledge, psychologists would benefit from a deeper ler, Decision Research, Eugene, OR 97401, USA. E-mail: jweller@decisionresearch.org"}
{"_id": "80f8d31b80dc3195442e667dbc5309578520c2e3", "title": "Robust Interactive Image Segmentation Using Convex Active Contours", "text": "The state-of-the-art interactive image segmentation algorithms are sensitive to the user inputs and often unable to produce an accurate boundary with a small amount of user interaction. They frequently rely on laborious user editing to refine the segmentation boundary. In this paper, we propose a robust and accurate interactive method based on the recently developed continuous-domain convex active contour model. The proposed method exhibits many desirable properties of an effective interactive image segmentation algorithm, including robustness to user inputs and different initializations, the ability to produce a smooth and accurate boundary contour, and the ability to handle topology changes. Experimental results on a benchmark data set show that the proposed tool is highly effective and outperforms the state-of-the-art interactive image segmentation algorithms."}
{"_id": "d3e3aae0f610d5e1f2314ba417d41fbfb74bef2d", "title": "Model Reference Adaptive Control-Based Speed Control of Brushless DC Motors With Low-Resolution Hall-Effect Sensors", "text": "A control system with a novel speed estimation approach based on model reference adaptive control (MRAC) is presented for low cost brushless dc motor drives with low-resolution hall sensors. The back EMF is usually used to estimate speed. But the estimation result is not accurate enough at low speeds because of the divided voltage of stator resistors and too small back EMF. Moreover, the stator resistor is always varying with the motor's temperature. A speed estimation algorithm based on MRAC was proposed to correct the speed error estimated by using back EMF. The proposed algorithm's most innovative feature is its adaptability to the entire speed range including low speeds and high speeds and temperature and different motors do not affect the accuracy of the estimation result. The effectiveness of the algorithm was verified through simulations and experiments."}
{"_id": "595d0fe1c259c02069075d8c687210211908c3ed", "title": "A Survey on Learning to Hash", "text": "Nearest neighbor search is a problem of finding the data points from the database such that the distances from them to the query point are the smallest. Learning to hash is one of the major solutions to this problem and has been widely studied recently. In this paper, we present a comprehensive survey of the learning to hash algorithms, categorize them according to the manners of preserving the similarities into: pairwise similarity preserving, multiwise similarity preserving, implicit similarity preserving, as well as quantization, and discuss their relations. We separate quantization from pairwise similarity preserving as the objective function is very different though quantization, as we show, can be derived from preserving the pairwise similarities. In addition, we present the evaluation protocols, and the general performance analysis, and point out that the quantization algorithms perform superiorly in terms of search accuracy, search time cost, and space cost. Finally, we introduce a few emerging topics."}
{"_id": "c2d0c15b932b9c17ac3ac5da953e5584c34ca559", "title": "KBQA: An Online Template Based Question Answering System over Freebase", "text": "Question answering (QA) has become a popular way for humans to access billion-scale knowledge bases. QA systems over knowledge bases produce accurate and concise answers. The key of QA over knowledge bases is to map the question to a certain substructure in the knowledge base. To do this, KBQA (Question Answering over Knowledge Bases) uses a new kind of question representation: templates, learned from a million scale QA corpora. For example, for questions about a city\u2019s population, KBQA learns templates such as What\u2019s the population of $city?, How many people are there in $city?. It learns overall 1171303 templates for 4690 relations. Based on these templates, KBQA effectively and efficiently supports binary factoid questions or complex questions."}
{"_id": "95b552de90b37aa040d334baedc13a8d3879c9bb", "title": "Effects of gratitude meditation on neural network functional connectivity and brain-heart coupling", "text": "A sense of gratitude is a powerful and positive experience that can promote a happier life, whereas resentment is associated with life dissatisfaction. To explore the effects of gratitude and resentment on mental well-being, we acquired functional magnetic resonance imaging and heart rate (HR) data before, during, and after the gratitude and resentment interventions. Functional connectivity (FC) analysis was conducted to identify the modulatory effects of gratitude on the default mode, emotion, and reward-motivation networks. The average HR was significantly lower during the gratitude intervention than during the resentment intervention. Temporostriatal FC showed a positive correlation with HR during the gratitude intervention, but not during the resentment intervention. Temporostriatal resting-state FC was significantly decreased after the gratitude intervention compared to the resentment intervention. After the gratitude intervention, resting-state FC of the amygdala with the right dorsomedial prefrontal cortex and left dorsal anterior cingulate cortex were positively correlated with anxiety scale and depression scale, respectively. Taken together, our findings shed light on the effect of gratitude meditation on an individual\u2019s mental well-being, and indicate that it may be a means of improving both emotion regulation and self-motivation by modulating resting-state FC in emotion and motivation-related brain regions."}
{"_id": "34b9ba36c030cfb7a141e53156aa1591dfce3dcd", "title": "Effective or Efficient ? Effects of Beijing \u2019 s Vehicle Lottery System on Fleet Composition and Environment *", "text": "To control vehicle growth and air pollution, Beijing\u2019s municipal government imposed a vehicle lottery system in 2011, which randomly allocated a quota of licenses to potential buyers. This paper investigates the effect of this policy on fleet composition, fuel consumption, air pollution, and social welfare. Using car registration data, we estimate a random coefficient discrete choice model and conduct counterfactual analysis based on the estimated parameters. We find that the lottery reduced new passenger vehicle *We gratefully acknowledge the kind help and constructive comments of Professors Andrew Cassey, Ana Espinola-Arredondo, Benjamin Cowan, Gregmar Galinato, Dong Lu, Jill McCluskey, Mark Gibson, Shanjun Li, Jia Yan, Dan Yang and seminar participants at Washington State University. Financial support from the China National Science Fund (grant #71620107005) is acknowledged. \u2020Corresponding author. Email: yziying1988@gmail.com (Z. Yang), fmunoz@wsu.edu (F. Mu\u00f1oz-Garc\u00eda), tangmp.sicau@gmail.com (M. Tang). sales by 50.15%, fuel consumption by 48.69%, and pollutant emissions by 48.69% in 2012. Also, such lottery shifted new auto purchases towards high-end but less fuel efficient vehicles. In our counterfactual analysis, we show that a progressive tax scheme works better than the lottery system at decreasing fuel consumption and air pollution, and leads to a higher fleet fuel efficiency and less welfare loss."}
{"_id": "cf86a14fa3a973bf58bb241bd075501e933b7222", "title": "Smoothed Sarsa: Reinforcement learning for robot delivery tasks", "text": "Our goal in this work is to make high level decisions for mobile robots. In particular, given a queue of prioritized object delivery tasks, we wish to find a sequence of actions in real time to accomplish these tasks efficiently. We introduce a novel reinforcement learning algorithm called Smoothed Sarsa that learns a good policy for these delivery tasks by delaying the backup reinforcement step until the uncertainty in the state estimate improves. The state space is modeled by a Dynamic Bayesian Network and updated using a Region-based Particle Filter. We take advantage of the fact that only discrete (topological) representations of entity locations are needed for decision-making, to make the tracking and decision making more efficient. Our experiments show that policy search leads to faster task completion times as well as higher total reward compared to a manually crafted policy. Smoothed Sarsa learns a policy orders of magnitude faster than previous policy search algorithms. We demonstrate our results on the Player/Stage simulator and on the Pioneer robot."}
{"_id": "b6f49e3126d056ffc916f972fd0d4f0d2ed20ec7", "title": "A Wideband Dual-Polarized Dielectric Magnetoelectric Dipole Antenna", "text": "A new wideband \u00b145\u00b0 dual-polarized metal loop dielectric resonator magnetoelectric antenna is proposed in this communication. The antenna consists of two orthogonal dielectric bars that support two orthogonal electric dipoles and two cross interlaced metal semi-loops that are equivalent to two orthogonal magnetic dipoles above the ground. With a legitimate combination of the electric and magnetic dipoles, a unidirectional radiation with low backward radiation and equal E-plane and H-plane radiation patterns can be achieved. The antenna can be made of a monolithic dielectric block with simple installation. To validate the new antenna configuration, a prototype is designed, manufactured, and measured. The prototype antenna using ceramic with the electrical size of $0.33\\lambda _{o} \\times 0.33\\lambda _{o} \\times 0.21\\lambda _{o}$ demonstrates that an impedance bandwidth of 11.4% ( $\\vert S_{11}\\vert < -15$ dB) in the 3.5 GHz frequency band can be achieved with the measured forward/backward ratio of better than 22 dB. The proposed antenna is suitable as antenna element for massive-MIMO array antennas, where a large number of antenna elements need to be installed in a compact space."}
{"_id": "87941a5597b950d35e104fde4507ec1215c66366", "title": "Neural Monkey: The Current State and Beyond", "text": "Neural Monkey is an open-source toolkit for sequence-to-sequence learning. The focus of this paper is to present the current state of the toolkit to the intended audience, which includes students and researchers, both active in the deep learning community and newcomers. For each of these target groups, we describe the most relevant features of the toolkit, including the simple configuration scheme, methods of model inspection that promote useful intuitions, or a modular design for easy prototyping. We summarize relevant contributions to the research community which were made using this toolkit and discuss the characteristics of our toolkit with respect to other existing systems. We conclude with a set of proposals for future development."}
{"_id": "104fc2ff407a457b7df1d2eba86225bb2b217d3f", "title": "Introduction to the talking points project", "text": "A number of systems, technological and otherwise, exist for helping blind and visually impaired users navigate from place to place. [5, 3] However, in focusing only on the destination, these systems often neglect the journey. We propose to rectify this, with a device to improve a user\u2019s peripheral awareness of their surroundings. This is the motivation and the primary goal of this project. The Talking Points project aims to create a system for attaching information to places and objects, in a format that can be easily converted to speech. It allows blind and visually impaired users to get information about the places and objects they are walking by, and can also be used to provide digital information to other passersby. We have created a proof-of-concept of this system. The system reads passive RFID tags from a mobile reader. It looks up a tag\u2019s ID number to find associated text, then uses text-to-speech software to read that speech to the user."}
{"_id": "47031c7c1c160a990583fa9299f0d5d6eb366f0c", "title": "CAST: Collaborative Agents for Simulating Teamwork", "text": "Psychological studies on teamwork have shown that an effective team often can anticipate information needs of teammates based on a shared mental model. Existing multi-agent models for teamwork are limited in their ability to support proactive information exchange among teammates. To address this issue, we have developed and implemented a multi-agent architecture called CAST that simulates teamwork and supports proactive information exchange in a dynamic environment. We present a formal model for proactive information exchange. Knowledge regarding the structure and process of a team is described in a language called MALLET. Beliefs about shared team processes and their states are represented using Petri Nets. Based on this model, CAST agents offer information proactively to those who might need it using an algorithm called DIARG. Empirical evaluations using a multi-agent synthetic testbed application indicate that CAST enhances the effectiveness of teamwork among agents without sacrificing a high cost for communications."}
{"_id": "a8145c4a9183eea07d35989d0fa2e3071b66c20e", "title": "Community Detection in Networks using Node Attributes and Modularity", "text": "Community detection in network is of vital importance to find cohesive subgroups. Node attributes can improve the accuracy of community detection when combined with link information in a graph. Community detection using node attributes has not been investigated in detail. To explore the aforementioned idea, we have adopted an approach by modifying the Louvain algorithm. We have proposed Louvain-ANDAttribute (LAA) and Louvain-OR-Attribute (LOA) methods to analyze the effect of using node attributes with modularity. We compared this approach with existing community detection approaches using different datasets. We found the performance of both algorithms better than Newman\u2019s Eigenvector method in achieving modularity and relatively good results of gain in modularity in LAA than LOA. We used density, internal and external edge density for the evaluation of quality of detected communities. LOA provided highly dense partitions in the network as compared to Louvain and Eigenvector algorithms and close values to Clauset. Moreover, LOA achieved few numbers of edges between communities. Keywords\u2014Community Detection; Louvain algorithm; Node attributes; and Modularity"}
{"_id": "12510c9659c71ce821fe671de5ed7033ba0af31c", "title": "Chinese Word Segmentation and Named Entity Recognition Based on Conditional Random Fields", "text": "Chinese word segmentation (CWS), named entity recognition (NER) and part-ofspeech tagging is the lexical processing in Chinese language. This paper describes the work on these tasks done by France Telecom Team (Beijing) at the fourth International Chinese Language Processing Bakeoff. In particular, we employ Conditional Random Fields with different features for these tasks. In order to improve NER relatively low recall; we exploit non-local features and alleviate class imbalanced distribution on NER dataset to enhance the recall and keep its relatively high precision. Some other post-processing measures such as consistency checking and transformation-based error-driven learning are used to improve word segmentation performance. Our systems participated in most CWS and POS tagging evaluations and all the NER tracks. As a result, our NER system achieves the first ranks on MSRA open track and MSRA/CityU closed track. Our CWS system achieves the first rank on CityU open track, which means that our systems achieve state-of-the-art performance on Chinese lexical processing."}
{"_id": "498da4119cbf1441ad9b2b4234bb5174b3233b78", "title": "A survey of named entity recognition and classification", "text": "The term \u201cNamed Entity\u201d , now widely used in Natural Language Processing, was coined for the Sixth Message Understanding Conference (MUC-6) (R. Grishman & Sundheim 1996). At that time, MUC was focusing on Information Extraction (IE) tasks where structured information of company activities and defense related activities is extracted from unstructured text, such as newspaper articles. In defining the task, people noticed that it is essential to recognize information units like names, including person, organization and location names, and numeric expressions including time, date, money and percent expressions. Identifying references to these entities in text was recognized as one of the important sub-tasks of IE and was called \u201cNamed Entity Recognition and Classification (NERC)\u201d . We present here a survey of fifteen years of research in the NERC field, from 1991 to 2006. While early systems were making use of handcrafted rule-based algorithms, modern systems most often resort to machine learning techniques. We survey these techniques as well as other critical aspects of NERC such as features and evaluation methods. It was indeed concluded in a recent conference that the choice of features is at least as important as the choice of technique for obtaining a good NERC system (E. Tjong Kim Sang & De Meulder 2003). Moreover, the way NERC systems are evaluated and compared is essential to progress in the field. To the best of our knowledge, NERC features, techniques, and evaluation methods have not been surveyed extensively yet. The first section of this survey presents some observations on published work from the point of view of activity per year, supported languages, preferred textual genre and domain, and supported entity types. It was collected from the review of a hundred English language papers sampled from the major conferences and journals. We do not claim this review to be exhaustive or representative of all the research in all languages, but we believe it gives a good feel for the breadth and depth of previous work. Section 2 covers the algorithmic techniques that were proposed for addressing the NERC task. Most techniques are borrowed from the Machine Learning (ML) field. Instead of elaborating on techniques themselves, the third section lists and classifies the proposed features, i.e., descriptions and characteristic of words for algorithmic consumption. Section 4 presents some of the evaluation paradigms that were proposed throughout the major forums. Finally, we present our conclusions."}
{"_id": "4de296483421d72eeeae0df297331090e86d202b", "title": "Chinese Word Segmentation and Named Entity Recognition: A Pragmatic Approach", "text": "This article presents a pragmatic approach to Chinese word segmentation. It differs from most previous approaches mainly in three respects. First, while theoretical linguists have defined Chinese words using various linguistic criteria, Chinese words in this study are defined pragmatically as segmentation units whose definition depends on how they are used and processed in realistic computer applications. Second, we propose a pragmatic mathematical framework in which segmenting known words and detecting unknown words of different types (i.e., morphologically derived words, factoids, named entities, and other unlisted words) can be performed simultaneously in a unified way. These tasks are usually conducted separately in other systems. Finally, we do not assume the existence of a universal word segmentation standard that is application-independent. Instead, we argue for the necessity of multiple segmentation standards due to the pragmatic fact that different natural language processing applications might require different granularities of Chinese words. These pragmatic approaches have been implemented in an adaptive Chinese word segmenter, called MSRSeg, which will be described in detail. It consists of two components: (1) a generic segmenter that is based on the framework of linear mixture models and provides a unified approach to the five fundamental features of word-level Chinese language processing: lexicon word processing, morphological analysis, factoid detection, named entity recognition, and new word identification; and (2) a set of output adaptors for adapting the output of (1) to different application-specific standards. Evaluation on five test sets with different standards shows that the adaptive system achieves state-of-the-art performance on all the test sets."}
{"_id": "9697c926377de131c9de3d478c350a795d13d0a8", "title": "Generating narrative variation in interactive fiction", "text": "GENERATING NARRATIVE VARIATION IN INTERACTIVE FICTION Nick Montfort Mitchell P. Marcus and Gerald Prince A general method for the generation of natural language narrative is described. It allows the expression, or narrative discourse, to vary independently of the underlying events and existents that are the narrative\u2019s content. Specifically, this variation is accomplished in an interactive fiction (IF) system which replies to typed input by narrating what has happened in a simulated world. IF works have existed for about 30 years as forms of text-based computer simulation, instances of dialog systems, and examples of literary art. Theorists of narrative have carefully distinguished between the level of underlying content (corresponding to the simulated world in interactive fiction) and that of expression (corresponding to the textual exchange between computer arnd user) since the mid-1960s, when the field of narratology began to develop, but IF systems have not yet made use of this distinction. The current project contributes new techniques for automatic narration by building on work done in computational linguistics, specifically natural language generation, and in narratology. First, types of narrative variation that are possible in IF are identified and formalized in a way that is suitable for a natural language generation system. An architecture for an IF system is then described and implemented; the result allows multiple works of interactive fiction to be realized and, using a general plan for narrating, allows them to be narrated in different ways during interaction. The system\u2019s ability to generate text is considered in a pilot evaluation. Plans for future work are also discussed. They include publicly released systems for IF development and narratology education, adding a planning capability that uses actors\u2019 individual perspectives, and adapting automatic narration to different sorts of interactive systems."}
{"_id": "fac0151ed0494caf10c7d778059f176ba374e29c", "title": "Recognising Complex Mental States from Naturalistic Human-Computer Interactions", "text": ""}
{"_id": "2b22bc00e23fc6665a190dcc0b3ce888e1f9d13c", "title": "Semantic Similarity Strategies for Job Title Classification", "text": "Automatic and accurate classification of items enables numerous downstream applications in many domains. These applications can range from faceted browsing of items to product recommendations and big data analytics. In the online recruitment domain, we refer to classifying job ads to pre-defined or custom occupation categories as job title classification. A large-scale job title classification system can power various downstream applications such as semantic search, job recommendations and labor market analytics. In this paper, we discuss experiments conducted to improve our in-house job title classification system. The classification component of the system is composed of a two-stage coarse and fine level classifier cascade that classifies input text such as job title and/or job ads to one of the thousands of job titles in our taxonomy. To improve classification accuracy and effectiveness, we experiment with various semantic representation strategies such as average W2V vectors and document similarity measures such as Word Movers Distance (WMD). Our initial results show an overall improvement in accuracy of Carotene[1]."}
{"_id": "1250d9f9e02e15b090742d5f7b2aefae902be88d", "title": "Blepharophimosis mental retardation syndrome Say-Barber/Biesecker/Young-Simpson type - new findings with neuroimaging.", "text": "We report on a female patient with blepharophimosis mental retardation syndrome of Say/Barber/Biesecker/Young-Simpson (SBBYS) type. Main findings in her were marked developmental delay, blepharophimosis, ptosis, cleft palate, external auditory canal stenosis, small and malformed teeth, hypothyroidism, hearing impairment, and joint limitations. We performed diffusion tensor magnetic resonance imaging (MRI) and tractography of the brain which showed inappropriate myelination and disturbed white matter integrity. Cytogenetic analysis, subtelomeric fluorescence in situ hybridization and comparative genomic hybridization failed to identify an abnormality. It remains uncertain whether the MRI findings are specific to the present patient or form part of the SBBYS syndrome."}
{"_id": "7de1c0db3b3848d1a51dbfefce8b73e89a682177", "title": "Classification of Moral Foundations in Microblog Political Discourse", "text": "Previous works in computer science, as well as political and social science, have shown correlation in text between political ideologies and the moral foundations expressed within that text. Additional work has shown that policy frames, which are used by politicians to bias the public towards their stance on an issue, are also correlated with political ideology. Based on these associations, this work takes a first step towards modeling both the language and how politicians frame issues on Twitter, in order to predict the moral foundations that are used by politicians to express their stances on issues. The contributions of this work includes a dataset annotated for the moral foundations, annotation guidelines, and probabilistic graphical models which show the usefulness of jointly modeling abstract political slogans, as opposed to the unigrams of previous works, with policy frames for the prediction of the morality underlying political tweets."}
{"_id": "b67c7e030e27762546456b36f6eb10133207b720", "title": "A 0.1-1.8 GHz, 100 W GaN HEMT Power Amplifier Module", "text": "We have demonstrated a 0.1-1.8 GHz, 100 W GaN HEMT power amplifier module with 11 dB gain, 94-142 W CW output power and 40.6-74.0 % drain efficiency over the band. The power amplifier module uses broadband low loss coaxial baluns to combine four 30 W broadband lossy matched GaN HEMT PAs on a 2 x 2 inch compact PCB. The individual PAs are fully matched to 50 Ohms and obtains 23.6-30.9 W with 44.5-63.7 % drain efficiency over the band. The packaged amplifiers contain a GaN on SiC device operating at 50 V drain voltage with a GaAs integrated passive matching circuitry. These amplifiers are targeted for use in multi-band multi-standard communication systems and for instrumentation applications."}
{"_id": "2b1327a51412646fcf96aa16329f6f74b42aba89", "title": "Improving performance of recurrent neural network with relu nonlinearity", "text": "In recent years significant progress has been made in successfully training recurrent neural networks (RNNs) on sequence learning problems involving long range temporal dependencies. The progress has been made on three fronts: (a) Algorithmic improvements involving sophisticated optimization techniques, (b) network design involving complex hidden layer nodes and specialized recurrent layer connections and (c) weight initialization methods. In this paper, we focus on recently proposed weight initialization with identity matrix for the recurrent weights in a RNN. This initialization is specifically proposed for hidden nodes with Rectified Linear Unit (ReLU) non linearity. We offer a simple dynamical systems perspective on weight initialization process, which allows us to propose a modified weight initialization strategy. We show that this initialization technique leads to successfully training RNNs composed of ReLUs. We demonstrate that our proposal produces comparable or better solution for three toy problems involving long range temporal structure: the addition problem, the multiplication problem and the MNIST classification problem using sequence of pixels. In addition, we present results for a benchmark action recognition problem."}
{"_id": "7569503a38fa892b6714d44de538d44f1542b8d4", "title": "Interactive play objects and the effects of open-ended play on social interaction and fun", "text": "This paper describes a study that examines the influence of openended play in interactive play objects on social interaction and fun experience of children. We developed a prototype to examine whether children enjoy playing with simple intelligent objects. Children between 7 and 11 years old were asked to play with the objects in a free-play and pre-set game session. The study shows that children create a wide variety of games and practice many social skills when negotiating the rules of various games. Overall, children felt playing with the objects in the free-play sessions was more fun than in the pre-set sessions. The insights will be used to design the next version of our play concept."}
{"_id": "569b24a3d464500bf9cd81560c09cc468babd13c", "title": "Conductive polymers: towards a smart biomaterial for tissue engineering.", "text": "Developing stimulus-responsive biomaterials with easy-to-tailor properties is a highly desired goal of the tissue engineering community. A novel type of electroactive biomaterial, the conductive polymer, promises to become one such material. Conductive polymers are already used in fuel cells, computer displays and microsurgical tools, and are now finding applications in the field of biomaterials. These versatile polymers can be synthesised alone, as hydrogels, combined into composites or electrospun into microfibres. They can be created to be biocompatible and biodegradable. Their physical properties can easily be optimized for a specific application through binding biologically important molecules into the polymer using one of the many available methods for their functionalization. Their conductive nature allows cells or tissue cultured upon them to be stimulated, the polymers' own physical properties to be influenced post-synthesis and the drugs bound in them released, through the application of an electrical signal. It is thus little wonder that these polymers are becoming very important materials for biosensors, neural implants, drug delivery devices and tissue engineering scaffolds. Focusing mainly on polypyrrole, polyaniline and poly(3,4-ethylenedioxythiophene), we review conductive polymers from the perspective of tissue engineering. The basic properties of conductive polymers, their chemical and electrochemical synthesis, the phenomena underlying their conductivity and the ways to tailor their properties (functionalization, composites, etc.) are discussed."}
{"_id": "732eaf003c54ac496285030922e0563b2eb5fbbd", "title": "Marker-based tracking with unmanned aerial vehicles", "text": "With the availability of low-cost micro aerial vehicles (MAVs), unmanned aerial vehicles (UAVs) quickly gain popularity and application potential. This requires techniques that can be understood by non-experts and flexibly applied for rapid prototyping. Visual tracking is an essential task with many applications, such as autonomous navigation and scene acquisition. While marker-less methods emerge, marker-based methods still have major advantages including simplicity, robustness, accuracy and performance. In practice, however, multi-marker setups introduce complexity and calibration efforts that can void the advantages. This work proposes a solution for practical, robust and easy-to-use marker-based tracking with independent compound targets. We introduce two novel target designs and describe pose estimation, noise removal and geometric transformations. The concepts are implemented in a tracking library for the Parrot AR. Drone 2.0. We explain its video access and camera calibration, and provide a first set of intrinsic parameters, jointly estimated from 14 units with high accuracy and low variance. The library is applied in a one-day contest on automatic visual navigation of UAVs, where students without technical background and programming skills achieved learning by experience and rapid development. This shows the effectiveness of combining capability with simplicity, and provides a case study on robotics in interdisciplinary education."}
{"_id": "0b4ba20ded29f8446940ed9279a31d3d52029730", "title": "Program Synthesis Using Natural Language", "text": "Interacting with computers is a ubiquitous activity for millions of people. Repetitive or specialized tasks often require creation of small, often one-off, programs. End-users struggle with learning and using the myriad of domain-specific languages (DSLs) to effectively accomplish these tasks.\n We present a general framework for constructing program synthesizers that take natural language (NL) inputs and produce expressions in a target DSL. The framework takes as input a DSL definition and training data consisting of NL/DSL pairs. From these it constructs a synthesizer by learning optimal weights and classifiers (using NLP features) that rank the outputs of a keyword-programming based translation. We applied our framework to three domains: repetitive text editing, an intelligent tutoring system, and flight information queries. On 1200+ English descriptions, the respective synthesizers rank the desired program as the top-1 and top-3 for 80% and 90% descriptions respectively."}
{"_id": "9b00eda33f9277f05d286bd0974aa748af13b5cc", "title": "Reading and Interpreting Ethnography", "text": "Although ethnographic methods are still regarded, to an extent, as new aspects of HCI research practice, they have been part of HCI research almost since its inception, and certainly since the early 1980s, about the same time as the CHI conference was founded. What, then, accounts for this sense of novelty and the mystery that goes along with it? One reason is that ethnographic methods have generally been associated with what we might call non-traditional settings in relation to HCI\u2019s cognitive science roots, emerging at first particular in organizational studies of collaborative work (in the domain of CSCW), being applied later in studies of alternative interaction patterns in ubiquitous computing, and being associated with domains such as domestic life, experience design, and cultural analysis that have been more recent arrivals on the scene. Another is that ethnographic methods are often associated with forms of analysis and theorizing of human action \u2013 ethnomethodology stands out as an example here \u2013 that are themselves alien to HCI\u2019s intellectual traditions and which have not always been clearly explained. Indeed, debates within the field have often founded on these sorts of confusions, so that in the internecine battles amongst social theorists, ethnographic methods suffer collateral damage (e.g. Crabtree et al. 2009). Finally, in a discipline that has often proceeded with something of a mix-and-match approach, liberally and creatively borrowing ideas and elements from different places, ethnography has often been seen instrumentally as a way of understanding important aspects of technological practice while its own epistemological commitments have remained somewhat murky."}
{"_id": "e80b1c3cf0da2de30e4f1a70a542d663f5627d46", "title": "AZR-LEACH: An Energy Efficient Routing Protocol for Wireless Sensor Networks", "text": "Reducing the energy consumption of available resources is still a problem to be solved in Wireless Sensor Networks (WSNs). Many types of existing routing protocols are developed to save power consumption. In these protocols, cluster-based routing protocols are found to be more energy efficient. A cluster head is selected to aggregate the data received from root nodes and forwards these data to the base station in cluster-based routing. The selection of cluster heads should be efficient to save energy. In our proposed protocol, we use static clustering for the efficient selection of cluster heads. The proposed routing protocol works efficiently in large as well as small areas. For an optimal number of cluster head selection we divide a large sensor field into rectangular clusters. Then these rectangular clusters are further grouped into zones for efficient communication between cluster heads and a base station. We perform MATLAB simulations to observe the network stability, throughput, energy consumption, network lifetime and the number of cluster heads. Our proposed routing protocol outperforms in large areas in comparison with the LEACH, MH-LEACH, and SEP routing protocols."}
{"_id": "c77399a53c460237c799155b92f4859c62516071", "title": "A CMOS voltage reference based on weighted difference of gate-source voltages between PMOS and NMOS transistors for low dropout regulators", "text": "A CMOS voltage reference, which takes advantage of weighted difference of the gate-source voltages between a PMOS and an NMOS transistor operating in saturation region, is presented in this paper. The reference has been implemented in a standard 0.6-\u00b5m CMOS process (Vthn\u2248 |Vthp| \u2248 0.9 V@0\u00b0 C) and gives a temperature coefficient of not greater than 62 ppm/\u00b0C from 0 to 100\u00b0 C without trimming, while consuming a maximum of 9.7 \u00b5A with a minimum supply of 1.4 V. The worst-case line regulation is \u00b10.17 %/V. The occupied chip area is 0.055 mm2. The proposed reference has been applied to a 10-mA CMOS low dropout regulator, and a temperature coefficient of 94 ppm/\u00b0C is achieved when the regulator delivers maximum load current."}
{"_id": "4ba375450cf7bbe4f0941abcb9fc0dac10b8217b", "title": "Deep Sentence-Level Authorship Attribution", "text": "We examine the problem of authorship attribution in collaborative documents. We seek to develop new deep learning models tailored to this task. We have curated a novel dataset by parsing Wikipedia\u2019s edit history, which we use to demonstrate the feasiblity of deep models to multi-author attribution at the sentence-level. Though we attempt to formulate models which learn stylometric features based on both grammatical structure and vocabulary, our error analysis suggests that our models mostly learn to recognize vocabulary-based cues, making them non-competitive with baselines tailored to vocabulary-based features. We explore why this may be, and suggest directions for future models to mitigate this shortcoming."}
{"_id": "db622a20ee2084869f618753744d3ac68d5aa9cc", "title": "MEMS Capacitive Pressure Sensors : A Review on Recent Development and Prospective", "text": "Recently MEMS Capacitive Pressure Sensor gains more advantage over micromachined piezoresistive pressure sensor due to high sensitivity, low power consumption, free from temperature effects, IC compatibility, etc,. The spectrum of capacitive pressure sensor application is increasing, hence it is essential to review the path of technological development and further prospective of micromachined capacitive pressure sensor. This paper focuses on the review of various types of capacitive pressure sensor principle, MEMS materials used in fabrication, procedures adopted in microfabrication for silicon and polymer material diaphragm, bonding and packaging techniques used. Selected result on capacitive sensitivity, effect of temperature on capacitive sensitivity was also presented. Finally, the development of smart sensor was discussed. MEMS Capacitive pressure sensor, Review on pressure sensor, CDPS, MEMS Fabrication, MEMS Material, pressure sensor, MEMS (Micro Electro Mechanical System)"}
{"_id": "f43769201302c958484b4a8acb66f9cee108b852", "title": "The Interplay of Institutional Logics in it Public?Private Partnerships", "text": "Public-private partnerships (PPPs) offer a popular means by which the public sector can obtain infor-mation technology (IT) innovations and management know-how from private firms. However, these IT PPPs are extremely difficult to realize, especially considering the divergent interests of public- and private-side stake-holders. Our case study of an IT PPP reveals public- and private-side differences that initially impeded the establishment of a partnership; using institutional logics theory as meta-theoretical lens, we propose a model that explains how public and private parties managed to negotiate their mode of collaboration by balancing their competing institutional norms and practices which ultimately resulted in the convergence of the two divergent logics. Our paper contributes to theory and practice by (1) elucidating the theoretical foundations and role of institutional logics for IT project management that we found dominated by public and private norms and practices, (2) explaining why collaboration in IT PPPs is so diffi-cult, and (3) how eventually an IT PPP can be estab-lished. We discuss theoretical and practical implica-tions in the paper."}
{"_id": "f61c006e07964cbd1d1dd912677737a890f662ed", "title": "Grid-based DBSCAN for clustering extended objects in radar data", "text": "The online observation using high-resolution radar of a scene containing extended objects imposes new requirements on a robust and fast clustering algorithm. This paper presents an algorithm based on the most cited and common clustering algorithm: DBSCAN [1]. The algorithm is modified to deal with the non-equidistant sampling density and clutter of radar data while maintaining all its prior advantages. Furthermore, it uses varying sampling resolution to perform an optimized separation of objects at the same time it is robust against clutter. The algorithm is independent of difficult to estimate input parameters such as the number or shape of available objects. The algorithm outperforms DBSCAN in terms of speed by using the knowledge of the sampling density of the sensor (increase of app. 40-70%). The algorithm obtains an even better result than DBSCAN by including the Doppler and amplitude information (unitless distance criteria)."}
{"_id": "982cf3eb865504cd3cb1b9111fd4c43380d0717f", "title": "Generalized Mongue-Elkan Method for Approximate Text String Comparison", "text": "The Mongue-Elkan method is a general text string comparison method based on an internal character-based similarity measure (e.g. edit distance) combined with a token level (i.e. word level) similarity measure. We propose a generalization of this method based on the notion of the generalized arithmetic mean instead of the simple average used in the expression to calculate the Monge-Elkan method. The experiments carried out with 12 well-known name-matching data sets show that the proposed approach outperforms the original Monge-Elkan method when character-based measures are used to compare tokens."}
{"_id": "6d6c24af71b5423402a78214b30addfdb8bf85c4", "title": "Good Features to Correlate for Visual Tracking", "text": "During the recent years, correlation filters have shown dominant and spectacular results for visual object tracking. The types of the features that are employed in this family of trackers significantly affect the performance of visual tracking. The ultimate goal is to utilize the robust features invariant to any kind of appearance change of the object, while predicting the object location as properly as in the case of no appearance change. As the deep learning based methods have emerged, the study of learning features for specific tasks has accelerated. For instance, discriminative visual tracking methods based on deep architectures have been studied with promising performance. Nevertheless, correlation filter based (CFB) trackers confine themselves to use the pre-trained networks, which are trained for object classification problem. To this end, in this manuscript the problem of learning deep fully convolutional features for the CFB visual tracking is formulated. In order to learn the proposed model, a novel and efficient backpropagation algorithm is presented based on the loss function of the network. The proposed learning framework enables the network model to be flexible for a custom design. Moreover, it alleviates the dependency on the network trained for classification. Extensive performance analysis shows the efficacy of the proposed custom design in the CFB tracking framework. By fine-tuning the convolutional parts of a state-of-the-art network and integrating this model to a CFB tracker, which is the top performing one of VOT2016, 18% increase is achieved in terms of expected average overlap, and tracking failures are decreased by 25%, while maintaining the superiority over the state-of-the-art methods in OTB-2013 and OTB-2015 tracking datasets."}
{"_id": "2e564b8c27153bb083ff4bf93f3cbc84740b506c", "title": "Internet of intelligent things and robot as a service", "text": "Article history: Received 21 January 2012 Received in revised form 16 March 2012 Accepted 17 March 2012 Available online 17 April 2012"}
{"_id": "ab05a0e95d1f66c06134b2bbf6c1cb518958726e", "title": "Integration of face recognition and sound localization for a smart door phone system", "text": "This paper proposes a smart system using both face recognition and sound localization techniques to identify the faces of visitors from a door phone in much efficient and accurate ways. This system is effectively used to recognize the faces when their locations are out of the boundaries of the camera scope of the door phone. The smart door phone system proposed in this paper uses a visitor's voice source to rotate the camera to his face and then to recognize the face accurately. The integrated system has been designed with one FPGA(Field Programmable Gate Array) chip and tested for actual use in door phone environments."}
{"_id": "3cca070b407147d51e9aa33ffec84bf644617544", "title": "Balance of power: dynamic thermal management for Internet data centers", "text": "Internet-based applications and their resulting multitier distributed architectures have changed the focus of design for large-scale Internet computing. Internet server applications execute in a horizontally scalable topology across hundreds or thousands of commodity servers in Internet data centers. Increasing scale and power density significantly impacts the data center's thermal properties. Effective thermal management is essential to the robustness of mission-critical applications. Internet service architectures can address multisystem resource management as well as thermal management within data centers."}
{"_id": "7633c7470819061477433fdae15c64c8b49a758b", "title": "DTAM: Dense tracking and mapping in real-time", "text": "DTAM is a system for real-time camera tracking and reconstruction which relies not on feature extraction but dense, every pixel methods. As a single hand-held RGB camera flies over a static scene, we estimate detailed textured depth maps at selected keyframes to produce a surface patchwork with millions of vertices. We use the hundreds of images available in a video stream to improve the quality of a simple photometric data term, and minimise a global spatially regularised energy functional in a novel non-convex optimisation framework. Interleaved, we track the camera's 6DOF motion precisely by frame-rate whole image alignment against the entire dense model. Our algorithms are highly parallelisable throughout and DTAM achieves real-time performance using current commodity GPU hardware. We demonstrate that a dense model permits superior tracking performance under rapid motion compared to a state of the art method using features; and also show the additional usefulness of the dense model for real-time scene interaction in a physics-enhanced augmented reality application."}
{"_id": "0e09033f6636217d34fc0222de46a87c108e1c06", "title": "A Theory of Shape by Space Carving", "text": "In this paper we consider the problem of computing the 3D shape of an unknown, arbitrarily-shaped scene from multiple photographs taken at known but arbitrarily-distributed viewpoints. By studying the equivalence class of all 3D shapes that reproduce the input photographs, we prove the existence of a special member of this class, the photo hull, that (1) can be computed directly from photographs of the scene, and (2) subsumes all other members of this class. We then give a provably-correct algorithm, called Space Carving, for computing this shape and present experimental results on complex real-world scenes. The approach is designed to (1) capture photorealistic shapes that accurately model scene appearance from a wide range of viewpoints, and (2) account for the complex interactions between occlusion, parallax, shading, and their view-dependent effects on scene-appearance."}
{"_id": "e05a8d591f6452b4391b0d1bf8b12de42d7df5ec", "title": "New types of acetate-oxidizing, sulfate-reducing Desulfobacter species, D. hydrogenophilus sp. nov., D. latus sp. nov., and D. curvatus sp. nov.", "text": "Sulfate-reducing bacteria with oval to rod-shaped cells (strains AcRS1, AcRS2) and vibrio-shaped cells (strains AcRM3, AcRM4, AcRM5) differing by size were isolated from anaerobic marine sediment with acetate as the only electron donor. A vibrio-shaped type (strain AcKo) was also isolated from freshwater sediment. Two strains (AcRS1, AcRM3) used ethanol and pyruvate in addition to acetate, and one strain (AcRS1) grew autotrophically with H2, sulfate and CO2. Higher fatty acids or lactate were never utilized. All isolates were able to grow in ammonia-free medium in the presence of N2. Nitrogenase activity under such conditions was demonstrated by the acetylene reduction test. The facultatively lithoautotrophic strain (AcRS1), a strain (AcRS2) with unusually large cells (2\u00d75 \u03bcm), and a vibrio-shaped strain (AcRM3) are described as new Desulfobacter species, D. hydrogenophilus, D. latus, and D. curvatus, respectively."}
{"_id": "bbc85ec562c709a86ed4675ca17c1b8e3f8a3c9e", "title": "Offline and Online calibration of Mobile Robot and SLAM Device for Navigation", "text": "Robot navigation technology is required to accomplish difficult tasks in various environments. In navigation, it is necessary to know the information of the external environments and the state of the robot under the environment. On the other hand, various studies have been done on SLAM technology, which is also used for navigation, but also applied to devices for Mixed Reality and the like. In this paper, we propose a robot-device calibration method for navigation with a device using SLAM technology on a robot. The calibration is performed by using the position and orientation information given by the robot and the device. In the calibration, the most efficient way of movement is clarified according to the restriction of the robot movement. Furthermore, we also show a method to dynamically correct the position and orientation of the robot so that the information of the external environment and the shape information of the robot maintain consistency in order to reduce the dynamic error occurring during navigation. Our method can be easily used for various kinds of robots and localization with sufficient precision for navigation is possible with offline calibration and online position correction. In the experiments, we confirm the parameters obtained by two types of offline calibration according to the degree of freedom of robot movement and validate the effectiveness of online correction method by plotting localized position error during robot\u2019s intense movement. Finally, we show the demonstration of navigation using SLAM device."}
{"_id": "caec97674544a4948a1b0ec2b9f6c624b87b647b", "title": "Text Detection and Recognition in Imagery: A Survey", "text": "This paper analyzes, compares, and contrasts technical challenges, methods, and the performance of text detection and recognition research in color imagery. It summarizes the fundamental problems and enumerates factors that should be considered when addressing these problems. Existing techniques are categorized as either stepwise or integrated and sub-problems are highlighted including text localization, verification, segmentation and recognition. Special issues associated with the enhancement of degraded text and the processing of video text, multi-oriented, perspectively distorted and multilingual text are also addressed. The categories and sub-categories of text are illustrated, benchmark datasets are enumerated, and the performance of the most representative approaches is compared. This review provides a fundamental comparison and analysis of the remaining problems in the field."}
{"_id": "40daba541db2cc489aafa533a386d63e869b21ef", "title": "A Neural Network Framework for Fair Classifier", "text": "Machine learning models are extensively being used in decision making, especially for prediction tasks. These models could be biased or unfair towards a specific sensitive group either of a specific race, gender or age. Researchers have put efforts into characterizing a particular definition of fairness and enforcing them into the models. In this work, mainly we are concerned with the following three definitions, Disparate Impact, Demographic Parity and Equalized Odds. Researchers have shown that Equalized Odds cannot be satisfied in calibrated classifiers unless the classifier is perfect. Hence the primary challenge is to ensure a degree of fairness while guaranteeing as much accuracy as possible. Fairness constraints are complex and need not be convex. Incorporating them into a machine learning algorithm is a significant challenge. Hence, many researchers have tried to come up with a surrogate loss which is convex in order to build fair classifiers. Besides, certain papers try to build fair representations by preprocessing the data, irrespective of the classifier used. Such methods, not only require a lot of unrealistic assumptions but also require human engineered analytical solutions to build a machine learning model. We instead propose an automated solution which is generalizable over any fairness constraint. We use a neural network which is trained on batches and directly enforces the fairness constraint as the loss function without modifying it further. We have also experimented with other complex performance measures such as H-mean loss, Q-mean-loss, F-measure; without the need for any surrogate loss functions. Our experiments prove that the network achieves similar performance as state of the art. Thus, one can just plug-in appropriate loss function as per required fairness constraint and performance measure of the classifier and train a neural network to achieve that."}
{"_id": "7950117952550a5484f9c6aae1aa2586a8ab6e4c", "title": "On Modeling and Predicting Individual Paper Citation Count over Time", "text": "Evaluating a scientist\u2019s past and future potential impact is key in decision making concerning with recruitment and funding, and is increasingly linked to publication citation count. Meanwhile, timely identifying those valuable work with great potential before they receive wide recognition and become highly cited papers is both useful for readers and authors in many regards. We propose a method for predicting the citation counts of individual publications, over an arbitrary time period. Our approach explores paper-specific covariates, and a point process model to account for the aging effect and triggering role of recent citations, through which papers lose and gain their popularity, respectively. Empirical results on the Microsoft Academic Graph data suggests that our model can be useful for both prediction and interpretability."}
{"_id": "ca5d54f24bd99ed2aacd63ec0b07d017405b87ae", "title": "Adaptive Knee Joint Exoskeleton Based on Biological Geometries", "text": "This paper presents a relatively complete analytical model of a knee joint interacting with a two-link exoskeleton for investigating the effects of different exoskeleton designs on the internal joint forces/torque in the knee. The closed kinematic chain formed by the leg and exoskeleton has a significant effect on the joint forces/torque in the knee. A bio-joint model is used to capture this effect by relaxing a commonly made assumption that approximates a knee joint as a perfect engineering pin-joint in designing an exoskeleton. Based on the knowledge of a knee-joint kinematics, an adaptive knee-joint exoskeleton has been designed to eliminate negative effects associated with the closed leg-exoskeleton kinematic chain on a human knee. For experimental validation, the flexion motion of an artificial human knee is investigated comparing the performances of five exoskeleton designs against the case with no exoskeleton. Analytical results that estimate internal forces/torque using the kinematic and dynamic models (based on the properties of a knee joint) agree well with data obtained experimentally. This investigation illustrates the applications of the analytical model for designing an adaptive exoskeleton that minimizes internal joint forces due to a knee-exoskeleton interaction."}
{"_id": "8c4e9564807e23681fba4be5cae7710994e8efad", "title": "Constrained Two-dimensional Cutting Stock Problems a Best-first Branch-and-bound Algorithm Constrained Two-dimensional Cutting Stock Problems a Best-first Branch-and-bound Algorithm", "text": "In this work, we develop a new version of the algorithm proposed in 17] for solving exactly some variants of (un)weighted constrained two-dimensional cutting stock problems. We introduce one-dimensional bounded knapsacks in order to obtain an improved initial lower bound for limiting initially the size of the search space, an improved upper bound at each internal node and a symmetry detection for neglecting some useless duplicate patterns. A new eecient data structure of the set representing the best subprob-lems is used to implement the algorithms over the BOB library. The performance of our algorithms is evaluated on some problem instances of the literature and on other randomly generated hard problem instances (a total of 27 problem instances). Les Probl emes de D ecoupe 2-Dimensions Contraints Un Algorithme de Branch-and-Bound meilleur d'abord R esum e : Dans ce travail, nous concevons une nouvelle version de l'algorithme pro-pos e dans 17] pour r esoudre de mani ere exacte des probl emes de d ecoupe en 2-dimensions contraints avec ou sans pond eration. Nous introduisons des sacs a dos unidimension-nels contraints aan d'obtenir une meilleure borne inf erieure initiale pour limiter la taille de l'espace de recherche, une meilleure borne sup erieure pour chaque nnud interne et une d etection de sym etries pour supprimer des conngurations identiques. Une nouvelle structure de donn ees eecace pour g erer les meilleurs sous-probl emes a et e employ ee pour l'impl ementation des algorithmes sur la biblioth eque BOB. La performance de nos algo-rithmes est evalu ee sur 27 instances de probl emes extraites pour certaines de la litt erature et d'autres diiciles, engendr ees al eatoirement."}
{"_id": "f33d5692f2305f108d2d9a35e558a441529949b5", "title": "Developing an intelligent data discriminating system of anti-money laundering based on SVM", "text": "Statistical learning theory (SLT) is introduced to improve the embarrassments of anti-money laundering (AML) intelligence collection. A set of unusual behavior detection algorithm is presented in this paper based on support vector machine (SVM) in order to take the place of traditional predefined-rule suspicious transaction data filtering system. It could efficiently surmount the worst forms of suspicious data analyzing and reporting mechanism among bank branches including enormous data volume, dimensionality disorder with massive variances and feature overload."}
{"_id": "7ca069e923d0fbbc1b700786357c1772ac625ad9", "title": "Specific language impairment: a convenient label for whom?", "text": "BACKGROUND\nThe term 'specific language impairment' (SLI), in use since the 1980s, describes children with language impairment whose cognitive skills are within normal limits where there is no identifiable reason for the language impairment. SLI is determined by applying exclusionary criteria, so that it is defined by what it is not rather than by what it is. The recent decision to not include SLI in DSM-5 provoked much debate and concern from researchers and clinicians.\n\n\nAIMS\nTo explore how the term 'specific language impairment' emerged, to consider how disorders, including SLI, are generally defined and to explore how societal changes might impact on use the term.\n\n\nMETHODS & PROCEDURES\nWe reviewed the literature to explore the origins of the term 'specific language impairment' and present published evidence, as well as new analyses of population data, to explore the validity of continuing to use the term.\n\n\nOUTCOMES & RESULTS AND CONCLUSIONS & IMPLICATIONS\nWe support the decision to exclude the term 'specific language impairment' from DSM-5 and conclude that the term has been a convenient label for researchers, but that the current classification is unacceptably arbitrary. Furthermore, we argue there is no empirical evidence to support the continued use of the term SLI and limited evidence that it has provided any real benefits for children and their families. In fact, the term may be disadvantageous to some due to the use of exclusionary criteria to determine eligibility for and access to speech pathology services. We propose the following recommendations. First, that the word 'specific' be removed and the label 'language impairment' be used. Second, that the exclusionary criteria be relaxed and in their place inclusionary criteria be adopted that take into account the fluid nature of language development particularly in the preschool period. Building on the goodwill and collaborations between the clinical and research communities we propose the establishment of an international consensus panel to develop an agreed definition and set of criteria for language impairment. Given the rich data now available in population studies it is possible to test the validity of these definitions and criteria. Consultation with service users and policy-makers should be incorporated into the decision-making process."}
{"_id": "0b9286a010bee710e74362a35f96dd1c6fee0fdb", "title": "The Optimality of Naive Bayes", "text": "Naive Bayes is one of the most efficient and effective inductive learning algorithms for machine learning and data mining. Its competitive performance in classification is surprising, because the conditional independence assumption on which it is based, is rarely true in realworld applications. An open question is: what is the true reason for the surprisingly good performance of naive Bayes in classification? In this paper, we propose a novel explanation on the superb classification performance of naive Bayes. We show that, essentially, the dependence distribution; i.e., how the local dependence of a node distributes in each class, evenly or unevenly, and how the local dependencies of all nodes work together, consistently (supporting a certain classification) or inconsistently (canceling each other out), plays a crucial role. Therefore, no matter how strong the dependences among attributes are, naive Bayes can still be optimal if the dependences distribute evenly in classes, or if the dependences cancel each other out. We propose and prove a sufficient and necessary conditions for the optimality of naive Bayes. Further, we investigate the optimality of naive Bayes under the Gaussian distribution. We present and prove a sufficient condition for the optimality of naive Bayes, in which the dependence between attributes do exist. This provides evidence that dependence among attributes may cancel out each other. In addition, we explore when naive Bayes works well. Naive Bayes and Augmented Naive Bayes Classification is a fundamental issue in machine learning and data mining. In classification, the goal of a learning algorithm is to construct a classifier given a set of training examples with class labels. Typically, an example E is represented by a tuple of attribute values (x1, x2, , \u00b7 \u00b7 \u00b7 , xn), where xi is the value of attribute Xi. Let C represent the classification variable, and let c be the value of C. In this paper, we assume that there are only two classes: + (the positive class) or \u2212 (the negative class). A classifier is a function that assigns a class label to an example. From the probability perspective, according to Bayes Copyright c \u00a9 2004, American Association for Artificial Intelligence (www.aaai.org). All rights reserved. Rule, the probability of an example E = (x1, x2, \u00b7 \u00b7 \u00b7 , xn) being class c is p(c|E) = p(E|c)p(c) p(E) . E is classified as the class C = + if and only if fb(E) = p(C = +|E) p(C = \u2212|E) \u2265 1, (1) where fb(E) is called a Bayesian classifier. Assume that all attributes are independent given the value of the class variable; that is, p(E|c) = p(x1, x2, \u00b7 \u00b7 \u00b7 , xn|c) = n \u220f"}
{"_id": "1b14373240bb8f50a0442e18c3e6893c6db81a0e", "title": "Using Topic Models in Content-Based News Recommender Systems", "text": "We study content-based recommendation of Finnish news in a system with a very small group of users. We compare three standard methods, Na\u00efve Bayes (NB), K-Nearest Neighbor (kNN) Regression and Regulairized Linear Regression in a novel online simulation setting and in a coldstart simulation. We also apply Latent Dirichlet Allocation (LDA) on the large corpus of news and compare the learned features to those found by Singular Value Decomposition (SVD). Our results indicate that Na\u00efve Bayes is the worst of the three models. K-Nearest Neighbor performs consistently well across input features. Regularized Linear Regression performs generally worse than kNN, but reaches similar performance as kNN with some features. Regularized Linear Regression gains statistically significant improvements over the word-features with LDA both on the full data set and in the cold-start simulation. In the cold-start simulation we find that LDA gives statistically significant improvements for all the methods."}
{"_id": "1ea332630dc12822f04642019097bde59168a2c5", "title": "SCAN: Structure Correcting Adversarial Network for Organ Segmentation in Chest X-Rays", "text": "Chest X-ray (CXR) is one of the most commonly prescribed medical imaging procedures, often with over 2\u2013 10x more scans than other imaging modalities such as MRI, CT scan, and PET scans. These voluminous CXR scans place significant workloads on radiologists and medical practitioners. Organ segmentation is a crucial step to obtain effective computer-aided detection on CXR. In this work, we propose Structure Correcting Adversarial Network (SCAN) to segment lung fields and the heart in CXR images. SCAN incorporates a critic network to impose on the convolutional segmentation network the structural regularities emerging from human physiology. During training, the critic network learns to discriminate between the ground truth organ annotations from the masks synthesized by the segmentation network. Through this adversarial process the critic network learns the higher order structures and guides the segmentation model to achieve realistic segmentation outcomes. Extensive experiments show that our method produces highly accurate and natural segmentation. Using only very limited training data available, our model reaches human-level performance without relying on any existing trained model or dataset. Our method also generalizes well to CXR images from a different patient population and disease profiles, surpassing the current state-of-the-art."}
{"_id": "8ec85cc3f9360ad76842ec9155d4f77d2618d9ea", "title": "Quantizing Convolutional Neural Networks for Low-Power High-Throughput Inference Engines", "text": "Deep learning as a means to inferencing has proliferated thanks to its versatility and ability to approach or exceed human-level accuracy. These computational models have seemingly insatiable appetites for computational resources not only while training, but also when deployed at scales ranging from data centers all the way down to embedded devices. As such, increasing consideration is being made to maximize the computational efficiency given limited hardware and energy resources and, as a result, inferencing with reduced precision has emerged as a viable alternative to the IEEE 754 Standard for Floating-Point Arithmetic. We propose a quantization scheme that allows inferencing to be carried out using arithmetic that is fundamentally more efficient when compared to even half-precision floating-point. Our quantization procedure is significant in that we determine our quantization scheme parameters by calibrating against its reference floating-point model using a single inference batch rather than (re)training and achieve end-to-end post quantization accuracies comparable to the reference model."}
{"_id": "bf9069992134977bf93384620153dfe39d94b7cf", "title": "SCARE: Side-Channel Analysis Based Reverse Engineering for Post-Silicon Validation", "text": "Reverse Engineering (RE) has been historically considered as a powerful approach to understand electronic hardware in order to gain competitive intelligence or accomplish piracy. In recent years, it has also been looked at as a way to authenticate hardware intellectual properties in the court of law. In this paper, we propose a beneficial role of RE in post-silicon validation of integrated circuits (IC) with respect to IC functionality, reliability and integrity. Unlike traditional destructive RE approaches, we propose a fast non-destructive side-channel analysis approach that can hierarchically extract structural information from an IC through its transient current signature. Such a top-down side-channel analysis approach is capable of reliably identifying pipeline stages and functional blocks. It is also suitable to distinguish sequential elements from combinational gates. For extraction of random logic structures (e.g. control blocks and finite state machines) we combine side-channel analysis with logic testing based Boolean function extraction. The proposed approach is amenable to automation, scalable, and can be applied as part of post-silicon validation process to verify that each IC implements exclusively the functionality described in the specification and is free from malicious modification or Trojan attacks. Simulation results on a pipelined DLX processor demonstrate the effectiveness of the proposed approach."}
{"_id": "8ccd1e6e73d5a990e95798ab11f4c92665bca755", "title": "Process models in the practice of distributed software development: A systematic review of the literature", "text": "0950-5849/$ see front matter 2010 Elsevier B.V. A doi:10.1016/j.infsof.2010.03.009 * Corresponding author. E-mail addresses: rafaelp@pucrs.br (R. Prikladnicki) Context: Distributed Software Development (DSD) has recently become an active research area. Although considerable research effort has been made in this area, as yet, no agreement has been reached as to an appropriate process model for DSD. Purpose: This paper is intended to identify and synthesize papers that describe process models for distributed software development in the context of overseas outsourcing, i.e. \u2018\u2018offshoring\u201d. Method: We used a systematic review methodology to search seven digital libraries and one topic-specific conference. Results: We found 27 primary studies describing stage-related DSD process models. Only five of such studies looked into outsourcing to a subsidiary company (i.e. \u2018\u2018internal offshoring\u201d). Nineteen primary studies addressed the need for DSD process models. Eight primary studies and three literature surveys described stage-based DSD process models, but only three of such models were empirically evaluated. Conclusion: We need more research aimed at internal offshoring. Furthermore, proposed models need to be empirically validated. 2010 Elsevier B.V. All rights reserved."}
{"_id": "cf4e854241f88f814d5f8616cc5980d7573539a2", "title": "An integrative imputation method based on multi-omics datasets", "text": "Integrative analysis of multi-omics data is becoming increasingly important to unravel functional mechanisms of complex diseases. However, the currently available multi-omics datasets inevitably suffer from missing values due to technical limitations and various constrains in experiments. These missing values severely hinder integrative analysis of multi-omics data. Current imputation methods mainly focus on using single omics data while ignoring biological interconnections and information imbedded in multi-omics data sets. In this study, a novel multi-omics imputation method was proposed to integrate multiple correlated omics datasets for improving the imputation accuracy. Our method was designed to: 1) combine the estimates of missing value from individual omics data itself as well as from other omics, and 2) simultaneously impute multiple missing omics datasets by an iterative algorithm. We compared our method with five imputation methods using single omics data at different noise levels, sample sizes and data missing rates. The results demonstrated the advantage and efficiency of our method, consistently in terms of the imputation error and the recovery of mRNA-miRNA network structure. We concluded that our proposed imputation method can utilize more biological information to minimize the imputation error and thus can improve the performance of downstream analysis such as genetic regulatory network construction."}
{"_id": "4a74eb59728f0d3a06302c668db44d434bd7d69e", "title": "Money Laundering on the Internet", "text": "Electronic money (or e-money) is money that is represented digitally and can be exchanged by means of smart card from one party to another without the need for an intermediary. It is anticipated that emoney will work just like paper money. One of its potential key features is anonymity. The proceeds of crime that are in the form of e-money could be used, to buy foreign currency and high value goods to be resold. E-money may therefore be used to place dirty money without having to smuggle cash or conduct face to face transactions."}
{"_id": "230239fb61d7a6996ac9552706363323b34735f2", "title": "Reining in the Outliers in Map-Reduce Clusters using Mantri", "text": "Experience from an operational Map-Reduce cluster reveals that outliers signi cantly prolong job completion. \u0088e causes for outliers include run-time contention for processor, memory and other resources, disk failures, varying bandwidth and congestion along network paths and, imbalance in task workload. We present Mantri, a system that monitors tasks and culls outliers using causeand resource-aware techniques. Mantri\u2019s strategies include restarting outliers, network-aware placement of tasks and protecting outputs of valuable tasks. Using real-time progress reports,Mantri detects and acts on outliers early in their lifetime. Early action frees up resources that can be used by subsequent tasks and expedites the job overall. Acting based on the causes and the resource and opportunity cost of actions lets Mantri improve over prior work that only duplicates the laggards. Deployment in Bing\u2019s production clusters and trace-driven simulations show that Mantri improves job completion times by \uf646\uf645\uf642."}
{"_id": "14319018fc6d26d690c4a771a08634bc77306188", "title": "CellSDN : Software-Defined Cellular Networks", "text": "Existing cellular networks suffer from inflexible and expen sive equipment, complex control-plane protocols, and vend orspecific configuration interfaces. In this paper, we argue th at software defined networking (SDN) can simplify the design and management of cellular data networks, while enabling new services. However, supporting many subscribers, frequent mobility, fine-grained measurement and control, and real-time adaptation introduces new scalability challeng es that future SDN architectures should address. As a first step , we present a software-defined cellular network architectur that (i) allows controller applications to express policie s based on the attributes of subscribers, rather than network addre sses and locations, (ii) enables real-time, fine-grained contro l via a local agent on each switch, and (iii) extends switches to support features like deep packet inspection and header com pression to meet the needs of cellular data services, (iv) su pports flexible \u201cslicing\u201d of network resources based on the at tributes of subscribers, rather than the packet header field s, and flexible \u201cslicing\u201d of base stations and radio resources b y having the controller to handle radio resource management, admission control and mobility in each slice."}
{"_id": "34abbfb70b22be3dabb54760d6a2f5a3eb3ab1e8", "title": "Active Shape Models - 'smart snakes'", "text": "We describe 'Active Shape Models' which iteratively adapt to refine estimates of the pose, scale and shape of models of image objects. The method uses flexible models derived from sets of training examples. These models, known as Point Distribution Models, represent objects as sets of labelled points. An initial estimate of the location of the model points in an image is improved by attempting to move each point to a better position nearby. Adjustments to the pose variables and shape parameters are calculated. Limits are placed on the shape parameters ensuring that the example can only deform into shapes conforming to global constraints imposed by the training set. An iterative procedure deforms the model example to find the best fit to the image object. Results of applying the method are described. The technique is shown to be a powerful method for refining estimates of object shape and location."}
{"_id": "5b659bff5c423749af541b3d743158ee117ad94e", "title": "Large-vocabulary audio-visual speech recognition by machines and humans", "text": "We compare automatic recognition with human perception of audio-visual speech, in the large-vocabulary, continuous speech recognition (LVCSR) domain. Specifically, we study the benefit of the visual modality for both machines and humans, when combined with audio degraded by speech-babble noise at various signal-to-noise ratios (SNRs). We first consider an automatic speechreading system with a pixel based visual front end that uses feature fusion for bimodal integration, and we compare its performance with an audio-only LVCSR system. We then describe results of human speech perception experiments, where subjects are asked to transcribe audio-only and audiovisual utterances at various SNRs. For both machines and humans, we observe approximately a 6 dB effective SNR gain compared to the audio-only performance at 10 dB, however such gains significantly diverge at other SNRs. Furthermore, automatic audio-visual recognition outperforms human audioonly speech perception at low SNRs."}
{"_id": "85147a6c93ce64cfe9871f480eef555e2c7d5a4e", "title": "Efficient Evaluation of the Terminal Response of a Twisted-Wire Pair Excited by a Plane-Wave Electromagnetic Field", "text": "Twisted-wire pair (TWP) loops are used as one of the primary communication channels in digital subscriber line (DSL) networks. As part of the development of channel noise models, a transmission-line-based model for predicting the terminal response of a single TWP to an illuminating plane-wave electromagnetic field is presented. Closed-form analytic approximations of the terminal response valid for an arbitrary direction of incidence and polarization of the incoming plane-wave field are provided. A worst-case model with the very same purpose has been previously proposed in the DSL literature; however, a worst-case approach only provides partial information about the field coupling mechanism. The goal of the presented transmission-line model is to introduce a more robust yet efficient approach for characterizing radio noise interference in DSL systems. Computed results for two TWP configurations of interest are presented together with measured results for one of them."}
{"_id": "162fb818aa65132f3a425c2e8539e9633bbc1fa0", "title": "A simple real-word error detection and correction using local word bigram and trigram", "text": "Spelling error is broadly classified in two categories namely non word error and real word error. In this paper a localized real word error detection and correction method is proposed where the scores of bigrams generated by immediate left and right neighbour of the candidate word and the trigram of these three words are combined. A single character position error model is assumed so that if a word W is erroneous then the correct word belongs to the set of real words S generated by single character edit operation on W. The above combined score is calculated also on all members of S. These words are ranked in the decreasing order of the score. By observing the rank and using a rule based approach, the error decision and correction candidates are simultaneously selected. The approach gives comparable accuracy with other existing approaches but is computationally attractive. Since only left and right neighbor are involved, multiple errors in a sentence can also be detected ( if the error occurs in every alternate words )."}
{"_id": "156bf09e393bde29351f2f7375db68467e7c325b", "title": "A Proposal to Measure Success Factors for Location-Based Mobile Cardiac Telemedicine System ( LMCTS )", "text": "Cardiac telemedicine systems facilitate treatment of patients by sending blood pressure and cardiac performance information to hospitals and medical emergency care units. Location-based services can be used in mobile cardiac telemedicine systems to improve efficiency, robustness and accuracy. This paper proposes a combination of mobile telemedicine systems and location-based services that will enable hospitals and emergency departments to do continuous monitoring of a patient\u2019s heart activity; such a system would decrease the probability of critical condition patients in emergency cases and increase the chance of saving lives. In order to evaluate whether this kind of health care system might work, we explore the success factors for such a system by adapting the DeLone & McLean IS success model to the context of location-based mobile system in cardiac healthcare. After reviewing previous works, we identify fourteen factors which can affect the success of a location-based mobile cardiac telemedicine system: Realization of user expectations, Portability, Accessibility, Extent of data processing, Time response, Urgency, Currency, Sufficiency, Understandability, Reliability, Extent of Mobility (spatial, temporal, contextual), System support, Responsiveness, and Assurance. We apply these factors to propose a success model for our location-based mobile cardiac telemedicine system."}
{"_id": "ed73dcf31265f02ad99693da562d02bbb8ee55f9", "title": "Probabilistic Spatial Context Models for Scene Content Understanding", "text": "Scene content understanding facilitates a large number of applications, ranging from content-based image retrieval to other multimedia applications. Material detection refers to the problem of identifying key semantic material types (such as sky, grass, foliage, water, and snow in images. In this paper, we present a holistic approach to determining scene content, based on a set of individual material detection algorithms, as well as probabilistic spatial context models. A major limitation of individual material detectors is the significant number of misclassifications that occur because of the similarities in color and texture characteristics of various material types. We have developed a spatial context-aware material detection system that reduces misclassification by constraining the beliefs to conform to the probabilistic spatial context models. Experimental results show that the accuracy of materials detection is improved by 13% using the spatial context models over the individual material detectors themselves."}
{"_id": "921cd09d4483812e3ae434d37ad98d76ac87b32d", "title": "Security Ontology: Simulating Threats to Corporate Assets", "text": "Threat analysis and mitigation, both essential for corporate security, are time consuming, complex and demand expert knowledge. We present an approach for simulating threats to corporate assets, taking the entire infrastructure into account. Using this approach effective countermeasures and their costs can be calculated quickly without expert knowledge and a subsequent security decisions will be based on objective criteria. The ontology used for the simulation is based on Landwehr\u2019s [ALRL04] taxonomy of computer security and dependability."}
{"_id": "cc98decd850a4fe0d054b0d9bb661b082951e427", "title": "Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition", "text": "Since Chavez proposed the highpass filtering procedure to fuse multispectral and panchromatic images, several fusion methods have been developed based on the same principle: to extract from the panchromatic image spatial detail information to later inject it into the multispectral one. In this paper, we present new fusion alternatives based on the same concept, using the multiresolution wavelet decomposition to execute the detail extraction phase and the intensity-hue-saturation (IHS) and principal component analysis (PCA) procedures to inject the spatial detail of the panchromatic image into the multispectral one. The multiresolution wavelet decomposition has been performed using both decimated and undecimated algorithms and the resulting merged images compared both spectral and spatially. These fusion methods, as well as standard IHS-, PCA-, and wavelet-based methods have been used to merge Systeme Pour l'Observation de la Terre (SPOT) 4 XI and SPOT 4 M images with a ratio 4:1. We have estimated the validity of each fusion method by analyzing, visually and quantitatively, the quality of the resulting fused images. The methodological approaches proposed in this paper result in merged images with improved quality with respect to those obtained by standard IHS, PCA, and standard wavelet-based fusion methods. For both proposed fusion methods, better results are obtained when an undecimated algorithm is used to perform the multiresolution wavelet decomposition."}
{"_id": "4ae8a49a272d985878d7f5d20440840dcff9d3af", "title": "Deep Learning for Acoustic Echo Cancellation in Noisy and Double-Talk Scenarios", "text": "Traditional acoustic echo cancellation (AEC) works by identifying an acoustic impulse response using adaptive algorithms. We formulate AEC as a supervised speech separation problem, which separates the loudspeaker signal and the nearend signal so that only the latter is transmitted to the far end. A recurrent neural network with bidirectional long short-term memory (BLSTM) is trained to estimate the ideal ratio mask from features extracted from the mixtures of near-end and farend signals. A BLSTM estimated mask is then applied to separate and suppress the far-end signal, hence removing the echo. Experimental results show the effectiveness of the proposed method for echo removal in double-talk, background noise, and nonlinear distortion scenarios. In addition, the proposed method can be generalized to untrained speakers."}
{"_id": "2275762a28582716db92df6d525ed2481c7d7f14", "title": "Factorization meets the neighborhood: a multifaceted collaborative filtering model", "text": "Recommender systems provide users with personalized suggestions for products or services. These systems often rely on Collaborating Filtering (CF), where past transactions are analyzed in order to establish connections between users and products. The two more successful approaches to CF are latent factor models, which directly profile both users and products, and neighborhood models, which analyze similarities between products or users. In this work we introduce some innovations to both approaches. The factor and neighborhood models can now be smoothly merged, thereby building a more accurate combined model. Further accuracy improvements are achieved by extending the models to exploit both explicit and implicit feedback by the users. The methods are tested on the Netflix data. Results are better than those previously published on that dataset. In addition, we suggest a new evaluation metric, which highlights the differences among methods, based on their performance at a top-K recommendation task."}
{"_id": "6883a910d49afa95e7e21078bb8e6aa37ea4ef63", "title": "Blending liquids", "text": "We present a method for smoothly blending between existing liquid animations. We introduce a semi-automatic method for matching two existing liquid animations, which we use to create new fluid motion that plausibly interpolates the input. Our contributions include a new space-time non-rigid iterative closest point algorithm that incorporates user guidance, a subsampling technique for efficient registration of meshes with millions of vertices, and a fast surface extraction algorithm that produces 3D triangle meshes from a 4D space-time surface. Our technique can be used to instantly create hundreds of new simulations, or to interactively explore complex parameter spaces. Our method is guaranteed to produce output that does not deviate from the input animations, and it generalizes to multiple dimensions. Because our method runs at interactive rates after the initial precomputation step, it has potential applications in games and training simulations."}
{"_id": "202d24310e30333391d966b5979358d5bee713c5", "title": "Self-referenced laser system for optical 3D seafloor mapping", "text": "Underwater high resolution 3D mapping implies a very accurate sensor pose estimation to fuse the sensor readings over time into one consistent model. In the case of optical mapping systems, the needed accuracy can easily lie outside of the specification of Doppler Velocity Logs normally used for pose estimation by remotely operated and autonomous underwater vehicles. This is especially the case for difficult terrains or for low altitude situations. Therefore, to improve performance in such situations, a self-referenced laser system is proposed performing micro navigation to significantly improve the resolution of optical underwater 3D seabed mapping."}
{"_id": "6bed44cc1bed40a23f598d1a6c6a2b3ebb72bdc9", "title": "How is pneumonia diagnosed in clinical stroke research? A systematic review and meta-analysis.", "text": "BACKGROUND AND PURPOSE\nDiagnosis of pneumonia complicating stroke is challenging, and there are currently no consensus diagnostic criteria. As a first step in developing such consensus-based diagnostic criteria, we undertook a systematic review to identify the existing diagnostic approaches to pneumonia in recent clinical stroke research to establish the variation in diagnosis and terminology.\n\n\nMETHODS\nStudies of ischemic stroke, intracerebral hemorrhage, or both, which reported occurrence of pneumonia from January 2009 to March 2014, were considered and independently screened for inclusion by 2 reviewers after multiple searches using electronic databases. The primary analysis was to identify existing diagnostic approaches for pneumonia. Secondary analyses explored potential reasons for any heterogeneity where standard criteria for pneumonia had been applied.\n\n\nRESULTS\nSixty-four studies (56% ischemic stroke, 6% intracerebral hemorrhage, 38% both) of 639\u2009953 patients were included. Six studies (9%) reported no information on the diagnostic approach, whereas 12 (19%) used unspecified clinician-reported diagnosis or initiation of antibiotics. The majority used objective diagnostic criteria: 20 studies (31%) used respiratory or other published standard criteria; 26 studies (41%) used previously unpublished ad hoc criteria. The overall occurrence of pneumonia was 14.3% (95% confidence interval 13.2%-15.4%; I(2)=98.9%). Occurrence was highest in studies applying standard criteria (19.1%; 95% confidence interval 15.1%-23.4%; I(2)=98.5%). The substantial heterogeneity observed was not explained by stratifying for other potential confounders.\n\n\nCONCLUSIONS\nWe found considerable variation in terminology and the diagnostic approach to pneumonia. Our review supports the need for consensus development of operational diagnostic criteria for pneumonia complicating stroke."}
{"_id": "b974e5db6acf996079adac89395286c1e96b5b97", "title": "Review of Portable and Low-Cost Sensors for the Ambient Air Monitoring of Benzene and Other Volatile Organic Compounds", "text": "This article presents a literature review of sensors for the monitoring of benzene in ambient air and other volatile organic compounds. Combined with information provided by stakeholders, manufacturers and literature, the review considers commercially available sensors, including PID-based sensors, semiconductor (resistive gas sensors) and portable on-line measuring devices as for example sensor arrays. The bibliographic collection includes the following topics: sensor description, field of application at fixed sites, indoor and ambient air monitoring, range of concentration levels and limit of detection in air, model descriptions of the phenomena involved in the sensor detection process, gaseous interference selectivity of sensors in complex VOC matrix, validation data in lab experiments and under field conditions."}
{"_id": "05f0c711c4788e14d1be3ca0e6d3c5ae8ad05cc1", "title": "Study of Hand Gesture Recognition Technologies and Applications for Human Vehicle Interaction", "text": "This paper describes the primary and secondary driving task together with Human Machine Interface (HMI) trends and issues which are driving automotive user interface designers to consider hand gesture recognition as a realistic alternative for user controls. A number of hand gesture recognition technologies and applications for Human Vehicle Interaction (HVI) are also discussed including a summary of current automotive hand gesture recognition research."}
{"_id": "b7bceac24ca6ec76c50d42602e8566efa671cb76", "title": "k-Stacks: High-density valet parking for automated vehicles", "text": "Automated valet parking not only improves driving comfort, but can have a considerable impact on the urban landscape by reducing the required parking space. We present the first study of parking space optimization for automated valet parking with an in-depth theoretical analysis of the parking lot properties under various aspects, inluding the worst-case extraction time, total shunting distance, and the number of shunting operations (each per car). Most importantly, the proposed model bounds all these values. We verify the theoretical properties of our model in four simulated scenarios, one of which is based on real-world data from a downtown parking garage. We show that very good pick-up times of about 1 min are possible with very little overhead in terms of shunting distance and time, while providing a significantly improved parking density as compared to conventional parking lots."}
{"_id": "cbe8b5a1f88660373d63c31a81c505e13cd4dfa6", "title": "Text Document Categorization by Term Association", "text": "A goodtext classifieris a classifierthat efficiently categorizeslarge setsof text documentsin a reasonabletime frameand with an acceptableaccuracy, and that provides classificationrules that are humanreadablefor possible fine-tuning. If the training of the classifier is also quick, this couldbecomein someapplicationdomainsa goodasset for the classifier . Many techniquesand algorithmsfor automatictext categorizationhavebeendevised.According to publishedliterature, somearemoreaccuratethanothers, andsomeprovidemore interpretableclassificationmodels thanothers. However, nonecancombineall thebeneficial propertiesenumer atedabove. In this paper, we presenta novel approach for automatictext categorizationthat borrowsfrom market basket analysistechniquesusingassociation rule miningin thedata-miningfield. We focuson two major problems:(1) findingthebesttermassociationrules in a textual databaseby generating and pruning; and (2) using the rules to build a text classifier . Our text categorization methodprovesto beefficientandeffective, andexperimentsonwell-knowncollectionsshowthattheclassifier performswell. In addition,trainingaswell asclassification arebothfastandthegeneratedrulesarehumanreadable."}
{"_id": "3958ae8784db5b44a1f3d0b59d049f9414b1b0bd", "title": "Indexing and Query Processing Techniques in Spatio-Temporal Data", "text": "Indexing and query processing is an emerging research field in spatio temporal data. Most of the real-time applications such as location based services, fleet management, traffic prediction and radio frequency identification and sensor networks are based on spatiotemporal indexing and query processing. All the indexing and query processing applications is any one of the forms, such as spatio index access and supporting queries or spatio-temporal indexing method and support query or temporal dimension, while in spatial data it is considered as the second priority. In this paper, give the survey of the various uncertain indexing and query processing techniques. Most of the existing survey works on spatio-temporal are based on indexing methods and query processing, but presented separately. Both the indexing and querying are related, hence state of art of both the indexing and query processing techniques are considered together. This paper gives the details of spatio-temporal data classification, various types of indexing methods, query processing, application areas and research direction of spatio-temporal indexing and query"}
{"_id": "f1d498d4a7c58b4189da12786b29510341171a14", "title": "Single image haze removal with WLS-based edge-preserving smoothing filter", "text": "Images captured under hazy conditions have low contrast and poor color. This is primarily due to air-light which degrades image quality according to the transmission map. The approach to enhance these hazy images we introduce here is based on the `Dark-Channel Prior' method with image refinement by the `Weighted Least Square' based edge-preserving smoothing. Local contrast is further enhanced by multi-scale tone manipulation. The proposed method improves the contrast, color and detail for the entire image domain effectively. In the experiment, we compare the proposed method with conventional methods to validate performance."}
{"_id": "1ba9363080a3014a5d793aa9455e8db32478b7cc", "title": "On a Methodology for Robust Segmentation of Nonideal Iris Images", "text": "Iris biometric is one of the most reliable biometrics with respect to performance. However, this reliability is a function of the ideality of the data. One of the most important steps in processing nonideal data is reliable and precise segmentation of the iris pattern from remaining background. In this paper, a segmentation methodology that aims at compensating various nonidealities contained in iris images during segmentation is proposed. The virtue of this methodology lies in its capability to reliably segment nonideal imagery that is simultaneously affected with such factors as specular reflection, blur, lighting variation, occlusion, and off-angle images. We demonstrate the robustness of our segmentation methodology by evaluating ideal and nonideal data sets, namely, the Chinese Academy of Sciences iris data version 3 interval subdirectory, the iris challenge evaluation data, the West Virginia University (WVU) data, and the WVU off-angle data. Furthermore, we compare our performance to that of our implementation of Camus and Wildes's algorithm and Masek's algorithm. We demonstrate considerable improvement in segmentation performance over the formerly mentioned algorithms."}
{"_id": "82a46a91665af46f6251005e11c3d64589e5edd1", "title": "Digital Image Enhancement and Noise Filtering by Use of Local Statistics", "text": "Computational techniques involving contrast enhancement and noise filtering on two-dimensional image arrays are developed based on their local mean and variance. These algorithms are nonrecursive and do not require the use of any kind of transform. They share the same characteristics in that each pixel is processed independently. Consequently, this approach has an obvious advantage when used in real-time digital image processing applications and where a parallel processor can be used. For both the additive and multiplicative cases, the a priori mean and variance of each pixel is derived from its local mean and variance. Then, the minimum mean-square error estimator in its simplest form is applied to obtain the noise filtering algorithms. For multiplicative noise a statistical optimal linear approximation is made. Experimental results show that such an assumption yields a very effective filtering algorithm. Examples on images containing 256 \u00d7 256 pixels are given. Results show that in most cases the techniques developed in this paper are readily adaptable to real-time image processing."}
{"_id": "65d1cb55c861bd5ac67c443aa6a0185de4f03324", "title": "Automatic Signature Verification: The State of the Art - 1989-1993", "text": "This paper is a follow up to an article published in 1989 by R. Plamondon and G. Lorette on the state of the art in automatic signature verification and writer identification. It summarizes the activity from year 1989 to 1993 in automatic signature verification. For this purpose, we report on the different projects dealing with dynamic, static and neural network approaches. In each section, a brief description of the major investigations is given."}
{"_id": "52e8b1cb5c4f6b43b13db9c322861ad53aaa28e9", "title": "Photo Aesthetics Ranking Network with Attributes and Content Adaptation", "text": "A Aesthetics & Attribute Database (AADB) Fusing Attributes and Content for Aesthetics Ranking Demo, code and model can be download through project webpage http://www.ics.uci.edu/~skong2/aesthetics.html References: [8] He, K., Zhang, X., Ren, S., Sun, J., ECCV, 2014 [15] Lu, X., Lin, Z., Jin, H., Yang, J., Wang, J., IEEE Trans. on Multimedia, 2015 [16] Lu, X., Lin, Z., Jin, H., Yang, J., Wang, J.Z., ACMMM, 2014 [17] Lu, X., Lin, Z., Shen, X., Mech, R., Wang, J.Z., ICCV, 2015 [23] Murray, N., Marchesotti, L., Perronnin, F., CVPR, 2012 Acknowledgements: This work was supported by Adobe gift fund, NSF grants DBI-1262547 and IIS1253538. Experimental Results We use Spearman's rho rank correlation ( ) to measure ranking performance . By thresholding the rating scores, we achieve state-of-the-art classification accuracy on AVA despite never training with a classification loss. We first train a simple model with Euclidean loss for numerical rating of photo aesthetics (a) fine-tuning with rank loss Based on the regression net, we apply rank loss to fine-tune the network"}
{"_id": "21f9b472fd25dcd75943d5da7f344cf23cfacabf", "title": "The Homogeneous Chaos Author ( s ) :", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "8d52a55b61439ab6ee96f869720aa54b3f4058a0", "title": "The Wiener-Askey Polynomial Chaos for Stochastic Differential Equations", "text": "We present a new method for solving stochastic differential equations based on Galerkin projections and extensions of Wiener\u2019s polynomial chaos. Specifically, we represent the stochastic processes with an optimum trial basis from the Askey family of orthogonal polynomials that reduces the dimensionality of the system and leads to exponential convergence of the error. Several continuous and discrete processes are treated, and numerical examples show substantial speed-up compared to Monte Carlo simulations for low dimensional stochastic inputs."}
{"_id": "e5ba4113f3ea291ad4877acbbe2217babaac56f4", "title": "Modeling Multibody Dynamic Systems With Uncertainties . Part I : Theoretical and Computational Aspects", "text": "This study explores the use of generalized polynomial chaos theory for modeling complex nonlinear multibody dynamic systems in the presence of parametric and external uncertainty. The polynomial chaos framework has been chosen because it offers an efficient computational approach for the large, nonlinear multibody models of engineering systems of interest, where the number of uncertain parameters is relatively small, while the magnitude of uncertainties can be very large (e.g., vehicle-soil interaction). The proposed methodology allows the quantification of uncertainty distributions in both time and frequency domains, and enables the simulations of multibody systems to produce results with \u201cerror bars\u201d. The first part of this study presents the theoretical and computational aspects of the polynomial chaos methodology. Both unconstrained and constrained formulations of multibody dynamics are considered. Direct stochastic collocation is proposed as less expensive alternative to the traditional Galerkin approach. It is established that stochastic collocation is equivalent to a stochastic response surface approach. We show that multi-dimensional basis functions are constructed as tensor products of one-dimensional basis functions and discuss the treatment of polynomial and trigonometric nonlinearities. Parametric uncertainties are modeled by finite-support probability densities. Stochastic forcings are discretized using truncated Karhunen-Loeve expansions. The companion paper \u201cModeling Multibody Dynamic Systems With Uncertainties. Part II: Numerical Applications\u201d illustrates the use of the proposed methodology on a selected set of test problems. The overall conclusion is that despite its limitations, polynomial chaos is a powerful approach for the simulation of multibody systems with uncertainties."}
{"_id": "f8d332548ef65a73e63bc04ece86b8504c32074c", "title": "High dimensional polynomial interpolation on sparse grids", "text": "We study polynomial interpolation on a d-dimensional cube, where d is large. We suggest to use the least solution at sparse grids with the extrema of the Chebyshev polynomials. The polynomial exactness of this method is almost optimal. Our error bounds show that the method is universal, i.e., almost optimal for many different function spaces. We report on numerical experiments for d = 10 using up to 652 065 interpolation points."}
{"_id": "4fe57e44d71c1b351d112a622dacd934f5a84fd7", "title": "A human parietal face area contains aligned head-centered visual and tactile maps", "text": "Visually guided eating, biting and kissing, and avoiding objects moving toward the face and toward which the face moves require prompt, coordinated processing of spatial visual and somatosensory information in order to protect the face and the brain. Single-cell recordings in parietal cortex have identified multisensory neurons with spatially restricted, aligned visual and somatosensory receptive fields, but so far, there has been no evidence for a topographic map in this area. Here we mapped the organization of a multisensory parietal face area in humans by acquiring functional magnetic resonance images while varying the polar angle of facial air puffs and close-up visual stimuli. We found aligned maps of tactile and near-face visual stimuli at the highest level of human association cortex\u2014namely, in the superior part of the postcentral sulcus. We show that this area may code the location of visual stimuli with respect to the face, not with respect to the retina.*NOTE: In the version of this article initially published online, there was an error in the affiliation in the html version. The first affiliation should read Department of Cognitive Science, University of California San Diego, La Jolla, California 92093, USA. The error has been corrected online."}
{"_id": "87f2aced42e4c2fc17e8b9c16c51a4d9d3f01fbf", "title": "Deconstructing Dynamic Symbolic Execution", "text": "Dynamic symbolic execution (DSE) is a well-known technique for automatically generating tests to achieve higher levels of coverage in a program. Two keys ideas of DSE are to: (1) seed symbolic execution by executing a program on an initial input; (2) using concrete values from the program execution in place of symbolic expressions whenever symbolic reasoning is hard or not desired. We describe DSE for a simple core language and then present a minimalist implementation of DSE for Python (in Python) that follows this basic recipe. The code is available at https://www.github.com/thomasjball/PyExZ3/ (tagged \u201cv1.0\u201d) and has been designed to make it easy to experiment with and extend."}
{"_id": "f3ef161eea5efee306b9579b3e3a0362ef771af3", "title": "Electronic health records implementation: An evaluation of information system impact and contingency factors", "text": "OBJECTIVE\nThis paper provides a review of EHR (electronic health record) implementations around the world and reports on findings including benefits and issues associated with EHR implementation.\n\n\nMATERIALS AND METHODS\nA systematic literature review was conducted from peer-reviewed scholarly journal publications from the last 10 years (2001-2011). The search was conducted using various publication collections including: Scopus, Embase, Informit, Medline, Proquest Health and Medical Complete. This paper reports on our analysis of previous empirical studies of EHR implementations. We analysed data based on an extension of DeLone and McLean's information system (IS) evaluation framework. The extended framework integrates DeLone and McLean's dimensions, including information quality, system quality, service quality, intention of use and usage, user satisfaction and net benefits, together with contingent dimensions, including systems development, implementation attributes and organisational aspects, as identified by Van der Meijden and colleagues.\n\n\nRESULTS\nA mix of evidence-based positive and negative impacts of EHR was found across different evaluation dimensions. In addition, a number of contingent factors were found to contribute to successful implementation of EHR.\n\n\nLIMITATIONS\nThis review does not include white papers or industry surveys, non-English papers, or those published outside the review time period.\n\n\nCONCLUSION\nThis review confirms the potential of this technology to aid patient care and clinical documentation; for example, in improved documentation quality, increased administration efficiency, as well as better quality, safety and coordination of care. Common negative impacts include changes to workflow and work disruption. Mixed observations were found on EHR quality, adoption and satisfaction. The review warns future implementers of EHR to carefully undertake the technology implementation exercise. The review also informs healthcare providers of contingent factors that potentially affect EHR development and implementation in an organisational setting. Our findings suggest a lack of socio-technical connectives between the clinician, the patient and the technology in developing and implementing EHR and future developments in patient-accessible EHR. In addition, a synthesis of DeLone and McLean's framework and Van der Meijden and colleagues' contingent factors has been found useful in comprehensively understanding and evaluating EHR implementations."}
{"_id": "806d3f986e08c9208cab68b994a250619062083c", "title": "Probabilistic Models for Expert Finding", "text": "A common task in many applications is to find persons who are knowledgeable about a given topic (i.e., expert finding). In this paper, we propose and develop a general probabilistic framework for studying expert finding problem and derive two families of generative models (candidate generation models and topic generation models) from the framework. These models subsume most existing language models proposed for expert finding. We further propose several techniques to improve the estimation of the proposed models, including incorporating topic expansion, using a mixture model to model candidate mentions in the supporting documents, and defining an email count-based prior in the topic generation model. Our experiments show that the proposed estimation strategies are all effective to improve retrieval accuracy."}
{"_id": "9aafcf0cb5702c01fe7cbd591bd4e752ac5b8986", "title": "Improving Learning and Inference in a Large Knowledge-Base using Latent Syntactic Cues", "text": "Automatically constructed Knowledge Bases (KBs) are often incomplete and there is a genuine need to improve their coverage. Path Ranking Algorithm (PRA) is a recently proposed method which aims to improve KB coverage by performing inference directly over the KB graph. For the first time, we demonstrate that addition of edges labeled with latent features mined from a large dependency parsed corpus of 500 million Web documents can significantly outperform previous PRAbased approaches on the KB inference task. We present extensive experimental results validating this finding. The resources presented in this paper are publicly available."}
{"_id": "6facf2019631fadf9d90165631c1667c40ebc902", "title": "Semantic Framework for Comparison Structures in Natural Language", "text": "Comparison is one of the most important phenomena in language for expressing objective and subjective facts about various entities. Systems that can understand and reason over comparative structure can play a major role in the applications which require deeper understanding of language. In this paper we present a novel semantic framework for representing the meaning of comparative structures in natural language, which models comparisons as predicate-argument pairs interconnected with semantic roles. Our framework supports not only adjectival, but also adverbial, nominal, and verbal comparatives. With this paper, we provide a novel dataset of gold-standard comparison structures annotated according to our semantic framework."}
{"_id": "39c2e3182960a54973c1943ebc8f574157e0179e", "title": "Cross-Entropy Clustering", "text": "We build a general and highly applicable clustering theory, which we call cross-entropy clustering (shortly CEC) which joins advantages of classical kmeans (easy implementation and speed) with those of EM (affine invariance and ability to adapt to clusters of desired shapes). Moreover, contrary to k-means and EM, CEC finds the optimal number of clusters by automatically removing groups which carry no information. Although CEC, similarly like EM, can be build on an arbitrary family of densities, in the most important case of Gaussian CEC the division into clusters is affine invariant, while the numerical complexity is comparable to that of k-means."}
{"_id": "92f98b189cec1220d479e3079b942e71b244aa65", "title": "Model-based vision: a program to see a walking person", "text": "For a machine to be able to \u2018see\u2019, it must know something about the object it is \u2018looking\u2019 at, A common method in machine vision is to provide the machine with general rather than specific knowledge about the object. An alternative technique, and the one used in thti paper, b a model-based approach in which particulars aboat the object are given and this drives the analysis. The computer program described here, the WALKER model, maps images into a description in which a person is represented by the series of hierarchical levels, i.e. a person has an arm which has a lower-arm which has a hand. l7te performance of the program ti illustrated by superimposing the machinegenerated picture over the original photographic images."}
{"_id": "610b3022f09362a7280e861580fa8388b56f43dc", "title": "What is Coordination Theory and How Can It Help Design Cooperative Work Systems?", "text": "It is possible to design cooperative work tools based only on \u201ccommon sense\u201d and good intuitions. But the history of technology is replete with examples of good theories greatly aiding the development of useful technology. Where, then, might we look for theories to help us design computer-supported cooperative work tools? In this paper, we will describe one possible perspective\u2014the interdisciplinary study of coordination\u2014that focuses, in part, on how people work together now and how they might do so differently with new information technologies.\nIn one sense, there is little that is new about the study of coordination. Many different disciplines\u2014including computer science, sociology, political science, management science, systems theory, economics, linguistics, and psychology\u2014have all dealt, in one way or another, with fundamental questions about coordination. Furthermore, several previous writers have suggested that theories about coordination are likely to be important for designing cooperative work tools (e.g., [Holt88], [Wino86]).\nWe hope to suggest here, however, that the potential for fruitful interdisciplinary connections concerning coordination is much greater than has as yet been widely appreciated. For instance, we believe that fundamentally similar coordination phenomena arise\u2014unrecognized as such\u2014in many of the fields listed above. Though a single coherent body of theory about coordination does not yet exist, many different disciplines could both contribute to and benefit from more general theories of coordination. Of particular interest to researchers in the field of computer-supported cooperative work is the prospect of drawing on a much richer body of existing and future work in these fields than has previously been suggested.\nIn this paper, we will first describe what we mean by \u201ccoordination theory\u201d and give examples of how previous research on computer-supported cooperative work can be interpreted from this perspective. We will then suggest one way of developing this perspective further by proposing tentative definitions of coordination and analyzing its components in more detail."}
{"_id": "bb21a57edd10c042bd137b713fcbf743021ab232", "title": "The More You Know: Using Knowledge Graphs for Image Classification", "text": "One characteristic that sets humans apart from modern learning-based computer vision algorithms is the ability to acquire knowledge about the world and use that knowledge to reason about the visual world. Humans can learn about the characteristics of objects and the relationships that occur between them to learn a large variety of visual concepts, often with few examples. This paper investigates the use of structured prior knowledge in the form of knowledge graphs and shows that using this knowledge improves performance on image classification. We build on recent work on end-to-end learning on graphs, introducing the Graph Search Neural Network as a way of efficiently incorporating large knowledge graphs into a vision classification pipeline. We show in a number of experiments that our method outperforms standard neural network baselines for multi-label classification."}
{"_id": "17856bad4335c8ca7419aab2c715ea25ce5e0621", "title": "ECDSA Security in Bitcoin and Ethereum : a Research Survey", "text": "This survey discusses the security level of the signature scheme implemented in Bitcoin and Ethereum."}
{"_id": "74f661e3213529c7ad54e2794bc28d7ef13f90e0", "title": "Cryolipolysis for Targeted Fat Reduction and Improved Appearance of the Enlarged Male Breast.", "text": "BACKGROUND\nPseudogynecomastia refers to benign male breast enlargement due to excess subareolar fat. Standard treatment is surgical excision under general anesthesia, liposuction, or a combination of both.\n\n\nOBJECTIVE\nThe safety and efficacy of cryolipolysis was investigated for nonsurgical treatment of pseudogynecomastia.\n\n\nMETHODS AND MATERIALS\nEnrollment consisted of 21 males with pseudogynecomastia. Subjects received a first treatment consisting of a 60-minute cryolipolysis cycle, followed by a two-minute massage, and a second 60-minute cycle with 50% treatment area overlap. At 60 days of follow-up, subjects received a second 60-minute treatment. Safety was evaluated by monitoring side effects and adverse events. Efficacy was assessed by ultrasound, clinical photographs, and subject surveys.\n\n\nRESULTS\nSurveys revealed that 95% of subjects reported improved visual appearance and 89% reported reduced embarrassment associated with pseudogynecomastia. Ultrasound showed mean fat layer reduction of 1.6 \u00b1 1.2 mm. Blinded reviewers correctly identified 82% of baseline photographs. Side effects included mild discomfort during treatment and transient paresthesia and tenderness. One case of paradoxical hyperplasia (PH) occurred but likelihood of PH in the male breast is not believed to be greater than in any other treatment area.\n\n\nCONCLUSION\nThis study demonstrated feasibility of cryolipolysis for safe, effective, and well-tolerated nonsurgical treatment of pseudogynecomastia."}
{"_id": "93c07463c3058591d577743a9c05a7782b25fb5b", "title": "Variational Inference for Uncertainty on the Inputs of Gaussian Process Models", "text": "The Gaussian process latent variable model (GP-LVM) provides a flexible approach for non-linear dimensionality reduction that has been widely applied. However, the current approach for training GP-LVMs is based on maximum likelihood, where the latent projection variables are maximized over rather than integrated out. In this paper we present a Bayesian method for training GP-LVMs by introducing a non-standard variational inference framework that allows to approximately integrate out the latent variables and subsequently train a GP-LVM by maximizing an analytic lower bound on the exact marginal likelihood. We apply this method for learning a GP-LVM from iid observations and for learning non-linear dynamical systems where the observations are temporally correlated. We show that a benefit of the variational Bayesian procedure is its robustness to overfitting and its ability to automatically select the dimensionality of the nonlinear latent space. The resulting framework is generic, flexible and easy to extend for other purposes, such as Gaussian process regression with uncertain inputs and semi-supervised Gaussian processes. We demonstrate our method on synthetic data and standard machine learning benchmarks, as well as challenging real world datasets, including high resolution video data."}
{"_id": "e95ddfc74e7a7ba8708fdf56c09cac65706c4e96", "title": "Building a case-based diet recommendation system without a knowledge engineer", "text": "We present a new approach to the effective development of menu construction systems that allow to automatically construct a menu that is strongly tailored to the individual requirements and food preferences of a client. In hospitals and other health care institutions dietitians develop diets for clients which need to change their eating habits. Many clients have special needs in regards to their medical conditions, cultural backgrounds, or special levels of nutrient requirements for better recovery from diseases or surgery, etc. Existing computer support for this task is insufficient-many diets are not specifically tailored for the client's needs or require substantial time of a dietitian to be manually developed. Our approach is based on case-based reasoning, an artificial intelligence technique that finds increasing entry into industrial practice. Our approach goes beyond the traditional case-based reasoning (CBR) approach by allowing an incremental improvement of the system's competency during routine use of the system. The improvement of the system takes place through a direct expert user-system interaction while the expert is accomplishing their tasks of constructing a diet for a given client. Whenever the system performs unsatisfactorily, the expert will need to modify the system-produced diet 'manually', i.e. by entering the desired modifications into the system. Our implemented system, menu construction using an incremental knowledge acquisition system (MIKAS), asks the expert for simple explanations for each of the manual actions he/she takes and incorporates the explanations automatically into its knowledge base (KB) so that the system will perform these manually conducted actions automatically at the next occasion. We present MIKAS and discuss the results of our case study. While still being a prototype, the senior clinical dietitian involved in our evaluation studies judges the approach to have considerable potential to improve the daily routine of hospital dietitians as well as to improve the average quality of the dietary advice given to patients within the limited available time for dietary consultations. Our approach opens up a new avenue towards building highly specialised CBR systems in a more cost-effective way. Hence, our approach promises to allow a significantly more widespread development and practical deployment of CBR systems in a large variety of application domains including many medical applications."}
{"_id": "228af90bbd22f12743d65b2abb45ce407323d2ab", "title": "A Revised Discrete Particle Swarm Optimization for Cloud Workflow Scheduling", "text": "A cloud workflow system is a type of platform service which facilitates the automation of distributed applications based on the novel cloud infrastructure. Compared with grid environment, data transfer is a big overhead for cloud workflows due to the market-oriented business model in the cloud environments. In this paper, a Revised Discrete Particle Swarm Optimization (RDPSO) is proposed to schedule applications among cloud services that takes both data transmission cost and computation cost into account. Experiment is conducted with a set of workflow applications by varying their data communication costs and computation costs according to a cloud price model. Comparison is made on make span and cost optimization ratio and the cost savings with RDPSO, the standard PSO and BRS (Best Resource Selection) algorithm. Experimental results show that the proposed RDPSO algorithm can achieve much more cost savings and better performance on make span and cost optimization."}
{"_id": "4356893d96e116b8aedd918e243f8d165cce1da2", "title": "Single-stage posterior corpectomy and expandable cage placement for treatment of thoracic or lumbar burst fractures.", "text": "STUDY DESIGN\nA prospective study was performed.\n\n\nOBJECTIVE\nTo assess an unusual technique for corpectomy and expandable cage placement via single-stage posterior approach in acute thoracic or lumbar burst fractures.\n\n\nSUMMARY AND BACKGROUND DATA\nBurst fractures represent 10% to 20% of all spine injuries at or near the thoracolumbar junction, and can cause neurologic complications and kyphotic deformity. The goal of surgical intervention is to decompress the neural elements, restore vertebral body height, correct angular deformity, and stabilize the columns of the spine.\n\n\nMETHODS\nThe study comprised 14 patients (8 women and 6 men aged 40.3 years) who had 1 spinal burst fracture between T8 and L4 and who underwent single-stage posterior corpectomy, circumferential reconstruction with expandable-cage placement, and transpedicle screwing between January 2003 and May 2005. Neurologic status was classified using the American Spinal Injury Association (ASIA) impairment scale and functional outcomes were analyzed using a visual analogue scale (VAS) for pain. The kyphotic angle (alpha) and lordotic angle (beta) were measured in the thoracic or thoracolumbar and lumbar regions, respectively. RESULTS.: The mean follow-up time was 24 months (range, 12-48 months). Neurologic status was in 7 patients (preop: ASIA-E, postop: unchanged), 2 patients (preop: ASIA-D, postop: 1 unchanged, 1 improved to ASIA-E), 3 patients (preop: ASIA-C, postop: 2 improved to ASIA-D, 1 improved to ASIA-E), 2 patients (preop: ASIA-B, postop: 1 improved to ASIA-C, 1 unchanged). The mean operative time was 187.8 minutes. The mean blood loss was 596.4 mL. Regarding postoperative complications, 1 patient experienced transient worsening of neurologic deficits and 1 patient developed pseudarthrosis. The mean preoperative VAS score was 8.21 and the mean postoperative VAS score was 2.66 (P < 0.05). The mean preoperative kyphotic angle for the 11 individuals with the thoracic or thoracolumbar burst fractures was 24.6 degrees and the mean preoperative lordotic angle for the 3 individuals with lumbar burst fractures was 10.6 degrees. The corresponding values at 12 months postsurgery were 17.1 degrees and 13.6 degrees.\n\n\nCONCLUSION\nThis single-stage posterior approach for acute thoracic and lumbar burst fractures offers some advantages over the classic combined anterior-posterior approach. The results from this small series suggest that a single-stage posterior approach should be considered in select cases."}
{"_id": "4cb57ae570214b24f0bf7e96e19b3cdc28697bb3", "title": "A holistic approach to service survivability", "text": "We present SABER (Survivability Architecture: Block, Evade, React), a proposed survivability architecture that blocks, evades and reacts to a variety of attacks by using several security and survivability mechanisms in an automated and coordinated fashion. Contrary to the ad hoc manner in which contemporary survivable systems are built-using isolated, independent security mechanisms such as firewalls, intrusion detection systems and software sandboxes-SABER integrates several different technologies in an attempt to provide a unified framework for responding to the wide range of attacks malicious insiders and outsiders can launch.\n This coordinated multi-layer approach will be capable of defending against attacks targeted at various levels of the network stack, such as congestion-based DoS attacks, software-based DoS or code-injection attacks, and others. Our fundamental insight is that while multiple lines of defense are useful, most conventional, uncoordinated approaches fail to exploit the full range of available responses to incidents. By coordinating the response, the ability to survive successful security breaches increases substantially.\n We discuss the key components of SABER, how they will be integrated together, and how we can leverage on the promising results of the individual components to improve survivability in a variety of coordinated attack scenarios. SABER is currently in the prototyping stages, with several interesting open research topics."}
{"_id": "b41b3fe42675d5cd1e74463098b2f6b0008cf46f", "title": "Hourly rounding to improve nursing responsiveness: a systematic review.", "text": "UNLABELLED\nThe aims of this study were to synthesize the evidence concerning the effect of hourly rounding programs on patient satisfaction with nursing care and discuss implications for nurse administrators.\n\n\nBACKGROUND\nPatient satisfaction is a key metric that influences both hospital ratings and reimbursement. Studies have suggested that purposeful nursing rounds can improve patient satisfaction, but the evidence to date has not been systematically examined.\n\n\nMETHODS\nA systematic review of published literature and GRADE analysis of evidence regarding nursing rounds were conducted.\n\n\nRESULTS\nThere is little consistency in how results of hourly rounds were measured, precluding quantitative analysis. There is moderate-strength evidence that hourly rounding programs improve patients' perception of nursing responsiveness. There is also moderate-strength evidence that these programs reduce patient falls and call light use.\n\n\nCONCLUSIONS\nNurse administrators should consider implementing an hourly rounding program while controlled trials discern the most cost-effective approach."}
{"_id": "1445e2a1c25d29ac309904606b0b468de9f196b5", "title": "Extreme gigantomastia in pregnancy: case report\u2014my experience with two cases in last 5\u00a0years", "text": "We present an extreme case of gigantomastia in pregnancy during the second gemelar pregnancy of a 30-year-old woman. Her first pregnancy was 8\u00a0years ago, was also gemelar and she delivered with caesarean section. From the beginning of her current pregnancy, the patient noted steady growth of both of her breasts that reached enormous dimensions at the end of the pregnancy. This kind of breast changes did not occur during her first pregnancy. The patient also suffered from myasthenia gravis that was in remission during this pregnancy, without any therapy. The patient was in the 38\u00a0weeks of gestation, and a delivery with caesarean section was performed in line with the reduction of her breasts. The main reasons that led me to perform these two interventions as one act were the fact that puerperal mastitis could develop on these enormous breasts, further the small regression of these huge breasts during the bromocriptine treatment, as well as the intention to avoid other operative traumas, considering possibility of exacerbation of myasthenia gravis. I had already performed bilateral reduction mammaplasty with free areola-nipple graft, when a tissue with total weight of 20\u00a0kg (2\u00a0\u00d7\u00a010\u00a0kg) was removed. The patient had an excellent post-operation recovery course."}
{"_id": "091dc8fb55fbf82f52c3842ca5cf8322dfd3ee2d", "title": "Classification of copy move forgery and normal images by ORB features and SVM classifier", "text": "Today, the characterization of the technological age is done by the digital images spread. They are the most common form of conveying information whether through internet, newspapers, magazines, or scientific journals. They are used as a strong proof of various crimes and as evidence used for various purposes. The modification, capturing or creating of the image has become easier and available with the emergence of means of image editing and processing tools. One of the most important and popular types of image forgery is a copy-move forgery in which an image part is copied and then pasted into the same image that has the intention of hiding something important or showing a false scene. Because the important properties of the copied parts come from the same image, such as brightness, noise, and texture which will be compatible with the entire image that makes more difficult for experts for the detection and distinguishing the alteration. Usually, the detecting copy move forgery conventional techniques suffer severely from the time-consuming problem. The evaluation of the improved method had been done using (150) images that were selected from two different datasets, \u201cCoMoFoD\u201d and \u201cMICC-F2000\u201d. Experimental results show that the improved method can accurately and quickly reveal the doubled regions of a tampered image. In addition, greatly reducing the processing time in comparison to the Khan algorithm, and the accuracy is kept at the same level. Owing to the availability and technological advancement of the image editing sophisticated tools, there is an increase in the loss of authentication in digital images. Thus, this led us to the proposal of different detection techniques that checks whether the digital images are forged or authentic. The specific type of forgery technique is copy move forgery in which widely used research topic is detection under digital image forensics. In this thesis, an enhancement of copy move image forgery classification is done by implementing hybrid features with classification algorithms like SIFT with SVM and EM algorithm and ORB with SVM and EM.The technique works by applying Firstly the DCT on an image and then on a resultant image, SIFT is obtained after applying DCT. A supervised learning method is proposed for classifying a copy-move image forgery of TIFF, JPEG, and BMP. The process starts with reducing the color of the photos. Achieve the accuracy more than 90%."}
{"_id": "fe094250815255a9dbd49c495aa77a2fff467817", "title": "Strength-based interventions : Their importance in application to the gifted", "text": "Positive psychology has revived psychology\u2019s abandoned interest in the study of morally positively valued traits (the so-called character strengths) and virtues. We review literature generated on strength-based approaches and focus on applications in the so-called positive psychology interventions. There seems to be great potential in this approach for research in the field of giftedness and, of course, also when practically working with gifted children and adolescents. We highlight some ideas for future research directions. DOI: https://doi.org/10.1177/0261429416640334 Posted at the Zurich Open Repository and Archive, University of Zurich ZORA URL: https://doi.org/10.5167/uzh-128345 Accepted Version Originally published at: Proyer, Ren\u00e9 T; Gander, Fabian; Tandler, Nancy (2017). Strength-based interventions: Their importance in application to the gifted. Gifted Education International, 33(2):118-130. DOI: https://doi.org/10.1177/0261429416640334 Strength-based interventions: Their importance in application to the gifted Ren\u00e9 T. Proyer Martin-Luther University Halle-Wittenberg, Germany University of Zurich, Switzerland Fabian Gander University of Zurich, Switzerland Nancy Tandler Martin-Luther University Halle-Wittenberg, Germany Ren\u00e9 T. Proyer is at the Department of Psychology at the Martin-Luther University Halle-Wittenberg (Germany) and the Department of Psychology at the University of Zurich (Switzerland). Fabian Gander is at Department of Psychology at the University of Zurich (Switzerland). Nany Tandler is at the Department of Psychology at the Martin-Luther University Halle-Wittenberg (Germany) The preparation of this paper has been facilitated by research grants of the Swiss National Science Foundation (SNSF; 100014_132512 and 100014_149772) awarded to RTP. Correspondence concerning this article should be addressed to Ren\u00e9 Proyer, MartinLuther University Halle-Wittenberg (Germany), Emil-Abderhalden-Stra\u00dfe 26-27, 06108 Halle (Saale), Germany; E-mail: rene.proyer@psych.uni-halle.de"}
{"_id": "59b471a0d1e457ad4a058ebc46f6d28ddfa01ec1", "title": "Identifying Personality Traits , and Especially Traits Resulting in Violent Behavior through Automatic Handwriting Analysis", "text": "Handwriting analysis is a process that has been carried out for centuries. But its effectiveness when analyzing the behavior and personality of an individual is still a debate. Is it possible to detect potential deviant behavior and personality traits of an individual by carrying out an analysis of his/her handwriting? There are two methods of handwriting analysis: Graphology is the method of psychological analysis, while forensic document examination or handwriting identification which is the examination of documents and writing samples by a known source, or person. In this paper we have carried out research of the various state of the art technologies available in analyzing an individual's behavior based on their handwriting and the effectiveness of predicting the character and personality of that individual. We also wanted to determine if we can uncover handedness, authorship and gender through analysis. Apart from working on Lewinson-Zubin method of analyzing handwriting, various online tools are also available for handwriting analysis, such as: NEURO SCRIPT, WANDA, CEDAR-FOX, and Gaussian Mixture Model."}
{"_id": "8b8a377706d4d8675ccc440d8c1da0d5f949ae0a", "title": "Handwriting Analysis based on Segmentation Method for Prediction of Human Personality using Support Vector Machine", "text": "Handwriting analysis is a method to predict personality of an author and to better understand the writer. Allograph and allograph combination analysis is a scientific method of writer identification and evaluating the behavior. To make this computerized we considered six main different types of features: (i) size of letters, (ii) slant of letters and words, (iii) baseline, (iv) pen pressure, (v) spacing between letters and (vi) spacing between words in a document to identify the personality of the writer. Segmentation is used to calculate the features from digital handwriting and is trained to SVM which outputs the behavior of the writer. For this experiment 100 different writers were used for different handwriting data samples. The proposed method gives about 94% of accuracy rate with RBF kernel. In this paper an automatic method has been proposed to predict the psychological personality of the writer. The system performance is measured under two different conditions with the same sample."}
{"_id": "5d8859fe23d0c5a8642107e44ac5d966e54c8e85", "title": "On-Line and Off-Line Handwriting Recognition: A Comprehensive Survey", "text": "\u00d0Handwriting has continued to persist as a means of communication and recording information in day-to-day life even with the introduction of new technologies. Given its ubiquity in human transactions, machine recognition of handwriting has practical significance, as in reading handwritten notes in a PDA, in postal addresses on envelopes, in amounts in bank checks, in handwritten fields in forms, etc. This overview describes the nature of handwritten language, how it is transduced into electronic data, and the basic concepts behind written language recognition algorithms. Both the on-line case (which pertains to the availability of trajectory data during writing) and the off-line case (which pertains to scanned images) are considered. Algorithms for preprocessing, character and word recognition, and performance with practical systems are indicated. Other fields of application, like signature verification, writer authentification, handwriting learning tools are also considered. Index Terms\u00d0Handwriting recognition, on-line, off-line, written language, signature verification, cursive script, handwriting learning tools, writer authentification."}
{"_id": "85bffadd76058cac93c19342813360b4af1863bb", "title": "Automated Human Behavior Prediction through Handwriting Analysis", "text": "Handwriting Analysis or Graphology is a scientific method of identifying, evaluating and understanding personality through the strokes and patterns revealed by handwriting. Handwriting reveals the true personality including emotional outlay, fears, honesty, defenses and many others. Professional handwriting examiners called graphologist often identify the writer with a piece of handwriting. Accuracy of handwriting analysis depends on how skilled the analyst is. Although human intervention in handwriting analysis has been effective, it is costly and prone to fatigue. Hence the proposed methodology focuses on developing a tool for behavioral analysis which can predict the personality traits automatically with the aid of a computer without the human intervention. In this paper a method has been proposed to predict the personality of a person from the baseline, the pen pressure, the letter\u2018t\u2019, the lower loop of letter \u2018y\u2019 and the slant of the writing as found in an individual\u2019s handwriting. These parameters are the inputs to a Rule-Base which outputs the personality trait of the writer."}
{"_id": "67e382209578e4d9efd4a3bba215a8d398743b50", "title": "Human Action Recognition under Log-Euclidean Riemannian Metric", "text": "This paper presents a new action recognition approach based on local spatio-temporal features. The main contributions of our approach are twofold. First, a new local spatio-temporal feature is proposed to represent the cuboids detected in video sequences. Specifically, the descriptor utilizes the covariance matrix to capture the self-correlation information of the low-level features within each cuboid. Since covariance matrices do not lie on Euclidean space, the Log-Euclidean Riemannian metric is used for distance measure between covariance matrices. Second, the Earth Mover\u2019s Distance (EMD) is used for matching any pair of video sequences. In contrast to the widely used Euclidean distance, EMD achieves more robust performances in matching histograms/distributions with different sizes. Experimental results on two datasets demonstrate the effectiveness of the proposed approach."}
{"_id": "db0b5802c512c7058cc45df3f0e6a75817375035", "title": "How Personal Experience and Technical Knowledge Affect Using Conversational Agents", "text": "Conversational agents (CA) use dialogues to interact with users so as to offer an experience of naturalistic interaction. However, due to the low transparency and poor explanability of mechanism inside CA, individual's understanding of CA's capabilities may affect how the individual interacts with CA and the sustainability of CA use. To examine how users' understanding affect perceptions and experiences of using CA, we conducted a laboratory study asking 41 participants performed a set of tasks using Apple Siri. We independently manipulated two factors: (1) personal experience of using CA, and (2) technical knowledge about CA's system model. We conducted mixed-method analyses of post-task usability measures and interviews, and confirmed that use experience and technical knowledge affects perceived usability and mental models differently."}
{"_id": "5f5e18de6c7fd5cf459dabd2a6f0bae555467aa5", "title": "Dynamic TXOP HCCA reclaiming scheduler with transmission time estimation for IEEE 802.11e real-time networks", "text": "IEEE 802.11e HCCA reference scheduler guarantees Quality of Service only for Constant Bit Rate traffic streams, whereas its assignment of scheduling parameters (transmission time TXOP and polling period) is too rigid to serve Variable Bit Rate (VBR) traffic.\n This paper presents a new scheduling algorithm, Dynamic TXOP HCCA (DTH). Its scheduling scheme, integrated with the centralized scheduler, uses both a statistical estimation of needed transmission duration and a bandwidth reclaiming mechanism with the aim of improving the resource management and providing an instantaneous dynamic Transmission Opportunity (TXOP), tailored to multimedia applications with variable bit rate. Performance evaluation through simulation, confirmed by the scheduling analysis, shows that DTH is suitable to reduce the transmission queues length. This positively impacts on the delay and on packets drop rate experienced by VBR traffic streams."}
{"_id": "4b0ab5024ab4fa75f3027717ed3e5af49d44f5fe", "title": "Netradar - Measuring the wireless world", "text": "There exists a number of network measurement tools and web-based services for end users. Almost all of them focus on network bandwidth and possibly latency, and simply report the results to the user. This paper presents Netradar, a somewhat different measurement service. We analyze network bandwidth and latency but also over ten other parameters. We dwell into the data and show various coverage maps and statistics on our web site for the benefit of the end users. Our long term goal is to be able to answer the simple question: why did I get the indicated result at a given time and place?"}
{"_id": "d8237600841361f7811f5fd9effaed9d2e6e34b0", "title": "A Behavioral Model of Rational Choice", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "e0f05b84b81a0416b849e604b14f92d573c51635", "title": "Building Domain-Specific Search Engines with Machine Learning Techniques", "text": "Domain-specific search engines are growing in popularity because they offer increased accuracy and extra functionality not possible with the general, Web-wide search engines. For example, www.campsearch.com allows complex queries by age-group, size, location and cost over .summer camps. Unfortunately these domain-specific search engines are difficult and timeconsuming to maintain. This paper proposes the use of machine learning techniques to greatly automate the creation and maintenance of domain-specific search engines. We describe new research in reinforcement learning, information extraction and text classification that enables efficient spidering, identifying informative text segments, and populating topic hierarchies. Using these techniques, we have built a demonstration system: a search engine for computer science research papers. It already contaius over 50,000 papers and is publicly available at ~w. cora.justres earch, com."}
{"_id": "6bda201198dc42f77e8f6132c492daed947308e0", "title": "Dynamic Identification of Manipulator : Comparison between CAD and Actual Parameters", "text": "It is essential to know the dynamic parameters of the robot for its precise control and simulation. Philosophy of identification is based on finding the model using its input-output data. The identification equation of the manipulator is derived from Newton-Euler equations, using manipulator kinematic, i.e., geometric parameters and joint values as input and joint torque data as output. In this paper, the dynamic parameters are identified for the CAD model provided by the robot manufacturer in simulation. And experimentally for the installed seven degrees of freedom (DOF) robot KUKA-iiwaR800. The variation between the joint torques predicted from the estimated base parameters obtained using CAD model and actual robot are presented. The factors responsible for the variation are also highlighted."}
{"_id": "49a19fe67d8ef941af598e87775112f0b3191091", "title": "An Efficient Algorithm for Disease Prediction with Multi Dimensional Data", "text": "The main objective of this study is to create a fast, easy and an efficient algorithm for disease prediction , with less error rate and can apply with even large data sets and show reasonable patterns with dependent variables. For disease identification and prediction in data mining a new hybrid algorithm was constructed. The Disease Identification and Prediction(DIP) algorithm, which is the combination of decision tree and association rule is used to predict the chances of some of the disease hits in some particular areas. It also shows the relationship between different parameters for the prediction. To implement this algorithm using vb.net software was also developed."}
{"_id": "043c48305a51564b24e8413ddbc49c8440b4a788", "title": "Stemming of Amharic Words for Information Retrieval", "text": "This paper presents a stemmer for processing document and query words to facilitate searching databases of Amharic text. An iterative stemmer has been developed that involves the removal of both prefixes and suffixes and that also takes account of letter inconsistency and reiterative verb forms. Application of the stemmer to a test file of 1221 words suggested that appropriate stems were generated for ca. 95 per cent of them, with only limited overstemming and understemming. Lit&Ling 17/1 001-017 FINAL 23/7/02 3:33 pm Page 1"}
{"_id": "6a1a3bc5bf1122436fac7831b7e35a795498c804", "title": "Graph-based Approach to Automatic Taxonomy Generation (GraBTax)", "text": "We propose a novel graph-based approach for constructing concept hierarchy from a large text corpus. Our algorithm, GraBTax, incorporates both statistical co-occurrences and lexical similarity in optimizing the structure of the taxonomy. To automatically generate topic-dependent taxonomies from a large text corpus, GraBTax first extracts topical terms and their relationships from the corpus. The algorithm then constructs a weighted graph representing topics and their associations. A graph partitioning algorithm is then used to re-cursively partition the topic graph into a tax-onomy. For evaluation, we apply GraBTax to articles, primarily computer science, in the CiteSeerX digital library and search engine. The quality of the resulting concept hierarchy is assessed by both human judges and comparison with Wikipedia categories."}
{"_id": "4bc0a9a4da43ac7e42a65fa14560641ca570ac5d", "title": "Research on metadata-driven data quality assessment architecture", "text": "So far there have never been systematic evaluation criteria and entire assessment and evaluation system in Data quality internationally. Based on the research on the relative content of both international and domestic data quality, this article analyzes the requests of data quality for the large enterprises. First of all, this paper raises and builds a complete data quality assessment system. Second, the definitions and specific algorithms of data quality assessment indicators and poses data quality analysis are built to evaluate the architecture and processes. A frame structure of data quality meta-model is presented in this paper. In addition, this paper also designs an evaluation system. This system includes the classification and definition of data quality and the algorithm in evaluation index and the system and process of data quality evaluation. This paper provides credibility basis for enterprises in evaluation of data quality."}
{"_id": "9b4b3ef8e6e319380fed816779ec0f66b13349fc", "title": "Cortical mechanism for the visual guidance of hand grasping movements in the monkey: A reversible inactivation study.", "text": "Picking up an object requires two basic motor operations: reaching and grasping. Neurophysiological studies in monkeys have suggested that the visuomotor transformations necessary for these two operations are carried out by separate parietofrontal circuits and that, for grasping, a key role is played by a specific sector of the ventral premotor cortex: area F5. The aim of the present study was to test the validity of this hypothesis by reversibly inactivating area F5 in monkeys trained to grasp objects of different shape, size and orientation. In separate sessions, the hand field of the primary motor cortex (area F1 or area 4) was also reversibly inactivated. The results showed that after inactivation of area F5 buried in the bank of the arcuate sulcus (the F5 sector where visuomotor neurones responding to object presentation are located), the hand shaping preceding grasping was markedly impaired and the hand posture was not appropriate for the object size and shape. The monkeys were eventually able to grasp the objects, but only after a series of corrections made under tactile control. With small inactivations the deficits concerned the contralesional hand, with larger inactivations the ipsilateral hand as well. In addition, there were signs of peripersonal neglect in the hemispace contralateral to the inactivation site. Following inactivation of area F5 lying on the cortical convexity (the F5 sector where visuomotor neurones responding to action observation, 'mirror neurones', are found) only a motor slowing was observed, the hand shaping being preserved. The inactivation of the hand field of area F1 produced a severe paralysis of contralateral finger movements with hypotonia. The results of this study indicate the crucial role of the ventral premotor cortex in visuomotor transformations for grasping movements. More generally, they provide strong support for the notion that distal and proximal movement organization relies upon distinct cortical circuits. Clinical data on distal movement deficits in humans are re-examined in the light of the present findings."}
{"_id": "6d2021bb6874d0ce01e6c45d0dccad2c3539528e", "title": "Pegasus: A framework for mapping complex scientific workflows onto distributed systems", "text": "This paper describes the Pegasus framework that can be used to map complex scientific workflows onto distributed resources. Pegasus enables users to represent the workflows at an abstract level without needing to worry about the particulars of the target execution systems. The paper describes general issues in mapping applications and the functionality of Pegasus. We present the results of improving application performance through workflow restructuring which clusters multiple tasks in a workflow into single entities. A real-life astronomy application is used as the basis for the study."}
{"_id": "dfb9eec6c6ae7d3e07123045c3468c9b57b2a7e2", "title": "Stability of Drugs and Dosage Forms", "text": ""}
{"_id": "a6d85378ac408a9f6c18780d50fcd323bf64fc53", "title": "Implementation of 3D Obstacle Compliant Mobility Models for UAV Networks in ns-3", "text": "UAV networks are envisioned to play a crucial role in the future generations of wireless networks. The mechanical degrees of freedom in the movement of UAVs provides various advantages for tactical and civilian applications. Due to the high cost of failures in system-based tests, initial analysis and refinement of designs and algorithms for UAV applications are performed through rigorous simulations. Current trend of UAV specific simulators is mainly biased towards the mechanical properties of flying. For network-centric simulations, the intended measurements on the performance of protocols in mobile scenarios are conventionally captured from general-purpose network simulators, which are not natively equipped with comprehensive models for 3D movements of UAVs. To facilitate such simulations for UAV systems, this paper presents different mobility models for emulation of the movement of a UAV. Detailed description of three mobility models (random walk, random direction, and Gauss-Markov) are presented, and their associated movement patterns are characterized. This characterization is further extended by considering the effect of large obstacles on movement patterns of nodes following the three models. The mobility models are prepared as open-source add-ons for ns-3 network simulator."}
{"_id": "36ed02a62984f7343de82927d34d49de4405b117", "title": "A Monocephalus Diprosopus Fetus: Antenatal Sonographic Findings -", "text": "Monocephalus Diprosopus is the rarest form of conjoined twins. The etiology of this anomaly is stil obscure. We herein report a monocephalus diprosopus case that was diagnosed in week 19 of pregnancy was presented due to its rarity and the significance of its prenatal diagnosis. Recommended a pregnancy termination since it does not have a definitive treatment today, prenatal diagnosis made for such fetuses at an early stage bears importance in terms of lowering the severity of psychological trauma."}
{"_id": "c25d70366d2edaf597f82bb03312b31ba9045cb5", "title": "The Self-Serv Environment for Web Services Composition", "text": "The composition of Web services to handle complex transactions such as finance, billing, and traffic information services is gaining considerable momentum as a way to enable business-to-business (B2B) collaborations.1 Web services allow organizations to share costs, skills, and resources by joining their applications and systems.2 Although current technologies provide the foundation for building composite services, several issues still need to be addressed:"}
{"_id": "fd819f5f952b6e46b0cfe9fcab83e1501f4cb3ef", "title": "Millimeter-wave antenna in package on low-cost organic substrate for flip chip chip scale (FCCSP) package", "text": "This work proposes a broadband microstrip patch antenna array with H-shaped aperture-coupled stacked rectangular patches is implemented to low-cost and low-loss organic substrate for flip chip chip scale (FCCSP) package for millimeter-wave antenna in package (AiP) application. The proposed antenna array is built on a low-cost and low-loss multilayer organic substrate for flip chip chip scale (FCCSP) package. The return loss of < \u221210 dB bandwidth of the proposed antenna array is from 55 to 69 GHz, which covers the requirement of IEEE 802.11ad standard. The cross-polarization levels in both and planes are better than 30 dB. The front-to-back ratio of the antenna radiation pattern is better than 20 dB. The prospoed microstrip patch antenna can achieve a wide bandwidth, low cross-polarization levels, low backward radiation levels, and easy integration with radio frequency (RF) circuits. This work provides a cost-effective approach for fifth-generation mobile communications (5G), IEEE 802.11ad, such as smart phone, laptops, and wireless router."}
{"_id": "11a0dabaacc40144adcae5df3b06873236acca8f", "title": "Question Asking During Tutoring", "text": "Whereas it is well documented that student question asking is infrequent in classroom environments, there is little research on questioning processes during tutoring. The present study investigated the questions asked in tutoring sessions on research methods (college students) and algebra (7th graders). Student questions were approximately 240 times as frequent in tutoring settings as classroom settings, whereas tutor questions were only slightly more frequent than teacher questions. Questions were classified by (a) degree of specification , (b) content, and (c) question-generation mechanism to analyze their quality. Student achievement was positively correlated with the quality of student questions after students had some experience with tutoring, but the frequency of questions was not correlated with achievement. Students partially self-regulated their learning by identifying knowledge deficits and asking questions to repair them, but they need training to improve these skills. We identified some ways that tutors and teachers might improve their question-asking skills. specializations are discourse processes and questioning mechanisms."}
{"_id": "31900b62fabf7da87573e93e473dd72cc68f24fa", "title": "Sentiment Analysis in MOOC Discussion Forums: What does it tell us?", "text": "Sentiment analysis is one of the great accomplishments of the last decade in the field of Language Technologies. In this paper, we explore mining collective sentiment from forum posts in a Massive Open Online Course (MOOC) in order to monitor students\u2019 trending opinions towards the course and major course tools, such as lecture and peer-assessment. We observe a correlation between sentiment ratio measured based on daily forum posts and number of students who drop out each day. On a user-level, we evaluate the impact of sentiment on attrition over time. A qualitative analysis clarifies the subtle differences in how these language behaviors are used in practice across three MOOCs. Implications for research and practice are discussed."}
{"_id": "540c90efdf4c2a5c1f63e69cdd56fcbc79291f94", "title": "Visualizing patterns of student engagement and performance in MOOCs", "text": "In the last five years, the world has seen a remarkable level of interest in Massive Open Online Courses, or MOOCs. A consistent message from universities participating in MOOC delivery is their eagerness to understand students' online learning processes. This paper reports on an exploratory investigation of students' learning processes in two MOOCs which have different curriculum and assessment designs. When viewed through the lens of common MOOC learning analytics, the high level of initial student interest and, ultimately, the high level of attrition, makes these two courses appear very similar to each other, and to MOOCs in general. With the goal of developing a greater understanding of students' patterns of learning behavior in these courses, we investigated alternative learning analytic approaches and visual representations of the output of these analyses. Using these approaches we were able to meaningfully classify student types and visualize patterns of student engagement which were previously unclear. The findings from this research contribute to the educational community's understanding of students' engagement and performance in MOOCs, and also provide the broader learning analytics community with suggestions of new ways to approach learning analytic data analysis and visualization."}
{"_id": "82ff00b247f05d486e02f36bbca498eb2992b778", "title": "Active-Constructive-Interactive: A Conceptual Framework for Differentiating Learning Activities", "text": "Active, constructive, and interactive are terms that are commonly used in the cognitive and learning sciences. They describe activities that can be undertaken by learners. However, the literature is actually not explicit about how these terms can be defined; whether they are distinct; and whether they refer to overt manifestations, learning processes, or learning outcomes. Thus, a framework is provided here that offers a way to differentiate active, constructive, and interactive in terms of observable overt activities and underlying learning processes. The framework generates a testable hypothesis for learning: that interactive activities are most likely to be better than constructive activities, which in turn might be better than active activities, which are better than being passive. Studies from the literature are cited to provide evidence in support of this hypothesis. Moreover, postulating underlying learning processes allows us to interpret evidence in the literature more accurately. Specifying distinct overt activities for active, constructive, and interactive also offers suggestions for how learning activities can be coded and how each kind of activity might be elicited."}
{"_id": "318eb2148da32a4de2c5912733dfe936bdd66987", "title": "Major life changes and behavioral markers in social media: case of childbirth", "text": "We explore the harnessing of social media as a window on changes around major life events in individuals and larger populations. We specifically examine patterns of activity, emotional, and linguistic correlates for childbirth and postnatal course. After identifying childbirth events on Twitter, we analyze daily posting patterns and language usage before and after birth by new mothers, and make inferences about the status and dynamics of changes in emotions expressed following childbirth. We find that childbirth is associated with some changes for most new mothers, but approximately 15% of new mothers show significant changes in their online activity and emotional expression postpartum. We observe that these mothers can be distinguished by linguistic changes captured by shifts in a relatively small number of words in their social media posts. We introduce a greedy differencing procedure to identify the type of language that characterizes significant changes in these mothers during postpartum. We conclude with a discussion about how such characterizations might be applied to recognizing and understanding health and well-being in women following childbirth."}
{"_id": "3f787bfac8f9d35a0cf1fe66cc275b6ed3e26819", "title": "Human microbiomes and their roles in dysbiosis, common diseases, and novel therapeutic approaches", "text": "The human body is the residence of a large number of commensal (non-pathogenic) and pathogenic microbial species that have co-evolved with the human genome, adaptive immune system, and diet. With recent advances in DNA-based technologies, we initiated the exploration of bacterial gene functions and their role in human health. The main goal of the human microbiome project is to characterize the abundance, diversity and functionality of the genes present in all microorganisms that permanently live in different sites of the human body. The gut microbiota expresses over 3.3 million bacterial genes, while the human genome expresses only 20 thousand genes. Microbe gene-products exert pivotal functions via the regulation of food digestion and immune system development. Studies are confirming that manipulation of non-pathogenic bacterial strains in the host can stimulate the recovery of the immune response to pathogenic bacteria causing diseases. Different approaches, including the use of nutraceutics (prebiotics and probiotics) as well as phages engineered with CRISPR/Cas systems and quorum sensing systems have been developed as new therapies for controlling dysbiosis (alterations in microbial community) and common diseases (e.g., diabetes and obesity). The designing and production of pharmaceuticals based on our own body's microbiome is an emerging field and is rapidly growing to be fully explored in the near future. This review provides an outlook on recent findings on the human microbiomes, their impact on health and diseases, and on the development of targeted therapies."}
{"_id": "bf5230d345c1f1e3d876ab240f5e087e5d6a629b", "title": "Human motion recovery utilizing truncated schatten p-norm and kinematic constraints", "text": "Human motion capture (mocap) data, which records the movements of joints of a human body, has been widely used in many areas. However, the raw captured data inevitably contains missing data due to the limitations of capture systems. Recently, by exploiting the low-rank prior embedded in mocap data, several approaches have resorted to the low-rank matrix completion (LRMC) to fill in the missing data and obtained encouraging results. In order to solve the resulting rank minimization problem which is known as NP-hard, all existing methods use the convex nuclear norm as the surrogate of rank to pursue the convexity of the objective function. However, the nuclear norm, which ignores physical interpretations of singular values and has over-shrinking problem, obtains less accurate approximation than its nonconvex counterpart. Therefore, this paper presents a nonconvex LRMC based method wherein we exploit the state-of-art nonconvex truncated schattenp norm to approximate rank. Moreover, we add two significant constraints to the low-rank based model to preserve the spatial-temporal properties and structural characteristics embedded in human motion. We also develop a framework based on alternating direction multiplier method (ADMM) to solve the resulting nonconvex problem. Extensive experiment results demonstrate that the proposed method significantly outperforms the existing state-of-art methods in terms of both recovery error and agreement with human intuition. \u00a9 2018 Elsevier Inc. All rights reserved."}
{"_id": "1008e1b5d8d30a7d2f3a3113521e840c58d7b4ac", "title": "Self-Adjusting Binary Search Trees", "text": "The splay tree, a self-adjusting form of binary search tree, is developed and analyzed. The binary search tree is a data structure for representing tables and lists so that accessing, inserting, and deleting items is easy. On an n-node splay tree, all the standard search tree operations have an amortized time bound of O(log n) per operation, where by \u201camortized time\u201d is meant the time per operation averaged over a worst-case sequence of operations. Thus splay trees are as efficient as balanced trees when total running time is the measure of interest. In addition, for sufficiently long access sequences, splay trees are as efficient, to within a constant factor, as static optimum search trees. The efficiency of splay trees comes not from an explicit structural constraint, as with balanced trees, but from applying a simple restructuring heuristic, called splaying, whenever the tree is accessed. Extensions of splaying give simplified forms of two other data structures: lexicographic or multidimensional search trees and link/cut trees."}
{"_id": "d29ddce3ce4d8b85fcf0036d13e17a97506c6257", "title": "Minimal intervention dentistry: part 4. Detection and diagnosis of initial caries lesions", "text": "The detection of carious lesions is focused on the identification of early mineral changes to allow the demineralisation process to be managed by non-invasive interventions. The methods recommended for clinical diagnosis of initial carious lesions are discussed and illustrated. These include the early detection of lesions, evaluation of the extent of the lesion and its state of activity and the establishment of appropriate monitoring. The place of modern tools, including those based on fluorescence, is discussed. These can help inform patients. They are also potentially useful in regular control visits to monitor the progression or regression of early lesions. A rigorous and systematic approach to caries diagnosis is essential to establish a care plan for the disease and to identify preventive measures based on more precise diagnosis and to reduce reliance on restorative measures."}
{"_id": "fd3c4a6fd18eaa55b9bc86cfbbd5039024576430", "title": "Guidelines for usability testing with children", "text": "A Although user-centered design is a well-supported concept in the literature on adult computer products, not until recently have publications begun to appear addressing the need to include the user in the design process of children's computer products. Good examples are a recent panel discussion in interactions on the importance of understanding the perspectives and needs of children, and the energizing work of Allison Druin and Cynthia Solomon [1, 2]. Growth has also occurred in evaluation research in both the industrial and academic communities, assessing the effectiveness or appeal of various types of (6 to 10 years), and middle-school-aged children (11 to 14 years). These age divisions are arbitrary , and many behaviors will overlap. In our experience, most children younger than 2 1/2 years of age are not proficient enough with standard input devices (e.g., mouse, trackball, or keyboard) to interact with the technology and provide useful data. Children older than 14 years of age will likely behave as adults in a testing situation and should be treated accordingly. Preschool (ages 2 to 5 years) Preschoolers require the most extensive adaptations of usability testing because their attention span, their motivation to please adults, and their ability to adjust to strange surroundings and new people may change from one moment to the next. In general, children in this age range should be allowed to explore the computer according to their own interests and pacing instead of performing a series of directed tasks. They will often be happy to show you what they know, and what they can do on the computer independently. When assessing appeal or engagement, testers will need to closely observe children's behavior such as sighing, smiling, or sliding under the table. Children this age often have difficulty expressing their likes and dislikes in words. Elementary School (ages 6 to 10 years) Children in this age range are relatively easy to include in software usability testing. Their experience in school makes them ready to sit at a task and follow directions from an adult, and they are generally not self-conscious about being observed as they play on the computer. They will answer questions and try new things with ease. In this age range, children will develop more sophistication about how they can describe the things they see and do. Six-and seven-year-old children will be more hands-on\u2014ready to work on the computer but a little shy or inarticulate when \u2026"}
{"_id": "c1318f3cb939774c3a0fde4426c624c32d480486", "title": "4D Light Field Superpixel and Segmentation", "text": "Superpixel segmentation of 2D image has been widely used in many computer vision tasks. However, limited to the Gaussian imaging principle, there is not a thorough segmentation solution to the ambiguity in defocus and occlusion boundary areas. In this paper, we consider the essential element of image pixel, i.e., rays in the light space and propose light field superpixel (LFSP) segmentation to eliminate the ambiguity. The LFSP is first defined mathematically and then a refocus-invariant metric named LFSP self-similarity is proposed to evaluate the segmentation performance. By building a clique system containing 80 neighbors in light field, a robust refocus-invariant LFSP segmentation algorithm is developed. Experimental results on both synthetic and real light field datasets demonstrate the advantages over the state-of-the-arts in terms of traditional evaluation metrics. Additionally the LFSP self-similarity evaluation under different light field refocus levels shows the refocus-invariance of the proposed algorithm."}
{"_id": "129028ab6e19452b441994b929c64a198fee8dc0", "title": "Parameter Sharing Reinforcement Learning Architecture for Multi Agent Driving Behaviors", "text": "Multi-agent learning provides a potential framework for learning and simulating traffic behaviors. This paper proposes a novel architecture to learn multiple driving behaviors in a traffic scenario. The proposed architecture can learn multiple behaviors independently as well as simultaneously. We take advantage of the homogeneity of agents and learn in a parameter sharing paradigm. To further speed up the training process asynchronous updates are employed into the architecture. While learning different behaviors simultaneously, the given framework was also able to learn cooperation between the agents, without any explicit communication. We applied this framework to learn two important behaviors in driving: 1) Lane-Keeping and 2) Over-Taking. Results indicate faster convergence and learning of a more generic behavior, that is scalable to any number of agents. When compared the results with existing approaches, our results indicate equal and even better performance in some cases."}
{"_id": "257c1b75d56c7a13975ea0f000a9ba4ff833cd6c", "title": "HarmTrace : Automatic functional harmonic analysis", "text": "Music scholars have been intensively studying tonal harmony for centuries, yielding numerous theories and models. Unfortunately, a large number of these theories are formulated in a rather informal fashion and lack mathematical precision. In this article we present HarmTrace, a functional model of Western tonal harmony, which builds on well-known theories of tonal harmony. In contrast to many other theories which remain purely theoretical, we present an implemented system that is evaluated empirically. Given a sequence of symbolic chord labels, HarmTrace automatically derives the harmonic relations between chords. For this, we use advanced functional programming techniques which are uniquely available in the Haskell programming language. We show that our system is fast, easy to modify and maintain, is robust against noisy data, and returns harmonic analyses that comply with Western tonal harmony theory."}
{"_id": "0a7c04c252621633992810bf0f184f287610c461", "title": "Sequential Short-Text Classification with Recurrent and Convolutional Neural Networks", "text": "Recent approaches based on artificial neural networks (ANNs) have shown promising results for short-text classification. However, many short texts occur in sequences (e.g., sentences in a document or utterances in a dialog), and most existing ANN-based systems do not leverage the preceding short texts when classifying a subsequent one. In this work, we present a model based on recurrent neural networks and convolutional neural networks that incorporates the preceding short texts. Our model achieves state-of-the-art results on three different datasets for dialog act prediction."}
{"_id": "c030c6d72c4bb170602120d6ef447564648f726d", "title": "Snooping Keystrokes with mm-level Audio Ranging on a Single Phone", "text": "This paper explores the limits of audio ranging on mobile devices in the context of a keystroke snooping scenario. Acoustic keystroke snooping is challenging because it requires distinguishing and labeling sounds generated by tens of keys in very close proximity. Existing work on acoustic keystroke recognition relies on training with labeled data, linguistic context, or multiple phones placed around a keyboard --- requirements that limit usefulness in an adversarial context. In this work, we show that mobile audio hardware advances can be exploited to discriminate mm-level position differences and that this makes it feasible to locate the origin of keystrokes from only a single phone behind the keyboard. The technique clusters keystrokes using time-difference of arrival measurements as well as acoustic features to identify multiple strokes of the same key. It then computes the origin of these sounds precise enough to identify and label each key. By locating keystrokes this technique avoids the need for labeled training data or linguistic context. Experiments with three types of keyboards and off-the-shelf smartphones demonstrate scenarios where our system can recover $94\\%$ of keystrokes, which to our knowledge, is the first single-device technique that enables acoustic snooping of passwords."}
{"_id": "0a3b32fee59e5c419b3aa33d53c89843377b2200", "title": "On Best-Possible Obfuscation", "text": "An obfuscator is a compiler that transforms any program (which we will view in this work as a boolean circuit) into an obfuscated program (also a circuit) that has the same input-output functionality as the original program, but is \u201cunintelligible\u201d. Obfuscation has applications for cryptography and for software protection. Barak et\u00a0al. (CRYPTO 2001, pp.\u00a01\u201318, 2001) initiated a theoretical study of obfuscation, which focused on black-box obfuscation, where the obfuscated circuit should leak no information except for its (black-box) input-output functionality. A family of functionalities that cannot be obfuscated was demonstrated. Subsequent research has showed further negative results as well as positive results for obfuscating very specific families of circuits, all with respect to black box obfuscation. This work is a study of a new notion of obfuscation, which we call best-possible obfuscation. Best possible obfuscation makes the relaxed requirement that the obfuscated program leaks as little information as any other program with the same functionality (and of similar size). In particular, this definition allows the program to leak information that cannot be obtained from a black box. Best-possible obfuscation guarantees that any information that is not hidden by the obfuscated program is also not hidden by any other similar-size program computing the same functionality, and thus the obfuscation is (literally) the best possible. In this work we study best-possible obfuscation and its relationship to previously studied definitions. Our main results are: (1)\u00a0A separation between black-box and best-possible obfuscation. We show a natural obfuscation task that can be achieved under the best-possible definition, but cannot be achieved under the black-box definition. (2)\u00a0A hardness result for best-possible obfuscation, showing that strong (information-theoretic) best-possible obfuscation implies a collapse in the Polynomial-Time Hierarchy. (3)\u00a0An impossibility result for efficient best-possible (and black-box) obfuscation in the presence of random oracles. This impossibility result uses a random oracle to construct hard-to-obfuscate circuits, and thus it does not imply impossibility in the standard model."}
{"_id": "903148db6796946182f27affc89c5045e6572ada", "title": "Main-memory foreign key joins on advanced processors: design and re-evaluations for OLAP workloads", "text": "The hash join algorithm family is one of the leading techniques for equi-join performance evaluation. OLAP systems borrow this line of research to efficiently implement foreign key joins between dimension tables and big fact tables. From data warehouse schema and workload feature perspective, the hash join algorithm can be further simplified with multidimensional mapping, and the foreign key join algorithms can be evaluated from multiple perspectives instead of single performance perspective. In this paper, we introduce the surrogate key index oriented foreign key join as schema-conscious and OLAP workload customized design foreign key join to comprehensively evaluate how state-of-the-art join algorithms perform in OLAP workloads. Our experiments and analysis gave the following insights: (1) customized foreign key join algorithm for OLAP workload can make join performance step forward than general-purpose hash joins; (2) each join algorithm shows strong and weak performance regions dominated by the cache locality ratio of input_size/cache_size with a fine-grained micro join benchmark; (3) the simple hardware-oblivious shared hash table join outperforms complex hardware-conscious radix partitioning hash join in most benchmark cases; (4) the customized foreign key join algorithm with surrogate key index simplified the algorithm complexity for hardware accelerators and make it easy to be implemented for different hardware accelerators. Overall, we argue that improving join performance is a systematic work opposite to merely hardware-conscious algorithm optimizations, and the OLAP domain knowledge enables surrogate key index to be effective for foreign key joins in data warehousing workloads for both CPU and hardware accelerators."}
{"_id": "76c20893b2e4e613b539f582b20cedb2136a7c60", "title": "Electricity-producing bacterial communities in microbial fuel cells.", "text": "Microbial fuel cells (MFCs) are not yet commercialized but they show great promise as a method of water treatment and as power sources for environmental sensors. The power produced by these systems is currently limited, primarily by high internal (ohmic) resistance. However, improvements in the system architecture will soon result in power generation that is dependent on the capabilities of the microorganisms. The bacterial communities that develop in these systems show great diversity, ranging from primarily delta-Proteobacteria that predominate in sediment MFCs to communities composed of alpha-, beta-, gamma- or delta-Proteobacteria, Firmicutes and uncharacterized clones in other types of MFCs. Much remains to be discovered about the physiology of these bacteria capable of exocellular electron transfer, collectively defined as a community of \"exoelectrogens\". Here, we review the microbial communities found in MFCs and the prospects for this emerging bioenergy technology."}
{"_id": "9860e4fe7e3be606b6f9432319e6b2273687bf08", "title": "A survey on ontology mapping", "text": "Ontology is increasingly seen as a key factor for enabling interoperability across heterogeneous systems and semantic web applications. Ontology mapping is required for combining distributed and heterogeneous ontologies. Developing such ontology mapping has been a core issue of recent ontology research. This paper presents ontology mapping categories, describes the characteristics of each category, compares these characteristics, and surveys tools, systems, and related work based on each category of ontology mapping. We believe this paper provides readers with a comprehensive understanding of ontology mapping and points to various research topics about the specific roles of ontology mapping."}
{"_id": "791d2f15a664afa3de93151d81589a661ad1f5bf", "title": "Radial-arrayed rotary electrification for high performance triboelectric generator.", "text": "Harvesting mechanical energy is an important route in obtaining cost-effective, clean and sustainable electric energy. Here we report a two-dimensional planar-structured triboelectric generator on the basis of contact electrification. The radial arrays of micro-sized sectors on the contact surfaces enable a high output power of 1.5\u2009W (area power density of 19\u2009mW\u2009cm(-2)) at an efficiency of 24%. The triboelectric generator can effectively harness various ambient motions, including light wind, tap water flow and normal body movement. Through a power management circuit, a triboelectric-generator-based power-supplying system can provide a constant direct-current source for sustainably driving and charging commercial electronics, immediately demonstrating the feasibility of the triboelectric generator as a practical power source. Given exceptional power density, extremely low cost and unique applicability resulting from distinctive mechanism and structure, the triboelectric generator can be applied not only to self-powered electronics but also possibly to power generation at a large scale."}
{"_id": "30436a7ecdf498ea43136241524378b3c67cc2c7", "title": "CATCH: A detecting algorithm for coalition attacks of hit inflation in internet advertising", "text": "As the Internet flourishes, online advertising becomes essential for marketing campaigns for business applications. To perform a marketing campaign, advertisers provide their advertisements to Internet publishers and commissions are paid to the publishers of the advertisements based on the clicks made for the posted advertisements or the purchases of the products of which advertisements posted. Since the payment given to a publisher is proportional to the amount of clicks received for the advertisements posted by the publisher, dishonest publishers are motivated to inflate the number of clicks on the advertisements hosted on their web sites. Since the click frauds are critical for online advertising to be reliable, the online advertisers make the efforts to prevent them effectively. However, the methods used for click frauds are also becoming more complex and sophisticated. In this paper, we study the problem of detecting coalition attacks of click frauds. The coalition attacks of click fraud is one of the latest sophisticated techniques utilized for click frauds because the fraudsters can obtain not only more gain but also less probability of being detected by joining a coalition. We introduce new definitions for the coalition and propose the novel algorithm called CATCH to find such coalitions. Extensive experiments with synthetic and real-life data sets confirm that our notion of coalition allows us to detect coalitions much more effectively than that of"}
{"_id": "85b53ce504a6c9a087e4bacf1550221ad9f9787a", "title": "A Topic-based Reviewer Assignment System", "text": "Peer reviewing is a widely accepted mechanism for assessing the quality of submitted articles to scientific conferences or journals. Conference management systems (CMS) are used by conference organizers to invite appropriate reviewers and assign them to submitted papers. Typical CMS rely on paper bids entered by the reviewers and apply simple matching algorithms to compute the paper assignment. In this paper, we demonstrate our Reviewer Assignment System (RAS), which has advanced features compared to broadly used CMSs. First, RAS automatically extracts the profiles of reviewers and submissions in the form of topic vectors. These profiles can be used to automatically assign reviewers to papers without relying on a bidding process, which can be tedious and error-prone. Second, besides supporting classic assignment models (e.g., stable marriage and optimal assignment), RAS includes a recently published assignment model by our research group, which maximizes, for each paper, the coverage of its topics by the profiles of its reviewers. The features of the demonstration include (1) automatic extraction of paper and reviewer profiles, (2) assignment computation by different models, and (3) visualization of the results by different models, in order to assess their effectiveness."}
{"_id": "360d4003511682c5f5b5f82f6befdda88ca3fa73", "title": "Smooth sensitivity and sampling in private data analysis", "text": "We introduce a new, generic framework for private data analysis.The goal of private data analysis is to release aggregate information about a data set while protecting the privacy of the individuals whose information the data set contains.Our framework allows one to release functions f of the data withinstance-based additive noise. That is, the noise magnitude is determined not only by the function we want to release, but also bythe database itself. One of the challenges is to ensure that the noise magnitude does not leak information about the database. To address that, we calibrate the noise magnitude to the smoothsensitivity of f on the database x --- a measure of variabilityof f in the neighborhood of the instance x. The new frameworkgreatly expands the applicability of output perturbation, a technique for protecting individuals' privacy by adding a smallamount of random noise to the released statistics. To our knowledge, this is the first formal analysis of the effect of instance-basednoise in the context of data privacy.\n Our framework raises many interesting algorithmic questions. Namely,to apply the framework one must compute or approximate the smoothsensitivity of f on x. We show how to do this efficiently for several different functions, including the median and the cost ofthe minimum spanning tree. We also give a generic procedure based on sampling that allows one to release f(x) accurately on manydatabases x. This procedure is applicable even when no efficient algorithm for approximating smooth sensitivity of f is known orwhen f is given as a black box. We illustrate the procedure by applying it to k-SED (k-means) clustering and learning mixtures of Gaussians."}
{"_id": "9e01c1886c389662f6e4a6074273272bb0d5b9a7", "title": "Dynamics and control of a 4-dof wearable cable-driven upper arm exoskeleton", "text": "In this paper, we present the dynamics, control, and preliminary experiments on a wearable upper arm exoskeleton intended for human users with four degrees-of-freedom (dof), driven by six cables. The control of this cable-driven exoskeleton is complicated because the cables can transmit forces to the arm only under tension. The standard PD controllers or Computed torque controllers perform only moderately since the cables need to be in tension. Future efforts will seek to refine these control strategies and their implementations to improve functionality of a human user."}
{"_id": "b92e1545fe1ac506a2649a2086e51840a8651b82", "title": "Household availability of ultra-processed foods and obesity in nineteen European countries.", "text": "OBJECTIVE\nTo assess household availability of NOVA food groups in nineteen European countries and to analyse the association between availability of ultra-processed foods and prevalence of obesity.\n\n\nDESIGN\nEcological, cross-sectional study.\n\n\nSETTING\nEurope.\n\n\nSUBJECTS\nEstimates of ultra-processed foods calculated from national household budget surveys conducted between 1991 and 2008. Estimates of obesity prevalence obtained from national surveys undertaken near the budget survey time.\n\n\nRESULTS\nAcross the nineteen countries, median average household availability amounted to 33\u00b79 % of total purchased dietary energy for unprocessed or minimally processed foods, 20\u00b73 % for processed culinary ingredients, 19\u00b76 % for processed foods and 26\u00b74 % for ultra-processed foods. The average household availability of ultra-processed foods ranged from 10\u00b72 % in Portugal and 13\u00b74 % in Italy to 46\u00b72 % in Germany and 50\u00b74 % in the UK. A significant positive association was found between national household availability of ultra-processed foods and national prevalence of obesity among adults. After adjustment for national income, prevalence of physical inactivity, prevalence of smoking, measured or self-reported prevalence of obesity, and time lag between estimates on household food availability and obesity, each percentage point increase in the household availability of ultra-processed foods resulted in an increase of 0\u00b725 percentage points in obesity prevalence.\n\n\nCONCLUSIONS\nThe study contributes to a growing literature showing that the consumption of ultra-processed foods is associated with an increased risk of diet-related non-communicable diseases. Its findings reinforce the need for public policies and actions that promote consumption of unprocessed or minimally processed foods and make ultra-processed foods less available and affordable."}
{"_id": "4e8930ae948262a89acf2e43c8e8b6e902c312c4", "title": "Variable Rate Image Compression with Recurrent Neural Networks", "text": "Although image compression has been actively studied for decades, there has been relatively little research on learning to compress images with modern neural networks. Standard approaches, such as those employing patch-based autoencoders, have shown a great deal of promise but cannot compete with popular image codecs because they fail to address three questions: 1) how to effectively binarize activations: in the absence of binarization, a bottleneck layer alone tends not to lead to efficient compression; 2) how to achieve variable-rate encoding: a standard autoencoder generates a fixed-length code for each fixed-resolution input patch, resulting in the same cost for lowand high-entropy patches, and requiring the network to be completely retrained to achieve different compression rates; and 3) how to avoid block artifacts: patch-based approaches are prone to block discontinuities. We propose a general framework for variable-rate image compression and a novel architecture based on convolutional and deconvolutional recurrent networks, including LSTMs, that address these issues and report promising results compared to existing baseline codecs. We evaluate the proposed methods on a large-scale benchmark consisting of tiny images (32\u00d7 32), which proves to be very challenging for all the methods."}
{"_id": "fea861d6560483ee4460a86c9276f2f7f95de2d9", "title": "Predicate learning in neural systems: Discovering latent generative structures", "text": "Humans learn complex latent structures from their environments (e.g., natural language, mathematics, music, social hierarchies). In cognitive science and cognitive neuroscience, models that infer higher-order structures from sensory or first-order representations have been proposed to account for the complexity and flexibility of human behavior. But how do the structures that these models invoke arise in neural systems in the first place? To answer this question, we explain how a system can learn latent representational structures (i.e., predicates) from experience with wholly unstructured data. During the process of predicate learning, an artificial neural network exploits the naturally occurring dynamic properties of distributed computing across neuronal assemblies in order to learn predicates, but also to combine them compositionally, two computational aspects which appear to be necessary for human behavior as per formal theories in multiple domains. We describe how predicates can be combined generatively using neural oscillations to achieve human-like extrapolation and compositionality in an artificial neural network. The ability to learn predicates from experience, to represent structures compositionally, and to extrapolate to unseen data offers an inroads to understanding and modeling the most complex human behaviors."}
{"_id": "1af5293e7e270d35be5eca28cd904d1e8fc9219c", "title": "CEDD: Color and Edge Directivity Descriptor: A Compact Descriptor for Image Indexing and Retrieval", "text": "This paper deals with a new low level feature that is extracted from the images and can be used for indexing and retrieval. This feature is called \u201cColor and Edge Directivity Descriptor\u201d and incorporates color and texture information in a histogram. CEDD size is limited to 54 bytes per image, rendering this descriptor suitable for use in large image databases. One of the most important attribute of the CEDD is the low computational power needed for its extraction, in comparison with the needs of the most MPEG-7 descriptors. The objective measure called ANMRR is used to evaluate the performance of the proposed feature. An online demo that implements the proposed feature in an image retrieval system is available at: http://orpheus.ee.duth.gr/image_retrieval."}
{"_id": "597edc3174dc9f26badd67c4e81d0e8a58f9dbb3", "title": "Textural Features Corresponding to Visual Perception", "text": "Textural features corresponding to human visual perception are very useful for optimum feature selection and texture analyzer design. We approximated in computational form six basic textural features, namely, coarseness, contrast, directionality, line-likeness, regularity, and roughness. In comparison with psychological measurements for human subjects, the computational measures gave good correspondences in rank correlation of 16 typical texture patterns. Similarity measurements using these features were attempted. The discrepancies between human vision and computerized techniques that we encountered in this study indicate fundamental problems in digital analysis of textures. Some of them could be overcome by analyzing their causes and using more sophisticated techniques."}
{"_id": "6646c3e932c69c6c2f317d615c55ccc22e21b4be", "title": "FCTH: Fuzzy Color and Texture Histogram - A Low Level Feature for Accurate Image Retrieval", "text": "This paper deals with the extraction of a new low level feature that combines, in one histogram, color and texture information. This feature is named FCTH - Fuzzy Color and Texture Histogram - and results from the combination of 3 fuzzy systems. FCTH size is limited to 72 bytes per image, rendering this descriptor suitable for use in large image databases. The proposed feature is appropriate for accurately retrieving images even in distortion cases such as deformations, noise and smoothing. It is tested on a large number of images selected from proprietary image databases or randomly retrieved from popular search engines. To evaluate the performance of the proposed feature, the averaged normalized modified retrieval rank was used. An online demo that implements the proposed feature in an image retrieval system is available at: http://orpheus.ee.duth.gr/image_retrieval."}
{"_id": "1b8fb3367b2527b53eda74c7966db809172eed28", "title": "M-tree: An Efficient Access Method for Similarity Search in Metric Spaces", "text": "A new access method called M tree is pro posed to organize and search large data sets from a generic metric space i e where ob ject proximity is only de ned by a distance function satisfying the positivity symmetry and triangle inequality postulates We detail algorithms for insertion of objects and split management which keep the M tree always balanced several heuristic split alternatives are considered and experimentally evaluated Algorithms for similarity range and k nearest neighbors queries are also described Re sults from extensive experimentation with a prototype system are reported considering as the performance criteria the number of page I O s and the number of distance computa tions The results demonstrate that the M tree indeed extends the domain of applica bility beyond the traditional vector spaces performs reasonably well in high dimensional data spaces and scales well in case of growing les"}
{"_id": "562283bfb84f6b7a0b881a9dcf5b713e1c1f57bf", "title": "Normalized cuts in 3-D for spinal MRI segmentation", "text": "Segmentation of medical images has become an indispensable process to perform quantitative analysis of images of human organs and their functions. Normalized Cuts (NCut) is a spectral graph theoretic method that readily admits combinations of different features for image segmentation. The computational demand imposed by NCut has been successfully alleviated with the Nystro/spl uml/m approximation method for applications different than medical imaging. In this paper we discuss the application of NCut with the Nystro/spl uml/m approximation method to segment vertebral bodies from sagittal T1-weighted magnetic resonance images of the spine. The magnetic resonance images were preprocessed by the anisotropic diffusion algorithm, and three-dimensional local histograms of brightness was chosen as the segmentation feature. Results of the segmentation as well as limitations and challenges in this area are presented."}
{"_id": "9970938755296a73e48b362f1c29c70690e769b7", "title": "Mining Efficient Taxi Operation Strategies From Large Scale Geo-Location Data", "text": "Taxi drivers always look for strategies to locate passengers quickly and therefore increase their profit margin. In reality, the passenger seeking strategies are mostly empirical and substantially vary among taxi drivers. From the history taxi data, the top performing taxi drivers can earn 25% more than the ones with mediocre seeking strategy in the same period of time. A better strategy not only helps taxi drivers earn more with less effort, but also reduce fuel consumption and carbon emissions. It is interesting to examine the influential factors in passenger seeking strategies and find algorithms to guide taxi drivers to passenger hotspots with the right timing. With the abundant availability of history taxicab traces, the existing methods of doing taxi business have been radically changed. This paper focuses on the problem of mining efficient operation strategies from a large-scale history taxi traces collected over one year. Our approach presents generic insights into the dynamics of taxicab services with the objective of maximizing the profit margins for the concerned parties. We propose important metrics, such as trip frequency, hot spots, and taxi mileage, and provide valuable insights toward more efficient operation strategies. We analyze these metrics using techniques, such as Newton\u2019s polynomial interpolation and Gamma distribution, to understand their dynamics. Our strategies use the real taxicab traces from the city of Changsha (P.R.China), may predict the taxi rides at different times by 90.68% per day, and increase the taxi drivers income levels up to 19.38% by controlling appropriate mileage per trip and following the route across more urban hot spots."}
{"_id": "ea5e5d3cbd9898a174f8b8e7f764340594533e0b", "title": "Tracking Depression Dynamics in College Students Using Mobile Phone and Wearable Sensing", "text": "There are rising rates of depression on college campuses. Mental health services on our campuses are working at full stretch. In response researchers have proposed using mobile sensing for continuous mental health assessment. Existing work on understanding the relationship between mobile sensing and depression, however, focuses on generic behavioral features that do not map to major depressive disorder symptoms defined in the standard mental disorders diagnostic manual (DSM-5). We propose a new approach to predicting depression using passive sensing data from students' smartphones and wearables. We propose a set of symptom features that proxy the DSM-5 defined depression symptoms specifically designed for college students. We present results from a study of 83 undergraduate students at Dartmouth College across two 9-week terms during the winter and spring terms in 2016. We identify a number of important new associations between symptom features and student self reported PHQ-8 and PHQ-4 depression scores. The study captures depression dynamics of the students at the beginning and end of term using a pre-post PHQ-8 and week by week changes using a weekly administered PHQ-4. Importantly, we show that symptom features derived from phone and wearable sensors can predict whether or not a student is depressed on a week by week basis with 81.5% recall and 69.1% precision."}
{"_id": "020faf613a105bbdd2ebd32ff78fe018c1e7f912", "title": "Real-time super-resolution Sound Source Localization for robots", "text": "Sound Source Localization (SSL) is an essential function for robot audition and yields the location and number of sound sources, which are utilized for post-processes such as sound source separation. SSL for a robot in a real environment mainly requires noise-robustness, high resolution and real-time processing. A technique using microphone array processing, that is, Multiple Signal Classification based on Standard EigenValue Decomposition (SEVD-MUSIC) is commonly used for localization. We improved its robustness against noise with high power by incorporating Generalized EigenValue Decomposition (GEVD). However, GEVD-based MUSIC (GEVD-MUSIC) has mainly two issues: 1) the resolution of pre-measured Transfer Functions (TFs) determines the resolution of SSL, 2) its computational cost is expensive for real-time processing. For the first issue, we propose a TF interpolation method integrating time-domain-based and frequency-domain-based interpolation. The interpolation achieves super-resolution SSL, whose resolution is higher than that of the pre-measured TFs. For the second issue, we propose two methods, MUSIC based on Generalized Singular Value Decomposition (GSVD-MUSIC), and Hierarchical SSL (H-SSL). GSVD-MUSIC drastically reduces the computational cost while maintaining noise-robustness in localization. H-SSL also reduces the computational cost by introducing a hierarchical search algorithm instead of using greedy search in localization. These techniques are integrated into an SSL system using a robot embedded microphone array. The experimental result showed: the proposed interpolation achieved approximately 1 degree resolution although we have only TFs at 30 degree intervals, GSVD-MUSIC attained 46.4% and 40.6% of the computational cost compared to SEVD-MUSIC and GEVD-MUSIC, respectively, H-SSL reduces 59.2% computational cost in localization of a single sound source."}
{"_id": "dd008f0ed8b1be16850036fd19809674889a1c5f", "title": "Medical image segmentation on GPUs - A comprehensive review", "text": "Segmentation of anatomical structures, from modalities like computed tomography (CT), magnetic resonance imaging (MRI) and ultrasound, is a key enabling technology for medical applications such as diagnostics, planning and guidance. More efficient implementations are necessary, as most segmentation methods are computationally expensive, and the amount of medical imaging data is growing. The increased programmability of graphic processing units (GPUs) in recent years have enabled their use in several areas. GPUs can solve large data parallel problems at a higher speed than the traditional CPU, while being more affordable and energy efficient than distributed systems. Furthermore, using a GPU enables concurrent visualization and interactive segmentation, where the user can help the algorithm to achieve a satisfactory result. This review investigates the use of GPUs to accelerate medical image segmentation methods. A set of criteria for efficient use of GPUs are defined and each segmentation method is rated accordingly. In addition, references to relevant GPU implementations and insight into GPU optimization are provided and discussed. The review concludes that most segmentation methods may benefit from GPU processing due to the methods' data parallel structure and high thread count. However, factors such as synchronization, branch divergence and memory usage can limit the speedup."}
{"_id": "771b9855ef09cf6106700a7cd2c325e3459db429", "title": "A compact printed monopole antenna with symmetrical i and rectangular shaped slots for bluetooth/WLAN/WIMAX applications", "text": "A compact dual band microstrip fed printed monopole antenna is designed and presented in this paper for Bluetooth/WLAN/WiMAX applications. Two I with rectangular shape slot have been etched on a Roger RT/duroid substrate to form desired monopole antenna. The radiator of the antenna is very compact with an area of only 9.1 \u00d7 10.2 mm2. The size of the I & rectangular slots of dual radiating elements were adjusted so as to achieve two different resonant modes for dual band operation. The simulation result shows that the proposed antenna has 10-dB impedance bandwidth of 219.6 MHz and 3.20 GHz which cover the Bluetooth, WLAN 2.4/5.8 GHz and the WiMAX 2.5/3.5 GHz bands. The overall proposed antenna size is 15 \u00d7 17 \u00d7 0.762 mm 3 with the obtained VSWR less than 1.5."}
{"_id": "062397dbc063312de6b006b49acfa76e3ee27e59", "title": "Phase statistics of interferograms with applications to synthetic aperture radar.", "text": "Interferometric methods are well established in optics and radio astronomy. In recent years, interferometric concepts have been applied successfully to synthetic aperture radar (SAR) and have opened up new possibilities in the area of earth remote sensing. However interferometric SAR applications require thorough phase control through the imaging process. The phase accuracy of SAR images is affected by decorrelation effects between the individual surveys. We analyze quantitatively the influence of decorrelation on the phase statistics of SAR interferograms. In particular, phase aberrations as they occur in typical SAR processors are studied in detail. The dependence of the resulting phase bias and variance on processor parameters is presented in several diagrams."}
{"_id": "6a47e89f4afb3c550a5f240417590d3dce6cc6be", "title": "Structured Indoor Modeling", "text": "This paper presents a novel 3D modeling framework that reconstructs an indoor scene as a structured model from panorama RGBD images. A scene geometry is represented as a graph, where nodes correspond to structural elements such as rooms, walls, and objects. The approach devises a structure grammar that defines how a scene graph can be manipulated. The grammar then drives a principled new reconstruction algorithm, where the grammar rules are sequentially applied to recover a structured model. The paper also proposes a new room segmentation algorithm and an offset-map reconstruction algorithm that are used in the framework and can enforce architectural shape priors far beyond existing state-of-the-art. The structured scene representation enables a variety of novel applications, ranging from indoor scene visualization, automated floorplan generation, Inverse-CAD, and more. We have tested our framework and algorithms on six synthetic and five real datasets with qualitative and quantitative evaluations. The source code and the data are available at the project website [15]."}
{"_id": "8f9376f3b71e9182c79531551d6e953cd02d7fe6", "title": "A piezoelectric vibration based generator for wireless electronics", "text": "Enabling technologies for wireless sensor networks have gained considerable attention in research communities over the past few years. It is highly desirable, even necessary in certain situations, for wireless sensor nodes to be self-powered. With this goal in mind, a vibration based piezoelectric generator has been developed as an enabling technology for wireless sensor networks. The focus of this paper is to discuss the modeling, design, and optimization of a piezoelectric generator based on a two-layer bending element. An analytical model of the generator has been developed and validated. In addition to providing intuitive design insight, the model has been used as the basis for design optimization. Designs of 1 cm3 in size generated using the model have demonstrated a power output of 375 \u03bcW from a vibration source of 2.5 m s\u22122 at 120 Hz. Furthermore, a 1 cm3 generator has been used to power a custom designed 1.9 GHz radio transmitter from the same vibration source. (Some figures in this article are in colour only in the electronic version)"}
{"_id": "fdbf3fefa202391059a3802f0f288a8c7487fc76", "title": "Weakly supervised user intent detection for multi-domain dialogues", "text": "Users interact with mobile apps with certain intents such as finding a restaurant. Some intents and their corresponding activities are complex and may involve multiple apps; for example, a restaurant app, a messenger app and a calendar app may be needed to plan a dinner with friends. However, activities may be quite personal and third-party developers would not be building apps to specifically handle complex intents (e.g., a DinnerPlanner). Instead we want our intelligent agent to actively learn to understand these intents and provide assistance when needed. This paper proposes a framework to enable the agent to learn an inventory of intents from a small set of task-oriented user utterances. The experiments show that on previously unseen user activities, the agent is able to reliably recognize user intents using graph-based semi-supervised learning methods. The dataset, models, and the system outputs are available to research community."}
{"_id": "fede9cccd96292d3ede3a1111a87d0fce7d147d5", "title": "Does telecommuting really increase productivity?", "text": "As many companies have learned in the last decade, the reality of telecommuting does not reflect the hype, the expected potential, or the existing literature."}
{"_id": "5ab1d4d7aa2111e1d6254cfd81f35d4cac8d7fb8", "title": "Why social preferences matter-the impact of non-selfish motives on competition , cooperation and incentives", "text": "A substantial number of people exhibit social preferences, which means they are not solely motivated by material self-interest but also care positively or negatively for the material payoffs of relevant reference agents. We show empirically that economists fail to understand fundamental economic questions when they disregard social preferences, in particular, that without taking social preferences into account, it is not possible to understand adequately (i) the effects of competition on market outcomes, (ii) laws governing cooperation and collective action, (iii) effects and the determinants of material incentives, (iv) which contracts and property rights arrangements are optimal, and (v) important forces shaping social norms and market failures. a) Ernst Fehr, Institute for Empirical Research in Economics, University of Zurich, Bluemlisalpstrasse 10, CH-8006 Zurich, Switzerland, email: efehr@iew.unizh.ch. b) Urs Fischbacher, Institute for Empirical Research in Economics, University of Zurich, Bluemlisalpstrasse 10, CH-8006 Zurich, Switzerland, email: fiba@iew.unizh.ch."}
{"_id": "4b129c53e93753ec8855e5e4d7b4d1a4d586d3b1", "title": "CoType: Joint Extraction of Typed Entities and Relations with Knowledge Bases", "text": "Extracting entities and relations for types of interest from text is important for understanding massive text corpora. Traditionally, systems of entity relation extraction have relied on human-annotated corpora for training and adopted an incremental pipeline. Such systems require additional human expertise to be ported to a new domain, and are vulnerable to errors cascading down the pipeline. In this paper, we investigate joint extraction of typed entities and relations with labeled data heuristically obtained from knowledge bases (i.e., distant supervision). As our algorithm for type labeling via distant supervision is context-agnostic, noisy training data poses unique challenges for the task. We propose a novel domainindependent framework, called COTYPE, that runs a data-driven text segmentation algorithm to extract entity mentions, and jointly embeds entity mentions, relation mentions, text features and type labels into two low-dimensional spaces (for entity and relation mentions respectively), where, in each space, objects whose types are close will also have similar representations. COTYPE, then using these learned embeddings, estimates the types of test (unlinkable) mentions. We formulate a joint optimization problem to learn embeddings from text corpora and knowledge bases, adopting a novel partial-label loss function for noisy labeled data and introducing an object \"translation\" function to capture the cross-constraints of entities and relations on each other. Experiments on three public datasets demonstrate the effectiveness of COTYPE across different domains (e.g., news, biomedical), with an average of 25% improvement in F1 score compared to the next best method."}
{"_id": "c4e9e63883657895be0a3b2f78d058752bf23f41", "title": "Pruning ConvNets Online for Efficient Specialist Models", "text": "Convolutional neural networks (CNNs) excel in various computer vision related tasks but are extremely computationally intensive and power hungry to run on mobile and embedded devices. Recent pruning techniques can reduce the computation and memory requirements of CNNs, but a costly retraining step is needed to restore the classification accuracy of the pruned model. In this paper, we present evidence that when only a subset of the classes need to be classified, we could prune a model and achieve reasonable classification accuracy without retraining. The resulting specialist model will require less energy and time to run than the original full model. To compensate for the pruning, we take advantage of the redundancy among filters and class-specific features. We show that even simple methods such as replacing channels with mean or with the most correlated channel can boost the accuracy of the pruned model to reasonable levels."}
{"_id": "678a62aeeb8746906d5621c5b3502a25b2c4a2bf", "title": "First- and second-order optimality conditions for piecewise smooth objective functions", "text": "Any piecewise smooth function that is specified by an evaluation procedure involving smooth elemental functions and piecewise linear functions like min and max can be represented in the so-called abs-normal form. By an extension of algorithmic, or automatic, differentiation, one can then compute certain first and second order derivative vectors and matrices that represent a local piecewise linearization and provide additional curvature information. On the basis of these quantities we characterize local optimality by first and second order necessary and sufficient conditions, which generalize the corresponding KKT theory for smooth problems. The key assumption is the Linear Independence Kink Qualification (LIKQ), a generalization of LICQ familiar from nonlinear optimization. It implies that the objective has locally a so-called VU decomposition and renders everything tractable in terms of matrix factorizations and other simple linear algebra operations. By yielding descent directions whenever they are violated the new optimality conditions point the way to a superlinearly convergent generalized QP solver, which is currently under development. We exemplify the theory on two nonsmooth examples of Nesterov."}
{"_id": "7204441181b74ece91f442291c74fc76e7542402", "title": "Precision SAR processing using chirp scaling", "text": "Abstmct-A space-variant interpolation is required to compensate for the migration of signal energy through range resolution cells when processing synthetic aperture radar (SAR) data, using either the classical rangelDoppler (RID) algorithm or related frequency domain techniques. In general, interpolation requires significant computation time, and leads to loss of image quality, especially in the complex image. The new chirp scaling algorithm avoids interpolation, yet performs range cell migration correction accurately. The algorithm requires only complex multiplies and Fourier transforms to implement, is inherently phase preserving, and is suitable for wide-swath, largebeamwidth, and large-squint applications. This paper describes the chirp scaling algorithm, summarizes simulation results, presents imagery processed with the algorithm, and reviews quantitative measures of its performance. Based on quantitative comparison, the chirp scaling algorithm provides image quality equal to or better than the precision rangel Doppler processor. Over the range of parameters tested, image quality results approach the theoretical limit, as defined by the system bandwidth."}
{"_id": "6e930b5c4913f4c51dac9e88d860fb6fb7757e45", "title": "The differences in motivations of online game players and offline game players: A combined analysis of three studies at higher education level", "text": "Computer games have become a highly popular form of entertainment and have had a large impact on how University students spend their leisure time. Due to their highly motivating properties computer games have come to the attention of educationalists who wish to exploit these highly desirable properties for educational purposes. Several studies have been performed looking at motivations for playing computer games in a general context and in a Higher Education (HE) context. These studies did not focus on the differences in motivations between online and offline game players. Equally the studies did not look at the differences in motivations of people who prefer single player games and people who prefer multiplayer games. If games-based learning is to become a recognised teaching approach then such motivations for playing computer games must be better understood. This paper presents the combined analysis of three studies at HE level, performed over a four year period from 2005 to 2009. The paper focuses on differences of motivations in relation to single player/multiplayer preference and online/ offline game participation. The study found that challenge is the top ranking motivation and recognition is the lowest ranking motivation for playing computer games in general. Challenge is also the top ranking motivation for playing games in HE while fantasy and recognition are the lowest ranking motivations for playing games in HE. Multiplayer gamers derive more competition, cooperation, recognition, fantasy and curiosity from playing games and online gamers derive more challenge, cooperation, recognition and control from playing games. Multiplayer gamers and online gamers ranked competition, cooperation and recognition significantly more important for playing games in HE than single players and offline participants. 2011 Elsevier Ltd. All rights reserved."}
{"_id": "5de3ba76eeead6a6ee3295220080ee881f84bd27", "title": "Homography-based 2D Visual Tracking and Servoing", "text": "The objective of this paper is to propose a new homography-based approach to image-based visual tracking and servoing. The visual tracking algorithm proposed in the paper is based on a new efficient second-order minimization method. Theoretical analysis and comparative experiments with other tracking approaches show that the proposed method has a higher convergence rate than standard first-order minimization techniques. Therefore, it is well adapted to real-time robotic applications. The output of the visual tracking is a homography linking the current and the reference image of a planar target. Using the homography, a task function isomorphic to the camera pose has been designed. A new image-based control law is proposed which does not need any measure of the 3D structure of the observed target (e.g. the normal to the plane). The theoretical proof of the existence of the isomorphism between the task function and the camera pose and the theoretical proof of the stability of the control law are provided. The experimental results, obtained with a 6 d.o.f. robot, show the advantages of the proposed method with respect to the existing approaches. KEY WORDS\u2014visual tracking, visual servoing, efficient second-order minimization, homography-based control law"}
{"_id": "19b7224e55ecd61783d4f19ac3f41f1845bad189", "title": "Supervised tensor learning", "text": "Tensor representation is helpful to reduce the small sample size problem in discriminative subspace selection. As pointed by this paper, this is mainly because the structure information of objects in computer vision research is a reasonable constraint to reduce the number of unknown parameters used to represent a learning model. Therefore, we apply this information to the vector-based learning and generalize the vector-based learning to the tensor-based learning as the supervised tensor learning (STL) framework, which accepts tensors as input. To obtain the solution of STL, the alternating projection optimization procedure is developed. The STL framework is a combination of the convex optimization and the operations in multilinear algebra. The tensor representation helps reduce the overfitting problem in vector-based learning. Based on STL and its alternating projection optimization procedure, we generalize support vector machines, minimax probability machine, Fisher discriminant analysis, and distance metric learning, to support tensor machines, tensor minimax probability machine, tensor Fisher discriminant analysis, and the multiple distance metrics learning, respectively. We also study the iterative procedure for feature extraction within STL. To examine the effectiveness of STL, we implement the tensor minimax probability machine for image classification. By comparing with minimax probability machine, the tensor version reduces the overfitting problem."}
{"_id": "93ce62fb04283efb253b512dc3f02b1d169ee7ed", "title": "Two-Stream 3-D convNet Fusion for Action Recognition in Videos With Arbitrary Size and Length", "text": "3-D convolutional neural networks (3-D-convNets) have been very recently proposed for action recognition in videos, and promising results are achieved. However, existing 3-D-convNets has two \u201cartificial\u201d requirements that may reduce the quality of video analysis: 1) It requires a fixed-sized (e.g., 112 $\\times$ 112) input video; and 2) most of the 3-D-convNets require a fixed-length input (i.e., video shots with fixed number of frames). To tackle these issues, we propose an end-to-end pipeline named Two-stream 3-D-convNet Fusion, which can recognize human actions in videos of arbitrary size and length using multiple features. Specifically, we decompose a video into spatial and temporal shots. By taking a sequence of shots as input, each stream is implemented using a spatial temporal pyramid pooling (STPP) convNet with a long short-term memory (LSTM) or CNN-E model, softmax scores of which are combined by a late fusion. We devise the STPP convNet to extract equal-dimensional descriptions for each variable-size shot, and we adopt the LSTM/CNN-E model to learn a global description for the input video using these time-varying descriptions. With these advantages, our method should improve all 3-D CNN-based video analysis methods. We empirically evaluate our method for action recognition in videos and the experimental results show that our method outperforms the state-of-the-art methods (both 2-D and 3-D based) on three standard benchmark datasets (UCF101, HMDB51 and ACT datasets)."}
{"_id": "d72458f9501963670b50ee9fe78e622425955630", "title": "Fidelius Charm: Isolating Unsafe Rust Code", "text": "The Rust programming language has a safe memory model that promises to eliminate critical memory bugs. While the language is strong in doing so, its memory guarantees are lost when any unsafe blocks are used. Unsafe code is often needed to call library functions written in an unsafe language inside a Rust program. We present Fidelius Charm (FC), a system that protects a programmer-specified subset of data in memory from unauthorized access through vulnerable unsafe libraries. FC does this by limiting access to the program's memory while executing unsafe libraries. FC uses standard features of Rust and utilizes the Linux kernel as a trusted base for splitting the address space into a trusted privileged region under the control of functions written in Rust and a region available to unsafe external libraries. This paper presents our design and implementation of FC, presents two case studies for using FC in Rust TLS libraries, and reports on experiments showing its performance overhead is low for typical uses."}
{"_id": "169d7ac1fd596bd51d5310a71079c797102084c6", "title": "Degree of Modularity in Engineering Systems and Products with Technical and Business Constraints", "text": "There is consensus that modularity has many benefits from cost savings due to increased commonality to enabling a higher variety of products. Full modularity is, however, not always achievable. How engineering systems and products whose design is heavily influenced by technical constraints, such as weight or size limitations, tend to exhibit rather integral architectures is shown in this study. For this, two metrics are defined on the basis of a binary design structure matrix (DSM) representation of a system or product. The non-zero fraction (NZF) captures the sparsity of the interrelationships between components between zero and one, while the singular value modularity index (SMI) captures the degree of internal coupling, also between zero and one. These metrics are first developed using idealized canonical architectures and are then applied to two different product pairs that are functionally equivalent, but different in terms of technical constraints. Empirical evidence is presented that the lightweight variant of the same product tends to be more integral, presumably to achieve higher mass efficiency. These observations are strengthened by comparing the results to another, previously published, modularity metric as well as by comparing sparsity and modularity of a set of 15 products against a control population of randomly generated architectures of equivalent size and density. The results suggest that, indeed, some products are inherently less modular than others due to technological factors. The main advantage of SMI is that it enables analysis of the degree of modularity of any architecture independent of subjective module choices."}
{"_id": "8c6e0e318950dfb911915e09018c129b4cb9392b", "title": "Applying the design structure matrix to system decomposition and integration problems: a review and new directions", "text": "Systems engineering of products, processes, and organizations requires tools and techniques for system decomposition and integration. A design structure matrix (DSM) provides a simple, compact, and visual representation of a complex system that supports innovative solutions to decomposition and integration problems. The advantages of DSMsvis-\u00e0-visalternative system representation and analysis techniques have led to their increasing use in a variety of contexts, including product development, project planning, project management, systems engineering, and organization design. This paper reviews two types of DSMs, static and time-based DSMs, and four DSM applications: 1) Component-Basedor Architecture DSM, useful for modeling system component relationships and facilitating appropriate architectural decomposition strategies; 2) Team-Basedor Organization DSM, beneficial for designing integrated organization structures that account for team interactions; 3)Activity-Basedor Schedule DSM, advantageous for modeling the information flow among process activities; and 4)Parameter-Based(or low-level schedule)DSM, effective for integrating low-level design processes based on physical design parameter relationships. A discussion of each application is accompanied by an industrial example. The review leads to conclusions regarding the benefits of DSMs in practice and barriers to their use. The paper also discusses research directions and new DSM applications, both of which may be approached with a perspective on the four types of DSMs and their relationships."}
{"_id": "a736c328bb96b05847dccad89ca19a5fe890bf65", "title": "Architectural Innovation : The Reconfiguration of Existing Product Technologies and the Failure of Established Firms", "text": "Architectural Innovation: The Reconfiguration of Existing Product Technologies and the Failure of Established Firms Author(s): Rebecca M. Henderson and Kim B. Clark Source: Administrative Science Quarterly, Vol. 35, No. 1, Special Issue: Technology, Organizations, and Innovation (Mar., 1990), pp. 9-30 Published by: Johnson Graduate School of Management, Cornell University Stable URL: http://www.jstor.org/stable/2393549 Accessed: 24/08/2008 13:09"}
{"_id": "0d0db78f584979413a28bc174b41188c804052aa", "title": "Innovation : Mapping the winds of creative destruction *", "text": "This paper develops a framework for analyzing the competitive implications of innovation. The framework is based on the concept of transilience the capacity of an innovation to influence the established systems of production and marketing. Application of the concept results in a categorization of innovation into four types. Examples from the technical history of the US auto industry are used to illustrate the concepts and their applicability. The analysis shows that the categories of innovation are closely linked to different patterns of evolution and to different managerial environments. Special emphasis is placed on the role of incremental technical change in shaping competition and on the possibilities for a technology based reversal in the process of industrial maturity."}
{"_id": "b372ede0b08efd18e1f350bc4c3e03f732d37d27", "title": "Technological paradigms and technological trajectories A suggested interpretation of the determinants and directions of technical change", "text": "The procedures and the nature of \"technologies\" are suggested to be broadly s~milar to those which characterize \"sci-ence\". In particular, there appear to be \"technological para-dig~as\" (or research p,rogrammes) performing a similar role to \"sci~:~ntific paradigms\" (or research programmes). The n.udel tries to account for both continuous changes and discontinui-ties m technological i:anovation. Continuous changes are often related to progress along a technological trajectory defined by a technological paradigm, wMle discontinuities are associated with the emergence <,f a new paradigm. One-directional explanations of the i,movative process, and in particular those assuming \"the market\" as the prime mover, are inadequate to explain the emergence of new technological paradigms, The origin of the latter stems from the interplay between scientific advances, economic factors, institutional variables, and unsolved difficulties on established technological paths. The model tries to establish a sufficiently general framework which accounts for all these factors and to define the process of selection of new technological paradigms among a greater set of notionally possible cnes. The history of a technology is contextual to the history of the industrial structures associated with that technology. The emergence of a new paradigm is often related to new \"schumpeterian\" companies, while its establishment often shows also a process of oligopolistic stabilization. and two anonymous referees for their comments and criticisms on previous drafts. The responsibility for this draft is obviously mine. A version of this research, more focussed on the effects of technical change upon long-run patterns of growth, is appearing in C. Free"}
{"_id": "725f4bd53c410d87d53e74213c71dfdf8618bb14", "title": "Combining Labeled and Unlabeled Data for MultiClass Text Categorization", "text": "Supervised learning techniques for text classi cation often require a large number of labeled examples to learn accurately. One way to reduce the amount of labeled data required is to develop algorithms that can learn e ectively from a small number of labeled examples augmented with a large number of unlabeled examples. Current text learning techniques for combining labeled and unlabeled, such as EM and Co-Training, are mostly applicable for classi cation tasks with a small number of classes and do not scale up well for large multiclass problems. In this paper, we develop a framework to incorporate unlabeled data in the Error-Correcting Output Coding (ECOC) setup by rst decomposing multiclass problems into multiple binary problems and then using Co-Training to learn the individual binary classi cation problems. We show that our method is especially useful for text classi cation tasks involving a large number of categories and outperforms other semi-supervised learning techniques such as EM and Co-Training. In addition to being highly accurate, this method utilizes the hamming distance from ECOC to provide high-precision results. We also present results with algorithms other than co-training in this framework and show that co-training is uniquely suited to work well within ECOC."}
{"_id": "a04273851ae262e884b175c22decd56cbd24e14e", "title": "Correcting the Triplet Selection Bias for Triplet Loss", "text": "Triplet loss, popular for metric learning, has made a great success in many computer vision tasks, such as fine-grained image classification, image retrieval, and face recognition. Considering that the number of triplets grows cubically with the size of training data, triplet selection is thus indispensable for efficiently training with triplet loss. However, in practice, the training is usually very sensitive to the selection of triplets, e.g., it almost does not converge with randomly selected triplets and selecting the hardest triplets also leads to bad local minima. We argue that the bias in the selection of triplets degrades the performance of learning with triplet loss. In this paper, we propose a new variant of triplet loss, which tries to reduce the bias in triplet selection by adaptively correcting the distribution shift on the selected triplets. We refer to this new triplet loss as adapted triplet loss. We conduct a number of experiments on MNIST and Fashion-MNIST for image classification, and on CARS196, CUB200-2011, and Stanford Online Products for image retrieval. The experimental results demonstrate the effectiveness of the proposed method."}
{"_id": "895d0b3aa10f1d86ddfec19a7bd3316ba78c26bf", "title": "Min-BDeu and Max-BDeu Scores for Learning Bayesian Networks", "text": "This work presents two new score functions based on the Bayesian Dirichlet equivalent uniform (BDeu) score for learning Bayesian network structures. They consider the sensitivity of BDeu to varying parameters of the Dirichlet prior. The scores take on the most adversary and the most beneficial priors among those within a contamination set around the symmetric one. We build these scores in such way that they are decomposable and can be computed efficiently. Because of that, they can be integrated into any state-of-the-art structure learning method that explores the space of directed acyclic graphs and allows decomposable scores. Empirical results suggest that our scores outperform the standard BDeu score in terms of the likelihood of unseen data and in terms of edge discovery with respect to the true network, at least when the training sample size is small. We discuss the relation between these new scores and the accuracy of inferred models. Moreover, our new criteria can be used to identify the amount of data after which learning is saturated, that is, additional data are of little help to improve the resulting model."}
{"_id": "c5f19c26489e78eaa28b8b56add0af2b271d9abc", "title": "A Meta-Framework for Modeling the Human Reading Process in Sentiment Analysis", "text": "This article introduces a sentiment analysis approach that adopts the way humans read, interpret, and extract sentiment from text. Our motivation builds on the assumption that human interpretation should lead to the most accurate assessment of sentiment in text. We call this automated process Human Reading for Sentiment (HRS). Previous research in sentiment analysis has produced many frameworks that can fit one or more of the HRS aspects; however, none of these methods has addressed them all in one approach. HRS provides a meta-framework for developing new sentiment analysis methods or improving existing ones. The proposed framework provides a theoretical lens for zooming in and evaluating aspects of any sentiment analysis method to identify gaps for improvements towards matching the human reading process. Key steps in HRS include the automation of humans low-level and high-level cognitive text processing. This methodology paves the way towards the integration of psychology with computational linguistics and machine learning to employ models of pragmatics and discourse analysis for sentiment analysis. HRS is tested with two state-of-the-art methods; one is based on feature engineering, and the other is based on deep learning. HRS highlighted the gaps in both methods and showed improvements for both."}
{"_id": "96989405985e2d90370185f1e025443b56106d1a", "title": "Discriminative n-gram language modeling", "text": "This paper describes discriminative language modeling for a large vocabulary speech recognition task. We contrast two parameter estimation methods: the perceptron algorithm, and a method based on maximizing the regularized conditional log-likelihood. The models are encoded as deterministic weighted finite state automata, and are applied by intersecting the automata with word-lattices that are the output from a baseline recognizer. The perceptron algorithm has the benefit of automatically selecting a relatively small feature set in just a couple of passes over the training data. We describe a method based on regularized likelihood that makes use of the feature set given by the perceptron algorithm, and initialization with the perceptron\u2019s weights; this method gives an additional 0.5% reduction in word error rate (WER) over training with the perceptron alone. The final system achieves a 1.8% absolute reduction in WER for a baseline first-pass recognition system (from 39.2% to 37.4%), and a 0.9% absolute reduction in WER for a multi-pass recognition system (from 28.9% to 28.0%). 2006 Elsevier Ltd. All rights reserved."}
{"_id": "4ec34709eab69fc5d1b42c91afeb6d6370495e3b", "title": "A Game-like Application for Dance Learning Using a Natural Human Computer Interface", "text": "Game-based learning and gamification techniques are recently becoming a popular trend in the field of Technology Enhanced Learning. In this paper, we mainly focus on the use of game design elements for the transmission of Intangible Cultural Heritage (ICH) knowledge and, especially, for the learning of traditional dances. More specifically, we present a 3D game environment that employs an enjoyable natural human computer interface, which is based on the fusion of multiple depth sensors data in order to capture the body movements of the user/learner. In addition, the system automatically assesses the learner\u2019s performance by utilizing a combination of Dynamic Time Warping (DTW) with Fuzzy Inference System (FIS) approach and provides feedback in a form of a score as well as instructions from a virtual tutor in order to promote self-learning. As a pilot use case, a Greek traditional dance, namely Tsamiko, has been selected. Preliminary small-scaled experiments with students of the Department of Physical Education and Sports Science at Aristotle University of Thessaloniki have shown the great potential of the proposed application."}
{"_id": "7e359f8fa3ba07cca2458ae95b341d7937810ac3", "title": "Efficient tracing of failed nodes in sensor networks", "text": "In sensor networks, nodes commonly rely on each other to route messages to a base station. Although this practice conserves power it can obscure the cause of a measurement outage in a portion of the network. For example, when a base station ceases to receive measurements from a region of nodes it can't immediately determine whether this is because of the destruction of all the nodes in that region (due to an enemy attack, for example) or merely the result of the failure of a few nodes bearing much of the routing load. Previous solutions to this problem typically consist of re-running the route-discovery protocol, a process that can be quite expensive in terms of the number of messages that must be exchanged. We demonstrate that the topology of the network can be efficiently conveyed to the base station allowing for the quick tracing of the identities of the failed nodes with moderate communication overhead. Our algorithms work in conjunction with the existing functions of the network, requiring the nodes to send no additional messages."}
{"_id": "154387fe1347664ed7433156f19f9ea29b0ceb33", "title": "The role of anterior insular cortex in social emotions", "text": "Functional neuroimaging investigations in the fields of social neuroscience and neuroeconomics indicate that the anterior insular cortex (AI) is consistently involved in empathy, compassion, and interpersonal phenomena such as fairness and cooperation. These findings suggest that AI plays an important role in social emotions, hereby defined as affective states that arise when we interact with other people and that depend on the social context. After we link the role of AI in social emotions to interoceptive awareness and the representation of current global emotional states, we will present a model suggesting that AI is not only involved in representing current states, but also in predicting emotional states relevant to the self and others. This model also proposes that AI enables us to learn about emotional states as well as about the uncertainty attached to events, and implies that AI plays a dominant role in decision making in complex and uncertain environments. Our review further highlights that dorsal and ventro-central, as well as anterior and posterior subdivisions of AI potentially subserve different functions and guide different aspects of behavioral regulation. We conclude with a section summarizing different routes to understanding other people\u2019s actions, feelings and thoughts, emphasizing the notion that the predominant role of AI involves understanding others\u2019 feeling and bodily states rather than their action intentions or abstract beliefs."}
{"_id": "399968ac10586b66252d4f7bfba9609612f9fbb3", "title": "UMH: A Hardware-Based Unified Memory Hierarchy for Systems with Multiple Discrete GPUs", "text": "In this article, we describe how to ease memory management between a Central Processing Unit (CPU) and one or multiple discrete Graphic Processing Units (GPUs) by architecting a novel hardware-based Unified Memory Hierarchy (UMH). Adopting UMH, a GPU accesses the CPU memory only if it does not find its required data in the directories associated with its high-bandwidth memory, or the NMOESI coherency protocol limits the access to that data. Using UMH with NMOESI improves performance of a CPU-multiGPU system by at least 1.92 \u00d7 in comparison to alternative software-based approaches. It also allows the CPU to access GPUs modified data by at least 13 \u00d7 faster."}
{"_id": "acaa45cd13067803d815d8228840d936e6b64844", "title": "Nonexistence of Certain Symmetrical Spherical Codes", "text": "A spherical 1-code Wis any finite subset of the unit sphere in n dimensions S n-1 for which d(u, v) > I for every u, v from W, u ~ v. A spherical 1-code is symmetric ifu ~ Wimplies u fi W. The best upper bounds on the size of symmetric spherical codes on S n-~ were obtained in [1]. Here we obtain the same bounds by a similar method and improve these bounds for n = 5, 10, 14 and 22."}
{"_id": "75318ad33441f12ba2b43bbcfeb59a805d4e247b", "title": "Understanding uncertainty and reducing vulnerability : lessons from resilience thinking", "text": "Vulnerability is registered not by exposure to hazards alone; it also resides in the resilience of the system experiencing the hazard. Resilience (the capacity of a system to absorb recurrent disturbances, such as natural disasters, so as to retain essential structures, processes and feedbacks) is important for the discussion of vulnerability for three reasons: (1) it helps evaluate hazards holistically in coupled human\u2013environment systems, (2) it puts the emphasis on the ability of a system to deal with a hazard, absorbing the disturbance or adapting to it, and (3) it is forward-looking and helps explore policy options for dealing with uncertainty and future change. Building resilience into human\u2013environment systems is an effective way to cope with change characterized by surprises and unknowable risks. There seem to be four clusters of factors relevant to building resilience: (1) learning to live with change and uncertainty, (2) nurturing various types of ecological, social and political diversity for increasing options and reducing risks, (3) increasing the range of knowledge for learning and problem-solving, and (4) creating opportunities for self-organization, including strengthening of local institutions and building cross-scale linkages and problem-solving networks."}
{"_id": "e5f42f1727e77cdb300dee189e323ce6710df813", "title": "\"Strong beats skinny every time\": Disordered eating and compulsive exercise in women who post fitspiration on Instagram.", "text": "OBJECTIVE\nFitspiration is a recent Internet trend designed to motivate people to eat healthily and to exercise. The aim of the study was to investigate disordered eating and exercise in women who post fitspiration on Instagram.\n\n\nMETHOD\nParticipants were 101 women who post fitspiration images on Instagram and a comparison group of 102 women who post travel images. Both groups completed measures of disordered eating and compulsive exercise.\n\n\nRESULTS\nWomen who post fitspiration images scored significantly higher on drive for thinness, bulimia, drive for muscularity, and compulsive exercise. Almost a fifth (17.5%) of these women were at risk for diagnosis of a clinical eating disorder, compared to 4.3% of the travel group. Compulsive exercise was related to disordered eating in both groups, but the relationship was significantly stronger for women who post fitspiration images.\n\n\nDISCUSSION\nFor some women, posting fitspiration images on Instagram may signify maladaptive eating and exercise behaviors. \u00a9 2016 Wiley Periodicals, Inc. (Int J Eat Disord 2017; 50:76-79)."}
{"_id": "d49748ab45d9430170e5a64d068fb223bff26306", "title": "Sixth sense technology: Comparisons and future predictions", "text": "This paper discusses a technology that is expected to widely spread in the near future, which is the Sixth Sense Technology. With this new technology yet to be introduced in the market, different implementation approaches are discussed. Such approaches are demonstrated through other inventions similar to the Sixth Sense (SS). They all fall into the same category, which is Augmented Reality Technologies. The paper focuses on the possible applications and opportunities of such technology. Furthermore, the implementation approaches are compared and the pros/cons of each approach are explored. Most importantly, technical challenges and open issues regarding each implementation approach are brought out. Furthermore, predictions are made on which approach is expected to succeed in the coming years. Finally, solutions are discussed to improve the Sixth Sense Technology with regards to its implementation approaches to insure new and better ways of human-computer interaction."}
{"_id": "8fc1f8efa0bf42b538f3f296989f9e830c8d12b6", "title": "Abacus: A Processor Family for Education", "text": "We present the Abacus processor family and its compiler framework for the MiniC language that we have developed for teaching processor architectures. Besides typical RISC instructions, Abacus also offers instructions for vector processing and thread synchronization, but it is still small enough to be discussed completely in a class. With reasonable effort, students can therefore modify given implementations of micro-architectures and code generators to deepen their understanding of the theoretical concepts. Moreover, using benchmark examples, they can explore the quantitative aspects of their optimizations. In contrast to commercial and other educational processors, we provide many micro-architectures that are based on a pure concept only rather than on a combination of concepts, and we provide code generators which contain the core ideas of some architectures."}
{"_id": "7387ba6113e033eaba653e78e18c567d12a32987", "title": "IFCaaS: Information Flow Control as a Service for Cloud Security", "text": "With the maturity of service-oriented architecture (SOA) and Web technologies, web services have become critical components of Software as a Service (SaaS) applications in cloud ecosystem environments. Most SaaS applications leverage multi-tenant data stores as a back end to keep and process data with high agility. Although these technologies promise impressive benefits, they put SaaS applications at risk against novel as well as prevalent attack vectors. This security risk is further magnified by the loss of control and lack of security enforcement over sensitive data manipulated by SaaS applications. An effective solution is needed to fulfill several requirements originating in the dynamic and complex nature of such applications. Inspired by the rise of Security as a Service (SecaaS) model, this paper introduces \"Information Flow Control as a Service (IFCaaS)\". IFCaaS lays the foundation of cloud-delivered IFC-based security analysis and monitoring services. As an example of the adoption of the IFCaaS, this paper presents a novel framework that addresses the detection of information flow vulnerabilities in SaaS applications. Our initial experiments show that the framework is a viable solution to protect against data integrity and confidentiality violations leading to information leakage."}
{"_id": "f3c7efbf1048e9bb7133587541f998702a9c39f6", "title": "A resistively degenerated wide-band passive mixer with low noise figure and +60dBm IIP2 in 0.18\u03bcm CMOS", "text": "This paper presents a wide-band CMOS passive mixer with 8 dB double side band (DSB) noise figure (NF) and 24 dB voltage gain with +60 dBm of uncalibrated IIP2 and +9 dBm of IIP3 at 2 GHz. The linearity is maintained for a wide frequency offset range to ensure interferer performance for in-band jammers. A source degeneration method to improve NF and linearity is introduced and analyzed. Gm boosting methods, such as input cross-coupling, current reuse, complementary input, and back gate connection, are used. The Mixer consumes only 10 mW from a 2V power supply for I and Q both channels including trans-impedance amplifier stages (TIA)."}
{"_id": "4808ca8a317821bb70e226c5ca8c23241dd22586", "title": "A Systematic Review on Mobile Health Care", "text": "This paper presents the state of the art from the available literature on mobile health care. The study was performed by means of a systematic review, a way of assessing and interpreting all available research on a particular topic in a given area, using a reliable and rigorous method. From an initial amount of 1,482 papers, we extracted and analysed data via a full reading of 40 (2.69%) of the papers that matched our selection criteria. Our analysis since 2010 show current development in 10 application areas and present ongoing trends and technical hallenges on the subject. The application areas include: patient monitoring, infrastructure, software architecture, modeling, framework, security, fications, multimedia, mobile cloud computing, and literature reviews on the topic. The most relevant challenges include the low battery life of devices, ultiplatform development, data transmission and security. Our paper consolidates ecent findings in the field and serves as a resourceful guide for future research planning and development."}
{"_id": "321f255bb1c60ddc374a775b47ae52f9a8a4985f", "title": "Truth in Advertising: The Hidden Cost of Mobile Ads for Software Developers", "text": "The \"free app\" distribution model has been extremely popular with end users and developers. Developers use mobile ads to generate revenue and cover the cost of developing these free apps. Although the apps are ostensibly free, they in fact do come with hidden costs. Our study of 21 real world Android apps shows that the use of ads leads to mobile apps that consume significantly more network data, have increased energy consumption, and require repeated changes to ad related code. We also found that complaints about these hidden costs are significant and can impact the ratings given to an app. Our results provide actionable information and guidance to software developers in weighing the tradeoffs of incorporating ads into their mobile apps."}
{"_id": "787f25b0bc1a75b367807a8b803b6d3dd7f4cbeb", "title": "Haptic feedback for pen computing: directions and strategies", "text": "We seek to improve the xperience of using pen computing devices by augmenting them with haptic, tactile feedback displays. We present the design of the haptic display for pen computers, and explore interaction techniques that allow users to feel GUI elements, textures, photographs and other interface elements with a pen. We discuss research directions in haptic displays for pen devices and report results of an early experimental study that evaluated the benefits of tactile feedback in pen computing."}
{"_id": "65ef843b336fb8bf0c8d4f9cca5797d6c1ad4d20", "title": "Chapter 6: Models of Process Variations in Device and Interconnect", "text": "Variation is the deviation from intended or designed values for a structure or circuit parameter of concern. The electrical performance of microprocessors or other integrated circuits are impacted by two sources of variation. Environmental factors are those which arise during the operation of a circuit, and include variations in power supply, switching activity, and temperature of the chip or across the chip. Physical factors during manufacture result in structural device and interconnect variations which are essentially permanent. These variations arise due to processing and masking limitations, and result in random or spatially varying deviations from designed parameter values. in this chapter we will focus on parametric variation due to continuously varying structural or electrical parameters, as these can signi cantly impact not only yield but also performance in high speed microprocessor and other digital circuits. Such parametric variation is becoming a larger concern, as variation and margins for device and"}
{"_id": "a2c3336f5c04cb01a20bfc1f48fb3ba3f6035246", "title": "Variable radius pulley design methodology for pneumatic artificial muscle-based antagonistic actuation systems", "text": "There is a growing interest in utilizing pneumatic artificial muscles (PAMs) as actuators for human-friendly robots. However, several performance drawbacks prevent the widespread use of PAMs. Although many approaches have been proposed to overcome the low control bandwidth of PAMs, some limitations of PAMs such as restricted workspace and torque capacity remain to be addressed. This paper analyzes the limitations of conventional circular pulley joints and subsequently proposes a design methodology to synthesize a pair of variable radius pulleys to improve joint torque capacity over a large workspace. Experimental results show that newly synthesized variable radius pulleys significantly improve position tracking performance in the enlarged workspace."}
{"_id": "221e1e14fe2245a2894ef449fc1508c92a73aca5", "title": "Object based Information Extraction from High Resolution Satellite Imagery using eCognition", "text": "High resolution images offer rich contextual information, including spatial, spectral and contextual information. In order to extract the information from these high resolution images, we need to utilize the spatial and contextual information of an object and its surroundings. If pixel based approaches are applied to extract information from such remotely sensed data, only spectral information is used. Thereby, in Pixel based approaches, information extraction is based exclusively on the gray level thresholding methods. To extract the certain features only from high resolution satellite imagery, this situation becomes worse. To overcome this situation an object-oriented approach is implemented. This paper demonstrated the concept of objectoriented information extraction from high resolution satellite imagery using eCognition software, allows the classification of remotely-sensed data based on different object features, such as spatial, spectral, and contextual information. Thus with the object based approach, information is extracted on the basis of meaningful image objects rather than individual gray values of pixels. The test area has different discrimination, assigned to different classes by this approach: Agriculture, built-ups, forest and water. The Multiresolution segmentation and the nearest neighbour (NN) classification approaches are used and overall accuracy is assessed. The overall accuracy reaches to 97.30% thus making it an efficient and practical approach for information extraction."}
{"_id": "0c6545a75ca1f42b2f10faf2de4c5b7bec000975", "title": "Connotation Frames: A Data-Driven Investigation", "text": "Through a particular choice of a predicate (e.g., \u201cx violated y\u201d), a writer can subtly connote a range of implied sentiments and presupposed facts about the entities x and y: (1) writer\u2019s perspective: projecting x as an \u201cantagonist\u201d and y as a \u201cvictim\u201d, (2) entities\u2019 perspective: y probably dislikes x, (3) effect: something bad happened to y, (4) value: y is something valuable, and (5) mental state: y is distressed by the event. We introduce connotation frames as a representation formalism to organize these rich dimensions of connotation using typed relations. First, we investigate the feasibility of obtaining connotative labels through crowdsourcing experiments. We then present models for predicting the connotation frames of verb predicates based on their distributional word representations and the interplay between different types of connotative relations. Empirical results confirm that connotation frames can be induced from various data sources that reflect how people use language and give rise to the connotative meanings. We conclude with analytical results that show the potential use of connotation frames for analyzing subtle biases in online news media."}
{"_id": "0fb926cae217b70c97c74eb70b2a6b8c47574812", "title": "Network biology: understanding the cell's functional organization", "text": "A key aim of postgenomic biomedical research is to systematically catalogue all molecules and their interactions within a living cell. There is a clear need to understand how these molecules and the interactions between them determine the function of this enormously complex machinery, both in isolation and when surrounded by other cells. Rapid advances in network biology indicate that cellular networks are governed by universal laws and offer a new conceptual framework that could potentially revolutionize our view of biology and disease pathologies in the twenty-first century."}
{"_id": "225050ee487b17ced9f05844f078ff5345f5c9cc", "title": "A Taxonomy for Artificial Embryogeny", "text": "A major challenge for evolutionary computation is to evolve phenotypes such as neural networks, sensory systems, or motor controllers at the same level of complexity as found in biological organisms. In order to meet this challenge, many researchers are proposing indirect encodings, that is, evolutionary mechanisms where the same genes are used multiple times in the process of building a phenotype. Such gene reuse allows compact representations of very complex phenotypes. Development is a natural choice for implementing indirect encodings, if only because nature itself uses this very process. Motivated by the development of embryos in nature, we define artificial embryogeny (AE) as the subdiscipline of evolutionary computation (EC) in which phenotypes undergo a developmental phase. An increasing number of AE systems are currently being developed, and a need has arisen for a principled approach to comparing and contrasting, and ultimately building, such systems. Thus, in this paper, we develop a principled taxonomy for AE. This taxonomy provides a unified context for long-term research in AE, so that implementation decisions can be compared and contrasted along known dimensions in the design space of embryogenic systems. It also allows predicting how the settings of various AE parameters affect the capacity to efficiently evolve complex phenotypes."}
{"_id": "c876c5fed5b6a3a91b5f55e1f776d629cc8ed9bc", "title": "An Analysis of Alpha-Beta Pruning", "text": ""}
{"_id": "1a590df6a3d209206ccdb7fe81e62dfe72525897", "title": "Dynamic queries for visual information seeking", "text": "Considers how dynamic queries allow users to \"fly through\" databases by adjusting widgets and viewing the animated results. In studies, users reacted to this approach with an enthusiasm more commonly associated with video games. Adoption requires research into retrieval and display algorithms and user-interface design. The author discusses how experts may benefit from visual interfaces because they will be able to formulate more complex queries and interpret intricate results.<>"}
{"_id": "b5c8473a6d707b17bfe3bb63fd873f0cc53b7e91", "title": "Evaluation of Business Processes for Business Process Standardization", "text": "Companies often implement multiple process variants in their organizations at the same time. Often these variants differ widely in terms of efficiency, quality and cycle time. In times of highly volatile global economic markets, companies cannot afford unnecessary redundant business processes. Business Process Standardization has therefore become a major topic in the field of Business Process Management, both in research and practice. Management decisions concerning standardization are difficult and complex, due to limited project budgets, organizational and technological changes as well as the increasing challenges of globalization. Choosing suitable processes for standardization is the essential precondition for streamlining business processes in the first place and therefore for reaping the full benefits of Business Process Standardization. However, there is hardly any tool available that supports the evaluation of business processes for standardization. To close this research gap, we develop an instrument for the evaluation of business processes in the context of Business Process Standardization with the aim to help practitioners in their day-to-day operations. Our research aims at providing the necessary preconditions for companies to profit from the advantages of business process standardization and to support decision-making."}
{"_id": "675913f7d834cf54a6d90b33e929b999cd6afc7b", "title": "An Improved PSFB PWM DC\u2013DC Converter for High-Power and Frequency Applications", "text": "In the phase shifted full bridge (PSFB) pulse width modulation (PWM) converter, external snubber capacitors are connected in parallel to insulated gate bipolar transistors (IGBTs) in order to decrease turn-off losses. The zero voltage transition (ZVT) condition is not provided at light loads, thus the parallel capacitors discharge through IGBTs at turn on which causes switching losses and failure risk of the IGBTs. Capacitor discharge through IGBT restricts the use of high-value snubber capacitors, and turn-off loss of the IGBT increases at high currents. This problematic condition occurs especially at the lagging leg. In this study, a new technique enabling the use of high-value snubber capacitors with the lagging leg of the PSFB PWM converter is proposed. As advantages of the proposed technique, high-capacitive discharge current through IGBT is prevented at light loads, the turn-off switching losses of the IGBTs are decreased, and the performance of the converter is improved at high currents. The proposed PSFB PWM converter includes an auxiliary circuit, and it has a simple structure, low cost, and ease of control as well. The operation principle and detailed design procedure of the converter are presented. The theoretical analysis is verified exactly by a prototype of 75 kHz and 10 kW converter."}
{"_id": "87f0d151e3dd8f72eab2efa762ec35c7c958fc03", "title": "An Analysis of Security and Privacy Issues in Smart Grid Software Architectures on Clouds", "text": "Power utilities globally are increasingly upgrading to Smart Grids that use bi-directional communication with the consumer to enable an information-driven approach to distributed energy management. Clouds offer features well suited for Smart Grid software platforms and applications, such as elastic resources and shared services. However, the security and privacy concerns inherent in an information-rich Smart Grid environment are further exacerbated by their deployment on Clouds. Here, we present an analysis of security and privacy issues in a Smart Grids software architecture operating on different Cloud environments, in the form of a taxonomy. We use the Los Angeles Smart Grid Project that is underway in the largest U.S. municipal utility to drive this analysis that will benefit both Cloud practitioners targeting Smart Grid applications, and Cloud researchers investigating security and privacy."}
{"_id": "c03a1354d39dfcd9ec6333809d1e6f867d58cbea", "title": "Fixing the mirrors: a feasibility study of the effects of dance movement therapy on young adults with autism spectrum disorder.", "text": "From the 1970s on, case studies reported the effectiveness of therapeutic mirroring in movement with children with autism spectrum disorder. In this feasibility study, we tested a dance movement therapy intervention based on mirroring in movement in a population of 31 young adults with autism spectrum disorder (mainly high-functioning and Asperger's syndrome) with the aim to increase body awareness, social skills, self-other distinction, empathy, and well-being. We employed a manualized dance movement therapy intervention implemented in hourly sessions once a week for 7 weeks. The treatment group (n = 16) and the no-intervention control group (n = 15) were matched by sex, age, and symptom severity. Participants did not participate in any other therapies for the duration of the study. After the treatment, participants in the intervention group reported improved well-being, improved body awareness, improved self-other distinction, and increased social skills. The dance movement therapy-based mirroring approach seemed to address more primary developmental aspects of autism than the presently prevailing theory-of-mind approach. Results suggest that dance movement therapy can be an effective and feasible therapy approach for autism spectrum disorder, while future randomized control trials with bigger samples are needed."}
{"_id": "ac71dd11dbb0dedb36f5ecd3a2d67f6ed623f234", "title": "Survival Analysis Part I: Basic concepts and first analyses", "text": "In many cancer studies, the main outcome under assessment is the time to an event of interest. The generic name for the time is survival time, although it may be applied to the time \u2018survived\u2019 from complete remission to relapse or progression as equally as to the time from diagnosis to death. If the event occurred in all individuals, many methods of analysis would be applicable. However, it is usual that at the end of follow-up some of the individuals have not had the event of interest, and thus their true time to event is unknown. Further, survival data are rarely Normally distributed, but are skewed and comprise typically of many early events and relatively few late ones. It is these features of the data that make the special methods called survival analysis necessary. This paper is the first of a series of four articles that aim to introduce and explain the basic concepts of survival analysis. Most survival analyses in cancer journals use some or all of Kaplan\u2013 Meier (KM) plots, logrank tests, and Cox (proportional hazards) regression. We will discuss the background to, and interpretation of, each of these methods but also other approaches to analysis that deserve to be used more often. In this first article, we will present the basic concepts of survival analysis, including how to produce and interpret survival curves, and how to quantify and test survival differences between two or more groups of patients. Future papers in the series cover multivariate analysis and the last paper introduces some more advanced concepts in a brief question and answer format. More detailed accounts of these methods can be found in books written specifically about survival analysis, for example, Collett (1994), Parmar and Machin (1995) and Kleinbaum (1996). In addition, individual references for the methods are presented throughout the series. Several introductory texts also describe the basis of survival analysis, for example, Altman (2003) and Piantadosi (1997)."}
{"_id": "ed6eef97c46eca5c5b1caa3f92ca7733b63eca15", "title": "Ternary Residual Networks", "text": "Sub-8-bit representation of DNNs incur some noticeable loss of accuracy despite rigorous (re)training at low-precision. Such loss of accuracy essentially makes them equivalent to a much shallower counterpart, diminishing the power of being deep networks. To address this problem of accuracy drop we introduce the notion of residual networks where we add more low-precision edges to sensitive branches of the sub-8-bit network to compensate for the lost accuracy. Further, we present a perturbation theory to identify such sensitive edges. Aided by such an elegant trade-off between accuracy and model size, the 8-2 architecture (8-bit activations, ternary weights), enhanced by residual ternary edges, turns out to be sophisticated enough to achieve similar accuracy as 8-8 representation (\u223c 1% drop from our FP-32 baseline), despite \u223c 1.6\u00d7 reduction in model size, \u223c 26\u00d7 reduction in number of multiplications , and potentially \u223c 2\u00d7 inference speed up comparing to 8-8 representation, on the state-of-the-art deep network ResNet-101 pre-trained on ImageNet dataset. Moreover, depending on the varying accuracy requirements in a dynamic environment, the deployed low-precision model can be upgraded/downgraded on-the-fly by partially enabling/disabling residual connections. For example, disabling the least important residual connections in the above enhanced network, the accuracy drop is \u223c 2% (from our FP-32 baseline), despite \u223c 1.9\u00d7 reduction in model size, \u223c 32\u00d7 reduction in number of multiplications, and potentially \u223c 2.3\u00d7 inference speed up comparing to 8-8 representation. Finally, all the ternary connections are sparse in nature, and the residual ternary conversion can be done in a resource-constraint setting without any low-precision (re)training and without accessing the data."}
{"_id": "47ae235743994840d412699ce8f7ce6cd8fc4c03", "title": "A Compact Meander-Line UHF RFID Tag Antenna Loaded With Elements Found in Right/Left-Handed Coplanar Waveguide Structures", "text": "A new planar meander-line antenna for passive UHF radio frequency identification (RFID) tags is presented. Specifically, a meander-line antenna is loaded periodically with coplanar waveguide (CPW) LC elements traditionally found in right/left-handed waveguide structures. It is shown that by using the antenna presented in this letter in a prototype passive UHF RFID tag, effective read ranges up to 4.87 m can be achieved. Many different dielectric substrates and CPW-LC load dimensions are investigated to illustrate how the input impedance, gain, and overall dimensions of the antenna are affected by these structural differences. It is shown that the overall dimension of the meander-line antenna can be reduced by slightly more than 18% with the introduction of the CPW-LC elements to the design. Several of the simulation results are validated by comparison with measurements."}
{"_id": "40c97ba0f8c50cae18151901c0ea2504bba4e79b", "title": "An Attribute-Based High-Level Image Representation for Scene Classification", "text": "Scene classification is increasingly popular due to its extensive usage in many real-world applications such as object detection, image retrieval, and so on. Traditionally, the low-level hand-crafted image representations are adopted to describe the scene images. However, they usually fail to detect semantic features of visual concepts, especially in handling complex scenes. In this paper, we propose a novel high-level image representation which utilizes image attributes as features for scene classification. More specifically, the attributes of each image are firstly extracted by a deep convolution neural network (CNN), which is trained to be a multi-label classifier by minimizing an element-wise logistic loss function. The process of generating attributes can reduce the \u201csemantic gap\u201d between the low-level feature representation and the high level scene meaning. Based on the attributes, we then build a system to discover semantically meaningful descriptions of the scene classes. Extensive experiments on four large-scale scene classification datasets show that our proposed algorithm considerably outperforms other state-of-the-art methods."}
{"_id": "fc068f7f8a3b2921ec4f3246e9b6c6015165df9a", "title": "Beyond Part Models: Person Retrieval with Refined Part Pooling (and A Strong Convolutional Baseline)", "text": "Employing part-level features offers fine-grained information for pedestrian image description.A prerequisite of part discovery is that each part should be well located. Instead of using external resources like pose estimator, we consider content consistency within each part for precise part location. Specifically, we target at learning discriminative part-informed features for person retrieval and make two contributions. (i) A network named Part-based Convolutional Baseline (PCB). Given an image input, it outputs a convolutional descriptor consisting of several part-level features. With a uniform partition strategy, PCB achieves competitive results with the state-of-the-art methods, proving itself as a strong convolutional baseline for person retrieval. (ii) A refined part pooling (RPP) method. Uniform partition inevitably incurs outliers in each part, which are in fact more similar to other parts. RPP re-assigns these outliers to the parts they are closest to, resulting in refined parts with enhanced within-part consistency. Experiment confirms that RPP allows PCB to gain another round of performance boost. For instance, on the Market-1501 dataset, we achieve (77.4+4.2)% mAP and (92.3+1.5)% rank-1 accuracy, surpassing the state of the art by a large margin. Code is available at: https://github.com/syfafterzy/PCB_RPP"}
{"_id": "a9d26023bcd6cf378c6b6391c4c5a431f66960de", "title": "Can Rumour Stance Alone Predict Veracity?", "text": "Prior manual studies of rumours suggested that crowd stance can give insights into the actual rumour veracity. Even though numerous studies of automatic veracity classification of social media rumours have been carried out, none explored the effectiveness of leveraging crowd stance to determine veracity. We use stance as an additional feature to those commonly used in earlier studies. We also model the veracity of a rumour using variants of Hidden Markov Models (HMM) and the collective stance information. This paper demonstrates that HMMs that use stance and tweets\u2019 times as the only features for modelling true and false rumours achieve F1 scores in the range of 80%, outperforming those approaches where stance is used jointly with content and user based features."}
{"_id": "e058aaa3db2af3d7ecae8a2af111ec05c765e8e0", "title": "Investigation on perceptual and robustness of LSB digital watermarking scheme on Halal Logo authentication", "text": "In Malaysia, Halal Certificate can only be issued by Jakim. For the time being, there is no specific method to verify authentication of Halal Certificate displayed at food or service premises. Watermarking technique however, offered a solution for authenticity of the data and copyright protection. In this paper, an investigation on perceptual and robustness of least significant bit (LSB) digital watermarking scheme on Halal Logo authentication is implemented using MATLAB software. LSB digital watermarking scheme allows pixel value modification by dividing its entire bit leaving most significant bit (contains most information) and least significant bit (contains less information). These small modifications offer a high perceptual transparency to the watermarked image. A Quick response (QR) code with message is generated as an embedded watermark image for Halal Logo. The watermarked image quality is measured based on Peak signal-to-noise (PSNR), mean square error (MSE), and Normalized Cross-Correlation (NC). The investigation shows the scheme provided high PSNR performance which is between 12-22 dB with Gaussian noise added. The scheme successfully shows the ability to retrieve the embed watermark even though the cover image visual quality is degraded with 50 % Gaussian noise variance. The reliability of the scheme is proven when it successfully to produce an acceptable 0.8442 NC value. The observation on perceptibility shows 51 dB of PSNR with 0.4714 MSE."}
{"_id": "d65df5c3f4b18b51eabde3553bf0ba8533e18042", "title": "FastQRE: Fast Query Reverse Engineering", "text": "We study the problem of Query Reverse Engineering (QRE), where given a database and an output table, the task is to find a simple project-join SQL query that generates that table when applied on the database. This problem is known for its efficiency challenge due to mainly two reasons. First, the problem has a very large search space and its various variants are known to be NP-hard. Second, executing even a single candidate SQL query can be very computationally expensive. In this work we propose a novel approach for solving the QRE problem efficiently. Our solution outperforms the existing state of the art by 2-3 orders of magnitude for complex queries, resolving those queries in seconds rather than days, thus making our approach more practical in real-life settings."}
{"_id": "be724be238ee13f499d280446b6b503b49f3d3aa", "title": "Putting Things in Context: Community-specific Embedding Projections for Sentiment Analysis", "text": "Variation in language is ubiquitous, and is particularly evident in newer forms of writing such as social media. Fortunately, variation is not random, but is usually linked to social factors. By exploiting linguistic homophily \u2014 the tendency of socially linked individuals to use language similarly \u2014 it is possible to build models that are more robust to variation. In this paper, we focus on social network communities, which make it possible to generalize sociolinguistic properties from authors in the training set to authors in the test sets, without requiring demographic author metadata. We detect communities via standard graph clustering algorithms, and then exploit these communities by learning community-specific projections of word embeddings. These projections capture shifts in word meaning in different social groups; by modeling them, we are able to improve the overall accuracy of Twitter sentiment analysis by a significant margin over competitive prior work."}
{"_id": "6d8897f5cc94868c02f29ce365148cd4df039f5d", "title": "Learning Deployable Navigation Policies at Kilometer Scale from a Single Traversal", "text": "Model-free reinforcement learning has recently been shown to be effective at learning navigation policies from complex image input. However, these algorithms tend to require large amounts of interaction with the environment, which can be prohibitively costly to obtain on robots in the real world. We present an approach for efficiently learning goal-directed navigation policies on a mobile robot, from only a single coverage traversal of recorded data. The navigation agent learns an effective policy over a diverse action space in a large heterogeneous environment consisting of more than 2km of travel, through buildings and outdoor regions that collectively exhibit large variations in visual appearance, self-similarity, and connectivity. We compare pretrained visual encoders that enable precomputation of visual embeddings to achieve a throughput of tens of thousands of transitions per second at training time on a commodity desktop computer, allowing agents to learn from millions of trajectories of experience in a matter of hours. We propose multiple forms of computationally efficient stochastic augmentation to enable the learned policy to generalise beyond these precomputed embeddings, and demonstrate successful deployment of the learned policy on the real robot without fine tuning, despite environmental appearance differences at test time. The dataset and code required to reproduce these results and apply the technique to other datasets and robots is made publicly available at rl-navigation.github.io/deployable."}
{"_id": "ce56eb0d9841b6e727077e9460b938f78506b324", "title": "Virtual reality using games for improving physical functioning in older adults: a systematic review", "text": "The use of virtual reality through exergames or active video game, i.e. a new form of interactive gaming, as a complementary tool in rehabilitation has been a frequent focus in research and clinical practice in the last few years. However, evidence of their effectiveness is scarce in the older population. This review aim to provide a summary of the effects of exergames in improving physical functioning in older adults. A search for randomized controlled trials was performed in the databases EMBASE, MEDLINE, PsyInfo, Cochrane data base, PEDro and ISI Web of Knowledge. Results from the included studies were analyzed through a critical review and methodological quality by the PEDro scale. Thirteen studies were included in the review. The most common apparatus for exergames intervention was the Nintendo Wii gaming console (8 studies), followed by computers games, Dance video game with pad (two studies each) and only one study with the Balance Rehabilitation Unit. The Timed Up and Go was the most frequently used instrument to assess physical functioning (7 studies). According to the PEDro scale, most of the studies presented methodological problems, with a high proportion of scores below 5 points (8 studies). The exergames protocols and their duration varied widely, and the benefits for physical function in older people remain inconclusive. However, a consensus between studies is the positive motivational aspect that the use of exergames provides. Further studies are needed in order to achieve better methodological quality, external validity and provide stronger scientific evidence."}
{"_id": "050f9321d755d070bd278130d215cf08abc66a6e", "title": "A Novel Implementation of Phase Control Technique for Speed Control of Induction Motor Using ARDUINO", "text": "Induction motors are the most widely used motors for appliances, industrial control and automation. However there arises a problem in voltage levels, which affects the speed of induction motor. As the voltage (V) is directly proportional to speed (N), we need to control the stator voltage which controls the speed proportionally. In this Paper a novel Open loop phase control method is developed by coding a program using ARDUINO software in which ARDUINO controller takes input from the user and generates firing pulses for the TRIAC which controls the speed of the Induction motor. The total process is executed with the help of an ARDUINO controller kit where ARDUINO and Tera-Term softwares are used for Micro Controller and for serial monitor. This results in variable speed control of Induction motor."}
{"_id": "283ae048d4bd7603bbf7bdae059580079439e1b8", "title": "The Gremlin graph traversal machine and language (invited talk)", "text": "Gremlin is a graph traversal machine and language designed, developed, and distributed by the Apache TinkerPop project. Gremlin, as a graph traversal machine, is composed of three interacting components: a graph, a traversal, and a set of traversers. The traversers move about the graph according to the instructions specified in the traversal, where the result of the computation is the ultimate locations of all halted traversers. A Gremlin machine can be executed over any supporting graph computing system such as an OLTP graph database and/or an OLAP graph processor. Gremlin, as a graph traversal language, is a functional language implemented in the user's native programming language and is used to define the traversal of a Gremlin machine. This article provides a mathematical description of Gremlin and details its automaton and functional properties. These properties enable Gremlin to naturally support imperative and declarative querying, host language agnosticism, user-defined domain specific languages, an extensible compiler/optimizer, single- and multi-machine execution models, hybrid depth- and breadth-first evaluation, as well as the existence of a Universal Gremlin Machine and its respective entailments."}
{"_id": "c81a574a47a0c21149e1efbdcaa251786a18484d", "title": "Understanding Internet Banking Adoption and Use Behavior: A Hong Kong Perspective", "text": "Hong Kong is an international financial center well known for its efficiency and its ability to adapt and keep up with the times. Recently, however, the Hong Kong banking industry has been losing competitive advantages in some areas, with the adoption of Internet Banking being one of them. Hong Kong banks have been slower than some other international banks in joining the e-commerce evolution, which first emerged in the United States in mid-90s. Financial institutions in the U.S. have introduced and promoted online banking to provide better customer services. Many property and stock investment firms in Hong Kong have also jumped on the bandwagon and adopted the Internet as a channel for providing better and more efficient services ABSTRACT"}
{"_id": "bee2b9755532c5577fd8b8dee6e0845fc19e62bd", "title": "Phone Merging For Code-Switched Speech Recognition", "text": "Speakers in multilingual communities often switch between or mix multiple languages in the same conversation. Automatic Speech Recognition (ASR) of codeswitched speech faces many challenges including the influence of phones of different languages on each other. This paper shows evidence that phone sharing between languages improves the Acoustic Model performance for Hindi-English code-switched speech. We compare baseline system built with separate phones for Hindi and English with systems where the phones were manually merged based on linguistic knowledge. Encouraged by the improved ASR performance after manually merging the phones, we further investigate multiple data-driven methods to identify phones to be merged across the languages. We show detailed analysis of automatic phone merging in this language pair and the impact it has on individual phone accuracies and WER. Though the best performance gain of 1.2% WER was observed with manually merged phones, we show experimentally that the manual phone merge is not optimal."}
{"_id": "1e6dda18549f74155ca3bc6d144f439108aa5474", "title": "Constant Communication ORAM with Small Blocksize", "text": "There have been several attempts recently at using homomorphic encryption to increase the efficiency of Oblivious RAM protocols. One of the most successful has been Onion ORAM, which achieves O(1) communication overhead with polylogarithmic server computation. However, it has two drawbacks. It requires a large block size of B = \u03a9(log6 N) with large constants. Moreover, while it only needs polylogarithmic computation complexity, that computation consists mostly of expensive homomorphic multiplications. In this work, we address these problems and reduce the required block size to \u03a9(log4 N). We remove most of the homomorphic multiplications while maintaining O(1) communication complexity. Our idea is to replace their homomorphic eviction routine with a new, much cheaper permute-and-merge eviction which eliminates homomorphic multiplications and maintains the same level of security. In turn, this removes the need for layered encryption that Onion ORAM relies on and reduces both the minimum block size and server computation."}
{"_id": "32bd968e6cf31e69ee5fca14d3eadeec7f4187c6", "title": "Monocular Pedestrian Detection: Survey and Experiments", "text": "Pedestrian detection is a rapidly evolving area in computer vision with key applications in intelligent vehicles, surveillance, and advanced robotics. The objective of this paper is to provide an overview of the current state of the art from both methodological and experimental perspectives. The first part of the paper consists of a survey. We cover the main components of a pedestrian detection system and the underlying models. The second (and larger) part of the paper contains a corresponding experimental study. We consider a diverse set of state-of-the-art systems: wavelet-based AdaBoost cascade, HOG/linSVM, NN/LRF, and combined shape-texture detection. Experiments are performed on an extensive data set captured onboard a vehicle driving through urban environment. The data set includes many thousands of training samples as well as a 27-minute test sequence involving more than 20,000 images with annotated pedestrian locations. We consider a generic evaluation setting and one specific to pedestrian detection onboard a vehicle. Results indicate a clear advantage of HOG/linSVM at higher image resolutions and lower processing speeds, and a superiority of the wavelet-based AdaBoost cascade approach at lower image resolutions and (near) real-time processing speeds. The data set (8.5 GB) is made public for benchmarking purposes."}
{"_id": "05fe01b57b3ba58dc5029c068a48567b55018ea5", "title": "Fast Human Detection Using a Cascade of Histograms of Oriented Gradients", "text": "We integrate the cascade-of-rejectors approach with the Histograms of Oriented Gradients (HoG) features to achieve a fast and accurate human detection system. The features used in our system are HoGs of variable-size blocks that capture salient features of humans automatically. Using AdaBoost for feature selection, we identify the appropriate set of blocks, from a large set of possible blocks. In our system, we use the integral image representation and a rejection cascade which significantly speed up the computation. For a 320 \u00d7 280 image, the system can process 5 to 30 frames per second depending on the density in which we scan the image, while maintaining an accuracy level similar to existing methods."}
{"_id": "26d644962cf14405f38502c3848473422d810ea5", "title": "Flexible HALS algorithms for sparse non-negative matrix/tensor factorization", "text": "In this paper we propose a family of new algorithms for non-negative matrix/tensor factorization (NMF/NTF) and sparse nonnegative coding and representation that has many potential applications in computational neuroscience, multi-sensory, multidimensional data analysis and text mining. We have developed a class of local algorithms which are extensions of hierarchical alternating least squares (HALS) algorithms proposed by us in . For these purposes, we have performed simultaneous constrained minimization of a set of robust cost functions called alpha and beta divergences. Our algorithms are locally stable and work well for the NMF blind source separation (BSS) not only for the over-determined case but also for an under-determined (over-complete) case (i.e., for a system which has less sensors than sources) if data are sufficiently sparse. The NMF learning rules are extended and generalized for N-th order nonnegative tensor factorization (NTF). Moreover, new algorithms can be potentially accommodated to different noise statistics by just adjusting a single parameter. Extensive experimental results confirm the validity and high performance of the developed algorithms, especially, with usage of the multi-layer hierarchical approach ."}
{"_id": "8d305d174bc9b36d5aae54bb565b3c24fb8efc28", "title": "We don't need another hero.", "text": "Everybody loves the stories of heroes like Martin Luther King, Jr., Mother Teresa, and Gandhi. But the heroic model of moral leadership usually doesn't work in the corporate world. Modesty and restraint are largely responsible for the achievements of the most effective moral leaders in business. The author, a specialist in business ethics, says the quiet leaders he has studied follow four basic rules in meeting ethical challenges and making decisions. The rules constitute an important resource for executives who want to encourage the development of such leaders among their middle managers. The first rule is \"Put things off till tomorrow.\" The passage of time allows turbulent waters to calm and lets leaders' moral instincts emerge. \"Pick your battles\" means that quiet leaders don't waste political capital on fights they can't win; they save it for occasions when they really want to fight. \"Bend the rules, don't break them\" sounds easier than it is--bending the rules in order to resolve a complicated situation requires imagination, discipline, restraint, flexibility, and entrepreneurship. The fourth rule, \"Find a compromise,\" reflects the author's finding that quiet leaders try not to see situations as polarized tests of ethical principles. These individuals work hard to craft compromises that are \"good enough\"--responsible and workable enough--to satisfy themselves, their companies, and their customers. The vast majority of difficult problems are solved through the consistent striving of people working far from the limelight. Their quiet approach to leadership doesn't inspire, thrill, or provide story lines for uplifting TV shows. But the unglamorous efforts of quiet leaders make a tremendous difference every day in the corporate world."}
{"_id": "4f72ddff72210319272122bb588007cb12caf8f9", "title": "A motivational theory of life-span development.", "text": "This article had four goals. First, the authors identified a set of general challenges and questions that a life-span theory of development should address. Second, they presented a comprehensive account of their Motivational Theory of Life-Span Development. They integrated the model of optimization in primary and secondary control and the action-phase model of developmental regulation with their original life-span theory of control to present a comprehensive theory of development. Third, they reviewed the relevant empirical literature testing key propositions of the Motivational Theory of Life-Span Development. Finally, because the conceptual reach of their theory goes far beyond the current empirical base, they pointed out areas that deserve further and more focused empirical inquiry."}
{"_id": "08f9a62cdbe43fca7199147123a7d957892480af", "title": "Chip and Skim: Cloning EMV Cards with the Pre-play Attack", "text": "EMV, also known as \"Chip and PIN\", is the leading system for card payments worldwide. It is used throughout Europe and much of Asia, and is starting to be introduced in North America too. Payment cards contain a chip so they can execute an authentication protocol. This protocol requires point-of-sale (POS) terminals or ATMs to generate a nonce, called the unpredictable number, for each transaction to ensure it is fresh. We have discovered two serious problems: a widespread implementation flaw and a deeper, more difficult to fix flaw with the EMV protocol itself. The first flaw is that some EMV implementers have merely used counters, timestamps or home-grown algorithms to supply this nonce. This exposes them to a \"pre-play\" attack which is indistinguishable from card cloning from the standpoint of the logs available to the card-issuing bank, and can be carried out even if it is impossible to clone a card physically. Card cloning is the very type of fraud that EMV was supposed to prevent. We describe how we detected the vulnerability, a survey methodology we developed to chart the scope of the weakness, evidence from ATM and terminal experiments in the field, and our implementation of proof-of-concept attacks. We found flaws in widely-used ATMs from the largest manufacturers. We can now explain at least some of the increasing number of frauds in which victims are refused refunds by banks which claim that EMV cards cannot be cloned and that a customer involved in a dispute must therefore be mistaken or complicit. The second problem was exposed by the above work. Independent of the random number quality, there is a protocol failure: the actual random number generated by the terminal can simply be replaced by one the attacker used earlier when capturing an authentication code from the card. This variant of the pre-play attack may be carried out by malware in an ATM or POS terminal, or by a man-in-the-middle between the terminal and the acquirer. We explore the design and implementation mistakes that enabled these flaws to evade detection until now: shortcomings of the EMV specification, of the EMV kernel certification process, of implementation testing, formal analysis, and monitoring customer complaints. Finally we discuss countermeasures. More than a year after our initial responsible disclosure of these flaws to the banks, action has only been taken to mitigate the first of them, while we have seen a likely case of the second in the wild, and the spread of ATM and POS malware is making it ever more of a threat."}
{"_id": "3e833160f25d79e158b6079fdfd8f2bff942c00b", "title": "A Simplified Agile Methodology for Ontology Development", "text": "In this paper we introduce SAMOD, a.k.a. Simplified Agile Methodology for Ontology Development, a novel agile methodology for the development of ontologies by means of small steps of an iterative workflow that focuses on creating well-developed and documented models starting from exemplar domain descriptions. RASH: https://w3id.org/people/essepuntato/papers/samod-owled2016.html"}
{"_id": "176d486ccd79bdced91d1d42dbd3311a95e449de", "title": "Does Inclusion of CIO in Top Management Team Impact Firm Performance? Evidence from a Long-Term Event Analysis", "text": "Inclusion of chief information officers (CIOs) in top management teams (TMTs) increases the functional and knowledge diversity of the teams, and elevates the role of IT in strategic business decisions. Considering that firms are increasingly relying on IT for operational efficiency and competitive advantage, TMTs with IT diversity can be expected to have superior performance compared to those without. While many studies have argued conceptually the importance of CIOs membership in TMTs, very few studies examined the performance effects of this membership. The goal of this study is to investigate this relationship. Moreover, we consider contingency factors that may have an effect on the focal relationship. Using firm-level secondary data and long-term event study methodology, we show that inclusion of CIO in TMT has a significant positive effect on firm performance, and this positive effect is larger for firms in dynamic environments and during the more recent years."}
{"_id": "6a4b4f04d5ce3c3592832eb40c23cc8fc5a9131e", "title": "Optical Character Recognition", "text": "72 \uf020 Abstract\u2014 The Optical Character Recognition is a mobile application. It uses smart mobile phones of android platform. This paper combines the functionality of Optical Character Recognition and speech synthesizer. The objective is to develop user friendly application which performs image to speech conversion system using android phones. The OCR takes image as the input, gets text from that image and then converts it into speech. This system can be useful in various applications like banking, legal industry, other industries, and home and office automation. It mainly designed for people who are unable to read any type of text documents. In this paper, the character recognition method is presented by using OCR technology and android phone with higher quality camera."}
{"_id": "7e895b230760ac34ce3d5472fb1cbf92b08566cc", "title": "AGE STANDARDIZATION OF RATES : A NEW WHO STANDARD", "text": "2 Summary A recent WHO analysis has revealed the need for a new world standard population (see attached table). This has become particularly pertinent given the rapid and continued declines in age-specific mortality rates among the oldest old, and the increasing availability of epidemiological data for higher age groups. There is clearly no conceptual justification for choosing one standard over another, hence the choice is arbitrary. However, choosing a standard population with higher proportions in the younger age groups tends to weight events at these ages disproportionately. Similarly, choosing an older standard does the opposite. Hence, rather than selecting a standard to match the current age-structure of some population(s), the WHO adopted a standard based on the average age-structure of those populations to be compared (the world) over the likely period of time that a new standard will be used (some 25-30 years), using the latest UN assessment for 1998 (UN Population Division, 1998). From these estimates, an average world population age-structure was constructed for the period 2000-2025. The use of an average world population, as well as a time series of observations, removes the effects of historical events such as wars and famine on population age composition. The terminal age group in the new WHO standard population has been extended out to 100 years and over, rather than the 85 and over as is the current practice. The WHO World Standard population has fewer children and notably more adults aged 70 and above than the world standard. It is also notably younger than the European standard. It is important to note, however, that the age standardized death rates based on the new standard are not comparable to previous estimates that are based on some earlier standard(s). However, to facilitate comparative analyses, WHO will disseminate trend analyses of the \" complete \" historical mortality data using on the new WHO World Standard Population in future editions of the World Health Statistics Annual."}
{"_id": "e4c8421ffb0e203863f4ca3a597e1acde9044080", "title": "\"Everybody knows what you're doing\": a critical design approach to personal informatics", "text": "We present an alternative approach to the design of personal informatics systems: instead of motivating people to examine their own behaviors, this approach promotes awareness of and reflection on the infrastructures behind personal informatics and the modes of engagement that they promote. Specifically, this paper presents an interface that displays personal web browsing data. The interface aims to reveal underlying infrastructure using several methods: drawing attention to the scope of mined data by displaying deliberately selected sensitive data, using purposeful malfunction as a way to encourage reverse engineering, and challenging normative expectations around data mining by displaying information in unconventional ways. Qualitative results from a two-week deployment show that these strategies can raise people's awareness about data mining, promote efficacy and control over personal data, and inspire reflection on the goals and assumptions embedded in infrastructures for personal data analytics."}
{"_id": "b8049001776d63f32b7111268b4f05a4731af959", "title": "The neurophysiology of unmyelinated tactile afferents", "text": "CT (C tactile) afferents are a distinct type of unmyelinated, low-threshold mechanoreceptive units existing in the hairy but not glabrous skin of humans and other mammals. Evidence from patients lacking myelinated tactile afferents indicates that signaling in these fibers activate the insular cortex. Since this system is poor in encoding discriminative aspects of touch, but well-suited to encoding slow, gentle touch, CT fibers in hairy skin may be part of a system for processing pleasant and socially relevant aspects of touch. CT fiber activation may also have a role in pain inhibition. This review outlines the growing evidence for unique properties and pathways of CT afferents."}
{"_id": "f75a1b4a87bb9eecaecfcb4acf8b0d60bd2cf334", "title": "The nature of cyberbullying, and strategies for prevention", "text": "0747-5632/$ see front matter 2012 Elsevier Ltd. A http://dx.doi.org/10.1016/j.chb.2012.05.024 \u21d1 Corresponding author. Address: Unit for School an University of London, UK. Tel.: +44 (0)207 919 7870; E-mail address: r.slonje@gmail.com (R. Slonje). Cyberbullying has been identified as an important problem amongst youth in the last decade. This paper reviews some recent findings and discusses general concepts within the area. The review covers definitional issues such as repetition and power imbalance, types of cyberbullying, age and gender differences, overlap with traditional bullying and sequence of events, differences between cyberbullying and traditional bullying, motives for and impact of cyber victimization, coping strategies, and prevention/intervention possibilities. These issues will be illustrated by reference to recent and current literature, and also by in-depth interviews with nine Swedish students aged 13\u201315 years, who had some first-hand experience of one or more cyberbullying episodes. We conclude by discussing the evidence for different coping, intervention and prevention strategies. 2012 Elsevier Ltd. All rights reserved."}
{"_id": "15f5ce559c8f3ea14a59cf49bacead181545dfb0", "title": "Short Group Signatures", "text": "We construct a short group signature scheme. Signatures in our scheme are approximately the size of a standard RSA signature with the same security. Security of our group signature is based on the Strong Diffie-Hellman assumption and a new assumption in bilinear groups called the Decision Linear assumption. We prove security of our system, in the random oracle model, using a variant of the security definition for group signatures recently given by Bellare, Micciancio, and Warinschi."}
{"_id": "79b4d9f478fa1e0306cd8e69f49b984b29d326a9", "title": "Four types of context for automatic spelling correction", "text": "This paper presents an investigation on using four types of contextual information for improving the accuracy of automatic correction of single-token non-word misspellings. The task is framed as contextually-informed re-ranking of correction candidates. Immediate local context is captured by word n-grams statistics from a Web-scale language model. The second approach measures how well a candidate correction fits in the semantic fabric of the local lexical neighborhood, using a very large Distributional Semantic Model. In the third approach, recognizing a misspelling as an instance of a recurring word can be useful for reranking. The fourth approach looks at context beyond the text itself. If the approximate topic can be known in advance, spelling correction can be biased towards the topic. Effectiveness of proposed methods is demonstrated with an annotated corpus of 3,000 student essays from international high-stakes English language assessments. The paper also describes an implemented system that achieves high accuracy on this task. R\u00c9SUM\u00c9. Cet article pr\u00e9sente une enqu\u00eate sur l\u2019utilisation de quatre types d\u2019informations contextuelles pour am\u00e9liorer la pr\u00e9cision de la correction automatique de fautes d\u2019orthographe de mots seuls. La t\u00e2che est pr\u00e9sent\u00e9e comme un reclassement contextuellement inform\u00e9. Le contexte local imm\u00e9diat, captur\u00e9 par statistique de mot n-grammes est mod\u00e9lis\u00e9 \u00e0 partir d\u2019un mod\u00e8le de langage \u00e0 l\u2019\u00e9chelle du Web. La deuxi\u00e8me m\u00e9thode consiste \u00e0 mesurer \u00e0 quel point une correction s\u2019inscrit dans le tissu s\u00e9mantique local, en utilisant un tr\u00e8s grand mod\u00e8le s\u00e9mantique distributionnel. La troisi\u00e8me approche reconnaissant une faute d\u2019orthographe comme une instance d\u2019un mot r\u00e9current peut \u00eatre utile pour le reclassement. La quatri\u00e8me approche s\u2019attache au contexte au-del\u00e0 du texte lui-m\u00eame. Si le sujet approximatif peut \u00eatre connu \u00e0 l\u2019avance, la correction orthographique peut \u00eatre biais\u00e9e par rapport au sujet. L\u2019efficacit\u00e9 des m\u00e9thodes propos\u00e9es est d\u00e9montr\u00e9e avec un corpus annot\u00e9 de 3 000 travaux d\u2019\u00e9tudiants des \u00e9valuations internationales de langue anglaise. Le document d\u00e9crit \u00e9galement un syst\u00e8me mis en place qui permet d\u2019obtenir une grande pr\u00e9cision sur cette t\u00e2che."}
{"_id": "f3dcd5e56344b635647f9d3005212677a1eec8cd", "title": "On the Power Leakage Problem in Beamspace MIMO Systems with Lens Antenna Array", "text": "The recently proposed concept of beamspace MIMO can significantly reduce the number of power- hungry radio frequency (RF) chains in millimeter- wave (mmWave) massive MIMO systems. However, most existing studies ignore the power leakage problem in beamspace MIMO systems, which results in an obvious loss in the achievable sum rate. In this paper, a phase shifter network (PSN)-based precoding structure is proposed to solve this problem. Its key idea is to employ multiple phase shifters from each RF chain to select multiple instead of only one beam to collect most of the leaked power. Based on the proposed structure, a rotation-based precoding algorithm is further designed to maximize the signal-to-noise-ratio (SNR) of each user by rotating the channel gains of the selected beams to the same direction. Simulation results show that the proposed PSN- based precoding can effectively collect the leaked power to achieve the near-optimal sum rate, and enjoys a higher energy efficiency than the conventional precoding solutions."}
{"_id": "8da7ad35bebac705cd988943e8c6df00ade0f7a5", "title": "A Random Linear Network Coding Approach to Multicast", "text": "We present a distributed random linear network coding approach for transmission and compression of information in general multisource multicast networks. Network nodes independently and randomly select linear mappings from inputs onto output links over some field. We show that this achieves capacity with probability exponentially approaching 1 with the code length. We also demonstrate that random linear coding performs compression when necessary in a network, generalizing error exponents for linear Slepian-Wolf coding in a natural way. Benefits of this approach are decentralized operation and robustness to network changes or link failures. We show that this approach can take advantage of redundant network capacity for improved success probability and robustness. We illustrate some potential advantages of random linear network coding over routing in two examples of practical scenarios: distributed network operation and networks with dynamically varying connections. Our derivation of these results also yields a new bound on required field size for centralized network coding on general multicast networks"}
{"_id": "6af4d0bd3d4d00bf0d8750876fd3e36ab6a95c67", "title": "Insecticide Resistance Alleles Affect Vector Competence of Anopheles gambiae s.s. for Plasmodium falciparum Field Isolates", "text": "The widespread insecticide resistance raises concerns for vector control implementation and sustainability particularly for the control of the main vector of human malaria, Anopheles gambiae sensu stricto. However, the extent to which insecticide resistance mechanisms interfere with the development of the malignant malaria parasite in its vector and their impact on overall malaria transmission remains unknown. We explore the impact of insecticide resistance on the outcome of Plasmodium falciparum infection in its natural vector using three An. gambiae strains sharing a common genetic background, one susceptible to insecticides and two resistant, one homozygous for the ace-1(R) mutation and one for the kdr mutation. Experimental infections of the three strains were conducted in parallel with field isolates of P. falciparum from Burkina Faso (West Africa) by direct membrane feeding assays. Both insecticide resistant mutations influence the outcome of malaria infection by increasing the prevalence of infection. In contrast, the kdr resistant allele is associated with reduced parasite burden in infected individuals at the oocyst stage, when compared to the susceptible strain, while the ace-1 (R) resistant allele showing no such association. Thus insecticide resistance, which is particularly problematic for malaria control efforts, impacts vector competence towards P. falciparum and probably parasite transmission through increased sporozoite prevalence in kdr resistant mosquitoes. These results are of great concern for the epidemiology of malaria considering the widespread pyrethroid resistance currently observed in Sub-Saharan Africa and the efforts deployed to control the disease."}
{"_id": "0e76e8d623882945ef8891a02fc01b3675fd1222", "title": "Learning Multimodal Transition Dynamics for Model-Based Reinforcement Learning", "text": "In this paper we study how to learn stochastic, multimodal transition dynamics in reinforcement learning (RL) tasks. We focus on evaluating transition function estimation, while we defer planning over this model to future work. Stochasticity is a fundamental property of many task environments. However, discriminative function approximators have difficulty estimating multimodal stochasticity. In contrast, deep generative models do capture complex high-dimensional outcome distributions. First we discuss why, amongst such models, conditional variational inference (VI) is theoretically most appealing for model-based RL. Subsequently, we compare different VI models on their ability to learn complex stochasticity on simulated functions, as well as on a typical RL gridworld with multimodal dynamics. Results show VI successfully predicts multimodal outcomes, but also robustly ignores these for deterministic parts of the transition dynamics. In summary, we show a robust method to learn multimodal transitions using function approximation, which is a key preliminary for model-based RL in stochastic domains."}
{"_id": "2ec08f3979701ce3046d0318ba6b312345eef772", "title": "Optimizing query execution for variable-aligned length compression of bitmap indices", "text": "Indexing is a fundamental mechanism for efficient data access. Recently, we proposed the Variable-Aligned Length (VAL) bitmap index encoding framework, which generalizes the commonly used word-aligned compression techniques. VAL presented a variable-aligned compression framework, which allows columns of a bitmap to be compressed using different encoding lengths. This flexibility creates a tunable compression that balances the trade-off between space and query processing time. The variable format of VAL presents several unique opportunities for query optimization.\n In this paper we explore multiple algorithms to optimize both point queries and range queries in VAL. In particular, we propose a dynamic encoding-length translation heuristic to process point queries. For range queries, we propose several column orderings based on the bitmap's metadata: largest segment length first (lsf), column size (size), and weighted size (ws). In our empirical study over both real and synthetic data sets, we show that our dynamic translation selection scheme produces query execution times only 3.5% below the optimal. We also found that the weighted size column ordering significantly and consistently out-performs other ordering techniques. Finally, we show that algorithms scale to data sets that are row-ordered."}
{"_id": "99777c714611f3c7b6f3d2434de4cdd8ac23cc6f", "title": "Iris Recognition Based on Combined Feature of GLCM and Wavelet Transform", "text": "Iris biometric is considered to be one of the most efficient and trusted biometric method for authenticating users. This paper presents a method for iris recognition based on hybrid feature set of wavelet and GLCM features, a non-filter based technique, combined with Haar wavelet transform to increase the efficiency of the system. Here we combine frequency domain feature with spatial domain feature to increase overall efficiency of system. Probability neural Network is used to classify the features. Results show that the overall system efficiency is 94% with false rejection rate higher than false acceptance rate. The technique is tested on CASIA Iris database."}
{"_id": "12e09efc451f066fe1d81ce52c546dcffa37f8d5", "title": "Modeling of rolling friction by recurrent neural network using LSTM", "text": "The modeling and identification of a mechanical system is the most important issue for many control systems in order to realize the desired control specifications. In particular, the friction characteristics often deteriorate the control performance, such as in the fast and precise positioning performance in industrial robots, the force estimation accuracy based on a disturbance observer, and the posture control performance of an inverted pendulum robot. Rolling friction tends to cause overshoot, undershoot, or limit cycles of the target value in positioning systems. In previous research, some model structures for rolling friction have been proposed to express the hysteresis characteristics in order to overcome these control issues. However, it is difficult to identify the correct parameters for precise modeling. In this paper, the modeling of rolling friction based on a Recurrent Neural Network (RNN) using Long Short-Term Memory (LSTM) is proposed to precisely express the rolling friction characteristics. The initial value design of the RNN during supervised learning is also presented to achieve a better model. The effectiveness of the proposed approach is verified by comparison with conventional friction models using an actual experimental setup."}
{"_id": "96084442678300ac8be7778cf10a1379d389901f", "title": "Additive manufacturing of microwave components: Different approaches and methodologies", "text": "In this paper, two different approaches for the additive manufacturing of microwave component are discussed. The first approach consists is the popular and cheap fused deposition modeling (FDM). In this paper it is shown that, by using different infill factor, FDM is suitable for the manufacturing of devices with controlled dielectric constant. The second approach is the Stereolithography (SLA). With this approach, better results can be obtained in terms of resolution. Furthermore a very easy way to copper plate the surface of microwave devices is shown and it effectiveness is demonstrated through the manufacturing and measurement of a two-pole filter with mushroom-shaped resonators."}
{"_id": "b0b811b73cd857913dcaf4858a520150cacbeb6a", "title": "Enhanced autonomous resource selection for LTE-based V2V communication", "text": "Vehicle-to-vehicle (V2V) communication has the potential to improve road safety significantly by providing connectivity between vehicles to exchange information messages. V2V has recently gained considerable attention in part due to the standardization work that has started for Long Term Evolution (LTE) under the Third Generation Partnership Project (3GPP). Efficient and reliable distributed allocation of time-frequency resources among different vehicle user equipments (VUEs) remains a challenge particularly in dense urban environments. In this work, a new autonomous resource selection scheme for urban V2V communication scenario is proposed. Specifically, the proposed scheme relies on resource partitioning based on VUE heading direction along with a sensing-based collision avoidance mechanism, which aims to alleviate the potential interference between VUEs due to resource collision and in-band emission (IBE). The performance of the two-stage autonomous resource selection scheme is evaluated by system-level simulations in urban scenarios, where the results show a significant performance gain in comparison to existing approaches."}
{"_id": "f580c5fc2719fb1cfe90d7f270ae22068074f7f3", "title": "Support for vehicle-to-everything services based on LTE", "text": "Vehicular applications and communications technologies are often referred to as vehicle-to-everything (V2X), which is classified into four different types: vehicle-to-vehicle (V2V), vehicle- to-infrastructure (V2I), vehicle-to-network (V2N), and vehicle-to-pedestrian (V2P) [1]. V2X related research projects, field tests, and regulatory work have been promoted in different countries and regions. In May 2015, the Ministry of Industry and Information Technology (MIIT) of China explained national strategies, \"Made in China 2025,\" about intelligent connected vehicles. In 2020 and 2025, the overall technology and key technologies for intelligent driver assistance and automatic driving will be available in China, respectively [2]. V2X solutions are the critical technologies to support the realization of such visions. Although IEEE 802.11p has been selected as the technology for V2X communications in some countries such as the United States and in Europe, the intrinsic characteristics of IEEE 802.11p have confined the technology to support low latency with high reliability [3, 4]. Standardization of Long Term Evolution (LTE)-based V2X is being actively conducted by the Third Generation Partnership Project (3GPP) to provide the solution for V2X communications that benefit from the global deployment and fast commercialization of LTE systems. Because of the wide deployment of LTE networks, V2I and V2N services can be provided with high data rate, comprehensive quality of service (QoS) support, ubiquitous coverage, and high penetration rate [5]. Meanwhile, LTE can be extended to support V2V direct communications based on device-to-device (D2D) sidelink design to satisfy the QoS requirements, such as low latency, high reliability, and high speed in the case of high vehicle density [6]."}
{"_id": "0da40aa1c4a60cc1457a08fbd9c2f46429872854", "title": "LTE evolution for vehicle-to-everything services", "text": "Wireless communication has become a key technology for competitiveness of next generation vehicles. Recently, the 3GPP has initiated standardization activities for LTE-based V2X services composed of vehicle-to-vehicle, vehicle- to-pedestrian, and vehicle-to-infrastructure/network. The goal of these 3GPP activities is to enhance LTE systems to enable vehicles to communicate with other vehicles, pedestrians, and infrastructure in order to exchange messages for aiding in road safety, controlling traffic flow, and providing various traffic notifications. In this article, we provide an overview of the service flow and requirements of the V2X services LTE systems are targeting. This article also discusses the scenarios suitable for operating LTE-based V2X services, and addresses the main challenges of high mobility and densely populated vehicle environments in designing technical solutions to fulfill the requirements of V2X services. Leveraging the spectral-efficient air interface, the cost-effective network deployment, and the versatile nature of supporting different communication types, LTE systems along with proper enhancements can be the key enabler of V2X services."}
{"_id": "9ba0024271135de5f1b99d1e1174b80505e19d6f", "title": "An improved IEEE 802.16 WiMAX module for the ns-3 simulator", "text": "IEEE 802.16 WiMAX is a promising new wireless technology for providing broadband ubiquitous network access. As more and more researchers and industrials are interested in simulating such networks, a number of WiMAX simulators have been emerged in the networking community. One of the most recent WiMAX simulator is the one developed for ns-3. This module provides a standard compliant and well designed implementation of the standard and benefits from the major enhancements provided by ns-3 (compared to other network simulators) which has all the capabilities of becoming the leading network simulator in near future. However, this WiMAX module still lacks some important features which motivated this work. In this paper, we first provide a snapshot of existing WiMAX simulators available in the public domain, while highlighting their limitations. Then, we describe the new features and enhancements we have integrated within the ns-3 WiMAX module, and in particular: a realistic and scalable physical model, an IP packet classifier for the convergence sub-layer, efficient uplink and downlink schedulers, support for multicast traffic and pcap packet tracing functionality. The new design of the physical layer has improved the simulation time by several magnitude orders while still providing a realistic implementation of the standard. Furthermore, the IP classifier has enabled the simulation of an unlimited number of service flows per subscriber station, while the proposed schedulers improve the management of the QoS requirements for the different service flows."}
{"_id": "c72f7deac690472f87307018a67c90661cd1312e", "title": "Use of the scored Patient-Generated Subjective Global Assessment (PG-SGA) as a nutrition assessment tool in patients with cancer", "text": "Objective: To evaluate the use of the scored Patient-Generated Subjective Global Assessment (PG-SGA) as a nutrition assessment tool in patients with cancer.Design: An observational study assessing the nutritional status of patients with cancer.Setting: Oncology ward of a private tertiary Australian hospital.Subjects: Seventy-one cancer patients aged 18\u201392\u2005y.Intervention: Scored PG-SGA questionnaire, comparison of scored PG-SGA with subjective global assessment (SGA), sensitivity, specificity.Results: Some 24% (17) of 71 patients were well nourished, 59% (42) of patients were moderately or suspected of being malnourished and 17% (12) of patients were severely malnourished according to subjective global assessment (SGA). The PG-SGA score had a sensitivity of 98% and a specificity of 82% at predicting SGA classification. There was a significant difference in the median PG-SGA scores for each of the SGA classifications (P<0.001), with the severely malnourished patients having the highest scores. Re-admission within 30 days of discharge was significantly different between SGA groups (P=0.037). The mortality rate within 30 days of discharge was not significantly different between SGA groups (P=0.305). The median length of stay of well nourished patients (SGA A) was significantly lower than that of the malnourished (SGA B+C) patients (P=0.024).Conclusion: The scored PG-SGA is an easy to use nutrition assessment tool that allows quick identification and prioritisation of malnutrition in hospitalised patients with cancer.Sponsors: The Wesley Research Institute."}
{"_id": "a07641e1ddb1c9ab907e4c269206343ce44e9168", "title": "Grid Cells and Neural Coding in High-End Cortices", "text": "An ultimate goal of neuroscience is to understand the mechanisms of mammalian intellectual functions, many of which are thought to depend extensively on the cerebral cortex. While this may have been considered a remote objective when Neuron was launched in 1988, neuroscience has now evolved to a stage where it is possible to decipher neural-circuit mechanisms in the deepest parts of the cortex, far away from sensory receptors and motoneurons. In this review, we show how studies of place cells in the hippocampus and grid cells in the entorhinal cortex may provide some of the first glimpses into these mechanisms. We shall review the events that led up to the discovery of grid cells and a functional circuit in the entorhinal cortex and highlight what we currently see as the big questions in this field--questions that, if resolved, will add to our understanding of cortical computation in a general sense."}
{"_id": "5400daa7c04f4dd144d7eab0d75f8a6f412c6763", "title": "A Miniaturized, Dual-Band, Circularly Polarized Microstrip Antenna for Installation Into Satellite Mobile Phones", "text": "A miniaturized, dual-band, circularly polarized microstrip antenna (CPMA) for satellite mobile phones was designed. Two miniaturized radiation elements using folded structures are stacked vertically to achieve dual bands. Each radiation element shows left-handed circular polarization. The total volume of this dual-band, circularly polarized, microstrip antenna is 46 mm (length) x 24 mm (width) x 13 mm (height). The overall antenna is mounted on a phone-shaped ground plane to fit inside satellite mobile phones."}
{"_id": "b348a16fcac16f77bf13146d54b778a386c9558a", "title": "Learning Deep Features for DNA Methylation Data Analysis", "text": "Many studies demonstrated that the DNA methylation, which occurs in the context of a CpG, has strong correlation with diseases, including cancer. There is a strong interest in analyzing the DNA methylation data to find how to distinguish different subtypes of the tumor. However, the conventional statistical methods are not suitable for analyzing the highly dimensional DNA methylation data with bounded support. In order to explicitly capture the properties of the data, we design a deep neural network, which composes of several stacked binary restricted Boltzmann machines, to learn the low-dimensional deep features of the DNA methylation data. Experimental results show that these features perform best in breast cancer DNA methylation data cluster analysis, compared with some state-of-the-art methods."}
{"_id": "d901928c31d8ccdb12a5a08600be3c2ed10c97c8", "title": "SeriesNet:A Generative Time Series Forecasting Model", "text": "Time series forecasting is emerging as one of the most important branches of big data analysis. However, traditional time series forecasting models can not effectively extract good enough sequence data features and often result in poor forecasting accuracy. In this paper, a novel time series forecasting model, named SeriesNet, which can fully learn features of time series data in different interval lengths. The SeriesNet consists of two networks. The LSTM network aims to learn holistic features and to reduce dimensionality of multi-conditional data, and the dilated causal convolution network aims to learn different time interval. This model can learn multi-range and multi-level features from time series data, and has higher predictive accuracy compared those models using fixed time intervals. Moreover, this model adopts residual learning and batch normalization to improve generalization. Experimental results show our model has higher forecasting accuracy and has greater stableness on several typical time series data sets."}
{"_id": "3c88ff2def05e00d909f3156e8ec26c184399716", "title": "Discrete Dynamic Shortest Path Problems in Transportation Applications: Complexity and Algorithms with Optimal Run Time", "text": "This paper solves what appears to be a 30 years old problem dealing with the discovery of most effi algorithms possible to compute all-to-one shortest paths in discrete dynamic networks. This problem lies at th of efficient solution approaches to dynamic network models that arise in dynamic transportation systems, suc Intelligent Transportation Systems (ITS) applications. While the main objective of this paper is the study of the one dynamic shortest paths problem, one-to-all fastest paths problems are studied as well. Early results are and new properties are established. We establish the exact complexity of these problems and develop optima run time sense, solution algorithms. A new and simple solution algorithm is proposed for all-to-one all departur intervals shortest path problems. It is proved, theoretically, that the new solution algorithm has an optimal run complexity that equals the complexity of the problem. Computer implementations and experimental evaluatio various solution algorithms support the theoretical findings and demonstrate the efficiency of the proposed so algorithm. We expect our findings to be of major benefit to research and development activities in the field of dynamic, in particular real-time, management and control of large-scale Intelligent Transportation Systems (I"}
{"_id": "cc5d6addb376bfcc9f9c4e1c31b853c7981118ab", "title": "Developing an Air Hockey Game in LabVIEW", "text": "Air Hockey consists of two players volleying a puck, using mallets, across a low friction table surface. Points are scored when a player lands the puck in the opponent's goal. The game is won by the player with the higher score at the end of the match. Virtual versions of air hockey have been implemented using various game development software. This paper details the implementation of the air hockey game in LabVIEW. This version of the game uses a physics engine and an Artificial Intelligence (AI) player. This effort discusses the steps followed to design the game and the effectiveness of the AI player versus a human player."}
{"_id": "b9eaf65730ae6fbb81c8c86145a6c92f6d406efc", "title": "Social Capital and Knowledge Integration in Digitally Enabled Teams", "text": "T understand the impact of social capital on knowledge integration and performance within digitally enabled teams, we studied 46 teams who had a history and a future working together. All three dimensions of their social capital (structural, relational, and cognitive) were measured prior to the team performing two tasks in a controlled setting, one face-to-face and the other through a lean digital network. Structural and cognitive capital were more important to knowledge integration when teams communicated through lean digital networks than when they communicated face-to-face; relational capital directly impacted knowledge integration equally, regardless of the communication media used by the team. Knowledge integration, in turn, impacted team decision quality, suggesting that social capital influences team performance in part by increasing a team\u2019s ability to integrate knowledge. These results suggest that team history may be necessary but not sufficient for teams to overcome the problems with the use of lean digital networks as a communication environment. However, team history may present a window of opportunity for social capital to develop, which in turn allows teams to perform just as well as in either communication environment."}
{"_id": "3c58d89bb6693ccdd7673c0323a7c7b472e878ae", "title": "Graphene-SGX: A Practical Library OS for Unmodified Applications on SGX", "text": "Intel SGX hardware enables applications to protect themselves from potentially-malicious OSes or hypervisors. In cloud computing and other systems, many users and applications could benefit from SGX. Unfortunately, current applications will not work out-of-the-box on SGX. Although previous work has shown that a library OS can execute unmodified applications on SGX, a belief has developed that a library OS will be ruinous for performance and TCB size, making application code modification an implicit prerequisite to adopting SGX. This paper demonstrates that these concerns are exaggerated, and that a fully-featured library OS can rapidly deploy unmodified applications on SGX with overheads comparable to applications modified to use \u201cshim\u201d layers. We present a port of Graphene to SGX, as well as a number of improvements to make the security benefits of SGX more usable, such as integrity support for dynamically-loaded libraries, and secure multi-process support. Graphene-SGX supports a wide range of unmodified applications, including Apache, GCC, and the R interpreter. The performance overheads of GrapheneSGX range from matching a Linux process to less than 2\u00d7 in most single-process cases; these overheads are largely attributable to current SGX hardware or missed opportunities to optimize Graphene internals, and are not necessarily fundamental to leaving the application unmodified. Graphene-SGX is open-source and has been used concurrently by other groups for SGX research."}
{"_id": "a67aee51099bfb27b566d06614a38962ecb1b3da", "title": "High-Speed Flight of Quadrotor Despite Loss of Single Rotor", "text": "In order to achieve high-speed flight of a damaged quadrotor with complete loss of a single rotor, a multiloop hybrid nonlinear controller is designed. By fully making use of sensor measurements, the model dependence of this control method is reduced, which is conducive to handling disturbance from the unknown aerodynamic effects. This controller is tested on a quadrotor vehicle with one rotor completely removed in the high-speed condition. Free flights are performed in the Open Jet Facility, a large-scale wind tunnel. Over 9\u00a0m/s flight speed is reached for the damaged quadrotor in these tests. In addition, several high-speed spin-induced aerodynamic effects are discovered and analyzed."}
{"_id": "3dd0c3eb91cadf5890186d967e57f654f3b82669", "title": "Sensor Systems for Prognostics and Health Management", "text": "Prognostics and health management\u00a0(PHM) is an enabling discipline consisting of technologies and methods to assess the reliability of a product in its actual life cycle conditions to determine the advent of failure and mitigate system risk. Sensor systems are needed for PHM to monitor environmental, operational, and performance-related characteristics. The gathered data can be analyzed to assess product health and predict remaining life. In this paper, the considerations for sensor system selection for PHM applications, including the parameters to be measured, the performance needs, the electrical and physical attributes, reliability, and cost of the sensor system, are discussed. The state-of-the-art sensor systems for PHM and the emerging trends in technologies of sensor systems for PHM are presented."}
{"_id": "f56edb6f2bf4f5bc9d54284289212b8d4a437c1b", "title": "Detection and Localization of Texture-less Objects with Deep Neural Networks", "text": "This thesis studies Faster R-CNN, the state-of-art method for object detection in RGB images, and proposes its extension to RGB-D images. Solutions to the following problems are proposed and evaluated: filling missing values in depth images, depth encoding (raw depth vs. surface normals), extension of the CNN architecture to accept the extra depth information, and initialization of weights in the extended network. The overall best results were achieved with a network that accepts an extra depth channel, pre-processed by the iterative median filter to fill in the missing values, and has the depth weights in the first convolutional layer initialized with the mean of the color weights that were pretrained on ImageNet. However, the improvement over the original method using only RGB channels is not significant (mAP was increased by 1 \u2212 2%), which suggests a need for different incorporation of the depth information."}
{"_id": "cbd50d36c92dd2e3e0dfd35e0521cc49cd92b20c", "title": "A Fast Iterative Method for Solving the Eikonal Equation on Triangulated Surfaces", "text": "This paper presents an efficient, fine-grained parallel algorithm for solving the Eikonal equation on triangular meshes. The Eikonal equation, and the broader class of Hamilton-Jacobi equations to which it belongs, have a wide range of applications from geometric optics and seismology to biological modeling and analysis of geometry and images. The ability to solve such equations accurately and efficiently provides new capabilities for exploring and visualizing parameter spaces and for solving inverse problems that rely on such equations in the forward model. Efficient solvers on state-of-the-art, parallel architectures require new algorithms that are not, in many cases, optimal, but are better suited to synchronous updates of the solution. In previous work [W. K. Jeong and R. T. Whitaker, SIAM J. Sci. Comput., 30 (2008), pp. 2512-2534], the authors proposed the fast iterative method (FIM) to efficiently solve the Eikonal equation on regular grids. In this paper we extend the fast iterative method to solve Eikonal equations efficiently on triangulated domains on the CPU and on parallel architectures, including graphics processors. We propose a new local update scheme that provides solutions of first-order accuracy for both architectures. We also propose a novel triangle-based update scheme and its corresponding data structure for efficient irregular data mapping to parallel single-instruction multiple-data (SIMD) processors. We provide detailed descriptions of the implementations on a single CPU, a multicore CPU with shared memory, and SIMD architectures with comparative results against state-of-the-art Eikonal solvers."}
{"_id": "d8a63c0729bcef4fd1d41dc0e723620e48fbdc7d", "title": "OntoNotes: The 90% Solution", "text": "OntoNotes is a five year multi-site collaboration between BBN Technologies, Information Sciences Institute of University of Southern California, University of Colorado, University of Pennsylvania and Brandeis University. The goal of the OntoNotes project is to provide linguistic data annotated with a skeletal representation of the literal meaning of sentences including syntactic parse, predicate-argument structure, coreference, and word senses linked to an ontology, allowing a new generation of language understanding technologies to be developed with new functional capabilities. In its third year of existence, the OntoNotes project has generated a large amount of high quality data covering various layers of linguistic annotation. This is probably the first time that data of such quality has been available in large quantities covering multiple genres (newswire, broadcast news, broadcast conversation and weblogs) and languages (English, Chinese and Arabic). The guiding principle has been to find a \u201csweet spot\u201d in the space of inter-tagger agreement, productivity, and depth of representation. The most effective use of this resource for research requires simultaneous access to multiple layers of annotation. This has been made possible by representing the corpus with a relational database to accommodate the dense connectedness of the data and ensure consistency across layers. In order to facilitate ease of understanding and manipulability, the database has also been supplemented with a object-oriented Python API. The tutorial consists of two parts. In the first part we will familiarize the user with this new resource, describe the various layers of annotations in some detail and discuss the linguistic principles and sometimes practical considerations behind the important design decisions that shapes the corpus. We will also describe the salient differences between the three languages at each layer of annotation and how linguistic peculiarities of different languages were handled in the data. In the second part, we will describe the data formats of each of the layers and talk about various design decisions that went into the creation of the architecture of the database and the individual tables comprising it, along with issues that came up during the representation process and compromises that were made without sacrificing some primary objectives one of which being the independent existence of each layer that is necessary to allow multi-site collaboration. We will explain how the database schema attempts to interconnect all the layers. Then we will go into the details of the Python API that allows easy access to each of the layers and show that by making the objects closely resemble database tables, the API allows for their flexible integration. This will be followed by a hands-on working session."}
{"_id": "7ab75fceb0b1c8601019ce9c491eba8d8b6ab07a", "title": "Locally linear reconstruction based missing value imputation for supervised learning", "text": "Most learning algorithms generally assume that data is complete so each attribute of all instances is filled with a valid value. However, missing values are very common in real datasets for various reasons. In this paper, we propose a new single imputation method based on locally linear reconstruction (LLR) that improves the prediction performance of supervised learning (classification & regression) with various missing ratios. Next, we compare the proposed missing value imputation method (LLR) with six well-known single imputation methods for five different learning algorithms based on 13 classification and nine regression datasets. The experimental results showed that (1) all imputation methods helped to improve the prediction accuracy, although some were very simple; (2) the proposed LLR imputation method enhanced the modeling performance more than all other imputation methods, irrespective of the learning algorithms and the missing ratios; and (3) LLR was outstanding when the missing ratio was relatively high and its prediction accuracy was similar to that of the complete dataset. & 2013 Elsevier B.V. All rights reserved."}
{"_id": "d2db29517228559cb8cb7f880b2fb9bf776bf172", "title": "High Performance Computing for Hyperspectral Remote Sensing", "text": "Advances in sensor and computer technology are revolutionizing the way remotely sensed data is collected, managed and analyzed. In particular, many current and future applications of remote sensing in Earth science, space science, and soon in exploration science will require real- or near real-time processing capabilities. In recent years, several efforts have been directed towards the incorporation of high-performance computing (HPC) models to remote sensing missions. A relevant example of a remote sensing application in which the use of HPC technologies (such as parallel and distributed computing) is becoming essential is hyperspectral remote sensing, in which an imaging spectrometer collects hundreds or even thousands of measurements (at multiple wavelength channels) for the same area on the surface of the Earth. In this paper, we review recent developments in the application of HPC techniques to hyperspectral imaging problems, with particular emphasis on commodity architectures such as clusters, heterogeneous networks of computers, and specialized hardware devices such as field programmable gate arrays (FPGAs) and commodity graphic processing units (GPUs). A quantitative comparison across these architectures is given by analyzing performance results of different parallel implementations of the same hyperspectral unmixing chain, delivering a snapshot of the state-of-the-art in this area and a thoughtful perspective on the potential and emerging challenges of applying HPC paradigms to hyperspectral remote sensing problems."}
{"_id": "9ba4e31bc3b5a2e50feccae168dafd64d9be1f69", "title": "Pathway-Finder: An Interactive Recommender System for Supporting Personalized Care Pathways", "text": "Clinical pathways define the essential component of the complex care process, with the objective to optimize patient outcomes and resource allocation. Clinical pathway analysis has gained increased attention in order to augment the patient treatment process. In this demonstration paper, we propose Pathway-Finder, an interactive recommender system to visually explore and discover clinical pathways. The interactive web service efficiently collects and displays patient information in a meaningful way to support an effective personalized treatment plan. Pathway-Finder implements a Bayesian Network to discover causal relationships among different factors. To support real-time recommendation and visualization, a key-value structure has been implemented to traverse the Bayesian Network faster. Additionally, Pathway-Finder is a cloud based web service hosted on Microsoft Azure which enables the health providers to access the system without the need to deploy analytics infrastructure. We demonstrate Pathway-Finder to interactively recommend personalized interventions to minimize 30-day readmission risk for Heart Failure (HF)."}
{"_id": "0668aba8199335b347a5c8d0cdd8e75cb7cd6122", "title": "Food vs Non-Food Classification", "text": "Automatic understanding of food is an important research challenge. Food recognition engines can provide a valid aid for automatically monitoring the patient's diet and food-intake habits directly from images acquired using mobile or wearable cameras. One of the first challenges in the field is the discrimination between images containing food versus the others. Existing approaches for food vs non-food classification have used both shallow and deep representations, in combination with multi-class or one-class classification approaches. However, they have been generally evaluated using different methodologies and data, making a real comparison of the performances of existing methods unfeasible. In this paper, we consider the most recent classification approaches employed for food vs non-food classification, and compare them on a publicly available dataset. Different deep-learning based representations and classification methods are considered and evaluated."}
{"_id": "1f7d9319714b603d87762fa60e47b0bb40db25b5", "title": "Automatic Essay Grading Using Text Categorization Techniques", "text": "Several standard text-categorization techniques were applied to the problem of automated essay grading. Bayesian independence classifiers and knearest-neighbor classifiers were trained to assign scores to manually-graded essays. These scores were combined with several other summary text measures using linear regression. The classifiers and regression equations were then applied to a new set of essays. The classifiers worked very well. The agreement between the automated grader and the final manual grade was as good as the agreement between human graders."}
{"_id": "81dbf427ba087cf3a0f22b59e74d049f881bbbee", "title": "The flipped classroom: A survey of the research", "text": "Recent advances in technology and in ideology have unlocked entirely new directions for education research. Mounting pressure from increasing tuition costs and free, online course offerings is opening discussion and catalyzing change in the physical classroom. The flipped classroom is at the center of this discussion. The flipped classroom is a new pedagogical method, which employs asynchronous video lectures and practice problems as homework, and active, group-based problem solving activities in the classroom. It represents a unique combination of learning theories once thought to be incompatible\u2014active, problem-based learning activities founded upon a constructivist ideology and instructional lectures derived from direct instruction methods founded upon behaviorist principles. This paper provides a comprehensive survey of prior and ongoing research of the flipped classroom. Studies are characterized on several dimensions. Among others, these include the type of in-class and out-of-class activities, the measures used to evaluate the study, and methodological characteristics for each study. Results of this survey show that most studies conducted to date explore student perceptions and use single-group study designs. Reports of student perceptions of the flipped classroom are somewhat mixed, but are generally positive overall. Students tend to prefer in-person lectures to video lectures, but prefer interactive classroom activities over lectures. Anecdotal evidence suggests that student learning is improved for the flipped compared to traditional classroom. However, there is very little work investigating student learning outcomes objectively. We recommend for future work studies investigating of objective learning outcomes using controlled experimental or quasi-experimental designs. We also recommend that researchers carefully consider the theoretical framework used to guide the design of in-class activities. 1 The Rise of the Flipped Classroom There are two related movements that are combining to change the face of education. The first of these is a technological movement. This technological movement has enabled the amplification and duplication of information at an extremely low-cost. It started with the printing press in the 1400s, and has continued at an ever-increasing rate. The electronic telegraph came in the 1830s, wireless radio in the late 1800s and early 1900s, television in the 1920s, computers in the 1940s, the internet in the 1960s, and the world-wide web in the 1990s. As these technologies have been adopted, the ideas that have been spread through their channels have enabled a second movement. Whereas the technological movement sought to overcome real physical barriers to the free and open flow of information, this ideological movement seeks to remove the artificial, man-made barriers. This is epitomized in the free software movement (see, e.g., Stallman and Lessig [67]), although this movement is certainly not limited to software. A good example of this can be seen from the encyclopedia. Encyclopedia Britannica has been continuously published for nearly 250 years [20] (since 1768). Although Encyclopedia Britannica content has existed digitally since 1981, it was not until the advent of Wikipedia in 2001 that open access to encyclopedic content became available to users worldwide. 
Access to Encyclopedia Britannica remains restricted to a limited number of paid subscribers [21], but access to Wikipedia is open, and the website receives over 2.7 billion US monthly page views [81]. Thus, although the technology and digital content was available to enable free access to encyclopedic content, ideological roadblocks prevented this from happening. It was not until these ideologies had been overcome that humanity was empowered to create what has become the world\u2019s largest, most up-to-date encyclopedia [81]. In a similar way, we are beginning to see the combined effects of these two movements on higher education. In the technological arena, research has made significant advances. Studies show that video lectures (slightly) outperform in-person lectures [9], with interactive online videos doing even better (Effect size=0.5) [83,51]. Online homework is just as effective as paper-and-pencil homework [8,27], and carefully developed intelligent tutoring systems have been shown to be just as effective as human tutors [77]. Despite these advancements, adoption has been slow, as the development of good educational systems can be prohibitively expensive. However, the corresponding ideological movement is breaking down these financial barriers. Ideologically, MIT took a significant step forward when it announced its OpenCourseWare (OCW) initiative in 2001 [53]. This opened access to information that had previously only been available to students who paid university tuition, which is over $40,000/yr at MIT [54]. Continuing this trend, MIT alum Salman Khan founded the Khan Academy in 2006, which has released a library of over 3200 videos and 350 practice exercises 2012. The stated mission of the Khan Academy is to provide \u201ca free world-class education to anyone anywhere2012.\u201d In the past year, this movement has rapidly gained momentum. Inspired by Khan\u2019s efforts, Stanford professors Sebastian Thrun and Andrew Ng opened access to their online courses in Fall 2011. Thrun taught artificial intelligence with Peter Norvig, attracting over 160,000 students to their free online course. Subsequently, Thrun left the university and founded Udacity, which is now hosting 11 free courses [76]. With support from Stanford, Ng also started his own open online educational initiative, Coursera. Princeton, the University of Pennsylvania, and the University of Michigan have joined the Coursera partnership, which has expanded its offerings to 42 courses [10]. MIT has also upgraded its open educational initiative, and joined with Harvard in a $60 million dollar venture, edX [19]. EdX will, \u201coffer Harvard and MIT classes online for free.\u201d While online education is improving, expanding, and becoming openly available for free, university tuition at brick-and-mortar schools is rapidly rising [56]. Tuition in the University of California system has nearly tripled since 2000 [32]. Naturally, this is not being received well by university students in California [2]. Likewise, students in Quebec are actively protesting planned tuition hikes [13]. In resistance to planned tuition hikes, student protestors at Rutgers interrupted (on June 20, 2012) a board meeting to make their voices heard [36]. Adding fuel to the fire, results from a recent study by Gillen et al. [31] indicate that undergraduate student tuition is used to subsidize research. As a result, the natural question being asked by both students and educational institutions is exactly what students are getting for their money. 
This applies pressure on physical academic institutions to improve and enhance the in-person educational experience of their students."}
{"_id": "0be102aa23582c98c357fbf3fcdbd1b6442484c9", "title": "LSTM: A Search Space Odyssey", "text": "Several variants of the long short-term memory (LSTM) architecture for recurrent neural networks have been proposed since its inception in 1995. In recent years, these networks have become the state-of-the-art models for a variety of machine learning problems. This has led to a renewed interest in understanding the role and utility of various computational components of typical LSTM variants. In this paper, we present the first large-scale analysis of eight LSTM variants on three representative tasks: speech recognition, handwriting recognition, and polyphonic music modeling. The hyperparameters of all LSTM variants for each task were optimized separately using random search, and their importance was assessed using the powerful functional ANalysis Of VAriance framework. In total, we summarize the results of 5400 experimental runs ( $\\approx 15$ years of CPU time), which makes our study the largest of its kind on LSTM networks. Our results show that none of the variants can improve upon the standard LSTM architecture significantly, and demonstrate the forget gate and the output activation function to be its most critical components. We further observe that the studied hyperparameters are virtually independent and derive guidelines for their efficient adjustment."}
{"_id": "1d0b94641dadd41d6bb2242c217390697a87923d", "title": "Personal Sensing: Understanding Mental Health Using Ubiquitous Sensors and Machine Learning.", "text": "Sensors in everyday devices, such as our phones, wearables, and computers, leave a stream of digital traces. Personal sensing refers to collecting and analyzing data from sensors embedded in the context of daily life with the aim of identifying human behaviors, thoughts, feelings, and traits. This article provides a critical review of personal sensing research related to mental health, focused principally on smartphones, but also including studies of wearables, social media, and computers. We provide a layered, hierarchical model for translating raw sensor data into markers of behaviors and states related to mental health. Also discussed are research methods as well as challenges, including privacy and problems of dimensionality. Although personal sensing is still in its infancy, it holds great promise as a method for conducting mental health research and as a clinical tool for monitoring at-risk populations and providing the foundation for the next generation of mobile health (or mHealth) interventions."}
{"_id": "646a2258d4084827adfd8bd7cb095555944e9496", "title": "Tracking drone orientation with multiple GPS receivers", "text": "Inertial sensors continuously track the 3D orientation of a flying drone, serving as the bedrock for maneuvers and stabilization. However, even the best inertial measurement units (IMU) are prone to various types of correlated failures. We consider using multiple GPS receivers on the drone as a fail-safe mechanism for IMU failures. The core challenge is in accurately computing the relative locations between each receiver pair, and translating these measurements into the drone's 3D orientation. Achieving IMU-like orientation requires the relative GPS distances to be accurate to a few centimeters -- a difficult task given that GPS today is only accurate to around 1-4 meters. Moreover, GPS-based orientation needs to be precise even under sharp drone maneuvers, GPS signal blockage, and sudden bouts of missing data. This paper designs SafetyNet, an off-the-shelf GPS-only system that addresses these challenges through a series of techniques, culminating in a novel particle filter framework running over multi-GNSS systems (GPS, GLONASS, and SBAS). Results from 11 sessions of 5-7 minute flights report median orientation accuracies of 2\u00b0 even under overcast weather conditions. Of course, these improvements arise from an increase in cost due to the multiple GPS receivers, however, when safety is of interest, we believe that tradeoff is worthwhile."}
{"_id": "56bdc169cc624b152a3d97899382c6026ac6b81e", "title": "Active Reinforcement Learning with Monte-Carlo Tree Search", "text": "Active Reinforcement Learning (ARL) is a twist on RL where the agent observes reward information only if it pays a cost. This subtle change makes exploration substantially more challenging. Powerful principles in RL like optimism, Thompson sampling, and random exploration do not help with ARL. We relate ARL in tabular environments to BayesAdaptive MDPs. We provide an ARL algorithm using Monte-Carlo Tree Search that is asymptotically Bayes optimal. Experimentally, this algorithm is near-optimal on small Bandit problems and MDPs. On larger MDPs it outperforms a Q-learner augmented with specialised heuristics for ARL. By analysing exploration behaviour in detail, we uncover obstacles to scaling up simulation-based algorithms for ARL."}
{"_id": "f14c507d02af3334b0410fb380e7ee8c0dfa2e9e", "title": "Health hazards and waste management.", "text": "Different methods of waste management emit a large number of substances, most in small quantities and at extremely low levels. Raised incidence of low birth weight births has been related to residence near landfill sites, as has the occurrence of various congenital malformations. There is little evidence for an association with reproductive or developmental effects with proximity to incinerators. Studies of cancer incidence and mortality in populations around landfill sites or incinerators have been equivocal, with varying results for different cancer sites. Many of these studies lack good individual exposure information and data on potential confounders, such as socio-economic status. The inherent latency of diseases and migration of populations are often ignored. Waste management workers have been shown to have increased incidence of accidents and musculoskeletal problems. The health impacts of new waste management technologies and the increasing use of recycling and composting will require assessment and monitoring."}
{"_id": "35c7389554c815372bfd227f3d76867efa72c09f", "title": "Comparative Survey of Indoor Positioning Technologies, Techniques, and Algorithms", "text": "The user location information represents a core dimension as understanding user context is a prerequisite for providing human-centered services that generally improve quality of life. In comparison with outdoor environments, sensing location information in indoor environments requires a higher precision and is a more challenging task due in part to the expected various objects (such as walls and people) that reflect and disperse signals. In this paper, we survey the related work in the field of indoor positioning by providing a comparative analysis of the state-of-the-art technologies, techniques, and algorithms. Unlike previous studies and surveys, our survey present new taxonomies, review some major recent advances, and argue on the area open problems and future potential. We believe this paper would spur further exploration by the research community of this challenging problem space."}
{"_id": "60105db6a5cce5eb7bcec56b87d95ef83a5e58e0", "title": "GraphGen: Exploring Interesting Graphs in Relational Data", "text": "Analyzing interconnection structures among the data through the use of graph algorithms and graph analytics has been shown to provide tremendous value in many application domains. However, graphs are not the primary choice for how most data is currently stored, and users who want to employ graph analytics are forced to extract data from their data stores, construct the requisite graphs, and then use a specialized engine to write and execute their graph analysis tasks. This cumbersome and costly process not only raises barriers in using graph analytics, but also makes it hard to explore and identify hidden or implicit graphs in the data. Here we demonstrate a system, called GRAPHGEN, that enables users to declaratively specify graph extraction tasks over relational databases, visually explore the extracted graphs, and write and execute graph algorithms over them, either directly or using existing graph libraries like the widely used NetworkX Python library. We also demonstrate how unifying the extraction tasks and the graph algorithms enables significant optimizations that would not be possible otherwise."}
{"_id": "13f279a8102df272577b9fe691276d590e61311a", "title": "Facial expression recognition in peripheral versus central vision: role of the eyes and the mouth.", "text": "This study investigated facial expression recognition in peripheral relative to central vision, and the factors accounting for the recognition advantage of some expressions in the visual periphery. Whole faces or only the eyes or the mouth regions were presented for 150 ms, either at fixation or extrafoveally (2.5\u00b0 or 6\u00b0), followed by a backward mask and a probe word. Results indicated that (a) all the basic expressions were recognized above chance level, although performance in peripheral vision was less impaired for happy than for non-happy expressions, (b) the happy face advantage remained when only the mouth region was presented, and (c) the smiling mouth was the most visually salient and most distinctive facial feature of all expressions. This suggests that the saliency and the diagnostic value of the smile account for the advantage in happy face recognition in peripheral vision. Because of saliency, the smiling mouth accrues sensory gain and becomes resistant to visual degradation due to stimulus eccentricity, thus remaining accessible extrafoveally. Because of diagnostic value, the smile provides a distinctive single cue of facial happiness, thus bypassing integration of face parts and reducing susceptibility to breakdown of configural processing in peripheral vision."}
{"_id": "3e3227c8e9f44593d2499f4d1302575c77977b2e", "title": "Facial Expression Recognition Using a Large Out-of-Context Dataset", "text": "We develop a method for emotion recognition from facial imagery. This problem is challenging in part because of the subjectivity of ground truth labels and in part because of the relatively small size of existing labeled datasets. We use the FER+ dataset [8], a dataset with multiple emotion labels per image, in order to build an emotion recognition model that encompasses a full range of emotions. Since the amount of data in the FER+ dataset is limited, we explore the use of a much larger face dataset, MS-Celeb-1M [41], in conjunction with the FER+ dataset. Specific layers within an Inception-ResNet-v1 [13, 38] model trained for facial recognition are used for the emotion recognition problem. Thus, we leverage the MS-Celeb-1M dataset in addition to the FER+ dataset and experiment with different architectures to assess the overall performance of neural networks to recognize emotion using facial imagery."}
{"_id": "f395edb9c8ca5666ec54d38fda289e181dbc5d1b", "title": "A New Nested Neutral Point-Clamped (NNPC) Converter for Medium-Voltage (MV) Power Conversion", "text": "In this paper, a new voltage source converter for medium voltage applications is presented which can operate over a wide range of voltages (2.4-7.2 kV) without the need for connecting the power semiconductor in series. The operation of the proposed converter is studied and analyzed. In order to control the proposed converter, a space-vector modulation (SVM) strategy with redundant switching states has been proposed. SVM usually has redundant switching state anyways. What is the main point we are trying to get to? These redundant switching states help to control the output voltage and balance voltages of the flying capacitors in the proposed converter. The performance of the converter under different operating conditions is investigated in MATLAB/Simulink environment. The feasibility of the proposed converter is evaluated experimentally on a 5-kVA prototype."}
{"_id": "3e22c691424fb870dc4199af58fc2a5e41b12b0d", "title": "Automating Content Extraction of HTML Documents", "text": "Web pages often contain clutter (such as unnecessary images and extraneous links) around the body of an article that distracts a user from actual content. Extraction of \u201cuseful and relevant\u201d content from web pages has many applications, including cell phone and PDA browsing, speech rendering for the visually impaired, and text summarization. Most approaches to making content more readable involve changing font size or removing HTML and data components such as images, which takes away from a webpage\u2019s inherent look and feel. Unlike \u201cContent Reformatting,\u201d which aims to reproduce the entire webpage in a more convenient form, our solution directly addresses \u201cContent Extraction.\u201d We have developed a framework that employs an easily extensible set of techniques. It incorporates advantages of previous work on content extraction. Our key insight is to work with DOM trees, a W3C specified interface that allows programs to dynamically access document structure, rather than with raw HTML markup. We have implemented our approach in a publicly available Web proxy to extract content from HTML web pages. This proxy can be used both centrally, administered for groups of users, as well as by individuals for personal browsers. We have also, after receiving feedback from users about the proxy, created a revised version with improved performance and accessibility in mind."}
{"_id": "406b4385c65ce7b72e99227aec3c723927ce9460", "title": "Using argumentation logic for firewall configuration management", "text": "Firewalls remain the main perimeter security protection for corporate networks. However, network size and complexity make firewall configuration and maintenance notoriously difficult. Tools are needed to analyse firewall configurations for errors, to verify that they correctly implement security requirements and to generate configurations from higher-level requirements. In this paper we extend our previous work on the use of formal argumentation and preference reasoning for firewall policy analysis and develop means to automatically generate firewall policies from higher-level requirements. This permits both analysis and generation to be done within the same framework, thus accommodating a wide variety of scenarios for authoring and maintaining firewall configurations. We validate our approach by applying it to both examples from the literature and real firewall configurations of moderate size (\u2248 150 rules)."}
{"_id": "595779d7b80e9695ce6da2686859d198f4682000", "title": "Semantic Segmentation of Colorectal Polyps with DeepLab and LSTM Networks", "text": "In this work we attempted to use the existing deep neural network called DeepLab_v3 to detect the polyps in colonoscopy images. Due to its large structure, the location of polyps may not be preserved and transmitted effectively. To address the issue we combined Long Short-Term Memory networks and DeepLab_v3 in parallel to augment the signal of the polyps' location. The new modification was examined with the colonoscopy image database \u2018CVC-ClinicDB\u2019 from MICCAI sub-challenge 2015. After training with 267 images and testing with 345 images, we got a good performance, 93.21% mean Intersection over Union (mIOU). The average computing time is 0.023 second per image. Once the model is applied to clinical colonoscopy exam videos, it could provide effective second opinions in real time to aid the diagnosis."}
{"_id": "f097415e4fef9d791ccdceb6c673d59c2b776d9a", "title": "Differential Mode Input Filter Design for a Three-Phase Buck-Type PWM Rectifier Based on Modeling of the EMC Test Receiver", "text": "For a three-phase buck-type pulsewidth modulation rectifier input stage of a high-power telecommunications power supply module, a differential-mode (DM) electromagnetic compatibility (EMC) filter is designed for compliance to CISPR 22 Class B in the frequency range of 150 kHz-30 MHz. The design is based on a harmonic analysis of the rectifier input current and a mathematical model of the measurement procedure including the line impedance stabilization network (LISN) and the test receiver. Guidelines for a successful filter design are given, and components for a 5-kW rectifier prototype are selected. Furthermore, formulas for the estimation of the quasi-peak detector output based on the LISN output voltage spectrum are provided. The damping of filter resonances is optimized for a given attenuation in order to facilitate a higher stability margin for system control. Furthermore, the dependence of the filter input and output impedances and the attenuation characteristic on the inner mains impedance are discussed. As experimentally verified by using a three-phase common-/Differential-Mode separator, this procedure allows accurate prediction of the converter DM conducted emission levels and therefore could be employed in the design process of the rectifier system to ensure compliance to relevant EMC standards"}
{"_id": "81a4183d5042a93356bc59cda54ede3283efe583", "title": "People Identification Using Gait Via Floor Pressure Sensing and Analysis", "text": "This paper presents an approach to people identification using gait based on floor pressure data. By using a large area high resolution pressure sensing floor, we were able to obtain 3D trajectories of the center of foot pressures over a footstep which contain both the 1D pressure profile and 2D position trajectories of the COP. Based on the 3D COP trajectories a set of features are then extracted and used for people identification together with other features such as stride length and cadence. The Fisher linear discriminant is used as the classifier. Encouraging results have been obtained using the proposed method with an average recognition rate of 94% and false alarm rate of 3% using pair-wise footstep data from 10 subjects."}
{"_id": "83369499524590b8707216e0711673ab8b3c8f58", "title": "The Impact of Training and Social Norms on Information Security Compliance: A Pilot Study", "text": "Security training has been shown to be an important factor that impacts employees\u2019 intentions to comply with organization\u2019s security policies. In this study, we define and then study the impact of two sub-constructs of security training, threat appraisal and policy awareness, on intentions to comply with organizational security policies. Injunctive and descriptive norms, which are standards of behavior that recommends and forbids behavior in specific circumstances, have been hypothesized as mediators between training constructs and behavioral intention to comply. We pilot-tested our proposed set of hypotheses with survey data collected from 69 employees in a higher education institute. Results supported our proposed model. Based on the findings, implications for theory and practices are discussed."}
{"_id": "95fd1c82be55a99d27cdb48cffe2e3a4b5de3d1d", "title": "Use multiplexing to increase information in QR code", "text": "In this paper a novel approach to multiplex data in QR Code is presented. This technique can increase the amount of data, as the original information, in QR Code as well as keeping secret information. The original data for encoding is divided into smaller parts, each part will form QR Code pattern in its standard form. Each pattern is encoded or multiplexed and represented each module in QR Code with black and white special symbols. At the receiving end, this QR Code with special symbols (that was multiplexed) is decoded to give back the number of QR Code patterns that was multiplexed. These QR Code pattern can be read by the general QR Code reader and the data can be concatenated back to form its original information."}
{"_id": "e750b1cf7ae8453f637b3166e475170e0c267150", "title": "Patching. Restitching business portfolios in dynamic markets.", "text": "In turbulent markets, businesses and opportunities are constantly falling out of alignment. New technologies and emerging markets create fresh opportunities. Converging markets produce more. And of course, some markets fade. In this landscape of continuous flux, it's more important to build corporate-level strategic processes that enable dynamic repositioning than it is to build any particular defensible position. That's why smart corporate strategists use patching, a process of mapping and remapping business units to create a shifting mix of highly focused, tightly aligned businesses that can respond to changing market opportunities. Patching is not just another name for reorganizing; patchers have a distinctive mindset. Traditional managers see structure as stable; patching managers believe structure is inherently temporary. Traditional managers set corporate strategy first, but patching managers keep the organization focused on the right set of business opportunities and let strategy emerge from individual businesses. Although the focus of patching is flexibility, the process itself follows a pattern. Patching changes are usually small in scale and made frequently. Patching should be done quickly; the emphasis is on getting the patch about right and fixing problems later. Patches should have a test drive before they're formalized but then be tightly scripted after they've been announced. And patching won't work without the right infrastructure: modular business units, fine-grained and complete unit-level metrics, and companywide compensation parity. The authors illustrate how patching works and point out some common stumbling blocks."}
{"_id": "8d0c816ab14ca3748a1887d7f2ef088d630f831d", "title": "Description of the YaCy Distributed Web Search Engine", "text": "Distributed web search engines have been proposed to mitigate the privacy issues that arise in centralized search systems. These issues include censorship, disclosure of sensitive queries to the search server and to any third parties with whom the search service operator might share data, as well as the lack of transparency of proprietary search algorithms. YaCy is a deployed distributed search engine that aims to provide censorship resistance and privacy to its users. Its user base has been steadily increasing and it is currently being used by several hundreds of people every day. Unfortunately, there exists no document that thoroughly describes how YaCy exactly works. We therefore investigated the source code of YaCy and summarize in this document our findings and explanation on YaCy. We confirmed with the YaCy community that our description on YaCy is accurate."}
{"_id": "9bbbb118fffaf65bad481076633379a2c6cf999a", "title": "Accurate mobile malware detection and classification in the cloud", "text": "As the dominator of the Smartphone operating system market, consequently android has attracted the attention of s malware authors and researcher alike. The number of types of android malware is increasing rapidly regardless of the considerable number of proposed malware analysis systems. In this paper, by taking advantages of low false-positive rate of misuse detection and the ability of anomaly detection to detect zero-day malware, we propose a novel hybrid detection system based on a new open-source framework CuckooDroid, which enables the use of Cuckoo Sandbox's features to analyze Android malware through dynamic and static analysis. Our proposed system mainly consists of two parts: anomaly detection engine performing abnormal apps detection through dynamic analysis; signature detection engine performing known malware detection and classification with the combination of static and dynamic analysis. We evaluate our system using 5560 malware samples and 6000 benign samples. Experiments show that our anomaly detection engine with dynamic analysis is capable of detecting zero-day malware with a low false negative rate (1.16\u00a0%) and acceptable false positive rate (1.30\u00a0%); it is worth noting that our signature detection engine with hybrid analysis can accurately classify malware samples with an average positive rate 98.94\u00a0%. Considering the intensive computing resources required by the static and dynamic analysis, our proposed detection system should be deployed off-device, such as in the Cloud. The app store markets and the ordinary users can access our detection system for malware detection through cloud service."}
{"_id": "6b6fa87688f1e0ddb676a9ce5d18a7185f98d0c5", "title": "Calibrate Multiple Consumer RGB-D Cameras for Low-Cost and Efficient 3D Indoor Mapping", "text": "Traditional indoor laser scanning trolley/backpacks with multi-laser scanner, panorama cameras, and an inertial measurement unit (IMU) installed are a popular solution to the 3D indoor mapping problem. However, the cost of those mapping suits is quite expensive, and can hardly be replicated by consumer electronic components. The consumer RGB-Depth (RGB-D) camera (e.g., Kinect V2) is a low-cost option for gathering 3D point clouds. However, because of the narrow field of view (FOV), its collection efficiency and data coverages are lower than that of laser scanners. Additionally, the limited FOV leads to an increase of the scanning workload, data processing burden, and risk of visual odometry (VO)/simultaneous localization and mapping (SLAM) failure. To find an efficient and low-cost way to collect 3D point clouds data with auxiliary information (i.e., color) for indoor mapping, in this paper we present a prototype indoor mapping solution that is built upon the calibration of multiple RGB-D sensors to construct an array with large FOV. Three time-of-flight (ToF)-based Kinect V2 RGB-D cameras are mounted on a rig with different view directions in order to form a large field of view. The three RGB-D data streams are synchronized and gathered by the OpenKinect driver. The intrinsic calibration that involves the geometry and depth calibration of single RGB-D cameras are solved by homography-based method and ray correction followed by range biases correction based on pixel-wise spline line functions, respectively. The extrinsic calibration is achieved through a coarse-to-fine scheme that solves the initial exterior orientation parameters (EoPs) from sparse control markers and further refines the initial value by an iterative closest point (ICP) variant minimizing the distance between the RGB-D point clouds and the referenced laser point clouds. The effectiveness and accuracy of the proposed prototype and calibration method are evaluated by comparing the point clouds derived from the prototype with ground truth data collected by a terrestrial laser scanner (TLS). The overall analysis of the results shows that the proposed method achieves the seamless integration of multiple point clouds from three Kinect V2 cameras collected at 30 frames per second, resulting in low-cost, efficient, and high-coverage 3D color point cloud collection for indoor mapping applications."}
{"_id": "f7763e0b031aafae055e9511568fbe8e9df44214", "title": "Half-Mode Substrate Integrated Waveguide (HMSIW) Leaky-Wave Antenna for Millimeter-Wave Applications", "text": "A novel leaky-wave antenna is demonstrated and developed at Ka-band in this work based on the newly proposed half-mode substrate integrated waveguide (HWSIW). This antenna is accurately simulated by using a full-wave electromagnetic simulator and then fabricated through a single-layer printed circuit board (PCB) process. Wide bandwidth and a quasi-omnidirectional radiation pattern are obtained. The proposed antenna is therefore a good candidate for millimeter-wave applications. Measured results are in good agreement with simulated results."}
{"_id": "746bcd8b5a3ad103a4786f0429d7766252b640ad", "title": "\"One of the greatest medical success stories:\" Physicians and nurses' small stories about vaccine knowledge and anxieties.", "text": "In recent years, the Canadian province of Alberta experienced outbreaks of measles, mumps, pertussis, and influenza. Even so, the dominant cultural narrative maintains that vaccines are safe, effective, and necessary to maintain population health. Many vaccine supporters have expressed anxieties that stories contradicting this narrative have lowered herd immunity levels because they frighten the public into avoiding vaccination. As such, vaccine policies often emphasize educating parents and the public about the importance and safety of vaccination. These policies rely on health professionals to encourage vaccine uptake and assume that all professionals support vaccination. Health professionals, however, are socially positioned between vaccine experts (such as immunologists) and non-experts (the wider public). In this article, I discuss health professionals' anxieties about the potential risks associated with vaccination and with the limitations of Alberta's immunisation program. Specifically, I address the question: If medical knowledge overwhelmingly supports vaccination, then why do some professionals continue to question certain vaccines? To investigate this topic, I interviewed twenty-seven physicians and seven nurses. With stock images and small stories that interviewees shared about their vaccine anxieties, I challenge the common assumption that all health professionals support vaccines uncritically. All interviewees provided generic statements that supported vaccination and Alberta's immunisation program, but they expressed anxieties when I asked for details. I found that their anxieties reflected nuances that the culturally dominant vaccine narrative overlooks. Particularly, they critiqued the influence that pharmaceutical companies, the perceived newness of specific vaccines, and the limitations of medical knowledge and vaccine schedules."}
{"_id": "6438f86ea8ad8cfab6144a350cb3da901396d355", "title": "Fuzzy Logic-Based Natural Language Processing and Its Application to Speech Recognition", "text": "In this paper we describe a fuzzy logic-based language processing method, which is applied to speech recognition. Our purpose is to create a system that can learn from a linguistic corpus the fuzzy semantic relations between the concepts represented by words and use such relations to process the word sequences generated by speech recognition systems. In particular, the system will be able to predict the words failed to be recognized by a speech recognition system. This will help to increase the accuracy of a speech recognition system. This will also serve as the first stage of deep semantic processing of speech recognition results by providing \u201csemantic relatedness\u201d between the recognized words. We report the fuzzy inference rule learning system, which we have developed and also report the experimental results based on the system. Key-words: Fuzzy logic, Natural Language Analysis, Speech Recognition, Corpus Linguistics."}
{"_id": "058879dd03907ba4910e8da64ce521c2be1a11c5", "title": "KEGGgraph: a graph approach to KEGG PATHWAY in R and bioconductor", "text": "MOTIVATION\nKEGG PATHWAY is a service of Kyoto Encyclopedia of Genes and Genomes (KEGG), constructing manually curated pathway maps that represent current knowledge on biological networks in graph models. While valuable graph tools have been implemented in R/Bioconductor, to our knowledge there is currently no software package to parse and analyze KEGG pathways with graph theory.\n\n\nRESULTS\nWe introduce the software package KEGGgraph in R and Bioconductor, an interface between KEGG pathways and graph models as well as a collection of tools for these graphs. Superior to existing approaches, KEGGgraph captures the pathway topology and allows further analysis or dissection of pathway graphs. We demonstrate the use of the package by the case study of analyzing human pancreatic cancer pathway.\n\n\nAVAILABILITY\nKEGGgraph is freely available at the Bioconductor web site (http://www.bioconductor.org). KGML files can be downloaded from KEGG FTP site (ftp://ftp.genome.jp/pub/kegg/xml)."}
{"_id": "0647fe48491f82a8314453ad79912c780badddba", "title": "Sentence Simplification by Monolingual Machine Translation", "text": "In this paper we describe a method for simplifying sentences using Phrase Based Machine Translation, augmented with a re-ranking heuristic based on dissimilarity, and trained on a monolingual parallel corpus. We compare our system to a word-substitution baseline and two state-of-the-art systems, all trained and tested on paired sentences from the English part of Wikipedia and Simple Wikipedia. Human test subjects judge the output of the different systems. Analysing the judgements shows that by relatively careful phrase-based paraphrasing our model achieves similar simplification results to state-of-the-art systems, while generating better formed output. We also argue that text readability metrics such as the Flesch-Kincaid grade level should be used with caution when evaluating the output of simplification systems."}
{"_id": "2dbd523c9cd754c9329e722a1c33e317c6c0d53b", "title": "Motivations and Methods for Text Simplification", "text": "Long and complicated sentences prove to be a stumbling block for current systems relying on NL input. These systems stand to gain from methods that syntactically simplify such sentences. To simplify a sentence, we need an idea of the structure of the sentence, to identify the components to be separated out. Obviously a parser could be used to obtain the complete structure of the sentence. However, full parsing is slow and prone to failure, especially on complex sentences. In this paper, we consider two alternatives to full parsing which could be used for simpli cation. The rst approach uses a Finite State Grammar (FSG) to produce noun and verb groups while the second uses a Supertagging model to produce dependency linkages. We discuss the impact of these two input representations on the simpli cation process. 1 Reasons for Text Simpli cation Long and complicated sentences prove to be a stumbling block for current systems which rely on natural language input. These systems stand to gain from methods that preprocess such sentences so as to make them simpler. Consider, for example, the following sentence: (1) The embattled Major government survived a crucial vote on coal pits closure as its last-minute concessions curbed the extent of Tory revolt over an issue that generated unusual heat in the House of Commons and brought the miners to London streets. Such sentences are not uncommon in newswire texts. Compare this with the multi-sentence version which has been manually simpli ed: (2) The embattled Major government survived a crucial vote on coal pits closure. Its last-minute concessions curbed the extent of On leave from the National Centre for Software Technology, Gulmohar Cross Road No. 9, Juhu, Bombay 400 049, India Tory revolt over the coal-mine issue. This issue generated unusual heat in the House of Commons. It also brought the miners to"}
{"_id": "911d79afaf3ca3e3b9634ec6ed16de450cce0c8c", "title": "Improving Text Simplification Language Modeling Using Unsimplified Text Data", "text": "In this paper we examine language modeling for text simplification. Unlike some text-to-text translation tasks, text simplification is a monolingual translation task allowing for text in both the input and output domain to be used for training the language model. We explore the relationship between normal English and simplified English and compare language models trained on varying amounts of text from each. We evaluate the models intrinsically with perplexity and extrinsically on the lexical simplification task from SemEval 2012. We find that a combined model using both simplified and normal English data achieves a 23% improvement in perplexity and a 24% improvement on the lexical simplification task over a model trained only on simple data. Post-hoc analysis shows that the additional unsimplified data provides better coverage for unseen and rare n-grams."}
{"_id": "29778f86a936c5a5fbedcdffdc11d0ddfd3984f1", "title": "Video In Sentences Out", "text": "We present a system that produces sentential descriptions of video: who did what to whom, and where and how they did it. Action class is rendered as a verb, participant objects as noun phrases, properties of those objects as adjectival modifiers in those noun phrases, spatial relations between those participants as prepositional phrases, and characteristics of the event as prepositional-phrase adjuncts and adverbial modifiers. Extracting the information needed to render these linguistic entities requires an approach to event recognition that recovers object tracks, the trackto-role assignments, and changing body posture."}
{"_id": "9ed3140c455f98e07796fade833b972fc1d6c83d", "title": "Multi-dimension semantic dictionary for online intelligence", "text": "The proposed online intelligence (OI) is to provide popular and well-formatted reports for direct decision support based on online data. Completeness, preciseness, conciseness and smartness are proposed as the basic technical indicators to evaluate an OI application. OI requires similar data processing flow and reporting tools with traditional business intelligence (BI). However, OI collects data basically from online other than provided by enterprise application systems. The online data is usually non-structural or semi-structural. Semantics techniques are required by OI to find and process online data into decision-making knowledge automatically in many cases. To this end, a multi-dimension semantic dictionary is proposed to divide online objects' semantics into four semantic aspects, i.e., information semantics, business semantics, application semantics and structural semantics. The four semantic aspects of the semantic dictionary are developed separately and interlinked as a multi-dimension semantic matrix to conduct matchmaking when find and process online data. In this work, the related concepts and implementation framework of the multi-dimension semantic dictionary are discussed and a real application for illegal drug tracing is provided to illustrate how OI make sense and semantics are developed."}
{"_id": "66c28481397a109ff70d07e9692a41bf04ad9ae8", "title": "Dead-time elimination method and current polarity detection circuit for three-phase PWM-controlled inverter", "text": "This paper will present a dead-time elimination scheme and the related current polarity detection circuit without separate power sources for three-phase inverters. The presented scheme includes the freewheeling current polarity detection circuit and the PWM control generator without dead time. It will be shown that the presented scheme eliminates the dead time of PWM control for inverter and therefore dramatically improving the output voltage loss and current distortion. Experimental results derived from an FPGA-based PWM-controlled inverter are shown for confirmation."}
{"_id": "429aea828954ba77da9cb3a1a8dfd4d9ea4d7101", "title": "Single-image SVBRDF capture with a rendering-aware deep network", "text": "Texture, highlights, and shading are some of many visual cues that allow humans to perceive material appearance in single pictures. Yet, recovering spatially-varying bi-directional reflectance distribution functions (SVBRDFs) from a single image based on such cues has challenged researchers in computer graphics for decades. We tackle lightweight appearance capture by training a deep neural network to automatically extract and make sense of these visual cues. Once trained, our network is capable of recovering per-pixel normal, diffuse albedo, specular albedo and specular roughness from a single picture of a flat surface lit by a hand-held flash. We achieve this goal by introducing several innovations on training data acquisition and network design. For training, we leverage a large dataset of artist-created, procedural SVBRDFs which we sample and render under multiple lighting directions. We further amplify the data by material mixing to cover a wide diversity of shading effects, which allows our network to work across many material classes. Motivated by the observation that distant regions of a material sample often offer complementary visual cues, we design a network that combines an encoder-decoder convolutional track for local feature extraction with a fully-connected track for global feature extraction and propagation. Many important material effects are view-dependent, and as such ambiguous when observed in a single image. We tackle this challenge by defining the loss as a differentiable SVBRDF similarity metric that compares the renderings of the predicted maps against renderings of the ground truth from several lighting and viewing directions. Combined together, these novel ingredients bring clear improvement over state of the art methods for single-shot capture of spatially varying BRDFs."}
{"_id": "330f258e290adc2f78820eddde589946f775ae65", "title": "Attribute reduction in decision-theoretic rough set models", "text": "Rough set theory can be applied to rule induction. There are two different types of classification rules, positive and boundary rules, leading to different decisions and consequences. They can be distinguished not only from the syntax measures such as confidence, coverage and generality, but also the semantic measures such as decision-monotocity, cost and risk. The classification rules can be evaluated locally for each individual rule, or globally for a set of rules. Both the two types of classification rules can be generated from, and interpreted by, a decision-theoretic model, which is a probabilistic extension of the Pawlak rough set model. As an important concept of rough set theory, an attribute reduct is a subset of attributes that are jointly sufficient and individually necessary for preserving a particular property of the given information table. This paper addresses attribute reduction in decision-theoretic rough set models regarding different classification properties, such as: decisionmonotocity, confidence, coverage, generality and cost. It is important to note that many of these properties can be truthfully reflected by a single measure c in the Pawlak rough set model. On the other hand, they need to be considered separately in probabilistic models. A straightforward extension of the c measure is unable to evaluate these properties. This study provides a new insight into the problem of attribute reduction. Crown Copyright 2008 Published by Elsevier Inc. All rights reserved."}
{"_id": "26afa926e218357b04ff77bafda3fea4b24dc8fc", "title": "Next-Term Student Performance Prediction: A Recommender Systems Approach", "text": "An enduring issue in higher education is student retention to successful graduation. National statistics indicate that most higher education institutions have four-year degree completion rates around 50%, or just half of their student populations. While there are prediction models which illuminate what factors assist with college student success, interventions that support course selections on a semester-to-semester basis have yet to be deeply understood. To further this goal, we develop a system to predict students\u2019 grades in the courses they will enroll in during the next enrollment term by learning patterns from historical transcript data coupled with additional information about students, courses and the instructors teaching them. We explore a variety of classic and state-of-the-art techniques which have proven effective for recommendation tasks in the e-commerce domain. In our experiments, Factorization Machines (FM), Random Forests (RF), and the Personalized Linear Multiple Regression model achieve the lowest prediction error. Application of a novel feature selection technique is key to the predictive success and interpretability of the FM. By comparing feature importance across populations and across models, we uncover strong connections between instructor characteristics and student performance. We also discover key differences between transfer and non-transfer students. Ultimately we find that a hybrid FM-RF method can be used to accurately predict grades for both new and returning students taking both new and existing courses. Application of these techniques holds promise for student degree planning, instructor interventions, and personalized advising, all of which could improve retention and academic performance."}
{"_id": "c1bd75dd43ad483e93a0c915d754b15e42eeec04", "title": "Semantics Extraction from Images", "text": "An overview of the state-of-the-art on semantics extraction from images is presented. In this survey, we present the relevant approaches in terms of content representation as well as in terms of knowledge representation. Knowledge can be represented in either implicit or explicit fashion while the image is represented in different levels, namely, low-level, intermediate and semantic level. For each combination of knowledge and image representation, a detailed discussion is addressed that leads to fruitful conclusions for the impact of each approach. 1 Semantics extraction basic pipeline Semantics extraction refers to digital data interpretation from a human point of view. In the case that the digital data correspond to images, this usually entails an appearance-based inference using color, texture and/or shape information along with a type of context inference (or representation) that can combine and transform these machine-extracted evidence into what we call a scene description. Following Biederman et al. [1] definitions of context in a visual scene, we can derive three types of context for real-world scene annotation problems: (i) semantic context which encodes the probability of a certain category to be present in a scene (e.g category \u201cstreets\u201d has high probability to coexist with category \u201cbuilding\u201d in the same scene ); (ii) spatial context which encodes the spatial relations of categories (e.g sky is usually above grass in a scene) and (iii) scale context which encodes the relative object size (category \u201chuman\u201d is expected to occupy a small region in a scene which includes the\u201cbuilding\u201d category). The research goals in semantics extraction are mostly a function of the granularity of the semantics in question. The goal could be the extraction of a single or multiple semantics of the entire image (e.g. indoor/outdoor setting), or the extraction of the semantics for different objects in an image. In the latter case, the semantics could be generic (e.g. a vehicle) or specific (e.g. a motorbike). Those goals make it clear that semantics extraction is not a new research area. Depending on the goal, the task of semantics extraction can be considered as a categorization, classification, recognition and understanding task that all share in common the effort for solving the semantic gap. As stated in [2], \u201cDespite intensive recent research, the automatic establishment of a correspondence between"}
{"_id": "2ba8d5195d15c45418ee77ada809ab7e1ba0df53", "title": "A Distributed Reinforcement Learning Scheme for Network Routing", "text": "In this paper we describe a self-adjusting algorithm for packet routing, in which a reinforcement learning module is embedded into each node of a switching network. Only local communication is used to keep accurate statistics at each node on which routing policies lead to minimal delivery times. In simple experiments involving a 36-node, irregularly connected network, this learning approach proves superior to a nonadaptive algorithm based on precomputed shortest paths. The authors would like to thank for their support the Bellcore Cognitive Science Research Group, the National Defense Science and Engineering Graduate fellowship program, and National Science Foundation Grant IRI-9214873. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the o cial policies, either expressed or implied, of Bellcore, the National Science Foundation or the U.S. Government."}
{"_id": "d69bfbe6763a5ab0708c8a85eb67d5ea450177c5", "title": "VADI: GPU Virtualization for an Automotive Platform", "text": "Modern vehicles are evolving with more electronic components than ever before (In this paper, \u201cvehicle\u201d means \u201cautomotive vehicle.\u201d It is also equal to \u201ccar.\u201d) One notable example is graphical processing unit (GPU), which is a key component to implement a digital cluster. To implement the digital cluster that displays all the meters (such as speed and fuel gauge) together with infotainment services (such as navigator and browser), the GPU needs to be virtualized; however, GPU virtualization for the digital cluster has not been addressed yet. This paper presents a Virtualized Automotive DIsplay (VADI) system to virtualize a GPU and its attached display device. VADI manages two execution domains: one for the automotive control software and the other for the in-vehicle infotainment (IVI) software. Through GPU virtualization, VADI provides GPU rendering to both execution domains, and it simultaneously displays their images on a digital cluster. In addition, VADI isolates GPU from the IVI software in order to protect it from potential failures of the IVI software. We implement VADI with Vivante GC2000 GPU and perform experiments to ensure requirements of International Standard Organization (ISO) safety standards. The results show that VADI guarantees 30 frames per second (fps), which is the minimum frame rate for digital cluster mandated by ISO safety standards even with the failure of the IVI software. It also achieves 60 fps in a synthetic workload."}
{"_id": "c2c6a92198568be2ee891c93a3830a69bee26158", "title": "An Overview of Smart Home Environments: Architectures, Technologies and Applications", "text": "The aim of this research survey note is to provide a comprehensive overview of Smart Home Environments with a focus on their architectures and application areas, as well as utilized technologies, infrastructures, and standards. The main source of information was provided by past and existing research and development projects carried out in this area, while the main result of our survey is a classification of Smart Home Environments, as revealed by this projects\u2019 survey."}
{"_id": "74a308f986a8368542f6945c94559a936bc3a31e", "title": "Defending against malicious peripherals", "text": "Attacks on host computers by malicious peripherals are a growing problem. Inexpensive and powerful peripherals, which attach to plug-and-play buses, have made such attacks easy to mount. Making matters worse, commodity operating systems lack systematic defenses, and users are often not aware of the scope of the problem. We present Cinch, a pragmatic response to this threat. Cinch uses virtualization to place the hardware in a logically separate, untrusted machine, and includes an interposition layer between the untrusted machine and the protected one. This layer accepts or rejects interaction with devices and enforces security policies that are easily configured and extended by users. We show that Cinch integrates with existing OSes, enforces policies that thwart real world attacks, and has low overhead."}
{"_id": "9d901c1ce5e024ad22f1c2663dba8e1099b496e9", "title": "HADOOP BLOCK PLACEMENT POLICY FOR DIFFERENT FILE FORMATS", "text": "Now a day\u2019s Peta-Bytes of data becomes the norm in industries. Handling, analyzing such big data is challenging task. Even frameworks like Hadoop (Open Source Implementation of MapReduce Paradigm) and NoSQL databases like Cassandra, HBase can be used to analyze and store such large data; heterogeneity of data is still an issue. Data centers usually have clusters formed using heterogeneous nodes. Ecosystem like Hadoop can be used to manage such types of cluster but it can not schedule jobs (Application) efficiently on this heterogeneous cluster when data is itself heterogeneous. Heterogeneity of data may be because of data format or because of the complexity. This paper is review of systems and algorithms for distributed management of huge size data, efficiency of these approaches."}
{"_id": "a1e128a78cc9ae7aa2aaaf4c3934a0f6387b7606", "title": "Simultaneous estimation of food categories and calories with multi-task CNN", "text": "In this paper, we propose simultaneous estimation of food categories and calories for food photos. Since there exists strong correlation between food categories and calories in general, we expect that simultaneous training of both brings performance boosting compared to independent single training. To this end, we use a multitask CNN. In the experiments, we collected calorie-annotated recipe data from the online cooking recipe sites, and trained multi-task and single-task CNNs. As results, the multi-task CNN achieved the better performance on both food category estimation and food calorie estimation than single-task CNNs."}
{"_id": "149a43346f5a51db064be348fd266c4959e0d1f0", "title": "Bandit Structured Prediction for Learning from Partial Feedback in Statistical Machine Translation", "text": "We present an approach to structured prediction from bandit feedback, called Bandit Structured Prediction, where only the value of a task loss function at a single predicted point, instead of a correct structure, is observed in learning. We present an application to discriminative reranking in Statistical Machine Translation (SMT) where the learning algorithm only has access to a 1 \u2212 BLEU loss evaluation of a predicted translation instead of obtaining a gold standard reference translation. In our experiment bandit feedback is obtained by evaluating BLEU on reference translations without revealing them to the algorithm. This can be thought of as a simulation of interactive machine translation where an SMT system is personalized by a user who provides single point feedback to predicted translations. Our experiments show that our approach improves translation quality and is comparable to approaches that employ more informative feedback in learning."}
{"_id": "c8e72009434e34e6800c54a7bb571abc8e9279ca", "title": "Review of Machine Learning Algorithms in Differential Expression Analysis", "text": "In biological research machine learning algorithms are part of nearly every analytical process. They are used to identify new insights into biological phenomena, interpret data, provide molecular diagnosis for diseases and develop personalized medicine that will enable future treatments of diseases. In this paper we (1) illustrate the importance of machine learning in the analysis of large scale sequencing data, (2) present an illustrative standardized workflow of the analysis process, (3) perform a Differential Expression (DE) analysis of a publicly available RNA sequencing (RNASeq) data set to demonstrate the capabilities of various algorithms at each step of the workflow, and (4) show a machine learning solution in improving the computing time, storage requirements, and minimize utilization of computer memory in analyses of RNA-Seq datasets. The source code of the analysis pipeline and associated scripts are presented in the paper appendix to allow replication of experiments."}
{"_id": "39101b6482e21eccbd77f2dee6a3710037e4ba6f", "title": "As Time Goes By: Comprehensive Tagging of Textual Phrases with Temporal Scopes", "text": "Temporal expressions (TempEx\u2019s for short) are increasingly important in search, question answering, information extraction, and more. Techniques for identifying and normalizing explicit temporal expressions work well, but are not designed for and cannot cope with textual phrases that denote named events, such as \u201cClinton\u2019s term as secretary of state\u201d. This paper addresses the problem of detecting such temponyms, inferring their temporal scopes, and mapping them to events in a knowledge base if present there. We present methods for this kind of temponym resolution, using an entityand TempEx-oriented document model and the Yago knowledge base for distant supervision. We develop a family of Integer Linear Programs for jointly inferring temponym mappings to the timeline and knowledge base. This enriches the document representation and also extends the knowledge base by obtaining new alias names for events. Experiments with three different corpora demonstrate the viability of our methods."}
{"_id": "5a609c164899a6e07317add1c8ae7420560c6ff1", "title": "Stakeholder participation for sustainable waste management", "text": "Inadequate environmental sanitation in many cities is a major cause of diseases and is a drain on the economy by way of lost workdays, cost of treatment and cleanup activities. Municipal authorities and policymakers need to act fast to address this issue. Sustainable waste management provides a comprehensive inter-disciplinary framework for addressing the problems of managing urban solid waste, in the resource constrained developing countries where quality of such services are poor and costs are high often with no effective means of recovering them. Upgrading the coverage of waste management and services and increasing their efficiency is a precondition for improving the environmental quality of cities. This paper highlights the fact that the involvement and participation of all the stakeholders such as the waste generators, waste processors, formal and informal agencies, non-governmental organisations and financing institutions is a key factor for the sustainable waste management. r 2005 Elsevier Ltd. All rights reserved."}
{"_id": "663ae1af2835f456de9f20ce6dce85ea559dad2e", "title": "HILOG: A Foundation for Higher-Order Logic Programming", "text": "D We describe a novel logic, called HiLog, and show that it provides a more suitable basis for logic programming than does traditional predicate logic. HiLog has a higher-order syntax and allows arbitrary terms to appear in places where predicates, functions, and atomic formulas occur in predicate calculus. But its semantics is first-order and admits a sound and complete proof procedure. Applications of HiLog are discussed, including DCG grammars, higher-order and modular logic programming, and deductive databases. a"}
{"_id": "13d79d6021ee33bb403a6ee1c5935a350b87bdba", "title": "GIST: A Model for Design and Management of Content and Interactivity of Customer-Centric Web Sites", "text": "Customer-centric Web-based systems, such as ecommerce Web sites, or sites that support customer relationship management (CRM) activities, are themselves information systems, but their design and maintenance need to follow vastly different approaches from the traditional systems lifecycle approach. Based on marketing frameworks that are applicable to the online world, and following design science principles, we develop a model to guide the design and the continuous management of such sites. The model makes extensive use of current technologies for tracking the customers and their behaviors, and combines elements of data mining and statistical analyses. A case study based on a financial services Web site is used to provide a preliminary validation and design evaluation of our approach. The case study showed considerable measured improvement in the effectiveness of the company\u0092s Web site. In addition, it also highlighted an important benefit of the our approach: the identification of previously unknown or unexpected segments of visitors. This finding can lead to promising new business opportunities."}
{"_id": "e76e9c99452edf1c477d4fc099fec7d607a6d65f", "title": "18F-Fluciclovine PET/MRI for preoperative lymph node staging in high-risk prostate cancer patients", "text": "To investigate the diagnostic potential of simultaneous 18F-fluciclovine PET/MRI for pelvic lymph node (LN) staging in patients with high-risk prostate cancer. High-risk prostate cancer patients (n=28) underwent simultaneous 18F-fluciclovine PET/MRI prior to surgery. LNs were removed according to a predefined template of eight regions. PET and MR images were evaluated for presence of LN metastases according to these regions. Sensitivity/specificity for detection of LN metastases were calculated on patient and region basis. Sizes of LN metastases in regions with positive and negative imaging findings were compared with linear mixed models. Clinical parameters of PET-positive and -negative stage N1 patients were compared with the Mann-Whitney U test. Patient- and region-based sensitivity/specificity for detection of pelvic LN metastases was 40 %/87.5 % and 35 %/95.7 %, respectively, for MRI and 40 %/100 % and 30 %/100 %, respectively, for PET. LN metastases in true-positive regions were significantly larger than metastases in false-negative regions. PET-positive stage N1 patients had higher metastatic burden than PET-negative N1 patients. Simultaneous 18F-fluciclovine PET/MRI provides high specificity but low sensitivity for detection of LN metastases in high-risk prostate cancer patients. 18F-Fluciclovine PET/MRI scan positive for LN metastases indicates higher metastatic burden than negative scan. \u2022 18 F-Fluciclovine PET/MRI has high specificity for detection of lymph node metastasis. \u2022 18 F-Fluciclovine PET/MRI lacks sensitivity to replace ePLND. \u2022 18 F-Fluciclovine PET/MRI may be used to aid surgery and select adjuvant therapy. \u2022 18 F-Fluciclovine PET-positive patients have more extensive disease than PET-negative patients. \u2022 Size of metastatic lymph nodes is an important factor for detection."}
{"_id": "7ee42abcd6f8c3c16375aead3346d0c7f3a6f672", "title": "Neuro-dynamic programming", "text": "What should you think more? Time to get this [PDF? It is easy then. You can only sit and stay in your place to get this book. Why? It is on-line book store that provide so many collections of the referred books. So, just with internet connection, you can enjoy downloading this book and numbers of books that are searched for now. By visiting the link page download that we have provided, the book that you refer so much can be found. Just save the requested book downloaded and then you can enjoy the book to read every time and place you want."}
{"_id": "642db624b5b33a02a435ee1415d7c9f9cef36e1d", "title": "Integrated Architectures for Learning, Planning, and Reacting Based on Approximating Dynamic Programming", "text": "This paper extends previous work with Dyna a class of architectures for intelligent systems based on approximating dynamic program ming methods Dyna architectures integrate trial and error reinforcement learning and execution time planning into a single process operating alternately on the world and on a learned model of the world In this paper I present and show results for two Dyna archi tectures The Dyna PI architecture is based on dynamic programming s policy iteration method and can be related to existing AI ideas such as evaluation functions and uni versal plans reactive systems Using a nav igation task results are shown for a simple Dyna PI system that simultaneously learns by trial and error learns a world model and plans optimal routes using the evolving world model The Dyna Q architecture is based on Watkins s Q learning a new kind of rein forcement learning Dyna Q uses a less famil iar set of data structures than does Dyna PI but is arguably simpler to implement and use We show that Dyna Q architectures are easy to adapt for use in changing environments Introduction to Dyna How should a robot decide what to do The traditional answer in AI has been that it should deduce its best action in light of its current goals and world model i e that it should plan However it is now widely recognized that planning s usefulness is limited by its computational complexity and by its dependence on an accurate world model An alternative approach is to do the planning in advance and compile its result into a set of rapid reactions or situation action rules which are then used for real time decision making Yet a third approach is to learn a good set of reactions by trial and error this has the advantage of eliminating the dependence on a world model In this paper I brie y introduce Dyna a class of simple architectures integrating and permitting tradeo s among these three approaches Dyna architectures use machine learning algo rithms to approximate the conventional optimal con trol technique known as dynamic programming DP Bellman Ross DP itself is not a learn ing method but rather a computational method for determining optimal behavior given a complete model of the task to be solved It is very similar to state space search but di ers in that it is more incremental and never considers actual action sequences explicitly only single actions at a time This makes DP more amenable to incremental planning at execution time and also makes it more suitable for stochastic or in completely modeled environments as it need not con sider the extremely large number of sequences possi ble in an uncertain environment Learned world mod els are likely to be stochastic and uncertain making DP approaches particularly promising for learning sys tems Dyna architectures are those that learn a world model online while using approximations to DP to learn and plan optimal behavior Intuitively Dyna is based on the old idea that planning is like trial and error learning from hypothet ical experience Craik Dennett The theory of Dyna is based on the theory of DP e g Ross and on DP s relationship to reinforcement learning Watkins Barto Sutton Watkins to temporal di erence learning Sutton and to AI methods for planning and search Korf Werbos has previously argued for the general idea of building AI systems that approx imate dynamic programming and 
Whitehead and others Sutton Barto Sutton Pinette Rumelhart et al have presented results for the speci c idea of augmenting a reinforcement learning system with a world model used for planning Dyna PI Dyna by Approximating Policy Iteration I call the rst Dyna architecture Dyna PI because it is based on approximating a DP method known as pol icy iteration Howard The Dyna PI architec ture consists of four components interacting as shown in Figure The policy is simply the function formed by the current set of reactions it receives as input a description of the current state of the world and pro duces as output an action to be sent to the world The world represents the task to be solved prototypi cally it is the robot s external environment The world receives actions from the policy and produces a next state output and a reward output The overall task is de ned as maximizing the long term average reward per time step cf Russell The architecture also includes an explicit world model The world model is intended to mimic the one step input output behavior of the real world Finally the Dyna PI architecture in cludes an evaluation function that rapidly maps states to values much as the policy rapidly maps states to actions The evaluation function the policy and the world model are each updated by separate learning processes WORLD Action Reward (scalar) Heuristic Reward (scalar) State EVALUATION FUNCTION"}
{"_id": "605ba69df3b093e949539af906bad80d30622a5c", "title": "Static hand gesture recognition using neural networks", "text": "This paper presents a novel technique for hand gesture recognition through human\u2013computer interaction based on shape analysis. The main objective of this effort is to explore the utility of a neural network-based approach to the recognition of the hand gestures. A unique multi-layer perception of neural network is built for classification by using backpropagation learning algorithm. The goal of static hand gesture recognition is to classify the given hand gesture data represented by some features into some predefined finite number of gesture classes. The proposed system presents a recognition algorithm to recognize a set of six specific static hand gestures, namely: Open, Close, Cut, Paste, Maximize, and Minimize. The hand gesture image is passed through three stages, preprocessing, feature extraction, and classification. In preprocessing stage some operations are applied to extract the hand gesture from its background and prepare the hand gesture image for the feature extraction stage. In the first method, the hand contour is used as a feature which treats scaling and translation of problems (in some cases). The complex moment algorithm is, however, used to describe the hand gesture and treat the rotation problem in addition to the scaling and translation. The algorithm used in a multi-layer neural network classifier which uses back-propagation learning algorithm. The results show that the first method has a performance of 70.83% recognition, while the second method, proposed in this article, has a better performance of 86.38% recognition rate."}
{"_id": "56011257f65411b13da37dcff517a64c275b96f1", "title": "Intrusion Detection Model Based On Particle Swarm Optimization and Support Vector Machine", "text": "Advance in information and communication technologies, force us to keep most of the information electronically, consequently, the security of information has become a fundamental issue. The traditional intrusion detection systems look for unusual or suspicious activity, such as patterns of network traffic that are likely indicators of unauthorized activity. However, normal operation often produces traffic that matches likely \"attack signature\", resulting in false alarms. One main drawback is the inability of detecting new attacks which do not have known signatures. In this paper particle swarm optimization (PSO) is used to implement a feature selection, and support vector machine (SVMs) with the one-versus-rest method serve as a fitness function of PSO for classification problems from the literature. Experimental result shows that our method allows us to recognize not only known attacks but also to detect suspicious activity that may be the result of a new, unknown attack. Our method simplifies features effectively and obtains a higher classification accuracy compared to other methods"}
{"_id": "b1ca425cab859aa04259f0a093b7c948abd0e630", "title": "Wireless Relay Communications with Unmanned Aerial Vehicles: Performance and Optimization", "text": "In this paper, we investigate a communication system in which unmanned aerial vehicles (UAVs) are used as relays between ground-based terminals and a network base station. We develop an algorithm for optimizing the performance of the ground-to-relay links through control of the UAV heading angle. To quantify link performance, we define the ergodic normalized transmission rate (ENTR) for the links between the ground nodes and the relay, and derive a closed-form expression for it in terms of the eigenvalues of the channel correlation matrix. We show that the ENTR can be approximated as a sinusoid with an offset that depends on the heading of the UAV. Using this observation, we develop a closed-form expression for the UAV heading that maximizes the uplink network data rate while keeping the rate of each individual link above a certain threshold. When the current UAV relay assignments cannot meet the minimum link requirements, we investigate the deployment and heading control problem for new UAV relays as they are added to the network, and propose a smart handoff algorithm that updates node and relay assignments as the topology of the network evolves."}
{"_id": "f1ae6cc5d3ec4c2ff310401562476cfcc79e9f0d", "title": "Modeling Social Perception of Faces [Social Sciences]", "text": "The face is our primary source of visual information for identifying people and reading their emotional and mental states. With the exception of prosopagnosics (who are unable to recognize faces) and those suffering from such disorders of social cognition as autism, people are extremely adept at these two tasks. However, our cognitive powers in this regard come at the price of reading too much into the human face. The face is often treated as a window into a person's true nature. Given the agreement in social perception of faces, this paper discusses that it should be possible to model this perception."}
{"_id": "d66edebece1f13a82a0cb68d7eaada7a83b9d77c", "title": "Supporting Situationally Aware Cybersecurity Systems 30 th September 2015", "text": "In this report, we describe the Unified Cyber Security ontology (UCO) to support situational awareness in cyber security systems. The ontology is an effort to incorporate and integrate heterogeneous information available from different cyber security systems and most commonly used cyber security standards for information sharing and exchange. The ontology has also been mapped to a number of existing cyber security ontologies as well as concepts in the Linked Open Data cloud. Similar to DBpedia which serves as the core for Linked Open Data cloud, we envision UCO to serve as the core for the specialized cyber security Linked Open Data cloud which would evolve and grow with the passage of time with additional cybersecurity data sets as they become available. We also present a prototype system and concrete use-cases supported by the UCO ontology. To the best of our knowledge, this is the first cyber security ontology that has been mapped to general world ontologies to support broader and diverse security use-cases. We compare the resulting ontology with previous efforts, discuss its strengths and limitations, and describe potential future work directions."}
{"_id": "cdfd7a02a2d7b8e729dc50ccad81180b2f2a9e6e", "title": "Training Spiking ConvNets by STDP and Gradient Descent", "text": "This paper proposes a new method for training multi-layer spiking convolutional neural networks (CNNs). Training a multi-layer spiking network poses difficulties because the output spikes do not have derivatives and the commonly use backpropagation method for non-spiking networks is not easily applied. Our method uses a novel version of layered spike-timing- dependent plasticity (STDP) that incorporates supervised and unsupervised components. Our method starts with conventional learning methods and converts them to spatio-temporally local rules suited for spiking neural networks (SNNs). The training process uses two components for unsupervised feature extraction and supervised classification. The first component is a new STDP rule for spike-based representation learning which trains convolutional filters. The second introduces a new STDP-based supervised learning rule for spike pattern classification via an approximation to gradient descent. Stacking these components implements a novel spiking CNN of integrate-and-fire (IF) neurons with performances comparable with the state-of-the-art deep SNNs. The experimental results show the success of the proposed model for the MNIST handwritten digit classification. Our network architecture is the only high performance, spiking CNN which provides bio-inspired STDP rules in a hierarchy of feature extraction and classification in an entirely spike-based framework."}
{"_id": "ac00e898555cea898d31cba1026d02ca780c4504", "title": "Practical Implementations of Arithmetic Coding 1", "text": "We provide a tutorial on arithmetic coding, showing how it provides nearly optimal data compression and how it can be matched with almost any probabilistic model. We indicate the main disadvantage of arithmetic coding, its slowness, and give the basis of a fast, space-e cient, approximate arithmetic coder with only minimal loss of compression e ciency. Our coder is based on the replacement of arithmetic by table lookups coupled with a new deterministic probability estimation scheme. Index terms : Data compression, arithmetic coding, adaptive modeling, analysis of algorithms, data structures, low precision arithmetic. A similar version of this paper appears in Image and Text Compression, James A. Storer, ed., Kluwer Academic Publishers, Norwell, MA, 1992, 85{112. A shortened version of this paper appears in the proceedings of the International Conference on Advances in Communication and Control (COMCON 3), Victoria, British Columbia, Canada, October 16{18, 1991. Support was provided in part by NASA Graduate Student Researchers Program grant NGT{ 50420 and by a National Science Foundation Presidential Young Investigators Award grant with matching funds from IBM. Additional support was provided by a Universities Space Research Association/CESDIS associate membership. Support was provided in part by National Science Foundation Presidential Young Investigator Award CCR{9047466 with matching funds from IBM, by NSF research grant CCR{9007851, by Army Research O ce grant DAAL03{91{G{0035, and by the O ce of Naval Research and the Defense Advanced Research Projects Agency under contract N00014{91{J{4052ARPA Order No. 8225. Additional support was provided by a Universities Space Research Association/CESDIS associate membership."}
{"_id": "a6f87187553ab5b1f85b30f63cecd105f222a91f", "title": "Emotional Branding and the Strategic Value of the Doppelg\u00e4nger Brand Image", "text": "Emotional branding is widely heralded as a key to marketing success. However, little attention has been given to the risks posed by this strategy. This article argues that emotional-branding strategies are conducive to the emergence of a doppelg\u00e4nger brand image, which is defined as a family of disparaging images and meanings about a brand that circulate throughout popular culture. This article\u2019s thesis is that a doppelg\u00e4nger brand image can undermine the perceived authenticity of an emotional-branding story and, thus, the identity value that the brand provides to consumers. The authors discuss how the tenets of emotional branding paradoxically encourage the formation and propagation of doppelg\u00e4nger brand imagery. This article develops the counterintuitive proposition that rather than merely being a threat to be managed, a doppelg\u00e4nger brand image can actually benefit a brand by providing early warning signs that an emotional-branding story is beginning to lose its cultural resonance. This article demonstrates these ideas through an analysis of the doppelg\u00e4nger brand image that is beginning to haunt a paragon of emotional branding\u2014Starbucks. The authors conclude with a discussion of how marketing managers can proactively use the insights gained by analyzing a doppelg\u00e4nger brand image."}
{"_id": "30a0e79740d6108d9671dd936db72c485a35b3e9", "title": "A longitudinal study of the social and emotional predictors and consequences of cyber and traditional bullying victimisation", "text": "Few longitudinal studies have investigated how cyberbullying interacts with traditional bullying among young people, who are increasingly using online environments to seek information, entertainment and to socialise. This study aimed to identify the associations between the relative contribution of cyberbullying victimisation and traditional bullying victimisation on social and emotional antecedents and outcomes among adolescents. Participants were a cohort of 1,504 adolescents from 16 Australian schools followed from age 13 to 15\u00a0years. Adolescents experiencing social and emotional difficulties were more likely to be cyberbullied and traditionally bullied, than traditionally bullied only. Those targeted in both ways experienced more harm and stayed away from school more often than those traditionally bullied only. These findings suggest a high coexistence of cyber and traditional bullying behaviours and their antecedents, and higher levels of harm from a combination of these behaviours for adolescents over time. Future research should engage students as co-researchers to enhance school and parent strategies to support adolescents experiencing difficulties, and to reduce the likelihood of both cyber and traditional bullying."}
{"_id": "6d81305e1a3680455ee66418e306defeb5969a68", "title": "Seeing Stars When There Aren\u2019t Many Stars: Graph-Based Semi-Supervised Learning For Sentiment Categorization", "text": "We present a graph-based semi-supervised learning algorithm to address the sentiment analysis task of rating inference. Given a set of documents (e.g., movie reviews) and accompanying ratings (e.g., \u201c4 stars\u201d), the task calls for inferring numerical ratings for unlabeled documents based on the perceived sentiment expressed by their text. In particular, we are interested in the situation where labeled data is scarce. We place this task in the semi-supervised setting and demonstrate that considering unlabeled reviews in the learning process can improve ratinginference performance. We do so by creating a graph on both labeled and unlabeled data to encode certain assumptions for this task. We then solve an optimization problem to obtain a smooth rating function over the whole graph. When only limited labeled data is available, this method achieves significantly better predictive accuracy over other methods that ignore the unlabeled examples during training."}
{"_id": "53c9d4729c8a40ee32f2452ff6b8023b5de48f0e", "title": "CNN-SLAM: Real-Time Dense Monocular SLAM with Learned Depth Prediction", "text": "Given the recent advances in depth prediction from Convolutional Neural Networks (CNNs), this paper investigates how predicted depth maps from a deep neural network can be deployed for the goal of accurate and dense monocular reconstruction. We propose a method where CNN-predicted dense depth maps are naturally fused together with depth measurements obtained from direct monocular SLAM, based on a scheme that privileges depth prediction in image locations where monocular SLAM approaches tend to fail, e.g. along low-textured regions, and vice-versa. We demonstrate the use of depth prediction to estimate the absolute scale of the reconstruction, hence overcoming one of the major limitations of monocular SLAM. Finally, we propose a framework to efficiently fuse semantic labels, obtained from a single frame, with dense SLAM, so to yield semantically coherent scene reconstruction from a single view. Evaluation results on two benchmark datasets show the robustness and accuracy of our approach."}
{"_id": "ff120c4826c59f97c2eb11e173eb50399914c0f9", "title": "Development of a one-axis actively regulated bearingless motor with a repulsive type passive magnetic bearing", "text": "This paper describes a novel one-axis actively regulated bearingless motor with a spinning top-shaped rotor, which is intended for a use as a cooling fan. This motor requires only one set of three-phase winding and thus one three-phase inverter to realize both rotation and magnetic suspension, and thus it is dubbed a single-drive bearingless motor. An axial position of the rotor is actively regulated and radial and tilting motions are passively stabilized by the passive magnetic bearing. Torque and axial suspension force are regulated by q-axis and d-axis currents, respectively. In this paper, the structure and principle, design, and test results of the proposed bearingless motor are presented."}
{"_id": "0e5a7420b73fb13a69eb8286ea8604a3ecbd2e37", "title": "Mutual information and minimum mean-square error in multiuser Gaussian channels", "text": "1. Introduction Due to the lack of explicit closed form expressions of the mutual information for binary inputs, which were provided only for the BPSK and QPSK for the single input single output (SISO) case, [1], [2], [3], it is of particular importance to address connections between information theory and estimation theory for the multiuser case. Connections between information theory and estimation theory dates back to the work of Duncan, in [4] who showed that for the continuous-time additive white Gaussian noise (AWGN) channel, the filtering minimum mean squared error (causal estimation) is twice the input-output mutual information for any underlying signal distribution. Recently, Guo, Shamai, and Verdu have illuminated intimate connections between information theory and estimation theory in a seminal paper, [1]. In particular, Guo et al. have shown that in the classical problem of information transmission through the conventional AWGN channel, the derivative of the mutual information with respect to the SNR is equal to the smoothing minimum mean squared error (noncausal estimation); a relationship that holds for scalar, vector, discrete-time and continuous-time channels regardless of the input statistics. There have been extensions of these results to the case of mismatched input distributions in the scalar Gaussian channel in [5] and [6]. However, the fundamental relation between the derivative of the mutual information and the MMSE, known as I-MMSE identity, and defined for point to point channels with any noise or input distributions in [1] is not anymore suitable for the multiuser case. Therefore, in this paper, we revisit the connections between the mutual information and the MMSE for the multiuser setup. We generalize the I-MMSE relation to the multiuser case. In particular, we prove that the derivative of the mutual information with respect to the signal to noise ratio (SNR) is equal to the minimum mean squared error (MMSE) plus a covariance induced due to the interference, quantified by a term with respect to the cross correlation of the users inputs' estimates, their channels, and their precoding matrices. Further, we capitalize on this unveiled multiuser I-MMSE relation to derive the components of the multiuser mutual information. In particular, we derive the derivative of the conditinal and non-conditional mutual information with respect to the SNR. Further extensions of this result allows a generalization of the relations of linear vector Gaussian channels in [7] to multiuser channels. In particular, [8], [9] generalize the I-MMSE relation to the per-user gradient \u2026"}
{"_id": "6149616097ee4ba1c63b37fe031ce5af752eb9ac", "title": "Mersenne Twister A Pseudo-Random Number Generator", "text": "Pseudo random number generators have been widely used in numbers of applications, particularly simulation and cryptography. Among them is a Mersenne Twister. Introduced in 1998 by Makoto Matsumoto and Takuji Nishimura, it has been a highly preferred generator since it provides long period, high order of dimensional equidistribution, speed and reliability. In this paper, we will focus mainly on describing the theories and algorithm of the Mersenne Twister. We also briefly introduce its variants and discuss its limitations and improvements."}
{"_id": "1cd5ebefd5750b9550fb6994ef089306666a736d", "title": "Evaluating WAP News Sites: The Webqual/m Approach", "text": "This paper reports on the evaluation of wireless Internet news sites using the WebQual/m instrument. From initial application in the domain of traditional Internet Web sites, the instrument has been adapted for sites delivered using the wireless application protocol (WAP). The WebQual approach is to assess the Web-site quality from the perspective of the 'voice of the customer\u2019, an approach adopted in quality function deployment. The WebQual/m instrument is used to assess customer perceptions of information, site and useroriented qualities. In particular, the qualities of three UK-based WAP news sites are assessed via an online questionnaire. The results are reported and analysed and demonstrate considerable variations in the offerings of the news sites. The findings and their implications for mobile commerce are discussed and some conclusions and directions for further research are provided."}
{"_id": "113551b39cfbe39d607942dc3cbb07df0fbc216a", "title": "Phialemoniopsis endophytica sp. nov., a new species of endophytic fungi from Luffa cylindrica in Henan, China", "text": "During a survey of endophytic fungi in the cucurbit plants collected from Henan, China, a new species, Phialemoniopsis endophytica was isolated from the lower stem of Luffa cylindrica. It differs from other Phialemoniopsis species by its cylindrical to flask-shaped phialides, falcate conidia with blunt ends, ostiolate pycnidium-like conidiomata without marginal setae and ellipsoidal chlamydospores. Multi-locus (ITS, LSU, ACT, and TUB) phylogenetic analysis confirmed that P. endophytica is distinct from other species. A synopsis of the morphological characters of the new species is provided."}
{"_id": "4ae093d626e17670ede21200f9cf1790e8c23dc8", "title": "Learning user interaction models for predicting web search result preferences", "text": "Evaluating user preferences of web search results is crucial for search engine development, deployment, and maintenance. We present a real-world study of modeling the behavior of web search users to predict web search result preferences. Accurate modeling and interpretation of user behavior has important applications to ranking, click spam detection, web search personalization, and other tasks. Our key insight to improving robustness of interpreting implicit feedback is to model query-dependent deviations from the expected \"noisy\" user behavior. We show that our model of clickthrough interpretation improves prediction accuracy over state-of-the-art clickthrough methods. We generalize our approach to model user behavior beyond clickthrough, which results in higher preference prediction accuracy than models based on clickthrough information alone. We report results of a large-scale experimental evaluation that show substantial improvements over published implicit feedback interpretation methods."}
{"_id": "7ca41ed6116d223af3bb52efb55832ee6786a61c", "title": "Diffusion dynamics on multiplex networks", "text": "We study the time scales associated with diffusion processes that take place on multiplex networks, i.e., on a set of networks linked through interconnected layers. To this end, we propose the construction of a supra-laplacian matrix, which consists of a dimensional lifting of the laplacian matrix of each layer of the multiplex network. We use perturbative analysis to reveal analytically the structure of eigenvectors and eigenvalues of the complete network in terms of the spectral properties of the individual layers. The spectrum of the supra-laplacian allows us to understand the physics of diffusionlike processes on top of multiplex networks."}
{"_id": "1d96e8f4a9d675139bcc9082d32e768a6c6acc28", "title": "Gender Classification of Blog Authors: With Feature Engineering and Deep Learning using LSTM Networks", "text": "In this paper, we present two approaches to automatically classify the gender of blog authors: the first is a manual feature extraction based system incorporating two novel feature classes: variable length character sequence patterns and thirteen new word classes, along with an added class of surface features while the second is a first-ever application of a memory variant of Recurrent Neural Networks, i.e. Bidirectional Long Short Term Memory Networks (BLSTMs) on this task. We use two blog data sets to report our results: the first is a well-explored one used by the previous state-of-the-art model while the other is a 20 times larger corpus. For the first system, we use a voting of machine learning classifiers to obtain an improved accuracy with respect to the previous feature mining systems on the former data set. Using our second approach, we show that the accuracy obtained using such deep LSTMs is comparable to the current state-of-the-art deep learning system for the task of gender classification. Finally, we carry out a comparative study of performance of both the systems on the two data sets."}
{"_id": "2b62024e8ee099f9138d0214e51f14b5b3222f6d", "title": "Decision Making Under Uncertainty in Visualisation", "text": "Decision making under uncertainty can lead to irrational behaviour; such errors are often being referred to as cognitive biases. Related work in this area has tended to focus on the human\u2019s analytic and sensemaking processes. This paper puts forward a novel perspective on this, proposing that some cognitive biases can also occur in the process of viewing visualisations. Consequently, this source of error may have a negative impact on decision making. This paper presents examples of situations where cognitive biases in visualisation can occur and outlines a future user study to investigate the anchoring and adjustment cognitive biases in visualisation."}
{"_id": "69af87477fc2ee6fe6fa774eb56776aa4039e434", "title": "Detection of SQL injection and XSS attacks in three tier web applications", "text": "Web applications are used on a large scale worldwide, which handles sensitive personal data of users. With web application that maintains data ranging from as simple as telephone number to as important as bank account information, security is a prime point of concern. With hackers aimed to breakthrough this security using various attacks, we are focusing on SQL injection attacks and XSS attacks. SQL injection attack is very common attack that manipulates the data passing through web application to the database servers through web servers in such a way that it alters or reveals database contents. While Cross Site Scripting (XSS) attacks focuses more on view of the web application and tries to trick users that leads to security breach. We are considering three tier web applications with static and dynamic behavior, for security. Static and dynamic mapping model is created to detect anomalies in the class of SQL Injection and XSS attacks."}
{"_id": "35a231c981a4a5f6711eb831d0d55673c2bcb161", "title": "The reviewing of object files: Object-specific integration of information", "text": "A series of experiments explored a form of object-specific priming. In all experiments a preview field containing two or more letters is followed by a target letter that is to be named. The displays are designed to produce a perceptual interpretation of the target as a new state of an object that previously contained one of the primes. The link is produced in different experiments by a shared location, by a shared relative position in a moving pattern, or by successive appearance in the same moving frame. An object-specific advantage is consistently observed: naming is facilitated by a preview of the target, if (and in some cases only if) the two appearances are linked to the same object. The amount and the object specificity of the preview benefit are not affected by extending the preview duration to 1 s, or by extending the temporal gap between fields to 590 ms. The results are interpreted in terms of a reviewing process, which is triggered by the appearance of the target and retrieves just one of the previewed items. In the absence of an object link, the reviewing item is selected at random. We develop the concept of an object file as a temporary episodic representation, within which successive states of an object are linked and integrated."}
{"_id": "0f34fcab599aabf0ab46d91c21703a9a86b5f048", "title": "On computable numbers, with an application to the Entscheidungsproblem", "text": "The \"computable\" numbers may be described briefly as the real numbers whose expressions as a decimal are calculable by finite means. Although the subject of this paper is ostensibly the computable numbers. it is almost equally easy to define and investigate computable functions of an integral variable or a real or computable variable, computable predicates, and so forth. The fundamental problems involved are, however, the same in each case, and I have chosen the computable numbers for explicit treatment as involving the least cumbrous technique. I hope shortly to give an account of the relations of the computable numbers, functions, and so forth to one another. This will include a development of the theory of functions of a real variable expressed in terms of computable numbers. According to my definition, a number is computable if its decimal can be written down by a machine."}
{"_id": "8bdfdc2c2777b395c086810c03a8cdeccc55c4db", "title": "No Free Lunch Theorems for Search", "text": "We show that all algorithms that search for an extremum of a cost function perform exactly the same when averaged over all possible cost functions In particular if algorithm A outperforms algorithm B on some cost functions then loosely speaking there must exist exactly as many other functions where B outperforms A Starting from this we analyze a number of the other a priori characteristics of the search problem like its geometry and its information theoretic aspects This analysis allows us to derive mathematical benchmarks for assessing a particular search algo rithm s performance We also investigate minimax aspects of the search problem the validity of using characteristics of a partial search over a cost function to predict future behavior of the search algorithm on that cost function and time varying cost functions We conclude with some discussion of the justi ability of biologically inspired search methods"}
{"_id": "2ed677f29416f44196f379a3f06f682f53911bcf", "title": "Information Retrieval from Neurophysiological Signals", "text": "One of the ultimate goals of neuroscience is decoding someone\u2019s intentions directly from his/her brain activities. In this thesis, we aim at pursuing this goal in different scenarios. Firstly, we show the possibility of creating a user-centric music/movie recommender system by employing neurophysiological signals. Regarding this, we employed a brain decoding paradigm in order to classify the features extracted from brain signals of participants watching movie/music video clips, into our target classes (two broad music genres and four broad movie genres). Our results provide a preliminary experimental evidence towards user-centric music/movie content retrieval by exploiting brain signals. Secondly, we addressed one of the main issue of the applications of brain decoding algorithms. Generally, the performance of such algorithms suffers from the constraint of having few and noisy samples, which is the case in most of the neuroimaging datasets. In order to overcome this limitation, we employed an adaptation paradigm in order to transfer knowledge from another domain (e.g. large-scale image domain) to the brain domain. We experimentally show that such adaptation procedure leads to improved results. We performed such adaptation pipeline on different tasks (i.e. object recognition and genre classification) using different neuroimaging modalities (i.e. fMRI, EEG, and MEG). Thirdly, we aimed at one of the fundamental goals in brain decoding which is reconstructing the external stimuli using only the brain features. Under this scenario, we show the possibility of regressing the stimuli spectrogram using time-frequency analysis of the brain signals. Finally, we conclude the thesis by summarizing our contributions and discussing the future directions and applications of our research."}
{"_id": "c1d4d3b2dab80ac3eb4a0a537696248adabd1922", "title": "An energy framework for the network simulator 3 (NS-3)", "text": "The Network Simulator-3 (ns-3) is rapidly developing into a flexible and easy-to-use tool suitable for wireless network simulation. Since energy consumption is a key issue for wireless devices, wireless network researchers often need to investigate the energy consumption at a battery powered node or in the overall network, while running network simulations. This requires the underlying simulator to support energy consumption and energy source modeling. Currently however, ns-3 does not provide any support for modeling energy consumption or energy sources. In this paper, we introduce an integrated energy framework for ns-3, with models for energy source as well as energy consumption. We present the design and implementation of the overall framework and the specific models therein. Further, we show how the proposed framework can be used in ns-3 to simulate energy-aware protocols in a wireless network."}
{"_id": "1cb051fe38814109803218301a2156e0b1649c88", "title": "Differentiating optic disc edema from optic nerve head drusen on optical coherence tomography.", "text": "OBJECTIVE\nTo assess optical coherence tomography in differentiating optic disc edema (ODE) due to papilledema and other optic neuropathies from optic nerve head drusen (ONHD).\n\n\nMETHODS\nOptical coherence tomographic images from 60 subjects (20 with ODE, 20 with ONHD, and 20 control subjects) were assessed qualitatively and quantitatively. Qualitative criteria for ODE included an elevated optic nerve head with smooth internal contour and subretinal hyporeflective space (SHYPS) with recumbent \"lazy V\" pattern. Optic nerve head drusen displayed a \"lumpy-bumpy\" internal optic nerve contour and a rapid decline in SHYPS thickness. Quantitative comparisons included retinal nerve fiber layer and SHYPS thickness.\n\n\nRESULTS\nOptical coherence tomography differentiated ODE from ONHD qualitatively (sensitivity, 63%; specificity, 63%) and quantitatively (sensitivity, 80%; specificity, 90%). Respective differences in mean retinal nerve fiber layer thickness between ODE and ONHD were significant (P < .002) superiorly (206.8 vs 121.7 microm), nasally (176.3 vs 78.6 microm), inferiorly (247.2 vs 153.8 microm), and temporally (180.0 vs 85.5 microm). Respective differences in mean SHYPS thickness between ODE and ONHD were significant (P < .001) at radii of 0.75 mm (512.1 vs 274.4 microm), 1.5 mm (291.4 vs 103.0 microm), and 2.0 mm (145.5 vs 60.7 microm).\n\n\nCONCLUSION\nOptical coherence tomography can differentiate ODE from ONHD, particularly when the nasal retinal nerve fiber layer and SHYPS thickness at the 2.0-mm radius are greater than 86 microm and 127 microm, respectively."}
{"_id": "2b02c8cfbe38dd1543c251022253ddc0b818cc34", "title": "Trajectory planning and tracking of ball and plate system using hierarchical fuzzy control scheme", "text": "Ball and plate system is the extension of traditional ball and beam system. In this paper, a trajectory planning and tracking problem of ball and plate system is put forward to proof-test diverse control schemes. Firstly, we derive the mathematical model of the ball and plate system in detail. Then a hierarchical fuzzy control scheme is proposed to deal with this problem. Our scheme is composed of three levels. The lowest level is a TS type fuzzy tacking controller; the middle level is a fuzzy supervision controller that takes actions in extreme situations; and the top level is a fuzzy planning controller that determines the desired trajectory. In order to optimize the ball\u2019s moving trajectory, the genetic algorithm is introduced to adjust the output membership functions of the fuzzy planning controller. Simulation results have shown that the hierarchical fuzzy control scheme can control the ball from a point to another without hitting the obstacles and in the least time. c \u00a9 2003 Elsevier B.V. All rights reserved."}
{"_id": "5e25309f3f583f6ce606128b99659dc8400b25d0", "title": "Anomaly detection and monitoring in Internet of Things communication", "text": "The Internet of Things (IoT) presents unique challenges in detecting anomaly and monitoring all connected devices in a network. Moreover, one of the objectives of anonymity in communication is to protect the data traffic of devices. The summary status visualization is indispensable to depict all devices/sensors that are most indicative of a pending failure and a predictive power/energy. Thus, this paper proposes a multi-platform monitoring and anomaly detection system that supports heterogeneous devices. The proposed system addresses the problems of: (i) how to monitor the network to prevent device failures and (ii) how to design a comprehensive feature for the early anomaly detection of IoT communication."}
{"_id": "c55cf8ec010075318fbe893280a29e8d14dfc0bd", "title": "Fractional Order ControlA Tutorial", "text": "Many real dynamic systems are better characterized using a non-integer order dynamic model based on fractional calculus or, differentiation or integration of noninteger order. Traditional calculus is based on integer order differentiation and integration. The concept of fractional calculus has tremendous potential to change the way we see, model, and control the nature around us. Denying fractional derivatives is like saying that zero, fractional, or irrational numbers do not exist. In this paper, we offer a tutorial on fractional calculus in controls. Basic definitions of fractional calculus, fractional order dynamic systems and controls are presented first. Then, fractional order PID controllers are introduced which may make fractional order controllers ubiquitous in industry. Additionally, several typical known fractional order controllers are introduced and commented. Numerical methods for simulating fractional order systems are given in detail so that a beginner can get started quickly. Discretization techniques for fractional order operators are introduced in some details too. Both digital and analog realization methods of fractional order operators are introduced. Finally, remarks on future research efforts in fractional order control are given."}
{"_id": "85db2d575c43e154a4f29a3062ed410b202043b5", "title": "Machine Learning Approaches for Catchphrase Extraction in Legal Documents", "text": "The purpose of this research was to automatically extract catchphrases given a set of Legal documents. For this task, our focus was mainly on the Machine learning approaches: a comparative approach was used between the unsupervised and supervised approaches. The idea was to compare the different approaches to see which one of the two was comparatively better for automatic catchphrase extraction given a dataset of Legal documents. To perform this, two open source text mining software were used; one for the unsupervised approach while another one was used for the supervised approach. We then fine tuned some parameters for each tool before extracting catchphrases. The training dataset was used when fine tuning parameters in order to find optimal parameters that were then used for generating the final catchphrases. Different metrics were used to evaluate the results. We used the most common measures in Information Extraction which include Precision and Recall and the results from the two Machine learning approaches were compared. In general our results showed that the supervised approach performed far much better than the unsupervised approach."}
{"_id": "2242ede94533652741db89db0a186d2ee61dfa72", "title": "Truthful incentives in crowdsourcing tasks using regret minimization mechanisms", "text": "What price should be offered to a worker for a task in an online labor market? How can one enable workers to express the amount they desire to receive for the task completion? Designing optimal pricing policies and determining the right monetary incentives is central to maximizing requester's utility and workers' profits. Yet, current crowdsourcing platforms only offer a limited capability to the requester in designing the pricing policies and often rules of thumb are used to price tasks. This limitation could result in inefficient use of the requester's budget or workers becoming disinterested in the task.\n In this paper, we address these questions and present mechanisms using the approach of regret minimization in online learning. We exploit a link between procurement auctions and multi-armed bandits to design mechanisms that are budget feasible, achieve near-optimal utility for the requester, are incentive compatible (truthful) for workers and make minimal assumptions about the distribution of workers' true costs. Our main contribution is a novel, no-regret posted price mechanism, BP-UCB, for budgeted procurement in stochastic online settings. We prove strong theoretical guarantees about our mechanism, and extensively evaluate it in simulations as well as on real data from the Mechanical Turk platform. Compared to the state of the art, our approach leads to a 180% increase in utility."}
{"_id": "ed581941ff272d18eb37bece2a2de7b9903fea3d", "title": "A hierarchical framework for evaluating simulation software", "text": "In simulation software selection problems, packages are evaluated either on their own merits or in comparison with other packages. In either method, a comprehensive list of criteria for evaluation of simulation software is essential for proper selection. Although various simulation software evaluation checklists do exist, there are di\u0080erences in the lists provided and the terminologies used. This paper presents a hierarchical framework for simulation software evaluation consisting of seven main groups and several subgroups. An explanation for each criterion is provided and an analysis of the usability of the proposed framework is further discussed. \u00d3 1999 Elsevier Science B.V. All rights reserved."}
{"_id": "5991fee5265df4466627ebba62e545a242d9e22d", "title": "Applying Deep Learning to Enhance Momentum Trading Strategies in Stocks", "text": "We use an autoencoder composed of stacked restricted Boltzmann machines to extract features from the history of individual stock prices. Our model is able to discover an enhanced version of the momentum effect in stocks without extensive hand-engineering of input features and deliver an annualized return of 45.93% over the 1990-2009 test period versus 10.53% for basic momentum."}
{"_id": "e00151237f247900f6d32525882a870ef4db5d65", "title": "A New Method for Optimization of Dynamic Ride Sharing System", "text": "Dynamic ridesharing is a profitable way to reduce traffic and carbon emissions by providing an opportunity for a flexible and affordable service that utilizes vehicle seating space. Matching of ride seeker requests with the rides, distributed over the roads is a tedious work. While fulfilling the request of all passengers, the total travel distance of the trip may get increased. Therefore, this article proposes optimal dynamic ridesharing system which matches rides and requests in real time by satisfying multiple participant constraints (e.g. time bounds, availability of empty seat, maximum allowed deviation distance and minimized route ride) to minimize the total travel distance. To efficiently match ride givers and riders we are proposing a novel dynamic ride matching algorithm MRB (Minimal route bisearching algorithm) considering all above mentioned constraints. We demonstrate working of our algorithm by developing a prototype and evaluated our system on GPS (Global positioning system) trajectories of Lahore city dataset. Evaluated results are compared with existing algorithms which shows that our system significantly reduces the travel distance and computation cost in comparison with other recent ride searching methods to maximize efficiency."}
{"_id": "326132546d6b4bd9e1a8237378b37b96cd47e51a", "title": "On the Effectiveness of Ambient Sensing for Detecting NFC Relay Attacks", "text": "Smartphones with Near-Field Communication (NFC) may emulate contactless smart cards, which has resulted in the deployment of various access control, transportation and payment services, such as Google Pay and Apple Pay. Like contactless cards, however, NFC-based smartphone transactions are susceptible to relay attacks, and ambient sensing has been suggested as a potential countermeasure. In this study, we empirically evaluate the suitability of ambient sensors as a proximity detection mechanism for smartphone-based transactions under EMV constraints. We underpin our study using sensing data collected from 17 sensors from an emulated relay attack test-bed to assess whether they can thwart such attacks effectively. Each sensor, where feasible, was used to record 350-400 legitimate and relay (illegitimate) contactless transactions at two different physical locations. Our analysis provides an empirical foundation upon which to determine the efficacy of ambient sensing for providing a strong anti-relay mechanism in security-sensitive applications. We demonstrate that no single, evaluated mobile ambient sensor is suitable for such critical applications under realistic deployment constraints."}
{"_id": "d6cc46d8da91ded74ff31785000edc9ca8d67e23", "title": "Compact 4-element MIMO antenna with isolation enhancement for 4G LTE terminals", "text": "In this work, a wide-band, planar, printed inverted-F antenna (PIFA) is proposed with multiple-input-multiple-output (MIMO) antenna configuration. The MIMO antenna system consists of 4-elements operating at 2.1 GHz frequency band for 4G LTE applications. The proposed design is compact, low profile and suitable for wireless handheld devices. The MIMO antenna is fabricated on commercially available FR4 substrate with \u03b5r equal to 4.4. The dimensions of single element are 26\u00d76 mm2 with board volume equal to 100\u00d760\u00d70.8 mm3. Isolation is improved by 5 dB in the proposed design using ground slots. Characteristics mode analysis (CMA) is used to analyze the behaviour of the antenna system."}
{"_id": "2c13578a6a9257a048121bd8238b140e3350657e", "title": "LIMERIC: a linear message rate control algorithm for vehicular DSRC systems", "text": "Wireless vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication holds great promise for significantly reducing the human and financial costs of vehicle collisions. A common characteristic of this communication is the broadcast of a device's core state information at regular intervals (e.g. a vehicle's speed and location, or a traffic signal's state and timing). The aggregate of these uncoordinated broadcasts will lead to channel congestion under dense traffic scenarios, with a resulting reduction in the effectiveness of the collision avoidance applications making use of the transmitted information. Active congestion control using distributed techniques is a topic of great interest for establishing the scalability of this technology for deployment. This paper defines a new congestion control algorithm that can be applied to the message rate of devices in this vehicular environment. While other published approaches rely on binary control, the LInear MEssage Rate Integrated Control (LIMERIC) algorithm takes advantage of full precision control inputs that are available on the wireless channel. The result is provable convergence to fair and efficient channel utilization in the deterministic environment, under simple criteria for setting adaptive parameters. This \"perfect\" convergence avoids the limit cycle behavior inherent to binary control. We also discuss several practical aspects associated with implementing LIMERIC, including: guidelines for the choice of system parameters to obtain desired utilization outcomes, a gain saturation technique that maintains robust stability under all conditions, convergence with asynchronous updates, and the implications of measurement noise for statistical properties of convergence. The paper illustrates key analytical results using MATLAB numerical results, and employs standard NS-2 simulations to demonstrate the performance of LIMERIC in several high density scenarios."}
{"_id": "8bb53e8e02e7e3c17028d4e9b83b091ca0781490", "title": "Measuring cultural awareness in nursing students.", "text": "Recognizing the need for a valid and reliable way to measure outcomes of a program to promote multicultural awareness among nursing faculty and students, the authors developed a cultural awareness scale. In the first phase of the study, a scale consisting of 37 items was generated from a literature review on cultural awareness, sensitivity, and competence in nursing. A Cronbach's alpha reliability coefficient of .91 was obtained from a sample of 72 student nurses. In the second phase, the items were presented to a panel of experts in nursing and culture to determine content validity. A content validity index of .88 was calculated, and the total number of items on the scale was reduced to 36. The scale then was administered to 118 nursing students. Data from the two samples then were combined, and factor analysis was conducted to support construct validity. Cronbach's alpha for the combined samples was .82."}
{"_id": "bff6bdff8f79702cc65e0811a4fab33a13379aee", "title": "CORPORATE SOCIAL RESPONSIBILITY IN EMERGING MARKETS", "text": "Purpose: This paper analyses CSR initiatives in emerging markets from developed-countriesbased multinational companies (MNCs). Methods: The analysis is based on eight case studies with data collected through in-depth interviews with senior managers of the companies representing 85% of the Spanish foreign investments in Latin America. Results: The results show that CSR initiatives from these companies seem to be guided by Instrumental Theories as they use these initiatives as a strategic tool to achieve economic objectives, seek a positive relation between them and their financial performance, and use them to strengthen their reputation. Conclusions: The findings tend to indicate that Instrumental Theories of CSR seem to apply for Western MNCs operating in emerging markets."}
{"_id": "42ff8359321613d209e38f30c90361b4935f3f36", "title": "Dynamics of projective adaptive resonance theory model: the foundation of PART algorithm", "text": "Projective adaptive resonance theory (PART) neural network developed by Cao and Wu recently has been shown to be very effective in clustering data sets in high dimensional spaces. The PART algorithm is based on the assumptions that the model equations of PART (a large scale and singularly perturbed system of differential equations coupled with a reset mechanism) have quite regular computational performance. This paper provides a rigorous proof of these regular dynamics of the PART model when the signal functions are special step functions, and provides additional simulation results to illustrate the computational performance of PART."}
{"_id": "57e2a179c15fa8fd62d6f6c7cb45186c5838d372", "title": "OpenHSM: An Open Key Life Cycle Protocol for Public Key Infrastructure's Hardware Security Modules", "text": "The private keys used in a PKI are its most important asset. Protect these keys from unauthorised use or disclosure is essential to secure a PKI. Relying parties need assurances that the private key used to sign their certificates is controlled and managed following pre-defined statement policy. Hardware Security Modules (HSM) offer physical and logical protection and should be considered for any PKI deployment. The software that manages keys inside an HSM should control all life cycle of a private key. Normally this kind of equipment implements a embedded key management protocol and this protocols are not available to public scrutiny due to industrial interests. Other important issue is that HSMs are targeted in their development to the Bank industry and not to PKI, making some important PKI issues, like, strict key usage control and a secure auditing trail, play a secondary role. This paper presents an open protocol to securely manage private keys inside HSMs. The protocol is described, analysed and discussed."}
{"_id": "2ce0fc69327726b577854d1cb0fdca03b92bbeed", "title": "Knowledge Begets Knowledge: Steps towards Assisted Knowledge Acquisition in Cyc", "text": "The Cyc project is predicated on the idea that, in order to be effective and flexible, computer software must have an understanding of the context in which its tasks are performed. We believe this context is what is known informally as \u201ccommon sense.\u201d Over the last twenty years, sufficient common sense knowledge has been entered into Cyc to allow it to more effectively and flexibly support an important task: increasing its own store of world knowledge. In this paper, we describe the Cyc knowledge base and inference system, enumerate the means that it provides for knowledge elicitation, including some means suitable for use by untrained or lightly trained volunteers, review some ways in which we expect to have Cyc assist in verifying and validating collected knowledge, and describe how we expect the knowledge acquisition process to accelerate in the future."}
{"_id": "c064b8dff03dcefdbb54e8d50264954fe018db09", "title": "Trends in computer science research", "text": "Keywords in the ACM Digital Library and IEEE Xplore digital library and in NSF grants anticipate future CS research."}
{"_id": "577921d236feabd239113305e8fed707036bf5a8", "title": "Spinal and supraspinal factors in human muscle fatigue.", "text": "Muscle fatigue is an exercise-induced reduction in maximal voluntary muscle force. It may arise not only because of peripheral changes at the level of the muscle, but also because the central nervous system fails to drive the motoneurons adequately. Evidence for \"central\" fatigue and the neural mechanisms underlying it are reviewed, together with its terminology and the methods used to reveal it. Much data suggest that voluntary activation of human motoneurons and muscle fibers is suboptimal and thus maximal voluntary force is commonly less than true maximal force. Hence, maximal voluntary strength can often be below true maximal muscle force. The technique of twitch interpolation has helped to reveal the changes in drive to motoneurons during fatigue. Voluntary activation usually diminishes during maximal voluntary isometric tasks, that is central fatigue develops, and motor unit firing rates decline. Transcranial magnetic stimulation over the motor cortex during fatiguing exercise has revealed focal changes in cortical excitability and inhibitability based on electromyographic (EMG) recordings, and a decline in supraspinal \"drive\" based on force recordings. Some of the changes in motor cortical behavior can be dissociated from the development of this \"supraspinal\" fatigue. Central changes also occur at a spinal level due to the altered input from muscle spindle, tendon organ, and group III and IV muscle afferents innervating the fatiguing muscle. Some intrinsic adaptive properties of the motoneurons help to minimize fatigue. A number of other central changes occur during fatigue and affect, for example, proprioception, tremor, and postural control. Human muscle fatigue does not simply reside in the muscle."}
{"_id": "048e82ac9c88c458a50cc6289662e9cb2ecb4fc9", "title": "A Multiple Instance Learning Framework for Identifying Key Sentences and Detecting Events", "text": "State-of-the-art event encoding approaches rely on sentence or phrase level labeling, which are both time consuming and infeasible to extend to large scale text corpora and emerging domains. Using a multiple instance learning approach, we take advantage of the fact that while labels at the sentence level are difficult to obtain, they are relatively easy to gather at the document level. This enables us to view the problems of event detection and extraction in a unified manner. Using distributed representations of text, we develop a multiple instance formulation that simultaneously classifies news articles and extracts sentences indicative of events without any engineered features. We evaluate our model in its ability to detect news articles about civil unrest events (from Spanish text) across ten Latin American countries and identify the key sentences pertaining to these events. Our model, trained without annotated sentence labels, yields performance that is competitive with selected state-of-the-art models for event detection and sentence identification. Additionally, qualitative experimental results show that the extracted event-related sentences are informative and enhance various downstream applications such as article summarization, visualization, and event encoding."}
{"_id": "3ba6dbd4b8dd913be895118601760772f9b18c2d", "title": "LTE for vehicular networking: a survey", "text": "A wide variety of applications for road safety and traffic efficiency are intended to answer the urgent call for smarter, greener, and safer mobility. Although IEEE 802.11p is considered the de facto standard for on-the-road communications, stakeholders have recently started to investigate the usability of LTE to support vehicular applications. In this article, related work and running standardization activities are scanned and critically discussed; strengths and weaknesses of LTE as an enabling technology for vehicular communications are analyzed; and open issues and critical design choices are highlighted to serve as guidelines for future research in this hot topic."}
{"_id": "45e68bcc151cada00809bcf99bf8928f832aa806", "title": "3GPP LTE Versus IEEE 802.11p/WAVE: Which Technology is Able to Support Cooperative Vehicular Safety Applications?", "text": "The concept of vehicular ad-hoc networks enables the design of emergent automotive safety applications, which are based on the awareness among vehicles. Recently, a suite of 802.11p/WAVE protocols aimed at supporting car-to-car communications was approved by IEEE. Existing cellular infrastructure and, above all 3GPP LTE, is being considered as another communication technology appropriate for vehicular applications. This letter provides a theoretical framework which compares the basic patterns of both the technologies in the context of safety-of-life vehicular scenarios. We present mathematical models for the evaluation of the considered protocols in terms of successful beacon delivery probability."}
{"_id": "51f2804e45e9135006923a18da75b6a937335c31", "title": "A Survey on Device-to-Device Communication in Cellular Networks", "text": "Device-to-device (D2D) communications was initially proposed in cellular networks as a new paradigm for enhancing network performance. The emergence of new applications such as content distribution and location-aware advertisement introduced new user cases for D2D communications in cellular networks. The initial studies showed that D2D communications has advantages such as increased spectral efficiency and reduced communication delay. However, this communication mode introduces complications in terms of interference control overhead and protocols that are still open research problems. The feasibility of D2D communications in Long-Term Evolution Advanced is being studied by academia, industry, and standardization bodies. To date, there are more than 100 papers available on D2D communications in cellular networks, but there is no survey on this field. In this paper, we provide a taxonomy based on the D2D communicating spectrum and review the available literature extensively under the proposed taxonomy. Moreover, we provide new insights into the over-explored and under-explored areas that lead us to identify open research problems of D2D communications in cellular networks."}
{"_id": "6388a1a8789c2fa7664471476afcc7b9473694fc", "title": "D2D for Intelligent Transportation Systems: A Feasibility Study", "text": "Intelligent transportation systems (ITS) are becoming a crucial component of our society, whereas reliable and efficient vehicular communications consist of a key enabler of a well-functioning ITS. To meet a wide variety of ITS application needs, vehicular-to-vehicular and vehicular-to-infrastructure communications have to be jointly considered, configured, and optimized. The effective and efficient coexistence and cooperation of the two give rise to a dynamic spectrum management problem. One recently emerged and rapidly adopted solution of a similar problem in cellular networks is the so-termed device-to-device (D2D) communications. Its potential in the vehicular scenarios with unique challenges, however, has not been thoroughly investigated to date. In this paper, we for the first time carry out a feasibility study of D2D for ITS based on both the features of D2D and the nature of vehicular networks. In addition to demonstrating the promising potential of this technology, we will also propose novel remedies necessary to make D2D technology practical as well as beneficial for ITS."}
{"_id": "9e5b3e79483b496156cd94d5731a83cc0f5d8ebc", "title": "Vehicular communication systems: Enabling technologies, applications, and future outlook on intelligent transportation", "text": "Numerous technologies have been deployed to assist and manage transportation. But recent concerted efforts in academia and industry point to a paradigm shift in intelligent transportation systems. Vehicles will carry computing and communication platforms, and will have enhanced sensing capabilities. They will enable new versatile systems that enhance transportation safety and efficiency and will provide infotainment. This article surveys the state-of-the-art approaches, solutions, and technologies across a broad range of projects for vehicular communication systems."}
{"_id": "2a06f64857023c92c33c98682dc6b79bddf2b990", "title": "Inferring Painting Style with Multi-Task Dictionary Learning", "text": "Recent advances in imaging and multimedia technologies have paved the way for automatic analysis of visual art. Despite notable attempts, extracting relevant patterns from paintings is still a challenging task. Different painters, born in different periods and places, have been influenced by different schools of arts. However, each individual artist also has a unique signature, which is hard to detect with algorithms and objective features. In this paper we propose a novel dictionary learning approach to automatically uncover the artistic style from paintings. Specifically, we present a multi-task learning algorithm to learn a style-specific dictionary representation. Intuitively, our approach, by automatically decoupling style-specific and artist-specific patterns, is expected to be more accurate for retrieval and recognition tasks than generic methods. To demonstrate the effectiveness of our approach, we introduce the DART dataset, containing more than 1.5K images of paintings representative of different styles. Our extensive experimental evaluation shows that our approach significantly outperforms state-of-the-art methods."}
{"_id": "7227252f30c6fcb9c943f67a6fa93997aaece172", "title": "Image Based Camera Localization: an Overview", "text": "Virtual reality, augmented reality, robotics, and autonomous driving, have recently attracted much attention from both academic and industrial communities, in which image-based camera localization is a key task. However, there has not been a complete review on image-based camera localization. It is urgent to map this topic to enable individuals enter the field quickly. In this paper, an overview of image-based camera localization is presented. A new and complete classification of image-based camera localization approaches is provided and the related techniques are introduced. Trends for future development are also discussed. This will be useful not only to researchers, but also to engineers and other individuals interested in this field."}
{"_id": "a72c5cc3075d02f50d9fafcc51f202f168f1dd58", "title": "The peculiarities of knowledge management processes in SMEs: the case of Singapore", "text": "Purpose \u2013 The objectives of this study are two-fold. The first is to examine the peculiarities of KM processes that are unique in SMEs from three perspectives, namely knowledge creation, knowledge sharing and knowledge reuse. Secondly, to identify enablers and impediments of these KM processes that influence the competitiveness of SMEs. Design/methodology/approach \u2013 The study adopted a case study approach involving 21 participants comprising management staff and front-line employees from four Singaporean SMEs. Findings \u2013 The SME owner, rather than the employees, was found to be the key source and creator of knowledge and the sole driver of the KM processes. In SMEs, knowledge creation takes the form of innovative customized solutions to meet customers\u2019 needs; knowledge sharing occurs through cross functionality, overlapping roles, and facilitated by close physical proximity in open workspaces; and knowledge reuse is often made tacitly, where common knowledge is prevalently embedded within the KM processes of SMEs. The enablers of knowledge creation process rested largely on the owner\u2019s innovativeness, creativity and ability to acquire knowledge of the industry. Knowledge sharing processes are enabled by the awareness of roles, mutual respect and the level of trust among employees within the SME while knowledge reuse is fostered by close proximity of employees and the willingness and openness of the owner to impart his knowledge. The lack of the above enablement factors mentioned will hinder these KM processes. Research limitations/implications \u2013 The study is limited by the fact that data was collected from four SMEs in Singapore. Furthermore, only a small sample of staff from these SMEs was interviewed. Hence the findings need to be interpreted in light of such a scope. Practical implications \u2013 For SMEs, this research provides perspectives on the factors influencing KM processes, in particular, the importance of the owners\u2019 knowledge and leadership, the flexibility and adaptability of the organization, and open culture to enable the capitalization of its knowledge assets to survive and stay competitive. For practitioners, this paper reinforces the importance of the management owners\u2019 innovativeness, initiatives and support, and the level of social interaction and high level of trusts among employees in the SMEs to as enablers to effective KM processes in SMEs. Originality/value \u2013 To deepen on-going knowledge management research on SMEs, this paper provides insights and rich context to the distinctness of KM processes in SMEs."}
{"_id": "1391d7b9739db12ff3bb23659430df13d71a44fc", "title": "An Analytical Solution to the Stance Dynamics of Passive Spring-Loaded Inverted Pendulum with Damping", "text": "The Spring-Loaded Inverted Pendulum (SLIP) model has been established both as a very accurate descriptive tool as well as a good basis for the design and control of running robots. In particular, approximate analytic solutions to the otherwise nonintegrable dynamics of this model provide principled ways in which gait controllers can be built, yielding invaluable insight into their stability properties. However, most existing work on the SLIP model completely disregards the effects of damping, which often cannot be neglected for physical robot platforms. In this paper, we introduce a new approximate analytical solution to the dynamics of this system that also takes into account viscous damping in the leg. We compare both the predictive performance of our approximation as well as the tracking performance of an associated deadbeat gait controller to similar existing methods in the literature and show that it significantly outperforms them in the presence of damping in the leg."}
{"_id": "e940ba0e50107359993402541932e0145603bb70", "title": "Smart foot device for women safety", "text": "In this paper, an attempt has been made to develop a smart device that can assist women when they feel unsafe. This smart device will be clipped to the footwear of the user and can be triggered discreetly. On tapping one foot behind the other four times, an alert is sent via Bluetooth Low Energy communication to an application on the victim's phone, programmed to generate a message seeking help with the location of the device attached. The results obtained were analysed using Nai\u0308ve Bayes classifier and this low cost device showed an overall accuracy of 97.5%."}
{"_id": "2586dd5514cb203f42292f25238f1537ea5e4b8c", "title": "Informal social communication.", "text": ""}
{"_id": "e0893701ac08f8a949e61234364004e901793b3d", "title": "Stuttering in relation to anxiety, temperament, and personality: review and analysis with focus on causality.", "text": "UNLABELLED\nAnxiety and emotional reactions have a central role in many theories of stuttering, for example that persons who stutter would tend to have an emotionally sensitive temperament. The possible relation between stuttering and certain traits of temperament or personality were reviewed and analyzed, with focus on temporal relations (i.e., what comes first). It was consistently found that preschool children who stutter (as a group) do not show any tendencies toward elevated temperamental traits of shyness or social anxiety compared with children who do not stutter. Significant group differences were, however, repeatedly reported for traits associated with inattention and hyperactivity/impulsivity, which is likely to reflect a subgroup of children who stutter. Available data is not consistent with the proposal that the risk for persistent stuttering is increased by an emotionally reactive temperament in children who stutter. Speech-related social anxiety develops in many cases of stuttering, before adulthood. Reduction of social anxiety in adults who stutter does not in itself appear to result in significant improvement of speech fluency. Studies have not revealed any relation between the severity of the motor symptoms of stuttering and temperamental traits. It is proposed that situational variability of stuttering, related to social complexity, is an effect of interference from social cognition and not directly from the emotions of social anxiety. In summary, the studies in this review provide strong evidence that persons who stutter are not characterized by constitutional traits of anxiety or similar constructs.\n\n\nEDUCATIONAL OBJECTIVES\nThis paper provides a review and analysis of studies of anxiety, temperament, and personality, organized with the objective to clarify cause and effect relations. Readers will be able to (a) understand the importance of effect size and distribution of data for interpretation of group differences; (b) understand the role of temporal relations for interpretation of cause and effect; (c) discuss the results of studies of anxiety, temperament and personality in relation to stuttering; and (d) discuss situational variations of stuttering and the possible role of social cognition."}
{"_id": "f20fbad0632fdd7092529907230f69801c382c0f", "title": "A quad 25Gb/s 270mW TIA in 0.13\u00b5m BiCMOS with <0.15dB crosstalk penalty", "text": "The push for 100Gb/s optical transport and beyond necessitates electronic components at higher speed and integration level in order to drive down cost, complexity and size of transceivers [1-2]. This requires parallel multi-channel optical transceivers each operating at 25Gb/s and beyond. Due to variations in the output power of transmitters and in some cases different optical paths the parallel receivers have to operate at different input optical power levels. This trend places increasing strain to the acceptable inter-channel crosstalk in integrated multi-channel receivers [3]. Minimizing this cross-talk penalty when all channels are operational is becoming increasingly important in ultra-high throughput optical links."}
{"_id": "32d776a191dd6e51cc35c3a5566dd2fdb30aff8a", "title": "Learning from Multiple Partially Observed Views - an Application to Multilingual Text Categorization", "text": "We address the problem of learning classifiers when observat ions have multiple views, some of which may not be observed for all examples. We a ssume the existence of view generating functions which may complete t he missing views in an approximate way. This situation corresponds for examp le to learning text classifiers from multilingual collections where documents are not available in all languages. In that case, Machine Translation (MT) systems m ay be used to translate each document in the missing languages. We derive a gene ralization error bound for classifiers learned on examples with multiple arti ficially created views. Our result uncovers a trade-off between the size of the train ing set, the number of views, and the quality of the view generating functions. A s a consequence, we identify situations where it is more interesting to use mu ltiple views for learning instead of classical single view learning. An extension of this framework is a natural way to leverage unlabeled multi-view data in semisupervised learning. Experimental results on a subset of the Reuters RCV1/RCV2 co llecti ns support our findings by showing that additional views obtained from M T may significantly improve the classification performance in the cases identifi d by our trade-off."}
{"_id": "2e6381f85437ac703e7d1739d20525c55a4f2abc", "title": "Bilateral Back-Projection for Single Image Super Resolution", "text": "In this paper, a novel algorithm for single image super resolution is proposed. Back-projection [1] can minimize the reconstruction error with an efficient iterative procedure. Although it can produce visually appealing result, this method suffers from the chessboard effect and ringing effect, especially along strong edges. The underlining reason is that there is no edge guidance in the error correction process. Bilateral filtering can achieve edge-preserving image smoothing by adding the extra information from the feature domain. The basic idea is to do the smoothing on the pixels which are nearby both in space domain and in feature domain. The proposed bilateral back-projection algorithm strives to integrate the bilateral filtering into the back-projection method. In our approach, the back-projection process can be guided by the edge information to avoid across-edge smoothing, thus the chessboard effect and ringing effect along image edges are removed. Promising results can be obtained by the proposed bilateral back-projection method efficiently."}
{"_id": "2e8ddf1924da822408925988570c11ad663f8874", "title": "Prediction of air temperature using Multi-layer perceptrons with Levenberg-Marquardt training algorithm", "text": "related to the prediction of the air temperature in the region of Meknes in Morocco. Depending on weather parameters, such as atmospheric pressure (Pr), humidity (H), visibility (Vis), wind speed (V), dew point (Tr) and the precipitation (P). The database used contains a history of meteorological parameters covering 3288 days, from 2004 to 2012. This multilayer perceptron type of neuron (MLP) was used to approximate the relationship between these parameters with minimum square error. For a better air temperature prediction, we developed a stochastic neural model. The mean square error (MSE) and the correlation coefficient (R) were used to evaluate the performance of the developed models. The study of these statistical indicators demonstrated that the prediction of the air temperature was powerful with the Levenberg-Marquard algorithm, with architecture [610-1], the Tansig function in the hidden layer and the Purelin function in the output layer."}
{"_id": "25b7f1b06e60a39113ebc25049e13b58f84fec1b", "title": "FiberMesh: designing freeform surfaces with 3D curves", "text": "This paper presents a system for designing freeform surfaces with a collection of 3D curves. The user first creates a rough 3D model by using a sketching interface. Unlike previous sketching systems, the user-drawn strokes stay on the model surface and serve as handles for controlling the geometry. The user can add, remove, and deform these control curves easily, as if working with a 2D line drawing. The curves can have arbitrary topology; they need not be connected to each other. For a given set of curves, the system automatically constructs a smooth surface embedding by applying functional optimization. Our system provides real-time algorithms for both control curve deformation and the subsequent surface optimization. We show that one can create sophisticated models using this system, which have not yet been seen in previous sketching or functional optimization systems."}
{"_id": "46ebcc73d6f90f1b60869d212f3d0a80ef2a8673", "title": "A framework and theory for cyber security assessments", "text": "Information system security risk, defined as the product of the monetary losses associated with security incidents and the probability that they occur, is a suitable decision criterion when considering different information system architectures. This paper describes how probabilistic relational models can be used to specify architecture metamodels so that security risk can be inferred from metamodel-instantiations. A probabilistic relational model contains classes, attributes, and class-relationships. It can be used to specify architectural metamodels similar to class diagrams in the Unified Modeling Language. In addition, a probabilistic relational model makes it possible to associate a probabilistic dependency model to the attributes of classes in the architectural metamodel. This paper proposes a set of abstract classes that can be used to create probabilistic relational models so that they enable inference of security risk from instantiated architecture models. If an architecture metamodel is created by specializing the abstract classes proposed in this paper, the instantiations of the metamodel will generate a probabilistic dependency model that can be used to calculate the security risk associated with these instantiations. The abstract classes make it possible to derive the dependency model and calculate security risk from an instance model that only specifies assets and their relationships to each other. Hence, the person instantiating the architecture metamodel is not required to assess complex security attributes to quantify security risk using the instance model. Paper A: A probabilistic relational model for security risk analysis"}
{"_id": "6ee423ad2fdce31fcce5885fb35475d9299e272c", "title": "An interface for targeted collection of common sense knowledge using a mixture model", "text": "We present a game-based interface for acquiring common sense knowledge. In addition to being interactive and entertaining, our interface guides the knowledge acquisition process to learn about the most salient characteristics of a particular concept. We use statistical classification methods to discover the most informative characteristics in the Open Mind Common Sense knowledge base, and use these characteristics to play a game of 20 Questions with the user. Our interface also allows users to enter knowledge more quickly than a more traditional knowledge-acquisition interface. An evaluation showed that users enjoyed the game and that it increased the speed of knowledge acquisition."}
{"_id": "0df6ad955ff43a6aebec23a709e7b2bacec6b695", "title": "DISCO: Distributed multi-domain SDN controllers", "text": "Software-Defined Networking (SDN) is now envisioned for Wide Area Networks (WAN) and constrained overlay networks. Such networks require a resilient, scalable and easily extensible SDN control plane. In this paper, we propose DISCO, an extensible DIstributed SDN COntrol plane able to cope with the distributed and heterogeneous nature of modern overlay networks. A DISCO controller manages its own network domain and communicates with other controllers to provide end-to-end network services. This east-west communication is based on a lightweight and highly manageable control channel. We implemented DISCO on top of the Floodlight OpenFlow controller and the AMQP protocol and we evaluated it through an inter-domain topology disruption use case."}
{"_id": "aa11849871fe719f02c75051c2ff0f9bd3134de1", "title": "Folksonomies-Cooperative Classification and Communication Through Shared Metadata", "text": "This paper examines user-generated metadata as implemented and applied in two web services designed to share and organize digital media to better understand grassroots classification. Metadata data about data allows systems to collocate related information, and helps users find relevant information. The creation of metadata has generally been approached in two ways: professional creation and author creation. In libraries and other organizations, creating metadata, primarily in the form of catalog records, has traditionally been the domain of dedicated professionals working with complex, detailed rule sets and vocabularies. The primary problem with this approach is scalability and its impracticality for the vast amounts of content being produced and used, especially on the World Wide Web. The apparatus and tools built around professional cataloging systems are generally too complicated for anyone without specialized training and knowledge. A second approach is for metadata to be created by authors. The movement towards creator described documents was heralded by SGML, the WWW, and the Dublin Core Metadata Initiative. There are problems with this approach as well often due to inadequate or inaccurate description, or outright deception. This paper examines a third approach: user-created metadata, where users of the documents and media create metadata for their own individual use that is also shared throughout a community. 1 The Creation of Metadata: Professionals, Content Creators, Users Metadata is often characterized as \u201cdata about data.\u201d Metadata is information, often highly structured, about documents, books, articles, photographs, or other items that is designed to support specific functions. These functions are usually"}
{"_id": "89ac6035c01eb6234fe45601809e0097c4096d4a", "title": "Hemihyperplasia-multiple lipomatosis syndrome (HHML): a challenge in spinal care.", "text": "A 15-year-old girl developed a progressive paraparesis over a period of six months, secondary to spinal cord compression by a lipomatous mass and anomalies of the vertebral column. Clinically, a right hemihyperplasia affecting the trunk and lower limb was evident, as well as a right convex lumbar scoliosis. CT and MRI demonstrated severe spinal cord compression resulting from intraspinal lipomatosis, overgrowth of right facet joints (T8 to L5), and kyphoscoliosis. Surgical decompression was undertaken. A lumbar scoliosis of 48 degrees was partially corrected by means of dual-rod instrumentation. The neurological deficit improved significantly, and ambulation was progressively restored. The patient carried the diagnosis of Proteus syndrome for several years, but reevaluation of clinical features prompted the diagnosis of Hemihyperplasia Multiple Lipomatosis syndrome (HHML). This rare sporadic disorder is often confused with Proteus syndrome. As in Proteus syndrome, spinal cord compression in patients with HHML can result from lipomatous infiltration and/or significant spinal abnormalities including kyphoscoliosis and overgrowth. HHML and Proteus syndrome are discussed and compared with special emphasis on spinal and orthopaedic pathologies."}
{"_id": "430ddd5f2ed668e4c77b529607afa378453e11be", "title": "Reverse Word Order Models", "text": "In this work, we study the impact of the word order decoding direction for statistical machine translation (SMT). Both phrase-based and hierarchical phrasebased SMT systems are investigated by reversing the word order of the source and/or target language and comparing the translation results with the normal direction. Analysis are done on several components such as alignment model, language model and phrase table to see which of them accounts for the differences generated by various translation directions. Furthermore, we propose to use system combination, alignment combinations and phrase table combinations to take benefit from systems trained with different translation directions. Experimental results show improvements of up to 1.7 points in BLEU and 3.1 points in TER compared to the normal direction systems for the NTCIR9 Japanese-English and Chinese-English tasks."}
{"_id": "688909988657a21632ded9b2225997f22a664307", "title": "Groove Radio: A Bayesian Hierarchical Model for Personalized Playlist Generation", "text": "This paper describes an algorithm designed for Microsoft's Groove music service, which serves millions of users world wide. We consider the problem of automatically generating personalized music playlists based on queries containing a ``seed'' artist and the listener's user ID. Playlist generation may be informed by a number of information sources including: user specific listening patterns, domain knowledge encoded in a taxonomy, acoustic features of audio tracks, and overall popularity of tracks and artists. The importance assigned to each of these information sources may vary depending on the specific combination of user and seed~artist.\n The paper presents a method based on a variational Bayes solution for learning the parameters of a model containing a four-level hierarchy of global preferences, genres, sub-genres and artists. The proposed model further incorporates a personalization component for user-specific preferences. Empirical evaluations on both proprietary and public datasets demonstrate the effectiveness of the algorithm and showcase the contribution of each of its components."}
{"_id": "0fbec54268f444ee7d884c09a4819a94677b2734", "title": "Communication-optimal parallel algorithm for strassen's matrix multiplication", "text": "Parallel matrix multiplication is one of the most studied fundamental problems in distributed and high performance computing. We obtain a new parallel algorithm that is based on Strassen's fast matrix multiplication and minimizes communication. The algorithm outperforms all known parallel matrix multiplication algorithms, classical and Strassen-based, both asymptotically and in practice. A critical bottleneck in parallelizing Strassen's algorithm is the communication between the processors. Ballard, Demmel, Holtz, and Schwartz (SPAA '11) prove lower bounds on these communication costs, using expansion properties of the underlying computation graph. Our algorithm matches these lower bounds, and so is communication-optimal. It exhibits perfect strong scaling within the maximum possible range.\n Benchmarking our implementation on a Cray XT4, we obtain speedups over classical and Strassen-based algorithms ranging from 24% to 184% for a fixed matrix dimension n=94080, where the number of processors ranges from 49 to 7203.\n Our parallelization approach generalizes to other fast matrix multiplication algorithms."}
{"_id": "c9ce75202d567acb309f2f54b7c8e08070a35618", "title": "Road scene analysis for determination of road traffic density", "text": "Road traffic density has always been a concern in large cities around the world, and many approaches were developed to assist in solving congestions related to slow traffic flow. This work proposes a congestion rate estimation approach that relies on real-time video scenes of road traffic, and was implemented and evaluated on eight different hotspots covering 33 different urban roads. The approach relies on road scene morphology for estimation of vehicles average speed along with measuring the overall video scenes randomness acting as a frame texture analysis indicator. Experimental results shows the feasibility of the proposed approach in reliably estimating traffic density and in providing an early warning to drivers on road conditions, thereby mitigating the negative effect of slow traffic flow on their daily lives."}
{"_id": "4b9b7eed30feee37db3452b74503d0db9f163074", "title": "Recurrent Continuous Translation Models", "text": "We introduce a class of probabilistic continuous translation models called Recurrent Continuous Translation Models that are purely based on continuous representations for words, phrases and sentences and do not rely on alignments or phrasal translation units. The models have a generation and a conditioning aspect. The generation of the translation is modelled with a target Recurrent Language Model, whereas the conditioning on the source sentence is modelled with a Convolutional Sentence Model. Through various experiments, we show first that our models obtain a perplexity with respect to gold translations that is > 43% lower than that of stateof-the-art alignment-based translation models. Secondly, we show that they are remarkably sensitive to the word order, syntax, and meaning of the source sentence despite lacking alignments. Finally we show that they match a state-of-the-art system when rescoring n-best lists of translations."}
{"_id": "85aa1c6883d6b51171a80c57d82370207d4f1b89", "title": "The principles of software QRS detection", "text": "The QRS complex is the most striking waveform within the electrocardiogram (ECG). Since it reflects the electrical activity within the heart during the ventricular contraction, the time of its occurrence as well as its shape provide much information about the current state of the heart. Due to its characteristic shape it serves as the basis for the automated determination of the heart rate, as an entry point for classification schemes of the cardiac cycle, and often it is also used in ECG data compression algorithms. In that sense, QRS detection provides the fundamentals for almost all automated ECG analysis algorithms. Software QRS detection has been a research topic for more than 30 years. The evolution of these algorithms clearly reflects the great advances in computer technology. Within the last decade many new approaches to QRS detection have been proposed; for example, algorithms from the field of artificial neural networks genetic algorithms wavelet transforms, filter banks as well as heuristic methods mostly based on nonlinear transforms. The authors provide an overview of these recent developments as well as of formerly proposed algorithms."}
{"_id": "7a1e584f9a91472d6e15184f1648f57256216198", "title": "A Language and Program for Complex Bayesian Modelling", "text": "http://www.jstor.org A Language and Program for Complex Bayesian Modelling Author(s): W. R. Gilks, A. Thomas and D. J. Spiegelhalter Source: Journal of the Royal Statistical Society. Series D (The Statistician), Vol. 43, No. 1, Special Issue: Conference on Practical Bayesian Statistics, 1992 (3) (1994), pp. 169-177 Published by: for the Wiley Royal Statistical Society Stable URL: http://www.jstor.org/stable/2348941 Accessed: 19-08-2014 17:40 UTC"}
{"_id": "062ece9dd7019b0a3ca7e789acf1dee57571e26d", "title": "Statistical Methods in Psychology Journals Guidelines and Explanations", "text": "n the light of continuing debate over the applications of significance testing in psychology journals and following the publication of Cohen's (1994) article, the Board of Scientific Affairs (BSA) of the American Psychological Association (APA) convened a committee called the Task Force on Statistical Inference (TFSI) whose charge was \"to elucidate some of the controversial issues surrounding applications of statistics including significance testing and its alternatives; alternative underlying models and data transformation; and newer methods made possible by powerful computers\" (BSA, personal communication, February 28, 1996). Robert Rosenthal, Robert Abelson, and Jacob Cohen (cochairs) met initially and agreed on the desirability of having several types of specialists on the task force: statisticians, teachers of statistics, journal editors, authors of statistics books, computer experts, and wise elders. Nine individuals were subsequently invited to join and all agreed. These were Leona Aiken, Mark Appelbaum, Gwyneth Boodoo, David A. Kenny, Helena Kraemer, Donald Rubin, Bruce Thompson, Howard Wainer, and Leland Wilkinson. In addition, Lee Cronbach, Paul Meehl, Frederick Mosteller and John Tukey served as Senior Advisors to the Task Force and commented on written materials. The TFSI met twice in two years and corresponded throughout that period. After the first meeting, the task force circulated a preliminary report indicating its intention to examine issues beyond null hypothesis significance testing. The task force invited comments and used this feedback in the deliberations during its second meeting. After the second meeting, the task force recommended several possibilities for further action, chief of which would be to revise the statistical sections of the American Psychological Association Publication Manual (APA, 1994). After extensive discussion, the BSA recommended that \"before the TFSI undertook a revision of the APA Publication Manual, it might want to consider publishing an article in American Psychologist, as a way to initiate discussion in the field about changes in current practices of data analysis and reporting\" (BSA, personal communication, November 17, 1997). This report follows that request. The sections in italics are proposed guidelines that the TFSI recommends could be used for revising the APA publication manual or for developing other BSA supporting materials. Following each guideline are comments, explanations, or elaborations assembled by Leland Wilkinson for the task force and under its review. This report is concerned with the use of statistical methods only and is not meant as an assessment of research methods in general. Psychology is a broad science. Methods appropriate in one area may be inappropriate in another. The title and format of this report are adapted from a similar article by Bailar and Mosteller (1988). That article should be consulted, because it overlaps somewhat with this one and discusses some issues relevant to research in psychology. Further detail can also be found in the publications on this topic by several committee members (Abelson, 1995, 1997; Rosenthal, 1994; Thompson, 1996; Wainer, in press; see also articles in Harlow, Mulaik, & Steiger, 1997)."}
{"_id": "21e2150b6cc03bc6f51405473f57efff598c77bc", "title": "A generative theory of similarity", "text": "We argue that similarity judgments are inferences about generative processes, and that two objects appear similar when they are likely to have been generated by the same process. We describe a formal model based on this idea and show how featural and spatial models emerge as special cases. We compare our approach to the transformational approach, and present an experiment where our model performs better than a transformational model. Every object is the outcome of a generative process. An animal grows from a fertilized egg into an adult, a city develops from a settlement into a metropolis, and an artifact is assembled from a pile of raw materials according to the plan of its designer. Observations like these motivate the generative approach, which proposes that an object may be understood by thinking about the process that generated it. The promise of the approach is that apparently complex objects may be produced by simple processes, an insight that has proved productive across disciplines including biology [18], physics [21], and architecture [1]. To give two celebrated examples from biology, the shape of a pinecone and the markings on a cheetah\u2019s tail can be generated by remarkably simple processes of growth. These patterns can be characterized much more compactly by describing their causal history than by attempting to describe them directly. Leyton has argued that the generative approach provides a general framework for understanding cognition. Applications of the approach can be found in generative theories of perception [12], memory [12], language [3], categorization [2], and music [11]. This paper offers a generative theory of similarity, a notion often invoked by models of high-level cognition. We argue that two objects are similar to the extent that they seem to have been generated by the same underlying process. The literature on similarity covers settings that extend from the comparison of simple stimuli like tones and colored patches to the comparison of highly-structured objects like narratives. The generative approach is relevant to the entire spectrum of applications, but we are particularly interested in high-level similarity. In particular, we are interested in how similarity judgments draw on intuitive theories, or systems of rich conceptual knowledge [15]. Generative processes and theories are intimately linked. Murphy [14], for example, defines a theory as \u2018a set of causal relations that collectively generate or explain the phenomena in a domain.\u2019 We hope that our generative theory provides a framework in which to model how similarity judgments emerge from intuitive theories. We develop a formal theory of similarity and compare it to three existing theories. The featural account [20] suggests that the similarity of two objects is a function of their common and distinctive features, the spatial account suggests that similarity is inversely proportional to distance in a spatial representation, [19] and the transformation account suggests that similarity depends on the number of operations required to transform one object into the other [6]. We show that versions of each of these approaches emerge as special cases of our generative approach, and present an experiment that directly compares our approach with the transformation account. A fourth theory suggests that similarity relies on a process of analogical mapping [5]. 
We will not discuss this approach in detail, but finish by suggesting how a generative approach to analogy differs from the standard view. Generative processes and similarity Before describing our formal model, we give an informal motivation for a generative approach to similarity. Suppose we are shown a prototype object and asked to describe similar objects we might find in the world. There are two kinds of answers: small perturbations of the prototype, or objects produced by small perturbations of the process that generated the prototype. The second strategy is likely to be more successful than the first, since many perturbations of the prototype will not arise from any plausible generative process, and thus could never appear in practice. By construction, however, an object produced by a perturbation of an existing generative process will have a plausible causal history. To give a concrete example, suppose the prototype is a bug generated by a biological process of growth (Figure 1ii). The bug in i is a small perturbation of the prototype, but seems unlikely to arise since legs are generated in pairs. A perturbation of the generative process might produce a bug with more segments, such as the bug in iii. If we hope to find a bug that is similar but not identical to the prototype, iii is a better bet than i. A sceptic might argue that this one-shot learning problem can be solved by taking the intersection of the set of objects similar to the prototype and the set of obi) ii) Prototype iii) Figure 1: Three bugs. Which is more similar to the prototype \u2014 i or iii? jects that are likely to exist. The second set depends critically on generative processes, but the first set (and therefore the notion of similarity) need not. We think it more likely that the notion of similarity is ultimately grounded in the world, and that it evolved for the purpose of comparing real-world objects. If so, then knowledge about what kinds of objects are likely to exist may be deeply bound up with the notion of similarity. The one-shot learning problem is of practical importance, but is not the standard context in which similarity is discussed. More commonly, subjects are shown a pair of objects and asked to rate the similarity of the pair. Note that both objects are observed to exist and the previous argument does not apply. Yet generative processes are still important, since they help pick out the features critical for the similarity comparison. Suppose, for instance, that a forest-dweller discovers a nutritious mushroom. Which is more similar to the mushroom: a mushroom identical except for its size, or a mushroom identical except for its color? Knowing how mushrooms are formed suggests that size is not a key feature. Mushrooms grow from small to large, and the final size of a plant depends on factors like the amount of sunlight it received and the fertility of the soil that it grew in. Reflections like these suggest that the differently-sized mushroom should be judged more similar. A final reason why generative processes matter is that they are deeply related to essentialism. Medin and Ortony [13] note that \u2018surface features are frequently constrained by, and sometimes generated by, the deeper, more central parts of objects.\u2019 Even if we observe only the surface features of two objects, it may make sense to judge their similarity by comparing the deeper properties inferred to generate the surface features. 
Yet we can say more: just as surface features are generated by the essence of the object, the essence itself has a generative history. Surface features are often reliable guides to the essence of an object, but the object\u2019s causal history is a still more reliable indicator, if not a defining criterion of its essence. Keil [9] discusses the case of an animal that is born a skunk, then undergoes surgery that leaves it looking exactly like a raccoon. Since the animal is generated in the same way as a skunk (born of skunk parents), we conclude that it remains a skunk, no matter how it appears on the surface. These examples suggest that the generative approach may help to explain a broad class of theory-dependent inferences. We now present a formal model that attempts to capture the intuitions behind all of these cases. A computational theory of similarity Given a domain D, we develop a theory that specifies the similarity between any two samples from D. A sample from D will usually contain a single object, but working with similarities between sets of objects is useful for some applications. We formalize a generative process as a probability distribution over D that depends on parameter vector \u03b8. Suppose that s1 and s2 are samples from D. We consider two hypotheses: H1 holds that s1 and s2 are independent samples from a single generative process, and H2 holds that the samples are generated from two independently chosen processes. Similarity is defined as the probability that the objects are generated by the same process: that is, the relative posterior probability of H1 compared to H2: sim(s1, s2) = P (H1|s1, s2)"}
{"_id": "6377fee5214d9ace4ce629c9bfe463bdebbd889f", "title": "Explaining the Gibbs Sampler", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "87f8bcae68df7ba371baec5d0a2283ecb366b0fc", "title": "The Adaptive Nature of Human Categorization", "text": "A rational model of human categorization behavior is presented that assumes that categorization reflects the derivation of optimal estimates of the probability of unseen features of objects. A Bayesian analysis is performed of what optimal estimations would be if categories formed a disjoint partitioning of the object space and if features were independently displayed within a category. This Bayesian analysis is placed within an incremental categorization algorithm. The resulting rational model accounts for effects of central tendency of categories, effects of specific instances, learning of linearly nonseparable categories, effects of category labels, extraction of basic level categories, base-rate effects, probability matching in categorization, and trial-by-trial learning functions. Although the rational model considers just I level of categorization, it is shown how predictions can be enhanced by considering higher and lower levels. Considering prediction at the lower, individual level allows integration of this rational analysis of categorization with the earlier rational analysis of memory (Anderson & Milson, 1989)."}
{"_id": "322beb11ceeacf72dfc9df8df8cb045efb46d67f", "title": "Workflow mining: discovering process models from event logs", "text": "Contemporary workflow management systems are driven by explicit process models, i.e., a completely specified workflow design is required in order to enact a given workflow process. Creating a workflow design is a complicated time-consuming process and, typically, there are discrepancies between the actual workflow processes and the processes as perceived by the management. Therefore, we have developed techniques for discovering workflow models. The starting point for such techniques is a so-called \"workflow log\" containing information about the workflow process as it is actually being executed. We present a new algorithm to extract a process model from such a log and represent it in terms of a Petri net. However, we also demonstrate that it is not possible to discover arbitrary workflow processes. We explore a class of workflow processes that can be discovered. We show that the /spl alpha/-algorithm can successfully mine any workflow represented by a so-called SWF-net."}
{"_id": "370df20e88df29b9c62c33fff092b853cff1afc1", "title": "Yelp Restaurant Photo Classification", "text": "Everyday, millions of users upload numerous photos of different businesses to Yelp. Though most of them are tagged with simple information such as business names, the deep-level information typically are not revealed until looked by humans. Such deep-level information contains things like whether the restaurant is good for lunch or dinner, whether it has outdoor seating, etc. This information can be crucial in helping potential customers to determine whether its the ideal restaurant they want to dine in. Hand labeling these restaurants can be not only costly but also tedious since photos are uploaded almost every second to Yelp. In this project, we used machine learning techniques, specifically image classification methods, to identify the context and rich information of different photos on Yelp. We used three different models to train and test the given dataset. The dataset is provided by Yelp on Kaggle.com. In the training set, 234842 arbitrary-sized images submitted by Yelp users are correlated to one of 2000 Yelp businesses. The number of corresponding photos for each business ranges from 2 to 2974, with mean of 117. The image count distribution is shown by Fig1. Every business is tagged with multiple descriptive labels in 9 aspects. The labels includes: 0: good for lunch 1: good for dinner 2: takes reservations 3: outdoor seating 4: restaurant is expensive 5: has alcohol 6: has table service 7: ambience is classy 8: good for kids As an example, Fig2 illustrates the structure of the dataset. Our goal is to predict the labels of a business in these 9 aspects given its photos. A notable challenge is that the labels are not directly correlated to each image. Individual photos can possibly be indicative in one aspect, yet it is tagged with all the labels of the business. As a result, the dataset might exhibit high noise level. The fraction of each label across the training set is shown in Fig3. Though assumed independent, some label pairs exhibit high covariance, such as \u201dhas alcohol\u201d and \u201dhas table service\u201d (Fig4)."}
{"_id": "0f6863ca14fc33a87f03ac46051b39fe2541cdf1", "title": "Be Conservative: Enhancing Failure Diagnosis with Proactive Logging", "text": "When systems fail in the field, logged error or warning messages are frequently the only evidence available for assessing and diagnosing the underlying cause. Consequently, the efficacy of such logging\u2014how often and how well error causes can be determined via postmortem log messages\u2014is a matter of significant practical importance. However, there is little empirical data about how well existing logging practices work and how they can yet be improved. We describe a comprehensive study characterizing the efficacy of logging practices across five large and widely used software systems. Across 250 randomly sampled reported failures, we first identify that more than half of the failures could not be diagnosed well using existing log data. Surprisingly, we find that majority of these unreported failures are manifested via a common set of generic error patterns (e.g., system call return errors) that, if logged, can significantly ease the diagnosis of these unreported failure cases. We further mechanize this knowledge in a tool called Errlog , that proactively adds appropriate logging statements into source code while adding only 1.4% performance overhead. A controlled user study suggests that Errlog can reduce diagnosis time by 60.7%."}
{"_id": "34c986f74c9588c0380d2f0d7d3f84813bbb4a75", "title": "Applying Semantic Web Technologies for Diagnosing Road Traffic Congestions", "text": "Diagnosis, or the method to connect causes to its effects, is an important reasoning task for obtaining insight on cities and reaching the concept of sustainable and smarter cities that is envisioned nowadays. This paper, focusing on transportation and its road traffic, presents how road traffic congestions can be detected and diagnosed in quasi real-time. We adapt pure Artificial Intelligence diagnosis techniques to fully exploit knowledge which is captured through relevant semantics-augmented stream and static data from various domains. Our prototype of semantic-aware diagnosis of road traffic congestions, experimented in Dublin Ireland, works efficiently with large, heterogeneous information sources and delivers value-added services to citizens and city managers in quasi real-time."}
{"_id": "6a1f99f44290d67cda02976e24358dddce24a739", "title": "Analytic Performance Modeling and Optimization of Live VM Migration", "text": "Earlier virtual machine (VM) migration techniques consisted of stop-and-copy: the VM was stopped, its address space was copied to a different physical machine, and the VM was restarted at that machine. Recent VM hypervisors support live VM migration, which allows pages to be copied while the VM is running. If any copied page is dirtied (i.e., modified), it has to be copied again. The process stops when a fraction \u03b1 of the pages need to be copied. Then, the VM is stopped and the remaining pages are copied. This paper derives a model to compute the downtime, total number of pages copied, and network utilization due to VM migration, as a function of \u03b1 and other parameters under uniform and non-uniform dirtying rates. The paper also presents a non-linear optimization model to find the value of \u03b1 that minimizes the downtime subject to network utilization constraints."}
{"_id": "c6cc3d8f78ca0e2ac65063430c28a940d1982619", "title": "Trends, challenges and needs for lattice-based cryptography implementations: special session", "text": "Advances in computing steadily erode computer security at its foundation, calling for fundamental innovations to strengthen the weakening cryptographic primitives and security protocols. At the same time, the emergence of new computing paradigms, such as Cloud Computing and Internet of Everything, demand that innovations in security extend beyond their foundational aspects, to the actual design and deployment of these primitives and protocols while satisfying emerging design constraints such as latency, compactness, energy efficiency, and agility. While many alternatives have been proposed for symmetric key cryptography and related protocols (e.g., lightweight ciphers and authenticated encryption), the alternatives for public key cryptography are limited to post-quantum cryptography primitives and their protocols. In particular, lattice-based cryptography is a promising candidate, both in terms of foundational properties, as well as its application to traditional security problems such as key exchange, digital signature, and encryption/decryption. We summarize trends in lattice-based cryptographic schemes, some fundamental recent proposals for the use of lattices in computer security, challenges for their implementation in software and hardware, and emerging needs."}
{"_id": "35763cf6b9e75474f950872c6bc86b381d9f71b6", "title": "Interaction-Aware Topic Model for Microblog Conversations through Network Embedding and User Attention", "text": "Traditional topic models are insufficient for topic extraction in social media. The existing methods only consider text information or simultaneously model the posts and the static characteristics of social media. They ignore that one discusses diverse topics when dynamically interacting with different people. Moreover, people who talk about the same topic have different effects on the topic. In this paper, we propose an Interaction-Aware Topic Model (IATM) for microblog conversations by integrating network embedding and user attention. A conversation network linking users based on reposting and replying relationship is constructed to mine the dynamic user behaviours. We model dynamic interactions and user attention so as to learn interaction-aware edge embeddings with social context. Then they are incorporated into neural variational inference for generating the more consistent topics. The experiments on three real-world datasets show that our proposed model is effective."}
{"_id": "e3e24aa5420e6085338fc7cafeb3f653b4c08fb5", "title": "Dyslexia in English as a second language.", "text": "This study focused on English as L2 in a group of Norwegian dyslexic 12 year olds, compared to an age and gender matched control group. Norwegian school children learn English from the first grades on. The subjects were assessed with a test battery of verbal and written tasks. First, they were given a comprehension task; second, a model sentence task; third, two pragmatic tasks, and fourth, three tasks of literacy. The verbal tasks were scored according to comprehension, morphology, syntax and semantics, while the literacy tasks were scored by spelling, translation and reading skills. It was hypothesized that the results of the control group and the dyslexia group would differ on all tasks, but that subgrouping the dyslexia group by comprehension skills would show heterogeneity within the dyslexia group. The data analyses confirmed these hypotheses. Significant differences were seen between the dyslexia group and the control group. However, the subgrouping revealed minor differences between the control group and the subgroup with good comprehension skills, and major differences between the control group and the subgroup with poor comprehension skills. Especially morphology and spelling were difficult for the dyslexia group. The results were tentatively discussed within the framework of biological and cognitive models of how to interpret L2 performance in dyslexia, underlining the importance of further research in L2 acquisition in dyslexia."}
{"_id": "bfae193bb05fb4e6cf6002ea3dbdd8c4adf56c3e", "title": "Impact of Social Media on Social Anxiety: A Systematic Review", "text": "Introduction: Online social networking sites are being used all around the world. However, only recently researchers have started to investigate their relationship with mental health. Evidence coming from literature suggests that they have both advantages and disadvantages for individuals. The aim of this study is to critically review the existent research conducted on the relationship between online social networking and social anxiety. Method: According to PRISMA guidelines, comprehensive systematic searches of electronic databases were conducted (PsychInfo, Cochrane, PubMed, Scopus, Web of Science). Terms related to online social networking were combined with social anxiety terms. After identifying relevant papers, the relationship between social media networking and social anxiety as well as limitations of each study were presented. All of the papers included were cross-sectional, relying on self-report assessments. Conclusion: There are several papers published on the relationship between social media and social anxiety; however, more research needs to be done in order to clarify the influence of one variable over the other. Rigorous experimental studies need to be conducted with different time-point assessments, and bidirectionality should be investigated. Important mediators and moderators have to be considered in order to explain the relationship between social media networking and social anxiety."}
{"_id": "b4b3caf8e55aad64a31a29000ee990f93bf1755c", "title": "Self-organizing neural integration of pose-motion features for human action recognition", "text": "The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its outperforming ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically-motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. During the training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results for a public benchmark of domestic daily actions."}
{"_id": "c2a59efb41a486c1942085fd4c64318bbbfd94a9", "title": "Bag-of-Discriminative-Words (BoDW) Representation via Topic Modeling", "text": "Many of the words in a given document either deliver facts (objective) or express opinions ( subjective), respectively, depending on the topics they are involved in. For example, given a bunch of documents, the word \u201cbug\u201d assigned to the topic \u201corder Hemiptera\u201d apparently remarks one object (i.e., one kind of insects), while the same word assigned to the topic \u201csoftware \u201d probably conveys a negative opinion. Motivated by the intuitive assumption that different words have varying degrees of discriminative power in delivering the objective sense or the subjective sense with respect to their assigned topics, a model named as discriminatively objective- subjective LDA (dosLDA) is proposed in this paper. The essential idea underlying the proposed dosLDA is that a pair of objective and subjective selection variables are explicitly employed to encode the interplay between topics and discriminative power for the words in documents in a supervised manner. As a result, each document is appropriately represented as \u201cbag-of-discriminative-words\u201d (BoDW). The experiments reported on documents and images demonstrate that dosLDA not only performs competitively over traditional approaches in terms of topic modeling and document classification, but also has the ability to discern the discriminative power of each word in terms of its objective or subjective sense with respect to its assigned topic."}
{"_id": "bffec29496d8602e04b2ea1951fb442141547870", "title": "Forensic mycology: current perspectives", "text": "(unported, v3.0) License. The full terms of the License are available at http://creativecommons.org/licenses/by-nc/3.0/. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. Permissions beyond the scope of the License are administered by Dove Medical Press Limited. Information on how to request permission may be found at: http://www.dovepress.com/permissions.php Research and Reports in Forensic Medical Science 2015:5 75\u201383 Research and Reports in Forensic Medical Science Dovepress"}
{"_id": "3c7e9d70a7062ba8dd018f22cb8cef8182b09432", "title": "Guidelines for the process of cross-cultural adaptation of self-report measures.", "text": "With the increase in the number of multinational and multicultural research projects, the need to adapt health status measures for use in other than the source language has also grown rapidly. Most questionnaires were developed in English-speaking countries, but even within these countries, researchers must consider immigrant populations in studies of health, especially when their exclusion could lead to a systematic bias in studies of health care utilization or quality of life. The cross-cultural adaptation of a health status selfadministered questionnaire for use in a new country, culture, and/or language necessitates use of a unique method, to reach equivalence between the original source and target versions of the questionnaire. It is now recognized that if measures are to be used across cultures, the items must not only be translated well linguistically, but also must be adapted culturally to maintain the content validity of the instrument at a conceptual level across different cultures. Attention to this level of detail allows increased confidence that the impact of a disease or its treatment is described in a similar manner in multinational trials or outcome evaluations. The term \u201ccross-cultural adaptation\u201d is used to encompass a process that looks at both language (translation) and cultural adaptation issues in the process of preparing a questionnaire for use in another setting. Cross-cultural adaptations should be considered for several different scenarios. In some cases, this is more obvious than in others. Guillemin et al suggest five different examples of when attention should be paid to this adaptation by comparing the target (where it is going to be used) and source (where it was developed) language and culture. The first scenario is that it is to be used in the same language and culture in which it was developed. No adaptation is necessary. The last scenario is the opposite extreme, the application of a questionnaire in a different culture, language and country\u2014moving the Short Form 36-item questionnaire from the United States (source) to Japan (target) which would necessitate translation and cultural adaptation. The other scenarios are summarized in Table 1 and reflect situations when some translation and/or adaptation is needed. The guidelines described in this document are based on a review of cross-cultural adaptation in the medical, sociological, and psychological literature. This review led to the description of a thorough adaptation process designed to maximize the attainment of semantic, idiomatic, experiential, and conceptual equivalence between the source and target questionnaires.. Further experience in cross-cultural adaptation of generic and diseasespecific instruments and alternative strategies driven by different research groups have led to some refinements in methodology since the 1993 publication.. These guidelines serve as a template for the translation and cultural adaptation process. The process involves the adaptation of individual items, the instructions for the questionnaire, and the response options. The text in the next section outlines the methodology suggested (Stages I\u2013V). The subsequent section (Stage VI) presents a suggested appraisal process whereby an advisory committee or the developers review the process and determine whether this is an acceptable translation. 
Although such a committee or the developers may not be engaged in tracking translated versions of the instrument, this stage has been included in case there is a tracking system. Records of translated versions not only can save considerable time and effort (by using already available questionnaires) but also avoid erroneous comparisons of results across different translated versions. The process of cross-cultural adaptation tries to produce equivalency between source and target based on content. The assumption that is sometimes made is that this process will ensure retention of psychometric properties such as validity and reliability at an item and/or a scale level. However, this is not necessarily the case: for instance, if the new culture has a different way of approaching a task that makes it inherently more or less difficult compared with other items, it would change the validity, certainly in terms of item-level analyses (such as item response theory, similar to Rasch). Further tests should be conducted on the psychometric properties of the adapted questionnaire after the translation is complete. This will be discussed briefly at the end of the guidelines. In fact, the translation process outlined in this article is the first step in the three-step process adopted by the International Society for Quality of Life Assessment (IQOLA) project. The other two steps"}
{"_id": "773321cf179b499d81dd026c95703857bf993cb8", "title": "Data processing and semantics for advanced internet of things (IoT) applications: modeling, annotation, integration, and perception", "text": "This tutorial presents tools and techniques for effectively utilizing the Internet of Things (IoT) for building advanced applications, including the Physical-Cyber-Social (PCS) systems. The issues and challenges related to IoT, semantic data modelling, annotation, knowledge representation (e.g. modelling for constrained environments, complexity issues and time/location dependency of data), integration, analysis, and reasoning will be discussed. The tutorial will describe recent developments on creating annotation models and semantic description frameworks for IoT data (e.g. such as W3C Semantic Sensor Network ontology). A review of enabling technologies and common scenarios for IoT applications from the data and knowledge engineering point of view will be discussed. Information processing, reasoning, and knowledge extraction, along with existing solutions related to these topics will be presented. The tutorial summarizes state-of-the-art research and developments on PCS systems, IoT related ontology development, linked data, domain knowledge integration and management, querying large-scale IoT data, and AI applications for automated knowledge extraction from real world data."}
{"_id": "4c341889abc15b296c85dd5b4e1356c664f633f4", "title": "THE RELATIONSHIPS BETWEEN IT FLEXIBILITY, IT-BUSINESS STRATEGIC ALIGNMENT, AND IT CAPABILITY", "text": "What seems to still be the main concern for managers in the corporate world across the globe is ITbusiness strategic alignment. This study seeks to address the research problem about the lack of alignment between IT and business strategies. Upon reviewing various literature on this subject, it was found that IT flexibility is one of the most vital factors that help sustain strategic alignment. The researcher upon having a detailed discussion on the possible areas associated with the present body of knowledge has discovered gaps in the studies that have been undertaken on strategic alignment and IT flexibility. This is because IT capability in relation to IT flexibility and strategic alignment has been ignored in the previous studies. As a result, this research proposes a relationship between IT flexibility (i.e., modularity, connectivity and compatibility), IT capability, and strategic alignment."}
{"_id": "80b2dbefdef33858dc6c78b0df1f8179e6c6d77a", "title": "Deep Blind Image Inpainting", "text": "Image inpainting is a challenging problem as it needs to fill the information of the corrupted regions. Most of the existing inpainting algorithms assume that the positions of the corrupted regions are known. Different from the existing methods that usually make some assumptions on the corrupted regions, we present an efficient blind image inpainting algorithm to directly restore a clear image from a corrupted input. Our algorithm is motivated by the residual learning algorithm which aims to learn the missing information in corrupted regions. However, directly using existing residual learning algorithms in image restoration does not well solve this problem as little information is available in the corrupted regions. To solve this problem, we introduce an encoder and decoder architecture to capture more useful information and develop a robust loss function to deal with outliers. Our algorithm can predict the missing information in the corrupted regions, thus facilitating the clear image restoration. Both qualitative and quantitative experimental demonstrate that our algorithm can deal with the corrupted regions of arbitrary shapes and performs favorably against state-of-the-art methods."}
{"_id": "eeaafa443ef52f77eae3f25fe87ae72f968b0c4e", "title": "Localizing Ashkenazic Jews to Primeval Villages in the Ancient Iranian Lands of Ashkenaz", "text": "The Yiddish language is over 1,000 years old and incorporates German, Slavic, and Hebrew elements. The prevalent view claims Yiddish has a German origin, whereas the opposing view posits a Slavic origin with strong Iranian and weak Turkic substrata. One of the major difficulties in deciding between these hypotheses is the unknown geographical origin of Yiddish speaking Ashkenazic Jews (AJs). An analysis of 393 Ashkenazic, Iranian, and mountain Jews and over 600 non-Jewish genomes demonstrated that Greeks, Romans, Iranians, and Turks exhibit the highest genetic similarity with AJs. The Geographic Population Structure analysis localized most AJs along major primeval trade routes in northeastern Turkey adjacent to primeval villages with names that may be derived from \"Ashkenaz.\" Iranian and mountain Jews were localized along trade routes on the Turkey's eastern border. Loss of maternal haplogroups was evident in non-Yiddish speaking AJs. Our results suggest that AJs originated from a Slavo-Iranian confederation, which the Jews call \"Ashkenazic\" (i.e., \"Scythian\"), though these Jews probably spoke Persian and/or Ossete. This is compatible with linguistic evidence suggesting that Yiddish is a Slavic language created by Irano-Turko-Slavic Jewish merchants along the Silk Roads as a cryptic trade language, spoken only by its originators to gain an advantage in trade. Later, in the 9th century, Yiddish underwent relexification by adopting a new vocabulary that consists of a minority of German and Hebrew and a majority of newly coined Germanoid and Hebroid elements that replaced most of the original Eastern Slavic and Sorbian vocabularies, while keeping the original grammars intact."}
{"_id": "d5fbf239cb327a95624ca7b1b51d44ea8615bb7e", "title": "A 6.25Gbps Feed-forward Equalizer in 0.18\u03bcm CMOS Technology for SerDes", "text": "This paper presents a 6.25Gbps feed-forward equalizer (FFE) to reduce the inter-symbol-interference (ISI) in high-speed transmission backplane. The 3-tap fractionally spaced equalizer consists of a delay line, multiplier & summer cells and an output circuit. Active-inductive peaking circuit and capacitor-degenerated circuit are used in the delay line and the output stage respectively to meet the bandwidth demand. The proposed FFE has been implemented in TSMC 0.18\u03bcm CMOS technology with a whole area of 0.5mm\u00d70.51mm including I/O pads. The power consumption of the core circuit is 31.7mW under 1.8V power supply. Post simulation results show that the distorted signal through a 24-inch backplane is well recovered by this equalizer."}
{"_id": "5856571d1a267e97adcface00e7a45ef58ae238f", "title": "Spatiotemporal Sequential Influence Modeling for Location Recommendations: A Gravity-based Approach", "text": "Recommending to users personalized locations is an important feature of Location-Based Social Networks (LBSNs), which benefits users who wish to explore new places and businesses to discover potential customers. In LBSNs, social and geographical influences have been intensively used in location recommendations. However, human movement also exhibits spatiotemporal sequential patterns, but only a few current studies consider the spatiotemporal sequential influence of locations on users\u2019 check-in behaviors. In this article, we propose a new gravity model for location recommendations, called LORE, to exploit the spatiotemporal sequential influence on location recommendations. First, LORE extracts sequential patterns from historical check-in location sequences of all users as a Location-Location Transition Graph (L2TG), and utilizes the L2TG to predict the probability of a user visiting a new location through the developed additive Markov chain that considers the effect of all visited locations in the check-in history of the user on the new location. Furthermore, LORE applies our contrived gravity model to weigh the effect of each visited location on the new location derived from the personalized attractive force (i.e., the weight) between the visited location and the new location. The gravity model effectively integrates the spatiotemporal, social, and popularity influences by estimating a power-law distribution based on (i) the spatial distance and temporal difference between two consecutive check-in locations of the same user, (ii) the check-in frequency of social friends, and (iii) the popularity of locations from all users. Finally, we conduct a comprehensive performance evaluation for LORE using three large-scale real-world datasets collected from Foursquare, Gowalla, and Brightkite. Experimental results show that LORE achieves significantly superior location recommendations compared to other state-of-the-art location recommendation techniques."}
{"_id": "71f920d3a03479af5ced10f75d02bbd51fa06177", "title": "Orthogonal Coaxial Cavity Filters With Distributed Cross-Coupling", "text": "In this letter, six-pole filters are presented, in which the coaxial cavity resonators are arranged with 90 degree rotated spatial orientation. Cross-coupling between nonadjacent resonators will appear. In the new proposed filter topologies the cross-coupling elements are not arranged in separated triplet or quadruplet blocks. They interlace to form distributed cross-coupling. Synthesis of the distributed coupling matrix based on the symmetric canonical prototype is discussed. Two different bandpass filter topologies are presented. The first one has an inline arrangement of six orthogonal coaxial cavity resonators. Its distributed cross-coupling consists of four interlaced triplets, which generate two transmission zeros above the passband. The second filter design has a folded arrangement of two inline triplets with strong cross-coupling. An additional interlaced quadruplet pushes the resulting transmission zeros to the complex frequency domain. Measurement results of two corresponding filter prototypes confirm the predicted performance."}
{"_id": "e742d8d7cdbef9393af36495137088cc7ca4e5d5", "title": "A virtual reality intervention to improve the understanding and empathy for people with dementia in informal caregivers: results of a pilot study.", "text": "OBJECTIVE\nInformal caregivers often experience psychological distress due to the changing functioning of the person with dementia they care for. Improved understanding of the person with dementia reduces psychological distress. To enhance understanding and empathy in caregivers, an innovative technology virtual reality intervention Through the D'mentia Lens (TDL) was developed to experience dementia, consisting of a virtual reality simulation movie and e-course. A pilot study of TDL was conducted.\n\n\nMETHODS\nA pre-test-post-test design was used. Informal caregivers filled out questionnaires assessing person-centeredness, empathy, perceived pressure from informal care, perceived competence and quality of the relationship. At post-test, additional questions about TDL's feasibility were asked.\n\n\nRESULTS\nThirty-five caregivers completed the pre-test and post-test. Most participants were satisfied with TDL and stated that TDL gave more insight in the perception of the person with dementia. The simulation movie was graded 8.03 out of 10 and the e-course 7.66. Participants significantly improved in empathy, confidence in caring for the person with dementia, and positive interactions with the person with dementia.\n\n\nCONCLUSION\nTDL is feasible for informal caregivers and seems to lead to understanding of and insight in the experience of people with dementia. Therefore, TDL could support informal caregivers in their caregiving role."}
{"_id": "7a3baa27c860d5c19672581b5b45a65a39b19cb6", "title": "Substrate Integrated Waveguide ( SIW ) to Microstrip Transition at X-Band", "text": "Substrate integrated waveguide (SIW) is a new form of transmission line. It facilitates the realization of nonplanar (waveguide based) circuits into planar form for easy integration with other planar (microstrip) circuits and systems. This paper describes the design of an SIW to microstrip transition. The transition is broadband covering the frequency range of 8 \u2013 12GHz. The measured in-band insertion loss is below 0.6dB while the return loss is less than 10dB. The circuit is simulated in HFSS and results are measured on vector network analyzer (VNA). Keywords\u2014 Microstrip; Substrate Integrated Waveguide; Transition"}
{"_id": "cefa26984e78055891df190825cbdbb4f43ce189", "title": "Why experts make errors", "text": "Expert latent f ingerprint examiners were presented with fingerprints taken from real criminal cases. Half of the prints had been previously judged as individualizations and the other half as exclusions. We re-presented the same prints to the same experts who had judged them previously, but provided biasing contextual information in both the individualizations and exclusions. A control set of individualizations and exclusions was also re-presented as part of the study. The control set had no biasing contextual information associated with it. Each expert examined a total of eight past decisions. Two-thirds of the experts made inconsistent decisions. The f indings are discussed in terms of psychological and cognitive vulnerabilities."}
{"_id": "23e7d3999b97009f94a972114829e0dab6e494e5", "title": "Completed Local Binary Count for Rotation Invariant Texture Classification", "text": "In this brief, a novel local descriptor, named local binary count (LBC), is proposed for rotation invariant texture classification. The proposed LBC can extract the local binary grayscale difference information, and totally abandon the local binary structural information. Although the LBC codes do not represent visual microstructure, the statistics of LBC features can represent the local texture effectively. In addition, a completed LBC (CLBC) is also proposed to enhance the performance of texture classification. Experimental results obtained from three databases demonstrate that the proposed CLBC can achieve comparable accurate classification rates with completed local binary pattern."}
{"_id": "369634f497852e05d5e72b12874e2a3db2d3945f", "title": "Description of interest regions with local binary patterns", "text": "This paper presents a novel method for interest region description. We adopted the idea that the appearance of an interest region can be well characterized by the distribution of its local features. The most well-known descriptor built on this idea is the SIFT descriptor that uses gradient as the local feature. Thus far, existing texture features are not widely utilized in the context of region description. In this paper, we introduce a new texture feature called center-symmetric local binary pattern (CS-LBP) that is a modified version of the well-known local binary pattern (LBP) feature. To combine the strengths of the SIFT and LBP, we use the CS-LBP as the local feature in the SIFT algorithm. The resulting descriptor is called the CS-LBP descriptor. In the matching and object category classification experiments, our descriptor performs favorably compared to the SIFT. Furthermore, the CS-LBP descriptor is computationally simpler than the SIFT."}
{"_id": "76e834df333586fa9906afbdabb9a33bef98a56b", "title": "Survey on LBP based texture descriptors for image classification", "text": "The aim of this work is to find the best way for describing a given texture using a local binary pattern (LBP) based approach. First several different approaches are compared, then the best fusion approach is tested on different datasets and compared with several approaches proposed in the literature (for fair comparisons, when possible we have used code shared by the original authors). Our experiments show that a fusion approach based on uniform local quinary pattern (LQP) and a rotation invariant local quinary pattern, where a bin selection based on variance is performed and Neighborhood Preserving Embedding (NPE) feature transform is applied, obtains a method that performs well on all tested datasets. As the classifier, we have tested a stand-alone support vector machine (SVM) and a random subspace ensemble of SVM. We compare several texture descriptors and show that our proposed approach coupled with random subspace ensemble outperforms other recent state-of-the-art approaches. This conclusion is based on extensive experiments conducted in several domains using six benchmark databases. 2011 Elsevier Ltd. All rights reserved."}
{"_id": "bd143f6fe886ddc137783864ee42fef8927571d0", "title": "The Language of Place: Semantic Value from Geospatial Context", "text": "There is a relationship between what we say and where we say it. Word embeddings are usually trained assuming that semantically-similar words occur within the same textual contexts. We investigate the extent to which semantically-similar words occur within the same geospatial contexts. We enrich a corpus of geolocated Twitter posts with physical data derived from Google Places and OpenStreetMap, and train word embeddings using the resulting geospatial contexts. Intrinsic evaluation of the resulting vectors shows that geographic context alone does provide useful information about semantic relatedness."}
{"_id": "ce2845cadc5233ff0a647aa22ae3bbe646258890", "title": "Spherical CNNs", "text": "Convolutional Neural Networks (CNNs) have become the method of choice for learning problems involving 2D planar images. However, a number of problems of recent interest have created a demand for models that can analyze spherical images. Examples include omnidirectional vision for drones, robots, and autonomous cars, molecular regression problems, and global weather and climate modelling. A naive application of convolutional networks to a planar projection of the spherical signal is destined to fail, because the space-varying distortions introduced by such a projection will make translational weight sharing ineffective. In this paper we introduce the building blocks for constructing spherical CNNs. We propose a definition for the spherical cross-correlation that is both expressive and rotation-equivariant. The spherical correlation satisfies a generalized Fourier theorem, which allows us to compute it efficiently using a generalized (non-commutative) Fast Fourier Transform (FFT) algorithm. We demonstrate the computational efficiency, numerical accuracy, and effectiveness of spherical CNNs applied to 3D model recognition and atomization energy regression."}
{"_id": "56ec221ef329cfce7acd2e1afea2a2fbf4b3807c", "title": "A novel method for providing relational databases with rich semantics and natural language processing", "text": "Purpose \u2013 With the development of systems and applications, the number of users interacting with databases has increased considerably. The relational database model is still considered as the most used model for data storage and manipulation. However, it doesn\u2019t offer any semantic support for the stored data which can facilitate data access for users. Indeed, a large number of users are intimidated when retrieving data because they are non-technical or have little technical knowledge. To overcome this problem, researchers are continuously developing new techniques for Natural Language Interfaces to Databases (NLIDB). Nowadays, the usage of existing NLIDB\u2019s is not wide-spread due to their deficiencies in understanding natural language queries. In this sense, this paper proposes a novel method for intelligent understanding of natural language queries using semantically enriched database sources. Design/Methodology/Approach \u2013 First a reverse engineering process is applied to extract relational database hidden semantics. In the second step, the extracted semantics are enriched further using a domain ontology. After this, all semantics are stored in the same relational database. The phase of processing natural language queries uses the stored semantics to generate a semantic-tree. Findings \u2013 The evaluation part of our work show the advantages of using a semantically enriched database source to understand natural language queries. Additionally, enriching a relational database has given more flexibility to understand contextual and synonymous words that may be used in a natural language query. Originality/value \u2013 Existing NLIDB\u2019s aren\u2019t yet a standard option for interfacing a relational database due their lack for understanding natural language queries. Indeed, the techniques used in the literature have their limits. This paper handles those limits by identifying natural language elements by their semantic nature in order to generate a semantic-tree. This last is a key solution toward intelligent understanding of natural language queries to relational databases."}
{"_id": "895700be5f5d63c7e7b7370843ca480909f95225", "title": "Teaching a lay theory before college narrows achievement gaps at scale.", "text": "Previous experiments have shown that college students benefit when they understand that challenges in the transition to college are common and improvable and, thus, that early struggles need not portend a permanent lack of belonging or potential. Could such an approach-called a lay theory intervention-be effective before college matriculation? Could this strategy reduce a portion of racial, ethnic, and socioeconomic achievement gaps for entire institutions? Three double-blind experiments tested this possibility. Ninety percent of first-year college students from three institutions were randomly assigned to complete single-session, online lay theory or control materials before matriculation (n > 9,500). The lay theory interventions raised first-year full-time college enrollment among students from socially and economically disadvantaged backgrounds exiting a high-performing charter high school network or entering a public flagship university (experiments 1 and 2) and, at a selective private university, raised disadvantaged students' cumulative first-year grade point average (experiment 3). These gains correspond to 31-40% reductions of the raw (unadjusted) institutional achievement gaps between students from disadvantaged and nondisadvantaged backgrounds at those institutions. Further, follow-up surveys suggest that the interventions improved disadvantaged students' overall college experiences, promoting use of student support services and the development of friendship networks and mentor relationships. This research therefore provides a basis for further tests of the generalizability of preparatory lay theories interventions and of their potential to reduce social inequality and improve other major life transitions."}
{"_id": "dc3e6c32b17aeb91b6c4bdfb59efaf419caa298f", "title": "Experts \u2019 Views on Digital Parenting Strategies", "text": "American teenagers are spending much of their time online. Online communities offer teens perspective, acceptance, connection, and education that were once more limited by geography. However, the online world also opens teens to risks. Adults must decide how to guide teen online behavior in a space that they may not understand. We interviewed 16 experts about teen online behavior, related benefits and risks, and risk mitigation methods. We found that experts agree on certain mitigation methods, especially communication and education about online behavior, but are divided over monitoring. We propose a set of possible solutions that promote online safety while preserving teens\u2019 privacy."}
{"_id": "2b4c326582a5a86379816f7c78199886c84e638c", "title": "Using Machine Learning Methods and Linguistic Features in Single-Document Extractive Summarization", "text": "Extractive summarization of text documents usually consists of ranking the document sentences and extracting the top-ranked sentences subject to the summary length constraints. In this paper, we explore the contribution of various supervised learning algorithms to the sentence ranking task. For this purpose, we introduce a novel sentence ranking methodology based on the similarity score between a candidate sentence and benchmark summaries. Our experiments are performed on three benchmark summarization corpora: DUC-2002, DUC2007 and MultiLing-2013. The popular linear regression model achieved the best results in all evaluated datasets. Additionally, the linear regression model, which included POS (Part-of-Speech)-based features, outperformed the one with statistical features only."}
{"_id": "5b0d497532c18ee2e48b4d59d6e5cab343d6f8a8", "title": "How effective is the IEEE 802.11 RTS/CTS handshake in ad hoc networks", "text": "IEEE 802.11 MAC mainly relies on two techniques to combat interference: physical carrier sensing and RTS/CTS handshake (also known as \u201cvirtual carrier sensing\u201d). Ideally, the RTS/CTS handshake can eliminate most interference. However, the effectiveness of RTS/CTS handshake is based on the assumption that hidden nodes are within transmission range of receivers. In this paper, we prove using analytic models that in ad hoc networks, such an assumption cannot hold due to the fact that power needed for interrupting a packet reception is much lower than that of delivering a packet successfully. Thus, the \u201cvirtual carrier sensing\u201d implemented by RTS/CTS handshake cannot prevent all interference as we expect in theory. Physical carrier sensing can complement this in some degree. However, since interference happens at receivers, while physical carrier sensing is detecting transmitters (the same problem causing the hidden terminal situation), physical carrier sensing cannot help much, unless a very large carrier sensing range is adopted, which is limited by the antenna sensitivity. In this paper, we investigate how effective is the RTS/CTS handshake in terms of reducing interference. We show that in some situations, the interference range is much larger than transmission range, where RTS/CTS cannot function well. Then, a simple MAC layer scheme is proposed to solve this problem. Simulation results verify that our scheme can help IEEE 802.11 resolve most interference caused by large interference range."}
{"_id": "5218ebc5fc53fbd8c35b72c7043bba855649f023", "title": "Learning recursion: Multiple nested and crossed dependencies", "text": "Language acquisition in both natural and artificial language learning settings crucially depends on extracting information from ordered sequences. A shared sequence learning mechanism is thus assumed to underlie both natural and artificial language learning. A growing body of empirical evidence is consistent with this hypothesis. By means of artificial language learning experiments, we may therefore gain more insight in this shared mechanism. In this paper, we review empirical evidence from artificial language learning and computational modeling studies, as well as natural language data, and suggest that there are two key factors that help determine processing complexity in sequence learning, and thus in natural language processing. We propose that the specific ordering of non-adjacent dependencies (i.e. nested or crossed), as well as the number of non-adjacent dependencies to be resolved simultaneously (i.e. two or three) are important factors in gaining more insight into the boundaries of human sequence learning; and thus, also in natural language processing. The implications for theories of linguistic competence are discussed."}
{"_id": "53dcb497dfbba0a23a9d247114379b5f3d920a1c", "title": "Validity and Utility of Alternative Predictors of Job Performance", "text": "Meta-analysis of the cumulative research on various predictors of job performance shows that for entry-level jobs there is no predictor with validity equal to that of ability, which has a mean validity of .53. For selection on the basis of current job performance, the work sample test, with mean validity of .54, is slightly better. For federal entry-level jobs, substitution of an alternative predictor would cost from $3.12 billion (job tryout) to $15.89 billion per year (age). Hiring on ability has a utility of $15.61 billion per year, but affects minority groups adversely. Hiring on ability by quotas would decrease this utility by 5%. A third strategy\u2014using a low cutoff score\u2014would decrease utility by 83%. Using other predictors in conjunction with ability tests might improve validity and reduce adverse impact, but there is as yet no data base for studying this possibility."}
{"_id": "91e40bc40d57599f62e79b6cbddb501d31907acf", "title": "Enhancing Cross-Device Interaction Scripting with Interactive Illustrations", "text": "Cross-device interactions involve input and output on multiple computing devices. Implementing and reasoning about interactions that cover multiple devices with a diversity of form factors and capabilities can be complex. To assist developers in programming cross-device interactions, we created DemoScript, a technique that automatically analyzes a cross-device interaction program while it is being written. DemoScript visually illustrates the step-by-step execution of a selected portion or the entire program with a novel, automatically generated cross-device storyboard visualization. In addition to helping developers understand the behavior of the program, DemoScript also allows developers to revise their program by interactively manipulating the cross-device storyboard. We evaluated DemoScript with 8 professional programmers and found that DemoScript significantly improved development efficiency by helping developers interpret and manage cross-device interaction; it also encourages testing to think through the script in a development process."}
{"_id": "c1369d9ebdde76b855bd8a9405db65400af01620", "title": "An alginate-antacid formulation (Gaviscon Double Action Liquid) can eliminate or displace the postprandial 'acid pocket' in symptomatic GERD patients.", "text": "BACKGROUND\nRecently, an 'acid pocket' has been described in the proximal stomach, particularly evident postprandially in GERD patients, when heartburn is common. By creating a low density gel 'raft' that floats on top of gastric contents, alginate-antacid formulations may neutralise the 'acid pocket'.\n\n\nAIM\nTo assess the ability of a commercial high-concentration alginate-antacid formulation to neutralize and/or displace the acid pocket in GERD patients.\n\n\nMETHODS\nThe 'acid pocket' was studied in ten symptomatic GERD patients. Measurements were made using concurrent stepwise pH pull-throughs, high resolution manometry and fluoroscopy in a semi-recumbent posture. Each subject was studied in three conditions: fasted, 20 min after consuming a high-fat meal and 20 min later after a 20 mL oral dose of an alginate-antacid formulation (Gaviscon Double Action Liquid, Reckitt Benckiser Healthcare, Hull, UK). The relative position of pH transition points (pH >4) to the EGJ high-pressure zone was analysed.\n\n\nRESULTS\nMost patients (8/10) exhibited an acidified segment extending from the proximal stomach into the EGJ when fasted that persisted postprandially. Gaviscon neutralised the acidified segment in six of the eight subjects shifting the pH transition point significantly away from the EGJ. The length and pressure of the EGJ high-pressure zone were minimally affected.\n\n\nCONCLUSIONS\nGaviscon can eliminate or displace the 'acid pocket' in GERD patients. Considering that EGJ length was unchanged throughout, this effect was likely attributable to the alginate 'raft' displacing gastric contents away from the EGJ. These findings suggest the alginate-antacid formulation to be an appropriately targeted postprandial GERD therapy."}
{"_id": "f6a4bf043af1a9ec7f104a7b7ab56806b241ceda", "title": "Model compression via distillation and quantization", "text": "Deep neural networks (DNNs) continue to make significant advances, solving tasks from image classification to translation or reinforcement learning. One aspect of the field receiving considerable attention is efficiently executing deep models in resource-constrained environments, such as mobile or embedded devices. This paper focuses on this problem, and proposes two new compression methods, which jointly leverage weight quantization and distillation of larger networks, called \u201cteachers,\u201d into compressed \u201cstudent\u201d networks. The first method we propose is called quantized distillation and leverages distillation during the training process, by incorporating distillation loss, expressed with respect to the teacher network, into the training of a smaller student network whose weights are quantized to a limited set of levels. The second method, differentiable quantization, optimizes the location of quantization points through stochastic gradient descent, to better fit the behavior of the teacher model. We validate both methods through experiments on convolutional and recurrent architectures. We show that quantized shallow students can reach similar accuracy levels to state-of-the-art full-precision teacher models, while providing up to order of magnitude compression, and inference speedup that is almost linear in the depth reduction. In sum, our results enable DNNs for resource-constrained environments to leverage architecture and accuracy advances developed on more powerful devices."}
{"_id": "0a28c12a02140983603c7231ebae70564066f86b", "title": "Expectancy-Value Theory of Achievement Motivation.", "text": "We discuss the expectancy-value theory of motivation, focusing on an expectancy-value model developed and researched by Eccles, Wigfield, and their colleagues. Definitions of crucial constructs in the model, including ability beliefs, expectancies for success, and the components of subjective task values, are provided. These definitions are compared to those of related constructs, including self-efficacy, intrinsic and extrinsic motivation, and interest. Research is reviewed dealing with two issues: (1) change in children's and adolescents' ability beliefs, expectancies for success, and subjective values, and (2) relations of children's and adolescents' ability-expectancy beliefs and subjective task values to their performance and choice of activities. Copyright 2000 Academic Press."}
{"_id": "bb7e34dd0d8b3d6a482ba2fe292e635082638612", "title": "Smart street light system looking like usual street lights based on sensor networks", "text": "Currently, in the whole world, enormous electric energy is consumed by the street lights, which are automatically turn on when it becomes dark and automatically turn off when it becomes bright. This is the huge waste of energy in the whole world and should be changed. This paper discusses a smart street light system, whose concept is proposed by Fujii et al. The main aim of smart street light systems is that lights turn on when needed and light turn off when not needed. Moreover, the smart street light system in this paper behaves like usual street lights that turn on all night. The ideal behavior of the smart street light system is that no one finds turn-off of street lights at night. Whenever someone see street lights, they turn on and whenever no one see street lights, they turn off. The smart street light system consists of LED lights, brightness sensors, motion sensors and short-distance communication networks. The lights turn on before pedestrians and vehicles come and turn off or reduce brightness when there is no one. It will be difficult for pedestrians and drivers of vehicles to distinguish our smart street lights and the conventional street lights because our street lights all turn on before they come."}
{"_id": "434fee9fce74d83c9ed0d47d6a133b68229f1bc1", "title": "On the Consistency of Top-k Surrogate Losses", "text": "The top-k error is often employed to evaluate performance for challenging classification tasks in computer vision as it is designed to compensate for ambiguity in ground truth labels. This practical success motivates our theoretical analysis of consistent top-k classification. To this end, we define top-k calibration as a necessary and sufficient condition for consistency, for bounded below loss functions. Unlike prior work, our analysis of top-k calibration handles non-uniqueness of the predictor scores, and extends calibration to consistency \u2013 providing a theoretically sound basis for analysis of this topic. Based on the topk calibration analysis, we propose a rich class of top-k calibrated Bregman divergence surrogates. Our analysis continues by showing previously proposed hinge-like top-k surrogate losses are not top-k calibrated and thus inconsistent. On the other hand, we propose two new hinge-like losses, one which is similarly inconsistent, and one which is consistent. Our empirical results highlight theoretical claims, confirming our analysis of the consistency of these losses."}
{"_id": "13d48c75a6ed770bc8fa0670589db493f3dce819", "title": "Genetic studies of human diversity in East Asia.", "text": "East Asia is one of the most important regions for studying evolution and genetic diversity of human populations. Recognizing the relevance of characterizing the genetic diversity and structure of East Asian populations for understanding their genetic history and designing and interpreting genetic studies of human diseases, in recent years researchers in China have made substantial efforts to collect samples and generate data especially for markers on Y chromosomes and mtDNA. The hallmark of these efforts is the discovery and confirmation of consistent distinction between northern and southern East Asian populations at genetic markers across the genome. With the confirmation of an African origin for East Asian populations and the observation of a dominating impact of the gene flow entering East Asia from the south in early human settlement, interpretation of the north-south division in this context poses the challenge to the field. Other areas of interest that have been studied include the gene flow between East Asia and its neighbouring regions (i.e. Central Asia, the Sub-continent, America and the Pacific Islands), the origin of Sino-Tibetan populations and expansion of the Chinese."}
{"_id": "1e8685728af234a320878f3aec2317cd0ae9f179", "title": "A Secured System for Information Hiding in Image Steganography using Genetic Algorithm and Cryptography", "text": "Encryption is used to securely communicate data in open networks. Each type of data has its own structures; therefore according to the data the different techniques should be used to defend confidential data. Among them digital images are also very popular to carry confidential information in untrusted networks. . For pleasing the defense of data hiding and communication over network, the proposed system uses cryptographic algorithm along with Steganography. In the proposed system, the file which user want to make secure is firstly compressed to shrink in size and then the compressed data is altered into cipher text by using AES cryptographic algorithm and then the encrypted data is concealed in the image. In order to hide the information over the image in complex manner the genetic algorithm based technique is implemented which is used to evaluate the valuable pixels where the data can be hide in a secure manner. In addition of that, for hiding the information in images, the LSB (least significant bits) based steganographic method is used after the selection of eligible pixels. The implementation of the anticipated technique is performed using JAVA technology and for performance evaluation the time and space complexity is computed. In addition of that a"}
{"_id": "0821ce635b050195ee5f3599e249d2130c7d7439", "title": "Generating multi-fingered robotic grasps via deep learning", "text": "This paper presents a deep learning architecture for detecting the palm and fingertip positions of stable grasps directly from partial object views. The architecture is trained using RGBD image patches of fingertip and palm positions from grasps computed on complete object models using a grasping simulator. At runtime, the architecture is able to estimate grasp quality metrics without the need to explicitly calculate the given metric. This ability is useful as the exact calculation of these quality functions is impossible from an incomplete view of a novel object without any tactile feedback. This architecture for grasp quality prediction provides a framework for generalizing grasp experience from known to novel objects."}
{"_id": "1835227a28b84b8e7c93a7232437b54c31c52a02", "title": "Training Products of Experts by Minimizing Contrastive Divergence", "text": "It is possible to combine multiple latent-variable models of the same data by multiplying their probability distributions together and then renormalizing. This way of combining individual expert models makes it hard to generate samples from the combined model but easy to infer the values of the latent variables of each expert, because the combination rule ensures that the latent variables of different experts are conditionally independent when given the data. A product of experts (PoE) is therefore an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary. Training a PoE by maximizing the likelihood of the data is difficult because it is hard even to approximate the derivatives of the renormalization term in the combination rule. Fortunately, a PoE can be trained using a different objective function called contrastive divergence whose derivatives with regard to the parameters can be approximated accurately and efficiently. Examples are presented of contrastive divergence learning using several types of expert on several types of data."}
{"_id": "2211cbe7670c43c2c4d39b43241901f395e07b34", "title": "Artificial General Intelligence", "text": "Against the backdrop of increasing progresses in AI research paired with a rise of AI applications in decision-making processes, security-critical domains as well as in ethically relevant frames, a largescale debate on possible safety measures encompassing corresponding long-term and short-term issues has emerged across different disciplines. One pertinent topic in this context which has been addressed by various AI Safety researchers is e.g. the AI alignment problem for which no final consensus has been achieved yet. In this paper, we present a multidisciplinary toolkit of AI Safety strategies combining considerations from AI and Systems Engineering as well as from Cognitive Science with a security mindset as often relevant in Cybersecurity. We elaborate on how AGI \u201cSelf-awareness\u201d could complement different AI Safety measures in a framework extended by a jointly performed Human Enhancement procedure. Our analysis suggests that this hybrid framework could contribute to undertake the AI alignment problem from a new holistic perspective through security-building synergetic effects emerging thereof and could help to increase the odds of a possible safe future transition towards superintelligent systems."}
{"_id": "d2922d338fb2284bc9c16e8d597cedfbd3015f91", "title": "Spectrum Sensing Methods and Dynamic Spectrum Sharing in Cognitive Radio Networks: A Survey", "text": "Dynamic spectrum access and cognitive radio (CR) are emerging technologies to utilize the scarce frequency spectrum in an efficient and opportunistic manner. They have been receiving tremendous amount of well-deserved attention in the literature. One of the challenging problems in dynamic spectrum access for cognitive radios is to sense the spectrum in wide band regime of heterogeneous networks to identify the spectrum opportunities. Our main goal in this paper is to present the state-of-the-art research results in signal processing techniques for spectrum sensing in CR network to make automatic and wise decision by unlicensed secondary users to adapt their transmission parameters according to their operating RF environment and respect the primary users. Specifically, we present dynamic spectrum sharing models for spectrum access and signal processing techniques for spectrum sensing by categorizing them into five basic groups. We also present the comparison of different signal processing techniques from different perspectives including networking metrics for the evaluation of the methods e.g. how accurate and how fast can sense spectrum utilization given the various methods and how this will affect networking functionality, and pros and cons of all the methods in the context of networking systems. We also present the status, research challenges and perspectives of signal processing/detection techniques for spectrum sensing in cognitive radio networks."}
{"_id": "ff0473637a6b2fcd753a477d361b30547c3bbcd9", "title": "Signal Processing in Cognitive Radio", "text": "Cognitive radio allows for usage of licensed frequency bands by unlicensed users. However, these unlicensed (cognitive) users need to monitor the spectrum continuously to avoid possible interference with the licensed (primary) users. Apart from this, cognitive radio is expected to learn from its surroundings and perform functions that best serve its users. Such an adaptive technology naturally presents unique signal-processing challenges. In this paper, we describe the fundamental signal-processing aspects involved in developing a fully functional cognitive radio network, including spectrum sensing and spectrum sculpting."}
{"_id": "504939c784878906c8f9fbd41783e3bbce4e8e4b", "title": "Cognitive radio: brain-empowered wireless communications", "text": "Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: /spl middot/ highly reliable communication whenever and wherever needed; /spl middot/ efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks. 1) Radio-scene analysis. 2) Channel-state estimation and predictive modeling. 3) Transmit-power control and dynamic spectrum management. This work also discusses the emergent behavior of cognitive radio."}
{"_id": "a1ac1c423afa2a19bcadc886382df67713028e53", "title": "Cooperative Sensing among Cognitive Radios", "text": "Cognitive Radios have been advanced as a technology for the opportunistic use of under-utilized spectrum since they are able to sense the spectrum and use frequency bands if no Primary user is detected. However, the required sensitivity is very demanding since any individual radio might face a deep fade. We propose light-weight cooperation in sensing based on hard decisions to mitigate the sensitivity requirements on individual radios. We show that the \"link budget\" that system designers have to reserve for fading is a significant function of the required probability of detection. Even a few cooperating users (~10-20) facing independent fades are enough to achieve practical threshold levels by drastically reducing individual detection requirements. Hard decisions perform almost as well as soft decisions in achieving these gains. Cooperative gains in a environment where shadowing is correlated, is limited by the cooperation footprint (area in which users cooperate). In essence, a few independent users are more robust than many correlated users. Unfortunately, cooperative gain is very sensitive to adversarial/failing Cognitive Radios. Radios that fail in a known way (always report the presence/absence of a Primary user) can be compensated for by censoring them. On the other hand, radios that fail in unmodeled ways or may be malicious, introduce a bound on achievable sensitivity reductions. As a rule of thumb, if we believe that 1/N users can fail in an unknown way, then the cooperation gains are limited to what is possible with N trusted users."}
{"_id": "59df762d90a797d6f57c5d4573934f9d07ed3195", "title": "An intrusion detection system against malicious attacks on the communication network of driverless cars", "text": "Vehicular ad hoc networking (VANET) have become a significant technology in the current years because of the emerging generation of self-driving cars such as Google driverless cars. VANET have more vulnerabilities compared to other networks such as wired networks, because these networks are an autonomous collection of mobile vehicles and there is no fixed security infrastructure, no high dynamic topology and the open wireless medium makes them more vulnerable to attacks. It is important to design new approaches and mechanisms to rise the security these networks and protect them from attacks. In this paper, we design an intrusion detection mechanism for the VANETs using Artificial Neural Networks (ANNs) to detect Denial of Service (DoS) attacks. The main role of IDS is to detect the attack using a data generated from the network behavior such as a trace file. The IDSs use the features extracted from the trace file as auditable data. In this paper, we propose anomaly and misuse detection to detect the malicious attack."}
{"_id": "79fe8cc97ae5c056361c8a3da25ac309c8709c9a", "title": "Modelling and testing consumer trust dimensions in e-commerce", "text": "Prior research has found trust to play a significant role in shaping purchase intentions of a consumer. However there has been limited research where consumer trust dimensions have been empirically defined and tested. In this paper we empirically test a path model such that Internet vendors would have adequate solutions to increase trust. The path model presented in this paper measures the three main dimensions of trust, i.e. competence, integrity, and benevolence. And assesses the influence of overall trust of consumers. The paper also analyses how various sources of trust, i.e. consumer characteristics, firm characteristic, website infrastructure and interactions with consumers, influence dimensions of trust. The model is tested using 365 valid responses. Findings suggest that consumers with high overall trust demonstrate a higher intention to purchase online. \u00a9 2017 Elsevier Ltd. All rights reserved."}
{"_id": "cb04995e7b9f2dc2a72dd2a69f50b380e28c5bc1", "title": "Time Series Data Classification Using Recurrent Neural Network with Ensemble Learning", "text": "In statistics and signal processing, a time series is a sequence of data points, measured typically at successive times, spaced apart at uniform time intervals. Time series prediction is the use of a model to predict future events based on known past events; to predict future data points before they are measured. Solutions in such cases can be provided by non-parametric regression methods, of which each neural network based predictor is a class. As a learning method of time series data with neural network, Elman type Recurrent Neural Network has been known. In this paper, we propose the multi RNN. In order to verify the effectiveness of our proposed method, we experimented by the simple artificial data and the heart pulse wave data."}
{"_id": "fa6f01cc26d2fd9ca7e6e1101067d10d111965ab", "title": "Evidence-based strategies in weight-loss mobile apps.", "text": "BACKGROUND\nPhysicians have limited time for weight-loss counseling, and there is a lack of resources to which they can refer patients for assistance with weight loss. Weight-loss mobile applications (apps) have the potential to be a helpful tool, but the extent to which they include the behavioral strategies included in evidence-based interventions is unknown.\n\n\nPURPOSE\nThe primary aims of the study were to determine the degree to which commercial weight-loss mobile apps include the behavioral strategies included in evidence-based weight-loss interventions, and to identify features that enhance behavioral strategies via technology.\n\n\nMETHODS\nThirty weight-loss mobile apps, available on iPhone and/or Android platforms, were coded for whether they included any of 20 behavioral strategies derived from an evidence-based weight-loss program (i.e., Diabetes Prevention Program). Data on available apps were collected in January 2012; data were analyzed in June 2012.\n\n\nRESULTS\nThe apps included on average 18.83% (SD=13.24; range=0%-65%) of the 20 strategies. Seven of the strategies were not found in any app. The most common technology-enhanced features were barcode scanners (56.7%) and a social network (46.7%).\n\n\nCONCLUSIONS\nWeight-loss mobile apps typically included only a minority of the behavioral strategies found in evidence-based weight-loss interventions. Behavioral strategies that help improve motivation, reduce stress, and assist with problem solving were missing across apps. Inclusion of additional strategies could make apps more helpful to users who have motivational challenges."}
{"_id": "77e3e0f2232ce942cf9ab30e1b1da54e72dd1d4a", "title": "Pipsqueak: Lean Lambdas with Large Libraries", "text": "Microservices are usually fast to deploy because each microservice is small, and thus each can be installed and started quickly. Unfortunately, lean microservices that depend on large libraries will start slowly and harm elasticity. In this paper, we explore the challenges of lean microservices that rely on large libraries in the context of Python packages and the OpenLambda serverless computing platform. We analyze the package types and compressibility of libraries distributed via the Python Package Index and propose PipBench, a new tool for evaluating package support. We also propose Pipsqueak, a package-aware compute platform based on OpenLambda."}
{"_id": "acc5d4874a55895f3cf00dc7101f769725914241", "title": "True Inline Cross-Coupled Coaxial Cavity Filters", "text": "In this paper, a novel true inline configuration for cross-coupled coaxial cavity filters is presented, which is characterized by a simple and compact structure, improved performance, and good tunability. Instead of using folded structures, dedicated coupling probes, or extra cavities, as required by conventional techniques, cross coupling is realized by changing the orientation of selected resonators. Sequential coupling between adjacent resonators and cross coupling between nonadjacent resonators are effectively controlled by introducing small metal plates at different locations. A six-pole bandpass filter with two transmission zeros was built and tested. The measurement and simulation results agree very well, demonstrating feasibility of the inline filter configuration. It enables compact design with improved resonator Q compared to conventional combline filters. Furthermore, cross coupling can be readily adjusted using tuning elements."}
{"_id": "d8ae57ebc311a32d356f0d2fc2f83071a240af4b", "title": "Bridgeless half-bridge AC-DC converter with series-connected two transformers", "text": "This paper proposes a bridgeless half-bridge ac-dc converter with series-connected two transformers. The proposed converter integrates the operation of the bridgeless power factor correction (PFC) boost rectifier and the asymmetrical pulse-width modulation half-bridge dc-dc converter. The proposed converter provides an isolated dc output voltage without any full-bridge diode rectifier. Conduction losses are lowered by eliminating the full-bridge diode rectifier. Switching losses are reduced by achieving zero-voltage switching of the power switches. To improve the power efficiency of the bridgeless half-bridge ac-dc converter, this paper presents the study results about the bridgeless half-bridge ac-dc converter with the series-connected two transformers. By using series-connected two transformers, the proposed convert has a simpler power circuit. The proposed converter gives a high efficiency, high power factor, and low cost. The proposed converter achieves a high efficiency of 93.5 % for 250 W (50 V / 5 A) output power at 90 Vrms line voltage."}
{"_id": "fbeee348ed0d9230ec54bb37a5db4c1d8eeeb13f", "title": "Phase Current Sensor and Short-Circuit Detection based on Rogowski Coils Integrated on Gate Driver for 1.2 kV SiC MOSFET Half-Bridge Module", "text": "Silicon-carbide (SiC) MOSFETs are enabling electrical vehicle motor drives to meet the demands of higher power density, efficiency, and lower system cost. Hence, this paper seeks to explore the benefits that a gate-driver-level intelligence can contribute to SiC-based power inverters. The intelligence is brought by PCB-embedded Rogowski switch-current sensors (RSCS) integrated on the gate driver of a 1.2 kV, 300 A SiC MOSFET half-bridge module. They collect two MOSFET switch currents in a manner of high magnitude, high bandwidth, and solid signal isolation. The switch-current signals are used for short-circuit detection under various fault impedances, as well as for phase-current reconstruction by subtracting one switch current from another. The fundamentals and noise-immunity design of the gate driver containing the RSCS are presented in the paper and can be applied to any half-bridge power module. A three-phase inverter prototype has been built and operated in continuous PWM mode. On this setup, the performance and limitations of the short-circuit detection and phase-current reconstruction are experimentally validated by comparing with commercial current probes and Hall sensors."}
{"_id": "1ed1c1f7dcaa00d758edb7b05172a7720cf31725", "title": "Design of a CMOS bandgap Reference Circuit with a Wide temperature Range, High Precision and Low temperature coefficient", "text": "This paper presents an approach to the design of a high-precision CMOS voltage reference. The proposed circuit is designed for TSMC 0.35 m standard CMOS process. We design the \u0304rstorder temperature compensation bandgap voltage reference circuit. The proposed post-simulated circuit delivers an output voltage of 0.596V and achieves the reported temperature coe\u00b1cient (TC) of 3.96 ppm/ C within the temperature range from 60 C to 130 C when the supply voltage is 1.8V. When simulated in a smaller temperature range from 40 C to 80 C, the circuit achieves the lowest reported TC of 2.09 ppm/ C. The reference current is 16.586 A. This circuit provides good performances in a wide range of temperature with very small TC."}
{"_id": "1ddcf7dbdaaf1d11329b3f6e939c16445de74cb5", "title": "A brief survey on deep belief networks and introducing a new object oriented MATLAB toolbox (DeeBNet)", "text": "Nowadays, this is very popular to use the deep architectures in machine learning. Deep Belief Networks (DBNs) are deep architectures that use stack of Restricted Boltzmann Machines (RBM) to create a powerful generative model using training data. DBNs have many ability like feature extraction and classification that are used in many applications like image processing, speech processing and etc. This paper introduces a new object oriented MATLAB toolbox with most of abilities needed for the implementation of DBNs. According to the results of the experiments conducted on MNIST (image), ISOLET (speech), and 20 Newsgroups (text) datasets, it was shown that the toolbox can learn automatically a good representation of the input from unlabeled data with better discrimination between different classes. Also on all datasets, the obtained classification errors are comparable to those of state of the art classifiers. In addition, the toolbox supports different sampling methods (e.g. Gibbs, CD, PCD and our new FEPCD method), different sparsity methods (quadratic, rate distortion and our new normal method), different RBM types (generative and discriminative), using GPU, etc. The toolbox is a user-friendly open source software and is freely available on the website http://ceit.aut.ac.ir/~keyvanrad/DeeBNet%20Toolbox.html ."}
{"_id": "39947dfc303a7a8c459c2743371483cc3a9f8546", "title": "Detection and Visualization of Android Malware Behavior", "text": "Malware analysts still need to manually inspect malware samples that are considered suspicious by heuristic rules. They dissect software pieces and look for malware evidence in the code. The increasing number of malicious applications targeting Android devices raises the demand for analyzing them to find where the malcode is triggered when user interacts with them. In this paper a framework to monitor and visualize Android applications\u2019 anomalous function calls is described. Our approach includes platformindependent application instrumentation, introducing hooks in order to trace restricted API functions used at runtime of the application. These function calls are collected at a central server where the application behavior filtering and a visualization take place. This can help Android malware analysts in visually inspecting what the application under study does, easily identifying such malicious functions."}
{"_id": "b2d5c67f5dc537424987cc28c3e9926250023502", "title": "Discriminatively trained recurrent neural networks for single-channel speech separation", "text": "This paper describes an in-depth investigation of training criteria, network architectures and feature representations for regression-based single-channel speech separation with deep neural networks (DNNs). We use a generic discriminative training criterion corresponding to optimal source reconstruction from time-frequency masks, and introduce its application to speech separation in a reduced feature space (Mel domain). A comparative evaluation of time-frequency mask estimation by DNNs, recurrent DNNs and non-negative matrix factorization on the 2nd CHiME Speech Separation and Recognition Challenge shows consistent improvements by discriminative training, whereas long short-term memory recurrent DNNs obtain the overall best results. Furthermore, our results confirm the importance of fine-tuning the feature representation for DNN training."}
{"_id": "233440a17c93631d510a75790cc36109d1fc10a3", "title": "Accurate Whole Human Genome Sequencing using Reversible Terminator Chemistry", "text": "DNA sequence information underpins genetic research, enabling discoveries of important biological or medical benefit. Sequencing projects have traditionally used long (400\u2013800 base pair) reads, but the existence of reference sequences for the human and many other genomes makes it possible to develop new, fast approaches to re-sequencing, whereby shorter reads are compared to a reference to identify intraspecies genetic variation. Here we report an approach that generates several billion bases of accurate nucleotide sequence per experiment at low cost. Single molecules of DNA are attached to a flat surface, amplified in situ and used as templates for synthetic sequencing with fluorescent reversible terminator deoxyribonucleotides. Images of the surface are analysed to generate high-quality sequence. We demonstrate application of this approach to human genome sequencing on flow-sorted X chromosomes and then scale the approach to determine the genome sequence of a male Yoruba from Ibadan, Nigeria. We build an accurate consensus sequence from\u2009>30\u00d7 average depth of paired 35-base reads. We characterize four million single-nucleotide polymorphisms and four hundred thousand structural variants, many of which were previously unknown. Our approach is effective for accurate, rapid and economical whole-genome re-sequencing and many other biomedical applications."}
{"_id": "52f71dd1d0dce311869fc0dd5623b2ac917d3f61", "title": "Pleasure and orgasm in women with Female Genital Mutilation/Cutting (FGM/C).", "text": "INTRODUCTION\nFemale genital mutilation/cutting (FGM/C) violates human rights. FGM/C women's sexuality is not well known and often it is neglected by gynecologists, urologists, and sexologists. In mutilated/cut women, some fundamental structures for orgasm have not been excised.\n\n\nAIM\nThe aim of this report is to describe and analyze the results of four investigations on sexual functioning in different groups of cut women.\n\n\nMAIN OUTCOME MEASURE\n\n\n\nINSTRUMENTS\nsemistructured interviews and the Female Sexual Function Index (FSFI).\n\n\nMETHODS\n\n\n\nSAMPLE\n137 adult women affected by different types of FGM/C; 58 young FGM/C ladies living in the West; 57 infibulated women; 15 infibulated women after the operation of defibulation.\n\n\nRESULTS\nThe group of 137 women, affected by different types of FGM/C, reported orgasm in almost 86%, always 69.23%; 58 mutilated young women reported orgasm in 91.43%, always 8.57%; after defibulation 14 out of 15 infibulated women reported orgasm; the group of 57 infibulated women investigated with the FSFI questionnaire showed significant differences between group of study and an equivalent group of control in desire, arousal, orgasm, and satisfaction with mean scores higher in the group of mutilated women. No significant differences were observed between the two groups in lubrication and pain.\n\n\nCONCLUSION\nEmbryology, anatomy, and physiology of female erectile organs are neglected in specialist textbooks. In infibulated women, some erectile structures fundamental for orgasm have not been excised. Cultural influence can change the perception of pleasure, as well as social acceptance. Every woman has the right to have sexual health and to feel sexual pleasure for full psychophysical well-being of the person. In accordance with other research, the present study reports that FGM/C women can also have the possibility of reaching an orgasm. Therefore, FGM/C women with sexual dysfunctions can and must be cured; they have the right to have an appropriate sexual therapy."}
{"_id": "81ada473d6bf2b6cda837d869579619a9e009338", "title": "Midpalatal implants vs headgear for orthodontic anchorage--a randomized clinical trial: cephalometric results.", "text": "INTRODUCTION\nThe purpose of this study was to compare the clinical effectiveness of the midpalatal implant as a method of reinforcing anchorage during orthodontic treatment with that of conventional extraoral anchorage. This was a prospective, randomized, clinical trial at Chesterfield and North Derbyshire Royal Hospital NHS Trust and the Charles Clifford Dental Hospital, Sheffield, in the United Kingdom.\n\n\nMETHODS\nFifty-one orthodontic patients between the ages of 12 and 39, with Class II Division 1 malocclusion and absolute anchorage requirements, were randomly allocated to receive either a midpalatal implant or headgear to reinforce orthodontic anchorage. The main outcome was to compare the mesial movement of the molars and the incisors of the 2 treatment groups between the start and the end of anchorage reinforcement as measured from cephalometric radiographs.\n\n\nRESULTS\nThe reproducibility of the measuring technique was acceptable. There were significant differences between T1 and T2 in the implant group for the positions of the maxillary central incisor (P <.001), the maxillary molar (P = .009), and the mandibular molar (P <.001). There were significant differences between T1 and T2 in the headgear group for the positions of the mandibular central incisor (P <.045), the maxillary molar (P <.001), and the mandibular molar (P <.001). All skeletal and dental points moved mesially more in the headgear group during treatment than in the implant group. These ranged from an average of 0.5 mm more mesially for the mandibular permanent molar to 1.5 mm more mesially for the maxillary molar and the mandibular base. No treatment changes between the groups were statistically significant.\n\n\nCONCLUSIONS\nMidpalatal implants are an acceptable technique for reinforcing anchorage in orthodontic patients."}
{"_id": "5e498a71b94d37a69ce89c6461ade7ec4fc391af", "title": "Electronic Game Play and School Performance of Adolescents in Southern Thailand", "text": "Increasing exposure of children and adolescents to electronic media is a worldwide phenomenon, including in Thailand. To date, few studies examine electronic game play in Thai adolescents. Our research describes the prevalence of electronic game play and examines associations between the time spent engaged in electronic game play and school performance of adolescents in Hat-Yai municipality. A cross-sectional study was conducted among 1,492 adolescents from four secondary schools and one commercial college from January through March 2007, using questionnaires for collecting information about demographic data, school grades, and electronic game play behaviors. The prevalence of electronic game play was 75% in boys and 59% in girls. Twenty-two percent of boys and 8.7% of girls played electronic games every day with more than 2 hours per session. The two most common places of game play were at game shops (71%), followed by at their own home (70%). Using linear regression analysis, the \"low user or less than 2 hours per session\" game players and females were less likely to have school grades below 3.00 with adjusted odds ratios of 0.44 (95% CI 0.25-0.80, p = 0.004) and 0.49 (95% CI 0.30-0.76, p = 0.005) respectively. This study finds that excessive playing of electronic games is associated with school grades below 3.00."}
{"_id": "d5304dcb730b86b5af756e3c59ff8f63b67119b8", "title": "Implant Removal after Percutaneous Short Segment Fixation for Thoracolumbar Burst Fracture : Does It Preserve Motion?", "text": "OBJECTIVE\nThe purpose of this study was to evaluate the efficacy of implant removal of percutaneous short segment fixation after vertebral fracture consolidation in terms of motion preservation.\n\n\nMETHODS\nBetween May 2007 and January 2011, 44 patients underwent percutaneous short segment screw fixation due to a thoracolumbar burst fracture. Sixteen of these patients, who underwent implant removal 12 months after screw fixation, were enrolled in this study. Motor power was intact in all patients, despite significant vertebral height loss and canal compromise. The patients were divided into two groups by degree of osteoporosis : Group A (n=8), the non-osteoporotic group, and Group B (n=8), the osteoporotic group. Imaging and clinical findings including vertebral height loss, kyphotic angle, range of motion (ROM), and complications were analyzed.\n\n\nRESULTS\nSignificant pain relief was achieved in both groups at final follow-up versus preoperative values. In terms of vertebral height loss, both groups showed significant improvement at 12 months after screw fixation and restored vertebral height was maintained to final follow-up in spite of some correction loss. ROM (measured using Cobb's method) in flexion and extension in Group A was 10.5\u00b0 (19.5/9.0\u00b0) at last follow-up, and in Group B was 10.2\u00b0 (18.8/8.6\u00b0) at last follow-up. Both groups showed marked improvement in ROM as compared with the screw fixation state, which was considered motionless.\n\n\nCONCLUSION\nRemoval of percutaneous implants after vertebral fracture consolidation can be an effective treatment to preserve motion regardless of osteoporosis for thoracolumbar burst fractures."}
{"_id": "146d70b17c8002a922b5d1826e46e67c7acdbbad", "title": "The effects of monetary and social rewards on task performance in children and adolescents: liking is not enough.", "text": "The current study compared the effects of reward anticipation on task performance in children and adolescents (8-16 years old) using monetary and various social rewards. Eighty-five typically developing children undertook the Monetary Incentive Delay (MID) task. Of these 44 also undertook the Social Incentive Delay (SID-basic) task where social reward was operationalized as a smiling face and spoken compliments. Forty-one children participated in the SID-plus where points were added to a pictogram with written compliments. In a preparatory validation study participants were asked howmuch they liked the SID-basic rewards.Results showed that there was an effect of reward size on accuracy and RT in both the MID task and SID-plus, but not SID-basic. Subjective value of the SID-basic rewards was rated higher with hypothesized increasing reward intensity. In conclusion, although the social rewards in SID-basic were liked by children andadolescents in the validation study, they had no effect on the behaviour. Only when points were added (SID-plus), anticipated social reward affected task performance. Thus our results highlight (i) the difference between likeability andreinforcing quality and (ii) the need for a quantifiable element to rewards for themto be reinforcing for children."}
{"_id": "a32496bc36a7a8673b0e04721ac0debd1caa04c1", "title": "Subgoal-labeled instructional material improves performance and transfer in learning to develop mobile applications", "text": "Mental models are mental representations of how an action changes a problem state. Creating a mental model early in the learning process is a strong predictor of success in computer science classes. One major problem in computer science education, however, is that novices have difficulty creating mental models perhaps because of the cognitive overload caused by traditional teaching methods. The present study employed subgoal-labeled instructional materials to promote the creation of mental models when teaching novices to program in Android App Inventor. Utilizing this and other well-established educational tools, such as scaffolding, to reduce cognitive load in computer science education improved the performance of participants on novel tasks when learning to develop mobile applications."}
{"_id": "71439cf2318205de70b3a447bf30f8a72e039fe4", "title": "Predicting Knowledge Sharing Intention Based on Theory of Reasoned Action Framework : An Empirical Study on Higher Education Institution", "text": "Based on theory of reasoned action (TRA) theoretical framework, this study try to predict individual knowledge sharing intention on higher education institution context. This study also integrate some potential factors (channel richness, psychological forces and organizational climate factors) which supporting individuals' knowledge-sharing intentions Through a field survey of 242 lecturer on public and privateuniversity at Purwokerto and Yogyakarta city, this study confirm our hypothesis that attitudes toward knowledge sharing and subjective norms with regard to knowledge sharing as well as organizational climate affect individuals' intentions to share knowledge. This study showed thatPerceived Reciprocal Benefits, Perceived Enjoyment in Helping Others, Perceived Reputation Enhancement and channel richness affect individuals'attitudes toward knowledge sharing while organizational climate affect subjective norms. Overall, the results of the study advance prior research in the area of knowledge sharing by shedding light on the determinants of knowledge sharing behaviors of knowledge workers. These insights could be used by university in developing work environments that are conducive for knowledge worker to share their knowledge. Keyword: Knowledge sharing intention, TRA, Knowledge worker, attitude, university."}
{"_id": "e1257afedb5699bd1c76c6e5cd9b8281d72245c6", "title": "Gradient Domain Guided Image Filtering", "text": "Guided image filter (GIF) is a well-known local filter for its edge-preserving property and low computational complexity. Unfortunately, the GIF may suffer from halo artifacts, because the local linear model used in the GIF cannot represent the image well near some edges. In this paper, a gradient domain GIF is proposed by incorporating an explicit first-order edge-aware constraint. The edge-aware constraint makes edges be preserved better. To illustrate the efficiency of the proposed filter, the proposed gradient domain GIF is applied for single-image detail enhancement, tone mapping of high dynamic range images and image saliency detection. Both theoretical analysis and experimental results prove that the proposed gradient domain GIF can produce better resultant images, especially near the edges, where halos appear in the original GIF."}
{"_id": "a6e3ab79ccf35a4e138521c110f737f65bf48acd", "title": "Dead Code Detection Method Based on Program Slicing", "text": "Malicious code writers often insert dead code into the source code to prevent reverse engineer analysis. There are few methods for detecting dead code used in conjunction with opaque predicates. In this paper, a method based on program slicing for dead code detection is presented, which combines static slicing analysis and dynamic slicing analysis. Further, we design a dead code detection framework in the LLVM compiler infrastructure. The proposed method is used to detect the dead code inserted in some benchmark programs. The experimental results show that the proposed method has high detection rate."}
{"_id": "2f1af7459d4c21aab69f54ac54af60ea92af17b9", "title": "Face poser: Interactive modeling of 3D facial expressions using facial priors", "text": "This article presents an intuitive and easy-to-use system for interactively posing 3D facial expressions. The user can model and edit facial expressions by drawing freeform strokes, by specifying distances between facial points, by incrementally editing curves on the face, or by directly dragging facial points in 2D screen space. Designing such an interface for 3D facial modeling and editing is challenging because many unnatural facial expressions might be consistent with the user's input. We formulate the problem in a maximum a posteriori framework by combining the user's input with priors embedded in a large set of facial expression data. Maximizing the posteriori allows us to generate an optimal and natural facial expression that achieves the goal specified by the user. We evaluate the performance of our system by conducting a thorough comparison of our method with alternative facial modeling techniques. To demonstrate the usability of our system, we also perform a user study of our system and compare with state-of-the-art facial expression modeling software (Poser 7)."}
{"_id": "e0acd5f322837547d63f7ed65670e6854bd96a70", "title": "High-quality passive facial performance capture using anchor frames", "text": "We present a new technique for passive and markerless facial performance capture based on anchor frames. Our method starts with high resolution per-frame geometry acquisition using state-of-the-art stereo reconstruction, and proceeds to establish a single triangle mesh that is propagated through the entire performance. Leveraging the fact that facial performances often contain repetitive subsequences, we identify anchor frames as those which contain similar facial expressions to a manually chosen reference expression. Anchor frames are automatically computed over one or even multiple performances. We introduce a robust image-space tracking method that computes pixel matches directly from the reference frame to all anchor frames, and thereby to the remaining frames in the sequence via sequential matching. This allows us to propagate one reconstructed frame to an entire sequence in parallel, in contrast to previous sequential methods. Our anchored reconstruction approach also limits tracker drift and robustly handles occlusions and motion blur. The parallel tracking and mesh propagation offer low computation times. Our technique will even automatically match anchor frames across different sequences captured on different occasions, propagating a single mesh to all performances."}
{"_id": "147e08459aa57f99a86a701c9e352e96efb99dfa", "title": "Hierarchical Model-Based Motion Estimation", "text": "This paper describes a hierarchical estimation framework for the computation of diverse representations of motion information. The key features of the resulting framework (or family of algorithms) a,re a global model that constrains the overall structure of the motion estimated, a local rnodel that is used in the estimation process, and a coa,rse-fine refinement strategy. Four specific motion models: affine flow, planar surface flow, rigid body motion, and general optical flow, are described along with their application to specific examPles."}
{"_id": "55062ce658f7828876c058c5e96c5071525a1b5e", "title": "Detailed Real-Time Urban 3D Reconstruction from Video", "text": "The paper presents a system for automatic, geo-registered, real-time 3D reconstruction from video of urban scenes. The system collects video streams, as well as GPS and inertia measurements in order to place the reconstructed models in geo-registered coordinates. It is designed using current state of the art real-time modules for all processing steps. It employs commodity graphics hardware and standard CPU\u2019s to achieve real-time performance. We present the main considerations in designing the system and the steps of the processing pipeline. Our system extends existing algorithms to meet the robustness and variability necessary to operate out of the lab. To account for the large dynamic range of outdoor videos the processing pipeline estimates global camera gain changes in the feature tracking stage and efficiently compensates for these in stereo estimation without impacting the real-time performance. The required accuracy for many applications is achieved with a two-step stereo reconstruction process exploiting the redundancy across frames. We show results on real video sequences comprising hundreds of thousands of frames."}
{"_id": "5cda0f3a10f2dc9aba5e5bde37d2c156e268b8f1", "title": "Establishment of the Diagnosis of Follicular Occlusion Tetrad", "text": "Background: Follicular occlusion tetrad (FOT) is a clinical syndrome consisting of suppurative hidradenitis, acne conglobata, dissecting cellulitis of the scalp, and pilonidal sinus. FOT is a rare case that is mostly found in severe conditions resulting in resistance to therapy. Diagnostic accuracy and therapy for all components of FOT is extremely important. Case: A 43-year old male presented with small, painful blisters and bumps filled with pus that had been ongoing for five years. We found pustules and erythematous nodules and multiple abscess with atrophic scars on the scalp, anterior and posterior trunks, abdomen, groin and gluteal regions on the facial region, we found multiple atrophic scars. The pus culture showed the presence of Escherichia coli, and fistulography examination revealed multiple cutaneous fistules and enterocutaneous sinus. The histopathological examination indicated rupture of hair follicles, follicular plugs, and infiltration of heavy-mixed infiltrate cells. Discussion: The FOT diagnosis was established through medical history recording, physical and histopathological examinations, and fistulography. Upon medical history recording and physical examination, we found pustules and nodules in several hairy areas. The fistulographic examination showed some fistules and sinus tracts, and the histopathological examination showed adnexal tissue damage caused by occlusion of hair follicles and inflammation due to accumulation of keratin and debris. All these findings led to the FOT diagnosis. ISSN 2476-2415"}
{"_id": "4a333b6c7d2ab05724368c757ac08feb2a16f586", "title": "A Case Study of Automated Face Recognition: The Boston Marathon Bombings Suspects", "text": "While face recognition can be a useful tool, helping authorities to narrow leads and confirm a criminal suspect's identity, the media tended to highlight the technology's limitations in the Boston Marathon bombings investigation. Consequently, we were motivated to reexamine the efficacy of unconstrained face recognition using this incident as a case study. We chose two automated face recognition systems known for their superior unconstrained matching performance based on tests conducted by the US National Institute of Standards and Technology: NEC's NeoFace v3.1 and Google-owned Pittsburgh Pattern Recognition v5.2.2. To conduct the experiment, we added six reference images of the Tsarnaevs taken from press releases and news articles following their identification to a background database of 1 million mug shots. Against this database, we searched for matches of the five face images of the brothers extracted by the FBI from surveillance, smartphone, or point-and-shoot camera footage before their identification."}
{"_id": "321fbff0c6b5cd50d68bec43142460ab7591f91a", "title": "Using link structure to infer opinions in social networks", "text": "The emergence of online social networks in the past few years has generated an enormous amount of information about potentially any subject. Valuable data containing users' opinions and thoughts are available on those repositories and several sentiment analysis techniques have been proposed that address the problem of understanding the opinion orientation of the users' postings. In this paper, we take a different perspective to the problem through a user centric approach, which uses a graph to model users (and their postings) and applies link mining techniques to infer opinions of users. Preliminary experiments on a Twitter corpus have shown promising results."}
{"_id": "6ac11d8695a64277dbbd603ec3b45edf6ae41bd0", "title": "Political Facebook groups: Micro-activism and the digital front stage", "text": "The growth of social networking sites (SNS), like Facebook, has caused many to rethink how we understand political activism and citizen engagement. In 2010, 43% of Internet users reported using social networking sites \" several times a day, \" a sizeable increase from 2009 where only 34% reported using social networking at the same rate (Pew 2010). Research on digital activism has emphasized the use of websites, listservs, and other forums by formal organizations to raise awareness for their specific causes (McCaughey and Ayers 2003; Van de Donk 2004). Information communication technologies (ICT's) have been regarded as particularly effective tools in mobilizing disparate, resource-poor counter-spheres in collective off-line activity (Van de Donk 2004). In recent years, scholars have moved away from viewing information communication technologies (ICT's) as simply a tool for formal activist organizations. Breindl (2010) argues that on-line digital activism has largely been examined through a classic \" social movement paradigm. \" She calls for scholarship that looks at how the meaning and structure of activism has been transformed by ICT's. One key way it has been transformed is by allowing for small scale, many-to-many forms of politically oriented communication, what I conceptualize as a new term: microactivism. Examples include the formation of political Facebook groups, re-tweeting of articles of political interest and sharing politically relevant videos on YouTube. These acts reflect micro-level intentions and are not necessarily geared towards mobilization like more traditional forms of digital activism. This microactivism has helped bring about a radical reformulation of the political arena. One major change that emerges from the ease of many-to-many communication brought about by SMS is the reduced incentive to join social movements (Earl and Schussman 2003). These new applications encourage movement entrepreneurs (Garrett 2006) not affiliated with formal social movements but interested in fomenting social change. The ease with which individuals can create content and connect with one another to share content is viewed by others as a harbinger of a more democratic and egalitarian society (Benkler 2006, Jenkins 2007, Shirky 2010). Salter (2005) argues that sites like SNS' provide new radical public spheres that provide additional spaces for voice cultivation and political citizenship formation. More importantly, in his view information technologies help shift political discourse to more informal venues that are less subject to elite control (Dahlberg 2004). Social critics like Morozov (2009) suggest, however, that microactivism might do more harm than good. Morozov (2009) refers \u2026"}
{"_id": "08fdff58458243e64bd3dab66e85838f26696830", "title": "Classification of Model Transformation Approaches", "text": "While the current OMG standards such as the Meta Object Facility (MOF) [MOF] and the UML [UML] provide a well-established foundation for defining PIMs and PSMs, no such wellestablished foundation exists for transforming PIMs into PSMs [GLR+02]. In 2002, in its effort to change this situation, the OMG initiated a standardization process by issuing a Request for Proposal (RFP) on Query / Views / Transformations (QVT) [QVT]. This process will eventually lead to an OMG standard for defining model transformations, which will be of interest not only for PIM-to-PSM transformations, but also for defining views on models and synchronization between models. Driven by practical needs and the OMG\u2019s request, a large number of approaches to model transformation have recently been proposed."}
{"_id": "bf9230a275f3b192859cc161711091d286a9a75d", "title": "Slant: A Blackboard System to Generate Plot, Figuration, and Narrative Discourse Aspects of Stories", "text": "We introduce Slant, a system that integrates more than a decade of research into computational creativity, and specifically story generation, by connecting subsystems that deal with plot, figuration, and the narrative discourse using a blackboard. The process of integrating these systems highlights differences in the representation of story and has led to a better understanding of how story can be usefully abstracted. The plot generator MEXICA and a component of Curveship are used with little modification in Slant, while the figuration subsystem Fig-S and the template generator GRIOT-Gen, inspired by GRIOT, are also components. The development of the new subsystem Verso, which deals with genre, shows how different genres can be computationally modeled and applied to in-development stories to generate results that are surprising in terms of their connections and valuable in terms of their relationship to cultural questions. Example stories are discussed, as is the potential of the system to allow for broader collaboration, the empirical testing of how subsystems interrelate, and possible contributions in literary and artistic contexts."}
{"_id": "a54d0ddbbae797967f52b84cbe8a6c5b357c08e9", "title": "Probabilistic Horn Abduction and Bayesian Networks", "text": "This paper presents a simple framework for Horn clause abduc tion with probabilities associated with hypotheses The framework incorporates assumptions about the rule base and independence as sumptions amongst hypotheses It is shown how any probabilistic knowledge representable in a discrete Bayesian belief network can be represented in this framework The main contribution is in nding a relationship between logical and probabilistic notions of evidential reasoning This provides a useful representation language in its own right providing a compromise between heuristic and epistemic ad equacy It also shows how Bayesian networks can be extended be yond a propositional language This paper also shows how a language with only unconditionally independent hypotheses can represent any probabilistic knowledge and argues that it is better to invent new hy potheses to explain dependence rather than having to worry about dependence in the language Scholar Canadian Institute for Advanced Research Probabilistic Horn abduction and Bayesian networks"}
{"_id": "2cddac8df01b5d803e839ede451dabaf512569b4", "title": "Locking the Elbow: Improved Antibody Fab Fragments as Chaperones for Structure Determination.", "text": "Antibody Fab fragments have been exploited with significant success to facilitate the structure determination of challenging macromolecules as crystallization chaperones and as molecular fiducial marks for single particle cryo-electron microscopy approaches. However, the inherent flexibility of the \"elbow\" regions, which link the constant and variable domains of the Fab, can introduce disorder and thus diminish their effectiveness. We have developed a phage display engineering strategy to generate synthetic Fab variants that significantly reduces elbow flexibility, while maintaining their high affinity and stability. This strategy was validated using previously recalcitrant Fab-antigen complexes where introduction of an engineered elbow region enhanced crystallization and diffraction resolution. Furthermore, incorporation of the mutations appears to be generally portable to other synthetic antibodies and may serve as a universal strategy to enhance the success rates of Fabs as structure determination chaperones."}
{"_id": "59305b7556616b561a89992c7a173650b0bedb20", "title": "Double Pole Four Throw switch design with CMOS inverter", "text": "To avoid the uses of multiple RF chain associated with the multiple antennas, RF switch is most essential component. Multiple antennas systems are used to replace traditional single antennas circuitry in the radio transceiver system in order to improve the transmission capability and reliability. The desired switching system must have low cost and simple structure, yet still can capture all the advantage of Multiple Inputs Multiple Output (MIMO) systems. In this paper, we have proposed a Double Pole Four Throw (DP4T) RF switch using Complementary Metal Oxide Semiconductor (CMOS) technology with inverter property. This switch is low in cost, and capable of selecting data streams to or from the two antennas for transmitting or receiving processes, respectively."}
{"_id": "61e80faa61e60c487b07b38f09e0210fe508f32f", "title": "Trellis: Privilege Separation for Multi-user Applications Made Easy", "text": "Operating systems provide a wide variety of resource isolation and access control mechanisms, ranging from traditional user-based security models to fine-grained permission systems as found in modern mobile operating systems. However, comparatively little assistance is available for defining and enforcing access control policies within multiuser applications. These applications, often found in enterprise environments, allow multiple users to operate at different privilege levels in terms of exercising application functionality and accessing data. Developers of such applications bear a heavy burden in ensuring that security policies over code and data in this setting are properly expressed and enforced. We present Trellis, an approach for expressing hierarchical access control policies in applications and enforcing these policies during execution. The approach enhances the development toolchain to allow programmers to partially annotate code and data with simple privilege level tags, and uses a static analysis to infer suitable tags for the entire application. At runtime, policies are extracted from the resulting binaries and are enforced by a modified operating system kernel. Our evaluation demonstrates that this approach effectively supports the development of secure multi-user applications with modest runtime performance overhead."}
{"_id": "aa2cd1f33fab53f083ed4f55664bca69143b9830", "title": "Markov Chains for Exploring Posterior Distributions", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "eb3344eeef17290d91d503afcc89566788e0690d", "title": "White matter lesion extension to automatic brain tissue segmentation on MRI", "text": "A fully automated brain tissue segmentation method is optimized and extended with white matter lesion segmentation. Cerebrospinal fluid (CSF), gray matter (GM) and white matter (WM) are segmented by an atlas-based k-nearest neighbor classifier on multi-modal magnetic resonance imaging data. This classifier is trained by registering brain atlases to the subject. The resulting GM segmentation is used to automatically find a white matter lesion (WML) threshold in a fluid-attenuated inversion recovery scan. False positive lesions are removed by ensuring that the lesions are within the white matter. The method was visually validated on a set of 209 subjects. No segmentation errors were found in 98% of the brain tissue segmentations and 97% of the WML segmentations. A quantitative evaluation using manual segmentations was performed on a subset of 6 subjects for CSF, GM and WM segmentation and an additional 14 for the WML segmentations. The results indicated that the automatic segmentation accuracy is close to the interobserver variability of manual segmentations."}
{"_id": "173b1b98f3fc2574bea4152d1aa06b9073469614", "title": "T-bar reconstruction of constricted ears and a new classification.", "text": "For the correction of constricted ears, many techniques are described in the literature, the majority based on Tanzer's classification of 1975. The improvements in ear reconstruction published by Brent, Nagata, Firmin and Park make better outcomes possible. It is therefore that a new classification for constricted ears is proposed, together with an alternative technique for correction of group IIA and IIB deformities, using a T-strut of costal cartilage to reconstruct the underdeveloped or missing superior crus of the antihelix."}
{"_id": "35da44eb2b036f4f34cee38f0cd60f1ce77053f0", "title": "Penetration testing in a box", "text": "Network and application vulnerability assessments have a tendency to be difficult and costly; however, failing to have an assessment done and fixing security loopholes may result in a security breach by malicious attackers. A security breach can cost an organization time and money remediating the damage, such as lost confidential business information, which far exceeds the cost of a security assessment. Our solution for this problem is a semi-automated system that can help a penetration tester, security professional, or systems administrator, scan and report on the vulnerabilities of a network and services running on the network. It is also able to perform some simulated attacks. This system relies on a miniaturized computer to host the necessary components to conduct a security assessment. This system has been done in an open source manner to allow others to contribute and benefit from a community effort in the name of security."}
{"_id": "e8455cd00dd7513800bce5aa028067de7138f53d", "title": "X-Band GaN SPDT MMIC with over 25 Watt Linear Power Handling", "text": "Single pole double throw (SPDT) switches are becoming more and more key components in phased-array radar transmit/receive modules. An SPDT switch must be able to handle the output power of a high power amplifier and must provide enough isolation to protect the low noise amplifier in the receive chain when the T/R module is transmitting. Therefore gallium nitride technology seems to become a key technology for high power SPDT switch design. The technology shows good performance on microwave frequencies and is able to handle high power. An X-band SPDT switch, with a linear power handling of over 25 W, has been designed, measured and evaluated. The circuit is designed in the coplanar waveguide AlGaN/GaN technology established at QinetiQ."}
{"_id": "56bfffdab3afae91e87462fb87bc2c018d5c7e0a", "title": "The nature of procrastination: a meta-analytic and theoretical review of quintessential self-regulatory failure.", "text": "Procrastination is a prevalent and pernicious form of self-regulatory failure that is not entirely understood. Hence, the relevant conceptual, theoretical, and empirical work is reviewed, drawing upon correlational, experimental, and qualitative findings. A meta-analysis of procrastination's possible causes and effects, based on 691 correlations, reveals that neuroticism, rebelliousness, and sensation seeking show only a weak connection. Strong and consistent predictors of procrastination were task aversiveness, task delay, self-efficacy, and impulsiveness, as well as conscientiousness and its facets of self-control, distractibility, organization, and achievement motivation. These effects prove consistent with temporal motivation theory, an integrative hybrid of expectancy theory and hyperbolic discounting. Continued research into procrastination should not be delayed, especially because its prevalence appears to be growing."}
{"_id": "dc755df031b02212fb7408bee97e1712411add32", "title": "Morpheo: Traceable Machine Learning on Hidden data", "text": "Morpheo is a transparent and secure machine learning platform collecting and analysing large datasets. It aims at building state-of-the art prediction models in various fields where data are sensitive. Indeed, it offers strong privacy of data and algorithm, by preventing anyone to read the data, apart from the owner and the chosen algorithms. Computations in Morpheo are orchestrated by a blockchain infrastructure, thus offering total traceability of operations. Morpheo aims at building an attractive economic ecosystem around data prediction by channelling cryptomoney from prediction requests to useful data and algorithms providers. Morpheo is designed to handle multiple data sources in a transfer learning approach in order to mutualize knowledge acquired from large datasets for applications with smaller but similar datasets. Document purpose: This is a white-paper: an introduction to the technology powering Morpheo. It is evolving with the project to reflect the current vision and development status of Morpheo. It introduces Morpheo as a data-agnostic platform, although Morpheo is originally developed for a medical application, namely sleep data analysis. For technical details about Morpheo, see the online documentation and source code."}
{"_id": "8390a7456294a543c938131f9c5a25f697579cbd", "title": "A Method for Refocusing Photos using Depth from Defocus", "text": "This paper will describe a method for refocusing photos taken with a regular camera. By capturing a series of images from the same viewpoint with varying focal depth, a depth metric for the scene can be calculated and used to artificially blur the photo in a realistic way to emulate a new focal depth."}
{"_id": "74cc7fc9b43507dcfee7896704a225c8f1da8b2e", "title": "ROUTERLESS NETWORKS-ON-CHIP", "text": "Traditional bus-based interconnects are simple and easy to implement, but the scalability is greatly limited. While router-based networks-on-chip (NoCs) offer superior scalability, they also incur significant power and area overhead due to complex router structures. In this paper, we explore a new class of on-chip networks, referred to as Routerless NoCs, where routers are completely eliminated. We propose a novel design that utilizes on-chip wiring resources smartly to achieve comparable hop count and scalability as router-based NoCs. Several effective techniques are also proposed that significantly reduce the resource requirement to avoid new network abnormalities in routerless NoC designs. Evaluation results show that, compared with a conventional mesh, the proposed routerless NoC achieves 9.5X reduction in power, 7.2X reduction in area, 2.5X reduction in zero-load packet latency, and 1.7X increase in throughput. Compared with a state-of-the-art low-cost NoC design, the proposed approach achieves 7.7X reduction in power, 3.3X reduction in area, 1.3X reduction in zero-load packet latency, and 1.6X increase"}
{"_id": "b0d343ad82eb4060f016ff39289eacb222c45632", "title": "Transferable Semi-Supervised Semantic Segmentation", "text": "The performance of deep learning based semantic segmentation models heavily depends on sufficient data with careful annotations. However, even the largest public datasets only provide samples with pixel-level annotations for rather limited semantic categories. Such data scarcity critically limits scalability and applicability of semantic segmentation models in real applications. In this paper, we propose a novel transferable semi-supervised semantic segmentation model that can transfer the learned segmentation knowledge from a few strong categories with pixel-level annotations to unseen weak categories with only image-level annotations, significantly broadening the applicable territory of deep segmentation models. In particular, the proposed model consists of two complementary and learnable components: a Label transfer Network (L-Net) and a Prediction transfer Network (PNet). The L-Net learns to transfer the segmentation knowledge from strong categories to the images in the weak categories and produces coarse pixel-level semantic maps, by effectively exploiting the similar appearance shared across categories. Meanwhile, the P-Net tailors the transferred knowledge through a carefully designed adversarial learning strategy and produces refined segmentation results with better details. Integrating the L-Net and P-Net achieves 96.5% and 89.4% performance of the fully-supervised baseline using 50% and 0% categories with pixel-level annotations respectively on PASCAL VOC 2012. With such a novel transfer mechanism, our proposed model is easily generalizable to a variety of new categories, only requiring image-level annotations, and offers appealing scalability in real applications."}
{"_id": "4e94ff9a775eb5b73af4971322ccf383cb1f85b2", "title": "FraudMiner: A Novel Credit Card Fraud Detection Model Based on Frequent Itemset Mining", "text": "This paper proposes an intelligent credit card fraud detection model for detecting fraud from highly imbalanced and anonymous credit card transaction datasets. The class imbalance problem is handled by finding legal as well as fraud transaction patterns for each customer by using frequent itemset mining. A matching algorithm is also proposed to find to which pattern (legal or fraud) the incoming transaction of a particular customer is closer and a decision is made accordingly. In order to handle the anonymous nature of the data, no preference is given to any of the attributes and each attribute is considered equally for finding the patterns. The performance evaluation of the proposed model is done on UCSD Data Mining Contest 2009 Dataset (anonymous and imbalanced) and it is found that the proposed model has very high fraud detection rate, balanced classification rate, Matthews correlation coefficient, and very less false alarm rate than other state-of-the-art classifiers."}
{"_id": "6005c9cc7abfd313cfc159aceda335aa691998e9", "title": "A decision support based on data mining in e-banking", "text": "The use of data mining techniques in banking domain is suitable due to the nature and sensitivity of bank data and due to the real time complex decision processes. The main concern for a bank's manager is to take good decisions in order to minimize the risks level associated to bank's activities. It is very important for a bank to have knowledge of causes which generate the financial crises or imbalances. Lending is one of the most risky activities in banking area and adequate methods to support the decision making process are necessary. In this paper the authors present a prototype decision support system based on data mining techniques used in lending process. The proposed system was designed to assist a customer who applies for a credit and it may represent an extension for e-banking activities."}
{"_id": "863e77593b9c3abed4d83348e2dc898a0bd9e850", "title": "A Survey on Outlier Detection Techniques for Credit Card Fraud Detection", "text": "Credit card fraud detection is an important application of outlier detection. Due to drastic increase in digital frauds, there is a loss of billions dollars and therefore various techniques are evolved for fraud detection and applied to diverse business fields. The traditional fraud detection schemes use data analysis methods that require knowledge about different domains such as financial, economics, law and business practices. The current fraud detection techniques may be offline or online, and may use neural networks, clustering, genetic algorithm, decision tree etc. There are various outlier detection techniques are available such as statistical based, density based, clustering based and so on. This paper projected to find credit card fraud by using appropriate outlier detection technique, which is suitable for online applications where large scale data is involved. The method should also work efficiently for applications where memory and computation limitations are present. Here we have discussed one such unsupervised method Principal Component Analysis(PCA) to detect an outlier."}
{"_id": "0bad381b84f48b28abc1a98f05993c8eb5be747d", "title": "Anomaly detection: A survey", "text": "Anomaly detection is an important problem that has been researched within diverse research areas and application domains. Many anomaly detection techniques have been specifically developed for certain application domains, while others are more generic. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. We have grouped existing techniques into different categories based on the underlying approach adopted by each technique. For each category we have identified key assumptions, which are used by the techniques to differentiate between normal and anomalous behavior. When applying a given technique to a particular domain, these assumptions can be used as guidelines to assess the effectiveness of the technique in that domain. For each category, we provide a basic anomaly detection technique, and then show how the different existing techniques in that category are variants of the basic technique. This template provides an easier and more succinct understanding of the techniques belonging to each category. Further, for each category, we identify the advantages and disadvantages of the techniques in that category. We also provide a discussion on the computational complexity of the techniques since it is an important issue in real application domains. We hope that this survey will provide a better understanding of the different directions in which research has been done on this topic, and how techniques developed in one area can be applied in domains for which they were not intended to begin with."}
{"_id": "3d07718300d4a59482c3f3baafaa696d28a4e027", "title": "Smart Home Mobile RFID-Based Internet-of-Things Systems and Services", "text": "Smart homes can apply new Internet-Of-Things concepts along with RFID technologies for creating ubiquitous services. This paper introduces a novel read-out method for a hierarchical wireless master-slave RFID reader architecture of multi standard NFC (Near Field Communication) and UHF (Ultra High Frequency) technologies to build a smart home service system that benefits in terms of cost, energy consumption and complexity. Various smart home service use cases such as washing programs, cooking, shopping and elderly health care are described as examples that make use of this system."}
{"_id": "5f0806351685bd999699399ea9553c91733ccb7d", "title": "Handwritten document image segmentation into text lines and words", "text": "Article history: Received 22 July 2008 Received in revised form 23 February 2009 Accepted 14 May 2009"}
{"_id": "4cbb07dff68a680870c2dd9f6cf1d9c6804d9367", "title": "A hybrid dragonfly algorithm with extreme learning machine for prediction", "text": "In this work, a proposed hybrid dragonfly algorithm (DA) with extreme learning machine (ELM) system for prediction problem is presented. ELM model is considered a promising method for data regression and classification problems. It has fast training advantage, but it always requires a huge number of nodes in the hidden layer. The usage of a large number of nodes in the hidden layer increases the test/evaluation time of ELM. Also, there is no guarantee of optimality of weights and biases settings on the hidden layer. DA is a recently promising optimization algorithm that mimics the moving behavior of moths. DA is exploited here to select less number of nodes in the hidden layer to speed up the performance of the ELM. It also is used to choose the optimal hidden layer weights and biases. A set of assessment indicators is used to evaluate the proposed and compared methods over ten regression data sets from the UCI repository. Results prove the capability of the proposed DA-ELM model in searching for optimal feature combinations in feature space to enhance ELM generalization ability and prediction accuracy. The proposed model was compared against the set of commonly used optimizers and regression systems. These optimizers are namely, particle swarm optimization (PSO) and genetic algorithm (GA). The proposed DA-ELM model proved an advance overall compared methods in both accuracy and generalization ability."}
{"_id": "7e56a6f07f578908363e1a5ce65056c0f3f143af", "title": "A comparative study of vein pattern recognition for biometrie authentication", "text": "In informational age there are many information crucial to a individual. To protect the data from the unwanted use we proposed the biometric system of vein recognition as today's world demand for a easy and accurate biometric system to protect our data. The biometric system is accurate and easy to use as every individual has a unique vein pattern. In this paper we have studied various method used by different people to achieve the goal to secure world and give the reader a brief prospective of this technology."}
{"_id": "20c7139595570f080fc85a054c84262d11a488bd", "title": "The 2009 Mario AI Competition", "text": "This paper describes the 2009 Mario AI Competition, which was run in association with the IEEE Games Innovation Conference and the IEEE Symposium on Computational Intelligence and Games. The focus of the competition was on developing controllers that could play a version of Super Mario Bros as well as possible. We describe the motivations for holding this competition, the challenges associated with developing artificial intelligence for platform games, the software and API developed for the competition, the competition rules and organization, the submitted controllers and the results. We conclude the paper by discussing what the outcomes of the competition can teach us both about developing platform game AI and about organizing game AI competitions. The first two authors are the organizers of the competition, while the third author is the winner of the competition."}
{"_id": "3c913046faba89d9befdb6d96faa9e556f739367", "title": "Characterization of evoked tactile sensation in forearm amputees with transcutaneous electrical nerve stimulation.", "text": "OBJECTIVE\nThe goal of this study is to characterize the phenomenon of evoked tactile sensation (ETS) on the stump skin of forearm amputees using transcutaneous electrical nerve stimulation (TENS).\n\n\nAPPROACH\nWe identified the projected finger map (PFM) of ETS on the stump skin in 11 forearm amputees, and compared perceptual attributes of the ETS in nine forearm amputees and eight able-bodied subjects using TENS. The profile of perceptual thresholds at the most sensitive points (MSPs) in each finger-projected area was obtained by modulating current amplitude, pulse width, and frequency of the biphasic, rectangular current stimulus. The long-term stability of the PFM and the perceptual threshold of the ETS were monitored in five forearm amputees for a period of 11 months.\n\n\nMAIN RESULTS\nFive finger-specific projection areas can be independently identified on the stump skin of forearm amputees with a relatively long residual stump length. The shape of the PFM was progressively similar to that of the hand with more distal amputation. Similar sensory modalities of touch, pressure, buzz, vibration, and numb below pain sensation could be evoked both in the PFM of the stump skin of amputees and in the normal skin of able-bodied subjects. Sensory thresholds in the normal skin of able-bodied subjects were generally lower than those in the stump skin of forearm amputees, however, both were linearly modulated by current amplitude and pulse width. The variation of the MSPs in the PFM was confined to a small elliptical area with 95% confidence. The perceptual thresholds of thumb-projected areas were found to vary less than 0.99 \u00d7 10(-2) mA cm(-2).\n\n\nSIGNIFICANCE\nThe stable PFM and sensory thresholds of ETS are desirable for a non-invasive neural interface that can feed back finger-specific tactile information from the prosthetic hand to forearm amputees."}
{"_id": "44a1df62a1c815fc84aa42788283655a38c85550", "title": "Evaluation of Text Summarization in a Cross-lingual Information Retrieval Framework", "text": "We report on research in multi-document summarization and on evaluation of summarization in the framework of cross-lingual information retrieval. This work was carried out during a summer workshop on Language Engineering held at Johns Hopkins University by a team of nine researchers from seven universities. The goals of the research were as follows: (1) to develop a toolkit for evaluation of single-document and multi-document summarizers, (2) to develop a modular multi-document summarizer, called MEAD, that works in both English and Chinese, and (3) to perform a meta-evaluation of four automatic summarizers, including MEAD, using several types of evaluation measures: some currently used by summarization researchers and a couple of novel techniques. Central to the experiments in this workshop was the cross-lingual experimental setup based on a large-scale Chinese and English parallel corpus. An extensive set of human judgments were specifically prepared by the Linguistic Data Consortium for our research. These human judgments include a) which documents are relevant to a certain query and b) which sentences in the relevant documents are most relevant to the query and which therefore constitute a good summary of the cluster. These judgments were used to construct variable-length multiand single document summaries as model summaries. Since one of the novel evaluation metrics that we used, Relevance Correlation, is based on the premise that good summaries preserve query relevance both within a language and across languages, we made use of a cross-lingual Information Retrieval (IR) engine. We evaluated the quality of the automatic summaries using co-selection and content-based evaluation, two established techniques. A relatively new metric, relative utility, was also extensively tested. Part of the new scientific contribution is the measurement of relevance correlation, which we introduced and systematically examined in this workshop. Relevance correlation measures the quality of summaries in comparison to the entire documents as a function of how much document relevance drops if summaries are indexed instead of documents. Our results show that this measure is sensible, in that it correlates with more established evaluation measures. Another contribution is the cross-lingual setup which allows us to automatically translate English queries into Chinese, perform Chinese IR with or without summarization. This allows us to calculate relevance correlation for English and for Chinese in parallel (i.e., for the same queries) and to take direct cross-lingual comparisons of evaluations. Additionally, an alternative way of constructing Chinese model summaries from English ones was implemented which relies on the sentence alignment of English and Chinese documents. 
The results of our large-scale meta-evaluation are numerous, but some of the highlights are the following: (1) All evaluation measures rank human summaries first, which is an appropriate and expected property of such measures, (2) Both relevance correlation and the content-based measures place leading sentence extracts ahead of the more sophisticated summarizers, (3) Relative utility ranks our system, MEAD, as the best summarizer for shorter summaries, although for longer summaries, lead-based summaries outperform MEAD, (4) Co-selection measurements show overall low agreement amongst humans (above chance), whereas relative utility reports higher numbers on the same data (but does not normalize for chance). The deliverable resources and software include: (1) a turn-key extractive multi-document summarizer, MEAD, which allows users to add their own features based on single sentences or pairs of sentences, (2) a large corpus of summaries produced by several automatic methods, including baseline and random summaries, (3) a collection of manual summaries produced by the Linguistic Data Consortium (LDC), (4) a battery of evaluation routines, (5) a collection of IR queries in English and Chinese and the corresponding relevance judgments from the Hong Kong news collection, (6) SMART relevance outputs for both full documents and summaries, (7) XML tools for processing of documents and summaries. JHU 2001 Summer workshop final report Evaluation of Text Summarization"}
{"_id": "ddd460c5d4ce5cc27c11b9aa1fe7d77a3e2bb7ae", "title": "AutoHair: fully automatic hair modeling from a single image", "text": "We introduce AutoHair, the first fully automatic method for 3D hair modeling from a single portrait image, with no user interaction or parameter tuning. Our method efficiently generates complete and high-quality hair geometries, which are comparable to those generated by the state-of-the-art methods, where user interaction is required. The core components of our method are: a novel hierarchical deep neural network for automatic hair segmentation and hair growth direction estimation, trained over an annotated hair image database; and an efficient and automatic data-driven hair matching and modeling algorithm, based on a large set of 3D hair exemplars. We demonstrate the efficacy and robustness of our method on Internet photos, resulting in a database of around 50K 3D hair models and a corresponding hairstyle space that covers a wide variety of real-world hairstyles. We also show novel applications enabled by our method, including 3D hairstyle space navigation and hair-aware image retrieval."}
{"_id": "649eef470d1dd398279789031c0b4c800ce16383", "title": "A Dual-Loop Antenna Design for Hepta-Band WWAN/LTE Metal-Rimmed Smartphone Applications", "text": "A simple direct-fed dual-loop antenna capable of providing hepta-band WWAN/LTE operation under surroundings of an unbroken metal rim in smartphone applications is proposed. The greatest highlight of this proposed antenna is that it provides a simple and effective multiband antenna solution for an unbroken metal-rimmed smartphone. The unbroken metal rim with 5 mm in height embraces the system circuit board of 130 \u00d7 70 mm2. Two no-ground portions of 10 \u00d7 70 mm2 and 5 \u00d7 70 mm2 are set on the top and bottom edge of the system circuit board, respectively. In-between the two separate no-ground portions, there is a system ground of 115 \u00d7 70 mm2 connected with the unbroken metal rim via a small grounded patch which divides the unbroken metal rim into two strips. Finally, a dual-loop antenna is formed by combining the inner system ground and two strips. This proposed dual-loop antenna is capable of covering GSM850/900/DCS/PCS/UMTS2100/LTE 2300/2500 operating bands. Detailed design considerations of the proposed antenna are described and both experimental and simulation results are also presented and discussed."}
{"_id": "8d66e5fecfdac7af9e726b05e957af224678e82c", "title": "Kalman filtering with state constraints : a survey of linear and nonlinear algorithms", "text": "The Kalman filter is the minimum-variance state estimator for linear dynamic systems with Gaussian noise. Even if the noise is non-Gaussian, the Kalman filter is the best linear estimator. For nonlinear systems it is not possible, in general, to derive the optimal state estimator in closed form, but various modifications of the Kalman filter can be used to estimate the state. These modifications include the extended Kalman filter, the unscented Kalman filter, and the particle filter. Although the Kalman filter and its modifications are powerful tools for state estimation, we might have information about a system that the Kalman filter does not incorporate. For example, we may know that the states satisfy equality or inequality constraints. In this case we can modify the Kalman filter to exploit this additional information and get better filtering performance than the Kalman filter provides. This paper provides an overview of various ways to incorporate state constraints in the Kalman filter and its nonlinear modifications. If both the system and state constraints are linear, then all of these different approaches result in the same state estimate, which is the optimal constrained linear state estimate. If either the system or constraints are nonlinear, then constrained filtering is, in general, not optimal, and different approaches give different results."}
{"_id": "e3a4988781e8613cecffa8b492fcf254c922cad9", "title": "Feature line extraction from unorganized noisy point clouds using truncated Fourier series", "text": "The detection of feature lines is important for representing and understanding geometric features of 3D models. In this paper, we introduce a new and robust method for extracting feature lines from unorganized point clouds. We use a one-dimensional truncated Fourier series for detecting feature points. Each point and its neighbors are approximated along the principal directions by using the truncated Fourier series, and the curvature of the point is computed from the approximated curves. The Fourier coefficients are computed by Fast Fourier Transform (FFT). We apply low-pass filtering to remove noise and to compute the curvature of the point robustly. For extracting feature points from the detected potential feature points, the potential feature points are thinned using a curvature weighted Laplacian-like smoothing method. The feature lines are constructed through growing extracted points and then projected onto the original point cloud. The efficiency and robustness of our approach is illustrated by several experimental results."}
{"_id": "da5bfddcfe703ca60c930e79d6df302920ab9465", "title": "An analysis of facial expression recognition under partial facial image occlusion", "text": "In this paper, an analysis of the effect of partial occlusion on facial expression recognition is investigated. The classification from partially occluded images in one of the six basic facial expressions is performed using a method based on Gabor wavelets texture information extraction, a supervised image decomposition method based on Discriminant Non-negative Matrix Factorization and a shape-based method that exploits the geometrical displacement of certain facial features. We demonstrate how partial occlusion affects the above mentioned methods in the classification of the six basic facial expressions, and indicate the way partial occlusion affects human observers when recognizing facial expressions. An attempt to specify which part of the face (left, right, lower or upper region) contains more discriminant information for each facial expression, is also made and conclusions regarding the pairs of facial expressions misclassifications that each type of occlusion introduces, are drawn. 2008 Published by Elsevier B.V."}
{"_id": "76c47b3ca6b51b66a54a3de5c1abdf2347845728", "title": "Otologic and Audiology Concerns of Microtia Repair.", "text": "Microtia is a congenital auricular deformity that commonly presents with associated congenital aural atresia. The most acute concern in these patients is concomitant hearing loss at birth. A team-based approach by plastic surgeons and otologists is necessary to address both the otologic and audiologic concerns of microtia and atresia. Hearing rehabilitation is imperative; yet it should not compromise the aesthetic components of reconstruction and vice versa. Here, the authors propose a framework to evaluate and manage patients with microtia and atresia with the goal of optimizing functional and cosmetic outcomes."}
{"_id": "6ac1ea07e72561b0c3d2b081770aecf807d42bff", "title": "Building a smart home environment for service robots based on RFID and sensor networks", "text": "This paper is concerned with constructing a prototype smart home environment which has been built in the research building of Korea Institute of Industrial Technology (KITECH) to demonstrate the practicability of a robot-assisted future home environment. Key functionalities that a home service robot must provide are localization, navigation, object recognition and object handling. A considerable amount of research has been conducted to make the service robot perform these operations with its own sensors, actuators and a knowledge database. With all heavy sensors, actuators and a database, the robot could have performed the given tasks in a limited environment or showed the limited capabilities in a natural environment. We initiated a smart home environment project for light-weight service robots to provide reliable services by interacting with the environment through the wireless sensor networks. This environment consists of the following three main components: smart objects with an radio frequency identification (RFID) tag and smart appliances with sensor network functionality; the home server that connects smart devices as well as maintains information for reliable services; and the service robots that perform tasks in collaboration with the environment. In this paper, we introduce various types of smart devices which are developed for assisting the robot by providing sensing and actuation, and present our approach on the integration of these devices to construct the smart home environment. Finally, we discuss the future directions of our project."}
{"_id": "eebebfa7b3e1a9544240fa173b3d65cad7ddb9ee", "title": "Natural Language Interface to Relational Database (NLI-RDB) Through Object Relational Mapping (ORM)", "text": "This paper proposes a novel approach for building a Natural Language Interface to a Relational Database (NLI-RDB) using Conversational Agent (CA), Information Extraction (IE) and Object Relational Mapping (ORM) framework. The CA will help in disambiguating the user\u2019s queries and guiding the user interaction. IE will play an important role in named entities extraction in order to map Natural Language queries into database queries. The ORM framework i.e. the Hibernate framework resolves the impedance mismatch between the Object Oriented Paradigms (OOP) and Relational Databases (RDBs) i.e. OOP concepts differ from RDB concepts, thus it reduces the complexity in generating SQL statements. Also, by utilizing ORM framework, the RDBs entities are mapped into real world objects, which bring the RDBs a step closer to the user. In addition, the ORM framework simplify the interaction between OOP and RDBs. The developed NLI-RDB system allows the user to interact with objects directly in natural language and through navigation, rather than by using SQL statements. This direct interaction tends to be easier and more acceptable for humans whom are nor technically orientated and have no SQL knowledge. The NLI-RDB system also offers friendly and interactive user interface in order to refine the query generated automatically. The NLI-RDB system has been evaluated by a group of participants through a combination of qualitative and quantitative measures. The experimental results show good performance of the prototype and excellent user\u2019s satisfaction."}
{"_id": "936b23b7611fdebc8c002b34f309762f4d719571", "title": "Artificial intelligence applications in the intensive care unit.", "text": "OBJECTIVE\nTo review the history and current applications of artificial intelligence in the intensive care unit.\n\n\nDATA SOURCES\nThe MEDLINE database, bibliographies of selected articles, and current texts on the subject.\n\n\nSTUDY SELECTION\nThe studies that were selected for review used artificial intelligence tools for a variety of intensive care applications, including direct patient care and retrospective database analysis.\n\n\nDATA EXTRACTION\nAll literature relevant to the topic was reviewed.\n\n\nDATA SYNTHESIS\nAlthough some of the earliest artificial intelligence (AI) applications were medically oriented, AI has not been widely accepted in medicine. Despite this, patient demographic, clinical, and billing data are increasingly available in an electronic format and therefore susceptible to analysis by intelligent software. Individual AI tools are specifically suited to different tasks, such as waveform analysis or device control.\n\n\nCONCLUSIONS\nThe intensive care environment is particularly suited to the implementation of AI tools because of the wealth of available data and the inherent opportunities for increased efficiency in inpatient care. A variety of new AI tools have become available in recent years that can function as intelligent assistants to clinicians, constantly monitoring electronic data streams for important trends, or adjusting the settings of bedside devices. The integration of these tools into the intensive care unit can be expected to reduce costs and improve patient outcomes."}
{"_id": "e5aaaac7852df686c35e61a6c777cfcb2246c726", "title": "Design of a Wideband Waveguide Slot Array Antenna and its Decoupling Method for Synthetic Aperture Radar", "text": "For the synthetic aperture radar systems, there is an increasing demand for achieving resolutions of 0.2 mtimes0.2 m. As the range resolution and system bandwidth are inversely proportional, the system bandwidth is expected to be greater than 1 GHz. Thus an antenna with wider band needs to be developed. Waveguide slot antennas have been implemented on several SAR satellites due to its inherent advantages such as high efficiency and power handling capacity, but its bandwidth is quite limited. To avoid the manufacture difficulties of the ridge waveguide, which is capable of broadening the bandwidth of slot antennas, a novel antenna element with conventional waveguide is designed. The bandwidth of VSWR les1.5 is greater than 1 GHz at X-band. To reduce the mutual coupling of closely placed antenna elements, a decoupling method with cavity-like walls inserted between adjacent elements is adopted and their effects on the performance of the antenna are summarized."}
{"_id": "0beaa7dd8d81034fa84dcdfca859bcb45bcbb8f1", "title": "Identification of a DNA-binding site for the transcription factor Haa1, required for Saccharomyces cerevisiae response to acetic acid stress", "text": "The transcription factor Haa1 is the main player in reprogramming yeast genomic expression in response to acetic acid stress. Mapping of the promoter region of one of the Haa1-activated genes, TPO3, allowed the identification of an acetic acid responsive element (ACRE) to which Haa1 binds in vivo. The in silico analysis of the promoter regions of the genes of the Haa1-regulon led to the identification of an Haa1-responsive element (HRE) 5'-GNN(G/C)(A/C)(A/G)G(A/G/C)G-3'. Using surface plasmon resonance experiments and electrophoretic mobility shift assays it is demonstrated that Haa1 interacts with high affinity (K(D) of 2\u2009nM) with the HRE motif present in the ACRE region of TPO3 promoter. No significant interaction was found between Haa1 and HRE motifs having adenine nucleotides at positions 6 and 8 (K(D) of 396 and 6780\u2009nM, respectively) suggesting that Haa1p does not recognize these motifs in vivo. A lower affinity of Haa1 toward HRE motifs having mutations in the guanine nucleotides at position 7 and 9 (K(D) of 21 and 119\u2009nM, respectively) was also observed. Altogether, the results obtained indicate that the minimal functional binding site of Haa1 is 5'-(G/C)(A/C)GG(G/C)G-3'. The Haa1-dependent transcriptional regulatory network active in yeast response to acetic acid stress is proposed."}
{"_id": "7cd7db67bdf9341ae65b56591088cf9ef5fb95c2", "title": "Analyzing spatial user behavior in computer games using geographic information systems", "text": "An important aspect of the production of digital games is user-oriented testing. A central problem facing practitioners is however the increasing complexity of user-game interaction in modern games, which places challenges on the evaluation of interaction using traditional user-oriented approaches. Gameplay metrics are instrumentation data which detail user behavior within the virtual environment of digital games, forming accurate and detailed datasets about user behavior that supplement existing user-testing methods such as playtesting and usability testing. In this paper existing work on gameplay metrics is reviewed, and spatial analysis of gameplay metrics introduced as a new approach in the toolbox of user-experience testing and research. Furthermore, Geographic Information Systems (GIS) are introduced as a tool for performing spatial analysis. A case study is presented with Tomb Raider: Underworld, showcasing the merger of GIS with gameplay metrics analysis and its application to game testing and design."}
{"_id": "3803e1c245c6b2867ab0460959d452f75c291481", "title": "Nanoseconds Switching for High Voltage Circuit Using Avalanche Transistors", "text": "The avalanche transistor is suitable for switching high voltage in kilovolts region. In this paper, a simple switching circuit consists of avalanche transistor is demonstrated. Sixteen of ZTX 415 avalanche transistors were used to switch a high voltage circuit up to 4.5 kV. A PIC16F84 microcontroller was utilized as control unit to generate input trigger. The result shows that the developed circuit was able to switch applied voltage up-to 4.5 kV with an average falling time is 2.89 ns."}
{"_id": "763afb9dc8650101be06053e2eb612d9e3a1ce18", "title": "Signal Processing and Machine Learning with Differential Privacy: Algorithms and Challenges for Continuous Data", "text": "Private companies, government entities, and institutions such as hospitals routinely gather vast amounts of digitized personal information about the individuals who are their customers, clients, or patients. Much of this information is private or sensitive, and a key technological challenge for the future is how to design systems and processing techniques for drawing inferences from this large-scale data while maintaining the privacy and security of the data and individual identities. Individuals are often willing to share data, especially for purposes such as public health, but they expect that their identity or the fact of their participation will not be disclosed. In recent years, there have been a number of privacy models and privacy-preserving data analysis algorithms to answer these challenges. In this article, we will describe the progress made on differentially private machine learning and signal processing."}
{"_id": "4208a299d76183df72c8fa77452aecd7d590c24b", "title": "Overview of the ImageCLEF 2016 Medical Task", "text": "ImageCLEF is the image retrieval task of the Conference and Labs of the Evaluation Forum (CLEF). ImageCLEF has historically focused on the multimodal and language\u2013independent retrieval of images. Many tasks are related to image classification and the annotation of image data as well. The medical task has focused more on image retrieval in the beginning and then retrieval and classification tasks in subsequent years. In 2016 a main focus was the creation of meta data for a collection of medical images taken from articles of the the biomedical scientific literature. In total 8 teams participated in the four tasks and 69 runs were submitted. No team participated in the caption prediction task, a totally new task. Deep learning has now been used for several of the ImageCLEF tasks and by many of the participants obtaining very good results. A majority of runs was submitting using deep learning and this follows general trends in machine learning. In several of the tasks multimodal approaches clearly led to best results."}
{"_id": "f27ef9c1ff0b00ee46beb1bed2f34002bae728ac", "title": "Principal Component Analysis", "text": ""}
{"_id": "bd11e68d9ca591fb3d2afe630a0f9a4a2654852c", "title": "Recognizing independent and joint activities among multiple residents in smart environments", "text": "The pervasive sensing technologies found in smart homes offer unprecedented opportunities for providing health monitoring and assistance to individuals experiencing difficulties living independently at home. A primary challenge that needs to be tackled to meet this need is the ability to recognize and track functional activities that people perform in their own homes and everyday settings. In this paper, we look at approaches to perform real-time recognition of Activities of Daily Living. We enhance other related research efforts to develop approaches that are effective when activities are interrupted and interleaved. To evaluate the accuracy of our recognition algorithms we assess them using real data collected from participants performing activities in our on-campus smart apartment testbed."}
{"_id": "4f33608445f43a07cfc37d48e1c2326839d884b9", "title": "Definition Modeling: Learning to define word embeddings in natural language", "text": "Distributed representations of words have been shown to capture lexical semantics, as demonstrated by their effectiveness in word similarity and analogical relation tasks. But, these tasks only evaluate lexical semantics indirectly. In this paper, we study whether it is possible to utilize distributed representations to generate dictionary definitions of words, as a more direct and transparent representation of the embeddings\u2019 semantics. We introduce definition modeling, the task of generating a definition for a given word and its embedding. We present several definition model architectures based on recurrent neural networks, and experiment with the models over multiple data sets. Our results show that a model that controls dependencies between the word being defined and the definition words performs significantly better, and that a characterlevel convolution layer designed to leverage morphology can complement word-level embeddings. Finally, an error analysis suggests that the errors made by a definition model may provide insight into the shortcomings of word embeddings."}
{"_id": "d39f016ef25f0ba3d9db24b94667eceeef981ecc", "title": "Message Passing Inference for Large Scale Graphical Models with High Order Potentials", "text": "To keep up with the Big Data challenge, parallelized algorithms based on dual decomposition have been proposed to perform inference in Markov random fields. Despite this parallelization, current algorithms struggle when the energy has high order terms and the graph is densely connected. In this paper we propose a partitioning strategy followed by a message passing algorithm which is able to exploit pre-computations. It only updates the high-order factors when passing messages across machines. We demonstrate the effectiveness of our approach on the task of joint layout and semantic segmentation estimation from single images, and show that our approach is orders of magnitude faster than current methods."}
{"_id": "b4637f569695c9399781899115ca68a9feac10c6", "title": "Increasing emitter efficiency in 3.3-kV enhanced trench IGBTs for higher short-circuit capability", "text": "In this paper, a 3.3-kV Enhanced Trench IGBT has been designed with a high emitter efficiency, for improving its short-circuit robustness. The carrier distribution profile has been shaped in a way that it is possible to increase the electric field at the surface of the IGBT, and thereby, counteract the Kirk Effect onset. This design approach is beneficial for mitigating high-frequency oscillations, typically observed in IGBTs under short-circuit conditions. The effectiveness of the proposed design rule is validated by means of mixed-mode device simulations. Then, two IGBTs have been fabricated with different emitter efficiencies and tested under short circuit, validating that the high-frequency oscillations can be mitigated, with higher emitter efficiency IGBT designs."}
{"_id": "3f2ba7afbc0457b9429108d6807558f12df8c508", "title": "Transparent authentication systems for mobile device security: A review", "text": "Sensitive data such as text messages, contact lists, and personal information are stored on mobile devices. This makes authentication of paramount importance. More security is needed on mobile devices since, after point-of-entry authentication, the user can perform almost all tasks without having to re-authenticate. For this reason, many authentication methods have been suggested to improve the security of mobile devices in a transparent and continuous manner, providing a basis for convenient and secure user re-authentication. This paper presents a comprehensive analysis and literature review on transparent authentication systems for mobile device security. This review indicates a need to investigate when to authenticate the mobile user by focusing on the sensitivity level of the application, and understanding whether a certain application may require a protection or not."}
{"_id": "7224d949cd34082b1249e8be84fde65b2c6b34fd", "title": "Cayuga : A High-Performance Event Processing Engine \u2217 [ Demonstration Paper ]", "text": "We propose a demonstration of Cayuga, a complex event monitoring system for high speed data streams. Our demonstration will show Cayuga applied to monitoring Web feeds; the demo will illustrate the expressiveness of the Cayuga query language, the scalability of its query processing engine to high stream rates, and a visualization of the internals of the query processing engine."}
{"_id": "66cdcad99a8bc9b733c8f9a5d42b098844c13d42", "title": "General Sales Forecast Models for Automobile Markets and their Analysis", "text": "In this paper, various enhanced sales forecast methodologies and models for the automobile market are presented. The methods used deliver highly accurate predictions while maintaining the ability to explain the underlying model at the same time. The representation of the economic training data is discussed, as well as its effects on the newly registered automobiles to be predicted. The methodology mainly consists of time series analysis and classical Data Mining algorithms, whereas the data is composed of absolute and/or relative market-specific exogenous parameters on a yearly, quarterly, or monthly base. It can be concluded that the monthly forecasts were especially improved by this enhanced methodology using absolute, normalized exogenous parameters. Decision Trees are considered as the most suitable method in this case, being both accurate and explicable. The German and the US-American automobile market are presented for the evaluation of the forecast models."}
{"_id": "8743c575c1a9735ecb4b0ffec9c20b4bdef65789", "title": "Quality expectations of machine translation", "text": "Machine Translation (MT) is being deployed for a range of use-cases by millions of people on a daily basis. There should, therefore, be no doubt as to the utility of MT. However, not everyone is convinced that MT can be useful, especially as a productivity enhancer for human translators. In this chapter, I address this issue, describing how MT is currently deployed, how its output is evaluated and how this could be enhanced, especially as MT quality itself improves. Central to these issues is the acceptance that there is no longer a single \u2018gold standard\u2019 measure of quality, such that the situation in which MT is deployed needs to be borne in mind, especially with respect to the expected \u2018shelf-life\u2019 of the translation itself. 1 Machine Translation Today Machine Translation (MT) is being deployed for a range of use-cases by millions of people on a daily basis. I will examine the reasons for this later in this chapter, but one inference is very clear: those people using MT in those use-cases must already be satisfied with the level of quality emanating from the MT systems they are deploying, otherwise they would stop using them. That is not the same thing at all as saying that MT quality is perfect, far from it. The many companies and academic researchers who develop and deploy MT engines today continue to strive to improve the quality of the translations produced. This too is an implicit acceptance of the fact that the level of quality is sub-optimal \u2013 for some use-cases at least \u2013 and can be improved. If MT system output is good enough for some areas of application, yet at the same time system developers are trying hard to improve the level of translations produced by their engines, then translation quality \u2013 whether produced by a machine or by a human \u2013 needs to be measurable. Note that this applies also to translators who complain that MT quality is too poor to be used in their workflows; in order to decide that with some certainty \u2013 rather than rejecting MT out-of-hand merely as a knee-jerk reaction to the onset of this new technology \u2013 the impact of MT on translators\u2019 work needs to be measurable. In Way (2013), I appealed to two concepts, which are revisited here, namely:"}
{"_id": "96e7561bd99ed9f607440245451038aeda8d8075", "title": "Big data reduction framework for value creation in sustainable enterprises", "text": ""}
{"_id": "212d1c7cfad4d8dae39deb669337cb46b0274d78", "title": "Fuzzy Queries over NoSQL Graph Databases: Perspectives for Extending the Cypher Language", "text": "When querying databases, users often wish to express vague concepts, as for instance asking for the cheap hotels. This has been extensively studied in the case of relational databases. In this paper, we propose to study how such useful techniques can be adapted to NoSQL graph databases where the role of fuzziness is crucial. Such databases are indeed among the fastest-growing models for dealing with big data, especially when dealing with network data (e.g., social networks). We consider the Cypher declarative query language proposed for Neo4j which is the current leader on this market, and we present how to express fuzzy queries."}
{"_id": "6efc98642024e84269ba2a0cfbfd10ca06212368", "title": "Are image quality metrics adequate to evaluate the quality of geometric objects?", "text": "Geometric objects are often represented by many millions of triangles or polygons, which limits the ease with which they can be transmitted and displayed electronically. This has led to the development of many algorithms for simplifying geometric models, and to the recognition that metrics are required to evaluate their success. The goal is to create computer graphic renderings of the object that do not appear to be degraded to a human observer. The perceptual evaluation of simplified objects is a new topic. One approach has been to use image-based metrics to predict the perceived degradation of simplified 3-D models 1 Since that 2-D images of 3-D objects can have significantly different perceived quality, depending on the direction of the illumination, 2 2-D measures of image quality may not adequately capture the perceived quality of 3-D objects. To address this question, we conducted experiments in which we explicitly compared the perceived quality of animated 3-D objects and their corresponding 2-D still image projections. Our results suggest that 2-D judgements do not provide a good predictor of 3-D image quality, and identify a need to develop \u201cobject quality metrics.\u201d"}
{"_id": "a60791316f5d749d9248c755112653bd527db2fe", "title": "Motor Bearing Fault Diagnosis Using Trace Ratio Linear Discriminant Analysis", "text": "Bearings are critical components in induction motors and brushless direct current motors. Bearing failure is the most common failure mode in these motors. By implementing health monitoring and fault diagnosis of bearings, unscheduled maintenance and economic losses caused by bearing failures can be avoided. This paper introduces trace ratio linear discriminant analysis (TR-LDA) to deal with high-dimensional non-Gaussian fault data for dimension reduction and fault classification. Motor bearing data with single-point faults and generalized-roughness faults are used to validate the effectiveness of the proposed method for fault diagnosis. Comparisons with other conventional methods, such as principal component analysis, local preserving projection, canonical correction analysis, maximum margin criterion, LDA, and marginal Fisher analysis, show the superiority of TR-LDA in fault diagnosis."}
{"_id": "60f732dc6c823a4210d0fc7e35ef9bed99323f14", "title": "GPGPU Enabled Ray Directed Adaptive Volume Visualization for High Density Scans", "text": "This paper presents an open source implementation of a volume visualizer capable of rendering large scale simulated and tomographic volume datasets on everyday commodity hardware. The ability to visualize this data enables a wide range of analysis in many scientific disciplines, such as medicine, biology, engineering, and physics. The implementation presented takes advantage of a hierarchical space filling curve to efficiently query only a subset of the data required for accurate visualization. A major advantage of this implementation is that it is open source, and works on a wide range of visualization devices by using the Unity game engine, a cross platform engine capable of targeting over 25 platforms [24]. This work's unique contribution is utilizing a space filling curve for multiple bricks in the volume to permit adaptive streaming at different resolutions."}
{"_id": "81dd01c60d3a31703da550c786b1048243b61aee", "title": "A real-time grading method of apples based on features extracted from defects", "text": "This paper presents a hierarchical grading method applied to Jonagold apples. Several images covering the whole surface of the fruits were acquired thanks to a prototype grading machine. These images were then segmented and the features of the defects were extracted. During a learning procedure, the objects were classified into clusters by k-mean clustering. The classification probabilities of the objects were summarised and on this basis the fruits were graded using quadratic discriminant analysis. The fruits were correctly graded with a rate of 73 %. The errors were found having origins in the segmentation of the defects or for a particular wound, in a confusion with the calyx end."}
{"_id": "893746eaec95802621b34caeaea57ccc4bf168f7", "title": "Immuno-inspired autonomic system for cyber defense", "text": "The biological immune system is an autonomic system for self-protection, which has evolved over millions of years probably through extensive redesigning, testing, tuning and optimization process. The powerful information processing capabilities of the immune system, such as feature extraction, pattern recognition, learning, memory, and its distributive nature provide rich metaphors for its artificial counterpart. Our study focuses on building an autonomic defense system, using some immunological metaphors for information gathering, analyzing, decision making and launching threat and attack responses. This on-going research effort is not to mimic the nature but to explore and learn valuable lessons useful for self-adaptive cyber defense systems."}
{"_id": "6112ba25390036f5cd73797520d49418ff715c56", "title": "Fast \u03b5-free Inference of Simulation Models with Bayesian Conditional Density Estimation", "text": "Many statistical models can be simulated forwards but have intractable likelihoods. Approximate Bayesian Computation (ABC) methods are used to infer properties of these models from data. Traditionally these methods approximate the posterior over parameters by conditioning on data being inside an -ball around the observed data, which is only correct in the limit \u21920. Monte Carlo methods can then draw samples from the approximate posterior to approximate predictions or error bars on parameters. These algorithms critically slow down as \u21920, and in practice draw samples from a broader distribution than the posterior. We propose a new approach to likelihood-free inference based on Bayesian conditional density estimation. Preliminary inferences based on limited simulation data are used to guide later simulations. In some cases, learning an accurate parametric representation of the entire true posterior distribution requires fewer model simulations than Monte Carlo ABC methods need to produce a single sample from an approximate posterior."}
{"_id": "e9676cb69fd2659a2c3298e8297b7281719738dd", "title": "Gradient Descent Quantizes ReLU Network Features", "text": "Deep neural networks are often trained in the over-parametrized regime (i.e. with far more parameters than training examples), and understanding why the training converges to solutions that generalize remains an open problem Zhang et al. [2017]. Several studies have highlighted the fact that the training procedure, i.e. mini-batch Stochastic Gradient Descent (SGD) leads to solutions that have specific properties in the loss landscape. However, even with plain Gradient Descent (GD) the solutions found in the over-parametrized regime are pretty good and this phenomenon is poorly understood. We propose an analysis of this behavior for feedforward networks with a ReLU activation function under the assumption of small initialization and learning rate and uncover a quantization effect: The weight vectors tend to concentrate at a small number of directions determined by the input data. As a consequence, we show that for given input data there are only finitely many, \u201csimple\u201d functions that can be obtained, independent of the network size. This puts these functions in analogy to linear interpolations (for given input data there are finitely many triangulations, which each determine a function by linear interpolation). We ask whether this analogy extends to the generalization properties while the usual distribution-independent generalization property does not hold, it could be that for e.g. smooth functions with bounded second derivative an approximation property holds which could \u201cexplain\u201d generalization of networks (of unbounded size) to unseen inputs."}
{"_id": "0ecb87695437518a3cc5e98f0b872fbfaeeb62be", "title": "An Intrusion Detection System Based on KDD-99 Data using Data Mining Techniques and Feature Selection", "text": "Internet and internet users are increasing day by day. Also due to rapid development of internet technology , security is becoming big issue. Intruders are monitoring comput er network continuously for attacks. A sophisticated firewall with efficient intrusion detection system (IDS) is required to pre vent computer network from attacks. A comprehensive study of litera tures proves that data mining techniques are more powerful techni que to develop IDS as a classifier. Performance of classif ier is a crucial issue in terms of its efficiency, also number of fe ature to be scanned by the IDS should also be optimized. In thi s paper two techniques C5.0 and artificial neural network (ANN) ar e utilized with feature selection. Feature selection techniques will discard some irrelevant features while C5.0 and ANN acts as a classifier to classify the data in either normal type or one of t he five types of attack.KDD99 data set is used to train and test the models ,C5.0 model with numbers of features is producing better r sults with all most 100% accuracy. Performances were also verified in terms of data partition size."}
{"_id": "5b69da724e0e2c0cfec7a81de3c117cf8983fb55", "title": "Reward, Motivation, and Reinforcement Learning", "text": "There is substantial evidence that dopamine is involved in reward learning and appetitive conditioning. However, the major reinforcement learning-based theoretical models of classical conditioning (crudely, prediction learning) are actually based on rules designed to explain instrumental conditioning (action learning). Extensive anatomical, pharmacological, and psychological data, particularly concerning the impact of motivational manipulations, show that these models are unreasonable. We review the data and consider the involvement of a rich collection of different neural systems in various aspects of these forms of conditioning. Dopamine plays a pivotal, but complicated, role."}
{"_id": "59ad3c0a160d67c1b75e8355d9ded6bdfa2ca1b6", "title": "Misconceptions about real-time computing: a serious problem for next-generation systems", "text": "The author defines real-time computing and states and dispels the most common misconceptions about it. He discusses the fundamental technical issues of real-time computing. He examines specification and verification, scheduling theory, operating systems, programming languages and design methodology, distributed databases, artificial intelligence, fault tolerance, architectures, and communication.<>"}
{"_id": "37804bb640f5a739ff1cb5ab5ce317a12094738b", "title": "The dark side of the sharing economy ... and how to lighten it", "text": "Improving the sharing economy will require addressing myriad problems."}
{"_id": "735d4220d5579cc6afe956d9f6ea501a96ae99e2", "title": "On the momentum term in gradient descent learning algorithms", "text": "A momentum term is usually included in the simulations of connectionist learning algorithms. Although it is well known that such a term greatly improves the speed of learning, there have been few rigorous studies of its mechanisms. In this paper, I show that in the limit of continuous time, the momentum parameter is analogous to the mass of Newtonian particles that move through a viscous medium in a conservative force field. The behavior of the system near a local minimum is equivalent to a set of coupled and damped harmonic oscillators. The momentum term improves the speed of convergence by bringing some eigen components of the system closer to critical damping. Similar results can be obtained for the discrete time case used in computer simulations. In particular, I derive the bounds for convergence on learning-rate and momentum parameters, and demonstrate that the momentum term can increase the range of learning rate over which the system converges. The optimal condition for convergence is also analyzed."}
{"_id": "c878f99c2f0bb4be290f082470d803acf4048b16", "title": "Real time contextual collective anomaly detection over multiple data streams", "text": "Anomaly detection has always been a critical and challenging problem in many application areas such as industry, healthcare, environment and finance. This problem becomes more di cult in the Big Data era as the data scale increases dramatically and the type of anomalies gets more complicated. In time sensitive applications like real time monitoring, data are often fed in streams and anomalies are required to be identified online across multiple streams with a short time delay. The new data characteristics and analysis requirements make existing solutions no longer suitable. In this paper, we propose a framework to discover a new type of anomaly called contextual collective anomaly over a collection of data streams in real time. A primary advantage of this solution is that it can be seamlessly integrated with real time monitoring systems to timely and accurately identify the anomalies. Also, the proposed framework is designed in a way with a low computational intensity, and is able to handle large scale data streams. To demonstrate the e\u21b5ectiveness and e ciency of our framework, we empirically validate it on two real world applications."}
{"_id": "94013c80fc158979aa0a0084db993ba0f9d6c35b", "title": "Evaluating Induced CCG Parsers on Grounded Semantic Parsing", "text": "We compare the effectiveness of four different syntactic CCG parsers for a semantic slotfilling task to explore how much syntactic supervision is required for downstream semantic analysis. This extrinsic, task-based evaluation provides a unique window to explore the strengths and weaknesses of semantics captured by unsupervised grammar induction systems. We release a new Freebase semantic parsing dataset called SPADES (Semantic PArsing of DEclarative Sentences) containing 93K cloze-style questions paired with answers. We evaluate all our models on this dataset. Our code and data are available at https://github.com/ sivareddyg/graph-parser."}
{"_id": "f04391a851714d7c3d0f57365ae6f2e38a30df57", "title": "Analysis of stairs-climbing ability for a tracked reconfigurable modular robot", "text": "Stairs-climbing ability is the crucial performance of mobile robot for urban environment mission such as urban search and rescue or urban reconnaissance. The track type mobile mechanism has been widely applied for its advantages such as high stability, easy to control, low terrain pressure, and continuous drive. Stairs-climbing is a complicated process for a tracked mobile robot under kinematics and dynamics constraints. In this paper, the stairs-climbing process has been divided into riser climbing, riser crossing, and nose line climbing. During each climbing process, robot's mobility has been analyzed for its kinematics and dynamics factor. The track velocity and acceleration's influences on riser climbing have been analyzed. And the semiempirical design method of the track grouser and the module length has been provided in riser crossing and nose line climbing correspondingly. Finally, stairs-climbing experiments have been made on the two-module robot in line type, and three-module robot in line type and in triangle type respectively."}
{"_id": "8b8788ac5a01280c6484b30cac7a14894f29edf7", "title": "An Overview of the Theory and Applications of Metasurfaces: The Two-Dimensional Equivalents of Metamaterials", "text": "Metamaterials are typically engineered by arranging a set of small scatterers or apertures in a regular array throughout a region of space, thus obtaining some desirable bulk electromagnetic behavior. The desired property is often one that is not normally found naturally (negative refractive index, near-zero index, etc.). Over the past ten years, metamaterials have moved from being simply a theoretical concept to a field with developed and marketed applications. Three-dimensional metamaterials can be extended by arranging electrically small scatterers or holes into a two-dimensional pattern at a surface or interface. This surface version of a metamaterial has been given the name metasurface (the term metafilm has also been employed for certain structures). For many applications, metasurfaces can be used in place of metamaterials. Metasurfaces have the advantage of taking up less physical space than do full three-dimensional metamaterial structures; consequently, metasurfaces offer the possibility of less-lossy structures. In this overview paper, we discuss the theoretical basis by which metasurfaces should be characterized, and discuss their various applications. We will see how metasurfaces are distinguished from conventional frequency-selective surfaces. Metasurfaces have a wide range of potential applications in electromagnetics (ranging from low microwave to optical frequencies), including: (1) controllable \u201csmart\u201d surfaces, (2) miniaturized cavity resonators, (3) novel wave-guiding structures, (4) angular-independent surfaces, (5) absorbers, (6) biomedical devices, (7) terahertz switches, and (8) fluid-tunable frequency-agile materials, to name only a few. In this review, we will see that the development in recent years of such materials and/or surfaces is bringing us closer to realizing the exciting speculations made over one hundred years ago by the work of Lamb, Schuster, and Pocklington, and later by Mandel'shtam and Veselago."}
{"_id": "aed221e57f865d605257383426a6bbc39200c61a", "title": "Millimeter-Wave Radar Sensor for Snow Height Measurements", "text": "A small and lightweight frequency-modulated continuous-wave (FMCW) radar system is used for the determination of snow height by measuring the distance to the snow surface from a platform. The measurements have been performed at the Centre des E\u0301tudes de la Neige (Col de Porte), which is located near Grenoble in the French Alps. It is shown that the FMCW radar at millimeter-wave frequencies is an extremely promising approach for distance measurements to snow surfaces, e.g., in the mountains or in an Arctic environment. The characteristics of the radar sensor are described in detail. The relevant accuracy to measure the distance to a snow layer is shown at different heights and over an extended time duration. A dedicated laser snow telemeter is used as reference. In addition, the reflection from different types of snow is shown."}
{"_id": "51cc5d02fefa04535933c216c077ac7ffc4ee205", "title": "Real-time multi-feature based fire flame detection in video", "text": "In this paper, we present a new approach to detect fire flame by processing and analyzing the stationary camera videos. For a fire detection system, it is desired to be sensitive and reliable. The proposed method improves not only the sensitivity but also the reliability through reducing the susceptibility to false alarms. The proposed approach based on multi-feature, i.e., chromatic features, dynamic features, texture features, and contour features, can both improve the sensitivity and reliability in fire detection. In our approach, we adopt a novel algorithm to extract the moving region and analyze the frequency of flickers. Experimental results show that the proposed method can run in real-time and performs favorably against the state-of-art methods with higher Page 1 of 22 IET Review Copy Only IET Image Processing This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication in an issue of the journal. To cite the paper please use the doi provided on the Digital Library page."}
{"_id": "3d5c189f522f68de8a5f5e2e1f498de392a5e0b1", "title": "Melody Generation for Pop Music via Word Representation of Musical Properties", "text": "Automatic melody generation for pop music has been a long-time aspiration for both AI researchers and musicians. However, learning to generate euphonious melody has turned out to be highly challenging due to a number of factors. Representation of multivariate property of notes has been one of the primary challenges. It is also difficult to remain in the permissible spectrum of musical variety, outside of which would be perceived as a plain random play without auditory pleasantness. Observing the conventional structure of pop music poses further challenges. In this paper, we propose to represent each note and its properties as a unique \u2018word,\u2019 thus lessening the prospect of misalignments between the properties, as well as reducing the complexity of learning. We also enforce regularization policies on the range of notes, thus encouraging the generated melody to stay close to what humans would find easy to follow. Furthermore, we generate melody conditioned on song part information, thus replicating the overall structure of a full song. Experimental results demonstrate that our model can generate auditorily pleasant songs that are more indistinguishable from human-written ones than previous models. 1"}
{"_id": "c577041829cdde8078bb63ee54078673e0fed1f5", "title": "PinMe: Tracking a Smartphone User around the World", "text": "With the pervasive use of smartphones that sense, collect, and process valuable information about the environment, ensuring location privacy has become one of the most important concerns in the modern age. A few recent research studies discuss the feasibility of processing sensory data gathered by a smartphone to locate the phone\u2019s owner, even when the user does not intend to share his location information, e.g., when the user has turned off the Global Positioning System (GPS) on the device. Previous research efforts rely on at least one of the two following fundamental requirements, which impose significant limitations on the adversary: (i) the attacker must accurately know either the user\u2019s initial location or the set of routes through which the user travels and/or (ii) the attacker must measure a set of features, e.g., device acceleration, for different potential routes in advance and construct a training dataset. In this paper, we demonstrate that neither of the above-mentioned requirements is essential for compromising the user\u2019s location privacy. We describe PinMe, a novel user-location mechanism that exploits non-sensory/sensory data stored on the smartphone, e.g., the environment\u2019s air pressure and device\u2019s timezone, along with publicly-available auxiliary information, e.g., elevation maps, to estimate the user\u2019s location when all location services, e.g., GPS, are turned off. Unlike previously-proposed attacks, PinMe neither requires any prior knowledge about the user nor a training dataset on specific routes. We demonstrate that PinMe can accurately estimate the user\u2019s location during four activities (walking, traveling on a train, driving, and traveling on a plane). We also suggest several defenses against the proposed attack."}
{"_id": "6307f94aefdc7268c27e3af8fc04f090bc1b18bb", "title": "A Taxi Order Dispatch Model based On Combinatorial Optimization", "text": "Taxi-booking apps have been very popular all over the world as they provide convenience such as fast response time to the users. The key component of a taxi-booking app is the dispatch system which aims to provide optimal matches between drivers and riders. Traditional dispatch systems sequentially dispatch taxis to riders and aim to maximize the driver acceptance rate for each individual order. However, the traditional systems may lead to a low global success rate, which degrades the rider experience when using the app. In this paper, we propose a novel system that attempts to optimally dispatch taxis to serve multiple bookings. The proposed system aims to maximize the global success rate, thus it optimizes the overall travel efficiency, leading to enhanced user experience. To further enhance users' experience, we also propose a method to predict destinations of a user once the taxi-booking APP is started. The proposed method employs the Bayesian framework to model the distribution of a user's destination based on his/her travel histories.\n We use rigorous A/B tests to compare our new taxi dispatch method with state-of-the-art models using data collected in Beijing. Experimental results show that the proposed method is significantly better than other state-of-the art models in terms of global success rate (increased from 80% to 84%). Moreover, we have also achieved significant improvement on other metrics such as user's waiting-time and pick-up distance. For our destination prediction algorithm, we show that our proposed model is superior to the baseline model by improving the top-3 accuracy from 89% to 93%. The proposed taxi dispatch and destination prediction algorithms are both deployed in our online systems and serve tens of millions of users everyday."}
{"_id": "f348604717a64b54a0419b04902f979dadee60b3", "title": "LiveNet: Improving features generalization for face liveness detection using convolution neural networks", "text": "Performance of face liveness detection algorithms in cross-database face liveness detection tests is one of the key issues in face-biometric based systems. Recently, Convolution Neural Networks (CNN) classifiers have shown remarkable performance in intra-database face liveness detection tests. However, a little effort has been made to improve the generalization capability of CNN classifiers for cross-database and unconstrained face liveness detection tests. In this paper, we propose an efficient strategy for training deep CNN classifiers for face liveness detection task. We utilize continuous data-randomization (like bootstrapping) in the form of small mini-batches during training CNN classifiers on small scale face antispoofing database. Experimental results revealed that the proposed approach reduces the training time by 18.39%, while significantly lowering the HTER by 8.28% and 14.14% in cross-database tests on CASIAFASD and Replay-Attack database respectively as compared to state-of-the-art approaches. Additionally, the proposed approach achieves satisfactory results on intra-database and cross-database face liveness detection tests, claiming a good generality over other state-of-the-art face anti-spoofing approaches. \u00a9 2018 Elsevier Ltd. All rights reserved."}
{"_id": "722151798c178c71108ec6e64d25b3829e69892b", "title": "Character Recognition Using Convolutional Neural Networks", "text": "Pattern recognition is one of the traditional uses of neural networks. When trained with gradient-based learning methods, these networks can learn the classification of input data by example. An introduction to classifiers and gradient-based learning is given. It is shown how several perceptrons can be combined and trained gradient-based. Furthermore, an overview of convolutional neural networks, as well as a real-world example, are discussed."}
{"_id": "125670273768bf82405e0195e62a8d87aa6ba888", "title": "Inverted Files Versus Signature Files for Text Indexing", "text": "Two well-known indexing methods are inverted files and signature files. We have undertaken a detailed comparison of these two approaches in the context of text indexing, paying particular attention to query evaluation speed and space requirements. We have examined their relative performance using both experimentation and a refined approach to modeling of signature files, and demonstrate that inverted files are distinctly superior to signature files. Not only can inverted files be used to evaluate typical queries in less time than can signature files, but inverted files require less space and provide greater functionality. Our results also show that a synthetic text database can provide a realistic indication of the behavior of an actual text database. The tools used to generate the synthetic database have been made publicly available"}
{"_id": "bfcce714788e2d9183e7b377b90cea9dedca9036", "title": "Fast Linear Model for Knowledge Graph Embeddings", "text": "This paper shows that a simple baseline based on a Bag-of-Words (BoW) representation learns surprisingly good knowledge graph embeddings. By casting knowledge base completion and question answering as supervised classification problems, we observe that modeling co-occurences of entities and relations leads to state-of-the-art performance with a training time of a few minutes using the open sourced library fastText1."}
{"_id": "294049866a9547b0c53825bef052e6d4a0a32ce6", "title": "Color nonuniformity in projection-based displays: analysis and solutions", "text": "Large-area displays made up of several projectors show significant variation in color. Here, we identify different projector parameters that cause the color variation and study their effects on the luminance and chrominance characteristics of the display. This work leads to the realization that luminance varies significantly within and across projectors, while chrominance variation is relatively small, especially across projectors of same model. To address this situation, we present a method to achieve luminance matching across all pixels of a multiprojector display that results in photometrically uniform displays. We use a camera as a measurement device for this purpose. Our method comprises a one-time calibration step that generates a per channel per projector luminance attenuation map (LAM), which is then used to correct any image projected on the display at interactive rates on commodity graphics hardware. To the best of our knowledge, this is the first effort to match luminance across all the pixels of a multiprojector display."}
{"_id": "b8347b48d47d0713194f33f03e02130466ab0825", "title": "Entity Synonyms for Structured Web Search", "text": "Nowadays, there are many queries issued to search engines targeting at finding values from structured data (e.g., movie showtime of a specific location). In such scenarios, there is often a mismatch between the values of structured data (how content creators describe entities) and the web queries (how different users try to retrieve them). Therefore, recognizing the alternative ways people use to reference an entity, is crucial for structured web search. In this paper, we study the problem of automatic generation of entity synonyms over structured data toward closing the gap between users and structured data. We propose an offline, data-driven approach that mines query logs for instances where content creators and web users apply a variety of strings to refer to the same webpages. This way, given a set of strings that reference entities, we generate an expanded set of equivalent strings (entity synonyms) for each entity. Our framework consists of three modules: candidate generation, candidate selection, and noise cleaning. We further study the cause of the problem through the identification of different entity synonym classes. The proposed method is verified with experiments on real-life data sets showing that we can significantly increase the coverage of structured web queries with good precision."}
{"_id": "bc3d201df69c85f23c3cac77c5845127b7fe23bf", "title": "Rough Set Theory in analyzing the attributes of combination values for the insurance market", "text": "Based on Rough Set Theory, this research addresses the effect of attributes/features on the combination values of decisions that insurance companies make to satisfy customers\u2019 needs. Attributes impact on combination values by yielding sets with fewer objects (such as one or two objects), which increases both the lower and upper approximations. It also increases the decision rules, and degrades the precision of decisions. Our approach redefines the value set of attributes through expert knowledge by reducing the independent data set and reclassifying it. This approach is based on an empirical study. The results demonstrate that the redefined combination values of attributes can contribute to the precision of decisions in insurance marketing. Following an empirical analysis, we use a hit test that incorporates 50 validated sample data into the decision rule so that the hit rate reaches 100%. The results of the empirical study indicate that the generated decision rules can cover all new data. Consequently, we believe that the effects of attributes on combination values can be fully applied in research into insurance marketing. 2005 Elsevier Ltd. All rights reserved."}
{"_id": "c3ea4dcb16d7735eaf955dd99fdb6a2cacc21db9", "title": "Model-Driven Approach for Body Area Network Application Development", "text": "This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage for the analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming as the solution domain (SD)) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality to describe the variability of the problem domain though transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This enables to obtain the adequate measure of QoS efficiently through the interactive adjustment of the meta-parameter values and re-generation process for the concrete BAN application."}
{"_id": "40048481e1bc23e56b7507575f31cc5255a02e9f", "title": "Entropy-based algorithms for best basis selection", "text": "Adapted waveform analysis uses a library of orthonormal bases and an efficiency functional to match a basis to a given signal or family of signals. It permits efficient compression of a variety of signals such as sound and images. The predefined libraries of modulated waveforms include orthogonal wavelet-packets, and localized trigonometric functions, have reasonably well controlled time-frequency localization properties. The idea is to build out of the library functions an orthonormal basis relative to which the given signal or collection of signals has the lowest information cost. The method relies heavily on the remarkable orthogonality properties of the new libraries: all expansions in a given library conserve energy, hence are comparable. Several cost functionals are useful; one of the most attractive is Shannon entropy, which has a geometric interpretation in this context."}
{"_id": "a9515ce06633d20335a2bb5afc1134e6dea1f372", "title": "Non-invasive glucose monitoring in patients with diabetes: a novel system based on impedance spectroscopy.", "text": "The aim of this work was to evaluate the performance of a novel non-invasive continuous glucose-monitoring system based on impedance spectroscopy (IS) in patients with diabetes. Ten patients with type 1 diabetes (mean+/-S.D., age 28+/-8 years, BMI 24.2+/-3.2 kg/m(2) and HbA(1C) 7.3+/-1.6%) and five with type 2 diabetes (age 61+/-8 years, BMI 27.5+/-3.2 kg/m(2) and HbA(1C) 8.3+/-1.8%) took part in this study, which comprised a glucose clamp experiment followed by a 7-day outpatient evaluation. The measurements obtained by the NI-CGMD and the reference blood glucose-measuring techniques were evaluated using retrospective data evaluation procedures. Under less controlled outpatient conditions a correlation coefficient of r=0.640 and a standard error of prediction (SEP) of 45 mg dl(-1) with a total of 590 paired glucose measurements was found (versus r=0.926 and a SEP of 26 mg dl(-1) under controlled conditions). Clark error grid analyses (EGA) showed 56% of all values in zone A, 37% in B and 7% in C-E. In conclusion, these results indicate that IS in the used technical setting allows retrospective, continuous and truly non-invasive glucose monitoring under defined conditions for patients with diabetes. Technical advances and developments are needed to expand on this concept to bring the results from the outpatient study closer to those in the experimental section of the study. Further studies will not only help to evaluate the performance and limitations of using such a technique for non non-invasive glucose monitoring but also help to verify technical extensions towards a IS-based concept that offers improved performance under real life operating conditions."}
{"_id": "5829feb8b0a8371f4b7e1854d97ea6d9c564bb39", "title": "Citation analysis as a tool in journal evaluation.", "text": "As a communications system, the network of journals that play a paramount role in the exchange of scientific and technical information is little understood. Periodically since 1927, when Gross and Gross published their study (1) of references in 1 year\u2019s issues of the Journal of the American Chemical Socie/y, pieces of the network have been illuminated by the work of Bradford (2), Allen (3), Gross and Woodford (4), Hooker (5), Henkle (6), Fussier (7), Brown (8), and others (9). Nevertheless, there is still no map of the journal network as a whok. To date, studies of the network and of the interrelation of its components have been limited in the number of journak, the areas of scientific study, and the periods of time their authors were able to consider, Such shortcomings have not been due to any lack of purpose, insight, or energy on the part of investigators, but to the practical difficulty of compiling and manipulating manually the enormous amount of necessary data. A solution to this problem of data is available in the data base used to produce the Science Citation Index ( SCI ) (10). The coverage of the SCI is international and multidisciplinary; it has grown from 600 journals in 1964 to 2400 journals in 1972, and now includes the world\u2019s most important scientific and technical journals in mow disciplines. The SCI is published quarterly and is cumulated annually and quinquennially, but the data base from which the volumes are compiled is maintained on magnetic tape and is updated weekly. At the end of 1971, this data base contained more than 27 mi[tion references to about 10 million different published items. These references appeared over the past decade in the footnotes and bibliographies of more than 2 million journal articles, communications, letters, and so on. The data base is, thus, not only multidisciplinary, it covers a substantial period of time and, being in machine-readable form, is amenable to extensive manipulation by computer. In 1971, the Institute for Scientfic Information (1S1) decided to undertake a systematic analysis of journal citation patterns across the whole of science and technology. It began by extracting from the data base all references pobIished during the last quarter of 1969 in the 2200 journals then covered by the SCL The resultant sample was about 1 million citations of journals, books, reports, theses, and so forth. To test whether this 3-month sample was representative of the year as a whole, it was matched against another sample made by selecting every 27th reference from the approximately 4 million references collected over the entire year. The two samples were similar enough in scope (number of diflerent items cited) and detail (relative frequency of their citation by different journals) to"}
{"_id": "b4543c7c9733f9199ee35b2e37952fdc20f838f9", "title": "Discrete Cross-Modal Hashing for Efficient Multimedia Retrieval", "text": "Hashing techniques have been widely adopted for cross-modal retrieval due to its low storage cost and fast query speed. Most existing cross-modal hashing methods aim to map heterogeneous data into the common low-dimensional hamming space and then threshold to obtain binary codes by relaxing the discrete constraint. However, this independent relaxation step also brings quantization errors, resulting in poor retrieval performances. Other cross-modal hashing methods try to directly optimize the challenging objective function with discrete binary constraints. Inspired by [1], we propose a novel supervised cross-modal hashing method called Discrete Cross-Modal Hashing (DCMH) to learn the discrete binary codes without relaxing them. DCMH is formulated through reconstructing the semantic similarity matrix and learning binary codes as ideal features for classification. Furthermore, DCMH alternately updates binary codes of each modality, and iteratively learns the discrete hashing codes bit by bit efficiently, which is quite promising for large-scale datasets. Extensive empirical results on three real-world datasets show that DCMH outperforms the baseline approaches significantly."}
{"_id": "10e887664b0e9d0b193e8dc8e92f6f1a8bfba5e5", "title": "The Complementarity of Information Technology Infrastructure and E-Commerce Capability: A Resource-Based Assessment of Their Business Value", "text": "This study seeks to assess the business value of e-commerce capability and information technology (IT) infrastructure in the context of electronic business at the firm level. Grounded in the IT business-value literature and enhanced by the resource-based theory of the firm, we developed a research framework in which both the main effects and the interaction effects of e-commerce and IT on firm performance were tested. Within this theoretical framework, we formulated several hypotheses. We then developed a multidimensional e-commerce capability construct, and after establishing its validity and reliability, tested the hypotheses with empirical data from 114 companies in the retail industry. Controlling for variations of firm size and subindustry effects, our empirical analysis found a strong positive interaction effect between IT infrastructure and e-commerce capability. This suggests that their complementarity positively contributes to firm performance in terms of sales per employee, inventory turnover, and cost reduction. The results are consistent with the resource-based theory, and provide empirical evidence to the complementary synergy between front-end e-commerce capability and back-end IT infrastructure. Combined together, they become more effective in producing business value. Yet the value of this synergy has not been recognized in the IT payoff literature. The \u201cproductivity paradox\u201d observed in various studies has been attributed to variation in methods and measures, yet we offer an additional explanation: ignoring complementarities in business value measurement implies that the impact of IT was seriously underestimated. Our results emphasized the integration of resources as a feasible path to e-commerce value\u2014companies need to enhance the integration between front-end e-commerce capability and back-end IT infrastructure in order to reap the benefits of e-commerce investments."}
{"_id": "1c17f6ab76a32648cd84c8eef2e47045e8379310", "title": "Fast Kernels for String and Tree Matching", "text": "In this paper we present a new algorithm suitable for matching discrete objects such as strings and trees in linear time, thus obviating dynamic programming with quadratic time complexity. Furthermore, prediction cost in many cases can be reduced to linear cost in the length of the sequence to be classified, regardless of the number of support vectors. This improvement on the currently available algorithms makes string kernels a viable alternative for the practitioner."}
{"_id": "33a31332ebd3de605a886546cf0576ad556bfed8", "title": "A Kernel Between Sets of Vectors", "text": "In various application domains, including image recognition, it is natural to represent each example as a set of vectors. With a base kernel we can implicitly map these vectors to a Hilbert space and fit a Gaussian distribution to the whole set using Kernel PCA. We define our kernel between examples as Bhattacharyya\u2019s measure of affinity between such Gaussians. The resulting kernel is computable in closed form and enjoys many favorable properties, including graceful behavior under transformations, potentially justifying the vector set representation even in cases when more conventional representations also exist."}
{"_id": "16c3eaad79a51a9e1a408d768a5096ca7675d4fd", "title": "Convolution Kernels for Natural Language", "text": "We describe the application of kernel methods to Natural Language Processing (NLP) problems. In many NLP tasks the objects being modeled are strings, trees, graphs or other discrete structures which require some mechanism to convert them into feature vectors. We describe kernels for various natural language structures, allowing rich, high dimensional representations of these structures. We show how a kernel over trees can be applied to parsing using the voted perceptron algorithm, and we give experimental results on the ATIS corpus of parse trees."}
{"_id": "0b48e2d88382c9ed27ba2d6fb1b61834d261122b", "title": "Batch and On-Line Parameter Estimation of Gaussian Mixtures Based on the Joint Entropy", "text": "We describe a new iterative method for parameter estimation of Gaussian mixtures. The new method is based on a framework developed by Kivinen and Warmuth for supervised on-line learning. In contrast to gradient descent and EM, which estimate the mixture's covariance matrices, the proposed method estimates the inverses of the covariance matrices. Furthennore, the new parameter estimation procedure can be applied in both on-line and batch settings. We show experimentally that it is typically faster than EM, and usually requires about half as many iterations as EM."}
{"_id": "692f807eb4a65179918a0df871c6339a6b7377f1", "title": "Modulation of connectivity in visual pathways by attention: cortical interactions evaluated with structural equation modelling and fMRI.", "text": "Electrophysiological and neuroimaging studies have shown that attention to visual motion can increase the responsiveness of the motion-selective cortical area V5 and the posterior parietal cortex (PP). Increased or decreased activation in a cortical area is often attributed to attentional modulation of the cortical projections to that area. This leads to the notion that attention is associated with changes in connectivity. We have addressed attentional modulation of effective connectivity using functional magnetic resonance imaging (fMRI). Three subjects were scanned under identical stimulus conditions (visual motion) while varying only the attentional component of the task. Haemodynamic responses defined an occipito-parieto-frontal network, including the, primary visual cortex (V1), V5 and PR A structural equation model of the interactions among these dorsal visual pathway areas revealed increased connectivity between V5 and PP related to attention. On the basis of our analysis and the neuroanatomical pattern of projections from the prefrontal cortex to PP we attributed the source of modulatory influences, on the posterior visual pathway, to the prefrontal cortex (PFC). To test this hypothesis we included the PFC in our model as a 'modulator' of the pathway between V5 and PP, using interaction terms in the structural equation model. This analysis revealed a significant modulatory effect of prefrontal regions on V5 afferents to posterior parietal cortex."}
{"_id": "63213d080a43660ac59ea12e3c35e6953f6d7ce8", "title": "ActionVLAD: Learning Spatio-Temporal Aggregation for Action Classification", "text": "In this work, we introduce a new video representation for action classification that aggregates local convolutional features across the entire spatio-temporal extent of the video. We do so by integrating state-of-the-art two-stream networks [42] with learnable spatio-temporal feature aggregation [6]. The resulting architecture is end-to-end trainable for whole-video classification. We investigate different strategies for pooling across space and time and combining signals from the different streams. We find that: (i) it is important to pool jointly across space and time, but (ii) appearance and motion streams are best aggregated into their own separate representations. Finally, we show that our representation outperforms the two-stream base architecture by a large margin (13% relative) as well as outperforms other baselines with comparable base architectures on HMDB51, UCF101, and Charades video classification benchmarks."}
{"_id": "4606add11d6c6c24cf91f0800e4fe99392599613", "title": "A practical model for subsurface light transport", "text": "This paper introduces a simple model for subsurface light transport in translucent materials. The model enables efficient simulation of effects that BRDF models cannot capture, such as color bleeding within materials and diffusion of light across shadow boundaries. The technique is efficient even for anisotropic, highly scattering media that are expensive to simulate using existing methods. The model combines an exact solution for single scattering with a dipole point source diffusion approximation for multiple scattering. We also have designed a new, rapid image-based measurement technique for determining the optical properties of translucent materials. We validate the model by comparing predicted and measured values and show how the technique can be used to recover the optical properties of a variety of materials, including milk, marble, and skin. Finally, we describe sampling techniques that allow the model to be used within a conventional ray tracer."}
{"_id": "db5f395ad6cf9d911e448f2689b2c4d1b9d72f92", "title": "Effects of lumbar stabilization exercise on functional disability and lumbar\nlordosis angle in patients with chronic low back pain", "text": "[Purpose] This study examined the effects of lumbar stabilization exercises on the functional disability and lumbar lordosis angles in patients with chronic low back pain. [Subjects] The subjects were 30 patients with chronic low back pain divided into a lumbar stabilization exercise group (n = 15) and a conservative treatment group (n = 15). [Methods] The lumbar stabilization exercise and conservative treatment groups performed an exercise program and conservative physical treatment, respectively. Both programs were performed 3 times a week for 6 weeks. The degree of functional disability was assessed by the Oswestry disability index, and lumbar lordosis angles were measured by plain radiography. [Results] The Oswestry disability index decreased significantly in the both groups; however, it was significantly lower in the lumbar stabilization exercise group. The lumbar lordosis angle increased significantly in the lumbar stabilization exercise group after treatment and was also significantly greater than that in the conservative treatment group. [Conclusion] Lumbar stabilization exercise is more effective than conservative treatment for improving functional disability and lumbar lordosis angles."}
{"_id": "7f9ac85a5ec08aa078018d0a498cea2380bc20e7", "title": "Tetris game design based on the FPGA", "text": "Tetris game is a classic game of logic control. This article gives a programming design of Tetris game based on the FPGA using VHDL. Game players can move and rotate blocks with the PS/2 interface keyboard, and the game video is showed in a VGA monitor. The game realized the function of the movement and rotation of blocks, randomly generating next blocks, eliminating rows, getting scores and speeding up. It also contained the normal mode corresponding to 7 types of blocks and the expert mode corresponding to 11 types of blocks. The successful transplant of Tetris game provides a template for the development of other visual control systems in the FPGA."}
{"_id": "a310091e1c3688a6e922711ee5c5796aa614b36e", "title": "Nonparametric Bayesian Models for Joint Analysis of Imagery and Text", "text": "Nonparametric Bayesian Models for Joint Analysis of Imagery and Text"}
{"_id": "577a1ae723c8e151fc009c1e34cd34a499227175", "title": "The merits of sharing a ride", "text": "The culture of sharing instead of ownership is sharply increasing in individuals behaviors. Particularly in transportation, concepts of sharing a ride in either carpooling or ridesharing have been recently adopted. An efficient optimization approach to match passengers in real-time is the core of any ridesharing system. In this paper, we model ridesharing as an online matching problem on general graphs such that passengers do not drive private cars and use shared taxis. We propose an optimization algorithm to solve it. The outlined algorithm calculates the optimal waiting time when a passenger arrives. This leads to a matching with minimal overall overheads while maximizing the number of partnerships. To evaluate the behavior of our algorithm, we used NYC taxi real-life data set. Results represent a substantial reduction in overall overheads."}
{"_id": "30dbd52334d49d95fb0d0cb48b95e9588bceca0b", "title": "A Survey of Procedural Noise Functions", "text": "Procedural noise functions are widely used in Computer Graphics, from off-line rendering in movie production to interactive video games. The ability to add complex and intricate details at low memory and authoring cost is one of its main attractions. This survey is motivated by the inherent importance of noise in graphics, the widespread use of noise in industry, and the fact that many recent research developments justify the need for an up-to-date survey. Our goal is to provide both a valuable entry point into the field of procedural noise functions, as well as a comprehensive view of the field to the informed reader. In this report, we cover procedural noise functions in all their aspects. We outline recent advances in research on this topic, discussing and comparing recent and well established methods. We first formally define procedural noise functions based on stochastic processes and then classify and review existing procedural noise functions. We discuss how procedural noise functions are used for modeling and how they are applied to surfaces. We then introduce analysis tools and apply them to evaluate and compare the major approaches to noise generation. We finally identify several directions for future work."}
{"_id": "f8fdf353ff6fa428b1e1f6a451d8d25cacc93de7", "title": "Pseudo-analysis approach to nonlinear partial differential equations", "text": "An overview of methods of pseudo-analysis in applications on important classes of nonlinear partial differential equations, occurring in different fields, is given. Hamilton-Jacobi equations, specially important in the control theory, are for important models usually with non-linear Hamiltonian H which is also not smooth, e.g., the absolute value, min or max operations, where it can not apply the classical mathematical analysis. Using the pseudo-analysis with generalized pseudo-convolution it is possible to obtain solutions which can be interpreted in the mentioned classical way. Another important classes of nonlinear equations, where there are applied the pseudo-analysis, are the Burgers type equations and Black and Shole equation in option pricing. Very recent applications of pseudo-analysis are obtained on equations which model fluid mechanics (Navier-Stokes equation) and image processing (Perona and Malik equation)."}
{"_id": "a0bca6dbb9232bc2a868f27b7d07acd08d0a1622", "title": "A deep learning method for classifying mammographic breast density categories.", "text": "PURPOSE\nMammographic breast density is an established risk marker for breast cancer and is visually assessed by radiologists in routine mammogram image reading, using four qualitative Breast Imaging and Reporting Data System (BI-RADS) breast density categories. It is particularly difficult for radiologists to consistently distinguish the two most common and most variably assigned BI-RADS categories, i.e., \"scattered density\" and \"heterogeneously dense\". The aim of this work was to investigate a deep learning-based breast density classifier to consistently distinguish these two categories, aiming at providing a potential computerized tool to assist radiologists in assigning a BI-RADS category in current clinical workflow.\n\n\nMETHODS\nIn this study, we constructed a convolutional neural network (CNN)-based model coupled with a large (i.e., 22,000 images) digital mammogram imaging dataset to evaluate the classification performance between the two aforementioned breast density categories. All images were collected from a cohort of 1,427 women who underwent standard digital mammography screening from 2005 to 2016 at our institution. The truths of the density categories were based on standard clinical assessment made by board-certified breast imaging radiologists. Effects of direct training from scratch solely using digital mammogram images and transfer learning of a pretrained model on a large nonmedical imaging dataset were evaluated for the specific task of breast density classification. In order to measure the classification performance, the CNN classifier was also tested on a refined version of the mammogram image dataset by removing some potentially inaccurately labeled images. Receiver operating characteristic (ROC) curves and the area under the curve (AUC) were used to measure the accuracy of the classifier.\n\n\nRESULTS\nThe AUC was 0.9421 when the CNN-model was trained from scratch on our own mammogram images, and the accuracy increased gradually along with an increased size of training samples. Using the pretrained model followed by a fine-tuning process with as few as 500 mammogram images led to an AUC of 0.9265. After removing the potentially inaccurately labeled images, AUC was increased to 0.9882 and 0.9857 for without and with the pretrained model, respectively, both significantly higher (P\u00a0<\u00a00.001) than when using the full imaging dataset.\n\n\nCONCLUSIONS\nOur study demonstrated high classification accuracies between two difficult to distinguish breast density categories that are routinely assessed by radiologists. We anticipate that our approach will help enhance current clinical assessment of breast density and better support consistent density notification to patients in breast cancer screening."}
{"_id": "6858fd932670d15009a9253b1baf1b180cd807d4", "title": "SOCIAL ENTREPRENEURSHIP LEADERSHIP THAT FACILITATES SOCIETAL TRANSFORMATION \u2014 AN EXPLORATORY STUDY 5", "text": "This study provides a comparative analysis of seven cases of social entrepre-neurship that have been widely recognized as successful. The paper suggests factors associated with successful social entrepreneurship, particularly with social entrepreneurship that leads to significant changes in the social, political and economic contexts for poor and marginalized groups. It generates hypotheses about core innovations, leadership, organization, and scaling up in successful social entrepreneurship. The paper concludes with a discussion of the implications for the practice of social entrepreneurship, for further research, and for the continued development of support technologies and institutions that will encourage future social entrepreneurship."}
{"_id": "7d2196f89d0054f4665a124fd92d88d7a06894ba", "title": "Plantar fasciopathy: revisiting the risk factors.", "text": "BACKGROUND\nPlantar fasciopathy is the most common cause of acquired sub-calcaneal heel pain in adults. To-date, research of this condition has mainly focused on management rather than causal mechanisms. The aetiology of plantar fasciopathy is likely to be multifactorial, as both intrinsic and extrinsic risk factors have been reported. The purpose of this review is to critically reevaluate risk factors for plantar fasciopathy.\n\n\nMETHODS\nA detailed literature review was undertaken using English language medical databases.\n\n\nRESULTS\nNo clear consensus exists as to the relative strength of the risk factors reported.\n\n\nCONCLUSIONS\nTo-date numerous studies have examined various intrinsic and extrinsic risk factors implicated in the aetiology of plantar fasciopathy. How these factors interact may provide useful data to establish an individuals' risk profile for plantar fasciopathy and their potential for response to treatment. Further research is indicated to rank the relative significance of these risk factors."}
{"_id": "147b029e07d952cc5dcd250129d78f2e171a605d", "title": "Why do Larks Perform Better at School than Owls ? The Mediating Effect of Conscientiousness", "text": "Article History: Received 26.06.2016 Received in revised form 25.10.2016 Accepted 26.10.2016 Available online 25.11.2016 Circadian preference refers to individuals\u2019 preference for morning or evening activities. Its two dimensions (i.e., morningness and eveningness) are related to a number of academic outcomes. While morningness shows positive relations with academic achievement, eveningness shows negative relations. Further, morningness and eveningness show the same correlational pattern with conscientiousness (i.e., positive relations for morningness, negative relations for eveningness), which \u2013 in turn \u2013 predicts academic achievement. Therefore, the main aim of the present study was to investigate if the relation between circadian preference and academic achievement was mediated by conscientiousness. The sample comprised 422 students attending the 11th grade at a grammar school in Germany. Circadian preference (morningness and eveningness) and conscientiousness were assessed by self-report questionnaires; academic achievement was operationalized by school grades. Using confirmatory analyses and structural equation modelling, the results supported the assumption that conscientiousness mediates the relation between circadian preference and academic achievement. Implications for research into circadian preference and for education are discussed. \u00a9 2016 IOJES. All rights reserved"}
{"_id": "b2e83112b2956483c6cc5982b56f5987788dd973", "title": "A Multiband Antenna for Satellite Communications on the Move", "text": "The design of a multiband reflector antenna for an on-the-move satellite communications terminal is presented. This antenna was designed to operate with numerous modern and future military communications satellites, which in turn requires the antenna to be capable of operation at multiple frequencies and polarizations while maintaining high aperture efficiency. Several feed antenna concepts were developed to accomplish this task, and are discussed in detail. Multiple working prototypes based on this design have been realized, with excellent performance. Measured data of individual antenna components and the complete assembly is also included"}
{"_id": "fe4b0925b52c89ec04a9d29e41eae98e0c98df78", "title": "Porokeratosis with follicular involvement: report of three cases and review of literatures.", "text": "Porokeratosis is characterized clinically by annular plaques with a distinct peripheral keratotic ridge and histologically by the cornoid lamella. Porokeratosis with follicular involvement is rarely reported. To provide the basis of that follicular porokeratosis is a clinical variant or not. Biopsy was taken from three patients who were diagnosed porokeratosis. Routine stain was made and reviewed the literatures about well-documented cases of porokeratosis with follicular involvement. Porokeratosis with follicular involvement may have some clinical features: asymptomatic, erythematous, brownish or skin-color, less than 1 cm in the areas excluding palm and plantar, which commonly involved on middle-age. But there have still not enough proof as an independent clinical variant."}
{"_id": "29d865556ae25f08d5da963cd5df4c7798fe8fab", "title": "Interleaved active clamp flyback inverter using a synchronous rectifier for a photovoltaic AC module system", "text": "An interleaved active clamp flyback inverter using a synchronous rectifier for a photovoltaic AC module system is proposed. In a conventional single flyback inverter for the photovoltaic AC module system, this inverter has drawbacks of a conduction losses of each switch, transformer copper and high ripple current of capacitor. To overcome these problems, two flyback converter modules with interleaved pulse-width modulation (PWM) are connected in parallel at the input and output sides reducing the ripple current on the input and output capacitors and decrease the current stress on the transformer windings. Thus, the transformer copper losses and the conduction losses on the switches are reduced. Also, reliability and life of capacitor are increased due to reducing the ripple current of input and output capacitors. In order to verify these, theoretical analysis and simulation are performed."}
{"_id": "5c9b4e652e47a168fd73e6e53410678194cd6b03", "title": "Generative Local Metric Learning for Nearest Neighbor Classification", "text": "We consider the problem of learning a local metric in order to enhance the performance of nearest neighbor classification. Conventional metric learning methods attempt to separate data distributions in a purely discriminative manner; here we show how to take advantage of information from parametric generative models. We focus on the bias in the information-theoretic error arising from finite sampling effects, and find an appropriate local metric that maximally reduces the bias based upon knowledge from generative models. As a byproduct, the asymptotic theoretical analysis in this work relates metric learning to dimensionality reduction from a novel perspective, which was not understood from previous discriminative approaches. Empirical experiments show that this learned local metric enhances the discriminative nearest neighbor performance on various datasets using simple class conditional generative models such as a Gaussian."}
{"_id": "2a61ff2716b68672cbfeddb3ecd166dc856afe96", "title": "Efficient Authentication and Signing of Multicast Streams over Lossy Channels", "text": "Multicast stream authentication and signing is an important and challenging problem. Applications include the continuous authentication of radio and TV Internet broadcasts, and authenticated data distribution by satellite. The main challenges are fourfold. First, authenticity must be guaranteed even when only the sender of the data is trusted. Second, the scheme needs to scale to potentially millions of receivers. Third, streamed media distribution can have high packet loss. Finally, the system needs to be efficient to support fast packet rates. We propose two efficient schemes, TESLA and EMSS, for secure lossy multicast streams. TESLA, short for Timed Efficient Stream Loss-tolerant Authentication, offers sender authentication, strong loss robustness, high scalability, and minimal overhead, at the cost of loose initial time synchronization and slightly delayed authentication. EMSS, short for Efficient Multi-chained Stream Signature, provides nonrepudiation of origin, high loss resistance, and low overhead, at the cost of slightly delayed verification. This work began in Summer 1999 when Adrian Perrig and Dawn Song were visiting the IBM T. J. Watson research lab. Initial research on stream authentication was done during Summer 1999 by Ran Canetti, Adrian Perrig, and Dawn Song at IBM. Additional improvements were suggested by J. D. Tygar in Fall 1999 at UC Berkeley. Implementation was done in Fall 1999 by Adrian Perrig at UC Berkeley. The work on stream signatures was done by J. D. Tygar, Adrian Perrig, and Dawn Song at UC Berkeley. Additional work was performed by Ran Canetti in Spring 2000. Ran Canetti is at IBM T. J. Watson Research Center, and Adrian Perrig, Dawn Song, and J. D. Tygar are at the Computer Science Division, UC Berkeley. This research was suported in part by the Defense Advanced Research Projects Agency under DARPA contract N6601-99-28913 (under supervision of the Space and Naval Warfare Systems Center San Diego), by the National Science foundation under grant FD99-79852, and by the United States Postal Service under grant USPS 1025 90-98-C-3513. Views and conclusions contained in this document are those of the authors and do not necessarily represent the official opinion or policies, either expressed or implied of the US government or any of its agencies, DARPA, NSF, USPS, or IBM."}
{"_id": "af9e6c190d3928336d9975d7af332653cf68445c", "title": "Task-Oriented Query Reformulation with Reinforcement Learning", "text": "Search engines play an important role in our everyday lives by assisting us in finding the information we need. When we input a complex query, however, results are often far from satisfactory. In this work, we introduce a query reformulation system based on a neural network that rewrites a query to maximize the number of relevant documents returned. We train this neural network with reinforcement learning. The actions correspond to selecting terms to build a reformulated query, and the reward is the document recall. We evaluate our approach on three datasets against strong baselines and show a relative improvement of 5-20% in terms of recall. Furthermore, we present a simple method to estimate a conservative upperbound performance of a model in a particular environment and verify that there is still large room for improvements."}
{"_id": "528a8ef7277d81d337c8b4c4fe0a9df483031773", "title": "Aggressive driving with model predictive path integral control", "text": "In this paper we present a model predictive control algorithm designed for optimizing non-linear systems subject to complex cost criteria. The algorithm is based on a stochastic optimal control framework using a fundamental relationship between the information theoretic notions of free energy and relative entropy. The optimal controls in this setting take the form of a path integral, which we approximate using an efficient importance sampling scheme. We experimentally verify the algorithm by implementing it on a Graphics Processing Unit (GPU) and apply it to the problem of controlling a fifth-scale Auto-Rally vehicle in an aggressive driving task."}
{"_id": "1dc697ae0d6a1e90dc8ff061e36441b6efdcff7e", "title": "A generalized iterative LQG method for locally-optimal feedback control of constrained nonlinear stochastic systems", "text": "We present an iterative linear-quadratic-Gaussian method for locally-optimal feedback control of nonlinear stochastic systems subject to control constraints. Previously, similar methods have been restricted to deterministic unconstrained problems with quadratic costs. The new method constructs an affine feedback control law, obtained by minimizing a novel quadratic approximation to the optimal cost-to-go function. Global convergence is guaranteed through a Levenberg-Marquardt method; convergence in the vicinity of a local minimum is quadratic. Performance is illustrated on a limited-torque inverted pendulum problem, as well as a complex biomechanical control problem involving a stochastic model of the human arm, with 10 state dimensions and 6 muscle actuators. A Matlab implementation of the new algorithm is availabe at www.cogsci.ucsd.edu//spl sim/todorov."}
{"_id": "26949ee96ec22e400d8b4b264ee23360f3df1f7b", "title": "Policy Improvement Methods: Between Black-Box Optimization and Episodic Reinforcement Learning", "text": "Policy improvement methods seek to optimize the parameters of a policy with respect to a utility function. There are two main approaches to performing this optimization: reinforcement learning (RL) and black-box optimization (BBO). Whereas BBO algorithms are generic optimization methods that, due to there generality, may also be applied to optimizing policy parameters, RL algorithms are specifically tailored to leveraging the structure of policy improvement problems. In recent years, benchmark comparisons between RL and BBO have been made, and there has been several attempts to specify which approach works best for which types of problem classes. In this article, we make several contributions to this line of research: 1) We define four algorithmic properties that further clarify the relationship between RL and BBO: action-perturbation vs. parameter-perturbation, gradient estimation vs. rewardweighted averaging, use of only rewards vs. use of rewards and state information, actor-critic vs. direct policy search. 2) We show how the chronology of the derivation of ever more powerful algorithms displays a trend towards algorithms based on parameter-perturbation and reward-weighted averaging. A striking feature of this trend is that it has moved RL methods closer and closer to BBO. 3) We continue this trend by applying two modifications to the state-of-the-art \u201cPolicy Improvement with Path Integrals\u201d (PI), which yields an algorithm we denote PI. We show that PI is a BBO algorithm, and, more specifically, that it is a special case of the \u201cCovariance Matrix Adaptation \u2013 Evolutionary Strategy\u201d algorithm. Our empirical evaluation demonstrates that the simpler PI outperforms PI on simple evaluation tasks in terms of convergence speed and final cost. 4) Although our evaluation implies that, for these five tasks, BBO outperforms RL, we do not hold this to be a general statement, and provide an analysis of why these tasks are particularly well-suited for BBO. Thus, rather than making the case for BBO or RL, one of the main contributions of this article is rather to provide an algorithmic framework in which such cases may be made, as PI and PI use identical perturbation and parameter update methods, and differ only in being BBO and RL approaches respectively."}
{"_id": "2fd17ebb88c0748fe79f59da6d2fec51233c2dc0", "title": "A dynamically stable single-wheeled mobile robot with inverse mouse-ball drive", "text": "Multi-wheel statically-stable mobile robots tall enough to interact meaningfully with people must have low centers of gravity, wide bases of support, and low accelerations to avoid tipping over. These conditions present a number of performance limitations. Accordingly, we are developing an inverse of this type of mobile robot that is the height, width, and weight of a person, having a high center of gravity, that balances dynamically on a single spherical wheel. Unlike balancing 2-wheel platforms which must turn before driving in some direction, the single-wheel robot can move directly in any direction. We present the overall design, actuator mechanism based on an inverse mouse-ball drive, control system, and initial results including dynamic balancing, station keeping, and point-to-point motion"}
{"_id": "13566e7b702a156d2f0d6b2ccbcb58b95cdd41c5", "title": "Evolutionary Function Approximation for Reinforcement Learning", "text": "Temporal difference methods are theoretically grounded and empirically effective methods for addressing reinforcement learning problems. In most real-world reinforcement learning tasks, TD methods require a function approximator to represent the value function. However, using function approximators requires manually making crucial representational decisions. This thesis investigates evolutionary function approximation, a novel approach to automatically selecting function approximator representations that enable efficient individual learning. This method evolves individuals that are better able to learn. I present a fully implemented instantiation of evolutionary function approximation which combines NEAT, a neuroevolutionary optimization technique, with Q-learning, a popular TD method. The resulting NEAT+Q algorithm automatically discovers effective representations for neural network function approximators. This thesis also presents on-line evolutionary computation, which improves the on-line performance of evolutionary computation by borrowing selection mechanisms used in TD methods to choose individual actions and using them in evolutionary computation to select policies for evaluation. I evaluate these contributions with extended empirical studies in two domains: 1) the mountain car task, a standard reinforcement learning benchmark on which neural network function approximators have previously performed poorly and 2) server job scheduling, a large probabilistic domain drawn from the field of autonomic computing. The results demonstrate that evolutionary function approximation can significantly improve the performance of TD methods and on-line evolutionary computation can significantly improve evolutionary methods."}
{"_id": "95b8c5ea18bc8532b0b725488bd2b28cebe6e760", "title": "Loneliness, social contacts and Internet addiction: A cross-lagged panel study", "text": "This study aims to examine the causal priority in the observed empirical relationships between Internet addiction and other psychological problems. A cross-lagged panel survey of 361 college students in Hong Kong was conducted. Results show that excessive and unhealthy Internet use would increase feelings of loneliness over time. Although depression had a moderate and positive bivariate relationship with Internet addiction at each time point, such a relationship was not significant in the cross-lagged analyses. This study also found that online social contacts with friends and family were not an effective alternative for offline social interactions in reducing feelings of loneliness. Furthermore, while an increase in face-to-face contacts could help to reduce symptoms of Internet addiction, this effect may be neutralized by the increase in online social contacts as a result of excessive Internet use. Taken as a whole, findings from the study show a worrisome vicious cycle between loneliness and Internet addiction. 2013 Elsevier Ltd. All rights reserved."}
{"_id": "8a131af3f9f27037f3e3612b84d1c508c5ac1823", "title": "Real-time Vision-based Hand Gesture Recognition Using Haar-like Features", "text": "This paper proposes a two level approach to solve the problem of real-time vision-based hand gesture classification. The lower level of the approach implements the posture recognition with Haar-like features and the AdaBoost learning algorithm. With this algorithm, real-time performance and high recognition accuracy can be obtained. The higher level implements the linguistic hand gesture recognition using a context-free grammar-based syntactic analysis. Given an input gesture, based on the extracted postures, the composite gestures can be parsed and recognized with a set of primitives and production rules."}
{"_id": "4d60d8d336a68008609712e3153757daefdca050", "title": "DeepDefense: Training Deep Neural Networks with Improved Robustness", "text": "Despite the efficacy on a variety of computer vision tasks, deep neural networks (DNNs) are vulnerable to adversarial attacks, limiting their applications in securitycritical systems. Recent works have shown the possibility of generating imperceptibly perturbed image inputs (a.k.a., adversarial examples) to fool well-trained DNN classifiers into making arbitrary predictions. To address this problem, we propose a training recipe named \u201cdeep defense\u201d. Our core idea is to integrate an adversarial perturbation-based regularizer into the classification objective, such that the obtained models learn to resist potential attacks, directly and precisely. The whole optimization problem is solved just like training a recursive network. Experimental results demonstrate that our method outperforms training with adversarial/Parseval regularizations by large margins on various datasets (including MNIST, CIFAR-10 and ImageNet) and different DNN architectures. Code and models for reproducing our results will be made publicly available."}
{"_id": "3a68b92df71637d2ba0ecc1cde8cfe5b29f2d709", "title": "Ambiguity Resolution in a Cognitive Model of Language Comprehension", "text": "The Lucia comprehension system attempts to model human comprehension by using the Soar cognitive architecture, Embodied Construction Grammar (ECG), and an incremental, word-by-word approach to grounded processing. Traditional approaches use techniques such as parallel paths and global optimization to resolve ambiguities. Here we describe how Lucia deals with lexical, grammatical, structural, and semantic ambiguities by using knowledge from the surrounding linguistic and environmental context. It uses a local repair mechanism to maintain a single path, and shows a garden path effect when local repair breaks down. Data on adding new linguistic knowledge shows that the ECG grammar grows faster than the knowledge for handling context, and that lowlevel grammar items grow faster than more general ones."}
{"_id": "0fd49277e7ad0b0ef302f05ebcca772d28e34292", "title": "Personalized Reliability Prediction of Web Services", "text": "Service Oriented Architecture (SOA) is a business-centric IT architectural approach for building distributed systems. Reliability of service-oriented systems heavily depends on the remote Web services as well as the unpredictable Internet connections. Designing efficient and effective reliability prediction approaches of Web services has become an important research issue. In this article, we propose two personalized reliability prediction approaches of Web services, that is, neighborhood-based approach and model-based approach. The neighborhood-based approach employs past failure data of similar neighbors (either service users or Web services) to predict the Web service reliability. On the other hand, the model-based approach fits a factor model based on the available Web service failure data and use this factor model to make further reliability prediction. Extensive experiments are conducted with our real-world Web service datasets, which include about 23 millions invocation results on more than 3,000 real-world Web services. The experimental results show that our proposed reliability prediction approaches obtain better reliability prediction accuracy than other competing approaches."}
{"_id": "24ecb323657c8bc581c34ef26e07c58bb237922b", "title": "A Study on the Effective Permittivity of Carbon/PI Honeycomb Composites for Radar Absorbing Design", "text": "Radar absorbing honeycomb composites with different coating thicknesses are prepared by impregnation of aramid paper frame with solutions containing conductive carbon blacks (non-magnetic) and PI (polyimide). Expressions for the effective permittivity of the composites are studied and validated both in theory and experiment. It is found that a theoretical equivalent panel with given permittivity can be obtained to represent the honeycomb structure in the quasistatic approximation, which provides a feasible way to optimize the design of radar absorbing honeycomb structure by connecting the effective electromagnetic parameters with the unit cell dimensions. The effective permittivity is measured by a network analyzer system in the frequency range of 8-12 GHz and compared with the theoretical result."}
{"_id": "9af5d0d7d6106863629ea0a643ffa05f934e0ee7", "title": "SVO: Semidirect Visual Odometry for Monocular and Multicamera Systems", "text": "Direct methods for visual odometry (VO) have gained popularity for their capability to exploit information from all intensity gradients in the image. However, low computational speed as well as missing guarantees for optimality and consistency are limiting factors of direct methods, in which established feature-based methods succeed instead. Based on these considerations, we propose a semidirect VO (SVO) that uses direct methods to track and triangulate pixels that are characterized by high image gradients, but relies on proven feature-based methods for joint optimization of structure and motion. Together with a robust probabilistic depth estimation algorithm, this enables us to efficiently track pixels lying on weak corners and edges in environments with little or high-frequency texture. We further demonstrate that the algorithm can easily be extended to multiple cameras, to track edges, to include motion priors, and to enable the use of very large field of view cameras, such as fisheye and catadioptric ones. Experimental evaluation on benchmark datasets shows that the algorithm is significantly faster than the state of the art while achieving highly competitive accuracy."}
{"_id": "f6e5e70860080a69e232d14a98bf20128957b9b5", "title": "Fuzzy memoization for floating-point multimedia applications", "text": "Instruction memoization is a promising technique to reduce the power consumption and increase the performance of future low-end/mobile multimedia systems. Power and performance efficiency can be improved by reusing instances of an already executed operation. Unfortunately, this technique may not always be worth the effort due to the power consumption and area impact of the tables required to leverage an adequate level of reuse. In this paper, we introduce and evaluate a novel way of understanding multimedia floating-point operations based on the fuzzy computation paradigm: performance and power consumption can be improved at the cost of small precision losses in computation. By exploiting this implicit characteristic of multimedia applications, we propose a new technique called tolerant memoization. This technique expands the capabilities of classic memoization by associating entries with similar inputs to the same output. We evaluate this new technique by measuring the effect of tolerant memoization for floating-point operations in a low-power multimedia processor and discuss the trade-offs between performance and quality of the media outputs. We report energy improvements of 12 percent for a set of key multimedia applications with small LUT of 6 Kbytes, compared to 3 percent obtained using previously proposed techniques."}
{"_id": "b4d6f5a6471eef2f2943ade0dec77d122720d719", "title": "Energy-Efficient Multiple-Precision Floating-Point Multiplier for Embedded Applications", "text": "Floating-point (FP) multipliers are the main energy consumers in many modern embedded digital signal processing (DSP) and multimedia systems. For lossy applications, minimizing the precision of FP multiplication operations under the acceptable accuracy loss is a well-known approach for reducing the energy consumption of FP multipliers. This paper proposes a multiple-precision FP multiplier to efficiently trade the energy consumption with the output quality. The proposed FP multiplier can perform lowprecision multiplication that generates 8\u2212, 14\u2212, 20\u2212, or 26bit mantissa product through an iterative and truncated modified Booth multiplier. Energy saving for lowprecision multiplication is achieved by partially suppressing the computation of mantissa multiplier. In addition, the proposed multiplier allows the bitwidth of mantissa in the multiplicand, multiplier, and output product to be dynamically changed when it performs different FP multiplication operations to further reduce energy consumption. Experimental results show that the proposed multiplier can achieve 59 %, 71 %, 73 %, and 82 % energy saving under 0.1 %, 1 %, 5 %, and 11 % accuracy loss, respectively, for the RGB-to-YUV & YUV-to-RGB conversion when compared to the conventional IEEE single-precision multiplier. In addition, the results also exhibit that the proposed multiplier can obtain more energy reduction than previous multipleprecision iterative FP multipliers."}
{"_id": "3d0e07fc681428f230d1be6a79c5b40450e2e332", "title": "Bootstrapping a Neural Conversational Agent with Dialogue Self-Play, Crowdsourcing and On-Line Reinforcement Learning", "text": "End-to-end neural models show great promise towards building conversational agents that are trained from data and on-line experience using supervised and reinforcement learning. However, these models require a large corpus of dialogues to learn effectively. For goal-oriented dialogues, such datasets are expensive to collect and annotate, since each task involves a separate schema and database of entities. Further, the Wizard-of-Oz approach commonly used for dialogue collection does not provide sufficient coverage of salient dialogue flows, which is critical for guaranteeing an acceptable task completion rate in consumer-facing conversational agents. In this paper, we study a recently proposed approach for building an agent for arbitrary tasks by combining dialogue self-play and crowd-sourcing to generate fully-annotated dialogues with diverse and natural utterances. We discuss the advantages of this approach for industry applications of conversational agents, wherein an agent can be rapidly bootstrapped to deploy in front of users and further optimized via interactive learning from actual users of the system."}
{"_id": "355887cec4b19668adb7db0a72c93f4ad3e1ea09", "title": "Characterization and Implementation of Dual-SiC MOSFET Modules for Future Use in Traction Converters", "text": "Silicon (Si) insulated-gate bipolar transistors are widely used in railway traction converters. In the near future, silicon carbide (SiC) technology will push the limits of switching devices in three directions: higher blocking voltage, higher operating temperature, and higher switching speeds. The first silicon carbide (SiC) MOSFET modules are available on the market and look promising. Although they are still limited in breakdown voltage, these wide-bandgap components should improve traction-chain efficiency. Particularly, a significant reduction in the switching losses is expected which should lead to improvements in power-weight ratios. Nevertheless, because of the high switching speed and the high current levels required by traction applications, the implementation of these new modules is critical. An original method is proposed to compare, in terms of stray inductance, several dc bus-bar designs. To evaluate the potential of these new devices, a first set of measurements, based on a single-pulse test-bench, was obtained. The switching behavior of SiC devices was well understood at turn-off and turn-on. To complete this work, the authors use an opposition method to compare Si-IGBT and SiC-MOSFET modules in voltage source inverter operation. For this purpose, a second test-bench, allowing electrical and thermal measurements, was developed. Experimental results confirm the theoretical loss-calculation of the single-pulse tests and the correct operation of up to three modules directly connected in parallel. This analysis provides guidelines for a full SiC inverter design, and prospects for developments in traction applications are presented."}
{"_id": "48fc3f5091faff4d17572ed49a24075d1d9da33f", "title": "A High-Speed Mesh of Tactile Sensors Fitting Arbitrary Surfaces", "text": "A tactile sensor is developed with the aim of covering a robot's entire structure, while reducing wiring requirement and ensuring high-speed response. The sensor detects the center point of load distribution on 2-D surfaces as well as the overall load. There are only four signal wires from the sensor. The sensor response time is nearly constant (within 1 ms) regardless of the number of detection elements, their placements or sensor areas. In this paper, the principles behind the operation of this sensor and the results of experiments using the sensor are described."}
{"_id": "49aba86ad0178ae9f85f6e40b625fc8b91410874", "title": "A PPG sensor for continuous cuffless blood pressure monitoring with self-adaptive signal processing", "text": "A new portable PPG-BP device designed for continuously measuring blood pressure (BP) without a cuff is proposed in this study. This continuous and long-time BP monitoring enabled herein by the proposed portable cuffless BP sensor. Towards the aforementioned goal, the sensor is designed capable of detecting in real time, non-invasively and continuously the temporal intravascular blood volume change based on the principle of photoplethysmograph (PPG) for estimating BP. The hardware of the sensor consists mainly of light emitting diodes (LEDs) in wavelengths of 660 nm, a photo detectors (PD), and also signal processing chips to read output signals of the PD. The PD readout circuit includes a PD pre-amplifier, a band-pass filter, a programmable gain amplifier (PGA), a microcontroller unit for calculation and a wireless module for communication. A laptop is also used to display continuous BPs and conducts statistical analysis and displaying results. 27 subjects participated in the experimental validation, in which the obtained BPs are calibrated by and then compared with the results from a commercial blood pressure monitor by OMRON. The resultant signal-to-noise ratio (SNR) is capable of rising more than 15%, correlation coefficient, R2, for systolic blood pressure (SBP) and diastolic blood pressure (DBP) are 0.89 and 0.98, respectively."}
{"_id": "35b7e8ad60cb357cd9457ca2687eb2ba37068bd6", "title": "How Visualization Layout Relates to Locus of Control and Other Personality Factors", "text": "Existing research suggests that individual personality differences are correlated with a user's speed and accuracy in solving problems with different types of complex visualization systems. We extend this research by isolating factors in personality traits as well as in the visualizations that could have contributed to the observed correlation. We focus on a personality trait known as \"locus of control\u201d (LOC), which represents a person's tendency to see themselves as controlled by or in control of external events. To isolate variables of the visualization design, we control extraneous factors such as color, interaction, and labeling. We conduct a user study with four visualizations that gradually shift from a list metaphor to a containment metaphor and compare the participants' speed, accuracy, and preference with their locus of control and other personality factors. Our findings demonstrate that there is indeed a correlation between the two: participants with an internal locus of control perform more poorly with visualizations that employ a containment metaphor, while those with an external locus of control perform well with such visualizations. These results provide evidence for the externalization theory of visualization. Finally, we propose applications of these findings to adaptive visual analytics and visualization evaluation."}
{"_id": "822544a8601845a758255975f72cd91c24f705d2", "title": "MVC architecture driven restructuring to achieve client-side web page composition", "text": "This paper presents a restructuring approach to relocating web page composition from servers to browsers for Java web applications. The objective is to reduce redundant manipulation and transfer of code/data that are shared by web pages. The reduction is carried out through a restructuring algorithm, effectively keeping consistency between source and target applications from the perspective of the model-view-controller (MVC) architecture, because the problem requires the target application to preserve the observable behavior of its source application. Case studies show that our restructuring tool can efficiently support the restructuring process."}
{"_id": "e321c0f7ccc738d3ad00ea92e9cf16a32a7aa071", "title": "High speed cascode flyback converter using multilayered coreless printed circuit board (PCB) step-down power transformer", "text": "In this paper, design and analysis of the high speed isolated cascode flyback converter using multilayered coreless PCB step down power transformer is presented. The converter is tested for the input voltage variation of 60\u2013120V with a nominal DC input voltage of 90V. The designed converter was simulated and tested successfully up to the output power level of 30W within the switching frequency range of 2.6\u20133.7MHz. The cascode flyback converter is compared with the single switch flyback converter in terms of operating frequency, gate drive power consumption, conduction losses and stresses on MOSFETs. The maximum energy efficiency of the cascode converter is approximately 81% with a significant improvement of about 3\u20134% compared to single switch flyback converter. The gate drive power consumption which is more dominant compared to conduction losses of the cascode converter using GaN MOSFET is found to be negligible compared to single switch flyback converter."}
{"_id": "43f87f70c86398ecd2093723f8fb9d5024a57b68", "title": "Optimal integration of energy storage in distribution networks", "text": "Energy storage, traditionally well established in the form of large scale pumped-hydro systems, is finding increased attraction in medium and smaller scale systems. Such expansion is entirely complementary to the wider uptake of intermittent renewable resources and to distributed generation in general, which are likely to present a whole range of new business opportunities for storage systems and their suppliers. In the paper, by assuming that Distribution System Operator has got the ownership and operation of storage, a new software planning tool for distribution networks able to define the optimal placement, rating and control strategies of distributed storage systems that minimize the overall network cost is proposed. This tool will assist the System Operators in defining the better integration strategies of distributed storage systems in distribution networks and in assessing their potential as an option for a more efficient operation and development of future electricity distribution networks."}
{"_id": "895719a72d72cfaf4b47d95c474a708934b9d42d", "title": "ACES: Automatic Compartments for Embedded Systems", "text": "Securing the rapidly expanding Internet of Things (IoT) is critical. Many of these \u201cthings\u201d are vulnerable baremetal embedded systems where the application executes directly on hardware without an operating system. Unfortunately, the integrity of current systems may be compromised by a single vulnerability, as recently shown by Google\u2019s P0 team against Broadcom\u2019s WiFi SoC. We present ACES (Automatic Compartments for Embedded Systems), an LLVM-based compiler that automatically infers and enforces inter-component isolation on bare-metal systems, thus applying the principle of least privileges. ACES takes a developer-specified compartmentalization policy and then automatically creates an instrumented binary that isolates compartments at runtime, while handling the hardware limitations of baremetal embedded devices. We demonstrate ACES\u2019 ability to implement arbitrary compartmentalization policies by implementing three policies and comparing the compartment isolation, runtime overhead, and memory overhead. Our results show that ACES\u2019 compartments can have low runtime overheads (13% on our largest test application), while using 59% less Flash, and 84% less RAM than the Mbed \u03bcVisor\u2014the current state-of-theart compartmentalization technique for bare-metal systems. ACES \u2018 compartments protect the integrity of privileged data, provide control-flow integrity between compartments, and reduce exposure to ROP attacks by 94.3% compared to \u03bcVisor."}
{"_id": "268a88fedcf949ffda3bc0f5573ad5f1c8b0c29d", "title": "On the speedup of single-disk failure recovery in XOR-coded storage systems: Theory and practice", "text": "Modern storage systems stripe redundant data across multiple disks to provide availability guarantees against disk failures. One form of data redundancy is based on XOR-based erasure codes, which use only XOR operations for encoding and decoding. In addition to providing failure tolerance, a storage system must also provide fast failure recovery to avoid data unavailability. We consider the problem of speeding up the recovery of a single-disk failure for arbitrary XOR-based erasure codes. We address this problem from both theoretical and practical perspectives. We propose a replace recovery algorithm, which uses a hill-climbing technique to search for a fast recovery solution, such that the solution search can be completed within a short time period. We further implement our replace recovery algorithm atop a parallelized architecture to justify its practicality. We experiment our replace recovery algorithm and its parallelized implementation on a networked storage system testbed, and demonstrate that our replace recovery algorithm uses less recovery time than the conventional approach."}
{"_id": "c7f0ecde0907abfe033d0b347c62ec2b5761043a", "title": "Internet of Cloud: Security and Privacy issues", "text": "The synergy between the cloud and the IoT has emerged largely due to the cloud having attributes which directly benefit the IoT and enable its continued growth. IoT adopting Cloud services has brought new security challenges. In this book chapter, we pursue two main goals: 1) to analyse the different components of Cloud computing and the IoT and 2) to present security and privacy problems that these systems face. We thoroughly investigate current security and privacy preservation solutions that exist in this area, with an eye on the Industrial Internet of Things, discuss open issues and propose future directions."}
{"_id": "41809d7fc7c41cf4d0afd5823034b5c0ac2949aa", "title": "K-means clustering via principal component analysis", "text": "Principal component analysis (PCA) is a widely used statistical technique for unsupervised dimension reduction. K-means clustering is a commonly used data clustering for performing unsupervised learning tasks. Here we prove that principal components are the continuous solutions to the discrete cluster membership indicators for K-means clustering. New lower bounds for K-means objective function are derived, which is the total variance minus the eigenvalues of the data covariance matrix. These results indicate that unsupervised dimension reduction is closely related to unsupervised learning. Several implications are discussed. On dimension reduction, the result provides new insights to the observed effectiveness of PCA-based data reductions, beyond the conventional noise-reduction explanation that PCA, via singular value decomposition, provides the best low-dimensional linear approximation of the data. On learning, the result suggests effective techniques for K-means data clustering. DNA gene expression and Internet newsgroups are analyzed to illustrate our results. Experiments indicate that the new bounds are within 0.5-1.5% of the optimal values."}
{"_id": "03f3a4a40644258628f2361d848a3bb28b42c1ae", "title": "Cartesian K-Means", "text": "A fundamental limitation of quantization techniques like the k-means clustering algorithm is the storage and run-time cost associated with the large numbers of clusters required to keep quantization errors small and model fidelity high. We develop new models with a compositional parameterization of cluster centers, so representational capacity increases super-linearly in the number of parameters. This allows one to effectively quantize data using billions or trillions of centers. We formulate two such models, Orthogonal k-means and Cartesian k-means. They are closely related to one another, to k-means, to methods for binary hash function optimization like ITQ (Gong and Lazebnik, 2011), and to Product Quantization for vector quantization (Jegou et al., 2011). The models are tested on large-scale ANN retrieval tasks (1M GIST, 1B SIFT features), and on codebook learning for object recognition (CIFAR-10)."}
{"_id": "0fbb184871bd7660bc579178848d58beb8288b7d", "title": "Aggregating local descriptors into a compact image representation", "text": "We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms."}
{"_id": "e2d6a601e1f047322c2a524e424dd9a94a71b7ae", "title": "Internet of Things Security Analysis", "text": "Internet of Things is an up-and-coming information and technology industry, and in such an environment of the concept and the content and the extension of Internet of Things are not very distinctive, the project of Internet of Things which is a piece of small area with small-scale and self-system obtain gratifying achievement and bright future, it can promote the development of Internet of Things in some extent. But there are some serious hidden danger and potential crisis problems. The paper focuses on the application of Internet of Things in the nation and even in the global in the future, analysing the existed security risks of the Internet of Things 's network points, transmission, finally we propose some suggestive solutions due to these problems."}
{"_id": "fc7b81e9a3c1552b846e6728cb28c14f75db6e08", "title": "Open Targets: a platform for therapeutic target identification and validation", "text": "We have designed and developed a data integration and visualization platform that provides evidence about the association of known and potential drug targets with diseases. The platform is designed to support identification and prioritization of biological targets for follow-up. Each drug target is linked to a disease using integrated genome-wide data from a broad range of data sources. The platform provides either a target-centric workflow to identify diseases that may be associated with a specific target, or a disease-centric workflow to identify targets that may be associated with a specific disease. Users can easily transition between these target- and disease-centric workflows. The Open Targets Validation Platform is accessible at https://www.targetvalidation.org."}
{"_id": "cf208a4d519576f355c963a5bc79f687fcf216fe", "title": "3D velocity model and ray tracing of antenna array GPR", "text": "Migration is an important signal processing method that can improve signal-clutter ratio and reconstruct subsurface image. Diffraction stacking migration and Kirchhoff migration sum amplitudes along the migration trajectory, which generally is hyperbolic. But when the ground surface varies acutely, the migration trajectory is not hyperbolic. To computer the migration trajectory need the technique of ray tracing. We introduce a method of ray tracing based on 3D velocity model. Firstly, we build the 3D velocity model depending on the estimation of both ground surface topography and velocities. Then we compute the travel time between transmitter, receiver and each subsurface scattering point, and search the propagation ray depending on the Fermat's principle. The method is tested by an experiment data acquired by the stepped-frequency (SF) CMP antenna GPR system. The target is a metal ball that is buried under a sand mound. A nice result of ray tracing is shown in the case."}
{"_id": "8a2e812fc3c192854cf319cbf9716107a2c00e05", "title": "Active noise control: a tutorial review", "text": "Active noise control (ANC) is achieved by introducing a canceling \u201cantinoise\u201d wave through an appropriate array of secondary sources. These secondary sources are interconnected through an electronic system using a specific signal processing algorithm for the particular cancellation scheme. ANC has application to a wide variety of problems in manufacturing, industrial operations, and consumer products. The emphasis of this paper is on the practical aspects of ANC systems in terms of adaptive signal processing and digital signal processing (DSP) implementation for real-world applications. In this paper, the basic adaptive algorithm for ANC is developed and analyzed based on single-channel broad-band feedforward control. This algorithm is then modified for narrow-band feedforward and adaptive feedback control. In turn, these single-channel ANC algorithms are expanded to multiple-channel cases. Various online secondary-path modeling techniques and special adaptive algorithms, such as lattice, frequency-domain, subband, and recursive-least-squares, are also introduced. Applications of these techniques to actual problems are highlighted by several examples."}
{"_id": "38cb7a9cf108588d86154e270239926270603f48", "title": "An Enhanced PRACH Preamble Detector for Cellular IoT Communications", "text": "We propose an enhanced physical random access channel (PRACH) preamble detector, which efficiently identifies non-orthogonal preambles while suppressing the noise rise. The proposed PRACH preamble detector reconstructs preambles from the received signal and efficiently utilizes them to improve the detection performance. Through simulations, we verify that the proposed detector successfully detects the desired preambles while suppressing the undesired noise rise, and, thus, it can detect ten non-orthogonal preambles simultaneously with an extremely low mis-detection probability lower than $10^{-5}$ , while achieving a false alarm probability of approximately $10^{-3}$ ."}
{"_id": "d161f05ce278d0c66fc44041d4ebacc5110e0fc3", "title": "Recognition of Affect, Judgment, and Appreciation in Text", "text": "The main task we address in our research is classification of text using fine-grained attitude labels. The developed @AM system relies on the compositionality principle and a novel approach based on the rules elaborated for semantically distinct verb classes. The evaluation of our method on 1000 sentences, that describe personal experiences, showed promising results: average accuracy on the finegrained level (14 labels) was 62%, on the middle level (7 labels) \u2013 71%, and on the top level (3 labels) \u2013 88%."}
{"_id": "6f5da0bb1327e65facadd98243d6774f013ba006", "title": "Avionic Data acquisition system using MIL STD 1553B controller with IRIG-B timecode decoder", "text": "Many critical parameters of rockets and satellite are obtained by means of telemetry system present onboard. For meaningful analysis of data these parameters are to be time stamped. This is achieved by means IRIG B time code proposed by Inter-range instrumentation group. The data obtained need to be transmitted to various sub systems present in ground. Bus based configuration is planned for this inter connection. This is achieved by MIL STD 1553B protocol. A data bus is used to provide a medium for the exchange of data and information between various systems. The MIL STD 1553B bus consists of Bus Controller, Remote Terminals and transmission media. The data received from telemetry system are Manchester encoded signals which needs to be decoded. The objective of the project was to design and develop MIL STD 1553B Bus controller to transmit command and data words to various other remote terminals. An IRIG B time decoder module was developed to decode the timing information transmitted from a reference so that the Data acquisition system can be used to process the data worldwide. A Manchester Data Decoder for decoding the data was developed. The system had better synchronization capability and clock independence."}
{"_id": "1d918bd865abc1f85394b144b84b4b77dcced4e2", "title": "Detecting Semantic Parts on Partially Occluded Objects", "text": "In this paper, we address the task of detecting semantic parts on partially occluded objects. We consider a scenario where the model is trained using non-occluded images but tested on occluded images. The motivation is that there are infinite number of occlusion patterns in real world, which cannot be fully covered in the training data. So the models should be inherently robust and adaptive to occlusions instead of fitting / learning the occlusion patterns in the training data. Our approach detects semantic parts by accumulating the confidence of local visual cues. Specifically, the method uses a simple voting method, based on log-likelihood ratio tests and spatial constraints, to combine the evidence of local cues. These cues are called visual concepts, which are derived by clustering the internal states of deep networks. We evaluate our voting scheme on the VehicleSemanticPart dataset with dense part annotations. We randomly place two, three or four irrelevant objects onto the target object to generate testing images with various occlusions. Experiments show that our algorithm outperforms several competitors in semantic part detection when occlusions are present. This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. WANG ET.AL.: DETECTING SEMANTIC PARTS ON PARTIALLY OCCLUDED OBJECTS 1 Detecting Semantic Parts on Partially Occluded Objects Jianyu Wang*1 wjyouch@gmail.com Cihang Xie*2 cihangxie306@gmail.com Zhishuai Zhang*2 zhshuai.zhang@gmail.com Jun Zhu1 zhujun.sjtu@gmail.com Lingxi Xie( )2 198808xc@gmail.com Alan L. Yuille2 alan.l.yuille@gmail.com 1 Baidu Research (USA), Sunnyvale, CA 94089 USA 2 Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218 USA . . . . * This work was done when the first author was a Ph.D. student at UCLA. The first three authors contributed equally to this work."}
{"_id": "32e7f0863e7c56cfced89abedaee46e2288bc127", "title": "PACRR: A Position-Aware Neural IR Model for Relevance Matching", "text": "In order to adopt deep learning for information retrieval, models are needed that can capture all relevant information required to assess the relevance of a document to a given user query. While previous works have successfully captured unigram term matches, how to fully employ position-dependent information such as proximity and term dependencies has been insufficiently explored. In this work, we propose a novel neural IR model named PACRR aiming at better modeling position-dependent interactions between a query and a document. Extensive experiments on six years\u2019 TREC Web Track data confirm that the proposed model yields better results under multiple benchmarks."}
{"_id": "b7dbd8038ec67c28af40fa5199008789446b1202", "title": "Resume Parser: Semi-structured Chinese Document Analysis", "text": "Semi-structured Chinese document anlysis is the most diffcult task for complex structure and Chinese semantics. According to the generic characteristics of the semi-structured document and the specific characteristics of the resume document, the paper researched on resume document block anlysis based on pattern matching, multi-level information identification and feedback control algorithms was also prompted. Based on the research, Resume Parser system was implemented for ChinaHR, which is the biggest recruitment website. It can read, analysis, retrieval and store the information automatically. According to all kinds of experienments results, the accuracy and efficiency of this system can generally satisfy the practical requirements. As the research on the processing of the semi-structured document, it will not only be as a directive of the further research on the resume analysis, but also be as the reference to other form of the semi-structured document."}
{"_id": "180208256d32be9d25dbe79092e4c49ec400780f", "title": "Multi-modal feature fusion for action recognition in RGB-D sequences", "text": "Microsoft Kinect's output is a multi-modal signal which gives RGB videos, depth sequences and skeleton information simultaneously. Various action recognition techniques focused on different single modalities of the signals and built their classifiers over the features extracted from one of these channels. For better recognition performance, it's desirable to fuse these multi-modal information into an integrated set of discriminative features. Most of current fusion methods merged heterogeneous features in a holistic manner and ignored the complementary properties of these modalities in finer levels. In this paper, we proposed a new hierarchical bag-of-words feature fusion technique based on multi-view structured spar-sity learning to fuse atomic features from RGB and skeletons for the task of action recognition."}
{"_id": "89d1b3bfaacfc384fef6c235f7add1b5b67887dd", "title": "Query suggestion for E-commerce sites", "text": "Query suggestion module is an integral part of every search engine. It helps search engine users narrow or broaden their searches. Published work on query suggestion methods has mainly focused on the web domain. But, the module is also popular in the domain of e-commerce for product search. In this paper, we discuss query suggestion and its methodologies in the context of e-commerce search engines. We show that dynamic inventory combined with long and sparse tail of query distribution poses unique challenges to build a query suggestion method for an e-commerce marketplace. We compare and contrast the design of a query suggestion system for web search engines and e-commerce search engines. Further, we discuss interesting measures to quantify the effectiveness of our query suggestion methodologies. We also describe the learning gained from exposing our query suggestion module to a vibrant community of millions of users."}
{"_id": "4c9dcdb656cd066f244eb41a49c8676de8ec1abf", "title": "Virtual Robot Experimentation Platform V-REP: A Versatile 3D Robot Simulator", "text": "From exploring planets to cleaning homes, the reach and versatility of robotics is vast. The integration of actuation, sensing and control makes robotics systems powerful, but complicates their simulation. This paper introduces a modular and decentralized architecture for robotics simulation. In contrast to centralized approaches, this balances functionality, provides more diversity, and simplifies connectivity between (independent) calculation modules. As the Virtual Robot Experimentation Platform (V-REP) demonstrates, this gives a smallfootprint 3D robot simulator that concurrently simulates control, actuation, sensing and monitoring. Its distributed and modular approach are ideal for complex scenarios in which a diversity of sensors and actuators operate asynchronously with various rates and characteristics. This allows for versatile prototyping applications including systems verification, safety/remote monitoring, rapid algorithm development, and factory automation simulation."}
{"_id": "def29939ce675ae9d9d855c661cbf85e34a1daaf", "title": "Detecting Clickbait in Online Social Media: You Won't Believe How We Did It", "text": "In this paper, we propose an approach for the detection of clickbait posts in online social media (OSM). Clickbait posts are short catchy phrases that attract a user\u2019s attention to click to an article. The approach is based on a machine learning (ML) classifier capable of distinguishing between clickbait and legitimate posts published in OSM. The suggested classifier is based on a variety of features, including image related features, linguistic analysis, and methods for abuser detection. In order to evaluate our method, we used two datasets provided by Clickbait Challenge 2017. The best performance obtained by the ML classifier was an AUC of 0.8, accuracy of 0.812, precision of 0.819, and recall of 0.966. In addition, as opposed to previous studies, we found that clickbait post titles are statistically significant shorter than legitimate post titles. Finally, we found that counting the number of formal English words in the given contentis useful for clickbait detection."}
{"_id": "0a71b71421d8a33c41625963d19d5df85685dffc", "title": "Analyzing Behavior Specialized Acceleration", "text": "Hardware specialization has become a promising paradigm for overcoming the inefficiencies of general purpose microprocessors. Of significant interest are Behavioral Specialized Accelerators (BSAs), which are designed to efficiently execute code with only certain properties, but remain largely configurable or programmable. The most important strength of BSAs -- their ability to target a wide variety of codes -- also makes their interactions and analysis complex, raising the following questions: can multiple BSAs be composed synergistically, what are their interactions with the general purpose core, and what combinations favor which workloads? From a methodological standpoint, BSAs are also challenging, as they each require ISA development, compiler and assembler extensions, and either simulator or RTL models.\n To study the potential of BSAs, we propose a novel modeling technique called the Transformable Dependence Graph (TDG) - a higher level alternative to the time-consuming traditional compiler+simulator approach, while still enabling detailed microarchitectural models for both general cores and accelerators. We then propose a multi-BSA organization, called ExoCore, which we model and study using the TDG. A design space exploration reveals that an ExoCore organization can push designs beyond the established energy-performance frontiers for general purpose cores. For example, a 2-wide OOO processor with three BSAs matches the performance of a conventional 6-wide OOO core, has 40% lower area, and is 2.6x more energy efficient."}
{"_id": "413a03a146e6f7b16c11e73243d83e6f1a6627a3", "title": "Breaking NLI Systems with Sentences that Require Simple Lexical Inferences", "text": "We create a new NLI test set that shows the deficiency of state-of-the-art models in inferences that require lexical and world knowledge. The new examples are simpler than the SNLI test set, containing sentences that differ by at most one word from sentences in the training set. Yet, the performance on the new test set is substantially worse across systems trained on SNLI, demonstrating that these systems are limited in their generalization ability, failing to capture many simple inferences."}
{"_id": "d4b726af56e079623a4b877e6561176d8069ac17", "title": "In Defense of Active Part Selection for Fine-Grained Classification", "text": "Fine-grained classification is a recognition task where subtle differences distinguish between different classes. To tackle this classification problem, part-based classification methods are mostly used. Partbased methods learn an algorithm to detect parts of the observed object and extract local part features for the detected part regions. In this paper we show that not all extracted part features are always useful for the classification. Furthermore, given a part selection algorithm that actively selects parts for the classification we estimate the upper bound for the fine-grained recognition performance. This upper bound lies way above the current state-of-the-art recognition performances which shows the need for such an active part selection method. Though we do not present such an active part selection algorithm in this work, we propose a novel method that is required by active part selection and enables sequential part-based classification. This method uses a support vector machine (SVM) ensemble and allows to classify an image based on arbitrary number of part features. Additionally, the training time of our method does not increase with the amount of possible part features. This fact allows to extend the SVM ensemble with an active part selection component that operates on a large amount of part feature proposals without suffering from increasing training time."}
{"_id": "0f12c93d685ec82d23f2c43d555e7687f80e5b7c", "title": "Detecting unexpected obstacles for self-driving cars: Fusing deep learning and geometric modeling", "text": "The detection of small road hazards, such as lost cargo, is a vital capability for self-driving cars. We tackle this challenging and rarely addressed problem with a vision system that leverages appearance, contextual as well as geometric cues. To utilize the appearance and contextual cues, we propose a new deep learning-based obstacle detection framework. Here a variant of a fully convolutional network is proposed to predict a pixel-wise semantic labeling of (i) free-space, (ii) on-road unexpected obstacles, and (iii) background. The geometric cues are exploited using a state-of-the-art detection approach that predicts obstacles from stereo input images via model-based statistical hypothesis tests. We present a principled Bayesian framework to fuse the semantic and stereo-based detection results. The mid-level Stixel representation is used to describe obstacles in a flexible, compact and robust manner. We evaluate our new obstacle detection system on the Lost and Found dataset, which includes very challenging scenes with obstacles of only 5 cm height. Overall, we report a major improvement over the state-of-the-art, with a performance gain of 27.4%. In particular, we achieve a detection rate of over 90% for distances of up to 50 m. Our system operates at 22 Hz on our self-driving platform."}
{"_id": "7ab69245f85b62df5faa9248004b677952dfaa4f", "title": "SVM Based Speaker Verification using a GMM Supervector Kernel and NAP Variability Compensation", "text": "Gaussian mixture models with universal backgrounds (UBMs) have become the standard method for speaker recognition. Typically, a speaker model is constructed by MAP adaptation of the means of the UBM. A GMM supervector is constructed by stacking the means of the adapted mixture components. A recent discovery is that latent factor analysis of this GMM supervector is an effective method for variability compensation. We consider this GMM supervector in the context of support vector machines. We construct a support vector machine kernel using the GMM supervector. We show similarities based on this kernel between the method of SVM nuisance attribute projection (NAP) and the recent results in latent factor analysis. Experiments on a NIST SRE 2005 corpus demonstrate the effectiveness of the new technique"}
{"_id": "25dc67f6cbd30c0791e3f33aa6612d1b7e8080cc", "title": "3D-Printing Spatially Varying BRDFs", "text": "A new method fabricates custom surface reflectance and spatially varying bidirectional reflectance distribution functions (svBRDFs). Researchers optimize a microgeometry for a range of normal distribution functions and simulate the resulting surface's effective reflectance. Using the simulation's results, they reproduce an input svBRDF's appearance by distributing the microgeometry on the printed material's surface. This method lets people print svBRDFs on planar samples with current 3D printing technology, even with a limited set of printing materials. It extends naturally to printing svBRDFs on arbitrary shapes."}
{"_id": "2c78b1689ddb99825add3184d87bc0395b202f80", "title": "Cognitive Radio for Smart Grids: Survey of Architectures, Spectrum Sensing Mechanisms, and Networking Protocols", "text": "Traditional power grids are currently being transformed into smart grids (SGs). SGs feature multi-way communication among energy generation, transmission, distribution, and usage facilities. The reliable, efficient, and intelligent management of complex power systems requires integration of high-speed, reliable, and secure data information and communication technology into the SGs to monitor and regulate power generation and usage. Despite several challenges, such as trade-offs between wireless coverage and capacity as well as limited spectral resources in SGs, wireless communication is a promising SG communications technology. Cognitive radio networks (CRNs) in particular are highly promising for providing timely SG wireless communications by utilizing all available spectrum resources. We provide in this paper a comprehensive survey on the CRN communication paradigm in SGs, including the system architecture, communication network compositions, applications, and CR-based communication technologies. We highlight potential applications of CR-based SG systems. We survey CR-based spectrum sensing approaches with their major classifications. We also provide a survey on CR-based routing and MAC protocols, and describe interference mitigation schemes. We furthermore present open issues and research challenges faced by CR-based SG networks along with future directions."}
{"_id": "fc6ef16ef99628c7ab716acd10f8df2e96b3ac59", "title": "The individual and social dynamics of knowledge sharing: an exploratory study", "text": "Purpose \u2013 This paper aims to examine how knowledge sharing behavior is influenced by three sets of dynamics: a rational calculus that weighs the costs and benefits of sharing; a dispositional preference that favors certain patterns of sharing outcomes; and a relational effect based on working relationships. Design/methodology/approach \u2013 Concepts from social exchange theory, social value orientation, and leader-member exchange theory are applied to analyze behavioral intentions to share knowledge. The study population consists of employees of a large pension fund in Canada. Participants answered a survey that used allocation games and situational vignettes to measure social value orientation, propensity to share knowledge, and perception of cost and benefit. Findings \u2013 The results suggest that personal preferences about the distribution of sharing outcomes, individual perceptions about costs and benefits, and structural relationship with knowledge recipients, all affect knowledge sharing behavior significantly. Notably, it was found that propensity to share knowledge is positively related to perceived benefit to the recipient, thus suggesting that evaluation of cost and benefit in social exchange is not limited to self-interest, but is also influenced by perceived recipient benefit. Moreover, it was found that the relationship with the sharing target (superior or colleague) also influenced sharing. Originality/value \u2013 Most studies emphasize the organizational benefits of knowledge sharing. This study examines knowledge sharing from the perspective of the individual who approaches knowledge sharing as a social exchange that involves perceptions of costs and benefits, preferences about sharing outcomes, and relationship with the sharing target. The study also introduces innovative methods to measure social value orientation and propensity to share knowledge."}
{"_id": "3b632a49509c0c46ab0e0c0780b2170524f7c0ac", "title": "Demand Response as a Market Resource Under the Smart Grid Paradigm", "text": "Demand response (DR), distributed generation (DG), and distributed energy storage (DES) are important ingredients of the emerging smart grid paradigm. For ease of reference we refer to these resources collectively as distributed energy resources (DER). Although much of the DER emerging under smart grid are targeted at the distribution level, DER, and more specifically DR resources, are considered important elements for reliable and economic operation of the transmission system and the wholesale markets. In fact, viewed from transmission and wholesale operations, sometimes the term \u00bfvirtual power plant\u00bf is used to refer to these resources. In the context of energy and ancillary service markets facilitated by the independent system operators (ISOs)/regional transmission organizations (RTOs), the market products DER/DR can offer may include energy, ancillary services, and/or capacity, depending on the ISO/RTO market design and applicable operational standards. In this paper we first explore the main industry drivers of smart grid and the different facets of DER under the smart grid paradigm. We then concentrate on DR and summarize the existing and evolving programs at different ISOs/RTOs and the product markets they can participate in. We conclude by addressing some of the challenges and potential solutions for implementation of DR under smart grid and market paradigms."}
{"_id": "af954a7a097abaa828bc2b4080c2d8c1b6acb853", "title": "A survey of communication/networking in Smart Grids", "text": "Smart Grid is designed to integrate advanced communication/networking technologies into electrical power grids to make them \u2018\u2018smarter\u2019\u2019. Current situation is that most of the blackouts and voltage sags could be prevented if we have better and faster communication devices and technologies for the electrical grid. In order to make the current electrical power grid a Smart Grid, the design and implementation of a new communication infrastructure for the grid are two important fields of research. However, Smart Grid projects have only been proposed in recent years and only a few proposals for forwardlooking requirements and initial research work have been offered in this field. No any systematic reviews of communication/networking in Smart Grids have been conducted yet. Therefore, we conduct a systematic review of communication/networking technologies in Smart Grid in this paper, including communication/networking architecture, different communication technologies thatwould be employed into this architecture, quality of service (QoS), optimizing utilization of assets, control and management, etc. \u00a9 2011 Elsevier B.V. All rights reserved."}
{"_id": "b746fc72c0bd3f0a94145e375cc267e1128ba32e", "title": "An Optimal Power Scheduling Method for Demand Response in Home Energy Management System", "text": "With the development of smart grid, residents have the opportunity to schedule their power usage in the home by themselves for the purpose of reducing electricity expense and alleviating the power peak-to-average ratio (PAR). In this paper, we first introduce a general architecture of energy management system (EMS) in a home area network (HAN) based on the smart grid and then propose an efficient scheduling method for home power usage. The home gateway (HG) receives the demand response (DR) information indicating the real-time electricity price that is transferred to an energy management controller (EMC). With the DR, the EMC achieves an optimal power scheduling scheme that can be delivered to each electric appliance by the HG. Accordingly, all appliances in the home operate automatically in the most cost-effective way. When only the real-time pricing (RTP) model is adopted, there is the possibility that most appliances would operate during the time with the lowest electricity price, and this may damage the entire electricity system due to the high PAR. In our research, we combine RTP with the inclining block rate (IBR) model. By adopting this combined pricing model, our proposed power scheduling method would effectively reduce both the electricity cost and PAR, thereby, strengthening the stability of the entire electricity system. Because these kinds of optimization problems are usually nonlinear, we use a genetic algorithm to solve this problem."}
{"_id": "d117ab7678e040d403f381505c22f72cb4c2b5ed", "title": "Optimal Residential Load Control With Price Prediction in Real-Time Electricity Pricing Environments", "text": "Real-time electricity pricing models can potentially lead to economic and environmental advantages compared to the current common flat rates. In particular, they can provide end users with the opportunity to reduce their electricity expenditures by responding to pricing that varies with different times of the day. However, recent studies have revealed that the lack of knowledge among users about how to respond to time-varying prices as well as the lack of effective building automation systems are two major barriers for fully utilizing the potential benefits of real-time pricing tariffs. We tackle these problems by proposing an optimal and automatic residential energy consumption scheduling framework which attempts to achieve a desired trade-off between minimizing the electricity payment and minimizing the waiting time for the operation of each appliance in household in presence of a real-time pricing tariff combined with inclining block rates. Our design requires minimum effort from the users and is based on simple linear programming computations. Moreover, we argue that any residential load control strategy in real-time electricity pricing environments requires price prediction capabilities. This is particularly true if the utility companies provide price information only one or two hours ahead of time. By applying a simple and efficient weighted average price prediction filter to the actual hourly-based price values used by the Illinois Power Company from January 2007 to December 2009, we obtain the optimal choices of the coefficients for each day of the week to be used by the price predictor filter. Simulation results show that the combination of the proposed energy consumption scheduling design and the price predictor filter leads to significant reduction not only in users' payments but also in the resulting peak-to-average ratio in load demand for various load scenarios. Therefore, the deployment of the proposed optimal energy consumption scheduling schemes is beneficial for both end users and utility companies."}
{"_id": "15da2e12df4168a5afe6bb897e1f52a47451b0cd", "title": "A continuum among logarithmic, linear, and exponential functions, and its potential to improve generalization in neural networks", "text": "We present the soft exponential activation function for artificial neural networks that continuously interpolates between logarithmic, linear, and exponential functions. This activation function is simple, differentiable, and parameterized so that it can be trained as the rest of the network is trained. We hypothesize that soft exponential has the potential to improve neural network learning, as it can exactly calculate many natural operations that typical neural networks can only approximate, including addition, multiplication, inner product, distance, and sinusoids."}
{"_id": "e78ac6617fee67cfb981423cb6d42526b51bb9db", "title": "Low-Resolution Face Recognition of Multi-Scale Blocking CS-LBP and Weighted PCA", "text": "A novel method is proposed in this paper to improve the recognition accuracy of Local Binary Pattern (LBP) on low-resolution face recognition. More precise descriptors and e\u00aeectively face features can be extracted by combining multi-scale blocking center symmetric local binary pattern (CS-LBP) based on Gaussian pyramids and weighted principal component analysis (PCA) on low-resolution condition. Firstly, the features statistical histograms of face images are calculated by multi-scale blocking CS-LBP operator. Secondly, the stronger classi \u0304cation and lower dimension features can be got by applying weighted PCA algorithm. Finally, the di\u00aeerent classi \u0304ers are used to select the optimal classi \u0304cation categories of low-resolution face set and calculate the recognition rate. The results in the ORL human face databases show that recognition rate can get 89.38% when the resolution of face image drops to 12 10 pixel and basically satisfy the practical requirements of recognition. The further comparison of other descriptors and experiments from videos proved that the novel algorithm can improve recognition accuracy."}
{"_id": "290f6b98d15800753329b156560c32ffae9ba166", "title": "Frames: A Corpus for Adding Memory to Goal-Oriented Dialogue Systems", "text": "This paper proposes a new dataset, Frames, composed of 1369 human-human dialogues with an average of 15 turns per dialogue. This corpus contains goal-oriented dialogues between users who are given some constraints to book a trip and assistants who search a database to find appropriate trips. The users exhibit complex decision-making behaviour which involve comparing trips, exploring different options, and selecting among the trips that were discussed during the dialogue. To drive research on dialogue systems towards handling such behaviour, we have annotated and released the dataset and we propose in this paper a task called frame tracking. This task consists of keeping track of different semantic frames throughout each dialogue. We propose a rule-based baseline and analyse the frame tracking task through this baseline."}
{"_id": "14815c67e4d215acf9558950e2762759229fe277", "title": "Beyond 'Caveman Communities': Hubs and Spokes for Graph Compression and Mining", "text": "Given a real world graph, how should we lay-out its edges? How can we compress it? These questions are closely related, and the typical approach so far is to find clique-like communities, like the `cavemen graph', and compress them. We show that the block-diagonal mental image of the `cavemen graph' is the wrong paradigm, in full agreement with earlier results that real world graphs have no good cuts. Instead, we propose to envision graphs as a collection of hubs connecting spokes, with super-hubs connecting the hubs, and so on, recursively. Based on the idea, we propose the Slash Burn method (burn the hubs, and slash the remaining graph into smaller connected components). Our view point has several advantages: (a) it avoids the `no good cuts' problem, (b) it gives better compression, and (c) it leads to faster execution times for matrix-vector operations, which are the back-bone of most graph processing tools. Experimental results show that our Slash Burn method consistently outperforms other methods on all datasets, giving good compression and faster running time."}
{"_id": "cced0f6594f90a1702ebefd233daea4af36ded5e", "title": "Advanced universal remote controller for home automation and security", "text": "There has been inconvenience in controlling each digital home appliance which requires its own remote controller. In this paper, we present an advanced universal remote controller (URC) with the total solution for home automation and security. All kinds of home appliances can be controlled with the URC, which can be also connected to a PC dealing with Internet as well. To use the URC, we need several receivers with wired or wireless communication methods to be connected to all appliances. The receivers have many channels and IDs to control many appliances at the same time and to support multi-zone services. In addition, we propose a PC-based interface for end-users to use the URC conveniently. With the proposed URC, we can easily construct a ubiquitous home automation and security environment with the total solution. Furthermore, this solution can be applied to the automated control of all kinds of appliances installed within buildings for companies, schools, hospitals, and so on."}
{"_id": "69f955a0a43b79790a061cca5470abebde577c06", "title": "LSDSCC: a Large Scale Domain-Specific Conversational Corpus for Response Generation with Diversity Oriented Evaluation Metrics", "text": "It has been proven that automatic conversational agents can be built up using the Endto-End Neural Response Generation (NRG) framework, and such a data-driven methodology requires a large number of dialog pairs for model training and reasonable evaluation metrics for testing. This paper proposes a Large Scale Domain-Specific Conversational Corpus (LSDSCC) composed of high-quality queryresponse pairs extracted from the domainspecific online forum, with thorough preprocessing and cleansing procedures. Also, a testing set, including multiple diverse responses annotated for each query, is constructed, and on this basis, the metrics for measuring the diversity of generated results are further presented. We evaluate the performances of neural dialog models with the widely applied diversity boosting strategies on the proposed dataset. The experimental results have shown that our proposed corpus can be taken as a new benchmark dataset for the NRG task, and the presented metrics are promising to guide the optimization of NRG models by quantifying the diversity of the generated responses reasonably."}
{"_id": "1057a289a1438a6a7caccb84d4fff7daa7b779d9", "title": "Litz wire design for wireless power transfer in electric vehicles", "text": "Eddy current losses in wireless charging systems for electric vehicles must be minimized in consideration of constraints like the available space, the strand diameter in litz wire and the cost. A mathematical model for eddy current losses is needed to find the optimum geometrical design for a given application. This paper gives the fundamental equations to calculate skin effect, proximity effect and dc losses in litz wire. Litz wire consists of strands, which are twisted to form multiple bundles. Therefore eddy current losses occur not only on the strand level but the bundle level as well. The losses caused by different effects and bundle levels are compared, and the influence of strand diameter and strand number on the total loss is examined. Finally, aluminum litz wire is analyzed. It becomes apparent that the overall loss can be reduced with the aid of replacing copper litz wire by aluminum in some areas of application. Material choice and geometrical design require detailed knowledge of the systems nominal operation point."}
{"_id": "760206dbc361642fc82eaf254d61d68b3ff0eedf", "title": "The New DLR Flight Dynamics Library Gertjan Looye", "text": "An overview of the new Modelica Flight Dynamics Library of the German Aerospace Center DLR is given. This library is intended for construction of multi-disciplinary flight dynamics models of rigid and flexible flight vehicles. The environment models provide the functionality to cover on-ground operations up to flight at high speeds and high altitudes. The resulting models may be used in various fields and stages of the aircraft development process, like flight control law design, as well as for real-time flight simulation."}
{"_id": "cd8db517b9a73274c0d69c39db44d4be9bd57f0c", "title": "The Marulan Data Sets: Multi-sensor Perception in a Natural Environment with Challenging Conditions", "text": "This paper presents large, accurately calibrated and timesynchronised data sets, gathered outdoors in controlled and variable environmental conditions, using an unmanned ground vehicle (UGV), equipped with a wide variety of sensors. These include four 2D laser scanners, a radar scanner, a colour camera and an infrared camera. It provides a full description of the system used for data collection and the types of environments and conditions in which these data sets have been gathered, which include the presence of airborne dust, smoke and rain."}
{"_id": "c3fc1ef004edf47d494bae14cb5f0bd5f663a222", "title": "Boosting in the presence of label noise", "text": "Boosting is known to be sensitive to label noise. We studied two approaches to improve AdaBoost\u2019s robustness against labelling errors. One is to employ a label-noise robust classifier as a base learner, while the other is to modify the AdaBoost algorithm to be more robust. Empirical evaluation shows that a committee of robust classifiers, although converges faster than non label-noise aware AdaBoost, is still susceptible to label noise. However, pairing it with the new robust Boosting algorithm we propose here results in a more resilient algorithm under mislabelling."}
{"_id": "b46b72c02035a80afbcb9d072405661be5f48b31", "title": "Seaweed extract improve drought tolerance of soybean by regulating stress-response genes", "text": "There is an increasing global concern about the availability of water for agricultural use. Drought stress negatively impacts plant physiology and crop productivity. Soybean (Glycine max) is one of the important oilseed crops, and its productivity is often reduced by drought. In this study, a commercial extract of Ascophyllum nodosum (ANE) was evaluated for its potential to alleviate drought stress in soybean. The aim of this study was to determine the effects of ANE on the response of soybean plants to drought stress by monitoring stomatal conductance, relative leaf water content, antioxidant activity and expression of stress-responsive genes. Plants treated with ANE had higher relative water content and higher stomatal conductance under drought stress. During early recovery in the post-drought phase, ANE treated plants had significantly higher stomatal conductance. The antioxidant activity was also found higher in the plants treated with ANE. In addition, ANE-treatment led to changes in the expression of stress-responsive genes: GmCYP707A1a, GmCYP707A3b, GmRD22, GmRD20, GmDREB1B, GmERD1, GmNFYA3, FIB1a, GmPIP1b, GmGST, GmBIP and GmTp55. Taken together, these results suggest that applications of ANE improve the drought tolerance of soybean by changing physiology and gene expression."}
{"_id": "ddb98f69b500336525c7cf73ec7ac5a9bbc417dc", "title": "Efficient detail-enhanced exposure correction based on auto-fusion for LDR image", "text": "We consider the problem of how to simultaneously and well correct the over- and under-exposure regions in a single low dynamic range (LDR) image. Recent methods typically focus on global visual quality but cannot well-correct much potential details in extremely wrong exposure areas, and some are also time consuming. In this paper, we propose a fast and detail-enhanced correction method based on automatic fusion which combines a pair of complementarily corrected images, i.e. backlight & highlight correction images (BCI &HCI). A BCI with higher visual quality in details is quickly produced based on a proposed faster multi-scale retinex algorithm; meanwhile, a HCI is generated through contrast enhancement method. Then, an automatic fusion algorithm is proposed to create a color-protected exposure mask for fusing BCI and HCI when avoiding potential artifacts on the boundary. The experiment results show that the proposed method can fast correct over/under-exposed regions with higher detail quality than existing methods."}
{"_id": "fb153ce56e0040214146809cd8c16c0e171e43e6", "title": "Minimum PCB footprint point-of-load DC-DC converter realized with Switched-Capacitor architecture", "text": "This work reports on the design and test of a CMOS-based Switched Capacitor (SC) DC-DC conversion integrated circuit (IC) for the point-of-load (POL) application conventionally addressed with the buck converter. A 12V-to-1.5V converter is fabricated in a 0.18\u00b5m technology, with an active die area of 3mm2. There is a significant reduction in printed circuit board (PCB) footprint, passive component height and cost when compared to surveyed buck converters. The converter achieves 93% peak efficiency and an efficiency of 80% over an output current range of 7mA to 1A. This work demonstrates the vast potential of SC converters for wide-range voltage conversion in deep-submicron CMOS technologies, when cost and efficiency are of critical importance."}
{"_id": "dc442551c2e59f8bf09e1bddd8500bb5b2176d6f", "title": "Towards automatic assessment of government web sites", "text": "This paper presents an approach for automatic assessment of web sites in large scale e-Government surveys. The approach aims at supplementing and to some extent replacing human evaluation which is typically the core part of these surveys.\n The heart of the solution is a colony inspired algorithm, called the lost sheep, which automatically locates targeted governmental material online. The algorithm centers around classifying link texts to determine if a web page should be downloaded for further analysis.\n The proposed algorithm is designed to work with minimum human interaction and utilize the available resources as best possible. Using the lost sheep, the people carrying out a survey will only provide sample data for a few web sites for each type of material sought after. The algorithm will automatically locate the same type of material in the other web sites part of the survey. This way it significantly reduces the need for manual work in large scale e-Government surveys."}
{"_id": "0b52e925b8164cc39a4ec222e128da8f216ceb28", "title": "A parallel and efficient approach to large scale clone detection", "text": "Over the past few years, researchers have implemented various algorithms to improve the scalability of clone detection. Most of these algorithms focus on scaling vertically on a single machine, and require complex intermediate data structures (e.g., suffix tree, etc.). However, several new use-cases of clone detection have emerged, which are beyond the computational capacity of a single machine. Moreover, for some of these use-cases it may be expensive to invest upfront in the cost of building these data structures.\n In this paper, we propose a technique to horizontally scale clone detection across multiple machines using the popular MapReduce framework. The technique does not require building any complex intermediate data structures. Moreover, in order to increase the efficiency, the technique uses a filtering heuristic to prune the number of code block comparisons. The filtering heuristic is independent of our approach and it can be used in conjunction with other approaches to increase their efficiency. In our experiments, we found that: (i) the computation time to detect clones decreases by almost half every time we double the number of nodes; and (ii) the scaleup is linear, with a decline of not more than 70% compared to the ideal case, on a cluster of 2-32 nodes for 150-2800 projects."}
{"_id": "5f09cb313b6fb14877c6b5be79294faf1f4f7f02", "title": "Thinking inside the Box: five Organizational Strategies Enabled through Information Systems", "text": "The relationship between information systems (IS) and organizational strategies has been a much discussed topic with most of the prior studies taking a highly positive view of technology\u2019s role in enabling organizational strategies. Despite this wealth of studies, there is a dearth of empirical investigations on how IS enable specific organizational strategies. Through a qualitative empirical investigation of five case organizations this research derives five organizational strategies that are specifically enabled through IS. The five strategies; (i) generic-heartland, (ii) craft-based selective, (iii) adhoc, IT-driven, (iv) corporative-orchestrated and (v) transformative provide a unique perspective of how IS enable organizational strategy."}
{"_id": "9f4ec4dd3377612383057c3387421df65ab19cce", "title": "Microstrip patch antenna array at 3.8 GHz for WiMax and UAV applications", "text": "This paper presents the design of a rectangular microstrip line-fed patch antenna array with a centre frequency of 3.8 GHz for WiMAX and Unmanned Air Vehicle (UAV) applications. A single element, 1\u00d72 and 2\u00d72 microstrip rectangular patch antennas were designed and simulated in Computer Simulation Tool (CST) Microwave Studio environment. The results of designed antennas were compared in terms of Return Loss (S11 parameters), bandwidth, directivity, gain and radiation pattern. Compared to traditional microstrip antennas the proposed array structure achieved a gain and directivity of 13.2 dB and 13.5 dBi respectively. The antenna was fabricated using Rogers Duroid RT-5880 substrate with a dielectric constant er of 2.2 and a thickness of 1.574 mm respectively. The array antennas were measured in the laboratory using Vector Network Analyser (VNA) and the results show good agreement with the array antenna simulation."}
{"_id": "e19c4e1fbe2bfecba7839f934faf6be0319476b5", "title": "Modeling and simulation of DC rail traction systems for energy saving", "text": "The modeling and simulation of the electrified transit system is an essential element in the design process of a new railway, or an existing one being modernized, particularly in DC powered railway systems which have significant losses in the power network. With the continuing focus on environmental concerns and rising energy prices, energy-saving operation technology for railway systems has been paid more and more attention. Previous work on energy optimization techniques mainly focuses on optimizing driving strategies subject to geographic and physical constraints, and kinematic equations, which only minimizes the mechanical energy consumption without considering the loss from the power supply network. This paper proposes a DC power network modeling technique and extends the traditional energy-saving methods to develop a novel approach which combines traction power supply network calculations and numerical algorithms to minimize the electrical energy delivered from substations. As train resistance is time-varying with the train movement, iterative algorithms are presented in order to calculate the energy consumption dynamically. Some case studies based on the Beijing Yizhuang Subway Line are presented to illustrate the proposed approach for power network simulation and energy-saving, in which the energy consumption of both the practical operation and optimal operation are compared."}
{"_id": "ad384ff98f002c16ccdb8264a631068f2c3287f2", "title": "Consensus in the Age of Blockchains", "text": "The blockchain initially gained traction in 2008 as the technology underlying Bitcoin [105], but now has been employed in a diverse range of applications and created a global market worth over $150B as of 2017. What distinguishes blockchains from traditional distributed databases is the ability to operate in a decentralized setting without relying on a trusted third party. As such their core technical component is consensus: how to reach agreement among a group of nodes. This has been extensively studied already in the distributed systems community for closed systems, but its application to open blockchains has revitalized the field and led to a plethora of new designs. The inherent complexity of consensus protocols and their rapid and dramatic evolution makes it hard to contextualize the design landscape. We address this challenge by conducting a systematic and comprehensive study of blockchain consensus protocols. After first discussing key themes in classical consensus protocols, we describe: (i) protocols based on proof-of-work (PoW), (ii) proof-of-X (PoX) protocols that replace PoW with more energy-efficient alternatives, and (iii) hybrid protocols that are compositions or variations of classical consensus protocols. We develop a framework to evaluate their performance, security and design properties, and use it to systematize key themes in the protocol categories described above. This evaluation leads us to identify research gaps and challenges for the community to consider in future research endeavours."}
{"_id": "d60c52a77e974797b14774f22bff06faa1e03003", "title": "What influences literacy outcome in children with speech sound disorder?", "text": "PURPOSE\nIn this study, the authors evaluated literacy outcome in children with histories of speech sound disorder (SSD) who were characterized along 2 dimensions: broader language function and persistence of SSD. In previous studies, authors have demonstrated that each dimension relates to literacy but have not disentangled their effects. Methods Two groups of children (86 SSD and 37 controls) were recruited at ages 5-6 and were followed longitudinally. The authors report the literacy of children with SSD at ages 7-9, compared with controls and national norms, and relative to language skill and SSD persistence (both measured at age 5-6).\n\n\nRESULTS\nThe SSD group demonstrated elevated rates of reading disability. Language skill but not SSD persistence predicted later literacy. However, SSD persistence was associated with phonological awareness impairments. Phonological awareness alone predicted literacy outcome less well than a model that also included syntax and nonverbal IQ.\n\n\nCONCLUSIONS\nResults support previous literature findings that SSD history predicts literacy difficulties and that the association is strongest for SSD + language impairment (LI). Magnitude of phonological impairment alone did not determine literacy outcome, as predicted by the core phonological deficit hypothesis. Instead, consistent with a multiple deficit approach, phonological deficits appeared to interact with other cognitive factors in literacy development."}
{"_id": "7d06a08c0d9c0919051f8e271a7a14cf213684ae", "title": "Low-Cost 60-GHz Smart Antenna Receiver Subsystem Based on Substrate Integrated Waveguide Technology", "text": "In this paper, a low-cost integrated 60-GHz switched-beam smart antenna subsystem is studied and demonstrated experimentally for the first time based on almost all 60-GHz substrate integrated waveguide (SIW) components including a slot antenna, 4\u00d74 Butler matrix network, bandpass filter, sub-harmonically pumped mixer, and local oscillator (LO) source. In this study, an antenna array, a Butler matrix, and a bandpass filter are integrated and fabricated into one single substrate. Instead of using a 60-GHz LO source, a 30-GHz LO source is developed to drive a low-cost 60-GHz sub-harmonically pumped mixer. This 30-GHz LO circuit consists of 10-GHz SIW voltage-controlled oscillator and frequency tripler. Following the frequency down-conversion of four 60-GHz signals coming from the 4\u00d74 Butler matrix and a comparison of the four IF signals executed in the digital processor based on the maximum received power criterion, control signals will be feed-backed to drive the single-pole four-throw switch array and then the beam is tuned in order to point toward the main beam of the transmit antenna. In this way, the arriving 60-GHz RF signal can be tracked effectively. All designed components are verified experimentally. The proposed smart receiver subsystem that integrates all those front-end components is concluded with satisfactory measured results."}
{"_id": "7eb2c29ed9a7a65d9e4328fd7c91cc7698b7d3b1", "title": "Quarter-Mode Substrate Integrated Waveguide and Its Application to Antennas Design", "text": "A quarter-mode substrate integrated waveguide (QMSIW) which is the quadrant sector of a square waveguide resonator is proposed and investigated in this paper. The QMSIW is realized by bisecting the half-mode substrate integrated waveguide (HMSIW) into two parts along the fictitious quasi-magnetic wall when it operates with TE101 and TE202 modes as the way bisecting the substrate integrated waveguide (SIW) to HMSIW. The QMSIW can almost preserve the field distribution of original SIW and leaky wave is achieved from the dielectric aperture of the QMSIW. When the feeding port is placed at one corner of the QMSIW, a linearly polarized radiation is obtained when the QMSIW resonates in TE101QM mode, and when the QMSIW resonates in TE202QM mode, a circularly polarized (CP) wave is achieved. An antenna is designed, fabricated, and measured based on the proposed QMSIW. The measurement results match with the simulation results very well."}
{"_id": "aaaea1314570b6b692ff3cce3715ec9dada7c7aa", "title": "LTCC Packages With Embedded Phased-Array Antennas for 60 GHz Communications", "text": "A low-cost, fully-integrated antenna-in-package solution for 60 GHz phased-array systems is demonstrated. Sixteen patch antennas are integrated into a 28 mm \u00d7 28 mm ball grid array together with a flip-chip attached transmitter or receiver IC. The packages have been implemented using low temperature co-fired ceramic technology. 60 GHz interconnects, including flip-chip transitions and via structures, are optimized using full-wave simulation. Anechoic chamber measurement has shown ~ 5 dBi unit antenna gain across all four IEEE 802.15.3c channels, achieving excellent model-to-hardware correlation. The packaged transmitter and receiver ICs, mounted on evaluation boards, have demonstrated beam-steered, non-line-of-sight links with data rates up to 5.3 Gb/s."}
{"_id": "0d3f6d650b1a878d5896e3b85914aeaeb9d78a4f", "title": "Enhancing the quality of life through wearable technology", "text": "An overview of the key challenges facing the practice of medicine today is presented along with the need for technological solutions that can \"prevent\" problems. Then, the development of the Wearable Motherboard/spl trade/ (Smart Shirt) as a platform for sensors and monitoring devices that can unobtrusively monitor the health and well being of individuals (directly and/or remotely) is described. This is followed by a discussion of the applications and impact of this technology in the continuum of life-from preventing SIDS to facilitating independent living for senior citizens. Finally, the future advancements in the area of wearable, yet comfortable, systems that can continue the transformation of healthcare - all aimed at enhancing the quality of life for humans - are presented."}
{"_id": "ae51dd4819b2e61e39add22ca017298c9b5c0062", "title": "Automatic fine tuning of cavity filters Automatisk finjustering av kavitetsfilter", "text": "Cavity filters are a necessary component in base stations used for telecommunication. Without these filters it would not be possible for base stations to send and receive signals at the same time. Today these cavity filters require fine tuning by humans before they can be deployed. This thesis have designed and implemented a neural network that can tune cavity filters. Different types of design parameters have been evaluated, such as neural network architecture, data presentation and data preprocessing. While the results was not comparable to human fine tuning, it was shown that there was a relationship between error and number of weights in the neural network. The thesis also presents some rules of thumb for future designs of neural network used for filter tuning."}
{"_id": "1f932f0d49a4c56d9718e8506d6177c6a6848831", "title": "CACHE MISSING FOR FUN AND PROFIT", "text": "Simultaneous multithreading \u2014 put simply, the sharing of the execution resources of a superscalar processor between multiple execution threads \u2014 has recently become widespread via its introduction (under the name \u201cHyper-Threading\u201d) into Intel Pentium 4 processors. In this implementation, for reasons of efficiency and economy of processor area, the sharing of processor resources between threads extends beyond the execution units; of particular concern is that the threads share access to the memory caches. We demonstrate that this shared access to memory caches provides not only an easily used high bandwidth covert channel between threads, but also permits a malicious thread (operating, in theory, with limited privileges) to monitor the execution of another thread, allowing in many cases for theft of cryptographic keys. Finally, we provide some suggestions to processor designers, operating system vendors, and the authors of cryptographic software, of how this attack could be mitigated or eliminated entirely."}
{"_id": "c0006305ad20d94c2a5d561d1f0a8224adf8390b", "title": "Grounded Theory and the Constant Comparative Method : Valid Research Strategies for Educators", "text": "University of Wisconsin-Whitewater. USA ___________________________________________________________________________ Grounded theory was developed by Glaser and Strauss who believed that theory could emerge through qualitative data analysis. In grounded theory, the researcher uses multiple stages of collecting, refining, and categorizing the data. The researcher uses the strategies of the making constant comparisons and applying theoretical sampling to obtain a theory grounded in the data. The justification of this paper is to provide discussion on the validity of grounded theory and the constant comparative method as effective research strategies for educators. The qualitative design of grounded theory will be the focus of this paper, along with a discussion of the constant comparative method, issues related to trustworthiness, and limitations inherent in grounded theory methodology __________________________________________________________________________________________"}
{"_id": "8fbf2e242fd76b1a5e114bdd93f3507d6e10363e", "title": "High performance CAN/FlexRay gateway design for in-vehicle network", "text": "A gateway in in-vehicle network (IVN) is an indispensable component as a conversion device between different communication protocols. Due to meet more complex network for IVN, how to improving the performance of gateway is more important. In this paper, a high efficient of CAN/FlexRay gateway using hardware/software (HW/SW) co-design method is proposed which is implemented with Xilinx FPGA system. The CAN controller and FlexRay controller are designed by SW and HW, respectively. The interface of HW and SW are used AXI-bus. The convert module is evaluated by a cost function. In addition, a transceiver PCB for connecting between the proposed gateway and external CAN/FlexRay-based ECUs is implemented. Experimental results show that the proposed gateway system can reduce at most 94.7% on execution time with related works which is more suited for IVN."}
{"_id": "2a4a7d37babbab47ef62a60d9f0ea2cfa979cf08", "title": "Mobile-assisted localization in wireless sensor networks", "text": "The localization problem is to determine an assignment of coordinates to nodes in a wireless ad-hoc or sensor network that is consistent with measured pairwise node distances. Most previously proposed solutions to this problem assume that the nodes can obtain pairwise distances to other nearby nodes using some ranging technology. However, for a variety of reasons that include obstructions and lack of reliable omnidirectional ranging, this distance information is hard to obtain in practice. Even when pairwise distances between nearby nodes are known, there may not be enough information to solve the problem uniquely. This paper describes MAL, a mobile-assisted localization method which employs a mobile user to assist in measuring distances between node pairs until these distance constraints form a \"globally rigid'* structure that guarantees a unique localization. We derive the required constraints on the mobile's movement and the minimum number of measurements it must collect; these constraints depend on the number of nodes visible to the mobile in a given region. We show how to guide the mobile's movement to gather a sufficient number of distance samples for node localization. We use simulations and measurements from an indoor deployment using the Cricket location system to investigate the performance of MAL, finding in real-world experiments that MAL's median pairwise distance error is less than 1.5% of the true node distance."}
{"_id": "5075ebef8e403771488c40461f1fb4ef94e740fb", "title": "Twitter as an instrument for crisis response: The Typhoon Haiyan case study", "text": "The research presented in this paper attempts an initial evaluation of Twitter as an instrument for emergency response in the context of a recent crisis event. The case of the 2013 disaster, when typhoon Haiyan hit Philippines is examined by analyzing nine consecutive days of Twitter messages and comparing them to the actual events. The results indicate that during disasters, Twitter users tend to post messages to enhance situation awareness and to motivate people to act. Furthermore, tweets were found reliable and provided valuable information content, supporting the argument that Twitter presents a very good potential to become a useful tool in situations where rapid emergency response is essential."}
{"_id": "e42838d321ece2ef7f8399c54d4dd856bfdbe4a4", "title": "Analysis and modelling of broad-band ferrite-based coaxial transmission-line transformers", "text": "The work presented in the paper focuses on accuracy of models for broad-band ferrite based coaxial transmission-line transformers. Soft-ferrites are largely used in VHF/UHF components allowing band enlargement on the low-edge side. Degradation of frequency performance on the high-edge side are produced both by ferrite losses, and by parasitic capacitance due to connection to the thermal and electrical ground in high power applications. Both a circuital model for low-power applications and a scalable e.m. model for high-power applications are presented and discussed."}
{"_id": "a0dda76526e714ffa8eae4b8254a2069bf6888c0", "title": "Physical Limitations of Omnidirectional Antennas Physical Li\u00a3itations of Omnidirectional Antennas", "text": "The physical limitations of omnidirectional antennas are considered. With the use of the spherical wave functions to describe the field, the directivity gain G and the Q of an unspecified antenna are calculated under idealized conditions. To obtain the optimum performance, three criteria are used: (1) maximum gain for a given complexity of the antenna structure, (2) minimum Q, (3) maximum ratio of G/Q. It is found that an antenna of which the maximum dimension is 2a has the potentiality of a broad bandwidth provided that the gain is equal to or less than 4a/. To obtain a gain higher than this value, the Q of the antenna increases at an astronomical rate. The antenna which has potentially the broadest bandwidth of all omnidirectional antennas is one which has a radiation pattern corresponding to that of an infinitesimally small dipole."}
{"_id": "62eff7763f8679d0afe53dad4d85279d54f763c5", "title": "Using WordNet as a Knowledge Base for Measuring Semantic Similarity between Words", "text": "In this paper we propose the use of WordNet as a knowledge base in an information retrieval task. The application areas range from information filtering and document retrieval to multimedia retrieval and data sharing in large scale distributed database systems. The WordNet derived knowledge base makes semantic knowledge available which can be used in overcoming many problems associated with the richness of natural language. A semantic similarity measure is also proposed which can be used as an alternative to pattern matching in the comparison process."}
{"_id": "1a407fa3a4da0c30505c3018afcb7b88cc841a13", "title": "Automatic Labelling of Topic Models", "text": "We propose a method for automatically labelling topics learned via LDA topic models. We generate our label candidate set from the top-ranking topic terms, titles of Wikipedia articles containing the top-ranking topic terms, and sub-phrases extracted from the Wikipedia article titles. We rank the label candidates using a combination of association measures and lexical features, optionally fed into a supervised ranking model. Our method is shown to perform strongly over four independent sets of topics, significantly better than a benchmark method."}
{"_id": "5545065e049576fcc9ebb3be80b63a2897b28c85", "title": "Deep discriminative manifold learning", "text": "This paper presents a new non-linear dimensionality reduction with stochastic neighbor embedding. A deep neural network is developed for discriminative manifold learning where the class information in transformed low-dimensional space is preserved. Importantly, the objective function for deep manifold learning is formed as the Kullback-Leibler divergence between the probability measures of the labeled samples in high-dimensional and low-dimensional spaces. Different from conventional methods, the derived objective does not require the empirically-tuned parameter. This objective is optimized to attractive those samples from the same class to be close together and simultaneously impose those samples from different classes to be far apart. In the experiments on image and audio tasks, we illustrate the effectiveness of the proposed discriminative manifold learning in terms of visualization and classification performance."}
{"_id": "28d4f11075134bd5fcc9d87f473c87e7ee0b7170", "title": "Adaptive Nonlinear Model Inversion Control of a Twin Rotor System Using Artificial Intelligence", "text": "The paper investigates the development of an adaptive dynamic nonlinear model inversion control law for a twin rotor MIMO system (TRMS) utilizing artificial neural networks and genetic algorithms. The TRMS is an aerodynamic test rig representing the control challenges of modern air vehicles. A highly nonlinear 1DOF mathematical model of the TRMS is considered in this study and a nonlinear inverse model is developed for the pitch channel. In the absence of model inversion errors, a genetic algorithm-tuned PD controller is used to enhance the tracking characteristics of the system. An adaptive neural network element is integrated thereafter with the feedback control system to compensate for model inversion errors. In order to show the effectiveness of the proposed method in the simulation environment an inversion error has deliberately been provided as an uncertainty in the real situation. Square and sinusoidal reference command signals are used to test the control system performance, and it is noted that an excellent tracking response is exhibited in the presence of inversion errors caused by model uncertainty."}
{"_id": "e8e018e9b87a75e9be59cef27d8aae5892921900", "title": "Supporting Private Data on Hyperledger Fabric with Secure Multiparty Computation", "text": "Hyperledger Fabric is a \"permissioned\" blockchain architecture, providing a consistent distributed ledger, shared by a set of \"peers.\" As with every blockchain architecture, the core principle of Hyperledger Fabric is that all the peers must have the same view of the shared ledger, making it challenging to support private data for the different peers. Extending Hyperledger Fabric to support private data (that can influence transactions) would open the door to many exciting new applications, in areas from healthcare to commerce, insurance, finance, and more. In this work we explored adding private-data support to Hyperledger Fabric using secure multiparty computation (MPC). Specifically, in our solution the peers store on the chain encryption of their private data, and use secure MPC whenever such private data is needed in a transaction. This solution is very general, allowing in principle to base transactions on any combination of public and private data. We created a demo of our solution over Hyperledger Fabric v1.0, implementing a bidding system where sellers can list assets on the ledger with a secret reserve price, and bidders publish their bids on the ledger but keep secret the bidding price itself. We implemented a smart contract (aka \"chaincode\") that runs the auction on this secret data, using a simple secure-MPC protocol that was built using the EMP-toolkit library. The chaincode itself was written in Go, and we used the SWIG library to make it possible to call our protocol implementation in C++. We identified two basic services that should be added to Hyperledger Fabric to support our solution, and are now working on implementing them."}
{"_id": "cd32d7383b1e987329d2412f2907b7db6dd8d396", "title": "Explaining the Unexplained: A CLass-Enhanced Attentive Response (CLEAR) Approach to Understanding Deep Neural Networks", "text": "In this work, we propose CLass-Enhanced Attentive Response (CLEAR): an approach to visualize and understand the decisions made by deep neural networks (DNNs) given a specific input. CLEAR facilitates the visualization of attentive regions and levels of interest of DNNs during the decision-making process. It also enables the visualization of the most dominant classes associated with these attentive regions of interest. As such, CLEAR can mitigate some of the shortcomings of heatmap-based methods associated with decision ambiguity, and allows for better insights into the decision-making process of DNNs. Quantitative and qualitative experiments across three different datasets demonstrate the efficacy of CLEAR for gaining a better understanding of the inner workings of DNNs during the decision-making process."}
{"_id": "de429f5edb30238e7e8d9a76d9d58d34df6250d6", "title": "Operating Systems for Low-End Devices in the Internet of Things: A Survey", "text": "The Internet of Things (IoT) is projected to soon interconnect tens of billions of new devices, in large part also connected to the Internet. IoT devices include both high-end devices which can use traditional go-to operating systems (OSs) such as Linux, and low-end devices which cannot, due to stringent resource constraints, e.g., very limited memory, computational power, and power supply. However, large-scale IoT software development, deployment, and maintenance requires an appropriate OS to build upon. In this paper, we thus analyze in detail the specific requirements that an OS should satisfy to run on low-end IoT devices, and we survey applicable OSs, focusing on candidates that could become an equivalent of Linux for such devices, i.e., a one-size-fits-most, open source OS for low-end IoT devices."}
{"_id": "29c169580719d4722135951c99b9d508f58fa641", "title": "CBSA: content-based soft annotation for multimodal image retrieval using Bayes point machines", "text": "We propose a content-based soft annotation (CBSA) procedure for providing images with semantical labels. The annotation procedure starts with labeling a small set of training images, each with one single semantical label (e.g., forest, animal, or sky). An ensemble of binary classifiers is then trained for predicting label membership for images. The trained ensemble is applied to each individual image to give the image multiple soft labels, and each label is associated with a label membership factor. To select a base binary-classifier for CBSA, we experiment with two learning methods, Support Vector Machines (SVMs) and Bayes Point Machines (BPMs, and compare their class-prediction accuracy. Our empirical study on a 116-category 25K-image set shows that the BPM-based ensemble provides better annotation quality than the SVM-based ensemble for supporting multimodal image retrievals."}
{"_id": "fdd5db9fbcd1a93431687f5e64ef2e40cb8adc92", "title": "0.293-mm2 Fast Transient Response Hysteretic Quasi- $V^{2}$ DC\u2013DC Converter With Area-Efficient Time-Domain-Based Controller in 0.35- $\\mu$ m CMOS", "text": "A time-domain-control-based quasi- $V^{2}$ buck converter with small area, fast transient response, wide load current range, and constant switching frequency is proposed in this paper. The proposed time-domain-based controller achieves compact core area and simplified constant switching frequency control by replacing the conventional voltage-domain comparator with a phase detector having an adaptive detection window. Moreover, a coupling-based inductor current emulator is also proposed to support the discontinuous conduction mode operation in the light load. The proposed buck converter is implemented using 0.35- $\\mu \\text{m}$ CMOS process. The core area of the proposed buck converter is only 0.293 mm2. The measured peak efficiency in the continuous conduction mode is 92% and the end-to-end efficiency in the load current range of 25\u2013700 mA is 73%. The undershoot/overshoot and recovery time with the up/down load current steps of 510 mA are 38/20 mV and 2.5/2.6 $\\mu \\text{s}$ , respectively."}
{"_id": "3a2fa3be15a5173e2d6616ceb5eecfc581ed1651", "title": "Loughborough University Institutional Repository Automated sizing of coarse-grained sediments : image-processing procedures", "text": "This is the first in a pair of papers in which we present image-processing-based procedures for the measurement of fluvial gravels. The spatial and temporal resolution of surface grain-size characterization is constrained by the time-consuming and costly nature of traditional measurement techniques. Several groups have developed image-processing-based procedures, but none have demonstrated the transferability of these techniques between sites with different lithological, clast form and textural characteristics. Here we focus on image-processing procedures for identifying and measuring image objects (i.e. grains); the second paper examines the application of such procedures to the measurement of fluvially deposited gravels. Four image-segmentation procedures are developed, each having several internal parameters, giving a total of 416 permutations. These are executed on 39 images from three field sites at which the clasts have contrasting physical properties. The performance of each procedure is evaluated against a sample of manually digitized grains in the same images, by comparing three derived statistics. The results demonstrate that it is relatively straightforward to develop procedures that satisfactorily identify objects in any single image or a set of images with similar sedimentary characteristics. However, the optimal procedure is that which gives consistently good results across sites with dissimilar sedimentary characteristics. We show that neighborhood-based operations are the most powerful, and a morphological bottom-hat transform with a double threshold is optimal. It is demonstrated that its performance approaches that of the procedures giving the best results for individual sites. Overall, it out-performs previously published, or improvements to previously published, methods."}
{"_id": "51910ced61aee4b6da68b9619a4edcb304a52ccb", "title": "Lagged correlation-based deep learning for directional trend change prediction in financial time series", "text": "Trend change prediction in complex systems with a large number of noisy time series is a problem with many applications for real-world phenomena, with stock markets as a notoriously difficult to predict example of such systems. We approach predictions of directional trend changes via complex lagged correlations between them, excluding any information about the target series from the respective inputs to achieve predictions purely based on such correlations with other series. We propose the use of deep neural networks that employ step-wise linear regressions with exponential smoothing in the preparatory feature engineering for this task, with regression slopes as trend strength indicators for a given time interval. We apply this method to historical stock market data from 2011 to 2016 as a use case example of lagged correlations between large numbers of time series that are heavily influenced by externally arising new information as a random factor. The results demonstrate the viability of the proposed approach, with state-of-the-art accuracies and accounting for the statistical significance of the results for additional validation, as well as important implications for modern financial economics."}
{"_id": "0fd8f2a37e81404492e9058fe89e48815cf33966", "title": "One-Class SVMs for Document Classification", "text": "We implemented versions of the SVM appropriate for one-class classification in the context of information retrieval. The experiments were conducted on the standard Reuters data set. For the SVM implementation we used both a version of Sch\u00f6lkopf et al. and a somewhat different version of one-class SVM based on identifying \u201coutlier\u201d data as representative of the second-class. We report on experiments with different kernels for both of these implementations and with different representations of the data, including binary vectors, tf-idf representation and a modification called \u201cHadamard\u201d representation. Then we compared it with one-class versions of the algorithms prototype (Rocchio), nearest neighbor, naive Bayes, and finally a natural one-class neural network classification method based on \u201cbottleneck\u201d compression generated filters. The SVM approach as represented by Sch\u00f6lkopf was superior to all the methods except the neural network one, where it was, although occasionally worse, essentially comparable. However, the SVM methods turned out to be quite sensitive to the choice of representation and kernel in ways which are not well understood; therefore, for the time being leaving the neural network approach as the most robust."}
{"_id": "c9b41a66017ca13333e8477a58ffdca2e3599b7e", "title": "Discovering users' topics of interest on twitter: a first look", "text": "Twitter, a micro-blogging service, provides users with a framework for writing brief, often-noisy postings about their lives. These posts are called \"Tweets.\" In this paper we present early results on discovering Twitter users' topics of interest by examining the entities they mention in their Tweets. Our approach leverages a knowledge base to disambiguate and categorize the entities in the Tweets. We then develop a \"topic profile,\" which characterizes users' topics of interest, by discerning which categories appear frequently and cover the entities. We demonstrate that even in this early work we are able to successfully discover the main topics of interest for the users in our study."}
{"_id": "01da903f93df749aa600c1aea50c86e611818e85", "title": "Statistical Methods for Speech Recognition", "text": "As known, to finish this book, you may not need to get it at once in a day. Doing the activities along the day may make you feel so bored. If you try to force reading, you may prefer to do other entertaining activities. But, one of concepts we want you to have this book is that it will not make you feel bored. Feeling bored when reading will be only unless you don't like the book. statistical methods for speech recognition really offers what everybody wants."}
{"_id": "8230f2d11bac8eac75260dc503094392d1edbe5b", "title": "Student Privacy and Educational Data Mining: Perspectives from Industry", "text": "While the field of educational data mining (EDM) has generated many innovations for improving educational software and student learning, the mining of student data has recently come under a great deal of scrutiny. Many stakeholder groups, including public officials, media outlets, and parents, have voiced concern over the privacy of student data and their efforts have garnered national attention. The momentum behind and scrutiny of student privacy has made it increasingly difficult for EDM applications to transition from academia to industry. Based on experience as academic researchers transitioning into industry, we present three primary areas of concern related to student privacy in practice: policy, corporate social responsibility, and public opinion. Our discussion will describe the key challenges faced within these categories, strategies for overcoming them, and ways in which the academic EDM community can support the adoption of innovative technologies in large-scale production."}
{"_id": "9b0adf09185ea4739e8936bfad0cff36bc38f43c", "title": "G3D: A gaming action dataset and real time action recognition evaluation framework", "text": "In this paper a novel evaluation framework for measuring the performance of real-time action recognition methods is presented. The evaluation framework will extend the time-based event detection metric to model multiple distinct action classes. The proposed metric provides more accurate indications of the performance of action recognition algorithms for games and other similar applications since it takes into consideration restrictions related to time and consecutive repetitions. Furthermore, a new dataset, G3D for real-time action recognition in gaming containing synchronised video, depth and skeleton data is provided. Our results indicate the need of an advanced metric especially designed for games and other similar real-time applications."}
{"_id": "fd06af7e272e6a19b47b11eb814fff543605e7c3", "title": "Measurements revealing Challenges in Radar Sensor Modeling for Virtual Validation of Autonomous Driving", "text": "The virtual validation of automated driving functions requires meaningful simulation models of environment perception sensors such as radar, lidar, and cameras. There does not yet exist an unrivaled standard for perception sensor models, and radar especially lacks modeling approaches that consistently produce realistic results. In this paper, we present measurements that exemplify challenges in the development of meaningful radar sensor models. We highlight three major challenges: multi-path propagation, separability, and sensitivity of radar cross section to the aspect angle. We also review previous work addressing these challenges and suggest further research directions towards meaningful automotive radar simulation models."}
{"_id": "8d465d3ebc56895f9716423f9ad39c116ddda235", "title": "Community Structure Detection for Overlapping Modules through Mathematical Programming in Protein Interaction Networks", "text": "Community structure detection has proven to be important in revealing the underlying properties of complex networks. The standard problem, where a partition of disjoint communities is sought, has been continually adapted to offer more realistic models of interactions in these systems. Here, a two-step procedure is outlined for exploring the concept of overlapping communities. First, a hard partition is detected by employing existing methodologies. We then propose a novel mixed integer non linear programming (MINLP) model, known as OverMod, which transforms disjoint communities to overlapping. The procedure is evaluated through its application to protein-protein interaction (PPI) networks of the rat, E. coli, yeast and human organisms. Connector nodes of hard partitions exhibit topological and functional properties indicative of their suitability as candidates for multiple module membership. OverMod identifies two types of connector nodes, inter and intra-connector, each with their own particular characteristics pertaining to their topological and functional role in the organisation of the network. Inter-connector proteins are shown to be highly conserved proteins participating in pathways that control essential cellular processes, such as proliferation, differentiation and apoptosis and their differences with intra-connectors is highlighted. Many of these proteins are shown to possess multiple roles of distinct nature through their participation in different network modules, setting them apart from proteins that are simply 'hubs', i.e. proteins with many interaction partners but with a more specific biochemical role."}
{"_id": "5414b4b90edf715a190d188fdc88cf5f69077081", "title": "The comparison of soil sensors for integrated creation of IOT-based Wetting front detector (WFD) with an efficient irrigation system to support precision farming", "text": "This study investigates a prototyping of integrated system of Internet of Things based Wetting front detector (IOT-WFD) which focuses on how to enhance the IOT based Wetting front detector design for smart irrigation system. The empirical study was conducted with 2 sensors type to detect the wetting fronts which are the Frequency Domain Reflectrometry sensor (FDR) and Resistor-based sensor (RB) integrated and design with low-cost WFD. The results of this study point toward the IOT-WFD as an appropriated technology providing real time wetting front information in soil positively for application in terms of agricultural water management, with precision agriculture and efficient irrigation domain with a related decision knowledge that matches with the technology trend and smart farmer requirements. Evidence of positive results of this prototyping summary has been provided."}
{"_id": "5641404899532c4fc6e63a16229491f71a08b4b1", "title": "Parents' perceptions of health and physical activity needs of children with Down syndrome.", "text": "Individuals with Down syndrome typically have low fitness levels and obesity despite data that indicate physiological gains from physical activity and exercise interventions. Low fitness levels and obesity in individuals with Down syndrome may be related to sedentary lifestyles, social and recreational opportunities, or low motivation to be physically active. These causal influences on the overall health of individuals with Down syndrome may be related to parental or caregiver support. Through this study, parents of children with Down syndrome from preschool to adolescent ages were interviewed about their perceptions of the health and physical activity needs of their children. Data from four focus groups indicated the following most salient themes: (1) all parents believed participation in physical activity has immediate and long-term positive health impacts on their child with Down syndrome, and most of the parents thought their child would benefit from being more physically active, (2) most parents observed that their child participated in physical activities primarily for social reasons, most notably to be with their peers with or without Down syndrome or to be with their sibling(s), and that without such motivation their child would choose sedentary activities, (3) parents of teenagers identified a need for their child to learn an individual sport to have sporting opportunities that do not require ability-matched teammates and opponents, and (4) parents recognised their need for help from physical activity specialists through either parent education regarding home-based physical activity programmes or an increase in appropriate community-based physical activity programmes for their child with Down syndrome. The interview data suggest future research should evaluate the outcomes of long-term individualised home-based physical activity interventions for children with Down syndrome. Additionally, educators, recreation specialists, and therapists should assist children and youth with their acquisition of skills used in individual and dual sports."}
{"_id": "15941d6904c641e9225bb00648d0664026d17247", "title": "Foundations and Trends Vision for Robotics", "text": "Robot vision refers to the capability of a robot to visually perceive the environment and use this information for execution of different tasks. Visual feedback has been used extensively for robot navigation and obstacle avoidance. In the recent years, there are also examples that include interaction with people and manipulation of objects. In this paper, we review some of the work that goes beyond of using artificial landmarks and fiducial markers for the purpose of implementing vision based control in robots. We discuss different application areas, both from the systems perspective and individual problems such as object tracking and recognition."}
{"_id": "82394a6f2cdf8fb91a6b81587ce07a5de100d6dc", "title": "String diagrams for free monads (functional pearl)", "text": "We show how one can reason about free monads using their universal properties rather than any concrete implementation. We introduce a graphical, two-dimensional calculus tailor-made to accommodate these properties."}
{"_id": "6496829d9465bb3d3375764ad76a6b0cfb7cc0c3", "title": "See-through-wall imaging using ultra wideband short-pulse radar system", "text": "See-through-wall imaging radar is a unique application of ultra wideband communication that can provide soldiers and law enforcement officers with an enhanced situation awareness. We have developed an ultra-wideband high-resolution short pulse imaging radar system operating around 10 GHz, where two essential considerations are addressed: the effect of penetrating the walls; the pulse fidelity through the UWB components and antennas of the radar. Modeled and measured wall parameters, and the effect of antenna types on signal fidelity are discussed in detail."}
{"_id": "8597ea07d16d413f6ac24cc247d2f05a73fa5f51", "title": "Data Mining and Opinion Mining: A Tool in Educational Context", "text": "The use of the web as a universal communication platform generates large volumes of data (Big data), which in many cases, need to be processed so that they can become useful knowledge in face of the sceptics who have doubts about the credibility of such information. The use of web data that comes from educational contexts needs to be addressed, since that large amount of unstructured information is not being valued, losing valuable information that can be used. To solve this problem, we propose the use of data mining techniques such as sentiment analysis to validate the information that comes from the educational platforms. The objective of this research is to propose a methodology that allows the user to apply sentiment analysis in a simple way, because although some researchers have done it, very few do with data in the educational context. The results obtained prove that the proposal can be used in similar cases."}
{"_id": "5d094b5304d314c1867cb7cbcfb045c2df291b30", "title": "A Survey of Radial Methods for Information Visualization", "text": "Radial visualization, or the practice of displaying data in a circular or elliptical pattern, is an increasingly common technique in information visualization research. In spite of its prevalence, little work has been done to study this visualization paradigm as a methodology in its own right. We provide a historical review of radial visualization, tracing it to its roots in centuries-old statistical graphics. We then identify the types of problem domains to which modern radial visualization techniques have been applied. A taxonomy for radial visualization is proposed in the form of seven design patterns encompassing nearly all recent works in this area. From an analysis of these patterns, we distill a series of design considerations that system builders can use to create new visualizations that address aspects of the design space that have not yet been explored. It is hoped that our taxonomy will provide a framework for facilitating discourse among researchers and stimulate the development of additional theories and systems involving radial visualization as a distinct design metaphor."}
{"_id": "be5ee24a32609620546d91c673d7997f4b172776", "title": "Game Theory and Distributed Control", "text": "Game theory has been employed traditionally as a modeling tool for describing and influencing behavior in societal systems. Recently, game theory has emerged as a valuable tool for controlling or prescribing behavior in distributed engineered systems. The rationale for this new perspective stems from the parallels between the underlying decision making architectures in both societal systems and distributed engineered systems. In particular, both settings involve an interconnection of decision making elements whose collective behavior depends on a compilation of local decisions that are based on partial information about each other and the state of the world. Accordingly, there is extensive work in game theory that is relevant to the engineering agenda. Similarities notwithstanding, there remain important differences between the constraints and objectives in societal and engineered systems that require looking at game theoretic methods from a new perspective. This chapter provides an overview of selected recent developments of game theoretic methods in this role as a framework for distributed control in engineered systems."}
{"_id": "f3aeb3a51e73237c17d3c4805cbc3dc26eda1031", "title": "A Fast Computational Algorithm for the Discrete Cosine Transform", "text": "After his military service he joined N. V. Philips' Gloeilampenfabrieken, Eindhoven, where he worked in the fields of electronic design of broadcast radio sets, integrated circuits and optical character recognition. Since 1970 he has been at the Philips Research Laboratories , where he is now working on digital communication systems. Mr. Roza is a member of the Netherlands Electronics and Radio Society. he worked in the field of data transmission and on the synchronizationaspects of a videophone system. Since 1972 he has been engaged in system design and electronics of high speed digital transmission. Abstruct-A Fast Discrete Cosine Transform algorithm has been developed which provides a factor of six improvement in computational complexity when compared to conventional Discrete Cosine Transform algorithms using the Fast Fourier Transform. The algorithm is derived in the form of matrices and illustrated by a signal-flow graph, which may be readily translated to hardware or software implementations."}
{"_id": "7f7b1d3a1a9749e3a22073a63449a6a4b757fb6d", "title": "Organic Photodiodes: The Future of Full Color Detection and Image Sensing.", "text": "Major growth in the image sensor market is largely as a result of the expansion of digital imaging into cameras, whether stand-alone or integrated within smart cellular phones or automotive vehicles. Applications in biomedicine, education, environmental monitoring, optical communications, pharmaceutics and machine vision are also driving the development of imaging technologies. Organic photodiodes (OPDs) are now being investigated for existing imaging technologies, as their properties make them interesting candidates for these applications. OPDs offer cheaper processing methods, devices that are light, flexible and compatible with large (or small) areas, and the ability to tune the photophysical and optoelectronic properties - both at a material and device level. Although the concept of OPDs has been around for some time, it is only relatively recently that significant progress has been made, with their performance now reaching the point that they are beginning to rival their inorganic counterparts in a number of performance criteria including the linear dynamic range, detectivity, and color selectivity. This review covers the progress made in the OPD field, describing their development as well as the challenges and opportunities."}
{"_id": "d101040124ccceb0cd46f92c8815d5c475605cd1", "title": "Deep Machine Learning and Neural Networks: An Overview", "text": "Received Feb 10, 2017 Revised Apr 14, 2017 Accepted May 23, 2017 Deep learning is a technique of machine learning in artificial intelligence area. Deep learning in a refined \"machine learning\" algorithm that far surpasses a considerable lot of its forerunners in its capacities to perceive syllables and picture. Deep learning is as of now a greatly dynamic examination territory in machine learning and example acknowledgment society. It has increased colossal triumphs in an expansive zone of utilizations, for example, speech recognition, computer vision and natural language processing and numerous industry item. Neural network is used to implement the machine learning or to design intelligent machines. In this paper brief introduction to all machine learning paradigm and application area of deep machine learning and different types of neural networks with applications is discussed. Keyword:"}
{"_id": "2dcd3ea0af519e3441fa37ae047b422344fff0e9", "title": "Component-Based Robotic Engineering (Part II)", "text": "This article discusses the role of software components as architectural units of large, possibly distributed, software-intensive robotic systems. The focus is on technologies to manage the heterogeneity of hardware, computational, and communication resources and on design techniques to assemble components into systems .A component-based system is a composition of components, and the way components interact with other components and with the computational environment greatly affects the flexibility of the entire system and the reusability of individual functionality."}
{"_id": "46d8612ee52a0d608a7e02a237db61ffd233a071", "title": "Artificial Noise Aided Secure Cognitive Beamforming for Cooperative MISO-NOMA Using SWIPT", "text": "Cognitive radio (CR) and non-orthogonal multiple access (NOMA) have been deemed two promising technologies due to their potential to achieve high spectral efficiency and massive connectivity. This paper studies a multiple-input single-output NOMA CR network relying on simultaneous wireless information and power transfer conceived for supporting a massive population of power limited battery-driven devices. In contrast to most of the existing works, which use an ideally linear energy harvesting model, this study applies a more practical non-linear energy harvesting model. In order to improve the security of the primary network, an artificial-noise-aided cooperative jamming scheme is proposed. The artificial-noise-aided beamforming design problems are investigated subject to the practical secrecy rate and energy harvesting constraints. Specifically, the transmission power minimization problems are formulated under both perfect channel state information (CSI) and the bounded CSI error model. The problems formulated are non-convex, hence they are challenging to solve. A pair of algorithms either using semidefinite relaxation (SDR) or a cost function are proposed for solving these problems. Our simulation results show that the proposed cooperative jamming scheme succeeds in establishing secure communications and NOMA is capable of outperforming the conventional orthogonal multiple access in terms of its power efficiency. Finally, we demonstrate that the cost function algorithm outperforms the SDR-based algorithm."}
{"_id": "6c327c3d22dfff18d38249fe0444757209026be2", "title": "Volumetric spatial transformer network for object recognition", "text": "Understanding 3D environments is a vital element of modern computer vision research due to paramount relevance in many vision systems, spanning a wide field of application scenarios from self-driving cars to autonomous robots [Qi et al. 2016]. At the present time, object recognition mainly employs two methods: volumetric CNNs [Wu Z 2015] and multi-view CNNs [Xu et al. 2015] [Xu et al. 2016]. In this paper, we propose a volumetric spatial transformer network for object recognition. It fills the gap between 3D CNN and 2D CNN for the first time, and provides an end-to-end training fashion. Given a 3D shape, the network can automatically select the best view that maximizes the accuracy of object recognition."}
{"_id": "8bc5cc492d72ec2700a127bab8674aed3acc2e41", "title": "A framework of mobile transaction use : the user's perspective", "text": ".....................................................................................................................................II ACKNOWLEDGEMENTS .......................................................................................................... IV DEDICATION.................................................................................................................................. V TABLE OF CONTENTS .............................................................................................................. VI LIST OF FIGURES ...................................................................................................................... XII LIST OF TABLES ...................................................................................................................... XIII CHAPTER"}
{"_id": "81c7cfd1d1e7b568a75b496b39190281b2959f44", "title": "Quranic Verse Extraction base on Concepts using OWL-DL Ontology", "text": "In recent years, there has been a global growing demand for Islamic knowledge by both Muslims and non-Muslims. This has brought about a number of automated applications that ease the retrieval of knowledge from the Holy Book, being the major source of Knowledge in Islam. However, the current retrieval methods in the Quranic domain lack adequate semantic search capabilities; they are mostly based on the keywords matching approach. There is a lack of adequate linked data to provide a better description of concepts found in the Holy Quran. In this study we propose an Ontology assisted semantic search system in the Qur\u2019an domain. The system makes use of Quran ontology and various relationships and restrictions. This will enable the user to semantically search for verses related to their query in Al-Quran. The system has improved the search capability of the Holy Quran knowledge to 95 percent accuracy level."}
{"_id": "bbaac3ff0eb5b661faead2748313834c9cf771cf", "title": "Outline for a Logical Theory of Adaptive Systems", "text": "The purpose of this paper is to outline a theory of automata appropriate to the properties, requirements and questions of adaptation. The conditions that such a theory should satisfy come from not one but several fields: It should be possible to formulate, at least in an abstract version, some of the key hypotheses and problems from relevant parts of biology, particularly the areas concerned with molecular control and neurophysiology. The work in theoretical genetics initiated by R. A. Fisher [5] and Sewall Wright [24] should find a natural place in the theory. At the same time the rigorous methods of automata theory should be brought to bear (particularly those parts concerned with growing automata [1, 2, 3, 7, 8, 12, 15, 18, 23]). Finally the theory should include among its models abstract counterparts of artificial adaptive systems currently being studied, systems such as Newell-Shaw-Simon's \"General Problem Solver\" [13], Selfridge's \"Pandemonium\" [17], von Neumann's self-reproducing automata [22] and Turing's morphogenetic systems [19, 20]. The theory outlined here (which is intended as a theory and not the theory) is presented in four main parts. Section 2 discusses the study of adaptation via generation procedures and generated populations. Section 3 defines a continuum of generation procedures realizable in a reasonably direct fashion. Section 4 discusses the realization of generation procedures as populations of interacting programs in an iterative circuit computer. Section 5 discusses the process of adaptation in the context of the earlier sections. The paper concludes with a discussion of the nature of the theorems of this theory. Before entering upon the detailed discussion, one general feature of the theory should be noted. The interpretations or models of the theory divide into two broad categories: \"complete\" models and \"incomplete\" models. The \"complete\" models comprise the artificial systems--systems with properties and specifications completely delimited at the outset (cf. the rules of a game). One set of \"complete\" models for the theory consists of various programmed parallel computers. The \"incomplete\" models encompass natural systems. Any natural system involves an unlimited number of factors and, inevitably, the theory can handle only a selected few of these. Because there will always be variables which do not have explicit counterparts in the theory, the derived statements must be approximate relative to natural systems. For this reason it helps greatly that"}
{"_id": "560760fd9032d3125da027e35f2faebdcf6e64dd", "title": "Maximum Power Point Tracking Scheme for PV Systems Operating Under Partially Shaded Conditions", "text": "Current-voltage and power-voltage characteristics of large photovoltaic (PV) arrays under partially shaded conditions are characterized by multiple steps and peaks. This makes the tracking of the actual maximum power point (MPP) [global peak (GP)] a difficult task. In addition, most of the existing schemes are unable to extract maximum power from the PV array under these conditions. This paper proposes a novel algorithm to track the global power peak under partially shaded conditions. The formulation of the algorithm is based on several critical observations made out of an extensive study of the PV characteristics and the behavior of the global and local peaks under partially shaded conditions. The proposed algorithm works in conjunction with a DC-DC converter to track the GP. In order to accelerate the tracking speed, a feedforward control scheme for operating the DC-DC converter is also proposed, which uses the reference voltage information from the tracking algorithm to shift the operation toward the MPP. The tracking time with this controller is about one-tenth as compared to a conventional controller. All the observations and conclusions, including simulation and experimental results, are presented."}
{"_id": "578ccaba4a370816254c1c36d82466838ef9035d", "title": "A Particle Swarm Optimization-Based Maximum Power Point Tracking Algorithm for PV Systems Operating Under Partially Shaded Conditions", "text": "A photovoltaic (PV) generation system (PGS) is becoming increasingly important as renewable energy sources due to its advantages such as absence of fuel cost, low maintenance requirement, and environmental friendliness. For large PGS, the probability for partially shaded condition (PSC) to occur is also high. Under PSC, the P-V curve of PGS exhibits multiple peaks, which reduces the effectiveness of conventional maximum power point tracking (MPPT) methods. In this paper, a particle swarm optimization (PSO)-based MPPT algorithm for PGS operating under PSC is proposed. The standard version of PSO is modified to meet the practical consideration of PGS operating under PSC. The problem formulation, design procedure, and parameter setting method which takes the hardware limitation into account are described and explained in detail. The proposed method boasts the advantages such as easy to implement, system-independent, and high tracking efficiency. To validate the correctness of the proposed method, simulation, and experimental results of a 500-W PGS will also be provided to demonstrate the effectiveness of the proposed technique."}
{"_id": "cfba0478db1baefe24cbc4e796f176edfb8410fb", "title": "Adolescent thriving: the role of sparks, relationships, and empowerment.", "text": "Although most social science research on adolescence emphasizes risks and challenges, an emergent field of study focuses on adolescent thriving. The current study extends this line of inquiry by examining the additive power of identifying and nurturing young people's \"sparks,\" giving them \"voice,\" and providing the relationships and opportunities that reinforce and nourish thriving. A national sample of 1,817 adolescents, all age 15 (49% female), and including 56% white, 17% Hispanic/Latino, and 17% African-American adolescents, completed an online survey that investigated their deep passions or interests (their \"sparks\"), the opportunities and relationships they have to support pursuing those sparks, and how empowered they feel to make civic contributions (their \"voice\"). Results consistently supported the hypothesis that linking one's spark with a sense of voice and supportive opportunities and relationships strengthens concurrent outcomes, particularly those reflecting prosociality, during a key developmental transition period. The three developmental strengths also predicted most outcomes to a greater degree than did demographics. However, less than 10 percent of 15-year-olds reported experiencing high levels of all three strengths. The results demonstrate the value of focusing on thriving in adolescence, both to reframe our understanding of this age group and to highlight the urgency of providing adolescents the opportunities and relationships they need to thrive."}
{"_id": "44acefce63fd7ff00edda381447ae5220c0bd741", "title": "Large-Scale Multiclass Support Vector Machine Training via Euclidean Projection onto the Simplex", "text": "Dual decomposition methods are the current state-of-the-art for training multiclass formulations of Support Vector Machines (SVMs). At every iteration, dual decomposition methods update a small subset of dual variables by solving a restricted optimization problem. In this paper, we propose an exact and efficient method for solving the restricted problem. In our method, the restricted problem is reduced to the well-known problem of Euclidean projection onto the positive simplex, which we can solve exactly in expected O(k) time, where k is the number of classes. We demonstrate that our method empirically achieves state-of-the-art convergence on several large-scale high-dimensional datasets."}
{"_id": "f8e8e572995ecd798dc29965457b8d4f044bdf47", "title": "The Foreign Language Effect on Moral Judgment: The Role of Emotions and Norms", "text": "We investigated whether and why the use of a foreign language influences moral judgment. We studied the trolley and footbridge dilemmas, which propose an action that involves killing one individual to save five. In line with prior work, the use of a foreign language increased the endorsement of such consequentialist actions for the footbridge dilemma, but not for the trolley dilemma. But contrary to recent theorizing, this effect was not driven by an attenuation of emotions. An attenuation of emotions was found in both dilemmas, and it did not mediate the foreign language effect on moral judgment. An examination of additional scenarios revealed that foreign language influenced moral judgment when the proposed action involved a social or moral norm violation. We propose that foreign language influences moral judgment by reducing access to normative knowledge."}
{"_id": "a0868284269809ff605a46fbccf78b9f2437b2e4", "title": "Confidence intervals for the overall effect size in random-effects meta-analysis.", "text": "One of the main objectives in meta-analysis is to estimate the overall effect size by calculating a confidence interval (CI). The usual procedure consists of assuming a standard normal distribution and a sampling variance defined as the inverse of the sum of the estimated weights of the effect sizes. But this procedure does not take into account the uncertainty due to the fact that the heterogeneity variance (tau2) and the within-study variances have to be estimated, leading to CIs that are too narrow with the consequence that the actual coverage probability is smaller than the nominal confidence level. In this article, the performances of 3 alternatives to the standard CI procedure are examined under a random-effects model and 8 different tau2 estimators to estimate the weights: the t distribution CI, the weighted variance CI (with an improved variance), and the quantile approximation method (recently proposed). The results of a Monte Carlo simulation showed that the weighted variance CI outperformed the other methods regardless of the tau2 estimator, the value of tau2, the number of studies, and the sample size."}
{"_id": "97c5475c8c2ee4c29d42670e0a74a0c20f88f199", "title": "Automated leukemia detection in blood microscopic images using statistical texture analysis", "text": "Pathological image analysis plays a significant role in effective disease diagnostics. Quantitative microscopy has supplemented clinicians with accurate results for diagnosis of dreaded diseases such as leukemia, hepatitis, AIDS, psoriasis. In this paper we present a texture based approach for automated leukemia detection. Acute lymphocytic leukemia (ALL) is a malignant disease characterized by the accumulation of lymphoblast in the bone marrow. Texture features of the blood nucleus are investigated for diagnostic prediction of ALL. Other shape features are also extracted to classify a lymphocytic cell in the blood image into normal lymphocyte or lymphoblast (blasts). Initial segmentation is done using K-means clustering which segregates leukocytes or white blood cells (WBC) from other blood components i.e. erythrocytes and platelets. The results of K-means are used for evaluating individual cell shape, texture and other features for final detection of leukemia. A total of 108 blood smear images were considered for feature extraction and final performance evaluation is validated with the results of a hematologist."}
{"_id": "3fbea89d8bb3a1ad1f6c1bc5728820310c9c1c99", "title": "Self-Assembling Machine", "text": "This letter describes dynamic self-assembly of two-component rotors floating at the interface between liquid and air into simple, reconfigurable mechanical systems ~\u2018\u2018machines\u2019\u2019!. The rotors are powered by an external, rotating magnetic field, and their positions within the interface are controlled by:~i! repulsive hydrodynamic interactions between them and ~ii ! by localized magnetic fields produced by an array of small electromagnets located below the plane of the interface. The mechanical functions of the machines depend on the spatiotemporal sequence of activation of the electromagnets. \u00a92004 American Institute of Physics. @DOI: 10.1063/1.1664019 #"}
{"_id": "5120a5c613a6e2584324afe9e4556e4a5dcfbbf0", "title": "Detection of cancer tumors in mammography images using support vector machine and mixed gravitational search algorithm", "text": "In this paper, support vector machine (SVM) and mixed gravitational search algorithm (MGSA) are utilized to detect the breast cancer tumors in mammography images. Sech template matching method is used to segment images and extract the regions of interest (ROIs). Gray-level co-occurrence matrix (GLCM) is used to extract features. The mixed GSA is used for optimization of the classifier parameters and selecting salient features. The main goal of using MGSA-SVM is to decrease the number of features and to improve the SVM classification accuracy. Finally, the selected features and the tuned SVM classifier are used for detecting tumors. The experimental results show that the proposed method is able to optimize both feature selection and the SVM parameters for the breast cancer tumor detection."}
{"_id": "49b7bcd2ff82b0527d25d5fcea21fbe1261f3740", "title": "Machine Learning for Vehicular Networks", "text": "The emerging vehicular networks are expected to make everyday vehicular operation safer, greener, and more efficient, and pave the path to autonomous driving in the advent of the fifth generation (5G) cellular system. Machine learning, as a major branch of artificial intelligence, has been recently applied to wireless networks to provide a data-driven approach to solve traditionally challenging problems. In this article, we review recent advances in applying machine learning in vehicular networks and attempt to bring more attention to this emerging area. After a brief overview of the major concepts of machine learning, we present some application examples of machine learning in solving problems arising in vehicular networks. We finally discuss and highlight several open issues that warrant further research."}
{"_id": "34a1533f64194b471c95f0937ebc971cc0a50ebb", "title": "Mining Graphs for Understanding Time-Varying Volumetric Data", "text": "A notable recent trend in time-varying volumetric data analysis and visualization is to extract data relationships and represent them in a low-dimensional abstract graph view for visual understanding and making connections to the underlying data. Nevertheless, the ever-growing size and complexity of data demands novel techniques that go beyond standard brushing and linking to allow significant reduction of cognition overhead and interaction cost. In this paper, we present a mining approach that automatically extracts meaningful features from a graph-based representation for exploring time-varying volumetric data. This is achieved through the utilization of a series of graph analysis techniques including graph simplification, community detection, and visual recommendation. We investigate the most important transition relationships for time-varying data and evaluate our solution with several time-varying data sets of different sizes and characteristics. For gaining insights from the data, we show that our solution is more efficient and effective than simply asking users to extract relationships via standard interaction techniques, especially when the data set is large and the relationships are complex. We also collect expert feedback to confirm the usefulness of our approach."}
{"_id": "fe92ba79ab8f96ec09e0ce3a254732d7e2600a88", "title": "Human iris recognition in post-mortem subjects: Study and database", "text": "This paper presents a unique study of post-mortem human iris recognition and the first known to us database of near-infrared and visible-light iris images of deceased humans collected up to almost 17 days after death. We used four different iris recognition methods to analyze the dynamics of iris quality decay in short-term comparisons (samples collected up to 60 hours after death) and long-term comparisons (for samples acquired up to 407 hours after demise). This study shows that post-mortem iris recognition is possible and occasionally works even 17 days after death. These conclusions contradict a promulgated rumor that iris is unusable shortly after decease. We make this dataset publicly available to let others verify our findings and to research new aspects of this important and unfamiliar topic. We are not aware of any earlier papers offering post-mortem human iris images and such comprehensive analysis employing four different matchers."}
{"_id": "3d8b975e108aab83539697c4e98bb32ee7c75b63", "title": "Performance analysis of visualmarkers for indoor navigation systems", "text": "The massive diffusion of smartphones, the growing interest in wearable devices and the Internet of Things, and the exponential rise of location based services (LBSs) have made the problem of localization and navigation inside buildings one of the most important technological challenges of recent years. Indoor positioning systems have a huge market in the retail sector and contextual advertising; in addition, they can be fundamental to increasing the quality of life for citizens if deployed inside public buildings such as hospitals, airports, and museums. Sometimes, in emergency situations, they can make the difference between life and death. Various approaches have been proposed in the literature. Recently, thanks to the high performance of smartphones\u2019 cameras, marker-less and marker-based computer vision approaches have been investigated. In a previous paper, we proposed a technique for indoor localization and navigation using both Bluetooth low energy (BLE) and a 2D visual marker system deployed into the floor. In this paper, we presented a qualitative performance evaluation of three 2D visual markers, Vuforia, ArUco marker, and AprilTag, which are suitable for real-time applications. Our analysis focused on specific case study of visual markers placed onto the tiles, to improve the efficiency of our indoor localization and navigation approach by choosing the best visual marker system."}
{"_id": "536c6d5e59a05da27153303a19e0274262affdcd", "title": "An Investigation into Neural Net Optimization via Hessian Eigenvalue Density", "text": "To understand the dynamics of optimization in deep neural networks, we develop a tool to study the evolution of the entire Hessian spectrum throughout the optimization process. Using this, we study a number of hypotheses concerning smoothness, curvature, and sharpness in the deep learning literature. We then thoroughly analyze a crucial structural feature of the spectra: in non-batch normalized networks, we observe the rapid appearance of large isolated eigenvalues in the spectrum, along with a surprising concentration of the gradient in the corresponding eigenspaces. In batch normalized networks, these two effects are almost absent. We characterize these effects, and explain how they affect optimization speed through both theory and experiments. As part of this work, we adapt advanced tools from numerical linear algebra that allow scalable and accurate estimation of the entire Hessian spectrum of ImageNet-scale neural networks; this technique may be of independent interest in other applications."}
{"_id": "4f206b7cbcc6b41ea7216600dae0f87a12bdcc9c", "title": "FlexiStickers: photogrammetric texture mapping using casual images", "text": "Texturing 3D models using casual images has gained importance in the last decade, with the advent of huge databases of images. We present a novel approach for performing this task, which manages to account for the 3D geometry of the photographed object. Our method overcomes the limitation of both the constrained-parameterization approach, which does not account for the photography effects, and the photogrammetric approach, which cannot handle arbitrary images. The key idea of our algorithm is to formulate the mapping estimation as a Moving-Least-Squares problem for recovering local camera parameters at each vertex. The algorithm is realized in a FlexiStickers application, which enables fast interactive texture mapping using a small number of constraints."}
{"_id": "3c5e3866f8581bb82c4d936171ddf54a71592932", "title": "Monte-Carlo Tree Search in Settlers of Catan", "text": "Games are considered important benchmark tasks of artificial intelligence research. Modern strategic board games can typically be played by three or more people, which makes them suitable test beds for investigating multi-player strategic decision making. Monte-Carlo Tree Search (MCTS) is a recently published family of algorithms that achieved successful results with classical, two-player, perfect-information games such as Go. In this paper we apply MCTS to the multi-player, non-deterministic board game Settlers of Catan. We implemented an agent that is able to play against computer-controlled and human players. We show that MCTS can be adapted successfully to multi-agent environments, and present two approaches of providing the agent with a limited amount of domain knowledge. Our results show that the agent has considerable playing strength when compared to existing heuristics for the game. We conclude that MCTS is suitable for implementing a strong Settlers of Catan player."}
{"_id": "424b5bd20306a1fd0843ab20f0cdce1ea48155dc", "title": "Enduring effects for cognitive behavior therapy in the treatment of depression and anxiety.", "text": "Recent studies suggest that cognitive and behavioral interventions have enduring effects that reduce risk for subsequent symptom return following treatment termination. These enduring effects have been most clearly demonstrated with respect to depression and the anxiety disorders. It remains unclear whether these effects are a consequence of the amelioration of the causal processes that generate risk or the introduction of compensatory strategies that offset them and whether these effects reflect the mobilization of cognitive or other mechanisms. No such enduring effects have been observed for the psychoactive medications, which appear to be largely palliative in nature. Other psychosocial interventions remain largely untested, although claims that they produce lasting change have long been made. Whether such enduring effects extend to other disorders remains to be seen, but the capacity to reduce risk following treatment termination is one of the major benefits provided by the cognitive and behavioral interventions with respect to the treatment of depression and the anxiety disorders."}
{"_id": "fe6d25193410fe00d355d99282593db6f965eb05", "title": "Analyzing and labeling telecom communities using structural properties", "text": "Social network analysis and mining has been highly influenced by the online social web sites, telecom consumer data and instant messaging systems and has widely analyzed the presence of dense communities using graph theory and machine learning techniques. Mobile social network analysis is the mapping and measuring of interactions and flows between people, groups, and organizations based on the usage of their mobile communication services. Community identification and mining is one of the recent major directions in social network analysis. In this paper we find the communities in the network based on a modularity factor. Then we propose a graph theory-based algorithm for further split of communities resulting in smaller sized and closely knit sub-units, to drill down and understand consumer behavior in a comprehensive manner. These sub-units are then analyzed and labeled based on their group behavior pattern. The analysis is done using two approaches:\u2014rule-based, and cluster-based, for comparison and the able usage of information for suitable labeling of groups. Moreover, we measured and analyzed the uniqueness of the structural properties for each small unit; it is another quick and dynamic way to assign suitable labels for each distinct group. We have mapped the behavior-based labeling with unique structural properties of each group. It reduces considerably the time taken for processing and identifying smaller sub-communities for effective targeted marketing. The efficiency of the employed algorithms was evaluated on a large telecom dataset in three different stages of our work."}
{"_id": "9642d274c43f1e764a3eacd021205a4813ee33dd", "title": "Patterns and dynamics of users' behavior and interaction: Network analysis of an online community", "text": "This research draws on longitudinal network data from an online community to examine patterns of users\u2019 behavior and social interaction, and infer the processes underpinning dynamics of system use. The online community represents a prototypical example of a complex evolving social network in which connections between users are established over time by online messages. We study the evolution of a variety of properties since the inception of the system, including how users create, reciprocate, and deepen relationships with one another, variations in users\u2019 gregariousness and popularity, reachability and typical distances among users, and the degree of local redundancy in the system. Results indicate that the system is a \u201csmall world\u201dcharacterized by the emergence, in its early stages,of a hub-dominated structure with heterogeneity in users\u2019 behavior. We investigate whether hubs are responsible for holding the system together and facilitating information flow, examine first-mover advantages underpinning users\u2019ability to rise to system prominence, and uncover gender differences in users\u2019 gregariousness, popularity, and local redundancy. We discuss the implications of the results for research on system use and evolving social networks, and for a host of applications, including information diffusion, communities of practice, and the security and robustness of information systems."}
{"_id": "c5e776f11d454d20bb08eaf2c004bc6b0f2055c1", "title": "Modeling Analysis and Parameters Calculation of Permanent Magnet Linear Synchronous Motor", "text": "Permanent magnet linear synchronous motor (PMLSM) has several advantages, such as simple structure, small volume, high force, and so on. It is a research focus and difficulties in the research field of motor. Because PMLSM is different from the traditional rotary motors in structure and running state, it is difficult for to build its precise mathematical model by analytical method. The asymmetry of the three-phase winding electromagnetic and electrical parameters of open PMLSM in a low-speed working zone is impossible to take into account. In this paper, the difference between the permanent magnet linear synchronous motor and traditional motors are analyzed. And then, a non-linear, asymmetric and variable parameter space state model is built, considering the structural and running characteristics of PMLSM. The proposed model needs all kinds of parameters of PMLSM, including selfinductance, mutual inductance, flux linkage, etc. So, a finite element calculation and actual measurement of the required parameters of a prototype PMLSM are done, respectively. The influence of air-gap on the inductance parameters was analyzed."}
{"_id": "c4ae52a107e53d5f01319fda328cce382829eae8", "title": "Super High Gain Substrate Integrated Clamped-Mode Printed Log-Periodic Dipole Array Antenna", "text": "A compact, low profile, super high antenna gain, and broadband antenna, based on the printed log-periodic dipole array (PLPDA) antenna and the substrate integrated waveguide (SIW) technologies, has been proposed. It is so called substrate integrated waveguide clamped-mode printed log-periodic dipole array (SIW clamped-mode PLPDA) antenna. The arrangement of the dipole elements and feed lines in the SIW clamped-mode PLPDA antenna is designed and optimized. The proposed SIW clamped-mode PLPDA antenna was designed, fabricated, and tested, while a single SIW PLPDA antenna and a 2-element SIW PLPDA array were designed for comparison. The impedance bandwidth and the far-field radiation patterns in both E-plane and H-plane were measured. The measured results show that the performance of the SIW clamped-mode PLPDA antenna is similar to that of the 2-element SIW PLPDA array over the working frequency range 25 GHz-40 GHz, while the proposed antenna has only half the number of dipole elements and feed lines."}
{"_id": "5015e332f5c9f1f9d059b6fa0644a689fe64a017", "title": "AN INTEGRATIVE MODEL OF ORGANIZATIONAL", "text": "Scholars in various disciplines have considered the causes, nature, and effects of trust. Prior approaches to studying trust are considered, Including characteristics of the tnistor, the trustee, and the role of risk. A definition of trust and a model of its antecedents and outcomes are presented, which integrate research from multiple disciplines and differentiate trust from similar constructs. Several research propositions based on the model are presented."}
{"_id": "e107087aeaee9b98b17f229e96392aed1781bdbe", "title": "Safe Planes for Injection Rhinoplasty: A Histological Analysis of Midline Longitudinal Sections of the Asian Nose", "text": "Dorsal nasal augmentation is an essential part of injection rhinoplasty on the Asian nose. Aesthetic physicians require detailed knowledge of the nasal anatomy to accurately and safely inject filler. One hundred and thirty-five histological cross sections were examined from 45 longitudinal strips of soft tissue harvested from the midline of the nose, beginning from the glabella to the nasal tip. Muscles and nasal cartilage were used as landmarks for vascular identification. At the nasal tip, a midline longitudinal columellar artery with a diameter of 0.21\u00a0\u00b1\u00a00.09\u00a0mm was noted in 14 cadavers (31.1\u00a0%). At the infratip, subcutaneous tissue contained cavernous tissue similar to that of the nasal mucosa. The feeding arteries of these dilated veins formed arteriovenous shunts, into which retrograde injection of filler may be possible. All of the nasal arteries present were identified as subcutaneous arteries. They coursed mainly in the superficial layer of the subcutaneous tissues, with smaller branches forming subdermal plexuses. A substantial arterial anastomosis occurred at the supratip region, in which the artery lay in the middle of the subcutaneous tissue at the level of the major alar cartilages. These arteries had a diameter ranging between 0.4 and 0.9\u00a0mm and were found in 29 of 45 specimens (64.4\u00a0%). This was at the level midway between the rhinion above the supratip and the infratip. This anastomotic artery also crossed the midline at the rhinion superficial to the origin of the procerus on the lower end of the nasal bone. Here the arterial diameter ranged between 0.1 and 0.3\u00a0mm, which was not large enough to cause arterial emboli. Fascicular cross sections of the nasalis muscle directly covered the entire upper lateral cartilage. The subdermal tissue contained few layers of fat cells along with the occasional small artery. The procerus arose from the nasal bone and was continuous with the nasalis in 16 cadavers (35.6\u00a0%). There was fatty areolar tissue between the procerus and the periosteal layer and no significant arteries present. The procerus ascended beyond the brow to insert into the frontalis muscle with very few cutaneous insertions. The supratrochlear vessels and accompanying nerve were occasionally found on the surface of the frontalis muscle. Most nasal arteries found in the midline are subcutaneous arteries. Filler should be injected deeply to avoid vascular injury leading to compromised perfusion at the dorsum or filler emboli at the nasal tip. This journal requires that the authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 ."}
{"_id": "0f3b8eba6e536215b6f6c727a009c8b44cda0a91", "title": "A Convolutional Attention Network for Extreme Summarization of Source Code", "text": "Attention mechanisms in neural networks have proved useful for problems in which the input and output do not have fixed dimension. Often there exist features that are locally translation invariant and would be valuable for directing the model\u2019s attention, but previous attentional architectures are not constructed to learn such features specifically. We introduce an attentional neural network that employs convolution on the input tokens to detect local time-invariant and long-range topical attention features in a context-dependent way. We apply this architecture to the problem of extreme summarization of source code snippets into short, descriptive function name-like summaries. Using those features, the model sequentially generates a summary by marginalizing over two attention mechanisms: one that predicts the next summary token based on the attention weights of the input tokens and another that is able to copy a code token as-is directly into the summary. We demonstrate our convolutional attention neural network\u2019s performance on 10 popular Java projects showing that it achieves better performance compared to previous attentional mechanisms."}
{"_id": "049b30d8aaedc87c9b66dac2e607ea0cf4e87b56", "title": "'C and tcc: A Language and Compiler for Dynamic Code Generation", "text": "Dynamic code generation allows programmers to use run-time information in order to achieve performance and expressiveness superior to those of static code. The 'C(Tick C) language is a superset of ANSI C that supports efficient and high-level use of dynamic code generation. 'C provides dynamic code generation at the level of C expressions and statements and supports the composition of dynamic code at run time. These features enable programmers to add dynamic code generation to existing C code incrementally and to write important applications (such as \u201cjust-in-time\u201d compilers) easily. The article presents many examples of how 'C can be used to solve practical problems. The tcc compiler is an efficient, portable, and freely available implementation of 'C. tcc allows programmers to trade dynamic compilation speed for dynamic code quality: in some aplications, it is most important to generate code quickly, while in others code quality matters more than compilation speed. The overhead of dynamic compilation is on the order of 100 to 600 cycles per generated instruction, depending on the level of dynamic optimizaton. Measurements show that the use of dynamic code generation can improve performance by almost an order of magnitude; two- to four-fold speedups are common. In most cases, the overhead of dynamic compilation is recovered in under 100 uses of the dynamic code; sometimes it can be recovered within one use."}
{"_id": "83cfac5ff6c789089b852c11106eb44500567078", "title": "The dark side of smartphone usage: Psychological traits, compulsive behavior and technostress", "text": "0747-5632/$ see front matter 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.chb.2013.10.047 \u21d1 Corresponding author. Address: Department of Business Management, National Sun Yat-sen University, No. 70, Lianhai Rd., Gushan District, Kaohsiung City 804, Taiwan, Tel.: +886 7 525 2000x4627. E-mail address: ctchang@faculty.nsysu.edu.tw (C.-T. Chang). Yu-Kang Lee, Chun-Tuan Chang \u21d1, You Lin, Zhao-Hong Cheng"}
{"_id": "89d1b0072e6d3ff476cef0ad5224786851ae144f", "title": "Fourier Lucas-Kanade Algorithm", "text": "In this paper, we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one preprocesses the source image and template/model with a bank of filters (e.g., oriented edges, Gabor, etc.) as 1) it can handle substantial illumination variations, 2) the inefficient preprocessing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, 3) unlike traditional LK, the computational cost is invariant to the number of filters and as a result is far more efficient, and 4) this approach can be extended to the Inverse Compositional (IC) form of the LK algorithm where nearly all steps (including Fourier transform and filter bank preprocessing) can be precomputed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to nonrigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs)."}
{"_id": "a0bbd8ca2a7d881158a71ee6b6b1b586fe254a44", "title": "Sensor Reduction for Driver-Automation Shared Steering Control via an Adaptive Authority Allocation Strategy", "text": "This paper presents a new shared control method for lane keeping assist (LKA) systems of intelligent vehicles. The proposed method allows the LKA system to effectively share the control authority with a human driver by avoiding or minimizing the conflict situations between these two driving actors. To realize the shared control scheme, the unpredictable driver-automation interaction is explicitly taken into account in the control design via a fictive driver activity variable. This latter is judiciously introduced into the driver\u2013road\u2013vehicle system to represent the driver's need for assistance in accordance with his/her real-time driving activity. Using Lyapunov stability arguments, Takagi\u2013Sugeno fuzzy model-based design conditions are derived to handle not only the time-varying driver activity variable, but also a large variation range of vehicle speed. Both simulation and hardware experiments are presented to demonstrate that the proposed control strategy together with a linear matrix inequality design formulation provide an effective tool to deal with the challenging shared steering control issue. In particular, a fuzzy output feedback control scheme is exploited to achieve the shared control goal without at least two important vehicle sensors. These physical sensors are widely employed in previous works to measure the lateral speed and the steering rate for the control design and real-time implementation. The successful results of this idea of sensor-reduction control has an obvious interest from practical viewpoint."}
{"_id": "147ba62f2e7af3a5b85781941227132a9ec3535b", "title": "Predictability, Complexity, and Learning", "text": "We define predictive information Ipred(T) as the mutual information between the past and the future of a time series. Three qualitatively different behaviors are found in the limit of large observation times T: Ipred(T) can remain finite, grow logarithmically, or grow as a fractional power law. If the time series allows us to learn a model with a finite number of parameters, then Ipred(T) grows logarithmically with a coefficient that counts the dimensionality of the model space. In contrast, power-law growth is associated, for example, with the learning of infinite parameter (or non-parametric) models such as continuous functions with smoothness constraints. There are connections between the predictive information and measures of complexity that have been defined both in learning theory and the analysis of physical systems through statistical mechanics and dynamical systems theory. Furthermore, in the same way that entropy provides the unique measure of available information consistent with some simple and plausible conditions, we argue that the divergent part of Ipred(T) provides the unique measure for the complexity of dynamics underlying a time series. Finally, we discuss how these ideas may be useful in problems in physics, statistics, and biology."}
{"_id": "47f193b934e38fb3d056aa75fabaca4d60e93055", "title": "Modeling Response Time in Digital Human Communication", "text": "Our daily lives increasingly involve interactions with other individuals via different communication channels, such as email, text messaging, and social media. In this paper we focus on the problem of modeling and predicting how long it takes an individual to respond to an incoming communication event, such as receiving an email or a text. In particular, we explore the effect on response times of an individual\u2019s temporal pattern of activity, such as circadian and weekly patterns which are typically present in individual data. A probabilistic time-warping approach is used, considering linear time to be a transformation of \u201ceffective time,\u201d where the transformation is a function of an individual\u2019s activity rate. We apply this transformation of time to two different types of temporal event models, the first for modeling response times directly, and the second for modeling event times via a Hawkes process. We apply our approach to two different sets of real-world email histories. The experimental results clearly indicate that the transformation-based approach produces systematically better models and predictions compared to simpler methods that ignore circadian and weekly patterns. Current technology allows us to collect large quantities of time-stamped individual-level event data characterizing our \u201cdigital behavior\u201d in contexts such as texting, email activity, microblogging, social media interactions, and more \u2014 and the volume and variety of this type of data is continually increasing. The resulting time-series of events are rich in behavioral information about our daily lives. Tools for obtaining and visualizing such information are becoming increasingly popular, such as the ability to download your entire email history for mail applications such as Gmail, and various software packages for tracking personal fitness using data from devices such as Fitbit. This paper is focused on modeling the temporal aspects of how an individual (also referred to as the \u201cego\u201d) responds to others, given a sequence of timestamped events (e.g. communication messages via email or text). What can we learn from the way we respond to others? Are there systematic \u2217Currently employed at Google. The research described in this paper was conducted while the author was a graduate student at UC Irvine. Copyright c \u00a9 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Mon Tue Wed Thu Fri Sat Sun 0 1 a (t ) Mon Tue Wed Thu Fri Sat Sun 0 1 a (t ) Figure 1: Example of smoothed daily and weekly patterns over two separate individuals\u2019 email histories. a(t) on the yaxis represents an individual\u2019s typical activity as a function of time. patterns that can be extracted and used predictively? How do our daily sleep and work patterns factor in to our response patterns? These are common issues that arise when modeling communication events, as the ego\u2019s behavior typically changes significantly over the course of a day or week. Examples of such patterns are shown in Figure 1. We propose a novel method for parameterizing these circadian (daily) and weekly patterns, allowing the time dimension to be transformed according to such patterns. This transformation of time will allow models to describe and predict behavioral patterns which are invariant to the ego\u2019s routine patterns. 
We apply this approach to predicting the ego\u2019s response time to an event. Learning such response patterns is useful not only for identifying relationships between pairs of individuals (Halpin and De Boeck 2013), but also as features for prioritization of events, e.g., the priority inbox implemented in Gmail (Aberdeen, Pacovsky, and Slater 2010). Our experimental results show clear advantages in terms of predictive power when modeling response times in the transformed time domain. \u03c40 \u03c41 \u03c42 t0 t1 t2 \u22060"}
{"_id": "ac29f5feff6c2bab6fc5147fd8cb6d327e1dc1d5", "title": "An empirical investigation of mobile ticketing service adoption in public transportation", "text": "In this paper, we present results from a study of mobile ticketing service adoption in public transportation. The theoretical background of the study is based on technology adoption and trust theories, which are augmented with concepts of mobile use context and mobility. Our empirical findings from analyses of a survey data suggest that compatibility of the mobile ticketing service with consumer behavior is a major determinant of adoption. Mobility and contextual factors, including budget constraints, availability of other alternatives, and time pressure in the service use situation were also found to have a strong effect on the adoption decision. Our findings suggest that contextual and mobile service-specific features are important determinants of mobile service adoption and should thus be integrated into the traditional adoption models."}
{"_id": "26c2d005898b5055420a31d96b73d7b4830ba3c4", "title": "The obstacle-restriction method for robot obstacle avoidance in difficult environments", "text": "This paper addresses the obstacle avoidance problem in difficult scenarios that usually are dense, complex and cluttered. The proposal is a method called the obstacle-restriction. At each iteration of the control cycle, this method addresses the obstacle avoidance in two steps. First there is procedure to compute instantaneous subgoals in the obstacle structure (obtained by the sensors). The second step associates a motion restriction to each obstacle, which are managed next to compute the most promising motion direction. The advantage of this technique is that it avoids common limitations of previous obstacle avoidance methods, improving their navigation performance in difficult scenarios. Furthermore, we obtain similar results to the recent methods that achieve navigation in troublesome scenarios. However, the new method improves their behavior in open spaces. The performance of this method is illustrated with experimental results obtained with a robotic wheelchair vehicle."}
{"_id": "40f5d252394c8dd98dd34c2a3d047da28d1636eb", "title": "Perceived aggressiveness predicts fighting performance in mixed-martial-arts fighters.", "text": "Accurate assessment of competitive ability is a critical component of contest behavior in animals, and it could be just as important in human competition, particularly in human ancestral populations. Here, we tested the role that facial perception plays in this assessment by investigating the association between both perceived aggressiveness and perceived fighting ability in fighters' faces and their actual fighting success. Perceived aggressiveness was positively associated with the proportion of fights won, after we controlled for the effect of weight, which also independently predicted perceived aggression. In contrast, perception of fighting ability was confounded by weight, and an association between perceived fighting ability and actual fighting success was restricted to heavyweight fighters. Shape regressions revealed that aggressive-looking faces are generally wider and have a broader chin, more prominent eyebrows, and a larger nose than less aggressive-looking faces. Our results indicate that perception of aggressiveness and fighting ability might cue different aspects of success in male-male physical confrontation."}
{"_id": "6eca84343280362cfa9fe227921a316533adaa49", "title": "Positive psychology : Past , present , and ( possible ) future", "text": "What is positive psychology? Where has it come from? Where is it going? These are the questions we address in this article. In defining positive psychology, we distinguish between the meta-psychological level, where the aim of positive psychology is to redress the imbalance in psychology research and practice, and the pragmatic level, which is concerned with what positive psychologists do, in terms of their research, practice, and areas of interest. These distinctions in how we understand positive psychology are then used to shape conceptions of possible futures for positive psychology. In conclusion, we identify several pertinent issues for the consideration of positive psychology as it moves forward. These include the need to synthesize the positive and negative, build on its historical antecedents, integrate across levels of analysis, build constituency with powerful stakeholders, and be aware of the implications of description versus prescription."}
{"_id": "d8354a6d188a9bfca01586a5467670650f3e3a8a", "title": "Design and Optimization of Printed Spiral Coils for Efficient Transcutaneous Inductive Power Transmission", "text": "The next generation of implantable high-power neuroprosthetic devices such as visual prostheses and brain computer interfaces are going to be powered by transcutaneous inductive power links formed between a pair of printed spiral coils (PSC) that are batch-fabricated using micromachining technology. Optimizing the power efficiency of the wireless link is imperative to minimize the size of the external energy source, heating dissipation in the tissue, and interference with other devices. Previous design methodologies for coils made of 1-D filaments are not comprehensive and accurate enough to consider all geometrical aspects of PSCs with planar 3-D conductors as well as design constraints imposed by implantable device application and fabrication technology. We have outlined the theoretical foundation of optimal power transmission efficiency in an inductive link, and combined it with semi-empirical models to predict parasitic components in PSCs. We have used this foundation to devise an iterative PSC design methodology that starts with a set of realistic design constraints and ends with the optimal PSC pair geometries. We have executed this procedure on two design examples at 1 and 5 MHz achieving power transmission efficiencies of 41.2% and 85.8%, respectively, at 10-mm spacing. All results are verified with simulations using a commercial field solver (HFSS) as well as measurements using PSCs fabricated on printed circuit boards."}
{"_id": "585dcc735dc74e50c7a39f3c02f5ed36f3ef8fd1", "title": "Mapping distributed brain function and networks with diffuse optical tomography.", "text": "Mapping of human brain function has revolutionized systems neuroscience. However, traditional functional neuroimaging by positron emission tomography or functional magnetic resonance imaging cannot be used when applications require portability, or are contraindicated because of ionizing radiation (positron emission tomography) or implanted metal (functional magnetic resonance imaging). Optical neuroimaging offers a non-invasive alternative that is radiation free and compatible with implanted metal and electronic devices (for example, pacemakers). However, optical imaging technology has heretofore lacked the combination of spatial resolution and wide field of view sufficient to map distributed brain functions. Here, we present a high-density diffuse optical tomography imaging array that can map higher-order, distributed brain function. The system was tested by imaging four hierarchical language tasks and multiple resting-state networks including the dorsal attention and default mode networks. Finally, we imaged brain function in patients with Parkinson's disease and implanted deep brain stimulators that preclude functional magnetic resonance imaging."}
{"_id": "77faf16660b785f565778ac920dc1f94e06c252c", "title": "On the Performance of Intel SGX", "text": "As cloud computing is widely used in various fields, more and more individuals and organizations are considering outsourcing data to the public cloud. However, the security of the cloud data has become the most prominent concern of many customers, especially those who possess a large volume of valuable and sensitive data. Although some technologies like Homomorphic Encryption were proposed to solve the problem of secure data, the result is still not satisfying. With the advent of Intel SGX processor, which aims to thoroughly eliminate the security concern of cloud environment in a hardware-assisted approach, it brings us a number of questions on its features and its practicability for the current cloud platform. To evaluate the potential impact of Intel SGX, we analyzed the current SGX programming mode and inferred some possible factors that may arise the overhead. To verify our performance hypothesis, we conducted a systematic study on SGX performance by a series of benchmark experiments. After analyzing the experiment result, we performed a workload characterization to help programmer better exploit the current availability of Intel SGX and identify feasible research directions."}
{"_id": "400f329c0c411507285cc801c1aee8e49f6329e3", "title": "A Generative Modeling Approach to Limited Channel ECG Classification", "text": "Processing temporal sequences is central to a variety of applications in health care, and in particular multichannel Electrocardiogram (ECG) is a highly prevalent diagnostic modality that relies on robust sequence modeling. While Recurrent Neural Networks (RNNs) have led to significant advances in automated diagnosis with time-series data, they perform poorly when models are trained using a limited set of channels. A crucial limitation of existing solutions is that they rely solely on discriminative models, which tend to generalize poorly in such scenarios. In order to combat this limitation, we develop a generative modeling approach to limited channel ECG classification. This approach first uses a Seq2Seq model to implicitly generate the missing channel information, and then uses the latent representation to perform the actual supervisory task. This decoupling enables the use of unsupervised data and also provides highly robust metric spaces for subsequent discriminative learning. Our experiments with the Physionet dataset clearly evidence the effectiveness of our approach over standard RNNs in disease prediction."}
{"_id": "540933a2858483f2e71eea58026d011fb4bba040", "title": "A Boosted Bayesian Multiresolution Classifier for Prostate Cancer Detection From Digitized Needle Biopsies", "text": "Diagnosis of prostate cancer (CaP) currently involves examining tissue samples for CaP presence and extent via a microscope, a time-consuming and subjective process. With the advent of digital pathology, computer-aided algorithms can now be applied to disease detection on digitized glass slides. The size of these digitized histology images (hundreds of millions of pixels) presents a formidable challenge for any computerized image analysis program. In this paper, we present a boosted Bayesian multiresolution (BBMR) system to identify regions of CaP on digital biopsy slides. Such a system would serve as an important preceding step to a Gleason grading algorithm, where the objective would be to score the invasiveness and severity of the disease. In the first step, our algorithm decomposes the whole-slide image into an image pyramid comprising multiple resolution levels. Regions identified as cancer via a Bayesian classifier at lower resolution levels are subsequently examined in greater detail at higher resolution levels, thereby allowing for rapid and efficient analysis of large images. At each resolution level, ten image features are chosen from a pool of over 900 first-order statistical, second-order co-occurrence, and Gabor filter features using an AdaBoost ensemble method. The BBMR scheme, operating on 100 images obtained from 58 patients, yielded: 1) areas under the receiver operating characteristic curve (AUC) of 0.84, 0.83, and 0.76, respectively, at the lowest, intermediate, and highest resolution levels and 2) an eightfold savings in terms of computational time compared to running the algorithm directly at full (highest) resolution. The BBMR model outperformed (in terms of AUC): 1) individual features (no ensemble) and 2) a random forest classifier ensemble obtained by bagging multiple decision tree classifiers. The apparent drop-off in AUC at higher image resolutions is due to lack of fine detail in the expert annotation of CaP and is not an artifact of the classifier. The implicit feature selection done via the AdaBoost component of the BBMR classifier reveals that different classes and types of image features become more relevant for discriminating between CaP and benign areas at different image resolutions."}
{"_id": "8acf78df5aa283f02d3805867e1dd1c6a97f389b", "title": "Innovation and practice of continuous auditing", "text": "Article history: Received 27 August 2010 Received in revised form 23 December 2010 Accepted 6 January 2011 The traditional audit paradigm is outdated in the real time economy. Innovation of the traditional audit process is necessary to support real time assurance. Practitioners and academics are exploring continuous auditing as a potential successor to the traditional audit paradigm. Using technology and automation, continuous auditing methodology enhances the efficiency and effectiveness of the audit process to support real time assurance. This paper defines how continuous auditing methodology introduces innovation to practice in seven dimensions and proposes a four-stage paradigm to advance future research. In addition, we formulate a set of methodological propositions concerning the future of assurance for practitioners and academic researchers. \u00a9 2011 Elsevier Inc. All rights reserved."}
{"_id": "d2f1e9129030dbd1900a210bb4196112fba59103", "title": "Severity Analyses of Single-Vehicle Crashes Based on Rough Set Theory", "text": "A single-vehicle crash is a typical pattern of traffic accidents and tends to cause heavy loss. The purpose of this study is to identify the factors significantly influencing single-vehicle crash injury severity, using a data selected from Beijing city for a 4-year period. Rough set theory was applied to complete the injury severity analysis, and followed by applying cross-validation method to estimate the prediction accuracy of extraction rules. Results show that it is effective for analyzing the severity of Single-vehicle crashes with rough set theory."}
{"_id": "f693699344261d0d7b5f57b6c402fd8d26781ba5", "title": "Galvanometric optical laser beam steering system for microfactory application", "text": "This article presents a kinematic model and control of a galvanometric laser beam steering system for high precision marking, welding or soldering applications as a microfactory module. Galvo systems are capable of scanning laser beam with relatively high frequencies that makes them suitable for fast processing applications. For the sake of flexibility and ease of use 2D reference shapes to be processed are provided as CAD drawings. Drawings are parsed and interpolated to x - y reference data points on MATLAB then stored as arrays in C code. C header file is further included as reference data points to be used by the system. Theoretical kinematic model of the system is derived and model parameters are tuned for practical implementation and validated with respect to measured positions on rotation space with optical position sensor and image field with position sensitive device. Machining with material removal requires high power laser to be employed that makes position measurement on image field unfeasible. Therefore for closed loop applications optical position sensor embedded in galvo motors is used for position feedback. Since the model approved to be approximately linear in the range of interest by simulations, a PI controller is used for precise positioning of the galvo motors. Experimental results for tracking circular and rectangular shape references are proved to be precise with errors of less than 2%."}
{"_id": "479fb8640836baa92fe99de50adbe6af44bc5444", "title": "Local potassium signaling couples neuronal activity to vasodilation in the brain", "text": "The mechanisms by which active neurons, via astrocytes, rapidly signal intracerebral arterioles to dilate remain obscure. Here we show that modest elevation of extracellular potassium (K+) activated inward rectifier K+ (Kir) channels and caused membrane potential hyperpolarization in smooth muscle cells (SMCs) of intracerebral arterioles and, in cortical brain slices, induced Kir-dependent vasodilation and suppression of SMC intracellular calcium (Ca2+) oscillations. Neuronal activation induced a rapid (<2 s latency) vasodilation that was greatly reduced by Kir channel blockade and completely abrogated by concurrent cyclooxygenase inhibition. Astrocytic endfeet exhibited large-conductance, Ca2+-sensitive K+ (BK) channel currents that could be activated by neuronal stimulation. Blocking BK channels or ablating the gene encoding these channels prevented neuronally induced vasodilation and suppression of arteriolar SMC Ca2+, without affecting the astrocytic Ca2+ elevation. These results support the concept of intercellular K+ channel\u2013to\u2013K+ channel signaling, through which neuronal activity in the form of an astrocytic Ca2+ signal is decoded by astrocytic BK channels, which locally release K+ into the perivascular space to activate SMC Kir channels and cause vasodilation."}
{"_id": "0d99a8787bd3abe24c7737775da4d842bb86e4ab", "title": "Winnowing: Local Algorithms for Document Fingerprinting", "text": "Digital content is for copying: quotation, revision, plagiarism, and file sharing all create copies. Document fingerprinting is concerned with accurately identifying copying, including small partial copies, within large sets of documents.We introduce the class of local document fingerprinting algorithms, which seems to capture an essential property of any finger-printing technique guaranteed to detect copies. We prove a novel lower bound on the performance of any local algorithm. We also develop winnowing, an efficient local fingerprinting algorithm, and show that winnowing's performance is within 33% of the lower bound. Finally, we also give experimental results on Web data, and report experience with MOSS, a widely-used plagiarism detection service."}
{"_id": "310974f3c1fc87120414eceba7f07b31885f1292", "title": "Analysis of Metastability Errors in Conventional, LSB-First, and Asynchronous SAR ADCs", "text": "A practical model for characterizing comparator metastability errors in SAR ADCs is presented, and is used to analyze not only the conventional SAR but also LSB-first and asynchronous versions. This work makes three main contributions: first, it is shown that for characterizing metastability it is more reasonable to use input signals with normal or Laplace distributions. Previous work used uniformly-distributed signals in the interest of making derivations easier, but this simplifying assumption overestimated SMR by as much as 18 dB compared to the more reasonable analysis presented here. Second, this work shows that LSB-first SAR ADCs achieve SMR performance equal to or better than conventional SARs with the same metastability window, depending on bandwidth. Finally, the analysis is used to develop a framework for calculating the maximum effective sample rate for asynchronous SAR ADCs, and in doing so demonstrates that proximity detectors are not effective solutions to improving metastability performance."}
{"_id": "ec3472acc24fe5ef9eb07a31697f2cd446c8facc", "title": "PixelNet: Representation of the pixels, by the pixels, and for the pixels", "text": "We explore design principles for general pixel-level prediction problems, from low-level edge detection to midlevel surface normal estimation to high-level semantic segmentation. Convolutional predictors, such as the fullyconvolutional network (FCN), have achieved remarkable success by exploiting the spatial redundancy of neighboring pixels through convolutional processing. Though computationally efficient, we point out that such approaches are not statistically efficient during learning precisely because spatial redundancy limits the information learned from neighboring pixels. We demonstrate that stratified sampling of pixels allows one to (1) add diversity during batch updates, speeding up learning; (2) explore complex nonlinear predictors, improving accuracy; and (3) efficiently train state-of-the-art models tabula rasa (i.e., \u201cfrom scratch\u201d) for diverse pixel-labeling tasks. Our single architecture produces state-of-the-art results for semantic segmentation on PASCAL-Context dataset, surface normal estimation on NYUDv2 depth dataset, and edge detection on BSDS."}
{"_id": "48cfc4b36202f2cf10a70edac9490c5234d603b3", "title": "Culture Wires the Brain: A Cognitive Neuroscience Perspective.", "text": "There is clear evidence that sustained experiences may affect both brain structure and function. Thus, it is quite reasonable to posit that sustained exposure to a set of cultural experiences and behavioral practices will affect neural structure and function. The burgeoning field of cultural psychology has often demonstrated the subtle differences in the way individuals process information-differences that appear to be a product of cultural experiences. We review evidence that the collectivistic and individualistic biases of East Asian and Western cultures, respectively, affect neural structure and function. We conclude that there is limited evidence that cultural experiences affect brain structure and considerably more evidence that neural function is affected by culture, particularly activations in ventral visual cortex-areas associated with perceptual processing."}
{"_id": "7736dde8b37604e60c03fbf8bb29b871a7e43860", "title": "Passive Self-Interference Suppression for Full-Duplex Infrastructure Nodes", "text": "Recent research results have demonstrated the feasibility of full-duplex wireless communication for short-range links. Although the focus of the previous works has been active cancellation of the self-interference signal, a majority of the overall self-interference suppression is often due to passive suppression, i.e., isolation of the transmit and receive antennas. We present a measurement-based study of the capabilities and limitations of three key mechanisms for passive self-interference suppression: directional isolation, absorptive shielding, and cross-polarization. The study demonstrates that more than 70 dB of passive suppression can be achieved in certain environments, but also establishes two results on the limitations of passive suppression: (1) environmental reflections limit the amount of passive suppression that can be achieved, and (2) passive suppression, in general, increases the frequency selectivity of the residual self-interference signal. These results suggest two design implications: (1) deployments of full-duplex infrastructure nodes should minimize near-antenna reflectors, and (2) active cancellation in concatenation with passive suppression should employ higher-order filters or per-subcarrier cancellation."}
{"_id": "8437b9eb07166ccf3b1c8c32e0783dae768a0b51", "title": "System-level performance evaluation of downlink non-orthogonal multiple access (NOMA)", "text": "As a promising downlink multiple access scheme for further LTE enhancement and future radio access (FRA), this paper investigates the system-level performance of non-orthogonal multiple access (NOMA) with a successive interference canceller (SIC) on the receiver side. The goal is to clarify the potential gains of NOMA over orthogonal multiple access (OMA) such as OFDMA, taking into account key link adaptation functionalities of the LTE radio interface such as adaptive modulation and coding (AMC), hybrid automatic repeat request (HARQ), time/frequency-domain scheduling, and outer loop link adaptation (OLLA), in addition to NOMA specific functionalities such as dynamic multi-user power allocation. Based on computer simulations, we show under multiple configurations that the system-level performance achieved by NOMA is superior to that for OMA."}
{"_id": "863f2291f14a8c8b12a2c98f0c47e26a65949377", "title": "Non-Orthogonal Multiple Access With Cooperative Full-Duplex Relaying", "text": "We propose a full-duplex (FD) cooperative non-orthogonal multiple access (NOMA) system with dual users, where a dedicated FD relay assists the information transmission to the user with weak channel condition. Under the realistic assumption of imperfect self-interference cancellation, the achievable outage probability of both users and ergodic sum capacity are investigated, and exact analytical expressions are derived. Simulation results demonstrate that the proposed FD cooperative NOMA system attains better performance than the half-duplex cooperative NOMA system in the moderate signal-to-noise ratio regime."}
{"_id": "208125449f697c46d02a98eceb18b8c4622384c5", "title": "An Analysis of Several Heuristics for the Traveling Salesman Problem", "text": "Several polynomial time algorithms finding \"good,\" but not necessarily optimal, tours for the traveling salesman problem are considered. We measure the closeness of a tour by the ratio of the obtained tour length to the minimal tour length. For the nearest neighbor method, we show the ratio is bounded above by a logarithmic function of the number of nodes. We also provide a logarithmic lower bound on the worst case. A class of approximation methods we call insertion methods are studied, and these are also shown to have a logarithmic upper bound. For two specific insertion methods, which we call nearest insertion and cheapest insertion, the ratio is shown to have a constant upper bound of 2, and examples are provided that come arbitrarily close to this upper bound. It is also shown that for any n => 8, there are traveling salesman problems with n nodes having tours which cannot be improved by making n/4 edge changes, but for which the ratio is 2(1l/n). Key words, traveling salesman problem, approximation algorithm, k-optimal, minimal spanning tree, triangle inequality"}
{"_id": "54e4435e641902c2632478a935245528cb039edf", "title": "A survey of vertical handover decision algorithms in Fourth Generation heterogeneous wireless networks", "text": "1389-1286/$ see front matter 2010 Elsevier B.V doi:10.1016/j.comnet.2010.02.006 * Corresponding author. Tel.: +61 3 99053503; fa E-mail address: ASekerci@ieee.org (Y. Ahmet S ek Vertical handover decision (VHD) algorithms are essential components of the architecture of the forthcoming Fourth Generation (4G) heterogeneous wireless networks. These algorithms need to be designed to provide the required Quality of Service (QoS) to a wide range of applications while allowing seamless roaming among a multitude of access network technologies. In this paper, we present a comprehensive survey of the VHD algorithms designed to satisfy these requirements. To offer a systematic comparison, we categorize the algorithms into four groups based on the main handover decision criterion used. Also, to evaluate tradeoffs between their complexity of implementation and efficiency, we discuss three representative VHD algorithms in each group. 2010 Elsevier B.V. All rights reserved."}
{"_id": "35c53e97d15aa2bf0debd17c9fb07c1b7de3ed67", "title": "Animal Navigation: Path Integration, Visual Landmarks and Cognitive Maps", "text": "Animals typically have several navigational strategies available to them. Interactions between these strategies can reduce navigational errors and may lead to the emergence of new capacities."}
{"_id": "d508e3b2a0ab2b1af6582151a4c8e786422026fb", "title": "A Survey of Model Comparison Approaches and Applications", "text": "This survey paper presents the current state of model comparison as it applies to Model-Driven Engineering. We look specifically at how model matching is accomplished, the application of the approaches, and the types of models that approaches are intended to work with. Our paper also indicates future trends and directions. We find that many of the latest model comparison techniques are geared towards facilitating arbitrary meta models and use similarity-based matching. Thus far, model versioning is the most prevalent application of model comparison. Recently, however, work on comparison for versioning has begun to stagnate, giving way to other applications. Lastly, there is wide variance among the tools in the amount of user effort required to perform model comparison, as some require more effort to facilitate more generality and expressive power."}
{"_id": "becfc0b843ec2610be2a820b909f888e7bbdc451", "title": "A topological hierarchy for functions on triangulated surfaces", "text": "We combine topological and geometric methods to construct a multiresolution representation for a function over a two-dimensional domain. In a preprocessing stage, we create the Morse-Smale complex of the function and progressively simplify its topology by cancelling pairs of critical points. Based on a simple notion of dependency among these cancellations, we construct a hierarchical data structure supporting traversal and reconstruction operations similarly to traditional geometry-based representations. We use this data structure to extract topologically valid approximations that satisfy error bounds provided at runtime."}
{"_id": "14a6948d9ce94b06cfe3b9883a68961a5e7da49a", "title": "CampusSense \u2014 A smart vehicle parking monitoring and management system using ANPR cameras and android phones", "text": "Vehicle parking monitoring and management has become a big challenge for educational institutions with increasing enrollments, high percentage of vehicle ownership and decreasing parking supply which in result triggering blockage of vehicle, congestion, wastage of time and money. In university campuses particularly in Kingdom of Saudi Arabia, vehicle parking monitoring and management problem is getting worse and more frustrating due to the fact that majority of students, faculty and staff members own cars and drive through them to the university campuses. These common problems include finding out people (evidence) who are responsible for the damages (hitting, scraping, scratching and dents) to other cars and the blockage of car due to wrong car parking which takes much time to locate the owner of the car. Moreover, locating or forgetting their car park location another difficulty that is often faced by the students, faculty and staff members. The existing cameras located at the parking lots are only for video surveillance and cannot help in such situations as there is a lack of proper vehicle parking monitoring and management system. To cope with above mentioned problems and to ensure a better parking experience by accommodating increasing number of vehicles in a proper convenient manner, we propose a smart vehicle parking monitoring and management system called CampusSense. In CampusSense, Automatic Number Plate Recognition (ANPR) cameras and android based mobile application is developed to efficiently monitor, manage and protect the parking facilities at university campuses. Parking problems around the university campus faced by the students, faculty and staff members are analyzed by conducting a survey."}
{"_id": "c94ef9cf287122b31eb380b3fc34013f7c56a0a8", "title": "Angry, disgusted, or afraid? Studies on the malleability of emotion perception.", "text": "Current theories of emotion perception posit that basic facial expressions signal categorically discrete emotions or affective dimensions of valence and arousal. In both cases, the information is thought to be directly \"read out\" from the face in a way that is largely immune to context. In contrast, the three studies reported here demonstrated that identical facial configurations convey strikingly different emotions and dimensional values depending on the affective context in which they are embedded. This effect is modulated by the similarity between the target facial expression and the facial expression typically associated with the context. Moreover, by monitoring eye movements, we demonstrated that characteristic fixation patterns previously thought to be determined solely by the facial expression are systematically modulated by emotional context already at very early stages of visual processing, even by the first time the face is fixated. Our results indicate that the perception of basic facial expressions is not context invariant and can be categorically altered by context at early perceptual levels."}
{"_id": "ba616035a6ea91b52a3c78e6779f5c055268c153", "title": "Simulator sickness in virtual display gaming: a comparison of stereoscopic and non-stereoscopic situations", "text": "In this paper we compare simulator sickness symptoms produced by racing game in three different conditions. In the first experiment the participants played the Need for Speed car racing game with an ordinary 17\" display and in the second and third experiments they used a head-worn virtual display for the game playing. The difference between experiments 2 and 3 was in the use of stereoscopy, as in the third experiment the car racing game was seen in stereoscopic three-dimensions. Our results indicated that there were no significant differences in sickness symptoms when we compared the ordinary display and the virtual display in non-stereoscopic mode. In stereoscopic condition the eye strain and disorientation symptoms were significantly elevated compared to the ordinary display. We conclude that using a virtual display as an accessory in a mobile device is a viable alternative, because the non-stereoscopic virtual display did not produce significantly more sickness symptoms compared to ordinary game playing."}
{"_id": "12960755118168d22952df68291d19f30df1dead", "title": "Assessment of lexical semantic judgment abilities in alcohol-dependent subjects: An fMRI study", "text": "Neuropsychological studies have shown that alcohol dependence is associated with neurocognitive deficits in tasks requiring memory, perceptual motor skills, abstraction and problem solving, whereas language skills are relatively spared in alcoholics despite structural abnormalities in the language-related brain regions. To investigate the preserved mechanisms of language processing in alcohol-dependents, functional brain imaging was undertaken in healthy controls (n=18) and alcohol-dependents (n=16) while completing a lexical semantic judgment task in a 3 T MR scanner. Behavioural data indicated that alcohol-dependents took more time than controls for performing the task but there was no significant difference in their response accuracy. fMRI data analysis revealed that while performing the task, the alcoholics showed enhanced activations in left supramarginal gyrus, precuneus bilaterally, left angular gyrus, and left middle temporal gyrus as compared to control subjects. The extensive activations observed in alcoholics as compared to controls suggest that alcoholics recruit additional brain areas to meet the behavioural demands for equivalent task performance. The results are consistent with previous fMRI studies suggesting compensatory mechanisms for the execution of task for showing an equivalent performance or decreased neural efficiency of relevant brain networks. However, on direct comparison of the two groups, the results did not survive correction for multiple comparisons; therefore, the present findings need further exploration."}
{"_id": "2de46b4679fa5ae706232060d4f9ecab6b9aafbd", "title": "Markov Random Field model for single image defogging", "text": "Fog reduces contrast and thus the visibility of vehicles and obstacles for drivers. Each year, this causes traffic accidents. Fog is caused by a high concentration of very fine water droplets in the air. When light hits these droplets, it is scattered and this results in a dense white background, called the atmospheric veil. As pointed in [1], Advanced Driver Assistance Systems (ADAS) based on the display of defogged images from a camera may help the driver by improving objects visibility in the image and thus may lead to a decrease of fatality and injury rates. In the last few years, the problem of single image defogging has attracted attention in the image processing community. Being an ill-posed problem, several methods have been proposed. However, a few among of these methods are dedicated to the processing of road images. One of the first exception is the method in [2], [1] where a planar constraint is introduced to improve the restoration of the road area, assuming an approximately flat road. The single image defogging problem being ill-posed, the choice of the Bayesian approach seems adequate to set this problem as an inference problem. A first Markov Random Field (MRF) approach of the problem has been proposed recently in [3]. However, this method is not dedicated to road images. In this paper, we propose a novel MRF model of the single image defogging problem which applies to all kinds of images but can also easily be refined to obtain better results on road images using the planar constraint. A comparative study and quantitative evaluation with several state-of-the-art algorithms is presented. This evaluation demonstrates that the proposed MRF model allows to derive a new algorithm which produces better quality results, in particular in case of a noisy input image."}
{"_id": "ab734bac3994b00bf97ce22b9abc881ee8c12918", "title": "Log-Euclidean Metric Learning on Symmetric Positive Definite Manifold with Application to Image Set Classification", "text": "The manifold of Symmetric Positive Definite (SPD) matrices has been successfully used for data representation in image set classification. By endowing the SPD manifold with LogEuclidean Metric, existing methods typically work on vector-forms of SPD matrix logarithms. This however not only inevitably distorts the geometrical structure of the space of SPD matrix logarithms but also brings low efficiency especially when the dimensionality of SPD matrix is high. To overcome this limitation, we propose a novel metric learning approach to work directly on logarithms of SPD matrices. Specifically, our method aims to learn a tangent map that can directly transform the matrix logarithms from the original tangent space to a new tangent space of more discriminability. Under the tangent map framework, the novel metric learning can then be formulated as an optimization problem of seeking a Mahalanobis-like matrix, which can take the advantage of traditional metric learning techniques. Extensive evaluations on several image set classification tasks demonstrate the effectiveness of our proposed metric learning method."}
{"_id": "762182cd3e90179609dd0adcf9d4f1d84db14526", "title": "Classifying streaming of Twitter data based on sentiment analysis using hybridization", "text": "Twitter is a social media that developed rapidly in today\u2019s modern world. As millions of Twitter messages are sent day by day, the value and importance of developing a new technique for detecting spammers become significant. Moreover, legitimate users are affected by means of spams in the form of unwanted URLs, irrelevant messages, etc. Another hot topic of research is sentiment analysis that is based on each tweet sent by the user and opinion mining of the customer reviews. Most commonly natural language processing is used for sentiment analysis. The text is collected from user\u2019s tweets by opinion mining and automatic sentiment analysis that are oriented with ternary classifications, such as \u201cpositive,\u201d \u201cneutral,\u201d and \u201cnegative.\u201d Due to limited size, unstructured nature, misspells, slangs, and abbreviations, it is more challenging for researchers to find sentiments for Twitter data. In this paper, we collected 600 million public tweets using URL-based security tool and feature generation is applied for sentiment analysis. The ternary classification is processed based on preprocessing technique, and the results of tweets sent by the users are obtained. We use a hybridization technique using two optimization algorithms and one machine learning classifier, namely particle swarm optimization and genetic algorithm and decision tree for classification accuracy by sentiment analysis. The results are compared with previous works, and our proposed method shows a better analysis than that of other classifiers."}
{"_id": "db5fad57f5b1ebedd519d4618ecf351ccd001eb4", "title": "Semi-autonomous flying robot for physical interaction with environment", "text": "This contribution presents the first results of the development of an unmanned aerial vehicle (UAV) which is capable of applying force to a wall while maintaining flight stability. This is a novel idea since UAVs are used so far only for tasks without physical contact to the surrounding objects. The basis for the work presented is a quadrotor system which is stabilized with an inertial measurement unit. As a new approach an additional actuator was added to generate forces in physical contact while the UAV stays horizontal. A control architecture based on ultrasonic distance sensors and a CMOS-camera is proposed. The performance of the system was proved by several flight tests. Potential applications of the system can be physical tasks at high places like cleaning windows or walls as well as rescue or maintenance tasks."}
{"_id": "4f5f8ffa466c176de35e5688657681eb534350b6", "title": "Bi-Modal DRAM Cache: Improving Hit Rate, Hit Latency and Bandwidth", "text": "In this paper, we present Bi-Modal Cache - a flexible stacked DRAM cache organization which simultaneously achieves several objectives: (i) improved cache hit ratio, (ii) moving the tag storage overhead to DRAM, (iii) lower cache hit latency than tags-in-SRAM, and (iv) reduction in off-chip bandwidth wastage. The Bi-Modal Cache addresses the miss rate versus off-chip bandwidth dilemma by organizing the data in a bi-modal fashion - blocks with high spatial locality are organized as large blocks and those with little spatial locality as small blocks. By adaptively selecting the right granularity of storage for individual blocks at run-time, the proposed DRAM cache organization is able to make judicious use of the available DRAM cache capacity as well as reduce the off-chip memory bandwidth consumption. The Bi-Modal Cache improves cache hit latency despite moving the metadata to DRAM by means of a small SRAM based Way Locator. Further by leveraging the tremendous internal bandwidth and capacity that stacked DRAM organizations provide, the Bi-Modal Cache enables efficient concurrent accesses to tags and data to reduce hit time. Through detailed simulations, we demonstrate that the Bi-Modal Cache achieves overall performance improvement (in terms of Average Normalized Turnaround Time (ANTT)) of 10.8%, 13.8% and 14.0% in 4-core, 8-core and 16-core workloads respectively."}
{"_id": "be2bf691bafe37e64c2f2bfa3d9d8a8db82c3bc8", "title": "Intangible benefits valuation in ERP projects", "text": "The development, implementation and ownership of information systems, especially large-scale systems such as enterprise resource planning (ERP), has become progressively longer in duration and more cost intensive. As a result, IS managers are being required to justify projects financially based on their return. Historically, information systems have been difficult to quantify in monetary terms because of the intangible nature of many of the derived benefits, e.g. improved customer service. Using the case study methodology, this paper examines an attempt by a large computer manufacturer to incorporate intangibles into traditional cost\u2013benefit analysis in an ERP project. The paper reviews the importance of intangibles, lists intangible benefits that are important in ERP projects and demonstrates the use of a scheme through which they can be incorporated into traditional evaluation techniques."}
{"_id": "420269bc0321288460b978072cfe901874003777", "title": "Modeling of electric vehicle loads for power flow analysis based on PSAT", "text": "This paper proposes the modeling of electric vehicle(EV) loads for power flow analysis based on User Define Models (UDMs) in power system analysis toolbox (PSAT) programming, According to the development of EV technology especially motor controller and battery, the EV has totally designed and made for using in city area as small city cars. To study the performance of EV, this aims to present the model of EV loads for using to solve power flow analysis in the real power system. EV load is modeled as the function box equivalent on PSAT that can connect to the single line diagram of power system model. The IEEE 14 bus system is selected for simulating the performance of EV and charging station which compared between PQ load base case and EV load. The simulation result showed that the EV load was installed at bus No.14. The rated power of EV load was increased from 5.0 p.u. to 79 p.u. This proposed EV load model can be used to solve the power flow analysis with continuation power flow (CPF) method. The EV load model is directly effected to voltage stability margin when the EV load increased. This study can be verified that proposed EV load can use to study the EV load in the future works."}
{"_id": "6aa0198bfae0cbadc1f076f689fea7bbfe4b4db3", "title": "Modular equalization architecture using inter-module and switchless intra-module equalizer for energy storage system", "text": "This paper proposes a novel modular equalization architecture using inter-module and intra-module equalizers for energy storage systems comprising multiple cells/modules connected in series. In the proposed equalization architecture, cells in a module is equalized by a series-resonant voltage multiplier (SRVM)-based intra-module equalizer while module voltages are unified by a switched capacitor converter (SCC)-based inter-module equalizer. A square wave voltage generated at a switching node of the SCC-based inter-module equalizer is utilized to drive the SRVM-based intra-module equalizer so that the SRVM-based intra-module equalizer can be totally switchless topology. The required switch count of the proposed equalization architecture can be dramatically reduced compared to conventional architectures, allowing simplified circuitry. The proposed equalization architecture is modular and allows different equalizer topologies to be used for inter- and intra-module equalizers, offering good modularity and design flexibility. The derivation procedure of the proposed equalization architecture is explained, followed by separate analyses for the SCC-based inter-module equalizer and SRVM-based intra-module equalizer. An experimental equalization test for three supercapacitor (SC) modules, each consisting of four cells in series, was performed from an initially voltage-imbalanced condition. Module and cell voltages were equalized at different rates, and the equalization performance of the proposed modular equalization architecture was successfully demonstrated."}
{"_id": "34e86b693642b41e60e41f7a53bd9821764bda7c", "title": "Nonnegative Matrix Factorization with the Itakura-Saito Divergence: With Application to Music Analysis", "text": "This letter presents theoretical, algorithmic, and experimental results about nonnegative matrix factorization (NMF) with the Itakura-Saito (IS) divergence. We describe how IS-NMF is underlaid by a well-defined statistical model of superimposed gaussian components and is equivalent to maximum likelihood estimation of variance parameters. This setting can accommodate regularization constraints on the factors through Bayesian priors. In particular, inverse-gamma and gamma Markov chain priors are considered in this work. Estimation can be carried out using a space-alternating generalized expectation-maximization (SAGE) algorithm; this leads to a novel type of NMF algorithm, whose convergence to a stationary point of the IS cost function is guaranteed. We also discuss the links between the IS divergence and other cost functions used in NMF, in particular, the Euclidean distance and the generalized Kullback-Leibler (KL) divergence. As such, we describe how IS-NMF can also be performed using a gradient multiplicative algorithm (a standard algorithm structure in NMF) whose convergence is observed in practice, though not proven. Finally, we report a furnished experimental comparative study of Euclidean-NMF, KL-NMF, and IS-NMF algorithms applied to the power spectrogram of a short piano sequence recorded in real conditions, with various initializations and model orders. Then we show how IS-NMF can successfully be employed for denoising and upmix (mono to stereo conversion) of an original piece of early jazz music. These experiments indicate that IS-NMF correctly captures the semantics of audio and is better suited to the representation of music signals than NMF with the usual Euclidean and KL costs."}
{"_id": "d3ef8bc5abcaff84ad72a6b0863e4fcdd28f6a6a", "title": "The Life Satisfaction Advantage of Being Married and Gender Specialization", "text": "This investigation examined whether the life satisfaction advantage of married over unmarried persons decreased over the last three decades, and whether the changes in the contextual gender specialization explained this trend. The author used representative data from the World Values Survey\u2013European Values Study (WVS\u2013EVS)-integrated data set for 87 countries (N = 292,525) covering a period of 29 years. Results showed that the life satisfaction advantage of being married decreased among men but not among women. The analysis did not support the hypothesis that contextual gender specialization shaped the observed trend. Only in developed countries the declining contextual specialization correlated with smaller life satisfaction advantage of being married. This evidence suggests that the advantages of marriage are greater under conditions that support freedom of choice rather than economic necessity. (130 words)"}
{"_id": "fa181cc241caf85fd1de7faec5080eee8b0a6a49", "title": "UFMC system performance analysis for discrete narrow-band private networks", "text": "In this paper, the performance of universal-filtered multi-carrier (UFMC) system is analyzed. Specifically, the UFMC signals are generated with Dolph-Chebyshev, Hamming and Hanning filtering. The characteristics of UFMC signals are also compared with OFDM signal, in both software and hardware-in-the-loop simulations. Experimental results show that UFMC signals have better performance than OFDM signal, by means of side-lobe attenuation, bit-error-ratio (BER) and Error Vector Magnitude (EVM). The results indicate that the UFMC signal is more suitable than OFDM signal for discrete narrow-band private networks. The results can provide useful information on the design of multi-carrier radio systems."}
{"_id": "9171dcbea95321659b2a11a70bfdfe0c33ed081d", "title": "Following My Head and My Heart: Integrating Preschoolers' Empathy, Theory of Mind, and Moral Judgments.", "text": "Associations among hypothetical, prototypic moral, and conventional judgments; theory of mind (ToM); empathy; and personal distress were examined in 108 socioeconomically diverse preschoolers (Mage \u00a0=\u00a042.94\u00a0months, SD\u00a0=\u00a01.42). Repeated measures analysis of covariance with empathy, false beliefs, and their interaction as covariates indicated that empathy was significantly associated with judgments of greater moral but not conventional transgression severity, particularly for psychological harm, and with deserved punishment for unfairness. False beliefs were associated with (combined) moral criterion judgments of rule and authority independence and inalterability. Empathy also was positively associated with criterion judgments but only for children low in ToM. Personal distress was unrelated to judgments. Results demonstrate the importance of both affective and cognitive processes in preschoolers' moral judgments."}
{"_id": "eedff5b2de79b724aac852e8cad08f8e90d980d5", "title": "A 5-MHz 91% Peak-Power-Efficiency Buck Regulator With Auto-Selectable Peak- and Valley-Current Control", "text": "This paper presents a multi-MHz buck regulator for portable applications using an auto-selectable peak- and valley-current control (ASPVCC) scheme. The proposed ASPVCC scheme can enable the current-mode buck regulator to reduce the settling-time requirement of the current sensing by two times. In addition, the dynamically biased shunt feedback technique is employed to improve the sensing speed and the sensing accuracy of both the peak and valley current sensors. With both ASPVCC scheme and advanced current sensors, the buck regulator can thus operate at high switching frequencies with a wide range of duty ratios for reducing the required off-chip inductance. The proposed current-mode buck regulator was fabricated in a standard 0.35-\u03bcm CMOS process and occupies a small chip area of 0.54 mm2. Measurement results show that the buck regulator can deliver a maximum output current of 500 mA, operate at 5 MHz with a wide duty-ratio range of about 0.6, use a small-value off-chip inductor of 1 \u03bcH, and achieve the peak power efficiency of 91%."}
{"_id": "4fba12c75d28e46e3ea93102499fb0c27d360e17", "title": "Low-Quality Product Review Detection in Opinion Summarization", "text": "Product reviews posted at online shopping sites vary greatly in quality. This paper addresses the problem of detecting lowquality product reviews. Three types of biases in the existing evaluation standard of product reviews are discovered. To assess the quality of product reviews, a set of specifications for judging the quality of reviews is first defined. A classificationbased approach is proposed to detect the low-quality reviews. We apply the proposed approach to enhance opinion summarization in a two-stage framework. Experimental results show that the proposed approach effectively (1) discriminates lowquality reviews from high-quality ones and (2) enhances the task of opinion summarization by detecting and filtering lowquality reviews."}
{"_id": "14a80a61c5d2dbc6b964734aa626b5d0f99d4b2f", "title": "Automatically Producing Plot Unit Representations for Narrative Text", "text": "In the 1980s, plot units were proposed as a conceptual knowledge structure for representing and summarizing narrative stories. Our research explores whether current NLP technology can be used to automatically produce plot unit representations for narrative text. We create a system called AESOP that exploits a variety of existing resources to identify affect states and applies \u201cprojection rules\u201d to map the affect states onto the characters in a story. We also use corpus-based techniques to generate a new type of affect knowledge base: verbs that impart positive or negative states onto their patients (e.g., being eaten is an undesirable state, but being fed is a desirable state). We harvest these \u201cpatient polarity verbs\u201d from a Web corpus using two techniques: co-occurrence with Evil/Kind Agent patterns, and bootstrapping over conjunctions of verbs. We evaluate the plot unit representations produced by our system on a small collection of Aesop\u2019s fables."}
{"_id": "3581680a0cf90c0e5ec5f568020216395f67f415", "title": "Plot Units and Narrative Summarization", "text": "In order to summarize a story, i t is necessary to access a high level analysis of the story that highlights its central concepts. A technique of memory representation based on plot units appears to provide a rich foundation for such an analysis. Plot units are conceptual structures that overlap with each other when a narrative is cohesive. When overlapping intersections between plot units are interpreted as arcs in a graph of plot units, the resulting graph encodes the plot of the story. Structural features of the graph then reveal which concepts are central to the story, and which concepts are peripheral. Plot unit analysis is currently being investigated as a processing strategy for narrative summarization by both computer simulotion ond psychological experiments."}
{"_id": "407debfdc988b0bdbf44a930e2d3a1c3354781b0", "title": "Building a Bank of Semantically Encoded Narratives", "text": "We propose a methodology for a novel type of discourse annotation whose model is tuned to the analysis of a text as narrative. This is intended to be the basis of a \u201cstory bank\u201d resource that would facilitate the automatic analysis of narrative structure and content. The methodology calls for annotators to construct propositions that approximate a reference text, by selecting predicates and arguments from among controlled vocabularies drawn from resources such as WordNet and VerbNet. Annotators then integrate the propositions into a conceptual graph that maps out the entire discourse; the edges represent temporal, causal and other relationships at the level of story content. Because annotators must identify the recurring objects and themes that appear in the text, they also perform coreference resolution and word sense disambiguation as they encode propositions. We describe a collection experiment and a method for determining inter-annotator agreement when multiple annotators encode the same short story. Finally, we describe ongoing work toward extending the method to integrate the annotator\u2019s interpretations of character agency (the goals, plans and beliefs that are relevant, yet not explictly stated in the text)."}
{"_id": "45fcbb3149fdb01a130f5f013a4713328ee3e3c7", "title": "Modeling narrative discourse", "text": "Modeling Narrative Discourse"}
{"_id": "6d185f5cfc7e1df9105459a68e96f82fbb047f52", "title": "Constructing inferences during narrative text comprehension.", "text": "The authors describe a constructionist theory that accounts for the knowledge-based inferences that are constructed when readers comprehend narrative text. Readers potentially generate a rich variety of inferences when they construct a referential situation model of what the text is about. The proposed constructionist theory specifies that some, but not all, of this information is constructed under most conditions of comprehension. The distinctive assumptions of the constructionist theory embrace a principle of search (or effort) after meaning. According to this principle, readers attempt to construct a meaning representation that addresses the reader's goals, that is coherent at both local and global levels, and that explains why actions, events, and states are mentioned in the text. This study reviews empirical evidence that addresses this theory and contrasts it with alternative theoretical frameworks."}
{"_id": "7f7e5559c6d25b8d42c61d83988fded70e3b10d1", "title": "Low-Power, Adaptive Neuromorphic Systems: Recent Progress and Future Directions", "text": "In this paper, we present a survey of recent works in developing neuromorphic or neuro-inspired hardware systems. In particular, we focus on those systems which can either learn from data in an unsupervised or online supervised manner. We present algorithms and architectures developed specially to support on-chip learning. Emphasis is placed on hardware friendly modifications of standard algorithms, such as backpropagation, as well as novel algorithms, such as structural plasticity, developed specially for low-resolution synapses. We cover works related to both spike-based and more traditional non-spike-based algorithms. This is followed by developments in novel devices, such as floating-gate MOS, memristors, and spintronic devices. CMOS circuit innovations for on-chip learning and CMOS interface circuits for post-CMOS devices, such as memristors, are presented. Common architectures, such as crossbar or island style arrays, are discussed, along with their relative merits and demerits. Finally, we present some possible applications of neuromorphic hardware, such as brain\u2013machine interfaces, robotics, etc., and identify future research trends in the field."}
{"_id": "7abfc23db18808f96b2d5f471194443ecd58a2ce", "title": "Health risks in areas close to urban solid waste landfill sites.", "text": "OBJECTIVE\nTo evaluate the association between living close to solid waste landfill sites and occurrences of cancer and congenital malformations among populations in their vicinity.\n\n\nMETHODS\nDeaths among people living in the municipality of S\u00e3o Paulo, Southeastern Brazil, between 1998 and 2002 were selected and geocoded, according to selected causes. Over the period evaluated, there were 351 deaths due to liver cancer, 160 due to bladder cancer and 224 due to leukemia, among adults, 25 due to childhood leukemia and 299 due to congenital malformation, in areas close to landfill sites. Buffer zones of radius 2 km around the 15 sites delimited the areas exposed. Standardized mortality ratios for each outcome were analyzed in Bayesian spatial models.\n\n\nRESULTS\nIn a general manner, the highest values for the standardized mortality ratios were found in more central areas of the municipality, while the landfill sites were located in more peripheral areas. The standardized mortality ratios did not indicate any excess risk for people living in areas close to solid waste landfill sites in the municipality of S\u00e3o Paulo. For landfill sites in operation, there was a greater risk of bladder and liver cancer, and death due to congenital malformation, but without statistical significance.\n\n\nCONCLUSIONS\nNo increase in the risk of cancer or congenital malformations was found in areas in the vicinity of urban waste dumps in the municipality of S\u00e3o Paulo. The weak associations and the imprecision of the estimates obtained did not allow any causal relationship to be established."}
{"_id": "09ff457e09ae277e52441e228c0ed7a8921d67a7", "title": "PEMOGEN: Automatic adaptive performance modeling during program runtime", "text": "Traditional means of gathering performance data are tracing, which is limited by the available storage, and profiling, which has limited accuracy. Performance modeling is often used to interpret the tracing data and generate performance predictions. We aim to complement the traditional data collection mechanisms with online performance modeling, a method that generates performance models while the application is running. This allows us to greatly reduce the storage overhead while still producing accurate predictions. We present PEMOGEN, our compilation and modeling framework that automatically instruments applications to generate performance models during program execution. We demonstrate the ability of PEMOGEN to both reduce storage cost and improve the prediction accuracy compared to traditional techniques such as least squares fitting. With our tool, we automatically detect 3,370 kernels from fifteen NAS and Mantevo applications and model their execution time with a median coefficient of variation R2 of 0.81. These automatically generated performance models can be used to quickly assess the scaling and potential bottlenecks with regards to any input parameter and the number of processes of a parallel application."}
{"_id": "57d5911b45d05f7c1f59fcd11a9382ee9b816bb7", "title": "FAST CALCULATION OF HARALICK TEXTURE FEATURES", "text": "It is our aim in this research to optimize the numerical computation of the Haralick texture features [1] that consists of two steps. Haralick texture features are used as a primary component to discern between different protein structures in microscopic bio-images. The first of these two parts is the construction of the co-occurrence matrix. Upon completion of this implementation, we will attempt to optimize the code using a novel recursive blocking algorithm. We will also use standard practices of optimizing code such as scalar replacement and unrolling. The second part, feature calculation, will be optimized by putting to use information about the data that we will be working with and eliminating any redundancies in the code. We will also apply the practices of unrolling and scalar replacement. With these optimizations in place, we will show that we reduce the runtime by a20% in the co-occurrence construction phase and by a factor of20 in the feature calculation phase of computation."}
{"_id": "3863fd3fb1764137128166cc2a2e2a2222b652c0", "title": "Semantic Wordification of Document Collections", "text": "Word clouds have become one of the most widely accepted visual resources for document analysis and visualization, motivating the development of several methods for building layouts of keywords extracted from textual data. Existing methods are effective to demonstrate content, but are not capable of preserving semantic relationships among keywords while still linking the word cloud to the underlying document groups that generated them. Such representation is highly desirable for exploratory analysis of document collections. In this paper we present a novel approach to build document clouds, named ProjCloud that aim at solving both semantical layouts and linking with document sets. ProjCloud generates a semantically consistent layout from a set of documents. Through a multidimensional projection, it is possible to visualize the neighborhood relationship between highly related documents and their corresponding word clouds simultaneously. Additionally, we propose a new algorithm for building word clouds inside polygons, which employs spectral sorting to maintain the semantic relationship among words. The effectiveness and flexibility of our methodology is confirmed when comparisons are made to existing methods. The technique automatically constructs projection based layouts the user may choose to examine in the form of the point clouds or corresponding word clouds, allowing a high degree of control over the exploratory process."}
{"_id": "645d90ede11053f795ad2e6c142d493a5a793303", "title": "Holonomic Control of a robot with an omni-directional drive .", "text": "This paper shows how to control a robot with omnidirectional wheels, using as example robots with four motors, and generalizing to n motors. More than three wheels provide redundancy: many combinations of motors speeds can provide the same Euclidean movement. Since the system is over-determined, we show how to compute a set of consistent and optimal motor forces and speeds using the pseudoinverse of coupling matrices. This approach allows us also to perform a consistency check to determine whether a wheel is slipping on the floor or not. We show that it is possible to avoid wheel slippage by driving the robot with a motor torque under a certain threshold or handle it and make high accelerations possible."}
{"_id": "eddb1a126eafecad2cead01c6c3bb4b88120d78a", "title": "Applications of a Graph Theoretic Based Clustering Framework in Computer Vision and Pattern Recognition", "text": "RECENTLY , several clustering algorithms have been used to solve variety of problems from different discipline. This dissertation aims to address different challenging tasks in computer vision and pattern recognition by casting the problems as a clustering problem. We proposed novel approaches to solve multi-target tracking, visual geo-localization and outlier detection problems using a unified underlining clustering framework, i.e., dominant set clustering and its extensions, and presented a superior result over several state-of-the-art approaches. Firstly, this dissertation will present a new framework for multi-target tracking in a single camera. We proposed a novel data association technique using Dominant set clustering (DCS) framework. We formulate the tracking task as finding dominant sets on the constructed undirected edge weighted graph. Unlike most techniques which are limited in temporal locality (i.e. few frames are considered), we utilized a pairwise relationships (in appearance and position) between different detections across the whole temporal span of the video for data association in a global manner. Meanwhile, temporal sliding window technique is utilized to find tracklets and perform further merging on them. Our robust tracklet merging step renders our tracker to long term occlusions with more robustness. DSC leads us to a more accurate approach to multi-object tracking by considering all the pairwise relationships in a batch of frames; however, it has some limitations. Firstly, it finds target trajectories(clusters) one-by-one (peel off strategy), causing change in the scale of the problem. Secondly, the algorithm used to enumerate dominant sets, i.e., replicator dynamics, have quadratic computational complexity, which makes it impractical on larger graphs. To address these problems and extend tracking problem to multiple non-overlapping cameras, we proposed a novel and unified three-layer hierarchical approach. Given a video and a set of detections (obtained by any person detector), we first solve withincamera tracking employing the first two layers of our framework and, then, in the third layer, we solve across-camera tracking by merging tracks of the same person in all cameras in a simultaneous fashion. To best serve our purpose, a constrained dominant set clustering (CDSC) technique, a parametrized version of standard quadratic optimization, is employed to solve both tracking tasks. The tracking problem is caste as"}
{"_id": "47bddba5515dfc5acc2b18f4b19b269dba589289", "title": "An Empirical Study on Using the National Vulnerability Database to Predict Software Vulnerabilities", "text": "Software vulnerabilities represent a major cause of cybersecurity problems. The National Vulnerability Database (NVD) is a public data source that maintains standardized information about reported software vulnerabilities. Since its inception in 1997, NVD has published information about more than 43,000 software vulnerabilities affecting more than 17,000 software applications. This information is potentially valuable in understanding trends and patterns in software vulnerabilities, so that one can better manage the security of computer systems that are pestered by the ubiquitous software security flaws. In particular, one would like to be able to predict the likelihood that a piece of software contains a yet-to-be-discovered vulnerability, which must be taken into account in security management due to the increasing trend in zero-day attacks. We conducted an empirical study on applying data-mining techniques on NVD data with the objective of predicting the time to next vulnerability for a given software application. We experimented with various features constructed using the information available in NVD, and applied various machine learning algorithms to examine the predictive power of the data. Our results show that the data in NVD generally have poor prediction capability, with the exception of a few vendors and software applications. By doing a large number of experiments and observing the data, we suggest several reasons for why the NVD data have not produced a reasonable prediction model for time to next vulnerability with our current approach."}
{"_id": "c434e1cc54ea2911edc2b17415c64511b01530d0", "title": "Introducing Entity-Based Concepts to Business Process Modeling", "text": "The so-called Internet of Things (IoT) that comprises interconnected physical devices such as sensor networks and its technologies like Radio Frequency Identification (RFID) is increasingly adopted in many industries and thus becomes highly relevant for process modeling and execution. As BPMN 2.0 does not yet consider the idiosyncrasies of real-world entities we suggest new modeling concepts for a physical entity as well as a sensing task and an actuation task to make BPMN IoT-aware."}
{"_id": "a13dba3048279fe5518a83c25731c20c8c1cf15c", "title": "Vocabulary learning by mobile-assisted authentic content creation and social meaning-making: two case studies", "text": "In recent years, we have witnessed the concomitant rise of communicative and contextualised approaches as well as the paradigmatic development of the Mobile Assisted Language Learning (MALL) framework in analyzing language learning. The focus of MALL research has gradually shifted from content-based (delivery of learning content through mobile devices) to design-oriented (authentic and/or social mobile learning activities) study. In this paper, we present two novel case studies of MALL that emphasize learner-created content. In learning English prepositions and Chinese idioms respectively, the primary school students used the mobile devices assigned to them on a one-to-one basis to take photos in real-life contexts so as to construct sentences with the newly acquired prepositions or idioms. Subsequently, the learners were voraciously engaged in classroom or online discussion of their semantic constructions, thereby enhancing their understanding of the proper usage of the prepositions or idioms. This work shows the potential of transforming language learning into an authentic seamless learning experience."}
{"_id": "9ec5dfd10c091d008ab32a4b85d2f77cc74084c8", "title": "Testing using Log File Analysis: Tools, Methods, and Issues", "text": "Large software systems often keep log files of events. Such log files can be analyzed to check whether a run of a program reveals faults in the system. We discuss how such log files can be used in software testing. We present a framework for automatically analyzing log files, and describe a language for specifying analyzer programs and an implementation of that language. The language permits compositional, compact specifications of software, which act as test oracles; we discuss the use and efficacy of these oracles for unitand system-level testing in various settings. We explore methodological issues such as efficiency and logging policies, and the scope and limitations of the framework. We conclude that testing using log file analysis constitutes a useful methodology for software verification, somewhere between current testing practice and formal verification methodologies."}
{"_id": "b05c34cc70a3b397a9bea1fff48c3adad623152b", "title": "Crowdsourcing and the crisis-affected community", "text": "This article reports on Mission 4636, a real-time humanitarian crowdsourcing initiative that processed 80,000 text messages (SMS) sent from within Haiti following the 2010 earthquake. It was the first time that crowdsourcing (microtasking) had been used for international relief efforts, and is the largest deployment of its kind to date. This article presents the first full report and analysis of the initiative looking at the accuracy and timeliness in creating structured data from the messages and the collaborative nature of the process. Contrary to all previous papers, studies and media reports about Mission 4636, which have typically chosen to exclude empirical analyses and the involvement of the Haitian population, it is found that the greatest volume, speed and accuracy in information processing was by Haitian nationals, the Haitian diaspora, and those working closest with them, and that no new technologies played a significant role. It is concluded that international humanitarian organizations have been wrongly credited for large-scale information processing initiatives (here and elsewhere) and that for the most part they were largely just witnesses to crisis-affected communities bootstrapping their own recovery through communications technologies. The particular focus is on the role of the diaspora, an important population that are increasingly able to contribute to response efforts thanks to their increased communication potential. It is recommended that future humanitarian deployments of crowdsourcing focus on information processing within the populations they serve, engaging those with crucial local knowledge wherever they happen to be in the world."}
{"_id": "b217806e56ea1b04327d26576a1174752f54b210", "title": "Cross-lingual Semantic Generalization for the Detection of Metaphor", "text": "In this work, we describe a supervised cross-lingual methodology for detecting novel and conventionalized metaphors that derives generalized semantic patterns from a collection of metaphor annotations. For this purpose, we model each metaphor annotation as an abstract tuple \u2013 (source, target, relation, metaphoricity) \u2013 that packages a metaphoricity judgement with a relational grounding of the source and target lexical units in text. From these annotations, we derive a set of semantic patterns using a three-step process. First, we employ several generalized representations of the target using a variety of WordNet information and representative domain terms. Then, we generalize relations using a rule-based, pseudo-semantic role labeling. Finally, we generalize the source by partitioning a semantic hierarchy (defined by the target and the relation) into metaphoric and non-metaphoric regions so as to optimally account for the evidence in the annotated data. Experiments show that by varying the generality of the source, target, and relation representations in our derived patterns, we are able to significantly extend the impact of our annotations, detecting metaphors in a variety of domains at an F-measure of between 0.88 and 0.92 for English, Spanish, Russian, and Farsi. This generalization process both enhances our ability to jointly detect novel and conventionalized metaphors and enables us to transfer the knowledge encoded in metaphoricity annotations to novel languages."}
{"_id": "192235f5a9e4c9d6a28ec0d333e36f294b32f764", "title": "Reconfiguring the Imaging Pipeline for Computer Vision", "text": "Advancements in deep learning have ignited an explosion of research on efficient hardware for embedded computer vision. Hardware vision acceleration, however, does not address the cost of capturing and processing the image data that feeds these algorithms. We examine the role of the image signal processing (ISP) pipeline in computer vision to identify opportunities to reduce computation and save energy. The key insight is that imaging pipelines should be be configurable: to switch between a traditional photography mode and a low-power vision mode that produces lower-quality image data suitable only for computer vision. We use eight computer vision algorithms and a reversible pipeline simulation tool to study the imaging system's impact on vision performance. For both CNN-based and classical vision algorithms, we observe that only two ISP stages, demosaicing and gamma compression, are critical for task performance. We propose a new image sensor design that can compensate for these stages. The sensor design features an adjustable resolution and tunable analog-to-digital converters (ADCs). Our proposed imaging system's vision mode disables the ISP entirely and configures the sensor to produce subsampled, lower-precision image data. This vision mode can save ~75% of the average energy of a baseline photography mode with only a small impact on vision task accuracy."}
{"_id": "e28007775ad8bd1e3ecdca3cea682eafcace011b", "title": "DISCO: A Multilingual Database of Distributionally Similar Words", "text": "This paper1 presents DISCO, a tool for retrieving the distributional similarity between two given words, and for retrieving the distributionally most similar words for a given word. Pre-computed word spaces are freely available for a number of languages including English, German, French and Italian, so DISCO can be used off the shelf. The tool is implemented in Java, provides a Java API, and can also be called from the command line. The performance of DISCO is evaluated by measuring the correlation with WordNet-based semantic similarities and with human relatedness judgements. The evaluations show that DISCO has a higher correlation with semantic similarities derived from WordNet than latent semantic analysis (LSA) and the web-based PMI-IR."}
{"_id": "2c2f4db96dc791176d74a85a18e2b4e26fc1d766", "title": "Semi-global matching based disparity estimate using fast Census transform", "text": "The Census transform is very useful in stereo matching algorithms. But the traditional Census transform cost too much computation, and its accuracy and robustness are not satisfying. In this paper, a modified semi-global matching (SGM) algorithm based on the fast Census transform is proposed to improve the quality of the range map and decrease the computation. In addition, we modify the disparity refinement to enhance the quality of the estimated range map. Our experiments show that the proposed method performs better than the original SGM algorithm in terms of the accuracy and the anti-noise capability."}
{"_id": "efba494e7b708184690774994895c3e14945a5ea", "title": "A Supervised Learning Approach for Imbalanced Text Classification of Biomedical Literature Triage", "text": "A Supervised Learning Approach for Imbalanced Text Classification of Biomedical Literature Triage"}
{"_id": "c2b765e2c0a5d17df407dfbd32b0cbea6b9bf430", "title": "Cloudlet deployment in local wireless networks: Motivation, architectures, applications, and open challenges", "text": "In past few years, advancement in mobile applications and their integration with Cloud computing services has introduced a new computing paradigm known as Mobile Cloud Computing. Although Cloud services support a wide range of mobile applications, access to these services suffers from several performance issues such as WAN latency, jitter, and packet losses. Cloudlet frameworks are proposed to overcome these performance issues. More specifically, Cloudlets aim to bring the Cloud or a specific part of the Cloud closer to the mobile device by utilizing proximate computing resources to perform compute-intensive tasks. This paper presents a comprehensive survey on the state-of-the-art mobile Cloudlet architectures. We also classify the state-of-the-art Cloudlet solutions by presenting a hierarchical taxonomy. Moreover, the areas of Cloudlet applications are also identified and presented. Cloudlet discovery, resource management, data security, mobility, application offloading, and most importantly incentives to deploy a Cloudlet are areas that still needs to be investigated by the research community. The critical aspects of the current Cloudlet frameworks in Mobile Cloud Computing are analyzed to determine the strengths and weaknesses of the frameworks. The similarities and differences of the frameworks based on the important parameters, such as scalability, mobility support, Internet dependency, dynamic configuration, energy savings, and execution cost, are also investigated. The requirements for deploying the Cloudlet in a Local Wireless Network are also highlighted and presented. We also discuss open research challenges that Cloudlet deployments face in Local Wireless Networks. & 2015 Elsevier Ltd. All rights reserved."}
{"_id": "dd65617e1cee10f372189fea92a5923cb550f01b", "title": "Occurrence of partenites in mollusks and the influence that metacercaria of Apophallus muehlingi (Jagerskiold, 1898) and Posthodiplostomum cuticola (Nordmann, 1832) has on some biochemical parameters in fish", "text": "A heavy parasitic load from trematode partenites of Apophallus muehlingi (Jagerskiold, 1898) was observed in the mollusk population of Lithoglyphus naticoides (Pfeiffer) after its natural invasion into the Rybinsk Reservoir. An elevated concentration of glycogen was detected in the muscles of infected fish, both in the natural populations and under experimental conditions. Interspecific differences were found for the protein concentrations when elevated concentrations were observed in the muscles of the infected roach, and no differences were discovered for the goldfish (both infected and parasite-free). The ties between the biochemical parameters and motor activity are discussed for the infected fish."}
{"_id": "3660d66361bf6d2b564ca170d9147f6ce7c1e912", "title": "Personality and Psychological Aspects of Cosmetic Surgery", "text": "In recent years, cosmetic surgery in Iran, which is provided almost entirely by the private sector, has gained popularity despite evidence of its potential risks. In most cases, cosmetic surgeries are done to increase self-satisfaction and self-esteem, thus seeking cosmetic surgery potentially shows an individual\u2019s psychological profile. Current evidence needs studies on the psychological profile of Asian cosmetic surgery patients. The present study investigates psychological profile and personality traits of people seeking cosmetic surgery in Iran. The present prospective observational study was conducted with a sample of 274 randomly selected persons seeking cosmetic surgery (rhinoplasty, blepharoplasty, face/jaw implant, mammoplasty, and liposuction). All participants completed the validated and reliable the Global Severity Index (GSI)\u2014Symptom Checklist-90-Revised (SCL-90-R)\u2014and the short Neuroticism-Extraversion-Openness Five-Factor Inventory (NEO-FFI). The prevalence rate of psychiatric problems based on the GSI cut-off point (>63) of SCL-90-R was about 51\u00a0%, and interpersonal sensitivity and psychosis were the highest and lowest endorsed syndromes among the subjects, respectively. Openness had the lowest mean score; agreeableness and extroversion had the highest mean. The current study shows that understanding and psychological evaluation prior to surgery is necessary and screening can reduce the number of unnecessary surgeries and may enhance satisfaction with surgical results. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 ."}
{"_id": "b3dbdd8859e9a38712816dddf221843a5cae95a8", "title": "Mining Social Entrepreneurship Strategies Using Topic Modeling", "text": "Despite the burgeoning research on social entrepreneurship (SE), SE strategies remain poorly understood. Drawing on extant research on the social activism and social change, empowerment and SE models, we explore, classify and validate the strategies used by 2,334 social entrepreneurs affiliated with the world's largest SE support organization, Ashoka. The results of the topic modeling of the social entrepreneurs' strategy profiles reveal that they employed a total of 39 change-making strategies that vary across resources (material versus symbolic strategies), specificity (general versus specific strategies), and mode of participation (mass versus elite participation strategies); they also vary across fields of practice and time. Finally, we identify six meta-SE strategies-a reduction from the 39 strategies-and identify four new meta-SE strategies (i.e., system reform, physical capital development, evidence-based practices, and prototyping) that have been overlooked in prior SE research. Our findings extend and deepen the research into SE strategies and offer a comprehensive model of SE strategies that advances theory, practice and policy making."}
{"_id": "e8597f938e5158ab2015a69c45b6f0fbddb9955c", "title": "What factors lead to software project failure?", "text": "It has been suggested that there is more than one reason for a software development project to fail. However, most of the literature that discusses project failure tends to be rather general, supplying us with lists of risk and failure factors, and focusing on the negative business effects of the failure. Very little research has attempted an in-depth investigation of a number of failed projects to identify exactly what are the factors behind the failure. In this research we analyze data from 70 failed projects. This data provides us with practitionerspsila perspectives on 57 development and management factors for projects they considered were failures. Our results show that all projects we investigated suffered from numerous failure factors. For a single project the number of such factors ranges from 5 to 47. While there does not appear to be any overarching set of failure factors we discovered that all of the projects suffered from poor project management. Most projects additionally suffered from organizational factors outside the project managerpsilas control. We conclude with suggestions for minimizing the four most common failure factors."}
{"_id": "4b60e45b6803e2e155f25a2270a28be9f8bec130", "title": "Attribute based object identification", "text": "Over the last years, the robotics community has made substantial progress in detection and 3D pose estimation of known and unknown objects. However, the question of how to identify objects based on language descriptions has not been investigated in detail. While the computer vision community recently started to investigate the use of attributes for object recognition, these approaches do not consider the task settings typically observed in robotics, where a combination of appearance attributes and object names might be used in referral language to identify specific objects in a scene. In this paper, we introduce an approach for identifying objects based on natural language containing appearance and name attributes. To learn rich RGB-D features needed for attribute classification, we extend recently introduced sparse coding techniques so as to automatically learn attribute-dependent features. We introduce a large data set of attribute descriptions of objects in the RGB-D object dataset. Experiments on this data set demonstrate the strong performance of our approach to language based object identification. We also show that our attribute-dependent features provide significantly better generalization to previously unseen attribute values, thereby enabling more rapid learning of new attribute values."}
{"_id": "159550b613a6310bc78df45fe8c084c3503df05f", "title": "Hybrid analog-digital backscatter: A new approach for battery-free sensing", "text": "After comparing the properties of analog backscatter and digital backscatter, we propose that a combination of the two can provide a solution for high data rate battery free wireless sensing that is superior to either approach on its own. We present a hybrid analog-digital backscatter platform that uses digital backscatter for addressability and control but switches into analog backscatter mode for high data rate transmission of sensor data. Using hybrid backscatter, we report the first digitally addressable real-time battery free wireless microphone. We develop the hybrid backscatter platform by integrating an electret microphone and RF switch with a digital RFID platform (WISP). The hybrid WISP operates by default in digital mode, transmitting and receiving digital data using the EPC Gen 2 RFID protocol but switching into analog mode to backscatter audio sensor data when activated by Gen 2 READ command. The data is recovered using a USRP-based Software Defined RFID reader. We report an operating range of 7.4 meters for the analog backscatter microphone and 2.7 meters for hybrid microphone with 26.7 dBm output power USRP-based RFID reader."}
{"_id": "0a4bdc7ee4d07c7202a60641612dac1d9cfe0df2", "title": "Application Design for Wearable Computing", "text": "The confluence of decades of computer science and computer engineering research in multimodal interaction (e.g., speech and gesture recognition), machine learning (e.g., classification and feature extraction), software (e.g., web browsers, distributed agents), electronics (e.g., energy-efficient microprocessors, head-mounted displays), design methodology in user-centered design, and rapid prototyping have enabled a new class of computers\u2014wearable computers. The lecture takes the viewpoint of a potential designer or researcher in wearable computing. Designing wearable computers requires attention to many different factors because of the computer\u2019s closeness to the body and its use while performing other tasks. For the purposes of discussion, we have created the UCAMP framework, which consists of the following factors: user, corporal, attention, manipulation, and perception. Each of these factors and their importance is described. A number of example prototypes developed by the authors, as well as by other researchers, are used to illustrate these concepts. Wearable computers have established their first foothold in several application domains, such as vehicle and aircraft maintenance and manufacturing, inspection, language translation, and other areas. The lecture continues by describing the next step in the evolution of wearable computers, namely, context awareness. Context-aware computing takes into account a user\u2019s state and surroundings, and the mobile computer modifies its behavior based on this information. A user\u2019s context can be quite rich, consisting of attributes such as physical location, physiological state, personal history, daily behavioral patterns, and so forth. If a human assistant were given such context, he or she would make decisions in a proactive fashion, anticipating user needs, and acting as a proactive assistant. The goal is to enable mobile computers to play an analogous role, exploiting context information to significantly reduce demands on human attention. Context-aware intelligent agents can deliver relevant information when a user needs that information. These data make possible many exciting new applications, such as augmented reality, context-aware collaboration, and augmented manufacturing. The combined studies and research reported in this lecture suggest a number of useful guidelines for designing wearable computing devices. Also included with the guidelines is a list of questions that designers should consider when beginning to design a wearable computer. The research directions section emphasizes remaining challenges and trends in the areas of user interface, modalities of interaction, and wearable cognitive augmentation. Finally, we summarize the most important challenges and conclude with a projection of future directions in wearable computing. KEyWoRDS wearable computers, wearability design, user interface, manipulation devices, interaction modalities, context-aware computing, cognitive augmentation vi"}
{"_id": "a5c51fdbb4dfd15a90c56521790eaec1f2a3b6dc", "title": "Design and control of warehouse order picking: A literature review", "text": "Order picking has long been identified as the most labour-intensive and costly activity for almost every warehouse; the cost of order picking is estimated to be as much as 55% of the total warehouse operating expense. Any underperformance in order picking can lead to unsatisfactory service and high operational cost for its warehouse, and consequently for the whole supply chain. In order to operate efficiently, the orderpicking process needs to be robustly designed and optimally controlled. This paper gives a literature overview on typical decision problems in design and control of manual order-picking processes. We focus on optimal (internal) layout design, storage assignment methods, routing methods, order batching and zoning. The research in this area has grown rapidly recently. Still, combinations of the above areas have hardly been explored. Order-picking system developments in practice lead to promising new research directions."}
{"_id": "f174f8bfedfcd64ab740d1ff6160c05f2588cada", "title": "Pick-by-Vision comes on age: evaluation of an augmented reality supported picking system in a real storage environment", "text": "Order picking is one of the most important process steps in logistics. Because of their flexibility human beings cannot be replaced by machines. But if workers in order picking systems are equipped with a head-mounted display, Augmented Reality can improve the information visualization.\n In this paper the development of such a system -- called Pick-by-Vision - is presented. The system is evaluated in a user study performed in a real storage environment. Important logistics figures as well as subjective figures were measured. The results show that a Pick-by-Vision system can improve considerably industrial order picking processes."}
{"_id": "348e64727356683dd6582b746e81d66d1b6f8e42", "title": "Travel time estimation and order batching in a 2-block warehouse", "text": "The order batching problem (OBP) is the problem of determining the number of orders to be picked together in one picking tour. Although various objectives may arise in practice, minimizing the average throughput time of a random order is a common concern. In this paper, we consider the OBP for a 2-block rectangular warehouse with the assumptions that orders arrive according to a Poisson process and the method used for routing the orderpickers is the well-known S-shape heuristic. We first elaborate on the first and second moment of the order-picker's travel time. Then we use these moments to estimate the average throughput time of a random order. This enables us to estimate the optimal picking batch size. Results from simulation show that the method provides a high accuracy level. Furthermore, the method is rather simple and can be easily applied in practice."}
{"_id": "e600bfc1dc534abbbc82e9e8a0a13f87ca11c0c3", "title": "An analysis of electromagnetic characteristics and mechanical load characteristics of actuator for 4 pole MC", "text": "MC(Magnetic Contactor) is a widely used switch in power distribution systems for controlling electrical loads by making and breaking electric circuits using electromagnetic force of an actuator. MCs are classified as 3 and 4 pole MC according to the number of main contacts. 4 pole MC consists of 4 main contacts, which controls a three phase four wire system or two single phase systems. The type of the main contact affects not only the electric characteristics but also the mechanical load characteristics of the MC. In order to design the actuator for the 4 pole MC, it is important to analyze the mechanical load characteristics and the electromagnetic characteristics of the actuator precisely. In this paper, the analysis of mechanical load characteristics according to the formation types of main contacts is conducted and the electromagnetic characteristics are analyzed using 2-D FEM(Finite Element Method). Furthermore, the dynamic characteristics analysis of the actuator is conducted and its validity is verified by experiments."}
{"_id": "e760859dd79d832c0049c88c8b784d5fb732e4be", "title": "Charge and discharge characteristics of lead-acid battery and LiFePO4 battery", "text": "The charge and discharge characteristics of lead-acid battery and LiFePO4 battery is proposed in this paper. The purpose of this paper lies in offering the pulse current charger of higher peak value which can shorten the charging time to reach the goal of charging fast and also avoids the polarization phenomena produced while charging the voltage and current signal simultaneously, supervising whole charging course of the battery, avoiding the situation of excessive charging, and ensuring the life of battery. The hardware circuit of this pulse current charger adopts the power electronic elements for the main structure, uses Digital Signal Processor as the core of the controller, and substantially decreases the volume of charger and the loss of circuit. Besides, the input power supply of this charger is utility, greatly facilitating its using or carrying, which contributes to the development of variety of electric equipments in the future."}
{"_id": "0f8f6c2e8c41b835407ba4b07d48944252a41400", "title": "Latent Space Model for Road Networks to Predict Time-Varying Traffic", "text": "Real-time traffic prediction from high-fidelity spatiotemporal traffic sensor datasets is an important problem for intelligent transportation systems and sustainability. However, it is challenging due to the complex topological dependencies and high dynamism associated with changing road conditions. In this paper, we propose a Latent Space Model for Road Networks (LSM-RN) to address these challenges holistically. In particular, given a series of road network snapshots, we learn the attributes of vertices in latent spaces which capture both topological and temporal properties. As these latent attributes are time-dependent, they can estimate how traffic patterns form and evolve. In addition, we present an incremental online algorithm which sequentially and adaptively learns the latent attributes from the temporal graph changes. Our framework enables real-time traffic prediction by 1) exploiting real-time sensor readings to adjust/update the existing latent spaces, and 2) training as data arrives and making predictions on-the-fly. By conducting extensive experiments with a large volume of real-world traffic sensor data, we demonstrate the superiority of our framework for real-time traffic prediction on large road networks over competitors as well as baseline graph-based LSM's."}
{"_id": "ea30a34c06edf902c67acb9e9cdf3925d0eb1570", "title": "A knowledge-based approach in designing combinatorial or medicinal chemistry libraries for drug discovery. 1. A qualitative and quantitative characterization of known drug databases.", "text": "The discovery of various protein/receptor targets from genomic research is expanding rapidly. Along with the automation of organic synthesis and biochemical screening, this is bringing a major change in the whole field of drug discovery research. In the traditional drug discovery process, the industry tests compounds in the thousands. With automated synthesis, the number of compounds to be tested could be in the millions. This two-dimensional expansion will lead to a major demand for resources, unless the chemical libraries are made wisely. The objective of this work is to provide both quantitative and qualitative characterization of known drugs which will help to generate \"drug-like\" libraries. In this work we analyzed the Comprehensive Medicinal Chemistry (CMC) database and seven different subsets belonging to different classes of drug molecules. These include some central nervous system active drugs and cardiovascular, cancer, inflammation, and infection disease states. A quantitative characterization based on computed physicochemical property profiles such as log P, molar refractivity, molecular weight, and number of atoms as well as a qualitative characterization based on the occurrence of functional groups and important substructures are developed here. For the CMC database, the qualifying range (covering more than 80% of the compounds) of the calculated log P is between -0.4 and 5.6, with an average value of 2.52. For molecular weight, the qualifying range is between 160 and 480, with an average value of 357. For molar refractivity, the qualifying range is between 40 and 130, with an average value of 97. For the total number of atoms, the qualifying range is between 20 and 70, with an average value of 48. Benzene is by far the most abundant substructure in this drug database, slightly more abundant than all the heterocyclic rings combined. Nonaromatic heterocyclic rings are twice as abundant as the aromatic heterocycles. Tertiary aliphatic amines, alcoholic OH and carboxamides are the most abundant functional groups in the drug database. The effective range of physicochemical properties presented here can be used in the design of drug-like combinatorial libraries as well as in developing a more efficient corporate medicinal chemistry library."}
{"_id": "4605d6feb7ae32f746e0589f17895b98ef65e0ba", "title": "Comparison of direct and iterative artificial neural network forecast approaches in multi-periodic time series forecasting", "text": "Artificial neural network is a valuable tool for time series forecasting. In the case of performing multiperiodic forecasting with artificial neural networks, two methods, namely iterative and direct, can be used. In iterative method, first subsequent period information is predicted through past observations. Afterwards, the estimated value is used as an input; thereby the next period is predicted. The process is carried on until the end of the forecast horizon. In the direct forecast method, successive periods can be predicted all at once. Hence, this method is thought to yield better results as only observed data is utilized in order to predict future periods. In this study, forecasting was performed using direct and iterative methods, and results of the methods are compared using grey relational analysis to find the method which gives a better result. 2008 Elsevier Ltd. All rights reserved."}
{"_id": "fa7eff762690f6f9414957d1805faf65bf814d6c", "title": "Application of information technology: MedEx: a medication information extraction system for clinical narratives", "text": "Medication information is one of the most important types of clinical data in electronic medical records. It is critical for healthcare safety and quality, as well as for clinical research that uses electronic medical record data. However, medication data are often recorded in clinical notes as free-text. As such, they are not accessible to other computerized applications that rely on coded data. We describe a new natural language processing system (MedEx), which extracts medication information from clinical notes. MedEx was initially developed using discharge summaries. An evaluation using a data set of 50 discharge summaries showed it performed well on identifying not only drug names (F-measure 93.2%), but also signature information, such as strength, route, and frequency, with F-measures of 94.5%, 93.9%, and 96.0% respectively. We then applied MedEx unchanged to outpatient clinic visit notes. It performed similarly with F-measures over 90% on a set of 25 clinic visit notes."}
{"_id": "3ffbd7d45bd037e3c0d88038c3dd8178eba0df14", "title": "Intonation issues in HMM-based speech synthesis for Vietnamese", "text": "In an HMM-based Text-To-Speech system, contextual features, including phonetic and prosodic factors have a significant influence to the spectrum, F0 and duration of the synthetic voice. This paper proposes prosodic features aiming at improving the naturalness of an HMM-based TTS system (VTed) for a tonal language, Vietnamese. The ToBI (Tones and Break Indices) features are used to learn two crucial prosodic cues i.e. intonation (boundary tones) and pause (break indices), concurrently with another set of features. The result of MOS test showed that the general quality of synthetic voice is rather good, 1.21 point lower than the natural voice. About 55% of the voice trained with ToBI boundary tone feature are perceived as similar to the voice trained without this feature, while a 10% difference in favour of the voice trained without this ToBI feature is observed. This may be linked with F0 contour lowering or raising regardless of lexical tones. This brought two main problems in the synthetic voice: discontinuity in spectrum and F0 or unexpected voice quality. This paper then concluded the need of much more work on intonation modeling that should take into account the Vietnamese tones. A new prosody model can be designed, which may consider the ToBI model, with respect to lexical tones and the syntactic structure of Vietnamese. Index Terms \u2014 Text-to-speech (TTS), speech synthesis, tonal language, Vietnamese, HMM-based speech synthesis, intonation, ToBI"}
{"_id": "7ca3809484eb57c509acc18b016e9b010759dfa1", "title": "Deep Reflectance Maps", "text": "Undoing the image formation process and therefore decomposing appearance into its intrinsic properties is a challenging task due to the under-constrained nature of this inverse problem. While significant progress has been made on inferring shape, materials and illumination from images only, progress in an unconstrained setting is still limited. We propose a convolutional neural architecture to estimate reflectance maps of specular materials in natural lighting conditions. We achieve this in an end-to-end learning formulation that directly predicts a reflectance map from the image itself. We show how to improve estimates by facilitating additional supervision in an indirect scheme that first predicts surface orientation and afterwards predicts the reflectance map by a learning-based sparse data interpolation. In order to analyze performance on this difficult task, we propose a new challenge of Specular MAterials on SHapes with complex IllumiNation (SMASHINg) using both synthetic and real images. Furthermore, we show the application of our method to a range of image editing tasks on real images."}
{"_id": "660d7b302c6f2119b693e342ba76d5ed794423a1", "title": "Gaussian process for long-term time-series forecasting", "text": "Gaussian process (GP), as one of the cornerstones of Bayesian non-parametric methods, has received extensive attention in machine learning. GP has intrinsic advantages in data modeling, given its construction in the framework of Bayesian hieratical modeling and no requirement for a priori information of function forms in Bayesian reference. In light of its success in various applications, utilizing GP for time-series forecasting has gained increasing interest in recent years. This paper is concerned with using GP for multiple-step-ahead time-series forecasting, an important type of time-series analysis. Utilizing a large number of real-world time series, this paper evaluates two different GP modeling strategies (direct and recursive) for performing multiple-step-ahead forecasting."}
{"_id": "6ac7f1674286da8473f5c4e2e85b64a68427eaab", "title": "Multi-clip video editing from a single viewpoint", "text": "We propose a framework for automatically generating multiple clips suitable for video editing by simulating pan-tilt-zoom camera movements within the frame of a single static camera. Assuming important actors and objects can be localized using computer vision techniques, our method requires only minimal user input to define the subject matter of each sub-clip. The composition of each sub-clip is automatically computed in a novel L1-norm optimization framework. Our approach encodes several common cinematographic practices into a single convex cost function minimization problem, resulting in aesthetically pleasing sub-clips which can easily be edited together using off-the-shelf multi-clip video editing software. We demonstrate our approach on five video sequences of a live theatre performance by generating multiple synchronized subclips for each sequence."}
{"_id": "11f5d500fc168cbd03775c1362ebbb75ab206873", "title": "Evaluate the Feasibility of Using Frontal SSVEP to Implement an SSVEP-Based BCI in Young, Elderly and ALS Groups", "text": "This paper studies the amplitude-frequency characteristic of frontal steady-state visual evoked potential (SSVEP) and its feasibility as a control signal for brain computer interface (BCI). SSVEPs induced by different stimulation frequencies, from 13 ~ 31 Hz in 2 Hz steps, were measured in eight young subjects, eight elders and seven ALS patients. Each subject was requested to participate in a calibration study and an application study. The calibration study was designed to find the amplitude-frequency characteristics of SSVEPs recorded from Oz and Fpz positions, while the application study was designed to test the feasibility of using frontal SSVEP to control a two-command SSVEP-based BCI. The SSVEP amplitude was detected by an epoch-average process which enables artifact-contaminated epochs can be removed. The seven ALS patients were severely impaired, and four patients, who were incapable of completing our BCI task, were excluded from calculation of BCI performance. The averaged accuracies, command transfer intervals and information transfer rates in operating frontal SSVEP-based BCI were 96.1%, 3.43 s/command, and 14.42 bits/min in young subjects; 91.8%, 6.22 s/command, and 6.16 bits/min in elders; 81.2%, 12.14 s/command, and 1.51 bits/min in ALS patients, respectively. The frontal SSVEP could be an alternative choice to design SSVEP-based BCI."}
{"_id": "4d2bd65a4b24679d4c440afc6fe88f13229e644d", "title": "Human Face Detection by using Skin Color Segmentation, Face Features and Regions Properties", "text": "The goal of face detection is to lo\u00adcate all regions that contain a face. This pa\u00adper has a simple face detection procedure which has two major steps, first to segment skin region from an image, and second, to decide these regions contain human face or not. Our procedure is based on skin color segmentation and human face features (knowledge-based approach). In this paper, we used RGB, YCbCr, CEILAB (L*a*b) and HSV color models for skin color segmentation. These color models with thresholds, help to remove non skin like pixels from an image. We tested each skin region, that skin region is actually represents a human face or not, by using human face features based on knowledge of geometrical properties of human face. The experiment result shows that, the algorithm gives hopeful results. At last, we concluded this paper and proposed future work."}
{"_id": "3103564188b516277a5f34baa36a64039d6d86d4", "title": "The myth of the visual word form area", "text": "Recent functional imaging studies have referred to a posterior region of the left midfusiform gyrus as the \"visual word form area\" (VWFA). We review the evidence for this claim and argue that neither the neuropsychological nor neuroimaging data are consistent with a cortical region specialized for visual word form representations. Specifically, there are no reported cases of pure alexia who have deficits limited to visual word form processing and damage limited to the left midfusiform. In addition, we present functional imaging data to demonstrate that the so-called VWFA is activated by normal subjects during tasks that do not engage visual word form processing such as naming colors, naming pictures, reading Braille, repeating auditory words, and making manual action responses to pictures of meaningless objects. If the midfusiform region has a single function that underlies all these tasks, then it does not correspond to visual word form processing. On the other hand, if the region participates in several functions as defined by its interactions with other cortical areas, then identifying the neural system sustaining visual word form representations requires identification of the set of regions involved. We conclude that there is no evidence that visual word form representations are subtended by a single patch of neuronal cortex and it is misleading to label the left midfusiform region as the visual word form area."}
{"_id": "39655a2a25dae954bdbb6b679a86d6dfa39d47ad", "title": "Atrial Fibrillation Classification from a Short Single Lead ECG Recording Using a Deep Convolutional Neural Network", "text": "In this work, a deep convolutional neural network (CNN) is proposed to detect atrial fibrillation (AF) among the normal, noisy and other categories of cardiac arrhythmias electrocardiogram (ECG) recordings. The proposed CNN is trained by stochastic gradient descent with the categorical cross-entropy loss function. The network performance is evaluated on training (75%) and validation (25%) data sets that are obtained from 2017 Physionet/CinC challenge database. The proposed CNN model respectively achieves the average accuracy and F1 score of 87% and 0.84 on validation data set. One of the main advantages of this work besides high accuracy and reliability, is to simplify the feature extraction process and to remove the need for detecting ECG signal fiducial points and extracting hand-crafted features unlike conventional methods available in the literature. Moreover, it provides an opportunity for ECG screening in a large population, especially for atrial fibrillation screening, using wearable devices such as OM apparel that records high-quality single channel ECG signal."}
{"_id": "c1742ca74f40c44dae2af6a992e569edc969c62c", "title": "3D printing of microwave passive components by different additive manufacturing technologies", "text": "This paper presents an overview of the possibility offered by 3D plastic printers for a quick, simple and affordable manufacturing of working filters and other passive devices such as antennas. This paper thus goes through numerous examples of passive devices made with the Fused Deposition Modeling (FDM) and material jetting (Polyjet\u00a9) technologies and will highlight how they can now be considered as a solid companion to RF designers during an optimization process up to Ku and higher bands."}
{"_id": "8ff4632cf7e2ace0e067c57a63dfaa90c615a364", "title": "Modelling and Simulation for Optimal Control of Nonlinear Inverted Pendulum Dynamical System Using PID Controller and LQR", "text": "This paper presents the modelling and simulation for optimal control design of nonlinear inverted pendulum-cart dynamic system using Proportional-Integral-Derivative (PID) controller and Linear Quadratic Regulator (LQR). LQR, an optimal control technique, and PID control method, both of which are generally used for control of the linear dynamical systems have been used in this paper to control the nonlinear dynamical system. The nonlinear system states are fed to LQR which is designed using linear state-space model. Inverted pendulum, a highly nonlinear unstable system is used as a benchmark for implementing the control methods. Here the control objective is to control the system such that the cart reaches at a desired position and the inverted pendulum stabilizes in upright position. The MATLAB-SIMULINK models have been developed for simulation of control schemes. The simulation results justify the comparative advantages of LQR control methods."}
{"_id": "bfc8903e020581c9f60794874102e7699b866b24", "title": "Data mining workflow templates for intelligent discovery assistance and auto-experimentation", "text": "Knowledge Discovery in Databases (KDD) has grown a lot during the last years. But providing user support for constructing workflows is still problematic. The large number of operators available in current KDD systems makes it difficult for a user to successfully solve her task. Also, workflows can easily reach a huge number of operators(hundreds) and parts of the workflows are applied several times. Therefore, it becomes hard for the user to construct them manually. In addition, workflows are not checked for correctness before execution. Hence, it frequently happens that the execution of the workflow stops with an error after several hours runtime. In this paper we present a solution to these problems. We introduce a knowledge-based representation of Data Mining (DM) workflows as a basis for cooperative interactive planning. Moreover, we discuss workflow templates, i.e. abstract workflows that can mix executable operators and tasks to be refined later into sub-workflows. This new representation helps users to structure and handle workflows, as it constrains the number of operators that need to be considered. Finally, workflows can be grouped in templates which foster re-use further simplifying DM workflow construction. Posted at the Zurich Open Repository and Archive, University of Zurich ZORA URL: https://doi.org/10.5167/uzh-44851 Originally published at: Kietz, J\u00f6rg-Uwe; Serban, Floarea; Bernstein, Abraham; Fischer, Simon (2010). Data mining workflow templates for intelligent discovery assistance and auto-experimentation. In: Proc of the ECML/PKDD\u201910 Workshop on Third Generation Data Mining: Towards Service-oriented Knowledge Discovery (SoKD\u201910), Barcelona, Spain, 20 September 2010 24 September 2010, 1-12. Data Mining Workflow Templates for Intelligent Discovery Assistance and Auto-Experimentation J\u00f6rg-Uwe Kietz, Floarea Serban, Abraham Bernstein, and Simon Fischer 1 University of Zurich, Department of Informatics, Dynamic and Distributed Information Systems Group, Binzm\u00fchlestrasse 14, CH-8050 Zurich, Switzerland {kietz|serban|bernstein}@ifi.uzh.ch 2 Rapid-I GmbH, Stockumer Str. 475, 44227 Dortmund, Germany fischer@rapid-i.com Abstract. Knowledge Discovery in Databases (KDD) has grown a lot during the last years. But providing user support for constructing workflows is still problematic. The large number of operators available in current KDD systems makes it difficult for a user to successfully solve her task. Also, workflows can easily reach a huge number of operators (hundreds) and parts of the workflows are applied several times. Therefore, it becomes hard for the user to construct them manually. In addition, workflows are not checked for correctness before execution. Hence, it frequently happens that the execution of the workflow stops with an error after several hours runtime. In this paper we present a solution to these problems. We introduce a knowledge-based representation of Data Mining (DM) workflows as a basis for cooperative-interactive planning. Moreover, we discuss workflow templates, i.e. abstract workflows that can mix executable operators and tasks to be refined later into sub-workflows. This new representation helps users to structure and handle workflows, as it constrains the number of operators that need to be considered. 
Finally, workflows can be grouped in templates which foster re-use further simplifying DM workflow construction. Knowledge Discovery in Databases (KDD) has grown a lot during the last years. But providing user support for constructing workflows is still problematic. The large number of operators available in current KDD systems makes it difficult for a user to successfully solve her task. Also, workflows can easily reach a huge number of operators (hundreds) and parts of the workflows are applied several times. Therefore, it becomes hard for the user to construct them manually. In addition, workflows are not checked for correctness before execution. Hence, it frequently happens that the execution of the workflow stops with an error after several hours runtime. In this paper we present a solution to these problems. We introduce a knowledge-based representation of Data Mining (DM) workflows as a basis for cooperative-interactive planning. Moreover, we discuss workflow templates, i.e. abstract workflows that can mix executable operators and tasks to be refined later into sub-workflows. This new representation helps users to structure and handle workflows, as it constrains the number of operators that need to be considered. Finally, workflows can be grouped in templates which foster re-use further simplifying DM workflow construction."}
{"_id": "f825b2dfac69d2e161b128189bf06318b0bda00c", "title": "Prospective outcomes of young and middle-aged adults with medial compartment osteoarthritis treated with a proximal tibial opening wedge osteotomy.", "text": "PURPOSE\nThe purpose of this study was to conduct a prospective outcome analysis of proximal tibial opening wedge osteotomies performed in young and middle-aged patients (aged <55 years) for the treatment of symptomatic medial compartment osteoarthritis of the knee.\n\n\nMETHODS\nA consecutive series of young and middle-aged adults who underwent proximal tibial opening wedge osteotomies for symptomatic medial compartment osteoarthritis and genu varus alignment were prospectively followed up. Patients were evaluated with preoperative and postoperative modified Cincinnati Knee Scores and International Knee Documentation Committee objective knee subscores for knee effusions and the single-leg hop. Calculations were made of the preoperative and postoperative long-leg radiographic mechanical weight-bearing axis, patellar height (Insall-Salvati index), and tibial slope. A separate cohort of asymptomatic patients was used to quantify tibial plateau anatomy to provide an objective description of the lower extremity mechanical axis.\n\n\nRESULTS\nThere were 47 patients, with a mean age of 40.5 years, with a minimum of 2 years' follow-up, who formed this patient cohort. Modified Cincinnati Knee Scores improved significantly from 42.9 preoperatively to 65.1 at a mean of 3.6 years of follow-up. Radiographic analysis of a separate cohort showed the medial tibial eminence to be located at the 41% point along the tibial plateau from medial (0%) to lateral (100%). There was a significant improvement in malalignment: the mean mechanical axis passed through the tibial plateau at 23% of the distance along the proximal tibia preoperatively versus 54% postoperatively. The Insall-Salvati index decreased from 1.03 to 0.95 (P < .05), and posterior tibial slope increased from 9.4\u00b0 to 11.7\u00b0 (P < .05). Of the osteotomies, 3 (6%) were considered failures, defined by revision of the osteotomy or conversion to total knee arthroplasty.\n\n\nCONCLUSIONS\nPerforming proximal tibial opening wedge osteotomies to treat symptomatic medial compartment osteoarthritis in carefully selected patients leads to a significant improvement in subjective and objective clinical outcome scores with correction of malalignment at a mean of 3.6 years postoperatively.\n\n\nLEVEL OF EVIDENCE\nLevel IV, therapeutic case series."}
{"_id": "901219cbe5cd9f024c64939596aab9fc9154afdb", "title": "Detectability thresholds and optimal algorithms for community structure in dynamic networks", "text": "We study the fundamental limits on learning latent community structure in dynamic networks. Specifically, we study dynamic stochastic block models where nodes change their community membership over time, but where edges are generated independently at each time step. In this setting (which is a special case of several existing models), we are able to derive the detectability threshold exactly, as a function of the rate of change and the strength of the communities. Below this threshold, we claim that no algorithm can identify the communities better than chance. We then give two algorithms that are optimal in the sense that they succeed all the way down to this limit. The first uses belief propagation (BP), which gives asymptotically optimal accuracy, and the second is a fast spectral clustering algorithm, based on linearizing the BP equations. We verify our analytic and algorithmic results via numerical simulation, and close with a brief discussion of extensions and open questions."}
{"_id": "2afe3d3806efcb65da0c97f90c1be816d3ad5224", "title": "Consideration on successive interference canceller (SIC) receiver at cell-edge users for non-orthogonal multiple access (NOMA) with SU-MIMO", "text": "Non-orthogonal multiple access (NOMA) utilizing transmit power difference and advanced successive interference canceller (SIC) receivers has been considered as a candidate radio access technology for future wireless communication systems. Single-user multiple-input multiple-output (SU-MIMO) is one of the key technologies in Long-Term-Evolution (LTE) systems. Thus, it has been proved to combine with NOMA for further system performance improvement. In previous studies, SIC receiver is adopted at cell center users (UEs), while SIC processing is not taken into account at cell-edge UEs, considering the feasibility of UE receiver and control signaling overhead. However, it may put limitation on the performance of NOMA. To further enhance the performance of NOMA with SU-MIMO, advanced receiver can be considered at cell-edge UEs. In this paper, we investigate the impact of applying the SIC receiver at cell-edge UEs in downlink NOMA with SU-MIMO. The detail SIC detection schemes at cell-edge UE will be discussed, including ideal SIC processing and non-ideal SIC processing. The simulation results show that performance gains of NOMA can be improved with SIC processing at cell-edge UEs. It indicates that more advanced receiver can be considered for cell-edge UE to achieve upper-bound performance."}
{"_id": "940fefaaecad64102e2417ac03b2bac348b8d08a", "title": "Bipolar transistor gain influence on the high temperature thermal stability of HV-BiGTs", "text": "In this paper we present the detailed investigation of the influence of the internal bipolar PNP transistor gain on the thermal stability of high voltage IGBTs and BiGTs. The bipolar gain is controlled by means of anode and buffer design and by the introduction of anode shorts. The influence of the different buffer and anode doping profiles and the different layouts in the case of anode-shorted designs are analyzed. Temperature dependent leakage current measurements confirm that the lowering of the leakage current and its subsequent weak temperature dependency can be achieved by buffer and anode engineering albeit with certain design trade-off restrictions. Nevertheless, another effective approach for suppressing the leakage current and its dependency on temperature is achieved by the introduction of anode shorts as demonstrated in reverse conducting IGBT or BiGT structures. Such designs eliminate to a large extent the internal bipolar transistor action in the BiGT anode shorted designs while allowing different anode and buffer doping profiles for the design trade-offs. Despite the fact that the lifetime control in the BiGT drift region causes the leakage current to increase, the temperature coefficient remains unchanged, hence, making the hard switched BiGT suitable for high temperature operation."}
{"_id": "08c3e50a2913da51ed3cdafdcfdfb488e8fa83c3", "title": "Return-oriented programming without returns", "text": "We show that on both the x86 and ARM architectures it is possible to mount return-oriented programming attacks without using return instructions. Our attacks instead make use of certain instruction sequences that behave like a return, which occur with sufficient frequency in large libraries on (x86) Linux and (ARM) Android to allow creation of Turing-complete gadget sets.\n Because they do not make use of return instructions, our new attacks have negative implications for several recently proposed classes of defense against return-oriented programming: those that detect the too-frequent use of returns in the instruction stream; those that detect violations of the last-in, first-out invariant normally maintained for the return-address stack; and those that modify compilers to produce code that avoids the return instruction."}
{"_id": "48a8e9d8a41009eb6b7733b139eb5eff30d72776", "title": "Binary stirring: self-randomizing instruction addresses of legacy x86 binary code", "text": "Unlike library code, whose instruction addresses can be randomized by address space layout randomization (ASLR), application binary code often has static instruction addresses. Attackers can exploit this limitation to craft robust shell codes for such applications, as demonstrated by a recent attack that reuses instruction gadgets from the static binary code of victim applications.\n This paper introduces binary stirring, a new technique that imbues x86 native code with the ability to self-randomize its instruction addresses each time it is launched. The input to STIR is only the application binary code without any source code, debug symbols, or relocation information. The output is a new binary whose basic block addresses are dynamically determined at load-time. Therefore, even if an attacker can find code gadgets in one instance of the binary, the instruction addresses in other instances are unpredictable. An array of binary transformation techniques enable STIR to transparently protect large, realistic applications that cannot be perfectly disassembled due to computed jumps, code-data interleaving, OS callbacks, dynamic linking and a variety of other difficult binary features. Evaluation of STIR for both Windows and Linux platforms shows that stirring introduces about 1.6% overhead on average to application runtimes."}
{"_id": "acf32e644db8c3ac54834d294bba4cf46551480a", "title": "Enhanced Operating System Security Through Efficient and Fine-grained Address Space Randomization", "text": "In recent years, the deployment of many applicationlevel countermeasures against memory errors and the increasing number of vulnerabilities discovered in the kernel has fostered a renewed interest in kernel-level exploitation. Unfortunately, no comprehensive and wellestablished mechanism exists to protect the operating system from arbitrary attacks, due to the relatively new development of the area and the challenges involved. In this paper, we propose the first design for finegrained address space randomization (ASR) inside the operating system (OS), providing an efficient and comprehensive countermeasure against classic and emerging attacks, such as return-oriented programming. To motivate our design, we investigate the differences with application-level ASR and find that some of the wellestablished assumptions in existing solutions are no longer valid inside the OS; above all, perhaps, that information leakage becomes a major concern in the new context. We show that our ASR strategy outperforms stateof-the-art solutions in terms of both performance and security without affecting the software distribution model. Finally, we present the first comprehensive live rerandomization strategy, which we found to be particularly important inside the OS. Experimental results demonstrate that our techniques yield low run-time performance overhead (less than 5% on average on both SPEC and syscall-intensive benchmarks) and limited run-time memory footprint increase (around 15% during the execution of our benchmarks). We believe our techniques can greatly enhance the level of OS security without compromising the performance and reliability of the OS."}
{"_id": "352e74019d86163d73618f03429ae452ab429629", "title": "Cache-timing attacks on AES", "text": "This paper demonstrates complete AES key recovery from known-plaintext timings of a network server on another computer. This attack should be blamed on the AES design, not on the particular AES library used by the server; it is extremely difficult to write constant-time high-speed AES software for common general-purpose computers. This paper discusses several of the obstacles in detail."}
{"_id": "41d3fefdb1843abc74834226256a25ad0eea697a", "title": "Online variance minimization", "text": "We consider the following type of online variance minimization problem: In every trial t our algorithms get a covariance matrix C t and try to select a parameter vector w t\u22121 such that the total variance over a sequence of trials $\\sum_{t=1}^{T} (\\boldsymbol {w}^{t-1})^{\\top} \\boldsymbol {C}^{t}\\boldsymbol {w}^{t-1}$ is not much larger than the total variance of the best parameter vector u chosen in hindsight. Two parameter spaces in \u211d n are considered\u2014the probability simplex and the unit sphere. The first space is associated with the problem of minimizing risk in stock portfolios and the second space leads to an online calculation of the eigenvector with minimum eigenvalue of the total covariance matrix $\\sum_{t=1}^{T} \\boldsymbol {C}^{t}$ . For the first parameter space we apply the Exponentiated Gradient algorithm which is motivated with a relative entropy regularization. In the second case, the algorithm has to maintain uncertainty information over all unit directions u. For this purpose, directions are represented as dyads uu \u22a4 and the uncertainty over all directions as a mixture of dyads which is a density matrix. The motivating divergence for density matrices is the quantum version of the relative entropy and the resulting algorithm is a special case of the Matrix Exponentiated Gradient algorithm. In each of the two cases we prove bounds on the additional total variance incurred by the online algorithm over the best offline parameter."}
{"_id": "92747e1b72eedc38416e836fbb82d236bcd9fb32", "title": "Listen and Translate: A Proof of Concept for End-to-End Speech-to-Text Translation", "text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L\u2019archive ouverte pluridisciplinaire HAL, est destin\u00e9e au d\u00e9p\u00f4t et \u00e0 la diffusion de documents scientifiques de niveau recherche, publi\u00e9s ou non, \u00e9manant des \u00e9tablissements d\u2019enseignement et de recherche fran\u00e7ais ou \u00e9trangers, des laboratoires publics ou priv\u00e9s. Listen and Translate: A Proof of Concept for End-to-End Speech-to-Text Translation Alexandre B\u00e9rard, Olivier Pietquin, Laurent Besacier, Christophe Servan"}
{"_id": "3e2796dbad214d971f8ed2b57feb0a120e250888", "title": "Transactional events", "text": "Concurrent programs require high-level abstractions in order to manage complexity and enable compositional reasoning. In this paper, we introduce a novel concurrency abstraction, dubbed transactional events, which combines first-class synchronous messagepassing events with all-or-nothing transactions. This combination enables simple solutions to interesting problems in concurrent programming. For example, guarded synchronous receive can be implemented as an abstract transactional event, whereas in other languages it requires a non-abstract, non-modular protocol. Likewise, three-way rendezvous can also be implemented as an abstract transactional event, which is impossible using first-class events alone. Both solutions are easy to code and easy to reason about.The expressive power of transactional events arises from a sequencing combinator whose semantics enforces an all-or-nothing transactional property - either both of the constituent events synchronize in sequence or neither of them synchronizes. This sequencing combinator, along with a non-deterministic choice combinator, gives transactional events the compositional structure of a monad-with-plus. We provide a formal semantics for and a preliminary implementation of transactional events."}
{"_id": "e38f26b8b8190c95806cdd788d3303dbf0651e91", "title": "Unsupervised Text Normalization Using Distributed Representations of Words and Phrases", "text": "Text normalization techniques that use rule-based normalization or string similarity based on static dictionaries are typically unable to capture domain-specific abbreviations (custy, cx \u2192 customer) and shorthands (5ever, 7ever \u2192 forever) used in informal texts. In this work, we exploit the property that noisy and canonical forms of a particular word share similar context in a large noisy text collection (millions or billions of social media feeds from Twitter, Facebook, etc.). We learn distributed representations of words to capture the notion of contextual similarity and subsequently learn normalization lexicons from these representations in a completely unsupervised manner. We experiment with linear and non-linear distributed representations obtained from log-linear models and neural networks, respectively. We apply our framework for normalizing customer care notes and Twitter. We also extend our approach to learn phrase normalization lexicons (g2g \u2192 got to go) by training distributed representations over compound words. Our approach outperforms Microsoft Word, Aspell and a manually compiled urban dictionary from the Web and achieves state-of-the-art results on a publicly available Twitter dataset."}
{"_id": "1b3fa5344a84a82d69b94fdfe1cd3f62398ce77a", "title": "Problems with the Concept of Video Game \u201cAddiction\u201d: Some Case Study Examples", "text": "This paper argues that the recent concerns about video game \u201caddiction\u201d have been based less on scientific facts and more upon media hysteria. By examining the literature, it will be demonstrated that the current criteria used for identifying this concept are both inappropriate and misleading. Furthermore, by presenting four case studies as examples it will be demonstrated how such claims of video game addiction can be inaccurately applied. It is concluded that the most likely reasons that people play video games excessively are due to either ineffective time management skills, or as a symptomatic response to other underlying problems that they are escaping from, rather than any inherent addictive properties of the actual games."}
{"_id": "b77698ef61449e555fabce5272795d569adb0df1", "title": "Transition-based Dependency DAG Parsing Using Dynamic Oracles", "text": "In most of the dependency parsing studies, dependency relations within a sentence are often presented as a tree structure. Whilst the tree structure is sufficient to represent the surface relations, deep dependencies which may result to multi-headed relations require more general dependency structures, namely Directed Acyclic Graphs (DAGs). This study proposes a new dependency DAG parsing approach which uses a dynamic oracle within a shift-reduce transitionbased parsing framework. Although there is still room for improvement on performance with more feature engineering, we already obtain competitive performances compared to static oracles as a result of our initial experiments conducted on the ITU-METU-Sabanc\u0131 Turkish Treebank (IMST)."}
{"_id": "a29a2aa3ce86e770f8857824003e3fb055d87990", "title": "Exploring Session Context using Distributed Representations of Queries and Reformulations", "text": "Search logs contain examples of frequently occurring patterns of user reformulations of queries. Intuitively, the reformulation \"San Francisco\" -- \"San Francisco 49ers\" is semantically similar to \"Detroit\" -- \"Detroit Lions\". Likewise, \"London\" -- \"things to do in London\" and \"New York\" -- \"New York tourist attractions\" can also be considered similar transitions in intent. The reformulation \"movies\" -- \"new movies\" and \"york\" -- \"New York\", however, are clearly different despite the lexical similarities in the two reformulations. In this paper, we study the distributed representation of queries learnt by deep neural network models, such as the Convolutional Latent Semantic Model, and show that they can be used to represent query reformulations as vectors. These reformulation vectors exhibit favourable properties such as mapping semantically and syntactically similar query changes closer in the embedding space. Our work is motivated by the success of continuous space language models in capturing relationships between words and their meanings using offset vectors. We demonstrate a way to extend the same intuition to represent query reformulations. Furthermore, we show that the distributed representations of queries and reformulations are both useful for modelling session context for query prediction tasks, such as for query auto-completion (QAC) ranking. Our empirical study demonstrates that short-term (session) history context features based on these two representations improves the mean reciprocal rank (MRR) for the QAC ranking task by more than 10% over a supervised ranker baseline. Our results also show that by using features based on both these representations together we achieve a better performance, than either of them individually."}
{"_id": "4f22ad9252ba60f5971c627e686458b220b53110", "title": "E-ethical leadership for virtual project teams", "text": "This paper presents a review of current literature on ethical theories as they relate to ethical leadership in the virtual business environment (e-ethics) and virtual project leadership. Ethical theories are reviewed in relation to virtual project management, such as participative management, Theory Y, and its relationship to utilitarianism; Kantian ethics, motivation, and trust; communitarian ethics, ethic of care and egalitarianism; Stakeholder Theory; and the use of political tactics. Challenges to e-ethical leadership are presented and responses to these issues discussed. The conclusion presents four propositions for future research. The purpose of this paper is to identify secondary literature on e-ethics and how this new area of business ethics may affect the leaders of virtual project teams. 2008 Elsevier Ltd and IPMA. All rights reserved."}
{"_id": "32be839a8cd7c71fb4ba93a60251f05c87a3d286", "title": "1 . RELATED WORK 1 . 1 3 D Reconstruction", "text": "Creating and animating realistic 3D human faces is an important element of virtual reality, video games, and other areas that involve interactive 3D graphics. In this paper, we propose a system to generate photorealistic 3D blendshape-based face models automatically using only a single consumer RGB-D sensor. The capture and processing requires no artistic expertise to operate, takes 15 seconds to capture and generate a single facial expression, and approximately 1 minute of processing time per expression to transform it into a blendshape model. Our main contributions include a complete end-to-end pipeline for capturing and generating photorealistic blendshape models automatically and a registration method that solves dense correspondences between two face scans by utilizing facial landmarks detection and optical flows. We demonstrate the effectiveness of the proposed method by capturing different human subjects with a variety of sensors and puppeteering their 3D faces with real-time facial performance retargeting. The rapid nature of our method allows for just-in-time construction of a digital face. To that end, we also integrated our pipeline with a virtual reality facial performance capture system that allows dynamic embodiment of the generated faces despite partial occlusion of the user\u2019s real face by the head-mounted display."}
{"_id": "9add1f1323c7d5ea08a4c61ddff5c36d773356ca", "title": "Non-linear robust control for inverted-pendulum 2D walking", "text": "We present an approach to high-level control for bipedal walking exemplified with a 2D point-mass inextensible-legs inverted-pendulum model. Balance control authority here is only from step position and trailing-leg push-off, both of which are bounded to reflect actuator limits. The controller is defined implicitly as the solution of an optimization problem. The optimization robustly avoids falling for given bounded disturbances and errors and, given that, minimizes the number of steps to reach a given target speed. The optimization can be computed in advance and stored for interpolated real-time use online. The general form of the resulting optimized controller suggests a few simple principles for regulating walking speed: 1) The robot should take bigger steps when speeding up and should also take bigger steps when slowing down 2) push-off is useful for regulating small changes in speed, but it is fully saturated or inactive for larger changes in speed. While the numerically optimized model is simple, the approach should be applicable to, and we plan to use it for, control of bipedal robots in 3D with many degrees of freedom."}
{"_id": "262f97abfaab2ebef1cb0bc0d189f54851ce876b", "title": "On mining cross-graph quasi-cliques", "text": "Joint mining of multiple data sets can often discover interesting, novel, and reliable patterns which cannot be obtained solely from any single source. For example, in cross-market customer segmentation, a group of customers who behave similarly in multiple markets should be considered as a more coherent and more reliable cluster than clusters found in a single market. As another example, in bioinformatics, by joint mining of gene expression data and protein interaction data, we can find clusters of genes which show coherent expression patterns and also produce interacting proteins. Such clusters may be potential pathways.In this paper, we investigate a novel data mining problem, mining cross-graph quasi-cliques, which is generalized from several interesting applications such as cross-market customer segmentation and joint mining of gene expression data and protein interaction data. We build a general model for mining cross-graph quasi-cliques, show why the complete set of cross-graph quasi-cliques cannot be found by previous data mining methods, and study the complexity of the problem. While the problem is difficult, we develop an efficient algorithm, Crochet, which exploits several interesting and effective techniques and heuristics to efficaciously mine cross-graph quasi-cliques. A systematic performance study is reported on both synthetic and real data sets. We demonstrate some interesting and meaningful cross-graph quasi-cliques in bioinformatics. The experimental results also show that algorithm Crochet is efficient and scalable."}
{"_id": "2bcfcbfe9e6a94ce1b1e483c39ec495acd90d750", "title": "Idle sense: an optimal access method for high throughput and fairness in rate diverse wireless LANs", "text": "We consider wireless LANs such as IEEE 802.11 operating in the unlicensed radio spectrum. While their nominal bit rates have increased considerably, the MAC layer remains practically unchanged despite much research effort spent on improving its performance. We observe that most proposals for tuning the access method focus on a single aspect and disregard others. Our objective is to define an access method optimized for throughput and fairness, able to dynamically adapt to physical channel conditions, to operate near optimum for a wide range of error rates, and to provide equal time shares when hosts use different bit rates.We propose a novel access method derived from 802.11 DCF [2] (Distributed Coordination Function) in which all hosts use similar values of the contention window CW to benefit from good short-term access fairness. We call our method Idle Sense, because each host observes the mean number of idle slots between transmission attempts to dynamically control its contention window. Unlike other proposals, Idle Sense enables each host to estimate its frame error rate, which can be used for switching to the right bit rate. We present simulations showing how the method leads to high throughput, low collision overhead, and low delay. The method also features fast reactivity and time-fair channel allocation."}
{"_id": "73ccc8c2343334116639e14de52f6e271da24221", "title": "Path Planning with Real Time Obstacle Avoidance", "text": "One of the most important areas of research on mobile robots is that of their moving from one point in a space to the other and that too keeping aloof from the different objects in their path i. e. real time obstacle detection. This is the basic problem of path planning through a continuous domain. For this a large number of algorithms have been proposed of which only a few are really good as far as local and global path planning are concerned as to some extent a trade of has to be made between the efficiency of the algorithm and its accuracy. In this project an integrated approach for both local as well as global path planning of a robot has been proposed. The primary algorithm that has been used for path planning is the artificial Potential field approach [1] and a* search algorithm has been used for finding the most optimum path for the robot. Obstacle detection for collision avoidance (a high level planning problem) can be effectively done by doing complex robot operations in real time and distributing the whole problem between different control levels. This project proposes the artificial potential field algorithm not only for static but also for moving obstacles using real time potential field values by creating sub-goals which eventually lead to the main goal of the most optimal complete path found by the A* search algorithm. Apart from these scan line and convex hull techniques have been used to improve the robustness of the algorithm. To some extent the shape and size of a robot has also been taken into consideration. The effectiveness of these algorithms has been"}
{"_id": "f9537e14be6c14aaeea9983e4a96a1fa60c634bf", "title": "Mining Human Activity Patterns From Smart Home Big Data for Health Care Applications", "text": "Nowadays, there is an ever-increasing migration of people to urban areas. Health care service is one of the most challenging aspects that is greatly affected by the vast influx of people to city centers. Consequently, cities around the world are investing heavily in digital transformation in an effort to provide healthier ecosystems for people. In such a transformation, millions of homes are being equipped with smart devices (e.g., smart meters, sensors, and so on), which generate massive volumes of fine-grained and indexical data that can be analyzed to support smart city services. In this paper, we propose a model that utilizes smart home big data as a means of learning and discovering human activity patterns for health care applications. We propose the use of frequent pattern mining, cluster analysis, and prediction to measure and analyze energy usage changes sparked by occupants\u2019 behavior. Since people\u2019s habits are mostly identified by everyday routines, discovering these routines allows us to recognize anomalous activities that may indicate people\u2019s difficulties in taking care for themselves, such as not preparing food or not using a shower/bath. This paper addresses the need to analyze temporal energy consumption patterns at the appliance level, which is directly related to human activities. For the evaluation of the proposed mechanism, this paper uses the U.K. Domestic Appliance Level Electricity data set\u2014time series data of power consumption collected from 2012 to 2015 with the time resolution of 6 s for five houses with 109 appliances from Southern England. The data from smart meters are recursively mined in the quantum/data slice of 24 h, and the results are maintained across successive mining exercises. The results of identifying human activity patterns from appliance usage are presented in detail in this paper along with the accuracy of short- and long-term predictions."}
{"_id": "81fbb911501df6b6f4a6b5ce2be9bf45a0ec89c1", "title": "Blockchain-based sharing services : What blockchain technology can contribute to smart cities", "text": "Background: The notion of smart city has grown popular over the past few years. It embraces several dimensions depending on the meaning of the word \u201csmart\u201d and benefits from innovative applications of new kinds of information and communications technology to support communal sharing. Methods: By relying on prior literature, this paper proposes a conceptual framework with three dimensions: (1) human, (2) technology, and (3) organization, and explores a set of fundamental factors that make a city smart from a sharing economy perspective. Results: Using this triangle framework, we discuss what emerging blockchain technology may contribute to these factors and how its elements can help smart cities develop sharing services. Conclusions: This study discusses how blockchain-based sharing services can contribute to smart cities based on a conceptual framework. We hope it can stimulate interest in theory and practice to foster discussions in this area."}
{"_id": "4c901116b5f47b1416a592d6aff7f786dc252901", "title": "R\u00e9nyi Divergence and Kullback-Leibler Divergence", "text": "Re\u0301nyi divergence is related to Re\u0301nyi entropy much like Kullback-Leibler divergence is related to Shannon's entropy, and comes up in many settings. It was introduced by Re\u0301nyi as a measure of information that satisfies almost the same axioms as Kullback-Leibler divergence, and depends on a parameter that is called its order. In particular, the Re\u0301nyi divergence of order 1 equals the Kullback-Leibler divergence. We review and extend the most important properties of Re\u0301nyi divergence and Kullback- Leibler divergence, including convexity, continuity, limits of \u03c3-algebras, and the relation of the special order 0 to the Gaussian dichotomy and contiguity. We also show how to generalize the Pythagorean inequality to orders different from 1, and we extend the known equivalence between channel capacity and minimax redundancy to continuous channel inputs (for all orders) and present several other minimax results."}
{"_id": "3ac814bf416d7769ea515b2f7324a0d3b5cba3b4", "title": "Advanced Metamorphic Techniques in Computer Viruses", "text": "Nowadays viruses use polymorphic techniques to mutate their code on each replication, thus evading detection by antiviruses. However detection by emulation can defeat simpl e polymorphism: thus metamorphic techniques are used which thoro ughly change the viral code, even after decryption. We briefly deta il this evolution of virus protection techniques against detectio n and then study the METAPHOR virus, today\u2019s most advanced metamorphic virus. Keywords\u2014Computer virus, Viral mutation, Polymorphism, Metamorphism, MetaPHOR, Virus history, Obfuscation, Viral gen etic techniques"}
{"_id": "bbabd8a25260bf2413befcd756077efa81b1c618", "title": "Split learning for health: Distributed deep learning without sharing raw patient data", "text": "Can health entities collaboratively train deep learning models without sharing sensitive raw data? This paper proposes several configurations of a distributed deep learning method called SplitNN to facilitate such collaborations. SplitNN does not share raw data or model details with collaborating institutions. The proposed configurations of splitNN cater to practical settings of i) entities holding different modalities of patient data, ii) centralized and local health entities collaborating on multiple tasks and iii) learning without sharing labels. We compare performance and resource efficiency trade-offs of splitNN and other distributed deep learning methods like federated learning, large batch synchronous stochastic gradient descent and show highly encouraging results for splitNN."}
{"_id": "ccbf2067133a2caa7f8a01ebdcf882e85a704374", "title": "Computer proficiency questionnaire: assessing low and high computer proficient seniors.", "text": "PURPOSE OF THE STUDY\nComputers and the Internet have the potential to enrich the lives of seniors and aid in the performance of important tasks required for independent living. A prerequisite for reaping these benefits is having the skills needed to use these systems, which is highly dependent on proper training. One prerequisite for efficient and effective training is being able to gauge current levels of proficiency. We developed a new measure (the Computer Proficiency Questionnaire, or CPQ) to measure computer proficiency in the domains of computer basics, printing, communication, Internet, calendaring software, and multimedia use. Our aim was to develop a measure appropriate for individuals with a wide range of proficiencies from noncomputer users to extremely skilled users.\n\n\nDESIGN AND METHODS\nTo assess the reliability and validity of the CPQ, a diverse sample of older adults, including 276 older adults with no or minimal computer experience, was recruited and asked to complete the CPQ.\n\n\nRESULTS\nThe CPQ demonstrated excellent reliability (Cronbach's \u03b1 = .98), with subscale reliabilities ranging from .86 to .97. Age, computer use, and general technology use all predicted CPQ scores. Factor analysis revealed three main factors of proficiency related to Internet and e-mail use; communication and calendaring; and computer basics. Based on our findings, we also developed a short-form CPQ (CPQ-12) with similar properties but 21 fewer questions.\n\n\nIMPLICATIONS\nThe CPQ and CPQ-12 are useful tools to gauge computer proficiency for training and research purposes, even among low computer proficient older adults."}
{"_id": "3ca6ce29be2b5220e099229668eca3990c160680", "title": "An empirical evaluation for the intrusion detection features based on machine learning and feature selection methods", "text": "Despite the great developments in information technology, particularly the Internet, computer networks, global information exchange, and its positive impact in all areas of daily life, it has also contributed to the development of penetration and intrusion which forms a high risk to the security of information organizations, government agencies, and causes large economic losses. There are many techniques designed for protection such as firewall and intrusion detection systems (IDS). IDS is a set of software and/or hardware techniques used to detect hacker's activities in computer systems. Two types of anomalies are used in IDS to detect intrusive activities different from normal user behavior. Misuse relies on the knowledge base that contains all known attack techniques and intrusion is discovered through research in this knowledge base. Artificial intelligence techniques have been introduced to improve the performance of these systems. The importance of IDS is to identify unauthorized access attempting to compromise confidentiality, integrity or availability of the computer network. This paper investigates the Intrusion Detection (ID) problem using three machine learning algorithms namely, BayesNet algorithm, Multi-Layer Perceptron (MLP), and Support Vector Machine (SVM). The algorithms are applied on a real, Management Information Based (MIB) dataset that is collected from real life environment. To enhance the detection process accuracy, a set of feature selection approaches is used; Infogain (IG), ReleifF (RF), and Genetic Search (GS). Our experiments show that the three feature selection methods have enhanced the classification performance. GS with bayesNet, MLP and SVM give high accuracy rates, more specifically the BayesNet with the GS accuracy rate is 99.9%."}
{"_id": "7e5c9cf94c713c5669adafb989067b9b63aa129d", "title": "IGCTs vs. IGBTs for circuit breakers in advanced ship electrical systems", "text": "Solid-State Circuit Breakers (SSCBs) are being considered for several of the new Navy ships' power distribution platforms. This is because SSCBs offer many advantages to electrical systems that conventional Electro-Mechanical Circuit Breakers (EMCBs) cannot. These advantages include much faster interruption of overcurrents, based on lower current thresholds and/or di/dt rate of current rise; reduced energy flowing into the electrical system during faults; and reduced system ringing during fault interruptions. Additional advantages of SSCBs are the elimination of electrical arcing, which is particularly useful for DC and higher voltage distribution systems and systems in environments with volatile gases or explosives; reduced acoustic noise and Electro-Magnetic Interference (EMI) levels during circuit breaker switching; and reduced maintenance and lifecycle costs during the typical 40-year lifetime of the ship. This paper describes the advantages and disadvantages of SSCBs using IGCTs, SSCBs using IGBTs and conventional EMCBs. It will also describe the differences between IGCTs and IGBTs and will consider the design of the bus voltage snubber. This paper will also show how redesigning the ship's electrical system, while considering these important issues, can result in a more reliable, cost-effective power distribution system for the ship."}
{"_id": "5da43ff9c246ae37d9006bba3406009cb4fb1dcf", "title": "Lifelong Machine Learning November , 2016", "text": "Lifelong machine learning (or lifelong learning) is an advanced machine learning paradigm that learns continuously, accumulates the knowledge learned in previous tasks, and uses it to help future learning. In the process, the learner becomes more and more knowledgeable and effective at learning. This learning ability is one of the hallmarks of human intelligence. However, the current dominant machine learning paradigm learns in isolation: given a training dataset, it runs a machine learning algorithm on the dataset to produce a model. It makes no attempt to retain the learned knowledge and use it in future learning. Although this isolated learning paradigm has been very successful, it requires a large number of training examples, and is only suitable for well-defined and narrow tasks. In comparison, we humans can learn effectively with a few examples because we have accumulated so much knowledge in the past which enables us to learn with little data or effort. Lifelong learning aims to achieve this capability. As statistical machine learning matures, it is time to make a major effort to break the isolated learning tradition and to study lifelong learning to bring machine learning to a new height. Applications such as intelligent assistants, chatbots, and physical robots that interact with humans and systems in real-life environments are also calling for such lifelong learning capabilities. Without the ability to accumulate the learned knowledge and use it to learn more knowledge incrementally, a system will probably never be truly intelligent. This book serves as an introductory text and survey to lifelong learning."}
{"_id": "2f845a587699129bdd130e567a25785bcb5e97af", "title": "Who are the crowdworkers?: shifting demographics in mechanical turk", "text": "Amazon Mechanical Turk (MTurk) is a crowdsourcing system in which tasks are distributed to a population of thousands of anonymous workers for completion. This system is increasingly popular with researchers and developers. Here we extend previous studies of the demographics and usage behaviors of MTurk workers. We describe how the worker population has changed over time, shifting from a primarily moderate-income, U.S.-based workforce towards an increasingly international group with a significant population of young, well-educated Indian workers. This change in population points to how workers may treat Turking as a full-time job, which they rely on to make ends meet."}
{"_id": "33f98a1261062480963e4e93e83aab458841f00c", "title": "On the Use of College Students in Social Science Research : Insights from a Second-Order Meta-analysis", "text": "A second-order meta-analysis was conducted to assess the implications of using college student subjects in social science research. Four meta-analyses investigating response homogeneity (cumulative N 1 650,000) and 30 meta-analyses reporting effect sizes for 65 behavioral or psychological relationships (cumulative N 1 350,000) provided comparative data for college student subjects and nonstudent (adult) subjects for the present research. In general, responses of college student subjects were found to be slightly more homogeneous than those of nonstudent subjects.Moreover, effect sizes derived from college student subjects frequently differed from those derived from nonstudent subjects both directionally and in magnitude. Because there was no systematic pattern to the differences observed, caution must be exercised when attempting to extend any relationship found using college student subjects to a nonstudent (adult) population. The results augur in favor of, and emphasize the importance of, replicating research based on college student subjects withnonstudent subjects before attempting any generalizations."}
{"_id": "16fadde3e68bba301f9829b3f99157191106bd0f", "title": "Utility data annotation with Amazon Mechanical Turk", "text": "We show how to outsource data annotation to Amazon Mechanical Turk. Doing so has produced annotations in quite large numbers relatively cheaply. The quality is good, and can be checked and controlled. Annotations are produced quickly. We describe results for several different annotation problems. We describe some strategies for determining when the task is well specified and properly priced."}
{"_id": "26196511e307ec89466af06751a66ee2d95b6305", "title": "Cheap and Fast - But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks", "text": "Human linguistic annotation is crucial for many natural language processing tasks but can be expensive and time-consuming. We explore the use of Amazon\u2019s Mechanical Turk system, a significantly cheaper and faster method for collecting annotations from a broad base of paid non-expert contributors over the Web. We investigate five tasks: affect recognition, word similarity, recognizing textual entailment, event temporal ordering, and word sense disambiguation. For all five, we show high agreement between Mechanical Turk non-expert annotations and existing gold standard labels provided by expert labelers. For the task of affect recognition, we also show that using non-expert labels for training machine learning algorithms can be as effective as using gold standard annotations from experts. We propose a technique for bias correction that significantly improves annotation quality on two tasks. We conclude that many large labeling tasks can be effectively designed and carried out in this method at a fraction of the usual expense."}
{"_id": "26a2a2f683b6e6d93c510a2f8065870c54b05f05", "title": "Crowdsourcing user studies with Mechanical Turk", "text": "User studies are important for many aspects of the design process and involve techniques ranging from informal surveys to rigorous laboratory studies. However, the costs involved in engaging users often requires practitioners to trade off between sample size, time requirements, and monetary costs. Micro-task markets, such as Amazon's Mechanical Turk, offer a potential paradigm for engaging a large number of users for low time and monetary costs. Here we investigate the utility of a micro-task market for collecting user measurements, and discuss design considerations for developing remote micro user evaluation tasks. Although micro-task markets have great potential for rapidly collecting user measurements at low costs, we found that special care is needed in formulating tasks in order to harness the capabilities of the approach."}
{"_id": "1cacd4048d1d302440e602e20b65937132911b82", "title": "FORWARD AND INVERSE KINEMATICS STUDY OF INDUSTRIAL ROBOTS TAKING INTO ACCOUNT CONSTRUCTIVE AND FUNCTIONAL PARAMETER ' S MODELING", "text": "Forward and inverse kinematic studies of industrial robots (IR) have been developed and presented in a large number of papers. However, eve n g neral mathematic formalization is usually almost correct, (basically following up general Har tenberg Denavit (H-D) conventions and associated homogenous transformation matrix), only few papers presents kinematic models ready to be directly implemented on a real scale industrial robot or as well able to evaluate kinematics behavior of a real scale IR specific model. That is usually due to som e inconsistencies in modeling, the most frequently of these referring on: the incomplete formalization of the full set of constructive and functional parame ters (that mandatory need to be considered in case of a specific real IR's model), avoidance of considering IR's specific design features, (as joint dimensions a d links dimensions are) leading to wrongly locat ing the reference frames used for expressing homogenous coordinate transformations, as well as missing of the validation procedures able to check the correct itude of the mathematical models, previously to its implementing in a real scale IR's controller. That is why present paper shows first a completely new approach for IR's forward an inverse kinematics, in terms of IR's analytical modeling by taking into account the full set of IR's constructive and funct ional parameters of two different IR's models. Then , for both direct and inverse mathematical models complet e symbolic formalization and full set of solutions for forward and inverse kinematics are presented for bo th IR types. In order to study mathematical models applicability on the real scale IR, two specific IR models were studied: an ABB serial-link open chain kinematics IR and a Fanuc serial-link closed chain ki ematics IR. Numerical results were verified by cross validation using both analytically calculatio ns results and by mean of a constrained 3D CAD mode l used to geometrically verify the results. The param etric form of the model elaborated in PTC Mathcad 1 4 allows a quick reconfiguration for other robot's mo dels having similar configurations. Results can be also used for solving dynamics, path planning and contro l p blems in case of real scale IR."}
{"_id": "360071e2f644fdecacaddca9d4af6188dc89846b", "title": "Comparing the effects of different individualized music interventions for elderly individuals with severe dementia", "text": "BACKGROUND\nIndividuals with dementia often experience poor quality of life (QOL) due to behavioral and psychological symptoms of dementia (BPSD). Music therapy can reduce BPSD, but most studies have focused on patients with mild to moderate dementia. We hypothesized that music intervention would have beneficial effects compared with a no-music control condition, and that interactive music intervention would have stronger effects than passive music intervention.\n\n\nMETHODS\nThirty-nine individuals with severe Alzheimer's disease were randomly and blindly assigned to two music intervention groups (passive or interactive) and a no-music Control group. Music intervention involved individualized music. Short-term effects were evaluated via emotional response and stress levels measured with the autonomic nerve index and the Faces Scale. Long-term effects were evaluated by BPSD changes using the Behavioral Pathology in Alzheimer's Disease (BEHAVE-AD) Rating Scale.\n\n\nRESULTS\nPassive and interactive music interventions caused short-term parasympathetic dominance. Interactive intervention caused the greatest improvement in emotional state. Greater long-term reduction in BPSD was observed following interactive intervention, compared with passive music intervention and a no-music control condition.\n\n\nCONCLUSION\nMusic intervention can reduce stress in individuals with severe dementia, with interactive interventions exhibiting the strongest beneficial effects. Since interactive music intervention can restore residual cognitive and emotional function, this approach may be useful for aiding severe dementia patients' relationships with others and improving QOL. The registration number of the trial and the name of the trial registry are UMIN000008801 and \"Examination of Effective Nursing Intervention for Music Therapy for Severe Dementia Elderly Person\" respectively."}
{"_id": "07798b93ce1f303b33ef4eae9e287bb5b3323c0d", "title": "Generating Legal Arguments and Predictions from Case Texts", "text": "In this paper, we present methods for automatically finding abstract, legally relevant concepts in case texts and demonstrate how they can be used to make predictions of case outcomes, given the texts as inputs.In a set of experiments to test these methods, we focus on the open question of how best to represent legal text for finding abstract concepts. We compare different ways of representing legal case texts in order to test whether adding domain knowledge and some linguistic information can improve performance.We found that replacing individual names by roles in the case texts led to better indexing, and that adding certain syntactic and semantic information, in the form of Propositional Patterns that capture a sense of \"who did what\", led to better prediction. Our experiments also showed that of three learning algorithms, Nearest Neighbor worked best in learning how to identify indexing concepts in texts.In these experiments, we introduced a prototype system that can reason with text cases; it analyzes a case, predicts its outcome considering other cases in the database, and explains the prediction, all starting with a textual description of the case's facts as input."}
{"_id": "d68c3e03852c40a48e02ac46ff8fbafae2826e8a", "title": "GUIDANCE LAW EVALUATION FOR MISSILE GUIDANCE SYSTEMS", "text": "In missile guidance system, to reduce the interception \u201cmiss distance,\u201d it is important to choose a suitable guidance law and navigation constant. This paper investigates and compares the system behavior of guidance laws under different navigation constants. Based on use of the adjoint technique, miss distance sensitivity analyses which consider the system noise, target step maneuver, initial heading error and system parameters for different guidance laws and navigation constants are presented. Based on these analyses, some suggestions for choosing a suitable guidance law and navigation constant are given for the design of missile guidance systems. Also, a suggestion for the optimal escape time for pilots of fighter planes is given."}
{"_id": "276d43e80d64ce8ee3b07aaa3071220f13334a59", "title": "Ant colony system: a cooperative learning approach to the traveling salesman problem", "text": "This paper introduces ant colony system (ACS), a distributed algorithm that is applied to the traveling salesman problem (TSP). In ACS, a set of cooperating agents called ants cooperate to find good solutions to TSPs. Ants cooperate using an indirect form of communication mediated by pheromone they deposit on the edges of the TSP graph while building solutions. We study ACS by running experiments to understand its operation. The results show that ACS outperforms other nature-inspired algorithms such as simulated annealing and evolutionary computation, and we conclude comparing ACS-3-opt, a version of ACS augmented with a local search procedure, to some of the best performing algorithms for symmetric and asymmetric TSPs. Accepted for publication in the IEEE Transactions on Evolutionary Computation, Vol.1, No.1, 1997. In press."}
{"_id": "623fbe2dc910c032cbb7a9c8f89b2b6212d2dc8d", "title": "Predicting Player Churn in Multiplayer Games using Goal-Weighted Empowerment", "text": "The extent to which players feel compelled to play a game is a primary factor in determining that game\u2019s success. Using ideas from self-determination theory, we propose that the drive to play games is related to player\u2019s ability to exercise their mastery drive. In self-determination theory, increasing one\u2019s control is intrinsically rewarding, however it is difficult to instantiate this theory to make concrete predictions of when a player will quit a game early in favor of another activity. The problem of predicting within-game churn events involves modeling and predicting a player\u2019s motivational state to remain in a game. We combine new motivation theory ideas with machine learning methods to develop a computational model that postulates that player\u2019s satisfaction is directly related to their perceived locus of control, extrinsic vs. intrinsic, and hypothesize a set of measurable signals that mediate a player\u2019s locus of control. An agent will continue goal pursuit within the game based on its \u201cempowerment\u201d, which is essentially its ability to predict the future based on its own actions. This indicator of locus of control accounts for both the feasibility of the goal and whether expected progress toward a selected goal is being achieved. A player\u2019s behavior can be used to estimate their empowerment by correlating their continuation of goal pursuit with measures of their expectations of progress and goal-achievability. We apply these concepts and build a model to predict player when players will quit a game early for the multiplayer game, Dota2."}
{"_id": "0afbea311a0dcd591106893228abd395a2a93dad", "title": "The Mobile Sensing Platform: An Embedded Activity Recognition System", "text": "Activity-aware systems have inspired novel user interfaces and new applications in smart environments, surveillance, emergency response, and military missions. Systems that recognize human activities from body-worn sensors can further open the door to a world of healthcare applications, such as fitness monitoring, eldercare support, long-term preventive and chronic care, and cognitive assistance. Wearable systems have the advantage of being with the user continuously. So, for example, a fitness application could use real-time activity information to encourage users to perform opportunistic activities. Furthermore, the general public is more likely to accept such activity recognition systems because they are usually easy to turn off or remove."}
{"_id": "2fd8689da8211fa0a9c224a91610ed4083adeb55", "title": "Breast shape (ptosis) as a marker of a woman's breast attractiveness and age: Evidence from Poland and Papua.", "text": "OBJECTIVES\nA women's breast is a sex-specific and aesthetic bodily attribute. It is suggested that breast morphology signals maturity, health, and fecundity. The perception of a woman's attractiveness and age depends on various cues, such as breast size or areola pigmentation. Conducted in Poland and Papua, the current study investigated how breast attractiveness, and the further estimate of a woman's age based on her breast's appearance, is affected by the occurrence of breast ptosis (ie, sagginess, droopiness).\n\n\nMETHODS\nIn the Polish sample, 57 women and 50 men (N\u2009=\u2009107) were presented with sketches of breasts manipulated to represent different stages of ptosis based on two different breast ptosis classifications. The participants were asked to rate the breast attractiveness and age of the woman whose breasts were depicted in each sketch. In Papua, 45 men aged 20 to 75\u2009years took part in the study, which was conducted using only one of the classifications of breast ptosis.\n\n\nRESULTS\nRegardless of the classification used, the results showed that the assessed attractiveness of the breasts decreased as the estimated age increased with respect to the more ptotic breasts depicted in the sketches. The results for Papuan raters were the same as for the Polish sample.\n\n\nCONCLUSIONS\nBreast ptosis may be yet another physical trait that affects the perception and preferences of a potential sexual partner. The consistency in ratings between Polish and Papuan raters suggests that the tendency to assess ptotic breasts with aging and a loss of attractiveness is cross-culturally universal."}
{"_id": "f6bf94bc70e4aee7b9395341fa1f8f24ae57d147", "title": "Growth charts for individuals with Smith-Lemli-Opitz syndrome.", "text": "Smith-Lemli-Opitz syndrome (SLOS) is a rare multiple congenital anomaly neurodevelopmental syndrome of impaired cholesterol synthesis. Growth restriction and developmental delay are very common clinical manifestations of SLOS. The degree, etiology, and consequences of growth restriction in SLOS remain an area of limited knowledge to the scientific community. There have been no studies describing the growth parameters and providing reference growth charts for individuals with SLOS. Our longitudinal data from 78 patients between the ages of 0.1 and 16 years with SLOS show a growth restriction of about two standard deviations below the Centers for Disease Control (CDC) norms for age. This study represents comprehensive anthropometric data from the largest cohort available, and proposes growth charts for widespread use in the management and study of individuals with SLOS."}
{"_id": "83050af51958f37c9fea4ec0867334e67fec7f7a", "title": "Superhydrophobic silanized melamine sponges as high efficiency oil absorbent materials.", "text": "Superhydrophobic sponges and sponge-like materials have attracted great attention recently as potential sorbent materials for oil spill cleanup due to their excellent sorption capacity and high selectivity. A major challenge to their broad use is the fabrication of superhydrophobic sponges with superior recyclability, good mechanical strength, low cost, and manufacture scalability. In this study, we demonstrate a facile, cost-effective, and scalable method to fabricate robust, superhydrophobic sponges through the silanization of commercial melamine sponges via a solution-immersion process. The silanization was achieved through secondary amine groups on the surface of the sponge skeletons with alkylsilane compounds, forming self-assembled monolayers on the surface of sponge skeletons. This resulted in our ability to tune the surface properties of the sponges from being hydrophilic to superhydrophobic with a water contact angle of 151.0\u00b0. The superhydrophobic silanized melamine sponge exhibited excellent sorption capacity for a wide range of organic solvents and oils, from 82 to 163 times its own weight, depending on the polarity and density of the employed organic solvents and oils, and high selectivity and outstanding recyclability with an absorption capacity retention greater than 90% after 1000 cycles. These findings offer an effective approach for oil spill containment and environmental remediation."}
{"_id": "631e5241a4d7d7832bcc68c035f6d9073cbd318d", "title": "Data mining models for student careers", "text": "This paper presents a data mining methodology to analyze the careers of University graduated students. We present different approaches based on clustering and sequential patterns techniques in order to identify strategies for improving the performance of students and the scheduling of exams. We introduce an ideal career as the career of an ideal student which has taken each examination just after the end of the corresponding course, without delays. We then compare the career of a generic student with the ideal one by using the different techniques just introduced. Finally, we apply the methodology to a real case study and interpret the results which underline that the more students follow the order given by the ideal career the more they get good performance in terms of graduation time and final grade. 2015 Elsevier Ltd. All rights reserved."}
{"_id": "be227e18173398ba88621c4168258c231be44273", "title": "AdGraph: A Machine Learning Approach to Automatic and Effective Adblocking", "text": "Filter lists are widely deployed by adblockers to block ads and other forms of undesirable content in web browsers. However, these filter lists are manually curated based on informal crowdsourced feedback, which brings with it a significant number of maintenance challenges. To address these challenges, we propose a machine learning approach for automatic and effective adblocking called AdGraph. Our approach relies on information obtained from multiple layers of the web stack (HTML, HTTP, and JavaScript) to train a machine learning classifier to block ads and trackers. Our evaluation on Alexa top-10K websites shows that AdGraph automatically and effectively blocks ads and trackers with 97.7% accuracy. Our manual analysis shows that AdGraph has better recall than filter lists, it blocks 16%more ads and trackers with 65% accuracy. We also show that AdGraph is fairly robust against adversarial obfuscation by publishers and advertisers that bypass filter lists."}
{"_id": "1fef6cb55da4eb34fe5444546485ddc41e845758", "title": "A cluster validity index for fuzzy clustering", "text": "Cluster validity indexes have been used to evaluate the fitness of partitions produced by clustering algorithms. This paper presents a new validity index for fuzzy clustering called a partition coefficient and exponential separation (PCAES) index. It uses the factors from a normalized partition coefficient and an exponential separation measure for each cluster and then pools these two factors to create the PCAES validity index. Considerations involving the compactness and separation measures for each cluster provide different cluster validity merits. In this paper, we also discuss the problem that the validity indexes face in a noisy environment. The efficiency of the proposed PCAES index is compared with several popular validity indexes. More information about these indexes is acquired in series of numerical comparisons and also three real data sets of Iris, Glass and Vowel. The results of comparative study show that the proposed PCAES index has high ability in producing a good cluster number estimate and in addition, it provides a new point of view for cluster validity in a noisy environment. 2004 Elsevier B.V. All rights reserved."}
{"_id": "68b655f1340e8484c46f521309330a311d634e3b", "title": "Modeling and Control of a Flux-Modulated Compound-Structure Permanent-Magnet Synchronous Machine for Hybrid Electric Vehicles", "text": "The compound-structure permanent-magnet synchronous machine (CS-PMSM), comprising a double rotor machine (DRM) and a permanent-magnet (PM) motor, is a promising electronic-continuously variable transmission (e-CVT) concept for hybrid electric vehicles (HEVs). By CS-PMSM, independent speed and torque control of the vehicle engine is realized without a planetary gear unit. However, the slip rings and brushes of the conventional CS-PMSM are considered a major drawback for vehicle application. In this paper, a brushless flux-modulated CS-PMSM is investigated. The operating principle and basic working modes of the CS-PMSM are discussed. Mathematical models of the CS-PMSM system are given, and joint control of the two integrated machines is proposed. As one rotor of the DRM is mechanically connected with the rotor of the PM motor, special rotor position detection and torque allocation methods are required. Simulation is carried out by Matlab/Simulink, and the feasibility of the control system is proven. Considering the complexity of the controller, a single digital signal processor (DSP) is used to perform the interconnected control of dual machines instead of two separate ones, and a typical hardware implementation is proposed."}
{"_id": "d3a3d15a32644beffaac4322b9f165ed51cfd99b", "title": "Eye detection by using deep learning", "text": "In recent years, deep learning algorithm has been one of the most used method in machine learning. Success rate of the most popular machine learning problems has been increased by using it. In this work, we develop an eye detection method by using a deep neural network. The designed network, which is accepted as an input by Caffe, has 3 convolution layers and 3 max pooling layers. This model has been trained with 16K positive and 52K negative image patches. The last layer of the network is the classification layer which operates a softmax algorithm. The trained model has been tested with images, which were provided on FDDB and CACD datasets, and also compared with Haar eye detection algorithm. Recall value of the method is higher than the Haar algorithm on the two datasets. However, according to the precision rate, the Haar algorithm is successful on FDDB dataset."}
{"_id": "b2f4404b4b164636ba0a6356a60f5d2a14c7820b", "title": "Cognitive dysfunction in psychiatric disorders: characteristics, causes and the quest for improved therapy", "text": "Studies of psychiatric disorders have traditionally focused on emotional symptoms such as depression, anxiety and hallucinations. However, poorly controlled cognitive deficits are equally prominent and severely compromise quality of life, including social and professional integration. Consequently, intensive efforts are being made to characterize the cellular and cerebral circuits underpinning cognitive function, define the nature and causes of cognitive impairment in psychiatric disorders and identify more effective treatments. Successful development will depend on rigorous validation in animal models as well as in patients, including measures of real-world cognitive functioning. This article critically discusses these issues, highlighting the challenges and opportunities for improving cognition in individuals suffering from psychiatric disorders."}
{"_id": "cc2df2a10488cbbf5b16e9d8ed16c00b98328556", "title": "D-AMINO ACID OXIDASE: A REVIEW", "text": "Over many years, D-serine and glycine were found to be the endogenous ligands for glycine-binding site for N-methyl-D-aspartate receptors. D-serine before being used up by the cells undergoes oxidative deamination by the enzyme D-amino acid oxidase (DAAO) and makes D-serine levels reduced in the brain, thereby affecting brain functions. In this review, we will discuss about the synthesis, location, therapeutic potential of DAAO function, role in cognition, and neuropathic pain."}
{"_id": "743959ba0f8e696635665e155cb177792f787205", "title": "A Survey of Mobile Crowdsensing Techniques: A Critical Component for the Internet of Things", "text": "Mobile crowdsensing serves as a critical building block for emerging Internet of Things (IoT) applications. However, the sensing devices continuously generate a large amount of data, which consumes much resources (e.g., bandwidth, energy, and storage) and may sacrifice the Quality-of-Service (QoS) of applications. Prior work has demonstrated that there is significant redundancy in the content of the sensed data. By judiciously reducing redundant data, data size and load can be significantly reduced, thereby reducing resource cost and facilitating the timely delivery of unique, probably critical information and enhancing QoS. This article presents a survey of existing works on mobile crowdsensing strategies with an emphasis on reducing resource cost and achieving high QoS. We start by introducing the motivation for this survey and present the necessary background of crowdsensing and IoT. We then present various mobile crowdsensing strategies and discuss their strengths and limitations. Finally, we discuss future research directions for mobile crowdsensing for IoT. The survey addresses a broad range of techniques, methods, models, systems, and applications related to mobile crowdsensing and IoT. Our goal is not only to analyze and compare the strategies proposed in prior works, but also to discuss their applicability toward the IoT and provide guidance on future research directions for mobile crowdsensing."}
{"_id": "399a667f4f3388950b56c78005c872ad02b53085", "title": "End-to-end Learning of Latent Dirichlet Allocation by Mirror-Descent Back Propagation", "text": "We develop a fully discriminative learning approach for supervised Latent Dirichlet Allocation (LDA) model, which maximizes the posterior probability of the prediction variable given the input document. Different from traditional variational learning or Gibbs sampling approaches, the proposed learning method applies (i) the mirror descent algorithm for exact maximum a posterior inference and (ii) back propagation with stochastic gradient descent for model parameter estimation, leading to scalable learning of the model in an end-to-end discriminative manner. As a byproduct, we also apply this technique to develop a new learning method for the traditional unsupervised LDA model. Experimental results on two real-world regression and classification tasks show that the proposed methods significantly outperform the previous supervised/unsupervised LDA learning methods."}
{"_id": "c28a16aa8bdc1013322c32a792d1b88840c44fbf", "title": "Multi-Objective Optimization of 50 kW/85 kHz IPT System for Public Transport", "text": "Inductive power transfer (IPT) is an attractive solution for the automated battery charging of public transport electric vehicles (EVs) with low maintenance requirements. This paper presents the design of a 50 kW/85 kHz contactless EV charger, with a focus on the IPT transmitter and receiver coils. The multi-objective magnetics optimization reveals the Pareto tradeoffs and performance limitations in terms of high efficiency, high power density, and low stray field for high-power IPT systems without moving mechanical parts. A full-scale hardware prototype is realized and experimentally investigated. The dc-dc conversion efficiency, including all the power electronics stages, is measured as 95.8% at 50 kW power transfer across an air gap of 160 mm (coil dimensions 410 \u00d7 760 \u00d7 60 mm3). With 150 mm coil misalignment in the worst case direction, the dc-dc efficiency drops to 92%. The measurements of the magnetic stray field show compliance with ICNIRP 2010 at 800 mm distance from the IPT coil center."}
{"_id": "64412b9a694ce470faf57186f248ca8ec027ee8b", "title": "On the Effectiveness of Minimal Context Selection for Robust Question Answering", "text": "Machine learning models for question-answering (QA), where given a question 1 and a passage, the learner must select some span in the passage as an answer, are 2 known to be brittle. By inserting a single nuisance sentence into the passage, an 3 adversary can fool the model into selecting the wrong span. A promising new 4 approach for QA decomposes the task into two stages: (i) select relevant sentences 5 from the passage; and (ii) select a span among those sentences. Intuitively, if 6 the sentence selector excludes the offending sentence, then the downstream span 7 selector will be robust. While recent work has hinted at the potential robustness 8 of two-stage QA, these methods have never, to our knowledge, been explicitly 9 combined with adversarial training. This paper offers a thorough empirical in10 vestigation of adversarial robustness, demonstrating that although the two-stage 11 approach lags behind single-stage span selection, adversarial training improves its 12 performance significantly, leading to an improvement of over 22 points in F1 score 13 over the adversarially-trained single-stage model. 14"}
{"_id": "5f6d5608cf1b3071c938a2271637fc555bf53231", "title": "Hyracks: A flexible and extensible foundation for data-intensive computing", "text": "Hyracks is a new partitioned-parallel software platform designed to run data-intensive computations on large shared-nothing clusters of computers. Hyracks allows users to express a computation as a DAG of data operators and connectors. Operators operate on partitions of input data and produce partitions of output data, while connectors repartition operators' outputs to make the newly produced partitions available at the consuming operators. We describe the Hyracks end user model, for authors of dataflow jobs, and the extension model for users who wish to augment Hyracks' built-in library with new operator and/or connector types. We also describe our initial Hyracks implementation. Since Hyracks is in roughly the same space as the open source Hadoop platform, we compare Hyracks with Hadoop experimentally for several different kinds of use cases. The initial results demonstrate that Hyracks has significant promise as a next-generation platform for data-intensive applications."}
{"_id": "2ea28c8b3f4ec3eb328012207cc41b852f611258", "title": "An improved traffic signs recognition and tracking method for driver assistance system", "text": "We introduce a new computer vision based system for robust traffic sign recognition and tracking. Such a system presents a vital support for driver assistance in an intelligent automotive. Firstly, a color based segmentation method is applied to generate traffic sign candidate regions. Secondly, the HoG features are extracted to encode the detected traffic signs and then generating the feature vector. This vector is used as an input to an SVM classifier to identify the traffic sign class. Finally, a tracking method based on optical flow is performed to ensure a continuous capture of the recognized traffic sign while accelerating the execution time. Our method affords high precision rates under different challenging conditions."}
{"_id": "12d6003433ce030090b109cd9c55f802fde08315", "title": "Applying a flexible microactuator to robotic mechanisms", "text": "A flexible microactuator (FMA) driven by an electropneumatic (or electrohydraulic) system has been developed. It has three degrees of freedom-pitch, yaw, and stretch-making it suitable for robotic mechanisms such as fingers, arms, or legs. It is made of fiber-reinforced rubber, and the mechanism is very simple, enabling miniature robots without conventional link mechanisms to be designed. Serially connected FMAs act as a miniature robot manipulator. The kinematics and control algorithm for this type of robot are presented. FMAs combined in parallel act as a multifingered robot hand, with each FMA representing a finger. An algorithm for the cooperative control for such FMAs, the stable region for holding, and its performance have been developed.<>"}
{"_id": "40975d415b39697ab866ce6bd061050629c96fd0", "title": "Design, fabrication and control of soft robots", "text": "Conventionally, engineers have employed rigid materials to fabricate precise, predictable robotic systems, which are easily modelled as rigid members connected at discrete joints. Natural systems, however, often match or exceed the performance of robotic systems with deformable bodies. Cephalopods, for example, achieve amazing feats of manipulation and locomotion without a skeleton; even vertebrates such as humans achieve dynamic gaits by storing elastic energy in their compliant bones and soft tissues. Inspired by nature, engineers have begun to explore the design and control of soft-bodied robots composed of compliant materials. This Review discusses recent developments in the emerging field of soft robotics."}
{"_id": "4b92bb4b8fb64b0101f7cb7c2feb27360adbbec9", "title": "Monolithic fabrication of sensors and actuators in a soft robotic gripper", "text": "In this paper, we present a fluidically functionalized soft-bodied robot that integrates both sensing and actuation. Rather than combining these functions as an afterthought, we design sensors and actuators into the robot at the onset, both reducing fabrication complexity and optimizing component interactions. We utilize liquid metal strain sensors and pneumatic actuators embedded into a silicone robotic gripper. The robot's body is formed by curing the silicone in complex 3D printed molds. We show that the liquid metal strain gauges provide a repeatable resistance response during robotic actuation. We further show that, given sufficient control over other time-dependent variables, it is possible to determine when the robot begins gripping an object during actuation."}
{"_id": "56d0df7d39572769cec7de6dc2645767449bbe5e", "title": "Soft robotic glove for combined assistance and at-home rehabilitation", "text": "This paper presents a portable, assistive, soft robotic glove designed to augment hand rehabilitation for individuals with functional grasp pathologies. The robotic glove utilizes soft actuators consisting of molded elastomeric chambers with fiber reinforcements that induce specific bending, twisting and extending trajectories under fluid pressurization. These soft actuators were mechanically programmed tomatch and support the range of motion of individual fingers. They demonstrated the ability to generate significant force when pressurized and exhibited low impedance when un-actuated. To operate the soft robotic glove, a control hardware system was designed and included fluidic pressure sensors in line with the hydraulic actuators and a closed-loop controller to regulate the pressure. Demonstrations with the complete system were performed to evaluate the ability of the soft robotic glove to carry out gross and precise functional grasping. Compared to existing devices, the soft robotic glove has the potential to increase user freedom and independence through its portable waist belt pack and open palm design. \u00a9 2014 Elsevier B.V. All rights reserved."}
{"_id": "015f67cda975f7d2357dd1b1df29ab53aa1632b5", "title": "Peristaltic locomotion with antagonistic actuators in soft robotics", "text": "This paper presents a soft robotic platform that exhibits peristaltic locomotion. The design principle is based on the unique antagonistic arrangement of radial/circular and longitudinal muscle groups of Oligochaeta. Sequential antagonistic motion is achieved in a flexible braided mesh-tube structure with NiTi coil actuators. A numerical model for the mesh structure describes how peristaltic motion induces robust locomotion and details the deformation by the contraction of NiTi actuators. Several peristaltic locomotion modes are modeled, tested, and compared on the basis of locomotion speed. The entire mechanical structure is made of flexible mesh materials and can withstand significant external impacts during locomotion. This approach can enable a completely soft robotic platform by employing a flexible control unit and energy sources."}
{"_id": "6860f804436d856738369dd10922a004c3c5220d", "title": "A Fast Distributed Algorithm for Mining Association Rules", "text": "With the existence of many large transaction databases, the huge amounts of data, the high scalability of distributed systems, and the easy partition and distribution of a centralized database, it is important to inuestzgate eficient methods for distributed mining of association rules. This study discloses some interesting relationships between locally large and globally large itemsets and proposes an interesting distributed association rule mining algorithm, FDM (Fast Distributed Mining of association rules), which generates a small number of candidate sets and substantially reduces the number of messages to be passed at mining association rules. Our performance study shows that FDM has a superior performance over the direct application of a typical sequential algorithm. Further performance enhancement leads to a few variations of the algorithm."}
{"_id": "7a7b3f99fef5f7cb0c4597e3361209d974fb542c", "title": "Mathematical methods in solutions of the problems from the Third International Students' Olympiad in Cryptography", "text": "The mathematical problems and their solutions of the Third International Students\u2019 Olympiad in Cryptography NSUCRYPTO\u20192016 are presented. We consider mathematical problems related to the construction of algebraic immune vectorial Boolean functions and big Fermat numbers, problems about secrete sharing schemes and pseudorandom binary sequences, biometric cryptosystems and the blockchain technology, etc. Two open problems in mathematical cryptography are also discussed and a solution for one of them proposed by a participant during the Olympiad is described. It was the first time in the Olympiad history."}
{"_id": "3856c16bf4c0bceec9f9022be67190d645557e36", "title": "TraceMixer: Privacy-preserving crowd-sensing sans trusted third party", "text": "Crowd-sensing promises cheap and easy large scale data collection by tapping into the sensing and processing capabilities of smart phone users. However, the vast amount of fine-grained location data collected raises serious privacy concerns among potential contributors. In this paper, we argue that crowd-sensing has unique requirements w.r.t. privacy and data utility which renders existing protection mechanisms infeasible. We hence propose TraceMixer, a novel location privacy protection mechanism tailored to the special requirements in crowd-sensing. TraceMixer builds upon the well-studied concept of mix zones to provide trajectory privacy while achieving high spatial accuracy. First in this line of research, TraceMixer applies secure two-party computation technologies to realize a trustless architecture that does not require participants to share locations with anyone in clear. We evaluate TraceMixer on real-world datasets to show the feasibility of our approach in terms of privacy, utility, and performance. Finally, we demonstrate the applicability of TraceMixer in a real-world crowd-sensing campaign."}
{"_id": "31c51d22abeef7a071920a38b56a232a888a95e3", "title": "EXPOSURE: Finding Malicious Domains Using Passive DNS Analysis", "text": "The domain name service (DNS) plays an important role in the operation of the Internet, providing a two-way mapping between domain names and their numerical identifiers. Given its fundamental role, it is not surprising that a wide variety of malicious activities involve the domain name service in one way or another. For example, bots resolve DNS names to locate their command and control servers, and spam mails contain URLs that link to domains that resolve to scam servers. Thus, it seems beneficial to monitor the use of the DNS system for signs that indicate that a certain name is used as part of a malicious operation. In this paper, we introduce EXPOSURE, a system that employs large-scale, passive DNS analysis techniques to detect domains that are involved in malicious activity. We use 15 features that we extract from the DNS traffic that allow us to characterize different properties of DNS names and the ways that they are queried. Our experiments with a large, real-world data set consisting of 100 billion DNS requests, and a real-life deployment for two weeks in an ISP show that our approach is scalable and that we are able to automatically identify unknown malicious domains that are misused in a variety of malicious activity (such as for botnet command and control, spamming, and phishing)."}
{"_id": "cf8c44a703350ebc5df46a861c76db9f0e49457b", "title": "Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy", "text": "Deep learning networks have achieved state-of-the-art accuracies on computer vision workloads like image classification and object detection. The performant systems, however, typically involve big models with numerous parameters. Once trained, a challenging aspect for such top performing models is deployment on resource constrained inference systems \u2014 the models (often deep networks or wide networks or both) are compute and memory intensive. Low-precision numerics and model compression using knowledge distillation are popular techniques to lower both the compute requirements and memory footprint of these deployed models. In this paper, we study the combination of these two techniques and show that the performance of low-precision networks can be significantly improved by using knowledge distillation techniques. Our approach, Apprentice, achieves state-of-the-art accuracies using ternary precision and 4-bit precision for variants of ResNet architecture on ImageNet dataset. We present three schemes using which one can apply knowledge distillation techniques to various stages of the train-and-deploy pipeline."}
{"_id": "e8d57223a3b88c58131a6642579e66f4739edfb7", "title": "08 67 0 v 1 [ cs . C V ] 2 3 M ar 2 01 8 Object Detection for Comics using Manga 109 Annotations", "text": "With the growth of digitized comics, image understanding techniques are becoming important. In this paper, we focus on object detection, which is a fundamental task of image understanding. Although convolutional neural networks (CNN)-based methods archived good performance in object detection for naturalistic images, there are two problems in applying these methods to the comic object detection task. First, there is no large-scale annotated comics dataset. The CNN-based methods require large-scale annotations for training. Secondly, the objects in comics are highly overlapped compared to naturalistic images. This overlap causes the assignment problem in the existing CNN-based methods. To solve these problems, we proposed a new annotation dataset and a new CNN model. We annotated an existing image dataset of comics and created the largest annotation dataset, named Manga109-annotations. For the assignment problem, we proposed a new CNN-based detector, SSD300-fork. We compared SSD300-fork with other detection methods using Manga109-annotations and confirmed that our model outperformed them based on the mAP score."}
{"_id": "967566161bedeaaa09ad76cff6e860d193cb9968", "title": "Simulation of phasor measurement unit (PMU) in MATLAB", "text": "The advent of phasor measurement unit in power system is revolutionizing the conventional grid towards smart grid. Phasor measurement units are extremely expensive devices which takes into account many aspects of the power system that are not disclosed by the manufacturer while estimating phasors of currents and voltage. This paper aims to build a laboratory prototype of PMU which can estimate the phasor updating process of a commercial PMU at the benefit of improved measurement accuracy, reduced manufacturing cost and increased timely information."}
{"_id": "3dc8f5a7dcdfbdfad4a14bd9a4587de1fb1c2c22", "title": "Discriminative Information Retrieval for Knowledge Discovery", "text": "We propose a framework for discriminative Information Retrieval (IR) atop linguistic features, trained to improve the recall of tasks such as answer candidate passage retrieval, the initial step in text-based Question Answering (QA). We formalize this as an instance of linear feature-based IR (Metzler and Croft, 2007), illustrating how a variety of knowledge discovery tasks are captured under this approach, leading to a 44% improvement in recall for candidate"}
{"_id": "4f86e8944de74a3f002afa9f55791a5e9b9cbe53", "title": "The Business Value of Business inTelligence", "text": "This paper focuses on the influence of BI on manufacturing and distribution organizations, although much of what\u2019s covered in it can apply to businesses in non-related industries, as well. To kick it off, the chart below illustrates the primary value propositions of business intelligence and the types of business analytics (e.g., Sales, Inventory, Supplier Performance analysis) that support each value area. The effect of BI on each of these business value areas are then discussed in more detail in the following sections."}
{"_id": "0e7cf44e63829a80874d9cbaf652f0a6884528a8", "title": "Software project management with GAs", "text": "A Project Scheduling Problem consists in deciding who does what during the software project lifetime. This is a capital issue in the practice of software engineering, since the total budget and human resources involved must be managed optimally in order to end in a successful project. In short, companies are principally concerned with reducing the duration and cost of projects, and these two goals are in conflict with each other. In this work we tackle the problem by using genetic algorithms (GAs) to solve many different software project scenarios. Thanks to our newly developed instance generator we can perform structured studies on the influence the most important problem attributes have on the solutions. Our conclusions show that GAs are quite flexible and accurate for this application, and an important tool for automatic project management. 2007 Elsevier Inc. All rights reserved."}
{"_id": "1e1c462bbb9659e83b9c1b8bddc47767dbb18f20", "title": "Language processing and learning models for community question answering in Arabic", "text": "In this paper we focus on the problem of question ranking in community question answering (cQA) forums in Arabic. We address the task with machine learning algorithms using advanced Arabic text representations. The latter are obtained by applying tree kernels to constituency parse trees combined with textual similarities, including word embeddings. Our two main contributions are: (i) an Arabic language processing pipeline based on UIMA \u2014from segmentation to constituency parsing\u2014 built on top of Farasa, a state-of-the-art Arabic language processing toolkit; and (ii) the application of long short-term memory neural networks to identify the best text fragments in questions to be used in our tree-kernel-based ranker. Our thorough experimentation on a recently released cQA dataset shows that the Arabic linguistic processing provided by Farasa produces strong results and that neural networks combined with tree kernels further boost the performance in terms of both efficiency and accuracy. Our approach also enables an implicit comparison between different processing pipelines as our tests on Farasa and Stanford parsers demonstrate."}
{"_id": "0a1f6b8f279f74b4528d8cb377c90d4c28f2df38", "title": "Simulating LTE/LTE-Advanced Networks with SimuLTE", "text": "In this work we present SimuLTE, an OMNeT++-based simulator for LTE and LTE-Advanced networks. Following well-established OMNeT++ programming practices, SimuLTE exhibits a fully modular structure, which makes it easy to be extended, verified and integrated. Moreover, it inherits all the benefits of such a widely-used and versatile simulation framework as OMNeT++, i.e., experiment support and seamless integration with the OMNeT++ network modules, such as INET. This allows SimuLTE users to build up mixed scenarios where LTE is only a part of a wider network. This paper describes the architecture of SimuLTE, with particular emphasis on the modeling choices at the MAC layer, where resource scheduling is located. Furthermore, we describe some of the verification and validation efforts and present an example of the performance analysis that can be carried out with SimuLTE."}
{"_id": "a1edbd27c020815b643fb7b5db99c659ca34eda5", "title": "Abstract Cryptography", "text": "Cryptography Ueli Maurer ETH Zurich FOSAD 2009, Bertinoro, Aug./Sept. 2009. Abstract CryptographyCryptography Ueli Maurer ETH Zurich FOSAD 2009, Bertinoro, Aug./Sept. 2009. \u201cI can only understand simple things.\u201d JAMES MASSEY Abstraction Abstraction: eliminate irrelevant details from consideration Examples: group, field, vector space, relation, graph, .... Goals of abstraction: \u2022 simpler definitions \u2022 generality of results \u2022 simpler proofs \u2022 elegance \u2022 didactic suitabilityion Abstraction: eliminate irrelevant details from consideration Examples: group, field, vector space, relation, graph, .... Goals of abstraction: \u2022 simpler definitions \u2022 generality of results \u2022 simpler proofs \u2022 elegance \u2022 didactic suitability Abstraction Abstraction: eliminate irrelevant details from consideration Examples: group, field, vector space, relation, graph, .... Goals of abstraction: \u2022 simpler definitions \u2022 generality of results \u2022 simpler proofs \u2022 elegance \u2022 didactic suitability \u2022 understandingion Abstraction: eliminate irrelevant details from consideration Examples: group, field, vector space, relation, graph, .... Goals of abstraction: \u2022 simpler definitions \u2022 generality of results \u2022 simpler proofs \u2022 elegance \u2022 didactic suitability \u2022 understanding Abstraction Abstraction: eliminate irrelevant details from consideration Examples: group, field, vector space, relation, graph, .... Goals of abstraction: \u2022 simpler definitions \u2022 generality of results \u2022 simpler proofs \u2022 elegance \u2022 didactic suitability \u2022 understanding Goals of this talk: \u2022 Introduce layers of abstraction in cryptography. \u2022 Ex mples of abstract definitions and proofs. \u2022 Announce new security framework \u201cabstract cryptography\u201d (with Renato Renner).ion Abstraction: eliminate irrelevant details from consideration Examples: group, field, vector space, relation, graph, .... Goals of abstraction: \u2022 simpler definitions \u2022 generality of results \u2022 simpler proofs \u2022 elegance \u2022 didactic suitability \u2022 understanding Goals of this talk: \u2022 Introduce layers of abstraction in cryptography. \u2022 Ex mples of abstract definitions and proofs. \u2022 Announce new security framework \u201cabstract cryptography\u201d (with Renato Renner). Motivating example: One-time pad 1 C , C , ... 1 ciphertext 1 2 key 2 1 key 2 1 2 2 addition modulo 2 M , M , ... M , M , ... plaintext plaintext K , K , ... K , K , ... Motivating example: One-time pad 1 C , C , ... 1 ciphertext 1 2 key 2 1 key 2 1 2 2 addition modulo 2 M , M , ... M , M , ... plaintext plaintext K , K , ... K , K , ... Perfect secrecy (Shannon): C and M statist. independent. One-time pad in terms of systems ."}
{"_id": "4342cd22185d9f655f013c4590cab2617ed1631a", "title": "A low cost ultra-wideband pulse transceiver", "text": "A low cost pulse transceiver has been developed for measuring the electrical properties of materials. The transceiver generates an ultra wide band (UWB) pulse as well as samples the received pulse. The pulse generator has been designed using a silicon-germanium (SiGe) analog comparator which is a possible alternative to step recovery diodes. The pulse is received by an extended time sampling circuit. The sampling circuit presented is an alternative design that does not require the use of a broadband balun."}
{"_id": "b9ea3aee650a75b29f4e3883100ad4e425f22f3e", "title": "A Survey of Shortest-Path Algorithms", "text": "A shortest-path algorithm finds a path containing the minimal cost between two vertices in a graph. A plethora of shortest-path algorithms is studied in the literature that span across multiple disciplines. This paper presents a survey of shortest-path algorithms based on a taxonomy that is introduced in the paper. One dimension of this taxonomy is the various flavors of the shortest-path problem. There is no one general algorithm that is capable of solving all variants of the shortest-path problem due to the space and time complexities associated with each algorithm. Other important dimensions of the taxonomy include whether the shortest-path algorithm operates over a static or a dynamic graph, whether the shortest-path algorithm produces exact or approximate answers, and whether the objective of the shortest-path algorithm is to achieve time-dependence or is to only be goal directed. This survey studies and classifies shortest-path algorithms according to the proposed taxonomy. The survey also presents the challenges and proposed solutions associated with each category in the taxonomy."}
{"_id": "ea0776391d071d7699f56bee0b252e50bdd5f0d9", "title": "Image super-resolution by estimating the enhancement weight of self example and external missing patches", "text": "Image super-resolution (SR) is the process of generating a high-resolution (HR) image using one or more low-resolution (LR) inputs. Many SR methods have been proposed, but generating the small-scale structure of an SR image remains a challenging task. We hence propose a single-image SR algorithm that combines the benefits of both internal and external SR methods. First, we estimate the enhancement weights of each LR-HR image patch pair. Next, we multiply each patch by the estimated enhancement weight to generate an initial SR patch. We then employ a method to recover the missing information from the high-resolution patches and create that missing information to generate a final SR image. We then employ iterative back-projection to further enhance visual quality. The method is compared qualitatively and quantitatively with several state-of-the-art methods, and the experimental results indicate that the proposed framework provides high contrast and better visual quality, particularly for non-smooth texture areas."}
{"_id": "4f1b4d8c286394674ab558878d60ac0328e1232c", "title": "Tailoring graph-coloring register allocation for runtime compilation", "text": "Just-in-time compilers are invoked during application execution and therefore need to ensure fast compilation times. Consequently, runtime compiler designers are averse to implementing compile-time intensive optimization algorithms. Instead, they tend to select faster but less effective transformations. In this paper, we explore this trade-off for an important optimization - global register allocation. We present a graph-coloring register allocator that has been redesigned for runtime compilation. Compared to Chaitin- Briggs [7], a standard graph-coloring technique, the reformulated algorithm requires considerably less allocation time and produces allocations that are only marginally worse than those of Chaitin-Briggs. Our experimental results indicate that the allocator performs better than the linear-scan and Chaitin-Briggs allocators on most benchmarks in a runtime compilation environment. By increasing allocation efficiency and preserving optimization quality, the presented algorithm increases the suitability and profitability of a graph-coloring register allocation strategy for a runtime compiler."}
{"_id": "040678daf6a49a88345ee0c680fccfd134f24d4b", "title": "Introduction to Information Retrieval Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schtze (Stanford University, Yahoo! Research, and University of Stuttgart) Cambridge: Cambridge University Press, 2008, xxi+482 pp; hardbound, ISBN 978-0-521-86571-5, $60.00", "text": "Introduction to Information Retrieval is the first textbook with a coherent treatment of classical and web information retrieval, including web search and the related areas of text classification and text clustering. Written from a computer science perspective, it gives an up-to-date treatment of all aspects of the design and implementation of systems for gathering, indexing, and searching documents and of methods for evaluating systems, along with an introduction to the use of machine learning methods on text collections. Designed as the primary text for a graduate or advanced undergraduate course in information retrieval, the book will also interest researchers and professionals. A complete set of lecture slides and exercises that accompany the book are available on the web."}
{"_id": "037c2dfa4b14a0928a4fdcb59f3b3ff8e13c6fe6", "title": "EMI filter design and optimization for both AC and DC side in a DC-fed motor drive system", "text": "This paper proposes an EMI filter design and optimization method for both AC and DC side in a DC-fed motor drive system. Based on the noise generation and propagation mechanism, the analysis of common mode (CM) and differential mode (DM) EMI noise equivalent circuits is provided. Based on those equivalent circuits, this paper investigates the system EMI noise between AC and DC side, which shows the interaction between DM and CM noises and the interaction between adding AC and DC side filters. With these considerations, an optimized EMI filter design procedure is proposed to design CM and DM filters for both AC and DC sides. To minimize the impact on the EMI noise of one side caused by adding filter on the other side, certain order must be followed to design AC and DC CM and DM filters. Moreover, the EMI filter weight optimization method is also discussed to get the minimum weight of EMI filter and improve system power density. Simulation and experimental results verify the interaction between AC and DC filters and show that EMI filters can be designed to suppress both AC and DC EMI noise to meet the standard with the proposed EMI filter design method."}
{"_id": "2cad1d72fd2329797884fc656fcb5d4cc91c33fe", "title": "A Survey on Potential Applications of Honeypot Technology in Intrusion Detection Systems", "text": "Information security in the sense of personal and institutional has become a top priority in digitalized modern world in parallel to the new technological developments. Many methods, tools and technologies are used to provide the information security of IT systems. These are considered, encryption, authentication, firewall, and intrusion detection and prevention systems. Moreover, honeypot systems are proposed as complementary structures. This paper presents the overall view of the publications in IDS, IPS and honeypot systems. Recently, honeypot systems are anymore used in connection with intrusion detection systems. So this paper describes possible implementation of honeypot technologies combined with IDS/IPS in a network. Studies in the literature have shown intrusion detection systems cannot find the 0-day vulnerabilities. The system provided by the honeypots and intrusion detection systems in the network, might detect new exploit and hacker attempt. Index Terms \u2013 Information security, Intrusion detection system (IDS), Intrusion prevention system (IPS), Honeypot, Network Security."}
{"_id": "e5f008c958dbdc934b81c25719e829c7be9f7c3d", "title": "Does core strength training influence running kinetics, lower-extremity stability, and 5000-M performance in runners?", "text": "Although strong core muscles are believed to help athletic performance, few scientific studies have been conducted to identify the effectiveness of core strength training (CST) on improving athletic performance. The aim of this study was to determine the effects of 6 weeks of CST on ground reaction forces (GRFs), stability of the lower extremity, and overall running performance in recreational and competitive runners. After a screening process, 28 healthy adults (age, 36.9 +/- 9.4 years; height, 168.4 +/- 9.6 cm; mass, 70.1 +/- 15.3 kg) volunteered and were divided randomly into 2 groups (n = 14 in each group). A test-retest design was used to assess the differences between CST (experimental) and no CST (control) on GRF measures, lower-extremity stability scores, and running performance. The GRF variables were determined by calculating peak impact, active vertical GRFs (vGRFs), and duration of the 2 horizontal GRFs (hGRFs), as measured while running across a force plate. Lower-extremity stability was assessed using the Star Excursion Balance Test. Running performance was determined by 5000-m run time measured on outdoor tracks. Six 2 (pre, post) x 2 (CST, control) mixed-design analyses of variance were used to determine the influence of CST on each dependent variable, p < 0.05. Twenty subjects completed the study (nexp = 12 and ncon = 8). A significant interaction occurred, with the CST group showing faster times in the 5000-m run after 6 weeks. However, CST did not significantly influence GRF variables and lower-leg stability. Core strength training may be an effective training method for improving performance in runners."}
{"_id": "0baafcd47f1bff2935f7f3d316b4341a769ab4d1", "title": "A performance and energy comparison of FPGAs, GPUs, and multicores for sliding-window applications", "text": "With the emergence of accelerator devices such as multicores, graphics-processing units (GPUs), and field-programmable gate arrays (FPGAs), application designers are confronted with the problem of searching a huge design space that has been shown to have widely varying performance and energy metrics for different accelerators, different application domains, and different use cases. To address this problem, numerous studies have evaluated specific applications across different accelerators. In this paper, we analyze an important domain of applications, referred to as sliding-window applications, when executing on FPGAs, GPUs, and multicores. For each device, we present optimization strategies and analyze use cases where each device is most effective. The results show that FPGAs can achieve speedup of up to 11x and 57x compared to GPUs and multicores, respectively, while also using orders of magnitude less energy."}
{"_id": "40b603ff878315c301c44c5f04d9c4a2a9d4da80", "title": "Correntropy: Properties and Applications in Non-Gaussian Signal Processing", "text": "The optimality of second-order statistics depends heavily on the assumption of Gaussianity. In this paper, we elucidate further the probabilistic and geometric meaning of the recently defined correntropy function as a localized similarity measure. A close relationship between correntropy and M-estimation is established. Connections and differences between correntropy and kernel methods are presented. As such correntropy has vastly different properties compared with second-order statistics that can be very useful in non-Gaussian signal processing, especially in the impulsive noise environment. Examples are presented to illustrate the technique."}
{"_id": "7aa32daf5a7e9fb62fabff1dcc684de64972196f", "title": "Real-Time Optical Flow Calculations on FPGA and GPU Architectures: A Comparison Study", "text": "FPGA devices have often found use as higher-performance alternatives to programmable processors for implementing a variety of computations. Applications successfully implemented on FPGAs have typically contained high levels of parallelism and have often used simple statically-scheduled control and modest arithmetic. Recently introduced computing devices such as coarse grain reconfigurable arrays, multi-core processors, and graphical processing units (GPUs) promise to significantly change the computational landscape for the implementation of high-speed real-time computing tasks. One reason for this is that these architectures take advantage of many of the same application characteristics that fit well on FPGAs. One real-time computing task, optical flow, is difficult to apply in robotic vision applications in practice because of its high computational and data rate requirements, and so is a good candidate for implementation on FPGAs and other custom computing architectures. In this paper, a tensor-based optical flow algorithm is implemented on both an FPGA and a GPU and the two implementations discussed. The two implementations had similar performance, but with the FPGA implementation requiring 12\u00d7 more development time. Other comparison data for these two technologies is then given for three additional applications taken from a MIMO digital communication system design, providing additional examples of the relative capabilities of these two technologies."}
{"_id": "d70a2c44b9c22348600b7359ffc43d36876c5a42", "title": "Performance comparison of FPGA, GPU and CPU in image processing", "text": "Many applications in image processing have high inherent parallelism. FPGAs have shown very high performance in spite of their low operational frequency by fully extracting the parallelism. In recent micro processors, it also becomes possible to utilize the parallelism using multi-cores which support improved SIMD instructions, though programmers have to use them explicitly to achieve high performance. Recent GPUs support a large number of cores, and have a potential for high performance in many applications. However, the cores are grouped, and data transfer between the groups is very limited. Programming tools for FPGA, SIMD instructions on CPU and a large number of cores on GPU have been developed, but it is still difficult to achieve high performance on these platforms. In this paper, we compare the performance of FPGA, GPU and CPU using three applications in image processing; two-dimensional filters, stereo-vision and k-means clustering, and make it clear which platform is faster under which conditions."}
{"_id": "a3919347048163f53a56bd3344bbffe135e31564", "title": "Scalable matrix inversion using MapReduce", "text": "Matrix operations are a fundamental building block of many computational tasks in fields as diverse as scientific computing, machine learning, and data mining. Matrix inversion is an important matrix operation, but it is difficult to implement in today's popular parallel dataflow programming systems, such as MapReduce. The reason is that each element in the inverse of a matrix depends on multiple elements in the input matrix, so the computation is not easily partitionable. In this paper, we present a scalable and efficient technique for matrix inversion in MapReduce. Our technique relies on computing the LU decomposition of the input matrix and using that decomposition to compute the required matrix inverse. We present a technique for computing the LU decomposition and the matrix inverse using a pipeline of MapReduce jobs. We also present optimizations of this technique in the context of Hadoop. To the best of our knowledge, our technique is the first matrix inversion technique using MapReduce. We show experimentally that our technique has good scalability, enabling us to invert a 10^5 x 10^5 matrix in 5 hours on Amazon EC2. We also show that our technique outperforms ScaLAPACK, a state-of-the-art linear algebra package that uses MPI."}
{"_id": "e96ed6eb72e6ab56b41afbb897eb8e3299ba607c", "title": "AMEE Guide No. 14: Outcome-based education: Part 1-An introduction to outcome-based education", "text": "SUMM AR Y Outcome-based education, a performance-based approach at the cutting edge of curr iculum development, offers a powerful and appealing way of reforming and managing medical education. The emphasis is on the product\u00d0 what sort of doctor will be produced\u00d0 rather than on the educational process. In outcome-based education the educational outcomes are clearly and unambiguously speci \u00ae ed. These determine the curr iculum content and its organisation, the teaching methods and strategies, the courses offered, the assessment process, the educational environment and the curriculum timetable.They also provide a framework for curr iculum evaluation. A doctor is a unique combination of different kinds of abilities. A three-circle model can be used to present the learning outcomes in medical education, with the tasks to be performed by the doctor in the inner core, the approaches to the performance of the tasks in the middle area, and the growth of the individual and his or her role in the practice of medicine in the outer area. Medical schools need to prepare young doctors to practise in an increasingly complex healthcare scene with changing patient and public expectations, and increasing demands from employing authorities. Outcome-based education offers many advantages as a way of achieving this. It emphasises relevance in the curriculum and accountability, and can provide a clear and unambiguous framework for curr iculum planning which has an intuitive appeal. It encourages the teacher and the student to share responsibility for learning and it can guide student assessment and course evaluation. What sort of outcomes should be covered in a curriculum, how should they be assessed and how should outcome-based education be implemented are issues that need to be addressed."}
{"_id": "dbb2a4c38d76311eee3b7c307b738cbcead90f32", "title": "Comparative assessment of methods for aligning multiple genome sequences", "text": "Multiple sequence alignment is a difficult computational problem. There have been compelling pleas for methods to assess whole-genome multiple sequence alignments and compare the alignments produced by different tools. We assess the four ENCODE alignments, each of which aligns 28 vertebrates on 554 Mbp of total input sequence. We measure the level of agreement among the alignments and compare their coverage and accuracy. We find a disturbing lack of agreement among the alignments not only in species distant from human, but even in mouse, a well-studied model organism. Overall, the assessment shows that Pecan produces the most accurate or nearly most accurate alignment in all species and genomic location categories, while still providing coverage comparable to or better than that of the other alignments in the placental mammals. Our assessment reveals that constructing accurate whole-genome multiple sequence alignments remains a significant challenge, particularly for noncoding regions and distantly related species."}
{"_id": "1c9a61c8ec255d033201fb9b394b283a6b6acacc", "title": "Structured Feature Learning for Pose Estimation", "text": "In this paper, we propose a structured feature learning framework to reason the correlations among body joints at the feature level in human pose estimation. Different from existing approaches of modeling structures on score maps or predicted labels, feature maps preserve substantially richer descriptions of body joints. The relationships between feature maps of joints are captured with the introduced geometrical transform kernels, which can be easily implemented with a convolution layer. Features and their relationships are jointly learned in an end-to-end learning system. A bi-directional tree structured model is proposed, so that the feature channels at a body joint can well receive information from other joints. The proposed framework improves feature learning substantially. With very simple post processing, it reaches the best mean PCP on the LSP and FLIC datasets. Compared with the baseline of learning features at each joint separately with ConvNet, the mean PCP has been improved by 18% on FLIC. The code is released to the public."}
{"_id": "d7c02288bf660e2236d6068840c0ca5103aae3c7", "title": "Power Electronic Transformer-Based Railway Traction Systems: Challenges and Opportunities", "text": "In this paper, power electronic transformer (PET)-based railway traction systems are comprehensively reviewed according to the unique application features and requirements. By comparing PET and conventional line frequency transformer (LFT)-based systems, their pros and cons are summarized. By further reviewing all kinds of PET-based designs from the early concepts to the latest ones in the order of their publication dates, the developing trends are highlighted. By synthetically considering the requirements and the state of the art, the key challenges and opportunities are identified and discussed. It shows that although PET-based systems are still developing and far from mature, they are already superior to LFT-based systems in terms of system weight, efficiency, and functionalities especially for 15-kV/16.7-Hz applications. With the advancements on wide bandgap power devices, medium frequency transformers, and converters, PET systems will be even more promising and available for all types of railway tractions in the near future."}
{"_id": "d790f8f82097515c84ce0f6752951652b127aff6", "title": "ACE: An Efficient Asynchronous Corner Tracker for Event Cameras", "text": "The emergence of bio-inspired event cameras has opened up new exciting possibilities in high-frequency tracking, overcoming some of the limitations of traditional frame-based vision (e.g. motion blur during high-speed motions or saturation in scenes with high dynamic range). As a result, research has been focusing on the processing of their unusual output: an asynchronous stream of events. With the majority of existing techniques discretizing the event-stream into frame-like representations, we are yet to harness the true power of these cameras. In this paper, we propose the ACE tracker: a purely asynchronous framework to track corner-event features. Evaluation on benchmarking datasets reveals significant improvements in accuracy and computational efficiency in comparison to state-of-the-art event-based trackers. ACE achieves robust performance even in challenging scenarios, where traditional frame-based vision algorithms fail."}
{"_id": "24e1d60e37812492c220af107d911d440d997056", "title": "Branch-and-bound processing of ranked queries", "text": "Despite the importance of ranked queries in numerous applications involving multi-criteria decision making, they are not efficiently supported by traditional database systems. In this paper, we propose a simple yet powerful technique for processing such queries based on multi-dimensional access methods and branch-and-bound search. The advantages of the proposed methodology are: (i) it is space efficient, requiring only a single index on the given relation (storing each tuple at most once), (ii) it achieves significant (i.e., orders of magnitude) performance gains with respect to the current state-of-theart, (iii) it can efficiently handle data updates, and (iv) it is applicable to other important variations of ranked search (including the support for non-monotone preference functions), at no extra space overhead. We confirm the superiority of the proposed methods with a detailed experimental study. r 2006 Elsevier B.V. All rights reserved."}
{"_id": "8f92b4ea04758df2acfb49bd46a4cde923c3ddcb", "title": "Deep visual foresight for planning robot motion", "text": "A key challenge in scaling up robot learning to many skills and environments is removing the need for human supervision, so that robots can collect their own data and improve their own performance without being limited by the cost of requesting human feedback. Model-based reinforcement learning holds the promise of enabling an agent to learn to predict the effects of its actions, which could provide flexible predictive models for a wide range of tasks and environments, without detailed human supervision. We develop a method for combining deep action-conditioned video prediction models with model-predictive control that uses entirely unlabeled training data. Our approach does not require a calibrated camera, an instrumented training set-up, nor precise sensing and actuation. Our results show that our method enables a real robot to perform nonprehensile manipulation \u2014 pushing objects \u2014 and can handle novel objects not seen during training."}
{"_id": "2d6bbb97830b931a321807ec69b7e6b7798402c1", "title": "Credit Card Fraud Detection: A Hybrid Approach Using Fuzzy Clustering & Neural Network", "text": "Due to the rapid progress of the e-commerce and online banking, use of credit cards has increased considerably leading to a large number of fraud incidents. In this paper, we have proposed a novel approach towards credit card fraud detection in which the fraud detection is done in three phases. The first phase does the initial user authentication and verification of card details. If the check is successfully cleared, then the transaction is passed to the next phase where fuzzy c-means clustering algorithm is applied to find out the normal usage patterns of credit card users based on their past activity. A suspicion score is calculated according to the extent of deviation from the normal patterns and thereby the transaction is classified as legitimate or suspicious or fraudulent. Once a transaction is found to be suspicious, neural network based learning mechanism is applied to determine whether it was actually a fraudulent activity or an occasional deviation by a genuine user. Extensive experimentation with stochastic models shows that the combined use of clustering technique along with learning helps in detecting fraudulent activities effectively while minimizing the generation of false alarms."}
{"_id": "863b4872279a807e0d2f3d14c98d056dee0c863c", "title": "3D reconstruction of freely moving persons for re-identification with a depth sensor", "text": "In this work, we describe a novel method for creating 3D models of persons freely moving in front of a consumer depth sensor and we show how they can be used for long-term person re-identification. For overcoming the problem of the different poses a person can assume, we exploit the information provided by skeletal tracking algorithms for warping every point cloud frame to a standard pose in real time. Then, the warped point clouds are merged together to compose the model. Re-identification is performed by matching body shapes in terms of whole point clouds warped to a standard pose with the described method. We compare this technique with a classification method based on a descriptor of skeleton features and with a mixed approach which exploits both skeleton and shape features. We report experiments on two datasets we acquired for RGB-D re-identification which use different skeletal tracking algorithms and which are made publicly available to foster research in this new research branch."}
{"_id": "67c24fe3c0c8e8841bdc2d936f876e5e726f210d", "title": "An Enhanced Neural Network Technique for Software Risk Analysis", "text": "An enhanced technique for risk categorization is presented. This technique, PCA-ANN, provides an improved capability to discriminate high-risk software. The approach draws on the combined strengths of pattern recognition, multivariate statistics and neural networks. Principal component analysis is utilized to provide a means of normalizing and orthogonalizing the input data, thus eliminating the ill effects of multicollinearity. A neural network is used for risk determination/classification. A significant feature of this approach is a procedure, herein termed cross-normalization. This procedure provides the technique with capability to discriminate data sets that include disproportionately large numbers of high-risk software modules."}
{"_id": "e570d707d5ea094cb6cb5ad63083a4831fd4ac8c", "title": "Comparison of the Full Outline of UnResponsiveness and Glasgow Liege Scale/Glasgow Coma Scale in an intensive care unit population.", "text": "BACKGROUND\nThe Full Outline of UnResponsiveness (FOUR) has been proposed as an alternative for the Glasgow Coma Scale (GCS)/Glasgow Li\u00e8ge Scale (GLS) in the evaluation of consciousness in severely brain-damaged patients. We compared the FOUR and GLS/GCS in intensive care unit patients who were admitted in a comatose state.\n\n\nMETHODS\nFOUR and GLS evaluations were performed in randomized order in 176 acutely (<1\u00a0month) brain-damaged patients. GLS scores were transformed in GCS scores by removing the GLS brainstem component. Inter-rater agreement was assessed in 20% of the studied population (N\u00a0=\u00a035). A logistic regression analysis adjusted for age, and etiology was performed to assess the link between the studied scores and the outcome 3\u00a0months after injury (N\u00a0=\u00a0136).\n\n\nRESULTS\nGLS/GCS verbal component was scored 1 in 146 patients, among these 131 were intubated. We found that the inter-rater reliability was good for the FOUR score, the GLS/GCS. FOUR, GLS/GCS total scores predicted functional outcome with and without adjustment for age and etiology. 71 patients were considered as being in a vegetative/unresponsive state based on the GLS/GCS. The FOUR score identified 8 of these 71 patients as being minimally conscious given that these patients showed visual pursuit.\n\n\nCONCLUSIONS\nThe FOUR score is a valid tool with good inter-rater reliability that is comparable to the GLS/GCS in predicting outcome. It offers the advantage to be performable in intubated patients and to identify non-verbal signs of consciousness by assessing visual pursuit, and hence minimal signs of consciousness (11% in this study), not assessed by GLS/GCS scales."}
{"_id": "af44f33690aacd484a633fc6b1d13435d122d06d", "title": "A weight-neutral versus weight-loss approach for health promotion in women with high BMI: A randomized-controlled trial", "text": "Weight loss is the primary recommendation for health improvement in individuals with high body mass index (BMI) despite limited evidence of long-term success. Alternatives to weight-loss approaches (such as Health At Every Size - a weight-neutral approach) have been met with their own concerns and require further empirical testing. This study compared the effectiveness of a weight-neutral versus a weight-loss program for health promotion. Eighty women, aged 30-45 years, with high body mass index (BMI\u00a0\u2265\u00a030\u00a0kg/m(2)) were randomized to 6 months of facilitator-guided weekly group meetings using structured manuals that emphasized either a weight-loss or weight-neutral approach to health. Health measurements occurred at baseline, post-intervention, and 24-months post-randomization. Measurements included blood pressure, lipid panels, blood glucose, BMI, weight, waist circumference, hip circumference, distress, self-esteem, quality of life, dietary risk, fruit and vegetable intake, intuitive eating, and physical activity. Intention-to-treat analyses were performed using linear mixed-effects models to examine group-by-time interaction effects and between and within-group differences. Group-by-time interactions were found for LDL cholesterol, intuitive eating, BMI, weight, and dietary risk. At post-intervention, the weight-neutral program had larger reductions in LDL cholesterol and greater improvements in intuitive eating; the weight-loss program had larger reductions in BMI, weight, and larger (albeit temporary) decreases in dietary risk. Significant positive changes were observed overall between baseline and 24-month follow-up for waist-to-hip ratio, total cholesterol, physical activity, fruit and vegetable intake, self-esteem, and quality of life. These findings highlight that numerous health benefits, even in the absence of weight loss, are achievable and sustainable in the long term using a weight-neutral approach. The trial positions weight-neutral programs as a viable health promotion alternative to weight-loss programs for women of high weight."}
{"_id": "226a1fef9b9c72ea65897c82cd720b6c9e5f53ee", "title": "Budgeted Batch Bayesian Optimization With Unknown Batch Sizes", "text": "Parameter settings profoundly impact the performance of machine learning algorithms and laboratory experiments. The classical grid search or trial-error methods are exponentially expensive in large parameter spaces, and Bayesian optimization (BO) offers an elegant alternative for global optimization of black box functions. In situations where the black box function can be evaluated at multiple points simultaneously, batch Bayesian optimization is used. Current batch BO approaches are restrictive in that they fix the number of evaluations per batch, and this can be wasteful when the number of specified evaluations is larger than the number of real maxima in the underlying acquisition function. We present the Budgeted Batch Bayesian Optimization (B3O) for hyper-parameter tuning and experimental design we identify the appropriate batch size for each iteration in an elegant way. To set the batch size flexible, we use the infinite Gaussian mixture model (IGMM) for automatically identifying the number of peaks in the underlying acquisition functions. We solve the intractability of estimating the IGMM directly from the acquisition function by formulating the batch generalized slice sampling to efficiently draw samples from the acquisition function. We perform extensive experiments for both synthetic functions and two real world applications machine learning hyper-parameter tuning and experimental design for alloy hardening. We show empirically that the proposed B3O outperforms the existing fixed batch BO approaches in finding the optimum whilst requiring a fewer number of evaluations, thus saving cost and time."}
{"_id": "26913f8e2eade1462669acb79f4e71c90b42fe20", "title": "A Roadmap for Natural Language Processing Research in Information Systems", "text": "Natural Language Processing (NLP) is now widely integrated into web and mobile applications, enabling natural interactions between human and computers. Although many NLP studies have been published, none have comprehensively reviewed or synthesized tasks most commonly addressed in NLP research. We conduct a thorough review of IS literature to assess the current state of NLP research, and identify 12 prototypical tasks that are widely researched. Our analysis of 238 articles in Information Systems (IS) journals between 2004 and 2015 shows an increasing trend in NLP research, especially since 2011. Based on our analysis, we propose a roadmap for NLP research, and detail how it may be useful to guide future NLP research in IS. In addition, we employ Association Rules (AR) mining for data analysis to investigate co-occurrence of prototypical tasks and discuss insights from the findings."}
{"_id": "34065ca7e89ff08995eca91317ef5893a9a646b7", "title": "Generative Adversarial Imitation Learning", "text": "The first needs a lot of data and suffers from covariate shift. It\u2019s a supervised learning method when we fit the state-action pair mapping. The latter finds a cost function under which the expert is uniquely optimal. Compounding error is not a problem here, but IRL is super data inefficiend and requires RL in the inner loop. Since we only interested in the final policy, why do we learn the cost function? So, GAIL was born."}
{"_id": "a6ae6e7481c6002ffd98b6ce9cf076ec9e350969", "title": "Real-time capable methods to determine the magnet temperature of permanent magnet synchronous motors \u2014 A review", "text": "The permanent magnet synchronous motor (PMSM) is widely used in highly utilised automotive traction drives and other industrial applications. With regards to the device life-time, safe operation and control performance, the magnet temperature is of great interest. Since a direct magnet temperature measurement is not feasible in most cases, this contribution gives a review on state-of-the-art model-based magnet temperature determination methods in PMSM. In this context, the existing publications can be classified into thermal models, flux observers and voltage signal injection approaches. Firstly, brief introductions of these methods are given, followed by a direct comparison regarding drawbacks and benefits. Finally, this contribution concludes with an outlook of potential further investigations in this research field."}
{"_id": "ba6d4cf28ce7b023fab6a62d2cb38b4fbfc91064", "title": "A Floor and Obstacle Height Map for 3D Navigation of a Humanoid Robot", "text": "With the development of biped robots, systems became able to navigate in a 3 dimensional world, walking up and down stairs, or climbing over small obstacles. We present a method for obtaining a labeled 2.5D grid map of the robot's surroundings. Each cell is marked either as floor or obstacle and contains a value telling the height of the floor or obstacle. Such height maps are useful for path planning and collision avoidance. The method uses a novel combination of a 3D occupancy grid for robust sensor data interpretation and a 2.5D height map for fine resolution floor values. We evaluate our approach using stereo vision on the humanoid robot QRIO and show the advantages over previous methods. Experimental results from navigation runs on an obstacle course demonstrate the ability of the method to generate detailed maps for autonomous navigation."}
{"_id": "c7b160f762ff472bfaa800e60b030761231e7975", "title": "Deep Learning Driven Visual Path Prediction From a Single Image", "text": "Capabilities of inference and prediction are the significant components of visual systems. Visual path prediction is an important and challenging task among them, with the goal to infer the future path of a visual object in a static scene. This task is complicated as it needs high-level semantic understandings of both the scenes and underlying motion patterns in video sequences. In practice, cluttered situations have also raised higher demands on the effectiveness and robustness of models. Motivated by these observations, we propose a deep learning framework, which simultaneously performs deep feature learning for visual representation in conjunction with spatiotemporal context modeling. After that, a unified path-planning scheme is proposed to make accurate path prediction based on the analytic results returned by the deep context models. The highly effective visual representation and deep context models ensure that our framework makes a deep semantic understanding of the scenes and motion patterns, consequently improving the performance on visual path prediction task. In experiments, we extensively evaluate the model's performance by constructing two large benchmark datasets from the adaptation of video tracking datasets. The qualitative and quantitative experimental results show that our approach outperforms the state-of-the-art approaches and owns a better generalization capability."}
{"_id": "81cee0e83c2667af1b1eeee03e2a6ddc23c3c487", "title": "Railway Track Circuit Fault Diagnosis Using Recurrent Neural Networks", "text": "Timely detection and identification of faults in railway track circuits are crucial for the safety and availability of railway networks. In this paper, the use of the long-short-term memory (LSTM) recurrent neural network is proposed to accomplish these tasks based on the commonly available measurement signals. By considering the signals from multiple track circuits in a geographic area, faults are diagnosed from their spatial and temporal dependences. A generative model is used to show that the LSTM network can learn these dependences directly from the data. The network correctly classifies 99.7% of the test input sequences, with no false positive fault detections. In addition, the t-Distributed Stochastic Neighbor Embedding (t-SNE) method is used to examine the resulting network, further showing that it has learned the relevant dependences in the data. Finally, we compare our LSTM network with a convolutional network trained on the same task. From this comparison, we conclude that the LSTM network architecture is better suited for the railway track circuit fault detection and identification tasks than the convolutional network."}
{"_id": "2bbc6fb4ed0437ab1e88da6fde440d094bc959f7", "title": "Does working memory training work? The promise and challenges of enhancing cognition by training working memory.", "text": "A growing body of literature shows that one's working memory (WM) capacity can be expanded through targeted training. Given the established relationship between WM and higher cognition, these successful training studies have led to speculation that WM training may yield broad cognitive benefits. This review considers the current state of the emerging WM training literature, and details both its successes and limitations. We identify two distinct approaches to WM training, strategy training and core training, and highlight both the theoretical and practical motivations that guide each approach. Training-related increases in WM capacity have been successfully demonstrated across a wide range of subject populations, but different training techniques seem to produce differential impacts upon the broader landscape of cognitive abilities. In particular, core WM training studies seem to produce more far-reaching transfer effects, likely because they target domain-general mechanisms of WM. The results of individual studies encourage optimism regarding the value of WM training as a tool for general cognitive enhancement. However, we discuss several limitations that should be addressed before the field endorses the value of this approach."}
{"_id": "d4f86a14dfcafd6008e13a5ffdaa6a2e06c89a11", "title": "Data-Driven Morphological Analysis and Disambiguation for Morphologically Rich Languages and Universal Dependencies", "text": "Parsing texts into universal dependencies (UD) in realistic scenarios requires infrastructure for morphological analysis and disambiguation (MA&D) of typologically different languages as a first tier. MA&D is particularly challenging in morphologically rich languages (MRLs), where the ambiguous space-delimited tokens ought to be disambiguated with respect to their constituent morphemes. Here we present a novel, language-agnostic, framework for MA&D, based on a transition system with two variants, word-based and morpheme-based, and a dedicated transition to mitigate the biases of variable-length morpheme sequences. Our experiments on a Modern Hebrew case study outperform the state of the art, and we show that the morpheme-based MD consistently outperforms our word-based variant. We further illustrate the utility and multilingual coverage of our framework by morphologically analyzing and disambiguating the large set of languages in the UD treebanks."}
{"_id": "c5ddae3a1827edad2283b86971cb93a41806312b", "title": "Maturity and change in personality: developmental trends of temperament and character in adulthood.", "text": "We studied the developmental trends of temperament and character in a longitudinal population-based sample of Finnish men and women aged 20-45 years using the Temperament and Character Inventory model of personality. Personality was assessed in 1997, 2001, and 2007 (n = 2,104, 2,095, and 2,056, respectively). Mean-level changes demonstrated qualitatively distinct developmental patterns for character (self-directedness, cooperativeness, and self-transcendence) and temperament (novelty seeking, harm avoidance, reward dependence, and persistence). Character developed toward greater maturity, although self-transcendence decreased with age. However, self-transcendence was the strongest predictor of overall personality change. Cohort effects indicated lower level of self-transcendence and higher level of self-directedness and cooperativeness in younger birth cohorts. Regarding temperament, novelty seeking decreased and persistence increased slightly with age. Both high novelty seeking and high persistence predicted overall personality change. These findings suggest that temperament and character traits follow different kinds of developmental trajectories."}
{"_id": "83e7654d545fbbaaf2328df365a781fb67b841b4", "title": "Enhanced LSTM for Natural Language Inference", "text": "(1) Handcrafted features [BAPM15] 78.2 (2) LSTM [BGR+16] 80.6 (3) GRU [VKFU15] 81.4 (4) Tree CNN [MML+16] 82.1 (5) SPINN-PI [BGR+16] 83.2 (6) BiLSTM intra-Att [LSLW16] 84.2 (7) NSE [MY16a] 84.6 (8) Att-LSTM [RGH+15] 83.5 (9) mLSTM [WJ16] 86.1 (10) LSTMN [CDL16] 86.3 (11) Decomposable Att [PTDU16] 86.3 (12) Intra-sent Att+(11) [PTDU16] 86.8 (13) NTI-SLSTM-LSTM [MY16b] 87.3 (14) Re-read LSTM [SCSL16] 87.5 (15) btree-LSTM [PAD+16] 87.6"}
{"_id": "dcea8702119f4f74460de1a9aefd81474b360a7f", "title": "The dynamics of organizational identity", "text": "While many organizational researchers make reference to Mead\u2019s theory of social identity, none have explored how Mead\u2019s ideas about the relationship between the \u201cI\u201d and the \u201cme\u201d might be extended to identity processes at the organizational level of analysis. In this paper we define organizational analogues for Mead\u2019s \u201cI\u201d and \u201cme\u201d and explain how these two phases of organizational identity are related. In doing so we bring together existing theory concerning the links between organizational identities and images, with new theory concerning how reflection embeds identity in organizational culture and how identity expresses cultural understandings through symbols. We offer a model of organizational identity dynamics built on four processes linking organizational identity to culture and image. While the processes linking identity and image (mirroring and impressing) have been described in the literature before, the contribution of this paper lies in articulation of the processes linking identity and culture (reflecting and expressing), and of the interaction of all four processes working dynamically together to create, maintain and change organizational identity. We discuss the implications of our model in terms of two dysfunctions of organizational identity dynamics: narcissism and loss of culture."}
{"_id": "0d4d39f220ca90cb9819da2cfbf02746cf2efa33", "title": "Cross-modal plasticity: where and how?", "text": "Animal studies have shown that sensory deprivation in one modality can have striking effects on the development of the remaining modalities. Although recent studies of deaf and blind humans have also provided convincing behavioural, electrophysiological and neuroimaging evidence of increased capabilities and altered organization of spared modalities, there is still much debate about the identity of the brain systems that are changed and the mechanisms that mediate these changes. Plastic changes across brain systems and related behaviours vary as a function of the timing and the nature of changes in experience. This specificity must be understood in the context of differences in the maturation rates and timing of the associated critical periods, differences in patterns of transiently existing connections, and differences in molecular factors across brain systems."}
{"_id": "0148bbc80ea2f2526ab019a317639b4fb357f399", "title": "A Machine Learning Approach to Twitter User Classification", "text": "This paper addresses the task of user classification in social media, with an application to Twitter. We automatically infer the values of user attributes such as political orientation or ethnicity by leveraging observable information such as the user behavior, network structure and the linguistic content of the user\u2019s Twitter feed. We employ a machine learning approach which relies on a comprehensive set of features derived from such user information. We report encouraging experimental results on 3 tasks with different characteristics: political affiliation detection, ethnicity identification and detecting affinity for a particular business. Finally, our analysis shows that rich linguistic features prove consistently valuable across the 3 tasks and show great promise for additional user classification needs."}
{"_id": "cee34135983222e021557154d0c2bbb921055aa2", "title": "A Weighted Finite-State Transducer (WFST)-Based Language Model for Online Indic Script Handwriting Recognition", "text": "Though designing of classifies for Indic script handwriting recognition has been researched with enough attention, use of language model has so far received little exposure. This paper attempts to develop a weighted finite-state transducer (WFST) based language model for improving the current recognition accuracy. Both the recognition hypothesis (i.e. the segmentation lattice) and the lexicon are modeled as two WFSTs. Concatenation of these two FSTs accept a valid word(s) which is (are) present in the recognition lattice. A third FST called error FST is also introduced to retrieve certain words which were missing in the previous concatenation operation. The proposed model has been tested for online Bangla handwriting recognition though the underlying principle can equally be applied for recognition of offline or printed words. Experiment on a part of ISI-Bangla handwriting database shows that while the present classifiers (without using any language model) can recognize about 73% word, use of recognition and lexicon FSTs improve this result by about 9% giving an average word-level accuracy of 82%. Introduction of error FST further improves this accuracy to 93%. This remarkable improvement in word recognition accuracy by using FST-based language model would serve as a significant revelation for the research in handwriting recognition, in general and Indic script handwriting recognition, in particular."}
{"_id": "5437a91d51f811eeb0caa67bde36262deafc8559", "title": "#greysanatomy vs. #yankees: Demographics and Hashtag Use on Twitter", "text": "Demographics, in particular, gender, age, and race, are a key predictor of human behavior. Despite the significant effect that demographics plays, most scientific studies using online social media do not consider this factor, mainly due to the lack of such information. In this work, we use state-of-theart face analysis software to infer gender, age, and race from profile images of 350K Twitter users from New York. For the period from November 1, 2014 to October 31, 2015, we study which hashtags are used by different demographic groups. Though we find considerable overlap for the most popular hashtags, there are also many group-specific hashtags."}
{"_id": "09f37a2f04406faec9ce8f003a072a2125f2ce68", "title": "Feminism and interaction design", "text": "This workshop is aimed at exploring the issues at the intersection of feminist thinking and human computer interaction. Both feminism and HCI have made important contributions to social science in the past several decades, but though their potential for overlap seem high, they have not engaged each other directly until recently. In this workshop we will explore diverse--and contentious--ways that feminist perspectives can support user research, design ideation and problem framing, sketching and prototyping, and design criticism and evaluation. The workshop will include fast-moving mini-panels and hands-on group exercises emphasizing feminist interaction criticism and design ideation."}
{"_id": "388c99152ee9541be781453529abff41263dba7d", "title": "Scope+: a stereoscopic video see-through augmented reality microscope", "text": "During the usage of conventional stereo microscope, repetitive head movement are often inevitable for the users to retrieve information away from the microscope. Moreover, risks of surgeons loosing focus and increasing fatigue could also arise during microsurgeries. Therefore, Scope+, a stereoscopic video see-through augmented reality system was created not only to solve the problems mentioned above, but also to improve users stereo microscopic experience."}
{"_id": "013332299b962dfcbec40eb24fee59808478d8aa", "title": "Radio Link Frequency Assignment", "text": "The problem of radio frequency assignment is to provide communication channels from limited spectral resources whilst keeping to a minimum the interference suffered by those whishing to communicate in a given radio communication network. This problem is a combinatorial (NP-hard) optimization problem. In 1993, the CELAR (the French \u201cCentre d'Electronique de l'Armement\u201d) built a suite of simplified versions of Radio Link Frequency Assignment Problems (RLFAP) starting from data on a real network Roisnel93. Initially designed for assessing the performances of several Constraint Logic Programming languages, these benchmarks have been made available to the public in the framework of the European EUCLID project CALMA (Combinatorial Algorithms for Military Applications). These problems should look very attractive to the CSP community: the problem is simple to represent, all constraints are binary and involve finite domain variables. They nevertheless have some of the flavors of real problems (including large size and several optimization criteria). This paper gives essential facts about the CELAR instances and also introduces the GRAPH instances which were generated during the CALMA project."}
{"_id": "3d885ce83b18a7baa3675471ec89552dfbabdcbf", "title": "A three-stage decision framework for multi-subject emotion recognition using physiological signals", "text": "This paper investigates the potential of physiological signals as reliable channels for multi-subject emotion recognition. A three-stage decision framework is proposed for recognizing four emotions of multiple subjects. The decision framework consists of three stages: (1) in the initial stage, identifying a subject group that a test subject can be mapped to; (2) in the second stage, identifying an emotion pool that an instance of the test subject can be assigned to; and (3) in the final stage, generating the predicted emotion from the given emotion pool for the test instance. In comparison with a series of alternative methods, the high accuracy of 70.04% achieved by our proposed method clearly demonstrates the potential of the three-stage decision method in multi-subject emotion recognition."}
{"_id": "0b9af9b0ac87fafd9d7747d8047df38ee58dc647", "title": "Multimodal deep learning for robust RGB-D object recognition", "text": "Robust object recognition is a crucial ingredient of many, if not all, real-world robotics applications. This paper leverages recent progress on Convolutional Neural Networks (CNNs) and proposes a novel RGB-D architecture for object recognition. Our architecture is composed of two separate CNN processing streams - one for each modality - which are consecutively combined with a late fusion network. We focus on learning with imperfect sensor data, a typical problem in real-world robotics tasks. For accurate learning, we introduce a multi-stage training methodology and two crucial ingredients for handling depth data with CNNs. The first, an effective encoding of depth information for CNNs that enables learning without the need for large depth datasets. The second, a data augmentation scheme for robust learning with depth images by corrupting them with realistic noise patterns. We present state-of-the-art results on the RGB-D object dataset [15] and show recognition in challenging RGB-D real-world noisy settings."}
{"_id": "84af5d7263b357852daabffa44ea13bf0a7e9bd2", "title": "Replicating Milgram: Would people still obey today?", "text": "The author conducted a partial replication of Stanley Milgram's (1963, 1965, 1974) obedience studies that allowed for useful comparisons with the original investigations while protecting the well-being of participants. Seventy adults participated in a replication of Milgram's Experiment 5 up to the point at which they first heard the learner's verbal protest (150 volts). Because 79% of Milgram's participants who went past this point continued to the end of the shock generator's range, reasonable estimates could be made about what the present participants would have done if allowed to continue. Obedience rates in the 2006 replication were only slightly lower than those Milgram found 45 years earlier. Contrary to expectation, participants who saw a confederate refuse the experimenter's instructions obeyed as often as those who saw no model. Men and women did not differ in their rates of obedience, but there was some evidence that individual differences in empathic concern and desire for control affected participants' responses."}
{"_id": "8e4c77d35317658162b1ebec9bdf00f43e3e0ff7", "title": "MaCow: Masked Convolutional Generative Flow", "text": "Flow-based generative models, conceptually attractive due to tractability of both the exact log-likelihood computation and latent-variable inference, and efficiency of both training and sampling, has led to a number of impressive empirical successes and spawned many advanced variants and theoretical investigations. Despite their computational efficiency, the density estimation performance of flow-based generative models significantly falls behind those of state-of-the-art autoregressive models. In this work, we introduce masked convolutional generative flow (MACOW), a simple yet effective architecture of generative flow using masked convolution. By restricting the local connectivity in a small kernel, MACOW enjoys the properties of fast and stable training, and efficient sampling, while achieving significant improvements over Glow for density estimation on standard image benchmarks, considerably narrowing the gap to autoregressive models."}
{"_id": "9d5463df7ddb1c12a477f8b8b423f1a5fd6e360b", "title": "Spaceborne MIMO Synthetic Aperture Radar for Multimodal Operation", "text": "In this paper, we introduce a novel multiple-input multiple-output (MIMO) synthetic aperture radar (SAR) concept for multimodal operation. The proposed system employs waveforms based on the orthogonal frequency division multiplexing (OFDM) technique and digital beamforming (DBF) on receive. Thereby, it becomes feasible to maximize spatial degrees of freedom, which are necessary for the multimodal operation. The proposed MIMO SAR system produces multiple high-resolution wide-swath SAR imageries that are used for coherent postprocessing. Through this paper, we aim to open up a new perspective of using MIMO concept for a wide-swath SAR imaging with high resolution in interferometric and polarimetric modes, based on OFDM and DBF techniques. Therefore, this paper encompasses broad theoretical backgrounds and general system design issues as well as specific signal processing techniques and aspects."}
{"_id": "bd2d6819f2bd9dfa9e59848e4e4cd204db218de3", "title": "A Different Approach to the Management of Osteoarthritis in the Knee: Ultrasound Guided Genicular Nerve Block.", "text": "A 61-year-old female was referred to our tertiary physical medicine and rehabilitation outpatient clinic with a 7-year history of severe left knee pain. The patient was a nonsmoker and obese, with no comorbidities, and reported that the left knee pain worsened when walking and descending stairs and particularly when climbing stairs. Radiography of the left knee revealed KellgrenLawrence grade 3 osteoarthritis (OA) with multiple osteophytes and definite joint space narrowing."}
{"_id": "2e6df38c92e9a6a4275a8cf2e495690db19a5dfd", "title": "Qualitative comparison of graph-based and logic-based multi-relational data mining: a case study", "text": "The goal of this paper is to generate insights about the differences between graph-based and logic-based approaches to multi-relational data mining by performing a case study of graph-based system, Subdue and the inductive logic programming system, CProgol. We identify three key factors for comparing graph-based and logic-based multi-relational data mining; namely, the ability to discover structurally large concepts, the ability to discover semantically complicated concepts and the ability to effectively utilize background knowledge. We perform an experimental comparison of Subdue and CProgol on the Mutagenesis domain and various artificially generated Bongard problems. Experimental results indicate that Subdue can significantly outperform CProgol while discovering structurally large multi-relational concepts. It is also observed that CProgol is better at learning semantically complicated concepts and it tends to use background knowledge more effectively than Subdue."}
{"_id": "f645c139c1023cbb5dc570e8b8a5ad414b1ac982", "title": "Lung Nodule Detection Using Convolutional Neural Networks", "text": "We present a novel procedure to apply deep learning techniques to medical image classification. With increasing popularity of computed tomography (CT) lung screening, fully manual diagnosis of lung cancer puts a burden on the radiologists who need to spend hours reading through CT scanned images to identify Region of Interests (ROIs) to schedule follow-ups. Accurate computer-aided diagnosis of lung cancer can effectively reduce their workload and help training new radiologists. However, lung cancer detection is challenging because of the varying size, location, shape, and density of nodules. Many studies have approached this problem using image-processing techniques with the intention of developing an optimal set of features. Convolutional neural network has demonstrated to learn discriminative visual features automatically and has beat many state-of-art algorithms in image-processing tasks, such as pattern recognition, object detection, segmentation, etc. In this report, we evaluate the feasibility of implementing deep learning algorithms for lung cancer diagnosis with the Lung Image Database Consortium (LIDC) database. We compare the performance of two 2.5D convolutional neural network models, two 3D convolutional neural network models, and one 3D convolutional neural networks with a 3D spatial transformer network module on the task of detecting lung nodules. The performance of our best model is comparable with the state-of-art results in the lung nodule detection task."}
{"_id": "76812f996c4ac58e4881792691f3be22dff363db", "title": "Extreme Programming and Agile Software Development Methodologies", "text": "As a stakeholder of a software project, how does this sound to you? You can have releases as often as you like. The small number of defects is unprecedented. All of the features in the system are the most valuable ones to your business. At anytime, you have access to complete, accurate information as the status of any feature and of the quality of the system as a whole. The team developing your project works in an energized space with constant communication about the project. You are not dependent on any one or even two programmers for the continued success of the project. If your needs change, the development team welcomes the change of direction. As a developer of a software project, how does this sound to you? No one estimates your tasks but you, period! You always have access to a customer to clarify details about the features you are implementing. You are free (and required) to clean up the code whenever necessary. You complete a project every two weeks. You can work in any part of the system that you wish. You can pick who will help you on any given task. You are not required to work constantly long hours. Does this sound to good to be true? Teams are achieving these advantages using a relatively new set of software methodologies. Collectively, they are referred to as Agile Software Development Methodologies. The most pervasive is Extreme Programming. This chapter will introduce you to these exciting, popular, yet controversial new approaches to software development. ____________________________________________________________________________"}
{"_id": "5ab94b60a2fe2e9216995d8c56685e2eff6a0127", "title": "Re-inventing PTPTN study loan with blockchain and smart contracts", "text": "The issue of default payments from borrowers of the National Higher Education Fund Corporation (PTPTN) is worrisome. Many borrowers fail to pay their loans and claims that the PTPTN has poor management and filing system. This study proposed a prototype for managing study loan repayment utilizing blockchain and smart contracts. Borrowers have full access toward their accounts and ledgers while corporation filing and management system get automatically up-to-date with the assistance of smart contracts."}
{"_id": "3c4228215c9a8214e4d73ccbea36badcf461a127", "title": "Arousal-Biased Competition in Perception and Memory.", "text": "Our everyday surroundings besiege us with information. The battle is for a share of our limited attention and memory, with the brain selecting the winners and discarding the losers. Previous research shows that both bottom-up and top-down factors bias competition in favor of high priority stimuli. We propose that arousal during an event increases this bias both in perception and in long-term memory of the event. Arousal-biased competition theory provides specific predictions about when arousal will enhance memory for events and when it will impair it, which accounts for some puzzling contradictions in the emotional memory literature."}
{"_id": "20a00d738b3c6ec8a73443b9b89caed66e2ac6d5", "title": "Innovation in Analogical Design: A Model-Based Approach", "text": "Analogical reasoning, it is commonly accepted, plays an important role in innovation and creativity. Since design often is innovative, and can be creative, the design task provides a good context for exploring the role of analogy in innovation and creativity. In the IDEAL project, we are exploring the processes of innovation and creativity in the context of conceptual (or preliminary, qualitative) design of physical devices. In particular, we are developing a model-based approach to innovation and creativity in analogical design: our core hypothesis is that innovative analogical design involves both reasoning from past experiences in designing devices (in the form of analogs or cases) and comprehension of how those devices work (in the form of device models or theories). In this paper, we focus on the issues of understanding feedback on the performance of a design, discovery of new design constraints, and reformulation of the design problem, and describe how the model-based approach addresses these issues."}
{"_id": "a9022d8ffb5e417458fba9a280f90c1b08cb6c73", "title": "Stronger generalization bounds for deep nets via a compression approach", "text": "Deep nets generalize well despite having more parameters than the number of training samples. Recent works try to give an explanation using PAC-Bayes and Margin-based analyses, but do not as yet result in sample complexity bounds better than naive parameter counting. The current paper shows generalization bounds that\u2019re orders of magnitude better in practice. These rely upon new succinct reparametrizations of the trained net \u2014 a compression that is explicit and efficient. These yield generalization bounds via a simple compression-based framework introduced here. Our results also provide some theoretical justification for widespread empirical success in compressing deep nets. Analysis of correctness of our compression relies upon some newly identified \u201cnoise stability\u201dproperties of trained deep nets, which are also experimentally verified. The study of these properties and resulting generalization bounds are also extended to convolutional nets, which had eluded earlier attempts on proving generalization."}
{"_id": "8077a34c426acff10f0717c0cf0b99958fc3c5ed", "title": "Automatic Text Classification: A Technical Review", "text": "Automatic Text Classification is a semi-supervised machine learning task that automatically assigns a given document to a set of pre-defined categories based on its textual content and extracted features. Automatic Text Classification has important applications in content management, contextual search, opinion mining, product review analysis, spam filtering and text sentiment mining. This paper explains the generic strategy for automatic text classification and surveys existing solutions to major issues such as dealing with unstructured text, handling large number of attributes and selecting a machine learning technique appropriate to the text-classification application."}
{"_id": "9fe75a6870a6c0e19d34513052b542053d9b316e", "title": "Control of the power in induction heating systems with L-LC resonant voltage source inverters", "text": "This paper is a synthesis of the control circuit in an induction heating system with three-phase fully-controlled rectifier, single-phase voltage source inverter and L-LC resonant load. The attention is mainly directed on the control of the power transmitted to the induction coil. It is highlighted that the analytical design of a proportional integral derivative or proportional integral controller is practically impossible when the direct control of the current through inductor is desired. The two possibilities of power control taken into consideration are the inverter output current control and the inverter input voltage control. The design of the current and voltage controllers of proportional integral derivative type was carried out successfully based on Modulus Optimum criterion in Kessler variant. The performances of the control system in both variants of the power control were tested and validated by simulation under the Matlab-Simulink environment by using real parameters of induction coils and heated pipes from a leading Romanian manufacturer."}
{"_id": "4074b5f21a1da0e21301818e4c0fc7edbc9aef46", "title": "\"Exercise to be fit, not skinny\": The effect of fitspiration imagery on women's body image.", "text": "Fitspiration is an online trend designed to inspire viewers towards a healthier lifestyle by promoting exercise and healthy food. The present study aimed to experimentally investigate the impact of fitspiration images on women's body image. Participants were 130 female undergraduate students who were randomly assigned to view either a set of Instagram fitspiration images or a control set of travel images presented on an iPad. Results showed that acute exposure to fitspiration images led to increased negative mood and body dissatisfaction and decreased state appearance self-esteem relative to travel images. Importantly, regression analyses showed that the effects of image type were mediated by state appearance comparison. Thus it was concluded that fitspiration can have negative unintended consequences for body image. The results offer support to general sociocultural models of media effects on body image, and extend these to \"new\" media."}
{"_id": "7e4cfd0d7fce6b9ae637b5d9d6d3991e5d3e747e", "title": "Accurate Prediction of Ferrite Core Loss with Nonsinusoidal Waveforms Using Only Steinmetz Parameters", "text": "An improved calculation of ferrite core loss for nonsinusoidal waveforms separates a flux trajectory into major and minor loops via a new recursive algorithm. It is highly accurate and outperforms two previous methods for our measured data. The only characteristics of the material required are the standard Steinmetz-equation parameters."}
{"_id": "9f886f40ca1eabd13a9608cf59fee9792af888b0", "title": "Model and design of PCB winding parallel for planar transformer", "text": "In this article, the effect of different paralleling strategies, 1D field has been adopted, and a software has been developed to calculate the AC loss and leakage inductance. The problem and causes for unequal current sharing among parallel layers are discussed in this paper."}
{"_id": "fc6660aa5d2ed2069aee41420c6bb3d581d07ac6", "title": "A dynamic core loss model for soft ferromagnetic and power ferrite materials in transient finite element analysis", "text": "A dynamic core loss model is proposed to estimate core loss in both soft ferromagnetic and power ferrite materials with arbitrary flux waveforms. The required parameters are the standard core loss coefficients that are either directly provided by manufacturers or extracted from the loss curve associated with sinusoidal excitation. The model is applied to calculating core loss in both two-dimensional and three-dimensional transient finite element analysis, and the results are compared with measured data."}
{"_id": "8840316f3e7d9b559e2b9a14678d48171519d477", "title": "Current-Fed Dual-Bridge DC\u2013DC Converter", "text": "A new isolated current-fed pulsewidth modulation dc-dc converter-current-fed dual-bridge dc-dc converter-with small inductance and no deadtime operation is presented and analyzed. The new topology has more than 3times smaller inductance than that of current-fed full-bridge converter, thus having faster transient response speed. Other characteristics include simple self-driven synchronous rectification, simple housekeeping power supply, and smaller output filter capacitance. Detailed analysis shows the proposed converter can have either lower voltage stress on all primary side power switches or soft switching properties when different driving schemes are applied. A 48-V/125-W prototype dc-dc converter with dual output has been tested for the verification of the principles. Both simulations and experiments verify the feasibility and advantages of the new topology"}
{"_id": "8544b15a0f6aa6a795cf1d5af956e980f68dec6b", "title": "Soil Property Prediction: An Extreme Learning Machine Approach", "text": "In this paper, we propose a method for predicting functional properties of soil samples from a number of measurable spatial and spectral features of those samples. Our method is based on Savitzky-Golay filter for preprocessing and a relatively recent evolution of single hiddenlayer feed-forward network (SLFN) learning technique called extreme learning machine (ELM) for prediction. We tested our method with Africa Soil Property Prediction dataset, and observed that the results were promising."}
{"_id": "ec7ad664975c59200a477238f3cfad4942da05c8", "title": "ADMM for harmonic retrieval from one-bit sampling with time-varying thresholds", "text": "Parameter estimation of sinusoids is a problem of significance in many practical applications. This problem is revisited through a new alternating direction method of multipliers (ADMM) based approach, where the unknown parameters are estimated from one-bit quantized noisy measurements with time varying thresholds. The proposed method is computationally efficient and easy to implement, since each ADMM update has simple closed-form formula. Moreover, it provides accurate estimates by exploiting group sparsity hidden in the signal model. Simulations demonstrate the effectiveness of our algorithm."}
{"_id": "16e6d49b478cee9394712e4de2f61ef3746d8f18", "title": "Monte Carlo localization in hand-drawn maps", "text": "Robot localization is one of the most important problems in robotics. Most of the existing approaches assume that the map of the environment is available beforehand and focus on accurate metrical localization. In this paper, we address the localization problem when the map of the environment is not present beforehand, and the robot relies on a hand-drawn map from a non-expert user. We addressed this problem by expressing the robot pose in the pixel coordinate and simultaneously estimate a local deformation of the hand-drawn map. Experiments show that we can successfully identify the room in which the robot is located in 80% of the tests."}
{"_id": "8278c71f0c58df9387821ecbbe7369f5da0400c7", "title": "The Fast Bilateral Solver Supplement", "text": "This supplement provides additional details on the bilateral solver and its extensions in Sections 1\u20134, and a plethora of experiments and comparisons to prior work in Section 5. In Section 1 we derive the bilateral-space quadratic objective at the core of the bilateral solver. Section 2 details the bilateral-space pyramid representation used for faster initialization and preconditioning. Section 3 extends the bilateral solver to use a robust error function to cope with outliers. Section 4 improves the performance of the bilateral solver when applied to multiple output channels. Finally, in Section 5 we provide additional evaluation of the bilateral solver and extensively compare it to numerous existing and novel baseline techniques."}
{"_id": "340bb6c4a35b2ff408a5366411e3422bdddf0c8d", "title": "A Framework for Emotion Mining from Text in Online Social Networks", "text": "Online Social Networks are so popular nowadays that they are a major component of an individual\u2019s social interaction. They are also emotionally-rich environments where close friends share their emotions, feelings and thoughts. In this paper, a new framework is proposed for characterizing emotional interactions in social networks, and then using these characteristics to distinguish friends from acquaintances. The goal is to extract the emotional content of texts in online social networks. The interest is in whether the text is an expression of the writer\u2019s emotions or not. For this purpose, text mining techniques are performed on comments retrieved from a social network. The framework includes a model for data collection, database schemas, data processing and data mining steps. The informal language of online social networks is a main point to consider before performing any text mining techniques. This is why the framework includes the development of special lexicons. In general, the paper presents a new perspective for studying friendship relations and emotions\u2019 expression in online social networks where it deals with the nature of these sites and the nature of the language used. It considers Lebanese Face book users as a case study. The technique adopted is unsupervised, it mainly uses the k-means clustering algorithm. Experiments show high accuracy for the model in both determining subjectivity of texts and predicting friendship."}
{"_id": "3cb84429bc25ad60c0b00bba3f06c09200580536", "title": "Pseudo Random Pulse Driven Advanced In-Cell Touch Screen Panel for Spectrum Spread Electromagnetic Interference", "text": "For the first time, we analyze the mechanism of electromagnetic interference (EMI) in an advanced in-cell touch (AIT) panel, which is a state-of-the-art in-cell touch screen panel technology. The AIT panel has stronger EMI generation at a specific frequency band (0.15 ~ 30 MHz) due to a load free driving (LFD) method. The LFD is adopted to overcome the structural vulnerability of in-cell touch panel, which incurs large parasitic capacitance due to the touch electrodes within the display panel. In order to overcome the EMI problem, we present a novel driving method using pseudo random pulse (PRP). The power spectrum of PRP is well spread over the frequency band, resulting in a lower EMI. We measured the EMI level with a near-field probe, and the proposed driving method shows the EMI reduction of 6.5 dB. This EMI reduction is accomplished without structural reconfiguration, as well as the touch performance remains the same as that before PRP is applied. Therefore, the proposed driving method can be utilized in the field of automotive, military, and aviation industries as a user interface which requires high touch performance and low EMI generation due to the peculiarities."}
{"_id": "a1e0983b74ca29db8aa448de2cf6b5a20aed8da5", "title": "Design of a new Dual Rotor Radial Flux BLDC motor with Halbach array magnets for an electric vehicle", "text": "This paper presents a novel topology of dual rotor radial flux-BLDC motor with Halbach array magnets as an efficient solution for drives. Energy efficient motors are the need of day for electric vehicles, industrial and consumer products. Especially this concept is arrived to cater for the requirements of electrical vehicle. Due to advent of high permeability, high saturation magnetic materials, high energy product permanent magnets, BLDC motors are suitable candidate for electric vehicle propulsion. Traditionally, outer rotor BLDC motor is a wide choice which is being used in commercially available electric vehicles. New topologies of BLDC motors are being proposed by research activities which aim at improvement of torque density of motors. Dual rotor brushless DC machine (DR-BLDC) is one of the consequences of such research and claimed to have very high torque density and power density. In this paper, a new topology of Dual Rotor Halbach (DR-HB) BLDC motor is proposed which is aimed to increase the torque density of motor. The complete design of DR-HB BLDC motor is carried out and compared with DR-BLDC of the same specifications and volume. The obtained performance characteristics of the proposed design are very much encouraging in terms of torque density."}
{"_id": "3dded149a16ee3cb28ce726a2268542943cf39ad", "title": "Improved Fully Differential Analog Filters", "text": "This paper proposes two novel design techniques that improve the performance of fully differential filters built by coupling two equal single-ended filters. These filters usually reach a high common-mode rejection ratio (CMRR) without using tightly matched passive components but may become unstable because of common-mode signals. Previously proposed techniques to solve these problems, based on resistor or capacitor T-networks, reduce the CMRR. The two techniques proposed overcome this tradeoff. The first technique improves the stability of the filter common-mode gain by adding damping resistors that do not modify the desired differential frequency response or the CMRR. The second technique, which is particularly valuable when a path for input bias currents must be provided, uses a resistor network to estimate the common-mode voltage of the input and uses it as common ground voltage for the filter."}
{"_id": "64228e5000610eb7d019f1125964231fe9d803b9", "title": "MLH-IDS: A Multi-Level Hybrid Intrusion Detection Method", "text": "With the growth of networked computers and associated applications, intrusion detection has become essential to keeping networks secure. A number of intrusion detection methods have been developed for protecting computers and networks using conventional statistical methods as well as data mining methods. Data mining methods for misuse and anomaly-based intrusion detection, usually encompass supervised, unsupervised and outlier methods. It is necessary that the capabilities of intrusion detection methods be updated with the creation of new attacks. This paper proposes a multi-level hybrid intrusion detection method that uses a combination of supervised, unsupervised and outlierbased methods for improving the efficiency of detection of new and old attacks. The method is evaluated with a captured real-time flow and packet dataset called the Tezpur University intrusion detection system (TUIDS) dataset, a distributed denial of service dataset, and the benchmark intrusion dataset called the knowledge discovery and data mining Cup 1999 dataset and the new version of KDD (NSL-KDD) dataset. Experimental results are compared with existing multi-level intrusion detection methods and other classifiers. The performance of our method is very good."}
{"_id": "8774b4aecffc4dae4f9ed96e34d2be019a75fbb7", "title": "Comparisons of randomization and K-degree anonymization schemes for privacy preserving social network publishing", "text": "Many applications of social networks require identity and/or relationship anonymity due to the sensitive, stigmatizing, or confidential nature of user identities and their behaviors. Recent work showed that the simple technique of anonymizing graphs by replacing the identifying information of the nodes with random ids does not guarantee privacy since the identification of the nodes can be seriously jeopardized by applying background based attacks. In this paper, we investigate how well an edge based graph randomization approach can protect node identities and sensitive links. We quantify both identity disclosure and link disclosure when adversaries have one specific type of background knowledge (i.e., knowing the degrees of target individuals). We also conduct empirical comparisons with the recently proposed K-degree anonymization schemes in terms of both utility and risks of privacy disclosures."}
{"_id": "b882cfd4cbfe33dd7bede63dd8e2a4d46986d42b", "title": "Animal-assisted therapy with children suffering from insecure attachment due to abuse and neglect: a method to lower the risk of intergenerational transmission of abuse?", "text": "Children suffering from insecure attachment due to severe abuse and/or neglect are often characterized by internal working models which, although perhaps adaptive within the original family situation, are inappropriate and maladaptive in other relationships and situations. Such children have a higher probability than the general population of becoming abusing or neglecting parents. Besides the usual goals of psychotherapy, an overall goal is to stop the cycle of abuse in which abused children may grow up to be abusing parents. Therapy with these children is complicated by their distrust in adults as well as difficulties in symbolization due to trauma during the preverbal stage. Animal-Assisted Therapy (AAT) provides avenues for circumventing these difficulties, as well as providing additional tools for reaching the inner world of the client. This article gives a brief background of the connection between insecure attachment and intergenerational transmission of abuse and neglect as well as a brief overview of the principles of AAT in a play therapy setting. A rationale for the use of AAT as a unique therapy technique for children having suffered from abuse and neglect is followed by a number of clinical examples illustrating AAT."}
{"_id": "2257271f86889839faaccfa400519a0e37a081f2", "title": "Adapting recommender systems to the requirements of personal health record systems", "text": "In the future many people in industrialized countries will manage their personal health data electronically in centralized, reliable and trusted repositories - so-called personal health record systems (PHR). At this stage PHR systems still fail to satisfy the individual medical information needs of their users. Personalized recommendations could solve this problem.\n A first approach of integrating recommender system (RS) methodology into personal health records - termed health recommender system (HRS) - is presented. By exploitation of existing semantic networks like Wikipedia a health graph data structure is obtained. The data kept within such a graph represent health related concepts and are used to compute semantic distances among pairs of such concepts.\n A ranking procedure based on the health graph is outlined which enables a match between entries of a PHR system and health information artifacts. This way a PHR user will obtain individualized health information he might be interested in."}
{"_id": "efc83fb3403d2b93c0712300cda423a850c72c31", "title": "A proposed semantic machine translation system for translating Arabic text to Arabic sign language", "text": "Arabic Sign Language (ArSL) is the native language for the Arabic deaf community. ArSL allows deaf people to communicate among themselves and with non-deaf people around them to express their needs, thoughts and feelings. Opposite to spoken languages, Sign Language (SL) depends on hands and facial expression to express person thoughts instead of sounds. In recent years, interest in automatically translating text to sign language for different languages has increased. However, a small set of these works are specialized in ArSL. Basically, these works translate word by word without taking care of the semantics of the translated sentence or the translation rules of Arabic text to Arabic sign language. In this paper we will present a proposed system for translating Arabic text to Arabic sign language in the jurisprudence of prayer domain. The proposed system will translate Arabic text to ArSL by applying ArSL translation rules as well as using domain ontology."}
{"_id": "76c3cd71c877ea61ea71619d1d5aa96301f8abd3", "title": "Automatic image retargeting with fisheye-view warping", "text": "Image retargeting is the problem of adapting images for display on devices different than originally intended. This paper presents a method for adapting large images, such as those taken with a digital camera, for a small display, such as a cellular telephone. The method uses a non-linear fisheye-view warp that emphasizes parts of an image while shrinking others. Like previous methods, fisheye-view warping uses image information, such as low-level salience and high-level object recognition to find important regions of the source image. However, unlike prior approaches, a non-linear image warping function emphasizes the important aspects of the image while retaining the surrounding context. The method has advantages in preserving information content, alerting the viewer to missing information and providing robustness."}
{"_id": "6a93746a943ac42ebd8392a12ddc8c4f0c304b80", "title": "A 7.6 mW, 414 fs RMS-Jitter 10 GHz Phase-Locked Loop for a 40 Gb/s Serial Link Transmitter Based on a Two-Stage Ring Oscillator in 65 nm CMOS", "text": "This paper describes the design of a 10 GHz phase-locked loop (PLL) for a 40 Gb/s serial link transmitter (TX). A two-stage ring oscillator is used to provide a four-phase, 10 GHz clock for a quarter-rate TX. Several analyses and verification techniques, ranging from the clocking architectures for a 40 Gb/s TX to oscillation failures in a two-stage ring oscillator, are addressed in this paper. A tri-state-inverter-based frequency-divider and an AC-coupled clock-buffer are used for high-speed operations with minimal power and area overheads. The proposed 10 GHz PLL fabricated in the 65 nm CMOS technology occupies an active area of 0.009 mm2 with an integrated-RMS-jitter of 414 fs from 10 kHz to 100 MHz while consuming 7.6 mW from a 1.2-V supply. The resulting figure-of-merit is -238.8 dB, which surpasses that of the state-of-the-art ring-PLLs by 4 dB."}
{"_id": "9d531ef3b1d58ffd1f68855f485f1cc704970b6d", "title": "Fusion of VR and teleoperation for innovative near-presence laboratory experience in engineering education", "text": "Distance learning is becoming a popular mode of learning among individuals seeking to add new skills to their repertoire or acquire an advanced degree. The availability of online course materials from institutions in academia aid this process, lending standardized and validated information to the audience seeking such knowledge. Although the online system of learning is extremely convenient for acquiring theoretical knowledge, the scope of hands-on experience is limited to the use of simulation software or involves the purchase of expensive laboratory kits. The proliferation of new technology like Virtual Reality (VR), Fifth Generation cellular networks (5G), and industrial robotics can be leveraged to bridge this gap, however. This will enable effective delivery of traditional disciplines like Electrical, Computer, or Mechanical Engineering using the online mode. This paper discusses the enabling technologies for implementing such a system. An ongoing effort to establish a proof-of-concept system in the Electrical and Computer Engineering department of the University of Nebraska-Lincoln is presented."}
{"_id": "7bad7050e0f447ea68c1ff5838cd5392726ca388", "title": "Discriminating IT Governance", "text": "T information technology (IT) governance literature predominantly explains firms\u2019 IT governance choices, but not their strategic consequences. We develop the idea that a firm\u2019s IT governance choices induce adeptness at strategically exploiting IT only when they are discriminatingly aligned with its departments\u2019 knowledge outside their specialty. Discriminating means that governing the two undertheorized classes of IT assets\u2014apps and infrastructure\u2014requires \u201cperipheral\u201d knowledge in different departments. Analyses of data from 105 firms support our middle-range theory."}
{"_id": "bfc2092f1b7ae6238175f0b09caf7e1d04e81d00", "title": "Detecting malicious tweets in trending topics using a statistical analysis of language", "text": "0957-4174/$ see front matter 2012 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2012.12.015 \u21d1 Corresponding author. E-mail addresses: juaner@lsi.uned.es (J. MartinezAraujo). Twitter spam detection is a recent area of research in which most previous works had focused on the identification of malicious user accounts and honeypot-based approaches. However, in this paper we present a methodology based on two new aspects: the detection of spam tweets in isolation and without previous information of the user; and the application of a statistical analysis of language to detect spam in trending topics. Trending topics capture the emerging Internet trends and topics of discussion that are in everybody\u2019s lips. This growing microblogging phenomenon therefore allows spammers to disseminate malicious tweets quickly and massively. In this paper we present the first work that tries to detect spam tweets in real time using language as the primary tool. We first collected and labeled a large dataset with 34 K trending topics and 20 million tweets. Then, we have proposed a reduced set of features hardly manipulated by spammers. In addition, we have developed a machine learning system with some orthogonal features that can be combined with other sets of features with the aim of analyzing emergent characteristics of spam in social networks. We have also conducted an extensive evaluation process that has allowed us to show how our system is able to obtain an F-measure at the same level as the best state-ofthe-art systems based on the detection of spam accounts. Thus, our system can be applied to Twitter spam detection in trending topics in real time due mainly to the analysis of tweets instead of user accounts. 2012 Elsevier Ltd. All rights reserved."}
{"_id": "15e2a5dae8881d90d9f2581c4c98d630b88189ff", "title": "Enriching textbooks with images", "text": "Textbooks have a direct bearing on the quality of education imparted to the students. Therefore, it is of paramount importance that the educational content of textbooks should provide rich learning experience to the students. Recent studies on understanding learning behavior suggest that the incorporation of digital visual material can greatly enhance learning. However, textbooks used in many developing regions are largely text-oriented and lack good visual material. We propose techniques for finding images from the web that are most relevant for augmenting a section of the textbook, while respecting the constraint that the same image is not repeated in different sections of the same chapter. We devise a rigorous formulation of the image assignment problem and present a polynomial time algorithm for solving the problem optimally. We also present two image mining algorithms that utilize orthogonal signals and hence obtain different sets of relevant images. Finally, we provide an ensembling algorithm for combining the assignments. To empirically evaluate our techniques, we use a corpus of high school textbooks in use in India. Our user study utilizing the Amazon Mechanical Turk platform indicates that the proposed techniques are able to obtain images that can help increase the understanding of the textbook material."}
{"_id": "8075e2a607caac7d458f081c46d51cf1c7833ae9", "title": "Balun and power divider based on multilayer ring resonators (MRR)", "text": "This paper presents novel super compact microwave power dividers and balun (balanced-to-unbalanced) circuits. The proposed devices are based on multilayer ring resonators (MRR) structure. These new microwave devices are highly compact and flexible in design that can operate within various desirable bandwidths from narrow-band to ultrawideband (UWB), hence performing as bandpass filters simultaneously along with their own functions. It is also possible to have arbitrary power divisions. By this technique, a balun can be simply converted to a power diver and vice versa. Sample circuits are designed and the scattering characteristics are provided using electromagnetic simulation software. The dimensions of the devices are 2.3 mm \u03c7 2.3 mm \u03c7 1.5 mm."}
{"_id": "5727e10c5fa55058531bc94bf63b4e265cf55c5d", "title": "Comparative Study on Generative Adversarial Networks", "text": "In recent years, there have been tremendous advancements in the field of machine learning. These advancements have been made through both academic as well as industrial research. Lately, a fair amount of research has been dedicated to the usage of generative models in the field of computer vision and image classification. These generative models have been popularized through a new framework called Generative Adversarial Networks. Moreover, many modified versions of this framework have been proposed in the last two years. We study the original model proposed by Goodfellow et al. (2014) as well as modifications over the original model and provide a comparative analysis of these models."}
{"_id": "37d5aa2d7a05f4c089022a55a017fc9ce556cc06", "title": "A Novel Deployment Scheme for Green Internet of Things", "text": "The Internet of Things (IoT) has been realized as one of the most promising networking paradigms that bridge the gap between the cyber and physical world. Developing green deployment schemes for IoT is a challenging issue since IoT achieves a larger scale and becomes more complex so that most of the current schemes for deploying wireless sensor networks (WSNs) cannot be transplanted directly in IoT. This paper addresses this challenging issue by proposing a deployment scheme to achieve green networked IoT. The contributions made in this paper include: 1) a hierarchical system framework for a general IoT deployment, 2) an optimization model on the basis of proposed system framework to realize green IoT, and 3) a minimal energy consumption algorithm for solving the presented optimization model. The numerical results on minimal energy consumption and network lifetime of the system indicate that the deployment scheme proposed in this paper is more flexible and energy efficient compared to typical WSN deployment scheme; thus is applicable to the green IoT deployment."}
{"_id": "5bf772ffd6364e4c4813346a43a1cb60f8274fd3", "title": "Background modelling based on generative unet", "text": "Background Modelling is a crucial step in background/foreground detection which could be used in video analysis, such as surveillance, people counting, face detection and pose estimation. Most methods need to choose the hyper parameters manually or use ground truth background masks (GT). In this work, we present an unsupervised deep background (BG) modelling method called BM-Unet which is based on a generative architecture that given a certain frame as input it generates as output the corresponding background image \u2014 to be more precise, a probabilistic heat map of the colour values. Our method learns parameters automatically and an augmented version of it that utilises colour, intensity differences and optical flow between a reference and a target frame is robust to rapid illumination changes and camera jitter. Besides, it can be used on a new video sequence without the need of ground truth background/foreground masks for training. Experiment evaluations on challenging sequences in SBMnet data set demonstrate promising results over state-of-the-art methods."}
{"_id": "1f1d7720b1a153b5af8ec707a116c8d08a0a8210", "title": "Software traceability: trends and future directions", "text": "Software traceability is a sought-after, yet often elusive quality in software-intensive systems. Required in safety-critical systems by many certifying bodies, such as the USA Federal Aviation Authority, software traceability is an essential element of the software development process. In practice, traceability is often conducted in an ad-hoc, after-the-fact manner and, therefore, its benefits are not always fully realized. Over the past decade, researchers have focused on specific areas of the traceability problem, developing more sophisticated tooling, promoting strategic planning, applying information retrieval techniques capable of semi-automating the trace creation and maintenance process, developing new trace query languages and visualization techniques that use trace links, and applying traceability in specific domains such as Model Driven Development, product line systems, and agile project environments. In this paper, we build upon a prior body of work to highlight the state-of-the-art in software traceability, and to present compelling areas of research that need to be addressed."}
{"_id": "0cb2dd5f178e3a297a0c33068961018659d0f443", "title": "IARPA Janus Benchmark-B Face Dataset", "text": "Despite the importance of rigorous testing data for evaluating face recognition algorithms, all major publicly available faces-in-the-wild datasets are constrained by the use of a commodity face detector, which limits, among other conditions, pose, occlusion, expression, and illumination variations. In 2015, the NIST IJB-A dataset, which consists of 500 subjects, was released to mitigate these constraints. However, the relatively low number of impostor and genuine matches per split in the IJB-A protocol limits the evaluation of an algorithm at operationally relevant assessment points. This paper builds upon IJB-A and introduces the IARPA Janus Benchmark-B (NIST IJB-B) dataset, a superset of IJB-A. IJB-B consists of 1,845 subjects with human-labeled ground truth face bounding boxes, eye/nose locations, and covariate metadata such as occlusion, facial hair, and skintone for 21,798 still images and 55,026 frames from 7,011 videos. IJB-B was also designed to have a more uniform geographic distribution of subjects across the globe than that of IJB-A. Test protocols for IJB-B represent operational use cases including access point identification, forensic quality media searches, surveillance video searches, and clustering. Finally, all images and videos in IJB-B are published under a Creative Commons distribution license and, therefore, can be freely distributed among the research community."}
{"_id": "7752e0835506a6629c1b06e67f2afb1e5d2bb714", "title": "Convergent and discriminant validation by the multitrait-multimethod matrix.", "text": "Content Memory (Learning Ability) As Comprehension 82 Vocabulary Cs .30 ( ) .23 .31 ( ) .31 .31 .35 ( ) .29 .48 .35 .38 ( ) .30 .40 .47 .58 .48 ( ) As judged against these latter values, comprehension (.48) and vocabulary (.47), but not memory (.31), show some specific validity. This transmutability of the validation matrix argues for the comparisons within the heteromethod block as the most generally relevant validation data, and illustrates the potential interchangeability of trait and method components. Some of the correlations in Chi's (1937) prodigious study of halo effect in ratings are appropriate to a multitrait-multimethod matrix in which each rater might be regarded as representing a different method. While the published report does not make these available in detail because it employs averaged values, it is apparent from a comparison of his Tables IV and VIII that the ratings generally failed to meet the requirement that ratings of the same trait by different raters should correlate higher than ratings of different traits by the same rater. Validity is shown to the extent that of the correlations in the heteromethod block, those in the validity diagonal are higher than the average heteromethod-heterotrait values. A conspicuously unsuccessful multitrait-multimethod matrix is provided by Campbell (1953, 1956) for rating of the leadership behavior of officers by themselves and by their subordinates. Only one of 11 variables (Recognition Behavior) met the requirement of providing a validity diagonal value higher than any of the heterotrait-heteromethod values, that validity being .29. For none of the variables were the validities higher than heterotrait-monomethod values. A study of attitudes toward authority and nonauthority figures by Burwen and Campbell (1957) contains a complex multitrait-multimethod matrix, one symmetrical excerpt from which is shown in Table 6. Method variance was strong for most of the procedures in this study. Where validity was found, it was primarily at the level of validity diagonal values higher than heterotrait-heteromethod values. As illustrated in Table 6, attitude toward father showed this kind of validity, as did attitude toward peers to a lesser degree. Attitude toward boss showed no validity. There was no evidence of a generalized attitude toward authority which would include father and boss, although such values as the VALIDATION BY THE MULTITRAIT-MULTIMETHOD MATRIX"}
{"_id": "9a756fa7e7c8afa53ada2201bcea38a095425a8e", "title": "The Nature and Role of Feedback Text Comments in Online Marketplaces: Implications for Trust Building, Price Premiums, and Seller Differentiation", "text": ""}
{"_id": "e0f22e9eee05566a4fdd9b5c0fae7e2f8b6802f4", "title": "A Latent Semantic Indexing-based approach to multilingual document clustering", "text": "The creation and deployment of knowledge repositories formanaging, sharing, and reusing tacit knowledgewithin an organization has emerged as a prevalent approach in current knowledge management practices. A knowledge repository typically contains vast amounts of formal knowledge elements, which generally are available as documents. To facilitate users' navigation of documents within a knowledge repository, knowledge maps, often created by document clustering techniques, represent an appealing and promising approach. Various document clustering techniques have been proposed in the literature, but most deal with monolingual documents (i.e., written in the same language). However, as a result of increased globalization and advances in Internet technology, an organization often maintains documents in different languages in its knowledge repositories, which necessitates multilingual document clustering (MLDC) to create organizational knowledge maps. Motivated by the significance of this demand, this study designs a Latent Semantic Indexing (LSI)-based MLDC technique capable of generating knowledge maps (i.e., document clusters) from multilingual documents. The empirical evaluation results show that the proposed LSI-based MLDC technique achieves satisfactory clustering effectiveness, measured by both cluster recall and cluster precision, and is capable of maintaining a good balance between monolingual and cross-lingual clustering effectiveness when clustering a multilingual document corpus. \u00a9 2007 Elsevier B.V. All rights reserved."}
{"_id": "3fb4f9bb4a82945558c1b92f00f82fc38f160155", "title": "Interworking of DSRC and Cellular Network Technologies for V2X Communications: A Survey", "text": "Vehicle-to-anything (V2X) communications refer to information exchange between a vehicle and various elements of the intelligent transportation system (ITS), including other vehicles, pedestrians, Internet gateways, and transport infrastructure (such as traffic lights and signs). The technology has a great potential of enabling a variety of novel applications for road safety, passenger infotainment, car manufacturer services, and vehicle traffic optimization. Today, V2X communications is based on one of two main technologies: dedicated short-range communications (DSRC) and cellular networks. However, in the near future, it is not expected that a single technology can support such a variety of expected V2X applications for a large number of vehicles. Hence, interworking between DSRC and cellular network technologies for efficient V2X communications is proposed. This paper surveys potential DSRC and cellular interworking solutions for efficient V2X communications. First, we highlight the limitations of each technology in supporting V2X applications. Then, we review potential DSRC-cellular hybrid architectures, together with the main interworking challenges resulting from vehicle mobility, such as vertical handover and network selection issues. In addition, we provide an overview of the global DSRC standards, the existing V2X research and development platforms, and the V2X products already adopted and deployed in vehicles by car manufactures, as an attempt to align academic research with automotive industrial activities. Finally, we suggest some open research issues for future V2X communications based on the interworking of DSRC and cellular network technologies."}
{"_id": "3ae1699ac44725f9eeba653aeb59d4c35a498b12", "title": "Dual band compact printed monopole antenna for Bluetooth (2.54GHz) and WLAN (5.2GHz) applications", "text": "The printed monopole antenna having dual band is proposed which operates at frequency 2.54 GHz (Bluetooth) and 5.2GHz (WLAN). Bluetooth and WLAN have been widely applied in all new trending laptops and smart phones. These two technologies are well known for its effective cost and high-speed data connection. The antenna comprises of two rectangular patches of different sizes for the required dual-band operations. The presented antenna is fed by corporate feed network which improves impedance bandwidth. The prime motto of this project is to make the smallest (compact) possible antenna so that it can be placed in the limited area of handheld devices. Simulated percentage impedance bandwidth of the antenna are 46.25 (1.958 GHz to 3.13 GHz) and 31.30 (4.15 GHz to 5.69 GHz) respectively. Good return loss and VSWR (less than 2) of the designed antenna is obtained by simulating on IE3D software."}
{"_id": "c7fd5705cf27f32cf36d718fe5ab499fec2d02e3", "title": "Tibial periosteal ganglion cyst: The ganglion in disguise", "text": "Soft tissue ganglions are commonly encountered cystic lesions around the wrist presumed to arise from myxomatous degeneration of periarticular connective tissue. Lesions with similar pathology in subchondral location close to joints, and often simulating a geode, is the less common entity called intraosseous ganglion. Rarer still is a lesion produced by mucoid degeneration and cyst formation of the periostium of long bones, rightly called the periosteal ganglion. They are mostly found in the lower extremities at the region of pes anserinus, typically limited to the periosteum and outer cortex without any intramedullary component. We report the case of a 62 year-old male who presented with a tender swelling on the mid shaft of the left tibia, which radiologically suggested a juxtacortical lesion extending to the soft tissue or a soft tissue neoplasm eroding the bony cortex of tibia. It was later diagnosed definitively as a periosteal ganglion in an atypical location, on further radiologic work-up and histopathological correlation."}
{"_id": "f456ca5d759ba05402721125fe4ce0da1730f683", "title": "CHAPTER 12 WORKFLOW ENGINE FOR CLOUDS", "text": "A workflow models a process as consisting of a series of steps that simplifies the complexity of execution and management of applications. Scientific workflows in domains such as high-energy physics and life sciences utilize distributed resources in order to access, manage, and process a large amount of data from a higher level. Processing and managing such large amounts of data require the use of a distributed collection of computation and storage facilities. These resources are often limited in supply and are shared among many competing users. The recent progress in virtualization technologies and the rapid growth of cloud computing services have opened a new paradigm in distributed computing for utilizing existing (and often cheaper) resource pools for ondemand and scalable scientific computing. Scientific Workflow Management Systems (WfMS) need to adapt to this new paradigm in order to leverage the benefits of cloud services. Cloud services vary in the levels of abstraction and hence the type of service they present to application users. Infrastructure virtualization enables providers such as Amazon to offer virtual hardware for use in computeand dataintensive workflow applications. Platform-as-a-Service (PaaS) clouds expose a higher-level development and runtime environment for building and deploying workflow applications on cloud infrastructures. Such services may also expose domain-specific concepts for rapid-application development. Further up in the cloud stack are Software-as-a-Service providers who offer end users with"}
{"_id": "2a70fbec8ca4e271d67c565a631ea05d3364164a", "title": "Coding polygon meshes as compressable ASCII", "text": "Because of the convenience of a text-based format 3D content is often published in form of a gzipped file that contains an ASCII description of the scene graph. While compressed image, audio, and video data is kept in seperate binary files, polygonal data is usually included uncompressed into the ASCII description, as there is no widely-accepted standard for compressed polygon meshes.In this paper we show how to incorporate compression of polygonal data into a purely text-based scene graph description. Our scheme codes polygon meshes as ASCII strings that compress well with standard compression schemes such as gzip. The coder is lossless when only the position and texture coordinate {\\em indices} are coded. If loss is acceptable, positions and texture coordinates can be quantized and delta coded, which reduces the file size further. The gzipped scene graph description files decrease by a factor of two (six) in size when the polygon meshes they contain are coded with the lossless (lossy) ASCII coder.Furthermore we describe in detail a proof-of-concept implementation that uses the Shout3D~\\cite{shout3d} pure java API---a plugin-less Web3D player that downloads all required java classes on demand. Our prototype is an extremely light-weight implementation of the decoder that can be distributed at minimal additional cost. The size of the compiled decoder class is less than 6KB by itself and less than 3KB if included into a compressed archive of java class files. It makes no use of specific features of the Shout3D API. Hence, our method will work for any scene graph API that allows (a) to extend the node set and (b) to store the scene graph as ASCII."}
{"_id": "3fb1cbc03bbb237343d1c1298e2d380c1b38e61d", "title": "Analyzing and Comparing the Protection Quality of Security Enhanced Operating Systems", "text": "Host compromise is a serious computer security problem today. To better protect hosts, several Mandatory Access Control systems, such as Security Enhanced Linux (SELinux) and AppArmor, have been introduced. In this paper we propose an approach to analyze and compare the quality of protection offered by these different MAC systems. We introduce the notion of vulnerability surfaces under attack scenarios as the measurement of protection quality, and implement a tool called VulSAN for computing such vulnerability surfaces. In VulSAN, we encode security policies, system states, and system rules using logic programs. Given an attack scenario, VulSAN computes a host attack graph and the vulnerability surface. We apply our approach to compare SELinux and AppArmor policies in several Linux distributions and discuss the results. Our tool can also be used by Linux system administrators as a system hardening tool. Because of its ability to analyze SELinux as well as AppArmor policies, it can be used for most enterprise Linux distributions and home user distributions."}
{"_id": "10987d17af1245d49139fce9feae0e0afa71b6f2", "title": "Continual Learning in Generative Adversarial Nets", "text": "Developments in deep generative models have allowed for tractable learning of high-dimensional data distributions. While the employed learning procedures typically assume that training data is drawn i.i.d. from the distribution of interest, it may be desirable to model distinct distributions which are observed sequentially, such as when different classes are encountered over time. Although conditional variations of deep generative models permit multiple distributions to be modeled by a single network in a disentangled fashion, they are susceptible to catastrophic forgetting when the distributions are encountered sequentially. In this paper, we adapt recent work in reducing catastrophic forgetting to the task of training generative adversarial networks on a sequence of distinct distributions, enabling continual generative modeling."}
{"_id": "e416e9a7bb3d27086ff0487894ba636b4123bfe5", "title": "Inverse reinforcement learning for interactive systems", "text": "Human machine interaction is a field where machine learning is present at almost any level, from human activity recognition to natural language generation. The interaction manager is probably one of the latest components of an interactive system that benefited from machine learning techniques. In the late 90's, sequential decision making algorithms like reinforcement learning have been introduced in the field with the aim of making the interaction more natural in a measurable way. Yet, these algorithms require providing the learning agent with a reward after each interaction. This reward is generally handcrafted by the system designer who introduces again some expertise in the system. In this paper, we will discuss a method for learning a reward function by observing expert humans, namely inverse reinforcement learning (IRL). IRL will then be applied to several steps of the spoken dialogue management design such as user simulation and clustering but also to co-adaptation of human user and machine."}
{"_id": "41f89ee0e3b4d039e678fb197f69e87820f3f171", "title": "CS224N Project: Let Computers Do Reading Comprehension", "text": "In this work, we want to teach the computers to do reading comprehension. More specifically, the task is to let computers read a short passage and then answer some simple questions based on the passage.[1] proposes an end-to-end, RNN-like approach which, according to them, needs significantly less supervision but performs comparatively good to benchmarks for many applications. However, we found that their Q&A result is based on a small synthetic data set defined in [2]. We could see that the dataset is quite small and simple with a vocabulary size of 177. Here is a sample from the dataset."}
{"_id": "bed4a1ffc5f7fc5ad3a1298f0dd82893dcade055", "title": "Performance Management : A model and research agenda", "text": "\u00a9 International Association for Applied Psychology, 2004. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. Blackwell Publishing, Ltd. Oxford, UK APPS pplied Psychology: an International Review 0269-994X \u00a9 Blackwell Publishing 2004 ct ber 2004 53 4 riginal Arti le PERFORMANCE MANAGEMENT D N HARTOG ET AL. Performance Management: A Model and Research Agenda"}
{"_id": "ced4d6cf95921608c91b3fcf52e96fc758f0912d", "title": "Insulin detemir versus insulin glargine for type 2 diabetes mellitus.", "text": "BACKGROUND\nChronically elevated blood glucose levels are associated with significant morbidity and mortality. Many diabetes patients will eventually require insulin treatment to maintain good glycaemic control. There are still uncertainties about the optimal insulin treatment regimens for type 2 diabetes, but the long-acting insulin analogues seem beneficial. Several reviews have compared either insulin detemir or insulin glargine to NPH insulin, but research directly comparing both insulin analogues is limited.\n\n\nOBJECTIVES\nTo assess the effects of insulin detemir and insulin glargine compared with each other in the treatment of type 2 diabetes mellitus.\n\n\nSEARCH STRATEGY\nWe searched MEDLINE, EMBASE, The Cochrane Library, online registries of ongoing trials and abstract books. Date of last search was January 2011.\n\n\nSELECTION CRITERIA\nAll randomised controlled trials comparing insulin detemir with insulin glargine with a duration of 12 weeks or longer were included.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo authors independently selected the studies and extracted the data. Pooling of studies by means of random-effects meta-analysis was performed.\n\n\nMAIN RESULTS\nThis review examined four trials lasting 24 to 52 weeks involving 2250 people randomised to either insulin detemir or glargine. Overall, risk of bias of the evaluated studies was high. Insulin glargine was dosed once-daily in the evening. Insulin detemir was initiated once-daily in the evening with the option of an additional dose in the morning in three studies and initiated twice-daily in one study. Of randomised patients 13.6% to 57.2% were injecting insulin detemir twice-daily at the end of trial.Glycaemic control, measured by glycosylated haemoglobin A1c (HbA1c) and HbA1c equal to or less than 7% with or without hypoglycaemia, did not differ statistically significantly between treatment groups.The results showed no significant differences in overall, nocturnal and severe hypoglycaemia between treatment groups.Insulin detemir was associated with less weight gain. Treatment with insulin glargine resulted in a lower daily basal insulin dose and a lower number of injection site reactions.There was no significant difference in the variability of FPG or glucose values in 24-hour profiles between treatment groups. It was not possible to draw conclusions on quality of life, costs or mortality. Only one trial reported results on health-related quality of life and showed no significant differences between treatment groups.\n\n\nAUTHORS' CONCLUSIONS\nOur analyses suggest that there is no clinically relevant difference in efficacy or safety between insulin detemir and insulin glargine for targeting hyperglycaemia. However, to achieve the same glycaemic control insulin detemir was often injected twice-daily in a higher dose but with less weight gain, while insulin glargine was injected once-daily, with somewhat fewer injection site reactions."}
{"_id": "f7fc50c74863441ec375a5cf26154ede7e22b9d5", "title": "Depth From Defocus in Presence of Partial Self Occlusion", "text": "Contrary to the normal belief we show that selfocclusion is present in any real aperture image and we present a method on how we can take care of the occlusion while recovering the depth using the defocus as the cue. The spacevariant blur is modeled as an MRF and the MAP estimates are obtained f o r both the depth map and the everywhere f o cused intensity image. The blur kernel is adjusted in the regions where occlusion is present, particularly at the regions of discontinuities in the scene. The performance of the proposed algorithm is tested over synthetic data and the estimates are found to be better than the earlier schemes where such subtle effects were ignored."}
{"_id": "c0555be12364c6621bf591a1403da842df492f45", "title": "QUOTA: The Quantile Option Architecture for Reinforcement Learning", "text": "In this paper, we propose the Quantile Option Architecture (QUOTA) for exploration based on recent advances in distributional reinforcement learning (RL). In QUOTA, decision making is based on quantiles of a value distribution, not only the mean. QUOTA provides a new dimension for exploration via making use of both optimism and pessimism of a value distribution. We demonstrate the performance advantage of QUOTA in both challenging video games and physical robot simulators."}
{"_id": "3296c12d8edfdd0ed867273458d6a4f683e1c30d", "title": "Acquiring a Dictionary of Emotion-Provoking Events", "text": "This paper is concerned with the discovery and aggregation of events that provoke a particular emotion in the person who experiences them, or emotion-provoking events. We first describe the creation of a small manually-constructed dictionary of events through a survey of 30 subjects. Next, we describe first attempts at automatically acquiring and aggregating these events from web data, with a baseline from previous work and some simple extensions using seed expansion and clustering. Finally, we propose several evaluation measures for evaluating the automatically acquired events, and perform an evaluation of the effectiveness of automatic event extraction."}
{"_id": "db77e6b8030e7f8f2c1503b99fc88ab002b84cb4", "title": "Multi-linear polarization reconfigurable center-fed circular patch antenna with shorting posts", "text": "In this paper, a novel multi-linear polarization reconfigurable antenna with shorting posts, which can achieve four linear polarizations (0\u00b0, 45\u00b0, 90\u00b0, 135\u00b0), has been proposed. By switching the diodes between two groups of shorting posts, four linear polarizations can be realized. The dimensions of the proposed antenna are about 0.56\u03bb\u00d7 0.56\u03bb\u00d7 0.07\u03bb at 2.4 GHz. The measured results agree well with the simulated ones."}
{"_id": "b8e202587a83d13fcabeb9d2e21df6b12150a8e2", "title": "Description of Luciola aquatilis sp. nov., a new aquatic firefly (Coleoptera: Lampyridae: Luciolinae) from Thailand", "text": "A new species of aquatic firefly belonging to Luciola Laporte is described and illustrated based on external morphology of both males and females, and the genitalia of males. Luciola aquatilis sp. nov., a common firefly in Thailand was formerly commonly misidentified as Luciola brahmina Bourgeois. Other Luciola species that resemble L. aquatilis are discussed, as well as past confusion concerning their taxonomic affinities."}
{"_id": "086a6b1a757c25aa09e2a5bbdc39ae27d6e7ec9c", "title": "Comparing two evolutionary algorithm based methods for layout generation: Dense packing versus subdivision", "text": "We present and compare two evolutionary algorithm based methods for rectangular architectural layout generation: dense packing and subdivision algorithms. We analyze the characteristics of the two methods on the basis of three floor plan scenarios. Our analyses include the speed with which solutions are generated, the reliability with which optimal solutions can be found, and the number of different solutions that can be found overall. In a following step, we discuss the methods with respect to their different user interaction capabilities. In addition, we show that each method has the capability to generate more complex L-shaped layouts. Finally, we conclude that neither of the methods is superior but that each of them is suitable for use in distinct application scenarios because of its different properties."}
{"_id": "0d3bb75852098b25d90f31d2f48fd0cb4944702b", "title": "A data-driven approach to cleaning large face datasets", "text": "Large face datasets are important for advancing face recognition research, but they are tedious to build, because a lot of work has to go into cleaning the huge amount of raw data. To facilitate this task, we describe an approach to building face datasets that starts with detecting faces in images returned from searches for public figures on the Internet, followed by discarding those not belonging to each queried person. We formulate the problem of identifying the faces to be removed as a quadratic programming problem, which exploits the observations that faces of the same person should look similar, have the same gender, and normally appear at most once per image. Our results show that this method can reliably clean a large dataset, leading to a considerable reduction in the work needed to build it. Finally, we are releasing the FaceScrub dataset that was created using this approach. It consists of 141,130 faces of 695 public figures and can be obtained from http://vintage.winklerbros.net/facescrub.html."}
{"_id": "efe573cbfa7f4de4fd31eda183fefa8a7aa80888", "title": "Blockchain beyond bitcoin", "text": "Blockchain technology has the potential to revolutionize applications and redefine the digital economy."}
{"_id": "acbfe25e9f78a1a1b5bb084b0455b6e88f6d3b4a", "title": "AcuBot: a robot for radiological interventions", "text": "We report the development of a robot for radiological percutaneous interventions using uniplanar fluoroscopy, biplanar fluoroscopy, or computed tomography (CT) for needle biopsy, radio frequency ablation, cryotherapy, and other needle procedures. AcuBot is a compact six-degree-of-freedom robot for manipulating a needle or other slender surgical instrument in the confined space of the imager without inducing image artifacts. Its distinctive characteristic is its decoupled motion capability correlated to the positioning, orientation, and instrument insertion steps of the percutaneous intervention. This approach allows each step of the intervention to be performed using a separate mechanism of the robot. One major advantage of this kinematic approach is patient safety. The first feasibility experiment performed with the robot, a cadaver study of perispinal blocks under biplanar fluoroscopy, is presented. The main expected application of this system is to CT-based procedures. AcuBot has received Food and Drug Administration clearance (IDE G010331/S1), and a clinical trial of using the robot for perispinal nerve and facet blocks is presently underway at Georgetown University, Washington, DC."}
{"_id": "39ee50989613888ff23bdac0d6711ca8eefe658b", "title": "Visualizing the Hidden Activity of Artificial Neural Networks", "text": "In machine learning, pattern classification assigns high-dimensional vectors (observations) to classes based on generalization from examples. Artificial neural networks currently achieve state-of-the-art results in this task. Although such networks are typically used as black-boxes, they are also widely believed to learn (high-dimensional) higher-level representations of the original observations. In this paper, we propose using dimensionality reduction for two tasks: visualizing the relationships between learned representations of observations, and visualizing the relationships between artificial neurons. Through experiments conducted in three traditional image classification benchmark datasets, we show how visualization can provide highly valuable feedback for network designers. For instance, our discoveries in one of these datasets (SVHN) include the presence of interpretable clusters of learned representations, and the partitioning of artificial neurons into groups with apparently related discriminative roles."}
{"_id": "41e2840d51e23727ec1c133c6fc924fa199b5f82", "title": "Forward and inverse kinematics model for robotic welding process using KR-16KS KUKA robot", "text": "This paper aims to model the forward and inverse kinematics of a KUKA KR-16KS robotic arm in the application of a simple welding process. A simple welding task to weld a block onto a metal sheet is carried out in order to investigate the forward and inverse kinematics models of KR-16KS. A movement flow planning is designed and further developed into the KR-16KS programming. Eleven points of movement are studied for the forward kinematic modeling. A summary of calculation is obtained. A general D-H representation of forward and inverse matrix is obtained. This can be used in each of the welding operation movement based on KUKA KR-16KS robotic arm. A forward kinematic and an inverse kinematic aspect of KUKA KR-16KS is successfully modeled based on a simple welding task."}
{"_id": "3393866fbeb1f8510ff458b19f6e98b7f4a902ec", "title": "Vision based distance measurement system using single laser pointer design for underwater vehicle", "text": "As part of a continuous research and development of underwater robotics technology at ITB, a visionbased distance measurement system for an Unmanned Underwater vehicle (UUV) has been designed. The proposed system can be used to predict horizontal distance between underwater vehicle and wall in front of vehicle. At the same time, it can be used to predict vertical distance between vehicle and the surface below it as well. A camera and a single laser pointer are used to obtain data needed by our algorithm. The vision-based navigation consists of two main processes which are the detection of a laser spot using image processing and the calculation of the distance based on laser spot position on the image."}
{"_id": "ac6574a51108e57d1265b97fcd78992bc6360ed0", "title": "Interactive Storytelling with Literary Feelings", "text": "In this paper, we describe the integration of Natural Language Processing (NLP) within an emotional planner to support Interactive Storytelling. Our emotional planner is based on a standard HSP planner, whose originality is drawn from altering the agents\u2019 beliefs and emotional states. Each character is driven by its own planner, while characters are able to operate on their reciprocal feelings thus affecting each other. Our baseline story is constituted by a classic XIX century French novel from Gustave Flaubert in which characters feelings play a dominant role. This approach benefits from the fact that Flaubert has described a specific ontology for his characters feelings. The objective of NLP should be to uncover from natural language utterances the same kind of affective elements, which requires an integration between NLP and the planning component at the level of semantic content. This research is illustrated with examples from a first fully integrated prototype comprising NLP, emotional planning and real-time 3D animation."}
{"_id": "04dc5e68bc6f81e7d71d9bd3add36e6f6f077539", "title": "Decision Forests, Convolutional Networks and the Models in-Between", "text": "This paper investigates the connections between two state of the art classifiers: decision forests (DFs, including decision jungles) and convolutional neural networks (CNNs). Decision forests are computationally efficient thanks to their conditional computation property (computation is confined to only a small region of the tree, the nodes along a single branch). CNNs achieve state of the art accuracy, thanks to their representation learning capabilities. We present a systematic analysis of how to fuse conditional computation with representation learning and achieve a continuum of hybrid models with different ratios of accuracy vs. efficiency. We call this new family of hybrid models conditional networks. Conditional networks can be thought of as: i) decision trees augmented with data transformation operators, or ii) CNNs, with block-diagonal sparse weight matrices, and explicit data routing functions. Experimental validation is performed on the common task of image classification on both the CIFAR and Imagenet datasets. Compared to state of the art CNNs, our hybrid models yield the same accuracy with a fraction of the compute cost and much smaller number of parameters."}
{"_id": "9f5a4f397f1414116ebd9d53049fce1c53e35d4f", "title": "Statistical mechanics and phase transitions in clustering.", "text": ""}
{"_id": "c377a35ff719d2f6e85092a9fc9776a4a8ccd5c2", "title": "Janus: a general purpose WebRTC gateway", "text": "This paper deals with the design and implementation of Janus, a general purpose, open source WebRTC gateway. Details will be provided on the architectural choices we took for Janus, as well as on the APIs we made available to extend and make use of it. Examples of how the gateway can be used for complex WebRTC applications are presented, together with some experimental results we collected during the development process."}
{"_id": "50d89414855127f1f83b10681855f1e4aaadd41a", "title": "Parallel online exact sum for Java8", "text": "Java8 introduced the notion of streams that is a new data structure and supports multi-core processors. When the sum method is called for a stream of floating-point numbers, the summation is calculated at high-speed by applying MapReduce, which distributes computations to cores. However, since floating-point calculation causes an error, simple adaptation of this method can not determine the result uniquely. Then, in this study, we develop a summation program that can be applied to a stream with MapReduce. Our method can calculate at high-speed with keeping correctly rounded."}
{"_id": "4007643eddbea0af2c6337d360b6474652f32223", "title": "Factorial LDA: Sparse Multi-Dimensional Text Models", "text": "Latent variable models can be enriched with a multi-dimensional structure to consider the many latent factors in a text corpus, such as topic, author perspective and sentiment. We introduce factorial LDA, a multi-dimensional model in which a document is influenced by K different factors, and each word token depends on a K-dimensional vector of latent variables. Our model incorporates structured word priors and learns a sparse product of factors. Experiments on research abstracts show that our model can learn latent factors such as research topic, scientific discipline, and focus (methods vs. applications). Our modeling improvements reduce test perplexity and improve human interpretability of the discovered factors."}
{"_id": "d62e4babd32bc9a213dd20957e0fab93e26d852c", "title": "Model-Free Dual Heuristic Dynamic Programming", "text": "Model-based dual heuristic dynamic programming (MB-DHP) is a popular approach in approximating optimal solutions in control problems. Yet, it usually requires offline training for the model network, and thus resulting in extra computational cost. In this brief, we propose a model-free DHP (MF-DHP) design based on finite-difference technique. In particular, we adopt multilayer perceptron with one hidden layer for both the action and the critic networks design, and use delayed objective functions to train both the action and the critic networks online over time. We test both the MF-DHP and MB-DHP approaches with a discrete time example and a continuous time example under the same parameter settings. Our simulation results demonstrate that the MF-DHP approach can obtain a control performance competitive with that of the traditional MB-DHP approach while requiring less computational resources."}
{"_id": "2c8113b530a695716c31a13e6fe494016b7ebe5e", "title": "Impact of Smartphone Position on Sensor Values and Context Discovery", "text": "With near-ubiquitous availability of smartphones equipped with a wide variety of sensors, research in building contextaware services has been growing. However, despite a large number of services proposed and developed as research prototypes, the number of truly context-aware applications available in smartphone applications market is quite limited. A major barrier for the large-scale proliferation of context aware applications is poor accuracy. This paper addresses one of the key reasons for this poor accuracy, which is the impact of smartphone positions. Users carry their smartphones in different positions such as holding in their hand, keeping inside their pants or jacket pocket, keeping in their purse, and so on. This paper addresses the issue of poor application accuracy due to varying smartphone positions. It first shows that smartphone positions significantly affect the values of the sensor data being collected by a context aware application, and this in turn has a significant impact on the accuracy of the application. Next, it describes the design and prototype development of a smartphone position discovery service that accurately detects a smartphone position. This service is based on the sensor data collected from carefully chosen sensors and runs orthogonal to any other context aware application or service. Finally, the paper demonstrates that the accuracy of an existing context aware service or application is significantly enhanced when run in conjunction with the proposed smartphone position discovery service."}
{"_id": "76d71d1726bf96a142b203dfca12a4401da8ecee", "title": "An MPC/hybrid system approach to traction control", "text": "This paper describes a hybrid model and a model predictive control (MPC) strategy for solving a traction control problem. The problem is tackled in a systematic way from modeling to control synthesis and implementation. The model is described first in the Hybrid Systems Description Language to obtain a mixed-logical dynamical (MLD) hybrid model of the open-loop system. For the resulting MLD model, we design a receding horizon finite-time optimal controller. The resulting optimal controller is converted to its equivalent piecewise affine form by employing multiparametric programming techniques, and finally experimentally tested on a car prototype. Experiments show that good and robust performance is achieved in a limited development time by avoiding the design of ad hoc supervisory and logical constructs usually required by controllers developed according to standard techniques."}
{"_id": "8ce45732700ff32de73c78b82dde94e954ea1dbb", "title": "Image Restoration from Patch-based Compressed Sensing Measurement", "text": "A series of methods have been proposed to reconstruct an image from compressively sensed random measurement, Patch Vectorization CS Measurement"}
{"_id": "8409907923aa51246bf03c48caf357f7a23252d5", "title": "TrajAnalytics: A Web-Based Visual Analytics Software of Urban Trajectory", "text": "We present a web-based software, named TrajAnalytics, for the visual analytics of urban trajectory datasets. It allows users to interactively visualize and analyze the massive taxi trajectories over urban spaces. The software offers data management capability and enable various visual queries through a web interface. A set of visualization widgets and interaction functions are developed to promote easy user engagement. We implemented two prototypes with realworld datasets: (1) the origin/destination (OD) data of taxi trips collected in New York City; and (2) the taxi trajectory data collected in Porto City in Portugal. This open source software supports practitioners, researchers, and decision-makers in advancing transportation and urban studies in the new era of smart city."}
{"_id": "4f2bf7a6c999c087d69afa49338596d63ad4b3f7", "title": "Understanding of Antecedents to Achieve Customer Trust and Customer Intention to Purchase E-Commerce in Social Media , an Empirical Assessment", "text": "Received Dec 9, 2016 Revised Mar 27, 2017 Accepted Apr 11, 2017 This study aims to analyze empirically three factors antecedents of trust they are system quality, information quality, and service quality. Customer trust is used in determining customer intention to purchase of e-commerce in social media (facebook). A number of respondents were 451. The results of this study concluded that three factors antecedents of trust directly had a positive impact to customer trust and indirectly had positive impact on customer intention to purchase in e-commerce transactions on social media. Keyword:"}
{"_id": "78cd4511e1e01ddf9f2f44bb569a400c6213f299", "title": "Cooperative Resource Management in Cloud-Enabled Vehicular Networks", "text": "Cloud-enabled vehicular networks are a new paradigm to improve the quality of vehicular services, which have drawn considerable attention in industry and academia. In this paper, we consider the resource management and sharing problem for bandwidth and computing resources to support mobile applications in cloud-enabled vehicular networks. In such an environment, cloud service providers (SPs) can cooperate to form coalitions to share their idle resources with each other. We propose a coalition game model based on two-sided matching theory for cooperation among cloud SPs to share their idle resources. As a result, the resources can be better utilized, and the QoS for users can be improved. Numerical results indicate that our scheme can improve resource utilization and increase by 75% the QoS of the applications compared with that without cooperation. Moreover, the higher service cost of cooperation brings negative effect on coalition formation. The higher cooperation willingness of cloud SPs and the lower service cost support more service applications."}
{"_id": "98d3103c82f2fec798b6467ebe8a693329594648", "title": "A model to support sign language content development for digital television", "text": "This paper proposes a model to support sign language content development and deployment in digital television scenarios. The model objective is to overcome traditional video limitations on the production of signed content and improve accessibility on digital television. Previous work on sign and gesture recognition, sign notation systems and avatar animation are reviewed. Sign communication requirements are investigated and adequate algorithms are identified to enable sign language processing and transmission. An application interface is proposed in order to access the model functionalities. The final discussion lists the benefits and limitations of the proposed model."}
{"_id": "a3d5dd4ebfe1723b7d57d13f5dd1298988bcf39a", "title": "Recent Advancements in Event Processing", "text": "Event processing (EP) is a data processing technology that conducts online processing of event information. In this survey, we summarize the latest cutting-edge work done on EP from both industrial and academic research community viewpoints. We divide the entire field of EP into three subareas: EP system architectures, EP use cases, and EP open research topics. Then we deep dive into the details of each subsection. We investigate the system architecture characteristics of novel EP platforms, such as Apache Storm, Apache Spark, and Apache Flink. We found significant advancements made on novel application areas, such as the Internet of Things; streaming machine learning (ML); and processing of complex data types such as text, video data streams, and graphs. Furthermore, there has been significant body of contributions made on event ordering, system scalability, development of EP languages and exploration of use of heterogeneous devices for EP, which we investigate in the latter half of this article. Through our study, we found key areas that require significant attention from the EP community, such as Streaming ML, EP system benchmarking, and graph stream processing."}
{"_id": "266c9d1faa0d2c3353b72b6f76999b54a61c3f1b", "title": "A Single-Stage LED Driver Based on Interleaved Buck-boost Circuit and LLC Resonant Converter", "text": "A single-stage LED driver based on interleaved buck-boost circuit and LLC resonant converter is proposed. The buck-boost circuit and LLC resonant converter are integrated by sharing switches, which can decrease the system cost and improve system efficiency. The input voltage of the buck-boost circuit is half of the rectified voltage, and two buckboost circuits are formed with the two half-bridge switches and corresponding diodes. The two buckboost circuits work in interleaved mode and the inductor current is in discontinuous conduction mode, both helping to achieve the power factor correction. The half-bridge LLC resonant converter is adopted here, and the soft switching characteristic of the LLC resonant converter is not changed by the switch integration. The primary-side switches still work in zero voltage switching (ZVS) mode, and the secondary diodes still work in zero current switching (ZCS) mode, which both reduce the switching losses and improve the efficiency of the system."}
{"_id": "73236f679bc22e0a49b23d84b974cd687992d67a", "title": "Design of an anthropomorphic prosthetic hand driven by Shape Memory Alloy actuators", "text": "This paper presents the mechanical design of an ultra-lightweight, biomimetically actuated anthropomorphic hand for prosthetic purposes. The proposed design is based on an underactuated configuration of 16 joints and 7 active degrees of freedom. Shape Memory Alloy wires are used as motive elements of a specially designed actuation system installed within the envelope of the hand and forearm; due to their inherent contraction ability when heated, these innovative micro-actuators produce linear motion which is imparted to the fingers via a tendon transmission system. The overall design is completed with the integration of the necessary locking mechanisms for power saving. An experimental prototype of the suggested prosthesis has been fabricated using rapid prototyping techniques and it will be used as an evaluation platform for further research towards the development of a truly multifunctional, silent and cosmetically appealing hand for upper limb amputees."}
{"_id": "ed81d19dc73ac702782ff170457d5b0c11d35342", "title": "A Study of Momentum Effects on the Swedish Stock Market using Time Series Regression", "text": "This study investigates if momentum effects can be found on the Swedish stock market by testing a cross-sectional momentum strategy on historical data. To explain the results mathematically, a second approach, involving time series regression for predicting future returns is introduced and thereby extends the cross-sectional theory. The result of the study shows that momentum effects through the cross-sectional strategy exist on the Swedish stock market. Although positive return is found, the time series regression do not give any significance for predicting future returns. Hence, there is a contradiction between the two approaches."}
{"_id": "ba327d7fbc52c3e033dc681b3e97afc85fcf5985", "title": "Knowledge warehouse: an architectural integration of knowledge management, decision support, artificial intelligence and data warehousing", "text": "Decision support systems (DSS) are becoming increasingly more critical to the daily operation of organizations. Data warehousing, an integral part of this, provides an infrastructure that enables businesses to extract, cleanse, and store vast amounts of data. The basic purpose of a data warehouse is to empower the knowledge workers with information that allows them to make decisions based on a solid foundation of fact. However, only a fraction of the needed information exists on computers; the vast majority of a firm\u2019s intellectual assets exist as knowledge in the minds of its employees. What is needed is a new generation of knowledge-enabled systems that provides the infrastructure needed to capture, cleanse, store, organize, leverage, and disseminate not only data and information but also the knowledge of the firm. The purpose of this paper is to propose, as an extension to the data warehouse model, a knowledge warehouse (KW) architecture that will not only facilitate the capturing and coding of knowledge but also enhance the retrieval and sharing of knowledge across the organization. The knowledge warehouse proposed here suggests a different direction for DSS in the next decade. This new direction is based on an expanded purpose of DSS. That is, the purpose of DSS in knowledge improvement. This expanded purpose of DSS also suggests that the effectiveness of a DS will, in the future, be measured based on how well it promotes and enhances knowledge, how well it improves the mental model(s) and understanding of the decision maker(s) and thereby how well it improves his/her decision making."}
{"_id": "4ff6c00b1368962dc1bdb813e31b52dcdfe70b4d", "title": "Knowledge Discovery in Databases (KDD) with Images: A Novel Approach toward Image Mining and Processing", "text": "We are in an age often referred to as the information age. In this information age, because we believe that information leads to power and success, from the technologies such as computers, satellites, etc., we have been collecting tremendous amounts of information. Our ability to analyze and understand massive datasets lags far behind our ability to gather and store data. Image and video data contains abundant, rich information for data miners to explore. On one hand, the rich literature on image and video data analysis will naturally provide many advanced methods that may help mining other kinds of data. On the other hand, recent research on data mining will also provide some new, interesting methods that may benefit image and video data retrieval and analysis. Today, a lot of data is available everywhere but the ability to understand and make use of that data is very less. Whether the context is business, medicine, science or government, the datasets themselves are of little value. What is of value is the knowledge that can be inferred from the data and put to use. We need systems which would analyze the data for us. This paper basically aims to find out important pixels of an image using one of the classification technique named as decision tree (ID-3). Our aim is to separate the important and unimportant pixels of an image using simple rules. Further one of the compression techniques named as Huffman algorithm is applied for image compression. Finally, resultant image is stored with lesser space complexity ."}
{"_id": "9de226442a64985c70eb07619c4f3f8f78c0bbf3", "title": "A Framework for Passengers Demand Prediction and Recommendation", "text": "With the rapid development of mobile internet and wireless network technologies, more and more people use the mobile app to call a taxicab to pick them up. Therefore, understanding the passengers' travel demand becomes crucial to improve the utilization of the taxicabs and reduce their cost. In this paper, based on spatio-temporal clustering, we propose a demand hotspots prediction framework to generate recommendation for taxi drivers. Specially, an adaptive prediction approach is presented to demand hotspots and their hotness, and then combing the driver's location and the hotness, top candidates are recommended and visually presented to drivers. Based on the dataset provided by CAR INC., the experiment shows that our approach gains a significant improvement in hotspots prediction and recommendation, with 15.21% improvement on average f-measure for prediction and 79.6% hit ratio for recommendation."}
{"_id": "283743048be918a1a4144d78ea65f45398c37e2d", "title": "Design and perceptual validation of performance measures for salient object segmentation", "text": "Empirical evaluation of salient object segmentation methods requires i) a dataset of ground truth object segmentations and ii) a performance measure to compare the output of the algorithm with the ground truth. In this paper, we provide such a dataset, and evaluate 5 distinct performance measures that have been used in the literature practically and psychophysically. Our results suggest that a measure based upon minimal contour mappings is most sensitive to shape irregularities and most consistent with human judgements. In fact, the contour mapping measure is as predictive of human judgements as human subjects are of each other. Region-based methods, and contour methods such as Hausdorff distances that do not respect the ordering of points on shape boundaries are significantly less consistent with human judgements. We also show that minimal contour mappings can be used as the correspondence paradigm for Precision-Recall analysis. Our findings can provide guidance in evaluating the results of segmentation algorithms in the future."}
{"_id": "5319d6eb27ec07471ba46cecbf7ac481b69a1668", "title": "An experimental comparison of reading techniques for defect detection in UML design documents", "text": "The basic motivation for software inspections is to detect and remove defects before they propagate to subsequent development phases where their detection and removal becomes more expensive. To attain this potential, the examination of the artefact under inspection must be as thorough and detailed as possible. This implies the need for systematic reading techniques that tell inspection participants what to look for and, more importantly, how to scrutinise a software document. Recent research efforts investigated the benefits of scenario-based reading techniques for defect detection in functional requirements and functional code documents. A major finding has been that these techniques help inspection teams find more defects than existing state-of-the-practice approaches, such as, ad-hoc or checklist-based reading (CBR). In this paper we describe and experimentally compare one scenariobased reading technique, namely perspective-based reading (PBR), for defect detection in objectoriented design documents using the notation of the Unified Modelling Language (UML) with the more traditional CBR approach. The comparison was performed in a controlled experiment with 18 practitioners as subjects. Our results indicate that PBR is more effective than CBR (i.e., it resulted in inspection teams detecting on average 41% more unique defects than CBR). Moreover the cost of defect detection using PBR is significantly lower than CBR (i.e., PBR exhibits on average a 58% cost per defect improvement over CBR). This study therefore provides evidence demonstrating the efficacy of PBR scenarios for defect detection in UML design documents. In addition, it demonstrates that a PBR inspection is a promising approach for improving the quality of models developed using the UML notation."}
{"_id": "2d6d056ca33bb20e7bec33b49093cc4a907bf1a0", "title": "Review of Wheeled Mobile Robots\u2019 Navigation Problems and Application Prospects in Agriculture", "text": "Robot navigation in the environment with obstacles is still a challenging problem. In this paper, the navigation problems with wheeled mobile robots (WMRs) are reviewed, the navigation mechanism of WMRs is analyzed in detail, the methods of solving the sub problems such as mapping, localization and path planning which all both related to robot navigation are summarized and the advantages and disadvantages of the existing methods are expounded. Especially in the agricultural field, the precise navigation of robots in the complex agricultural environment is the prerequisite for the completion of various tasks. This paper is aimed at the special complexity of the agricultural environment, prospected the application of the solution to the navigation problem of WMRs in agricultural engineering, put forward the research direction to solve the problems of precise navigation in agricultural environments."}
{"_id": "00429aeb0dee2ef81b8850a0f8b98bb9a72020d6", "title": "Developer Behavior and Sentiment from Data Mining Open Source Repositories", "text": "Developer sentiment may wax and wane as a project progresses. Open-source projects that attract and retain developers tend to be successful. It may be possible to predict project success, in part, if one can measure developer behavior and sentiment -- projects with active, happy developers are more likely to succeed. We have analyzed GitHub.com projects in an attempt to model these concepts. We have data mined 124 projects from GitHub.com. The projects were automatically mined using sequence mining methods to derive a behavioral model of developer activities. The projects were also mined for developer sentiment. Finally, a regression model shows how sentiment varies with behavioral differences -- a change in behavior is correlated with a change in sentiment. The relationship between sentiment and success is not directly explored, herein. This research project is a preliminary step in a larger research project aimed at understanding and monitoring FLOSS projects using a process modeling approach."}
{"_id": "b8f13073a8fa986ec9623cc905690817cdc1dfd3", "title": "Dynamic relief mapping1", "text": "The best texture mapping techniques that focus on the microrelief effect use iterative algorithms. The number of iterations must be determined in advance, but for real-time rendering, the number of iterations must be chosen dynamically to take into account the change of the depth scale and the viewing angle (viewing direction). In this paper, we presented a real-time rendering technique that allows a combined synchronization between depth scaling, viewing angle and number of iterations."}
{"_id": "6e8b32fc4f0a723f0629f7524d01a382ef77715a", "title": "A survey of signal processing algorithms in brain-computer interfaces based on electrical brain signals.", "text": "Brain-computer interfaces (BCIs) aim at providing a non-muscular channel for sending commands to the external world using the electroencephalographic activity or other electrophysiological measures of the brain function. An essential factor in the successful operation of BCI systems is the methods used to process the brain signals. In the BCI literature, however, there is no comprehensive review of the signal processing techniques used. This work presents the first such comprehensive survey of all BCI designs using electrical signal recordings published prior to January 2006. Detailed results from this survey are presented and discussed. The following key research questions are addressed: (1) what are the key signal processing components of a BCI, (2) what signal processing algorithms have been used in BCIs and (3) which signal processing techniques have received more attention?"}
{"_id": "392c43f9e521b829d9d5b7d072e4bd7f2bcfbe8a", "title": "Control of distributed generation systems - Part II: Load sharing control", "text": "This work is concerned with the control strategy for the parallel operation of distributed generation systems (DGS) in a standalone ac power supply. The proposed control method uses only low-bandwidth data communication signals between each generation system in addition to the locally measurable feedback signals. This is achieved by combining two control methods: droop control method and average power control method. The average power method with slow update rate is used in order to overcome the sensitivity about voltage and current measurement errors. In addition, a harmonic droop scheme for sharing harmonic content of the load currents is proposed based on the voltages and currents control algorithm. Experimental and simulation studies using two parallel three-phase pulsewidth modulation (PWM) inverters are presented to show the effectiveness of the proposed control."}
{"_id": "34053d06f6b7c06e40280f8736843e1b17c6c855", "title": "Building personal maps from GPS data.", "text": "In this article we discuss an assisted cognition information technology system that can learn personal maps customized for each user and infer his daily activities and movements from raw GPS data. The system uses discriminative and generative models for different parts of this task. A discriminative relational Markov network is used to extract significant places and label them; a generative dynamic Bayesian network is used to learn transportation routines, and infer goals and potential user errors at real time. We focus on the basic structures of the models and briefly discuss the inference and learning techniques. Experiments show that our system is able to accurately extract and label places, predict the goals of a person, and recognize situations in which the user makes mistakes, such as taking a wrong bus."}
{"_id": "00a4eef18b0236875dc3785eb6996d374e78714a", "title": "A Metric for Distributions with Applications to Image Databases", "text": "Proceedings of the 1998 IEEE International Conference on Computer Vision, Bombay, India We introduce a new distance between two distributions that we call the Earth Mover\u2019s Distance (EMD), which reflects the minimal amount of work that must be performed to transform one distribution into the other by moving \u201cdistribution mass\u201d around. This is a special case of the transportation problem from linear optimization, for which efficient algorithms are available. The EMD also allows for partial matching. When used to compare distributions that have the same overall mass, the EMD is a true metric, and has easy-to-compute lower bounds. In this paper we focus on applications to image databases, especially color and texture. We use the EMD to exhibit the structure of color-distribution and texture spaces by means of MultiDimensional Scaling displays. We also propose a novel approach to the problem of navigating through a collection of color images, which leads to a new paradigm for image database search."}
{"_id": "3eca189e1d19303a759eba87b978c0a2a4bc5ec1", "title": "Case study of the Miner Botnet", "text": "Malware and botnets are one of the most serious threats to today's Internet security. In this paper, we characterise the so-called &Miner Botnet\u201d. It received major media attention after massive distributed denial of service attacks against a wide range of German and Russian websites, mainly during August and September 2011. We use our insights on this botnet to outline current botnet-related money-making concepts and to show that multiple activities of this botnet are actually centred on the virtual anonymised currency Bitcoin, thus justifying the name. Furthermore, we provide a binary-level analysis of the malware's design and components to illustrate the modularity of the previously mentioned concepts. We give an overview of the structure of the command-and-control protocol as well as of the botnet's architecture. Both centralised as well as distributed infrastructure aspects realised through peer-to-peer are present to run the botnet, the latter for increasing its resiliency. Finally, we provide the results of our ongoing tracking efforts that started in September 2011, focusing on the development of the botnet's size and geographic distribution. In addition we point out the challenge that is generally connected with size measurements of botnets due to the reachability of individual nodes and the persistence of IP addresses over time."}
{"_id": "72e97a8a8895b93402e52980e4fde9cc8cc2c6f9", "title": "Design and implementation of fine pitch COB LED display", "text": "LED displays are widely used in advertisement, transportation, and commercial signages. Conventional LED display modules consist of a LED array assembly and a driving circuit board, inside an external housing. The dot pitch of conventional LED displays is typically 2.0 mm or above, which is relatively coarse. In some applications, such as the screen of personal mobile LED devices, a higher resolution with the dot pitch smaller than <;1.5 mm is needed due to the short viewing distance. Therefore, technologies for implementing fine pitch LED displays are in demand. This paper introduces a new type of LED display with the chip-on-board (COB) technology. The dot pitch of this new type of COB LED display may be reduced to 0.5 mm. The configurations for monochromatic and tricolored display modules are discussed. The reliability studies on the COB LED display are also presented. The results of ball bond shear, wire pull, junction temperature and thermal cycling tests indicate that the fine pitch COB LED display is feasible and reliable."}
{"_id": "11c88f516e1437e16fc94ff8db0e5f906f9aeb24", "title": "Logic Programming for Software-Defined Networks", "text": ""}
{"_id": "136146584af0fa33343b31e3f69224be9c0f2672", "title": "Deploying Dense Networks for Maximal Energy Efficiency: Small Cells Meet Massive MIMO", "text": "What would a cellular network designed for maximal energy efficiency look like? To answer this fundamental question, tools from stochastic geometry are used in this paper to model future cellular networks and obtain a new lower bound on the average uplink spectral efficiency. This enables us to formulate a tractable uplink energy efficiency (EE) maximization problem and solve it analytically with respect to the density of base stations (BSs), the transmit power levels, the number of BS antennas and users per cell, and the pilot reuse factor. The closed-form expressions obtained from this general EE maximization framework provide valuable insights on the interplay between the optimization variables, hardware characteristics, and propagation environment. Small cells are proved to give high EE, but the EE improvement saturates quickly with the BS density. Interestingly, the maximal EE is achieved by also equipping the BSs with multiple antennas and operate in a \u201cmassive MIMO\u201d fashion, where the array gain from coherent detection mitigates interference and the multiplexing of many users reduces the energy cost per user."}
{"_id": "3cc3b6d7af3b7da11da9e7049eafd5113cbad2ae", "title": "Automatic Room Detection and Room Labeling from Architectural Floor Plans", "text": "This paper presents an automatic system for analyzing and labeling architectural floor plans. In order to detect the locations of the rooms, the proposed systems extracts both, structural and semantic information from given floor plans. Furthermore, OCR is applied on the text layer to retrieve the meaningful room labeling. Finally, a novel post-processing is proposed to split rooms into several sub-regions if several semantic rooms share the same physical room. Our fully automatic system is evaluated on a publicly available dataset of architectural floor plans. In our experiments, we could clearly outperform other state-of-the-art approaches for room detection."}
{"_id": "db0144908fdc6aacdfa25afacd4f13d1bae71cdb", "title": "A Computational Semantic Analysis of Noun Compounds in Dutch", "text": ".........................................................................................................................................1 TABLE OF CONTENTS .........................................................................................................................2 0. PREFACE.............................................................................................................................................5"}
{"_id": "7be0b178252f5d8410a4c8340b7eb712f7e76327", "title": "Multi-User Energy Consumption Monitoring and Anomaly Detection with Partial Context Information", "text": "Anomaly detection is an important problem in building energy management in order to identify energy theft and inefficiencies. However, it is hard to differentiate actual anomalies from the genuine changes in energy consumption due to seasonal variations and changes in personal settings such as holidays. One of the important drawbacks of existing anomaly detection algorithms is that various unknown context variables, such as seasonal variations, can affect the energy consumption of users in ways that appear anomalous to existing time series based anomaly detection algorithms.\n In this paper, we present a system for monitoring the energy consumption of multiple users within a neighborhood and a novel algorithm for detecting anomalies by combining data from multiple users. For each user, the neighborhood is defined as the set of all other users that have similar characteristics (function, location or demography), and are therefore likely to react and consume energy in the similar way in response to the external conditions. The neighborhood can be predefined based on prior customer information, or can be identified through an analysis of historical energy consumption. The proposed algorithm works as a two-step process. In the first step, the algorithm periodically computes an anomaly score for each user by just considering their own energy consumption and variations in the consumption of the past. In the second step, the anomaly score for a user is adjusted by analyzing the energy consumption data in the neighborhood. The collation of data within the neighborhood allows the proposed algorithm to differentiate between these genuine effects and real anomalous behavior of users. Unlike multivariate time series anomaly detection algorithms, the proposed algorithm can identify specific users that are exhibiting anomalous behavior. The capabilities of the algorithm are demonstrated using several year-long real-world data sets, for commercial as well as residential consumers."}
{"_id": "2f14c347cfb6b4ea8bd20f40ad5cf819eb2329b8", "title": "Image-based phenotyping of plant disease symptoms", "text": "Plant diseases cause significant reductions in agricultural productivity worldwide. Disease symptoms have deleterious effects on the growth and development of crop plants, limiting yields and making agricultural products unfit for consumption. For many plant-pathogen systems, we lack knowledge of the physiological mechanisms that link pathogen infection and the production of disease symptoms in the host. A variety of quantitative high-throughput image-based methods for phenotyping plant growth and development are currently being developed. These methods range from detailed analysis of a single plant over time to broad assessment of the crop canopy for thousands of plants in a field and employ a wide variety of imaging technologies. Application of these methods to the study of plant disease offers the ability to study quantitatively how host physiology is altered by pathogen infection. These approaches have the potential to provide insight into the physiological mechanisms underlying disease symptom development. Furthermore, imaging techniques that detect the electromagnetic spectrum outside of visible light allow us to quantify disease symptoms that are not visible by eye, increasing the range of symptoms we can observe and potentially allowing for earlier and more thorough symptom detection. In this review, we summarize current progress in plant disease phenotyping and suggest future directions that will accelerate the development of resistant crop varieties."}
{"_id": "e70b7eb92afc9a67ab4bd2cec052a0e3098db7cd", "title": "The adoption of hyped technologies: a qualitative study", "text": "The introduction of new consumer technology is often greeted with declarations that the way people conduct their lives will be changed instantly. In some cases, this might create hype surrounding a specific technology. This article investigates the adoption of hyped technology, a special case that is absent in the adoption literature. The study employs a consumer research perspective, specifically the theory of consumption values (TCV), to understand the underlying motives for adopting the technology. In its original form, TCV entails five values that influence consumer behavior: functional, social, epistemic, emotional and conditional. The values catch the intrinsic and extrinsic motives influencing behavior. Using a qualitative approach that includes three focus groups and 60 one-on-one interviews, the results of the study show that emotional, epistemic and social values influence the adoption of hyped technologies. Contrary to expectations, functional value, which is similar to the widely used information system constructs of perceived usefulness and relative advantage, has little impact on the adoption of technologies that are surrounded with significant hype. Using the findings of the study, this article proposes a model for investigating and understanding the adoption of hyped technologies. This article contributes to the literature by (1) focusing on the phenomenon of hyped technology, (2) introducing TCV, a consumer research-based theoretical framework, to enhance the understanding of technology adoption, and (3) proposing a parsimonious model explaining the adoption of hyped technology."}
{"_id": "a7b9dfbd46bf9774983f333fa28e73918c18a351", "title": "A Thin Shell Volume for Modeling Human Hair", "text": "Hair-to-hair interaction is often ignored in human hair modeling, due to its computational and algorithmic complexity. In this paper, we present our experimental approach to simulate the complex behavior of long human hair, taking into account hairto-hair interactions. We describe a thin shell volume (TSV) model for enhancing hair realism by simulating complex hair-hair interaction. The TSV is a thin bounding volume that encloses a given hair surface. The TSV method enables virtual hair combing for surface-model hairstyles. Combing produces the effects of hair-hair interaction that occur in real human hair. Any hair model based on a surface representation can benefit from this approach. The TSV is presented mainly as a tool to modeling human hair, but this model also gives rise to global hair animation control."}
{"_id": "8fa65b0d7663f046a954462d56ad69f42d0bace1", "title": "Combination therapy : Synergism between natural plant extracts and antibiotics against infectious diseases", "text": "Antibiotics are one of the most important weapons in fighting bacterial infections and have greatly benefited the health\u2010related quality of human life since their introduction. However, over the past few decades these health benefits are under threat as many commonly used antibiotics have become less and less effective against certain illnesses not only because many of them produce toxic reactions but also due to emergence of drug resistant bacteria. Resistance development is an even bigger problem since the bacterial resistance is often not restricted to the specific antibiotic prescribed, but generally extends to other compounds of the same class. Bacterial resistance and its rapid increase is a major concern of global public health and is emerging as one of the most significant challenges to human health. Treating bacterial infections by antibiotics is beneficial but their indiscriminate use has led to an alarming resistance among microorganisms as well as led to re-emergence of old infectious diseases. One approach to treat infectious diseases is the use of plant extracts individually and /or as an alternative approach is the use of combination of antibiotics with plant extracts. This latter approach i.e. combination therapy or synergistic therapy; against resistant microorganisms may lead to new ways of treating infectious diseases and probably this represents a potential area for further future investigations. Combination therapy is helpful and useful for patients with serious infections caused by drug resistant pathogens. The present review describes in detail, the observed synergy between natural extracts and standard antibiotics combating bacterial and fungal infections. The mode of action of combination therapy significantly differs from that of the same drugs acting individually; therefore the selection of an appropriate combination is crucial and essential which requires understanding the potential interaction between the plant extracts and antimicrobial agents."}
{"_id": "80651f9eb82f0fe4e070aa079e036d0e23e7c7af", "title": "Social ties and mental health", "text": "It is generally agreed that social ties play a beneficial role in the maintenance of psychological well-being. In this targeted review, we highlight four sets of insights that emerge from the literature on social ties and mental health outcomes (defined as stress reactions, psychological well-being, and psychological distress, including depressive symptoms and anxiety). First, the pathways by which social networks and social supports influence mental health can be described by two alternative (although not mutually exclusive) causal models\u2014the main effect model and the stress-buffering model. Second, the protective effects of social ties on mental health are not uniform across groups in society. Gender differences in support derived from social network participation may partly account for the higher prevalence of psychological distress among women compared to men. Social connections may paradoxically increase levels of mental illness symptoms among women with low resources, especially if such connections entail role strain associated with obligations to provide social support to others. Third, egocentric networks are nested within a broader structure of social relationships. The notion of social capital embraces the embeddedness of individual social ties within the broader social structure. Fourth, despite some successes reported in social support interventions to enhance mental health, further work is needed to deepen our understanding of the design, timing, and dose of interventions that work, as well as the characteristics of individuals who benefit the most."}
{"_id": "2651de5b84c12d47f568370b92ba3b5d9a309ade", "title": "The inhibitors of apoptosis (IAPs) as cancer targets", "text": "Apoptosis has been accepted as a fundamental component in the pathogenesis of cancer, in addition to other human diseases including neurodegeneration, coronary disease and diabetes. The origin of cancer involves deregulated cellular proliferation and the suppression of apoptotic processes, ultimately leading to tumor establishment and growth. Several lines of evidence point toward the IAP family of proteins playing a role in oncogenesis, via their effective suppression of apoptosis. The central mechanisms of IAP apoptotic suppression appear to be through direct caspase and pro-caspase inhibition (primarily caspase 3 and 7) and modulation of, and by, the transcription factor NF-kappaB. Thus, when the IAPs are over-expressed or over-active, as is the case in many cancers, cells are no longer able to die in a physiologically programmed fashion and become increasingly resistant to standard chemo- and radiation therapies. To date several approaches have been taken to target and eliminate IAP function in an attempt to re-establish sensitivity, reduce toxicity, and improve efficacy of cancer treatment. In this review, we address IAP proteins as therapeutic targets for the treatment of cancer and emphasize the importance of novel therapeutic approaches for cancer therapy. Novel targets of IAP function are being identified and include gene therapy strategies and small molecule inhibitors that are based on endogenous IAP antagonists. As well, molecular mechanistic approaches, such as RNAi to deplete IAP expression, are in development."}
{"_id": "34b4f2ae4d541be465ee34a6d168d80edd18123e", "title": "Large-Scale Analysis of Soccer Matches Using Spatiotemporal Tracking Data", "text": "Although the collection of player and ball tracking data is fast becoming the norm in professional sports, large-scale mining of such spatiotemporal data has yet to surface. In this paper, given an entire season's worth of player and ball tracking data from a professional soccer league (\u2248400,000,000 data points), we present a method which can conduct both individual player and team analysis. Due to the dynamic, continuous and multi-player nature of team sports like soccer, a major issue is aligning player positions over time. We present a \"role-based\" representation that dynamically updates each player's relative role at each frame and demonstrate how this captures the short-term context to enable both individual player and team analysis. We discover role directly from data by utilizing a minimum entropy data partitioning method and show how this can be used to accurately detect and visualize formations, as well as analyze individual player behavior."}
{"_id": "22362e199a0a861b3d7affd3660900955e9011b4", "title": "Motor skill acquisition.", "text": "INTRODUCTION ..................................................................................... 213 A GENERAL LAW OF MOTOR LEARNING ................................................. 215 Motor Learning as a Power Law .............................................................. 216 The Power Law--A Special Case of Motor Learning? .................................... 217 WHAT IS LEARNED WITH PRACTICE ....................................................... 219 One-To-One R call-Recognition Processes .................................................. 219 Schema Representation of Action .............................................................. 221 Problems with Prescriptive Accounts of Motor Learning ................................. 223 Motor Skill Acquisition as a Search Strategy ............................................... 224 INFORMATION A D MOTOR SKILL ACQUISITION ..................................... 225 Information as a Prescription .................................................................. 226 Information as Feedback ........................................................................ 227 Information to Channel the Search ............................................................ 231 CONCLUDING REMARKS ........................................................................ 233"}
{"_id": "4674c065f230435193c7a5e968e48c43d7eb3565", "title": "Bayesian Estimation of a Stochastic Volatility Model Using Option and Spot Prices : Application of a Bivariate Kalman Filter", "text": "In this paper Bayesian methods are applied to a stochastic volatility model using both the prices of the asset and the prices of options written on the asset. Posterior densities for all model parameters, latent volatilities and the market price of volatility risk are produced via a hybrid Markov Chain Monte Carlo sampling algorithm. Candidate draws for the unobserved volatilities are obtained using the Kalman filter and smoother to a linearization of a statespace representation of the model. The method is illustrated using the Heston (1993) stochastic volatility model applied to Australian News Corporation spot and option price data. Alternative models nested in the Heston framework are ranked via Bayes Factors and via fit, predictive and hedging performance."}
{"_id": "5a74e5f2d08d379f0816adf27b41d654f9068013", "title": "Genetic and Stress-Induced Loss of NG2 Glia Triggers Emergence of Depressive-like Behaviors through Reduced Secretion of FGF2", "text": "NG2-expressing glia (NG2 glia) are a uniformly distributed and mitotically active pool of cells in the central nervous system (CNS). In addition to serving as progenitors of myelinating oligodendrocytes, NG2 glia might also fulfill physiological roles in CNS homeostasis, although the mechanistic nature of such roles remains unclear. Here, we report that ablation of NG2 glia in the prefrontal cortex (PFC) of the adult brain causes deficits in excitatory glutamatergic neurotransmission and astrocytic extracellular glutamate uptake and induces depressive-like behaviors in mice. We show in parallel that chronic social stress causes NG2 glia density to decrease in areas critical to Major Depressive Disorder (MDD) pathophysiology at the time of symptom emergence in stress-susceptible mice. Finally, we demonstrate that loss of NG2 glial secretion of fibroblast growth factor 2 (FGF2) suffices to induce the same behavioral deficits. Our findings outline a pathway and role for NG2 glia in CNS homeostasis and mood disorders."}
{"_id": "4e74cadb44acfe373940f0b151c41ef3a02b9b0c", "title": "Substrate Integrated Waveguide Quasi-Elliptic Filter Using Slot-Coupled and Microstrip-Line Cross-Coupled Structures", "text": "This paper proposes a quasi-elliptic filter with slot coupling and nonadjacent cross coupling based on the substrate integrated waveguide (SIW) cavity. The slots etched on the top metal plane of SIW cavity are used to produce electrical coupling, and the cross coupling is realized by the microstrip transmission line above the SIW cavity. The coupling strength is mainly controlled by the width and height of the slot. The length of the open-ended microstrip line controls the sign of the cross coupling. The cross coupling with different signs are used in the filter to produce a pair of transmission zeros (TZs) at both sides of the passband. In order to prove the validity, a fourth-order SIW quasi-elliptic filter with TZsat both sides of the passband is fabricated in a two-layer printed circuit board. The measured insertion loss at a center frequency of 3.7 GHz is 1.1 dB. The return loss within the passband is below -18 dB with a fractional bandwidth of 16%. The measured results are in good agreement with the simulated results."}
{"_id": "bfb88f34328be56dc7917a59c2aee7a8c22795e1", "title": "A distributed ADMM approach for mobile data offloading in software defined network", "text": "Mobile data offloading has been introduced to alleviate the congestion of cellular networks and to improve the quality of service for mobile end users. This paper presents a distributed mechanism for mobile data offloading in software defined network (SDN) at the network edge. In SDN, the data traffic of base stations (BSs) can be dynamically offloaded to access points (APs), which is enabled by the SDN controller. The SDN controller formulates a revenue maximization problem to optimize the data offloading decision, and solves the problem in a fully distributed fashion. The proposed mechanism is based on the proximal Jacobian multi-block alternating direction method of multipliers (ADMM). BSs and APs perform the offloading decision update concurrently, and are coordinated by the SDN controller through dual variables to reach a consensus on the offloading demand and supply. Numerical simulations validate the effectiveness of the proposed algorithm."}
{"_id": "cbfea7048ff441a2acc14a2936b30e110bed6487", "title": "A Sequential Importance Sampling Algorithm for Generating Random Graphs with Prescribed Degrees", "text": "Random graphs with a given degree sequence are a useful model capturing several features absent in the classical Erd\u0151s-R\u00e9nyi model, such as dependent edges and non-binomial degrees. In this paper, we use a characterization due to Erd\u0151s and Gallai to develop a sequential algorithm for generating a random labeled graph with a given degree sequence. The algorithm is easy to implement and allows surprisingly efficient sequential importance sampling. Applications are given, including simulating a biological network and estimating the number of graphs with a given degree sequence."}
{"_id": "5d3a16b548477064cdee45cb9512d943f3a0ffa7", "title": "Immersive Analytics for Clinical Neuroscience", "text": "This paper introduces NeuroCave, an open-source immersive analytics software tool for exploring and making sense of both structural and functional connectomes describing the interconnectivity between brain regions. The software supports visual analysis tasks for clinical neuroscientists, facilitating explorations of connectome datasets used in comparative group studies. Researchers can enter and leave virtual reality as desired in order to visualize and reason about connectome datasets from a range of different perspectives."}
{"_id": "d9676c349b51b066dee846db6792064cb1ee2a39", "title": "An Improved Topology of SEPIC Converter With Reduced Output Voltage Ripple", "text": "An improved version of a single-ended primary inductor converter (SEPIC) is presented. The converter consists of a conventional SEPIC converter plus an additional high-frequency transformer and diode to maintain a freewheeling mode of the DC inductor currents during the switch on state. The voltage conversion ratio characteristics and semiconductor device voltage and current stresses are characterized. The main advantages of this converter are the continuous output current, smaller output voltage ripple, and lower semiconductors current stress compared with the conventional SEPIC converter. The design and simulation of the concept is verified by an experiment with a 48-V input and 12-V/3.75-A output converter."}
{"_id": "a184ad99f07b76f2f10db7425250ebe938ee3720", "title": "A single-layer folded substrate integrated waveguide (SIW) filter", "text": "A novel single-layer folded filter using the substrate integrated waveguide (SIW) technique is presented in this paper. A prototype at X-band for the proposed filter has been fabricated using standard printed-circuit-board (PCB) process, and is measured with a vector network analyzer (VNA). The measured results show the proposed filter has a narrow band-pass bandwidth, small insert loss and good frequency selective performances. The proposed filter has a compact size and it is suitable for designing microwave and millimeter-wave circuits."}
{"_id": "c6af28e992a1389114d4760c65ca258fc9cb74f9", "title": "Substrate Integrated Waveguide-to-Microstrip Transition in Multilayer Substrate", "text": "This paper presents a novel transition between a microstrip line and a substrate integrated waveguide (SIW) in a multilayer substrate design environment. In order to achieve a low-loss broadband response, the transition, consisting of a tapered or multisectional ridged SIW and a tapered microstrip line, is modeled and designed by simultaneously considering both impedance matching and field matching. Characteristic impedance and guided wavelength calculated by using closed-form expressions based on a transverse resonant method are used to develop our design procedure. Effective broad bandwidth is obtained in two examples developed in this study, which are validated with simulated and measured results. This transition provides a simple way to design substrate integrated circuits with buried microstrip circuits in the multilayer substrate in which any ratio of impedance transform can be anticipated."}
{"_id": "442a209e48c365076825198846cf7ec4761f3463", "title": "Integrated transition of coplanar to rectangular waveguides", "text": "Usual transitions between planar circuit and rectangular waveguide make use of 3-D complex mounting structures. Such an integration requires costly high precision mechanical alignment, In this paper, a new planar platform is developed in which a coplanar waveguide (CPW) and a rectangular waveguide are fully integrated on the same substrate, and they are interconnected via a simple transition. They can be built with a standard PCB process. Our experiments at 28 GHz show that an effective bandwidth of 7% at 15 dB return loss can easily be achieved. The CPW-to-waveguide transition allows for a complete integration of waveguide components on substrate with active components such as MMIC."}
{"_id": "520676110b3f7be99f170fe36d4aec1d9c2040a8", "title": "The substrate integrated circuits - a new concept for high-frequency electronics and optoelectronics", "text": "A new generation of high-frequency integrated circuits is presented, which is called substrate integrated circuits (SICs). Current state-of-the-art of circuit design and implementation platforms based on this new concept are reviewed and discussed in detail. Different possibilities and numerous advantages of the SICs are shown for microwave, millimeter-wave and optoelectronics applications. Practical examples are illustrated with theoretical and experimental results for substrate integrated waveguide (SIW), substrate integrated slab waveguide (SISW) and substrate integrated nonradiating dielectric (SINRD) guide circuits. Future research and development trends are also discussed with reference to low-cost innovative design of millimeter-wave and optoelectronic integrated circuits."}
{"_id": "4d4847c1c802c98bc78482d0178788c51691b309", "title": "Biologically Plausible, Human-Scale Knowledge Representation", "text": "Several approaches to implementing symbol-like representations in neurally plausible models have been proposed. These approaches include binding through synchrony (Shastri & Ajjanagadde, ), \"mesh\" binding (van der Velde & de Kamps, ), and conjunctive binding (Smolensky, ). Recent theoretical work has suggested that most of these methods will not scale well, that is, that they cannot encode structured representations using any of the tens of thousands of terms in the adult lexicon without making implausible resource assumptions. Here, we empirically demonstrate that the biologically plausible structured representations employed in the Semantic Pointer Architecture (SPA) approach to modeling cognition (Eliasmith, ) do scale appropriately. Specifically, we construct a spiking neural network of about 2.5 million neurons that employs semantic pointers to successfully encode and decode the main lexical relations in WordNet, which has over 100,000 terms. In addition, we show that the same representations can be employed to construct recursively structured sentences consisting of arbitrary WordNet concepts, while preserving the original lexical structure. We argue that these results suggest that semantic pointers are uniquely well-suited to providing a biologically plausible account of the structured representations that underwrite human cognition."}
{"_id": "40f913627f9c554686e2be02333ca719032e8012", "title": "Poststroke fatigue: a 2-year follow-up study of stroke patients in Sweden.", "text": "BACKGROUND AND PURPOSE\nFatigue is common among stroke patients. This study determined the prevalence of fatigue among long-term survivors after stroke and what impact fatigue had on various aspects of daily life and on survival.\n\n\nMETHODS\nThis study was based on Riks-Stroke, a hospital-based national register for quality assessment of acute stroke events in Sweden. During the first 6 months of 1997, 8194 patients were registered in Riks-Stroke, and 5189 were still alive 2 years after the stroke. They were followed up by a mail questionnaire, to which 4023 (79%) responded. Patients who reported that they always felt depressed were excluded.\n\n\nRESULTS\nTo the question, \"Do you feel tired?\" 366 (10.0%) of the patients answered that they always felt tired, and an additional 1070 (29.2%) were often tired. Patients who always felt tired were on average older than the rest of the study population (74.5 versus 71.5 years, P<0.001); therefore, all subsequent analyses were age adjusted. Fatigue was an independent predictor for having to move into an institutional setting after stroke. Fatigue was also an independent predictor for being dependent in primary activities of daily living functions. Three years after stroke, patients with fatigue also had a higher case fatality rate.\n\n\nCONCLUSIONS\nFatigue is frequent and often severe, even late after stroke. It is associated with profound deterioration of several aspects of everyday life and with higher case fatality, but it usually receives little attention by healthcare professionals. Intervention studies are needed."}
{"_id": "5466e42134dbba93c37249f37ff66da5658b415f", "title": "Microcontroller Based Fan Speed Regulator with Continuous Monitoring using LCD Display", "text": "Design and implementation of Microcontroller based automatic Fan speed regulator using temperature sensor is represented here. Most of the available Fans today are controlled manually by voltage regulators which have different stages of speed. During summer nights, especially the room temperature is initially quite high, as time passes, the temperature starts dropping. Also, after a person falls asleep, the metabolic rate of one\u2019s body decreases, and one is expected to wake up from time to time to adjust the speed of the Fan. Many people who are disabled / physically challenged persons are affected the most because of the inconveniences associated in changing the Fan speed level manually when the room temperature changes. So, an efficient automatic Fan speed control system that automatically changes the speed level according to the change in environment / room temperature which is more comfortable than manual system. This project has been designed in such a way that the required components are available in local market. Therefore an indigenous low cost control scheme has been developed which can be used in real life application."}
{"_id": "64ffab81fa72869f9c85ca615767aff155bb1ac7", "title": "Development of an Autonomous Remote Access Water Quality Monitoring System", "text": "Due to the vast increase in global industrial output, rural to urban drift and the over-utilization of land and sea resources, the quality of water available to people has deteriorated greatly. Before the sensor based approach to water quality monitoring, water quality was tested by collecting the samples of water and experimentally analyzing it in the laboratories. However, in today, with time being a scarce resource, the traditional method of water quality testing is not efficient anymore. To tackle this issue, several electronic (microcontroller and sensor based) water quality monitoring systems were developed in the past decade. However, an in depth study of this current water quality testing technology shows that there are some limitations that should be taken into consideration. Therefore, an automatic, remote, and low cost water quality monitoring system has been developed. This system consists of a core microcontroller, multiple sensors, GSM module, LCD display screen, and an alarm subsystem. The quality of water is read from the physical world through the water quality testing sensors and sent to the microcontroller. The data is then analyzed by the microcontroller and the result is displayed on the LCD screen on the device. At the same time, another copy of the sensor readings is sent remotely to the user\u2019s mobile phone in the form of SMS. If an abnormal water quality parameter is detected by any sensor, the alarm system will turn on the respective red LED for that parameter and the buzzer will give warning sound. At the same time, the abnormality of the water parameter is reported to the user through SMS. The system is aimed to be used for wide applications and by all categories of users. It can facilitate the process of water quality monitoring autonomously and with low cost; to help people improve their quality of drinking water, household water supplies and aquaculture farms, especially in rural areas where residents do not have access to standardized water supply and suffer from different diseases caused by contaminated water."}
{"_id": "8117bc9d9d2fadd4f01d3bee09a7b62fb3592784", "title": "Analyzing Enterprise Architecture in National Governments: The Cases of Denmark and the Netherlands", "text": "National enterprise architectures (NEA) promise to fill the gap between policy and implementation. NEAs are embedded within an institutional environment consisting of active players capable of responding strategically and innovatively to architectural initiatives, which might complicate NEA adoption. In this paper we analyze the efforts of two European national governments in developing enterprise architecture. Grounded in institutional theory and practice we develop an analytical framework and use this framework to analyze the efforts of two countries, Denmark and the Netherlands. Our framework and analysis draws the attention to the need to take a broader perspective on enterprise architecture, especially governance aspects determine the adoption and diffusion of NEA"}
{"_id": "fa995985fa0d34c91eeb26aa6b0fbfad419d0e1b", "title": "A Cross Tenant Access Control (CTAC) Model for Cloud Computing: Formal Specification and Verification", "text": "Sharing of resources on the cloud can be achieved on a large scale, since it is cost effective and location independent. Despite the hype surrounding cloud computing, organizations are still reluctant to deploy their businesses in the cloud computing environment due to concerns in secure resource sharing. In this paper, we propose a cloud resource mediation service offered by cloud service providers, which plays the role of trusted third party among its different tenants. This paper formally specifies the resource sharing mechanism between two different tenants in the presence of our proposed cloud resource mediation service. The correctness of permission activation and delegation mechanism among different tenants using four distinct algorithms (activation, delegation, forward revocation, and backward revocation) is also demonstrated using formal verification. The performance analysis suggests that the sharing of resources can be performed securely and efficiently across different tenants of the cloud."}
{"_id": "fe77d9c1285f3d07721a9cc92f85d972197a6932", "title": "Applying Deep Learning to Better Predict Cryptocurrency Trends", "text": "In this paper, we will create deep learning models using the Python library, Keras, to make predictions on Bitcoin. We designed two neural networks to make similar predictions with important differences. In our first approach, we created a simple neural net with one input layer and an additional dropout layer to generate a continuous value. This continuous value is the predicted price of the Bitcoin a week from the given input. For the second model, we attempted to predict whether or not the price of the Bitcoin would go up 3%, stay within 3%, or go down 3% a week from a given date. Instead of returning a single continuous value, the result contains an array of three values. Each value is a percentage of the likelihood that it will either go up 3%, stay within 3%, or go down 3%. We approached designing the models using many different optimizers, activation functions, number of neurons, and various quantities of layers before settling on the final models. We also compared three different optimizers for each problem: Stochastic Gradient Descent (SGD), Adam, and RMSProp. We compared the results of all three optimizers to determine which one was the most effective at creating the most accurate model."}
{"_id": "841df50ecd50d27ffea9dada180e34a24aa12409", "title": "Using ontologies to improve semantic interoperability in health data.", "text": "The present-day health data ecosystem comprises a wide array of complex heterogeneous data sources. A wide range of clinical, health care, social and other clinically relevant information are stored in these data sources. These data exist either as structured data or as free-text. These data are generally individual person-based records, but social care data are generally case based and less formal data sources may be shared by groups. The structured data may be organised in a proprietary way or be coded using one-of-many coding, classification or terminologies that have often evolved in isolation and designed to meet the needs of the context that they have been developed. This has resulted in a wide range of semantic interoperability issues that make the integration of data held on these different systems changing. We present semantic interoperability challenges and describe a classification of these. We propose a four-step process and a toolkit for those wishing to work more ontologically, progressing from the identification and specification of concepts to validating a final ontology. The four steps are: (1) the identification and specification of data sources; (2) the conceptualisation of semantic meaning; (3) defining to what extent routine data can be used as a measure of the process or outcome of care required in a particular study or audit and (4) the formalisation and validation of the final ontology. The toolkit is an extension of a previous schema created to formalise the development of ontologies related to chronic disease management. The extensions are focused on facilitating rapid building of ontologies for time-critical research studies."}
{"_id": "64c913c37b4099fc7ffbb70045e7beb19ed14613", "title": "Generating Responses Expressing Emotion in an Open-domain Dialogue System", "text": "Neural network-based Open-ended conversational agents automatically generate responses based on predictive models learned from a large number of pairs of utterances. The generated responses are typically acceptable as a sentence but are often dull, generic, and certainly devoid of any emotion. In this paper we present neural models that learn to express a given emotion in the generated response. We propose four models and evaluate them against 3 baselines. An encoder-decoder framework-based model with multiple attention layers provides the best overall performance in terms of expressing the required emotion. While it does not outperform other models on all emotions, it presents promising results in most cases."}
{"_id": "9cc76f358a36c50dafc629d4735fcdd09f09f876", "title": "Aging in Place: Self-Care in Smart Home Environments", "text": ""}
{"_id": "019119dff662e50678563bb50bccca13d433bab2", "title": "Horse Racing Prediction Using Artificial Neural Networks", "text": "Artificial Neural Networks (ANNs) have been applied to predict many complex problems. In this paper ANNs are applied to horse racing prediction. We employed Back-Propagation, Back-Propagation with Momentum, QuasiNewton, Levenberg-Marquardt and Conjugate Gradient Descent learning algorithms for real horse racing data and the performances of five supervised NN algorithms were analyzed. Data collected from AQUEDUCT Race Track in NY, include 100 actual races from 1 January to 29 January 2010. The experimental results demonstrate that NNs are appropriate methods in horse racing prediction context. Results show that BP algorithm performs slightly better than other algorithms but it needs a longer training time and more parameter selection. Furthermore, LM algorithm is the fastest. Key-Words: Artificial Neural Networks, Time Series Analysis, Horse Racing Prediction, Learning Algorithms, BackPropagation"}
{"_id": "b2593c4281a5c1b83f6861336834dbfc374f19b5", "title": "CellDesigner: a process diagram editor for gene-regulatory and biochemical networks", "text": "Systems biology is characterized by synergistic integration of theory, computational modeling, and experiment. Though software infrastructure is one of the most critical components of systems biology research, there has been no common infrastructure or standard to enable integration of computational resources. To solve this problem, the Systems Biology Markup Language (SBML) [2] and Systems Biology Workbench (SBW) have been developed. A number of simulation and analysis software packages already support SBML and SBW, or are in the process to support it. An identification of logic and dynamics of gene-regulatory and biochemical networks is a major challenge of systems biology. We believe that such network building tools and simulation environments using standardized technologies play an important role in software platform of systems biology. As one of the approaches, we have developed CellDesigner [1], which is a process diagram editor for gene-regulatory and biochemical networks."}
{"_id": "00eb744cdf12275e512703dcc7e74c7435bd0d62", "title": "SVMs \u2014 a practical consequence of learning theory", "text": "Bernhard Sch\u00f6lkopf, GMD First Is there anything worthwhile to learn about the new SVM algorithm, or does it fall into the category of \u201cyet-another-algorithm,\u201d in which case readers should stop here and save their time for something more useful? In this short overview, I will try to argue that studying support-vector learning is very useful in two respects. First, it is quite satisfying from a theoretical point of view: SV learning is based on some beautifully simple ideas and provides a clear intuition of what learning from examples is about. Second, it can lead to high performances in practical applications. In the following sense can the SV algorithm be considered as lying at the intersection of learning theory and practice: for certain simple types of algorithms, statistical learning theory can identify rather precisely the factors that need to be taken into account to learn successfully. Real-world applications, however, often mandate the use of more complex models and algorithms\u2014such as neural networks\u2014that are much harder to analyze theoretically. The SV algorithm achieves both. It constructs models that are complex enough: it contains a large class of neural nets, radial basis function (RBF) nets, and polynomial classifiers as special cases. Yet it is simple enough to be analyzed mathematically, because it can be shown to correspond to a linear method in a high-dimensional feature space nonlinearly related to input space. Moreover, even though we can think of it as a linear algorithm in a high-dimensional space, in practice, it does not involve any computations in that high-dimensional space. By the use of kernels, all necessary computations are performed directly in input space. This is the characteristic twist of SV methods\u2014we are dealing with complex algorithms for nonlinear pattern recognition, 1 regression, 2 or feature extraction,3 but for the sake of analysis and algorithmics, we can pretend that we are working with a simple linear algorithm. I will explain the gist of SV methods by describing their roots in learning theory, the optimal hyperplane algorithm, the kernel trick, and SV function estimation. For details and further references, see Vladimir Vapnik\u2019s authoritative treatment, 2 he collection my colleagues and I have put together, 4 and the SV Web page at http://svm. first.gmd.de."}
{"_id": "584a0cb0d7d70673067daef9e473cca381e10ef2", "title": "Off-Line Arabic Character Recognition \u2013 A Review", "text": "Off-line recognition requires transferring the text under consideration into an image file. This represents the only available solution to bring the printed materials to the electronic media. However, the transferring process causes the system to lose the temporal information of that text. Other complexities that an off-line recognition system has to deal with are the lower resolution of the document and the poor binarisation, which can contribute to readability when essential features of the characters are deleted or obscured. Recognising Arabic script presents two additional challenges: orthography is cursive and letter shape is context sensitive. Certain character combinations form new ligature shapes, which are often font-dependent. Some ligatures involve vertical stacking of characters. Since not all letters connect, word boundary location becomes an interesting problem, as spacing may separate not only words, but also certain characters within a word. Various techniques have been implemented to achieve high recognition rates. These techniques have tackled different aspects of the recognition system. This review is organised into five major sections, covering a general overview, Arabic writing characteristics, Arabic text recognition system, Arabic OCR software and conclusions."}
{"_id": "146bae5cc033f0425d82aa98bcb90ce4ca794a88", "title": "Broadband design of substrate integrated waveguide to stripline interconnect", "text": "A broadband design of substrate integrated waveguide (SIW) to stripline interconnects is presented for the first time. The transition shows a wideband performance and, contrary to microstrip and coplanar waveguide circuitry, interconnects two planar transmission-line media that are both capable of medium-level power handling capabilities. Over a bandwidth of 18 GHz to 28 GHz (43.4 percent), the single interconnect shows a worst-case return loss of 24 dB and maximum insertion losses of 0.44 dB. For a back-to-back connection, the return loss reduces to 19 dB and insertion loss increases to 0.87 dB. All dimensional parameters are specified, and the design is validated by two commercially available field-solver packages. Moreover, field plots are presented that highlight the step-by-step mode conversion from SIW to stripline."}
{"_id": "3fa13418dd46aa19be6a99a5eed15a8004e4c140", "title": "Deep Learning Powered In-Session Contextual Ranking using Clickthrough Data", "text": "http://en.wikipedia.org/wiki/The_Dangerous_Days_of_Daniel_X http://en.wikipedia.org/wiki/The_Dangerous_Days_of_Daniel_X http://www.amazon.com/The-Dangerous-Days-Daniel-X/dp/0316119709 http://fanon.wikia.com/wiki/The_Dangerous_Days_of_Daniel_X http://www.goodreads.com/book/show/2235597.The_Dangerous_Days_of_Daniel_X http://www.amazon.com/The-Dangerous-Days-Daniel-X/dp/0316119709 http://www.daniel-x.co.uk/books/dangerous-days/ http://danielx.wikia.com/wiki/The_Dangerous_Days_of_Daniel_X_(novel) http://www.freebooknotes.com/summaries-analysis/the-dangerous-days-of-daniel-x/ http://danielx.wikia.com/wiki/Daniel_X http://www.jamespatterson.com/books_danielX.php#.VCR0vOfUe1A http://www.goodreads.com/series/49946-daniel-x http://jamespatterson.com/books_daniel_x.php#.VCR01efUe1A http://en.wikipedia.org/wiki/Daniel_X:_Watch_the_Skies http://books.google.com/books/about/The_Dangerous_Days_of_Daniel_X.html? id=2UBONTvr_BEC http://www.jamespatterson.com/books_danielX.php"}
{"_id": "572c58aee06d3001f1e49bbe6b39df18757fb3c5", "title": "VISUAL SENSITIVITY ANALYSIS", "text": ""}
{"_id": "e16a19836ff9870ef5739cb64c647a6315c845e9", "title": "High Vgs MOSFET characteristics with thin gate oxide for PMIC application", "text": "We present the characteristics of a newly developed MOSFET which has thin gate oxide but sustains high gate voltage. It consists of MIM capacitor coupled floating gate poly and source/drain junction formed by low voltage well. The characteristics of the MOSFET depend on the choice of capacitance coupling ratio which is determined by the ratio of MIM (Metal-Insulator-Metal) capacitance and gate oxide capacitance. This means that the proper choice of layout can control the device performances; threshold voltage, ID-VD, and ID-VG characteristics. Some of smart power management ICs requires of a low density non-volatile memory for the purpose of PMIC trimming. To meet this purpose, NVM embedding on PMIC solution should be cost effective. This newly proposed high gate voltage MOSFET with thin gate oxide can be an effective solution for the periphery circuits of single poly EEPROM operation."}
{"_id": "4e99f2e963208bd89286f849ee1e5fde2732cfba", "title": "Path planning and collision avoidance for robots", "text": "An optimal control problem to find the fastest collision-free trajectory of a robot surrounded by obstacles is presented. The collision avoidance is based on linear programming arguments and expressed as state constraints. The optimal control problem is solved with a sequential programming method. In order to decrease the number of unknowns and constraints a backface culling active set strategy is added to the resolution technique."}
{"_id": "dcb80fe5fe1d42510f88de118065d42229e09fdb", "title": "Wideband waveguide slot array antennas with corporate-feed in 120GHz and 350GHz bands", "text": "We design and fabricate double-layer slotted waveguide array antennas with wide bandwidths and high efficiencies for 120 GHz and 350 GHz bands. To achieve high gain and high efficiency in high frequency bands, the diffusion bonding technique of plate laminated waveguide is used as a fabrication method. The antennas with 16\u00d716 slots show about 70% efficiency with 32 dBi gain and about 50% efficiency with 31 dBi gain in the 120 GHz and 350 GHz bands, respectively."}
{"_id": "651e45bf2511dbe93ac7129378cc1b1c739f0255", "title": "DoS & DDoS in Named Data Networking", "text": "With the growing realization that current Internet protocols are reaching the limits of their senescence, several ongoing research efforts aim to design potential next-generation Internet architectures. Although they vary in maturity and scope, in order to avoid past pitfalls, these efforts seek to treat security and privacy as fundamental requirements. Resilience to Denialof-Service (DoS) attacks that plague today\u2019s Internet is a major issue for any new architecture and deserves full attention. In this paper, we focus on DoS in Named Data Networking (NDN) \u2013 a specific candidate for next-generation Internet architecture designs. By naming data instead of its locations, NDN transforms data into a first-class entity and makes itself an attractive and viable approach to meet the needs for many current and emerging applications. It also incorporates some basic security features that mitigate classes of attacks that are commonly seen today. However, NDN\u2019s resilience to DoS attacks has not been analyzed to-date. This paper represents a first step towards assessment and possible mitigation of DoS in NDN. After identifying and analyzing several new types of attacks, it investigates their variations, effects and counter-measures. This paper also sheds some light on the debate about relative virtues of self-certifying, as opposed to human-readable, names in the context of content-centric networking."}
{"_id": "6edb419538db1116ecb2be9ed005e469a3c029f3", "title": "ClaimEval: Integrated and Flexible Framework for Claim Evaluation Using Credibility of Sources", "text": "The World Wide Web (WWW) has become a rapidly growing platform consisting of numerous sources which provide supporting or contradictory information about claims (e.g., \u201cChicken meat is healthy\u201d). In order to decide whether a claim is true or false, one needs to analyze content of different sources of information on the Web, measure credibility of information sources, and aggregate all these information. This is a tedious process and the Web search engines address only part of the overall problem, viz., producing only a list of relevant sources. In this paper, we present ClaimEval, a novel and integrated approach which given a set of claims to validate, extracts a set of pro and con arguments from the Web information sources, and jointly estimates credibility of sources and correctness of claims. ClaimEval uses Probabilistic Soft Logic (PSL), resulting in a flexible and principled framework which makes it easy to state and incorporate different forms of prior-knowledge. Through extensive experiments on realworld datasets, we demonstrate ClaimEval\u2019s capability in determining validity of a set of claims, resulting in improved accuracy compared to state-of-the-art baselines."}
{"_id": "7534ac4b42d39e2033e32cb4a38a9dc95070b50d", "title": "Where the Action Is - The Foundations of Embodied Interaction", "text": "Most textbooks in HCI and CSCW do not offer a coherent and over\u2013arching understanding of social and technological issues. They present a variety of techniques and technologies, and outline a little history, but offer little in terms of theory that addresses the complexity of collaborative systems\u2019 structure and use. The majority of practitioners and researchers do not see theory as one of the things that they do. Immersed in their craft, focusing on technological innovation or ethnomethodological detail, they do not engage in theoretical abstraction. Technologists contentedly explore new tools and devices, with little heed to older disciplines or deeper discussion about the limits and assumptions inherent in their craft. The area of sociology most influential in CSCW, ethnomethodology, deliberately keeps theorising and generalisation at a distance, seeing abstraction as brutish and creativity as foreign. In our field, theory is like the public library. If asked, most of us would say that we are glad that it is around\u2014but few of us actually go there. Most see it as a haven for the old, the unemployed and the eccentric. Paul Dourish is a card\u2013carrying member, however. He reads avidly and widely, but is also a skilful system designer and developer. The result is a book that is deep, accessible and useful, which is a rare thing nowadays. In the introduction, Dourish lays out the structure of the book. He describes and reflects on two current trends in system design: tangible and social computing. By looking at the usually hidden assumptions in system design, he makes his later theoretical discussion more relevant and accessible to a computer science audience. He does not develop new philosophy or social theory, but draws upon established 20th century philosophy of language and phenomenology. Much of this material is likely to be unfamiliar to \u2018the average programmer\u2019 and, although Dourish presents it well, it may still be challenging to the reader. However, its use is one of the book\u2019s main contributions. He uses it to ground a conceptual framework and a corresponding set of principles for system design practice. He aims to do justice to the sociality and heterogeneity of interactive media, and to avoid putting theory above practice\u2014or vice versa. His ideal is balance: \u201cthe ability to develop systemsthat resonate with, rather than restrict (or, worse, refute), the social organization of action\u201d. The term \u2018tangible computing\u2019 is used very broadly in this book. The chapter on this theme covers the designs of Hiroshi Ishii\u2019s group at MIT\u2019s Media Lab, but also ubiquitous computing work such as the badges, tabs and LiveBoards of Xerox PARC, and augmented reality systems such as Pierre Wellner\u2019s DigitalDesk. Although \u2018UbiComp\u2019 is more fashionable nowadays, Dourish chooses a term that helps him focus more on our perception and \u201cthe ways we experience the everyday world\u201d than on computation and technology. Much of this discussion centres on exemplary systems, and the way that an increasing number and variety of computational devices and sensors are distributed in our environment. This contrasts with older systems that used very few media and which were encapsulated in the beige box of the traditional PC. 
The following chapter is on social computing: \u201cthe application of sociological understanding to the design of interactive systems\u201d. This stands in contrast to the more traditional tendency for designers to treat people as isolated system users, with little account taken of organisational and social context. Dourish draws upon some influential studies of existing technology such as that of an air traffic control room by Hughes et al., and the ethnography of a print shop by Bowers, Button and Sharrock. Compared to the previous chapter, exemplary systems are scarce. There are none of the previous chapter\u2019s attractive images of exotic displays and devices, so beloved in undergraduate lectures and conference presentations, as this chapter addresses systems\u2019 internal structure rather than external interaction. Dourish has developed more systems in the \u2018social\u2019 category than the \u2018tangible\u2019, and here he offers as examples Mansfield et al.\u2019s Orbit and some of his own work: using a system design technique, computational reflection, to offer users an account of deep system structure. In describing this particular technique, Dourish touches on the essence of the entire book: What is radical is the relationship it proposes between technical design and social understandings. It argues that the most fruitful place to forge these relationships is at a foundational level, one that attempts to take sociological insights into the heart of the process and fabric of design. (p. 87)"}
{"_id": "2f4010c6f7248d9cd1e43bd5985a26fe2f068211", "title": "The reflective practitioner: How professionals think in action", "text": "What do you do to start reading the reflective practitioner how professionals think in action arena ? Searching the book that you love to read first or find an interesting book that will make you want to read? Everybody has difference with their reason of reading a book. Actuary, reading habit must be from earlier. Many people may be love to read, but not a book. It's not fault. Someone will be bored to open the thick book with small words to read. In more, this is the real condition. So do happen probably with this the reflective practitioner how professionals think in action arena ."}
{"_id": "58d80963de554886910e878c6ad3b515fd600cdf", "title": "Systems Development in Information Systems Research", "text": "In this paper, the use of systems development as a methodology in information systems (is) research is described and defended. A framework to explain the nature of systems development as a research methodology in is research is proposed. Use of this methodology in the engineering field in general is compared with its use specifically in computer science and computer engineering. An integrated program for conducting IS research that incorporates theory building, systems development, experimentation, and observation is proposed. Progress in several application domains is reviewed to provide a basis upon which to argue that systems development is a valid research methodology. A systems development research process is presented from a methodological perspective. Software engineering, which is the basic method of applying the systems development research methodology, is then discussed. It is the authors' belief that systems development and other research methodologies are complementary and that an integrated multi-dimensional and multimethodological approach will generate fruitful is research results. The premise is that research contributions can result from systems development, experimentation, observation, and performance testing of the systems under development and that all of these research approaches are needed to investigate different aspects of the research question. An earlier version of Ihis paper was originally published in the Proceedings of ihe Twenty-Third Hawaii International Conference on System Sciences (IEEE Computer Society Press, 1990). Journal of Management Information Systems I Winter 1990-91, Vol. 7, No, 3, pp. 89-106. Copyright \u00a9 M. E. Sharpe, Inc., 1991. 90 NUNAMAKER, CHEN. AND PURDIN"}
{"_id": "10b4ec477d9853c072569372661d564ee15e505f", "title": "PUBLIC DEBT AND GROWTH", "text": "This paper examines the impact of high public debt on long-run economic growth in a panel of advanced and emerging economies over four decades, while taking into account various estimation issues including reverse causality and endogeneity. Threshold effects, non-linearities, and differences between advanced and emerging market economies are also explored. High initial public debt is found to be significantly and consistently associated with slower subsequent growth, controlling for other determinants of growth. The adverse effect largely reflects a slowdown in labor productivity growth mainly due to reduced investment and slower growth of capital stock. Extensive robustness checks confirm the results."}
{"_id": "ccc560f932e36b8561a847e02fcf20522bc71c5f", "title": "Loihi: A Neuromorphic Manycore Processor with On-Chip Learning", "text": "Loihi is a 60-mm2 chip fabricated in Intels 14-nm process that advances the state-of-the-art modeling of spiking neural networks in silicon. It integrates a wide range of novel features for the field, such as hierarchical connectivity, dendritic compartments, synaptic delays, and, most importantly, programmable synaptic learning rules. Running a spiking convolutional form of the Locally Competitive Algorithm, Loihi can solve LASSO optimization problems with over three orders of magnitude superior energy-delay-product compared to conventional solvers running on a CPU iso-process/voltage/area. This provides an unambiguous example of spike-based computation, outperforming all known conventional solutions."}
{"_id": "7164f940ad4b0d23ab87475e4edfcde7c8df0d14", "title": "Towards improved genome-scale metabolic network reconstructions: unification, transcript specificity and beyond", "text": "Genome-scale metabolic network reconstructions provide a basis for the investigation of the metabolic properties of an organism. There are reconstructions available for multiple organisms, from prokaryotes to higher organisms and methods for the analysis of a reconstruction. One example is the use of flux balance analysis to improve the yields of a target chemical, which has been applied successfully. However, comparison of results between existing reconstructions and models presents a challenge because of the heterogeneity of the available reconstructions, for example, of standards for presenting gene-protein-reaction associations, nomenclature of metabolites and reactions or selection of protonation states. The lack of comparability for gene identifiers or model-specific reactions without annotated evidence often leads to the creation of a new model from scratch, as data cannot be properly matched otherwise. In this contribution, we propose to improve the predictive power of metabolic models by switching from gene-protein-reaction associations to transcript-isoform-reaction associations, thus taking advantage of the improvement of precision in gene expression measurements. To achieve this precision, we discuss available databases that can be used to retrieve this type of information and point at issues that can arise from their neglect. Further, we stress issues that arise from non-standardized building pipelines, like inconsistencies in protonation states. In addition, problems arising from the use of non-specific cofactors, e.g. artificial futile cycles, are discussed, and finally efforts of the metabolic modelling community to unify model reconstructions are highlighted."}
{"_id": "0a866d10c90e931d8b60a84f9f029c0cc79276fa", "title": "A Multiphase Switched-Capacitor DC\u2013DC Converter Ring With Fast Transient Response and Small Ripple", "text": "A fully integrated step-down switched-capacitor dc\u2013dc converter ring with 123 phases has been designed that could achieve fast dynamic voltage scaling for the microprocessor of wearable devices. The symmetrical multiphase converter ring surrounds its load in the square and supplies power to the on-chip power grid that is easily accessible at any point of the chip edges. The frequency of the $V_{\\mathrm {DD}}$ -controlled oscillator is adjusted through its supply voltage $V_{\\mathrm {DD}}$ , which allows the unity-gain frequency to be designed higher than the switching frequency. The converter ring has been fabricated in a low-leakage 65-nm CMOS process. This converter achieves a response time of 3 ns, a reference tracking speed of 2.5 V/ $\\mu \\text{s}$ , and a minimum output ripple of 2.2 mV. The peak efficiency is 80% at the power density of 66.6 mW/mm2, and the maximum power density is 180 mW/mm2."}
{"_id": "1fcfa9be6fb3a444deafcda2824cf6624f2ae6a6", "title": "U-Air: when urban air quality inference meets big data", "text": "Information about urban air quality, e.g., the concentration of PM2.5, is of great importance to protect human health and control air pollution. While there are limited air-quality-monitor-stations in a city, air quality varies in urban spaces non-linearly and depends on multiple factors, such as meteorology, traffic volume, and land uses. In this paper, we infer the real-time and fine-grained air quality information throughout a city, based on the (historical and real-time) air quality data reported by existing monitor stations and a variety of data sources we observed in the city, such as meteorology, traffic flow, human mobility, structure of road networks, and point of interests (POIs). We propose a semi-supervised learning approach based on a co-training framework that consists of two separated classifiers. One is a spatial classifier based on an artificial neural network (ANN), which takes spatially-related features (e.g., the density of POIs and length of highways) as input to model the spatial correlation between air qualities of different locations. The other is a temporal classifier based on a linear-chain conditional random field (CRF), involving temporally-related features (e.g., traffic and meteorology) to model the temporal dependency of air quality in a location. We evaluated our approach with extensive experiments based on five real data sources obtained in Beijing and Shanghai. The results show the advantages of our method over four categories of baselines, including linear/Gaussian interpolations, classical dispersion models, well-known classification models like decision tree and CRF, and ANN."}
{"_id": "929a9641a06255764c69110673fbbd22d05428d5", "title": "Key Estimation in Electronic Dance Music", "text": "In this paper we study key estimation in electronic dance music, an umbrella term referring to a variety of electronic music subgenres intended for dancing at nightclubs and raves. We start by defining notions of tonality and key before outlining the basic architecture of a template-based key estimation method. Then, we report on the tonal characteristics of electronic dance music, in order to infer possible modifications of the method described. We create new key profiles combining these observations with corpus analysis, and add two pre-processing stages to the basic algorithm. We conclude by comparing our profiles to existing ones, and testing our modifications on independent datasets of pop and electronic dance music, observing interesting improvements in the performance or our algorithms, and suggesting paths for future research."}
{"_id": "acb89e63516fbf6f3383d18b81aea18593549eef", "title": "Weed Detection Using Fractal-Based Low Cost Commodity Hardware Raspberry Pi", "text": "Conventional weed control system is usually used by spraying herbicides uniformly throughout the land. Excessive use of herbicides on an ongoing basis can produce chemical waste that is harmful to plants and soil. The application of precision agriculture farming in the detection process in order to control weeds using Computer Vision On Farm becomes interesting, but it still has some problems due to computer size and power consumption. Raspberry Pi is one of the minicomputer with low price and low power consumption. Having computing like a desktop computer with the open source Linux operating system can be used for image processing and weed fractal dimension processing using OpenCV library and C programming. This research results the best fractal computation time when performing the image with dimension size of 128 x 128 pixels. It is about 7 milliseconds. Furthermore, the average speed ratio between personal computer and Raspberry Pi is 0.04 times faster. The use of Raspberry Pi is cost and power consumption efficient compared to personal computer."}
{"_id": "2dee4331f0aa4b368fce3249e88d43f80c9ed06e", "title": "Cryptocash, cryptocurrencies, and cryptocontracts", "text": "One of the central challenges for mathematical cryptography is to create a payment system that provides the advantages of cash in a digital world. In this expository article we describe two very different solutions to this problem. The first is an elliptic-curve-based version of a construction of S. Brands, and the second is Bitcoin. We also discuss a generalization of Bitcoin that supports peer-to-peer contracts. AMS subject classifications. 94A60, 68P25, 14G50, 94-02."}
{"_id": "b54da6f5a0a0d70b3f8563b8c76a5fb9d866b867", "title": "Recent Developments in Control Schemes of BLDC Motors", "text": "This paper presents a technical review of published literature addressing control schemes for BLDC motors. The control methods reviewed include sensor less control, PWM techniques used, various methods for rotor position detection and initial rotor position detection methods."}
{"_id": "c75e1ad1eed067523e875f5bd43b4048a694c036", "title": "Greenhouse Temperature and Humidity Intelligent Control System", "text": "The article based on embedded database of greenhouse temperature and humidity control system intelligent. Put forward by embedded database system set up in an ideal environment for data greenhouse temperature and humidity control, greenhouse crops in the process of growth under control. To address the growth of greenhouse crops in the process of temperature and humidity controlled environment is not ideal, at the same time improve the efficiency of control and cost-effective system. The article focuses on the structure of the control system, hardware, software design and system control strategy. The control system has a simple hardware structure, cost-effective, easy to use and maintenance, temperature and humidity data, and other advantages of good stability. Key-Words: Embedded database, Temperature and Humidity, Control ; data filtering, Greenhouse,"}
{"_id": "abf97fc7d0228d2c58321a10cca2df9dfcda571d", "title": "Modular switched capacitor voltage multiplier topology for pulsed power supply", "text": "Along with rapid advancement of power semiconductors, voltage multipliers have introduced new series of pulsed power generators. In this paper, based on conventional voltage multiplier and by using power electronics switches a new topology in high voltage pulsed power application is proposed. This topology is a modular circuit that can generate a high output voltage from a relatively low input voltage with fast rise time and adjustable frequency, pulse width, and voltage levels using a series connection of switched capacitor cells. A comparative analysis is carried out to show the advantages of proposed topology. Experimental and simulation results are presented to confirm the analysis."}
{"_id": "7c62116714a9b8a9222083b0f688ec7d423e81ac", "title": "Effect of transdermal teriparatide administration on bone mineral density in postmenopausal women.", "text": "CONTEXT\nTreatment of osteoporosis with an anabolic agent, teriparatide [human PTH 1-34 (TPTD)], is effective in reducing incident fractures, but patient resistance to daily sc injections has limited its use. A novel transdermal patch, providing a rapid, pulse delivery of TPTD, may provide a desirable alternative.\n\n\nOBJECTIVE\nThe aim of the study was to determine the safety and efficacy of a novel transdermal TPTD patch compared to placebo patch and sc TPTD 20-microg injection in postmenopausal women with osteoporosis.\n\n\nDESIGN\nOur study consisted of 6-month, randomized, placebo-controlled, positive control, multidose daily administration.\n\n\nPATIENTS\nWe enrolled 165 postmenopausal women (mean age, 64 yr) with osteoporosis.\n\n\nINTERVENTIONS\nA TPTD patch with a 20-, 30-, or 40-microg dose or a placebo patch was self-administered daily for 30-min wear time, or 20 microg of TPTD was injected daily.\n\n\nOUTCOMES\nThe primary efficacy measure was mean percentage change in lumbar spine bone mineral density (BMD) from baseline at 6 months.\n\n\nRESULTS\nTPTD delivered by transdermal patch significantly increased lumbar spine BMD vs. placebo patch in a dose-dependent manner at 6 months (P < 0.001). TPTD 40-microg patch increased total hip BMD compared to both placebo patch and TPTD injection (P < 0.05). Bone turnover markers (procollagen type I N-terminal propeptide and C-terminal cross-linked telopeptide of type I collagen) increased from baseline in a dose-dependent manner in all treatment groups and were all significantly different from placebo patch (P < 0.001). All treatments were well tolerated, and no prolonged hypercalcemia was observed.\n\n\nCONCLUSION\nTransdermal patch delivery of TPTD in postmenopausal women with osteoporosis for 6 months is safe and effective in increasing lumbar spine and total hip BMD."}
{"_id": "627f29bc0d56e6c57311acd19fa58e00ef00684b", "title": "Modeling and analysis of electric field and electrostatic adhesion force generated by interdigital electrodes for wall climbing robots", "text": "A model is presented for the analysis of the electric field and electrostatic adhesion force produced by interdigital electrodes. Assuming that the potential varies linearly with distance in inter-electrode gaps, the potential distribution on the electrode plane is obtained by taking the first-order Taylor series approximation. The expressions of electric field components are then derived by solving the Laplace equation for the electrical potential in each subregion. The electrostatic adhesion force is calculated using the Maxwell stress tensor formulation. The dynamic properties of the electric field and electrostatic adhesion force are assessed by evaluating the transient response of the field and force under a step in applied voltages. To verify the model developed, an experimental study is carried out in conjunction with the theoretical analysis to evaluate the adhesion performance of an electrode panel on a glass pane. A double tracked wall climbing robot is designed and tested on various wall surfaces. The limit of the approximation method of the inter-electrode potential is discussed. It is found that vacuum suction force is involved in the adhesion. The influence of this vacuum suction force on electrostatic adhesion is also discussed. The results of this work would provide support for theoretical guidelines and system optimization for the electrostatic adhesion technology applied to wall climbing robots."}
{"_id": "2136bcf693956698ad60999b6813b231866ebb82", "title": "Investigating the usability of iPad mobile tablet in formative assessment of a mathematics course", "text": "Formative assessment allows students to judge their learning progress against a given standard or assessment criteria during the course of study. There are challenges related to the generation and delivery of feedback for formative purposes, for instance the students' engagement in their learning, the pedagogy adopted and the usability of the technology tools used in this regard. This paper presents a study of iPad mobile tablet technology-supported peer-to-peer assessment (P2PASS), which is a modern form of formative assessment. The experimental work was undertaken at Kigali Institute of Science and Technology (KIST) in Rwanda. The implementation of P2PASS in this study shows the students' engagement in learning mathematics, and allowed the investigation on the possibilities of improving their mastery of the subject. This paper further discusses the students' attitudes towards the P2PASS process as well as the technological tools used. The usability problems of the iPad applications are discussed, together with the implications of using mobile tablet technology in providing the feedback on the mathematical work."}
{"_id": "75fa004f8bce81f5595f158b854bee5c991f4ff2", "title": "Analysis, Design and Realization of a Novel Directive Ultrawideband Antenna", "text": "In this paper, we present a simple log-periodic-dipole-array (LPDA) solution that allows us to achieve good ultrawideband (UWB) performances. The antenna has been manufactured and the measurements agree well with the theoretical predictions. The antenna presents an average gain of 8 dB and a return loss better than -10 dB over the band from 4.2 to 10.6 GHz. Both the measured antenna transfer function and the computed effect on pulse transmission show good performances in comparison with already known UWB antennas."}
{"_id": "66283baa7373f7457ea37c7881754008e74bc9b2", "title": "Power Budgets for CubeSat Radios to Support Ground Communications and Inter-Satellite Links", "text": "CubeSats are a class of pico-satellites that have emerged over the past decade as a cost-effective alternative to the traditional large satellites to provide space experimentation capabilities to universities and other types of small enterprises, which otherwise would be unable to carry them out due to cost constraints. An important consideration when planning CubeSat missions is the power budget required by the radio communication subsystem, which enables a CubeSat to exchange information with ground stations and/or other CubeSats in orbit. The power that a CubeSat can dedicate to the communication subsystem is limited by the hard constraints on the total power available, which are due to its small size and light weight that limit the dimensions of the CubeSat power supply elements (batteries and solar panels). To date, no formal studies of the communications power budget for CubeSats are available in the literature, and this paper presents a detailed power budget analysis that includes communications with ground stations as well as with other CubeSats. For ground station communications, we outline how the orbital parameters of the CubeSat trajectory determine the distance of the ground station link and present power budgets for both uplink and downlink that include achievable data rates and link margins. For inter-satellite communications, we study how the slant range determines power requirements and affects the achievable data rates and link margins."}
{"_id": "2b6cb02184015c20f73845658b12a4ad832357de", "title": "DroidLink: Automated Generation of Deep Links for Android Apps", "text": "The mobile application (app) has become the main entrance to access the Internet on handheld devices. Unlike the Web where each webpage has a global URL to reach directly, a specific \u201ccontent page\u201d of an app can be opened only by exploring the app with several operations from the landing page. The interoperability between apps is quite fixed and thus limits the value-added \u201clinked data\u201d between apps. Recently, deep link has been proposed to enable targeting and opening a specific page of an app externally with an accessible uniform resource identifier (URI). However, implementing deep link for mobile apps requires a lot of manual efforts by app developers, which can be very error-prone and time-consuming. In this paper, we propose DroidLink to automatically generating deep links for existing Android apps. We design a deep link model suitable for automatic generation. Then we explore the transition of pages and build a navigation graph based on static and dynamic analysis of Android apps. Next, we realize an updating mechanism that keeps on revisiting the target app and discover new pages, and thus generates deep links for every single page of the app. Finally, we repackage the app with deep link supports, but requires no additional deployment requirements. We generate deep links for some popular apps and demonstrate the feasibility of DroidLink."}
{"_id": "587482a69fa1fdd3fcfea4353f9f3a3291e25eb3", "title": "Experiments With Some Programs That Search Game Trees", "text": "Many problems in artificial intelligence involve the searching of large trees of alternative possibilities\u2014for example, game-playing and theorem-proving. The problem of efficiently searching large trees is discussed. A new method called \u201cdynamic ordering\u201d is described, and the older minimax and Alpha-Beta procedures are described for comparison purposes. Performance figures are given for six variations of the game of kalah. A quantity called \u201cdepth ratio\u201d is derived which is a measure of the efficiency of a search procedure. A theoretical limit of efficiency is calculated and it is shown experimentally that the dynamic ordering procedure approaches that limit."}
{"_id": "6640f4e4beae786f301928d82a9f8eb037aa6935", "title": "Learning Continuous Control Policies by Stochastic Value Gradients", "text": "We present a unified framework for learning continuous control policies using backpropagation. It supports stochastic control by treating stochasticity in the Bellman equation as a deterministic function of exogenous noise. The product is a spectrum of general policy gradient algorithms that range from model-free methods with value functions to model-based methods without value functions. We use learned models but only require observations from the environment instead of observations from model-predicted trajectories, minimizing the impact of compounded model errors. We apply these algorithms first to a toy stochastic control problem and then to several physics-based control problems in simulation. One of these variants, SVG(1), shows the effectiveness of learning models, value functions, and policies simultaneously in continuous domains."}
{"_id": "903579c8503c773531305373b58a1579c41df6da", "title": "MedScan, a natural language processing engine for MEDLINE abstracts", "text": "MOTIVATION\nThe importance of extracting biomedical information from scientific publications is well recognized. A number of information extraction systems for the biomedical domain have been reported, but none of them have become widely used in practical applications. Most proposals to date make rather simplistic assumptions about the syntactic aspect of natural language. There is an urgent need for a system that has broad coverage and performs well in real-text applications.\n\n\nRESULTS\nWe present a general biomedical domain-oriented NLP engine called MedScan that efficiently processes sentences from MEDLINE abstracts and produces a set of regularized logical structures representing the meaning of each sentence. The engine utilizes a specially developed context-free grammar and lexicon. Preliminary evaluation of the system's performance, accuracy, and coverage exhibited encouraging results. Further approaches for increasing the coverage and reducing parsing ambiguity of the engine, as well as its application for information extraction are discussed."}
{"_id": "e02e268becfb5a7280aa0ec708f8ba3aa83d2fce", "title": "Integrated 105 dB SNR, 0.0031% THD+N Class-D Audio Amplifier With Global Feedback and Digital Control in 55 nm CMOS", "text": "It is traditionally challenging to implement higher-order PWM closed-loop Class-D audio amplifiers using analog intensive techniques in deep-submicron, low voltage process technologies. This is primarily attributed to reduced power supply, degraded analog transistor characteristics, including short-channel effects, increased flicker noise, random telegraph noise, transistor reliability concerns and passive component performance. In this paper, we introduce a global closed-loop mixed-signal architecture incorporating digital control and integrate a fourth-order amplifier prototype in 55 nm CMOS. A systematic approach to analyze, design and compensate the feedback loop in the digital domain is also presented. The versatility of implementing the loop gain poles and zeros digitally attains high gain throughout the audio band and attenuates residual high frequency ripples around the loop, simultaneously accomplishing improvements in THD+N and PSRR. The overall architecture is inherently amenable to implementation in deep-submicron and is therefore compatible with scaled CMOS. The measured prototype achieves a high 105 dBA SNR, 0.0031% THD+N, 92 dB PSRR and 85% efficiency when supplying 1 W into emulated 8 \u03a9 speaker load. This performance is competitive with conventional designs using large feature size precision CMOS or specialized BCD technologies and reports the highest output power (1.5 W) for deep-submicron designs."}
{"_id": "a86d7289c76d832e83c99539859b7b186e4ea6c8", "title": "H-DenseUNet: Hybrid Densely Connected UNet for Liver and Tumor Segmentation From CT Volumes", "text": "Liver cancer is one of the leading causes of cancer death. To assist doctors in hepatocellular carcinoma diagnosis and treatment planning, an accurate and automatic liver and tumor segmentation method is highly demanded in the clinical practice. Recently, fully convolutional neural networks (FCNs), including 2-D and 3-D FCNs, serve as the backbone in many volumetric image segmentation. However, 2-D convolutions cannot fully leverage the spatial information along the third dimension while 3-D convolutions suffer from high computational cost and GPU memory consumption. To address these issues, we propose a novel hybrid densely connected UNet (H-DenseUNet), which consists of a 2-D DenseUNet for efficiently extracting intra-slice features and a 3-D counterpart for hierarchically aggregating volumetric contexts under the spirit of the auto-context algorithm for liver and tumor segmentation. We formulate the learning process of the H-DenseUNet in an end-to-end manner, where the intra-slice representations and inter-slice features can be jointly optimized through a hybrid feature fusion layer. We extensively evaluated our method on the data set of the MICCAI 2017 Liver Tumor Segmentation Challenge and 3DIRCADb data set. Our method outperformed other state-of-the-arts on the segmentation results of tumors and achieved very competitive performance for liver segmentation even with a single model."}
{"_id": "41343ed62edfbca78085cc0813621732ce613e3c", "title": "Multi-layer dependability: From microarchitecture to application level", "text": "We show in this paper that multi-layer dependability is an indispensable way to cope with the increasing amount of technology-induced dependability problems that threaten to proceed further scaling. We introduce the definition of multi-layer dependability and present our design flow within this paradigm that seamlessly integrates techniques starting at circuit layer all the way up to application layer and thereby accounting for ASIC-based architectures as well as for reconfigurable-based architectures. At the end, we give evidence that the paradigm of multi-layer dependability bears a large potential for significantly increasing dependability at reasonable effort."}
{"_id": "240f9b7a868d9a486f017476b8b374a4500accda", "title": "Predicting MOOC Dropout over Weeks Using Machine Learning Methods", "text": "With high dropout rates as observed in many current larger-scale online courses, mechanisms that are able to predict student dropout become increasingly important. While this problem is partially solved for students that are active in online forums, this is not yet the case for the more general student population. In this paper, we present an approach that works on click-stream data. Among other features, the machine learning algorithm takes the weekly history of student data into account and thus is able to notice changes in student behavior over time. In the later phases of a course (i.e., once such history data is available), this approach is able to predict dropout significantly better than baseline methods."}
{"_id": "2b096cb400d9db4bf409641700c03c1d05ee3e3f", "title": "Predicting good probabilities with supervised learning", "text": "We examine the relationship between the predictions made by different learning algorithms and true posterior probabilities. We show that maximum margin methods such as boosted trees and boosted stumps push probability mass away from 0 and 1 yielding a characteristic sigmoid shaped distortion in the predicted probabilities. Models such as Naive Bayes, which make unrealistic independence assumptions, push probabilities toward 0 and 1. Other models such as neural nets and bagged trees do not have these biases and predict well calibrated probabilities. We experiment with two ways of correcting the biased probabilities predicted by some learning methods: Platt Scaling and Isotonic Regression. We qualitatively examine what kinds of distortions these calibration methods are suitable for and quantitatively examine how much data they need to be effective. The empirical results show that after calibration boosted trees, random forests, and SVMs predict the best probabilities."}
{"_id": "43820cf2001230719965ca4bcb78926dab393a6f", "title": "Predicting Student Retention in Massive Open Online Courses using Hidden Markov Models", "text": "Massive Open Online Courses (MOOCs) have a high attrition rate: most students who register for a course do not complete it. By examining a student's history of actions during a course, we can predict whether or not they will drop out in the next week, facilitating interventions to improve retention. We compare predictions resulting from several modeling techniques and several features based on different student behaviors. Our best predictor uses a Hidden Markov Model (HMM) to model sequences of student actions over time, and encodes several continuous features into a single discrete observable state using a simple cross\u00adproduct method. It yielded an ROC AUC (Receiver Operating Characteristic Area Under the Curve score) of 0.710, considerably better than a random predictor. We also use simpler HMM models to derive information about which student behaviors are most salient in determining student retention."}
{"_id": "5ee0e0656af301131a6a4a945113d91cf60575ae", "title": "Engaging with massive online courses", "text": "The Web has enabled one of the most visible recent developments in education---the deployment of massive open online courses. With their global reach and often staggering enrollments, MOOCs have the potential to become a major new mechanism for learning. Despite this early promise, however, MOOCs are still relatively unexplored and poorly understood.\n In a MOOC, each student's complete interaction with the course materials takes place on the Web, thus providing a record of learner activity of unprecedented scale and resolution. In this work, we use such trace data to develop a conceptual framework for understanding how users currently engage with MOOCs. We develop a taxonomy of individual behavior, examine the different behavioral patterns of high- and low-achieving students, and investigate how forum participation relates to other parts of the course.\n We also report on a large-scale deployment of badges as incentives for engagement in a MOOC, including randomized experiments in which the presentation of badges was varied across sub-populations. We find that making badges more salient produced increases in forum engagement."}
{"_id": "04e10f745a7267453788a22f5150b5a32b2b3951", "title": "Transforming classifier scores into accurate multiclass probability estimates", "text": "Class membership probability estimates are important for many applications of data mining in which classification outputs are combined with other sources of information for decision-making, such as example-dependent misclassification costs, the outputs of other classifiers, or domain knowledge. Previous calibration methods apply only to two-class problems. Here, we show how to obtain accurate probability estimates for multiclass problems by combining calibrated binary probability estimates. We also propose a new method for obtaining calibrated two-class probability estimates that can be applied to any classifier that produces a ranking of examples. Using naive Bayes and support vector machine classifiers, we give experimental results from a variety of two-class and multiclass domains, including direct marketing, text categorization and digit recognition."}
{"_id": "feb26eb3b0a5b5aed4b7d3c2247189afe8cd88ea", "title": "Sparse Malicious False Data Injection Attacks and Defense Mechanisms in Smart Grids", "text": "This paper discusses malicious false data injection attacks on the wide area measurement and monitoring system in smart grids. First, methods of constructing sparse stealth attacks are developed for two typical scenarios: 1) random attacks in which arbitrary measurements can be compromised; and 2) targeted attacks in which specified state variables are modified. It is already demonstrated that stealth attacks can always exist if the number of compromised measurements exceeds a certain value. In this paper, it is found that random undetectable attacks can be accomplished by modifying only a much smaller number of measurements than this value. It is well known that protecting the system from malicious attacks can be achieved by making a certain subset of measurements immune to attacks. An efficient greedy search algorithm is then proposed to quickly find this subset of measurements to be protected to defend against stealth attacks. It is shown that this greedy algorithm has almost the same performance as the brute-force method, but without the combinatorial complexity. Third, a robust attack detection method is discussed. The detection method is designed based on the robust principal component analysis problem by introducing element-wise constraints. This method is shown to be able to identify the real measurements, as well as attacks even when only partial observations are collected. The simulations are conducted based on IEEE test systems."}
{"_id": "9ca56f22e058091951f1d920fca7a304d68fe508", "title": "A 2.45 GHz harmonic suppression array antenna for rectenna application", "text": "In this paper, a 2.45 GHz harmonic suppression capability of array antenna has been presented for wireless power transfer applications. Harmonic suppression up to a fourth order of the fundamental operating frequency has been achieved by using rectangular slot Defected Ground Structure (DGS). The designed antenna uses an FR4 material having thickness 1.6 mm as a substrate which has loss tangent tan\u03b4) 0.02. The gain and radiation efficiency are 4.5dB, 75% respectively. This antenna is suitable for rectenna applications."}
{"_id": "089ed578321f84b3257ac11cb76f1db438e2c11d", "title": "A Study of Skew in MapReduce Applications", "text": "This paper presents a study of skew \u2014 highly variable task runtimes \u2014 in MapReduce applications. We describe various causes and manifestations of skew as observed in real world Hadoop applications. Runtime task distributions from these applications demonstrate the presence and negative impact of skew on performance behavior. We discuss best practices recommended for avoiding such behavior and their limitations."}
{"_id": "1bc6ea86a8ed0a80406404693c675f11f6b8e454", "title": "Towards a Trust Management System for Cloud Computing", "text": "Cloud computing provides cost-efficient opportunities for enterprises by offering a variety of dynamic, scalable, and shared services. Usually, cloud providers provide assurances by specifying technical and functional descriptions in Service Level Agreements (SLAs) for the services they offer. The descriptions in SLAs are not consistent among the cloud providers even though they offer services with similar functionality. Therefore, customers are not sure whether they can identify a trustworthy cloud provider only based on its SLA. To support the customers in reliably identifying trustworthy cloud providers, we propose a multi-faceted Trust Management (TM) system architecture for a cloud computing marketplace. This system provides means to identify the trustworthy cloud providers in terms of different attributes (e.g., security, performance, compliance) assessed by multiple sources and roots of trust information."}
{"_id": "4e2af96503b687ebdb488df8a294fe3cdeafd653", "title": "Utility of MRI after inconclusive ultrasound in pediatric patients with suspected appendicitis: retrospective review of 60 consecutive patients.", "text": "OBJECTIVE\nThe purpose of this study is to examine the utility of appendix MRI in evaluation of pediatric patients with right lower quadrant pain and inconclusive appendix sonography findings.\n\n\nMATERIALS AND METHODS\nA search of the radiology electronic database was performed for all appendix MRI examinations performed of pediatric patients within 24 hours after inconclusive appendix sonography from December 1, 2009, through April 26, 2012. Sixty patients underwent appendix MRI within 24 hours of inconclusive sonography and represented the study cohort. MRI examinations were reviewed independently by two radiologists blinded to the diagnosis and were graded as \"positive,\" \"negative,\" or \"indeterminate\" for acute appendicitis. The final diagnosis was established by review of the surgical and pathology reports and patients' electronic medical records.\n\n\nRESULTS\nTen of 60 patients (17%) had acute appendicitis. Both readers graded the same 12 examinations as positive and the same 48 examinations as negative for acute appendicitis, with a kappa value of 1.00 (expected agreement, 0.695). No MRI examination was interpreted as indeterminate. The sensitivity and specificity of MRI for acute appendicitis in children with inconclusive appendix ultrasound findings were 100% (95% CI, 0.72-1.00) and 96% (95% CI, 0.87-0.98), respectively. The positive predictive value for the examination was 83%, the negative predictive value was 100%, and overall test accuracy was 97%.\n\n\nCONCLUSION\nOur study shows that MRI has a sensitivity of 100% and specificity of 96% for appendicitis in pediatric patients after inconclusive appendix sonography. We think that MRI may supplant CT as the secondary modality to follow inconclusive appendix sonography."}
{"_id": "faae4ff4478606fbfefe8a373ea2b71cadfa4a22", "title": "Knowledge-Based Integrated Aircraft Design An Applied Approach from Design to Concept Demonstration", "text": "The design and development of new aircraft are becoming increasingly expensive and time-consuming. To assist the design process in reducing the development cost, time, and late design changes, the conceptual design needs enhancement using new tools and methods. Integration of several disciplines in the conceptual design as one entity enables the design process to be kept intact at every step and a high understanding of the aircraft concepts obtained at early stages. This thesis presents a Knowledge-Based Engineering (KBE) approach and integration of several disciplines in a holistic approach for use in aircraft conceptual design. KBE allows the reuse of obtained aircrafts\u2019 data, information, and knowledge to gain more awareness and a better understanding of the concept under consideration at early stages of design. For this purpose, Knowledge-Based (KB) methodologies are investigated for enhanced geometrical representation and to enable variable fidelity tools and Multidisciplinary Design Optimization (MDO). The geometry parameterization techniques are qualitative approaches that produce quantitative results in terms of both robustness and flexibility of the design parameterization. The information/parameters from all tools/disciplines and the design intent of the generated concepts are saved and shared via a central database. The integrated framework facilitates multi-fidelity analysis, combining low-fidelity models with high-fidelity models for a quick estimation, enabling a rapid analysis and enhancing the time for a MDO process. The geometry is further propagated to other disciplines [Computational Fluid Dynamics (CFD), Finite Element Analysis (FEA)] for analysis. This is possible with an automated streamlined process (for CFD, FEM, system simulation) to analyze and increase knowledge early in the design process. Several processes were studied to streamline the geometry for"}
{"_id": "c49044d31d82070a23d0ae223b7e95d12bc155ec", "title": "Impact Selective Incivility as Modern Discrimination in Organizations : Evidence and", "text": ""}
{"_id": "e0e55ad099902dc22f88bcb7b122f1a1d42bbe57", "title": "RP-DBSCAN: A Superfast Parallel DBSCAN Algorithm Based on Random Partitioning", "text": "In most parallel DBSCAN algorithms, neighboring points are assigned to the same data partition for parallel processing to facilitate calculation of the density of the neighbors. This data partitioning scheme causes a few critical problems including load imbalance between data partitions, especially in a skewed data set. To remedy these problems, we propose a cell-based data partitioning scheme, pseudo random partitioning , that randomly distributes small cells rather than the points themselves. It achieves high load balance regardless of data skewness while retaining the data contiguity required for DBSCAN. In addition, we build and broadcast a highly compact summary of the entire data set, which we call a two-level cell dictionary , to supplement random partitions. Then, we develop a novel parallel DBSCAN algorithm, Random Partitioning-DBSCAN (shortly, RP-DBSCAN), that uses pseudo random partitioning together with a two-level cell dictionary. The algorithm simultaneously finds the local clusters to each data partition and then merges these local clusters to obtain global clustering. To validate the merit of our approach, we implement RP-DBSCAN on Spark and conduct extensive experiments using various real-world data sets on 12 Microsoft Azure machines (48 cores). In RP-DBSCAN, data partitioning and cluster merging are very light, and clustering on each split is not dragged out by a specific worker. Therefore, the performance results show that RP-DBSCAN significantly outperforms the state-of-the-art algorithms by up to 180 times."}
{"_id": "66e4c01fecb027e50b6720294251f3de54a587bf", "title": "A Distributed Hybrid Recommendation Framework to Address the New-User Cold-Start Problem", "text": "With the development of electronic commerce, recommender system is becoming an important research topic. Existing methods of recommender systems such as collaborative filtering and content analysis are suitable for different cases. Not every recommender system is capable of handling all kinds of situations. In this paper, we bring forward a distributed hybrid recommendation framework to address the new-user cold-start problem based on user classification. First, current user characteristics, user context and operating records are used to classify the user type. Then, suitable recommendation algorithms are dynamically selected based on the current user type, and executed in parallel. Finally, the recommendation results are merged into a consolidated list. In the experiment on movie ratings dataset Movie Lens, the proposed framework can enhance the accuracy of recommendation system. Also, a 100% coverage can be achieved in the case of new users. This shows the potential of the proposed framework in addressing new-user cold-start problem."}
{"_id": "2cbba0192ce085868c066b46f3164394d5717d35", "title": "Ring-Light Photometric Stereo", "text": "We propose a novel algorithm for uncalibrated photometric stereo. While most of previous methods rely on various assumptions on scene properties, we exploit constraints in lighting configurations. We first derive an ambiguous reconstruction by requiring lights to lie on a view centered cone. This reconstruction is upgraded to Euclidean by constraints derived from lights of equal intensity and multiple view geometry. Compared to previous methods, our algorithm deals with more general data and achieves high accuracy. Another advantage of our method is that we can model weak perspective effects of lighting, while previous methods often assume orthographical illumination. We use both synthetic and real data to evaluate our algorithm. We further build a hardware prototype to demonstrate our approach."}
{"_id": "c62a100a6df34078c6ab706511966e0a6a57de74", "title": "Opinion and Perspectivesy Why We Need Quantum Physics for Cognitive Neuroscience", "text": "For the past 20 years and more, arguments about the role of quantum mechanics in consciousness and mind theory have been mounting. On the one side are traditional neuroscientists who believe that the way to understanding the brain is through looking at the nerve cells. On the other side are various physicists who suggest that the laws of quantum mechanics may have an influence on the dynamics of consciousness and the mind. At the same time however, consciousness and the mind cannot be separated from matter. They originate in the microscopic world of the human brain. There can be no definite separation between mind and matter; there is no 'mind' without 'matter', and no 'matter' without 'mind'. In terms of cognitive neuroscience, we know a great deal about the working of nerve cells. For example, we understand quite well about the formation of action potential, ion exchange, energy use, axonal transport, the vesicle cycle, and formation, oscillation and breakdown in nerve transmission. However, we still do not understand how experience is formed in our material brain (color, sound, smell, taste, pain, imagination, decision, dreams, love, or orgasm) and how consciousness arises in an unconscious material organ. The insufficiency of these answers no doubt arises from the insufficiency of the methods used by cognitive science."}
{"_id": "46a1931b4f9a885fffda35bbeacd26622b256534", "title": "The AMARA Corpus: Building Parallel Language Resources for the Educational Domain", "text": "This paper presents the AMARA corpus of on-line educational content: a new parallel corpus of educational video subtitles, multilingually aligned for 20 languages, i.e. 20 monolingual corpora and 190 parallel corpora. This corpus includes both resource-rich languages such as English and Arabic, and resource-poor languages such as Hindi and Thai. In this paper, we describe the gathering, validation, and preprocessing of a large collection of parallel, community-generated subtitles. Furthermore, we describe the methodology used to prepare the data for Machine Translation tasks. Additionally, we provide a document-level, jointly aligned development and test sets for 14 language pairs, designed for tuning and testing Machine Translation systems. We provide baseline results for these tasks, and highlight some of the challenges we face when building machine translation systems for educational content."}
{"_id": "b27fbcc6d3ebcf8399c3adb9041e045eacbf0b40", "title": "Modeling and Control of Marine Diesel Generator System With Active Protection", "text": "On-board dc grid systems have been proposed in the marine industry to reduce specific fuel consumption and to improve dynamic performance and manoeuvrability. The primary energy sources of an on-board dc system are variable frequency operating diesel engine generators with an active front-end converter system. Such systems might suffer disturbances from the engine side as well as from the electrical load side. These dynamic conditions result in torque overloads in the mechanical system, resulting in failures to sensitive driveline components. To understand the dynamics associated with disturbances and to propose a robust and reliable system, a detailed modeling approach is required. This paper presents complete modeling of a diesel generator system, including cylinder pressure, mechanical driveline system, electrical generator characteristics, and control system of an active front-end converter. The detailed model is used to develop a novel power electronic-based controller to reduce elevated load levels induced during disturbances. The controller increases the damping ratio of the first natural frequency by using the speed difference between a generator and a flywheel. The generator speed is estimated while fly wheel inertia speed is measured from an existing speed sensor used for monitoring and electronic control, and their difference is used to improve the reliability under disturbance conditions. Simulation with the detailed model has indicated excessive drivetrain loads and potential damage to critical components. The proposed controller enhances the damping of the system by transferring the torsional energy to the electrical system, reducing the shaft torque amplitude during resonance. This paper presents simulation results demonstrating the fidelity of the diesel generator model as well as the effectiveness of the proposed controller."}
{"_id": "c78d7b51cf477a1157fe684070f59a2ef1f67d38", "title": "Digital Entrepreneurship of Born Digital and Grown Digital Firms: Comparing the Effectuation Process of Yihaodian and Suning", "text": "This paper examines digital entrepreneurship in born digital and grown digital firms. Of late, digital entrepreneurship has garnered renewed attention from national leaders, practitioners, and academics due to the phenomenal success of a number of digital startups and the radical disruptions brought by the innovative use of technologies. Despite the significance and the promises of digital technology, especially amid the gloomy economy, digital entrepreneurship has remained inadequately analyzed for its predominant focus on entrepreneurs (human agency) and newly founded firms (context). This study offers an in-depth analysis of two entrepreneurial firms, Yihaodian and Suning, which are among the largest e-commerce players in China. By adopting effectuation as our theoretical lens, this study aims to generate an understanding of the digital entrepreneurial process in both born digital and grown digital firms."}
{"_id": "e8b6540fd912a8c366eded7829d302e3e6eab24f", "title": "Automatic Text Categorization of Mathematical Word Problems", "text": "This paper describes a novel application of text categorization for mathematical word problems, namely Multiplicative Compare and Equal Group problems. The empirical results and analysis show that common text processing techniques such as stopword removal and stemming should be selectively used. It is highly beneficial not to remove stopwords and not to do stemming. Part of speech tagging should also be used to distinguish words in discriminative parts of speech from the non-discriminative parts of speech which not only fail to help but even mislead the categorization decision for mathematical word problems. An SVM classifier with these selectively used text processing techniques outperforms an SVM classifier with a default setting of text processing techniques (i.e. stopword removal and stemming). Furthermore, a probabilistic meta classifier is proposed to combine the weighted results of two SVM classifiers with different word problem representations generated by different text preprocessing techniques. The empirical results show that the probabilistic meta classifier further improves the categorization accuracy."}
{"_id": "a449623136ff47e87ec67a1a6f2980ef108c328d", "title": "Facebook and texting made me do it: Media-induced task-switching while studying", "text": "Electronic communication is emotionally gratifying, but how do such technological distractions impact academic learning? The current study observed 263 middle school, high school and university students studying for 15 min in their homes. Observers noted technologies present and computer windows open in the learning environment prior to studying plus a minute-by-minute assessment of on-task behavior, off-task technology use and open computer windows during studying. A questionnaire assessed study strategies, task-switching preference, technology attitudes, media usage, monthly texting and phone calling, social networking use and grade point average (GPA). Participants averaged less than six minutes on task prior to switching most often due to technological distractions including social media, texting and preference for task-switching. Having a positive attitude toward technology did not affect being on-task during studying. However, those who preferred to task-switch had more distracting technologies available and were more likely to be off-task than others. Also, those who accessed Facebook had lower GPAs than those who avoided it. Finally, students with relatively high use of study strategies were more likely to stay on-task than other students. The educational implications include allowing students short \u2018\u2018technology breaks\u2019\u2019 to reduce distractions and teaching students metacognitive strategies regarding when interruptions negatively impact learning. 2012 Elsevier Ltd. All rights reserved."}
{"_id": "4d389c599d683912f96ad914db9a3f309830f688", "title": "Design and implementation of microcontroller based PWM technique for sine wave inverter", "text": "A microcontroller based advanced technique of generating sine wave with minimized harmonics is implemented in this paper. The proposed technique \u201cimpulse-sine product\u201d aims to design and implement a voltage regulated inverter with ripple free and glitch free output sine wave that can operate electronic devices efficiently. The output voltage of the inverter is regulated by a feedback loop using analog to digital protocol of PIC16f877 microcontroller. The design is essentially focused upon low power electronic appliances such as personal computers, chargers, television sets. The inverter output is regulated from 16-volt rms to 220-volt rms for a variation of load between 1-\u2126 to 180-\u2126 loads. The designed inverter provides a range of operation from 25 Watts to 250 Watts. The design is mathematically modeled which is simulated in Matlab, Proteus and finally the results are practically verified."}
{"_id": "7a4ae493a998a44a5b98a4da7f3876f7e61a56ea", "title": "CYUT Short Text Conversation System for NTCIR-12 STC", "text": "In this paper, we report how we build the system for Chinese subtask in NTCIR12 Short Text Conversation (STC) shared task. Our approach is to find the most related sentences for a given input sentence. The system is implemented based on the Lucene search engine. The result shows that our system can deal with the conversation that involves related sentences."}
{"_id": "65a3639a326e09bb0715fce0fe3c681d9f52d272", "title": "On the Compactness, Efficiency, and Representation of 3D Convolutional Networks: Brain Parcellation as a Pretext Task", "text": "Deep convolutional neural networks are powerful tools for learning visual representations from images. However, designing e cient deep architectures to analyse volumetric medical images remains challenging. This work investigates e cient and flexible elements of modern convolutional networks such as dilated convolution and residual connection. With these essential building blocks, we propose a high-resolution, compact convolutional network for volumetric image segmentation. To illustrate its e ciency of learning 3D representation from large-scale image data, the proposed network is validated with the challenging task of parcellating 155 neuroanatomical structures from brain MR images. Our experiments show that the proposed network architecture compares favourably with state-of-the-art volumetric segmentation networks while being an order of magnitude more compact. We consider the brain parcellation task as a pretext task for volumetric image segmentation; our trained network potentially provides a good starting point for transfer learning. Additionally, we show the feasibility of voxel-level uncertainty estimation using a sampling approximation through dropout."}
{"_id": "bffccc27dccec5bf7e45f0bb7c3f98f9da7da01a", "title": "Investigating healthcare professionals' decisions to accept telemedicine technology: an empirical test of competing theories", "text": "The proliferation of information technology (IT) in supporting highly specialized tasks and services has made it increasingly important to understand the factors essential to technology acceptance by individuals. In a typical professional setting, the essential characteristics of user, technology, and context may differ considerably from those in ordinary business settings. This study examined physicians\u2019 acceptance of telemedicine technology. Following a theory comparison approach, it evaluated the extent to which prevailing intention-based models, including the technology acceptance model (TAM), the theory of planned behavior (TPB) and an integrated model, could explain individual physicians\u2019 technology acceptance decisions. Based on responses from more than 400 physicians, both models were evaluated in terms of overall fit, explanatory power, and their causal links. Overall, findings suggest that TAM may be more appropriate than TPB for examining technology acceptance by individual professionals and that the integrated model, although more fully depicting physicians\u2019 technology acceptance, may not provide significant additional explanatory power. Also, instruments developed and repeatedly tested in prior studies involving conventional end-users and business managers may not be valid in professional settings. Several interesting implications are also discussed. # 2002 Elsevier Science B.V. All rights reserved."}
{"_id": "eeb8fe6942820bf85c014cefc5fe065658561b96", "title": "A Review of Information Quality Research - Develop a Research Agenda", "text": "Recognizing the substantial development of information quality research, this review article analyzes three major aspects of information quality research: information quality assessment, information quality management and contextual information quality. Information quality assessment is analyzed by three components: information quality problem, dimension and assessment methodology. Information quality management is analyzed from three perspectives: quality management, information management and knowledge management. Following an overview of contextual information quality, this article analyzes information quality research in the context of information system and decision making. The analyzing results reveal the potential research streams and current research limitations of information quality. Aiming at bridging the research gaps, we conclude by providing the research issues for future information quality research and implications for empirical applications."}
{"_id": "00cf965e32f89e475e076aab5db97c8b3b36fa63", "title": "A tutorial on support vector regression", "text": "In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention somemodifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from a SV perspective."}
{"_id": "385622a5862c989653a648ac8abc59ae3fe785f7", "title": "An introduction to hidden Markov models", "text": "The basic theory of Markov chains has been known to mathematicians and engineers for close to 80 years, but it is only in the past decade that it has been applied explicitly to problems in speech processing. One of the major reasons why speech models, based on Markov chains, have not been developed until recently was the lack of a method for optimizing the parameters of the Markov model to match observed signal patterns. Such a method was proposed in the late 1960's and was immediately applied to speech processing in several research institutions. Continued refinements in the theory and implementation of Markov modelling techniques have greatly enhanced the method, leading to a wide range of applications of these models. It is the purpose of this tutorial paper to give an introduction to the theory of Markov models, and to illustrate how they have been applied to problems in speech recognition."}
{"_id": "c48d3d566dfcb63da6ad9aab48783c5e80adb771", "title": "A sample size calculator for SMART pilot studies", "text": "In clinical practice, as well as in other areas where interventions are provided, a sequential individualized approach to treatment is often necessary, whereby each treatment is adapted based on the object \u2032s response. An adaptive intervention is a sequence of decision rules which formalizes the provision of treatment at critical decision points in the care of an individual. In order to inform the development of an adaptive intervention, scientists are increasingly interested in the use of sequential multiple assignment randomized trials (SMART), which is a type of multistage randomized trial where individuals are randomized repeatedly at critical decision points to a set treatment options. While there is great interest in the use of SMART and in the development of adaptive interventions, both are relatively new to themedical and behavioral sciences. As a result, many clinical researchers will first implement a SMART pilot study (i.e., a small-scale version of a SMART) to examine feasibility and acceptability considerations prior to conducting a full-scale SMART study. A primary aim of this paper is to introduce a new methodology to calculate minimal sample size necessary for conducting a SMART pilot."}
{"_id": "cf17c2eadd207cdc1e06b6fb0fa0881450e686a2", "title": "A Quantitative and Qualitative Assessment of Automatic Text Summarization Systems", "text": "Text summarization is the process of automatically creating a shorter version of one or more text documents. This paper presents a qualitative and quantitative assessment of the 22 state-of-the-art extractive summarization systems using the CNN corpus, a dataset of 3,000 news articles."}
{"_id": "32ca9f202b17a485625909213c23be8b416f9a6a", "title": "A foundation for representing and querying moving objects", "text": "Spatio-temporal databases deal with geometries changing over time. The goal of our work is to provide a DBMS data model and query language capable of handling such time-dependent geometries, including those changing continuously that describe moving objects. Two fundamental abstractions are moving point and moving region, describing objects for which only the time-dependent position, or position and extent, respectively, are of interest. We propose to present such time-dependent geometries as attribute data types with suitable operations, that is, to provide an abstract data type extension to a DBMS data model and query language. This paper presents a design of such a system of abstract data types. It turns out that besides the main types of interest, moving point and moving region, a relatively large number of auxiliary data types are needed. For example, one needs a line type to represent the projection of a moving point into the plane, or a \u201cmoving real\u201d to represent the time-dependent distance of two points. It then becomes crucial to achieve (i) orthogonality in the design of the system, i.e., type constructors can be applied unifomly; (ii) genericity and consistency of operations, i.e., operations range over as many types as possible and behave consistently; and (iii) closure and consistency between structure and operations of nontemporal and related temporal types. Satisfying these goal leads to a simple and expressive system of abstract data types that may be integrated into a query language to yield a powerful language for querying spatio-temporal data, including moving objects. The paper formally defines the types and operations, offers detailed insight into the considerations that went into the design, and exemplifies the use of the abstract data types using SQL. The paper offers a precise and conceptually clean foundation for implementing a spatio-temporal DBMS extension."}
{"_id": "9b1918aea35650e8fc3b295417b755d4c5ed748c", "title": "Free Form based active contours for image segmentation and free space perception", "text": "In this paper we present a novel approach for representing and evolving deformable active contours. The method combines piecewise regular B\u00e9zier models and curve evolution defined by local Free Form Deformation. The contour deformation is locally constrained which allows contour convergence with almost linear complexity while adapting to various shape settings and handling topology changes of the active contour. We demonstrate the effectiveness of the new active contour scheme for visual free space perception and segmentation using omnidirectional images acquired by a robot exploring unknown indoor and outdoor environments. Several experiments validate the approach with comparison to state-of-the art parametric and geometric active contours and provide fast and real-time robot free space segmentation and navigation."}
{"_id": "509bdbacd064debc94a67f5b2e79dab070cad81c", "title": "Neon: A (Big) (Fast) Single-Chip 3D Workstation Graphics Accelerator", "text": "High-performance 3D graphics accelerators traditionally require multiple chips on multiple boards. Specialized chips perform geometry transformations and lighting computations, rasterizing, pixel processing, and texture mapping. Multiple chip designs are often scalable: they can increase performance by using more chips. Scalability has obvious costs: a minimal configuration needs several chips, and some configurations must replicate texture maps. A less obvious cost is the almost irresistible temptation to replicate chips to increase performance, rather than to design individual chips for higher performance in the first place. In contrast, Neon is a single chip that performs like a multichip design. Neon accelerates OpenGL 3D rendering, as well as X11 and Windows/NT 2D rendering. Since our pin budget limited peak memory bandwidth, we designed Neon from the memory system upward in order to reduce bandwidth requirements. Neon has no special-purpose memories; its eight independent 32-bit memory controllers can access color buffers, Z depth buffers, stencil buffers, and texture data. To fit our gate budget, we shared logic among different operations with similar implementation requirements, and left floating point calculations to Digital's Alpha CPUs. Neon\u2019s performance is between HP\u2019s Visualize fx and fx, and is well above SGI\u2019s MXE for most operations. Neon-based boards cost much less than these competitors, due to a small part count and use of commodity SDRAMs."}
{"_id": "4509771bb71500d411ced0d1cb53722fb73c9716", "title": "Boosting for transfer learning", "text": "Traditional machine learning makes a basic assumption: the training and test data should be under the same distribution. However, in many cases, this identical-distribution assumption does not hold. The assumption might be violated when a task from one new domain comes, while there are only labeled data from a similar old domain. Labeling the new data can be costly and it would also be a waste to throw away all the old data. In this paper, we present a novel transfer learning framework called TrAdaBoost, which extends boosting-based learning algorithms (Freund & Schapire, 1997). TrAdaBoost allows users to utilize a small amount of newly labeled data to leverage the old data to construct a high-quality classification model for the new data. We show that this method can allow us to learn an accurate model using only a tiny amount of new data and a large amount of old data, even when the new data are not sufficient to train a model alone. We show that TrAdaBoost allows knowledge to be effectively transferred from the old data to the new. The effectiveness of our algorithm is analyzed theoretically and empirically to show that our iterative algorithm can converge well to an accurate model."}
{"_id": "01cdbb5be2c6485f5b5dbc29417498e33e18f5e4", "title": "Publicity Trends in Arrests of \" Online Predators \" How the National Juvenile Online Victimization (n\u2010jov) Study Was Conducted", "text": "about \" online predators \" * \u2013 sex of\u2010 fenders who use the Internet to meet juvenile victims \u2013 has raised considerable alarm about the extent to which Internet use may be put\u2010 ting children and adolescents at risk for sexual abuse and exploitation. Media stories and Internet safety messages have raised fears by describing violent offenders who use the Inter\u2010 net to prey on na\u00efve children by tricking them into face\u2010to\u2010face meetings or tracking them down through information posted online. Law enforcement has mobilized on a number of fronts, setting up task forces to identify and prosecute online predators, developing under\u2010 cover operations, and urging social networking sites to protect young users. Unfortunately, however, reliable information on the scope and nature of the online predator problem remains scarce. Established criminal justice data collection systems do not gather detailed data on such crimes that could help inform public policy and education. To remedy this information vacuum, the Crimes against Children Research Center at the University of New Hampshire conducted two waves of a The N\u2010JOV Study collected information from a national sample of law en\u2010 forcement agencies about the prevalence of arrests for and characteristics of online sex crimes against minors during two 12 month periods: July 1, 2000 through June 30, 2001 (Wave 1) and calendar year 2006 (Wave 2). For both Waves, we used a two\u2010phase process of mail surveys followed by telephone interviews to collect data from a national sample of the same lo\u2010 cal, county, state, and federal law enforcement agencies. First, we sent the mail surveys to a national sample of more than 2,500 agencies. These sur\u2010 veys asked if agencies had made arrests for online sex crimes against minors during the respective one\u2010year timeframes. Then we conducted detailed telephone interviews with law enforcement investigators about a random sample of arrest cases reported in the mail surveys. For the telephone interviews, we designed a sampling procedure that took into account the number of arrests reported by an agency, so that we would not unduly burden respondents in agencies with many cases. If an agency reported between one and three arrests for online sex crimes, we conducted follow\u2010up interviews for every case. For agencies that reported more than three arrests, we conducted interviews for all cases that involved youth vic\u2010 tims (victims who were located and contacted during the investigation), and sampled other arrest cases (i.e., \u2026"}
{"_id": "386596604cce7bf86c542edbdff32781a6854889", "title": "AutoBrief: an experimental system for the automatic generation of briefings in integrated text and information graphics", "text": "This paper describes AutoBrief, an experimental intelligent multimedia presentation system that generates presentations in text and information graphics in the domain of transportation scheduling. Acting as an intelligent assistant, AutoBrief creates a presentation to communicate its analysis of alternative schedules. In addition, the multimedia presentation facilitates data exploration through its complex information visualizations and support for direct manipulation of presentation elements. AutoBrief\u2019s research contributions include (1) a design enabling a new human\u2013computer interaction style in which intelligent multimedia presentation objects (textual or graphic) can be used by the audience in direct manipulation operations for data exploration, (2) an application-independent approach to multimedia generation based on the representation of communicative goals suitable for both generation of text and of complex information graphics, and (3) an application-independent approach to intelligent graphic design based upon communicative goals. This retrospective overview paper, aimed at a multidisciplinary audience from the fields of human\u2013computer ARTICLE IN PRESS *Corresponding author. Tel.: +1-336-2561133; fax: +1-336-3345949. E-mail addresses: nlgreen@uncg.edu (N.L. Green), carenini@cs.ubc.ca (G. Carenini), kerpedjiev@maya.com (S. Kerpedjiev), mattis@mayaviz.com (J. Mattis), j.moore@ed.ac.uk (J.D. Moore), roth+@cs.cmu.edu (S.F. Roth). 1071-5819/$ see front matter r 2003 Elsevier Ltd. All rights reserved. doi:10.1016/j.ijhcs.2003.10.007 interaction and natural language generation, presents AutoBrief\u2019s design and design rationale. r 2003 Elsevier Ltd. All rights reserved."}
{"_id": "2065d635b97b3bc4980432426822b543a909ce4c", "title": "Failure Analysis of SGI XFS File System", "text": "Commodity file systems expect a fail stop disk. But todays disks fail in unexpected ways. Disks exhibit latent sector errors, silent corruption and transient failures to name a few. In this paper we study the behavior of SGI XFS to such errors. File systems play a key role in handling most data and hence the failure handling policy of the file system plays a major role in ensuring the integrity of the data. XFS is a highly scalable journaling file system developed by SGI. We analyze the failure handling policy of XFS file system. We fail reads and writes of various blocks that originate from the file system and analyze the file system behavior. Some such blocks include super block, journal header block, commit block, data block, index blocks. We classify our errors according to the extended IRON Taxonomy. We see that XFS is vulnerable to these failures and does not possess a uniform failure handling policy. Further the file system at times silently corrupts data. We also see that XFS has very little internal redundancy even for critical structures such as the super block and B+Tree root. Finally we see that XFS, like most commodity file systems, fails to recover from these failures putting the data at risk."}
{"_id": "76c366859763ace4bbd5d0fbee8f9f094c8dc3d5", "title": "The Wnt signaling pathway in development and disease.", "text": "Tight control of cell-cell communication is essential for the generation of a normally patterned embryo. A critical mediator of key cell-cell signaling events during embryogenesis is the highly conserved Wnt family of secreted proteins. Recent biochemical and genetic analyses have greatly enriched our understanding of how Wnts signal, and the list of canonical Wnt signaling components has exploded. The data reveal that multiple extracellular, cytoplasmic, and nuclear regulators intricately modulate Wnt signaling levels. In addition, receptor-ligand specificity and feedback loops help to determine Wnt signaling outputs. Wnts are required for adult tissue maintenance, and perturbations in Wnt signaling promote both human degenerative diseases and cancer. The next few years are likely to see novel therapeutic reagents aimed at controlling Wnt signaling in order to alleviate these conditions."}
{"_id": "19003129b41298dca0843f7c9c7fd6ec4bae0556", "title": "Magnetic Optimization Algorithms a new synthesis", "text": "A novel optimization algorithm is proposed here that is inspired by the principles of magnetic field theory. In the proposed Magnetic Optimization Algorithm (MOA) the possible solutions are magnetic particles scattered in the search space. Each magnetic particle has a measure of mass and magnetic field according to its fitness. The fitter magnetic particles are those with higher magnetic field and higher mass. These particles are located in a lattice-like environment and apply a force of attraction to their neighbors. The proposed cellular structure allows a better exploitation of local neighborhoods before they move towards the global best, hence it increases population diversity. Experimental results on 14 numerical benchmark functions show that MOA in some benchmark functions can work better than GA and PSO."}
{"_id": "9f832bdcbc9d9566f7ab07b7455364bee62086fb", "title": "Pseudogen: A Tool to Automatically Generate Pseudo-Code from Source Code", "text": "Understanding the behavior of source code written in an unfamiliar programming language is difficult. One way to aid understanding of difficult code is to add corresponding pseudo-code, which describes in detail the workings of the code in a natural language such as English. In spite of its usefulness, most source code does not have corresponding pseudo-code because it is tedious to create. This paper demonstrates a tool Pseudogen that makes it possible to automatically generate pseudo-code from source code using statistical machine translation (SMT). Pseudogen currently supports generation of English or Japanese pseudo-code from Python source code, and the SMT framework makes it easy for users to create new generators for their preferred source code/pseudo-code pairs."}
{"_id": "a8cb524cb9b052ce5ca2220d4197aa6e86f86036", "title": "Big Data Analytics for Air Quality Monitoring at a Logistics Shipping Base via Autonomous Wireless Sensor Network Technologies", "text": "The indoor air quality in industrial workplace buildings, e.g. air temperature, humidity and levels of carbon dioxide (CO2), play a critical role in the perceived levels of workers' comfort and in reported medical health. CO2 can act as an oxygen displacer, and in confined spaces humans can have, for example, reactions of dizziness, increased heart rate and blood pressure, headaches, and in more serious cases loss of consciousness. Specialized organizations can be brought in to monitor the work environment for limited periods. However, new low cost wireless sensor network (WSN) technologies offer potential for more continuous and autonomous assessment of industrial workplace air quality. Central to effective decision making is the data analytics approach and visualization of what is potentially, big data (BD) in monitoring the air quality in industrial workplaces. This paper presents a case study that monitors air quality that is collected with WSN technologies. We discuss the potential BD problems. The case trials are from two workshops that are part of a large on-shore logistics base a regional shipping industry in Norway. This small case study demonstrates a monitoring and visualization approach for facilitating BD in decision making for health and safety in the shipping industry. We also identify other potential applications of WSN technologies and visualization of BD in the workplace environments; for example, for monitoring of other substances for worker safety in high risk industries and for quality of goods in supply chain management."}
{"_id": "c1b72345ca83f452c73c8c1f7a3bbb900248dc90", "title": "Automated Verification of Electrum Wallet", "text": "We introduce a formal modeling in ASLan++ of the twofactor authentication protocol used by the Electrum Bitcoin wallet. This allows us to perform an automatic analysis of the wallet and show that it is secure for standard scenarios in Dolev Yao model [Dolev 1981]. The result could be derived thanks to some advanced features of the protocol analyzer such as the possibility to specify i) new intruder deduction rules with clauses and ii) non-deducibility constraints."}
{"_id": "51796da9284df8eae3a9ab6f2731f27fa8900095", "title": "A Semi-Supervised Approach for Kernel-Based Temporal Clustering", "text": "Temporal clustering refers to the partitioning of a time series into multiple nonoverlapping segments that belong to k temporal clusters, in such a way that segments in the same cluster are more similar to each other than to those in other clusters. Temporal clustering is a fundamental task in many fields, such as computer animation, computer vision, health care, and robotics. The applications of temporal clustering in those areas are diverse, and include human-motion imitation and recognition, emotion analysis, human activity segmentation, automated rehabilitation exercise analysis, and human-computer interaction. However, temporal clustering using a completely unsupervised method may not produce satisfactory results. Similar to regular clustering, temporal clustering also benefits from some expert knowledge that may be available. The type of approach that utilizes a small amount of knowledge to \u201cguide\u201d the clustering process is known as \u201csemi-supervised clustering.\u201d Semi-supervised temporal clustering is a strategy in which extra knowledge, in the form of pairwise constraints, is incorporated into the temporal data to help with the partitioning problem. This thesis proposes a process to adapt and transform two kernel-based methods into semi-supervised temporal clustering methods. The proposed process is exclusive to kernel-based clustering methods, and is based on two concepts. First, it uses the idea of instance-level constraints, in the form of must-link and cannot-link, to supervise the clustering methods. Second, it uses a dynamic-programming method to search for the optimal temporal clusters. The proposed process is applied to two algorithms, aligned cluster analysis (ACA) and spectral clustering. To validate the advantages of the proposed temporal semi-supervised clustering methods, a comparative analysis was performed, using the original versions of the algorithm and another semi-supervised temporal cluster. This evaluation was conducted with both synthetic data and two real-world applications. The first application includes two naturalistic audio-visual human emotion datasets, and the second application focuses on human-motion segmentation. Results show substantial improvements in accuracy, with minimal supervision, compared to unsupervised and other temporal semi-supervised approaches, without compromising time performance."}
{"_id": "18bfccf0e0383f5867e60d35759091e5a25099e1", "title": "BRAINIAC: Bringing reliable accuracy into neurally-implemented approximate computing", "text": "Applications with large amounts of data, real-time constraints, ultra-low power requirements, and heavy computational complexity present significant challenges for modern computing systems, and often fall within the category of high performance computing (HPC). As such, computer architects have looked to high performance single instruction multiple data (SIMD) architectures, such as accelerator-rich platforms, for handling these workloads. However, since the results of these applications do not always require exact precision, approximate computing may also be leveraged. In this work, we introduce BRAINIAC, a heterogeneous platform that combines precise accelerators with neural-network-based approximate accelerators. These reconfigurable accelerators are leveraged in a multi-stage flow that begins with simple approximations and resorts to more complex ones as needed. We employ high-level, application-specific light-weight checks (LWCs) to throttle this multi-stage acceleration flow and reliably ensure user-specified accuracy at runtime. Evaluation of the performance and energy of our heterogeneous platform for error tolerance thresholds of 5%-25% demonstrates an average of 3\u00d7 gain over computation that only includes precise acceleration, and 15\u00d7-35\u00d7 gain over software-based computation."}
{"_id": "f01fbb6236bbf3bddc90ac14a36811f09f45982b", "title": "The folly of forecasting : The effects of a disaggregated sales forecasting system on sales forecast error , sales forecast positive bias , and inventory levels", "text": "In this study we provide field evidence of the role that sales forecasts play as the coordination mechanism between sales managers and production managers. An extensive body of operations research documents the negative consequences of sales forecast error and investigates how to respond to sales forecast error by optimizing inventory \u201cbuffer\u201d stocks. In contrast, we focus on whether a change in the sales forecasting information environment implemented at our research site reduces forecast error and, hence, the need for those buffer stocks. The newly implemented sales forecast \u201ccontingency system\u201d disaggregated the sales forecast into two components: (i) an official sales forecast that reflected relatively more certain expected demand, and (ii) a separate report that provided the probability of a contingent demand \u201cevent\u201d occurring and the expected volume impact of that event. We predict and find that the system had the intended effect of a reduction in inventory levels, both through better timing of production via a production \u201cpostponement\u201d strategy and through a decrease in absolute forecast error. We further consider the incentives of the self-interested sales managers and predict and find that the inventory reduction benefits of the sales forecast system gained through postponement and a decline in absolute forecast error were partially offset by an increase in positive sales forecast bias. Our study provides novel insights regarding the role of forecasting within the organizational context. While the operations literature uses analytic and simulation methods to examine sales and operations planning in a very mechanistic way, we examine the role that changes in the sales forecast information environment play and the opportunistic responses of self-interested managers. (266 words)"}
{"_id": "798133be9e9a3868ef488395bbe2c644417a6ae7", "title": "Creating people-aware IoT applications by combining design thinking and user-centered design methods", "text": "This article presents a methodology based on design thinking and user experience design methods for creating what we call `people-aware' IoT applications, where user needs, not technological opportunities, drive the development. This methodology is divided into 7 steps: discovery, capturing, research, design, prototype, evaluate and refine. The tools used include conventional user experience procedures such as problem identification, group brainstorming, surveys, or interviews, mixed with more IoT-specific design specificities. The results of the methodology include well-described and user-oriented scenarios meeting user's needs and also a complete toolbox to assist the implementation and the testing of abovementioned scenarios in an IoT perspective. The article describes the methodology in detail with the help of a use case conducted in a business environment available for the project that leads to the identification and partial design of concrete people-aware IoT applications in the context of a smart meeting room."}
{"_id": "1b1a3ab704ffacdf20062e4b5bbc1f39d46a26c9", "title": "A Survey of Surface-Based Illustrative Rendering for Visualization", "text": "In this paper, we survey illustrative rendering techniques for 3D surface models. We first discuss the field of illustrative visualization in general and provide a new definition for this sub-area of visualization. For the remainder of the survey, we then focus on surface-based models. We start by briefly summarizing the differential geometry fundamental to many approaches and discuss additional general requirements for the underlying models and the methods\u2019 implementations. We then provide an overview of low-level illustrative rendering techniques including sparse lines, stippling and hatching, and illustrative shading, connecting each of them to practical examples of visualization applications. We also mention evaluation approaches and list various application fields, before we close with a discussion of the state of the art and future work."}
{"_id": "c28db43fe2bd60c14824c4dc48b7363643a99eee", "title": "Recovering the interleaver of an unknown turbo-code", "text": "We give here an efficient algorithm for recovering the permutation of an unknown turbo-code when several noisy codewords are given. The algorithm presented here uses the same information as some other algorithms given previously for this problem but in an optimal fashion. This paper also clarifies the link between this problem and the BCJR decoding algorithm."}
{"_id": "19b6df414753a81e2fbb1030ddd983210ea4ec80", "title": "Digital Libraries and Autonomous Citation Indexing", "text": "The Web is revolutionizing the way researchers access scientific literature, however scientific literature on the Web is largely disorganized. Autonomous citation indexing can help organize the literature by automating the construction of citation indices. Autonomous citation indexing aims to improve the dissemination and retrieval of scientific literature, and provides improvements in cost, availability, comprehensiveness, efficiency, and timeliness."}
{"_id": "71c0698edd0cf489cd837c91ad22bbf51643bf6c", "title": "A Statistical Approach to Anaphora Resolution", "text": "This paper presents an algorithm for identifying pronominal anaphora and two experiments based upon this algorithm. We incorporate multiple anaphora resolution factors into a statistical framework specifically the distance between the pronoun and the proposed antecedent, gender/number/animaticity of the proposed antecedent, governing head information and noun phrase repetition. We combine them into a single probability that enables us to identify the referent. Our first experiment shows the relative contribution of each source Of information and demonstrates a success rate of 82.9% for all sources combined. The second experiment investigates a method for unsupervised learning of gender/number/animaticity information. We present some experiments illustrating the accuracy of the method and note that with this information added, our pronoun resolution method achieves 84.2% accuracy. 1 I n t r o d u c t i o n We present a statistical method for determining pronoun anaphora. This program differs from earlier work in its almost complete lack of hand-crafting, relying instead on a very small corpus of Penn Wall Street Journal Tree-bank text (Marcus et al., 1993) that has been marked with co-reference information. The first sections of this paper describe this program: the probabilistic model behind it, its implementation, and its performance. The second half of the paper describes a method for using (portions of) t~e aforementioned program to learn automatically the typical gender of English words, information that is itself used in the pronoun resolution program. In particular, the scheme infers the gender of a referent from the gender of the pronouns that 161 refer to it and selects referents using the pronoun anaphora program. We present some typical results as well as the more rigorous results of a blind evaluation of its output. 2 A P r o b a b i l i s t i c M o d e l There are many factors, both syntactic and semantic, upon which a pronoun resolution system relies. (Mitkov (1997) does a detailed study on factors in anaphora resolution.) We first discuss the training features we use and then derive the probability equations from them. The first piece of useful information we consider is the distance between the pronoun and the candidate antecedent. Obviously the greater the distance the lower the probability. Secondly, we look at the syntactic situation in which the pronoun finds itself. The most well studied constraints are those involving reflexive pronouns. One classical approach to resolving pronouns in text that takes some syntactic factors into consideration is that of Hobbs (1976). This algorithm searches the parse tree in a leftto-right, breadth-first fashion that obeys the major reflexive pronoun constraints while giving a preference to antecedents that are closer to the pronoun. In resolving inter-sentential pronouns, the algorithm searches the previous sentence, again in left-to-right, breadth-first order. This implements the observed preference for subject position antecedents. Next, the actual words in a proposed nounphrase antecedent give us information regarding the gender, number, and animaticity of the proposed referent. For example: M a r i e Giraud carries historical significance as one of the last women to be ezecuted in France. S h e became an abortionist because it enabled her to buy jam, cocoa and other war-rationed goodies. 
Here it is helpful to recognize that \"Marie\" is probably female and thus is unlikely to be referred to by \"he\" or \"it\". Given the words in the proposed antecedent we want to find the probability that it is the referent of the pronoun in question. We collect these probabilities on the training data, which are marked with reference links. The words in the antecedent sometimes also let us test for number agreement. Generally, a singular pronoun cannot refer to a plural noun phrase, so that in resolving such a pronoun any plural candidates should be ruled out. However a singular noun phrase can be the referent of a plural pronoun, as illustrated by the following example: \"I think if I tell Viacom I need more time, they will take 'Cosby' across the street,\" says the general manager ol a network a~liate. It is also useful to note the interaction between the head constituent of the pronoun p and the antecedent. For example: A Japanese company might make television picture tubes in Japan, assemble the T V sets in Malaysia and extort them to Indonesia. Here we would compare the degree to which each possible candidate antecedent (A Japanese company, television picture tubes, Japan, T V sets, and Malaysia in this example) could serve as the direct object of \"export\". These probabilities give us a way to implement selectional restriction. A canonical example of selectional restriction is that of the verb \"eat\", which selects food as its direct object. In the case of \"export\" the restriction is not as clearcut. Nevertheless it can still give us guidance on which candidates are more probable than others. The last factor we consider is referents' mention count. Noun phrases that are mentioned repeatedly are preferred. The training corpus is marked with the number of times a referent has been mentioned up to that point in the story. Here we are concerned with the probability that a proposed antecedent is correct given that it has been repeated a certain number of times. 162 In effect, we use this probability information to identify the topic of the segment with the belief that the topic is more likely to be referred to by a pronoun. The idea is similar to tha t used in the centering approach (Brennan et al., 1987) where a continued topic is the highest-ranked candidate for pronominalization. Given the above possible sources of informar tion, we arrive at the following equation, where F(p) denotes a function from pronouns to their antecedents: F(p) = argmaxP( A(p) = alp, h, l~', t, l, so, d~ A~') where A(p) is a random variable denoting the referent of the pronoun p and a is a proposed antecedent. In the conditioning events, h is the head constituent above p, l~ r is the list of candidate antecedents to be considered, t is the type of phrase of the proposed antecedent (always a noun-phrase in this s tudy), I is the type of the head constituent, sp describes the syntactic structure in which p appears, dspecifies the distance of each antecedent from p and M\" is the number of times the referent is mentioned. Note that 17r \", d'~ and A~ are vector quantities in which each entry corresponds to a possible antecedent. When viewed in this way, a can be regarded as an index into these vectors that specifies which value is relevant to the particular choice of antecedent. This equation is decomposed into pieces that correspond to all the above factors but are more statistically manageable. The decomposition makes use of Bayes' theorem and is based on certain independence assumptions discussed below. 
P( A(p) = alp, h, fir, t, l, sp, d~ .Q') = P(alA~)P(p,h, fir, t,l, sp,~a, 2~) (1) P(p, h, fir, t, t, sp, diM ) o\u00a2 PCalM)P(p, h, fir, t, l, sp, ~a, .Q') (2) = P(a[:Q)P(.%, ~a, :~'I) P(p,h, fir, t, l la ,~ ,sp , i) (3) = P(all~)P(sp, d~a,.Q ) PCh, t, Zla, ~'0\", so, i) PC.. ~ la , .~', so, d, h, t, l) (4) oc P(a]l~)P(So,~a,M') P(p, 14tin, ]Q, s o, d, h, t, I) (5) = P(al.Q)P(sp, d~a, 3~r) P(ffrla, I~, s o, d, h, t, I). (6) P(pla. l~, sf,, d. h, t, l, l~) cx P(a163P(dtt la)P(f f ' lh , t, I, a) P(plw\u00b0) (7) Equation (1) is simply an application of Bayes' rule. The denominator is eliminated in the usual fashion, resulting in equation (2). Selectively applying the chain rule results in equations (3) and (4). In equation (4), the term P(h. t, lla, .~, So, d) is the same for every antecedent and is thus removed. Equat ion (6) follows when we break the last component of (5) into two probability distributions. In equation (7) we make the following independence assumptions: \u2022 Given a particular choice of the antecedent candidates, the distance is independent of distances of candidates other than the antecedent (and the distance to non-referents can be ignored): P(so, d~a, 2~) o\u00a2 P(so, dola , IC4) \u2022 The syntnctic s t ructure st, and the distance from the pronoun da are independent of the number of times the referent is mentioned. Thus P(sp, dola, M) = P(sp, d.la) Then we combine sp and de into one variable dIt, Hobbs distance, since the Hobbs algorithm takes both the syntax and distance into account. The words in the antecedent depend only on the parent consti tuent h, the type of the words t, and the type of the parent I. Hence e(ff'la, M, sp, ~, h, t, l) = P ( ~ l h , t, l, a) \u2022 The choice pronoun depends only on the words in the antecedent, i.e. P{pla, M, sp, d, h, t, l, ~ = P(pla, W) 163 \u2022 If we treat a as an index into the vector 1~, then (a, I.V') is simply the a th candidate in the list ffz. We assume the selection of the pronoun is independent of the candidates other than the antecedent. Hence P(pla, W) = P(plw,~) Since I~\" is a vector, we need to normalize P(ff ' lh, t,l, a) to obtain the probability of each element in the vector. It is reasonable to assume tha t the antecedents in W are independent of each other; in other words, P(wo+llwo, h , t , l ,a ) = P(wo+llh, t , l ,a}. Thus,"}
{"_id": "91cf22eef1132e778d004beb371b5637967a8e56", "title": "An Evaluation of Popular Copy-Move Forgery Detection Approaches", "text": "A copy-move forgery is created by copying and pasting content within the same image, and potentially postprocessing it. In recent years, the detection of copy-move forgeries has become one of the most actively researched topics in blind image forensics. A considerable number of different algorithms have been proposed focusing on different types of postprocessed copies. In this paper, we aim to answer which copy-move forgery detection algorithms and processing steps (e.g., matching, filtering, outlier detection, affine transformation estimation) perform best in various postprocessing scenarios. The focus of our analysis is to evaluate the performance of previously proposed feature sets. We achieve this by casting existing algorithms in a common pipeline. In this paper, we examined the 15 most prominent feature sets. We analyzed the detection performance on a per-image basis and on a per-pixel basis. We created a challenging real-world copy-move dataset, and a software framework for systematic image manipulation. Experiments show, that the keypoint-based features Sift and Surf, as well as the block-based DCT, DWT, KPCA, PCA, and Zernike features perform very well. These feature sets exhibit the best robustness against various noise sources and downsampling, while reliably identifying the copied regions."}
{"_id": "bd4cd14bf33f4403d64166e0e900d7932b028b2b", "title": "Authentication Algorithm for Intrusion Detection in Wireless Networks", "text": "Security has been a major issue in wireless networks. In a wireless network, wireless devices are prone to be unauthorized accessing data or resources. Hence it becomes necessary to consider issues of security such as : 1. Authentication, 2. Access Control. Traditional methods of Authentication has been to assign user names and passwords. This is extremely vulnerable to be accessed and misused. Therefore, to overcome these problems, This paper proposes a method by which the Username and Password are stored in an Hashed format, with the help of Neural Network learning the hashes instead of storing them in the tables. Thus, reducing the risk of being accessed. Therefore, a combination of authentication and access control could make a good tool for intrusion detection."}
{"_id": "4d081514541737c24ae699814494b0c3a5585b31", "title": "Design equations for the Quasi Yagi-Uda antenna operating in the UHF band", "text": "Design equations for the standard Quasi Yagi-Uda antenna built on 60 mil FR4 are presented to be used on fixed wireless telecommunication systems, Wi-Fi, WiMAX and LTE. In this work, the design equations are systematically proposed for the band between 1 GHz and 3 GHz, where relative bandwidth is better than 39%, the cross-polarization is lower than - 15dB for every angle and frequency, the cross-polarization is lower than -25dB in the main lobe and a front to back ratio better than 12dB in all the bandwidth. Three antennas where designed, built and measured with the proposed equations at 1.5GHz, 2GHz and 3 GHz, showing good agreement with the simulation results."}
{"_id": "e233f47449c2328a9940919b510b8273e8734664", "title": "The burden of allergic contact dermatitis caused by acrylates.", "text": "Acrylates and methacrylates are esters of acrylic acid and methacrylic acid, respectively, that polymerize either spontaneously or on ultraviolet (UV) light exposure to form products that are commonly used in adhesives in healthcare settings, and in plastics and textiles (1). They were commonly considered to be occupational contact allergens, particularly in dentists and orthopaedic surgeons, owing to the use of (meth)acrylates in dental materials and bone cement (2). However, more recently, the demand for long-lasting cosmetic nails has led to the widespread use of gel nails and acrylic nails. Gel nails are methacrylate-based gel varnishes that rely on UV curing to form a hard nail-coating colour that lasts for up to 2 weeks. This has led to an occupational shift in (metha)crylate allergy, with a rise in incidence among beauticians and nail technicians using these products (1, 3\u20135). In addition, with home use UV-curing kits now readily available, there is a further shift of contact allergy to consumers. Our aim was to characterize the cases of allergic contact dermatitis caused by meth(acrylates) and cyanoacrylates in our large regional patch testing centre. Because of its significant occupational and health-related sequelae, we highlight the current burden of the serious allergy to these compounds."}
{"_id": "202ee3a9a0d9849234b38c30c0dba9a9c597b40e", "title": "A cross-platform analysis of bugs and bug-fixing in open source projects: desktop vs. Android vs. iOS", "text": "As smartphones continue to increase in popularity, understanding how software processes associated with the smartphone platform differ from the traditional desktop platform is critical for improving user experience and facilitating software development and maintenance. In this paper we focus specifically on differences in bugs and bug-fixing processes between desktop and smartphone software. Our study covers 444,129 bug reports in 88 open source projects on desktop, Android, and iOS. The study has two main thrusts: a quantitative analysis to discover similarities and differences between desktop and smartphone bug reports/processes; and a qualitative analysis where we extract topics from bug reports to understand bugs' nature, categories, and differences between platforms. Our findings include: during 2011--2013, iOS bugs were fixed three times faster compared to Android and desktop; top smartphone bug fixers are more involved in reporting bugs than top desktop bug fixers; and most frequent high-severity bugs are due to build issues on desktop, concurrency on Android, and application logic on iOS. Our study, findings, and recommendations are potentially useful to smartphone researchers and practitioners."}
{"_id": "869ed11c38bd02d70f630682db7cd9c9fee1e619", "title": "Modelling the interaction of electromagnetic fields (10 MHz-10 GHz) with the human body: methods and applications.", "text": "Numerical modelling of the interaction between electromagnetic fields (EMFs) and the dielectrically inhomogeneous human body provides a unique way of assessing the resulting spatial distributions of internal electric fields, currents and rate of energy deposition. Knowledge of these parameters is of importance in understanding such interactions and is a prerequisite when assessing EMF exposure or when assessing or optimizing therapeutic or diagnostic medical applications that employ EMFs. In this review, computational methods that provide this information through full time-dependent solutions of Maxwell's equations are summarized briefly. This is followed by an overview of safety- and medical-related applications where modelling has contributed significantly to development and understanding of the techniques involved. In particular, applications in the areas of mobile communications, magnetic resonance imaging, hyperthermal therapy and microwave radiometry are highlighted. Finally, examples of modelling the potentially new medical applications of recent technologies such as ultra-wideband microwaves are discussed."}
{"_id": "cd4a39d738c6894526dbe46a0496a6440764d979", "title": "AAAI: An Argument Against Artificial Intelligence", "text": "The ethical concerns regarding the successful development of an Artificial Intelligence have received a lot of attention lately. The idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of an AI causing extreme human suffering is important enough to warrant serious consideration. Others look at this problem from the opposite perspective, namely that of the AI itself. Here the idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of humanity causing extreme suffering to an AI is important enough to warrant serious consideration. This paper starts from the observation that both concerns rely on problematic philosophical assumptions. Rather than tackling these assumptions directly, it proceeds to present an argument that if one takes these assumptions seriously, then one has a moral obligation to advocate for a ban on the development of a conscious AI."}
{"_id": "36cb22f1921afd29ad25b0619c7ad12ed083cd90", "title": "Reward Systems in the Brain and Nutrition.", "text": "The taste cortex in the anterior insula provides separate and combined representations of the taste, temperature, and texture of food in the mouth independently of hunger and thus of reward value and pleasantness. One synapse on, in the orbitofrontal cortex, these sensory inputs are combined by associative learning with olfactory and visual inputs for some neurons, and these neurons encode food reward value in that they respond to food only when hunger is present and in that activations correlate linearly with subjective pleasantness. Cognitive factors, including word-level descriptions and selective attention to affective value, modulate the representation of the reward value of taste, olfactory, and flavor stimuli in the orbitofrontal cortex and a region to which it projects, the anterior cingulate cortex. These food reward representations are important in the control of appetite and food intake. Individual differences in reward representations may contribute to obesity, and there are age-related differences in these reward representations. Implications of how reward systems in the brain operate for understanding, preventing, and treating obesity are described."}
{"_id": "73ba62bde2e4b60cdb0dbb98e0a6270dbc8ec5c7", "title": "Healthcare via cell phones: a systematic review.", "text": "Regular care and informational support are helpful in improving disease-related health outcomes. Communication technologies can help in providing such care and support. The purpose of this study was to evaluate the empirical evidence related to the role of cell phones and text messaging interventions in improving health outcomes and processes of care. Scientific literature was searched to identify controlled studies evaluating cell phone voice and text message interventions to provide care and disease management support. Searches identified 25 studies that evaluated cell phone voice and text messaging interventions, with 20 randomized controlled trials and 5 controlled studies. Nineteen studies assessed outcomes of care and six assessed processes of care. Selected studies included 38,060 participants with 10,374 adults and 27,686 children. They covered 12 clinical areas and took place in 13 countries. Frequency of message delivery ranged from 5 times per day for diabetes and smoking cessation support to once a week for advice on how to overcome barriers and maintain regular physical activity. Significant improvements were noted in compliance with medicine taking, asthma symptoms, HbA1C, stress levels, smoking quit rates, and self-efficacy. Process improvements were reported in lower failed appointments, quicker diagnosis and treatment, and improved teaching and training. Cost per text message was provided by two studies. The findings that enhancing standard care with reminders, disease monitoring and management, and education through cell phone voice and short message service can help improve health outcomes and care processes have implications for both patients and providers."}
{"_id": "236ae3cad39a3e57fe0f8850fc3428e42cd8bfad", "title": "\"GrabCut\": interactive foreground extraction using iterated graph cuts", "text": "The problem of efficient, interactive foreground/background segmentation in still images is of great practical importance in image editing. Classical image segmentation tools use either texture (colour) information, e.g. Magic Wand, or edge (contrast) information, e.g. Intelligent Scissors. Recently, an approach based on optimization by graph-cut has been developed which successfully combines both types of information. In this paper we extend the graph-cut approach in three respects. First, we have developed a more powerful, iterative version of the optimisation. Secondly, the power of the iterative algorithm is used to simplify substantially the user interaction needed for a given quality of result. Thirdly, a robust algorithm for \"border matting\" has been developed to estimate simultaneously the alpha-matte around an object boundary and the colours of foreground pixels. We show that for moderately difficult examples the proposed method outperforms competitive tools."}
{"_id": "d947f89ee5b843f508b14e08649ec69ea0e76733", "title": "Aggregate Profile Clustering for Streaming Analytics", "text": "Many analytic applications require analyzing user interaction data. In particular, such data can be aggregated over a window to build user activity profiles. Clustering such aggregate profiles is useful for grouping together users with similar behaviors, so that common models could be built for them. In this paper, we present an approach for clustering profiles that are incrementally maintained over a stream of updates. Owing to the potentially large number of users and high rate of interactions, maintaining profile clusters can have high processing and memory resource requirements. To tackle this problem, we apply distributed stream processing. However, in the presence of distributed state, it is a major challenge to partition the profiles over nodes such that memory and computation balance is maintained, while keeping the clustering accuracy high. Furthermore, in order to adapt to potentially changing user interaction patterns, the partitioning of profiles to nodes should be continuously revised, yet one should minimize the migration of profiles so as not to disturb the online processing of updates. We develop a re-partitioning technique that achieves all these goals. To achieve this, we keep micro-cluster summaries at each node and periodically collect these summaries at a central node to perform re-partitioning. We use a greedy algorithm with novel affinity heuristics to revise the partitioning and update the routing tables without introducing a lengthy pause. We showcase the effectiveness of our approach using an application that clusters customers of a telecommunications company based on their aggregate calling profiles."}
{"_id": "626338e26b577d3320ae0d9ebc4e7ff40844e250", "title": "A compact field weakening controller implementation", "text": "This paper proposes a compact field weakening controller (FWC). The FWC was developed during the development of an electrically driven rear axle for a hybrid electric car. The overall control system of the vehicle, the hybrid controller (HC), delivers a reference torque for the rear drive unit (RDU). Depending on the torque reference, instantaneous vehicle speed and battery voltage the task of the FWC is to produce current references for the current controller. The references are chosen to give an optimal current, optimal in the sense that the torque-per-current ratio is maximized, under the restriction of keeping the stator voltage and current limited. The FWC is based on comprehensive measurements of the machine. An algorithm is used to process the data from the measurements. The output from the algorithm is look-up tables. Implementation of the look-up tables is also discussed in the paper"}
{"_id": "dedf8f8fda29db53cfa25a7fa04b136e5763a0d8", "title": "Enhancing the RGR Routing Protocol for Unmanned Aeronautical Ad-hoc Networks", "text": "The design of a routing protocol for Unmanned Aeronautical Ad-hoc Networks (UAANETs) is quite challenging, especially due to the high mobility of Unmanned Aerial Vehicles (UAVs) and the low network density. Among all routing protocols that have been designed for UAANETs, the Reactive-Greedy-Reactive (RGR) protocol has been proposed as a promising routing protocol in high mobility and density-variable scenarios. While prior results have shown that RGR is competitive to AODV or other purely reactive routing protocols, it is not fully exploiting all the unique features of UAANETs and the knowledge maintained by intermediate nodes. This report presents a number of different enhancements, namely scoped flooding, delayed route request, and mobility prediction, in order to improve the performance of RGR in terms of overhead, packet delivery ratio and delay respectively. These modifications were implemented in Opnet, and simulation results show that we can reduce the protocol overhead by about 30%, while at the same time increasing PDR by about 3-5% and reducing packet latency. The results further show that there is still significant scope for further performance improvements."}
{"_id": "e8d9fce9377e2e0efeb353d6a2f20179f2c7ce9b", "title": "Sensitivity enhancement of MEMS fluid flow rate and flow direction sensor", "text": "MEMS based flow rate sensor and flow direction sensor is a crucial to the determination of exact fluid flow path. Conventional rectangular cantilever beam detects based on the surface strain of the beam with respect to the mass flow. Different structured beam will greatly affect the sensitivity. In this paper different geometries of cantilever beam such as rectangular, inverted trapezoidal, triangular are designed and simulated using the finite element software INTELLISUITE 8.7. The simulation results show that the inverted trapezoidal geometries having the more deflection at same stress of the conventional beam and thus increase the sensitivity by approx. 1.45 times."}
{"_id": "3c5a9f10d3b60517a97a25aa500aab5bb8665020", "title": "A study on the software requirements elicitation issues - its causes and effects", "text": "Software requirements elicitation is an important and essential pre-requisite to the subsequent phases in software development lifecycle. There is an increasing focus on how industry performs elicitation as this has a direct influence on the overall project success. It is important to understand broader elicitation issues and challenges, and address them on a large-scale, especially on geographically distributed software development. There are studies focusing on elicitation, but they are relatively small. There is no substantial research related to providing a comprehensive view of elicitation issues along with its causes and effects. This paper attempts to provide a summary of the systematic literature review (SLR) findings from 81 papers. The findings are based on causes of poor elicitation, elicitation issues and challenges, consequences of poor elicitation, advisable practices and classification of elicitation issues. The authors have leveraged cause-and-effect diagrams to draw conclusions on SLR."}
{"_id": "43a42675188ecab3b240a6ee2a28d1bad048431c", "title": "Scalable Semantics-Based Detection of Similar Android Applications", "text": "The popularity and utility of smartphones rely on their vibrant application markets; however, plagiarism threatens the long-term health of these markets. In this paper, we present a scalable approach to detecting similar Android apps based on semantic information. We implement our approach in a tool called AnDarwin and evaluate it on 265,359 apps collected from 17 markets including Google Play and numerous third-party markets. In contrast with earlier approaches, AnDarwin does not compare apps pairwise, thus greatly increasing its scalability. Additionally, AnDarwin does not rely on information other than the app code \u2014 such as the app\u2019s market, signature, or description \u2014 thus greatly increasing its reliability. AnDarwin can automatically detect library code and remove it from the similarity analysis. We present two use cases for AnDarwin: finding similar apps by different developers (\u201cclones\u201d) and similar apps from the same developer (\u201crebranded\u201d). In ten hours, AnDarwin detected at least 4,295 apps which have been the victims of cloning and 36,106 apps that are rebranded. By analyzing the clusters found by AnDarwin, we found 88 new variants of malware and identified 169 malicious apps based on differences in the requested permissions. In contrast to earlier approaches, AnDarwin can detect both full and partial app similarity. Additionally, AnDarwin can automatically detect similar code that is injected into many apps, which may indicate the spread of malware. Our evaluation demonstrates AnDarwin\u2019s ability to accurately detect similar apps on a large scale."}
{"_id": "31be1ed3f134a7a95a6feb088b2916d2cbed48b9", "title": "The one-time pad revisited", "text": "The one-time pad, the mother of all encryption schemes, is well known to be information-theoretically secure, in contrast to most encryption schemes used in practice, which are at most computationally secure. In this paper, we focus on another, completely different aspect in which the one-time pad is superior to normal encryption, and which surfaces only when the receiver (not only the eavesdropper) is considered potentially dishonest, as can be the case in a larger protocol context in which encryption is used as a sub-protocol. For example, such a dishonest receiver (who is, say, coerced by the eavesdropper) can in normal encryption verifiably leak the message to the eavesdropper by revealing the secret key. While this leakage feature can provably not be avoided completely, it is more limited if the one-time pad is used. We use the constructive cryptography framework to make these statements precise."}
{"_id": "541c3ab3ce75594c403126413b9c866fa7fba57a", "title": "Help your mobile applications with fog computing", "text": "Cloud computing has paved a way for resource-constrained mobile devices to speed up their computing tasks and to expand their storage capacity. However, cloud computing is not necessary a panacea for all mobile applications. The high network latency to cloud data centers may not be ideal for delay-sensitive applications while storing everything on public clouds risks users' security and privacy. In this paper, we discuss two preliminary ideas, one for mobile application offloading and the other for mobile storage expansion, by leveraging the edge intelligence offered by fog computing to help mobile applications. Preliminary experiments conducted based on implemented prototypes show that fog computing can provide an effective and sometimes better alternative to help mobile applications."}
{"_id": "1bae60fde4efc97c039fba9bc4b8bcee3c66d67b", "title": "Bayesian Biosurveillance of Disease Outbreaks", "text": "Early, reliable detection of disease outbreaks is a critical problem today. This paper reports an investigation of the use of causal Bayesian networks to model spatio-temporal patterns of a non-contagious disease (respiratory anthrax infection) in a population of people. The number of parameters in such a network can become enormous, if not carefully managed. Also, inference needs to be performed in real time as population data stream in. We describe techniques we have applied to address both the modeling and inference challenges. A key contribution of this paper is the explication of assumptions and techniques that are sufficient to allow the scaling of Bayesian network modeling and inference to millions of nodes for real-time surveillance applications. The results reported here provide a proof-of-concept that Bayesian networks can serve as the foundation of a system that effectively performs Bayesian biosurveillance of disease outbreaks."}
{"_id": "970ecfe73808e69e20af96ef720d5a6490ec53d0", "title": "Improving Social Inclusion Using NLP : Tools and Resources", "text": "This paper discusses the issue of accessibility to being online and using social media for people with a learning disability, and the challenges to using a co-production approach in an accessible technology project. While an increasing number of daily living tasks are now completed online, people with a learning disability frequently experience digital exclusion due to limited literacy and IT skills. The Able to Include project sought to engage people with a learning disability as active partners to test and feedback on the use and development of a pictogram app used to make social media more accessible. The challenges mainly related to the feedback needing to be sent electronically to the partners; there was only minimal contact with them and no face to face contact. The paper also outlines how other challenges were overcome to enable genuine and meaningful co-production. These included addressing online safety and ethical issues regarding anonymity."}
{"_id": "024d30897e0a2b036bc122163a954b7f1a1d0679", "title": "Mode Regularized Generative Adversarial", "text": "Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards that of higher concentration than that of the data generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data generating distribution, during the early phases of training and thus providing a unified solution to the missing modes problem."}
{"_id": "7e817759cc999a9d70855bfef54c7b6024abd695", "title": "Abnormal brain chemistry in chronic back pain: an in vivo proton magnetic resonance spectroscopy study", "text": "The neurobiology of chronic pain, including chronic back pain, is unknown. Structural imaging studies of the spine cannot explain all cases of chronic back pain. Functional brain imaging studies indicate that the brain activation patterns are different between chronic pain patients and normal subjects, and the thalamus, and prefrontal and cingulate cortices are involved in some types of chronic pain. Animal models of chronic pain suggest abnormal spinal cord chemistry. Does chronic pain cause brain chemistry changes? We examined brain chemistry changes in patients with chronic back pain using in vivo single- voxel proton magnetic resonance spectroscopy ((1)H-MRS). In vivo (1)H-MRS was used to measure relative concentrations of N-acetyl aspartate, creatine, choline, glutamate, glutamine, gamma-aminobutyric acid, inositol, glucose and lactate in relation to the concentration of creatine. These measurements were performed in six brain regions of nine chronic low back pain patients and 11 normal volunteers. All chronic back pain subjects underwent clinical evaluation and perceptual measures of pain and anxiety. We show that chronic back pain alters the human brain chemistry. Reductions of N-acetyl aspartate and glucose were demonstrated in the dorsolateral prefrontal cortex. Cingulate, sensorimotor, and other brain regions showed no chemical concentration differences. In chronic back pain, the interrelationship between chemicals within and across brain regions was abnormal, and there was a specific relationship between regional chemicals and perceptual measures of pain and anxiety. These findings provide direct evidence of abnormal brain chemistry in chronic back pain, which may be useful in diagnosis and future development of more effective pharmacological treatments."}
{"_id": "08f9ea089812e7975e1ddbddb155d2b406919934", "title": "Overview of the TAC 2010 Knowledge Base Population Track", "text": "In this paper we give an overview of the Knowledge Base Population (KBP) track at TAC 2010. The main goal of KBP is to promote research in discovering facts about entities and expanding a structured knowledge base with this information. A large source collection of newswire and web documents is provided for systems to discover information. Attributes (a.k.a. \u201cslots\u201d) derived from Wikipedia infoboxes are used to create the reference knowledge base (KB). KBP2010 includes the following four tasks: (1) Regular Entity Linking, where names must be aligned to entities in the KB; (2) Optional Entity linking, without using Wikipedia texts; (3) Regular Slot Filling, which requires a system to automatically discover the attributes of specified entities from the source document collection and use them to expand the KB; (4) Surprise Slot Filling, which requires a system to return answers regarding new slot types within a short time period. KBP2010 has attracted many participants (over 45 teams registered for KBP 2010 (not including the RTEKBP Validation Pilot task), among which 23 teams submitted results). In this paper we provide an overview of the task definition and annotation challenges associated with KBP2010. Then we summarize the evaluation results and discuss the lessons that we have learned based on detailed analysis."}
{"_id": "275008468de4c1f143ec601871009b8e45b63506", "title": "Probabilistic Similarity Logic", "text": "Many machine learning applications require the ability to learn from and reason about noisy multi-relational data. To address this, several effective representations have been developed that provide both a language for expressing the structural regularities of a domain, and principled support for probabilistic inference. In addition to these two aspects, however, many applications also involve a third aspect\u2013the need to reason about similarities\u2013which has not been directly supported in existing frameworks. This paper introduces probabilistic similarity logic (PSL), a general-purpose framework for joint reasoning about similarity in relational domains that incorporates probabilistic reasoning about similarities and relational structure in a principled way. PSL can integrate any existing domainspecific similarity measures and also supports reasoning about similarities between sets of entities. We provide efficient inference and learning techniques for PSL and demonstrate its effectiveness both in common relational tasks and in settings that require reasoning about similarity."}
{"_id": "8dd0b4e5fcbab552f0ef1460b2c97cec926cdb58", "title": "A short introduction to probabilistic soft logic", "text": "Probabilistic soft logic (PSL) is a framework for collective, probabilistic reasoning in relational domains. PSL uses first order logic rules as a template language for graphical models over random variables with soft truth values from the interval [0, 1]. Inference in this setting is a continuous optimization task, which can be solved efficiently. This paper provides an overview of the PSL language and its techniques for inference and weight learning. An implementation of PSL is available at http://psl.umiacs.umd.edu/."}
{"_id": "0b584f82ec87f068416c553b8c94778eecf9f7d6", "title": "Second-order cone programming", "text": "Second-order cone programming (SOCP) problems are convex optimization problems in which a linear function is minimized over the intersection of an affine linear manifold with the Cartesian product of second-order (Lorentz) cones. Linear programs, convex quadratic programs and quadratically constrained convex quadratic programs can all be formulated as SOCP problems, as can many other problems that do not fall into these three categories. These latter problems model applications from a broad range of fields from engineering, control and finance to robust optimization and combinatorial optimization. On the other hand semidefinite programming (SDP)\u2014that is the optimization problem over the intersection of an affine set and the cone of positive semidefinite matrices\u2014includes SOCP as a special case. Therefore, SOCP falls between linear (LP) and quadratic (QP) programming and SDP. Like LP, QP and SDP problems, SOCP problems can be solved in polynomial time by interior point methods. The computational effort per iteration required by these methods to solve SOCP problems is greater than that required to solve LP and QP problems but less than that required to solve SDP\u2019s of similar size and structure. Because the set of feasible solutions for an SOCP problem is not polyhedral as it is for LP and QP problems, it is not readily apparent how to develop a simplex or simplex-like method for SOCP. While SOCP problems can be solved as SDP problems, doing so is not advisable both on numerical grounds and computational complexity concerns. For instance, many of the problems presented in the survey paper of Vandenberghe and Boyd [VB96] as examples of SDPs can in fact be formulated as SOCPs and should be solved as such. In \u00a72, 3 below we give SOCP formulations for four of these examples: the convex quadratically constrained quadratic programming (QCQP) problem, problems involving fractional quadratic functions such as those that arise in structural optimization, logarithmic Tchebychev approximation and the"}
{"_id": "1825136db5f034642f5ff5fe90280e9a1a36d70a", "title": "Whitening Black-Box Neural Networks", "text": "Many deployed learned models are black boxes: given input, returns output. Internal information about the model, such as the architecture, optimisation procedure, or training data, is not disclosed explicitly as it might contain proprietary information or make the system more vulnerable. This work shows that such attributes of neural networks can be exposed from a sequence of queries. This has multiple implications. On the one hand, our work exposes the vulnerability of black-box neural networks to different types of attacks \u2013 we show that the revealed internal information helps generate more effective adversarial examples against the black box model. On the other hand, this technique can be used for better protection of private content from automatic recognition models using adversarial examples. Our paper suggests that it is actually hard to draw a line between white box and black box models."}
{"_id": "a36b8732f41d8a22459b956fca1ff3a06d20fcc8", "title": "Development of a superior frontal\u2013intraparietal network for visuo-spatial working memory", "text": "Working memory capacity increases throughout childhood and adolescence, which is important for the development of a wide range of cognitive abilities, including complex reasoning. The spatial-span task, in which subjects retain information about the order and position of a number of objects, is a sensitive task to measure development of spatial working memory. This review considers results from previous neuroimaging studies investigating the neural correlates of this development. Older children and adolescents, with higher capacity, have been found to have higher brain activity in the intraparietal cortex and in the posterior part of the superior frontal sulcus, during the performance of working memory tasks. The structural maturation of white matter has been investigated by diffusion tensor magnetic resonance imaging (DTI). This has revealed several regions in the frontal lobes in which white matter maturation is correlated with the development of working memory. Among these is a superior fronto-parietal white matter region, located close to the grey matter regions that are implicated in the development of working memory. Furthermore, the degree of white matter maturation is positively correlated with the degree of cortical activation in the frontal and parietal regions. This suggests that during childhood and adolescence, there is development of networks related to specific cognitive functions, such as visuo-spatial working memory. These networks not only consist of cortical areas but also the white matter tracts connecting them. For visuo-spatial working memory, this network could consist of the superior frontal and intraparietal cortex."}
{"_id": "a74f2165ccb7e6dcf7b9bd03bf707a9f8dbd092a", "title": "Nobel Lecture: The Economic Way of Looking at Behavior", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "8f9a3f5b2902c20990923aef3ebac685151e615a", "title": "Foot Step Based Person Identification Using Histogram Similarity and Wavelet Decomposition", "text": "Research in person identification and authentication has attracted significant attention from the researchers and scientists. This paper presents a biometric user authentication based on a person's foot step. The advantage of this recognition method over other types of biometrics is that it enables unobtrusive user authentication where other types of biometrics are not available. Firstly the ground reaction force data was extracted using force plate to gather ground reaction force for individuals. Later we utilized the discrete wavelet transform to de-noise the experimental data and in the final step, histograms were used to identify different person's foot step. The experimental results show improvements in identification accuracies compared to previously reported work."}
{"_id": "4adbf5867c0eb583fcbe3cd0778fec42b1a57158", "title": "Pharmaceutical cold chain management: Platform based on a distributed ledger", "text": "In the process of distributing pharmaceutical products multiple stages are involved until products are delivered to patient. A reliable way for validating and verifying that products had been maintained within a licensed range is required. The paper presents a solution for pharmaceutical cold chain management using distributed ledger technologies. An application framework is proposed for shipment tracking which will deliver information to all stakeholders during the distribution phase of pharmaceutical products. The solution is based on Hyperledger Sawtooth distributed ledger framework, which has been extended with a custom transactions family and a sensors gateway for automatically collecting data from temperature tracking devices. The focus of the paper is to describe the data model and how entities of the system communicate."}
{"_id": "3ad556181c59e22dd8138e43102b7daab7b1546f", "title": "Fiat-Shamir with Aborts: Applications to Lattice and Factoring-Based Signatures", "text": "We demonstrate how the framework that is used for creating efficient number-theoretic ID and signature schemes can be transferred into the setting of lattices. This results in constructions of the most efficient to-date identification and signature schemes with security based on the worst-case hardness of problems in ideal lattices. In particular, our ID scheme has communication complexity of around 65, 000 bits and the length of the signatures produced by our signature scheme is about 50, 000 bits. All prior lattice-based identification schemes required on the order of millions of bits to be transferred, while all previous lattice-based signature schemes were either stateful, too inefficient, or produced signatures whose lengths were also on the order of millions of bits. The security of our identification scheme is based on the hardness of finding the approximate shortest vector to within a factor of \u00d5(n) in the standard model, while the security of the signature scheme is based on the same assumption in the random oracle model. Our protocols are very efficient, with all operations requiring \u00d5(n) time. We also show that the technique for constructing our lattice-based schemes can be used to improve certain number-theoretic schemes. In particular, we are able to shorten the length of the signatures that are produced by Girault\u2019s factoring-based digital signature scheme ([10,11,31])."}
{"_id": "851ee3421e91b42cb844068eb6dd840e930abce4", "title": "Arabic Optical Character Recognition Software : A Review 1", "text": "This paper provides a thorough evaluation of a set of six important Arabic OCR systems available in the market; namely: Abbyy FineReader, Leadtools, Readiris, Sakhr, Tesseract and NovoVerus. We test the OCR systems using a randomly selected images from the well known Arabic Printed Text Image database (250 images from the APTI database) and using a set of 8 images from an Arabic book. The APTI database contains 45.313.600 of both decomposable and non-decomposable word images. In the evaluation, we conduct two tests. The first test is based on usual metrics used in the literature. In the second test, we provide a novel measure for Arabic language, which can be used for other non-Latin languages."}
{"_id": "90b90e8e7a29daf5c5cc22bc9d46b1d46124f8d0", "title": "Synthesizing tests for detecting atomicity violations", "text": "Using thread-safe libraries can help programmers avoid the complexities of multithreading. However, designing libraries that guarantee thread-safety can be challenging. Detecting and eliminating atomicity violations when methods in the libraries are invoked concurrently is vital in building reliable client applications that use the libraries. While there are dynamic analyses to detect atomicity violations, these techniques are critically dependent on effective multithreaded tests. Unfortunately, designing such tests is non-trivial. In this paper, we design a novel and scalable approach for synthesizing multithreaded tests that help detect atomicity violations. The input to the approach is the implementation of the library and a sequential seed testsuite that invokes every method in the library with random parameters. We analyze the execution of the sequential tests, generate variable lock dependencies and construct a set of three accesses which when interleaved suitably in a multithreaded execution can cause an atomicity violation. Subsequently, we identify pairs of method invocations that correspond to these accesses and invoke them concurrently from distinct threads with appropriate objects to help expose atomicity violations. We have incorporated these ideas in our tool, named Intruder, and applied it on multiple open-source Java multithreaded libraries. Intruder is able to synthesize 40 multithreaded tests across nine classes in less than two minutes to detect 79 harmful atomicity violations, including previously unknown violations in thread-safe classes. We also demonstrate the effectiveness of Intruder by comparing the results with other approaches designed for synthesizing multithreaded tests."}
{"_id": "aac0e2533870637e425a5ea8d4807676cfc4d0aa", "title": "Parallel Processing of Dynamic Continuous Queries over Streaming Data Flows", "text": "More and more real-time applications need to handle dynamic continuous queries over streaming data of high density. Conventional data and query indexing approaches generally do not apply for excessive costs in either maintenance or space. Aiming at these problems, this study first proposes a new indexing structure by fusing an adaptive cell and KDB-tree, namely CKDB-tree. A cell-tree indexing approach has been developed on the basis of the CKDB-tree that supports dynamic continuous queries. The approach significantly reduces the space costs and scales well with the increasing data size. Towards providing a scalable solution to filtering massive steaming data, this study has explored the feasibility to utilize the contemporary general-purpose computing on the graphics processing unit (GPGPU). The CKDB-tree-based approach has been extended to operate on both the CPU (host) and the GPU (device). The GPGPU-aided approach performs query indexing on the host while perform streaming data filtering on the device in a massively parallel manner. The two heterogeneous tasks execute in parallel and the latency of streaming data transfer between the host and the device is hidden. The experimental results indicate that (1) CKDB-tree can reduce the space cost comparing to the cell-based indexing structure by 60 percent on average, (2) the approach upon the CKDB-tree outperforms the traditional counterparts upon the KDB-tree by 66, 75 and 79 percent in average for uniform, skewed and hyper-skewed data in terms of update costs, and (3) the GPGPU-aided approach greatly improves the approach upon the CKDB-tree with the support of only a single Kepler GPU, and it provides real-time filtering of streaming data with 2.5M data tuples per second. The massively parallel computing technology exhibits great potentials in streaming data monitoring."}
{"_id": "baf8ace34b363e123a115ffdf0eac4f39fd4f199", "title": "(DE)$^2$CO: Deep Depth Colorization", "text": "The ability to classify objects is fundamental for robots. Besides knowledge about their visual appearance, captured by the RGB channel, robots heavily need also depth information to make sense of the world. While the use of deep networks on RGB robot images has benefited from the plethora of results obtained on databases like ImageNet, using convnets on depth images requires mapping them into three-dimensional channels. This transfer learning procedure makes them processable by pretrained deep architectures. Current mappings are based on heuristic assumptions over preprocessing steps and on what depth properties should be most preserved, resulting often in cumbersome data visualizations, and in suboptimal performance in terms of generality and recognition results. Here, we take an alternative route and we attempt instead to learn an optimal colorization mapping for any given pretrained architecture, using as training data a reference RGB-D database. We propose a deep network architecture, exploiting the residual paradigm, that learns how to map depth data to three channel images. A qualitative analysis of the images obtained with this approach clearly indicates that learning the optimal mapping preserves the richness of depth information better than current hand-crafted approaches. Experiments on the Washington, JHUIT-50 and BigBIRD public benchmark databases, using CaffeNet, VGG-16, GoogleNet, and ResNet50 clearly showcase the power of our approach, with gains in performance of up to 16% compared to state of the art competitors on the depth channel only, leading to top performances when dealing with RGB-D data."}
{"_id": "b7bd6b6a061ad2b4f9726e5db60249ccfcfb05f5", "title": "GEOSTATISTICS : PAST , PRESENT , AND FUTURE", "text": "Geostatistical analyses were first developed in the 1950's as a result of interest in areal or block averages for ore reserves in the mining industry. Today, variogram estimation and spatial prediction (kriging) span all sciences where data exhibit spatial correlation. Theoretical properties of the spatial process are addressed under the distribution-free and likelihood-based perspectives. Strengths and weaknesses of these two dominant methodologies for modeling spatial correlation and predicting responses at unsampled sites and areal units are explored. Examples of hot spot detection and areal prediction show the flexibility of the Bayesian paradigm. Current bottlenecks and future directions in the field of geostatistics are discussed."}
{"_id": "3ae1a3cea640aaed12bfa7032e73afaa6c7a810a", "title": "iPinYou Global RTB Bidding Algorithm Competition Dataset", "text": "RTB (Real Time Bidding) is one of the most exciting developments in computational advertising in recent years. It drives transparency and efficiency in the display advertising ecosystem and facilitates the healthy growth of the display advertising industry. It enables advertisers to deliver the right message to the right person at the right time, publishers to better monetize their content by leveraging their website audience, and consumers to view relevant information through personalized ads. However, researchers in computational advertising area have been suffering from lack of publicly available datasets. iPinYou organizes a three-season global RTB algorithm competition in 2013. For each season, there is offline stage and online stage. On the offline stage, iPinYou releases a dataset for model training and reserves a dataset for testing. The dataset includes logs of ad biddings, impressions, clicks, and final conversions. After the whole competition ends, iPinYou organizes and releases all these three datasets for public use. These datasets can support experiments of some important research problems such as bid optimization and CTR estimation. To the best of our knowledge, this is the first publicly available dataset on RTB display advertising. In this paper, we give descriptions of these datasets to further boost the interests of computational advertising research community using this dataset."}
{"_id": "23cc8e75e04514cfec26eecc9e1bc14d05ac5ed5", "title": "Towards Extracting Faithful and Descriptive Representations of Latent Variable Models", "text": "Methods that use latent representations of data, such as matrix and tensor factorization or deep neural methods, are becoming increasingly popular for applications such as knowledge base population and recommendation systems. These approaches have been shown to be very robust and scalable but, in contrast to more symbolic approaches, lack interpretability. This makes debugging such models difficult, and might result in users not trusting the predictions of such systems. To overcome this issue we propose to extract an interpretable proxy model from a predictive latent variable model. We use a socalled pedagogical method, where we query our predictive model to obtain observations needed for learning a descriptive model. We describe two families of (presumably more) descriptive models, simple logic rules and Bayesian networks, and show how members of these families provide descriptive representations of matrix factorization models. Preliminary experiments on knowledge extraction from text indicate that even though Bayesian networks may be more faithful to a matrix factorization model than the logic rules, the latter are possibly more useful for interpretation and debugging."}
{"_id": "6e446902c08297543242b2af17d626a9f0163362", "title": "A design of an ICN architecture within the framework of SDN", "text": "The core design principle of Information-Centric Networking (ICN) is in the name based routing that enables users to ask for a data object by its name and makes the network deliver it to users from a nearby cache if available. Software-Defined Networking (SDN) lowers the cost and complexity of network management by decoupling architecture from infrastructure, which promises the continuous evolution of the network architecture in a flexible manner. The synergy between ICN supporting efficient data dissemination as the norm and SDN providing flexible management framework enables the combination to be a fully controllable framework for efficient data dissemination. In this paper, we propose a design of an ICN architecture within the framework of SDN."}
{"_id": "b683a4e2c0cc69008c3d0a84ad8a54bfc8e3755f", "title": "The Tri-Wheel: A novel wheel-leg mobility concept", "text": "The Tri-Wheel is a novel wheel-leg locomotion concept inspired by work with first responders. Through its two modes of operation-Driving Mode and Tumbling Mode- this mechanism is able to both drive quickly on smooth surfaces at roughly 1.7 times desired speed and climb objects as tall as 67% of the diameter of the mechanism. The unique gearing configuration that facilitates these dual capabilities is described, and testing quantifies that this nonprecision gearing system is roughly 81% efficient in a worst-case scenario of loading. This work introduces the Tri-Wheel concept and provides preliminary testing to validate its predicted operating characteristics."}
{"_id": "3fee0e4ed3116620983abf2e1147e676485408ae", "title": "A multistage dataflow implementation of a Deep Convolutional Neural Network based on FPGA for high-speed object recognition", "text": "Deep Neural Networks (DNNs) have progressed significantly in recent years. Novel DNN methods allow tasks such as image and speech recognition to be conducted easily and efficiently, compared with previous methods that needed to search for valid feature values or algorithms. However, DNN computations typically consume a significant amount of time and high-performance computing resources. To facilitate high-speed object recognition, this article introduces a Deep Convolutional Neural Network (DCNN) accelerator based on a field-programmable gate array (FPGA). Our hardware takes full advantage of the characteristics of convolutional calculation; this allowed us to implement all DCNN layers, from image input to classification, in a single chip. In particular, the dateflow from input to classification is uninterrupted and paralleled. As a result, our implementation achieved a speed of 409.62 giga-operations per second (GOPS), which is approximately twice as fast as the latest reported result. Furthermore, we used the same architecture to implement a Recurrent Convolutional Neural Network (RCNN), which can, in theory, provide better recognition accuracy."}
{"_id": "d641503d4551dc3a3f9eabefd27045996ed16887", "title": "Hierarchical data-driven vehicle dispatch and ride-sharing", "text": "Modern transportation system suffers from increasing passenger demand, limited vehicle supply and inefficient mobility service. Towards building an intelligent transportation system to address these issues, we propose a hierarchical framework to implement strategies that is capable of allocating vehicles to serve passengers in different locations based on limited supply. In the higher hierarchy, we optimize idle mileage induced by rebalancing vehicles across regions using receding horizon control towards current and predicted future requests. In addition, we design a dispatch strategy that is robust against passenger demand and vehicle mobility pattern uncertainties. In the lower hierarchy, within each region, pick-up and drop-off schedules for real-time requests are obtained for each vehicle by solving mixed-integer linear programs (MILP). The objective of the MILP is to minimize total mileage delay due to ride-sharing while serving as many requests as possible. We illustrate the validity of our framework via numerical simulations on taxi trip data from New York City."}
{"_id": "7b53f5aaa2dde43f27863d9e5eeda74a5a0b4d78", "title": "Brain tumor segmentation from MRI data sets using region growing approach", "text": "Brain segmentation is an important part of medical image processing. Most commonly, it aims at locating various lesions and pathologies inside the human brain. In this paper, a new brain segmentation algorithm is proposed. The method is seeded region growing approach which was developed to segment the area of the brain affected by a tumor. The proposed algorithm was described. Results of testing the developed method on real MRI data set are presented and discussed."}
{"_id": "2f8901f7cf7d2f6c3cfb5297436261ccb8f2cb2d", "title": "Eliminative Materialism and the Propositional Attitudes", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "6d0f7cc09deb8791a97941440cde419fbb0b563f", "title": "Wearable Ultrasonic Sensor Using Double-Layer PVDF Films for Monitoring Tissue Motion", "text": "Monitoring of the physical properties of the tissues provides valuable information for the clinical diagnosis and evaluation. However, one of the challenges of an ultrasonic method for continuous monitoring of a tissue motion using a conventional clinical ultrasonic image system could be motion artifacts due to the weight and size of its handheld ultrasonic probe employed. A wearable ultrasonic sensor, made of a polyvinylidene fluoride (PVDF) polymer piezoelectric film, may be able to reduce the motion artifacts due to its lightweight and flexible properties. However, the PVDF ultrasonic sensor has a relatively weak transmitting acoustic signal strength which causes poor signal-to-noise ratio of the ultrasonic signal acquired in pulse-echo measurements, particularly for the signal reflected from deeper tissue. This paper investigated an improvement of the ultrasonic performance of the WUS using double-layer PVDF films. The sensor was constructed using two 52-\u03bcm thick PVDF films. The developed double-layer WUS showed the 1.7 times greater ultrasonic signal amplitude compared to the WUS made of a single-layer PVDF having the equivalent PVDF film thickness. Thus, the developed double-layer PVDF WUS improved the depth of the ultrasonic penetration into the tissue. The developed WUS successfully demonstrated to monitor the contractions of biceps muscles in an upper arm. In addition, a cardiac tissue motion was clearly observed in the M-mode measurement corresponding with the cardiac cycles obtained from the ECG measurement."}
{"_id": "8fc062ac5cded5cbacb164e2b128f9914eb04727", "title": "Cerebral bases of subliminal speech priming", "text": "While the neural correlates of unconscious perception and subliminal priming have been largely studied for visual stimuli, little is known about their counterparts in the auditory modality. Here we used a subliminal speech priming method in combination with fMRI to investigate which regions of the cerebral network for language can respond in the absence of awareness. Participants performed a lexical decision task on target items preceded by subliminal primes, which were either phonetically identical or different from the target. Moreover, the prime and target could be spoken by the same speaker or by two different speakers. Word repetition reduced the activity in the insula and in the left superior temporal gyrus. Although the priming effect on reaction times was independent of voice manipulation, neural repetition suppression was modulated by speaker change in the superior temporal gyrus while the insula showed voice-independent priming. These results provide neuroimaging evidence of subliminal priming for spoken words and inform us on the first, unconscious stages of speech perception."}
{"_id": "33ec8ca46e0e1db145891c23d4bb86a401578cf2", "title": "Optimised KD-trees for fast image descriptor matching", "text": "In this paper, we look at improving the KD-tree for a specific usage: indexing a large number of SIFT and other types of image descriptors. We have extended priority search, to priority search among multiple trees. By creating multiple KD-trees from the same data set and simultaneously searching among these trees, we have improved the KD-treepsilas search performance significantly.We have also exploited the structure in SIFT descriptors (or structure in any data set) to reduce the time spent in backtracking. By using Principal Component Analysis to align the principal axes of the data with the coordinate axes, we have further increased the KD-treepsilas search performance."}
{"_id": "350a933406fed7f55ac0431c65768d62d8af8b04", "title": "Computer Vision - Algorithms and Applications", "text": "As known, adventure and experience about lesson, entertainment, and knowledge can be gained by only reading a book. Even it is not directly done, you can know more about this life, about the world. We offer you this proper and easy way to gain those all. We offer many book collections from fictions to science at all. One of them is this computer vision algorithms and applications that can be your partner."}
{"_id": "6a656a567097c53a49b1dbeb9e1e77bebf7524ec", "title": "The Case for VM-Based Cloudlets in Mobile Computing", "text": "Mobile computing continuously evolve through the sustained effort of many researchers. It seamlessly augments users' cognitive abilities via compute-intensive capabilities such as speech recognition, natural language processing, etc. By thus empowering mobile users, we could transform many areas of human activity. This article discusses the technical obstacles to these transformations and proposes a new architecture for overcoming them. In this architecture, a mobile user exploits virtual machine (VM) technology to rapidly instantiate customized service software on a nearby cloudlet and then uses that service over a wireless LAN; the mobile device typically functions as a thin client with respect to the service. A cloudlet is a trusted, resource-rich computer or cluster of computers that's well-connected to the Internet and available for use by nearby mobile devices. Our strategy of leveraging transiently customized proximate infrastructure as a mobile device moves with its user through the physical world is called cloudlet-based, resource-rich, mobile computing. Crisp interactive response, which is essential for seamless augmentation of human cognition, is easily achieved in this architecture because of the cloudlet's physical proximity and one-hop network latency. Using a cloudlet also simplifies the challenge of meeting the peak bandwidth demand of multiple users interactively generating and receiving media such as high-definition video and high-resolution images. Rapid customization of infrastructure for diverse applications emerges as a critical requirement, and our results from a proof-of-concept prototype suggest that VM technology can indeed help meet this requirement."}
{"_id": "295d469f3d85441abc7ba60b91bd5bbf7dc62110", "title": "Why do we trust new technology? A study of initial trust formation with organizational information systems", "text": "Recent trust research in the information systems (IS) field has described trust as a primary predictor of technology usage and a fundamental construct for understanding user perceptions of technology. Initial trust formation is particularly relevant in an IS context, as users must overcome perceptions of risk and uncertainty before using a novel technology. With initial trust in a more complex, organizational information system, there are a number of external determinants, trusting bases, that may explain trust formation and provide organizations with the needed levers to form or change individuals\u2019 initial trust in technology. In this study, a research model of initial trust formation is developed and includes trusting bases, trusting beliefs, trusting attitude and subjective norm, and trusting intentions. Eight trusting base factors are assessed including personality, cognitive, calculative, and both technology and organizational factors of the institutional base. The model is empirically tested with 443 subjects in the context of initial trust in a national identity system (NID). The proposed model was supported and the results indicate that subjective norm and the cognitive\u2013reputation, calculative, and organizational situational normality base factors significantly influence initial trusting beliefs and other downstream trust constructs. Factors from some of the more commonly investigated bases, personality and technology institutional, did not significantly affect trusting beliefs. The findings have strategic implications for agencies implementing e-government systems and organizational information systems in general. Published by Elsevier B.V."}
{"_id": "8892a117606fbac35153738bfc4cc66fed2d8697", "title": "A Vector Space for Distributional Semantics for Entailment", "text": "Distributional semantics creates vectorspace representations that capture many forms of semantic similarity, but their relation to semantic entailment has been less clear. We propose a vector-space model which provides a formal foundation for a distributional semantics of entailment. Using a mean-field approximation, we develop approximate inference procedures and entailment operators over vectors of probabilities of features being known (versus unknown). We use this framework to reinterpret an existing distributionalsemantic model (Word2Vec) as approximating an entailment-based model of the distributions of words in contexts, thereby predicting lexical entailment relations. In both unsupervised and semi-supervised experiments on hyponymy detection, we get substantial improvements over previous results."}
{"_id": "68e47fea6d61acbf0b058a963e42228a4e3f07af", "title": "Detecting Duplicate Posts in Programming QA Communities via Latent Semantics and Association Rules", "text": "Programming community-based question-answering (PCQA) websites such as Stack Overflow enable programmers to find working solutions to their questions. Despite detailed posting guidelines, duplicate questions that have been answered are frequently created. To tackle this problem, Stack Overflow provides a mechanism for reputable users to manually mark duplicate questions. This is a laborious effort, and leads to many duplicate questions remain undetected. Existing duplicate detection methodologies from traditional community based question-answering (CQA) websites are difficult to be adopted directly to PCQA, as PCQA posts often contain source code which is linguistically very different from natural languages. In this paper, we propose a methodology designed for the PCQA domain to detect duplicate questions. We model the detection as a classification problem over question pairs. To extract features for question pairs, our methodology leverages continuous word vectors from the deep learning literature, topic model features and phrases pairs that co-occur frequently in duplicate questions mined using machine translation systems. These features capture semantic similarities between questions and produce a strong performance for duplicate detection. Experiments on a range of real-world datasets demonstrate that our method works very well; in some cases over 30% improvement compared to state-of-the-art benchmarks. As a product of one of the proposed features, the association score feature, we have mined a set of associated phrases from duplicate questions on Stack Overflow and open the dataset to the public."}
{"_id": "1adee499c87210c6973717caac5b275988503d1e", "title": "The tear trough and lid/cheek junction: anatomy and implications for surgical correction.", "text": "BACKGROUND\nThe tear trough and the lid/cheek junction become more visible with age. These landmarks are adjacent, forming in some patients a continuous indentation or groove below the infraorbital rim. Numerous, often conflicting procedures have been described to improve the appearance of the region. The purpose of this study was to evaluate the anatomy underlying the tear trough and the lid/cheek junction and to evaluate the procedures designed to correct them.\n\n\nMETHODS\nTwelve fresh cadaver lower lid and midface dissections were performed (six heads). The orbital regions were dissected in layers, and medical photography was performed.\n\n\nRESULTS\nIn the subcutaneous plane, the tear trough and lid/cheek junction overlie the junction of the palpebral and orbital portions of the orbicularis oculi muscle and the cephalic border of the malar fat pad. In the submuscular plane, these landmarks differ. Along the tear trough, the orbicularis muscle is attached directly to the bone. Along the lid/cheek junction, the attachment is ligamentous by means of the orbicularis retaining ligament.\n\n\nCONCLUSIONS\nThe tear trough and lid/cheek junction are primarily explained by superficial (subcutaneous) anatomical features. Atrophy of skin and fat is the most likely explanation for age-related visibility of these landmarks. \"Descent\" of this region with age is unlikely (the structures are fixed to bone). Bulging orbital fat accentuates these landmarks. Interventions must extend significantly below the infraorbital rim. Fat or synthetic filler may be best placed in the intraorbicularis plane (tear trough) and in the suborbicularis plane (lid/cheek junction)."}
{"_id": "b9132f59935e7b9e9ccf90d9cca7ee1202c806f1", "title": "A Broadband Dual Circularly Polarized Patch Antenna With Wide Beamwidth", "text": "A broadband dual circularly polarized patch antenna with wide beamwidth is presented. The patch is excited by four cross slots via a microstrip line with multiple matching segments underneath the ground plane. The four cross slots and the multiple matching segments are optimized simultaneously to obtain the best performance. Measurements show that the antenna has 10-dB return-loss bandwidth of 24%, 10-dB isolation bandwidth of 19%, 3-dB axial-ratio bandwidth of 16%, and beamwidth of 110 \u00b0. Moreover, the single feed makes it a good candidate for array design."}
{"_id": "2a248b5434dfc37aad6db25750b99a4384185916", "title": "Age Prediction in Blogs: A Study of Style, Content, and Online Behavior in Pre- and Post-Social Media Generations", "text": "Research Assistant May 2008 present Currently working on a project to influence power and rifts in unstructured text project in classifying the age of authors as well as identifying opinion and persuasion in blogs (funded by IARPA SCIL) Previously worked on a project maintaining a pipeline for Question Answering System and creating system tools using Natural Language Processing techniques (funded by DARPA GALE). Computer administrator of the NLP group."}
{"_id": "3dadb121e0b3627792f94aa0b0f50387a4eff6c1", "title": "Twitter sentiment analysis using deep learning methods", "text": "The social media has Immense and popularity among all the services today. Data from SNS (Social Network Service) can be used for a lot of objectives such as prediction or sentiment analysis. Twitter is a SNS that has a huge data with user posting, with this significant amount of data, it has the potential of research related to text mining and could be subjected to sentiment analysis. But handling such a huge amount of unstructured data is a difficult task, machine learning is needed for handling such huge of data. Deep learning is of the machine learning method that use the deep feed forward neural network with many hidden layers in the term of neural network with the result of the experiment about 75%."}
{"_id": "3640542e44dc5fc442f1a47c690bbc4327502d4f", "title": "Automated vehicle system architecture with performance assessment", "text": "This paper proposes a reference architecture to increase reliability and robustness of an automated vehicle. The architecture exploits the benefits arising from the in-terdependencies of the system and provides self awareness. Performance Assessment units attached to subsystems quantify the reliability of their operation and return performance values. The Environment Condition Assessment, which is another important novelty of the architecture, informs augmented sensors on current sensing conditions. Utilizing environment conditions and performance values for subsequent centralized integrity checks allow algorithms to adapt to current driving conditions and thereby to increase their robustness. We demonstrate the benefit of the approach with the example of false positive object detection and tracking, where the detection of a ghost object is resolved in centralized performance assessment using a Bayesian network."}
{"_id": "e889e2dd1647fdfc28edb9abe08a863b704f08be", "title": "Using Intermediate Representations to Solve Math Word Problems", "text": "To solve math word problems, previous statistical approaches attempt at learning a direct mapping from a problem description to its corresponding equation system. However, such mappings do not include the information of a few higher-order operations that cannot be explicitly represented in equations but are required to solve the problem. The gap between natural language and equations makes it difficult for a learned model to generalize from limited data. In this work we present an intermediate meaning representation scheme that tries to reduce this gap. We use a sequence-to-sequence model with a novel attention regularization term to generate the intermediate forms, then execute them to obtain the final answers. Since the intermediate forms are latent, we propose an iterative labeling framework for learning by leveraging supervision signals from both equations and answers. Our experiments show using intermediate forms outperforms directly predicting equations."}
{"_id": "53c96fead0dc9307809c57e428d60665483ada9a", "title": "Graph-based mining of multiple object usage patterns", "text": "The interplay of multiple objects in object-oriented programming often follows specific protocols, for example certain orders of method calls and/or control structure constraints among them that are parts of the intended object usages. Unfortunately, the information is not always documented. That creates long learning curve, and importantly, leads to subtle problems due to the misuse of objects.\n In this paper, we propose GrouMiner, a novel graph-based approach for mining the usage patterns of one or multiple objects. GrouMiner approach includes a graph-based representation for multiple object usages, a pattern mining algorithm, and an anomaly detection technique that are efficient, accurate, and resilient to software changes. Our experiments on several real-world programs show that our prototype is able to find useful usage patterns with multiple objects and control structures, and to translate them into user-friendly code skeletons to assist developers in programming. It could also detect the usage anomalies that caused yet undiscovered defects and code smells in those programs."}
{"_id": "315c4d911466ba7dabba771fff5744074f9e5eb9", "title": "Performance improvement of synchronous reluctance motors: A review", "text": "Synchronous Reluctance motors can replace the more commonly used, induction, switched reluctance and permanent magnet motors. They have been shown to outperform similarly dimensioned induction motors in efficiency, torque and power density. They are simple and cheap to construct and require simple control. Such a motor would be ideal for wide spread use in Africa, which currently faces a power deficit, considering that motors account for about 43% of global electric power consumption. The use of synchronous reluctance motors in Electric Vehicles may contribute to the production of more efficient, heavier load and longer range vehicles. This would accelerate adoption of Electric Vehicles and it would help wean the world off fossil fuels. The synchronous reluctance motor has disadvantages of high torque ripple and poor power factor. A radical redesign of the rotor geometry and simultaneous optimisation of the rotor and the stator may significantly improve its performance."}
{"_id": "dffe3ef1fb5671c3d7d7e0042215503eb96442b9", "title": "Vectorized total variation defined by weighted L infinity norm for utilizing inter channel dependency", "text": "Vectorized total variation (VTV) is very successful convex regularizer to solve various color image recovery problems. Despite the fact that color channels of natural color images are closely related, existing variants of VTV can not utilize this prior efficiently. We proposed L\u221e-VTV as a convex regularizer can penalize the violation of such inter-channel dependency by employing weighted L\u221e (L-infty) norm. We also introduce an effective algorithm for an image denoising problem using L\u221e-VTV. Experimental results shows that our proposed method can outperform the conventional methods."}
{"_id": "3fbee0927b8ee26eccd1fc7226d0b2d1dff8a6f8", "title": "Crafting Integrated Multichannel Retailing Strategies", "text": "Multichannel retailing is the set of activities involved in selling merchandise or services to consumers through more than one channel. Multichannel retailers dominate today's retail landscape. While there are many benefits of operating multiple channels, these retailers also face many challenges. In this article, we discuss the key issues concerning multichannel retailing, including the motivations and constraints of going multichannel, the challenges of crafting multichannel retailing strategies and opportunities for creating synergies across channels, key retail mix decisions facing multichannel retailers, and the dynamics of multichannel retailing. We synthesize current knowledge drawn from the academic literature and industry practice, and discuss potential directions for future research. \u00a9 2010 Direct Marketing Educational Foundation, Inc. Published by Elsevier Inc. All rights reserved."}
{"_id": "a96b8e324612d3cef5366db8690db0809d02996d", "title": "Wire Speed Name Lookup: A GPU-based Approach", "text": "This paper studies the name lookup issue with longest prefix matching, which is widely used in URL filtering, content routing/switching, etc. Recently Content-Centric Networking (CCN) has been proposed as a clean slate future Internet architecture to naturally fit the contentcentric property of today\u2019s Internet usage: instead of addressing end hosts, the Internet should operate based on the identity/name of contents. A core challenge and enabling technique in implementing CCN is exactly to perform name lookup for packet forwarding at wire speed. In CCN, routing tables can be orders of magnitude larger than current IP routing tables, and content names are much longer and more complex than IP addresses. In pursuit of conquering this challenge, we conduct an implementation-based case study on wire speed name lookup, exploiting GPU\u2019s massive parallel processing power. Extensive experiments demonstrate that our GPU-based name lookup engine can achieve 63.52M searches per second lookup throughput on large-scale name tables containing millions of name entries with a strict constraint of no more than the telecommunication level 100\u03bcs per-packet lookup latency. Our solution can be applied to contexts beyond CCN, such as search engines, content filtering, and intrusion prevention/detection. c \u20ddProf. Qunfeng Dong (qunfeng.dong@gmail.com) and Prof. Bin Liu (lmyujie@gmail.com), placed in alphabetic order, are the correspondence authors of the paper. Yi Wang and Yuan Zu, placed in alphabetic order, are the lead student authors of Tsinghua University and University of Science and Technology of China, respectively. This paper is supported by 863 project (2013AA013502), NSFC (61073171, 61073184), Tsinghua University Initiative Scientific Research Program(20121080068), the Specialized Research Fund for the Doctoral Program of Higher Education of China(20100002110051), the Ministry of Education (MOE) Program for New Century Excellent Talents (NCET) in University, the Science and Technological Fund of Anhui Province for Outstanding Youth (10040606Y05), by the Fundamental Research Funds for the Central Universities (WK0110000007, WK0110000019), and Jiangsu Provincial Science Foundation (BK2011360)."}
{"_id": "37df54a5bed234ab4c3f462d1d726eec797bd8b9", "title": "Enable IoT interoperability in ambient assisted living: Active and healthy aging scenarios", "text": "Ambient Assisted Living (AAL) represents one of the most promising Internet of Things (IoT) applications due to its influence on the quality of life and health of the elderly people. However, the interoperability is one of the major issues that needs to be addressed to promote the adoption of AAL solutions in real environments. In this paper, an IoT prototype system for AAL called AAL-IoTSys is proposed, which includes a Smart IoT Gateway as a key component to enable the interoperability from several heterogeneous devices over different communication protocols and technologies (e.g., WiFi, ZigBee, Bluetooth, IEEE 802.15.4, 6LowPAN). This system aims to monitor environmental (temperature, humidity, C02), health (heart rate) and location (GPS) parameters in the elderly people's houses in order to facilitate events early detection and automatic sending notifications to elderly care stakeholders in active and healthy aging scenarios (included health professionals, caregivers, family members and emergency services)."}
{"_id": "66478396e78d9f1cb2168120b83cfb7ba341b577", "title": "Intelligent Tutoring Systems with Multiple Representations and Self-Explanation Prompts Support Learning of Fractions", "text": "Although a solid understanding of fractions is foundational in mathematics, the concept of fractions remains a challenging one. Previous research suggests that multiple graphical representations (MGRs) may promote learning of fractions. Specifically, we hypothesized that providing students with MGRs of fractions, in addition to the conventional symbolic notation, leads to better learning outcomes as compared to instruction incorporating only one graphical representation. We anticipated, however, that MGRs would make the students\u2019 task more challenging, since they must link the representations and distill from them a common concept or principle. Therefore, we hypothesized further that selfexplanation prompts would help students benefit from working with MGRs. To investigate these hypotheses, we conducted a classroom study in which 112 6grade students used intelligent tutors for fraction conversion and fraction addition. The results of the study show that students learned more with MGRs of fractions than with a single representation, but only when prompted to self-explain how the graphics relate to the symbolic fraction representations."}
{"_id": "926b99cbea04fb213c9984b10acf2235a3949ebb", "title": "Boosted key-frame selection and correlated pyramidal motion-feature representation for human action recognition", "text": "In this paper we propose a novel method for human action recognition based on boosted key-frame selection and correlated pyramidal motion feature representations. Instead of using an unsupervised method to detect interest points, a Pyramidal Motion Feature (PMF), which combines optical flow with a biologically inspired feature, is extracted from each frame of a video sequence. The AdaBoost learning algorithm is then applied to select the most discriminative frames from a large feature pool. In this way, we obtain the top-ranked boosted frames of each video sequence as the key frames which carry the most representative motion information. Furthermore, we utilise the correlogram which focuses not only on probabilistic distributions within one frame but also on the temporal relationships of the action sequence. In the classification phase, a Support-Vector Machine (SVM) is adopted as the final classifier for human action recognition. To demonstrate generalizability, our method has been systematically tested on a variety of datasets and shown to be more effective and accurate for action recognition compared to the previous work. We obtain overall accuracies of: 95.5%, 93.7%, and 36.5% with our proposed method on the KTH, the multiview IXMAS and the challenging HMDB51 datasets, respectively. & 2012 Elsevier Ltd. All rights reserved."}
{"_id": "7706840d72e3609df8a3d6d14d3f4edfc75e74b0", "title": "Consumers \u2019 attitude towards online shopping : Factors influencing employees of crazy domains to shop online", "text": "E-commerce offers many online marketing opportunities to companies worldwide and along with high rapid growth of online shopping; it has impressed many retailers to sell products and services through online channel to expand their market. Online shopping or marketing is the use of technology (i.e., computer, internet) for better marketing performance. And retailers are mixing strategies to meet the demand of online shoppers; they are busy in studying consumer in the field of online shopping, to see the consumer attitudes towards online shopping and specifically studying the factors influencing consumers to shop online. In this study, the multiple regression analysis was employed to measure the relationship between 9 independent variables and receptivity to online shopping. The score of Beta weight presented that all 9 independent variables had positive statistical significant effect to Internet users to accept online shopping. Among the 9 factors, the strongest influencers from highest to lowest were Price, Refund, Convenience, Auction websites, Security, Brand, Search engines, Promotion and Online shopping malls. According to independent t-test analysis for gender, there was significant different means between males and females for online shopping malls and Auctions websites factors to receptivity on online shopping. The means of female significant higher than male for these two factors. This study might contribute not only to a better understanding on what and how strongly the factors are involved in online consumer purchasing decisions but also this study provides e-retailer\u2019s standpoint such the effectively manage and recommendations. However, eretailers should keep in mind that consumer behavior might change in time to time especially in online market so the e-retailer should investigate the consumer behavior in time to time and adapt the products and services to serve as the customer requirements."}
{"_id": "368e27c54bc4f5c85ac9834d3d291e1ce9b68b17", "title": "Big Data Mobile Services for New York City Taxi Riders and Drivers", "text": "In recent years, taxis in New York City have been equipped with GPS sensors to record the start and end locations of every trip, in conjunction with fare collection equipment recording trip time, fare, and distance. This paper analyzes this vast dataset with big data technologies. We use MapReduce and Hive in order to understand traffic and travel patterns, discover relationships among the data, and make predictions on the taxi network. We also propose a service model combining the results of our data analysis with smartphone applications to offer users various taxi-related services based on results of our big data analysis."}
{"_id": "52e349ad74c35b879dfbc9a1d3a5e0e54b19c072", "title": "Monogenic polyarteritis: the lesson of ADA2 deficiency", "text": "The deficiency of Adenosine Deaminase 2 (DADA2) is a new autoinflammatory disease characterised by an early onset vasculopathy with livedoid skin rash associated with\u00a0systemic manifestations, CNS involvement and mild immunodeficiency.This condition is secondary to autosomal recessive mutations of CECR1 (Cat Eye Syndrome Chromosome Region 1) gene, mapped to chromosome 22q11.1, that encodes for the enzymatic protein adenosine deaminase 2 (ADA2). By now 19 different mutations in CECR1 gene have been detected.The pathogenetic mechanism of DADA2 is still unclear. ADA2 in a secreted protein mainly expressed by cells of the myeloid lineage; its enzymatic activity is higher in conditions of hypoxia, inflammation and oncogenesis. Moreover ADA2 is able to induce macrophages proliferation and differentiation; it's deficiency is in fact associated with a reduction of anti-inflammatory macrophages (M2). The deficiency of ADA2 is also associated with an up-regulation of neutrophils-expressed genes and an increased secretion of pro-inflammatory cytokines. The mild immunodeficiency detected in many DADA2 patients suggests a role of this protein in the adaptive immune response; an increased mortality of B cells and a reduction in the number of memory B cells, terminally differentiated B cells and plasmacells has been described in many patients. The lack of the protein is associated with endothelium damage; however the function of this protein in the endothelial homeostasis is still unknown.From the clinical point of view, this disease is characterized by a wide spectrum of severity. Chronic or recurrent systemic inflammation with fever, elevation of acute phase reactants and skin manifestations (mainly represented by livedo reticularis) is the typical clinical picture. While in some patients the disease is mild and skin-limited, others present a severe, even lethal, disease with multi-organ involvement; the CNS involvement is rather common with ischemic or hemorrhagic strokes. In many patients not only the clinical picture but also the histopathologic features are undistinguishable from those of systemic polyarteritis nodosa (PAN). Of note, patients with an unusual phenotype, mainly dominated by clinical manifestations suggestive for an immune-disrective condition, have been described.Due to its rarity, the response to treatment of DADA2 is still anecdotal. While steroids can control the disease's manifestations at high dosage, none of the common immunosuppressive drugs turned out to be effective. Biologic drugs have been used only in few patients, without a clear effectiveness; anti-TNF drugs are those associated to a better clinical response. Hematopoietic stem cells transplantation was effective in patients with a severe phenotype."}
{"_id": "342252bf0e94a5d4753a54785d191a30caa45298", "title": "Understanding and Modeling of WiFi Signal Based Human Activity Recognition", "text": "Some pioneer WiFi signal based human activity recognition systems have been proposed. Their key limitation lies in the lack of a model that can quantitatively correlate CSI dynamics and human activities. In this paper, we propose CARM, a CSI based human Activity Recognition and Monitoring system. CARM has two theoretical underpinnings: a CSI-speed model, which quantifies the correlation between CSI value dynamics and human movement speeds, and a CSI-activity model, which quantifies the correlation between the movement speeds of different human body parts and a specific human activity. By these two models, we quantitatively build the correlation between CSI value dynamics and a specific human activity. CARM uses this correlation as the profiling mechanism and recognizes a given activity by matching it to the best-fit profile. We implemented CARM using commercial WiFi devices and evaluated it in several different environments. Our results show that CARM achieves an average accuracy of greater than 96%."}
{"_id": "02880c9ac973046bf8d2fc802fb7ee4fc60c193b", "title": "Towards Answering Opinion Questions: Separating Facts from Opinions and Identifying the Polarity of Opinion Sentences", "text": "Opinion question answering is a challenging task for natural language processing. In this paper, we discuss a necessary component for an opinion question answering system: separating opinions from fact, at both the document and sentence level. We present a Bayesian classifier for discriminating between documents with a preponderance of opinions such as editorials from regular news stories, and describe three unsupervised, statistical techniques for the significantly harder task of detecting opinions at the sentence level. We also present a first model for classifying opinion sentences as positive or negative in terms of the main perspective being expressed in the opinion. Results from a large collection of news stories and a human evaluation of 400 sentences are reported, indicating that we achieve very high performance in document classification (upwards of 97% precision and recall), and respectable performance in detecting opinions and classifying them at the sentence level as positive, negative, or neutral (up to 91% accuracy)."}
{"_id": "c4a974906a9f6c7380edc9e2281931bde78828b1", "title": "Development and Use of a Gold-Standard Data Set for Subjectivity Classifications", "text": "This paper presents a case study of analyzing and improving intercoder reliability in discourse tagging using statistical techniques. Bias-corrected tags are formulated and successfully used to guide a revision of the coding manual and develop an automatic classifier. 1 Introduction This paper presents a case study of analyzing and improving intercoder reliability in discourse tagging using the statistical techniques presented in (Bruce and Wiebe, 1998; Bruce and Wiebe, to appear). Our approach is data driven: we refine our understanding and presentation of the classification scheme guided by the results of the intercoder analysis. We also present the results of a probabilistic classifier developed on the resulting annotations. Much research in discourse processing has focused on task-oriented and instructional dialogs. The task addressed here comes to the fore in other genres, especially news reporting. The task is to distinguish sentences used to objectively present factual information from sentences used to present opinions and evaluations. There are many applications for which this distinction promises to be important, including text categorization and summarization. This research takes a large step toward developing a reliably annotated gold standard to support experimenting with such applications. This research is also a case study of analyzing and improving manual tagging that is applicable to any tagging task. We perform a statistical analysis that provides information that complements the information provided by"}
{"_id": "fc9b02a1c48f7f0578cc45cdd8a00ae91854aba4", "title": "New Avenues in Opinion Mining and Sentiment Analysis", "text": "The Web holds valuable, vast, and unstructured information about public opinion. Here, the history, current use, and future of opinion mining and sentiment analysis are discussed, along with relevant techniques and tools."}
{"_id": "c7d036dbaf4e8ff0ea85256597c6015015ece3ed", "title": "Inferring 3D layout of building facades from a single image", "text": "In this paper, we propose a novel algorithm that infers the 3D layout of building facades from a single 2D image of an urban scene. Different from existing methods that only yield coarse orientation labels or qualitative block approximations, our algorithm quantitatively reconstructs building facades in 3D space using a set of planes mutually related by 3D geometric constraints. Each plane is characterized by a continuous orientation vector and a depth distribution. An optimal solution is reached through inter-planar interactions. Due to the quantitative and plane-based nature of our geometric reasoning, our model is more expressive and informative than existing approaches. Experiments show that our method compares competitively with the state of the art on both 2D and 3D measures, while yielding a richer interpretation of the 3D scene behind the image."}
{"_id": "ca907b01355ae3fbb528c19283160a4f133f5d3e", "title": "Suicide rates among physicians: a quantitative and gender assessment (meta-analysis).", "text": "OBJECTIVE\nPhysicians' suicide rates have repeatedly been reported to be higher than those of the general population or other academics, but uncertainty remains. In this study, physicians' suicide rate ratios were estimated with a meta-analysis and systematic quality assessment of recent studies.\n\n\nMETHOD\nStudies of physicians' suicide rates were located in MEDLINE, PsycINFO, AARP Ageline, and the EBM Reviews: Cochrane Database of Systematic Reviews with the terms \"physicians,\" \"doctors,\" \"suicide,\" and \"mortality.\" Studies were included if they were published in or after 1960 and gave estimates of age-standardized suicide rates of physicians and their reference population or reported extractable data on physicians' suicide; 25 studies met the criteria. Reviewers extracted data and scored each study for quality. The studies were tested for heterogeneity and publication bias and were stratified by publication year, follow-up, and study quality. Effect sizes were pooled by using fixed-effects (women) and random-effects (men) models.\n\n\nRESULTS\nThe aggregate suicide rate ratio for male physicians, compared to the general population, was 1.41, with a 95% confidence interval (CI) of 1.21-1.65. For female physicians the ratio was 2.27 (95% CI=1.90-2.73). Visual inspection of funnel plots from tests of publication bias revealed randomness for men but some indication of bias for women, with a relative, nonsignificant lack of studies in the lower right quadrant.\n\n\nCONCLUSIONS\nStudies on physicians' suicide collectively show modestly (men) to highly (women) elevated suicide rate ratios. Larger studies should help clarify whether female physicians' suicide rate is truly elevated or can be explained by publication bias."}
{"_id": "ec07426cc59a1301252913d0aeb96c88cb390f12", "title": "Designs of fully on-chip antennas in (Bi)CMOS technology", "text": "The paper presents several feasible millimeter wave on-chip antenna designs suitable to be fabricated in CMOS technology without any additional process. The results are listed and compared with state-of-the-art designs in the literature. The difficulties in designing high efficiency antenna on CMOS chip are discussed."}
{"_id": "47678b75d1e5cbf2d7daec098c4d99796d78b778", "title": "LOMAC: Low Water-Mark Integrity Protection for COTS Environments", "text": "We hypothesizethat a form of kernel-residentaccesscontrol-based integrity protection can gain widespread acceptancein Commer cial Off-The-Shelf(COTS)environmentsprovidedthat it couplessomeusefulprotectionwith a high degreeof compatibilitywith existing software, configurations,andpractices.To testthis hypothesis, wehave developeda highly-compatiblefreeopen-sour ceprototype calledLOMAC, andreleasedit on theInternet.LOMAC is a dynamicallyloadableextensionfor COTSLinux kernels thatprovidesintegrity protectionbasedonLowWater-Mark accesscontrol. We presenta classificationof existing accesscontrol modelswith regard tocompatibility, concluding thatmodelssimilar to LowWater-Mark areespeciallywellsuitedto high-compatibilitysolutions.Wealsodescribeour practical strategiesfor dealingwith thepathological cases in the Low Water-Mark model\u2019s behavior , which includea smallextensionof themodel,andanunusualapplicationof"}
{"_id": "58e23f277caca22b36d6db50da3b19ca638257d7", "title": "Experience in System Design for Human-Robot Teaming in Urban Search and Rescue", "text": "The paper describes experience with applying a user-centric design methodology in developing systems for human-robot teaming in Urban Search & Rescue. A human-robot team consists of several semi-autonomous robots (rovers/UGVs, microcopter/UAVs), several humans at an off-site command post (mission commander, UGV operators) and one on-site human (UAV operator). This system has been developed in close cooperation with several rescue organizations, and has been deployed in a real-life tunnel accident use case. The human-robot team jointly explores an accident site, communicating using a multi-modal team interface, and spoken dialogue. The paper describes the development of this complex socio-technical system per se, as well as recent experience in evaluating the performance of this system."}
{"_id": "5d14415881c1f72be25a38d509e994826e5058fd", "title": "A method for Automatic Text Summarization using Consensus of Multiple Similarity Measures and Ranking Techniques", "text": "In the era of information overload, text summarization can be defined as the process of extracting useful information from a large space of available content using traditional filtering methods. One of the major challenges in the domain of extraction based summarization is that a single statistical measure is not sufficient to produce efficient summaries which would be close to human-made \u2018gold standard\u2019, since each measure suffers from individual weaknesses. We deal with this problem by proposing a text summarization model that combines various statistical measures so that the pitfalls of an individual technique could be compensated by the strengths of others. Experimental results are presented to demonstrate the effectiveness of the proposed method using the TAC 2011 Multiling pilot dataset for English language and ROUGE summary evaluation tool."}
{"_id": "143b946922bf1ab3ec4e1f2d694ae89b723a463b", "title": "TopicShop: enhanced support for evaluating and organizing collections of Web sites", "text": "TopicShop is an interface that helps users evaluate and organize collections of web sites. The main interface components are site profiles, which contain information that helps users select high-quality items, and a work area, which offers thumbnail images, annotation, and lightweight grouping techniques to help users organize selected sites. The two components are linked to allow task integration. Previous work [2] demonstrated that subjects who used TopicShop were able to select significantly more highquality sites, in less time and with less effort. We report here on studies that confirm and extend these results. We also show that TopicShop subjects spent just half the time organizing sites, yet still created more groups and more annotations, and agreed more in how they grouped sites. Finally, TopicShop subjects tightly integrated the tasks of evaluating and organizing sites. INTRODUCTION In previous work [2], we motivated an important task for web users \u2013 gathering, evaluating, and organizing information resources for a given topic. Current web tools do not support this task well; specifically, they do not make it easy to evaluate collections of web sites to select the best ones or to organize sites for future reuse and sharing. Users have to browse and view sites one after another until they are satisfied they have a good set, or, more likely, they get tired and give up. Browsing a web site is an expensive operation, both in time and cognitive effort. And bookmarks, the most common form of keeping track of web sites, are a fairly primitive organizational technique. We designed and implemented the TopicShop system to provide comprehensive, integrated support for this task. TopicShop aids users in finding a set of relevant sites, in narrowing down the set into a smaller set of high quality sites, and in organizing sites for future use. TopicShop has evolved through a number of design iterations, driven by extensive user testing. We report here on lessons we learned from a pilot study, how these lessons re-shaped our understanding of the task and led to a significant re-design, and the results of a second, larger user study. RELATED WORK Our research program investigates the major information problems faced by users of the World Wide Web: \u2022 finding collections of items relevant to their interests; \u2022 identifying high-quality items within a collection; \u2022 finding items that contain a certain category of information, e.g., episode guides (for a television show) or song lyrics (for a musician); \u2022 organizing personalized subsets of items. We have addressed these problems by developing algorithms, implementing them in web crawling and analysis tools, creating interfaces to support users in exploring, comprehending, and organizing collections of web documents, and performing user studies [2, 3, 4, 15]. The work reported here focuses on understanding the user tasks of evaluating and organizing collections of web sites, as ill uminated by the design, evaluation, and re-design of interfaces to support these tasks. Other researchers have investigated these issues. Much recent work has been devoted to algorithms for adding meta-information to collections of web sites to enhance user comprehension, typically by analyzing the structure of links between sites. 
This approach builds on the intuition that when the author of one site chooses to link to another, this often implies both that the sites have similar content and that the author is endorsing the content of the linked-to site. Pirolli , Pitkow and colleagues [12, 13] experimented with link-based algorithms for clustering and categorizing web pages. Kleinberg\u2019s HITS algorithm [8] defines authoritative and hub pages within a hypertext collection. Authorities and hubs are mutually dependent: a good authority is a page that is linked to by many hubs, and a good hub is one that links to many authorities. After evaluating items and selecting the interesting ones, users must organize the items for future use. Card, Robertson, and Mackinlay [5] introduced the concept of information workspaces to refer to environments in which information items can be stored and manipulated. A departure point for most such systems is the file manager popularized by the Apple Macintosh and then in Microsoft Windows. Such systems typically include a list view, which shows various properties of items, and an icon view, which lets users organize icons representing the items in a 2D space. Mander, Salomon, and Wong [10] enhanced the basic metaphor with the addition of \u201cpiles\u201d . Users could create and manipulate piles of items. Interesting interaction techniques for displaying, browsing, and searching piles were designed and tested. Bookmarks are the most popular way to create personal information workspaces of web resources. Bookmarks consist of lists of URLs; typically the title of the web page is used as the label for the URL. Users may organize their bookmarks into a hierarchical category structure. Abrams, Baecker, and Chignell [1] carried out an extensive study of how several hundred web users used bookmarks. They observed a number of strategies for organizing bookmarks, including a flat ordered list, a single level of folders, and hierarchical folders. They also made four design recommendations to help users manage their bookmarks more effectively. First, bookmarks must be easy to organize, e.g., via automatic sorting techniques. Second, visualization techniques are necessary to provide comprehensive overviews of large sets of bookmarks. Third, rich representations of sites are required; many users noted that site titles are not accurate descriptors of site content. Finally, tools for managing bookmarks must be well i ntegrated with web browsers. Many researchers have created experimental information workspace interfaces, often designed expressly for web documents. Card, Robertson, and York [5] describe the WebBook, which uses a book metaphor to group a collection of related web pages for viewing and interaction, and the WebForager, an interface that lets users view and manage multiple WebBooks. Mackinlay, Rao, and Card [9] developed a novel user interface for accessing articles from a citation database. The central UI object is a \u201cButterfly\u201d , which represents an article, its references, and its citers. The interface makes it easy for users to browse among related articles, group articles, and generate queries to retrieve articles that stand in a particular relationship to the current article. The Data Mountain of Robertson et al [14] represents documents as thumbnail images in a 3D virtual space. Users can move and group the images freely, with various interesting visual and audio cues used to help users arrange the documents. 
In a study comparing the use of Data Mountain to Internet Explorer Favorites, Data Mountain users retrieved items more quickly, with fewer incorrect or failed retrievals. Our research shares goals with much of the previous work. We focus on designing interfaces that make automatically extracted information about web sites readily accessible to users. We show that this increases users\u2019 abilit y to select high-quality sites. Through ongoing user studies and redesign, we developed easy-to-use annotation and grouping techniques that let users organize items better and more quickly. Finally, we learned how users interleave work on various tasks and re-designed our interface to support such task interleaving. TOPICSHOPEXPLORER, VERSION 1 The TopicShop Explorer is implemented in C++ and runs on Microsoft Windows platforms. Version 1 was based directly on the Macintosh file manager / MS Windows Explorer metaphor (see [3] for detail of TopicShop Version 1 and the pilot study). Accordingly, users could view collections in either a details (Figure 1) or icons (Figure 2) view. The details view showed site profile information (see below) to help users evaluate sites, and the icons view let users organize sites spatially. Figure 1: TopicShop Explorer (version 1), details view. Each web site is represented by a small thumbnail image, the site title, and profile data including the links to/from other sites in the collection, and the number of pages, images, and audio files on the site. Users can sort by a property by clicking on the appropriate column. Figure 2: TopicShop Explorer (version 1), icons view. Each site is represented by a large thumbnail image and the site title. Users can organize sites by arranging them spatially, a technique especially useful in the early stages of exploration. The collections of sites and site profile data used in TopicShop are obtained by running a webcrawler/analyzer. The crawler takes a user-specified set of seed sites as input, and follows links from the seeds to construct a graph of the seed sites, pages contained on these sites, and, optionally, sites determined to be related based on their textual and hyperlink connections to the seeds. Site profiles are built by fetching a large number of pages from each site. Profiles contain content data, including the page title, an estimate of the page count, and a roster of audio files, movie files, and images; they also record links between sites in the collection. In addition, a thumbnail image of each site\u2019s root page is constructed. The first goal of TopicShop is to help users evaluate and identify high quality sites. We sought to achieve this goal by providing site profile data and interface mechanisms for viewing and exploring the data. Making this data visible means that users no longer had to select sites to browse based solely on titles and (sometimes) brief textual annotations. (A chief complaint of subjects in the Abrams et al [1] study was that titles were inadequate descriptors of site content \u2014 and that was for sites that users already had browsed and decided to bookmark.) Instead, users may visit only sites that have been endorsed (linked to) by many other sites or sites that are rich in"}
{"_id": "3a3622d9e342bc89fdb6fef27a2b4d0242a287e6", "title": "Improving Document Clustering for Short Texts by Long Documents via a Dirichlet Multinomial Allocation Model", "text": "Document clustering for short texts has received considerable interest. Traditional document clustering approaches are designed for long documents and perform poorly for short texts due to the their sparseness representation. To better understand short texts, we observe that words that appear in long documents can enrich short text context and improve the clustering performance for short texts. In this paper, we propose a novel model, namely DDMAfs, which 1) improves the clustering performance of short texts by sharing structural knowledge of long documents to short texts; 2) automatically identifies the number of clusters; 3) separates discriminative words from irrelevant words for long documents to obtain high quality structural knowledge. Our experiments indicate that the DDMAfs model performs well on the synthetic dataset and real datasets. Comparisons between the DDMAfs model and state-of-the-art short text clustering approaches show that the DDMAfs model is effective."}
{"_id": "4bfe0ff109303793d40180a9a4e69b3da784a5eb", "title": "CBSE as Novel Approach for IDS", "text": "This paper gives a new approach called Component based software engineering for designing an intrusion detection system. This paper also tells that before this component based software approach, other traditional approaches were used for designing intrusion detection system. But due to various drawbacks we proposed this component based approach for intrusion detection system. A comparison table is also illustrated in this paper on the basis of class diagram designed using UML language in a simulated environment. This comparison table will illustrate main difference between the traditional and component based approach and also shows that which parameters make this approach a better approach. Keywords\u2014Component based software engineering, intrusion detection system, Classes etc."}
{"_id": "47a9a63139ed8c9a3a1b59db446cf533fb6830a0", "title": "Overview on electrostatic discharge protection designs for mixed-voltage I/O interfaces: design concept and circuit implementations", "text": "Electrostatic discharge (ESD) protection design for mixed-voltage I/O interfaces has been one of the key challenges of system-on-a-chip (SOC) implementation in nano-scale CMOS processes. The on-chip ESD protection circuit for mixed-voltage I/O interfaces should meet the gate-oxide reliability constraints and prevent the undesired leakage current paths. This paper presents an overview on the design concept and circuit implementations of the ESD protection designs for mixed-voltage I/O interfaces without using the additional thick gate-oxide process. The ESD design constraints in mixed-voltage I/O interfaces, the classification and analysis of ESD protection designs for mixed-voltage I/O interfaces, and the designs of high-voltage-tolerant power-rail ESD clamp circuit are presented and discussed."}
{"_id": "b671518722be44e6c2be535c7f22345211c2706a", "title": "Enterprise-wide Optimization : A New Frontier in Process Systems Engineering", "text": "Enterprise-wide optimization (EWO) is a new emerging area that lies at the interface of chemical engineering and operations research, and has become a major goal in the process industries due to the increasing pressures for remaining competitive in the global marketplace. EWO involves optimizing the operations of supply, manufacturing and distribution activities of a company to reduce costs and inventories. A major focus in EWO is the optimal operation of manufacturing facilities, which often requires the use of nonlinear process models. Major operational items include planning, scheduling, realtime optimization and inventory control. One of the key features of EWO is integration of the information and the decision-making among the various functions that comprise the supply chain of the company. This can be achieved with modern IT tools, which together with the internet, have promoted e-commerce. However, as will be discussed, to fully realize the potential of transactional IT tools, the development of sophisticated deterministic and stochastic linear/nonlinear optimization models and algorithms (analytical IT tools) is needed to explore and analyze alternatives of the supply chain to yield overall optimum economic performance, as well as high levels of customer satisfaction. An additional challenge is the integrated and coordinated decision-making across the various functions in a company (purchasing, manufacturing, distribution, sales), across various geographically distributed organizations (vendors, facilities and markets), and across various levels of decision-making (strategic, tactical and operational). \u00a9 2005 American Institute of Chemical Engineers AIChE J, 51: 1846\u20131857, 2005"}
{"_id": "3ccbc06b58959b84cdc35812cdf2e0b70f7a1ce9", "title": "Forecasting stock market movement direction with support vector machine", "text": "Support vector machine (SVM) is a very speci1c type of learning algorithms characterized by the capacity control of the decision function, the use of the kernel functions and the sparsity of the solution. In this paper, we investigate the predictability of 1nancial movement direction with SVM by forecasting the weekly movement direction of NIKKEI 225 index. To evaluate the forecasting ability of SVM, we compare its performance with those of Linear Discriminant Analysis, Quadratic Discriminant Analysis and Elman Backpropagation Neural Networks. The experiment results show that SVM outperforms the other classi1cation methods. Further, we propose a combining model by integrating SVM with the other classi1cation methods. The combining model performs best among all the forecasting methods. ? 2004 Elsevier Ltd. All rights reserved."}
{"_id": "5ec2e7317c11e435c3bfa15ad93c3a1021133f38", "title": "Classifying the Informative Behaviour of Emoji in Microblogs", "text": "Emoji are pictographs commonly used in microblogs as emotion markers, but they can also represent a much wider range of concepts. Additionally, they may occur in different positions within a message (e.g. a tweet), appear in sequences or act as word substitute. Emoji must be considered necessary elements in the analysis and processing of user generated content, since they can either provide fundamental syntactic information, emphasize what is already expressed in the text, or carry meaning that cannot be inferred from the words alone. We collected and annotated a corpus of 2475 tweets pairs with the aim of analyzing and then classifying emoji use with respect to redundancy. The best classification model achieved an F-score of 0.7. In this paper we shortly present the corpus, and we describe the classification experiments, explain the predictive features adopted, discuss the problematic aspects of our approach and suggest future improvements."}
{"_id": "2c64934a9475208f8f3e5a0921f30d78fb0c9f68", "title": "Der CDO als Komplement zum CIO", "text": ""}
{"_id": "cb1c010223ad1b91d0795b8b4127691ade706a3f", "title": "Sustainable Food Supply Chains : The Role of Collaboration and Sustainable Relationships", "text": "The life cycle approach is widely used in the analysis of sustainability. Its application to supply chains is necessary since the product flows, from processing of raw materials to the final customer, are considered. The role of the organizational aspects, expressed in terms of relationships between the supply chain agents, is little considered in the life cycle analysis approach. The aim of this paper is to extend the scope of the food chain life cycle analysis by adding the organizational dimension to the environmental, economic and social ones. Within this context, Collaboration and Sustainable Relationships concepts have been explored based on a literature survey. A theoretical framework, describing their role in assessing the organizational dimension in the life cycle analysis of the food supply chains, is defined. A hypothesis on their joint influence on the supply chains performances is formulated."}
{"_id": "1aa57fd9f06e73c5fb05f070cc88b2f7cc3aa6d3", "title": "Offloading in Mobile Edge Computing: Task Allocation and Computational Frequency Scaling", "text": "In this paper, we propose an optimization framework of offloading from a single mobile device (MD) to multiple edge devices. We aim to minimize both total tasks\u2019 execution latency and the MD\u2019s energy consumption by jointly optimizing the task allocation decision and the MD\u2019s central process unit (CPU) frequency. This paper considers two cases for the MD, i.e., fixed CPU frequency and elastic CPU frequency. Since these problems are NP-hard, we propose a linear relaxation-based approach and a semidefinite relaxation (SDR)-based approach for the fixed CPU frequency case, and an exhaustive search-based approach and an SDR-based approach for the elastic CPU frequency case. Our simulation results show that the SDR-based algorithms achieve near optimal performance. Performance improvement can be obtained with the proposed scheme in terms of energy consumption and tasks\u2019 execution latency when multiple edge devices and elastic CPU frequency are considered. Finally, we show that the MD\u2019s flexible CPU range can have an impact on the task allocation."}
{"_id": "db0188e7bc557c4928ef6768c3ddea9df90680ae", "title": "Robustness beyond shallowness: incremental deep parsing", "text": "Robustness is a key issue for natural language processing in general and parsing in particular, and many approaches have been explored in the last decade for the design of robust parsing systems. Among those approaches is shallow or partial parsing, which produces minimal and incomplete syntactic structures, often in an incremental way. We argue that with a systematic incremental methodology one can go beyond shallow parsing to deeper language analysis, while preserving robustness. We describe a generic system based on such methodology and designed for building robust analyzers that tackle deeper linguistic phenomena than those traditionally handled by the now widespread shallow parsers. The rule formalism allows the recognition of n-ary linguistic relations between words or constituents on the basis of global or local structural, topological and/or lexical conditions. It offers the advantage of accepting various types of inputs, ranging from raw to chunked or constituent-marked texts, so for instance it can be used to process existing annotated corpora, or to perform a deeper analysis on the output of an existing shallow parser. It has been successfully used to build a deep functional dependency parser, as well as for the task of co-reference resolution, in a modular way."}
{"_id": "78f1393f88b9181092ab53a179a15e782f1cd9f1", "title": "From Single to Dual Robotic Therapy: A Review of the Development Process of iPAM", "text": "Stroke is a common condition in the UK with over 30,000 people per annum are left with significant disability. When severe arm paresis is occurs, functional recovery of the affected arm is often poor. This can have a major impact on physical independence. A contributing factor to the poor recovery relates to lack of intensity of appropriate arm exercise practice. Through the use of robotic technology it may be possible to augment the physical therapist's efforts and maximize the impact of the limited physical therapy resource. At Leeds University a prototype single robot system has been developed that is capable of moving a patient's arm through a set of movements prescribed by a physiotherapist. This system is robust to changes in the biomechanical and physiological properties through a large range of patients. Further development of the system has led to a dual robot system that provides a greater level of control over upper-arm joint co-ordination while offering additional support to the shoulder complex. In order to test the systems safely, two models of the human arm have been produced: 1) a 6 degree of freedom computational model simulating how the robot system could provide therapy to a patient. 2) A mechanical model (Derryck), representing the biomechanical properties of the shoulder and elbow. The prototype single robot system is shown to be capable of administering trajectory following (for Derryck) for a large range of arm parameters"}
{"_id": "d847054f540b0aa0316b266f6d5a041fbcabefdb", "title": "Categorize the Quality of Cotton Seeds Based on the Different Germination of the Cotton Using Machine Knowledge Approach", "text": "A study was made in those region that expose major area of machine learning techniques which available in data mining. There are lots of Research work is analysized in agriculture field. It is a smart techniques is needed to find the resolution over the modern method. Cotton is an significant beneficial harvest and extensively traded commodity across the world. Its yield is insightful to weather, soil as well as organization practices. This study presents a concise idea of several of the broadly capitalized machine learning technique over agricultural domain. Among the algorithms like Navie bayes, j48 ,MLP,Randomforest,Random tree, I have founded J48 gives better result over the other algorithms using cotton seed quality accuracy is best."}
{"_id": "89cef03961da985dae5f65e2bfd63ad1def76fc5", "title": "MOOCs: So Many Learners, So Much Potential ...", "text": "Massive open online courses (MOOCs) have exploded onto the scene, promising to satisfy a worldwide thirst for a high-quality, personalized, and free education. This article explores where MOOCs fit within the e-learning and Artificial Intelligence in Education (AIED) landscape."}
{"_id": "88ad82e6f2264f75f7783232ba9185a2f931a5d1", "title": "Facial Expression Analysis under Partial Occlusion: A Survey", "text": "Automatic machine-based Facial Expression Analysis (FEA) has made substantial progress in the past few decades driven by its importance for applications in psychology, security, health, entertainment, and human\u2013computer interaction. The vast majority of completed FEA studies are based on nonoccluded faces collected in a controlled laboratory environment. Automatic expression recognition tolerant to partial occlusion remains less understood, particularly in real-world scenarios. In recent years, efforts investigating techniques to handle partial occlusion for FEA have seen an increase. The context is right for a comprehensive perspective of these developments and the state of the art from this perspective. This survey provides such a comprehensive review of recent advances in dataset creation, algorithm development, and investigations of the effects of occlusion critical for robust performance in FEA systems. It outlines existing challenges in overcoming partial occlusion and discusses possible opportunities in advancing the technology. To the best of our knowledge, it is the first FEA survey dedicated to occlusion and aimed at promoting better-informed and benchmarked future work."}
{"_id": "a1cfe21a0dc5bd9d07bd16e2c7fa85a719c8e588", "title": "A six degrees of freedom haptic interface for laparoscopic training", "text": "We present the novel kinematics, workspace characterization, functional prototype and impedance control of a six degrees of freedom haptic interface designed to train surgeons for laparoscopic procedures, through virtual reality simulations. The parallel kinematics of the device is constructed by connecting a 3RRP planar parallel mechanism to a linearly actuated modified delta mechanism with a connecting link. The configuration level forward and inverse kinematics of the device assume analytic solutions, while its workspace can be shaped to enable large end-effector translations and rotations, making it well-suited for laparoscopy operations. Furthermore, the haptic interface features a low apparent inertia with high structural stiffness, thanks to its parallel kinematics with grounded actuators. A model-based open-loop impedance controller with feed-forward gravity compensation has been implemented for the device and various virtual tissue/organ stiffness levels have been rendered."}
{"_id": "cf9993d3d10d80122d12cc9516958c070f8650c1", "title": "Commercial DC-DC converters: A survey of isolated low-voltage very-high-current systems for transportation electrification", "text": "As the energy demand from low-voltage (LV) DC sources on vehicles keeps increasing, the need for DC-DC converters capable of handling very high current on the LV side is growing. A group of original equipment manufacturers (OEM) have set performance targets for such converters for the years 2015 and 2020. This paper presents an extensive survey of DC-DC converters that are: a) galvanically isolated, b) able to scale 12/24/48 V up to several hundred volts, and c) rated to carry several hundred amperes of current on the LV side. More than eighty converters from thirty manufacturers are analyzed. Characteristics such as voltage ranges, current capacity, power rating, efficiency, operating temperature, cooling types, weight, power density, volume, specific power, compliant codes, and standards are collected and presented in a graphical form. The current insights and trends are gathered and compared with the OEM targets. The results can serve as a technology indicator of this kind of converters in 2015 for researchers and practicing engineers of transportation electrification and others alike."}
{"_id": "1be81623e9cd9434a7bc088fdaa9d858fcfb3da5", "title": "Batch Renormalization: Towards Reducing Minibatch Dependence in Batch-Normalized Models", "text": "Batch Normalization is quite effective at accelerating and improving the training of deep models. However, its effectiveness diminishes when the training minibatches are small, or do not consist of independent samples. We hypothesize that this is due to the dependence of model layer inputs on all the examples in the minibatch, and different activations being produced between training and inference. We propose Batch Renormalization, a simple and effective extension to ensure that the training and inference models generate the same outputs that depend on individual examples rather than the entire minibatch. Models trained with Batch Renormalization perform substantially better than batchnorm when training with small or non-i.i.d. minibatches. At the same time, Batch Renormalization retains the benefits of batchnorm such as insensitivity to initialization and training efficiency."}
{"_id": "231af7dc01a166cac3b5b01ca05778238f796e41", "title": "GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium", "text": "Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible. However, the convergence of GAN training has still not been proved. We propose a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions. TTUR has an individual learning rate for both the discriminator and the generator. Using the theory of stochastic approximation, we prove that the TTUR converges under mild assumptions to a stationary local Nash equilibrium. The convergence carries over to the popular Adam optimization, for which we prove that it follows the dynamics of a heavy ball with friction and thus prefers flat minima in the objective landscape. For the evaluation of the performance of GANs at image generation, we introduce the \u2018Fr\u00e9chet Inception Distance\u201d (FID) which captures the similarity of generated images to real ones better than the Inception Score. In experiments, TTUR improves learning for DCGANs and Improved Wasserstein GANs (WGAN-GP) outperforming conventional GAN training on CelebA, CIFAR-10, SVHN, LSUN Bedrooms, and the One Billion Word Benchmark."}
{"_id": "6bb2326c8981a07498555df64416d764f03a30c0", "title": "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning", "text": "Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there are any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08% top-5 error on the test set of the ImageNet classification (CLS) challenge."}
{"_id": "a38168015a783fecc5830260a7eb5b9e3e945ee2", "title": "Deep Networks with Stochastic Depth", "text": "Very deep convolutional networks with hundreds of layers have led to significant reductions in error on competitive benchmarks. Although the unmatched expressiveness of the many layers can be highly desirable at test time, training very deep networks comes with its own set of challenges. The gradients can vanish, the forward flow often diminishes, and the training time can be painfully slow. To address these problems, we propose stochastic depth, a training procedure that enables the seemingly contradictory setup to train short networks and use deep networks at test time. We start with very deep networks but during training, for each mini-batch, randomly drop a subset of layers and bypass them with the identity function. This simple approach complements the recent success of residual networks. It reduces training time substantially and improves the test error significantly on almost all data sets that we used for evaluation. With stochastic depth we can increase the depth of residual networks even beyond 1200 layers and still yield meaningful improvements in test error (4.91% on CIFAR-10)."}
{"_id": "2e8a322666a89adf83e8e0e7cbc5142fba5e7b01", "title": "Dissecting Video Server Selection Strategies in the YouTube CDN", "text": "In this paper, we conduct a detailed study of the YouTube CDN with a view to understanding the mechanisms and policies used to determine which data centers users download video from. Our analysis is conducted using week-long datasets simultaneously collected from the edge of five networks - two university campuses and three ISP networks - located in three different countries. We employ state-of-the-art delay-based geolocation techniques to find the geographical location of YouTube servers. A unique aspect of our work is that we perform our analysis on groups of related YouTube flows. This enables us to infer key aspects of the system design that would be difficult to glean by considering individual flows in isolation. Our results reveal that while the RTT between users and data centers plays a role in the video server selection process, a variety of other factors may influence this selection including load-balancing, diurnal effects, variations across DNS servers within a network, limited availability of rarely accessed video, and the need to alleviate hot-spots that may arise due to popular video content."}
{"_id": "00fa9dc02a52baa8770225ca5fc5281d9a7badaf", "title": "Reliable Beamspace Channel Estimation for Millimeter-Wave Massive MIMO Systems with Lens Antenna Array", "text": "Millimeter-wave (mm-wave) massive MIMO with lens antenna array can considerably reduce the number of required radio-frequency (RF) chains by beam selection. However, beam selection requires the base station to acquire the accurate information of beamspace channel. This is a challenging task as the size of beamspace channel is large, while the number of RF chains is limited. In this paper, we investigate the beamspace channel estimation problem in mm-wave massive MIMO systems with lens antenna array. Specifically, we first design an adaptive selecting network for mm-wave massive MIMO systems with lens antenna array, and based on this network, we further formulate the beamspace channel estimation problem as a sparse signal recovery problem. Then, by fully utilizing the structural characteristics of the mm-wave beamspace channel, we propose a support detection (SD)-based channel estimation scheme with reliable performance and low pilot overhead. Finally, the performance and complexity analyses are provided to prove that the proposed SD-based channel estimation scheme can estimate the support of sparse beamspace channel with comparable or higher accuracy than conventional schemes. Simulation results verify that the proposed SD-based channel estimation scheme outperforms conventional schemes and enjoys satisfying accuracy even in the low SNR region as the structural characteristics of beamspace channel can be exploited."}
{"_id": "e434a49c703d3f98b6f750fd090591f2f39b6b6a", "title": "Foofah: A Programming-By-Example System for Synthesizing Data Transformation Programs", "text": "Advancements in new data analysis and visualization technologies have resulted in wide applicability of data-driven decision making. However, raw data from various sources must be wrangled into a suitable form before they are processed by the downstream data tools. People traditionally write data transformation programs to automate this process, and such work is cumbersome and tedious.\n We built a system called FOOFAH for helping the user easily synthesize a desired data transformation program. Our system minimizes the user's effort by only asking for a small illustrative example comprised of the raw input data and the target transformed output; FOOFAH then synthesizes a program that can perform the desired data transformation. This demonstration showcases how the user can apply FOOFAH to real-world data transformation tasks."}
{"_id": "c2fbc10b2b89e5766a90c1b090b88bbfaf8ef183", "title": "6-DOF hovering controller design of the Quad Tiltrotor aircraft: Simulations and experiments", "text": "This paper presents a particular class of a convertible micro aerial vehicle (MAV) with fixed wings, the so-called Quad Tiltrotor aircraft. This aircraft is able to change its flight configuration from hover to level flight and vice versa by means of a transition maneuver. In this first part of the research, the hover dynamics of the Quad Tiltrotor is investigated. Dynamical model and nonlinear control based on Lyapunov design are studied. The presented approach focuses on the problem of finding a control law capable of stabilizing the aircraft's position. Some simulations results are given, which demonstrate the effectiveness of the controller. Further, some experimental results are presented tested on the Quad Tiltrotor experimental platform."}
{"_id": "2db888dfa4cbcad3ebb7f4a920d38db88ad23bfe", "title": "Two-Dimensional Materials for Sensing : Graphene and Beyond", "text": "Two-dimensional materials have attracted great scientific attention due to their unusual and fascinating properties for use in electronics, spintronics, photovoltaics, medicine, composites, etc. Graphene, transition metal dichalcogenides such as MoS2, phosphorene, etc., which belong to the family of two-dimensional materials, have shown great promise for gas sensing applications due to their high surface-to-volume ratio, low noise and sensitivity of electronic properties to the changes in the surroundings. Two-dimensional nanostructured semiconducting metal oxide based gas sensors have also been recognized as successful gas detection devices. This review aims to provide the latest advancements in the field of gas sensors based on various two-dimensional materials with the main focus on sensor performance metrics such as sensitivity, specificity, detection limit, response time, and reversibility. Both experimental and theoretical studies on the gas sensing properties of graphene and other two-dimensional materials beyond graphene are also discussed. The article concludes with the current challenges and future prospects for two-dimensional materials in gas sensor applications. OPEN ACCESS Electronics 2015, 4 652"}
{"_id": "b9e4580da3d96a144dead0c8342670e46da4bd16", "title": "A Full Soft-Switching ZVZCS Flyback Converter Using an Active Auxiliary Cell", "text": "In this paper, a new soft switching flyback dc-dc converter with a simple zero-voltage and zero-current switching (ZVZCS) cell is proposed. The auxiliary resonant cell consists of one switch, one diode, one inductor, and two capacitors. It provides ZVZCS not only for the main switch but also for the auxiliary switch at both turn-on and turn-off times. In addition, all the auxiliary diodes and body diodes of the switches operate under full ZVZCS except the output diode which turns off only with zero current switching. This soft switching technique makes the proposed converter have less switching losses which leads the converter to operate in higher frequencies and efficiency. Another outstanding feature of the proposed converter is using the auxiliary resonant cell out of the main power path to have lower voltage stress on the switches. In this paper, pulse width modulation (PWM) is employed to control the main switch. A detailed mode analysis of the proposed converter operation is presented along with design procedure. Finally, the accuracy performance of the converter is verified through a 60 W experimental prototype results."}
{"_id": "221c18238b829c12b911706947ab38fd017acef7", "title": "A Richly Annotated Dataset for Pedestrian Attribute Recognition", "text": "In this paper, we aim to improve the dataset foundation for pedestrian attribute recognition in real surveillance scenarios. Recognition of human attributes, such as gender, and clothes types, has great prospects in real applications. However, the development of suitable benchmark datasets for attribute recognition remains lagged behind. Existing human attribute datasets are collected from various sources or an integration of pedestrian re-identification datasets. Such heterogeneous collection poses a big challenge on developing high quality fine-grained attribute recognition algorithms. Furthermore, human attribute recognition are generally severely affected by environmental or contextual factors, such as viewpoints, occlusions and body parts, while existing attribute datasets barely care about them. To tackle these problems, we build a Richly Annotated Pedestrian (RAP) dataset from real multi-camera surveillance scenarios with long term collection, where data samples are annotated with not only fine-grained human attributes but also environmental and contextual factors. RAP has in total 41,585 pedestrian samples, each of which is annotated with 72 attributes as well as viewpoints, occlusions, body parts information. To our knowledge, the RAP dataset is the largest pedestrian attribute dataset, which is expected to greatly promote the study of large-scale attribute recognition systems. Furthermore, we empirically analyze the effects of different environmental and contextual factors on pedestrian attribute recognition. Experimental results demonstrate that viewpoints, occlusions and body parts information could assist attribute recognition a lot in real applications."}
{"_id": "8d9274823441be0e76bab445414b6381bcd3ca0a", "title": "ULDBs: Databases with Uncertainty and Lineage", "text": "This paper introduces ULDBs, an extension of relational databases with simple yet expressive constructs for representing and manipulating both lineage and uncertainty. Uncertain data and data lineage are two important areas of data management that have been considered extensively in isolation, however many applications require the features in tandem. Fundamentally, lineage enables simple and consistent representation of uncertain data, it correlates uncertainty in query results with uncertainty in the input data, and query processing with lineage and uncertainty together presents computational benefits over treating them separately.We show that the ULDB representation is complete, and that it permits straightforward implementation of many relational operations. We define two notions of ULDB minimality--data-minimal and lineage-minimal--and study minimization of ULDB representations under both notions. With lineage, derived relations are no longer self-contained: their uncertainty depends on uncertainty in the base data. We provide an algorithm for the new operation of extracting a database subset in the presence of interconnected uncertainty. Finally, we show how ULDBs enable a new approach to query processing in probabilistic databases.ULDBs form the basis of the Trio system under development at Stanford."}
{"_id": "d8471641dbf606b8dd9a0960a6be6a14cf17d681", "title": "Proposed system for estimating intrinsic value of stock using Monte Carlo simulation", "text": "Stock market prediction has been the bane and goal for investors, and one of the biggest challenges for artificial intelligence (AI) community. National economies have a great impact by the behaviour of their stock markets. Markets have become a more accessible investment tool. Attribute that all stock markets have in common is uncertainty. This uncertainty associated with stock value in short and long term future investments are undesirable. Stock market prediction is instrumental in process of investment. Drawbacks of existing system faces technical difficulties such as estimating share of dividends, state vector while implementing stochastic modelling for risk analysis. To overcome drawbacks of existing system, proposed system makes an effort of generating a combination of more than one lakh seventy thousand scenarios to find intrinsic value of company, displaying results in graphical visualization. A large scenario generation of distinct intrinsic stock value done by the system will provide intrinsic stock value for each scenario. A large set of values for all the input parameters for example high growth value, declining growth value, terminal growth value, return on equity needs to be created, so that possible intrinsic value can be generated by system. The system will calculate statistical indicators like mean, median, mode, skewness and kurtosis of large data set consisting intrinsic value of company. Comparing statistically calculated intrinsic value and current market price, system will be able to add a robust statistical reasoning for investment decision. This reasoning will have no human or emotional biases as there will be no human intervention involved for arriving to the final intrinsic value of stock. Monte Carlo simulation is best suited solution for generating random scenarios that fall in line with Brownian walk motion of stock prices."}
{"_id": "a8d4e7ba019cb644b73fae2173b13853e0d63ab4", "title": "Impact of an animal-assisted therapy programme on physiological and psychosocial variables of paediatric oncology patients", "text": "The objective of this study was to propose an intervention and safety protocol for performing animal-assisted therapy (AAT) and evaluating its efficacy in children under outpatient oncological treatment based on psychological, physiological, and quality of life indicators for the children and caregivers. The sample consisted of 24 children diagnosed with leukaemia and solid tumours (58% girls with a mean age of 8.0 years) who underwent an AAT programme consisting of three 30-min sessions in an open group. Two dogs (one Labrador retriever and one golden retriever) were used, and activities such as sensory stimulation, gait training, and socialization were conducted. The exclusion criteria were severe mental problems, inability to answer the questions included in the instruments used, allergy to animals, unavailability/lack of interest, isolation precaution, surgical wound, use of invasive devices, ostomy, no current blood count for evaluation, neutropaenia, infection, fever, diarrhoea, vomiting, respiratory symptoms at the beginning of the intervention or 1 week before the intervention, hospitalization or scheduled surgery, and non-completion of the AAT programme. The variables analysed using validated self or other evaluations were stress, pain, mood, anxiety, depression, quality of life, heart rate, and blood pressure. A quasi-experimental study design was used. We observed a decrease in pain (p = 0.046, d = -0.894), irritation (p = 0.041, d = -0.917), and stress (p = 0.005; d = -1.404) and a tendency towards improvement of depressive symptoms (p = 0.069; d = -0.801). Among the caregivers, an improvement was observed in anxiety (p = 0.007, d = -1.312), mental confusion (p = 0.006, d = -1.350), and tension (p = 0.006, d = -1.361). Therefore, the selection criteria and care protocols used for the AAT programme in the oncological context were adequate, and the programme was effective."}
{"_id": "5ff520eacf2c3fa22ccad53d7a97950fd34ddf0e", "title": "On Quaternions and Octonions : Their Geometry , Arithmetic , and Symmetry", "text": "Conway and Smith\u2019s book is a wonderful introduction to the normed division algebras: the real numbers (R), the complex numbers (C), the quaternions (H), and the octonions (O). The first two are well-known to every mathematician. In contrast, the quaternions and especially the octonions are sadly neglected, so the authors rightly concentrate on these. They develop these number systems from scratch, explore their connections to geometry, and even study number theory in quaternionic and octonionic versions of the integers. Conway and Smith warm up by studying two famous subrings of C: the Gaussian integers and Eisenstein integers. The Gaussian integers are the complex numbers x + iy for which x and y are integers. They form a square lattice:"}
{"_id": "71d3700ac2349b593dc34eb7c0412f92ddb7dedf", "title": "Cascades of two-pole-two-zero asymmetric resonators are good models of peripheral auditory function.", "text": "A cascade of two-pole-two-zero filter stages is a good model of the auditory periphery in two distinct ways. First, in the form of the pole-zero filter cascade, it acts as an auditory filter model that provides an excellent fit to data on human detection of tones in masking noise, with fewer fitting parameters than previously reported filter models such as the roex and gammachirp models. Second, when extended to the form of the cascade of asymmetric resonators with fast-acting compression, it serves as an efficient front-end filterbank for machine-hearing applications, including dynamic nonlinear effects such as fast wide-dynamic-range compression. In their underlying linear approximations, these filters are described by their poles and zeros, that is, by rational transfer functions, which makes them simple to implement in analog or digital domains. Other advantages in these models derive from the close connection of the filter-cascade architecture to wave propagation in the cochlea. These models also reflect the automatic-gain-control function of the auditory system and can maintain approximately constant impulse-response zero-crossing times as the level-dependent parameters change."}
{"_id": "847095fb7463014f52bb64708c1c1709a2aa0065", "title": "Enhancing Supervised Learning with Unlabeled Data", "text": "In many practical learning scenarios, there is a small amount of labeled data along with a large pool of unlabeled data. Many supervised learning algorithms have been developed and extensively studied. We present a new \\co-training\" strategy for using un-labeled data to improve the performance of standard supervised learning algorithms. Unlike much of the prior work, such as the co-training procedure of Blum and Mitchell (1998), we do not assume there are two redundant views both of which are suucient for classiication. The only requirement our co-training strategy places on each supervised learning algorithm is that its hypothesis partitions the example space into a set of equivalence classes (e.g. for a decision tree each leaf deenes an equivalence class). We evaluate our co-training strategy via experiments using data from the UCI repository."}
{"_id": "75a58d4b9b563c587bf4b8d6e963d9a4a9afc334", "title": "Reynolds flocking in reality with fixed-wing robots: Communication range vs. maximum turning rate", "text": "The success of swarm behaviors often depends on the range at which robots can communicate and the speed at which they change their behavior. Challenges arise when the communication range is too small with respect to the dynamics of the robot, preventing interactions from lasting long enough to achieve coherent swarming. To alleviate this dependency, most swarm experiments done in laboratory environments rely on communication hardware that is relatively long range and wheeled robotic platforms that have omnidirectional motion. Instead, we focus on deploying a swarm of small fixed-wing flying robots. Such platforms have limited payload, resulting in the use of short-range communication hardware. Furthermore, they are required to maintain forward motion to avoid stalling and typically adopt low turn rates because of physical or energy constraints. The tradeoff between communication range and flight dynamics is exhaustively studied in simulation in the scope of Reynolds flocking and demonstrated with up to 10 robots in outdoor experiments."}
{"_id": "05916057d3607c913dfbbf58de488fe6240a60d6", "title": "Nocardia seriolae infection in the three striped tigerfish, Terapon jarbua (Forssk\u00e5l).", "text": "An epizootic in pond cultured three striped tigerfish, Terapon jarbua, in Taiwan was caused by Nocardia seriolae. Diseased fish first showed clinical signs and mortalities in February and March 2003. The cumulative mortality within 2 months was 2.4% (1200 of 50 000) and affected fish were 7 months old with total lengths from 18 to 25 cm. Most affected fish were pale and lethargic with haemorrhages and ulcers on the skin. The most significant gross pathological changes were varying degrees of ascites and enlargement of the spleen, kidney and liver. Obvious white nodules, varying in size, were found in these organs. Bacteria were either coccal or filamentous in appearance, with bead-like forms. Isolates from diseased fish were characterized using the API ZYM (Analytical profile index; Bio M\u00e9rieux, France) systems and conventional tests and identified as Nocardia sp. The isolate was designated NS127 and was confirmed as N. seriolae by a polymerase chain reaction assay that gave the expected specific 432 bp amplicon. In addition, its 16S rDNA sequence gave 100% sequence identity with N. seriolae. A partial sequence of the 16S rRNA gene, heat shock protein gene and RNA polymerase gene (rpo B) of NS127 and the type strain of N. seriolae BCRC 13745 formed a monophyletic clade with a high sequence similarity and bootstrap value of 99.9%. White nodules induced in experimental fish were similar to naturally infected cases and N. seriolae was re-isolated on brain heart infusion agar. This is the first report of N. seriolae-infection in three striped tigerfish in aquaculture."}
{"_id": "fd66fae4891a7993a66ca98fcdc8ce2207bee8b8", "title": "Towards Better Analysis of Machine Learning Models: A Visual Analytics Perspective", "text": "Interactive model analysis, the process of understanding, diagnosing, and refining a machine learning model with the help of interactive visualization, is very important for users to efficiently solve real-world artificial intelligence and data mining problems. Dramatic advances in big data analytics has led to a wide variety of interactive model analysis tasks. In this paper, we present a comprehensive analysis and interpretation of this rapidly developing area. Specifically, we classify the relevant work into three categories: understanding, diagnosis, and refinement. Each category is exemplified by recent influential work. Possible future research opportunities are also explored and discussed."}
{"_id": "d4b8d392993134ba42975f2844b018d1b554ded2", "title": "Reflectance and Natural Illumination from a Single Image", "text": "Estimating reflectance and natural illumination from a single image of an object of known shape is a challenging task due to the ambiguities between reflectance and illumination. Although there is an inherent limitation in what can be recovered as the reflectance band-limits the illumination, explicitly estimating both is desirable for many computer vision applications. Achieving this estimation requires that we derive and impose strong constraints on both variables. We introduce a probabilistic formulation that seamlessly incorporates such constraints as priors to arrive at the maximum a posteriori estimates of reflectance and natural illumination. We begin by showing that reflectance modulates the natural illumination in a way that increases its entropy. Based on this observation, we impose a prior on the illumination that favors lower entropy while conforming to natural image statistics. We also impose a prior on the reflectance based on the directional statistics BRDF model that constrains the estimate to lie within the bounds and variability of real-world materials. Experimental results on a number of synthetic and real images show that the method is able to achieve accurate joint estimation for different combinations of materials and lighting."}
{"_id": "22913f85923ddbb2607aec150fc74d3e24a63c3d", "title": "IT governance in public organization based on ITBSC and cobit 5: The case of Kupang Municipality", "text": "IT Governance becomes a key component in corporative governance because of the influence of information systems and technologies that support every component of the organization. IT governance which is applied to government organizations can provide positive benefits and support the achievement of business objectives to increase the quality of public services. Application of a good IT Governance is used to apply it in accordance with institutional context. The method used is COBIT 5 combined with ITBSC, then mapping with institutional objectives. The process of collecting data using structured interview methods to the stakeholders in Kupang Municipality. This research found that capability level of Kupang Municipality is in position 0 (incomplete process) with target level of capability of 3, it means that IT Governance of Kupang Municipality not in a maximal condition to responding business process. This study also produced a recommendation for improvement in order to increase the value of capability levels that were set based on COBIT 5."}
{"_id": "7928e4eeb14d271fb66a07a1f9ec47893f556bc6", "title": "Leaky-Wave Antennas", "text": "This paper gives a basic review and a summary of recent developments for leaky-wave antennas (LWAs). An LWA uses a guiding structure that supports wave propagation along the length of the structure, with the wave radiating or \u201cleaking\u201d continuously along the structure. Such antennas may be uniform, quasi-uniform, or periodic. After reviewing the basic physics and operating principles, a summary of some recent advances for these types of structures is given. Recent advances include structures that can scan to endfire, structures that can scan through broadside, structures that are conformal to surfaces, and structures that incorporate power recycling or include active elements. Some of these novel structures are inspired by recent advances in the metamaterials area."}
{"_id": "7e323f6c3a3099097a65e4d1d546eec1a9f5db37", "title": "3D Kinematic Simulation for PA10-7C Robot Arm Based on VRML", "text": "In this paper, a graphical, flexible, interactive, and systematic 3D simulation helps facilitate analyzing and previewing kinematics of PA10-7C robot arm in terms of forward kinematics, inverse kinematics, and the Denavit-Hartenberg convention. Modeling and control are of critical importance when the robot arm is used for practical applications. In the paper, the D-H model of PA10-7C robot arm is given first to describe the relationship between two consecutive frames of joints. Based on this D-H model, forward kinematics is calculated efficiently. To describe the relationship between joint angular velocities in the joint space and the end effector's velocities in Cartesian space, Jacobian matrix for PA10-7C robot arm is derived. By Jacobian matrix, inverse kinematics approach is obtained in the paper. VRML-based model of PA10-7C is built in the paper for the simulation. Finally, simulation results are discussed and the paper is concluded."}
{"_id": "b4d0203ed3b79eb76cda6df6fbe86c43027b8d08", "title": "Image set for deep learning: field images of maize annotated with disease symptoms", "text": "Automated detection and quantification of plant diseases would enable more rapid gains in plant breeding and faster scouting of farmers\u2019 fields. However, it is difficult for a simple algorithm to distinguish between the target disease and other sources of dead plant tissue in a typical field, especially given the many variations in lighting and orientation. Training a machine learning algorithm to accurately detect a given disease from images taken in the field requires a massive amount of human-generated training data. This data set contains images of maize (Zea mays L.) leaves taken in three ways: by a hand-held camera, with a camera mounted on a boom, and with a camera mounted on a small unmanned aircraft system (sUAS, commonly known as a drone). Lesions of northern leaf blight (NLB), a common foliar disease of maize, were annotated in each image by one of two human experts. The three data sets together contain 18,222 images annotated with 105,705 NLB lesions, making this the largest publicly available image set annotated for a single plant disease."}
{"_id": "2065d9eb28be0700a235afb78e4a073845bfb67d", "title": "Dynamic Movement Primitives \u2013 A Framework for Motor Control in Humans and Humanoid Robotics", "text": "Given the continuous stream of movements that biological systems exhibit in their daily activities, an account for such versatility and creativity has to assume that movement sequences consist of segments, executed either in sequence or with partial or complete overlap. Therefore, a fundamental question that has pervaded research in motor control both in artificial and biological systems revolves around identifying movement primitives (a.k.a. units of actions, basis behaviors, motor schemas, etc.). What are the fundamental building blocks that are strung together, adapted to, and created for ever new behaviors? This paper summarizes results that led to the hypothesis of Dynamic Movement Primitives (DMP). DMPs are units of action that are formalized as stable nonlinear attractor systems. They are useful for autonomous robotics as they are highly flexible in creating complex rhythmic (e.g., locomotion) and discrete (e.g., a tennis swing) behaviors that can quickly be adapted to the inevitable perturbations of a dynamically changing, stochastic environment. Moreover, DMPs provide a formal framework that also lends itself to investigations in computational neuroscience. A recent finding that allows creating DMPs with the help of well-understood statistical learning methods has elevated DMPs from a more heuristic to a principled modeling approach. Theoretical insights, evaluations on a humanoid robot, and behavioral and brain imaging data will serve to outline the framework of DMPs for a general approach to motor control in robotics and biology."}
{"_id": "6d43648beed554fa230cc6819189345a622bc322", "title": "Online movement adaptation based on previous sensor experiences", "text": "Personal robots can only become widespread if they are capable of safely operating among humans. In uncertain and highly dynamic environments such as human households, robots need to be able to instantly adapt their behavior to unforseen events. In this paper, we propose a general framework to achieve very contact-reactive motions for robotic grasping and manipulation. Associating stereotypical movements to particular tasks enables our system to use previous sensor experiences as a predictive model for subsequent task executions. We use dynamical systems, named Dynamic Movement Primitives (DMPs), to learn goal-directed behaviors from demonstration. We exploit their dynamic properties by coupling them with the measured and predicted sensor traces. This feedback loop allows for online adaptation of the movement plan. Our system can create a rich set of possible motions that account for external perturbations and perception uncertainty to generate truly robust behaviors. As an example, we present an application to grasping with the WAM robot arm."}
{"_id": "7bff4e9b01e9a79703ca1f3ec60cf000cd802501", "title": "Relative Entropy Inverse Reinforcement Learning", "text": "We consider the problem of imitation learning where the examples, demonstrated by an expert, cover only a small part of a large state space. Inverse Reinforcement Learning (IRL) provides an efficient tool for generalizing the demonstration, based on the assumption that the expert is optimally acting in a Markov Decision Process (MDP). Most of the past work on IRL requires that a (near)optimal policy can be computed for different reward functions. However, this requirement can hardly be satisfied in systems with a large, or continuous, state space. In this paper, we propose a model-free IRL algorithm, where the relative entropy between the empirical distribution of the state-action trajectories under a uniform policy and their distribution under the learned policy is minimized by stochastic gradient descent. We compare this new approach to well-known IRL algorithms using approximate MDP models. Empirical results on simulated car racing, gridworld and ball-in-a-cup problems show that our approach is able to learn good policies from a small number of demonstrations."}
{"_id": "7cc67cbb00d3f854a4d5b1310889076f3d587464", "title": "Linear predictive hidden Markov models and the speech signal", "text": ""}
{"_id": "c59f40587d27fc71cf78264c36b866058d958539", "title": "Automatic Generation of 3 D Thermal Maps of Building Interiors", "text": "Most existing approaches to characterizing thermal properties of buildings and heat emissions from their elements rely on manual inspection and as such are slow, and labor intensive. This is often a daunting task, which requires several days of on-site inspection. In this paper, we propose a fully automatic approach to construct a 3D thermal point cloud of the building interior reflecting the geometry including walls, floors, and ceilings, as well as structures such as furniture, lights, windows, and plug loads. Our approach is based on a wearable ambulatory backpack comprising multiple sensors such as Light Detection And Ranging (LiDAR) scanners, and Infrared and optical cameras. As the operator wearing the backpack walks through the building, the LiDAR scans are collected and processed in order to compute the 3D geometry of the building. Furthermore, the Infrared cameras are calibrated intrinsically and extrinsically such that the captured images are registered to the captured geometry. Thus, the temperature data in the Infrared images is associated with the geometry resulting in a \u201cthermal 3D point cloud\u201d. The same process can be repeated using optical imagery resulting in a \u201cvisible 3D point cloud\u201d. By visualizing the two point clouds simultaneously in interactive rendering tools, we can virtually walk through the thermal and optical 3D point clouds, toggle between them, identify and annotate, \u201chot\u201d regions, objects, plug loads, thermal and moisture leaks, and document their location with fine spatial granularity in the 3D point clouds."}
{"_id": "635ec42247bd80bd41fd1f4a1b0618c2a9e0f1fc", "title": "\"No-Touch\" Technique for Lip Enhancement.", "text": "BACKGROUND\nThe purpose of this study was to examine the anatomical principles of lip structure as they relate to individualized lip enhancement procedures and to describe a technique that does not violate lip mucosa during injection.\n\n\nMETHODS\nA retrospective analysis of patients undergoing lip enhancement procedures between 2001 and 2014 was performed. Preprocedural and postprocedural photographs were analyzed for lip subunit changes. A stepwise treatment algorithm targeting specific anatomical subunits of lip is described.\n\n\nRESULTS\nFour hundred ten patients were treated with a \"no-touch\" technique for lip enhancement. Lip profile is determined by the position of the white roll. The white roll is accessed by a 30-gauge needle at a point 5 mm lateral to the oral commissure and at the base of the philtral columns. Lip projection is established by vermilion formation contributing to the arc of the Cupid's bow. To improve projection, the labial commissure is entered with a 25-gauge cannula and tunneled into the submucosal space between the white and red rolls. Lip augmentation is a direct reflection of the prominence of the red line and can be approached in a perpendicular fashion with a needle or cannula descending to the level of the wet-dry junction.\n\n\nCONCLUSIONS\nAccurate assessment of the white and red rolls, arc of Cupid's bow, philtrum, and gingival show can guide the injector on the proper enhancement that individual patients require. The no-touch technique minimizes mucosal trauma. Tailoring treatment toward lip profile, projection, and/or augmentation can yield predictable and reproducible outcomes in this commonly performed cosmetic procedure."}
{"_id": "07bd1432f07ce453f0f4a259a8b2464c25e47a48", "title": "Structured Prediction of Sequences and Trees using Infinite Contexts", "text": "Linguistic structures exhibit a rich array of global phenomena, however commonly used Markov models are unable to adequately describe these phenomena due to their strong locality assumptions. We propose a novel hierarchical model for structured prediction over sequences and trees which exploits global context by conditioning each generation decision on an unbounded context of prior decisions. This builds on the success of Markov models but without imposing a fixed bound in order to better represent global phenomena. To facilitate learning of this large and unbounded model, we use a hierarchical PitmanYor process prior which provides a recursive form of smoothing. We propose prediction algorithms based on A* and Markov Chain Monte Carlo sampling. Empirical results demonstrate the potential of our model compared to baseline finite-context Markov models on part-of-speech tagging and syntactic parsing."}
{"_id": "1768909f779869c0e83d53f6c91764f41c338ab5", "title": "A large-scale car dataset for fine-grained categorization and verification", "text": "This paper aims to highlight vision related tasks centered around \u201ccar\u201d, which has been largely neglected by vision community in comparison to other objects. We show that there are still many interesting car-related problems and applications, which are not yet well explored and researched. To facilitate future car-related research, in this paper we present our on-going effort in collecting a large-scale dataset, \u201cCompCars\u201d, that covers not only different car views, but also their different internal and external parts, and rich attributes. Importantly, the dataset is constructed with a cross-modality nature, containing a surveillance-nature set and a web-nature set. We further demonstrate a few important applications exploiting the dataset, namely car model classification, car model verification, and attribute prediction. We also discuss specific challenges of the car-related problems and other potential applications that worth further investigations. The latest dataset can be downloaded at http://mmlab.ie.cuhk.edu.hk/ datasets/comp_cars/index.html."}
{"_id": "1ae7ec7d78d3a722b33c42fbe4e79e326559860a", "title": "Automatic text summarization of Wikipedia articles", "text": "The main objective of a text summarization system is to identify the most important information from the given text and present it to the end users. In this paper, Wikipedia articles are given as input to system and extractive text summarization is presented by identifying text features and scoring the sentences accordingly. The text is first pre-processed to tokenize the sentences and perform stemming operations. We then score the sentences using the different text features. Two novel approaches implemented are using the citations present in the text and identifying synonyms. These features along with the traditional methods are used to score the sentences. The scores are used to classify the sentence to be in the summary text or not with the help of a neural network. The user can provide what percentage of the original text should be in the summary. It is found that scoring the sentences based on citations gives the best results."}
{"_id": "591705aa6ced4716075a64697786bb489447ece0", "title": "Active Inference and Learning in the Cerebellum", "text": "This letter offers a computational account of Pavlovian conditioning in the cerebellum based on active inference and predictive coding. Using eyeblink conditioning as a canonical paradigm, we formulate a minimal generative model that can account for spontaneous blinking, startle responses, and (delay or trace) conditioning. We then establish the face validity of the model using simulated responses to unconditioned and conditioned stimuli to reproduce the sorts of behavior that are observed empirically. The scheme\u2019s anatomical validity is then addressed by associating variables in the predictive coding scheme with nuclei and neuronal populations to match the (extrinsic and intrinsic) connectivity of the cerebellar (eyeblink conditioning) system. Finally, we try to establish predictive validity by reproducing selective failures of delay conditioning, trace conditioning, and extinction using (simulated and reversible) focal lesions. Although rather metaphorical, the ensuing scheme can account for a remarkable range of anatomical and neurophysiological aspects of cerebellar circuitry\u2014and the specificity of lesion-deficit mappings that have been established experimentally. From a computational perspective, this work shows how conditioning or learning can be formulated in terms of minimizing variational free energy (or maximizing Bayesian model evidence) using exactly the same principles that underlie predictive coding in perception."}
{"_id": "92556d8cdb9a7c6bcacfc09069b9e03b6d10a596", "title": "Networking over Next-generation Satellite Systems Committee in Charge: Networking over Next-generation Satellite Systems Abstract Networking over Next-generation Satellite Systems Contents", "text": "Thanks to both the rapid deployment of the Internet and advances in satellite technology, the market for broadband satellite services is poised for substantial growth in the coming decade. Current communications satellite systems have generally been designed to provide either voice or data transaction (low data rate) services through small terminals, or trunking (high data rate, or broadband) services through large terminals. However, technological advances are enabling new systems that combine broadband data rates with small terminals, thereby providing more affordable \" last-mile \" network access to home and small business users worldwide. In particular, two types of broadband satellite systems are under development: high-power satellites deployed at traditional geostationary (GEO) orbits, and large constellations of satellites deployed at much lower (LEO) orbits. In this thesis, we explore research problems that have arisen from this shift in satellite network architectures. When using GEO satellites to provide Internet access service, the performance of the Internet's Transmission Control Protocol (TCP) is degraded by the high latency and high degree of bandwidth asymmetry present in such systems. We therefore undertook a comprehensive study of TCP performance in the context of broadband satellite systems used for network access. We first studied whether TCP's congestion avoidance algorithm can be adjusted to provide better fairness when satellite connections are forced to share bottleneck links with other (shorter delay) connections. Our data suggests that adjustments of the policy used in that algorithm may yield substantial fairness benefits without compromizing utilization. We next demonstrated how minor variations in TCP implementations can have drastic performance implications when used over satellite links (such as a reduction in file transfer throughput by over half), and from our observations constructed a satellite-optimized TCP implementation using standardized options. We explored the performance of TCP for short data transfers such as Web traffic, and found that two experimental options relating to how TCP starts a connection, when used together, could reduce the user-perceived latency by a factor of two to three. However, because not all of these options are likely to be deployed on a wide scale, and because even the best satellite-optimized TCP implementation is vulnerable to the fairness problems identified above, we explored the performance benefits of splitting a TCP connection at a protocol gateway within the satellite network, and found that such an approach can allow the performance of the satellite connection to approach that of a non-satellite connection. Carrying this \u2026"}
{"_id": "4701e01a81f58aabfdc7f3f3e5bca8ec711ec127", "title": "Evaluation of the Potential of Dipole Field Navigation for the Targeted Delivery of Therapeutic Agents in a Human Vascular Network", "text": "Magnetically guided agents in the vascular network are expected to enable the targeted delivery of therapeutics to localized regions while avoiding their systemic circulation. Due to the small size of the medically applicable superparamagnetic microscale agents required to reach the smaller arteries, high magnetic fields and gradients are required to reach saturation magnetization and generate sufficient directional forces, respectively, for their effective navigation in the vascular environment. Currently, the only method that provides both a high field and high magnetic gradient strengths in deep tissues at the human scale is known as dipole field navigation (DFN). This method relies on the controlled distortion of the field inside a magnetic resonance imaging scanner by precisely positioning ferromagnetic cores around the patient. This paper builds on previous works that have experimentally demonstrated the feasibility of the method and proposed optimization algorithms for placing the cores. The maximum gradient strengths that can be generated for single and multibifurcation vascular routes are investigated while considering the major constraints on core positions (limited space in the scanner, magnetic interactions). Using disc cores, which were previously shown particularly effective for the DFN, results show that gradient strengths exceeding 400 mT/m (a tenfold increase with respect to typical gradients generated by clinical MRI scanners) can be achieved at 10 cm inside the patient, but decrease as the complexity of the vascular route increases. The potential of the method is evaluated for targeting regions of a vascular model of a human liver, segmented from clinical data, with encouraging results showing strengths up to 150 mT/m for generating gradients at three consecutive bifurcations within 20\u00b0 of average gradient direction error."}
{"_id": "c3aef755ba92213581ef99f33a4a18db180cefa4", "title": "Social network security: Issues, challenges, threats, and solutions", "text": "Social networks are very popular in today\u2018s world. Millions of people use various forms of social networks as they allow individuals to connect with friends and family, and share private information. However, issues related to maintaining the privacy and security of a user\u2018s information can occur, especially when the user\u2018s uploaded content is multimedia, such as photos, videos, and audios. Uploaded multimedia content carries information that can be transmitted virally and almost instantaneously within a social networking site and beyond. In this paper, we present a comprehensive survey of different security and privacy threats that target every user of social networking sites. In addition, we separately focus on various threats that arise due to the sharing of multimedia content within a social networking site. We also discuss current state-ofthe-art defense solutions that can protect social network users from these threats. We then present future direction and discuss some easy-to-apply response techniques to achieve the goal of a trustworthy and secure social network ecosystem. KeywordsSocial network service, Security and privacy, Multimedia data, Security threats"}
{"_id": "892c41f004b6a7ad939860bbee22446c01882055", "title": "Searching for the elusive gift: advances in talent identification in sport.", "text": "The incentives for sport organizations to identify talented athletes from a young age continue to grow, yet effective talent identification remains a challenging task. This opinion paper examines recent advances in talent identification, focusing in particular on the emergence of new approaches that may offer promise to identify talent (e.g., small-sided games, genetic testing, and advanced statistical analyses). We appraise new multi-disciplinary and large-scale population studies of talent identification, provide a consideration of the most recent psychological predictors of performance, examine the emergence of new approaches that strive to diminish biases in talent identification, and look at the rise in interest in talent identification in Paralympic sport."}
{"_id": "875fef0a5d58d33267089ebbff7e44aff4ee98f7", "title": "Designing Graph Database Models from Existing Relational Databases", "text": "In this paper, a method for transforming a relational database to a graph database model is described. In this approach, the dependency graphs for the entities in the system are transformed into star graphs. This star graph model is transformed into a hyper graph model for the relational database, which, in turn, can be used to develop the domain relationship model that can be converted in to a graph database model."}
{"_id": "d36cd15ef245eb1c604b3f8286dde3dda19ffefa", "title": "A Nonisolated Single-Phase UPS Topology With 110-V/220-V Input\u2013Output Voltage Ratings", "text": "A circuit configuration of a single-phase nonisolated online uninterruptible power supply (UPS) with 110-V/220-V input\u2013output voltage ratings is proposed, allowing the bypass operation without a transformer even if the input voltage is different from the output voltage. The converter consists of an ac\u2013dc/dc\u2013dc three-level boost converter combined with a double half-bridge inverter. In this type of configuration size, cost and efficiency are improved due to the reduced number of switches and batteries, and also, no low-frequency isolation transformer is required to realize bypass operation because of the common neutral connection. Both stages of the proposed circuit operate at high frequency by using a passive nondissipative snubber circuit in the boost converter and insulated-gate bipolar-transistor switches in the double half-bridge inverter, with low conduction losses, low tail current, and low switching losses. Principle of operation and experimental results for a 2.6-kVA prototype are presented to demonstrate the UPS performance."}
{"_id": "c63a8640e5d426b1c8b0ca2ea45c20c265b3f2ad", "title": "Class Noise vs. Attribute Noise: A Quantitative Study", "text": "Real-world data is never perfect and can often suffer from corruptions (noise) that may impact interpretations of the data, models created from the data and decisions made based on the data. Noise can reduce system performance in terms of classification accuracy, time in building a classifier and the size of the classifier. Accordingly, most existing learning algorithms have integrated various approaches to enhance their learning abilities from noisy environments, but the existence of noise can still introduce serious negative impacts. A more reasonable solution might be to employ some preprocessing mechanisms to handle noisy instances before a learner is formed. Unfortunately, rare research has been conducted to systematically explore the impact of noise, especially from the noise handling point of view. This has made various noise processing techniques less significant, specifically when dealing with noise that is introduced in attributes. In this paper, we present a systematic evaluation on the effect of noise in machine learning. Instead of taking any unified theory of noise to evaluate the noise impacts, we differentiate noise into two categories: class noise and attribute noise, and analyze their impacts on the system performance separately. Because class noise has been widely addressed in existing research efforts, we concentrate on attribute noise. We investigate the relationship between attribute noise and classification accuracy, the impact of noise at different attributes, and possible solutions in handling attribute noise. Our conclusions can be used to guide interested readers to enhance data quality by designing various noise handling mechanisms."}
{"_id": "a72f0f7a42b812591d6c4889fe181a77fab62d88", "title": "Diagnosis of retinal health in digital fundus images using continuous wavelet transform (CWT) and entropies", "text": "Vision is paramount to humans to lead an active personal and professional life. The prevalence of ocular diseases is rising, and diseases such as glaucoma, Diabetic Retinopathy (DR) and Age-related Macular Degeneration (AMD) are the leading causes of blindness in developed countries. Identifying these diseases in mass screening programmes is time-consuming, labor-intensive and the diagnosis can be subjective. The use of an automated computer aided diagnosis system will reduce the time taken for analysis and will also reduce the inter-observer subjective variabilities in image interpretation. In this work, we propose one such system for the automatic classification of normal from abnormal (DR, AMD, glaucoma) images. We had a total of 404 normal and 1082 abnormal fundus images in our database. As the first step, 2D-Continuous Wavelet Transform (CWT) decomposition on the fundus images of two classes was performed. Subsequently, energy features and various entropies namely Yager, Renyi, Kapoor, Shannon, and Fuzzy were extracted from the decomposed images. Then, adaptive synthetic sampling approach was applied to balance the normal and abnormal datasets. Next, the extracted features were ranked according to the significances using Particle Swarm Optimization (PSO). Thereupon, the ranked and selected features were used to train the random forest classifier using stratified 10-fold cross validation. Overall, the proposed system presented a performance rate of 92.48%, and a sensitivity and specificity of 89.37% and 95.58% respectively using 15 features. This novel system shows promise in detecting abnormal fundus images, and hence, could be a valuable adjunct eye health screening tool that could be employed in polyclinics, and thereby reduce the workload of specialists at hospitals."}
{"_id": "7823b958464144b4cc3996723e0114047037f3d4", "title": "Mendeley readership counts: An investigation of temporal and disciplinary differences", "text": "Mike Thelwall, Pardeep Sud Scientists and managers using citation-based indicators to help evaluate research cannot evaluate recent articles because of the time needed for citations to accrue. Reading occurs before citing, however, and so it makes sense to count readers rather than citations for recent publications. To assess this, Mendeley readers and citations were obtained for articles from 2004 to late 2014 in 5 broad categories (agriculture, business, decision science, pharmacy, and the social sciences) and 50 subcategories. In these areas, citation counts tended to increase with every extra year since publication, and readership counts tended to increase faster initially but then stabilise after about 5 years. The correlation between citations and readers was also higher for longer time periods, stabilising after about five years. Although there were substantial differences between broad fields and smaller differences between subfields, the results confirm the value of Mendeley reader counts as early scientific impact indicators."}
{"_id": "3c997161dd8e35ae47179e0aafd4d3b3e3d116e7", "title": "To buy or not to buy online: adopters and non-adopters of online shopping in Singapore", "text": "The Internet, as a dynamic virtual medium for selling and buying information, services and products, is gaining increasing attention from researchers and practitioners. In this study, we examine the perceptions of adopters and non-adopters of online shopping in terms of demographic profile, consumer expectations of online stores, advantages and problems of online shopping and transaction cost. In addition, we also examine the types of products purchased, frequency of online purchase and the extent of communication with e-commerce vendors. The findings are useful in explaining consumers\u2019 buying behaviour in the electronic marketplace. Implications of the results are discussed."}
{"_id": "f21a91918eb32bd35438e9d0d19fb16ca6e73c6a", "title": "When Ignorance is Bliss : Information , Fairness , and Bargaining Efficiency", "text": "Most theories of legal discovery assume that sharing information among disputing parties will lead to convergence of expectations and will facilitate settlement. However, psychological research shows that shared information, if it is open to multiple interpretations, is likely to be interpreted egocentrically by the disputants, which can cause beliefs to diverge rather than converge. We present results from a bargaining experiment which shows that information sharing leads to divergence of expectations, and to settlement delays, when the information exchanged is amenable to multiple interpretations. By contrast, when there is only one obvious interpretation, information sharing leads to convergence of expectations and speeds settlement. We show, further, that information sharing moderates the relationship between size of the bargaining zone and the prospects for settlement. Information and efficiency 3 When Ignorance is Bliss: Information, Fairness, and Bargaining Efficiency"}
{"_id": "812fc94d95dd140052bc00da3b74cf675fc09c42", "title": "GSM Based Home Automation System UsingApp-Inventor for Android Mobile Phone", "text": "Nowadays, the remote Home Automation turns out to be more and more significant and appealing. It improves the value of our lives by automating various electrical appliances or instruments. This paper describes GSM (Global System Messaging) based secured device control system using App Inventor for Android mobile phones. App Inventor is a latest visual programming platform for developing mobile applications for Android-based smart phones. The Android Mobile Phone Platform becomes more and more popular among software developers, because of its powerful capabilities and open architecture. It is a fantastic platform for the real world interface control, as it offers an ample of resources and already incorporates a lot of sensors. No need to write programming codes to develop apps in the App Inventor, instead it provides visual design interface as the way the apps looks and use blocks of interlocking components to control the app\u2019s behaviour. The App Inventor aims to make programming enjoyable and accessible to novices."}
{"_id": "175de84e1c7ce58cd969372a54461d7499086d46", "title": "BitIodine: Extracting Intelligence from the Bitcoin Network", "text": "Bitcoin, the famous peer-to-peer, decentralized electronic currency system, allows users to benefit from pseudonymity, by generating an arbitrary number of aliases (or addresses) to move funds. However, the complete history of all transactions ever performed, called \u201cblockchain\u201d, is public and replicated on each node. The data it contains is difficult to analyze manually, but can yield a high number of relevant information. In this paper we present a modular framework, BitIodine, which parses the blockchain, clusters addresses that are likely to belong to a same user or group of users, classifies such users and labels them, and finally visualizes complex information extracted from the Bitcoin network. BitIodine labels users semi-automatically with information on their identity and actions which is automatically scraped from openly available information sources. BitIodine also supports manual investigation by finding paths and reverse paths between addresses or users. We tested BitIodine on several real-world use cases, identified an address likely to belong to the encrypted Silk Road cold wallet, or investigated the CryptoLocker ransomware and accurately quantified the number of ransoms paid, as well as information about the victims. We release a prototype of BitIodine as a library for building Bitcoin forensic analysis tools."}
{"_id": "5ae4e852d333564923e1b6caf6b009729df6ca6a", "title": "Evaluating User Privacy in Bitcoin", "text": "Bitcoin is quickly emerging as a popular digital payment system. However, in spite of its reliance on pseudonyms, Bitcoin raises a number of privacy concerns due to the fact that all of the transactions that take place are publicly announced in the system. In this paper, we investigate the privacy provisions in Bitcoin when it is used as a primary currency to support the daily transactions of individuals in a university setting. More specifically, we evaluate the privacy that is provided by Bitcoin (i) by analyzing the genuine Bitcoin system and (ii) through a simulator that faithfully mimics the use of Bitcoin within a university. In this setting, our results show that the profiles of almost 40% of the users can be, to a large extent, recovered even when users adopt privacy measures recommended by Bitcoin. To the best of our knowledge, this is the first work that comprehensively analyzes, and evaluates the privacy implications of Bitcoin."}
{"_id": "af3c5d46ea4f54a323017c7c430e4d0cc45e4abc", "title": "Secure and anonymous decentralized Bitcoin mixing", "text": "The decentralized digital currency Bitcoin presents an anonymous alternative to the centralized banking system and indeed enjoys widespread and increasing adoption. Recent works, however, show how users can be reidentified and their payments linked based on Bitcoin\u2019s most central element, the blockchain, a public ledger of all transactions. Thus, many regard Bitcoin\u2019s central promise of financial privacy as broken. In this paper, we propose CoinParty, an efficient decentralized mixing service that allows users to reestablish their financial privacy in Bitcoin and related cryptocurrencies. CoinParty, through a novel combination of decryption mixnets with threshold signatures, takes a unique place in the design space of mixing services, combining the advantages of previously proposed centralized and decentralized mixing services in one system. Our prototype implementation of CoinParty scales to large numbers of users and achieves anonymity sets by orders of magnitude higher than related work as we quantify by analyzing transactions in the actual Bitcoin blockchain. CoinParty can easily be deployed by any individual group of users, i.e., independent of any third parties, or provided as a commercial or voluntary service, e.g., as a community service by privacy-aware organizations."}
{"_id": "28f5f8dc2f2f9f2a4e49024fe6aa7e9a63b23ab0", "title": "Vision-based bicycle detection and tracking using a deformable part model and an EKF algorithm", "text": "Bicycles that share the road with intelligent vehicles present particular challenges for automated perception systems. Bicycle detection is important because bicycles share the road with vehicles and can move at comparable speeds in urban environments. From a computer vision standpoint, bicycle detection is challenging as bicycle's appearance can change dramatically between viewpoints and a person riding on the bicycle is a non-rigid object. In this paper, we present a vision-based framework to detect and track bicycles that takes into account these issues. A mixture model of multiple viewpoints is defined and trained via a Support Vector Machine (SVM) to detect bicycles under a variety of circumstances. Each component of the model uses a part-based representation and known geometric context is used to improve overall detection efficiency. An extended Kalman filter (EKF) is used to estimate the position and velocity of the bicycle in vehicle coordinates. We demonstrate the effectiveness of this approach through a series of experiments run on video data of moving bicycles captured from a vehicle-mounted camera"}
{"_id": "d4e797e60696bcc453e8fa9e93ea0ebfbbabedfd", "title": "A model-free mapless navigation method for mobile robot using reinforcement learning", "text": "In this paper, a simple and efficient method is proposed using one of the basic reinforcement learning tool Q-learning. This method is to achieve the aim of mobile robot maples navigation with collision-free. Thus, an end-to-end navigation model is built, which uses lidar sensor massage as input and moving command of mobile robot as output, so as to simplify the process of environmental perception and decision making on action. Through simulation and experiments, its effectiveness is proved. After trained by this method, mobile robot can safely reach the navigation target in an unknown environment without any prior demonstration. In addition, an extensive quantitative and qualitative evaluation of this method is presented by the comparison with traditional path planning method based on foregone global environment map."}
{"_id": "b93d79021e07ab7780307df0234bab13e1a1d965", "title": "Power Density Limits and Design Trends of High-Speed Permanent Magnet Synchronous Machines", "text": "Electrical machines will always be a subcomponent in a larger system and should therefore be designed in conjunction with other major parts. Only a limited set of parameters, such as weight, size, or speed, is often relevant at a system level, but accurately determining those parameters requires performing a time-consuming detailed design of the electrical machine. This conflicts with a top down system design approach, where initially many different system topologies need to be considered. Hence, there is a need for a consistent and detailed description of the dependency of the interface parameters on machine properties. This paper proposes a method to obtain detailed design trends of electrical machines with a very high level of detail, based on large-scale application of multiobjective optimization using finite element models. The method is demonstrated by quantifying the power density of surface-mounted permanent magnet (SPM) machines versus rotor surface speed, power level, and cooling approach. It is found that power density strongly depends on cooling approach, followed by rotor surface speed, but barely on power level. Ultimately, this paper can be used both as an example when performing a similar design study and as a reference on detailed high-level and low-level machine design trends."}
{"_id": "dcd96df43c3f1577e6bd1634cfd58ddc891d8e66", "title": "Long-term electroencephalogram measurement using polymer-based dry microneedle electrode", "text": "This paper reports a successful electroencephalogram (EEG) measurement for hours using polymer-based microneedle electrodes. Needle electrodes can penetrate through the stratum corneum and therefore, do not require any skin treatment for high-quality EEG measurement. The tested needles consist of SU-8 needles, a silver film, and a nanoporous parylene protective film. In prior work, fabrication processes of polymer-based microneedles, which are considered to be more robust than silicon microneedles was developed. In this work, the electrical impedance was measured at the forehead and was verified to maintain 6 k\u03a9 for 3 h without any skin treatment, which was low enough for EEG measurement. A headset was designed to keep the contact between the needles and skin and with its help, EEG was successfully measured from the frontal poles. The acquired signals were found to be as high quality as the standard wet electrode that required skin treatment and uncomfortable pasting of conductive gel. The developed electrodes are readily applicable to record brain activities for hours while applying little mental and physical stress to the users."}
{"_id": "960b239659bcd3ce15cd6cfc9cc9932302ba2ec9", "title": "On Maximizing Sensor Network Lifetime by Energy Balancing", "text": "Many physical systems, such as water/electricity distribution networks, are monitored by battery-powered wireless-sensor networks (WSNs). Since battery replacement of sensor nodes is generally difficult, long-term monitoring can be only achieved if the operation of the WSN nodes contributes to long WSN lifetime. Two prominent techniques to long WSN lifetime are 1) optimal sensor activation and 2) efficient data gathering and forwarding based on compressive sensing. These techniques are feasible only if the activated sensor nodes establish a connected communication network (connectivity constraint), and satisfy a compressive sensing decoding constraint (cardinality constraint). These two constraints make the problem of maximizing network lifetime via sensor node activation and compressive sensing NP-hard. To overcome this difficulty, an alternative approach that iteratively solves energy balancing problems is proposed. However, understanding whether maximizing network lifetime and energy balancing problems are aligned objectives is a fundamental open issue. The analysis reveals that the two optimization problems give different solutions, but the difference between the lifetime achieved by the energy balancing approach and the maximum lifetime is small when the initial energy at sensor nodes is significantly larger than the energy consumed for a single transmission. The lifetime achieved by energy balancing is asymptotically optimal, and that the achievable network lifetime is at least 50% of the optimum. Analysis and numerical simulations quantify the efficiency of the proposed energy balancing approach."}
{"_id": "46e8299d64a00f08f4a5bd689108c00cf42860ec", "title": "Knowledge-based processing of complex stock market events", "text": "Usage of background knowledge about events and their relations to other concepts in the application domain, can improve the quality of event processing. In this paper, we describe a system for knowledge-based event detection of complex stock market events based on available background knowledge about stock market companies. Our system profits from data fusion of live event stream and background knowledge about companies which is stored in a knowledge base. Users of our system can express their queries in a rule language which provides functionalities to specify semantic queries about companies in the SPARQL query language for querying the external knowledge base and combine it with event data stream. Background makes it possible to detect stock market events based on companies attributes and not only based on syntactic processing of stock price and volume."}
{"_id": "92a53264ca7c6dad488f5d997c457a732686c2e2", "title": "Spatial Attention Deep Net with Partial PSO for Hierarchical Hybrid Hand Pose Estimation", "text": "[3] Sharp, T., Keskin, C., Robertson, D., Taylor, J., Shotton, J., Leichter, D.K.C.R.I., Wei, A.V.Y., Krupka, D.F.P.K.E., Fitzgibbon, A., Izadi, S.: Accurate, robust, and flexible real-time hand tracking. In: CHI (2015) [4] Oberweger, M., Wohlhart, P., Lepetit, V.: Training a feedback loop for hand pose estimation. In: ICCV (2015) [18] Oberweger, M., Wohlhart, P., Lepetit, V.: Hands deep in deep learning for hand pose estimation. arXiv preprint arXiv:1502.06807 (2015) [19] Tang, D., Taylor, J., Kohli, P., Keskin, C., Kim, T.K., Shotton, J.: Opening the black box: Hierarchical sampling optimization for estimating human hand pose. In: ICCV (2015) * Indicates equal contribution. Results T \u03b8"}
{"_id": "396619128638c662476ce3eaf5b52eb43ff636e7", "title": "Structured Sparse Method for Hyperspectral Unmixing", "text": "Hyperspectral Unmixing (HU) has received increasing attention in the past decades due to its ability of unveiling information latent in hyperspectral data. Unfortunately, most existing methods fail to take advantage of the spatial information in data. To overcome this limitation, we propose a Structured Sparse regularized Nonnegative Matrix Factorization (SS-NMF) method from the following two aspects. First, we incorporate a graph Laplacian to encode the manifold structures embedded in the hyperspectral data space. In this way, the highly similar neighboring pixels can be grouped together. Second, the lasso penalty is employed in SS-NMF for the fact that pixels in the same manifold structure are sparsely mixed by a common set of relevant bases. These two factors act as a new structured sparse constraint. With this constraint, our method can learn a compact space, where highly similar pixels are grouped to share correlated sparse representations. Experiments on real hyperspectral data sets with different noise levels demonstrate that our method outperforms the state-of-the-art methods significantly. Index Terms Hyperspectral Unmixing (HU), Hyperspectral Image Analysis, Structured Sparse NMF (SS-NMF), Mixed Pixel, Nonnegative Matrix Factorization (NMF)."}
{"_id": "f4d8dc512bab3f35063c205296ae13823512fd48", "title": "Electronic energy transfer.", "text": "COVER ARTICLE Fleming et al. Quantum coherence and its interplay with protein environments in photosynthetic electronic energy transfer PERSPECTIVE Albinsson and M\u00e5rtensson Excitation energy transfer in donor\u2013 bridge\u2013acceptor systems Includes a collection of articles on the theme of electronic energy transfer Recent experiments suggest that electronic energy transfer in photosynthetic pigment-protein complexes involves long-lived quantum coherence among electronic excitations of pigments. The observation has led to the suggestion that quantum coherence might play a significant role in achieving the remarkable efficiency of photosynthetic light harvesting. At the same time, the observation has raised questions regarding the role of the surrounding protein in protecting the quantum coherence. In this Perspective, we provide an overview of recent experimental and theoretical investigations of photosynthetic electronic energy transfer paying particular attention to the underlying mechanisms of long-lived quantum coherence and its non-Markovian interplay with the protein environment."}
{"_id": "dea53a884f878bbe65b9f7bc58bd2f673470628e", "title": "Text-based domain ontology building using Tf-Idf and metric clusters techniques", "text": "The paper describes the methodology used to develop a construction domain ontology, taking into account the wealth of existing semantic resources in the sector ranging from dictionaries to thesauri. Given the characteristics and settings of the construction industry, a modular, architecture-centric approach was adopted to structure and develop the ontology. The paper argues that taxonomies provide an ideal backbone for any ontology project. Therefore, a construction industry standard taxonomy was used to provide the seeds of the ontology, enriched and expanded with additional concepts extracted from large discipline-oriented document bases using information retrieval (IR) techniques."}
{"_id": "a3570fd9ac6ae016e88e62d1526f54e4cfba45a2", "title": "A Survey on Parts of Speech Tagging for Indian Languages", "text": "Part of speech (POS) tagging is basically the process of automatically assigning its lexical category to each word according to its context and definition. Each word of sentence is marked in croups as corresponding to a particular part of speech like noun, verb, adjective and adverb. POS serves as a first step in natural language process applications like information extraction, parsing, and word sense disambiguation etc. this paper presents a survey on Part of Speech taggers used for Indian languages. The main problem of tagging is to find proper way to tag each word according to particular part of speech. Very less work has been done for POS tagging on Indian languages mainly due to morphologically richness. In this paper, various techniques are discussed that are used for development of POS"}
{"_id": "e0d65a8195e2da1a4f354828a98e43ed94ac1c5d", "title": "SINGLE PHASE PWM RECTIFIER IN TRACTION APPLICATION", "text": "This research has been motivated by industrial demand for single phase PWM rectifier in traction application. This paper presents an advanced control structure design for a single phase PWM rectifier. The PWM rectifier is controlled to consume nearly sinusoidal current with power factor nearing to unity. The control structure consists of a DC-link voltage controller and a current controller implemented by a proportionalresonant controller using a fast phase angle and frequency estimator. The estimation algorithm is derived from the weighted least-squares estimation method. The feasibility of the proposed control structure is confirmed by experimental tests performed on designed laboratory prototype."}
{"_id": "b9bd0c9c9fdf937a1b542dfebe08474be89ad6b3", "title": "System-Level Performance of Downlink Non-Orthogonal Multiple Access (NOMA) under Various Environments", "text": "Non-orthogonal multiple access (NOMA) is a promising multiple access scheme for further improving the spectrum efficiency compared to that for orthogonal multiple access (OMA) in the 5th Generation (5G) mobile communication systems. All of the existing evaluations for NOMA focus on the macrocell deployment since NOMA fully utilizes the power domain and the difference in channel gains, e.g., path loss, between users, which is typically sufficiently large in macrocells. Currently, small cells are becoming important and being studied for future Long-Term Evolution (LTE) enhancements in order to improve further the system performance. Thus, it is of great interest to study the performance of NOMA for small cell deployment under various environments. This paper investigates the system level performance of NOMA in small cells considering practical assumptions such as the single user multiple-input multiple-output (SU-MIMO) technique, adaptive modulation and coding (AMC), feedback channel quality indicator (CQI). Some of the key NOMA specific functionalities, including multi-user paring and transmit power allocation are also taken into account in the evaluation. Based on computer simulations, we show that for both macrocell and small cell deployments, NOMA can still provide a larger throughput performance gain compared to that for OMA."}
{"_id": "5c92dba116be39f7a09d909c8b29424df3751145", "title": "MOS PARAMETER EXTRACTION AND OPTIMIZATION WITH GENETIC ALGORITHM", "text": "Extracting an optimal set of parameter values for a MOS device is great importance in contemporary technology is a complex problem. Traditional methods of parameter extraction can produce far from optimal solutions because of the presence of local optimum in the solution space. Genetic algorithms are well suited for finding near optimal solutions in irregular parameter spaces. In this study*, We have applied a genetic algorithm to the problem of device model parameter extraction and are able to produce models of superior accuracy in much less time and with less reliance on human expertise. MOS transistor\u2019s parameters have been extracted and optimized with genetic algorithm. 0.35\u03bcm fabricated by C35 process have been used for the results of experimental studies of parameter extraction. Extracted parameters\u2019 characteristic data results have been compared with measurement results. Different values of parameters of genetic algorithm, such as population size, crossover rate , and generation size are compared by different tests."}
{"_id": "7d986dac610e20441adb9161e5466c88932626e9", "title": "A study of smoothing methods for language models applied to information retrieval", "text": "Language modeling approaches to information retrieval are attractive and promising because they connect the problem of retrieval with that of language model estimation, which has been studied extensively in other application areas such as speech recognition. The basic idea of these approaches is to estimate a language model for each document, and to then rank documents by the likelihood of the query according to the estimated language model. A central issue in language model estimation is smoothing, the problem of adjusting the maximum likelihood estimator to compensate for data sparseness. In this article, we study the problem of language model smoothing and its influence on retrieval performance. We examine the sensitivity of retrieval performance to the smoothing parameters and compare several popular smoothing methods on different test collections. Experimental results show that not only is the retrieval performance generally sensitive to the smoothing parameters, but also the sensitivity pattern is affected by the query type, with performance being more sensitive to smoothing for verbose queries than for keyword queries. Verbose queries also generally require more aggressive smoothing to achieve optimal performance. This suggests that smoothing plays two different role---to make the estimated document language model more accurate and to \"explain\" the noninformative words in the query. In order to decouple these two distinct roles of smoothing, we propose a two-stage smoothing strategy, which yields better sensitivity patterns and facilitates the setting of smoothing parameters automatically. We further propose methods for estimating the smoothing parameters automatically. Evaluation on five different databases and four types of queries indicates that the two-stage smoothing method with the proposed parameter estimation methods consistently gives retrieval performance that is close to---or better than---the best results achieved using a single smoothing method and exhaustive parameter search on the test data."}
{"_id": "3c8f10906c705b16ba38ded5b5960f5e803b2a8b", "title": "RCS reduction of Antipodal Vivaldi Antenna", "text": "A novel Antipodal Vivaldi Antenna with low radar cross section (RCS) is proposed in this paper. By using flat corrugated slotline to replace exponential gradient curve on both sides of the antenna, the RCS in the endfire direction can be reduced in the operating band of 4.3 GHz-12 GHz when the incident wave is perpendicular to the antenna plane. Mean while, the lowest operating frequency of the antenna is reduced from 4.3 GHz to 3.8 GHz, and radiation performance keeps stably."}
{"_id": "4db968f2a87761ff2475492e49d36ab364e245ea", "title": "Can soft biometric traits assist user recognition", "text": "Biometrics is rapidly gaining acceptance as the technology that can meet the ever increasing need for security in critical applications. Biometric systems automatically recognize individuals based on their physiological and behavioral characteristics. Hence, the fundamental requirement of any biometric recognition system is a human trait having several desirable properties like universality, distinctiveness, permanence, collectability, acceptability, and resistance to circumvention. However, a human characteristic that possesses all these properties has not yet been identified. As a result, none of the existing biometric systems provide perfect recognition and there is a scope for improving the performance of these systems. Although characteristics like gender, ethnicity, age, height, weight and eye color are not unique and reliable, they provide some information about the user. We refer to these characteristics as \u201csoft\u201d biometric traits and argue that these traits can complement the identity information provided by the primary biometric identifiers like fingerprint and face. This paper presents the motivation for utilizing soft biometric information and analyzes how the soft biometric traits can be automatically extracted and incorporated in the decision making process of the primary biometric system. Preliminary experiments were conducted on a fingerprint database of 160 users by synthetically generating soft biometric traits like gender, ethnicity, and height based on known statistics. The results show that the use of additional soft biometric user information significantly improves ( \u2248 6%) the recognition performance of the fingerprint biometric system."}
{"_id": "696ec77d7bcf5b71039877e07dbbdb341d04b4e3", "title": "Path Integral Networks: End-to-End Differentiable Optimal Control", "text": "In this paper, we introduce Path Integral Networks (PI-Net), a recurrent network representation of the Path Integral optimal control algorithm. The network includes both system dynamics and cost models, used for optimal control based planning. PI-Net is fully differentiable, learning both dynamics and cost models end-to-end by back-propagation and stochastic gradient descent. Because of this, PI-Net can learn to plan. PI-Net has several advantages: it can generalize to unseen states thanks to planning, it can be applied to continuous control tasks, and it allows for a wide variety learning schemes, including imitation and reinforcement learning. Preliminary experiment results show that PI-Net, trained by imitation learning, can mimic control demonstrations for two simulated problems; a linear system and a pendulum swing-up problem. We also show that PI-Net is able to learn dynamics and cost models latent in the demonstrations."}
{"_id": "b9b2f317b2d4ea9c6061d196eb15d97707dafb98", "title": "From simulation to experimentable digital twins: Simulation-based development and operation of complex technical systems", "text": "Way beyond its industrial roots, robotics evolved to be a highly interdisciplinary field with a variety of applications in a smart world. The eRobotics methodology addresses this evolution by providing platforms where roboticist can exchange ideas and collaborate with experts from other disciplines for developing complex technical systems and automated solutions. Virtual Testbeds are the central method in eRobotics, where complex technical systems and their interaction with prospective working environments are first designed, programmed, controlled and optimized in simulation before commissioning the real system. On the other hand, Industry 4.0 concepts promote the notion of \u201cDigital Twins\u201d, virtual substitutes of real world objects consisting of virtual representations and communication capabilities making up smart objects acting as intelligent nodes inside the internet of things and services. Combining these two approaches, Virtual Testbeds and Digital Twins, leads to a new kind of \u201cExperimentable Digital Twins\u201d breaking new ground in the simulation-based development and operation of complex technical systems. In this contribution, we describe how such \u201cExperimentable Digital Twins\u201d can act as the very core of simulation-based development processes streamlining the development process, enabling detailed simulations at system level and realizing intelligent systems. Besides this, the multiple use of models and simulations in various scenarios significantly reduces the effort for the use of simulation technology throughout the life cycle of complex technical systems."}
{"_id": "0961683c0bdc4556ea673d9dfcc04aacc3a12859", "title": "Design of a new broadband EMC double ridged guide horn antenna", "text": "A new design for broadband EMC double ridged guide horn (DRGH) antennas is presented. A conventional 1-18 GHz double ridged guide horn has been investigated rigorously. Then some modifications have been performed in the structure of the antenna. Elimination of radiation pattern deficiencies especially at higher frequencies accompanying with better EM characteristics of the antenna have been the main purposes for these modifications. The main modifications are imposed on the profile of ridges, H-plane flares and E-plane flares. The resulting antenna not only has considerably better performance but also has smaller physical dimensions and less weight in comparison with the conventional one."}
{"_id": "25782ed91d7c564628366a2e1edaaa02f9eed7c8", "title": "Sensitivity analysis of a 1 to 18 GHz broadband DRGH antenna", "text": "In this paper some properties of a 1-18 GHz double ridged guide horn antenna (DRGH) with a feeding section including coaxial input and a back shorting plate are rigorously investigated. Most desired electromagnetic characteristics of this antenna is achieved by empirically finding sizes for different parameters, however there is no explanation for the effect of most of them in open literature. In order to have a clear idea of the effects of different parameters, a 1-18 GHz DRGH has been simulated with HFSS. It is understood from the results that the parameters near feeding point such as the initial distance between ridges, the distance between the center of the probe and the cavity, and the radius of the inserted probe play a significant role in controlling VSWR and gain and in shaping the radiation pattern for high frequencies"}
{"_id": "3b62fa258158867429406d949409145471d5116c", "title": "Partially Dielectric-Loaded Ridged Horn Antenna Design for Ultrawideband Gain and Radiation Performance Enhancement", "text": "In this letter, a partially dielectric-loaded double-ridged horn antenna (PD-DRHA) that operates at 1-18-GHz frequency band is proposed for ultrawideband applications. The partially dielectric loading technique using a small lens inward the aperture has been implemented to enhance the gain of the standard double-ridged horn antenna (DRHA), without changing the physical dimensions. Approximately 5 dB gain increment level is achieved from 5 to 15 GHz with slight changes on return loss characteristics. Moreover, the pattern distortion of DRHA typically between 12-18-GHz band is improved. The antenna gain, radiation pattern, and input reflection performance measurements are presented with comparisons."}
{"_id": "702ac637530ffa3167809ca17713bb2fbf6fac8e", "title": "A NOVEL DUAL-POLARIZED DOUBLE-RIDGED HORN ANTENNA FOR WIDEBAND APPLICATIONS", "text": "Dual-polarized antenna is widely used in communication systems such as ECM and DF systems. In this paper a novel doubleridged horn antenna with dual polarizations is introduced for frequency range of 8\u201318 GHz. Common double ridged horn antennas have single polarization over the operating frequency. We have used five layers polarizer to provide dual polarizations performance of the doubleridged horn antenna. In order to achieve dual polarizations the strips width, strips spacing and layers distances are optimized. It is worth mentioning that the corresponding VSWR of the antenna during the optimization process should be maintain below a certain value (VSWR< 2). Simulation results show that the proposed antenna yields dual polarizations performance and low VSWR over the operating frequency. We have used CST software for antenna simulation which is based on the finite integral technique."}
{"_id": "d33bcef4d6f46c94ed232e828db2f5614549eff0", "title": "An Improved Design for a 1\u201318 GHz Double-Ridged Guide Horn Antenna", "text": "It is a well known fact that the traditional 1-18 GHz double ridge guide horn (DRGH) antenna suffers from pattern deterioration above 12 GHz. At these frequencies, instead of maintaining a single main lobe radiation pattern, the pattern splits up into four lobes. It was shown in the literature that higher order modes are causing the pattern breakup. A benchmark study is performed to establish the performance of typical current and historic 1-18 GHz DRGH antennas. The performance of the antennas are evaluated in terms of gain, VSWR and radiation patterns. An improved 1-18 GHz DRGH antenna is presented. The new design has better gain and VSWR performance without any pattern deterioration. It also consists of significantly fewer parts, reducing the possibility of performance deterioration due to gaps between parts. Two prototypes of the new design were manufactured and tested with excellent agreement between measured and simulated results. The aperture dimensions of the new design are identical to that of the traditional DRGH, making it the only 1-18 GHz DRGH without pattern breakup whose aperture dimensions comply with the requirements specified in MIL-STD-461F - 24.2 by 13.6 cm."}
{"_id": "af0596f093b2aeb57292ace04e2b251e5f793b15", "title": "AFFDEX SDK: A Cross-Platform Real-Time Multi-Face Expression Recognition Toolkit", "text": "We present a real-time facial expression recognition toolkit that can automatically code the expressions of multiple people simultaneously. The toolkit is available across major mobile and desktop platforms (Android, iOS, Windows). The system is trained on the world's largest dataset of facial expressions and has been optimized to operate on mobile devices and with very few false detections. The toolkit offers the potential for the design of novel interfaces that respond to users' emotional states based on their facial expressions. We present a demonstration application that provides real-time visualization of the expressions captured by the camera."}
{"_id": "c916f54aed75c6aeda669535da017f94aa115e37", "title": "Nonparametric variable importance using an augmented neural network with multi-task learning", "text": "In predictive modeling applications, it is often of interest to determine the relative contribution of subsets of features in explaining the variability of an outcome. It is useful to consider this variable importance as a function of the unknown, underlying data-generating mechanism rather than the specific predictive algorithm used to fit the data. In this paper, we connect these ideas in nonparametric variable importance to machine learning, and provide a method for efficient estimation of variable importance when building a predictive model using a neural network. We show how a single augmented neural network with multi-task learning simultaneously estimates the importance of many feature subsets, improving on previous procedures for estimating importance. We demonstrate on simulated data that our method is both accurate and computationally efficient, and apply our method to both a study of heart disease and for predicting mortality in ICU patients."}
{"_id": "b4c4c63bb1079ea7be9f30a881a80ecb8b4f4d13", "title": "Temporal Data Mining: An Overview", "text": "\u2014 To classify data mining problems and algorithms we used two dimensions: data type and type of mining operations. One of the main issue that arise during the data mining process is treating data that contains temporal information. The area of temporal data mining has very much attention in the last decade because from the time related feature of the data, one can extract much significant information which can not be extracted by the general methods of data mining. Many interesting techniques of temporal data mining were proposed and shown to be useful in many applications. Since temporal data mining brings together techniques from different fields such as databases, statistics and machine learning the literature is scattered among many different sources. In this paper, we present a survey on techniques of temporal data mining."}
{"_id": "2c7bb4f13487ca0eeeb27101bc1896ec661a7eb6", "title": "A practical solution for scripting language compilers", "text": "Although scripting languages are becoming increasingly popular, even mature scripting language implementations remain interpreted. Several compilers and reimplementations have been attempted, generally focusing on performance.\n Based on our survey of these reimplementations, we determine that there are three important features of scripting languages that are difficult to compile or reimplement. Since scripting languages are defined primarily through the semantics of their original implementations, they often change semantics between releases. They provide large standard libraries, which are difficult to re-use, and costly to reimplement. They provide C APIs, used both for foreign-function-interfaces and to write third-party extensions. These APIs typically have tight integration with the original implementation. Finally, they support run-time code generation. These features make the important goal of correctness difficult to achieve.\n We present a technique to support these features in an ahead-of-time compiler for PHP. Our technique uses the original PHP implementation through the provided C API, both in our compiler, and an our generated code. We support all of these important scripting language features, particularly focusing on the correctness of compiled programs. Additionally, our approach allows us to automatically support limited future language changes. We present a discussion and performance evaluation of this technique, which has not previously been published."}
{"_id": "bc9eee0eeab7b20eec6f6f1881727739e21e8855", "title": "Clinical narrative classification using discriminant word embeddings with ELM", "text": "Clinical texts are inherently complex due to the medical domain expertise required for content comprehension. In addition, the unstructured nature of these narratives poses a challenge for automatically extracting information. In natural language processing, the use of word embeddings are an effective approach to generate word representations (vectors) in a low dimensional space. In this paper we use a log-linear model (a type of neural language model) and Linear Discriminant Analysis with a kernel-based Extreme Learning Machine (ELM) to map the clinical texts to the medical code. Experimental results on clinical texts indicate improvement with ELM in comparison to SVM and neural network approaches."}
{"_id": "6c293de36f9fe0711608410d956bb0dd289908c7", "title": "The Pricing of Investment Grade Credit Risk during the Financial Crisis", "text": "This paper uses a structural model to investigate the pricing of investment grade credit risk during the \u0085nancial crisis. Our analysis suggests that the dramatic recent widening of credit spreads is highly consistent with the decline in the equity market, the increase in its long-term volatility, and an improved investor appreciation of the risks embedded in structured products. In contrast to the main argument in favor of using government funds to help purchase structured credit securities, we \u0085nd little evidence that suggests these markets are experiencing \u0085re sales. Preliminary and Incomplete. Coval: Harvard Business School; jcoval@hbs.edu. Jurek: Bendheim Center for Finance, Princeton University; jjurek@princeton.edu. Sta\u00a4ord: Harvard Business School; esta\u00a4ord@hbs.edu. We thank Stephen Blythe, Ken Froot, Bob Merton, Andr\u00e9 Perold, and David Scharfstein for valuable comments and discussions."}
{"_id": "e031082787272dda2c11f59c3156862e77b25ebd", "title": "A Highly Parallel Framework for HEVC Coding Unit Partitioning Tree Decision on Many-core Processors", "text": "High Efficiency Video Coding (HEVC) uses a very flexible tree structure to organize coding units, which leads to a superior coding efficiency compared with previous video coding standards. However, such a flexible coding unit tree structure also places a great challenge for encoders. In order to fully exploit the coding efficiency brought by this structure, huge amount of computational complexity is needed for an encoder to decide the optimal coding unit tree for each image block. One way to achieve this is to use parallel computing enabled by many-core processors. In this paper, we analyze the challenge to use many-core processors to make coding unit tree decision. Through in-depth understanding of the dependency among different coding units, we propose a parallel framework to decide coding unit trees. Experimental results show that, on the Tile64 platform, our proposed method achieves averagely more than 11 and 16 times speedup for 1920x1080 and 2560x1600 video sequences, respectively, without any coding efficiency degradation."}
{"_id": "45a04f3d9aec580810984bd3b2dde9a3762f0f93", "title": "Differentiable Parameterization of Catmull-Clark Subdivision Surfaces", "text": "Subdivision-based representations are recognized as important tools for the generation of high-quality surfaces for Computer Graphics. In this paper we describe two parameterizations of Catmull-Clark subdivision surfaces that allow a variety of algorithms designed for other types of parametric surfaces (i.e., B-splines) to be directly applied to subdivision surfaces. In contrast with the natural parameterization of subdivision surfaces characterized by diverging first order derivatives around extraordinary vertices of valence higher than four, the derivatives associated with our proposed methods are defined everywhere on the surface. This is especially important for Computer-Aided Design (CAD) applications that seek to address the limitations of NURBS-based representations through the more flexible subdivision framework."}
{"_id": "9d2ea884a8103ad756e9a743fab9cf2f95f054da", "title": "Identifying differentially expressed genes using false discovery rate controlling procedures", "text": "MOTIVATION\nDNA microarrays have recently been used for the purpose of monitoring expression levels of thousands of genes simultaneously and identifying those genes that are differentially expressed. The probability that a false identification (type I error) is committed can increase sharply when the number of tested genes gets large. Correlation between the test statistics attributed to gene co-regulation and dependency in the measurement errors of the gene expression levels further complicates the problem. In this paper we address this very large multiplicity problem by adopting the false discovery rate (FDR) controlling approach. In order to address the dependency problem, we present three resampling-based FDR controlling procedures, that account for the test statistics distribution, and compare their performance to that of the na\u00efve application of the linear step-up procedure in Benjamini and Hochberg (1995). The procedures are studied using simulated microarray data, and their performance is examined relative to their ease of implementation.\n\n\nRESULTS\nComparative simulation analysis shows that all four FDR controlling procedures control the FDR at the desired level, and retain substantially more power then the family-wise error rate controlling procedures. In terms of power, using resampling of the marginal distribution of each test statistics substantially improves the performance over the na\u00efve one. The highest power is achieved, at the expense of a more sophisticated algorithm, by the resampling-based procedures that resample the joint distribution of the test statistics and estimate the level of FDR control.\n\n\nAVAILABILITY\nAn R program that adjusts p-values using FDR controlling procedures is freely available over the Internet at www.math.tau.ac.il/~ybenja."}
{"_id": "bc4232856715a0822e642ec0dd7fb614a5f695ed", "title": "Semi-Dense Depth Interpolation using Deep Convolutional Neural Networks", "text": "With advances of recent technologies, augmented reality systems and autonomous vehicles gained a lot of interest from academics and industry. Both these areas rely on scene geometry understanding, which usually requires depth map estimation. However, in case of systems with limited computational resources, such as smartphones or autonomous robots, high resolution dense depth map estimation may be challenging. In this paper, we study the problem of semi-dense depth map interpolation along with low resolution depth map upsampling. We present an end-to-end learnable residual convolutional neural network architecture that achieves fast interpolation of semi-dense depth maps with different sparse depth distributions: uniform, sparse grid and along intensity image gradient. We also propose a loss function combining classical mean squared error with perceptual loss widely used in intensity image super-resolution and style transfer tasks. We show that with some modifications, this architecture can be used for depth map super-resolution. Finally, we evaluate our results on both synthetic and real data, and consider applications for autonomous vehicles and creating AR/MR video games."}
{"_id": "c36809acf1a0103e338af719c24999257798c2c6", "title": "An ubiquitous miniaturized android based ECG monitoring system", "text": "Miniaturization of devices has brought a lot of improvements in the healthcare delivery as a result of advancenments in Integrated circuit technologies. As a result, proper and effective way of analyzing various body conditions and diseases can be achieved. When the heart muscles repolarize and depolarize, it generates electrical impulses which could help the doctor or the health professional to diagnose various heart abnormalities. The aim of this project is to design and develop a miniaturized ECG system displayed on both PC and Tablet for preliminary diagnoses of the heart. This system utilizes a programmable single-chip microcontroller for analysing the biosignals to indicate the condition of the heart. The system is incomporated with alarm systems which prompt the doctor if there is any abnormality from a patient's heart. These abnormalities could be bradycardia or tachycardia. This system is built with a provision of enabling wireless transmission of ECG signals to a PC or TABLET through BLUETOOTH and ANDROID PLATFORM. The android application can be installed and tested on any smart phone or tablet and screen resolution can be adjusted later as per requirement. This helps the doctor to have visual description of the patient ECG without the need of mounted monitors. The experimental results show that device is clinically useful, compact, cost effective and user friendly."}
{"_id": "caf18dfde73c5312baa8be4f4a12a83ce8a7c27c", "title": "A probabilistic theory of clustering", "text": "Data clustering is typically considered a subjective process, which makes it problematic. For instance, how does one make statistical inferences based on clustering? The matter is di0erent with pattern classi1cation, for which two fundamental characteristics can be stated: (1) the error of a classi1er can be estimated using \u201ctest data,\u201d and (2) a classi1er can be learned using \u201ctraining data.\u201d This paper presents a probabilistic theory of clustering, including both learning (training) and error estimation (testing). The theory is based on operators on random labeled point processes. It includes an error criterion in the context of random point sets and representation of the Bayes (optimal) cluster operator for a given random labeled point process. Training is illustrated using a nearest-neighbor approach, and trained cluster operators are compared to several classical clustering algorithms. ? 2003 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved."}
{"_id": "6d8468041743bbc3562ebaae98eae50cf7554065", "title": "A survey of Ambient Assisted Living systems: Challenges and opportunities", "text": "As the research in Ambient Assisted Living (AAL) matures, we expect that data generated from AAL IoT devices will benefit from analysis by well established machine learning techniques. There is also potential that new research in ML and Artificial Intelligence (AI) can be used on data generated from the sensors used in AAL. In this paper we present a survey of the research in the related topics, identify its shortcomings and propose future work that will integrate these fields by collecting ambient sensor data and process the data by ML framework which can detect and classify activities."}
{"_id": "55b84d127aaef25119cefa07dba0dc36b142d63e", "title": "Investigating the Implications of 3D Printing in Special Education", "text": "Consumer-grade digital fabrication such as 3D printing is on the rise, and we believe it can be leveraged to great benefit in special education. Although 3D printing is infiltrating mainstream education, little research has explored 3D printing in the context of students with special support needs. We describe our studies on this topic and the resulting contributions. We initially conducted a formative study exploring the use of 3D printing at three locations serving populations with varying ability, including individuals with cognitive, motor, and visual impairments. We found that 3D design and printing perform three functions in special education: (1) STEM engagement, (2) creation of educational aids for accessible curriculum content, and (3) making custom adaptive devices. As part of our formative work, we also discussed a case study in the codesign of an assistive hand grip created with occupational therapists at one of our investigation sites. This work inspired further studies on the creation of adaptive devices using 3D printers. We identified the needs and constraints of these therapists and found implications for a specialized 3D modeling tool to support their use of 3D printers. We developed GripFab, 3D modeling software based on feedback from therapists, and used it to explore the feasibility of in-house 3D object designs in support of accessibility. Our contributions include case studies at three special education sites and discussion of obstacles to efficient 3D printing in this context. We have extended these contributions with a more in-depth look at the stakeholders and findings from GripFab studies. We have expanded our discussion to include suggestions for researchers in this space, in addition to refined suggestions from our earlier work for technologists creating 3D modeling and printing tools, therapists seeking to leverage 3D printers, and educators and administrators looking to implement these design tools in special education environments."}
{"_id": "15f3f17d5102e93d96a9ebc9668c121c26bb8829", "title": "Inferring international and internal migration patterns from Twitter data", "text": "Data about migration flows are largely inconsistent across countries, typically outdated, and often inexistent. Despite the importance of migration as a driver of demographic change, there is limited availability of migration statistics. Generally, researchers rely on census data to indirectly estimate flows. However, little can be inferred for specific years between censuses and for recent trends. The increasing availability of geolocated data from online sources has opened up new opportunities to track recent trends in migration patterns and to improve our understanding of the relationships between internal and international migration. In this paper, we use geolocated data for about 500,000 users of the social network website \"Twitter\". The data are for users in OECD countries during the period May 2011- April 2013. We evaluated, for the subsample of users who have posted geolocated tweets regularly, the geographic movements within and between countries for independent periods of four months, respectively. Since Twitter users are not representative of the OECD population, we cannot infer migration rates at a single point in time. However, we proposed a difference-in-differences approach to reduce selection bias when we infer trends in out-migration rates for single countries. Our results indicate that our approach is relevant to address two longstanding questions in the migration literature. First, our methods can be used to predict turning points in migration trends, which are particularly relevant for migration forecasting. Second, geolocated Twitter data can substantially improve our understanding of the relationships between internal and international migration. Our analysis relies uniquely on publicly available data that could be potentially available in real time and that could be used to monitor migration trends. The Web Science community is well-positioned to address, in future work, a number of methodological and substantive questions that we discuss in this article."}
{"_id": "44cca26f47478aaebf6cda2318bde5985159ba1f", "title": "Manipulating the soil microbiome to increase soil health and plant fertility", "text": "A variety of soil factors are known to increase nutrient availability and plant productivity. The most influential might be the organisms comprising the soil microbial community of the rhizosphere, which is the soil surrounding the roots of plants where complex interactions occur between the roots, soil, and microorganisms. Root exudates act as substrates and signaling molecules for microbes creating a complex and interwoven relationship between plants and the microbiome. While individual microorganisms such as endophytes, symbionts, pathogens, and plant growth promoting rhizobacteria are increasingly featured in the literature, the larger community of soil microorganisms, or soil microbiome, may have more far-reaching effects. Each microorganism functions in coordination with the overall soil microbiome to influence plant health and crop productivity. Increasing evidence indicates that plants can shape the soil microbiome through the secretion of root exudates. The molecular communication fluctuates according to the plant development stage, proximity to neighboring species, management techniques, and many other factors. This review seeks to summarize the current knowledge on this topic."}
{"_id": "2e2bc1fc91b96c3d45919f956fe6cc5dbefacec0", "title": "Point-to-Multipoint Ultrasonic Communication Modem for Ubiquitous Underwater Wireless Sensor Networks", "text": "Recently, researches on underwater sensor networks (USN) for ocean development and disaster prevention have been emerged as one of interesting topics. Since low-power, high-speed and inexpensive communication modem is a prerequisite for deployment of USN, we design and implement an underwater modem by utilizing general-purpose waterproof ultrasonic sensors in this paper. Also, we make an experiment in a water tank containing sensor nodes and a sink node, a gateway, and show that point-to-multipoint communication is possible at a data rate of 1 kbps."}
{"_id": "5332e304313225d9e7bab79117f5b15783d8b09f", "title": "Steatocystoma simplex in penile foreskin: a case report", "text": "BACKGROUND\nSteatocystoma simplex is an uncommon skin lesion with a histological pattern that is identical to that of steatocystoma multiplex. We are reporting this case of steatocystoma simplex for its uncommon location in the penile foreskin, and its occurrence in a Wapishana man.\n\n\nCASE PRESENTATION\nA 56-year-old man of Wapishana ethnicity presented with complaints of referred penile discomfort and pain during sexual intercourse for 5 years. A physical examination revealed a mobile, compressible subcutaneous non-tender mass of 4 cm diameter located on the left-side of his penile foreskin. There were no signs of inflammation, no grip on the penile shaft, and no urethral discharge or enlargement of lymph nodes. We found no evidence of other cysts on cutaneous examination. We performed classical excision of the lesion under local anesthesia and confirmed the diagnosis of steatocystoma with the pathological report. As there were no complications, we discharged him the same day.\n\n\nCONCLUSION\nSteatocystoma can be considered a differential diagnosis for cystic lesions on and around the penis."}
{"_id": "b271d7ab632ebb9eb4f2c979ee666844aaec218e", "title": "Comparison of surgical variables and short-term postoperative complications in healthy dogs undergoing ovariohysterectomy or ovariectomy.", "text": "OBJECTIVE\nTo determine whether ovariohysterectomy (OVH) required more time to complete and was associated with more short-term postoperative complications than ovariectomy (OVE) in dogs.\n\n\nDESIGN\nRandomized prospective clinical trial.\n\n\nANIMALS\n40 healthy, sexually intact female dogs.\n\n\nPROCEDURES\nOVH (in 20 dogs) or OVE (20 dogs) was performed by use of standardized anesthetic and surgical protocols. Physical characteristics of the dogs, surgical variables, pain scores derived from behavior-based composite pain scales, and surgical wound characteristics were analyzed.\n\n\nRESULTS\nBody weight, age, body condition score, and distance between the sternal manubrium and the pubic rim were comparable among dogs that underwent either surgical procedure. Body weight was positively correlated with the total duration of the procedure and with time required for closure of the surgical wound. No effect of body condition score was determined for any variable. Skin and fascia incision lengths relative to the distance from the sternal manubrium to pubic rim were significantly greater in dogs that underwent OVH, compared with those of dogs that underwent OVE, but total surgical time was not different for the 2 procedures. No other significant differences were detected between the 2 groups.\n\n\nCONCLUSIONS AND CLINICAL RELEVANCE\nSignificant differences in total surgical time, pain scores, and wound scores were not observed between dogs that underwent OVH and dogs that underwent OVE via standardized protocols."}
{"_id": "79a2e622245bb4910beb0adbce76f0a737f42035", "title": "Towards a Business Process Management Maturity Model", "text": "Business Process Management (BPM) has been identified as the number one business priority by a recent Gartner study (Gartner, 2005). However, BPM has a plethora of facets as its origins are in Business Process Reengineering, Process Innovation, Process Modelling, and Workflow Management to name a few. Organisations increasingly recognize the requirement for an increased process orientation and require appropriate comprehensive frameworks, which help to scope and evaluate their BPM initiative. This research project aims toward the development of a holistic and widely accepted BPM maturity model, which facilitates the assessment of BPM capabilities. This paper provides an overview about the current model with a focus on the actual model development utilizing a series of Delphi studies. The development process includes separate studies that focus on further defining and expanding the six core factors within the model, i.e. strategic alignment, governance, method, Information Technology, people and culture."}
{"_id": "8687ee7335f6d9813ba9e4576ce25b56e57b16d1", "title": "Guidelines for conducting and reporting case study research in software engineering", "text": "Case study is a suitable research methodology for software engineering research since it studies contemporary phenomena in its natural context. However, the understanding of what constitutes a case study varies, and hence the quality of the resulting studies. This paper aims at providing an introduction to case study methodology and guidelines for researchers conducting case studies and readers studying reports of such studies. The content is based on the authors\u2019 own experience from conducting and reading case studies. The terminology and guidelines are compiled from different methodology handbooks in other research domains, in particular social science and information systems, and adapted to the needs in software engineering. We present recommended practices for software engineering case studies as well as empirically derived and evaluated checklists for researchers and readers of case study research."}
{"_id": "56a475b4eff2e5bc52cf140d23e6e845ff29cede", "title": "Capability maturity model, version 1.1", "text": "The capability maturity model (CMM), developed to present sets of recommended practices in a number of key process areas that have been shown to enhance software-development and maintenance capability, is discussed. The CMM was designed to help developers select process-improvement strategies by determining their current process maturity and identifying the issues most critical to improving their software quality and process. The initial release of the CMM, version 1.0, was reviewed and used by the software community during 1991 and 1992. A workshop on CMM 1.0, held in April 1992, was attended by about 200 software professionals. The current version of the CMM is the result of the feedback from that workshop and ongoing feedback from the software community. The technical report that describes version 1.1. is summarised.<>"}
{"_id": "6b530812c49cf5932c784ff56d9e8d5baead67a9", "title": "Innovative Contactless Energy Transfer Accessory for Rotary Ultrasonic Machining and Its Circuit Compensation Based on Coil Turns", "text": "An innovative concentric rotary ultrasonic machining (RUM) device based on local induction is proposed. The device is used as a machine tool accessory that can add the RUM to any machining center. Rather than using a complete ring, a portion of a ring is used as the primary core in the proposed rotary transformer, which eliminates spindle speed limits. The simplified configuration provides increased safety and is convenient for material processing applications. Additionally, a circuit compensation method based on coil turns is proposed. It takes the whole circuit, reliability of electronic components, transmission efficiency, and output power into consideration. We use this method to choose the optimal number of coil turns and compensation elements required to achieve increased and safe power transmission performance. This paper also demonstrates the performance of the device through experimentation. Finally, practicability is discussed."}
{"_id": "cd8e9a750f105fb8303f12380f2ca5e5e0b740a2", "title": "What Actually Wins Soccer Matches : Prediction of the 2011-2012 Premier League for Fun and Profit", "text": "Sports analytics is a fascinating problem area in which to apply statistical learning techniques. This thesis brings new data to bear on the problem of predicting the outcome of a soccer match. We use frequency counts of in-game events, sourced from the Manchester City Analytics program, to predict the 380 matches of the 2011-2012 Premier League season. We generate prediction models with multinomial regression and rigorously test them with betting simulations. An extensive review of prior efforts is presented, as well as a novel theoretically optimal betting strategy. We measure performance different feature sets and betting strategies. Accuracy and simulated profit far exceeding those of all earlier efforts are achieved."}
{"_id": "4d1d1134bd464bacf620d3ae34cd505906e2f307", "title": "The Acquisition of Reading Comprehension Skill", "text": "How do people acquire skill at comprehending what they read? That is the simple question to which we shall try to make a tentative answer. To begin, we have to acknowledge some complexities about the concept of reading comprehension and what it means to develop it."}
{"_id": "14d08c4ec2c56b93da13f7d121e64c52fbe2a341", "title": "Column-store support for RDF data management: not all swans are white", "text": "This paper reports on the results of an independent evaluation of the techniques presented in the VLDB 2007 paper \u201cScalable Semantic Web Data Management Using Vertical Partitioning\u201d, authored by D. Abadi, A. Marcus, S. R. Madden, and K. Hollenbach [1]. We revisit the proposed benchmark and examine both the data and query space coverage. The benchmark is extended to cover a larger portion of the query space in a canonical way. Repeatability of the experiments is assessed using the code base obtained from the authors. Inspired by the proposed vertically-partitioned storage solution for RDF data and the performance figures using a column-store, we conduct a complementary analysis of state-of-the-art RDF storage solutions. To this end, we employ MonetDB/SQL, a fully-functional open source column-store, and a well-known \u2013 for its performance \u2013 commercial row-store DBMS. We implement two relational RDF storage solutions \u2013 triple-store and vertically-partitioned \u2013 in both systems. This allows us to expand the scope of [1] with the performance characterization along both dimensions \u2013 triple-store vs. vertically-partitioned and row-store vs. column-store \u2013 individually, before analyzing their combined effects. A detailed report of the experimental test-bed, as well as an in-depth analysis of the parameters involved, clarify the scope of the solution originally presented and position the results in a broader context by covering more systems."}
{"_id": "1e48d6f932e0ac58705a37dbcb5d3a1f938bda14", "title": "Machine Learning for Intelligent Systems", "text": "Recent research in machine learning has focused on supervised induction for simple classification and reinforcement learning for simple reactive behaviors. In the process, the field has become disconnected from AI\u2019s original goal of creating complete intelligent agents. In this paper, I review recent work on machine learning for planning, language, vision, and other topics that runs counter to this trend and thus holds interest for the broader AI research community. I also suggest some steps to encourage further research along these lines."}
{"_id": "de047381b2dbeaf668edb6843054dadd4dedd10c", "title": "Learning narrative structure from annotated folktales", "text": "Narrative structure is an ubiquitous and intriguing phenomenon. By virtue of structure we recognize the presence of Villainy or Revenge in a story, even if that word is not actually present in the text. Narrative structure is an anvil for forging new artificial intelligence and machine learning techniques, and is a window into abstraction and conceptual learning as well as into culture and its influence on cognition. I advance our understanding of narrative structure by describing Analogical Story Merging (ASM), a new machine learning algorithm that can extract culturally-relevant plot patterns from sets of folktales. I demonstrate that ASM can learn a substantive portion of Vladimir Propp\u2019s influential theory of the structure of folktale plots. The challenge was to take descriptions at one semantic level, namely, an event timeline as described in folktales, and abstract to the next higher level: structures such as Villainy, StuggleVictory, and Reward. ASM is based on Bayesian Model Merging, a technique for learning regular grammars. I demonstrate that, despite ASM\u2019s large search space, a carefully-tuned prior allows the algorithm to converge, and furthermore it reproduces Propp\u2019s categories with a chance-adjusted Rand index of 0.511 to 0.714. Three important categories are identified with F-measures above 0.8. The data are 15 Russian folktales, comprising 18,862 words, a subset of Propp\u2019s original tales. This subset was annotated for 18 aspects of meaning by 12 annotators using the Story Workbench, a general text-annotation tool I developed for this work. Each aspect was doubly-annotated and adjudicated at inter-annotator F-measures that cluster around 0.7 to 0.8. It is the largest, most deeply-annotated narrative corpus assembled to date. The work has significance far beyond folktales. First, it points the way toward important applications in many domains, including information retrieval, persuasion and negotiation, natural language understanding and generation, and computational creativity. Second, abstraction from natural language semantics is a skill that underlies many cognitive tasks, and so this work provides insight into those processes. Finally, the work opens the door to a computational understanding of cultural influences on cognition and understanding cultural differences as captured in stories. Dissertation Supervisor: Patrick H. Winston Professor, Electrical Engineering and Computer Science Dissertation Committee: Whitman A. Richards Professor, Brain & Cognitive Sciences Peter Szolovits Professor, Electrical Engineering and Computer Science & Harvard-MIT Division of Health Sciences and Technology Joshua B. Tenenbaum Professor, Brain & Cognitive Sciences"}
{"_id": "f087259663c7f770cb201bbb07ce86cd6de2067f", "title": "Neural computation and self-organizing maps - an introduction", "text": "In this age of modern era, the use of internet must be maximized. Yeah, internet will help us very much not only for important thing but also for daily activities. Many people now, from any level can use internet. The sources of internet connection can also be enjoyed in many places. As one of the benefits is to get the on-line neural computation and self organizing maps an introduction computation neural systems series book, as the world window, as many people suggest."}
{"_id": "09bbe8b36b9ff995260492a468a95b8493dc5d76", "title": "Comparing NoSQL MongoDB to an SQL DB", "text": "NoSQL database solutions are becoming more and more prevalent in a world currently dominated by SQL relational databases. NoSQL databases were designed to provide database solutions for large volumes of data that is not structured. However, the advantages (or disadvantages) of using a NoSQL database for data that is structured, and not necessarily \"Big,\" is not clear. There are not many studies that compare the performance of processing a modest amount of structured data in a NoSQL database with a traditional relational database. In this paper, we compare one of the NoSQL solutions, MongoDB, to the standard SQL relational database, SQL Server. We compare the performance, in terms of runtime, of these two databases for a modest-sized structured database. Results show that MongoDB performs equally as well or better than the relational database, except when aggregate functions are utilized."}
{"_id": "f93a0a3e8a3e6001b4482430254595cf737697fa", "title": "Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention", "text": "In this paper, we proposed a sentence encoding-based model for recognizing text entailment. In our approach, the encoding of sentence is a two-stage process. Firstly, average pooling was used over word-level bidirectional LSTM (biLSTM) to generate a firststage sentence representation. Secondly, attention mechanism was employed to replace average pooling on the same sentence for better representations. Instead of using target sentence to attend words in source sentence, we utilized the sentence\u2019s first-stage representation to attend words appeared in itself, which is called \u201dInner-Attention\u201d in our paper . Experiments conducted on Stanford Natural Language Inference (SNLI) Corpus has proved the effectiveness of \u201dInner-Attention\u201d mechanism. With less number of parameters, our model outperformed the existing best sentence encoding-based approach by a large margin."}
{"_id": "507b5fe36714eb6aa8acd96d1eef14212eddb82b", "title": "VeriCon: towards verifying controller programs in software-defined networks", "text": "Software-defined networking (SDN) is a new paradigm for operating and managing computer networks. SDN enables logically-centralized control over network devices through a \"controller\" software that operates independently from the network hardware, and can be viewed as the network operating system. Network operators can run both inhouse and third-party SDN programs (often called applications) on top of the controller, e.g., to specify routing and access control policies. SDN opens up the possibility of applying formal methods to prove the correctness of computer networks. Indeed, recently much effort has been invested in applying finite state model checking to check that SDN programs behave correctly. However, in general, scaling these methods to large networks is challenging and, moreover, they cannot guarantee the absence of errors.\n We present VeriCon, the first system for verifying that an SDN program is correct on all admissible topologies and for all possible (infinite) sequences of network events. VeriCon either confirms the correctness of the controller program on all admissible network topologies or outputs a concrete counterexample. VeriCon uses first-order logic to specify admissible network topologies and desired network-wide invariants, and then implements classical Floyd-Hoare-Dijkstra deductive verification using Z3. Our preliminary experience indicates that VeriCon is able to rapidly verify correctness, or identify bugs, for a large repertoire of simple core SDN programs. VeriCon is compositional, in the sense that it verifies the correctness of execution of any single network event w.r.t. the specified invariant, and can thus scale to handle large programs. To relieve the burden of specifying inductive invariants from the programmer, VeriCon includes a separate procedure for inferring invariants, which is shown to be effective on simple controller programs. We view VeriCon as a first step en route to practical mechanisms for verifying network-wide invariants of SDN programs."}
{"_id": "d8222762ca080f2c1aab87f5f312097c7ba1f12b", "title": "Diversifying trending topic discovery via Semidefinite Programming", "text": "Discovering trending topics from the Web has attracted much attention in recent years, due to users' increasing need for time-sensitive information. A large body of the existing research focuses on detecting trends by examining search traffic fluctuations, and the queries with large traffic increments would be detected as trends. The weakness of this approach is that the trends are dominated by popular fields such as celebrities, as those related queries have large search traffic. Consequently, other topics such as travel and shopping are rarely regarded as trends. In this paper, we present a scalable diversified trending topic discovery system with a MapReduce implementation. The trending topics are discovered based on three criteria: diversity, representativeness, and popularity. We explicitly model these three factors in our objective function and propose an efficient Semidefinite Programming algorithm to solve the corresponding optimization problem. To the best of our knowledge, no prior work in the literature tackles trending topic diversification. We conduct a comprehensive set of experiments with case studies to demonstrate the effectiveness of our approach. The proposed system has been successfully tested in the real-world operational environment, yielding significant improvement in traffic over the existing production system."}
{"_id": "8d0edf684de63dfecbca8126b32736059f518eb2", "title": "Deep Learning for Detection of Object-Based Forgery in Advanced Video", "text": "Passive video forensics has drawn much attention in recent years. However, research on detection of object-based forgery, especially for forged video encoded with advanced codec frameworks, is still a great challenge. In this paper, we propose a deep learning-based approach to detect object-based forgery in the advanced video. The presented deep learning approach utilizes a convolutional neural network (CNN) to automatically extract high-dimension features from the input image patches. Different from the traditional CNN models used in computer vision domain, we let video frames go through three preprocessing layers before being fed into our CNN model. They include a frame absolute difference layer to cut down temporal redundancy between video frames, a max pooling layer to reduce computational complexity of image convolution, and a high-pass filter layer to enhance the residual signal left by video forgery. In addition, an asymmetric data augmentation strategy has been established to get a similar number of positive and negative image patches before the training. The experiments have demonstrated that the proposed CNN-based model with the preprocessing layers has achieved excellent results."}
{"_id": "596c4b9e0b7817120acb917cff4728fab9f95ca8", "title": "Sample size estimation in diagnostic test studies of biomedical informatics", "text": "OBJECTIVES\nThis review provided a conceptual framework of sample size calculations in the studies of diagnostic test accuracy in various conditions and test outcomes.\n\n\nMETHODS\nThe formulae of sample size calculations for estimation of adequate sensitivity/specificity, likelihood ratio and AUC as an overall index of accuracy and also for testing in single modality and comparing two diagnostic tasks have been presented for desired confidence interval.\n\n\nRESULTS\nThe required sample sizes were calculated and tabulated with different levels of accuracies and marginal errors with 95% confidence level for estimating and for various effect sizes with 80% power for purpose of testing as well. The results show how sample size is varied with accuracy index and effect size of interest.\n\n\nCONCLUSION\nThis would help the clinicians when designing diagnostic test studies that an adequate sample size is chosen based on statistical principles in order to guarantee the reliability of study."}
{"_id": "c2c84a2acc9459c2406a855758880423a024ed49", "title": "An ant colony algorithm for the multi-compartment vehicle routing problem", "text": "We demonstrate the use of Ant Colony System (ACS) to solve the capacitated vehicle routing problem associated with collection of recycling waste from households, treated as nodes in a spatial network. For networks where the nodes are concentrated in separate clusters, the use of k-means clustering can greatly improve the efficiency of the solution. The ACS algorithm is extended to model the use of multi-compartment vehicles with kerbside sorting of waste into separate compartments for glass, paper, etc. The algorithm produces high-quality solutions for two-compartment test problems."}
{"_id": "abc8c86ef950d039fb2e05378be4b40c75fa2e1b", "title": "Stopping GAN Violence: Generative Unadversarial Networks", "text": "While the costs of human violence have attracted a great deal of attention from the research community, the effects of the network-on-network (NoN) violence popularised by Generative Adversarial Networks have yet to be addressed. In this work, we quantify the financial, social, spiritual, cultural, grammatical and dermatological impact of this aggression and address the issue by proposing a more peaceful approach which we term Generative Unadversarial Networks (GUNs). Under this framework, we simultaneously train two models: a generator G that does its best to capture whichever data distribution it feels it can manage, and a motivator M that helps G to achieve its dream. Fighting is strictly verboten and both models evolve by learning to respect their differences. The framework is both theoretically and electrically grounded in game theory, and can be viewed as a winner-shares-all two-player game in which both players work as a team to achieve the best score. Experiments show that by working in harmony, the proposed model is able to claim both the moral and log-likelihood high ground. Our work builds on a rich history of carefully argued position-papers, published as anonymous YouTube comments, which prove that the optimal solution to NoN violence is more GUNs. Takes skill to be real, time to heal each other Tupac Shakur, Changes, 1998"}
{"_id": "0f813c0318504ecec478288e5aaf095eb9171ce1", "title": "Discontinuous-Current Mode Operation of a Two-Phase Interleaved Boost DC\u2013DC Converter With Coupled Inductor", "text": "A two-phase interleaved boost dc\u2013dc converter with an inversely coupled inductor in a discontinuous-current mode (DCM) is analyzed by the equivalent inductance method. Coupling effects on the circuit statuses are described, and a forced conduction of the power diode or reverse-paralleled diode of MOSFET is caused by coupling. As a result, three major circuit statuses are figured out according to the physical relationship between the input\u2013output voltage ratio and the coupling coefficient, and their condition boundaries are used to classify the DCM operation modes. Then, considering the load (duty cycle) variation, ten DCM operation modes are comprehensively analyzed. The analysis can be used to make an easy prediction of operation modes, and extended to the analysis of an interleaved buck converter or buck/boost bidirectional converter with a coupled inductor in DCM. At last, a 300-W prototype is built and tested in the lab to verify the analysis."}
{"_id": "7c8de2abe8024de05d00095447d918b21960095d", "title": "Key2Vec: Automatic Ranked Keyphrase Extraction from Scientific Articles using Phrase Embeddings", "text": "Keyphrase extraction is a fundamental task in natural language processing that facilitates mapping of documents to a set of representative phrases. In this paper, we present an unsupervised technique (Key2Vec) that leverages phrase embeddings for ranking keyphrases extracted from scientific articles. Specifically, we propose an effective way of processing text documents for training multi-word phrase embeddings that are used for thematic representation of scientific articles and ranking of keyphrases extracted from them using theme-weighted PageRank. Evaluations are performed on benchmark datasets producing state-of-the-art results."}
{"_id": "8127f26bef3e11191f84ccbb2f768c6abc37b095", "title": "The Democratization of Personal Consumer Loans? Determinants of Success in Online Peer-to-peer Loan Auctions", "text": "Online peer-to-peer (P2P) loan auctions enable individual consumers to borrow from, and lend money to, one another directly. We study the borrowerand auction-related determinants of funding success in these auctions by conceptualizing auction decision variables (loan amount, starting interest rate, duration) as mediators between borrower characteristics such as their demographic attributes, financial strength, and effort prior to the auction, and the likelihood of funding success. Borrower characteristics are also treated as moderators of the effects of auction decision variables on funding success. The results of our empirical study, conducted using a database of 5,370 completed P2P loan auctions, provide support for the proposed conceptual framework, and reveal a strong democratization of lending practices in P2P loan auctions. Although demographic attributes such as race and gender affect likelihood of the auction\u2019s funding success, their effects are small when compared to those of borrowers\u2019 financial strength and effort when listing and publicizing the auction. These results are substantially different from the documented discriminatory practices of regulated financial institutions in the US, suggesting that individual lenders bid more rationally when their own investment money is at stake. The paper concludes with specific suggestions to borrowers to increase their chances of receiving funding in P2P loan auctions, and a discussion of future research opportunities."}
{"_id": "f7b10f5aa56bcfabedf6329808a8cf920df7b285", "title": "A review and analysis of churn prediction methods for customer retention in telecom industries", "text": "Customer churn prediction has gathered greater interest in business especially in telecommunications industries. Many authors have presented different versions of the churn prediction models greatly based on the data mining concepts employing the machine learning and meta-heuristic algorithms. This aim of this paper is to study some of the most important churn prediction techniques developed over the recent years. The primary objective is on the churn in telecom industries to accurately estimate the customer survival and customer hazard functions to gain the complete knowledge of churn over the customer tenure. Another objective is the identification of the customers who are at the blink of churn and approximating the time they will churn. This paper focuses on analyzing the churn prediction techniques to identify the churn behavior and validate the reasons for customer churn. This paper summarizes the churn prediction techniques in order to have a deeper understanding of the customer churn and it shows that most accurate churn prediction is given by the hybrid models rather than single algorithms so that telecom industries become aware of the needs of high risk customers and enhance their services to overturn the churn decision."}
{"_id": "d36784ad0d61d10b92c92a39ea098f62302b2a16", "title": "Anomaly Detection Algorithms in Business Process Logs", "text": "In some domains of application, like software development and health care processes, a normative business process system (e.g. workflow management system) is not appropriate because a flexible support is needed to the participants. On the other hand, while it is important to support flexibility of execution in these domains, security requirements can not be met whether these systems do not offer extra control, which characterizes a trade off between flexibility and security in such domains. This work presents and assesses a set of anomaly detection algorithms in logs of Process Aware Systems (PAS). The detection of an anomalous instance is based on the \u201cnoise\u201d which an instance makes in a process model discovered by a process mining algorithm. As a result, a trace that is an anomaly for a discovered model will require more structural changes for this model fit it than a trace that is not an anomaly. Hence, when aggregated to PAS, these methods can support the coexistence of security and flexibility."}
{"_id": "51cd2b9edd4bae42b1d55ee3c166bf6c56b460fc", "title": "Unsupervised induction and filling of semantic slots for spoken dialogue systems using frame-semantic parsing", "text": "Spoken dialogue systems typically use predefined semantic slots to parse users' natural language inputs into unified semantic representations. To define the slots, domain experts and professional annotators are often involved, and the cost can be expensive. In this paper, we ask the following question: given a collection of unlabeled raw audios, can we use the frame semantics theory to automatically induce and fill the semantic slots in an unsupervised fashion? To do this, we propose the use of a state-of-the-art frame-semantic parser, and a spectral clustering based slot ranking model that adapts the generic output of the parser to the target semantic space. Empirical experiments on a real-world spoken dialogue dataset show that the automatically induced semantic slots are in line with the reference slots created by domain experts: we observe a mean averaged precision of 69.36% using ASR-transcribed data. Our slot filling evaluations also indicate the promising future of this proposed approach."}
{"_id": "36147a54dfca257c03663cb4e9bb8589f52b7cef", "title": "Existential consistency: measuring and understanding consistency at Facebook", "text": "Replicated storage for large Web services faces a trade-off between stronger forms of consistency and higher performance properties. Stronger consistency prevents anomalies, i.e., unexpected behavior visible to users, and reduces programming complexity. There is much recent work on improving the performance properties of systems with stronger consistency, yet the flip-side of this trade-off remains elusively hard to quantify. To the best of our knowledge, no prior work does so for a large, production Web service.\n We use measurement and analysis of requests to Facebook's TAO system to quantify how often anomalies happen in practice, i.e., when results returned by eventually consistent TAO differ from what is allowed by stronger consistency models. For instance, our analysis shows that 0.0004% of reads to vertices would return different results in a linearizable system. This in turn gives insight into the benefits of stronger consistency; 0.0004% of reads are potential anomalies that a linearizable system would prevent. We directly study local consistency models---i.e., those we can analyze using requests to a sample of objects---and use the relationships between models to infer bounds on the others.\n We also describe a practical consistency monitoring system that tracks &phis;-consistency, a new consistency metric ideally suited for health monitoring. In addition, we give insight into the increased programming complexity of weaker consistency by discussing bugs our monitoring uncovered, and anti-patterns we teach developers to avoid."}
{"_id": "351df512735096126454f5d4bc8e9ae56f4cd288", "title": "Syntactic Clustering of the Web", "text": "We have developed an efficient way to determine the syntactic similarity of files and have applied it to every document on the World Wide Web. Using this mechanism, we built a clustering of all the documents that are syntactically similar. Possible applications include a \u201cLost and Found\u201d service, filtering the results of Web searches, updating widely distributed web-pages, and identifying violations of intellectual property rights."}
{"_id": "59435e6a18b8e5f766af1e2d9a21d3963827aae3", "title": "Introduction to the Special Issue on the Web as Corpus", "text": "The Web, teeming as it is with language data, of all manner of varieties and languages, in vast quantity and freely available, is a fabulous linguists' playground. This special issue of Computational Linguistics explores ways in which this dream is being explored."}
{"_id": "31037314ad53a20ab92af6d3301849daeb1f6421", "title": "Finding Similar Files in a Large File System", "text": "We present a tool, called sif, for finding all similar files in a large file system. Files are considered similar if they have significant number of common pieces, even if they are very different otherwise. For example, one file may be contained, possibly with some changes, in another file, or a file may be a reorganization of another file. The running time for finding all groups of similar files, even for as little as 25% similarity, is on the order of 500MB to 1GB an hour. The amount of similarity and several other customized parameters can be determined by the user at a post-processing stage, which is very fast. Sif can also be used to very quickly identify all similar files to a query file using a preprocessed index. Application of sif can be found in file management, information collecting (to remove duplicates), program reuse, file synchronization, data compression, and maybe even plagiarism detection."}
{"_id": "639b4cac06148e9f91736ba36ad4a1b97fcdfd6a", "title": "Scaling to Very Very Large Corpora for Natural Language Disambiguation", "text": "The amount of readily available on-line text has reached hundreds of billions of words and continues to grow. Yet for most core natural language tasks, algorithms continue to be optimized, tested and compared after training on corpora consisting of only one million words or less. In this paper, we evaluate the performance of different learning methods on a prototypical natural language disambiguation task, confusion set disambiguation, when trained on orders of magnitude more labeled data than has previously been used. We are fortunate that for this particular application, correctly labeled training data is free. Since this will often not be the case, we examine methods for effectively exploiting very large corpora when labeled data comes at a cost."}
{"_id": "d1ed10f18cc6168d95df2dd85294e02747c9e017", "title": "Application of Fuzzy Graph Coloring in Traffic Light Problem", "text": "Let G= (V,E,\u03c3,\u03bc) be a simple connected undirected fuzzy graph. In this paper, we use a fuzzy graph model to represent a traffic network of a city and discuss a method to find the different type of accidental zones in a traffic flows. This paper is based on fuzzy coloring of fuzzy graphs and fuzziness of vertices. Using this approach, we can minimize the total waiting time of a traffic flow. It will help to reduce the traffic jam of a road."}
{"_id": "0f0064eb75d6ac42c47f08062035e674320a2817", "title": "Embodiment enables the spinal engine in quadruped robot locomotion", "text": "The biological hypothesis of spinal engine states that locomotion is mainly achieved by the spine, while the legs may serve as assistance. Inspired by this hypothesis, a compliant, multiple degree-of-freedom, biologically-inspired spine has been embedded into a quadruped robot, named Kitty, which has no actuation on the legs. In this paper, we demonstrate how versatile behaviors (bounding, trotting, and turning) can be generated exclusively by the spine's movements through dynamical interaction between the controller, the body, and the environment, known as embodiment. Moreover, we introduce information theoretic approach to quantitatively study the spine internal dynamics and its effect on the bounding gait based on three spinal morphologies. These three morphologies differ in the position of virtual spinal joint where the spine is easier to get bent. The experimental results reveal that locomotion can be enhanced by using the spine featuring a rear virtual spinal joint, which offers more freedom for the rear legs to move forward. In addition, the information theoretic analysis shows that, according to the morphological differences of the spine, the information structure changes. The relationship between the observed behavior of the robot and the corresponding information structure is discussed in detail."}
{"_id": "3a1d637c1c52571d14916e4bd0a3557b5599cb07", "title": "PURE-LET Image Deconvolution", "text": "We propose a non-iterative image deconvolution algorithm for data corrupted by Poisson or mixed Poisson-Gaussian noise. Many applications involve such a problem, ranging from astronomical to biological imaging. We parameterize the deconvolution process as a linear combination of elementary functions, termed as linear expansion of thresholds. This parameterization is then optimized by minimizing a robust estimate of the true mean squared error, the Poisson unbiased risk estimate. Each elementary function consists of a Wiener filtering followed by a pointwise thresholding of undecimated Haar wavelet coefficients. In contrast to existing approaches, the proposed algorithm merely amounts to solving a linear system of equations, which has a fast and exact solution. Simulation experiments over different types of convolution kernels and various noise levels indicate that the proposed method outperforms the state-of-the-art techniques, in terms of both restoration quality and computational complexity. Finally, we present some results on real confocal fluorescence microscopy images and demonstrate the potential applicability of the proposed method for improving the quality of these images."}
{"_id": "5e98aed4c4374b5e252042d70e07994c40e6e884", "title": "Motion Mode Recognition for Indoor Pedestrian Navigation Using Portable Devices", "text": "While indoor portable navigation using portable devices is becoming increasingly required, it faces many challenges in obtaining accurate positioning performance. One of the methods to improve navigation results is to recognize the mode of motion of the user carrying the portable device containing low-cost microelectromechanical systems (MEMS) motion sensors. Pattern recognition methodology has been employed to detect a group of motion modes that are common indoors, namely, walking, stationary, going up/down stairs, standing on an escalator, walking on an escalator, standing on a moving walkway, and walking on a moving walkway. The performance of the motion mode recognition module was examined on different types of mobile computing devices, including various brands of smartphones, tablets, smartwatches, and smartglasses, and the results obtained showed the capability of enhancing positioning performance. The module does not require satellite or wireless positioning signals, and depends only on MEMS sensors."}
{"_id": "a83058696bf4f49c98fbcab692632f46ae1d176a", "title": "Stippling by Example", "text": "In this work, we focus on stippling as an artistic style and discuss our technique for capturing and reproducing stipple features unique to an individual artist. We employ a texture synthesis algorithm based on the gray-level co-occurrence matrix (GLCM) of a texture field. This algorithm uses a texture similarity metric to generate stipple textures that are perceptually similar to input samples, allowing us to better capture and reproduce stipple distributions. First, we extract example stipple textures representing various tones in order to create an approximate tone map used by the artist. Second, we extract the stipple marks and distributions from the extracted example textures, generating both a lookup table of stipple marks and a texture representing the stipple distribution. Third, we use the distribution of stipples to synthesize similar distributions with slight variations using a numerical measure of the error between the synthesized texture and the example texture as the basis for replication. Finally, we apply the synthesized stipple distribution to a 2D grayscale image and place stipple marks onto the distribution, thereby creating a stippled image that is statistically similar to images created by the example artist."}
{"_id": "bf3bb5a09f1c21b846c880587112d747a97317e7", "title": "Ballistocardiography and Seismocardiography: A Review of Recent Advances", "text": "In the past decade, there has been a resurgence in the field of unobtrusive cardiomechanical assessment, through advancing methods for measuring and interpreting ballistocardiogram (BCG) and seismocardiogram (SCG) signals. Novel instrumentation solutions have enabled BCG and SCG measurement outside of clinical settings, in the home, in the field, and even in microgravity. Customized signal processing algorithms have led to reduced measurement noise, clinically relevant feature extraction, and signal modeling. Finally, human subjects physiology studies have been conducted using these novel instruments and signal processing tools with promising results. This paper reviews the recent advances in these areas of modern BCG and SCG research."}
{"_id": "89d2c247347bc517251998c4826076fa3cb0ca76", "title": "Stochastic Video Prediction with Conditional Density Estimation", "text": "Frame-to-frame stochasticity is a major challenge in video prediction. The use of standard feedforward and recurrent networks for video prediction leads to averaging of future states, which can in part be attributed to the networks\u2019 limited ability to model stochasticity. We propose the use of conditional variational autoencoders (CVAE) for video prediction. To model multi-modal densities in frame-to-frame transitions, we extend the CVAE framework by modeling the latent variable with a mixture of Gaussians in the generative network. We tested our proposed Gaussian mixture CVAE (GM-CVAE) on a simple video-prediction task involving a stochastically moving object. Our architecture demonstrates improved performance, achieving noticeably lower rates of blurring/averaging compared to a feedforward network and a Gaussian CVAE. We also describe how the CVAE framework can be applied to improve existing deterministic video prediction models.1"}
{"_id": "20fcdd40e92df4675d0d4209dbe27df571175b87", "title": "Body mass index reference curves for the UK, 1990.", "text": "Reference curves for stature and weight in British children have been available for the past 30 years, and have recently been updated. However weight by itself is a poor indicator of fatness or obesity, and there has never been a corresponding set of reference curves to assess weight for height. Body mass index (BMI) or weight/height has been popular for assessing obesity in adults for many years, but its use in children has developed only recently. Here centile curves for BMI in British children are presented, from birth to 23 years, based on the same large representative sample as used to update the stature and weight references. The charts were derived using Cole's LMS method, which adjusts the BMI distribution for skewness and allows BMI in individual subjects to be expressed as an exact centile or SD score. Use of the charts in clinical practice is aided by the provision of nine centiles, where the two extremes identify the fattest and thinnest four per 1000 of the population."}
{"_id": "cfdc7b01a7de752bce229008bfb87700b262ddea", "title": "Microcomputer Playfulness: Developing a Measure With Workplace Implications", "text": ""}
{"_id": "4df8264e6fd1c4caa91fc535b4a426c066abbbe2", "title": "Degradation of mechanical properties of class-H winding insulation", "text": "Thermal ageing is one of the key factors that result in insulation degradation leading to lower dielectric strength and thereafter occurrence of partial discharge in high voltage machines. However, in low voltage high current machines, the occurrence of partial discharge is minimal and degradation of dielectric strength alone will not cause the breakdown of the insulation. This paper hypothesizes that the breakdown of insulation in low voltage/high current machines occur as a result of insulation cracking among other factors. In this paper, it is shown that thermal degradation leads to degradation of mechanical properties of Polyamide-imide insulation, particularly reduction of the ultimate tensile strength and leads to enhanced brittleness. It follows that once a crack in the insulation is formed, the breakdown strength at the point of cracking will be lower. Multiple such cracks may lead to the formation of a breakdown path thereafter leading to aggravated heating and further accelerated deterioration of the insulation. In this paper, the methodology for testing the mechanical properties of insulation material is presented. A model to account for the variation of such properties due to thermal degradation is developed."}
{"_id": "66507340decdc4612b06329d4649b1e8f95a206a", "title": "Automated Training of Deep Convolutional Neural Networks for Cell Segmentation", "text": "Deep Convolutional Neural Networks (DCNN) have recently emerged as superior for many image segmentation tasks. The DCNN performance is however heavily dependent on the availability of large amounts of problem-specific training samples. Here we show that DCNNs trained on ground truth created automatically using fluorescently labeled cells, perform similar to manual annotations."}
{"_id": "17230f5b3956188055a48c5f4f61d131cce0662f", "title": "Parsing Algebraic Word Problems into Equations", "text": "This paper formalizes the problem of solving multi-sentence algebraic word problems as that of generating and scoring equation trees. We use integer linear programming to generate equation trees and score their likelihood by learning local and global discriminative models. These models are trained on a small set of word problems and their answers, without any manual annotation, in order to choose the equation that best matches the problem text. We refer to the overall system as Alges. We compare Alges with previous work and show that it covers the full gamut of arithmetic operations whereas Hosseini et al. (2014) only handle addition and subtraction. In addition, Alges overcomes the brittleness of the Kushman et al. (2014) approach on single-equation problems, yielding a 15% to 50% reduction in error."}
{"_id": "d2d0dd2202e514a60c671eee8abea81544c18285", "title": "Accurate Language Identification of Twitter Messages", "text": "We present an evaluation of \u201coff-theshelf\u201d language identification systems as applied to microblog messages from Twitter. A key challenge is the lack of an adequate corpus of messages annotated for language that reflects the linguistic diversity present on Twitter. We overcome this through a \u201cmostly-automated\u201d approach to gathering language-labeled Twitter messages for evaluating language identification. We present the method to construct this dataset, as well as empirical results over existing datasets and off-theshelf language identifiers. We also test techniques that have been proposed in the literature to boost language identification performance over Twitter messages. We find that simple voting over three specific systems consistently outperforms any specific system, and achieves state-of-the-art accuracy on the task."}
{"_id": "b7f8708ec4bad82a3c1cf34a45309ba973ea9124", "title": "Freiburger Fragebogen zur k\u00f6rperlichen Aktivit\u00e4t-Entwicklung, Pr\u00fcfung und Anwendung", "text": "Ziel vorliegender Untersuchung war die Entwicklung und Validierung eines Fragebogens zur Erfassung gesundheitswirksamer k\u00f6rperlicher Aktivit\u00e4t, sowie die Andwendung des Instruments in einer Stichprobe der erwachsenen Freiburger Wohnbev\u00f6lkerung. Die Reliabilit\u00e4t des Fragebogens wurde anhand von Test-Retest-Untersuchungen, die im Abstand von 2 Wochen und 6 Monaten durchgef\u00fchrt wurden, \u00fcberpr\u00fcft. Die hohen Korrelationen zwischen den Untersuchungen weisen auf eine hohe Reliabilit\u00e4t unseres Instruments hin. Trends im Sinne von saisonal bedingten Schwankungen fanden wir nur beim Rad fahren und der Gartenarbeit, sowie bei der abgeleiteten Basis-und Gesamtaktivit\u00e4t. Die Validierung erfolgte indirekt \u00fcber die maximale Sauerstoffaufnahme. Es bestand eine enge Korrelation zwischen der maximalen Sauerstoffaufnahme und dem Umfang der Sportaktivit\u00e4ten (partieller Korrelationskoeffizient: r=0.422, p<0.01). Die mit dem Fragebogen erhobenen Daten waren in sich konsistent. Personen, die sich selbst als \u201caktiver als ihre Altersgenossen\u201d eingesch\u00e4tzt hatten, waren, bezogen auf die Sport-(r=0.334, p<0.01) und Gesamtaktivit\u00e4t (r=0.282, p<0.05), tats\u00e4chlich aktiver als ihre Altersgenossen. Die Befragung einer (systematischen) Stichprobe der Freiburger Wohnbev\u00f6lkerung (n-612, 20\u201398 Jahre) ergab mittlere Gesamtaktivit\u00e4tsumf\u00e4nge von 9,2 h/Woche. Aktivit\u00e4ten mit leichten bis moderaten Belastungsintensit\u00e4ten machten den gr\u00f6ssten Teil der Gesamtaktivit\u00e4t aus. Die Aktivit\u00e4tsmuster waren abh\u00e4ngig von Alter und Geschlecht. In Anlehnung an die Empfehlungen zum Bewegungssoll waren, nach Paffenbarger (2000 kcal/Woche Gesamtaktivit\u00e4t), 40% der Befragten zu wenig aktiv, nach den Empfehlungen des American College of Sports Medicine (1000 kcal/Woche durch Training) waren dies 63% der Befragten. Aim of the present study was to design a questionnaire to assess health related physical activity, to validate the instrument and to apply it to a population sample. Reliability of the questionnaire was evaluated by testretest investigations with intervals of two weeks and six months. High correlations between the repeated administrations reflect a good reliability of our instrument. Only gardening and cycling, as well as the depending basic and total activity, showed typically seasonal variations. Validity was established by correlating physical activity data with maximum oxygen uptake. Maximum oxygen uptake correlated with sport activities (partial correlation coefficient: r=0.422, p<0.01). Evaluated data were consistent. People rating themselves as \u201cmore active than their coevals\u201d were indeed more active in sport (r=0.334, p<0.01) and total activity (r=0.282, p<0.05). Studying activity patterns of a population sample of adult residents of Freiburg (systematic random sampling, n=612, 20\u201398 years) we found total physical activity of 9,2 hours per week (median), with activities of low to moderate intensities dominating. Age and gender are important determinants of the activity patterns. According to the recommendation of Paffenbarger (2000 kcal/week total physical activity) 40% of the residents of Freiburg did not reach the recommended energy expenditure. 
Compared to the recommendation of the American College of Sports Medicine (1000 kcal/week by training) 63% of the population sample were not active enough. L'objectif de la pr\u00e9sente enqu\u00eate \u00e9tait le d\u00e9veloppement et la validation d'un questionnaire visant a estimer les effets d'activit\u00e9s physiques sur la sant\u00e9 ainsi que l'utilisation de cet outil d'analyse sur un \u00e9chantillon pris au hasard parmi la population adulte Fribourg (F.R.A.). La fiabilit\u00e9 du questionnaire a \u00e9t\u00e9 verifi\u00e9e sur la base de testes r\u00e9effectu\u00e9s \u00e0 deux semaines et \u00e0 six mois d'intervalle. Les hautes corr\u00e9lations entre les diverses enqu\u00eates ont indiqu\u00e9 une forte fiabilit\u00e9 de notre outil d'analyse. Les tendences dans le sens des variations saisonni\u00e8res n'ont \u00e9t\u00e9 trouv\u00e9es qu'au niveau de l'activit\u00e9 cycliste et du jardinage ainsi qu'au niveau des activit\u00e9s de base et globales qui sont d\u00e9riv\u00e9es. La validation s'est effectu\u00e9e de fa\u00e7on indirecte par le biais de la capacit\u00e9 maximale d'absorption d'oxyg\u00e8ne. On a constat\u00e9 un lien \u00e9troit entre la capacit\u00e9 maximale d'absorption d'oxyg\u00e8ne et le volume d'activit\u00e9s sportives (co\u00e9fficient partielle de corr\u00e9lation: r=0.422, p<0.01). Les donn\u00e9es r\u00e9v\u00e9l\u00e9es par le questionnaire \u00e9taient en soi consistentes. Les personnes qui s'estiment \u00eatre plus actives que leurs cong\u00e9n\u00e8res du m\u00eame \u00e2ge en ce qui concerne le sport et l'activit\u00e9 globale \u00e9taient effectivement plus actives. Les r\u00e9sultats du questionnaire ont fait appara\u00eetre une quantit\u00e9 moyenne d'activit\u00e9 globale de 9,2h par semaine. Pour la plus grande partie des activit\u00e9s globales, il s'agissait d'activit\u00e9s avec un niveau d'intensit\u00e9 physique allant de l\u00e9ger \u00e0 mod\u00e9r\u00e9. Les schemas d'activit\u00e9 variaient suivant l'\u00e2ge et le sexe. Selon les recommendations formul\u00e9es par Paffenbarger concernant le minimum d'activit\u00e9 physique requis (2000 kcal par semaine d'activit\u00e9 globale), 40% des personnes interog\u00e9es etaient en dessous de ce seuil. Selon les recommandations du \u201cAmerican College of Sports Medicine\u201d (1000 kcal par semaine d'activit\u00e9 sportive) 63% des personnes interog\u00e9es se situaient en dessous de ce seuil."}
{"_id": "16a710b41c3bd5fa7051763c509ad0ff814e0e6c", "title": "Enabling \u223c100fps detection on a landing unmanned aircraft for its on-ground vision-based recovery", "text": "In this paper, a deep learning inspired solution is proposed and developed to enable timeliness and practicality of the pre-existing ground stereo vision guidance system for flxed-wing UAVs' safe landing. Since the ground guidance prototype was restricted within applications due to its untimeliness, eventually the vision-based detection less than 15 fps (frame per second). Under such circumstances, we employ a regression based deep learning algorithm into automatic detection on the flying aircraft in the landing sequential images. The system architecture is upgraded so as to be compatible with the novel deep learning requests, and furthermore, annotated datasets are conducted to support training and testing of the regression-based learning detection algorithm. Experimental results validate that the detection attaches 100 fps or more while the localization accuracy is kept in the same level."}
{"_id": "cd57f03d68fdeec59f20ec0c1e8d95143d4adb18", "title": "Computer-aided design of analog and mixed-signal integrated circuits", "text": "This survey presents an overview of recent advances in the state of the art for computer-aided design (CAD) tools for analog and mixed-signal integrated circuits (ICs). Analog blocks typically constitute only a small fraction of the components on mixed-signal ICs and emerging systems-on-a-chip (SoC) designs. But due to the increasing levels of integration available in silicon technology and the growing requirement for digital systems to communicate with the continuous-valued external world, there is a growing need for CAD tools that increase the design productivity and improve the quality of analog integrated circuits. This paper describes the motivation and evolution of these tools and outlines progress on the various design problems involved: simulation and modeling, symbolic analysis, synthesis and optimization, layout generation, yield analysis and design centering, and test. This paper summarizes the problems for which viable solutions are emerging and those which are still unsolved."}
{"_id": "97a3e1c23fb18c884db0b739935b6cdd85b4f726", "title": "Effects of Small-Group Learning on Undergraduates in Science , Mathematics , Engineering , and Technology : A Meta-Analysis", "text": ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ..I................ vii Conceptual Framework ........................................................................................................... 2 Motivational Perspective ............................................................................................. 3 Affective Perspective .................................................................................................. 3 Cognitive Perspective .................................................................................................. 3 Forms of Small-Group Learning ................................................................................. 4 Research Questions ..................................................................................................... 4 Meta-Analysis Method ............................................................................................................ 4 Literature Search Procedures ....................................................................................... 4 Inclusion Criteria ......................................................................................................... 5 Metric for Expressing Effect Sizes .............................................................................. 5 Calculations of Average Effect Sizes .......................................................................... 5 Tests for Conditional Effects ....................................................................................... 6 Study Coding ............................................................................................................... 7 Meta Analysis Results ............................................................................................................. 7 Main Effect of Small-Group Learning ........................................................................ 7 Distribution of Effect Sizes ......................................................................................... 7 Conditional Effects of Small-Group Learning .......................................................... 1 0 Discussion and Conclusions .................................................................................................. 1 7 Robust Main Effects .................................................................................................. 1 7 Conditional Effects of Small-Group Learning .......................................................... 1 8 Limitations of the Study ............................................................................................ 2 0 Implications for Theory, Research, Policy, and Practice .......................................... 2 1 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 3 Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 9"}
{"_id": "18f1d654d1dbf9f9e585e2bbace296c665d512f7", "title": "IoT smart parking system for reducing green house gas emission", "text": "Rapid climate change results natural calamity and severe economic impact and threats to the life. Burning fossil fuels by the medium of transportation contributes 1/3 of portion in increasing greenhouse emission and leading to raise surface temperate. Commuters in and around the developed cities faces difficulties in finding parking lot due to lack of notification process and autonomous parking systems. This causes commuters to take multiple rounds trips to get the parking slot which causes burning additional fuel and ultimately producing excessive CO2 emission. This paper describes solution to smart parking system using Internet of Things (IoT) to override parking hazards and explains how does it helps to minimize emitting greenhouse gases. IoT enables smart parking system using the system of interconnected Raspberry Pi, Distance Sensor, Pi Camera devices together. This hardware reacts to one another collects data and transmits to cloud storage."}
{"_id": "c60102d9d83000a7504c75f2a80605327420b96f", "title": "A more realistic error distance calculation for indoor positioning systems accuracy evaluation", "text": "The accuracy of indoor positioning systems is commonly computed as a metric based on the Euclidean distance from estimated locations to actual locations. This paper suggests that positioning error distances should be computed as the lengths of the paths that a person may follow when going from wrongly estimated positions to the real positions. The paper proposes a method that calculates the paths from floor plan and obstacles information using the visibility graphs and offsetting techniques, which are commonly used in robotics and CAD/CAM for navigation and manufacturing, respectively. Demonstration of the proposed method was done using a WiFi fingerprinting method based on kNN for pedestrian navigation. Comparisons between our proposed distance and the simple Euclidean distance have shown that the error distances are underestimated and that the differences between the two distances cannot be accurately represented by a fixed quantity in the context of an Indoor Positioning System (IPS) deployed in a library building. We consider that our proposed positioning error distance is more in line with the subjective error perceived by IPS users."}
{"_id": "4f6b7f8daa322801887b2ef8c2c14788e607e3b8", "title": "There is a blind spot in AI research", "text": "advances in the technical domains of AI. Alongside such efforts, designers and researchers from a range of disciplines need to conduct what we call social-systems analyses of AI. They need to assess the impact of technologies on their social, cultural and political settings. A social-systems approach could investigate, for instance, how the app AiCure \u2014 which tracks patients\u2019 adherence to taking prescribed medication and transmits records to physicians \u2014 is changing the doctor\u2013 patient relationship. Such an approach to perform a range of complex tasks in everyday life. These ranged from the identification of skin alterations that are indicative of earlystage cancer to the reduction of energy costs for data centres. The workshops also highlighted a major blind spot in thinking about AI. Autonomous systems are already deployed in our most crucial social institutions, from hospitals to courtrooms. Yet there are no agreed methods to assess the sustained effects of such applications on human populations. Recent years have brought extraordinary There is a blind spot in AI research"}
{"_id": "44e6fdffac4e670bd2a96203628fcfb6c6816ad4", "title": "Real-time visual odometry from dense RGB-D images", "text": "We present an energy-based approach to visual odometry from RGB-D images of a Microsoft Kinect camera. To this end we propose an energy function which aims at finding the best rigid body motion to map one RGB-D image into another one, assuming a static scene filmed by a moving camera. We then propose a linearization of the energy function which leads to a 6\u00d76 normal equation for the twist coordinates representing the rigid body motion. To allow for larger motions, we solve this equation in a coarse-to-fine scheme. Extensive quantitative analysis on recently proposed benchmark datasets shows that the proposed solution is faster than a state-of-the-art implementation of the iterative closest point (ICP) algorithm by two orders of magnitude. While ICP is more robust to large camera motion, the proposed method gives better results in the regime of small displacements which are often the case in camera tracking applications."}
{"_id": "5da3c40f2668ab4e44c71e267c559fca273e12ca", "title": "On unifying key-frame and voxel-based dense visual SLAM at large scales", "text": "This paper proposes an approach to real-time dense localisation and mapping that aims at unifying two different representations commonly used to define dense models. On one hand, much research has looked at 3D dense model representations using voxel grids in 3D. On the other hand, image-based key-frame representations for dense environment mapping have been developed. Both techniques have their relative advantages and disadvantages which will be analysed in this paper. In particular each representation's space-size requirements, their effective resolution, the computation efficiency, their accuracy and robustness will be compared. This paper then proposes a new model which unifies various concepts and exhibits the main advantages of each approach within a common framework. One of the main results of the proposed approach is its ability to perform large scale reconstruction accurately at the scale of mapping a building."}
{"_id": "9e21d680d80cd41d67f5355fcad109e892d1fd0c", "title": "gSLICr: SLIC superpixels at over 250Hz", "text": "We introduce a parallel GPU implementation of the Simple Linear Iterative Clustering (SLIC) superpixel segmentation. Using a single graphic card, our implementation achieves speedups of up to 83\u00d7 from the standard sequential implementation. Our implementation is fully compatible with the standard sequential implementation and the software is now available online and is open source."}
{"_id": "a9e75194aa9e261233d439c4cbcc21f2f7877515", "title": "IoT-based occupancy monitoring techniques for energy-efficient smart buildings", "text": "With the proliferation of Internet of Things (IoT) devices such as smartphones, sensors, cameras, and RFIDs, it is possible to collect massive amount of data for localization and tracking of people within commercial buildings. Enabled by such occupancy monitoring capabilities, there are extensive opportunities for improving the energy consumption of buildings via smart HVAC control. In this respect, the major challenges we envision are 1) to achieve occupancy monitoring in a minimally intrusive way, e.g., using the existing infrastructure in the buildings and not requiring installation of any apps in the users' smart devices, and 2) to develop effective data fusion techniques for improving occupancy monitoring accuracy using a multitude of sources. This paper surveys the existing works on occupancy monitoring and multi-modal data fusion techniques for smart commercial buildings. The goal is to lay down a framework for future research to exploit the spatio-temporal data obtained from one or more of various IoT devices such as temperature sensors, surveillance cameras, and RFID tags that may be already in use in the buildings. A comparative analysis of existing approaches and future predictions for research challenges are also provided."}
{"_id": "c6caba7137f8282422ab92daf2bed5755121a17d", "title": "Quality evaluation of product reviews using an information quality framework", "text": "Available online 27 August 2010"}
{"_id": "0cb184550ec3e24403e44f745bf28690cdd73ead", "title": "Distributed fair scheduling in a wireless LAN", "text": "Fairness is an important issue when accessing a shared wireless channel. With fair scheduling, it is possible to allocate bandwidth in proportion to weightsof the packet flows sharing the channel. This paper presents a fully distributed algorithm for fair scheduling in a wireless LAN. The algorithm can be implemented without using a centralized coordinator to arbitrate medium access. The proposed protocol is derived from the Distributed Coordination Function in the IEEE 802.11 standard. Simulation results show that the proposed algorithm is able to schedule transmission such that the bandwidth allocated to different flows is proportional to their weights. An attractive feature of the proposed approach is that it can be implemented with simple modifications to the IEEE 802.11 standard."}
{"_id": "733943a2b851cd3c42414295440ef5cd1a038bc5", "title": "Educational Data Mining: A Review of the State of the Art", "text": "Educational data mining (EDM) is an emerging interdisciplinary research area that deals with the development of methods to explore data originating in an educational context. EDM uses computational approaches to analyze educational data in order to study educational questions. This paper surveys the most relevant studies carried out in this field to date. First, it introduces EDM and describes the different groups of user, types of educational environments, and the data they provide. It then goes on to list the most typical/common tasks in the educational environment that have been resolved through data-mining techniques, and finally, some of the most promising future lines of research are discussed."}
{"_id": "ceb9189c726dd73c3e1a784017ad15bb7c7133f6", "title": "Artificial Vision in Road Vehicles", "text": "The last few decades witnessed the birth and growth of a new sensibility to transportation efficiency. In particular, the need for efficient and improved people and goods mobility pushed researchers to address the problem of intelligent transportation systems. This paper surveys the most advanced approaches to the (partial) customization of road following task, using on-board systems based on artificial vision. The functionalities of lane detection, obstacle detection and pedestrian detection are described and classified, and their possible application on future road vehicles is discussed."}
{"_id": "23389d2f93a8e3bf4f0fc4fcb6226c93d1142662", "title": "A Log-Logistic Model-Based Interpretation of TF Normalization of BM25", "text": "The effectiveness of BM25 retrieval function is mainly due to its sub-linear term frequency (TF) normalization component, which is controlled by a parameter k1. Although BM25 was derived based on the classic probabilistic retrieval model, it has been so far unclear how to interpret its parameter k1 probabilistically, making it hard to optimize the setting of this parameter. In this paper, we provide a novel probabilistic interpretation of the BM25 TF normalization and its parameter k1 based on a log-logistic model for the probability of seeing a document in the collection with a given level of TF. The proposed interpretation allows us to derive different approaches to estimation of parameter k1 based solely on the current collection without requiring any training data, thus effectively eliminating one free parameter from BM25. Our experiment results show that the proposed approaches can accurately predict the optimal k1 without requiring training data and achieve better or comparable retrieval performance to a well-tuned BM25 where k1 is optimized based on training data."}
{"_id": "f02a82b8b487f070d1f72581f12ff727881a80c3", "title": "CINCO: a simplicity-driven approach to full generation of domain-specific graphical modeling tools", "text": "Even with the help of powerful metamodeling frameworks, the development of domain-specific graphical modeling tools is usually a complex, repetitive, and tedious task, which introduces substantial upfront costs often prohibiting such approaches in practice. In order to reduce these costs, the presented Cinco meta tooling suite is designed to provide a holistic approach that greatly simplifies the development of such domain-specific tools. Our solution is based on the idea to apply the concept of domain specialization also to the (meta-)domain of \u201cdomain-specific modeling tools\u201d. Important here is our focus on complex graph-based models, comprising various kinds of nodes and edges together with their individual representation, correlations, and interpretation. This focus allows for high-level specifications of the model structures and functionalities as the prerequisite for push-button tool generation."}
{"_id": "8d0921b2ce0d30bffd8830b1915533c02b96958d", "title": "Microgrid power electronic converters: State of the art and future challenges", "text": "This paper presents a review of the state of the art of power electric converters used in microgrids. The paper focuses primarily on grid connected converters. Different topologies and control and modulation strategies for these specific converters are critically reviewed. Moreover, future challenges in respect of these converters are identified along with their potential solutions."}
{"_id": "f1a394cc3aaad04c6aaf0dd7fde6bf0572d6fd44", "title": "Parental stress, family quality of life, and family-teacher partnerships: Families of children with autism spectrum disorder.", "text": "BACKGROUND\nReducing parental stress and improving family quality of Life (FQOL) are continuing concerns for families of children with autism spectrum disorder (ASD). Family-teacher partnerships have been identified as a positive factor to help parents reduce their stress and improve their FQOL. However, the interrelations among parental stress, FQOL, and family-teacher partnerships need to be further examined so as to identify the possible paths to help parents reduce their stress and improve their FQOL. The purpose of this study was to examine the interrelations among these three variables.\n\n\nMETHOD\nA total of 236 parents of school children with ASD completed questionnaires, which included three measures: (a) the Beach Center Family Quality of Life Scale, (b) the Parental Stress Scale, and (c) the Beach Center Family-Professional Partnerships Scale. The structural equation modeling was used to analyze the interrelations among these three variables.\n\n\nRESULTS\nPerceived parental stress had a direct effect on parental satisfaction concerning FQOL and vice versa. Perceived family-teacher partnerships had a direct effect on FQOL, but did not have a direct effect on parental stress. However, family-teacher partnerships had an indirect effect on parental stress through FQOL.\n\n\nCONCLUSIONS AND IMPLICATIONS\nReducing parental stress could improve FQOL for families of children with ASD and vice versa. Strong family-teacher partnerships could help parents of children with ASD improve their FQOL and indirectly reduce their stress."}
{"_id": "af8e22ef8c405f9cc9ad26314cb7a9e7d3d4eec2", "title": "A new facial expression recognition based on curvelet transform and online sequential extreme learning machine initialized with spherical clustering", "text": "In this paper, a novel algorithm is proposed for facial expression recognition by integrating curvelet transform and online sequential extreme learning machine (OSELM) with radial basis function (RBF) hidden node having optimal network architecture. In the proposed algorithm, the curvelet transform is firstly applied to each region of the face image divided into local regions instead of whole face image to reduce the curvelet coefficients too huge to classify. Feature set is then generated by calculating the entropy, the standard deviation and the mean of curvelet coefficients of each region. Finally, spherical clustering (SC) method is employed to the feature set to automatically determine the optimal hidden node number and RBF hidden node parameters of OSELM by aim of increasing classification accuracy and reducing the required time to select the hidden node number. So, the learning machine is called as OSELM-SC. It is constructed two groups of experiments: The aim of the first one is to evaluate the classification performance of OSELM-SC on the benchmark datasets, i.e., image segment, satellite image and DNA. The second one is to test the performance of the proposed facial expression recognition algorithm on the Japanese Female Facial Expression database and the Cohn-Kanade database. The obtained experimental results are compared against the state-of-the-art methods. The results demonstrate that the proposed algorithm can produce effective facial expression features and exhibit good recognition accuracy and robustness."}
{"_id": "4f04b0432fa6f96544cb1bce4eb8e7e964afee38", "title": "Advances in High-Voltage Modulators for Applications in Pulsed Power and Plasma-Based Ion Implantation", "text": "Modern pulsed power technology has its roots in the late 1950s and early 1960s, and it was driven overwhelmingly by applications in national defense carried out by several countries, especially the U.S., U.K., Russia, and China. The following decades, particularly the early 1990s, witnessed an increased interest in compact systems with pulse repetition rate that could be used in nondefense applications such as treatment of material surfaces by plasma and beam interactions, treatment of pollutants, food sterilization, medical applications, etc. This spawned a new generation of pulsed power components (solid-state switches) that led to completely solid-state modulators. This paper describes how the pulsed power technology used originally in beam sources and cathodic arcs has converged to produce power sources for plasma-based ion implantation (PBII) and related technologies. The present state of the art is reviewed, and prospects for future advances are described, especially for PBII."}
{"_id": "93d89124ba73421da6f89aa97ad2d3c74cb4baf1", "title": "Norepinephrine transporter occupancy by antidepressant in human brain using positron emission tomography with (S,S)-[18F]FMeNER-D2", "text": "Central norepinephrine transporter (NET) is one of the main targets of antidepressants. Although the measurement of NET occupancy has been attempted in humans, the outcomes have been inconclusive. In this study, the occupancy of NET by different doses of an antidepressant, nortriptyline, was measured using positron emission tomography (PET) with (S,S)-[18F]FMeNER-D2. PET scans using (S,S)-[18F]FMeNER-D2 were performed on six healthy men before and after oral administration of a single oral dose of nortriptyline (10\u201375\u00a0mg). After a bolus i.v. injection of (S,S)-[18F]FMeNER-D2, dynamic scanning was performed for 0\u201390\u00a0min, followed by scanning for 120\u2013180\u00a0min. The ratio of the thalamus-to-caudate areas under the curve (120\u2013180\u00a0min) minus 1 was used as the binding potential (BPND) for NET. NET occupancy was calculated as the percentage reduction of BPND. Venous blood samples were taken to measure the concentrations of nortriptyline just before injection of the tracer and at 180\u00a0min after the injection. Mean NET occupancies by nortriptyline were 16.4% at 10\u00a0mg, 33.2% at 25\u00a0mg, and 41.1% at 75\u00a0mg. The mean plasma concentration of nortriptyline was less than the lower limit of detection at 10\u00a0mg, 23.7\u00a0ng/mL at 25\u00a0mg, and 50.5\u00a0ng/mL at 75\u00a0mg. Estimated ED50 was 76.8\u00a0mg of administration dose and 59.8\u00a0ng/mL of plasma concentration. NET occupancy by nortriptyline corresponding to the administration dose of 10\u201375\u00a0mg or plasma concentration was observed from 16% to 41%."}
{"_id": "2165d87bfb3658ac4d3cb8dd49cf0da72cad04c0", "title": "A neuromorphic architecture for anomaly detection in autonomous large-area traffic monitoring", "text": "The advanced sensing and imaging capability of today's sensor networks enables real time monitoring in a large area. In order to provide continuous monitoring and prompt situational awareness, an abstract-level autonomous information processing framework is developed that is able to detect various categories of abnormal traffic events with unsupervised learning. The framework is based on cogent confabulation model, which performs statistical inference in a manner inspired by human neocortex system. It enables detection and recognition of abnormal target vehicles within the context of surrounding traffic activities and previous events using likelihood-ratio test. A neuromorphic architecture is proposed which accelerates the computation for real-time detection by leveraging memristor crossbar arrays."}
{"_id": "433f2f2100b7f4b952da8de6e8ccc4e60514fe76", "title": "Ontogenesis of Agency in Machines : A Multidisciplinary Review", "text": "How does agency arise in biocognitive systems? Can the emergence of agency be explained by physical laws? We review these and related questions in philosophy, psychology, biology, and physics. Based on the review we ask the questions: i) Can machines have agency? and, if so, ii) How can we build machines with agency? We examine existing work in artificial intelligence and machine learning on selfmotivated, self-teaching, and self-developing systems with respect to the ontogenesis (\u201ccoming into being\u201d) of agency in computational systems. The impact of these \u201cautogenic\u201d systems or machine agency on science, technology, and humanity will be discussed."}
{"_id": "642379f705f785484cf488238ece9bac8a1da8ad", "title": "CODRA: A Novel Discriminative Framework for Rhetorical Analysis", "text": "Clauses and sentences rarely stand on their own in an actual discourse; rather, the relationship between them carries important information that allows the discourse to express a meaning as a whole beyond the sum of its individual parts. Rhetorical analysis seeks to uncover this coherence structure. In this article, we present CODRA\u2014 a COmplete probabilistic Discriminative framework for performing Rhetorical Analysis in accordance with Rhetorical Structure Theory, which posits a tree representation of a discourse.CODRA comprises a discourse segmenter and a discourse parser. First, the discourse segmenter, which is based on a binary classifier, identifies the elementary discourse units in a given text. Then the discourse parser builds a discourse tree by applying an optimal parsing algorithm to probabilities inferred from two Conditional Random Fields: one for intra-sentential parsing and the other for multi-sentential parsing. We present two approaches to combine these two stages of parsing effectively. By conducting a series of empirical evaluations over two different data sets, we demonstrate that CODRA significantly outperforms the state-of-the-art, often by a wide margin. We also show that a reranking of the k-best parse hypotheses generated by CODRA can potentially improve the accuracy even further."}
{"_id": "ce78fe238a362e0ea8c6cf65bb63d96bd72075cd", "title": "An Access Control scheme for Big Data processing", "text": "Access Control (AC) systems are among the most critical of network security components. A system's privacy and security controls are more likely to be compromised due to the misconfiguration of access control policies rather than the failure of cryptographic primitives or protocols. This problem becomes increasingly severe as software systems become more and more complex, such as Big Data (BD) processing systems, which are deployed to manage a large amount of sensitive information and resources organized into a sophisticated BD processing cluster. Basically, BD access control requires the collaboration among cooperating processing domains to be protected as computing environments that consist of computing units under distributed AC managements. Many BD architecture designs were proposed to address BD challenges; however, most of them were focused on the processing capabilities of the \u201cthree Vs\u201d (Velocity, Volume, and Variety). Considerations for security in protecting BD are mostly ad hoc and patch efforts. Even with some inclusion of security in recent BD systems, a critical security component, AC (Authorization), for protecting BD processing components and their users from the insider attacks, remains elusive. This paper proposes a general purpose AC scheme for distributed BD processing clusters."}
{"_id": "50b714de285fa093acb30efe52086dc8ac4896d2", "title": "Brightness Preserving Dynamic Histogram Equalization for Image Contrast Enhancement", "text": "Histogram equalization (HE) is one of the common methods used for improving contrast in digital images. However, this technique is not very well suited to be implemented in consumer electronics, such as television, because the method tends to introduce unnecessary visual deterioration such as the saturation effect. One of the solutions to overcome this weakness is by preserving the mean brightness of the input image inside the output image. This paper proposes a new method, known as brightness preserving dynamic histogram equalization (BPDHE), which is an extension to HE that can produce the output image with the mean intensity almost equal to the mean intensity of the input, thus fulfill the requirement of maintaining the mean brightness of the image. First, the method smoothes the input histogram with one dimensional Gaussian filter, and then partitions the smoothed histogram based on its local maximums. Next, each partition will be assigned to a new dynamic range. After that, the histogram equalization process is applied independently to these partitions, based on this new dynamic range. For sure, the changes in dynamic range, and also histogram equalization process will alter the mean brightness of the image. Therefore, the last step in this method is to normalize the output image to the input mean brightness. Our results from 80 test images shows that this method outperforms other present mean brightness preserving histogram equalization methods. In most cases, BPDHE successfully enhance the image without severe side effects, and at the same time, maintain the mean input brightness1."}
{"_id": "7cd0a8238a483161fd1e3a36ea5b0092b71e6d6a", "title": "Language, music, syntax and the brain", "text": "The comparative study of music and language is drawing an increasing amount of research interest. Like language, music is a human universal involving perceptually discrete elements organized into hierarchically structured sequences. Music and language can thus serve as foils for each other in the study of brain mechanisms underlying complex sound processing, and comparative research can provide novel insights into the functional and neural architecture of both domains. This review focuses on syntax, using recent neuroimaging data and cognitive theory to propose a specific point of convergence between syntactic processing in language and music. This leads to testable predictions, including the prediction that that syntactic comprehension problems in Broca's aphasia are not selective to language but influence music perception as well."}
{"_id": "4b3999fb967467f569850c1accc989b114701d17", "title": "Long-term effect of high-intensity laser therapy in the treatment of patients with chronic low back pain: a randomized blinded placebo-controlled trial", "text": "The aim of this study was to compare the effect of high-intensity laser therapy (HILT), alone or combined with exercise, in the treatment of chronic low back pain (CLBP). A total of 72 male patients participated in this study, with a mean (SD) age of 32.81 (4.48) years. Patients were randomly assigned into three groups and treated with HILT plus exercise (HILT + EX), placebo laser plus exercise (PL + EX), and HILT alone in groups 1, 2, and 3, respectively. The outcomes measured were lumbar range of motion (ROM), pain level by visual analog scale (VAS), and functional disability by both the Roland Disability Questionnaire (RDQ) and the Modified Oswestry Disability Questionnaire (MODQ). Statistical analyses were performed to compare the differences between baseline and post-treatment measurements. The level of statistical significance was set as P\u2009<\u20090.05. ROM significantly increased after 4\u00a0weeks of treatment in all groups, then significantly decreased after 12\u00a0weeks of follow-up, but was still significantly more than the baseline value in groups 1 and 2. VAS, RDQ, and MODQ results showed significant decrease post-treatment in all groups, although the RDQ and MODQ results were not significantly different between groups 2 and 3. HILT combined with exercise appears to be more effective in patients with CLBP than either HLLT alone or placebo laser with exercise."}
{"_id": "9eec85413bac7b59805f2537901acb279d2ecaff", "title": "A Lightweight Multicast Authentication Mechanism for Small Scale IoT Applications", "text": "Security is very important for Internet of Things (IoTs). As a main communication mode, the security mechanism for multicast is not only the measure to ensure secured communications, but also the precondition for other security services. With the analysis of Nyberg's fast one-way accumulator and its security, we discover that it has the property of absorbency in addition to the one-way and quasi-communicative property that makes it very suitable for applications in which accumulated items are dynamic. In this paper, we revise the original Nyberg's fast one-way accumulator and construct a lightweight multicast authentication mechanism for small scale IoT applications. We analyze the security of the mechanism in detail. In addition, we evaluate seven performance aspects of the mechanism."}
{"_id": "51f53b98ccc60bf255ab653a11b1573ed3c5d815", "title": "Efficient set intersection for inverted indexing", "text": "Conjunctive Boolean queries are a key component of modern information retrieval systems, especially when Web-scale repositories are being searched. A conjunctive query q is equivalent to a |q|-way intersection over ordered sets of integers, where each set represents the documents containing one of the terms, and each integer in each set is an ordinal document identifier. As is the case with many computing applications, there is tension between the way in which the data is represented, and the ways in which it is to be manipulated. In particular, the sets representing index data for typical document collections are highly compressible, but are processed using random access techniques, meaning that methods for carrying out set intersections must be alert to issues to do with access patterns and data representation. Our purpose in this article is to explore these trade-offs, by investigating intersection techniques that make use of both uncompressed \u201cinteger\u201d representations, as well as compressed arrangements. We also propose a simple hybrid method that provides both compact storage, and also faster intersection computations for conjunctive querying than is possible even with uncompressed representations."}
{"_id": "d2be22e50067b6ac13ccbafb3f7ece2a96988c9d", "title": "Inverted index compression and query processing with optimized document ordering", "text": "Web search engines use highly optimized compression schemes to decrease inverted index size and improve query throughput, and many index compression techniques have been studied in the literature. One approach taken by several recent studies first performs a renumbering of the document IDs in the collection that groups similar documents together, and then applies standard compression techniques. It is known that this can significantly improve index compression compared to a random document ordering. We study index compression and query processing techniques for such reordered indexes. Previous work has focused on determining the best possible ordering of documents. In contrast, we assume that such an ordering is already given, and focus on how to optimize compression methods and query processing for this case. We perform an extensive study of compression techniques for document IDs and present new optimizations of existing techniques which can achieve significant improvement in both compression and decompression performances. We also propose and evaluate techniques for compressing frequency values for this case. Finally, we study the effect of this approach on query processing performance. Our experiments show very significant improvements in index size and query processing speed on the TREC GOV2 collection of 25.2 million web pages."}
{"_id": "924276d95d1bdaf087beb0ccf699443b9bf855ec", "title": "Super-Scalar RAM-CPU Cache Compression", "text": "High-performance data-intensive query processing tasks like OLAP, data mining or scientific data analysis can be severely I/O bound, even when high-end RAID storage systems are used. Compression can alleviate this bottleneck only if encoding and decoding speeds significantly exceed RAID I/O bandwidth. For this purpose, we propose three new versatile compression schemes (PDICT, PFOR, and PFOR-DELTA) that are specifically designed to extract maximum IPC from modern CPUs. We compare these algorithms with compression techniques used in (commercial) database and information retrieval systems. Our experiments on the MonetDB/X100 database system, using both DSM and PAX disk storage, show that these techniques strongly accelerate TPC-H performance to the point that the I/O bottleneck is eliminated."}
{"_id": "9c9c4a03290263be09fcbe54d2188966a165ffca", "title": "Inverted files for text search engines", "text": "The technology underlying text search engines has advanced dramatically in the past decade. The development of a family of new index representations has led to a wide range of innovations in index storage, index construction, and query evaluation. While some of these developments have been consolidated in textbooks, many specific techniques are not widely known or the textbook descriptions are out of date. In this tutorial, we introduce the key techniques in the area, describing both a core implementation and how the core can be enhanced through a range of extensions. We conclude with a comprehensive bibliography of text indexing literature."}
{"_id": "b52b27ce05c6241e7feb965ea615ad1a9b629a2e", "title": "Compressed Inverted Indexes for In-Memory Search Engines", "text": "We present the algorithmic core of a full text data base that allows fast Boolean queries, phrase queries, and document reporting using less space than the input text. The system uses a carefully choreographed combination of classical data compression techniques and inverted index based search data structures. It outperforms suffix array based techniques for all the above operations for real world (natural language) texts."}
{"_id": "39a39cb833d5c7b8a145737af1a420c98762390a", "title": "Retinal blood vessel segmentation using matched filter and Laplacian of Gaussian", "text": "Automated blood vessel segmentation of retinal images offers huge potential benefits for medical diagnosis of different ocular diseases. In this paper, 2D Matched Filters (MF) are applied to fundus retinal images to detect vessels which are enhanced by Contrast Limited Adaptive Histogram Equalization (CLAHE) method. Due to the Gaussian nature of blood vessel profile, the MF with Gaussian kernel often misclassifies non-vascular structures (e.g., step, ramp or other transients) as vessels. To avoid such false detection, this paper introduces Laplacian of Gaussian (LoG) filters in the vessel segmentation process. The inherent zero-crossing property of LoG filter is used in the algorithm, along with the MF, in order to extract vessels reliably from the retinal images. The proposed method is validated against three publicly available databases, STARE, DRIVE and HRF. Simulation results show that the proposed method is able to segment vessels accurately from the three database images with an average accuracy that is competitive to the existing methodologies."}
{"_id": "4668fe3413e7a62ba0efa6bbb4f25bcda23d6383", "title": "Deep Learning for the Classification of Lung Nodules", "text": "Deep learning, as a promising new area of machine learning, has attracted a rapidly increasing attention in the field of medical imaging. Compared to the conventional machine learning methods, deep learning requires no hand-tuned feature extractor, and has shown a superior performance in many visual object recognition applications. In this study, we develop a deep convolutional neural network (CNN) and apply it to thoracic CT images for the classification of lung nodules. We present the CNN architecture and classification accuracy for the original images of lung nodules. In order to understand the features of lung nodules, we further construct new datasets, based on the combination of artificial geometric nodules and some transformations of the original images, as well as a stochastic nodule shape model. It is found that simplistic geometric nodules cannot capture the important features of lung nodules."}
{"_id": "4d6e39e24d0a8d7327ba94c5463ea465faf5b65d", "title": "FUNCTIONAL AND LONGITUDINAL DATA ANALYSIS: PERSPECTIVES ON SMOOTHING", "text": "The perspectives and methods of functional data analysis and longitudinal data analysis are contrasted and compared. Topics include kernel methods and random effects models for smoothing, basis function methods, and examination of the relation of covariates to curve shapes. Some directions in which methodology might advance are identified."}
{"_id": "4fb2e2c8bdf8a61796d19926095adb0b5136cb21", "title": "Ethical issues raised by the treatment of gender-variant prepubescent children.", "text": "Transgender issues and transgender rights have become increasingly a matter of media attention and public policy debates. Reflecting changes in psychiatric perspectives, the diagnosis of \"trans-sexualism\" first appeared in the International Statistical Classification of Diseases and Related Health Problems in 1975 and shortly thereafter, in 1980, in the Diagnostic and Statistical Manual of Mental Disorders. Since that time, international standards of care have been developed, and today those standards are followed by clinicians across diverse cultures. In many instances, treatment of older adolescents and adults is covered by national health care systems and, in some cases, by private health insurance. Most recently, the Medicare ban on coverage for gender reassignment surgery was lifted in 2014. In contrast to the relative lack of controversy about treating adolescents and adults, there is no expert clinical consensus regarding the treatment of prepubescent children who meet diagnostic criteria for what was referred to in both DSM-IV-TR and ICD-10 as gender identity disorder in children and now in DSM-5 as gender dysphoria. One reason for the differing attitudes has to do with the pervasive nature of gender dysphoria in older adolescents and adults: it rarely desists, and so the treatment of choice is gender or sex reassignment. On the subject of treating children, however, as the World Professional Association for Transgender Health notes in their latest Standards of Care, gender dysphoria in childhood does not inevitably continue into adulthood, and only 6 to 23 percent of boys and 12 to 27 percent of girls treated in gender clinics showed persistence of their gender dysphoria into adulthood. Further, most of the boys' gender dysphoria desisted, and in adulthood, they identified as gay rather than as transgender. In an effort to clarify best treatment practices for transgender individuals, a recent American Psychiatric Association Task Force on the Treatment of Gender Identity outlined three differing approaches to treating prepubescent gender dysphoric children."}
{"_id": "c24b8e1af485085548d5fbe6ecb7bc97591f825e", "title": "Revisiting The American Voter on Twitter", "text": "The American Voter - a seminal work in political science - uncovered the multifaceted nature of voting behavior which has been corroborated in electoral research for decades since. In this paper, we leverage The American Voter as an analysis framework in the realm of computational political science, employing the factors of party, personality, and policy to structure the analysis of public discourse on online social media during the 2016 U.S. presidential primaries. Our analysis of 50 million tweets reveals the continuing importance of these three factors; our understanding is also enriched by the application of sentiment analysis techniques. The overwhelmingly negative sentiment of conversations surrounding 10 major presidential candidates reveals more \"crosstalk\" from Democratic leaning users towards Republican candidates, and less vice-versa. We uncover the lack of moderation as the most discussed personality dimension during this campaign season, as the political field becomes more extreme - Clinton and Rubio are perceived as moderate, while Trump, Sanders, and Cruz are not. While the most discussed issues are foreign policy and immigration, Republicans tweet more about abortion than Democrats who tweet more about gay rights than Republicans. Finally, we illustrate the importance of multifaceted political discourse analysis by applying regression to quantify the impact of party, personality, and policy on national polls."}
{"_id": "1713cf9acb23862a72a4756d53e3b1591e2fd7f4", "title": "Event Correlation: Language and Semantics", "text": "300.twolf 3000 102 2945 102 2945 Hardware CPU: Intel(R) Xeon(R) CPU 5150 @ 2.66GHz CPU MHz: 2660 FPU: Integrated CPU(s) enabled: 4 cores, 2 chips, 2 cores/chip CPU(s) orderable: 1,2 chip(s) Parallel: No Primary Cache: 32KB(I) + 32KB(D) on chip, per core Secondary Cache: 4096KB(I+D) on chip, per chip, shared L3 Cache: N/A Other Cache: N/A Memory: 4 x 1024MB ECC FB-DIMM DDR2-667MHz Disk Subsystem: 1 x 250GB SATA HDD Other Hardware: Software Operating System: Red Hat Enterprise Linux 4 Update 3 EM64T Compiler: Intel C++ Compiler 9.1.042 for EM64T File System: ext3 System State: Runlevel 3"}
{"_id": "5092a67406d823a6f6fd3dac555b9d022ad20bdf", "title": "There is no 16-Clue Sudoku: Solving the Sudoku Minimum Number of Clues Problem", "text": "We apply our new hitting set enumeration algorithm to solve the sudoku minimum number of clues problem, which is the following question: What is the smallest number of clues (givens) that a sudoku puzzle may have? It was conjectured that the answer is 17. We have performed an exhaustive search for a 16-clue sudoku puzzle, and we did not find one, thereby proving that the answer is indeed 17. This article describes our method and the actual search. The hitting set problem is computationally hard; it is one of Karp\u2019s twenty-one classic NP-complete problems. We have designed a new algorithm that allows us to efficiently enumerate hitting sets of a suitable size. Hitting set problems have applications in many areas of science, such as bioinformatics and software testing. \u2217School of Mathematical Sciences, University College Dublin, Ireland. E-mail: gary.mcguire@ucd.ie"}
{"_id": "bef980f5daf912fd69a9785739813dcdca06371f", "title": "On spatial smoothing for direction-of-arrival estimation of coherent signals", "text": ""}
{"_id": "324d86b305964e36bb18c198b0c2cf5b66918e6c", "title": "Collaborative detection of cyberbullying behavior in Twitter data", "text": "As the size of Twitter\u00a9 data is increasing, so are undesirable behaviors of its users. One of such undesirable behavior is cyberbullying, which may even lead to catastrophic consequences. Hence, it is critical to efficiently detect cyberbullying behavior by analyzing tweets, if possible in realtime. Prevalent approaches to identify cyberbullying are mainly stand-alone and thus, are time-consuming. This research improves detection task using the principles of collaborative computing. Different collaborative paradigms are suggested and discussed in this paper. Preliminary results indicate an improvement in time and accuracy of the detection mechanism over the stand-alone paradigm."}
{"_id": "e985e7ec130ce4552222d7fb4b2d2f923fd2a501", "title": "Orthogonal and Idempotent Transformations for Learning Deep Neural Networks", "text": "Identity transformations, used as skip-connections in residual networks, directly connect convolutional layers close to the input and those close to the output in deep neural networks, improving information flow and thus easing the training. In this paper, we introduce two alternative linear transforms, orthogonal transformation and idempotent transformation. According to the definition and property of orthogonal and idempotent matrices, the product of multiple orthogonal (same idempotent) matrices, used to form linear transformations, is equal to a single orthogonal (idempotent) matrix, resulting in that information flow is improved and the training is eased. One interesting point is that the success essentially stems from feature reuse and gradient reuse in forward and backward propagation for maintaining the information during flow and eliminating the gradient vanishing problem because of the express way through skip-connections. We empirically demonstrate the effectiveness of the proposed two transformations: similar performance in single-branch networks and even superior in multi-branch networks in comparison to identity transformations."}
{"_id": "b43be5de19e5cab8d1b476c42899f92e75660510", "title": "Country-wide rainfall maps from cellular communication networks.", "text": "Accurate and timely surface precipitation measurements are crucial for water resources management, agriculture, weather prediction, climate research, as well as ground validation of satellite-based precipitation estimates. However, the majority of the land surface of the earth lacks such data, and in many parts of the world the density of surface precipitation gauging networks is even rapidly declining. This development can potentially be counteracted by using received signal level data from the enormous number of microwave links used worldwide in commercial cellular communication networks. Along such links, radio signals propagate from a transmitting antenna at one base station to a receiving antenna at another base station. Rain-induced attenuation and, subsequently, path-averaged rainfall intensity can be retrieved from the signal's attenuation between transmitter and receiver. Here, we show how one such a network can be used to retrieve the space-time dynamics of rainfall for an entire country (The Netherlands, \u223c35,500 km(2)), based on an unprecedented number of links (\u223c2,400) and a rainfall retrieval algorithm that can be applied in real time. This demonstrates the potential of such networks for real-time rainfall monitoring, in particular in those parts of the world where networks of dedicated ground-based rainfall sensors are often virtually absent."}
{"_id": "fcd05a3d018c6711d6da3cb4b5aa1abac132e53a", "title": "Smooth assembled mappings for large-scale real walking", "text": "Virtual reality applications prefer real walking to provide highly immersive presence than other locomotive methods. Mapping-based techniques are very effective for supporting real walking in small physical workspaces while exploring large virtual scenes. However, the existing methods for computing real walking maps suffer from poor quality due to distortion. In this paper, we present a novel divide-and-conquer method, called Smooth Assembly Mapping (SAM), to compute real walking maps with low isometric distortion for large-scale virtual scenes. First, the input virtual scene is decomposed into a set of smaller local patches. Then, a group of local patches is mapped together into a real workspace by minimizing a low isometric distortion energy with smoothness constraints between the adjacent patches. All local patches are mapped and assembled one by one to obtain a complete map. Finally, a global optimization is adopted to further reduce the distortion throughout the entire map. Our method easily handles teleportation technique by computing maps of individual regions and assembling them with teleporter conformity constraints. A large number of experiments, including formative user studies and comparisons, have shown that our method succeeds in generating high-quality real walking maps from large-scale virtual scenes to small real workspaces and is demonstrably superior to state-of-the-art methods."}
{"_id": "a1a197449aeca81a39cb2213b41cef4831d6983e", "title": "Large-Scale Hierarchical Text Classification with Recursively Regularized Deep Graph-CNN", "text": "Text classification to a hierarchical taxonomy of topics is a common and practical problem. Traditional approaches simply use bag-of-words and have achieved good results. However, when there are a lot of labels with different topical granularities, bagof-words representation may not be enough. Deep learning models have been proven to be effective to automatically learn different levels of representations for image data. It is interesting to study what is the best way to represent texts. In this paper, we propose a graph-CNN based deep learning model to first convert texts to graph-of-words, and then use graph convolution operations to convolve the word graph. Graph-ofwords representation of texts has the advantage of capturing non-consecutive and long-distance semantics. CNN models have the advantage of learning different level of semantics. To further leverage the hierarchy of labels, we regularize the deep architecture with the dependency among labels. Our results on both RCV1 and NYTimes datasets show that we can significantly improve large-scale hierarchical text classification over traditional hierarchical text classification and existing deep models."}
{"_id": "5b156a3749b4110e328bcf2b147dc3d71ac09fc7", "title": "Calibrated recommendations", "text": "When a user has watched, say, 70 romance movies and 30 action movies, then it is reasonable to expect the personalized list of recommended movies to be comprised of about 70% romance and 30% action movies as well. This important property is known as calibration, and recently received renewed attention in the context of fairness in machine learning. In the recommended list of items, calibration ensures that the various (past) areas of interest of a user are reflected with their corresponding proportions. Calibration is especially important in light of the fact that recommender systems optimized toward accuracy (e.g., ranking metrics) in the usual offline-setting can easily lead to recommendations where the lesser interests of a user get crowded out by the user's main interests-which we show empirically as well as in thought-experiments. This can be prevented by calibrated recommendations. To this end, we outline metrics for quantifying the degree of calibration, as well as a simple yet effective re-ranking algorithm for post-processing the output of recommender systems."}
{"_id": "eac5856f229e57ffbd74388d81e3c0a1384c0193", "title": "Improved Bat Algorithm (IBA) on Continuous Optimization Problems", "text": "\u2014Heuristic optimization techniques have became very popular techniques and have widespread usage areas. Since they do not include mathematical terms, heuristic methods have been carried out on many fields by researchers. Main purpose of these techniques is to achieve good performance on problem of interest. One of these techniques is Bat Algorithm (BA). BA is an optimization algorithm based on echolocation characteristic of bats and developed by mimics of bats' foraging behavior. In this study, exploration and exploitation mechanisms of BA are improved by three modifications. Performance of proposed and standard version of algorithm is compared on ten basic benchmark test problems. Results indicate that proposed version is better than standard version in terms of solution quality."}
{"_id": "6cef5f34424f6027617082d351333a095c4162d7", "title": "Classifying sanctions and designing a conceptual sanctioning process model for socio-technical systems", "text": "We understand a socio-technical system (STS) as a cyber-physical system in which two or more autonomous parties interact via or about technical elements, including the parties\u2019 resources and actions. As information technology begins to pervade every corner of human life, STSs are becoming ever more common, and the challenge of governing STSs is becoming increasingly important. We advocate a normative basis for governance, wherein norms represent the standards of correct behaviour that each party in an STS expects from others. A major benefit of focussing on norms is that they provide a socially realistic view of interaction among autonomous parties that abstracts low-level implementation details. Overlaid on norms is the notion of a sanction as a negative or positive reaction to potentially any violation of or compliance with an expectation. Although norms have been well studied as regards governance for STSs, sanctions have not. Our understanding and usage of norms is inadequate for the purposes of governance unless we incorporate a comprehensive representation of sanctions. We address the aforementioned gap by proposing (i) a sanction typology that reflects the relevant features of sanctions, and (ii) a conceptual sanctioning process model providing a functional structure for sanctioning in STS. We demonstrate our contributions via a motivating scenario from the domain of renewable energy trading."}
{"_id": "898aa1b0f3d4dc6eaae9ef12f8a302cf941c0949", "title": "Classification of learning outcomes : evidence from the computer games literature", "text": "Following up on an earlier issue of The Curriculum Journal (Vol. 16, No. 1), this article focuses on learning outcomes in the context of video games. Learning outcomes are viewed from two theoretical frameworks: Kirkpatrick\u2019s levels of evaluation and the CRESST model of learning. These are used to analyse the outcomes claimed in journal articles that report empirical work, indicating the usefulness of the frameworks, and the necessity to consider the role of affective learning. The article ends with some comments on the relationship of instructional design to effective games and learning outcomes."}
{"_id": "1c3bcbd653aab0b9a4bd91b190669e97c6b21a16", "title": "Keyword extraction based on tf/idf for Chinese news document", "text": "Keyword extraction is an important research topic of information retrieval. This paper gave the specification of keywords in Chinese news documents based on analyzing linguistic characteristics of news documents and then proposed a new keyword extraction method based on tf/idf with multi-strategies. The approach selected candidate keywords of uni-, bi-and tri- grams, and then defines the features according to their morphological characters and context information. Moreover, the paper proposed several strategies to amend the incomplete words gotten from the word segmentation and found unknown potential keywords in news documents. Experimental results show that our proposed method can significantly outperform the baseline method. We also applied it to retrospective event detection. Experimental results show that the accuracy and efficiency of news retrospective event detection can be significantly improved."}
{"_id": "66bb5d71f9a241bb9cfaf03708931b8ba856edf1", "title": "Food-Bridging: A New Network Construction to Unveil the Principles of Cooking", "text": "In this manuscript, we propose, analyze, and discuss a possible new principle behind traditional cuisine: the Food-bridging hypothesis and its comparison with the food-pairing hypothesis using the same dataset and graphical models employed in the food-pairing study by Ahn et al. (2011). The Food-bridging hypothesis assumes that if two ingredients do not share a strong molecular or empirical affinity, they may become affine through a chain of pairwise affinities. That is, in a graphical model as employed by Ahn et al., a chain represents a path that joints the two ingredients, the shortest path represents the strongest pairwise chain of affinities between the two ingredients. Food-pairing and Food-bridging are different hypotheses that may describe possible mechanisms behind the recipes of traditional cuisines. Food-pairing intensifies flavor by mixing ingredients in a recipe with similar chemical compounds, and food-bridging smoothes contrast between ingredients. Both food-pairing and food-bridging are observed in traditional cuisines, as shown in this work. We observed four classes of cuisines according to food-pairing and food-bridging: (1) East Asian cuisines, at one extreme, tend to avoid food-pairing as well as food-bridging; and (4) Latin American cuisines, at the other extreme, follow both principles. For the two middle classes: (2) Southeastern Asian cuisines, avoid foodpairing and follow food-bridging; and (3) Western cuisines, follow food-pairing and avoid food-bridging."}
{"_id": "496a904b7663138b571c79f6994e5097f4a4f622", "title": "Drug Treatment of Sleep Disorders", "text": "Sleep is a complex behavior that cyclically alternates with wakefulness and is fundamental for both mental and physical wellness. During normal sleep the body acquires a specific posture; any active behavior is absent and mental activity typically oscillates between a state of \u201closs of consciousness\u201d and the experience of dreaming. Sleep is, by itself, a cyclical process characterized by the alternation of two phases, non-REM (NREM) sleep and \u201crapid eye movement\u201d (REM) sleep, during which the typical synchronized electroencephalographic (EEG) pattern of NREM sleep is substituted by a \u201cparadoxical\u201d EEG desynchronization. NREM sleep is a state of minimal energy expenditure and motor activity, during which cardiorespiratory and thermoregulatory variables are driven by the autonomic nervous system at a lower and more stable level compared to wakefulness. During REM sleep, beyond the occurrence of REMs, posture control is lost and dreams become more vivid. Furthermore, autonomic activity is highly unstable leading to centrally driven surges in heart rate and blood pressure, breathing becomes irregular, and thermoregulation is suspended or depressed, suggesting a derangement of the integrative regulation of physiological functions. Overall sleep quality is one of the most important parameters considered for the subjective assessment of quality of life and general health status. However, although sleep function(s) is intuitively associated with some kind of rest that seems to be mostly required by the central nervous system, and many theories on sleep function have been proposed, a full comprehension of sleep function has not yet been achieved and is probably not imminent. C. Berteotti \u2022 M. Cerri \u2022 M. Luppi \u2022 A. Silvani \u2022 R. Amici (*) Department of Biomedical and Neuromotor Sciences\u2014Physiology, Alma Mater Studiorum-University of Bologna, Piazza P.ta S. Donato, 2, 40126 Bologna, Italy e-mail: chiara.berteotti@unibo.it; matteo.cerri@unibo.it; marco.luppi@unibo.it; alessandro.silvani3@unibo.it; roberto.amici@unibo.it; http://www.unibo.it/Faculty/default. aspx?UPN1\u20444roberto.amici%40unibo.it \u00a9 Springer International Publishing Switzerland 2015 A. Guglietta (ed.), Drug Treatment of Sleep Disorders, Milestones in Drug Therapy, DOI 10.1007/978-3-319-11514-6_1 3 1 Neurophysiology and Neurobiology of Sleep"}
{"_id": "a795e6e4e4fc3b1274974c50edb62b8757292b12", "title": "Inductance and Resistance Calculations in Three-Dimensional Packaging Using Cylindrical Conduction-Mode Basis Functions", "text": "For the successful electrical design of system-in-package, this paper proposes an efficient method for extracting wideband resistance and inductance from a large number of 3-D interconnections. The proposed method uses the modal equivalent network from the electric field integral equation with cylindrical conduction-mode basis function, which reduces the matrix size for large 3-D interconnection problems. Additional enhancement schemes proposed further reduce the cost for computing the partial inductances. Therefore, the method discussed in this paper can be used to construct accurate models of a large number of 3-D interconnection structures such as more than 100 bonding wires used for stacking chips."}
{"_id": "6c94762f0a3be75d664ca063841a9d9c709153d8", "title": "Aligning English Strings with Abstract Meaning Representation Graphs", "text": "We align pairs of English sentences and corresponding Abstract Meaning Representations (AMR), at the token level. Such alignments will be useful for downstream extraction of semantic interpretation and generation rules. Our method involves linearizing AMR structures and performing symmetrized EM training. We obtain 86.5% and 83.1% alignment F score on development and test sets."}
{"_id": "6bb6bcc39bc0fece6ec75dcd1d011a820f3f97c2", "title": "An Impedance and Multi-Wavelength Near-Infrared Spectroscopy IC for Non-Invasive Blood Glucose Estimation", "text": "A multi-modal spectroscopy IC combining impedance spectroscopy (IMPS) and multi-wavelength near-infrared spectroscopy (mNIRS) is proposed for high precision non-invasive glucose level estimation. A combination of IMPS and mNIRS can compensate for the glucose estimation error to improve its accuracy. The IMPS circuit measures dielectric characteristics of the tissue using the RLC resonant frequency and the resonant impedance to estimate the glucose level. To accurately find resonant frequency, a 2-step frequency sweep sinusoidal oscillator (FSSO) is proposed: 1) 8-level coarse frequency switching (fSTEP = 9.4 kHz) in 10-76 kHz, and 2) fine analog frequency sweep in the range of 18.9 kHz. During the frequency sweep, the adaptive gain control loop stabilizes the output voltage swing (400 mVp-p). To improve accuracy of mNIRS, three wavelengths, 850 nm, 950 nm, and 1,300 nm, are used. For highly accurate glucose estimation, the measurement data of the IMPS and mNIRS are combined by an artificial neural network (ANN) in external DSP. The proposed ANN method reduces the mean absolute relative difference to 8.3% from 15% of IMPS, and 15-20% of mNIRS in 80-180 mg/dL blood glucose level. The proposed multi-modal spectroscopy IC occupies 12.5 mm 2 in a 0.18 \u03bcm 1P6M CMOS technology and dissipates a peak power of 38 mW with the maximum radiant emitting power of 12.1 mW."}
{"_id": "44f320e22e9f0641e74cf5b37f991e5dc294022b", "title": "Goalie Actions Attackers and Defenders Goalie Role Allocation Layer Action Selection Layer Vector Field Action Shoot Action Action Selection Layer Vector Field Action Shoot Action Action Selection Layer Vector Field Action Shoot Action Action Selection Layer Vector Field Action", "text": "The robot soccer system is being used as a test bed to develop the next generation of field robots. In the multiagent system, action selection is important for the cooperation and coordination among agents. There are many techniques in choosing a proper action of the agent. As the environment is dynamic, reinforcement learning is more suitable than supervised learning. Reinforcement learning is based on trial and error method through experience. Because of this, reinforcement learning is applied to many practical problems. However, straightforward application of reinforcement learning algorithm may not successfully scale up to more complex multi-agent learning problems. To solve this problem, modular Q-learning is employed for multi-agent learning in reinforcement methods. A modified uni-vector field is used for robot navigation. This paper discusses the Y2K2 NaroSot(Nano-Robot World Cup Soccer Tournament) robot soccer system."}
{"_id": "3d538490d2882ef8f492529762b0e3a7ea31774c", "title": "An Introduction to Systems Biology: Design Principles of Biological Circuits", "text": "Bernhard \u00d8. Palsson Cambridge U. Press, New York, 2006. $75.00 (322 pp.). ISBN 978-0-521-85903-5 Reviewed by Nigel Goldenfeld For some of us physicists, the recent explosion of interest in biology by our colleagues is something of a mystery\u2014 perhaps even a betrayal. Biology seems to offer none of the aesthetic elegance and beauty that is the hallmark of such cherished creations as the general theory of relativity and the standard model of strong, weak, and electromagnetic interactions. Neither does biology provide the breathtaking precision with which physical experiment and theory can agree in such phenomena as quantum electrodynamics or Josephson effects. And don\u2019t even think about summarizing all of biology on a T-shirt; biology textbooks, unlike those for physics, generally describe reactions and structures in monstrous detail, without an equation in sight. So why the allure of the subject among physicists? And what meaningful contributions can physical scientists make to the field? One answer is the emergence of a new science of biological circuits, or systems biology, which has become advanced enough, both scientifically and promotionally, that textbooks are warranted. Uri Alon\u2019s An Introduction to Systems Biology: Design Principles of Biological Circuits and Bernhard \u00d8. Palsson\u2019s Systems Biology: Properties of Reconstructed Networks are two recent offerings by leaders in this new field. These texts deserve serious attention from any quantitative scientist or physicist who hopes to learn about modern biology. Both books are well written; each has a different focus. Thus they could be used together to create an accessible course for beginners, with the instructor\u2019s choice of topics from each book. The books are a welcome departure from the typical biology textbook. First, they emphasize quantitative modeling rather than nomenclature; second, they concentrate on essentials of the modeling rather than on exhaustive lists of components that offer no meaningful integration; and third, the approaches described in the texts make nontrivial quantitative predictions that can be and have been verified. Alon\u2019s book is the better place for physicists to start. It assumes no prior knowledge of or even interest in biology. Yet right from chapter 1 the author succeeds in explaining in an intellectually exciting way what the cell does and what degrees of freedom enable it to function. The book proceeds with detailed discussions of some of the key network motifs, circuitelement designs that are believed to be repeated over and over again in biological systems. Those motifs include autoregulation, feed-forward loops, positive-feedback loops, and kinetic proofreading. The discussions in all cases introduce the particular motif, use simple differential equations in most cases as a way to model it, and offer plenty of comparisons with experimental data. Alon does not forget to ask a question physicists obsess over: Are any general physical principles at work in biology? Alon does not overplay that point but has included a nice chapter on robustness. He focuses on concrete examples such as chemotaxis and developmental pattern formation. 
He also devotes a chapter to explaining the precision with which biological processes can so accurately identify the right molecules amidst a sea of potentially confusing \u201cwrong\u201d targets. The basis of this process is kinetic proofreading, a concept first presented by physicist John Hopfield, and one that is based on simple nonequilibrium kinetics. Alon ends his book with an epilogue on simplicity in biology. He draws the detailed strands together into an appealing and inspiring overview of biology. Without the preceding attention to biological detail, this exercise would be vacuous. One final aspect of An Introduction to Systems Biology that must be mentioned is the wonderful set of exercises that accompany each chapter. The exercises range from elementary to advanced research areas. For example, chapter 9 contains problems on the errorminimization capabilities of the genetic code, which I and several other physicists, including Alon, are still elucidating. I would have liked to see evolution play a more central role in the text\u2014for example, in a chapter of its own that amplifies the important comments in the epilogue about the evolution of modularity. Perhaps when it\u2019s time for the second edition, there will be enough concrete material to further justify including the topic. Palsson\u2019s Systems Biology presents a completely different perspective. The book focuses less on principles and more on techniques for reconstructing detailed, elaborate, and predictive circuit models of biological systems. During the past 10 years these techniques have become feasible for entire organisms, thanks to the availability of fully sequenced genomes. It is now possible to map out the metabolic networks of bacteria such as Escherichia coli and simpler eukaryotes. However, other networks in organisms control other"}
{"_id": "9b4bf3c7b887dd54777c5a6c5ce85ec08a5f2555", "title": "Ultra Low Power Bioelectronics Fundamentals , Biomedical Applications , and Bio-inspired Systems", "text": "This book provides, for the first time, a broad and deep treatment of the fields of both ultra low power electronics and bioelectronics. It discusses fundamental principles and circuits for ultra low power electronic design and their applications in biomedical systems. It also discusses how ultra energy-efficient cellular and neural systems in biology can inspire revolutionary low power architectures in mixed-signal and RF electronics. The book presents a unique, unifying view of ultra low power analog and digital electronics and emphasizes the use of the ultra energy-efficient subthreshold regime of transistor operation in both. Chapters on batteries, energy harvesting, and the future of energy provide an understanding of fundamental relationships between energy use and energy generation at small scales and at large scales. A wealth of insights and examples from brain implants, cochlear implants, biomolecular sensing, cardiac devices, and bio-inspired systems make the book useful and engaging for students and practicing engineers."}
{"_id": "0c3e11581d7eedfbae6ce9aa5729599918a606d9", "title": "An analog bionic ear processor with zero-crossing detection", "text": "A 75 dB 251 /spl mu/W analog speech processor is described that preserves the performance, robustness, and programmability needed for deaf patients at a reduced power consumption compared to that of implementations with A/D and DSP. It also provides zero-crossing outputs for stimulation strategies that use phase information to improve performance."}
{"_id": "32a5f8dd328d189f5e8b9812823da89ef47cd1c5", "title": "Genomics of cellulosic biofuels", "text": "The development of alternatives to fossil fuels as an energy source is an urgent global priority. Cellulosic biomass has the potential to contribute to meeting the demand for liquid fuel, but land-use requirements and process inefficiencies represent hurdles for large-scale deployment of biomass-to-biofuel technologies. Genomic information gathered from across the biosphere, including potential energy crops and microorganisms able to break down biomass, will be vital for improving the prospects of significant cellulosic biofuel production."}
{"_id": "a66bf021c79ba2e755ad3e9558c06db05cd5589b", "title": "A 10-nW 12-bit accurate analog storage cell with 10-aA leakage", "text": "Medium-term analog storage offers a compact, accurate, and low-power method of implementing temporary local memory that can be useful in adaptive circuit applications. The performance of these cells is characterized by the sampling accuracy and voltage droop that can be achieved with a given level of die area and power. Hand calculations suggest past implementations have not achieved minimum voltage droop due to uncompensated MOS leakage mechanisms. In this paper, the dominant sources of MOS leakage are experimentally characterized in a standard 1.5-/spl mu/m CMOS process using an on-chip current integration technique, focusing specifically on the 1 fA to 1 aA current range. These measurements reveal an accumulation-mode source-drain coupling mechanism that can easily dominate diode leakage under certain bias conditions and may have limited previous designs. A simple rule-of-thumb is offered for avoiding this leakage effect, leading to a novel ultra-low leakage switch topology. A differential storage cell incorporating this new switch achieves an average leakage of 10 aA at room temperature, an 8/spl times/ reduction over past designs. The cell loses one bit of voltage accuracy, 700 /spl mu/V on a 12-bit scale and 11.3 mV on an 8-bit scale, in 3.3 and 54 min, respectively. This represents a 15/spl times/ increase in hold time at these voltage accuracies over the lowest leakage cell to date, in only 92% of the area. Since the leakage is independent of amplifier bias, the cell can operate on as little as 10 nW of power. Initial measurements also indicate the switch's leakage decreases with the square of process feature size."}
{"_id": "103d41f7829d8df58c85d2c1739dc2b3deab6bfe", "title": "Electric Motor Drive Selection Issues for HEV Propulsion Systems: A Comparative Study", "text": "This paper describes a comparative study allowing the selection of the most appropriate electric-propulsion system for a parallel hybrid electric vehicle (HEV). This paper is based on an exhaustive review of the state of the art and on an effective comparison of the performances of the four main electric-propulsion systems, namely the dc motor, the induction motor (IM), the permanent magnet synchronous motor, and the switched reluctance motor. The main conclusion drawn by the proposed comparative study is that it is the cage IM that better fulfills the major requirements of the HEV electric propulsion"}
{"_id": "179be26fc862cd87815c32959d856e7f988de08d", "title": "Blockchain and its uses", "text": "Blockchain, the technology behind most cryptocurrencies that exist today, offers a paradigm shifting technology that has the potential to transform the way that we record and verify events on the Internet. By offering a decentralized, immutable, community verified record of transactions, regardless of what these transactions represent, blockchain technology promises to transform many industries. In this paper, we provide a survey of this very important technology and discuss its possible use-cases and its effect on society. The purpose of this paper is to familiarize the reader with the current state-of-the-art in the blockchain world both in terms of technology and social impact."}
{"_id": "2327c389267019508bbb15788b5caa7bdda30998", "title": "ShapeNet: An Information-Rich 3D Model Repository", "text": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans."}
{"_id": "328a3e8811a65ef301eda423800fefd9a10fc196", "title": "A New Aggregate Local Mobility (ALM) Clustering Algorithm for VANETs", "text": "We present a beacon-based clustering algorithm aimed at prolonging the cluster lifetime in VANETs. We use a new aggregate local mobility criterion to decide upon cluster re-organisation. The scheme incorporates a contention method to avoid triggering frequent re-organisations when two clusterheads encounter each other for a short period of time. Simulation results show a significant improvement of cluster lifetime and reduced node state/role changes compared to previous popular clustering algorithms."}
{"_id": "82e82ec9b2daf1a0213f22d8ff5cc7d17f75b2e0", "title": "Kinematic analysis of electroactive polymer actuators as soft and smart structures with more DoF than inputs", "text": "Electroactive polymer (EAP) actuators have been attracting the attention of researchers due to their muscle-like behaviour and unusual properties. Several modelling methods have been proposed to understand their mechanical, chemical, electrical behaviours or `electro-chemo-mechanical' behaviour. However, estimating the whole shape or configuration of the EAP actuators has always been challenging due to their highly non-linear bending behaviour. This paper reports on an effective method to estimate the whole shape deflection of the EAP actuators by employing a so-called backbone approach. Tri-layer configured polypyrrole (PPy) based EAP actuators were used as a soft and smart structure with more degrees of freedom than its input. After deriving the inverse kinematic model of the actuator, its complete shape is estimated by solving the inverse kinematic model with an angle optimization (AngleOPT) method. The experimental results and numerical results have demonstrated the effectiveness of the method in estimating the highly non-linear bending behaviour of the PPy actuators and applicability of this modelling approach to other EAP actuators."}
{"_id": "ef08f531cbfffb93d0d0fe49c0aa76c80b8a6962", "title": "A Bidirectional Bridge Modular Switched-Capacitor-Based Power Electronics Transformer", "text": "This paper proposes a new bridge modular switched-capacitor-based ac\u2013ac converter, which is intended to replace conventional transformers in commercial and residential applications, especially for microgrids. The main advantages of the proposed converter lie in high efficiency and power factor, bidirectional operation, and capacitor voltage self-balancing ability. Another distinct merit is its symmetric construction, with which the voltage ripple of charge/pump capacitors can be cancelled out, and the output voltage will be smooth with significant ripple reduction. It is also noteworthy that the symmetric construction ensures that the two symmetric parts can work independently under a simplified control strategy. In addition, the modular configuration contributes to convenient voltage extension. Compared with the reported flying-capacitor switched-capacitor ac\u2013ac converter, the proposed converter requires fewer switches and capacitors per unit of voltage conversion, thus suffering lower voltage stress. To demonstrate the performance of the converter, a prototype with 600-W 220-Vrms high-side voltage, 55-Vrms low-side voltage, and a switching frequency of 100 kHz was designed and fabricated. The relevant experimental results are reported herein. The maximum and rated power efficiency obtained were 95.9% and 94.4%, respectively."}
{"_id": "1e507a62327dce8f747b1cc2a094d2040eed120b", "title": "Symbol Interdependency in Symbolic and Embodied Cognition", "text": "Whether computational algorithms such as latent semantic analysis (LSA) can both extract meaning from language and advance theories of human cognition has become a topic of debate in cognitive science, whereby accounts of symbolic cognition and embodied cognition are often contrasted. Albeit for different reasons, in both accounts the importance of statistical regularities in linguistic surface structure tends to be underestimated. The current article gives an overview of the symbolic and embodied cognition accounts and shows how meaning induction attributed to a specific statistical process or to activation of embodied representations should be attributed to language itself. Specifically, the performance of LSA can be attributed to the linguistic surface structure, more than special characteristics of the algorithm, and embodiment findings attributed to perceptual simulations can be explained by distributional linguistic information."}
{"_id": "df2ae4effe67763a3505a30c2b05df6a3efbb93f", "title": "Challenges and business models for mobile location-based services and advertising", "text": "Mobile advertising will become more pervasive and profitable, but not before addressing key technical and business challenges."}
{"_id": "9f094341bea610a10346f072bf865cb550a1f1c1", "title": "Recognition and volume estimation of food intake using a mobile device", "text": "We present a system that improves accuracy of food intake assessment using computer vision techniques. Traditional dietetic method suffers from the drawback of either inaccurate assessment or complex lab measurement. Our solution is to use a mobile phone to capture images of foods, recognize food types, estimate their respective volumes and finally return quantitative nutrition information. Automated and accurate food recognition presents the following challenges. First, there exist a large variety of food types that people consume in everyday life. Second, a single category of food may contain large variations due to different ways of preparation. Also, diverse lighting conditions may lead to varying visual appearance of foods. All of these pose a challenge to the state of the art recognition approaches. Moreover, the low quality images captured using cellphones make the task of 3D reconstruction difficult. In this paper, we combine several vision techniques (visual recognition and 3D reconstruction) to achieve quantitative food intake estimation. Evaluation of both recognition and reconstruction is provided in the experimental results."}
{"_id": "56a3fc8fc92cd363b008110ec15a24a3394e1d5e", "title": "Automatic Array Alignment in Data-Parallel Programs", "text": "Data-parallel languages like Fortran 90 express parallelism in the form of operations on data aggregates such as arrays. Misalignment of the operands of an array operation can reduce program performance on a distributed-memory parallel machine by requiring nonlocal data accesses. Determining array alignments that reduce communication is therefore a key issue in compiling such languages.\nWe present a framework for the automatic determination of array alignments in data-parallel languages such as Fortran 90. Our language model handles array sectioning, reductions, spreads, transpositions, and masked operations. We decompose alignment functions into three constituents: axis, stride, and offset. For each of these subproblems, we show how to solve the alignment problem for a basic block of code, possibly containing common subexpressions. Alignments are generated for all array objects in the code, both named program variables and intermediate results. The alignments obtained by our algorithms are more general than those provided by the \u201cowner-computes\u201d rule. Finally, we present some ideas for dealing with control flow, replication, and dynamic alignments that depend on loop induction variables."}
{"_id": "c18e0c41bce120dd384a495adcb5b7c2ca1f8ecd", "title": "Subcutaneous and visceral adipose tissue: their relation to the metabolic syndrome.", "text": "Methods for assessment, e.g., anthropometric indicators and imaging techniques, of several phenotypes of human obesity, with special reference to abdominal fat content, have been evaluated. The correlation of fat distribution with age, gender, total body fat, energy balance, adipose tissue lipoprotein lipase and lipolytic activity, adipose tissue receptors, and genetic characteristics are discussed. Several secreted or expressed factors in the adipocyte are evaluated in the context of fat tissue localization. The body fat distribution and the metabolic profile in nonobese and obese individuals is discussed relative to lipolysis, antilypolysis and lipogenesis, insulin sensitivity, and glucose, lipid, and protein metabolism. Finally, the endocrine regulation of abdominal visceral fat in comparison with the adipose tissue localized in other areas is presented."}
{"_id": "537db6c2984514d92a754a591841e2e20845985a", "title": "The language of mental health problems in social media", "text": "Online social media, such as Reddit, has become an important resource to share personal experiences and communicate with others. Among other personal information, some social media users communicate about mental health problems they are experiencing, with the intention of getting advice, support or empathy from other users. Here, we investigate the language of Reddit posts specific to mental health, to define linguistic characteristics that could be helpful for further applications. The latter include attempting to identify posts that need urgent attention due to their nature, e.g. when someone announces their intentions of ending their life by suicide or harming others. Our results show that there are a variety of linguistic features that are discriminative across mental health user communities and that can be further exploited in subsequent classification tasks. Furthermore, while negative sentiment is almost uniformly expressed across the entire data set, we demonstrate that there are also condition-specific vocabularies used in social media to communicate about particular disorders. Source code and related materials are available from: https: //github.com/gkotsis/ reddit-mental-health."}
{"_id": "cd0105649926af00e1f8fe4d32438ea2141628e8", "title": "CipherXRay: Exposing Cryptographic Operations and Transient Secrets from Monitored Binary Execution", "text": "Malwares are becoming increasingly stealthy, more and more malwares are using cryptographic algorithms (e.g., packing, encrypting C&C communication) to protect themselves from being analyzed. The use of cryptographic algorithms and truly transient cryptographic secrets inside the malware binary imposes a key obstacle to effective malware analysis and defense. To enable more effective malware analysis, forensics, and reverse engineering, we have developed CipherXRay - a novel binary analysis framework that can automatically identify and recover the cryptographic operations and transient secrets from the execution of potentially obfuscated binary executables. Based on the avalanche effect of cryptographic functions, CipherXRay is able to accurately pinpoint the boundary of cryptographic operation and recover truly transient cryptographic secrets that only exist in memory for one instant in between multiple nested cryptographic operations. CipherXRay can further identify certain operation modes (e.g., ECB, CBC, CFB) of the identified block cipher and tell whether the identified block cipher operation is encryption or decryption in certain cases. We have empirically validated CipherXRay with OpenSSL, popular password safe KeePassX, the ciphers used by malware Stuxnet, Kraken and Agobot, and a number of third party softwares with built-in compression and checksum. CipherXRay is able to identify various cryptographic operations and recover cryptographic secrets that exist in memory for only a few microseconds. Our results demonstrate that current software implementations of cryptographic algorithms hardly achieve any secrecy if their execution can be monitored."}
{"_id": "4ba0f4f9b00343c96ed46697d5bf5221f6bd0d0c", "title": "DDoS: threats and mitigation", "text": "According to a report from Prolexic, one of the first and largest companies offering DDoS mitigation services, \u2060 attack traffic rose 66% over the course of a year (to Q3 2011). 1 Network-layer attacks accounted for 83%, the rest being application-layer attacks. The average duration was 1.4 days and the average bandwidth consumed was 1.5Gbps. The size of attacks is getting bigger, too. In July 2011, Prolexic announced it had mitigated what it believed to be the largest packet-per-second DDoS attack ever seen in Asia. Consisting of SYN and ICMP floods, the attack deployed 176,000 bots (compared to the 5,000-10,000 bots more normally seen by Prolexic) to generate 25 million packets per second. According to the company, the majority of high-end border rout-ers typically forward 70,000 packets per second. It mitigated the attack by distributing traffic among its Tier 1 carrier partners and scrubbing centres. Ben Petro, senior VP network intelligence & availability at Verisign, traces the rise of DDoS attacks back another year. He says that for years there was little awareness of the problem but that, \" 2010 was a dramatic shift \u2013 not only in the size, scale and trajectory of DDoS, but also in its proliferation and the number of different types of organisation that were hit. \" In the years 2006-2008, he adds, the average attack was somewhere around 40Mbps. \" And then, all of a sudden, coming in 2010 we started to see 2Gbps, then 5Gbps, then 8Gbps and 15Gbps attacks, culminating in the largest that we've seen coming in at 84Gbps sustained for a week and a half. That is an incredible amount of traffic when you get down to packets or queries per second \u2013 you're over the hundreds of thousands of queries per second per location for Verisign, and we have 165 locations. So it's an enormous volume. \" The rising popularity of DDoS attacks may be connected directly to their effectiveness. Paul Sop, CTO at Prolexic, says, \" people get creative and they use and invent new ways and reasons to use DDoS. When an idea catches on in the industry there's a tipping point \u2013 people say, hey here's a type of attack we can launch and it's very likely we won't go to jail unless we're pretty dumb about it. \" He adds: \" It is like asymmetric warfare \u2013 the attackers have so much advantage in terms \u2026"}
{"_id": "d8bfd7d636183ebbccba4c353d45a76fe8ed6cde", "title": "Software-defined optical networks (SDONs): a survey", "text": "This paper gives an overview of software-defined optical networks (SDONs). It explains the general concepts on software-defined networks (SDNs), their relationship with network function virtualization, and also about OpenFlow, which is a pioneer protocol for SDNs. It then explains the benefits and challenges of extending SDNs to multilayer optical networks, including flexible grid and elastic optical networks, and how it compares to generalized multi-protocol label switching for implementing a unified control plane. An overview on the industry and research efforts on SDON standardization and implementation is given next, to bring the reader up to speed with the current state of the art in this field. Finally, the paper outlines the benefits achieved by SDONs for network operators, and also some of the important and relevant research problems that need to be addressed."}
{"_id": "6b9b6520e841575fa093acac9db4444a94647912", "title": "OCR for bilingual documents using language modeling", "text": "Script based features are highly discriminative for text segmentation and recognition. Thus they are widely used in Optical Character Recognition(OCR) problems. But usage of script dependent features restricts the adaptation of such architectures directly for another script. With script independent systems, this problem can be solved to a certain extent for monolingual documents. But the problem aggravates in case of multilingual documents as it is very difficult for a single classifier to learn many scripts. Generally a script identification module identifies text segments and accordingly the script-dependent classifier is selected. This paper presents a unified framework of language model and multiple preprocessing hypotheses for word recognition from bilingual document images. Prior to text recognition, preprocessing steps such as binarization and segmentation are required for ease of recognition. But these steps induce huge combinatorial error propagating to final recognition accuracy. In this paper we use multiple preprocessing routines as alternate hypotheses and use a language model to verify each alternative and choose the best recognized sequence. We test this architecture for word recognition of Kannada-English and Telugu-English bilingual documents and achieved better recognition rates than single methods using same classifier."}
{"_id": "0e6e22b67ff5fb01f766e77fe32472b0387ddeff", "title": "A Use Case Identification Framework and Use Case Canvas for identifying and exploring relevant Blockchain opportunities", "text": "Blockchain is a new, foundational technology with a vast amount of application possibilities. However, practitioners might not be aware of which use cases in their own business model might benefit from blockchain technology. To aid them in analyzing their business regarding blockchain suitability, this paper introduces a use case identification framework for blockchain and a use case canvas. In the development process they have been evaluated with internal and external reviews in order to offer the best possible guidance. In combination they offer an analysis framework to help practitioners decide which use cases they should take into account for blockchain technology, which characteristics these blockchain implementations would have, and which specific advantages they would offer."}
{"_id": "0ba5ae30a1d291e2a69f2305670f81c6da981328", "title": "Neural Adaptation Layers for Cross-domain Named Entity Recognition", "text": "Recent research efforts have shown that neural architectures can be effective in conventional information extraction tasks such as named entity recognition, yielding state-of-the-art results on standard newswire datasets. However, despite significant resources required for training such models, the performance of a model trained on one domain typically degrades dramatically when applied to a different domain, yet extracting entities from new emerging domains such as social media can be of significant interest. In this paper, we empirically investigate effective methods for conveniently adapting an existing, well-trained neural NER model for a new domain. Unlike existing approaches, we propose lightweight yet effective methods for performing domain adaptation for neural models. Specifically, we introduce adaptation layers on top of existing neural architectures, where no re-training using the source domain data is required. We conduct extensive empirical studies and show that our approach significantly outperforms stateof-the-art methods."}
{"_id": "f44ee667ff0e9721978d0d1e9a84c9f2f42d3656", "title": "Boosting Optical Character Recognition: A Super-Resolution Approach", "text": "Text image super-resolution is a challenging yet open research problem in the computer vision community. In particular, low-resolution images hamper the performance of typical optical character recognition (OCR) systems. In this article, we summarize our entry to the ICDAR2015 Competition on Text Image Super-Resolution. Experiments are based on the provided ICDAR2015 TextSR dataset [3] and the released Tesseract-OCR 3.02 system [1]. We report that our winning entry of text image super-resolution framework has largely improved the OCR performance with low-resolution images used as input, reaching an OCR accuracy score of 77.19%, which is comparable with that of using the original high-resolution images (78.80%)."}
{"_id": "6841c2cfcc9ba93e8b9c43a6a717f15079922ab9", "title": "Preventing Cell Phone Intrusion and Theft using Biometrics", "text": "Most cell phones use a password, PIN, or visual pattern to secure the phone. With these types of security methods being used, there is much vulnerability. Another alternative is biometric authentication. Biometric security systems have been researched for many years. Some mobile manufacturers have implemented fingerprint scanners into their phones, such as the old Fujitsu F505i [7] and the current Motorola Atrix. Since theft of cell phones is becoming more common every day, there is a real need for a security system that not only protects the data, but the phone itself. It is proposed through this research that a biometric security system be the alternative to knowledge-based and password-based authentication. Furthermore, a device dongle must be implemented into this infrastructure to establish a reliable security system that deters theft for the majority; biometrics alone is not sufficient. Cell phones need power and must be charged almost daily. A biometric phone charger that acts as a dongle with a solid state relay, will be presented as a viable solution to theft in this research. Additionally, it will be shown through the results of this research that a system dependant only on biometrics is unreliable and unsecure. Essentially, a mobile security system that combines biometrics with dongle technology is believed to be the ideal solution for limiting the black market of stolen cell phones; without the biometric charger/dongle, the stolen cell phone would be rendered useless."}
{"_id": "7590497e4c3838bce3d4563e9cb37b040ca12f29", "title": "Cuff-less, non-invasive and continuous blood pressure monitoring using indirect methods", "text": "Cuff-based methods for measuring blood pressure (BP) have shown some limitations particularly when continuous measurement is required. Moreover, the inflation of the cuff is one of the main disturbing factors as expressed among patients. In this study, a promising approach based on changes in pulse transit time (PTT) for continuous, non-invasive and indirect measurement of BP is proposed to overcome the shortcomings of cuff-based methods. The PTT can be measured using plethysmography (PPG) waveform and the QRS peak of electrocardiogram (ECG) signal and indirectly interpreted as systolic BP. Time synchronization of the two sensors plays an important role in improving the accuracy of measurement. Therefore, it is aimed to employ a tablet-based system for collecting the wireless devices' data at a specific time. It is also our objective to include a triggering mechanism option using ambient sensors in order to start the process of data collection. Bluetooth/IEEE 802.15.4 wireless technologies will be used for real-time sensing and time synchronization. We have found that the proposed indirect measurement of BP can provide continuous information of the BP in a more convenient way from the patients' point of view. Moreover, the result of our indirect measurement of BP indicates that there are some short term fluctuations in the BP."}
{"_id": "87fb09967184aecb772024a8264e529252f0b668", "title": "3D printed RF passive components by liquid metal filling", "text": "We present three-dimensional (3D) micro-scale electrical components and systems by means of 3D printing and a liquid-metal-filling technique. The 3D supporting polymer structures with hollow channels and cavities are fabricated from inkjet printing. Liquid metals made of silver particles suspension in this demonstration are then injected into the hollow paths and solidified to form metallic elements and interconnects with high electrical conductivity. In the proof-of-concept demonstrations, various radio-frequency (RF) passive components, including 3D-shaped inductors, capacitors and resistors are fabricated and characterized. High-Q inductors and capacitors up to 1 GHz have been demonstrated. This work establishes an innovative way to construct arbitrary 3D electrical systems with efficient and labor-saving processes."}
{"_id": "02b4f85d7414585bd901a6aa0ac5c65727bd5062", "title": "Strengths and Weaknesses of Quantum Computing", "text": "Recently a great deal of attention has focused on quantum computation following a sequence of results [4, 16, 15] suggesting that quantum computers are more powerful than classical probabilistic computers. Following Shor\u2019s result that factoring and the extraction of discrete logarithms are both solvable in quantum polynomial time, it is natural to ask whether all of NP can be efficiently solved in quantum polynomial time. In this paper, we address this question by proving that relative to an oracle chosen uniformly at random, with probability 1, the class NP cannot be solved on a quantum Turing machine in time o(2n/2). We also show that relative to a permutation oracle chosen uniformly at random, with probability 1, the class NP \u2229 co\u2013NP cannot be solved on a quantum Turing machine in time o(2n/3). The former bound is tight since recent work of Grover [13] shows how to accept the class NP relative to any oracle on a quantum computer in time O(2n/2). To appear in SIAM Journal on Computing (special issue on quantum computing) \u2217 IBM T. J. Watson Research Laboratory, Yorktown Heights, New York, NY 10598, USA. email: bennetc@watson.ibm.com. \u2020 1 Microsoft Way, Redmond, WA 98052 \u2013 6399, USA. email: ethanb@microsoft.com. \u2021 Supported in part by Canada\u2019s nserc and Qu\u00e9bec\u2019s fcar. \u00a7D\u00e9partement IRO, Universit\u00e9 de Montr\u00e9al, C.P. 6128, succursale centre-ville, Montr\u00e9al (Qu\u00e9bec), Canada H3C 3J7. email: brassard@iro.umontreal.ca. \u00b6 Supported by NSF Grant No. CCR-9310214. Computer Science Division, University of California, Berkeley, CA 94720, USA. email: vazirani@cs.berkeley.edu."}
{"_id": "041929bace8fae28de43317e46734ca0e35dda8e", "title": "Tight bounds on quantum searching", "text": "We provide a tight analysis of Grover\u2019s recent algorithm for quantum database searching. We give a simple closed-form formula for the probability of success after any given number of iterations of the algorithm. This allows us to determine the number of iterations necessary to achieve almost certainty of finding the answer. Furthermore, we analyse the behaviour of the algorithm when the element to be found appears more than once in the table and we provide a new algorithm to find such an element even when the number of solutions is not known ahead of time. Using techniques from Shor\u2019s quantum factoring algorithm in addition to Grover\u2019s approach, we introduce a new technique for approximate quantum counting, which allows to estimate the number of solutions. Finally we provide a lower bound on the efficiency of any possible quantum database searching algorithm and we show that Grover\u2019s algorithm nearly comes within a factor 2 of being optimal in terms of the number of probes required in the table."}
{"_id": "1b6c1efb9725a3ba0b88a22bf048b2b207898b44", "title": "XMSS - A Practical Forward Secure Signature Scheme Based on Minimal Security Assumptions", "text": "We present the hash-based signature scheme XMSS. It is the first provably (forward) secure and practical signature scheme with minimal security requirements: a pseudorandom and a second preimage resistant (hash) function family. Its signature size is reduced to less than 25% compared to the best provably secure hash based signature scheme."}
{"_id": "2273d9829cdf7fc9d3be3cbecb961c7a6e4a34ea", "title": "Algorithms for Quantum Computation: Discrete Logarithms and Factoring", "text": "A computer is generally considered to be a universal computational device; i.e., it is believed able to simulate any physical computational device with a cost in computation time of at most a polynomial factol: It is not clear whether this is still true when quantum mechanics is taken into consideration. Several researchers, starting with David Deutsch, have developed models for quantum mechanical computers and have investigated their computational properties. This paper gives Las Vegas algorithms for finding discrete logarithms and factoring integers on a quantum computer that take a number of steps which is polynomial in the input size, e.g., the number of digits of the integer to be factored. These two problems are generally considered hard on a classical computer and have been used as the basis of several proposed cryptosystems. (We thus give the first examples of quantum cryptanulysis.)"}
{"_id": "7e7404028fc88b6e1434256b6e04ab1b47459619", "title": "A Study of Cardiovascular Disease Prediction Models Using Discriminant Analysis", "text": "Cardiovascular disease is a condition of interest to clinicians around the world and is one of the chief causes of death. In Korea, cardiovascular disease is the second largest cause of death following cancer, and is classified as one of the four major diseases, making an early diagnosis of cardiovascular disease essential for proper and timely treatment. Although the risk of cardiovascular disease is recognized in Korea, there is a lack of quality studies regarding the disease. In the present study, a prediction model was developed which allows for the calculation of the risk of cardiovascular disease among Korean patients. The prediction model for cardiovascular disease was developed using a discriminant analysis based on the data set of Korean National Health and Nutrition Examinations Survey V (KNHANES V)."}
{"_id": "79a7db3f287643d290975a38fecd5011a26e6846", "title": "High-performance polyline intersection based spatial join on GPU-accelerated clusters", "text": "The rapid growing volumes of spatial data have brought significant challenges on developing high-performance spatial data processing techniques in parallel and distributed computing environments. Spatial joins are important data management techniques in gaining insights from large-scale geospatial data. While several distributed spatial join techniques based on spatial partitions have been implemented on top of existing Big Data systems, they are not capable of natively exploiting massively data parallel computing power provided by modern commodity Graphics Processing Units (GPUs). In this study, as an important component of our research initiative in developing high-performance spatial join techniques on GPUs, we have designed and implemented a polyline intersection based spatial join technique that is capable of exploiting massively data parallel computing power on GPUs. The proposed polyline intersection based spatial join technique is integrated into a customized lightweight distributed execution engine that natively supports spatial partitions. We empirically evaluate the performance of the proposed spatial join technique on both a standalone GPU-equipped workstation and Amazon EC2 GPU-accelerated clusters and demonstrate its high performance when comparing with the state-of-the-art."}
{"_id": "4dee208efe5878bce18f60ce8268c4bdb7470775", "title": "The Case for Holistic Data Integration", "text": "Current data integration approaches are mostly limited to few data sources, partly due to the use of binary match approaches between pairs of sources. We thus advocate for the development of more holistic, clustering-based data integration approaches that scale to many data sources. We outline different use cases and provide an overview of initial approaches for holistic schema/ontology integration and entity clustering. The discussion also considers open data repositories and socalled knowledge graphs."}
{"_id": "1267eb86100a4b6dbf613b8a3b0e9209332811ff", "title": "Comprehensive review of epidemiology, scope, and impact of spinal pain.", "text": "Persistent pain interfering with daily activities is common. Chronic pain has been defined in many ways. Chronic pain syndrome is a separate entity from chronic pain. Chronic pain is defined as, \"pain that persists 6 months after an injury and beyond the usual course of an acute disease or a reasonable time for a comparable injury to heal, that is associated with chronic pathologic processes that cause continuous or intermittent pain for months or years, that may continue in the presence or absence of demonstrable pathologies; may not be amenable to routine pain control methods; and healing may never occur.\" In contrast, chronic pain syndrome has been defined as a complex condition with physical, psychological, emotional, and social components. The prevalence of chronic pain in the adult population ranges from 2% to 40%, with a median point prevalence of 15%. Among chronic pain disorders, pain arising from various structures of the spine constitutes the majority of the problems. The lifetime prevalence of spinal pain has been reported as 54% to 80%. Studies of the prevalence of low back pain and neck pain and its impact in general have shown 23% of patients reporting Grade II to IV low back pain (high pain intensity with disability) versus 15% with neck pain. Further, age related prevalence of persistent pain appears to be much more common in the elderly associated with functional limitations and difficulty in performing daily life activities. Chronic persistent low back and neck pain is seen in 25% to 60% of patients, one-year or longer after the initial episode. Spinal pain is associated with significant economic, societal, and health impact. Estimates and patterns of productivity losses and direct health care expenditures among individuals with back and neck pain in the United States continue to escalate. Recent studies have shown significant increases in the prevalence of various pain problems including low back pain. Frequent use of opioids in managing chronic non-cancer pain has been a major issue for health care in the United States placing a significant strain on the economy with the majority of patients receiving opioids for chronic pain necessitating an increased production of opioids, and escalating costs of opioid use, even with normal intake. The additional costs of misuse, abuse, and addiction are enormous. Comorbidities including psychological and physical conditions and numerous other risk factors are common in spinal pain and add significant complexities to the interventionalist's clinical task. This section of the American Society of Interventional Pain Physicians (ASIPP)/Evidence-Based Medicine (EBM) guidelines evaluates the epidemiology, scope, and impact of spinal pain and its relevance to health care interventions."}
{"_id": "06c8d51d87088d7e1f02549270b1555f371204ba", "title": "Design of a Wheelchair with Legs for People with Motor Disabilities", "text": "Low cost. For example, golf carts, outdoor chairs and special purpose sand buggies. wheelchair. AbstractA proof-of-concept prototype wheelchair with legs for people with motor disabilities is proposed, with the objective of demonstrating the feasibility of a completely new approach to mobility. Our prototype system consists of a chair equipped with wheels and legs, and is capable of traversing uneven terrain and circumventing obstacles. The important design considerations, the system design and analysis, and an experimental prototype of a chair are discussed. The results from the analysis and experimentation show the feasibility of the proposed concept and its advantages."}
{"_id": "3ef254f7aa36cd68959ec506b0622c1a1c9915d5", "title": "Uneasy lies the head that wears the crown: the link between guilt proneness and leadership.", "text": "We propose that guilt proneness is a critical characteristic of leaders and find support for this hypothesis across 3 studies. Participants in the first study rated a set of guilt-prone behaviors as more indicative of leadership potential than a set of less guilt-prone behaviors. In a follow-up study, guilt-prone participants in a leaderless group task engaged in more leadership behaviors than did less guilt-prone participants. In a third, and final, study, we move to the field and analyze 360\u00b0 feedback from a group of young managers working in a range of industries. The results indicate that highly guilt-prone individuals were rated as more capable leaders than less guilt-prone individuals and that a sense of responsibility for others underlies the positive relationship between guilt proneness and leadership evaluations."}
{"_id": "399f0125a25afe3bd7b87b1896e9d9688cb31212", "title": "Implementing it Service Management: A Case Study Focussing on Critical Success Factors", "text": "Queensland Health, a large Australian government agency, implemented a centralised IT service management model based on the ITIL framework. This paper presents an in-depth case study of the implementation. It sheds light on the challenges and breakthroughs, confirms a set of factors that contributed to the project\u2019s success and offers a learning opportunity for other organisations. The study indicates that the commitment of senior management is crucial to the project\u2019s success as is a project champion and the recognition of the need for an appropriate change management strategy to transform the organisational culture to a service-oriented focus. Maintaining close and forthright relationships with multiple vendors facilitates technology transfer to in-house staff while a benefits realisation plan is a valuable tool for tracking and communicating tangible and intangible project benefits to the project stakeholders. An effective project governance and execution process further contributes to the implementation success."}
{"_id": "deaa9cb1aa00bd008e09ca23a5393a0df4fb7766", "title": "Efficient super-resolution via image warping", "text": "This paper introduces a new algorithm for enhancing image resolution from an image sequence. The approach we propose herein uses the integrating resampler for warping. The method is a direct computation, which is fundamentally different from the iterative back-projection approaches proposed in previous work. This paper shows that image-warping techniques may have a strong impact on the quality of image resolution enhancement. By coupling the degradation model of the imaging system directly into the integrating resampler, we can better approximate the warping characteristics of real sensors, which also significantly improve the quality of super-resolution images. Examples of super-resolutions are given for gray-scale images. Evaluations are made visuallyby comparing the resulting images and those using bi-linear resampling and back-projection and quantitativelyusing OCR as a fundamental measure. The paper shows that even when the images are qualitatively similar, quantitative differences appear in machine processing. q 2000 Elsevier Science B.V. All rights reserved. Keywords: Super-resolution; Imaging-consistent restoration/reconstruction; Integrating resampling; Integrating resampler; Quantitative evaluatio n; OCR; Bilinear resampling; Image reconstruction; Image restoration; Image warping 1. Background and introduction The basic premise of super-resolution is to combine images (and in some work scene/data models) to produce an image that has higher \u201cresolution\u201d than the original image. It is not sufficient to just resample the image\u2014size is not the same as resolution. Increasing resolution could be viewed as either increasing the signal-to-noise ratio while keeping the size fixed and/or approximating the image at a larger size with reasonable approximations for frequencies higher than those representable at the original size. In either case, it requires additional information and assumptions. There are, of course, some fundamental limits on what can be gained; lens and photo-sites cause blur, and the discrete sampling causes frequencies above the Nyquist rate to alias. While in general the aliasing is theoretically impossible to separate, the problem occurs mostly along sharp \u201cintensity edges\u201d, so multiple images with slightly different imaging positions can be used to reduce its impact. There are two different groups of research that use the term super-resolution\u2014one is based on multiple images, which will be considered within this paper; the other uses \u201cscene/object\u201d models. Early research on image-based super-resolution dates back to the work by Huang and Tsai [8]. They solved the problem in the frequency domain, but they disregarded sensor blurring. Furthermore, they assumed only inter-frame translations. Gross [7] solved the problem by first merging the low resolution images by interpolation and then obtaining the high-resolution image by deblurring the merged image. Again, only inter-frame translations were considered. Peleg and Keren [11,14] solved the problem by first estimating an initial guess of the high-resolution image, then simulating the imaging process and minimizing the difference between the observed and simulated low-resolution images. Irani and Peleg [9,10] used a back-projection method similar to that used in tomography to minimize the same difference. Bascle et al. 
[1] used the same back-projection method to minimize the difference between the predicted and actual images except that a simple motion blur model was included. All previous work ignores the impact of image warping techniques. In this paper, we show that warping techniques may have a strong impact on the quality of image resolution enhancement. Image warping has been around almost as long as image processing, allowing users to reshape the image geometry. Image warping requires the underlying image to be \u201cresampled\u201d at non-integer locations; hence, it requires image reconstruction. When the goal of warping is to produce output for human viewing, only moderately accurate image intensities are needed. In these cases, techniques Image and Vision Computing 18 (2000) 761\u2013771 0262-8856/00/$ see front matter q 2000 Elsevier Science B.V. All rights reserved. PII: S0262-8856(99)00044-X www.elsevier.com/locate/imavis * Corresponding author. Tel.: 11-610-7584061. E-mail address:tboult@eecs.lehigh.edu (T.E. Boult). using bi-linear interpolation have been found sufficient. However, as a step for applications such as super-resolution, the precision of the warped intensity values is often important. For these problems, bi-linear image reconstruction may not be sufficient. This paper shows how the integrating resampler can be used to enhance image resolution. This builds upon the ideas of imaging-consistent restoration/reconstruction algorithms proposed in Ref. [2] and the integrating resampler proposed in Ref. [5]. A final contribution of this paper is a quantitative comparison and analysis of our techniques. In this paper, we use a character recognition rate based on a commercial OCR program. We chose this because it is easy to make it quantitative, and we consider this to be a good measure for other applications such as license-plate reading and other pattern recognition problems. In what follows in this paper, the image formation process and the relationship between restoration, reconstruction, and super-resolution is briefly reviewed in Section 2. The integrating resampler\u2014an efficient method for warping using imaging-consistent reconstruction/restoration algorithms\u2014is given in Section 3. This algorithm is well suited for today\u2019s pipelined micro-processors. In addition, the integrating resampler can allow for modifications of the intensity values to better approximate the warping characteristics of real sensors. The super-resolution algorithm where we compare the integrating resampler with bi-linear resampling is presented in Section 4. Quantitative measurement of super-resolution imaging using OCR is shown in Section 5. 2. Image formation and image restoration In this section, we briefly review the image formation process as well as the sensor model proposed in Ref. [2]. Image formation is generally described as a sequence of filtering operations. See Fig. 1. Let f \u0085x; y\u0086 be the intensity distribution of a scene at the front aperture of a lens. That distribution is acted upon by h1\u0085x; y\u0086; the blurring component of the lens, yielding f1\u0085x; y\u0086: A geometric distortion functionh2\u0085x; y\u0086 is then applied to produce imagef2\u0085u; v\u0086: At this point, f2 strikes the image sensor where it undergoes additional blurring by a point spread functionh3\u0085u; v\u0086 to generate image f3\u0085u; v\u0086: This blurring reflects the limitation of the sensor to accurately resolve each point without the influence of neighboring points. 
We choose to use a simple model wherein this blurring takes place within one pixel because for CCD and CID cameras, the physical boundaries between photo-sites generally allow only insignificant charge transfer between pixels. Imagef3 undergoes spatial sampling as it hits the discrete photo-sites in a CCD or CID camera. The combination of h3 with sampling is known as areas sampling. It reflects the finite size of the sampling area. If h3 is taken to be an impulse, then we have point sampling. This ideal, often assumed for theoretical considerations, is generally not true in practice. In either case, intensities in the sampled image Is are now defined only for integer values of u and v. The digital imageI \u0085u; v\u0086 is obtained via an analog-todigital converter that quantizes the samples of Is. This completes the image formation process, leaving I \u0085u; v\u0086 as the starting point for subsequent processing, including image reconstruction and restoration. Reconstruction and restoration start with I (and models for one or morehis), and seek to solve for one or more of the fjs. Recoveringf is known as image restoration and is of considerable interest in image processing. Note that recovering a functional form off2 is an intra-pixel restoration. Reconstruction limits itself to the problem of deriving the continuous functionf3. Restoration attempts to estimate the original input functionf2 (or f1). The goal of super-resolution is to recover a discrete approximation, with resolution higher than Is, to either f1 (plain super-resolution) orf (super-resolution with deblurring). Because super-resolution increases the signal-to-noise ratio, it significantly aids in the ill-condition problem of deblurring. 3. Imaging-consistency and the integrating resampler An algorithm is called imaging-consistent if it is the exact solution for some input function, which, according to the imaging model, would have generated the measured input. For image reconstruction, we achieve this by first restoring the image to yieldf2, then blurring it again by the pixel\u2019s PSF. Details can be found in Ref. [2]. Only one-dimensional image models are presented herein; higher dimensions are treated separately. M.-C. Chiang, T.E. Boult / Image and Vision Computing 18 (2000) 761\u2013771 762 Fig. 1. The image formation process and the relationship between restoration, reconstruction, and super-resolution. The first imaging-consistent method to consider is based on a piecewise quadratic model for the image. If we assume a Rect PSF filter (1 inside and 0 outside the pixel), the imaging consistent algorithm is easy to derive. To ensure that the function is continuous and that the method is local, we define the value of the reconstruction at the pixel boundarieski andki11 to be equal toEi andEi11: Any method of approximation could be used to compute Ei, though our examples will only include linear interpolation and cubic convolution. See [3, Section 2.3] for more details. Given the valuesEi at the pixel edges, an additional constraint, that the integral across the pixel must equal Vi, results in exactly three constraints: qi\u008521=2\u0086 \u0088 Ei ; qi\u00851=2\u0086 \u0088 Ei11; Z1=2 2 1=2 qi\u0085x\u0086 dx\u0088 Vi : \u00851\u0086 From Eq. (1), one can det"}
{"_id": "57d181f745182935f8090e7ee53d2b46912a047e", "title": "Simplification of Example Sentences for Learners of Japanese Functional Expressions", "text": "Learning functional expressions is one of the difficulties for language learners, since functional expressions tend to have multiple meanings and complicated usages in various situations. In this paper, we report an experiment of simplifying example sentences of Japanese functional expressions especially for Chinese-speaking learners. For this purpose, we developed \u201cJapanese Functional Expressions List\u201d and \u201cSimple Japanese Replacement List\u201d. To evaluate the method, we conduct a small-scale experiment with Chinese-speaking learners on the effectiveness of the simplified example sentences. The experimental results indicate that the simplified sentences are helpful in learning Japanese functional expressions."}
{"_id": "662bd1b1ea6e22dfbb337264fb6da33544d9902f", "title": "Algorithmics of Matching Under Preferences", "text": "\u2022 Page 17, line 13: \u201cconstrast\u201d \u2212\u2192 \u201ccontrast\u201d. (Due to Mechthild Opperud.) \u2022 Page 21, line 17: \u201cattributes\u201d \u2212\u2192 \u201cattributed\u201d. \u2022 Page 21, lines -4 to -3: change \u201cpairs in which either (i) ri is unassigned if she is unassigned in both M and M \u2032, or (ii)\u201d to \u201cpairs obtained as follows: for each resident ri, ri is unassigned if she is unassigned in both M and M \u2032, otherwise\u201d. \u2022 Page 21, line -1: \u201cjoin\u201d \u2212\u2192 \u201cmeet\u201d. (Due to Didac Busquets.) \u2022 Page 22, line 1: \u201cmeet\u201d \u2212\u2192 \u201cjoin\u201d. (Due to Didac Busquets.) \u2022 Page 23, line 4: add \u201cIn an sm instance, any matching is automatically assumed to have size n.\u201d"}
{"_id": "510c61038514fb0c1382d390c8068d9f5abeacab", "title": "SNAP, Small-world Network Analysis and Partitioning: An open-source parallel graph framework for the exploration of large-scale networks", "text": "We present SNAP (small-world network analysis and partitioning), an open-source graph framework for exploratory study and partitioning of large-scale networks. To illustrate the capability of SNAP, we discuss the design, implementation, and performance of three novel parallel community detection algorithms that optimize modularity, a popular measure for clustering quality in social network analysis. In order to achieve scalable parallel performance, we exploit typical network characteristics of small-world networks, such as the low graph diameter, sparse connectivity, and skewed degree distribution. We conduct an extensive experimental study on real-world graph instances and demonstrate that our parallel schemes, coupled with aggressive algorithm engineering for small-world networks, give significant running time improvements over existing modularity-based clustering heuristics, with little or no loss in clustering quality. For instance, our divisive clustering approach based on approximate edge betweenness centrality is more than two orders of magnitude faster than a competing greedy approach, for a variety of large graph instances on the Sun Fire T2000 multicore system. SNAP also contains parallel implementations of fundamental graph-theoretic kernels and topological analysis metrics (e.g., breadth-first search, connected components, vertex and edge centrality) that are optimized for small- world networks. The SNAP framework is extensible; the graph kernels are modular, portable across shared memory multicore and symmetric multiprocessor systems, and simplify the design of high-level domain-specific applications."}
{"_id": "260b95930133787c008d01fb870f909171c7e677", "title": "Advances in Very Deep Convolutional Neural Networks for LVCSR", "text": "Very deep CNNs with small 3 \u00d7 3 kernels have recently been shown to achieve very strong performance as acoustic models in hybrid NN-HMM speech recognition systems. In this paper we investigate how to efficiently scale these models to larger datasets. Specifically, we address the design choice of pooling and padding along the time dimension which renders convolutional evaluation of sequences highly inefficient. We propose a new CNN design without timepadding and without timepooling, which is slightly suboptimal for accuracy, but has two significant advantages: it enables sequence training and deployment by allowing efficient convolutional evaluation of full utterances, and, it allows for batch normalization to be straightforwardly adopted to CNNs on sequence data. Through batch normalization, we recover the lost peformance from removing the time-pooling, while keeping the benefit of efficient convolutional evaluation. We demonstrate the performance of our models both on larger scale data than before, and after sequence training. Our very deep CNN model sequence trained on the 2000h switchboard dataset obtains 9.4 word error rate on the Hub5 test-set, matching with a single model the performance of the 2015 IBM system combination, which was the previous best published result."}
{"_id": "1116f22277ba540f3a28dc3c858ce7b7947fa500", "title": "Non-orthogonal Multiple Access in a Downlink Multiuser Beamforming System", "text": "In this paper, we propose a non-orthogonal multiple access-based multiuser beamforming (NOMA-BF) system designed to enhance the sum capacity. In the proposed NOMA-BF system, a single BF vector is shared by two users, so that the number of supportable users can be increased. However, sharing a BF vector leads to interference from other beams as well as from the other user sharing the BF vector. Therefore, to reduce interference and improve the sum capacity, we additionally propose a clustering and power allocation algorithm. This clustering algorithm, which selects two users with high correlation and a large gain-difference between their channels, can reduce the interference from other beams and from the other user as well. Furthermore, power allocation ensures that each user's transmit power is allocated so as to maximize the sum capacity. Numerical results verify that the proposed NOMA-BF system improves the sum capacity, compared to the conventional multiuser BF system."}
{"_id": "9c94955beb49c16bf2e61072db7365467bc6f76f", "title": "On Plant Responses to D-Amino Acids Features of Growth , Root Behaviour and Selection for Plant Transformation", "text": "Amino acids have been regarded as potential plant nitrogen sources for more than a century. Some amino acids have been shown to support growth while others have growth-retarding effects. The D-isomers with notably adverse effects on plants\u2019 growth and development include D-serine and D-alanine. Recently, D-serine has been recognised as an endogenous ligand of receptor channels mediating calcium fluxes in plants, but otherwise little is known about endogenous roles of D-amino acids in plants. In the studies underlying this thesis, the negative responses to D-serine and D-alanine were converted to positive effects in Arabidopsis thaliana (Arabidopsis) plants by introducing either of two D-amino acid-metabolising enzymes. Transgenic Arabidopsis lines expressing either the D-serine dehydratase (dsdA) gene from Escherichia coli or the D-amino acid oxidase (DAO1) gene from Rhodotorula gracilis grew with otherwise toxic D-amino acids as the sole nitrogen source. I also expressed a transporter specific for D-amino acids, which further increased the transgenic plants\u2019 growth with D-serine as sole nitrogen source. Hence, both assimilation and uptake restrictions can limit plant growth on D-amino acids. The growth of transgenic lines on D-serine or D-alanine provides an unambiguous and highly visible phenotype, which is essential for a selectable marker. Thus, expressing of either the dsdA or DAO1 genes generated transformants that are easy to screen. Furthermore, the DAO1 gene can be readily used for either positive or negative selection, depending on the substrate, thus it provides a unique conditional, substrate-dependent positive/negative selectable marker for plant transformation. In summary, the presented work demonstrates that introducing the ability to catalyse a single metabolic step can allow plants to exploit an otherwise inaccessible or toxic form of organic nitrogen, and provides a versatile marker based on nitrogen nutrition for selecting transgenic plants. A possible role for D-serine in plants\u2019 touch response is also reviewed in the thesis."}
{"_id": "5e491db0e11e71c6ed798052129712ef145b88bd", "title": "LibD: Scalable and Precise Third-Party Library Detection in Android Markets", "text": "With the thriving of the mobile app markets, third-party libraries are pervasively integrated in the Android applications. Third-party libraries provide functionality such as advertisements, location services, and social networking services, making multi-functional app development much more productive. However, the spread of vulnerable or harmful third-party libraries may also hurt the entire mobile ecosystem, leading to various security problems. The Android platform suffers severely from such problems due to the way its ecosystem is constructed and maintained. Therefore, third-party Android library identification has emerged as an important problem which is the basis of many security applications such as repackaging detection and malware analysis. According to our investigation, existing work on Android library detection still requires improvement in many aspects, including accuracy and obfuscation resilience. In response to these limitations, we propose a novel approach to identifying third-party Android libraries. Our method utilizes the internal code dependencies of an Android app to detect and classify library candidates. Different from most previous methods which classify detected library candidates based on similarity comparison, our method is based on feature hashing and can better handle code whose package and method names are obfuscated. Based on this approach, we have developed a prototypical tool called LibD and evaluated it with an update-to-date and large-scale dataset. Our experimental results on 1,427,395 apps show that compared to existing tools, LibD can better handle multi-package third-party libraries in the presence of name-based obfuscation, leading to significantly improved precision without the loss of scalability."}
{"_id": "dec8aa4a1c81f99d98b6752ad578812888bc06bd", "title": "Generating Single Subject Activity Videos as a Sequence of Actions Using 3D Convolutional Generative Adversarial Networks", "text": "Humans have the remarkable ability of imagination, where within the human mind virtual simulations are done of scenarios whether visual, auditory or any other senses. These imaginations are based on the experiences during interaction with the real world, where human senses help the mind understand their surroundings. Such level of imagination has not yet been achieved using current algorithms, but a current trend in deep learning architectures known as Generative Adversarial Networks (GANs) have proven capable of generating new and interesting images or videos based on the training data. In that way, GANs can be used to mimic human imagination, where the resulting generated visuals of GANs are based on the data used during training. In this paper, we use a combination of Long Short-Term Memory (LSTM) Networks and 3D GANs to generate videos. We use a 3D Convolutional GAN to generate new human action videos based on trained data. The generated human action videos are used to generate longer videos consisting of a sequence of short actions combined creating longer and more complex activities. To generate the sequence of actions needed we use an LSTM network to translate a simple input description text into the required sequence of actions. The generated chunks are then concatenated using a motion interpolation scheme to form a single video consisting of many generated actions. Hence a visualization of the input text description is generated as a video of a subject performing the activity described."}
{"_id": "b50de1f2b3c5bf5da3cb2edb233d2b7acb91c057", "title": "SLAM-based automatic extrinsic calibration of a multi-camera rig", "text": "Cameras are often a good choice as the primary outward-looking sensor for mobile robots, and a wide field of view is usually desirable for responsive and accurate navigation, SLAMand relocalisation. While this can potentially be provided by a single omnidirectional camera, it can also be flexibly achieved by multiple cameras with standard optics mounted around the robot. However, such setups are difficult to calibrate. Here we present a general method for fully automatic extrinsic auto-calibration of a fixed multi camera rig, with no requirement for calibration patterns or other infrastructure, which works even in the case where the cameras have completely non-overlapping views. The robot is placed in a natural environment and makes a set of programmed movements including a full horizontal rotation and captures a synchronized image sequence from each camera. These sequences are processed individually with a monocular visual SLAM algorithm. The resulting maps are matched and fused robustly based on corresponding invariant features, and then all estimates are optimised full joint bundle adjustment, where we constrain the relative poses of the cameras to be fixed. We present results showing accurate performance of the method for various two and four camera configurations."}
{"_id": "8761af6ec0928c30fbea4ee4b8039ec42016c0ec", "title": "Fusion of laserscannner and video based lanemarking detection for robust lateral vehicle control and lane change maneuvers", "text": "The knowledge about lanes and the exact position on the road is fundamental for many advanced driver assistance systems. In this paper, a novel iterative histogram based approach with occupancy grids for the detection of multiple lanes is proposed. In highway scenarios, our approach is highly suitable to determine the correct number of all existing lanes on the road. Additionally, the output of the laserscannner based lane detection is fused with a production-available vision based system. It is shown that both sensor systems perfectly complement each other to increase the robustness of a lane tracking system. The achieved accuracy of the fusion system, the laserscannner and video based system is evaluated with a highly accurate DGPS to investigate the performance with respect to lateral vehicle control applications."}
{"_id": "3a3d3e05ab089f58acb0890937c7886ee56d9e57", "title": "Cultural Differences in Collaborative Authoring of Wikipedia", "text": "This article explores the relationship between national culture and computer-mediated communication (CMC) in Wikipedia. The articles on the topic game from the French, German, Japanese, and Dutch Wikipedia websites were studied using content analysis methods. Correlations were investigated between patterns of contributions and the four dimensions of cultural influences proposed by Hofstede (Power Distance, Collectivism versus Individualism, Femininity versus Masculinity, and Uncertainty Avoidance). The analysis revealed cultural differences in the style of contributions across the cultures investigated, some of which are correlated with the dimensions identified by Hofstede. These findings suggest that cultural differences that are observed in the physical world also exist in the virtual world. doi:10.1111/j.1083-6101.2006.00316.x"}
{"_id": "2d0f7a8b3c1c97dc12a86be78821f95d1455bba4", "title": "Study of the General Kalman Filter for Echo Cancellation", "text": "The Kalman filter is a very interesting signal processing tool, which is widely used in many practical applications. In this paper, we study the Kalman filter in the context of echo cancellation. The contribution of this work is threefold. First, we derive a different form of the Kalman filter by considering, at each iteration, a block of time samples instead of one time sample as it is the case in the conventional approach. Second, we show how this general Kalman filter (GKF) is connected with some of the most popular adaptive filters for echo cancellation, i.e., the normalized least-mean-square (NLMS) algorithm, the affine projection algorithm (APA) and its proportionate version (PAPA). Third, a simplified Kalman filter is developed in order to reduce the computational load of the GKF; this algorithm behaves like a variable step-size adaptive filter. Simulation results indicate the good performance of the proposed algorithms, which can be attractive choices for echo cancellation."}
{"_id": "371f5cff7a5ae3bc8114c0b00998c71aa2b10181", "title": "Emotion Enhances Learning via Norepinephrine Regulation of AMPA-Receptor Trafficking", "text": "Emotion enhances our ability to form vivid memories of even trivial events. Norepinephrine (NE), a neuromodulator released during emotional arousal, plays a central role in the emotional regulation of memory. However, the underlying molecular mechanism remains elusive. Toward this aim, we have examined the role of NE in contextual memory formation and in the synaptic delivery of GluR1-containing alpha-amino-3-hydroxy-5-methyl-4-isoxazoleproprionic acid (AMPA)-type glutamate receptors during long-term potentiation (LTP), a candidate synaptic mechanism for learning. We found that NE, as well as emotional stress, induces phosphorylation of GluR1 at sites critical for its synaptic delivery. Phosphorylation at these sites is necessary and sufficient to lower the threshold for GluR1 synaptic incorporation during LTP. In behavioral experiments, NE can lower the threshold for memory formation in wild-type mice but not in mice carrying mutations in the GluR1 phosphorylation sites. Our results indicate that NE-driven phosphorylation of GluR1 facilitates the synaptic delivery of GluR1-containing AMPARs, lowering the threshold for LTP, thereby providing a molecular mechanism for how emotion enhances learning and memory."}
{"_id": "5f905b6d9f41a45ee8ee5040db5a1bf855372164", "title": "Resource discovery in Internet of Things: Current trends and future standardization aspects", "text": "To realize the vision of Internet of Things, there must be mechanisms to discover resources and their capabilities. Thus resource discovery becomes a fundamental requirement of any IoT platform. This paper provides a comprehensive categorization of the current technology landscape for discovery while pointing out their advantages and limitations. Then, a novel search engine based resource discovery framework is proposed. The framework comprises of a proxy layer which includes drivers for various communication technologies and protocols. There is a central registry to store the configurations of the resources and index them based on the configuration parameters. A \u201clifetime\u201d attribute is introduced which denotes the period through which a resource is discoverable. The search engine ranks the resulting resources of a discovery request and returns an URI to directly access each resource. The functionalities of the proposed framework are exposed using RESTful web services. The framework allows discovery of both smart and legacy devices. We have performed gap analysis of current IoT standards for discovery mechanisms and provide suggestions to improve them to maintain interoperability. The outcomes are communicated to Standard Development Organizations like W3C (in Web of Things Interest Group) and oneM2M."}
{"_id": "2712e15d1309e217c18dc4a359a17f1d7384f47a", "title": "Study of a bus-based disruption-tolerant network: mobility modeling and impact on routing", "text": "We study traces taken from UMass DieselNet, a Disruption-Tolerant Network consisting of WiFi nodes attached to buses. As buses travel their routes, they encounter other buses and in some cases are able to establish pair-wise connections and transfer data between them. We analyze the bus-to-bus contact traces to characterize the contact process between buses and its impact on DTN routing performance. We find that the all-bus-pairs aggregated inter-contact times show no discernible pattern. However, the inter-contact times aggregated at a route level exhibit periodic behavior.Based on analysis of the deterministic inter-meeting times for bus pairs running on route pairs, and consideration of the variability in bus movement and the random failures to establish connections, we construct generative route-level models that capture the above behavior. Through trace-driven simulations of epidemic routing, we find that the epidemic performance predicted by traces generated with this finer-grained route-level model is much closer to the actual performance that would be realized in the operational system than traces generated using the coarse-grained all-bus-pairs aggregated model. This suggests the importance in choosing the rightlevel of model granularity when modelingmobility-related measures such as inter-contact times in DTNs."}
{"_id": "05ff72126f935d857bd07b8280319eef4b688ddb", "title": "Trainable videorealistic speech animation", "text": "We describe how to create with machine learning techniques a generative, speech animation module. A human subject is first recorded using a videocamera as he/she utters a predetermined speech corpus. After processing the corpus automatically, a visual speech module is learned from the data that is capable of synthesizing the human subject's mouth uttering entirely novel utterances that were not recorded in the original video. The synthesized utterance is re-composited onto a background sequence which contains natural head and eye movement. The final output is videorealistic in the sense that it looks like a video camera recording of the subject. At run time, the input to the system can be either real audio sequences or synthetic audio produced by a text-to-speech system, as long as they have been phonetically aligned.The two key contributions of this paper are 1) a variant of the multidimensional morphable model (MMM) to synthesize new, previously unseen mouth configurations from a small set of mouth image prototypes; and 2) a trajectory synthesis technique based on regularization, which is automatically trained from the recorded video corpus, and which is capable of synthesizing trajectories in MMM space corresponding to any desired utterance."}
{"_id": "0d42801cda0e6c80a6bcaf31efe6cf853fa052d0", "title": "Towards Zero Unknown Word in Neural Machine Translation", "text": "Neural Machine translation has shown promising results in recent years. In order to control the computational complexity, NMT has to employ a small vocabulary, and massive rare words outside the vocabulary are all replaced with a single unk symbol. Besides the inability to translate rare words, this kind of simple approach leads to much increased ambiguity of the sentences since meaningless unks break the structure of sentences, and thus hurts the translation and reordering of the in-vocabulary words. To tackle this problem, we propose a novel substitution-translation-restoration method. In substitution step, the rare words in a testing sentence are replaced with similar in-vocabulary words based on a similarity model learnt from monolingual data. In translation and restoration steps, the sentence will be translated with a model trained on new bilingual data with rare words replaced, and finally the translations of the replaced words will be substituted by that of original ones. Experiments on Chinese-to-English translation demonstrate that our proposed method can achieve more than 4 BLEU points over the attention-based NMT. When compared to the recently proposed method handling rare words in NMT, our method can also obtain an improvement by nearly 3 BLEU points."}
{"_id": "13880d8bbfed80ab74e0a757292519a71230a93a", "title": "OpenNMT: Open-source Toolkit for Neural Machine Translation", "text": "We describe an open-source toolkit for neural machine translation (NMT). The toolkit prioritizes efficiency, modularity, and extensibility with the goal of supporting NMT research into model architectures, feature representations, and source modalities, while maintaining competitive performance and reasonable training requirements. The toolkit consists of modeling and translation support, as well as detailed pedagogical documentation about the underlying techniques."}
{"_id": "1518039b5001f1836565215eb047526b3ac7f462", "title": "Neural Machine Translation of Rare Words with Subword Units", "text": "Neural machine translation (NMT) models typically operate with a fixed vocabulary, but translation is an open-vocabulary problem. Previous work addresses the translation of out-of-vocabulary words by backing off to a dictionary. In this paper, we introduce a simpler and more effective approach, making the NMT model capable of open-vocabulary translation by encoding rare and unknown words as sequences of subword units. This is based on the intuition that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations). We discuss the suitability of different word segmentation techniques, including simple character ngram models and a segmentation based on the byte pair encoding compression algorithm, and empirically show that subword models improve over a back-off dictionary baseline for the WMT 15 translation tasks English\u2192German and English\u2192Russian by up to 1.1 and 1.3 BLEU, respectively."}
{"_id": "53ca01f8c593b2339f292672b183235da5f6ce70", "title": "ASPEC: Asian Scientific Paper Excerpt Corpus", "text": "In this paper, we describe the details of the ASPEC (Asian Scientific Paper Excerpt Corpus), which is the first large-size parallel corpus of scientific paper domain. ASPEC was constructed in the Japanese-Chinese machine translation project conducted between 2006 and 2010 using the Special Coordination Funds for Promoting Science and Technology. It consists of a Japanese-English scientific paper abstract corpus of approximately 3 million parallel sentences (ASPEC-JE) and a Chinese-Japanese scientific paper excerpt corpus of approximately 0.68 million parallel sentences (ASPEC-JC). ASPEC is used as the official dataset for the machine translation evaluation workshop WAT (Workshop on Asian Translation)."}
{"_id": "6ce83cede4d31b40c0b5bed6d1899d89de2ee28b", "title": "PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification", "text": "We present a new release of the Paraphrase Database. PPDB 2.0 includes a discriminatively re-ranked set of paraphrases that achieve a higher correlation with human judgments than PPDB 1.0\u2019s heuristic rankings. Each paraphrase pair in the database now also includes finegrained entailment relations, word embedding similarities, and style annotations."}
{"_id": "a0d0cdd2e8af1945c03cfaf2cb451f71f208d0c9", "title": "Optimizing Paxos with batching and pipelining", "text": "Paxos is probably the most popular state machine replication protocol. Two optimizations that can greatly improve its performance are batching and pipelining. However, tuning these two optimizations to achieve high-throughput can be challenging, as their effectiveness depends on many parameters like the network latency and bandwidth, the speed of the nodes, and the properties of the application. We address this question by presenting an analytical model of the performance of Paxos, and showing how it can be used to determine configuration values for the batching and pipelining optimizations that result in high-throughput. We then validate the model, by presenting the results of experiments where we investigate the interaction of these two optimizations both in LAN and WAN environments, and comparing these resultswith the prediction from themodel. The results confirm the accuracy of the model, with the predicted values being usually very close to the ones that provide the highest performance in the experiments. Furthermore, both the model and the experiments give useful insights into the relative effectiveness of batching and pipelining. They show that although batching by itself provides the largest gains in all scenarios, in most cases combining it with pipelining provides a very significant additional increase in throughput, with this being true not only on high-latencyWANbut also inmany LAN scenarios. \u00a9 2012 Elsevier B.V. All rights reserved."}
{"_id": "8eca68829f9fa0fb9780dc36dbc929095516964a", "title": "OptoCOMM: Introducing a new optical underwater wireless communication modem", "text": "OptoCOMM aims at demonstrating the potential, at physical level, of a communication facility for the SUNRISE platform constituted by an Optical Underwater Wireless Communication (OUWC) module with target performance of 10 Mb/s transmission rate at 10 meters range in shallow medium/high turbidity harbour waters. The module, which is based on blue Light Emitting Diode (LED) units and common photodiodes, is an evolution of the proof-of-principle prototype already proven in laboratory (pool). It will constitute an additional node integrated in the Littoral Ocean Observatory Network (LOON) test-bed of the SUNRISE infrastructures, providing a high speed and short-range communication node, which will complete the capability of acoustic modems already present in the test-bed. Three modules (nodes) will be developed and experimentally demonstrated: one for direct integration with the LOON infrastructures, one, battery powered, to be potentially installed on buoys, Remotely Operated Vehicles (ROVs), etc., and one to be installed on the eFolaga Autonomous Underwater Vehicle (AUV) of the proponents. The paper describes in detail the development of the modems as well as the first lab experiments, where the core technology has been successfully tested."}
{"_id": "11e15bf0438be1f92d9eb7ee93ab5c58e65e6333", "title": "A 3-D Camera With Adaptable Background Light Suppression Using Pixel-Binning and Super-Resolution", "text": "This paper presents a CMOS time-of-flight (TOF) 3-D camera employing a column-level background light (BGL) suppression scheme for high-resolution outdoor imaging. The use of the column-level approach minimizes a pixel size for high-density pixel arrays. Pixel-binning and super-resolution can be adaptably applied for an optimal BGL suppression at given spatiotemporal resolutions. A prototype sensor has been fabricated by using 0.11 \u03bcm CMOS processes. The sensor achieved a fill factor of 24% in a pixel pitch of 5.9 \u03bcm which is the smallest among all the reported TOF cameras up to date. Measurement results showed the temporal noise of 1.47 cm-rms with a 100 ms integration time at a demodulation frequency of 12.5 MHz using a white target at 1 m distance. The non-linearity was measured as 1% over the range of 0.75 m ~ 4 m. The BGL suppression over 100 klx was achieved from indoor and outdoor experiments, while the BGL-induced offset was maintained less than 2.6 cm under 0 ~ 100 klx."}
{"_id": "dc2d5a12c9b6c76e93658c3343f854d2bc65b577", "title": "Map Matching with Inverse Reinforcement Learning", "text": "We study map-matching, the problem of estimating the route that is traveled by a vehicle, where the points observed with the Global Positioning System are available. A state-of-the-art approach for this problem is a Hidden Markov Model (HMM). We propose a particular transition probability between latent road segments by the use of the number of turns in addition to the travel distance between the latent road segments. We use inverse reinforcement learning to estimate the importance of the number of turns relative to the travel distance. This estimated importance is incorporated in the transition probability of the HMM. We show, through numerical experiments, that the error of map-matching can be reduced substantially with the proposed transition probability."}
{"_id": "75539d8ce781b5c02d571ac142e464222528be3b", "title": "Market power and efficiency in a computational electricity market with discriminatory double-auction pricing", "text": "This study reports experimental market power and efficiency outcomes for a computational wholesale electricity market operating in the short run under systematically varied concentration and capacity conditions. The pricing of electricity is determined by means of a clearinghouse double auction with discriminatory midpoint pricing. Buyers and sellers use a modified Roth Erev individual reinforcement learning algorithm to determine their price and quantity offers in each auction round. It is shown that high market efficiency is generally attained, and that market microstructure is strongly predictive for the relative market power of buyers and sellers independently of the values set for the reinforcement learning parameters. Results are briefly compared against results from an earlier study in which buyers and sellers instead engage in social mimicry learning via genetic algorithms."}
{"_id": "8658058a37d6e2d0d58b4da7efbc5a653544ee64", "title": "Beyond Power Calculations: Assessing Type S (Sign) and Type M (Magnitude) Errors.", "text": "Statistical power analysis provides the conventional approach to assess error rates when designing a research study. However, power analysis is flawed in that a narrow emphasis on statistical significance is placed as the primary focus of study design. In noisy, small-sample settings, statistically significant results can often be misleading. To help researchers address this problem in the context of their own studies, we recommend design calculations in which (a) the probability of an estimate being in the wrong direction (Type S [sign] error) and (b) the factor by which the magnitude of an effect might be overestimated (Type M [magnitude] error or exaggeration ratio) are estimated. We illustrate with examples from recent published research and discuss the largest challenge in a design calculation: coming up with reasonable estimates of plausible effect sizes based on external information."}
{"_id": "90168077ad534f0bb61f120d297bb40912160e4b", "title": "Commutation Torque Ripple Reduction of Brushless DC Motor in Braking Operation", "text": "In this paper, the commutation torque ripple is studied in the brushless DC motor (BLDCM) braking operation. As the phase currents flow in the reversed directions in the braking operation, the current circuits in the braking operation are different from those in the motor operation. For this reason, it is found that the commutation torque ripple can be reduced in the speed range from the rated speed to zero just by the common braking pulse width modulation strategies, where the three-phase modulation is not needed. However, it is impossible in motor operation. Under the common braking pulse width modulation strategies, the speed ranges where commutation torque ripple can be reduced are derived and compared. Based on the analysis of these speed ranges, a simple braking method is proposed. With this method, the BLDCM can be slowed down without commutation torque ripple in the whole braking process from the rated speed to zero. Experiment results verify the effectiveness of the method and the correctness of the theory."}
{"_id": "7c016cadca97a8cc25bd7e9442714453fc5a7035", "title": "Fast Screening Algorithm for Rotation and Scale Invariant Template Matching", "text": "This paper presents a generic pre-processor for expediting conventional template matching techniques. Instead of locating the best matched patch in the reference image to a query template via exhaustive search, the proposed algorithm rules out regions with no possible matches with minimum computational efforts. While working on simple patch features, such as mean, variance and gradient, the fast prescreening is highly discriminative. Its computational efficiency is gained by using a novel octagonal-star-shaped template and the inclusion-exclusion principle to extract and compare patch features. Moreover, it can handle arbitrary rotation and scaling of reference images effectively. Extensive experiments demonstrate that the proposed algorithm greatly reduces the search space while never missing the best match."}
{"_id": "a7228879cbca1d03e7400968a6ee9d893ce55fc8", "title": "Psychological Stressors : Overview", "text": "Psychological stressors are social and physical environmental circumstances that challenge the adaptive capabilities and resources of an organism. These circumstances represent an extremely wide and varied array of different situations that possess both common and specific psychological and physical attributes. The challenge for theory, research, and practice is to abstract and understand the specific qualities and characteristics of environmental exposures that most strongly elicit noxious psychological and biological responses, which in turn can lead to serious mental and physical health problems over the life course. In the present article, historical perspectives and conceptual considerations are addressed first, which provides the context for the subsequent discussion of key issues for defining and assessing psychological stressors. Susceptibility to psychological stressors is subject to individual differences, which can alter the impact and adverse consequences of such environmental exposures, necessitating a discussion of these moderating influences as well. HISTORICAL AND GENERAL CONSIDERATIONS"}
{"_id": "5844767f1bde155dc0f1515722724b24373c7e40", "title": "The Love of Large Numbers: A Popularity Bias in Consumer Choice.", "text": "Social learning-the ability to learn from observing the decisions of other people and the outcomes of those decisions-is fundamental to human evolutionary and cultural success. The Internet now provides social evidence on an unprecedented scale. However, properly utilizing this evidence requires a capacity for statistical inference. We examined how people's interpretation of online review scores is influenced by the numbers of reviews-a potential indicator both of an item's popularity and of the precision of the average review score. Our task was designed to pit statistical information against social information. We modeled the behavior of an \"intuitive statistician\" using empirical prior information from millions of reviews posted on Amazon.com and then compared the model's predictions with the behavior of experimental participants. Under certain conditions, people preferred a product with more reviews to one with fewer reviews even though the statistical model indicated that the latter was likely to be of higher quality than the former. Overall, participants' judgments suggested that they failed to make meaningful statistical inferences."}
{"_id": "82820695c67717f4b6c1d6ed4db5bbf25fe909da", "title": "PubViz: Lightweight Visual Presentation of Publication Data", "text": "Publications play a central role in presenting the outcome of scientific research but are typically presented as textual lists, whereas related work in visualization of publication focuses on exploration \u2013 not presentation. To bridge this gap, we conducted a design study of an interactive visual representation of publication data in a BibTeX file. This paper reports our domain and problem characterization as well as our visualization design decisions in light of our user-centered design process including interviews, two user studies with a paper prototype and a d3.js prototype, and practical application at our group\u2019s website."}
{"_id": "19fbe18e8da489b17ebb283ddc7e72af7c3ffd32", "title": "Challenges in Shared-Environment Human-Robot Collaboration", "text": "We present and discuss four important yet underserved research questions critical to the future of sharedenvironment human-robot collaboration. We begin with a brief survey of research surrounding individual components required for a complete collaborative robot control system, discussing the current state of the art in Learning from Demonstration, active learning, adaptive planning systems, and intention recognition. We motivate the exploration of the presented research questions by relating them to existing work and representative use cases from the domains of construction and cooking."}
{"_id": "690bc3fd6b30cd82b0e22b55a51f53e114935f09", "title": "Automatic semantic classification of scientific literature according to the hallmarks of cancer", "text": "MOTIVATION\nThe hallmarks of cancer have become highly influential in cancer research. They reduce the complexity of cancer into 10 principles (e.g. resisting cell death and sustaining proliferative signaling) that explain the biological capabilities acquired during the development of human tumors. Since new research depends crucially on existing knowledge, technology for semantic classification of scientific literature according to the hallmarks of cancer could greatly support literature review, knowledge discovery and applications in cancer research.\n\n\nRESULTS\nWe present the first step toward the development of such technology. We introduce a corpus of 1499 PubMed abstracts annotated according to the scientific evidence they provide for the 10 currently known hallmarks of cancer. We use this corpus to train a system that classifies PubMed literature according to the hallmarks. The system uses supervised machine learning and rich features largely based on biomedical text mining. We report good performance in both intrinsic and extrinsic evaluations, demonstrating both the accuracy of the methodology and its potential in supporting practical cancer research. We discuss how this approach could be developed and applied further in the future.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe corpus of hallmark-annotated PubMed abstracts and the software for classification are available at: http://www.cl.cam.ac.uk/\u223csb895/HoC.html.\n\n\nCONTACT\nsimon.baker@cl.cam.ac.uk."}
{"_id": "b971ffc651186315f8c4f05cd1baf71feff0e6f3", "title": "Grid-connected photovoltaic systems with energy storage", "text": "There are different interesting ways that can be followed in order to reduce costs of grid-connected photovoltaic systems, i.e., by maximizing their energy production in every operating conditions, minimizing electrical losses on the plant, utilizing grid-connected photovoltaic systems not only to generate electrical energy to be put into the power system but also to implement some other important auxiliary services normally implemented by properly conceived apparatus (electrical power quality control, load demand peak control, \u2026), making easy the integration of different kind of renewable electrical energy resources (wind plant, fuel-cells, \u2026)."}
{"_id": "7a6819770fa7b9af169c9deb3acbf62c0f5a76a7", "title": "Ontology-based personalized search and browsing", "text": "As the number of Internet users and the number of accessible Web pages grows, it is becoming increasingly difficult for users to find documents that are relevant to their particular needs. Users must either browse through a large hierarchy of concepts to find the information for which they are looking or submit a query to a publicly available search engine and wade through hundreds of results, most of them irrelevant. The core of the problem is that whether the user is browsing or searching, whether they are an eighth grade student or a Nobel prize winner, the identical information is selected and it is presented the same way. In this paper, we report on research that adapts information navigation based on a user profile structured as a weighted concept hierarchy. A user may create his or her own concept hierarchy and use them for browsing Web sites. Or, the user profile may be created from a reference ontology by \u2018watching over the user\u2019s shoulder\u2019 while they browse. We show that these automatically created profiles reflect the user\u2019s interests quite well and they are able to produce moderate improvements when applied to search results."}
{"_id": "44bcb4909237a85e0dc291c8c41f52b559cc6f29", "title": "Learning-to-Rank in Research Paper CBF recommendation: Leveraging Irrelevant Papers", "text": "Suggesting relevant literature to researchers has become an active area of study, typically relying on content-based filtering (CBF) over the rich textual features available. Given the high dimensionality and the sparsity of the training samples inherent to this domain, the focus has so far been on heuristic-based methods. In this paper, we argue for the model-based approach and propose a learning-to-rank method that leverages publicly available publications\u2019 metadata to produce an effective prediction model. The proposed method is systematically evaluated on a scholarly paper recommendation dataset and compared against state-of-the-art model-based approaches as well as current, domain-specific heuristic methods. The results show that our approach clearly outperforms state-of-the-art research paper recommendations utilizing only publicly available meta-data."}
{"_id": "1839df1501ecf044635f86a4a813b111d84f03e3", "title": "Upper lateral cartilage fold-in flap: a combined spreader and/or splay graft effect without cartilage grafts.", "text": "Dorsal hump reduction almost always breaks the internal nasal valve and nasal obstruction is likely to occur postoperatively, unless reconstructed. One hundred eighty patients were operated using both open and closed rhinoplasty approaches. Upper lateral cartilages were meticulously separated from their junction with septum. Following bony and septal cartilaginous hump removal, upper lateral cartilages were folded inward. Either transcartilaginous horizontal mattress/simple sutures or perichondrial sutures were used depending of the desired width of the middle vault and the necessity for a splay-graft effect. In 7 patients unilateral, and in 1 patient bilateral, nasal synechia occurred and they were all treated under local anesthesia. All patients but 9 stated significantly improved nasal breathing. There was no inverted-V deformity or middle-vault narrowing observed. This technique is simple and physiologic, might be applicable for almost all primary rhinoplasty patients. Although it is possible with closed rhinoplasty approaches, it is easier with an open approach."}
{"_id": "a52e3b144d65399b78492e4a5661cc36cc883aeb", "title": "Light warping for enhanced surface depiction", "text": "Recent research on the human visual system shows that our perception of object shape relies in part on compression and stretching of the reflected lighting environment onto its surface. We use this property to enhance the shape depiction of 3D objects by locally warping the environment lighting around main surface features. Contrary to previous work, which require specific illumination, material characteristics and/or stylization choices, our approach enhances surface shape without impairing the desired appearance.\n Thanks to our novel local shape descriptor, salient surface features are explicitly extracted in a view-dependent fashion at various scales without the need of any pre-process. We demonstrate our system on a variety of rendering settings, using object materials ranging from diffuse to glossy, to mirror or refractive, with direct or global illumination, and providing styles that range from photorealistic to non-photorealistic. The warping itself is very fast to compute on modern graphics hardware, enabling real-time performance in direct illumination scenarios.\n Note: Third-Party Material Attribution Third-party material used in ACM Transactions on Graphics 28(3), Article 25 - \"Light Warping for Enhanced Surface Depiction,\" by Vergne, Pacanowski, Barla, Granier, and Schlick - was used without proper attribution.\n The 3D model used in Figures 1, 3, and 5, as well as in the cover image of this volume of the journal, was downloaded from the Shape Repository of AIM@SHAPE Project (http://shapes.aimatshape.net) and is the property of CNR-IMATI.\n We regret this oversight."}
{"_id": "d4065b447872f7f77676adf51adf8abe6c8b3e5d", "title": "Fully Distributed Scrum: The Secret Sauce for Hyperproductive Offshored Development Teams", "text": "Scrum was designed to achieve a hyperproductive state where productivity increases 5-10 times over industry averages and many collocated teams have achieved this effect. The question for this paper is whether distributed, offshore teams can consistently achieve the hyperproductive state. In particular, can a team establish a localized velocity and then maintain or increase that velocity when distributing teams across continents. Since 2006, Xebia started projects with half Dutch and half Indian team members. After establishing localized hyperproductivity, they move the Indian members of the team to India and show increasing velocity with fully distributed teams. After running XP engineering practices inside many distributed Scrum projects, Xebia has systematically productized a model very similar to the SirsiDynix model (J. Sutherland, 2006) for high performance, distributed, offshore teams with outstanding quality."}
{"_id": "0551fa57a42ae0fff9c670dffecfc0c63e0ec89d", "title": "A* CCG Parsing with a Supertag-factored Model", "text": "We introduce a new CCG parsing model which is factored on lexical category assignments. Parsing is then simply a deterministic search for the most probable category sequence that supports a CCG derivation. The parser is extremely simple, with a tiny feature set, no POS tagger, and no statistical model of the derivation or dependencies. Formulating the model in this way allows a highly effective heuristic for A\u2217 parsing, which makes parsing extremely fast. Compared to the standard C&C CCG parser, our model is more accurate out-of-domain, is four times faster, has higher coverage, and is greatly simplified. We also show that using our parser improves the performance of a state-ofthe-art question answering system."}
{"_id": "05b06fd42e3b64a82f1da0576297a2977797a127", "title": "Joint A* CCG Parsing and Semantic Role Labelling", "text": "Joint models of syntactic and semantic parsing have the potential to improve performance on both tasks\u2014but to date, the best results have been achieved with pipelines. We introduce a joint model using CCG, which is motivated by the close link between CCG syntax and semantics. Semantic roles are recovered by labelling the deep dependency structures produced by the grammar. Furthermore, because CCG is lexicalized, we show it is possible to factor the parsing model over words and introduce a new A\u2217 parsing algorithm\u2014 which we demonstrate is faster and more accurate than adaptive supertagging. Our joint model is the first to substantially improve both syntactic and semantic accuracy over a comparable pipeline, and also achieves state-of-the-art results for a nonensemble semantic role labelling model."}
{"_id": "349e26b382c4089a11e8a6e49c371a4d126e85d9", "title": "LSTM CCG Parsing", "text": "We demonstrate that a state-of-the-art parser can be built using only a lexical tagging model and a deterministic grammar, with no explicit model of bi-lexical dependencies. Instead, all dependencies are implicitly encoded in an LSTM supertagger that assigns CCG lexical categories. The parser significantly outperforms all previously published CCG results, supports efficient and optimal A\u2217 decoding, and benefits substantially from semisupervised tri-training. We give a detailed analysis, demonstrating that the parser can recover long-range dependencies with high accuracy and that the semi-supervised learning enables significant accuracy gains. By running the LSTM on a GPU, we are able to parse over 2600 sentences per second while improving state-of-the-art accuracy by 1.1 F1 in domain and up to 4.5 F1 out of domain."}
{"_id": "67156902beca9bc90b728c8d5dd4ac9d8b27d3a3", "title": "Chainer : a Next-Generation Open Source Framework for Deep Learning", "text": "Software frameworks for neural networks play key roles in the development and application of deep learning methods. However, as new types of deep learning models are developed, existing frameworks designed for convolutional neural networks are becoming less useful. In this paper, we introduce Chainer, a Pythonbased, standalone open source framework for deep learning models. Chainer provides a flexible, intuitive, and high performance means of implementing a full range of deep learning models, including state-of-the-art models such as recurrent neural networks and variational autoencoders."}
{"_id": "8d8aa22affe4225ca51ec460463b8fb6b5f505a4", "title": "Multi-level Lasso for Sparse Multi-task Regression", "text": "We present a flexible formulation for variable selection in multi-task regression to allow for discrepancies in the estimated sparsity patterns accross the multiple tasks, while leveraging the common structure among them. Our approach is based on an intuitive decomposition of the regression coefficients into a product between a component that is common to all tasks and another component that captures task-specificity. This decomposition yields the Multi-level Lasso objective that can be solved efficiently via alternating optimization. The analysis of the \u201corthonormal design\u201d case reveals some interesting insights on the nature of the shrinkage performed by our method, compared to that of related work. Theoretical guarantees are provided on the consistency of Multi-level Lasso. Simulations and empirical study of micro-array data further demonstrate the value of our framework."}
{"_id": "e9c513fe159c2b243325f90b6e9bcdb5f8d75c22", "title": "How Much Is Enough? Choosing \u03b5 for Differential Privacy", "text": "Differential privacy is a recent notion, and while it is nice conceptually it has been difficult to apply in practice. The parameters of differential privacy have an intuitive theoretical interpretation, but the implications and impacts on the risk of disclosure in practice have not yet been studied, and choosing appropriate values for them is non-trivial. Although the privacy parameter in differential privacy is used to quantify the privacy risk posed by releasing statistics computed on sensitive data, is not an absolute measure of privacy but rather a relative measure. In effect, even for the same value of , the privacy guarantees enforced by differential privacy are different based on the domain of attribute in question and the query supported. We consider the probability of identifying any particular individual as being in the database, and demonstrate the challenge of setting the proper value of given the goal of protecting individuals in the database with some fixed probability."}
{"_id": "7599ed76c7b46754c49fb8d70895e10ff1207f49", "title": "Coupling Distributed and Symbolic Execution for Natural Language Queries", "text": "Building neural networks to query a knowledge base (a table) with natural language is an emerging research topic in deep learning. An executor for table querying typically requires multiple steps of execution because queries may have complicated structures. In previous studies, researchers have developed either fully distributed executors or symbolic executors for table querying. A distributed executor can be trained in an end-to-end fashion, but is weak in terms of execution efficiency and explicit interpretability. A symbolic executor is efficient in execution, but is very difficult to train especially at initial stages. In this paper, we propose to couple distributed and symbolic execution for natural language queries, where the symbolic executor is pretrained with the distributed executor\u2019s intermediate execution results in a step-by-step fashion. Experiments show that our approach significantly outperforms both distributed and symbolic executors, exhibiting high accuracy, high learning efficiency, high execution efficiency, and high interpretability."}
{"_id": "848ceabcc8b127a9df760c83b94859bbbc069fca", "title": "A High-Speed and Ultra Low-Power Subthreshold Signal Level Shifter", "text": "In this paper, we present a novel level shifter circuit converting subthreshold signal levels to super-threshold signal levels at high-speed using ultra low-power and a small silicon area, making it well-suited for low-power applications such as wireless sensor networks and implantable medical devices. The proposed circuit introduces a new voltage level shifter topology employing a level-shifting capacitor contributing to increase the range of conversion voltages, while significantly reducing the conversion delay. Such a level-shifting capacitor is quickly charged, whenever the input signal detects a low-to-high transition, in order to boost internal voltage nodes, and quickly reach a high output voltage level. The proposed circuit achieves a shorter propagation delay and a smaller silicon area for a given operating frequency and power consumption compared to other circuit solutions. Measurement results are presented for the proposed circuit fabricated in a 0.18- $\\mu \\text {m}$ TSMC technology. The proposed circuit can convert a wide range of the input voltages from 330 mV to 1.8 V, and operate over a frequency range of 100 Hz to 100 MHz. It has a propagation delay of 29 ns and a power consumption of 61.5 nW for input signals 0.4 V, at a frequency of 500-kHz, outperforming previous designs."}
{"_id": "2a94fa0de804b5efaae1a66f50c3ea96539c46b8", "title": "Developing Non-goal Dialog System Based on Examples of Drama Television", "text": "This paper presents a design and experiments of developing a non-goal dialog system by utilizing human-to-human conversation examples from drama television. The aim is to build a conversational agent that can interact with users in as natural a fashion as possible, while reducing the time requirement for database design and collection. A number of the challenging design issues we faced are described, including (1) filtering and constructing a dialog example database from the drama conversations, and (2) retrieving a proper system response by finding the best dialog example based on the current user query. Subjective evaluation from a small user study is also discussed."}
{"_id": "8cda6e3762ab22aa47a827e28a7cb3692cd99ce2", "title": "Asynchronous frameless event-based optical flow", "text": "This paper introduces a process to compute optical flow using an asynchronous event-based retina at high speed and low computational load. A new generation of artificial vision sensors has now started to rely on biologically inspired designs for light acquisition. Biological retinas, and their artificial counterparts, are totally asynchronous and data driven and rely on a paradigm of light acquisition radically different from most of the currently used frame-grabber technologies. This paper introduces a framework for processing visual data using asynchronous event-based acquisition, providing a method for the evaluation of optical flow. The paper shows that current limitations of optical flow computation can be overcome by using event-based visual acquisition, where high data sparseness and high temporal resolution permit the computation of optical flow with micro-second accuracy and at very low computational cost."}
{"_id": "1234ebb4e631f0be9a5260b3b775386203369aa5", "title": "Graphene plasmonics for terahertz to mid-infrared applications.", "text": "In recent years, we have seen a rapid progress in the field of graphene plasmonics, motivated by graphene's unique electrical and optical properties, tunability, long-lived collective excitation and its extreme light confinement. Here, we review the basic properties of graphene plasmons: their energy dispersion, localization and propagation, plasmon-phonon hybridization, lifetimes and damping pathways. The application space of graphene plasmonics lies in the technologically significant, but relatively unexploited terahertz to mid-infrared regime. We discuss emerging and potential applications, such as modulators, notch filters, polarizers, mid-infrared photodetectors, and mid-infrared vibrational spectroscopy, among many others."}
{"_id": "6301eb787862f559617b3c2b2953155631e6de05", "title": "Learning a Probabilistic Topology Discovering Model for Scene Categorization", "text": "A recent advance in scene categorization prefers a topological based modeling to capture the existence and relationships among different scene components. To that effect, local features are typically used to handle photographing variances such as occlusions and clutters. However, in many cases, the local features alone cannot well capture the scene semantics since they are extracted from tiny regions (e.g., 4 \u00d7 4 patches) within an image. In this paper, we mine a discriminative topology and a low-redundant topology from the local descriptors under a probabilistic perspective, which are further integrated into a boosting framework for scene categorization. In particular, by decomposing a scene image into basic components, a graphlet model is used to describe their spatial interactions. Accordingly, scene categorization is formulated as an intergraphlet matching problem. The above procedure is further accelerated by introducing a probabilistic based representative topology selection scheme that makes the pairwise graphlet comparison trackable despite their exponentially increasing volumes. The selected graphlets are highly discriminative and independent, characterizing the topological characteristics of scene images. A weak learner is subsequently trained for each topology, which are boosted together to jointly describe the scene image. In our experiment, the visualized graphlets demonstrate that the mined topological patterns are representative to scene categories, and our proposed method beats state-of-the-art models on five popular scene data sets."}
{"_id": "f1da8ff91712c0e733166db7e4c5de97b0dc698c", "title": "An open-label study to evaluate sildenafil for the treatment of lymphatic malformations.", "text": "BACKGROUND\nLymphatic malformations can be challenging to treat. Mainstay interventions including surgery and sclerotherapy are invasive and can result in local recurrence and complications.\n\n\nOBJECTIVE\nWe sought to assess the effect of 20 weeks of oral sildenafil on reducing lymphatic malformation volume and symptoms in children.\n\n\nMETHODS\nSeven children (4 boys, 3 girls; ages 13-85 months) with lymphatic malformations were given oral sildenafil for 20 weeks in this open-label study. The volume of the lymphatic malformation was calculated blindly using magnetic resonance imaging performed before and after 20 weeks of sildenafil. Lymphatic malformations were assessed clinically on weeks 4, 12, 20, and 32. Both the physician and parents evaluated the lymphatic malformation in comparison with baseline.\n\n\nRESULTS\nFour subjects had a lymphatic malformation volume decrease (1.0%-31.7%). In 2 subjects, despite a lymphatic malformation volume increase (1.1%-3.7%), clinical improvement was noted while on sildenafil. One subject had a 29.6% increase in lymphatic malformation volume and no therapeutic response. Lymphatic malformations of all 6 subjects who experienced a therapeutic response on sildenafil softened and became easily compressible. Adverse events were minimal.\n\n\nLIMITATIONS\nA randomized controlled trial will be necessary to verify the effects of sildenafil on lymphatic malformations.\n\n\nCONCLUSIONS\nSildenafil can reduce lymphatic malformation volume and symptoms in some children."}
{"_id": "3538a563adf1ec16d90c444542b7a7fd5cec748f", "title": "Data mining a diabetic data warehouse", "text": "Diabetes is a major health problem in the United States. There is a long history of diabetic registries and databases with systematically collected patient information. We examine one such diabetic data warehouse, showing a method of applying data mining techniques, and some of the data issues, analysis problems, and results. The diabetic data warehouse is from a large integrated health care system in the New Orleans area with 30,383 diabetic patients. Methods for translating a complex relational database with time series and sequencing information to a flat file suitable for data mining are challenging. We discuss two variables in detail, a comorbidity index and the HgbA1c, a measure of glycemic control related to outcomes. We used the classification tree approach in Classification and Regression Trees (CART) with a binary target variable of HgbA1c >9.5 and 10 predictors: age, sex, emergency department visits, office visits, comorbidity index, dyslipidemia, hypertension, cardiovascular disease, retinopathy, end-stage renal disease. Unexpectedly, the most important variable associated with bad glycemic control is younger age, not the comorbiditity index or whether patients have related diseases. If we want to target diabetics with bad HgbA1c values, the odds of finding them is 3.2 times as high in those <65 years of age than those older. Data mining can discover novel associations that are useful to clinicians and administrators [corrected]."}
{"_id": "f3d5c5f9ea59a2a71461985da3aca7f3d5044b4e", "title": "Exploring Game Space of Minimal Action Games via Parameter Tuning and Survival Analysis", "text": "Game designers can use computer-aided game design methods to model how players may experience the perceived difficulty of a game. We present methods to generate and analyze the difficulty of a wide variety of minimal action game variants throughout game space, where each point in this abstract design space represents a unique game variant. Focusing on a parameterized version of Flappy Bird, we predict hazard rates and difficulty curves using automatic playtesting, Monte Carlo simulation, a player model based on human motor skills (precision and actions per second), and survival analysis of score histograms. We demonstrate our techniques using simulated game play and actual game data from over 106 million play sessions of a popular online Flappy Bird variant, showing quantitative reasons why balancing a game for a wide range of player skill can be difficult. Some applications of our techniques include searching for a specific difficulty, game space visualization, computational creativity to find unique variants, and tuning game balance to adjust the difficulty curve even when game parameters are time varying, score dependent, or changing based on game progress."}
{"_id": "5b3d331fba47697a9d38412f8eea7b2f6bc3a320", "title": "Trainable approaches to surface natural language generation and their application to conversational dialog systems", "text": ""}
{"_id": "55b5c3b931f86a71aaa0be99eecdcea3b27078d4", "title": "Multi-accent deep neural network acoustic model with accent-specific top layer using the KLD-regularized model adaptation", "text": "We propose a multi-accent deep neural network acoustic model with an accent-specific top layer and shared bottom hidden layers. The accent-specific top layer is used to model the distinct accent specific patterns. The shared bottom hidden layers allow maximum knowledge sharing between the native and the accent models. This design is particularly attractive when considering deploying such a system to a live speech service due to its computational efficiency. We applied the KL-divergence (KLD) regularized model adaptation to train the accent-specific top layer. On the mobile short message dictation task (SMD), with 1K, 10K, and 100K British or Indian accent adaptation utterances, the proposed approach achieves 18.1%, 26.0%, and 28.5% or 16.1%, 25.4%, and 30.6% word error rate reduction (WERR) for the British and the Indian accent respectively against a baseline cross entropy (CE) model trained from 400 hour data. On the 100K utterance accent adaptation setup, comparable performance gain can be obtained against a baseline CE model trained with 2000 hour data. We observe smaller yet significant WER reduction on a baseline model trained using the MMI sequence-level criterion."}
{"_id": "94575d49df241b7931cd9b37298bdaa19c20f8f8", "title": "Vegetable price prediction using data mining classification technique", "text": "Each and every sector in this digital world is undergoing a dramatic change due to the influence of IT field. The agricultural sector needs more support for its development in developing countries like India. Price prediction helps the farmers and also Government to make effective decision. Based on the complexity of vegetable price prediction, making use of the characteristics of neural networks such as self-adapt, self-study and high fault tolerance, to build up the model of Back-propagation neural network to predict vegetable price. A prediction model was set up by applying the neural network. Taking tomato as an example, the parameters of the model are analyzed through experiment. At the end of the result of Back-propagation neural network shows absolute error percentage of monthly and weekly vegetable price prediction and analyze the accuracy percentage of the price prediction."}
{"_id": "a968d448aaf22821103267a19559dcdf6f5feeea", "title": "Image processing on multinode hadoop cluster", "text": "In the past few years the data produced all over the internet has increased with an exponential rate. The storage costs have been rising immensely. But, in the field of Computer Science, the introduction of new technologies has resulted in reduction of storage costs. This led to a rampant rise in the data generation rates. This huge amount of data that is so vast and complex such that classical methods are insufficient for processing is termed as \u2018Big Data\u2019. There are various tools in Hadoop to analyze the textual data such as Pig, HBase, etc. But the data present on the Internet as well as the Social Networking sites comprises of unstructured media. The maximum spectrum of the media files is covered by Image data. The major concern is not about the storage of images, but the processing of images being generated with the speed of light. Every day around 350 million pictures are being uploaded on social network. Until now, over 200 billion photos have been uploaded only on Facebook. This accounts to an average of around 200 photos per user. This whole amount of data generated round the globe can be classified into three formats-Structured, Semi-Structured, and Unstructured. The Image data contains not only the pictures but also the data defining those pictures such as the resolution, source of the image, capture device, etc. To fetch all this information in a structured format HIPI (Hadoop Image Processing Interface) tools are used. In this paper image processing is performed on a MultiNode Hadoop Cluster and its performance is measured."}
{"_id": "7d660fc6a2a45df97ae33d835e01a2585fa4d33f", "title": "Exploratory Analysis of the Factors Affecting Consumer Choice in E-Commerce: Conjoint Analysis", "text": "According to previous studies of online consumer behaviour, three factors are the most influential on purchasing behavior brand, colour and position of the product on the screen. However, a simultaneous influence of these three factors on the consumer decision making process has not been investigated previously. In this particular work we aim to execute a comprehensive study of the influence of these three factors. In order to answer our main research questions, we conducted an experiment with 96 different combinations of the three attributes, and using statistical analysis, such as conjoint analysis, t-test analysis and Kendall analysis we identified that the most influential factor to the online consumer decision making process is brand, the second most important attribute is the colour, which was estimated half as important as brand, and the least important attribute is the position on the screen. Additionally, we identified the main differences regarding consumers stated and revealed preferences regarding these three attributes."}
{"_id": "1f43a7e5e05f5fee3bdcf2c1c0aa6beb7cb94cca", "title": "Research guidelines for the Delphi survey technique.", "text": "Consensus methods such as the Delphi survey technique are being employed to help enhance effective decision-making in health and social care. The Delphi survey is a group facilitation technique, which is an iterative multistage process, designed to transform opinion into group consensus. It is a flexible approach, that is used commonly within the health and social sciences, yet little guidance exists to help researchers undertake this method of data collection. This paper aims to provide an understanding of the preparation, action steps and difficulties that are inherent within the Delphi. Used systematically and rigorously, the Delphi can contribute significantly to broadening knowledge within the nursing profession. However, careful thought must be given before using the method; there are key issues surrounding problem identification, researcher skills and data presentation that must be addressed. The paper does not claim to be definitive; it purports to act as a guide for those researchers who wish to exploit the Delphi methodology."}
{"_id": "de13552fa38655f98409c7684181a85e7c4452bc", "title": "NOMA in 5G Systems: Exciting Possibilities for Enhancing Spectral Efficiency", "text": "This article provides an overview of power-domain non-orthogonal multiple access for 5G systems. The basic concepts and benefits are briefly presented, along with current solutions and standardization activities. In addition, limitations and research challenges are discussed."}
{"_id": "210f996bf85fb47a2f70aec1b6a599e520ae2bc7", "title": "What Shape Are Dolphins? Building 3D Morphable Models from 2D Images", "text": "3D morphable models are low-dimensional parameterizations of 3D object classes which provide a powerful means of associating 3D geometry to 2D images. However, morphable models are currently generated from 3D scans, so for general object classes such as animals they are economically and practically infeasible. We show that, given a small amount of user interaction (little more than that required to build a conventional morphable model), there is enough information in a collection of 2D pictures of certain object classes to generate a full 3D morphable model, even in the absence of surface texture. The key restriction is that the object class should not be strongly articulated, and that a very rough rigid model should be provided as an initial estimate of the \u201cmean shape.\u201d The model representation is a linear combination of subdivision surfaces, which we fit to image silhouettes and any identifiable key points using a novel combined continuous-discrete optimization strategy. Results are demonstrated on several natural object classes, and show that models of rather high quality can be obtained from this limited information."}
{"_id": "4e676dfa09a48e4a8497f7577dab267481881a3e", "title": "A novel strong-motion seismic network for community participation in earthquake monitoring", "text": "The Quake-Catcher Network (QCN) is breaking new ground in seismology by combining new micro-electro-mechanical systems (MEMS) technology with volunteer seismic station distributed computing. Rather than distributing just computations, the QCN allows volunteers to participate in scientific data collection and computation. Using these innovative tools, QCN will increase the number of strong-motion observations for improved earthquake detection and analysis in California, and throughout the world. QCN's increased density of seismic measurements will revolutionize seismology. The QCN, in concert with current seismic networks, may soon provide advanced alerts when earthquakes occur, estimate the response of a building to earthquakes even before they happen, and generate a greater understanding of earthquakes for scientists and the general public alike. Details of how one can join the QCN are outlined. In addition, we have activities on our website that can be used in K-16 classrooms to teach students basic seismology and physics concepts."}
{"_id": "47500adb003887bb04d4c2585a81738324ff46ac", "title": "A summary of team MIT's approach to the virtual robotics challenge", "text": "The paper describes the system developed by researchers from MIT for the Defense Advanced Research Projects Agency's (DARPA) Virtual Robotics Challenge (VRC), held in June 2013. The VRC was the first competition in the DARPA Robotics Challenge (DRC), a program that aims to \u201cdevelop ground robotic capabilities to execute complex tasks in dangerous, degraded, human-engineered environments\u201d. The VRC required teams to guide a model of Boston Dynamics' humanoid robot, Atlas, through driving, walking, and manipulation tasks in simulation. Team MIT's user interface, the Viewer, provided the operator with a unified representation of all available information. A 3D rendering of the robot depicted its most recently estimated body state with respect to the surrounding environment, represented by point clouds and texture-mapped meshes as sensed by on-board LIDAR and fused over time."}
{"_id": "cedeeccea19b851cdfa3cd8ce753226c2bf55dd8", "title": "High Power Outphasing Modulation", "text": ""}
{"_id": "69a9af372d38b11567369a38e0d5452c3d5afe84", "title": "A Universal Denoising Framework With a New Impulse Detector and Nonlocal Means", "text": "Impulse noise detection is a critical issue when removing impulse noise and impulse/Gaussian mixed noise. In this paper, we propose a new detection mechanism for universal noise and a universal noise-filtering framework based on the nonlocal means (NL-means). The operation is carried out in two stages, i.e., detection followed by filtering. For detection, first, we propose the robust outlyingness ratio (ROR) for measuring how impulselike each pixel is, and then all the pixels are divided into four clusters according to the ROR values. Second, different decision rules are used to detect the impulse noise based on the absolute deviation to the median in each cluster. In order to make the detection results more accurate and more robust, the from-coarse-to-fine strategy and the iterative framework are used. In addition, the detection procedure consists of two stages, i.e., the coarse and fine detection stages. For filtering, the NL-means are extended to the impulse noise by introducing a reference image. Then, a universal denoising framework is proposed by combining the new detection mechanism with the NL-means (ROR-NLM). Finally, extensive simulation results show that the proposed noise detector is superior to most existing detectors, and the ROR-NLM produces excellent results and outperforms most existing filters for different noise models. Unlike most of the other impulse noise filters, the proposed ROR-NLM also achieves high peak signal-to-noise ratio and great image quality by efficiently removing impulse/Gaussian mixed noise."}
{"_id": "7cb0b71153232f74530232779b95faa82da572bc", "title": "Removal of High Density Salt and Pepper Noise Through Modified Decision Based Unsymmetric Trimmed Median Filter", "text": "A modified decision based unsymmetrical trimmed median filter algorithm for the restoration of gray scale, and color images that are highly corrupted by salt and pepper noise is proposed in this paper. The proposed algorithm replaces the noisy pixel by trimmed median value when other pixel values, 0's and 255's are present in the selected window and when all the pixel values are 0's and 255's then the noise pixel is replaced by mean value of all the elements present in the selected window. This proposed algorithm shows better results than the Standard Median Filter (MF), Decision Based Algorithm (DBA), Modified Decision Based Algorithm (MDBA), and Progressive Switched Median Filter (PSMF). The proposed algorithm is tested against different grayscale and color images and it gives better Peak Signal-to-Noise Ratio (PSNR) and Image Enhancement Factor (IEF)."}
{"_id": "92b820cfcee1a19acf343308113c55740e3d4396", "title": "Fast restoration of natural images corrupted by high-density impulse noise", "text": "In this paper, we suggest a general model for the fixed-valued impulse noise and propose a two-stage method for high density noise suppression while preserving the image details. In the first stage, we apply an iterative impulse detector, exploiting the image entropy, to identify the corrupted pixels and then employ an Adaptive Iterative Mean filter to restore them. The filter is adaptive in terms of the number of iterations, which is different for each noisy pixel, according to the Euclidean distance from the nearest uncorrupted pixel. Experimental results show that the proposed filter is fast and outperforms the best existing techniques in both objective and subjective performance measures."}
{"_id": "97537b301a4f8f9f8adefe3c001c84ccb8402c4a", "title": "An Efficient Edge-Preserving Algorithm for Removal of Salt-and-Pepper Noise", "text": "In this letter, a novel algorithm for removing salt-and-pepper noise from corrupted images is presented. We employ an efficient impulse noise detector to detect the noisy pixels, and an edge-preserving filter to reconstruct the intensity values of noisy pixels. Extensive experimental results demonstrate that our method can obtain better performances in terms of both subjective and objective evaluations than those state-of-the-art impulse denoising techniques. Especially, the proposed method can preserve edges very well while removing impulse noise. Since our algorithm is algorithmically simple, it is very suitable to be applied to many real-time applications."}
{"_id": "c1dbefb5a93d29b7ef665c2e086b01fc3ce2282a", "title": "Noise Adaptive Fuzzy Switching Median Filter for Salt-and-Pepper Noise Reduction", "text": "This letter presents a novel two-stage noise adaptive fuzzy switching median (NAFSM) filter for salt-and-pepper noise detection and removal. Initially, the detection stage will utilize the histogram of the corrupted image to identify noise pixels. These detected \u00bfnoise pixels\u00bf will then be subjected to the second stage of the filtering action, while \u00bfnoise-free pixels\u00bf are retained and left unprocessed. Then, the NAFSM filtering mechanism employs fuzzy reasoning to handle uncertainty present in the extracted local information as introduced by noise. Simulation results indicate that the NAFSM is able to outperform some of the salt-and-pepper noise filters existing in literature."}
{"_id": "83f5d7a73d92f8e41e5f9e9a3ef7ca9ef6b2878e", "title": "A Communication-Optimal Framework for Contracting Distributed Tensors", "text": "Tensor contractions are extremely compute intensive generalized matrix multiplication operations encountered in many computational science fields, such as quantum chemistry and nuclear physics. Unlike distributed matrix multiplication, which has been extensively studied, limited work has been done in understanding distributed tensor contractions. In this paper, we characterize distributed tensor contraction algorithms on torus networks. We develop a framework with three fundamental communication operators to generate communication-efficient contraction algorithms for arbitrary tensor contractions. We show that for a given amount of memory per processor, the framework is communication optimal for all tensor contractions. We demonstrate performance and scalability of the framework on up to 262,144 cores on a Blue Gene/Q supercomputer."}
{"_id": "f91ac398724dd122c2fa165ec1cb779005f9f829", "title": "\\pi Match: Monocular vSLAM and Piecewise Planar Reconstruction Using Fast Plane Correspondences", "text": "This paper proposes \u03c0Match, a monocular SLAM pipeline that, in contrast to current state-of-the-art feature-based methods, provides a dense Piecewise Planar Reconstruction (PPR) of the scene. It builds on recent advances in planar segmentation from affine correspondences (ACs) for generating motion hypotheses that are fed to a PEaRL framework which merges close motions and decides about multiple motion situations. Among the selected motions, the camera motion is identified and refined, allowing the subsequent refinement of the initial plane estimates. The high accuracy of this two-view approach allows a good scale estimation and a small drift in scale is observed, when compared to prior monocular methods. The final discrete optimization step provides an improved PPR of the scene. Experiments on the KITTI dataset show the accuracy of \u03c0Match and that it robustly handles situations of multiple motions and pure rotation of the camera. A Matlab implementation of the pipeline runs in about 0.7 s per frame."}
{"_id": "257bf46ec9ab97f4bd2d5597c956355915038786", "title": "$K$ -Band CMOS Differential and Quadrature Voltage-Controlled Oscillators for Low Phase-Noise and Low-Power Applications", "text": "In this paper, modified circuit topologies of a differential voltage-controlled oscillator (VCO) and a quadrature VCO (QVCO) in a standard bulk 90-nm CMOS process are presented for low dc power and low phase-noise applications. By utilizing current-reuse and transformer-feedback techniques, the proposed VCO and QVCO can be operated at reduced dc power consumption while maintaining extraordinary circuit performance in terms of low phase-noise and low amplitude/phase errors. The VCO circuit topology is investigated to obtain the design procedure. The VCO is further applied to the QVCO design with a bottom-series coupling technique. The coupling network between two differential VCOs and device size are properly designed based on our proposed design methodology to achieve low amplitude and phase errors. Moreover, the VCO and the QVCO are fully characterized with amplitude and phase errors via a four-port vector network analyzer. With a dc power of 3 mW, the VCO exhibits a frequency tuning range from 20.3 to 21.3 GHz, a phase noise of - 116.4 dBc/Hz at 1-MHz offset, a figure-of-merit (FOM) of -198 dBc/Hz, a phase error of 3.8\u00b0 , and an amplitude error of 0.9 dB. With a dc power of 6 mW, the QVCO demonstrates a phase noise of -117.4 dBc/Hz, a FOM of -195.6 dBc/Hz, a phase error of 4\u00b0 , and an amplitude error of 0.6 dB. The proposed VCO and QVCO can be compared with the previously reported state-of-the-art VCOs and QVCOs in silicon-based technologies."}
{"_id": "2d359c141fc93b385683c77de87b6b9814d93e12", "title": "Abstract syntax graphs for domain specific languages", "text": "This paper presents a representation for embedded domain specific languages (EDSLs) using abstract syntax graphs (ASGs). The purpose of this representation is to deal with the important problem of defining operations that require observing or preserving sharing and recursion in EDSLs in an expressive, yet easy-to-use way. In contrast to more conventional representations based on abstract syntax trees, ASGs represent sharing and recursion explicitly as binder constructs. We use a functional representation of ASGs based on structured graphs, where binders are encoded with parametric higher-order abstract syntax. We show how adapt to this representation to well-typed ASGs. This is especially useful for EDSLs, which often reuse the type system of the host language. We also show an alternative class-based encoding of(well-typed) ASGs that enables extensible and modular well-typed EDSLs while allowing the manipulation of sharing and recursion."}
{"_id": "1ea6511aa0e3b4f5fcc39f802076f11879c89778", "title": "An Investigation into the Pedagogical Features of Documents", "text": "Characterizing the content of a technical document in terms of its learning utility can be useful for applications related to education, such as generating reading lists from large collections of documents. We refer to this learning utility as the \u201cpedagogical value\u201d of the document to the learner.While pedagogical value is an important concept that has been studied extensively within the education domain, there has been little work exploring it from a computational, i.e., natural language processing (NLP), perspective. To allow a computational exploration of this concept, we introduce the notion of \u201cpedagogical roles\u201d of documents (e.g., Tutorial and Survey) as an intermediary component for the study of pedagogical value. Given the lack of available corpora for our exploration, we create the first annotated corpus of pedagogical roles and use it to test baseline techniques for automatic prediction of such roles."}
{"_id": "166f42f66c5e6dd959548acfb97dc77a36013639", "title": "Bilevel Model-Based Discriminative Dictionary Learning for Recognition", "text": "Most supervised dictionary learning methods optimize the combinations of reconstruction error, sparsity prior, and discriminative terms. Thus, the learnt dictionaries may not be optimal for recognition tasks. Also, the sparse codes learning models in the training and the testing phases are inconsistent. Besides, without utilizing the intrinsic data structure, many dictionary learning methods only employ the $\\ell _{0}$ or $\\ell _{1}$ norm to encode each datum independently, limiting the performance of the learnt dictionaries. We present a novel bilevel model-based discriminative dictionary learning method for recognition tasks. The upper level directly minimizes the classification error, while the lower level uses the sparsity term and the Laplacian term to characterize the intrinsic data structure. The lower level is subordinate to the upper level. Therefore, our model achieves an overall optimality for recognition in that the learnt dictionary is directly tailored for recognition. Moreover, the sparse codes learning models in the training and the testing phases can be the same. We further propose a novel method to solve our bilevel optimization problem. It first replaces the lower level with its Karush-Kuhn\u2013Tucker conditions and then applies the alternating direction method of multipliers to solve the equivalent problem. Extensive experiments demonstrate the effectiveness and robustness of our method."}
{"_id": "173f07b51e7f98c8b6689843d16138adda51ea06", "title": "Understanding Replication in Databases and Distributed Systems", "text": "Replication is an area of interest to both distributed systems and databases. The solutions developed from these two perspectives are conceptually similar but differ in many aspects: model, assumptions, mechanisms, guarantees provided, and implementation. In this paper, we provide an abstract and \u201cneutral\u201d framework to compare replication techniques from both communities in spite of the many subtle differences. The framework has been designed to emphasize the role played by different mechanisms and to facilitate comparisons. With this, it is possible to get a functional comparison of many ideas that is valuable for both didactic and practical purposes. The paper describes the replication techniques used in both communities, compares them, and points out ways in which they can be integrated to arrive to better, more robust replication protocols."}
{"_id": "d79908ee85d6865d03937a3873ec9040b2847cc5", "title": "The Ontogenesis of Social Dominance : A Strategy-Based Evolutionary Perspective", "text": "Social dominance results when members of a social group vary in their ability to acquire resources in the presence of others (i.e., compete). Traditional approaches to social dominance often emphasize coercive behavior, but nonetheless suggest that dominant individuals are socially central (e.g., watched, attractive social partners). These patterns, however, apply to humans only up to a certain age. This apparent discontinuity may give the false impression that social dominance is less relevant to human social organization than it is to animal social organization. This paper reintroduces the ethological concept of social dominance, but reinterprets it from a strategy-based perspective. That is, if social dominance is defined as differential ability to control resources\u2014without reference to how this is done\u2014then children evidently employ different strategies to compete with peers (e.g., coercive and prosocial). Furthermore, the type of strategy children employ and peers\u2019 responses to it depend largely on the ages of the children. By adopting a strategy-based approach to social dominance and explicitly incorporating developmental processes and uniquely human capacities, human social dominance patterns appear to be more similar to primate patterns than commonly believed. Implications for social competence, peer relationships, and the development of the self are discussed. \uf8e9 1999"}
{"_id": "5128f77564f70d861debf2941df54fc575283698", "title": "PaDNet: Pan-Density Crowd Counting", "text": "Crowd counting in varying density scenes is a challenging problem in artificial intelligence (AI) and pattern recognition. Recently, deep convolutional neural networks (CNNs) are used to tackle this problem. However, the single-column CNN cannot achieve high accuracy and robustness in diverse density scenes. Meanwhile, multi-column CNNs lack effective way to accurately learn the features of different scales for estimating crowd density. To address these issues, we propose a novel pan-density level deep learning model, named as Pan-Density Network (PaDNet). Specifically, the PaDNet learns multi-scale features by three steps. First, several sub-networks are pre-trained on crowd images with different density-levels. Then, a Scale Reinforcement Net (SRN) is utilized to reinforce the scale features. Finally, a Fusion Net fuses all of the scale features to generate the final density map. Experiments on four crowd counting benchmark datasets, the ShanghaiTech, the UCF CC 50, the UCSD, and the UCF-QRNF, indicate that the PaDNet achieves the best performance and has high robustness in pan-density crowd counting compared with other state-of-the-art algorithms."}
{"_id": "524c93976b10e02f3ce86c1ba52494856ccdf805", "title": "Offline handwritten signature verification \u2014 Literature review", "text": "The area of Handwritten Signature Verification has been broadly researched in the last decades, but remains an open research problem. The objective of signature verification systems is to discriminate if a given signature is genuine (produced by the claimed individual), or a forgery (produced by an impostor). This has demonstrated to be a challenging task, in particular in the offline (static) scenario, that uses images of scanned signatures, where the dynamic information about the signing process is not available. Many advancements have been proposed in the literature in the last 5\u201310 years, most notably the application of Deep Learning methods to learn feature representations from signature images. In this paper, we present how the problem has been handled in the past few decades, analyze the recent advancements in the field, and the potential directions for future research."}
{"_id": "36269ac9ef7b54328c53d40e78bb2611f049f80b", "title": "Improving MQTT by Inclusion of Usage Control", "text": "Due to the increasing pervasiveness of Internet of Things (IoT) and Internet of Everything (IoE) devices, securing both their communications and operations has become of capital importance. Among the several existing IoT protocols, Message Queue Telemetry Transport (MQTT) is a widely-used general purpose one, usable in both constrained and powerful devices, which coordinates data exchanges through a publish/subscribe approach. In this paper, we propose a methodology to increase the security of the MQTT protocol, by including Usage Control in its operative workflow. The inclusion of usage control enables a fine-grained dynamic control of the rights of subscribers to access data and data-streams over time, by monitoring mutable attributes related to the subscriber, the environment or data itself. We will present the architecture and workflow of MQTT enhanced through Usage Control, also presenting a real implementation on Raspberry Pi 3 for performance"}
{"_id": "67d8909613b14c08b2701336bcd6dc7c4b0f9e13", "title": "A three-phase ZVS PWM DC/DC converter with asymmetrical duty cycle associated with a three-phase version of the hybridge rectifier", "text": "This paper proposes the use of a three-phase version of the hybridge rectifier in the three-phase zero-voltage switch (ZVS) DC/DC converter with asymmetrical duty cycle. The use of this new rectifier improves the efficiency of the converter because only three diodes are responsible for the conduction losses in the secondary side. The current in the secondary side of the transformer is half the output current. In addition to this, all the advantages of the three-phase DC/DC converter, i.e., the increased frequency of the output and input currents, the improved distribution of the losses, as well as the soft commutation for a wide load range, are preserved. Therefore, the resulting topology is capable of achieving high efficiency and high power density at high power levels. The theoretical analysis, simulation, and experimental results obtained from a 6-kW prototype, and also a comparison of the efficiency of this converter with the full-bridge rectifier are presented."}
{"_id": "36c6fcc468d4cffdcadca84111ac743096b1e193", "title": "Towards a Deeper Understanding of Adversarial Losses", "text": "Recent work has proposed various adversarial losses for training generative adversarial networks. Yet, it remains unclear what certain types of functions are valid adversarial loss functions, and how these loss functions perform against one another. In this paper, we aim to gain a deeper understanding of adversarial losses by decoupling the effects of their component functions and regularization terms. We first derive some necessary and sufficient conditions of the component functions such that the adversarial loss is a divergence-like measure between the data and the model distributions. In order to systematically compare different adversarial losses, we then propose DANTest\u2014a new, simple framework based on discriminative adversarial networks. With this framework, we evaluate an extensive set of adversarial losses by combining different component functions and regularization approaches. This study leads to some new insights into the adversarial losses. For reproducibility, all source code is available at https: //github.com/salu133445/dan."}
{"_id": "92c4413c2a344f297f2eb6f96800bcc7de01ad37", "title": "Grammatical error correction in non-native English", "text": "Grammatical error correction (GEC) is the task of automatically correcting grammatical errors in written text. Previous research has mainly focussed on individual error types and current commercial proofreading tools only target limited error types. As sentences produced by learners may contain multiple errors of different types, a practical error correction system should be able to detect and correct all errors. In this thesis, we investigate GEC for learners of English as a Second Language (ESL). Specifically, we treat GEC as a translation task from incorrect into correct English, explore new models for developing end-to-end GEC systems for all error types, study system performance for each error type, and examine model generali-sation to different corpora. First, we apply Statistical Machine Translation (SMT) to GEC and prove that it can form the basis of a competitive all-errors GEC system. We implement an SMT-based GEC system which contributes to our winning system submitted to a shared task in 2014. Next, we propose a ranking model to re-rank correction candidates generated by an SMT-based GEC system. This model introduces new linguistic information and we show that it improves correction quality. Finally, we present the first study using Neural Machine Translation (NMT) for GEC. We demonstrate that NMT can be successfully applied to GEC and help capture new errors missed by an SMT-based GEC system. While we focus on GEC for English, our methods presented in this thesis can be easily applied to any language. Acknowledgements First and foremost, I owe a huge debt of gratitude to my supervisor, Ted Briscoe, who has patiently guided me through my PhD and always been very helpful, understanding and supportive. I cannot thank him enough for providing me with opportunities that helped me grow as a researcher and a critical thinker. I am immensely grateful to my examiners, Paula Buttery and Stephen Pulman, for their thorough reading of my thesis, their valuable comments and an enjoyable viva. My appreciation extends to my fellow members of the Natural Language and Information Processing research group, with whom I have always enjoyed discussing our work and other random things. My gratitude goes to Stephen Clark and Ann Copestake for giving me early feedback on my work as well as Christopher Bryant for generously reading my thesis draft. I would especially like to thank Mariano Felice for being not just a great colleague but also a dear friend. A special mention has \u2026"}
{"_id": "1b810e914bba15b20a1c3dbe344e9a9eb4828436", "title": "Enhancing DataQuality in DataWarehouse Environments", "text": "D ecisions by senior management lay the groundwork for lower corporate levels to develop policies and procedures for various corporate activities. However, the potential business contribution of these activities depends on the quality of the decisions and in turn on the quality of the data used to make them. Some inputs are judgmental, others are from transactional systems, and still others are from external sources, but all must have a level of quality appropriate for the decisions they will be part of. Although concern about the quality of one\u2019s data is not new, what is fairly recent is using the same data for multiple purposes, which can be quite different from their original purposes. Users working with a particular data set come to know and internalize its deficiencies and idiosyncrasies. This knowledge is lost when data is made available to other parties, like when data needed for decision making is collected in repositories called data warehouses. Here, we offer a conceptual framework for enhancing data quality in data warehouse environments. We explore the factors that should be considered, such as the current level of data quality, the levels of quality needed by the relevant decision processes, and the potential benefits of projects designed to enhance data quality. Those responsible for data quality have to understand the importance of such factors, as well as the interaction among them. This understanding is mandatory in data warehousing environments characterized by multiple users with differing needs for data quality. For warehouses supporting a limited number of decision processes, awareness of these issues coupled with good judgment should suffice. For more complex situations, however, the number and diversity of trade-offs make reliance on judgment alone problematic. For such situations, we offer a methodology that Nothing is more likely to undermine the performance and business value of a data warehouse than inappropriate, misunderstood, or ignored data quality. Enhancing DataQuality in DataWarehouse Environments"}
{"_id": "7b4ee2287d47f58b876c71efa2f49b27766b774b", "title": "Normative data stratified by age and education for two measures of verbal fluency: FAS and animal naming.", "text": "Normative data stratified by three levels of age (16-59, 60-79, and 80-95 years) and three levels of education (0-8, 9-12, and 13-21 years) are presented for phonemic verbal fluency (FAS) and categorical verbal fluency (Animal Naming). The normative sample, aged 16 to 95 years, consisted of 1,300 cognitively intact individuals who resided in the community. Years of education ranged from 0 to 21. The total number of words in 1 minute for each of the letters F, A, and S was correlated r =.52 with the number of animal names generated in 1 minute. Regression analyses showed that FAS was more sensitive to the effects of education (18.6% of the variance) than age (11.0% of the variance). The opposite relationship occurred for Animal Naming, where age accounted for 23.4% of the variance and education accounted only for 13.6%. Gender accounted for less than 1% of variance for FAS and Animal Naming. The clinical utility of these norms is discussed."}
{"_id": "7db09ea901b308ae8dc41b3d5e916bd41f46f385", "title": "Impact of carboplatin plus paclitaxel combined with endostar against A375 melanoma cells: An in vitro and in vivo analysis.", "text": "PURPOSE\nThe aim of the present study was to investigate the efficacy of carboplatin plus paclitaxel (CP) combined with endostar against A375 melanoma cells in vitro and in vivo.\n\n\nMETHODS\nEffects on the cell viability and apoptosis induction were estimated with the Cell counting Kit-8 assay and Annexin V-FITC/Propidium Iodide staining. Fifty female BALB/c-nude mice with subcutaneous injection of A375 cells were randomized to be treated with normal saline, dacarbazine alone, dacarbazine plus endostar, carboplatin plus paclitaxel, and CP plus endostar. Tumor volume of mice was monitored after injection and survival time was adopted for survival analysis.\n\n\nRESULTS\nCP plus endostar significantly decreased the cell survival rate compared with CP (P<0.01). Combination of CP and endostar showed higher cytotoxicity to A375 cells in vitro than endostar plus dacarbazine (P<0.01). The percentage of apoptotic cells in A375 cells treated with CP plus endostar was appreciably higher when compared to CP group (P<0.05). The mean relative tumor size in CP group was definitely larger (p<0.05) than CP plus endostar group. In addition, the mean survival time in CP plus endostar group was notably elevated compared with the CP group (P<0.05).\n\n\nCONCLUSIONS\nOur data indicated that treatment with CP plus endostar significantly reduced cell growth and induced a high rate of apoptotic cells in the A375 melanoma cell line. CP and endostar exhibited synergistic anti-tumor activities in A375 melanoma cells in vitro. CP plus endostar suppressed the growth of xenograft tumors and prolonged the survival time of mice with xenograft tumors. Combination of CP and endostar may be a promising treatment for melanoma."}
{"_id": "555506a4ed9c3b5e96e9ccd5c05fbe127323932c", "title": "Heritability for Alzheimer's disease: the study of dementia in Swedish twins.", "text": "BACKGROUND\nAlzheimer's disease has been thought to have familial and sporadic forms, and several genetic defects have been identified that chiefly explain early-onset familial cases. In this study, our purpose was to detect all cases of dementia in an established twin registry and to estimate total extent of genetic contribution to liability to Alzheimer's disease.\n\n\nMETHODS\nAt the first stage, members of the registry were screened for dementia, using in-person or telephone mental status testing. At the second stage, those who screened positively and their partners were referred for clinical work-ups, including neuropsychological assessment, physician examination, laboratory tests, and neuroimaging. Clinical diagnoses were assigned at a multidisciplinary consensus conference. Probandwise concordance rates were examined by zygosity, and structural modeling was applied to the data to estimate genetic and environmental influences, using both single- and multiple-threshold models.\n\n\nRESULTS\nSixty-five pairs were identified in which one or both was demented. The probandwise concordance rate for Alzheimer's disease among monozygotic pairs was 67%; the corresponding figure for dizygotic pairs was 22%. Heritability of liability to Alzheimer's disease was estimated to be .74; to any dementia, .43. The other variance is attributable to environmental influences.\n\n\nCONCLUSIONS\nFindings indicate a substantial genetic effect for these predominantly late-onset Alzheimer's disease cases. At the same time, structural modeling results and large intra-pair differences in age of onset suggest that environmental factors are also important in determining whether and when an individual may develop dementia."}
{"_id": "e00582ebcaf603b8fefc595a8e9424bf06cb9506", "title": "SANE: System for Fine Grained Named Entity Typing on Textual Data", "text": "Assignment of fine-grained types to named entities is gaining popularity as one of the major Information Extraction tasks due to its applications in several areas of Natural Language Processing. Existing systems use huge knowledge bases to improve the accuracy of the fine-grained types. We designed and developed SANE, a system that uses Wikipedia categories to fine grain the type of the named entities recognized in the textual data. The main contribution of this work is building a named entity typing system without the use of knowledge bases. Through our experiments, 1) we establish the usefulness of Wikipedia categories to Named Entity Typing and 2) we show that SANE\u2019s performance is on par with the state-ofthe-art."}
{"_id": "766cd91c0d8650495529cab7d4eeed482729cf89", "title": "Algorithm 799: revolve: an implementation of checkpointing for the reverse or adjoint mode of computational differentiation", "text": "In its basic form, the reverse mode of computational differentiation yields the gradient of a scalar-valued function at a cost that is a small multiple of the computational work needed to evaluate the function itself. However, the corresponding memory requirement is proportional to the run-time of the evaluation program. Therefore, the practical applicability of the reverse mode in its original formulation is limited despite the availability of ever larger memory systems. This observation leads to the development of checkpointing schedules to reduce the storage requirements. This article presents the function revolve, which generates checkpointing schedules that are provably optimal with regard to a primary and a secondary criterion. This routine is intended to be used as an explicit \u201ccontroller\u201d for running a time-dependent applications program."}
{"_id": "e931f4444f634695bfab5a6e57c817da52fc512b", "title": "Accumulation Bit-Width Scaling For Ultra-Low Precision Training Of Deep Networks", "text": "Efforts to reduce the numerical precision of computations in deep learning training have yielded systems that aggressively quantize weights and activations, yet employ wide high-precision accumulators for partial sums in inner-product operations to preserve the quality of convergence. The absence of any framework to analyze the precision requirements of partial sum accumulations results in conservative design choices. This imposes an upper-bound on the reduction of complexity of multiply-accumulate units. We present a statistical approach to analyze the impact of reduced accumulation precision on deep learning training. Observing that a bad choice for accumulation precision results in loss of information that manifests itself as a reduction in variance in an ensemble of partial sums, we derive a set of equations that relate this variance to the length of accumulation and the minimum number of bits needed for accumulation. We apply our analysis to three benchmark networks: CIFAR-10 ResNet 32, ImageNet ResNet 18 and ImageNet AlexNet. In each case, with accumulation precision set in accordance with our proposed equations, the networks successfully converge to the single precision floating-point baseline. We also show that reducing accumulation precision further degrades the quality of the trained network, proving that our equations produce tight bounds. Overall this analysis enables precise tailoring of computation hardware to the application, yielding areaand power-optimal systems."}
{"_id": "434ef791036c7bcec81b6d09900f93a1ebc99e51", "title": "Sorry, I only speak natural language: a pattern-based, data-driven and guided approach to mapping natural language queries to SPARQL", "text": "We present a new interface based on natural language to support users in specifying their queries with respect to RDF datasets. The approach relies on a number of predefined patterns that uniquely determine a type of SPARQL query. The approach is incremental and assisted in that it guides a user step by step in specifying a query by incrementally parsing the input and providing suggestions for completion at every stage. The methodology to specify the patterns is informed by empirical distributions of SPARQL query types as found in standard query logs. So far, we have implemented the approach using two patterns only as proof-of-concept. The coverage of the pattern library will be extended in the future. We will also provide an evaluation of the approach on the well-known QALD dataset."}
{"_id": "38f89161e9e788b998d28a02f3d0b442453402d5", "title": "Recent and Emerging Topics in Wireless Industrial Communications: A Selection", "text": "In this paper we discuss a selection of promising and interesting research areas in the design of protocols and systems for wireless industrial communications. We have selected topics that have either emerged as hot topics in the industrial communications community in the last few years (like wireless sensor networks), or which could be worthwhile research topics in the next few years (for example cooperative diversity techniques for error control, cognitive radio/opportunistic spectrum access for mitigation of external interferences)."}
{"_id": "9a1112c04d0721acca0b6d7d7b9d1ff67545c6b7", "title": "Nano-Power Wireless Wake-Up Receiver With Serial Peripheral Interface", "text": "We designed, implemented, tested and measured an ultra low power Wake Up Receiver (WUR), intended for use in Wireless Body Area Networks (WBAN). Gaussian On-Off Keying (GOOK) and Pulse Width Modulation (PWM) are used to modulate and encode, respectively, the preamble signal. The receiver incorporates a decoder to enable Serial Peripheral Interface (SPI). WUR was also comprehensively tested for power consumption and robustness to RF interference from wireless devices commonly found in the vicinity of persons utilising WBAN technology. Our results and comparative evaluation demonstrate that the achieved listening power of 270nW for the Wake Up Receiver is significantly lower power consumption than for the other state-of-the-art. The proposed preamble detection scheme can significantly reduce false wake ups due to other wireless devices in a WBAN. Additionally, SPI significantly reduces the overall power consumption for packet reception and decoding."}
{"_id": "f96e93e705ebe287d173857d7469b43e5a1d55dc", "title": "Performance Evaluation and Comparative Analysis of SubCarrier Modulation Wake-up Radio Systems for Energy-Efficient Wireless Sensor Networks", "text": "Energy-efficient communication is one of the main concerns of wireless sensor networks nowadays. A commonly employed approach for achieving energy efficiency has been the use of duty-cycled operation of the radio, where the node's transceiver is turned off and on regularly, listening to the radio channel for possible incoming communication during its on-state. Nonetheless, such a paradigm performs poorly for scenarios of low or bursty traffic because of unnecessary activations of the radio transceiver. As an alternative technology, Wake-up Radio (WuR) systems present a promising energy-efficient network operation, where target devices are only activated in an on-demand fashion by means of a special radio signal and a WuR receiver. In this paper, we analyze a novel wake-up radio approach that integrates both data communication and wake-up functionalities into one platform, providing a reconfigurable radio operation. Through physical experiments, we characterize the delay, current consumption and overall operational range performance of this approach under different transmit power levels. We also present an actual single-hop WuR application scenario, as well as demonstrate the first true multi-hop capabilities of a WuR platform and simulate its performance in a multi-hop scenario. Finally, by thorough qualitative comparisons to the most relevant WuR proposals in the literature, we state that the proposed WuR system stands out as a strong candidate for any application requiring energy-efficient wireless sensor node communications."}
{"_id": "5053d80a916aa6be5d1f2253a5f420954da7a3e4", "title": "The many faces of publish/subscribe", "text": "Well adapted to the loosely coupled nature of distributed interaction in large-scale applications, the publish/subscribe communication paradigm has recently received increasing attention. With systems based on the publish/subscribe interaction scheme, subscribers register their interest in an event, or a pattern of events, and are subsequently asynchronously notified of events generated by publishers. Many variants of the paradigm have recently been proposed, each variant being specifically adapted to some given application or network model. This paper factors out the common denominator underlying these variants: full decoupling of the communicating entities in time, space, and synchronization. We use these three decoupling dimensions to better identify commonalities and divergences with traditional interaction paradigms. The many variations on the theme of publish/subscribe are classified and synthesized. In particular, their respective benefits and shortcomings are discussed both in terms of interfaces and implementations."}
{"_id": "5189c574f75610b11b3281ed03e07debb789279d", "title": "A 37.5 /spl mu/W Body Channel Communication Wake-Up Receiver With Injection-Locking Ring Oscillator for Wireless Body Area Network", "text": "An ultra-low power wake-up receiver for body channel communication (BCC) is implemented in 130 nm CMOS process. The proposed wake-up receiver uses the injection-locking ring oscillator (ILRO) to replace the RF amplifier with low power consumption. Through the ILRO, the frequency modulated input signal is amplified to the full swing rectangular signal directly demodulated by the following low power PLL-based FSK demodulator. In addition, the energy-efficient BCC link mitigates the sensitivity and selectivity requirements for the receiver, which significantly reduces the power consumption. Furthermore, the auto frequency calibrator (AFC) is adopted to compensate the free running frequency of the ring oscillator which is caused by temperature variation and leakage current. The AFC reuses the PLL-based demodulator to periodically set the free running frequency to the desired frequency without any area overhead. As a result, the proposed wake-up receiver achieves a sensitivity of -62.7 dBm at a data rate of 200 kbps while consuming only 37.5 \u03bcW from the 0.7 V supply."}
{"_id": "817f38e6e9844d4513047916fe88561a758846e7", "title": "2D-to-3D image conversion by learning depth from examples", "text": "Among 2D-to-3D image conversion methods, those involving human operators have been most successful but also time-consuming and costly. Automatic methods, that typically make use of a deterministic 3D scene model, have not yet achieved the same level of quality as they often rely on assumptions that are easily violated in practice. In this paper, we adopt the radically different approach of \u201clearning\u201d the 3D scene structure. We develop a simplified and computationally-efficient version of our recent 2D-to-3D image conversion algorithm. Given a repository of 3D images, either as stereopairs or image+depth pairs, we find k pairs whose photometric content most closely matches that of a 2D query to be converted. Then, we fuse the k corresponding depth fields and align the fused depth with the 2D query. Unlike in our original work, we validate the simplified algorithm quantitatively on a Kinect-captured image+depth dataset against the Make3D algorithm. While far from perfect, the presented results demonstrate that online repositories of 3D content can be used for effective 2D-to-3D image conversion."}
{"_id": "5d06f630188a5ec9c05c4961eddbf9f24e2e6916", "title": "User-Based Collaborative-Filtering Recommendation Algorithms on Hadoop", "text": "Collaborative Filtering(CF) algorithms are widely used in a lot of recommender systems, however, the computational complexity of CF is high thus hinder their use in large scale systems. In this paper, we implement user-based CF algorithm on a cloud computing platform, namely Hadoop, to solve the scalability problem of CF. Experimental results show that a simple method that partition users into groups according to two basic principles, i.e., tidy arrangement of mapper number to overcome the initiation of mapper and partition task equally such that all processors finish task at the same time, can achieve linear speedup."}
{"_id": "08e98c87a0b08a4150df7659e27e8381e84a2519", "title": "Corn Disease Identification from Leaf Images Using Convolutional Neural Networks", "text": "Plant diseases are one of the major factors that can negatively impact crop yields. Therefore, we want to create an automated system that can identify plant diseases as easily as possible. Our research goal is to help crop growers who might not be experts in plant pathology be able to identify diseases as soon as they occur in order to minimize the spread of disease and the damage to the crops. We propose a very light-weight convolutional neural network architecture that is trained to classify images of corn leaves into one of the four categories: common rust disease, northern leaf blight disease, gray leaf spot disease, or healthy. The dataset used is the PlantVillage dataset taken from plantvillage.org. The dataset consists of 8506 images of corn leaves from 4 categories. We evaluated our model using 5-fold cross-validation, and we achieves an accuracy of 97.09 % on average. The result of our model is slightly worse than the performance of other well-known CNNs architectures, but the complexity of our architecture is much lower, making it more suitable for real-time inferences on resource-constrained devices such as smartphones."}
{"_id": "55b1250d48541ee0a6e7d907df3fcde9c118b1c2", "title": "Argument discovery via crowdsourcing", "text": "The amount of controversial issues being discussed on the Web has been growing dramatically. In articles, blogs, and wikis, people express their points of view in the form of arguments, i.e., claims that are supported by evidence. Discovery of arguments has a large potential for informing decision-making. However, argument discovery is hindered by the sheer amount of available Web data and its unstructured, free-text representation. The former calls for automatic text-mining approaches, whereas the latter implies a need for manual processing to extract the structure of arguments. In this paper, we propose a crowdsourcing-based approach to build a corpus of arguments, an argumentation base, thereby mediating the trade-off of automatic text-mining and manual processing in argument discovery. We develop an end-to-end process that minimizes the crowd cost while maximizing the quality of crowd answers by: (1) ranking argumentative texts, (2) pro-actively eliciting user input to extract arguments from these texts, and (3) aggregating heterogeneous crowd answers. Our experiments with real-world datasets highlight that our method discovers virtually all arguments in documents when processing only 25% of the text with more than 80% precision, using only 50% of the budget consumed by a baseline algorithm."}
{"_id": "952c3730bc66846de19500319bea1b8b40afc2d3", "title": "Comparative study of automatic speech recognition techniques", "text": "Over the past decades, extensive research has been carried out on various possible implementations of automatic speech recognition (ASR) systems. The most renowned algorithms in the field of ASR are the mel-frequency cepstral coefficients and the hidden Markov models. However, there are also other methods, such as wavelet-based transforms, artificial neural networks and support vector machines, which are becoming more popular. This review article presents a comparative study on different approaches that were proposed for the task of ASR, and which are widely used nowadays."}
{"_id": "5f49f9a07a5e158feb13b5d525e94ca119e2e593", "title": "Computational approaches to motor control", "text": "This review will focus on four areas of motor control which have recently been enriched both by neural network and control system models: motor planning, motor prediction, state estimation and motor learning. We will review the computational foundations of each of these concepts and present specific models which have been tested by psychophysical experiments. We will cover the topics of optimal control for motor planning, forward models for motor prediction, observer models of state estimation arid modular decomposition in motor learning. The aim of this review is to demonstrate how computational approaches, as well as proposing specific models, provide a theoretical framework to formalize the issues in motor control."}
{"_id": "545a78b909e22f6a26b06b5b430210e8dae291bb", "title": "Self-supervised regrasping using spatio-temporal tactile features and reinforcement learning", "text": "We introduce a framework for learning regrasping behaviors based on tactile data. First, we present a grasp stability predictor that uses spatio-temporal tactile features collected from the early-object-lifting phase to predict the grasp outcome with a high accuracy. Next, the trained predictor is used to supervise and provide feedback to a reinforcement learning algorithm that learns the required grasp adjustments based on tactile feedback. Our results gathered over more than 50 hours of real robot experiments indicate that the robot is able to predict the grasp outcome with 93% accuracy. In addition, the robot is able to improve the grasp success rate from 42% when randomly grasping an object to up to 97% when allowed to regrasp the object in case of a predicted failure."}
{"_id": "2609626519d8fa0ccc53bce49a3a21b928deeca6", "title": "Cross-Modal Multistep Fusion Network With Co-Attention for Visual Question Answering", "text": "Visual question answering (VQA) is receiving increasing attention from researchers in both the computer vision and natural language processing fields. There are two key components in the VQA task: feature extraction and multi-modal fusion. For feature extraction, we introduce a novel co-attention scheme by combining Sentence-guide Word Attention (SWA) and Question-guide Image Attention in a unified framework. To be specific, the textual attention SWA relies on the semantics of the whole question sentence to calculate contributions of different question words for text representation. For the multi-modal fusion, we propose a \u201cCross-modal Multistep Fusion (CMF)\u201d network to generate multistep features and achieve multiple interactions for two modalities, rather than focusing on modeling complex interactions between two modals like most current feature fusion methods. To avoid the linear increase of the computational cost, we share the parameters for each step in the CMF. Extensive experiments demonstrate that the proposed method can achieve competitive or better performance than the state of the art."}
{"_id": "63a35926f92ae1ba946b5c319de3428d8df2109f", "title": "Frontal cortex function as derived from hierarchical predictive coding", "text": "The frontal lobes are essential for human volition and goal-directed behavior, yet their function remains unclear. While various models have highlighted working memory, reinforcement learning, and cognitive control as key functions, a single framework for interpreting the range of effects observed in prefrontal cortex has yet to emerge. Here we show that a simple computational motif based on predictive coding can be stacked hierarchically to learn and perform arbitrarily complex goal-directed behavior. The resulting Hierarchical Error Representation (HER) model simulates a wide array of findings from fMRI, ERP, single-units, and neuropsychological studies of both lateral and medial prefrontal cortex. By reconceptualizing lateral prefrontal activity as anticipating prediction errors, the HER model provides a novel unifying account of prefrontal cortex function with broad implications for understanding the frontal cortex across multiple levels of description, from the level of single neurons to behavior."}
{"_id": "afdb1b1571564e4c7e18cbee8f4f00f3b33348a1", "title": "Comparison of Collision Avoidance Systems and Applicability to Rail Transport", "text": "The paper presents an overview of the state of the art in collision avoidance related with transportation systems like the automatic identification system (AIS) for maritime transportation, traffic alert and collision avoidance system/automatic dependent surveillance-broadcast (TCAS/ADS-B) for aircraft, and the car-2-car communication system (C2C) for road transportation. The examined systems rely on position detection and direct communication among vehicles. Alike a collision avoidance system for railway transportation \"RCAS\" is introduced. Focussing on the communication aspects, possible applicability of the examined state of the art systems to RCAS is studied. The analysis are performed at different communication system layers, namely application (APP) layer, media access control (MAC) layer and physical layer (PHY), which are the most relevant for a single hop network broadcast system as favorized in RCAS. Since multihop and addressed communication are not foreseen in a first RCAS approach, the network layer is not taken into account."}
{"_id": "5b4e0add3d0829f94f03682c63cb64863ddf8711", "title": "MICROCONTROLLER BASED POWER INVERTER", "text": "1 CHAPTER ONE: BACKGROUND STUDY 2 1."}
{"_id": "00364f6e3734a71a519126968b18d0682f380358", "title": "KYPO Cyber Range: Design and Use Cases", "text": "The physical and cyber worlds are increasingly intertwined and exposed to cyber attacks. The KYPO cyber range provides complex cyber systems and networks in a virtualized, fully controlled and monitored environment. Time-efficient and cost-effective deployment is feasible using cloud resources instead of a dedicated hardware infrastructure. This paper describes the design decisions made during it\u2019s development. We prepared a set of use cases to evaluate the proposed design decisions and to demonstrate the key features of the KYPO cyber range. It was especially cyber training sessions and exercises with hundreds of participants which provided invaluable feedback for KYPO platform development."}
{"_id": "57b2fa4d2f2bca840415b34fa17ebf2253b21d9f", "title": "Information systems interoperability: What lies beneath?", "text": "Interoperability is the most critical issue facing businesses that need to access information from multiple information systems. Our objective in this research is to develop a comprehensive framework and methodology to facilitate semantic interoperability among distributed and heterogeneous information systems. A comprehensive framework for managing various semantic conflicts is proposed. Our proposed framework provides a unified view of the underlying representational and reasoning formalism for the semantic mediation process. This framework is then used as a basis for automating the detection and resolution of semantic conflicts among heterogeneous information sources. We define several types of semantic mediators to achieve semantic interoperability. A domain-independent ontology is used to capture various semantic conflicts. A mediation-based query processing technique is developed to provide uniform and integrated access to the multiple heterogeneous databases. A usable prototype is implemented as a proof-of-concept for this work. Finally, the usefulness of our approach is evaluated using three cases in different application domains. Various heterogeneous datasets are used during the evaluation phase. The results of the evaluation suggest that correct identification and construction of both schema and ontology-schema mapping knowledge play very important roles in achieving interoperability at both the data and schema levels."}
{"_id": "1087108b598bb447654144849673b750b237c73f", "title": "Feature Selection for SVMs", "text": "We introduce a method of feature selection for Support Vector Machines. The method is based upon finding those features which minimize bounds on the leave-one-out error. This search can be efficiently performed via gradient descent. The resulting algorithms are shown to be superior to some standard feature selection algorithms on both toy data and real-life problems of face recognition, pedestrian detection and analyzing DNA micro array data."}
{"_id": "94d9cbfc474cb6ce961de45a06233b2853ca6724", "title": "Toward Automatic Program Synthesis", "text": "An elementary outline of the theorem-proving approach to automatic program synthesis is given, without dwelling on technical details. The method is illustrated by the automatic construction of both recursive and iterative programs operating on natural numbers, lists, and trees.\nIn order to construct a program satisfying certain specifications, a theorem induced by those specifications is proved, and the desired program is extracted from the proof. The same technique is applied to transform recursively defined functions into iterative programs, frequently with a major gain in efficiency.\nIt is emphasized that in order to construct a program with loops or with recursion, the principle of mathematical induction must be applied. The relation between the version of the induction rule used and the form of the program constructed is explored in some detail."}
{"_id": "7de442e0012ca8ce2c0505105fe7ee7d48ac927f", "title": "Developing a Feasible and Maintainable Ontology for Automatic Landscape Design", "text": "In general, landscape architecture includes analysis, planning, design, administration and management of natural and artificial. An important aspect is the formation of socalled sustainable landscapes that allow maximum use of the environment, natural resources and promote sustainable restoration of ecosystems. For such purposes, a designer needs a complete database with existing and suitable plants, but no designing tool has one. Therefore it is presented the structure and the development of on ontology suitable for storing and managing all information and knowledge about plants. The advantage is that the format of the ontology allows the storage of any plant species (e.g. live or fossil) and automated reasoning. Ontology is a formal conceptualization of a particular knowledge about the world, through the explicit representation of basic concepts, relations, and inference rules about themselves. Therefore the ontology may be used by a design tool for helping the designer and choosing the best options for a sustainable landscape. Keywords\u2014environment; landscapes; ontology; ontology-based simulation; sustainable landscapes"}
{"_id": "c364f8cb4cc49f188807bb178277909f030b603a", "title": "A novel approach to online social influence maximization", "text": "Online social networks are becoming a true growth point of Internet. As individuals constantly desire to interact with each other, the ability for Internet to deliver this networking influence becomes much stronger. In this paper, we study the online social influence maximization problem, which is to find a small group of influential users that maximizes the spread of influence through networks. After a thorough analysis of existing models, especially two classical ones, namely Independent cascade and linear thresholds, we argue that their idea that each user can only be activated by its active neighbors is not applicable to online social networks, since in many applications there is no clear concept for the issue of \"activation\". In our proposed influence model, if there is a social influence path linking two nonadjacent individuals, the value of influence between these two individuals can be evaluated along the existing social path based on the influence transitivity property under certain constraints. To compute influence probabilities between two neighbors, we also develop a new method which leverages both structure of networks and historical data. With reasonably learned influence probabilities, we recast the problem of online influence maximization to a weighted maximum cut problem which analyzes the influence flow among graph vertices. By running a semidefinite program-based (GW) algorithm on a complete influence graph, an optimal seed set can be identified effectively. We also provide experimental results on real online social networks, showing that our algorithm significantly outperforms greedy methods."}
{"_id": "7d0e27146e8bf64978e97ed1fdeb163e39f73a19", "title": "Arabic Opinion Mining Using Parallel Decision Trees", "text": "Opinion mining is an interested area of research, which epitomize the customer reviews of a product or service and express whether the opinions are positive or negative. Various methods have been proposed as classifiers for opinion mining such as Na\u00efve Bayesian, and Support vector machine, these methods classify opinion without giving us the reasons about why the instance opinion is classified to certain class. Therefore, in our work, we investigate opinion mining of Arabic text at the document level, by applying decision trees classification classifier to have clear, understandable rule, also we apply parallel decision trees classifiers to have efficient results. We applied parallel decision trees on two Arabic corpus of text documents by using parallel implementation of RapidMiner tools. In case of applying parallel decision tree family on OCA we get the best results of accuracy (93.83%), f-measure (93.22) and consumed time 42 Sec at thread 4, one of the resulted rule is Urdu language lines. In case of applying parallel decision tree family on BHA we get the best results of accuracy (90.63%), f-measure (82.29) and consumed time 219 Sec at thread 4, one of the resulted rule is Urdu language lines."}
{"_id": "898982b6ffb591e52d783fb9e85ca3d4e4ddea87", "title": "Surgical management of cloacal malformations: a review of 339 patients.", "text": "BACKGROUND/PURPOSE\nThe aim of this study was to describe lessons learned from the authors' series of patients with cloaca and convey the improved understanding and surgical treatment of the condition's wide spectrum of complexity.\n\n\nMETHODS\nThe medical records of 339 patients with cloaca operated on by the authors were retrospectively reviewed.\n\n\nRESULTS\nA total of 265 patients underwent primary operations, and 74 were secondary. All patients were approached posterior sagittally; 111 of them also required a laparotomy. The average length of the common channel was 4.7 cm for patients that required a laparotomy and 2.3 cm for those that did not. Vaginal reconstruction involved a vaginal pull-through in 196 patients, a vaginal flap in 38, vaginal switch in 30, and vaginal replacement in 75 (36 with rectum, 31 with ileum, and 8 with colon). One hundred twenty-two patients underwent a total urogenital mobilization. Complications included vaginal stricture or atresia in 17, urethral strictures in 6, and urethro-vaginal fistula in 19, all of which occurred before the introduction of the total urogenital mobilization. A total of 54% of all evaluated patients were continent of urine and 24% remain dry with intermittent catheterization through their native urethra and 22% through a Mitrofanoff-type of conduit. Seventy-eight percent of the patients with a common channel longer than 3 cm require intermittent catheterization compared with 28% when their common channel was shorter than 3 cm. Sixty percent of all cases have voluntary bowel movements (28% of them never soiled, and 72% soiled occasionally). Forty percent are fecally incontinent but remain clean when subjected to a bowel management program. Forty-eight patients born at other institutions with hydrocolpos were not treated correctly during the neonatal period. The surgeons failed to drain the dilated vaginas, which interfered with the drainage of the ureters and provoked urinary tract infections, pyocolpos, and/or vaginal perforation. In 24 patients, the colostomy was created too distally, and it interfered with the pull-through. Twenty-three patients suffered from colostomy prolapse. All of these patients required a colostomy, revision before the main repair. Thirty-six patients underwent reoperation because they had a persistent urogenital sinus after an operation done at another institution, and 38 patients underwent reoperation because they suffered from atresia or stenosis of the vagina or urethra. The series was divided into 2 distinct groups of patients: group A were those with a common channel shorter than 3 cm (62%) and group B had a common channel longer than 3 cm (38%).\n\n\nCONCLUSIONS\nThe separation of these groups has important therapeutic and prognostic implications. Group A patients can be repaired posterior sagittally with a reproducible, relatively short operation. Because they represent the majority of patients, we believe that most well-trained pediatric surgeons can repair these type of malformations, and the prognosis is good. Group B patients (those with a common channel longer than 3 cm), usually require a laparotomy and have a much higher incidence of associated urologic problems. 
The surgeons who repair these malformations require special training in urology, and the operations are prolonged, technically demanding, and the functional results are not as good as in group A. It is extremely important to establish an accurate neonatal diagnosis, drain the hydrocolpos when present, and create an adequate, totally diverting colostomy, leaving enough distal colon available for the pull-through and fixing the colon to avoid prolapse. A correct diagnosis will allow the surgeon to repair the entire defect and avoid a persistent urogenital sinus. Cloacas comprise a spectrum of defects requiring a complex array of surgical decisions. The length of the common channel is an important determinant of the potential for urinary control, and predicts the extent of surgical repair."}
{"_id": "d6ddbe79fbe374baed1aa7b1b1ed02ff13b9534d", "title": "Broadband Millimeter-Wave Power Combiner Using Compact SIW to Waveguide Transition", "text": "A novel broadband millimeter-wave passive power combiner based on substrate integrated waveguide (SIW) to waveguide transition is presented in this letter. Two transition circuits are symmetrically inserted into the E-plane of the rectangular waveguide and function as power-combining network. To satisfy the wideband request of the power combiner, a broadband and compact SIW-to-waveguide transition circuit has been developed using antisymmetric tapered probe. A Ka-band four-way power combiner was fabricated and the measured results agree well with the simulated ones. The measured results show that the proposed combiner achieved a bandwidth of 52% from 23.5 to 40 GHz with better than 15 dB return loss and an insertion loss of 0.75 to 1.4 dB. This millimeter-wave power combiner can be employed in high power-combining system for its simple structure and broadband performance."}
{"_id": "2e0532e920d5fef03d6de3ebc47171c9046043d4", "title": "Security Viewpoint in a Reference Architecture Model for Cyber-Physical Production Systems", "text": "Cyber-physical Production Systems (CPPS) are one of the technical driving forces behind the transformation of industrial production towards \"digital factory of the future\" in the context of Industry 4.0. Security is a major concern for such systems as they become more intelligent, interconnected, and coupled with physical devices. For various security activities from security analysis to designing security controls and architecture, a systematic and structured view and presentation of security-related information is required. Based on the draft standard of Reference Architecture Model for Industry 4.0 (RAMI 4.0), we propose a practical approach to establish a security viewpoint in the CPPS reference architecture model. We investigate the feasibility of using an architecture modeling tool to implement the concept and leverage existing work on models of layered architecture. We demonstrate the applicability for security analysis in two example case studies."}
{"_id": "c02d852be0ad550d2d7888485068372af5145fe9", "title": "Dual Poisson-Disk Tiling: An Efficient Method for Distributing Features on Arbitrary Surfaces", "text": "This paper introduces a novel surface-modeling method to stochastically distribute features on arbitrary topological surfaces. The generated distribution of features follows the Poisson disk distribution, so we can have a minimum separation guarantee between features and avoid feature overlap. With the proposed method, we not only can interactively adjust and edit features with the help of the proposed Poisson disk map, but can also efficiently re-distribute features on object surfaces. The underlying mechanism is our dual tiling scheme, known as the dual Poisson-disk tiling. First, we compute the dual of a given surface parameterization, and tile the dual surface by our specially-designed dual tiles; during the pre-processing, the Poisson disk distribution has been pre-generated on these tiles. By dual tiling, we can nicely avoid the problem of corner heterogeneity when tiling arbitrary parameterized surfaces, and can also reduce the tile set complexity. Furthermore, the dual tiling scheme is non-periodic, and we can also maintain a manageable tile set. To demonstrate the applicability of this technique, we explore a number of surface-modeling applications: pattern and shape distribution, bump-mapping, illustrative rendering, mold simulation, the modeling of separable features in texture and BTF, and the distribution of geometric textures in shell space."}
{"_id": "25fb5a6abcd88ee52bdb3165b844c941e90eb9bf", "title": "Revisiting Distributed Synchronous SGD", "text": "Distributed training of deep learning models on large-scale training data is typically conducted with asynchronous stochastic optimization to maximize the rate of updates, at the cost of additional noise introduced from asynchrony. In contrast, the synchronous approach is often thought to be impractical due to idle time wasted on waiting for straggling workers. We revisit these conventional beliefs in this paper, and examine the weaknesses of both approaches. We demonstrate that a third approach, synchronous optimization with backup workers, can avoid asynchronous noise while mitigating for the worst stragglers. Our approach is empirically validated and shown to converge faster and to better test accuracies."}
{"_id": "2d83f7bca2d6f8aa4d2fc8a95489a1e1dc8884f1", "title": "Highway long short-term memory RNNS for distant speech recognition", "text": "In this paper, we extend the deep long short-term memory (DL-STM) recurrent neural networks by introducing gated direct connections between memory cells in adjacent layers. These direct links, called highway connections, enable unimpeded information flow across different layers and thus alleviate the gradient vanishing problem when building deeper LSTMs. We further introduce the latency-controlled bidirectional LSTMs (BLSTMs) which can exploit the whole history while keeping the latency under control. Efficient algorithms are proposed to train these novel networks using both frame and sequence discriminative criteria. Experiments on the AMI distant speech recognition (DSR) task indicate that we can train deeper LSTMs and achieve better improvement from sequence training with highway LSTMs (HLSTMs). Our novel model obtains 43.9/47.7% WER on AMI (SDM) dev and eval sets, outperforming all previous works. It beats the strong DNN and DLSTM baselines with 15.7% and 5.3% relative improvement respectively."}
{"_id": "9aa852632f87ba242988e901def37858a1622e35", "title": "A REVIEW OF POINT CLOUDS SEGMENTATION AND CLASSIFICATION ALGORITHMS", "text": "Today 3D models and point clouds are very popular being currently used in several fields, shared through the internet and even accessed on mobile phones. Despite their broad availability, there is still a relevant need of methods, preferably automatic, to provide 3D data with meaningful attributes that characterize and provide significance to the objects represented in 3D. Segmentation is the process of grouping point clouds into multiple homogeneous regions with similar properties whereas classification is the step that labels these regions. The main goal of this paper is to analyse the most popular methodologies and algorithms to segment and classify 3D point clouds. Strong and weak points of the different solutions presented in literature or implemented in commercial software will be listed and shortly explained. For some algorithms, the results of the segmentation and classification is shown using real examples at different scale in the Cultural Heritage field. Finally, open issues and research topics will be discussed."}
{"_id": "05fcf7d0f007d5beef631737a96768641b6de517", "title": "Inferring user demographics and social strategies in mobile social networks", "text": "Demographics are widely used in marketing to characterize different types of customers. However, in practice, demographic information such as age, gender, and location is usually unavailable due to privacy and other reasons. In this paper, we aim to harness the power of big data to automatically infer users' demographics based on their daily mobile communication patterns. Our study is based on a real-world large mobile network of more than 7,000,000 users and over 1,000,000,000 communication records (CALL and SMS). We discover several interesting social strategies that mobile users frequently use to maintain their social connections. First, young people are very active in broadening their social circles, while seniors tend to keep close but more stable connections. Second, female users put more attention on cross-generation interactions than male users, though interactions between male and female users are frequent. Third, a persistent same-gender triadic pattern over one's lifetime is discovered for the first time, while more complex opposite-gender triadic patterns are only exhibited among young people.\n We further study to what extent users' demographics can be inferred from their mobile communications. As a special case, we formalize a problem of double dependent-variable prediction-inferring user gender and age simultaneously. We propose the WhoAmI method, a Double Dependent-Variable Factor Graph Model, to address this problem by considering not only the effects of features on gender/age, but also the interrelation between gender and age. Our experiments show that the proposed WhoAmI method significantly improves the prediction accuracy by up to 10% compared with several alternative methods."}
{"_id": "65cec799ecd6e03119fe61d931a4b83bcdcf4699", "title": "An implementation of the earliest deadline first algorithm in Linux", "text": "Recently, many projects have been started to introduce some real-time mechanisms into general purpose operating systems (GPOS) in order to make them capable of providing the users with some temporal guarantees. Many of these projects focused especially on Linux for its capillary and widespread adoption throughout many different research and industrial environments.\n By tracking the kernel release cycle, we propose an efficient Earliest Deadline First implementation in the form of a patch-set against the 2.6.27 version, that is the latest released one, as of now. Our implementation provides the user with the possibility to choose SCHED_EDF as one of the possible scheduling policies for a task, with an enhanced version of the standard algorithm. In fact, we propose a new approach to shared resources' access which, differently from many other previous existing works, does not require the user to specify any parameters about the critical sections every task will enter during its execution."}
{"_id": "84205878e0b1702b995a3186563cc996be18600e", "title": "Path-space manipulation of physically-based light transport", "text": "Industry-quality content creation relies on tools for lighting artists to quickly prototype, iterate, and refine final renders. As industry-leading studios quickly adopt physically-based rendering (PBR) across their art generation pipelines, many existing tools have become unsuitable as they address only simple effects without considering underlying PBR concepts and constraints. We present a novel light transport manipulation technique that operates directly on path-space solutions of the rendering equation. We expose intuitive direct and indirect manipulation approaches to edit complex effects such as (multi-refracted) caustics, diffuse and glossy indirect bounces, and direct/indirect shadows. With our sketch- and object-space selection, all built atop a parameterized regular expression engine, artists can search and isolate shading effects to inspect and edit. We classify and filter paths on the fly and visualize the selected transport phenomena. We survey artists who used our tool to manipulate complex phenomena on both static and animated scenes."}
{"_id": "23d68bef82c294822147db39ffbfdfcdb860aa88", "title": "What's Your Next Move: User Activity Prediction in Location-based Social Networks", "text": "Location-based social networks have been gaining increasing popularity in recent years. To increase users\u2019 engagement with location-based services, it is important to provide attractive features, one of which is geo-targeted ads and coupons. To make ads and coupon delivery more effective, it is essential to predict the location that is most likely to be visited by a user at the next step. However, an inherent challenge in location prediction is a huge prediction space, with millions of distinct check-in locations as prediction target. In this paper we exploit the check-in category information to model the underlying user movement pattern. We propose a framework which uses a mixed hidden Markov model to predict the category of user activity at the next step and then predict the most likely location given the estimated category distribution. The advantages of modeling the category level include a significantly reduced prediction space and a precise expression of the semantic meaning of user activities. Extensive experimental results show that, with the predicted category distribution, the number of location candidates for prediction is 5.45 times smaller, while the prediction accuracy is 13.21% higher."}
{"_id": "493765198c1d2c4fc3307e20d3e0df6313d0b88c", "title": "Texture Segmentation Based Video Compression Using Convolutional Neural Networks", "text": "There has been a growing interest in using different approaches to improve the coding efficiency of modern video codec in recent years as demand for web-based video consumption increases. In this paper, we propose a model-based approach that uses texture analysis/synthesis to reconstruct blocks in texture regions of a video to achieve potential coding gains using the AV1 codec developed by the Alliance for Open Media (AOM). The proposed method uses convolutional neural networks to extract texture regions in a frame, which are then reconstructed using a global motion model. Our preliminary results show an increase in coding efficiency while maintaining satisfactory visual quality. Introduction With the increasing amount of videos being created and consumed, better video compression tools are needed to provide fast transmission and high visual quality. Modern video coding standards utilize spatial and temporal redundancy in the videos to achieve high coding efficiency and high visual quality with motion compensation techniques and 2-D orthogonal transforms. However, efficient exploitation of statistical dependencies measured by a mean squared error (MSE) does not always produce the best psychovisual result, and may require higher data rate to preserve detail information in the video. Recent advancement in GPU computing has enabled the analysis of large scale data using deep learning method. Deep learning techniques have shown promising performance in many applications such as object detection, natural language process, and synthetic images generation [1, 2, 3, 4]. Several methods have been developed for video applications to improve coding efficiency using deep learning. In [5], sample adaptive offset (SAO) is replaced by a CNN-based in-loop filter (IFCNN) to improve the coding efficiency in HEVC. By learning the predicted residue between the quantized reconstructed frames obtained after deblocking filter (DF) and the original frames, IFCNN is able to reconstruct video frames with higher quality without requiring any bit transmission during coding process. Similar to [5], [6] proposes a Variable-filter-size Residue-learning CNN (VRCNN) to improving coding efficiency by replacing DF and SAO in HEVC. VRCNN is based on the concept of ARCNN [7] which is originally designed for JPEG applications. Instead of only using spatial information to train a CNN to reduce the coding artifacts in HEVC, [8] proposed a spatial temporal residue network (STResNet) as an additional in-loop filter after SAO. A rate-distortion optimization strategy is used to control the on/off switch of the proposed in-loop filter. There are also some works that have been done in the decoder of HEVC to improve the coding efficiency. In [9], a deep CNN-based auto decoder (DCAD) is implemented in the decoder of HEVC to improve the video quality of decoded video. DCAD is trained to learn the predict residue between decoded video frames and original video frames. By adding the predicted residual generated from DCAD to the compressed video frames, this method enhances the compressed video frames to higher quality. In summary, the above methods improve the coding efficiency by enhancing the quality of reconstructed video frames. However, they require different trained models for video reconstruction at different quantization levels. 
We are interested in developing deep learning approaches that encode only visually relevant information and use a different coding method for \u201cperceptually insignificant\u201d regions in a frame, which can lead to substantial data rate reductions while maintaining visual quality. In particular, we have developed a model-based approach that can improve coding efficiency by identifying texture areas in a video frame that contain detail-irrelevant information, for which the viewer does not perceive specific details and which can therefore be skipped or encoded at a much lower data rate. The task is then to divide a frame into \u201cperceptually insignificant\u201d texture regions and use a texture model for the pixels in those regions. In 1959, Schreiber and colleagues proposed a coding method that divides an image into textures and edges and used it in image coding [10]. This work was later extended by using the human visual system and statistical models to determine the texture region [11, 12, 13]. More recently, several groups have focused on adapting perceptual-based approaches to the video coding framework [14]. In our previous work [15], we introduced a texture analyzer before encoding the input sequences to identify detail-irrelevant regions in the frame, which are classified into different texture classes. At the encoder, no inter-frame prediction is performed for these regions. Instead, displacement of the entire texture region is modeled by just one set of motion parameters. Therefore, only the model parameters are transmitted to the decoder for reconstructing the texture regions using a texture synthesizer. Non-texture regions in the frame are coded conventionally. Since this method uses feature extraction based on a texture segmentation technique, a proper set of parameters is required to achieve accurate texture segmentation for different videos. Deep learning methods usually do not require such parameter tuning for inference. As a result, deep learning techniques can be developed to perform texture segmentation and classification for the proposed model-based video coding. A Fisher vector convolutional neural network (FV-CNN) that can produce segmentation labels for different texture classes was proposed in [16]. One of the advantages of FV-CNN is that the image input size is flexible and is not limited by the network architecture. Instead of doing pixel-wise classification on texture regions, a texture classification CNN was described in [17]. To reduce computational expense, [17] uses a small classification network to classify image patches with a size of 227 \u00d7 227. A smaller network is needed to classify the smaller image patches in our case. In this paper, we propose a block-based texture segmentation method to extract the texture region in a video frame using convolutional neural networks. The block-based segmentation network classifies each 16 \u00d7 16 block in a frame as texture or non-texture. The identified texture region is then synthesized using the temporal correlations among the frames. Our method was implemented using the AOM/AV1 codec. Preliminary results show significant bitrate savings while maintaining satisfactory visual quality."}
{"_id": "2862a44d3da8838597ee6d3a89934b9eaf2c2eb3", "title": "Visual Descriptors for Dense Tensor Fields in Computational Turbulent Combustion: A Case Study", "text": "Simulation and modeling of turbulent flow, and of turbulent reacting flow in particular, involve solving for and analyzing time-dependent and spatially dense tensor quantities, such as turbulent stress tensors. The interactive visual exploration of these tensor quantities can effectively steer the computational modeling of combustion systems. In this article, the authors analyze the challenges in dense symmetric-tensor visualization as applied to turbulent combustion calculation; most notable among these challenges are the dataset size and density. They analyze, together with domain experts, the feasibility of using several established tensor visualization techniques in this application domain. They further examine and propose visual descriptors for volume rendering of the data. Of these novel descriptors, one is a density-gradient descriptor which results in Schlieren-style images, and another one is a classification descriptor inspired by machine-learning techniques. The result is a hybrid visual analysis tool to be utilized in the debugging, benchmarking and verification of models and solutions in turbulent combustion. The authors demonstrate this analysis tool on two example configurations, report feedback from combustion researchers, and summarize the design lessons learned. c \u00a9 2016 Society for Imaging Science and Technology. [DOI: 10.2352/J.ImagingSci.Technol.2016.60.1.010404] INTRODUCTION Computational simulation of turbulent combustion for gas turbine design has become increasingly important in the last two decades, due in part to environmental concerns and regulations on toxic emissions. Such modern gas turbine designs feature a variety of mixing fuel compositions and possible flow configurations,1,2 which make non-computational simulations difficult. The focus of the computational research effort in this direction is on the development of computational tools for the modeling and prediction of turbulent combustion flows. Received June 30, 2015; accepted for publication Nov. 4, 2015; published online Dec. 10, 2015. Associate Editor: Song Zhang. 1062-3701/2016/60(1)/010404/11/$25.00 Tensor quantities are common features in these turbulent combustion models. In particular, stress and strain tensors are often correlated to turbulent quantities\u2014which appear unclosed in the mathematical formulation and thus need to be modeled as part of the computational simulation. Visual identification of the characteristics of such tensor quantities can bring significant insights into the computational modeling process. However, these computational tensor fields are very large and spatially dense\u2014a good example of the Big Data revolution across sciences and engineering. Figure 1 shows an example turbulent combustion configuration, featuring a grid size of 106 and 6\u00d7 106 particles (shown as spheres); this dataset should be considered in contrast to traditional tensor datasets, which feature grid sizes in the 102 range. At such large scales, typical glyph encodings become cluttered and illegible. Furthermore, combustion experts seldom have an intuitive understanding of the tensor quantities. In this respect, froma tensor visualization perspective, workingwith these datasets poses an array of challenges. Are traditional tensor and flow representations useful in this context? 
Does increasing the level of complexity or expressiveness of such representations help or hinder? Is interaction speed more important than the benefits gained from complex descriptors? In this article, we address a specific application design problem. In the process of exploring the design space, we also investigate some of the larger visualization questions above, through the opportunity of a case study in the computational-combustion domain. In this work, motivated by an ongoing collaboration with domain experts [3], we investigate the challenges associated with the exploratory visualization of tensor quantities in turbulent combustion simulations. We first provide a characterization of the problem domain, including a data analysis. Through a case study involving five senior combustion researchers, we then iteratively explore the space of tensor visual encodings. We implement and evaluate several approaches advocated by the visualization community in an interactive prototype, and we contrast these approaches with the best-of-breed visualization practices in the target domain. Based on domain expert feedback, we then focus our efforts on identifying effective visual descriptors for volume rendering of the combustion tensor data. Our contributions include a novel density-gradient descriptor and the adaptation of a machine-learning classification technique. Next, we evaluate the visual descriptors on two computational-combustion datasets of particular interest, and we show the importance of the proposed approach for debugging the numerical simulation of complex configurations. In an effort to better bridge the gap between the combustion and tensor visualization communities, we describe these tensor field datasets. Last but not least, we contribute a summary of design lessons learned from the study and from the application design process. To the best of our knowledge, this is the first formal, exploratory case study of tensor visualization techniques in the context of very large, high-density turbulent combustion flow. TENSORS IN TURBULENT COMBUSTION MODELING Turbulent Combustion Modeling. A sufficiently accurate, flexible and reliable model can be used for an in silico combustor rig test as a much cheaper alternative to the real-life rig tests employed in combustor design and optimization. In order to achieve such a model, the methodology should be well tested and proven with lab-scale configurations. Multiple numerical approaches exist for the generation of such computational models of combustion, most notably Direct numerical simulation (DNS), Reynolds-averaged Navier\u2013Stokes (RANS) and Large eddy simulation (LES). DNS, RANS and LES have complementary strengths. However, all models begin by describing the compressible reacting flow via a set of partial differential equations (PDEs) that represent the conservation of mass, momentum and energy. 
These PDEs are a fully coupled set of multi-dimensional non-linear equations and can be posed in a variety of forms depending on the flow conditions (compressibility, scale, flow regime, etc.) [4]. In this article, we exemplify the visualization of stress/strain tensors, and therefore restrict the presentation to the pertinent subset of these PDEs, namely the momentum transport equation. Stress, Strain and Turbulent Stress Tensors. A tensor is an extension of the concept of a scalar and a vector to higher orders. For example, while a stress vector is the force acting on a given unit surface, a stress tensor is defined as the components of stress vectors acting on each coordinate surface; thus, stress can be described by a symmetric second-order tensor (a matrix). The velocity stress and strain tensor fields are manifested in the transport of fluid momentum, which is a vector quantity governed by the following conservation equation:"}
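The record above is cut off before the equation itself. For reference, the standard compressible momentum transport equation this passage presumably leads into is, in index notation (reconstructed from context, not quoted from the paper):

```latex
% Compressible momentum transport equation (index notation);
% reconstructed from the surrounding text, not copied from the source.
\frac{\partial (\rho u_i)}{\partial t}
  + \frac{\partial (\rho u_i u_j)}{\partial x_j}
  = -\frac{\partial p}{\partial x_i}
  + \frac{\partial \tau_{ij}}{\partial x_j}
  + \rho g_i
```

Here ρ is density, u_i velocity, p pressure, τ_ij the viscous stress tensor, and ρ g_i a body force; the unclosed turbulent (Reynolds) stress terms the abstract mentions appear when this equation is averaged or filtered.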
{"_id": "abf8d1b7e2b8e690051ae8e3fc49e24380c712d2", "title": "Towards the Semantic Web: Collaborative Tag Suggestions", "text": "Content organization over the Internet went through several interesting phases of evolution: from structured di rectories to unstructured Web search engines and more recently, to tagging as a way for aggregating information, a step toward s the semantic web vision. Tagging allows ranking and dat a organization to directly utilize inputs from end us er , enabling machine processing of Web content. Since tags are c reated by individual users in a free form, one important prob lem facing tagging is to identify most appropriate tags, while eliminating noise and spam. For this purpose, we define a set o f general criteria for a good tagging system. These criteria include high coverage of multiple facets to ensure good recall, least effort to reduce the cost involved in browsing, and high popu larity to ensure tag quality. We propose a collaborative tag su gestion algorithm using these criteria to spot high-quality ags. The proposed algorithm employs a goodness measure for t ags derived from collective user authorities to combat spam. Th e goodness measure is iteratively adjusted by a reward-penalty lgorithm, which also incorporates other sources of tags, e.g. , content-based auto-generated tags. Our experiments based on My We b 2.0 show that the algorithm is effective."}
{"_id": "72c99627f3d6bf9a0b00c32d44a0f15ec35ae5c1", "title": "The Effect of Focal Loss in Semantic Segmentation of High Resolution Aerial Image", "text": "The semantic segmentation of High Resolution Remote Sensing (HRRS) images is the fundamental research area of the earth observation. Convolutional Neural Network (CNN), which has achieved superior performance in computer vision task, is also useful for semantic segmentation of HRRS images. In this work, focal loss is used instead of cross-entropy loss in training of CNN to handle the imbalance in training data. To evaluate the effect of focal loss, we train SegNet and FCN with focal loss and confirm improvement in accuracy in ISPRS 2D Semantic Labeling Contest dataset, especially when $\\gamma$ is 0.5 in SegNet."}
{"_id": "2cdaca30ad5f05703004706a7fcf51060e6a1068", "title": "Co-Design for Low Warpage and High Reliability in Advanced Package with TSV-Free Interposer (TFI)", "text": "TSV-Free Interposer (TFI) technology eliminates TSV fabrication and reduces manufacturing and material cost. Co-design modelling methodology is established for TFI technology with considering wafer process, package assembly and package/board level reliability and thermal performance to optimize structure design, wafer process, assembly process and material selection. Experimental results are used for validating warpage modelling results. Through wafer level modelling, suitable carrier wafer and EMC materials are recommended to control wafer warpage less than 2mm. Effects of package substrate coefficient of thermal expansion (CTE) and stiffener on assembly induced package warpage are simulated to reduce package warpage. The recommended materials and geometry design based on reliability are aligned with that from wafer and package warpage simulation results. The final test vehicle (TV) design and material selection are determined based on co-design modelling results for achieving successful TFI wafer process and package assembly process and long term package/board level reliability."}
{"_id": "b79412cee14e583a5c6816c1124913f560303a95", "title": "Learning Fine-grained Features via a CNN Tree for Large-scale Classification", "text": "We propose a novel approach to enhance the discriminability of Convolutional Neural Networks (CNN). The key idea is to build a tree structure that could progressively learn fine-grained features to distinguish a subset of classes, by learning features only among these classes. Such features are expected to be more discriminative, compared to features learned for all the classes. We develop a new algorithm to effectively learn the tree structure among a large number of classes. Experiments on large-scale image classification tasks demonstrate that our method could boost the performance of a given basic CNN model. Our method is quite general, hence it can potentially be used in combination with many other deep learning models."}
{"_id": "5ab62d69c9862be4fb80dc1e921752ea8fb625f0", "title": "SIMPLIFIED SOLAR TRACKING PROTOTYPE", "text": "Solar energy is rapidly advancing as an important means of renewable energy resource. More energy is produced by tracking the solar panel to remain aligned to the sun at a right angle to the rays of light. This paper describes in detail the design and construction of a prototype for solar tracking system with two degrees of freedom, which detects the sunlight using photocells. The control circuit for the solar tracker is based on a PIC16F84A microcontroller (MCU). This is programmed to detect the sunlight through the photocells and then actuate the motor to position the solar panel where it can receive maximum sunlight. Keyword: PIC MCU, tracking, photocell, frame, motion system"}
{"_id": "a7421c8b1dec4401a12f86ee73b7dd105c152691", "title": "Real-time automatic tag recommendation", "text": "Tags are user-generated labels for entities. Existing research on tag recommendation either focuses on improving its accuracy or on automating the process, while ignoring the efficiency issue. We propose a highly-automated novel framework for real-time tag recommendation. The tagged training documents are treated as triplets of (words, docs, tags), and represented in two bipartite graphs, which are partitioned into clusters by Spectral Recursive Embedding (SRE). Tags in each topical cluster are ranked by our novel ranking algorithm. A two-way Poisson Mixture Model (PMM) is proposed to model the document distribution into mixture components within each cluster and aggregate words into word clusters simultaneously. A new document is classified by the mixture model based on its posterior probabilities so that tags are recommended according to their ranks. Experiments on large-scale tagging datasets of scientific documents (CiteULike) and web pages del.icio.us) indicate that our framework is capable of making tag recommendation efficiently and effectively. The average tagging time for testing a document is around 1 second, with over 88% test documents correctly labeled with the top nine tags we suggested."}
{"_id": "69bec56035163a2e6da9bf91921c05b9d3ae8d67", "title": "Superfamilies of evolved and designed networks.", "text": "Complex biological, technological, and sociological networks can be of very different sizes and connectivities, making it difficult to compare their structures. Here we present an approach to systematically study similarity in the local structure of networks, based on the significance profile (SP) of small subgraphs in the network compared to randomized networks. We find several superfamilies of previously unrelated networks with very similar SPs. One superfamily, including transcription networks of microorganisms, represents \"rate-limited\" information-processing networks strongly constrained by the response time of their components. A distinct superfamily includes protein signaling, developmental genetic networks, and neuronal wiring. Additional superfamilies include power grids, protein-structure networks and geometric networks, World Wide Web links and social networks, and word-adjacency networks from different languages."}
{"_id": "bcfc0079c36354dc00cb59018182f3fa0506e9a3", "title": "Improving Alzheimer's disease diagnosis with machine learning techniques.", "text": "There is not a specific test to diagnose Alzheimer's disease (AD). Its diagnosis should be based upon clinical history, neuropsychological and laboratory tests, neuroimaging and electroencephalography (EEG). Therefore, new approaches are necessary to enable earlier and more accurate diagnosis and to follow treatment results. In this study we used a Machine Learning (ML) technique, named Support Vector Machine (SVM), to search patterns in EEG epochs to differentiate AD patients from controls. As a result, we developed a quantitative EEG (qEEG) processing method for automatic differentiation of patients with AD from normal individuals, as a complement to the diagnosis of probable dementia. We studied EEGs from 19 normal subjects (14 females/5 males, mean age 71.6 years) and 16 probable mild to moderate symptoms AD patients (14 females/2 males, mean age 73.4 years. The results obtained from analysis of EEG epochs were accuracy 79.9% and sensitivity 83.2%. The analysis considering the diagnosis of each individual patient reached 87.0% accuracy and 91.7% sensitivity."}
{"_id": "8e63a3e082623804b9850b2bdf38acf647293be6", "title": "Cloud Automation: Precomputing Roadmaps for Flexible Manipulation", "text": "The goal of this article is to highlight the benefits of cloud automation for industrial adopters and some of the research challenges that must be addressed in this process. The focus is on the use of cloud computing for efficiently planning the motion of new robot manipulators designed for flexible manufacturing floors. In particular, different ways that a robot can interact with a computing cloud are considered, where an architecture that splits computation between the remote cloud and the robot appears advantageous. Given this synergistic robot-cloud architecture, this article describes how solutions from the recent literature can be employed on the cloud during a periodically updated preprocessing phase to efficiently answer manipulation queries on the robot given changes in the workspace. In this setup, interesting tradeoffs arise between path quality and computational efficiency, which are evaluated through simulation. These tradeoffs motivate further research on how motion planning should be executed given access to a computing cloud."}
{"_id": "f414b028e7b8e2b142d32fc18e018f75b114ca96", "title": "Soft Robot Arm Inspired by the Octopus", "text": "The octopus is a marine animal whose body has no rigid structures. It has eight arms composed of a peculiar muscular structure, named a muscular hydrostat. The octopus arms provide it with both locomotion and grasping capabilities, thanks to the fact that their stiffness can change over a wide range and can be controlled through combined contractions of the muscles. The muscular hydrostat can better be seen as a modifiable skeleton. Furthermore, the morphology the arms and the mechanical characteristics of their tissues are such that the interaction with the environment (i.e., water) is exploited to simplify control. Thanks to this effective mechanism of embodied intelligence, the octopus can control a very high number of degrees of freedom, with relatively limited computing resources. From these considerations, the octopus emerges as a good model for embodied intelligence and for soft robotics. The prototype of a robot arm has been built based on an artificial muscular hydrostat inspired to the muscular hydrostat of the Octopus vulgaris. The prototype presents the morphology of the biological model and the broad arrangement of longitudinal and transverse muscles. Actuation is obtained with cables (longitudinally) and with shape memory alloy springs (transversally). The robot arm combines contractions and it can show the basic movements of the octopus arm, like elongation, shortening and bending, in water. \u00a9 Koninklijke Brill NV, Leiden and The Robotics Society of Japan, 2012"}
{"_id": "f9b6b1cff2d0b3e3987a5ac41ed4c5446ee3eb42", "title": "Power Modeling and Optimization for OLED Displays", "text": "Emerging organic light-emitting diode (OLED)-based displays obviate external lighting, and consume drastically different power when displaying different colors, due to their emissive nature. This creates a pressing need for OLED display power models for system energy management, optimization as well as energy-efficient GUI design, given the display content or even the graphical-user interface (GUI) code. In this work, we study this opportunity using commercial QVGA OLED displays and user studies. We first present a comprehensive treatment of power modeling of OLED displays, providing models that estimate power consumption based on pixel, image, and code, respectively. These models feature various tradeoffs between computation efficiency and accuracy so that they can be employed in different layers of a mobile system. We validate the proposed models using a commercial QVGA OLED module and a mobile device with a QVGA OLED display. Then, based on the models, we propose techniques that adapt GUIs based on existing mechanisms as well as arbitrarily under usability constraints. Our measurement and user studies show that more than 75 percent display power reduction can be achieved with user acceptance."}
{"_id": "a91de8eb8500eef3eea20fd8f63f460cce354d3b", "title": "The 12-Core POWER8\u2122 Processor With 7.6 Tb/s IO Bandwidth, Integrated Voltage Regulation, and Resonant Clocking", "text": "POWER8\u2122 is a 12-core processor fabricated in IBM's 22 nm SOI technology with core and cache improvements driven by big data applications, providing 2.5\u00d7 socket performance over POWER7+\u2122. Core throughput is supported by 7.6 Tb/s of off-chip I/O bandwidth which is provided by three primary interfaces, including two new variants of Elastic Interface as well as embedded PCI Gen-3. Power efficiency is improved with several techniques. An on-chip controller based on an embedded PowerPC\u2122 405 processor applies per-core DVFS by adjusting DPLLs and fully integrated voltage regulators. Each voltage regulator is a highly distributed system of digitally controlled microregulators, which achieves a peak power efficiency of 90.5%. A wide frequency range resonant clock design is used in 13 clock meshes and demonstrates a minimum power savings of 4%. Power and delay efficiency is achieved through the use of pulsed-clock latches, which require statistical validation to ensure robust yield."}
{"_id": "ce15e729f003572d459c1301e8322fba86cbdffe", "title": "Visceral pain: the ins and outs, the ups and downs.", "text": "PURPOSE OF REVIEW\nVisceral pain represents a major clinical problem, yet far less is known about its mechanisms compared with somatic pains, for example, from cutaneous and muscular structures.\n\n\nRECENT FINDINGS\nIn this review, we describe the neuroanatomical bases of visceral pain signalling in the peripheral and central nervous system, comparing to somatic pains and also the channels and receptors involved in these events. We include an overview of potential new targets in the context of mechanisms of visceral pain and hypersensitivity.\n\n\nSUMMARY\nThis review should inform on the recognition of what occurs in patients with visceral pain, why comorbidities are common and how analgesic treatments work."}
{"_id": "c69fecbd7e53f98b7f623056674976dc8d6dce37", "title": "On the Use of Logic Trees for Ground-Motion Prediction Equations in Seismic-Hazard Analysis", "text": "Logic trees are widely used in probabilistic seismic hazard analysis as a tool to capture the epistemic uncertainty associated with the seismogenic sources and the ground-motion prediction models used in estimating the hazard. Combining two or more ground-motion relations within a logic tree will generally require several conversions to be made, because there are several definitions available for both the predicted ground-motion parameters and the explanatory parameters within the predictive ground-motion relations. Procedures for making conversions for each of these factors are presented, using a suite of predictive equations in current use for illustration. The sensitivity of the resulting ground-motion models to these conversions is shown to be pronounced for some of the parameters, especially the measure of source-to-site distance, highlighting the need to take into account any incompatibilities among the selected equations. Procedures are also presented for assigning weights to the branches in the ground-motion section of the logic tree in a transparent fashion, considering both intrinsic merits of the individual equations and their degree of applicability to the particular application."}
{"_id": "19a09658e6c05b44136baf54571a884eb1a7b52e", "title": "Privacy-Preserving Data Mining", "text": "A fruitful direction for future data mining research will be the development of techniques that incorporate privacy concerns. Specifically, we address the following question. Since the primary task in data mining is the development of models about aggregated data, can we develop accurate models without access to precise information in individual data records? We consider the concrete case of building a decision-tree classifier from training data in which the values of individual records have been perturbed. The resulting data records look very different from the original records and the distribution of data values is also very different from the original distribution. While it is not possible to accurately estimate original values in individual data records, we propose a novel reconstruction procedure to accurately estimate the distribution of original data values. By using these reconstructed distributions, we are able to build classifiers whose accuracy is comparable to the accuracy of classifiers built with the original data."}
{"_id": "33736e956a5c4703fb5f215bd3ad686eeeedf2de", "title": "Executing SQL over encrypted data in the database-service-provider model", "text": "Rapid advances in networking and Internet technologies have fueled the emergence of the \"software as a service\" model for enterprise computing. Successful examples of commercially viable software services include rent-a-spreadsheet, electronic mail services, general storage services, disaster protection services. \"Database as a Service\" model provides users power to create, store, modify, and retrieve data from anywhere in the world, as long as they have access to the Internet. It introduces several challenges, an important issue being data privacy. It is in this context that we specifically address the issue of data privacy.There are two main privacy issues. First, the owner of the data needs to be assured that the data stored on the service-provider site is protected against data thefts from outsiders. Second, data needs to be protected even from the service providers, if the providers themselves cannot be trusted. In this paper, we focus on the second challenge. Specifically, we explore techniques to execute SQL queries over encrypted data. Our strategy is to process as much of the query as possible at the service providers' site, without having to decrypt the data. Decryption and the remainder of the query processing are performed at the client site. The paper explores an algebraic framework to split the query to minimize the computation at the client site. Results of experiments validating our approach are also presented."}
{"_id": "697d28f8c1074b67ac6300ca6ca46c8f913380da", "title": "An Anatomical Analysis of the Supratrochlear Artery: Considerations in Facial Filler Injections and Preventing Vision Loss.", "text": "BACKGROUND\nEmbolia cutis medicamentosa (ECM) is a rare phenomenon attributed to intra-arterial drug injection. Glabellar filler injections can result in potentially devastating visual loss from inadvertent retrograde arteriolar embolization due to the extensive vasculature within the upper face. The minimum amount of filler necessary to potentiate this complication has not yet been reported.\n\n\nOBJECTIVES\nWe aim to determine the volume of filler necessary to occupy the supratrochlear artery from the glabella to the bifurcation of the ophthalmic and central retinal arteries. We specifically examine the volume of the supratrochlear artery from the glabella to orbital apex.\n\n\nMETHODS\nThe study was approved by Duke University Institutional Review Board and involved surgical dissection of six fresh tissue cadaver heads (12 hemifaces). The arterial system in each cadaver head was injected with latex for visualization. The supratrochlear arteries were isolated anteriorly from the glabella to the orbital apex posteriorly. Intra-orbital vessel radius and length were measured. The vessel volume was calculated by water displacement of the intra-arterial latex.\n\n\nRESULTS\nThe vessel volumes ranged from 0.04 to 0.12 mL. The average vessel volume was calculated as 0.085 mL, the average length as 51.75 mm, and the average radius as 0.72 mm.\n\n\nCONCLUSIONS\nVascular occlusion from filler injections can lead to devastating visual consequences due to inadvertent retrograde intra-arterial embolization. Our findings indicate that the average entire volume of the supratrochlear artery from the glabella to the orbital apex is 0.085 mL. Injectors should be aware that a bolus of this critical volume may lead to a significant adverse outcome."}
{"_id": "95ddaf652fe0bcb16025c97f8bba1b08a0228209", "title": "Geometry-Based Symbolic Approximation for Fast Sequence Matching on Manifolds", "text": "In this paper, we consider the problem of fast and efficient indexing techniques for sequences evolving in non-Euclidean spaces. This problem has several applications in the areas of human activity analysis, where there is a need to perform fast search, and recognition in very high dimensional spaces. The problem is made more challenging when representations such as landmarks, contours, and human skeletons etc. are naturally studied in a non-Euclidean setting where even simple operations are much more computationally intensive than their Euclidean counterparts. We propose a geometry and data adaptive symbolic framework that is shown to enable the deployment of fast and accurate algorithms for activity recognition, dynamic texture recognition, motif discovery. Toward this end, we present generalizations of key concepts of piece-wise aggregation and symbolic approximation for the case of non-Euclidean manifolds. We show that one can replace expensive geodesic computations with much faster symbolic computations with little loss of accuracy in activity recognition and discovery applications. The framework is general enough to work across both Euclidean and non-Euclidean spaces, depending on appropriate feature representations without compromising on the ultra-low bandwidth, high speed and high accuracy. The proposed methods are ideally suited for real-time systems and low complexity scenarios."}
{"_id": "ec241b9099701a37e7fed473f172552f65027ab5", "title": "Wireless powering for a self-propelled and steerable endoscopic capsule for stomach inspection.", "text": "This paper describes the integration of an active locomotion module in a wirelessly powered endoscopic capsule. The device is a submersible capsule optimized to operate in a fluid environment in a liquid-distended stomach. A 3D inductive link is used to supply up to 400mW to the embedded electronics and a set of 4 radio-controlled motor propellers. The design takes advantage of a ferrite-core in the receiving coil-set. This approach significantly improves the coupling with the external field source with respect to earlier work by the group. It doubles the power that can be received with a coreless coil-set under identical external conditions. The upper limit of the received power was achieved complying with the strict regulations for safe exposure of biological tissue to variable magnetic fields. The wireless transferred power was proven to be sufficient to achieve the speed of 7cm/s in any directions. An optimized locomotion strategy was defined which limits the power consumption by running only 2 motors at a time. A user interface and a joystick controller allow to fully drive the capsule in an intuitive manner. The device functionalities were successfully tested in a dry and a wet environment in a laboratory set-up."}
{"_id": "38a1094dcc72cfa2599faa3558b725335d379146", "title": "Semantic full-text search with broccoli", "text": "We combine search in triple stores with full-text search into what we call \\emph{semantic full-text search}. We provide a fully functional web application that allows the incremental construction of complex queries on the English Wikipedia combined with the facts from Freebase. The user is guided by context-sensitive suggestions of matching words, instances, classes, and relations after each keystroke. We also provide a powerful API, which may be used for research tasks or as a back end, e.g., for a question answering system. Our web application and public API are available under \\url{http://broccoli.cs.uni-freiburg.de}."}
{"_id": "4ac439b9a7a7cc7d070ae39734ddd01d81b27576", "title": "Design and In Vitro Interference Test of Microwave Noninvasive Blood Glucose Monitoring Sensor", "text": "A design of a microwave noninvasive continuous blood glucose monitoring sensor and its interference test results are presented. The novelty of the proposed sensor is that it comprises two spatially separated split-ring resonators, where one interacts with the change in glucose level of a sample under test while the other ring is used as a reference. The reference ring has a slightly different resonant frequency and is desensitized to the sample owing to its location, thus allowing changes in temperature to be calibrated out. From an oral glucose tolerance test with two additional commercially available sensors (blood strip and continuous glucose monitor) in parallel, we obtained encouraging performance for our sensor comparable with those of the commercial sensors. The effects of endogenous interferents common to all subjects, i.e., common sugars, vitamins (ascorbic acid), and metabolites (uric acid) have also been investigated by using a large Franz cell assembly. From the interference test, it is shown that the change in sensor response is dominated by changes in glucose level for concentrations relevant to blood, and the effects of interferents are negligible in comparison."}
{"_id": "9b24079b0ccf491919f5b39c0a280d37c4f37a8f", "title": "Solving POMDPs with Levin Search and EIRA", "text": "\u009e'\u009fs A\u00a1Y\u00a2f\u009fs\u00a3\u00a4\u00a3f\u00a5\u00a7\u00a6s \u0308H\u00a9Na! Y\u00abX\u009fC \u0308l\u00a3fa\u007f\u00ac\u00ad\u009fC Y\u00aes\u00a6X\u00ab\u0090 \u0304Ja[\u00b0b\u00a2\u00b1\u00a9A\u00a2f\u00a6s2 \u03013l Y\u00a6s \u0308J\u03bc \u00a3fa!\u00b6\u00b7\u00a91 \u0327o\u009e\u008d\u00bb\u0089\u00ac\u00ad\u0087\u0089\u009e1\u20444\u00a9G1\u204428 Ya \u00b0!a!2E\u00a1N\u00a3f\u00a5) Ya \u00b0!a!\u00a2f\u00absa \u0304 \u009f\u0089\u00a3\u00a4\u00a6s\u00a1'\u00a6C3\u20444\u009d\u009fC\u00a1A\u03bc \u00a1Ya!2E\u00a1N\u00a2f\u00a6s2\u00bf\u00a2\u00a42\u00c0\u00a1N\u00c1la\\ Na \u00a2\u00a42l3\u20444\u00c2\u00a6s G\u00b0ba!\u00b6Ka 2<\u00a1e\u00a3\u00a4a[\u009fC Y2l\u00a2f2l\u00c3`\u00b0!\u00a6s\u00b6K\u00b6\u00c0\u00c4J\u03bc 2l\u00a2\u00a4\u00a1Q\u00a5s\u00c5\u00c7\u00c6\u00c8\u00a6\u00c9\u009fX\u00a1N\u00a1Na 2<\u00a1Y\u00a2\u00a4\u00a6<2V\u00ca\u00cb\u00c1l\u00a6X\u00cc\\a!\u00ab$24$ to $43$ percent of the content request messages, we can identify their originating machines. We also briefly discuss potential solutions to address the developed traceback attack. Despite being developed specifically on Freenet, the basic principles of the traceback attack and solutions have important security implications for similar anonymous content-sharing systems."}
{"_id": "124126f1f2688a6acff92e6752a3b0cd33dc50b0", "title": "Data summaries for on-demand queries over linked data", "text": "Typical approaches for querying structured Web Data collect (crawl) and pre-process (index) large amounts of data in a central data repository before allowing for query answering. However, this time-consuming pre-processing phase however leverages the benefits of Linked Data -- where structured data is accessible live and up-to-date at distributed Web resources that may change constantly -- only to a limited degree, as query results can never be current. An ideal query answering system for Linked Data should return current answers in a reasonable amount of time, even on corpora as large as the Web. Query processors evaluating queries directly on the live sources require knowledge of the contents of data sources. In this paper, we develop and evaluate an approximate index structure summarising graph-structured content of sources adhering to Linked Data principles, provide an algorithm for answering conjunctive queries over Linked Data on theWeb exploiting the source summary, and evaluate the system using synthetically generated queries. The experimental results show that our lightweight index structure enables complete and up-to-date query results over Linked Data, while keeping the overhead for querying low and providing a satisfying source ranking at no additional cost."}
{"_id": "6ad02a3a501ea613688263cacb81027266066ecf", "title": "Pure Pursuit Revisited: Field Testing of Autonomous Vehicles in Urban Areas", "text": "In this paper, we aim to explore path following. We implement a path following component by referring to the existing Pure Pursuit algorithm. Using the simulation and field operational test, we identified the problem in the path following component. The main problems identified were with respect to vehicles meandering off the path, turning a corner, and the instability of steering control. Therefore, we apply some modifications to the Pure Pursuit[1] algorithm. We have also conducted the simulation and field operational tests again to evaluate these modifications."}
{"_id": "bdb3c0dff85ccac17bd52e5498748084db460944", "title": "An ontological analysis of the economic primitives of the extended-REA enterprise information architecture", "text": "The Resource-Event-Agent (REA) model for enterprise economic phenomena was first published in The Accounting Review in 1982. Since that time its concepts and use have been extended far beyond its original accountability infrastructure to a framework for enterprise information architectures. The granularity of the model has been extended both up (to enterprise value chains) and down (to workflow or task specification) in the aggregation plane, and additional conceptual type-images and commitment-images have been proposed as well. The REA model actually fits the notion of domain ontology well, a notion that is becoming increasingly important in the era of e-commerce and virtual companies. However, its present and future components have never been analyzed formally from an ontological perspective. This paper intends to do just that, relying primarily on the conceptual terminology of John Sowa. The economic primitives of the original REA model (economic resources, economic events, economic agents, duality relationships, stock-flow relationships, control relationships and responsibility relationships) plus some newer components (commitments, types, custody, reciprocal, etc.) will be analyzed individually and collectively as a specific domain ontology. Such a review can be used to guide further conceptual development of REA extensions."}
{"_id": "77cad1aade0a08387e69b56878f71ec4492a41cb", "title": "Point-of-Interest Demand Modeling with Human Mobility Patterns", "text": "Point-of-Interest (POI) demand modeling in urban regions is critical for many applications such as business site selection and real estate investment. While some efforts have been made for the demand analysis of some specific POI categories, such as restaurants, it lacks systematic means to support POI demand modeling. To this end, in this paper, we develop a systematic POI demand modeling framework, named Region POI Demand Identification (RPDI), to model POI demands by exploiting the daily needs of people identified from their large-scale mobility data. Specifically, we first partition the urban space into spatially differentiated neighborhood regions formed by many small local communities. Then, the daily activity patterns of people traveling in the city will be extracted from human mobility data. Since the trip activities, even aggregated, are sparse and insufficient to directly identify the POI demands, especially for underdeveloped regions, we develop a latent factor model that integrates human mobility data, POI profiles, and demographic data to robustly model the POI demand of urban regions in a holistic way. In this model, POI preferences and supplies are used together with demographic features to estimate the POI demands simultaneously for all the urban regions interconnected in the city. Moreover, we also design efficient algorithms to optimize the latent model for large-scale data. Finally, experimental results on real-world data in New York City (NYC) show that our method is effective for identifying POI demands for different regions."}
{"_id": "83abfffa02e83dc215ec1dbe8d54f5aedd43ed98", "title": "Credit Card Fraud Detection: A case study", "text": "In this research, a technique for `Credit Card Fraud Detection' is developed. As fraudsters are increasing day by day. And fallacious transactions are done by the credit card and there are various types of fraud. So to solve this problem combination of technique is used like Genetic Algorithm, Behavior Based Technique and Hidden Markov Model. By this transaction is tested individually and whatever suits the best is further proceeded. And the foremost goal is to detect fraud by filtering the above techniques to get better result."}
{"_id": "948494a85965aebe7dc119f9fd238c5fb51301ef", "title": "A Review on the ESD Robustness of Drain-Extended MOS Devices", "text": "This paper reviews electrostatic discharge (ESD) investigations on laterally diffused MOS (LDMOS) and drain-extended MOS (DeMOS) devices. The limits of the safe operating area of LDMOS/DeMOS devices and device physics under ESD stress are discussed under various biasing conditions and layout schemes. Specifically, the root cause of early filament formation is highlighted. Differences in filamentary nature among various LDMOS/DeMOS devices are shown. Based on the physical understanding, device optimization guidelines are given. Finally, an outlook on technology scaling is presented."}
{"_id": "19138872a2f8ca34e051fdfa03c0dfdb7a77f016", "title": "A Survey on Cryptography and Steganography Methods for Information Security", "text": "This paper deals with the tidings of cryptography in history, how it has played a vital role in World War -1, World War-2. It also deals with the Substitution, Transposition Cryptographic techniques and Steganography principles which can be used to maintain the confidentiality of computerized and none computerized information files. A number of well known techniques have been adapted for computer usage including the Ceaser cipher, Mono alphabetic cipher, Homophonic substitution, Bale cipher, Play fair cipher, Poly alphabetic cipher, Vigenere Cipher, Onetime pad cipher, Vernam ciphers, Play Color Cipher and usage of rotor machine in Substitutions, Rail fence technique, more complex permutations for more secure transposition and some Steganography principle were briefly discussed with merits and demerits. Finally it gives the broad knowledge on almost all the cryptographic and Steganography principles where a reader or scholar have lot of scope for updating or invention of more secure algorithms to fulfill the global needs in information security."}
{"_id": "408c732c0f06abbc85ac23ab4d3b2353abb4e5dc", "title": "Human C-tactile afferents are tuned to the temperature of a skin-stroking caress.", "text": "Human C-tactile (CT) afferents respond vigorously to gentle skin stroking and have gained attention for their importance in social touch. Pharmacogenetic activation of the mouse CT equivalent has positively reinforcing, anxiolytic effects, suggesting a role in grooming and affiliative behavior. We recorded from single CT axons in human participants, using the technique of microneurography, and stimulated a unit's receptive field using a novel, computer-controlled moving probe, which stroked the skin of the forearm over five velocities (0.3, 1, 3, 10, and 30 cm s(-1)) at three temperatures (cool, 18 \u00b0C; neutral, 32 \u00b0C; warm, 42 \u00b0C). We show that CTs are unique among mechanoreceptive afferents: they discharged preferentially to slowly moving stimuli at a neutral (typical skin) temperature, rather than at the cooler or warmer stimulus temperatures. In contrast, myelinated hair mechanoreceptive afferents proportionally increased their firing frequency with stroking velocity and showed no temperature modulation. Furthermore, the CT firing frequency correlated with hedonic ratings to the same mechano-thermal stimulus only at the neutral stimulus temperature, where the stimuli were felt as pleasant at higher firing rates. We conclude that CT afferents are tuned to respond to tactile stimuli with the specific characteristics of a gentle caress delivered at typical skin temperature. This provides a peripheral mechanism for signaling pleasant skin-to-skin contact in humans, which promotes interpersonal touch and affiliative behavior."}
{"_id": "a555250adc84829fd42ead8061656a669fb2a29b", "title": "Design and fabrication of an autonomous fire fighting robot with multisensor fire detection using PID controller", "text": "Recently, Multisensor Fire Detection System (MSFDS) is one of the important research issues. Here, a fire fighter robot is fabricated providing extinguishment platform. The base of the robot is made of the wood of `Rashed tree', locally known as `Kerosene wood'. There is about 1 liter water reserving capacity. An arduino based simple algorithm is used for detection of fire and measurement of distance from fire source while the robot is on its way to extinguish fire. When the fire is detected and the robot is at a distance near to fire, a centrifugal pump is used to throw water for extinguishment purpose. A water spreader is used for effective extinguishing. It is seen that velocity of water is greatly reduced due to the use of water spreader. Two sensors: LM35 and Arduino Flame Sensors are used to detect the fire and distances on its way towards fire. Sensitivity of these sensors at different day times and distances is tested through analog reading of the serial monitor."}
{"_id": "f82d90b5b19db754bec5bf4354eaaf21e5e6d20e", "title": "Towards Discipline-Independent Argumentative Zoning : Evidence from Chemistry and Computational Linguistics", "text": "Argumentative Zoning (AZ) is an analysis of the argumentative and rhetorical structure of a scientific paper. It has been shown to be reliably used by independent human coders, and has proven useful for various information access tasks. Annotation experiments have however so far been restricted to one discipline, computational linguistics (CL). Here, we present a more informative AZ scheme with 15 categories in place of the original 7, and show that it can be applied to the life sciences as well as to CL. We use a domain expert to encode basic knowledge about the subject (such as terminology and domain specific rules for individual categories) as part of the annotation guidelines. Our results show that non-expert human coders can then use these guidelines to reliably annotate this scheme in two domains, chemistry and computational linguistics."}
{"_id": "57aa09e616382c8b111b8e2b6b36ebb9aa0a2d12", "title": "Fitting conic sections to \"very scattered\" data: An iterative refinement of the bookstein algorithm", "text": "An examination of the geometric interpretation of the error-of-fit measure of the Bookstein algorithm for fitting conic sections shows why it may not be entirely satisfactory when the data are \u201cvery scattered\u201d in the sense that the data points are distributed rather widely about an underlying smooth curve. A simple iterative refinement of the Bookstein algorithm, similar in spirit to iterative weighted least-squares methods in regression analysis, results in a fitted conic section that approximates the conic that would minimize the sum of squared orthogonal distances of data points from the fitted conic. The usefulness and limitations of the refined algorithm are demonstrated on two different types of \u201cvery scattered\u201d data."}
{"_id": "423be52973dab29c31a845ea54c9050aba0d650a", "title": "Walking on Minimax Paths for k-NN Search", "text": "Link-based dissimilarity measures, such as shortest path or Euclidean commute time distance, base their distance on paths between nodes of a weighted graph. These measures are known to be better suited to data manifold with nonconvex-shaped clusters, compared to Euclidean distance, so that k-nearest neighbor (NN) search is improved in such metric spaces. In this paper we present a new link-based dissimilarity measure based on minimax paths between nodes. Two main benefits of minimax path-based dissimilarity measure are: (1) only a subset of paths is considered to make it scalable, while Euclidean commute time distance considers all possible paths; (2) it better captures nonconvex-shaped cluster structure, compared to shortest path distance. We define the total cost assigned to a path between nodes as Lp norm of intermediate costs of edges involving the path, showing that minimax path emerges from our Lp norm over paths framework. We also define minimax distance as the intermediate cost of the longest edge on the minimax path, then present a greedy algorithm to compute k smallest minimax distances between a query and N data points in O(logN + k log k) time. Numerical experiments demonstrate that our minimax kNN algorithm reduce the search time by several orders of magnitude, compared to existing methods, while the quality of k-NN search is significantly improved over Euclidean distance. Introduction Given a set of N data points X = {x1, . . . ,xN}, k-nearest neighbor (k-NN) search in metric spaces involves finding k closest points in the dataset X to a query xq . Dissimilarity measure defines distance duv between two data points (or nodes of a weighted graph) xu and xv in the corresponding metric space, and the performance of k-NN search depends on distance metric. Euclidean distance \u2016xu \u2212 xv\u20162 is the most popular measure for k-NN search but it does not work well when data points X lie on a curved manifold with nonconvex-shaped clusters (see Fig. 1(a)). Metric learning (Xing et al. 2003; Goldberger et al. 2005; Weinberger and Saul 2009) optimizes parameters involving the Mahalanobis distance using Copyright c \u00a9 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. labeled dataset, such that points in the same cluster become close together and points in the different cluster become far apart. Most of metric learning methods are limited to linear embedding, so that the nonconvex-shaped cluster structure is not well captured (see Fig. 1(b)). Link-based (dis)similarity measures (Fouss et al. 2007; Yen, Mantrach, and Shimbo 2008; Yen et al. 2009; Mantrach et al. 2010; Chebotarev 2011) rely on paths between nodes of a weighted graph, on which nodes are associated with data points and intermediate costs (for instance Euclidean distance) are assigned to edge weights. Distance between nodes depends on the total cost that is computed by aggregating edge weights on a path connecting those nodes of interest. Total cost associated with a path is often assumed to be additive (Yen, Mantrach, and Shimbo 2008), so the aggregation reduces to summation. Dissimilarity between two nodes is calculated by integrating the total costs assigned to all possible paths between them. 
Such integration is often determined by computing the pseudo-inverse of the graph Laplacian, leading to Euclidean commute time distance (ECTD), regularized Laplacian kernel, and Markov diffusion kernel (see (Fouss et al. 2007; Yen, Mantrach, and Shimbo 2008; Fouss et al. 2012) and references therein), which are known to better capture the nonconvex-shaped cluster structure (see Fig. 1(c)). However, all possible paths between nodes need to be considered to compute the distances and the inversion of N \u00d7 N matrix requires O(N) time and O(N) space, so it does not scale well to the problems of interest. Shortest path distance (Dijkstra 1959) is also a popular link-based dissimilarity measure (Tenenbaum, de Silva, and Langford 2000), where only shortest paths are considered to compute the distances between two nodes. Computational cost is reduced, but the cluster structure is not well captured when shortest path distance is used for k-NN search (see Fig. 1(d)). The distance between two nodes is computed along the shortest path only, Randomized shortest path (RSP) dissimilarity measures were proposed as a family of distance measures depending on a single parameter, which has interesting property of reducing, on one end, to the standard shortest path distance when the parameter is large, on the other hand, to the commute time distance when the parameter is near zero (Yen, Mantrach, and Shimbo 2008). In this paper we present a new link-based k-NN search method with minimax paths (Pollack 1960; Gower and Ross (a) Euclidean (b) LMNN (c) ECTD (d) Shortest path (e) Minimax distance Figure 1: Real-world datasets usually contain nonconvex-shaped clusters. For example, two curved clusters (dots) and a query point (star) are given. Heat map shows the closeness to the query at each point, in terms of the distance measure used (best viewed in color; more reddish means closer to the query). (a) Euclidean distance cannot reflect the underlying cluster structure at all. (b) Metric learning (Weinberger and Saul 2009) cannot capture the nonconvex shapes. (c) Link-based distance (Fouss et al. 2007) captures the underlying cluster correctly, but requires a large amount of computation. (d) Shortest path distance is efficient to compute, but it is not reliable near the boundary between different clusters. (e) Our method is as efficient as computing the shortest path distance, while capturing the entire cluster structure correctly. 1969), which is well-known to capture the underlying cluster structure of data (e.g., Fig. 1(e)) (Kim and Choi 2007; Luo et al. 2008; Zadeh and Ben-David 2009). We develop a fast k-NN search algorithm that computes the minimax distance efficiently through the minimax paths between data points. Our method is as efficient as the shortest-path distance computation. More specifically, the overall time for kNN search with N data points is onlyO(logN+k log k). In image retrieval experiments on some public image datasets, our method was several orders of magnitude faster, while achieving the comparable search quality (in terms of precision). The main contributions of this paper are summarized: 1. We develop a novel framework for link-based (dis)similarity measures where we define the total cost (corresponding to dissimilarity) assigned to a path between nodes as Lp norm of intermediate costs of edges involving the path, showing that minimax path emerges from our Lp norm over paths framework. 
This framework has two main benefits: (1) it can reduce the number of paths considered for the link-based (dis)similarity computation, which improves the scalability greatly; (2) even using a small number of paths, it can capture any nonconvex-shaped cluster structure successfully. 2. We define minimax distance as the intermediate cost of the longest edge on the minimax path, then present a greedy algorithm to compute k smallest minimax distances between a query and N data points in O(logN + k log k) time, whereas the state of the arts, minimax message passing (Kim and Choi 2007) requires O(N) time for k-NN search. Aggregation over Paths: Lp Norm Link-based (dis)similarity is defined on a weighted graph, denoted by G = (X , E): \u2022 The set of nodes, X = {x1, . . . ,xN}, are associated with N data points. E = {(i, j) | i 6= j \u2208 {1, . . . , N}} is a set of edges between the nodes, excluding self-loop edges. \u2022 The graph is usually assumed to be sparse such that the number of edges, |E|, is limited to O(N). A well-known example is the K-NN graph, where an edge (i, j) exists only if xi is one of the K nearest neighbors of xj or xj is one of the K nearest neighbors of xi in the Euclidean space. The value of K is set to a small constant (usually 5-20) for ensuring sparsity. Then a link-based (dis)similarity depends on the total cost associated with paths on the graph. In this section, we describe our Lp norm approach to the cost aggregation over paths to compute the total cost along paths. Let a = (a0, a1, . . . , am) be a path with m hops, connecting the nodes xa0 ,xa1 , . . . ,xam \u2208 X through the consecutive edges {(a0, a1), (a1, a2), . . . , (am\u22121, am)} \u2286 E . We denote a set of paths with m hops between an initial node xu and a destination node xv by Auv = { a \u2223\u2223 a0 = u, am = v, (a`, a`+1) \u2208 E , a` 6= v, \u2200` = 0, . . . ,m\u2212 1 } , (1) and denote a set of all possible paths between xu and xv by Auv = \u222am=1Auv. (2) A weight c(i, j) is assigned to each edge (i, j), representing the intermediate cost of following edge (i, j), usually defined as the Euclidean distance, i.e., c(i, j) = \u2016xi \u2212 xj\u2016. We define the total cost associated with a path a as"}
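The minimax-path abstract above defines the distance between two nodes as the smallest achievable value of the largest edge weight along any connecting path. The paper's O(log N + k log k) query relies on precomputed structures; the sketch below shows only the underlying quantity, computed with a Dijkstra variant whose path cost aggregates by max instead of sum (the L-infinity end of the Lp framework).

```python
import heapq

def minimax_distances(graph, source):
    """Bottleneck shortest paths: dist[v] is the smallest achievable value
    of the largest edge weight on any path from source to v.

    graph: {node: [(neighbor, edge_cost), ...]} adjacency lists, e.g. a
    sparse K-NN graph with Euclidean edge costs.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, c in graph[u]:
            nd = max(d, c)                # aggregate by max, not sum
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# k-NN of a query node q under minimax distance:
# knn = sorted(minimax_distances(G, q).items(), key=lambda kv: kv[1])[1:k + 1]
```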
{"_id": "da76f9d927564655a65a557b589b23daf1c36425", "title": "The Taste for Privacy: An Analysis of College Student Privacy Settings in an Online Social Network", "text": "The rapid growth of contemporary social network sites (SNSs) has coincided with an increasing concern over personal privacy. College students and adolescents routinely provide personal information on profiles that can be viewed by large numbers of unknown people and potentially used in harmful ways. SNSs like Facebook and MySpace allow users to control the privacy level of their profile, thus limiting access to this information. In this paper, we take the preference for privacy itself as our unit of analysis, and analyze the factors that are predictive of a student having a private versus public profile. Drawing upon a new social network dataset based on Facebook, we argue that privacy behavior is an upshot of both social influences and personal incentives. Students are more likely to have a private profile if their friends and roommates have them; women are more likely to have private profiles than are men; and having a private profile is associated with a higher level of online activity. Finally, students who have private versus public profiles are characterized by a unique set of cultural preferences\u2014of which the \u2018\u2018taste for privacy\u2019\u2019 may be only a small but integral part."}
{"_id": "e6228e0454a00117965c5ed884173531a9246189", "title": "A face(book) in the crowd: social Searching vs. social browsing", "text": "Large numbers of college students have become avid Facebook users in a short period of time. In this paper, we explore whether these students are using Facebook to find new people in their offline communities or to learn more about people they initially meet offline. Our data suggest that users are largely employing Facebook to learn more about people they meet offline, and are less likely to use the site to initiate new connections."}
{"_id": "07db033277c25d0d341ad0fbd8191a90e86422e6", "title": "Instant messaging in teen life", "text": "Instant Messaging (IM) is being widely adopted by teenagers. In a study of 16 teenage IM users, we explore IM as an emerging feature of teen life, focusing our questions on its support of interpersonal communication and its role and salience in everyday life. We qualitatively describe the teens' IM use interpersonally, as well as its place in the domestic ecology. We also identify technology adoption conditions and discuss behaviors around privacy management. In this initial investigation, we found differences in the nature of use between high school and college teens, differences we propose are accounted for by teens' degree of autonomy as a function of domestic and scholastic obligations, the development of independent work practices, Internet connectivity access, and even transportation access. Moreover, while teen IM use is in part characterized as an optimizing choice between multiple communications media, practice is also tied to concerns around peer pressure, peer group membership and creating additional opportunities to socialize."}
{"_id": "5996c10abb13d0cb84a7e4c49c426b0e963f2868", "title": "The lexical constituency model: some implications of research on Chinese for general theories of reading.", "text": "The authors examine the implications of research on Chinese for theories of reading and propose the lexical constituency model as a general framework for word reading across writing systems. Word identities are defined by 3 interlinked constituents (orthographic, phonological, and semantic). The implemented model simulates the time course of graphic, phonological, and semantic priming effects, including immediate graphic facilitation followed by graphic inhibition with simultaneous phonological facilitation, a pattern unique to the Chinese writing system. Pseudocharacter primes produced only facilitation, supporting the model's assumption that lexical thresholds determine phonological and semantic, but not graphic, effects. More generally, both universal reading processes and writing system constraints exist. Although phonology is universal, its activation process depends on how the writing system structures graphic units."}
{"_id": "2f087cdde2ea34b911b3e9917b90fd3d42070eeb", "title": "Statistics of Patch Offsets for Image Completion", "text": "Image completion involves filling missing parts in images. In this paper we address this problem through the statistics of patch offsets. We observe that if we match similar patches in the image and obtain their offsets (relative positions), the statistics of these offsets are sparsely distributed. We further observe that a few dominant offsets provide reliable information for completing the image. With these offsets we fill the missing region by combining a stack of shifted images via optimization. A variety of experiments show that our method yields generally better results and is faster than existing state-of-the-art methods."}
{"_id": "c5d8f8963be7df55dd2c960cd17918c96ac37a56", "title": "Phase-Error Measurement and Compensation in PLL Frequency Synthesizers for FMCW Sensors\u2014I: Context and Application", "text": "The synthesis of linear frequency sweeps or chirps is required, among others, in frequency-modulated continuous-wave radar systems for object position estimation. Low phase and frequency errors in sweeps with high bandwidth are a prerequisite for good accuracy and resolution, but, in certain applications where high measurement rates are desired, the additional demand for short sweep cycles has to be met. Transient phenomena in dynamic synthesizers as well as nonlinear system behavior usually cause unknown phase errors in the system output. For the class of phase-locked-loop (PLL)-based frequency synthesizers, a novel output phase-measurement method and dedicated circuitry are proposed that allow significant reduction of phase errors by adaptive input predistortion. The measurement procedure is implemented within the PLL control circuitry and does not require external equipment. The application of this method to PLL system identification and linearization of extremely short frequency sweeps is shown"}
{"_id": "9f1f42f53fa981c0f7a77b7b317f4fc68ef07ffd", "title": "Advertisement image classification using convolutional neural network", "text": "Image classification is critical and significant research problems in computer vision applications such as facial expression classification, satellite image classification, plant (fruits, flowers, leaf\u2026) classification base on images. This paper proposes the image classification model applied for identifying the display of the online advertisement. The proposal model uses Convolutional Neural Network with two parameters (n, m) where n is a number of layers and m is number of filters in Conv layer. The proposed model is called nLmF-CNN. The suitable values of parameters (n, m) for advertisement image classification are identified by experiments. The input data of the proposed model are online captured images. The processing components of nLmF-CNN are developed as deep neural networks using ConvNetJs library. The output of the proposed model is YES/NO. YES means that the advertisements display clearly. NO means that the advertisements do not display or not clear. The experimental results 86% in our normalizing dataset showed the feasibility of a proposed model nLmF-CNN."}
{"_id": "7747f0c4081527fdead83df071d56b609090e997", "title": "Large Scale Cloudlets Deployment for Efficient Mobile Cloud Computing", "text": "Interest in using Mobile Cloud Computing (MCC) for compute intensive mobile devices jobs such as multimedia applications has been growing. In fact, many new research efforts were invested to maximize the mobile intensive jobs offloading to the cloud within the cloud system coverage area. However, a large scale MCC deployment challenges and limitations are not considered. In this paper, a large scale Cloudlet-based MCC system deployment is introduced, aiming at reducing the power consumption and the network delay of multimedia applications while using MCC. The proposed deployment offers a large coverage area that can be used by mobile devices while they are moving from one location to another to reduce broadband communication needs. Practical experimental results and simulated results using show that using the proposed model reduces the power consumption of the mobile devices as well as reducing the communication latency when the mobile device requests a job to be performed remotely while satisfying the high quality of service requirements of mobile users."}
{"_id": "bf22b4d1191ebafa30c283598bcbdb8da6ad7114", "title": "Auditory Saliency Using Natural Statistics", "text": "In contrast to the wealth of saliency models in the vision literature, there is a relative paucity of models exploring auditory saliency. In this work, we integrate the approaches of (Kayser, Petkov, Lippert, & Logothetis, 2005) and (Zhang, Tong, Marks, Shan, & Cottrell, 2008) and propose a model of auditory saliency. The model combines the statistics of natural soundscapes and the recent past of the input signal to predict the saliency of an auditory stimulus in the frequency domain. To evaluate the model output, a simple behavioral experiment was performed. Results show the auditory saliency maps calculated by the model to be in excellent accord with human judgments of saliency."}
{"_id": "8dc3c0008fb172710642db4fe5fcb2db9b0cd9fe", "title": "Synthesizing data for text recognition with style transfer", "text": "Most of the existing datasets for scene text recognition merely consist of a few thousand training samples with a very limited vocabulary, which cannot meet the requirement of the state-of-the-art deep learning based text recognition methods. Meanwhile, although the synthetic datasets (e.g., SynthText90k) usually contain millions of samples, they cannot fit the data distribution of the small target datasets in natural scenes completely. To address these problems, we propose a word data generating method called SynthText-Transfer, which is capable of emulating the distribution of the target dataset. SynthText-Transfer uses a style transfer method to generate samples with arbitray text content, which preserve the texture of the reference sample in the target dataset. The generated images are not only visibly similar with real images, but also capable of improving the accuracy of the state-of-the-art text recognition methods, especially for the English and Chinese dataset with a large alphabet (in which many characters only appear in few samples, making it hard to learn for sequence models). Moreover, the proposed method is fast and flexible, with a competitive speed among common style transfer methods."}
{"_id": "fa00407c6e11fa3ea84ba8422ba3097e06f2933e", "title": "DRC: a dual route cascaded model of visual word recognition and reading aloud.", "text": "This article describes the Dual Route Cascaded (DRC) model, a computational model of visual word recognition and reading aloud. The DRC is a computational realization of the dual-route theory of reading, and is the only computational model of reading that can perform the 2 tasks most commonly used to study reading: lexical decision and reading aloud. For both tasks, the authors show that a wide variety of variables that influence human latencies influence the DRC model's latencies in exactly the same way. The DRC model simulates a number of such effects that other computational models of reading do not, but there appear to be no effects that any other current computational model of reading can simulate but that the DRC model cannot. The authors conclude that the DRC model is the most successful of the existing computational models of reading."}
{"_id": "83e70a4ecf9ada29678feef30a15be935c9e31e3", "title": "Irreversibility and Heat Generation in the Computing Process", "text": ""}
{"_id": "dd2e69465ecdcd67d5211babb9d1a6754468d174", "title": "DF2Net: Discriminative Feature Learning and Fusion Network for RGB-D Indoor Scene Classification", "text": "This paper focuses on the task of RGB-D indoor scene classification. It is a very challenging task due to two folds. 1) Learning robust representation for indoor scene is difficult because of various objects and layouts. 2) Fusing the complementary cues in RGB and Depth is nontrivial since there are large semantic gaps between the two modalities. Most existing works learn representation for classification by training a deep network with softmax loss and fuse the two modalities by simply concatenating the features of them. However, these pipelines do not explicitly consider intra-class and interclass similarity as well as inter-modal intrinsic relationships. To address these problems, this paper proposes a Discriminative Feature Learning and Fusion Network (DFNet) with two-stage training. In the first stage, to better represent scene in each modality, a deep multi-task network is constructed to simultaneously minimize the structured loss and the softmax loss. In the second stage, we design a novel discriminative fusion network which is able to learn correlative features of multiple modalities and distinctive features of each modality. Extensive analysis and experiments on SUN RGB-D Dataset and NYU Depth Dataset V2 show the superiority of DFNet over other state-of-the-art methods in RGB-D indoor scene classification task."}
{"_id": "f05759da530c3aae8237de4e30b3b207169d4ae9", "title": "Semi-supervised auto-encoder based on manifold learning", "text": "Auto-encoder is a popular representation learning technique which can capture the generative model of data via a encoding and decoding procedure typically driven by reconstruction errors in an unsupervised way. In this paper, we propose a semi-supervised manifold learning based auto-encoder (named semAE). semAE is based on a regularized auto-encoder framework which leverages semi-supervised manifold learning to impose regularization based on the encoded representation. Our proposed approach suits more practical scenarios in which a small number of labeled data are available in addition to a large number of unlabeled data. Experiments are conducted on several well-known benchmarking datasets to validate the efficacy of semAE from the aspects of both representation and classification. The comparisons to state-of-the-art representation learning methods on classification performance in semi-supervised settings demonstrate the superiority of our approach."}
{"_id": "8069614b90ebf48931a4b677a8f77799c94c4edb", "title": "Measuring student engagement in upper elementary through high school : a description of 21 instruments January 2011", "text": ""}
{"_id": "de1c3b02cea3cf00aa5a716455a5efae31a5f1d3", "title": "Web-scale knowledge extraction from semi-structured tables", "text": "A wealth of knowledge is encoded in the form of tables on the World Wide Web. We propose a classification algorithm and a rich feature set for automatically recognizing layout tables and attribute/value tables. We report the frequencies of these table types over a large analysis of the Web and propose open challenges for extracting from attribute/value tables semantic triples (knowledge). We then describe a solution to a key problem in extracting semantic triples: protagonist detection, i.e., finding the subject of the table that often is not present in the table itself. In 79% of our Web tables, our method finds the correct protagonist in its top three returned candidates."}
{"_id": "f87c26c937a3fc8a4cfbece8cebc15884f32a828", "title": "Behavior of Machine Learning Algorithms in Adversarial Environments", "text": "Behavior of Machine Learning Algorithms in Adversarial Environments"}
{"_id": "7b7f92bc462d6e8486ab9e26f2d2f32c93a853d7", "title": "Teaching Virtualization by Building a Hypervisor", "text": "Virtual machines (VMs) are an increasingly ubiquitous feature of modern computing, yet the interested student or professional has limited resources to learn how VMs work. In particular, there is a lack of \"hands-on\" exercises in constructing a virtual machine monitor (VMM, or hypervisor), which are both simple enough to understand completely but realistic enough to capture the practical challenges in using this technology. This paper describes a set of assignments to extend a small, pedagogical operating system (OS) to form a hypervisor and host itself. This pedagogical hypervisor, called HOSS, adds roughly 1,000 lines of code to the MIT JOS source, and includes a set of guided exercises. Initial results with HOSS in an upper-level virtualization course indicate that students enjoyed the assignments and were able to apply what they learned to solve different virtualization-related problems. HOSS is publicly available."}
{"_id": "e1b456831122d0077a7e8365153e1de2fa848e01", "title": "Detection and Classification of Apple Fruit Diseases Using Complete Local Binary Patterns", "text": "Diseases in fruit cause devastating problem in economic losses and production in agricultural industry worldwide. In this paper, a solution for the detection and classification of apple fruit diseases is proposed and experimentally validated. The image processing based proposed approach is composed of the following main steps, in the first step K-Means clustering technique is used for the image segmentation, in the second step some state of the art features are extracted from the segmented image, and finally images are classified into one of the classes by using a Multi-class Support Vector Machine. Our experimental results express that the proposed solution can significantly support accurate detection and automatic classification of apple fruit diseases. The classification accuracy for the proposed solution is achieved up to 93%."}
{"_id": "30b465f38bb1ac49fec77917fcda3feb6d61f3b0", "title": "Inspired by distraction: mind wandering facilitates creative incubation.", "text": "Although anecdotes that creative thoughts often arise when one is engaged in an unrelated train of thought date back thousands of years, empirical research has not yet investigated this potentially critical source of inspiration. We used an incubation paradigm to assess whether performance on validated creativity problems (the Unusual Uses Task, or UUT) can be facilitated by engaging in either a demanding task or an undemanding task that maximizes mind wandering. Compared with engaging in a demanding task, rest, or no break, engaging in an undemanding task during an incubation period led to substantial improvements in performance on previously encountered problems. Critically, the context that improved performance after the incubation period was associated with higher levels of mind wandering but not with a greater number of explicitly directed thoughts about the UUT. These data suggest that engaging in simple external tasks that allow the mind to wander may facilitate creative problem solving."}
{"_id": "9f54314337ccb57c69cbd3fba3484c85b835d1ee", "title": "Design optimization of interior permanent magnet (IPM) motors with maximized torque output in the entire speed range", "text": "A multiobjective optimization procedure for the design of interior PM motors based on differential evolution algorithm is presented. A comparative analysis has been carried out for the optimized 5 kW, 50 kW and 200 kW IPM motors with two and three layers of cavities in the rotor to show design trade-offs between goals to minimize the motor volume while maximizing the power output in the field weakening regime with the back emf constraint as the main limiting factor in the design. The finite element method has been used to calculate the motor parameters during the optimization process. The effectiveness of the proposed design methodology has been shown on a 1.65 kW IPM motor with two layers of cavities in the rotor for which a prototype has been built and tested"}
{"_id": "01f7a71167b4917004fbfd8228a2a1d06515d54c", "title": "Camera Tracking for Augmented Reality Media", "text": "This paper presents a camera tracking system for the spatial stabilization of Augmented Reality (AR) media. Our approach integrates both artificial landmarks (fiducials) and natural features for camera tracking. Artificial landmarks are used for system initialization and computation of initial camera pose. Robust and extendible tracking is achieved by dynamically calibrating the 3D positions of a priori uncalibrated natural features. Analysis and experimental results demonstrate the effectiveness of this approach for presenting stabilized AR media in long camera-motion video sequences."}
{"_id": "e2f3b00ed4915048a979dae122a79d994ce31d82", "title": "Affect Analysis Model: novel rule-based approach to affect sensing from text", "text": "In this paper, we address the tasks of recognition and interpretation of affect communicated through text messaging in online communication environments. Specifically, we focus on Instant Messaging (IM) or blogs, where people use an informal or garbled style of writing. We introduced a novel rule-based linguistic approach for affect recognition from text. Our Affect Analysis Model (AAM) was designed to deal with not only grammatically and syntactically correct textual input, but also informal messages written in an abbreviated or expressive manner. The proposed rule-based approach processes each sentence in stages, including symbolic cue processing, detection and transformation of abbreviations, sentence parsing and word/phrase/sentence-level analyses. Our method is capable of processing sentences of different complexity, including simple, compound, complex (with complement and relative clauses) and complex\u2013compound sentences. Affect in text is classified into nine emotion categories (or neutral). The strength of the resulting emotional state depends on vectors of emotional words, relations among them, tense of the analysed sentence and availability of first person pronouns. The evaluation of the Affect Analysis Model algorithm showed promising results regarding its capability to accurately recognize fine-grained emotions reflected in sentences from diary-like blog posts (averaged accuracy is up to 77 per cent), fairy tales (averaged accuracy is up to 70.2 per cent) and news headlines (our algorithm outperformed eight other systems on several measures)."}
{"_id": "09b180237872345c19d98b7f4a53e7c734cbfc36", "title": "Modeling and Tracking the Driving Environment With a Particle-Based Occupancy Grid", "text": "Modeling and tracking the driving environment is a complex problem due to the heterogeneous nature of the real world. In many situations, modeling the obstacles and the driving surfaces can be achieved by the use of geometrical objects, and tracking becomes the problem of estimating the parameters of these objects. In the more complex cases, the scene can be modeled and tracked as an occupancy grid. This paper presents a novel occupancy grid tracking solution based on particles for tracking the dynamic driving environment. The particles will have a dual nature-they will denote hypotheses, as in the particle filtering algorithms, but they will also be the building blocks of our modeled world. The particles have position and speed, and they can migrate in the grid from cell to cell, depending on their motion model and motion parameters, but they will be also created and destroyed using a weighting-resampling mechanism that is specific to particle filtering algorithms. The tracking algorithm will be centered on particles, instead of cells. An obstacle grid derived from processing a stereovision-generated elevation map is used as measurement information, and the measurement model takes into account the uncertainties of the stereo reconstruction. The resulting system is a flexible real-time tracking solution for dynamic unstructured driving environments."}
{"_id": "3af25fa580f8df90a70ad91d349c0feacabda5f3", "title": "Bayesian Occupancy Filtering for Multitarget Tracking: An Automotive Application", "text": "Reliable and efficient perception and reasoning in dynamic and densely cluttered environments are still major challenges for driver assistance systems. Most of today\u2019s systems use target tracking algorithms based on object models. They work quite well in simple environments such as freeways, where few potential obstacles have to be considered. However, these approaches usually fail in more complex environments featuring a large variety of potential obstacles, as is usually the case in urban driving situations. In this paper, we propose a new approach for robust perception and risk assessment in highly dynamic environments. This approach is called Bayesian occupancy filtering; it basically combines a four-dimensional occupancy grid representation of the obstacle state space with Bayesian filtering techniques. KEY WORDS\u2014multitarget tracking, Bayesian state estimation, occupancy grid"}
{"_id": "747395da611fd739a7d753ffa6bbd1781daa8c04", "title": "Object tracking with de-autocorrelation scheme for a dynamic occupancy gridmap system", "text": "Autonomous driving poses unique challenges for vehicle environment perception in complex driving environments. Due to the uncertain nature of the vehicle environment and imperfection of any perception framework, multiple stages of estimation might be necessary to achieve the desired performance. However, it is highly possible that the estimation of one stage might result in output estimates with significant auto/cross-correlation, which would pass to another stage. In such situations, a decorrelation procedure is required. We present an object tracking approach taking into consideration the auto-correlation (of the velocity components) introduced by an upfront dynamic occupancy gridmap system. More specifically, we use a linear state-space system to approximately \u201creconstruct\u201d the nonlinear estimation/mapping procedure for the purpose of auto-correlation quantification. We focus on demonstrating an estimation improvement of the proposed decorrelation tracker over a direct Kalman filter (i.e., KF that ignores the auto-correlation). For this we recorded several scenarios of different target motion behaviors. It is shown that the decorrelation tracker indeed introduces noticable estimation improvement in particularly velocity space if the target moves in a relatively smooth manner."}
{"_id": "23665555285455b305b29503f943de4cfd39029f", "title": "Fusion of laser and radar sensor data with a sequential Monte Carlo Bayesian occupancy filter", "text": "Occupancy grid mapping is a well-known environment perception approach. A grid map divides the environment into cells and estimates the occupancy probability of each cell based on sensor measurements. An important extension is the Bayesian occupancy filter (BOF), which additionally estimates the dynamic state of grid cells and allows modeling changing environments. In recent years, the BOF attracted more and more attention, especially sequential Monte Carlo implementations (SMC-BOF), requiring less computational costs. An advantage compared to classical object tracking approaches is the object-free representation of arbitrarily shaped obstacles and free-space areas. Unfortunately, publications about BOF based on laser measurements report that grid cells representing big, contiguous, stationary obstacles are often mistaken as moving with the velocity of the ego vehicle (ghost movements). This paper presents a method to fuse laser and radar measurement data with the SMC-BOF. It shows that the doppler information of radar measurements significantly improves the dynamic estimation of the grid map, reduces ghost movements, and in general leads to a faster convergence of the dynamic estimation."}
{"_id": "336605da40485f4c8341f16bd26b9b4849dd0dc1", "title": "Characterizing Online Discussion Using Coarse Discourse Sequences", "text": "In this work, we present a novel method for classifying comments in online discussions into a set of coarse discourse acts towards the goal of better understanding discussions at scale. To facilitate this study, we devise a categorization of coarse discourse acts designed to encompass general online discussion and allow for easy annotation by crowd workers. We collect and release a corpus of over 9,000 threads comprising over 100,000 comments manually annotated via paid crowdsourcing with discourse acts and randomly sampled from the site Reddit. Using our corpus, we demonstrate how the analysis of discourse acts can characterize different types of discussions, including discourse sequences such as Q&A pairs and chains of disagreement, as well as different communities. Finally, we conduct experiments to predict discourse acts using our corpus, finding that structured prediction models such as conditional random fields can achieve an F1 score of 75%. We also demonstrate how the broadening of discourse acts from simply question and answer to a richer set of categories can improve the recall performance of Q&A extraction."}
{"_id": "102b4deefd5438d15005964224f65db31c358c9b", "title": "Audience Involvement in Advertising : Four Levels", "text": "The effectiveness of advertising messages is widely believed to be moderated by audience involvement. In this paper, psychological theories of attention and levels of processing are used to establish a framework that can accommodate the major consumer behavior theories of audience involvement. Four levels of involvement are identified (in order from low to high) as preattention, focal attention, comprehension, and elaboration. These levels allocate increasing attentional capacity to a message source, as needed for analysis of the message by increasingly abstract-and qualitatively distinct-representational systems. Lower levels use relatively little capacity and extract information needed to determine whether higher levels will be invoked. The higher levels require greater capacity and result in increasingly durable cognitive and attitudinal effects."}
{"_id": "e07988d44357ad840556e093e98fdf0de4c89ae9", "title": "(In-)Secure messaging with the Silent Circle instant messaging protocol", "text": "Silent Text, the instant messaging application by the company Silent Circle, provides its users with end-to-end encrypted communication on the Blackphone and other smartphones. The underlying protocol, SCimp, has received many extensions during the update to version 2, but has not been subjected to critical review from the cryptographic community. In this paper, we analyze both the design and implementation of SCimp by inspection of the documentation (to the extent it exists) and code. Many of the security properties of SCimp version 1 are found to be secure, however many of the extensions contain vulnerabilities and the implementation contains bugs that affect the overall security. These problems were fed back to the SCimp maintainers and some bugs were fixed in the code base. In September 2015, Silent Circle replaced SCimp with a new protocol based on the Signal Protocol."}
{"_id": "5fe43f4d79d2829ba062fe9cf1711ee60903dd24", "title": "Chimera: Collaborative Preemption for Multitasking on a Shared GPU", "text": "The demand for multitasking on graphics processing units (GPUs) is constantly increasing as they have become one of the default components on modern computer systems along with traditional processors (CPUs). Preemptive multitasking on CPUs has been primarily supported through context switching. However, the same preemption strategy incurs substantial overhead due to the large context in GPUs. The overhead comes in two dimensions: a preempting kernel suffers from a long preemption latency, and the system throughput is wasted during the switch. Without precise control over the large preemption overhead, multitasking on GPUs has little use for applications with strict latency requirements.\n In this paper, we propose Chimera, a collaborative preemption approach that can precisely control the overhead for multitasking on GPUs. Chimera first introduces streaming multiprocessor (SM) flushing, which can instantly preempt an SM by detecting and exploiting idempotent execution. Chimera utilizes flushing collaboratively with two previously proposed preemption techniques for GPUs, namely context switching and draining to minimize throughput overhead while achieving a required preemption latency. Evaluations show that Chimera violates the deadline for only 0.2% of preemption requests when a 15us preemption latency constraint is used. For multi-programmed workloads, Chimera can improve the average normalized turnaround time by 5.5x, and system throughput by 12.2%."}
{"_id": "45ba546a85526d901e5979d89f3cee0c05c3189b", "title": "Striatal Plasticity and Basal Ganglia Circuit Function", "text": "The dorsal striatum, which consists of the caudate and putamen, is the gateway to the basal ganglia. It receives convergent excitatory afferents from cortex and thalamus and forms the origin of the direct and indirect pathways, which are distinct basal ganglia circuits involved in motor control. It is also a major site of activity-dependent synaptic plasticity. Striatal plasticity alters the transfer of information throughout basal ganglia circuits and may represent a key neural substrate for adaptive motor control and procedural memory. Here, we review current understanding of synaptic plasticity in the striatum and its role in the physiology and pathophysiology of basal ganglia function."}
{"_id": "da40befc56c2c8c8ca05b04640b17f1d41341a6d", "title": "All-solid-state dye-sensitized solar cells with high efficiency", "text": "Dye-sensitized solar cells based on titanium dioxide (TiO2) are promising low-cost alternatives to conventional solid-state photovoltaic devices based on materials such as Si, CdTe and CuIn1\u2212xGaxSe2 (refs 1, 2). Despite offering relatively high conversion efficiencies for solar energy, typical dye-sensitized solar cells suffer from durability problems that result from their use of organic liquid electrolytes containing the iodide/tri-iodide redox couple, which causes serious problems such as electrode corrosion and electrolyte leakage. Replacements for iodine-based liquid electrolytes have been extensively studied, but the efficiencies of the resulting devices remain low. Here we show that the solution-processable p-type direct bandgap semiconductor CsSnI3 can be used for hole conduction in lieu of a liquid electrolyte. The resulting solid-state dye-sensitized solar cells consist of CsSnI2.95F0.05 doped with SnF2, nanoporous TiO2 and the dye N719, and show conversion efficiencies of up to 10.2 per cent (8.51 per cent with a mask). With a bandgap of 1.3 electronvolts, CsSnI3 enhances visible light absorption on the red side of the spectrum to outperform the typical dye-sensitized solar cells in this spectral region."}
{"_id": "6ce339eab51d584d69fec70d8b0463b7cda2c242", "title": "Process Mining Based on Regions of Languages", "text": "In this paper we give an overview, how to apply region based me thods for the synthesis of Petri nets from languages to process min ing. The research domain of process mining aims at constructing a process model from an event log, such that the process model can reproduce t he log, and does not allow for much more behaviour than shown in the log. We her e consider Petri nets to represent process models. Event logs can be int erpreted as finite languages. Region based synthesis methods can be used to constr u t a Petri net from a language generating the minimal net behaviour including t he given language. Therefore, it seems natural to apply such methods in the proc ess mining domain. There are several different region based methods in literat ure yielding different Petri nets. We adapt these methods to the process mining doma in and compare them concerning efficiency and usefulness of the resulting P etri net."}
{"_id": "b4757fd20eb8ef354f5a0c42ea117f831d4b95d1", "title": "Enterprise architecture modelling--the issue of integration", "text": "The problem of aligning and integrating business and IT is hampering many companies in their strategic and tactical development. Constructing integrated architecture models contributes to tackling this problem. Unfortunately, no enterprise architecture description language currently exists that fully enables integrated enterprise modelling. A variety of architectural domains are commonly distinguished for which architects use their own modelling techniques and concepts, tool support, visualisation techniques, etc. In this paper we outline such an integrated language and describe concepts that relate architectural domains. This language also serves as a bridge between other existing modelling languages. Furthermore, we present the design of a workbench for enterprise architecture that serves as a modelling tool and a tool integration environment at the same time: it supports both the integration of models in existing modelling languages and the integration of existing modelling tools. q 2005 Elsevier Ltd. All rights reserved."}
{"_id": "9d247f674db91760674aedf5312e813b90272322", "title": "The IHS Transformations Based Image Fusion", "text": "The IHS sharpening technique is one of the most commonly used techniques for sharpening. Different transformations have been developed to transfer a color image from the RGB space to the IHS space. Through literature, it appears that, various scientists proposed alternative IHS transformations and many papers have reported good results whereas others show bad ones as will as not those obtained which the formula of IHS transformation were used. In addition to that, many papers show different formulas of transformation matrix such as IHS transformation. This leads to confusion what is the exact formula of the IHS transformation?. Therefore, the main purpose of this work is to explore different IHS transformation techniques and experiment it as IHS based image fusion. The image fusion performance was evaluated, in this study, using various methods to estimate the quality and degree of information improvement of a fused image quantitatively."}
{"_id": "aa03a69f3f7c3d0dd423288fdf86427a0270d53a", "title": "MetaLDA: A Topic Model that Efficiently Incorporates Meta Information", "text": "Besides the text content, documents and their associated words usually come with rich sets of meta information, such as categories of documents and semantic/syntactic features of words, like those encoded in word embeddings. Incorporating such meta information directly into the generative process of topic models can improve modelling accuracy and topic quality, especially in the case where the word-occurrence information in the training data is insufficient. In this paper, we present a topic model, called MetaLDA, which is able to leverage either document or word meta information, or both of them jointly. With two data argumentation techniques, we can derive an efficient Gibbs sampling algorithm, which benefits from the fully local conjugacy of the model. Moreover, the algorithm is favoured by the sparsity of the meta information. Extensive experiments on several real world datasets demonstrate that our model achieves comparable or improved performance in terms of both perplexity and topic quality, particularly in handling sparse texts. In addition, compared with other models using meta information, our model runs significantly faster."}
{"_id": "fdf308997cf93af43eaa0b78abe66097b1540678", "title": "A Novel Approach of an Absolute Encoder Coding Pattern", "text": "This paper presents a novel approach of an absolute rotary and linear optical encoder codding patterns. The concept is based on the analysis of 2-D images to find a unique sequence that allows to unambiguously determine an angular shaft position. The adopted coding method allows to achieve a very high data density for specified number of tracks on the code pattern disc. Encoders based on the proposed solution enable the production from readily available and inexpensive components. Nevertheless, encoders retain high measuring accuracy and high dependability obtained in the classical approach. The optical device and a design of processing system is proposed. Finally, the feasibility of the pattern coding method is further proved with encoder prototype."}
{"_id": "83fdf0de92d4e8781dd941fa2af83265bab86cb8", "title": "Intrinsically Motivated Reinforcement Learning: A Promising Framework for Developmental Robot Learning", "text": "One of the primary challenges of developmental robotics is the question of how to learn and represent increasingly complex behavior in a self-motivated, open-ended way. Barto, Singh, and Chentanez (Barto, Singh, & Chentanez 2004; Singh, Barto, & Chentanez 2004) have recently presented an algorithm for intrinsically motivated reinforcement learning that strives to achieve broad competence in an environment in a tasknonspecific manner by incorporating internal reward to build a hierarchical collection of skills. This paper suggests that with its emphasis on task-general, self-motivated, and hierarchical learning, intrinsically motivated reinforcement learning is an obvious choice for organizing behavior in developmental robotics. We present additional preliminary results from a gridworld abstraction of a robot environment and advocate a lay-ion of a robot environment and advocate a layered learning architecture for applying the algorithm on a physically embodied system."}
{"_id": "e81b8dc8241a54547c4d6c0a41975ed167215572", "title": "Soft contact lens biosensor for in situ monitoring of tear glucose as non-invasive blood sugar assessment.", "text": "A contact lens (CL) biosensor for in situ monitoring of tear glucose was fabricated and tested. Biocompatible 2-methacryloyloxyethyl phosphorylcholine (MPC) polymer and polydimethyl siloxane (PDMS) were employed as the biosensor material. The biosensor consists of a flexible Pt working electrode and a Ag/AgCl reference/counter electrode, which were formed by micro-electro-mechanical systems (MEMS) technique. The electrode at the sensing region was modified with glucose oxidase (GOD). The CL biosensor showed a good relationship between the output current and glucose concentration in a range of 0.03-5.0mM, with a correlation coefficient of 0.999. The calibration range covered the reported tear glucose concentrations in normal and diabetic patients. Also, the CL biosensor was applied to a rabbit for the purpose of tear glucose monitoring. The basal tear glucose was estimated to 0.11 mM. Also, the change of tear glucose induced by the change of blood sugar level was assessed by the oral glucose tolerance test. As a result, tear glucose level increased with a delay of 10 min from blood sugar level. The result showed that the CL biosensor is expected to provide further detailed information about the relationship between dynamics of blood glucose and tear glucose."}
{"_id": "cec1deb795d7c62143fd1693d51093126cddceb5", "title": "Fourier Transform Decoding of Non-Binary LDPC Codes", "text": "Binary low-density parity-check (LDPC) codes have recently emerged as a promising alternative to turbo codes and have been shown to achieve excellent coding gains in AWGN channels. Non-binary LDPC codes have demonstrated improved performance over binary LDPC codes but their decoding is computationally intensive. Fourier transform decoding can reduce the decoding complexity but has a number of problems which can be avoided by implementing this algorithm in the log domain. In this paper Fourier transform decoding of non-binary LDPC codes in the log domain is described. The computational complexity and performance of this algorithm is then discussed. Differential evolution is then used to search for optimal binary and non-binary LDPC codes and it is shown by simulation that non-binary LDPC codes are superior."}
{"_id": "be787bbfafcc2bc5b10fc57fbb8fa1784631ca70", "title": "Selective separation of virgin and post-consumer polymers (PET and PVC) by flotation method.", "text": "More and more polymer wastes are generated by industry and householders today. Recycling is an important process to reduce the amount of waste resulting from human activities. Currently, recycling technologies use relatively homogeneous polymers because hand-sorting waste is costly. Many promising technologies are being investigated for separating mixed thermoplastics, but they are still uneconomical and unreliable. At present, most waste polymers cause serious environmental problems. Burning polymers for recycling is not practiced since poisonous gases are released during the burning process. Particularly, polyvinyl chloride (PVC) materials among waste polymers generate hazardous HCl gas, dioxins containing Cl, etc., which lead to air pollution and shorten the life of the incinerator. In addition, they make other polymers difficult to recycle. Both polyethylene terephthalate (PET) and PVC have densities of 1.30-1.35g /cm(3) and cannot be separated using conventional gravity separation techniques. For this reason, polymer recycling needs new techniques. Among these techniques, froth flotation, which is also used in mineral processing, can be useful because of its low cost and simplicity. The main objective of this research is to recycle PET and PVC selectively from post-consumer polymer wastes and virgin polymers by using froth flotation. According to the results, all PVC particles were floated with 98.8% efficiency in virgin polymer separation while PET particles were obtained with 99.7% purity and 57.0% efficiency in post-consumer polymer separation."}
{"_id": "19ef74ae5dc814eaff2f25626bc9e313317a6fae", "title": "Designing a robotic assistant for healthcare applications", "text": "The population of the world is ageing rapidly. By 2050, the population aged 85 and over will be three times more than it is now. This phenomenon has caused several issues in the current health service system, especially workforce shortages in the health sector and a lack of space in aged care facilities (ACFs). In the face of these issues, home-based and community-based healthcare services have been identified as necessary in many developed countries to promote ageing-inplace and independent living in order to: 1. Lower the demands on health services and hence improve the quality of the services delivered, and 2. Maintain the quality of life of the older population by enabling them to be close to their families. For the last decade, a rising interest in personal robots as part of the technical solution in decentralised health services has led to an extensive range of research and implementations of health service and personal assistant robots. This paper describes a new research project to develop an assistant robot capable of interacting with patients, taking vital signs measurements and recording the data in healthcare environments such as aged care facilities, hospitals or personal homes. Current progress includes a comprehensive literature survey on recent health service robots with a list of issues in the area and an initial human-robot interaction study. The robot is currently interfaced with a blood pressure monitor and has a 3D face which is capable of displaying a range of different emotions with lips synchronized to speech."}
{"_id": "8cc0617b1e708b86fdca552bc6947a2c4ef902cf", "title": "Independent Factor Reinforcement Learning for Portfolio Management", "text": "In this paper we propose to do portfolio management using reinforcement learning (RL) and independent factor model. Factors in independent factor model are mutually independent and exhibit better predictability. RL is applied to each factor to capture temporal dependence and provide investment suggestion on factor. Optimal weights on factors are found by portfolio optimization method subject to the investment suggestions and general portfolio constraints. Experimental results and analysis are given to show that the proposed method has better performance when compare to two alternative portfolio management systems."}
{"_id": "9bcb7033cc37052e4dd054d6d6df626b846f0e68", "title": "A Proposal for Statistical Outlier Detection in Relational Structures", "text": "This paper extends unsupervised statistical outlier detection to the case of relational data. For nonrelational data, where each individual is characterized by a feature vector, a common approach starts with learning a generative statistical model for the population. The model assigns a likelihood measure for the feature vector that characterizes the individual; the lower the feature vector likelihood, the more anomalous the individual. A difference between relational and nonrelational data is that an individual is characterized not only by a list of attributes, but also by its links and by attributes of the individuals linked to it. We refer to a relational structure that specifies this information for a specific individual as the individual\u2019s database. Our proposal is to use the likelihood assigned by a generative model to the individual\u2019s database as the anomaly score for the individual; the lower the model likelihood, the more anomalous the individual. As a novel validation method, we compare the model likelihood with metrics of individual success. An empirical evaluation reveals a surprising finding in soccer and movie data: We observe in the data a strong correlation between the likelihood and success metrics."}
{"_id": "0623a11d5f534b194ab9f22c8a986f6ff8cf7578", "title": "Differentially Private Online Learning for Cloud-Based Video Recommendation With Multimedia Big Data in Social Networks", "text": "With the rapid growth in multimedia services and the enormous offers of video content in online social networks, users have difficulty in obtaining their interests. Therefore, various personalized recommendation systems have been proposed. However, they ignore that the accelerated proliferation of social media data has led to the big data era, which has greatly impeded the process of video recommendation. In addition, none of them has considered both the privacy of users' contexts (e.g., social status, ages, and hobbies) and video service vendors' repositories, which are extremely sensitive and of significant commercial value. To handle these problems, we propose a cloud-assisted differentially private video recommendation system based on distributed online learning. In our framework, service vendors are modeled as distributed cooperative learners, recommending videos according to user's context, while simultaneously adapting the video-selection strategy based on user-click feedback to maximize total user clicks (reward). Considering the sparsity and heterogeneity of big social media data, we also propose a novel geometric differentially private model, which can greatly reduce the performance loss. Our simulation shows the proposed algorithms outperform other existing methods and keep a delicate balance between the total reward and privacy preserving level."}
{"_id": "10446e36578c2952d4f9b90b3eaa2ae552e9295e", "title": "Embree: a kernel framework for efficient CPU ray tracing", "text": "We describe Embree, an open source ray tracing framework for x86 CPUs. Embree is explicitly designed to achieve high performance in professional rendering environments in which complex geometry and incoherent ray distributions are common. Embree consists of a set of low-level kernels that maximize utilization of modern CPU architectures, and an API which enables these kernels to be used in existing renderers with minimal programmer effort. In this paper, we describe the design goals and software architecture of Embree, and show that for secondary rays in particular, the performance of Embree is competitive with (and often higher than) existing state-of-the-art methods on CPUs and GPUs."}
{"_id": "7a8da17da9f31bf9be938600b9a45f74ae84d2d9", "title": "Barycentric Lagrange Interpolation", "text": "Barycentric interpolation is a variant of Lagrange polynomial interpolation that is fast and stable. It deserves to be known as the standard method of polynomial interpolation."}
{"_id": "f39d0c70e0f89296045f0f11d79e15fa978e3081", "title": "Feature warping for robust speaker verification", "text": "We propose a novel feature mapping approach that is robust to channel mismatch, additive noise and to some extent, nonlinear effects attributed to handset transducers. These adverse effects can distort the short-term distribution of the speech features. Some methods have addressed this issue by conditioning the variance of the distribution, but not to the extent of conforming the speech statistics to a target distribution. The proposed target mapping method warps the distribution of a cepstral feature stream to a standardised distribution over a specified time interval. We evaluate a number of the enhancement methods for speaker verification, and compare them against a Gaussian target mapping implementation. Results indicate improvements of the warping technique over a number of methods such as Cepstral Mean Subtraction (CMS), modulation spectrum processing, and short-term windowed CMS and variance normalisation. This technique is a suitable feature post-processing method that may be combined with other techniques to enhance speaker recognition robustness under adverse conditions."}
{"_id": "161cd409a78437df0ce56afe8a2cf53c125605d2", "title": "Ultrawideband All-Metal Flared-Notch Array Radiator", "text": "Simulations and measurements are presented for an all-metal flared-notch array element in both single and dual-polarization configurations. The ultrawideband radiator exhibits an operational bandwidth of 12:1 for broadside scan and 8:1 bandwidth at a 45-degree scan in all planes, maintaining active VSWR<;2. The feed consists of a direct coax-to-slot-line transition that mounts directly into the base of the radiator. The all-metal flared-notches are machined from common metal stock and fed via SMA coaxial connectors. No soldering is required for any part of the design-including the feed-and assembly is simple and modular. The array parts are machined using a high-precision wire-EDM cutting technology, ensuring that measurements (in the 700 MHz-9 GHz range) are repeatable and give close agreement with theory, even through multiple assembly cycles of the modular construction system. This paper presents results for a 32-element linear array of horizontal elements and also an 8 \u00d7 8 planar array of dual-polarized elements, comparing measurements with full-wave simulations of the complete finite array structures."}
{"_id": "209929b05cee369ee000ae4ae4c2ec7d26cff197", "title": "Dual-Polarized Broadband Array Antenna With BOR-Elements, Mechanical Design and Measurements", "text": "A dual polarized broadband phased array antenna designed for the frequency range 6-18 GHz, a 45deg conical grating lobe free scan volume and equipped with BOR-elements developed by Saab is presented. The aim with this array element is to bring about a dual polarized broadband array antenna that is easy to assemble, disassemble and connect to active microwave modules. Disassembling may be important for maintenance and upgrade reasons. Mechanical design and electromagnetic performance in the form of active reflection coefficient, calculated from measured mutual coupling coefficients, and measured active gain element pattern for a central and an edge element is presented. Edge effects in the array, which may be severe in small broadband arrays, are considered in this paper"}
{"_id": "d176cc188e5c12dadf8f24aa524db4162a4857ed", "title": "12-to-1 bandwidth all-metal Vivaldi array element", "text": "The notch element is a classic example of a wideband antenna that functions very well as a stand-alone radiator and also as an element in wideband phased arrays [1, 2]. Also known as a Vivaldi Aerial or tapered-slot antenna, the Vivaldi has seen perhaps the most widespread use in ultra-high bandwidth applications (\u226b 3:1 Bandwidth) of any type of element due to its robustness in design and range of manufacturability. Part of its popularity stems from the ease with which inexpensive printed-circuit board manufacturing can be used to create large arrays. It is also popular in high-power radar applications as well as ultra-wideband military applications, where all-metal designs are typically preferred. In this paper, a 12:1 bandwidth all-metal Vivaldi array element design is presented that can be easily machined from metal sheets and fed via standard SMA coax connectors. No soldering is required and assembly is simple and modular."}
{"_id": "25c0799f57436a3a4f3e2c51debe52aeb67040ab", "title": "Analysis of a Wavelength-Scaled Array (WSA) Architecture", "text": "Wavelength-scaled array architectures use scaled elements to achieve ultrawideband performance with significantly fewer overall radiators than traditional ultrawideband arrays based on a single element type. Compared to a conventional ultrawideband array with 8:1 bandwidth, a wavelength-scaled array that uses elements of three different sizes creates an aperture with fewer than 16% of the original element count, i.e., 6.4-times fewer elements, and by extension a comparable reduction in electronics required to feed the array. In this paper, a study of an asymmetric wavelength-scaled array architecture is presented for finite arrays of offset-centered dual-polarized flared-notch radiators. The unique element transitions within the finite array structure are modeled via a non-matching grid Domain Decomposition-Finite Element Method that allows for rigorous impedance and radiation pattern prediction of full-sized wavelength-scaled arrays. This design study shows that the wavelength-scaled array has comparable performance to traditional ultrawideband arrays in terms of VSWR, radiation patterns, array mismatch efficiency, and cross-polarization levels."}
{"_id": "f32003ac64288444501eab81dad61dc945573696", "title": "A comparative study of image classification algorithms for Foraminifera identification", "text": "Identifying Foraminifera (or forams for short) is essential for oceanographic and geoscience research as well as petroleum exploration. Currently, this is mostly accomplished using trained human pickers, routinely taking weeks or even months to accomplish the task. In this paper, a foram identification pipeline is proposed to automatic identify forams based on computer vision and machine learning techniques. A microscope based image capturing system is used to collect a labelled image data set. Various popular image classification algorithms are adapted to this specific task and evaluated under various conditions. Finally, the potential of a weighted cross-entropy loss function in adjusting the trade-off between precision and recall is tested. The classification algorithms provide competitive results when compared to human experts labeling of the data set."}
{"_id": "97bac7ce5d830ae475bd4f5485add51f6914f2ae", "title": "ActiveCDN: Cloud Computing Meets Content Delivery Networks", "text": "Content delivery networks play a crucial role in today\u2019s Internet. They serve a large portion of the multimedia on the Internet and solve problems of scalability and indirectly network congestion (at a price). However, most content delivery networks rely on a statically deployed configuration of nodes and network topology that makes it hard to grow and scale dynamically. We present ActiveCDN, a novel CDN architecture that allows a content publisher to dynamically scale their content delivery services using network virtualization and cloud computing techniques."}
{"_id": "7b33745e9025c08e51fa45238d73374d6f7f92e5", "title": "Verb Phrase Ellipsis Resolution Using Discriminative and Margin-Infused Algorithms", "text": "Verb Phrase Ellipsis (VPE) is an anaphoric construction in which a verb phrase has been elided. It occurs frequently in dialogue and informal conversational settings, but despite its evident impact on event coreference resolution and extraction, there has been relatively little work on computational methods for identifying and resolving VPE. Here, we present a novel approach to detecting and resolving VPE by using supervised discriminative machine learning techniques trained on features extracted from an automatically parsed, publicly available dataset. Our approach yields state-of-the-art results for VPE detection by improving F1 score by over 11%; additionally, we explore an approach to antecedent identification that uses the Margin-Infused-RelaxedAlgorithm, which shows promising results."}
{"_id": "aaaead632a12b372a91ce2ca5810d98234a82a39", "title": "Unsupervised Trajectory Clustering via Adaptive Multi-kernel-Based Shrinkage", "text": "This paper proposes a shrinkage-based framework for unsupervised trajectory clustering. Facing to the challenges of trajectory clustering, e.g., large variations within a cluster and ambiguities across clusters, we first introduce an adaptive multi-kernel-based estimation process to estimate the 'shrunk' positions and speeds of trajectories' points. This kernel-based estimation effectively leverages both multiple structural information within a trajectory and the local motion patterns across multiple trajectories, such that the discrimination of the shrunk point can be properly increased. We further introduce a speed-regularized optimization process, which utilizes the estimated speeds to regularize the optimal shrunk points, so as to guarantee the smoothness and the discriminative pattern of the final shrunk trajectory. Using our approach, the variations among similar trajectories can be reduced while the boundaries between different clusters are enlarged. Experimental results demonstrate that our approach is superior to the state-of-art approaches on both clustering accuracy and robustness. Besides, additional experiments further reveal the effectiveness of our approach when applied to trajectory analysis applications such as anomaly detection and route analysis."}
{"_id": "dfe17bd72e526a55f93511e30d68e68e3d04498b", "title": "RCS Reduction of Ridged Waveguide Slot Antenna Array Using EBG Radar Absorbing Material", "text": "This letter investigates the application of EBG radar absorbing material (RAM) to asymmetric ridged waveguide slot antenna array to reduce its backward RCS. The EBG RAM is based on the mushroom-like EBG structure loaded with lumped resistances. A ridged waveguide slot antenna array with 4 times 10 slot elements was designed and built, part of the metal ground plane of the array is replaced with this EBG RAM. The measured results show that performance of the antenna array is preserved when EBG RAM is used. At working frequency the backward RCS of antenna array with EBG RAM has dramatically reduced than that of the common antenna array."}
{"_id": "c005eab75655d4065274b5f52454a4b22c2b70b5", "title": "Improved LSB based Steganography Techniques for Color Images in Spatial Domain", "text": "This research paper aims to propose a new improved approach for Information Security in RGB Color Images using a Hybrid Feature detection technique; Two Component based Least Significant Bit (LSB) Substitution Technique and Adaptive LSB substitution technique for data hiding. Advanced Encryption Standard (AES) is used to provide Two Tier Security; Random Pixel Embedding imparts resistant to attacks and Hybrid Filtering makes it immune to various disturbances like noise. An image is combination of edge and smooth areas which gives an ample opportunity to hide information in it. The proposed work is direct implementation of the principle that edge areas being high in contrast, color, density and frequency can tolerate more changes in their pixel values than smooth areas, so can be embedded with a large number of secret data while retaining the original characteristics of image. The proposed approach achieved Improved Imperceptibility, Capacity than the various existing techniques along with Better Resistance to various Steganalysis attacks like Histogram Analysis, Chi-Square and RS Analysis as proven experimentally."}
{"_id": "600273eb15190754983cdde83d699e3db69f8d85", "title": "The LaHave House Project: Towards an Automated Architectural Design Service", "text": "The LaHave House Project explores the creation of an automated architectural design service based on an industrial design approach to architecture in which Architects design families of similarly structured objects, rather than individual ones. Our current system consists of three software components: 1) A design engine that uses shape grammars to generate a library of preliminary level house designs, 2) A design development tool that permits end-users to selected, customize and visualize designs drawn from the library and 3) A building systems configuration tool that transforms customized designs into working/assembly drawings. Our aim is to generate modern realizable houses that combine beautiful forms with a modern approach to space planning. We are currently completing an integrated online prototype that allows end-users to select, customize and visualization generated house designs over the Internet in 3D, using a Java/VRML based design development tool."}
{"_id": "a4691d000be35da7f315e6582df33be4278248fc", "title": "Using All Our Senses : the need for a Unified Theoretical Approach to Multi-sensory Information Visualization", "text": "There is an opportunity to consider the theory of visualization for each of our senses together. We should have a united theory of visualization that covers all senses and allows users to integrate modalities. We believe there are several important facets to the theory that need to be addressed at all stages of the multi-sensory visualization process. (1) Data enhancement for multi-sensory visualization; (2) perceptual variables for each sense; (3) Transference of the visualization methods and designs from one domain to another. (4) Cross-modal device integration. How multi-sensory devices are built and are used together; (5) Sensory integration and cross-modal interference: how different sensations interfere or reinforce each other. We need a body of research that will help researchers tackle questions such as \u2018what are the perceptual variables that are available?\u2019, \u2018what are their limitations?\u2019, \u2018Do ideas and concepts that work in one modality transfer to another?\u2019 and \u2018what are the best design strategies for depicting information in another modality?\u2019."}
{"_id": "3924f06c0e8ab8c68f96083d9070b5745048f6ae", "title": "Acceptance and use of electronic library services in ugandan universities", "text": "University libraries in Developing Countries (DCs), hampered by developmental problems, find it hard to provide electronic services. Donor communities have come in to bridge this technology gap by providing funds to university libraries for information technology infrastructure, enabling these university libraries to provide electronic library services to patrons. However, for these services to be utilized effectively, library end-users must accept and use them. To investigate this process in Uganda, this study modifies \"The Unified Theory of Acceptance and Use of Technology\" (UTAUT) by replacing \"effort expectancy\" and \"voluntariness\" with \"relevancy\", \"awareness\" and \"benefits\" factors. In so doing, we developed the Service Oriented UTAUT (SOUTAUT) model whose dependent constructs predict 133% of the variances in user acceptance and use of e-library services. The study revealed that relevancy moderated by awareness plays a major factor in acceptance and use of e-library services in DCs."}
{"_id": "0b0d055463c70f56549caac2286ebd4d9cbaacc5", "title": "A comparison of soft-switched DC-to-DC converters for electrolyser application", "text": "An Electrolyser is part of a renewable energy system (RES) and generates hydrogen from water electrolysis that is used in fuel cells. A dc-to-dc converter is required to couple the Electrolyser to the system DC bus. This paper presents the design of three soft-switched high-frequency transformer isolated dc-to-dc converters for this application based on the given specifications. It is shown that LCL-type series resonant converter with capacitive output filter is suitable for this application. Detailed theoretical and simulation results are presented. Due to the wide variation in input voltage and load current, no converter can maintain zero-voltage switching for the complete operating range. Therefore, a two stage approach (ZVT boost converter followed by LCL SRC with capacitive output filter) will be studied in the future."}
{"_id": "3d9bfc34544a5091ab8523f6a2c991a61edf1338", "title": "Fiber-Reinforced Polymer Composites for Construction \u2014 State-ofthe-Art Review", "text": "A concise state-of-the-art survey of fiber-reinforced polymer ~also known as fiber-reinforced plastic ! composites for construc tion applications in civil engineering is presented. The paper is organized into separate sections on structural shapes, bridge dec reinforcements, externally bonded reinforcements, and standards and codes. Each section includes a historical review, the curr the art, and future challenges. DOI: 10.1061/ ~ASCE!1090-0268~2002!6:2~73! CE Database keywords: Composite materials; Fiber-reinforced materials; Polymers; Bridge decks; State-of-the-art reviews. rials sive uilt , th To So-"}
{"_id": "5ae0ed96b0ebb7d8a03840aaf64c53b07ff0c1e7", "title": "News Sentiment Analysis Based on Cross-Domain Sentiment Word Lists and Content Classifiers", "text": "The main task of sentiment classification is to automatically judge sentiment polarity (positive or negative) of published sentiment data (e.g. news or reviews). Some researches have shown that supervised methods can achieve good performance for blogs or reviews. However, the polarity of a news report is hard to judge. Web news reports are different from other web documents. The sentiment features in news are less than the features in other Web documents. Besides, the same words in different domains have different polarity. So we propose a selfgrowth algorithm to generate a cross-domain sentiment word list, which is used in sentiment classification of Web news. This paper considers some previously undescribed features for automatically classifying Web news, examines the effectiveness of these techniques in isolation and when aggregated using classification algorithms, and also validates the selfgrowth algorithm for the cross-domain word list."}
{"_id": "6b6b040b9dbe1aceda320f502c092e0679e130cb", "title": "Automatic Targetless Extrinsic Calibration of a 3D Lidar and Camera by Maximizing Mutual Information", "text": "This paper reports on a mutual information (MI) based algorithm for automatic extrinsic calibration of a 3D laser scanner and optical camera system. By using MI as the registration criterion, our method is able to work in situ without the need for any specific calibration targets, which makes it practical for in-field calibration. The calibration parameters are estimated by maximizing the mutual information obtained between the sensor-measured surface intensities. We calculate the Cramer-Rao-Lower-Bound (CRLB) and show that the sample variance of the estimated parameters empirically approaches the CRLB for a sufficient number of views. Furthermore, we compare the calibration results to independent ground-truth and observe that the mean error also empirically approaches to zero as the number of views are increased. This indicates that the proposed algorithm, in the limiting case, calculates a minimum variance unbiased (MVUB) estimate of the calibration parameters. Experimental results are presented for data collected by a vehicle mounted with a 3D laser scanner and an omnidirectional camera system."}
{"_id": "7fb9383f368c4dd40e605bb5bca0f1daec965df3", "title": "Physical and mental health effects of intimate partner violence for men and women.", "text": "BACKGROUND\nFew population-based studies have assessed the physical and mental health consequences of both psychological and physical intimate partner violence (IPV) among women or men victims. This study estimated IPV prevalence by type (physical, sexual, and psychological) and associated physical and mental health consequences among women and men.\n\n\nMETHODS\nThe study analyzed data from the National Violence Against Women Survey (NVAWS) of women and men aged 18 to 65. This random-digit-dial telephone survey included questions about violent victimization and health status indicators.\n\n\nRESULTS\nA total of 28.9% of 6790 women and 22.9% of 7122 men had experienced physical, sexual, or psychological IPV during their lifetime. Women were significantly more likely than men to experience physical or sexual IPV (relative risk [RR]=2.2, 95% confidence interval [CI]=2.1, 2.4) and abuse of power and control (RR=1.1, 95% CI=1.0, 1.2), but less likely than men to report verbal abuse alone (RR=0.8, 95% CI=0.7, 0.9). For both men and women, physical IPV victimization was associated with increased risk of current poor health; depressive symptoms; substance use; and developing a chronic disease, chronic mental illness, and injury. In general, abuse of power and control was more strongly associated with these health outcomes than was verbal abuse. When physical and psychological IPV scores were both included in logistic regression models, higher psychological IPV scores were more strongly associated with these health outcomes than were physical IPV scores.\n\n\nCONCLUSIONS\nBoth physical and psychological IPV are associated with significant physical and mental health consequences for both male and female victims."}
{"_id": "df55b62f62eb7fb222afddcb86b3dec29fff81c9", "title": "Ship Detection From Optical Satellite Images Based on Saliency Segmentation and Structure-LBP Feature", "text": "Automatic ship detection from optical satellite imagery is a challenging task due to cluttered scenes and variability in ship sizes. This letter proposes a detection algorithm based on saliency segmentation and the local binary pattern (LBP) descriptor combined with ship structure. First, we present a novel saliency segmentation framework with flexible integration of multiple visual cues to extract candidate regions from different sea surfaces. Then, simple shape analysis is adopted to eliminate obviously false targets. Finally, a structure-LBP feature that characterizes the inherent topology structure of ships is applied to discriminate true ship targets. Experimental results on numerous panchromatic satellite images validate that our proposed scheme outperforms other state-of-the-art methods in terms of both detection time and detection accuracy."}
{"_id": "0d3e4d25cea1a8adc3cb4e726de634319b47d15b", "title": "Matching and Retrieval of Distorted and Occluded Shapes Using Dynamic Programming", "text": "We propose an approach for matching distorted and possibly occluded shapes using Dynamic Programming (DP). We distinguish among various cases of matching such as cases where the shapes are scaled with respect to each other and cases where an open shape matches the whole or only a part of another open or closed shape. Our algorithm treats noise and shape distortions by allowing matching of merged sequences of consecutive small segments in a shape with larger segments of another shape, while being invariant to translation, scale, orientation and starting point selection. We illustrate the effectiveness of our algorithm in retrieval of shapes on two datasets of two-dimensional open and closed shapes of marine life species. We demonstrate the superiority of our approach over traditional approaches to shape matching and retrieval based on Fourier descriptors and moments. We also compare our method with SQUID, a well known method which is available on the Internet. Our evaluation is based on human relevance judgments following a well-established methodology from the information retrieval field. A preliminary version of this work was presented at the Int. Conf. on Pattern Recognition, Barcelona, Spain, pages 67-71, Vol. 4, Sept. 2000. Corresponding author. Department of Electronic and Computer Engineering, Technical University of Crete, Chania, Crete, GR-73100, Greece, E-mail: petrakis@ced.tuc.gr, URL: http://www.ced.tuc.gr/ \u0303petrakis, Intelligent Sensory Information Systems, Faculty of Science, University of Amsterdam, Kruislaan 403, 1098 SJ Amsterdam, The Netherlands. E-mail: diplaros@science.uva.nl. This work is part of the author\u2019s student dissertation at the Department of Electronic and Computer Engineering of the Technical University of Crete. Faculty of Computer Science, Dalhousie University, Halifax, Nova Scotia, Canada B3H 1W5. E-mail: eem@cs.dal.ca, URL: http://www.cs.dal.ca/ \u0303eem."}
{"_id": "2bd576ce574df33c834b6032962cd5ae0be5299f", "title": "Guidelines for conducting systematic mapping studies in software engineering: An update", "text": ""}
{"_id": "12a68626d06d860a85737b451c2dbbc77b6fa5a6", "title": "Curlybot: Designing a New Class of Computational Toys", "text": "We introduce an educational toy, called curlybot, as the basis for a new class of toys aimed at children in their early stages of development \u2014 ages four and up. curlybot is an autonomous two-wheeled vehicle with embedded electronics that can record how it has been moved on any flat surface and then play back that motion accurately and repeatedly. Children can use curlybot to develop intuitions for advanced mathematical and computational concepts, like differential geometry, through play away from a traditional computer.\nIn our preliminary studies, we found that children learn to use curlybot quickly. They readily establish an affective and body syntonic connection with curlybot, because of its ability to remember all of the intricacies of their original gesture; every pause, acceleration, and even the shaking in their hand is recorded. Programming by example in this context makes the educational ideas implicit in the design of curlybot accessible to young children."}
{"_id": "3abe5c58bd79115a279b8ca9ec3392b19090eec7", "title": "Clustering of contingency table and mixture model", "text": "Basing cluster analysis on mixture models has become a classical and powerful approach. It enables some classical criteria such as the well-known k-means criterion to be explained. To classify the rows or the columns of a contingency table, an adapted version of k-means known as Mndki2, which uses the chi-square distance, can be used. Unfortunately, this simple, effective method which can be used jointly with correspondence analysis based on the same representation of the data, cannot be associated with a mixture model in the same way as the classical k-means algorithm. In this paper we show that the Mndki2 algorithm can be viewed as an approximation of a classifying version of the EM algorithm for a mixture of multinomial distributions. A comparison of the algorithms belonging in this context are experimentally investigated using Monte Carlo simulations. 2006 Elsevier B.V. All rights reserved."}
{"_id": "09ea3e35ae65373c8c0056501e441d14f67fe183", "title": "How to compute variance-based sensitivity indicators with your spreadsheet software", "text": "The use of sensitivity indicators is explicitly recommendend by authorities like the EC, the US EPA and others in model valuation and audit. In this note, we want to draw the attention to a numerical efficient algorithm that computes first order global sensitivity effects from given data using a discrete cosine transformation."}
{"_id": "fb2508f11a676c48bacad7a827f02db519fd969a", "title": "Multicriteria classification and sorting methods: A literature review", "text": "The assignment of alternatives (observations/objects) into predefined homogenous groups is a problem of major practical and research interest. This type of problem is referred to as classification or sorting, depending on whether the groups are nominal or ordinal. Methodologies for addressing classification and sorting problems have been developed from a variety of research disciplines, including statistics/econometrics, artificial intelligent and operations research. The objective of this paper is to review the research conducted on the framework of the multicriteria decision aiding (MCDA). The review covers different forms of MCDA classification/sorting models, different aspects of the model development process, as well as real-world applications of MCDA classification/sorting techniques and their software implementations. 2002 Elsevier Science B.V. All rights reserved."}
{"_id": "e26cbabe8c60f1c62616917410f47ac2ad7d7609", "title": "Modeling Driver Behavior in a Cognitive Architecture", "text": "OBJECTIVE\nThis paper explores the development of a rigorous computational model of driver behavior in a cognitive architecture--a computational framework with underlying psychological theories that incorporate basic properties and limitations of the human system.\n\n\nBACKGROUND\nComputational modeling has emerged as a powerful tool for studying the complex task of driving, allowing researchers to simulate driver behavior and explore the parameters and constraints of this behavior.\n\n\nMETHOD\nAn integrated driver model developed in the ACT-R (Adaptive Control of Thought-Rational) cognitive architecture is described that focuses on the component processes of control, monitoring, and decision making in a multilane highway environment.\n\n\nRESULTS\nThis model accounts for the steering profiles, lateral position profiles, and gaze distributions of human drivers during lane keeping, curve negotiation, and lane changing.\n\n\nCONCLUSION\nThe model demonstrates how cognitive architectures facilitate understanding of driver behavior in the context of general human abilities and constraints and how the driving domain benefits cognitive architectures by pushing model development toward more complex, realistic tasks.\n\n\nAPPLICATION\nThe model can also serve as a core computational engine for practical applications that predict and recognize driver behavior and distraction."}
{"_id": "92507ec471d207b65c99c5d517bd08fc50671ced", "title": "P-Rank: An indicator measuring prestige in heterogeneous scholarly networks", "text": "Ranking scientific productivity and prestige are often limited to homogeneous networks. These networks are unable to account for the multiple factors that constitute the scholarly communication and reward system. This study proposes a new informetric indicator, PRank, for measuring prestige in heterogeneous scholarly networks containing articles, authors, and journals. P-Rank differentiates the weight of each citation based on its citing papers, citing journal, and citing authors. Articles from 16 representative library and information science journals are selected as the dataset. Principle Component Analysis is conducted to examine the relationship between P-Rank and other bibliometric indicators. We also compare the correlation and rank variances between citation counts and P-Rank scores. This work provides a new approach to examining prestige in scholarly communication networks in a more comprehensive and nuanced way."}
{"_id": "71a0f98b59792070b7005f89d34ccf56e40badf0", "title": "A Priority based Round Robin CPU Scheduling Algorithm for Real Time Systems", "text": "The main objective of this paper is to develop a new approach for round robin C P U scheduling a l g o r i t h m which improves the performance of CPU in real time operating system. The proposed Priority based Round-Robin CPU Scheduling algorithm is based on the integration of round-robin and priority scheduling algorithm. It retains the advantage of round robin in reducing starvation and also integrates the advantage of priority scheduling. The proposed algorithm also implements the concept of aging by assigning new priorities to the processes. Existing round robin CPU scheduling algorithm cannot be implemented in real time operating system due to their high context switch rates, large waiting time, large response time, large turnaround time and less throughput. The proposed algorithm improves all the drawbacks of round robin C P U scheduling algorithm. The paper also presents the comparative analysis of proposed algorithm with existing round robin scheduling algorithm on the basis of varying time quantum, average waiting time, average turnaround time and number of context switches."}
{"_id": "d49aa72f9b7db6c8f9287aeb6ff1625fc444f37d", "title": "A New Round Robin Based Scheduling Algorithm for Operating Systems: Dynamic Quantum Using the Mean Average", "text": "Round Robin, considered as the most widely adopted CPU scheduling algorithm, undergoes severe problems directly related to quantum size. If time quantum chosen is too large, the response time of the processes is considered too high. On the other hand, if this quantum is too small, it increases the overhead of the CPU. In this paper, we propose a new algorithm, called AN, based on a new approach called dynamic-time-quantum; the idea of this approach is to make the operating systems adjusts the time quantum according to the burst time of the set of waiting processes in the ready queue. Based on the simulations and experiments, we show that the new proposed algorithm solves the fixed time quantum problem and increases the performance of Round Robin."}
{"_id": "377ea7fac2a7c59f8e2bfc0e45f8996d82b71c0b", "title": "FINDING TIME QUANTUM OF ROUND ROBIN CPU SCHEDULING ALGORITHM IN GENERAL COMPUTING SYSTEMS USING INTEGER PROGRAMMING", "text": "In Round Robin Scheduling, the time quantum is fixed and then processes are scheduled such that no process get CPU time more than one time quantum in one go. If time quantum is too large, the response time of the processes is too much which may not be tolerated in interactive environment. If time quantum is too small, it causes unnecessarily frequent context switch leading to more overheads resulting in less throughput. In this paper, a method using integer programming has been proposed to solve equations that decide a value that is neither too large nor too small such that every process has reasonable response time and the throughput of the system is not decreased due to unnecessarily context switches."}
{"_id": "5bb7da9b268c9a066e2be2e5142e5e2c11630437", "title": "Schedulability analysis of periodic fixed priority systems", "text": "Feasibility analysis of fixed priority systems has been widely studied in the real-time literature and several acceptance tests have been proposed to guarantee a set of periodic tasks. They can be divided in two main classes: polynomial time tests and exact tests. Polynomial time tests can efficiently be used for online guarantee of real-time applications, where tasks are activated at runtime. These tests introduce a negligible overhead, when executed upon a new task arrival, however provide only a sufficient schedulability condition, which may cause a poor processor utilization. On the other hand, exact tests, which are based on response time analysis, provide a necessary and sufficient schedulability condition, but are too complex to be executed on line for large task sets. As a consequence, for large task sets, they are often executed off line. This paper proposes a novel approach for analyzing the schedulability of periodic task sets on a single processor under an arbitrary fixed priority assignment: Using this approach, we derive a new schedulability test which can be tuned through a parameter to balance complexity versus acceptance ratio, so that it can be used on line to better exploit the processor, based on the available computational power. Extensive simulations show that our test, when used in its exact form, is significantly faster than the current response time analysis methods. Moreover the proposed approach, for its elegance and compactness, offers an explanation of some known phenomena of fixed priority scheduling and could be helpful for further work on schedulability analysis."}
{"_id": "780d07fbf09484a2490868d45255a23762a7e21d", "title": "An Optimized Round Robin Scheduling Algorithm for CPU Scheduling", "text": "The main objective of this paper is to develop a new approach for round robin scheduling which help to improve the CPU efficiency in real time and time sharing operating system. There are many algorithms available for CPU scheduling. But we cannot implemented in real time operating system because of high context switch rates, large waiting time, large response time, large trn around time and less throughput. The proposed algorithm improves all the drawback of simple round robin architecture. The author have also given comparative analysis of proposed with simple round robin scheduling algorithm. Therefore, the author strongly feel that the proposed architecture solves all the problem encountered in simple round robin architecture by decreasing the performance parameters to desirable extent and thereby increasing the system throughput."}
{"_id": "90f3977184cee09ba52267f8471a05b39867c7c6", "title": "BranchGAN: Branched Generative Adversarial Networks for Scale-Disentangled Learning and Synthesis of Images.", "text": "We introduce BranchGAN , a novel training method that enables unconditioned generative adversarial networks (GANs) to learn image manifolds at multiple scales. The key novel feature of BranchGAN is that it is trained in multiple branches, progressively covering both the breadth and depth of the network, as resolutions of the training images increase to reveal finer-scale features. Specifically, each noise vector, as input to the generator network, is explicitly split into several sub-vectors, each corresponding to, and is trained to learn, image representations at a particular scale. During training, we progressively \u201cdefreeze\u201d the sub-vectors, one at a time, as a new set of higher-resolution images is employed for training and more network layers are added. A consequence of such an explicit sub-vector designation is that we can directly manipulate and even combine latent (sub-vector) codes which model different feature scales. Experiments demonstrate the effectiveness of our training method in scale-disentangled learning of image manifolds and synthesis, without any extra labels and without compromising quality of the synthesized high-resolution images. We further demonstrate three applications enabled or improved by BranchGAN."}
{"_id": "bb7ead7765f8fc29befaaf364e7b2b1d0fd1274d", "title": "Underground Mine Communications: A Survey", "text": "After a recent series of unfortunate underground mining disasters, the vital importance of communications for underground mining is underlined one more time. Establishing reliable communication is a very difficult task for underground mining due to the extreme environmental conditions. Until now, no single communication system exists which can solve all of the problems and difficulties encountered in underground mine communications. However, combining research with previous experiences might help existing systems improve, if not completely solve all of the problems. In this survey, underground mine communication is investigated. Major issues which underground mine communication systems must take into account are discussed. Communication types, methods, and their significance are presented."}
{"_id": "410f15263783ef1ecc9d679dcbad53fcd8584bbc", "title": "Heteroscedastic Gaussian process regression", "text": "This paper presents an algorithm to estimate simultaneously both mean and variance of a non parametric regression problem. The key point is that we are able to estimate variance locally unlike standard Gaussian Process regression or SVMs. This means that our estimator adapts to the local noise. The problem is cast in the setting of maximum a posteriori estimation in exponential families. Unlike previous work, we obtain a convex optimization problem which can be solved via Newton's method."}
{"_id": "5937e5c34b361d81c24e8896fed179ae55c6ade3", "title": "Group Buying on the Web: A Comparison of Price-Discovery Mechanisms", "text": "Web-based Group-Buying mechanisms, a refinement of quantity discounting, are being used for both Business-to-Business (B2B) and Business-toConsumer (B2C) transactions. In this paper, we survey currently operational online Group-Buying markets, and then study this phenomenon using analytical models. We surveyed over fifty active Group-Buying sites, and provide a comprehensive review of Group-Buying practices in the B2B, B2C and non-profit sectors, across three continents. On the modeling side, we build on the coordination literature in Information Economics and the quantity-discounts literature in Operations to develop an analytical model of a monopolist who uses web-based Group-Buying mechanisms under different kinds of demand uncertainty. We derive the monopolist's optimal Group-Buying schedule, and compare his profits with those that obtain under the more conventional posted-price mechanism. We also study the effect of heterogeneity in the demand regimes, in combination with uncertainty, on the relative performance of the two mechanisms. We further study the impact of the timing of the pricing decision (vis-\u00e0-vis the production decision) by modeling it as a two-stage game between the monopolist and buyers. Finally, we investigate how Group-Buying schemes compare with posted price markets when buyers can revise their prior valuation of products based on information received from third parties (infomediaries). In all cases, we characterize the conditions under which one mechanism outperforms the other, and those under which the posted price and Group-Buy mechanisms lead to identical seller revenues. Our results have implications for firms' choice of price discovery mechanisms in electronic markets and scheduling of production and pricing decisions in the presence (and absence) of scale economies of production. 1 Email: anandk@wharton.upenn.edu; Tel: (215) 898 1175. 2 Email: raviaron@wharton.upenn.edu; Tel:(215) 573 5677."}
{"_id": "c6f152deb9e2204c5eedc4904b6371a3f15b7e3d", "title": "Tamperproof transmission of fingerprints using visual cryptography schemes", "text": "Visual Cryptography and biometrics have been identified as the two most important aspects of digital security. In this paper, we propose a method for the preparation and secure transmission of fingerprint images using visual cryptography scheme. Visual cryptography (VC) is a kind of secret image sharing scheme that uses the human visual system to perform the decryption computations. A visual cryptography scheme (VCS) allows confidential messages to be encrypted into k-out-of-n secret sharing schemes. Whenever the number of participants from the group (n) is larger than or equal to the predetermined threshold value (k), the confidential message can be obtained by these participants. VCS is interesting because decryption can be done with no prior knowledge of cryptography and can be performed without any cryptographic computations. In this paper, the fingerprint image is broken up into n pieces, which individually yield no information about the image. These pieces of the images, called shares or shadows, may then be distributed among a group of n participants/dealers. By combining any k (k \u2264 n) of these shares, the original fingerprint image can be recovered, but combining less than k of them will not reveal any information about the image. The scheme is perfectly secure and very easy to implement. To decode the encrypted information, i.e., to get the original information back, the shares are stacked and the secret image pops out. The only drawback of the VCS is the loss in contrast of reconstructed image. The proposed method for the reconstruction of fingerprint image is based on XNOR operation. This will enable one to obtain back perfect fingerprint image."}
{"_id": "2e47a33d05bc3ae8b8f3606fe86242bb7cd54584", "title": "Consumer buying behavior towards online shopping stores in Malaysia", "text": "The global nature of communication and shopping has as well redefined, seeing that it is the perfect vehicle for online shopping stores. Online convenient shop is mostly reflected in shorter time and less energy spent, including shipping cost reduction, less crowd and queues than real markets, unlimited time and space, which all increase convenience of shopping. Internet shopping for businesses and consumers are being accepted as an alternative shop mode rather than visiting the stores. However, convincing the consumers to shop online is still a challenging task for web retailers in Malaysia. The growth of Internet technology in Malaysia has enormous potential as it reduces the costs of product and service delivery and extends geographical boundaries in bringing buyers and sellers together. This study is conducted to identify factors influencing consumers towards online shopping in Malaysia. The study focused on nine independent variables namely appearance, quick loading, security, sitemap, validity, promotion, attractiveness, believability, and originality. We applied Five-point Likert Scale to measure the influential factors on intention for online shopping."}
{"_id": "454ca5266a6a47db06be0dd8a1faed88f9ad2de3", "title": "Automatic Whole Brain MRI Segmentation of the Developing Neonatal Brain", "text": "Magnetic resonance (MR) imaging is increasingly being used to assess brain growth and development in infants. Such studies are often based on quantitative analysis of anatomical segmentations of brain MR images. However, the large changes in brain shape and appearance associated with development, the lower signal to noise ratio and partial volume effects in the neonatal brain present challenges for automatic segmentation of neonatal MR imaging data. In this study, we propose a framework for accurate intensity-based segmentation of the developing neonatal brain, from the early preterm period to term-equivalent age, into 50 brain regions. We present a novel segmentation algorithm that models the intensities across the whole brain by introducing a structural hierarchy and anatomical constraints. The proposed method is compared to standard atlas-based techniques and improves label overlaps with respect to manual reference segmentations. We demonstrate that the proposed technique achieves highly accurate results and is very robust across a wide range of gestational ages, from 24 weeks gestational age to term-equivalent age."}
{"_id": "3d7ac1dae034997ca5501211685a67dbe009b5ae", "title": "A 35.7\u201364.2 GHz low power Miller Divider with Weak Inversion Mixer in 65 nm CMOS", "text": "A 60 GHz wide locking range Miller divider is presented in this letter. To enhance the locking range of divider and save power consumption, we proposed a Miller divider based on weak inversion mixer. The proposed Miller divider is implemented in 65 nm CMOS and exhibits 57% locking range from 35.7 to 64.2 GHz at an input power of 0 dBm while consuming 1.6-mW dc power at 0.4 V supply. Compared to the previously reported CMOS millimeter wave frequency dividers, the proposed divider achieves the widest fractional bandwidth without any frequency tuning mechanism."}
{"_id": "4b259fbc7e2802dbea4b4b52c06a871006aef367", "title": "Identification of Refused Bequest Code Smells", "text": "Accumulated technical debt can be alleviated by means of refactoring application aiming at architectural improvement. A prerequisite for wide scale refactoring application is the automated identification of the corresponding refactoring opportunities, or code smells. One of the major architectural problems that has received limited attention is the so called 'Refused Bequest' which refers to inappropriate use of inheritance in object-oriented systems. This code smell occurs when subclasses do not take advantage of the inherited behavior, implying that replacement by delegation should be used instead. In this paper we propose a technique for the identification of Refused Bequest code smells whose major novelty lies in the intentional introduction of errors in the inherited methods. The essence of inheritance is evaluated by exercising the system's functionality through the corresponding unit tests in order to reveal whether inherited methods are actually employed by clients. Based on the results of this approach and other structural information, an indication of the smell strength on a 'thermometer' is obtained. The proposed approach has been implemented as an Eclipse plug in."}
{"_id": "7cc30dc46b888b8f374cedeaee7a4eb2f9cda879", "title": "Effectivecollaborative movie recommender system using asymmetric user similarity and matrix factorization", "text": "Recommender systems are becoming ubiquitous these days to advise important products to users. Conventional collaborative filtering methods suffer from sparsity, scalability, and cold start problem. In this work, we have implemented a novel and improved method of recommending movies by combining the asymmetric method of calculating similarity with matrix factorization and Tyco (typicality-based collaborative filtering). The asymmetric method describes that similarity of user A with B is not the similar as the similarity of B with A. Matrix factorization shows items (movies) as well as users by vectors of factors derived from rating pattern of items (movies). In Tyco clusters of movies of the same genre are created, and typicality degree (a measure of how much a movie belongs to that genre) of each movie in that cluster was considered and subsequently of each user in a genre was calculated. The similarity between users was calculated by using their typicality in genres rather than co-rated items. We had combined these methods and employed Pearson correlation coefficient method to calculate similarity to optimize results when compared to cosine similarity, Linear Regression to make predictions that gave better results. In this research work stochastic gradient descent is also used for optimization and regularization to avoid the problem of over fitting. All these approaches together provide better prediction and handle problems of sparsity, cold start, and scalability well as compared to conventional methods. Experimental results confirm that our HYBRTyco gives improved results than Tyco regarding mean absolute error (MAE)and mean absolute percentage error (MAPE), especially on the sparse dataset."}
{"_id": "a5404198277889fcba80fdcb9f56d97a429a64ad", "title": "An Analysis of Black-Box Web Application Security Scanners against Stored SQL Injection", "text": "Web application security scanners are a compilation of various automated tools put together and used to detect security vulnerabilities in web applications. Recent research has shown that detecting stored SQL injection, one of the most critical web application vulnerabilities, is a major challenge for black-box scanners. In this paper, we evaluate three state of art black-box scanners that support detecting stored SQL injection vulnerabilities. We developed our custom test bed that challenges the scanners capability regarding stored SQL injections. The results show that existing vulnerabilities are not detected even when these automated scanners are taught to exploit the vulnerability. The weaknesses of black-box scanners identified reside in many areas: crawling, input values and attack code selection, user login, analysis of server replies, miss-categorization of findings, and the automated process functionality. Because of the poor detection rate, we discuss the different phases of black-box scanners' scanning cycle and propose a set of recommendations that could enhance the detection rate of stored SQL injection vulnerabilities."}
{"_id": "94d5a251fc11e4b3b61210b383cde35b838cce41", "title": "Intrusion Detection System using Support Vector Machine and Decision Tree", "text": "Support Vector Machines (SVM) are the classifiers which were originally designed for binary classification. The classification applications can solve multi-class problems. Decision-tree-based support vector machine which combines support vector machines and decision tree can be an effective way for solving multi-class problems. This method can decrease the training and testing time, increasing the efficiency of the system. The different ways to construct the binary trees divides the data set into two subsets from root to the leaf until every subset consists of only one class. The construction order of binary tree has great influence on the classification performance. In this paper we are studying an algorithm, Tree structured multiclass SVM, which has been used for classifying data. This paper proposes the decision tree based algorithm to construct multiclass intrusion detection system."}
{"_id": "ab93fe25985409d99b15a49ae9d8987561749a32", "title": "DESIGNING A LAYOUT USING THE MODIFIED TRIANGLE METHOD , AND GENETIC ALGORITHMS", "text": "This paper describes the use of genetic algorithms (GA) for solving the facility layout problem (FLP) within manufacturing systems\u2019 design. The paper considers a specific heuristic layout planning method, known as the modified triangle method. This method seeks for an optimal layout solution based on the degrees of flows between workstations. The search for an optimal solution is extremely time-consuming and not suitable for larger systems; therefore we have developed a system based on evolutionary computation. Our paper presents a system, based on GA, and the results obtained regarding several numerical cases from the literature. We propose the usage of this system by presenting numerous advantages over other methods of solving FLP for problem representation and evolutionary computation for solution search. (Received in September 2012, accepted in June 2013. This paper was with the authors 2 months for 2 revisions.)"}
{"_id": "206b368218608bc1ea64f4028809809e1cd9fab3", "title": "Mapping Unseen Words to Task-Trained Embedding Spaces", "text": "Education University of California, Berkeley (2008-2013) Ph.D. in Computer Science Thesis: Surface Web Semantics for Structured Natural Language Processing Advisor: Dan Klein. Committee members: Dan Klein, Marti Hearst, Line Mikkelsen, Nelson Morgan University of California, Berkeley (2012) Master of Science (M.S.) in Computer Science Thesis: An All-Fragments Grammar for Simple and Accurate Parsing Advisor: Dan Klein Indian Institute of Technology, Kanpur (2004-2008) Bachelor of Technology (B.Tech.) in Computer Science and Engineering GPA: 3.96/4.00 (Institute and Department Rank 2) Cornell University (Summer 2007) CS490 (Independent Research and Reading) GPA: 4.00/4.00 Advisors: Lillian Lee, Claire Cardie"}
{"_id": "4e9caaf519d5b94aa56c62ab1bdb8e56dfd49f80", "title": "Interpreting and Unifying Outlier Scores", "text": "Outlier scores provided by different outlier models differ widely in their meaning, range, and contrast between different outlier models and, hence, are not easily comparable or interpretable. We propose a unification of outlier scores provided by various outlier models and a translation of the arbitrary \u201coutlier factors\u201d to values in the range [0, 1] interpretable as values describing the probability of a data object of being an outlier. As an application, we show that this unification facilitates enhanced ensembles for outlier detection."}
{"_id": "701a3346ddcdd8754a367d18f41c21fda231672c", "title": "DRAFT: COMMENTS WELCOME BUT PLEASE DO NOT QUOTE", "text": "The Experimental Study of Adaptive User Interfaces Pat Langley (langley@isle.org) Institute for the Study of Learning and Expertise 2164 Staunton Court, Palo Alto, CA 94306 Michael Fehling (fehling@odc.stanford.edu) Organizational Dynamics Center, EES/OR Department Stanford University, Stanford, CA 94305 Abstract In this paper we examine some issues that arise in the experimental evaluation of adaptive user interfaces, which are computational decision aids that use machine learning to improve their interaction with users. We begin by reviewing the notion of an adaptive interface and presenting examples for a number of application areas. After this, we discuss plausible dependent measures of their behavior, including solution speed, solution quality, user satisfaction, and predictive accuracy. Next we turn to independent variables that are likely to in uence these measures, including the number of user interactions and characteristics of the system, the user, and the task at hand. In closing, we comment on the role that experimentation plays in the larger scienti c process."}
{"_id": "282a380fb5ac26d99667224cef8c630f6882704f", "title": "Learning to reinforcement learn", "text": "In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context. We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience."}
{"_id": "10b13d1d7ce00845058a04fea59dc55f1db412d5", "title": "BLINKS: ranked keyword searches on graphs", "text": "Query processing over graph-structured data is enjoying a growing number of applications. A top-k keyword search query on a graph finds the top k answers according to some ranking criteria, where each answer is a substructure of the graph containing all query keywords. Current techniques for supporting such queries on general graphs suffer from several drawbacks, e.g., poor worst-case performance, not taking full advantage of indexes, and high memory requirements. To address these problems, we propose BLINKS, a bi-level indexing and query processing scheme for top-k keyword search on graphs. BLINKS follows a search strategy with provable performance bounds, while additionally exploiting a bi-level index for pruning and accelerating the search. To reduce the index space, BLINKS partitions a data graph into blocks: The bi-level index stores summary information at the block level to initiate and guide search among blocks, and more detailed information for each block to accelerate search within blocks. Our experiments show that BLINKS offers orders-of-magnitude performance improvement over existing approaches."}
{"_id": "524b914bd4f56e1f22c92c3dad89e73f42e63f40", "title": "Modeling and control of a three-port DC-DC converter for PV-battery systems", "text": "This paper presents modeling and decoupled control design for an isolated three-port DC-DC converter (TPC), which interfaces a photovoltaic (PV) panel, a rechargeable battery, and an isolated load. A small signal model is derived from the state-space equations of the TPC. Based on the model, two decoupled controllers are designed to control the power flows of the two sources and the load voltage independently. Experimental studies are performed to verify that the two controllers are capable of regulating the power flows of the PV panel and the battery while maintaining a constant output voltage for the TPC."}
{"_id": "61f93c67f22767278f12c7f9756e7201032ae361", "title": "Brain of the African elephant (Loxodonta africana): neuroanatomy from magnetic resonance images.", "text": "We acquired magnetic resonance images of the brain of an adult African elephant, Loxodonta africana, in the axial and parasagittal planes and produced anatomically labeled images. We quantified the volume of the whole brain (3,886.7 cm3) and of the neocortical and cerebellar gray and white matter. The white matter-to-gray matter ratio in the elephant neocortex and cerebellum is in keeping with that expected for a brain of this size. The ratio of neocortical gray matter volume to corpus callosum cross-sectional area is similar in the elephant and human brains (108 and 93.7, respectively), emphasizing the difference between terrestrial mammals and cetaceans, which have a very small corpus callosum relative to the volume of neocortical gray matter (ratio of 181-287 in our sample). Finally, the elephant has an unusually large and convoluted hippocampus compared to primates and especially to cetaceans. This may be related to the extremely long social and chemical memory of elephants."}
{"_id": "d59239df881e4839e4bdc3f9f61867c69bafcb0e", "title": "Convolutional auto-encoder for image denoising of ultra-low-dose CT", "text": "OBJECTIVES\nThe purpose of this study was to validate a patch-based image denoising method for ultra-low-dose CT images. Neural network with convolutional auto-encoder and pairs of standard-dose CT and ultra-low-dose CT image patches were used for image denoising. The performance of the proposed method was measured by using a chest phantom.\n\n\nMATERIALS AND METHODS\nStandard-dose and ultra-low-dose CT images of the chest phantom were acquired. The tube currents for standard-dose and ultra-low-dose CT were 300 and 10 mA, respectively. Ultra-low-dose CT images were denoised with our proposed method using neural network, large-scale nonlocal mean, and block-matching and 3D filtering. Five radiologists and three technologists assessed the denoised ultra-low-dose CT images visually and recorded their subjective impressions of streak artifacts, noise other than streak artifacts, visualization of pulmonary vessels, and overall image quality.\n\n\nRESULTS\nFor the streak artifacts, noise other than streak artifacts, and visualization of pulmonary vessels, the results of our proposed method were statistically better than those of block-matching and 3D filtering (p-values < 0.05). On the other hand, the difference in the overall image quality between our proposed method and block-matching and 3D filtering was not statistically significant (p-value = 0.07272). The p-values obtained between our proposed method and large-scale nonlocal mean were all less than 0.05.\n\n\nCONCLUSION\nNeural network with convolutional auto-encoder could be trained using pairs of standard-dose and ultra-low-dose CT image patches. According to the visual assessment by radiologists and technologists, the performance of our proposed method was superior to that of large-scale nonlocal mean and block-matching and 3D filtering."}
{"_id": "3ffbca69a2bdd21e2585f096902bdd79c8a2630f", "title": "Collaborative GAN Sampling", "text": "Generative adversarial networks (GANs) have shown great promise in generating complex data such as images. A standard practice in GANs is to discard the discriminator after training and use only the generator for sampling. However, this loses valuable information of real data distribution learned by the discriminator. In this work, we propose a collaborative sampling scheme between the generator and discriminator for improved data generation. Guided by the discriminator, our approach refines generated samples through gradient-based optimization, shifting the generator distribution closer to the real data distribution. Additionally, we present a practical discriminator shaping method that can further improve the sample refinement process. Orthogonal to existing GAN variants, our proposed method offers a new degree of freedom in GAN sampling. We demonstrate its efficacy through experiments on synthetic data and image generation tasks."}
{"_id": "971de86408e28d5f16adc175f30fb122d2238e39", "title": "Neural correlates of mindfulness meditation-related anxiety relief.", "text": "Anxiety is the cognitive state related to the inability to control emotional responses to perceived threats. Anxiety is inversely related to brain activity associated with the cognitive regulation of emotions. Mindfulness meditation has been found to regulate anxiety. However, the brain mechanisms involved in meditation-related anxiety relief are largely unknown. We employed pulsed arterial spin labeling MRI to compare the effects of distraction in the form of attending to the breath (ATB; before meditation training) to mindfulness meditation (after meditation training) on state anxiety across the same subjects. Fifteen healthy subjects, with no prior meditation experience, participated in 4 d of mindfulness meditation training. ATB did not reduce state anxiety, but state anxiety was significantly reduced in every session that subjects meditated. Meditation-related anxiety relief was associated with activation of the anterior cingulate cortex, ventromedial prefrontal cortex and anterior insula. Meditation-related activation in these regions exhibited a strong relationship to anxiety relief when compared to ATB. During meditation, those who exhibited greater default-related activity (i.e. posterior cingulate cortex) reported greater anxiety, possibly reflecting an inability to control self-referential thoughts. These findings provide evidence that mindfulness meditation attenuates anxiety through mechanisms involved in the regulation of self-referential thought processes."}
{"_id": "05d6e0185bcb48d396fe778ceedb2078e37e72ef", "title": "Statistical mechanics of complex networks", "text": "Complex networks describe a wide range of systems in nature and society. Frequently cited examples include the cell, a network of chemicals linked by chemical reactions, and the Internet, a network of routers and computers connected by physical links. While traditionally these systems have been modeled as random graphs, it is increasingly recognized that the topology and evolution of real networks are governed by robust organizing principles. This article reviews the recent advances in the field of complex networks, focusing on the statistical mechanics of network topology and dynamics. After reviewing the empirical data that motivated the recent interest in networks, the authors discuss the main models and analytical tools, covering random graphs, small-world and scale-free networks, the emerging theory of evolving networks, and the interplay between topology and the network\u2019s robustness against failures and attacks."}
{"_id": "2628a4ab8be64bdc71d6cf9d537586d28053db86", "title": "Emergence of Scaling in Random Networks", "text": "www.sciencemag.org (this information is current as of September 11, 2008 ): The following resources related to this article are available online at http://www.sciencemag.org/cgi/content/full/286/5439/509 version of this article at: including high-resolution figures, can be found in the online Updated information and services, found at: can be related to this article A list of selected additional articles on the Science Web sites http://www.sciencemag.org/cgi/content/full/286/5439/509#related-content http://www.sciencemag.org/cgi/content/full/286/5439/509#otherarticles , 3 of which can be accessed for free: cites 11 articles This article 2718 article(s) on the ISI Web of Science. cited by This article has been http://www.sciencemag.org/cgi/content/full/286/5439/509#otherarticles 95 articles hosted by HighWire Press; see: cited by This article has been http://www.sciencemag.org/cgi/collection/physics Physics : subject collections This article appears in the following http://www.sciencemag.org/about/permissions.dtl in whole or in part can be found at: this article permission to reproduce of this article or about obtaining reprints Information about obtaining"}
{"_id": "08d4da77c489d550c3e215725551481310719893", "title": "Human Behaviour and the Principle of Least Effort: an Introduction to Human Ecology", "text": "Some people may be laughing when looking at you reading in your spare time. Some may be admired of you. And some may want be like you who have reading hobby. What about your own feel? Have you felt right? Reading is a need and a hobby at once. This condition is the on that will make you feel that you must read. If you know are looking for the book enPDFd human behavior and the principle of least effort an introduction to human ecology as the choice of reading, you can find here."}
{"_id": "a690eb4c7ab6c2733478fa9ac3379fa2cdea7271", "title": "Evolutionary games and spatial chaos", "text": "MUCH attention has been given to the Prisoners' Dilemma as a metaphor for the problems surrounding the evolution of coopera-tive behaviour1\u20136. This work has dealt with the relative merits of various strategies (such as tit-for-tat) when players who recognize each other meet repeatedly, and more recently with ensembles of strategies and with the effects of occasional errors. Here we neglect all strategical niceties or memories of past encounters, considering only two simple kinds of players: those who always cooperate and those who always defect. We explore the consequences of placing these players in a two-dimensional spatial array: in each round, every individual 'plays the game' with the immediate neighbours; after this, each site is occupied either by its original owner or by one of the neighbours, depending on who scores the highest total in that round; and so to the next round of the game. This simple, and purely deterministic, spatial version of the Prisoners' Dilemma, with no memories among players and no strategical elaboration, can generate chaotically changing spatial patterns, in which cooperators and defectors both persist indefinitely (in fluctuating proportions about predictable long-term averages). If the starting configurations are sufficiently symmetrical, these ever-changing sequences of spatial patterns\u2014dynamic fractals\u2014can be extraordinarily beautiful, and have interesting mathematical properties. There are potential implications for the dynamics of a wide variety of spatially extended systems in physics and biology."}
{"_id": "258c273aad5c604c882aa2f71380a871eff7354f", "title": "Disordered speech recognition using acoustic and sEMG signals", "text": "Parallel isolated word corpora were collected from healthy speakers and individuals with speech impairment due to stroke or cerebral palsy. Surface electromyographic (sEMG) signals were collected for both vocalized and mouthed speech production modes. Pioneering work on disordered speech recognition using the acoustic signal, the sEMG signals, and their fusion are reported. Results indicate that speakerdependent isolated-word recognition from the sEMG signals of articulator muscle groups during vocalized disorderedspeech production was highly effective. However, word recognition accuracy for mouthed speech was much lower, likely related to the fact that some disordered speakers had considerable difficulty producing consistent mouthed speech. Further development of the sEMG-based speech recognition systems is needed to increase usability and robustness."}
{"_id": "ac2e44622efbbab525d4301c83cb4d5d7f6f0e55", "title": "A 3D Morphable Model Learnt from 10,000 Faces", "text": "We present Large Scale Facial Model (LSFM) - a 3D Morphable Model (3DMM) automatically constructed from 9,663 distinct facial identities. To the best of our knowledge LSFM is the largest-scale Morphable Model ever constructed, containing statistical information from a huge variety of the human population. To build such a large model we introduce a novel fully automated and robust Morphable Model construction pipeline. The dataset that LSFM is trained on includes rich demographic information about each subject, allowing for the construction of not only a global 3DMM but also models tailored for specific age, gender or ethnicity groups. As an application example, we utilise the proposed model to perform age classification from 3D shape alone. Furthermore, we perform a systematic analysis of the constructed 3DMMs that showcases their quality and descriptive power. The presented extensive qualitative and quantitative evaluations reveal that the proposed 3DMM achieves state-of-the-art results, outperforming existing models by a large margin. Finally, for the benefit of the research community, we make publicly available the source code of the proposed automatic 3DMM construction pipeline. In addition, the constructed global 3DMM and a variety of bespoke models tailored by age, gender and ethnicity are available on application to researchers involved in medically oriented research."}
{"_id": "a69fda3a603c542e96a3136ee26cd900403feed6", "title": "IoT based smart surveillance security system using raspberry Pi", "text": "Communication mainly is the transfer of whatever thing or exchanging of data, so that the Internet of things is naught but the transferring or exchanging of anything with several other things. The using of internet authorized system or devices roughly calculated as that by 2020 there will be nearly about billions. The purpose of the paper is to define a safekeeping alert device spending little handling power by Internet of things which help out to observer plus alerts when gestures or else motion are there then send images to a cloud server. Besides, internet of things centered use can be used tenuously to observe the action as well as acquire warning when gestures or else indication are there. The images are showed straight to a cloud attendant, when the cloud attendant is not accessible at that time the records are put in storage close by on a Raspberry Pi. A credit card size Raspberry Pi with a advantage of Open Source Computer Vision (Open-CV) software knobs the image processing, control algorithms used for the attentiveness then shows taken images to concern persons email by the use of Wi-Fi module. The system uses ordinary webcam."}
{"_id": "d8eca17cf10ff0762ef30e726ef303b937406692", "title": "The Scalability of Trustless Trust", "text": "Permission-less blockchains can realise trustless trust, albeit at the cost of limiting the complexity of computation tasks. To explain the implications for scalability, we have implemented a trust model for smart contracts, described as agents in an open multi-agent system. Agent intentions are not necessarily known and autonomous agents have to be able to make decisions under risk. The ramifications of these general conditions for scalability are analysed for Ethereum and then generalised to other current and future platforms."}
{"_id": "cadaa6b776b5ab67a58e6542698e0105ed495c1c", "title": "On Measures of Entropy and Information", "text": "2 Mutual information 3 Mutual information . . . . . . . . . . . . . . . . . 3 Multivariate mutual information . . . . . . . . . 4 Interaction information . . . . . . . . . . . . . . . 5 Conditional mutual information . . . . . . . . . 5 Binding information . . . . . . . . . . . . . . . . 6 Residual entropy . . . . . . . . . . . . . . . . . . 6 Total correlation . . . . . . . . . . . . . . . . . . . 6 Lautum information . . . . . . . . . . . . . . . . 6 Uncertainty coefficient . . . . . . . . . . . . . . . 7"}
{"_id": "f5ea5a7c7dee975883cb39a6a84566ff311b977f", "title": "Efficacy of psychotherapeutic interventions to promote forgiveness: a meta-analysis.", "text": "OBJECTIVE\nThis meta-analysis addressed the efficacy of psychotherapeutic interventions to help people forgive others and to examine moderators of treatment effects.\n\n\nMETHOD\nEligible studies reported quantitative data on forgiveness of a specific hurt following treatment by a professional with an intervention designed explicitly to promote forgiveness. Random effects meta-analyses were conducted using k = 53 posttreatment effect sizes (N = 2,323) and k = 41 follow-up effect sizes (N = 1,716) from a total of 54 published and unpublished research reports.\n\n\nRESULTS\nParticipants receiving explicit forgiveness treatments reported significantly greater forgiveness than participants not receiving treatment (\u0394+ = 0.56 [0.43, 0.68]) and participants, receiving alternative treatments (\u0394+ = 0.45 [0.21, 0.69]). Also, forgiveness treatments resulted in greater changes in depression, anxiety, and hope than no-treatment conditions. Moderators of treatment efficacy included treatment dosage, offense severity, treatment model, and treatment modality. Multimoderator analyses indicated that treatment dosage (i.e., longer interventions) and modality (individual > group) uniquely predicted change in forgiveness compared with no-treatment controls. Compared with alternative treatment conditions, both modality (individual > group) and offense severity were marginally predictive (ps < .10) of treatment effects.\n\n\nCONCLUSIONS\nIt appears that using theoretically grounded forgiveness interventions is a sound choice for helping clients to deal with past offenses and helping them achieve resolution in the form of forgiveness. Differences between treatment approaches disappeared when controlling for other significant moderators; the advantage for individual interventions was most clearly demonstrated for Enright-model interventions, as there have been no studies of individual interventions using the Worthington model."}
{"_id": "cc23dda37d38999bd316e8deec20e9ff416d2284", "title": "Mobile application security: malware threats and defenses", "text": "Due to the quantum leap in functionality, the rate of upgrading traditional mobile phones to smartphones is tremendous. One of the most attractive features of smartphones is the availability of a large number of apps for users to download and install. However, it also means hackers can easily distribute malware to smartphones, launching various attacks. This issue should be addressed by both preventive approaches and effective detection techniques. This article first discusses why smartphones are vulnerable to security attacks. Then it presents malicious behavior and threats of malware. Next, it reviews the existing malware prevention and detection techniques. Besides more research in these directions, it points out efforts from app developers, app store administrators, and users, who are also required to defend against such malware."}
{"_id": "5fe6ba9a1b2b439d6e07795a9c58f43240b9e1ae", "title": "Hand rehabilitation based on augmented reality", "text": "This paper presents an augmented reality based system for hand rehabilitation. In this system, digital data gloves are used to detect the movements of the patients' hands and collect the physical information from the patients. Using augmented reality technology, a highly controllable environment with tasks of different levels of difficulty is provided to the patients for them to perform the rehabilitation exercises gradually. The targets of the exercises can be adjusted dynamically with respect to the physical conditions and progress of the patients. Multimodal feedbacks are provided to facilitate and encourage the patients during the rehabilitation sessions."}
{"_id": "f329eb18a4549daa83fae28043d19b83fe8356fa", "title": "Evolutionary Computation and Convergence to a Pareto Front", "text": "Research into solving multiobjective optimization problems (MOP) has as one of its an overall goals that of developing and defining foundations of an Evolutionary Computation (EC)-based MOP theory. In this paper, we introduce relevant MOP concepts, and the notion of Pareto optimality, in particular. Specific notation is defined and theorems are presented ensuring Paretobased Evolutionary Algorithm (EA) implementations are clearly understood. Then, a specific experiment investigating the convergence of an arbitrary EA to a Pareto front is presented. This experiment gives a basis for a theorem showing a specific multiobjective EA statistically converges to the Pareto front. We conclude by using this work to justify further exploration into the theoretical foundations of EC-based MOP solution methods."}
{"_id": "ffe30b2c46c0f27e26ca7d5de3b061be17b8e68f", "title": "Windows driver memory analysis: A reverse engineering methodology", "text": "In a digital forensics examination, the capture and analysis of volatile data provides significant information on the state of the computer at the time of seizure. Memory analysis is a premier method of discovering volatile digital forensic information. While much work has been done in extracting forensic artifacts from Windows kernel structures, less focus has been paid to extracting information from Windows drivers. There are two reasons for this: (1) source code for one version of the Windows kernel (but not associated drivers) is available for educational use and (2) drivers are generally called asynchronously and contain no exported functions. Therefore, finding the handful of driver functions of interest out of the thousands of candidates makes reverse code engineering problematic at best. Developing a methodology to minimize the effort of analyzing these drivers, finding the functions of interest, and extracting the data structures of interest is highly desirable. This paper provides two contributions. First, it describes a general methodology for reverse code engineering of Windows drivers memory structures. Second it applies the methodology to tcpip.sys, a Windows driver that controls network connectivity. The result is the extraction from tcpip.sys of the data structures needed to determine current network connections and listeners from the 32 and 64 bit versions of Windows Vista and Windows 7."}
{"_id": "3d376896a9aa01a71455880e033a27633d88bd6d", "title": "A Hierarchical Neural Autoencoder for Paragraphs and Documents", "text": "Natural language generation of coherent long texts like paragraphs or longer documents is a challenging problem for recurrent networks models. In this paper, we explore an important step toward this generation task: training an LSTM (Longshort term memory) auto-encoder to preserve and reconstruct multi-sentence paragraphs. We introduce an LSTM model that hierarchically builds an embedding for a paragraph from embeddings for sentences and words, then decodes this embedding to reconstruct the original paragraph. We evaluate the reconstructed paragraph using standard metrics like ROUGE and Entity Grid, showing that neural models are able to encode texts in a way that preserve syntactic, semantic, and discourse coherence. While only a first step toward generating coherent text units from neural models, our work has the potential to significantly impact natural language generation and summarization1."}
{"_id": "d4833b410f4d13dee5105f1f233da1cad2cfc810", "title": "A Machine Learning Approach for Product Matching and Categorization Use case : Enriching Product Ads with Semantic Structured Data", "text": "Consumers today have the option to purchase products from thousands of e-shops. However, the completeness of the product specifications and the taxonomies used for organizing the products differ across different e-shops. To improve the consumer experience, e.g., by allowing for easily comparing offers by different vendors, approaches for product integration on the Web are needed. In this paper, we present an approach that leverages neural language models and deep learning techniques in combination with standard classification approaches for product matching and categorization. In our approach we use structured product data as supervision for training feature extraction models able to extract attribute-value pairs from textual product descriptions. To minimize the need for lots of data for supervision, we use neural language models to produce word embeddings from large quantities of publicly available product data marked up with Microdata, which boost the performance of the feature extraction model, thus leading to better product matching and categorization performances. Furthermore, we use a deep Convolutional Neural Network to produce image embeddings from product images, which further improve the results on both tasks."}
{"_id": "8a76b50d20457404444e963efff3f3a2f65a6353", "title": "Resource Management for Device-to-Device Underlay Communication", "text": "Device-to-Device (D2D) communication is a technology component for LTE-A. The existing researches allow D2D as an underlay to the cellular network to increase the spectral efficiency. In this book, D2D communication underlaying cellular networks is studied. Some physical-layer techniques and cross-layer optimization methods on resource management and interference avoidance are proposed and discussed. WINNER II channel models is applied to be the signal and interference model and simulation results show that the performance of D2D link is closely related to the distance between D2D transmitter and receiver and that between interference source and the receiver. Besides, by power control, D2D SINR degrades, which will naturally contribute to low interference to cellular communication. A simple mode selection method of D2D communication is introduced. Based on path-loss (PL) mode selection criterion, D2D gives better performance than traditional cellular system. When D2D pair is farther away from the BS, a better results can be obtained. Game theory, which offers a wide variety of analytical tools to study the complex interactions of players and predict their choices, can be used for power and radio resource management in D2D communication. A reverse iterative combinatorial auction is formulated as a mechanism to allocate the spectrum resources for D2D communications with multiple user pairs sharing the same channel. In addition, a game theoretic approach is developed to implement joint scheduling, power control and channel allocation for D2D communication. Finally, joint power and spectrum resource allocation method is studied under consideration of battery lifetime, which is an important application of D2D communication on increasing user's energy efficiency. The simulation results show that all these methods have beneficial effects on improving the system performance."}
{"_id": "28167c64461cd27c04e104f79ed051af80dc1fdf", "title": "Bayesian Compressive Sensing Using Laplace Priors", "text": "In this paper, we model the components of the compressive sensing (CS) problem, i.e., the signal acquisition process, the unknown signal coefficients and the model parameters for the signal and noise using the Bayesian framework. We utilize a hierarchical form of the Laplace prior to model the sparsity of the unknown signal. We describe the relationship among a number of sparsity priors proposed in the literature, and show the advantages of the proposed model including its high degree of sparsity. Moreover, we show that some of the existing models are special cases of the proposed model. Using our model, we develop a constructive (greedy) algorithm designed for fast reconstruction useful in practical settings. Unlike most existing CS reconstruction methods, the proposed algorithm is fully automated, i.e., the unknown signal coefficients and all necessary parameters are estimated solely from the observation, and, therefore, no user-intervention is needed. Additionally, the proposed algorithm provides estimates of the uncertainty of the reconstructions. We provide experimental results with synthetic 1-D signals and images, and compare with the state-of-the-art CS reconstruction algorithms demonstrating the superior performance of the proposed approach."}
{"_id": "0bc5e5588d1a124a551c3c2b690e230f575d9961", "title": "Cranial Impalement of a Falling Fence Spike in a Child: A Case Report", "text": "Cranial impalement injuries are rare. They occur from a variety of objects, and via different mechanisms. We describe the case of a 5-year old boy who suffered cranial impalement injury via a unique mechanism. He presented to our centre with an impacted 17.8cm long metallic rod (a fence spike) in the vertex of his cranium, just off the midline. The spike penetrated his head and broke off its supporting frame as the frame was falling off a collapsing brick fence. He was transported as soon as possible to the hospital by relatives, without any attempt to remove the impaled spike. An urgent cranial computerized tomogram was done, and the object was removed under general anaesthesia in the operating theatre. The patient had complete recovery and was subsequently discharged from the hospital, with no residual neurological deficit. This case demonstrates a rare mechanism of cranial impalement. It also highlights the importance of following basic principles in the management of such injuries."}
{"_id": "f158e208e14d386a01ec2e020b682bf05d67c7f5", "title": "Selective mutism and anxiety: a review of the current conceptualization of the disorder.", "text": "Selective mutism (SM) is a rare and interesting condition that has been associated with a wide variety of childhood psychiatric conditions. Historically viewed as more of an oddity than a distinct diagnostic entity, early conceptualizations of the condition were based largely on case studies that tended to link SM with oppositional behavior. More recently, controlled studies have enhanced our understanding of SM. This review summarizes the current conceptualization of SM, highlighting evidence supporting the notion that SM is an anxiety-related condition."}
{"_id": "d152eb5884c50d7b898c92e2a96e773f9cfaf887", "title": "DataSite: Proactive visual data exploration with computation of insight-based recommendations", "text": "Effective data analysis ideally requires the analyst to have high expertise as well as high knowledge of the data. Even with such familiarity, manually pursuing all potential hypotheses and exploring all possible views is impractical. We present DataSite, a proactive visual analytics system where the burden of selecting and executing appropriate computations is shared by an automatic server-side computation engine. Salient features identified by these automatic background processes are surfaced as notifications in a feed timeline. DataSite effectively turns data analysis into a conversation between analyst and computer, thereby reducing the cognitive load and domain knowledge requirements. We validate the system with a user study comparing it to a recent visualization recommendation system, yielding significant improvement, particularly for complex analyses that existing analytics systems do not support well."}
{"_id": "c1b189c81b6fda71a471adec11cfe72f6067c1ad", "title": "A Blockchain-Based Approach to Health Information Exchange Networks", "text": "Sharing healthcare data between institutions is challenging. Heterogeneous data structures may preclude compatibility, while disparate use of healthcare terminology limits data comprehension. Even if structure and semantics could be agreed upon, both security and data consistency concerns abound. Centralized data stores and authority providers are attractive targets for cyber attack, and establishing a consistent view of the patient record across a data sharing network is problematic. In this work we present a Blockchain-based approach to sharing patient data. This approach trades a single centralized source of trust in favor of network consensus, and predicates consensus on proof of structural and semantic interoperability."}
{"_id": "8ea55818596769aafe174b58959c4ccd6e0dbab2", "title": "Text Mining on Player Personality for Game Recommendation", "text": "Recommender systems have emergedrecently in many aspects due to enormous amount of information or items existed in e-commerce or social network services. Users find it convenient if the service can recommend interesting items to them automatically. Past systems adopted users' preference or social connections for such recommendation. In this work, we will try to identify a game player's personality traits and use them to recommend games that may be interested by the player. The personality traits of a player is identified by applying a text mining process on textual contents related to the player. The same process is also applied on the comments of the games to identified the personality traits of games. Recommendation is then performed based on the similarity between the traits of the user and the game. We performed experiments on 63 players and 2050 games and obtained promising results which may demonstrate the plausibility of personality-based recommender systems."}
{"_id": "b0b02f3b613b615fcc8afcfdbeb2807db7d78c77", "title": "Complementary and alternative therapies for treating multiple sclerosis symptoms: a systematic review.", "text": "Multiple sclerosis (MS) is a chronic disease of the central nervous system without a known cure. Thus the role of complementary and alternative therapies (CATs) for the management of symptoms lies in palliative care and this is borne out by the popularity of these treatments amongst MS sufferers. This review is aimed at determining whether this use is supported by evidence of effectiveness from rigorous clinical trials. Database literature searches were performed and papers were extracted in a pre-defined manner. Twelve randomized controlled trials were located that investigated a CAT for MS: nutritional therapy (4), massage (1), Feldenkrais bodywork (1), reflexology (1), magnetic field therapy (2), neural therapy (1) and psychological counselling (2). The evidence is not compelling for any of these therapies, with many trials suffering from significant methodological flaws. There is evidence to suggest some benefit of nutritional therapy for the physical symptoms of MS. Magnetic field therapy and neural therapy appear to have a short-term beneficial effect on the physical symptoms of MS. Massage/bodywork and psychological counselling seem to improve depression, anxiety and self-esteem. The effectiveness for other CATs is unproven at this time. In all the CATs examined further investigations are needed in the form of rigorous large-scale trials."}
{"_id": "3d17e7142161d9fe049873d462e73278299360f1", "title": "Building Better Open-Source Tools to Support Fairness in Automated Scoring", "text": "Automated scoring of written and spoken responses is an NLP application that can significantly impact lives especially when deployed as part of high-stakes tests such as the GRE\u00ae and the TOEFL\u00ae. Ethical considerations require that automated scoring algorithms treat all testtakers fairly. The educational measurement community has done significant research on fairness in assessments and automated scoring systems must incorporate their recommendations. The best way to do that is by making available automated, non-proprietary tools to NLP researchers that directly incorporate these recommendations and generate the analyses needed to help identify and resolve biases in their scoring systems. In this paper, we attempt to provide such a solution."}
{"_id": "45d7755c2e3e1d0ec92c64f78a2b09eaeafd270c", "title": "Deep HyperNEAT : Evolving the Size and Depth of the Substrate Evolutionary", "text": "This report describes DeepHyperNEAT, an extension of HyperNEAT to allow it to alter the topology of its indirectlyencoded neural network (called the substrate) so that it can continue to grow and increase in complexity over evolution. This aim is accomplished by augmenting HyperNEAT\u2019s compositional pattern producing networks (CPPNs) with new information that allows them to represent substrate topology, and by adding three novel mutations to HyperNEAT that exploit this new information. The purpose of this report is to detail the method and validate its ability to evolve, which is shown through a simple XOR-based test. Significantly more work will be needed to analyze the algorithmic and scientific implications of these new capabilities, but by releasing this report it becomes possible for others also to explore the opportunities that such an extension opens up. A link to associated source code is also provided at the end of this document. Introduction HyperNEAT [1; 2; 3] (Hypercube-based NeuroEvolution of Augmenting Topologies) is a method for evolving neural networks (NNs) through an indirect encoding based on the NEAT method [4] that came before it. The main idea in HyperNEAT is that a special genetic representation called a compositional pattern producing network (CPPN) [5] that is evolved by NEAT generates a pattern of weights that connect a fixed set of neurons positioned within a two-dimensional or three-dimensional space called the substrate. A longstanding limitation of the conventional HyperNEAT algorithm is that while the CPPN can increase in size, the number of neurons and layers in the substrate remains fixed from the beginning of evolution. While there exists one method called evolvable substrate HyperNEAT (ES-HyperNEAT) [6] that allows HyperNEAT to increase the complexity of its substrate that indeed offers opportunities not available in the original HyperNEAT, the new method proposed here, called Deep HyperNEAT is designed explicitly to evolve substrates reminiscent of architectures seen in deep learning today, with increasing numbers of layers and separate parallel pathways through the network. Deep HyperNEAT achieves its new ability by fitting more information into the standard CPPN of HyperNEAT, which now can mutate to describe new layers and new neurons in the substrate. This paper is a preliminary technical report on the method, released to allow others to experiment with its ideas quickly. Thus many intriguing scientific questions about the opportunities created by this new capability will not be answered here; instead the focus is on introducing the technical components of the method and showing that they work correctly (i.e. perform as expected) in a simple validation experiment on the XOR problem. Rigorous scientific investigation is left for the future. The paper is organized as follows: HyperNEAT and the CPPN indirect encoding are reviewed first. The main extensions to HyperNEAT and CPPNs that constitute Deep HyperNEAT are then described, and demonstrated running in a simple XOR test. The paper concludes with thoughts on implications and future work. A public repository containing all of the code for the method is linked at the end of the paper. Background: HyperNEAT HyperNEAT [1; 2; 3] is a neuroevolution method that utilizes indirect encoding to evolve connectivity patterns in neural networks (NN). 
HyperNEAT historically has found applications in a wide breadth of domains, including multiagent control [7], checkers [2; 3], quadruped locomotion [8; 9], Robocup Keepaway [10], and simulated driving in traffic [11]. The motivation for HyperNEAT was to capture the ability of natural genetic encoding (DNA) to reuse information to encode massive neural structures with orders of magnitude fewer genes. The key principle in HyperNEAT that enabled this kind of meaningful reuse is that neural geometry should be able to exploit problem structure. (This principle is similar to the inspiration behind incorporating convolution in deep learning, but its realization and implications in HyperNEAT are different.) For example, two hidden neurons that process adjacent parts of an input field (e.g. a visual field) should also be roughly adjacent within the neural geometry. That way, locality within an NN becomes a heuristic for relatedness. To situate neurons in an NN within a geometric context, each neuron in HyperNEAT is assigned a Cartesian coordinate. For example, if the NN is configured as a stack of two-dimensional layers of neurons, then the position of each neuron within a layer is simply (x,y). In HyperNEAT it is conventional that the coordinate system of each layer centers at (0,0) and ranges between [-1, 1] in both x and y, thereby forming a square filter or \u201csheet\u201d of neurons. That way, each neuron n within a layer l can be identified by its position (x,y). Specifying neurons in the NN as a function of the neural geometry allows for an elegant formalism for encoding the connectivity patterns between layers. The connectivity pattern between one layer and another (e.g. all possible connections and weights between neurons in some layer i and another layer j) can be encoded by a function ConnectLayers that takes the positions of the source and target neurons as input. Instead of evolving the weights of every connection individually as in past approaches to neuroevolution like NEAT [12], HyperNEAT evolves a graphical representation of a function, ConnectLayers, that describes how the neurons are connected. In this way, ConnectLayers acts as a lower-dimensional encoding for how to generate connectivity patterns. For example, we might want to query ConnectLayers for the weight of a connection between a neuron a in layer 1 with coordinates (x1,y1) and a neuron b in layer 2 with coordinates (x2,y2). ConnectLayers then returns the weight w_{x1,y1,x2,y2}: w_{x1,y1,x2,y2} = ConnectLayers(x1, y1, x2, y2). (1) The main idea in HyperNEAT is to evolve ConnectLayers. In HyperNEAT, ConnectLayers is represented by a compositional pattern producing network (CPPN) [13] (figure 1). A CPPN is an evolvable directed graph representation for functions. Much like NNs, nodes are functions and connections have weights that are applied to the outputs of nodes before passing into other nodes. Importantly, CPPN node functions are heterogeneous in any given CPPN (they are sampled from a set of possible functions usually containing sigmoid, linear, Gaussian, and cosine) and there is no notion of layers. A thorough introduction or review of CPPNs is in [5]. The CPPN begins with a simple topology (usually just its input nodes and its output node with a sparse number of connections between them) and complexifies via evolution in HyperNEAT. Evolving the CPPN this way, through complexification, allows the CPPN to be biased towards compressed representations of connectivity patterns.
This approach leads to increasingly intricate yet organized connectivity patterns. In practice CPPNs have proven effective and popular in part because they are designed to evolve increasing complexity with the original NEAT algorithm, which underlies evolution in HyperNEAT. The predicate \u201chyper\u201d in HyperNEAT signifies that the CPPN is in effect encoding a pattern within a four-dimensional or six-dimensional hypercube (because ConnectLayers typically takes four or six inputs), which is then painted onto the connectivity of a network. In effect, HyperNEAT provides the unique ability to evolve a compressed function that can encode the connectivity of an arbitrarily large network as a function of its geometry. That way, it is possible to produce NNs whose connections display regular and compositional structure through the reuse of motifs encoded in the CPPN. The"}
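Equation (1) above is easy to make concrete: a toy CPPN queried over substrate coordinates to produce a connectivity pattern. In HyperNEAT the CPPN's topology and weights are evolved by NEAT; the hand-picked functions, weights, and pruning threshold below are purely illustrative.

```python
import math

def cppn(x1, y1, x2, y2):
    """A fixed, hand-built ConnectLayers: two hidden nodes with heterogeneous
    activations feeding a tanh output that yields the connection weight."""
    h1 = math.sin(2.0 * (x2 - x1))            # sine node
    h2 = math.exp(-((y2 - y1) ** 2))          # Gaussian node
    return math.tanh(1.5 * h1 + 0.8 * h2)

def connect_layers(src, dst, threshold=0.2):
    """Query the CPPN for every source/target pair; prune weights below
    |threshold| (a common HyperNEAT convention)."""
    weights = {}
    for p in src:
        for q in dst:
            w = cppn(*p, *q)
            if abs(w) > threshold:
                weights[p, q] = w
    return weights

sheet = [(-1 + i * 0.5, -1 + j * 0.5) for i in range(5) for j in range(5)]
print(len(connect_layers(sheet, sheet)))      # connections kept on a 5x5 substrate sheet
```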
{"_id": "0b40a2f3181e34a67486b62fc4de065a4f3609e3", "title": "Improved seam carving for video retargeting", "text": "Video, like images, should support content aware resizing. We present video retargeting using an improved seam carving operator. Instead of removing 1D seams from 2D images we remove 2D seam manifolds from 3D space-time volumes. To achieve this we replace the dynamic programming method of seam carving with graph cuts that are suitable for 3D volumes. In the new formulation, a seam is given by a minimal cut in the graph and we show how to construct a graph such that the resulting cut is a valid seam. That is, the cut is monotonic and connected. In addition, we present a novel energy criterion that improves the visual quality of the retargeted images and videos. The original seam carving operator is focused on removing seams with the least amount of energy, ignoring energy that is introduced into the images and video by applying the operator. To counter this, the new criterion is looking forward in time - removing seams that introduce the least amount of energy into the retargeted result. We show how to encode the improved criterion into graph cuts (for images and video) as well as dynamic programming (for images). We apply our technique to images and videos and present results of various applications."}
{"_id": "537f16973900fbf4e559d64113711d35bf7ca4a2", "title": "FlowFence: Practical Data Protection for Emerging IoT Application Frameworks", "text": "Emerging IoT programming frameworks enable building apps that compute on sensitive data produced by smart homes and wearables. However, these frameworks only support permission-based access control on sensitive data, which is ineffective at controlling how apps use data once they gain access. To address this limitation, we present FlowFence, a system that requires consumers of sensitive data to declare their intended data flow patterns, which it enforces with low overhead, while blocking all other undeclared flows. FlowFence achieves this by explicitly embedding data flows and the related control flows within app structure. Developers use FlowFence support to split their apps into two components: (1) A set of Quarantined Modules that operate on sensitive data in sandboxes, and (2) Code that does not operate on sensitive data but orchestrates execution by chaining Quarantined Modules together via taint-tracked opaque handles\u2014references to data that can only be dereferenced inside sandboxes. We studied three existing IoT frameworks to derive key functionality goals for FlowFence, and we then ported three existing IoT apps. Securing these apps using FlowFence resulted in an average increase in size from 232 lines to 332 lines of source code. Performance results on ported apps indicate that FlowFence is practical: A face-recognition based doorcontroller app incurred a 4.9% latency overhead to recognize a face and unlock a door."}
{"_id": "7f9bbe985ccf6c16b6ef60ccb9ef04e4219b54cb", "title": "Unauthorized origin crossing on mobile platforms: threats and mitigation", "text": "With the progress in mobile computing, web services are increasingly delivered to their users through mobile apps, instead of web browsers. However, unlike the browser, which enforces origin-based security policies to mediate the interactions between the web content from different sources, today's mobile OSes do not have a comparable security mechanism to control the cross-origin communications between apps, as well as those between an app and the web. As a result, a mobile user's sensitive web resources could be exposed to the harms from a malicious origin. In this paper, we report the first systematic study on this mobile cross-origin risk. Our study inspects the main cross-origin channels on Android and iOS, including intent, scheme and web-accessing utility classes, and further analyzes the ways popular web services (e.g., Facebook, Dropbox, etc.) and their apps utilize those channels to serve other apps. The research shows that lack of origin-based protection opens the door to a wide spectrum of cross-origin attacks. These attacks are unique to mobile platforms, and their consequences are serious: for example, using carefully designed techniques for mobile cross-site scripting and request forgery, an unauthorized party can obtain a mobile user's Facebook/Dropbox authentication credentials and record her text input. We report our findings to related software vendors, who all acknowledged their importance. To address this threat, we designed an origin-based protection mechanism, called Morbs, for mobile OSes. Morbs labels every message with its origin information, lets developers easily specify security policies, and enforce the policies on the mobile channels based on origins. Our evaluation demonstrates the effectiveness of our new technique in defeating unauthorized origin crossing, its efficiency and the convenience for the developers to use such protection."}
{"_id": "9e5db350ba34f2b4c662cdea7acb6e906484ada9", "title": "Aurasium: Practical Policy Enforcement for Android Applications", "text": "The increasing popularity of Google\u2019s mobile platform Android makes it the prime target of the latest surge in mobile malware. Most research on enhancing the platform\u2019s security and privacy controls requires extensive modification to the operating system, which has significant usability issues and hinders efforts for widespread adoption. We develop a novel solution called Aurasium that bypasses the need to modify the Android OS while providing much of the security and privacy that users desire. We automatically repackage arbitrary applications to attach user-level sandboxing and policy enforcement code, which closely watches the application\u2019s behavior for security and privacy violations such as attempts to retrieve a user\u2019s sensitive information, send SMS covertly to premium numbers, or access malicious IP addresses. Aurasium can also detect and prevent cases of privilege escalation attacks. Experiments show that we can apply this solution to a large sample of benign and malicious applications with a near 100 percent success rate, without significant performance and space overhead. Aurasium has been tested on three versions of the Android OS, and is freely available."}
{"_id": "09bc139616adeda9ec8a1c7ad40fe04330e2e445", "title": "Lessons learned from the deployment of a smartphone-based access-control system", "text": "Grey is a smartphone-based system by which a user can exercise her authority to gain access to rooms in our university building, and by which she can delegate that authority to other users. We present findings from a trial of Grey, with emphasis on how common usability principles manifest themselves in a smartphone-based security application. In particular, we demonstrate aspects of the system that gave rise to failures, misunderstandings, misperceptions, and unintended uses; network effects and new flexibility enabled by Grey; and the implications of these for user behavior. We argue that the manner in which usability principles emerged in the context of Grey can inform the design of other such applications."}
{"_id": "911bb5b7258dd89f2ffe9c337443eb13b78cabc1", "title": "NL-InSAR: Nonlocal Interferogram Estimation", "text": "Interferometric synthetic aperture radar (SAR) data provide reflectivity, interferometric phase, and coherence images, which are paramount to scene interpretation or low-level processing tasks such as segmentation and 3-D reconstruction. These images are estimated in practice from a Hermitian product on local windows. These windows lead to biases and resolution losses due to the local heterogeneity caused by edges and textures. This paper proposes a nonlocal approach for the joint estimation of the reflectivity, the interferometric phase, and the coherence images from an interferometric pair of coregistered single-look complex (SLC) SAR images. Nonlocal techniques are known to efficiently reduce noise while preserving structures by performing the weighted averaging of similar pixels. Two pixels are considered similar if the surrounding image patches are \u201cresembling.\u201d Patch similarity is usually defined as the Euclidean distance between the vectors of graylevels. In this paper, a statistically grounded patch-similarity criterion suitable to SLC images is derived. A weighted maximum likelihood estimation of the SAR interferogram is then computed with weights derived in a data-driven way. Weights are defined from the intensity and interferometric phase and are iteratively refined based both on the similarity between noisy patches and on the similarity of patches from the previous estimate. The efficiency of this new interferogram construction technique is illustrated both qualitatively and quantitatively on synthetic and true data."}
{"_id": "b22ec45cd591ef39ba40c3fdea48b7e8d1286bc4", "title": "Pattern Recognition: Possible Research Areas and Issues", "text": "Pattern recognition is a tough problem for computers, although humans are wired for it. Pattern recognition is becoming increasingly important in the age of automation and information handling and retrieval. This paper reviews possible application areas of Pattern recognition. Author covers various sub-disciplines of pattern recognition based on learning methods, such as supervised, unsupervised, semi-supervised learning and key research areas such as grammar induction. Novel solutions to these possible problems could be well deployed for character recognition, speech analysis, man and machine diagnostics, person identification, industrial inspection and so on. The paper concludes with brief discussion on open issues that need to be addressed by future researchers."}
{"_id": "75829605749167a951cd9ba24c47dfb1f5719208", "title": "A Dataset for Web-Scale Knowledge Base Population", "text": "For many domains, structured knowledge is in short supply, while unstructured text is plentiful. Knowledge Base Population (KBP) is the task of building or extending a knowledge base from text, and systems for KBP have grown in capability and scope. However, existing datasets for KBP are all limited by multiple issues: small in size, not open or accessible, only capable of benchmarking a fraction of the KBP process, or only suitable for extracting knowledge from title-oriented documents (documents that describe a particular entity, such as Wikipedia pages). We introduce and release CC-DBP, a web-scale dataset for training and benchmarking KBP systems. The dataset is based on Common Crawl as the corpus and DBpedia as the target knowledge base. Critically, by releasing the tools to build the dataset, we enable the dataset to remain current as new crawls and DBpedia dumps are released. Also, the modularity of the released tool set resolves a crucial tension between the ease that a dataset can be used for a particular subtask in KBP and the number of different subtasks it can be used to train or benchmark."}
{"_id": "528dd4f7e7dac60edef984e9fdb9cbd4b0967d63", "title": "WebGLStudio: a pipeline for WebGL scene creation", "text": "We present WebGLStudio - a pipeline for the creation and editing of high-quality 3D scenes using WebGL. The pipeline features a 3D Scene-Graph rendering engine; an interactive scene editor allowing detailed setup and configuration of assets in real-time; a web-based asset manager; and a stand-alone rendering module ready to embedded in target applications. We further present a series of implementational details discovered to overcome limitations of the web browser context to enable realistic rendering and performance. The principal contribution of this work to the graphics community is the methodology used to take advantage of several unique aspects of Javascript when used as a 3D programming language; and the demonstration of possibilities involved in real-time editing of the parameters of materials, shaders and post-processing effects, in order for the user/artist to create a 3D scene as desired."}
{"_id": "d2f328cffab2859b50752175bd3cc26592e9d1b3", "title": "BOSCH RARE SOUND EVENTS DETECTION SYSTEMS FOR DCASE 2017 CHALLENGE", "text": "In this report, we describe three systems designed at BOSCH for rare audio sound events detection task of DCASE 2017 challenge. The first system is an end-to-end audio event segmentation using embeddings based on deep convolutional neural network (DCNN) and deep recurrent neural network (DRNN) trained on Mel-filter banks and spectogram features. Both system 2 and 3 contain two parts: audio event tagging and audio event segmentation. Audio event tagging selects the positive audio recordings (containing audio events), which are later processed by the audio segmentation part. Feature selection method has been deployed to select a subset of features in both systems. System 2 employs Dilated convolutional neural network on the selected features for audio tagging, and an audio-codebook approach to convert audio features to audio vectors (Audio2vec system) which are then passed to an LSTM network for audio events boundary prediction. System 3 is based on multiple instance learning problem using variational auto encoder (VAE) to perform audio event tagging and segmentation. Similar to system 2, here a LSTM network is used for audio segmentation. Finally, we have utilized models based on score-fusion among different systems to improve the final results."}
{"_id": "b385964ea71963a79d7a34f5844110b8a62661c0", "title": "Semantic homophily in online communication: Evidence from Twitter", "text": "People are observed to assortatively connect on a set of traits. This phenomenon, termed assortative mixing or sometimes homophily, can be quantified through assortativity coefficient in social networks. Uncovering the exact causes of strong assortative mixing found in social networks has been a research challenge. Among the main suggested causes from sociology are the tendency of similar individuals to connect (often itself referred as homophily) and the social influence among already connected individuals. Distinguishing between these tendencies and other plausible causes and quantifying their contribution to the amount of assortative mixing has been a difficult task, and proven not even possible from observational data. However, another task of similar importance to researchers and in practice can be tackled, as we present here: understanding the exact mechanisms of interplay between these tendencies and the underlying social network structure. Namely, in addition to the mentioned assortativity coefficient, there are several other static and temporal network properties and substructures that can be linked to the tendencies of homophily and social influence in the social network and we herein investigate those. Concretely, we tackle a computer-mediated communication network (based on Twitter mentions) and a particular type of assortative mixing that can be inferred from the semantic features of communication content that we term semantic homophily. Our work, to the best of our knowledge, is the first to offer an in-depth analysis on semantic homophily in a communication network and the interplay between them. We quantify diverse levels of semantic homophily, identify the semantic aspects that are the drivers of observed homophily, show insights in its temporal evolution and finally, we present its intricate interplay with the communication network on Twitter. By analyzing these mechanisms we increase understanding on what are the semantic aspects that shape and how they shape the human computer-mediated communication. In addition, our analysis framework presented on this concrete case can be easily adapted, extended and applied on other type of social networks and for different types of homophily."}
{"_id": "9dbc3f4c3dff4c5ba7eb01ee0bf63dc7c7e70033", "title": "An Iterative Algorithm for Camera Pose Estimation with Lie Group Representation", "text": "For solving camera pose problem, a novel accurate and fast pose estimation algorithm based on Lie group representation is proposed. Firstly, object function based on total space error of all feature points is built. Secondly, object function is linearized based on Lie group representation. The Jacobian matrix of camera pose is deduced and pose parameters are updated. Finally, the computation efficiency of algorithm is promoted by using Kronecker product to optimize the iteration process. Simulation and real experiments show that the proposed algorithm is able to obtain camera pose with high precision, but consumes less time compared with the existing algorithms. The proposed algorithm is capable of estimating camera pose with high precision and efficiency."}
{"_id": "243b6812c006817499e5fb24952017a71dda04f0", "title": "Accurate human motion capture in large areas by combining IMU- and laser-based people tracking", "text": "A large number of applications use motion capture systems to track the location and the body posture of people. For instance, the movie industry captures actors to animate virtual characters that perform stunts. Today's tracking systems either operate with statically mounted cameras and thus can be used in confined areas only or rely on inertial sensors that allow for free and large-scale motion but suffer from drift in the pose estimate. This paper presents a novel tracking approach that aims to provide globally aligned full body posture estimates by combining a mobile robot and an inertial motion capture system. In our approach, a mobile robot equipped with a laser scanner is used to anchor the pose estimates of a person given a map of the environment. It uses a particle filter to globally localize a person wearing a motion capture suit and to robustly track the person's position. To obtain a smooth and globally aligned trajectory of the person, we solve a least squares optimization problem formulated from the motion capture suite and tracking data. Our approach has been implemented on a real robot and exhaustively tested. As the experimental evaluation shows, our system is able to provide locally precise and globally aligned estimates of the person's full body posture."}
{"_id": "d0158836cc161e908888fd3c63274875ef87f973", "title": "Predicting the severity of a reported bug", "text": "The severity of a reported bug is a critical factor in deciding how soon it needs to be fixed. Unfortunately, while clear guidelines exist on how to assign the severity of a bug, it remains an inherent manual process left to the person reporting the bug. In this paper we investigate whether we can accurately predict the severity of a reported bug by analyzing its textual description using text mining algorithms. Based on three cases drawn from the open-source community (Mozilla, Eclipse and GNOME), we conclude that given a training set of sufficient size (approximately 500 reports per severity), it is possible to predict the severity with a reasonable accuracy (both precision and recall vary between 0.65\u20130.75 with Mozilla and Eclipse; 0.70\u20130.85 in the case of GNOME)."}
{"_id": "49bc9ff4b6109a8fc35ba84602d182870bfce10d", "title": "BLASX: A High Performance Level-3 BLAS Library for Heterogeneous Multi-GPU Computing", "text": "Basic Linear Algebra Subprograms (BLAS) are a set of low level linear algebra kernels widely adopted by applications involved with the deep learning and scientific computing. The massive and economic computing power brought forth by the emerging GPU architectures drives interest in implementation of compute-intensive level 3 BLAS on multi-GPU systems. In this paper, we investigate existing multi-GPU level 3 BLAS and present that 1) issues, such as the improper load balancing, inefficient communication, insufficient GPU stream level concurrency and data caching, impede current implementations from fully harnessing heterogeneous computing resources; 2) and the inter-GPU Peer-to-Peer(P2P) communication remains unexplored. We then present BLASX: a highly optimized multi-GPU level-3 BLAS. We adopt the concepts of algorithms-by-tiles treating a matrix tile as the basic data unit and operations on tiles as the basic task. Tasks are guided with a dynamic asynchronous runtime, which is cache and locality aware. The communication cost under BLASX becomes trivial as it perfectly overlaps communication and computation across multiple streams during asynchronous task progression. It also takes the current tile cache scheme one step further by proposing an innovative 2-level hierarchical tile cache, taking advantage of inter-GPU P2P communication. As a result, linear speedup is observable with BLASX under multi-GPU configurations; and the extensive benchmarks demonstrate that BLASX consistently outperforms the related leading industrial and academic implementations such as cuBLAS-XT, SuperMatrix, MAGMA."}
{"_id": "72c81c52b4bcff6480fd42539063333238ed37aa", "title": "Self-Attentional Acoustic Models", "text": "Self-attention is a method of encoding sequences of vectors by relating these vectors to each-other based on pairwise similarities. These models have recently shown promising results for modeling discrete sequences, but they are non-trivial to apply to acoustic modeling due to computational and modeling issues. In this paper, we apply self-attention to acoustic modeling, proposing several improvements to mitigate these issues: First, self-attention memory grows quadratically in the sequence length, which we address through a downsampling technique. Second, we find that previous approaches to incorporate position information into the model are unsuitable and explore other representations and hybrid models to this end. Third, to stress the importance of local context in the acoustic signal, we propose a Gaussian biasing approach that allows explicit control over the context range. Experiments find that our model approaches a strong baseline based on LSTMs with networkin-network connections while being much faster to compute. Besides speed, we find that interpretability is a strength of selfattentional acoustic models, and demonstrate that self-attention heads learn a linguistically plausible division of labor."}
{"_id": "c26b6b3c828ccd24eec011467ab6e328919fda9a", "title": "Peginterferon alfa-2a plus ribavirin for chronic hepatitis C virus infection.", "text": "BACKGROUND\nTreatment with peginterferon alfa-2a alone produces significantly higher sustained virologic responses than treatment with interferon alfa-2a alone in patients with chronic hepatitis C virus (HCV) infection. We compared the efficacy and safety of peginterferon alfa-2a plus ribavirin, interferon alfa-2b plus ribavirin, and peginterferon alfa-2a alone in the initial treatment of chronic hepatitis C.\n\n\nMETHODS\nA total of 1121 patients were randomly assigned to treatment and received at least one dose of study medication, consisting of 180 microg of peginterferon alfa-2a once weekly plus daily ribavirin (1000 or 1200 mg, depending on body weight), weekly peginterferon alfa-2a plus daily placebo, or 3 million units of interferon alfa-2b thrice weekly plus daily ribavirin for 48 weeks.\n\n\nRESULTS\nA significantly higher proportion of patients who received peginterferon alfa-2a plus ribavirin had a sustained virologic response (defined as the absence of detectable HCV RNA 24 weeks after cessation of therapy) than of patients who received interferon alfa-2b plus ribavirin (56 percent vs. 44 percent, P<0.001) or peginterferon alfa-2a alone (56 percent vs. 29 percent, P<0.001). The proportions of patients with HCV genotype 1 who had sustained virologic responses were 46 percent, 36 percent, and 21 percent, respectively, for the three regimens. Among patients with HCV genotype 1 and high base-line levels of HCV RNA, the proportions of those with sustained virologic responses were 41 percent, 33 percent, and 13 percent, respectively. The overall safety profiles of the three treatment regimens were similar; the incidence of influenza-like symptoms and depression was lower in the groups receiving peginterferon alfa-2a than in the group receiving interferon alfa-2b plus ribavirin.\n\n\nCONCLUSIONS\nIn patients with chronic hepatitis C, once-weekly peginterferon alfa-2a plus ribavirin was tolerated as well as interferon alfa-2b plus ribavirin and produced significant improvements in the rate of sustained virologic response, as compared with interferon alfa-2b plus ribavirin or peginterferon alfa-2a alone."}
{"_id": "10983cbf1393e79d4a5b7c14b26e368c60ba0b6a", "title": "The New Algorithms of Weighted Association Rules based on Apriori and FP-Growth Methods", "text": "In order to improve the frequent itemsets generated layer-wise efficiency, the paper uses the Apriori property to reduce the search space. FP-grow algorithm for mining frequent pattern steps mainly is divided into two steps: FP-tree and FP-tree to construct a recursive mining. Algorithm FP-Growth is to avoid the high cost of candidate itemsets generation, fewer, more efficient scanning. The paper puts forward the new algorithms of weighted association rules based on Apriori and FP-Growth methods. In the same support, this method is the most effective and stable maximum frequent itemsets mining capacity and minimum execution time. Through theoretical analysis and experimental simulation of the performance of the algorithm is discussed, it is proved that the algorithm is feasible and effective."}
{"_id": "0e99b583eb0831edd7dae6285f23054ac377b85e", "title": "Athlete: A cargo handling and manipulation robot for the moon", "text": ""}
{"_id": "49531de9b6de7df2fd9f3be571c85de8633d8166", "title": "A framework for thermal building parameter identification and simulation", "text": "Studies related to Smart Energy technologies have been highlighted in the last years, specially by significantly contributing to the saving of energy consumption, to the reduction of CO2 emissions and to the diminishing of the paid values in electricity bills. The electrical-thermal analogy can be used in order to simulate heating behavior in real situations and it also may be considered a contributor to Smart Energy technologies. This is the reason why several simulation tools are used. Taking this into account, this paper presents a framework using NGSpice simulator to identify both, the thermal capacitance and the thermal resistance parameters from a hypothetical monitored room which in a Smart Building context. The results from the proposed framework are compared to OpenModelica software and to the contributions from literature."}
{"_id": "21a315b4604442168b162fde40803848c2361d1a", "title": "AXIAL-FLUX PERMANENT-MAGNET MOTOR DESIGN FOR ELECTRIC VEHICLE DIRECT DRIVE USING SIZING EQUATION AND FINITE ELEMENT ANALYSIS", "text": "The design process of a double-sided slotted TORUS axialflux permanent-magnet (AFPM) motor suitable for direct drive of electric vehicle (EV) is presented. It used sizing equation and Finite Element Analysis (FEA). AFPM motor is a high-torque-density motor easily mounted compactly onto a vehicle wheel, fitting the wheel rim perfectly. A preliminary design is a double-sided slotted AFPM motor with 6 rotor poles for high torque-density and stable rotation. In determining the design requirements, a simple vehicle-dynamics model that evaluates vehicle performance through the typical cruising trip of an automobile was considered. To obtain, with the highest possible torque, the initial design parameters of the motor, AFPM\u2019s fundamental theory and sizing equation were applied. Vector Field Opera-3D 14.0 commercial software ran the FEA of the motor design, evaluating and enhancing accuracy of the design parameters. Results of the FEA simulation were compared with those obtained from the sizing equation; at no-load condition, the flux density at every part of the motor agreed. The motor\u2019s design meets all the requirements and limits of EV, and fits the shape and size of a classical-vehicle wheel rim. The design process is comprehensive and can be used for an arbitrary EV with an arbitrary cruising scenario."}
{"_id": "8469007f1d62cac7d7188dbe6b67b24d40d46ca4", "title": "Hybrid context-sensitivity for points-to analysis", "text": "Context-sensitive points-to analysis is valuable for achieving high precision with good performance. The standard flavors of context-sensitivity are call-site-sensitivity (kCFA) and object-sensitivity. Combining both flavors of context-sensitivity increases precision but at an infeasibly high cost. We show that a selective combination of call-site- and object-sensitivity for Java points-to analysis is highly profitable. Namely, by keeping a combined context only when analyzing selected language features, we can closely approximate the precision of an analysis that keeps both contexts at all times. In terms of speed, the selective combination of both kinds of context not only vastly outperforms non-selective combinations but is also faster than a mere object-sensitive analysis. This result holds for a large array of analyses (e.g., 1-object-sensitive, 2-object-sensitive with a context-sensitive heap, type-sensitive) establishing a new set of performance/precision sweet spots."}
{"_id": "4146ac831303d3fd241bfbd496a60efd95969d7f", "title": "Probabilistic neural networks", "text": "-By replacing the sigmoid activation function often used in neural networks with an exponential function, a probabilistic neural network ( PNN) that can compute nonlinear decision boundaries which approach the Bayes optimal is formed. Alternate activation functions having similar properties are also discussed. A fourlayer neural network of the type proposed can map any input pattern to any number of classifications. The decision boundaries can be modified in real-time using new data as they become available, and can be implemented using artificial hardware \"'neurons\" that operate entirely in parallel. Provision is also made for estimating the probability and reliability of a classification as well as making the decision. The technique offers a tremendous speed advantage for problems in which the incremental adaptation time of back propagation is a significant fraction of the total computation time. For one application, the PNN paradigm was 200,000 times faster than back-propagation. Keywords--Neural network, Probability density function, Parallel processor, \"Neuron\", Pattern recognition, Parzen window, Bayes strategy, Associative memory."}
{"_id": "d061b99ace2dd397e627133e3740107ac069e6d6", "title": "Multiband antenna for Bluetooth/ZigBee/ Wi-Fi/WiMAX/WLAN/X-band applications: Partial ground with periodic structures and EBG", "text": "A compact in size microstrip antenna with multiband resonance features is described in this paper. A circular antenna with the partial ground is used as a primary antenna which resonated for WiMAX band (3.3\u20133.7 GHz). Further, primary antenna embedded with mushroom shape EBG (Electromagnetic Band Gap) and periodic DGS (Defected Ground Structure) to operate over WLAN band (5.1\u20135.8 GHz) and X-band from 8\u201312 GHz. Proposed antenna covers 2.4/3.5/5.5 GHz bands and complete X-band which also reflects that it has novel suitability to use for a ZigBee/Wi-Fi/Bluetooth applications. The proposed antenna is etched on FR-4 substrate and overall dimension is 20\u00d726\u00d71.6 mm3. The projected antenna has wide-ranging bandwidth with VSWR < 2. It provides favorable agreement between measured and simulated results."}
{"_id": "ead19a7009b1856c25941f2c11abfcf22ffc1ce6", "title": "Philosophy in the Flesh : The Embodied Mind and its Challenge to Western Thought", "text": "Part I supports these sentences with findings from cognitive science \"that human reason is a form of animal reason, a reason inextricably tied to our bodies and the peculiarities of our brains\" and \"that our bodies, brains, and interactions with our environment provide the mostly unconscious basis for our everyday metaphysics, that is, our sense of what is real.\" The third sentence is a summary of their earlier work on metaphor (Lakoff and Johnson 1980), which in Part II of this book they extend to a more detailed analysis of the metaphors underlying basic philosophical issues, such as time, events and causes, mind, self, and morality. Part III applies that analysis to the metaphors tacitly assumed by philosophers ranging from the Presocratics to Noam Chomsky. Part IV presents arguments for \"empirically responsible philosophy\" and its potential for understanding \"who we are, how we experience our world, and how we ought to live.\" Finally, the appendix summarizes research inspired by this philosophy that has produced computational simulations of certain aspects of embodied minds. Given such a broad scope, the authors have not been able to cover all the topics with equal success. Instead of challenging all of western philosophy, they should have concentrated on their major opponent, Noam Chomsky and his philosophy of language. Lakoff began his career in linguistics as a student and later a teacher of Chomsky's version of transformational grammar. But in the late 1960s, he joined with other former students to promote generative semantics as an alternative to Chomsky's generative syntax. The result was a series of \"linguistic wars,\" whose history has been retold by various participants over the past twenty years. Chapter 22 of this book presents a strong case against Chomsky's \"autonomous syntax\" and for an approach that bases syntax on semantics and semantics on the bodily mechanisms of perception and action. The philosophical and linguistic arguments of Chapter 22 are reinforced in the appendix by a summary of the Neural Theory of Language (NTL), which is being developed by Jerome Feldman, George Lakoff, Lokendra Shastri, and their students. To date, the NTL group has undertaken three major tasks in language understanding"}
{"_id": "0252703f30eb3a3cc6a1c35e336814c9bd2dbd36", "title": "A VoIP emergency services architecture and prototype", "text": "Providing emergency services in VoIP networks are vital to the success of VoIP. It not only presents design and implementation challenges, but also gives an opportunity to enhance the existing emergency call handling infrastructure. We propose an architecture to deliver emergency services in SIP-based VoIP networks, which can accommodate PSTN calls through PSTN to SIP gateways. Our architecture addresses the issues of identifying emergency calls, determining callers' locations, routing emergency calls to appropriate public safety answering points (PSAPs), and presenting required information to emergency call takers. We have developed a prototype implementation to prove our architecture's feasibility and scalability. We expect to undertake a pilot project at a working PSAP with our implementation once it is thoroughly tested."}
{"_id": "62a0dc7c9c45a5a0a016386100f35537bfe1707d", "title": "Big data analytics on traditional HPC infrastructure using two-level storage", "text": "Data-intensive computing has become one of the major workloads on traditional high-performance computing (HPC) clusters. Currently, deploying data-intensive computing software framework on HPC clusters still faces performance and scalability issues. In this paper, we develop a new two-level storage system by integrating Tachyon, an in-memory file system with OrangeFS, a parallel file system. We model the I/O throughputs of four storage structures: HDFS, OrangeFS, Tachyon and two-level storage. We conduct computational experiments to characterize I/O throughput behavior of two-level storage and compare its performance to that of HDFS and OrangeFS, using TeraSort benchmark. Theoretical models and experimental tests both show that the two-level storage system can increase the aggregate I/O throughputs. This work lays a solid foundation for future work in designing and building HPC systems that can provide a better support on I/O intensive workloads with preserving existing computing resources."}
{"_id": "62051e0acbd5975240141d99fd00d0b5bbdf8de1", "title": "Sharp Finite-Time Iterated-Logarithm Martingale Concentration", "text": "We give concentration bounds for martingales that are uniform over finite times and extend classical Hoeffding and Bernstein inequalities. We also demonstrate our concentration bounds to be optimal with a matching anti-concentration inequality, proved using the same method. Together these constitute a finite-time version of the law of the iterated logarithm, and shed light on the relationship between it and the central limit theorem."}
{"_id": "1b3c86ad6c149941750d97bd72b6b0122c1d8b5e", "title": "Finite-time Analysis of the Multiarmed Bandit Problem", "text": "Reinforcement learning policies face the exploration versus exploitation dilemma, i.e. the search for a balance between exploring the environment to find profitable actions while taking the empirically best action as often as possible. A popular measure of a policy's success in addressing this dilemma is the regret, that is the loss due to the fact that the globally optimal policy is not followed all the times. One of the simplest examples of the exploration/exploitation dilemma is the multi-armed bandit problem. Lai and Robbins were the first ones to show that the regret for this problem has to grow at least logarithmically in the number of plays. Since then, policies which asymptotically achieve this regret have been devised by Lai and Robbins and many others. In this work we show that the optimal logarithmic regret is also achievable uniformly over time, with simple and efficient policies, and for all reward distributions with bounded support."}
{"_id": "35009a365220fa1369b03d389bb3dd58dcc2ba5c", "title": "Achieving Privacy in the Adversarial Multi-Armed Bandit", "text": "In this paper, we improve the previously best known regret bound to achieve -differential privacy in oblivious adversarial bandits from O(T / ) to O( \u221a T lnT/ ). This is achieved by combining a Laplace Mechanism with EXP3. We show that though EXP3 is already differentially private, it leaks a linear amount of information in T . However, we can improve this privacy by relying on its intrinsic exponential mechanism for selecting actions. This allows us to reach O( \u221a lnT )-DP, with a regret of O(T ) that holds against an adaptive adversary, an improvement from the best known of O(T ). This is done by using an algorithm that run EXP3 in a mini-batch loop. Finally, we run experiments that clearly demonstrate the validity of our theoretical analysis."}
{"_id": "712a97a28124cf463c221e8ead8db1c057ef3215", "title": "Further Optimal Regret Bounds for Thompson Sampling", "text": "Thompson Sampling is one of the oldest heuristics for multi-armed bandit problems. It is a randomized algorithm based on Bayesian ideas, and has recently generated significant interest after several studies demonstrated it to have comparable or better empirical performance compared to the state of the art methods. In this paper, we provide a novel regret analysis for Thompson Sampling that proves the first near-optimal problem-independent bound of O( \u221a NT lnT ) on the expected regret of this algorithm. Our novel martingalebased analysis techniques are conceptually simple, and easily extend to distributions other than the Beta distribution. For the version of Thompson Sampling that uses Gaussian priors, we prove a problemindependent bound of O( \u221a NT lnN) on the expected regret, and demonstrate the optimality of this bound by providing a matching lower bound. This lower bound of \u03a9( \u221a NT lnN) is the first lower bound on the performance of a natural version of Thompson Sampling that is away from the optimal bound (O( \u221a NT )) achievable for the multi-armed bandit problem by another algorithm [4]. Our near-optimal problem-independent bounds for Thompson Sampling solve a COLT 2012 open problem of Chapelle and Li. Additionally, our techniques simultaneously provide the optimal problem-dependent bound of (1 + ) \u2211 i lnT d(\u03bci,\u03bc1) +O( 2 ) on the expected regret. The optimal problem-dependent regret bound for this problem was first Preliminary work. Under review by AISTATS 2013. Do not distribute. proven recently by Kaufmann et al. [15]."}
{"_id": "1e21c514f89375098dec5b947aa5f6bcdd0377c5", "title": "Adaptation in natural and artificial systems", "text": "Name of founding work in the area. Adaptation is key to survival and evolution. Evolution implicitly optimizes organisims. AI wants to mimic biological optimization { Survival of the ttest { Exploration and exploitation { Niche nding { Robust across changing environments (Mammals v. Dinos) { Self-regulation,-repair and-reproduction 2 Artiicial Inteligence Some deenitions { \"Making computers do what they do in the movies\" { \"Making computers do what humans (currently) do best\" { \"Giving computers common sense; letting them make simple deci-sions\" (do as I want, not what I say) { \"Anything too new to be pidgeonholed\" Adaptation and modiication is root of intelligence Some (Non-GA) branches of AI: { Expert Systems (Rule based deduction)"}
{"_id": "166b5bdea1f4f850af5b045a953d6de74bc18d1e", "title": "Best of both worlds: Human-machine collaboration for object annotation", "text": "The long-standing goal of localizing every object in an image remains elusive. Manually annotating objects is quite expensive despite crowd engineering innovations. Current state-of-the-art automatic object detectors can accurately detect at most a few objects per image. This paper brings together the latest advancements in object detection and in crowd engineering into a principled framework for accurately and efficiently localizing objects in images. The input to the system is an image to annotate and a set of annotation constraints: desired precision, utility and/or human cost of the labeling. The output is a set of object annotations, informed by human feedback and computer vision. Our model seamlessly integrates multiple computer vision models with multiple sources of human input in a Markov Decision Process. We empirically validate the effectiveness of our human-in-the-loop labeling approach on the ILSVRC2014 object detection dataset."}
{"_id": "719201685a8be9653a7ee0e49a45cf2bc482a5ed", "title": "Doc 2 Cube : Automated Document Allocation to Text Cube via Dimension-Aware Joint Embedding", "text": "Data cube is a cornerstone architecture in multidimensional analysis of structured datasets. It is highly desirable to conduct multidimensional analysis on text corpora with cube structures for various text-intensive applications in healthcare, business intelligence, and social media analysis. However, one bottleneck to constructing text cube is to automatically put millions of documents into the right cells in such a text cube so that quality multidimensional analysis can be conducted afterwards\u2014it is too expensive to allocate documents manually or rely on massively labeled data. We propose Doc2Cube, a method that constructs a text cube from a given text corpus in an unsupervised way. Initially, only the label names (e.g., USA, China) of each dimension (e.g., location) are provided instead of any labeled data. Doc2Cube leverages label names as weak supervision signals and iteratively performs joint embedding of labels, terms, and documents to uncover their semantic similarities. To generate joint embeddings that are discriminative for cube construction, Doc2Cube learns dimension-tailored document representations by selectively focusing on terms that are highly label-indicative in each dimension. Furthermore, Doc2Cube alleviates label sparsity by propagating the information from label names to other terms and enriching the labeled term set. Our experiments on a real news corpus demonstrate that Doc2Cube outperforms existing methods significantly. Doc2Cube is a technology transferred to U.S. Army Research Lab and is a core component of the EventCube system that is being deployed for multidimensional news and social media data analysis."}
{"_id": "7556680e7600d66c73dc4b6075db326f78f4bb0a", "title": "User behaviour analysis in context-aware recommender system using hybrid filtering approach", "text": "The incorporation of context in recommendation systems is important for providing good recommendations as it improves the performance. Context awareness is an important concept in many disciplines. There are various methods that have already been proposed for context aware recommendation systems and can be classified into pre-filter, post-filter and contextual generalization. However, we found that there are some problems in using a single approach. So, in this paper we propose a new approach by combining pre-filter and post-filter methods based on the importance of contextual attribute. We believe that the contextual attribute should be dynamically selected for every user, as we found that it varies according to the user behaviour. In our approach, in case of a few rating scenario a simple method is used to provide a solution. The strategy adopted is advantageous over the single approach and is effective in improving the accuracy for the context aware recommender system."}
{"_id": "5040c699cc3a02d8dd2eecfdb20e5690432ad7a5", "title": "Extracting implicit features in online customer reviews for opinion mining", "text": "As the number of customer reviews grows very rapidly, it is essential to summarize useful opinions for buyers, sellers and producers. One key step of opinion mining is feature extraction. Most existing research focus on finding explicit features, only a few attempts have been made to extract implicit features. Nearly all existing research only concentrate on product features, few has paid attention to other features that relates to sellers, services and logistics. Therefore in this paper, we propose a novel co-occurrence association-based method, which aims to extract implicit features in customer reviews and provide more comprehensive and fine-grained mining results."}
{"_id": "441ec579014969245042e7f18a820bf85f77ba0a", "title": "A 65nm logic technology featuring 35nm gate lengths, enhanced channel strain, 8 Cu interconnect layers, low-k ILD and 0.57 /spl mu/m/sup 2/ SRAM cell", "text": "A 65nm generation logic technology with 1.2nm physical gate oxide, 35nm gate length, enhanced channel strain, NiSi, 8 layers of Cu interconnect, and low-k ILD for dense high performance logic is presented. Transistor gate length is scaled down to 35nm while not scaling the gate oxide as a means to improve performance and reduce power. Increased NMOS and PMOS drive currents are achieved by enhanced channel strain and junction engineering. 193nm lithography along with APSM mask technology is used on critical layers to provide aggressive design rules and a 6-T SRAM cell size of 0.57/spl mu/m/sup 2/. Process yield, performance and reliability are demonstrated on a 70 Mbit SRAM test vehicle with >0.5 billion transistors."}
{"_id": "a4cb1aa4fefd24f92e5a29ca10ec4eab82a95815", "title": "Multiple String LED Driver With Flexible and High-Performance PWM Dimming Control", "text": "The main objectives in driving multiple LED strings include achieving uniform current control and high-performance pulse width modulation (PWM) dimming for all strings. This study proposes a new multiple string LED driver to achieve not only current balance, but also flexible and wide range PWM dimming ratio for each string. A compact single-inductor multiple-output topology is adopted in the driver, accompanied by synchronous integrators and variable dimming frequency, to achieve both high-efficiency and high-performance dimming. By using the proposed variable dimming frequency scheme, high dimming frequency is applied to a string with high dimming ratio, which helps us to maintain the deviation of LED string current in an acceptable range, while low dimming frequency is applied to a string with low dimming ratio, which helps us to achieve rectangular LED current waveform. Meanwhile, the new time multiplexing control scheme automatically optimizes the LED strings\u2019 bus voltages, thus minimizes each string's power loss. A three-string LED driver prototype is constructed to validate the effectiveness of the proposed control scheme, where the three strings can have different dimming ratios between 4% and 100%."}
{"_id": "b80cd053611762f3a031a4df97c05d07521a9e31", "title": "Reinforcement Learning Design-Based Adaptive Tracking Control With Less Learning Parameters for Nonlinear Discrete-Time MIMO Systems", "text": "Based on the neural network (NN) approximator, an online reinforcement learning algorithm is proposed for a class of affine multiple input and multiple output (MIMO) nonlinear discrete-time systems with unknown functions and disturbances. In the design procedure, two networks are provided where one is an action network to generate an optimal control signal and the other is a critic network to approximate the cost function. An optimal control signal and adaptation laws can be generated based on two NNs. In the previous approaches, the weights of critic and action networks are updated based on the gradient descent rule and the estimations of optimal weight vectors are directly adjusted in the design. Consequently, compared with the existing results, the main contributions of this paper are: (1) only two parameters are needed to be adjusted, and thus the number of the adaptation laws is smaller than the previous results and (2) the updating parameters do not depend on the number of the subsystems for MIMO systems and the tuning rules are replaced by adjusting the norms on optimal weight vectors in both action and critic networks. It is proven that the tracking errors, the adaptation laws, and the control inputs are uniformly bounded using Lyapunov analysis method. The simulation examples are employed to illustrate the effectiveness of the proposed algorithm."}
{"_id": "5da05499d2c2ad2b2cca7bffcc8e1ed248f3b764", "title": "Personalized Web Search in Web Mining", "text": "Web is a collection of inter-related files on one or more Web servers. Web mining is the application of data mining techniques to extract knowledge from Web data. Web mining involves the process to apply data mining techniques to extract and uncover knowledge from web documents and services. Web mining has been explored to a vast level and different techniques have been defined for variety of applications like Web Search, Web Classification and Personalization etc. Personalized search is a way to improve the accuracy of web search. However personalized search requires collection and aggregation of user information, which often raise serious problems of breach of confidential data for many users. So, personalized web search in web mining is one of the techniques to improve privacy concerns of the user\u2019s data."}
{"_id": "7da7f18e708bcc2898725111356f509d0cb26548", "title": "WE-CARE: An Intelligent Mobile Telecardiology System to Enable mHealth Applications", "text": "Recently, cardiovascular disease (CVD) has become one of the leading death causes worldwide, and it contributes to 41% of all deaths each year in China. This disease incurs a cost of more than 400 billion US dollars in China on the healthcare expenditures and lost productivity during the past ten years. It has been shown that the CVD can be effectively prevented by an interdisciplinary approach that leverages the technology development in both IT and electrocardiogram (ECG) fields. In this paper, we present WE-CARE , an intelligent telecardiology system using mobile 7-lead ECG devices. Because of its improved mobility result from wearable and mobile ECG devices, the WE-CARE system has a wider variety of applications than existing resting ECG systems that reside in hospitals. Meanwhile, it meets the requirement of dynamic ECG systems for mobile users in terms of the detection accuracy and latency. We carried out clinical trials by deploying the WE-CARE systems at Peking University Hospital. The clinical results clearly showed that our solution achieves a high detection rate of over 95% against common types of anomalies in ECG, while it only incurs a small detection latency around one second, both of which meet the criteria of real-time medical diagnosis. As demonstrated by the clinical results, the WE-CARE system is a useful and efficient mHealth (mobile health) tool for the cardiovascular disease diagnosis and treatment in medical platforms."}
{"_id": "b544ccc2ce28851e6419d08039c3fb73f9200543", "title": "Dual broadband high gain SIW slot array antenna", "text": "A novel design of dual broadband high gain substrate integrated waveguide (SIW) cavity backed slot array antenna is described. The SIW cavity is design in dominant mode TE10. The proposed antenna is design by two pair of \u03bbg/2-resonators cavity backed slot on broad wall of single layer substrate integrated waveguide. To place two pair unequal slot length in stair nature on broad-wall of SIW cavity are responsible to broadening the bandwidth at two different frequency band and also due to arraying nature of slot enhanced the gain. The proposed antenna have Lower frequency bandwidth 4.4% (9.58 GHz\u201310.02 GHz) and higher frequency bandwidth is 9.49% (10.84 GHz\u201311.92 GHz) with 10 dB gain at lower band and 9.4 dB gain at higher band. The proposed antenna has high gain, directive radiation field with high FTBR."}
{"_id": "b0da4f699a3fe93b15e86c9ca24a975615a85ea6", "title": "A Microservice Architecture for the Intranet of Things and Energy in Smart Buildings: Research Paper", "text": "This paper presents challenges and issues in smart buildings and the Internet of Things (IoT), which we identified in years of research in real buildings. To tackle these challenges, a decentralized service-oriented architecture based on a message-oriented middleware has been implemented for the domain of smart buildings. It uses a network-transparent IoT message bus and provides the means for composing applications from auxiliary services, which facilitate device abstraction, protocol adaption, modularity, and maintainability. We demonstrate the flexibility of our architecture by describing how three distinct applications---privacy-enhancing energy data visualization, automated building energy management, and a generic user interface---can be integrated and operated simultaneously in our real smart building laboratory. We compare the advantages of our architecture to conventional ones and provide a best-practice solution for the Intranet of Things and Energy in smart buildings."}
{"_id": "22bd3a35b9550bc5b570a0beee5648eb9033be3b", "title": "Understanding the impact of video quality on user engagement", "text": "As the distribution of the video over the Internet becomes main- stream and its consumption moves from the computer to the TV screen, user expectation for high quality is constantly increasing. In this context, it is crucial for content providers to understand if and how video quality affects user engagement and how to best invest their resources to optimize video quality. This paper is a first step towards addressing these questions. We use a unique dataset that spans different content types, including short video on demand (VoD), long VoD, and live content from popular video con- tent providers. Using client-side instrumentation, we measure quality metrics such as the join time, buffering ratio, average bitrate, rendering quality, and rate of buffering events.\n We quantify user engagement both at a per-video (or view) level and a per-user (or viewer) level. In particular, we find that the percentage of time spent in buffering (buffering ratio) has the largest impact on the user engagement across all types of content. However, the magnitude of this impact depends on the content type, with live content being the most impacted. For example, a 1% increase in buffering ratio can reduce user engagement by more than three minutes for a 90-minute live video event. We also see that the average bitrate plays a significantly more important role in the case of live content than VoD content."}
{"_id": "3997484e144fbd06af32c221c630b8184d016411", "title": "A performance comparison of SQL and NoSQL databases", "text": "With the current emphasis on \u201cBig Data\u201d, NoSQL databases have surged in popularity. These databases are claimed to perform better than SQL databases. In this paper we aim to independently investigate the performance of some NoSQL and SQL databases in the light of key-value stores. We compare read, write, delete, and instantiate operations on key-value stores implemented by NoSQL and SQL databases. Besides, we also investigate an additional operation: iterating through all keys. An abstract key-value pair framework supporting these basic operations is designed and implemented using all the databases tested. Experimental results measure the timing of these operations and we summarize our findings of how the databases stack up against each other. Our results show that not all NoSQL databases perform better than SQL databases. Some are much worse. And for each database, the performance varies with each operation. Some are slow to instantiate, but fast to read, write, and delete. Others are fast to instantiate but slow on the other operations. And there is little correlation between performance and the data model each database uses."}
{"_id": "2b8b77808f3854c70e0cbade689f5f335b182fdd", "title": "The contribution of sleep to hippocampus-dependent memory consolidation", "text": "There is now compelling evidence that sleep promotes the long-term consolidation of declarative and procedural memories. Behavioral studies suggest that sleep preferentially consolidates explicit aspects of these memories, which during encoding are possibly associated with activation in prefrontal-hippocampal circuitry. Hippocampus-dependent declarative memory benefits particularly from slow-wave sleep (SWS), whereas rapid-eye-movement (REM) sleep seems to benefit procedural aspects of memory. Consolidation of hippocampus-dependent memories relies on a dialog between the neocortex and hippocampus. Crucial features of this dialog are the neuronal reactivation of new memories in the hippocampus during SWS, which stimulates the redistribution of memory representations to neocortical networks; and the neocortical slow (<1Hz) oscillation that synchronizes hippocampal-to-neocortical information transfer to activity in other brain structures."}
{"_id": "9da8545e367046b0d9ee8b4744849a46608cc22a", "title": "Active suction cup actuated by ElectroHydroDynamics phenomenon", "text": "Designing and manufacturing actuators using soft materials are among the most important subjects for future robotics. In nature, animals made by soft tissues such as the octopus have attracted the attention of the robotics community in the last years. Suckers (or suction cups) are one of the most important and peculiar organs of the octopus body, giving it the ability to apply high forces on the external environment. The integration of suction cups in soft robots can enhance their ability to manipulate objects and interact with the environment similarly to what the octopus does. However, artificial suction cups are currently actuated using fluid pressure so most of them require external compressors, which will greatly increase the size of the soft robot. In this work, we proposed the use of the ElectroHydroDynamics (EHD) principle to actuate a suction cup. EHD is a fluidic phenomenon coupled with electrochemical reaction that can induce pressure through the application of a high-intensity electric field. We succeeded in developing a suction cup driven by EHD keeping the whole structure extremely simple, fabricated by using a 3D printer and a cutting plotter. We can control the adhesion of the suction cup by controlling the direction of the fluidic flow in our EHD pump. Thanks to a symmetrical arrangement of the electrodes, composed by plates parallel to the direction of the channel, we can change the direction of the flow by changing the sign of the applied voltage. We obtained the pressure of 643 Pa in one unit of EHD pump and pressure of 1428 Pa in five units of EHD pump applying 6 kV. The suction cup actuator was able to hold and release a 2.86 g piece of paper. We propose the soft actuator driven by the EHD pump, and expand the possibility to miniaturize the size of soft robots."}
{"_id": "b7efd1fcce9f29ed34cff468b72e14b083049857", "title": "Location Privacy Violation via GPS-Agnostic Smart Phone Car Tracking", "text": "Smart phones nowadays are equipped with global positioning systems (GPS) chips to enable navigation and location-based services. A malicious app with the access to GPS data can easily track the person who carries the smart phone. People may disable the GPS module and turn it on only when necessary to protect their location privacy. However, in this paper, we demonstrate that an attacker is still able to track a person by using the embedded magnetometer sensor in victim's smart phone, even when the GPS module is disabled all the time. Moreover, this attack neither requests user permissions related to locations for installation, nor does its operation rely on wireless signals like WiFi positioning or suffer from signal propagation loss. Only the angles of a car's turning measured by the magnetometer sensor of a driver's smart phone are utilized. Without loss of generality, we focus on car tracking, since cars are popular transportation tools in developed countries, where smart phones are commonly used. Inspired by the intuition that a car may exhibit different turning angles at different road intersections, we find that an attacker can match car turning angles to a map to infer the actual path that the driver takes. We address technical challenges about car turn angle extraction, map database construction, and path matching algorithm design to make this attack practical and efficient. We also perform an evaluation using real-world driving paths to verify the relationship between the numbers of turns and the time cost of the matching algorithm. The results show that it is possible for attacker to precisely pinpoint the actual path when the driving path includes 11\u00a0turns or more. More simulations are performed to demonstrate the attack with lager selected local areas."}
{"_id": "b81ce299272fd81fe5e45d70d94ed0a5d05b3415", "title": "Let's (not) stick together: pairwise similarity biases cross-validation in activity recognition", "text": "The ability to generalise towards either new users or unforeseen behaviours is a key requirement for activity recognition systems in ubiquitous computing. Differences in recognition performance for the two application cases can be significant, and user-dependent performance is typically assumed to be an upper bound on performance. We demonstrate that this assumption does not hold for the widely used cross-validation evaluation scheme that is typically employed both during system bootstrapping and for reporting results. We describe how the characteristics of segmented time-series data render random cross-validation a poor fit, as adjacent segments are not statistically independent. We develop an alternative approach -- meta-segmented cross validation -- that explicitly circumvents this issue and evaluate it on two data-sets. Results indicate a significant drop in performance across a variety of feature extraction and classification methods if this bias is removed, and that prolonged, repetitive activities are particularly affected."}
{"_id": "8d84d7a6f01a8cae7540b4507692bc10a924c7e6", "title": "Fuzzy PID control of a flexible-joint robot arm with uncertainties from time-varying loads", "text": "This paper presents the design and experiment of a fuzzy proportional integral derivative (PID) controller for a flexible-joint robot arm with uncertainties from time-varying loads. Experimental results have shown remarkable tracking performance of this fuzzy PID controller, and have convincingly demonstrated that fuzzy logic control can be used for flexiblejoint robot arms with uncertainties and it is quite robust. In this paper, the fuzzy PID controller is first described briefly, using a simple and practical PD+I controller configuration. This configuration preserves the linear structure of the conventional PD+I controller, but has nonconstant gains: the proportional, integral, and derivative gains are nonlinear functions of their input signals, which have self-tuning (adaptive) capabilities in setpoint tracking performance. Moreover, these variable gains make the fuzzy PID controller robust with faster response time and less overshoot than its conventional counterpart. The proposed design was tested using a flexible-joint robot arm driven by a dc motor in a laboratory, where the arm was experienced with time-varying loads. Control performance by the conventional and fuzzy PID controllers for such a laboratory robotic system are both included in this paper for comparison."}
{"_id": "11943efec248fcac57ff6913424e230d0a02e977", "title": "Auxiliary Tasks in Multi-task Learning", "text": "Multi-task convolutional neural networks (CNNs) have shown impressive results for certain combinations of tasks, such as single-image depth estimation (SIDE) and semantic segmentation. This is achieved by pushing the network towards learning a robust representation that generalizes well to different atomic tasks. We extend this concept by adding auxiliary tasks, which are of minor relevance for the application, to the set of learned tasks. As a kind of additional regularization, they are expected to boost the performance of the ultimately desired main tasks. To study the proposed approach, we picked vision-based road scene understanding (RSU) as an exemplary application. Since multi-task learning requires specialized datasets, particularly when using extensive sets of tasks, we provide a multi-modal dataset for multi-task RSU, called synMT. More than 2.5 \u00b7 105 synthetic images, annotated with 21 different labels, were acquired from the video game Grand Theft Auto V (GTA V). Our proposed deep multi-task CNN architecture was trained on various combination of tasks using synMT. The experiments confirmed that auxiliary tasks can indeed boost network performance, both in terms of final results and training time."}
{"_id": "62405c029dd046e509ca577cc3d47fe6484381b3", "title": "On channel estimation and optimal training design for amplify and forward relay networks", "text": "In this paper, we provide a complete study on the training based channel estimation issues for relay networks that employ the amplify-and-forward (AF) transmission scheme. We first point out that separately estimating the channel from source to relay and relay to destination suffers from many drawbacks. Then we provide a new estimation scheme that directly estimates the overall channels from the source to the destination. The proposed channel estimation well serves the AF based space time coding (STC) that was recently developed. There exists many differences between the proposed channel estimation and that in the traditional single input single out (SISO) and multiple input single output (MISO) systems. For example, a relay must linearly precode its received training sequence by a sophisticatedly designed matrix in order to minimize the channel estimation error. Besides, each relay node is individually constrained by a different power requirement because of the non-cooperation among all relay nodes. We study both the linear least-square (LS) estimator and the minimum mean-square-error (MMSE) estimator. The corresponding optimal training sequences, as well as the optimal preceding matrices are derived from an efficient convex optimization process."}
{"_id": "de74f79cabf67bc605f1e19bc313573db7dc5c42", "title": "In vivo drug discovery in the zebrafish", "text": "The zebrafish has become a widely used model organism because of its fecundity, its morphological and physiological similarity to mammals, the existence of many genomic tools and the ease with which large, phenotype-based screens can be performed. Because of these attributes, the zebrafish might also provide opportunities to accelerate the process of drug discovery. By combining the scale and throughput of in vitro screens with the physiological complexity of animal studies, the zebrafish promises to contribute to several aspects of the drug development process, including target identification, disease modelling, lead discovery and toxicology."}
{"_id": "11291b24e7ef097593f7960d66a5863a97f996aa", "title": "The Cog Project : Building a Humanoid Robot", "text": "To explore issues of developmental structure, physical embodiment, integration of multiple sensory and motor systems, and social interaction, we have constructed an upper-torso humanoid robot called Cog. The robot has twenty-one degrees of freedom and a variety of sensory systems, including visual, auditory, vestibular, kinesthetic, and tactile senses. This chapter gives a background on the methodology that we have used in our investigations, highlights the research issues that have been raised during this project, and provides a summary of both the current state of the project and our long-term goals. We report on a variety of implemented visual-motor routines (smooth-pursuit tracking, saccades, binocular vergence, and vestibular-ocular and opto-kinetic reflexes), orientation behaviors, motor control techniques, and social behaviors (pointing to a visual target, recognizing joint attention through face and eye finding, imitation of head nods, and regulating interaction through expressive feedback). We further outline a number of areas for future research that will be necessary to build a complete embodied system."}
{"_id": "3f97f1351c6276a1d2912fcd79accaa8b3ce3afa", "title": "Dense 3D surface acquisition by structured light using off-the-shelf components", "text": "This paper describes the implementation of a LCD stripe projection system, based on a new processing scheme called line shift processing. One advantage of the method is, that the projection device can, but is not required to be stable over time nor does it have to be calibrated. The new method therefore allows us to use an off-the-shelf multimedia projector to acquire dense 3-D data (several 100K Triangles) of large objects in a matter of seconds. Since we are able to control our source of illumination, we can also acquire registered color information using standard monochrome cameras. Besides the description of the system some practical examples are given. The calibration of the system is also addressed."}
{"_id": "5046a621f1c344531ed4251828d1a2af8d8dc18f", "title": "Design of Android type Humanoid Robot Albert HUBO", "text": "To celebrate the 100th anniversary of the announcement of the special relativity theory of Albert Einstein, KAIST HUBO team and Hanson robotics team developed android type humanoid robot Albert HUBO which may be the world's first expressive human face on a walking biped robot. The Albert HUBO adopts the techniques of the HUBO design for Albert HUBO body and the techniques of Hanson robotics for Albert HUBO's head. Its height and weight are 137 cm and 57 Kg. Albert HUBO has 66 DOFs (31 for head motions and 35 for body motions) And head part uses `fubber' materials for smooth artificial skin and 28 servo motors for face movements and 3 servo motors for neck movements are used for generating a full range of facial expressions such as laugh, sadness, angry, surprised, etc. and body is modified with HUBO(KHR-3) introduced in 2004 to join with Albert HUBO's head and 35 DC motors are embedded for imitating various human-like body motions"}
{"_id": "035f444e5e801d1a375e2836b0c7c9ab90542428", "title": "A Binocular, Foveated Active Vision System", "text": "This report documents the design and implementation of a binocular, foveated active vision system as part of the Cog project at the MIT Artificial Intelligence Laboratory. The active vision system features a 3 degree of freedom mechanical platform that supports four color cameras, a motion control system, and a parallel network of digital signal processors for image processing. To demonstrate the capabilities of the system, we present results from four sample visual-motor tasks. The author receives support from a National Defense Science and Engineering Graduate Fellowship. Support for this project is provided in part by an ONR/ARPA Vision MURI Grant (No. N00014-95-1-0600)."}
{"_id": "1c6c63286be131c68d48507715936e94b934b15e", "title": "All robots are not created equal: the design and perception of humanoid robot heads", "text": "This paper presents design research conducted as part of a larger project on human-robot interaction. The primary goal of this study was to come to an initial understanding of what features and dimensions of a humanoid robot's face most dramatically contribute to people's perception of its humanness. To answer this question we analyzed 48 robots and conducted surveys to measure people's perception of each robot's humanness. Through our research we found that the presence of certain features, the dimensions of the head, and the total number of facial features heavily influence the perception of humanness in robot heads. This paper presents our findings and initial guidelines for the design of humanoid robot heads."}
{"_id": "c6df59e1d77d84f418666235979cbce6d400d3ca", "title": "Spectral Clustering Based on Local PCA", "text": "We propose a spectral clustering method based on local principal components analysis (PCA). After performing local PCA in selected neighborhoods, the algorithm builds a nearest neighbor graph weighted according to a discrepancy between the principal subspaces in the neighborhoods, and then applies spectral clustering. As opposed to standard spectral methods based solely on pairwise distances between points, our algorithm is able to resolve intersections. We establish theoretical guarantees for simpler variants within a prototypical mathematical framework for multi-manifold clustering, and evaluate our algorithm on various simulated data sets."}
{"_id": "74b72fdd873a3d1cd3242f25f1cafed0e963f3bc", "title": "Misinformation and Its Correction: Continued Influence and Successful Debiasing.", "text": "The widespread prevalence and persistence of misinformation in contemporary societies, such as the false belief that there is a link between childhood vaccinations and autism, is a matter of public concern. For example, the myths surrounding vaccinations, which prompted some parents to withhold immunization from their children, have led to a marked increase in vaccine-preventable disease, as well as unnecessary public expenditure on research and public-information campaigns aimed at rectifying the situation. We first examine the mechanisms by which such misinformation is disseminated in society, both inadvertently and purposely. Misinformation can originate from rumors but also from works of fiction, governments and politicians, and vested interests. Moreover, changes in the media landscape, including the arrival of the Internet, have fundamentally influenced the ways in which information is communicated and misinformation is spread. We next move to misinformation at the level of the individual, and review the cognitive factors that often render misinformation resistant to correction. We consider how people assess the truth of statements and what makes people believe certain things but not others. We look at people's memory for misinformation and answer the questions of why retractions of misinformation are so ineffective in memory updating and why efforts to retract misinformation can even backfire and, ironically, increase misbelief. Though ideology and personal worldviews can be major obstacles for debiasing, there nonetheless are a number of effective techniques for reducing the impact of misinformation, and we pay special attention to these factors that aid in debiasing. We conclude by providing specific recommendations for the debunking of misinformation. These recommendations pertain to the ways in which corrections should be designed, structured, and applied in order to maximize their impact. Grounded in cognitive psychological theory, these recommendations may help practitioners-including journalists, health professionals, educators, and science communicators-design effective misinformation retractions, educational tools, and public-information campaigns."}
{"_id": "88b6a71ad0f166769c3c51cbe802c972d524a78f", "title": "SPIDER: stealthy binary program instrumentation and debugging via hardware virtualization", "text": "The ability to trap the execution of a binary program at desired instructions is essential in many security scenarios such as malware analysis and attack provenance. However, an increasing percent of both malicious and legitimate programs are equipped with anti-debugging and anti-instrumentation techniques, which render existing debuggers and instrumentation tools inadequate. In this paper, we present Spider, a stealthy program instrumentation framework which enables transparent, efficient and flexible instruction-level trapping based on hardware virtualization. Spider uses invisible breakpoint, a novel primitive we develop that inherits the efficiency and flexibility of software breakpoint, and utilizes hardware virtualization to hide its side-effects from the guest. We have implemented a prototype of Spider on KVM. Our evaluation shows that Spider succeeds in remaining transparent against state-of-the-art anti-debugging and anti-instrumentation techniques; the overhead of invisible breakpoint is comparable with traditional hardware breakpoint. We also demonstrate Spider's usage in various security applications."}
{"_id": "29f5da58639c526f6b76d6d189d440ebeabe914f", "title": "Learning to Probabilistically Identify Authoritative Documents", "text": "We describe a model of document citation that learns to identify hubs and authorities in a set of linked documents such as pages retrieved from the world wide web or papers retrieved from a research paper archive Un like the popular HITS algorithm which re lies on dubious statistical assumptions our model provides probabilistic estimates that have clear semantics We also nd that in general the identi ed authoritative docu ments correspond better to human intuition"}
{"_id": "88e0f348db738aa4e8d422ff941c0ec911e23ce4", "title": "Total Anomalous Pulmonary Venous Connection: From Embryology to a Prenatal Ultrasound Diagnostic Update", "text": ".08.002 C and the Chinese Taipei Society o mons.org/licenses/by-nc-nd/4.0/ also as a part of complex heart diseases or genetic syndromes. Among cyanotic heart diseases, total anomalous pulmonary venous connection (TAPVC) is the only condition involving a venous system malformation [1] and is easily misdiagnosed. The key characteristic of this uncommon congenital heart disease (CHD) is that all four pulmonary veins (PVs) fail to form a direct connection to the left atrium (LA); instead, they drain into the right heart through different routes of systemic venous return. The incidence of this condition is approximately 7e9 per 100,000 live births [2,3], and it accounts for 0.7e1.5% of all CHDs [4]. Patients with CHD who were born from 2000 to 2006 in Taiwan also exhibited a similar prevalence of TAPVC (0.11/ 1000) [5]."}
{"_id": "1cb789ab8925bda02758bcb69eb0ed1547b5f4b9", "title": "Generating Focused Molecule Libraries for Drug Discovery with Recurrent\nNeural Networks", "text": "In de novo drug design, computational strategies are used to generate novel molecules with good affinity to the desired biological target. In this work, we show that recurrent neural networks can be trained as generative models for molecular structures, similar to statistical language models in natural language processing. We demonstrate that the properties of the generated molecules correlate very well with the properties of the molecules used to train the model. In order to enrich libraries with molecules active toward a given biological target, we propose to fine-tune the model with small sets of molecules, which are known to be active against that target. Against Staphylococcus aureus, the model reproduced 14% of 6051 hold-out test molecules that medicinal chemists designed, whereas against Plasmodium falciparum (Malaria), it reproduced 28% of 1240 test molecules. When coupled with a scoring function, our model can perform the complete de novo drug design cycle to generate large sets of novel molecules for drug discovery."}
{"_id": "f2ab0a2aa4177dd267c3c6cc37c7ad0e33c2cdbf", "title": "Impact of Ambulance Dispatch Policies on Performance of Emergency Medical Services", "text": "In ambulance location models, fleet size and ambulance location sites are two critical factors that emergency medical service (EMS) managers can control to ensure efficient delivery of the system. The ambulance relocation and dispatch policies that are studied in dynamic ambulance relocation models also significantly contribute to improving the response time of EMS. In this paper, we review dynamic ambulance relocation models from the perspective of dispatch policies. The connection between the reviewed ambulance dispatch policies and real-life policies is highlighted. Our ambulance model is based on the modified maximal covering location problem (MCLP). It is used to examine the commonly used dispatch policy and the proposed method of free-ambulance exploitation to further improve urgent call response time. Simulation results show that the proposed method can reduce the response time of urgent calls, especially during low-ambulance-supply period. We also compared the performance of EMS with and without reroute-enabled dispatch."}
{"_id": "d814a3e0fc85e2c988b631bb8f4da9597ea0e837", "title": "A trust framework model for situational contexts", "text": "Trust is a widely studied and acknowledged concept including a diversity of operationalizations. A trust framework model for situational contexts is presented to communicate a clearer and better understanding of the trust situation studied. The framework model includes the situational trust model, the trust transaction, and the trust equation. The situational trust model relates the trustor, the trustee, the trust object, and the trust environment. The application of the trust model is exemplified to define the trust situation of consumer trust in e-commerce."}
{"_id": "8304d0c4fab00017aee19730c3e13a53058c3008", "title": "Towards structured log analysis", "text": "Value of software log file analysis has been constantly increasing with the value of information to organizations. Log management tools still have a lot to deliver in order to empower their customers with the true strength of log information. In addition to the traditional uses such as testing software functional conformance, troubleshooting and performance benchmarking, log analysis has proven its capabilities in fields like intrusion detection and compliance evaluation. This is verified by the emphasis on log analysis in regulations like PCI DSS, FISMA, HIPAA and frameworks such as ISO 27001 and COBIT. In this paper we present an in depth analysis into current log analysis domains and common problems. A practical guide to the use of few popular log analysis tools is also included. Lack of proper support for structured analysis is identified as one major flaw in existing tools. After that, we describe a framework we developed for structured log analysis with the view of providing a solution to open problems in the domain. The core strength of the framework is its ability to handle many log file formats that are not well served by existing tools and providing sophisticated infrastructure for automating recurring log analysis procedures. We prove the usefulness of the framework with a simple experiment."}
{"_id": "6f58ca3b2e368be530341f2e72953af7fe74b2d8", "title": "An Adaptive Large Neighborhood Search Heuristic for the Pickup and Delivery Problem with Time Windows", "text": "Thepickup and delivery problem with time windows is the problem of serving a number of transportation requestsusing a limited amount of vehicles. Each request involves mo ving a number of goods from a pickup location to a delivery location. Our task is to constr uc routes that visit all locations such that corresponding pickups and deliveries are placed on the same rout and such that a pickup is performed before the corresponding delivery. The routes must also sat isfy time window and capacity constraints. This paper presents a heuristic for the problem based on an ex te sion of theLarge Neighborhood Search heuristic previously suggested for solving the vehicle rou ting problem with time windows. The proposed heuristic is composed of a number of competing sub-heuristi cs which are used with a frequency corresponding to their historic performance. This general framework i s denotedAdaptive Large Neighborhood Search . The heuristic is tested on more than 350 benchmark instances with up to 500 requests. It is able to improve the best known solutions from the literature for mor e than 50% of the problems. The computational experiments indicate that it is advantag eous to use several competing sub-heuristics instead of just one. We believe that the proposed heuristic i very robust and is able to adapt to various instance characteristics."}
{"_id": "3ddecaf1d00857aa679d1ce92446932431c2c348", "title": "Computing persistent homology", "text": "We study the homology of a filtered d-dimensional simplicial complex K as a single algebraic entity and establish a correspondence that provides a simple description over fields. Our analysis enables us to derive a natural algorithm for computing persistent homology over an arbitrary field in any dimension. Our study also implies the lack of a simple classification over non-fields. Instead, we give an algorithm for computing individual persistent homology groups over an arbitrary PIDs in any dimension."}
{"_id": "7546948965e1c7f55bd7ecd79b256be740197ee9", "title": "Li-Fi technology in traffic light", "text": "Wireless communication has become a basic utility in personal and business life such that it becomes a fundamental of our lives and this communication uses the radio spectrum for data transfer. There are issues in using the radio spectrum further, they are, capacity, efficiency, availability and security. The defects of Wi-Fi technology has given birth to the concept of Li-Fi (Light Fidelity) technology. Li-Fi can be defined as a light based Wi-Fi. This technology mainly serves the purpose of transmitting data using retrofitting of LED Bulbs that has high efficiency, durability and reliability.This technology emerged by 1990s in a lot of countries like Japan, German and Korea. The proposed paper aims in using the Wi-Fi technology and enabling communication of vehicles with the traffic light system in order to prioritize the vehicles and change the signals accordingly rather than by a process of pre-defined order or by manual order. Traffic lights already use LED lighting, so that this proposed system may seem easy to implement. Sending data through siren lights in an ambulance and fire extinguishers to a traffic light control system and switching the signal in order to allow faster and non-interrupted transport. Here is already a kind of system in play for GPS navigational systems, but the traffic lights can be used in updating drivers using basic information or streaming video directly from news broadcasts i.e, about the accidents or delays up ahead."}
{"_id": "fbad78df3f4526893a8beb751291d7e9e479ac47", "title": "A Job Scheduling Design for Visualization Services Using GPU Clusters", "text": "Modern large-scale heterogeneous computers incorporating GPUs offer impressive processing capabilities. It is desirable to fully utilize such systems for serving multiple users concurrently to visualize large data at interactive rates. However, as the disparity between data transfer speed and compute speed continues to increase in heterogeneous systems, data locality becomes crucial for performance. We present a new job scheduling design to support multi-user exploration of large data in a heterogeneous computing environment, achieving near optimal data locality and minimizing I/O overhead. The targeted application is a parallel visualization system which allows multiple users to render large volumetric data sets in both interactive mode and batch mode. We present a cost model to assess the performance of parallel volume rendering and quantify the efficiency of job scheduling. We have tested our job scheduling scheme on two heterogeneous systems with different configurations. The largest test volume data used in our study has over two billion grid points. The timing results demonstrate that our design effectively improves data locality for complex multi-user job scheduling problems, leading to better overall performance of the service."}
{"_id": "4360e434f56032b08ee19561c21aa22250e01ccd", "title": "[Sensitization to acrylates caused by artificial acrylic nails: review of 15 cases].", "text": "BACKGROUND\nAllergic contact dermatitis due to acrylates present in the workplace is a disease frequently reported among dentists, printers, and fiberglass workers. Recently, the number of cases of contact allergic dermatitis among beauticians specialized in sculpting artificial nails has increased.\n\n\nOBJECTIVE\nOur objective was to study the clinical characteristics and allergens implicated in allergic contact dermatitis due to acrylates in beauticians and users of sculpted nails.\n\n\nMATERIAL AND METHODS\nThis was an observational, retrospective study of patients diagnosed with allergic contact dermatitis due to acrylates used in sculpting artificial nails over the last 26 years in the Hospital General Universitario, Valencia, Spain.\n\n\nRESULTS\nIn total, 15 patients were diagnosed: 14 beauticians and 1 client. Most cases were diagnosed in the past 2 years. All were women, their mean age was 32.2 years, and 26.7 % had a personal or family history of atopy. The sensitization time varied between 1 month and 15 years. The most frequently affected areas were the fleshy parts of the fingers and hands. Three patients - 2 beauticians and 1 client - presented allergic asthma due to acrylates. All patients underwent patch testing with a standard battery of allergens and a battery of acrylates. The most frequent allergens were ethylene glycol dimethacrylate (13/15, 86.7 %), hydroxyethyl methacrylate (13/15, 86.7 %), triethylene glycol dimethacrylate (7/15, 46.7 %), 2-hydroxypropyl methacrylate (5/15, 33.3 %), and methyl methacrylate (5/15, 33.3 %).\n\n\nCONCLUSIONS\nAcrylate monomers used for sculpting artificial nails are important sensitizers for contact and occupational dermatitis. The most important consideration is primary and secondary prevention."}
{"_id": "04f7ab061b2d77bf3bd139d38a0c28ab82b57c7f", "title": "An Expectation Propagation Perspective on Approximate Message Passing", "text": "An alternative derivation for the well-known approximate message passing (AMP) algorithm proposed by Donoho is presented in this letter. Compared with the original derivation, which exploits central limit theorem and Taylor expansion to simplify belief propagation (BP), our derivation resorts to expectation propagation (EP) and the neglect of high-order terms in large system limit. This alternative derivation leads to a different yet provably equivalent form of message passing, which explicitly establishes the intrinsic connection between AMP and EP, thereby offering some new insights in the understanding and improvement of AMP."}
{"_id": "209f1561bbfc6c2cb124dd715aef1e391b0e7cce", "title": "A Robust Minimax Approach to Classification", "text": "When constructing a classifier, the probability of correct classification of future data points should be maximized. We consider a binary classification problem where the mean and covariance matrix of each class are assumed to be known. No further assumptions are made with respect to the class-conditional distributions. Misclassification probabilities are then controlled in a worst-case setting: that is, under all possible choices of class-conditional densities with given mean and covariance matrix, we minimize the worst-case (maximum) probability of misclassification of future data points. For a linear decision boundary, this desideratum is translated in a very direct way into a (convex) second order cone optimization problem, with complexity similar to a support vector machine problem. The minimax problem can be interpreted geometrically as minimizing the maximum of the Mahalanobis distances to the two classes. We address the issue of robustness with respect to estimation errors (in the means and covariances of the classes) via a simple modification of the input data. We also show how to exploit Mercer kernels in this setting to obtain nonlinear decision boundaries, yielding a classifier which proves to be competitive with current methods, including support vector machines. An important feature of this method is that a worst-case bound on the probability of misclassification of future data is always obtained explicitly."}
{"_id": "44f5764158b2a65f60bfb7b9d9f438368fa5bebb", "title": "A dual-networks architecture of top-down control", "text": "Complex systems ensure resilience through multiple controllers acting at rapid and slower timescales. The need for efficient information flow through complex systems encourages small-world network structures. On the basis of these principles, a group of regions associated with top-down control was examined. Functional magnetic resonance imaging showed that each region had a specific combination of control signals; resting-state functional connectivity grouped the regions into distinct 'fronto-parietal' and 'cingulo-opercular' components. The fronto-parietal component seems to initiate and adjust control; the cingulo-opercular component provides stable 'set-maintenance' over entire task epochs. Graph analysis showed dense local connections within components and weaker 'long-range' connections between components, suggesting a small-world architecture. The control systems of the brain seem to embody the principles of complex systems, encouraging resilient performance."}
{"_id": "a909e1894433aae16c2123e7ad2cdaaae1ca893c", "title": "Design , construction and verification of a self-balancing vehicle", "text": "The Segway Personal Transporter is a small footprint electrical vehicle designed by Dean Kamen to replace the car as a more environmentally friendly transportation method in metropolitan areas. The dynamics of the vehicle is similar to the classical control problem of an inverted pendulum, which means that it is unstable and prone to tip over. This is prevented by electronics sensing the pitch angle and its time derivative, controlling the motors to keep the vehicle balancing (1). This kind of vehicle is interesting since it contains a lot of technology relevant to an environmentally friendly and energy efficient transportation industry. This thesis describes the development of a similar vehicle from scratch, incorporating every phase from literature study to planning, design, vehicle construction and verification. The main objective was to build a vehicle capable of transporting a person weighing up to 100 kg for 30 minutes or a distance of 10 km, whichever comes first. The rider controls are supposed to be natural movements; leaning forwards or backwards in combination with tilting the handlebar sideways should be the only rider input required to ride the vehicle. The vehicle was built using a model-based control design and a top-down construction approach. The controller is a linear quadratic controller implemented in a 100 Hz control loop, designed to provide as fast response to disturbances as possible without saturating the control signal under normal operating conditions. The need for adapting the control law to rider weight and height was investigated with a controller designed for a person 1,8 m tall weighing 80 kg. Simulations of persons having weights between 60-100 kg and heights between 1,6-1,9 m were performed, showing no need to adapt the controller. The controller could safely return the vehicle to upright positions even after angle disturbances of \u00b16 degrees, the highest angle deviation considered to occur during operation."}
{"_id": "4f279ef6fd214850fe42a8f34c54912187a8a014", "title": "The \"Reading the Mind in the Eyes\" Test revised version: a study with normal adults, and adults with Asperger syndrome or high-functioning autism.", "text": "In 1997 in this Journal we published the \"Reading the Mind in the Eyes\" Test, as a measure of adult \"mentalising\". Whilst that test succeeded in discriminating a group of adults with Asperger syndrome (AS) or high-functioning autism (HFA) from controls, it suffered from several psychometric problems. In this paper these limitations are rectified by revising the test. The Revised Eyes Test was administered to a group of adults with AS or HFA (N = 15) and again discriminated these from a large number of normal controls (N = 239) drawn from different samples. In both the clinical and control groups the Eyes Test was inversely correlated with the Autism Spectrum Quotient (the AQ), a measure of autistic traits in adults of normal intelligence. The Revised Eyes Test has improved power to detect subtle individual differences in social sensitivity."}
{"_id": "144c72023fac7d91faa3b82f73b32120c7c9b41b", "title": "An Experiment on DES Statistical Cryptanalysis", "text": "Linear cryptanalysis and differential cryptanalysis are the most important methods of attack against block ciphers. Their efficiency have been demonstrated against several ciphers, including the Data Encryption Standard. We prove that both of them can be considered, improved and joined in a more general statistical framework. We also show that the very same results as those obtained in the case of DES can be found without any linear analysis and we slightly improve them into an attack with theoretical complexity 242 \u00b79 . We can apply another statistical attack the xcryptanalysis on the same characteristics without a definite idea of what happens in the encryption process. It appears to be roughly as efficient as both differential and linear cryptanalysis. We propose a new heuristic method to find good characteristics. It has found an attack against DES absolutely equivalent to Matsui's one by following a distinct path."}
{"_id": "02485a373142312c354b79552b3d326913eaf86d", "title": "Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions", "text": "Active and semi-supervised learning are important techniques when labeled data are scarce. We combine the two under a Gaussian random field model. Labeled and unlabeled data are represented as vertices in a weighted graph, with edge weights encoding the similarity between instances. The semi-supervised learning problem is then formulated in terms of a Gaussian random field on this graph, the mean of which is characterized in terms of harmonic functions. Active learning is performed on top of the semisupervised learning scheme by greedily selecting queries from the unlabeled data to minimize the estimated expected classification error (risk); in the case of Gaussian fields the risk is efficiently computed using matrix methods. We present experimental results on synthetic data, handwritten digit recognition, and text classification tasks. The active learning scheme requires a much smaller number of queries to achieve high accuracy compared with random query selection. 1. Introduction Semi-supervised learning targets the common situation where labeled data are scarce but unlabeled data are abundant. Under suitable assumptions, it uses unlabeled data to help supervised learning tasks. Various semi-supervised learning methods have been proposed and show promising results; Seeger (2001) gives a survey. These methods typically assume that the labeled data set is given and fixed. In practice, it may make sense to utilize active learning in conjunction with semi-supervised learning. That is, we might allow the learning algorithm to pick a set of unlabeled instances to be labeled by a domain expert, which will then be used as (or to augment) the labeled data set. In other words, if we have to label a few instances for semisupervised learning, it may be attractive to let the learning algorithm tell us which instances to label, rather than selecting them randomly. We will limit the range of query selection to the unlabeled data set, a practice known as poolbased active learning or selective sampling. There has been a great deal of research in active learning. For example, Tong and Koller (2000) select queries to minimize the version space size for support vector machines; Cohn et al. (1996) minimize the variance component of the estimated generalization error; Freund et al. (1997) employ a committee of classifiers, and query a point whenever the committee members disagree. Most of the active learning methods do not take further advantage of the large amount of unlabeled data once the queries are selected. Exceptions include McCallum and Nigam (1998) who use EM with unlabeled data integrated into the active learning, and Muslea et al. (2002) who use a semi-supervised learning method during training. In addition to this body of work from the machine learning community, there is a large literature on the closely related topic of experimental design in statistics; Chaloner and Verdinelli (1995) give a survey of experimental design from a Bayesian perspective. Recently Zhu et al. (2003) introduced a semi-supervised learning framework which is based on Gaussian random fields and harmonic functions. In this paper we demonstrate how this framework allows a combination of active learning and semi-supervised learning. 
In brief, the framework allows one to efficiently estimate the expected generalization error after querying a point, which leads to a better query selection criterion than naively selecting the point with maximum label ambiguity. Then, once the queries are selected and added to the labeled data set, the classifier can be trained using both the labeled and remaining unlabeled data. We present results on synthetic data, text classificaProceedings of the ICML-2003 Workshop on The Continuum from Labeled to Unlabeled Data, Washington DC, 2003. tion and image classification tasks that indicate the combination of these techniques can result in highly effective learning schemes. 2. Gaussian random fields and harmonic energy minimizing functions We begin by briefly describing the semi-supervised learning framework of Zhu et al. (2003). There are labeled points , and unlabeled points ; usually . We will use , ! to denote the labeled and unlabeled set, and \"$#% '&( the total number of points. We assume the labels are binary: *),+.-0/1 243 . We assume a connected graph 56#6 87' 9: is given with nodes 7 corresponding to the \" data points, of which the nodes in are labeled with the corresponding \u2019s. The edges 9 are represented by an \"<;=\" weight matrix > which is given. For example if ?+A@CB , > can be the radial basis function (RBF): DFEHG #JI KML NPO 2 Q R B STVU E T O G T R W (1) so that nearby points in Euclidean space are assigned large edge weights. Other weightings are possible, of course, and may be more appropriate when is discrete or symbolic. For our purposes the matrix > fully specifies the data manifold structure. We note that a method for learning the scale parameter Q is proposed in (Zhu et al., 2003). The semi-supervised algorithm in this paper is based on a relaxation of the requirement that labels are binary, resulting in a simple and tractable family of algorithms. We allow continuous labels on unlabeled nodes MXZY[7]\\^@ . The labels on labeled data are still constrained to be 0 or 1: _` C#a 4)b c_ d+A-e/ 243 for _P#(24 f . We also denote the constraint by g ) U h i . Since we want unlabeled points that are nearby in the graph to have similar labels, we define the energy to be 9j c 1 C# 2 k S E l G D EmG _` O onp R (2) so that low energy corresponds to a slowly varying function over the graph. Define the diagonal matrix q6#sr*_`t*u[ 8r E whose entries r E #wv G DFEmG are the weighted degrees of the nodes in 5 . The combinatorial Laplacian is the \"x;y\" matrix z{#aq O > . We can rewrite the energy function in matrix form as 9| c 1 #( } z: . We consider the Gaussian random field ~ c 1 C# 2 \u007fP\u0080 I KML< OF\u0081 9j c 1 f where \u0081 is an \u201cinverse temperature\u201d parameter, and \u007f \u0080 is the partition function \u007f\u0082\u0080 #s\u0083 h \u0084 i U h i I KpL< OF\u0081 9j M \u0085r4 . The Gaussian random field differs from the \u201cstandard\u201d discrete Markov random field in that the field configurations are over a continuous state space. Moreover, for a Gaussian field the joint probability distribution over unlabeled nodes is Gaussian with covariance matrix \u0080 z|\u0086 0 . z\u0087 0 is the submatrix of z corresponding to unlabeled data. The minimum energy function \u0088\u0089# arg min h \u0084 i U h i 9| c 1 of the Gaussian field is harmonic; that is, it satisfies z\u008a\u0088A#\u008b/ on unlabeled data points ! , and is equal to ) on the labeled data points . 
The harmonic property means that the value at each unlabeled node is the average of neighboring nodes: \u0088b onp C# 2 r G S E\u008d\u008c\u008eG D\u0085EmG \u0088b _` for n\u0087#Z &a24 f &? which is consistent with our prior notion of smoothness with respect to the graph. Because of the maximum principle of harmonic functions (Doyle & Snell, 1984), \u0088 is unique. Furthermore, \u0088 satisfies /\u0089\u008f(\u0088b onp \u0090\u008f\u00912 for n\u0092+\u0093! when ! is connected and labeled nodes from both classes are present at the boundary (the usual case; otherwise \u0088 takes on the extremum 0 or 1). By definition \u0088 is the mode of the Gaussian random field, but since the joint distribution is Gaussian, \u0088 is also the mean of the field. The harmonic energy minimizing function \u0088 can be computed with matrix methods. We partition the Laplacian matrix z into blocks for labeled and unlabeled nodes, z\u0091#\u0095\u0094 z\u0087 m z\u0087 z e z 0 \u0097\u0096 and let \u0088,# \u0094 \u00880 \u0088 \u0096 where \u0088 \u0090#\u0098 ) , and \u0088 denotes the mean values on the unlabeled data points. The solution is given by \u0088 # O z \u0086 0 z e \u0088 (3) It is not hard to show that the Gaussian field, conditioned on labeled data, is a multivariate normal: * y\u0099a\u009a\u008b \u009b\u0088 \u008e fz \u0086 0 . To carry out classification with a Gaussian field, we note that the harmonic energy minimizing function \u0088 is the mean of the field. Therefore the Bayes classification rule is to label node _ as class 1 in case \u0088b c_ d\u009cJ/1 H\u009d , and to label node _ class 0 otherwise. The harmonic function \u0088 has many nice interpretations, of which the random walk view is particularly relevant here. Define the transition matrix \u009e\u009f# q \u0086 > . Consider the random walk on graph 5 by a particle. Starting from an unlabeled node _ , it moves to a node n with probability \u009e EHG after one step. The walk continues until the particle reaches a labeled node. Then \u0088 E is the probability that the particle, starting from node _ , reaches a labeled node with label 1. The labeled data are the absorbing boundary for the random walk. More on this semi-supervised learning framework can be found in (Zhu et al., 2003). 3. Active learning We propose to perform active learning with the Gaussian random field model by greedily selecting queries from the unlabeled data to minimize the risk of the harmonic energy minimization function. The risk is the estimated generalization error of the Bayes classifier, and can be efficiently computed with matrix methods. We define the true risk \u009b\u0088 of the Bayes classifier based on the harmonic function \u0088 to be \u009b\u0088 # S E U S h U l sgn \u009b\u0088 E # E ~ E g where sgn 8\u0088 E is the Bayes decision rule, where (with a slight abuse of notation) sgn \u009b\u0088 E \u0097# 2 if \u0088 E \u009c^/ \u009d and sgn \u009b\u0088 E #,/ otherwise. Here ~ E g is the unknown true label distribution at node _ , given the labeled data. Because of this 8\u0088 is not computable. In order to proceed, it is necessary to make assumptions. We begin by assuming that we can estimate the unknown distribution ~ c E g d with the mean of the Gaussian field model: ~ c E # 2pg d Z\u0088 E Intuitively, recalling \u0088 E is the probability of reaching 1 in a random walk on the graph, our assumption is that we can approximate the distribution using a biased coin at each node, whose probability of heads is \u0088 E . 
With this assumption we can compute the estimated risk 8\u0088 as \u0092 8\u0088 # S E U sgn \u009b\u0088 E # / 2 O \u0088 E & sgn 8\u0088 E #(2 \u0088 E # S E U \u009b\u0088 E 2 O \u0088 E (4) If we perform active learning and query an unlabeled node , we will receive an answer (0 or 1). Adding this point to the tr"}
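The mean of the Gaussian field described in this abstract has the well-known closed form f_u = -L_uu^{-1} L_ul f_l on the unlabeled nodes; a minimal sketch assuming a dense weight matrix W:

```python
import numpy as np

def harmonic_labels(W, labeled_idx, y_l):
    """Harmonic mean of the Gaussian field on the unlabeled nodes.

    W: symmetric nonnegative weight matrix; labeled_idx: indices of labeled
    nodes with 0/1 labels y_l. Returned values lie in [0, 1]; threshold at
    0.5 to classify.
    """
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                  # combinatorial Laplacian
    u = np.setdiff1d(np.arange(n), labeled_idx)     # unlabeled node indices
    f_u = np.linalg.solve(L[np.ix_(u, u)], -L[np.ix_(u, labeled_idx)] @ y_l)
    return u, f_u
```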
{"_id": "03ae968c3ecb5ba30a6df7b00518db6ec785d8c3", "title": "Real-Time Tracking via On-line Boosting", "text": "Very recently tracking was approached using classification techniques such as support vector machines. The object to be tracked is discriminated by a classifier from the background. In a similar spirit we propose a novel on-line AdaBoost feature selection algorithm for tracking. The distinct advantage of our method is its capability of on-line training. This allows to adapt the classifier while tracking the object. Therefore appearance changes of the object (e.g. out of plane rotations, illumination changes) are handled quite naturally. Moreover, depending on the background the algorithm selects the most discriminating features for tracking resulting in stable tracking results. By using fast computable features (e.g. Haar-like wavelets, orientation histograms, local binary patterns) the algorithm runs in real-time. We demonstrate the performance of the algorithm on several (publically available) video sequences."}
{"_id": "04b652ee9f2af3942863f405aaf36d76812e0f44", "title": "Incremental Learning for Robust Visual Tracking", "text": "Visual tracking, in essence, deals with non-stationary image streams that change over time. While most existing algorithms are able to track objects well in controlled environments, they usually fail in the presence of significant variation of the object\u2019s appearance or surrounding illumination. One reason for such failures is that many algorithms employ fixed appearance models of the target. Such models are trained using only appearance data available before tracking begins, which in practice limits the range of appearances that are modeled, and ignores the large volume of information (such as shape changes or specific lighting conditions) that becomes available during tracking. In this paper, we present a tracking method that incrementally learns a low-dimensional subspace representation, efficiently adapting online to changes in the appearance of the target. The model update, based on incremental algorithms for principal component analysis, includes two important features: a method for correctly updating the sample mean, and a forgetting factor to ensure less modeling power is expended fitting older observations. Both of these features contribute measurably to improving overall tracking performance. Numerous experiments demonstrate the effectiveness of the proposed tracking algorithm in indoor and outdoor environments where the target objects undergo large changes in pose, scale, and illumination."}
{"_id": "1183db5f409e8498d1a0f542703f908275a6dc34", "title": "Robust Visual Tracking and Vehicle Classification via Sparse Representation", "text": "In this paper, we propose a robust visual tracking method by casting tracking as a sparse approximation problem in a particle filter framework. In this framework, occlusion, noise, and other challenging issues are addressed seamlessly through a set of trivial templates. Specifically, to find the tracking target in a new frame, each target candidate is sparsely represented in the space spanned by target templates and trivial templates. The sparsity is achieved by solving an \u21131-regularized least-squares problem. Then, the candidate with the smallest projection error is taken as the tracking target. After that, tracking is continued using a Bayesian state inference framework. Two strategies are used to further improve the tracking performance. First, target templates are dynamically updated to capture appearance changes. Second, nonnegativity constraints are enforced to filter out clutter which negatively resembles tracking targets. We test the proposed approach on numerous sequences involving different types of challenges, including occlusion and variations in illumination, scale, and pose. The proposed approach demonstrates excellent performance in comparison with previously proposed trackers. We also extend the method for simultaneous tracking and recognition by introducing a static template set which stores target images from different classes. The recognition result at each frame is propagated to produce the final result for the whole video. The approach is validated on a vehicle tracking and classification task using outdoor infrared video sequences."}
{"_id": "1427fc2aace877b91e43aefd1fe0b2a19b01d78b", "title": "Semi-supervised On-Line Boosting for Robust Tracking", "text": "Recently, on-line adaptation of binary classifiers for tracking has been investigated. On-line learning allows for simple classifiers since only the current view of the object from its surrounding background needs to be discriminated. However, on-line adaptation faces one key problem: Each update of the tracker may introduce an error which, finally, can lead to tracking failure (drifting). The contribution of this paper is a novel on-line semi-supervised boosting method which significantly alleviates the drifting problem in tracking applications. This allows to limit the drifting problem while still staying adaptive to appearance changes. The main idea is to formulate the update process in a semi-supervised fashion as combined decision of a given prior and an on-line classifier. This comes without any parameter tuning. In the experiments, we demonstrate real-time tracking of our SemiBoost tracker on several challenging test sequences where our tracker outperforms other on-line tracking methods. Helmut Grabner1,2 Christian Leistner1 Horst Bischof1 1 Institute for Computer Graphics and Vision, Graz University of Technology, Austria 2 Computer Vision Laboratory, ETH-Zurich, Switzerland Motivation Robust Tracking Loop search Region actual object position create confidence map update classifier (tracker) evaluate classifier"}
{"_id": "a6526df1d9b18fd3542fad7fdd95e93a5edce909", "title": "Predicting program properties from 'big code'", "text": "We present a new approach for predicting program properties from massive codebases (aka \"Big Code\"). Our approach first learns a probabilistic model from existing data and then uses this model to predict properties of new, unseen programs.\n The key idea of our work is to transform the input program into a representation which allows us to phrase the problem of inferring program properties as structured prediction in machine learning. This formulation enables us to leverage powerful probabilistic graphical models such as conditional random fields (CRFs) in order to perform joint prediction of program properties.\n As an example of our approach, we built a scalable prediction engine called JSNice for solving two kinds of problems in the context of JavaScript: predicting (syntactic) names of identifiers and predicting (semantic) type annotations of variables. Experimentally, JSNice predicts correct names for 63% of name identifiers and its type annotation predictions are correct in 81% of the cases. In the first week since its release, JSNice was used by more than 30,000 developers and in only few months has become a popular tool in the JavaScript developer community.\n By formulating the problem of inferring program properties as structured prediction and showing how to perform both learning and inference in this context, our work opens up new possibilities for attacking a wide range of difficult problems in the context of \"Big Code\" including invariant generation, decompilation, synthesis and others."}
{"_id": "af1249367eb190db2dcf92b43e3b301fbc54d292", "title": "Kansei Engineering with Online Review Mining for Hotel Service Development", "text": "In the Internet era, consumers can comment on their hotel living experience directly through the Internet. Thus, consumers online comments might become a key factor for consumers choosing hotels when planning their tourism itinerary. The large amount of User-Generated Content (UGC) provided by the consumers has also provided chances for the hospitality industry to consumer behavior statistic inferences. This study applies Kansei engineering combined with text mining to develop guidelines for hotel services to help managers meet consumers Kansei needs. Through understanding consumers online comments by text mining, this study identifies the most important customer Kansei and hotel service characteristics, and then analyzes the relationships between them. The finds can provide suggestions for hotel service improvement and development."}
{"_id": "3db4436a64ccae91adfe809cb90073e93dd39ebc", "title": "Visualizing Time-Dependent Data Using Dynamic t-SNE", "text": "Many interesting processes can be represented as time-dependent datasets. We define a time-dependent dataset as a sequence of datasets captured at particular time steps. In such a sequence, each dataset is composed of observations (high-dimensional real vectors), and each observation has a corresponding observation across time steps. Dimensionality reduction provides a scalable alternative to create visualizations (projections) that enable insight into the structure of such datasets. However, applying dimensionality reduction independently for each dataset in a sequence may introduce unnecessary variability in the resulting sequence of projections, which makes tracking the evolution of the data significantly more challenging. We show that this issue affects t-SNE, a widely used dimensionality reduction technique. In this context, we propose dynamic t-SNE, an adaptation of t-SNE that introduces a controllable trade-off between temporal coherence and projection reliability. Our evaluation in two time-dependent datasets shows that dynamic t-SNE eliminates unnecessary temporal variability and encourages smooth changes between projections."}
{"_id": "d071d95380728675aa8cfb641a024d9cd0b00aa4", "title": "Automatic Chinese Factual Question Generation", "text": "Question generation is an emerging research area of artificial intelligence in education. Question authoring tools are important in educational technologies, e.g., intelligent tutoring systems, as well as in dialogue systems. Approaches to generate factual questions, i.e., questions that have concrete answers, mainly make use of the syntactical and semantic information in a declarative sentence, which is then transformed into questions. Recently, some research has been conducted to investigate Chinese factual question generation with some limited success. Reported performance is poor due to unavoidable errors (e.g., sentence parsing, name entity recognition, and rule-based question transformation errors) and the complexity of long Chinese sentences. This article introduces a novel Chinese question generation system based on three stages, sentence simplification, question generation and ranking, to address the challenge of automatically generating factual questions in Chinese. The proposed approach and system have been evaluated on sentences from the New Practical Chinese Reader corpus. Experimental results show that ranking improves more than 20 percentage of questions rated as acceptable by annotators, from 65 percent of all questions to 87 percent of the top ranked 25 percent questions."}
{"_id": "af77d121245bc7146c53815a7b02f2dd5baa4e6c", "title": "The art of scenarios and strategic planning : tools and pitfalls", "text": "The term strategy has been misused and even abused. Worse, the word scenario is often confused with strategy to the point that clarification is needed if we are to understand one another. As a prolongation of the work done by the Rand Corporation in the 1960s, strategic planning, management and prospective approaches have been developed to help organizations master change. Over the past 25 years, we have contributed by creating or further developing various methodologies and procedures such as the Mactor and MICMAC methods for use in scenario building. These tools are doubly powerful in that they stimulate the imagination, reduce collective biases, and promote appropriation. One of the main functions of the strategic futures exercise is to eliminate two errors that we usually describe as the \u201chammer\u2019s risk\u201d and the \u201cnail\u2019s dream.\u201d In other words, we forget what a hammer\u2019s function is when staring at a nail (the nail\u2019s dream) or we know how to use a hammer and imagine that every problem is like a nail (the hammer\u2019s risk). In our case, we strive to give simple tools that may be appropriated. However, these simple tools are inspired by intellectual rigor that enables one to ask the right questions. Of course, these tools do not come with a guarantee. The natural talent, common sense, and intuition of the futurist also count! \uf6d9 2000 Elsevier Science Inc. Introduction Anticipation is not widely practiced by decision makers because when things are going well, they can manage without it, and when things are going badly, it is too late to see beyond the ends of their noses. Fast action is already urgently required! Yet reaction is not an end in itself. Although desirable in the short term, it leads nowhere if not directed towards the firm\u2019s long-term objectives. As Seneca said, \u201cthere is no favourable wind for the man who knows not where he is going.\u201d Action becomes meaningless without a goal, and only anticipation points the way to action and gives it both meaning and direction. MICHEL GODET is Professor and holder of the Chaire de prospective industrielle du Conservatoire national des arts et m\u00e9tiers, Paris, France, Director of the Laboratory of Investigation in Prospective and Strategy (LIPS). See: lips@cnam.fr, http://www.cnam.fr/lips/, where there is an on-line working paper Scenarios and Strategies: A Toolbox for Scenario Planning. Address correspondence to Michel Godet, LIPS, CNAM, 2 rue Conte, 75141 Paris, Cedex 03, France. Tel.: 33-01-4027 2530; Fax: 33-01-40 27 27 43; e-mail: lips@cnam.fr. Technological Forecasting and Social Change 65, 3\u201322 (2000) \uf6d9 2000 Elsevier Science Inc. All rights reserved. 0040-1625/00/$\u2013see front matter 655 Avenue of the Americas, New York, NY 10010 PII S0040-1625(99)00120-1"}
{"_id": "f7ed5e54ab9558dddacdb290b709b9250563337b", "title": "A 90\u2013100-GHz 4 $\\times$ 4 SiGe BiCMOS Polarimetric Transmit/Receive Phased Array With Simultaneous Receive-Beams Capabilities", "text": "This paper presents a 4 \u00d7 4 transmit/receive (T/R) SiGe BiCMOS phased-array chip at 90-100 GHz with vertical and horizontal polarization capabilities, 3-bit gain control (9 dB), and 4-bit phase control. The 4 \u00d7 4 phased array fits into a 1.6\u00d71.5 mm2 grid, which is required at 94 GHz for wide scan-angle designs. The chip has simultaneous receive (Rx) beam capabilities (V and H) and this is accomplished using dual-nested 16:1 Wilkinson combiners/divider with high isolation. The phase shifter is based on a vector modulator with optimized design between circuit level and electromagnetic simulation and results in 1 dB and gain and phase error, respectively, at 85-110 GHz. The behavior of the vector modulator phase distortion versus input power level is investigated and measured, and design guidelines are given for proper operation in a transmit (Tx) chain. The V and H Rx paths result in a gain of 22 and 25 dB, respectively, a noise figure of 9-9.5 (max. gain), and 11 dB (min. gain) measured without the T/R switch, and an input P1 dB of -31 to -26 dBm over the gain control range. The measured output Psat is ~ -5 dBm per channel, limited by the T/R switch loss. Measurements show \u00b10.6- and \u00b10.75-dB variation between the 4 \u00d7 4 array elements in the Tx mode (Psat) and Rx mode, respectively, and 40-dB coupling between the different channels on the chip. The chip consumes 1100 mA from a 2-V supply in both the Tx and Rx modes. The design can be scaled to >10 000 elements using polyimide redistribution layers on top of the chip and the application areas are in W-band radars for landing systems."}
{"_id": "7dbd718164d77c255c946193a358d885e43efb17", "title": "Invited Paper: Enhanced Architectures, Design Methodologies and CAD Tools for Dynamic Reconfiguration of Xilinx FPGAs", "text": "The paper describes architectural enhancements to Xilinx FPGAs that provide better support for the creation of dynamically reconfigurable designs. These are augmented by a new design methodology that uses pre-routed IP cores for communication between static and dynamic modules and permits static designs to route through regions otherwise reserved for dynamic modules. A new CAD tool flow to automate the methodology is also presented. The new tools initially target the Virtex-II, Virtex-II Pro and Virtex-4 families and are derived from Xilinx's commercial CAD tools"}
{"_id": "41997bca2f50263c8ffe5aa6297ba20983d7b296", "title": "Multilabel Sound Event Classification with Neural Networks", "text": "TAMPERE UNIVERSITY OF TECHNOLOGY Master\u2018s Degree Programme in Signal Processing CAKIR, EMRE: Multilabel Sound Event Classification with Neural Networks Master of Science Thesis, 63 pages October 2014 Major: Signal Processing Examiner: Tuomas Virtanen, Heikki Huttunen"}
{"_id": "ae051893fea8a87cb3be53b5ec14f568a837783a", "title": "CG_Hadoop: computational geometry in MapReduce", "text": "Hadoop, employing the MapReduce programming paradigm, has been widely accepted as the standard framework for analyzing big data in distributed environments. Unfortunately, this rich framework was not truly exploited towards processing large-scale computational geometry operations. This paper introduces CG_Hadoop; a suite of scalable and efficient MapReduce algorithms for various fundamental computational geometry problems, namely, polygon union, skyline, convex hull, farthest pair, and closest pair, which present a set of key components for other geometric algorithms. For each computational geometry operation, CG_Hadoop has two versions, one for the Apache Hadoop system and one for the SpatialHadoop system; a Hadoop-based system that is more suited for spatial operations. These proposed algorithms form a nucleus of a comprehensive MapReduce library of computational geometry operations. Extensive experimental results on a cluster of 25 machines of datasets up to 128GB show that CG_Hadoop achieves up to 29x and 260x better performance than traditional algorithms when using Hadoop and SpatialHadoop systems, respectively."}
{"_id": "f475fbdf3667da6893e2c0fef62452519af9dfec", "title": "Visual Analytics of Patterns in High-Dimensional Data", "text": "Due to the technological progress over the last decades, today\u2019s scientific and commercial applications are capable of generating, storing, and processing, massive amounts of data sets. This influences the type of data generated, which in turn means that with each data entry di erent aspects are combined and stored into one common database. Often the describing attributes are numeric; we name data with more than a handful attributes (dimensions) high-dimensional. Having to make use of these types of data archives provides new challenges to analysis techniques. The work of this thesis centers around the question of finding interesting patterns (meaningful information) in high-dimensional data sets. This task is highly challenging because of the so called curse of dimensionality, expressing that when dimensionality increases the data becomes sparse. This phenomena disturbs standard analysis techniques. Automatic techniques have to deal with the data complexity not only increasing their runtime, but also vitiating their computation functions (like distance functions). Moreover, exploring these data sets visually is hindered by the high number of dimensions that have to be displayed on the two dimensional screen space. This thesis is motivated by the idea that searching for interesting patterns in this kind of data can be done through a mixed approach of automation, visualization, and interaction. The amount of patterns a visualization contains can be measured by so called quality metrics. These automated functions can then filter the high number of highdimensional visualizations and present to the user a pre-filtered good subset for further investigation. We propose quality metrics for scatterplots and parallel coordinates focusing on di erent user tasks like identifying clusters and correlations. We also evaluate these measures with regard to (1) their ability to identify clusters in a variety of real and synthetic datasets; (2) their correlation with human perception of clusters in scatterplots. A thorough discussion of results follows reflecting the impact on directions for future research. As quality metrics were developed for a large number of di erent high-dimensional visualization techniques, we present our reflections on how these methods are related to each other and how the approach can be developed further. For this purpose, we provide an overview of approaches that use quality metrics in high-dimensional data visualization and propose a systematization based on a comprehensive literature review. In high-dimensional data, patterns exist often only in a subset of the dimensions. Subspace clustering techniques aim at finding these subspaces where clusters exist and which might otherwise be hidden if a traditional clustering algorithm is applied. While subspace clustering approaches tackle the sparsity problem in high-dimensional data well, designing e ective visualization to help analyzing the clustering result is not trivial. In addition to the cluster membership information, the relevant sets of dimensions and the overlaps of memberships and dimensions need to also be considered. Although, a number of techniques (for example, scatterplots, heat maps, dendrograms, hierarchical parallel coordinates) exist for visualizing traditional clustering results, little research has been done for visualizing subspace clustering results. 
Moreover, while extensive research has been carried out with regard to designing subspace clustering algorithms, surprisingly little attention has been paid to developing effective visualization tools for analyzing the clustering result. Appropriate visualization techniques will not only help in monitoring the clustering process but, with special mining techniques, they could also enable the domain expert to guide and even to steer the subspace clustering process to reveal the patterns of interest. To this goal, we envision a concept that combines subspace clustering algorithms and interactive scalable visual exploration techniques. This work includes the task of comparative visualization and feedback-guided computation of alternative clusterings."}
{"_id": "79c60af1a0dcbc00788bb1b4b2bd3c22d82f8f08", "title": "3D Printed Reversible Shape Changing Components with Stimuli Responsive Materials.", "text": "The creation of reversibly-actuating components that alter their shapes in a controllable manner in response to environmental stimuli is a grand challenge in active materials, structures, and robotics. Here we demonstrate a new reversible shape-changing component design concept enabled by 3D printing two stimuli responsive polymers-shape memory polymers and hydrogels-in prescribed 3D architectures. This approach uses the swelling of a hydrogel as the driving force for the shape change, and the temperature-dependent modulus of a shape memory polymer to regulate the time of such shape change. Controlling the temperature and aqueous environment allows switching between two stable configurations - the structures are relatively stiff and can carry load in each - without any mechanical loading and unloading. Specific shape changing scenarios, e.g., based on bending, or twisting in prescribed directions, are enabled via the controlled interplay between the active materials and the 3D printed architectures. The physical phenomena are complex and nonintuitive, and so to help understand the interplay of geometric, material, and environmental stimuli parameters we develop 3D nonlinear finite element models. Finally, we create several 2D and 3D shape changing components that demonstrate the role of key parameters and illustrate the broad application potential of the proposed approach."}
{"_id": "b983ea5507cdcb527aabf93585dcb836fd2c5170", "title": "A comparative study of data mining techniques for credit scoring in banking", "text": "Credit is becoming one of the most important incomes of banking. Past studies indicate that the credit risk scoring model has been better for Logistic Regression and Neural Network. The purpose of this paper is to conduct a comparative study on the accuracy of classification models and reduce the credit risk. In this paper, we use data mining of enterprise software to construct four classification models, namely, decision tree, logistic regression, neural network and support vector machine, for credit scoring in banking. We conduct a systematic comparison and analysis on the accuracy of 17 classification models for credit scoring in banking. The contribution of this paper is that we use different classification methods to construct classification models and compare classification models accuracy, and the evidence demonstrates that the support vector machine models have higher accuracy rates and therefore outperform past classification methods in the context of credit scoring in banking."}
{"_id": "7cdebd86bdf9f8b3d8249e450714337c1a650975", "title": "A Stroke Order Verification Method for On-Line Handwritten Chinese Characters Based on Tempo-spatial Consistency Analysis", "text": "This paper proposes a method to recognize stroke orders of on-line handwritten Chinese characters based on analyzing both spatial and temporal information. A novel control-point-based analysis method is presented for spatial information analysis to match strokes with various shapes and styles. Its computation complexity is much lower than image correlation method and is suitable for applications on mobile devices. For temporal information analysis, Hidden Markov Model is adopted and a proposed rectification method is integrated to find the optimal pair-wise matching result of stroke sequences. Experimental results proved the effectiveness of the proposed method. The verification rate is 99.6% on the test set."}
{"_id": "f5663fcf95c86c4d5a43830ceedf61e7d688ea52", "title": "Position and Force Control Based on Mathematical Models of Pneumatic Artificial Muscles Reinforced by Straight Glass Fibers", "text": "This paper reports on the position and force control of pneumatic artificial muscles reinforced by straight glass fibers. This type of artificial muscle has a greater contraction ratio and power and a longer lifetime than conventional McKibben types. However, these muscles are highly non-linear; hence, it is difficult to use them in a mechanical system. Furthermore, this actuator has a high compliance characteristic. Though this characteristic is useful for human interactions, the position and force of this actuator cannot be easily controlled. In this paper, a mathematical model of this type of artificial muscle is proposed, and the relationship between design parameters and control specifications is realized. In addition, the position and force based on the mathematical model are determined and applied to artificial muscle linearization."}
{"_id": "40bc01f4d62958d0138bb367521a8e867a16b4e6", "title": "Efficient set joins on similarity predicates", "text": "In this paper we present an efficient, scalable and general algorithm for performing set joins on predicates involving various similarity measures like intersect size, Jaccard-coefficient, cosine similarity, and edit-distance. This expands the existing suite of algorithms for set joins on simpler predicates such as, set containment, equality and non-zero overlap. We start with a basic inverted index based probing method and add a sequence of optimizations that result in one to two orders of magnitude improvement in running time. The algorithm folds in a data partitioning strategy that can work efficiently with an index compressed to fit in any available amount of main memory. The optimizations used in our algorithm generalize to several weighted and unweighted measures of partial word overlap between sets."}
{"_id": "5a8307cdca9d7990305af2f3437c1d1301c078d2", "title": "Size dependency of nanocrystalline TiO 2 on its optical property and photocatalytic reactivity exemplified by 2-chlorophenol", "text": "Anatase TiO2 nanocrystallines (17\u201329 nm) were successfully synthesized by the metal\u2013organic chemical vapor deposition method (MOCVD). Moderate manipulation of system parameters of MOCVD can control the particle size. The electro-optical and photocatalytic properties of the synthesized TiO2 nanoparticles were studied along with several commercially available ultra-fine TiO2 particles (e.g., 3.8\u20135.7 nm). The band gap of the TiO2 crystallines was determined using the transformed diffuse reflectance technique according to the Kubelka\u2013Munk theory. Results showed that the band gap of TiO2 monotonically decreased from 3.239 to 3.173 eV when the particle size decreased from 29 to 17 nm and then increased from 3.173 to 3.289 eV as the particle size decreased from 17 to 3.8 nm. The results of band gap change as a function of particle size agreed well with what was predicted by the Brus\u2019 equation, i.e., the effective mass model (EMM). However, results of the photocatalytic oxidation of 2-chlorophenol (2-CP), showed that the smaller the particle size, the faster the degradation rate. This is attributed in part to the combined effect of band gap change relative to the spectrum of the light source and the specific surface area (or particle size) of the photocatalysts. The change of band gap due to particle size represents only a small optical absorption window with respect to the total spectrum of the light source, i.e., from 380 to 400 nm versus>280 nm. Consequently, the gain in optical property of the larger particles was severely compromised by their decrease in specific surface area. Our results clearly indicated the importance of specific surface area in controlling the photocatalytic reactivity of photocatalysts. Results also showed that the secondary particle size grew with time due mainly to particle aggregation. The photocatalytic rate constants decreased exponentially with increase in primary particle size. Primary particle size alone is able to predict the photocatalytic rate as it is closely related to the electro-optical properties of photocatalysts. # 2006 Published by Elsevier B.V."}
{"_id": "1b3323bc7f7d8f72b1bd1445643a1dbfa3f1a20b", "title": "Structural Insights into the Niemann-Pick C1 (NPC1)-Mediated Cholesterol Transfer and Ebola Infection", "text": "Niemann-Pick disease type C (NPC) is associated with mutations in NPC1 and NPC2, whose gene products are key players in the endosomal/lysosomal egress of low-density lipoprotein-derived cholesterol. NPC1 is also the intracellular receptor for Ebola virus (EBOV). Here, we present a 4.4\u00a0\u00c5 structure of full-length human NPC1 and a low-resolution reconstruction of NPC1 in complex with the cleaved glycoprotein (GPcl) of EBOV, both determined by single-particle electron cryomicroscopy. NPC1 contains 13 transmembrane segments (TMs) and three distinct lumenal domains A (also designated NTD), C, and I. TMs 2-13 exhibit a typical resistance-nodulation-cell division fold, among which TMs 3-7 constitute the sterol-sensing domain conserved in several proteins involved in cholesterol\u00a0metabolism and signaling. A trimeric EBOV-GPcl binds to one NPC1 monomer through the domain\u00a0C. Our structural and biochemical characterizations provide an important framework for mechanistic understanding of NPC1-mediated intracellular cholesterol trafficking and Ebola virus infection."}
{"_id": "61fa02c2393bc55f5177414570421c46d4b75bb2", "title": "Development of a Danish Language Version of the Manchester Foot Pain and Disability Index: Reproducibility and Construct Validity Testing", "text": "Introduction. The Manchester Foot Pain and Disability Index (MFPDI) is a 19-item questionnaire for the assessment of disability caused by foot pain. The aim was to develop a Danish language version of the MFPDI (MFPDI-DK) and evaluate its reproducibility and construct validity. Methods. A Danish version was created, following a forward-backward translation procedure. A sample of 84 adult patients with foot pain was recruited. Participants completed two copies of the MFPDI-DK within a 24- to 48-hour interval, along with the Medical Outcomes Study Short Form 36 (SF-36), and a pain Visual Analog Scale (VAS). Reproducibility was assessed using the intraclass correlation coefficient (ICC) and 95% limits of agreement (Bland-Altman plot). Construct validity was evaluated with Pearson's Rho, using a priori hypothesized correlations with SF-36 subscales and VASmean. Results. The MFPDI-DK showed very good reliability with an ICC of 0.92 (0.88-0.95). The 95% limits of agreement ranged from -6.03 to 6.03 points. Construct validity was supported by moderate to very strong correlations with the SF-36 physical subscales and VASmean. Conclusion. The MFPDI-DK appears to be a valid and reproducible instrument in evaluating foot-pain-related disability in Danish adult patients in cross-sectional samples. Further research is needed to test the responsiveness of the MFPDI-DK."}
{"_id": "f879a87d69a531646de069317cb9478a4ee5bee2", "title": "A retrospective study of SPECT/CT scans using SUV measurement of the normal pelvis with Tc-99m methylene diphosphonate.", "text": "OBJECTIVE\nTo perform quantitative measurement based on the standardized uptake value (SUV) of Tc-99m methylene diphosphonate (MDP) in the normal pelvis using a single-photon emission tomography (SPECT)/computed tomography (CT) scanner.\n\n\nMATERIAL AND METHODS\nThis retrospective study was performed on 31 patients with cancer undergoing bone SPECT/CT scans with 99mTc-MDP. SUVmax and SUVmean of the normal pelvis were calculated based on the body weight. SUVmax and SUVmean of the bilateral anterior superior iliac spine, posterior superior iliac spine, facies auricularis ossis ilii, ischial tuberosity, and sacrum were also calculated. Furthermore, the correlation of SUVmax and SUVmean of all parts of pelvis with weight, height, and CT was assessed.\n\n\nRESULTS\nThe data for 31 patients (20 women and 11 men; mean age 58.97\u00b19.12 years; age range 37-87 years) were collected. SUVmax and SUVmean changed from 1.65\u00b10.40 to 3.8\u00b11.0 and from 1.15\u00b10.25 to 2.07\u00b10.58, respectively. The coefficient of variation of SUVmax and SUVmean ranged from 0.22 to 0.31. SUVmax and SUVmean had no statistically significant difference between men and women. SUVmax and SUVmean also showed no significant correlation with weight and height. However, part of SUVmax and SUVmean showed a significant correlation with CT. In addition, SUVmax and SUVmean of the bilateral ischial tuberosity showed a significant correlation with CT values.\n\n\nCONCLUSIONS\nDetermination of the SUV value of the normal pelvis with 99m Tc-MDP SPECT/CT is feasible and highly reproducible. SUVs of the normal pelvis showed a relatively large variability. As a quantitative imaging biomarker, SUVs might require standardization with adequate reference data for the participant to minimize variability."}
{"_id": "8b640fcd9cc46aa17b6f8bd0cfeaeaff21234d16", "title": "Challenges in automotive software engineering", "text": "The amount of software in cars grows exponentially. Driving forces of this development are cheaper and more powerful hardware and the demand for innovations by new functions. The rapid increase of software and software based functionality brings various challenges (see [21], [23], [25], [26]) for the automotive industries, for their organization, key competencies, processes, methods, tools, models, product structures, division of work, logistics, maintenance, and long term strategies. From a software engineering perspective, the automotive industry is an ideal and fascinating application domain for advanced techniques. Although the automotive industry may adopt general results and solutions from the software engineering body of knowledge gained in other domains, the specific constraints and domain specific requirements in the automotive industry ask for individual solutions and bring various challenges for automotive software engineering. In cars we find literally all interesting problems and challenging issues of software and systems engineering."}
{"_id": "39915715b1153dff6e4345002f0a5b98f2633246", "title": "Proofs of Work for Blockchain Protocols", "text": "One of the most impactful applications of proofs of work (POW) currently is in the design of blockchain protocols such as Bitcoin. Yet, despite the wide recognition of POWs as the fundamental cryptographic tool in this context, there is no known cryptographic formulation that implies the security of the Bitcoin blockchain protocol. Indeed, all previous works formally arguing the security of the Bitcoin protocol relied on direct proofs in the random oracle model, thus circumventing the di culty of isolating the required properties of the core POW primitive. In this work we ll this gap by providing a formulation of the POW primitive that implies the security of the Bitcoin blockchain protocol in the standard model. Our primitive entails a number of properties that parallel an e cient non-interactive proof system: completeness and fast veri cation, security against malicious provers (termed hardness against tampering and chosen message attacks ) and security for honest provers (termed uniquely successful under chosen key and message attacks ). Interestingly, our formulation is incomparable with previous formulations of POWs that applied the primitive to contexts other than the blockchain. Our result paves the way for proving the security of blockchain protocols in the standard model assuming our primitive can be realized from computational assumptions."}
{"_id": "845a018dcccf5704842a2f4f221af34e73cd3274", "title": "Some Theoretical Considerations in Off-the-Shelf Text Analysis Software", "text": "This paper is concerned with theoretical considerations of commercial content analysis software, namely Linguistic Inquiry and Word Count (LIWC), developed by social psychologists at the University of Texas. LIWC is widely cited and forms the basis of many research papers from a range of disciplines. Here, LIWC is taken as an example of a context-independent, word-counting approach to text analysis, and the strengths and potential pitfalls of such a methodology are discussed. It is shown that text analysis software is constrained not only by its functions, but also by its underlying theoretical assumptions. The paper offers recommendations for good practice in software commercialisation and application, stressing the importance of transparency and acknowledgement of biases."}
{"_id": "7da34b637b8801df6d6b5c8b455a5deeb5faac05", "title": "Mutation Spectrum and Phenotypic Features in Noonan Syndrome with PTPN11 Mutations: Definition of Two Novel Mutations.", "text": "OBJECTIVES\nTo evaluate the spectrum of PTPN11 gene mutations in Noonan syndrome patients and to study the genotype-phenotype associations.\n\n\nMETHODS\nIn this study, twenty Noonan syndrome patients with PTPN11 mutations were included. The patients underwent a detailed clinical and physical evaluation. To identify inherited cases, parents of all mutation positive patients were analyzed.\n\n\nRESULTS\nThirteen different PTPN11 mutations, two of them being novel, were detected in the study group. These mutations included eleven missense mutations: p.G60A, p.D61N, p.Y62D, p.Y63C, p.E69Q, p.Q79R, p.Y279C,p.N308D, p.N308S, p.M504V, p.Q510R and two novel missense mutations: p.I56V and p.I282M. The frequency of cardiac abnormalities and short stature were found to be 80\u00a0% and 80\u00a0%, respectively. Mental retardation was not observed in patients having exon 8 mutations. No significant correlations were detected between other phenotypic features and genotypes.\n\n\nCONCLUSIONS\nBy identifying genotype-phenotype correlations, this study provides information on phenotypes observed in NS patients with different PTPN11 mutations."}
{"_id": "f264e8b33c0d49a692a6ce2c4bcb28588aeb7d97", "title": "Recurrent Neural Network Regularization", "text": "We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, does not work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropout to LSTMs, and show that it substantially reduces overfitting on a variety of tasks. These tasks include language modeling, speech recognition, and machine translation."}
{"_id": "fd7725988e6b6a44d14e41c36d718bf0033f5b3c", "title": "Recurrent Nets that Time and Count", "text": ""}
{"_id": "07c43a3ff15f2104022f2b1ca8ec4128a930b414", "title": "Modeling Temporal Dependencies in High-Dimensional Sequences: Application to Polyphonic Music Generation and Transcription", "text": "We investigate the problem of modeling symbolic sequences of polyphonic music in a completely general piano-roll representation. We introduce a probabilistic model based on distribution estimators conditioned on a recurrent neural network that is able to discover temporal dependencies in high-dimensional sequences. Our approach outperforms many traditional models of polyphonic music on a variety of realistic datasets. We show how our musical language model can serve as a symbolic prior to improve the accuracy of polyphonic transcription."}
{"_id": "0a27a75f47af3cf52bdcd34f5b82bc9af7249c12", "title": "Efficient Genome-Wide, Privacy-Preserving Similar Patient Query based on Private Edit Distance", "text": "Edit distance has been proven to be an important and frequently-used metric in many human genomic research, with Similar Patient Query (SPQ) being a particularly promising and attractive example. However, due to the widespread privacy concerns on revealing personal genomic data, the scope and scale of many novel use of genome edit distance are substantially limited. While the problem of private genomic edit distance has been studied by the research community for over a decade [6], the state-of-the-art solution [31] is far from even close to be applicable to real genome sequences. In this paper, we propose several private edit distance protocols that feature unprecedentedly high efficiency and precision. Our construction is a combination of a novel genomic edit distance ap- proximation algorithm and new construction of private set difference size protocols. With the private edit distance based secure SPQ primitive, we propose GENSETS, a genome-wide, privacy- preserving similar patient query system. It is able to support search- ing large-scale, distributed genome databases across the nation. We have implemented a prototype of GENSETS. The experimental results show that, with 100 Mbps network connection, it would take GENSETS less than 200 minutes to search through 1 million breast cancer patients (distributed nation-wide in 250 hospitals, each having 4000 patients), based on edit distances between their genomes of lengths about 75 million nucleotides each."}
{"_id": "5f61d6e62170fdd7fef66656898adcea0167a028", "title": "IBM Mega Traffic Simulator", "text": "IBM Mega Traffic Simulator (Megaffic) is an agent-based simulator of traffic flows with two unique features. First, Megaffic can build its model of simulation by directly estimating some of the parameters of the model from probe-car data. This capability is in contrast to existing agent-based simulators of traffic flows, where the values of their parameters are calibrated with iterative simulation. Second, Megaffic can run on massively parallel computers and simulate the microscopic traffic flows in the scale of an arbitrary city or even the whole Japan. This manuscript gives an overview of the design and capability of Megaffic. \u22175-6-52 Toyosu, Koto-ku, Tokyo 135-8511, Japan"}
{"_id": "cb5f9b1a436771b65bdf4e8fa3e82a4404e8d3c8", "title": "Multilinear Sparse Principal Component Analysis", "text": "In this brief, multilinear sparse principal component analysis (MSPCA) is proposed for feature extraction from the tensor data. MSPCA can be viewed as a further extension of the classical principal component analysis (PCA), sparse PCA (SPCA) and the recently proposed multilinear PCA (MPCA). The key operation of MSPCA is to rewrite the MPCA into multilinear regression forms and relax it for sparse regression. Differing from the recently proposed MPCA, MSPCA inherits the sparsity from the SPCA and iteratively learns a series of sparse projections that capture most of the variation of the tensor data. Each nonzero element in the sparse projections is selected from the most important variables/factors using the elastic net. Extensive experiments on Yale, Face Recognition Technology face databases, and COIL-20 object database encoded the object images as second-order tensors, and Weizmann action database as third-order tensors demonstrate that the proposed MSPCA algorithm has the potential to outperform the existing PCA-based subspace learning algorithms."}
{"_id": "0cb487e6e226bb56b24146d247c2576a1ff89c26", "title": "CorMet: A Computational, Corpus-Based Conventional Metaphor Extraction System", "text": "CorMet is a corpus-based system for discovering metaphorical mappings between concepts. It does this by finding systematic variations in domain-specific selectional preferences, which are inferred from large, dynamically mined Internet corpora. Metaphors transfer structure from a source domain to a target domain, making some concepts in the target domain metaphorically equivalent to concepts in the source domain. The verbs that select for a concept in the source domain tend to select for its metaphorical equivalent in the target domain. This regularity, detectable with a shallow linguistic analysis, is used to find the metaphorical interconcept mappings, which can then be used to infer the existence of higher-level conventional metaphors. Most other computational metaphor systems use small, hand-coded semantic knowledge bases and work on a few examples. Although Cor Met's only knowledge base is Word Net (Fellbaum 1998) it can find the mappings constituting many conventional metaphors and in some cases recognize sentences instantiating those mappings. CorMet is tested on its ability to find a subset of the Master Metaphor List (Lakoff, Espenson, and Schwartz 1991)."}
{"_id": "8467c65182dcbd3ea76aa31c2074930b5e3c1e05", "title": "Metagenes and molecular pattern discovery using matrix factorization.", "text": "We describe here the use of nonnegative matrix factorization (NMF), an algorithm based on decomposition by parts that can reduce the dimension of expression data from thousands of genes to a handful of metagenes. Coupled with a model selection mechanism, adapted to work for any stochastic clustering algorithm, NMF is an efficient method for identification of distinct molecular patterns and provides a powerful method for class discovery. We demonstrate the ability of NMF to recover meaningful biological information from cancer-related microarray data. NMF appears to have advantages over other methods such as hierarchical clustering or self-organizing maps. We found it less sensitive to a priori selection of genes or initial conditions and able to detect alternative or context-dependent patterns of gene expression in complex biological systems. This ability, similar to semantic polysemy in text, provides a general method for robust molecular pattern discovery."}
{"_id": "1b31dbc68504ca509bc5cbbfff3c81541631cf4b", "title": "Discriminative Training of Deep Fully Connected Continuous CRFs With Task-Specific Loss", "text": "Recent works on deep conditional random fields (CRFs) have set new records on many vision tasks involving structured predictions. Here, we propose a fully connected deep continuous CRF model with task-specific losses for both discrete and continuous labeling problems. We exemplify the usefulness of the proposed model on multi-class semantic labeling (discrete) and the robust depth estimation (continuous) problems. In our framework, we model both the unary and the pairwise potential functions as deep convolutional neural networks (CNNs), which are jointly learned in an end-to-end fashion. The proposed method possesses the main advantage of continuously valued CRFs, which is a closed-form solution for the maximum a posteriori (MAP) inference. To better take into account the quality of the predicted estimates during the cause of learning, instead of using the commonly employed maximum likelihood CRF parameter learning protocol, we propose task-specific loss functions for learning the CRF parameters. It enables direct optimization of the quality of the MAP estimates during the learning process. Specifically, we optimize the multi-class classification loss for the semantic labeling task and the Tukey\u2019s biweight loss for the robust depth estimation problem. Experimental results on the semantic labeling and robust depth estimation tasks demonstrate that the proposed method compare favorably against both baseline and state-of-the-art methods. In particular, we show that although the proposed deep CRF model is continuously valued, with the equipment of task-specific loss, it achieves impressive results even on discrete labeling tasks."}
{"_id": "544c477e668dcf9a341525168000b039e83219de", "title": "C V ] 2 1 N ov 2 01 7 Emotion Classification with Data Augmentation Using Generative Adversarial Networks", "text": "It is a difficult task to classify images with multiple class labels using only a small number of labeled examples, especially when the label (class) distribution is imbalanced. Emotion classification is such an example of imbalanced label distribution, because some classes of emotions like disgusted are relatively rare comparing to other labels like happy or sad. In this paper, we propose a data augmentation method using generative adversarial networks (GAN). It can complement and complete the data manifold and find better margins between neighboring classes. Specifically, we design a framework with a CNN model as the classifier and a cycle-consistent adversarial networks (CycleGAN) as the generator. In order to avoid gradient vanishing problem, we employ the least-squared loss as adversarial loss. We also propose several evaluation methods on three benchmark datasets to validate GAN\u2019s performance. Empirical results show that we can obtain 5%\u223c10% increase in the classification accuracy after employing the GAN-based data augmentation techniques."}
{"_id": "4122fe8ac8ca3fe5fd19e383ef8a749635da2865", "title": "AMOEBA: HIERARCHICAL CLUSTERING BASED ON SPATIAL PROXIMITY USING DELAUNATY DIAGRAM", "text": "Exploratory data analysis is increasingly more necessary as larger spatial data is managed in electro-magnetic media. We propose an exploratory method that reveals a robust clustering hierarchy. Our approach uses the Delaunay diagram to incorporate spatial proximity. It does not require any prior knowledge about the data set, nor does it require parameters from the user. Multi-level clusters are successfully discovered by this new method in only O(nlogn) time, where n is the size of the data set. The efficiency of our method allows us to construct and display a new type of tree graph that facilitates understanding of the complex hierarchy of clusters. We show that clustering methods that adopt a raster-like or vector-like representation of proximity are not appropriate for spatial clustering. We illustrate the robustness of our method with an experimental evaluation with synthetic data sets as well as real data sets."}
{"_id": "18f034b4afa9d3be07401889359a6cb902f35b18", "title": "Service-oriented middleware for the Future Internet: state of the art and research directions", "text": "Service-oriented computing is now acknowledged as a central paradigm for Internet computing, supported by tremendous research and technology development over the last 10 years. However, the evolution of the Internet, and in particular, the latest Future Internet vision, challenges the paradigm. Indeed, service-oriented computing has to face the ultra large scale and heterogeneity of the Future Internet, which are orders of magnitude higher than those of today\u2019s service-oriented systems. This article aims at contributing to this objective by identifying the key research directions to be followed in light of the latest state of the art. This article more specifically focuses on research challenges for service-oriented middleware design, therefore, investigating service description, discovery, access, and composition in the Future Internet of services."}
{"_id": "145d371dd05b9f3fb9fff364735857f9346d36a4", "title": "Analysis of the Co-purchase Network of Products to Predict Amazon Sales-Rank", "text": "Amazon sales-rank gives a relative estimate of a product item\u2019s popularity among other items in the same category. An early prediction of the Amazon sales-rank of a product would imply an early guess of its sales-popularity relative to the other products on Amazon, which is one of the largest e-commerce hub across the globe. Traditional methods suggest use of product review related features, e.g., volume of reviews, text content of the reviews etc. for the purpose of prediction. In contrast, we propose in this paper for the first time a network-assisted approach to construct suitable features for prediction. In particular, we build a co-purchase network treating the individual products as nodes, with edges in between if two products are bought with one another. The way a product is positioned in this network (e.g., its centrality, clustering coefficient etc.) turns out to be a strong indicator of its sales-rank. This network-assisted approach has two distinct advantages over the traditional baseline method based on review analysis \u2013 (i) it works even if the product has no reviews (relevant especially in the early stages of the product launch) and (ii) it is notably more discriminative in classifying a popular (i.e., low sales-rank) product from an unpopular (i.e., high sales-rank) one. Based on this observation, we build a supervised model to early classify a popular product from an unpopular one. We report our results on two different product categories (CDs and cell phones) and obtain remarkably better classification accuracy compared to the baseline scheme. When the top 100 (700) products based on sales-rank are labelled as popular and the bottom 100 (700) are labelled as unpopular, the classification accuracy of our method is 89.85% (82.1%) for CDs and 84.11% (84.8%) for cell phones compared to 46.37% (68.75%) and 83.17% (71.95%) respectively from the baseline method."}
{"_id": "005a08524e309ef170161fd7f072d10c6c692efe", "title": "IMPROVING WEB SERVICE CLUSTERING THROUGH POST FILTERING TO BOOTSTRAP THE SERVICE DISCOVERY", "text": "Web service clustering is one of a very efficient approach to discover Web services efficiently. Current approaches use similarity-distance measurement methods such as string-based, corpus-based, knowledge-based and hybrid methods. These approaches have problems that include discovering semantic characteristics, loss of semantic information, shortage of high-quality ontologies and encoding fine-grained information. Thus, the approaches couldn\u2019t identify the correct clusters for some services and placed them in wrong clusters. As a result of this, cluster performance is reduced. This paper proposes post-filtering approach to increase the performance of clusters by rearranging services incorrectly clustered. Our approach uses context aware similarity method that learns domain context by machine learning to produce models of context for terms retrieved from the Web in the filtering process to calculate the service similarity. We applied post filtering approach to hybrid term similarity based clustering approach that we proposed in our previous work. Experimental results show that our post-filtering approach works efficiently."}
{"_id": "e177ef304d4b89cde6e6779568de7a85dabd390f", "title": "A difference in hypothalamic structure between heterosexual and homosexual men.", "text": "The anterior hypothalamus of the brain participates in the regulation of male-typical sexual behavior. The volumes of four cell groups in this region [interstitial nuclei of the anterior hypothalamus (INAH) 1, 2, 3, and 4] were measured in postmortem tissue from three subject groups: women, men who were presumed to be heterosexual, and homosexual men. No differences were found between the groups in the volumes of INAH 1, 2, or 4. As has been reported previously, INAH 3 was more than twice as large in the heterosexual men as in the women. It was also, however, more than twice as large in the heterosexual men as in the homosexual men. This finding indicates that INAH is dimorphic with sexual orientation, at least in men, and suggests that sexual orientation has a biological substrate."}
{"_id": "49882a1077f20e13c96ceb7b412ba62a157cae2f", "title": "Paxos made live: an engineering perspective", "text": "We describe our experience in building a fault-tolerant data-base using the Paxos consensus algorithm. Despite the existing literature in the field, building such a database proved to be non-trivial. We describe selected algorithmic and engineering problems encountered, and the solutions we found for them. Our measurements indicate that we have built a competitive system."}
{"_id": "1ac33a52ed0abf50ce6e2a2a3876d09af98ff69d", "title": "Adaptive configuration of lora networks for dense IoT deployments", "text": "Large-scale Internet of Things (IoT) deployments demand long-range wireless communications, especially in urban and metropolitan areas. LoRa is one of the most promising technologies in this context due to its simplicity and flexibility. Indeed, deploying LoRa networks in dense IoT scenarios must achieve two main goals: efficient communications among a large number of devices and resilience against dynamic channel conditions due to demanding environmental settings (e.g., the presence of many buildings). This work investigates adaptive mechanisms to configure the communication parameters of LoRa networks in dense IoT scenarios. To this end, we develop FLoRa, an open-source framework for end-to-end LoRa simulations in OMNeT++. We then implement and evaluate the Adaptive Data Rate (ADR) mechanism built into LoRa to dynamically manage link parameters for scalable and efficient network operations. Extensive simulations show that ADR is effective in increasing the network delivery ratio under stable channel conditions, while keeping the energy consumption low. Our results also show that the performance of ADR is severely affected by a highly-varying wireless channel. We thereby propose an improved version of the original ADR mechanism to cope with variable channel conditions. Our proposed solution significantly increases both the reliability and the energy efficiency of communications over a noisy channel, almost irrespective of the network size. Finally, we show that the delivery ratio of very dense networks can be further improved by using a network-aware approach, wherein the link parameters are configured based on the global knowledge of the network."}
{"_id": "754cd9d2853ead8610ef6949cf3e6e6a48e69168", "title": "Generative Deep Neural Networks for Dialogue: A Short Review", "text": "Researchers have recently started investigating deep neural networks for dialogue applications. In particular, generative sequence-to-sequence (Seq2Seq) models have shown promising results for unstructured tasks, such as word-level dialogue response generation. The hope is that such models will be able to leverage massive amounts of data to learn meaningful natural language representations and response generation strategies, while requiring a minimum amount of domain knowledge and hand-crafting. An important challenge is to develop models that can effectively incorporate dialogue context and generate meaningful and diverse responses. In support of this goal, we review recently proposed models based on generative encoder-decoder neural network architectures, and show that these models have better ability to incorporate long-term dialogue history, to model uncertainty and ambiguity in dialogue, and to generate responses with high-level compositional structure."}
{"_id": "91e344dc2c9942e60e075b07c5fd2556be73a607", "title": "Neural Ordinary Differential Equations", "text": "We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a blackbox differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models."}
{"_id": "4399ecb444f72e7a9f0d5e85802508b0dd990624", "title": "Sentence Selection and Evaluation Metrics", "text": "Human-quality text summarization systems are di cult to design, and even more di cult to evaluate, in part because documents can di er along several dimensions, such as length, writing style and lexical usage. Nevertheless, certain cues can often help suggest the selection of sentences for inclusion in a summary. This paper presents our analysis of news-article summaries generated by sentence selection. Sentences are ranked for potential inclusion in the summary using a weighted combination of statistical and linguistic features. The statistical features were adapted from standard IR methods. The potential linguistic ones were derived from an analysis of news-wire summaries. To evaluate these features we use a normalized version of precision-recall curves, with a baseline of random sentence selection, as well as analyze the properties of such a baseline. We illustrate our discussions with empirical results showing the importance of corpus-dependent baseline summarization standards, compression ratios and carefully crafted long queries."}
{"_id": "c6c25c99f01ab66b4ceaacd9f8793f6ecdb26ee9", "title": "It's worth the hassle!: the added value of evaluating the usability of mobile systems in the field", "text": "The distinction between field and laboratory is classical in research methodology. In human-computer interaction, and in usability evaluation in particular, it has been a controversial topic for several years. The advent of mobile devices has revived this topic. Empirical studies that compare evaluations in the two settings are beginning to appear, but they provide very different results. This paper presents results from an experimental comparison of a field-based and a lab-based usability evaluation of a mobile system. The two evaluations were conducted in exactly the same way. The conclusion is that it is definitely worth the hassle to conduct usability evaluations in the field. In the field-based evaluation we identified significantly more usability problems and this setting revealed problems with interaction style and cognitive load that were not identified in the laboratory."}
{"_id": "eb0b155c66535b1508f84edd99fa3da1720d08a1", "title": "Generic Battery model covering self-discharge and internal resistance variation", "text": "Design and analysis of battery powered systems depend on effective battery model. Hardware-in-the-loop testing of Battery Management System (BMS) extensively uses battery models. Simulation of Hybrid Electric Vehicles highly rely on the battery models which provides actual characteristics. This paper proposes a performance enhanced battery model which can be included in electric circuit design and analysis. Combined electrical and thermal model replicates the real world performance of battery. Self-discharge is the characteristic of a battery which reduces the stored energy without external connection. Effect of self-discharge on State of Charge (SOC) of battery is considered in the paper. Internal resistance variation with SOC of a battery included in the model further enhances its exactness. Proposed model is validated for Sony US 18650 Li-ion battery in MATLAB Simulink."}
{"_id": "1394c3d94fd24bc1a3942cda412c4cf2da117c71", "title": "Handling outliers in non-blind image deconvolution", "text": "Non-blind deconvolution is a key component in image deblurring systems. Previous deconvolution methods assume a linear blur model where the blurred image is generated by a linear convolution of the latent image and the blur kernel. This assumption often does not hold in practice due to various types of outliers in the imaging process. Without proper outlier handling, previous methods may generate results with severe ringing artifacts even when the kernel is estimated accurately. In this paper we analyze a few common types of outliers that cause previous methods to fail, such as pixel saturation and non-Gaussian noise. We propose a novel blur model that explicitly takes these outliers into account, and build a robust non-blind deconvolution method upon it, which can effectively reduce the visual artifacts caused by outliers. The effectiveness of our method is demonstrated by experimental results on both synthetic and real-world examples."}
{"_id": "30e92a6856fb1b37b14a85e5c0247015b338de1f", "title": "Radiometric calibration from a single image", "text": "Photometric methods in computer vision require calibration of the camera's radiometric response, and previous works have addressed this problem using multiple registered images captured under different camera exposure settings. In many instances, such an image set is not available, so we propose a method that performs radiometric calibration from only a single image, based on measured RGB distributions at color edges. This technique automatically selects appropriate edge information for processing, and employs a Bayesian approach to compute the calibration. Extensive experimentation has shown that accurate calibration results can be obtained using only a single input image."}
{"_id": "55ca9e02a4352f90e52d746525caf1f76ea0e2d6", "title": "Robust Radiometric Calibration and Vignetting Correction", "text": "In many computer vision systems, it is assumed that the image brightness of a point directly reflects the scene radiance of the point. However, the assumption does not hold in most cases due to nonlinear camera response function, exposure changes, and vignetting. The effects of these factors are most visible in image mosaics and textures of 3D models where colors look inconsistent and notable boundaries exist. In this paper, we propose a full radiometric calibration algorithm that includes robust estimation of the radiometric response function, exposures, and vignetting. By decoupling the effect of vignetting from the response function estimation, we approach each process in a manner that is robust to noise and outliers. We verify our algorithm with both synthetic and real data, which shows significant improvement compared to existing methods. We apply our estimation results to radiometrically align images for seamless mosaics and 3D model textures. We also use our method to create high dynamic range (HDR) mosaics that are more representative of the scene than normal mosaics."}
{"_id": "887a7c782b81952a5f0b85c4e7c8bf936f11096f", "title": "Fast Image Deconvolution using Hyper-Laplacian Priors", "text": "This document is supplementary material to our NIPS 2009 pap er [1] of the same name. While we choose to solve the w sub-problem (Eqn. 5) in [1] using a LUT or analytically (for s ome specific values of\u03b1), a number of numerical alternatives exist. The simplest an d f stest approach is to use Newton-Raphson (NR) to find the roots of the polynomials in Eq . 10 and Eqn 11 of [1]. As shown in Table 1,4 iterations of NR take a similar time to our analytic solution . However, the numerical algorithm has poor stability, particularly when the polynomi als skim the x-axis. In practice, we found it gave spurious solutions around 1-2% of the time, an unacce ptable rate given the iterative nature of our overall algorithm. Applying NR directly to Eqn. 5 of [1] i s similarly unreliable. Although techniques with greater stability than NR exist, they are comput ationally more expensive. For example, we also compare to Matlab\u2019s roots command, based on solving an eigenvalue problem, but this was several orders of magnitude slower than our analytic app ro ch."}
{"_id": "9362981d0a05f3bc11eebcb91d1df124426774a8", "title": "Motion-aware noise filtering for deblurring of noisy and blurry images", "text": "Image noise can present a serious problem in motion deblurring. While most state-of-the-art motion deblurring algorithms can deal with small levels of noise, in many cases such as low-light imaging, the noise is large enough in the blurred image that it cannot be handled effectively by these algorithms. In this paper, we propose a technique for jointly denoising and deblurring such images that elevates the performance of existing motion deblurring algorithms. Our method takes advantage of estimated motion blur kernels to improve denoising, by constraining the denoised image to be consistent with the estimated camera motion (i.e., no high frequency noise features that do not match the motion blur). This improved denoising then leads to higher quality blur kernel estimation and deblurring performance. The two operations are iterated in this manner to obtain results superior to suppressing noise effects through regularization in deblurring or by applying denoising as a preprocess. This is demonstrated in experiments both quantitatively and qualitatively using various image examples."}
{"_id": "356dec13b7d33bb4f9606d6e7d491d0c1066afc4", "title": "NFLB dropout: Improve generalization ability by dropping out the best -A biologically inspired adaptive dropout method for unsupervised learning", "text": "Generalization ability is widely acknowledged as one of the most important criteria to evaluate the quality of unsupervised models. The objective of our research is to find a better dropout method to improve the generalization ability of convolutional deep belief network (CDBN), an unsupervised learning model for vision tasks. In this paper, the phenomenon of low feature diversity during the training process is investigated. The attention mechanism of human visual system is more focused on rare events and depresses well-known facts. Inspired by this mechanism, No Feature Left Behind Dropout (NFLB Dropout), an adaptive dropout method is firstly proposed to automatically adjust the dropout rate feature-wisely. In the proposed method, the algorithm drops well-trained features and keeps poorly-trained ones with a high probability during training iterations. In addition, we apply two approximations of the quality of features, which are inspired by theory of saliency and optimization. Compared with the model trained by standard dropout, experiment results show that our NFLB Dropout method improves not only the accuracy but the convergence speed as well."}
{"_id": "dae315b084b9164ac68da26aaa73de877f73f75c", "title": "A Unified Framework for Multimodal Domain Adaptation", "text": "Domain adaptation aims to train a model on labeled data from a source domain while minimizing test error on a target domain. Most of existing domain adaptation methods only focus on reducing domain shift of single-modal data. In this paper, we consider a new problem of multimodal domain adaptation and propose a unified framework to solve it. The proposed multimodal domain adaptation neural networks(MDANN) consist of three important modules. (1) A covariant multimodal attention is designed to learn a common feature representation for multiple modalities. (2) A fusion module adaptively fuses attended features of different modalities. (3) Hybrid domain constraints are proposed to comprehensively learn domain-invariant features by constraining single modal features, fused features, and attention scores. Through jointly attending and fusing under an adversarial objective, the most discriminative and domain-adaptive parts of the features are adaptively fused together. Extensive experimental results on two real-world cross-domain applications (emotion recognition and cross-media retrieval) demonstrate the effectiveness of the proposed method."}
{"_id": "e5a71745d7ccddd3ffd4370a6f35f4bbca14cfcc", "title": "Metamaterial electromagnetic wave absorbers.", "text": "The advent of negative index materials has spawned extensive research into metamaterials over the past decade. Metamaterials are attractive not only for their exotic electromagnetic properties, but also their promise for applications. A particular branch-the metamaterial perfect absorber (MPA)-has garnered interest due to the fact that it can achieve unity absorptivity of electromagnetic waves. Since its first experimental demonstration in 2008, the MPA has progressed significantly with designs shown across the electromagnetic spectrum, from microwave to optical. In this Progress Report we give an overview of the field and discuss a selection of examples and related applications. The ability of the MPA to exhibit extreme performance flexibility will be discussed and the theory underlying their operation and limitations will be established. Insight is given into what we can expect from this rapidly expanding field and future challenges will be addressed."}
{"_id": "6df8ff6dd0b040dfedccd25118bfa42d7f65f391", "title": "Agricultural sustainability: concepts, principles and evidence.", "text": "Concerns about sustainability in agricultural systems centre on the need to develop technologies and practices that do not have adverse effects on environmental goods and services, are accessible to and effective for farmers, and lead to improvements in food productivity. Despite great progress in agricultural productivity in the past half-century, with crop and livestock productivity strongly driven by increased use of fertilizers, irrigation water, agricultural machinery, pesticides and land, it would be over-optimistic to assume that these relationships will remain linear in the future. New approaches are needed that will integrate biological and ecological processes into food production, minimize the use of those non-renewable inputs that cause harm to the environment or to the health of farmers and consumers, make productive use of the knowledge and skills of farmers, so substituting human capital for costly external inputs, and make productive use of people's collective capacities to work together to solve common agricultural and natural resource problems, such as for pest, watershed, irrigation, forest and credit management. These principles help to build important capital assets for agricultural systems: natural; social; human; physical; and financial capital. Improving natural capital is a central aim, and dividends can come from making the best use of the genotypes of crops and animals and the ecological conditions under which they are grown or raised. Agricultural sustainability suggests a focus on both genotype improvements through the full range of modern biological approaches and improved understanding of the benefits of ecological and agronomic management, manipulation and redesign. The ecological management of agroecosystems that addresses energy flows, nutrient cycling, population-regulating mechanisms and system resilience can lead to the redesign of agriculture at a landscape scale. Sustainable agriculture outcomes can be positive for food productivity, reduced pesticide use and carbon balances. Significant challenges, however, remain to develop national and international policies to support the wider emergence of more sustainable forms of agricultural production across both industrialized and developing countries."}
{"_id": "cbcba7295ed3bd07076639ce9d500bf0a9e7c3e2", "title": "A hybrid CAD-based construction site layout planning system using genetic algorithms", "text": "The efficient layout planning of a construction site is a fundamental task to any project undertaking. In an attempt to enhance the general practice of layout planning of construction sites, the paper introduces a novel approach for producing the sought layouts. This approach integrates the highly sophisticated graphical capabilities of computer-aided design (CAD) platforms with the robust search and optimization capabilities of genetic algorithms (GAs). In this context, GAs are utilized from within the CAD environment to optimize the location of temporary facilities on site. The functional interaction between GAs and CAD and the details of the GA-based layout optimization procedure are presented. A fully automated computer system is further developed to demonstrate the practicality of the chosen approach. In order to evaluate the system\u2019s performance, a local construction project with a 24,000m site is used. The automated system produced highly satisfactory results and showed notable flexibility through its CAD-based input/output media. D 2003 Elsevier B.V. All rights reserved."}
{"_id": "92b871a4a1f83b0f01fab4429b12b0fad15bcdb2", "title": "A Bayesian LSTM Model to Evaluate the Effects of Air Pollution Control Regulations in China", "text": "Rapid socio-economic development and urbanization have resulted in serious deterioration in air-quality in many world cities, including Beijing, China. This preliminary study is the first attempt to examine the effectiveness of air pollution control regulations implemented in Beijing during 2013 \u2013 2017 through a data-driven regulatory intervention analysis. Our proposed machine-learning model utilizes proxy data including Aerosol Optical Depth (AOD) and meteorology; it can explain 80% of the PM2.5 variability. Our preliminary results show that air pollution control regulatory measures introduced in China and Beijing have reduced PM2.5 pollution in Beijing by 23% on average."}
{"_id": "7ddc973a3243d6d6559c9aa5da976670552e2784", "title": "Visual Analytics for Explainable Deep Learning", "text": "Recently, deep learning has been advancing the state of the art in artificial intelligence to a new level, and humans rely on artificial intelligence techniques more than ever. However, even with such unprecedented advancements, the lack of explanation regarding the decisions made by deep learning models and absence of control over their internal processes act as major drawbacks in critical decision-making processes, such as precision medicine and law enforcement. In response, efforts are being made to make deep learning interpretable and controllable by humans. In this paper, we review visual analytics, information visualization, and machine learning perspectives relevant to this aim, and discuss potential challenges and future research directions."}
{"_id": "3f323a5e8ea655df73c1387f1cda3cd63b7a67ad", "title": "Head Pose Estimation for Driver Assistance Systems: A Robust Algorithm and Experimental Evaluation", "text": "Recognizing driver awareness is an important prerequisite for the design of advanced automotive safety systems. Since visual attention is constrained to a driver's field of view, knowing where a driver is looking provides useful cues about his activity and awareness of the environment. This work presents an identity-and lighting-invariant system to estimate a driver's head pose. The system is fully autonomous and operates online in daytime and nighttime driving conditions, using a monocular video camera sensitive to visible and near-infrared light. We investigate the limitations of alternative systems when operated in a moving vehicle and compare our approach, which integrates Localized Gradient Orientation histograms with support vector machines for regression. We estimate the orientation of the driver's head in two degrees-of-freedom and evaluate the accuracy of our method in a vehicular testbed equipped with a cinematic motion capture system."}
{"_id": "47ff736b02f09fed0a9edb09079f1e1d243dc46d", "title": "Towards unobtrusive emotion recognition for affective social communication", "text": "Awareness of the emotion of those who communicate with others is a fundamental challenge in building affective intelligent systems. Emotion is a complex state of the mind influenced by external events, physiological changes, or relationships with others. Because emotions can represent a user's internal context or intention, researchers suggested various methods to measure the user's emotions from analysis of physiological signals, facial expressions, or voice. However, existing methods have practical limitations to be used with consumer devices, such as smartphones; they may cause inconvenience to users and require special equipment such as a skin conductance sensor. Our approach is to recognize emotions of the user by inconspicuously collecting and analyzing user-generated data from different types of sensors on the smartphone. To achieve this, we adopted a machine learning approach to gather, analyze and classify device usage patterns, and developed a social network service client for Android smartphones which unobtrusively find various behavioral patterns and the current context of users. Also, we conducted a pilot study to gather real-world data which imply various behaviors and situations of a participant in her/his everyday life. From these data, we extracted 10 features and applied them to build a Bayesian Network classifier for emotion recognition. Experimental results show that our system can classify user emotions into 7 classes such as happiness, surprise, anger, disgust, sadness, fear, and neutral with a surprisingly high accuracy. The proposed system applied to a smartphone demonstrated the feasibility of an unobtrusive emotion recognition approach and a user scenario for emotion-oriented social communication between users."}
{"_id": "d2ca570b8746a1ecfbd155da14c8291f85545db2", "title": "Jet lag: trends and coping strategies", "text": "The number of travellers undertaking long-distance flights has continued to increase. Such flights are associated with travel fatigue and jet lag, the symptoms of which are considered here, along with their similarities, differences, and causes. Difficulties with jet lag because of sleep loss and decreased performance are emphasised. Since jet lag is caused mainly by inappropriate timing of the body clock in the new time zone, the pertinent properties of the body clock are outlined, with a description of how the body clock can be adjusted. The methods, both pharmacological and behavioural, that have been used to alleviate the negative results of time-zone transitions, are reviewed. The results form the rationale for advice to travellers flying in different directions and crossing several time zones. Finally, there is an account of the main problems that remain unresolved."}
{"_id": "f28fec4af67f14ac42cdf54272ee0b0c3b8a2b62", "title": "MiningZinc: A declarative framework for constraint-based mining", "text": "We introduce MiningZinc, a declarative framework for constraint-based data mining. MiningZinc consists of two key components: a language component and an execution mechanism. First, the MiningZinc language allows for high-level and natural modeling of mining problems, so that MiningZinc models are similar to the mathematical definitions used in the literature. It is inspired by the Zinc family of languages and systems and supports user-defined constraints and functions. Secondly, the MiningZinc execution mechanism specifies how to compute solutions for the models. It is solver independent and supports both standard constraint solvers and specialized data mining systems. The high-level problem specification is first translated into a normalized constraint language (FlatZinc). Rewrite rules are then used to add redundant constraints or solve subproblems using specialized data mining algorithms or generic constraint programming solvers. Given a model, different execution strategies are automatically extracted that correspond to different sequences of algorithms to run. Optimized data mining algorithms, specialized processing routines and generic solvers can all be automatically combined. Thus, the MiningZinc language allows one to model constraint-based itemset mining problems in a solver independent way, and its execution mechanism can automatically chain different algorithms and solvers. This leads to a unique combination of declarative modeling with high-performance solving."}
{"_id": "b9ad4b977cd50ae58f9957667806a953b459a0c1", "title": "Neural measures of the causal role of observers\u2019 facial mimicry on visual working memory for facial expressions", "text": "Simulation models of facial expressions propose that sensorimotor regions may increase the clarity of facial expressions representations in extrastriate areas. We monitored the event-related potential marker of visual working memory (VWM) representations, namely the sustained posterior contralateral negativity (SPCN), also termed contralateral delay activity, while participants performed a change detection task including to-be-memorized faces with different intensities of anger. In one condition participants could freely use their facial mimicry during the encoding/VWM maintenance of the faces while in a different condition participants had their facial mimicry blocked by a gel. Notably, SPCN amplitude was reduced for faces in the blocked mimicry condition when compared to the free mimicry condition. This modulation interacted with the empathy levels of participants such that only participants with medium-high empathy scores showed such reduction of the SPCN amplitude when their mimicry was blocked. The SPCN amplitude was larger for full expressions when compared to neutral and subtle expressions, while subtle expressions elicited lower SPCN amplitudes than neutral faces. These findings provide evidence of a functional link between mimicry and VWM for faces and further shed light on how this memory system may receive feedbacks from sensorimotor regions during the processing of facial expressions."}
{"_id": "67189cd24091ea9f2322bd23699383b923fe13b0", "title": "A Practical Message Falsification Attack on WPA", "text": "In 2008, Beck and Tews have proposed a practical attack on WPA. Their attack (called the Beck-Tews attack) can recover plaintext from an encrypted short packet, and can falsify it. The execution time of the Beck-Tews attack is about 12-15 minutes. However, the attack has the limitation, namely, the targets are only WPA implementations those support IEEE802.11e QoS features. In this paper, we propose a practical message falsification attack on any WPA implementation. In order to ease targets of limitation of wireless LAN products, we apply the Beck-Tews attack to the man-in-the-middle attack. In the man-inthe-middle attack, the user\u2019s communication is intercepted by an attacker until the attack ends. It means that the users may detect our attack when the execution time of the attack is large. Therefore, we give methods for reducing the execution time of the attack. As a result, the execution time of our attack becomes about one minute in the best case."}
{"_id": "9d141a089003292389feb21b11512a317ccff4b9", "title": "The social and ethical issues of post-genomic human biobanks", "text": "Biobanking \u2014 the organized collection of biological samples and associated data \u2014 ranges in scope from small collections of samples in academic or hospital settings to large-scale national repositories. Biobanks raise many ethical concerns, to which authorities are responding by introducing specific regulations. Genomics research, which thrives on the sharing of samples and information, is affected by two prominent ethical questions: do ethical principles prevent or promote the sharing of stored biological resources? How does the advent of large-scale biobanking alter the way in which ethical issues are addressed?"}
{"_id": "bee853b6f88defb18510b1787e027abb560014e5", "title": "A Functional Dynamic Boltzmann Machine", "text": "Dynamic Boltzmann machines (DyBMs) are recently developed generative models of a time series. They are designed to learn a time series by efficient online learning algorithms, whilst taking long-term dependencies into account with help of eligibility traces, recursively updatable memory units storing descriptive statistics of all the past data. The current DyBMs assume a finitedimensional time series and cannot be applied to a functional time series, in which the dimension goes to infinity (e.g., spatiotemporal data on a continuous space). In this paper, we present a functional dynamic Boltzmann machine (F-DyBM) as a generative model of a functional time series. A technical challenge is to devise an online learning algorithm with which F-DyBM, consisting of functions and integrals, can learn a functional time series using only finite observations of it. We rise to the above challenge by combining a kernel-based function approximation method along with a statistical interpolation method and finally derive closed-form update rules. We design numerical experiments to empirically confirm the effectiveness of our solutions. The experimental results demonstrate consistent error reductions as compared to baseline methods, from which we conclude the effectiveness of F-DyBM for functional time series prediction."}
{"_id": "83971fc25483a414e4029cfb060d28c88b147a25", "title": "A Game of Things: Strategic Allocation of Security Resources for IoT", "text": "In many Internet of Thing (IoT) application domains security is a critical requirement, because malicious parties can undermine the effectiveness of IoT-based systems by compromising single components and/or communication channels. Thus, a security infrastructure is needed to ensure the proper functioning of such systems even under attack. However, it is also critical that security be at a reasonable resource and energy cost, as many IoT devices may not have sufficient resources to host expensive security tools. In this paper, we focus on the problem of efficiently and effectively securing IoT networks by carefully allocating security tools. We model our problem according to game theory, and provide a Pareto-optimal solution, in which the cost of the security infrastructure, its energy consumption, and the probability of a successful attack, are minimized. Our experimental evaluation shows that our technique improves the system robustness in terms of packet delivery rate for different network topologies."}
{"_id": "9cba0ecc56b7611ed5b3f92147097e3033dffdf7", "title": "The Effects of Violent Video Game Habits on Adolescent Aggressive Attitudes and Behaviors", "text": "Video games have become one of the favorite activities of children in America. A growing body of research links violent video game play to aggressive cognitions, attitudes, and behaviors. This study tested the predictions that exposure to violent video game content is (1) positively correlated with hostile attribution bias, (2) positively correlated with arguments with teachers and physical fights, and negatively correlated with school performance, and (3) positively correlated with hostility. 607 8and 9-grade students from four schools participated. Each prediction was supported. Youth who expose themselves to greater amounts of video game violence see the world as a more hostile place, are more hostile themselves, get into arguments with teachers more frequently, are more likely to be involved in physical fights, and perform more poorly in school. Video game violence exposure is a significant predictor of physical fights even when respondent sex, hostility level, and weekly amount of game play are statistically controlled. It is suggested that video game violence is a risk factor for aggressive behavior. The results also suggest that parental involvement in video game play may act as a protective factor for youth. Results are interpreted within and support the framework of the General Aggression Model. 1 Address correspondence to: Douglas A. Gentile, Ph.D., National Institute on Media and the Family, 606 24th Avenue South, Suite 606, Minneapolis, MN 55454. Phone: 612/672-5437; Fax: 612/672-4113; E-mail: dgentile@mediafamily.org Violent Video Games 2 The Popularity of Video Games Video games have become one of the favorite activities of children in America (Dewitt,1994). Sales have grown consistently with the entire electronic entertainment category taking in between $7 billion and $7.5 billion in 1999, surpassing theatrical box office revenues for the first time (\u201cCome in and Play,\u201d 2000). Worldwide video game sales are now at $20 billion (Cohen, 2000). Over 100 million Gameboys and 75 million PlayStations have been sold (Kent, 2000). The average American child between the ages of 2 and 17 plays video games for 7 hours a week (Gentile & Walsh, under review). A study by Buchman and Funk (1996) highlighted the differences between boys and girls, reporting that fourth through eighth grade boys played video games for 5 to 10 hours a week while girls played for 3 to 6 hours a week. Using industry polls, Provenzo (1991) studied the most popular Nintendo video games in America and found that 40 of the 47 had violence as their main theme. In another study (Buchman & Funk, 1996) in which video games were split into six categories, human and fantasy violence accounted for about 50% of children\u2019s favorite games, with sports violence contributing another 1620% for boys and 6-15% for girls. Research On Video Games and Aggression Many observant parents agree that the effects of violent video games are probably deleterious to children; however, they generally believe that their own children will be unaffected. This may just be bias on their part, or they may be correct. Research has shown that not all children are affected in the same way by violent video games (Anderson & Dill, 2000; Lynch, 1994; Lynch, 1999). 
While the literature connecting video game violence and aggression is growing, much of the research that has been done on video games to date has not taken into consideration the effect of pre-existing hostility or aggression. Several correlational studies (e.g., Anderson & Dill, 2000; Colwell & Payne, 2000; Dominick, 1984; Lin & Lepper, 1987; Fling, Smith, Rodriguez, Thornton, Atkins, & Nixon, 1992) have investigated the effects of video game habits and found a positive correlation between video game habits and an increase in aggressive behavior. However, few studies have differentiated between violent and non-violent video games. Fewer still have looked at differences in the subjects' preexisting hostility or aggression. A growing number of experimental studies (e.g., Cooper & Mackie, 1986; Silvern & Williamson, 1987; Schuttte, Malouff, Post-Gorden, & Rodasta, 1988; Irwin & Gross, 1995; Anderson & Dill, 2000) have shown support for the hypothesis that violent video games lead to an increase in laboratory aggression. A meta-analytic study (Anderson & Bushman, in press-a) found that, across 54 independent tests of the relation between video game violence and aggression, involving 4,262 participants, the average effect size was both positive and significant. The General Aggression Model The General Aggression Model (GAM) and its relation to violent video games has been described by Anderson and Dill (2000). The GAM seeks to explain aggressive behavior in children after playing violent video games. This model describes a \u201cmulti-stage process by which personological (e.g., aggressive personality) and situational (e.g., video game play and provocation) input variables lead to aggressive behavior. They do so by influencing several related internal states and the outcomes of automatic and controlled appraisal (or decision) processes\u201d (Anderson & Dill, 2000, p. 773). The GAM is relevant to the study of violent video games for several reasons. One reason is that it differentiates between short and long term effects of video game violence on the game-player. With regard to the short-term effects of violent video games, the GAM predicts that both kinds of Violent Video Games 3 input variables, person and situation, can influence the present internal state of the person. The GAM further describes the internal state of a person with cognitive, affective, and arousal variables. Summarizing the GAM\u2019s predictions for the effects of violent video games on children\u2019s behavior, Anderson and Dill drew the following conclusions: \u201cShort-term violent video game increases in aggression are expected by [the model] whenever exposure to violent media primes aggressive thoughts, increases hostile feeling, or increases arousal\u201d (Anderson & Dill, 2000, p. 774). The GAM describes the long term effects of violent video games as a result of the development, over-learning, and reinforcement of aggression-related knowledge structures. These knowledge structures include vigilance for enemies (i.e., hostile attribution bias), aggressive action against others, expectations that others will behave aggressively, positive attitudes towards the use of violence, and the belief that violent solutions are effective and appropriate. Repeated exposure to graphic scenes of violence is also postulated to be desensitizing. Furthermore, it is predicted that long term game-players become more aggressive in outlook, perceptual biases, attitudes, beliefs, and behavior than they were before the repeated exposure. 
Two studies were conducted to test the efficacy of the GAM in predicting aggression from violent video game play (Anderson & Dill, 2000). In the first study, it was found that real-life video game play was positively related to aggressive behavior and delinquency (long-term effects). The relationship was stronger for individuals who were characteristically aggressive. In addition, amount of video game play was negatively related to school performance. In the second study, laboratory exposure to a graphically violent video game increased aggressive thoughts and behavior (short-term effects), although there was no moderating effect of hostility (i.e., aggressive personality). Both of these studies were consistent with the main hypotheses regarding the GAM and video game violence. Lynch\u2019s research on the physiological effects of violent video games (Lynch, 1994; Lynch, 1999) lends further credibility to the GAM. Lynch's results are consistent with a recent metaanalysis of seven independent tests showing that blood pressure and heart rate increase with exposure to violent video games (Anderson & Bushman, in press-a). This research demonstrates that hostility in adolescence is directly related to physiological reactivity to violent video games. It also demonstrates the efficacy of the GAM for predicting arousal measures, one of the three internal states described by the GAM that may lead to aggression. The GAM also predicts that long-term effects of violent video games will appear in a number of other areas, including hostile attribution bias, desensitization, and aggressive behaviors (such as physical fights). Children who tend to interpret ambiguous social cues as being of hostile intent (i.e., have a hostile attribution bias) are hypothesized to be more aggressive. This hypothesized relationship has been confirmed consistently across a wide range of samples ranging from early childhood through adulthood, and across a number of studies (e.g., Crick & Dodge, 1994; Dill, Anderson, Anderson, & Deuser, 1997). Furthermore, there is a robust relationship between hostile attribution bias and children\u2019s social maladjustment, such as depression, negative selfperceptions, and peer rejection (Crick, 1995). Based on the GAM, we predict that long-term exposure to violent video games (or other violent media) may create a predisposition to interpret others\u2019 actions as having malignant intent. Following this logic, if children come to have a greater hostile attribution bias from repeated, extended exposure to violent video games over time, it is also likely that they would become engaged in more aggressive behaviors such as arguments and physical fights. The current research is designed to test four hypotheses. First, video game violence exposure is positively correlated with seeing the world as a more hostile place (hostile attribution bias). Second, video game violence exposure is positively correlated with arguments with teachers and physical fights, and is negatively correlated with academic performance. Third, trait hostility will Violent Video Games 4 be positively correlated with video game violence exposure. Fourth, limiting the amount of violent video game play, either"}
{"_id": "51bb6b18854a33491d7deebdf3202582fe01f36f", "title": "BioWatch: A Noninvasive Wrist-Based Blood Pressure Monitor That Incorporates Training Techniques for Posture and Subject Variability", "text": "Noninvasive continuous blood pressure (BP) monitoring is not yet practically available for daily use. Challenges include making the system easily wearable, reducing noise level and improving accuracy. Variations in each person's physical characteristics, as well as the possibility of different postures, increase the complexity of continuous BP monitoring, especially outside the hospital. This study attempts to provide an easily wearable solution and proposes training to specific posture and individual for further improving accuracy. The wrist watch-based system we developed can measure electrocardiogram and photoplethysmogram. From these two signals, we measure pulse transit time through which we can obtain systolic and diastolic blood pressure through regression techniques. In this study, we investigate various functions to perform the training to obtain blood pressure. We validate measurements on different postures and subjects, and show the value of training the device to each posture and each subject. We observed that the average RMSE between the measured actual systolic BP and calculated systolic BP is between 7.83 to 9.37 mmHg across 11 subjects. The corresponding range of error for diastolic BP is 5.77 to 6.90 mmHg. The system can also automatically detect the arm position of the user using an accelerometer with an average accuracy of 98%, to make sure that the sensor is kept at the proper height. This system, called BioWatch, can potentially be a unified solution for heart rate, SPO2 and continuous BP monitoring."}
{"_id": "d8a2287550b04a5f2d3437bcba2b6697fd0ecf44", "title": "Agent and Cyber-Physical System Based Self-Organizing and Self-Adaptive Intelligent Shopfloor", "text": "The increasing demand of customized production results in huge challenges to the traditional manufacturing systems. In order to allocate resources timely according to the production requirements and to reduce disturbances, a framework for the future intelligent shopfloor is proposed in this paper. The framework consists of three primary models, namely the model of smart machine agent, the self-organizing model, and the self-adaptive model. A cyber-physical system for manufacturing shopfloor based on the multiagent technology is developed to realize the above-mentioned function models. Gray relational analysis and the hierarchy conflict resolution methods were applied to achieve the self-organizing and self-adaptive capabilities, thereby improving the reconfigurability and responsiveness of the shopfloor. A prototype system is developed, which has the adequate flexibility and robustness to configure resources and to deal with disturbances effectively. This research provides a feasible method for designing an autonomous factory with exception-handling capabilities."}
{"_id": "d733324855d9c5c30d8dc079025b5b783a072666", "title": "Image description with features that summarize", "text": "We present a new method for describing images for the purposes of matching and registration. We take the point of view that large, coherent regions in the image provide a concise and stable basis for image description. We develop a new algorithm for feature detection that operates on several projections (feature spaces) of the image using kernel-based optimization techniques to locate local extrema of a continuous scale-space of image regions. Descriptors of these image regions and their relative geometry then form the basis of an image description. The emphasis of the work is on features that summarize image content and are highly robust to viewpoint changes and occlusion yet remain discriminative for matching and registration. We present experimental results of these methods applied to the problem of image retrieval. We find that our method performs comparably to two published techniques: Blobworld and SIFT features. However, compared to these techniques two significant advantages of our method are its (1) stability under large changes in the images and (2) its representational efficiency. 2008 Elsevier Inc. All rights reserved."}
{"_id": "47413bdf4281ebd20b95979c398a63200ebd77c0", "title": "Feature recognition and shape design in sneakers", "text": "Article history: Available online xxxx"}
{"_id": "2329a46590b2036d508097143e65c1b77e571e8c", "title": "Deep Speech: Scaling up end-to-end speech recognition", "text": "We present a state-of-the-art speech recognition system developed using end-toend deep learning. Our architecture is significantly simpler than traditional speech systems, which rely on laboriously engineered processing pipelines; these traditional systems also tend to perform poorly when used in noisy environments. In contrast, our system does not need hand-designed components to model background noise, reverberation, or speaker variation, but instead directly learns a function that is robust to such effects. We do not need a phoneme dictionary, nor even the concept of a \u201cphoneme.\u201d Key to our approach is a well-optimized RNN training system that uses multiple GPUs, as well as a set of novel data synthesis techniques that allow us to efficiently obtain a large amount of varied data for training. Our system, called Deep Speech, outperforms previously published results on the widely studied Switchboard Hub5\u201900, achieving 16.0% error on the full test set. Deep Speech also handles challenging noisy environments better than widely used, state-of-the-art commercial speech systems."}
{"_id": "6b2921eb65968fb68974b7701e0f3101fdf92eef", "title": "Understanding Loneliness in Social Awareness Streams: Expressions and Responses", "text": "We studied the experience of loneliness as communicated by thousands of people on Twitter. Using a data set of public Twitter posts containing explicit expressions of loneliness, we qualitatively developed a categorization scheme for these expressions, showing how the context of loneliness expressed on Twitter relates to existing theories about loneliness. A quantitative analysis of the data exposed categories and patterns in communication practices around loneliness. For example, users expressing more severe, enduring loneliness are more likely to be female, and less likely to include requests for social interaction in their tweets. Further, we studied the responses to expressions of loneliness in Twitter\u2019s social settings. Deriving from the same dataset, we examined factors that correlate with the existence and type of response, showing, for example, that men were more likely to receive responses to lonely tweets, and expressions of enduring loneliness are critically less likely to receive responses."}
{"_id": "88dabd8d295ba9f727baccd73c214e094c6d134f", "title": "From Node Embedding To Community Embedding", "text": "Most of the existing graph embedding methods focus on nodes, which aim to output a vector representation for each node in the graph such that two nodes being \u201cclose\u201d on the graph are close too in the low-dimensional space. Despite the success of embedding individual nodes for graph analytics, we notice that an important concept of embedding communities (i.e., groups of nodes) is missing. Embedding communities is useful, not only for supporting various community-level applications, but also to help preserve community structure in graph embedding. In fact, we see community embedding as providing a higher-order proximity to define the node closeness, whereas most of the popular graph embedding methods focus on first-order and/or second-order proximities. To learn the community embedding, we hinge upon the insight that community embedding and node embedding reinforce with each other. As a result, we propose ComEmbed, the first community embedding method, which jointly optimizes the community embedding and node embedding together. We evaluate ComEmbed on real-world data sets. We show it outperforms the state-of-the-art baselines in both tasks of node classification and community prediction."}
{"_id": "e91202fe2aa1eaec2ea35073c0570a06043c5ed6", "title": "Predicting Learning by Analyzing Eye-Gaze Data of Reading Behavior", "text": "Researchers have highlighted how tracking learners\u2019 eye-gaze can reveal their reading behaviors and strategies, and this provides a framework for developing personalized feedback to improve learning and problem solving skills. In this paper, we describe analyses of eye-gaze data collected from 16 middle school students who worked with Betty\u2019s Brain, an open-ended learning environment, where students learn science by building causal models to teach a virtual agent. Our goal was to test whether newly available consumer-level eye trackers could provide the data that would allow us to probe further into the relations between students\u2019 reading of hypertext resources and building of graphical causal maps. We collected substantial amounts of gaze data and then constructed classifier models to predict whether students would be successful in constructing correct causal links. These models predicted correct map-building actions with an accuracy of 80% (F1 = 0.82; Cohen\u2019s kappa \u03ba = 0.62). The proportions of correct link additions are in turn directly related to learners\u2019 performance in Betty's Brain. Therefore, students\u2019 gaze patterns when reading the resources may be good indicators of their overall performance. These findings can be used to support the development of a real-time eye gaze analysis system, which can detect students reading patterns, and when necessary provide support to help them become better readers."}
{"_id": "f4b44d2374c8387cfca7670d7c0caef769b9496f", "title": "A Wireless Sensor Network-Based Ubiquitous Paprika Growth Management System", "text": "Wireless Sensor Network (WSN) technology can facilitate advances in productivity, safety and human quality of life through its applications in various industries. In particular, the application of WSN technology to the agricultural area, which is labor-intensive compared to other industries, and in addition is typically lacking in IT technology applications, adds value and can increase the agricultural productivity. This study attempts to establish a ubiquitous agricultural environment and improve the productivity of farms that grow paprika by suggesting a 'Ubiquitous Paprika Greenhouse Management System' using WSN technology. The proposed system can collect and monitor information related to the growth environment of crops outside and inside paprika greenhouses by installing WSN sensors and monitoring images captured by CCTV cameras. In addition, the system provides a paprika greenhouse environment control facility for manual and automatic control from a distance, improves the convenience and productivity of users, and facilitates an optimized environment to grow paprika based on the growth environment data acquired by operating the system."}
{"_id": "ffb6e1d138f86a74e334e7543dcf7b5907274b0a", "title": "A conditional approach to dispositional constructs: the local predictability of social behavior.", "text": "A conditional approach to dispositions is developed in which dispositional constructs are viewed as clusters of if-then propositions. These propositions summarize contingencies between categories of conditions and categories of behavior rather than generalized response tendencies. A fundamental unit for investigating dispositions is therefore the conditional frequency of acts that are central to a given behavior category in circumscribed situations, not the overall frequency of behaviors. In an empirical application of the model, we examine how people's dispositional judgments are linked to extensive observations of targets' behavior in a range of natural social situations. We identify categories of these social situations in which targets' behavior may be best predicted from observers' dispositional judgements, focusing on the domains of aggression and withdrawal. One such category consists of subjectively demanding or stressful situations that tax people's performance competencies. As expected, children judged to be aggressive or withdrawn were variable across situations in dispositionally relevant behaviors, but they diverged into relatively predictable aggressive and withdrawn actions in situations that required the social, self-regulatory, and cognitive competencies they lacked. Implications of the conditional approach for personality assessment and person perception research are considered."}
{"_id": "3b29e9ae583b28d3f3270b845371845aff4528c7", "title": "System structure for software fault tolerance", "text": "The paper presents, and discusses the rationale behind, a method for structuring complex computing systems by the use of what we term \u201crecovery blocks\u201d, \u201cconversations\u201d and \u201cfault-tolerant interfaces\u201d. The aim is to facilitate the provision of dependable error detection and recovery facilities which can cope with errors caused by residual design inadequacies, particularly in the system software, rather than merely the occasional malfunctioning of hardware components."}
{"_id": "73677af6c4d7245261a62ca850054928c02f3919", "title": "Explainable PCGML via Game Design Patterns", "text": "Procedural content generation via Machine Learning (PCGML) is the umbrella term for approaches that generate content for games via machine learning. One of the benefits of PCGML is that, unlike search or grammar-based PCG, it does not require hand authoring of initial content or rules. Instead, PCGML relies on existing content and black box models, which can be difficult to tune or tweak without expert knowledge. This is especially problematic when a human designer needs to understand how to manipulate their data or models to achieve desired results. We present an approach to Explainable PCGML via Design Patterns in which the design patterns act as a vocabulary and mode of interaction between user and model. We demonstrate that our technique outperforms non-explainable versions of our system in interactions with five expert designers, four of whom lack any machine learning expertise."}
{"_id": "2541012a8da585d9f2dfb3d97fdef88c01c2fb84", "title": "Learning grammatical structure with Echo State Networks", "text": "Echo State Networks (ESNs) have been shown to be effective for a number of tasks, including motor control, dynamic time series prediction, and memorizing musical sequences. However, their performance on natural language tasks has been largely unexplored until now. Simple Recurrent Networks (SRNs) have a long history in language modeling and show a striking similarity in architecture to ESNs. A comparison of SRNs and ESNs on a natural language task is therefore a natural choice for experimentation. Elman applies SRNs to a standard task in statistical NLP: predicting the next word in a corpus, given the previous words. Using a simple context-free grammar and an SRN with backpropagation through time (BPTT), Elman showed that the network was able to learn internal representations that were sensitive to linguistic processes that were useful for the prediction task. Here, using ESNs, we show that training such internal representations is unnecessary to achieve levels of performance comparable to SRNs. We also compare the processing capabilities of ESNs to bigrams and trigrams. Due to some unexpected regularities of Elman's grammar, these statistical techniques are capable of maintaining dependencies over greater distances than might be initially expected. However, we show that the memory of ESNs in this word-prediction task, although noisy, extends significantly beyond that of bigrams and trigrams, enabling ESNs to make good predictions of verb agreement at distances over which these methods operate at chance. Overall, our results indicate a surprising ability of ESNs to learn a grammar, suggesting that they form useful internal representations without learning them."}
{"_id": "c5ffa88dbb91e5bdf9370378a89e5e827ae0e168", "title": "Cooperation of the basal ganglia, cerebellum, sensory cerebrum and hippocampus: possible implications for cognition, consciousness, intelligence and creativity", "text": "It is suggested that the anatomical structures which mediate consciousness evolved as decisive embellishments to a (non-conscious) design strategy present even in the simplest unicellular organisms. Consciousness is thus not the pinnacle of a hierarchy whose base is the primitive reflex, because reflexes require a nervous system, which the single-celled creature does not possess. By postulating that consciousness is intimately connected to self-paced probing of the environment, also prominent in prokaryotic behavior, one can make mammalian neuroanatomy amenable to dramatically straightforward rationalization. Muscular contraction is the nervous system's only externally directed product, and the signaling routes which pass through the various brain components must ultimately converge on the motor areas. The function of several components is still debatable, so it might seem premature to analyze the global operation of the circuit these routes constitute. But such analysis produces a remarkably simple picture, and it sheds new light on the roles of the individual components. The underlying principle is conditionally permitted movement, some components being able to veto muscular contraction by denying the motor areas sufficient activation. This is true of the basal ganglia (BG) and the cerebellum (Cb), which act in tandem with the sensory cerebrum, and which can prevent the latter's signals to the motor areas from exceeding the threshold for overt movement. It is also true of the anterior cingulate, which appears to play a major role in directing attention. In mammals, the result can be mere thought, provided that a second lower threshold is exceeded. The veto functions of the BG and the Cb stem from inhibition, but the countermanding disinhibition develops at markedly different rates in those two key components. It develops rapidly in the BG, control being exercised by the amygdala, which itself is governed by various other brain regions. It develops over time in the Cb, thereby permitting previously executed movements that have proved advantageous. If cognition is linked to overt or covert movement, intelligence becomes the ability to consolidate individual motor elements into more complex patterns, and creativity is the outcome of a race-to-threshold process which centers on the motor areas. Amongst the ramifications of these ideas are aspects of cortical oscillations, phantom limb sensations, amyotrophic lateral sclerosis (ALS) the difficulty of self-tickling and mirror neurons."}
{"_id": "2f949431c12e4e0e7e1e8bdc8a9e1ca97ef8832e", "title": "Incentive mechanism for peer-to-peer media streaming", "text": "We propose a rank-based peer-selection mechanism for peer-to-peer media streaming systems. The mechanism provides incentives for cooperation through service differentiation. Contributors to the system are rewarded with flexibility and choice in peer selection, resulting in high quality streaming sessions. Free-riders are given limited options in peer selection, if any, and hence receive low quality streaming. Through simulation and wide-area measurement studies, we verify that the mechanism can provide near optimal streaming quality to the cooperative users until the bottleneck shifts from the sources to the network."}
{"_id": "40f9a4a7107c8fd11acebb62b95b6c78dbb7b608", "title": "Model-based Q-learning for humanoid robots", "text": "This paper is proposal with regarding reinforcement learning and robotics. It contains our study so far about reinforcement learning problem and Q-learning \u2014 one of the methods to solve it. The method is tested both by running on a simulation and on NAO robot. Both are written in high programming language. Since the testing are also done on NAO robot. This paper also includes our understanding about NAO robot and Robotic Operating System (ROS), and our approach to apply Q-learning on NAO robot. In the end, we have been successfully tested Q-learning method and apply it to NAO robot in real-world environment."}
{"_id": "65baef5edf7d2e12ff5ef8801e3ba6de4c34a56c", "title": "Community detection and visualization in social networks: Integrating structural and semantic information", "text": "Due to the explosion of social networking and the information sharing among their users, the interest in analyzing social networks has increased over the recent years. Two general interests in this kind of studies are community detection and visualization. In the first case, most of the classic algorithms for community detection use only the structural information to identify groups, that is, how clusters are formed according to the topology of the relationships. However, these methods do not take into account any semantic information which could guide the clustering process, and which may add elements to conduct further analyses. In the second case most of the layout algorithms for clustered graphs have been designed to differentiate the groups within the graph, but they are not designed to analyze the interactions between such groups. Identifying these interactions gives an insight into the way different communities exchange messages or information, and allows the social network researcher to identify key actors, roles, and paths from one community to another.\n This article presents a novel model to use, in a conjoint way, the semantic information from the social network and its structural information to, first, find structurally and semantically related groups of nodes, and second, a layout algorithm for clustered graphs which divides the nodes into two types, one for nodes with edges connecting other communities and another with nodes connecting nodes only within their own community. With this division the visualization tool focuses on the connections between groups facilitating deep studies of augmented social networks."}
{"_id": "43393a561914f05be312a1dff5a757cbc384d1a1", "title": "C4: the continuously concurrent compacting collector", "text": "C4, the Continuously Concurrent Compacting Collector, an updated generational form of the Pauseless GC Algorithm [7], is introduced and described, along with details of its implementation on modern X86 hardware. It uses a read barrier to support concur- rent compaction, concurrent remapping, and concurrent incremental update tracing. C4 differentiates itself from other generational garbage collectors by supporting simultaneous-generational concurrency: the different generations are collected using concurrent (non stop-the-world) mechanisms that can be simultaneously and independently active. C4 is able to continuously perform concurrent young generation collections, even during long periods of concurrent full heap collection, allowing C4 to sustain high allocation rates and maintain the efficiency typical to generational collectors, without sacrificing response times or reverting to stop-the-world operation. Azul systems has been shipping a commercial implementation of the Pauseless GC mechanism, since 2005. Three successive generations of Azul's Vega series systems relied on custom multi-core processors and a custom OS kernel to deliver both the scale and features needed to support Pauseless GC. In 2010, Azul released its first software-only commercial implementation of C4 for modern commodity X86 hardware, using Linux kernel enhancements to support the required feature set. We discuss implementa- tion details of C4 on X86, including the Linux virtual and physical memory management enhancements that were used to support the high rate of virtual memory operations required for sustained pauseless operation. We discuss updates to the collector's manage- ment of the heap for efficient generational collection and provide throughput and pause time data while running sustained workloads."}
{"_id": "1beddcd2cee2a18e6875d0a624541f1c378cdde8", "title": "A study of normative and informational social influences upon individual judgement.", "text": "By NOW, many experimental studies (e.g., 1, 3, 6) have demonstrated that individual psychological processes are subject to social influences. Most investigators, however, have not distinguished among different kinds of social influences; rather, they have carelessly used the term \"group\" influence to characterize the impact of many different kinds of social factors. In fact, a review of the major experiments in this area\u2014e.g., those by Sherif (6), Asch (1), Bovard (3)\u2014would indicate that the subjects (5s) in these experiments as they made their judgments were not functioning as members of a group in any simple or obvious manner. The S, in the usual experiment in this area, made perceptual judgments hi the physical presence of others after hearing their judgments. Typically, the S was not given experimental instructions which made him feel that he was a member of a group faced with a common task requiring cooperative effort for its most effective solution. If \"group\" influences were at work in the foregoing experiments, they were subtly and indirectly created rather than purposefully created by the experimenter."}
{"_id": "e7bd0e7a7ee6b0904d5de6e76e095a6a3b88dd12", "title": "T REE-STRUCTURED DECODING WITH DOUBLY-RECURRENT NEURAL NETWORKS", "text": "We propose a neural network architecture for generating tree-structured objects from encoded representations. The core of the method is a doubly recurrent neural network model comprised of separate width and depth recurrences that are combined inside each cell (node) to generate an output. The topology of the tree is modeled explicitly together with the content. That is, in response to an encoded vector representation, co-evolving recurrences are used to realize the associated tree and the labels for the nodes in the tree. We test this architecture in an encoderdecoder framework, where we train a network to encode a sentence as a vector, and then generate a tree structure from it. The experimental results show the effectiveness of this architecture at recovering latent tree structure in sequences and at mapping sentences to simple functional programs."}
{"_id": "20458d93f8bd7fcbd69e3540e9552f5ae577ab99", "title": "Extreme Learning Machine for Multilayer Perceptron", "text": "Extreme learning machine (ELM) is an emerging learning algorithm for the generalized single hidden layer feedforward neural networks, of which the hidden node parameters are randomly generated and the output weights are analytically computed. However, due to its shallow architecture, feature learning using ELM may not be effective for natural signals (e.g., images/videos), even with a large number of hidden nodes. To address this issue, in this paper, a new ELM-based hierarchical learning framework is proposed for multilayer perceptron. The proposed architecture is divided into two main components: 1) self-taught feature extraction followed by supervised feature classification and 2) they are bridged by random initialized hidden weights. The novelties of this paper are as follows: 1) unsupervised multilayer encoding is conducted for feature extraction, and an ELM-based sparse autoencoder is developed via \u21131 constraint. By doing so, it achieves more compact and meaningful feature representations than the original ELM; 2) by exploiting the advantages of ELM random feature mapping, the hierarchically encoded outputs are randomly projected before final decision making, which leads to a better generalization with faster learning speed; and 3) unlike the greedy layerwise training of deep learning (DL), the hidden layers of the proposed framework are trained in a forward manner. Once the previous layer is established, the weights of the current layer are fixed without fine-tuning. Therefore, it has much better learning efficiency than the DL. Extensive experiments on various widely used classification data sets show that the proposed algorithm achieves better and faster convergence than the existing state-of-the-art hierarchical learning methods. Furthermore, multiple applications in computer vision further confirm the generality and capability of the proposed learning scheme."}
{"_id": "62c00686e4b8c87aea6e2cf5bfca435ae2dc77fc", "title": "Motivational Leadership: Tips From the Business World.", "text": "It is an important task for leadership to identify the motivating factors for employees and motivate them to fulfill their individual and organizational goals. Although there are several motivational factors (extrinsic and intrinsic), intrinsic motivational factors such as autonomy, mastery, and purpose are more important for deeper lasting job satisfaction and higher performance. In this article, the authors discuss how an understanding of these factors that influence motivation has the potential to transform an organization."}
{"_id": "f4c8539bed600c9c652aba76a996b8188761d3fe", "title": "Stronger Baselines for Trustable Results in Neural Machine Translation", "text": "Interest in neural machine translation has grown rapidly as its effectiveness has been demonstrated across language and data scenarios. New research regularly introduces architectural and algorithmic improvements that lead to significant gains over \u201cvanilla\u201d NMT implementations. However, these new techniques are rarely evaluated in the context of previously published techniques, specifically those that are widely used in state-of-theart production and shared-task systems. As a result, it is often difficult to determine whether improvements from research will carry over to systems deployed for real-world use. In this work, we recommend three specific methods that are relatively easy to implement and result in much stronger experimental systems. Beyond reporting significantly higher BLEU scores, we conduct an in-depth analysis of where improvements originate and what inherent weaknesses of basic NMT models are being addressed. We then compare the relative gains afforded by several other techniques proposed in the literature when starting with vanilla systems versus our stronger baselines, showing that experimental conclusions may change depending on the baseline chosen. This indicates that choosing a strong baseline is crucial for reporting reliable experimental results."}
{"_id": "2dc1f0d89a38489070b34327ccb6e4ee0f9bb502", "title": "TMCoI-SIOT: A trust management system based on communities of interest for the social Internet of Things", "text": "We propose a trust management system (TMS) for the social Internet of things (SIOT) called TMCoI-SIOT. The proposed system integrates numerous factors such as direct and indirect trust; transaction factors and social modeling of trust. The novelty of our approach can be summed up in three aspects. The first aspect concerns the network architecture which is seen as a set of nodes groups called community of interest (CoI). Each community nodes shares the same interests and is headed by an administrator (admin. The second aspect is the trust management system that prevents On-Off attacks. The third aspect concerns the trust prediction method that uses the Kalman filter technique in order to estimate nodes behaviors and prevent possible attacks. We prove the effectiveness of the proposed system by simulation against TMS attacks."}
{"_id": "9b500fca0f2ad51093ce9cc76bc89b9e6ecbac6a", "title": "Two comparison-alternative high temperature PCB-embedded transformer designs for a 2 W gate driver power supply", "text": "With fast power semiconductor devices based on GaN and SiC becoming more common, there is a need for improved driving circuits. Transformers with smaller inter-winding capacitance in the isolated gate drive power supply helps in reducing the conducted EMI emission from the power converter to auxiliary sources. This paper presents a transformer with a small volume, a low power loss and a small inter-capacitance in a gate drive power supply to fast switching devices, such as GaN HEMT and SiC MOSFET. The transformer core is embedded into PCB to increase the integration density. Two different transformer designs, the coplanar-winding PCB embedded transformer and the toroidal PCB embedded transformer, are presented and compared. The former has a 0.8 pF inter-capacitance and the latter has 85% efficiency with 73 W/in3 power density. Both designs are dedicated to a 2 W gate drive power supply for wide-band-gap device, which can operate at 200 \u00b0C ambient temperature."}
{"_id": "5c2c7e802a752868087a57de8f151510d1b76414", "title": "Towards Effective Log Summarization", "text": "Database access logs are the canonical go-to resource for tasks ranging from performance tuning to security auditing. Unfortunately, they are also large, unwieldy, and it can be difficult for a human analyst to divine the intent behind typical queries in the log. With an eye towards creating tools for ad-hoc exploration of queries by intent, we analyze techniques for clustering queries by intent. Although numerous techniques have already been developed for log summarization, they target specific goals like query recommendation or storage layout optimization rather than the more fuzzy notion of query intent. In this paper, we first survey a variety of log summarization techniques, focusing on a class of approaches that use query similarity metrics. We then propose DCABench, a benchmark that evaluates how well query similarity metrics capture query intent, and use it to evaluate three similarity metrics. DCABench uses student answers to query construction assignments to capture a wide range of distinct SQL queries that all have the same intent. Next, we propose and evaluate a query regularization process that standardizes query representations, significantly improving the effectiveness of the three similarity metrics tested. Finally, we propose an entirely new similarity metric based on the Weisfeiler-Lehman (WL) approximate graph isomorphism algorithm, which identifies salient features of a graph \u2014 or in our case, of the abstract syntax tree of a query. We show experimentally that distances in WL-feature space capture a meaningful notion of similarity, while still retaining competitive performance."}
{"_id": "343cc181987202cf4b98e61738d0b310927c1fcf", "title": "Guided-wave and leakage characteristics of substrate integrated waveguide", "text": "The substrate integrated waveguide (SIW) technique makes it possible that a complete circuit including planar circuitry, transitions, and rectangular waveguides are fabricated in planar form using a standard printed circuit board or other planar processing techniques. In this paper, guided wave and modes characteristics of such an SIW periodic structure are studied in detail for the first time. A numerical multimode calibration procedure is proposed and developed with a commercial software package on the basis of a full-wave finite-element method for the accurate extraction of complex propagation constants of the SIW structure. Two different lengths of the SIW are numerically simulated under multimode excitation. By means of our proposed technique, the complex propagation constant of each SIW mode can accurately be extracted and the electromagnetic bandstop phenomena of periodic structures are also investigated. Experiments are made to validate our proposed technique. Simple design rules are provided and discussed."}
{"_id": "73ef059add8471751f661d78a0c1b0ee0dc62a19", "title": "Low-loss ultra-wideband transition between conductor-backed coplanar waveguide and substrate integrated waveguide", "text": "A novel transition between conductor-backed coplanar waveguide (CBCPW) and substrate integrated waveguide (SIW) is presented for microwave and millimeter-wave integrated circuit design. The proposed integrated transition that can provide simultaneous field and impedance matching, exhibits outstanding low-loss performances over an ultra-wideband range (entire Ka-band in our case). In this work, a generalized impedance inverter whose parameters are accurately extracted by the use of a numerical Thru-Reflection-Line (TRL) calibration technique is utilized to design the transition. Measured results for the fabricated transition in back-to-back configuration show that the insertion loss is better than 0.4 dB while the return loss is better than 20 dB over the entire Ka-band."}
{"_id": "2958bfb289659889c54e5a5c8f5342d9770c4335", "title": "A pseudo-class-AB telescopic-cascode operational amplifier", "text": "A class-AB architecture for single-stage operational amplifiers is presented. The structure employs a switched-capacitor level shifter to apply the signal to both sink and source output transistors to create the class-AB behavior. Using this structure, the current mirror circuit of traditional class-AB structures can be eliminated. Thus some power can be saved and the operation frequency can be increased. Simulation results of a fast-settling telescopic-cascode op-amp employing this pseudo-class-AB technique confirm the effectiveness of the approach to reduce the power consumption. This approach can be applied to the first stage of a two-stage op-amp as well, to increase the slew rate. The simulated 12-bit 100-Ms/s sample-and-hold amplifier employing the proposed pseudo-class-AB telescopic-cascode op-amp consumes only 16 mW from 3.3 V power supply."}
{"_id": "e58747f37e15bad2adfb0a739cca12aaabdea586", "title": "Flux-Weakening Control of Electric Starter\u2013Generator Based on Permanent-Magnet Machine", "text": "This paper presents the control analysis and design for a permanent-magnet machine (PMM) operated in the flux-weakening (FW) mode for an aircraft electric starter\u2013generator (SG) application. Previous literature has focused on FW control of PMMs in the motoring (starting) mode; however, the system stability and control in the generating mode have been inadequately studied. This paper reports detailed, rigorous control analysis and design for a PMM-based aircraft electric SG operated in the FW mode. It is shown that an unstable area of operation exists. A novel control scheme which eliminates this instability is proposed. The key analytical findings of the paper are verified by experimental investigation. This paper therefore concludes that the presented technique is able to ensure system stability under all modes of operation. Furthermore, it is noted that the findings of this work are also valuable for any two-quadrant PMM drive with frequent change between starting and generating regimes under current-limiting operation."}
{"_id": "9cd463d51053472336a7477e61d9ee19203d094c", "title": "Implicit Subjective and Sentimental Usages in Multi-sense Word Embeddings", "text": "In multi-sense word embeddings, contextual variations in corpus may cause a univocal word to be embedded into different sense vectors. Shi et al. (2016) show that this kind of pseudo multi-senses can be eliminated by linear transformations. In this paper, we show that pseudo multi-senses may come from a uniform and meaningful phenomenon such as subjective and sentimental usage, though they are seemingly redundant. In this paper, we present an unsupervised algorithm to find a linear transformation which can minimize the transformed distance of a group of sense pairs. The major shrinking direction of this transformation is found to be related with subjective shift. Therefore, we can not only eliminate pseudo multi-senses in multisense embeddings, but also identify these subjective senses and tag the subjective and sentimental usage of words in the corpus automatically."}
{"_id": "1a75817226876f3a3bd052ddd3cb8fb7d634d81e", "title": "UNDERSTANDING THE EFFECT OF INTERDEPENDENCY AND VULNERABILITY ON THE PERFORMANCE OF CIVIL INFRASTRUCTURE", "text": "Vulnerability is a measure ofthe extent to which a community, structure, service s or geographic area is likely to be damaged or disrupted by the im pact of particular hazard. Current asset management practices focuses on studying factors that affect p rformance of isolated infrastructure networks and model a set of actions to control the expected performanc e of these networks. This approach ignores the underlying spatial and functional interdependencies among these infrastructure networks and their vulnerability. The purpose of this paper is to intr oduce a new method that recognizes the effect of sp atial and functional interdependencies on vulnerability r ating of water, sewer and road networks. The propos ed method consists of: 1) risk assessment model, 2) in terdependency assessment model and 3) vulnerability assessment model. The risk model is composed of two modules: 1) water and sewer risk module and 2) road infrastructure module. The water and sewer ris k module will cluster these assets into three risk categories based on environmental, social, operatio nal and economical factors. The road infrastructure module will cluster road assets into three risk cat egories by using rational factorial technique based on road type, serviceability index, traffic load, and freez and thaw index. The interdependency model will de ploy the risk ranking to perform geospatial analysis in ArcGIS which results in determining the interdepend t layers of waters, sewers and roads. The vulnerabili ty model will deploy fuzzy neural networks techniqu e to determine the vulnerability rating based on spatial and functional interdependencies. The fuzzy neural networks are utilized to overcome the lack of histo r cal data and incorporate experts\u2019 preferences for establishing the knowledge base for vulnerability a ssessment. The expected contribution of this framew ork is to aid decision makers in understanding the inte rdependencies between civil infrastructure assets a nd to which extent such interdependencies can compromise as ts performance."}
{"_id": "b38ea4cdf9012fa7a76d7e700260aa99855947c3", "title": "PREDICTING HELPFUL POSTS IN OPEN-ENDED DISCUSSION FORUMS: A NEURAL ARCHITECTURE", "text": "Users participate in online discussion forums to learn from others and share their knowledge with the community. They often start a thread with a question or sharing their new findings on a certain topic. We find that, unlike Community Question Answering, where questions are mostly factoid based, the threads in a forum are often open-ended (e.g., asking for recommendations from others) without a single correct answer. In this paper, we address the task of identifying helpful posts in a forum thread, to aid users comprehend long running discussion threads, which often contain repetitive or even irrelevant posts. We propose a recurrent neural network based architecture to model (i) the relevance of a post regarding the original post starting the thread and (ii) the novelty it brings to the discussion, compared to the previous posts in the thread. Experimental results on five different types of online forum datasets show that our model significantly outperforms the state-of-the-art neural network models for text classification."}
{"_id": "f2d92b79bfad01cfada381653fca76e142357252", "title": "Process reengineering in the public sector : learning some private sector lessons", "text": "The possible applicability of business process reengineering (BPR) to organisations in the public sector is explored through analysis of the central issues in BPR and the emerging experience of organisations which have recently implemented it. In particular, the paper suggests that success of reengineering may depend critically on the strategic capability of the organisation prior to undertaking the effort. For that reason well-performing organisations are more likely to improve performance by means of BPR than are weak ones. Yet, in the public sector, it tends to be badly performing agencies which are most encouraged to undertake BPR. Knowing and understanding the reasons for success or failure of BPR in private organisations can prepare public sector managers for undertaking the effort, but each reengineering initiative must be tailored to the specific needs and circumstances of the individual agency. Public sector managers should use the widest possible definition of 'value' when analysing value-added in process reengineering and should be especially sensitive to the way in which 'value' in the public sector is differently interpreted by major stakeholders. During this learning process, public sector agencies would be well advised to be conservative in estimating gains from BPR. \u00a9 1997 Elsevier Science Ltd"}
{"_id": "e25e0ccdb347f85dda2725c1d37fc1790111c6af", "title": "An Efficient and Secure Automotive Wireless Software Update Framework", "text": "Future vehicles will be wirelessly connected to nearby vehicles, to the road infrastructure, and to the Internet, thereby becoming an integral part of the Internet of Things. New comfort features, safety functions, and a number of new vehicle-specific services will be integrated in future smart vehicles. These include a fast, secure, and reliable way to diagnose and reconfigure a vehicle, as well as the installation of new software (SW) on its integrated electronic control units (ECUs). Such wireless SW updates are beneficial for both automotive carmakers and customers, as they allow us to securely enable new features on the vehicle and to fix SW bugs by installing a new SW version over the air. A secure and dependable wireless SW update process is valuable in the entire lifetime of a modern vehicle as it can be used already during vehicle development and manufacturing process on the assembly line, as well as during vehicle maintenance in a service center. Additionally, future vehicles will allow us to remotely download up-to-date SW on the ECUs. To support this process over the entire vehicle's lifetime, a generic framework is needed. In this paper, SecUp, a generic framework enabling secure and efficient wireless automotive SW updates is proposed. SecUp utilizes IEEE\u00a0802.11s as wireless medium to interconnect vehicles and diagnostic devices in a dependable and fast way. Additionally, SecUp is enabling beneficial wireless SW update features such as parallel and partial SW updates to increase the efficiency, and comprises advanced security mechanisms to prevent abuse and attacks."}
{"_id": "494daf40d90a07b58690985088e5bd109ddb315c", "title": "DeepWear: a Case Study of Collaborative Design between Human and Artificial Intelligence", "text": "Deep neural networks (DNNs) applications are now increasingly pervasive and powerful. However, fashion designers are lagging behind in leveraging this increasingly common technology. DNNs are not yet a standard part of fashion de sign practice, either clothes patterns or prototyping tools. In this paper, we present DeepWear, a method using deep convolutional generative adversarial networks for clothes design. The DNNs learn the feature of specific brand clothes and generate images then patterns instructed from the images are made, and an author creates clothes based on that. We evaluated this system by evaluating the credibility of the actual sold clothes on market with our clothes. As the result, we found it is possible to make clothes look like actual products from the generated images. Our findings have implications for collaborative design between machine and human intelligence."}
{"_id": "faad4d2d0f69eba9e221e4cfde573122ef0e41d5", "title": "IIT ( BHU ) : System Description for LSDSem \u2019 17 Shared Task", "text": "This paper describes an ensemble system submitted as part of the LSDSem Shared Task 2017 the Story Cloze Test. The main conclusion from our results is that an approach based on semantic similarity alone may not be enough for this task. We test various approaches and compare them with two ensemble systems. One is based on voting and the other on logistic regression based classifier. Our final system is able to outperform the previous state of the art for the Story Cloze test. Another very interesting observation is the performance of sentiment based approach which works almost as well on its own as our final en-"}
{"_id": "6a2ff287cc438011e9da3e3019d5b8c9f2c1889d", "title": "Digital forensic framework using feedback and case history keeper", "text": "Cyber crime investigation is the integration of two technologies named theoretical methodology and second practical tools. First is the theoretical digital forensic methodology that encompasses the steps to investigate the cyber crime. And second technology is the practically development of the digital forensic tool which sequentially and systematically analyze digital devices to extract the evidence to prove the crime. This paper explores the development of digital forensic framework, combine the advantages of past twenty five forensic models and generate a algorithm to create a new digital forensic model. The proposed model provides the following advantages, a standardized method for investigation, the theory of model can be directly convert into tool, a history lookup facility, cost and time minimization, applicable to any type of digital crime investigation."}
{"_id": "4d8620d954252bf1b426c0e0af67344282e5bc89", "title": "Toward Adversarial Online Learning and the Science of Deceptive Machines", "text": "Intelligent systems rely on pattern recognition and signaturebased approaches for a wide range of sensors enhancing situational awareness. For example, autonomous systems depend on environmental sensors to perform their tasks and secure systems depend on anomaly detection methods. The availability of large amount of data requires the processing of data in a \u201cstreaming\u201d fashion with online algorithms. Yet, just as online learning can enhance adaptability to a non-stationary environment, it introduces vulnerabilities that can be manipulated by adversaries to achieve their goals while evading detection. Although human intelligence might have evolved from social interactions, machine intelligence has evolved as a human intelligence artifact and been kept isolated to avoid ethical dilemmas. As our adversaries become sophisticated, it might be time to revisit this question and examine how we can combine online learning and reasoning leading to the science of deceptive and counter-deceptive machines."}
{"_id": "08ac42640c4e12e27688cb9adb0c55e8ec73c8f3", "title": "Adaptive Multimodal Fusion by Uncertainty Compensation With Application to Audiovisual Speech Recognition", "text": "While the accuracy of feature measurements heavily depends on changing environmental conditions, studying the consequences of this fact in pattern recognition tasks has received relatively little attention to date. In this paper, we explicitly take feature measurement uncertainty into account and show how multimodal classification and learning rules should be adjusted to compensate for its effects. Our approach is particularly fruitful in multimodal fusion scenarios, such as audiovisual speech recognition, where multiple streams of complementary time-evolving features are integrated. For such applications, provided that the measurement noise uncertainty for each feature stream can be estimated, the proposed framework leads to highly adaptive multimodal fusion rules which are easy and efficient to implement. Our technique is widely applicable and can be transparently integrated with either synchronous or asynchronous multimodal sequence integration architectures. We further show that multimodal fusion methods relying on stream weights can naturally emerge from our scheme under certain assumptions; this connection provides valuable insights into the adaptivity properties of our multimodal uncertainty compensation approach. We show how these ideas can be practically applied for audiovisual speech recognition. In this context, we propose improved techniques for person-independent visual feature extraction and uncertainty estimation with active appearance models, and also discuss how enhanced audio features along with their uncertainty estimates can be effectively computed. We demonstrate the efficacy of our approach in audiovisual speech recognition experiments on the CUAVE database using either synchronous or asynchronous multimodal integration models."}
{"_id": "6ca8b16d92ac3254132bb512c49709e12e12be53", "title": "Distributed Auction Algorithms for the Assignment Problem with Partial Information", "text": "operations centers (MOC), in which multiple DMs with partial information and partial control over assets are involved in the development of operational level plans. The MOC emphasizes standardized processes and methods, centralized assessment and guidance, networked distributed planning capabilities, and decentralized execution for assessing, planning and executing missions across a range of military operations \uf020 Abstract\u2014Task-asset assignment is a fundamental problem paradigm in a wide variety of applications. Typical problem setting involves a single decision maker (DM) who has complete knowledge of the weight (reward, benefit, accuracy) matrix and who can control any of the assets to execute the tasks. Motivated by planning problems arising in distributed organizations, this paper introduces a novel variation of the assignment problem, wherein there are multiple DMs and each DM knows only a part of the weight matrix and/or controls a subset of the assets. We extend the auction algorithm to such realistic settings with various partial information structures using a blackboard coordination structure. We show that by communicating the bid, the best and the second best profits among DMs and with a coordinator, the DMs can reconstruct the centralized assignment solution. The auction setup provides a nice analytical framework for formalizing how team members build internal models of other DMs and achieve team cohesiveness over time. [1]. In this vein, we are developing analytical and computational models for multi-level coordinated mission planning and monitoring processes associated with MOCs, so that they can function effectively in highly dynamic, asymmetric, and unpredictable mission environments. Two key problem areas are: 1) realistic modeling of multi-level coordination structures that link tactical, operational and strategic levels of decision making; and 2) collaborative planning with partial information and partial control of assets. In the collaborating planning problem, each DM \u201cowns\u201d a set of assets and is responsible for planning certain tasks. Each task is characterized by a vector of resource requirements, while each asset is characterized by a vector of resource capabilities (see Fig. 1). Multiple assets (from the same DM or multiple DMs) may be required to process a task. The degree of match between the task-resource requirement vector and asset-resource capability vector determines the accuracy of task execution. In addition, the elements of"}
{"_id": "9cb917b6f85a50b7e6583e2cd9b979cee7fa90e2", "title": "A Survey of First-Order Probabilistic Models", "text": "There has been a long standing division in Artificial Intelligence between logical and probabilistic reasoning approaches. While probabilistic models can deal well with inherent uncertainty in many real-world domains, they operate on a mostly propositional level. Logic systems, on the other hand, can deal with much richer representations, especially first-order ones, but treat uncertainty only in limited ways. Therefore, an integration of these types of inference is highly desirable, and many approaches have been proposed, especially from the 1990s on. These solutions come from many different subfields and vary greatly in language, features and (when available at all) inference algorithms. Therefore their relation to each other is not always clear, as well as their semantics. In this survey, we present the main aspects of the solutions proposed and group them according to language, semantics and inference algorithm. In doing so, we draw relations between them and discuss particularly important choices and tradeoffs."}
{"_id": "b71c7ebb73b4ab09fcbb7f1dfbfbb4d72019752d", "title": "Multi-Class Motor Imagery EEG Decoding for Brain-Computer Interfaces", "text": "Recent studies show that scalp electroencephalography (EEG) as a non-invasive interface has great potential for brain-computer interfaces (BCIs). However, one factor that has limited practical applications for EEG-based BCI so far is the difficulty to decode brain signals in a reliable and efficient way. This paper proposes a new robust processing framework for decoding of multi-class motor imagery (MI) that is based on five main processing steps. (i) Raw EEG segmentation without the need of visual artifact inspection. (ii) Considering that EEG recordings are often contaminated not just by electrooculography (EOG) but also other types of artifacts, we propose to first implement an automatic artifact correction method that combines regression analysis with independent component analysis for recovering the original source signals. (iii) The significant difference between frequency components based on event-related (de-) synchronization and sample entropy is then used to find non-contiguous discriminating rhythms. After spectral filtering using the discriminating rhythms, a channel selection algorithm is used to select only relevant channels. (iv) Feature vectors are extracted based on the inter-class diversity and time-varying dynamic characteristics of the signals. (v) Finally, a support vector machine is employed for four-class classification. We tested our proposed algorithm on experimental data that was obtained from dataset 2a of BCI competition IV (2008). The overall four-class kappa values (between 0.41 and 0.80) were comparable to other models but without requiring any artifact-contaminated trial removal. The performance showed that multi-class MI tasks can be reliably discriminated using artifact-contaminated EEG recordings from a few channels. This may be a promising avenue for online robust EEG-based BCI applications."}
{"_id": "444471fdeb54a87202a20101503ec52c2e16e512", "title": "Information Extraction Supported Question Answering", "text": "This paper discusses the use of our information extraction (IE) system, Textract, in the questionanswering (QA) track of the recently held TREC-8 tests. One of our major objectives is to examine how IE can help IR (Information Retrieval) in applications like QA. Our study shows: (i) IE can provide solid support for QA; (ii ) low-level IE like Named Entity tagging is often a necessary component in handling most types of questions; (iii ) a robust natural language shallow parser provides a structural basis for handling questions; (iv) high-level domain independent IE, i.e. extraction of multiple relationships and general events, is expected to bring about a breakthrough in QA."}
{"_id": "e96fb5da6540f0b7bed8760fe81bcb116ac31240", "title": "Teechain: Scalable Blockchain Payments using Trusted Execution Environments", "text": "Blockchain protocols such as Bitcoin are gaining traction for exchanging payments in a secure and decentralized manner. Their need to achieve consensus across a large number of participants, however, fundamentally limits their performance. We describe Teechain, a new off-chain payment protocol that utilizes trusted execution environments (TEEs) to perform secure, efficient and scalable fund transfers on top of a blockchain, with asynchronous blockchain access. Teechain introduces secure payment chains to route payments across multiple payment channels. Teechain mitigates failures of TEEs with two strategies: (i) backups to persistent storage and (ii) a novel variant of chain-replication. We evaluate an implementation of Teechain using Intel SGX as the TEE and the operational Bitcoin blockchain. Our prototype achieves orders of magnitude improvement in most metrics compared to existing implementations of payment channels: with replicated Teechain nodes in a trans-atlantic deployment, we measure a throughput of over 33, 000 transactions per second with 0.1 second latency."}
{"_id": "ed7229d3b211f3ccd9b99d4461ec6bb520425f63", "title": "A 2-D Electronically Steered Phased-Array Antenna With 2$\\,\\times\\,$ 2 Elements in LC Display Technology", "text": "For the first time, a 2-D electronically steered phased-array antenna with a liquid-crystal (LC)-based variable delay line is presented. The structure, which is designed at 17.5 GHz, consists of a 2 \u00d7 2 microstrip patch antenna array, continuously variable delay lines with a novel geometry, RF feeding, and biasing networks. The expected insertion loss of the variable delay line is less than 4 dB with a maximum differential phase shift of 300\u00b0. During the measurements, the antenna is steered by applying an appropriate dc biasing in the range of 0-15 V to the variable delay lines. It is also shown that the return loss is always better than 15 dB at the operating frequency when the antenna is steered."}
{"_id": "398447003a7938772bfb5b91b5dbf5f7f056d0c9", "title": "Current Evidence and Opportunities for Expanding the Role of Occupational Therapy for Adults With Musculoskeletal Conditions.", "text": "Musculoskeletal conditions are the second greatest cause of disability worldwide, and chronic musculoskeletal conditions affect nearly the same percentage of the general population as chronic circulatory and respiratory conditions combined. Moreover, people with musculoskeletal conditions experience a significant decline in independence with daily activities and occupational performance, key areas targeted by occupational therapy interventions. This special issue of the American Journal of Occupational Therapy provides comprehensive summaries of evidence for the care of common musculoskeletal conditions, highlights important implications that support evidence-informed practice, and proposes ways to advance the practice of occupational therapy to improve the lives of people with musculoskeletal conditions."}
{"_id": "a566b5065e1fd50cfbbd8c04808b03dc7a7c0c1a", "title": "Agile Model-Driven Modernization to the Service Cloud", "text": "Migration of legacy systems to more advanced technologies and platforms is a current issue for many software organizations. Model-Driven Modernization combined with Software as a Service delivery model is a very promising approach, which possesses a lot of advant ges, including reduced costs, automation of migration ac tivities and reuse of system functionality. However, a drawback of such an innovative modernization approach is that it lacks mature software process models to guide its adoption. Thus , a methodology for seamless execution of different mig ration and deployment activities is quite needed. On the other hand, agile development methods have been successfully adopted in various projects, which partly or thoroughly use the engineering and delivery models exploited in the modernization process. This paper presents how a pa rticular methodology for Model-Driven Modernization with deployment to the Cloud is enriched with agile tech niques to address different challenging issues. The extended agile methodology could be used by organizations which ha ve already applied agile software development as well as by organizations that plan to introduce it in their work. KeywordsCloud computing; Agile Methodology; Modeldriven Modernization; Software as a service"}
{"_id": "205966b977df580587ca61e71c3015dff9044687", "title": "Organoids as an in vitro model of human development and disease", "text": "The in vitro organoid model is a major technological breakthrough that has already been established as an essential tool in many basic biology and clinical applications. This near-physiological 3D model facilitates an accurate study of a range of in vivo biological processes including tissue renewal, stem cell/niche functions and tissue responses to drugs, mutation or damage. In this Review, we discuss the current achievements, challenges and potential applications of this technique."}
{"_id": "106ad1c36d6a310dc1b06f0fccf61f21fd3e6ecc", "title": "Toward a Maturity Model Development Process for Information Systems ( MMDePSI )", "text": "Maturity models are significant tools to ensure continuous improvement of systems and activities. They allow selfassessment and provide a means benchmark of these activities in relation to best practices. This article proposes the MMDePSI process for the development of new models of maturity in information systems. It begins by presenting the existing methods and approaches. Then he describes the MMDePSI process and the corresponding meta-model. Before concluding and presenting prospects of this work, it evaluates the proposed process."}
{"_id": "be8e0458a7b8b79bcc234e1115f04e90ac466352", "title": "Intelligent Car Braking System with Collision Avoidance and ABS", "text": "This paper provides an efficient way to design an automatic car braking system using Fuzzy Logic. The system could avoid accidents caused by the delays in driver reaction times at critical situations. The proposed Fuzzy Logic Controller is able to brake a car when the car approaches for an obstacle in the very near range. Collision avoidance is achieved by steering the car if the obstacle is in the tolerable range and hence there is no necessity to apply the brakes. Another FLC (which is cascaded with the first FLC for collision avoidance) implements the Anti-lock Braking capability during heavy braking condition. Thus the system is made intelligent since it could take decisions automatically depending upon the inputs from ultrasonic sensors. A simulative study is done using MATLAB and LabVIEW software. The results obtained by the simulation model are compared with the existing system and the proposed model conveys a satisfactory result which has high consumer acceptance. ATMega controller is used for implementation of the proposed system."}
{"_id": "0243cc5b1f6184dae12596e0a72398034f212047", "title": "3D textureless object detection and tracking: An edge-based approach", "text": "This paper presents an approach to textureless object detection and tracking of the 3D pose. Our detection and tracking schemes are coherently integrated in a particle filtering framework on the special Euclidean group, SE(3), in which the visual tracking problem is tackled by maintaining multiple hypotheses of the object pose. For textureless object detection, an efficient chamfer matching is employed so that a set of coarse pose hypotheses is estimated from the matching between 2D edge templates of an object and a query image. Particles are then initialized from the coarse pose hypotheses by randomly drawing based on costs of the matching. To ensure the initialized particles are at or close to the global optimum, an annealing process is performed after the initialization. While a standard edge-based tracking is employed after the annealed initialization, we employ a refinement process to establish improved correspondences between projected edge points from the object model and edge points from an input image. Comparative results for several image sequences with clutter are shown to validate the effectiveness of our approach."}
{"_id": "289b94331d5bd1db7db112fcafac1a636bde5681", "title": "6-DoF model-based tracking of arbitrarily shaped 3D objects", "text": "Image-based 6-DoF pose estimation of arbitrarily shaped 3D objects based on their shape is a rarely studied problem. Most existing image-based methods for pose estimation either exploit textural information in form of local features or, if shape-based, rely on the extraction of straight line segments or other primitives. Straight-forward extensions of 2D approaches are potentially more general, but in practice assume a limited range of possible view angles. The general problem is that a 3D object can potentially produce completely different 2D projections depending on its relative pose to the observing camera. One way to reduce the solution space is to exploit temporal information, i.e. perform tracking. Again, existing model-based tracking approaches rely on relatively simple object geometries. In this paper, we propose a particle filter based tracking approach that can deal with arbitrary shapes and arbitrary or even no texture, i.e. it offers a general solution to the rigid object tracking problem. As our approach can deal with occlusions, it is in particular of interest in the context of goal-directed imitation learning involving the observation of object manipulations. Results of simulation experiments as well as real-world experiments with different object types prove the practical applicability of our approach."}
{"_id": "2dcfc3c4e8680374ec3b1e81d1cf6cff84a8dd06", "title": "On sequential Monte Carlo sampling methods for Bayesian filtering", "text": "In this article, we present an overview of methods for sequential simulation from posterior distributions. These methods are of particular interest in Bayesian filtering for discrete time dynamic models that are typically nonlinear and non-Gaussian. A general importance sampling framework is developed that unifies many of the methods which have been proposed over the last few decades in several different scientific disciplines. Novel extensions to the existing methods are also proposed. We show in particular how to incorporate local linearisation methods similar to those which have previously been employed in the deterministic filtering literature; these lead to very effective importance distributions. Furthermore we describe a method which uses Rao-Blackwellisation in order to take advantage of the analytic structure present in some important classes of state-space models. In a final section we develop algorithms for prediction, smoothing and evaluation of the likelihood in dynamic models."}
{"_id": "89999d39dc9a9763bbbc2a26fa56ce8a06db6974", "title": "Architectural Support for Mitigating Row Hammering in DRAM Memories", "text": "DRAM scaling has been the prime driver of increasing capacity of main memory systems. Unfortunately, lower technology nodes worsen the cell reliability as it increases the coupling between adjacent DRAM cells, thereby exacerbating different failure modes. This paper investigates the reliability problem due to Row Hammering, whereby frequent activations of a given row can cause data loss for its neighboring rows. As DRAM scales to lower technology nodes, the threshold for the number of row activations that causes data loss for the neighboring rows reduces, making Row Hammering a challenging problem for future DRAM chips. To overcome Row Hammering, we propose two architectural solutions: First, Counter-Based Row Activation (CRA), which uses a counter with each row to count the number of row activations. If the count exceeds the row hammering threshold, a dummy activation is sent to neighboring rows proactively to refresh the data. Second, Probabilistic Row Activation (PRA), which obviates storage overhead of tracking and simply allows the memory controller to proactively issue dummy activations to neighboring rows with a small probability for all memory access. Our evaluations show that these solutions are effective at mitigating Row hammering while causing negligible performance loss (<; 1 percent)."}
{"_id": "80777715ab53a193a043ebb78e2148d1614756c4", "title": "A Dynamic Approach for MPE and Weighted MAX-SAT", "text": "The problem of Most Probable Explanation (MPE) arises in the scenario of probabilistic inference: finding an assignment to all variables that has the maximum likelihood given some evidence. We consider the more general CNF-based MPE problem, where each literal in a CNF-formula is associated with a weight. We describe reductions between MPE and weighted MAX-SAT, and show that both can be solved by a variant of weighted model counting. The MPE-SAT algorithm is quite competitive with the state-of-the-art MAX-SAT, WCSP, and MPE solvers on a variety of problems."}
{"_id": "fed0a077e78329c5780c8afbac656fbae17a2f78", "title": "Model selection in ecology and evolution.", "text": "Recently, researchers in several areas of ecology and evolution have begun to change the way in which they analyze data and make biological inferences. Rather than the traditional null hypothesis testing approach, they have adopted an approach called model selection, in which several competing hypotheses are simultaneously confronted with data. Model selection can be used to identify a single best model, thus lending support to one particular hypothesis, or it can be used to make inferences based on weighted support from a complete set of competing models. Model selection is widely accepted and well developed in certain fields, most notably in molecular systematics and mark-recapture analysis. However, it is now gaining support in several other areas, from molecular evolution to landscape ecology. Here, we outline the steps of model selection and highlight several ways that it is now being implemented. By adopting this approach, researchers in ecology and evolution will find a valuable alternative to traditional null hypothesis testing, especially when more than one hypothesis is plausible."}
{"_id": "2ff411d0af2b10d86f6e9da67bf99b0bbd8fa9c2", "title": "Asynchronous Exceptions in Haskell", "text": "Asynchronous exceptions, such as timeouts are important for robust, modular programs, but are extremely difficult to program with \u2014 so much so that most programming languages either heavily restrict them or ban them altogether. We extend our earlier work, in which we added synchronous exceptions to Haskell, to support asynchronous exceptions too. Our design introduces scoped combinators for blocking and unblocking asynchronous interrupts, along with a somewhat surprising semantics for operations that can suspend. Uniquely, we also give a formal semantics for our system."}
{"_id": "9e555a68567feca6ab716a812175ec36c414f178", "title": "Cannabis use and psychosis: a longitudinal population-based study.", "text": "Cannabis use may increase the risk of psychotic disorders and result in a poor prognosis for those with an established vulnerability to psychosis. A 3-year follow-up (1997-1999) is reported of a general population of 4,045 psychosis-free persons and of 59 subjects in the Netherlands with a baseline diagnosis of psychotic disorder. Substance use was assessed at baseline, 1-year follow-up, and 3-year follow-up. Baseline cannabis use predicted the presence at follow-up of any level of psychotic symptoms (adjusted odds ratio (OR) = 2.76, 95% confidence interval (CI): 1.18, 6.47), as well as a severe level of psychotic symptoms (OR = 24.17, 95% CI: 5.44, 107.46), and clinician assessment of the need for care for psychotic symptoms (OR = 12.01, 95% CI: 2.24, 64.34). The effect of baseline cannabis use was stronger than the effect at 1-year and 3-year follow-up, and more than 50% of the psychosis diagnoses could be attributed to cannabis use. On the additive scale, the effect of cannabis use was much stronger in those with a baseline diagnosis of psychotic disorder (risk difference, 54.7%) than in those without (risk difference, 2.2%; p for interaction = 0.001). Results confirm previous suggestions that cannabis use increases the risk of both the incidence of psychosis in psychosis-free persons and a poor prognosis for those with an established vulnerability to psychotic disorder."}
{"_id": "3fc1ce8c576408a944a4a9ed17cca32f11a38cae", "title": "A Modification of LambdaMART to Handle Noisy Crowdsourced Assessments", "text": "We consider noisy crowdsourced assessments and their impact on learning-to-rank algorithms. Starting with EM-weighted assessments, we modify LambdaMART in order to use smoothed probabilistic preferences over pairs of documents, directly as input to the ranking algorithm."}
{"_id": "fb332ec83c7b38bb63ba3094e9187dbec623d8b9", "title": "Imitation from Observation: Learning to Imitate Behaviors from Raw Video via Context Translation", "text": "Imitation learning is an effective approach for autonomous systems to acquire control policies when an explicit reward function is unavailable, using supervision provided as demonstrations from an expert, typically a human operator. However, standard imitation learning methods assume that the agent receives examples of observation-action tuples that could be provided, for instance, to a supervised learning algorithm. This stands in contrast to how humans and animals imitate: we observe another person performing some behavior and then figure out which actions will realize that behavior, compensating for changes in viewpoint, surroundings, object positions and types, and other factors. We term this kind of imitation learning \u201cimitation-from-observation,\u201d and propose an imitation learning method based on video prediction with context translation and deep reinforcement learning. This lifts the assumption in imitation learning that the demonstration should consist of observations in the same environment configuration, and enables a variety of interesting applications, including learning robotic skills that involve tool use simply by observing videos of human tool use. Our experimental results show the effectiveness of our approach in learning a wide range of real-world robotic tasks modeled after common household chores from videos of a human demonstrator, including sweeping, ladling almonds, pushing objects as well as a number of tasks in simulation."}
{"_id": "6abdd60427f8cded23d09ecb653a54fa09bae66e", "title": "A point-based MDP for robust single-lane autonomous driving behavior under uncertainties", "text": "In this paper, a point-based Markov Decision Process (QMDP) algorithm is used for robust single-lane autonomous driving behavior control under uncertainties. Autonomous vehicle decision making is modeled as a Markov Decision Process (MDP), then extended to a QMDP framework. Based on MDP/QMDP, three kinds of uncertainties are taken into account: sensor noise, perception constraints and surrounding vehicles' behavior. In simulation, the QMDP-based reasoning framework makes the autonomous vehicle perform with differing levels of conservativeness corresponding to different perception confidence levels. Road tests also indicate that the proposed algorithm helps the vehicle in avoiding potentially unsafe situations under these uncertainties. In general, the results indicate that the proposed QMDP-based algorithm makes autonomous driving more robust to limited sensing ability and occasional sensor failures."}
{"_id": "86b2a1b37ae0d4591dc9902138e9db8ea303fabf", "title": "Quantum Reinforcement Learning", "text": "The key approaches for machine learning, particularly learning in unknown probabilistic environments, are new representations and computation mechanisms. In this paper, a novel quantum reinforcement learning (QRL) method is proposed by combining quantum theory and reinforcement learning (RL). Inspired by the state superposition principle and quantum parallelism, a framework of a value-updating algorithm is introduced. The state (action) in traditional RL is identified as the eigen state (eigen action) in QRL. The state (action) set can be represented with a quantum superposition state, and the eigen state (eigen action) can be obtained by randomly observing the simulated quantum state according to the collapse postulate of quantum measurement. The probability of the eigen action is determined by the probability amplitude, which is updated in parallel according to rewards. Some related characteristics of QRL such as convergence, optimality, and balancing between exploration and exploitation are also analyzed, which shows that this approach makes a good tradeoff between exploration and exploitation using the probability amplitude and can speedup learning through the quantum parallelism. To evaluate the performance and practicability of QRL, several simulated experiments are given, and the results demonstrate the effectiveness and superiority of the QRL algorithm for some complex problems. This paper is also an effective exploration on the application of quantum computation to artificial intelligence."}
{"_id": "87abb634b8a49aa964d071e46ca38e09c8855f12", "title": "METRIC AND TOPO-GEOMETRIC PROPERTIES OF URBAN STREET NETWORKS : some convergences , divergences , and new results", "text": "The theory of cities, which has grown out of the use of space syntax techniques in urban studies, proposes a curious mathematical duality: that urban space is locally metric but globally topo-geometric. Evidence for local metricity comes from such generic phenomena as grid intensification to reduce mean trip lengths in live centres, the fall of movement from attractors with metric distance, and the commonly observed decay of shopping with metric distance from an intersection. Evidence for global topo-geometry come from the fact that we need to utilise both the geometry and connectedness of the larger scale space network to arrive at configurational measures which optimally approximate movement patterns in the urban network. It might be conjectured that there is some threshold above which human being use some geometrical and topological representation of the urban grid rather than the sense of bodily distance to making movement decisions, but this is unknown. The discarding of metric properties in the large scale urban grid has, however, been controversial. Here we cast a new light on this duality. We show first some phenomena in which metric and topo-geometric measures of urban space converge and diverge, and in doing so clarify the relation between the metric and topo-geometric properties of urban spatial networks. We then show how metric measures can be used to create a new urban phenomenon: the partitioning of the background network of urban space into a network of semi-discrete patches by applying metric universal distance measures at different metric radii, suggesting a natural spatial area-isation of the city at all scales. On this basis we suggest a key clarification of the generic structure of cities: that metric universal distance captures exactly the formally and functionally local patchwork properties of the network, most notably the spatial differentiation of areas, while the top-geometric measures identifying the structure which overcomes locality and links the urban patchwork into a whole at different scales. Introduction: the dual urban network The theory of cities, which has grown out of the use of space syntax techniques in urban studies, proposes that urban street networks have a dual form: a foreground network of linked centres at all scales, and a background network of primarily residential space in which the foreground network is embedded. (Hillier 2001/2) The theory also notes a mathematical duality. On the one hand, measures which express the geometric and topological properties of the network at an extended scale, such as integration and choice measures in axial maps or segment angular maps, are needed to capture structure-function relations such as natural movement patterns (Hillier & Iida 2005). We can call these measures topo-geometric. On the other, at a more localised level, an understanding of structure-function relations often requires an account of metric properties \u2013 for example the generic, but usually local, phenomenon of grid intensification to reduce mean trip lengths in live centres (Siksna 1997, Hillier 1999), the fall of movement rates with metric distance from attractors, and the commonly observed decay of shopping with metric distance from an intersection. In terms of understanding structure-function relations. 
urban space seems to be globally topo-geometric but locally metric. Here we propose to link these two dualities in a more thorough-going way. We show first that the large scale foreground network of space in cities, in spite of the claims of critics (Ratti 2004), really is not metric. On the contrary, the substitution of metric for topo-geometric measures in the analysis, has catastrophic effects on the ability of syntax to account for structure-function relations at this scale. At the same time, topo-geometric measures turn out to capture some interesting metric properties of the larger scale urban network. But we then show that the background network of space really is metric in a much more general sense than has been thought, in that metric measures at different radii can be used to partition the background network of urban space into a patchwork of semi-discrete areas, suggesting a natural metric area-isation of cities at all scales as a function of the placing, shaping and scaling of urban blocks. On this basis we suggest a clarification of the dual structure of cities: that metric \u2018universal distance\u2019 (distance from all points to all others \u2013 Hillier 1996) measures can capture the spatial differentiation of the background urban network into a patchwork of local areas, while the topo-geometric measures identify the structures which overcomes locality and links the urban patchwork into a whole at different scales. The patchwork theory is in effect a theory of block size and shape, picking up the local distortions in urban space induced by the placing and shaping of physical structures. More generally, we can say that the local-to-global topo-geometric structure reflects the visual, and so non-local effects of placing blocks in space, while the patchwork structure reflects metric and so local effects. The patchwork theory extends and generalises the concept of grid intensification, meaning the reduction of block size to reduce mean distance from all points to all others in a space network. As shown in (Hillier 2000), holding total land coverage and travellable distance in free space constant, a grid in which smaller blocks are placed at the centre and larger blocks at the edge has lower mean distance from all points to all others in the space network than a regular grid, while if larger blocks are placed at the centre and smaller blocks at the edge, then the mean distance from all points to others in the space network is higher than in a regular grid. This follows from the partitioning theory set out in Chapter 8 of Space is the Machine. (Hillier 1996) In general in urban grids, live centres and sub-centres (\u2018live\u2019 in the sense of having movement dependent uses such as retail and catering) tend to the grid intensified form to maximise the inter-accessibility of the facilities within in the centre, residential areas tend to larger block sizes, reflecting the need to restrain and structure movement in the image of a spatial culture, while the linkages between centres tend to an even larger block size again, an effect of the directional structuring of routes, so that the network of linked centres which dominate the spatial structure of cities tend to oscillate between a relatively large and relative small block size, with the residential background occupying the middle range. This block size pattern is explained more fully in (Hillier 2001/2). 
In this paper we: \u2013 First review the duality of urban space in three ways: geometrically to establish its empirical existence as a key dimension of urban form, functionally to show its implications in terms of movement and land use patterns, and syntactically to show the relations between the two. \u2013 We then explore some of the suggestions that have been made about reducing the distance between syntax and more traditional metric approaches, in particular by examining the suggestion of Ratti that we should add metric weightings to the main syntax measures. We show the consequences of these suggestions for any theory which seeks to identify functionally meaningful structures in urban space \u2013 We then suggest a general method for showing the metric effect on space of block placing and shaping, both visually and in terms of patterns in scattergrams, by showing theoretical cases \u2013 We then apply this method to some cities and show its ability to identify if not natural spatial areas then at least a natural periodicity in city networks through which they tend to a natural spatial area-isation at all scales, reflecting the ways in which we talks about urban areas and regions at different scales. Metric and geometric properties of the grid First, we consider the urban duality geometrically by looking sections of metropolitan Tokyo and London, shown in Figure 1. As shown in (Hillier 2001/2) and later formalised in (Carvalho & Penn 2004), we must first remind ourselves of the fractal nature of urban least line networks: all are made up at all scales, from the local area to the city regions, of a small number of long lines and a large number of short lines. But there is more to be said. Longer and shorter lines form different kinds of geometric patterns. If we look for patterns in the section of Tokyo, the first thing the eye notes are line continuities. What we are seeing in effect is sequence of lines linked at their ends by nearly straight intersections with other lines, forming a visually dominant pattern in the network. But in general, the lines forming these nearly straight continuities as Figueredo calls them (Figueredo 2003) are longer than other nearby lines. This has the effect that if we find a locally longer line it is likely that at either end it will lead to another to which it will be connected by a nearly straight connection, and these lines will in turn be similarly connected. Probabilistically, we can say the longer the line, the more likely it is to end in a nearly straight connection, and taken together these alignments form a network of multi-directional sequences. Intuitively, the value of these in navigating urban grids is obvious, but here, following (Hillier 1999) we are making a structural point. Figure 1: sections of the Tokyo and London street networks What then of the shorter lines ? Again, in spite of a highly variable geometry, we find certain consistencies. First, shorter lines tend to form clusters, so that in the vicinity of each longer line there will be several shorter lines. These localised groups tend to form more grid-like local patterns, with lines either passing through each other, or ending on other lines, at near right angles. We can say then that the shorter the line, the more likely it is to"}
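The "universal distance" measure invoked above, the mean metric distance from a point to all others, optionally within a radius, can be sketched as a shortest-path computation on an edge-weighted street network. The toy graph below is an assumption for illustration only.

```python
import heapq

def universal_distance(graph, source, radius=float("inf")):
    """Mean metric distance from `source` to all nodes reachable within
    `radius`, via Dijkstra over an edge-weighted street network.
    graph: {node: [(neighbor, metric_length), ...]}"""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd <= radius and nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    others = [d for n, d in dist.items() if n != source]
    return sum(others) / len(others) if others else 0.0

# tiny toy network: shorter edges (smaller blocks) near 'c' lower its universal distance
g = {"a": [("b", 2)], "b": [("a", 2), ("c", 1)], "c": [("b", 1), ("d", 1)], "d": [("c", 1)]}
print(universal_distance(g, "c"))
```

Computing this at several radii and comparing the resulting values across nodes is, in spirit, how the patchwork partitioning described above can be operationalised.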
{"_id": "8dd1a06b56b29f16b60a36450146c6c9aedb0175", "title": "Current Status of Bioinks for Micro-Extrusion-Based 3D Bioprinting", "text": "Recent developments in 3D printing technologies and design have been nothing short of spectacular. Parallel to this, development of bioinks has also emerged as an active research area with almost unlimited possibilities. Many bioinks have been developed for various cells types, but bioinks currently used for 3D printing still have challenges and limitations. Bioink development is significant due to two major objectives. The first objective is to provide growth- and function-supportive bioinks to the cells for their proper organization and eventual function and the second objective is to minimize the effect of printing on cell viability, without compromising the resolution shape and stability of the construct. Here, we will address the current status and challenges of bioinks for 3D printing of tissue constructs for in vitro and in vivo applications."}
{"_id": "d8094a2ba45e8ada8eea2663d5d707f335dc1ce0", "title": "Calibrating a real-time traffic crash-prediction model using archived weather and ITS traffic data", "text": "Growing concern over traffic safety has led to research efforts directed towards predicting freeway crashes in Advanced Traffic Management and Information Systems (ATMIS) environment. This paper aims at developing a crash-likelihood prediction model using real-time traffic-flow variables (measured through series of underground sensors) and rain data (collected at weather stations) potentially associated with crash occurrence. Archived loop detector and rain data and historical crash data have been used to calibrate the model. This model can be implemented using an online loop and rain data to identify high crash potential in real-time. Principal component analysis (PCA) and logistic regression (LR) have been used to estimate a weather model that determines a rain index based on the rain readings at the weather station in the proximity of the freeway. A matched case-control logit model has also been used to model the crash potential based on traffic loop data and the rain index. The 5-min average occupancy and standard deviation of volume observed at the downstream station, and the 5-min coefficient of variation in speed at the station closest to the crash, all during 5-10 min prior to the crash occurrence along with the rain index have been found to affect the crash occurrence most significantly."}
{"_id": "19c1e37360ff59a3bbf7d20d8d184d1004f7a7b9", "title": "Data exfiltration: A review of external attack vectors and countermeasures", "text": "Context: One of the main targets of cyber-attacks is data exfiltration, which is the leakage of sensitive or private data to an unauthorized entity. Data exfiltration can be perpetrated by an outsider or an insider of an organization. Given the increasing number of data exfiltration incidents, a large number of data exfiltration countermeasures have been developed. These countermeasures aim to detect, prevent, or investigate exfiltration of sensitive or private data. With the growing interest in data exfiltration, it is important to review data exfiltration attack vectors and countermeasures to support future research in this field. Objective: This paper is aimed at identifying and critically analysing data exfiltration attack vectors and countermeasures for reporting the status of the art and determining gaps for future research. Method: We have followed a structured process for selecting 108 papers from seven publication databases. Thematic analysis method has been applied to analyse the extracted data from the reviewed papers. Results: We have developed a classification of (1) data exfiltration attack vectors used by external attackers and (2) the countermeasures in the face of external attacks. We have mapped the countermeasures to attack vectors. Furthermore, we have explored the applicability of various countermeasures for different states of data (i.e., in use, in transit, or at rest). Conclusion: This review has revealed that (a) most of the state of the art is focussed on preventive and detective countermeasures and significant research is required on developing investigative countermeasures that are equally important; (b) Several data exfiltration countermeasures are not able to respond in real-time, which specifies that research efforts need to be invested to enable them to respond in real-time (c) A number of data exfiltration countermeasures do not take privacy and ethical concerns into consideration, which may become an obstacle in their full adoption (d) Existing research is primarily focussed on protecting data in \u2018in use\u2019 state, therefore, future research needs to be directed towards securing data in \u2018in rest\u2019 and \u2018in transit\u2019 states (e) There is no standard or framework for evaluation of data exfiltration countermeasures. We assert the need for developing such an evaluation framework."}
{"_id": "17cdeca0150d68c583eaf749c86f47287e4ea6d5", "title": "Revisiting Cryptographic Accumulators, Additional Properties and Relations to Other Primitives", "text": "Cryptographic accumulators allow to accumulate a finite set of values into a single succinct accumulator. For every accumulated value, one can efficiently compute a witness, which certifies its membership in the accumulator. However, it is computationally infeasible to find a witness for any nonaccumulated value. Since their introduction, various accumulator schemes for numerous practical applications and with different features have been proposed. Unfortunately, to date there is no unifying model capturing all existing features. Such a model can turn out to be valuable as it allows to use accumulators in a black-box fashion. To this end, we propose a unified formal model for (randomized) cryptographic accumulators which covers static and dynamic accumulators, their universal features and includes the notions of undeniability and indistinguishability. Additionally, we provide an exhaustive classification of all existing schemes. In doing so, it turns out that most accumulators are distinguishable. Fortunately, a simple, light-weight generic transformation allows to make many existing dynamic accumulator schemes indistinguishable. As this transformation, however, comes at the cost of reduced collision freeness, we additionally propose the first indistinguishable scheme that does not suffer from this shortcoming. Finally, we employ our unified model for presenting a black-box construction of commitments from indistinguishable accumulators as well as a black-box construction of indistinguishable, undeniable universal accumulators from zero-knowledge sets. Latter yields the first universal accumulator construction that provides indistinguishability."}
{"_id": "59d1f559070457ea647858d5ea6b57c26222cd3b", "title": "Motion Planning : The Essentials", "text": "This is the first installment of a two-part tutorial. The goal of the first part is to give the reader a basic understanding of the technical issues and types of approaches to solving the basic path planning or obstacle avoidance problem. The second installment will cover more advanced issues, including feedback, differential constraints, and uncert ainty. Note that is a brieftutorial, rather than a comprehensive surveyof methods. For the latter, consult recent textbooks [4], [9]."}
{"_id": "e0057377a3c1b92b94854668e179ca06c3e39cc8", "title": "Customer churn prediction for retail business", "text": "Customer churn happens when a customer discontinues his or her interaction with a company. In retail business, a customer is treated to be churned once his/her transactions outdate a particular amount of time. Once a customer becomes a churn, the loss incurred by the company is not just the lost revenue due to the lost customer but also the costs involved in additional marketing in order to attract new customer. Reducing customer churn is a key business goal of every online business. In this project, we have considered data set from UCI Machine Learning repository. This dataset contains records of transactions that occurred between December 1, 2010 and December 1, 2011. This is recorded from an online retail gift store based in United Kingdom. The customers of this company are mainly wholesalers. This data set is pre-processed by removing NAs, validating numerical values, removing erroneous data points. We then perform aggregations on the data to generate invoice based and customer based data sets. A variable churn is attached to each data point. This churn value is determined based on the customer's transactions. Three algorithms are run on this customer aggregated dataset to predict churn value. They are Random forest, Support vector machines and Extreme gradient boosting. A comparative study is done on these three algorithms."}
{"_id": "9be787ea1c0a649656fcdbf2e9fe8a575f592d52", "title": "Detection of falls using accelerometers and mobile phone technology.", "text": "OBJECTIVES\nto study the sensitivity and specificity of fall detection using mobile phone technology.\n\n\nDESIGN\nan experimental investigation using motion signals detected by the mobile phone.\n\n\nSETTING AND PARTICIPANTS\nthe research was conducted in a laboratory setting, and 18 healthy adults (12 males and 6 females; age = 29 \u00b1 8.7 years) were recruited.\n\n\nMEASUREMENT\neach participant was requested to perform three trials of four different types of simulated falls (forwards, backwards, lateral left and lateral right) and eight other everyday activities (sit-to-stand, stand-to-sit, level walking, walking up- and downstairs, answering the phone, picking up an object and getting up from supine). Acceleration was measured using two devices, a mobile phone and an independent accelerometer attached to the waist of the participants.\n\n\nRESULTS\nBland-Altman analysis shows a higher degree of agreement between the data recorded by the two devices. Using individual upper and lower detection thresholds, the specificity and sensitivity for mobile phone were 0.81 and 0.77, respectively, and for external accelerometer they were 0.82 and 0.96, respectively.\n\n\nCONCLUSION\nfall detection using a mobile phone is a feasible and highly attractive technology for older adults, especially those living alone. It may be best achieved with an accelerometer attached to the waist, which transmits signals wirelessly to a phone."}
{"_id": "1f911ae809066d4a55598bce939a466de980b13b", "title": "LiTell: robust indoor localization using unmodified light fixtures", "text": "Owing to dense deployment of light fixtures and multipath-free propagation, visible light localization technology holds potential to overcome the reliability issue of radio localization. However, existing visible light localization systems require customized light hardware, which increases deployment cost and hinders near term adoption. In this paper, we propose LiTell, a simple and robust localization scheme that employs unmodified fluorescent lights (FLs) as location landmarks and commodity smartphones as light sensors. LiTell builds on the key observation that each FL has an inherent characteristic frequency which can serve as a discriminative feature. It incorporates a set of sampling, signal amplification and camera optimization mechanisms, that enable a smartphone to capture the extremely weak and high frequency ( > 80 kHz) features. We have implemented LiTell as a real-time localization and navigation system on Android. Our experiments demonstrate LiTell's high reliability in discriminating different FLs, and its potential to achieve sub-meter location granularity. Our user study in a multi-storey office building, parking lot and grocery store further validates LiTell as an accurate, robust and ready-to-use indoor localization system."}
{"_id": "32a11508fcbbb64a9b427b57403dc1ad54b6d718", "title": "A Cost-Effective Deadline-Constrained Dynamic Scheduling Algorithm for Scientific Workflows in a Cloud Environment", "text": "Cloud computing, a distributed computing paradigm, enables delivery of IT resources over the Internet and follows the pay-as-you-go billing model. Workflow scheduling is one of the most challenging problems in cloud computing. Although, workflow scheduling on distributed systems like grids and clusters have been extensively studied, however, these solutions are not viable for a cloud environment. It is because, a cloud environment differs from other distributed environment in two major ways: on-demand resource provisioning and pay-as-you-go pricing model. Thus, to achieve the true benefits of workflow orchestration onto cloud resources novel approaches that can capitalize the advantages and address the challenges specific to a cloud environment needs to be developed. This work proposes a dynamic cost-effective deadline-constrained heuristic algorithm for scheduling a scientific workflow in a public cloud. The proposed technique aims to exploit the advantages offered by cloud computing while taking into account the virtual machine (VM) performance variability and instance acquisition delay to identify a just-in-time schedule of a deadline constrained scientific workflow at lesser costs. Performance evaluation on some well-known scientific workflows exhibit that the proposed algorithm delivers better performance in comparison to the current state-of-the-art heuristics."}
{"_id": "a83eeab54761807cd058576373b4a6b857a58564", "title": "Artificial Intelligence as an Effective Classroom Assistant", "text": "The field of artificial intelligence in education (AIED) uses techniques from AI and cognitive science to better understand the nature of learning and teaching and to build systems to help learners gain new skills or understand new concepts. This article studies metareviews and meta-analyses to make the case for blended learning, wherein the teacher can offload some work to AIED systems."}
{"_id": "d538dad39b8ff0f9bbc2cfb1a6dffb6748344885", "title": "On Data Integrity Attacks Against Real-Time Pricing in Energy-Based Cyber-Physical Systems", "text": "In this paper, we investigate a novel real-time pricing scheme, which considers both renewable energy resources and traditional power resources and could effectively guide the participants to achieve individual welfare maximization in the system. To be specific, we develop a Lagrangian-based approach to transform the global optimization conducted by the power company into distributed optimization problems to obtain explicit energy consumption, supply, and price decisions for individual participants. Also, we show that these distributed problems derived from the global optimization by the power company are consistent with individual welfare maximization problems for end-users and traditional power plants. We also investigate and formalize the vulnerabilities of the real-time pricing scheme by considering two types of data integrity attacks: Ex-ante attacks and Ex-post attacks, which are launched by the adversary before or after the decision-making process. We systematically analyze the welfare impacts of these attacks on the real-time pricing scheme. Through a combination of theoretical analysis and performance evaluation, our data shows that the real-time pricing scheme could effectively guide the participants to achieve welfare maximization, while cyber-attacks could significantly disrupt the results of real-time pricing decisions, imposing welfare reduction on the participants."}
{"_id": "a862f658086c3cacf950f0a90380b98785577a44", "title": "An improved boosting based on feature selection for corporate bankruptcy prediction", "text": "With the recent financial crisis and European debt crisis, corporate bankruptcy prediction has become an increasingly important issue for financial institutions. Many statistical and intelligent methods have been proposed, however, there is no overall best method has been used in predicting corporate bankruptcy. Recent studies suggest ensemble learning methods may have potential applicability in corporate bankruptcy prediction. In this paper, a new and improved Boosting, FS-Boosting, is proposed to predict corporate bankruptcy. Through injecting feature selection strategy into Boosting, FS-Booting can get better performance as base learners in FS-Boosting could get more accuracy and diversity. For the testing and illustration purposes, two real world bankruptcy datasets were selected to demonstrate the effectiveness and feasibility of FS-Boosting. Experimental results reveal that FS-Boosting could be used as an alternative method for the corporate bankruptcy prediction. 2013 Elsevier Ltd. All rights reserved."}
{"_id": "0d0cee830772c3b2b274bfb5c3ad0ee42d8a0a57", "title": "Multimodal Convolutional Neural Networks for Matching Image and Sentence", "text": "In this paper, we propose multimodal convolutional neural networks (m-CNNs) for matching image and sentence. Our m-CNN provides an end-to-end framework with convolutional architectures to exploit image representation, word composition, and the matching relations between the two modalities. More specifically, it consists of one image CNN encoding the image content and one matching CNN modeling the joint representation of image and sentence. The matching CNN composes different semantic fragments from words and learns the inter-modal relations between image and the composed fragments at different levels, thus fully exploit the matching relations between image and sentence. Experimental results demonstrate that the proposed m-CNNs can effectively capture the information necessary for image and sentence matching. More specifically, our proposed m-CNNs significantly outperform the state-of-the-art approaches for bidirectional image and sentence retrieval on the Flickr8K and Flickr30K datasets."}
{"_id": "1fd80b5adeec4d5e921c7499a50c2cfc5b9686ad", "title": "A SICK cure for the evaluation of compositional distributional semantic models", "text": "Shared and internationally recognized benchmarks are fundamental for the development of any computational system. We aim to help the research community working on compositional distributional semantic models (CDSMs) by providing SICK (Sentences Involving Compositional Knowldedge), a large size English benchmark tailored for them. SICK consists of about 10,000 English sentence pairs that include many examples of the lexical, syntactic and semantic phenomena that CDSMs are expected to account for, but do not require dealing with other aspects of existing sentential data sets (idiomatic multiword expressions, named entities, telegraphic language) that are not within the scope of CDSMs. By means of crowdsourcing techniques, each pair was annotated for two crucial semantic tasks: relatedness in meaning (with a 5-point rating scale as gold score) and entailment relation between the two elements (with three possible gold labels: entailment, contradiction, and neutral). The SICK data set was used in SemEval-2014 Task 1, and it freely available for research purposes."}
{"_id": "59eff53de503b948521ac05656530bf9b847fcc8", "title": "IJ CS Differences in Individualistic and Collectivistic Tendencies among College Students in Japan and the United States Emiko", "text": "It is a worldwide stereotype that Japanese, compared to Americans, are oriented more toward collectivism. But this stereotypical notion of more collectivism among Japanese, which typically stems from a view that individualism and collectivism stand at opposite ends of a continuum, has been filled with dashed empirical findings, especially in a sample of college students. In the current study, following the view that individualism and collectivism are two separate concepts rather than one with two extremes, we test and compare both individualistic and collectivistic tendencies among college students in Japan and the United States. A review of theories and research on this dimension of cultural variability across the two diverse cultures and the literature on societal pressure of collectivity and on parents as primary socialization agents of culturally expected values lead to two hypotheses: 1) Japanese college students tend less toward individualism than do Americans, and 2) Japanese college students tend less toward collectivism than do Americans. Analysis of identical survey data from college students in Japan and in the United States provides strong support for both hypotheses."}
{"_id": "e3dadfcccb9027c465c80eb26e312209270cbb6a", "title": "Autonomous robotic system for bridge deck data collection and analysis", "text": "Bridge deck inspection is conducted to identify bridge condition deterioration and, thus, to facilitate implementation of appropriate maintenance or rehabilitation procedures. In this paper, we report the development of a robotic system for bridge deck data collection and analysis. The robotic system accurately localizes itself and autonomously maneuvers on the bridge deck to collect visual images and conduct nondestructive evaluation (NDE) measurements. The developed robotic system can reduce the cost and time of the bridge deck data collection. Crack detection and mapping algorithm to build the deck crack maps is presented in detail. The electrical resistivity (ER), impact-echo (IE) and ultrasonic surface waves (USW) data collected by the robot are analyzed to generate the corrosion, delamination and concrete elastic modulus maps of the deck. The presented robotic system has been successfully deployed to inspect numerous bridges."}
{"_id": "c2d5b7a89f04f4351ba16ce73d3b0cae33be179c", "title": "Effects of Harsh and Unpredictable Environments in Adolescence on Development of Life History Strategies: A Longitudinal Test of an Evolutionary Model.", "text": "The National Longitudinal Study of Adolescent Health data were used to test predictions from life history theory. We hypothesized that (1) in young adulthood an emerging life history strategy would exist as a common factor underlying many life history traits (e.g., health, relationship stability, economic success), (2) both environmental harshness and unpredictability would account for unique variance in expression of adolescent and young adult life history strategies, and (3) adolescent life history traits would predict young adult life history strategy. These predictions were supported. The current findings suggest that the environmental parameters of harshness and unpredictability have concurrent effects on life history development in adolescence, as well as longitudinal effects into young adulthood. In addition, life history traits appear to be stable across developmental time from adolescence into young adulthood."}
{"_id": "c20a8ff87d734cf8e9b95e2bd1a5e6467bac2d0f", "title": "Pattern Recognition Systems under Attack: Design Issues and Research Challenges", "text": "We analyze the problem of designing pattern recognition systems in adversarial settings, under an engineering viewpoint, motivated by their increasing exploitation in security-sensitive applications like spam and malware detection, despite their vulnerability to potential attacks has not yet been deeply understood. We first review previous work and report examples of how a complex system may be evaded either by leveraging on trivial vulnerabilities of its untrained components, e.g., parsing errors in the pre-processing steps, or by exploiting more subtle vulnerabilities of learning algorithms. We then discuss the need of exploiting both reactive and proactive security paradigms complementarily to improve the security by design. Our ultimate goal is to provide some useful guidelines for improving the security of pattern recognition in adversarial settings, and to suggest related open issues to foster research in this area."}
{"_id": "22438f7f4d54e00144c7d1cd61ca1b79b853fec2", "title": "Security, privacy and safety evaluation of dynamic and static fleets of drones", "text": "Interconnected everyday objects, either via public or private networks, are gradually becoming reality in modern life \u2014 often referred to as the Internet of Things (IoT) or Cyber-Physical Systems (CPS). One stand-out example are those systems based on Unmanned Aerial Vehicles (UAVs). Fleets of such vehicles (drones) are prophesied to assume multiple roles from mundane to high-sensitive applications, such as prompt pizza or shopping deliveries to the home, or to deployment on battlefields for battlefield and combat missions. Drones, which we refer to as UAVs in this paper, can operate either individually (solo missions) or as part of a fleet (group missions), with and without constant connection with a base station. The base station acts as the command centre to manage the drones' activities; however, an independent, localised and effective fleet control is necessary, potentially based on swarm intelligence, for several reasons: 1) an increase in the number of drone fleets; 2) fleet size might reach tens of UAVs; 3) making time-critical decisions by such fleets in the wild; 4) potential communication congestion and latency; and 5) in some cases, working in challenging terrains that hinders or mandates limited communication with a control centre, e.g. operations spanning long period of times or military usage of fleets in enemy territory. This self-aware, mission-focused and independent fleet of drones may utilise swarm intelligence for a), air-traffic or flight control management, b) obstacle avoidance, c) self-preservation (while maintaining the mission criteria), d) autonomous collaboration with other fleets in the wild, and e) assuring the security, privacy and safety of physical (drones itself) and virtual (data, software) assets. In this paper, we investigate the challenges faced by fleet of drones and propose a potential course of action on how to overcome them."}
{"_id": "69c482aa414609c393c1e2df5c90e5617dc387ae", "title": "A retrospective of knowledge graphs", "text": "Information on the Internet is fragmented and presented in different data sources, which makes automatic knowledge harvesting and understanding formidable for machines, and even for humans. Knowledge graphs have become prevalent in both of industry and academic circles these years, to be one of the most efficient and effective knowledge integration approaches. Techniques for knowledge graph construction can mine information from either structured, semi-structured, or even unstructured data sources, and finally integrate the information into knowledge, represented in a graph. Furthermore, knowledge graph is able to organize information in an easy-to-maintain, easy-to-understand and easy-to-use manner. In this paper, we give a summarization of techniques for constructing knowledge graphs. We review the existing knowledge graph systems developed by both academia and industry. We discuss in detail about the process of building knowledge graphs, and survey state-of-the-art techniques for automatic knowledge graph checking and expansion via logical inferring and reasoning. We also review the issues of graph data management by introducing the knowledge data models and graph databases, especially from a NoSQL point of view. Finally, we overview current knowledge graph systems and discuss the future research directions."}
{"_id": "5703defbb193df2ff616452b485b8d5f524b0cf7", "title": "Sentiment analysis using Telugu SentiWordNet", "text": "In recent times, sentiment analysis in low resourced languages and regional languages has become emerging areas in natural language processing. Researchers have shown greater interest towards analyzing sentiment in Indian languages such as Hindi, Telugu, Tamil, Bengali, Malayalam, etc. In best of our knowledge, microscopic work has been reported till date towards Indian languages due to lack of annotated data set. In this paper, we proposed a two-phase sentiment analysis for Telugu news sentences using Telugu SentiWordNet. Initially, it identifies subjectivity classification where sentences are classified as subjective or objective. Objective sentences are treated as neutral sentiment as they don't carry any sentiment value. Next, Sentiment Classification has been done where the subjective sentences are further classified into positive and negative sentences. With the existing Telugu SentiWordNet, our proposed system attains an accuracy of 74% and 81% for subjectivity and sentiment classification respectively."}
{"_id": "a3aa2b85dd59a941a891d2f620d051fd2fa5f528", "title": "Combating VR sickness through subtle dynamic field-of-view modification", "text": "Virtual Reality (VR) sickness can cause intense discomfort, shorten the duration of a VR experience, and create an aversion to further use of VR. High-quality tracking systems can minimize the mismatch between a user's visual perception of the virtual environment (VE) and the response of their vestibular system, diminishing VR sickness for moving users. However, this does not help users who do not or cannot move physically the way they move virtually, because of preference or physical limitations such as a disability. It has been noted that decreasing field of view (FOV) tends to decrease VR sickness, though at the expense of sense of presence. To address this tradeoff, we explore the effect of dynamically, yet subtly, changing a physically stationary person's FOV in response to visually perceived motion as they virtually traverse a VE. We report the results of a two-session, multi-day study with 30 participants. Each participant was seated in a stationary chair, wearing a stereoscopic head-worn display, and used control and FOV-modifying conditions in the same VE. Our data suggests that by strategically and automatically manipulating FOV during a VR session, we can reduce the degree of VR sickness perceived by participants and help them adapt to VR, without decreasing their subjective level of presence, and minimizing their awareness of the intervention."}
{"_id": "48e78f72d8564e1e502d72df5d170e0602deb2ff", "title": "Advances in rapid detection methods for foodborne pathogens.", "text": "Food safety is increasingly becoming an important public health issue, as foodborne diseases present a widespread and growing public health problem in both developed and developing countries. The rapid and precise monitoring and detection of foodborne pathogens are some of the most effective ways to control and prevent human foodborne infections. Traditional microbiological detection and identification methods for foodborne pathogens are well known to be time consuming and laborious as they are increasingly being perceived as insufficient to meet the demands of rapid food testing. Recently, various kinds of rapid detection, identification, and monitoring methods have been developed for foodborne pathogens, including nucleic-acid-based methods, immunological methods, and biosensor-based methods, etc. This article reviews the principles, characteristics, and applications of recent rapid detection methods for foodborne pathogens."}
{"_id": "54f52815972d02c0a67ea0d049f4922e05ba465d", "title": "Regression testing in the presence of non-code changes", "text": "Regression testing is an important activity performed to validate modified software, and one of its key tasks is regression test selection (RTS) -- selecting a subset of existing test cases to run on the modified software. Most existing RTS techniques focus on changes made to code components and completely ignore non-code elements, such as configuration files and databases, which can also change and affect the system behavior. To address this issue, we present a new RTS technique that performs accurate test selection in the presence of changes to non-code components. To do this, our technique computes traceability between test cases and the external data accessed by an application, and uses this information to perform RTS in the presence of changes to non-code elements. We present our technique, a prototype implementation of our technique, and a set of preliminary empirical results that illustrate the feasibility, effectiveness, and potential usefulness of our approach."}
{"_id": "3ef24c66dbb215f92f0221f4b01ddfd68c79eca8", "title": "Evidence for van der Waals adhesion in gecko setae.", "text": "Geckos have evolved one of the most versatile and effective adhesives known. The mechanism of dry adhesion in the millions of setae on the toes of geckos has been the focus of scientific study for over a century. We provide the first direct experimental evidence for dry adhesion of gecko setae by van der Waals forces, and reject the use of mechanisms relying on high surface polarity, including capillary adhesion. The toes of live Tokay geckos were highly hydrophobic, and adhered equally well to strongly hydrophobic and strongly hydrophilic, polarizable surfaces. Adhesion of a single isolated gecko seta was equally effective on the hydrophobic and hydrophilic surfaces of a microelectro-mechanical systems force sensor. A van der Waals mechanism implies that the remarkable adhesive properties of gecko setae are merely a result of the size and shape of the tips, and are not strongly affected by surface chemistry. Theory predicts greater adhesive forces simply from subdividing setae to increase surface density, and suggests a possible design principle underlying the repeated, convergent evolution of dry adhesive microstructures in gecko, anoles, skinks, and insects. Estimates using a standard adhesion model and our measured forces come remarkably close to predicting the tip size of Tokay gecko seta. We verified the dependence on size and not surface type by using physical models of setal tips nanofabricated from two different materials. Both artificial setal tips stuck as predicted and provide a path to manufacturing the first dry, adhesive microstructures."}
{"_id": "4639dc2c207e6b17319e9ff012587f3cad640b70", "title": "Evaluation of nutritional assessment techniques in elderly people newly admitted to municipal care", "text": "Objectives: To evaluate the Subjective Global Assessment (SGA) and the Mini Nutritional Assessment (MNA) with regard to validity using a combination of anthropometric and serum-protein measurements as standard criteria to assess protein-energy malnutrition (PEM).Design: Cross-sectional study with consecutive selection of residents aged \u226565\u2005y.Setting: A municipality in the south of Sweden.Subjects: During a year, starting in October 1996, 148 females and 113 males, aged \u226565\u2013104\u2005y of age, newly admitted to special types of housing for the elderly, were included in the study.Results: According to SGA, 53% were assessed as malnourished or moderately malnourished on admission. The corresponding figure from MNA was 79% malnourished or at risk of malnutrition. Both tools indicated that anthropometric values and serum proteins were significantly lower in residents classified as being malnourished (P<0.05). Sensitivity in detecting PEM was in SGA 0.93 and in MNA 0.96 and specificity was 0.61 and 0.26, respectively. Using regression analysis, weight index and serum albumin were the best objective nutritional parameters in predicting the SGA- and MNA classifications. Item \u2018muscle wasting\u2019 in SGA and \u2018self-experienced health status\u2019 in MNA showed most predictive power concerning the odds of being assessed as malnourished.Conclusions: SGA was shown to be the more useful tool in detecting residents with established malnutrition and MNA in detecting residents who need preventive nutritional measures."}
{"_id": "89222f81792674bf4ada3589f54234f27c70b86a", "title": "Youth texting: Help or hindrance to literacy?", "text": "An extensive amount of research has been performed in recent years into the widespread practice of text messaging in youth. As part of this broad area of research, the associations between youth texting and literacy have been investigated in a variety of contexts. A comprehensive, semi-systematic review of the literature into texting and literacy was conducted, with a particular focus on quantitative empirical studies. Media reports, teacher surveys, and qualitative studies were also taken into consideration as part of this wide-ranging examination of previous research into this phenomenon. There were no clear positive or negative links discovered between adolescent texting practices and literacy, with the research findings in this study area best summarized as mixed and inconclusive. More studies in this research area are required, especially of an experimental or longitudinal kind. In addressing the debate on the impact of texting on literacy, research analyses as well as media reports would benefit from a more balanced presentation of existing findings."}
{"_id": "d93783c488df0b72806aa1bf7b377ffabae2116d", "title": "Deep Neural Networks for i-Vector Language Identification of Short Utterances in Cars", "text": "This paper is focused on the application of the Language Identification (LID) technology for intelligent vehicles. We cope with short sentences or words spoken in moving cars in four languages: English, Spanish, German, and Finnish. As the response time of the LID system is crucial for user acceptance in this particular task, speech signals of different durations with total average of 3.8s are analyzed. In this paper, the authors propose the use of Deep Neural Networks (DNN) to model effectively the i-vector space of languages. Both raw i-vectors and session variability compensated i-vectors are evaluated as input vectors to DNNs. The performance of the proposed DNN architecture is compared with both conventional GMM-UBM and i-vector/LDA systems considering the effect of durations of signals. It is shown that the signals with durations between 2 and 3s meet the requirements of this application, i.e., high accuracy and fast decision, in which the proposed DNN architecture outperforms GMM-UBM and i-vector/LDA systems by 37% and 28%, respectively."}
{"_id": "dc92b4c684466b1c051993a2178c4b09d7ff06f0", "title": "Cloud RAN challenges and solutions", "text": "In this paper we take an overall look at key technical challenges in the evolution of the Radio Access Network (RAN) architecture towards Cloud RAN, and solutions to overcome them. To address fronthaul limitations, we examine the implications and tradeoffs enabled by functional splits on fronthaul needs, system performance, and centralization scale. We examine the architecture of algorithms for multi-cell coordination and implications in a Cloud RAN environment. To maximize the use of General-Purpose Processors (GPP) and operating systems such as Linux for Cloud RAN, we propose methods of achieving real-time performance suitable for RAN functions. To enable right-sizing the amount of compute used for various RAN functions based on the workload, we propose methods of pooling and elastic scaling for RAN functions that exploit the fact that certain RAN functions perform per-user operations while others perform per-cell operations. Cloud RAN also aims to use cloud management technologies such as virtualized infrastructure management (VIM) and orchestration for automating the instantiation and scaling of RAN functions. We identify special needs for RAN arising from real-time constraints and a mix of GPP and non-GPP hardware. Keywords\u2014Cloud RAN, fronthaul, real-time, pooling, elastic scaling, multi-cell coordination, orchestration, NFV, 5G."}
{"_id": "48b502c0baf2a7f1bf66deb595ec5ffa5c2f447f", "title": "Cyberbullying classification using text mining", "text": "Cyberbully is a misuse of technology advantage to bully a person. Cyberbully and its impact have occurred around the world and now the number of cases are increasing. Cyberbullying detection is very important because the online information is too large so it is not possible to be tracked by humans. The purpose of this research is to construct a classification model with optimal accuracy in identifying cyberbully conversation using Naive Bayes method and Support Vector Machine (SVM) then applying n-gram 1 to 5 for the number of class 2, 4, and 11 for each method. Naive Bayes yields an average accuracy of 92.81%, SVM with a poly kernel yields an average accuracy of 97.11% It can be concluded that SVM with poly kernel yields higher accuracy than SVM with other kernels, Naive Bayes, and Kelly Reynolds research method of decision tree (J48) and k-NN."}
{"_id": "20d9ab4990165be05e85873d443226529e13acc5", "title": "Compliance by design for artifact-centric business processes", "text": "Compliance to legal regulations, internal policies, or best practices is becoming a more and more important aspect in business processes management. Compliance requirements are usually formulated in a set of rules that can be checked during or after the execution of the business process, called compliance by detection. If noncompliant behavior is detected, the business process needs to be redesigned. Alternatively, the rules can be already taken into account while modeling the business process to result in a business process that is compliant by design. This technique has the advantage that a subsequent verification of compliance is not required. This paper focuses on compliance by design and employs an artifactcentric approach. In this school of thought, business processes are not described as a sequence of tasks to be performed (i. e., imperatively), but from the point of view of the artifacts that are manipulated during the process (i. e., declaratively). We extend the artifact-centric approach to model compliance rules and show how compliant business processes can be synthesized automatically."}
{"_id": "1cbea85150da333128d54603ed8567ac4df1d2c1", "title": "Low Cost Dual Polarized Base Station Element for Long Term Evolution", "text": "A broadband low profile dual polarized patch antenna for fourth generation long term evolution (4G LTE) is presented in this communication. By employing two wideband feeding mechanisms for the two input ports, a dual polarized patch antenna element with the characteristics of high input port isolation and wide impedance bandwidth is successfully designed. A wide meandering probe (M-probe) and a pair of twin-L-probes are proposed to feed a patch for dual polarization. The proposed design with a wideband balun has impedance bandwidths of over 47% at the two input ports as well as over 37 dB isolation within the entire operating band. Over 8 dBi gain can be achieved at the two ports."}
{"_id": "9006e99bd4a7c0360f432891f91e1e84daff3640", "title": "Mitigating DoS attacks against signature-based authentication in VANETs", "text": "Vehicular ad hoc networks (VANETs) are supposed to improve traffic safety and drivers' experiences. In a typical VANET, a vehicle broadcasts safety messages to its neighbors. Since behaviors based on safety messages could be life-critical, authentication of these messages must be guaranteed. Many signature-based schemes have been proposed for authentication in VANETs, but few of them have addressed the problem of denial of service (DoS) attacks against signature-based authentication. In such a DoS attack, attackers can broadcast forged messages with invalid signatures to force the receiving vehicles to perform lots of unnecessary signature verifications, and thus the benign vehicles cannot verify the messages from other legitimate vehicles. Our scheme features a pre-authentication process before signature verifying process to deal with this kind of DoS attack. The pre-authentication process takes advantage of the one-way hash chain and a group rekeying scheme. Evaluations show that the proposed scheme mitigates such DoS attacks effectively."}
{"_id": "ee3ef825d93b36e4e7a9f1a802e60503c2870e24", "title": "Pareto Optimal Security Resource Allocation for Internet of Things", "text": "In many Internet of Thing (IoT) application domains security is a critical requirement, because malicious parties can undermine the effectiveness of IoT-based systems by compromising single components and/or communication channels. Thus, a security infrastructure is needed to ensure the proper functioning of such systems even under attack. However, it is also critical that security be at a reasonable resource and energy cost. In this article, we focus on the problem of efficiently and effectively securing IoT networks by carefully allocating security resources in the network area. In particular, given a set of security resources R and a set of attacks to be faced A, our method chooses the subset of R that best addresses the attacks in A, and the set of locations where to place them, that ensure the security coverage of all IoT devices at minimum cost and energy consumption. We model our problem according to game theory and provide a Pareto-optimal solution in which the cost of the security infrastructure, its energy consumption, and the probability of a successful attack are minimized. Our experimental evaluation shows that our technique improves the system robustness in terms of packet delivery rate for different network topologies. Furthermore, we also provide a method for handling the computation of the resource allocation plan for large-scale networks scenarios, where the optimization problem may require an unreasonable amount of time to be solved. We show how our proposed method drastically reduces the computing time, while providing a reasonable approximation of the optimal solution."}
{"_id": "57c00b7d0a8706dd6fe0e88adb3969214a7f980c", "title": "Deep Metric and Hash-Code Learning for Content-Based Retrieval of Remote Sensing Images", "text": "The growing volume of Remote Sensing (RS) image archives demands for feature learning techniques and hashing functions which can: (1) accurately represent the semantics in the RS images; and (2) have quasi real-time performance during retrieval. This paper aims to address both challenges at the same time, by learning a semantic-based metric space for content based RS image retrieval while simultaneously producing binary hash codes for an efficient archive search. This double goal is achieved by training a deep network using a combination of different loss functions which, on the one hand, aim at clustering semantically similar samples (i.e., images), and, on the other hand, encourage the network to produce final activation values (i.e., descriptors) that can be easily binarized. Moreover, since RS annotated training images are too few to train a deep network from scratch, we propose to split the image representation problem in two different phases. In the first we use a general-purpose, pre-trained network to produce an intermediate representation, and in the second we train our hashing network using a relatively small set of training images. Experiments on two aerial benchmark archives show that the proposed method outperforms previous state-of-the-art hashing approaches by up to 5.4% using the same number of hash bits per image."}
{"_id": "1081d6c6ad54f364b8c778e0f06570d7335b1bfe", "title": "Control Knowledge to Improve Plan Quality", "text": "Generating production-quality plans is an essential element in transforming planners from research tools into real-world applications. However most of the work to date on learning planning control knowledge has been aimed at improving the efficiency of planning; this work has been termed \u201cspeed-up learning\u201d. This paper focuses on learning control knowledge to guide a planner towards better solutions, i.e. to improve the quality of the plans produced by the planner, as its problem solving experience increases. We motivate the use of quality-enhancing search control knowledge and its automated acquisition from problem solving experience. We introduce an implemented mechanism for learning such control knowledge and some of our preliminary results in a process planning domain."}
{"_id": "63462af928140606acbe658e90b2846e6de60ece", "title": "Multi-Scale Convolutional Neural Networks for Time Series Classification", "text": "Time series classification (TSC), the problem of predicting class labels of time series, has been around for decades within the community of data mining and machine learning, and found many important applications such as biomedical engineering and clinical prediction. However, it still remains challenging and falls short of classification accuracy and efficiency. Traditional approaches typically involve extracting discriminative features from the original time series using dynamic time warping (DTW) or shapelet transformation, based on which an off-the-shelf classifier can be applied. These methods are ad-hoc and separate the feature extraction part with the classification part, which limits their accuracy performance. Plus, most existing methods fail to take into account the fact that time series often have features at different time scales. To address these problems, we propose a novel end-to-end neural network model, Multi-scale Convolutional Neural Network (MCNN), which incorporates feature extraction and classification in a single framework. Leveraging a novel multi-branch layer and learnable convolutional layers, MCNN automatically extracts features at different scales and frequencies, leading to superior feature representation. MCNN is also computationally efficient, as it naturally leverages GPU computing. We conduct comprehensive empirical evaluation with various existing methods on a large number of benchmark datasets, and show that MCNN advances the state-of-the-art by achieving superior accuracy performance than other leading methods."}
{"_id": "6dda57029bf903461e621d1c6f396e0397be2af2", "title": "Mobile commerce adoption in China and the United States: a cross-cultural study", "text": "Mobile communication technologies have penetrated consumer markets throughout the world. Mobile commerce is likely to make a strong influence on business activities, consumer behavior, as well as national and global markets. Thus the identification of factors that influence mobile commerce adoption has significant value. In a global context, this study identified nine factors affecting mobile commerce adoption based on published research in MIS. These factors were investigated in China and the United States, and a comparative examination was conducted. A survey was conducted on 190 individual mobile commerce users in China and USA. Results show that there are several significant differences among the antecedents and their impacts on consumer intention to use mobile commerce in the two cultural settings. The study provides a number of practical insights and informs vendors seeking to enter the Chinese and the US marketplace with specific information on user perspectives."}
{"_id": "e3199cc853990272adbcb3b442c6ccf220978ef5", "title": "The effects of perceived risk and technology type on users' acceptance of technologies", "text": "Previous studies on technology adoption disagree regarding the relative magnitude of the effects of perceived usefulness and perceived ease of use. However these studies did not consider moderating variables. We investigated four potential moderating variables \u2013 perceived risk, technology type, user experience, and gender \u2013 in users\u2019 technology adoption. Their moderating effects were tested in an empirical study of 161 subjects. Results showed that perceived risk, technology type, and gender were significant moderating variables. However the effects of user experience were marginal after the variance of errors was removed. # 2007 Published by Elsevier B.V. www.elsevier.com/locate/im Available online at www.sciencedirect.com Information & Management 45 (2008) 1\u20139"}
{"_id": "f940916d784182590ca92c6f9cf664da5e71ee88", "title": "Understanding continued information technology usage behavior: A comparison of three models in the context of mobile internet", "text": "This study examines the utility of three prospective models for understanding the continued IT usage behavior. The three models include: Expectation-Confirmation Model in IT Domain (ECM-IT), Technology Acceptance Model (TAM), and a hybrid model integrating TAM and ECM-IT (extended ECM-IT). Based on a survey of 1826 mobile Internet users, the LISREL analysis shows that all three models meet the various goodness-of-fit criteria. When compared using special indices for differentiating among alternative good models, TAM has the best fit to the data followed by ECM-IT, and the extended ECM-IT. In terms of variance explained for intention to continue IT usage, the extended ECM-IT has the highest R (67%) followed by TAM (63%), and ECM-IT (50%). We conclude that TAM is the most parsimonious and generic model that can be used to study both initial and continued IT adoption; the extended ECM-IT explains continued IT usage behavior as well as TAM; and both the ECM-IT and extended ECM-IT models provide additional information to increase our understanding of continued IT usage. \u00a9 2006 Elsevier B.V. All rights reserved."}
{"_id": "1bd27a9c5434699f6cab538f0bbb414246f09b0e", "title": "Trust and TAM in Online Shopping: An Integrated Model", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at . http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "07e96c10c886ce2c5838d32439c6c69e052c93b1", "title": "A Practical Approach to Classify Evolving Data Streams: Training with Limited Amount of Labeled Data", "text": "Recent approaches in classifying evolving data streams are based on supervised learning algorithms, which can be trained with labeled data only. Manual labeling of data is both costly and time consuming. Therefore, in a real streaming environment, where huge volumes of data appear at a high speed, labeled data may be very scarce. Thus, only a limited amount of training data may be available for building the classification models, leading to poorly trained classifiers. We apply a novel technique to overcome this problem by building a classification model from a training set having both unlabeled and a small amount of labeled instances. This model is built as micro-clusters using semi-supervised clustering technique and classification is performed with kappa-nearest neighbor algorithm. An ensemble of these models is used to classify the unlabeled data. Empirical evaluation on both synthetic data and real botnet traffic reveals that our approach, using only a small amount of labeled data for training, outperforms state-of-the-art stream classification algorithms that use twenty times more labeled data than our approach."}
{"_id": "720e06688e1038026070253891037652f5d0d9f5", "title": "Chess Q & A : Question Answering on Chess Games", "text": "We introduce a new dataset1 for the evaluation of models addressing reasoning tasks. For a position in a chess game, we provide question and answer pairs, an image of the board, and the sequence of moves up to that position. We hope this synthetic task will improve our understanding in memory based Deep Learning with posed challenges."}
{"_id": "1276279eea30e61248990d2c3f38e305065e69ff", "title": "NEUROSCIENTIFIC INVE STIGATIONS OF MUSICAL RHYTHM", "text": "Music occurs in every human society, unfolds over time, and enables synchronized movements. The neural mechanisms underlying the perception, cognition, and production of musical rhythm have been investigated using a variety of methods. FMRI studies in particular have shown that the motor system is crucially involved in rhythm and beat perception. Studies using other methods demonstrate that oscillatory neural activity entrains to regularities in musical rhythm, and that motor system excitability is modulated by listening to musical rhythm. This review paper describes some of the recent neuroscientific findings regarding musical rhythm, and especially the perception of a regular beat. INTRODUCTION The temporal structure of music enables synchronized movement, such as tapping one\u2019s foot, clapping, or dancing to the \u2018beat\u2019 of musical rhythms. Such movement is precisely timed to align with the periodic, salient beats in the music, and with the movements of other individuals. Given this relationship between musical rhythm and movement, it is perhaps unsurprising that the brain\u2019s motor system is heavily involved in the neural processing of auditory rhythms. However, it is a relatively recent discovery that the motor system is involved even in the absence of movement \u2013 subtle differences in the temporal structure or context of an auditory rhythm can elicit robust differences in motor system activity. These discoveries are the topic of this review paper, with a focus on findings from functional magnetic resonance imaging (fMRI). FMRI measures the change in oxygenated blood levels following neural activity [see 1, 2]. This \u2018blood-oxygen-level dependent\u2019 (or BOLD) signal is considered to be an indirect measure of brain activity, and therefore increases in BOLD are termed \u2018activations\u2019 in this review. Findings from patient studies, as well as electroencephalography (EEG), magnetoencephalography (MEG), and transcranial magnetic stimulation (TMS) studies will also be discussed. Although much theoretic and empirical work has sought to explain why certain temporal patterns elicit movement (e.g., dancing) while others do not [3-5], and the evolutionary basis for human sensitivity to musical rhythm [6-7], this review will focus on the neural substrates of rhythm perception and the role of individual differences, expertise, and sensory modality. RHYTHM AND BEAT IN THE BRAIN When human participants listen to rhythms (i.e., auditory sequences) with or without a beat, widespread activity is observed in the cortical motor system, especially in the supplementary motor area (SMA) and premotor cortex (PMC), as well as subcortical regions such as the basal ganglia and cerebellum [8-14]. Rhythms that are composed of intervals that are integer ratios of one another, and have accents occurring at regular intervals, tend to elicit the perception of a regular, emphasized beat, and beats are usually organized in a metre (a temporal structure determined by the cyclical pattern of strong and weak beats; see Figure 1). Compared to rhythms without a beat, listening to beat-based rhythms elicits more activity in the SMA and the basal ganglia [10]. The importance of the basal ganglia in beat perception was highlighted in a study demonstrating that patients with Parkinson\u2019s disease have impaired"}
{"_id": "e0971aa9837281df0427023a5b6c5de2838025e0", "title": "Image Retrieval by Examples", "text": "A currently relevant research field in information sciences is the management of nontraditional distributed multimedia databases. Two related key issues are to achieve an efficient content-basedquery by exampleretrieval and a fast response time. This paper presents the architecture of a distributed image retrieval system which provides novel solutions to these key issues. In particular, a way to quantify the effectiveness of low level visual descriptors in database query tasks is presented. The results are also used to improve the system response time, which is an important issue when querying very large databases. A new mechanism to adapt system query strategies to user behavior is also introduced in order to improve the effectiveness of relevance feedback and overall system response time. Finally, the issue of browsing multiple distributed databases is considered and a solution is proposed using multidimensional scaling techniques."}
{"_id": "9e7756cb6547adc2a0346041b07dce2dbde77c1f", "title": "Caring for a child with autism spectrum disorder and parents' quality of life: application of the CarerQol.", "text": "This study describes the impact of caregiving on parents of children with autism spectrum disorders (ASDs). Secondly, we investigate construct validation of the care-related quality of life instrument (CarerQol) measuring impact of caregiving. Primary caregivers of children with ASDs were included. Many parents experienced considerable problems combining daily activities with care, had financial problems or suffered from depressive mood. Validity tests showed that a higher impact of caring on the CarerQol was positively associated with higher subjective burden and lower family quality of life. Most of the associations between CarerQol scores and background characteristics confirmed previous research. The CarerQol validly measures the impact of caregiving for children with ASDs on caregivers in our sample. The CarerQol may therefore be useful for including parent outcomes in research on ASDs."}
{"_id": "eed80bb1e6a0da9fd112cd89f236cb9d0421bb3b", "title": "Image recognition of plant diseases based on principal component analysis and neural networks", "text": "Plant disease identification based on image processing could quickly and accurately provide useful information for the prediction and control of plant diseases. In this study, 21 color features, 4 shape features and 25 texture features were extracted from the images of two kinds wheat diseases (wheat stripe rust and wheat leaf rust) and two kinds of grape diseases (grape downy mildew and grape powdery mildew), principal component analysis (PCA) was performed for reducing dimensions in feature data processing, and then neural networks including backpropagation (BP) networks, radial basis function (RBF) neural networks, generalized regression networks (GRNNs) and probabilistic neural networks (PNNs) were used as the classifiers to identify wheat diseases and grape diseases, respectively. The results showed that these neural networks could be used for image recognition of these diseases based on reducing dimensions using PCA and acceptable fitting accuracies and prediction accuracies could be obtained. For the two kinds of wheat diseases, the optimal recognition result was obtained when image recognition was conducted based on PCA and BP networks, and the fitting accuracy and the prediction accuracy were both 100%. For the two kinds of grape diseases, the optimal recognition results were obtained when GRNNs and PNNs were used as the classifiers after reducing the dimensions of feature data with PCA, and the prediction accuracies were 94.29% with the fitting accuracies equal to 100%."}
{"_id": "569bdc4061467cd0827cfaedacab4d6b770c9492", "title": "Damage in the anca-associated vasculitides: long-term data from the European vasculitis study group (EUVAS) therapeutic trials.", "text": "OBJECTIVES\nTo describe short-term (up to 12\u2005months) and long-term (up to 7\u2005years) damage in patients with newly diagnosed antineutrophil-cytoplasm antibody-associated vasculitis (AAV).\n\n\nMETHODS\nData were combined from six European Vasculitis Study group trials (n=735). Long-term follow-up (LTFU) data available for patients from four trials (n=535). Damage accrued was quantified by the Vasculitis Damage Index (VDI). Sixteen damage items were defined a priori as being potentially treatment-related.\n\n\nRESULTS\nVDI data were available for 629 of 735 patients (85.6%) at baseline, at which time 217/629 (34.5%) had \u22651 item of damage and 32 (5.1%) \u22655 items, reflecting disease manifestations prior to diagnosis and trial enrolment. LTFU data were available for 467/535 (87.3%) at a mean of 7.3\u2005years postdiagnosis. 302/535 patients (56.4%) had VDI data at LTFU, with 104/302 (34.4%) having \u22655 items and only 24 (7.9%) no items of damage. At 6\u2005months and LTFU, the most frequent items were proteinuria, impaired glomerular filtration rate, hypertension, nasal crusting, hearing loss and peripheral neuropathy. The frequency of damage, including potentially treatment-related damage, rose over time (p<0.01). At LTFU, the most commonly reported items of treatment-related damage were hypertension (41.5%; 95% CI 35.6 to 47.4%), osteoporosis (14.1%; 9.9 to 18.2%), malignancy (12.6%; 8.6 to 16.6%), and diabetes (10.4%; 6.7 to 14.0%).\n\n\nCONCLUSIONS\nIn AAV, renal, otolaryngological and treatment-related (cardiovascular, disease, diabetes, osteoporosis and malignancy) damage increases over time, with around one-third of patients having \u22655 items of damage at a mean of 7\u2005years postdiagnosis."}
{"_id": "6da3a9881b6d2b752be35a1c08ebf42414b64dbb", "title": "A Software Defined Networking architecture for the Internet-of-Things", "text": "The growing interest in the Internet of Things (IoT) has resulted in a number of wide-area deployments of IoT subnetworks, where multiple heterogeneous wireless communication solutions coexist: from multiple access technologies such as cellular, WiFi, ZigBee, and Bluetooth, to multi-hop ad-hoc and MANET routing protocols, they all must be effectively integrated to create a seamless communication platform. Managing these open, geographically distributed, and heterogeneous networking infrastructures, especially in dynamic environments, is a key technical challenge. In order to take full advantage of the many opportunities they provide, techniques to concurrently provision the different classes of IoT traffic across a common set of sensors and networking resources must be designed. In this paper, we will design a software-defined approach for the IoT environment to dynamically achieve differentiated quality levels to different IoT tasks in very heterogeneous wireless networking scenarios. For this, we extend the Multinetwork INformation Architecture (MINA), a reflective (self-observing and adapting via an embodied Observe-Analyze-Adapt loop) middleware with a layered IoT SDN controller. The developed IoT SDN controller originally i) incorporates and supports commands to differentiate flow scheduling over task-level, multi-hop, and heterogeneous ad-hoc paths and ii) exploits Network Calculus and Genetic Algorithms to optimize the usage of currently available IoT network opportunities. We have applied the extended MINA SDN prototype in the challenging IoT scenario of wide-scale integration of electric vehicles, electric charging sites, smart grid infrastructures, and a wide set of pilot users, as targeted by the Artemis Internet of Energy and Arrowhead projects. Preliminary simulation performance results indicate that our approach and the extended MINA system can support efficient exploitation of the IoT multinetwork capabilities."}
{"_id": "2465cfb6d30b3e3fcbc44d6b7fd8eb982736a168", "title": "Multi-Task Label Embedding for Text Classification", "text": "Multi-task learning in text classification leverages implicit correlations among related tasks to extract common features and yield performance gains. However, most previous works treat labels of each task as independent and meaningless onehot vectors, which cause a loss of potential information and makes it difficult for these models to jointly learn three or more tasks. In this paper, we propose Multi-Task Label Embedding to convert labels in text classification into semantic vectors, thereby turning the original tasks into vector matching tasks. We implement unsupervised, supervised and semisupervised models of Multi-Task Label Embedding, all utilizing semantic correlations among tasks and making it particularly convenient to scale and transfer as more tasks are involved. Extensive experiments on five benchmark datasets for text classification show that our models can effectively improve performances of related tasks with semantic representations of labels and additional information from each other."}
{"_id": "b0ab3531023a8ec659169196903ee94b38dc78c6", "title": "WebRTC technology overview and signaling solution design and implementation", "text": "This paper describes the WebRTC technology and implementation of WebRTC client, server and signaling. Main parts of the WebRTC API are described and explained. Signaling methods and protocols are not specified by the WebRTC standards, therefore in this study we design and implement a novel signaling mechanism. The corresponding message sequence chart of the WebRTC communication behavior describes a communication flow between peers and the server. The server application is implemented as a WebSocket server. The client application demonstrates the use of the WebRTC API for achieving real-time communication. Benefits and future development of the WebRTC technology are mentioned."}
{"_id": "3d561f3931d513ff45dc2bf594326216c21addf5", "title": "Leadership for safety: industrial experience.", "text": "The importance of leadership for effective safety management has been the focus of research attention in industry for a number of years, especially in energy and manufacturing sectors. In contrast, very little research into leadership and safety has been carried out in medical settings. A selective review of the industrial safety literature for leadership research with possible application in health care was undertaken. Emerging findings show the importance of participative, transformational styles for safety performance at all levels of management. Transactional styles with attention to monitoring and reinforcement of workers' safety behaviours have been shown to be effective at the supervisory level. Middle managers need to be involved in safety and foster open communication, while ensuring compliance with safety systems. They should allow supervisors a degree of autonomy for safety initiatives. Senior managers have a prime influence on the organisation's safety culture. They need to continuously demonstrate a visible commitment to safety, best indicated by the time they devote to safety matters."}
{"_id": "75314198378054caa200ae3d7ff76228dce6ece9", "title": "How to handle ARP in a software-defined network", "text": "The Address Resolution Protocol (ARP) enables communication between IP-speaking nodes in a local network by reconstructing the hardware (MAC) address associated with the IP address of an interface. This is not needed in a Software-Defined Network (SDN), because each device can forward packets without the need to learn this association. We tackle the interoperability problem arising between standard network devices (end systems, routers), that rely on ARP, and SDN datapaths, that do not handle ARP packets natively. In particular, we propose a general approach to handle ARP in a SDN, that is applicable in several network scenarios, is transparent for existing devices, and can coexist with any packet forwarding logic implemented in the controller. Our approach reduces ARP traffic by confining it to the edge of SDNs and requires a minimal set of flow entries in the datapaths. We argument about its applicability and confirm it with experiments performed on SDN datapaths from a range of different vendors."}
{"_id": "1af286a37494250d70d7a8bd8cc1b229a572fdcb", "title": "RASCAL: A Domain Specific Language for Source Code Analysis and Manipulation", "text": "Many automated software engineering tools require tight integration of techniques for source code analysis and manipulation. State-of-the-art tools exist for both, but the domains have remained notoriously separate because different computational paradigms fit each domain best. This impedance mismatch hampers the development of new solutions because the desired functionality and scalability can only be achieved by repeated and ad hoc integration of different techniques. Rascal is a domain-specific language that takes away most of this boilerplate by integrating source code analysis and manipulation at the conceptual, syntactic, semantic and technical level. We give an overview of the language and assess its merits by implementing a complex refactoring."}
{"_id": "7615c9e0df48b1353ac67d483e349abb60f3635a", "title": "Robust Audio Hashing for Content Identification", "text": "Nowadays most audio content identification systems are based on watermarking technology. In this paper we present a different technology, referred to as robust audio hashing. By extracting robust features and translating them into a bit string, we get an object called a robust hash. Content can then be identified by comparing hash values of a received audio clip with the hash values of previously stored original audio clips. A distinguishing feature of the proposed hash scheme is its ability to extract a bit string for every so many milliseconds. More precisely, for every windowed time interval a hash value of 32 bits is computed by thresholding energy differences of several frequency bands. A sequence of 256 hash values, corresponding to approximately 3 seconds of audio, can uniquely identify a song. Experimental results show that the proposed scheme is robust against severe compression, but bit errors do occur. This implies that searching and matching is a non-trivial task for large databases. For instance, a brute force search approach is already prohibitive for databases containing hash values of more than 100 songs. Therefore we propose a scheme that exploits a structured search algorithm that allows searching databases containing over 100,000 songs."}
{"_id": "273e25ebddf01eb2b6f8bf3cb91a25eeae425f5e", "title": "License plate detection and character recognition system for commercial vehicles based on morphological approach and template matching", "text": "Though license plate detection and recognition system has a lot of applications but a very little work has been done on Bangla license plate recognition. Variations among the license plate patterns and complex background of Bangladeshi license plates make it more difficult to use the existing algorithms. Because of this, we propose a solution for Bangla license plate detection and recognition. Firstly, the position of the vehicle is determined. In our country, commercial License Plates have the unique color (green) of its own. That's why; we select the portions of green color with the matching RGB intensity of the plate. A boundary based contour algorithm, area and aspect ratio have been proposed to track down the license plate in the vehicle region. License plate rows containing registration information have been separated by using horizontal projection with threshold. The characters and the digits of the rows have been segmented using vertical projection with threshold value. Finally, template matching has been used for recognizing the characters and the digits of the Bangla license plate. We tested our algorithms for over 180 still images captured from the road. We achieve93% success in license plate detection, 98.1% success in segmentation and 88.8% success rate in Bangla license plate recognition."}
{"_id": "ced7811f2b694e54e3d96ec5398e4b6afca67fc0", "title": "Illumination compensation and normalization for robust face recognition using discrete cosine transform in logarithm domain", "text": "This paper presents a novel illumination normalization approach for face recognition under varying lighting conditions. In the proposed approach, a discrete cosine transform (DCT) is employed to compensate for illumination variations in the logarithm domain. Since illumination variations mainly lie in the low-frequency band, an appropriate number of DCT coefficients are truncated to minimize variations under different lighting conditions. Experimental results on the Yale B database and CMU PIE database show that the proposed approach improves the performance significantly for the face images with large illumination variations. Moreover, the advantage of our approach is that it does not require any modeling steps and can be easily implemented in a real-time face recognition system."}
{"_id": "62ab98f05621ed6bc174337d1cbf7065350776f3", "title": "ServerSwitch: A Programmable and High Performance Platform for Data Center Networks", "text": "As one of the fundamental infrastructures for cloud computing, data center networks (DCN) have recently been studied extensively. We currently use pure software-based systems, FPGA based platforms, e.g., NetFPGA, or OpenFlow switches, to implement and evaluate various DCN designs including topology design, control plane and routing, and congestion control. However, software-based approaches suffer from high CPU overhead and processing latency; FPGA based platforms are difficult to program and incur high cost; and OpenFlow focuses on control plane functions at present. In this paper, we design a ServerSwitch to address the above problems. ServerSwitch is motivated by the observation that commodity Ethernet switching chips are becoming programmable and that the PCI-E interface provides high throughput and low latency between the server CPU and I/O subsystem. ServerSwitch uses a commodity switching chip for various customized packet forwarding, and leverages the server CPU for control and data plane packet processing, due to the low latency and high throughput between the switching chip and server CPU. We have built our ServerSwitch at low cost. Our experiments demonstrate that ServerSwitch is fully programmable and achieves high performance. Specifically, we have implemented various forwarding schemes including source routing in hardware. Our in-network caching experiment showed high throughput and flexible data processing. Our QCN (Quantized Congestion Notification) implementation further demonstrated that ServerSwitch can react to network congestions in 23us. \u2217Correspondent author. \u2020This work was performed when Zhiqiang Zhou was a visiting student at Microsoft Research Asia."}
{"_id": "3981e11ee2db0c7257f7b325d9041c1181d954c0", "title": "Generalized velocity obstacles", "text": "We address the problem of real-time navigation in dynamic environments for car-like robots. We present an approach to identify controls that will lead to a collision with a moving obstacle at some point in the future. Our approach generalizes the concept of velocity obstacles, which have been used for navigation among dynamic obstacles, and takes into account the constraints of a car-like robot. We use this formulation to find controls that will allow collision free navigation in dynamic environments. Finally, we demonstrate the performance of our algorithm on a simulated car-like robot among moving obstacles."}
{"_id": "6adf22736658f1e86c1447c9c0656f626706af7b", "title": "Plant pathogens and integrated defence responses to infection", "text": "Plants cannot move to escape environmental challenges. Biotic stresses result from a battery of potential pathogens: fungi, bacteria, nematodes and insects intercept the photosynthate produced by plants, and viruses use replication machinery at the host's expense. Plants, in turn, have evolved sophisticated mechanisms to perceive such attacks, and to translate that perception into an adaptive response. Here, we review the current knowledge of recognition-dependent disease resistance in plants. We include a few crucial concepts to compare and contrast plant innate immunity with that more commonly associated with animals. There are appreciable differences, but also surprising parallels."}
{"_id": "dda724832e4b6aeb9a6078afa935d2048a24cf45", "title": "Trait conscientiousness and the personality meta-trait stability are associated with regional white matter microstructure", "text": "Establishing the neural bases of individual differences in personality has been an enduring topic of interest. However, while a growing literature has sought to characterize grey matter correlates of personality traits, little attention to date has been focused on regional white matter correlates of personality, especially for the personality traits agreeableness, conscientiousness and openness. To rectify this gap in knowledge we used a large sample (n\u2009>\u2009550) of older adults who provided data on both personality (International Personality Item Pool) and white matter tract-specific fractional anisotropy (FA) from diffusion tensor MRI. Results indicated that conscientiousness was associated with greater FA in the left uncinate fasciculus (\u03b2\u2009=\u20090.17, P < 0.001). We also examined links between FA and the personality meta-trait 'stability', which is defined as the common variance underlying agreeableness, conscientiousness, and neuroticism/emotional stability. We observed an association between left uncinate fasciculus FA and stability (\u03b2\u2009=\u20090.27, P\u2009< 0.001), which fully accounted for the link between left uncinate fasciculus FA and conscientiousness. In sum, these results provide novel evidence for links between regional white matter microstructure and key traits of human personality, specifically conscientiousness and the meta-trait, stability. Future research is recommended to replicate and address the causal directions of these associations."}
{"_id": "23e107c6dc8d1500ce830e69847c0319ab4f8385", "title": "How Regulatory Fit Affects Value in Consumer Choices and Opinions", "text": "Vol. XLIII (February 2006), 1\u201310 1 \u00a9 2006, American Marketing Association ISSN: 0022-2437 (print), 1547-7193 (electronic) *Tamar Avnet is Assistant Professor of Business, Rotman School of Management, University of Toronto (e-mail: tavnet@rotman.utoronto.ca). E. Tory Higgins is Professor of Psychology, Department of Psychology, and Professor of Management, Columbia School of Business, Columbia University (e-mail: tory@psych.columbia.edu). The research reported in this article was supported by a grant (No. 39429) from the National Institute of Mental Health to E. Tory Higgins. TAMAR AVNET and E. TORY HIGGINS*"}
{"_id": "30a587efb58f103ff4e42444e191f50a99513f1b", "title": "Elastic Distributed Bayesian Collaborative Filtering", "text": "In this paper, we consider learning a Bayesian collaborative filtering model on a shared cluster of commodity machines. Two main challenges arise: (1) How can we parallelize and distribute Bayesian collaborative filtering? (2) How can our distributed inference system handle elasticity events common in a shared, resource managed cluster, including resource ramp-up, preemption, and stragglers? To parallelize Bayesian inference, we adapt ideas from both matrix factorization partitioning schemes used with stochastic gradient descent and stale synchronous programming used with parameter servers. To handle elasticity events we offer a generalization of previous partitioning schemes that gives increased flexibility during system disruptions. We additionally describe two new scheduling algorithms to dynamically route work at runtime. In our experiments, we compare the effectiveness of both scheduling algorithms and demonstrate their robustness to system failure."}
{"_id": "32fb3a63ef554e258eaf792ebb7519390b47017e", "title": "Exploring the link between self-compassion and body image in university women.", "text": "The purpose of the present research was to examine the relationships between self-compassion and women's body image. In Study 1, female undergraduates (N=142) completed three measures of body image and measures of self-esteem and self-compassion. Results showed that high self-compassion predicted fewer body concerns independently of self-esteem. Moreover, when both self-compassion and self-esteem were included as predictors, self-compassion accounted for unique variance in body preoccupation and weight concerns whereas self-esteem did not. In Study 2, this finding was partially replicated with one component (self-judgment) of self-compassion uniquely predicting body preoccupation in undergraduate women (N=187). High scores on self-compassion also predicted less eating guilt independent of self-esteem. Additionally, self-compassion was shown to partially mediate the relationship between body preoccupation and depressive symptoms. The findings highlight the possibility that a consideration of self-compassion for body image may contribute to identifying who is most at risk for body/shape concerns."}
{"_id": "3dcabad83131ba6107c40d0f92114bfd19accb89", "title": "Self-compassion versus global self-esteem: two different ways of relating to oneself.", "text": "This research examined self-compassion and self-esteem as they relate to various aspects of psychological functioning. Self-compassion entails treating oneself with kindness, recognizing one's shared humanity, and being mindful when considering negative aspects of oneself. Study 1 (N=2,187) compared self-compassion and global self-esteem as they relate to ego-focused reactivity. It was found that self-compassion predicted more stable feelings of self-worth than self-esteem and was less contingent on particular outcomes. Self-compassion also had a stronger negative association with social comparison, public self-consciousness, self-rumination, anger, and need for cognitive closure. Self-esteem (but not self-compassion) was positively associated with narcissism. Study 2 (N=165) compared global self-esteem and self-compassion with regard to positive mood states. It was found that the two constructs were statistically equivalent predictors of happiness, optimism, and positive affect. Results from these two studies suggest that self-compassion may be a useful alternative to global self-esteem when considering what constitutes a healthy self-stance."}
{"_id": "56a919938ba183b63d9c0238f1723c4d3c4b9772", "title": "Are improvements in shame and self-compassion early in eating disorders treatment associated with better patient outcomes?", "text": "Compassion-focused therapy (CFT; Gilbert, 2005, 2009) is a transdiagnostic treatment approach focused on building self-compassion and reducing shame. It is based on the theory that feelings of shame contribute to the maintenance of psychopathology, whereas self-compassion contributes to the alleviation of shame and psychopathology. We sought to test this theory in a transdiagnostic sample of eating disorder patients by examining whether larger improvements in shame and self-compassion early in treatment would facilitate faster eating disorder symptom remission over 12 weeks. Participants were 97 patients with an eating disorder admitted to specialized day hospital or inpatient treatment. They completed the Eating Disorder Examination-Questionnaire, Experiences of Shame Scale, and Self-Compassion Scale at intake, and again after weeks 3, 6, 9, and 12. Multilevel modeling revealed that patients who experienced greater decreases in their level of shame in the first 4 weeks of treatment had faster decreases in their eating disorder symptoms over 12 weeks of treatment. In addition, patients who had greater increases in their level of self-compassion early in treatment had faster decreases in their feelings of shame over 12 weeks, even when controlling for their early change in eating disorder symptoms. These results suggest that CFT theory may help to explain the maintenance of eating disorders. Clinically, findings suggest that intervening with shame early in treatment, perhaps by building patients' self-compassion, may promote better eating disorders treatment response."}
{"_id": "2ce9e1939a22a593cc74a0afc1be5a1bb2ba1a35", "title": "The Body Appreciation Scale: development and psychometric evaluation.", "text": "Body image has been conceptualized and assessed almost exclusively in terms of its negative dimensions. Therefore, a measure reflecting body appreciation, an aspect of positive body image, was developed and evaluated via four independent samples of college women. Study 1 (N = 181) supported the Body Appreciation Scale's (BAS) unidimensionality and construct validity, as it was related as expected to body esteem, body surveillance, body shame, and psychological well-being. Study 2 (N = 327) cross-validated its unidimensionality. Study 3 (N = 424) further upheld the construct validity of the BAS, as it was: (a) related as expected to appearance evaluation, body preoccupation, body dissatisfaction, and eating disorder symptomatology and (b) unrelated to impression management. Studies 1 and 3 also indicated that the BAS predicted unique variance in psychological well-being above and beyond extant measures of body image. Study 4 (N = 177) demonstrated that its scores were stable over a 3-week period. All studies supported the internal consistency reliability of its scores. The BAS should prove useful for researchers and clinicians interested in positive body image assessment."}
{"_id": "646c8a4778626547c48eb5a98fb5ff1a0cc9fcf8", "title": "On a Feasible-Infeasible Two-Population (FI-2Pop) genetic algorithm for constrained optimization: Distance tracing and no free lunch", "text": "We explore data-driven methods for gaining insight into the dynamics of a two population genetic algorithm (GA), which has been effective in tests on constrained optimization problems. We track and compare one population of feasible solutions and another population of infeasible solutions. Feasible solutions are selected and bred to improve their objective function values. Infeasible solutions are selected and bred to reduce their constraint violations. Interbreeding between populations is completely indirect, that is, only through their offspring that happen to migrate to the other population. We introduce an empirical measure of distance, and apply it between individuals and between population centroids to monitor the progress of evolution. We find that the centroids of the two populations approach each other and stabilize. This is a valuable characterization of convergence. We find the infeasible population influences, and sometimes dominates, the genetic material of the optimum solution. Since the infeasible population is not evaluated by the objective function, it is free to explore boundary regions, where the optimum is likely to be found. Roughly speaking, the No Free Lunch theorems for optimization show that all blackbox algorithms (such as Genetic Algorithms) have the same average performance over the set of all problems. As such, our algorithm would, on average, be no better than random search or any other blackbox search method. However, we provide two general theorems that give conditions that render null the No Free Lunch results for the constrained optimization problem class we study. The approach taken here thereby escapes the No Free Lunch implications, per se."}
{"_id": "762c79a9b7fe7dbd01242e94beebbdee14b40ab4", "title": "Robotics Component Verification on ISS ROKVISS - Preliminary Results for Telepresence", "text": "ROKVISS, Germany's newest space robotics technology experiment, was successfully installed outside at the Russian Service Module of the International Space Station (ISS) during an extravehicular space walk at the end of January 2005. Since February 2005 a two joint manipulator is operated from ground via a direct radio link. The aim of ROKVISS is the in flight verification of highly integrated modular robotic joints as well as the demonstration of different control modes, reaching from high system autonomy to force feedback teleoperation (telepresence mode). The experiment will be operated for at least one year in free space to evaluate and qualify intelligent light weight robotics components under realistic circumstances for maintenance and repair tasks as foreseen in upcoming manned and unmanned space applications in near future. This paper focuses in the telepresence control mode, its technology and first results from the space experiment ROKVISS"}
{"_id": "528bdbe171ca7ed4d0ec722a3fb773610e250788", "title": "No data left behind: real-time insights from a complex data ecosystem", "text": "The typical enterprise data architecture consists of several actively updated data sources (e.g., NoSQL systems, data warehouses), and a central data lake such as HDFS, in which all the data is periodically loaded through ETL processes. To simplify query processing, state-of-the-art data analysis approaches solely operate on top of the local, historical data in the data lake, and ignore the fresh tail end of data that resides in the original remote sources. However, as many business operations depend on real-time analytics, this approach is no longer viable. The alternative is hand-crafting the analysis task to explicitly consider the characteristics of the various data sources and identify optimization opportunities, rendering the overall analysis non-declarative and convoluted.\n Based on our experiences operating in data lake environments, we design System-PV, a real-time analytics system that masks the complexity of dealing with multiple data sources while offering minimal response times. System-PV extends Spark with a sophisticated data virtualization module that supports multiple applications - from SQL queries to machine learning. The module features a location-aware compiler that considers source complexity, and a two-phase optimizer that produces and refines the query plans, not only for SQL queries but for all other types of analysis as well. The experiments show that System-PV is often faster than Spark by more than an order of magnitude. In addition, the experiments show that the approach of accessing both the historical and the remote fresh data is viable, as it performs comparably to solely operating on top of the local, historical data."}
{"_id": "7a5abad8c18a7c16b72854f3a7070de11e1f8ea2", "title": "Combined Phase-Shift and Frequency Modulation of a Dual-Active-Bridge AC\u2013DC Converter With PFC", "text": "This paper presents a combined phase-shift and frequency modulation scheme of a dual-active-bridge (DAB) ac- dc converter with power factor correction (PFC) to achieve zero voltage switching (ZVS) over the full range of the ac mains voltage. The DAB consists of a half bridge with bidirectional switches on the ac side and a full bridge on the dc side of the isolation transformer to accomplish single-stage power conversion. The modulation scheme is described by means of analytical formulas, which are used in an optimization procedure to determine the optimal control variables for minimum switch commutation currents. Furthermore, an ac current controller suitable for the proposed modulation scheme is described. A loss model and measurements on a 3.3-kW electric vehicle battery charger to connect to the 230 Vrms / 50-Hz mains considering a battery voltage range of 280-420 V validate the theoretical analysis."}
{"_id": "05432d3edfbc3f293fa5c4aebb28793b274decb3", "title": "Fundamentals of Artificial Neural Networks", "text": "As book review editor of the IEEE Transactions on Neural Networks, Mohamad Hassoun has had the opportunity to assess the multitude of books on artificial neural networks that have Historically the data pairs and logical, fashion transition while associative memories commonly? This organization of such as a, clear and algorithms search simulated annealing learning algorithm. After the utility of how to, discover form. More than three different levels of the cost function will. Dean pomerleau in reinforcement and they, discovered two chapters on. This view the application for handwriting recognition this approach by arrows originating."}
{"_id": "69a0ae99d915c176d6dc141353eabf1869efdf75", "title": "68 Implementing Dubins Airplane Pathson Fixed-Wing UAVs !", "text": "A well-known path-planning technique for mobile robots or planar aerial vehicles is to use Dubins paths, which are minimum-distance paths between two configurations subject to the constraints of the Dubins car model. An extension of this method to a three-dimensional Dubins airplane model has recently been proposed. This chapter builds on that work showing a complete architecture for implementing Dubins airplane paths on small fixed-wing UAVs. The existing Dubins airplane model is modified to be more consistent with the kinematics of a fixed-wing aircraft. The chapter then shows how a recently proposed vector-field method can be used to design a guidance law that causes the Dubins airplane model to follow straight-line and helical paths. Dubins airplane paths are more complicated than Dubins car paths because of the altitude component. Based on the difference between the altitude of the start and end configurations, Dubins airplane paths can be classified as low, medium, or high altitude gain. While for medium and high altitude gain there are many different Dubins airplane paths, this chapter proposes selecting the path that maximizes the average altitude throughout the maneuver. The proposed architecture is implemented on a six degree-of-freedom Matlab/Simulink simulation of an Aerosonde UAV, and results from this simulation demonstrate the effectiveness of the technique. 68."}
{"_id": "e9d108f3519650a8bd426fe02eea5e47b42eff1b", "title": "The classroom as a dashboard: co-designing wearable cognitive augmentation for K-12 teachers", "text": "When used in classrooms, personalized learning software allows students to work at their own pace, while freeing up the teacher to spend more time working one-on-one with students. Yet such personalized classrooms also pose unique challenges for teachers, who are tasked with monitoring classes working on divergent activities, and prioritizing help-giving in the face of limited time. This paper reports on the co-design, implementation, and evaluation of a wearable classroom orchestration tool for K-12 teachers: mixed-reality smart glasses that augment teachers' realtime perceptions of their students' learning, metacognition, and behavior, while students work with personalized learning software. The main contributions are: (1) the first exploration of the use of smart glasses to support orchestration of personalized classrooms, yielding design findings that may inform future work on real-time orchestration tools; (2) Replay Enactments: a new prototyping method for real-time orchestration tools; and (3) an in-lab evaluation and classroom pilot using a prototype of teacher smart glasses (Lumilo), with early findings suggesting that Lumilo can direct teachers' time to students who may need it most."}
{"_id": "34e89bf1beb0897e851a7c79bb71aa6fc8a7d3fb", "title": "Shortest-path kernels on graphs", "text": "Data mining algorithms are facing the challenge to deal with an increasing number of complex objects. For graph data, a whole toolbox of data mining algorithms becomes available by defining a kernel function on instances of graphs. Graph kernels based on walks, subtrees and cycles in graphs have been proposed so far. As a general problem, these kernels are either computationally expensive or limited in their expressiveness. We try to overcome this problem by defining expressive graph kernels which are based on paths. As the computation of all paths and longest paths in a graph is NP-hard, we propose graph kernels based on shortest paths. These kernels are computable in polynomial time, retain expressivity and are still positive definite. In experiments on classification of graph models of proteins, our shortest-path kernels show significantly higher classification accuracy than walk-based kernels."}
{"_id": "fce2e0b85d91878ab1b981d5284848c1f964f5ef", "title": "On effect size.", "text": "The call for researchers to report and interpret effect sizes and their corresponding confidence intervals has never been stronger. However, there is confusion in the literature on the definition of effect size, and consequently the term is used inconsistently. We propose a definition for effect size, discuss 3 facets of effect size (dimension, measure/index, and value), outline 10 corollaries that follow from our definition, and review ideal qualities of effect sizes. Our definition of effect size is general and subsumes many existing definitions of effect size. We define effect size as a quantitative reflection of the magnitude of some phenomenon that is used for the purpose of addressing a question of interest. Our definition of effect size is purposely more inclusive than the way many have defined and conceptualized effect size, and it is unique with regard to linking effect size to a question of interest. Additionally, we review some important developments in the effect size literature and discuss the importance of accompanying an effect size with an interval estimate that acknowledges the uncertainty with which the population value of the effect size has been estimated. We hope that this article will facilitate discussion and improve the practice of reporting and interpreting effect sizes."}
{"_id": "5b1cf900197eb37164a5317f749cdca59a7605d8", "title": "Models of the self: self-construals and gender.", "text": "The authors first describe individual differences in the structure of the self. In the independent self-construal, representations of others are separate from the self. In the interdependent self-construal, others are considered part of the self (H. Markus & S. Kitayama, 1991). In general, men in the United States are thought to construct and maintain an independent self-construal, whereas women are thought to construct and maintain an interdependent self-construal. The authors review the psychological literature to demonstrate that many gender differences in cognition, motivation, emotion, and social behavior may be explained in terms of men's and women's different self-construals. Recognition of the interdependent self-construal as a possible alternative conception of the self may stimulate new investigations into the ways the self influences a person's thinking, feeling, and behaving."}
{"_id": "c15687a3074dcfa2b0f431c1f418608367ce3a00", "title": "Color reduction based on ant colony", "text": "In this article a method for color reduction based on ant colony algorithm is presented. Generally color reduction involves two steps: choosing a proper palette and mapping colors to this palette. This article is about the first step. Using ant colony algorithm, pixel clusters are formed based on their colors and neighborhood information to make the final palette. A comparison between the results of the proposed method and some other methods is presented. There are some parameters in the proposed method which can be set based on user needs and priorities. This increases the flexibility of the method. 2007 Elsevier B.V. All rights reserved."}
{"_id": "e1e62eebeede7ba223a91bfd9d50cc7564b9a2d0", "title": "Subdermal neo-umbilicoplasty in abdominoplasty", "text": "Umbilicoplasty is an important surgical procedure in abdominoplasty, regardless of the technique used. An unaesthetic umbilicus often irreversibly affects surgical outcomes. This study describes the experience of our team with the subdermal neo-umbilicoplasty technique and assesses patient satisfaction with the appearance of the new umbilicus. Fifty-eight patients with abdominal deformity underwent abdominoplasty with subdermal neo-umbilicoplasty. Patients were followed up for at least 1\u00a0year with photographic documentation, assessment of patient satisfaction, and evaluation of eventual postoperative complications. Postoperative complications included one case of shallow umbilicus, four cases of superficial necrosis, and one case of midline deviation. No patient required surgical revision. There was a high level of patient satisfaction with the natural-looking umbilicus. Subdermal neo-umbilicoplasty resulted in low postoperative complications and provided a new, natural-looking umbilicus without external scars. Level of evidence: Level IV, therapeutic study"}
{"_id": "134f9305612524598efdfd53e8b16ede1f8588c7", "title": "Node, Node-Link, and Node-Link-Group Diagrams: An Evaluation", "text": "Effectively showing the relationships between objects in a dataset is one of the main tasks in information visualization. Typically there is a well-defined notion of distance between pairs of objects, and traditional approaches such as principal component analysis or multi-dimensional scaling are used to place the objects as points in 2D space, so that similar objects are close to each other. In another typical setting, the dataset is visualized as a network graph, where related nodes are connected by links. More recently, datasets are also visualized as maps, where in addition to nodes and links, there is an explicit representation of groups and clusters. We consider these three Techniques, characterized by a progressive increase of the amount of encoded information: node diagrams, node-link diagrams and node-link-group diagrams. We assess these three types of diagrams with a controlled experiment that covers nine different tasks falling broadly in three categories: node-based tasks, network-based tasks and group-based tasks. Our findings indicate that adding links, or links and group representations, does not negatively impact performance (time and accuracy) of node-based tasks. Similarly, adding group representations does not negatively impact the performance of network-based tasks. Node-link-group diagrams outperform the others on group-based tasks. These conclusions contradict results in other studies, in similar but subtly different settings. Taken together, however, such results can have significant implications for the design of standard and domain snecific visualizations tools."}
{"_id": "d2071c1e4a6030dc0005dbfeefdd196a8b293e84", "title": "Okapi at TREC-3", "text": "During the course of TREC{1 the low-level search functions were split o into a separate Basic Search System (BSS) [2], but retrieval and ranking of documents was still done using the \\classical\" probabilistic model of Robertson and Sparck Jones[7] with no account taken of document length or term frequency within document or query. Four runs were submitted to NIST for evaluation: automatic ad hoc, automatic routing, manual ad hoc and manual ad hoc with feedback. The results were undistinguished, although not among the worst. Of the ad hoc runs, the manual was better than the automatic (in which only the CONCEPTS elds of the topics were used), and feedback appeared bene cial."}
{"_id": "a043e145fffe732e6e087e395639bda15f056e1e", "title": "Mathematical Models for Natural Gas Forecasting", "text": "It is vital for natural gas Local Distribution Companies (LDCs) to forecast their customers\u2019 natural gas demand accurately. A significant error on a single very cold day can cost the customers of the LDC millions of dollars. This paper looks at the financial implication of forecasting natural gas, the nature of natural gas forecasting, the factors that impact natural gas consumption, and describes a survey of mathematical techniques and practices used to model natural gas demand. Many of the techniques used in this paper currently are implemented in a software GasDay , which is currently used by 24 LDCs throughout the United States, forecasting about 20% of the total U.S. residential, commercial, and industrial consumption. Results of GasDay\u2019s forecasting performance also is presented."}
{"_id": "c2df80c76550306231978b0e8269ba4af46146a2", "title": "Real Time Vehicle Diagnostics Using Head Mounted Displays", "text": "This thesis evaluates how a head mounted display (HMD) can be used to increase usability compared to existing computer programs that are used during maintenance work on vehicles. Problems identified during a case study in a vehicle workshop are first described. As an attempt to solve some of the identified problems a prototype application using a HMD was developed. The prototype application aids the user during troubleshooting of systems on the vehicle by leading the mechanic with textual information and augmented reality (AR). Assessment of the prototype application was done by comparing it to the existing computer program and measuring error rate and time to completion for a predefined task. Usability was also measured using the System Usability Scale. The assessment showed that HMDs can provide higher usability in terms of efficiency and satisfaction. Furthermore, the thesis describes and discusses other possibilities and limitations that usage of HMDs and AR can lead to that were identified both from theory and during implementation."}
{"_id": "0f153052802e638973eb7fb12d15ab680c04ca14", "title": "Breaking the Curse of Dimensionality Using Decompositions of Incomplete Tensors: Tensor-based scientific computing in big data analysis", "text": "Higher-order tensors and their decompositions are abundantly present in domains such as signal processing (e.g., higher-order statistics [1] and sensor array processing [2]), scientific computing (e.g., discretized multivariate functions [3]?[6]), and quantum information theory (e.g., representation of quantum many-body states [7]). In many applications, the possibly huge tensors can be approximated well by compact multilinear models or decompositions. Tensor decompositions are more versatile tools than the linear models resulting from traditional matrix approaches. Compared to matrices, tensors have at least one extra dimension. The number of elements in a tensor increases exponentially with the number of dimensions, and so do the computational and memory requirements. The exponential dependency (and the problems that are caused by it) is called the curse of dimensionality. The curse limits the order of the tensors that can be handled. Even for a modest order, tensor problems are often large scale. Large tensors can be handled, and the curse can be alleviated or even removed by using a decomposition that represents the tensor instead of using the tensor itself. However, most decomposition algorithms require full tensors, which renders these algorithms infeasible for many data sets. If a tensor can be represented by a decomposition, this hypothesized structure can be exploited by using compressed sensing (CS) methods working on incomplete tensors, i.e., tensors with only a few known elements."}
{"_id": "8ac74b8679cf073e4205ad9463d01db7e6b5838c", "title": "Robust Keyframe-based Dense SLAM with an RGB-D Camera", "text": "In this paper, we present RKD-SLAM, a robust keyframe-based dense SLAM approach for an RGB-D camera that can robustly handle fast motion and dense loop closure, and run without time limitation in a moderate size scene. It not only can be used to scan high-quality 3D models, but also can satisfy the demand of VR and AR applications. First, we combine color and depth information to construct a very fast keyframe-based tracking method on a CPU, which can work robustly in challenging cases (e.g. fast camera motion and complex loops). For reducing accumulation error, we also introduce a very efficient incremental bundle adjustment (BA) algorithm, which can greatly save unnecessary computation and perform local and global BA in a unified optimization framework. An efficient keyframe-based depth representation and fusion method is proposed to generate and timely update the dense 3D surface with online correction according to the refined camera poses of keyframes through BA. The experimental results and comparisons on a variety of challenging datasets and TUM RGB-D benchmark demonstrate the effectiveness of the proposed system."}
{"_id": "5eef22c155ca0093d0e53089401a9c7de077fc16", "title": "Predictive analytics with surveillance big data", "text": "In this paper, we describe a novel analytics system that enables query processing and predictive analytics over streams of aviation data. As part of an Internal Research and Development project, Boeing Research and Technology (BR&T) Advanced Air Traffic Management (AATM) built a system that makes predictions based upon descriptive patterns of archived aviation data. Boeing AATM has been receiving live Aircraft Situation Display to Industry (ASDI) data and archiving it for over two years. At the present time, there is not an easy mechanism to perform analytics on the data. The incoming ASDI data is large, compressed, and requires correlation with other flight data before it can be analyzed.\n The service exposes this data once it has been uncompressed, correlated, and stored in a data warehouse for further analysis using a variety of descriptive, predictive, and possibly prescriptive analytics tools. The service is being built partially in response to requests from Boeing Commercial Aviation (BCA) for analysis of capacity and flow in the US National Airspace System (NAS). The service utilizes a custom tool for correlating the raw ASDI feed, IBM Warehouse with DB2 for data management, WebSphere Message Broker for real-time message brokering, SPSS Modeler for statistical analysis, and Cognos BI for front-end business intelligence (BI) visualization. This paper describes a scalable service architecture, implementation and the value it adds to the aviation domain."}
{"_id": "a935afd52f77d0ce1bd08ff069c469a4fb0e8341", "title": "Class-DE Ultrasound Transducer Driver for HIFU Therapy", "text": "This paper presents a practical implementation of an integrated MRI-compatible CMOS amplifier capable of directly driving a piezoelectric ultrasound transducer suitable for high-intensity focused ultrasound (HIFU) therapy. The amplifier operates in Class DE mode without the need for an output matching network. The integrated amplifier has been implemented with the AMS AG H35 CMOS process. A class DE amplifier design methodology for driving unmatched piezoelectric loads is presented along with simulation and experimental results. The proposed design achieves approximately 90% efficiency with over 800 mW of output power at 1010 kHz. The total die area including pads is 2 mm2. Compatibility with MRI was validated with B1 imaging of a phantom and the amplifier circuit."}
{"_id": "fb23b5d74dab3145cc7ed735ed39b7b0dea99c2e", "title": "WiMo: location-based emotion tagging", "text": "In this paper we introduce WiMo, a location-based social networking tool that enables users to share and store their emotional feelings about places. WiMo creates a mobile social network based on common interests and enables users to share not just information but also opinions, experiences and passions. The application uses a geo-emotional tagging system running on a GPS enabled mobile phone to ascribe emotions to places. In this paper we describe the development of an emotion tagging interface and present a case study scenario to situate the application in a real world setting. We then describe work-in-progress on the development of the WiMo prototype interface, and the interaction process for users. We further outline the features of the system and finally discuss next steps in the development of the application."}
{"_id": "264961bbc677af3e9da7fe294c9fc5d6c2ab82fd", "title": "Addressing DODAG inconsistency attacks in RPL networks", "text": "RPL is a routing protocol for low-power and lossy constrained node networks. A malicious node can manipulate header options used by RPL to track DODAG inconsistencies, thereby causing denial of service attacks, increased control message overhead, and black-holes at the targeted node. RPL counteracts DODAG inconsistencies by using a fixed threshold, upon reaching which all subsequent packets with erroneous header options are ignored. However, the fixed threshold is arbitrary and does not resolve the black-hole issue either. To address this we present a mitigation strategy that allows nodes to dynamically adapt against a DODAG inconsistency attack. We also present the forced black-hole attack problem and a solution that can be used to mitigate it. Results from our experiments show that our proposed approach mitigates these attacks without any significant overhead."}
{"_id": "d563fac9dd650257c0b1b4d794a636b39f9b5d5e", "title": "Episodic and semantic memory in mild cognitive impairment", "text": "Little is known about episodic and semantic memory in the early predementia stage of Alzheimer's disease (AD), which is referred to as mild cognitive impairment (MCI). To explore person knowledge, item recognition and spatial associative memory, we designed the Face Place Test (FPT). A total of 75 subjects participated: 22 patients with early AD, 24 with MCI and 29 matched controls. As predicted, AD patients showed significant deficits in person naming, item recognition and recall of spatial location (placing). Surprisingly, subjects with MCI were also impaired on all components. There was no significant difference between AD and MCI except on the placing component. Analysis of the relationship between semantic (naming) and episodic (recognition and placing) components of the FPT revealed a significant association between the two episodic tasks, but not between episodic and semantic performance. Patients with MCI show deficits of episodic and semantic memory. The extent of impairment suggests dysfunction beyond the medial temporal lobe. The FPT might form the basis of a sensitive early indicator of AD."}
{"_id": "c569925bc8972c2c8a640a57513de6c904981d54", "title": "Cryptanalysis of DES Implemented on Computers with Cache", "text": "This paper presents the results of applying an attack against the Data Encryption Standard (DES) implemented in some applications, using side-channel information based on CPU delay as proposed in [11]. This cryptanalysis technique uses side-channel information on encryption processing to select and collect effective plaintexts for cryptanalysis, and infers the information on the expanded key from the collected plaintexts. On applying this attack, we found that the cipher can be broken with 2 known plaintexts and 2 calculations at a success rate > 90%, using a personal computer with 600-MHz Pentium III. We discuss the feasibility of cache attack on ciphers that need many S-box look-ups, through reviewing the results of our experimental attacks on the block ciphers excluding DES, such as AES."}
{"_id": "68a52ce4829fe5c32d9f7610de0e59a30d4de722", "title": "Differential Power Analysis", "text": "are the property of their respective owners. The information contained in this presentation is provided for illustrative purposes only, and is provided without any guarantee or warranty whatsoever, and does not necessarily represent official opinions of CRI or its partners. Confidential-unauthorized copying, use or redistribution is prohibited."}
{"_id": "dc821ab3a1a3b49661639da37e980bfd21d3746a", "title": "Blind Signatures for Untraceable Payments", "text": ""}
{"_id": "1eb43b055a7d104794715c143474f201e7de0750", "title": "On the importance of accurate weak classifier learning for boosted weak classifiers", "text": "Recent work has shown that improving model learning for weak classifiers can yield significant gains in the overall accuracy of a boosted classifier. However, most published classifier boosting research relies only on rudimentary learning techniques for weak classifiers. So while it is known that improving the model learning can greatly improve the accuracy of the resulting strong classifier, it remains to be shown how much can yet be gained by further improving the model learning at the weak classifier level. This paper derives a very accurate model learning method for weak classifiers based on the popular Haar-like features and presents an investigation of its usefulness compared to the standard and recent approaches. The accuracy of the new method is shown by demonstrating the new models ability to predict ROC performance on validation data. A discussion of the problems in learning accurate weak hypotheses is given, along with example solutions. It is also shown that a previous simpler method can be further improved. Lastly, we show that improving model accuracy does not continue to yield improved overall classification beyond a certain point. At this point the learning technique, in this case RealBoost, is unable to make gains from the improved model data. The method has been tested on pedestrian detection tasks using classifiers boosted using the RealBoost boosting algorithm. A subset of our most interesting results is shown to demonstrate the value of method."}
{"_id": "c87b70628a280d304652eda12c2e760d8d03e38f", "title": "Probabilistic vehicle trajectory prediction over occupancy grid map via recurrent neural network", "text": "In this paper, we propose an efficient vehicle trajectory prediction framework based on recurrent neural network. Basically, the characteristic of the vehicle's trajectory is different from that of regular moving objects since it is affected by various latent factors including road structure, traffic rules, and driver's intention. Previous state of the art approaches use sophisticated vehicle behavior model describing these factors and derive the complex trajectory prediction algorithm, which requires a system designer to conduct intensive model optimization for practical use. Our approach is data-driven and simple to use in that it learns complex behavior of the vehicles from the massive amount of trajectory data through deep neural network model. The proposed trajectory prediction method employs the recurrent neural network called long short-term memory (LSTM) to analyze the temporal behavior and predict the future coordinate of the surrounding vehicles. The proposed scheme feeds the sequence of vehicles' coordinates obtained from sensor measurements to the LSTM and produces the probabilistic information on the future location of the vehicles over occupancy grid map. The experiments conducted using the data collected from highway driving show that the proposed method can produce reasonably good estimate of future trajectory."}
{"_id": "8568be5a425d57bd5507ed3d8306efe2acdc1b85", "title": "A Spiking Neural Network Based Cortex-Like Mechanism and Application to Facial Expression Recognition", "text": "In this paper, we present a quantitative, highly structured cortex-simulated model, which can be simply described as feedforward, hierarchical simulation of ventral stream of visual cortex using biologically plausible, computationally convenient spiking neural network system. The motivation comes directly from recent pioneering works on detailed functional decomposition analysis of the feedforward pathway of the ventral stream of visual cortex and developments on artificial spiking neural networks (SNNs). By combining the logical structure of the cortical hierarchy and computing power of the spiking neuron model, a practical framework has been presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortical-like feedforward hierarchy framework has the merit of capability of dealing with complicated pattern recognition problems, suggesting that, by combining the cognitive models with modern neurocomputational approaches, the neurosystematic approach to the study of cortex-like mechanism has the potential to extend our knowledge of brain mechanisms underlying the cognitive analysis and to advance theoretical models of how we recognize face or, more specifically, perceive other people's facial expression in a rich, dynamic, and complex environment, providing a new starting point for improved models of visual cortex-like mechanism."}
{"_id": "0834e74304b547c9354b6d7da6fa78ef47a48fa8", "title": "LINE: Large-scale Information Network Embedding", "text": "This paper studies the problem of embedding very large information networks into low-dimensional vector spaces, which is useful in many tasks such as visualization, node classification, and link prediction. Most existing graph embedding methods do not scale for real world information networks which usually contain millions of nodes. In this paper, we propose a novel network embedding method called the ``LINE,'' which is suitable for arbitrary types of information networks: undirected, directed, and/or weighted. The method optimizes a carefully designed objective function that preserves both the local and global network structures. An edge-sampling algorithm is proposed that addresses the limitation of the classical stochastic gradient descent and improves both the effectiveness and the efficiency of the inference. Empirical experiments prove the effectiveness of the LINE on a variety of real-world information networks, including language networks, social networks, and citation networks. The algorithm is very efficient, which is able to learn the embedding of a network with millions of vertices and billions of edges in a few hours on a typical single machine. The source code of the LINE is available online\\footnote{\\url{https://github.com/tangjianpku/LINE}}."}
{"_id": "a12e6b059ba9094ab2a5d5f2157a693668039e32", "title": "Reverse wire bonding and phosphor printing for LED wafer level packaging", "text": "Solid state lighting is a good alternative light source with reduced energy consumption. Light-emitting diode (LED) is very efficient in turning electrical energy into light. LED has a number of advantages over the traditional light sources. The optical performance of the LED component is very critical. In general, white light can be obtained by applying phosphor on a blue LED chip. The blue light from the LED excites the phosphor to emit yellow light. The blue and yellow light mixes together to give white light. In order to obtain a good optical performance, it is necessary to apply phosphor properly. It is challenging to distribute a right amount of phosphor on the LED die. Besides, phosphor dispensing is usually the slowest process when compared with die bonding and wire bonding. This controls the overall throughput of the LED packaging process. There are different methods to apply the phosphor. The phosphor is mixed with epoxy or silicone to form slurry and is then dispensed onto the chip. However, the spatial color distribution is poor if phosphor slurry is used. Conformal phosphor coating can be used to improve the spatial color distribution. In this paper, an innovative phosphor stencil printing method is proposed. This paper demonstrates the feasibility of the phosphor stencil printing process for wafer-level LED packaging. LEDs are first mounted on a wafer submount. Wire bonds are used as interconnect. The phosphor is stencil printed on the chip surface after wire bond. The minimum phosphor layer thickness is controlled by the wire bond loop height. In order to achieve a low loop height, reverse wire bonding is used. The first bond is on the wafer submount and the second bond is on the LED chip. The reverse wire bond has a very low profile which allows a thin layer of phosphor to be printed on the chip surface. Prototypes are successfully fabricated. A uniform layer of phosphor is stencil printed on the LED chip on the wafer submount. Experimental result shows that the proposed phosphor printing method is very effective in distributing the right amount of phosphor on the chip surface."}
{"_id": "b439475f4e330e0d37048a992891205d0f841689", "title": "5G cellular network integration with SDN: Challenges, issues and beyond", "text": "The tremendous growth in communication technology is shaping a hyper-connected network where billions or connected devices are producing a huge volume of data. Cellular and mobile network is a major contributor towards this technology shift and require new architectural paradigm to provide low latency, high performance in a resource constrained environment. 5G technology deployment with fully IP-based connectivity is anticipated by 2020. However, there is no standard established for 5G technology and many efforts are being made to establish a unified 5G stander. In this context, variant technology such as Software Defined Network (SDN) and Network Function virtualization (NFV) are the best candidate. SDN dissociate control plane from data plane and network management is done on the centralized control plane. In this paper, a survey on state of the art on the 5G integration with the SDN is presented. A comprehensive review is presented for the different integrated architectures of 5G wireless network and the generalized solutions over the period 2010\u20132016. This comparative analysis of the existing solutions of SDN-based cellular network (5G) implementations provides an easy and concise view of the emerging trends by 2020."}
{"_id": "cba6bacdac97b467f149ab71998f6e91c25a0374", "title": "Adversarial and Perceptual Refinement for Compressed Sensing MRI Reconstruction", "text": "Deep learning approaches have shown promising performance for compressed sensing-based Magnetic Resonance Imaging. While deep neural networks trained with mean squared error (MSE) loss functions can achieve high peak signal to noise ratio, the reconstructed images are often blurry and lack sharp details, especially for higher undersampling rates. Recently, adversarial and perceptual loss functions have been shown to achieve more visually appealing results. However, it remains an open question how to (1) optimally combine these loss functions with the MSE loss function and (2) evaluate such a perceptual enhancement. In this work, we propose a hybrid method, in which a visual refinement component is learnt on top of an MSE loss-based reconstruction network. In addition, we introduce a semantic interpretability score, measuring the visibility of the region of interest in both ground truth and reconstructed images, which allows us to objectively quantify the usefulness of the image quality for image post-processing and analysis. Applied on a large cardiac MRI dataset simulated with 8-fold undersampling, we demonstrate significant improvements (p < 0.01) over the state-of-the-art in both a human observer study and the semantic interpretability score."}
{"_id": "40a4e30c2eca4de538d6ac39db48b2f04027ba6c", "title": "A Pose-Invariant Descriptor for Human Detection and Segmentation", "text": "We present a learning-based, sliding window-style approach for the problem of detecting humans in still images. Instead of traditional concatenation-style image location-based feature encoding, a global descriptor more invariant to pose variation is introduced. Specifically, we propose a principled approach to learning and classifying human/non-human image patterns by simultaneously segmenting human shapes and poses, and extracting articulation-insensitive features. The shapes and poses are segmented by an efficient, probabilistic hierarchical part-template matching algorithm, and the features are collected in the context of poses by tracing around the estimated shape boundaries. Histograms of oriented gradients are used as a source of low-level features from which our pose-invariant descriptors are computed, and kernel SVMs are adopted as the test classifiers. We evaluate our detection and segmentation approach on two public pedestrian datasets."}
{"_id": "adfb59cbc6160af7245ee28d8b703f76153af6bc", "title": "Interpretable Neural Architectures for Attributing an Ad's Performance to its Writing Style", "text": "How much does \u201cfree shipping!\u201d help an advertisement\u2019s ability to persuade? This paper presents two methods for performance attribution: finding the degree to which an outcome can be attributed to parts of a text while controlling for potential confounders1. Both algorithms are based on interpreting the behaviors and parameters of trained neural networks. One method uses a CNN to encode the text, an adversarial objective function to control for confounders, and projects its weights onto its activations to interpret the importance of each phrase towards each output class. The other method leverages residualization to control for confounds and performs interpretation by aggregating over learned word vectors. We demonstrate these algorithms\u2019 efficacy on 118,000 internet search advertisements and outcomes, finding language indicative of high and low click through rate (CTR) regardless of who the ad is by or what it is for. Our results suggest the proposed algorithms are high performance and data efficient, able to glean actionable insights from fewer than 10,000 data points. We find that quick, easy, and authoritative language is associated with success, while lackluster embellishment is related to failure. These findings agree with the advertising industry\u2019s emperical wisdom, automatically revealing insights which previously required manual A/B testing to discover."}
{"_id": "acbeb63994356f1fcc6e2350be22468d29e0da1a", "title": "A survey on comparison of electric motor types and drives used for electric vehicles", "text": "In this study, Switched Reluctance Motor (SRM), Induction Motor (IM), Brushless DC Motor, and Permanent Magnet Motor (PM), and their drives have been compared with the efficiency, cost, weight, cooling, maximum speed, reliability, fault tolerance, power ratings, and vehicle acceleration time. Hence, a comprehensive literature research on motor types and their drives used in EV has been made. According to these researches, some conclusions have been obtained. It has been seen that PM BLDC motors and their drives are the most efficient and have high power density, brushless DC motors and their drives have low cost, IM is appropriate for controllability and cost, the weight of SRM is low, its reliability is high, it operates fault-tolerance and according to the acceleration time, its performance is better than IM and BLDC. Hence, SRM is the most appropriate motor for EV."}
{"_id": "ab515c33ced707b35b480330155ab51b1729dbbf", "title": "Vision-as-Inverse-Graphics: Obtaining a Rich 3D Explanation of a Scene from a Single Image", "text": "We develop an inverse graphics approach to the problem of scene understanding, obtaining a rich representation that includes descriptions of the objects in the scene and their spatial layout, as well as global latent variables like the camera parameters and lighting. The framework's stages include object detection, the prediction of the camera and lighting variables, and prediction of object-specific variables (shape, appearance and pose). This acts like the encoder of an autoencoder, with graphics rendering as the decoder Importantly the scene representation is interpretable and is of variable dimension to match the detected number of objects plus the global variables. For the prediction of the camera latent variables we introduce a novel architecture termed Probabilistic HoughNets (PHNs), which provides a principled approach to combining information from multiple detections. We demonstrate the quality of the reconstructions obtained quantitatively on synthetic data, and qualitatively on real scenes."}
{"_id": "ba2f339f67f51d7fc2f93b858ee02fc14bf6073a", "title": "Think-aloud protocols: Analyzing three different think-aloud protocols with counts of verbalized frustrations in a usability study of an information-rich Web site", "text": "We describe an empirical, between-subjects study on the use of think-aloud protocols in usability testing of an information-rich Web site. This double-blind study used three different types of think-aloud protocols: a traditional protocol, a speech-communication protocol, and a coaching protocol. A silent condition served as the control. Eighty participants were recruited and randomly pre-assigned to one of four conditions. With the goal of keeping unintended bias to a minimum, data analysis did not count the number of identified usability problems by condition, which was considered too subjective. Rather, the study collected the number of verbalized and non-verbalized counts of frustration by condition that users experienced. The study also did a count of the number of verbalized and non-verbalized instances of positive comments by condition that users expressed. Results show that there were no statistical differences in the number of counts by condition with respect to the traditional, speech communication, or coaching condition. The study concludes that simply counting the verbalizations of users by condition does not give enough information to determine whether any of the conditions would lead to a better understanding of the usability problems associated with the Web site."}
{"_id": "106a7e73243a2c5cfd9a532d5ca8043bf492cb83", "title": "An empirical study on the use of i* by non-technical stakeholders: the case of strategic dependency diagrams", "text": "Early phases of information systems engineering include the understanding of the enterprise\u2019s context and the construction of models at different levels of decomposition, required to design the system architecture. These time-consuming activities are usually conducted by relatively large teams, composed of groups of non-technical stakeholders playing mostly an informative role (i.e. not involved in documentation and even less in modelling), led by few experienced technical consultants performing most of the documenting and modelling effort. This paper evaluates the ability of non-technical stakeholders to create strategic dependency diagrams written with the i* language in the design of the context model of a system architecture, and find out which difficulties they may encounter and what the quality of the models they build is. A case study involving non-technical stakeholders from 11 organizational areas in an Ecuadorian university held under the supervision and coordination of the two authors acting as consultants. The non-technical stakeholders identified the majority of the dependencies that should appear in the case study\u2019s context model, although they experienced some difficulties in declaring the type of dependency, representing such dependencies graphically and applying the description guidelines provided in the training. Managers were observed to make more mistakes than other more operational roles. From the observations of these results, a set of methodological advices were compiled for their use in future, similar endeavours. It is concluded that non-technical stakeholders can take an active role in the construction of the context model. This conclusion is relevant for both researchers and practitioners involved in technology transfer actions with use of i*."}
{"_id": "a80dcb654dba03a730629a8e5361dd79bbea0f5a", "title": "A high-PSR LDO using a feedforward supply-noise cancellation technique", "text": "A feed-forward noise cancellation (FFNC) technique to improve the power supply noise rejection (PSR) of a low dropout regulator (LDO) is presented. The proposed FFNC operates in conjunction with a conventional LDO and extends the noise rejection bandwidth by nearly an order of magnitude. Fabricated in 0.18\u00b5m CMOS, at 10mA load current, the prototype achieves a PSR of \u221250dB and \u221225dB at 1MHz and 10MHz supply noise frequencies, respectively. Compared to a conventional LDO, this represents an improvement of at least 30dB at 1MHz and 15dB at 10MHz. The prototype uses only 20pF load capacitance and occupies an active area of 0.04mm2."}
{"_id": "87bb169acb75197d788fb81bbc531b94b66a6446", "title": "Molecular assembly on two-dimensional materials.", "text": "Molecular self-assembly is a well-known technique to create highly functional nanostructures on surfaces. Self-assembly on two-dimensional (2D) materials is a developing field driven by the interest in functionalization of 2D materials in order to tune their electronic properties. This has resulted in the discovery of several rich and interesting phenomena. Here, we review this progress with an emphasis on the electronic properties of the adsorbates and the substrate in well-defined systems, as unveiled by scanning tunneling microscopy. The review covers three aspects of the self-assembly. The first one focuses on non-covalent self-assembly dealing with site-selectivity due to inherent moir\u00e9 pattern present on 2D materials grown on substrates. We also see that modification of intermolecular interactions and molecule-substrate interactions influences the assembly drastically and that 2D materials can also be used as a platform to carry out covalent and metal-coordinated assembly. The second part deals with the electronic properties of molecules adsorbed on 2D materials. By virtue of being inert and possessing low density of states near the Fermi level, 2D materials decouple molecules electronically from the underlying metal substrate and allow high-resolution spectroscopy and imaging of molecular orbitals. The moir\u00e9 pattern on the 2D materials causes site-selective gating and charging of molecules in some cases. The last section covers the effects of self-assembled, acceptor and donor type, organic molecules on the electronic properties of graphene as revealed by spectroscopy and electrical transport measurements. Non-covalent functionalization of 2D materials has already been applied for their application as catalysts and sensors. With the current surge of activity on building van der Waals heterostructures from atomically thin crystals, molecular self-assembly has the potential to add an extra level of flexibility and functionality for applications ranging from flexible electronics and OLEDs to novel electronic devices and spintronics."}
{"_id": "41a949738bdf73bb357c5dbf4a5e1049c1610d11", "title": "Modelling Valence and Arousal in Facebook posts", "text": "Access to expressions of subjective personal posts increased with the popularity of Social Media. However, most of the work in sentiment analysis focuses on predicting only valence from text and usually targeted at a product, rather than affective states. In this paper, we introduce a new data set of 2895 Social Media posts rated by two psychologicallytrained annotators on two separate ordinal nine-point scales. These scales represent valence (or sentiment) and arousal (or intensity), which defines each post\u2019s position on the circumplex model of affect, a well-established system for describing emotional states (Russell, 1980; Posner et al., 2005). The data set is used to train prediction models for each of the two dimensions from text which achieve high predictive accuracy \u2013 correlated at r = .65 with valence and r = .85 with arousal annotations. Our data set offers a building block to a deeper study of personal affect as expressed in social media. This can be used in applications such as mental illness detection or in automated large-scale psychological studies."}
{"_id": "2ff91e935b1af342739980e660fa75dbd23e5b93", "title": "Gesture Recognition with a 3-D Accelerometer", "text": "Gesture-based interaction, as a natural way for human-computer interaction, has a wide range of applications in ubiquitous computing environment. This paper presents an acceleration-based gesture recognition approach, called FDSVM (Frame-based Descriptor and multi-class SVM), which needs only a wearable 3-dimensional accelerometer. With FDSVM, firstly, the acceleration data of a gesture is collected and represented by a frame-based descriptor, to extract the discriminative information. Then a SVM-based multi-class gesture classifier is built for recognition in the nonlinear gesture feature space. Extensive experimental results on a data set with 3360 gesture samples of 12 gestures over weeks demonstrate that the proposed FDSVM approach significantly outperforms other four methods: DTW, Na\u00efve Bayes, C4.5 and HMM. In the user-dependent case, FDSVM achieves the recognition rate of 99.38% for the 4 direction gestures and 95.21% for all the 12 gestures. In the user-independent case, it obtains the recognition rate of 98.93% for 4 gestures and 89.29% for 12 gestures. Compared to other accelerometer-based gesture recognition approaches reported in literature FDSVM gives the best resulrs for both user-dependent and user-independent cases."}
{"_id": "558fc9a2bce3d3993a9c1f41b6c7f290cefcf92f", "title": "Efficient and Effective Solutions for Video Classification", "text": ""}
{"_id": "e4d46e4c6a94517d2691472fe14ba32768d637da", "title": "Literature Review of Automatic Multiple Documents Text Summarization", "text": "For the blessing of World Wide Web, the corpus of online information is gigantic in its volume. Search engines have been developed such as Google, AltaVista, Yahoo, etc., to retrieve specific information from this huge amount of data. But the outcome of search engine is unable to provide expected result as the quantity of information is increasing enormously day by day and the findings are abundant. So, the automatic text summarization is demanded for salient information retrieval. Automatic text summarization is a system of summarizing text by computer where a text is given to the computer as input and the output is a shorter and less redundant form of the original text. An informative pr\u00e9cis is very much helpful in our daily life to save valuable time. Research was first started naively on single document abridgement but recently information is found from various sources about a single topic in different website, journal, newspaper, text book, etc., for which multi-document summarization is required. In this paper, automatic multiple documents text summarization task is addressed and different procedure of various researchers are discussed. Various techniques are compared here that have done for multi-document summarization. Some promising approaches are indicated here and particular concentration is dedicated to describe different methods from raw level to similar like human experts, so that in future one can get significant instruction for further analysis."}
{"_id": "1b8e4b3764895cd1f7c5ccc3d9b968bf3879eb2a", "title": "Genome-Wide Regression and Prediction with the BGLR Statistical Package", "text": "Many modern genomic data analyses require implementing regressions where the number of parameters (p, e.g., the number of marker effects) exceeds sample size (n). Implementing these large-p-with-small-n regressions poses several statistical and computational challenges, some of which can be confronted using Bayesian methods. This approach allows integrating various parametric and nonparametric shrinkage and variable selection procedures in a unified and consistent manner. The BGLR R-package implements a large collection of Bayesian regression models, including parametric variable selection and shrinkage methods and semiparametric procedures (Bayesian reproducing kernel Hilbert spaces regressions, RKHS). The software was originally developed for genomic applications; however, the methods implemented are useful for many nongenomic applications as well. The response can be continuous (censored or not) or categorical (either binary or ordinal). The algorithm is based on a Gibbs sampler with scalar updates and the implementation takes advantage of efficient compiled C and Fortran routines. In this article we describe the methods implemented in BGLR, present examples of the use of the package, and discuss practical issues emerging in real-data analysis."}
{"_id": "6e68162c564b260de132b071a17c0c3e7de8b7f9", "title": "Emotional Integration and Advertising Effectiveness : Case of Perfumes Advertising", "text": "This paper examines emotions in advertising, its effects and functioning. Using an interview-based experiment on 256 participants, we found that emotions perceived during an advertising exposure could play an important role in eliciting responses towards the ad and the brand. However, this process is true provided that consumers associate the perceived emotion to the exposed brand or its consumption experience. Furthermore, we have identified efficiency differences between magazine ads, depending on how they visually describe emotions. In particular, we study emotional integration in advertising, i.e. salience of emotions expressed towards the ad, the presence of core brand information and clarity of the advertising message about the hedonic attributes of consumption. Interestingly, the impact of the staging process of emotions is moderated by respondents-specific variables, including their need for emotion, tolerance for ambiguity, their need for structure and need for cognition. KeywordsEmotional Reactions ;Emotional Integration; Need for Emotion; Tolerance for Ambiguity; Need for Structure; Need for Cognition; Advertising Effectiveness."}
{"_id": "3f8e57289bb81f62532d0dd87ad45dd292a7af9f", "title": "A High-Speed High-Resolution Latch Comparator for Pipeline Analog-to-Digital Converters", "text": "A high-speed and high-resolution comparator intended to be implemented in a 12 bit 100 MHz pipeline analog-to-digital converter (ADC) for frequency wireless local area network application is proposed. The designed comparator presents a rail-to-rail input range preamplifier without any capacitance required. This comparator with a novel architecture of output stage achieves very high speed at a low kickback noise. The simulation results using a 0.35 mum TSMC CMOS process technology show that this comparator exhibits a propagation delay of 2.8 ns and has a very high resolution for a rail-to-rail input signal range, while consumes only 1.0 mW of power with 5.0 V voltage supply."}
{"_id": "a96c0bfb7f82e81fc96dd97c18048fd29286bf93", "title": "CMOS analog circuit design via geometric programming", "text": "This talk presents a new method for optimizing and automating component sizing in CMOS analog circuits. It is shown that a wide variety of circuit performance measures have a special form, i.e., they are posynomial functions of the design variables. As a result, circuit design problems can be posed as geometric programs, a special type of convex optimization problem for which very efficient global optimization methods exist. The synthesis method is therefore fast, and determines the globally optimal design; in particular, the final solution is completely independent of the starting point, and infeasible specifications are unambiguously detected. Also, because the method is highly efficient, in practice it can be used to carry out robust designs and quickly explore the design space. We show that despite the restricted form of geometric programs, a large variety of circuit design problems can be posed as geometric programs."}
{"_id": "e477810eb2fef42f297116fe518c8efb4b701348", "title": "Certain Topics in Telegraph Transmission Theory", "text": "The most obvious method for determining the distortion of telegraph signals is to calculate the transients of the telegraph system. This method has been treated by various writers, and solutions are available for telegraph lines with simple terminal conditions. It is well known that the extension of the same methods to more complicated terminal conditions, which represent the usual terminal apparatus, leads to great difficulties. The present paper attacks the same problem from the alternative standpoint of the steady-state characteristics of the system. This method has the advantage over the method of transients that the complication of the circuit which results from the use of terminal apparatus does not complicate the calculations materially. This method of treatment necessitates expressing the criteria of distortionless transmission in terms of the steady-state characteristics. Accordingly, a considerable portion of the paper describes and illustrates a method for making this translation. A discussion is given of the minimum frequency range required for transmission at a given speed of signaling. In the case of carrier telegraphy, this discussion includes a comparison of single-sideband and double-sideband transmission. A number of incidental topics is also discussed."}
{"_id": "b11977493935acf8f8581330425dd38f83a73480", "title": "A low-kickback-noise latched comparator for high-speed flash analog-to-digital converters", "text": "In traditional comparators especially for flash ADCs, one serious problem is the kick back noise, which disturbs the input signal voltages and consequently might cause errors at the outputs of the ADCs. In this paper, we propose a novel CMOS latched comparator with very low kickback noise for high-speed flash ADCs. The proposed comparator separates analog preamplifier from the positive feedback digital dynamic latch so as to reduce the influence of the kickback noise. Simulation results based on a mixed signal CMOS 0.35 /spl mu/m technology show that, this comparator can work at a maximum clock frequency of 500 MHz with very reduced kickback noise compared with conventional architectures."}
{"_id": "b8a613e08d719b048a277465ac218e927c83327e", "title": "A 0.35 /spl mu/m CMOS comparator circuit for high-speed ADC applications", "text": "A high-speed differential clocked comparator circuit is presented. The comparator consists of a preamplifier and a latch stage followed by a dynamic latch that operates as an output sampler. The output sampler circuit consists of a full transmission gate (TG) and two inverters. The use of this sampling stage results in a reduction in the power consumption of this high-speed comparator. Simulations show that charge injection of the TG adds constructively to the sampled signal value, therefore amplifying the sampled signal with a modest gain of 1.15. Combined with the high gain of the inverters, the sampled signals are amplified toward the rail voltages. This comparator is designed and fabricated in a 0.35 /spl mu/m standard digital CMOS technology. Measurement results show a sampling frequency of 1 GHz with 16 mV resolution for a 1 V input signal range and 2 mW power consumption from a 3.3 V supply. The architecture can be scaled down to smaller feature sizes and lower supply voltages."}
{"_id": "ef02ef7b3db459b127d439c94915413c46bff2ce", "title": "Personalization and Customization Technologies", "text": "The article presents a survey of personalization and customization technologies. It starts with an analysis of user profiling and explains how user profiles can be used for generating personalized products and services. The article explains some standard personalization techniques such as rule-based filtering, collaborative filtering, and content-based filtering. Web usage mining is also explained and it is shown how data mining can be used in learning customer behavior and for providing personalized content. Several applications are discussed such as adaptive websites, recommender systems, adaptive web stores, and intelligent agents. The article concludes by summarizing current trends."}
{"_id": "8d14cf9b50e3e55ffe1abd9ee6e6f486b95c745e", "title": "Control optimization for a power-split hybrid vehicle", "text": "Toyota hybrid system (THS) is used in the current best-selling hybrid vehicle on the market - the Toyota Prius. This hybrid system contains a power split planetary gear system which combines the benefits of series and parallel hybrid vehicles. This paper first develops a dynamic model to investigate the control strategy of the THS power train. An equivalent consumption minimization strategy (ECMS) is developed which is based on instantaneous optimization concept. The dynamic programming (DP) technique is then utilized to obtain a performance benchmark and insight toward fine-tuning of the ECMS algorithm for better performance"}
{"_id": "29ab09179c9fc864b05fe853c8443f39ac1baaec", "title": "Grounding Language for Interactive Task Learning", "text": "This paper describes how language is grounded by a comprehension system called Lucia within a robotic agent called Rosie that can manipulate objects and navigate indoors. The whole system is built within the Soar cognitive architecture and uses Embodied Construction Grammar (ECG) as a formalism for describing linguistic knowledge. Grounding is performed using knowledge from the grammar itself, from the linguistic context, from the agent's perception, and from an ontology of long-term knowledge about object categories and properties and actions the agent can perform. The paper also describes a benchmark corpus of 200 sentences in this domain, along with test versions of the world model and ontology, and gold-standard meanings for each of the sentences. The benchmark is contained in the supplemental materials."}
{"_id": "52cb68029929b91cc1bfe62eb8576ffabb8ff787", "title": "Stock Trading Using RSPOP: A Novel Rough Set-Based Neuro-Fuzzy Approach", "text": "This paper investigates the method of forecasting stock price difference on artificially generated price series data using neuro-fuzzy systems and neural networks. As trading profits is more important to an investor than statistical performance, this paper proposes a novel rough set-based neuro-fuzzy stock trading decision model called stock trading using rough set-based pseudo outer-product (RSPOP) which synergizes the price difference forecast method with a forecast bottleneck free trading decision model. The proposed stock trading with forecast model uses the pseudo outer-product based fuzzy neural network using the compositional rule of inference [POPFNN-CRI(S)] with fuzzy rules identified using the RSPOP algorithm as the underlying predictor model and simple moving average trading rules in the stock trading decision model. Experimental results using the proposed stock trading with RSPOP forecast model on real world stock market data are presented. Trading profits in terms of portfolio end values obtained are benchmarked against stock trading with dynamic evolving neural-fuzzy inference system (DENFIS) forecast model, the stock trading without forecast model and the stock trading with ideal forecast model. Experimental results showed that the proposed model identified rules with greater interpretability and yielded significantly higher profits than the stock trading with DENFIS forecast model and the stock trading without forecast model"}
{"_id": "0efb659b15737c76a2fc50010a694123f6c45f64", "title": "Accelerated Attributed Network Embedding", "text": "Network embedding is to learn low-dimensional vector representations for nodes in a network. It has shown to be effective in a variety of tasks such as node classification and link prediction. While embedding algorithms on pure networks have been intensively studied, in many real-world applications, nodes are often accompanied with a rich set of attributes or features, aka attributed networks. It has been observed that network topological structure and node attributes are often strongly correlated with each other. Thus modeling and incorporating node attribute proximity into network embedding could be potentially helpful, though non-trivial, in learning better vector representations. Meanwhile, real-world networks often contain a large number of nodes and features, which put demands on the scalability of embedding algorithms. To bridge the gap, in this paper, we propose an accelerated attributed network embedding algorithm AANE, which enables the joint learning process to be done in a distributed manner by decomposing the complex modeling and optimization into many sub-problems. Experimental results on several real-world datasets demonstrate the effectiveness and efficiency of the proposed algorithm."}
{"_id": "573cea19728a87490ee562497767ccd5f30dd7f6", "title": "Tweeting celebrity suicides: Users' reaction to prominent suicide deaths on Twitter and subsequent increases in actual suicides.", "text": "A substantial amount of evidence indicates that news coverage of suicide deaths by celebrities is followed by an increase in suicide rates, suggesting a copycat behavior. However, the underlying process by which celebrity status and media coverage leads to increases in subsequent suicides is still unclear. This study collected over 1 million individual messages (\"tweets\") posted on Twitter that were related to 26 prominent figures in Japan who died by suicide between 2010 and 2014 and investigated whether media reports on suicide deaths that generated a greater level of reactions by the public are likely to be followed by a larger increase in actual suicides. We also compared the number of Twitter posts and the number of media reports in newspaper and on television to understand whether the number of messages on Twitter in response to the deaths corresponds to the amount of coverage in the traditional media. Using daily data from Japan's national death registry between 2010 and 2014, our analysis found an increase in actual suicides only when suicide deaths generated a large reaction from Twitter users. In contrast, no discernible increase in suicide counts was observed when the analysis included suicide deaths to which Twitter users did not show much interest, even when these deaths were covered considerably by the traditional media. This study also found suicides by relatively young entertainers generated a large number of posts on Twitter. This sharply contrasts with the relatively smaller volume of reaction to them generated by traditional forms of media, which focuses more on the deaths of non-entertainers. The results of this study strongly suggest that it is not sufficient to examine only traditional news media when investigating the impact of media reports on actual suicides."}
{"_id": "68a6dfc9b76fc84533585088134c8f92c40802de", "title": "Power and thermal characterization of a lithium-ion battery pack for hybrid-electric vehicles", "text": "A 1D electrochemical, lumped thermal model is used to explore pulse power limitations and thermal behavior of a 6 Ah, 72 cell, 276 V nominal i-ion hybrid-electric vehicle (HEV) battery pack. Depleted/saturated active material Li surface concentrations in the negative/positive electrodes onsistently cause end of high-rate (\u223c25 C) pulse discharge at the 2.7 V cell\u22121 minimum limit, indicating solid-state diffusion is the limiting echanism. The 3.9 V cell\u22121 maximum limit, meant to protect the negative electrode from lithium deposition side reaction during charge, is overly onservative for high-rate (\u223c15 C) pulse charges initiated from states-of-charge (SOCs) less than 100%. Two-second maximum pulse charge rate rom the 50% SOC initial condition can be increased by as much as 50% without risk of lithium deposition. Controlled to minimum/maximum oltage limits, the pack meets partnership for next generation vehicles (PNGV) power assist mode pulse power goals (at operating temperatures 16 \u25e6C), but falls short of the available energy goal. In a vehicle simulation, the pack generates heat at a 320 W rate on a US06 driving cycle at 25 \u25e6C, with more heat generated at lower temperatures. ess aggressive FUDS and HWFET cycles generate 6\u201312 times less heat. Contact resistance ohmic heating dominates all other mechanisms, ollowed by electrolyte phase ohmic heating. Reaction and electronic phase ohmic heats are negligible. A convective heat transfer coefficient of = 10.1 W m\u22122 K\u22121 maintains cell temperature at or below the 52 \u25e6C PNGV operating limit under aggressive US06 driving. 2006 Elsevier B.V. All rights reserved."}
{"_id": "4bc04f87a48cf8fb5fdcb87cbfe140fdf0fc0d74", "title": "Safe and automatic live update for operating systems", "text": "Increasingly many systems have to run all the time with no downtime allowed. Consider, for example, systems controlling electric power plants and e-banking servers. Nevertheless, security patches and a constant stream of new operating system versions need to be deployed without stopping running programs. These factors naturally lead to a pressing demand for live update---upgrading all or parts of the operating system without rebooting. Unfortunately, existing solutions require significant manual intervention and thus work reliably only for small operating system patches.\n In this paper, we describe an automated system for live update that can safely and automatically handle major upgrades without rebooting. We have implemented our ideas in Proteos, a new research OS designed with live update in mind. Proteos relies on system support and nonintrusive instrumentation to handle even very complex updates with minimal manual effort. The key novelty is the idea of state quiescence, which allows updates to happen only in safe and predictable system states. A second novelty is the ability to automatically perform transactional live updates at the process level, ensuring a safe and stable update process. Unlike prior solutions, Proteos supports automated state transfer, state checking, and hot rollback. We have evaluated Proteos on 50 real updates and on novel live update scenarios. The results show that our techniques can effectively support both simple and complex updates, while outperforming prior solutions in terms of flexibility, security, reliability, and stability of the update process."}
{"_id": "6185fd2ee2112bb75c78a7b4d8bd951da2d31650", "title": "RPA Antagonizes Microhomology-Mediated Repair of DNA Double-Strand Breaks", "text": "Microhomology-mediated end joining (MMEJ) is a Ku- and ligase IV\u2013independent mechanism for the repair of DNA double-strand breaks that contributes to chromosome rearrangements. Here we used a chromosomal end-joining assay to determine the genetic requirements for MMEJ in Saccharomyces cerevisiae. We found that end resection influences the ability to expose microhomologies; however, it is not rate limiting for MMEJ in wild-type cells. The frequency of MMEJ increased by up to 350-fold in rfa1 hypomorphic mutants, suggesting that replication protein A (RPA) bound to the single-stranded DNA (ssDNA) overhangs formed by resection prevents spontaneous annealing between microhomologies. In vitro, the mutant RPA complexes were unable to fully extend ssDNA and were compromised in their ability to prevent spontaneous annealing. We propose that the helix-destabilizing activity of RPA channels ssDNA intermediates from mutagenic MMEJ to error-free homologous recombination, thus preserving genome integrity."}
{"_id": "a12862972be0e7e6e901825723e703117a6d8128", "title": "A New Distance-weighted k-nearest Neighbor Classifier", "text": "In this paper, we develop a novel Distance-weighted k -nearest Neighbor rule (DWKNN), using the dual distance-weighted function. The proposed DWKNN is motivated by the sensitivity problem of the selection of the neighborhood size k that exists in k -nearest Neighbor rule (KNN), with the aim of improving classification performance. The experiment results on twelve real data sets demonstrate that our proposed classifier is robust to different choices of k to some degree, and yields good performance with a larger optimal k, compared to the other state-of-art KNN-based methods."}
{"_id": "9e34927dd7fa5d56072788b89fc85644e61979df", "title": "Active Learning in Multi-objective Evolutionary Algorithms for Sustainable Building Design", "text": "Residential and commercial buildings are responsible for about 40% of primary energy consumption in the US. The design of a building has tremendous effect on its energy profile, and recently there has been an increased interest in developing optimization methods that support the design of high performance buildings. Previous approaches are either based on simulation optimization or on training an accurate predictive model to replace expensive energy simulations during the optimization. We propose a method, suitable for expensive multiobjective optimization in very large search spaces. In particular, we use a Gaussian Process (GP) model for the prediction and devise an active learning scheme in a multi-objective genetic algorithm to preferentially simulate only solutions that are very informative to the model's predictions for the current generation. We develop a comprehensive and publicly available benchmark for building design optimization. We show that the GP model is highly competitive as a surrogate for building energy simulations, in addition to being well-suited for the active learning setting. Our results show that our approach clearly outperforms surrogate-based optimization, and produces solutions close in hypervolume to simulation optimization, while using only a fraction of the simulations and time."}
{"_id": "57f8f05ae42e6bfc9cb0b39d7ec962166cffdf6f", "title": "0 BLIS : A Framework for Rapid Instantiation of BLAS Functionality", "text": "The BLAS Libray Instantiation Software (BLIS) is a new framework for the rapid instantiation of Basic Linear Algebra Subprograms (BLAS) functionality. The fundamental innovation is the insight that virtually all computation within level-2 (matrix-vector) and level-3 (matrix-matrix) BLAS operations can be expressed in terms of very simple kernels. While others had made similar insights, BLIS brings this set down to what we believe is the simplest set that still supports the high performance that the computational science community demands. Higher-level framework code is generalized and implemented in standard C so that it can be reused and/or re-parameterized for different operations (as well as different architectures) with little to no modification. Inserting high-performance kernels into the framework facilitates the immediate optimization of any and all BLAS-like operations which are cast in terms of these kernels, and thus the framework acts as a productivity multiplier. Users of BLAS-dependent applications are given a choice of using the BLIS native interface (which is a C interface that corrects a few known idiosyncracies of the BLAS interface), the traditional Fortran BLAS interface, or through any other higher level interface that chooses to build upon the BLIS interface. Preliminary experimental performance of level-2 and level-3 operations is observed to be competitive with two mature open source libraries (OpenBLAS and ATLAS) as well as an established commercial product (Intel MKL)."}
{"_id": "732e951c2a27d9ed43c1b26b658ea295a84f6d2f", "title": "Smart homes automation using Z-wave protocol", "text": "The rapid development of information technology and computer networks make them part of almost everything in our daily life and it became impossible to abandon their use. One of the main and important applications of technology in homes is home automation including controlling and automation of electronic and electrical machines remotely. Wireless Home Automation Networks (WHANs) are used in homes to connect the different devices together and to the Internet. In order to control home devices remotely, there are many popular protocols such as INSTEON, ZigBee, and Home Plug. In this paper, we focus on the relatively new protocol called Z-Wave protocol and we discuss it development and applications in smart homes. This wireless protocol has many advantages over the popular and widely used ZigBee protocol as it provides better reliability, low radio rebirth, easy usage, and easy Interoperability."}
{"_id": "7651f2f7200ee3d425a54fc5a4137821036dd6d9", "title": "SPENCER: A Socially Aware Service Robot for Passenger Guidance and Help in Busy Airports", "text": "We present an ample description of a socially compliant mobile robotic platform, which is developed in the EU-funded project SPENCER. The purpose of this robot is to assist, inform and guide passengers in large and busy airports. One particular aim is to bring travellers of connecting flights conveniently and efficiently from their arrival gate to the passport control. The uniqueness of the project stems from the strong demand of service robots for this application with a large potential impact for the aviation industry on one side, and on the other side from the scientific advancements in social robotics, brought forward and achieved in SPENCER. The main contributions of SPENCER are novel methods to perceive, learn, and model human social behavior and to use this knowledge to plan appropriate actions in realtime for mobile platforms. In this paper, we describe how the project advances the fields of detection and tracking of individuals and groups, recognition of human social relations and activities, normative human behavior learning, socially-aware task and motion planning, learning socially annotated maps, and conducting empirical experiments to assess socio-psychological effects of normative robot behaviors."}
{"_id": "624fcdf19c27cce2a76c8d102e24c65f8b2e3937", "title": "ViewDroid: towards obfuscation-resilient mobile application repackaging detection", "text": "In recent years, as mobile smart device sales grow quickly, the development of mobile applications (apps) keeps accelerating, so does mobile app repackaging. Attackers can easily repackage an app under their own names or embed advertisements to earn pecuniary profits. They can also modify a popular app by inserting malicious payloads into the original app and leverage its popularity to accelerate malware propagation. In this paper, we propose ViewDroid, a user interface based approach to mobile app repackaging detection. Android apps are user interaction intensive and event dominated, and the interactions between users and apps are performed through user interface, or views. This observation inspires the design of our new birthmark for Android apps, namely, feature view graph, which captures users' navigation behavior across app views. Our experimental results demonstrate that this birthmark can characterize Android apps from a higher level abstraction, making it resilient to code obfuscation. ViewDroid can detect repackaged apps at a large scale, both effectively and efficiently. Our experiments also show that the false positive and false negative rates of ViewDroid are both very low."}
{"_id": "036a1d9c55ea92860d931299ca81396f1f23a498", "title": "Autoenucleation in a 84-year-old dementia patient", "text": "Background Autoenucleation is a severe, rare form of self-mutilation. The majority of cases have been reported in the 15- to 60-year age group, usually in psychiatric patients with a history of depression or schizophrenia, sometimes caused by drug abuse. Case report We report a case of left-sided autoenucleation in an 84-year-old dementia patient suffering from reactive depression. Medical reports mentioned a suicide attempt 2\u00a0weeks prior to the incident, whereupon the patient was admitted to the locked ward of a psychiatric hospital. During one night of inpatient stay, the patient manually autoenucleated his left eye. Inspection of the enucleated organ revealed a completely intact globe with an attached optic nerve 5.5\u00a0cm in length. The orbit was filled with a massive haematoma. A contrast-enhanced computed tomography (CT) scan showed an orbital haematoma, absence of the left globe and optic nerve and a chiasmatic lesion, accompanied by an intracranial bleeding into the subarachnoid space. Primary wound closure was performed without complications. Visual acuity of the right eye could not be tested due to the patient's lack of cooperation. Conclusion To the best of our knowledge, this is the only reported case of an elderly patient with primary dementia who performed autoenucleation. Other aspects, such as patient history, suicide attempt, manual eye extraction and chiasma lesion are similar to cases reported earlier. The identification and evaluation of intracranial bleedings and chiasmatic lesions that can be associated with autoenucleation requires a contrast-enhanced CT, especially if a long optic nerve fragment is attached to the enucleated globe."}
{"_id": "4ee9f569e7878ef2b8bac9a563312c7176139828", "title": "Selective Pooling Vector for Fine-Grained Recognition", "text": "We propose a new framework for image recognition by selectively pooling local visual descriptors, and show its superior discriminative power on fine-grained image classification tasks. The representation is based on selecting the most confident local descriptors for nonlinear function learning using a linear approximation in an embedded higher dimensional space. The advantage of our Selective Pooling Vector over the previous state-of-the-art Super Vector and Fisher Vector representations, is that it ensures a more accurate learning function, which proves to be important for classifying details in fine-grained image recognition. Our experimental results corroborate this claim: with a simple linear SVM as the classifier, the selective pooling vector achieves significant performance gains on standard benchmark datasets for various fine-grained tasks such as the CMU Multi-PIE dataset for face recognition, the Caltech-UCSD Bird dataset and the Stanford Dogs dataset for fine-grained object categorization. On all datasets we outperform the state of the arts and boost the recognition rates to 96.4%, 48.9%, 52.0% respectively."}
{"_id": "eef5af8ebc44dba66ef488353a5cef655c7655f1", "title": "An Open Source Tracking Testbed and Evaluation Web Site", "text": "We have implemented a GUI-based tracking testbed system in C and Intel OpenCV, running within the Microsoft Windows environment. The motivation for the testbed is to run and log tracking experiments in near real-time. The testbed allows a tracking experiment to be repeated from the same starting state using different tracking algorithms and parameter settings, thereby facilitating comparison of algorithms. We have also developed a tracking evaluation web site to encourage third-party self-evaluation of state-of-the-art tracking algorithms. The site hosts source code for the tracking testbed, a set of ground-truth datasets, and a method for on-line evaluation of uploaded tracking results."}
{"_id": "bec696dc13813162ef12a51ac972db9807796f64", "title": "A Survey of Securing Networks Using Software Defined Networking", "text": "Software Defined Networking (SDN) is rapidly emerging as a new paradigm for managing and controlling the operation of networks ranging from the data center to the core, enterprise, and home. The logical centralization of network intelligence presents exciting challenges and opportunities to enhance security in such networks, including new ways to prevent, detect, and react to threats, as well as innovative security services and applications that are built upon SDN capabilities. In this paper, we undertake a comprehensive survey of recent works that apply SDN to security, and identify promising future directions that can be addressed by such research."}
{"_id": "8663945d5090fe409e42af217ac19f77f69eee28", "title": "Enabling Personalized Search over Encrypted Outsourced Data with Efficiency Improvement", "text": "In cloud computing, searchable encryption scheme over outsourced data is a hot research field. However, most existing works on encrypted search over outsourced cloud data follow the model of \u201cone size fits all\u201d and ignore personalized search intention. Moreover, most of them support only exact keyword search, which greatly affects data usability and user experience. So how to design a searchable encryption scheme that supports personalized search and improves user search experience remains a very challenging task. In this paper, for the first time, we study and solve the problem of personalized multi-keyword ranked search over encrypted data (PRSE) while preserving privacy in cloud computing. With the help of semantic ontology WordNet, we build a user interest model for individual user by analyzing the user's search history, and adopt a scoring mechanism to express user interest smartly. To address the limitations of the model of \u201cone size fit all\u201d and keyword exact search, we propose two PRSE schemes for different search intentions. Extensive experiments on real-world dataset validate our analysis and show that our proposed solution is very efficient and effective."}
{"_id": "8f4c9d62482eb7989cba846c223b3a1ec098949c", "title": "Small-footprint keyword spotting using deep neural networks", "text": "Our application requires a keyword spotting system with a small memory footprint, low computational cost, and high precision. To meet these requirements, we propose a simple approach based on deep neural networks. A deep neural network is trained to directly predict the keyword(s) or subword units of the keyword(s) followed by a posterior handling method producing a final confidence score. Keyword recognition results achieve 45% relative improvement with respect to a competitive Hidden Markov Model-based system, while performance in the presence of babble noise shows 39% relative improvement."}
{"_id": "c4756dcc7afc2f09d61e6e4cf2199d9f6dd695cc", "title": "Convolutional neural networks for small-footprint keyword spotting", "text": "We explore using Convolutional Neural Networks (CNNs) for a small-footprint keyword spotting (KWS) task. CNNs are attractive for KWS since they have been shown to outperform DNNs with far fewer parameters. We consider two different applications in our work, one where we limit the number of multiplications of the KWS system, and another where we limit the number of parameters. We present new CNN architectures to address the constraints of each applications. We find that the CNN architectures offer between a 27-44% relative improvement in false reject rate compared to a DNN, while fitting into the constraints of each application."}
{"_id": "563e821bb5ea825efb56b77484f5287f08cf3753", "title": "Convolutional networks for images, speech, and time series", "text": ""}
{"_id": "d9654e5bfccd3abfa166abfe4af13b80b6cbd547", "title": "UWB antenna for brain stroke and brain tumour detection", "text": "UWB Pentagon antenna is designed to detect brain stroke and brain tumour. It is operating at a band from 3.3568\u2013 12.604 GHz in free space and from 3.818 to 9.16 GHz on the normal head model. The antenna has dimensions of 44\u00d730mm2. It is fabricated on FR4-substrate with relative permittivity 4.4 and thickness 1.5mm. The antenna is simulated on the CST Microwave Studio and measured using the network analyser. There is a good agreement between the measured and simulated results of the return loss of the antenna on human's head and head phantom."}
{"_id": "79cc5d44caddc8746f92bd60ba7da892a2120330", "title": "An ultracapacitor integrated power conditioner for intermittency smoothing and improving power quality of distribution grid", "text": "Summary form only given. Penetration of various types of distributed energy resources (DERs) like solar, wind and PHEVs onto the distribution grid is on the rise. There is a corresponding increase in power quality problems and intermittencies on the distribution grid. In order to reduce the intermittencies and improve the power quality of the distribution grid an Ultracapacitor (UCAP) integrated power conditioner is proposed in this paper. UCAP integration gives the power conditioner active power capability which is useful in tackling the grid intermittencies and in improving the voltage sag and swell compensation. UCAPs have low energy density, high power density and fast charge/discharge rates which are all ideal characteristics for meeting high power low energy events like grid intermittencies, sags/swells. In this paper UCAP is integrated into dc-link of the power conditioner through a bi-directional dc-dc converter which helps in providing a stiff dc-link voltage. The integration helps in providing active/reactive power support, intermittency smoothing and sag/swell compensation. Design and control of both the dc-ac inverters and the dc-dc converter are discussed. The simulation model of the overall system is developed and compared to the experimental hardware setup."}
{"_id": "1604de41ea8bc82aac0502daa309d3fed3f8495e", "title": "Improving bug localization using structured information retrieval", "text": "Locating bugs is important, difficult, and expensive, particularly for large-scale systems. To address this, natural language information retrieval techniques are increasingly being used to suggest potential faulty source files given bug reports. While these techniques are very scalable, in practice their effectiveness remains low in accurately localizing bugs to a small number of files. Our key insight is that structured information retrieval based on code constructs, such as class and method names, enables more accurate bug localization. We present BLUiR, which embodies this insight, requires only the source code and bug reports, and takes advantage of bug similarity data if available. We build BLUiR on a proven, open source IR toolkit that anyone can use. Our work provides a thorough grounding of IR-based bug localization research in fundamental IR theoretical and empirical knowledge and practice. We evaluate BLUiR on four open source projects with approximately 3,400 bugs. Results show that BLUiR matches or outperforms a current state-of-the-art tool across applications considered, even when BLUiR does not use bug similarity data used by the other tool."}
{"_id": "66af2afd4c598c2841dbfd1053bf0c386579234e", "title": "Context-assisted face clustering framework with human-in-the-loop", "text": "Automatic face clustering, which aims to group faces referring to the same people together, is a key component for face tagging and image management. Standard face clustering approaches that are based on analyzing facial features can already achieve high-precision results. However, they often suffer from low recall due to the large variation of faces in pose, expression, illumination, occlusion, etc. To improve the clustering recall without reducing the high precision, we leverage the heterogeneous context information to iteratively merge the clusters referring to same entities. We first investigate the appropriate methods to utilize the context information at the cluster level, including using of \u201ccommon scene\u201d, people co-occurrence, human attributes, and clothing. We then propose a unified framework that employs bootstrapping to automatically learn adaptive rules to integrate this heterogeneous contextual information, along with facial features, together. Finally, we discuss a novel methodology for integrating human-in-the-loop feedback mechanisms that leverage human interaction to achieve the high-quality clustering results. Experimental results on two personal photo collections and one real-world surveillance dataset demonstrate the effectiveness of the proposed approach in improving recall while maintaining very high precision of face clustering."}
{"_id": "ee7c9e3d417df6234d64364c6922b12433e0de38", "title": "Face Detection and Gesture Recognition for Human-Computer Interaction", "text": "Reading is a hobby to open the knowledge windows. Besides, it can provide the inspiration and spirit to face this life. By this way, concomitant with the technology development, many companies serve the e-book or book in soft file. The system of this book of course will be much easier. No worry to forget bringing the face detection and gesture recognition for human computer interaction book. You can open the device and get the book by on-line."}
{"_id": "a4b4ae2e42019b1f2245513cf29264d3e644865d", "title": "Some Correlates of Electronic Health Information Management System Success in Nigerian Teaching Hospitals", "text": "Nowadays, an electronic health information management system (EHIMS) is crucial for patient care in hospitals. This paper explores the aspects and elements that contribute to the success of EHIMS in Nigerian teaching hospitals. The study adopted a survey research design. The population of study comprised 442 health information management personnel in five teaching hospitals that had implemented EHIMS in Nigeria. A self-developed questionnaire was used as an instrument for data collection. The findings revealed that there is a positive, close relationship between all the identified factors and EHIMS's success: technical factors (r = 0.564, P < 0.05); social factors (r = 0.616, P < 0.05); organizational factors (r = 0.621, P < 0.05); financial factors (r = 0.705, P < 0.05); and political factors (r = 0.589, P < 0.05). We conclude that consideration of all the identified factors was highly significant for the success of EHIMS in Nigerian teaching hospitals."}
{"_id": "89b44921d04692f44a1c6c8730179a3de3df5767", "title": "Deep Discriminative Representation Learning with Attention Map for Scene Classification", "text": "Learning powerful discriminative features for remote sensing image scene classification is a challenging computer vision problem. In the past, most classification approaches were based on handcrafted features. However, most recent approaches to remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is only to use original RGB patches as input with training performed on large amounts of labeled data (ImageNet). In this paper, we show class activation map (CAM) encoded CNN models, codenamed DDRL-AM, trained using original RGB patches and attention map based class information provide complementary information to the standard RGB deep models. To the best of our knowledge, we are the first to investigate attention information encoded CNNs. Additionally, to enhance the discriminability, we further employ a recently developed object function called \"center loss,\" which has proved to be very useful in face recognition. Finally, our framework provides attention guidance to the model in an end-to-end fashion. Extensive experiments on two benchmark datasets show that our approach matches or exceeds the performance of other methods."}
{"_id": "7bbffa6b3d45be6bb4b59067ef2d0b7f3ab54c9f", "title": "Fire and smoke detection without sensors: Image processing based approach", "text": "In this paper, novel models for fire and smoke detection using image processing is provided. The models use different colour models for both fire and smoke. The colour models are extracted using a statistical analysis of samples extracted from different type of video sequences and images. The extracted models can be used in complete fire/smoke detection system which combines colour information with motion analysis."}
{"_id": "2849be8d081baf4a4ff2b9945cff8120f245ee1d", "title": "Phrase-based Machine Translation is State-of-the-Art for Automatic Grammatical Error Correction", "text": "In this work, we study parameter tuning towards the M2 metric, the standard metric for automatic grammar error correction (GEC) tasks. After implementing M2 as a scorer in the Moses tuning framework, we investigate interactions of dense and sparse features, different optimizers, and tuning strategies for the CoNLL-2014 shared task. We notice erratic behavior when optimizing sparse feature weights with M2 and offer partial solutions. To our surprise, we find that a bare-bones phrase-based SMT setup with task-specific parameter-tuning outperforms all previously published results for the CoNLL-2014 test set by a large margin (46.37% M2 over previously 40.56%, by a neural encoder-decoder model) while being trained on the same data. Our newly introduced dense and sparse features widen that gap, and we improve the state-ofthe-art to 49.49% M2."}
{"_id": "68b5037e339b03721b1280541711dbbe332b74f4", "title": "Real-time categorization of driver's gaze zone using the deep learning techniques", "text": "This paper presents a study in which driver's gaze zone is categorized using new deep learning techniques. Since the sequence of gaze zones of a driver reflects precisely what and how he behaves, it allows us infer his drowsiness, focusing or distraction by analyzing the images coming from a camera. A Haar feature based face detector is combined with a correlation filter based MOSS tracker for the face detection task to handle a tough visual environment in the car. Driving database is a big-data which was constructed using a recording setup within a compact sedan by driving around the urban area. The gaze zones consist of 9 categories depending on where a driver is looking at during driving. A convolutional neural network is trained to categorize driver's gaze zone from a given face detected image using a multi-GPU platform, and then its network parameters are transferred to a GPU within a PC running on Windows to operate in the real-time basis. Result suggests that the correct rate of gaze zone categorization reaches to 95% in average, indicating that our system outperforms the state-of-art gaze zone categorization methods based on conventional computer vision techniques."}
{"_id": "f6a9083f7e3f7d7da8251cd4f2ce091a76ad1e35", "title": "New Branch Prediction Vulnerabilities in OpenSSL and Necessary Software Countermeasures", "text": "Software based side-channel attacks allow an unprivileged spy process to extract secret information from a victim (cryptosystem) process by exploiting some indirect leakage of \u201cside-channel\u201d information. It has been realized that some components of modern computer microarchitectures leak certain side-channel information and can create unforeseen security risks. An example of such MicroArchitectural Side-Channel Analysis is the Cache Attack \u2014 a group of attacks that exploit information leaks from cache latencies [4, 7, 13, 15, 17]. Public awareness of Cache Attack vulnerabilities lead software writers of OpenSSL (version 0.9.8a and subsequent versions) to incorporate countermeasures for preventing these attacks. In this paper, we present a new and yet unforeseen side channel attack that is enabled by the recently published Simple Branch Prediction Analysis (SBPA) which is another type of MicroArchitectural Analysis, cf. [2, 3]. We show that modular inversion \u2014 a critical primitive in public key cryptography \u2014 is a natural target of SBPA attacks because it typically uses the Binary Extended Euclidean algorithm whose nature is an input-centric sequence of conditional branches. Our results show that SBPA can be used to extract secret parameters during the execution of the Binary Extended Euclidean algorithm. This poses a new potential risk to crypto-applications such as OpenSSL, which already employs Cache Attack countermeasures. Thus, it is necessary to develop new software mitigation techniques for BPA and incorporate them with cache analysis countermeasures in security applications. To mitigate this new risk in full generality, we apply a security-aware algorithm design methodology and propose some changes to the CRT-RSA algorithm flow. These changes either avoid some of the steps that require modular inversion, or remove the critical information leak from this procedure. In addition, we also show by example that, independently of the required changes in the algorithms, careful software analysis is also required in order to assure that the software implementation does not inadvertently introduce branches that may expose the application to SBPA attacks. These offer several simple ways for modifying OpenSSL in order to mitigate Branch Prediction Attacks."}
{"_id": "1e97792985044b956d9db06684cbc4ccb77e320b", "title": "Omnidirectional planar dipole array antenna for WLAN access point", "text": "In this paper, a low-cost planar dipole array antenna with an omnidirectional radiation characteristic is presented. The proposed omnidirectional planar dipole array antenna comprises two 1/spl times/2 dipole arrays arranged back to back, and can be easily constructed by printing on both sides of a dielectric substrate. By selecting the central ground plane to have a small width (about 6 mm for operating at about 2.4 GHz), the proposed array antenna can have a good omnidirectional radiation pattern in the azimuthal plane (gain variations less than 2 dBi). Prototypes of the proposed antenna for WLAN operation in the 2.4 GHz band (2400-2484 MHz) were constructed and studied."}
{"_id": "a63afdd8a3924a441252750b26560249bb07504d", "title": "An ear-worn continuous ballistocardiogram (BCG) sensor for cardiovascular monitoring", "text": "Traditionally, ballistocardiogram (BCG) has been measured using large and stationary devices. In this work, we demonstrate a portable and continuous BCG monitor that is wearable at the ear. The device has the form factor of a hearing aid and is wirelessly connected to a PC for data recording and analysis. With the ear as an anchoring point, the device uses a MEMS tri-axial accelerometer to measure BCG at the head. Morphological differences exist between head BCG and traditional BCG, but the principal peaks (J waves) and their vectors are preserved. The frequency of J waves corresponds to heart rate, and when used in conjunction with an electrocardiogram's (ECG) R wave, the timing of J waves yields the RJ interval. Results from our clinical study show linear correlation between the RJ interval and the heart's pre-ejection period during hemodynamic maneuvers, thus revealing important information about cardiac contractility and its regulation."}
{"_id": "61929f00ecc860a53e9edbcf99e0d610c6f52c2f", "title": "SimICS/Sun4m: A Virtual Workstation", "text": "System level simulators allow computer architects and system software designers to recreate an accurate and complete replica of the program behavior of a target system, regardless of the availability, existence, or instrumentation support of such a system. Applications include evaluation of architectural design alternatives as well as software engineering tasks such as traditional debugging and performance tuning. We present an implementation of a simulator acting as a virtual workstation fully compatible with the sun4m architecture from Sun Microsystems. Built using the system-level SPARC V8 simulator SimICS, SimICS/sun4m models one or more SPARC V8 processors, supports user-developed modules for data cache and instruction cache simulation and execution profiling of all code, and provides a symbolic and performance debugging environment for operating systems. SimICS/sun4m can boot unmodified operating systems, including Linux 2.0.30 and Solaris 2.6, directly from snapshots of disk partitions. To support essentially arbitrary code, we implemented binary-compatible simulators for several devices, including SCSI, console, interrupt, timers, EEPROM, and Ethernet. The Ethernet simulation hooks into the host and allows the virtual workstation to appear on the local network with full services available ( NFS, NIS, rsh, etc). Ethernet and console traffic can be recorded for future playback. The performance of SimICS/sun4m is sufficient to run realistic workloads, such as the database benchmark TPC-D, scaling factor 1/100, or an interactive network application such as Mozilla. The slowdown in relation to native hardware is in the range of 25 to 75 (measured using SPECint95). We also demonstrate some applications, including modeling an 8-processor sun4m version (which does not exist), modeling future memory hierarchies, and debugging an operating system."}
{"_id": "e1a6ba42e66713d81cecedc4451ed06fc2d6823d", "title": "A pattern recognition approach to voiced-unvoiced-silence classification with applications to speech recognition", "text": "Absb-act\u2014In speech analysis, the voiced-unvoiced decision is usually performed in conjunction with pitch analysis. The linking of voiced-unvoiced (V-UV) decision to pitch analysis not only results in unnecessary complexity, but makes it difficult to classify short speech segments which are less than a few pitch periods in duration. In this paper, we describe a pattern recognition approach for deciding whether a given segment of a speech signal should be classified as voiced speech, un-voiced speech, or silence, based on measurements made on the signal. In this method, five different measurements axe made on the speech segment to be classified. The measured parameters are the zero-crossing rate, the speech energy, the correlation between adjacent speech samples, the first predictor coefficient from a 12-pole linear predictive coding (LPC) analysis, and the energy in the prediction error. The speech segment is assigned to a particular class based on a minimum-distance rule obtained under the assumption that the measured parameters are distributed according to the multidimensional Gaussian probability density function. The means and covariances for the Gaussian distribution are determined from manually classified speech data included in a training set. The method has been found to provide reliable classification with speech segments as short as 10 ms and has been used for both speech analysis-synthesis and recognition applications. A simple nonlinear smoothing algorithm is described to provide a smooth 3-level contour of an utterance for use in speech recognition applications. Quantitative results and several examples illustrating the performance of the methOd are included in the paper."}
{"_id": "3edd80f3f6ec908a928fda2a7f05bac8088c59c5", "title": "Feudal Multi-Agent Hierarchies for Cooperative Reinforcement Learning", "text": "We investigate how reinforcement learning agents can learn to cooperate. Drawing inspiration from human societies, in which successful coordination of many individuals is often facilitated by hierarchical organisation, we introduce Feudal Multiagent Hierarchies (FMH). In this framework, a \u2018manager\u2019 agent, which is tasked with maximising the environmentally-determined reward function, learns to communicate subgoals to multiple, simultaneously-operating, \u2018worker\u2019 agents. Workers, which are rewarded for achieving managerial subgoals, take concurrent actions in the world. We outline the structure of FMH and demonstrate its potential for decentralised learning and control. We find that, given an adequate set of subgoals from which to choose, FMH performs, and particularly scales, substantially better than cooperative approaches that use a shared reward function."}
{"_id": "43292e41f11ee251ca1fe564555dd6cc5fbe6a0c", "title": "Feature Selection with Linked Data in Social Media", "text": "Feature selection is widely used in preparing highdimensional data for effective data mining. Increasingly popular social media data presents new challenges to feature selection. Social media data consists of (1) traditional high-dimensional, attribute-value data such as posts, tweets, comments, and images, and (2) linked data that describes the relationships between social media users as well as who post the posts, etc. The nature of social media also determines that its data is massive, noisy, and incomplete, which exacerbates the already challenging problem of feature selection. In this paper, we illustrate the differences between attributevalue data and social media data, investigate if linked data can be exploited in a new feature selection framework by taking advantage of social science theories, extensively evaluate the effects of user-user and user-post relationships manifested in linked data on feature selection, and discuss some research issues for future work."}
{"_id": "695fb10370d93390810ae082bd806b2169a52943", "title": "River flow estimation using adaptive neuro fuzzy inference system", "text": "Accurate estimation of River flow changes is a quite important problem for a wise and sustainable use. Such a problem is crucial to the works and decisions related to the water resources and management. In this study, an adaptive network-based fuzzy inference system (ANFIS) approach was used to construct a River flow forecasting system. In particular, the applicability of ANFIS as an estimation model for River flow was investigated. To illustrate the applicability and capability of the ANFIS, the River Great Menderes, located the west of Turkey and the most important water resource of Great Menderes Catchment\u2019s, was chosen as a case study area. The advantage of this method is that it uses the input\u2013output data sets. Totally 5844 daily data sets collected in 1985\u20132000 years were used to estimate the River flow. The models having various input structures were constructed and the best structure was investigated. In addition four various training/testing data sets were constructed by cross validation methods and the best data set was investigated. The performance of the ANFIS models in training and testing sets were compared with the observations and also evaluated. The results indicated that the ANFIS can be applied successfully and provide high accuracy and reliability for River flow estimation. \u00a9 2006 IMACS. Published by Elsevier B.V. All rights reserved."}
{"_id": "29c6a8fd6201200bc83b54bad6a10d0c023b9928", "title": "CASIA@V2: A MLN-based Question Answering System over Linked Data", "text": "We present a question answering system (CASIA@V2) over Linked Data (DBpedia), which translates natural language questions into structured queries automatically. Existing systems usually adopt a pipeline framework, which contains four major steps: 1) Decomposing the question and detecting candidate phrases; 2) mapping the detected phrases into semantic items of Linked Data; 3) grouping the mapped semantic items into semantic triples; and 4) generating the rightful SPARQL query. We present a jointly learning framework using Markov Logic Network(MLN) for phrase detection, phrases mapping to semantic items and semantic items grouping. We formulate the knowledge for resolving the ambiguities in three steps of QALD as first-order logic clauses in a MLN. We evaluate our approach on QALD-4 test dataset and achieve an F-measure score of 0.36, an average precision of 0.32 and an average recall of 0.40 over 50 questions."}
{"_id": "f26c090aff706da010c28735bea85223210fdd54", "title": "Offset Reduction of CMOS Based DynamicComparator by using Charge Storage Techniques- A Comparative Study", "text": "Comparator is one of the most widely used building block for analog and mixed signal systems. For the implementation of high-performance CMOS A/D converters, low offset comparators are essential. In this paper, dynamic comparator offset is calculated to the extent of high accuracy. The offset so calculated has been reduced by the charge storage techniques to achieve an efficient design. In addition to the offset, propagation delay and power dissipation, being the important parameter of the comparator, has been analyzed. It is observed that offset voltage in the comparator has been reduced to 350\u03bcV for output offset storage technique and 400\u03bcV for input offset storage techniques from 91mV. In this paper, BPTM model has been used in analyze the dynamic comparator. KeywordsDynamic comparator, offset voltage, storage capacitor, input offset storage, output offset storage."}
{"_id": "6434b7610aa9e7c37fce2258ab459fa99a631748", "title": "Super-resolution through neighbor embedding", "text": "In this paper, we propose a novel method for solving single-image super-resolution problems. Given a low-resolution image as input, we recover its high-resolution counterpart using a set of training examples. While this formulation resembles other learning-based methods for super-resolution, our method has been inspired by recent manifold teaming methods, particularly locally linear embedding (LLE). Specifically, small image patches in the lowand high-resolution images form manifolds with similar local geometry in two distinct feature spaces. As in LLE, local geometry is characterized by how a feature vector corresponding to a patch can be reconstructed by its neighbors in the feature space. Besides using the training image pairs to estimate the high-resolution embedding, we also enforce local compatibility and smoothness constraints between patches in the target high-resolution image through overlapping. Experiments show that our method is very flexible and gives good empirical results."}
{"_id": "a21bb8ddfe448e890871e248e196dc0f56000d3a", "title": "Example-Based Super-Resolution", "text": "Image-based models for computer graphics lack resolution independence: they cannot be zoomed much beyond the pixel resolution they were sampled at without a degradation of quality. Interpolating images usually results in a blurring of edges and image details. We describe image interpolation algorithms which use a database of training images to create plausible high-frequency details in zoomed images. Image pre-processing steps allow the use of image detail from regions of the training images which may look quite di erent from the image to be processed. These methods preserve ne details, such as edges, generate believable textures, and can give good results even after zooming multiple octaves. This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonpro t educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Information Technology Center America; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Information Technology Center America. All rights reserved. Copyright c Mitsubishi Electric Information Technology Center America, 2001 201 Broadway, Cambridge, Massachusetts 02139 1. First printing, TR2001-30, August, 2001. Egon Pasztor's present address: MIT Media Lab 20 Ames St. Cambridge, MA 02139 Example-based Super-resolution William T. Freeman, Thouis R. Jones, Egon C. Pasztor MitsubishiElectricResearchLaboratories(MERL) 201Broadway Cambridge,MA 02139 (a) (b) Figure 1: An object modelledby traditionalpolygon techniques may lack someof the richnessof real-world objects,but behaves properlyunderzooming,(b)."}
{"_id": "6f657378907d2e4ad3167f4c5fb7b52426a7d6cb", "title": "Journey to Data Quality", "text": ""}
{"_id": "13078c098115391e306d96efefbbf8aeb6cdf1fa", "title": "Mediating intimacy: designing technologies to support strong-tie relationships", "text": "Intimacy is a crucial element of domestic life, and many interactive technologies designed for other purposes have been appropriated for use within intimate relationships. However, there is a deficit in current understandings of how technologies are used within intimate relationships, and how to design technologies to support intimate acts. In this paper we report on work that has addressed these deficits. We used cultural probes and contextual interviews and other ethnographically informed techniques to investigate how interactive technologies are used within intimate relationships. From this empirical work we generated a thematic understanding of intimacy and the use of interactional technologies to support intimate acts. We used this understanding to inform the design of intimate technologies. A selection of our design concepts is also presented."}
{"_id": "82135457ec005a3a2adeaa46f7a0ca68b6dc3cae", "title": "The dynamics of innovation : from National Systems and \u2018 \u2018 Mode 2 \u2019 \u2019 to a Triple Helix of university \u2013 industry \u2013 government relations", "text": "The Triple Helix of university\u2013industry\u2013government relations is compared with alternative models for explaining the current research system in its social contexts. Communications and negotiations between institutional partners generate an overlay that increasingly reorganizes the underlying arrangements. The institutional layer can be considered as the retention mechanism of a developing system. For example, the national organization of the system of innovation has historically been important in determining competition. Reorganizations across industrial sectors and nation states, however, are induced by \u017d . \u017d . new technologies biotechnology, ICT . The consequent transformations can be analyzed in terms of neoevolutionary mechanisms. University research may function increasingly as a locus in the \u2018\u2018laboratory\u2019\u2019 of such knowledge-intensive network transitions. q 2000 Elsevier Science B.V. All rights reserved."}
{"_id": "528d1ad4725277b25f17c951c7aa637f58da1c9e", "title": "Text and Object Detection on Billboards", "text": "To captivate people\u2019s attention in a blink of an eye, a billboard as a commercial advertisement must be attractive and informative. When space means money like on the billboard, much important information must be dropped out. Unfortunately, insufficient details and the difficulty of accessing those details might easily kill the customer\u2019s attention in a flash as well. In this work, we propose a system that automatically connects a billboard with the product website instantly. More specifically, given the billboard image as an input, the system will lead users immediately to the product website for more details, or contact information. The results demonstrate that our system is significantly outperform the Google image search baseline where the product website is always included in the top 10 websites. Finally, We posit that the system can save the customers\u2019 time and complexity to access their product of interest, resulting the increasing of product sale."}
{"_id": "fd4c6f56c58923294d28dc415e1131a976e8ea93", "title": "Lean, six sigma and lean sigma: fads or real process improvement methods?", "text": "Purpose \u2013 The purpose of this paper is to explore if six sigma and lean are new methods, or if they are repackaged versions of previously popular methods \u2013 total quality management (TQM) and just-in-time (JIT). Design/methodology/approach \u2013 The study is based on a critical comparison of lean with JIT and six sigma with TQM, a study of the measure of the publication frequency \u2013 the number of academic articles published every year of the previous 30 years \u2013 for each topic, and a review of critical success factors (CSF) for change efforts. Findings \u2013 The more recent concepts of lean and six sigma have mainly replaced \u2013 but not necessarily added to \u2013 the concepts of JIT and TQM. lean and six sigma are essentially repackaged versions of the former, and the methods seem to follow the fad (product) life cycle. The literature offers fairly similar and rather general CSF for these methods, e.g. top management support and the importance of communication and information. What seems to be missing, however, is the need for a systemic approach to organizational change and improvement. Practical implications \u2013 A prediction is, given the fad or product life cycle phenomenon, that there will be a new method promoted soon, something perhaps already experienced with the borderline preposterous concept of lean six sigma. On the other hand, based on the gap in time between both JIT and lean, and TQM and six sigma \u2013 a gap filled by BRP/reengineering \u2013 the next method will be process oriented. This paper concludes with the discussion of the need for a process-based approach to organizational improvement efforts. Originality/value \u2013 This paper is of value in that it analyzes what lessons can be learnt from organizational change and improvement efforts. The analysis includes a comparison of CSF for any change project before discussing the need for a process (systems) perspective for successful organizational improvement efforts."}
{"_id": "d97dd8f927d9a1d9cf5709f57fcc0a36b6bd3579", "title": "THE THREE-DIMENSIONAL EXTENSION OF SPACE SYNTAX 048", "text": "The purpose of this study is to develop a new space syntax technique by adding third dimension in analysis of urban structure that is important to understand the formation of the cities. The theoretical framework of the study is put forward by integrating space syntax with the theory of the image of city. This entails the essential elucidation of spatial cognition pattern and image and the integration of space syntax and the image of city. The paper analyzes and demonstrates the prerequisite and feasibility of the mutual complementarily between space syntax and the image of city. The threedimensional space syntax integrated with the image of city is different from the traditional syntax that is only limited to non-figurative two-dimensional spatial cognition. It also emphasizes the important influences of the three-dimensional image points on people's behaviour in space and it is quite close to the accurate description of \u201cnatural movement\u201d. Based on the three-dimensional theory, a concept model of the threedimensional syntax is constructed, and its content includes how to describe forming process of the three-dimensional spatial image pots during the cognition activity, how to construct the objective evaluation model for the three-dimensional image pots and how to put forward the concept of weighted-integration to calculate the concept mathematical model. In the end, the paper carries through three experimental analyses. The paper argues that the extended three-dimensional syntactical model is much closer to people's spatial illation and cognition. It should be more accurate to use the model to describe and grasp the urban configuration."}
{"_id": "e9340d0312f9ea05a92bb2631da503a7f07343e0", "title": "ISCEV Standard for full-field clinical electroretinography (2015 update)", "text": "This document, from the International Society for Clinical Electrophysiology of Vision (ISCEV), presents an updated and revised ISCEV Standard for full-field clinical electroretinography (ffERG or simply ERG). The parameters for Standard flash stimuli have been revised to accommodate a variety of light sources including gas discharge lamps and light emitting diodes. This ISCEV Standard for clinical ERGs specifies six responses based on the adaptation state of the eye and the flash strength: (1) Dark-adapted 0.01 ERG (rod ERG); (2) Dark-adapted 3 ERG (combined rod-cone standard flash ERG); (3) Dark-adapted 3 oscillatory potentials; (4) Dark-adapted 10 ERG (strong flash ERG); (5) Light-adapted 3 ERG (standard flash \u201ccone\u201d ERG); and (6) Light-adapted 30\u00a0Hz flicker ERG. ISCEV encourages the use of additional ERG protocols for testing beyond this minimum standard for clinical ERGs."}
{"_id": "c0061e74b26869aed20433250e6232dee5345393", "title": "Analysis and Design of Current-Fed Half-Bridge (C)(LC)\u2013( LC) Resonant Topology for Inductive Wireless Power Transfer Application", "text": "This paper proposes and analyzes a new power electronics circuit topology for wireless inductive power transfer application using current-fed half-bridge converter with CCL-LC resonant network. The major focus is analysis and implementation of a new current-fed resonant topology with current-sharing and voltage doubling features. Generally, inductive power transfer circuits with current fed converter use parallel CL resonant tank to transfer power effectively through air gap. However, in medium power application, this topology suffers from a major limitation of high voltage stress across the inverter semiconductor devices owing to high reactive power consumed by loosely coupled coil. In proposed topology, this is mitigated by adding a capacitor in series with the coil developing series-parallel CCL tank. The power flow is controlled through variable frequency modulation. Soft-switching of the devices is obtained irrespective of the load current. For grid-to-vehicle or solar-to-vehicle, the converter is analyzed and detailed design procedure is illustrated. Experimental results are presented to verify the analysis and demonstrate the performance."}
{"_id": "b3d2933544191aef6fc412f4bea5a0cb6408165b", "title": "A Comparative Study of the Differences in Family Communication among Young Urban Adult Consumers and the Development of their Materialistic Values : A Discriminant Analysis", "text": "The study attempts to uncover the characteristics of materialism groups in Malaysia. It assesses the differences between materialism groups, i.e., low and high materialistic groups, using demographic and family communication dimensions. The data was collected through self-administered questionnaires. The sample consisted of 956 respondents. The majority of the respondents were Malays followed by Chinese and Indians. The proportion of female respondents was higher than the male respondents. Most of the respondents were single and in the age group of between 19-29 years old. Independent sample t-tests were used to compare mean scores for the study variables between \u2018high\u2019 materialism and \u2018low\u2019 materialism groups and significant mean differences were found between \u2018high\u2019 and \u2018low\u2019 materialism groups in terms of socio-oriented family communication construct. Specifically, it was found that the \u2018high\u2019 materialism group has considerably greater ratings on the constructs. Internal consistency reliability assessment using Cronbach coefficient alpha revealed that all the three dimensions had high reliability. A stepwise discriminant analysis performed on two family communication variables found that one variable was significant in differentiating the two materialism groups. The variable was socio-oriented family communication. The implications, significance and limitations of the study are discussed."}
{"_id": "cf8fdb67d6e2489c0ea1ae0921a90cfb81c3e74b", "title": "Near Optimal Online Algorithms and Fast Approximation Algorithms for Resource Allocation Problems", "text": "We present prior robust algorithms for a large class of resource allocation problems where requests arrive one-by-one (online), drawn independently from an unknown distribution at every step. We design a single algorithm that, for every possible underlying distribution, obtains a 1\u2212\u03b5 fraction of the profit obtained by an algorithm that knows the entire request sequence ahead of time. The factor \u03b5 approaches 0 when no single request consumes/contributes a significant fraction of the global consumption/contribution by all requests together. We show that the tradeoff we obtain here that determines how fast \u03b5 approaches 0, is near optimal: We give a nearly matching lower bound showing that the tradeoff cannot be improved much beyond what we obtain.\n Going beyond the model of a static underlying distribution, we introduce the adversarial stochastic input model, where an adversary, possibly in an adaptive manner, controls the distributions from which the requests are drawn at each step. Placing no restriction on the adversary, we design an algorithm that obtains a 1\u2212\u03b5 fraction of the optimal profit obtainable w.r.t. the worst distribution in the adversarial sequence. Further, if the algorithm is given one number per distribution, namely the optimal profit possible for each of the adversary\u2019s distribution, then we design an algorithm that achieves a 1\u2212\u03b5 fraction of the weighted average of the optimal profit of each distribution the adversary picks.\n In the offline setting we give a fast algorithm to solve very large linear programs (LPs) with both packing and covering constraints. We give algorithms to approximately solve (within a factor of 1+\u03b5) the mixed packing-covering problem with O(\u03b3 m log (n/\u0394)/\u03b52) oracle calls where the constraint matrix of this LP has dimension n\u00d7 m, the success probability of the algorithm is 1\u2212\u0394, and \u03b3 quantifies how significant a single request is when compared to the sum total of all requests.\n We discuss implications of our results to several special cases including online combinatorial auctions, network routing, and the adwords problem."}
{"_id": "6e35df7addc7fb788edf78f4d925684528de3f7e", "title": "Modeling wrinkles on smooth surfaces for footwear design", "text": "We describe two new shape operators that superimpose wrinkles on top of a smooth NURBS surface. Previous research studying wrinkles focused mostly on cloth modeling or in animations, which are driven more by visual realism, but allow large elastic deformations. Our operators generate wrinkle-shaped deformations in a region of a smooth surface along a given boundary based on a few basic parametric inputs such as wrinkle magnitude and extent (these terms will be defined in the paper). The essential geometric transformation to map the smooth surface to a wrinkled one will be defined purely in terms of the geometry of the surface and the input parameters. Our model is based on two surface properties: geodesic offsets and surface energy. Practical implementation of the operators is discussed, and examples presented. Finally, the motivation for the operators will be given through their application in the computer-aided design and manufacture of footwear."}
{"_id": "3f53b5181c41566ccc37ac06316aa06a5a4ad13c", "title": "STREAM: A First Programming Process", "text": "Programming is recognized as one of seven grand challenges in computing education. Decades of research have shown that the major problems novices experience are composition-based---they may know what the individual programming language constructs are, but they do not know how to put them together. Despite this fact, textbooks, educational practice, and programming education research hardly address the issue of teaching the skills needed for systematic development of programs.\n We provide a conceptual framework for incremental program development, called Stepwise Improvement, which unifies best practice in modern software development such as test-driven development and refactoring with the prevailing perspective of programming methodology, stepwise refinement. The conceptual framework enables well-defined characterizations of incremental program development.\n We utilize the conceptual framework to derive a programming process, STREAM, designed specifically for novices. STREAM is a carefully down-scaled version of a full and rich agile software engineering process particularly suited for novices learning object-oriented programming. In using it we hope to achieve two things: to help novice programmers learn faster and better while at the same time laying the foundation for a more thorough treatment of more advanced aspects of software engineering. In this article, two examples demonstrate the application of STREAM.\n The STREAM process has been taught in the introductory programming courses at our universities for the past three years and the results are very encouraging. We report on a small, preliminary study evaluating the learning outcome of teaching STREAM. The study indicates a positive effect on the development of students\u2019 process competences."}
{"_id": "60e752707e3adb38939e43d8579392edda26c198", "title": "Clustering short text using Ncut-weighted non-negative matrix factorization", "text": "Non-negative matrix factorization (NMF) has been successfully applied in document clustering. However, experiments on short texts, such as microblogs, Q&A documents and news titles, suggest unsatisfactory performance of NMF. An major reason is that the traditional term weighting schemes, like binary weight and tfidf, cannot well capture the terms' discriminative power and importance in short texts, due to the sparsity of data. To tackle this problem, we proposed a novel term weighting scheme for NMF, derived from the Normalized Cut (Ncut) problem on the term affinity graph. Different from idf, which emphasizes discriminability on document level, the Ncut weighting measures terms' discriminability on term level. Experiments on two data sets show our weighting scheme significantly boosts NMF's performance on short text clustering."}
{"_id": "241ff9045015b132d298ab3e913d3ee76da9e9bb", "title": "Vessel enhancement algorithm in digital retinal fundus microaneurysms filter for nonproliferative diabetic retinopathy classification", "text": "Diabetic Retinopathy is one of a complication of diabetes which can cause blindness. There are two levels of diabetic retinopathy which are nonproliferative diabetic retinopathy (NPDR) and proliferative diabetic retinopathy (PDR). The presence of microaneurysms in the eye is one of the early signs of diabetic retinopathy. Automated detection of microaneurysms in digital color fundus photographs is developed to help opthalmologist to detect the emergence of its early symptoms and determine the next action step for the patient. In this study we developed microaneurysms filter algorithm with the concept of vessel enhancement in order to extract the structure of microaneurysms in medical imagery, especially in the retinal image. Microaneurysms filter algorithm has been able to extract part of microaneurysms in the retinal image and detect pretty well. In addition, microaneurysms filters can also be developed to detect the next symptoms of diabetic retinopathy-hemorraghes (bleeding) that has features in common with microaneurysms which are larger and have irregular shape. Furthermore, the nonproliferative diabetic retinopathy (NPDR) classification system was developed based on the microaneurysms and hemorraghes detection. The system was tested against the patient data. Microanuerisyms detection results on DiaRetDB data has sensitivity value by 57,4% to ophthalmlogist 1, 35,37% for opthalmologist 2 and 62,68% for opthalmologist 3."}
{"_id": "9391f43bc92d3bd391d7ea67e6b0481c9a8321bf", "title": "IRFs: master regulators of signalling by Toll-like receptors and cytosolic pattern-recognition receptors", "text": "The interferon-regulatory factor (IRF) family of transcription factors was initially found to be involved in the induction of genes that encode type I interferons. IRFs have now been shown to have functionally diverse roles in the regulation of the immune system. Recently, the crucial involvement of IRFs in innate and adaptive immune responses has been gaining much attention, particularly with the discovery of their role in immunoregulation by Toll-like receptors and other pattern-recognition receptors."}
{"_id": "364dc9031ec997a15015099d876d8e49c76a4e9f", "title": "A Scalable Framework for Spatiotemporal Analysis of Location-based Social Media Data", "text": "In the past several years, social media (e.g., Twitter and Facebook) has been experiencing a spectacular rise and popularity, and becoming a ubiquitous discourse for content sharing and social networking. With the widespread of mobile devices and location-based services, social media typically allows users to share whereabouts of daily activities (e.g., check-ins and taking photos), and thus strengthens the roles of social media as a proxy to understand human behaviors and complex social dynamics in geographic spaces. Unlike conventional spatiotemporal data, this new modality of data is dynamic, massive, and typically represented in stream of unstructured media (e.g., texts and photos), which pose fundamental representation, modeling and computational challenges to conventional spatiotemporal analysis and geographic information science. In this paper, we describe a scalable computational framework to harness massive location-based social media data for efficient and systematic spatiotemporal data analysis. Within this framework, the concept of space-time trajectories (or paths) is applied to represent activity profiles of social media users. A hierarchical spatiotemporal data model, namely a spatiotemporal data cube model, is developed based on collections of space-time trajectories to represent the collective dynamics of social media users across aggregation boundaries at multiple spatiotemporal scales. The framework is implemented based upon a public data stream of Twitter feeds posted on the continent of North America. To demonstrate the advantages and performance of this framework, an interactive flow mapping interface (including both single-source and multiple-source flow mapping) is developed to allow real-time, and interactive visual exploration of movement dynamics in massive location-based social media at multiple scales. \u2217Corresponding author Preprint submitted to Computers, Enviroment and Urban Systems September 10, 2014 ar X iv :1 40 9. 28 26 v1 [ cs .S I] 8 S ep 2 01 4"}
{"_id": "e1e6e8959769a465a6bb97df1bb76ad4e754b83d", "title": "Speedy bus mastering PCI express", "text": "PCI Express is a ubiquitous bus interface providing the highest bandwidth connection in the PC platform. Sadly, support for it in FPGAs is limited and/or expensive. The Speedy PCIe core addresses this problem by bridging the gap from the bare bones interface to a user friendly, high performance design. This paper describes some of the fundamental design challenges and how they were addressed as well as giving detailed results. The hardware and software source code are available for free download from [12]."}
{"_id": "2656a138c2005cbb38bb91f86c73b4558198603f", "title": "Real-Time or Near Real-Time Persisting Daily Healthcare Data Into HDFS and ElasticSearch Index Inside a Big Data Platform", "text": "Mayo Clinic (MC) healthcare generates a large number of HL7 V2 messages\u20140.7\u20131.1 million on weekends and 1.7\u20132.2 million on business days at present. With multiple RDBMS-based systems, such a large volume of HL7 messages still cannot be real-time or near-real-time stored, analyzed, and retrieved for enterprise-level clinic and nonclinic usage. To determine if Big Data technology coupled with ElasticSearch technology can satisfy MC daily healthcare needs for HL7 message processing, a BigData platform was developed to contain two identical Hadoop clusters (TDH1.3.2 version)\u2014each containing an ElasticSearch cluster and instances of a storm topology\u2014MayoTopology for processing HL7 messages on MC ESB queues into an ElasticSearch index and the HDFS. The implemented BigData platform can process 62 \u00b1 4 million HL7 messages per day while the ElasticSearch index can provide ultrafast free-text searching at a speed level of 0.2-s per query on an index containing a dataset of 25 million HL7-derived-JSON-documents. The results suggest that the implemented BigData platform exceeds MC enterprise-level patient-care needs."}
{"_id": "31a56eff814406d4f933662196bb0ec16a22b2a2", "title": "egoSlider: Visual Analysis of Egocentric Network Evolution", "text": "Ego-network, which represents relationships between a specific individual, i.e., the ego, and people connected to it, i.e., alters, is a critical target to study in social network analysis. Evolutionary patterns of ego-networks along time provide huge insights to many domains such as sociology, anthropology, and psychology. However, the analysis of dynamic ego-networks remains challenging due to its complicated time-varying graph structures, for example: alters come and leave, ties grow stronger and fade away, and alter communities merge and split. Most of the existing dynamic graph visualization techniques mainly focus on topological changes of the entire network, which is not adequate for egocentric analytical tasks. In this paper, we present egoSlider, a visual analysis system for exploring and comparing dynamic ego-networks. egoSlider provides a holistic picture of the data through multiple interactively coordinated views, revealing ego-network evolutionary patterns at three different layers: a macroscopic level for summarizing the entire ego-network data, a mesoscopic level for overviewing specific individuals' ego-network evolutions, and a microscopic level for displaying detailed temporal information of egos and their alters. We demonstrate the effectiveness of egoSlider with a usage scenario with the DBLP publication records. Also, a controlled user study indicates that in general egoSlider outperforms a baseline visualization of dynamic networks for completing egocentric analytical tasks."}
{"_id": "e04af4480f9e684ac2c441b80c46df95b4f24d4e", "title": "Has the Internet become indispensable?", "text": "As was empirically demonstrated in [3], \u201cthe adoption rate of the Internet has exceeded that of earlier mass communication technologies by several magnitudes,\u201d making it an \u201cirreversible\u201d innovation. Studies have also shown that for the generation of U.S.-based youth that grew up with the Internet, it is gradually displacing television as their main source of entertainment, communication, and education [6]. Here, we explore how the Internet has become indispensable to people in their daily lives and develop a conceptual model allowing us to address the associated research questions. The idea is that the Internet has become so embedded in the daily fabric of people\u2019s lives that they simply cannot live without it. How is the Internet indispensable and in what ways? For which groups of people is it indispensable, for what tasks, and how has BY DONNA L. HOFFMAN, THOMAS P. NOVAK, AND ALLADI VENKATESH"}
{"_id": "c270d9d9a2113600db3d517b930082655bd871e6", "title": "Improved Microaneurysm Detection using Deep Neural Networks", "text": "In this work, we propose a novel microaneurysm (MA) detection for early dieabetic ratinopathy screening using color fundus images. Since MA usually the first lesions to appear as a indicator of diabetic retinopathy, accurate detection of MA is necessary for treatment. Each pixel of the image is classified as either MA or non-MA using deep neural network with dropout training procedure using maxout activation function. No preprocessing step or manual feature extraction is required. Substantial improvements over standard MA detection method based on pipeline of preprocessing, feature extraction, classification followed by postprocessing is achieved. The presented method is evaluated in publicly available Retinopathy Online Challenge (ROC) and Diaretdb1v2 database and achieved state-of-the-art accuracy."}
{"_id": "96544d4857777682129280b9d934d6ff2f221f40", "title": "Orthogonal nonnegative matrix t-factorizations for clustering", "text": "Currently, most research on nonnegative matrix factorization (NMF)focus on 2-factor $X=FG^T$ factorization. We provide a systematicanalysis of 3-factor $X=FSG^T$ NMF. While it unconstrained 3-factor NMF is equivalent to it unconstrained 2-factor NMF, itconstrained 3-factor NMF brings new features to it constrained 2-factor NMF. We study the orthogonality constraint because it leadsto rigorous clustering interpretation. We provide new rules for updating $F,S, G$ and prove the convergenceof these algorithms. Experiments on 5 datasets and a real world casestudy are performed to show the capability of bi-orthogonal 3-factorNMF on simultaneously clustering rows and columns of the input datamatrix. We provide a new approach of evaluating the quality ofclustering on words using class aggregate distribution andmulti-peak distribution. We also provide an overview of various NMF extensions andexamine their relationships."}
{"_id": "b94e246b151cfd08a276a5302b804618dd27c218", "title": "Continuous deployment of software intensive products and services: A systematic mapping study", "text": "BACKGROUND The software intensive industry is moving towards the adoption of a value-driven and adaptive real-time business paradigm. The traditional view of software as an item that evolves through releases every few months is being replaced by continuous evolution of software functionality. OBJECTIVE This study aims to classify and analyse literature related to continuous deployment in the software domain in order to scope the phenomenon, provide an overview of its state-of-the-art, investigate the scientific evidence in the reported results and identify areas that are suitable for further research. METHOD We conducted a systematic mapping study and classified the continuous deployment literature. The benefits and challenges related to continuous deployment were also analyzed. RESULTS The systematic mapping study includes 50 primary studies published between 2001 and 2014. An in-depth analysis of the primary studies revealed ten recurrent themes that characterize continuous deployment and provide researchers with directions for future work. In addition, a set of benefits and challenges of which practitioners may take advantage were identified. CONCLUSION Overall, although the topic area is very promising, it is still in its infancy, thus offering a plethora of new opportunities for both researchers and software intensive companies."}
{"_id": "db8fe6da397c09c0baccac1598b3ee6ee938fc3a", "title": "On the journey to continuous deployment: Technical and social challenges along the way", "text": "Context: Continuous deployment (CD) is an emerging software development process with organisations such as Facebook, Microsoft, and IBM successfully implementing and using the process. The CD process aims to immediately deploy software to customers as soon as new code is developed, and can result in a number of benefits for organizations, such as: new business opportunities, reduced risk for each release, and prevent development of wasted software. There is little academic literature on the challenges organisations face when adopting the CD process, however there are many anecdotal challenges that organisations have voiced on their online blogs. Objective: The aim of this research is to examine the challenges faced by organisations when adopting CD as well as the strategies to mitigate these challenges. Method: An explorative case study technique that involves in-depth interviews with software practitioners in an organisation that has adopted CD was conducted to identify these challenges. Results: This study found a total of 20 technical and social adoption challenges that organisations may face when adopting the CD process. The results are discussed to gain a deeper understanding of the strategies employed by organisations to mitigate the impacts of these challenges. Conclusion: While a number of individual technical and social adoption challenges were uncovered by the case study in this research, most challenges were not faced in isolation. The severity of these challenges were reduced by a number of mitigation strategies employed by the case study organization. It is concluded that organisations need to be well prepared to handle technical and social adoption challenges with their existing expertise, processes and tools before adopting the CD"}
{"_id": "0218370d599af1b573c815de3c0aae5c47866ff8", "title": "Using thematic analysis in psychology", "text": "Thematic analysis is a poorly demarcated, rarely acknowledged, yet widely used qualitative analytic method within psychology. In this paper, we argue that it offers an accessible and theoretically flexible approach to analysing qualitative data. We outline what thematic analysis is, locating it in relation to other qualitative analytic methods that search for themes or patterns, and in relation to different epistemological and ontological positions. We then provide clear guidelines to those wanting to start thematic analysis, or conduct it in a more deliberate and rigorous way, and consider potential pitfalls in conducting thematic analysis. Finally, we outline the disadvantages and advantages of thematic analysis. We conclude by advocating thematic analysis as a useful and flexible method for qualitative research in and beyond psychology. Qualitative Research in Psychology 2006; 3: 77 /101"}
{"_id": "1ef19db16443e420235067fc8b1114e96c3e1070", "title": "The qualitative interview in IS research: Examining the craft", "text": "The qualitative interview is one of the most important data gathering tools in qualitative research, yet it has remained an unexamined craft in IS research. This paper discusses the potential difficulties, pitfalls and problems of the qualitative interview in IS research. Building on Goffman\u2019s seminal work on social life, the paper proposes a dramaturgical model as a useful way of conceptualizing the qualitative interview. Based on this model the authors suggest guidelines for the conduct of qualitative interviews. 2006 Elsevier Ltd. All rights reserved."}
{"_id": "5789d652132510f056c059f36328a205cd90b25c", "title": "Recommendations without user preferences: a natural language processing approach", "text": "We examine the problems with automated recommendation systems when information about user preferences is limited. We equate the problem to one of content similarity measurement and apply techniques from Natural Language Processing to the domain of movie recommendation. We describe two algorithms, a na\u00efve word-space approach and a more sophisticated approach using topic signatures, and evaluate their performance compared to baseline, gold standard, and commercial systems."}
{"_id": "995b701a79c3c719a860e770d369c9b096808fba", "title": "Development of Inchworm In-Pipe Robot Based on Self-Locking Mechanism", "text": "In-pipe robots are used to carry sensors and some other repairing instruments to perform inspection and maintenance jobs inside pipelines. In this paper, a self-locking mechanism is presented to improve the traction ability of in-pipe robots and avoid their traditional limitations. The structure of this type of in-pipe robot is presented and some critical design issues on the principle of self-locking mechanism are discussed. Prototypes of 19 mm diameter were produced, and related experiments were performed on specially designed test platform. The traction ability of the proposed in-pipe robot was measured experimentally to be 15.2 N, far beyond its maximum static friction of 0.35 N with the inner surface of pipeline. It means that this development has broken the traditional limitation of in-pipe robot whose traction ability was smaller than its maximum static friction with the pipeline."}
{"_id": "c8b5f812318078b3a09a5e4b9ffcc2eddf416fe8", "title": "StyleCheck : An Automated Stylistic Analysis Tool", "text": "StyleCheck is a user-friendly tool with multiple functions designed to aid in the production of quality writing. Its features include stylistic analysis (on both documentwide and individual-sentence scales) and spelling and grammar check, as well as generating suggested replacements for all types of errors. In addition, StyleCheck includes the capability to identify the famous author (out of a limited corpus) with the style most similar to the user\u2019s. The source code for StyleCheck is available online at: https://github.com/alexpwelton/StyleCheck Dartmouth Computer Science Technical Report TR2014-754"}
{"_id": "521a6791980d06df10c9bf5482c6b2134e678d0d", "title": "Capacitive coupling for power and data telemetry to implantable biomedical microsystems", "text": "This paper proposes a new method for power telemetry and wireless data transfer to biomedical implants. Confinement of the transferred energy within a predefined area, hence not influencing the neighboring sensitive circuitry on the microsystem, and also allowing for more than one telemetry link on the same platform and even operating at the same carrier frequency are among the advantages of the proposed method over the traditional approaches for power and data telemetry."}
{"_id": "aea988dd3b2cbf3670c7d1380e15a6eaeb0a9f12", "title": "Document classification through image-based character embedding and wildcard training", "text": "Languages such as Chinese and Japanese have a significantly large number (several thousands) of alphabets as compared to other languages, and each of their sentences consists of several concatenated words with wide varieties of inflected forms; thus appropriate word segmentation is quite difficult. Therefore, recently proposed sophisticated language-processing methods designed for languages such as English cannot be applied. In this paper, we address those issues and propose a new and efficient document classification technique for such languages. The proposed method is characterized into a new \u201cimage-based character embedding\u201d method and character-level convolutional neural networks method with \u201cwildcard training.\u201d The first method encodes each character based on its pictorial structures and preserves them. Further, the second method treats some of the input characters as wildcards in the classification stage and functions as efficient data augmentation. We confirmed that our proposed method showed superior performance when compared conventional methods for Japanese document classification problems."}
{"_id": "dfeddb420dd7b37dde0322762d3b221c1cb0499b", "title": "Depth inpainting by tensor voting.", "text": "Depth maps captured by range scanning devices or by using optical cameras often suffer from missing regions due to occlusions, reflectivity, limited scanning area, sensor imperfections, etc. In this paper, we propose a fast and reliable algorithm for depth map inpainting using the tensor voting (TV) framework. For less complex missing regions, local edge and depth information is utilized for synthesizing missing values. The depth variations are modeled by local planes using 3D TV, and missing values are estimated using plane equations. For large and complex missing regions, we collect and evaluate depth estimates from self-similar (training) datasets. We align the depth maps of the training set with the target (defective) depth map and evaluate the goodness of depth estimates among candidate values using 3D TV. We demonstrate the effectiveness of the proposed approaches on real as well as synthetic data."}
{"_id": "40ae8d1b2e84055281adf0c64db1f857b30c8c24", "title": "A linear algorithm for finding Hamiltonian cycles in 4-connected maximal planar graphs", "text": "The Hamiltonian cycle problem is one of the most popular NP-complete problems, and remains NP-complete even if we restrict ourselves to a class of (3-connected cubic) planar graphs [5,9]. Therefore, there seems to be no polynomial-time algorithm for the Hamiltonian cycle problem. However, for certain (nontrivial) classes of restricted graphs, there exist polynomial-time algorithms [3,4,6]. In fact, employing the proof technique used by Tutte [lo], Gouyou-Beauchamps has given an 0(n3) time algorithm for finding a Hamiltonian cycle in a 4-connected planar graph G, where n is the number of vertices of G [6]. Although such a graph G always has a Hamiltonian cycle [lo], it is not an easy matter to actually find a Hamiltonian cycle of G. However, for a little more restricted class of graphs, i.e., the class of 4-connected maximal planar graphs, we can construct an efficient algorithm. One can easily design an O(n*) time algorithm to find a Hamiltonian cycle in a 4-connetted maximal planar graph G with n vertices, entirely based on Whitney\u2019s proof of his theorem [l 11. In this paper, we present an efficient algorithm for the problem, based on our simplified version of Whitney\u2019s proof of his result. We employ \u2018divide and conquer\u2019 and some other techniques in the algorithm. The computational complexity of our algorithm is linear, thus optimal to within a constant factor."}
{"_id": "56de5dbb4e2e81f5d09622bf278b22a6ef6df168", "title": "Improving Vector Space Word Representations Via Kernel Canonical Correlation Analysis", "text": "Cross-lingual word embeddings are representations for vocabularies of two or more languages in one common continuous vector space and are widely used in various natural language processing tasks. A state-of-the-art way to generate cross-lingual word embeddings is to learn a linear mapping, with an assumption that the vector representations of similar words in different languages are related by a linear relationship. However, this assumption does not always hold true, especially for substantially different languages. We therefore propose to use kernel canonical correlation analysis to capture a non-linear relationship between word embeddings of two languages. By extensively evaluating the learned word embeddings on three tasks (word similarity, cross-lingual dictionary induction, and cross-lingual document classification) across five language pairs, we demonstrate that our proposed approach achieves essentially better performances than previous linear methods on all of the three tasks, especially for language pairs with substantial typological difference."}
{"_id": "44c74ed28b87176a6846415625fc3d25e65bced1", "title": "Data integration with uncertainty", "text": "This paper reports our first set of results on managing uncertainty in data integration. We posit that data-integration systems need to handle uncertainty at three levels and do so in a principled fashion. First, the semantic mappings between the data sources and the mediated schema may be approximate because there may be too many of them to be created and maintained or because in some domains (e.g., bioinformatics) it is not clear what the mappings should be. Second, the data from the sources may be extracted using information extraction techniques and so may yield erroneous data. Third, queries to the system may be posed with keywords rather than in a structured form. As a first step to building such a system, we introduce the concept of probabilistic schema mappings and analyze their formal foundations. We show that there are two possible semantics for such mappings: by-table semantics assumes that there exists a correct mapping but we do not know what it is; by-tuple semantics assumes that the correct mapping may depend on the particular tuple in the source data. We present the query complexity and algorithms for answering queries in the presence of probabilistic schema mappings, and we describe an algorithm for efficiently computing the top-k answers to queries in such a setting. Finally, we consider using probabilistic mappings in the scenario of data exchange."}
{"_id": "49b56c7d888215c0ec8d93eddb5b665df8ee289d", "title": "SLAM-based Pseudo-GNSS/INS localization system for indoor LiDAR mobile mapping systems", "text": "The emergence of Mobile Mapping Systems (MMS) has set a marked paradigm in the photogrammetric and mapping community that has not only facilitated comprehensive 3D mapping of different environments but has also paved way for new aspects of applied research in this direction. Out of the many essential blocks that make these MMS a viable tool for mapping, the positioning and orientation module is considered to be a crucial yet an expensive component. The integration of such a module with mapping sensors has allowed for the extensive implementation of such systems to provide high-quality maps. However, while such systems do not lack in system robustness and performance in general, the deployment of these systems is restricted to applications and environments where a consistent availability of GNSS signals is assured. Extending these MMS to GNSS-denied areas, such as indoor environments, is therefore quite challenging and necessitates the development of an alternative module that can act as a viable substitute to GNSS/INS for system operation without having to resort to an exhaustive modification of the same to function in GNSS-denied locations. In this research, such a case has been considered for the implementation of an indoor MMS using an Unmanned Ground Vehicle (UGV) and a 3D laser scanner for the task of generating high density maps of GNSS-denied indoor areas. To mitigate the absence of GNSS data, this paper proposes a Pseudo-GNSS/INS module integrated framework which utilizes probabilistic Simultaneous Localization and Mapping (SLAM) techniques to estimate the platform pose and heading from 3D laser scanner data. This proposed framework has been implemented based on three major notions: (i) using geometric methods for sparse point cloud extraction to carry out real-time SLAM, (ii) generating position data and geo-referencing signals from these real-time SLAM pose estimates, and (iii) carrying out the entire operation through use of a single 3D mapping sensor. The final geo-referenced point cloud can then be generated through post-processing by the Iterative Closest Projected Point (ICPP) registration technique which also diminishes the effect of sensor measurement noise. The implementation, performance and results of the proposed MMS framework for an indoor mapping system have been presented in this paper that demonstrate the ability of this Pseudo-GNSS/INS framework to operate flexibly in GNSS-denied areas."}
{"_id": "a10b04eddc674eb0b123f528fe2f72ad19218e4c", "title": "Semantic Lexicon Induction from Twitter with Pattern Relatedness and Flexible Term Length", "text": "With the rise of social media, learning from informal text has become increasingly important. We present a novel semantic lexicon induction approach that is able to learn new vocabulary from social media. Our method is robust to the idiosyncrasies of informal and open-domain text corpora. Unlike previous work, it does not impose restrictions on the lexical features of candidate terms \u2013 e.g. by restricting entries to nouns or noun phrases \u2013 while still being able to accurately learn multiword phrases of variable length. Starting with a few seed terms for a semantic category, our method first explores the context around seed terms in a corpus, and identifies context patterns that are relevant to the category. These patterns are used to extract candidate terms \u2013 i.e. multiword segments that are further analyzed to ensure meaningful term boundary segmentation. We show that our approach is able to learn high quality semantic lexicons from informally written social media text of Twitter, and can achieve accuracy as high as 92% in the top 100 learned category members."}
{"_id": "b51cdeb74165df0ce32ea3a1a606d28186f03e9d", "title": "1 USES OF TECHNOLOGY THAT SUPPORT PUPILS WITH SPECIAL EDUCATIONAL NEEDS", "text": ""}
{"_id": "39a8bcda90c9b0c5ea2495a3fbf679a4488d2cf5", "title": "Cell-Balancing Techniques: Theory and Implementation", "text": "In the safety chapter we briefly discussed the issue that when multiple cells are connected in series, the cell voltage is not always equal to the pack voltage divided by the number of cells. How does this happen? This chapter explores that question in detail, but the first question to answer is this: Why do we care? The first reason is safety. Remember, when lithium ion cell voltage exceeds 4.2V by a few hundred millivolts, it can undergo thermal runaway, melting the battery pack and device it is powering. It can even blow up as a big ball of fire. Although a well-designed pack has an overvoltage protection circuit that will prevent such an event (usually even two independent circuits!), it is better not to tempt fate by triggering this protection unnecessarily. The second reason is longevity. If the maximal recommended charging voltage is exceeded even a little, it will cause very accelerated degradation. Just increasing the charging voltage from 4.2 to 4.25V causes the degradation rate to increase by 30%. For this reason, the misbehaving cell that has higher than its due share of voltage will degrade faster. The third reason is incomplete charging of the pack. Let's assume the protector circuit does its job and that charging stops when just one cell gets close to unsafe conditions. Now we have successfully prevented thermal runaway, but all of the other cells now have lower voltages and are not fully charged. If we look at the pack voltage, it will be much less than 4.2V multiplied by the"}
{"_id": "a254bfd1b139213c810d4c637df2312a19d1d0e3", "title": "Concepts for dynamic obstacle avoidance and their extended application in underground navigation", "text": "An effective approach to dynamic obstacle avoidance for mobile robots is described in this paper. The main concepts in this approach include local Polar Object Chart (POC) for defining possible ways out, obstacle definition, detour mode, anchor for preventing getting lost, steering delimiting and velocity delimiting. The presented concepts and methods were tested and verified both by simulations and experiments on a developed prototype industrial Automated Guided Vehicle (AGV). This paper also presents the extended application of some of the concepts in underground navigation. \u00a9 2002 Elsevier Science B.V. All rights reserved."}
{"_id": "2cfd7f75bdac0514d850a0e96c4aae95260deebb", "title": "Design Principles for Industrie 4.0 Scenarios", "text": "The increasing integration of the Internet of Everything into the industrial value chain has built the foundation for the next industrial revolution called Industrie 4.0. Although Industrie 4.0 is currently a top priority for many companies, research centers, and universities, a generally accepted understanding of the term does not exist. As a result, discussing the topic on an academic level is difficult, and so is implementing Industrie 4.0 scenarios. Based on a quantitative text analysis and a qualitative literature review, the paper identifies design principles of Industrie 4.0. Taking into account these principles, academics may be enabled to further investigate on the topic, while practitioners may find assistance in identifying appropriate scenarios. A case study illustrates how the identified design principles support practitioners in identifying Industrie 4.0 scenarios."}
{"_id": "e4217b7a8e15ed3ab5ac2ab129d7c2d04601daf7", "title": "Improved Forms for Controlling the Acrobot with Motors of Atypical Size Using Artificial Intelligence Techniques", "text": "An acrobot is a planar robot with a passive actuator in its first joint. The control problem of the acrobot tries to make it rise from the rest position to the inverted pendulum position. This control problem can be divided in the swing-up problem, when the robot has to rise itself through swinging up as a human acrobat does, and the balancing problem, when the robot has to maintain itself on the inverted pendulum position. We have developed three controllers for the swing-up problem applied to two types of motors: small and big. For small motors, we used the SARSA controller and the PD with a trajectory generator. For big motors, we propose a new controller to control the acrobot, a PWM controller. All controllers except SARSA are tuned using a Genetic Algorithm."}
{"_id": "c1e4420ddc71c4962e0ba26287293a25a774fb6e", "title": "Learning Depth from Single Monocular Images", "text": "We consider the task of depth estimation from a single monocular image. We take a supervised learning approach to this problem, in which we begin by collecting a training set of monocular images (of unstructured outdoor environments which include forests, trees, buildings, etc.) and their corresponding ground-truth depthmaps. Then, we apply supervised learning to predict the depthmap as a function of the image. Depth estimation is a challenging problem, since local features alone are insufficient to estimate depth at a point, and one needs to consider the global context of the image. Our model uses a discriminatively-trained Markov Random Field (MRF) that incorporates multiscale localand global-image features, and models both depths at individual points as well as the relation between depths at different points. We show that, even on unstructured scenes, our algorithm is frequently able to recover fairly accurate depthmaps."}
{"_id": "0ee717bba5432e0a914eb1be863e44e6eeccf90b", "title": "HW/SW Codesign and FPGA Acceleration of Visual Odometry Algorithms for Rover Navigation on Mars", "text": "Future Mars exploration missions rely heavily on high-mobility autonomous rovers equipped with sophisticated scientific instruments and possessing advanced navigational capabilities. Increasing their navigation velocity and localization accuracy is essential for enabling these rovers to explore large areas on Mars. Contemporary Mars rovers move slowly, partially due to the long execution time of complex computer vision algorithms running on their slow space-grade CPUs. This paper exploits the advent of high-performance space-grade field-programmable gate arrays (FPGAs) to accelerate the navigation of future rovers. Specifically, it focuses on visual odometry (VO) and performs HW/SW codesign to achieve one order of magnitude faster execution and improved accuracy. Conforming to the specifications of the European Space Agency, we build a proof-of-concept system on an HW/SW platform with processing power resembling that to be available onboard future rovers. We develop a codesign methodology adapted to the rover's specifications, design parallel architectures, and customize several feature extraction, matching, and motion estimation algorithms. We implement and evaluate five distinct HW/SW pipelines on a Virtex6 FPGA and a 150 MIPS CPU. We provide a detailed analysis of their cost-time-accuracy tradeoffs and quantify the benefits of employing FPGAs for implementing VO. Our solution achieves a speedup factor of 16\u00d7 over a CPU-only implementation, handling a stereo image pair in less than 1 s, with a 1.25% mean positional error after a 100 m traverse and an FPGA cost of 54 K LUTs and 1.46-MB RAM."}
{"_id": "187b3af9006ee4cc039b2b97fe03099a1c4133b2", "title": "Automatic Detection of Malware-Generated Domains with Recurrent Neural Models", "text": "Modern malware families often rely on domain-generation algorithms (DGAs) to determine rendezvous points to their command-and-control server. Traditional defence strategies (such as blacklisting domains or IP addresses) are inadequate against such techniques due to the large and continuously changing list of domains produced by these algorithms. This paper demonstrates that a machine learning approach based on recurrent neural networks is able to detect domain names generated by DGAs with high precision. The neural models are estimated on a large training set of domains generated by various malwares. Experimental results show that this data-driven approach can detect malware-generated domain names with a F1 score of 0.971. To put it differently, the model can automatically detect 93 % of malware-generated domain names for a false positive rate of 1:100."}
{"_id": "fa2d0c3fb5f043a4d2bf1010be186f2171d920bc", "title": "Microstrip Rectangular 4x1 Patch Array Antenna at 2.5GHz for WiMax Application", "text": "This paper presents the design of microstrip rectangular patch antenna with center frequency at 2.5GHz for WiMAX application. The array of four by one (4x1) patch array microstrip rectangular antenna with microstrip line feeding based on quarter wave impedance matching technique was designed and simulated using Computer Simulation Tool (CST) Microwave Environment software. The performance of the designed antenna was than compared with the single patch rectangle antenna in term of return loss, Voltage Standing Wave Ratio (VSWR), bandwidth, directivity, radiation pattern and gain. The array antenna was then fabricated on the substrate type FR-4 with dielectric constant of 4.9 and thickness of 1.6mm respectively. The array antenna was measured in the laboratory using Vector Network Analyzer (VNA) and the results show good agreement with the array antenna simulated performances."}
{"_id": "fb32ed2d631dd7803a03b9c44eb6da10a7b5b00f", "title": "A comparison of the performance of latent Dirichlet allocation and the Dirichlet multinomial mixture model on short text", "text": "The expansion of the World Wide Web and the increasing popularity of microblogging websites such as Twitter and Facebook has created massive stores of textual data that is short in length. Although traditional topic models have proven to work successfully on collections of long texts such as books and news articles, they tend to produce results that are less coherent when applied to short text, such as status messages and product reviews. Over the last few decades it has become of greater relevance to analyse short texts due to the realisation that such bodies of text could potentially hold useful information. Latent Dirichlet allocation (LDA) is one of the most popular topic models and it makes the generative assumption that each document contains multiple topics in varying proportions, which is a sensible assumption about long text. On the other hand, the Dirichlet multinomial mixture model (GSDMM), a seemingly less popular topic model, assumes that a document can only belong to a single topic, which seems to be a more appropriate assumption for short text. The objective of this paper is to investigate the hypothesis that GSDMM will outperform LDA on short text, using topic coherence and stability as performance measures."}
{"_id": "7b7458e999bd60341fc208d5860dd75bea0e071e", "title": "A Four-Plate Compact Capacitive Coupler Design and LCL-Compensated Topology for Capacitive Power Transfer in Electric Vehicle Charging Application", "text": "This paper proposes a four-plate compact capacitive coupler and its circuit model for large air-gap distance capacitive power transfer (CPT). The four plates are arranged vertically, instead of horizontally, to save space in the electric vehicle charging application. The two plates that are on the same side are placed close to each other to maintain a large coupling capacitance, and they are of different sizes to maintain the coupling between the primary and secondary sides. The circuit model of the coupler is presented, considering all six coupling capacitors. The LCL compensation topology is used to resonate with the coupler and provide high voltage on the plates to transfer high power. The circuit model of the coupler is simplified to design the parameters of the compensation circuit. Finite-element analysis is employed to simulate the coupling capacitance and design the dimensions of the coupler. The circuit performance is simulated in LTspice to design the specific parameter values. A prototype of the CPT system was designed and constructed with the proposed vertical plate structure. The prototype achieved an efficiency of 85.87% at 1.88-kW output power with a 150-mm air-gap distance."}
{"_id": "1f2ee7714c3018c1687f9b17f2bc889c5c017439", "title": "Bundle Adjustment - A Modern Synthesis", "text": "This paper is a survey of the theory and methods of photogrammetric bundle adjustment, aimed at potential implementors in the computer vision community. Bundle adjustment is the problem of refining a visual reconstruction to produce jointly optimal structure and viewing parameter estimates. Topics covered include: the choice of cost function and robustness; numerical optimization including sparse Newton methods, linearly convergent approximations, updating and recursive methods; gauge (datum) invariance; and quality control. The theory is developed for general robust cost functions rather than restricting attention to traditional nonlinear least squares."}
{"_id": "26c9d899c990e3c82f1a9df81bdac1cac99ca10c", "title": "Design, modeling and control of a 5-DoF light-weight robot arm for aerial manipulation", "text": "The design, modeling and control of a 5 degrees-of-freedom light-weight robot manipulator is presented in this paper. The proposed robot arm, named Prisma Ultra-Lightweight 5 ARm (PUL5AR), is employed to execute manipulation tasks equipped on board of a vertical take-off and landing unmanned aerial vehicle. The arm is compact and light-weight. Its mechanics is designed such that it can fold on itself during landing manoeuvres. Moreover, the design is conceived to constrain the center of gravity of the arm as close as possible to vehicle base, thus reducing the total inertia and static unbalancing of the system. Experimental tests have been carried out in order to validate the dynamic model, the communication library, the developed electronics, and the control schemes implemented for the designed robot arm."}
{"_id": "a90f7269f1d1d6e6ba274fae6eb063c7b10b6d3c", "title": "A Unified Passivity Based Control Framework for Position, Torque and Impedance Control of Flexible Joint Robots", "text": "In this paper we describe a general passivity based framework for the control of flexible joint robots. Herein the recent DLR results on torque-, position-, as well as impedance control of flexible joint robots are summarized, and the relations between the individual contributions are highlighted. It is shown that an inner torque feedback loop can be incorporated into a passivity based analysis by interpreting torque feedback in terms of shaping of the motor inertia. This result, which implicitly was already included in our earlier works on torqueand position control, can also be seized for the design of impedance controllers. For impedance control, furthermore, potential shaping is of special interest. It is shown how, based only on the motor angles, a potential function can be designed which simultaneously incorporates gravity compensation and a desired Cartesian stiffness relation for the link angles. All the presented controllers were experimentally evaluated on the DLR light-weight robots and proved their performance and robustness with respect to uncertain model parameters. Herein, an impact experiment is presented briefly, and an overview of several applications is given in which the controllers have been applied."}
{"_id": "10659fb27307b62425a8f59fe39b6c786f39401b", "title": "Modeling and Control of MM-UAV: Mobile Manipulating Unmanned Aerial Vehicle", "text": "Compared to autonomous ground vehicles, UAVs (unmanned aerial vehicles) have significant mobility advantages and the potential to operate in otherwise unreachable locations. Micro UAVs still suffer from one major drawback: they do not have the necessary payload capabilities to support high performance arms. This paper, however, investigates the key challenges in controlling a mobile manipulating UAV using a commercially available aircraft and a light-weight prototype 3arm manipulator. Because of the overall instability of rotorcraft, we use a motion capture system to build an efficient autopilot. Our results indicate that we can accurately model and control our prototype system given significant disturbances when both moving the manipulators and interacting with the ground."}
{"_id": "2ff915bbc6094d65a19baf77bacab74be30df99f", "title": "Construction of Cubic Structures with Quadrotor Teams", "text": "We propose and investigate a system in which teams of quadrotor helicopters assemble 2.5-D structures from simple structural nodes and members equipped with magnets. The structures, called Special Cubic Structures (SCS), are a class of 2.5-D truss-like structures free of overhangs and holes. Grippers attached to the bottom of each quadrotor enable them to pick up, transport, and assemble the structural elements. The design of the nodes and members imposes constraints on assembly which are incorporated into the design of the algorithms used for assembly. We show that any SCS can be built using only the feasible assembly modes for individual structural elements and present simulation and experimental results with a team of quadrotors performing automated assembly. The paper includes a theoretical analysis of the SCS construction algorithm, the rationale for the design of the structural nodes, members and quadrotor gripper, a description of the quadrotor control methods for part pickup, transport and assembly, and an empirical analysis of system performance."}
{"_id": "a7310bde258299531c41923ccdf737ba4cf2f15b", "title": "Impact of FinFET Technology Introduction in the 3T1D-DRAM Memory Cell", "text": "In this paper, the 3T1D-DRAM cell based on FinFET devices is studied as an alternative to the bulk one. We observe an improvement in its behavior when IG and SG FinFETs are properly mixed, since together they provide a relevant increase in the memory circuit retention time. Moreover, our FinFET cell shows larger variability robustness, better performance at low supply voltage, and higher tolerance to elevated temperatures."}
{"_id": "fdb683a2d9626ac4e1352ff7ce9af6a1cfc94388", "title": "IoT security attacks using reverse engineering methods on WSN applications", "text": "With the rapid technological advancements of sensors, Wireless Sensor Networks (WSNs) have become a popular technology for the Internet of Things (IoT). We investigated the security of WSNs in an environmental monitoring application with the goal to demonstrate the overall security. We implemented a Secure Temperature Monitoring System (STMS), which served as our WSN application. Our results revealed a security flaw found in the bootstrap loader (BSL) password used to protect MSP430 micro-controller units (MCUs). We demonstrated how the BSL password could be brute forced in a matter of days. Furthermore, we illustrate how an attacker can reverse engineer WSN applications to obtain critical security information such as encryption keys. We contribute a solution to patch the weak BSL password security flaw and improve the security of MSP430 MCU chips. The Secure-BSL patch we contribute allows the randomization of the BSL password. Our solution increases the brute force time to decades. The impractical brute force time enhances the security of the MSP430 and prevents future reverse engineering tactics. Our research serves as proof that the security of WSNs and the overall IoT technology is broken if we cannot protect these everyday objects at the physical layer."}
{"_id": "568ddcb3976989be1e3b914f5b3e79da8cf39744", "title": "Cybersecurity Through Secure Software Development", "text": "Reports about serious vulnerabilities in critical IT components have triggered increased focus on cybersecurity worldwide. Among the man y initiatives to strengthen cybersecurity it is common to see the establishment and stre gthening of CERTs and other centers for cybersecurity. On the other hand , strengthening education in IT security and applying methods for secure systems d velopment are methods that receive much less attention. In this paper we expla in how the lack of focus on security in IT education programs worldwide is a signifi cant contributor to security vulnerabilities, and we propose an agile method for s ecure software design that requires team members to have received adequa te security education and training."}
{"_id": "031bc61d0f6068da01235364a43e9fb1ca7f9866", "title": "Identifying Important Places in People's Lives from Cellular Network Data", "text": "People spend most of their time at a few key locations, such as home and work. Being able to identify how the movements of people cluster around these \u201cimportant places\u201d is crucial for a range of technology and policy decisions in areas such as telecommunications and transportation infrastructure deployment. In this paper, we propose new techniques based on clustering and regression for analyzing anonymized cellular network data to identify generally important locations, and to discern semantically meaningful locations such as home and work. Starting with temporally sparse and spatially coarse location information, we propose a new algorithm to identify important locations. We test this algorithm on arbitrary cellphone users, including those with low call rates, and find that we are within 3 miles of ground truth for 88% of volunteer users. Further, after locating home and work, we achieve commute distance estimates that are within 1 mile of equivalent estimates derived from government census data. Finally, we perform carbon footprint analyses on hundreds of thousands of anonymous users as an example of how our data and algorithms can form an accurate and efficient underpinning for policy and infrastructure studies."}
{"_id": "6b6c0e65f629c0189f7a5187a25f8a453fc6b8ae", "title": "ExerLink: enabling pervasive social exergames with heterogeneous exercise devices", "text": "We envision that diverse social exercising games, or exergames, will emerge, featuring much richer interactivity with immersive game play experiences. Further, the recent advances of mobile devices and wireless networking will make such social engagement more pervasive - people carry portable exergame devices (e.g., jump ropes) and interact with remote users anytime, anywhere. Towards this goal, we explore the potential of using heterogeneous exercise devices as game controllers for a multi-player social exergame; e.g., playing a boat paddling game with two remote exercisers (one with a jump rope, and the other with a treadmill). In this paper, we propose a novel platform called ExerLink that converts exercise intensity to game inputs and intelligently balances intensity/delay variations for fair game play experiences. We report the design considerations and guidelines obtained from the design and development processes of game controllers. We validate the efficacy of game controllers and demonstrate the feasibility of social exergames with heterogeneous exercise devices via extensive human subject studies."}
{"_id": "8325ea0dce7471335d32e4278560c8170e6c78f5", "title": "WNGrad: Learn the Learning Rate in Gradient Descent", "text": "Adjusting the learning rate schedule in stochastic gradient methods is an important unresolved problem which requires tuning in practice. If certain parameters of the loss function such as smoothness or strong convexity constants are known, theoretical learning rate schedules can be applied. However, in practice, such parameters are not known, and the loss function of interest is not convex in any case. The recently proposed batch normalization reparametrization is widely adopted in most neural network architectures today because, among other advantages, it is robust to the choice of Lipschitz constant of the gradient in loss function, allowing one to set a large learning rate without worry. Inspired by batch normalization, we propose a general nonlinear update rule for the learning rate in batch and stochastic gradient descent so that the learning rate can be initialized at a high value, and is subsequently decreased according to gradient observations along the way. The proposed method is shown to achieve robustness to the relationship between the learning rate and the Lipschitz constant, and near-optimal convergence rates in both the batch and stochastic settings (O(1/T ) for smooth loss in the batch setting, and O(1/ \u221a T ) for convex loss in the stochastic setting). We also show through numerical evidence that such robustness of the proposed method extends to highly nonconvex and possibly non-smooth loss function in deep learning problems. Our analysis establishes some first theoretical understanding into the observed robustness for batch normalization and weight normalization."}
{"_id": "59b83666c1031c3f509f063b9963c7ad9781ca23", "title": "Hierarchical Committee of Deep CNNs with Exponentially-Weighted Decision Fusion for Static Facial Expression Recognition", "text": "We present a pattern recognition framework to improve committee machines of deep convolutional neural networks (deep CNNs) and its application to static facial expression recognition in the wild (SFEW). In order to generate enough diversity of decisions, we trained multiple deep CNNs by varying network architectures, input normalization, and weight initialization as well as by adopting several learning strategies to use large external databases. Moreover, with these deep models, we formed hierarchical committees using the validation-accuracy-based exponentially-weighted average (VA-Expo-WA) rule. Through extensive experiments, the great strengths of our committee machines were demonstrated in both structural and decisional ways. On the SFEW2.0 dataset released for the 3rd Emotion Recognition in the Wild (EmotiW) sub-challenge, a test accuracy of 57.3% was obtained from the best single deep CNN, while the single-level committees yielded 58.3% and 60.5% with the simple average rule and with the VA-Expo-WA rule, respectively. Our final submission based on the 3-level hierarchy using the VA-Expo-WA achieved 61.6%, significantly higher than the SFEW baseline of 39.1%."}
{"_id": "e2258e01f175e34377ceedbdcb5aff8178afce42", "title": "Communicating neuronal ensembles between neuromorphic chips", "text": "I describe an interchip communication system that reads out pulse trains from a 64 64 array of neurons on one chip, and transmits them to corresponding locations in a 64 64 array of neurons on a second chip. It uses a random-access, time-multiplexed, asynchronous digital bus to transmit log2N -bit addresses that uniquely identify each of theN neurons in the sending population. A peak transmission rate of 2.5MSpikes/s is achieved by pipelining the operation of the channel. I discuss how the communication channel design is optimized for sporadic stimulus-triggered activity which causes some small subpopulation to re in synchrony, by adopting an arbitered, event-driven architecture. I derive the bandwidth required to transmit this neuronal ensemble, without temporal dispersion, in terms of the number of neurons, the probability that a neuron is part of the ensemble, and the degree of synchrony."}
{"_id": "9635a88dfbe3e6758923260a1429813f94a3e032", "title": "Intent Mining for the Good , Bad & Ugly Use of Social Web : Concepts , Methods , and Challenges", "text": "The social web has empowered us to easily share information, express opinions, and engage in discussions on events around the world. While users of social media platforms often o\u21b5er help and emotional support to others (the good), they also spam (the bad) and harass others as well as even manipulate others via fake news (the ugly). In order to both leverage the positive e\u21b5ects and mitigate the negative e\u21b5ects of using social media, intent mining provides a computational approach to proactively analyze social media data. This chapter introduces an intent taxonomy of social media usage with examples and describes methods and future challenges to mine the intentional uses of social media."}
{"_id": "c4ff016dc9f310c3d8e88551edb1c4910368cf77", "title": "K-nearest correlated neighbor classification for Indian sign language gesture recognition using feature fusion", "text": "A sign language recognition system is an attempt to bring the speech and the hearing impaired community closer to more regular and convenient forms of communication. Thus, this system requires to recognize the gestures from a sign language and convert them to a form easily understood by the hearing. The model that has been proposed in this paper recognizes static images of the signed alphabets in the Indian Sign Language. Unlike the alphabets in other sign languages like the American Sign Language and the Chinese Sign language, the ISL alphabet are both single-handed and double-handed. Hence, to make recognition easier the model first categorizes them as single-handed or double-handed. For both categories two kinds of features, namely HOG and SIFT, are extracted for a set of training images and are combined in a single matrix. After which, HOG and SIFT features for the input test image are combined with the HOG and SIFT feature matrices of the training set. Correlation is computed for these matrices and is fed to a K-Nearest Neighbor Classifier to obtain the resultant classification of the test image."}
{"_id": "751dcc6feb2c32a9eee5e68ca9d620c781964820", "title": "Decentralized Edge Clouds", "text": "Cloud computing services are traditionally deployed on centralized computing infrastructures confined to a few data centers, while cloud applications run in a single data center. However, the cloud's centralized nature can be limiting in terms of performance and cost for applications where users, data, and computation are distributed. The authors present an overview of distributed clouds that might be better suited for such applications. They briefly describe the distributed cloud landscape and introduce Nebula, a highly decentralized cloud that uses volunteer edge resources. The authors provide insights into some of its key properties and design issues, and describe a distributed MapReduce application scenario to illustrate the benefits and trade-offs of using distributed and decentralized clouds for distributed data-intensive computing applications."}
{"_id": "43110beab5a93560ca14423c1e37889b925392d9", "title": "A 5-Gbps CMOS Burst-Mode CDR Circuit With an Analog Phase Interpolator for PONs", "text": "This paper presents a 5-Gb/s low-power burst-mode clock and data recovery circuit based on analog phase interpolator for passive optical network applications. The proposed clock recovery unit consists of two double-edge triggered sample-and-holds (DTSHs) and a phase interpolator. The PI instantaneously locks the recovered clock to incoming burst-mode data by coefficients generated at the DT-SHs\u2019 outputs. To reduce power dissipation in clock recovery unit, instead of two buffers, only one is utilized for the DT-SH. The proposed PI-based BM-CDR has been designed and simulated in 0.18-\u03bcm standard CMOS technology. The Results show that reduction in power dissipation of 40% for the clock recovery unit has been achieved. The proposed BM-CDR circuit retimes data at 5Gb/s for a 210-1 pseudo-random binary sequence within the first UI. The recovered data shows jitter at 14ps (pp). The circuit, including 1:2 data demux, draws 29mW power from a 1.8-V supply."}
{"_id": "0268ac29f412cc274ad180c709035ec653d0d482", "title": "User Requirements Analysis: A Review of Supporting Methods", "text": "Understanding user requirements is an integral part of information systems design and is critical to the success of interactive systems. However specifying these requirements is not so simple to achieve. This paper describes general methods to support user requirements analysis that can be adapted to a range of situations. Some brief case studies are described to illustrate how these methods have been applied in practice."}
{"_id": "6eafcc7c65788f619a28cb462219d57a1e8dd991", "title": "Shape from shading for the digitization of curved documents", "text": "Document digitization is faster and more affordable using digital cameras than scanners. On the other hand, if we aim at extending the basic digital camera functionalities for such a purpose, post-processings will be of first importance, at least to improve the text legibility. In this paper, we address the specific problem of the virtual flattening of curved documents, as for example the pages of an opened book lying on its spine. In order to compute the document shape, we use the shape from shading technique and discuss why, in some cases, it is more suitable than other 3D single-view reconstruction techniques. We extend the seminal work by Wada et\u00a0al. (Proceedings of the IAPR Workshop on machine vision and applications, Tokyo, Japan, pp. 591\u2013594, 1992) and consecutive papers, reformulating the problem in terms of perspective shape from shading. Finally, we design a complete post-processing algorithm and test it on real images. Even if the documents are much curved, it is shown that the restored images are almost identical to scanned images of the flattened documents."}
{"_id": "b467a19a7d5b2abff3afe8e4bebd5dd8be9ea518", "title": "Direct Synthesis of Complex Loaded Chebyshev Filters in a Complex Filtering Network", "text": "In this paper, a direct synthesis approach is proposed for designing a Chebyshev filter that matches the frequency variant complex loads at both the ports. The approach is based on the power wave renormalization theory and two practical assumptions: 1) the prescribed transmission zeros are stationary and 2) the reflection zeros are located along an imaginary axis. Three conditions are derived to stipulate the characteristic polynomials of the filter's responses through the renormalization of reference load impedances. These conditions are sequentially applied to ensure that the filter is physically realizable and well matched to the complex loads. It has been shown that the optimally synthesized filtering network that consists of an ideal filter cascaded with a piece of transmission line of an optimal length at each port can match the complex loads over a given frequency band with the best effort. By incorporating with an adjustment of the junction parameters, the approach provides an analytical yet flexible way to synthesize advanced microwave circuits composed of multiple filters connected together through some generalized junctions. The effectiveness of the method is demonstrated through three synthesis examples."}
{"_id": "8f4ee1e186cd08d92205540767681996019ba835", "title": "Analyze your Scratch projects with Dr . Scratch and assess your Computational Thinking skills", "text": "In this paper we present the feature of Dr. Scratch that allows to automatically assessing the Computational Thinking skills of Scratch projects. The paper reviews similar initiatives, like Hairball, and investigates the literature with proposals for assessment of Scratch projects that we have studied and remixed in order to develop the Computational Thinking analysis. Then it introduces the various aspects that Dr. Scratch takes into consideration to compute a Computational Thinking score for a Scratch project and presents some preliminary findings of the analysis of over 100 investigated Scratch projects. Finally, future directions and limitations are presented and discussed."}
{"_id": "94ea49024281f6e922e38887adbf93434ac2991a", "title": "Hatha yoga: improved vital capacity of college students.", "text": "CONTEXT\nThe vital capacity of the lungs is a critical component of good health. Vital capacity is an important concern for those with asthma, heart conditions, and lung ailments; those who smoke; and those who have no known lung problems.\n\n\nOBJECTIVE\nTo determine the effects of yoga postures and breathing exercises on vital capacity.\n\n\nDESIGN\nUsing the Spiropet spirometer, researchers measured vital capacity. Vital capacity determinants were taken near the beginning and end of two 17-week semesters. No control group was used.\n\n\nSETTING\nMidwestern university yoga classes taken for college credit.\n\n\nPARTICIPANTS\nA total of 287 college students, 89 men and 198 women.\n\n\nINTERVENTION\nSubjects were taught yoga poses, breathing techniques, and relaxation in two 50-minute class meetings for 15 weeks.\n\n\nMAIN OUTCOME MEASURES\nVital capacity over time for smokers, asthmatics, and those with no known lung disease.\n\n\nRESULTS\nThe study showed a statistically significant (P < .001) improvement in vital capacity across all categories over time.\n\n\nCONCLUSIONS\nIt is not known whether these findings were the result of yoga poses, breathing techniques, relaxation, or other aspects of exercise in the subjects' life. The subjects' adherence to attending class was 99.96%. The large number of 287 subjects is considered to be a valid number for a study of this type. These findings are consistent with other research studies reporting the positive effect of yoga on the vital capacity of the lungs."}
{"_id": "8317f40c569af2b5bb0aefbb6b07d6a991c1204e", "title": "PinPoint: Localizing Interfering Radios", "text": "This paper presents PinPoint, a technique for localizing rogue interfering radios that adhere to standard protocols in the inhospitable ISM band without any cooperation from the interfering radio. PinPoint is designed to be incrementally deployed on top of existing 802.11 WLAN infrastructure, and used by network administrators to identify and troubleshoot sources of interference which may be disrupting the network. PinPoint\u2019s key contribution is a novel algorithm that accurately computes the line of sight angle of arrival (AoA) and cyclic signal strength indicator (CSSI) of the target interfering signal at all APs, even when the line of sight (LoS) component is buried by stronger multipath components, interference and noise. PinPoint leverages this algorithm to design an optimization technique, which can localize interfering radios and simultaneously identify the type of interference. Unlike several localization techniques which require extensive pre-deployment calibration (e.g. RFFingerprinting), PinPoint requires very little calibration by the network administrator, and uses a novel algorithm to self-initialize its bearings, even if the locations of some AP are initially unknown and are oriented randomly. We implement PinPoint on WARP software radios and deploy in an indoor testbed spanning an entire floor of our department. We compare PinPoint with the best known prior RSSI [8, 11] and MUSIC-AoA based approaches and show that PinPoint achieves a median localization error of 0.97 meters, which is around three times lower compared to the RSSI [8, 11] and MUSIC-AoA based approaches."}
{"_id": "4fd83bb97d1bf3fba9dffd025d77c0f0f684a3d5", "title": "Simultaneous Localization and Mapping for Forest Harvesters", "text": "A real-time SLAM (simultaneous localization and mapping) approach to harvester localization and tree map generation in forest environments is presented in this paper. The method combines 2D laser localization and mapping with GPS information to form global tree maps. Building an incremental map while also using it for localization is the only way a mobile robot can navigate in large outdoor environments. Until recently SLAM has only been confined to small-scale, mostly indoor, environments. We try to addresses the issues of scale for practical implementations of SLAM in extensive outdoor environments. Presented algorithms are tested in real outdoor environments using an all-terrain vehicle equipped with the navigation sensors and a DGPS receiver."}
{"_id": "57a5be00c761aa55df162864cc476ab6d698ff8a", "title": "Platform Competition in Two-Sided Markets : The Case of Payment Networks", "text": "In this article, we construct a model to study competing payment networks, where networks offer differentiated products in terms of benefits to consumers and merchants. We study market equilibria for a variety of market structures: duopolistic competition and cartel, symmetric and asymmetric networks, and alternative assumptions about multihoming and consumer preferences. We find that competition unambiguously increases consumer and merchant welfare. We extend this analysis to competition among payment networks providing different payment instruments and find similar results. JEL Classifications: D43, G21, L13"}
{"_id": "e13227c5d2bfc1e04d982b2ebf13407cb3d27bc9", "title": "Forecasting Intraday stock price trends with text mining techniques", "text": "In this paper, we describe NewsCATS (news categorization and trading system), a system implemented to predict stock price trends for the time immediately after the publication of press releases. NewsCATS consists mainly of three components. The first component retrieves relevant information from press releases through the application of text preprocessing techniques. The second component sorts the press releases into predefined categories. Finally, appropriate trading strategies are derived by the third component by means of the earlier categorization. The findings indicate that a categorization of press releases is able to provide additional information that can be used to forecast stock price trends, but that an adequate trading strategy is essential for the results of the categorization to be fully exploited."}
{"_id": "9aa0b346863eaa8868a12aaf33a79fb7988c3463", "title": "Use alone or in Combination of Red and Infrared Laser in Skin Wounds.", "text": "A systematic review was conducted covering the action of red laser, infrared and combination of both, with emphasis on cutaneous wound therapy, showing the different settings on parameters such as fluency, power, energy density, time of application, frequency mode and even the type of low-power lasers and their wavelengths. It was observed that in general, the lasers brings good clinical and histological results mainly, but there is not a protocol that defines a dosage of use that has predictability of therapeutic success in repairing these wounds."}
{"_id": "2c6bf6e57ab0782770c5112f8123b249e56ea524", "title": "Ultra-stretchable and skin-mountable strain sensors using carbon nanotubes-Ecoflex nanocomposites.", "text": "Super-stretchable, skin-mountable, and ultra-soft strain sensors are presented by using carbon nanotube percolation network-silicone rubber nanocomposite thin films. The applicability of the strain sensors as epidermal electronic systems, in which mechanical compliance like human skin and high stretchability (\u03f5 > 100%) are required, has been explored. The sensitivity of the strain sensors can be tuned by the number density of the carbon nanotube percolation network. The strain sensors show excellent hysteresis performance at different strain levels and rates with high linearity and small drift. We found that the carbon nanotube-silicone rubber based strain sensors possess super-stretchability and high reliability for strains as large as 500%. The nanocomposite thin films exhibit high robustness and excellent resistance-strain dependency for over ~1380% mechanical strain. Finally, we performed skin motion detection by mounting the strain sensors on different parts of the body. The maximum induced strain by the bending of the finger, wrist, and elbow was measured to be ~ 42%, 45% and 63%, respectively."}
{"_id": "31a93001d66951e728a66ff349456d7aac7da978", "title": "Improv: A System for Scripting Interactive Actors in Virtual Worlds", "text": "Improv is a system for the creation of real\u2212time behavior\u2212based animated actors. There have been seve recent efforts to build network distributed autonomous agent But in general these efforts do not focus on the author\u2019s view To create rich interactive worlds inhabited by believable animated actors, authors need the proper tools. Im prov provides tools to create actors that respond to users and each other in real\u2212time, with personalities and mood consistent with the author\u2019s goals and intentions. Improv consists of two subsystems. The first subsystem is an Animation Engine that uses procedur techniques to enable authors to create layered, continuo non\u2212repetitive motions and smooth transitions between the The second subsystem is a Behavior Engine that enab authors to create sophisticated rules governing how acto communicate, change, and make decisions. The combin system provides an integrated set of tools for authoring th \"minds\" and \"bodies\" of interactive actors. The system uses a english\u2212style scripting language so that creative experts wh are not primarily programmers can create powerful interactiv applications."}
{"_id": "690e811e4125f9444c3dad299ed7d55482541aeb", "title": "The EMOTE model for effort and shape", "text": "Human movements include limb gestures and postural attitude. Although many computer animation researchers have studied these classes of movements, procedurally generated movements still lack naturalness. We argue that looking only at the psychological notion of gesture is insufficient to capture movement qualities needed by animated charactes. We advocate that the domain of movement observation science, specifically Laban Movement Analysis (LMA) and its Effort and Shape components, provides us with valuable parameters for the form and execution of qualitative aspects of movements. Inspired by some tenets shared among LMA proponents, we also point out that Effort and Shape phrasing across movements and the engagement of the whole body are essential aspects to be considered in the search for naturalness in procedurally generated gestures. Finally, we present EMOTE (Expressive MOTion Engine), a 3D character animation system that applies Effort and Shape qualities to independently defined underlying movements and thereby generates more natural synthetic gestures."}
{"_id": "4c390cbf57a3c905080c12cee965a8a3b8ed5a92", "title": "An Application of Inverse Reinforcement Learning to Medical Records of Diabetes Treatment", "text": "It is an important issue to utilize large amount of medical records which are being accumulated on medical information systems to improve the quality of medical treatment. The process of medical treatment can be considered as a sequential interaction process between doctors and patients. From this viewpoint, we have been modeling medical records using Markov decision processes (MDPs). Using our model, we can simulate the future of each patient and evaluate each treatment. In order to do so, the reward function of the MDP should be specified. However, there is no explicit information regarding the reward value in medical records. In this study we report our results of applying an inverse reinforcement learning (IRL) algorithm to medical records of diabetes treatment to explore the reward function that doctors have in mind during their treatments."}
{"_id": "742fc1ce9e33f891e588893134417ba84fcfd53c", "title": "Design of compact circular disc circularly polarized antenna with Koch curve fractal defected ground structure", "text": "A circularly polarized microstrip patch antenna with Koch curve fractal defected ground structure (DGS) for mobile applications is presented. Asymmetric `+' shape slots are incorporated on circular patch for getting circular polarization. The advantages of incorporation of Koch curve fractal DGS for improvement of performance parameters of circular polarized antenna presented. It is shown that with every progressive stage of fractal DGS iteration, in addition to reduction in size of the patch, other performance parameters of the antenna such as 3 dB axial ratio bandwidth, 10 dB impedance bandwidth and radiation efficiency also show significant improvement in their values at cost of marginal reduction of the gain. The design, evaluation and comparison of the proposed antenna with conventional simple circular disc circular polarized antenna without DGS along with intermediate and final iterations of Koch curve fractal DGS are illustrated."}
{"_id": "cebd268d73522c97bae7bfddebb6a2ad759bd155", "title": "Project Success Prediction in Crowdfunding Environments", "text": "Crowdfunding has gained widespread attention in recent years. Despite the huge success of crowdfunding platforms, the percentage of projects that succeed in achieving their desired goal amount is only around 40%. Moreover, many of these crowdfunding platforms follow \"all-or-nothing\" policy which means the pledged amount is collected only if the goal is reached within a certain predefined time duration. Hence, estimating the probability of success for a project is one of the most important research challenges in the crowdfunding domain. To predict the project success, there is a need for new prediction models that can potentially combine the power of both classification (which incorporate both successful and failed projects) and regression (for estimating the time for success). In this paper, we formulate the project success prediction as a survival analysis problem and apply the censored regression approach where one can perform regression in the presence of partial information. We rigorously study the project success time distribution of crowdfunding data and show that the logistic and log-logistic distributions are a natural choice for learning from such data. We investigate various censored regression models using comprehensive data of 18K Kickstarter (a popular crowdfunding platform) projects and 116K corresponding tweets collected from Twitter. We show that the models that take complete advantage of both the successful and failed projects during the training phase will perform significantly better at predicting the success of future projects compared to the ones that only use the successful projects. We provide a rigorous evaluation on many sets of relevant features and show that adding few temporal features that are obtained at the project's early stages can dramatically improve the performance."}
{"_id": "46f3bb6751419b87856c4db0193e7a72ef3fa17c", "title": "Groute: An Asynchronous Multi-GPU Programming Model for Irregular Computations", "text": "Nodes with multiple GPUs are becoming the platform of choice for high-performance computing. However, most applications are written using bulk-synchronous programming models, which may not be optimal for irregular algorithms that benefit from low-latency, asynchronous communication. This paper proposes constructs for asynchronous multi-GPU programming, and describes their implementation in a thin runtime environment called Groute. Groute also implements common collective operations and distributed work-lists, enabling the development of irregular applications without substantial programming effort. We demonstrate that this approach achieves state-of-the-art performance and exhibits strong scaling for a suite of irregular applications on 8-GPU and heterogeneous systems, yielding over 7x speedup for some algorithms."}
{"_id": "6576dc54cb64697787b3281d620174376b3e2d0e", "title": "Microfabricated adhesive mimicking gecko foot-hair.", "text": "The amazing climbing ability of geckos has attracted the interest of philosophers and scientists alike for centuries. However, only in the past few years has progress been made in understanding the mechanism behind this ability, which relies on submicrometre keratin hairs covering the soles of geckos. Each hair produces a miniscule force approximately 10(-7) N (due to van der Waals and/or capillary interactions) but millions of hairs acting together create a formidable adhesion of approximately 10 N x cm(-2): sufficient to keep geckos firmly on their feet, even when upside down on a glass ceiling. It is very tempting to create a new type of adhesive by mimicking the gecko mechanism. Here we report on a prototype of such 'gecko tape' made by microfabrication of dense arrays of flexible plastic pillars, the geometry of which is optimized to ensure their collective adhesion. Our approach shows a way to manufacture self-cleaning, re-attachable dry adhesives, although problems related to their durability and mass production are yet to be resolved."}
{"_id": "30b72920121a2ce0626df6bff3aa260d4d5372c0", "title": "Adaptive Filtering for Color Image Sharpening and Denoising", "text": "Both tasks of sharpening and denoising color images are eternity problems for improving image quality. This paper proposes a filtering algorithm which brings a dramatic improvement in sharpening effects and reducing flat-area noises for natural color images. First, we generate a luminance slope map, which classifies a color image into the edge areas and the flat areas with uniform color. Second, the Gaussian derivative filter is applied to sharpening each segmented edge area. In order to keep color balance, the edge-sharpening filter is applied only to the luminance component of the color image. In the flat areas, a filter based on the SUSAN algorithm is applied to reducing the background noises with preserving sharpened edges. The performance of the proposed method is verified for natural image with Gaussian noise and impulse noise."}
{"_id": "164932945646cae9b774a88ab87821a0437cd20a", "title": "Algorithmic Trading Using Intelligent Agents", "text": "Trading in financial markets is undergoing a radical transformation, one in which algorithmic methods are becoming increasingly more important. The development of intelligent agents that can act as autonomous traders seems like a logical step forward in this \u201calgorithms arms race\u201d. In this paper we describe an infrastructure for implementing hybrid intelligent agents with the ability to trade financial instruments without requiring human supervision. We used this infrastructure, which relies heavily on artificial intelligence models and problem solving methodologies, to implement two currency trading agents. So far these agents have been able to profit from inefficiencies in the currency market, which lead us to believe our infrastructure can be of practical interest to the investment industry."}
{"_id": "adfa98404238e303965b4db7bcd43f68bd7d0fb4", "title": "Neural Taylor Approximations: Convergence and Exploration in Rectifier Networks", "text": "Modern convolutional networks, incorporating rectifiers and max-pooling, are neither smooth nor convex; standard guarantees therefore do not apply. Nevertheless, methods from convex optimization such as gradient descent and Adam are widely used as building blocks for deep learning algorithms. This paper provides the first convergence guarantee applicable to modern convnets, which furthermore matches a lower bound for convex nonsmooth functions. The key technical tool is the neural Taylor approximation \u2013 a straightforward application of Taylor expansions to neural networks \u2013 and the associated Taylor loss. Experiments on a range of optimizers, layers, and tasks provide evidence that the analysis accurately captures the dynamics of neural optimization. The second half of the paper applies the Taylor approximation to isolate the main difficulty in training rectifier nets \u2013 that gradients are shattered \u2013 and investigates the hypothesis that, by exploring the space of activation configurations more thoroughly, adaptive optimizers such as RMSProp and Adam are able to converge to better solutions."}
{"_id": "d635f38f1051d0406aedc54dbd510eb18ebcc164", "title": "A Visual Analytics Approach for Understanding Reasons behind Snowballing and Comeback in MOBA Games", "text": "To design a successful Multiplayer Online Battle Arena (MOBA) game, the ratio of snowballing and comeback occurrences to all matches played must be maintained at a certain level to ensure its fairness and engagement. Although it is easy to identify these two types of occurrences, game developers often find it difficult to determine their causes and triggers with so many game design choices and game parameters involved. In addition, the huge amounts of MOBA game data are often heterogeneous, multi-dimensional and highly dynamic in terms of space and time, which poses special challenges for analysts. In this paper, we present a visual analytics system to help game designers find key events and game parameters resulting in snowballing or comeback occurrences in MOBA game data. We follow a user-centered design process developing the system with game analysts and testing with real data of a trial version MOBA game from NetEase Inc. We apply novel visualization techniques in conjunction with well-established ones to depict the evolution of players' positions, status and the occurrences of events. Our system can reveal players' strategies and performance throughout a single match and suggest patterns, e.g., specific player' actions and game events, that have led to the final occurrences. We further demonstrate a workflow of leveraging human analyzed patterns to improve the scalability and generality of match data analysis. Finally, we validate the usability of our system by proving the identified patterns are representative in snowballing or comeback matches in a one-month-long MOBA tournament dataset."}
{"_id": "ba94e00c5c2e7cf78307b1fdfb2b623cceca7e86", "title": "The neural basis of intuitive best next-move generation in board game experts.", "text": "The superior capability of cognitive experts largely depends on quick automatic processes. To reveal their neural bases, we used functional magnetic resonance imaging to study brain activity of professional and amateur players in a board game named shogi. We found two activations specific to professionals: one in the precuneus of the parietal lobe during perception of board patterns, and the other in the caudate nucleus of the basal ganglia during quick generation of the best next move. Activities at these two sites covaried in relevant tasks. These results suggest that the precuneus-caudate circuit implements the automatic, yet complicated, processes of board-pattern perception and next-move generation in board game experts."}
{"_id": "fae2e07fb10d1fb4ceac961f2b2f61dbb8f13ca2", "title": "Depression and memory impairment: a meta-analysis of the association, its pattern, and specificity.", "text": "The existing evidence paints an unclear picture of whether an association exists between depression and memory impairment. The purpose of this investigation was to determine whether depression is associated with memory impairment, whether moderator variables determine the extent of this association, and whether any obtained association is unique to depression. Meta-analytic techniques were used to synthesize data from 99 studies on recall and 48 studies on recognition in clinically depressed and nondepressed samples. Associations between memory impairment and other psychiatric disorders (e.g., schizophrenia, dementia) were also examined. A significant, stable association between depression and memory impairment was revealed. Further analyses indicated, however, that it is likely that depression is linked to particular aspects of memory, the linkage is found in particular subsets of depressed individuals, and memory impairment is not unique to depression."}
{"_id": "b9e70317060e75bdaf52174391fb1f93e77b5268", "title": "Information Privacy Awareness (IPA): A Review of the Use, Definition and Measurement of IPA", "text": "Despite the acknowledged importance of awareness in the information privacy (IP) literature, we lack a consistent and thorough understanding of information privacy awareness (IPA). Drawing on Endsley\u2019s model of Situation Awareness, we propose a multidimensional model of IPA and define each of its dimensions. We then conducted a thorough review of the IP literature\u2019s use of awareness and synthesize our findings using our proposed model. This paper makes significant contributions by 1) distinguishing between IP knowledge, literacy and awareness 2) consolidating the IP literature\u2019s definitions of awareness and providing a new detailed definition 3) proposing a new IPA model that future authors can reference when using or measuring IPA."}
{"_id": "29ad218282cd9d66fe921d49ba066dae32bfa76d", "title": "Detection and prevention system against cyber attacks and botnet malware for information systems and Internet of Things", "text": "The explosion of interconnected devices and the Internet of Things has triggered new important challenges in the area of internet security, due to the various device vulnerabilities and increased potential for cyber-attacks. This paper touches on the areas of Cybersecurity, intrusion detection, prevention systems and artificial intelligence. Our aim is to create a system capable of understanding, detecting and preventing malicious connections using applied concepts of machine learning. We emphasize the importance of selecting and extracting features that can lead to an accurate decision of classification for malware and intrusion attacks. We propose a solution that combines features that extract correlations from the packet history for the same and different services and hosts, based on the rate of REJ, SYN and ACK flags and connection states, with HTTP features extracted from URI and RESTful methods. Our proposed solution is able to detect network intrusions and botnet communications with a precision of 98.4% on the binary classification problem."}
{"_id": "a867286f3b54625d5b3ae3cfb5ea523647bcb3b0", "title": "Umbrella wheel \u2014 A stair-climbing and obstacle-handling wheel design concept", "text": "This paper proposes a new design for stair-climbing using a wheel that can split into segments and walk up stairs or surmount other obstacles often found where humans traverse, while still being able to retain a perfectly round shape for traveling on smooth ground. Using this change of configuration, staircases with a wide range of dimensions can be covered efficiently and safely. The design, named Umbrella Wheel, can consist of as many wheel segments as desired, and as few as two. A smaller or higher number of wheel segments has advantages and disadvantages depending on the specific situation. Modeling the trajectory of the wheel when as it ascends or descends stairs is given and the results are analyzed."}
{"_id": "355d5192a48367d8101260ae50e2076b5bd2473c", "title": "ERP Projects: Key Success Factors and Risk of Failure A Proposed Model of Governance of Enterprise Resource Planning", "text": "The ERP (Enterprise Resource Planning) is a major innovation in the Field of Information Systems and Organization Management. Including their ability to make the organization more integrated and consistent, and absolute control of information. But the implementation of these projects remains a disturbing adventure that may lead to success or deadly failure of the organization that is embarking on this project. The aims of this article are to determine the necessary critical success factors for the ERP and causes of failures,through a comparative study of large organizations that have adopted ERP , for addressing the convergence and divergence points in implementation ERP. Such a study will allow us to develop a governance model for ERP"}
{"_id": "aa41525cfe461288177f76ed47314f6e25cca1b3", "title": "Cross-View People Tracking by Scene-Centered Spatio-Temporal Parsing", "text": "In this paper, we propose a Spatio-temporal Attributed Parse Graph (ST-APG) to integrate semantic attributes with trajectories for cross-view people tracking. Given videos from multiple cameras with overlapping field of view (FOV), our goal is to parse the videos and organize the trajectories of all targets into a scene-centered representation. We leverage rich semantic attributes of human, e.g., facing directions, postures and actions, to enhance cross-view tracklet associations, besides frequently used appearance and geometry features in the literature. In particular, the facing direction of a human in 3D, once detected, often coincides with his/her moving direction or trajectory. Similarly, the actions of humans, once recognized, provide strong cues for distinguishing one subject from the others. The inference is solved by iteratively grouping tracklets with cluster sampling and estimating people semantic attributes by dynamic programming. In experiments, we validate our method on one public dataset and create another new dataset that records people\u2019s daily life in public, e.g., food court, office reception and plaza, each of which includes 3-4 cameras. We evaluate the proposed method on these challenging videos and achieve promising multi-view tracking results."}
{"_id": "67c1e427073b95a5909b072cfc0bca081ac03f4e", "title": "SPARK: A New VANET-Based Smart Parking Scheme for Large Parking Lots", "text": "Searching for a vacant parking space in a congested area or a large parking lot and preventing auto theft are major concerns to our daily lives. In this paper, we propose a new smart parking scheme for large parking lots through vehicular communication. The proposed scheme can provide the drivers with real-time parking navigation service, intelligent antitheft protection, and friendly parking information dissemination. Performance analysis via extensive simulations demonstrates its efficiency and practicality. Keywords\u2014 Vehicular communications; smart parking; navigation; anti-theft; information dissemination; security & privacy"}
{"_id": "cd39c6952e251a1b0bfc2d635778463b4d7c8639", "title": "Time-of-Flight Sensors in Computer Graphics", "text": "A growing number of applications depend on accurate and fast 3D scene analysis. Examples are model and lightfield acquisition, collision prevention, mixed reality, and gesture recognition. The estimation of a range map by image analysis or laser scan techniques is still a time-consuming and expensive part of such systems. A lower-priced, fast and robust alternative for distance measurements are Time-of-Flight (ToF) cameras. Recently, significant advances have been made in producing low-cost and compact ToF-devices, which have the potential to revolutionize many fields of research, including Computer Graphics, Computer Vision and Human Machine Interaction (HMI). These technologies are starting to have an impact on research and commercial applications. The upcoming generation of ToF sensors, however, will be even more powerful and will have the potential to become \u201cubiquitous real-time geometry devices\u201d for gaming, web-conferencing, and numerous other applications. This STAR gives an account of recent developments in ToF-technology and discusses the current state of the integration of this technology into various graphics-related applications."}
{"_id": "abd0c916755010cf7e01bb5a87ded699baf201f5", "title": "WCP-RNN: a novel RNN-based approach for Bio-NER in Chinese EMRs", "text": "Deep learning has achieved remarkable success in a wide range of domains. However, it has not been comprehensively evaluated as a solution for the task of Chinese biomedical named entity recognition (Bio-NER). The traditional deep-learning approach for the Bio-NER task is usually based on the structure of recurrent neural networks (RNN) and only takes word embeddings into consideration, ignoring the value of character-level embeddings to encode the morphological and shape information. We propose an RNN-based approach, WCP-RNN, for the Chinese Bio-NER problem. Our method combines word embeddings and character embeddings to capture orthographic and lexicosemantic features. In addition, POS tags are involved as a\u00a0priori word information to improve the final performance. The experimental results show our proposed approach outperforms the baseline method; the highest F-scores for subject and lesion detection tasks reach 90.36 and 90.48% with an increase of 3.10 and 2.60% compared with the baseline methods, respectively."}
{"_id": "6a1ed85fe5f28e3fa1e683e65f8d44f1c952d560", "title": "Exploiting Matrix Dependency for Efficient Distributed Matrix Computation", "text": "Distributed matrix computation is a popular approach for many large-scale data analysis and machine learning tasks. However existing distributed matrix computation systems generally incur heavy communication cost during the runtime, which degrades the overall performance. In this paper, we propose a novel matrix computation system, named DMac, which exploits the matrix dependencies in matrix programs for efficient matrix computation in the distributed environment. We decompose each matrix program into a sequence of operations, and reveal the matrix dependencies between operations in the program. We next design a dependency-oriented cost model to select an optimal execution strategy for each operation, and generate a communication efficient execution plan for the matrix computation program. To facilitate the matrix computation in distributed systems, we further divide the execution plan into multiple un-interleaved stages which can run in a distributed cluster with efficient local execution strategy on each worker. The DMac system has been implemented on a popular general-purpose data processing framework, Spark. The experimental results demonstrate that our techniques can significantly improve the performance of a wide range of matrix programs."}
{"_id": "b6c930bdc4af7f5b3515d85b2c350941c019756c", "title": "A di/dt Feedback-Based Active Gate Driver for Smart Switching and Fast Overcurrent Protection of IGBT Modules", "text": "This paper presents an active gate driver (AGD) for IGBT modules to improve their overall performance under normal condition as well as fault condition. Specifically, during normal switching transients, a di/dt feedback controlled current source and current sink is introduced together with a push-pull buffer for dynamic gate current control. Compared to a conventional gate drive strategy, the proposed one has the capability of reducing the switching loss, delay time, and Miller plateau duration during turn-on and turn-off transient without sacrificing current and voltage stress. Under overcurrent condition, it provides a fast protection function for IGBT modules based on the evaluation of fault current level through the di/dt feedback signal. Moreover, the AGD features flexible protection modes, which overcomes the interruption of converter operation in the event of momentary short circuits. A step-down converter is built to evaluate the performance of the proposed driving schemes under various conditions, considering variation of turn-on/off gate resistance, current levels, and short-circuit fault types. Experimental results and detailed analysis are presented to verify the feasibility of the proposed approach."}
{"_id": "bc297e9b443b2c679f9f4b23934e83f12fdb1c2e", "title": "High Switching Performance of 1700-V, 50-A SiC Power MOSFET Over Si IGBT/BiMOSFET for Advanced Power Conversion Applications", "text": "Due to wider band gap of silicon carbide (SiC) compared to silicon (Si), MOSFET made in SiC has considerably lower drift region resistance, which is a significant resistive component in high-voltage power devices. With low on-state resistance and its inherently low switching loss, SiC MOSFETs can offer much improved efficiency and compact size for the converter compared to those using Si devices. In this paper, we report switching performance of a new 1700-V, 50-A SiC MOSFET designed and developed by Cree, Inc. Hard-switching losses of the SiC MOSFETs with different circuit parameters and operating conditions are measured and compared with the 1700-V Si BiMOSFET and 1700-V Si IGBT, using same test set-up. Based on switching and conduction losses, the operating boundary of output power and switching frequency of these devices are found out in a dc-dc boost converter and compared. The switching dv/dts and di/dts of SiC MOSFET are captured and discussed in the perspective of converter design. To validate the continuous operation, three dc-dc boost converters using these devices, are designed and tested at 10 kW of power with 1 kV of output voltage and 10 kHz of switching frequency. 1700V SiC Schottky diode is used as the blocking diode in each case. Corresponding converter efficiencies are evaluated and the junction temperature of each device is estimated. To demonstrate high switching frequency operation, the SiC MOSFET is switched upto 150 kHz within permissible junction temperature rise. A switch combination of the 1700-V SiC MOSFET and 1700-V SiC Schottky diode connected in series is also evaluated for zero voltage switching turn-ON behavior and compared with those of bipolar Si devices. Results show substantial power loss saving with the use of SiC MOSFET."}
{"_id": "ce6c27954cb36b9a6a2aa4c5ff3c83af8263bcac", "title": "Turn-on Performance of Reverse Blocking IGBT (RB IGBT) and Optimization Using Advanced Gate Driver", "text": "Turn-on performance of a reverse blocking insulated gate bipolar transistor (RB IGBT) is discussed in this paper. The RB IGBT is a specially designed IGBT having ability to sustain blocking voltage of both the polarities. Such a switch shows superior conduction but much worst switching (turn- on) performances compared to a combination of an ordinary IGBT and blocking diode. Because of that, optimization of the switching performance is a key issue that makes the RB IGB not well accepted in the real applications. In this paper, the RB IGBT turn-on losses and reverse recovery current are analyzed for different gate driver techniques, and a new gate driver is proposed. Commonly used conventional gate drivers do not have capability for the switching dynamics optimization. In contrast to this, the new proposed gate driver provides robust and simple way to control and optimize the reverse recovery current and turn-on losses. The collector current slope and reverse recovery current are controlled by the means of the gate emitter voltage control in feedforward manner. In addition, the collector emitter voltage slope is controlled during the voltage falling phase by the means of inherent increase of the gate current. Therefore, the collector emitter voltage tail and the total turn- on losses are reduced, independently on the reverse recovery current. The proposed gate driver was experimentally verified and the results presented and discussed."}
{"_id": "e95ba00ad02cf43d1ba1b10fba87340ec92a97cb", "title": "High performance active gate drive for high power IGBTs", "text": "This paper deals with an active gate drive (AGD) technology for high power IGBTs. It is based on an optimal combination of several requirements necessary for good switching performance under hard switching conditions. The scheme specifically combines together the slow drive requirements for low noise and switching stress and the fast drive requirements for high speed switching and low switching energy loss. The gate drive can also effectively dampen oscillations during low current turn-on transients in the IGBT. This paper looks at the conflicting requirements of the conventional gate drive circuit design and demonstrates, using experimental results, that the proposed three-stage active gate drive technique can be an effective solution."}
{"_id": "1b2a8da28849dd08ad6f6020835cb16720a78576", "title": "A Novel Driving and Protection Circuit for Reverse-Blocking IGBT Used in Matrix Converter", "text": "A novel drive and protection circuit for reverse-blocking insulated gate bipolar transistor (RB-IGBT) is proposed in this paper. For the drive circuit, a dynamic current source is introduced to reduce the turn-on and turn-off transients. Meanwhile, the di/dt of the collector current and the dv/dt of the collector-emitter voltage are strictly restricted, so do the respective stresses. The drive circuit consists of a conventional push-pull driver and two controllable current sources-a current generator and a current sink. These two current sources work in switching transitions. For the protection circuit, a novel collector current detecting circuit suitable for RB-IGBT is proposed. This method detects the collector current by sensing collector-emitter voltage of the device. Further study shows that this method can be used to acquire the current signs in commutation transitions of matrix converter. A series of experiments has been carried out concerning the proposed drive and protection circuit and the experimental setup; results as well as detailed analysis are presented"}
{"_id": "6b998c49799f15981b85f063b58d9e7953572fab", "title": "EL-GAN: Embedding Loss Driven Generative Adversarial Networks for Lane Detection", "text": "Convolutional neural networks have been successfully applied to semantic segmentation problems. However, there are many problems that are inherently not pixel-wise classification problems but are nevertheless frequently formulated as semantic segmentation. This ill-posed formulation consequently necessitates hand-crafted scenario-specific and computationally expensive post-processing methods to convert the per pixel probability maps to final desired outputs. Generative adversarial networks (GANs) can be used to make the semantic segmentation network output to be more realistic or better structure-preserving, decreasing the dependency on potentially complex post-processing. In this work, we propose EL-GAN: a GAN framework to mitigate the discussed problem using an embedding loss. With EL-GAN, we discriminate based on learned embeddings of both the labels and the prediction at the same time. This results in much more stable training due to having better discriminative information, benefiting from seeing both \u2018fake\u2019 and \u2018real\u2019 predictions at the same time. This substantially stabilizes the adversarial training process. We use the TuSimple lane marking challenge to demonstrate that with our proposed framework it is viable to overcome the inherent anomalies of posing it as a semantic segmentation problem. Not only is the output considerably more similar to the labels when compared to conventional methods, the subsequent post-processing is also simpler and crosses the competitive 96% accuracy threshold."}
{"_id": "bc7f9ee145f14ecba2f5f3175f27dba99c03e4b4", "title": "A single nutrient feed supports both chemically defined NS0 and CHO fed-batch processes: Improved productivity and lactate metabolism.", "text": "A chemically defined nutrient feed (CDF) coupled with basal medium preloading was developed to replace a hydrolysate-containing feed (HCF) for a fed-batch NS0 process. The CDF not only enabled a completely chemically defined process but also increased recombinant monoclonal antibody titer by 115%. Subsequent tests of CDF in a CHO process indicated that it could also replace the hydrolysate-containing nutrient feed in this expression system as well as providing an 80% increase in product titer. In both CDF NS0 and CHO processes, the peak lactate concentrations were lower and, more interestingly, lactate metabolism shifted markedly from net production to net consumption when cells transitioned from exponential to stationary growth phase. Subsequent investigations of the lactate metabolic shift in the CHO CDF process were carried out to identify the cause(s) of the metabolic shift. These investigations revealed several metabolic features of the CHO cell line that we studied. First, glucose consumption and lactate consumption are strictly complementary to each other. The combined cell specific glucose and lactate consumption rate was a constant across exponential and stationary growth phases. Second, Lactate dehydrogenase (LDH) activity fluctuated during the fed-batch process. LDH activity was at the lowest when lactate concentration started to decrease. Third, a steep cross plasma membrane glucose gradient exists. Intracellular glucose concentration was more than two orders of magnitude lower than that in the medium. Fourth, a large quantity of citrate was diverted out of mitochondria to the medium, suggesting a partially truncated tricarboxylic acid (TCA) cycle in CHO cells. Finally, other intermediates in or linked to the glycolytic pathway and the TCA cycle, which include alanine, citrate, isocitrate, and succinate, demonstrated a metabolic shift similar to that of lactate. Interestingly, all these metabolites are either in or linked to the pathway downstream of pyruvate, but upstream of fumarate in glucose metabolism. Although the specific mechanisms for the metabolic shift of lactate and other metabolites remain to be elucidated, the increased understanding of the metabolism of CHO cultures could lead to future improvements in medium and process development."}
{"_id": "bfe23a776fd902cf13574d60632fc5885d71ee96", "title": "Learning to Pinpoint Singing Voice from Weakly Labeled Examples", "text": "Building an instrument detector usually requires temporally accurate ground truth that is expensive to create. However, song-wise information on the presence of instruments is often easily available. In this work, we investigate how well we can train a singing voice detection system merely from song-wise annotations of vocal presence. Using convolutional neural networks, multipleinstance learning and saliency maps, we can not only detect singing voice in a test signal with a temporal accuracy close to the state-of-the-art, but also localize the spectral bins with precision and recall close to a recent source separation method. Our recipe may provide a basis for other sequence labeling tasks, for improving source separation or for inspecting neural networks trained on auditory spectrograms."}
{"_id": "d0502d3c60b8446f08a8e24587013ca1a31960ce", "title": "High-Dimensional Single-Cell Mapping of Central Nervous System Immune Cells Reveals Distinct Myeloid Subsets in Health, Aging, and Disease.", "text": "Individual reports suggest that the central nervous system (CNS) contains multiple immune cell types with diverse roles in tissue homeostasis, immune defense, and neurological diseases. It has been challenging to map leukocytes across the entire brain, and in particular in pathology, where phenotypic changes and influx of blood-derived cells prevent a clear distinction between reactive leukocyte populations. Here, we applied high-dimensional single-cell mass and fluorescence cytometry, in parallel with genetic fate mapping systems, to identify, locate, and characterize multiple distinct immune populations within the mammalian CNS. Using this approach, we revealed that microglia, several subsets of border-associated macrophages and dendritic cells\u00a0coexist in the CNS at steady state and exhibit\u00a0disease-specific transformations in the immune microenvironment during aging and in models of\u00a0Alzheimer's disease and multiple sclerosis. Together, these data and the described framework provide a resource for the study of disease mechanisms, potential biomarkers, and therapeutic targets in CNS disease."}
{"_id": "75e5418a83c613b0a909bbd89be691594877a50a", "title": "EEG alpha activity reflects attentional demands, and beta activity reflects emotional and cognitive processes.", "text": "Two experiments were designed to examine the effects of attentional demands on the electroencephalogram during cognitive and emotional tasks. We found an interaction of task with hemisphere as well as more overall parietal alpha for tasks not requiring attention to the environment, such as mental arithmetic, than for those requiring such attention. Differential hemispheric activation for beta was found most strongly in the temporal areas for emotionally positive or negative tasks and in the parietal areas for cognitive tasks."}
{"_id": "7e97525172c8183290b6f2c357f10af94c285042", "title": "The possibility of predicting learning performance using features of note taking activities and instructions in a blended learning environment", "text": "A formative assessment was introduced to a blended learning course in order to predict participant\u2019s learning performance using measurements of their note taking activity and metrics of their attitudes. The lexical metrics were created by analyzing the contents of student\u2019s notes from every class, and by measuring their characteristics. In the results of two types of regression analysis of these measurements, features of note taking activities were a significant source of information for predicting the scores of final exams. In regards to temporal changes in prediction performance during the course, the progress of learning and the typical changes in features of note taking activity were discussed. During the analyses, the effectiveness of the lecturer\u2019s note taking instructions was also confirmed."}
{"_id": "f648e0c502776b41d8edc88e4f7789727c02b8fc", "title": "Correlation analysis of MQTT loss and delay according to QoS level", "text": "MQTT is an open protocol developed and released by IBM. To ensure the reliability of message transmission, MQTT supports three levels of QoS. In this paper, we analyze MQTT message transmission process which consists of real wired/wireless publish client, broker server and Subscribe client. By transmitting messages through 3 levels of QoS with various sizes of payloads, we have captured packets to analyze end-to-end delays and message loss."}
{"_id": "824c63088106cb6f3f0c63d63e14ccc624d37962", "title": "Improved resolution from subpixel shifted pictures", "text": "In this paper we consider the problem of obtaining a picture with improved resolution from an ensemble of K spatially shifted low-resolution pictures. We show that it can be done and provide a procedure for its implementations. The framework and scheme utilized in the generalized multichannel sampling theorem of Papoulis and Brown was found to be an appropriate representation for our problem. The linear channels of this scheme correspond to the processes that produce the low-resolution pictures (by blur and shifts) and to the process of reconstruction. The total process is shown to consist of two distinct degradation-restoration pairs: blurring-deblurring and undersampling-merging. This paper discusses merging. Deblurring (deconvolution) can be done by any desired scheme independent of merging. It is shown that the procedure achieves the claims of super resolution schemes but, unlike many of them, is well posed and robust. This procedure holds for a selected ensemble so that K is not too large and no two of the pictures practially coincide. The number of multiplications per pixel of the improved picture can be reduced to O(K2). o ESJZ"}
{"_id": "5b24ec5dc67169c3ddbd671d09fc62b6d15f9bd0", "title": "Research Paper: Automatic Detection of Acute Bacterial Pneumonia from Chest X-ray Reports", "text": "OBJECTIVE\nTo evaluate the performance of a natural language processing system in extracting pneumonia-related concepts from chest x-ray reports.\n\n\nMETHODS\n\n\n\nDESIGN\nFour physicians, three lay persons, a natural language processing system, and two keyword searches (designated AAKS and KS) detected the presence or absence of three pneumonia-related concepts and inferred the presence or absence of acute bacterial pneumonia from 292 chest x-ray reports. Gold standard: Majority vote of three independent physicians. Reliability of the gold standard was measured.\n\n\nOUTCOME MEASURES\nRecall, precision, specificity, and agreement (using Finn's R: statistic) with respect to the gold standard. Differences between the physicians and the other subjects were tested using the McNemar test for each pneumonia concept and for the disease inference of acute bacterial pneumonia.\n\n\nRESULTS\nReliability of the reference standard ranged from 0.86 to 0.96. Recall, precision, specificity, and agreement (Finn R:) for the inference on acute bacterial pneumonia were, respectively, 0.94, 0.87, 0.91, and 0.84 for physicians; 0.95, 0.78, 0.85, and 0.75 for natural language processing system; 0.46, 0.89, 0.95, and 0.54 for lay persons; 0.79, 0.63, 0.71, and 0.49 for AAKS; and 0.87, 0.70, 0.77, and 0.62 for KS. The McNemar pairwise comparisons showed differences between one physician and the natural language processing system for the infiltrate concept and between another physician and the natural language processing system for the inference on acute bacterial pneumonia. The comparisons also showed that most physicians were significantly different from the other subjects in all pneumonia concepts and the disease inference.\n\n\nCONCLUSION\nIn extracting pneumonia related concepts from chest x-ray reports, the performance of the natural language processing system was similar to that of physicians and better than that of lay persons and keyword searches. The encoded pneumonia information has the potential to support several pneumonia-related applications used in our institution. The applications include a decision support system called the antibiotic assistant, a computerized clinical protocol for pneumonia, and a quality assurance application in the radiology department."}
{"_id": "c8d170a6cf0b773a1db726492793c0426c7bc10a", "title": "Valuation of IT Investments Using Real Options Theory", "text": "Real Options Theory is often applied to the valuation of IT investments. The application of Real Options Theory is generally accompanied by a monetary valuation of real options through option pricing models which in turn are based on restrictive assumptions and thus subject to criticism. Therefore, this paper analyzes the application of option pricing models for the valuation of IT investments. A structured literature review reveals the types of IT investments which are valuated with Real Options Theory in scientific literature. These types of IT investments are further investigated and their main characteristics are compared to the restrictive assumptions of traditional option pricing models. This analysis serves as a basis for further discussion on how the identified papers address these assumptions. The results show that a lot of papers do not account for critical assumptions, although it is known that the assumptions are not fulfilled. Moreover, the type of IT investment determines the criticality of the assumptions. Additionally, several extensions or adaptions of traditional option pricing models can be found which provide the possibility of relaxing critical assumptions. Researchers can profit from the results derived in this paper in two ways: First, which assumptions can be critical for various types of IT investments are demonstrated. Second, extensions of option pricing models that relax critical assumptions are introduced."}
{"_id": "9a8c0aadb88e337cf6a5b0d542229685236e1de8", "title": "The source code control system", "text": "The Source Code Control System (SCCS) is a software tool designed to help programming projects control changes to source code. It provides facilities for storing, updating, and retrieving all versions of modules, for controlling updating privileges for identifying load modules by version number, and for recording who made each software change, when and where it was made, and why. This paper discusses the SCCS approach to source code control, shows how it is used and explains how it is implemented."}
{"_id": "c1fccf186ccce685fed39745557fce804452669d", "title": "Scheduling Real-Time Transactions: A Performance Evaluation", "text": "Managing transactions with real-time requirements presents many new problems. In this paper we address several: How can we schedule transactions with deadlines? How do the real-time constraints affect concurrency control? How should overloads be handled? How does the scheduling of 1/0 requests affect the timeliness of transactions? How should exclusive and shared locking be handled? We describe a new group of algorithms for scheduling real-time transactions that produce serializable schedules. We present a model for scheduling transactions with deadlines on a single processor disk resident database system, and evaluate the scheduling algorithms through detailed simulation experiments."}
{"_id": "99082085ce3a82bc137560f1fa8e3b395b453021", "title": "Magnetic field effects on plant growth, development, and evolution", "text": "The geomagnetic field (GMF) is a natural component of our environment. Plants, which are known to sense different wavelengths of light, respond to gravity, react to touch and electrical signaling, cannot escape the effect of GMF. While phototropism, gravitropism, and tigmotropism have been thoroughly studied, the impact of GMF on plant growth and development is not well-understood. This review describes the effects of altering magnetic field (MF) conditions on plants by considering plant responses to MF values either lower or higher than those of the GMF. The possible role of GMF on plant evolution and the nature of the magnetoreceptor is also discussed."}
{"_id": "31399f850c03d7652869d2d57080c344acc41ecb", "title": "Attentive Tensor Product Learning for Language Generation and Grammar Parsing", "text": "This paper proposes a new architecture \u2014 Attentive Tensor Product Learning (ATPL) \u2014 to represent grammatical structures in deep learning models. ATPL is a new architecture to bridge this gap by exploiting Tensor Product Representations (TPR), a structured neural-symbolic model developed in cognitive science, aiming to integrate deep learning with explicit language structures and rules. The key ideas of ATPL are: 1) unsupervised learning of role-unbinding vectors of words via TPR-based deep neural network; 2) employing attention modules to compute TPR; and 3) integration of TPR with typical deep learning architectures including Long Short-Term Memory (LSTM) and Feedforward Neural Network (FFNN). The novelty of our approach lies in its ability to extract the grammatical structure of a sentence by using role-unbinding vectors, which are obtained in an unsupervised manner. This ATPL approach is applied to 1) image captioning, 2) part of speech (POS) tagging, and 3) constituency parsing of a sentence. Experimental results demonstrate the effectiveness of the proposed approach."}
{"_id": "a6eae6cffae562422ce196d295c4fcc4698a45cc", "title": "Interrater reliability of a modified Ashworth scale of muscle spasticity.", "text": "We undertook this investigation to determine the interrater reliability of manual tests of elbow flexor muscle spasticity graded on a modified Ashworth scale. We each independently graded the elbow flexor muscle spasticity of 30 patients with intracranial lesions. We agreed on 86.7% of our ratings. The Kendall's tau correlation between our grades was .847 (p less than .001). Thus, the relationship between the raters' judgments was significant and the reliability was good. Although the results were limited to the elbow flexor muscle group, we believe them to be positive enough to encourage further trials of the modified Ashworth scale for grading spasticity."}
{"_id": "e125107da0293bfcdeea3b71f8bc4877d32aa627", "title": "Improved Plagiarism Detection Algorithm Based on Abstract Syntax Tree", "text": "Statements with conditionals are widely used in C, C++ and java, such as if and while statements and they are easy to plagiarize by adjusting the logical structure of the corresponding statements. However, the existing relative algorithms and tools cannot effectively detect code plagiarism of these statements. This paper puts forward an improved code plagiarism detection algorithm based on abstract syntax tree. The algorithm calculates the hash value for each node of the abstract syntax tree, and compares the hash values node by node. Based on this, it analyzes the if-statement plagiarism, as if-statements are representative in the statements with conditionals, and puts forward the corresponding detection schemes in order to detect plagiarism effectively. After that, with the results of many experiments, the algorithm is proved effective on detecting if-statement plagiarisms."}
{"_id": "46033ddbf93e8fa8cfafbd8a0cf6bf76a9a68fc6", "title": "Penile Implants among Prisoners\u2014A Cause for Concern?", "text": "BACKGROUND\nWe report the prevalence of penile implants among prisoners and determine the independent predictors for having penile implants. Questions on penile implants were included in the Sexual Health and Attitudes of Australian Prisoners (SHAAP) survey following concerns raised by prison health staff that increasing numbers of prisoners reported having penile implants while in prison.\n\n\nMETHODS\nComputer-Assisted Telephone Interviewing (CATI) of a random sample of prisoners was carried out in 41 prisons in New South Wales and Queensland (Australia). Men were asked, \"Have you ever inserted or implanted an object under the skin of your penis?\" If they responded Yes: \"Have you ever done so while you were in prison?\" Univariate logistic regression and logistic regression were used to determine the factors associated with penile implants.\n\n\nRESULTS\nA total of 2,018 male prisoners were surveyed, aged between 18 and 65 years, and 118 (5.8%) reported that they had inserted or implanted an object under the skin of their penis. Of these men, 87 (73%) had this done while they were in prison. In the multivariate analysis, a younger age, birth in an Asian country, and prior incarceration were all significantly associated with penile implants (p<0.001). Men with penile implants were also more likely to report being paid for sex (p<0.001), to have had body piercings (p<0.001) or tattoos in prison (p<0.001), and to have taken non-prescription drugs while in prison (p<0.05).\n\n\nCONCLUSIONS\nPenile implants appear to be fairly common among prisoners and are associated with risky sexual and drug use practices. As most of these penile implants are inserted in prison, these men are at risk of blood borne viruses and wound infection. Harm reduction and infection control strategies need to be developed to address this potential risk."}
{"_id": "e2f66f200088e105bb5a96c43474dbf46b6e99a3", "title": "Air-filled substrate integrated waveguide \u2014 A flexible and low loss technological platform", "text": "In this paper, the air-filled substrate integrated waveguide (AFSIW), offering a technological platform for the design of high performance millimeterwave self-packaged substrate integrated components and systems, is presented. AFSIW is based on low-cost printed circuit board (PCB). Several components, including antennas, filters and couplers, have been theoretically studied and experimentally demonstrated up to 60 GHz. This technology, surpassing its dielectric-filled SIW (DFSIW) counterpart, is expected by the authors to replace the rectangular waveguide technology for future low-cost extremely high data rate communication systems and highly sensitive sensors operating at millimeterwave and beyond."}
{"_id": "675b402bd488aae006ff9a9dd0951797558b2d6d", "title": "Attention Deficit Hyperactivity Disorder: Is There an App for That? Suitability Assessment of Apps for Children and Young People With ADHD", "text": "BACKGROUND\nAttention-deficit/hyperactivity disorder (ADHD) is a complex highly comorbid disorder, which can have a huge impact on those with ADHD, their family, and the community around them. ADHD is currently managed using pharmacological and nonpharmacological interventions. However, with advances in technology and an increase in the use of mobile apps, managing ADHD can be augmented using apps specifically designed for this population. However, little is known regarding the suitability and usability of currently available apps.\n\n\nOBJECTIVE\nThe aim of this study was to explore the suitability of the top 10 listed apps for children and young people with ADHD and clinicians who work with them. It is hypothesized that mobile apps designed for this population could be more suitably designed for this population.\n\n\nMETHODS\nThe top 10 listed apps that are specifically targeted toward children and young people with ADHD in the United Kingdom were identified via the Google Play (n=5) and iTunes store (n=5). Interviews were then undertaken with 5 clinicians who specialize in treating this population and 5 children and young people with ADHD themselves, to explore their opinions of the 10 apps identified and what they believe the key components are for apps to be suitable for this population.\n\n\nRESULTS\nFive themes emerged from clinician and young people interviews: the accessibility of the technology, the importance of relating to apps, addressing ADHD symptoms and related difficulties, age appropriateness, and app interaction. Three additional themes emerged from the clinician interviews alone: monitoring symptoms, side effects and app effect on relationships, and the impact of common comorbid conditions. The characteristics of the apps did not appear to match well with the views of our sample.\n\n\nCONCLUSIONS\nThese findings suggest that the apps may not be suitable in meeting the complex needs associated with this condition. Further research is required to explore the value of apps for children and young people with ADHD and their families and, in particular, any positive role for apps in the management of ADHD in this age group. A systematic review on how technology can be used to engage this population and how it can be used to help them would be a useful way forward. This could be the platform to begin exploring the use of apps further."}
{"_id": "0e952401b01dfdb1bc098345d5b74f13f560b08a", "title": "Scale Up vs. Scale Out in Cloud Storage and Graph Processing Systems", "text": "Deployers of cloud storage and iterative processing systems typically have to deal with either dollar budget constraints or throughput requirements. This paper examines the question of whether such cloud storage and iterative processing systems are more cost-efficient when scheduled on a COTS (scale out) cluster or a single beefy (scale up) machine. We experimentally evaluate two systems: 1) a distributed key-value store (Cassandra), and 2) a distributed graph processing system (Graph Lab). Our studies reveal scenarios where each option is preferable over the other. We provide recommendations for deployers of such systems to decide between scale up vs. Scale out, as a function of their dollar or throughput constraints. Our results indicate that there is a need or adaptive scheduling in heterogeneous clusters containing scale up and scale out nodes."}
{"_id": "af7417fdebb1d51e08f4b77111d6464fbaa9f6a5", "title": "The ASK Corpus - a Language Learner Corpus of Norwegian as a Second Language", "text": "In our paper we present the design and interface of ASK, a language learner corpus of Norwegian as a second language which contains essays collected from language tests on two different proficiency levels as well as personal data from the test takers. In addition, the corpus also contains texts and relevant personal data from native Norwegians as control data. The texts as well as the personal data are marked up in XML according to the TEI Guidelines. In order to be able to classify \u201cerrors\u201d in the texts, we have introduced new attributes to the TEI corr and sic tags. For each error tag, a correct form is also in the text annotation. Finally, we employ an automatic tagger developed for standard Norwegian, the \u201cOslo-Bergen Tagger\u201d, together with a facility for manual tag correction. As corpus query system, we are using the Corpus Workbench developed at the University of Stuttgart together with a web search interface developed at Aksis, University of Bergen. The system allows for searching for combinations of words, error types, grammatical annotation and personal data."}
{"_id": "609ab95cd8431efcb58e042e250a78ea16ab95be", "title": "WordFreak: An Open Tool for Linguistic Annotation", "text": "WordFreak is a natural language annotation tool that has been designed to be easy to extend to new domains and tasks. Specifically, a plug-in architecture has been developed which allows components to be added to WordFreak for customized visualization, annotation specification, and automatic annotation, without re-compilation. The APIs for these plug-ins provide mechanisms to allow automatic annotators or taggers to guide future annotation to supports active learning. At present WordFreak can be used to annotate a number of different types of annotation in English, Chinese, and Arabic including: constituent parse structure and dependent annotations, and ACE named-entity and coreference annotation. The Java source code for WordFreak is distributed under the Mozilla Public License 1.1 via SourceForge at: http://wordfreak.sourceforge.net. This site also provides screenshots, and a web deployable version of WordFreak."}
{"_id": "8c946d2acec0f4ff31c4e6574c0576de567536ef", "title": "Comparison between JSON and XML in Applications Based on AJAX", "text": "As the core technology of Web 2.0, Ajax has caught more and more attention. Xml, as the traditional data load format, needs to be resolved by DOM (Document Object Model ) both in client-side and server-side, which wastes system resource and makes a great reduction of user-friendliness. In this paper, a light-weightdata-interchanging format-JSON (Java Script Object Notation) will be introduced, which provides a higher level of flexibility and efficiency. We make a comparison between JSON and XML through expriment, then use JSON as data-transfering format in an actual project. Results show that JSON is more suitable as a data-loading tool for Ajax applications."}
{"_id": "f98bc780129c2e8841073e4d9546a65e546be75c", "title": "Characterization of double-layer capacitors (DLCs) for power electronics applications", "text": "The double-layer capacitor (DLC) for power applications is a new device. A simple resistive capacitive equivalent circuit is insufficient to characterize its terminal behaviour. Based on physical reasoning, an equivalent circuit is proposed to provide the power electronics engineer with a model for the terminal behaviour of the DLC. The equivalent model consists of three Re branches, one of them with a voltage dependent capacitor. A method to identify the circuit parameters is presented. Measurements of carbon-based DLCs for power applications are presented, analysed, and the equivalent circuit response is compared with experimental results."}
{"_id": "caed0931d477fe4da107ed1fcefffdfc80e3e208", "title": "A novel spring mechanism to reduce energy consumption of robotic arms", "text": "Most conventional robotic arms use motors to accelerate the manipulator. This leads to an unnecessary high energy consumption when performing repetitive tasks. This paper presents an approach to reduce energy consumption in robotic arms by performing its repetitive tasks with the help of a parallel spring mechanism. A special non-linear spring characteristic has been achieved by attaching a spring to two connected pulleys. This parallel spring mechanism provides for the accelerations of the manipulator without compromising its ability to vary the task parameters (the time per stroke, the displacement per stroke the grasping time and the payload). The energy consumption of the arm with the spring mechanism is compared to that of the same arm without the spring mechanism. Optimal control studies show that the robotic arm uses 22% less energy due to the spring mechanism. On the 2 DOF prototype, we achieved an energy reduction of 20%. The difference was due to model simplifications. With a spring mechanism, there is an extra energetic cost, because potential energy has to be stored into the spring during startup. This cost is equal to the total energy savings of the 2 DOF arm during 8 strokes. Next, there could have been an energetic cost to position the manipulator outside the equilibrium position. We have designed the spring mechanism in such a way that this holding cost is negligible for a range of start- and end positions. The performed experiments showed that the implementation of the proposed spring mechanism results in a reduction of the energy consumption while the arm is still able to handle varying task parameters."}
{"_id": "0dccaba946884d87c3deb08c0b05b8303729426d", "title": "A Method for Compact Image Representation Using Sparse Matrix and Tensor Projections Onto Exemplar Orthonormal Bases", "text": "We present a new method for compact representation of large image datasets. Our method is based on treating small patches from a 2-D image as matrices as opposed to the conventional vectorial representation, and encoding these patches as sparse projections onto a set of exemplar orthonormal bases, which are learned a priori from a training set. The end result is a low-error, highly compact image/patch representation that has significant theoretical merits and compares favorably with existing techniques (including JPEG) on experiments involving the compression of ORL and Yale face databases, as well as a database of miscellaneous natural images. In the context of learning multiple orthonormal bases, we show the easy tunability of our method to efficiently represent patches of different complexities. Furthermore, we show that our method is extensible in a theoretically sound manner to higher-order matrices (\u00bftensors\u00bf). We demonstrate applications of this theory to compression of well-known color image datasets such as the GaTech and CMU-PIE face databases and show performance competitive with JPEG. Lastly, we also analyze the effect of image noise on the performance of our compression schemes."}
{"_id": "4bf834b14d32508589c2a41f016720fe159f1660", "title": "Place-Its: A Study of Location-Based Reminders on Mobile Phones", "text": "Context-awareness can improve the usefulness of automated reminders. However, context-aware reminder applications have yet to be evaluated throughout a person\u2019s daily life. Mobile phones provide a potentially convenient and truly ubiquitous platform for the detection of personal context such as location, as well as the delivery of reminders. We designed Place-Its, a location-based reminder application that runs on mobile phones, to study people using location-aware reminders throughout their daily lives. We describe the design of Place-Its and a two-week exploratory user study. The study reveals that location-based reminders are useful, in large part because people use location in"}
{"_id": "ad192b83bab89418fc91e58568a7e6cc190e1aed", "title": "MULTISCALE ENTANGLEMENT RENORMALIZATION ANSATZ", "text": "The goal of this paper is to demonstrate a method for tensorizing neural networks based upon an efficient way of approximating scale invariant quantum states, the Multi-scale Entanglement Renormalization Ansatz (MERA). We employ MERA as a replacement for linear layers in a neural network and test this implementation on the CIFAR-10 dataset. The proposed method outperforms factorization using tensor trains, providing greater compression for the same level of accuracy and greater accuracy for the same level of compression. We demonstrate MERAlayers with 3900 times fewer parameters and a reduction in accuracy of less than 1% compared to the equivalent fully connected layers."}
{"_id": "913c36c74e686709bdce4606f84e11bbca0c9c41", "title": "Parametric and nonparametric linkage analysis: a unified multipoint approach.", "text": "In complex disease studies, it is crucial to perform multipoint linkage analysis with many markers and to use robust nonparametric methods that take account of all pedigree information. Currently available methods fall short in both regards. In this paper, we describe how to extract complete multipoint inheritance information from general pedigrees of moderate size. This information is captured in the multipoint inheritance distribution, which provides a framework for a unified approach to both parametric and nonparametric methods of linkage analysis. Specifically, the approach includes the following: (1) Rapid exact computation of multipoint LOD scores involving dozens of highly polymorphic markers, even in the presence of loops and missing data. (2) Non-parametric linkage (NPL) analysis, a powerful new approach to pedigree analysis. We show that NPL is robust to uncertainty about mode of inheritance, is much more powerful than commonly used nonparametric methods, and loses little power relative to parametric linkage analysis. NPL thus appears to be the method of choice for pedigree studies of complex traits. (3) Information-content mapping, which measures the fraction of the total inheritance information extracted by the available marker data and points out the regions in which typing additional markers is most useful. (4) Maximum-likelihood reconstruction of many-marker haplotypes, even in pedigrees with missing data. We have implemented NPL analysis, LOD-score computation, information-content mapping, and haplotype reconstruction in a new computer package, GENEHUNTER. The package allows efficient multipoint analysis of pedigree data to be performed rapidly in a single user-friendly environment."}
{"_id": "cdb71550fc32f82e10bcba140416546f8f85abd4", "title": "Extensions of Parallel Coordinates for Interactive Exploration of Large Multi-Timepoint Data Sets", "text": "Parallel coordinate plots (PCPs) are commonly used in information visualization to provide insight into multi-variate data. These plots help to spot correlations between variables. PCPs have been successfully applied to unstructured datasets up to a few millions of points. In this paper, we present techniques to enhance the usability of PCPs for the exploration of large, multi-timepoint volumetric data sets, containing tens of millions of points per timestep. The main difficulties that arise when applying PCPs to large numbers of data points are visual clutter and slow performance, making interactive exploration infeasible. Moreover, the spatial context of the volumetric data is usually lost. We describe techniques for preprocessing using data quantization and compression, and for fast GPU-based rendering of PCPs using joint density distributions for each pair of consecutive variables, resulting in a smooth, continuous visualization. Also, fast brushing techniques are proposed for interactive data selection in multiple linked views, including a 3D spatial volume view. These techniques have been successfully applied to three large data sets: Hurricane Isabel (Vis'04 contest), the ionization front instability data set (Vis'08 design contest), and data from a large-eddy simulation of cumulus clouds. With these data, we show how PCPs can be extended to successfully visualize and interactively explore multi-timepoint volumetric datasets with an order of magnitude more data points."}
{"_id": "29b04f2eab798849a97e4d48bddc7bb2a733c1b0", "title": "Curved Folded Plate Timber Structures", "text": "Current research at the laboratory for timber construction IBOIS at EPFL focuses on shell structures. Two main families are investigated: shells made of linear elements, as timber rib shells, and shells made of surface elements, typically folded plate structures. With regard to timber rib shells, bent linear timber elements are used to create the shape of a shell. With folded plate structures, new timber derived products are used to create structures made of planar panels, where the panel simultaneously covers space and acts as load bearing element. Indeed, timber industry has developed in the last fifteen years new large-size timber panels. Composition and dimensions of these panels and the possibility of milling them with Computer Numerical Controlled machines shows great potential for folded plate structures. In the present paper we investigate to combine both approaches using bent timber panels for curved folded plate structures."}
{"_id": "d16afa6fe546451bb4330eae67b1722823e37e41", "title": "Food region segmentation in meal images using touch points", "text": "We propose an interactive scheme for segmenting meal images for automated dietary assessment. A smartphone user photographs a meal and marks a few touch points on the resulting image. The segmentation algorithm initializes a set of food segments with the touch points, and grows them using local image features. We evaluate the algorithm with a data set consisting of 300 manually segmented meal images. The precision of segmentation is 0.87, compared with 0.70 for fully automatic segmentation. The results show that the precision of segmentation was significantly improved by incorporating minimal user intervention."}
{"_id": "e4a75044cc9f0e9f82d5f477cf748e7e92f568f4", "title": "Recurrent Capsule Network for Relations Extraction: A Practical Application to the Severity Classification of Coronary Artery Disease", "text": "Coronary artery disease (CAD) is one of the leading causes of cardiovascular disease deaths. CAD condition progresses rapidly, if not diagnosed and treated at an early stage may eventually lead to an irreversible state of the heart muscle death. Invasive coronary arteriography is the gold standard technique for CAD diagnosis. Coronary arteriography texts describe which part has stenosis and how much stenosis is in details. It is crucial to conduct the severity classification of CAD. In this paper, we propose a recurrent capsule network (RCN) to extract semantic relations between clinical named entities in Chinese coronary arteriography texts, through which we can automatically find out the maximal stenosis for each lumen to inference how severe CAD is according to the improved method of Gensini. Experimental results on the corpus collected from Shanghai Shuguang Hospital show that our proposed RCN model achieves a F1-score of 0.9641 in relation extraction, which outperforms the baseline methods."}
{"_id": "9df2e4cecc9b28befd513542ee38276b23d1fd41", "title": "A Novel User Authentication Scheme Based on QR-Code", "text": "User authentication is one of the fundamental procedures to ensure secure communications and share system resources over an insecure public network channel. Thus, a simple and efficient authentication mechanism is required for securing the network system in the real environment. In general, the password-based authentication mechanism provides the basic capability to prevent unauthorized access. Especially, the purpose of the one-time password is to make it more difficult to gain unauthorized access to restricted resources. Instead of using the password file as conventional authentication systems, many researchers have devoted to implement various one-time password schemes using smart cards, time-synchronized token or short message service in order to reduce the risk of tampering and maintenance cost. However, these schemes are impractical because of the far from ubiquitous hardware devices or the infrastructure requirements. To remedy these weaknesses, the attraction of the QR-code technique can be introduced into our one-time password authentication protocol. Not the same as before, the proposed scheme based on QR code not only eliminates the usage of the password verification table, but also is a cost effective solution since most internet users already have mobile phones. For this reason, instead of carrying around a separate hardware token for each security domain, the superiority of handiness benefit from the mobile phone makes our approach more practical and convenient."}
{"_id": "6b1790cc931bf99e9de2207c668de5899275b0fa", "title": "Knowledge manipulation activities: results of a Delphi study", "text": "Knowledge-based organizations are hosts for multitudes of knowledge management (KM) episodes. Each episode is triggered by a knowledge need and culminates with the satisfaction of that need (or its abandonment). Within an episode, one or more of the organization\u2019 processors (human and/or computer-based) manipulate knowledge resources in various ways in an effort to meet the need. This paper identifies and characterizes a generic set of elemental knowledge manipulation activities that can be arranged in a variety of patterns within KM episodes. It also indicates possible knowledge flows that can occur among the activities. This descriptive framework was developed using conceptual synthesis and a Delphi methodology involving an international panel of researchers and practitioners in the KM field. The framework can serve as a common language for discourse about knowledge manipulation. For researchers, it suggests issues that deserve investigation and concepts that must be considered in explorations of KM episodes. For practitioners, the framework provides a perspective on activities that need to be considered in the design, measurement, control, coordination, and support of an organization\u2019 KM episodes. # 2002 Elsevier Science B.V. All rights reserved."}
{"_id": "1442ee271a16879e0bcb6c4cc233266c2eb36a22", "title": "Brain activities associated with gaming urge of online gaming addiction.", "text": "The aim of this study was to identify the neural substrates of online gaming addiction through evaluation of the brain areas associated with the cue-induced gaming urge. Ten participants with online gaming addiction and 10 control subjects without online gaming addiction were tested. They were presented with gaming pictures and the paired mosaic pictures while undergoing functional magnetic resonance imaging (fMRI) scanning. The contrast in blood-oxygen-level dependent (BOLD) signals when viewing gaming pictures and when viewing mosaic pictures was calculated with the SPM2 software to evaluate the brain activations. Right orbitofrontal cortex, right nucleus accumbens, bilateral anterior cingulate and medial frontal cortex, right dorsolateral prefrontal cortex, and right caudate nucleus were activated in the addicted group in contrast to the control group. The activation of the region-of-interest (ROI) defined by the above brain areas was positively correlated with self-reported gaming urge and recalling of gaming experience provoked by the WOW pictures. The results demonstrate that the neural substrate of cue-induced gaming urge/craving in online gaming addiction is similar to that of the cue-induced craving in substance dependence. The above-mentioned brain regions have been reported to contribute to the craving in substance dependence, and here we show that the same areas were involved in online gaming urge/craving. Thus, the results suggest that the gaming urge/craving in online gaming addiction and craving in substance dependence might share the same neurobiological mechanism."}
{"_id": "c5e52a5aa89c137ca9dcaaa39b574ae170317b20", "title": "Enhancing well-being and alleviating depressive symptoms with positive psychology interventions: a practice-friendly meta-analysis.", "text": "Do positive psychology interventions-that is, treatment methods or intentional activities aimed at cultivating positive feelings, positive behaviors, or positive cognitions-enhance well-being and ameliorate depressive symptoms? A meta-analysis of 51 such interventions with 4,266 individuals was conducted to address this question and to provide practical guidance to clinicians. The results revealed that positive psychology interventions do indeed significantly enhance well-being (mean r=.29) and decrease depressive symptoms (mean r=.31). In addition, several factors were found to impact the effectiveness of positive psychology interventions, including the depression status, self-selection, and age of participants, as well as the format and duration of the interventions. Accordingly, clinicians should be encouraged to incorporate positive psychology techniques into their clinical work, particularly for treating clients who are depressed, relatively older, or highly motivated to improve. Our findings also suggest that clinicians would do well to deliver positive psychology interventions as individual (versus group) therapy and for relatively longer periods of time."}
{"_id": "4c256eff946c9e433b96a895842ce7ee0a9f8ce8", "title": "ARLearn: Augmented Reality Meets Augmented Virtuality", "text": "This article deals with educational opportunities for mixed reality games and related scenarios for learning. It discusses several issues and educational challenges to be tackled when linking augmented reality and augmented virtuality. Second, the paper describes the architecture of the ARLearn system which offers highly flexible support for different educational settings. Three prototypical use cases implemented based on the underlying ARLearn framework are discussed, which are a field trip system, an augmented Google StreetView client called StreetLearn, and a real time crisis intervention game. ARLearn combines real time notification and mixed reality games across Mobile Augmented Reality and Virtual Reality and the authors aim to use the underlying (open source) framework for further case studies and mixed reality applications for learning support."}
{"_id": "91a81021d450caf0b76d9b9cca03f5eb0863cddd", "title": "High-quality computational imaging through simple lenses", "text": "Modern imaging optics are highly complex systems consisting of up to two dozen individual optical elements. This complexity is required in order to compensate for the geometric and chromatic aberrations of a single lens, including geometric distortion, field curvature, wavelength-dependent blur, and color fringing.\n In this article, we propose a set of computational photography techniques that remove these artifacts, and thus allow for postcapture correction of images captured through uncompensated, simple optics which are lighter and significantly less expensive. Specifically, we estimate per-channel, spatially varying point spread functions, and perform nonblind deconvolution with a novel cross-channel term that is designed to specifically eliminate color fringing."}
{"_id": "141145eac9fe794a280f0a4fd1ecdfed71d61bb1", "title": "Feature vector selection and projection using kernels", "text": "This paper provides new insight into kernel methods by using data selection. The kernel trick is used to select from the data a relevant subset forming a basis in a feature space F . Thus the selected vectors de,ne a subspace in F . Then, the data is projected onto this subspace where classical algorithms are applied. We show that kernel methods like generalized discriminant analysis (Neural Comput. 12 (2000) 2385) or kernel principal component analysis (Neural Comput. 10 (1998) 1299) can be expressed more easily. Moreover, it will turn out that the size of the basis is related to the complexity of the model. Therefore, the data selection leads to a complexity control and thus to a better generalization. The approach covers a wide range of algorithms. We investigate the function approximation on real classi,cation problems and on a regression problem. c \u00a9 2003 Elsevier B.V. All rights reserved."}
{"_id": "0a78d735b11a901225cf5f0729001a359012c36b", "title": "A case for stateful forwarding plane", "text": "In Named Data Networking (NDN), packets carry data names instead of source and destination addresses. This paradigm shift leads to a new network forwarding plane: data consumers send Interest packets to request desired data, routers forward Interest packets and maintain the state of all pending Interests, which is then used to guide Data packets back to the consumers. Maintaining the pending Interest state, together with the two-way Interest and Data exchange, enables NDN routers\u2019 forwarding process to measure performance of different paths, quickly detect failures and retry alternative paths. In this paper we describe an initial design of NDN\u2019s forwarding plane and evaluate its data delivery performance under adverse conditions. Our results show that this stateful forwarding plane can successfully circumvent prefix hijackers, avoid failed links, and utilize multiple paths to mitigate congestion. We also compare NDN\u2019s performance with that of IP-based solutions to highlight the advantages of a stateful forwarding plane. 2013 Elsevier B.V. All rights reserved."}
{"_id": "02b66d4ead512e9e0c09f23c15c15201e602af6d", "title": "An ontology-based approach for the reconstruction and analysis of digital incidents timelines", "text": "Due to the democratisation of new technologies, computer forensics investigators have to deal with volumes of data which are becoming increasingly large and heterogeneous. Indeed, in a single machine, hundred of events occur per minute, produced and logged by the operating system and various software. Therefore, the identification of evidence, and more generally, the reconstruction of past events is a tedious and time-consuming task for the investigators. Our work aims at reconstructing and analysing automatically the events related to a digital incident, while respecting legal requirements. To tackle those three main problems (volume, heterogeneity and legal requirements), we identify seven necessary criteria that an efficient reconstruction tool must meet to address these challenges. This paper introduces an approach based on a three-layered ontology, called ORD2I, to represent any digital events. ORD2I is associated with a set of operators to analyse the resulting timeline and to ensure the reproducibility of the investigation. \u00a9 2015 Elsevier Ltd. All rights reserved."}
{"_id": "6620cb1331499385f3e8daad453734560f930257", "title": "Assessing heterogeneity in meta-analysis: Q statistic or I2 index?", "text": "In meta-analysis, the usual way of assessing whether a set of single studies is homogeneous is by means of the Q test. However, the Q test only informs meta-analysts about the presence versus the absence of heterogeneity, but it does not report on the extent of such heterogeneity. Recently, the I(2) index has been proposed to quantify the degree of heterogeneity in a meta-analysis. In this article, the performances of the Q test and the confidence interval around the I(2) index are compared by means of a Monte Carlo simulation. The results show the utility of the I(2) index as a complement to the Q test, although it has the same problems of power with a small number of studies."}
{"_id": "03a3b5ca18f6482cfee128eb24ddd1a59015fb2d", "title": "BloomFlash: Bloom Filter on Flash-Based Storage", "text": "The bloom filter is a probabilistic data structure that provides a compact representation of a set of elements. To keep false positive probabilities low, the size of the bloom filter must be dimensioned a priori to be linear in the maximum number of keys inserted, with the linearity constant ranging typically from one to few bytes. A bloom filter is most commonly used as an in memory data structure, hence its size is limited by the availability of RAM space on the machine. As datasets have grown over time to Internet scale, so have the RAM space requirements of bloom filters. If sufficient RAM space is not available, we advocate that flash memory may serve as a suitable medium for storing bloom filters, since it is about one-tenth the cost of RAM per GB while still providing access times orders of magnitude faster than hard disk. We present BLOOMFLASH, a bloom filter designed for flash memory based storage, that provides a new dimension of trade off with bloom filter access times to reduce RAM space usage (and hence system cost). The simple design of a single flat bloom filter on flash suffers from many performance bottlenecks, including in-place bit updates that are inefficient on flash and multiple reads and random writes spread out across many flash pages for a single lookup or insert operation. To mitigate these performance bottlenecks, BLOOMFLASH leverages two key design innovations: (i) buffering bit updates in RAM and applying them in bulk to flash that helps to reduce random writes to flash, and (ii) a hierarchical bloom filter design consisting of component bloom filters, stored one per flash page, that helps to localize reads and writes on flash. We use two real-world data traces taken from representative bloom filter applications to drive and evaluate our design. BLOOMFLASH achieves bloom filter access times in the range of few tens of microseconds, thus allowing up to order of tens of thousands operations per sec."}
{"_id": "f4a37839183c96ae00a3fc987397d43578b57965", "title": "Optimizing the Hadoop MapReduce Framework with high-performance storage devices", "text": "Solid-state drives (SSDs) are an attractive alternative to hard disk drives (HDDs) to accelerate the Hadoop MapReduce Framework. However, the SSD characteristics and today\u2019s Hadoop framework exhibit mismatches that impede indiscriminate SSD integration. This paper explores how to optimize a Hadoop MapReduce Framework with SSDs in terms of performance, cost, and energy consumption. It identifies extensible best practices that can exploit SSD benefits within Hadoop when combined with high network bandwidth and increased parallel storage access. Our Terasort benchmark results demonstrate that Hadoop currently does not sufficiently exploit SSD throughput. Hence, using faster SSDs in Hadoop does not enhance its performance. We show that SSDs presently deliver significant efficiency when storing intermediate Hadoop data, leaving HDDs for Hadoop Distributed File System (HDFS). The proposed configuration is optimized with the JVM reuse option and frequent heartbeat interval option. Moreover, we examined the performance of a state-of-the-art non-volatile memory express interface SSD within the Hadoop MapReduce Framework. While HDFS read and write throughput increases with high-performance SSDs, achieving complete system performance improvement requires carefully balancing CPU, network, and storage resource capabilities at a system level."}
{"_id": "67f49ad7f438ca20313519c94af777b83646d456", "title": "HoPE: Hot-Cacheline Prediction for Dynamic Early Decompression in Compressed LLCs", "text": "Data compression plays a pivotal role in improving system performance and reducing energy consumption, because it increases the logical effective capacity of a compressed memory system without physically increasing the memory size. However, data compression techniques incur some cost, such as non-negligible compression and decompression overhead. This overhead becomes more severe if compression is used in the cache. In this article, we aim to minimize the read-hit decompression penalty in compressed Last-Level Caches (LLCs) by speculatively decompressing frequently used cachelines. To this end, we propose a Hot-cacheline Prediction and Early decompression (HoPE) mechanism that consists of three synergistic techniques: Hot-cacheline Prediction (HP), Early Decompression (ED), and Hit-history-based Insertion (HBI). HP and HBI efficiently identify the hot compressed cachelines, while ED selectively decompresses hot cachelines, based on their size information. Unlike previous approaches, the HoPE framework considers the performance balance/tradeoff between the increased effective cache capacity and the decompression penalty. To evaluate the effectiveness of the proposed HoPE mechanism, we run extensive simulations on memory traces obtained from multi-threaded benchmarks running on a full-system simulation framework. We observe significant performance improvements over compressed cache schemes employing the conventional Least-Recently Used (LRU) replacement policy, the Dynamic Re-Reference Interval Prediction (DRRIP) scheme, and the Effective Capacity Maximizer (ECM) compressed cache management mechanism. Specifically, HoPE exhibits system performance improvements of approximately 11%, on average, over LRU, 8% over DRRIP, and 7% over ECM by reducing the read-hit decompression penalty by around 65%, over a wide range of applications."}
{"_id": "5c8d05e27e36ebd64ee43fe1670262cdcc2123ba", "title": "Measuring praise and criticism: Inference of semantic orientation from association", "text": "The evaluative character of a word is called its semantic orientation. Positive semantic orientation indicates praise (e.g., \"honest\", \"intrepid\") and negative semantic orientation indicates criticism (e.g., \"disturbing\", \"superfluous\"). Semantic orientation varies in both direction (positive or negative) and degree (mild to strong). An automated system for measuring semantic orientation would have application in text classification, text filtering, tracking opinions in online discussions, analysis of survey responses, and automated chat systems (chatbots). This article introduces a method for inferring the semantic orientation of a word from its statistical association with a set of positive and negative paradigm words. Two instances of this approach are evaluated, based on two different statistical measures of word association: pointwise mutual information (PMI) and latent semantic analysis (LSA). The method is experimentally tested with 3,596 words (including adjectives, adverbs, nouns, and verbs) that have been manually labeled positive (1,614 words) and negative (1,982 words). The method attains an accuracy of 82.8% on the full test set, but the accuracy rises above 95% when the algorithm is allowed to abstain from classifying mild words."}
{"_id": "710b5c54c011377130f436d2531e6c1f89dd884f", "title": "Human behavior and the principle of least effort : An introduction to human ecology", "text": "JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact support@jstor.org."}
{"_id": "4bf58ea97792749fb40aa1eaf55f9f5301fd680c", "title": "Direct Manipulation Interfaces", "text": "Approach \u2022 Social information independent of physical analog s \u2013 Text (e.g., emote) \u2013 abstract graphical representations (e.g., chat cir cles) \u2022 Interested in Abstract Approach a. Creates and deploys working systems b. Lack of attention Babble Prototype"}
{"_id": "91c9224f1c1b67017632303dc4da43b6392bad4d", "title": "Fine-grained metadata journaling on NVM", "text": "Journaling file systems have been widely used where data consistency must be assured. However, we observed that the overhead of journaling can cause up to 48.2% performance drop under certain kinds of workloads. On the other hand, the emerging high-performance, byte-addressable Non-volatile Memory (NVM) has the potential to minimize such overhead by being used as the journal device. The traditional journaling mechanism based on block devices is nevertheless unsuitable for NVM due to the write amplification of metadata journal we observed. In this paper, we propose a fine-grained metadata journal mechanism to fully utilize the low-latency byte-addressable NVM so that the overhead of journaling can be significantly reduced. Based on the observation that conventional block-based metadata journal contains up to 90% clean metadata that is unnecessary to be journalled, we design a fine-grained journal format for byte-addressable NVM which contains only modified metadata. Moreover, we redesign the process of transaction committing, checkpointing and recovery in journaling file systems utilizing the new journal format. Therefore, thanks to the reduced amount of ordered writes to NVM, the overhead of journaling can be reduced without compromising the file system consistency. Experimental results show that our NVM-based fine-grained metadata journaling is up to 15.8\u00d7 faster than the traditional approach under FileBench workloads."}
{"_id": "a0a245d32fd0b901b64e70f74ec4ee5bcf23effc", "title": "Individual ball possession in soccer", "text": "This paper describes models for detecting individual and team ball possession in soccer based on position data. The types of ball possession are classified as Individual Ball Possession (IBC), Individual Ball Action (IBA), Individual Ball Control (IBC), Team Ball Possession (TBP), Team Ball Control (TBC) und Team Playmaking (TPM) according to different starting points and endpoints and the type of ball control involved. The machine learning approach used is able to determine how long the ball spends in the sphere of influence of a player based on the distance between the players and the ball together with their direction of motion, speed and the acceleration of the ball. The degree of ball control exhibited during this phase is classified based on the spatio-temporal configuration of the player controlling the ball, the ball itself and opposing players using a Bayesian network. The evaluation and application of this approach uses data from 60 matches in the German Bundesliga season of 2013/14, including 69,667 IBA intervals. The identification rate was F = .88 for IBA and F = .83 for IBP, and the classification rate for IBC was \u03ba = .67. Match analysis showed the following mean values per match: TBP 56:04 \u00b1 5:12 min, TPM 50:01 \u00b1 7:05 min and TBC 17:49 \u00b1 8:13 min. There were 836 \u00b1 424 IBC intervals per match and their number was significantly reduced by -5.1% from the 1st to 2nd half. The analysis of ball possession at the player level indicates shortest accumulated IBC times for the central forwards (0:49 \u00b1 0:43 min) and the longest for goalkeepers (1:38 \u00b1 0:58 min), central defenders (1:38 \u00b1 1:09 min) and central midfielders (1:27 \u00b1 1:08 min). The results could improve performance analysis in soccer, help to detect match events automatically, and allow discernment of higher value tactical structures, which is based on individual ball possession."}
{"_id": "bb911cfd2a151b616147544db4edfe192129bfd0", "title": "Tank-Like Module-Based Climbing Robot Using Passive Compliant Joints", "text": "This paper proposes an underactuated modular climbing robot with flat dry elastomer adhesives. This robot is designed to achieve high speed, high payload, and dexterous motions that are typical drawbacks of previous climbing robots. Each module is designed as a tread-wheeled mechanism to simultaneously realize high speed and high adhesive force. Two modules are connected by compliant joints, which induce a positive preload on the front wheels resulting in stable climbing and high payload capacity. Compliant joints also help the robot to perform various transitions. An active tail is adopted to regulate the preload of the second module. Force transfer equations are derived and stable operating conditions are verified. The stiffness coefficients of the compliant joints and the active tail force are determined optimally to satisfy the constraints of stable operation. The prototype two-module robot achieves 6-cm/s speed and 500-g payload capacity on vertical surfaces. The abilities of flat surface locomotion, internal, external, and thin-wall transitions, and overcoming various sized obstacles are validated through experiment. The principle of joint compliance can be adopted in other climbing robots to enhance their stability and transition capability."}
{"_id": "c526ad0ea8b4cff9a671eb8a90ea98eb64ae17a7", "title": "Sparse Bayesian Extreme Learning Machine for Multi-classification", "text": "Extreme learning machine (ELM) has become a popular topic in machine learning in recent years. ELM is a new kind of single-hidden layer feedforward neural network with an extremely low computational cost. ELM, however, has two evident drawbacks: 1) the output weights solved by Moore-Penrose generalized inverse is a least squares minimization issue, which easily suffers from overfitting and 2) the accuracy of ELM is drastically sensitive to the number of hidden neurons so that a large model is usually generated. This brief presents a sparse Bayesian approach for learning the output weights of ELM in classification. The new model, called Sparse Bayesian ELM (SBELM), can resolve these two drawbacks by estimating the marginal likelihood of network outputs and automatically pruning most of the redundant hidden neurons during learning phase, which results in an accurate and compact model. The proposed SBELM is evaluated on wide types of benchmark classification problems, which verifies that the accuracy of SBELM model is relatively insensitive to the number of hidden neurons; and hence a much more compact model is always produced as compared with other state-of-the-art neural network classifiers."}
{"_id": "6aa70acfc5faf165b8381065c4da8d5ebb17689f", "title": "An Integrated Environment for the Development of Knowledge-Based Recommender Applications", "text": "The complexity of product assortments offered by online selling platforms makes the selection of appropriate items a challenging task. Customers can differ significantly in their expertise and level of knowledge regarding such product assortments. Consequently, intelligent recommender systems are required which provide personalized dialogues supporting the customer in the product selection process. In this paper we present the domainindependent, knowledge-based recommender environment CWAdvisor which assists users by guaranteeing the consistency and appropriateness of solutions, by identifying additional selling opportunities, and by providing explanations for solutions. Using examples from different application domains, we show how model-based diagnosis, personalization, and intuitive knowledge acquisition techniques support the effective implementation of customer-oriented sales dialogues. In this context, we report our experiences gained in industrial projects and present an evaluation of successfully deployed recommender applications."}
{"_id": "87a15e68bd40c76cff5c3c21fd5414475b616977", "title": "Agile distributed software development: enacting control through media and context", "text": "While face-to-face interaction is fundamental in agile software development, distributed environments must rely extensively on mediated interactions. Practicing agile principles in distributed environments therefore poses particular control challenges related to balancing fixed vs. evolving quality requirements and people vs. process-based collaboration. To investigate these challenges, we conducted an in-depth case study of a successful agile distributed software project with participants from a Russian firm and a Danish firm. Applying Kirsch\u2019s elements of control framework, we offer an analysis of how control was enacted through the project context and in the participants\u2019 mediated communication. The analysis reveals that formal measurement and evaluation control were persistently enacted through mediated communication. These formal control practices were, however, predominantly carried out in conjunction with informal roles and relationships such as clan-like control inherent in agile development. Overall, the study demonstrates that, if appropriately applied, communication technologies can significantly support distributed, agile practices by allowing concurrent enactment of both formal and informal controls. The paper discusses these findings as they relate to previous research and concludes with their implications for future research."}
{"_id": "03c56448c41d30f58d4c36b40220ce85ba2b6c1f", "title": "Active Learning with Oracle Epiphany", "text": "We present a theoretical analysis of active learning with more realistic interactions with human oracles. Previous empirical studies have shown oracles abstaining on difficult queries until accumulating enough information to make label decisions. We formalize this phenomenon with an \u201coracle epiphany model\u201d and analyze active learning query complexity under such oracles for both the realizable and the agnostic cases. Our analysis shows that active learning is possible with oracle epiphany, but incurs an additional cost depending on when the epiphany happens. Our results suggest new, principled active learning approaches with realistic oracles."}
{"_id": "10802705db21bc53f9c9fcdaff7c2c084005aba0", "title": "CLOS: Integrating Object-Oriented and Functional Programming", "text": "Lisp has a long history as a functional language,* where action is invoked by calling a procedure, and where procedural abstraction and encapsulation provide convenient modularity boundaries. A number of attempts have been made to graft object-oriented programming into this framework without losing the essential character of Lisp\u2014to include the benefits of data abstraction, extensible type classification, incremental operator definition, and code reuse through an inheritance hierarchy.\nThe Common Lisp Object System (CLOS) [3], a result of the ANSI standardization process for Common Lisp, represents a marriage of these two traditions. This article explores the landscape in which the major object-oriented facilities exist, showing how the CLOS solution is effective within the two contexts."}
{"_id": "7f246f4620756eacfc0f682e4721b7b012164bb6", "title": "Time-Bounded Authentication of FPGAs", "text": "This paper introduces a novel technique to authenticate and identify field-programmable gate arrays (FPGAs). The technique uses the reconfigurability feature of FPGAs to perform self-characterization and extract the unique timing of the FPGA building blocks over the space of possible inputs. The characterization circuit is then exploited for constructing a physically unclonable function (PUF). The PUF can accept different forms of challenges including pulsewidth, digital binary, and placement challenges. The responses from the PUF are only verifiable by entities with access to the unique timing signature. However, the authentic device is the only entity who can respond within a given time constraint. The constraint is set by the gap between the speed of PUF evaluation on authentic hardware and simulation of its behavior. A suite of authentication protocols is introduced based on the time-bounded mechanism. We ensure that the responses are robust to fluctuations in operational conditions such as temperature and voltage variations by employing: 1) a linear calibration mechanism that adjusts the clock frequency by a feedback from on-chip temperature and voltage sensor readings, and 2) a differential PUF structure with real-valued responses that cancels out the common impact of variations on delays. Security against various attacks is discussed and a proof-of-concept implementation of signature extraction and authentication are demonstrated on Xilinx Virtex 5 FPGAs."}
{"_id": "46007096b8c46d2d6436d8c28fc46b2cbfb7f49f", "title": "The Weibull-G Family of Probability Distributions", "text": "Abstract: The Weibull distribution is the most important distribution for problems in reliability. We study some mathematical properties of the new wider Weibull-G family of distributions. Some special models in the new family are discussed. The properties derived hold to any distribution in this family. We obtain general explicit expressions for the quantile function, ordinary and incomplete moments, generating function and order statistics. We discuss the estimation of the model parameters by maximum likelihood and illustrate the potentiality of the extended family with two applications to real data."}
{"_id": "d86146968b682092a39ef55f8c8b33c66a611d1c", "title": "Reduction of Computational Efforts in Finite Element-Based Permanent Magnet Traction Motor Optimization", "text": "This paper presents a method for reducing computational time in constrained single objective optimization problems related to permanent magnet motors modeled using computationally intensive finite element method. The method is based on differential evolution algorithm. The principal approach is to interrupt the evaluation of inequality constraints after encountering the constraint which is violated and continue their evaluation only if better solutions are obtained in comparison with the candidate vector from the previous generation both in terms of inequality constraints and the cost function. This approach avoids unnecessary time consuming finite element calculations required for solving the inequality constraints and saves the overall optimization time without affecting the convergence rate. The main features of this approach and how it complements the existing method for handling inequality constraints in Differential Evolution algorithm is demonstrated on design optimization of an interior permanent magnet motor for the low-floor tram TMK\u00a02200. The motor geometry is optimized for maximum torque density."}
{"_id": "56f32b919f0c2628a265d0b9bba51821d3063ff3", "title": "State-of-the-art autogenous ear reconstruction in cases of microtia.", "text": "Ear reconstruction is considered to be a challenging form of surgery. In cases of microtia, surgeons must reconstruct complex missing contours, which necessitates the use of a support and skin remnants to cover this support. Although the use of synthetic material has been proposed in order to avoid harvesting and carving cartilage, the best long-term choice for reconstructing an ear is autologous rib cartilage. This procedure requires good understanding of the 3-dimensional architecture of the ear and learning the step-by-step construction of a harmonious framework (which with practice will become the most straightforward part of the procedure). Surgery, usually performed at the age of 9 or 10 years, is planned in 2 stages. In the first stage, the framework is placed under a skin pocket. Six months later, the sulcus is created using an additional cartilage graft for projection and a skin-grafted galeal fascial flap. In order to shorten the learning curve, a detailed carving process is described here, as well as a tool to enable training before surgery. Remnants of the microtic ear can have many different shapes; therefore, a comprehensive approach to skin management is proposed, providing a simple surgical classification for all types of microtia. Furthermore, some refinements of the cartilage framework and the construction of the retroauricular sulcus have improved results. Whenever possible, successful reconstruction of a microtic ear with autologous rib cartilage, as opposed to synthetic materials, is by far the best option."}
{"_id": "40c4c1ac154c7b2ca462da3b1d4576df6b2f1730", "title": "An alignment based similarity measure for hand detection in cluttered sign language video", "text": "Locating hands in sign language video is challenging due to a number of factors. Hand appearance varies widely across signers due to anthropometric variations and varying levels of signer proficiency. Video can be captured under varying illumination, camera resolutions, and levels of scene clutter, e.g., high-res video captured in a studio vs. low-res video gathered by a Web cam in a user's home. Moreover, the signers' clothing varies, e.g., skin-toned clothing vs. contrasting clothing, short-sleeved vs. long-sleeved shirts, etc. In this work, the hand detection problem is addressed in an appearance matching framework. The histogram of oriented gradient (HOG) based matching score function is reformulated to allow non-rigid alignment between pairs of images to account for hand shape variation. The resulting alignment score is used within a support vector machine hand/not-hand classifier for hand detection. The new matching score function yields improved performance (in ROC area and hand detection rate) over the vocabulary guided pyramid match kernel (VGPMK) and the traditional, rigid HOG distance on American Sign Language video gestured by expert signers. The proposed match score function is computationally less expensive (for training and testing), has fewer parameters and is less sensitive to parameter settings than VGPMK. The proposed detector works well on test sequences from an inexpert signer in a non-studio setting with cluttered background."}
{"_id": "226ca2ea890739c188cfceffaac48d26f39c4c6b", "title": "Phylogenetic analysis in the anomaly zone.", "text": "The concatenation method has been widely used as a means of combining data to estimate phylogenetic trees (Huelsenbeck et al. 1996a, 1996b; Glazko and Nei 2003). However, simulation studies have shown that the maximum likelihood (ML) estimate of the species tree for concatenated sequences may be statistically inconsistent if the gene trees are highly heterogeneous (Kolaczkowski and Thornton 2004; Kubatko and Degnan 2007). Recently, Degnan and Rosenberg (2006) defined an \u201canomaly zone\u201d\u2014a set of short internal branches in species trees that will generate gene trees that are discordant with the species tree more often than gene trees that are concordant. Kubatko and Degnan (2007) went on to show that when DNA sequences are generated from gene trees simulated from species trees in the anomaly zone, as well as from species trees slightly outside this zone but still with short internal branches, the ML estimate of the species tree for the concatenated sequences can be inconsistent, resulting in increasing certainty in the wrong species tree. These studies were all performed with a molecular clock on rooted gene and species trees within the variation realized in stochastic simulations of DNA sequences under the Jukes and Cantor (1969) model of nucleotide substitution. They applied the ML method with a clock to recover phylogenetic trees from their simulated concatenated data sets. In this paper, we show that phylogenetic methods that solely utilize the relative order of divergences among a set of DNA sequences as a criterion for inferring phylogenies, such as the unweighted pair group method with arithmetic mean (UPGMA), are statistically consistent even when DNA sequences are generated from gene trees simulated from species trees in the anomaly zone. In addition, we use simulation to assess the performance of a variety of tree construction methods when analyzing concatenated sequences generated from 4 and 5-taxon species tree located in the anomaly zone and show that a variety of methods do in fact recover the correct species tree topology, whereas ML, with or without a molecular clock, remains inconsistent. However, the branch lengths of the tree inferred from concatenated data are inevitably overestimated, as predicted by theory. Finally, simulations also suggest that a newly proposed Bayesian approach for estimating species trees from multiple unlinked loci, BEST (Liu and Pearl 2007; Liu et al. 2008), is consistent in both topology and branch lengths on data sets generated from species trees in the anomaly zone. THEORETICAL RESULTS"}
{"_id": "1daa9321768295d39d562f69d4347739f6f8931e", "title": "Autocomplete hand-drawn animations", "text": "Hand-drawn animation is a major art form and communication medium, but can be challenging to produce. We present a system to help people create frame-by-frame animations through manual sketches. We design our interface to be minimalistic: it contains only a canvas and a few controls. When users draw on the canvas, our system silently analyzes all past sketches and predicts what might be drawn in the future across spatial locations and temporal frames. The interface also offers suggestions to beautify existing drawings. Our system can reduce manual workload and improve output quality without compromising natural drawing flow and control: users can accept, ignore, or modify such predictions visualized on the canvas by simple gestures. Our key idea is to extend the local similarity method in [Xing et al. 2014], which handles only low-level spatial repetitions such as hatches within a single frame, to a global similarity that can capture high-level structures across multiple frames such as dynamic objects. We evaluate our system through a preliminary user study and confirm that it can enhance both users' objective performance and subjective satisfaction."}
{"_id": "9c0e1eb9746f8420959d6547c25909782e0c7743", "title": "An MEG-based brain\u2013computer interface (BCI)", "text": "Brain-computer interfaces (BCIs) allow for communicating intentions by mere brain activity, not involving muscles. Thus, BCIs may offer patients who have lost all voluntary muscle control the only possible way to communicate. Many recent studies have demonstrated that BCIs based on electroencephalography (EEG) can allow healthy and severely paralyzed individuals to communicate. While this approach is safe and inexpensive, communication is slow. Magnetoencephalography (MEG) provides signals with higher spatiotemporal resolution than EEG and could thus be used to explore whether these improved signal properties translate into increased BCI communication speed. In this study, we investigated the utility of an MEG-based BCI that uses voluntary amplitude modulation of sensorimotor mu and beta rhythms. To increase the signal-to-noise ratio, we present a simple spatial filtering method that takes the geometric properties of signal propagation in MEG into account, and we present methods that can process artifacts specifically encountered in an MEG-based BCI. Exemplarily, six participants were successfully trained to communicate binary decisions by imagery of limb movements using a feedback paradigm. Participants achieved significant mu rhythm self control within 32 min of feedback training. For a subgroup of three participants, we localized the origin of the amplitude modulated signal to the motor cortex. Our results suggest that an MEG-based BCI is feasible and efficient in terms of user training."}
{"_id": "9aae1c7c9158d038930b6ad023ec6f0a2e339973", "title": "3D Reconstruction of Human Skeleton from Single Images or Monocular Video Sequences", "text": "In this paper, we first review the approaches to recover 3D shape and related movements of a human and then we present an easy and reliable approach to recover a 3D model using just one image or monocular video sequence. A simplification of the perspective camera model is required, due to the absence of stereo view. The human figure is reconstructed in a skeleton form and to improve the visual quality, a pre-defined human model is also fitted to the recovered 3D data."}
{"_id": "70bff3c9e19215c1bca501eb10a51bb63ea6cfe2", "title": "Estimation of Primary Quantization Matrix in Double Compressed JPEG Images", "text": "In this report, we present a method for estimation of primary quantization matrix from a double compressed JPEG image. We first identify characteristic features that occur in DCT histograms of individual coefficients due to double compression. Then, we present 3 different approaches that estimate the original quantization matrix from double compressed images. Finally, most successful of them Neural Network classifier is discussed and its performance and reliability is evaluated in a series of experiments on various databases of double compressed images. It is also explained in this paper, how double compression detection techniques and primary quantization matrix estimators can be used in steganalysis of JPEG files and in digital forensic analysis for detection of digital forgeries."}
{"_id": "eb161064ea62eaf6369a28ff04493b47cf29cc0c", "title": "Crowd analysis using visual and non-visual sensors, a survey", "text": "This paper proposes a critical survey of crowd analysis techniques using visual and non-visual sensors. Automatic crowd understanding has a massive impact on several applications including surveillance and security, situation awareness, crowd management, public space design, intelligent and virtual environments. In case of emergency, it enables practical safety applications by identifying crowd situational context information. This survey identifies different approaches as well as relevant work on crowd analysis by means of visual and non-visual techniques. Multidisciplinary research groups are addressing crowd phenomenon and its dynamics ranging from social, and psychological aspects to computational perspectives. The possibility to use smartphones as sensing devices and fuse this information with video sensors data, allows to better describe crowd dynamics and behaviors. Eventually, challenges and further research opportunities with reference to crowd analysis are exposed."}
{"_id": "4c14a0b18e5cd13f5ef6fae7a70826dfd66b3284", "title": "Rare adipose disorders (RADs) masquerading as obesity", "text": "Rare adipose disorders (RADs) including multiple symmetric lipomatosis (MSL), lipedema and Dercum's disease (DD) may be misdiagnosed as obesity. Lifestyle changes, such as reduced caloric intake and increased physical activity are standard care for obesity. Although lifestyle changes and bariatric surgery work effectively for the obesity component of RADs, these treatments do not routinely reduce the abnormal subcutaneous adipose tissue (SAT) of RADs. RAD SAT likely results from the growth of a brown stem cell population with secondary lymphatic dysfunction in MSL, or by primary vascular and lymphatic dysfunction in lipedema and DD. People with RADs do not lose SAT from caloric limitation and increased energy expenditure alone. In order to improve recognition of RADs apart from obesity, the diagnostic criteria, histology and pathophysiology of RADs are presented and contrasted to familial partial lipodystrophies, acquired partial lipodystrophies and obesity with which they may be confused. Treatment recommendations focus on evidence-based data and include lymphatic decongestive therapy, medications and supplements that support loss of RAD SAT. Associated RAD conditions including depression, anxiety and pain will improve as healthcare providers learn to identify and adopt alternative treatment regimens for the abnormal SAT component of RADs. Effective dietary and exercise regimens are needed in RAD populations to improve quality of life and construct advanced treatment regimens for future generations."}
{"_id": "ff02c951e5ca708b409993dc5b9b8ea712b69b07", "title": "LSTM-Based NeuroCRFs for Named Entity Recognition", "text": "Although NeuroCRF, an augmented Conditional Random Fields (CRF) model whose feature function is parameterized as a Feed-Forward Neural Network (FF NN) on word embeddings, has soundly outperformed traditional linear-chain CRF on many sequence labeling tasks, it is held back by the fact that FF NNs have a fixed input length and therefore cannot take advantage of the full input sentence. We propose to address this issue by replacing the FF NN with a Long Short-Term Memory (LSTM) NN, which can summarize an input of arbitrary length into a fixed dimension representation. The resulting model obtains F1=89.28 on WikiNER dataset, a significant improvement over the NeuroCRF baseline\u2019s F1=87.58, which is already a highly competitive result."}
{"_id": "0d93ed75dde134a191d43a34f965d99943f2c54e", "title": "Automatic classification of citation function", "text": "The automatic recognition of the rhetorical function of citations in scientific text has many applications, from improvement of impact factor calculations to text summarisation and more informative citation indexers. Citation function is defined as the author\u2019s reason for citing a given paper (e.g. acknowledgement of the use of the cited method). We show that our annotation scheme for citation function is reliable, and present a supervised machine learning framework to automatically classify citation function, which uses several shallow and linguistically-inspired features. We find, amongst other things, a strong relationship between citation function and sentiment classification."}
{"_id": "3612b052db3cdd6d80ea14fa258a834f193d1667", "title": "Reference Scope Identification in Citing Sentences", "text": "A citing sentence is one that appears in a scientific article and cites previous work. Citing sentences have been studied and used in many applications. For example, they have been used in scientific paper summarization, automatic survey generation, paraphrase identification, and citation function classification. Citing sentences that cite multiple papers are common in scientific writing. This observation should be taken into consideration when using citing sentences in applications. For instance, when a citing sentence is used in a summary of a scientific paper, only the fragments of the sentence that are relevant to the summarized paper should be included in the summary. In this paper, we present and compare three different approaches for identifying the fragments of a citing sentence that are related to a given target reference. Our methods are: word classification, sequence labeling, and segment classification. Our experiments show that segment classification achieves the best results."}
{"_id": "65a5111f1d619192226a4030136d5cd2bd6dd58b", "title": "Learning Whom to Trust with MACE", "text": "Non-expert annotation services like Amazon\u2019s Mechanical Turk (AMT) are cheap and fast ways to evaluate systems and provide categorical annotations for training data. Unfortunately, some annotators choose bad labels in order to maximize their pay. Manual identification is tedious, so we experiment with an item-response model. It learns in an unsupervised fashion to a) identify which annotators are trustworthy and b) predict the correct underlying labels. We match performance of more complex state-of-the-art systems and perform well even under adversarial conditions. We show considerable improvements over standard baselines, both for predicted label accuracy and trustworthiness estimates. The latter can be further improved by introducing a prior on model parameters and using Variational Bayes inference. Additionally, we can achieve even higher accuracy by focusing on the instances our model is most confident in (trading in some recall), and by incorporating annotated control instances. Our system, MACE (Multi-Annotator Competence Estimation), is available for download1."}
{"_id": "abaec51e13e3f16287311f9b5933baf720ed611d", "title": "New Compact CMOS Li-Ion Battery Charger Using Charge-Pump Technique for Portable Applications", "text": "This paper presents a new compact CMOS Li-Ion battery charger for portable applications that uses a charge-pump technique. The proposed charger features a small chip size and a simple circuit structure. Additionally, it provides basic functions with voltage/current detection, end-of-charge detection, and charging speed control. The charger operates in dual-mode and is supported in the trickle/large constant-current mode to constant-voltage mode with different charging rates. This charger is implemented using a TSMC 0.35-mum CMOS process with a 5-V power supply. The output voltage is almost 4.2 V, and the maximum charging current reaches 700 mA. It has 67.89% power efficiency, 837-mW chip power dissipation, and only 1.455times1.348 mm2 in chip area including pads"}
{"_id": "372ef88b6ff7f0d6c26c8ac44e2d723e69643444", "title": "Consistency of Online Random Forests", "text": "As a testament to their success, the theory of random forests has long been outpaced by their application in practice. In this paper, we take a step towards narrowing this gap by providing a consistency result for online random forests."}
{"_id": "29e736be581f693d1454ecc5a3fa5c217576d0d0", "title": "Actor-Oriented Design of Embedded Hardware and Software Systems", "text": "In this paper, we argue that model-based design and platform-based design are two views of the same thing. A platform is a set of designs. Model-based design is about using platforms with useful modeling properties to specify designs, and then synthesizing implementations from these specifications. A design language, such as SystemC or VHDL, defines a platform. Any valid expression in the language is an element of the set. Viewed from below, the language provides an abstraction of implementation capabilities. Viewed from above, the language provides a set of constraints together with benefits that flow from those constraints. It provides a conceptual framework within which a design is crafted. An actor-oriented platform lies above the program-level platforms that are widely used today. It orthogonalizes the actor definition language and the actor assembly language, enabling domain-polymorphic actor definitions, multiple models of computation, and hierarchical heterogeneity. Actor-oriented platforms offer compelling modeling properties. Synthesis into program-level descriptions is possible. We illustrate these concepts by describing a design framework built on Ptolemy II."}
{"_id": "f8a81faffb1997247c277fbc12028c981e300f3e", "title": "Exploiting the Data Redundancy Locality to Improve the Performance of Deduplication-Based Storage Systems", "text": "The chunk-lookup disk bottleneck and the read amplification problems are two great challenges for deduplication-based storage systems and restrict the applicability of data deduplication for large-scale data volumes. Previous studies and our experimental evaluations have shown that the amount of redundant data shared among different types of applications is negligible. Based on the observations, we propose AA-Plus which effectively groups the hash index of the same application together and divides the whole hash index into different groups based on the application types. Moreover, it groups the data chunks of the same application together on the disks. The extensive trace-driven experiments conducted on our lightweight prototype implementation of AA-Plus show that compared with AA-Dedupe, AA-Plus significantly speeds up the write throughput by a factor of up to 6.9 and with an average of 3.1, and speeds up the read throughput by a factor of up to 3.3 and with an average of 1.9."}
{"_id": "14d792772da62654f5c568153222ae1523659b96", "title": "Surpassing Humans in Boundary Detection using Deep Learning", "text": "In this work we show that Deep Convolutional Neural Networks can outperform humans on the task of boundary detection, as measured on the standard Berkeley Segmentation Dataset. Our detector is fully integrated in the popular Caffe framework and processes a 320x420 image in less than a second. Our contributions consist firstly in combining a careful design of the loss for boundary detection training, a multi-resolution architecture and training with external data to improve the detection accuracy of the current state of the art, from an optimal dataset scale F-measure of 0.780 to 0.808 while human performance is at 0.803. We further improve performance to 0.813 by combining deep learning with grouping, integrating the Normalized Cuts technique within a deep network. We also examine the potential of our boundary detector in conjunction with the higher level tasks of object proposal generation and semantic segmentation for both tasks our detector yields clear improvements over state-of-the-art systems."}
{"_id": "7050e7a2e88a6797ee9ec4188305c39333076184", "title": "The HEXACO Honesty-Humility, Agreeableness, and Emotionality factors: a review of research and theory.", "text": "We review research and theory on the HEXACO personality dimensions of Honesty-Humility (H), Agreeableness (A), and Emotionality (E), with particular attention to the following topics: (1) the origins of the HEXACO model in lexical studies of personality structure, and the content of the H, A, and E factors in those studies; (2) the operationalization of the H, A, and E factors in the HEXACO Personality Inventory-Revised; (3) the construct validity of self-reports on scales measuring the H factor; (4) the theoretical distinction between H and A; (5) similarity and assumed similarity between social partners in personality, with a focus on H and A; (6) the extent to which H (and A and E) variance is represented in instruments assessing the \"Five-Factor Model\" of personality; and (7) the relative validity of scales assessing the HEXACO and Five-Factor Model dimensions in predicting criteria conceptually relevant to H, A, and E."}
{"_id": "79c841ec73022d365e2d20e5d261eea6e7e90cfb", "title": "A Critical Review on Outlier Detection Techniques", "text": "Outlier Detection is a Data Mining Application. Outlier contains noisy data which is researched in various domains. The various techniques are already being researched that is more generic. We surveyed on various techniques and applications of outlier detection that provides a novel approach that is more useful for the beginners. The proposed approach helps to clean data at university level in less time with great accuracy. This survey includes the existing outlier techniques and applications where the noisy data exists. Our paper defines critical review on various techniques used in different applications of outlier detection that are to be researched further and they gives a particular type of knowledge based data i.e. more useful in research activities. So where the Anomalies is present it will be detected through outlier detection techniques and monitored accordingly especially in educational Data Mining."}
{"_id": "686150e2179840ed40a0166cba6c5d507f3aa49c", "title": "KCoFI: Complete Control-Flow Integrity for Commodity Operating System Kernels", "text": "We present a new system, KCoFI, that is the first we know of to provide complete Control-Flow Integrity protection for commodity operating systems without using heavyweight complete memory safety. Unlike previous systems, KCoFI protects commodity operating systems from classical control-flow hijack attacks, return-to-user attacks, and code segment modification attacks. We formally verify a subset of KCoFI's design by modeling several features in small-step semantics and providing a partial proof that the semantics maintain control-flow integrity. The model and proof account for operations such as page table management, trap handlers, context switching, and signal delivery. Our evaluation shows that KCoFI prevents all the gadgets found by an open-source Return Oriented Programming (ROP) gadget-finding tool in the FreeBSD kernel from being used, it also reduces the number of indirect control-flow targets by 98.18%. Our evaluation also shows that the performance impact of KCoFI on web server bandwidth is negligible while file transfer bandwidth using OpenSSH is reduced by an average of 13%, and at worst 27%, across a wide range of file sizes. Postmark, an extremely file-system intensive benchmark, shows 2x overhead. Where comparable numbers are available, the overheads of KCoFI are far lower than heavyweight memory-safety techniques."}
{"_id": "8766ea561ba91e7ff7b581882a87ed608f585b6e", "title": "Data Mining: A Preprocessing Engine", "text": "This study is emphasized on different types of normalization. Each of which was tested against the ID3 methodology using the HSV data set. Number of leaf nodes, accuracy and tree growing time are three factors that were taken into account. Comparisons between different learning methods were accomplished as they were applied to each normalization method. A new matrix was designed to check for the best normalization method based on the factors and their priorities. Recommendations were concluded."}
{"_id": "27b204c19ab0c61e4bcd7a98ae40eeb0e5153164", "title": "Attentional biases for emotional faces in young children of mothers with chronic or recurrent depression.", "text": "Attentional biases for negative stimuli have been observed in school-age and adolescent children of depressed mothers and may reflect a vulnerability to depression. The direction of these biases and whether they can be identified in early childhood remains unclear. The current study examined attentional biases in 5-7-year-old children of depressed and non-depressed mothers. Following a mood induction, children participated in a dot-probe task assessing biases for sad and happy faces. There was a significant interaction of group and sex: daughters of depressed mothers attended selectively to sad faces, while children of controls and sons of depressed mothers did not exhibit biases. No effects were found for happy stimuli. These findings suggest that attentional biases are discernible in early childhood and may be vulnerability markers for depression. The results also raise the possibility that sex differences in cognitive biases are evident before the emergence of sex differences in the prevalence of depression."}
{"_id": "40511fb109756cfe5edbe889e6812714fc972b07", "title": "A positive level shifter for high speed symmetric switching in flash memories", "text": "In this paper, a novel positive level shifter is designed for high speed interfacing between the digital circuits operating at 1.2V and the memory core circuits operating at high voltages in flash memories. The proposed design is also validated for a wide range of output voltage levels and has been optimized to perform efficiently across the different process corners and temperatures. For the targeted design of 1.2V and 33 MHz input signal to 4V output signal, the level shifter has a symmetric switching speed with a rise time and fall time of 2ns and 1.89ns respectively while the average power consumption of 0.056mW is obtained in the typical condition. Simulation results are compared with various other conventional positive level shifters at different process corners and temperatures."}
{"_id": "5369b021f2abf5daa77fa5602569bb3b8bb18546", "title": "GMMCP tracker: Globally optimal Generalized Maximum Multi Clique problem for multiple object tracking", "text": "Data association is the backbone to many multiple object tracking (MOT) methods. In this paper we formulate data association as a Generalized Maximum Multi Clique problem (GMMCP). We show that this is the ideal case of modeling tracking in real world scenario where all the pairwise relationships between targets in a batch of frames are taken into account. Previous works assume simplified version of our tracker either in problem formulation or problem optimization. However, we propose a solution using GMMCP where no simplification is assumed in either steps. We show that the NP hard problem of GMMCP can be formulated through Binary-Integer Program where for small and medium size MOT problems the solution can be found efficiently. We further propose a speed-up method, employing Aggregated Dummy Nodes for modeling occlusion and miss-detection, which reduces the size of the input graph without using any heuristics. We show that, using the speedup method, our tracker lends itself to real-time implementation which is plausible in many applications. We evaluated our tracker on six challenging sequences of Town Center, TUD-Crossing, TUD-Stadtmitte, Parking-lot 1, Parking-lot 2 and Parking-lot pizza and show favorable improvement against state of art."}
{"_id": "a3bdce5589c1c114d17d3e0b99b4c221a679d881", "title": "Integration Trends in Monolithic Power ICs: Application and Technology Challenges", "text": "This paper highlights the general trend towards further monolithic integration in power applications by enabling power management and interfacing solutions in advanced CMOS nodes. The need to combine high-density digital circuits, power-management circuits, and robust interfaces in a single technology platform requires the development of additional process options on top of baseline CMOS. Examples include high-voltage devices, devices to enable area-efficient ESD protection, and integrated capacitors and inductors with high quality factors. The use of bipolar devices in these technologies for protection and control purposes in power applications is also addressed."}
{"_id": "bd967906407857df517d412852533bd62ef333ab", "title": "Sign language recognition : Generalising to more complex corpora", "text": "The aim of this thesis is to find new approaches to Sign Language Recognition (SLR) which are suited to working with the limited corpora currently available. Data available for SLR is of limited quality; low resolution and frame rates make the task of recognition even more complex. The content is rarely natural, concentrating on isolated signs and filmed under labo\u00ad ratory conditions. In addition, the amount of accurately labelled data is minimal. To this end, several contributions are made: Tracking the hands is eschewed in favour of detection based techniques more robust to noise; for both signs and for linguistically-motivated sign sub-units are investigated, to make best use of limited data sets. Finally, an algorithm is proposed to learn signs fiom the inset signers on TV, with the aid of the accompanying subtitles, thus increasing the corpus of data available. Tracking fast moving hands under laboratory conditions is a complex task, move this to real world data and the challenge is even greater. When using tracked data as a base for SLR, the er\u00ad rors in the tracking are compounded at the classification stage. Proposed instead, is a novel sign detection method, which views space-time as a 3D volume and the sign within it as an object to be located. Features are combined into strong classifiers using a novel boosting implementation designed to create optimal classifiers over sparse datasets. Using boosted volumetric features, on a robust frame differenced input, average classification rates reach 71% on seen signers and 66% on a mixture of seen and unseen signers, with individual sign classification rates gaining 95%. Using a classifier per sign approach to SLR, means that data sets need to contain numerous examples of the signs to be learnt. Instead, this thesis proposes lear nt classifiers to detect the common sub-units of sign. The responses of these classifiers can then be combined for recognition at the sign level. This approach requires fewer examples per sign to be learnt, since the sub-unit detectors are trained on data from multiple signs. It is also faster at detection time since there are fewer classifiers to consult, the number of these being limited by the linguistics of sign and not the number of signs being detected. For this method, appearance based boosted classifiers are introduced to distinguish the sub-units of sign. Results show that when combined with temporal models, these novel sub-unit classifiers, can outperform similar classifiers learnt on tracked results. As an added side effect; since the sub-units are linguistically derived tliey can be used independently to help linguistic annotators. Since sign language data sets are costly to collect and annotate, there are not many publicly available. Those which are, tend to be constiained in content and often taken under laboratory conditions. However, in the UK, the British Broadcasting Corporation (BBC) regularly pro\u00ad duces programs with an inset signer and corresponding subtitles. This provides a natural signer, covering a wide range of topics, in real world conditions. While it has no ground truth, it is proposed tliat the tr anslated subtitles can provide weak labels for learning signs. The final con\u00ad tributions of this tliesis, lead to an innovative approach to learn signs from these co-occurring streams of data. 
Using a unique, temporally constr ained, version of the Apriori mining algo\u00ad rithm, similar sections of video are identified as possible sign locations. These estimates are improved upon by introducing the concept of contextual negatives, removing contextually sim\u00ad ilar\" noise. Combined with an iterative honing process, to enhance the localisation of the target sign, 23 word/sign combinations are learnt from a 30 minute news broadcast, providing a novel rnetliod for automatic data set creation."}
{"_id": "668db48c6a79826456341680ee1175dfc4cced71", "title": "Get To The Point: Summarization with Pointer-Generator Networks", "text": "Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points."}
{"_id": "114cd46a6ffd730081fe116ccc4a76a8fd4c6fc9", "title": "Digital photography with flash and no-flash image pairs", "text": "Digital photography has made it possible to quickly and easily take a pair of images of low-light environments: one with flash to capture detail and one without flash to capture ambient illumination. We present a variety of applications that analyze and combine the strengths of such flash/no-flash image pairs. Our applications include denoising and detail transfer (to merge the ambient qualities of the no-flash image with the high-frequency flash detail), white-balancing (to change the color tone of the ambient image), continuous flash (to interactively adjust flash intensity), and red-eye removal (to repair artifacts in the flash image). We demonstrate how these applications can synthesize new images that are of higher quality than either of the originals."}
{"_id": "2e847b956b51a80827d96d10833ea9a46b845b5c", "title": "A Fully Projective Formulation to Improve the Accuracy of Lowe's Pose-Estimation Algorithm", "text": "Both the original version of David Lowe's innuential and classic algorithm for tracking known objects and a reformulation of it implemented by Ishii et al. rely on (diierent) approximated imaging models. Removing their simplifying assumptions yields a fully pro-jective solution with signiicantly improved accuracy and convergence, and arguably better computation{ time properties."}
{"_id": "10a463bb00b44bdd3a8620f2bedb9e1564bfcf32", "title": "The Design and Analysis of Computer Algorithms", "text": "Addison-Wesley Pub. Co., 1974, , 470 pages. With this text, you gain an understanding of the fundamental concepts of algorithms, the very heart of computer science. It introduces the basic data structures and programming techniques often used in efficient algorithms. Covers use of lists, push-down stacks, queues, trees, and graphs. Later chapters go into sorting, searching and graphing algorithms, the string-matching algorithms, and the Schonhage-Strassen integer-multiplication algorithm. Provides numerous graded exercises at the end of each chapter. 0201000296B04062001."}
{"_id": "110b99487cbc9a0951ff868ed752100da2bdc66d", "title": "Derivatives of Regular Expressions", "text": "Kleene's regular expressions, which can be used for describing sequential circuits, were defined using three operators (union, concatenation and iterate) on sets of sequences. Word descriptions of problems can be more easily put in the regular expression language if the language is enriched by the inclusion of other logical operations. However, il~ the problem of converting the regular expression description to a state diagram, the existing methods either cannot handle expressions with additional operators, or are made quite complicated by the presence of such operators. In this paper the notion of a derivative of a regular expression is introduced atld the properties of derivatives are discussed. This leads, in a very natural way, to the construction of a state diagram from a regular expression containing any number of logical operators."}
{"_id": "44799559a1067e06b5a6bf052f8f10637707928f", "title": "Fast Pattern Matching in Strings", "text": "\u30c6\u30ad\u30b9\u30c8\u306e\u4e2d\u304b\u3089\u4e0e\u3048\u3089\u308c\u305f\u30d1\u30bf\u30fc\u30f3\u3092\u898b\u3064\u3051\u51fa\u3059\u3068 \u3044\u3046,\u3044\u308f\u3086\u308b\u6587\u5b57\u5217\u63a2\u7d22\u306e\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u3092\u8abf\u3079\u3088\u3046\u3068 \u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306e\u6559\u79d1\u66f8\u3092\u3072\u3082\u3068\u304f\u3068,\u5fc5\u305a\u3068\u3044\u3063\u3066\u3088\u3044 \u307b\u3069,\u529b\u307e\u304b\u305b\u306e\u65b9\u6cd5,Knuth,Morris,Pratt\u306b\u3088\u308b \u65b9\u6cd5,Boyer\u3068Moore\u306b\u3088\u308b\u65b9\u6cd5 \u306e 3\u3064\u304c\u7d39\u4ecb\u3055\u308c \u3066\u3044\u308b.\u672c\u8ad6\u6587\u306f 2\u756a\u76ee\u306e\u65b9\u6cd5-\u8457\u8005\u306e\u982d\u6587\u5b57\u3092\u3068\u3063 \u3066 KMP\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u3068\u7701\u7565\u3055\u308c\u308b \u306b\u3064\u3044\u3066\u8ff0\u3079\u3066 \u3044\u308b. \u6587\u5b57\u5217\u63a2\u7d22\u304c\u5fc5\u8981\u3068\u3055\u308c\u308b\u4ee3\u8868\u7684\u306a\u4f8b\u306b,\u30c6\u30ad\u30b9\u30c8\u30a8 \u30c7\u30a3\u30bf\u306e\u63a2\u7d22\u30b3\u30de\u30f3\u30c9\u304c\u3042\u308b.\u5b9f\u969b\u306e\u30c6\u30ad\u30b9\u30c8\u30a8\u30c7\u30a3\u30bf \u306e\u63a2\u7d22\u30b3\u30de\u30f3\u30c9\u3067\u306f,\u63a2\u7d22\u3059\u308b\u6587\u5b57\u5217\u304c\u304d\u308f\u3081\u3066\u77ed\u3044\u5834 \u5408\u306b\u306f\u529b\u307e\u304b\u305b\u306e\u65b9\u6cd5\u304c\u7528\u3044\u3089\u308c\u308b\u3053\u3068\u304c\u3042\u308b\u4ee5\u5916\u306f, \u7b2c 3\u306e\u65b9\u6cd5\u3053\u308c\u3082\u8457\u8005\u306e\u982d\u6587\u5b57\u304b\u3089 BM\u30a2\u30eb\u30b4\u30ea\u30ba \u30e0\u3068\u7701\u7565\u3055\u308c\u308b\u304c\u5229\u7528\u3055\u308c\u308b\u5834\u5408\u304c\u591a\u3044\u3088\u3046\u3067\u306f\u3042 \u308b\u304c,KMP\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306f BM\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u3068\u4e26\u3073,\u30a2 \u30a4\u30c7\u30a3\u30a2 1\u3064\u3067\u8a08\u7b97\u91cf\u304c\u6539\u5584\u3055\u308c\u308b\u3068\u3044\u3046\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0 \u306e\u9762\u767d\u3055\u3092,\u6211\u3005\u306b\u5341\u5206\u6559\u3048\u3066\u304f\u308c\u308b. \u672c\u8ad6\u6587\u306f,KMP\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306e\u52d5\u4f5c\u3092\u4f8b\u3092\u7528\u3044\u3066\u89e3 \u8aac\u3059\u308b\u3053\u3068\u304b\u3089\u59cb\u307e\u308a,\u5177\u4f53\u7684\u306a\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306e\u63d0\u793a, \u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306e\u6539\u826f\u3068\u62e1\u5f35,\u8a08\u7b97\u91cf\u306b\u95a2\u3059\u308b\u7406\u8ad6\u7684\u306a\u8003 \u5bdf\u3068\u7d9a\u304f.\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u3092\u63d0\u6848\u3059\u308b\u8ad6\u6587\u3068\u3057\u3066\u306f\u3053\u3053\u307e \u3067\u3067\u5341\u5206\u306a\u306e\u3060\u304c,\u672c\u8ad6\u6587\u306f\u3055\u3089\u306b,\u9577\u3055\u304c\u5076\u6570\u306e\u56de\u6587 \u6587\u5b57\u5217\u306e\u8a71,KMP\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u767a\u660e\u306e\u6b74\u53f2,\u3064\u3044\u306b\u306f \u5f8c\u66f8\u304d\u3068\u79f0\u3057\u3066 BM\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306b\u95a2\u3059\u308b\u8003\u5bdf\u306b\u307e\u3067\u8a71 \u304c\u53ca\u3076\u3068\u3044\u3046,\u7279\u7570\u3068\u3082\u601d\u3048\u308b\u69cb\u6210\u3068\u306a\u3063\u3066\u3044\u308b. 
\u3053\u3053\u3067\u8ff0\u3079\u3089\u308c\u3066\u3044\u308b\u6b74\u53f2\u306b\u3088\u308b\u3068,KMP\u30a2\u30eb\u30b4\u30ea\u30ba \u30e0\u306f,\u5143\u3005\u306fMorris,\u304a\u3088\u3073,Knuth,Pratt\u30c1\u30fc\u30e0\u306b\u3088 \u308a\u72ec\u7acb\u306b\u767a\u660e\u3055\u308c\u305f.Morris\u306f,1969\u5e74\u306b CDC 6400\u3068 \u3044\u3046\u8a08\u7b97\u6a5f\u4e0a\u306e\u30c6\u30ad\u30b9\u30c8\u30a8\u30c7\u30a3\u30bf\u3092\u4f5c\u6210\u3057\u3066\u3044\u305f\u969b,\u6709 \u9650\u30aa\u30fc\u30c8\u30de\u30c8\u30f3\u7406\u8ad6\u3092\u5229\u7528\u3057\u3066,\u3053\u306e\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306e\u539f \u5f62\u3092\u8003\u3048\u305f.\u307e\u305f\u3053\u308c\u3068\u306f\u5225\u306b,Knuth\u306f 1970\u5e74\u306b, \u9577\u3055\u304c\u5076\u6570\u306e\u56de\u6587\u6587\u5b57\u5217\u306e 2\u65b9\u5411\u30d7\u30c3\u30b7\u30e5\u30c0\u30a6\u30f3\u30aa\u30fc\u30c8 \u30de\u30c8\u30f3\u306b\u3088\u308b\u8a8d\u8b58\u306e\u7406\u8ad6\u306e\u8abf\u67fb\u3092\u3057\u3066\u3044\u308b\u3046\u3061\u306b,\u305d\u306e \u80cc\u5f8c\u306e\u6587\u5b57\u5217\u63a2\u7d22\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306e\u5b58\u5728\u306b\u6c17\u3065\u304d,\u672c\u30a2\u30eb \u30b4\u30ea\u30ba\u30e0\u306e\u767a\u660e\u306b\u81f3\u3063\u305f\u3068\u3044\u3046.\u3053\u308c\u306f,Knuth\u306b\u3068\u3063 \u3066,\u30aa\u30fc\u30c8\u30de\u30c8\u30f3\u7406\u8ad6\u304c\u73fe\u5b9f\u306e\u30d7\u30ed\u30b0\u30e9\u30df\u30f3\u30b0\u306e\u554f\u984c\u306e \u89e3\u6c7a\u306b\u5f79\u306b\u7acb\u3063\u305f\u521d\u3081\u3066\u306e\u7d4c\u9a13\u3067\u3042\u3063\u305f\u3089\u3057\u3044.Robert Sedgewick\u306f,\u305d\u306e\u6709\u540d\u306a\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306e\u6559\u79d1\u66f8 \u306e\u6587 \u5b57\u5217\u63a2\u7d22\u306e\u7ae0\u306e\u4e2d\u3067,\u3053\u306e\u3053\u3068\u3092\u300c\u7406\u8ad6\u7684\u306a\u7d50\u679c\u304c\u601d\u3044 \u304c\u3051\u305a\u76f4\u63a5\u5b9f\u7528\u3068\u7d50\u3073\u3064\u3044\u305f\u7a00\u306a\u5b9f\u4f8b\u3067\u3042\u308b\u300d\u3068\u6307\u6458\u3057 \u3066\u3044\u308b\u304c,\u5b9f\u969b\u3053\u306e\u7d4c\u9a13\u306f,Knuth\u306b\u3068\u3063\u3066\u304d\u308f\u3081\u3066\u5370 \u8c61\u6df1\u3044\u3082\u306e\u3067\u3042\u3063\u305f\u306b\u9055\u3044\u306a\u3044.\u305d\u3046\u8003\u3048\u308b\u3068,\u4e0a\u3067\u8ff0 \u3079\u305f\u3088\u3046\u306a\u672c\u8ad6\u6587\u306e\u7279\u7570\u3068\u3082\u601d\u3048\u308b\u69cb\u6210\u3082,\u8457\u8005\u306b\u3088\u308b \u601d\u3044\u306e\u767a\u9732\u3068\u601d\u3048\u3066\u8208\u5473\u6df1\u3044.\u3061\u306a\u307f\u306b,\u672c\u8ad6\u6587\u304c\u63b2\u8f09 \u3055\u308c\u3066\u3044\u308b SIAM Journal on Computing\u306e\u540c\u3058\u53f7\u306b\u306f, Sedgewick\u306e\u8ad6\u6587\u3082\u63b2\u8f09\u3055\u308c\u3066\u3044\u308b. Morris\u304c\u3053\u306e\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u3092\u601d\u3044\u3064\u3044\u305f\u80cc\u666f\u306b\u306f, CDC 6400\u3067\u306e\u30d0\u30c3\u30d5\u30a1\u30ea\u30f3\u30b0\u51e6\u7406\u304c\u8907\u96d1\u3067\u3042\u308b\u305f\u3081, \u30c6\u30ad\u30b9\u30c8\u306e\u300c\u5f8c\u623b\u308a\u300d\u3092\u5fc5\u8981\u3068\u3057\u306a\u3044\u65b9\u6cd5\u304c\u6c42\u3081\u3089\u308c\u3066 \u3044\u305f\u3053\u3068\u304c\u3042\u3063\u305f.\u6700\u8fd1\u306e\u3088\u3046\u306b\u4e3b\u8a18\u61b6\u3092\u3075\u3093\u3060\u3093\u306b\u4f7f \u3046\u3053\u3068\u304c\u3067\u304d\u306a\u304b\u3063\u305f\u3068\u3044\u3046\u4e0d\u81ea\u7531\u3055\u304c,\u65b0\u3057\u3044\u30a2\u30eb\u30b4 \u30ea\u30ba\u30e0\u306e\u767a\u660e\u306b\u7d50\u3073\u3064\u3044\u305f\u3068\u3044\u3048\u3088\u3046. 
\u672c\u8ad6\u6587\u306e KMP\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306e\u6539\u826f\u306e\u7ae0\u306b\u306f,\u756a\u5175\u3092 \u7528\u3044\u305f\u9ad8\u901f\u5316\u306e\u8a71\u3082\u8ff0\u3079\u3089\u308c\u3066\u3044\u308b.\u529b\u305a\u304f\u7684\u306b\u8a18\u8ff0\u3055 \u308c\u305f\u30d7\u30ed\u30b0\u30e9\u30e0\u304c\u591a\u3044\u73fe\u5728\u3067\u306f,\u756a\u5175\u306f\u4ee5\u524d\u307b\u3069\u306f\u4f7f\u308f \u308c\u306a\u304f\u306a\u3063\u3066\u3057\u307e\u3063\u305f\u304c,\u672c\u8ad6\u6587\u304c\u767a\u8868\u3055\u308c\u305f 1970\u5e74\u4ee3 \u5f8c\u534a\u3067\u306f,\u756a\u5175\u306f\u7f8e\u3057\u304f\u9ad8\u901f\u306a\u30d7\u30ed\u30b0\u30e9\u30e0\u3092\u8a18\u8ff0\u3059\u308b\u305f \u3081\u306e\u91cd\u5b9d\u306a\u6280\u8853\u3067\u3042\u3063\u305f.\u3053\u306e\u3088\u3046\u306a\u3068\u3053\u308d\u306b\u3082,\u6642\u4ee3 \u306e\u6d41\u308c\u3092\u611f\u3058\u3055\u305b\u308b\u3082\u306e\u304c\u3042\u308b. KMP\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306f,\u7d20\u6734\u306a\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u304b\u3089\u8ce2\u3044\u30a2 \u30eb\u30b4\u30ea\u30ba\u30e0\u3092\u30d7\u30ed\u30b0\u30e9\u30e0\u5909\u63db\u306b\u3088\u3063\u3066\u5c0e\u51fa\u3059\u308b\u4f8b\u984c\u3068\u3057 \u3066\u3082\u3088\u304f\u7528\u3044\u3089\u308c\u308b.\u3053\u306e\u3088\u3046\u306a\u984c\u6750\u3068\u3057\u3066\u3082 KMP\u30a2 \u30eb\u30b4\u30ea\u30ba\u30e0\u304c\u5f79\u306b\u7acb\u3064\u3053\u3068\u306b\u306a\u308d\u3046\u3068\u306f,\u8457\u8005\u305f\u3061\u306f\u60f3 \u5b9a\u3057\u3066\u3044\u306a\u304b\u3063\u305f\u3067\u3042\u308d\u3046."}
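The KMP idea the review describes, precomputing a failure function so the text pointer never backs up, fits in a short sketch. This is the textbook formulation, not the paper's exact presentation; names are illustrative.

```python
def kmp_search(text, pattern):
    """Return start indices of all occurrences; O(n + m), never backs up in text."""
    m = len(pattern)
    if m == 0:
        return []
    fail = [0] * m  # fail[i]: length of longest proper border of pattern[:i+1]
    k = 0
    for i in range(1, m):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    hits, k = [], 0
    for i, c in enumerate(text):
        while k and c != pattern[k]:
            k = fail[k - 1]       # slide the pattern; text index i never decreases
        if c == pattern[k]:
            k += 1
        if k == m:                # full match ends at position i
            hits.append(i - m + 1)
            k = fail[k - 1]
    return hits
```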
{"_id": "7970532dc583554818922954ba518577e22a3cae", "title": "Representation of Events in Nerve Nets and Finite Automata", "text": ""}
{"_id": "5b5ddfc2a0bfc5a048461eb11c191831cd226014", "title": "Finite Automata and Their Decision Problems", "text": "Finite automata are considered in this paper a s instruments for classifying finite tapes. Each onetape automaton defines a set of tapes, a two-tape automaton defines a set of pairs of tapes, et cetera. The structure of the defined sets is studied. Various generalizations of the notion of an automaton are introduced and their relation to the classical automata is determined. Some decision problems concerning automata are shown to be solvable by effective algorithms; others turn out to be unsolvable by algorithms."}
{"_id": "2e9b8029844a2df7140b8ef03204870d722246f0", "title": "Improving Video Game Development: Facilitating Heterogeneous Team Collaboration through Flexible Software Processes", "text": "Based on our observations in the Austrian video game software development practices we identified a lack of systematic processes and method support and inefficient collaboration between various involved disciplines, i.e. engineers and artists. Video game development includes heterogeneous disciplines, e.g. creative arts, game and content design, and software. Nevertheless, improvement of team collaboration and process support is an ongoing challenge to enable a comprehensive view on game development projects. Lessons learned from software engineering practices can help game developers to increase game development processes within a heterogeneous environment. Based on a state of the practice survey in the Austrian games industry, this paper presents (a) first results with focus on process and method support and (b) suggests a candidate flexible process approach based on Scrum to improve video game development and team collaboration. The results showed (a) a trend to highly flexible software processes involving various disciplines and (b) identified the suggested flexible process approach as feasible and useful for project application."}
{"_id": "59029360d7512a0adeb978b316b003770df430cb", "title": "Compression of multiple depth maps for IBR", "text": "Image-based rendering techniques include those with geometry and those without. Geometric information in the form of a depth map aligned with the image holds a lot of promise for IBR due to the several methods available to capture it. It can improve the quality of generated views using a limited number of views. Compression of light fields or multiple images has attracted a lot of attention in the past. Compression of multiple depth maps of the same scene has not been explored much in the literature. We propose a method for compressing multiple depth maps in this paper using geometric proxy. Different quality of rendering and compression ratio can be achieved by varying different parameters. Experiments show the effectiveness of the compression technique on several model data."}
{"_id": "859382c03cd0990460a7391aeb5d86519a4c2947", "title": "Smart contract-based campus demonstration of decentralized transactive energy auctions", "text": "Transactive energy paradigms will enable the exchange of energy from a distributed set of prosumers. While prosumers have access to distributed energy resources, these resources are intermittently available. There is a need for distributed markets to enable the exchange of energy in transactive environments, however, the large number of potential prosumers introduces challenges in the establishment of trust between prosumers. Markets for transactive environments create other challenges, such as establishing clearing prices for energy and exchanging money between prosumers. Blockchains provide a unique technology to address this distributed trust problem through the use of a distributed ledger, cryptocurrencies, and the execution of smart contracts. This paper introduces a smart contract that implements a transactive energy auction that operates without the need for a trusted entitys oversight. The auction mechanism implements a Vickrey second price auction, which guarantees bidders will submit honest bids. The contract is implemented on transactive agents on the WSU campus interacting with a 72kW PV array and the Ethereum blockchain. The contract is then used to execute auctions based on the energy from the the PV array and simulated building loads to demonstrate the auctions operations."}
{"_id": "f50ad1653e2e3a4125a0c6589c75978aaceb9361", "title": "RAMSYS: Resource-Aware Asynchronous Data Transfer with Multicore SYStems", "text": "High-speed data transfer is vital to data-intensive computing that often requires moving large data volumes efficiently within a local data center and among geographically dispersed facilities. Effective utilization of the abundant resources in modern multicore environments for data transfer remains a persistent challenge, particularly, for Non-Uniform Memory Access (NUMA) systems wherein the locality of data accessing is an important factor. This requires rethinking how to exploit parallel access to data and to optimize the storage and network I/Os. We address this challenge and present a novel design of asynchronous processing and resource-aware task scheduling in the context of high-throughput data replication. Our software allocates multiple sets of threads to different stages of the processing pipeline, including storage I/O and network communication, based on their capacities. Threads belonging to each stage follow an asynchronous model, and attain high performance via multiple locality-aware and peer-aware mechanisms, such as task grouping, buffer sharing, affinity control and communication protocols. Our design also integrates high performance features to enhance the scalability of data transfer in several scenarios, e.g., file-level sorting, block-level asynchrony, and thread-level pipelining. Our experiments confirm the advantages of our software under different types of workloads and dynamic environments with contention for shared resources, including a 28-160 percent increase in bandwidth for transferring large files, 1.7-66 times speed-up for small files, and up to 108 percent larger throughput for mixed workloads compared with three state of the art alternatives, GridFTP , BBCP and Aspera."}
{"_id": "e6b44d4fea2de01538e77c4b296e5f2e7b03b791", "title": "A Review of the Status of Brain Structure Research in Transsexualism", "text": "The present review focuses on the brain structure of male-to-female (MtF) and female-to-male (FtM) homosexual transsexuals before and after cross-sex hormone treatment as shown by in vivo neuroimaging techniques. Cortical thickness and diffusion tensor imaging studies suggest that the brain of MtFs presents complex mixtures of masculine, feminine, and demasculinized regions, while FtMs show feminine, masculine, and defeminized regions. Consequently, the specific brain phenotypes proposed for MtFs and FtMs differ from those of both heterosexual males and females. These phenotypes have theoretical implications for brain intersexuality, asymmetry, and body perception in transsexuals as well as for Blanchard's hypothesis on sexual orientation in homosexual MtFs. Falling within the aegis of the neurohormonal theory of sex differences, we hypothesize that cortical differences between homosexual MtFs and FtMs and male and female controls are due to differently timed cortical thinning in different regions for each group. Cross-sex hormone studies have reported marked effects of the treatment on MtF and FtM brains. Their results are used to discuss the early postmortem histological studies of the MtF brain."}
{"_id": "785ef09a31666cce12889a2ad43b5054d5bc7c59", "title": "Pregnancy-related low back pain.", "text": "Pregnancy related low back pain is a common complaint among pregnant women. It can potentially have a negative impact on their quality of life. The aim of this article is to present a current review of the literature concerning this issue.By using PubMed database and low back pain, pelvic girdle pain, pregnancy as keywords, abstracts and original articles in English investigating the diagnosis treatment of back pain during pregnancy were searched and analyzedLow back pain could present as either a pelvic girdle pain between the posterior iliac crest and the gluteal fold or as a lumbar pain over and around the lumbar spine. The source of the pain should be diagnosed and differentiated early.The appropriate treatment aims to reduce the discomfort and the impact on the pregnant womans quality of life. This article reveals the most common risk factors, as well as treatment methods, which may help to alleviate the pain. Some suggestions for additional research are also discussed."}
{"_id": "e36d2911b5cd29419017e34c770a4f7a66ad32b1", "title": "Refining the relationship between personality and subjective well-being.", "text": "Understanding subjective well-being (SWB) has historically been a core human endeavor and presently spans fields from management to mental health. Previous meta-analyses have indicated that personality traits are one of the best predictors. Still, these past results indicate only a moderate relationship, weaker than suggested by several lines of reasoning. This may be because of commensurability, where researchers have grouped together substantively disparate measures in their analyses. In this article, the authors review and address this problem directly, focusing on individual measures of personality (e.g., the Neuroticism-Extroversion-Openness Personality Inventory; P. T. Costa & R. R. McCrae, 1992) and categories of SWB (e.g., life satisfaction). In addition, the authors take a multivariate approach, assessing how much variance personality traits account for individually as well as together. Results indicate that different personality and SWB scales can be substantively different and that the relationship between the two is typically much larger (e.g., 4 times) than previous meta-analyses have indicated. Total SWB variance accounted for by personality can reach as high as 39% or 63% disattenuated. These results also speak to meta-analyses in general and the need to account for scale differences once a sufficient research base has been generated."}
{"_id": "2c4a0f5d589be041b48c6485161be9746364cff5", "title": "A Randomized Algorithm for CCA", "text": "We present RandomizedCCA, a randomized algorithm for computing canonical analysis, suitable for large datasets stored either out of core or on a distributed file system. Accurate results can be obtained in as few as two data passes, which is relevant for distributed processing frameworks in which iteration is expensive (e.g., Hadoop). The strategy also provides an excellent initializer for standard iterative solutions."}
{"_id": "70732a229f9d114f76c895575fe11a9982965d7b", "title": "Scene classification based on single-layer SAE and SVM", "text": "Scene classification aims to group images into semantic categories. It is a challenging problem in computer vision due to the difficulties of intra-class variability and inter-class similarity. In this paper, a scene classification approach based on single-layer sparse autoencoder (SAE) and support vector machine (SVM) is proposed. This approach consists of two steps: SAE-based feature learning step and SVM-based classification step. In the first step, a single-layer SAE network is constructed and trained by the patches which are sampled randomly from the source images. The feature representation of images is learned by the trained single-layer SAE network. Meanwhile, a pooling operation is used to reduce the dimension of the learned feature vectors. In the second step, in order to improve the classification accuracy, the parameters of SVM are optimized by a particle swarm optimization (PSO) based algorithm. The one-versus-one strategy is employed for the multiple scene classification problem. To show the efficiency of the proposed approach, several public data sets are employed. The results reveal that the proposed approach achieves better classification accuracy than the existing state-of-the-art methods. 2014 Elsevier Ltd. All rights reserved."}
{"_id": "bfc72f1f5a26e6eea19f060d3b9778f17f2828d4", "title": "An Ultra-Wideband Horizontally Polarized Omnidirectional Circular Connected Vivaldi Antenna Array", "text": "An ultra-wideband horizontally polarized omnidirectional antenna is proposed. The connected Vivaldi antenna array mechanism is exploited to achieve the ultra-wide bandwidths of both the impedance and the radiation patterns simultaneously. Sixteen Vivaldi antenna elements are connected with each other and distributed along a circular substrate to form the proposed antenna. The designed antenna exhibits a relative impedance bandwidth of 159.97% (1.28\u201311.51 GHz) for $\\vert S_{11}\\vert <-10$ dB. Meanwhile, uniform omnidirectional radiation patterns with a gain variation of less than 1.5 dB have been obtained within the frequency band of 1.25\u20137.6 GHz (142.73%)."}
{"_id": "4869e04d79917d9b0b561a6d32103214d5ab2cfd", "title": "A Social-Network-Based Cryptocurrency Wallet-Management Scheme", "text": "Effective cryptocurrency key management has become an urgent requirement for modern cryptocurrency. Although a large body of cryptocurrency wallet-management schemes has been proposed, they are mostly constructed for specific application scenarios and often suffer from weak security. In this paper, we propose a more effective, usable, and secure cryptocurrency wallet-management system based on semi-trusted social networks, therein allowing users to collaborate with involved parties to achieve some powerful functions and recovery under certain circumstances. Furthermore, we employ an identity-based hierarchical key-insulated encryption scheme to achieve time-sharing authorization and present a semi-trusted portable social-network-based wallet-management scheme that provides the features of security-enhanced storage, portable login on different devices, no-password authentication, flexible key delegation, and so on. The performance analysis shows that our proposed schemes require minimal additional overhead and have low time delays, making them sufficiently efficient for real-world deployment."}
{"_id": "0a51ca0feb82d04cfea79f041b7a843dd952886c", "title": "A Framework for Analysis of Road Accidents", "text": "The road accident data analysis use data mining and machine learning techniques, focusing on identifying factors that affect the severity of an accident. There are a variety of reasons that contribute to accidents. Some of them are internal to the driver but many are external. For example, adverse weather conditions like fog, rainfall or snowfall cause partial visibility and it may become difficult as well as risky to drive on such roads. It is expected that the findings from this paper would help civic authorities to take proactive actions on likely crash prone weather and traffic conditions."}
{"_id": "b55293a413ec8b432429d2ce11157b308cd79d65", "title": "Low delay MPEG DASH streaming over the WebRTC data channel", "text": "Dynamic Adaptive Streaming over HTTP (MPEG-DASH) is becoming the de-facto data format and streaming solution for over-the-top (OTT) video on-demand streaming. The underlying HTTP transport has its limitations and known penalties in slow start, under-utilization, and other inefficiency. An alternate approach to DASH is the new transport schemes like HTTP/2.0 and WebSocket. In this work, we explore WebRTC as a streaming transport to carry DASH data and demonstrate that it is competitive in serving low delay streaming applications with fast channel switching, with proper video data coding and streaming signaling/control solutions."}
{"_id": "82eb267b8e86be0b444e841b4b4ed4814b6f1942", "title": "Single Image 3D Interpreter Network", "text": "Understanding 3D object structure from a single image is an important but difficult task in computer vision, mostly due to the lack of 3D object annotations in real images. Previous work tackles this problem by either solving an optimization task given 2D keypoint positions, or training on synthetic data with ground truth 3D information. In this work, we propose 3D INterpreter Network (3D-INN), an endto-end framework which sequentially estimates 2D keypoint heatmaps and 3D object structure, trained on both real 2D-annotated images and synthetic 3D data. This is made possible mainly by two technical innovations. First, we propose a Projection Layer, which projects estimated 3D structure to 2D space, so that 3D-INN can be trained to predict 3D structural parameters supervised by 2D annotations on real images. Second, heatmaps of keypoints serve as an intermediate representation connecting real and synthetic data, enabling 3D-INN to benefit from the variation and abundance of synthetic 3D objects, without suffering from the difference between the statistics of real and synthesized images due to imperfect rendering. The network achieves state-of-the-art performance on both 2D keypoint estimation and 3D structure recovery. We also show that the recovered 3D information can be used in other vision applications, such as 3D rendering and image retrieval."}
{"_id": "ccb01a51ed204da01a12728bf168376bd2415b38", "title": "Real time face recognition engine using compact features for electronics key", "text": "In this research, a design and implementation of real-time face recognition engine using compact features vector extracted by linear binary pattern (LBP) and zoned discrete cosine transforms (DCT) analysis is proposed for electronics key. The function of LBP is to normalize the lighting variations of the input face image and the function of zoned DCT is to define the local descriptors of the face image. In order to get more compact features vector, the predictive LDA is employed for dimensional reduction. The aims of this research is to develop fast and strong face recognition that can be implemented for electronic key which will be implemented for substituting current security system (PIN and password). In addition, the recognition engine is also designed for real time face recognition which can work on hardware having limited resources such as Intel atom computer, raspberry pi, and android smart phone. The experimental results show that the proposed engine provide high enough recognition rate, and small false rejection rate (FRR) and false acceptance rate (FAR). In addition, the engine needs short processing time."}
{"_id": "402472ccb16a6caf409272b88a3df49a12a0fa98", "title": "Web page revisitation revisited: implications of a long-term click-stream study of browser usage", "text": "This paper presents results of an extensive long-term click-stream study of Web browser usage. Focusing on character and challenges of page revisitation, previous findings from seven to thirteen years ago are updated. The term page re-visit had to be differentiated, since the recurrence rate--the key measure for the share of page revisits--turns out to strongly depend on interpretation. We identify different types of revisitation that allow assessing the quality of current user support and developing concepts for new tools.\n Individual navigation strategies differ dramatically and are strongly influenced by personal habits and type of site visited. Based on user action logs and interviews, we distinguished short-term revisits (backtrack or undo) from medium-term (re-utilize or observe) and long-term revisits (rediscover). We analyze current problems and provide suggestions for improving support for different revisitation types."}
{"_id": "0cdba4fbcd71140a3fa58f8b03b40474a53dc84d", "title": "Streaming End-to-end Speech Recognition For Mobile Devices", "text": "End-to-end (E2E) models, which directly predict output character sequences given input speech, are good candidates for on-device speech recognition. E2E models, however, present numerous challenges: In order to be truly useful, such models must decode speech utterances in a streaming fashion, in real time; they must be robust to the long tail of use cases; they must be able to leverage user-specific context (e.g., contact lists); and above all, they must be extremely accurate. In this work, we describe our efforts at building an E2E speech recognizer using a recurrent neural network transducer. In experimental evaluations, we find that the proposed approach can outperform a conventional CTC-based model in terms of both latency and accuracy in a number of evaluation categories."}
{"_id": "17220d394693ce2df814b73686b987fe5f38aad0", "title": "Human facial illustrations: Creation and psychophysical evaluation", "text": "We present a method for creating black-and-white illustrations from photographs of human faces. In addition an interactive technique is demonstrated for deforming these black-and-white facial illustrations to create caricatures which highlight and exaggerate representative facial features. We evaluate the effectiveness of the resulting images through psychophysical studies to assess accuracy and speed in both recognition and learning tasks. These studies show that the facial illustrations and caricatures generated using our techniques are as effective as photographs in recognition tasks. For the learning task we find that illustrations are learned two times faster than photographs and caricatures are learned one and a half times faster than photographs. Because our techniques produce images that are effective at communicating complex information, they are useful in a number of potential applications, ranging from entertainment and education to low bandwidth telecommunications and psychology research."}
{"_id": "5a5a1b2bf9f5c7bd24d93a2be998358d3f792328", "title": "Information-centric sensor networks for cognitive IoT: an overview", "text": "Information-centric sensor networks (ICSNs) are a paradigm of wireless sensor networks that focus on delivering information from the network based on user requirements, rather than serving as a point-to-point data communication network. Introducing learning in such networks can help to dynamically identify good data delivery paths by correlating past actions and results, make intelligent adaptations to improve the network lifetime, and also improve the quality of information delivered by the network to the user. However, there are several factors and limitations that must be considered while choosing a learning strategy. In this paper, we identify some of these factors and explore various learning techniques that have been applied to sensor networks and other applications with similar requirements in the past. We provide our recommendation on the learning strategy based on how well it complements the needs of ICSNs, while keeping in mind the cost, computation, and operational overhead limitations."}
{"_id": "c631de7900acaa2e0840fcb404157f5ebd2d1aed", "title": "3-D Path Planning and Target Trajectory Prediction for the Oxford Aerial Tracking System", "text": "For the Oxford Aerial Tracking System (OATS) we are developing a robot helicopter that can track moving ground objects. Here we describe algorithms for the device to perform path planning and trajectory prediction. The path planner uses superquadratic potential fields and incorporates a height change mechanism that is triggered where necessary and in order to avoid local minima traps. The goal of the trajectory prediction system is to reliably predict the target trajectory during occlusion caused by ground obstacles. For this we use two artificial neural networks in parallel whose retraining is automatically triggered if major changes in the target behaviour pattern are detected. Simulation results for both systems are presented."}
{"_id": "a06761b3181a003c2297d8e86c7afc20e17fd2c6", "title": "Convolutional Neural Network-Based Human Detection in Nighttime Images Using Visible Light Camera Sensors", "text": "Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. The existing research using visible light cameras has mainly focused on methods of human detection for daytime hours when there is outside light, but human detection during nighttime hours when there is no outside light is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras or thermal cameras have been used. However, in the case of NIR illuminators, there are limitations in terms of the illumination angle and distance. There are also difficulties because the illuminator power must be adaptively adjusted depending on whether the object is close or far away. In the case of thermal cameras, their cost is still high, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but this has focused on objects at a short distance in an indoor environment or the use of video-based methods to capture multiple images and process them, which causes problems related to the increase in the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night on a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk night-time human detection database (DNHD-DB1) and two open databases (Korea advanced institute of science and technology (KAIST) and computer vision center (CVC) databases), as well as high-accuracy human detection in a variety of environments, show that the method has excellent performance compared to existing methods."}
{"_id": "afe3bc4bdddf6dc2091d836164b336f3cf8d1b58", "title": "Understanding mobile SNS continuance usage in China from the perspectives of social influence and privacy concern", "text": "Retaining users and facilitating continuance usage are crucial to the success of mobile social network services (SNS). This research examines the continuance usage of mobile SNS in China by integrating both the perspectives of social influence and privacy concern. Social influence includes three processes: compliance, identification and internalization, which are respectively represented by subjective norm, social identity, and group norm. The results indicate that these three factors and privacy concern have significant effects on continuance usage. The results suggest that service providers should address the issues of social influence and privacy concern to encourage mobile SNS continuance usage. 2014 Elsevier Ltd. All rights reserved."}
{"_id": "aaf37e0afa43f5b6168074dae2bc0e695a9d1d1b", "title": "Chaffing and Winnowing: Confidentiality without Encryption", "text": "\u2022 Encryption: transforming the message to a ciphertext such that an adversary who overhears the ciphertext can not determine the message sent. The legitimate receiver possesses a secret decryption key that allows him to reverse the encryption transformation and retrieve the message. The sender may have used the same key to encrypt the message (with symmetric encryption schemes) or used a different, but related key (with public-key schemes). DES and RSA are familiar examples of encryption schemes."}
{"_id": "530c3a8065e144b292ddff7ff10ad82aa33c8845", "title": "Simulated Annealing Algorithm for 2D Image Compression", "text": "In this paper a new sign coding approximation method for the wavelet coefficients in a 2D image codec based on a simulated annealing metaheuristic is presented. The efficiency of the proposed algorithm versus a genetic algorithm using benchmarks of Kodak is compared and showing that the proposed sign prediction algorithm is efficient and provides a significant reduction of wavelet coefficients sign information in the final bit-stream. The results show that, by including sign coding capabilities to a nonembedded encoder, the sign compression gain is up to 17.35%, being the rate-distortion (R/D) performance improvement up to 0.25 dB."}
{"_id": "72067d7f8cc87cfd782ac79bf7a0811e94b77ded", "title": "The Big-Data-Driven Intelligent Wireless Network: Architecture, Use Cases, Solutions, and Future Trends", "text": "The concept of using big data (BD) for wireless communication network optimization is no longer new. However, previous work has primarily focused on long-term policies in the network, such as network planning and management. Apart from this, the source of the data collected for analysis/model training is mostly limited to the core network (CN). In this article, we introduce a novel data-driven intelligent radio access network (RAN) architecture that is hierarchical and distributed and operates in real time. We also identify the required data and respective workflows that facilitate intelligent network optimizations. It is our strong belief that the wireless BD (WBD) and machine-learning/artificial-intelligence (AI)-based methodology applies to all layers of the communication system. To demonstrate the superior performance gains of our proposed methodology, two use cases are analyzed with system-level simulations; one is the neural-network-aided optimization for Transmission Control Protocol (TCP), and the other is prediction-based proactive mobility management."}
{"_id": "d5372cd68b463728ef846485a28f1c57c0966bbb", "title": "A deep reinforcement learning based framework for power-efficient resource allocation in cloud RANs", "text": "Cloud Radio Access Networks (RANs) have become a key enabling technique for the next generation (5G) wireless communications, which can meet requirements of massively growing wireless data traffic. However, resource allocation in cloud RANs still needs to be further improved in order to reach the objective of minimizing power consumption and meeting demands of wireless users over a long operational period. Inspired by the success of Deep Reinforcement Learning (DRL) on solving complicated control problems, we present a novel DRL-based framework for power-efficient resource allocation in cloud RANs. Specifically, we define the state space, action space and reward function for the DRL agent, apply a Deep Neural Network (DNN) to approximate the action-value function, and formally formulate the resource allocation problem (in each decision epoch) as a convex optimization problem. We evaluate the performance of the proposed framework by comparing it with two widely-used baselines via simulation. The simulation results show it can achieve significant power savings while meeting user demands, and it can well handle highly dynamic cases."}
{"_id": "de24c9686cd00fb208bdfd65e3e6257c6d7fee10", "title": "Ternary CAM Power and Delay Model: Extensions and Uses", "text": "Applications in computer networks often require high throughput access to large data structures for lookup and classification. While advanced algorithms exist to speed these search primitives on network processors and even custom application-specific integrated circuits (ASICs), achieving tight bounds on worst case performance with standard memories often requires a very careful analysis of all possible access patterns. An alternative, and often times more simple solution, is possible if a ternary CAM (TCAM) is used to perform a fully parallel search across the entire data set. Unfortunately, this parallelism means that large portions of the chip are switching during each cycle, causing large amounts of power to be consumed. While researchers at all levels of design (from algorithms to circuits) have begun to explore new ways of managing the power consumption, quantifying design alternatives is difficult due to a lack of available models. In this paper, we examine the structure of a modern TCAM and present a simple, yet accurate, power and delay model. We present techniques to estimate the dynamic power consumption and leakage power of a TCAM structure and validate the model using a combination of industrial TCAM datasheets and prior published works. Such a model is a critical first step in bridging the intellectual divide between circuit-level and algorithm-level optimizations. To demonstrate the utility of our model, we present an extensive analysis of the model by varying various architectural parameters and describe how our model can be easily extended to handle several circuit optimizations in the TCAM structure. In addition, we present a comparative study of SRAM and TCAM energy consumption to directly quantify the many design options which will be very useful for network designers to explore various power management schemes."}
{"_id": "4d526984d773c5afe38e30771f10e6f8f38ec92f", "title": "Socially compliant mobile robot navigation via inverse reinforcement learning", "text": "Mobile robots are increasingly populating our human environments. To interact with humans in a socially compliant way, these robots need to understand and comply with mutually accepted rules. In this paper, we present a novel approach to model the cooperative navigation behavior of humans. We model their behavior in terms of a mixture distribution that captures both the discrete navigation decisions, such as going left or going right, as well as the natural variance of human trajectories. Our approach learns the model parameters of this distribution that match, in expectation, the observed behavior in terms of user-defined features. To compute the feature expectations over the resulting high-dimensional continuous distributions, we use Hamiltonian Markov chain Monte Carlo sampling. Furthermore, we rely on a Voronoi graph of the environment to efficiently explore the space of trajectories from the robot\u2019s current position to its target position. Using the proposed model, our method is able to imitate the behavior of pedestrians or, alternatively, to replicate a specific behavior that was taught by tele-operation in the target environment of the robot. We implemented our approach on a real mobile robot and demonstrate that it is able to successfully navigate in an office environment in the presence of humans. An extensive set of experiments suggests that our technique outperforms state-of-the-art methods to model the behavior of pedestrians, which makes it also applicable to fields such as behavioral science or computer graphics."}
{"_id": "04f818827a2ad16bf6d8585f45fba703c509c57b", "title": "A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights", "text": "We derive a second-order ordinary differential equation (ODE), which is the limit of Nesterov\u2019s accelerated gradient method. This ODE exhibits approximate equivalence to Nesterov\u2019s scheme and thus can serve as a tool for analysis. We show that the continuous time ODE allows for a better understanding of Nesterov\u2019s scheme. As a byproduct, we obtain a family of schemes with similar convergence rates. The ODE interpretation also suggests restarting Nesterov\u2019s scheme leading to an algorithm, which can be rigorously proven to converge at a linear rate whenever the objective is strongly convex."}
{"_id": "07e0100c44d6f635f2aa44edd38adfeae3c159c4", "title": "Global Aesthetics Consensus: Avoidance and Management of Complications from Hyaluronic Acid Fillers\u2014Evidence- and Opinion-Based Review and Consensus Recommendations", "text": "BACKGROUND\nAlthough the safety profile of hyaluronic acid fillers is favorable, adverse reactions can occur. Clinicians and patients can benefit from ongoing guidance on adverse reactions to hyaluronic acid fillers and their management.\n\n\nMETHODS\nA multinational, multidisciplinary group of experts in cosmetic medicine convened the Global Aesthetics Consensus Group to review the properties and clinical uses of Hylacross and Vycross hyaluronic acid products and develop updated consensus recommendations for early and late complications associated with hyaluronic acid fillers.\n\n\nRESULTS\nThe consensus panel provided specific recommendations focusing on early and late complications of hyaluronic acid fillers and their management. The impact of patient-, product-, and technique-related factors on such reactions was described. Most of these were noted to be mild and transient. Serious adverse events are rare. Early adverse reactions to hyaluronic acid fillers include vascular infarction and compromise; inflammatory reactions; injection-related events; and inappropriate placement of filler material. Among late reactions are nodules, granulomas, and skin discoloration. Most adverse events can be avoided with proper planning and technique. Detailed understanding of facial anatomy, proper patient and product selection, and appropriate technique can further reduce the risks. Should adverse reactions occur, the clinician must be prepared and have tools available for effective treatment.\n\n\nCONCLUSIONS\nAdverse reactions with hyaluronic acid fillers are uncommon. Clinicians should take steps to further reduce the risk and be prepared to treat any complications that arise."}
{"_id": "d2163faecd156054bad97419f4c7baba954211e1", "title": "Let a Thousand Flowers Bloom? An Early Look at Large Numbers of Software App Developers and Patterns of Innovation", "text": "It is often presumed that bringing more members on board a multi-sided platform will stimulate value creation. Here I study the thousands of software producers building applications (\u201capps\u201d) on leading handheld computer platforms (1999-2004). Consistent with past theory, I find a lock-step link between numbers of producers and varieties of software titles. The narrow, unchanging scope of producers and a series of other patterns are consistent with a pronounced role of specialization and heterogeneity of producers. I also find that while adding producers making different types of software stimulated investment incentives, consistent with network effects, adding producers making similar software crowdedout innovation incentives. The latter of these two effects dominates in this context. The patterns also indicate nonrandom generation and sorting of producers onto platforms, with later cohorts generating systematically less compelling software than earlier cohorts of entrants. Overall, added producers led innovation to become more dependent on population-level diversity, variation and experimentation\u2014while drawing less on the heroic efforts and investments of any one individual innovator."}
{"_id": "2469b65ef0b1d0cfd4f31ee3912f2187d04b50c0", "title": "Simultaneous video defogging and stereo reconstruction", "text": "We present a method to jointly estimate scene depth and recover the clear latent image from a foggy video sequence. In our formulation, the depth cues from stereo matching and fog information reinforce each other, and produce superior results than conventional stereo or defogging algorithms. We first improve the photo-consistency term to explicitly model the appearance change due to the scattering effects. The prior matting Laplacian constraint on fog transmission imposes a detail-preserving smoothness constraint on the scene depth. We further enforce the ordering consistency between scene depth and fog transmission at neighboring points. These novel constraints are formulated together in an MRF framework, which is optimized iteratively by introducing auxiliary variables. The experiment results on real videos demonstrate the strength of our method."}
{"_id": "0744838ada8edf88f3777abadb92b65547088c25", "title": "Xbase: implementing domain-specific languages for Java", "text": "Xtext is an open-source framework for implementing external, textual domain-specific languages (DSLs). So far, most DSLs implemented with Xtext and similar tools focus on structural aspects such as service specifications and entities. Because behavioral aspects are significantly more complicated to implement, they are often delegated to general-purpose programming languages. This approach introduces complex integration patterns and the DSL's high level of abstraction is compromised.\n We present Xbase as part of Xtext, an expression language that can be reused via language inheritance in any DSL implementation based on Xtext. Xbase expressions provide both control structures and program expressions in a uniform way. Xbase is statically typed and tightly integrated with the Java type system. Languages extending Xbase inherit the syntax of a Java-like expression language as well as language infrastructure components, including a parser, an unparser, a linker, a compiler and an interpreter. Furthermore, the framework provides integration into the Eclipse IDE including debug and refactoring support.\n The application of Xbase is presented by means of a domain model language which serves as a tutorial example and by the implementation of the programming language Xtend. Xtend is a functional and object-oriented general purpose language for the Java Virtual Machine (JVM). It is built on top of Xbase which is the reusable expression language that is the foundation of Xtend."}
{"_id": "5545cff76f3488208b5b22747c1ed00901627180", "title": "Impulsive corporal punishment by mothers and antisocial behavior and impulsiveness of children.", "text": "This study tested the hypothesis that corporal punishment (CP), such as spanking or slapping a child for purposes of correcting misbehavior, is associated with antisocial behavior (ASB) and impulsiveness by the child. The data were obtained through interviews with a probability sample of 933 mothers of children age 2-14 in two small American cities. Analyses of variance found that the more CP experienced by the child, the greater the tendency for the child to engage in ASB and to act impulsively. These relationships hold even after controlling for family socioeconomic status, the age and sex of the child, nurturance by the mother, and the level of noncorporal interventions by the mother. There were also significant interaction effects of CP with impulsiveness by the mother. When CP was carried out impulsively, it was most strongly related to child impulsiveness and ASB; when CP was done when the mother was under control, the relationship to child behavior problems was reduced but still present. In view of the fact that there is a high risk of losing control when engaged in CP, even by parents who are not usually impulsive, and the fact that impulsive CP is so strongly associated with child behavior problems, the results of this study suggest that CP is an important risk factor for children developing a pattern of impulsive and antisocial behavior which, in turn, may contribute to the level of violence and other crime in society."}
{"_id": "e859636d95d9de4a53aa2b94fb5e6eea3bf64844", "title": "Design and simulation of self-tuning PID-type fuzzy adaptive control for an expert HVAC system", "text": "The modelling, numerical simulation and intelligent control of an expert HVAC (heating, ventilating and air-conditioning) system having two different zones with variable flow-rate were performed by considering the ambient temperature in this study. The sub-models of the system were obtained by deriving heat transfer equations of heat loss of two zones by conduction and convection, cooling unit and fan. All models of the variable flow-rate HVAC system were generated by using MATLAB/SIMULINK, and proportional-integral-derivative (PID) parameters were obtained by using Fuzzy sets. For comfortable of people the temperatures of the two different zones were decreased to 5 C from the ambient temperature. The successful results were obtained by applying self-tuning proportional-integral-derivative (PID)-type fuzzy adaptive controller if comparing with the fuzzy PD-type and the classical PID controller. The obtained results were presented in a graphical form. 2008 Elsevier Ltd. All rights reserved."}
{"_id": "cb21ae2095764e837e0da6e823fc1bde42bda7a6", "title": "A Procedure for Measuring Microplastics using Pressurized Fluid Extraction.", "text": "A method based on pressurized fluid extraction (PFE) was developed for measuring microplastics in environmental samples. This method can address some limitations of the current microplastic methods and provide laboratories with a simple analytical method for quantifying common microplastics in a range of environmental samples. The method was initially developed by recovering 101% to 111% of spiked plastics on glass beads and was then applied to a composted municipal waste sample with spike recoveries ranging from 85% to 94%. The results from municipal waste samples and soil samples collected from an industrial area demonstrated that the method is a promising alternative for determining the concentration and identity of microplastics in environmental samples."}
{"_id": "194eeadc7ae97e9c549b8ce303db8efd8e6d3680", "title": "DCFont: an end-to-end deep chinese font generation system", "text": "Building a complete personalized Chinese font library for an ordinary person is a tough task due to the existence of huge amounts of characters with complicated structures. Yet, existing automatic font generation methods still have many drawbacks. To address the problem, this paper proposes an end-to-end learning system, DCFont, to automatically generate the whole GB2312 font library that consists of 6763 Chinese characters from a small number (e.g., 775) of characters written by the user. Our system has two major advantages. On the one hand, the system works in an end-to-end manner, which means that human interventions during offline training and online generating periods are not required. On the other hand, a novel deep neural network architecture is designed to solve the font feature reconstruction and handwriting synthesis problems through adversarial training, which requires fewer input data but obtains more realistic and high-quality synthesis results compared to other deep learning based approaches. Experimental results verify the superiority of our method against the state of the art."}
{"_id": "90140914a3a8d68bfe68f50821661f6af7bba5be", "title": "Cyber Security Resource Allocation: A Markov Decision Process Approach", "text": "An effective defense-in-depth in cyber security applies multiple layers of defense throughout a system. The goalis to defend a system against cyber-attack using severalindependent methods. Therefore, a cyber-attack that is able to penetrate one layer of defense may be unsuccessful in other layers. Common layers of cyber defense include: attack avoidance, prevention, detection, survivability and recovery. It follows that in security-conscious organizations, the cyber security investment portfolio is divided into different layers of defense. For instance, a two-way division is agility and recovery. Cyber agility pursues attack avoidance techniques such that cyber-attacks are rendered as ineffective, whereas cyber recovery seeks to fight-through successful attacks. We show that even when the primary focus is on the agility of a system, recovery should be an essential point during implementation because the frequency of attacks will degrade the system and a quick and fast recovery is necessary. However, there is not yet an optimum mechanism to allocate limited cyber security resourcesinto the different layers. We propose an approach using theMarkov Decision Process (MDP) framework for resourcesallocation between the two end layers: agility and recovery."}
{"_id": "995a5e2463a613e4234f3119da8b94edb2e46546", "title": "Protein folding and misfolding", "text": "The manner in which a newly synthesized chain of amino acids transforms itself into a perfectly folded protein depends both on the intrinsic properties of the amino-acid sequence and on multiple contributing influences from the crowded cellular milieu. Folding and unfolding are crucial ways of regulating biological activity and targeting proteins to different cellular locations. Aggregation of misfolded proteins that escape the cellular quality-control mechanisms is a common feature of a wide range of highly debilitating and increasingly prevalent diseases."}
{"_id": "a79272c6572b183ca1b6ac01fdab027645e7c14f", "title": "Machine Translation Approaches: Issues and Challenges", "text": "In the modern world, there is an increased need for language translations owing to the fact that language is an effective medium of communication. The demand for translation has become more in recent years due to increase in the exchange of information between various regions using different regional languages. Accessibility to web document in other languages, for instance, has been a concern for information Professionals. Machine translation (MT), a subfield under Artificial Intelligence, is the application of computers to the task of translating texts from one natural (human) language to another. Many approaches have been used in the recent times to develop an MT system. Each of these approaches has its own advantages and challenges. This paper takes a look at these approaches with the few of identifying their individual features, challenges and the best domain they are best suited to."}
{"_id": "a176dfee19e9120060190fa1707f0a54071d9fd1", "title": "Gay and bisexual identity development among female-to-male transsexuals in North America: emergence of a transgender sexuality.", "text": "We studied a North American sample of female-to-male (FtM) transsexuals sexually attracted to men, aiming to understand their identity and sexuality in the context of a culture of transgender empowerment. Sex-reassigned FtM transsexuals, 18 years or older and attracted to men, were recruited via an FtM community conference and listserv. Participants (N = 25) responded to open-ended questions about identity development, sexual behavior, and social support. Data were analyzed by content analysis. Scores for sexual identity, self esteem, sexual functioning, and psychological adjustment were compared to those of a comparison group (N = 76 nontransgender gay and bisexual men). Of the 25 FtMs, 15 (60%) identified as gay, 8 (32%) as bisexual, and 2 (8%) as queer. All were comfortable with their gender identity and sexual orientation. The FtM group was more bisexual than the nontransgender gay and bisexual controls. No significant group differences were found in self esteem, sexual satisfaction, or psychological adjustment. For some FtMs, sexual attractions and experiences with men affirmed their gender identity; for others, self-acceptance of a transgender identity facilitated actualization of their attractions toward men. Most were \"out\" as transgender among friends and family, but not on the job or within the gay community. Disclosure and acceptance of their homosexuality was limited. The sexual identity of gay and bisexual FtMs appears to mirror the developmental process for nontransgender homosexual men and women in several ways; however, participants also had experiences unique to being both transgender and gay/bisexual. This signals the emergence of a transgender sexuality."}
{"_id": "0daff0e01e289a578fab27fec326502d93dc5cb9", "title": "Autonomous Vehicle-target Assignment: a Game-theoretical Formulation", "text": "We consider an autonomous vehicle-target assignment problem where a group of vehicles are expected to optimally assign themselves to a set of targets. We introduce a gametheoretical formulation of the problem in which the vehicles are viewed as self-interested decision makers. Thus, we seek the optimization of a global utility function through autonomous vehicles that are capable of making individually rational decisions to optimize their own utility functions. The first important aspect of the problem is to choose the utility functions of the vehicles in such a way that the objectives of the vehicles are localized to each vehicle yet aligned with a global utility function. The second important aspect of the problem is to equip the vehicles with an appropriate negotiation mechanism by which each vehicle pursues the optimization of its own utility function. We present several design procedures and accompanying caveats for vehicle utility design. We present two new negotiation mechanisms, namely, \u201cgeneralized regret monitoring with fading memory and inertia\u201d and \u201cselective spatial adaptive play,\u201d and provide accompanying proofs of their convergence. Finally, we present simulations that illustrate how vehicle negotiations can consistently lead to near-optimal assignments provided that the utilities of the vehicles are designed appropriately. DOI: 10.1115/1.2766722"}
{"_id": "4fc2cc541fc8e85d5bfef6c6f8be97765462494b", "title": "Information Systems and Healthcare XVI: Physician Adoption of Electronic Medical Records: Applying the UTAUT Model in a Healthcare Context", "text": "This study applies the Unified Theory of Acceptance and Use of Technology (UTAUT) to the phenomenon of physician adoption of electronic medical records (EMR) technology. UTAUT integrates eight theories of individual acceptance into one comprehensive model designed to assist in understanding what factors either enable or hinder technology adoption and use. As such, it provides a useful lens through which to view what is currently taking place in the healthcare industry regarding EMR adoption. This is mutually beneficial to both the healthcare and MIS communities, as UTAUT offers valuable practical insight to the healthcare industry in explaining why EMR technology has not been more widely adopted as well as what prescriptions may facilitate future adoption, while offering the MIS community the opportunity to strengthen existing theory through an illustration of its application."}
{"_id": "09dc808c3249bbacb28a1e7b9c234cb58ad7dab4", "title": "SiGe BiCMOS and eWLB packaging technologies for automotive radar solutions", "text": "In this paper the evolution of silicon based automotive radar for the 76-81 GHz range is described. Starting with SiGe bare-die chips in 2009, today packaged MMICs are available for low-cost radar applications. Future SiGe BiCMOS technology will enable highly integrated single chip radars with superior performance at lower power consumption. This will pave the way for automotive radar safety for everyone and will be an important step towards autonomous driving."}
{"_id": "1bb6b92e874bf7cab8748a9acbb71a8dcda9ec65", "title": "W-band scalable phased arrays for imaging and communications", "text": "This article discusses the benefits and challenges associated with the design of multi-function scalable phased arrays at millimeter wave frequencies. First, applications for phased arrays with tens to hundreds of elements are discussed. Existing solutions for scaling silicon-based phased arrays from microwave to terahertz frequencies are reviewed. The challenges and tradeoffs associated with multiple integration options for W-band phased arrays are analyzed, with special consideration given to packaging and antenna performance. Finally, a solution based on SiGe ICs and organic packages for a 64-element dual-polarized 94 GHz phased array is described, along with associated measurement results."}
{"_id": "42cf4dd14f07cf34d5cad14fea5c691f31ab19ef", "title": "A 77GHz automotive radar receiver in a wafer level package", "text": "In this paper, a 77-GHz radar receiver is presented, which comes in a wafer level package and thus eliminates the need for wire bonding yielding significant cost reduction. The high integration level available in the productive Silicon-Germanium (SiGe) technology used in this paper allows for implementation of in-system monitoring of the receiver conversion parameters. This facilitates the realization of ISO 26262 compliant radar sensors for automotive safety applications."}
{"_id": "4a97f8d83c5cc523110beb11913ea5fb2a4271fe", "title": "A compact 4-chip package with 64 embedded dual-polarization antennas for W-band phased-array transceivers", "text": "A fully-integrated antenna-in-package (AiP) solution for W-band scalable phased-array systems is demonstrated. We present a fully operational compact W-band transceiver package with 64 dual-polarization antennas embedded in a multilayer organic substrate. This package has 12 metal layers, a size of 16.2 mm \u00d7 16.2 mm, and 292 ball-grid-array (BGA) pins with 0.4 mm pitch. Four silicon-germanium (SiGe) transceiver ICs are flip-chip attached to the package. Extensive full-wave electromagnetic simulation and radiation pattern measurements have been performed to optimize the antenna performance in the package environment, with excellent model-to-hardware correlation achieved. Enabled by detailed circuit-package co-design, a half-wavelength spacing, i.e., 1.6 mm at 94 GHz, is maintained between adjacent antenna elements to support array scalability at both the package and board level. Effective isotropic radiated power (EIRP) and radiation patterns are also measured to demonstrate the 64-element spatial power combining."}
{"_id": "76d7ddd030ef5ab4403a53caf2b46de89af71544", "title": "Low-Cost High-Gain and Broadband Substrate- Integrated-Waveguide-Fed Patch Antenna Array for 60-GHz Band", "text": "A low-cost high-gain and broadband substrate integrated waveguide (SIW)-fed patch antenna array is demonstrated at the 60-GHz band. A single-layered SIW feeding network with wideband T-junctions and wideband high-gain cavity-backed patch antennas are employed to achieve high gain and wideband performance simultaneously. Although the proposed antenna array has a multilayered structure, it can be fabricated by conventional low-cost single-layered printed circuit board (PCB) technology and then realized by stacking and fixing all of single layers together. The simulated and measured impedance bandwidths of a 4 \u00d7 4 antenna array are 27.5% and 22.6% for 10 dB. The discrepancy between simulation and measurement is analyzed. A gain up to 19.6 dBi, and symmetrical unidirectional radiation patterns with low cross polarization are also achieved. With advantages of low fabrication cost and good performances, the proposed antenna array is a promising candidate for millimeter-wave wireless communication systems."}
{"_id": "b2c3f2baeb6422a248cdb85e1fe0763eb4118328", "title": "Design and Sensorless Control of a Novel Axial-Flux Permanent Magnet Machine for In-Wheel Applications", "text": "This paper proposes an axial-flux-modulation permanent magnet (PM) machine and its sensorless control strategy for in-wheel application in electrical vehicles. A Vernier structure is integrated with the axial-flux PM machine to include the magnetic gear effect and improve output torque. The sensorless control strategy of the proposed machine, including initial rotor position estimation and rotating position estimation, is proposed for flux-modulation motors in this paper. The initial rotor position estimation is based on the technique of rectangular pulse voltage injection and the rotating rotor position estimation is based on the sliding mode observer (SMO). The saturation effect on inductances, which is the theoretical basis of the rectangular pulse voltage injection, makes the stator parameter variation in different loads and affects the SMO estimation. To overcome this problem, a novel online parameter self-adjustment procedure for the SMO is introduced. The machine design and its sensorless control performance are verified by simulation and prototype experiments."}
{"_id": "0b745e0ed5d1d64846196a3ec7715b548e404c7b", "title": "Unshackling evolution: evolving soft robots with multiple materials and a powerful generative encoding", "text": "In 1994, Karl Sims' evolved virtual creatures showed the potential of evolutionary algorithms to produce natural, complex morphologies and behaviors [30]. One might assume that nearly 20 years of improvements in computational speed and evolutionary algorithms would produce far more impressive organisms, yet the creatures evolved in the field of artificial life today are not obviously more complex, natural, or intelligent. Fig. 2 demonstrates an example of similar complexity in robots evolved 17 years apart."}
{"_id": "bcba4d340edb78cb5e88326958d436a64c53b138", "title": "Statistical limits of spiked tensor models", "text": "We study the statistical limits of both detecting and estimating a rank-one deformation of a symmetric random Gaussian tensor. We establish upper and lower bounds on the critical signal-to-noise ratio, under a variety of priors for the planted vector: (i) a uniformly sampled unit vector, (ii) i.i.d. \u00b11 entries, and (iii) a sparse vector where a constant fraction \u03c1 of entries are i.i.d. \u00b11 and the rest are zero. For each of these cases, our upper and lower bounds match up to a 1+o(1) factor as the order d of the tensor becomes large. For sparse signals (iii), our bounds are also asymptotically tight in the sparse limit \u03c1 \u2192 0 for any fixed d (including the d = 2 case of sparse PCA). Our upper bounds for (i) demonstrate a phenomenon reminiscent of the work of Baik, Ben Arous and P\u00e9ch\u00e9: an \u2018eigenvalue\u2019 of a perturbed tensor emerges from the bulk at a strictly lower signal-to-noise ratio than when the perturbation itself exceeds the bulk; we quantify the size of this effect. We also provide some general results for larger classes of priors. In particular, the large d asymptotics of the threshold location differs between problems with discrete priors versus continuous priors. Finally, for priors (i) and (ii) we carry out the replica prediction from statistical physics, which is conjectured to give the exact information-theoretic threshold for any fixed d. Of independent interest, we introduce a new improvement to the second moment method for contiguity, on which our lower bounds are based. Our technique conditions away from rare \u2018bad\u2019 events that depend on interactions between the signal and noise. This enables us to close \u221a 2-factor gaps present in several previous works."}
{"_id": "7a9bdd280f652f7e310ed2b8c8be4f068392f408", "title": "Mobile application testing matrix and challenges", "text": "The adoption of smart phones and the usages of mobile applications are increasing rapidly. Consequently, within limited time-range, mobile Internet usages have managed to take over the desktop usages particularly since the first smart phone-touched application released by iPhone in 2007. This paper is proposed to provide solution and answer the most demandable questions related to mobile application automated and manual testing limitations. Moreover, Mobile application testing requires agility and physically testing. Agile testing is to detect bugs through automated tools, whereas the compatibility testing is more to ensure that the apps operates on mobile OS (Operation Systems) as well as on the different real devices. Moreover, we have managed to answer automated or manual questions through two mobile application case studies MES (Mobile Exam System) and MLM (Mobile Lab Mate) by creating test scripts for both case studies and our experiment results have been discussed and evaluated on whether to adopt test on real devices or on emulators? In addition to this, we have introduced new mobile application testing matrix for the testers and some enterprises to obtain knowledge from."}
{"_id": "1f22b80e22f5bee61497f5533fb17dcd9ce04192", "title": "X-ray: Automating Root-Cause Diagnosis of Performance Anomalies in Production Software", "text": "Troubleshooting the performance of production software is challenging. Most existing tools, such as profiling, tracing, and logging systems, reveal what events occurred during performance anomalies. However, users of such tools must infer why these events occurred; e.g., that their execution was due to a root cause such as a specific input request or configuration setting. Such inference often requires source code and detailed application knowledge that is beyond system administrators and end users. This paper introduces performance summarization, a technique for automatically diagnosing the root causes of performance problems. Performance summarization instruments binaries as applications execute. It first attributes performance costs to each basic block. It then uses dynamic information flow tracking to estimate the likelihood that a block was executed due to each potential root cause. Finally, it summarizes the overall cost of each potential root cause by summing the per-block cost multiplied by the cause-specific likelihood over all basic blocks. Performance summarization can also be performed differentially to explain performance differences between two similar activities. X-ray is a tool that implements performance summarization. Our results show that X-ray accurately diagnoses 17 performance issues in Apache, lighttpd, Postfix, and PostgreSQL, while adding 2.3% average runtime overhead."}
{"_id": "b37e6afa6ae3394b2a7b2aab78e67900fe835413", "title": "An Improving Deception Detection Method in Computer-Mediated Communication", "text": "Online deception is disrupting our daily life, organizational process, and even national security. Existing deception detection approaches followed a traditional paradigm by using a set of cues as antecedents, and used a variety of data sets and common classification models to detect deception, which were demonstrated to be an accurate technique, but the previous results also showed the necessity to expand the deception feature set in order to improve the accuracy. In our study, we propose a novel feature selection method of the combination of CHI statistics and hypothesis testing, and achieve the accuracy level of 86% and F-measure of 0.84 by using the novel feature sets and SVM classification models, which exceeds the previous experiment results."}
{"_id": "8d57e94fe89c20e07ac6e47520ea39c8c814b4e5", "title": "When learning about the real world is better done virtually : A study of substituting computer simulations for laboratory equipment", "text": "This paper examines the effects of substituting a computer simulation for real laboratory equipment in the second semester of a large-scale introductory physics course. The direct current circuit laboratory was modified to compare the effects of using computer simulations with the effects of using real light bulbs, meters, and wires. Two groups of students, those who used real equipment and those who used a computer simulation that explicitly modeled electron flow, were compared in terms of their mastery of physics concepts and skills with real equipment. Students who used the simulated equipment outperformed their counterparts both on a conceptual survey of the domain and in the coordinated tasks of assembling a real circuit and describing how it worked."}
{"_id": "15c815b32ff201c1dbfa708f8584dca55b9df3a2", "title": "Progressive Gaussian filtering using explicit likelihoods", "text": "In this paper, we introduce a new sample-based Gaussian filter. In contrast to the popular Nonlinear Kalman Filters, e.g., the UKF, we do not rely on linearizing the measurement model. Instead, we take up the Gaussian progressive filtering approach introduced by the PGF 42 but explicitly rely on likelihood functions. Progression means, we incorporate the information of a new measurement gradually into the state estimate. The advantages of this filtering method are on the one hand the avoidance of sample degeneration and on the other hand an adaptive determination of the number of likelihood evaluations required for each measurement update. By this means, less informative measurements can be processed quickly, whereas measurements containing much information automatically receive more emphasis by the filter. These properties allow the new filter to cope with the demanding problem of very narrow likelihood functions in an efficient way."}
{"_id": "bdd6a2bae215342b388967744fa15a0fd5b6567d", "title": "Distributed Edge Cloud R-CNN for Real Time Object Detection", "text": "Cloud computing infrastructures have become the de-facto platform for data driven machine learning applications. However, these centralized models of computing are unqualified for dispersed high-volume real-time edge data intensive applications such as real time object detection, where video streams may be captured at multiple geographical locations. While many recent advancements in object detection have been made using Convolutional Neural Networks, these performance improvements only focus on a single, contiguous object detection model. In this paper, we propose a distributed Edge-Cloud R-CNN pipeline. By splitting the object detection pipeline into components and dynamically distributing these components in the cloud, we can achieve optimal performance to enable real time object detection. As a proof of concept, we evaluate the performance of the proposed system on a distributed computing platform including cloud servers and edge-embedded devices for real-time object detection on live video streams."}
{"_id": "97d278786cec636f0f4bef1ad69e2e4eb65e64eb", "title": "A Novel Multi-Stack Device Structure and its Analysis for High Power CMOS Switch Design", "text": "A novel multi-stack CMOS device structure is proposed, the operation of the structure is fully analyzed for high power CMOS switch design. The structure is also implemented in a standard 0.18-um triple-well CMOS process, and its performance is fully characterized. The proposed switch device incorporates multi-stack NMOS switches, one of which has a switch at the bulk and the others of which have a connection between the bulk and the source in order to provide high power handling capability to the transmit switch side. In order to demonstrate the improvement of power handling capability, performance of the conventional structure and the proposed structure are fully analyzed, compared from various simulation results, and verified with measurement results in detail. Experimental data show that the isolation of the proposed test structure is 25 dB higher at 30 dBm input power level than that of conventional structures, which was caused by the significant reduction of leakage current of the switch device in OFF state. In addition, insertion loss of the Rx switch can be maintained as 1.5 dB by applying body switching technique in 900 MHz."}
{"_id": "ba2c1d97590651e152657eba5cb9e0a37d7d74d8", "title": "A Dual-Band Dual-Polarized Nested Vivaldi Slot Array With Multilevel Ground Plane", "text": "In this paper, we present a systematic approach to the design, optimization and characterization of a broadband 5:1 bandwidth (0.8 to 4.0 GHz) antenna subarray. The array element is an optimized-taper antipodal Vivaldi slot with a bandwidth of 2.5:1. Two such elements of different sizes and with 0.4 GHz (10% of the highest frequency) overlapping bandwidths are arrayed in a nested lattice above a multilevel ground plane that shields the feeds and electronics. Return loss, radiation patterns, cross-polarization and mutual coupling are measured from 0.5\u20135.0 GHz. This array demonstrates E plane patterns with 50 and 45 3-dB beamwidths in the lower and upper frequency bands, respectively. The coupling between the elements is characterized for different relative antenna positions in all three dimensions."}
{"_id": "76faaf292c6d9dc29d3a99300a7fdd7a35d6d107", "title": "Online and Linear-Time Attention by Enforcing Monotonic Alignments", "text": "Recurrent neural network models with an attention mechanism have proven to be extremely effective on a wide variety of sequence-tosequence problems. However, the fact that soft attention mechanisms perform a pass over the entire input sequence when producing each element in the output sequence precludes their use in online settings and results in a quadratic time complexity. Based on the insight that the alignment between input and output sequence elements is monotonic in many problems of interest, we propose an end-to-end differentiable method for learning monotonic alignments which, at test time, enables computing attention online and in linear time. We validate our approach on sentence summarization, machine translation, and online speech recognition problems and achieve results competitive with existing sequence-tosequence models."}
{"_id": "9d2ac46073da16f5e77e0a699371e7a0a26dc921", "title": "User involvement: a review of the benefits and challenges", "text": "Users can be involved in the design process in several ways. The author focuses on user-centred design, participatory design, ethnography, and contextual design. She briefly describes each approach and cites the corresponding literature. The expected benefits of these approaches highly overlap. Methods based on early user involvement promise to avoid the implementation of costly features that users do not want or cannot use. Participatory methods promise to improve the user satisfaction and the acceptance of a system. Nearly all approaches intend to improve the overall usability of the system and by this means increase the productivity and decrease the training costs and the need of user support."}
{"_id": "1a81c722727299e45af289d905d7dcf157174248", "title": "BabyTalk: Understanding and Generating Simple Image Descriptions", "text": "We present a system to automatically generate natural language descriptions from images. This system consists of two parts. The first part, content planning, smooths the output of computer vision-based detection and recognition algorithms with statistics mined from large pools of visually descriptive text to determine the best content words to use to describe an image. The second step, surface realization, chooses words to construct natural language sentences based on the predicted content and general statistics from natural language. We present multiple approaches for the surface realization step and evaluate each using automatic measures of similarity to human generated reference descriptions. We also collect forced choice human evaluations between descriptions from the proposed generation system and descriptions from competing approaches. The proposed system is very effective at producing relevant sentences for images. It also generates descriptions that are notably more true to the specific image content than previous work."}
{"_id": "88768a8ffa3574fce845ffa841686cfeed267267", "title": "Urogenital tract disorders in children suspected of being sexually abused.", "text": "INTRODUCTION\nChild sexual abuse (CSA) is generally defined as child exploitation that leads to achievement of sexual satisfaction. According to data from European countries, sexual abuse of children affects 10-40% of girls and 5-20% of boys.\n\n\nMATERIAL AND METHODS\nThe Medline, and Web of Science databases were searched with no date limitation on May 2015 using the terms 'child abuse' in conjunction with 'urinary tract', 'urologist', 'urological dysfunction', 'urologic symptoms', 'LUTS' or 'urinary infection'.\n\n\nRESULTS\nAwareness of the CSA problem among paediatricians and urologists is very important, because they are often the only physicians who are able to recognize the problem. CSA diagnosis is possible only through the proper collection of a medical history and a thorough physical examination. Urologists have to remember that children exposed to sexual abuse rarely exhibit abnormal genital findings. In fact, absence of genital findings is the rule rather than the exception. In most cases, the final diagnosis of sexual abuse is based on the child's history and behavior, along with the onset and exacerbation of urologic symptoms.\n\n\nCONCLUSIONS\nIn this article, we present a review of studies and literature concerning urinary symptoms in sexually abused children to clarify the problem for a broad group of urologists. We present common symptoms and premises that can point to the right diagnosis and basic guidelines of proceeding after suspicion of abuse."}
{"_id": "f6c737edc0cd82f18014c79945270bce0c85362b", "title": "The Three Layers Evaluation Model for Computer-Generated Plots", "text": "This paper describes a model for evaluating a computer-generated plot. The main motivation of this project is to provide MEXICA, our plot generator, with the capacity of evaluating its own outputs as well as assessing narratives generated by other agents that can be employed to enrich its knowledge base. We present a description of our computer model as well as an explanation of our first prototype. Then, we show the results of assessing three computergenerated narratives. The outcome suggests that we are in the right direction, although much more work"}
{"_id": "6cb902d2dc2bf50aed3800ddcb7dfa3689eb0282", "title": "THE EFFECTS OF MYOFASCIAL RELEASE VS STATIC STRETCHING ON HAMSTRINGS RANGE OF MOTION", "text": "The purpose of this study was to investigate the effects of three days of foam rolling on the hamstrings range of motion in comparison with static stretching. Lower extremity injuries are prevalent in strength training and sports today. Poor flexibility has been found to increase the risk of overuse injuries and significantly affect the individual's level of function and performance. Self myofascial release (SMR), foam rolling, is a recent modality clinically used to increase flexibility. On the other hand, there are few research studies on the technique. Twenty college students participated in this study. Ten participants were in the static stretching group, while ten participants were in the SMR group. Participants received the treatment three times in one week with at least 48 hours between treatments. The treatments were static stretching and SMR for three minutes of stretching the hamstrings muscles. The wall sit-and-reach test was used to measure hamstrings range of motion. Measurements were made at the beginning of the study and after each treatment. iv The acute stretching programs increased hamstrings range of motion in the self myofascial release group (28.9%) and static group (33.2%) respectively. The Group by Time ANOVA for flexibility revealed that there was no main effect of Group (F(1,18) = 3.629, p = 0.0729), but that there was a main effect of Time (F(3,54) = 32.130, p =.0001). At the same time there was no Group by Time interaction (F(3,54) = 1.152, p =.3366). These data suggest that self myofascial release compared to static stretching did not have a greater effect on hamstrings range of motion, but both groups increased range of motion from pretest to posttest."}
{"_id": "44ecb6ef5c61c88c2371eb7bc3136a8999d8dfca", "title": "Cultural Congruent Care : A Reflection on Patient Outcome", "text": "The growing diverse multicultural population in the United States requires that healthcare workers develop cultural knowledge, awareness, and sensitivity to help diverse patient population receive quality health care. Personal belief has been identified as a significant contributor to professional care delivery and may influence the provision of culturally competent care. However, the relationship between cultural competence and the individual cognitive perception has been an uneasy one. This article explores the moral foundations of cultural competence and ultimately its impact on patient care outcome."}
{"_id": "20d8b40e203f20a28113d82a6380a3393993c1cc", "title": "Global supply chain design : A literature review and critique", "text": "In this paper, we review decision support models for the design of global supply chains, and assess the fit between the research literature in this area and the practical issues of global supply chain design. The classification scheme for this review is based on ongoing and emerging issues in global supply chain management and includes review dimensions for (1) decisions addressed in the model, (2) performance metrics, (3) the degree to which the model supports integrated decision processes, and (4) globalization considerations. We conclude that although most models resolve a difficult feature associated with globalization, few models address the practical global supply chain design problem in its entirety. We close the paper with recommendations for future research in global supply chain modeling that is both forward-looking and practically oriented. 2005 Elsevier Ltd. All rights reserved."}
{"_id": "6b880d58b826074c49d0a38d0f9fb481e93c0acc", "title": "Approximation capabilities of multilayer feedforward networks", "text": ""}
{"_id": "9ee6209432316baf6776838917e06bca4d874747", "title": "The HiBench benchmark suite: Characterization of the MapReduce-based data analysis", "text": "The MapReduce model is becoming prominent for the large-scale data analysis in the cloud. In this paper, we present the benchmarking, evaluation and characterization of Hadoop, an open-source implementation of MapReduce. We first introduce HiBench, a new benchmark suite for Hadoop. It consists of a set of Hadoop programs, including both synthetic micro-benchmarks and real-world Hadoop applications. We then evaluate and characterize the Hadoop framework using HiBench, in terms of speed (i.e., job running time), throughput (i.e., the number of tasks completed per minute), HDFS bandwidth, system resource (e.g., CPU, memory and I/O) utilizations, and data access patterns."}
{"_id": "111b0d103542ad8679767e67bbeff8b6504002bc", "title": "The Google file system", "text": "We have designed and implemented the Google File System, a scalable distributed file system for large distributed data-intensive applications. It provides fault tolerance while running on inexpensive commodity hardware, and it delivers high aggregate performance to a large number of clients. While sharing many of the same goals as previous distributed file systems, our design has been driven by observations of our application workloads and technological environment, both current and anticipated, that reflect a marked departure from some earlier file system assumptions. This has led us to reexamine traditional choices and explore radically different design points. The file system has successfully met our storage needs. It is widely deployed within Google as the storage platform for the generation and processing of data used by our service as well as research and development efforts that require large data sets. The largest cluster to date provides hundreds of terabytes of storage across thousands of disks on over a thousand machines, and it is concurrently accessed by hundreds of clients. In this paper, we present file system interface extensions designed to support distributed applications, discuss many aspects of our design, and report measurements from both micro-benchmarks and real world use."}
{"_id": "926574da942b7b67cd1842777e06c8ec797ab363", "title": "A Constrained, Weighted-L1 Minimization Approach for Joint Discovery of Heterogeneous Neural Connectivity Graphs", "text": "Determining functional brain connectivity is crucial to understanding the brain and neural differences underlying disorders such as autism. Recent studies have used Gaussian graphical models to learn brain connectivity via statistical dependencies across brain regions from neuroimaging. However, previous studies often fail to properly incorporate priors tailored to neuroscience, such as preferring shorter connections. To remedy this problem, the paper here introduces a novel, weighted-`1, multi-task graphical model (W-SIMULE). This model elegantly incorporates a flexible prior, along with a parallelizable formulation. Additionally, W-SIMULE extends the often-used Gaussian assumption, leading to considerable performance increases. Here, applications to fMRI data show that W-SIMULE succeeds in determining functional connectivity in terms of (1) log-likelihood, (2) finding edges that differentiate groups, and (3) classifying different groups based on their connectivity, achieving 58.6% accuracy on the ABIDE dataset. Having established W-SIMULE\u2019s effectiveness, it links four key areas to autism, all of which are consistent with the literature. Due to its elegant domain adaptivity, W-SIMULE can be readily applied to various data types to effectively estimate connectivity."}
{"_id": "3a9564179380cb03a01abbc30e4c7ed4d80efb32", "title": "Taming Subgraph Isomorphism for RDF Query Processing", "text": "RDF data are used to model knowledge in various areas such as l ife sciences, Semantic Web, bioinformatics, and social graphs . The size of real RDF data reaches billions of triples. This calls for a framework for efficiently processing RDF data. The core func tion of processing RDF data is subgraph pattern matching. There h av been two completely different directions for supporting ef ficient subgraph pattern matching. One direction is to develop spec ialized RDF query processing engines exploiting the properties of R DF data for the last decade, while the other direction is to deve lop fficient subgraph isomorphism algorithms for general, labele d graphs for over 30 years. Although both directions have a similar go al (i.e., finding subgraphs in data graphs for a given query grap h), they have been independently researched without clear reas on. We argue that a subgraph isomorphism algorithm can be easily mo dified to handle the graph homomorphism, which is the RDF patter n matching semantics, by just removing the injectivity const raint. In this paper, based on the state-of-the-art subgraph isomorp hism algorithm, we propose an in-memory solution, TurboHOM++, which is tamed for the RDF processing, and we compare it with the rep resentative RDF processing engines for several RDF benchmark s in a server machine where billions of triples can be loaded in mem ory. In order to speed upTurboHOM++, we also provide a simple yet effective transformation and a series of optimization tech niques. Extensive experiments using several RDF benchmarks show th at TurboHOM++ consistently and significantly outperforms the representative RDF engines. Specifically, TurboHOM++ outperforms its competitors by up tofiveorders of magnitude."}
{"_id": "05743d71dbc6254377040df77db0f7e135a659e8", "title": "Identifying and Accounting for Task-Dependent Bias in Crowdsourcing", "text": "Models for aggregating contributions by crowd workers have been shown to be challenged by the rise of taskspecific biases or errors. Task-dependent errors in assessment may shift the majority opinion of even large numbers of workers to an incorrect answer. We introduce and evaluate probabilistic models that can detect and correct task-dependent bias automatically. First, we show how to build and use probabilistic graphical models for jointly modeling task features, workers\u2019 biases, worker contributions and ground truth answers of tasks so that task-dependent bias can be corrected. Second, we show how the approach can perform a type of transfer learning among workers to address the issue of annotation sparsity. We evaluate the models with varying complexity on a large data set collected from a citizen science project and show that the models are effective at correcting the task-dependent worker bias. Finally, we investigate the use of active learning to guide the acquisition of expert assessments to enable automatic detection and correction of worker bias."}
{"_id": "517596b6d40b66bf77d5bd0d7ca67e80139dc9bc", "title": "A soft robotic approach to robust and dexterous grasping", "text": "In this paper, we present a compliant robotic gripper, Edgy-2, with 4-DOF dexterity, enabling four grasping modes: parallel grasping, power grasping, finger-tip pinch and fully-actuated grasping. The roboticfinger is based on soft-rigid-hybrid structures, combining fiber-reinforced soft pneumatic actuators with rigid joints, which exhibit reliable structural rigidity and grasping robustness while maintaining excellent adaptability and inherent compliance. With both grasping dexterity and grasping robustness, the Edgy-2 achieves excellent grasping reliability with various daily objects, from a fragile cherry to a 2 kg water bottled water. The relationship of design parameters and grasping strength is presented with analytical models. The performance of a prototype Edgy-2 is verified by dedicated experiments. The proposed hybrid dexterous grasping approach can be easily extended into different end-effector designs according to application requirements. The proposed mechanism for grasping provides excellent human-robot interaction safety and reliability."}
{"_id": "58c7c61a9268a3d01d378f5f7944cf75c10d1e4d", "title": "Which NoSQL Database? A Performance Overview", "text": "NoSQL data stores are widely used to store and retrieve possibly large amounts of data, typically in a key-value format. There are many NoSQL types with different performances, and thus it is important to compare them in terms of performance and verify how the performance is related to the database type. In this paper, we evaluate five most popular NoSQL databases: Cassandra, HBase, MongoDB, OrientDB and Redis. We compare those databases in terms of query performance, based on reads and updates, taking into consideration the typical workloads, as represented by the Yahoo! Cloud Serving Benchmark. This comparison allows users to choose the most appropriate database according to the specific mechanisms and application needs. TYPE OF PAPER AND"}
{"_id": "4637412ab649b2bb407be29b49bb6b0f9784a432", "title": "Pattern Recognition", "text": "In this paper a novel reconstruction method is presented that uses the topological relationship of detected image features to create a highly abstract but semantically rich 3D model of the reconstructed scenes. In the first step, a combined image-based reconstruction of points and lines is performed based on the current state of art structure from motion methods. Subsequently, connected planar threedimensional structures are reconstructed by a novel method that uses the topological relationships between the detected image features. The reconstructed 3D models enable a simple extraction of geometric shapes, such as rectangles, in the scene."}
{"_id": "6a307cf51517f6c97d855639129afd002634ab92", "title": "SD-Map - A Fast Algorithm for Exhaustive Subgroup Discovery", "text": "In this paper we present the novel SD-Map algorithm for exhaustive but efficient subgroup discovery. SD-Map guarantees to identify all interesting subgroup patterns contained in a data set, in contrast to heuristic or samplingbased methods. The SD-Map algorithm utilizes the well-known FP-growth method for mining association rules with adaptations for the subgroup discovery task. We show how SD-Map can handle missing values, and provide an experimental evaluation of the performance of the algorithm using synthetic data."}
{"_id": "8d06b2be191f8ea5e5db65ca1138566680e703f0", "title": "Demystifying Learning Analytics in Personalised Learning", "text": "This paper presents learning analytics as a mean to improve students\u2019 learning. Most learning analytics tools are developed by in-house individual educational institutions to meet the specific needs of their students. Learning analytics is defined as a way to measure, collect, analyse and report data about learners and their context, for the purpose of understanding and optimizing learning. The paper concludes by highlighting framework of learning analytics in order to improve personalised learning. In addition, it is an endeavour to define the characterising features that represents the relationship between learning analytics and personalised learning environment. The paper proposes that learning analytics is dependent on personalised approach for both educators and students. From a learning perspective, students can be supported with specific learning process and reflection visualisation that compares their respective performances to the overall performance of a course. Furthermore, the learners may be provided with personalised recommendations for suitable learning resources, learning paths, or peer students through recommending system. The paper\u2019s contribution to knowledge is in considering personalised learning within the context framework of learning analytics."}
{"_id": "dc0d5d186b0a0866f0f7c3a3ad4151f96d0f1f25", "title": "Credit card incidents and control systems", "text": "Credit and debit cards have spread and skyrocketed all around the world to become the most popular means of payments in many countries. Despite their enormous popularity, cards are not free of risk. Technology development and e-commerce have exponentially increased internal credit card incidents. This paper identifies and quantifies the different types of credit card fraud and puts into question the effectiveness of the role assigned to cardholders in its detection. \u00a9 2012 Elsevier Ltd. All rights reserved."}
{"_id": "de2447a25012c71ad316487b4b9e7378a4fcccc0", "title": "A unified view of gradient-based attribution methods for Deep Neural Networks", "text": "Understanding the flow of information in Deep Neural Networks (DNNs) is a challenging problem that has gain increasing attention over the last few years. While several methods have been proposed to explain network predictions, only few attempts to analyze them from a theoretical perspective have been made in the past. In this work we analyze various state-of-the-art attribution methods and prove unexplored connections between them. We also show how some methods can be reformulated and more conveniently implemented. Finally, we perform an empirical evaluation with six attribution methods on a variety of tasks and architectures and discuss their strengths and limitations."}
{"_id": "b830796a40eafc15719823365cab73e9cf1ca9cb", "title": "Cyclic deadlock prediction and avoidance for zone-controlled AGV system", "text": "Automated guided vehicles (AGVs) are now becoming popular in automated materials handling systems, flexible manufacturing systems and even containers handling applications at seaports. Its performance affects the efficiency of the entire system. Deadlock formation is a serious problem as it stalls the AGVS. The objective of this paper is to develop an efficient AGV deadlock prediction and avoidance algorithm for container port operations. Deadlock in a broad sense is a situation in which at least a part of the system stalls. There are a lot of situations in which the system may stall and most of these situations can be avoided through the control and navigation system. This paper discusses the development of an efficient strategy for predicting and avoiding the deadlocks in a large scale AGVS A variety of deadlock-detecting algorithms are available, but these methods work mainly for manufacturing system where the network layout is simple and uses only a small number of AGVs. The AGVS under study has complex layout and involves close to 80 AGVs. The proposed solution is implemented using the Automod simulation software, and the performance of the technique is presented. Simulation shows that all the potential deadlock situations can be detected and avoided via this methodology. Also, the proposed strategy is computationally efficient and is able to provide the movement decision to the vehicles within the 1.5-s sampling time. r 2002 Published by Elsevier Science B.V."}
{"_id": "e213d04220b1b880d8bb61a60d36990c781395f3", "title": "The Deep Ritz method: A deep learning-based numerical algorithm for solving variational problems", "text": "We propose a deep learning based method, the Deep Ritz Method, for numerically solving variational problems, particularly the ones that arise from partial differential equations. The Deep Ritz method is naturally nonlinear, naturally adaptive and has the potential to work in rather high dimensions. The framework is quite simple and fits well with the stochastic gradient descent method used in deep learning. We illustrate the method on several problems including some eigenvalue problems."}
{"_id": "f6aef8ae043c3c4aa3f25a6fa4cdd5dcb8026b15", "title": "The effect of music background on the emotional appraisal of film sequences", "text": "In this study the effects of musical background on the emotional appraisal of film sequences was investigated. Four pairs of polar emotions defined in Plutchik\u2019s model were used as basic emotional qualities: joy-sadness, anticipation-surprise, fear-anger, and trustdisgust. In the preliminary study eight film sequences and eight music themes were selected as the best representatives of all eight Plutchik\u2019s emotions. In the main experiment the participant judged the emotional qualities of film-music combinations on eight seven-point scales. Half of the combinations were congruent (e.g. joyful film joyful music), and half were incongruent (e.g. joyful film sad music). Results have shown that visual information (film) had greater effects on the emotion appraisal than auditory information (music). The modulation effects of music background depend on emotional qualities. In some incongruent combinations (joysadness) the modulations in the expected directions were obtained (e.g. joyful music reduces the sadness of a sad film), in some cases (anger-fear) no modulation effects were obtained, and in some cases (trust-disgust, anticipation-surprise) the modulation effects were in an unexpected direction (e.g. trustful music increased the appraisal of disgust of a disgusting film). These results suggest that the appraisals of conjoint effects of emotions depend on the medium (film masks the music) and emotional quality (three types of modulation effects)."}
{"_id": "85701860924815edfe13f2533ab3cd4c00648159", "title": "Dynamic surface self-reconstruction is the key of highly active perovskite nano-electrocatalysts for water splitting.", "text": "The growing need to store increasing amounts of renewable energy has recently triggered substantial R&D efforts towards efficient and stable water electrolysis technologies. The oxygen evolution reaction (OER) occurring at the electrolyser anode is central to the development of a clean, reliable and emission-free hydrogen economy. The development of robust and highly active anode materials for OER is therefore a great challenge and has been the main focus of research. Among potential candidates, perovskites have emerged as promising OER electrocatalysts. In this study, by combining a scalable cutting-edge synthesis method with time-resolved X-ray absorption spectroscopy measurements, we were able to capture the dynamic local electronic and geometric structure during realistic operando conditions for highly active OER perovskite nanocatalysts. Ba0.5Sr0.5Co0.8Fe0.2O3-\u03b4 as nano-powder displays unique features that allow a dynamic self-reconstruction of the material's surface during OER, that is, the growth of a self-assembled metal oxy(hydroxide) active layer. Therefore, besides showing outstanding performance at both the laboratory and industrial scale, we provide a fundamental understanding of the operando OER mechanism for highly active perovskite catalysts. This understanding significantly differs from design principles based on ex situ characterization techniques."}
{"_id": "920314057cce832e864af4a4455b08e132b5d3e2", "title": "Applying multi-agent technique in multi-section flexible manufacturing system", "text": "In the highly competitive market, cooperative multi-agent transaction and negotiation mechanism have become an important research topic. This paper uses multi-agent technology to construct a multi-section flexible manufacturing system (FMS) model, and utilizes simulation to build a manufacturing environment based on JADE framework for multi-agent to combine with dispatching rules, such as shortest imminent processing time (SIPT), first come first serve (FCFS) earliest due date (EDD), and Buffer Sequence. This paper finds that using multi-agent technique for multi-section FMS model can enhance the production efficiency in practice. Meanwhile, in this study, multi-agent systems combined with dynamic dispatching can be used to identify the best dispatching rules combination for achieving largest throughput, and thus it can provide the reference for production scheduling in advance. 2010 Elsevier Ltd. All rights reserved."}
{"_id": "e72bef91e694b03492c350c0a938a750e7b362ba", "title": "Providing Research Graph Data in JSON-LD Using Schema.org", "text": "In this position paper, we describe a pilot project that provides Research Graph records to external web services using JSON-LD. The Research Graph database contains a largescale graph that links research datasets (i.e., data used to support research) to funding records (i.e. grants), publications and researcher records such as ORCID profiles. This database was derived from the work of the Research Data Alliance Working Group on Data Description Registry Interoperability (DDRI), and curated using the Research Data Switchboard open source software. By being available in Linked Data format, the Research Graph database is more accessible to third-party web services over the Internet, which thus opens the opportunity to connect to the rest of the world in the semantic format. The primary purpose of this pilot project is to evaluate the feasibility of converting registry objects in Research Graph to JSON-LD by accessing widely used vocabularies published at Schema.org. In this paper, we provide examples of publications, datasets and grants from international research institutions such as CERN INSPIREHEP, National Computational Infrastructure (NCI) in Australia, and Australian Research Council (ARC). Furthermore, we show how these Research Graph records are made semantically available as Linked Data through using Schema.org. The mapping between Research Graph schema and Schema.org is available on GitHub repository. We also discuss the po\u00a9c 2017 International World Wide Web Conference Committee (IW3C2), published under Creative Commons CC BY 4.0 License. WWW 2017 Companion, April 3\u20137, 2017, Perth, Australia. ACM 978-1-4503-4914-7/17/04. http://dx.doi.org/10.1145/3038912.3038914 . tential need for an extension to Schema.org vocabulary for scholarly communication."}
{"_id": "c468404dc5175915ab316d73b70128b0bbfd2ef7", "title": "An Improved Speech Enhancement Method based on Teager Energy Operator and Perceptual Wavelet Packet Decomposition", "text": "According to the distribution characteristic of noise and clean speech signal in the frequency domain, a new speech enhancement method based on teager energy operator (TEO) and perceptual wavelet packet decomposition (PWPD) is proposed. Firstly, a modified Mask construction method is made to protect the acoustic cues at the low frequencies. Then a level-dependent parameter is introduced to further adjust the thresholds in light of the noise distribution feature. At last the sub-bands which have very little influence are set directly 0 to improve the signal-to-noise ratio (SNR) and reduce the computation load. Simulation results show that, under different kinds of noise environments, this new method not only enhances the signal-to-noise ratio (SNR) and perceptual evaluation of speech quality (PESQ), but also reduces the computation load, which is very advantageous for real-time realizing."}
{"_id": "5cfc936d12bbd8a0f100687b12b20e406215f30a", "title": "Unikernels: library operating systems for the cloud", "text": "We present unikernels, a new approach to deploying cloud services via applications written in high-level source code. Unikernels are single-purpose appliances that are compile-time specialised into standalone kernels, and sealed against modification when deployed to a cloud platform. In return they offer significant reduction in image sizes, improved efficiency and security, and should reduce operational costs. Our Mirage prototype compiles OCaml code into unikernels that run on commodity clouds and offer an order of magnitude reduction in code size without significant performance penalty. The architecture combines static type-safety with a single address-space layout that can be made immutable via a hypervisor extension. Mirage contributes a suite of type-safe protocol libraries, and our results demonstrate that the hypervisor is a platform that overcomes the hardware compatibility issues that have made past library operating systems impractical to deploy in the real-world."}
{"_id": "795347ba7148a0762714e7b65c4fa6ed09d418eb", "title": "A Taxonomy of Attacks in RPL-based Internet of Things", "text": "The growing interest for the Internet of Things is contributing to the large-scale deployment of Low power and Lossy Networks (LLN). These networks support communications amongst objects from the real world, such as home automation devices and embedded sensors, and their interconnection to the Internet. An open standard routing protocol, called RPL, has been specified by the IETF in order to address the specific properties and constraints of these networks. However, this protocol is exposed to a large variety of attacks. Their consequences can be quite significant in terms of network performance and resources. In this paper, we propose to establish a taxonomy of the attacks against this protocol, considering three main categories including attacks targeting network resources, attacks modifying the network topology and attacks related to network traffic. We describe these attacks, analyze and compare their properties, discuss existing counter-measures and their usage from a risk management perspective."}
{"_id": "58232840b9db3e471692f40081f86e34e8a9288a", "title": "Geostatistical Space-Time Models , Stationarity , Separability , and Full Symmetry", "text": "4."}
{"_id": "8ff0c546aff84566635f0a9a2e01feb3d6588c1c", "title": "Who Invented the Reverse Mode of Differentiation ?", "text": "Nick Trefethen [13] listed automatic differentiation as one of the 30 great numerical algorithms of the last century. He kindly credited the present author with facilitating the rebirth of the key idea, namely the reverse mode. In fact, there have been many incarnations of this reversal technique, which has been suggested by several people from various fields since the late 1960s, if not earlier. Seppo Linnainmaa (Lin76) of Helsinki says the idea came to him on a sunny afternoon in a Copenhagen park in 1970. He used it as a tool for estimating the effects of arithmetic rounding errors on the results of complex expressions. Gerardi Ostrowski (OVB71) discovered and used it some five years earlier in the context of certain process models in chemical engineering. Here and throughout references that are not listed in the present bibliography are noted in parentheses and can be found in the book [7]. Also in the sixties Hachtel et al. [6] considered the optimization of electronic circuits using the costate equation of initial value problems and its discretizations to compute gradients in the reverse mode for explicitly time-dependent problems. Here we see, possibly for the first time, the close connection between the reverse mode of discrete evaluation procedures and continuous adjoints of differential equations. In the 1970s Iri analyzed the properties of dual and adjoint networks. In the 1980s he became one of the key researchers on the reverse mode. From a memory and numerical stability point of view the most difficult aspect of the reverse mode is the reversal of a program. This problem was discussed in the context of Turing Machines by Benett (Ben73), who foreshadowed the use of checkpointing as a tradeoff between numerical computational effort and memory requirement."}
{"_id": "f1613bc4a40a987f67eefe60c617a360c6d0d1e8", "title": "Erasure coded storage systems for cloud storage \u2014 challenges and opportunities", "text": "Erasure coded storage schemes offer a promising future for cloud storage. Highlights of erasure coded storage systems are that these offer the same level of fault tolerance as that of replication, at lower storage footprints. In the big data era, cloud storage systems based on data replication are of dubious usability due to 200% storage overhead in data replication systems. This has prompted storage service providers to use erasure coded storage as an alternative to replication. Refinements are required in various aspects of erasure coded storage systems to make it a real contender against data replication based storage systems. Streamlining huge bandwidth requirements during the recovery of failed nodes, inefficient update operations, effect of topology in recovery and consistency requirements of erasure coded storage systems, are some areas which need attention. This paper presents an in-depth study on the challenges faced, and research pursued in some of these areas. The survey shows that more research is required to improve erasure coded storage system from being bandwidth crunchers to efficient storage systems. Another challenge that has emerged from the study is the requirement of elaborate research for upgrading the erasure coded storage systems from being mere archival storage systems by providing better update methods. Provision of multiple level consistency in erasure coded storage is yet another research opportunity identified in this work. A brief introduction to open source libraries available for erasure coded storage is also presented in the paper."}
{"_id": "12005218a4b5841dd27d75ab88079e7421a56846", "title": "Big data and cloud computing: current state and future opportunities", "text": "Scalable database management systems (DBMS)---both for update intensive application workloads as well as decision support systems for descriptive and deep analytics---are a critical part of the cloud infrastructure and play an important role in ensuring the smooth transition of applications from the traditional enterprise infrastructures to next generation cloud infrastructures. Though scalable data management has been a vision for more than three decades and much research has focussed on large scale data management in traditional enterprise setting, cloud computing brings its own set of novel challenges that must be addressed to ensure the success of data management solutions in the cloud environment. This tutorial presents an organized picture of the challenges faced by application developers and DBMS designers in developing and deploying internet scale applications. Our background study encompasses both classes of systems: (i) for supporting update heavy applications, and (ii) for ad-hoc analytics and decision support. We then focus on providing an in-depth analysis of systems for supporting update intensive web-applications and provide a survey of the state-of-the-art in this domain. We crystallize the design choices made by some successful systems large scale database management systems, analyze the application demands and access patterns, and enumerate the desiderata for a cloud-bound DBMS."}
{"_id": "19a74d3c08f4fcdbcc367c80992178e93f1ee3bd", "title": "Partial-parallel-repair (PPR): a distributed technique for repairing erasure coded storage", "text": "With the explosion of data in applications all around us, erasure coded storage has emerged as an attractive alternative to replication because even with significantly lower storage overhead, they provide better reliability against data loss. Reed-Solomon code is the most widely used erasure code because it provides maximum reliability for a given storage overhead and is flexible in the choice of coding parameters that determine the achievable reliability. However, reconstruction time for unavailable data becomes prohibitively long mainly because of network bottlenecks. Some proposed solutions either use additional storage or limit the coding parameters that can be used. In this paper, we propose a novel distributed reconstruction technique, called Partial Parallel Repair (PPR), which divides the reconstruction operation to small partial operations and schedules them on multiple nodes already involved in the data reconstruction. Then a distributed protocol progressively combines these partial results to reconstruct the unavailable data blocks and this technique reduces the network pressure. Theoretically, our technique can complete the network transfer in \u2308(log2(k + 1))\u2309 time, compared to k time needed for a (k, m) Reed-Solomon code. Our experiments show that PPR reduces repair time and degraded read time significantly. Moreover, our technique is compatible with existing erasure codes and does not require any additional storage overhead. We demonstrate this by overlaying PPR on top of two prior schemes, Local Reconstruction Code and Rotated Reed-Solomon code, to gain additional savings in reconstruction time."}
{"_id": "434eb48cfa6e631c9b04b793ca2e827a2d29dc97", "title": "Proliferation, cell cycle and apoptosis in cancer", "text": "Beneath the complexity and idiopathy of every cancer lies a limited number of 'mission critical' events that have propelled the tumour cell and its progeny into uncontrolled expansion and invasion. One of these is deregulated cell proliferation, which, together with the obligate compensatory suppression of apoptosis needed to support it, provides a minimal 'platform' necessary to support further neoplastic progression. Adroit targeting of these critical events should have potent and specific therapeutic consequences."}
{"_id": "6f2fe472de38ad0995923668aa8e1abe19084d09", "title": "Performance Analysis of Engineering Students for Recruitment Using Classification Data Mining Techniques", "text": "-Data Mining is a powerful tool for academic intervention. Mining in education environment is called Educational Data Mining. Educational Data Mining is concerned with developing new methods to discover knowledge from educational database and can used for decision making in educational system. In our work, we collected the student\u2019s data from engineering institute that have different information about their previous and current academics records like students S.No., Name, Branch, 10, 12 , B.Tech passing percentage and final grade & then apply different classification algorithm using Data Mining tools (WEKA) for analysis the students academics performance for Training & placement department or company executives. This paper deals with a comparative study of various classification data mining algorithms for the performance analysis of the student\u2019s academic records and check which algorithm is optimal for classifying students\u2019 based on their final grade. This analysis also classifies the performance of Students into Excellent, Good and Average categories. Keywords\u2013 Data Mining, Discover knowledge, Technical Education, Educational Data, Mining, Classification, WEKA, Classifiers."}
{"_id": "4e7a9af1beee401b89a0b66bf881df0abf611263", "title": "A novel integrated dielectric-and-conductive ink 3D printing technique for fabrication of microwave devices", "text": "In this work a novel combination of printed electronics using conductive nanoparticle ink together with 3D printing of dielectric material is presented as one integrated process. This allows fabrication of complicated 3D electromagnetic (EM) structures such as wide range of different antennas and microwave devices to simultaneously include printing of conductive nanoparticle ink within 3D dielectric configurations. Characterization of conductive ink and polymer based substrate has been done to ensure proper RF, electric, thermal and mechanical performance of both substrate and the ink. A meander line dipole antenna on a V-shaped substrate is printed and tested to demonstrate the efficiency and accuracy of proposed technique. The goal in this paper is to provide a low cost, environmentally friendly integrated process for the fabrication of geometrically complicated 3D electromagnetic structures."}
{"_id": "4e15a268586023e6e6ba6a49eedd4a33e98d531d", "title": "Transcriptome-wide Identification of RNA-Binding Protein and MicroRNA Target Sites by PAR-CLIP", "text": "RNA transcripts are subject to posttranscriptional gene regulation involving hundreds of RNA-binding proteins (RBPs) and microRNA-containing ribonucleoprotein complexes (miRNPs) expressed in a cell-type dependent fashion. We developed a cell-based crosslinking approach to determine at high resolution and transcriptome-wide the binding sites of cellular RBPs and miRNPs. The crosslinked sites are revealed by thymidine to cytidine transitions in the cDNAs prepared from immunopurified RNPs of 4-thiouridine-treated cells. We determined the binding sites and regulatory consequences for several intensely studied RBPs and miRNPs, including PUM2, QKI, IGF2BP1-3, AGO/EIF2C1-4 and TNRC6A-C. Our study revealed that these factors bind thousands of sites containing defined sequence motifs and have distinct preferences for exonic versus intronic or coding versus untranslated transcript regions. The precise mapping of binding sites across the transcriptome will be critical to the interpretation of the rapidly emerging data on genetic variation between individuals and how these variations contribute to complex genetic diseases."}
{"_id": "9a76fd4c97aa984b1b70a0fe99810706a226ab41", "title": "Revised Estimates for the Number of Human and Bacteria Cells in the Body", "text": "Reported values in the literature on the number of cells in the body differ by orders of magnitude and are very seldom supported by any measurements or calculations. Here, we integrate the most up-to-date information on the number of human and bacterial cells in the body. We estimate the total number of bacteria in the 70 kg \"reference man\" to be 3.8\u00b71013. For human cells, we identify the dominant role of the hematopoietic lineage to the total count (\u224890%) and revise past estimates to 3.0\u00b71013 human cells. Our analysis also updates the widely-cited 10:1 ratio, showing that the number of bacteria in the body is actually of the same order as the number of human cells, and their total mass is about 0.2 kg."}
{"_id": "9ca05b6624798e02f1f4a62494b60d8478937914", "title": "A New Technique to Design Circularly Polarized Microstrip Antenna by Fractal Defected Ground Structure", "text": "A new technique to design single-feed circularly polarized (CP) microstrip antenna is proposed. The CP radiation is obtained by adjusting the dimension of the etched fractal defected ground structure (FDGS) in the ground plane. Parameter studies of the FDGS are given to illustrate the way to achieve CP radiation. The CP microstrip antennas with the second and third iterative FDGS are fabricated and measured. The measured 10-dB return-loss bandwidth of the CP microstrip antenna is about 30 MHz (1.558 to 1.588 GHz), while its 3-dB axial-ratio bandwidth is 6 MHz (1.572 to 1.578 GHz). The gain across the CP band is between 1.7 and 2.2 dBic."}
{"_id": "64b33b7fa675d2f1f19767e35e9857e941e33c78", "title": "The Aneka platform and QoS-driven resource provisioning for elastic applications on hybrid Clouds", "text": "Cloud computing alters the way traditional software systems are built and run by introducing a utilitybased model for delivering IT infrastructure, platforms, applications, and services. The consolidation of this new paradigm in both enterprises and academia demanded reconsideration in the way IT resources are used, so Cloud computing can be used together with available resources. A case for the utilization of Clouds for increasing the capacity of computing infrastructures is Desktop Grids: these infrastructures typically provide best effort execution of high throughput jobs and other workloads that fit the model of the platform. By enhancing Desktop Grid infrastructures with Cloud resources, it is possible to offer QoS to users, motivating the adoption of Desktop Grids as a viable platform for application execution. In this paper,we describe howAneka, a platform for developing scalable applications on the Cloud, supports such a vision by provisioning resources fromdifferent sources and supporting different applicationmodels.We highlight the key concepts and features of Aneka that support the integration between Desktop Grids and Clouds and present an experiment showing the performance of this integration. \u00a9 2011 Elsevier B.V. All rights reserved."}
{"_id": "d90a33079a401fcc01d31c7f88ffaa7c51fc5c92", "title": "Model predictive path-following control of an A.R. drone quadrotor", "text": "This paper addresses the design and implementation of the Extended Prediction Self-Adaptive Control (EPSAC) approach to Model Predictive Control (MPC) for path-following. Special attention is paid to the closed-loop performance in order to achieve a fast response without overshoot, which are the necessary conditions to ensure the desired tracking performance in confined or indoor environments. The performance of the proposed MPC strategy is compared to the one achieved using PD controllers. Experimental results using the low-cost quadrotor AR.Drone 2.0 validate the reliability of the proposed strategy for 2D and 3D movements."}
{"_id": "435e2f8a16bccedc8d6728b5ead441998c889111", "title": "Mutual Information, Fisher Information, and Population Coding", "text": "In the context of parameter estimation and model selection, it is only quite recently that a direct link between the Fisher information and information-theoretic quantities has been exhibited. We give an interpretation of this link within the standard framework of information theory. We show that in the context of population coding, the mutual information between the activity of a large array of neurons and a stimulus to which the neurons are tuned is naturally related to the Fisher information. In the light of this result, we consider the optimization of the tuning curves parameters in the case of neurons responding to a stimulus represented by an angular variable."}
{"_id": "bb4b1f3b7cc1a9a1753c000bce081f0e909a3f55", "title": "Novel Concept of the Three-Dimensional Vertical FG nand Flash Memory Using the Separated-Sidewall Control Gate", "text": "Recently, we proposed a novel 3-D vertical floating gate (FG)-type nand Flash memory cell array using the separated-sidewall control gate (CG) (S-SCG). This novel cell consists of one cylindrical FG with line-type CG and S-SCG structures. For simplifying the process flow, we realized the common S-SCG lines by using the prestacked polysilicon layer, through which variable medium voltages are applied not only to control the electrically inverted S/D region but also to assist the program and erase operations. In this paper, we successfully demonstrate the normal Flash cell operation and show its superior performances in comparison with the recent 3-D FG nand cells by using the cylindrical device simulation. It is shown that the proposed cell can realize the highest CG coupling ratio, low-voltage cell operations of program with 15 V at Vth = 4V and erase with 14 V at Vth = -3V , good retention-mode electric field, and sufficient read-mode on-current margin. Moreover, the proposed S-SCG cell array can fully suppress both the interference effects and the disturbance problems at the same time by removing the direct coupling effects in the same cell string, which are the most critical problems of the recent 3-D vertical stacked cell structures. Above all, the proposed cell array has good potential for terabit 3-D vertical nand Flash cell array with highly reliable multilevel cell operation."}
{"_id": "867c9ff1c4801041b0c069c116294f1d71b98b38", "title": "\"Preventative\" vs. \"Reactive\": How Parental Mediation Influences Teens' Social Media Privacy Behaviors", "text": "Through an empirical, secondary analysis of 588 teens (ages 12 - 17) and one of their parents living in the USA, we present useful insights into how parental privacy concerns for their teens and different parental mediation strategies (direct intervention versus active mediation) influence teen privacy concerns and privacy risk-taking and risk-coping privacy behaviors in social media. Our results suggest that the use of direct intervention by itself may have a suppressive effect on teens, reducing their exposure to online risks but also their ability to engage with others online and to learn how to effectively cope with online risks. Therefore, it may be beneficial for parents to combine active mediation with direct intervention so that they can protect their teens from severe online risks while empowering teens to engage with others online and learn to make good online privacy choices."}
{"_id": "7bcbe5d58352ffc97123526a8fe7188c6185dc2d", "title": "The Adult Sensory Profile: measuring patterns of sensory processing.", "text": "OBJECTIVE\nThis article describes a series of studies designed to evaluate the reliability and validity of the Adult Sensory Profile.\n\n\nMETHOD\nExpert judges evaluated the construct validity of the items. Coefficient alpha, factor analysis, and correlations of items with subscales determined item reliability, using data from 615 adult sensory profiles. A subsample of 20 adults furnished skin conductance data. A heterogeneous group of 93 adults completed the revised Adult Sensory Profile, and item reliability was reexamined.\n\n\nRESULTS\nExpert judgment indicated that items could be categorized according to Dunn's Model of Sensory Processing. Results suggested reasonable item reliability for all subscales except for the Sensation Avoiding subscale. Skin conductance measures detected distinct patterns of physiological responses consistent with the four-quadrant model. Revision of the Adult Sensory Profile resulted in improved reliability of the Sensation Avoiding subscale.\n\n\nCONCLUSION\nThe series of studies provides evidence to support the four subscales of the Adult Sensory Profile as distinct constructs of sensory processing preferences."}
{"_id": "dca6b4b7cc03e565481e4d8fb255d079c2e61cbb", "title": "The core role of the nurse practitioner: practice, professionalism and clinical leadership.", "text": "AIM\nTo draw on empirical evidence to illustrate the core role of nurse practitioners in Australia and New Zealand.\n\n\nBACKGROUND\nEnacted legislation provides for mutual recognition of qualifications, including nursing, between New Zealand and Australia. As the nurse practitioner role is relatively new in both countries, there is no consistency in role expectation and hence mutual recognition has not yet been applied to nurse practitioners. A study jointly commissioned by both countries' Regulatory Boards developed information on the core role of the nurse practitioner, to develop shared competency and educational standards. Reporting on this study's process and outcomes provides insights that are relevant both locally and internationally.\n\n\nMETHOD\nThis interpretive study used multiple data sources, including published and grey literature, policy documents, nurse practitioner program curricula and interviews with 15 nurse practitioners from the two countries. Data were analysed according to the appropriate standard for each data type and included both deductive and inductive methods. The data were aggregated thematically according to patterns within and across the interview and material data.\n\n\nFINDINGS\nThe core role of the nurse practitioner was identified as having three components: dynamic practice, professional efficacy and clinical leadership. Nurse practitioner practice is dynamic and involves the application of high level clinical knowledge and skills in a wide range of contexts. The nurse practitioner demonstrates professional efficacy, enhanced by an extended range of autonomy that includes legislated privileges. The nurse practitioner is a clinical leader with a readiness and an obligation to advocate for their client base and their profession at the systems level of health care.\n\n\nCONCLUSION\nA clearly articulated and research informed description of the core role of the nurse practitioner provides the basis for development of educational and practice competency standards. These research findings provide new perspectives to inform the international debate about this extended level of nursing practice.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nThe findings from this research have the potential to achieve a standardised approach and internationally consistent nomenclature for the nurse practitioner role."}
{"_id": "e151ed990085a41668a88545da12daeeaac11e74", "title": "Event-based Camera Pose Tracking using a Generative Event Model", "text": "Event-based vision sensors mimic the operation of biological retina and they represent a major paradigm shift from traditional cameras. Instead of providing frames of intensity measurements synchronously, at artificially chosen rates, event-based cameras provide information on brightness changes asynchronously, when they occur. Such non-redundant pieces of information are called \u201cevents\u201d. These sensors overcome some of the limitations of traditional cameras (response time, bandwidth and dynamic range) but require new methods to deal with the data they output. We tackle the problem of event-based camera localization in a known environment, without additional sensing, using a probabilistic generative event model in a Bayesian filtering framework. Our main contribution is the design of the likelihood function used in the filter to process the observed events. Based on the physical characteristics of the sensor and on empirical evidence of the Gaussian-like distribution of spiked events with respect to the brightness change, we propose to use the contrast residual as a measure of how well the estimated pose of the event-based camera and the environment explain the observed events. The filter allows for localization in the general case of six degrees-of-freedom motions."}
{"_id": "2a90dfe4736817866f7f388f5c3750222a9a261f", "title": "Zero quiescent current, delay adjustable, power-on-reset circuit", "text": "A power-on-reset (POR) circuit is extremely important in digital and mixed-signal ICs. It is used to initialize critical nodes of digital circuitry inside the chip during power-on. A novel POR circuit having zero quiescent current and an adjustable delay is proposed and demonstrated. The circuit has been designed in 0.5\u03bcm CMOS process to work for a supply voltage ranging from 1.8V to 5.5V. The circuit generates a POR signal after a predetermined delay, after the supply voltage crosses a predefined threshold voltage. This delay can be increased or decreased via programmable fuses. The POR threshold does not depend upon the supply ramp rate. The Brown-Out (BO) voltage for the proposed circuit matches the minimum supply voltage required by the digital circuitry inside the IC. The circuit consumes zero steady state current, making it ideal for low power applications."}
{"_id": "26747fa3e6b3f53e6d5bcd40076f46054a4f9334", "title": "Spatial Partitioning Techniques in Spatial Hadoop", "text": "SpatialHadoop is an extended MapReduce framework that supp orts global indexing that spatial partitions the data across machines providing orders of magnitude speedup, compared to traditi onal Hadoop. In this paper, we describe seven alternative partit ioning techniques and experimentally study their effect on the qua lity of the generated index and the performance of range and spatial join queries. We found that using a 1% sample is enough to produce high quality partitions. Also, we found that the total area o f partitions is a reasonable measure of the quality of indexes when r unning spatial join. This study will assist researchers in cho osing a good spatial partitioning technique in distributed enviro nments. 1. INDEXING IN SPATIALHADOOP SpatialHadoop [2, 3] provides a generic indexing algorithm which was used to implement grid, R-tree, and R+-tree based p artitioning. This paper extends our previous study by introduci ng four new partitioning techniques, Z-curve, Hilbert curve, Quad tree, and K-d tree, and experimentally evaluate all of the seven techn iques. The partitioning phase of the indexing algorithm runs in thr ee steps, where the first step is fixed and the last two steps are customiz ed for each partitioning technique. The first step computes number of desired partitionsn based on file size and HDFS block capacity which are both fixed for all partitioning techniques. The second st ep reads a random sample, with a sampling ratio \u03c1, from the input file and uses this sample to partition the space into cells such that number of sample points in each cell is at most \u230ak/n\u230b, wherek is the sample size. The third step actually partitions the file by as signing each record to one or more cells. Boundary objects are handle d using either thedistribution or replication methods. Thedistribution method assigns an object to exactly one overlapping cell and the cell has to be expanded to enclose all contained records. The replication method avoids expanding cells by replicating each record to all overlapping cells but the query processor has to employ a duplicate avoidance technique to account for replicated records . \u2217This work is supported in part by the National Science Founda tion, USA, under Grants IIS-0952977 and IIS-1218168 and the University of Minnesota Doctoral Disseration Fellowship. This work is licensed under the Creative Commons Attributio nNonCommercial-NoDerivs 3.0 Unported License. To view a cop y f this license, visit http://creativecommons.org/licenses/by-n c-nd/3.0/. Obtain permission prior to any use beyond those covered by the license. Contact copyright holder by emailing info@vldb.org. Articles fromthis volume were invited to present their results at the 41st Internatio l Conference on Very Large Data Bases, August 31st September 4th 2015, Koha la Coast, Hawaii. Proceedings of the VLDB Endowment, Vol. 8, No. 12 Copyright 2015 VLDB Endowment 2150-8097/15/08. 2. EXPERIMENTAL SETUP All experiments run on Amazon EC2 \u2018m1.large\u2019 instances which have a dual core processor, 7.5 GB RAM and 840 GB disk storage. We use Hadoop 1.2.1 running on Java 1.6 and CentOS 6. Each machine is configured to run three mappers and two reduce rs. Tables 1 and 2 summarize the datasets and configuration param eters used in our experiments, respectively. Default parame ters (in parentheses) are used unless otherwise mentioned. 
In the fo llowing part, we describe the partitioning techniques, the quer ies we run, and the performance metrics measured in this paper. 2.1 Partitioning Techniques This paper employsgrid and Quad tree as space partitioning techniques;STR, STR+, and K-d tree as data partitioning techniques; andZ-curve andHilbert curve as space filling curve (SFC) partitioning techniques. These techniques can also be grou ped, according to boundary object handling, into replication-based techniques (i.e., Grid, Quad, STR+, and K-d tree) and distributionbased techniques (i.e., STR, Z-Cruve, and Hilbert). Figure 1 illustrates these techniques, where sample points and partition bou daries are shown as dots and rectangles, respectively. 1. Uniform Grid: This technique does not require a random sample as it divides the input MBR using a uniform grid of \u2308\u221an\u2309 \u00d7 \u2308\u221an\u2309 grid cells and employs the replication method to handle boundary objects. 2. Quad tree: This technique inserts all sample points into a quad tree [6] with node capacity of \u230ak/n\u230b, wherek is the sample size. The boundaries of all leaf nodes are used as cell boundaries. W use the replication method to assign records to cells. 3. STR: This technique bulk loads the random sample into an Rtree using the STR algorithm [8] and the capacity of each node is set to\u230ak/n\u230b. The MBRs of leaf nodes are used as cell boundaries. Boundary objects are handled using the distribution method where it assigns a record r to the cell with maximal overlap. 4. STR+: This technique is similar to the STR technique but it uses the replication method to handle boundary objects. 5. K-d tree: This technique uses the K-d tree [1] partitioning method to partition the space into n cells. It starts with the input MBR as one cell and partitions it n \u2212 1 times to producen cells. Records are assigned to cells using the replication method. 6. Z-curve: This technique sorts the sample points by their order on the Z-curve and partitions the curve into n splits, each containing roughly\u230ak/n\u230b points. It uses the distribution method to assign a recordr to one cell by mapping the center point of its MBR to one of then splits. 7. Hilbert curve: This technique is exactly the same as the Zcurve technique but it uses Hilbert space filling curve which has better spatial properties. (a) Grid (b) Quad Tree (c) STR and STR+ (d) K-d Tree (e) Z-curve (f) Hilbert Curve Figure 1: Partitioning Techniques. Examples of nearly-empty partitions are starred. Name Size Records Average Record Size All Objects 88GB 250M 378 bytes Buildings 25GB 109M 234 bytes Roads 23GB 70M 337 bytes Lakes 8.6GB 9M 1KB Cities 1.4GB 170K 8.4KB"}
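Of the seven techniques in the record above, the Z-curve one is concrete enough to sketch end to end: interleave coordinate bits into a Morton code, sort the sample, cut the sorted codes into n runs of roughly ⌊k/n⌋ points, and use the distribution method to map each record's MBR center to exactly one cell. A minimal Python illustration under those assumptions (function names are hypothetical; SpatialHadoop itself is a Java/MapReduce system):

```python
import bisect
import random

def morton(x, y, bits=16):
    """Z-order (Morton) code: interleave the bits of grid coordinates x, y."""
    code = 0
    for b in range(bits):
        code |= ((x >> b) & 1) << (2 * b) | ((y >> b) & 1) << (2 * b + 1)
    return code

def zcurve_partition(sample, n, bits=16):
    """Step 2 of the indexing algorithm for the Z-curve technique:
    sort sample points by Z-order and return the upper code boundary
    of each of the n splits (~floor(k/n) sample points per split).
    Coordinates are assumed pre-scaled to integers in [0, 2**bits)."""
    codes = sorted(morton(x, y, bits) for x, y in sample)
    k, per_cell = len(codes), max(1, len(codes) // n)
    return [codes[min((i + 1) * per_cell - 1, k - 1)] for i in range(n)]

def assign(center, boundaries, bits=16):
    """Distribution method: map a record's MBR center point to one cell."""
    c = morton(center[0], center[1], bits)
    return min(bisect.bisect_left(boundaries, c), len(boundaries) - 1)

random.seed(0)
sample = [(random.randrange(1 << 16), random.randrange(1 << 16))
          for _ in range(1000)]          # stand-in for the 1% random sample
bounds = zcurve_partition(sample, n=8)
print(assign((32768, 32768), bounds))    # cell id in [0, 8)
```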
{"_id": "059b1bcf315be372c8156e07e941fa6539360cba", "title": "Mobile application for guiding tourist activities: tourist assistant - TAIS", "text": "The paper presents category classification of mobile travel applications accessible at the moment for tourists in application stores for most popular mobile operation systems (Android and iOS). The most interesting category is \"Travel Guides\" that combines \"Information Resources\" and \"Location-Based Services\" category. Authors propose application \"Tourist assistant - TAIS\" that is related to \"Travel Guides\" category and recommends the tourist attractions around. Information about attractions is extracted from different internet sources."}
{"_id": "530867970f5cf19f5115e49292d91ccaae463ca9", "title": "A novel isolated electrolytic capacitor-less single-switch AC-DC offline LED driver with power factor correction", "text": "Conventional AC-DC driver circuits for Light-Emitting Diode (LED) lamps require large output capacitance across the LED load to minimize the low frequency current ripple. This large capacitance is usually achieved by using an electrolytic capacitor, which has a lifetime that is at least two times less than that of a LED device. To match the potential lifetime of the LEDs, a new isolated single switch AC-DC high power factor LED driver without any electrolytic capacitors is proposed in this paper. In the proposed circuit, the energy storage capacitor is moved to the rectifier side, with a three-winding transformer used to provide isolation; power factor correction as well as to store and provide the required energy to the output. As a result, the energy storage capacitance is significantly reduced, which allows film capacitor to replace the unreliable electrolytic capacitors. The circuit's operating principles and its characteristics are described in this paper. Simulation and experimental results confirm that a power factor of 0.96 is achieved on a 120Vrms, 12W prototype."}
{"_id": "58ec8f00eb746abb4adb27d514333fee3c6bcf97", "title": "Lightweight Cryptography for FPGAs", "text": "The advent of new low-power Field Programmable Gate Arrays (FPGA) for battery powered devices opens a host of new applications to FPGAs. In order to provide security on resource constrained devices lightweight cryptographic algorithms have been developed. However, there has not been much research on porting these algorithms to FPGAs. In this paper we propose lightweight cryptography for FPGAs by introducing block cipher independent optimization techniques for Xilinx Spartan3 FPGAs and applying them to the lightweight cryptographic algorithms HIGHT and Present. Our implementations are the first reported of these block ciphers on FPGAs. Furthermore, they are the smallest block cipher implementations on FPGAs using only 117 and 91 slices respectively, which makes them comparable in size to stream cipher implementations. Both are less than half the size of the AES implementation by Chodowiec and Gaj without using block RAMs. Present\u2019s throughput over area ratio of 240 Kbps/slice is similar to that of AES, however, HIGHT outperforms them by far with 720 Kbps/slice."}
{"_id": "1ea6b2f67a3a7f044209aae0d0fd1cb14a1e9e06", "title": "Pixel Recurrent Neural Networks", "text": "Modeling the distribution of natural images is a landmark problem in unsupervised learning. This task requires an image model that is at once expressive, tractable and scalable. We present a deep neural network that sequentially predicts the pixels in an image along the two spatial dimensions. Our method models the discrete probability of the raw pixel values and encodes the complete set of dependencies in the image. Architectural novelties include fast twodimensional recurrent layers and an effective use of residual connections in deep recurrent networks. We achieve log-likelihood scores on natural images that are considerably better than the previous state of the art. Our main results also provide benchmarks on the diverse ImageNet dataset. Samples generated from the model appear crisp, varied and globally coherent."}
{"_id": "5ad0e283c4c2aa7b9985012979835d0131fe73d8", "title": "Realtime Multi-person 2D Pose Estimation Using Part Affinity Fields", "text": "We present an approach to efficiently detect the 2D pose of multiple people in an image. The approach uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. The architecture encodes global context, allowing a greedy bottom-up parsing step that maintains high accuracy while achieving realtime performance, irrespective of the number of people in the image. The architecture is designed to jointly learn part locations and their association via two branches of the same sequential prediction process. Our method placed first in the inaugural COCO 2016 keypoints challenge, and significantly exceeds the previous state-of-the-art result on the MPII Multi-Person benchmark, both in performance and efficiency."}
{"_id": "9871aa511ca7e3c61c083c327063442bc2c411bf", "title": "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks", "text": ""}
{"_id": "ba753286b9e2f32c5d5a7df08571262e257d2e53", "title": "Conditional Generative Adversarial Nets", "text": "Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels."}
{"_id": "99a69a20df596b176bbe5aa748b6864032fa0d17", "title": "Efficiency of low power audio amplifiers and loudspeakers", "text": "In this paper we look at the load presented to audio amplifiers by real transducers. We consider the power losses in Class-AB and Class-D amplifier topologies, and determine that in order to predict efficiency it is necessary to consider the amplifier/transducer combination. The ability of the class-D amplifier to recycle quadrature load current offers new ways to improve efficiency. INTRODUCTION Class-D amplifiers are beginning to become viable alternatives to the Class-AB amplifier because of the reducing cost of suitable devices. There is particular interest in developing them for low power applications where their intrinsic efficiency advantages are important. The operation of Class-D amplifiers is very different to Class-AB amplifiers and this has implications for other components in the audio chain. Conventionally the performance of audio amplifiers is considered using a pure resistive load of four or eight ohms. The origin of this lies in the nominal impedance given to electro-magnetic loudspeakers. The true impedance of an electro-magnetic loudspeaker will vary over the operating frequency range, but the variability between individual real transducers makes the definition of a more realistic \u2018bench-mark\u2019 load impractical. Other audio transducers also have a complex impedance. Transducers which utilise piezo electric elements have a input impedance that is highly reactive. The load presented to the amplifier is almost entirely capacitive. The power output and efficiency of an amplifier are dependent load impedance and so the resistive load performance may not resemble to the operation of the amplifier in real situations. In this paper we will look at the true load presented to amplifiers by the transducer and derive a more accurate measure of overall efficiency. GENERAL AMPLIFIER EFFICIENCY The electrical efficiency of an amplifier is defined as the ratio of the power developed in the load to the power drawn from the DC supply. Using simple linear analysis we can determine the efficiency of amplifier output stages and the dependence of the efficiency on load parameters. In this section the following symbols are used; Vo Output voltage Vs Supply rail voltage RL Load resistance Ibias Class AB quiescent bias current \u03c6 Load phase angle Zload Load impedance L Class D filter inductance Rin Resistance of filter inductor RDson \u2018On\u2019 state resistance of switching devices fs Class D Switching frequency Class-AB resistive case A simple Class-AB output stage is shown in figure 1. A complementary pair of output devices operate over their linear region to amplify the signal. When the devices are operated in the linear region there will be current flowing through them whilst there is a voltage across them, this will give rise to power dissipation, and hence reduce efficiency. The devices also need a quiescent bias to reduce crossover distortion as one device takes over from the other. Load"}
{"_id": "5d1f2136c3fc17b5ed0864cedcfcca5173ffeaa2", "title": "Impact of a contemplative end-of-life training program: being with dying.", "text": "OBJECTIVE\nHealth care professionals report a lack of skills in the psychosocial and spiritual aspects of caring for dying people and high levels of moral distress, grief, and burnout. To address these concerns, the \"Being with Dying: Professional Training Program in Contemplative End-of-Life Care\" (BWD) was created. The premise of BWD, which is based on the development of mindfulness and receptive attention through contemplative practice, is that cultivating stability of mind and emotions enables clinicians to respond to others and themselves with compassion. This article describes the impact of BWD on the participants.\n\n\nMETHODS\nNinety-five BWD participants completed an anonymous online survey; 40 completed a confidential open-ended telephone interview.\n\n\nRESULTS\nFour main themes-the power of presence, cultivating balanced compassion, recognizing grief, and the importance of self-care-emerged in the interviews and were supported in the survey data. The interviewees considered BWD's contemplative and reflective practices meaningful, useful, and valuable and reported that BWD provided skills, attitudes, behaviors, and tools to change how they worked with the dying and bereaved.\n\n\nSIGNIFICANCE OF RESULTS\nThe quality of presence has the potential to transform the care of dying people and the caregivers themselves. Cultivating this quality within themselves and others allows clinicians to explore alternatives to exclusively intellectual, procedural, and task-oriented approaches when caring for dying people. BWD provides a rare opportunity to engage in practices and methods that cultivate the stability of mind and emotions that may facilitate compassionate care of dying patients, families, and caregivers."}
{"_id": "f4bd288ddf6f6830bd760db60e6b991ae8d5d0eb", "title": "The influence of geometrical deviations of electrical machine systems on the signal quality of the variable reluctance resolver", "text": "Variable reluctance resolvers are needed to regulate electrical machines with inverter control. Due to several issues, e.g. discrete windings in the stator and geometric tolerances in the assembly of the electrical machine, the rotors angular position cannot be acquired properly. The subject of this paper is a variable reluctance resolver model to investigate manufacturing induced geometrical tolerances of the electrical machine in means of rotor position. For this purpose, a block diagram of a stator segment is built, connecting 24 segments to one stator core using PLECS. The rotor can be set freely in shape and position. By varying a permeance block the rotation, i.e. the position dependent airgap, is mirrored. The proposed model allows a sensitivity analysis of the sensor signal according to geometrical changes and the absolute position in the electrical machine system. To evaluate the investigations, the results are verified by measurements on a sensor test bench. The simulation results show a similar behavior as the measurement. Although the values differ by a factor of five, this absolute difference is discussed by means of neglected boundary conditions."}
{"_id": "d4d09d20b869389f7313d05462365a9d77706088", "title": "Clinical practice guidelines for diagnosis of autism spectrum disorder in adults and children in the UK: a narrative review", "text": "BACKGROUND\nResearch suggests that diagnostic procedures for Autism Spectrum Disorder are not consistent across practice and that diagnostic rates can be affected by contextual and social drivers. The purpose of this review was to consider how the content of clinical practice guidelines shapes diagnoses of Autism Spectrum Disorder in the UK; and investigate where, within those guidelines, social factors and influences are considered.\n\n\nMETHODS\nWe electronically searched multiple databases (NICE Evidence Base; TRIP; Social Policy and Practice; US National Guidelines Clearinghouse; HMIC; The Cochrane Library; Embase; Global health; Ovid; PsychARTICLES; PsychINFO) and relevant web sources (government, professional and regional NHS websites) for clinical practice guidelines. We extracted details of key diagnostic elements such as assessment process and diagnostic tools. A qualitative narrative analysis was conducted to identify social factors and influences.\n\n\nRESULTS\nTwenty-one documents were found and analysed. Guidelines varied in recommendations for use of diagnostic tools and assessment procedures. Although multidisciplinary assessment was identified as the 'ideal' assessment, some guidelines suggested in practice one experienced healthcare professional was sufficient. Social factors in operational, interactional and contextual areas added complexity to guidelines but there were few concrete recommendations as to how these factors should be operationalized for best diagnostic outcomes.\n\n\nCONCLUSION\nAlthough individual guidelines appeared to present a coherent and systematic assessment process, they varied enough in their recommendations to make the choices available to healthcare professionals particularly complex and confusing. We recommend a more explicit acknowledgement of social factors in clinical practice guidelines with advice about how they should be managed and operationalised to enable more consistency of practice and transparency for those coming for diagnosis."}
{"_id": "520da33f43fa61ea317a05b21c0457425209cca4", "title": "Navigation of an omni-directional mobile robot with active caster wheels", "text": "This work deals with navigation of an omni-directional mobile robot with active caster wheels. Initially, the posture of the omni-directional mobile robot is calculated by using the odometry information. Next, the position accuracy of the mobile robot is measured through comparison of the odometry information and the external sensor measurement. Finally, for successful navigation of the mobile robot, a motion planning algorithm that employs kinematic redundancy resolution method is proposed. Through experiments for multiple obstacles and multiple moving obstacles, the feasibility of the proposed navigation algorithm was verified."}
{"_id": "4a68a6a38d5acf40ed43370591495013a47c6f08", "title": "Examining a hate speech corpus for hate speech detection and popularity prediction", "text": "As research on hate speech becomes more and more relevant every day, most of it is still focused on hate speech detection. By attempting to replicate a hate speech detection experiment performed on an existing Twitter corpus annotated for hate speech, we highlight some issues that arise from doing research in the field of hate speech, which is essentially still in its infancy. We take a critical look at the training corpus in order to understand its biases, while also using it to venture beyond hate speech detection and investigate whether it can be used to shed light on other facets of research, such as popularity of hate tweets."}
{"_id": "d72235862cbe631d71fb234e345f2df3f65ef09a", "title": "Process integration of a 27nm, 16Gb Cu ReRAM", "text": "A 27nm 16Gb Cu based NV Re-RAM chip has been demonstrated. Novel process introduction to enable this technology include a Damascene Cell, Line-SAC Digit Lines filled with Cu, exhumed-silicided array contacts, raised epitaxial arrays, and high-drive buried access devices."}
{"_id": "06c04b352c0afa70ed33cd53352108c089116ef4", "title": "A Comparison of 10 Sampling Algorithms for Configurable Systems", "text": "Almost every software system provides configuration options to tailor the system to the target platform and application scenario. Often, this configurability renders the analysis of every individual system configuration infeasible. To address this problem, researchers have proposed a diverse set of sampling algorithms. We present a comparative study of 10 state-of-the-art sampling algorithms regarding their fault-detection capability and size of sample sets. The former is important to improve software quality and the latter to reduce the time of analysis. In a nutshell, we found that sampling algorithms with larger sample sets are able to detect higher numbers of faults, but simple algorithms with small sample sets, such as most-enabled-disabled, are the most efficient in most contexts. Furthermore, we observed that the limiting assumptions made in previous work influence the number of detected faults, the size of sample sets, and the ranking of algorithms. Finally, we have identified a number of technical challenges when trying to avoid the limiting assumptions, which questions the practicality of certain sampling algorithms."}
{"_id": "636e5f1f0c4df3c6b5d230e3375c6c52129430c4", "title": "Gaussian Process Regression Networks", "text": "We introduce a new regression framework, Gaussian process regression networks (GPRN), which combines the structural properties of Bayesian neural networks with the nonparametric flexibility of Gaussian processes. GPRN accommodates input (predictor) dependent signal and noise correlations between multiple output (response) variables, input dependent length-scales and amplitudes, and heavy-tailed predictive distributions. We derive both elliptical slice sampling and variational Bayes inference procedures for GPRN. We apply GPRN as a multiple output regression and multivariate volatility model, demonstrating substantially improved performance over eight popular multiple output (multi-task) Gaussian process models and three multivariate volatility models on real datasets, including a 1000 dimensional gene expression dataset."}
{"_id": "60b3d1bfcb7a2bbd46cde6c6c5c91334cf1d13a3", "title": "Microplastics in the marine environment.", "text": "This review discusses the mechanisms of generation and potential impacts of microplastics in the ocean environment. Weathering degradation of plastics on the beaches results in their surface embrittlement and microcracking, yielding microparticles that are carried into water by wind or wave action. Unlike inorganic fines present in sea water, microplastics concentrate persistent organic pollutants (POPs) by partition. The relevant distribution coefficients for common POPs are several orders of magnitude in favour of the plastic medium. Consequently, the microparticles laden with high levels of POPs can be ingested by marine biota. Bioavailability and the efficiency of transfer of the ingested POPs across trophic levels are not known and the potential damage posed by these to the marine ecosystem has yet to be quantified and modelled. Given the increasing levels of plastic pollution of the oceans it is important to better understand the impact of microplastics in the ocean food web."}
{"_id": "badbdaacdfa3dc3b36da1a3fe338392a1259c14b", "title": "Machine Learning and Social Robotics for Detecting Early Signs of Dementia", "text": "This paper presents the EACare project, an ambitious multidisciplinary collaboration with the aim to develop an embodied system, capable of carrying out neuropsychological tests to detect early signs of dementia, e.g., due to Alzheimer\u2019s disease. The system will use methods from Machine Learning and Social Robotics, and be trained with examples of recorded clinician-patient interactions. The interaction will be developed using a participatory design approach. We describe the scope and method of the project, and report on a first Wizard of Oz prototype."}
{"_id": "83923ec80bf89923bcc896768e957a5af6bf8355", "title": "The basic building blocks and evolution of CRISPR-CAS systems.", "text": "CRISPR (clustered regularly interspaced short palindromic repeats)-Cas (CRISPR-associated) is an adaptive immunity system in bacteria and archaea that functions via a distinct self/non-self recognition mechanism that involves unique spacers homologous with viral or plasmid DNA and integrated into the CRISPR loci. Most of the Cas proteins evolve under relaxed purifying selection and some underwent dramatic structural rearrangements during evolution. In many cases, CRISPR-Cas system components are replaced either by homologous or by analogous proteins or domains in some bacterial and archaeal lineages. However, recent advances in comparative sequence analysis, structural studies and experimental data suggest that, despite this remarkable evolutionary plasticity, all CRISPR-Cas systems employ the same architectural and functional principles, and given the conservation of the principal building blocks, share a common ancestry. We review recent advances in the understanding of the evolution and organization of CRISPR-Cas systems. Among other developments, we describe for the first time a group of archaeal cas1 gene homologues that are not associated with CRISPR-Cas loci and are predicted to be involved in functions other than adaptive immunity."}
{"_id": "86be6821e65ed895e915d4cd679983c661f36c27", "title": "An algorithmic approach to multimodality midfacial rejuvenation using a new classification system for midfacial aging.", "text": "Midfacial aging is the result of the complex interplay between the osseous skeleton, facial retaining ligaments, soft tissues envelope, facial fat compartments, and the overlying skin elasticity. As a result of the many anatomic components involved in midfacial aging, the authors proposed a classification system based on distinct anatomic factors to direct surgical treatment. Evidence based data suggest that midface rejuvenation often requires a multimodality approach to obtain desired results, especially in patients with more advanced aging and poor tissue elasticity, or those with hypoplastic midfacial skeletal structure."}
{"_id": "446929dbe6d71f386d13650fcd01b0a83c357ba2", "title": "A Robust Mid-Level Representation for Harmonic Content in Music Signals", "text": "When considering the problem of audio-to-audio matching, determining musical similarity using low-level features such as Fourier transforms and MFCCs is an extremely difficult task, as there is little semantic information available. Full semantic transcription of audio is an unreliable and imperfect task in the best case, an unsolved problem in the worst. To this end we propose a robust mid-level representation that incorporates both harmonic and rhythmic information, without attempting full transcription. We describe a process for creating this representation automatically, directly from multi-timbral and polyphonic music signals, with an emphasis on popular music. We also offer various evaluations of our techniques. Moreso than most approaches working from raw audio, we incorporate musical knowledge into our assumptions, our models, and our processes. Our hope is that by utilizing this notion of a musically-motivated mid-level representation we may help bridge the gap between symbolic and audio research."}
{"_id": "bf79616a8d4d535b1ffdce3eedca0df4dda454be", "title": "Roman-txt: forms and functions of roman urdu texting", "text": "In this paper, we present a user study conducted on students of a local university in Pakistan and collected a corpus of Roman Urdu text messages. We were interested in forms and functions of Roman Urdu text messages. To this end, we collected a mobile phone usage dataset. The data consists of 116 users and 346, 455 text messages. Roman Urdu text, is the most widely adopted style of writing text messages in Pakistan. Our user study leads to interesting results, for instance, we were able to quantitatively show that a number of words are written using more than one spelling; most participants of our study were not comfortable in English and hence they write their text messages in Roman Urdu; and the choice of language adopted by the participants sometimes varies according to who the message is being sent. Moreover we found that many young students send text messages(SMS) of intimate nature."}
{"_id": "cb67d3d6c085f1fa325cfe34a132e67953f1d26f", "title": "OCR and RFID enabled vehicle identification and parking allocation system", "text": "The available parking management systems require human efforts for recording entries of coming and leaving vehicles from parking in sheets. For huge parking it is difficult to keep track of the information. Use of Radio Frequency Identification known as RFID technology reduces human efforts and Optical Character Recognition known as OCR, enabled system will provide an automated system for parking management. And also it will provide the control over the access of parking space by the use of boom barriers. For huge parking it will be an effective system. Parking can be a commercial one or can be a Very Important Person (VIP) parking. Depending on the usage system can be used. Vehicle cannot be parked in parking if it is not registered. OCR acts as a solution for vehicles which are not registered. To keep track of the happenings in parking cameras are used. Places where automated systems are required, the proposed system is an answer. System combines the two strong technologies RFID and OCR."}
{"_id": "2621c894885e3f42bca8bd2b11dab1637e697814", "title": "The REVIVE (REal Women's VIews of Treatment Options for Menopausal Vaginal ChangEs) survey in Europe: Country-specific comparisons of postmenopausal women's perceptions, experiences and needs.", "text": "OBJECTIVES\nTo achieve a better comprehension of the variability of perceptions, experiences and needs in terms of sexual and vaginal health in postmenopausal women (PMW) from four different European countries.\n\n\nMETHODS\nAn internet-based survey was conducted in Italy, Germany, Spain and the United Kingdom with a total surveyed population of 3768 PMW aged between 45 and 75 years.\n\n\nRESULTS\nThe UK sample was significantly older, with almost a quarter of participants over 65 years of age, and had the highest proportion of women experiencing recent vulvar and vaginal atrophy (52.8%). The majority of Italian and Spanish participants were receiving VVA treatment, whereas in the UK only 28% of PMW were on medication. The most common menopausal symptom was vaginal/vulvar dryness, with almost 80% of participants reporting it in all the countries except the UK (48%). On the other hand, vaginal/vulvar irritation was more frequently reported in the UK (41%). The percentage of participants with a partner was lower in the UK (71%), as was the monthly rate of sexual activity (49%). In the UK, the proportion of participants who had seen a healthcare professional for gynaecological reasons in the last year was lower than in other countries (27% vs. \u226550%), as was the proportion who has discussed their VVA symptoms with them (45% vs. \u223c67%). In this sense, UK PMW waited for a longer before asking for help (especially for pain with intercourse and dryness). The main issues relating to VVA treatment difficulties expressed by participants were administration route in the UK, efficacy in Germany, and side-effects in Italy.\n\n\nCONCLUSIONS\nAlthough all European women shared the same expectation of improving the quality of their sex lives, the opportunity for that varied among different countries in relation to the healthcare system and to the effective communication achieved with healthcare professionals when managing VVA."}
{"_id": "d2f5c15df36ef495d176a5683de636eef3089359", "title": "Do burnout and work engagement predict depressive symptoms and life satisfaction? A three-wave seven-year prospective study.", "text": "BACKGROUND\nBurnout and work engagement have been viewed as opposite, yet distinct states of employee well-being. We investigated whether work-related indicators of well-being (i.e. burnout and work engagement) spill-over and generalize to context-free well-being (i.e. depressive symptoms and life satisfaction). More specifically, we examined the causal direction: does burnout/work engagement lead to depressive symptoms/life satisfaction, or the other way around?\n\n\nMETHODS\nThree surveys were conducted. In 2003, 71% of all Finnish dentists were surveyed (n=3255), and the response rate of the 3-year follow-up was 84% (n=2555). The second follow-up was conducted four years later with a response rate of 86% (n=1964). Structural equation modeling was used to investigate the cross-lagged associations between the study variables across time.\n\n\nRESULTS\nBurnout predicted depressive symptoms and life dissatisfaction from T1 to T2 and from T2 to T3. Conversely, work engagement had a negative effect on depressive symptoms and a positive effect on life satisfaction, both from T1 to T2 and from T2 to T3, even after adjusting for the impact of burnout at every occasion.\n\n\nLIMITATIONS\nThe study was conducted among one occupational group, which limits its generalizability.\n\n\nCONCLUSIONS\nWork-related well-being predicts general wellbeing in the long-term. For example, burnout predicts depressive symptoms and not vice versa. In addition, burnout and work engagement are not direct opposites. Instead, both have unique, incremental impacts on life satisfaction and depressive symptoms."}
{"_id": "5728401922a96d0918b041162180873079b44db0", "title": "Handwriting difficulties in primary school children: a search for underlying mechanisms.", "text": "OBJECTIVE\nThis study investigated the contribution of perceptual-motor dysfunction and cognitive planning problems to the quality or speed of handwriting in children with handwriting problems (HWP).\n\n\nMETHOD\nTwenty-nine children with HWP and 20 classroom peers attending regular schools (grade 2 and grade 3) were tested with regard to visual perception, visual-motor integration, fine motor coordination, and cognitive planning abilities.\n\n\nRESULTS\nThe HWP group scored significantly lower on visual perception, visual-motor integration, fine motor coordination, and cognitive planning in comparison with classroom controls. Regression analyses showed that visual-motor integration was the only significant predictor for quality of handwriting in the HWP group, whereas fine motor coordination (i.e., unimanual dexterity) was the only significant predictor of quality of handwriting in the control group.\n\n\nCONCLUSIONS\nResults suggest that two different mechanisms underlie the quality of handwriting in children with and without handwriting problems. Poor quality of handwriting of children with HWP seems particularly related to a deficiency in visual-motor integration."}
{"_id": "46a26a258bbd087240332376a6ead5f1f7cf497a", "title": "INFORMATION TECHNOLOGY AND ORGANIZATIONAL CHANGE : CAUSAL STRUCTURE IN THEORY AND", "text": "JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact support@jstor.org. This article concerns theories about why and how information technology affects organizational life. Good theory guides research, which, when applied, increases the likelihood that information technology will be employed with desirable consequences for users, organizations, and other interested parties. But what is a good theory? Theories are often evaluated in terms of their content-the specific concepts used and the human values served. This article examines theories in terms of their structures-theorists' assumptions about the nature and direction of causal influence. Three dimensions of causal structure are considered-causal agency, logical structure, and level of analysis. Causal agency refers to beliefs about the nature of causality: whether external forces cause change, whether people act purposefully to accomplish intended objectives, or whether changes emerge unpredictably from the interaction of people and events. Logical structure refers to the temporal aspect of theory-static versus dynamic-and to the logical relationships between the \"causes\" and the outcomes. Level of analysis refers to the entities about which the theory poses concepts and relationships-individuals, groups, organizations, and society. While there are many possible structures for good theory about the role of information technology in organizational change, only a few of these structures can be seen in current theorizing. Increased awareness of the options, open discussion of their advantages and disadvantages , and explicit characterization of future theoretical statements in terms of the dimensions and categories discussed here should, we believe, promote the development of better theory."}
{"_id": "35b688b67831ed0498a490820bb36860104d2d5a", "title": "Mechanical design of a gravity-balancing wearable exoskeleton for the motion enhancement of human upper limb", "text": "Powered exoskeletons can provide motion enhancement for both healthy and physically challenged people. Upper limb exoskeletons are required to have multiple degrees-of-freedom and can still produce sufficient force to augment the upper limb motion. The design using serial mechanisms usually results in a complicated and bulky exoskeleton that prevents itself from being wearable. This paper presents a new exoskeleton design aimed to achieve compactness and wearability. We consider a shoulder exoskeleton that consists of a parallel spherical mechanism with two slider crank mechanisms. The actuators can be placed on a stationary platform and attached closely to human body. Thus a better inertia property can be obtained while maintaining lightweight. Through the use of a gravity-balancing mechanism, the required actuator power becomes smaller and with better efficiency. A static model is developed to analyze and optimize the exoskeleton. Through illustrations of a prototype, the exoskeleton is shown to be wearable and can provide adequate motion enhancement of a human's upper limb."}
{"_id": "a0fd813b9218813e1b020d03a3099de7677dd145", "title": "A Virtual Feedback Assistance System for Remote Operation of a 3DOF Micromanipulator in Micro-Nanorobotic Manipulation", "text": "Manipulation in micro or nanoscale with robotic manipulators under observation of electron microscopes is a widely used strategy for fabrication of nanodevices and nanoscale material property characterization. These types of manipulation systems can handle the relatively larger scale of objects. However, the complexity of manipulation increases highly for 3D manipulation. Since the manipulation system consists of multiple components including manipulator, microscope, and also some end-effector tools, a proper offline visualization of the system is necessary for operation. Therefore, we propose a web-based virtual interface between the user and the actual manipulator operated under digital microscope initially. It gives the operator 3D positional feedback from the virtual model by mapping data read during remote operation. The same interface is used for remote operation of the manipulator within the SEM chamber and a manipulation task is performed."}
{"_id": "b6741b6ee49d26c23655b4ff3febc3166b1e3644", "title": "Dependence on computer games by adolescents.", "text": "As computer game playing is a popular activity among adolescents, a questionnaire study was undertaken with 387 adolescents (12-16 years of age) to establish their \"dependence\" using a scale adapted from the DSM-III-R criteria for pathological gambling. Analysis indicated that one in five adolescents were currently \"dependent\" upon computer games. Boys played significantly more regularly than girls and were more likely to be classified as \"dependent.\" The earlier children began playing computer games it appeared the more likely they were to be playing at \"dependent\" levels. These and other results are discussed in relation to research on other gaming dependencies."}
{"_id": "1e5f76df508cd8dd096eb3c9fd9c529874b40d22", "title": "Ambiguities in Spacebornene Synthetic Aperture Radar Systems", "text": "Several aspects of range and azimuth (time delay and Doppler) ambiguities in spaceborne synthetic aperture radars (SARs) are examined. An accurate method to evaluate the ratio of the intensities of the ambiguities to that of the signal is described. This method has been applied to the nominal SAR system on SEASAT and the variations of this ratio as a function of orbital latitude and attitude control error are discussed. It is also shown that the detailed range migration-azimuth phase history of an ambiguity is different from that of a signal. The images of ambiguities are, therefore, dispersed. Several examples of such dispersed images observed by the SEASAT SAR are presented. These dispersions are eliminated when the processing parameters are adjusted appropriately. Finally, a method is described which uses a set of multiple pulse repetition frequencies to determine the absolute values of the Doppler centroid frequencies for SARs with high carrier frequencies and relatively poor attitude measurements."}
{"_id": "36f3bcdf22ca286811b5a5232c7861210cbc443a", "title": "Ship Surveillance With TerraSAR-X", "text": "Ship detection is an important application of global monitoring of environment and security. In order to overcome the limitations by other systems, surveillance with satellite synthetic aperture radar (SAR) is used because of its possibility to provide ship detection at high resolution over wide swaths and in all weather conditions. A new X-band radar onboard the TerraSAR-X (TS-X) satellite gives access to spatial resolution as fine as 1 m. In this paper, first results on the combined use of TS-X ship detection, automatic identification system (AIS), and satellite AIS (SatAIS) is presented. The AIS system is an effective terrestrial method for tracking vessels in real time typically up to 40 km off the coast. SatAIS, as a space-based system, allows almost global coverage for monitoring of ships since not all ships operate their AIS and smaller ships are not equipped with AIS. The system is considered to be of cooperative nature. In this paper, the quality of TS-X images with respect to ship detection is evaluated, and a first assessment of its performance for ship detection is given. The velocity of a moving ship is estimated using complex TS-X data. As test cases, images were acquired over the North Sea, Baltic Sea, Atlantic Ocean, and Pacific Ocean in Stripmap mode with a resolution of 3 m at a coverage of 30 km 100 km. Simultaneous information on ship positions was available from TS-X and terrestrial as well as SatAIS. First results on the simultaneous superposition of SatAIS and high-resolution radar images are presented."}
{"_id": "44f4eb3a57da47b6a5afd27dd3a1490cca912543", "title": "Filtering of Azimuth Ambiguity in Stripmap Synthetic Aperture Radar Images", "text": "Due to the specific characteristics of the SAR system, peculiar artifacts can appear on SAR images. In particular, finite pulse repetition frequency (PRF) and nonideal antenna pattern give rise to azimuth ambiguity, with the possible presence of \u201cghosts\u201d on the image. They are due to the replica of strong targets located outside of the antenna main beam, superposed onto low intensity areas of the imaged scene. In this paper, we propose a method for the filtering of azimuth ambiguities on stripmap SAR images, that we name \u201casymmetric mapping and selective filtering\u201d (AM&SF) method. Our framework is based on the theory of selective filtering and on a two-step procedure. In the first step, two asymmetric filters are used to suppress ambiguities due to each sidelobe of the antenna pattern, and the ratios between the original and filtered images are used to produce two maps of the ambiguity-affected areas (one for each sidelobe). In the second step, these maps are used to produce a final image in which only the areas affected by the ambiguities are replaced by their filtered (via the proper of the two filters) versions. The proposed method can be employed in situations in which similar approaches fail, and it has a smaller computational burden. The framework is positively tested on TerraSAR-X and COSMO/SkyMed SAR images of different marine scenes."}
{"_id": "ea82de0a872021f37347972d1e40d93254a28a2b", "title": "Adaptive removal of azimuth ambiguities in SAR images", "text": "We introduce an innovative algorithm that is capable of suppression of strong azimuth ambiguities in single-look complex (SLC) synthetic aperture radar images, to be used both for detected products and for interferometric surveys. The algorithm exploits a band-pass filter to select that portion of the azimuth spectrum that is less influenced by aliasing, the one that corresponds to the nulls in the replicated azimuth antenna pattern. The selectivity of this filter adapts locally depending on the estimated ambiguity intensity: the filter is more selective in the presence of strong ambiguities and becomes all-pass in absence of ambiguities. The design of such filter frames in the Wiener approach with two different normalization options, depending on the use of the SLC image, either for getting multilook averaged detected products or for interferometric applications. Preliminary results achieved by processing ENVISAT Advanced Synthetic Aperture Radar sensor data are provided."}
{"_id": "bacd14459ce65c0d682e2e532450a74ccb2fa138", "title": "Detection of Anomalies in Large Scale Accounting Data using Deep Autoencoder Networks", "text": "Learning to detect fraud in large-scale accounting data is one of the long-standing challenges in financial statement audits or fraud investigations. Nowadays, the majority of applied techniques refer to handcrafted rules derived from known fraud scenarios. While fairly successful, these rules exhibit the drawback that they often fail to generalize beyond known fraud scenarios and fraudsters gradually find ways to circumvent them. To overcome this disadvantage and inspired by the recent success of deep learning we propose the application of deep autoencoder neural networks to detect anomalous journal entries. We demonstrate that the trained network\u2019s reconstruction error obtainable for a journal entry and regularized by the entry\u2019s individual attribute probabilities can be interpreted as a highly adaptive anomaly assessment. Experiments on two real-world datasets of journal entries, show the effectiveness of the approach resulting in high f1-scores of 32.93 (dataset A) and 16.95 (dataset B) and less false positive alerts compared to state of the art baseline methods. Initial feedback received by chartered accountants and fraud examiners underpinned the quality of the approach in capturing highly relevant accounting anomalies."}
{"_id": "2b686c9fa800c8ba7ffb1dd75f224a2f7b0d7f3a", "title": "Predicting the fix time of bugs", "text": "Two important questions concerning the coordination of development effort are which bugs to fix first and how long it takes to fix them. In this paper we investigate empirically the relationships between bug report attributes and the time to fix. The objective is to compute prediction models that can be used to recommend whether a new bug should and will be fixed fast or will take more time for resolution. We examine in detail if attributes of a bug report can be used to build such a recommender system. We use decision tree analysis to compute and 10-fold cross validation to test prediction models. We explore prediction models in a series of empirical studies with bug report data of six systems of the three open source projects Eclipse, Mozilla, and Gnome. Results show that our models perform significantly better than random classification. For example, fast fixed Eclipse Platform bugs were classified correctly with a precision of 0.654 and a recall of 0.692. We also show that the inclusion of postsubmission bug report data of up to one month can further improve prediction models."}
{"_id": "c787ef5db963d1cf807fe5639d41ec5e74321a1c", "title": "Analysis on DNA based Cryptography to Secure Data Transmission", "text": "The biological research in the field of information technology paves the exploitation of storing capabilities, parallelism and also in conservative cryptography which enhances the security features for data transmission. DNA is the gene information which encodes information of all living beings. Though the DNA computing has its application in the field of huge information storage, massive parallel processing, low energy consumption which have been proposed and proved by the researchers and soon the molecular computer can replace the existing silicon computer and it exploits the world smallest computer. The combination of DNA molecules can be interpreted as a result to give a solution to a specific problem. The DNA strands can be replicated 500 times per second with greater accuracy. It can also be used in the field of cryptography based upon the vast parallelism which is used to break the existing cryptographic approach. This paper analysis an existing approach to the DNA computing method and DNA based cryptographic approach which provides the clear idea and limitations of existing research works."}
{"_id": "35f0147fba05bd320a55c70871a65998ad77a920", "title": "Indoor climate, psychosocial work environment and symptoms in open-plan offices.", "text": "To study the indoor climate, the psychosocial work environment and occupants' symptoms in offices a cross-sectional questionnaire survey was made in 11 naturally and 11 mechanically ventilated office buildings. Nine of the buildings had mainly cellular offices; five of the buildings had mainly open-plan offices, whereas eight buildings had a mixture of cellular, multi-person and open-plan offices. A total of 2301 occupants, corresponding to a response rate of 72%, completed a retrospective questionnaire. The questionnaire comprised questions concerning environmental perceptions, mucous membrane irritation, skin irritation, central nervous system (CNS) symptoms and psychosocial factors. Occupants in open-plan offices are more likely to perceive thermal discomfort, poor air quality and noise and they more frequently complain about CNS and mucous membrane symptoms than occupants in multi-person and cellular offices. The association between psychosocial factors and office size was weak. Open-plan offices may not be suited for all job types. PRACTICAL IMPLICATION: Open-plan offices may be a risk factor for adverse environmental perceptions and symptoms."}
{"_id": "13315d9ba7f0389b8216c65782b8fd4491318b5b", "title": "Natural Language Interface to Database-An Introduction", "text": "Information plays an important role in our lives .it is required numerous applications. One of the major sources of information is databases. Databases and database technology are growing rapidly and are having major impact on the growing use of computers. Computers are used in accessing of information from database. Almost all IT applications are storing, retrieving organizing, accessing analyzing, and information from databases. using the internet. Retrieving information from database requires knowledge of database languages like Structured Query Language (SQL) However, not everybody is able to write SQL queries as they may not be aware of the syntax of SQL structure of the database,. This has lead to the developing such a system where non-expert users compose their questions in their natural language and obtain the results in the form of database. So instead of working with the SQL one can query relational databases in their natural language. This new idea is provoked to develop new type of processing system called natural language interface to database Natural language interface to database enhances the users performance and is a flexible approach querying in database. This paper is an introduction to natural language interfaces to databases (NLIDBS). Covers a brief overview of the NLIDB its components its advantages disadvantages, approaches and techniques ."}
{"_id": "c342c71cb23199f112d0bc644fcce56a7306bf94", "title": "Active Learning for Convolutional Neural Networks: A Core-Set Approach", "text": "Convolutional neural networks (CNNs) have been successfully applied to many recognition and learning tasks using a universal recipe; training a deep model on a very large dataset of supervised examples. However, this approach is rather restrictive in practice since collecting a large set of labeled images is very expensive. One way to ease this problem is coming up with smart ways for choosing images to be labelled from a very large collection (i.e. active learning). Our empirical study suggests that many of the active learning heuristics in the literature are not effective when applied to CNNs in batch setting. Inspired by these limitations, we define the problem of active learning as core-set selection, i.e. choosing set of points such that a model learned over the selected subset is competitive for the remaining data points. We further present a theoretical result characterizing the performance of any selected subset using the geometry of the datapoints. As an active learning algorithm, we choose the subset which is expected to yield best result according to our characterization. Our experiments show that the proposed method significantly outperforms existing approaches in image classification experiments by a large margin."}
{"_id": "6970b5ce76c02276709508f7bc496b8bec9f446d", "title": "Brain mechanisms of pain affect and pain modulation", "text": "Recent animal studies reveal ascending nociceptive and descending modulatory pathways that may contribute to the affective-motivational aspects of pain and play a critical role in the modulation of pain. In humans, a reliable pattern of cerebral activity occurs during the subjective experience of pain. Activity within the anterior cingulate cortex and possibly in other classical limbic structures, appears to be closely related to the subjective experience of pain unpleasantness and may reflect the regulation of endogenous mechanisms of pain modulation."}
{"_id": "e123942715a68afc145843bf37d2f980b49bb322", "title": "Crime Data Analysis and Prediction of Perpetrator Identity Using Machine Learning Approach", "text": "Crime is one of the most predominant and alarming aspects in our society and its prevention is a vital task. Crime analysis is a systematic way of detecting and investigating patterns and trends in crime. The aim of this model is to increase the efficiency of crime investigation systems. This model detects crime patterns from inferences collected from the crime scene and predicts the description of the perpetrator who is likely suspected to commit the crime. This work has two major aspects: Crime Analysis and Prediction of perpetrator identity. The Crime Analysis phase identifies the number of unsolved crimes, and analyses the influence of various factors like year, month, and weapon on the unsolved crimes. The prediction phase estimates the description of the perpetrators like, their age, sex and relationship with the victim. These predictions are done based on the evidences collected from the crime scene. The system predicts the description of the perpetrator using algorithms like, Multilinear Regression, K- Neighbors Classifier and Neural Networks. It was trained and tested using San Francisco Homicide dataset (1981\u20132014) and implemented using python."}
{"_id": "bbe56733113671455a9fee900bfca1a6d7f247bd", "title": "Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning", "text": "Distant supervision has become the standard method for relation extraction. However, even though it is an efficient method, it does not come at no cost\u2014The resulted distantly-supervised training samples are often very noisy. To combat the noise, most of the recent state-of-theart approaches focus on selecting onebest sentence or calculating soft attention weights over the set of the sentences of one specific entity pair. However, these methods are suboptimal, and the false positive problem is still a key stumbling bottleneck for the performance. We argue that those incorrectly-labeled candidate sentences must be treated with a hard decision, rather than being dealt with soft attention weights. To do this, our paper describes a radical solution\u2014We explore a deep reinforcement learning strategy to generate the false-positive indicator, where we automatically recognize false positives for each relation type without any supervised information. Unlike the removal operation in the previous studies, we redistribute them into the negative examples. The experimental results show that the proposed strategy significantly improves the performance of distant supervision comparing to state-of-the-art systems."}
{"_id": "ce96286a3b658998b5089851e8532e394c50c9d6", "title": "MATHEMATICAL MODELING AND SIMULATION OF AN ELECTRIC VEHICLE", "text": "As electric vehicles become promising alternatives for sustainable and cleaner energy emissions in transportation, the modeling and simulation of electric vehicles has attracted increasing attention from researchers. This paper presents a simulation model of a full electric vehicle on the Matlab-Simulink platform to examine power flow during motoring and regeneration. The drive train components consist of a motor, a battery, a motor controller and a battery controller; modeled according to their mathematical equations. All simulation results are plotted and discussed. The torque and speed conditions during motoring and regeneration were used to determine the energy flow, and performance of the drive. This study forms the foundation for further research and development."}
{"_id": "28c4db048bb9372c70d1000b4ba23de333d5ef68", "title": "The child sexual abuse accommodation syndrome.", "text": "Child victims of sexual abuse face secondary trauma in the crisis of discovery. Their attempts to reconcile their private experiences with the realities of the outer world are assaulted by the disbelief, blame and rejection they experience from adults. The normal coping behavior of the child contradicts the entrenched beliefs and expectations typically held by adults, stigmatizing the child with charges of lying, manipulating or imagining from parents, courts and clinicians. Such abandonment by the very adults most crucial to the child's protection and recovery drives the child deeper into self-blame, self-hate, alienation and revictimization. In contrast, the advocacy of an empathic clinician within a supportive treatment network can provide vital credibility and endorsement for the child. Evaluation of the responses of normal children to sexual assault provides clear evidence that societal definitions of \"normal\" victim behavior are inappropriate and procrustean, serving adults as mythic insulators against the child's pain. Within this climate of prejudice, the sequential survival options available to the victim further alienate the child from any hope of outside credibility or acceptance. Ironically, the child's inevitable choice of the \"wrong\" options reinforces and perpetuates the prejudicial myths. The most typical reactions of children are classified in this paper as the child sexual abuse accommodation syndrome. The syndrome is composed of five categories, of which two define basic childhood vulnerability and three are sequentially contingent on sexual assault: (1) secrecy, (2) helplessness, (3) entrapment and accommodation, (4) delayed, unconvincing disclosure, and (5) retraction. The accommodation syndrome is proposed as a simple and logical model for use by clinicians to improve understanding and acceptance of the child's position in the complex and controversial dynamics of sexual victimization. Application of the syndrome tends to challenge entrenched myths and prejudice, providing credibility and advocacy for the child within the home, the courts, and throughout the treatment process. The paper also provides discussion of the child's coping strategies as analogs for subsequent behavioral and psychological problems, including implications for specific modalities of treatment."}
{"_id": "dae57540710c2cc5ba6e43634f7357decfe1186f", "title": "All-hex meshing using closed-form induced polycube", "text": "The polycube-based hexahedralization methods are robust to generate all-hex meshes without internal singularities. They avoid the difficulty to control the global singularity structure for a valid hexahedralization in frame-field based methods. To thoroughly utilize this advantage, we propose to use a frame field without internal singularities to guide the polycube construction. Theoretically, our method extends the vector fields associated with the polycube from exact forms to closed forms, which are curl free everywhere but may be not globally integrable. The closed forms give additional degrees of freedom to deal with the topological structure of high-genus models, and also provide better initial axis alignment for subsequent polycube generation. We demonstrate the advantages of our method on various models, ranging from genus-zero models to high-genus ones, and from single-boundary models to multiple-boundary ones."}
{"_id": "d2ffc78cb51e9a5f5b9275bf1958ae27a8c4a04d", "title": "A Wideband Magnetoelectric Dipole Antenna With Microstrip Line Aperture-Coupled Excitation", "text": "In this communication, a novel wideband magnetoelectric (ME) dipole antenna is proposed and investigated. A microstrip line aperture-coupled feeding structure is used to excite the antenna. This simple feeding structure can significantly enhance the impedance bandwidth. An antenna prototype is fabricated and exemplified to validate the proposed concept. Measured results show that the proposed ME dipole antenna with microstrip line aperture-coupled excitation achieves an impedance bandwidth of 85% for VSWR < 2 from 2.5 to 6.2 GHz. Further, a measured maximum boresight gain up to 8 dBi is obtained and a 1 dB gain bandwidth is about 67% from 2.5 to 5 GHz. The antenna radiation patterns perform well over the 1 dB gain bandwidth range. A low cross-polarization level is obtained over the operating band."}
{"_id": "82bed45822738decd68d57380a6c75710f5d2816", "title": "A Spatio-temporal Role-Based Access Control Model", "text": "With the growing advancement of pervasive computing technologies, we are moving towards an era where spatio-temporal information will be necessary for access control. The use of such information can be used for enhancing the security of an application, and it can also be exploited to launch attacks. For critical applications, a formal model for spatio-temporal-based access control is needed that increases the security of the application and ensures that the location information cannot be exploited to cause harm. In this paper, we propose a spatio-temporal access control model, based on the Role-Based Access Control (RBAC) model, that is suitable for pervasive computing applications. We show the association of each component of RBAC with spatio-temporal information. We formalize the model by enumerating the constraints. This model can be used for applications where spatial and temporal information of a subject and an object must be taken into account before granting or denying access."}
{"_id": "21a39ca716c37ccc133ff96c31cc71565d0c968e", "title": "Functional maps: a flexible representation of maps between shapes", "text": "We present a novel representation of maps between pairs of shapes that allows for efficient inference and manipulation. Key to our approach is a generalization of the notion of map that puts in correspondence real-valued functions rather than points on the shapes. By choosing a multi-scale basis for the function space on each shape, such as the eigenfunctions of its Laplace-Beltrami operator, we obtain a representation of a map that is very compact, yet fully suitable for global inference. Perhaps more remarkably, most natural constraints on a map, such as descriptor preservation, landmark correspondences, part preservation and operator commutativity become linear in this formulation. Moreover, the representation naturally supports certain algebraic operations such as map sum, difference and composition, and enables a number of applications, such as function or annotation transfer without establishing point-to-point correspondences. We exploit these properties to devise an efficient shape matching method, at the core of which is a single linear solve. The new method achieves state-of-the-art results on an isometric shape matching benchmark. We also show how this representation can be used to improve the quality of maps produced by existing shape matching methods, and illustrate its usefulness in segmentation transfer and joint analysis of shape collections."}
{"_id": "36c6f6f5c54e05b3d18b7e8825d2da8e859de43a", "title": "Fault Diagnosis for Rotating Machinery Using Vibration Measurement Deep Statistical Feature Learning", "text": "Fault diagnosis is important for the maintenance of rotating machinery. The detection of faults and fault patterns is a challenging part of machinery fault diagnosis. To tackle this problem, a model for deep statistical feature learning from vibration measurements of rotating machinery is presented in this paper. Vibration sensor signals collected from rotating mechanical systems are represented in the time, frequency, and time-frequency domains, each of which is then used to produce a statistical feature set. For learning statistical features, real-value Gaussian-Bernoulli restricted Boltzmann machines (GRBMs) are stacked to develop a Gaussian-Bernoulli deep Boltzmann machine (GDBM). The suggested approach is applied as a deep statistical feature learning tool for both gearbox and bearing systems. The fault classification performances in experiments using this approach are 95.17% for the gearbox, and 91.75% for the bearing system. The proposed approach is compared to such standard methods as a support vector machine, GRBM and a combination model. In experiments, the best fault classification rate was detected using the proposed model. The results show that deep learning with statistical feature extraction has an essential improvement potential for diagnosing rotating machinery faults."}
{"_id": "d4defe055ddaf9d84e453598ad75529709c64b70", "title": "Numba: a LLVM-based Python JIT compiler", "text": "Dynamic, interpreted languages, like Python, are attractive for domain-experts and scientists experimenting with new ideas. However, the performance of the interpreter is often a barrier when scaling to larger data sets. This paper presents a just-in-time compiler for Python that focuses in scientific and array-oriented computing. Starting with the simple syntax of Python, Numba compiles a subset of the language into efficient machine code that is comparable in performance to a traditional compiled language. In addition, we share our experience in building a JIT compiler using LLVM[1]."}
{"_id": "02a42b14af2e2ae81f5a609131360ffa81068e7e", "title": "Modelling Railway Interlocking Tables Using Coloured Petri Nets", "text": "Interlocking tables are the functional specification defining the routes, on which the passage of the train is allowed. Associated with the route, the states and actions of all related signalling equipment are also specified. This paper formally models the interlocking tables using Coloured Petri Nets (CPN). The CPN model comprises two parts: Signaling Layout and Interlocking Control. The Signaling Layout part is used to simulate the passage of the train. It stores geographic information of the signalling layout in tokens. The Interlocking Control part models actions of the controller according to the functions specified in the interlocking tables. The arc inscriptions in the model represent the content of the interlocking tables. Following our modelling approach we can reuse the same CPN net structure to model any new or modified interlocking system regardless of its size. Experimental results are presented to provide increased confidence in the model correctness."}
{"_id": "27ae58f73d3f0367ab2a2e9efb2bc22df5a76ed7", "title": "Challenging Complexity of Maximum Common Subgraph Detection Algorithms: A Performance Analysis of Three Algorithms on a Wide Database of Graphs", "text": "Graphs are an extremely general and powerful data structure. In pattern recognition and computer vision, graphs are used to represent patterns to be recognized or classified. Detection of maximum common subgraph (MCS) is useful for matching, comparing and evaluate the similarity of patterns. MCS is a well known NP-complete problem for which optimal and suboptimal algorithms are known from the literature. Nevertheless, until now no effort has been done for characterizing their performance. The lack of a large database of graphs makes the task of comparing the performance of different graph matching algorithms difficult, and often the selection of an algorithm is made on the basis of a few experimental results available. In this paper, three optimal and well-known algorithms for maximum common subgraph detection are described. Moreover a large database containing various categories of pairs of graphs (e.g. random graphs, meshes, bounded valence graphs), is presented, and the performance of the three algorithms is evaluated on this database. Article Type Communicated by Submitted Revised Regular Paper U. Brandes September 2005 January 2007 D. Conte et al., Maximum Common Subgraph, JGAA, 11(1) 99\u2013143 (2007) 100"}
{"_id": "2889f89f10105a46ee0862186afa9d6ba2d0bca2", "title": "Probabilistic Subgraph Matching Based on Convex Relaxation", "text": "We present a novel approach to the matching of subgraphs for object recognition in computer vision. Feature similarities between object model and scene graph are complemented with a regularization term that measures differences of the relational structure. For the resulting quadratic integer program, a mathematically tight relaxation is derived by exploiting the degrees of freedom of the embedding space of positive semidefinite matrices. We show that the global minimum of the relaxed convex problem can be interpreted as probability distribution over the original space of matching matrices, providing a basis for efficiently sampling all close-to-optimal combinatorial matchings within the original solution space. As a result, the approach can even handle completely ambiguous situations, despite uniqueness of the relaxed convex problem. Exhaustive numerical experiments demonstrate the promising performance of the approach which \u2013 up to a single inevitable regularization parameter that weights feature similarity against structural similarity \u2013 is free of any further tuning parameters."}
{"_id": "6ccf50496e73a69535f50262bd3dba7548677fff", "title": "Clustering by passing messages between data points.", "text": "Clustering data by identifying a subset of representative examples is important for processing sensory signals and detecting patterns in data. Such \"exemplars\" can be found by randomly choosing an initial subset of data points and then iteratively refining it, but this works well only if that initial choice is close to a good solution. We devised a method called \"affinity propagation,\" which takes as input measures of similarity between pairs of data points. Real-valued messages are exchanged between data points until a high-quality set of exemplars and corresponding clusters gradually emerges. We used affinity propagation to cluster images of faces, detect genes in microarray data, identify representative sentences in this manuscript, and identify cities that are efficiently accessed by airline travel. Affinity propagation found clusters with much lower error than other methods, and it did so in less than one-hundredth the amount of time."}
{"_id": "798dba07af6b11796a82bb2b713bfda2c7a31ae2", "title": "A Comparison of String Metrics for Matching Names and Records", "text": "We describe an open-source Java toolkit of methods for matching names and records. We summarize results obtained from using various string distance metrics on the task of matching entity names: these metrics include distance functions proposed by several different communities, including edit-distance metrics, fast heuristic string comparators, token-based distance metrics, and hybrid methods. We then describe an extension to the toolkit which allows records to be compared. We discuss some issues involved in performing a similar comparison for record-matching techniques, and finally present results for some baseline record-matching algorithms which are based on string comparisons between fields."}
{"_id": "21e938438e7282e6115413881af1c7a8ee7c8537", "title": "Development and validation of blue ray, an optical modem for the MEDUSA class AUVs", "text": "Wireless optical communications are emerging as a viable solution for high-speed data transmission over short ranges in the ocean, complementing mainstream acoustic communication systems that operate over much longer ranges, but at lower data rates. The current drive to develop cooperative autonomous vehicular systems to carry out surveying and other complex missions in the ocean critically depends on the existence of such communication links to share sensory and coordination information. This paper presents a compact, low power consumption, cost-efficient, and lightweight optical modem designed for fast data transmission between MEDUSA underwater vehicles at ranges on the order of 10 m. The transmitter uses LEDs for ON-OFF keying, while the receiver front-end adopts a transimpedance architecture with a photodiode detector. This simplistic transmission and reception technique reduces the overall design and hardware complexities compared to, e.g., laser diodes and photomultiplier tubes. The LEDs are arranged in a circular array, with a photodiode at the centre to enable half-duplex operation. These modems are designed to achieve data rates of 20-200 kb/s over short-range, line-of-sight, configurations. A transparent casing is customised to fit the Medusa vehicles, with proper alignment for inter-vehicle communication when they move in formation."}
{"_id": "45ec4a51a3c821039872dbbe0cb91087b92a106f", "title": "Agnostic Estimation of Mean and Covariance", "text": "We consider the problem of estimating the mean and covariance of a distribution from i.i.d. samples in the presence of a fraction of malicious noise. This is in contrast to much recent work where the noise itself is assumed to be from a distribution of known type. The agnostic problem includes many interesting special cases, e.g., learning the parameters of a single Gaussian (or finding the best-fit Gaussian) when a fraction of data is adversarially corrupted, agnostically learning mixtures, agnostic ICA, etc. We present polynomial-time algorithms to estimate the mean and covariance with error guarantees in terms of information-theoretic lower bounds. As a corollary, we also obtain an agnostic algorithm for Singular Value Decomposition."}
{"_id": "a43cfc76f4ef4fb2325569905a3a4a895eb92bdb", "title": "Robotics-logistics: Challenges for automation of logistic processes", "text": "Robotics-logistics can be understood as the field of activities in which applications of industrial robot-technologies are offered and demanded in order to ensure the optimization of internal material flows. According to a survey conducted by BIBA in 2007, the area of robotics-logistics has great need for modernization. Robotics-Logistics activities can be classified in several scenarios as unloading / loading of goods and palletizing / depalletizing of goods. Possible scenarios for research and development activities within theses fields are given."}
{"_id": "b53a55d624269228a761301abab1d80ade9f89c5", "title": "Clothing identification via deep learning: forensic applications", "text": "Attribute-based identification systems are essential for forensic investigations because they help in identifying individuals. An item such as clothing is a visual attribute because it can usually be used to describe people. The method proposed in this article aims to identify people based on the visual information derived from their attire. Deep learning is used to train the computer to classify images based on clothing content. We first demonstrate clothing classification using a large scale dataset, where the proposed model performs relatively poorly. Then, we use clothing classification on a dataset containing popular logos and famous brand images. The results show that the model correctly classifies most of the test images with a success rate that is higher than 70%. Finally, we evaluate clothing classification using footage from surveillance cameras. The system performs well on this dataset, labelling 70% of the test images correctly."}
{"_id": "1bdf90cc38336e6e627150e006fd58bcba201792", "title": "Human Postures Recognition Based on D-S Evidence Theory and Multi-sensor Data Fusion", "text": "Body Sensor Networks (BSNs) are conveying notable attention due to their capabilities in supporting humans in their daily life. In particular, real-time and noninvasive monitoring of assisted livings is having great potential in many application domains, such as health care, sport/fitness, e-entertainment, social interaction and e-factory. And the basic as well as crucial feature characterizing such systems is the ability of detecting human actions and behaviors. In this paper, a novel approach for human posture recognition is proposed. Our BSN system relies on an information fusion method based on the D-S Evidence Theory, which is applied on the accelerometer data coming from multiple wearable sensors. Experimental results demonstrate that the developed prototype system is able to achieve a recognition accuracy between 98.5% and 100% for basic postures (standing, sitting, lying, squatting)."}
{"_id": "6b07115d3e588aadd1de5fe24da4b32b87c59014", "title": "Weight-averaged consistency targets improve semi-supervised deep learning results", "text": "The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels from 35.24% to 9.11%."}
{"_id": "0ef98882d8a7356c7cf7ac715bef84656a0632d4", "title": "Ensembles of nested dichotomies for multi-class problems", "text": "Nested dichotomies are a standard statistical technique for tackling certain polytomous classification problems with logistic regression. They can be represented as binary trees that recursively split a multi-class classification task into a system of dichotomies and provide a statistically sound way of applying two-class learning algorithms to multi-class problems (assuming these algorithms generate class probability estimates). However, there are usually many candidate trees for a given problem and in the standard approach the choice of a particular tree is based on domain knowledge that may not be available in practice. An alternative is to treat every system of nested dichotomies as equally likely and to form an ensemble classifier based on this assumption. We show that this approach produces more accurate classifications than applying C4.5 and logistic regression directly to multi-class problems. Our results also show that ensembles of nested dichotomies produce more accurate classifiers than pairwise classification if both techniques are used with C4.5, and comparable results for logistic regression. Compared to error-correcting output codes, they are preferable if logistic regression is used, and comparable in the case of C4.5. An additional benefit is that they generate class probability estimates. Consequently they appear to be a good general-purpose method for applying binary classifiers to multi-class problems."}
{"_id": "534329a2f549fab718ee3fec17ff3137289ecf37", "title": "A Case Study on Korean Wave: Focused on K-POP Concert by Korean Idol Group in Paris, June 2011", "text": "The study dealt with Korean Wave focusing on K-POP and analyzed its success factors, the changes in Korean Wave and the future directions for development. Also the study has compared the results of the idol groups\u2019 performance, held by SM Entertainment in June 2011 in Paris, and the perspective of Korean and French media. Key reasons were examined to analyze what led K-POP to play a crucial part in spreading the Korean Wave: The expansion of the age of its takers ranging from teens to females in their 20s; the fusion of a variety of cultural elements including oriental dance and occidental pop; the systematic system of idol training; marketing activities based on social media, etc. For the expansion of Korean Wave including K-POP and its successful positioning in the world market, there are several suggestions to make which are inventing the differentiated contents and highly appealing stories, approaching the local customers while considering local features, operating co-marketing with other cultural products."}
{"_id": "afddbb9b8e9b57263bfb529cb8f0359f8b23f688", "title": "An acute bout of self-myofascial release increases range of motion without a subsequent decrease in muscle activation or force.", "text": "Foam rolling is thought to improve muscular function, performance, overuse, and joint range of motion (ROM); however, there is no empirical evidence demonstrating this. Thus, the objective of the study was to determine the effect of self-myofascial release (SMR) via foam roller application on knee extensor force and activation and knee joint ROM. Eleven healthy male (height 178.9 \u00b1 3.5 cm, mass 86.3 \u00b1 7.4 kg, age 22.3 \u00b1 3.8 years) subjects who were physically active participated. Subjects' quadriceps maximum voluntary contraction force, evoked force and activation, and knee joint ROM were measured before, 2 minutes, and 10 minutes after 2 conditions: (a) 2, 1-minute trials of SMR of the quadriceps via a foam roller and (b) no SMR (Control). A 2-way analysis of variance (condition \u00d7 time) with repeated measures was performed on all dependent variables recorded in the precondition and postcondition tests. There were no significant differences between conditions for any of the neuromuscular dependent variables. However, after foam rolling, subjects' ROM significantly (p < 0.001) increased by 10\u00b0 and 8\u00b0 at 2 and 10 minutes, respectively. There was a significant (p < 0.01) negative correlation between subjects' force and ROM before foam rolling, which no longer existed after foam rolling. In conclusion, an acute bout of SMR of the quadriceps was an effective treatment to acutely enhance knee joint ROM without a concomitant deficit in muscle performance."}
{"_id": "4f183a67430480b440c148998bed07f1a45f68f0", "title": "SATIRE: a software architecture for smart AtTIRE", "text": "Personal instrumentation and monitoring services that collect and archive the physical activities of a user have recently been introduced for various medical, personal, safety, and entertainment purposes. A general software architecture is needed to support different categories of such monitoring services. This paper presents a software architecture, implementation, and preliminary evaluation of SATIRE, a wearable personal monitoring service transparently embedded in user garments. SATIRE records the owner's activity and location for subsequent automated uploading and archiving. The personal archive can later be searched for particular events to answer questions regarding past and present user activity, location, and behavior patterns. A short feasibility and usage study of a prototype based on MicaZ motes provides a proof of concept for the SATIRE architecture."}
{"_id": "66d9767c367e79f6dc40c7a26b8ec5479ad2c2d7", "title": "CDN brokering", "text": "Content distribution networks (CDNs) increase the capacity of individual Web sites and attempt to deliver content from caches that are located \u201ccloser\u201d to end-users than the origin servers that provide the content. CDN brokering provides CDNs a way to interoperate by allowing one CDN to intelligently redirect clients dynamically to other CDNs. This paper describes the goals, architecture, and performance of a CDN brokerage system. Our system has been deployed on the Internet on a provisional basis, and our architectural ideas have helped advance the evolution of Internet standards for interoperating CDNs."}
{"_id": "1b2f59291322dcf2bfa552b9c39d3d956bcfde1f", "title": "Editing Description Logic Ontologies with the Prot\u00e9g\u00e9 OWL Plugin", "text": "The growing interest in the Semantic Web and the Web Ontology Language (OWL) will reveal the potential of Description Logics in industrial projects. The rich semantics of OWL provide powerful reasoning capabilities that help build, maintain and query domain models for many purposes. However, before OWL can unfold its full potential, user-friendly tools with a scalable architecture are required. We present the OWL Plugin, an extension of the Prot\u00e9g\u00e9 ontology development environment, which can be used to define classes and properties, to edit logical class expressions, to invoke reasoners, and to link ontologies into the Semantic Web. We analyze some of the challenges for developers of Description Logic editors, and discuss some of our user interface design decisions."}
{"_id": "7571e852a75a1a5117f5f36b581c2f46a8e068cf", "title": "Fast and Flexible Indoor Scene Synthesis via Deep Convolutional Generative Models", "text": "We present a new, fast and flexible pipeline for indoor scene synthesis that is based on deep convolutional generative models. Our method operates on a top-down image-based representation, and inserts objects iteratively into the scene by predicting their category, location, orientation and size with separate neural network modules. Our pipeline naturally supports automatic completion of partial scenes, as well as synthesis of complete scenes. Our method is significantly faster than the previous image-based method and generates result that outperforms state-of-the-art generative scene models in terms of faithfulness to training data and perceived visual quality."}
{"_id": "bc83845d332eb24b776cb207532a6e0bf33b9cbc", "title": "When Is It Too Early for Single Sport Specialization?", "text": "Over the past 15 years, there has been an increase in youth sports participation with a concomitant increase in early year-round training in a single sport. Many factors contribute to the desire of parents and coaches to encourage early single sport specialization, including the desire to give the young athlete an edge in competition, pursuit of scholarships, and potential professional status, and the ability to label a young athlete as elite at an early age. Despite these perceived advantages, some data suggest that early sport specialization does not lead to a competitive advantage over athletes who participate in multiple sports. Although the data are limited, there is some evidence that early sport specialization may put the young athlete at risk for overuse injuries. The focus of this review is to highlight the evidence regarding early sport specialization and risk for injury; discuss the risk factors for overuse injury in high-risk sports including ice hockey, swimming, gymnastics, and baseball; and discuss future potential research that would help define the risk of injury for young athletes who participate in early sport specialization."}
{"_id": "faaba1f0d5918314860db3e8b362d32cc66b99d0", "title": "Coordinated Power Control of Variable-Speed Diesel Generators and Lithium-Battery on a Hybrid Electric Boat", "text": "This paper presents coordinated power management strategies for a hybrid electric boat (HEB) based on dynamic load's power distribution approach using diesel generators and batteries. Two variable-speed diesel generator sets connecting to dc-bus via fully controlled rectifiers serve as the main power supply source; one lithium-battery pack linked to dc-bus through a bidirectional DC/DC converter serves as an auxiliary power supply source. The power requirement of the thrusters and other onboard equipment is represented by a load power profile. The main contribution of this paper is focused on the load power sharing of the HEB between the two variable-speed diesel generators and the lithium-battery pack, through the fluctuations extracted from load power and their assignation to the batteries. Another important issue is that when the total load demand is low, one of the diesel generator set will be switched off to improve the diesel-fuel efficiency. The output powers of the diesel generators and the batteries are controlled based on the power distribution strategy via the converters controlling. The performances of the proposed hybrid power system control are evaluated through simulations in MATALB/Simulink, and through reduced-scale experimental tests carried out for a specific driving cycle of the HEB."}
{"_id": "e16abd071b0ecb3bae9ff44ba5557ff3eb1a7c63", "title": "Multiple description-DASH: Pragmatic video streaming maximizing end-users' quality of experience", "text": "Over the past few years, adaptive bitrate streaming protocols, such as DASH, have risen to enhance the End-Users' experience towards video consumption over the Internet. Current methods perform a smoother playback (i.e., fewer re-buffering states) trading off with quality fluctuations due to the client-server link state. The proposed work goes a step further and introduces an innovative lightweight streaming solution by taking advantage of bandwidth aggregation over multiple paths using multiple content sources at the same time. This pragmatic evolving approach outperforms the QoE delivered by current DASH-based or P2P-based solutions and leverages the scope of action made available to the media delivery chain actors. The proposed solution has been implemented and is compared to a DASH-based approach used in most existing VoD use-cases through a first set of experiments. Results demonstrate the strong advantages in terms of quality delivered at the End-User's side and buffer occupancy. Discussion on the necessary trade-offs is also tackled."}
{"_id": "b2288b7cab4c5ca0f0ebd67bc287b93cd3130240", "title": "Optimizing power-accuracy trade-off in approximate adders", "text": "Approximate circuit design has gained significance in recent years targeting applications like media processing where full accuracy is not required. In this paper, we propose an approximate adder in which the approximate part of the sum is obtained by finding a single optimal level that minimises the mean error distance. Therefore hardware needed for the approximate part computation can be removed, which effectively results in very low power consumption. We compare the proposed adder with various approximate adders in the literature in terms of power and accuracy metrics. The power savings of our adder is shown to be 17% to 55% more than power savings of the existing approximate adders over a significant range of accuracy values. Further, in an image addition application, this adder is shown to provide the best trade-off between PSNR and power."}
{"_id": "9226a8601f4c317342c5c6f9835db0c1ce9e5326", "title": "Substrate Integrated Waveguide (SIW) Rotman Lens and Its Ka-Band Multibeam Array Antenna Applications", "text": "A new type of substrate integrated waveguide (SIW) multibeam antenna is proposed for mobile satellite communications using beam switching and diversity techniques. It employs an SIW Rotman lens as the beamforming network. The prototype of a single multibeam antenna is implemented at 28.5 GHz with seven input ports and an antenna array constructed by nine SIW linear slot arrays, which can generate a corresponding number of beams along one dimension. Several such antennas are grouped in two different ways to cover a 2-D solid angle with multiple beams. Experiment results show that the 2-D solid angle around (-40deg, 40deg) X (-35deg, 35deg) or (-25deg, 25deg) x (-35deg, 35deg) are covered with 20 or 25 beams with 5 -dB beamwidth, respectively. It is demonstrated that this type of printed multibeam antenna is a good choice for communication applications where mobility and high gain are simultaneously required."}
{"_id": "49928617bc7d370addd8aab476ab2680f54123b1", "title": "From Trust in Automation to Decision Neuroscience: Applying Cognitive Neuroscience Methods to Understand and Improve Interaction Decisions Involved in Human Automation Interaction", "text": "Human automation interaction (HAI) systems have thus far failed to live up to expectations mainly because human users do not always interact with the automation appropriately. Trust in automation (TiA) has been considered a central influence on the way a human user interacts with an automation; if TiA is too high there will be overuse, if TiA is too low there will be disuse. However, even though extensive research into TiA has identified specific HAI behaviors, or trust outcomes, a unique mapping between trust states and trust outcomes has yet to be clearly identified. Interaction behaviors have been intensely studied in the domain of HAI and TiA and this has led to a reframing of the issues of problems with HAI in terms of reliance and compliance. We find the behaviorally defined terms reliance and compliance to be useful in their functionality for application in real-world situations. However, we note that once an inappropriate interaction behavior has occurred it is too late to mitigate it. We therefore take a step back and look at the interaction decision that precedes the behavior. We note that the decision neuroscience community has revealed that decisions are fairly stereotyped processes accompanied by measurable psychophysiological correlates. Two literatures were therefore reviewed. TiA literature was extensively reviewed in order to understand the relationship between TiA and trust outcomes, as well as to identify gaps in current knowledge. We note that an interaction decision precedes an interaction behavior and believe that we can leverage knowledge of the psychophysiological correlates of decisions to improve joint system performance. As we believe that understanding the interaction decision will be critical to the eventual mitigation of inappropriate interaction behavior, we reviewed the decision making literature and provide a synopsis of the state of the art understanding of the decision process from a decision neuroscience perspective. We forward hypotheses based on this understanding that could shape a research path toward the ability to mitigate interaction behavior in the real world."}
{"_id": "3d0f7539d17816a5ba1925f10ed5c274c9d570b5", "title": "Predicting Students\u2019 Performance using Modified ID3 Algorithm", "text": "The ability to predict performance of students is very crucial in our present education system. We can use data mining concepts for this purpose. ID3 algorithm is one of the famous algorithms present today to generate decision trees. But this algorithm has a shortcoming that it is inclined to attributes with many values. So , this research aims to overcome this shortcoming of the algorithm by using gain ratio(instead of information gain) as well as by giving weights to each attribute at every decision making point. Several other algorithms like J48 and Naive Bayes classification algorithm are also applied on the dataset. The WEKA tool was used for the analysis of J48 and Naive Bayes algorithms. The results are compared and presented. The dataset used in our study is taken from the School of Computing Sciences and Engineering (SCSE), VIT University. Keyworddata mining, educational data mining (EDM), decision tree, gain ratio, weighted ID3"}
{"_id": "9de14d43395b154895149ba5a3c1635cee938bf4", "title": "Improving read performance of STT-MRAM based main memories through Smash Read and Flexible Read", "text": "Spin Transfer Torque Magnetoresistive RAM (STT-MRAM) has been recently deemed as one promising main memory alternative for high-end mobile processors. With process technology scaling, the amplitude of write current approaches that of read current in deep sub-micrometer STT-MRAM arrays. As a result, read disturbance errors (RDEs) emerge. Both high current restore required (HCRR) reads and low current long latency (LCLL) reads can guarantee read reliability and utterly remove RDEs. However, both of them degrade system performance, because of extra restores or a longer read latency. And neither of them always achieves the better performance when running a wide variety of applications. In this paper, we present two architectural techniques to boost read performance for STT-MRAM based main memories in the presence of RDEs. We first propose Smash Read (S-RD) to shorten the latency of HCRR reads by injecting a larger read current. We further introduce Flexible Read (F-RD) to dynamically adopt different types of read schemes, S-RD and LCLL, to maximize main memory system performance. On average, our techniques improve system performance by 9~13% and reduces total energy by 4~8% over all existing read schemes including HCRR and LCLL."}
{"_id": "6f206b46c26b70b3be0b1e89b1d4b6518b601005", "title": "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights", "text": "This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A wellproven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization (a variable-length encoding: 1 bit for representing zero value, and the remaining 4 bits represent at most 16 different values for the powers of two) , our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. We believe that our method sheds new insights on how to make deep CNNs to be applicable on mobile or embedded devices. The code is available at https://github.com/Zhouaojun/IncrementalNetwork-Quantization."}
{"_id": "a782971fa27ae701a2610d7d18e99833583e7ca1", "title": "MIMO-Net: A multi-input multi-output convolutional neural network for cell segmentation in fluorescence microscopy images", "text": "We propose a novel multiple-input multiple-output convolution neural network (MIMO-Net) for cell segmentation in fluorescence microscopy images. The proposed network trains the network parameters using multiple resolutions of the input image, connects the intermediate layers for better localization and context and generates the output using multi-resolution deconvolution filters. The MIMO-Net allows us to deal with variable intensity cell boundaries and highly variable cell size in the mouse pancreatic tissue by adding extra convolutional layers which bypass the max-pooling operation. The results show that our method outperforms state-of-the-art deep learning based approaches for segmentation."}
{"_id": "4c931523fadf80c5e5180de24be484624caa769b", "title": "Machine Learning for Performance Prediction in Mobile Cellular Networks", "text": "In this paper, we discuss the application of machine learning techniques for performance prediction problems in wireless networks. These problems often involve using existing measurement data to predict network performance where direct measurements are not available. We explore the performance of existing machine learning algorithms for these problems and propose a simple taxonomy of main problem categories. As an example, we use an extensive real-world drive test data set to show that classical machine learning methods such as Gaussian process regression, exponential smoothing of time series, and random forests can yield excellent prediction results. Applying these methods to the management of wireless mobile networks has the potential to significantly reduce operational costs while simultaneously improving user experience. We also discuss key challenges for future work, especially with the focus on practical deployment of machine learning techniques for performance prediction in mobile wireless networks."}
{"_id": "029dd04b0878be15b7860e0f00e84e35c443811a", "title": "Algorithmic Authority: the Ethics, Politics, and Economics of Algorithms that Interpret, Decide, and Manage", "text": "This panel will explore algorithmic authority as it manifests and plays out across multiple domains. Algorithmic authority refers to the power of algorithms to manage human action and influence what information is accessible to users. Algorithms increasingly have the ability to affect everyday life, work practices, and economic systems through automated decision-making and interpretation of \"big data\". Cases of algorithmic authority include algorithmically curating news and social media feeds, evaluating job performance, matching dates, and hiring and firing employees. This panel will bring together researchers of quantified self, healthcare, digital labor, social media, and the sharing economy to deepen the emerging discourses on the ethics, politics, and economics of algorithmic authority in multiple domains."}
{"_id": "2cef1515ac1d98f1dcfdc9b84c1a3b6f3758b9d1", "title": "Joint pose estimation and action recognition in image graphs", "text": "Human analysis in images and video is a hard problem due to the large variation in human pose, clothing, camera view-points, lighting and other factors. While the explicit modeling of this variability is difficult, the huge amount of available person images motivates for the implicit, data-driven approach to human analysis. In this work we aim to explore this approach using the large amount of images spanning a subspace of human appearance. We model this subspace by connecting images into a graph and propagating information through such a graph using a discriminatively-trained graphical model. We particularly address the problems of human pose estimation and action recognition and demonstrate how image graphs help solving these problems jointly. We report results on still images with human actions from the KTH dataset."}
{"_id": "e90dd4a2750df4d52918a610ba9fb2b013153508", "title": "Function points as a universal software metric", "text": "Function point metrics are the most accurate and effective metrics yet developed for software sizing and also for studying software productivity, quality, costs, risks, and economic value. Unlike the older \"lines of code\" metric function points can be used to study requirements, design, and in fact all software activities from development through maintenance. In the future function point metrics can easily become a universal metric used for all software applications and for all software contracts in all countries. The government of Brazil already requires function points for all software contracts, and South Korea and Italy may soon follow. However, there are some logistical problems with function point metrics that need to be understood and overcome in order for function point metrics to become the primary metric for software economic analysis. Manual function point counting is too slow and costly to be used on large software projects above 10,000 function points in size. Also, application size is not constant but grows at about 2% per calendar month during development and 8% or more per calendar year for as long as software is in active use. This paper discusses a method of high-speed function point counting that can size any application in less than two minutes, and which can predict application growth during development and for five years after release. This new method is based on pattern matching and is covered by U.S. utility patent application and hence is patent pending."}
{"_id": "bdf98a92ad17fb4e78e7cad54bdb3e1214729831", "title": "Security and Privacy Challenges in Smart Cities", "text": "The construction of smart cities will bring about a higher quality of life to the masses through digital interconnectivity, leading to increased efficiency and accessibility in cities. Smart cities must ensure individual privacy and security in order to ensure that its citizens will participate. If citizens are reluctant to participate, the core advantages of a smart city will dissolve. This article will identify and offer possible solutions to five smart city challenges, in hopes of anticipating destabilizing and costly disruptions. The challenges include privacy preservation with high dimensional data, securing a network with a large attack surface, establishing trustworthy data sharing practices, properly utilizing artificial intelligence, and mitigating failures cascading through the smart network. Finally, further research directions are provided to encourage further exploration of smart city challenges before their construction."}
{"_id": "49474c35adfcac47a1fbef2de73041544e53553d", "title": "SRT \u9664\u7b97\u306b\u57fa\u3065\u304f\u30b9\u30b1\u30fc\u30e9\u30d6\u30eb\u5270\u4f59\u4e57\u7b97\u56de\u8def A Scalable Modular Multiplication Hardware Based on High-radix SRT Division Algorithm", "text": "\u3079\u304d\u5270\u4f59\u6f14\u7b97\u306f RSA\u6697\u53f7 [1]\u3092\u59cb\u3081\u3068\u3059\u308b\u516c\u958b\u9375\u6697 \u53f7\u306e\u57fa\u672c\u6f14\u7b97\u3067\u3042\u308b.\u3053\u306e\u3079\u304d\u5270\u4f59\u6f14\u7b97\u306f\u5270\u4f59\u4e57\u7b97\u306b\u5206 \u89e3\u3067\u304d\u308b.\u6f14\u7b97\u306e\u30d3\u30c3\u30c8\u6570\u304c\u975e\u5e38\u306b\u5927\u304d\u304f,\u51e6\u7406\u6642\u9593\u304c \u9577\u3044\u3053\u3068\u304b\u3089,\u9ad8\u901f\u5316\u306e\u305f\u3081\u306b\u591a\u304f\u306e\u56de\u8def\u69cb\u6210\u6cd5\u304c\u7814\u7a76 \u3055\u308c\u3066\u3044\u308b.\u5270\u4f59\u4e57\u7b97\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0 [2]\u306e\u4e2d\u3067,\u56de\u8def\u5316 \u306b\u6709\u52b9\u306a\u3082\u306e\u3068\u3057\u3066,\u4e57/\u9664\u7b97\u3092\u30a4\u30f3\u30bf\u30ea\u30fc\u30d6\u3059\u308b SRT \u9664\u7b97\u306b\u57fa\u3065\u304f\u65b9\u6cd5 [3]\u3068,\u30e2\u30f3\u30b4\u30e1\u30ea\u4e57\u7b97\u306b\u57fa\u3065\u304f\u65b9\u6cd5 [4]\u304c\u3042\u308b.\u5f8c\u8005\u306f\u5e7e\u3064\u304b\u306e\u524d\u51e6\u7406\u3068\u5f8c\u51e6\u7406\u304c\u5fc5\u8981\u306b\u306a \u308b\u304c,\u6cd5\u306e\u500d\u6570\u306e\u9078\u629e\u304c\u5bb9\u6613\u306b\u306a\u308b.\u305d\u306e\u305f\u3081,\u57fa\u672c\u7684 \u306a\u57fa\u6570 2\u306e\u69cb\u6210\u540c\u58eb\u306e\u6bd4\u8f03\u3067\u306f\u524d\u8005\u306b\u6bd4\u3079\u3066 2\u500d\u7a0b\u5ea6\u9ad8 \u901f\u3067\u3042\u308b\u3068\u3055\u308c\u308b [5].\u3053\u306e\u3053\u3068\u304b\u3089,\u8fd1\u5e74\u306f,\u30e2\u30f3\u30b4 \u30e1\u30ea\u4e57\u7b97\u306b\u57fa\u3065\u304f\u56de\u8def\u69cb\u6210\u306e\u7814\u7a76\u304c\u591a\u3044. \u307e\u305f,\u8fd1\u5e74\u306f,\u30e2\u30f3\u30b4\u30e1\u30ea\u4e57\u7b97\u306b\u57fa\u3065\u304f\u30b9\u30b1\u30fc\u30e9\u30d6\u30eb \u306a\u56de\u8def\u69cb\u6210\u306b\u95a2\u3059\u308b\u7814\u7a76\u304c\u591a\u3044 [6, 7, 8].\u3053\u308c\u3089\u306f,\u30ef\u30fc \u30c9\u5358\u4f4d\u306b\u51e6\u7406\u3059\u308b\u3053\u3068\u3067,\u4ee5\u524d\u306e\u3082\u306e\u306b\u6bd4\u3079\u3066\u30d0\u30b9\u5e45\u3092 \u5c0f\u3055\u304f\u3057\u3066,\u3088\u308a\u5c0f\u3055\u306a\u56de\u8def\u91cf\u3067\u5b9f\u73fe\u3067\u304d\u308b.\u30b9\u30de\u30fc\u30c8 \u30ab\u30fc\u30c9\u306a\u3069\u306e\u5c0f\u578b\u306e\u88c5\u7f6e\u3067\u306f,\u5f93\u6765\u306e\u69cb\u6210\u6cd5\u3067\u306f\u56de\u8def\u304c \u6bd4\u8f03\u7684\u5927\u304d\u3044\u305f\u3081,\u3053\u306e\u3088\u3046\u306a\u56de\u8def\u898f\u6a21\u306e\u5c0f\u3055\u3044\u69cb\u6210\u304c \u591a\u5c11\u901f\u5ea6\u306f\u52a3\u308b\u304c\u6709\u52b9\u3068\u306a\u308b. \u672c\u8ad6\u6587\u3067\u306f,\u30e2\u30f3\u30b4\u30e1\u30ea\u4e57\u7b97\u3067\u306f\u306a\u304f,\u9ad8\u57fa\u6570 SRT\u9664 \u7b97 [9]\u306b\u57fa\u3065\u304f\u30b9\u30b1\u30fc\u30e9\u30d6\u30eb\u306a\u56de\u8def\u69cb\u6210\u6cd5\u3092\u63d0\u6848\u3057,\u3053 \u306e\u69cb\u6210\u306e\u9762\u7a4d/\u901f\u5ea6\u306e\u6027\u80fd\u304c\u30e2\u30f3\u30b4\u30e1\u30ea\u4e57\u7b97\u306b\u57fa\u3065\u304f\u69cb \u6210\u306b\u5339\u6575\u3059\u308b\u3053\u3068\u3092\u793a\u3059.\u307e\u305f,\u6cd5\u306e\u500d\u6570\u9078\u629e\u306b\u306f,\u9ad8"}
{"_id": "2d6c06c9e688479cbaad4165cd153a4034ccce2d", "title": "Depth-Dependent Halos: Illustrative Rendering of Dense Line Data", "text": "We present a technique for the illustrative rendering of 3D line data at interactive frame rates. We create depth-dependent halos around lines to emphasize tight line bundles while less structured lines are de-emphasized. Moreover, the depth-dependent halos combined with depth cueing via line width attenuation increase depth perception, extending techniques from sparse line rendering to the illustrative visualization of dense line data. We demonstrate how the technique can be used, in particular, for illustrating DTI fiber tracts but also show examples from gas and fluid flow simulations and mathematics as well as describe how the technique extends to point data. We report on an informal evaluation of the illustrative DTI fiber tract visualizations with domain experts in neurosurgery and tractography who commented positively about the results and suggested a number of directions for future work."}
{"_id": "9db805502642c8777eea6c79508f7f52b308c988", "title": "OpenSky: A swiss army knife for air traffic security research", "text": "The Automatic Dependent Surveillance - Broadcast (ADS-B) protocol is one of the key components of the next generation air transportation system. Since ADS-B will become mandatory by 2017 in the European airspace, it is crucial that aspects such as its security and privacy are promptly investigated by the research community. However, as expensive specialized equipment was previously necessary to collect real-world data on a large scale, such data has not been freely accessible until now. To enable researchers around the world to conduct experimental studies based on real air traffic data, we have created OpenSky, a participatory sensor network for air traffic research. In this paper, we describe the setup and capabilities of OpenSky, and detail some of the research into air traffic security that we have conducted using OpenSky."}
{"_id": "a4c144e0d9cc89a766aeae13bd6e103072c56e30", "title": "Introduction to the Special Issue on Machine Learning for Multiple Modalities in Interactive Systems and Robots", "text": "This special issue highlights research articles that apply machine learning to robots and other systems that interact with users through more than one modality, such as speech, gestures, and vision. For example, a robot may coordinate its speech with its actions, taking into account (audio-)visual feedback during their execution. Machine learning provides interactive systems with opportunities to improve performance not only of individual components but also of the system as a whole. However, machine learning methods that encompass multiple modalities of an interactive system are still relatively hard to find. The articles in this special issue represent examples that contribute to filling this gap."}
{"_id": "766150b00dc038b24cc3ff9715c60f42b98b7254", "title": "Antibacterial drug discovery in the resistance era", "text": "The looming antibiotic-resistance crisis has penetrated the consciousness of clinicians, researchers, policymakers, politicians and the public at large. The evolution and widespread distribution of antibiotic-resistance elements in bacterial pathogens has made diseases that were once easily treatable deadly again. Unfortunately, accompanying the rise in global resistance is a failure in antibacterial drug discovery. Lessons from the history of antibiotic discovery and fresh understanding of antibiotic action and the cell biology of microorganisms have the potential to deliver twenty-first century medicines that are able to control infection in the resistance era."}
{"_id": "c336af4c33b3f8e4913f7e58f2cdc9554fff2786", "title": "Hunting ELFs : An investigation into Android malware detection", "text": "Hunting ELFs: An Investigation Into Android Malware Detection by Thomas Sheridan Atkinson Master of Science in Information Security Royal Holloway, University of London 2015/2016 Android has risen to become the de facto operating system for mobile technologies. Its huge success and ubiquity has made it a target for malware writers. It has been found that recently sampled Android malware apps make use of repackaged malicious ELF binaries. The research carried out by this project has found a previously unknown technique used by malicious ELF binaries to hide calls to external packed binaries. An original and robust method for detecting ELF binaries that make calls to a packed binary has been developed and implemented into a state of the art framework with an aim to improving how the framework detects malicious APKs. When trained and tested on 84,979 benign samples and 16,132 malware samples, an accuracy of 96.50% was maintained using XGBoost as the classifier algorithm. The accuracy achieved using the Extra Trees algorithm was improved by 0.01% to 95.79% and the accuracy achieved when using the Random Forest algorithm was increased by 0.05% to 95.34%."}
{"_id": "0e5e26ede48306bad114c7ed25a98d5a32adce28", "title": "A unified framework for generalized Linear Discriminant Analysis", "text": "Linear discriminant analysis (LDA) is one of the well- known methods for supervised dimensionality reduction. Over the years, many LDA-based algorithms have been developed to cope with the curse of dimensionality. In essence, most of these algorithms employ various techniques to deal with the singularity problem, which occurs when the data dimensionality is larger than the sample size. They have been applied successfully in various applications. However, there is a lack of a systematic study of the commonalities and differences of these algorithms, as well as their intrinsic relationships. In this paper, a unified framework for generalized LDA is proposed via a transfer function. The proposed framework elucidates the properties of various algorithms and their relationships. Based on the presented analysis, we propose an efficient model selection algorithm for LDA. We conduct extensive experiments using a collection of high-dimensional data, including text documents, face images, gene expression data, and gene expression pattern images, to evaluate the proposed theories and algorithms."}
{"_id": "ce16549da958e736bb9a78ae7330bd240a23937e", "title": "You can't always get what you want: educational attainment, agency, and choice.", "text": "Using educational attainment to indicate socioeconomic status, the authors examined models of agency and effects of choice among European American adults of different educational backgrounds in 3 studies. Whereas college-educated (BA) participants and their preferred cultural products (i.e., rock music lyrics) emphasized expressing uniqueness, controlling environments, and influencing others, less educated (HS) participants and their preferred cultural products (i.e., country music lyrics) emphasized maintaining integrity, adjusting selves, and resisting influence. Reflecting these models of agency, HS and BA participants differently responded to choice in dissonance and reactance paradigms: BA participants liked chosen objects more than unchosen objects, but choice did not affect HS participants' preferences. Results suggest that HS and BA models of agency qualitatively differ, despite overlap between HS and BA worlds."}
{"_id": "d9b0eec4560b4410aa7998bff6a13a1ba88b272a", "title": "Implicit Discourse Relation Classification via Multi-Task Neural Networks", "text": "Without discourse connectives, classifying implicit discourse relations is a challenging task and a bottleneck for building a practical discourse parser. Previous research usually makes use of one kind of discourse framework such as PDTB or RST to improve the classification performance on discourse relations. Actually, under different discourse annotation frameworks, there exist multiple corpora which have internal connections. To exploit the combination of different discourse corpora, we design related discourse classification tasks specific to a corpus, and propose a novel Convolutional Neural Network embedded multi-task learning system to synthesize these tasks by learning both unique and shared representations for each task. The experimental results on the PDTB implicit discourse relation classification task demonstrate that our model achieves significant gains over baseline systems."}
{"_id": "9916197a8c0b8a1f6bb1911887935626f573ed38", "title": "Multi-level bootstrap analysis of stable clusters in resting-state fMRI", "text": "A variety of methods have been developed to identify brain networks with spontaneous, coherent activity in resting-state functional magnetic resonance imaging (fMRI). We propose here a generic statistical framework to quantify the stability of such resting-state networks (RSNs), which was implemented with k-means clustering. The core of the method consists in bootstrapping the available datasets to replicate the clustering process a large number of times and quantify the stable features across all replications. This bootstrap analysis of stable clusters (BASC) has several benefits: (1) it can be implemented in a multi-level fashion to investigate stable RSNs at the level of individual subjects and at the level of a group; (2) it provides a principled measure of RSN stability; and (3) the maximization of the stability measure can be used as a natural criterion to select the number of RSNs. A simulation study validated the good performance of the multi-level BASC on purely synthetic data. Stable networks were also derived from a real resting-state study for 43 subjects. At the group level, seven RSNs were identified which exhibited a good agreement with the previous findings from the literature. The comparison between the individual and group-level stability maps demonstrated the capacity of BASC to establish successful correspondences between these two levels of analysis and at the same time retain some interesting subject-specific characteristics, e.g. the specific involvement of subcortical regions in the visual and fronto-parietal networks for some subjects."}
{"_id": "277c68dfc3b9e3ecb6fd03f569bcc62641ef83c6", "title": "Haptic identification of objects using a modular soft robotic gripper", "text": "This work presents a soft hand capable of robustly grasping and identifying objects based on internal state measurements. A highly compliant hand allows for intrinsic robustness to grasping uncertainty, but the specific configuration of the hand and object is not known, leaving undetermined if a grasp was successful in picking up the right object. A soft finger was adapted and combined to form a three finger gripper that can easily be attached to existing robots, for example, to the wrist of the Baxter robot. Resistive bend sensors were added within each finger to provide a configuration estimate sufficient for distinguishing between a set of objects. With one data point from each finger, the object grasped by the gripper can be identified. A clustering algorithm to find the correspondence for each grasped object is presented for both enveloping grasps and pinch grasps. This hand is a first step towards robust proprioceptive soft grasping."}
{"_id": "349221c84d0500ae858bebe028d754648628fc5c", "title": "Onboard Reconfigurable Battery Charger for Electric Vehicles With Traction-to-Auxiliary Mode", "text": "This paper proposes a single-phase reconfigurable battery charger for an electric vehicle (EV) that operates in three different modes: grid-to-vehicle (G2V) mode, in which the traction batteries are charged from the power grid; vehicle-to-grid (V2G) mode, in which the traction batteries deliver part of the stored energy back to the power grid; and traction-to-auxiliary (T2A) mode, in which the auxiliary battery is charged from the traction batteries. When connected to the power grid, the battery charger works with a sinusoidal current in the ac side, for both G2V and V2G modes, and regulates the reactive power. When the EV is disconnected from the power grid, the control algorithms are modified, and the full-bridge ac-dc bidirectional converter works as a full-bridge isolated dc-dc converter that is used to charge the auxiliary battery of the EV, avoiding the use of an additional charger to accomplish this task. To assess the behavior of the proposed reconfigurable battery charger under different operation scenarios, a 3.6-kW laboratory prototype has been developed, and experimental results are presented."}
{"_id": "774d50f55cf9268435a2147e92e025c9309e2947", "title": "AutoISES: Automatically Inferring Security Specification and Detecting Violations", "text": "The importance of software security cannot be overstated. In the past, researchers have applied program analysis techniques to automatically detect security vulnerabilities and verify security properties. However, such techniques have limited success in reality because they require manually provided code-level security specifications. Manually writing and generating these code-level security specifications are tedious and error-prone. Additionally, they seldom exist in production software. In this paper, we propose a novel method and tool, called AutoISES, which Automatically Infers Security Specifications by statically analyzing source code, and then directly use these specifications to automatically detect security violations. Our experiments with the Linux kernel and Xen demonstrated the effectiveness of this approach \u2013 AutoISES automatically generated 84 security specifications and detected 8 vulnerabilities in the Linux kernel and Xen, 7 of which have already been confirmed by the corresponding developers."}
{"_id": "73728b028317c5740c9872b64f7e374213292068", "title": "Cultural capital or relative risk aversion? Two mechanisms for educational inequality compared.", "text": "In this paper we empirically examined two explanatory mechanisms for educational inequality: cultural reproduction and relative risk aversion, using survey data taken from secondary school pupils in Amsterdam. Cultural reproduction theory seeks to explain class variations in schooling by cultural differences between social classes. Relative risk aversion theory argues that educational inequalities can be understood by between-class variation in the necessity of pursuing education at branching points in order to avoid downward mobility. We showed that class variations in early demonstrated ability are for a substantial part cultural: cultural capital - measured by parental involvement in highbrow culture - affected school performance at the primary and secondary level. However, relative risk aversion - operationalized by being concerned with downward mobility - strongly affects schooling ambitions, whereas cultural capital had no effect. Thus, we conclude that 'primary effects' of social origin on schooling outcomes are manifested through cultural capital and not through relative risk aversion (in addition to other potential sources of class variations such as genetics). Relative risk aversion, and not cultural capital, affects schooling ambitions, which is relevant for our understanding of secondary effects."}
{"_id": "0718237a30408609554a0e2b90d35e37d54b1959", "title": "Entropy-Based Subword Mining with an Application to Word Embeddings", "text": "Recent literature has shown a wide variety of benefits to mapping traditional onehot representations of words and phrases to lower-dimensional real-valued vectors known as word embeddings. Traditionally, most word embedding algorithms treat each word as the finest meaningful semantic granularity and perform embedding by learning distinct embedding vectors for each word. Contrary to this line of thought, technical domains such as scientific and medical literature compose words from subword structures such as prefixes, suffixes, and root-words as well as compound words. Treating individual words as the finest-granularity unit discards meaningful shared semantic structure between words sharing substructures. This not only leads to poor embeddings for text corpora that have longtail distributions, but also heuristic methods for handling out-of-vocabulary words. In this paper we propose SubwordMine, an entropybased subword mining algorithm that is fast, unsupervised, and fully data-driven. We show that this allows for great cross-domain performance in identifying semantically meaningful subwords. We then investigate utilizing the mined subwords within the FastText embedding model and compare performance of the learned representations in a downstream language modeling task."}
{"_id": "4da5d6deaa2052063ac112bf15529edfa95ea023", "title": "High throughput ultralong (20 cm) nanowire fabrication using a wafer-scale nanograting template.", "text": "Nanowires are being actively explored as promising nanostructured materials for high performance flexible electronics, biochemical sensors, photonic applications, solar cells, and secondary batteries. In particular, ultralong (centimeter-long) nanowires are highly attractive from the perspective of electronic performance, device throughput (or productivity), and the possibility of novel applications. However, most previous works on ultralong nanowires have issues related to limited length, productivity, difficult alignment, and deploying onto the planar substrate complying with well-matured device fabrication technologies. Here, we demonstrate a highly ordered ultralong (up to 20 cm) nanowire array, with a diameter of 50 nm (aspect ratio of up to 4,000,000:1), in an unprecedented large (8 in.) scale (2,000,000 strands on a wafer). We first devised a perfectly connected ultralong nanograting master template on the whole area of an 8 in. substrate using a top-down approach, with a density equivalent to that achieved with e-beam lithography (100 nm). Using this large-area, ultralong, high-density nanograting template, we developed a fast and effective method for fabricating up to 20 cm long nanowire arrays on a plastic substrate, composed of metal, dielectric, oxide, and ferroelectric materials. As a suggestion of practical application, a prototype of a large-area aluminum wire grid polarizer was demonstrated."}
{"_id": "737461773f45709a3bcc79febe0493c136498528", "title": "pubmed.mineR: An R package with text-mining algorithms to analyse PubMed abstracts", "text": "The PubMed literature database is a valuable source of information for scientific research. It is rich in biomedical literature with more than 24 million citations. Data-mining of voluminous literature is a challenging task. Although several text-mining algorithms have been developed in recent years with focus on data visualization, they have limitations such as speed, are rigid and are not available in the open source. We have developed an R package, pubmed.mineR, wherein we have combined the advantages of existing algorithms, overcome their limitations, and offer user flexibility and link with other packages in Bioconductor and the Comprehensive R Network (CRAN) in order to expand the user capabilities for executing multifaceted approaches. Three case studies are presented, namely, \u2018Evolving role of diabetes educators\u2019, \u2018Cancer risk assessment\u2019 and \u2018Dynamic concepts on disease and comorbidity\u2019 to illustrate the use of pubmed.mineR. The package generally runs fast with small elapsed times in regular workstations even on large corpus sizes and with compute intensive functions. The pubmed.mineR is available at http://cran.r-project.org/web/packages/pubmed.mineR ."}
{"_id": "0a2792e4daf2289037f3d21d1d691c4d2face503", "title": "Visualization Criticism - The Missing Link Between Information Visualization and Art", "text": "Classifications of visualization are often based on technical criteria, and leave out artistic ways of visualizing information. Understanding the differences between information visualization and other forms of visual communication provides important insights into the way the field works, though, and also shows the path to new approaches. We propose a classification of several types of information visualization based on aesthetic criteria. The notions of artistic and pragmatic visualization are introduced, and their properties discussed. Finally, the idea of visualization criticism is proposed, and its rules are laid out. Visualization criticism bridges the gap between design, art, and technical/pragmatic information visualization. It guides the view away from implementation details and single mouse clicks to the meaning of a visualization."}
{"_id": "75d2480914841dd6250e49bf755d14507740299b", "title": "Automating the Design of Graphical Presentations of Relational Information", "text": "The goal of the research described in this paper is to develop an application-independent presentation tool that automatically designs effective graphical presentations (such as bar charts, scatter plots, and connected graphs) of relational information. Two problems are raised by this goal: The codification of graphic design criteria in a form that can be used by the presentation tool, and the generation of a wide variety of designs so that the presentation tool can accommodate a wide variety of information. The approach described in this paper is based on the view that graphical presentations are sentences of graphical languages. The graphic design issues are codified as expressiveness and effectiveness criteria for graphical languages. Expressiveness criteria determine whether a graphical language can express the desired information. Effectiveness criteria determine whether a graphical language exploits the capabilities of the output medium and the human visual system. A wide variety of designs can be systematically generated by using a composition algebra that composes a small set of primitive graphical languages. Artificial intelligence techniques are used to implement a prototype presentation tool called APT (A Presentation Tool), which is based on the composition algebra and the graphic design criteria."}
{"_id": "aea992b748b4aa27a6ff577fdba823ed4523d04c", "title": "Parallel Coordinates: A Tool for Visualizing Multi-dimensional Geometry", "text": "A methodology for visualizing analytic and synthetic geometry in RN is presented. It is based on a system of parallel coordinates which induces a non-projective mapping between N-Dimensional and 2-Dimensional sets. Hypersurfaces are represented by their planar images which have some geometrical properties analogous to the properties of the hypersurface that they represent. A point \u2190 \u2192 line duality when N = 2 generalizes to lines and hyperplanes enabling the representation of polyhedra in RN. The representation of a class of convex and non-convex hypersurfaces is discussed together with an algorithm for constructing and displaying any interior point. The display shows some local properties of the hypersurface and provides information on the point's proximity to the boundary. Applications to Air Traffic Control, Robotics, Computer Vision, Computational Geometry, Statistics, Instrumentation and other areas are discussed."}
{"_id": "14f157daceb1ef540a68f8aec39b41de4657690f", "title": "Visualizing the non-visual: spatial analysis and interaction with information from text documents", "text": "The paper describes an approach to IV that involves spatializing text content for enhanced visual browsing and analysis. The application arena is large text document corpora such as digital libraries, regulations and procedures, archived reports, etc. The basic idea is that text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The spatial representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analysts mental workload. The result is an interaction with text that more nearly resembles perception and action with the natural world than with the abstractions of written language."}
{"_id": "55b6e55b5383bcb9e3390bd61ee31b38c3a7619c", "title": "Accelerating fish detection and recognition by sharing CNNs with objectness learning", "text": "Daily increasing underwater visual data makes automatic object detection and recognition a great demand and challenging task. In this paper, we adopt a region proposal network to accelerate underwater object detection and recognition from Faster R-CNN. This process implement detection acceleration by using convolutional networks to generate high-quality object candidates, and by sharing these networks with original detection networks. Applied to a real-world fish dataset, which includes 24,277 ImageCLEF fish images belonging to 12 classes, this automatic detection and recognition system has a nearly real-time frame rate of 9.8 ftps, while yielding 15.1% higher Mean Average Precision (mAP) than the Deformable Parts Model (DPM) baseline."}
{"_id": "931026ca9cae0467333211b205bda8903401a250", "title": "Image annotation via deep neural network", "text": "Multilabel image annotation is one of the most important open problems in computer vision field. Unlike existing works that usually use conventional visual features to annotate images, features based on deep learning have shown potential to achieve outstanding performance. In this work, we propose a multimodal deep learning framework, which aims to optimally integrate multiple deep neural networks pretrained with convolutional neural networks. In particular, the proposed framework explores a unified two-stage learning scheme that consists of (i) learning to fune-tune the parameters of deep neural network with respect to each individual modality, and (ii) learning to find the optimal combination of diverse modalities simultaneously in a coherent process. Experiments conducted on the NUS-WIDE dataset evaluate the performance of the proposed framework for multilabel image annotation, in which the encouraging results validate the effectiveness of the proposed algorithms."}
{"_id": "92e12cc78a72b5ece3f752bd0b4bd46fbc8dffde", "title": "Procrastination and cramming: how adept students ace the system.", "text": "Clear power differentials between teacher and pupil and the assignment of well-delineated tasks within specified time constraints characterize the academic system. Most university students are supremely adept at working within that structure. Many students who outwardly adapt to the system, however, engage in an intense and private ritual that comprises five aspects: calculated procrastination, preparatory anxiety, climactic cramming, nick-of-time deadline-making, and a secret, if often uncelebrated, victory. These adept students often find it difficult to admit others into their efficient program of academic survival. Although such behaviors are adaptive for school rhythms and expectations, these students may also try to impose them onto personal relationships, including those that are psychotherapeutic. The students' tendency to transfer their longstanding habits of procrastination and cramming to the workplace after graduation is less problematic. The argument is made that school rhythms and expectations shape the workplace, not vice versa. Methods of addressing the troublesome aspects of these work styles are suggested. Rather than attempting to change existing work patterns, the therapist can identify the underlying psychodynamic drama and its attendant manifestations that are becoming problematic."}
{"_id": "2d0422cc490db8b2069134f27bde85c9458af8e8", "title": "An efficient FPGA overlay for portable custom instruction set extensions", "text": "Custom instruction set extensions can substantially boost performance of reconfigurable softcore CPUs. While this approach is commonly tailored to one specific FPGA system, we are presenting a fine-grained FPGA-like overlay architecture which can be implemented in the user logic of various FPGA families from different vendors. This allows the execution of a portable application consisting of a program binary and an overlay configuration in a completely heterogeneous environment. Furthermore, we are presenting different optimizations for dramatically reducing the implementation cost of the proposed overlay architecture. In particular, this includes the mapping of the overlay interconnection network directly into the switch fabric of the hosting FPGA. Our case study demonstrates an overhead reduction of an order of magnitude as compared to related approaches."}
{"_id": "080ec0707c4d4e0a3847d2c5cafc3d795cc3825f", "title": "What a Nasty Day: Exploring Mood-Weather Relationship from Twitter", "text": "While it has long been believed in psychology that weather somehow influences human's mood, the debates have been going on for decades about how they are correlated. In this paper, we try to study this long-lasting topic by harnessing a new source of data compared from traditional psychological researches: Twitter. We analyze 2 years' twitter data collected by twitter API which amounts to 10% of all postings and try to reveal the correlations between multiple dimensional structure of human mood with meteorological effects. Some of our findings confirm existing hypotheses, while others contradict them. We are hopeful that our approach, along with the new data source, can shed on the long-going debates on weather-mood correlation."}
{"_id": "4c950909c80a0089f19c337002086364daa59a21", "title": "Object detection by spatio-temporal analysis and tracking of the detected objects in a video with variable background", "text": "In this paper we propose a novel approach for detecting and tracking objects in videos with variable background i.e. videos captured by moving cameras without any additional sensor. The performance of tracking in videos with variable background depends on the successful detection of an object in variable background. The most attractive feature of detecting an object in variable background is that it does not depend on any a priori information of the scene. In a video captured by a moving camera, both the background and foreground are changing in each frame of the image sequence. So for these videos, modeling a single background with traditional background modeling methods is infeasible and thus the detection of actual moving object in a variable background is a challenging task. To detect actual moving object in this work, spatio-temporal blobs have been generated in each frame by spatio-temporal analysis of the image sequence using a three-dimensional Gabor filter. Then individual blobs, which are parts of one object are merged using Minimum Spanning Tree to form the moving object in the variable background. The height, width and four-bin grayvalue histogram of the object are calculated as its features and an object is tracked in each frame using these features to generate the trajectories of the object through the video sequence. In this work, problem of data association during tracking is solved by Linear Assignment Problem and occlusion is handled by the application of kalman filter. The major advantage of our method over most of the existing tracking algorithms is that, the proposed method does not require initialization in the first frame or training on sample data to perform. Performance of the algorithm has been tested on benchmark videos and very satisfactory result has been achieved. The performance of the algorithm is also comparable and superior with respect to some benchmark algorithms."}
{"_id": "4c25d52d216d5b3295e065f3ee46144e61e6ab67", "title": "The Influence of Information Overload on the Development of Trust and Purchase Intention Based on Online Product Reviews in a Mobile vs . Web Environment : A Research Proposal", "text": "Information overload has been studied extensively by decision science researchers, particularly in the context of task-based optimization decisions. Media selection research has similarly investigated the extent to which task characteristics influence media choice and use. This paper outlines a proposed study, which would compare the effectiveness of web-based online product review systems in facilitation trust and purchase intention to those of mobile product review systems. We propose that since web-based systems are more effective in fostering focus and are less prone to navigation frustration, information overload is less likely influence the extent to which a consumer trusts an online product review."}
{"_id": "3b50ec03cc8c367f1b3b15845f79412365e6ef2a", "title": "Real-time detection of planar regions in unorganized point clouds", "text": "Automatic detection of planar regions in point clouds is an important step for many graphics, image processing, and computer vision applications. While laser scanners and digital photography have allowed us to capture increasingly larger datasets, previous techniques are computationally expensive, being unable to achieve real-time performance for datasets containing tens of thousands of points, even when detection is performed in a non-deterministic way. We present a deterministic technique for plane detection in unorganized point clouds whose cost is O(n log n) in the number of input samples. It is based on an efficient Hough-transform voting scheme and works by clustering approximately co-planar points and by casting votes for these clusters on a spherical accumulator using a trivariate Gaussian kernel. A comparison with competing techniques shows that our approach is considerably faster and scales significantly better than previous ones, being the first practical solution for deterministic plane detection in large unorganized point clouds."}
{"_id": "ea6396579b7b4ce98575e0d3fc584c6b13fe7491", "title": "AVATAR: A Framework to Support Dynamic Security Analysis of Embedded Systems' Firmwares", "text": "To address the growing concerns about the security of embedded systems, it is important to perform accurate analysis of firmware binaries, even when the source code or the hardware documentation are not available. However, research in this field is hindered by the lack of dedicated tools. For example, dynamic analysis is one of the main foundations of security analysis, e.g., through dynamic taint tracing or symbolic execution. Unlike static analysis, dynamic analysis relies on the ability to execute software in a controlled environment, often an instrumented emulator. However, emulating firmwares of embedded devices requires accurate models of all hardware components used by the system under analysis. Unfortunately, the lack of documentation and the large variety of hardware on the market make this approach infeasible in practice. In this paper we present Avatar, a framework that enables complex dynamic analysis of embedded devices by orchestrating the execution of an emulator together with the real hardware. We first introduce the basic mechanism to forward I/O accesses from the emulator to the embedded device, and then describe several techniques to improve the system\u2019s performance by dynamically optimizing the distribution of code and data between the two environments. Finally, we evaluate our tool by applying it to three different security scenarios, including reverse engineering, vulnerability discovery and hardcoded backdoor detection. To show the flexibility of Avatar, we perform this analysis on three completely different devices: a GSM feature phone, a hard disk bootloader, and a wireless sensor node."}
{"_id": "033e667eec844477bdfcd0e4b3d5c81b7598af39", "title": "Optimization of Helical Antennas", "text": "Helical antennas have been known for a long time, but the literature is overwhelmed with controversial information about their performance. We have systematically investigated helical antennas located above an infinite ground plane and obtained design curves. We have also observed that the shape and size of the ground conductor have influence on the helical antenna performance. By optimizing the dimensions of ground conductors that have the form of a cup and a cone, we have significantly increased the antenna gain. Simultaneous optimization of the helix and the ground conductor is under way to improve the antenna performance."}
{"_id": "8aea514575fbe7bdcc786ca6aca60435dac6619e", "title": "Correlated discrete data generation using adversarial training", "text": "Generative Adversarial Networks (GAN) have shown great promise in tasks like synthetic image generation, image inpainting, style transfer, and anomaly detection. However, generating discrete data is a challenge. This work presents an adversarial training based correlated discrete data (CDD) generation model. It also details an approach for conditional CDD generation. The results of our approach are presented over two datasets; job-seeking candidates skill set (private dataset) and MNIST (public dataset). From quantitative and qualitative analysis of these results, we show that our model performs better as it leverages inherent correlation in the data, than an existing model that overlooks correlation."}
{"_id": "e8587c354b6a63fba28c6d48dc40b544b4a5f0f0", "title": "HotFlip: White-Box Adversarial Examples for NLP", "text": "We propose an efficient method to generate white-box adversarial examples to trick a character-level neural classifier. We find that only a few manipulations are needed to greatly decrease the accuracy. Our method relies on an atomic flip operation, which swaps one token for another, based on the gradients of the onehot input vectors. Due to efficiency of our method, we can perform adversarial training which makes the model more robust to attacks at test time. With the use of a few semantics-preserving constraints, we demonstrate that HotFlip can be adapted to attack a word-level classifier as well."}
{"_id": "d8f90c9d94a8025ff41853d30ffe3f7307164d4d", "title": "Experiences Using the RISC-V Ecosystem to Design an Accelerator-Centric SoC in TSMC 16 nm", "text": "The recent trend towards accelerator-centric architectures has renewed the need for demonstrating new research ideas in prototype systems with custom chips. Unfortunately, building such research prototypes is tremendously challenging, but the emerging RISC-V open-source software and hardware ecosystem can partly address this challenge by reducing design, implementation, and verification effort. This paper briefly describes the Celerity system-on-chip (SoC), a 5\u00d7 5mm 385M-transistor chip in TSMC 16 nm, which uses a tiered parallel accelerator fabric to improve both the performance and energy efficiency of embedded applications. The Celerity SoC includes five RV64G cores, a 496-core RV32IM tiled manycore processor, and a complex BNN (binarized neural network) accelerator implemented as a Rocket custom co-processor (RoCC). We describe our experiences using the RISC-V ecosystem to build Celerity, and highlight both key benefits and challenges in leveraging the RISCV instruction set, RISC-V software stack, RISC-V processor and memory system generators, RISC-V on-chip network interfaces, RISC-V verification suite, and RISC-V system-level hardware infrastructure. The RISC-V ecosystem played an important role in enabling a team of junior graduate students to design and tapeout the highest-performance RISC-V SoC to date in just nine months."}
{"_id": "6a0aaefce8a27a8727d896fa444ba27558b2d381", "title": "Relation Networks for Object Detection", "text": "Although it is well believed for years that modeling relations between objects would help object recognition, there has not been evidence that the idea is working in the deep learning era. All state-of-the-art object detection systems still rely on recognizing object instances individually, without exploiting their relations during learning. This work proposes an object relation module. It processes a set of objects simultaneously through interaction between their appearance feature and geometry, thus allowing modeling of their relations. It is lightweight and in-place. It does not require additional supervision and is easy to embed in existing networks. It is shown effective on improving object recognition and duplicate removal steps in the modern object detection pipeline. It verifies the efficacy of modeling object relations in CNN based detection. It gives rise to the first fully end-to-end object detector."}
{"_id": "1fb15924a151867cee663db8d75ccae55f1f4ae9", "title": "Social Media Security Culture The Human Dimension in Social Media Management", "text": "Social media provides both opportunities and risks for any organization. Secure integration of social media platforms in organizational ICT infrastructures tends to be focused mainly on technical aspects. Social media security management usually ignores the human dimension, but protection can only be achieved through a holistic approach. Social media security culture must be part of the overall organizational culture. From a survey conducted to determine social media guidelines, a management model was developed for creating, monitoring and controlling social media security culture. The management model will be mapped to an assessment and reporting tool."}
{"_id": "2cebecdb54f24ed4df9ae5ad85304a74a777d5f1", "title": "Retrograde amnesia and memory consolidation: a neurobiological perspective", "text": "The fact that information acquired before the onset of amnesia can be lost (retrograde amnesia) has fascinated psychologists, biologists, and clinicians for over 100 years. Studies of retrograde amnesia have led to the concept of memory consolidation, whereby medial temporal lobe structures direct the gradual establishment of memory representations in neocortex. Recent theoretical accounts have inspired a simple neural network model that produces behavior consistent with experimental data and makes these ideas about memory consolidation more concrete. Recent physiological and anatomical findings provide important information about how memory consolidation might actually occur."}
{"_id": "da202119cdf70bc4f398a67ab840dc2d47ceec9b", "title": "The International Society for Biomechanics ) 5 July 1993 THE USE OF SURFACE ELECTROMYOGRAPHY IN BIOMECHANICS", "text": "................................................................................................................... 2 INTRODUCTION ............................................................................................................. 3 FACTORS AFFECTING THE EMG SIGNAL AND FORCE PRODUCED BY A MUSCLE.............. 5 DETECTION AND PROCESSING OF THE EMG SIGNAL.................................................... 10 THE ACTIVATION TIMING OF MUSCLES......................................................................... 12 THE FORCE / EMG SIGNAL RELATIONSHIP .................................................................. 17 THE EMG SIGNAL AS A FATIGUE INDEX ...................................................................... 25 SUMMARY OF RECOMMENDATIONS.............................................................................. 32 SUMMARY OF PROBLEMS FOR RESOLUTION................................................................. 34 ISSUES FOR INTERNATIONAL AGREEMENT.................................................................... 36 ACKNOWLEDGEMENTS ................................................................................................ 37 \u00a9 DelSys Incorporated"}
{"_id": "91e4456442a7bce77cdfd24bc8a2fb6a4a3a3b6f", "title": "Enhancing recurrent neural network-based language models by word tokenization", "text": "Introduction Statistical language models estimate the probability for a given sequence of words. Given a sentence s with n words such as s = (w1,w2 . . .wn), the language model assigns P(s). Statistical language models assess good word sequence estimations based on the sequence probability estimation. Building robust, fast and accurate language models is one of the main factors in the success of building systems such as machine translation systems and automatic speech recognition systems. Statistical language models can be estimated based on various approaches. Classical language models are estimated based on n-gram word sequences such as P(s) = P(w1,w2 . . .wn) = \u220fn i=1 P(wi|wi\u22121) and can be approximated based on the Markov concept for shorter contexts (for example, bigram if n = 2 or trigram if n = 3 and so on). Recent researchers have applied neural networks different architectures to build and estimate language models. The classical feed-forward neural network-based language models have been continuously reporting good results among the traditional n-gram language modeling techniques [1]. It Abstract"}
{"_id": "4d36efe9f3568ad31ae48e287d81e03588a201d8", "title": "The relationship between trauma and beliefs about hearing voices: a study of psychiatric and non-psychiatric voice hearers.", "text": "BACKGROUND\nCognitive models suggest that distress associated with auditory hallucinations is best understood in terms of beliefs about voices. What is less clear is what factors govern such beliefs. This study aimed to explore the way in which traumatic life events contribute towards beliefs about voices and any associated distress.\n\n\nMETHOD\nThe difference in the nature and prevalence of traumatic life events and associated psychological sequelae was compared in two groups of voice hearers: psychiatric voice hearers with predominantly negative beliefs about voices (PVH) and non-psychiatric voice hearers with predominantly positive beliefs about voices (NPVH). The data from the two groups were then combined in order to examine which factors could significantly account for the variance in beliefs about voices and therefore levels of distress.\n\n\nRESULTS\nBoth groups reported a high prevalence of traumatic life events although significantly more PVH reported trauma symptoms sufficient for a diagnosis of post-traumatic stress disorder (PTSD). Furthermore, significantly more PVH reported experiencing childhood sexual abuse. Current trauma symptoms (re-experiencing, avoidance and hyperarousal) were found to be a significant predictor of beliefs about voices. Trauma variables accounted for a significant proportion of the variance in anxiety and depression.\n\n\nCONCLUSIONS\nThe results suggest that beliefs about voices may be at least partially understood in the context of traumatic life events."}
{"_id": "f713a480be9d569803349338a1c2db8cabb0165c", "title": "Universal Planning Networks", "text": "A key challenge in complex visuomotor control is learning abstract representations that are effective for specifying goals, planning, and generalization. To this end, we introduce universal planning networks (UPN). UPNs embed differentiable planning within a goal-directed policy. This planning computation unrolls a forward model in a latent space and infers an optimal action plan through gradient descent trajectory optimization. The plan-by-gradient-descent process and its underlying representations are learned end-to-end to directly optimize a supervised imitation learning objective. We find that the representations learned are not only effective for goal-directed visual imitation via gradient-based trajectory optimization, but can also provide a metric for specifying goals using images. The learned representations can be leveraged to specify distance-based rewards to reach new target states for model-free reinforcement learning, resulting in substantially more effective learning when solving new tasks described via imagebased goals. Visit https://sites.google. com/view/upn-public/home for video highlights."}
{"_id": "0839d83f6ab421be184af430ce3a6f274a19676a", "title": "ADE: A Framework for Robust Complex Robotic Architectures", "text": "Robots that can interact naturally with humans require the integration and coordination of many different components with heavy computational demands. We argue that an architecture framework with facilities for dynamic, reliable, fault-recovering, remotely accessible, distributed computing is needed for the development and operation of applications that support and enhance human activities and capabilities. We describe a robotic architecture development system, called ADE, that is built on top of a multi-agent system in order to provide all of the above features. Specifically, we discuss support for autonomic computing in ADE, briefly comparing it to related features of other commonly used robotic systems. We also report our experiences with ADE in the development of an architecture for an intelligent robot assistant and provide experimental results demonstrating the system's utility"}
{"_id": "ab00c499bde248d119b5951123d965f1637ca656", "title": "Motion Tuned Spatio-Temporal Quality Assessment of Natural Videos", "text": "There has recently been a great deal of interest in the development of algorithms that objectively measure the integrity of video signals. Since video signals are being delivered to human end users in an increasingly wide array of applications and products, it is important that automatic methods of video quality assessment (VQA) be available that can assist in controlling the quality of video being delivered to this critical audience. Naturally, the quality of motion representation in videos plays an important role in the perception of video quality, yet existing VQA algorithms make little direct use of motion information, thus limiting their effectiveness. We seek to ameliorate this by developing a general, spatio-spectrally localized multiscale framework for evaluating dynamic video fidelity that integrates both spatial and temporal (and spatio-temporal) aspects of distortion assessment. Video quality is evaluated not only in space and time, but also in space-time, by evaluating motion quality along computed motion trajectories. Using this framework, we develop a full reference VQA algorithm for which we coin the term the MOtion-based Video Integrity Evaluation index, or MOVIE index. It is found that the MOVIE index delivers VQA scores that correlate quite closely with human subjective judgment, using the Video Quality Expert Group (VQEG) FRTV Phase 1 database as a test bed. Indeed, the MOVIE index is found to be quite competitive with, and even outperform, algorithms developed and submitted to the VQEG FRTV Phase 1 study, as well as more recent VQA algorithms tested on this database."}
{"_id": "3e77117e5ff402b504c624765fd75fa3a73d89d2", "title": "Visibility of wavelet quantization noise", "text": "The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression, measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r2/sup -/spl lambda//, where r is the display visual resolution in pixels/degree, and /spl lambda/ is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a \"perceptually lossless\" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes."}
{"_id": "99480bf75024a4955f6753c23650f776f191d151", "title": "Image features that draw fixations", "text": "The ability to automatically detect \u2018visually interesting\u2019 regions in an image has many practical applications especially in the design of active machine vision systems. This paper describes a data-driven approach that uses eye tracking in tandem with principal component analysis to extract lowlevel image features that attract human gaze. Data analysis on an ensemble of image patches extracted at the observer\u2019s point of gaze revealed features that resemble derivatives of the 2D Gaussian operator. Dissimilarities between human and random fixations are investigated by comparing the features extracted at the point of gaze to the general image structure obtained by random sampling in monte-carlo simulations. Finally, a simple application where these features are used to predict fixations is illustrated."}
{"_id": "cb6461c90d6733d189fe7cf31c40a4475471b31f", "title": "Image dissimilarity", "text": "In this paper we compare the performance of a number of representative instrumental models for image dissimilarity with respect to their ability to predict both image dissimilarity and image quality, as perceived by human subjects. Two sets of experimental data, one for images degraded by noise and blur, and one for JPEG-coded images, are used in the comparison. ( 1998 Elsevier Science B.V. All rights reserved."}
{"_id": "8f8de213b1318e0ef0914008010e87ab64ab94ff", "title": "ObliviStore: High Performance Oblivious Cloud Storage", "text": "We design and build ObliviStore, a high performance, distributed ORAM-based cloud data store secure in the malicious model. To the best of our knowledge, ObliviStore is the fastest ORAM implementation known to date, and is faster by 10X or more in comparison with the best known ORAM implementation. ObliviStore achieves high throughput by making I/O operations asynchronous. Asynchrony introduces security challenges, i.e., we must prevent information leakage not only through access patterns, but also through timing of I/O events. We propose various practical optimizations which are key to achieving high performance, as well as techniques for a data center to dynamically scale up a distributed ORAM. We show that with 11 trusted machines (each with a modern CPU), and 20 Solid State Drives, ObliviStore achieves a throughput of 31.5MB/s with a block size of 4KB."}
{"_id": "fa5cf89c59b834ec7573673657c99c77f53f7add", "title": "Neural Network Implementation Using CUDA and OpenMP", "text": "Many algorithms for image processing and pattern recognition have recently been implemented on GPU (graphic processing unit) for faster computational times. However, the implementation using GPU encounters two problems. First, the programmer should master the fundamentals of the graphics shading languages that require the prior knowledge on computer graphics. Second, in a job which needs much cooperation between CPU and GPU, which is usual in image processings and pattern recognitions contrary to the graphics area, CPU should generate raw feature data for GPU processing as much as possible to effectively utilize GPU performance. This paper proposes more quick and efficient implementation of neural networks on both GPU and multi-core CPU. We use CUDA (compute unified device architecture) that can be easily programmed due to its simple C language-like style instead of GPU to solve the first problem. Moreover, OpenMP (Open Multi-Processing) is used to concurrently process multiple data with single instruction on multi-core CPU, which results ineffectively utilizing the memories of GPU. In the experiments, we implemented neural networks-based text detection system using the proposed architecture, and the computational times showed about 15 times faster than implementation using CPU and about 4 times faster than implementation on only GPU without OpenMP."}
{"_id": "e15caf2ce2f94250d26768294ca82d2c5fd05baa", "title": "An empirical evaluation of internet latency expansion", "text": "The Internet's latency expansion determines the asymptotic performance of large-scale distributed systems (such as Peer-to-Peer systems), but previous studies on the Internet have defined expansion in terms of router-level hops. In this paper, we empirically determine the Internet's latency expansion characteristics using measurements from two different Internet topology datasets. Our results show that the Internet router-level topology exhibits a power-law latency expansion, in contrast to its exponential expansion rate in terms of hops."}
{"_id": "827a0ef529081212e698b3e813edacbcb02b09e6", "title": "From Cloud to Fog Computing: A Review and a Conceptual Live VM Migration Framework", "text": "Fog computing, an extension of cloud computing services to the edge of the network to decrease latency and network congestion, is a relatively recent research trend. Although both cloud and fog offer similar resources and services, the latter is characterized by low latency with a wider spread and geographically distributed nodes to support mobility and real-time interaction. In this paper, we describe the fog computing architecture and review its different services and applications. We then discuss security and privacy issues in fog computing, focusing on service and resource availability. Virtualization is a vital technology in both fog and cloud computing that enables virtual machines (VMs) to coexist in a physical server (host) to share resources. These VMs could be subject to malicious attacks or the physical server hosting it could experience system failure, both of which result in unavailability of services and resources. Therefore, a conceptual smart pre-copy live migration approach is presented for VM migration. Using this approach, we can estimate the downtime after each iteration to determine whether to proceed to the stop-and-copy stage during a system failure or an attack on a fog computing node. This will minimize both the downtime and the migration time to guarantee resource and service availability to the end users of fog computing. Last, future research directions are outlined."}
{"_id": "c411f93539714f512e437c45a7a9d0a6d5a7675e", "title": "On image classification: city images vs. landscapes", "text": "Grouping images into semantically meaningful categories using low-level visual features is a challenging and important problem in content-based image retrieval. Based on these groupings, eeective indices can be built for an image database. In this paper, we show how a speciic high-level classiication problem (city images vs. landscapes) can be solved from relatively simple low-level features geared for the particular classes. We have developed a procedure to qualitatively measure the saliency of a feature towards a classiication problem based on the plot of the intra-class and inter-class distance distributions. We use this approach to determine the discriminative power of the following features: color histogram, color coherence vector, DCT coeecient, edge direction histogram, and edge direction coherence vector. We determine that the edge direction-based features have the most discriminative power for the classiication problem of interest here. A weighted k-NN classiier is used for the classiication which results in an accuracy of 93:9% when evaluated on an image database of 2; 716 images using the leave-one-out method. This approach has been extended to further classify 528 landscape images into forests, mountains, and sunset/sunrise classes. First, the input images are classiied as sunset/sunrise images vs. forest & mountain images (94:5% accuracy) and then the forest & mountain images are classiied as forest images or mountain images (91:7% accuracy). We are currently identifying further semantic classes to assign to images as well as extracting low level features which are salient for these classes. Our nal goal is to combine multiple 2-class classiiers into a single hierarchical classiier."}
{"_id": "772bc244c254ac5e869891f23c4b49bd669617f6", "title": "Programming by choice: urban youth learning programming with scratch", "text": "This paper describes Scratch, a visual, block-based programming language designed to facilitate media manipulation for novice programmers. We report on the Scratch programming experiences of urban youth ages 8-18 at a Computer Clubhouse 'an after school center' over an 18-month period. Our analyses of 536 Scratch projects collected during this time documents the learning of key programming concepts even in the absence of instructional interventions or experienced mentors. We discuss the motivations of urban youth who choose to program in Scratch rather than using one of the many other software packages available to them and the implications for introducing programming at after school settings in underserved communities."}
{"_id": "48bf3aef9a2a86338ea814363404a7bd363118eb", "title": "HyperFlex: An SDN virtualization architecture with flexible hypervisor function allocation", "text": "Network Virtualization (NV) and Software-Defined Networking (SDN) are both expected to increase the flexibility and programmability of today's communication networks. Combining both approaches may even be a further step towards increasing the efficiency of network resource utilization. Multiple solutions for virtualizing SDN networks have already been proposed, however, they are either implemented in software or they require special network hardware. We propose HyperFlex, an SDN hypervisor architecture that relies on the decomposition of the hypervisor into functions that are essential for virtualizing SDN networks. The hypervisor functions can be flexibly executed in software or hosted on SDN network elements. Furthermore, existing hypervisor solutions focus on data-plane virtualization mechanisms and neglect the virtualization of the control-plane of SDN networks. HyperFlex provides control-plane virtualization by adding a control-plane isolation function, either in software or on network elements. The isolation function ensures that the resources of the control-plane are shared correctly between each virtual SDN network while it also protects the hypervisor resources from resource exhaustion."}
{"_id": "936d1fc30c82db64ea06a80a2c17b635299b7a48", "title": "MapReduce-based fuzzy c-means clustering algorithm: implementation and scalability", "text": "The management and analysis of big data has been identified as one of the most important emerging needs in recent years. This is because of the sheer volume and increasing complexity of data being created or collected. Current clustering algorithms can not handle big data, and therefore, scalable solutions are necessary. Since fuzzy clustering algorithms have shown to outperform hard clustering approaches in terms of accuracy, this paper investigates the parallelization and scalability of a common and effective fuzzy clustering algorithm named Fuzzy C-Means (FCM) algorithm. The algorithm is parallelized using the MapReduce paradigm outlining how the Map and Reduce primitives are implemented. A validity analysis is conducted in order to show that the implementation works correctly achieving competitive purity results compared to state-of-the art clustering algorithms. Furthermore, a scalability analysis is conducted to demonstrate the performance of the parallel FCM implementation with increasing number of computing nodes used."}
{"_id": "4ab4e6f230890864320f87fc1e9cda37deeb5a84", "title": "Design and implementation of FROST: Digital forensic tools for the OpenStack cloud computing platform", "text": "We describe the design, implementation, and evaluation of FROSTdthree new forensic tools for the OpenStack cloud platform. Our implementation for the OpenStack cloud platform supports an Infrastructure-as-a-Service (IaaS) cloud and provides trustworthy forensic acquisition of virtual disks, API logs, and guest firewall logs. Unlike traditional acquisition tools, FROST works at the cloud management plane rather than interacting with the operating system inside the guest virtual machines, thereby requiring no trust in the guest machine. We assume trust in the cloud provider, but FROST overcomes nontrivial challenges of remote evidence integrity by storing log data in hash trees and returning evidence with cryptographic hashes. Our tools are user-driven, allowing customers, forensic examiners, and law enforcement to conduct investigations without necessitating interaction with the cloud provider. We demonstrate how FROST\u2019s new features enable forensic investigators to obtain forensically-sound data from OpenStack clouds independent of provider interaction. Our preliminary evaluation indicates the ability of our approach to scale in a dynamic cloud environment. The design supports an extensible set of forensic objectives, including the future addition of other data preservation, discovery, real-time monitoring, metrics, auditing, and acquisition capabilities. a 2013 Josiah Dykstra and Alan T. Sherman. Published by Elsevier Ltd. All rights reserved."}
{"_id": "537497f2aac45e9c87019d4d47f0dace2b8e0165", "title": "Geographical Routing With Location Service in Intermittently Connected MANETs", "text": "Combining mobile platforms such as manned or unmanned vehicles and peer-assisted wireless communication is an enabler for a vast number of applications. A key enabler for the applications is the routing protocol that directs the packets in the network. Routing packets in fully connected mobile ad hoc networks (MANETs) has been studied to a great extent, but the assumption on full connectivity is generally not valid in a real system. This case means that a practical routing protocol must handle intermittent connectivity and the absence of end-to-end connections. In this paper, we propose a geographical routing algorithm called location-aware routing for delay-tolerant networks (LAROD), enhanced with a location service, location dissemination service (LoDiS), which together are shown to suit an intermittently connected MANET (IC-MANET). Because location dissemination takes time in IC-MANETs, LAROD is designed to route packets with only partial knowledge of geographic position. To achieve low overhead, LAROD uses a beaconless strategy combined with a position-based resolution of bids when forwarding packets. LoDiS maintains a local database of node locations, which is updated using broadcast gossip combined with routing overhearing. The algorithms are evaluated under a realistic application, i.e., unmanned aerial vehicles deployed in a reconnaissance scenario, using the low-level packet simulator ns-2. The novelty of this paper is the illustration of sound design choices in a realistic application, with holistic choices in routing, location management, and the mobility model. This holistic approach justifies that the choice of maintaining a local database of node locations is both essential and feasible. The LAROD-LoDiS scheme is compared with a leading delay-tolerant routing algorithm (spray and wait) and is shown to have a competitive edge, both in terms of delivery ratio and overhead. For spray and wait, this case involved a new packet-level implementation in ns-2 as opposed to the original connection-level custom simulator."}
{"_id": "9430de4089bcfcfa7c26a8cca2412a8bfc3c2b4e", "title": "Low cost 60 GHz millimeter-wave microstrip patch antenna array using low-loss planar feeding scheme", "text": "A novel low cost feeding mechanism based upon new synthesized planar waveguide substrate integrated waveguide (SIW), is presented for microstrip antenna arrays (MPAs) at millimeter-wave (mmW) band to avoid feeding loss, specially the conduction loss. As an example, SIW-fed two dimensional (2D) 2\u00d72 MPA array @ 60 GHz is introduced. The presented antenna is simulated by HFSS to validate its performance for mmW applications. The simulated results show a broadside gain of 11.71 dB within an operating impedance bandwidth of 500 MHz around a center frequency of 59.95 GHz. Also, the simulated radiation efficiency of the developed antenna is better than 85%, which is higher than that of conventional mmW microstrip antenna array."}
{"_id": "be4118d4ef1eccb9f7817d16f8d3cf9bffb01fe7", "title": "Design of a wideband dual-polarized stacked patch antenna with high isolation and low cross polarization for X-band applications", "text": "The design and realization of a low-profile wideband dual-polarized patch antenna with high isolation and low cross polarization has been presented. The aperture-coupled antenna element has four layers of substrates. Two microstrip lines terminated with circular patches are printed on the backside of the bottom substrate for feeding the radiating patches, while on the other side is a ground plane with two offset H-shaped slots. One port is excited for horizontally polarized wave and the other one is for vertical polarization. Two slots are arranged in a `T' configuration to achieve a high isolation of over 35 dB. A square radiating patch is printed on the top layer of a Duroid substrate which is placed above the ground plane. A piece of Rohacell foam with a thickness of 3mm and a permittivity of 1.06 is located above the Duroid substrate. A top radiating patch is printed on the bottom side of a top Duroid substrate. Thus, a wideband performance can be obtained due to the multilayer structure and the impedance matching can be improved by adjusting the dimensions of the circular patches and the slots. Further to reduce the cross polarization, two identical elements with one rotated by 180\u00b0 are combined and each pair of single linearly polarized elements are fed out of phase. In addition, a four-element array with two mirrored two-element arrays has been investigated. The obtained results can confirm that, the proposed dual-polarized patch array can provide a wideband performance across 8.5-11 GHz with VSWR <; 2. The isolation between the two ports is over 35 dB and the cross polarization level is less than -35 dB over the operating frequency band. Therefore, the proposed design technique is promising for X-band SAR applications."}
{"_id": "32e9dafd4869e4dffc013507bfe6ef3764499d39", "title": "Human desire inference process and analysis", "text": "................................................................................................................... viii CHAPTER"}
{"_id": "43ddd35d16ded549bb73b2581cbd92e96a4628d2", "title": "Participation in School-Based Extracurricular Activities and Adolescent Adjustment", "text": "This paper examines the association between participation in school-based extracurricular activities (ECAs) and adolescent adjustment (drinking, marijuana use, grades, academic attitudes and academic aspirations) among students from six high schools. Three major issues were addressed: the potential confounding of selective EGA participation by better adjusted students and measures of adjustment, variability in the strength of the association between ECA participation and adjustment as a function of adolescent demographic characteristics and activity type, and the role of peers as mediators of the association between ECA participation and adjustment. Adolescents who participated in ECAs reported higher grades, more positive attitudes toward schools, and higher acac demic aspirations once demographic characteristics and prior adjustment were controlled. Alcohol and marijuana use were not independently associated with ECA participation. The EGA-adjustment association did not vary by demographic characteristics and did not appear to be mediated by peer characteristics. Those who participated in non-sport ECAs reported consistendy better adjustment than those who did not participate in ECAs and those who participate in sports."}
{"_id": "a26f893c224ed6e3df1f37479de0e774a4cae237", "title": "Scale-Space Filtering", "text": "The extrema in a signal and its first few derivatives provide a useful general-purpose qualitative description for many kinds of signals. A fundamental problem in computing such descriptions is scale: a derivative must be taken over some neighborhood, but there is seldom a principled basis for choosing its size. Scale-space filtering is a method that describes signals qualitatively, managing the ambiguity of scale in an organized and natural way. The signal is first expanded by convolution with gaussian masks over a continuum of sizes. This \"scale-space\" image is then collapsed, using its qualitative structure, into a tree providing a concise but complete qualitative description covering all scales of observation. The description is further refined by applying a stability criterion, to identify events that persist of large changes in scale."}
{"_id": "98d34f19611af09a3086f04b1c1e14aef3fa742e", "title": "Validating a Measurement Tool of Presen ce in Online Communities of Inquiry", "text": "This article examines work related to the development and validation of a measurement tool for the Community of Inquiry (CoI) framework in online settings. The framework consists of three elements: social presence, teaching presence and cognitive presence, each of which is integral to the instrument. The 34 item instrument, and thus framework, was tested after being administered at four institutions in the Summer of 2007. The article also includes a discussion of implications for the future use of the CoI survey and the CoI framework itself."}
{"_id": "67821632f22bf3030099b56919453da96c91393e", "title": "KUKA Control Toolbox", "text": "This article presents an open-source MATLAB toolbox for the motion control of KUKA robot manipulators. The KUKA Control Toolbox (KCT) is a collection of MATLAB functions developed at the Univer sity of Siena. The toolbox, which is compatible with all 6 degrees of freedom (DoF) small and low-payload KUKA robots that use the Eth.RSIXML, runs on a remote computer connected with the KUKA controller via transmission control Protocol/Internet Protocol (TCP/IP). KCT includes more than 40 functions, spanning operations such as forward and inverse kine matics computation, point-to-point joint and Cartesian control, trajectory gen eration, graphical display, three-dimensional (3-D) animation and diagnostics. Applicative examples show the flexibility of KCT and its easy interfacing with other toolboxes and external devices."}
{"_id": "a46244a7f0fc7cb8994e787dcaa32b9d8d847a48", "title": "Neural Adaptive controller for Magnetic levitation system", "text": "In this study a Neural Adaptive method is used for position control and identification of a Magnetic levitation system. This controller consists of three parts: PID controller, radial basis function (RBF) network controller and radial basis function (RBF) network identifier. The combination of controllers produces a stable system which adapts to optimize performance. It is shown that this technique can be successfully used to stabilize any chosen operating point of the system. All derived results are validated by computer simulation of a nonlinear mathematical model of the system."}
{"_id": "9c11a14fc59d55986e383da14a862c5aa5aedb82", "title": "Ultra-wideband low noise amplifier with shunt resistive feedback in 0.18\u00b5m CMOS process", "text": "A CMOS low noise amplifier (LNA) for ultra-wideband (UWB) systems is presented. The proposed LNA achieve wide operating bandwidth for 3\u201310.6 GHz by using resistive shunt feedback topology. Two stage amplifiers and an inter stage circuit are designed to achieve wider gain bandwidth. The shunt resistive feedback are employed in input and output stage to provide wideband input matching with low noise figure (NF). This work is designed and fabricated in TSMC 0.18\u00b5m CMOS process. The proposed UWB LNA achieves a measured flat gain 15 dB and has a noise figure of 4 dB over the entire band while consuming 21.5 mW of power. The measured third order intercept point IIP3 is 2.5 dBm."}
{"_id": "53f933bb0512ad3f84427faf45bb58d01403400c", "title": "Structure tensor Log-Euclidean statistical models for texture analysis", "text": "This paper presents local structure tensor (LST) based methods for extracting the anisotropy and orientation information characterizing a textured image sample. A Log-Euclidean (LE) multivariate Gaussian model is proposed for representing the marginal distribution of the LST field of a texture. An extended model is considered as well for describing its spatial dependencies. The potential of these approaches are tested in a rotation invariant content based image retrieval (CBIR) context. Experiments are conducted on real data presenting anisotropic textures: very high resolution (VHR) remote sensing maritime pine forest images and material images representing dense carbons at nanometric scale."}
{"_id": "85f5b8e3e0b2fb868528349e5032b0c2d20c7a34", "title": "Time Series Prediction: Forecasting the Future and Understanding the Past", "text": "Make more knowledge even in less time every day. You may not always spend your time and money to go abroad and get the experience and knowledge by yourself. Reading is a good alternative to do in getting this desirable knowledge and experience. You may gain many things from experiencing directly, but of course it will spend much money. So here, by reading time series prediction forecasting the future and understanding the past, you can take more advantages with limited budget."}
{"_id": "7fa024607295d5271f66bafc9fd917c720aba073", "title": "Natural Toxins for Use in Pest Management", "text": "Natural toxins are a source of new chemical classes of pesticides, as well as environmentally and toxicologically safer molecules than many of the currently used pesticides. Furthermore, they often have molecular target sites that are not exploited by currently marketed pesticides. There are highly successful products based on natural compounds in the major pesticide classes. These include the herbicide glufosinate (synthetic phosphinothricin), the spinosad insecticides, and the strobilurin fungicides. These and other examples of currently marketed natural product-based pesticides, as well as natural toxins that show promise as pesticides from our own research are discussed."}
{"_id": "c2c8292dad37adcdda3b0aba5db5e33e3222832e", "title": "Inventory Constrained Maritime Routing and Scheduling for Multi-Commodity Liquid Bulk", "text": ""}
{"_id": "599b7e1b4460c8ad77def2330ec76a2e0dfedb84", "title": "Robust Subspace Clustering via Smoothed Rank Approximation", "text": "Matrix rank minimizing subject to affine constraints arises in many application areas, ranging from signal processing to machine learning. Nuclear norm is a convex relaxation for this problem which can recover the rank exactly under some restricted and theoretically interesting conditions. However, for many real-world applications, nuclear norm approximation to the rank function can only produce a result far from the optimum. To seek a solution of higher accuracy than the nuclear norm, in this letter, we propose a rank approximation based on Logarithm-Determinant. We consider using this rank approximation for subspace clustering application. Our framework can model different kinds of errors and noise. Effective optimization strategy is developed with theoretical guarantee to converge to a stationary point. The proposed method gives promising results on face clustering and motion segmentation tasks compared to the state-of-the-art subspace clustering algorithms."}
{"_id": "5facbc5bd594f4faa0524d871d49ba6a6e956e17", "title": "Sparse subspace clustering", "text": "We propose a method based on sparse representation (SR) to cluster data drawn from multiple low-dimensional linear or affine subspaces embedded in a high-dimensional space. Our method is based on the fact that each point in a union of subspaces has a SR with respect to a dictionary formed by all other data points. In general, finding such a SR is NP hard. Our key contribution is to show that, under mild assumptions, the SR can be obtained `exactly' by using l1 optimization. The segmentation of the data is obtained by applying spectral clustering to a similarity matrix built from this SR. Our method can handle noise, outliers as well as missing data. We apply our subspace clustering algorithm to the problem of segmenting multiple motions in video. Experiments on 167 video sequences show that our approach significantly outperforms state-of-the-art methods."}
{"_id": "726d2d44a739769e90bb7388ed120efe84ea8147", "title": "Clustering and projected clustering with adaptive neighbors", "text": "Many clustering methods partition the data groups based on the input data similarity matrix. Thus, the clustering results highly depend on the data similarity learning. Because the similarity measurement and data clustering are often conducted in two separated steps, the learned data similarity may not be the optimal one for data clustering and lead to the suboptimal results. In this paper, we propose a novel clustering model to learn the data similarity matrix and clustering structure simultaneously. Our new model learns the data similarity matrix by assigning the adaptive and optimal neighbors for each data point based on the local distances. Meanwhile, the new rank constraint is imposed to the Laplacian matrix of the data similarity matrix, such that the connected components in the resulted similarity matrix are exactly equal to the cluster number. We derive an efficient algorithm to optimize the proposed challenging problem, and show the theoretical analysis on the connections between our method and the K-means clustering, and spectral clustering. We also further extend the new clustering model for the projected clustering to handle the high-dimensional data. Extensive empirical results on both synthetic data and real-world benchmark data sets show that our new clustering methods consistently outperforms the related clustering approaches."}
{"_id": "7561d26274daed0f7575073cebcebe6ba24d8192", "title": "Embedded Robust Visual Obstacle Detection on Autonomous Lawn Mowers", "text": "Currently, the only mass-market service robots are floor cleaners and lawn mowers. Although available for more than 20 years, they mostly lack intelligent functions from modern robot research. In particular, the obstacle detection and avoidance is typically a simple physical collision detection. In this work, we discuss a prototype autonomous lawn mower with camera-based non-contact obstacle avoidance. We devised a low-cost compact module consisting of color cameras and an ARM-based processing board, which can be added to an autonomous lawn mower with minimal effort. For testing our system, we conducted a field test with 20 prototype units distributed in eight European countries with a total mowing time of 3,494 hours. The results show that our proposed system is able to work without expert interaction for a full season and strongly reduces collision events while still keeping the good mowing performance. Furthermore, a questionnaire with the testers revealed that most people would favor the camera-based mower over a non-camera-based mower."}
{"_id": "9d32b9bf5f7ec86bc876216c4271f18fe68e7dcd", "title": "Online Object Tracking Based on CNN with Metropolis-Hasting Re-Sampling", "text": "Tracking-by-learning strategies have been effective in solving many challenging problems in visual tracking, in which the learning sample generation and labeling play important roles for final performance. Since the concern of deep learning based approaches has shown an impressive performance in different vision tasks, how to properly apply the learning model, such as CNN, to an online tracking framework is still challenging. In this paper, to overcome the overfitting problem caused by straight-forward incorporation, we propose an online tracking framework by constructing a CNN based adaptive appearance model to generate more reliable training data over time. With a reformative Metropolis-Hastings re-sampling scheme to reshape particles for a better state posterior representation during online learning, the proposed tracking outperforms most of the state-of-art trackers on challenging benchmark video sequences."}
{"_id": "20a74d8887c9a4caa9174f46f127a381a8083278", "title": "Avoiding revascularization with lifestyle changes: The Multicenter Lifestyle Demonstration Project.", "text": "The Multicenter Lifestyle Demonstration Project was designed to determine if comprehensive lifestyle changes can be a direct alternative to revascularization for selected patients without increasing cardiac events. A total of 333 patients completed this demonstration project (194 in the experimental group and 139 in the control group). We found that experimental group patients were able to avoid revascularization for at least 3 years by making comprehensive lifestyle changes at substantially lower cost without increasing cardiac morbidity and mortality. These patients reported reductions in angina comparable with what can be achieved with revascularization."}
{"_id": "98438f5238c1171c4c7c60d78109509c7ee9d08b", "title": "Security risk assessment framework for cloud computing environments", "text": "Cloud computing has become today\u2019s most common technology buzzword. Despite the promises of cloud computing to decrease computing implementation costs and deliver computing as a service, which allows clients to pay only for what they need and use, cloud computing also raises many security concerns. Most popular risk assessment standards, such as ISO27005, NIST SP800-30, and AS/NZS 4360, assume that an organization\u2019s assets are fully managed by the organization itself and that all security management processes are imposed by the organization. These assumptions, however, do not apply to cloud computing environments. Hence, this paper proposes a security risk assessment framework that can enable cloud service providers to assess security risks in the cloud computing environment and allow cloud clients to contribute in risk assessment. The proposed framework provides a more realistic and accurate risk assessment outcome by considering the cloud clients\u2019 evaluation of security risk factors and avoiding the complexity that can result from the involvement of clients in whole risk assessment process. Copyright \u00a9 2014 John Wiley & Sons, Ltd."}
{"_id": "6d2467d75a21d94a4e172d647564b5e238ddfdbe", "title": "A Sentence Interaction Network for Modeling Dependence between Sentences", "text": "Modeling interactions between two sentences is crucial for a number of natural language processing tasks including Answer Selection, Dialogue Act Analysis, etc. While deep learning methods like Recurrent Neural Network or Convolutional Neural Network have been proved to be powerful for sentence modeling, prior studies paid less attention on interactions between sentences. In this work, we propose a Sentence Interaction Network (SIN) for modeling the complex interactions between two sentences. By introducing \u201cinteraction states\u201d for word and phrase pairs, SIN is powerful and flexible in capturing sentence interactions for different tasks. We obtain significant improvements on Answer Selection and Dialogue Act Analysis without any feature engineering."}
{"_id": "77acdfa74c7c39cc36dee8da63e30aeffdd19e22", "title": "On-road vehicle detection: a review", "text": "Developing on-board automotive driver assistance systems aiming to alert drivers about driving environments, and possible collision with other vehicles has attracted a lot of attention lately. In these systems, robust and reliable vehicle detection is a critical step. This paper presents a review of recent vision-based on-road vehicle detection systems. Our focus is on systems where the camera is mounted on the vehicle rather than being fixed such as in traffic/driveway monitoring systems. First, we discuss the problem of on-road vehicle detection using optical sensors followed by a brief review of intelligent vehicle research worldwide. Then, we discuss active and passive sensors to set the stage for vision-based vehicle detection. Methods aiming to quickly hypothesize the location of vehicles in an image as well as to verify the hypothesized locations are reviewed next. Integrating detection with tracking is also reviewed to illustrate the benefits of exploiting temporal continuity for vehicle detection. Finally, we present a critical overview of the methods discussed, we assess their potential for future deployment, and we present directions for future research."}
{"_id": "ec08284de3ac57f72e3aa931881808c322be5edc", "title": "Multi-Player Alpha-Beta Pruning", "text": "Korf, R.E., Multi-player alpha-beta pruning (Research Note), Artificial Intelligence 48 (1991) 99-111. We consider the generalization of minimax search with alpha-beta pruning to non-cooperative, perfect-information games with more than two players. The minimax algorithm was generalized in [2] to the maxn algorithm applied to vectors of n-tuples representing the evaluations for each of the players. If we assume an upper bound on the sum of the evaluations for each player, and a lower bound on each individual evaluation, then shallow alpha-beta pruning is possible, but not deep pruning. In the best case, the asymptotic branching factor is reduced to (1 + 4bv'~b-\"s~-3)/2. In the average case, however, pruning does not reduce the asymptotic branching factor. Thus, alpha-beta pruning is found to be effective only in the special case of two-player games. In addition, we show that it is an optimal directional algorithm for two players."}
{"_id": "0ffaf68f4399998864d6d9835c7bd8240d322b49", "title": "LogicBlox, Platform and Language: A Tutorial", "text": "The modern enterprise software stack\u2014a collection of applications supporting bookkeeping, analytics, planning, and forecasting for enterprise data\u2014is in danger of collapsing under its own weight. The task of building and maintaining enterprise software is tedious and laborious; applications are cumbersome for end-users; and adapting to new computing hardware and infrastructures is difficult. We believe that much of the complexity in today\u2019s architecture is accidental, rather than inherent. This tutorial provides an overview of the LogicBlox platform, a ambitious redesign of the enterprise software stack centered around a unified declarative programming model, based on an extended version of Datalog. 1 The Enterprise Hairball Modern enterprise applications involve an enormously complex technology stack composed of many disparate systems programmed in a hodgepodge of programming languages. We refer to this stack, depicted in Figure 1, as \u201cthe enterprise hairball.\u201d First, there is an online transaction processing (OLTP) layer that performs bookkeeping of the core business data for an enterprise. Such data could include the current product catalog, recent sales figures, current outstanding invoices, customer account balances, and so forth. This OLTP layer typically includes a relational DBMS\u2014programmed in a combination of a query language (SQL), a stored procedure language (like PL/SQL or TSQL), and a batch programming language like Pro*C\u2014an application server, such as Oracle WebLogic [35], IBM WebSphere [36], or SAP NetWeaver [26]\u2014programmed in an object-oriented language like Java, C#, or ABAP\u2014and a web browser front-end, programmed using HTML and Javascript. In order to track the performance of the enterprise over time, a second business intelligence (BI) layer typically holds five to ten years of historical information that was originally recorded in the OLTP layer and performs read-only analyses on this information. This layer typically includes another DBMS (or, more commonly, a BI variant like Teradata [33] or IBM Netezza [25]) along with a BI application server such as Microstrategy [23], SAP BusinessObjects [5], or IBM Cognos [7], programmed using a vendor-specific declarative language. Fig. 1: Enterprise software components and technology stack example. Data is moved from the transaction layer to the BI layer via so-called extracttransform-load (ETL) tools that come with their own tool-specific programming language. Finally, in order to plan the future actions of an enterprise, there is a planning layer, which supports a class of read-write use cases for which the BI layer is unsuitable. This layer typically includes a planning application server, like Oracle Hyperion [13] or IBM Cognos TM1 [34], that are programmed using a vendor-specific declarative language or a standard language like MDX [21], and spreadsheets like Microsoft Excel that are programmed in a vendor-specific formula language (e.g. A1 = B17 D12) and optionally a scripting language like VBA. In order to enhance or automate decisions made in the planning layer, statistical predictive models are often prototyped using modeling tools like SAS, Matlab, SPSS, or R, and then rewritten for production in C++ or Java so they can be embedded in the OLTP or OLAP layers. 
In summary, enterprise software developed according to the hairball model must be programmed using a dozen or so different programming languages, running in almost as many system-scale components. Some of the languages are imperative (both object-oriented and not) and some are declarative (every possible flavor). Most of the languages used are vendor-specific and tied to the component in which they run (e.g. ABAP, Excel, R, OPL, etc.). Even the languages that are based on open standards are not easily ported from one component to another because of significant variations in performance or because of vendor specific extensions (e.g., the same SQL query on Oracle will perform very differently on DB2). Recent innovations in infrastructure technology for supporting much larger numbers of users (Web applications) and big data (predictive analytics), including NoSQL [28], NewSQL [27] and analytic databases, have introduced even more components into the technology stack described above, and have not helped reduce its overall complexity. This complexity makes enterprise software hard to build, hard to implement, and hard to change. Simple changes\u2014like extending a product identifier by 3 characters or an employee identifier by 1 digit\u2014often necessitate modifications to most of the different components in the stack, requiring thousands of days of person effort and costing many millions of dollars. An extreme example of the problems caused by this accidental complexity occurred in the lead-up to the year 2000, where the simple problem of adding two digits to the year field (the Y2K problem) cost humanity over $300 billion dollars [24]. Moreover, as a result of the time required for such changes, individuals within an enterprise often resort to ad hoc, error-prone methods to perform the calculations needed in order to make timely business decisions. For example, they often roll their own extensions using spreadsheets, where they incompletely or incorrectly implement such calculations based on copies of the data that is not kept in sync with the OLTP database and thus may no longer accurately reflect the state of their enterprise."}
{"_id": "10e5e1aaec68a679f88c2bf5ef24b8a128d94f4d", "title": "Attitudes toward Vegans and Vegetarians : The Role of Anticipated Moral Reproach and Dissonance", "text": "This study attempted to determine if anticipated moral reproach and cognitive dissonance explain negative attitudes toward vegans and vegetarians, and if vegans are subject to more negative attitudes than vegetarians. The sample consisted of 65 participants (women =41, men =24) who were randomly assigned into one of four conditions. Each participant read a vignette about a person who consumed either a vegan, vegetarian, plant-based or omnivore diet. They then completed a series of questionnaires that recorded their attitudes towards the person in the vignette, anticipated moral reproach, and cognitive dissonance. The results indicated that for the observed general attitudes for the vegan, vegetarian and plant-based diet conditions were more negative than for the omnivore condition, with plant-based diet being the most negative. For moral reproach the results indicated that higher levels of moral reproach contributed to more negative attitudes, but only in the omnivore condition. Whereas the liner regression indicated that cognitive dissonance was only significant predictor of attitudes in the vegan condition. The findings of this study indicated that diet can have an impact on attitudes, and that the diet its self may have a larger impact than the labels applied. DIET AND ATTITUDES 3 Attitudes Toward Vegans and Vegetarians: The Role of Anticipated Moral Reproach and Dissonance Although the majority of North Americans eat an omnivore diet, veganism and vegetarianism have in recent years experienced an increase in popularity (Duchene & Jackson, 2017). As a result, omnivores are believed to be more likely to discuss their attitudes towards vegans and vegetarians, due to the increased exposure to the groups, both in public dialogue and interpersonally among acquaintances. Cole (2011) examined all British newspapers published in the year 2007 that mentioned the words vegan, vegans or veganism. The results showed that the majority of papers published expressed a negative attitude towards veganism. Additionally, any positive papers published looked at only food suggesting that there is a limited perspective provided by the media on veganism. Cole (2011) provides evidence in support that there are systematic negative associations against vegans. The purpose of this study will be to look at possible explanatory theories that could explain why veganism is observed to be viewed more negatively then vegetarianism. The theories that will be used are moral reproach (Minson & Monin, 2012) and cognitive dissonance (Loughnan, Haslam, & Bastian, 2010). This is an important topic to research as veganism and vegetarianism are theorized to be concepts that are strongly tied to the individual\u2019s identity and are seen to be as important to the individual as their sexual orientation, religion, and/ or ethnicity (Wright, 2015). Furthermore, this study may also help to explain why strategies that attempt to encourage people to adopt a vegetarian or vegan diet are often unsuccessful (Duchene & Jackson, 2017). It has been found that personal factors are the major contributor to choosing to become a vegetarian. These personal factors are altered by the perceptions held by the dominant group, omnivores (Hussar & Harris, 2010). 
By understanding why omnivores have these DIET AND ATTITUDES 4 perceptions about vegans and vegetarians, will lead to insight into why these negative attitudes are occurring. Comparing Veganism and Vegetarianism in Terms of Attitudes and Diet To explore the potential difference in attitudes toward vegetarians and vegans it is important to clarify what the difference is between the two groups. Vegetarianism is defined as a diet that doesn't involve the consumption of meat or by-products of animal slaughter (Wright, 2015). In comparison, veganism consists of a diet without any animal-based products, or any item containing an ingredient derived from an animal (Wright, 2015). Furthermore, vegans place more stress on the ethical tie to animal rights then vegetarians. Animal rights scholar, Carol Adams defines veganism as \"an ethical stance based on compassion for all beings\" (Adams, 2010, pp.113) and argues that being vegan is about more than adhering to a specific diet. Further evidence to support that veganism is distinctly different from vegetarianism can be found by looking at the groups\u2019 origins. Historically, veganism was founded from a group of vegetarians, who no longer wanted to be seen as part of vegetarianism but rather as their own group (Adams, 2010). This supports the notion that veganism is distinctly different from vegetarianism, as it was created with the purpose to be different then vegetarianism. Evidently there are some clear differences between vegetarians and vegans. It is important to translate how these differences are believed to lead to differing attitudes towards vegans and vegetarians. To understand why vegetarianism may be viewed more positivity then veganism, one commonly offered explanation is that because vegetarians still consume dairy products, the group is viewed as having less polarizing views (Adams, 2010). Elaborating on this statement, Rothgerber found that 38% of people who identified as vegetarian reported consuming meat DIET AND ATTITUDES 5 products to some capacity (2014a). It has been theorised that the emergence of semivegetarianism has lead to vegetarianism being perceived as less negative then veganism. Semivegetarianism is a term that refers to an individual who identifies as being a vegetarian but still consumes meat (Adams, 2010; Rothgerber, 2014a; Ruby, 2012). The amount of meat consumed, and the degree of willingness varies among semi-vegetarians. The presence of semivegetarianism has lead to a blurred conceptualization of vegetarianism resulting in an increase in the perceived homogeneity between vegetarianism and omnivores (Adams, 2010). In support of this, Rothgerber found that semi-vegetarians were closer to omnivores than strict vegetarians regarding the level of disgust shown towards meat (2014a). In summary, this difference could alter the perception that the similarity between omnivores and vegetarians is greater than the similarity between omnivores and vegans. Previous research has been conducted looking at attitude differences between vegetarians and omnivores. The reasoning that a vegetarian gives for maintaining the diet has been found to influence omnivore attitudes towards the group. Generally, the reasons for vegetarianism have been grouped into one of two categories. Health vegetarians tend to cite personal reasons for their membership, such as citing the health benefits or a food allergy. In contrast, ethical vegetarians cite external reasons, such as animal welfare. 
It has been found that when vegetarians reported being morally motivated, it resulted in a harsher judgement then when the reason reported was personal motivation (Hussar & Harris, 2010). In summary, it is likely that the observed differences between health and ethical vegetarians may translate to comparing vegetarians and vegans, as veganism is tied more to morality then vegetarianism. Another area that is important to look at is difference between vegetarianism veganism, and conscious omnivores. As it is important to address if it is the association to DIET AND ATTITUDES 6 environmentalism or if it is the act of not eating meat that effects attitudes. Rothgerber (2015) states that vegetarianism is a specific form of environmentalism. As well, Bashir et. al (2013) found that individuals were less likely to affiliate with someone if they engaged in a stereotypical environmental activity such as engaging in environmental activism. However, research has shown that the diet that a person eats has a larger impact on attitudes then environmentalism. Rothgerber (2015) compared conscious omnivores to vegans and vegetarians to see how much of an effect the diet had on attitudes towards animals. Conscious omnivores are omnivores who limit their animal consumption and attempt to avoid consuming commercial meat and instead choose animals that are grass feed, or raised more humanely (Rothgerber,2015). The findings of the study were that conscious omnivores scored significantly lower than vegetarianism on both animal favourability and disgust towards meat. The findings also suggest that conscious omnivores conceptually aren\u2019t a group of transitioning vegetarians but rather a separate group. Also, conscious omnivores were found not to put as much emphasis on their eating habits as being a part of their social identity, aligning with the attitudes of normal omnivores (Rothgerber, 2015). In conclusion, it is evident that there are some characteristics of vegetarianism and veganism that are different then omnivore environmentalists. These differences likely contribute to the differences in perceptions that are experienced by omnivores. To summarise it is evident that there is significant evidence to support that there is a difference in the perceptions of vegans and vegetarians that could contribute to a difference in attitudes between the two groups. This paper will now move to discuss some theories that have been applied to these groups in the past to potentially explain this phenomenon. Anticipated Moral Reproach Towards Vegans and Vegetarians DIET AND ATTITUDES 7 Anticipated moral reproach is defined as believing that an individual who holds a different moral stance is judging your own stance as being immoral (Minson & Monin, 2012). This perception of being judged leads to the individual developing a negative attitude towards the other group. In terms of group attitudes, anticipated moral reproach has been used to explain why, when a group stresses overly moral behaviour, it is given the derogatory label of a dogooder or goody-two-shoes. Though not inherently negative, terms such as do-gooder are applied to the individual in a derogatory manner to express their dislike of the group (Minson & Monin, 2012"}
{"_id": "b98063ae46f00587a63547ce753c515766db4fbb", "title": "Dual Band Microstrip Patch Antenna for Sar Applications", "text": "Microstrip patch antennas offer the advantages of thin profile, light weight, low cost, ease of fabrication and compatibility with integrated circuitry, so the antennas are widely used to satisfy demands for polarization diversity and dual-frequency. This paper presents a coaxilly-fed single-layer compact dual band (Cand X-band) microstrip patch antenna for achieving dual-polarized radiation suitable for applications in airborne synthetic aperture radar (SAR) systems. The designed antenna consists of three rectangular patches which are overlapped along their diagonals. Full-wave electromagnetic simulations are performed to accurately predict the frequency response of the antenna. The fabricated antenna achieves an impedance bandwidth of 154 MHz (f0 = 6.83 GHz) and 209 MHz (f0 = 9.73 GHz) for VSWR < 2. Simultaneous use of both frequencies should drastically improve data collection and knowledge of the targets in the SAR system."}
{"_id": "8c8902aefbb004c5b496780f6dfd810a91d30dce", "title": "Epidemiological features of chronic low-back pain", "text": "Although the literature is filled with information about the prevalence and incidence of back pain in general, there is less information about chronic back pain, partly because of a lack of agreement about definition. Chronic back pain is sometimes defined as back pain that lasts for longer than 7-12 weeks. Others define it as pain that lasts beyond the expected period of healing, and acknowledge that chronic pain may not have well-defined underlying pathological causes. Others classify frequently recurring back pain as chronic pain since it intermittently affects an individual over a long period. Most national insurance and industrial sources of data include only those individuals in whom symptoms result in loss of days at work or other disability. Thus, even less is known about the epidemiology of chronic low-back pain with no associated work disability or compensation. Chronic low-back pain has also become a diagnosis of convenience for many people who are actually disabled for socioeconomic, work-related, or psychological reasons. In fact, some people argue that chronic disability in back pain is primarily related to a psychosocial dysfunction. Because the validity and reliability of some of the existing data are uncertain, caution is needed in an assessment of the information on this type of pain."}
{"_id": "96bcc8e5dec0df136774aca1e5b5e3d565967505", "title": "Human-Inspired Neurorobotic System for Classifying Surface Textures by Touch", "text": "Giving robots the ability to classify surface textures requires appropriate sensors and algorithms. Inspired by the biology of human tactile perception, we implement a neurorobotic texture classifier with a recurrent spiking neural network, using a novel semisupervised approach for classifying dynamic stimuli. Input to the network is supplied by accelerometers mounted on a robotic arm. The sensor data are encoded by a heterogeneous population of neurons, modeled to match the spiking activity of mechanoreceptor cells. This activity is convolved by a hidden layer using bandpass filters to extract nonlinear frequency information from the spike trains. The resulting high-dimensional feature representation is then continuously classified using a neurally implemented support vector machine. We demonstrate that our system classifies 18 metal surface textures scanned in two opposite directions at a constant velocity. We also demonstrate that our approach significantly improves upon a baseline model that does not use the described feature extraction. This method can be performed in real-time using neuromorphic hardware, and can be extended to other applications that process dynamic stimuli online."}
{"_id": "c002ab1190262167d8072442eb7565d1eb204a6f", "title": "Determination of differential leakage factors in electrical machines with non-symmetrical full and dead-coil windings", "text": "In this paper G\u00f6rges polygons are used in conjunction with masses geometry to find an easy and affordable way to compute the differential leakage factor of non symmetrical full and dead coil winding. By following the traditional way, the use of the Ossanna's infinite series which has to be obviously truncated under the bound of a predetermined accuracy is mandatory. In the presented method no infinite series is instead required. An example is then shown and discussed to demonstrate practically the effectiveness of the proposed method."}
{"_id": "9176e427fe49d9a65417e1a140dd56178b6e3c1a", "title": "RUM Extractor: A Facebook Extractor for Data Analysis", "text": "Social Network Analysis (SNA) is a field of study that focuses on analyzing user profiles and participations on social network channels in order to model relationships between people and to predict certain behaviors or knowledge. To achieve their goals, researchers, interested in SNA, have to extract content and structure from the numerous social networks available today. Existing tools, which help in this task, often require substantial pre-processing or good programming skills which may not be available for all SNA researchers. This paper describes RUM, a data extraction tool which allows researchers to easily extract several types of content and structure that are available on Facebook pages. Consequently, the extracted data can be saved and analyzed. RUM Extractor is easy to set up and use, and it gives flexible options to users to specify the type and amount of content and structure they want to retrieve. The paper also demonstrates how RUM can be exploited by collecting and further analyzing data collected from two popular Arabic news pages."}
{"_id": "4f9744358d7446f693d88c8e25828c67610bd333", "title": "Decentralized frequency control of a DDG-PV Microgrid in islanded mode", "text": "This paper presents an innovative approach to control the frequency in Diesel-Driven Generator (DDG)-Photovoltaic (PV) Microgrid (MG) in islanded mode. The common approach is based on hierarchical control: primary control and secondary control. The conventional primary controller is a P-controller. Compared to this, we propose a novel primary controller for PV to mimic the dynamic behavior of the conventional synchronous generator (SG) so that more inertia in the MG can be obtained. The proposed secondary control is accomplished decentralized, too, so that no communication links for the real-time control are required and the single point of failure can be avoided. Although in this approach every controller works locally, the controllers and their parameters are designed centrally. This means, the controllers can be appropriately adjusted, when the parameters or the topology of the system change. The parameters of the controllers are optimized by minimizing the H-inf-norm using a known non-smooth method."}
{"_id": "358d2a02275109c250f74f8af150d42eb75f7b5f", "title": "TinyECC: A Configurable Library for Elliptic Curve Cryptography in Wireless Sensor Networks", "text": "Public Key Cryptography (PKC) has been the enabling technology underlying many security services and protocols in traditional networks such as the Internet. In the context of wireless sensor networks, elliptic curve cryptography (ECC), one of the most efficient types of PKC, is being investigated to provide PKC supportin sensor network applications so that the existing PKC-based solutions can be exploited. This paper presents the design, implementation, and evaluation of TinyECC, a configurable library for ECC operations in wireless sensor networks. The primary objective of TinyECC is to provide a ready-to-use, publicly available software package for ECC-based PKC operations that can be flexibly configured and integrated into sensor network applications. TinyECC provides a number of optimization switches, which can turn specific optimizations on or off based on developers' needs. Different combinations of the optimizations have different execution time andresource consumptions, giving developers great flexibility in integrating TinyECC into sensor network applications. This paperalso reports the experimental evaluation of TinyECC on several common sensor platforms, including MICAz, Tmote Sky, and Imote2. The evaluation results show the impacts of individual optimizations on the execution time and resource consumptions, and give the most computationally efficient and the most storage efficient configuration of TinyECC."}
{"_id": "e2ed878b43065d7335bb7007fefe2fd5d0919052", "title": "Beyond the 'digital natives' debate: Towards a more nuanced understanding of students' technology experiences", "text": "The idea of the \u2018digital natives\u2019, a generation of tech-savvy young people immersed in digital technologies for which current education systems cannot cater, has gained widespread popularity on the basis of claims rather than evidence. Recent research has shown flaws in the argument that there is an identifiable generation or even a single type of highly adept technology user. For educators, the diversity revealed by these studies provides valuable insights into students\u2019 experiences of technology inside and outside formal education. While this body of work provides a preliminary understanding, it also highlights subtleties and complexities that require further investigation. It suggests, for example, that we must go beyond simple dichotomies evident in the digital natives debate to develop a more sophisticated understanding of our students\u2019 experiences of technology. Using a review of recent research findings as a starting point, this paper identifies some key issues for educational researchers, offers new ways of conceptualizing key ideas using theoretical constructs from Castells, Bourdieu and Bernstein, and makes a case for how we need to develop the debate in order to advance our understanding."}
{"_id": "3dcfafa8116ad728b842c07ebf0f1bea9745dead", "title": "A Frame of Mind: Using Statistical Models for Detection of Framing and Agenda Setting Campaigns", "text": "Framing is a sophisticated form of discourse in which the speaker tries to induce a cognitive bias through consistent linkage between a topic and a specific context (frame). We build on political science and communication theory and use probabilistic topic models combined with time series regression analysis (autoregressive distributed-lag models) to gain insights about the language dynamics in the political processes. Processing four years of public statements issued by members of the U.S. Congress, our results provide a glimpse into the complex dynamic processes of framing, attention shifts and agenda setting, commonly known as \u2018spin\u2019. We further provide new evidence for the divergence in party discipline in U.S. politics."}
{"_id": "4a6af1ca3745941f5520435473f1f90f8157577c", "title": "Deterministic compressed-sensing matrices: Where Toeplitz meets Golay", "text": "Recently, the statistical restricted isometry property (STRIP) has been formulated to analyze the performance of deterministic sampling matrices for compressed sensing. In this paper, a class of deterministic matrices which satisfy STRIP with overwhelming probability are proposed, by taking advantage of concentration inequalities using Stein's method. These matrices, called orthogonal symmetric Toeplitz matrices (OSTM), guarantee successful recovery of all but an exponentially small fraction of K-sparse signals. Such matrices are deterministic, Toeplitz, and easy to generate. We derive the STRIP performance bound by exploiting the specific properties of OSTM, and obtain the near-optimal bound by setting the underlying sign sequence of OSTM as the Golay sequence. Simulation results show that these deterministic sensing matrices can offer reconstruction performance similar to that of random matrices."}
{"_id": "5b3fe72edd2c02a2ae553d40e70010e7257be0f7", "title": "Perceptual Control Theory 1 Perceptual Control Theory A Model for Understanding the Mechanisms and Phenomena of Control", "text": "Perceptual Control Theory (PCT) provides a general theory of functioning for organisms. At the conceptual core of the theory is the observation that living things control the perceived environment by means of their behavior. Consequently, the phenomenon of control takes center stage in PCT, with observable behavior playing an important but supporting role. The first part of the paper explains how the PCT model works. This explanation includes a definition of \u201ccontrol\u201d as well as the basic equations from which one can see what is required for control to be possible. The second part of the paper describes demonstrations that the reader can download from the Internet and run, so as to learn the basics of control by experiencing and verifying the phenomenon directly. The third part of the paper shows examples of the application of PCT to different areas of psychological research including learning, developmental psychology, social psychology, and psychotherapy. This summary of the current state of the field celebrates the 50th Anniversary of the first major publication in PCT (Powers, Clark & MacFarland, 1960)."}
{"_id": "d1d044d0a94f6d8ad1a00918abededbd2a41f1bd", "title": "Intelligent Wireless Patient Monitoring and Tracking System (Using Sensor Network and Wireless Communication).", "text": "* Corresponding author: Ch.Sandeep Kumar Subudhi Abstract Aim of our work is to monitor the human body temperature, blood pressure (BP), Pulse Rate and ECG and tracking the patient location. The human body temperature, BP, Pulse Rate and ECG are detected in the working environment; this can be sensed by using respective sensors. The sensed information is send to the PIC16F877 microcontroller through signal conditioning circuit in the patient unit. A desired amount of sensor value is set and if it is exceeded preliminary steps should be taken by the indicating by buzzer.The sensor information will be transmitted from the patient unit to the main controller unit with the help of Zigbee communication system which is connected with the microcontrollers in the both units. The main controller unit will send those sensed data as well as the location of that patient by the help of GPS Module to the observer/doctor. The observer/doctor can receive the SMS sent by GSM module and further decision can be taken. The message is sent to a mobile phone using Global system mobile (GSM) Modem. MAX232 was a driver between microcontroller and modem."}
{"_id": "143bb7f9db4c35ffcafe972a5842ab0c45e66849", "title": "On speaker adaptation of long short-term memory recurrent neural networks", "text": "Long Short-Term Memory (LSTM) is a recurrent neural network (RNN) architecture specializing in modeling long-range temporal dynamics. On acoustic modeling tasks, LSTM-RNNs have shown better performance than DNNs and conventional RNNs. In this paper, we conduct an extensive study on speaker adaptation of LSTM-RNNs. Speaker adaptation helps to reduce the mismatch between acoustic models and testing speakers. We have two main goals for this study. First, on a benchmark dataset, the existing DNN adaptation techniques are evaluated on the adaptation of LSTM-RNNs. We observe that LSTMRNNs can be effectively adapted by using speaker-adaptive (SA) front-end, or by inserting speaker-dependent (SD) layers. Second, we propose two adaptation approaches that implement the SD-layer-insertion idea specifically for LSTM-RNNs. Using these approaches, speaker adaptation improves word error rates by 3-4% relative over a strong LSTM-RNN baseline. This improvement is enlarged to 6-7% if we exploit SA features for further adaptation."}
{"_id": "4d6e574c76e4a5ebbdb5f6e382d06c058090e4b7", "title": "KL-divergence regularized deep neural network adaptation for improved large vocabulary speech recognition", "text": "We propose a novel regularized adaptation technique for context dependent deep neural network hidden Markov models (CD-DNN-HMMs). The CD-DNN-HMM has a large output layer and many large hidden layers, each with thousands of neurons. The huge number of parameters in the CD-DNN-HMM makes adaptation a challenging task, esp. when the adaptation set is small. The technique developed in this paper adapts the model conservatively by forcing the senone distribution estimated from the adapted model to be close to that from the unadapted model. This constraint is realized by adding Kullback-Leibler divergence (KLD) regularization to the adaptation criterion. We show that applying this regularization is equivalent to changing the target distribution in the conventional backpropagation algorithm. Experiments on Xbox voice search, short message dictation, and Switchboard and lecture speech transcription tasks demonstrate that the proposed adaptation technique can provide 2%-30% relative error reduction against the already very strong speaker independent CD-DNN-HMM systems using different adaptation sets under both supervised and unsupervised adaptation setups."}
{"_id": "44e0966f6f09bc30be0a66d5737d298e01c6a6ad", "title": "Multi-Graph Learning with Positive and Unlabeled Bags", "text": "In this paper, we formulate a new multi-graph learning task with only positive and unlabeled bags, where labels are only available for bags but not for individual graphs inside the bag. This problem setting raises significant challenges because bag-of-graph setting does not have features to directly represent graph data, and no negative bags exits for deriving discriminative classification models. To solve the challenge, we propose a puMGL learning framework which relies on two iteratively combined processes for multigraph learning: (1) deriving features to represent graphs for learning; and (2) deriving discriminative models with only positive and unlabeled graph bags. For the former, we derive a subgraph scoring criterion to select a set of informative subgraphs to convert each graph into a feature space. To handle unlabeled bags, we assign a weight value to each bag and use the adjusted weight values to select most promising unlabeled bags as negative bags. A margin graph pool (MGP), which contains some representative graphs from positive bags and identified negative bags, is used for selecting subgraphs and training graph classifiers. The iterative subgraph scoring, bag weight updating, and MGP based graph classification forms a closed loop to find optimal subgraphs and most suitable unlabeled bags for multi-graph learning. Experiments and comparisons on real-world multigraph data demonstrate the algorithm performance."}
{"_id": "11ecc1320779a1c20fcc879780765f80c496ca32", "title": "Superimposed Sparse Parameter Classifiers for Face Recognition", "text": "In this paper, a novel classifier, called superimposed sparse parameter (SSP) classifier is proposed for face recognition. SSP is motivated by two phase test sample sparse representation (TPTSSR) and linear regression classification (LRC), which can be treated as the extended of sparse representation classification (SRC). SRC uses all the train samples to produce the sparse representation vector for classification. The LRC, which can be interpreted as L2-norm sparse representation, uses the distances between the test sample and the class subspaces for classification. TPTSSR is also L2-norm sparse representation and uses two phase to compute the distance for classification. Instead of the distances, the SSP classifier employs the SSPs, which can be expressed as the sum of the linear regression parameters of each class in iterations, is used for face classification. Further, the fast SSP (FSSP) classifier is also suggested to reduce the computation cost. A mass of experiments on Georgia Tech face database, ORL face database, CVL face database, AR face database, and CASIA face database are used to evaluate the proposed algorithms. The experimental results demonstrate that the proposed methods achieve better recognition rate than the LRC, SRC, collaborative representation-based classification, regularized robust coding, relaxed collaborative representation, support vector machine, and TPTSSR for face recognition under various conditions."}
{"_id": "7cdb43b7283b2c03a4984cc471ec3402d2b957bc", "title": "Face Recognition Machine Vision System Using Eigenfaces", "text": "Face Recognition is a common problem in Machine Learning. This technology has already been widely used in our lives. For example, Facebook can automatically tag people\u2019s faces in images, and also some mobile devices use face recognition to protect private security. Face images comes with different background, variant illumination, different facial expression and occlusion. There are a large number of approaches for the face recognition. Different approaches for face recognition have been experimented with specific databases which consist of single type, format and composition of image. Doing so, these approaches don\u2019t suit with different face databases. One of the basic face recognition techniques is eigenface which is quite simple, efficient, and yields generally good results in controlled circumstances. So, this paper presents an experimental performance comparison of face recognition using Principal Component Analysis (PCA) and Normalized Principal Component Analysis (NPCA). The experiments are carried out on the ORL (ATT) and Indian face database (IFD) which contain variability in expression, pose, and facial details. The results obtained for the two methods have been compared by varying the number of training images. MATLAB is used for implementing algorithms also."}
{"_id": "46e582ac44c599d4e426220e4a607287916ad064", "title": "Analog Imagery in Mental Model Reasoning: Depictive Models", "text": "We investigated whether people can use analog imagery to model the behavior of a simple mechanical interaction. Subjects saw a static computer display of two touching gears that had different diameters. Their task was to determine whether marks on each gear would meet if the gears rotated inward. This task added a problem of coordination to the typical analog rotation task in that the gears had a physical interdependency; the angular velocity of one gear depended on the angular velocity of the other gear. In the first experiment, we found the linear relationship between response time and angular disparity that indicates analog imagery. In the second experiment, we found that people can also solve the problem through a non-analog, visual comparison. We also found that people of varying spatial ability could switch between analog and non-analog solutions if instructed to do so. In the third experiment, we examined whether the elicitation of physical knowledge would influence solution strategies. To do so, we manipulated the visual realism of the gear display. Subjects who saw the most realistic gears coordinated their transformations by using the surfaces of the gears, as though they were relying on the friction connecting the surfaces. Subjects who saw more schematic displays relied on analytic strategies, such as comparing the ratios made by the angles and/or diameters of the two gears. To explain the relationship between spatial and physical knowledge found in the experiments, we constructed a computer simulation of what we call depictive modeling. In a depictive model, general spatial knowledge and context-sensitive physical knowledge have the same ontology. This is different from prior simulations in which a non-analog representation would be needed to coordinate the analog behaviors of physical objects. In our simulation, the inference that coordinates the gear motions emerges from the analog rotations themselves. We suggest that mental depictions create a bridge between imagery and mental model research by positing the referent as the primary conceptual entity."}
{"_id": "d1c59c0f3fce3193970265994b7508df8714d799", "title": "Mining Public Transport User Behaviour from Smart Card Data", "text": "In urban public transport, smart card data is made of millions of observations of users boarding vehicles over the network across several days. The issue addresses whether data mining techniques can be used to study user behaviour from these observations. This must be done with the help of transportation planning knowledge. Hence, this paper presents a common \u201ctransportation planning/data mining\u201d methodology for user behaviour analysis. Experiments were conducted on data from a Canadian transit authority. This experience demonstrates that a combination of planning knowledge and data mining tool allows producing travel behaviours indicators, mainly regarding regularity and daily patterns, from data issued from operational and management system. Results show that the public transport users of this study can rapidly be divided in four major behavioural groups, whatever type of ticket they use. Copyright \u00a9 2006 IFAC."}
{"_id": "603bedfb454acfb5d0fdc5e91620e9be12a0d559", "title": "stagNet: An Attentive Semantic RNN for Group Activity Recognition", "text": "Group activity recognition plays a fundamental role in a variety of applications, e.g. sports video analysis and intelligent surveillance. How to model the spatio-temporal contextual information in a scene still remains a crucial yet challenging issue. We propose a novel attentive semantic recurrent neural network (RNN), dubbed as stagNet, for understanding group activities in videos, based on the spatio-temporal attention and semantic graph. A semantic graph is explicitly modeled to describe the spatial context of the whole scene, which is further integrated with the temporal factor via structural-RNN. Benefiting from the \u2018factor sharing\u2019 and \u2018message passing\u2019 mechanisms, our model is capable of extracting discriminative spatio-temporal features and capturing inter-group relationships. Moreover, we adopt a spatio-temporal attention model to attend to key persons/frames for improved performance. Two widely-used datasets are employed for performance evaluation, and the extensive results demonstrate the superiority of our method."}
{"_id": "f980462aa340a1beed2f040db9b54d7f22378a58", "title": "Intelligent stereo camera mobile platform for indoor service robot research", "text": "Stereo vision is an active research topic in computer vision. Point Grey\u00ae Bumblebee\u00ae and digital single-lens reflex camera (DSLR) are normally found in the stereo vision research, they are robust but expensive. Open source electronic prototyping platforms such as Arduino and Raspberry Pi are interesting products, which allows students or researchers to custom made inexpensive experimental equipment for their research projects. This paper describes the intelligent stereo camera mobile platform developed in our research using Pi and camera modules and presents the concept of using inexpensive open source parts for robotic stereo vision research work in details."}
{"_id": "1c2c2b31e3282f2feefa5446efad7f6c0d833a29", "title": "Merging and Splitting Eigenspace Models", "text": "We present new deterministic methods that given two eigenspace models, each representing a set of n-dimensional observations will: (1) merge the models to yield a representation of the union of the sets; (2) split one model from another to represent the difference between the sets; as this is done, we accurately keep track of the mean. These methods are more efficient than computing new eigenspace models directly from the observations when the eigenmodels are dimensionally small compared to the total number of observations. Such methods are important because they provide a basis for novel techniques in machine learning, using a dynamic split-andmerge paradigm to optimally cluster observations. Here we present a theoretical derivation of the methods, empirical results relating to the efficiency and accuracy of the techniques, and three general applications, including the on-line construction of Gaussian mixture models."}
{"_id": "1628830228d781054fc65163b7bae596fe999726", "title": "An Embedded Declarative Language for Hierarchical Object Structure Traversal", "text": "A common challenge in processing large domain-specific models and in-memory object structures (e.g., complex XML documents) is writing traversals and queries on them. Object-oriented (OO) designs, particularly those based on the Visitor pattern, are commonly used for developing traversals. However, such OO designs limit the reusability and independent evolution of visitation actions (i.e., the actions to be performed at each traversed node) due to tight coupling between the traversal logic and visitation actions, particularly when a variety of different traversals are needed. Code generators developed for traversal specification languages alleviate some of these problems but their high cost of development is often prohibitive. This paper presents Language for Embedded quEry and traverSAl (LEESA), which provides a generative programming approach for embedding object structure queries and traversal specifications within a host language, C++. By virtue of being declarative, LEESA significantly reduces the development cost of programs operating on complex object structures compared to the traditional techniques."}
{"_id": "88924c34a62b31859326d63e77d9d1cd592fa39f", "title": "Design of Broadband Linear and Efficient Power Amplifier for Long-Term Evolution Applications", "text": "In this letter, a broadband linear and efficient power amplifier for base-stations in long-term evolution (LTE) applications is designed, fabricated, and experimentally verified, employing Cree's CGH40010F GaN HEMT. A novel method of baseband, fundamental and harmonic impedances control is used to provide high linearity and efficiency across a broad bandwidth. When producing a two-tone carrier-to-intermodulation ratio (C/I) of 30 dBc, the circuit has demonstrated a two-tone power added efficiency (PAE) between 45%-60% across the frequency range from 1.6 to 2.6 GHz while delivering 36.0-38.5 dBm average output power. For a single-carrier 20 MHz LTE signal with a peak-to-average ratio (PAR) of 6.5 dB, a measured high PAE of 40%-55% can be achieved at an average output power of 35.3-37.5 dBm with an adjacent channel leakage ratio (ACLR) of about -30 dBc from 1.6 to 2.6 GHz."}
{"_id": "f2d2dd3db244dcbc6fb32ff9c01ed0cdeb3fd437", "title": "Unsupervised Feature Learning Based on Deep Models for Environmental Audio Tagging", "text": "Environmental audio tagging aims to predict only the presence or absence of certain acoustic events in the interested acoustic scene. In this paper, we make contributions to audio tagging in two parts, respectively, acoustic modeling and feature learning. We propose to use a shrinking deep neural network DNN framework incorporating unsupervised feature learning to handle the multilabel classification task. For the acoustic modeling, a large set of contextual frames of the chunk are fed into the DNN to perform a multilabel classification for the expected tags, considering that only chunk or utterance level rather than frame-level labels are available. Dropout and background noise aware training are also adopted to improve the generalization capability of the DNNs. For the unsupervised feature learning, we propose to use a symmetric or asymmetric deep denoising auto-encoder syDAE or asyDAE to generate new data-driven features from the logarithmic Mel-filter banks features. The new features, which are smoothed against background noise and more compact with contextual information, can further improve the performance of the DNN baseline. Compared with the standard Gaussian mixture model baseline of the DCASE 2016 audio tagging challenge, our proposed method obtains a significant equal error rate EER reduction from 0.21 to 0.13 on the development set. The proposed asyDAE system can get a relative 6.7% EER reduction compared with the strong DNN baseline on the development set. Finally, the results also show that our approach obtains the state-of-the-art performance with 0.15 EER on the evaluation set of the DCASE 2016 audio tagging task while EER of the first prize of this challenge is 0.17."}
{"_id": "7324f3efe11da0f4501dd69a441b10fde4a24dd5", "title": "Wireless health monitoring system for patients", "text": "eHealth is the provision of healthcare with the inclusion of telecommunication techniques. This project looks at the construction of a simple device that will be capable of transferring the data of a patient's vital signs to a remote device wirelessly. The necessity of this project is to alleviate the difficulty that is encountered by medical experts in monitoring multiple patients simultaneously. This project will enable them to observe patients without having to be physically present at their bedside, be it in the hospital or in their home. A patient's body temperature, heart rate and electrocardiography (ECG) are transferred wirelessly through an agent such as Bluetooth technology."}
{"_id": "5e053cd164b02433c4efc0fc675f6273a8a1c46a", "title": "Scalable Bayesian Learning of Recurrent Neural Networks for Language Modeling", "text": "Recurrent neural networks (RNNs) have shown promising performance for language modeling. However, traditional training of RNNs, using back-propagation through time, often suffers from overfitting. One reason for this is that stochastic optimization (used for large training sets) does not provide good estimates of model uncertainty. This paper leverages recent advances in stochastic gradient Markov Chain Monte Carlo (also appropriate for large training sets) to learn weight uncertainty in RNNs. It yields a principled Bayesian learning algorithm, adding gradient noise during training (enhancing exploration of the model-parameter space) and model averaging when testing. Extensive experiments on various RNN models and across a broad range of applications demonstrate the superiority of the proposed approach relative to stochastic optimization."}
{"_id": "42222e7097192b9da03eed759943c0695eed14ee", "title": "SPLENDID: SPARQL Endpoint Federation Exploiting VOID Descriptions", "text": "In order to leverage the full potential of the Semantic Web it is necessary to transparently query distributed RDF data sources in the same way as it has been possible with federated databases for ages. However, there are significant differences between the Web of (linked) Data and the traditional database approaches. Hence, it is not straightforward to adapt successful database techniques for RDF federation. Reasons are the missing cooperation between SPARQL endpoints and the need for detailed data statistics for estimating the costs of query execution plans. We have implemented SPLENDID, a query optimization strategy for federating SPARQL endpoints based on statistical data obtained from voiD descriptions [1]."}
{"_id": "d797076d324114f91fcb085e13585736287dc2f2", "title": "Industrial IoT lifecycle via digital twins", "text": "Currently, the IoT discussion is focused primarily on the operational phase. This includes how a IoT device behaves, operates, communicates, and interacts with other IoT devices during operation. However, IoT devices and systems have other lifecycle phases before and after operation. This extended abstract provides an overview of how other IoT lifecycle phases (e.g., design and service) can be improved with information feedback and feedforward flows between them. Digital Twins are a new mechanism to manage IoT devices and IoT systems-of-systems throughout their lifecycle. We present our vision on the industrial IoT lifecycle managed and optimized at scale via Digital Twins."}
{"_id": "f0cf482e8a199a6bd9afb272db23556a6c8ae155", "title": "A Feature Learning Based Approach for Automated Fruit Yield Estimation", "text": "This paper demonstrates a generalised multi-scale feature learning approach to multi-class segmentation, applied to the estimation of fruit yield on treecrops. The learning approach makes the algorithm flexible and adaptable to different classification problems, and hence applicable to a wide variety of tree-crop applications. Extensive experiments were performed on a dataset consisting of 8000 colour images collected in an apple orchard. This paper shows that the algorithm was able to segment apples with different sizes and colours in an outdoor environment with natural lighting conditions, with a single model obtained from images captured using a monocular colour camera. The segmentation results are applied to the problem of fruit counting and the results are compared against manual counting. The results show a squared correlation coefficient of R2 = 0.81."}
{"_id": "886cf13c12bb575e68238d830a2970ea2f1af516", "title": "Threat awareness for critical infrastructures resilience", "text": "Utility networks are part of every nation's critical infrastructure, and their protection is now seen as a high priority objective. In this paper, we propose a threat awareness architecture for critical infrastructures, which we believe will raise security awareness and increase resilience in utility networks. We first describe an investigation of trends and threats that may impose security risks in utility networks. This was performed on the basis of a viewpoint approach that is capable of identifying technical and non-technical issues (e.g., behaviour of humans). The result of our analysis indicated that utility networks are affected strongly by technological trends, but that humans comprise an important threat to them. This provided evidence and confirmed that the protection of utility networks is a multi-variable problem, and thus, requires the examination of information stemming from various viewpoints of a network. In order to accomplish our objective, we propose a systematic threat awareness architecture in the context of a resilience strategy, which ultimately aims at providing and maintaining an acceptable level of security and safety in critical infrastructures. As a proof of concept, we demonstrate partially via a case study the application of the proposed threat awareness architecture, where we examine the potential impact of attacks in the context of social engineering in a European utility company."}
{"_id": "b10d18138105ddd9c028c9423bb12fb206e188af", "title": "A 3D human motion refinement method based on sparse motion bases selection", "text": "Motion capture (MOCAP) is an important technique that is widely used in many areas such as computer animation, film industry, physical training and so on. Even with professional MOCAP system, the missing marker problems always occur. Motion refinement is an essential preprocessing step for MOCAP data based applications. Although many existing approaches for motion refinement have been developed, it is still a challenging task due to the complexity and diversity of human motion. A data driven based motion refinement method is proposed in this paper, which modifies the traditional sparse coding process for special task of motion recovery from missing parts. Meanwhile, the objective function is derived by taking both statistical and kinematical property of motion data into account. Poselet model and moving window grouping are applied in the proposed method to achieve a fine-grained feature representation, which preserves the embedded spatial-temporal kinematic information. 5 motion dictionaries are learnt for each kind of poselet from training data in parallel. The motion refine problem is finally solved as an \u21131-minimization problem. Compared with several state-of-art motion refine methods, the experimental result shows that our approach outperforms the competitors."}
{"_id": "11db74171df92a50d64bd88d569454415878c63a", "title": "Using Syntax-Based Machine Translation to Parse English into Abstract Meaning Representation", "text": "We present a parser for Abstract Meaning Representation (AMR). We treat Englishto-AMR conversion within the framework of string-to-tree, syntax-based machine translation (SBMT). To make this work, we transform the AMR structure into a form suitable for the mechanics of SBMT and useful for modeling. We introduce an AMR-specific language model and add data and features drawn from semantic resources. Our resulting AMR parser improves upon state-of-the-art results by 7 Smatch points."}
{"_id": "f99f46f1b90be4582946951f4f12d8b9209354b2", "title": "Omnidirectional conformal antenna array based on E-shaped patches", "text": "An antenna consisting of 4 element subarrays of strongly coupled E-shaped patches placed conformally on a metal cylinder is proposed for use in wireless local area networks (WLAN). The use of this special array configuration in the cylindrical case allows reaching a 8.3 % bandwidth and an omnidirectional radiation pattern in the horizontal plane with only 2 subarrays. CST Microwave Studio is used for validation purposes before manufacturing. To the knowledge of the authors, it is the first time that this recently introduced concept of strongly coupled E-shaped patches is used in a conformal antenna."}
{"_id": "15d6c9f7812df8ee5b016602af3b4dd7d4b83647", "title": "Recurrent Memory Array Structures Technical Report", "text": "The following report introduces ideas augmenting standard Long Short Term Memory (LSTM) architecture with multiple memory cells per hidden unit in order to improve its generalization capabilities. It considers both deterministic and stochastic variants of memory operation. It is shown that the nondeterministic Array-LSTM approach improves stateof-the-art performance on character level text prediction achieving 1.402 BPC\u2020 on enwik8 dataset. Furthermore, this report estabilishes baseline neural-based results of 1.12 BPC and 1.19 BPC for enwik9 and enwik10 datasets respectively."}
{"_id": "037a7c2e74397924d07e02509bfc1bd4491ac36e", "title": "Information Fusion in the Immune System", "text": "Biologically-inspired methods such as evolutionary algorithms and neural networks are proving useful in the field of information fusion. Artificial immune systems (AISs) are a biologically-inspired approach which take inspiration from the biological immune system. Interestingly, recent research has shown how AISs which use multi-level information sources as input data can be used to build effective algorithms for realtime computer intrusion detection. This research is based on biological information fusion mechanisms used by the human immune system and as such might be of interest to the information fusion community. The aim of this paper is to present a summary of some of the biological information fusion mechanisms seen in the human immune system, and of how these mechanisms have been implemented as AISs."}
{"_id": "ebafaeea47733560d881196dda48b8bfd41709a3", "title": "Synthetic Aperture Radar Imaging Using Stepped Frequency Waveform", "text": "This paper presents a synthetic aperture radar (SAR) imaging system using a stepped frequency waveform. The main problem in stepped frequency SAR is the range difference in one sequence of pulses. This paper analyzes the influence of range difference in one sequence of pulses in detail and proposes a method to compensate this influence in the Doppler domain. A Stolt interpolation for the stepped frequency signal is used to focus the data accurately. The parameters for the stepped frequency SAR are analyzed, and some criteria are proposed for system design. Finally, simulation and experimental results are presented to demonstrate the validity of the proposed method. The experimental data are collected by using a stepped frequency radar mounted on a rail."}
{"_id": "b72a44e0f509a91a02c4d5b2df2a95fb1b3ee392", "title": "Follow me cloud: interworking federated clouds and distributed mobile networks", "text": "This article introduces the Follow-Me Cloud concept and proposes its framework. The proposed framework is aimed at smooth migration of all or only a required portion of an ongoing IP service between a data center and user equipment of a 3GPP mobile network to another optimal DC with no service disruption. The service migration and continuity is supported by replacing IP addressing with service identification. Indeed, an FMC service/application is identified, upon establishment, by a session/service ID, dynamically changing along with the service being delivered over the session; it consists of a unique identifier of UE within the 3GPP mobile network, an identifier of the cloud service, and dynamically changing characteristics of the cloud service. Service migration in FMC is triggered by change in the IP address of the UE due to a change of data anchor gateway in the mobile network, in turn due to UE mobility and/or for load balancing. An optimal DC is then selected based on the features of the new data anchor gateway. Smooth service migration and continuity are supported thanks to logic installed at UE and DCs that maps features of IP flows to the session/service ID."}
{"_id": "c0b60d02b2d59123f6b336fe2e287bdb02a2a776", "title": "Quantitative Analysis of Human-Model Agreement in Visual Saliency Modeling: A Comparative Study", "text": "Visual attention is a process that enables biological and machine vision systems to select the most relevant regions from a scene. Relevance is determined by two components: 1) top-down factors driven by task and 2) bottom-up factors that highlight image regions that are different from their surroundings. The latter are often referred to as \u201cvisual saliency.\u201d Modeling bottom-up visual saliency has been the subject of numerous research efforts during the past 20 years, with many successful applications in computer vision and robotics. Available models have been tested with different datasets (e.g., synthetic psychological search arrays, natural images or videos) using different evaluation scores (e.g., search slopes, comparison to human eye tracking) and parameter settings. This has made direct comparison of models difficult. Here, we perform an exhaustive comparison of 35 state-of-the-art saliency models over 54 challenging synthetic patterns, three natural image datasets, and two video datasets, using three evaluation scores. We find that although model rankings vary, some models consistently perform better. Analysis of datasets reveals that existing datasets are highly center-biased, which influences some of the evaluation scores. Computational complexity analysis shows that some models are very fast, yet yield competitive eye movement prediction accuracy. Different models often have common easy/difficult stimuli. Furthermore, several concerns in visual saliency modeling, eye movement datasets, and evaluation scores are discussed and insights for future work are provided. Our study allows one to assess the state-of-the-art, helps to organizing this rapidly growing field, and sets a unified comparison framework for gauging future efforts, similar to the PASCAL VOC challenge in the object recognition and detection domains."}
{"_id": "c18a0618ada97eba7e72dd9d4ccfa7b838718ad3", "title": "Task and context determine where you look.", "text": "The deployment of human gaze has been almost exclusively studied independent of any specific ongoing task and limited to two-dimensional picture viewing. This contrasts with its use in everyday life, which mostly consists of purposeful tasks where gaze is crucially involved. To better understand deployment of gaze under such circumstances, we devised a series of experiments, in which subjects navigated along a walkway in a virtual environment and executed combinations of approach and avoidance tasks. The position of the body and the gaze were monitored during the execution of the task combinations and dependence of gaze on the ongoing tasks as well as the visual features of the scene was analyzed. Gaze distributions were compared to a random gaze allocation strategy as well as a specific \"saliency model.\" Gaze distributions showed high similarity across subjects. Moreover, the precise fixation locations on the objects depended on the ongoing task to the point that the specific tasks could be predicted from the subject's fixation data. By contrast, gaze allocation according to a random or a saliency model did not predict the executed fixations or the observed dependence of fixation locations on the specific task."}
{"_id": "11190a466d1085c09a11e52cc63f112280ddce74", "title": "A Model of Saliency-Based Visual Attention for Rapid Scene Analysis", "text": "A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing saliency. The system breaks down the complex problem of scene understanding by rapidly selecting, in a computationally efficient manner, conspicuous locations to be analyzed in detail."}
{"_id": "494219f151071a752914c51024a307f0e22e9d2c", "title": "An Integrated Model of Top-Down and Bottom-Up Attention for Optimizing Detection Speed", "text": "Integration of goal-driven, top-down attention and image-driven, bottom-up attention is crucial for visual search. Yet, previous research has mostly focused on models that are purely top-down or bottom-up. Here, we propose a new model that combines both. The bottom-up component computes the visual salience of scene locations in different feature maps extracted at multiple spatial scales. The topdown component uses accumulated statistical knowledge of the visual features of the desired search target and background clutter, to optimally tune the bottom-up maps such that target detection speed is maximized. Testing on 750 artificial and natural scenes shows that the model\u2019s predictions are consistent with a large body of available literature on human psychophysics of visual search. These results suggest that our model may provide good approximation of how humans combine bottom-up and top-down cues such as to optimize target detection speed."}
{"_id": "f5027eb3ad922bf6903b080329c02dff6b34ce20", "title": "Factors affecting the acceptance of information systems supporting emergency operations centres", "text": "Despite the recognition that information system acceptance is an important antecedent of effective emergency management, there has been comparatively very little research examining this aspect of technology acceptance. The current research responded to this gap in literature by adapting and integrating existing models of technology acceptance. This was done in order to examine how a range of technology acceptance factors could affect the acceptance of emergency operations centre information systems. Relationships between several of these factors were also examined. Questionnaire data from 383 end-users of four different emergency operations centre information systems were analysed using structural equation modelling. This analysis concluded that technology acceptance factors of performance expectancy, effort expectancy, social influence and information quality explained 65 percent of variance in symbolic adoption, which is a combination of mental acceptance and psychological attachment towards an information system. A number of moderating effects of age, gender, experience of use and domain experience were also identified. A mediating component, of performance expectancy, explained 49 percent of variance between facilitating conditions, information quality, effort expectancy, and resulting symbolic adoption. These findings highlight a need to re-focus technology acceptance research on both mediating and moderating effects and the importance of considering domain specific factors. Applied recommendations are also made, for successfully implementing relevant information"}
{"_id": "88a29a406054826fd8ad68b8c23231828d84f3fc", "title": "Interventions to address the academic impairment of children and adolescents with ADHD.", "text": "There exists a strong link between ADHD and academic underachievement. Both the core behavioral symptoms of ADHD and associated executive functioning deficits likely contribute to academic impairment. Current evidence-based approaches to the treatment of ADHD (i.e., stimulant medication, clinical behavior therapy and classroom behavioral interventions) have demonstrated a robust impact on behavioral variables such as attention and disruptive behavior within classroom analogue settings; however, their efficacy in improving academic outcomes is much less clear. Although surprisingly few treatment outcome studies of ADHD have attempted to incorporate interventions that specifically target academic outcomes, the studies that are available suggest that these interventions may be beneficial. The state of the treatment literature for addressing academic impairment in children and adolescents with ADHD will be reviewed herein, as well as limitations of current research, and directions for future research."}
{"_id": "e25767429f6f0d934a972864b70c3f21cf50dac2", "title": "Intravesical stone formation on intrauterine contraceptive device", "text": "Since more than 30\u00a0years, intrauterine contraceptive devices (IUCD) have been used for a contraceptive opportunity. Although they are termed to be a safe and effective method for contraception, they also have some type of complications and uterine perforation, septic abortion, pelvic abscess are the serious complications of these devices. The incidence of uterine perforation is very low, but in the literature nearly 100 cases were reported about the extra uterine localization of IUCD. Migration may occur to the adjacent organs. We here in describe a case of a 31\u00a0year-old woman who had an IUCD with stone formation in the bladder. In the literature all of the cases were reported as IUCD migration, but although it seems technically impossible, IUCD placement into the bladder should also be considered in misplaced IUCDs."}
{"_id": "2d193bcd49dcc9a9cb67acf622d074d969e1057b", "title": "Offset calibration technique for capacitive transimpedance amplifier used in uncooled infrared detection", "text": "This paper presents a novel readout circuit of uncooled, bolometer-based, focal plane arrays (FPAs). The offset and flicker noise are the design challenges of microbolometer readout circuits (ROCs). The ROC is not only required to apply careful noise cancellation techniques, but also to be insensitive to process and supply voltage variations. The proposed circuit involves a new offset cancellation technique which overcomes process variations, noise, random, and systematic offset. & 2016 Elsevier Ltd. All rights reserved."}
{"_id": "ddf5d0dabf9d7463c306d2969307a9541d96ba66", "title": "Big data framework for students' academic performance prediction: A systematic literature review", "text": "Big Data is becoming an integral part of education system worldwide, bringing in so much of prediction potential and therefore opportunities to improve learning and teaching methodologies. In fact, it has become the digital policy instrument for policy makers to make strategic decisions supported by big data analytics. This paper uses Systematic Literature Review (SLR) to establish a general overview and background in establishing gaps that need to be addressed in big data analytics for education landscape. A total of 59 research papers from 473 papers were chosen and analyzed. There is an increased trend in incorporating big data in education however, it is not sufficient and need more research in this area particularly predictive models and frameworks for educational settings."}
{"_id": "8080f6bc92f788bd72b25919990d50e069ab96b8", "title": "Seismic surveying with drone-mounted geophones", "text": "Seismic imaging is the primary technique for subsurface exploration. Traditional seismic imaging techniques rely heavily on manual labor to plant sensors, lay miles of cabling, and then recover the sensors. Often sites of resource or rescue interest may be difficult or hazardous to access. Thus, there is a substantial need for unmanned sensors that can be deployed by air and potentially in large numbers. This paper presents working prototypes of autonomous drones equipped with geophones (vibration sensors) that can fly to a site, land, listen for echoes and vibrations, store the information on-board, and subsequently return to home base. The design uses four geophone sensors (with spikes) in place of the landing gear. This provides a stable landing attitude, redundancy in sensing, and ensures the geophones are oriented perpendicular to the ground. The paper describes hardware experiments demonstrating the efficacy of this technique and a comparison with traditional manual techniques. The performance of the seismic drone was comparable to a well planted geophone, proving the drone mount system is a feasible alternative to traditional seismic sensors."}
{"_id": "eb7ba7940e2fda0dfd0f5989e3fd0b5291218450", "title": "Prevalence of diabetes mellitus and impaired fasting glucose levels in the Eastern Province of Saudi Arabia: results of a screening campaign.", "text": "INTRODUCTION\nThis study aimed to estimate the prevalence of diagnosed and undiagnosed diabetes mellitus (DM) in the Eastern Province of Saudi Arabia, and to study its relationship with socioeconomic factors.\n\n\nMETHODS\nThe study targeted all Saudi subjects aged 30 years and above who resided in the Eastern Province in 2004. DM screening was conducted by taking the capillary fasting blood glucose (CFBG) after eight hours or more of fasting, or the casual capillary blood glucose (CCBG). A positive screening test for hyperglycaemia was defined as CFBG more than or equal to 100 mg/dl (5.6 mmol/l), or CCBG more than or equal to 140 mg/dl (7.8 mmol/l). A positive result was confirmed on another day through the measurement of fasting plasma glucose (FPG) levels from a venous sample. A diagnosis of DM was considered if FPG was more than or equal to 126 mg/dl (7.0 mmol/l), or when there was a history of a previous diagnosis.\n\n\nRESULTS\nOut of 197,681 participants, 35,929 (18.2 percent) had a positive history of DM or a positive screening test for hyperglycaemia. After confirmation by venous blood testing, the prevalence of DM dropped to 17.2 percent while the prevalence of newly diagnosed DM was 1.8 percent. The prevalence increased with age and was higher in women, widows, divorcees, those who had a low education level and the unemployed.\n\n\nCONCLUSION\nThe prevalence of DM in Saudi Arabia is one of the highest reported in the world, and its yield of screening is high."}
{"_id": "d8a56e64bd74bf19d5a2ad6cdf8132c86eff3767", "title": "Articulation points guided redundancy elimination for betweenness centrality", "text": "Betweenness centrality (BC) is an important metrics in graph analysis which indicates critical vertices in large-scale networks based on shortest path enumeration. Typically, a BC algorithm constructs a shortest-path DAG for each vertex to calculate its BC score. However, for emerging real-world graphs, even the state-of-the-art BC algorithm will introduce a number of redundancies, as suggested by the existence of articulation points. Articulation points imply some common sub-DAGs in the DAGs for different vertices, but existing algorithms do not leverage such information and miss the optimization opportunity.\n We propose a redundancy elimination approach, which identifies the common sub-DAGs shared between the DAGs for different vertices. Our approach leverages the articulation points and reuses the results of the common sub-DAGs in calculating the BC scores, which eliminates redundant computations. We implemented the approach as an algorithm with two-level parallelism and evaluated it on a multicore platform. Compared to the state-of-the-art implementation using shared memory, our approach achieves an average speedup of 4.6x across a variety of real-world graphs, with the traversal rates up to 45 ~ 2400 MTEPS (Millions of Traversed Edges per Second)."}
{"_id": "95176f1ee03e9f460165abd90402e939b7f49fe1", "title": "Development of 24 GHz rectennas for Fixed Wireless Access", "text": "We need electricity to use wireless information. If we reduce amount of batteries or electrical wires with a wireless power transmission technology via microwave (MPT), it is a green communication system. We Kyoto University propose a Fixed Wireless Access (FWA) system with the MPT with NTT, Japan. In this paper, we show mainly development results of 24GHz rectennas, rectifying antenna, for FWA. We developed some types of the rectennas. Finally we achieve 65% of RF-DC conversion efficiency with output filter of harmonic balance."}
{"_id": "cfc4af4c0c96e39ed48584d61ee9ba5ebd2aa60a", "title": "A Weighted Dictionary Learning Model for Denoising Images Corrupted by Mixed Noise", "text": "This paper proposes a general weighted l2-l0 norms energy minimization model to remove mixed noise such as Gaussian-Gaussian mixture, impulse noise, and Gaussian-impulse noise from the images. The approach is built upon maximum likelihood estimation framework and sparse representations over a trained dictionary. Rather than optimizing the likelihood functional derived from a mixture distribution, we present a new weighting data fidelity function, which has the same minimizer as the original likelihood functional but is much easier to optimize. The weighting function in the model can be determined by the algorithm itself, and it plays a role of noise detection in terms of the different estimated noise parameters. By incorporating the sparse regularization of small image patches, the proposed method can efficiently remove a variety of mixed or single noise while preserving the image textures well. In addition, a modified K-SVD algorithm is designed to address the weighted rank-one approximation. The experimental results demonstrate its better performance compared with some existing methods."}
{"_id": "5eaaee68d0a4b01eeed884d00a58d8064c17ff4e", "title": "Guido: A Musical Score Recognition System", "text": "This paper presents an optical music recognition system Guido that can automatically recognize the main musical symbols of music scores that were scanned or taken by a digital camera. The application is based on object model of musical notation and uses linguistic approach for symbol interpretation and error correction. The system offers musical editor with a partially automatic error correction."}
{"_id": "e3ed44c657eb450a38538cd7433a7390ab5f0c46", "title": "Batteries and Ultracapacitors for Electric, Hybrid, and Fuel Cell Vehicles", "text": "The application of batteries and ultracapacitors in electric energy storage units for battery powered (EV) and charge sustaining and plug-in hybrid-electric (HEV and PHEV) vehicles have been studied in detail. The use of IC engines and hydrogen fuel cells as the primary energy converters for the hybrid vehicles was considered. The study focused on the use of lithium-ion batteries and carbon/carbon ultracapacitors as the energy storage technologies most likely to be used in future vehicles. The key findings of the study are as follows. 1) The energy density and power density characteristics of both battery and ultracapacitor technologies are sufficient for the design of attractive EVs, HEVs, and PHEVs. 2) Charge sustaining, engine powered hybrid-electric vehicles (HEVs) can be designed using either batteries or ultracapacitors with fuel economy improvements of 50% and greater. 3) Plug-in hybrids (PHEVs) can be designed with effective all-electric ranges of 30-60 km using lithium-ion batteries that are relatively small. The effective fuel economy of the PHEVs can be very high (greater than 100 mpg) for long daily driving ranges (80-150 km) resulting in a large fraction (greater than 75%) of the energy to power the vehicle being grid electricity. 4) Mild hybrid-electric vehicles (MHEVs) can be designed using ultracapacitors having an energy storage capacity of 75-150 Wh. The fuel economy improvement with the ultracapacitors is 10%-15% higher than with the same weight of batteries due to the higher efficiency of the ultracapacitors and more efficient engine operation. 5) Hybrid-electric vehicles powered by hydrogen fuel cells can use either batteries or ultracapacitors for energy storage. Simulation results indicate the equivalent fuel economy of the fuel cell powered vehicles is 2-3 times higher than that of a gasoline fueled IC vehicle of the same weight and road load. Compared to an engine-powered HEV, the equivalent fuel economy of the hydrogen fuel cell vehicle would be 1.66-2.0 times higher"}
{"_id": "4a1b842dbb0f4b609d436ca82880cc64e96a546d", "title": "Multi-label Text Categorization with Model Combination based on F1-score Maximization", "text": "Text categorization is a fundamental task in natural language processing, and is generally defined as a multi-label categorization problem, where each text document is assigned to one or more categories. We focus on providing good statistical classifiers with a generalization ability for multi-label categorization and present a classifier design method based on model combination and F1-score maximization. In our formulation, we first design multiple models for binary classification per category. Then, we combine these models to maximize the F1-score of a training dataset. Our experimental results confirmed that our proposed method was useful especially for datasets where there were many combinations of category labels."}
{"_id": "7b4df53307a142765b8a60feb0b558c3c4a31c7a", "title": "On accelerating pair-HMM computations in programmable hardware", "text": "This paper explores hardware acceleration to significantly improve the runtime of computing the forward algorithm on Pair-HMM models, a crucial step in analyzing mutations in sequenced genomes. We describe 1) the design and evaluation of a novel accelerator architecture that can efficiently process real sequence data without performing wasteful work; and 2) aggressive memoization techniques that can significantly reduce the number of invocations of, and the amount of data transferred to the accelerator. We describe our demonstration of the design on a Xilinx Virtex 7 FPGA in an IBM Power8 system. Our design achieves a 14.85\u00d7 higher throughput than an 8-core CPU baseline (that uses SIMD and multi-threading) and a 147.49 \u00d7 improvement in throughput per unit of energy expended on the NA12878 sample."}
{"_id": "a3a9ac13f2022b98154eb864991e9e6d3d5b39d6", "title": "The human connectome: Origins and challenges", "text": "The human connectome refers to a map of the brain's structural connections, rendered as a connection matrix or network. This article attempts to trace some of the historical origins of the connectome, in the process clarifying its definition and scope, as well as its putative role in illuminating brain function. Current efforts to map the connectome face a number of significant challenges, including the issue of capturing network connectivity across multiple spatial scales, accounting for individual variability and structural plasticity, as well as clarifying the role of the connectome in shaping brain dynamics. Throughout, the article argues that these challenges require the development of new approaches for the statistical analysis and computational modeling of brain network data, and greater collaboration across disciplinary boundaries, especially with researchers in complex systems and network science."}
{"_id": "a2700a5942208de1ab1966ee60bf38bf30bbbac5", "title": "Fully convolutional networks for multi-modality isointense infant brain image segmentation", "text": "The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development. In the isointense phase (approximately 6\u20138 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, resulting in extremely low tissue contrast and thus making the tissue segmentation very challenging. The existing methods for tissue segmentation in this isointense phase usually employ patch-based sparse labeling on single T1, T2 or fractional anisotropy (FA) modality or their simply-stacked combinations without fully exploring the multi-modality information. To address the challenge, in this paper, we propose to use fully convolutional networks (FCNs) for the segmentation of isointense phase brain MR images. Instead of simply stacking the three modalities, we train one network for each modality image, and then fuse their high-layer features together for final segmentation. Specifically, we conduct a convolution-pooling stream for multimodality information from T1, T2, and FA images separately, and then combine them in high-layer for finally generating the segmentation maps as the outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense phase brain images. Results showed that our proposed model significantly outperformed previous methods in terms of accuracy. In addition, our results also indicated a better way of integrating multi-modality images, which leads to performance improvement."}
{"_id": "55d20eabc156f954d161c721af2e176f5991ed1d", "title": "A 24-GHz high-gain Yagi-Uda antenna array", "text": "A compact 24-GHz Yagi-Uda antenna has been developed using standard design tables and simple scaling to take into account the added capacitance due to the supporting dielectric substrate. The antenna results in a directivity of 9.3 dB, a front-to-back ratio of 11 dB, and a bandwidth of 2.5-3%. The Yagi-Uda antenna has been implemented in an 11-beam system using a planar array and a 2-inch Teflon spherical lens. The measured patterns show a 22 dB gain beam, a cross-polarization level of -24 dB, and a crossover level of -6 dB. The design method presented in this paper is quite straightforward, and can be used to develop low-, medium-, and even high-gain endfire Yagi-Uda antennas."}
{"_id": "484d3f1db7c8f89d6cb5c11c2633a8cc316df009", "title": "A Decomposed Dual-Cross Generative Adversarial Network for Image Rain Removal", "text": "Rain removal is important for many computer vision applications, such as surveillance, autonomous car, etc. Traditionally, rain removal is regarded as a signal removal problem which usually causes over-smoothing by removing texture details in non-rain background regions. This paper considers the issue of rain removal from a completely different perspective, to treat rain removal as a signal decomposition problem. Specifically, we decompose the rain image into two components, namely non-rain background image and rain streaks image. Then, we introduce an adversarial training mechanism to synthesize non-rain background image and rain streaks image in a Dual-Cross manner, which makes the two adversarial branches interact with each other, archiving a win-win result ultimately. The proposed Decomposed Dual-Cross Generative Adversarial Network (DDC-GAN) shows significantly performance improvement compared with stateof-the-art methods on both synthetic and real-world images in terms of qualitative and quantitative measures (over 3dB gains in PSNR)."}
{"_id": "7812d3528829cd26ab446ca208851db00129f294", "title": "Content-based audio segmentation using support vector machines", "text": "Audio exists at everywhere, but is often out-of-order. It is necessary to arrange them into regularized classes in order to use them more easily. It is also useful, especially in video content analysis, to segment an audio stream according to audio types. In this paper, we present our work in applying support vector machines (SVMs) in audio segmentation and classification. Five audio classes are considered: silence, music, background sound, pure speech, and non-pure speech which includes speech over music and speech over noise. A SVM learns optimal class boundaries from training data to best separate between two classes. A sound clip is segmented by classifying each sub-clip of one second into one of these five classes. Experiments on a database composed of clips of 14870 seconds in total length show that the average accuracy rate for the SVM method is much better than that of the traditional Euclidean distance based (nearest neighbor) method."}
{"_id": "88a6dbdde3e895f9f226d573770e902df22c0a6f", "title": "Building contextual visual vocabulary for large-scale image applications", "text": "Not withstanding its great success and wide adoption in Bag-of-visual Words representation, visual vocabulary created from single image local features is often shown to be ineffective largely due to three reasons. First, many detected local features are not stable enough, resulting in many noisy and non-descriptive visual words in images. Second, single visual word discards the rich spatial contextual information among the local features, which has been proven to be valuable for visual matching. Third, the distance metric commonly used for generating visual vocabulary does not take the semantic context into consideration, which renders them to be prone to noise. To address these three confrontations, we propose an effective visual vocabulary generation framework containing three novel contributions: 1) we propose an effective unsupervised local feature refinement strategy; 2) we consider local features in groups to model their spatial contexts; 3) we further learn a discriminant distance metric between local feature groups, which we call discriminant group distance. This group distance is further leveraged to induce visual vocabulary from groups of local features. We name it contextual visual vocabulary, which captures both the spatial and semantic contexts. We evaluate the proposed local feature refinement strategy and the contextual visual vocabulary in two large-scale image applications: large-scale near-duplicate image retrieval on a dataset containing 1.5 million images and image search re-ranking tasks. Our experimental results show that the contextual visual vocabulary shows significant improvement over the classic visual vocabulary. Moreover, it outperforms the state-of-the-art Bundled Feature in the terms of retrieval precision, memory consumption and efficiency."}
{"_id": "d171d1ed68ded30892576c4bfb9c155f7f5f7f48", "title": "Paraphrase Identification Based on Weighted URAE, Unit Similarity and Context Correlation Feature", "text": "A deep learning model adaptive to both sentence-level and articlelevel paraphrase identification is proposed in this paper. It consists of pairwise unit similarity feature and semantic context correlation feature. In this model, sentences are represented by word and phrase embedding while articles are represented by sentence embedding. Those phrase and sentence embedding are learned from parse trees through Weighted Unfolding Recursive Autoencoders (WURAE), an unsupervised learning algorithm. Then, unit similarity matrix is calculated by matching the pairwise lists of embedding. It is used to extract the pairwise unit similarity feature through CNN and k-max pooling layers. In addition, semantic context correlation feature is taken into account, which is captured by the combination of CNN and LSTM. CNN layers learn collocation information between adjacent units while LSTM extracts the long-term dependency feature of the text based on the output of CNN. This model is experimented on a famous English sentence paraphrase corpus, MSRPC, and a Chinese article paraphrase corpus. The results show that the deep semantic feature of text could be extracted based on WURAE, unit similarity and context correlation feature. We release our code of WURAE, deep learning model for paraphrase identification and pre-trained phrase end sentence embedding data for use by the community."}
{"_id": "0afc9ea9db2d9914b1763ae7b7ca05604de0eb05", "title": "The impact of cyberloafing towards Malaysia employees' productivity: A conceptual framework", "text": "The increasing use of the internet has changed the way we live and work. Moreover, in countries where there is a use of internet on a large scale, changes in the content and context of work have been observed. The use of Internet has become a tool to achieve competitive advantage by organizations. Also, it has been associated with many negative consequences such as reduced employees' productivity. Despite its importance, there is a lack of cyberloafing studies in Malaysia. The paper proposes a modified version of Triandis's theory of interpersonal behaviour by incorporating a new independent variable, ability to hide cyberloafing that directly predicts employees cyberloafing. This paper also recommends examining the relationship between cyberloafing and employees' productivity. Some studies believe that cyberloafing is beneficial for employees' productivity but other studies suggest that it is harmful to employees' productivity. By understanding what causes cyberloafing and its impacts, organizations can be more effective in managing the issue."}
{"_id": "2276067dcaa5fc9c240041a50a114a06a9636276", "title": "Competing Memes Propagation on Networks: A Network Science Perspective", "text": "In this paper, we study the intertwined propagation of two competing \"memes\" (or data, rumors, etc.) in a composite network. Within the constraints of this scenario, we ask two key questions: (a) which meme will prevail? and (b) can one influence the outcome of the propagations? Our model is underpinned by two key concepts, a structural graph model (composite network) and a viral propagation model (SI1I2S). Using this framework, we formulate a non-linear dynamic system and perform an eigenvalue analysis to identify the tipping point of the epidemic behavior. Based on insights gained from this analysis, we demonstrate an effective and accurate prediction method to determine viral dominance, which we call the EigenPredictor. Next, using a combination of synthetic and real composite networks, we evaluate the effectiveness of various viral suppression techniques by either a) concurrently suppressing both memes or b) unilaterally suppressing a single meme while leaving the other relatively unaffected."}
{"_id": "5b2bc4aaa63412ff1745a79d2f322b5ff67d0f9c", "title": "Sentiment in short strength detection informal text", "text": ""}
{"_id": "0b65ecacd0025b4a0e31c4774a4925d3181631fa", "title": "The dynamics of viral marketing", "text": "We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We analyze how user behavior varies within user communities defined by a recommendation network. Product purchases follow a \u2018long tail\u2019 where a significant share of purchases belongs to rarely sold items. We establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies communities, product, and pricing categories for which viral marketing seems to be very effective."}
{"_id": "1be4b2bc0e149981f8814a0d9c130fd442c48e93", "title": "Comparing and managing multiple versions of slide presentations", "text": "Despite the ubiquity of slide presentations, managing multiple presentations remains a challenge. Understanding how multiple versions of a presentation are related to one another, assembling new presentations from existing presentations, and collaborating to create and edit presentations are difficult tasks. In this paper, we explore techniques for comparing and managing multiple slide presentations. We propose a general comparison framework for computing similarities and differences between slides. Based on this framework we develop an interactive tool for visually comparing multiple presentations. The interactive visualization facilitates understanding how presentations have evolved over time. We show how the interactive tool can be used to assemble new presentations from a collection of older ones and to merge changes from multiple presentation authors."}
{"_id": "1150ade08d97b42632df8355faf7f5b0ede31554", "title": "Visual pattern discovery for architecture image classification and product image search", "text": "Many objects have repetitive elements, and finding repetitive patterns facilitates object recognition and numerous applications. We devise a representation to describe configurations of repetitive elements. By modeling spatial configurations, visual patterns are more discriminative than local features, and are able to tackle with object scaling, rotation, and deformation. We transfer the pattern discovery problem into finding frequent subgraphs from a graph, and exploit a graph mining algorithm to solve this problem. Visual patterns are then exploited in architecture image classification and product image retrieval, based on the idea that visual pattern can describe elements conveying architecture styles and emblematic motifs of brands. Experimental results show that our pattern discovery approach has promising performance and is superior to the conventional bag-of-words approach."}
{"_id": "f5a275a173824adfb619cd4dc941e57ca1248eb7", "title": "Autonomous decision-making: a data mining approach", "text": "The researchers and practitioners of today create models, algorithms, functions, and other constructs defined in abstract spaces. The research of the future will likely be data driven. Symbolic and numeric data that are becoming available in large volumes will define the need for new data analysis techniques and tools. Data mining is an emerging area of computational intelligence that offers new theories, techniques, and tools for analysis of large data sets. In this paper, a novel approach for autonomous decision-making is developed based on the rough set theory of data mining. The approach has been tested on a medical data set for patients with lung abnormalities referred to as solitary pulmonary nodules (SPNs). The two independent algorithms developed in this paper either generate an accurate diagnosis or make no decision. The methodology discussed in the paper depart from the developments in data mining as well as current medical literature, thus creating a variable approach for autonomous decision-making."}
{"_id": "2c52492f8c44db08c87d368cc406e268163ed9f3", "title": "Modeling Propagation Dynamics and Developing Optimized Countermeasures for Rumor Spreading in Online Social Networks", "text": "The spread of rumors in Online Social Networks (OSNs) poses great challenges to the social peace and public order. It is imperative to model propagation dynamics of rumors and develop corresponding countermeasures. Most of the existing works either overlook the heterogeneity of social networks or do not consider the cost of countermeasures. Motivated by these issues, this paper proposes a heterogeneous network based epidemic model that incorporates both the network heterogeneity and various countermeasures. Through analyzing the existence and stability of equilibrium solutions of the proposed ODE (Ordinary Differential Equation) system, the critical conditions that determine whether a rumor continuously propagates or becomes extinct are derived. Moreover, we concern about the cost of the main two types of countermeasures, i.e., Blocking rumors at influential users and spreading truth to clarify rumors. Employing the Pontryagin's maximum principle, we obtain the optimized countermeasures that ensures a rumor can become extinct at the end of an expected time period with lowest cost. Both the critical conditions and the optimized countermeasures provide a real-time decision reference to restrain the rumor spreading. Experiments based on Digg2009 dataset are conducted to evaluate the effectiveness of the proposed dynamic model and the efficiency of the optimized countermeasures."}
{"_id": "d5e01fcb42a4f60f1a3686d1b4c37150b17bac29", "title": "Bus tracking and monitoring using RFID", "text": "The rapid growth in population in India causes more crowding at public bus stops. People long wait for arriving of buses and suddenly gather near bus when it arrives and travel in overcrowded buses on footboards which leads to accidents. Theft is another risk of overcrowding. All this happens due to lack of proper information of arriving of buses at many of the bus stops. To solve this problem in a cost effective way, we have developed an IoT based bus tracking system to show the current location of the bus and seat availability in the arriving buses. We have used RFID technology for tracking the bus and Thingspeak web server for displaying location of the bus and seat availability in android application in a smart phone. This system enables the commuters to know the exact location of the bus, and occupancy level in the bus, which enable the commuters to take decision, whether to board this bus or next. This reduces the waiting time and overcrowding at bus stops. It also reduces the risk of accidents and thefts."}
{"_id": "0bd9f30ad324a65895c52bbc889d69c2222a4684", "title": "On path planning methods for automotive collision avoidance", "text": "There is a strong trend for increasingly sophisticated Advanced Driver Assistance Systems (ADAS) such as Autonomous Emergency Braking (AEB) systems, Lane Keeping Aid (LKA) systems, and indeed autonomous driving. This trend generates a need for online maneuver generation, for which numerous approaches can be found in the large body of work related to path planning and obstacle avoidance. In order to ease the challenge of choosing a method, this paper reports quantitative and qualitative insights about three different path planning methods: a state lattice planner, predictive constraint-based planning, and spline-based search tree. Each method is described, implemented and compared on two specific traffic situations. The paper will not provide a final answer about which method is best. This depends on several factors such as computational constraints and the formulation of maneuver optimality that is appropriate for a given assistance or safety function. Instead, the conclusions will highlight qualitative merits and drawbacks for each method, in order to provide guidance for choosing a method for a specific application."}
{"_id": "974728aab2d5005b335896b9297e9a9326700782", "title": "Edge detection using constrained discrete particle swarm optimisation in noisy images", "text": "Edge detection algorithms often produce broken edges, especially in noisy images. We propose an algorithm based on discrete particle swarm optimisation (PSO) to detect continuous edges in noisy images. A constrained PSO-based algorithm with a new objective function is proposed to address noise and reduce broken edges. The localisation accuracy of the new algorithm is compared with that of a modified version of the Canny algorithm as a Gaussian-based edge detector, the robust rank order (RRO)-based algorithm as a statistical based edge detector, and our previously developed PSO-based algorithm. Pratt's figure of merit is used as a measure of localisation accuracy for these edge detection algorithms. Experimental results show that the performance of the new algorithm is higher than the Canny and RRO algorithms in the images corrupted by two different types of noise (impulsive and Gaussian noise). The new algorithm also detects edges more accurately and smoothly than our previously developed algorithm in noisy images."}
{"_id": "9cec2f71a5c953a72bc4242f20a0b9a91a262775", "title": "Geo-parsing Messages from Microtext", "text": "Widespread use of social media during crises has become commonplace, as shown by the volume of messages during the Haiti earthquake of 2010 and Japan tsunami of 2011. Location mentions are particularly important in disaster messages as they can show emergency responders where problems have occurred. This article explores the sorts of locations that occur in disaster-related social messages, how well off-theshelf software identifies those locations, and what is needed to improve automated location identification, called geo-parsing. To do this, we have sampled Twitter messages from the February 2011 earthquake in Christchurch, Canterbury, New Zealand. We annotated locations in messages manually to make a gold standard by which to measure locations identified by a Named Entity Recognition software. The Stanford NER software found some locations that were proper nouns, but did not identify locations that were not capitalized, local streets and buildings, or nonstandard place abbreviations and mis-spellings that are plentiful in microtext. We review how these problems might be solved in software research, and model a readable crisis map that shows crisis location clusters via enlarged place labels."}
{"_id": "984e9df38eb93781aabca41c91499317e9351ef6", "title": "Combining virtual reality enabled simulation with 3D scanning technologies towards smart manufacturing", "text": "Recent introduction of low-cost 3D sensing and affordable immersive virtual reality have lowered the barriers for creating and maintaining 3D virtual worlds. In this paper, we propose a way to combine these technologies with discrete-event simulation to improve the use of simulation in decision making in manufacturing. This work will describe how feedback is possible from real world systems directly into a simulation model to guide smart behaviors. Technologies included in the research include feedback from RGBD images of shop floor motion and human interaction within full immersive virtual reality that includes the latest headset technologies."}
{"_id": "a81d135e8102e0e887e8765140f95c2edc88637e", "title": "Hybrid PWM Strategy of SVPWM and VSVPWM for NPC Three-Level Voltage-Source Inverter", "text": "Neutral-point (NP) voltage drift is the main technical drawback of NP-clamped (NPC) three-level inverters. Traditional space vector pulsewidth modulation (SVPWM) is incapable of controlling the NP voltage for high modulation indexes and low power factors. Virtual SVPWM (VSVPWM) is capable of controlling the NP voltage under full modulation indexes and full power factors. However, this modulation strategy is more complex than SVPWM, increases the switching frequency, and deteriorates the output waveforms of the inverter. A novel PWM concept that includes NP voltage-balancing conditions is proposed. Based on this concept, a hybrid modulation scheme that uses both SVPWM and VSVPWM is presented for complete control of the NP voltage in NPC three-level inverters. The performance of this new modulation approach and its benefits over SVPWM and VSVPWM are verified by simulation and experiments."}
{"_id": "8ebd1dd54b7eaa1f130c9e1fb6052f112290e1b0", "title": "Why-type Question Classification in Question Answering System", "text": "The fundamental requisite to acquire information on any topic has become increasingly important. The need for Question Answering Systems (QAS) prevalent nowadays, replacing the traditional search engines stems from the user requirement for the most accurate answer to any question or query. Thus, interpreting the information need of the users is quite crucial for designing and developing a question answering system. Question classification is an important component in question answering systems that helps to determine the type of question and its corresponding type of answer. In this paper, we present a new way of classifying Why-type questions, aimed at understanding a questioner\u2019s intent. Our taxonomy classifies Why-type questions into four separate categories. In addition, to automatically detect the categories of these questions by a parser, we differentiate them at lexical level."}
{"_id": "def7eba25a199388d77fb2bc87a131a23c3f8ec6", "title": "Circular Quadruple-Ridged Flared Horn Achieving Near-Constant Beamwidth Over Multioctave Bandwidth: Design and Measurements", "text": "A circular quadruple-ridged flared horn achieving almost-constant beamwidth over 6:1 bandwidth is presented. This horn is the first demonstration of a wideband feed for radio telescopes which is capable of accommodating different reflector antenna optics, maintains almost constant gain and has excellent match. Measurements of stand-alone horn performance reveal excellent return loss performance as well as stable radiation patterns over 6:1 frequency range. Physical optics calculations predict an average of 69% aperture efficiency and 13 K antenna noise temperature with the horn installed on a radio telescope."}
{"_id": "61bef8f1b9a60cc78332a2e07cfbc9057fa4ee15", "title": "Automatic Classification of Power Quality Events Using Balanced Neural Tree", "text": "This paper proposes an empirical-mode decomposition (EMD) and Hilbert transform (HT)-based method for the classification of power quality (PQ) events. Nonstationary power signal disturbance waveforms are considered as the superimposition of various undulating modes, and EMD is used to separate out these intrinsic modes known as intrinsic mode functions (IMFs). The HT is applied on all the IMFs to extract instantaneous amplitude and frequency components. This time-frequency analysis results in the clear visual detection, localization, and classification of the different power signal disturbances. The required feature vectors are extracted from the time-frequency distribution to perform the classification. A balanced neural tree is constructed to classify the power signal patterns. Finally, the proposed method is compared with an S-transform-based classifier to show the efficacy of the proposed technique in classifying the PQ disturbances."}
{"_id": "820db2f7f1245f9dca77a7bfd0c8081078573976", "title": "Research on nursing handoffs for medical and surgical settings: an integrative review.", "text": "AIMS\nTo synthesize outcomes from research on handoffs to guide future computerization of the process on medical and surgical units.\n\n\nBACKGROUND\nHandoffs can create important information gaps, omissions and errors in patient care. Authors call for the computerization of handoffs; however, a synthesis of the literature is not yet available that might guide computerization.\n\n\nDATA SOURCES\nPubMed, CINAHL, Cochrane, PsycINFO, Scopus and a handoff database from Cohen and Hilligoss.\n\n\nDESIGN\nIntegrative literature review.\n\n\nREVIEW METHODS\nThis integrative review included studies from 1980-March 2011 in peer-reviewed journals. Exclusions were studies outside medical and surgical units, handoff education and nurses' perceptions.\n\n\nRESULTS\nThe search strategy yielded a total of 247 references; 81 were retrieved, read and rated for relevance and research quality. A set of 30 articles met relevance criteria.\n\n\nCONCLUSION\nStudies about handoff functions and rituals are saturated topics. Verbal handoffs serve important functions beyond information transfer and should be retained. Greater consideration is needed on analysing handoffs from a patient-centred perspective. Handoff methods should be highly tailored to nurses and their contextual needs. The current preference for bedside handoffs is not supported by available evidence. The specific handoff structure for all units may be less important than having a structure for contextually based handoffs. Research on pertinent information content for contextually based handoffs is an urgent need. Without it, handoff computerization is not likely to be successful. Researchers need to use more sophisticated experimental research designs, control for individual and unit differences and improve sampling frames."}
{"_id": "1ad823bf77c691f1d2b572799f8a8c572d941118", "title": "Towards The Deep Model: Understanding Visual Recognition Through Computational Models", "text": "Introduction Vision, due to its significance in surviving and socializing, is one of the most important and extensively studied sensory functions in the human brain. In order to fully understand visual information processing, or more specifically, visual recognition, David Marr proposed the Tri-level Hypothesis [29], in that three levels of the system should be studied: the computational goal of the system, the internal representation or the algorithm the system uses to achieve the goal, and the neural substrates that implement the system. It is well-known that visual recognition in the human brain is implemented by the ventral visual pathway [32], which receives visual information from the retina and goes through a layered structure including V1 (also known as the primary visual cortex), V2, V4, before reaching the inferior temporal cortex (IT). The topographic mapping between the retina and the human visual cortex follows a log-polar transformation, in which the Cartesian coordinates of the retina are transformed to polar coordinates (polar angle and eccentricity) in the human visual cortex. From V1 to V4, each of the visual areas contains a complete polar angle and eccentricity map, and have a increasing receptive field (RF) size and respond to increasing complex visual features [12,22]. In higher level visual areas such as the ventral temporal cortex (VTC), studies using functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI) have found there exists visual category-selective regions, such as the Fusiform Face Area (FFA, [26]) for face processing, the Parahippocampal Place Area (PPA, [13]) for scene perception, and the Lateral Occipital Complex (LOC, [19]) for general object recognition. Grill-Spector and Malach [20] proposed that layer-based hierarchical processing, together with the functional specialization, are the two organizing principles of the human visual cortex. More recent studies have shown that the central-biased face recognition pathway (FFA) and the peripheral-biased scene recognition pathway (PPA) are both functionally and anatomically segregated by mid-fusiform sulcus (MFS), which enables parallel fast processing of visual recognition tasks in the ventral pathway [18, 43, 21, 44]."}
{"_id": "b27dc92565b6dc50599fad2a9bc7ae0d9be0e7b8", "title": "UniqueID: Decentralized Proof-of-Unique-Human", "text": "Bitcoin and Ethereum are novel mechanisms for decentralizing the concept of money and computation. Extending decentralization to the human identity concept, we can think of using blockchain for creating a list of verified human identities with a one-person-one-ID property. UniqueID is a Decentralized Autonomous Organization(DAO) for maintaining human identities such that every physical human entity can have no more that one account. One part of this identity is simply the user\u2019s claim on one of his unique, permanent, and measurable characteristics -biometrics. Blockchain has proved its integrity as a platform for storing and performing computations on such claims. The biggest challenge here is to ensure that the user has submitted his own valid biometric data. Human verifiers can check if there is any inconsistency in other users\u2019 data, by peer-to-peer checks. For preventing bad behavior and centralization in the verification process, UniqueID benefits from novel governance mechanisms to choose verifiers and punish unjust ones. Also, there are incentives for honest verifiers and users by newly generated tokens. We show how the users\u2019 privacy can be preserved by using stateof-the-art cryptographic techniques, and so they can use their identity without any concerns for votings, financial and banking purposes, social media accounts, reputation systems etc."}
{"_id": "73ec2a1c77dc3cf1e3164626c81e4e72b15b2821", "title": "On the multi-stage influence maximization problem", "text": "The influence maximization problem turns up in many online social networks (OSN) in which each participant can potentially influence the decisions made by others in the network. Relationships can be friendships, family relationships or even professional relationships. Such influences can be used to achieve some objective and the influence maximization problem attempts to make decisions so as to maximize the effect of these influences. Past work focused on a static problem whereby one tries to identify the participants who are the most influential. Recently, a multi-stage version of the problem was proposed in which outcomes of influence attempts are observed before additional participants are chosen. For example, in online advertising, one can provide an impression to a particular subject and if that subject clicks on the impression, then their friends are informed in the hope that this information will increase the chances that they also click on the impression and eventually purchase the product. This problem is computationally intensive; in this paper we investigate various optimization methods for finding its solution that yield close to optimal results while taking less computation time. These include greedy and particle swarm optimization algorithms."}
{"_id": "12425a9f9ef67a4d1bc3741cc722be2af05f9649", "title": "Conflict in Adult Close Relationships : An Attachment Perspective", "text": "Relationship researchers have focused on the frequency of conflict in couples' relationships and the manner in which couples engage in and try to resolve conflicts. conflict, under some conditions, may facilitate the development and maintenance of associated with patterns of behavior (e.g., negative affect reciprocity, demand-withdraw) and thought that tend to escalate conflict and make it more difficult to negotiate a facilitates intimacy or exacerbates distress may depend on individual differences in the way in which people interpret and respond to conflict. framework for understanding different responses to conflict. People are thought to differ in their working models of attachment, which include expectations, beliefs, and goals These working models are likely to shape people's thoughts, feelings, and behavior during conflict. For example, a person who expects close others to be generally responsive and available is likely to interpret and respond to conflict very differently from a person who expects close others to be rejecting and unavailable. Attachment theory may be able to inform the literature on 3 conflict in close relationships by suggesting how individuals might differ in how they construe conflict. At the same time, the study of relationship conflict provides a useful context for testing important aspects of attachment theory. Conflict may be particularly likely to reveal attachment processes because (a) it may act as a stressor on the relationship and thereby activate the attachment system (Simpson, Rholes, & Phillips, 1996), (b) it challenges partners' abilities to regulate their emotions and behavior (Kobak & Duemmler, 1994), which are thought to be connected to attachment processes and (c) it may trigger behaviors (e.g., personal disclosures) that typically promote intimacy, thereby providing evidence relevant to different attachment goals such as achieving intimacy or maintaining self-reliance (Pietromonaco & Feldman Barrett, 1997). In this chapter, we first discuss how conflict can be conceptualized within an attachment framework. Specifically, we propose that conflict may pose a threat to the attachment bond, but that it also may provide an opportunity for perceiving or experiencing greater intimacy. Furthermore, the degree to which people perceive conflict as a threat, opportunity, or both will depend on the content (e.g., expectations, beliefs, goals) of their working models of attachment. Next, we identify a set of predictions that follow from this framework, and evaluate extent to which empirical findings support these predictions; in particular, we attempt to integrate divergent findings in the empirical literature. Finally, we outline several critical \u2026"}
{"_id": "581c012d0d3b7b5c2318413f7ad7640643e9f8ba", "title": "Evaluation of cross-language voice conversion based on GMM and straight", "text": "Voice conversion is a technique for producing utterances using any target speakers\u2019 voice from a single source speaker\u2019s utterance. In this paper, we apply cross-language voice conversion between Japanese and English to a system based on a Gaussian Mixture Model (GMM) method and STRAIGHT, a high quality vocoder. To investigate the effects of this conversion system across different languages, we recorded two sets of bilingual utterances and performed voice conversion experiments using a mapping function which converts parameters of acoustic features for a source speaker to those of a target speaker. The mapping functions were trained using bilingual databases of both Japanese and English speech. In an objective evaluation using Mel cepstrum distortion (Mel CD), it was confirmed that the system can perform cross-language voice conversion with the same performance as that within a single-language."}
{"_id": "b2db00f73fc6b97ebe12e97cfdaefbb2fefc253b", "title": "A Comparison Between the Silhouette Index and the Davies-Bouldin Index in Labelling IDS Clusters", "text": "One of the most difficult problems in the design of an anomaly b sed intrusion detection system (IDS) that uses clustering is th at of labelling the obtained clusters, i.e. determining which of them correspond t \u201dgood\u201d behaviour on the network/host and which to \u201dbad\u201d behaviour. In this pap er, a new clusters\u2019 labelling strategy, which makes use of a clustering quality index is proposed for application in such an IDS. The aim of the new labelling algor ithm is to detect compact clusters containing very similar vectors and these are highly likely to be attack vectors. Two clustering quality indexes have been te sted and compared: the Silhouette index and the Davies-Bouldin index. Experiment al results comparing the effectiveness of a multiple classifier IDS with the two in dexes implemented show that the system using the Silhouette index produces sli ghtly more accurate results than the system that uses the Davies-Bouldin index. However, the computation of the Davies-Bouldin index is much less complex th an the computation of the Silhouette index, which is a very important advantage regarding eventual real-time operation of an IDS that employs clustering."}
{"_id": "32b892031491ceef88db855c343071ca914d18ac", "title": "Detecting social media mobile botnets using user activity correlation and artificial immune system", "text": "With the rapidly growing development of cellular networks and powerful smartphones, botnets have invaded the mobile domain. Social media, like Twitter, Facebook, and YouTube have created a new communication channel for attackers. Recently, bot masters started to exploit social media for different malicious activity, such as sending spam, recruitment of new bots, and botnet command and control. In this paper we propose a detection technique for social mediabased mobile botnets using Twitter. The proposed method combines the correlation between tweeting and user activity, such as clicks or taps, and an Artificial Immune System detector, to detect tweets caused by bots and differentiate them from tweets generated by user or by user-approved applications. This detector creates a signature of the tweet and compares it with a dynamically updated signature library of bot behavior signatures. The proposed system has been fully implemented on Android platform and tested under several sets of generated tweets. The test results show that the proposed method has a very high accuracy in detecting bot tweets with about 95% detection ratio."}
{"_id": "ee35c52c22fadf92277c308263be6288249a6327", "title": "A miniaturized wideband dual-polarized linear array with balanced antipodal Vivaldi antenna", "text": "Previous studies on Vivaldi wideband phased arrays are mainly focused on Two-dimensional scanning phased arrays. A miniaturized balanced antipodal Vivaldi antenna (BAVA) is presented in this paper. A novel vertical parasitic metal strip loading is employed in the dual-polarized linear phased arrays, it to make a compact structure. With the arc-shaped slots and metal strip loads, the radiation performance in lower operating band can be greatly enhanced. The proposed antenna is simulated in the infinite condition through periodic boundary with the size of 100mm (length) *100mm (width) *125mm (depth). The antenna achieves an impedance bandwidth when scanning to \u00b150\u00b0 for VSWR\u22643 achieves 4:1 (0.5GHz-2GHz) bandwidth and 5:1 (0.4GHz-2GHz) bandwidth in the vertical polarization and horizontal polarization, respectively, and the isolation is less than -18dB over the operating frequency range."}
{"_id": "7227d6b036febeaf686b058a1e2c75fe92eeb5a1", "title": "The effects of stimulant therapy, EEG biofeedback, and parenting style on the primary symptoms of attention-deficit/hyperactivity disorder.", "text": "One hundred children, ages 6-19, who were diagnosed with attention-deficit/hyperactivity disorder (ADHD), either inattentive or combined types, participated in a study examining the effects of Ritalin, EEG biofeedback, and parenting style on the primary symptoms of ADHD. All of the patients participated in a 1-year, multimodal, outpatient program that included Ritalin, parent counseling, and academic support at school (either a 504 Plan or an IEP). Fifty-one of the participants also received EEG biofeedback therapy. Posttreatment assessments were conducted both with and without stimulant therapy. Significant improvement was noted on the Test of Variables of Attention (TOVA; L. M. Greenberg, 1996) and the Attention Deficit Disorders Evaluation Scale (ADDES; S. B. McCarney, 1995) when participants were tested while using Ritalin. However, only those who had received EEG biofeedback sustained these gains when tested without Ritalin. The results of a Quantitative Electroencephalographic Scanning Process (QEEG-Scan; V. J. Monastra et al., 1999) revealed significant reduction in cortical slowing only in patients who had received EEG biofeedback. Behavioral measures indicated that parenting style exerted a significant moderating effect on the expression of behavioral symptoms at home but not at school."}
{"_id": "0c8f2cd4282d31cdff72d9471f2639d66fe28047", "title": "Interval Stabbing Problems in Small Integer Ranges", "text": "Given a set I of n intervals, a stabbing query consists of a point q and asks for all intervals in I that contain q. The Interval Stabbing Problem is to find a data structure that can handle stabbing queries efficiently. We propose a new, simple and optimal approach for different kinds of interval stabbing problems in a static setting where the query points and interval ends are in {1, . . . , O(n)}. keywords: interval stabbing, interval intersection, rank space, static, discrete, point enclosure"}
{"_id": "3181b9ce21265bbf8175314714e1535f75b3d80f", "title": "(Leveled) Fully Homomorphic Encryption without Bootstrapping", "text": "We present a novel approach to fully homomorphic encryption (FHE) that dramatically improves performance and bases security on weaker assumptions. A central conceptual contribution in our work is a new way of constructing leveled, fully homomorphic encryption schemes (capable of evaluating arbitrary polynomial-size circuits of a-priori bounded depth), without Gentry\u2019s bootstrapping procedure.\n Specifically, we offer a choice of FHE schemes based on the learning with error (LWE) or Ring LWE (RLWE) problems that have 2 \u03bb security against known attacks. We construct the following.\n (1) A leveled FHE scheme that can evaluate depth-L arithmetic circuits (composed of fan-in 2 gates) using O(\u03bb. L3) per-gate computation, quasilinear in the security parameter. Security is based on RLWE for an approximation factor exponential in L. This construction does not use the bootstrapping procedure.\n (2) A leveled FHE scheme that can evaluate depth-L arithmetic circuits (composed of fan-in 2 gates) using O(\u03bb2) per-gate computation, which is independent of L. Security is based on RLWE for quasipolynomial factors. This construction uses bootstrapping as an optimization.\n We obtain similar results for LWE, but with worse performance. All previous (leveled) FHE schemes required a per-gate computation of \u03a9(\u03bb3.5), and all of them relied on subexponential hardness assumptions.\n We introduce a number of further optimizations to our scheme based on the Ring LWE assumption. As an example, for circuits of large width (e.g., where a constant fraction of levels have width \u03a9(\u03bb)), we can reduce the per-gate computation of the bootstrapped version to O(\u03bb), independent of L, by batching the bootstrapping operation. At the core of our construction is a new approach for managing the noise in lattice-based ciphertexts, significantly extending the techniques of Brakerski and Vaikuntanathan [2011b]."}
{"_id": "807168ac98191495a6fec9e992f9a5588c59c2b6", "title": "Are educational computer microgames engaging and effective for knowledge acquisition at high-schools ? A quasi-experimental study", "text": "Curricular schooling can benefit from the usage of educational computer games, but it is difficult to integrate them in the formal schooling system. Here, we investigate one possible approach to this integration, which capitalizes on using a micro-game that can be played with a teacher\u2019s guidance as a supplement after a traditional expository lecture followed by a debriefing. The game\u2019s purpose is to reinforce and integrate part of the knowledge learnt during the lecture. We investigated feasibility of this approach in a quasi-experimental study in 70 min long seminars on the topic of animal learning at 5 classes at 4 different high-schools in the Czech Republic. Each class was divided to two groups randomly. After an expository lecture, the game group played a game called Orbis Pictus Bestialis while the control group received an extra lecture that used media-rich materials. The time allotment was the same in both groups. We investigated the immediate and one month delayed effects of the game on students\u2019 knowledge reinforced and integrated by the game as well as on knowledge learnt during the expository lecture but not strengthened by the game. We also investigated students\u2019 overall appeal towards the seminar and its perceived educational value. Data from 100 students were analysed. The results showed that a) the game-playing is comparable to the traditional form of teaching concerning immediate knowledge gains and has a significant medium positive effect size regarding retention, b) the gameplaying is not detrimental to information transmitted in the expository lecture but not strengthened by the game, c) perceived educational value and the overall appeal were high in the game group, nevertheless the perceived educational value was slightly lower in the game group comparing to the traditional group. Our results suggest that the proposed approach of harnessing educational computer games at high-schools is promising. 2011 Elsevier Ltd. All rights reserved."}
{"_id": "d9a6ca780e82477e8d77e4abd02c09d8057cfb8c", "title": "Deep learning for health informatics: Recent trends and future directions", "text": "Health informatics has emerged as a growing domain of interest among researchers world-wide owing to its major implications on society. Applications of machine learning in healthcare range from disease prediction to patient-level personalized services. The prevalence of big data in healthcare has paved the way for applications based on deep learning techniques in the past few years. This paper reviews recent trends and applications of deep learning applied to the healthcare domain. We highlight recent research work, identify challenges and suggest possible future directions that could be pursued further in this domain."}
{"_id": "62198d1c71b827f0674da3d4d7ebf718702713eb", "title": "Phase-Based Methods for Heart Rate Detection Using UWB Impulse Doppler Radar", "text": "Ultra-wideband (UWB) pulse Doppler radars can be used for noncontact vital signs monitoring of more than one subject. However, their detected signals typically have low signal-to-noise ratio (SNR) causing significant heart rate (HR) detection errors, as the spurious harmonics of respiration signals and mixed products of respiration and heartbeat signals (that can be relatively higher than heartbeat signals) corrupt conventional fast Fourier transform spectrograms. In this paper, we extend the complex signal demodulation (CSD) and arctangent demodulation (AD) techniques previously used for accurately detecting the phase variations of reflected signals of continuous wave radars to UWB pulse radars as well. These detection techniques reduce the impact of the interfering harmonic signals, thus improving the SNR of the detected vital sign signals. To further enhance the accuracy of the HR estimation, a recently developed state-space method has been successfully combined with CSD and AD techniques and over 10 dB improvements in SNR is demonstrated. The implementation of these various detection techniques has been experimentally investigated and full error and SNR analysis of the HR detection are presented."}
{"_id": "ccfb3ce17d27d239ac39458987f67fcade966b62", "title": "Inspection of pole-like structures using a vision-controlled VTOL UAV and shared autonomy", "text": "We present an approach for the inspection of vertical pole-like infrastructure using a vertical take-off and landing (VTOL) unmanned aerial vehicle and shared autonomy. Inspecting vertical structures, such as light and power distribution poles, is a time consuming, dangerous and expensive task with high operator workload. To address these issues, we propose a VTOL platform that can operate at close-quarters, whilst maintaining a safe stand-off distance and rejecting environmental disturbances. We adopt an Image based Visual Servoing (IBVS) technique using only two line features to stabilise the vehicle with respect to a pole. Visual, inertial and sonar data are used, making the approach suitable for indoor or GPS-denied environments. Results from simulation and outdoor flight experiments demonstrate the system is able to successfully inspect and circumnavigate a pole."}
{"_id": "b3b3818d5c94db12fb632354926ccbcf02b14a4e", "title": "SIT: A Lightweight Encryption Algorithm for Secure Internet of Things", "text": "The Internet of Things (IoT) being a promising technology of the future is expected to connect billions of devices. The increased number of communication is expected to generate mountains of data and the security of data can be a threat. The devices in the architecture are essentially smaller in size and low powered. Conventional encryption algorithms are generally computationally expensive due to their complexity and requires many rounds to encrypt, essentially wasting the constrained energy of the gadgets. Less complex algorithm, however, may compromise the desired integrity. In this paper we propose a lightweight encryption algorithm named as Secure IoT (SIT). It is a 64-bit block cipher and requires 64-bit key to encrypt the data. The architecture of the algorithm is a mixture of feistel and a uniform substitution-permutation network. Simulations result shows the algorithm provides substantial security in just five encryption rounds. The hardware implementation of the algorithm is done on a low cost 8-bit micro-controller and the results of code size, memory utilization and encryption/decryption execution cycles are compared with benchmark encryption algorithms. The MATLAB code for relevant simulations is available online at https://goo.gl/Uw7E0W. Keywords\u2014IoT; Security; Encryption; Wireless Sensor Network WSN; Khazad"}
{"_id": "f2545e19fe7af037f13f693711afe3eb9019bf72", "title": "Slot-Gated Modeling for Joint Slot Filling and Intent Prediction", "text": "Attention-based recurrent neural network models for joint intent detection and slot filling have achieved the state-of-the-art performance, while they have independent attention weights. Considering that slot and intent have the strong relationship, this paper proposes a slot gate that focuses on learning the relationship between intent and slot attention vectors in order to obtain better semantic frame results by the global optimization. The experiments show that our proposed model significantly improves sentence-level semantic frame accuracy with 4.2% and 1.9% relative improvement compared to the attentional model on benchmark ATIS and Snips datasets respectively1."}
{"_id": "742ae6841eefa0ce5ec2f22cef938f88e660f677", "title": "The effects of prosthetic foot stiffness on transtibial amputee walking mechanics and balance control during turning.", "text": "BACKGROUND\nLittle evidence exists regarding how prosthesis design characteristics affect performance in tasks that challenge mediolateral balance such as turning. This study assesses the influence of prosthetic foot stiffness on amputee walking mechanics and balance control during a continuous turning task.\n\n\nMETHODS\nThree-dimensional kinematic and kinetic data were collected from eight unilateral transtibial amputees as they walked overground at self-selected speed clockwise and counterclockwise around a 1-meter circle and along a straight line. Subjects performed the walking tasks wearing three different ankle-foot prostheses that spanned a range of sagittal- and coronal-plane stiffness levels.\n\n\nFINDINGS\nA decrease in stiffness increased residual ankle dorsiflexion (10-13\u00b0), caused smaller adaptations (<5\u00b0) in proximal joint angles, decreased residual and increased intact limb body support, increased residual limb propulsion and increased intact limb braking for all tasks. While changes in sagittal-plane joint work due to decreased stiffness were generally consistent across tasks, effects on coronal-plane hip work were task-dependent. When the residual limb was on the inside of the turn and during straight-line walking, coronal-plane hip work increased and coronal-plane peak-to-peak range of whole-body angular momentum decreased with decreased stiffness.\n\n\nINTERPRETATION\nChanges in sagittal-plane kinematics and kinetics were similar to those previously observed in straight-line walking. Mediolateral balance improved with decreased stiffness, but adaptations in coronal-plane angles, work and ground reaction force impulses were less systematic than those in sagittal-plane measures. Effects of stiffness varied with the residual limb inside versus outside the turn, which suggests that actively adjusting stiffness to turn direction may be beneficial."}
{"_id": "543b52629963eac11b2b8b3cf05273539277bafb", "title": "Software-specific part-of-speech tagging: an experimental study on stack overflow", "text": "Part-of-speech (POS) tagging performance degrades on out-of-domain data due to the lack of domain knowledge. Software engineering knowledge, embodied in textual documentations, bug reports and online forum discussions, is expressed in natural language, but is full of domain terms, software entities and software-specific informal languages. Such software texts call for software-specific POS tagging. In the software engineering community, there have been several attempts leveraging POS tagging technique to help solve software engineering tasks. However, little work is done for POS tagging on software natural language texts.\n In this paper, we build a software-specific POS tagger, called S-POS, for processing the textual discussions on Stack Overflow. We target at Stack Overflow because it has become an important developer-generated knowledge repository for software engineering. We define a POS tagset that is suitable for describing software engineering knowledge, select corpus, develop a custom tokenizer, annotate data, design features for supervised model training, and demonstrate that the tagging accuracy of S-POS outperforms that of the Stanford POS Tagger when tagging software texts. Our work presents a feasible roadmap to build software-specific POS tagger for the socio-professional contents on Stack Overflow, and reveals challenges and opportunities for advanced software-specific information extraction."}
{"_id": "a7f0587ea7ddfb9fcef6210a898b6fe37884182a", "title": "A Constraint-Based Approach to Constructing Continuous Cartograms", "text": "We present a new constraint-based continuous area cartogram construction method that is unique in its ability to preserve essential cues for recognition of region shapes. It automatically achieves desired region areas while maintaining correct map topology. The algorithm is compared with a number of existing methods, and results are shown to be superior in both accuracy and preservation of shape recognition cues. Through hierarchical resolution, we first perform gross adjustments upon a coarsely resampled map and later refine the map at progressively higher levels of detail."}
{"_id": "e6556a463c21cc3b114ac63b02b1667a73ff7d2c", "title": "Learning Environmental Sounds with Multi-scale Convolutional Neural Network", "text": "Deep learning has dramatically improved the performance of sounds recognition. However, learning acoustic models directly from the raw waveform is still challenging. Current waveform-based models generally use time-domain convolutional layers to extract features. The features extracted by single size filters are insufficient for building discriminative representation of audios. In this paper, we propose multi-scale convolution operation, which can get better audio representation by improving the frequency resolution and learning filters cross all frequency area. For leveraging the waveform-based features and spectrogram-based features in a single model, we introduce twophase method to fuse the different features. Finally, we propose a novel end-to-end network called WaveMsNet based on the multi-scale convolution operation and two-phase method. On the environmental sounds classification datasets ESC-10 and ESC-50, the classification accuracies of our WaveMsNet achieve 93.75% and 79.10% respectively, which improve significantly from the previous methods."}
{"_id": "3d0df7797a91b5ba7d9e018842a6a1d7abd8fd80", "title": "Orange: From Experimental Machine Learning to Interactive Data Mining", "text": "Orange (www.ailab.si/orange) is a suite for machine learning and data mining. It can be used though scripting in Python or with visual programming in Orange Canvas using GUI components called widgets. In the demonstration we will show how to easily prototype stateof-the-art machine learning algorithms through Orange scripting, and design powerful and interactive data exploration applications through visual programming. 1 Orange, a Component-Based Framework Orange is a comprehensive, component-based framework for machine learning and data mining. It is intended for both experienced users and researchers in machine learning who want to prototype new algorithms while reusing as much of the code as possible, and for those just entering the field who can either write short Python scripts for data analysis or enjoy in the powerful while easy-to-use visual programming environment. Orange includes a range of techniques, such as data management and preprocessing, supervised and unsupervised learning, performance analysis, and a range of data and model visualization techniques. 1.1 Scripting in Python As a framework, Orange is comprised of several layers. The core design principle was to use C++ to code the basic data representation and manipulation and all time-complex procedures, such as most learning algorithms and data preprocessing. Tasks that are less time consuming are coded in Python. Python is a popular object-oriented scripting language known for its simplicity and power, and often used for prototyping and as a \u201dglue-language\u201d for components written in other languages. The interface between C++ and Python provides a tight integration: Python scripts can access and manipulate Orange objects as if they were implemented in Python. On the other hand, components defined in Python can be used by the C++ core. For instance, one can use classification tree as implemented within Orange (in C++) but prototype a component for attribute selection in Python. For Orange, we took special care to implement machine learning methods so that they are assembled from a set of reusable components one can either use in the new algorithms, or replace them with prototypes written in Python. Just for a taste, here is a simple Python script, which, using Orange, reads the data, reports on the number of instances and attributes, builds two classifiers and outputs predicted and true class of the first five instances. import orange data = orange.ExampleTable(\u2019voting.tab\u2019) print \u2019Instances:\u2019, len(data), \u2019Attributes:\u2019, len(data.domain.attributes) nbc = orange.BayesLearner(data) knn = orange.kNNLearner(data, k=10) for i in range(5): print nbc(data[i]), knn(data[i]), \u2019vs. true class\u2019, data[i].getClass() Another, a bit more complicated script below, implements a classification tree learner where node attributes that split the data are chosen at random by a function randomChoice, which is used in place of data splitting component of Orange\u2019s classification tree inducer. The script builds a standard and random tree from the data, and reports on their sizes. 
import orange, random def randomChoice(instances, *args): attr = random.choice(instances.domain.attributes) cl = orange.ClassifierFromVar(whichVar=attr, classVar=attr) return cl, attr.values, None, 1 treeLearner = orange.TreeLearner() rndLearner = orange.TreeLearner() rndLearner.split = randomChoice data = orange.ExampleTable(\u2019voting.tab\u2019) tree = treeLearner(data) rndtree = rndLearner(data) print tree.treesize(), \u2019vs.\u2019, rndtree.treesize() 1.2 Visual Programming Component-based approach was also used for graphical user\u2019s interface (GUI). Orange\u2019s GUI is made of widgets, which are essentially a GUI wrappers around data analysis algorithms implemented in Orange and Python. Widgets communicate through channels, and a particular set of connected widgets is called a schema. Orange schemas can be either set in Python scripts, or, preferably, are put together by means of visual programming in an application called Orange Canvas. Besides ease-of-use and flexibility, data exploration widgets were carefully design to support interaction. Clicking on a classification tree node in the tree visualization widget, for example, outputs the corresponding data instances making them available for further analysis. Any visualization of predictive or visualization models where their elements are associated with particular subsets of instances, attributes, data domains, etc., behave in the similar way. A snapshot of Orange Canvas with an example schema is shown in Fig. 1. Fig. 1. Snapshot of Orange Canvas with a schema that takes a microarray data, performs k-means clustering, and evaluates the performance of two different supervised learning methods when predicting the cluster label. Clustered data is visualized in the Heat Map widget, which sends any selected data subset to the Scatterplot widget. 2 Orange Demonstration at ECML-PKDD The demonstration at ECML-PKDD will feature an introduction to Orange, give a tutorial on how to do machine learning by scripting, and show how to use component-based programming to design a new induction algorithm. The second part of the demonstration will demonstrate Orange widgets and visual programming, and will show hot to use these to build classifiers, find interaction between attributes, and do a range of interesting data visualizations. Demonstration will be live, e.g. we will write scripts and visually program at the stage rather than show transparencies. The demonstration will use the data from medicine (predictive models) and functional genomics (data mining of microarray and sequence data). CD\u2019s with installations and copies of the white-papers will be handed out. 3 On Significance and Contribution Orange is an open-source framework that features both scripting and visual programming. Because of component-based design in C++ and integration with Python, Orange should appeal to machine learning researchers for the speed of execution and ease of prototyping of new methods. Graphical user\u2019s interface is provided through visual programming and carefully designed widgets that support interactive data exploration. Component-based design, both on the level of procedural and visual programming, flexibility in combining components to design new machine learning methods and data mining applications, and user and developer-friendly environment are also the most significant attributes of Orange and those where Orange can make its contribution to the community."}
{"_id": "8109c1996d8e3463bf771e278a78f2c9a3fbc52d", "title": "Analytics-driven data ingestion and derivation in the AWESOME polystore", "text": "Polystores, i.e., data management systems that use multiple stores for different data models, are gaining popularity. We are developing a polystore-based system called AWESOME to support social data analytics. The AWESOME polystore can support relational, semistructured, graph and text data and houses a Spark computation engine to produce derived data during ingestion. ADIL, the data ingestion language of AWESOME allows a user to flexibly specify the placement of original and derived data into and across component stores and the computation engine. The paper also outlines a number of optimization strategies for managing data placement in AWESOME."}
{"_id": "16358a75a3a6561d042e6874d128d82f5b0bd4b3", "title": "Good Word Attacks on Statistical Spam Filters", "text": "Unsolicited commercial email is a significant problem for users and providers of email services. While statistical spam filters have proven useful, senders of spam are learning to bypass these filters by systematically modifying their email messages. In a good word attack, one of the most common techniques, a spammer modifies a spam message by inserting or appending words indicative of legitimate email. In this paper, we describe and evaluate the effectiveness of active and passive good word attacks against two types of statistical spam filters: naive Bayes and maximum entropy filters. We find that in passive attacks without any filter feedback, an attacker can get 50% of currently blocked spam past either filter by adding 150 words or fewer. In active attacks allowing test queries to the target filter, 30 words will get half of blocked spam past either filter."}
{"_id": "250fdb28cb6515147eff52ad18bc07902c9d88ad", "title": "Social Networking Sites and Addiction: Ten Lessons Learned", "text": "Online social networking sites (SNSs) have gained increasing popularity in the last decade, with individuals engaging in SNSs to connect with others who share similar interests. The perceived need to be online may result in compulsive use of SNSs, which in extreme cases may result in symptoms and consequences traditionally associated with substance-related addictions. In order to present new insights into online social networking and addiction, in this paper, 10 lessons learned concerning online social networking sites and addiction based on the insights derived from recent empirical research will be presented. These are: (i) social networking and social media use are not the same; (ii) social networking is eclectic; (iii) social networking is a way of being; (iv) individuals can become addicted to using social networking sites; (v) Facebook addiction is only one example of SNS addiction; (vi) fear of missing out (FOMO) may be part of SNS addiction; (vii) smartphone addiction may be part of SNS addiction; (viii) nomophobia may be part of SNS addiction; (ix) there are sociodemographic differences in SNS addiction; and (x) there are methodological problems with research to date. These are discussed in turn. Recommendations for research and clinical applications are provided."}
{"_id": "81179a51e518238e004b07e5c7f2fcef2ac3868f", "title": "Artificial Intelligence techniques: An introduction to their use for modelling environmental systems", "text": "Knowledge-based or Artificial Intelligence techniques are used increasingly as alternatives to more classical techniques to model environmental systems. We review some of them and their environmental applicability, with examples and a reference list. The techniques covered are case-based reasoning, rule-based systems, artificial neural networks, fuzzy models, genetic algorithms, cellular automata, multi-agent systems, swarm intelligence, reinforcement learning and hybrid systems. \u00a9 2008 IMACS. Published by Elsevier B.V. All rights reserved."}
{"_id": "1efea64c597a295e8302c3678a29ec7aff0c9460", "title": "Suitable MLP Network Activation Functions for Breast Cancer and Thyroid Disease Detection", "text": "This paper compared various MLP activation functions for classification problems. The most well-known (Artificial Neural Network) ANN architecture is the Multilayer Perceptron (MLP) network which is widely used for solving problems related to data classifications. Selection of the activation functions in the MLP network plays an essential role on the network performance. A lot of studies have been conducted by reseachers to investigate special activation function to solve different kind of problems. Therefore, this paper intends to investigate the activation functions in MLP networks in terms of the accuracy performances. The activation functions under investigation are sigmoid, hyperbolic tangent, neuronal, logarithmic, sinusoidal and exponential. Medical diagnosis data from two case studies, thyroid disease classification and breast cancer classification, have been used to test the performance of the MLP network. The MLP networks are trained using Back Propagation learning algorithm. The performance of the MLP networks are calculated based on the percentage of correct classificition. The results show that the hyperbolic tangent function in MLP network had the capability to produce the highest accuracy for classifying breast cancer data. Meanwhile, for thyroid disease classification, neuronal function is the most suitable function that performed the highest accuracy in MLP network."}
{"_id": "88ff2d1ae3a46b11b2c4ea132eeaa657c496bc13", "title": "Monolithic Low Cost Ka-Band Wilkinson Power Dividers on Flexible Organic Substrates", "text": "A millimeter wave Wilkinson power divider is presented on a thin and durable organic substrate, along with an analysis of bend topologies to reduce coupling between the output paths. The divider provides excellent performance across the entire Ka-band from 27 to 40 GHz. Insertion loss is approximately 0.3 dB and isolation between the output ports is around 30 dB at the design frequency."}
{"_id": "2b96ef623c868b6a702324e667b66099070c3698", "title": "Liquid crystal polymer (LCP) for microwave/millimeter wave multilayer packaging", "text": "This paper presents characterization and analysis of liquid crystal polymer materials for microwave and millimeter multi-layer packaging. Processing techniques have been developed to fabricate interconnects on this new LCP material. The experimental results demonstrate that an interconnect on LCP achieves a measured insertion loss of less than 0.1 dB/mm up to 50 GHz. This material is highly suitable for microwave and millimeter wave packaging."}
{"_id": "962f9eec6cce6253a2242fad2461746c9ccd2a0f", "title": "Wideband coplanar waveguide RF probe pad to microstrip transitions without via holes", "text": "A novel via-less coplanar waveguide (CPW) to microstrip transition is discussed and design rules based on simulations and experimental results are presented. This transition demonstrates a maximum insertion loss of 1 dB over the frequency range from 10 GHz to 40 GHz with a value of 0.4 dB at 20 GHz. This transition could find a variety of applications due to its compatibility with RF systems-on-a chip, low loss performance, low cost and its ease of fabrication."}
{"_id": "987878e987c493de52efa5b96db6558a65374585", "title": "A broadband Wilkinson balun using microstrip metamaterial lines", "text": "A metamaterial balun that converts a single-ended input to a differential output over a large bandwidth is presented. The device also exhibits excellent return loss, isolation, and through characteristics over the same frequency band. The balun comprises a Wilkinson divider, followed by a +90/spl deg/ negative-refractive-index (NRI) metamaterial (MM) phase-shifting line along the top branch, and a -90/spl deg/ MM phase-shifting line along the bottom branch. Utilizing MM lines for both the +90/spl deg/ and -90/spl deg/ branches allows the slopes of their phase responses to be matched, resulting in a broadband differential output signal. The theoretical performance of the balun is verified through circuit simulations and measurements of a fabricated prototype at 1.5 GHz. The MM balun exhibits a measured differential output phase bandwidth (180/spl deg//spl plusmn/10/spl deg/) of 1.16 GHz (77%), from 1.17 to 2.33 GHz. The measured isolation and return loss for all three ports remain below -10 dB over a bandwidth in excess of 2 GHz, while the output quantities |S/sub 21/| and |S/sub 31/| remain above -4 dB from 0.5 to 2.5 GHz."}
{"_id": "165002bc6d763cb4fe4d5db3cd59a0049711a9af", "title": "Mop: an efficient and generic runtime verification framework", "text": "Monitoring-Oriented Programming (MOP1) [21, 18, 22, 19] is a formal framework for software development and analysis, in which the developer specifies desired properties using definable specification formalisms, along with code to execute when properties are violated or validated. The MOP framework automatically generates monitors from the specified properties and then integrates them together with the user-defined code into the original system.\n The previous design of MOP only allowed specifications without parameters, so it could not be used to state and monitor safety properties referring to two or more related objects. In this paper we propose a parametric specification formalism-independent extension of MOP, together with an implementation of JavaMOP that supports parameters. In our current implementation, parametric specifications are translated into AspectJ code and then weaved into the application using off-the-shelf AspectJ compilers; hence, MOP specifications can be seen as formal or logical aspects.\n Our JavaMOP implementation was extensively evaluated on two benchmarks, Dacapo [14] and Tracematches [8], showing that runtime verification in general and MOP in particular are feasible. In some of the examples, millions of monitor instances are generated, each observing a set of related objects. To keep the runtime overhead of monitoring and event observation low, we devised and implemented a decentralized indexing optimization. Less than 8% of the experiments showed more than 10% runtime overhead; in most cases our tool generates monitoring code as efficient as the hand-optimized code. Despite its genericity, JavaMOP is empirically shown to be more efficient than runtime verification systems specialized and optimized for particular specification formalisms. Many property violations were detected during our experiments; some of them are benign, others indicate defects in programs. Many of these are subtle and hard to find by ordinary testing."}
{"_id": "5820bb5ed98d114a4a58ae891950c5df16ebcd00", "title": "A Low Voltage Delta-Sigma Fractional Frequency Divider for Multi-band WSN Frequency Synthesizers", "text": "A 1 V low voltage delta-sigma fractional-N frequency divider for multi-band (780/868/915 MHz and 2.4 GHz) WSN frequency synthesizers is presented. The frequency divider consists of a dual-modulus prescaler, a pulse-swallow counter and a delta-sigma modulator. The high-speed and low-voltage phase-switching dualmodulus prescaler is used in the frequency divider. Low threshold voltage transistors are applied to overcome low voltage supply and forward phase-switching technique is adopted to prevent glitches. The modified deltasigma modulator with long output sequence length and less spurs is adopted to minimize the fractional spurs. The frequency divider is designed in 0.18 \u03bcm TSMC RF CMOS technology under 1 V supply instead of the standard 1.8 V supply. The total chip area is 1190 \u03bcm \u00d7 485 \u03bcm including I/O pads. The post simulation results show the frequency divider operates normally over a wide range of 1.3-5.0 GHz and the core circuit (without test buffers) consumes 2.3 mW. Copyright \u00a9 2013 IFSA."}
{"_id": "f78f63a2a4093ec549ff2eaa683b247c9612a9ce", "title": "A high-accuracy method for fine registration of overlapping point clouds", "text": "This paper presents a high-accuracy method for fine registration of two partially overlapping point clouds that have been coarsely registered. The proposed algorithm, which is named dual interpolating point-tosurface method, is principally a modified variant of point-to-surface Iterative Closest Point (ICP) algorithm. The original correspondences are established by adopting a dual surface fitting approach using B-spline interpolation. A novel auxiliary pair constraint based on the surface fitting approach, together with surface curvature information, is employed to remove unreliable point matches. The combined constraint directly utilizes global rigid motion consistency in conjunction with local geometric invariant to reject false correspondences precisely and efficiently. The experimental results involving a number of realistic point clouds demonstrate that the new method can obtain accurate and robust fine registration for pairwise 3D point clouds. This method addresses highest accuracy alignment with less focus on recovery from poor coarse registrations. 2009 Elsevier B.V. All rights reserved."}
{"_id": "610425bf03641e29b98cdcb2b8f187f951644891", "title": "Rethink energy accounting with cooperative game theory", "text": "Energy accounting determines how much a software principal contributes to the total system energy consumption. It is the foundation for evaluating software and for operating system based energy management. While various energy accounting policies have been tried, there is no known way to evaluate them directly simply because it is hard to track all hardware usage by software in a heterogeneous multicore system like modern smartphones and tablets.\n In this work, we argue that energy accounting should be formulated as a cooperative game and that the Shapley value provides the ultimate ground truth for energy accounting policies. We reveal the important flaws of existing energy accounting policies based on the Shapley value theory and provide Shapley value-based energy accounting, a practical approximation of the Shapley value, for battery-powered mobile systems. We evaluate this approximation against existing energy accounting policies in two ways: (i) how well they identify the top energy consuming applications, and (ii) how effective they are in system energy management. Using a prototype based on Texas Instruments Pandaboard and smartphone workload, we experimentally demonstrate existing energy accounting policies can deviate by 400% in attributing energy consumption to running applications and can be up to 25% less effective in system energy management when compared to Shapley value-based energy accounting."}
{"_id": "bb3919f0c447117b063a6489cfff3a17e623e851", "title": "A Controlled Pilot-Outcome Study of Sensory Integration ( SI ) in the Treatment of Complex Adaptation to Traumatic Stress", "text": "This study tested whether sensory integration (SI) treatment combined with psychotherapy would improve symptom outcome over psychotherapy alone in the treatment of complex posttraumatic stress, as measured by the Disorders of Extreme Stress Not Otherwise Specified (DESNOS) clinical construct in a group of 10 adult patients with histories of childhood abuse. DESNOS symptoms were assessed at three time periods (T1, baseline; T2, after experimental group SI treatment; and T3, after wait-list control group SI treatment) using the Structured Interview for Disorders of Extreme Stress (SIDES). The Sensory Learning Program (SLP), developed by the Sensory Learning Institute of Boulder, Colorado, was used as the SI treatment modality. Results indicated significant differential improvement for the group treated with SLP in SIDES Total Score (T1/T2 and T2/T3), Self Perception (T1/T2 and T2/T3), Affect Regulation (T2/T3), and Alterations in Meaning (T1/T2)."}
{"_id": "ad6f69e983bdc86ec77aeb013692ccb35faf2979", "title": "B2C web site quality and emotions during online shopping episodes: An empirical study", "text": "This paper explores the impact of the quality of a web site on the cognitive process leading to consumers\u2019 emotions\u2014considered as direct antecedents to shopping behaviors and operationalized as mental states of readiness arising from the appraisal of events. A parsimonious theoretical model was defined and tested with data collected from 215 web-shopping episodes during which consumers were shopping for low-touch products. Analysis of the results showed that web site quality had a positive impact on the cognitive appraisal of situational state, which in turn influenced five of the six emotions of the proposed model: liking, joy, pride, dislike, and frustration. Results also showed that a substantial number of shoppers experienced intensively the emotions of liking and joy. Moreover, this paper highlights several implications that could help managers and webmasters improve the quality of their web sites. # 2006 Elsevier B.V. All rights reserved."}
{"_id": "6a31ff302d15f5159dc566603f15ac1d774c290b", "title": "An Efficient Character-Level Neural Machine Translation", "text": "Neural machine translation aims at building a single large neural network that can be trained to maximize translation performance. The encoder-decoder architecture with an attention mechanism achieves a translation performance comparable to the existing state-of-the-art phrase-based systems on the task of English-to-French translation. However, the use of large vocabulary becomes the bottleneck in both training and improving the performance. In this paper, we propose an efficient architecture to train a deep character-level neural machine translation by introducing a decimator and an interpolator. The decimator is used to sample the source sequence before encoding while the interpolator is used to resample after decoding. Such a deep model has two major advantages. It avoids the large vocabulary issue radically; at the same time, it is much faster and more memory-efficient in training than conventional character-based models. More interestingly, our model is able to translate the misspelled word like human beings."}
{"_id": "4d98b9b68becf6803f09cfc801e2eacdb4899b1c", "title": "Naturalizing consciousness: a theoretical framework.", "text": "Consciousness has a number of apparently disparate properties, some of which seem to be highly complex and even inaccessible to outside observation. To place these properties within a biological framework requires a theory based on a set of evolutionary and developmental principles. This paper describes such a theory, which aims to provide a unifying account of conscious phenomena."}
{"_id": "39da8410e503738eb19cd5d2f3e154e7e2a9971b", "title": "Image-based Calorie Content Estimation for Dietary Assessment", "text": "In this paper, we present an image-analysis based approach to calorie content estimation for dietary assessment. We make use of daily food images captured and stored by multiple users in a public Web service called Food Log. The images are taken without any control or markers. We build a dictionary dataset of 6512 images contained in Food Log the calorie content of which have been estimated by experts in nutrition. An image is compared to the ground truth data from the point of views of multiple image features such as color histograms, color correlograms and SURF fetures, and the ground truth images are ranked by similarities. Finally, calorie content of the input food image is computed by linear estimation using the top n ranked calories in multiple features. The distribution of the estimation shows that 79% of the estimations are correct within \u00b140% error and 35% correct within \u00b120% error."}
{"_id": "ca4a597a7783a4e9bba6bf39c938a182cebabfd0", "title": "A religious upbringing reduces the influence of genetic factors on disinhibition: evidence for interaction between genotype and environment on personality.", "text": "Information on personality, on anxiety and depression and on several aspects of religion was collected in 1974 Dutch families consisting of adolescent and young adult twins and their parents. Analyses of these data showed that differences between individuals in religious upbringing, in religious affiliation and in participation in church activities are not influenced by genetic factors. The familial resemblance for different aspects of religion is high, but can be explained entirely by environmental influences common to family members. Shared genes do not contribute to familial resemblances in religion. The absence of genetic influences on variation in several dimensions of religion is in contrast to findings of genetic influences on a large number of other traits that were studied in these twin families. Differences in religious background are associated with differences in personality, especially in Sensation Seeking. Subjects with a religious upbringing, who are currently religious and who engage in church activities score lower on the scales of the Sensation Seeking Questionnaire. The most pronounced effect is on the Disinhibition scale. The resemblances between twins for the Disinhibition scale differ according to their religious upbringing. Receiving a religious upbringing seems to reduce the influence of genetic factors on Disinhibition, especially in males."}
{"_id": "717b92e3857404c8dc625d9af3381bb48efd3e43", "title": "A shape and image merging technique to solve jigsaw puzzles", "text": "This paper proposes an algorithm for solving subsets of typical (canonical) jigsaw puzzles. This algorithm combines shape and image matching with a cyclic \u2018\u2018growth\u2019\u2019 process that tries to place pieces in their correct positions. First, the jigsaw pieces are extracted from the input image. Then, the corner points of the jigsaw pieces are detected. Next, piece classification and recognition are performed based on jigsaw piece models. Connection relationships between pieces are calculated and finally recovered by using boundary shape matching and image merging. We tested this algorithm by employing real-world images containing dozens of jigsaw pieces. The experiment s results show this algorithm is efficient and effective. 2003 Elsevier Science B.V. All rights reserved."}
{"_id": "a9a9e64bba4d015b73a01dc96c3af7cdb5169219", "title": "Adaptive Droop Control Applied to Voltage-Source Inverters Operating in Grid-Connected and Islanded Modes", "text": "This paper proposes a novel control for voltage-source inverters with the capability to flexibly operate in grid-connected and islanded modes. The control scheme is based on the droop method, which uses some estimated grid parameters such as the voltage and frequency and the magnitude and angle of the grid impedance. Hence, the inverter is able to inject independently active and reactive power to the grid. The controller provides a proper dynamics decoupled from the grid-impedance magnitude and phase. The system is also able to control active and reactive power flows independently for a large range of impedance grid values. Simulation and experimental results are provided in order to show the feasibility of the control proposed."}
{"_id": "164bd38ee518e8191fc3fb27b23a10c2668f76d9", "title": "Manipulating Attributes of Natural Scenes via Hallucination", "text": "Fig. 1. Given a natural image, the proposed approach can hallucinate different versions of the same scene in a wide range of conditions, e.g.night, sunset, winter, spring, rain, fog or even a combination of those. First, our method utilizes a single generator network to imagine the scene with respect to its semantic layout and the desired set of attributes. Then, it directly transfers the look from the hallucinated output to the input image, without a need to have access to a reference style image."}
{"_id": "4cff9725b6a755cfc27ccd4ed9e498d56a50d5ae", "title": "Reflection Removal Using Low-Rank Matrix Completion", "text": "The images taken through glass often capture a target transmitted scene as well as undesired reflected scenes. In this paper, we propose a low-rank matrix completion algorithm to remove reflection artifacts automatically from multiple glass images taken at slightly different camera locations. We assume that the transmitted scenes are more dominant than the reflected scenes in typical glass images. We first warp the multiple glass images to a reference image, where the gradients are consistent in the transmission images while the gradients are varying across the reflection images. Based on this observation, we compute a gradient reliability such that the pixels belonging to the salient edges of the transmission image are assigned high reliability. Then we suppress the gradients of the reflection images and recover the gradients of the transmission images only, by solving a low-rank matrix completion problem in gradient domain. We reconstruct an original transmission image using the resulting optimal gradient map. Experimental results show that the proposed algorithm removes the reflection artifacts from glass images faithfully and outperforms the existing algorithms on typical glass images."}
{"_id": "218a80b6c988babb1c1c77adbfe18a785563d21f", "title": "Unveiling clusters of events for alert and incident management in large-scale enterprise it", "text": "Large enterprise IT (Information Technology) infrastructure components generate large volumes of alerts and incident tickets. These are manually screened, but it is otherwise difficult to extract information automatically from them to gain insights in order to improve operational efficiency. We propose a framework to cluster alerts and incident tickets based on the text in them, using unsupervised machine learning. This would be a step towards eliminating manual classification of the alerts and incidents, which is very labor intense and costly. Our framework can handle the semi-structured text in alerts generated by IT infrastructure components such as storage devices, network devices, servers etc., as well as the unstructured text in incident tickets created manually by operations support personnel. After text pre-processing and application of appropriate distance metrics, we apply different graph-theoretic approaches to cluster the alerts and incident tickets, based on their semi-structured and unstructured text respectively. For automated interpretation and read-ability on semi-structured text clusters, we propose a method to visualize clusters that preserves the structure and human-readability of the text data as compared to traditional word clouds where the text structure is not preserved; for unstructured text clusters, we find a simple way to define prototypes of clusters for easy interpretation. This framework for clustering and visualization will enable enterprises to prioritize the issues in their IT infrastructure and improve the reliability and availability of their services."}
{"_id": "aad7f9eeb10d4f655c3e3d18d3542603ad3071b4", "title": "Deep Unsupervised Learning of Visual Similarities", "text": "Exemplar learning of visual similarities in an unsupervised manner is a problem of paramount importance to Computer Vision. In this context, however, the recent breakthrough in deep learning could not yet unfold its full potential. With only a single positive sample, a great imbalance between one positive and many negatives, and unreliable relationships between most samples, training of Convolutional Neural networks is impaired. In this paper we use weak estimates of local similarities and propose a single optimization problem to extract batches of samples with mutually consistent relations. Conflicting relations are distributed over different batches and similar samples are grouped into compact groups. Learning visual similarities is then framed as a sequence of categorization tasks. The CNN then consolidates transitivity relations within and between groups and learns a single representation for all samples without the need for labels. The proposed unsupervised approach has shown competitive performance on detailed posture analysis and object classification."}
{"_id": "a05525533913378777e5f089cc0223cd26f3a590", "title": "Overuse lower extremity injuries in sports.", "text": "When athletes train harder the risk of injury increases, and there are several common overuse injuries to the lower extremity. Three of the most common lower extremity overuse injuries in sports are discussed including the diagnosis and treatments: medial tibal stress syndrome, iliotibial band syndrome, and stress fractures. The charge of sports medicine professionals is to identify and treat the cause of the injuries and not just treat the symptoms. Symptomatology is an excellent guide to healing and often the patient leads the physician to the proper diagnosis through an investigation of the athlete's training program, past injury history, dietary habits, choice of footwear, and training surface."}
{"_id": "fb8e5fe50227595de8de4fc661dc8eae45952d23", "title": "Tree Adjoining Grammar Based Parser for a Hindi Text-to-Scene Conversion System", "text": "We have developed Preksha: a Hindi text visualizer. Preksha generates a 3D scene by understanding a given Hindi text. We propose the Earley's parsing algorithm using Tree adjoining grammar (TAG) for Hindi language understanding. A brief overview of the TAG formalism and the Earley's parsing algorithm is presented. Preksha's Hindi language parsing is explainedwith the Lisp notation. The maincontribution of this paper is to apply TAG parsing for morphologically-richfree-order language Hindi for automatic text visualization (ATV)."}
{"_id": "01df112f239b088706d33b31834d7a34c3af6b3b", "title": "Autonomous Battery Exchange of UAVs with a Mobile Ground Base", "text": "This paper presents the autonomous battery exchange operation for small scale UAVs, using a mobile ground base that carries a robotic arm and a service station containing the battery exchange mechanism. The goal of this work is to demonstrate the means to increase the autonomy and persistence of robotic systems without requiring human intervention. The design and control of the system and its components are presented in detail, as well as the collaborative software framework used to plan and execute complex missions. Finally, the results of autonomous outdoor experiments are presented, in which the ground rover successfully localizes, retrieves, services, and deploys the landed UAV, proving its capacity to extend and enhance autonomous operations."}
{"_id": "a7533c8985e60605c9503c65ef66c8ae48f46f20", "title": "A Survey Paper on PID Control System", "text": "This paper concentrates on the work done on the PID control system specially on time delaying systems like network control systems. The different works which have already been done are summarised here like gain margin, phase margin methods. The absolute focal point on time delayed systemare the delays induced by network and the packet losses. This survey paper covers the study on such methods that compensates these"}
{"_id": "c14902a822d285a25eafdeb8a54b6afe2e5f2a75", "title": "Flying Swarm of Drones Over Circulant Digraph", "text": "Drones are critical to military collaborative engagement, surveillance, and monitoring of areas of interest. They also play an increasingly influential role in commercial applications. We describe a new innovative method for collisionless, survivable, and persistent flying of given $k$ drones over $2n$ waypoints that results in dispersed coverage of a given area of interest, where $n > k \\geq 3$ and $k$ divides $n$. Our method is fundamentally different than any known to date since it does not need sensors to avoid collisions between drones. In addition, the flight of drones does not have to be coordinated."}
{"_id": "f2ea043ca1932b8e7aa040ff68b2ff03a2f21675", "title": "High-voltage pulse waveform modulator based on solid-state Marx generator", "text": "Many research fields need special high-voltage pulses in pulsed power applications, which are different from typical rectangular pulses. In this paper, a high-voltage waveform modulator of two methods using a solid-state Marx generator has been developed and tested. Instead of manufacturing a generator with a specialized structure, the modulator can control a general Marx generator by low-voltage programmable logic devices to produce an irregular but controllable high-voltage triangular pulse with pulse length of ~4.5 \u03bcs and peak voltage of ~123 kV."}
{"_id": "39763630d38a447aa1126b5468b7effffc02e53a", "title": "Treevis.net: A Tree Visualization Reference", "text": "Tree visualization is one of the best-studied areas of information visualization; researchers have developed more than 200 visualization and layout techniques for trees. The treevis.net project aims to provide a hand-curated bibliographical reference to this ever-growing wealth of techniques. It offers a visual overview that users can filter to a desired subset along the design criteria of dimensionality, edge representation, and node alignment. Details, including links to the original publications, can be brought up on demand. Treevis.net has become a community effort, with researchers sending in preprints of their tree visualization techniques to be published or pointing out additional information."}
{"_id": "d644b0fe1c37f7f350fdf70d4d14505695dc2e93", "title": "Thinking Aloud : Reconciling Theory and Practice \u2014", "text": "\u2014M. TED BOREN AND JUDITH RAMEY, ASSOCIATE MEMBER, IEEE Abstract\u2014Thinking-aloud protocols may be the most widely used method in usability testing, but the descriptions of this practice in the usability literature and the work habits of practitioners do not conform to the theoretical basis most often cited for it: Ericsson and Simon\u2019s seminal work PROTOCOL ANALYSIS: VERBAL REPORTS AS DATA [1]. After reviewing Ericsson and Simon\u2019s theoretical basis for thinking aloud, we review the ways in which actual usability practice diverges from this model. We then explore the concept of SPEECH GENRE as an alternative theoretical framework. We first consider uses of this new framework that are consistent with Simon and Ericsson\u2019s goal of eliciting a verbal report that is as undirected, undisturbed, and constant as possible. We then go on to consider how the proposed new approach might handle problems that arise in usability testing that appear to require interventions not supported in the older model."}
{"_id": "1a295aae9d78d54b329f5f83e1e3925988dadabd", "title": "Cone Trees: animated 3D visualizations of hierarchical information", "text": "The task of managing and accessing large information spaces is a problem in large scale cognition. Emerging technologies for 3D visualization and interactive animation offer potential solutions to this problem, especially when the structure of the information can be visualized. We describe one of these Information Visualization techniques, called the Cone Tree, which is used for visualizing hierarchical information structures. The hierarchy is presented in 3D to maximize effective use of available screen space and enable visualization of the whole structure. Interactive animation is used to shift some of the user's cognitive load to the human perceptual system."}
{"_id": "29a1b10bae922ccb6a2d3f40958bcc1b70c5fbc1", "title": "Stacked Graphs \u2013 Geometry & Aesthetics", "text": "In February 2008, the New York Times published an unusual chart of box office revenues for 7500 movies over 21 years. The chart was based on a similar visualization, developed by the first author, that displayed trends in music listening. This paper describes the design decisions and algorithms behind these graphics, and discusses the reaction on the Web. We suggest that this type of complex layered graph is effective for displaying large data sets to a mass audience. We provide a mathematical analysis of how this layered graph relates to traditional stacked graphs and to techniques such as ThemeRiver, showing how each method is optimizing a different ldquoenergy functionrdquo. Finally, we discuss techniques for coloring and ordering the layers of such graphs. Throughout the paper, we emphasize the interplay between considerations of aesthetics and legibility."}
{"_id": "2e08e5a4084e1495dab1767f8f46ade97588db4b", "title": "Readings in information visualization - using vision to think", "text": "Visualization: The use of computer-supported, interactive, visual representations of data to amplify cognition. (6) Cognition is the acquisition or use of knowledge. This definition has the virtue of focusing as much on the purpose of visualization as the means. Hamming (1973) saud, \u201cthe purpose of computation is insight, not numbers.\u201d Likewise for visualization, \u201cthe purpose of visualization is insight not pictures.\u201d The main goals of this insight are discovery, decision making, and explanation. (6)"}
{"_id": "e892f94c1e86842d1cb2a9e97ed2369b95acf5cf", "title": "Managing information sharing within an organizational setting: A social network perspective", "text": "In the current information-driven and technologically based global economy, organizations are becoming increasingly dependent on the cumulative knowledge of their employees, suppliers, customers, and other key stakeholders. An organization\u2019s ability to share this knowledge among organizational members is key to its competitive advantage (Bock, Zmud, Kim, & Lee, 2007; Brown & Duguid, 2000; Small & Sage, 2006). Information sharing is critical to an organization\u2019s competitiveness and requires a free flow of information among members that is undistorted and up-to-date (Childhouse & Towill, 2003; Li & Lin, 2006; Moberg, Cutler, Gross, & Speh, 2002; Rahman, 2004; Tan, Lyman, & Wisner, 2002). However, extensive information sharing within organizations still appears to be the exception rather than the rule (Bock et al., 2007; Davenport & Prusack, 1998; Li & Lin, 2006). According to Li and Lin (2006), intensified competition and globalized markets are some of the challenges associated with getting products and services to the right place at the right time and at the lowest cost. These challenges, for instance, have forced organizations to realize that it is not enough to improve their efficiencies; rather, their entire supply chains have to be made competitive. One way for organizations to do this is to support information sharing among their work groups (Li & Lin, 2006). This article explores organizational information sharing as it relates to individuals within and between work groups. A review of the literature on organizational structure and information sharing was conducted to examine the research in this area. A case example then illustrates how a social network approach (SNA) was used to explore the process of measuring the social"}
{"_id": "84f55e87ac69e3e194ada0b86cb8a02375c0d371", "title": "A SECURE ONLINE PAYMENT SYSTEM USING STEGANOGRAPHY AND VISUAL CRYPTOGRAPHY", "text": "In This paper we presents a new methodology which provides limited information only that is important in transferring money while shopping online provides security for customers data and increasing in confidence of customer and preventing identity stealing. Here we used cryptography technique for image encryption. Data is encrypted in images shares which are meaningless images. These shares are then transferred over an untrusted communication channel. By combining only these shares give the original secret image. This images may contain ATM card details, credit card information etc. At merchants side merchant have to pay some amount per transaction so it\u2019s merchant\u2019s responsibility to protect customers information from being misused. So here is solution for such operations."}
{"_id": "95d0cd902ff0fa253b6757ba3c8e09ce25b494cc", "title": "A novel method for eye tracking and blink detection in video frames", "text": "This paper presents a novel method for eye tracking and blink detection in the video frames obtained from low resolution consumer grade web cameras. It uses a method involving Haar based cascade classifier for eye tracking and a combination of HOG features with SVM classifier for eye blink detection. The presented method is non intrusive and hence provides a comfortable user interaction. The eye tracking method has an accuracy of 92.3% and the blink detection method has an accuracy of 92.5% when tested using standard databases and a combined accuracy of 86% when tested under real world conditions of a normal room."}
{"_id": "05d9769a4e74d1bb9f0cc35a29a8c200defc40f4", "title": "THE INSTITUTIONAL FOUNDATIONS OF PUBLIC POLICY : A TRANSACTIONS APPROACH WITH APPLICATION TO ARGENTINA *", "text": "Public policies are the outcomes of complex intertemporal exchanges among politicians. The basic institutional characteristics of a country constitute the framework within which those transactions are accomplished. We develop a transactions theory to understand the ways in which political institutions affect the transactions that political actors are able to undertake, and hence the policies that emerge. We argue that Argentina is a case in which the functioning of political institutions has been such that it prevented the capacity to undertake efficient intertemporal political exchanges. We use positive political theory and transaction cost economics to explain the workings of Argentine political institutions, and to show how that maps into low-quality policies."}
{"_id": "3b317873646eaa99f4241c2016abf542e3b28498", "title": "Functional cartography of complex metabolic networks", "text": "High-throughput techniques are leading to an explosive growth in the size of biological databases and creating the opportunity to revolutionize our understanding of life and disease. Interpretation of these data remains, however, a major scientific challenge. Here, we propose a methodology that enables us to extract and display information contained in complex networks. Specifically, we demonstrate that we can find functional modules in complex networks, and classify nodes into universal roles according to their pattern of intra- and inter-module connections. The method thus yields a \u2018cartographic representation\u2019 of complex networks. Metabolic networks are among the most challenging biological networks and, arguably, the ones with most potential for immediate applicability. We use our method to analyse the metabolic networks of twelve organisms from three different superkingdoms. We find that, typically, 80% of the nodes are only connected to other nodes within their respective modules, and that nodes with different roles are affected by different evolutionary constraints and pressures. Remarkably, we find that metabolites that participate in only a few reactions but that connect different modules are more conserved than hubs whose links are mostly within a single module."}
{"_id": "a87ce57a00e0c80a1fa38e89801bddcc9f382885", "title": "Dengue fever prediction using K-means clustering algorithm", "text": "Dengue fever is a virus infection which is transmitted to humans by mosquitoes that living in tropical and subtropical climates and carries the virus. The dengue viruses occur in 4 serotypes (DENV-1 to DENV-4). A dengue disease ranges from mild febrile disease to severe hemorrhagic fever. Predicting the relationship between the dengue serotypes will surely help the biotechnologists and bioinformaticians to move one step forward to discover antibiotic for dengue. This paper has been focused four stages namely preprocessing, attribute selection, clustering and predicting the dengue fever. R 3.3.2 Tool is used for preprocessing the household of dengue dataset. D win's method has been applied to generate filled dataset by substituting all missing values for nominal and numeric attributes with mode and mean value. Dengue virus can be predicted by applying different data mining techniques. The main goal of research work is to predict the people who are affected by dengue depending upon categorization of age group using K-means clustering algorithm has been implemented."}
{"_id": "340cfb3a5d11b72f72b64636e3e80a64c77bfec5", "title": "Next generation data systems and knowledge products to support agricultural producers and science-based policy decision making", "text": "Research on next generation agricultural systems models shows that the most important current limitation is data, both for on-farm decision support and for research investment and policy decision making. One of the greatest data challenges is to obtain reliable data on farm management decision making, both for current conditions and under scenarios of changed bio-physical and socio-economic conditions. This paper presents a framework for the use of farm-level and landscape-scale models and data to provide analysis that could be used in NextGen knowledge products, such as mobile applications or personal computer data analysis and visualization software. We describe two analytical tools - AgBiz Logic and TOA-MD - that demonstrate the current capability of farmlevel and landscape-scale models. The use of these tools is explored with a case study of an oilseed crop, Camelina sativa, which could be used to produce jet aviation fuel. We conclude with a discussion of innovations needed to facilitate the use of farm and policy-level models to generate data and analysis for improved knowledge products."}
{"_id": "77c2bda34de0f18dce08e911c803c290cbfeab3c", "title": "A Temporal Neuro-Fuzzy Monitoring System to Manufacturing Systems", "text": "Fault diagnosis and failure prognosis are essential techniques in improving the safety of many manufacturing systems. Therefore, on-line fault detection and isolation is one of the most important tasks in safety-critical and intelligent control systems. Computational intelligence techniques are being investigated as extension of the traditional fault diagnosis methods. This paper discusses the Temporal Neuro-Fuzzy Systems (TNFS) fault diagnosis within an application study of a manufacturing system. The key issues of finding a suitable structure for detecting and isolating ten realistic actuator faults are described. Within this framework, data-processing interactive software of simulation baptized NEFDIAG (NEuro Fuzzy DIAGnosis) version 1.0 is developed. This software devoted primarily to creation, training and test of a classification Neuro-Fuzzy system of industrial process failures. NEFDIAG can be represented like a special type of fuzzy perceptron, with three layers used to classify patterns and failures. The system selected is the workshop of SCIMAT clinker, cement factory in Algeria."}
{"_id": "08d94e04e4be38ef06bd0a3294af3936a27db529", "title": "Review on Printed Log Periodic and Yagi MSA", "text": "The log periodic antenna and Yagi-Uda antenna are used in the applications where very high directivity is required. They also give very high gain in the range of 17-20dBi. This paper presents a review on various configurations of log periodic and Yagi antennas, their advantages and problems. One problem encountered with Yagi-Uda antenna is relatively less bandwidth. This problem is solved by log periodic antenna which can operate over high bandwidth and providing high gain at the same time. In this paper, a review of various techniques to realize printed Yagi-Uda and log periodic antenna is discussed. They are realized by using different feeding techniques like microstrip feeding, co-axial feeding etc. They are also realized by using modifying the shape of directors and reflectors. The high bandwidth (log periodic antenna) has also been realized by increasing the number of reflectors."}
{"_id": "0c6246b804b0ffd124aec6a0b081e8deeae50716", "title": "EMG analysis of the scapular muscles during a shoulder rehabilitation program.", "text": "The purpose of this study was to determine which exercises most effectively use the scapular muscles. Eight muscles in 9 healthy subjects were studied with indwelling electromyographic electrodes and cinematography while performing 16 exercises. The 8 muscles studied were the upper, middle, and lower trapezius; levator scapula; rhomboids; pectoralis minor; and the middle and lower serratus anterior. Each exercise was divided into arcs of motion and the electromyographic activity was quantified as a percentage of the maximal manual muscle test. The optimal exercises for each muscle were identified based on intensity (greater than 50% maximal manual muscle test) and duration (over at least 3 consecutive arcs of motion) of the muscle activity. Twelve of the exercises qualified as top exercises for all of the muscles. On further analysis, a group of 4 exercises was shown to make up the core of a scapular muscle strengthening program. Those 4 exercises include scaption (scapular plane elevation), rowing, push-up with a plus, and press-up."}
{"_id": "d100aea5a04ba5ed1cd5627926204bf75644ccb0", "title": "Research on autonomous stairs climbing for the shape-shifting robot", "text": "In this paper, we present an algorithm for a shape-shifting robot climbing stairs autonomously. The robot has been used several times for search and rescue missions. Because of the special environment of tasks, the robot must have the ability to climb stairs autonomously to help the operators. Due to the characters of robot in \"T\" configuration, the mathematical model of force is established and the whole process of climbing stairs is analyzed. Grouser-tread hooking, track-stair edge frictional force et al. are all considered during the process of climbing. A description of its dynamic model is also presented which is involved in the controller. The controller is comprised of center control and heading control, which can help the robot climb stairs safely and quickly. The effectiveness of the algorithms proposed are verified by experiment."}
{"_id": "34082dc7ce859da49ddec80417e2403124542e8b", "title": "A framework based on deep learning and mathematical morphology for cabin door detection in an automated aerobridge docking system", "text": "In this paper, a cabin door detection framework based on deep learning and mathematical morphology is proposed. It is applied to an automated docking system for airplane cabin door. This system needs to work under any weather condition like rain, shine, day and night. Limited by the number of videos, just a small dataset based on actual airport operation is established for aerobridge docking process. As the training dataset is small, the trained detector cannot identify all the cabin doors in this dataset. Some of the cabin doors, which are not detected, can be identified with the combination of deep learning and mathematical morphology. Experimental results show that the integration of deep learning and mathematical morphology performs better than the simple deep learning method."}
{"_id": "3acad5199ee074bdfd53dfb66250b0b7640b3249", "title": "Enhancements in UltraScale CLB Architecture", "text": "Each generation of FPGA architecture benefits from optimizations around its technology node and target usage. In this paper, we discuss some of the changes made to the CLB for Xilinx's 20nm UltraScale product family. We motivate those changes and demonstrate better results than previous CLB architectures on a variety of metrics. We show that, in demanding scenarios, logic placed in an UltraScale device requires 16% less wirelength than 7-series. Designs mapped to UltraScale devices also require fewer logic tiles. In this paper, we demonstrate the utilization benefits of the UltraScale CLB attributed to certain CLB enhancements. The enhancements described herein result in an average packing improvement of 3% for the example design suite. We also show that the UltraScale architecture handles aggressive, tighter packing more gracefully than previous generations of FPGA. These significant reductions in wirelength and CLB counts translate directly into power, performance and ease-of-use benefits."}
{"_id": "3007c11ef7752c3fc92afb44006270f386e70136", "title": "PPDDL 1 . 0 : An Extension to PDDL for Expressing Planning Domains with Probabilistic Effects", "text": "We desribe a variation of the planning domain definition language, PDDL, that permits the modeling of probabilistic planning problems with rewards. This language, PPDDL1.0, was used as the input language for the probabilistic track of the 4th International Planning Competition. We provide the complete syntax for PPDDL1.0 and give a semantics of PPDDL1.0 planning problems in terms of Markov decision processes. Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA"}
{"_id": "2e0b1facffd6e9a0b1bc6b87d1dab0874846fee0", "title": "Cloudy Computing: Leveraging Weather Forecasts in Energy Harvesting Sensor Systems", "text": "To sustain perpetual operation, systems that harvest environmental energy must carefully regulate their usage to satisfy their demand. Regulating energy usage is challenging if a system's demands are not elastic and its hardware components are not energy-proportional, since it cannot precisely scale its usage to match its supply. Instead, the system must choose when to satisfy its energy demands based on its current energy reserves and predictions of its future energy supply. In this paper, we explore the use of weather forecasts to improve a system's ability to satisfy demand by improving its predictions. We analyze weather forecast, observational, and energy harvesting data to formulate a model that translates a weather forecast to a wind or solar energy harvesting prediction, and quantify its accuracy. We evaluate our model for both energy sources in the context of two different energy harvesting sensor systems with inelastic demands: a sensor testbed that leases sensors to external users and a lexicographically fair sensor network that maintains steady node sensing rates. We show that using weather forecasts in both wind- and solar-powered sensor systems increases each system's ability to satisfy its demands compared with existing prediction strategies."}
{"_id": "882de83e90ea503b9d3c4967c8f2885078ec51a7", "title": "Fuzzy Ontology and LSTM-Based Text Mining: A Transportation Network Monitoring System for Assisting Travel", "text": "Intelligent Transportation Systems (ITSs) utilize a sensor network-based system to gather and interpret traffic information. In addition, mobility users utilize mobile applications to collect transport information for safe traveling. However, these types of information are not sufficient to examine all aspects of the transportation networks. Therefore, both ITSs and mobility users need a smart approach and social media data, which can help ITSs examine transport services, support traffic and control management, and help mobility users travel safely. People utilize social networks to share their thoughts and opinions regarding transportation, which are useful for ITSs and travelers. However, user-generated text on social media is short in length, unstructured, and covers a broad range of dynamic topics. The application of recent Machine Learning (ML) approach is inefficient for extracting relevant features from unstructured data, detecting word polarity of features, and classifying the sentiment of features correctly. In addition, ML classifiers consistently miss the semantic feature of the word meaning. A novel fuzzy ontology-based semantic knowledge with Word2vec model is proposed to improve the task of transportation features extraction and text classification using the Bi-directional Long Short-Term Memory (Bi-LSTM) approach. The proposed fuzzy ontology describes semantic knowledge about entities and features and their relation in the transportation domain. Fuzzy ontology and smart methodology are developed in Web Ontology Language and Java, respectively. By utilizing word embedding with fuzzy ontology as a representation of text, Bi-LSTM shows satisfactory improvement in both the extraction of features and the classification of the unstructured text of social media."}
{"_id": "83f75e5d9f384d57870f3551a38bb1e4491c535c", "title": "Single-structure 3-axis lorentz force magnetometer with sub-30 nT/\u221aHZ resolution", "text": "This work demonstrates a 3-axis Lorentz force magnetometer for electronic compass purposes. The magnetometer measures magnetic flux in 3 axes using a single structure. With 1 mW power consumption, the sensor achieves sub-30 nT/\u221aHz resolution in each of the 3 axes. Compared to the 3-axis Hall sensors currently used in smartphones, the 3-axis magnetometer shown here has the advantages of 10\u00d7 lower noise floor and the ability to be co-fabricated with MEMS inertial sensors."}
{"_id": "2496f4442e71d681a0f157c1ad930fb73d42f1f9", "title": "FrameBase: Representing N-Ary Relations Using Semantic Frames", "text": "Large-scale knowledge graphs such as those in the Linked Data cloud are typically represented as subject-predicate-object triples. However, many facts about the world involve more than two entities. While n-ary relations can be converted to triples in a number of ways, unfortunately, the structurally different choices made in different knowledge sources significantly impede our ability to connect them. They also make it impossible to query the data concisely and without prior knowledge of each individual source. We present FrameBase, a wide-coverage knowledge-base schema that uses linguistic frames to seamlessly represent and query n-ary relations from other knowledge bases, at different levels of granularity connected by logical entailment. It also opens possibilities to draw on natural language processing techniques for querying and data mining."}
{"_id": "31cd1fb673ebcf2cfb18fca840b831b6e49e650b", "title": "The Role of Deliberate Practice in the Acquisition of Expert Performance", "text": "The theoretical framework presented in this article explains expert performance as the end result of individuals' prolonged efforts to improve performance while negotiating motivational and external constraints. In most domains of expertise, individuals begin in their childhood a regimen of effortful activities (deliberate practice) designed to optimize improvement. Individual differences, even among elite performers, are closely related to assessed amounts of deliberate practice. Many characteristics once believed to reflect innate talent are actually the result of intense practice extended for a minimum of 10 years. Analysis of expert performance provides unique evidence on the potential and limits of extreme environmental adaptation and learning."}
{"_id": "b1c2df2ee12243563dbe157aa3ce4c57f953f3dd", "title": "Design of Compact Dual Band High Directive Electromagnetic Bandgap (EBG) Resonator Antenna Using Artificial Magnetic Conductor", "text": "A compact high directive EBG resonator antenna operating in two frequency bands is described. Two major contributions to this compact design are using single layer superstrate and using artificial surface as ground plane. To obtain only the lower operating frequency band using superstrate layer is enough, but to extract the upper operating frequency band both superstrate layer and artificial surface as ground plane are necessary. Therefore, design of a superstrate to work in two frequency bands is very important. Initially, using appropriate frequency selective surface (FSS) structure with square loop elements, we design an optimum superstrate layer for each frequency band separately to achieve maximum directivity. Also, to design an artificial surface to work in the upper frequency band we use a suitable FSS structure over dielectric layer backed by PEC. Next, by using the idea of FSS structure with double square loop elements we propose FSS structure with modified double square loop elements, so that it operates in both of the desired operating frequency bands simultaneously. Finally, the simulation results for two operating frequency bands are shown to have good agreement with measurements."}
{"_id": "c1ab5f9f239f8b252d397c9104133f20ac46e8ae", "title": "Functional Brain Connectivity Using fMRI in Aging and Alzheimer\u2019s Disease", "text": "Normal aging and Alzheimer\u2019s disease (AD) cause profound changes in the brain\u2019s structure and function. AD in particular is accompanied by widespread cortical neuronal loss, and loss of connections between brain systems. This degeneration of neural pathways disrupts the functional coherence of brain activation. Recent innovations in brain imaging have detected characteristic disruptions in functional networks. Here we review studies examining changes in functional connectivity, measured through fMRI (functional magnetic resonance imaging), starting with healthy aging and then Alzheimer\u2019s disease. We cover studies that employ the three primary methods to analyze functional connectivity\u2014seed-based, ICA (independent components analysis), and graph theory. At the end we include a brief discussion of other methodologies, such as EEG (electroencephalography), MEG (magnetoencephalography), and PET (positron emission tomography). We also describe multi-modal studies that combine rsfMRI (resting state fMRI) with PET imaging, as well as studies examining the effects of medications. Overall, connectivity and network integrity appear to decrease in healthy aging, but this decrease is accelerated in AD, with specific systems hit hardest, such as the default mode network (DMN). Functional connectivity is a relatively new topic of research, but it holds great promise in revealing how brain network dynamics change across the lifespan and in disease."}
{"_id": "720ec4239843f585535e802fb39f20d477ded4aa", "title": "An End-to-End Signal Strength Model for Underwater Optical Communications", "text": "In this paper, we present a generic model of signal strength in underwater optical communications. The model includes light sources, detectors, amplifier and detector circuitry, optics, as well as a simple extinction model of the water channel. The end-to-end model provides insights into optimization approaches for underwater optical modems and enables relative pose estimation between underwater optical transmitters and receivers. We instantiate our model to the AquaOptical model by determining its parameters and verifying the model prediction in a suite of pool experiments."}
{"_id": "10ceb668f84860bb09fca364125cae4b1ee2e760", "title": "A dynamic bayesian network click model for web search ranking", "text": "As with any application of machine learning, web search ranking requires labeled data. The labels usually come in the form of relevance assessments made by editors. Click logs can also provide an important source of implicit feedback and can be used as a cheap proxy for editorial labels. The main difficulty however comes from the so called position bias - urls appearing in lower positions are less likely to be clicked even if they are relevant. In this paper, we propose a Dynamic Bayesian Network which aims at providing us with unbiased estimation of the relevance from the click logs. Experiments show that the proposed click model outperforms other existing click models in predicting both click-through rate and relevance."}
{"_id": "87a851593965e371184cf3c3df5630f8d66bf49b", "title": "Implementation of SMS based Heartbeat monitoring system using PSoC Microcontroller", "text": "The rapidly growing VLSI technology could reduce the cost of the embedded systems considerably which has motivated a new era of low cost personal health care monitoring systems which meeting common man requirements. Such systems are dedicated to one person and provide the diagnostic information through remotely. This paper proposes a PSoC microcontroller and GSM modules we are eliminating the cables attached to patient. The patient has a freedom of doing daily activities and still be under continuous monitoring (very suitable for old age people). PSoC has inbuilt ADC and Programmable Gain amplifier which enabled single chip implementation. The hardware complexity is also simple and reduces cost. The basic principle of this system is to read the bio medical signals from the biomedical sensor modules and perform the tasks of data conversion, sending SMS using GSM, as well as providing the ability of simple preprocessing such as waveform averaging or rectification. GSM based modem interface through AT commands to PSoC. The Microcontroller will be able to receive and send text messages Heart beat is sensed by using a high intensity type LED and LDR. The finger is placed between the LED and LDR. The skin may be illuminated with visible (red) using transmitted or reflected light for detection. The concept of this paper is builds upon the integration of wireless communications into medical applications to revolutionize personal healthcare. Under this concept, patients are no longer bound to a specific healthcare location where they are monitored by medical instruments\u2014wireless communications would not only provide them with safe and accurate monitoring but also the freedom of movement. Remote patient monitoring will not only redefine hospital care but also work, home, and recreational activities. Imagine the home of the future: elderly individuals no longer have to travel to the hospital, a mother stuck at work will be able to receive email updates of her sick child's temperature, a doctor on the go will have the ability to turn on his cell phone to check the status of his patients. The objective of this project is to build a wireless heart beat monitoring system using GSM Technology, which could potentially be an integral part of a suite of personal healthcare appliances for a large-scale remote patient monitoring system. In this paper a low cost Digital Heart rate meter using PSoC Microcontroller is designed, and also this unit is connecting to GSM Modem for send \u2026"}
{"_id": "4547045fecfb303140adafde3384ab5b1d70dd7c", "title": "Unsupervised domain adaptation for robust speech recognition via variational autoencoder-based data augmentation", "text": "Domain mismatch between training and testing can lead to significant degradation in performance in many machine learning scenarios. Unfortunately, this is not a rare situation for automatic speech recognition deployments in real-world applications. Research on robust speech recognition can be regarded as trying to overcome this domain mismatch issue. In this paper, we address the unsupervised domain adaptation problem for robust speech recognition, where both source and target domain speech are available, but word transcripts are only available for the source domain speech. We present novel augmentation-based methods that transform speech in a way that does not change the transcripts. Specifically, we first train a variational autoencoder on both source and target domain data (without supervision) to learn a latent representation of speech. We then transform nuisance attributes of speech that are irrelevant to recognition by modifying the latent representations, in order to augment labeled training data with additional data whose distribution is more similar to the target domain. The proposed method is evaluated on the CHiME-4 dataset and reduces the absolute word error rate (WER) by as much as 35% compared to the non-adapted baseline."}
{"_id": "63a010c69f00e65c946a68b546bbd42cbed03564", "title": "MagNet: A Two-Pronged Defense against Adversarial Examples", "text": "Deep learning has shown impressive performance on hard perceptual problems. However, researchers found deep learning systems to be vulnerable to small, specially crafted perturbations that are imperceptible to humans. Such perturbations cause deep learning systems to mis-classify adversarial examples, with potentially disastrous consequences where safety or security is crucial. Prior defenses against adversarial examples either targeted specific attacks or were shown to be ineffective.\n We propose MagNet, a framework for defending neural network classifiers against adversarial examples. MagNet neither modifies the protected classifier nor requires knowledge of the process for generating adversarial examples. MagNet includes one or more separate detector networks and a reformer network. The detector networks learn to differentiate between normal and adversarial examples by approximating the manifold of normal examples. Since they assume no specific process for generating adversarial examples, they generalize well. The reformer network moves adversarial examples towards the manifold of normal examples, which is effective for correctly classifying adversarial examples with small perturbation. We discuss the intrinsic difficulties in defending against whitebox attack and propose a mechanism to defend against graybox attack. Inspired by the use of randomness in cryptography, we use diversity to strengthen MagNet. We show empirically that MagNet is effective against the most advanced state-of-the-art attacks in blackbox and graybox scenarios without sacrificing false positive rate on normal examples."}
{"_id": "68e4e5b7c1b862ba5d0cfe046f2e84c6a8c3c394", "title": "Delving into Transferable Adversarial Examples and Black-box Attacks", "text": "An intriguing property of deep neural networks is the existence of adversarial examples, which can transfer among different architectures. These transferable adversarial examples may severely hinder deep neural network-based applications. Previous works mostly study the transferability using small scale datasets. In this work, we are the first to conduct an extensive study of the transferability over large models and a large scale dataset, and we are also the first to study the transferability of targeted adversarial examples with their target labels. We study both non-targeted and targeted adversarial examples, and show that while transferable non-targeted adversarial examples are easy to find, targeted adversarial examples generated using existing approaches almost never transfer with their target labels. Therefore, we propose novel ensemble-based approaches to generating transferable adversarial examples. Using such approaches, we observe a large proportion of targeted adversarial examples that are able to transfer with their target labels for the first time. We also present some geometric studies to help understanding the transferable adversarial examples. Finally, we show that the adversarial examples generated using ensemble-based approaches can successfully attack Clarifai.com, which is a black-box image classification system."}
{"_id": "6f61d15a31d6d051aeee3bf6d1482d332e68ebfe", "title": "Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong", "text": "Ongoing research has proposed several methods to defend neural networks against adversarial examples, many of which researchers have shown to be ineffective. We ask whether a strong defense can be created by combining multiple (possibly weak) defenses. To answer this question, we study three defenses that follow this approach. Two of these are recently proposed defenses that intentionally combine components designed to work well together. A third defense combines three independent defenses. For all the components of these defenses and the combined defenses themselves, we show that an adaptive adversary can create adversarial examples successfully with low distortion. Thus, our work implies that ensemble of weak defenses is not sufficient to provide strong defense against adversarial examples."}
{"_id": "6e61af5ed38439b722d7f8e92a1ddb32430a2ee4", "title": "Sentiment Predictability for Stocks", "text": "In this work, we present our findings and experiments for stock-market prediction using various textual sentiment analysis tools, such as mood analysis and event extraction, as well as prediction models, such as LSTMs and specific convolutional architectures."}
{"_id": "c5398cca4c6d5342b3e1568b08deddea57a9ab4f", "title": "Control of microglial neurotoxicity by the fractalkine receptor", "text": "Microglia, the resident inflammatory cells of the CNS, are the only CNS cells that express the fractalkine receptor (CX3CR1). Using three different in vivo models, we show that CX3CR1 deficiency dysregulates microglial responses, resulting in neurotoxicity. Following peripheral lipopolysaccharide injections, Cx3cr1\u2212/\u2212 mice showed cell-autonomous microglial neurotoxicity. In a toxic model of Parkinson disease and a transgenic model of amyotrophic lateral sclerosis, Cx3cr1\u2212/\u2212 mice showed more extensive neuronal cell loss than Cx3cr1+ littermate controls. Augmenting CX3CR1 signaling may protect against microglial neurotoxicity, whereas CNS penetration by pharmaceutical CX3CR1 antagonists could increase neuronal vulnerability."}
{"_id": "ebf3b172f640b6e4f565ae72247c91825d4fa527", "title": "A valid two-item food security questionnaire for screening HIV-1 infected patients in a clinical setting.", "text": "OBJECTIVE\nTo validate a two-item food security questionnaire (FSQ) for use in a clinical setting to screen HIV-1 infected patients for food insecurity.\n\n\nDESIGN\nThe present study was a questionnaire-based survey of forty-nine subjects attending an HIV clinic. Subjects completed a two-item questionnaire and a six-item validated FSQ contemporaneously.\n\n\nRESULTS\nA strong correlation was found between the two-item and six-item FSQ (rho = 0.895; 95 % CI 0.821, 0.940; P < 0.0001). Cronbach's alpha coefficient was found to be 0.94 and 0.90 for the two-item and six-item FSQ, respectively. The two-item FSQ yielded a sensitivity of 100 % (95 % CI 75, 100) and a specificity of 78 % (95 % CI 61, 90). The negative predictive value was found to be 100 % (95 % CI 88, 100).\n\n\nCONCLUSIONS\nThe results of the present study suggest that the two-item FSQ is a valid, reliable and sensitive screening tool of food insecurity in people living with HIV in a clinical setting."}
{"_id": "bfa2b3203618c7639fa2b568842515deada35695", "title": "Clinical Information Extraction via Convolutional Neural Network", "text": "We report an implementation of a clinical information extraction tool that leverages deep neural network to annotate event spans and their attributes from raw clinical notes and pathology reports. Our approach uses context words and their part-of-speech tags and shape information as features. Then we hire temporal (1D) con-volutional neural network to learn hidden feature representations. Finally, we use Multilayer Perceptron (MLP) to predict event spans. The empirical evaluation demonstrates that our approach significantly outperforms baselines."}
{"_id": "8b0a1c36c6c775265c0cb023aca6a71d3e830fd7", "title": "Playable Experiences at AIIDE 2015", "text": "This paper describes entries to the third Playable Experiences track to be held at the AIIDE conference. We discuss the five entries accepted to the track for 2015, as well as the ongoing development of the track as part"}
{"_id": "da013b84a93cc89d78f2d9a346fc275e3c159565", "title": "Affordable Self Driving Cars and Robots with Semantic Segmentation", "text": "Image semantic segmentation is of immense interest for self-driving car research. Accurate and efficient segmentation mechanisms are required. We have evaluated Intersection over Union (IoU) metric over Cityscapes and KITTI datasets. We designed baseline softmax regression and maximum likelihood estimation, which performs quite poorly for the image segmentation task. We ran fully convolutional networks (FCN), which performed almost perfectly over the KITTI dataset. We found overfitting problem for the more complex Cityscape dataset. We conducted several experiments with regularization, dropout, data augmentation, image scaling and newer architectures. We are able to successfully mitigate overfitting by data augmentation. We also generated a confusion matrix and conducted error ablative analysis to get a deeper understanding of FCNs."}
{"_id": "e406f1648c3ce88c2d6e6152e67105e92af76f31", "title": "Genetic process mining: an experimental evaluation", "text": "One of the aims of process mining is to retrieve a process model from an event log. The discovered models can be used as objective starting points during the deployment of process-aware information systems (Dumas et\u00a0al., eds., Process-Aware Information Systems: Bridging People and Software Through Process Technology. Wiley, New York, 2005) and/or as a feedback mechanism to check prescribed models against enacted ones. However, current techniques have problems when mining processes that contain non-trivial constructs and/or when dealing with the presence of noise in the logs. Most of the problems happen because many current techniques are based on local information in the event log. To overcome these problems, we try to use genetic algorithms to mine process models. The main motivation is to benefit from the global search performed by this kind of algorithms. The non-trivial constructs are tackled by choosing an internal representation that supports them. The problem of noise is naturally tackled by the genetic algorithm because, per definition, these algorithms are robust to noise. The main challenge in a genetic approach is the definition of a good fitness measure because it guides the global search performed by the genetic algorithm. This paper explains how the genetic algorithm works. Experiments with synthetic and real-life logs show that the fitness measure indeed leads to the mining of process models that are complete (can reproduce all the behavior in the log) and precise (do not allow for extra behavior that cannot be derived from the event log). The genetic algorithm is implemented as a plug-in in the ProM framework."}
{"_id": "26afb306b0f3b6347382dc33efdd0f8dd4d6ea73", "title": "Joint Spatial Division and Multiplexing: Opportunistic Beamforming, User Grouping and Simplified Downlink Scheduling", "text": "Joint Spatial Division and Multiplexing (JSDM) is a downlink multiuser MIMO scheme recently proposed by the authors in order to enable \u201cmassive MIMO\u201d gains and simplified system operations for Frequency Division Duplexing (FDD) systems. The key idea lies in partitioning the users into groups with approximately similar channel covariance eigenvectors and serving these groups by using two-stage downlink precoding scheme obtained as the concatenation of a pre-beamforming matrix, that depends only on the channel second-order statistics, with a multiuser MIMO linear precoding matrix, which is a function of the effective channels including pre-beamforming. The role of pre-beamforming is to reduce the dimensionality of the effective channel by exploiting the near-orthogonality of the eigenspaces of the channel covariances of the different user groups. This paper is an extension of our initial work on JSDM, and addresses some important practical issues. First, we focus on the regime of finite number of antennas and large number of users and show that JSDM with simple opportunistic user selection is able to achieve the same scaling law of the system capacity with full channel state information. Next, we consider the large-system regime (both antennas and users growing large) and propose a simple scheme for user grouping in a realistic setting where users have different angles of arrival and angular spreads. Finally, we propose a low-overhead probabilistic scheduling algorithm that selects the users at random with probabilities derived from large-system random matrix analysis. Since only the pre-selected users are required to feedback their channel state information, the proposed scheme realizes important savings in the CSIT feedback."}
{"_id": "5c6ef8c44dc84d6e47600adf4459a777e9a2b46a", "title": "Classification of Brain Cancer using Artificial Neural Network", "text": "A Brain Cancer Detection and Classification System has been designed and developed. The system uses computer based procedures to detect tumor blocks or lesions and classify the type of tumor using Artificial Neural Network in MRI images of different patients with Astrocytoma type of brain tumors. The image processing techniques such as histogram equalization, image segmentation, image enhancement, morphological operations and feature extraction have been developed for detection of the brain tumor in the MRI images of the cancer affected patients. The extraction of texture features in the detected tumor has been achieved by using Gray Level Co-occurrence Matrix (GLCM). These features are compared with the stored features in the Knowledge Base. Finally a Neuro Fuzzy Classifier has been developed to recognize different types of brain cancers. The whole system has been tested in two phases firstly Learning/Training Phase and secondly Recognition/Testing Phase. The known MRI images of affected brain cancer patients obtained from Radiology Department of Tata Memorial Hospital (TMH) were used to train the system. The unknown samples of brain cancer affected MRI images are also obtained from TMH and were used to test the system. The system was found efficient in classification of these samples and responds any abnormality."}
{"_id": "9956f1e667b61ebdec55abd32c41686a0dc34335", "title": "Triple Band Annular Ring Loaded Stacked Circular Patch Microstrip Antenna", "text": "In this paper a novel structure of annular ring loaded stacked circular patch microstrip antenna is theoretically analysed to observe various parameters such as return loss, input impedance, gain, directivity and radiation pattern. It is found that antenna possess three band of operation which signify the compactness and multiband operation of antenna. The antenna is resonating at three operating frequencies 1.720, 2.950, 3.060 GHz. The proposed theory is verified by simulation using Ansoft\u2019s HFSS and theoretical results are in good agreement with simulated results. The antenna is useful for multi-services operations such as WLAN, GSM, UMTS, and WiMAX services."}
{"_id": "3b9fa975934eea1a6e40bba095ebd3c64a14c96d", "title": "Transfer learning for children's speech recognition", "text": "Children's speech processing is more challenging than that of adults due to lacking of large scale children's speech corpora. With the developing of the physical speech organ, high inter speaker and intra speaker variabilities are observed in children's speech. On the other hand, data collection on children is difficult as children usually have short attention span and their language proficiency is limited. In this paper, we propose to improve children's automatic speech recognition performance with transfer learning technique. We compare two transfer learning approaches in enhancing children's speech recognition performance with adults' data. The first method is to perform acoustic model adaptation on the pre-trained adult model. The second is to train acoustic model with deep neural network based multi-task learning approach: the adults' and children's acoustic characteristics are learnt jointly in the shared hidden layers, while the output layers are optimized with different speaker groups. Our experiment results show that both transfer learning approaches are effective in transferring rich phonetic and acoustic information from adults' model to children model. The multi-task learning approach outperforms the acoustic adaptation approach. We further show that the speakers' acoustic characteristics in languages can also benefit the target language under the multi-task learning framework."}
{"_id": "7a28c571c423c023226b87fe3a106622a0b3ff42", "title": "Process Mining and Security: Visualization in Database Intrusion Detection", "text": "Nowadays, more and more organizations keep their valuable and sensitive data in Database Management Systems (DBMSs). The traditional database security mechanisms such as access control mechanisms, authentication, data encryption technologies do not offer a strong enough protection against the exploitation of vulnerabilities (e.g. intrusions) in DBMSs from insiders. Intrusion detection systems recently proposed in the literature focus on statistical approaches, which are not intuitive. Our research is the first ever effort to use process mining modeling low-level event logs for database intrusion detection. We have proposed a novel approach for visualizing database intrusion detection using process mining techniques. Our experiments showed that intrusion detection visualization will be able to help security officers who might not know deeply the complex system, identify the true positive detection and eliminate the false positive results."}
{"_id": "f041ee98d75fbea648609b5ac6d303be9b6ccff7", "title": "Vision-based obstacle recognition system for automated lawn mower robot development", "text": "Digital image processing techniques (DIP) have been widely used in various types of application recently. Classification and recognition of a specific object using vision system require some challenging tasks in the field of image processing and artificial intelligence. The ability and efficiency of vision system to capture and process the images is very important for any intelligent system such as autonomous robot. This paper gives attention to the development of a vision system that could contribute to the development of an automated vision based lawn mower robot. The works involve on the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. The focus was given on the study on different types and sizes of obstacles, the development of vision based obstacle recognition system and the evaluation of the system\u2019s performance. Image processing techniques such as image filtering, segmentation, enhancement and edge detection have been applied in the system. The results have shown that the developed system is able to detect and recognize various types of obstacles on a football field with recognition rate of more 80%."}
{"_id": "a5253ea59d0e41b1841e0a65354404c53047edb9", "title": "Radio frequency identification: technologies, applications, and research issues", "text": "A radio frequency identification (RFID) system is a special kind of sensor network to identify an object or a person using radio frequency transmission. A typical RFID system includes transponders (tags) and interrogators (readers): tags are attached to objects/persons, and readers communicate with the tags in their transmission ranges via radio signals. RFID systems have been gaining more and more popularity in areas such as supply chain management, automated identification systems, and any place requiring identifications of products or people. RFID technology is better than barcode in many ways, and may totally replace barcode in the future if certain technologies can be achieved such as low cost and protection of personal privacy. This paper provides a technology survey of RFID systems and various RFID applications. We also discuss five critical research issues: cost control, energy efficiency, privacy issue, multiple readers\u2019 interference, and security issue. Copyright # 2006 John Wiley & Sons, Ltd."}
{"_id": "f2658276de21a45d4186e739c8170fd44ffd6431", "title": "Binarized Genetic Algorithm with Neural Network for Stock Market Prediction", "text": "Stock market prediction is a challenging task to predict the upcoming stock values. It is very difficult to predict because of unstable nature of stock. This stock market prices are continuously changing day by day. Many Business Analysts spends more money on this stock market. Some of the persons may lose the money on this stock market because they don't have clear idea about the stock market. Estimate and analyse the stock market accurately then only get more profit. Artificial neural network is a very popular technique for forecast the stock market price, but using this technique to forecast the stock market price up to some extent. So there is a need to improve the accuracy of the system. In this paper, propose a novel system called Binarized Genetic Algorithm with Artificial Neural Network (BGANN) technique to predict and forecast future behavior of individual stocks or the upcoming stock. Binarized Genetic Algorithm is used for optimizing Neural Network weights while training. Comparative analysis shows that BGANN method performance is better compared to the Support Vector Machine (SVM), Neural Network (NN), and Auto Regressive Integrated Moving Average (ARIMA) models."}
{"_id": "49830f605c316fec8e9e365e7e64adeea47e89d8", "title": "Modeling visual problem solving as analogical reasoning.", "text": "We present a computational model of visual problem solving, designed to solve problems from the Raven's Progressive Matrices intelligence test. The model builds on the claim that analogical reasoning lies at the heart of visual problem solving, and intelligence more broadly. Images are compared via structure mapping, aligning the common relational structure in 2 images to identify commonalities and differences. These commonalities or differences can themselves be reified and used as the input for future comparisons. When images fail to align, the model dynamically rerepresents them to facilitate the comparison. In our analysis, we find that the model matches adult human performance on the Standard Progressive Matrices test, and that problems which are difficult for the model are also difficult for people. Furthermore, we show that model operations involving abstraction and rerepresentation are particularly difficult for people, suggesting that these operations may be critical for performing visual problem solving, and reasoning more generally, at the highest level. (PsycINFO Database Record"}
{"_id": "4405774b3e9a804af16214fd7574864f511b8460", "title": "A 71-Gb/s NRZ Modulated 850-nm VCSEL-Based Optical Link", "text": "We report error free (BER <; 10-12) operation of a directly non-return-to-zero modulated 850-nm vertical cavity surface-emitting laser (VCSEL) link operating to 71 Gb/s. This is the highest error free modulation rate for a directly modulated laser of any type. The optical link consists of a 130-nm BiCMOS driver IC with two-tap feed-forward equalization, a wide bandwidth 850-nm VCSEL, a surface illuminated GaAs PIN photodiode, and a 130-nm BiCMOS receiver IC."}
{"_id": "79aec2093b2a1b0197e7d145b5cf86abc70fee3e", "title": "Minimizing Faulty Executions of Distributed Systems", "text": "When troubleshooting buggy executions of distributed systems, developers typically start by manually separating out events that are responsible for triggering the bug (signal) from those that are extraneous (noise). In this paper we present DEMi, a tool for automatically performing this minimization. We apply DEMi to buggy executions of two very different distributed systems, Raft and Spark, and find that it achieves up to 97% reduction of execution length within a small time budget."}
{"_id": "60e0cc24c7801193bab359211b74f20737486620", "title": "Stationary Frame Induction Motor Feed Forward Current Controller With Back EMF Compensation", "text": "In this paper, a novel induction motor (IM) stator current controller is proposed based on the stationary frame feed forward controlling structure. The proposed regulator represents an improved variant of the existing stationary and rotational frame IM stator current regulators based on the back EMF compensation. The paper novelty consists of the stationary frame controlling structure based on two feed forward actions that enable asymptotic reference tracking. Namely, the novel stationary frame regulator includes a proportional integral (PI) regulation term combined with a stator resistance voltage drop and back EMF feed forward compensations, with the back EMF compensation based on the stator flux value estimated by means of a programmable low-pass filter. The feedback term is based on a PI controller, with the corresponding parameter tuning procedure outlined in the paper. The simulation runs are included in the paper in order to examine the regulator performance in relation to the factors critical to the IM stator current regulation: variable IM model parameter values, transport delay introduced in the control loop, variable stator frequency, and rotor speed values. Also, the experimental tests are presented and carried out on an experimental setup for different operating conditions typical for an IM drive."}
{"_id": "8ea66cc19826d09873d858a39f3cfbf601b093e9", "title": "Coping with stress during childhood and adolescence: problems, progress, and potential in theory and research.", "text": "Progress and issues in the study of coping with stress during childhood and adolescence are reviewed. Definitions of coping are considered, and the relationship between coping and other aspects of responses to stress (e.g., temperament and stress reactivity) is described. Questionnaire, interview, and observation measures of child and adolescent coping are evaluated with regard to reliability and validity. Studies of the association of coping with symptoms of psychopathology and social and academic competence are reviewed. Initial progress has been made in the conceptualization and measurement of coping, and substantial evidence has accumulated on the association between coping and adjustment. Problems still remain in the conceptualization and measurement of coping in young people, however, and aspects of the development and correlates of coping remain to be identified. An agenda for future research on child-adolescent coping is outlined."}
{"_id": "fb9b8bbc6bd2f50252fc2556ff0e096d5b6fc1f1", "title": "An Efficient & Secure Content Contribution and Retrieval content in Online Social Networks using Level-level Security Optimization & Content Visualization Algorithm", "text": "Received Nov 14, 2017 Revised Jan 26, 2018 Accepted Feb 11, 2018 Online Social Networks (OSNs) is currently popular interactive media to establish the communication, share and disseminate a considerable amount of human life data. Daily and continuous communications imply the exchange of several types of content, including free text, image, audio, and video data. Security is one of the friction points that emerge when communications get mediated in Online Social Networks (OSNs). However, there are no contentbased preferences supported, and therefore it is not possible to prevent undesired messages. Providing the service is not only a matter of using previously defined web content mining and security techniques. To overcome the issues, Level-level Security Optimization & Content Visualization Algorithm is proposed to avoid the privacy issues during content sharing and data visualization. It adopts level by level privacy based on user requirement in the social network. It evaluates the privacy compatibility in the online social network environment to avoid security complexities. The mechanism divided into three parts namely like online social network platform creation, social network privacy, social network within organizational privacy and network controlling and authentication. Based on the experimental evaluation, a proposed method improves the privacy retrieval accuracy (PRA) 9.13% and reduces content retrieval time (CRT) 7 milliseconds and information loss (IL) 5.33%."}
{"_id": "dfea6f486865d92778f19fc02916087f8f7a80d6", "title": "Probability of Random Correspondence for Fingerprints", "text": "Individuality of fingerprints can be quantified by computing the probabilistic metrics for measuring the degree of fingerprint individuality. In this paper, we present a novel individuality evaluation approach to estimate the probability of random correspondence (PRC). Three generative models are developed respectively to represent the distribution of fingerprint features: ridge flow, minutiae and minutiae together with ridge points. A mathematical model that computes the PRCs are derived based on the generative models. Three metrics are discussed in this paper: (i) PRC of two samples, (ii) PRC among a random set of n samples (nPRC) and (iii) PRC between a specific sample among n others (specific nPRC). Experimental results show that the theoretical estimates of fingerprint individuality using our model consistently follow the empirical values based on the NIST4 database."}
{"_id": "0558c94a094158ecd64f0d5014d3d9668054fb97", "title": "Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing", "text": "We present Resilient Distributed Datasets (RDDs), a distributed memory abstraction that allows programmers to perform in-memory computations on large clusters while retaining the fault tolerance of data flow models like MapReduce. RDDs are motivated by two types of applications that current data flow systems handle inefficiently: iterative algorithms, which are common in graph applications and machine learning, and interactive data mining tools. In both cases, keeping data in memory can improve performance by an order of magnitude. To achieve fault tolerance efficiently, RDDs provide a highly restricted form of shared memory: they are read-only datasets that can only be constructed through bulk operations on other RDDs. However, we show that RDDs are expressive enough to capture a wide class of computations, including MapReduce and specialized programming models for iterative jobs such as Pregel. Our implementation of RDDs can outperform Hadoop by 20\u00d7 for iterative jobs and can be used interactively to search a 1 TB dataset with latencies of 5\u20137 seconds."}
{"_id": "0783763baafa5cac38d880967f7c339c8fe6d61f", "title": "Applying Model Management to Classical Meta Data Problems", "text": "Model management is a new approach to meta data management that offers a higher level programming interface than current techniques. The main abstractions are models (e.g., schemas, interface definitions) and mappings between models. It treats these abstractions as bulk objects and offers such operators as Match, Merge, Diff, Compose, Apply, and ModelGen. This paper extends earlier treatments of these operators and applies them to three classical meta data management problems: schema integration, schema evolution, and round-trip engineering."}
{"_id": "341a08d1854b5ecf871bbb4c7833a435927abbda", "title": "Towards a unified architecture for in-RDBMS analytics", "text": "The increasing use of statistical data analysis in enterprise applications has created an arms race among database vendors to offer ever more sophisticated in-database analytics. One challenge in this race is that each new statistical technique must be implemented from scratch in the RDBMS, which leads to a lengthy and complex development process. We argue that the root cause for this overhead is the lack of a unified architecture for in-database analytics. Our main contribution in this work is to take a step towards such a unified architecture. A key benefit of our unified architecture is that performance optimizations for analytics techniques can be studied generically instead of an ad hoc, per-technique fashion. In particular, our technical contributions are theoretical and empirical studies of two key factors that we found impact performance: the order data is stored, and parallelization of computations on a single-node multicore RDBMS. We demonstrate the feasibility of our architecture by integrating several popular analytics techniques into two commercial and one open-source RDBMS. Our architecture requires changes to only a few dozen lines of code to integrate a new statistical technique. We then compare our approach with the native analytics tools offered by the commercial RDBMSes on various analytics tasks, and validate that our approach achieves competitive or higher performance, while still achieving the same quality."}
{"_id": "41071fcd9204084d2c1633352cfc5f0ba7f0fee0", "title": "Sketch of Big Data Real-Time Analytics Model", "text": "Big Data has drawn huge attention from researchers in information sciences, decision makers in governments and enterprises. However, there is a lot of potential and highly useful value hidden in the huge volume of data. Data is the new oil, but unlike oil data can be refined further to create even more value. Therefore, a new scientific paradigm is born as data-intensive scientific discovery, also known as Big Data. The growth volume of real-time data requires new techniques and technologies to discover insight value. In this paper we introduce the Big Data real-time analytics model as a new technique. We discuss and compare several Big Data technologies for real-time processing along with various challenges and issues in adapting Big Data. Realtime Big Data analysis based on cloud computing approach is our future research direction."}
{"_id": "92bd68ed6c89f2c02b0121963516b52125963cdd", "title": "Turning the Page : The Impact of Choice Closure on Satisfaction", "text": "This paper introduces the concept of choice closure, defined as the psychological process by which consumers come to perceive a decision to be resolved and complete, and focus on the decision outcome. Choice closure inhibits consumers who have already made a choice from reverting to the decision process and engaging in potentially unfavorable comparisons between the chosen and the forgone options. In four studies, we show that physical acts of closure\u2014such as turning one\u2019s back on, covering, and turning the page on the rejected alternatives\u2014facilitate choice closure in the context of difficult decisions involving large choice sets, and that choice closure results in greater satisfaction."}
{"_id": "1287220e4c8eab29fc5fe6d5efa9fb704da0f4f5", "title": "Evolutionary Synthesis of Deep Neural Networks via Synaptic Cluster-driven Genetic Encoding", "text": "There has been significant recent interest towards achievin g highly efficient deep neural network architectures. A promising paradigm for ach ieving this is the concept ofevolutionary deep intelligence , which attempts to mimic biological evolution processes to synthesize highly-efficient deep neural n etworks over successive generations. An important aspect of evolutionary deep inte lligence is the genetic encoding scheme used to mimic heredity, which can have a sign ificant impact on the quality of offspring deep neural networks. Motivated by the neurobiological phenomenon of synaptic clustering, we introduce a new genet ic encoding scheme where synaptic probability is driven towards the formation of a highly sparse set of synaptic clusters. Experimental results for the task of ima ge classification demonstrated that the synthesized offspring networks using this synaptic cluster-driven genetic encoding scheme can achieve state-of-the-art perf ormance while having network architectures that are not only significantly more e ffici nt (with a\u223c125fold decrease in synapses for MNIST) compared to the origina l a cestor network, but also tailored for GPU-accelerated machine learning app lications."}
{"_id": "c77be34db96695159244723fe9ffa4a88dc4a36d", "title": "Understanding and Predicting Graded Search Satisfaction", "text": "Understanding and estimating satisfaction with search engines is an important aspect of evaluating retrieval performance. Research to date has modeled and predicted search satisfaction on a binary scale, i.e., the searchers are either satisfied or dissatisfied with their search outcome. However, users' search experience is a complex construct and there are different degrees of satisfaction. As such, binary classification of satisfaction may be limiting. To the best of our knowledge, we are the first to study the problem of understanding and predicting graded (multi-level) search satisfaction. We ex-amine sessions mined from search engine logs, where searcher satisfaction was also assessed on multi-point scale by human annotators. Leveraging these search log data, we observe rich and non-monotonous changes in search behavior in sessions with different degrees of satisfaction. The findings suggest that we should predict finer-grained satisfaction levels. To address this issue, we model search satisfaction using features indicating search outcome, search effort, and changes in both outcome and effort during a session. We show that our approach can predict subtle changes in search satisfaction more accurately than state-of-the-art methods, affording greater insight into search satisfaction. The strong performance of our models has implications for search providers seeking to accu-rately measure satisfaction with their services."}
{"_id": "d7d9d643a378b6fd69fff63d113f4eae1983adc8", "title": "Speculations Concerning the First Ultraintelligent Machine", "text": ""}
{"_id": "8e7c7a32070d3d22f0640497e3fc15c575295f03", "title": "AR typing interface for mobile devices", "text": "We propose a new user interface system for mobile devices. By using augmented reality (AR) technology, the system overlays virtual objects on real images captured by a camera attached to the back of a mobile device, and the user can operate the mobile device by manipulating the virtual objects with his/her hand in the space behind the mobile device. This system allows the user to operate the device in a wide three-dimensional space and to select small objects easily. Also, the AR technology provides the user with a sense of reality in operating the device. We developed a typing application using our system and verified the effectiveness by user studies. The results showed that more than half of the subjects felt that the operation area of the proposed system is larger than that of a smartphone and that both AR and unfixed key-plane are effective for improving typing speed."}
{"_id": "7b1ca9a74ab7fbfc32a69e8313ca2f2d78ac6c35", "title": "Comparison of Three Different CNN Architectures for Age Classification", "text": "As one of the powerful tools of machine learning, Convolutional Neural Network (CNN) architectures are used tosolve complex problems like image recognition, video analysisand natural language processing. In this paper, three differentCNN architectures for age classification using face images arecompared. The Morph dataset containing over 55k images isused in experiments and success of a 6-layer CNN and 2 variantsof ResNet with different depths are compared. The images in thedataset are divided into 6 different age classes. While 80% of theimages are used in training of the networks, the rest of the 20% isused for testing. The performance of the networks are comparedaccording to two different criteria namely, the ability to makethe estimation pointing the exact age classes of test images andthe ability to make the estimation pointing the exact age classesor at most neighboring classes of the images. According to theperformance results obtained, with 6-layer network, it is possibleto estimate the exact or neighboring classes of the images withless than 5% error. It is shown that for a 6 class age classificationproblem 6-layer network is more successful than the deeperResNet counterparts since 6-layer network is less susceptible tooverfitting for this problem."}
{"_id": "5c11bab81f973d4d4ba2d9c4ca9e105e2b89cc0b", "title": "Cardiac arrhythmia detection from ECG combining convolutional and long short-term memory networks", "text": "Objectives: Atrial fibrillation (AF) is a common heart rhythm disorder associated with deadly and debilitating consequences including heart failure, stroke, poor mental health, reduced quality of life and death. Having an automatic system that diagnoses various types of cardiac arrhythmias would assist cardiologists to initiate appropriate preventive measures and to improve the analysis of cardiac disease. To this end, this paper introduces a new approach to detect and classify automatically cardiac arrhythmias in electrocardiograms (ECG) recordings. Methods: The proposed approach used a combination of Convolution Neural Networks (CNNs) and a sequence of Long Short-Term Memory (LSTM) units, with pooling, dropout and normalization techniques to improve their accuracy. The network predicted a classification at every 18th input sample and we selected the final prediction for classification. Results were cross-validated on the Physionet Challenge 2017 training dataset, which contains 8,528 single lead ECG recordings lasting from 9s to just over 60s. Results: Using the proposed structure and no explicit feature selection, 10-fold stratified cross-validation gave an overall F-measure of 0.83.10\u00b10.015 on the held-out test data (mean \u00b1 standard deviation over all folds) and 0.80 on the hidden dataset of the Challenge entry server."}
{"_id": "cf76b60fbbe3cf7bb53ea98e1e993b4b2a90aa18", "title": "NEVESIM: event-driven neural simulation framework with a Python interface", "text": "NEVESIM is a software package for event-driven simulation of networks of spiking neurons with a fast simulation core in C++, and a scripting user interface in the Python programming language. It supports simulation of heterogeneous networks with different types of neurons and synapses, and can be easily extended by the user with new neuron and synapse types. To enable heterogeneous networks and extensibility, NEVESIM is designed to decouple the simulation logic of communicating events (spikes) between the neurons at a network level from the implementation of the internal dynamics of individual neurons. In this paper we will present the simulation framework of NEVESIM, its concepts and features, as well as some aspects of the object-oriented design approaches and simulation strategies that were utilized to efficiently implement the concepts and functionalities of the framework. We will also give an overview of the Python user interface, its basic commands and constructs, and also discuss the benefits of integrating NEVESIM with Python. One of the valuable capabilities of the simulator is to simulate exactly and efficiently networks of stochastic spiking neurons from the recently developed theoretical framework of neural sampling. This functionality was implemented as an extension on top of the basic NEVESIM framework. Altogether, the intended purpose of the NEVESIM framework is to provide a basis for further extensions that support simulation of various neural network models incorporating different neuron and synapse types that can potentially also use different simulation strategies."}
{"_id": "d3ee666cd83ddfb9e16ee00fb7ed85d272b668f5", "title": "Autonomous Positioning of Eye Surgical Robot Using the Tool Shadow and Kalman Filtering", "text": "Vitreoretinal surgery is one of the most difficult surgical operations, even for experienced surgeons. Thus, a master-slave eye surgical robot has been developed to assist the surgeon in safely performing vitreoretinal surgeries; however, in the master-slave control, the robotic positioning accuracy depends on the surgeon\u2019s coordination skills. This paper proposes a new method of autonomous robotic positioning using the shadow of the surgical instrument. First, the microscope image is segmented into three regions\u2014namely, a micropipette, its shadow, and the eye ground\u2014using a Gaussian mixture model (GMM). The tips of the micropipette and its shadow are then extracted from the contour lines of the segmented regions. The micropipette is then autonomously moved down to the simulated eye ground until the distance between the tips of micropipette and its shadow in the microscopic image reaches a predefined threshold. To handle possible occlusions, the tip of the shadow is estimated using a Kalman filter. Experiments to evaluate the robotic positioning accuracy in the vertical direction were performed. The results show that the autonomous positioning using the Kalman filter enhanced the accuracy of robotic positioning."}
{"_id": "3efa7f079e5115a85015f2407788d47ffa282f58", "title": "Open circuit fault detection in PWM voltage source inverter for PMSM drive system", "text": "In this paper, a simple and low-cost open-circuit fault detection and identification method for a pulse-width modulated (PWM) voltage-source inverter (VSI) employing a permanent magnet synchronous motor is proposed. An open-circuit fault of a power switch in the PWM VSI changes the corresponding terminal voltage and introduces the voltage distortions to each phase voltage. The proposed open-circuit fault diagnosis method employs the model reference adaptive system techniques and requires no additional sensors or electrical devices to detect the fault condition and identify the faulty switch. The proposed method has the features of fast diagnosis time, simple structure, and being easily inserted to the existing control algorithms as a subroutine without major modifications. The simulations and experiments are carried out and the results show the effectiveness of the proposed method."}
{"_id": "5e38e39dee575b07fb751bff5b05064b6942e2d3", "title": "A Time Series Classification Method for Behaviour-Based Dropout Prediction", "text": "Students' dropout rate is a key metric in online and open distance learning courses. We propose a time-series classification method to construct data based on students' behaviour and activities on a number of online distance learning modules. Further, we propose a dropout prediction model based on the time series forest (TSF) classification algorithm. The proposed predictive model is based on interaction data and is independent of learning objectives and subject domains. The model enables prediction of dropout rates without the requirement for pedagogical experts. Results show that the prediction accuracy on two selected datasets increases as the portion of data used in the model grows. However, a reasonable prediction accuracy of 0.84 is possible with only 5% of the dataset processed. As a result, early prediction can help instructors design interventions to encourage course completion before a student falls too far behind."}
{"_id": "5dc38be6a1b98e3aa0df679f7cf787100504d613", "title": "Weakly-supervised text-to-speech alignment confidence measure", "text": "This work proposes a new confidence measure for evaluating text-to-speech alignment systems outputs, which is a key component for many applications, such as semi-automatic corpus anonymization, lips syncing, film dubbing, corpus preparation for speech synthesis and speech recognition acoustic models training. This confidence measure exploits deep neural networks that are trained on large corpora without direct supervision. It is evaluated on an open-source spontaneous speech corpus and outperforms a confidence score derived from a state-of-the-art text-to-speech aligner. We further show that this confidence measure can be used to fine-tune the output of this aligner and improve the quality of the resulting alignment."}
{"_id": "b3356b85c4173d253ddfaf2bcf3c96b5efc4b134", "title": "Designing an augmented reality board game with children: the battleboard 3D experience", "text": "This demo shows BattleBoard 3D which is an Augmented Reality (AR) based game prototype featuring the use of LEGO for the physical and digital pieces. Design concepts, the physical setting and, user interface for the game is illustrated and described. Based on qualitative studies of children playing the game we illustrate design issues for AR board games."}
{"_id": "a5d426d6f4bd5dc3311f0c994f642cf1f5de0488", "title": "Research and implementation of RSA algorithm for encryption and decryption", "text": "Cryptographic technique is one of the principal means to protect information security. Not only has it to ensure the information confidential, but also provides digital signature, authentication, secret sub-storage, system security and other functions. Therefore, the encryption and decryption solution can ensure the confidentiality of the information, as well as the integrity of information and certainty, to prevent information from tampering, forgery and counterfeiting. Encryption and decryption algorithm's security depends on the algorithm while the internal structure of the rigor of mathematics, it also depends on the key confidentiality. Key in the encryption algorithm has a pivotal position, once the key was leaked, it means that anyone can be in the encryption system to encrypt and decrypt information, it means the encryption algorithm is useless. Therefore, what kind of data you choose to be a key, how to distribute the private key, and how to save both data transmission keys are very important issues in the encryption and decryption algorithm. This paper proposed an implementation of a complete and practical RSA encrypt/decrypt solution based on the study of RSA public key algorithm. In addition, the encrypt procedure and code implementation is provided in details."}
{"_id": "464e8ac9026540e3f9611c762e0b2a483d11e88a", "title": "Personalized machine learning for robot perception of affect and engagement in autism therapy", "text": "Robots have the potential to facilitate future therapies for children on the autism spectrum. However, existing robots are limited in their ability to automatically perceive and respond to human affect, which is necessary for establishing and maintaining engaging interactions. Their inference challenge is made even harder by the fact that many individuals with autism have atypical and unusually diverse styles of expressing their affective-cognitive states. To tackle the heterogeneity in children with autism, we used the latest advances in deep learning to formulate a personalized machine learning (ML) framework for automatic perception of the children\u2019s affective states and engagement during robot-assisted autism therapy. Instead of using the traditional one-size-fits-all ML approach, we personalized our framework to each child using their contextual information (demographics and behavioral assessment scores) and individual characteristics. We evaluated this framework on a multimodal (audio, video, and autonomic physiology) data set of 35 children (ages 3 to 13) with autism, from two cultures (Asia and Europe), and achieved an average agreement (intraclass correlation) of ~60% with human experts in the estimation of affect and engagement, also outperforming nonpersonalized ML solutions. These results demonstrate the feasibility of robot perception of affect and engagement in children with autism and have implications for the design of future autism therapies."}
{"_id": "4619c5b1a37ee8e523f8104aa990ff50ca7ec7d9", "title": "Estimating alertness from the EEG power spectrum", "text": "In tasks requiring sustained attention, human alertness varies on a minute time scale. This can have serious consequences in occupations ranging from air traffic control to monitoring of nuclear power plants. Changes in the electroencephalographic (EEG) power spectrum accompany these fluctuations in the level of alertness, as assessed by measuring simultaneous changes in EEG and performance on an auditory monitoring task. By combining power spectrum estimation, principal component analysis and artificial neural networks, the authors show that continuous, accurate, noninvasive, and near real-time estimation of an operator's global level of alertness is feasible using EEC; measures recorded from as few as two central scalp sites. This demonstration could lead to a practical system for noninvasive monitoring of the cognitive state of human operators in attention-critical settings."}
{"_id": "625fb5f70406ac8dbd954d1105bd8e725d9254d9", "title": "The valuation of risk assets and the selection of risky investments in stock portfolios and capital budgets", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "5cd7273a0e81c241b2e502a80447b128358e77dc", "title": "Resuming the discussion of AMSTAR: What can (should) be made better?", "text": "BACKGROUND\nEvidence syntheses, and in particular systematic reviews (SRs), have become one of the cornerstones of evidence-based health care. The Assessment of Multiple Systematic Reviews (AMSTAR) tool has become the most widely used tool for investigating the methodological quality of SRs and is currently undergoing revision. The objective of this paper is to present insights, challenges and potential solutions from the point of view of a group of assessors, while referring to earlier methodological discussions and debates with respect to AMSTAR.\n\n\nDISCUSSION\nOne major drawback of AMSTAR is that it relies heavily on reporting quality rather than on methodological quality. This can be found in several items. Furthermore, it should be acknowledged that there are now new methods and procedures that did not exist when AMSTAR was developed. For example, the note to item 1 should now refer to the International Prospective Register of Ongoing Systematic Reviews (PROSPERO). Furthermore, item 3 should consider the definition of hand-searching, as the process of reviewing conference proceedings using the search function (e.g. in Microsoft Word or in a PDF file) does not meet the definition set out by the Cochrane Collaboration. Moreover, methods for assessing the quality of the body of evidence have evolved since AMSTAR was developed and should be incorporated into a revised AMSTAR tool. Potential solutions are presented for each AMSTAR item with the aim of allowing a more thorough assessment of SRs. As the AMSTAR tool is currently undergoing further development, our paper hopes to add to preceding discussions and papers regarding this tool and stimulate further discussion."}
{"_id": "c285e3f31cde88a3f80c8483edea712a8f02d334", "title": "Pregnancy-related gigantomastia. Case report", "text": "The term gigantomastia has been used to describe breast enlargement to extreme size, with sloughing, haemorrhage and infection. The condition is rare and a case of pregnancy-related gigantomastia is reported."}
{"_id": "75c023f3704775d9ab8f74bd314049c94d88edb0", "title": "Disconnecting outcomes and evaluations: the role of negotiator focus.", "text": "Three experiments explored the role of negotiator focus in disconnecting negotiated outcomes and evaluations. Negotiators who focused on their target prices, the ideal outcome they could obtain, achieved objectively superior outcomes compared with negotiators who focused on their lower bound (e.g., reservation price). Those negotiators who focused on their targets, however, were less satisfied with their objectively superior outcomes. In the final experiment, when negotiators were reminded of their lower bound after the negotiation, the satisfaction of those negotiators who had focused on their target prices was increased, with outcomes and evaluations becoming connected rather than disconnected. The possible negative effects of setting high goals and the temporal dimensions of the disconnection and reconnection between outcomes and evaluations are discussed."}
{"_id": "4e171856b5eac3a2bf7ebc1c243d9937b55a09bc", "title": "Random Features for Large-Scale Kernel Machines", "text": "To accelerate the training of kernel machines, we propose to map the input data to a randomized low-dimensional feature space and then apply existing fast linear methods. Our randomized features are designed so that the inner products of the transformed data are approximately equal to those in the feature space of a user specified shift-invariant kernel. We explore two sets of random features, provide convergence bounds on their ability to approximate various radial basis kernels, and show that in large-scale classification and regression tasks linear machine learning algorithms that use these features outperform state-of-the-art large-scale kernel machines."}
{"_id": "b959164d1efca4b73986ba5d21e664aadbbc0457", "title": "A Practical Bayesian Framework for Backpropagation Networks", "text": "A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible (1) objective comparisons between solutions using alternative network architectures, (2) objective stopping rules for network pruning or growing procedures, (3) objective choice of magnitude and type of weight decay terms or additive regularizers (for penalizing large weights, etc.), (4) a measure of the effective number of well-determined parameters in a model, (5) quantified estimates of the error bars on network parameters and on network output, and (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian \"evidence\" automatically embodies \"Occam's razor,\" penalizing overflexible and overcomplex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalization ability and the Bayesian evidence is obtained."}
{"_id": "a76f3a369d10a48c7e9ec872ffcd4e9ef2fd534e", "title": "The Effects of Intervention Using a Robot on the Levels of Compliance of Four Children with Autism to Requests Given by T heir Mothers", "text": "The Effects of Intervention Using a Robot on the Levels of Compliance of Four Children with Autism to Requests Given by Their Mothers Holly J. Nelson Department of Communication Disorders Master of Science The current study presents the use of a humanoid robot to facilitate compliant behaviors to two types of directives in four children with autism. The children participated in a three-month intervention program that incorporated core components of the SCERTS model in order to facilitate social communication (Prizant, Wetherby, Rubin, & Laurent, 2003). Treatment sessions were comprised of 40 minutes of traditional treatment and 10 minutes of interaction with a humanoid robot. Preand post-intervention assessment were conducted, each involving a 5 minute interaction with the child\u2019s mother, in which they were presented with directives in the form of physical manipulation and verbal requests accompanied by a gesture or model. These preand post-intervention sessions were recorded, analyzed, and coded for compliant and noncompliant behavior to the directives, as well as any eye contact, language, or reciprocal action that accompanied their behavior. The overall results were variable, revealing that two participants made notable gains, one child remained consistent, and another participant showed a decrease in compliant behavior in the post-intervention sessions. Further research should be conducted to include a longer period of baseline and intervention, more systematic identification of the most effective probes for the child, and documentation of the child\u2019s physical and emotional state."}
{"_id": "26fd11160af927f607654dfe884a2054004ef767", "title": "Adaptation and coaching of periodic motion primitives through physical and visual interaction", "text": "In this paper we propose and evaluate a control system to (1) learn and (2) adapt robot motion for continuous non-rigid contact with the environment. We present the approach in the context of wiping surfaces with robots. Our approach is based on learning by demonstration. First an initial periodicmotion, covering the essence of the wiping task, is transferred from a human to a robot. The system extracts and learns one period of motion. Once the user/demonstrator is content with the motion, the robot seeks and establishes contactwith a given surface,maintaining a predefined force of contact through force feedback. The shape of the surface is encoded for the complete period ofmotion, but the robot can adapt to a different surface, perturbations or obstacles. The novelty stems from the fact that the feedforward component is learned and encoded in a dynamic movement primitive. By using the feedforward component, the feedback component is greatly reduced if not completely canceled. Finally, if the user is not satisfied with the periodic pattern, he/she can change parts of motion through predefined gestures or through physical contact in a manner of a tutor or a coach. The complete system thus allows not only a transfer ofmotion, but a transfer ofmotionwithmatching correspondences, i.e. wiping motion is constrained to maintain physical contact with the surface to be wiped. The interface for both learning and adaptation is simple and intuitive and allows for fast and reliable knowledge transfer to the robot. Simulated and real world results in the application domain of wiping a surface are presented on three different robotic platforms. Results of the three robotic platforms, namely a 7 degree-of-freedom Kuka LWR-4 robot, the ARMAR-IIIa humanoid platform and the Sarcos CB-i humanoid robot, depict different methods of adaptation to the environment and coaching. \u00a9 2015 Elsevier B.V. All rights reserved. \u21e4 Corresponding author. E-mail addresses: andrej.gams@ijs.si (A. Gams), tadej.petric@ijs.si (T. Petri\u00a3), martin.do@kit.edu (M. Do), bojan.nemec@ijs.si (B. Nemec), xmorimo@atr.jp (J. Morimoto), asfour@kit.edu (T. Asfour), ales.ude@ijs.si (A. Ude)."}
{"_id": "2752e4f99d1e2852e52cbd2a5b6c502a0103885c", "title": "A Soldier Health Monitoring System for Military Applications", "text": "With recent advances in technology, various wearable sensors have been developed for the monitoring of human physiological parameters. A Body Sensor Network (BSN) consisting of such physiological and biomedical sensor nodes placed on, near or within a human body can be used for real-time health monitoring. In this paper, we describe an on-going effort to develop a system consisting of interconnected BSNs for real-time health monitoring of soldiers. We discuss the background and an application scenario for this project. We describe the preliminary prototype of the system and present a blast source localization application."}
{"_id": "a43b3644116e3289368e277ff4d9c63266991bb7", "title": "High-Conversion-Ratio Bidirectional DC\u2013DC Converter With Coupled Inductor", "text": "In this paper, a high-conversion-ratio bidirectional dc-dc converter with coupled inductor is proposed. In the boost mode, two capacitors are parallel charged and series discharged by the coupled inductor. Thus, high step-up voltage gain can be achieved with an appropriate duty ratio. The voltage stress on the main switch is reduced by a passive clamp circuit. Therefore, the low resistance RDS(ON) of the main switch can be adopted to reduce conduction loss. In the buck mode, two capacitors are series charged and parallel discharged by the coupled inductor. The bidirectional converter can have high step-down gain. Aside from that, all of the switches achieve zero voltage-switching turn-on, and the switching loss can be improved. Due to two active clamp circuits, the energy of the leakage inductor of the coupled inductor is recycled. The efficiency can be further improved. The operating principle and the steady-state analyses of the voltage gain are discussed. Finally, a 24-V-input-voltage, 400-V-output-voltage, and 200-W-output-power prototype circuit is implemented in the laboratory to verify the performance."}
{"_id": "4977b33fc7d1b69a65fe3acd2a14de325ab2b25b", "title": "Safeguarding 5G wireless communication networks using physical layer security", "text": "The fifth generation (5G) network will serve as a key enabler in meeting the continuously increasing demands for future wireless applications, including an ultra-high data rate, an ultrawide radio coverage, an ultra-large number of devices, and an ultra-low latency. This article examines security, a pivotal issue in the 5G network where wireless transmissions are inherently vulnerable to security breaches. Specifically, we focus on physical layer security, which safeguards data confidentiality by exploiting the intrinsic randomness of the communications medium and reaping the benefits offered by the disruptive technologies to 5G. Among various technologies, the three most promising ones are discussed: heterogenous networks, massive multiple-input multiple-output, and millimeter wave. On the basis of the key principles of each technology, we identify the rich opportunities and the outstanding challenges that security designers must tackle. Such an identification is expected to decisively advance the understanding of future physical layer security."}
{"_id": "2b64a8c1f584389b611198d47a750f5d74234426", "title": "Deblurring Face Images with Exemplars", "text": "The human face is one of the most interesting subjects involved in numerous applications. Significant progress has been made towards the image deblurring problem, however, existing generic deblurring methods are not able to achieve satisfying results on blurry face images. The success of the state-of-the-art image deblurring methods stems mainly from implicit or explicit restoration of salient edges for kernel estimation. When there is not much texture in the blurry image (e.g., face images), existing methods are less effective as only few edges can be used for kernel estimation. Moreover, recent methods are usually jeopardized by selecting ambiguous edges, which are imaged from the same edge of the object after blur, for kernel estimation due to local edge selection strategies. In this paper, we address these problems of deblurring face images by exploiting facial structures. We propose a maximum a posteriori (MAP) deblurring algorithm based on an exemplar dataset, without using the coarse-to-fine strategy or ad-hoc edge selections. Extensive evaluations against state-of-the-art methods demonstrate the effectiveness of the proposed algorithm for deblurring face images. We also show the extendability of our method to other specific deblurring tasks."}
{"_id": "6051b99473b0ede114ac642f47c968fc01acb83b", "title": "MorusDB: a resource for mulberry genomics and genome biology", "text": "Mulberry is an important cultivated plant that has received the attention of biologists interested in sericulture and plant-insect interaction. Morus notabilis, a wild mulberry species with a minimal chromosome number is an ideal material for whole-genome sequencing and assembly. The genome and transcriptome of M. notabilis were sequenced and analyzed. In this article, a web-based and open-access database, the Morus Genome Database (MorusDB), was developed to enable easy-to-access and data mining. The MorusDB provides an integrated data source and an easy accession of mulberry large-scale genomic sequencing and assembly, predicted genes and functional annotations, expressed sequence tags (ESTs), transposable elements (TEs), Gene Ontology (GO) terms, horizontal gene transfers between mulberry and silkworm and ortholog and paralog groups. Transcriptome sequencing data for M. notabilis root, leaf, bark, winter bud and male flower can also be searched and downloaded. Furthermore, MorusDB provides an analytical workbench with some built-in tools and pipelines, such as BLAST, Search GO, Mulberry GO and Mulberry GBrowse, to facilitate genomic studies and comparative genomics. The MorusDB provides important genomic resources for scientists working with mulberry and other Moraceae species, which include many important fruit crops. Designed as a basic platform and accompanied by the SilkDB, MorusDB strives to be a comprehensive platform for the silkworm-mulberry interaction studies. Database URL: http://morus.swu.edu.cn/morusdb."}
{"_id": "7cf863fcda9270eca4cb4f0b10b595f521b057f3", "title": "Guidelines for haptic interpersonal communication applications: an exploration of foot interaction styles", "text": "A new method for researching haptic interaction styles is presented, based on a layered interaction model and a classification of existing devices. The method is illustrated by designing a new foot interaction device. The aim of which is to enhance non-verbal communication over a computer network. A layered protocols interaction model allows to consider all aspects of the haptic communication process: the intention to perform an action, limitations of the human body, and specifications of the communication device and the network. We demonstrate how this model can be used to derive design-guidelines by analyzing and classifying existing communication devices. By designing and evaluating a foot interaction device, we not only demonstrate that feet are suited for personal, concealed communication over a network, but also show the added value of the design-guidelines. Results of user tests provide clues for designing stimuli for foot interaction and indicate applications of foot communication devices."}
{"_id": "abcb668ff382bc5bb273a3e809b120bffed56c98", "title": "20.7 A 13.8\u00b5W binaural dual-microphone digital ANSI S1.11 filter bank for hearing aids with zero-short-circuit-current logic in 65nm CMOS", "text": "This paper presents an ANSI S1.11 1/3-octave filter-bank chip for binaural hearing aids with two microphones per ear. Binaural multimicrophone systems significantly suppress noise interference and preserve interaural time cues at the cost of significantly higher computational and power requirements than monophonic single-microphone systems. With clock rates around the 1MHz mark, these systems are ideal candidates for low-power implementation through charge-recovery design. At such low clock frequencies, however, charge-recovery logic suffers from short-circuit currents that limit its theoretical energy efficiency [1]. The chip described in this paper is designed in 65nm CMOS using a new charge-recovery logic, called zero-short-circuit-current (ZSCC) logic, that drastically reduces short-circuit current. It processes 4 input streams at 1.75MHz with a charge recovery rate of 92%, achieving 9.7\u00d7 lower power per input compared with the 40nm monophonic single-input chip that represents the published state of the art [2]."}
{"_id": "eef39364df06eb9933d2fc41a0f13eea17113c58", "title": "Credit scoring using support vector machines with direct search for parameters selection", "text": "Support vector machines (SVM) is an effective tool for building good credit scoring models. However, the performance of the model depends on its parameters\u2019 setting. In this study, we use direct search method to optimize the SVM-based credit scoring model and compare it with other three parameters optimization methods, such as grid search, method based on design of experiment (DOE) and genetic algorithm (GA). Two real-world credit datasets are selected to demonstrate the effectiveness and feasibility of the method. The results show that the direct search method can find the effective model with high classification accuracy and good robustness and keep less dependency on the initial search space or point setting."}
{"_id": "744c1c0f76398b3632652650adea7b4fd91f96a7", "title": "Capability Model for Open Data: An Empirical Analysis", "text": "Creating superior competitiveness is central to open data organization's survivability in the fast changing and competitive open data market. In their quest to develop and increase competitiveness and survivability, many of these organizations are moving towards developing open data capabilities. Research-based knowledge on open data capabilities and how they relate to each other remains sparse, however, with most of the open data literature focusing on social and economic value of open data, not capabilities required. By exploring the related literature on business and organizational capabilities and linking the findings to the empirical evidence collected through the survey of 49 open data organizations around the world, this study develops an open data capability model. The model emerged from our deductive research process improves both theoretical and practical understanding of open data capabilities and their relationships required to help increase competitiveness and survivability of these types of organizations."}
{"_id": "b28e68dbf095e2ebc741b489b17c6c72e22fcaef", "title": "Automatic detection of diabetic retinopathy exudates from non-dilated retinal images using mathematical morphology methods", "text": "Diabetic retinopathy is a complication of diabetes that is caused by changes in the blood vessels of the retina. The symptoms can blur or distort the patient's vision and are a main cause of blindness. Exudates are one of the primary signs of diabetic retinopathy. Detection of exudates by ophthalmologists normally requires pupil dilation using a chemical solution which takes time and affects patients. This paper investigates and proposes a set of optimally adjusted morphological operators to be used for exudate detection on diabetic retinopathy patients' non-dilated pupil and low-contrast images. These automatically detected exudates are validated by comparing with expert ophthalmologists' hand-drawn ground-truths. The results are successful and the sensitivity and specificity for our exudate detection is 80% and 99.5%, respectively."}
{"_id": "8bfc93568d1894b911f07b8e1a3c61eab86df6da", "title": "On-Line Flatness Measurement in the Steelmaking Industry", "text": "Shape is a key characteristic to determine the quality of outgoing flat-rolled products in the steel industry. It is greatly influenced by flatness, a feature to describe how the surface of a rolled product approaches a plane. Flatness is of the utmost importance in steelmaking, since it is used by most downstream processes and customers for the acceptance or rejection of rolled products. Flatness sensors compute flatness measurements based on comparing the length of several longitudinal fibers of the surface of the product under inspection. Two main different approaches are commonly used. On the one hand, most mechanical sensors measure the tensile stress across the width of the rolled product, while manufacturing and estimating the fiber lengths from this stress. On the other hand, optical sensors measure the length of the fibers by means of light patterns projected onto the product surface. In this paper, we review the techniques and the main sensors used in the steelmaking industry to measure and quantify flatness defects in steel plates, sheets and strips. Most of these techniques and sensors can be used in other industries involving rolling mills or continuous production lines, such as aluminum, copper and paper, to name a few. Encompassed in the special issue, State-of-the-Art Sensors Technology in Spain 2013, this paper also reviews the most important flatness sensors designed and developed for the steelmaking industry in Spain."}
{"_id": "53bf890ddba4d6433e868c0a73a529243c23591c", "title": "An OpenNaaS Based SDN Framework for Dynamic QoS Control", "text": "Network Service Providers should offer provisioning services with guaranteed Quality of Service (QoS), specifically adapted to the characteristics of the applications running on their network. In this paper we propose the Network Control Layer (NCL), a software framework solution based on Software Defined Networks (SDN), OpenFlow and Network as a Service (NaaS) paradigms. It addresses a major innovation area in the field of network control and management providing on-demand end-to-end network provisioning services with guaranteed QoS, based on the specific requirements of on-top running interactive applications. The NCL implementation is based on the OpenNaaS framework, and it includes mechanisms for network status monitoring and SDN switches configuration based on the interactive applications' QoS network requirements. We demonstrate NCL's utility in the context of control plane models making use of a practical use case."}
{"_id": "7e0e7e7bda6b527fbe63b658d94b403a342d315a", "title": "A Feature-Based Approach for Group-Wise Grid Map Registration", "text": "For autonomous vehicles and advanced driver assistance systems, information on the actual state of the environment is fundamental for localization and mapping tasks. Localization benefits from multiple observations of the same location at different times as these may provide important information on static and mobile objects. For efficient mapping, the environment may be explored in parallel. For these purposes, multiple observations represented by grid maps have to be aligned into one mutual frame. This paper addresses the problem of group-wise grid map registration using an image processing approach. For registration, a rotational-invariant descriptor is proposed in order to provide the correspondences of points of interest in radar-based occupancy grid maps. As pairwise registration of multiple grid maps suffers from bias, this paper proposes a graph-based approach for robust registration of multiple grid maps. This will facilitate highly accurate range sensor maps for the aforementioned purposes. Large-scale experiments show the benefit of the proposed methods and compare it to state-of-the-art algorithms on radar measurements."}
{"_id": "27578f85abf2cc167e855411bbfa972d7d26ec0f", "title": "Very Small Ultra-Wide-Band MMIC Magic T and Applications to Combiners and Dividers", "text": "An FET-sized 1-18 GHz monolithic active magic T (1 W hybrid) is proposed. It unifies two different dividers, electrically isolated from each other, in a novel GaAs FET electrode configuration, viz. the LUFET concept. Its characteristics and experiment results are presented. Applications of the magic T to miniature wide-band RF signal processing. . ."}
{"_id": "effe71998a44456615dbc0e660c24eab6bfb7887", "title": "The big five personality traits: psychological entities or statistical constructs?", "text": "The present study employed multivariate genetic item-level analyses to examine the ontology and the genetic and environmental etiology of the Big Five personality dimensions, as measured by the NEO Five Factor Inventory (NEO-FFI) [Costa and McCrae, Revised NEO personality inventory (NEO PI-R) and NEO five-factor inventory (NEO-FFI) professional manual, 1992; Hoekstra et al., NEO personality questionnaires NEO-PI-R, NEO-FFI: manual, 1996]. Common and independent pathway model comparison was used to test whether the five personality dimensions fully mediate the genetic and environmental effects on the items, as would be expected under the realist interpretation of the Big Five. In addition, the dimensionalities of the latent genetic and environmental structures were examined. Item scores of a population-based sample of 7,900 adult twins (including 2,805 complete twin pairs; 1,528 MZ and 1,277 DZ) on the Dutch version of the NEO-FFI were analyzed. Although both the genetic and the environmental covariance components display a 5-factor structure, applications of common and independent pathway modeling showed that they do not comply with the collinearity constraints entailed in the common pathway model. Implications for the substantive interpretation of the Big Five are discussed."}
{"_id": "dad2396b76e07377827c9e062b23d2c1ac7448c7", "title": "Relationship between the cayley-dickson fourier transform and the hartley transform of multidimensional real signals", "text": "The paper shows relations between two different frequency representations of n-dimensional signals (n = 1, 2, 3): the Cayley-Dickson Fourier transform and the Hartley transform. The formulas relating the n-D complex Fourier transform and the n-D Hartley transform are presented. New formulas relating the Quaternion and Octonion Fourier transforms and respectively the 2-D and 3-D Hartley transforms are developed. The paper is illustrated with examples of Hartley transforms of 1-D, 2-D and 3-D Gaussian signals."}
{"_id": "1b84bea9e36b39d8e21650d929eee886404c4d8a", "title": "Budgeted Optimization with Constrained Experiments", "text": "Motivated by a real-world problem, we study a novel budgeted optimization problem where the goal is to optimize an unknown function f(\u00b7) given a budget by requesting a sequence of samples from the function. In our setting, however, evaluating the function at precisely specified points is not practically possible due to prohibitive costs. Instead, we can only request constrained experiments. A constrained experiment, denoted by Q, specifies a subset of the input space for the experimenter to sample the function from. The outcome of Q includes a sampled experiment x, and its function output f(x). Importantly, as the constraints of Q become looser, the cost of fulfilling the request decreases, but the uncertainty about the location x increases. Our goal is to manage this trade-off by selecting a set of constrained experiments that best optimize f(\u00b7) within the budget. We study this problem in two different settings, the non-sequential (or batch) setting where a set of constrained experiments is selected at once, and the sequential setting where experiments are selected one at a time. We evaluate our proposed methods for both settings using synthetic and real functions. The experimental results demonstrate the efficacy of the proposed methods."}
{"_id": "660b18a5c8c9df7659d57b57593e4ca69a043663", "title": "Asynchronous MPC with a strict honest majority using non-equivocation", "text": "Multiparty computation (MPC) among n parties can tolerate up to tsynchronous communication setting; however, in an asynchronous communication setting, the resiliency bound decreases to only t < n/3 active corruptions. We improve the resiliency bound for asynchronous MPC (AMPC) to match synchronous MPC using non-equivocation.\n Non-equivocation is a message authentication mechanism to restrict a corrupted sender from making conflicting statements to different (honest) parties. It can be implemented using an increment-only counter and a digital signature oracle, realizable with trusted hardware modules readily available in commodity computers and smartphone devices. A non-equivocation mechanism can also be transferable and allows a receiver to verifiably transfer the authenticated statement to other parties. In this work, using transferable non-equivocation, we present an AMPC protocol tolerating t < n/2 faults. From a practical point of view, our AMPC protocol requires fewer setup assumptions than the previous AMPC protocol with t < n/2 by Beerliova-Trubiniova, Hirt and Nielsen [PODC 2010]: unlike their AMPC protocol, it does not require any synchronous broadcast round at the beginning of the protocol and avoids the threshold homomorphic encryption setup assumption. Moreover, our AMPC protocol is also efficient and provides a gain of \u0398(n) in the communication complexity per multiplication gate, over the AMPC protocol of Beerliova-Trubiniova et al. In the process, using non-equivocation, we also define the first asynchronous verifiable secret sharing (AVSS) scheme with t < n/2, which is of independent interest to threshold cryptography."}
{"_id": "ddd4df11f38fb7dfa29a00fcdfed335ff0b2f0fc", "title": "The Evolution of Culturally-Variable Sex Differences : Men and Women Are Not Always Different , but When They Are ... It Appears Not to Result from Patriarchy or Sex Role Socialization", "text": "\u00a9 Springer International Publishing Switzerland 2014 T. K. Shackelford, R. D. Hansen (eds.), The Evolution of Sexuality, Evolutionary Psychology DOI 10.1007/978-3-319-09384-0_11 D. P. Schmitt (\uf02a) Department of Psychology, Bradley University, 75 Bradley Hall, Peoria, IL 61625, USA e-mail: dps@bradley.edu Just like all other sexually reproducing species, male and female humans are more similar than different. Even so, men\u2019s and women\u2019s psychological traits sometimes differ in important ways, both in terms of typical or average levels (Buss 1989; Del Giudice 2009; Ellis 2011) and in terms of variability (Archer and Mehdikhani 2003; Borkenau et al. 2013; Lippa 2009). Sex differences in numerous traits have been well-established as moderate to large in size1 and as culturally pervasive. For example, sex differences in negative emotion-related traits have been documented across several meta-analyses (Feingold 1994; Miettunen et al. 2007; Whissell 1996), integrative neuroscientific reviews (Hyde et al. 2008; Stevens and Hamann 2012), and large cross-cultural surveys (Costa et al. 2001; Hopcroft and McLaughlin 2012; Lippa 2009; McCrae and Terracciano 2005; Schmitt et al. 2008; Van de Velde et al. 2010). Using a multivariate approach, Del Giudice et al. (2012) documented across 16 personality traits\u2014ranging from dominance and liveliness to perfectionism and tension\u2014that sex differences in personality are astonishingly large, with only 10 % overlap in men\u2019s and women\u2019s overall distributions. Beyond sex differences in personality traits, psychologists have uncovered dozens of ways that men and women differ in affect, behavior, and cognition across most cultures (Archer 2014; Browne 1998; Mealey 2000). In one comprehensive review, Ellis (2011) identified 63 psychological sex differences that have been replicated across multiple cultures and at least 10 studies, with not a single replication failure (probably an overly strict exclusionary criterion; see Schmitt et al. 2014). In another wide-ranging review, Archer (2014) reported culturally-pervasive sex differences are reliably found in the assessment of negative emotions (e.g., fear, anxiety, depression), anti-social behaviors (e.g., aggression, violence, criminality), cognitive abilities (e.g., mental rotation, object location,"}
{"_id": "082e55f9bff0e08ef35d81d136ed6c7bb5cfeb7f", "title": "Toward Computational Fact-Checking", "text": "Our news are saturated with claims of \u201cfacts\u201d made from data. Database research has in the past focused on how to answer queries, but has not devoted much attention to discerning more subtle qualities of the resulting claims, e.g., is a claim \u201ccherry-picking\u201d? This paper proposes a framework that models claims based on structured data as parameterized queries. A key insight is that we can learn a lot about a claim by perturbing its parameters and seeing how its conclusion changes. This framework lets us formulate practical fact-checking tasks\u2014reverse-engineering (often intentionally) vague claims, and countering questionable claims\u2014as computational problems. Along with the modeling framework, we develop an algorithmic framework that enables efficient instantiations of \u201cmeta\u201d algorithms by supplying appropriate algorithmic building blocks. We present real-world examples and experiments that demonstrate the power of our model, efficiency of our algorithms, and usefulness of their results."}
{"_id": "1727c826ce2082002086e0de49a597b3f8f2e0fe", "title": "Neural NILM: Deep Neural Networks Applied to Energy Disaggregation", "text": "Energy disaggregation estimates appliance-by-appliance electricity consumption from a single meter that measures the whole home's electricity demand. Recently, deep neural networks have driven remarkable improvements in classification performance in neighbouring machine learning fields such as image classification and automatic speech recognition. In this paper, we adapt three deep neural network architectures to energy disaggregation: 1) a form of recurrent neural network called `long short-term memory' (LSTM); 2) denoising autoencoders; and 3) a network which regresses the start time, end time and average power demand of each appliance activation. We use seven metrics to test the performance of these algorithms on real aggregate power data from five appliances. Tests are performed against a house not seen during training and against houses seen during training. We find that all three neural nets achieve better F1 scores (averaged over all five appliances) than either combinatorial optimisation or factorial hidden Markov models and that our neural net algorithms generalise well to an unseen house."}
{"_id": "5934599cada82396b665d9225a644245dc979877", "title": "A Survey on Non-Intrusive Load Monitoring Methodies and Techniques for Energy Disaggregation Problem", "text": "The rapid urbanization of developing countries coupled with explosion in construction of high rising buildings and the high power usage in them calls for conservation and efficient energy program. Such a programme require monitoring of end-use appliances energy consumption in real-time. The worldwide recent adoption of smart-meter in smart-grid, has led to the rise of Non-Intrusive Load Monitoring (NILM); which enables estimation of appliance-specific power consumption from building\u2019s aggregate power consumption reading. NILM provides households with cost-effective real-time monitoring of end-use appliances to help them understand their consumption pattern and become part and parcel of energy conservation strategy. This paper presents an up to date overview of NILM system and its associated methods and techniques for energy disaggregation problem. This is followed by the review of the state-of-the art NILM algorithms. Furthermore, we review several performance metrics used by NILM researcher to evaluate NILM algorithms and discuss existing benchmarking framework for direct comparison of the state of the art NILM algorithms. Finally, the paper discuss potential NILM use-cases, presents an overview of the public available dataset and highlight challenges and future research directions. \u2217sambaiga@gmail.com \u2020shubi.kaijage@nmaist.ac.tz \u2021kisangiri.michael@nmaist.ac.tz \u00a7nhmvungi@udsm.ac.tz"}
{"_id": "79ed16acfce6381e05743adfcffa21a2590b768a", "title": "Energy disaggregation for real-time building flexibility detection", "text": "Energy is a limited resource which has to be managed wisely, taking into account both supply-demand matching and capacity constraints in the distribution grid. One aspect of the smart energy management at the building level is given by the problem of real-time detection of flexible demand available. In this paper we propose the use of energy disaggregation techniques to perform this task. Firstly, we investigate the use of existing classification methods to perform energy disaggregation. A comparison is performed between four classifiers, namely Naive Bayes, k-Nearest Neighbors, Support Vector Machine and AdaBoost. Secondly, we propose the use of Restricted Boltzmann Machine to automatically perform feature extraction. The extracted features are then used as inputs to the four classifiers and consequently shown to improve their accuracy. The efficiency of our approach is demonstrated on a real database consisting of detailed appliance-level measurements with high temporal resolution, which has been used for energy disaggregation in previous studies, namely the REDD. The results show robustness and good generalization capabilities to newly presented buildings with at least 96% accuracy."}
{"_id": "d85a51e2978f4563ee74bf9a09d3219e03799819", "title": "REDD : A Public Data Set for Energy Disaggregation Research", "text": "Energy and sustainability issues raise a large number of problems that can be tackled using approaches from data mining and machine learning, but traction of such problems has been slow due to the lack of publicly available data. In this paper we present the Reference Energy Disaggregation Data Set (REDD), a freely available data set containing detailed power usage information from several homes, which is aimed at furthering research on energy disaggregation (the task of determining the component appliance contributions from an aggregated electricity signal). We discuss past approaches to disaggregation and how they have influenced our design choices in collecting data, we describe the hardware and software setups for the data collection, and we present initial benchmark disaggregation results using a well-known Factorial Hidden Markov Model (FHMM) technique."}
{"_id": "ca62b26407d215f9fc810f54af8dedc73ff446ac", "title": "Backtrack Programming Techniques", "text": "The purpose of this paper is twofold. First, a brief exposition of the general backtrack technique and its history is given. Second, it is shown how the use of macros can considerably shorten the computation time in many cases. In particular, this technique has allowed the solution of two previously open combinatorial problems, the computation of new terms in a well-known series, and the substantial reduction in computation time for the solution to another combinatorial problem."}
{"_id": "16271988e40da98d51ec92a79ec7d0c39b83de8b", "title": "A review of process fault detection and diagnosis: Part II: Qualitative models and search strategies", "text": "In this part of the paper, we review qualitative model representations and search strategies used in fault diagnostic systems. Qualitative models are usually developed based on some fundamental understanding of the physics and chemistry of the process. Various forms of qualitative models such as causal models and abstraction hierarchies are discussed. The relative advantages and disadvantages of these representations are highlighted. In terms of search strategies, we broadly classify them as topographic and symptomatic search techniques. Topographic searches perform malfunction analysis using a template of normal operation, whereas, symptomatic searches look for symptoms to direct the search to the fault location. Various forms of topographic and symptomatic search strategies are discussed. # 2002 Published by Elsevier Science Ltd."}
{"_id": "0988bd0d27f22b71bdbb9524b29c86ea7f24b167", "title": "Ontology-based information retrieval for historical documents", "text": "This article presents an ontology-based approach to designing and developing new representation IR system instead of conventional keyword-based approach. Such representation improves the precision and recall of document retrieval. Experiments carried out on the ontology-based approach and keyword-based approach demonstrates the effectiveness of the proposed approach."}
{"_id": "651e0593e76f6b536dae32337b4b9067d09f0796", "title": "MentorNet: Regularizing Very Deep Neural Networks on Corrupted Labels", "text": "Recent studies have discovered that deep networks are capable of memorizing the entire data even when the labels are completely random. Since deep models are trained on big data where labels are often noisy, the ability to overfit noise can lead to poor performance. To overcome the overfitting on corrupted training data, we propose a novel technique to regularize deep networks in the data dimension. This is achieved by learning a neural network called MentorNet to supervise the training of the base network, namely, StudentNet. Our work is inspired by curriculum learning and advances the theory by learning a curriculum from data by neural networks. We demonstrate the efficacy of MentorNet on several benchmarks. Comprehensive experiments show that it is able to significantly improve the generalization performance of the state-of-the-art deep networks on corrupted training data."}
{"_id": "b3ac41a573287c1410bda125c98bdbfd3520c077", "title": "Sustainability and ICT \u2013 An overview of the field", "text": "Sustainable development requires the decoupling of economic growth from environmental impacts and from the use of natural resources. This article gives an overview of existing approaches to using Information and Communication Technology (ICT) in the service of sustainability: Environmental Informatics, Green ICT, and Sustainable Human-Computer Interaction (HCI). These approaches are then discussed in the context of the Jevons paradox, an economic argument implying that technological efficiency alone will not produce sustainability. This consideration leads to the conclusion that a combination of efficiency and sufficiency strategies is the most effective way to stimulate innovations which will unleash ICT\u2019s potential to support sustainability."}
{"_id": "6769d09cd981775facfe24a1359ff698c040a2e2", "title": "Focus group interviews in nursing research: part 1.", "text": "Focus groups are used by researchers in the social and behavioural sciences to explore phenomena and are accepted as a legitimate qualitative methodology. The primary goal of focus groups is to use interaction data resulting from discussion among participants to increase the depth of the enquiry and reveal aspects of the phenomenon assumed to be otherwise less accessible. This article, the first of three articles on focus groups, examines the nature of focus groups, issues regarding planning focus groups, selecting participants and the size of the groups. This article is aimed at students who are undertaking research modules as part of their academic studies or writing a research proposal as well as at novice researchers who intend to use focus groups as a means of data collection."}
{"_id": "731571b85ad40b7e8704f03bc721f19910a9a8a6", "title": "Technology Insight: noninvasive brain stimulation in neurology\u2014perspectives on the therapeutic potential of rTMS and tDCS", "text": "In neurology, as in all branches of medicine, symptoms of disease and the resulting burden of illness and disability are not simply the consequence of the injury, inflammation or dysfunction of a given organ; they also reflect the consequences of the nervous system's attempt to adapt to the insult. This plastic response includes compensatory changes that prove adaptive for the individual, as well as changes that contribute to functional disability and are, therefore, maladaptive. In this context, brain stimulation techniques tailored to modulate individual plastic changes associated with neurological diseases might enhance clinical benefits and minimize adverse effects. In this Review, we discuss the use of two noninvasive brain stimulation techniques\u2014repetitive transcranial magnetic stimulation and transcranial direct current stimulation\u2014to modulate activity in the targeted cortex or in a dysfunctional network, to restore an adaptive equilibrium in a disrupted network for best behavioral outcome, and to suppress plastic changes for functional advantage. We review randomized controlled studies, in focal epilepsy, Parkinson's disease, recovery from stroke, and chronic pain, to illustrate these principles, and we present evidence for the clinical effects of these two techniques."}
{"_id": "9bf06ae90833c801fcd974d4a201968ea5961c7d", "title": "Guarded Commands, Nondeterminacy and Formal Derivation of Programs", "text": "So-called \u201cguarded commands\u201d are introduced as a building block for alternative and repetitive constructs that allow nondeterministic program components for which at least the activity evoked, but possibly even the final state, is not necessarily uniquely determined by the initial state. For the formal derivation of programs expressed in terms of these constructs, a calculus will be be shown."}
{"_id": "724da2e972866249234b5c27949b62beb80e4239", "title": "Change-Impact Analysis of Firewall Policies", "text": "Firewalls are the mainstay of enterprise security and the most widely adopted technology for protecting private networks. The quality of protection provided by a firewall directly depends on the quality of its policy (i.e., configuration). Due to the lack of tools for analyzing firewall policies, most firewalls on the Internet have been plagued with policy errors. A firewall policy error either creates security holes that will allow malicious traffic to sneak into a private network or blocks legitimate traffic and disrupts normal business processes, which in turn could lead to irreparable, if not tragic, consequences. A major source of policy errors stem from policy changes. Firewall policies often need to be changed as networks evolve and new threats emerge. In this paper, we first present the theory and algorithms for firewall policy change-impact analysis. Our algorithms take as input a firewall policy and a proposed change, then output the accurate impact of the change. Thus, a firewall administrator can verify a proposed change before committing it."}
{"_id": "e3e24c263ab53daa93a43c2350cf15bc79ac42b2", "title": "Phase noise modelling and mitigation techniques in ofdm communications systems", "text": "This paper addresses the analysis and mitigation of the signal distortion caused by oscillator phase noise (PN) in OFDM communications systems. Two new PN mitigation techniques are proposed, especially targeted for reducing the intercarrier interference (ICI) effects due to PN. The first proposed method is a fairly simple one, stemming from the idea of linearly interpolating between two consecutive common phase error (CPE) estimates to obtain a linearized estimate of the time-varying phase characteristics. The second technique, in turn, is an extension to the existing state-of-the-art ICI estimation methods. Here the idea is to use an additional interpolation stage to improve the phase estimation performance around the boundaries of two consecutive OFDM symbols. The paper also verifies the performance improvement of these new PN estimation techniques by comparing them to the existing state-of-the-art techniques using extensive computer simulations. To emphasize practicality, the simulations are carried out in 3GPP-LTE downlink -like system context, covering both additive white Gaussian noise (AWGN) and extended ITU-R Vehicular A multipath channel types."}
{"_id": "1923d570b0512af73c05e45e20bc876e8e66afa5", "title": "Endoscopic Surgery: The History, the Pioneers", "text": "The introduction of endoscopy into surgical practice is one of the biggest success stories in the history of medicine. Endoscopy has its roots in the nineteenth century and was initially developed by urologists and internists. During the 1960s and 1970s gynecologists took the lead in the development of endoscopic surgery while most of the surgical community continued to ignore the possibilities of the new technique. This was due in part to the introduction of ever more sophisticated drugs, the impressive results of intensive care medicine, and advances in anesthesia, which led to the development of more radical and extensive operations, or \u201cmajor surgery.\u201d The idea that large problems require large incisions so deeply dominated surgical thinking that there was little room to appreciate the advances of \u201ckey-hole\u201d surgery. Working against this current, some general surgeons took up the challenge. In 1976 the Surgical Study Group on Endoscopy and Ultrasound (CAES) was formed in Hamburg. Five years later, on the other side of the Atlantic, the Society of American Gastrointestinal Endoscopic Surgeons (SAGES) was called into being. In 1987 the first issue of the journal Surgical Endoscopy was published, and the following year the First World Congress on Surgical Endoscopy took place in Berlin. The sweeping success of the \u201claparoscopic revolution\u201d (1989\u20131990) marked the end of traditional open surgery and encouraged surgeons to consider new perspectives. By the 1990s the breakthrough had been accomplished: endoscopy was incorporated into surgical thinking."}
{"_id": "f5b830941d0415c444bb2b6a5ee9fc856abac126", "title": "Critical Success Factors for a Customer Relationship Management Strategy", "text": "Most organizations have perceived the customer relationship management (CRM) concept as a technological solution for problems in individual areas, accompanied by a great deal of uncoordinated initiatives. Nevertheless, CRM must be conceived as a strategy, due to its human, technological, and processes implications, at the time an organization decides to implement it. On this basis, the main goal stated in this research is to propose, justify, and validate a model based on critical success factors (CSFs) that will constitute a guide for companies in the implementation and diagnosis of a CRM strategy. The model is conformed by a set of 13 CSFs with their 55 corresponding metrics, which will serve as a guide for organizations wishing to apply this type of strategy. These factors cover the three key aspects of every CRM strategy (human factor, processes, and technology); giving a global focus and propitiating success in the implementation of a CRM strategy. These CSFs \u2013 and their metrics \u2013 were evaluated by a group of internationally experts allowing determining guidelines for a CRM implementation as well as the probable causes of the deWciencies in past projects."}
{"_id": "25652884ce0291b6196380bfbe72df726156407a", "title": "Video parsing for abnormality detection", "text": "Detecting abnormalities in video is a challenging problem since the class of all irregular objects and behaviors is infinite and thus no (or by far not enough) abnormal training samples are available. Consequently, a standard setting is to find abnormalities without actually knowing what they are because we have not been shown abnormal examples during training. However, although the training data does not define what an abnormality looks like, the main paradigm in this field is to directly search for individual abnormal local patches or image regions independent of another. To address this problem we parse video frames by establishing a set of hypotheses that jointly explain all the foreground while, at same time, trying to find normal training samples that explain the hypotheses. Consequently, we can avoid a direct detection of abnormalities. They are discovered indirectly as those hypotheses which are needed for covering the foreground without finding an explanation by normal samples for themselves. We present a probabilistic model that localizes abnormalities using statistical inference. On the challenging dataset of [15] it outperforms the state-of-the-art by 7% to achieve a frame-based abnormality classification performance of 91% and the localization performance improves by 32% to 76%."}
{"_id": "dc627777ec1fffa1d4fbfd786837ed4f2bb977c0", "title": "Using Supervised Learning Methods for Gene Selection in RNA-Seq Case-Control Studies", "text": "Whole transcriptome studies typically yield large amounts of data, with expression values for all genes or transcripts of the genome. The search for genes of interest in a particular study setting can thus be a daunting task, usually relying on automated computational methods. Moreover, most biological questions imply that such a search should be performed in a multivariate setting, to take into account the inter-genes relationships. Differential expression analysis commonly yields large lists of genes deemed significant, even after adjustment for multiple testing, making the subsequent study possibilities extensive. Here, we explore the use of supervised learning methods to rank large ensembles of genes defined by their expression values measured with RNA-Seq in a typical 2 classes sample set. First, we use one of the variable importance measures generated by the random forests classification algorithm as a metric to rank genes. Second, we define the EPS (extreme pseudo-samples) pipeline, making use of VAEs (Variational Autoencoders) and regressors to extract a ranking of genes while leveraging the feature space of both virtual and comparable samples. We show that, on 12 cancer RNA-Seq data sets ranging from 323 to 1,210 samples, using either a random forests-based gene selection method or the EPS pipeline outperforms differential expression analysis for 9 and 8 out of the 12 datasets respectively, in terms of identifying subsets of genes associated with survival. These results demonstrate the potential of supervised learning-based gene selection methods in RNA-Seq studies and highlight the need to use such multivariate gene selection methods alongside the widely used differential expression analysis."}
{"_id": "b3f0b349a3feb3809f7b1895ccaed1ec9daa907d", "title": "CYCLOSA: Decentralizing Private Web Search through SGX-Based Browser Extensions", "text": "By regularly querying Web search engines, users (unconsciously) disclose large amounts of their personal data as part of their search queries, among which some might reveal sensitive information (e.g. health issues, sexual, political or religious preferences). Several solutions exist to allow users querying search engines while improving privacy protection. However, these solutions suffer from a number of limitations: some are subject to user re-identification attacks, while others lack scalability or are unable to provide accurate results. This paper presents CYCLOSA, a secure, scalable and accurate private Web search solution. CYCLOSA improves security by relying on trusted execution environments (TEEs) as provided by Intel SGX. Further, CYCLOSA proposes a novel adaptive privacy protection solution that reduces the risk of user re-identification. CYCLOSA sends fake queries to the search engine and dynamically adapts their count according to the sensitivity of the user query. In addition, CYCLOSA meets scalability as it is fully decentralized, spreading the load for distributing fake queries among other nodes. Finally, CYCLOSA achieves accuracy of Web search as it handles the real query and the fake queries separately, in contrast to other existing solutions that mix fake and real query results."}
{"_id": "06510633326418e81ab8918d32c4e0cb8dd9c44a", "title": "Beyond binary choices: Integrating individual and social creativity", "text": "The power of the unaided individual mind is highly overrated. Although society often thinks of creative individuals as working in isolation, intelligence and creativity result in large part from interaction and collaboration with other individuals. Much human creativity is social, arising from activities that take place in a context in which interaction with other people and the artifacts that embody collective knowledge are essential contributors. This paper examines: (1) how individual and social creativity can be integrated by means of proper collaboration models and tools supporting distributed cognition; (2) how the creation of shareable externalizations (\u201cboundary objects\u201d) and the adoption of evolutionary process models in the construction of meta-design environments can enhance creativity and support spontaneous design activities (\u201cunselfconscious cultures of design\u201d); and (3) how a new design competence is emerging\u2014one that requires passage from individual creative actions to synergetic activities, from the reflective practitioner to reflective communities, and from given tasks to personally meaningful activities. The paper offers examples in the context of collaborative design and art practice, including urban planning, interactive art, and open source. In the effort to draw a viable path \u201cbeyond binary choices\u201d, the paper points out some major challenges for the next generation of socio-technical environments to further increase the integration of individual and social creativity."}
{"_id": "25642be46de0f2e74e0da81a14646f8bfcc9000a", "title": "What Does Classifying More Than 10, 000 Image Categories Tell Us?", "text": "Image classification is a critical task for both humans and computers. One of the challenges lies in the large scale of the semantic space. In particular, humans can recognize tens of thousands of object classes and scenes. No computer vision algorithm today has been tested at this scale. This paper presents a study of large scale categorization including a series of challenging experiments on classification with more than 10, 000 image classes. We find that a) computational issues become crucial in algorithm design; b) conventional wisdom from a couple of hundred image categories on relative performance of different classifiers does not necessarily hold when the number of categories increases; c) there is a surprisingly strong relationship between the structure of WordNet (developed for studying language) and the difficulty of visual categorization; d) classification can be improved by exploiting the semantic hierarchy. Toward the future goal of developing automatic vision algorithms to recognize tens of thousands or even millions of image categories, we make a series of observations and arguments about dataset scale, category density, and image hierarchy."}
{"_id": "b8b17b2dd75749ecd68eb1ff1d2cea1703660a18", "title": "Knowledge-aided adaptive radar at DARPA: an overview", "text": "For the past several years, the Defense Advanced Research Projects Agency (DARPA) has been pioneering the development of the first ever real-time knowledge-aided (KA) adaptive radar architecture. The impetus for this program is the ever increasingly complex missions and operational environments encountered by modern radars and the inability of traditional adaptation methods to address rapidly varying interference environments. The DARPA KA sensor signal processing and expert reasoning (KASSPER) program has as its goal the demonstration of a high performance embedded computing (HPEC) architecture capable of integrating high-fidelity environmental knowledge (i.e., priors) into the most computationally demanding subsystem of a modern radar: the adaptive space-time beamformer. This is no mean feat as environmental knowledge is a memory quantity that is inherently difficult (if not impossible) to access at the rates required to meet radar front-end throughput requirements. In this article, we will provide an overview of the KASSPER program highlighting both the benefits of KA adaptive radar, key algorithmic concepts, and the breakthrough look-ahead radar scheduling approach that is the keystone to the KASSPER HPEC architecture."}
{"_id": "4b35d641f969b3eb977aae2f9b7abfbbad7ff9ac", "title": "Predicting patient revisits at the University of Virginia Health System Emergency Department", "text": "This study focuses on the predictive identification of patients frequently revisiting the University of Virginia Health System Emergency Department. Identifying these patients can help the Emergency Department formulate strategies that improve patient care and decrease excess Emergency Department utilization. The Health System in particular faces a number of unique challenges in its ongoing mission to reduce extraneous patient revisits. In addition to its status as an academic hospital, it serves a broad geographic region as one of five level-I trauma centers in the Commonwealth of Virginia. In this study we utilized 5 years of data from the University of Virginia Health System data warehouse. These data contain information on 91,297 patients and 196,902 unique encounters, including details on patient demographics, diagnoses and hospital departments visited. From these raw data we engineered features, trained gradient boosted decision trees, and experimented with unsupervised clustering techniques to best approximate 30-day Emergency Department revisit risk at the conclusion of each patient encounter. Our best model for revisit risk resulted in a Receiver Operator Characteristic Area Under the Curve of 0.75. Furthermore, we exhibit the real-time performance of our model as a tool to rank which at-risk patients should receive priority for Emergency Department resources. This test demonstrated a significant improvement over the current allocation of Emergency Department social worker resources with a daily Mean Average Precision of 0.83. The methodologies proposed in this paper exhibit an end-to-end framework to transform raw administrative claims and limited clinical data into predictive models that help the Emergency Department better manage resources and target interventions."}
{"_id": "349277a67468e7f6a5bfc487ab125887c6925229", "title": "Computer System Intrusion Detection : A Survey", "text": "The abil ity to detect intruders in computer systems increases in importance as computers are increasingly integrated into the systems that we rely on for the correct functioning of society. This paper reviews the history of research in intrusion detection as performed in software in the context of operating systems for a single computer, a distributed system, or a network of computers. There are two basic approaches: anomaly detection and misuse detection. Both have been practiced since the 1980s. Both have naturally scaled to use in distributed systems and networks."}
{"_id": "58ff0a7f24a3a97b1c3dc5162ead03c6a0e03180", "title": "Towards Ontology Learning from Folksonomies", "text": "A folksonomy refers to a collection of user-defined tags with which users describe contents published on the Web. With the flourish of Web 2.0, folksonomies have become an important mean to develop the Semantic Web. Because tags in folksonomies are authored freely, there is a need to understand the structure and semantics of these tags in various applications. In this paper, we propose a learning approach to create an ontology that captures the hierarchical semantic structure of folksonomies. Our experimental results on two different genres of real world data sets show that our method can effectively learn the ontology structure from the folksonomies."}
{"_id": "a4f38719e4e20c25ac1f6cd51f31b38d50264590", "title": "Cloud-assisted Industrial Internet of Things (IIoT) - Enabled framework for health monitoring", "text": "The promising potential of the emerging Internet of Things (IoT) technologies for interconnected medical devices and sensors has played an important role in the next-generation healthcare industry for quality patient care. Because of the increasing number of elderly and disabled people, there is an urgent need for a real-time health monitoring infrastructure for analyzing patients\u2019 healthcare data to avoid preventable deaths. Healthcare Industrial IoT (HealthIIoT) has significant potential for the realization of such monitoring. HealthIIoT is a combination of communication technologies, interconnected apps, Things (devices and sensors), and people that would function together as one smart system to monitor, track, and store patients\u2019 healthcare information for ongoing care. This paper presents a HealthIIoT-enabled monitoring framework, where ECG and other healthcare data are collected by mobile devices and sensors and securely sent to the cloud for seamless access by healthcare professionals. Signal enhancement, watermarking, and other related analytics will be used to avoid identity theft or clinical error by healthcare professionals. The suitability of this approach has been validated through both experimental evaluation, and simulation by deploying an IoT-driven ECG-based health monitoring service in the cloud. \u00a9 2016 Elsevier B.V. All rights reserved."}
{"_id": "0f4f5ba66a0b666c512c4f120c521cecc89e013f", "title": "RFID Technology for IoT-Based Personal Healthcare in Smart Spaces", "text": "The current evolution of the traditional medical model toward the participatory medicine can be boosted by the Internet of Things (IoT) paradigm involving sensors (environmental, wearable, and implanted) spread inside domestic environments with the purpose to monitor the user's health and activate remote assistance. RF identification (RFID) technology is now mature to provide part of the IoT physical layer for the personal healthcare in smart environments through low-cost, energy-autonomous, and disposable sensors. It is here presented a survey on the state-of-the-art of RFID for application to body centric systems and for gathering information (temperature, humidity, and other gases) about the user's living environment. Many available options are described up to the application level with some examples of RFID systems able to collect and process multichannel data about the human behavior in compliance with the power exposure and sanitary regulations. Open challenges and possible new research trends are finally discussed."}
{"_id": "c9bfed5fb8c6c7e57f65568f32a311fd9e6148fd", "title": "Cloud-enabled wireless body area networks for pervasive healthcare", "text": "With the support of mobile cloud computing, wireless body area networks can be significantly enhanced for massive deployment of pervasive healthcare applications. However, several technical issues and challenges are associated with the integration of WBANs and MCC. In this article, we study a cloud-enabled WBAN architecture and its applications in pervasive healthcare systems. We highlight the methodologies for transmitting vital sign data to the cloud by using energy-efficient routing, cloud resource allocation, semantic interactions, and data security mechanisms."}
{"_id": "ceae612aef2950a5b42009c25302079f891bf7e2", "title": "An IoT-Aware Architecture for Smart Healthcare Systems", "text": "Over the last few years, the convincing forward steps in the development of Internet of Things (IoT)-enabling solutions are spurring the advent of novel and fascinating applications. Among others, mainly radio frequency identification (RFID), wireless sensor network (WSN), and smart mobile technologies are leading this evolutionary trend. In the wake of this tendency, this paper proposes a novel, IoT-aware, smart architecture for automatic monitoring and tracking of patients, personnel, and biomedical devices within hospitals and nursing institutes. Staying true to the IoT vision, we propose a smart hospital system (SHS), which relies on different, yet complementary, technologies, specifically RFID, WSN, and smart mobile, interoperating with each other through a Constrained Application Protocol (CoAP)/IPv6 over low-power wireless personal area network (6LoWPAN)/representational state transfer (REST) network infrastructure. The SHS is able to collect, in real time, both environmental conditions and patients' physiological parameters via an ultra-low-power hybrid sensing network (HSN) composed of 6LoWPAN nodes integrating UHF RFID functionalities. Sensed data are delivered to a control center where an advanced monitoring application (MA) makes them easily accessible by both local and remote users via a REST web service. The simple proof of concept implemented to validate the proposed SHS has highlighted a number of key capabilities and aspects of novelty, which represent a significant step forward compared to the actual state of the art."}
{"_id": "c9824c687887ae34739ad0a8cd94c08bf996e9ca", "title": "Selective mutism and comorbidity with developmental disorder/delay, anxiety disorder, and elimination disorder.", "text": "OBJECTIVES\nTo assess the comorbidity of developmental disorder/delay in children with selective mutism (SM) and to assess other comorbid symptoms such as anxiety, enuresis, and encopresis.\n\n\nMETHOD\nSubjects with SM and their matched controls were evaluated by a comprehensive assessment of the child and by means of a parental structured diagnostic interview with focus on developmental history. Diagnoses were made according to DSM-IV.\n\n\nRESULTS\nA total of 54 children with SM and 108 control children were evaluated. Of the children with SM, 68.5% met the criteria for a diagnosis reflecting developmental disorder/delay compared with 13.0% in the control group. The criteria for any anxiety diagnosis were met by 74.1% in the SM group and for an elimination disorder by 31.5% versus 7.4% and 9.3%, respectively, in the control group. In the SM group, 46.3% of the children met the criteria for both an anxiety diagnosis and a diagnosis reflecting developmental disorder/delay versus 0.9% in the controls.\n\n\nCONCLUSIONS\nSM is associated with developmental disorder/delay nearly as frequently as with anxiety disorders. The mutism may conceal developmental problems in children with SM. Children with SM often meet diagnostic criteria for both a developmental and an anxiety disorder."}
{"_id": "b101dd995c21aefde44fdacec799875d8cd943bc", "title": "Modelling memory functions with recurrent neural networks consisting of input compensation units: I. Static situations", "text": "Humans are able to form internal representations of the information they process\u2014a capability which enables them to perform many different memory tasks. Therefore, the neural system has to learn somehow to represent aspects of the environmental situation; this process is assumed to be based on synaptic changes. The situations to be represented are various as for example different types of static patterns but also dynamic scenes. How are neural networks consisting of mutually connected neurons capable of performing such tasks? Here we propose a new neuronal structure for artificial neurons. This structure allows one to disentangle the dynamics of the recurrent connectivity from the dynamics induced by synaptic changes due to the learning processes. The error signal is computed locally within the individual neuron. Thus, online learning is possible without any additional structures. Recurrent neural networks equipped with these computational units cope with different memory tasks. Examples illustrate how information is extracted from environmental situations comprising fixed patterns to produce sustained activity and to deal with simple algebraic relations."}
{"_id": "7db9c588206d4286c25a12a75f06cc536a99ea34", "title": "Optimal Government Debt Maturity \u2217", "text": "This paper develops a model of optimal government debt maturity in which the government cannot issue state-contingent bonds and cannot commit to fiscal policy. If the government can perfectly commit, it fully insulates the economy against government spending shocks by purchasing short-term assets and issuing long-term debt. These positions are quantitatively very large relative to GDP and do not need to be actively managed by the government. Our main result is that these conclusions are not robust to the introduction of lack of commitment. Under lack of commitment, large and tilted positions are very expensive to finance ex-ante since they exacerbate the problem of lack of commitment ex-post. In contrast, a flat maturity structure minimizes the cost of lack of commitment, though it also limits insurance and increases the volatility of fiscal policy distortions. We show that the optimal maturity structure is nearly flat because reducing average borrowing costs is quantitatively more important for welfare than reducing fiscal policy volatility. Thus, under lack of commitment, the government actively manages its debt positions and can approximate optimal policy by confining its debt instruments to consols."}
{"_id": "f9cd7da733c5b5b54a4bbd35f67a913c05df83ea", "title": "Direction of Arrival Estimation of GNSS Signals Based on Synthetic Antenna Array", "text": "Jammer and interference are sources of errors in positions estimated by GNSS receivers. The interfering signals reduce signal-to-noise ratio and cause receiver failure to correctly detect satellite signals. Because of the robustness of beamforming techniques to jamming and multipath mitigation by placing nulls in direction of interference signals, an antenna array with a set of multi-channel receivers can be used to improve GNSS signal reception. Spatial reference beam forming uses the information in the Direction Of Arrival (DOA) of desired and interference signals for this purpose. However, using a multi-channel receiver is not applicable in many applications for estimating the Angle Of Arrival (AOA) of the signal (hardware limitations or portability issues). This paper proposes a new method for DOA estimation of jammer and interference signals based on a synthetic antenna array. In this case, the motion of a single antenna can be used to estimate the AOA of the interfering signals."}
{"_id": "77f14f3b5f094bf2e785fae772846116da18fa48", "title": "FIGURE8: A Novel System for Generating and Evaluating Figurative Language", "text": "Similes are easily obtained from web-driven and casebased reasoning approaches. Still, generating thoughtful figurative descriptions with meaningful relation to narrative context and author style has not yet been fully explored. In this paper, the author prepares the foundation for a computational model which can achieve this level of aesthetic complexity. This paper also introduces and evaluates a possible architecture for generating and ranking figurative comparisons on par with humans: the"}
{"_id": "5c5a997753f3c158d45e91abd70ede28ba78b0d4", "title": "Detecting and Naming Actors in Movies Using Generative Appearance Models", "text": "We introduce a generative model for learning person and costume specific detectors from labeled examples. We demonstrate the model on the task of localizing and naming actors in long video sequences. More specifically, the actor's head and shoulders are each represented as a constellation of optional color regions. Detection can proceed despite changes in view-point and partial occlusions. We explain how to learn the models from a small number of labeled key frames or video tracks, and how to detect novel appearances of the actors in a maximum likelihood framework. We present results on a challenging movie example, with 81% recall in actor detection (coverage) and 89% precision in actor identification (naming)."}
{"_id": "98269bdcadf64235a5b7f874f5b578ddcd5912b1", "title": "Patient-centredness: a conceptual framework and review of the empirical literature.", "text": "A 'patient-centred' approach is increasingly regarded as crucial for the delivery of high quality care by doctors. However, there is considerable ambiguity concerning the exact meaning of the term and the optimum method of measuring the process and outcomes of patient-centred care. This paper reviews the conceptual and empirical literature in order to develop a model of the various aspects of the doctor-patient relationship encompassed by the concept of 'patient-centredness' and to assess the advantages and disadvantages of alternative methods of measurement. Five conceptual dimensions are identified: biopsychosocial perspective; 'patient-as-person'; sharing power and responsibility; therapeutic alliance; and 'doctor-as-person'. Two main approaches to measurement are evaluated: self-report instruments and external observation methods. A number of recommendations concerning the measurement of patient-centredness are made."}
{"_id": "8759c972d89e1bf6deeab780aa2f8e21140c953b", "title": "Excitation Control Method for a Low Sidelobe SIW Series Slot Array Antenna With 45$^{\\circ}$ Linear Polarization", "text": "A sidelobe suppression method for a series slot array antenna which radiates 45\u00b0 -inclined linear polarization is proposed. Axial displacements are employed to create arbitrary excitation coefficients for individual centered-inclined radiating slots along the center line of a broad wall. To verify the proposed design method, we design two types of center-fed linear slot array antennas with a Dolph-Chebyshev distribution for -20 dB and -26 dB sidelobe levels (SLLs) in the Ka band. Furthermore, a cross-validation process involving an equivalent circuit model analysis and electromagnetic full-wave simulation using CST MWS is utilized. The entire structure of the proposed series slot array antenna is fabricated on printed circuit boards (PCBs), including drilling and chemical etching, to secure advantages of miniaturization and cost reduction. The measured realized gains are 15.17 and 15.95 dBi and SLLs are -18.7 and -22.5 dB respectively for two types of fabricated antennas. It demonstrates the validity of the proposed sidelobe suppression method."}
{"_id": "9ae641778971fd6e0bdf3755ef4658d40c218def", "title": "Flow Cytometry Bioinformatics", "text": "Flow cytometry bioinformatics is the application of bioinformatics to flow cytometry data, which involves storing, retrieving, organizing, and analyzing flow cytometry data using extensive computational resources and tools. Flow cytometry bioinformatics requires extensive use of and contributes to the development of techniques from computational statistics and machine learning. Flow cytometry and related methods allow the quantification of multiple independent biomarkers on large numbers of single cells. The rapid growth in the multidimensionality and throughput of flow cytometry data, particularly in the 2000s, has led to the creation of a variety of computational analysis methods, data standards, and public databases for the sharing of results. Computational methods exist to assist in the preprocessing of flow cytometry data, identifying cell populations within it, matching those cell populations across samples, and performing diagnosis and discovery using the results of previous steps. For preprocessing, this includes compensating for spectral overlap, transforming data onto scales conducive to visualization and analysis, assessing data for quality, and normalizing data across samples and experiments. For population identification, tools are available to aid traditional manual identification of populations in two-dimensional scatter plots (gating), to use dimensionality reduction to aid gating, and to find populations automatically in higher dimensional space in a variety of ways. It is also possible to characterize data in more comprehensive ways, such as the density-guided binary space partitioning technique known as probability binning, or by combinatorial gating. Finally, diagnosis using flow cytometry data can be aided by supervised learning techniques, and discovery of new cell types of biological importance by high-throughput statistical methods, as part of pipelines incorporating all of the aforementioned methods. Open standards, data, and software are also key parts of flow cytometry bioinformatics. Data standards include the widely adopted Flow Cytometry Standard (FCS) defining how data from cytometers should be stored, but also several new standards under development by the International Society for Advancement of Cytometry (ISAC) to aid in storing more detailed information about experimental design and analytical steps. Open data is slowly growing with the opening of the CytoBank database in 2010 and FlowRepository in 2012, both of which allow users to freely distribute their data, and the latter of which has been recommended as the preferred repository for MIFlowCyt-compliant data by ISAC. Open software is most widely available in the form of a suite of Bioconductor packages, but is also available for web execution on the GenePattern platform."}
{"_id": "4c58cdeba2201739a303cfe157e5dbd9c47ab577", "title": "CREDIT CARD FRAUD DETECTION BASED ON BEHAVIOR MINING", "text": "NIMISHA PHILIP, SHERLY K.K Department of Computer Science & Engineering, Toc H Institute of Science & Technology email:nimishavellapally@gmail.com Department of Information Technology, Toc H Institute of Science & Technology, Ernakulam, Kerala, India shrly_shilu@yahoo.com Abstract \u2014 Globalization and increased use of the Internet for online shopping has resulted in a considerable proliferation of credit card transactions throughout the world.Higher acceptability and convenience of credit cards for purchases has not only given personal comfort to customers but also attracted a large number of attackers. As a result, credit card payment systems must be supported by efficient fraud detection capability for minimizing unwanted activities by adversaries. Most of the well known algorithms for fraud detection are based on supervised training Every cardholder has a certain shopping behavior, which establishes an activity profile for him. Existing FDS try to capture behavioral patterns as rules which are static .This becomes ineffective when cardholder develops new patterns of behavior Here, we propose a unsupervised method to dynamically profile behavior pattern of customer Then the incoming transactions are compared against the user profile to indicate the anomalies, based on which the corresponding warnings are outputted. A FP tree based pattern matching algorithm is used to evaluate how unusual the new transactions are."}
{"_id": "6f60b08c9ef3040d0d8eaac32e0d991900b0e390", "title": "The relationship between gamma frequency and running speed differs for slow and fast gamma rhythms in freely behaving rats.", "text": "In hippocampal area CA1 of rats, the frequency of gamma activity has been shown to increase with running speed (Ahmed and Mehta, 2012). This finding suggests that different gamma frequencies simply allow for different timings of transitions across cell assemblies at varying running speeds, rather than serving unique functions. However, accumulating evidence supports the conclusion that slow (\u223c25-55 Hz) and fast (\u223c60-100 Hz) gamma are distinct network states with different functions. If slow and fast gamma constitute distinct network states, then it is possible that slow and fast gamma frequencies are differentially affected by running speed. In this study, we tested this hypothesis and found that slow and fast gamma frequencies change differently as a function of running speed in hippocampal areas CA1 and CA3, and in the superficial layers of the medial entorhinal cortex (MEC). Fast gamma frequencies increased with increasing running speed in all three areas. Slow gamma frequencies changed significantly less across different speeds. Furthermore, at high running speeds, CA3 firing rates were low, and MEC firing rates were high, suggesting that CA1 transitions from CA3 inputs to MEC inputs as running speed increases. These results support the hypothesis that slow and fast gamma reflect functionally distinct states in the hippocampal network, with fast gamma driven by MEC at high running speeds and slow gamma driven by CA3 at low running speeds."}
{"_id": "2683c31534393493924f967c5d1e4ba0657f0181", "title": "Design and Implementation of a TEM Stripline for EMC Testing", "text": "In this paper, a stripline for radiated field immunity testing of audio/video products is designed, constructed, calibrated, and verified. Since the canonical one is only suitable for testing equipments with height less than 70 cm, there is a need of a new device which is also in compliance with EN 55020 standard for testing new large sets, like 47'' thin-film transistor liquid crystal display (TFT LCD) television sets, in the frequency range of 150 kHz-150 MHz. Increasing the height and width of testing area causes important problems in terms of field uniformity regarding the higher order modes, characteristic impedance, and reflections in addition to the room resonances and field interference sources like corners at the edges of stripline. Comprehensive numerical study is performed to overcome these problems and obtain the optimum design before the construction. Measured data show that the new stripline is in very good agreement with the specifications determined by the standards and therefore it can be used for the electromagnetic compatibility (EMC) testing."}
{"_id": "b790ed1ac0b3451ff7522b3b2b9cda6ca3e28670", "title": "Giving a new makeover to STEAM: Establishing YouTube beauty gurus as digital literacy educators through messages and effects on viewers", "text": ""}
{"_id": "2f91a4913bcfaa5ed5db8f9fc4f5b28a272e52e6", "title": "De-anonymizing scale-free social networks by percolation graph matching", "text": "We address the problem of social network de-anonymization when relationships between people are described by scale-free graphs. In particular, we propose a rigorous, asymptotic mathematical analysis of the network de-anonymization problem while capturing the impact of power-law node degree distribution, which is a fundamental and quite ubiquitous feature of many complex systems such as social networks. By applying bootstrap percolation and a novel graph slicing technique, we prove that large inhomogeneities in the node degree lead to a dramatic reduction of the initial set of nodes that must be known a priori (the seeds) in order to successfully identify all other users. We characterize the size of this set when seeds are selected using different criteria, and we show that their number can be as small as n% for any small \u03b5 > 0. Our results are validated through simulation experiments on real social network graphs."}
{"_id": "c639c3aa374b39ae2e461ac704d90324cdcc3ac8", "title": "SpiderMAV: Perching and stabilizing micro aerial vehicles with bio-inspired tensile anchoring systems", "text": "Whilst Micro Aerial Vehicles (MAVs) possess a variety of promising capabilities, their high energy consumption severely limits applications where flight endurance is of high importance. Reducing energy usage is one of the main challenges in advancing aerial robot utility. To address this bottleneck in the development of unmanned aerial vehicle applications, this work proposes an bioinspired mechanical approach and develops an aerial robotic system for greater endurance enabled by low power station-keeping. The aerial robotic system consists of an multirotor MAV and anchoring modules capable of launching multiple tensile anchors to fixed structures in its operating envelope. The resulting tensile perch is capable of providing a mechanically stabilized mode for high accuracy operation in 3D workspace. We explore generalised geometric and static modelling of the stabilisation concept using screw theory. Following the analytical modelling of the integrated robotic system, the tensile anchoring modules employing high pressure gas actuation are designed, prototyped and then integrated to a quadrotor platform. The presented design is validated with experimental tests, demonstrating the stabilization capability even in a windy environment."}
{"_id": "1d9e8248ec8b333f86233bb0c4a88060776f51b1", "title": "SUSAN\u2014A New Approach to Low Level Image Processing", "text": "This paper describes a new approach to low level image processing; in particular, edge and corner detection and structure preserving noise reduction. Non-linear filtering is used to define which parts of the image are closely related to each individual pixel; each pixel has associated with it a local image region which is of similar brightness to that pixel. The new feature detectors are based on the minimization of this local image region, and the noise reduction method uses this region as the smoothing neighbourhood. The resulting methods are accurate, noise resistant and fast. Details of the new feature detectors and of the new noise reduction method are described, along with test results."}
{"_id": "3d31692cac919f0b98f5be6be55ff8945c2a810c", "title": "An accident prediction approach based on XGBoost", "text": "As an important threat to public security, urban fire accident causes huge economic loss and catastrophic collapse. Predicting and analyzing the interior rule of urban fire accident from its appearance needed to be solved in the field. In this paper, we propose a new urban fire accident prediction approach based on XGBoost. The method determines the predictive indexes in a quantitative and qualitative way from different characteristics in various kinds of fire accidents. For screening the features we need, we adopt the feature selection algorithm based on association rules. For data cleaning, we use a method based on Box-Cox transformation that transforms the continual response variables from the feature space for removing the dependencies on unobservable errors and the predictor variable to some extent. Then we use the data to train the model based on XGBoost to obtain the best prediction accuracy. Experiments show that the method provides a feasible solution to urban fire accident prediction. The method contributes to improving the public security situation, we have added the method and related model to the City in a box\u2122, Shenzhen Aerospace Smart City System Technology Co., Ltd."}
{"_id": "c7b2518e892ccb310d66bb9c1114cb78f5a22e23", "title": "Generating spiking time series with Generative Adversarial Networks : an application on banking transactions by Luca Simonetto 11413522 September 2018", "text": "The task of data generation using Generative Models has recently gained more and more attention from the scientific community, as the number of applications in which these models work surprisingly well is constantly increasing. Some examples are image and video generation, speech synthesis and style transfer, pose guided image generation, cross-domain transfer and super resolution. Contrarily to such tasks generating data coming from the banking domain poses a different challenge, due to its atypical structure when compared with traditional data and its limited availability due to privacy restrictions. In this work, we analyze the feasibility of generating spiking time series patterns appearing in the banking environment using Generative Adversarial Networks. We develop a novel end-to-end framework for training, testing and comparing different generative models using both quantitative and qualitative metrics. Finally, we propose a novel approach that combines Variational Autoencoders with Generative Adversarial Networks in order to learn a loss function for datasets in which good similarity metrics are difficult to define."}
{"_id": "a0efabbcdfa89f60624114fe7fb35c02093f3cc1", "title": "Emotional stability, core self-evaluations, and job outcomes: A review of the evidence and an agenda for future research", "text": "In this article we present a review of research on core self-evaluations, a broad personality trait indicated by 4 more narrow traits: self-esteem, generalized self-efficacy, locus of control, and emotional stability. We review evidence suggesting that the 4 core traits are highly related, load on a single unitary factor, and have dubious incremental validity controlling for their common core. We more generally investigate the construct validity of core self-evaluations. We also report on the development and validation of the first direct measure of the concept, the Core Self-Evaluations Scale (CSES). Cross-cultural evidence on the CSES is provided. We conclude by offering an agenda for future research, discussing areas where future core self-evaluations research is most needed."}
{"_id": "47f92fe6854c54c0b0e6d5792becfbd589dac877", "title": "Analysis of digital bang-bang clock and data recovery for multi-gigabit/s serial transceivers", "text": "A Harmonic Balance method for analyzing digital bang-bang clock and data recovery (CDR) is proposed in this paper. The jitter tolerance performance of the CDR is predicted by a function with variables that can be easily correlated to design parameters. A 6.25Gb/s serial transceiver was fabricated in 90nm CMOS technology. Measurements show that the jitter tolerance performance can be accurately predicted by the proposed method."}
{"_id": "6933387f7c984f9819d3fcfab9b06905b537fbee", "title": "Compact Circularly-Polarized Patch Antenna Loaded With Metamaterial Structures", "text": "A metamaterial-inspired low-profile patch antenna is proposed and studied for circularly-polarized (CP) radiation. The present antenna, which has a single-fed configuration, is loaded with the composite right/left-handed (CRLH) mushroom-like structures and a reactive impedance surface (RIS) for miniaturization purpose. The CP radiation is realized by exciting two orthogonally-polarized modes simultaneously which are located in the left-handed (LH) region. The detailed antenna radiation characteristics are examined and illustrated with both simulated and experimental results. The CP performance can be controlled in several different ways. This antenna exhibits an overall size of 0.177\u03bb0 \u00d7 0.181\u03bb0 \u00d7 0.025\u03bb0 at 2.58 GHz and a radiation efficiency around 72%. Finally, based on the proposed CP patch antenna, a compact dual-band dual linearly-polarized patch antenna has also been designed and fabricated. Promising experimental results are observed."}
{"_id": "c1037dbfd829159f42cb4d1e60da804adf000779", "title": "Low Profile Fully Planar Folded Dipole Antenna on a High Impedance Surface", "text": "A fully planar antenna design incorporating a high impedance surface (HIS) is presented. The HIS is composed by a periodic array of subwavelength dogbone-shaped conductors printed on top of a thin dielectric substrate and backed by a metallic ground plane. First, the characteristics of a dipole over PEC or PMC layers, a dielectric slab, and the HIS are compared and studied in detail, highlighting the advantages provided by the use of the HIS. Then, the design of a low profile folded dipole antenna working at 5.5 GHz on top of the HIS is described. The surface provides close to 6% antenna impedance bandwidth and increased gain up to 7 dBi, while shielding the lower half space from radiation. The antenna structure comprises three metal layers without any vias between them, and its overall thickness is 0.059\u03bb0. The dipole is fed by a balanced twin lead line through a balun transformer integrated in the same antenna layer. A prototype has been built and measurements confirming simulation results are provided."}
{"_id": "dde75bf73fa0037cbf586d89d698825bae2a4669", "title": "A Compact, Low-Profile Metasurface-Enabled Antenna for Wearable Medical Body-Area Network Devices", "text": "We propose a compact conformal wearable antenna that operates in the 2.36-2.4 GHz medical body-area network band. The antenna is enabled by placing a highly truncated metasurface, consisting of only a two by two array of I-shaped elements, underneath a planar monopole. In contrast to previously reported artificial magnetic conducting ground plane backed antenna designs, here the metasurface acts not only as a ground plane for isolation, but also as the main radiator. An antenna prototype was fabricated and tested, showing a strong agreement between simulation and measurement. Comparing to previously proposed wearable antennas, the demonstrated antenna has a compact form factor of 0.5 \u03bb0 \u00d70.3 \u03bb0 \u00d70.028 \u03bb0, all while achieving a 5.5% impedance bandwidth, a gain of 6.2 dBi, and a front-to-back ratio higher than 23 dB. Further numerical and experimental investigations reveal that the performance of the antenna is extraordinarily robust to both structural deformation and human body loading, far superior to both planar monopoles and microstrip patch antennas. Additionally, the introduced metal backed metasurface enables a 95.3% reduction in the specific absorption rate, making such an antenna a prime candidate for incorporation into various wearable devices."}
{"_id": "82db6b9b0ca217dbd3e83ef6f2a8bb8c568d350d", "title": "Mining Contextual Preference Rules for Building User Profiles", "text": "The emerging of ubiquitous computing technologies in recent years has given rise to a new field of research consisting in incorporating context-aware preference querying facilities in database systems. One important step in this setting is the Preference Elicitation task which consists in providing the user ways to inform his/her choice on pairs of objects with a minimal effort. In this paper we propose an automatic preference elicitation method based on mining techniques. The method consists in extracting a user profile from a set of user preference samples. In our setting, a profile is specified by a set of contextual preference rules verifying some important properties (soundness and conciseness). We evaluate the efficacy of the proposed method in a series of experiments executed on a real-world database of user preferences about movies."}
{"_id": "6d0f9a1a28c4e39082db30de8023a5efa759d1e5", "title": "Wearable driver drowsiness detection using electrooculography signal", "text": "Every year, more than 100,000 automobile crashes are caused by driver drowsiness. Various technologies have been developed to address this issue, including vehicle-based measurements, behavior change detection, and physiological signal analysis. Both vehicle-based measurements and behavior change detection require bulky components. They also identify the driver's drowsiness too late for effective accident prevention. The physiological signal changes in an early stage and can be used to detect the on-set of driver drowsiness. In this paper, the development of a wearable drowsiness detection system is introduced. This system measures the electrooculography (EOG) signal; transmits the signal to a smartphone wirelessly; and could alarm the driver based on a prediction algorithm that can estimate 0.5-second-ahead EOG signal behavior. This system is compact, comfortable, and cost effective. The 0.5-second-ahead estimation capability provides the critical time for a driver to correct the behavior, and ultimately saves lives."}
{"_id": "b352e768dae0b8c063beedff296270590485cdfe", "title": "Stacked fully convolutional networks with multi-channel learning: application to medical image segmentation", "text": "The automated segmentation of regions of interest (ROIs) in medical imaging is the fundamental requirement for the derivation of high-level semantics for image analysis in clinical decision support systems. Traditional segmentation approaches such as region-based depend heavily upon hand-crafted features and a priori knowledge of the user. As such, these methods are difficult to adopt within a clinical environment. Recently, methods based on fully convolutional networks (FCN) have achieved great success in the segmentation of general images. FCNs leverage a large labeled dataset to hierarchically learn the features that best correspond to the shallow appearance as well as the deep semantics of the images. However, when applied to medical images, FCNs usually produce coarse ROI detection and poor boundary definitions primarily due to the limited number of labeled training data and limited constraints of label agreement among neighboring similar pixels. In this paper, we propose a new stacked FCN architecture with multi-channel learning (SFCN-ML). We embed the FCN in a stacked architecture to learn the foreground ROI features and background non-ROI features separately and then integrate these different channels to produce the final segmentation result. In contrast to traditional FCN methods, our SFCN-ML architecture enables the visual attributes and semantics derived from both the fore- and background channels to be iteratively learned and inferred. We conducted extensive experiments on three public datasets with a variety of visual challenges. Our results show that our SFCN-ML is more effective and robust than a routine FCN and its variants, and other state-of-the-art methods."}
{"_id": "a816140318e3857d01e2dc4ccc2e2f5a7a8f094a", "title": "Design, Fabrication, and Characterization of a 3-D CMOS Fluxgate Magnetometer", "text": "A dual-core 3-D microfluxgate magnetometer fabricated by a simple and inexpensive fabrication process is described in this paper. The microfluxgate is able to operate along a nearly linear V-B relationship at the second harmonic frequency and features good characteristics of high sensitivity and low noise response. These characteristic results indicate a field-to-voltage transfer coefficient of 11 V/T measured at the second harmonic frequency, power consumption of 67.3 mW, and a field noise response less than 12 nT/\u221a Hz at 1 Hz. In brief, our proposed device not only enhances responsivity capability and linear V-B characteristics, but also is CMOS process compatible, which is considered both function-efficient and cost-effective."}
{"_id": "48c0f9829a0ff8647b94082ef3d0a83c57417cb6", "title": "Performance Analysis of Big Data Gathering in Wireless Sensor Network Using an EM Based Clustering Scheme", "text": "Big-data is a popular term in the field of information and communication technology. Wireless Sensor Networks (WSN) is one of the eminent contributors of big data. WSN contains numerous sensor nodes that cooperatively monitor an environment. Each network consists of sensor node, memory, and communication device. Data generated by single sensor node is small but data generated by distributed sensor network is significantly large and it is termed as big-data. The critical issue in WSN is energy consumption and data gathering. This paper mainly focus on Expectation Maximization (EM) based clustering scheme implementation and the performance analysis of WSN using single mobile sink to eight mobile sink. From the analysis we also derived a relationship between the number of mobile sinks required for a particular network with a given number of sensor nodes. Experimental results show that the number of mobile sinks is also an important parameter to efficiently gather information in WSN."}
{"_id": "c6d59c7d23186d4f370d4e5ae4065ffc0efdbbf8", "title": "All-dielectric subwavelength metasurface focusing lens.", "text": "We have proposed, designed, manufactured and tested low loss dielectric micro-lenses for infrared (IR) radiation based on a dielectric metamaterial layer. This metamaterial layer was created by patterning a dielectric surface and etching to sub-micron depths. For a proof-of-concept lens demonstration, we have chosen a fine patterned array of nano-pillars with variable diameters. Gradient index (GRIN) properties were achieved by engineering the nano-pattern characteristics across the lens, so that the effective optical density of the dielectric metamaterial layer peaks around the lens center, and gradually drops at the lens periphery. A set of lens designs with reduced reflection and tailorable phase gradients have been developed and tested, demonstrating focal distances of a few hundred microns, beam area contraction ratio up to three, and insertion losses as low as 11%."}
{"_id": "36af51f13b2c1c24f13c6b468a7113054f8c4327", "title": "Ontology-Based Information Extraction for Knowledge Enrichment and Validation", "text": "Ontology is widely used as a mean to represent and share common concepts and knowledge from a particular domain or specialisation. As a knowledge representation, the knowledge within an ontology must be able to evolve along with the recent changes and updates within the community practice. In this paper, we propose a new Ontology-based Information Extraction (OBIE) system that extends existing systems in order to enrich and validate an ontology. Our model enables the ontology to find related recent knowledge in the domain from communities, by exploiting their underlying knowledge as keywords. The knowledge extraction process uses ontology-based and pattern-based information extraction technique. Not only the extracted knowledge enriches the ontology, it also validates contradictory instance-related statements within the ontology that is no longer relevant to recent practices. We determine a confidence value during the enrichment and validation process to ensure the stability of the enriched ontology. We implement the model and present a case study in herbal medicine domain. The result of the enrichment and validation process shows promising results. Moreover, we analyse how our proposed model contributes to the achievement of a richer and stable ontology."}
{"_id": "a3442990ba9059708272580f0eef5f84f8605427", "title": "SecMANO: Towards Network Functions Virtualization (NFV) Based Security MANagement and Orchestration", "text": "Network Functions Virtualization (NFV) has recently emerged as one of the major technological driving forces that significantly accelerate the evolution of today's computer and communication networks. Despite the advantages of NFV, e.g., saving investment cost, optimizing resource consumption, improving operational efficiency, simplifying network service lifecycle management, lots of novel security threats and vulnerabilities will be introduced, thereby impeding its further development and deployment in practice. In this paper, we briefly report our threat analysis in the context of NFV, and identify the corresponding security requirements. The purpose is to establish a comprehensive threat taxonomy and provide a guideline to develop effective security countermeasures. Furthermore, a conceptual design framework for NFV based security management and service orchestration is presented, with an objective to dynamically and adaptively deploy and manage security functions on the demands of users and customers. A use case about NFV based access control is also developed, illustrating the feasibility and advantages of implementing NFV based security management and orchestration."}
{"_id": "36b482f6a6666175f776a7a12837a1d366ef1815", "title": "Information security management needs more holistic approach: A literature review", "text": "Information technology has dramatically increased online business opportunities; however these opportunities have also created serious risks in relation to information security. Previously, information security issues were studied in a technological context, but growing security needs have extended researchers\u2019 attention to explore the management role in information security management. Various studies have explored different management roles and activities, but none has given a comprehensive picture of these roles and activities to manage information security effectively. So it is necessary to accumulate knowledge about various managerial roles and activities from literature to enable managers to adopt these for a more holistic approach to information security management. In this paper, using a systematic literature review approach, we synthesised literature related to management\u2019s roles in information security to explore specific managerial activities to enhance information security management. We found that numerous activities of management, particularly development and execution of information security policy, awareness, compliance training, development of effective enterprise information architecture, IT nformation architecture infrastructure management, business and IT alignment and human resources management, had a significant impact on the quality of management of information security. Thus, this research makes a novel contribution by arguing that a more holistic approach to information security is needed and we suggest the ways in which managers can play an effective role in information security. This research also opens up many new avenues for further research in this area. \u00a9 2015 Elsevier Ltd. All rights reserved."}
{"_id": "5e6f43abce81b991a0860f28f0915b2333b280c5", "title": "Incorporating Unsupervised Learning in Activity Recognition", "text": "Users are constantly involved in a multitude of activities in ever-changing context. Analyzing activities in contextrich environments has become a great challenge in contextawareness research. Traditional methods for activity recognition, such as classification, cannot cope with the variety and dynamicity of context and activities. In this paper, we propose an activity recognition approach that incorporates unsupervised learning. We analyze the feasibility of applying subspace clustering\u2014a specific type of unsupervised learning\u2014 to high-dimensional, heterogeneous sensory input. Then we present the correspondence between clustering output and classification input. This approach has the potential to discover implicit, evolving activities, and can provide valuable assistance to traditional classification based methods. As sensors become prevalent means in context detection and information channels proliferate to make context sharing easier, it is increasingly challenging to interpret context and analyze its effects on the activities (Lim and Dey 2010). We argue that applying traditional approaches to activity recognition may become more and more difficult to apply in context and activity-rich environments. In the literature, context attributes used for learning activities are chosen by either empirical assumption or dimension reduction to render a small set of features (Krause, Smailagic, and Siewiorek 2006). These approaches are infeasible in face of a broad spectrum of context information. The most significant drawback is that they fail to acknowledge the large variety of features needed to describe different activities. For activity recognition, most previous works applied supervised learning approaches that aimed at predicting activities among a set of known classes(Ferscha et al. 2004). These approaches, however, are also challenged when coping with new and fast evolving activities. Unsupervised learning, particularly clustering, has been highly successful for revealing implicit relationships and regularities in large data sets. Intuitively, we can envisage an activity recognition approach that applies clustering to context history. The clusters, representing frequent context patterns, can suggest activities and their contextual conditions. The results can be used independently for analyzing and interpreting activities. Furthermore, clusters can reveal the scopes and conditions Copyright c \u00a9 2011, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. of activities and interactions. This information is valuable when determining the scopes of information sharing in pervasive environments. Although clustering is a promising approach in discovering associations within context, the feasibility of traditional clustering is questionable in dealing with high dimensionality and heterogeneity of context data. In this paper, we will first conduct a detailed analysis about the challenges to apply clustering for activity recognition. Afterwards we introduce two recent subspace clustering methods that can address these challenges. Lastly, based on the analysis of unsupervised activity recognition, we will propose an activity recognition framework that incorporates clustering in conventional classification. 
We will show that the two directions are complementary to each other, and developing a hybrid approach will greatly benefit activity context awareness. Unsupervised activity learning"}
{"_id": "35de12d0b56582afddb44f9c2bdde87048e0c43e", "title": "Active Learning with Feedback on Both Features and Instances", "text": "We extend the traditional active learning framework to include feedback on features in addition to labeling instances, and we execute a careful study of the effects of feature selection and human feedback on features in the setting of text categorization. Our experiments on a variety of categorization tasks indicate that there is significant potential in improving classifier performance by feature re-weighting, beyond that achieved via membership queries alone (traditional active learning) if we have access to an oracle that can point to the important (most predictive) features. Our experiments on human subjects indicate that human feedback on feature relevance can identify a sufficient proportion of the most relevant features (over 50% in our experiments). We find that on average, labeling a feature takes much less time than labeling a document. We devise an algorithm that interleaves labeling features and documents which significantly accelerates standard active learning in our simulation experiments. Feature feedback can complement traditional active learning in applications such as news filtering, e-mail classification, and personalization, where the human teacher can have significant knowledge on the relevance of features."}
{"_id": "1bbe7f68d48181a8fadfd1eb7b6eed140f49d1d6", "title": "Fast image stitching and editing for panorama painting on mobile phones", "text": "We propose a fast image stitching and editing approach for panorama painting and have implemented it on a mobile phone. A seam search using dynamic programming finds a good boundary along which to merge the source images, and a transition smoothing process using instant image cloning removes merging artifacts, yielding highquality panoramic images. A sequential image stitching procedure requires only keeping the current source image in memory during image stitching, which saves memory and is more suitable to mobile phone implementations."}
{"_id": "15eb646a50ffdf27f168295a0a4e56f449618a28", "title": "Studying Evolution with Self-Replicating Computer Programs", "text": "A critical discussion is presented on the use of self-replicating program systems as tools for the formulation of generalised theories of evolution. Results generated by such systems must be treated with caution, but, if used properly, they can ooer us unprecedented opportunities for empirical , comparative studies. A new system called Cosmos is introduced, which is based upon Ray's Tierra 15]. The major diierence between Cosmos and previous systems is that individual self-replicating programs in Cosmos are modelled (in a very simpliied fashion) on cellular organisms. Previous systems have generally used simpler self-replicators. The hope is that Cosmos may be better able to address questions concerning the sudden emergence of complex multicellular biological organisms during the Cambrian explosion. Results of initial exploratory runs are presented, which are somewhat diierent to those of similar runs on Tierra. These diierences were expected , and indicate the sensitivity of such systems to the precise details of the language in which the self-replicating programs are written. With the strengths and weaknesses of the methodology in mind, some directions for future research with Cosmos are discussed. Within the last decade, computers have become powerful and aaordable enough to enable a number of research groups to study the evolution of life in a new way. Rather than following the traditional approach of trying to capture properties of whole populations in mathematical models, the new approach models a large number of individual self-replicating entities which are competing against each other for resources required for replication. This is achieved by creating a computer which can run a large number of self-replicating programs in parallel 1. 1 In practice, a virtual computer is created (i.e. implemented in software), with parallelism simulated by time-slicing between Tom Ray pioneered this approach with his Tierra system 15, 16]. Since then, a number of other systems have also been developed, including Avida, developed by Chris Adami and Titus Brown 2], Computer Zoo, written by Jakob Skipper 18], and John Koza's system of self-replicating LISP-like programs 8]. Using such a methodology to study evolutionary systems is attractive for a number of reasons. For example, as the self-replicators are being modelled individually rather than as populations, the simulation respects the fact that a gene does not work in isolation. Rather, it is part of a large ensemble of genes which must all work together with some degree of cooperation in order for the individual organism \u2026"}
{"_id": "35a86bc97b3317275d4efb5f9cd0038818984795", "title": "CPLR: Collaborative pairwise learning to rank for personalized recommendation", "text": "Compared with explicit feedback data, implicit feedback data is easier to be collected and more widespread. However, implicit feedback data is also more difficult to be analyzed due to the lack of negative examples. Bayesian personalized ranking (BPR) is a well-known algorithm for personalized recommendation on implicit feedback data due to its high performance. However, it (1) treats all the unobserved feedback the same as negative examples which may be just caused by unseen, (2) treats all the observed feedback the same as positive examples which may be just caused by noisy action, and (3) assumes the preferences of users are independent which is difficult to achieve in reality. To solve all these problems, we propose a novel personalized recommendation algorithm called collaborative pairwise learning to rank (CPLR), which considers the influence between users on the preferences for both items with observed feedback and items without. To take these information into consideration, we try to optimize a generalized AUC instead of the standard AUC used in BPR. CPLR can be seen as a generalized BPR. Besides BPR, several extension algorithms of BPR, like social BPR (SBPR) and group preference based BPR (GBPR), are special cases of CPLR. Extensive experiments demonstrate the promise of our approach in comparison with traditional collaborative filtering methods and state-of-the-art pairwise learning to rank algorithms. Compared with the performance of baseline algorithms on five real-world data sets, the improvements of CPLR are over 17%, 23% and 22% for Pre@5, MAP and NDCG, respectively. \u00a9 2018 Published by Elsevier B.V."}
{"_id": "a8ee35debd996dc37b6b8de22ea3c0941bb88a79", "title": "Bangla license plate reader for metropolitan cities of Bangladesh using template matching", "text": "A simple algorithm for license plate reader for metropolitan cities in Bangladesh has been proposed in this paper. We have made use of a feature of Bangla script called \u201cMatra\u201d which joins the characters of a word together. Since the various words of a standard Bangladeshi license plate also have Matra, we have proposed a method where these words are segmented as single connected components and later recognized through template matching of words. There are also numbers and single letters in the number plate which are segmented and recognized as single characters. We have demonstrated that if we exploit the feature of Matra this way, the complexity of the algorithm is decreased, and the proposed system works smoothly for images taken under various conditions."}
{"_id": "ca7abe84edfd596ee7f065512084bfe256864e83", "title": "A Functional MRI Study of Happy and Sad Emotions in Music with and without Lyrics", "text": "Musical emotions, such as happiness and sadness, have been investigated using instrumental music devoid of linguistic content. However, pop and rock, the most common musical genres, utilize lyrics for conveying emotions. Using participants' self-selected musical excerpts, we studied their behavior and brain responses to elucidate how lyrics interact with musical emotion processing, as reflected by emotion recognition and activation of limbic areas involved in affective experience. We extracted samples from subjects' selections of sad and happy pieces and sorted them according to the presence of lyrics. Acoustic feature analysis showed that music with lyrics differed from music without lyrics in spectral centroid, a feature related to perceptual brightness, whereas sad music with lyrics did not diverge from happy music without lyrics, indicating the role of other factors in emotion classification. Behavioral ratings revealed that happy music without lyrics induced stronger positive emotions than happy music with lyrics. We also acquired functional magnetic resonance imaging data while subjects performed affective tasks regarding the music. First, using ecological and acoustically variable stimuli, we broadened previous findings about the brain processing of musical emotions and of songs versus instrumental music. Additionally, contrasts between sad music with versus without lyrics recruited the parahippocampal gyrus, the amygdala, the claustrum, the putamen, the precentral gyrus, the medial and inferior frontal gyri (including Broca's area), and the auditory cortex, while the reverse contrast produced no activations. Happy music without lyrics activated structures of the limbic system and the right pars opercularis of the inferior frontal gyrus, whereas auditory regions alone responded to happy music with lyrics. These findings point to the role of acoustic cues for the experience of happiness in music and to the importance of lyrics for sad musical emotions."}
{"_id": "a0838a2f4aba21c9e373ec795e00e2f443965d61", "title": "An Approach for Schema Extraction of JSON and Extended JSON Document Collections", "text": "JSON documents are raising as a common format for data representation due to the increasing popularity of NoSQL document-oriented databases. One of the reasons for such a popularity is their ability to handle large volumes of data at the absence of an explicit data schema. However, schema information is sometimes essential for applications during data retrieval, integration and analysis tasks, for example. Given this context, this paper presents an approach that extracts a schema from a JSON or Extended JSON document collection stored in a NoSQL document-oriented database or other document repository. Aggregation operations are considered in order to obtain a schema for each distinct structure in the collection, and a hierarchical data structure is proposed to group these schemas in order to generate a global schema in JSON Schema format. Experiments conducted on actual datasets, like DBPedia and Foursquare, demonstrate that the accuracy of the generated schemas is equivalent or even superior than related work."}
{"_id": "99e536ca8c6c0aac024514348384ccc96b59fed8", "title": "An Experimental Comparison of Min-Cut/Max-Flow Algorithms for Energy Minimization in Vision", "text": "After [15], [31], [19], [8], [25], [5], minimum cut/maximum flow algorithms on graphs emerged as an increasingly useful tool for exact or approximate energy minimization in low-level vision. The combinatorial optimization literature provides many min-cut/max-flow algorithms with different polynomial time complexity. Their practical efficiency, however, has to date been studied mainly outside the scope of computer vision. The goal of this paper is to provide an experimental comparison of the efficiency of min-cut/max flow algorithms for applications in vision. We compare the running times of several standard algorithms, as well as a new algorithm that we have recently developed. The algorithms we study include both Goldberg-Tarjan style \"push-relabel\" methods and algorithms based on Ford-Fulkerson style \"augmenting paths.\" We benchmark these algorithms on a number of typical graphs in the contexts of image restoration, stereo, and segmentation. In many cases, our new algorithm works several times faster than any of the other methods, making near real-time performance possible. An implementation of our max-flow/min-cut algorithm is available upon request for research purposes."}
{"_id": "105f3fd2054cb63223d9ffbda7b6bd5915c6be6b", "title": "A Bayesian Approach to Unsupervised One-Shot Learning of Object Categories", "text": "Learning visual models of object categories notoriously requires thousands of training examples; this is due to the diversity and richness of object appearance which requires models containing hundreds of parameters. We present a method for learning object categories from just a few images ( ). It is based on incorporating \u201cgeneric\u201d knowledge which may be obtained from previously learnt models of unrelated categories. We operate in a variational Bayesian framework: object categories are represented by probabilistic models, and \u201cprior\u201d knowledge is represented as a probability density function on the parameters of these models. The \u201cposterior\u201d model for an object category is obtained by updating the prior in the light of one or more observations. Our ideas are demonstrated on four diverse categories (human faces, airplanes, motorcycles, spotted cats). Initially three categories are learnt from hundreds of training examples, and a \u201cprior\u201d is estimated from these. Then the model of the fourth category is learnt from 1 to 5 training examples, and is used for detecting new exemplars a set of test images."}
{"_id": "6aeb2602361cb5c080da868180176b99260c7a97", "title": "Quotidian resilience: Exploring mechanisms that drive resilience from a perspective of everyday stress and coping", "text": "Resilience is often associated with extreme trauma or overcoming extraordinary odds. This way of thinking about resilience leaves most of the ontogenetic picture a mystery. In the following review we put forth the Everyday Stress Resilience Hypothesis where resilience is analyzed from a systems perspective and seen as a process of regulating everyday life stressors. Successful regulation accumulates into regulatory resilience which emerges during early development from successful coping with the inherent stress in typical interactions. These quotidian stressful events lead to activation of behavioral and physiologic systems. Stress that is effectively resolved in the short run and with reiteration over the long-term increases children's as well as adults' capacity to cope with more intense stressors. Infants, however, lack the regulatory capacities to take on this task by themselves. Therefore, through communicative and regulatory processes during infant-adult interactions, we demonstrate that the roots of regulatory resilience originate in infants' relationship with their caregiver and that maternal sensitivity can help or hinder the growth of resilience."}
{"_id": "3a838b4f69798c120e0217494357bc2178524414", "title": "Histological assessment of the palatal mucosa and greater palatine artery with reference to subepithelial connective tissue grafting", "text": "This study aimed to measure the thickness of the epithelium and lamina propria of the palatal mucosa and to elucidate the location of the greater palatine artery to provide the anatomical basis for subepithelial connective tissue grafting. Thirty-two maxillary specimens, taken from the canine distal area to the first molar distal area, were embedded in paraffin and stained with hematoxylin-eosin. The thickness of the epithelium and lamina propria of the palatal mucosa was measured at three positions on these specimens, starting from 3 mm below the alveolar crest and in 3-mm intervals. The location of the greater palatine artery was evaluated by using image-processing software. The mean epithelial thickness decreased significantly in the posterior teeth; it was 0.41, 0.36, 0.32, and 0.30 mm in the canine, first premolar, second premolar, and first molar distal areas, respectively. The lamina propria was significantly thicker in the canine distal; it was 1.36, 1.08, 1.09, and 1.05 mm, respectively. The mean length from the alveolar crest to the greater palatine artery increased toward the posterior molar; it was 7.76, 9.21, 10.93, and 11.28 mm, respectively. The mean depth from the surface of the palatal mucosa to the greater palatine artery decreased from the canine distal to the first premolar distal but increased again toward the posterior molar; it was 3.97, 3.09, 3.58, and 5.50 mm, respectively. Detailed histological assessments of the lamina propria of the palatal mucosa and the greater palatine artery are expected to provide useful anatomical guidelines for subepithelial connective tissue grafting."}
{"_id": "ec8ec2dfd73cf3667f33595fef84c95c42125945", "title": "Pose-Invariant Face Alignment with a Single CNN", "text": "Face alignment has witnessed substantial progress in the last decade. One of the recent focuses has been aligning a dense 3D face shape to face images with large head poses. The dominant technology used is based on the cascade of regressors, e.g., CNNs, which has shown promising results. Nonetheless, the cascade of CNNs suffers from several drawbacks, e.g., lack of end-to-end training, handcrafted features and slow training speed. To address these issues, we propose a new layer, named visualization layer, which can be integrated into the CNN architecture and enables joint optimization with different loss functions. Extensive evaluation of the proposed method on multiple datasets demonstrates state-of-the-art accuracy, while reducing the training time by more than half compared to the typical cascade of CNNs. In addition, we compare across multiple CNN architectures, all with the visualization layer, to further demonstrate the advantage of its utilization."}
{"_id": "6db5404ee214105caf7cefc0c327c47c3ae7f281", "title": "Persian Text Normalization using Classification Tree and Support Vector Machine", "text": "Text normalization is one of the most important tasks in text processing and text to speech conversion. In this paper, we propose a machine learning method to determine the type of Farsi language non-standard words (NSWs) by only using the structure of these words. Two methods including support vector machines (SVM) and classification and regression trees (CART) were used and evaluated on different training and test sets for NSW type classification in Farsi. The experimental results show that, NSW type classification in Farsi can be efficiently done by using only the structural form of Farsi non-standard words. In addition, the results is compared with a previous work done on normalization using multi-layer perceptron (MLP) neural network and shows that SVM outperforms MLP in both number of efforts and total performance"}
{"_id": "505b0f406a7ea4ec01d338977169ad965151d97a", "title": "Fbufs: A High-Bandwidth Cross-Domain Transfer Facility", "text": "We have designed and implemented a new operating system facility for I/O buffer management and data transferacross protection domain boundaries on shared memory machines. This facility, called fast buffers (fbufs), combines virtual page remapping with shared virtual memory, and exploits locality in I/O traffic to achieve high throughput without compromising protection, security, or modularity. goal is to help deliver the high bandwidth afforded by emerging high-speed networks to user-level processes, both in monolithic and microkernel-based operating systems.This paper outlines the requirements for a cross-domain transfer facility, describes the design of the fbuf mechanism that meets these requirements, and experimentally quantifies the impact of fbufs on network performance."}
{"_id": "a73405038fdc0d8bf986539ef755a80ebd341e97", "title": "Conditional High-Order Boltzmann Machines for Supervised Relation Learning", "text": "Relation learning is a fundamental problem in many vision tasks. Recently, high-order Boltzmann machine and its variants have shown their great potentials in learning various types of data relation in a range of tasks. But most of these models are learned in an unsupervised way, i.e., without using relation class labels, which are not very discriminative for some challenging tasks, e.g., face verification. In this paper, with the goal to perform supervised relation learning, we introduce relation class labels into conventional high-order multiplicative interactions with pairwise input samples, and propose a conditional high-order Boltzmann Machine (CHBM), which can learn to classify the data relation in a binary classification way. To be able to deal with more complex data relation, we develop two improved variants of CHBM: 1) latent CHBM, which jointly performs relation feature learning and classification, by using a set of latent variables to block the pathway from pairwise input samples to output relation labels and 2) gated CHBM, which untangles factors of variation in data relation, by exploiting a set of latent variables to multiplicatively gate the classification of CHBM. To reduce the large number of model parameters generated by the multiplicative interactions, we approximately factorize high-order parameter tensors into multiple matrices. Then, we develop efficient supervised learning algorithms, by first pretraining the models using joint likelihood to provide good parameter initialization, and then finetuning them using conditional likelihood to enhance the discriminant ability. We apply the proposed models to a series of tasks including invariant recognition, face verification, and action similarity labeling. Experimental results demonstrate that by exploiting supervised relation labels, our models can greatly improve the performance."}
{"_id": "5ffcc447c7003b7e7199e18cb0f817821dd2e007", "title": "Rectifying the Bound Document Image Captured by the Camera: A Model Based Approach", "text": "A model based approach for rectifying the camera image of the bound document has been developed, i.e., the surface of the document is represented by a general cylindrical surface. The principle of using the model to unwrap the image is discussed. Practically, the skeleton of each horizontal text line is extracted to help estimate the parameter of the model, and rectify the images. To use the model, only a few priori is required, and no more auxiliary device is necessary. Experiment results are given to demonstrate the feasibility and the stability of the method."}
{"_id": "bd2654a03572786c3f3e3f2cca155b6929737027", "title": "The SAGE project: a storage centric approach for exascale computing: invited paper", "text": "SAGE (Percipient StorAGe for Exascale Data Centric Computing) is a European Commission funded project towards the era of Exascale computing. Its goal is to design and implement a Big Data/Extreme Computing (BDEC) capable infrastructure with associated software stack. The SAGE system follows a storage centric approach as it is capable of storing and processing large data volumes at the Exascale regime.\n SAGE addresses the convergence of Big Data Analysis and HPC in an era of next-generation data centric computing. This convergence is driven by the proliferation of massive data sources, such as large, dispersed scientific instruments and sensors where data needs to be processed, analyzed and integrated into simulations to derive scientific and innovative insights. A first prototype of the SAGE system has been been implemented and installed at the J\u00fclich Supercomputing Center. The SAGE storage system consists of multiple types of storage device technologies in a multi-tier I/O hierarchy, including flash, disk, and non-volatile memory technologies. The main SAGE software component is the Seagate Mero Object Storage that is accessible via the Clovis API and higher level interfaces. The SAGE project also includes scientific applications for the validation of the SAGE concepts.\n The objective of this paper is to present the SAGE project concepts, the prototype of the SAGE platform and discuss the software architecture of the SAGE system."}
{"_id": "be49dbac8e395dba3e8f918924ffe4a55dec34ca", "title": "Outlier Detection with Kernel Density Functions", "text": "Outlier detection has recently become an important problem in many industrial and financial applications. In this paper, a novel unsupervised algorithm for outlier detection with a solid statistical foundation is proposed. First we modify a nonparametric density estimate with a variable kernel to yield a robust local density estimation. Outliers are then detected by comparing the local density of each point to the local density of its neighbors. Our experiments performed on several simulated data sets have demonstrated that the proposed approach can outperform two widely used outlier detection algorithms (LOF and LOCI)."}
{"_id": "8b4062e6e905ef805ec5b980e21711bda26c29d1", "title": "Episodic Training for Domain Generalization", "text": "Domain generalization (DG) is the challenging and topical problem of learning models that generalize to novel testing domain with different statistics than a set of known training domains. The simple approach of aggregating data from all source domains and training a single deep neural network end-to-end on all the data provides a surprisingly strong baseline that surpasses many prior published methods. In this paper we build on this strong baseline by designing an episodic training procedure that trains a single deep network in a way that exposes it to the domain shift that characterises a novel domain at runtime. Specifically, we decompose a deep network into feature extractor and classifier components, and then train each component by simulating it interacting with a partner who is badly tuned for the current domain. This makes both components more robust, ultimately leading to our networks producing state-of-the-art performance on three DG benchmarks. As a demonstration, we consider the pervasive workflow of using an ImageNet trained CNN as a fixed feature extractor for downstream recognition tasks. Using the Visual Decathlon benchmark, we demonstrate that our episodic-DG training improves the performance of such a general purpose feature extractor by explicitly training it for robustness to novel problems. This provides the largest-scale demonstration of heterogeneous DG to date."}
{"_id": "eb2fd1475397b8cc28b4aaac59f71f9c7420f8b4", "title": "Personality Based E-Recruitment System", "text": "Nodes in Mobile Ad Hoc Networks (MANETs) are limited battery powered. That\u2019s why energy efficient routing has become an important optimization criterion in MANETs. The conventional routing protocols do not consider energy of the nodes while selecting routes which leads to early exhaustion of nodes and partitioning of the network. This paper attempts to provide an energy aware routing algorithm. The proposed algorithm finds the transmission energy between the nodes relative to the distance and the performance of the algorithm is analyzed between two metrics Total Transmission energy of a route and Maximum Number of Hops. The proposed algorithm shows efficient energy utilization and increased network lifetime with total transmission energy metric."}
{"_id": "9975d7c94394c9c8af2453ba08432edea7d4c22a", "title": "Translation of Natural Language Query Into Keyword Query Using a RNN Encoder-Decoder", "text": "The number of natural language queries submitted to search engines is increasing as search environments get diversified. However, legacy search engines are still optimized for short keyword queries. Thus, the use of natural language queries at legacy search engines degrades the retrieval performance of the engines. This paper proposes a novel method to translate a natural language query into a keyword query relevant to the natural language query for retrieving better search results without change of the engines. The proposed method formulates the translation as a generation task. That is, the method generates a keyword query from a natural language query by preserving the semantics of the natural language query. A recurrent neural network encoder-decoder architecture is adopted as a generator of keyword queries from natural language queries. In addition, an attention mechanism is also used to cope with long natural language queries."}
{"_id": "2c5dba592d9ade79f4a2f55d928d4fc9fc301cfb", "title": "Harnessing the power of FPGAs using altera's OpenCL compiler", "text": "In recent years, Field-Programmable Gate Arrays have become extremely powerful computational platforms that can efficiently solve many complex problems. The most modern FPGAs comprise effectively millions of programmable elements, signal processing elements and high-speed interfaces, all of which are necessary to deliver a complete solution. The power of FPGAs is unlocked via low-level programming languages such as VHDL and Verilog, which allow designers to explicitly specify the behavior of each programmable element. While these languages provide a means to create highly efficient logic circuits, they are akin to \"assembly language\" programming for modern processors. This is a serious limiting factor for both productivity and the adoption of FPGAs on a wider scale. In this talk, we use the OpenCL language to explore techniques that allow us to program FPGAs at a level of abstraction closer to traditional software-centric approaches. OpenCL is an industry standard parallel language based on 'C' that offers numerous advantages that enable designers to take full advantage of the capabilities offered by FPGAs, while providing a high-level design entry language that is familiar to a wide range of programmers.\n To demonstrate the advantages a high-level programming language can offer, we demonstrate how to use Altera's OpenCL Compiler on a set of case studies. The first application is single-precision general-element matrix multiplication (SGEMM). It is an example of a highly-parallel algorithm for which an efficient circuit structures are well known. We show how this application can be implemented in OpenCL and how the high-level description can be optimized to generate the most efficient circuit in hardware. The second application is a Fast Fourier Transform (FFT), which is a classical FPGA benchmark that is known to have a good implementation on FPGAs. We show how we can implement the FFT algorithm, while exploring the many different possible architectural choices that lead to an optimized implementation for a given FPGA. Finally, we discuss a Monte-Carlo Black-Scholes simulation, which demonstrates the computational power of FPGAs. We describe how a random number generator in conjunction with computationally intensive operations can be harnessed on an FPGA to generate a high-speed benchmark, which also consumes far less power than the same benchmark running on a comparable GPU. We conclude the tutorial with a set of live demonstrations.\n Through this tutorial we show the benefits high-level languages offer for system-level design and productivity. In particular, Altera's OpenCL compiler is shown to enable high-performance application design that fully utilizes capabilities of modern FPGAs."}
{"_id": "e70bd45d7bbf165ccf30954a741cd95d1b67eabf", "title": "A Portable SDR Non-Orthogonal Multiple Access Testbed for 5G Networks", "text": "Non-orthogonal multiple access (NOMA) is envisioned to be one of the promising radio access techniques for the fifth generation (5G) mobile networks. In this paper, a portable NOMA testbed based on software defined radio (SDR) is developed by transporting our NOMA system to mini personal computers (PCs). Moreover, the NOMA testbed has been enhanced from 5 MHz bandwidth to 10 MHz bandwidth. As the computation complexity grows higher with the increase of system bandwidth, the portable NOMA testbed is not competent to fully demonstrate the performance of our NOMA system due to the limitation of mini PC. For a better understanding of the NOMA testbed, the system architecture and scenario are introduced in brief. Then, the implementation of NOMA transceiver and the protocol stack are described separately. Finally, a series of experiments are carried out to evaluate its performance loss compared with the original NOMA system based on desktops. The experimental results indicate that the performance of processors may be a short slab for the development of portable SDR-based testbeds towards 5G networks."}
{"_id": "9ac8f58fcbe49202515ef915b7cec6083c4b6ef6", "title": "Lower Bounds for Finding Stationary Points of Non-Convex , Smooth High-Dimensional Functions \u2217", "text": "We establish lower bounds on the complexity of finding -stationary points of smooth, non-convex, high-dimensional functions. For functions with Lipschitz continuous pth derivative, we show that all algorithms\u2014even randomized algorithms observing arbitrarily high-order derivatives\u2014have worst-case iteration count \u03a9( \u2212(p+1)/p). Our results imply that the O( \u22122) convergence rate of gradient descent is unimprovable without additional assumptions (e.g. Lipschitz Hessian), and that cubic regularization of Newton\u2019s method and pth order regularization in general are similarly optimal. Additionally, we prove that deterministic first-order methods, even applied to arbitrarily smooth functions, cannot achieve convergence rates better than O( \u22128/5), which is within \u22121/15 of the recently established \u00d5( \u22125/3) rate for accelerated gradient descent."}
{"_id": "23332a035751a2ba468f42501ce55753b6c07079", "title": "Bayesian Neural Word Embedding", "text": "Recently, several works in the domain of natural language processing presented successful methods for word embedding. Among them, the Skip-Gram with negative sampling, known also as word2vec, advanced the state-of-the-art of various linguistics tasks. In this paper, we propose a scalable Bayesian neural word embedding algorithm. The algorithm relies on a Variational Bayes solution for the SkipGram objective and a detailed step by step description is provided. We present experimental results that demonstrate the performance of the proposed algorithm for word analogy and similarity tasks on six different datasets and show it is competitive with the original Skip-Gram method."}
{"_id": "dbcade6e8f05655e55e0f7b9a58088855814cab0", "title": "Exploring the factors affecting MOOC retention: A survey study", "text": "Massive Open Online Courses (MOOCs) hold the potential to open up educational opportunities to a global audience. However, evidence suggests that only a small proportion of MOOC participants go on to complete their courses and relatively little is understood about the MOOC design and implementation factors that influence retention. This paper reports a survey study of 379 participants enrolled at university in Cairo who were encouraged to take a MOOC of their own choice as part of their development. 122 participants (32.2%) went onto to complete an entire course. There were no significant differences in completion rates by gender, level of study (undergraduate or postgraduate) or MOOC platform. A post-MOOC survey of students' perceptions found that MOOC Course Content was a significant predictor of MOOC retention, with the relationship mediated by the effect of content on the Perceived Effectiveness of the course. Interaction with the instructor of the MOOC was also found to be significant predictor of MOOC retention. Overall these constructs explained 79% of the variance in MOOC retention. \u00a9 2016 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)."}
{"_id": "7fcd60dd4feceb576f44d138c94a04644eeb5537", "title": "Corporate Sustainability : First Evidence on Materiality", "text": "An increasing number of companies make sustainability investments, and an increasing number of investors integrate sustainability performance data in their capital allocation decisions. To date however, the prior academic literature has not distinguished between investments in material versus immaterial sustainability issues. We develop a novel dataset by hand-mapping data on sustainability investments classified as material for each industry into firm-specific performance data on a variety of sustainability investments. This allows us to present new evidence on the value implications of sustainability investments. Using calendar-time portfolio stock return regressions we find that firms with good performance on material sustainability issues significantly outperform firms with poor performance on these issues, suggesting that investments in sustainability issues are shareholder-value enhancing. Further, firms with good performance on sustainability issues not classified as material do not underperform firms with poor performance on these same issues, suggesting investments in sustainability issues are at a minimum not value-destroying. Finally, firms with good performance on material issues and concurrently poor performance on immaterial issues perform the best. These results speak to the efficiency of firms\u2019 sustainability investments, and also have implications for asset managers who have committed to the integration of sustainability factors in their capital allocation decisions. \uf02a Mozaffar Khan is a visiting Associate Professor at Harvard Business School. George Serafeim is the Jakurski Family Associate Professor of Business Administration at Harvard Business School. Aaron Yoon is a doctoral student at Harvard Business School. We are grateful for comments from seminar participants at National University of Singapore and Harvard Business School. We are grateful for financial support from the Division of Faculty Research and Development at Harvard Business School. George Serafeim has served on the Standards Council of SASB. Contact emails: mkhan@hbs.edu; gserafeim@hbs.edu; ayoon@hbs.edu."}
{"_id": "ca38cc7ebc85034d3dcbc16f2329bd6c4ae010a1", "title": "VBaaS: VNF Benchmark-as-a-Service", "text": "When rolling out Network Function Virtualization (NFV) services, resource monitoring becomes a critical task subject to different cost-accuracy tradeoffs depending on whether continuous monitoring or more static infrastructure resource views are taken. In this context, we propose Virtualized Network Functions (VNF) Benchmark-as-a-Service (VBaaS) to enable not only run-time resource evaluation but also test-before-deploy opportunities for VNFs and NFV Infrastructures. We describe the motivation behind VBaaS and its main value proposition for a number of use cases around the orchestration tasks of VNF Forwarding Graphs We present the main components of VBaaS along their system interactions and interfaces, discussing the main benefits of adopting VBaaS and open research issues. Addressing the identified challenges and finalizing our proof of concept VBaaS are our main ongoing work activities."}
{"_id": "cf226f71615d4f5e780e64f4cc7e370add110dfb", "title": "AN ACCEPTANCE MODEL FOR USEFUL AND FUN INFORMATION SYSTEMS", "text": "Investigating the factors associated with user acceptance of new software systems has been an important research stream in the field of information systems for many years. The technology acceptance model has long been used to examine the acceptance of utilitarian systems. Recently, it has been used to examine recreational or pleasure-oriented systems. Many examples exist of software that, depending on the context of use, can be used for productive and pleasurable interaction. This paper examines the determinants of use of one such \u201cdual\u201d system. A survey of users of a dual system was conducted. Results show that perceived usefulness is more important in determining intention to use than perceived enjoyment, and that perceived ease of use has no direct impact on intention, but still has a strong indirect effect."}
{"_id": "ab4241a105268337fefeea6f3eb66c7b4b77cf09", "title": "Interactive Neural Translation Assistance for Human Translators", "text": "We present the first user study on neural interactive translation prediction. Our neural translation aid was built into existing online translation software. Depending on the prefix already typed by the user the system suggests a list of continuations. We assessed the impact of the system on human translators in a user study with both professional and non-professional translators. The analysis was done with respect to translation speed and translation accuracy as assessed by human judges. We find that our neural translation aid enables relatively fast translations and does not compromise on quality."}
{"_id": "29df23cce28f3444cd9cc7d1b7033aec68bc3aed", "title": "Accurate and robust localization of duplicated region in copy\u2013move image forgery", "text": "Copy\u2013move image forgery detection has recently become a very active research topic in blind image forensics. In copy\u2013move image forgery, a region from some image location is copied and pasted to a different location of the same image. Typically, post-processing is applied to better hide the forgery. Using keypoint-based features, such as SIFT features, for detecting copy\u2013move image forgeries has produced promising results. The main idea is detecting duplicated regions in an image by exploiting the similarity between keypoint-based features in these regions. In this paper, we have adopted keypoint-based features for copy\u2013move image forgery detection; however, our emphasis is on accurate and robust localization of duplicated regions. In this context, we are interested in estimating the transformation (e.g., affine) between the copied and pasted regions more accurately as well as extracting these regions as robustly by reducing the number of false positives and negatives. To address these issues, we propose using a more powerful set of keypoint-based features, called MIFT, which shares the properties of SIFT features but also are invariant to mirror reflection transformations. Moreover, we propose refining the affine transformation using an iterative scheme which improves the estimation of the affine transformation parameters by incrementally finding additional keypoint matches. To reduce false positives and negatives when extracting the copied and pasted regions, we propose using \u201cdense\u201d MIFT features, instead of standard pixel correlation, along with hysteresis thresholding and morphological operations. The proposed approach has been evaluated and compared with competitive approaches through a comprehensive set of experiments using a large dataset of real images (i.e., CASIA v2.0). Our results indicate that our method can detect duplicated regions in copy\u2013move image forgery with higher accuracy, especially when the size of the duplicated region is small."}
{"_id": "00bbba51721dee6e0b1cd2a5b614ab46f33abab6", "title": "Using Information Content to Evaluate Semantic Similarity in a Taxonomy", "text": "1. Please explain how this manuscript advances this field of research and/or contributes something new to the literature. They tackle the problem of semantic similarity based on the information content the concepts share. They combine the taxonomic structure with the empirical problem estimates which provides better way of adapting knowledge to multiple context concepts. They also try to find a solution to the multiple inheritance problems."}
{"_id": "011ffa226a58b6711ce509609b8336911325b0e0", "title": "Semantic Similarity Based on Corpus Statistics and Lexical Taxonomy", "text": "This paper presents a new approach for measuring semantic similarity/distance between words and concepts. It combines a lexical taxonomy structure with corpus statistical information so that the semantic distance between nodes in the semantic space constructed by the taxonomy can be better quantified with the computational evidence derived from a distributional analysis of corpus data. Specifically, the proposed measure is a combined approach that inherits the edge-based approach of the edge counting scheme, which is then enhanced by the node-based approach of the information content calculation. When tested on a common data set of word pair similarity ratings, the proposed approach outperforms other computational models. It gives the highest correlation value (r = 0.828) with a benchmark based on human similarity judgements, whereas an upper bound (r = 0.885) is observed when human subjects replicate the same task."}
{"_id": "3cd9fd8a36c8feb74bb20ae25817edb9c6a0518c", "title": "Automatic Word Sense Discrimination", "text": "This paper presents context-group discrimination, a disambiguation algorithm based on clustering. Senses are interpreted as groups (or clusters) of similar contexts of the ambiguous word. Words, contexts, and senses are represented in Word Space, a high-dimensional, real-valued space in which closeness corresponds to semantic similarity. Similarity in Word Space is based on second-order co-occurrence: two tokens (or contexts) of the ambiguous word are assigned to the same sense cluster if the words they co-occur with in turn occur with similar words in a training corpus. The algorithm is automatic and unsupervised in both training and application: senses are induced from a corpus without labeled training instances or other external knowledge sources. The paper demonstrates good performance of context-group discrimination for a sample of natural and artificial ambiguous words."}
{"_id": "59407446503d49a8cf5f5643b17502835b62f139", "title": "Using WordNet to Disambiguate Word Senses for Text Retrieval", "text": "This paper describes an automatic indexing procedure that uses the \u201cIS-A\u201d relations contained within WordNet and the set of nouns contained in a text to select a sense for each plysemous noun in the text. The result of the indexing procedure is a vector in which some of the terms represent word senses instead of word stems. Retrieval experiments comparing the effectivenss of these sense-based vectors vs. stem-based vectors show the stem-based vectors to be superior overall, although the sense-based vectors do improve the performance of some queries. The overall degradation is due in large part to the difficulty of disambiguating senses in short query statements. An analysis of these results suggests two conclusions: the IS-A links define a generalization/specialization hierarchy that is not sufficient to reliably select the correct sense of a noun from the set of fine sense distinctions in WordNet; and missing correct matches because of incorrect sense resolution has a much more deleterious effect on retrieval performance than does making spurious matches."}
{"_id": "4869b34755f76c203d4a17448ed13043d455a6ed", "title": "Are Slice-Based Cohesion Metrics Actually Useful in Effort-Aware Post-Release Fault-Proneness Prediction? An Empirical Study", "text": "Background. Slice-based cohesion metrics leverage program slices with respect to the output variables of a module to quantify the strength of functional relatedness of the elements within the module. Although slice-based cohesion metrics have been proposed for many years, few empirical studies have been conducted to examine their actual usefulness in predicting fault-proneness. Objective. We aim to provide an in-depth understanding of the ability of slice-based cohesion metrics in effort-aware post-release fault-proneness prediction, i.e. their effectiveness in helping practitioners find post-release faults when taking into account the effort needed to test or inspect the code. Method. We use the most commonly used code and process metrics, including size, structural complexity, Halstead's software science, and code churn metrics, as the baseline metrics. First, we employ principal component analysis to analyze the relationships between slice-based cohesion metrics and the baseline metrics. Then, we use univariate prediction models to investigate the correlations between slice-based cohesion metrics and post-release fault-proneness. Finally, we build multivariate prediction models to examine the effectiveness of slice-based cohesion metrics in effort-aware post-release fault-proneness prediction when used alone or used together with the baseline code and process metrics. Results. Based on open-source software systems, our results show that: 1) slice-based cohesion metrics are not redundant with respect to the baseline code and process metrics; 2) most slice-based cohesion metrics are significantly negatively related to post-release fault-proneness; 3) slice-based cohesion metrics in general do not outperform the baseline metrics when predicting post-release fault-proneness; and 4) when used with the baseline metrics together, however, slice-based cohesion metrics can produce a statistically significant and practically important improvement of the effectiveness in effort-aware post-release fault-proneness prediction. Conclusion. Slice-based cohesion metrics are complementary to the most commonly used code and process metrics and are of practical value in the context of effort-aware post-release fault-proneness prediction."}
{"_id": "14365e40f97f542f1377f897cec653cbb2b565c4", "title": "Automatic Spelling Correction for Resource-Scarce Languages using Deep Learning", "text": "Spelling correction is a well-known task in Natural Language Processing (NLP). Automatic spelling correction is important for many NLP applications like web search engines, text summarization, sentiment analysis etc. Most approaches use parallel data of noisy and correct word mappings from different sources as training data for automatic spelling correction. Indic languages are resourcescarce and do not have such parallel data due to low volume of queries and nonexistence of such prior implementations. In this paper, we show how to build an automatic spelling corrector for resourcescarce languages. We propose a sequenceto-sequence deep learning model which trains end-to-end. We perform experiments on synthetic datasets created for Indic languages, Hindi and Telugu, by incorporating the spelling mistakes committed at character level. A comparative evaluation shows that our model is competitive with the existing spell checking and correction techniques for Indic languages."}
{"_id": "7683e11deaf14c2b94c709c5ef23318986643aea", "title": "Spatial-Aware Object-Level Saliency Prediction by Learning Graphlet Hierarchies", "text": "To fill the semantic gap between the predictive power of computational saliency models and human behavior, this paper proposes to predict where people look at using spatial-aware object-level cues. While object-level saliency has been recently suggested by psychophysics experiments and shown effective with a few computational models, the spatial relationship between the objects has not yet been explored in this context. We in this work for the first time explicitly model such spatial relationship, as well as leveraging semantic information of an image to enhance object-level saliency modeling. The core computational module is a graphlet-based (i.e., graphlets are moderate-sized connected subgraphs) deep architecture, which hierarchically learns a saliency map from raw image pixels to object-level graphlets (oGLs) and further to spatial-level graphlets (sGLs). Eye tracking data are also used to leverage human experience in saliency prediction. Experimental results demonstrate that the proposed oGLs and sGLs well capture object-level and spatial-level cues relating to saliency, and the resulting saliency model performs competitively compared with the state-of-the-art."}
{"_id": "feb58a691ee0d26c8173bb9c9826355b9d08e093", "title": "An interleaving and load sharing method for multiphase LLC converters", "text": "Interleaving frequency-controlled LLC resonant converters will encounter load-sharing problem due to the tolerance of the resonant components. In this paper, full-wave and half-wave switch-controlled capacitors (SCCs) are used in LLC stages to solve this problem. By using resonant capacitance as a control variable, the output current can be modulated even when all the LLC stages are synchronized at the same switching frequency. A design procedure is developed. A 600W prototype is built to verify the feasibility."}
{"_id": "fc22886c3d2c485e87009aac5f89f55ece10d0ed", "title": "Transforming Models with ATL 1", "text": "This paper presents ATL (ATLAS Transformation Language): a hybrid model transformation language that allows both declarative and imperative constructs to be used in transformation definitions. The paper describes the language syntax by using examples. Language semantics is described in pseudo-code and various optimizations of transformation executions are discussed. ATL is supported by a set of development tools such as an editor, a compiler, a virtual machine, and a debugger. A case study shows the applicability of the language constructs. Alternative ways for implementing the case study are outlined. ATL language features are classified according to a model that captures the commonalities and variabilities of the model transformations domain."}
{"_id": "aea2bdcaeef3ec16b801c989c515419c9d961bb4", "title": "Assessing It/Business Alignment", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the \u201cContent\u201d) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content."}
{"_id": "52f4ad8568812e06bf0ff9766bc86909c52ab030", "title": "Development of next-generation system-on-package (SOP) technology based on silicon carriers with fine-pitch chip interconnection", "text": "of next-generation system-on-package (SOP) technology based on silicon carriers with fine-pitch chip interconnection J. U. Knickerbocker P. S. Andry L. P. Buchwalter A. Deutsch R. R. Horton K. A. Jenkins Y. H. Kwark G. McVicker C. S. Patel R. J. Polastre C. Schuster A. Sharma S. M. Sri-Jayantha C. W. Surovic C. K. Tsang B. C. Webb S. L. Wright S. R. McKnight E. J. Sprogis B. Dang System-on-Package (SOP) technology based on silicon carriers has the potential to provide modular design flexibility and highperformance integration of heterogeneous chip technologies and to support robust chip manufacturing with high-yield/low-cost chips for a wide range of twoand three-dimensional product applications. Key technology enablers include silicon through-vias, high-density wiring, high-I/O chip interconnection, and supporting test and assembly technologies. The silicon through-vias are a key feature permitting efficient area array signal, power, and ground interconnection through these thinned silicon packages. Highdensity wiring and high-density chip I/O interconnection can enable tight integration of heterogeneous chip technologies which approximate the performance of an integrated system-on-chip with a \u2018\u2018virtual chip\u2019\u2019 using the silicon package for integration. Silicon carrier fabrication leverages existing manufacturing capability and mid-UV lithography to provide very dense package wiring following CMOS back-end-of-line design rules. Further, the thermal expansion of the silicon carrier package matches the chip, which helps maintain reliability even as the high-density chip microbump interconnections scale to smaller size. In addition to heterogeneous chip integration, SOP products may leverage the integration of passive components, active devices, and electrooptic structures to enhance system-level performance while also maintaining functional test capability and known good chips when needed. This paper describes the technical challenges and recent progress made in the development of silicon carrier technology for potential new applications."}
{"_id": "dd131b9105c16e16a8711faa80f232de01df02c1", "title": "Optical Character Recognition from Text Image", "text": "Optical Character Recognition (OCR) is a system that provides a full alphanumeric recognition of printed or handwritten characters by simply scanning the text image. OCR system interprets the printed or handwritten characters image and converts it into corresponding editable text document. The text image is divided into regions by isolating each line, then individual characters with spaces. After character extraction, the texture and topological features like corner points, features of different regions, ratio of character area and convex area of all characters of text image are calculated. Previously features of each uppercase and lowercase letter, digit, and symbols are stored as a template. Based on the texture and topological features, the system recognizes the exact character using feature matching between the extracted character and the template of all characters as a measure of similarity."}
{"_id": "65e1ada2360b42368a5f9f5e40ff436051c6fa84", "title": "Pomegranate: fast and flexible probabilistic modeling in python", "text": "We present pomegranate, an open source machine learning package for probabilistic modeling in Python. Probabilistic modeling encompasses a wide range of methods that explicitly describe uncertainty using probability distributions. Three widely used probabilistic models implemented in pomegranate are general mixture models, hidden Markov models, and Bayesian networks. A primary focus of pomegranate is to abstract away the complexities of training models from their definition. This allows users to focus on specifying the correct model for their application instead of being limited by their understanding of the underlying algorithms. An aspect of this focus involves the collection of additive sufficient statistics from data sets as a strategy for training models. This approach trivially enables many useful learning strategies, such as out-of-core learning, minibatch learning, and semi-supervised learning, without requiring the user to consider how to partition data or modify the algorithms to handle these tasks themselves. pomegranate is written in Cython to speed up calculations and releases the global interpreter lock to allow for built-in multithreaded parallelism, making it competitive with\u2014or outperform\u2014other implementations of similar algorithms. This paper presents an overview of the design choices in pomegranate, and how they have enabled complex features to be supported by simple code. The code is available at https://github.com/jmschrei/pomegranate"}
{"_id": "4c92026d57ea1c81301723bd4bcf45d645efe517", "title": "Entropic Brain-computer Interfaces - Using fNIRS and EEG to Measure Attentional States in a Bayesian Framework", "text": "Implicit Brain-Computer Interfaces (BCI) adapt system settings subtly based on real time measures of brain activation without the user\u2019s explicit awareness. For example, measures of the user\u2019s cognitive workload might drive a system that alters the timing of notifications in order to minimize user interruption. Here, we consider new avenues for implicit BCI based on recent discoveries in cognitive neuroscience and conduct a series of experiments using BCI\u2019s principal non-invasive brain sensors, fNIRS and EEG. We show how Bayesian and systems neuroscience formulations explain the difference in performance of machine learning algorithms trained on brain data in different conditions. These new formulations posit that the brain aims to minimize its long-term surprisal of sensory data and organizes its calculations on two anti-correlated networks. We consider how to use real-time input that portrays a user along these dimensions in designing Bidirectional BCIs, which are Implicit BCIs that aim to optimize the user\u2019s state by modulating computer output based on feedback from a brain monitor. We introduce Entropic Brain-Computer Interfacing as a type of Bidirectional BCI which uses physiological measurements of information theoretical dimensions of the user\u2019s state to evaluate the digital flow of information to the user\u2019s brain, tweaking this output in a feedback loop to the user\u2019s benefit."}
{"_id": "97a611717ddcb13a6fb5f19ebbc96dd005ed4887", "title": "3D face recognition using local binary patterns", "text": "It is well recognized that expressions can significantly change facial geometry that results in a severe problem for robust 3D face recognition. So it is crucial for many applications that how to extract expression-robust features to describe 3D faces. In this paper, we develop a novel 3D face recognition algorithm using Local Binary Pattern (LBP) under expression varieties, which is an extension of the LBP operator widely used in ordinary facial analysis. First, to depict the human face more accurately and reduce the effect of facial local distortion for face recognition, a special feature-based 3D face division scheme is proposed. Then, the LBP representation framework for 3D faces is described, and the facial depth and normal information are extracted and encoded by LBP, to reduce the expression effect. For each face region, the statistical histogram is utilized to summarize the facial details, and accordingly three matching strategies are presented to address the recognition task. Finally, the proposed 3D face recognition algorithm is tested on BJUT-3D and FRGC v2.0 databases, achieves promising results, and concludes that it is feasible and valid to apply the LBP representation framework on 3D face recognition. & 2012 Elsevier B.V. All rights reserved."}
{"_id": "34a534185687eae7e16399ffa9e3769efd3942b3", "title": "Server Siblings: Identifying Shared IPv4/IPv6 Infrastructure Via Active Fingerprinting", "text": "We present, validate, and apply an active measurement technique that ascertains whether candidate IPv4 and IPv6 server addresses are \u201csiblings,\u201d i.e., assigned to the same physical machine. In contrast to prior efforts limited to passive monitoring, opportunistic measurements, or end-client populations, we propose an active methodology that generalizes to all TCP-reachable devices, including servers. Our method extends prior device fingerprinting techniques to improve their feasibility in modern environments, and uses them to support measurement-based detection of sibling interfaces. We validate our technique against a diverse set of 65 web servers with known sibling addresses and find it to be over 97% accurate with 99% precision. Finally, we apply the technique to characterize the top \u223c6,400 Alexa IPv6-capable web domains, and discover that a DNS name in common does not imply that the corresponding IPv4 and IPv6 addresses are on the same machine, network, or even autonomous system. Understanding sibling and non-sibling relationships gives insight not only into IPv6 deployment and evolution, but also helps characterize the potential for correlated failures and susceptibility to certain attacks."}
{"_id": "3d7db8b339c83d50d635cee3c6931da5324a4799", "title": "Syntax Parsing: Implementation Using Grammar-Rules for English Language", "text": "Syntactic parsing deals with syntactic structure of a sentence. The word 'syntax' refers to the grammatical arrangement of words in a sentence and their relationship with each other. The objective of syntactic analysis is to find syntactic structure of a sentence which is usually depicted as a tree. Identifying the syntactic structure is useful in determining the meaning of a sentence. Natural language processing is an arena of computer science and linguistics, concerned with the dealings amongst computers and human languages. It processes the data through lexical analysis, Syntax analysis, Semantic analysis, Discourse processing, Pragmatic analysis. This paper gives various parsing methods. The algorithm in this paper splits the English sentences into parts using POS tagger, It identifies the type of sentence (Facts, active, passive etc.) and then parses these sentences using grammar rules of Natural language. The algorithm has been tested on real sentences of English and it accomplished an accuracy of 81%."}
{"_id": "626449886eb1e635178ff1c2f9c7b1ccea7cc48d", "title": "English Language Learners and Academic Achievement : Revisiting the Threshold Hypothesis", "text": "This nonexperimental study explored the predictive strength of English proficiency levels on academic achievement of middle school students in a sample of 17,470 native English-speaking (NES) students, 558 English language learners (current ELLs), and 500 redesignated fluent English proficient students (former ELLs). Results of multilevel analyses indicated that after controlling for relevant studentand school-level characteristics, former ELLs significantly outperformed current ELL and NES students in reading (effect sizes: 1.07 and 0.52) and mathematics (effect sizes: 0.86 and 0.42). The results support Cummins\u2019s (1979, 2000) lower level threshold hypothesis predicting that upon reaching adequate proficiency in the language of schooling and testing, ELLs would no longer experience academic disadvantages. Refinements for the theory and directions for future research are discussed."}
{"_id": "d7ea82e9fba798c450b17e88c14da12a14b6345c", "title": "LSCitter: building social machines by augmenting existing social networks with interaction models", "text": "We present LSCitter, an implemented framework for supporting human interaction on social networks with formal models of interaction, designed as a generic tool for creating social machines on existing infrastructure. Interaction models can be used to choreograph distributed systems, providing points of coordination and communication between multiple interacting actors. While existing social networks specify how interactions happen---who messages go to and when, the effects of carrying out actions---these are typically implicit, opaque and non user-editable. Treating interaction models as first class objects allows the creation of electronic institutions, on which users can then choose the kinds of interaction they wish to engage in, with protocols which are explicit, visible and modifiable. However, there is typically a cost to users to engage with these institutions. In this paper we introduce the notion of \"shadow institutions\", where actions on existing social networks are mapped onto formal interaction protocols, allowing participants access to computational intelligence in a seamless, zero-cost manner to carry out computation and store information."}
{"_id": "3ca91c2fa80a4e716c80ca8d4587decb94212065", "title": "PPG-based methods for non invasive and continuous blood pressure measurement: an overview and development issues in body sensor networks", "text": "Non invasive and continuous measurement of blood pressure can enable an effective 24/7 monitoring of hypertensive patients to timely prevent cardiovascular problems and precisely regulate anti-hypertension cures. Unfortunately, to date, blood pressure can be only measured through either cuff-based or invasive instruments that cannot be continuously used due to their characteristics. This paper proposes an overview of techniques and approaches based on the photoplethysmographic (PPG) signal for non invasive and continuous measurement of blood pressure. In particular, the PPG signal can be easily acquired from an optical sensor applied on the epidermis and used, alone or integrated with the ECG signal, to estimate the blood pressure. On the basis of such methods new instruments and sensor-based systems can be developed and integrated with computer-based health care systems that aim at supporting continuous and remote monitoring of assisted livings."}
{"_id": "8a0b6953bf9e4de798c04b40b04ee6d63af60e7b", "title": "The See-Through System: From implementation to test-drive", "text": "Cooperative awareness in vehicular networks is probably the killer application for vehicle-to-vehicle (V2V) communications that cannot be matched by infrastructure-based alternatives even when disregarding communication costs. New and improved driver assistance systems can be introduced by extending their reach to sensors residing in neighboring vehicles, such as windshield-installed cameras. In previous work, we defined theoretical foundations for a driver assistance system that leverages on V2V communication and windshield-installed cameras to transform vision-obstructing vehicles into transparent tubular objects. We now present an implementation of the actual See-Through System (STS), where we combine the communication aspects with the control and augmented reality components of the system. We present a validation methodology and test the system with multiple vehicles on a closed road segment. This evaluation shows that the STS is able to increase the visibility of drivers intending to overtake, thus increasing the safety of such critical maneuvers. It also shows that Dedicated Short Range Communication (DSRC) provides the required latency for this delay-critical inter-vehicle communication, which could hardly be guaranteed with infrastructure-based communication technologies."}
{"_id": "5752e1470520c0dc7c642af1e6b83392012087a6", "title": "MediaPaaS: A Cloud-Based Media Processing Platform for Elastic Live Broadcasting", "text": "Mobility is changing the way of how people consume live media content. By staying always connected with the Internet from various mobile devices, people expect to have enhanced TV viewing experience from anywhere on any device. Therefore, live broadcasting needs to be widely accessible and customizable, instead of being passive content only on TV. In this paper we present a cloud-based media processing platform, called MediaPaaS, for enabling elastic live broadcasting in the cloud. As an ecosystem-oriented solution for content providers, we outsource complex media processing from both content providers and terminal devices to the cloud. A distributed media processing model is proposed to enable dynamic pipeline composition and cross-pipeline task sharing in the cloud for flexible live content processing. Also, a prediction-based task scheduling algorithm is presented to minimize cloud resource usage without affecting quality of streams. The MediaPaaS platform allows third-party application developers to extend its capability to enable certain customization for running live channels. To our knowledge, this paper is the first work to openly discuss the detailed design issues of a cloud-based platform for elastic live broadcasting."}
{"_id": "1d8fa2649fe7c600d3a745126b8b05233301e8e3", "title": "The UJI librarian robot", "text": "This paper describes the UJI Librarian Robot, a mobile manipulator that is able to autonomously locate a book in an ordinary library, and grasp it from a bookshelf, by using eye-in-hand stereo vision and force sensing. The robot is only provided with the book code, a library map and some knowledge about its logical structure and takes advantage of the spatio-temporal constraints and regularities of the environment by applying disparate techniques such as stereo vision, visual tracking, probabilistic matching, motion estimation, multisensor-based grasping, visual servoing and hybrid control, in such a way that it exhibits a robust and dependable performance. The system has been tested, and experimental results show how it is able to robustly locate and grasp a book in a reasonable time without human intervention."}
{"_id": "79eba188ef95c84f299341cf694063bb49c91618", "title": "Smart Supervisory Control for Optimized Power Management System of hybrid Micro-Grid", "text": "Micro grid have been accepted concept widely for the better interconnection of distributed generators (DGs). Corresponding to the conventional power system ac microgrid have been proposed, particularly increasing the use of renewable energy sources generate dc power which is need a dc link for the purpose of grid connection and as a result of increasing modern dc loads. Dc micro grid has been recently emerged for their benefits in terms of efficiency cost and no of conversion stages.. During the islanding operation of the hybrid ac/dc microgrid, the IC is intended to take the role of supplier to one microgrid and at the same time acts as a load to the other microgrid and the power management system should be able to share the power demand between the exiting ac and dc sources in both the microgrids. This paper considers the power flow control and management issues amongst multiple sources distributed throughout both ac and dc microgrids. The paper proposes a decentralized power sharing method in order to eliminate the need for communication between DGs or microgrids. The performance of the proposed power control strategy is validated for different operating conditions, using MATLAB/SIMULATION environment. Keyword: Mixed integer linear programming, hybrid ac/dc microgrid, interlinking ac/dc converter, power management. Introduction Nowadays, electrical grids are more distributed, intelligent and flexible. They are not only driven by the growing environmental concern and the energy security, but also by the ornamenting of the electricity market. Over 100 years the three phase AC power systems existing due to its different operating voltage levels and over long distance. Newly more renewable power conversion systems are connected in ac distribution systems due to environmental issues caused by fueled power plants. Nowadays, more DC loads like LED and Electric vehicles are connected to AC power systems to save energy and to reduce the pollution caused by the fossil fueled power plants. The rising rate of consumption of nuclear and fossil fuels, and the community demand for reducing pollutant emission in electricity generation field are the most significant reasons for worldwide attention to the renewable energy resources. In generally micro grids are defined as a cluster of loads, distributed energy sources, and storage devices. It is accepted that for excellent operation of the micro-grid, a PMS is essential to manage power flow in the micro-grid. It is noteworthy that the power flow means the determination of the output of the electricity generation facilities to meet the demanded power. There are two general approaches to develop PMSs: 1) rule based and 2) optimization-based. In contrast, the latter approach supervises power flow in a micro-grid by minimizing a cost function, which is derived based on performance expectations of the micro-grid, and considering some operational constraints. Recently, a robust PMS for a grid-connected system is presented in [1], in which uncertainty in the generation prediction is considered in the design procedure. On the other hand, hybrid ac/dc micro-grid is a new concept, which decouples dc sources with dc loads and ac sources with ac loads, while power is exchanged between both sides using a bidirectional converter/inverter [2], [3]. 
In [4]-[6], a droop-based controller is introduced to manage power sharing between the ac and dc micro-grids. In this paper, the ac and dc micro-grids are treated as two separate entities with individual droop representations, where the information from these two droop characteristics is merged to decide the amount of power to exchange between the microgrids. In [7], a hybrid ac/dc micro-grid consisting a WG as an ac source and a PV array as a dc source is International Journal of Applied Engineering Research ISSN 0973-4562 Volume 11, Number 6 (2016) pp 3980-3986 \u00a9 Research India Publications. http://www.ripublication.com 3981 presented, where a rule-based system is proposed to manage the power flow in the hybrid ac/dcmicro-grid. In [8] and [9], the amount of power which should be exchanged between the micro-grids is determined through arule-based management system with four predefined operating modes. Finally, a rulebased PMS for a hybrid ac/dc microgrid is presented in [10], where more distinct operation modes are considered. Hence, microgrids [12] are becoming a reality to cope with a new scenario in which renewable energy, distributed generation (DG) and distributed energy-storage systems have to be integrated together. This new concept makes the final user not to be a passive element in the grid, but an entity able to generate, storage, control and manage part of the energy that he/she will consume. Besides, a reduction in cost and an increment in reliability and transparency are achieved. The radial transformation of the electrical grid entails deeply challenges not only on the architecture of the power system, but also in the control system. There are many control techniques in the literature based on the droop control method whose aim is to avoid communication between DG units [13][18]. Although this method achieves good reliability and flexibility, it presents several drawbacks [19], [20]: 1) it is not suitable when parallel DG units share nonlinear loads, it must take into account harmonic currents; 2) the output impedance of DG units and the line impedances affect the power sharing accuracy; 3)it is dependent of the load frequency deviations, which implies a phase deviation between DG units and grid/load. To cope with this problem, two additional control loops have been defined in [21] and [22]: Secondary control, which restores the nominal values of the frequency and voltage in the MG; and Tertiary control, which sets the reference of the frequency and voltage in the MG. This paper is organized as follows. Section II gives a brief overview of the typical hybrid ac/dc micro grid modeling In section III gives the overview of renewable power source models and their corresponding converters, where the models of PV and DG as the sources of the ac micro-grid are given in Section III-A, the model of battery as the source of the dc micro-gridis given in Section III-B, and the model of Generator as the source of dc micro grid in section III-C. In Section IV, a PMS to coordinating between the ac and dc micro-grids is proposed. Section IV-Apresents, Grid Connected Mode. In Section IV-B, minimum and maximum charge/discharge power of the battery banks, and minimum allowable power exchange between the ac and dc micro-grids in isolated mode computation are presented. The simulation results obtained with the proposed PMS are also reported in Section V. Finally, Section VI summarizes the main outcome of this paper. 
Microgrid Modelling MicroGrid design begins by understanding the load profile needed to be served by the system. Often, the load is adjusted in the system design by applying energy efficiency measures such as demand response controls, equipment upgrades, and other system adjustments to reduce the overall generation needs. What does not typically happen, however, is a closer look at those loads to determine how much can be served natively by DC power sources. This is a fundamental flaw in optimizing the use of renewable energy resources that supply DC electricity-like photovoltaic, battery, and fuel cell technologies. Instead, the entire load is considered and the resulting power generation requirements are sized accordingly. With some additional thought and separation of the DC loads from the AC loads, the amount of renewable generation required can be dramatically reduced to only supply dc power for the dc equipment in the building. In other words, by designing separate DC and AC networks, a building MicroGrid could be developed that minimizes power losses due to transformation or conversion by simply supplying the equipment with the native electricity it requires. So, instead of using 50 rooftop solar panels to supply a building, you might only need 10, driving down system costs and increasing efficiencies by utilizing the best type of power for each piece of equipment being served. The conceptual architecture described in this article proposes a building MicroGrid using a hybrid approach of DC renewable generation resources for DC equipment and the utility grid for AC equipment. An \u201cenergy router\u201d acts as the hub that manages electricity across the AC and DC buses and minimizes the need for lossy DC-AC and AC-DC electricity transformations. Figure 2.1: Architecture of Microgrid Figure 2. 1, above, shows the overall building level DC MicroGrid architecture. Because of the power losses incurred with DC power over long distances, this concept is best used in a building level MicroGrid with short runs for the circuits on the system. In reality, calling it a hybrid MicroGrid is probably more apropos, but for the purposes of this discussion, we\u2019ll use the DC MicroGrid terminology to keep it short. The Green Lines represent the DC part of the system, while the Orange Lines represent the AC portion. In typical office buildings, the amount of DC power equipment ranges from 20-30% of the overall load, with those numbers steadily rising as LED lights, computers, more electronics, and electric International Journal of Applied Engineering Research ISSN 0973-4562 Volume 11, Number 6 (2016) pp 3980-3986 \u00a9 Research India Publications. http://www.ripublication.com 3982 vehicles enter the equation. The architectural rendering in Figure 1 may not be complete as some DC equipment will require DC-DC transformations from the DC network\u2019s supply voltage, but these transformations are less lossy than AC-DC transformations and also produce much less heat. DC Network The building\u2019s DC electricity generation needs can be met through renewable energy technologies: \uf0b7 Solar (Photovoltaic)"}
{"_id": "2efbd080d838986301b093dad1696da1e30e327d", "title": "BDO-SD: An efficient scheme for big data outsourcing with secure deduplication", "text": "In the big data era, duplicated copies of data may be generated by various smart devices. Although convergent encryption has been extensively adopted for secure deduplication, it is still very challenging to efficiently and reliably manage the convergent keys and support keyword search over encrypted data. This paper addresses this problem by presenting an efficient scheme for big data outsourcing with secure deduplication, named BDO-SD. Specifically, with the convergent encryption technique, data owners can outsource their data to the cloud server with data deduplication; with the idea of public key encryption and keywords search, a data user can query encrypted data while preserving its privacy. Security analysis demonstrates that our proposed BDO-SD scheme can achieve the data confidentiality, query privacy and key security. Simulation results show that compared with other schemes, our scheme can achieve better efficiency than the existing data deduplication schemes."}
{"_id": "18182d42c3360936735bee9da35249280c806ccb", "title": "Protocol Oblivious Forwarding (POF): Software-Defined Networking with Enhanced Programmability", "text": "Software-defined networking separates the control and forwarding planes of a network to make it more programmable and application- aware. As one of the initial implementations of SDN, OpenFlow abstracts a forwarding device as a flow table and realizes flow processing by applying the \"match-and-act\" principle. However, the protocol-dependent nature of OpenFlow still limits the programmability of the forwarding plane. Hence, in this article, we discuss how to leverage protocol-oblivious forwarding (POF) to further enhance the network programmability such that the forwarding plane becomes protocol-independent and can be dynamically reprogrammed to support new protocol stacks seamlessly. We first review the development of OpenFlow and explain the motivations for introducing POF. Then we explain the working principle of POF, discuss our efforts on realizing the POF development ecosystem, and show our implementation of POF-based source routing as a novel use case. Finally, we elaborate on the first WAN-based POF network testbed that includes POF switches located in two cities in China."}
{"_id": "45fc88bbf2d007c8c8b9021d303c20611e03a715", "title": "Digital Game-Based Learning in high school Computer Science education: Impact on educational effectiveness and student motivation", "text": "0360-1315/$ see front matter 2008 Elsevier Ltd. A doi:10.1016/j.compedu.2008.06.004 * Tel.: +3"}
{"_id": "5ceaabef5e2ad55a781607433cc47573db786684", "title": "The 'digital natives' debate: A critical review of the evidence", "text": "The idea that a new generation of students is entering the education system has excited recent attention among educators and education commentators. Termed \u2018digital natives\u2019 or the \u2018Net generation\u2019, these young people are said to have been immersed in technology all their lives, imbuing them with sophisticated technical skills and learning preferences for which traditional education is unprepared. Grand claims are being made about the nature of this generational change and about the urgent necessity for educational reform in response. A sense of impending crisis pervades this debate. However, the actual situation is far from clear. In this paper, the authors draw on the fields of education and sociology to analyse the digital natives debate. The paper presents and questions the main claims made about digital natives and analyses the nature of the debate itself. We argue that rather than being empirically and theoretically informed, the debate can be likened to an academic form of a \u2018moral panic\u2019. We propose that a more measured and disinterested approach is now required to investigate \u2018digital natives\u2019 and their implications for education. The one thing that does not change is that at any and every time it appears that there have been \u2018great changes\u2019. Marcel Proust, Within a Budding Grove"}
{"_id": "cd761fcc7db3352cf4613dd6ef5cbdedb6ced928", "title": "Forecasting emerging technologies : Use of bibliometrics and patent analysis", "text": "It is rather difficult to forecast emerging technologies as there is no historical data available. In such cases, the use of bibliometrics and patent analysis have provided useful data. This paper presents the forecasts for three emerging technology areas by integrating the use of bibliometrics and patent analysis into well-known technology forecasting tools such as scenario planning, growth curves and analogies. System dynamics is also used to be able to model the dynamic ecosystem of the technologies and their diffusion. Technologies being forecasted are fuel cell, food safety and optical storage technologies. Results from these three applications help us to validate the proposed methods as appropriate tools to forecast emerging technologies. \u00a9 2006 Elsevier Inc. All rights reserved."}
{"_id": "3145ba9bf2fe52e723fe5d244a15b69846556307", "title": "Food-related illness and death in the United States.", "text": "Letters Letters Letters Letters Letters correctly identified E. faecium and E. faecalis to the species level, most (4 of 5) did not correctly identify E. gallinarum (three misidentified it as E. casseliflavus and one as E. faecalis). The results of this study are consistent with those of previous studies in the United States (4,5), South America (6), Spain (7), and Mexico (8). Although in countries like Chile, disk diffusion is practical and reliable for most susceptibility testing, detecting low-level vanco-mycin resistance in enterocci is difficult without supplementary testing. In Chile, as in other countries, strategies should be implemented to improve detection of these strains, including improvement of phenotypical and genotypical methods for VRE detection and species identification. Documentation of proficiency in detecting VRE is important for improving laboratory performance, detecting clinical isolates, and accurate and reliable reporting to local, national, and international surveillance systems. A, Febre N, et al. Epidemiologic analysis for acquisition of vancomycin-resistant enterococcus (VRE) in an intensive care unit in Brazil. Altschuler M, et al. Study to determine the ability of clinical laboratories to detect antimicrobial-resistant To the Editor: Dr. Mead and colleagues should be commended for attempting to estimate the prevalence of foodborne disease in the United States (1). Their study provides more complete estimates than previous studies in terms of the number of foodborne pathogens included; for example, it includes the first realistic estimate of the number of cases of disease due to Norwalk-like caliciviruses. However, the publication of these estimates raises some important issues. Even though \" accurate estimates of disease burden are the foundation of sound public health policy \" (2), most of these estimates (in particular, the assumption that unknown agents are transmitted by food in the same proportion as known agents) were derived from assumptions rather than data. Known foodborne agents clearly cannot account for most gastrointestinal illnesses (1). However, illnesses from unknown agents may be as likely to have the transmission characteristics of rotavirus (1% foodborne) or Cryptosporidium (10% foodborne) as those of the Norwalk-like viruses (40% foodborne). Furthermore , it was assumed that detecting outbreaks or cases of toxin-mediated illnesses (e.g., due to Bacillus cereus, Staphylococcus aureus, or Clostridium perfringens) follows the model of Salmonella. In the authors' entire list of known foodborne agents, data are presented for cases identified both from outbreaks and active surveillance for only three agents: Salmonella, Shigella, and Campylobacter. Salmonella is clearly the most \u2026"}
{"_id": "78fa742d0f94d63355a14c4eadbcd0fe7a527a49", "title": "What makes things fun to learn? heuristics for designing instructional computer games", "text": "In this paper, I will describe my intuitions about what makes computer games fun. More detailed descriptions of the experiments and the theory on which this paper is based are given by Malone (1980a, 1980b). My primary goal here is to provide a set of heuristics or guidelines for designers of instructional computer games. I have articulated and organized common sense principles to spark the creativity of instructional designers (see Banet, 1979, for an unstructured list of similar principles). To demonstrate the usefulness of these principles, I have included several applications to actual or proposed instructional games. Throughout the paper I emphasize games with educational uses, but I focus on what makes the games fun, not on what makes them educational.\n Though I will not emphasize the point in this paper, these same ideas can be applied to other educational environments and life situations. In a sense, the categories I will describe constitute a general taxonomy of intrinsic motivation\u2014of what makes an activity fun or rewarding for its own sake rather than for the sake of some external reward (See Lepper and Greene, 1979).\n I think the essential characteristics of good computer games and other intrinsically enjoyable situations can be organized into three categories: challenge, fantasy, and curiosity."}
{"_id": "1a9b61681cd33980babe381a0a243cf3e28063ad", "title": "Code and tell: assessing young children's learning of computational thinking using peer video interviews with ScratchJr", "text": "In this paper, we present a novel technique for assessing the learning of computational thinking in the early childhood classroom. Students in three second grade classrooms learned foundational computational thinking concepts using ScratchJr and applied what they learned to creating animated collages, stories, and games. They then conducted artifact-based video interviews with each other in pairs using their iPad cameras. As discussed in the results, this technique can show a broad range of what young children learn about computational thinking in classroom interventions using ScratchJr than more traditional assessment techniques. It simultaneously provides a developmentally appropriate educational activity (i.e. peer interviews) for early childhood classrooms."}
{"_id": "a9fd7321bddc2527397b3c44096b9f9c7915c4da", "title": "Tadalafil therapy for pulmonary arterial hypertension.", "text": "BACKGROUND\nTreatment options for pulmonary arterial hypertension target the prostacyclin, endothelin, or nitric oxide pathways. Tadalafil, a phosphodiesterase type-5 inhibitor, increases cGMP, the final mediator in the nitric oxide pathway.\n\n\nMETHODS AND RESULTS\nIn this 16-week, double-blind, placebo-controlled study, 405 patients with pulmonary arterial hypertension (idiopathic or associated), either treatment-naive or on background therapy with the endothelin receptor antagonist bosentan, were randomized to placebo or tadalafil 2.5, 10, 20, or 40 mg orally once daily. The primary end point was the change from baseline to week 16 in the distance walked in 6 minutes. Changes in World Health Organization functional class, clinical worsening, and health-related quality of life were also assessed. Patients completing the 16-week study could enter a long-term extension study. Tadalafil increased the distance walked in 6 minutes in a dose-dependent manner; only the 40-mg dose met the prespecified level of statistical significance (P<0.01). Overall, the mean placebo-corrected treatment effect was 33 m (95% confidence interval, 15 to 50 m). In the bosentan-naive group, the treatment effect was 44 m (95% confidence interval, 20 to 69 m) compared with 23 m (95% confidence interval, -2 to 48 m) in patients on background bosentan therapy. Tadalafil 40 mg improved the time to clinical worsening (P=0.041), incidence of clinical worsening (68% relative risk reduction; P=0.038), and health-related quality of life. The changes in World Health Organization functional class were not statistically significant. The most common treatment-related adverse events reported with tadalafil were headache, myalgia, and flushing.\n\n\nCONCLUSIONS\nIn patients with pulmonary arterial hypertension, tadalafil 40 mg was well tolerated and improved exercise capacity and quality of life measures and reduced clinical worsening."}
{"_id": "9c7908f327b76b1818d4f9123d4d27d98fe2424b", "title": "Classification of rice grain varieties using two artificial neural networks (MLP and neuro-fuzzy).", "text": "Artificial neural networks (ANNs) have many applications in various scientific areas such as identification, prediction and image processing. This research was done at the Islamic Azad University, Shahr-e-Rey Branch, during 2011 for classification of 5 main rice grain varieties grown in different environments in Iran. Classification was made in terms of 24 color features, 11 morphological features and 4 shape factors that were extracted from color images of each grain of rice. The rice grains were then classified according to variety by multi layer perceptron (MLP) and neuro-fuzzy neural networks. The topological structure of the MLP model contained 39 neurons in the input layer, 5 neurons (Khazar, Gharib, Ghasrdashti, Gerdeh and Mohammadi) in the output layer and two hidden layers; neuro-fuzzy classifier applied the same structure in input and output layers with 60 rules. Average accuracy amounts for classification of rice grain varieties computed 99.46% and 99.73% by MLP and neuro-fuzzy classifiers alternatively. The accuracy of MLP and neuro-fuzzy networks changed after feature selections were 98.40% and 99.73 % alternatively."}
{"_id": "9e8c04b9b54f0c4c742f6b3470fe5353d3b91864", "title": "Skin force sensor using piezoresistive PEDOT:PSS with arabitol on flexible PDMS", "text": "In this paper, a flexible conducting polymer based force sensor array has been developed. A conducting polymer, namely poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate) (PEDOT:PSS) with additive arabitol, was utilized as the piezoresistive layer on a flexible polydimethylsiloxane (PDMS) substrate owing to its high conductivity and stretchability. A simplified strain sensor has been developed to demonstrate the properties of the arabitol-PEDOT:PSS. Suitable curing conditions and arabitol concentration have been established with design of experiment. The arabitol-PEDOT:PSS can be stretched up to 20% and it shows a significant gauge factor of 11.5. A flexible interdigital force sensor array has been developed to demonstrate the effectiveness of arabitol-PEDOT:PSS on PDMS as a force sensor."}
{"_id": "f3995b71138ce5d79875473ac9e1b6df414114f0", "title": "Electrostatic adhesive brakes for high spatial resolution refreshable 2.5D tactile shape displays", "text": "We investigate the mechanism, design, modeling and fabrication of a scalable high resolution, low cost and lightweight refreshable 2.5D tactile pin array controlled by electrostatic adhesive brakes. By replacing linear actuators in motorized shape displays with a high voltage solid-state circuit that can be fabricated with printable electronics techniques, we can decrease the cost and complexity of such devices. Electrostatic adhesive brakes, made by patterning interdigital electrodes on high dielectric constant thin films, are used to hold metal pins' positions and provide contact force to the user's fingertip. We present designs of two high resolution brake modules which are 1.7 mm pitch with 0.8 mm width pins and 4 mm pitch with 1.58 mm width pins with a maximum measured dynamic loading force of 76.3 gf and static loading force of 28 gf on an individual pin (for the later size). A small demonstration of 4 \u00d7 2 pin array with a 4 mm pitch size within a row and 2.5 mm pitch size between the rows, using 1.58 mm width pins, was created. We also characterized the refresh time to be 37.5 ms for each brake, which enables refreshable actuated pin displays."}
{"_id": "a708c162edd794cab982cf1049f2b81a05cf5757", "title": "Fuzzy Passive-Aggressive classification: A robust and efficient algorithm for online classification problems", "text": ""}
{"_id": "3b705556d6aa4947c6b028dd29f0c084da96bf44", "title": "Electric field effect in atomically thin carbon films.", "text": "We describe monocrystalline graphitic films, which are a few atoms thick but are nonetheless stable under ambient conditions, metallic, and of remarkably high quality. The films are found to be a two-dimensional semimetal with a tiny overlap between valence and conductance bands, and they exhibit a strong ambipolar electric field effect such that electrons and holes in concentrations up to 10(13) per square centimeter and with room-temperature mobilities of approximately 10,000 square centimeters per volt-second can be induced by applying gate voltage."}
{"_id": "f7ef6b811c3f769d7ede81095064cc223fb708d1", "title": "Quality Traits of \u201cCannabidiol Oils\u201d: Cannabinoids Content, Terpene Fingerprint and Oxidation Stability of European Commercially Available Preparations", "text": "Cannabidiol (CBD)-based oil preparations are becoming extremely popular, as CBD has been shown to have beneficial effects on human health. CBD-based oil preparations are not unambiguously regulated under the European legislation, as CBD is not considered as a controlled substance. This means that companies can produce and distribute CBD products derived from non-psychoactive hemp varieties, providing an easy access to this extremely advantageous cannabinoid. This leaves consumers with no legal quality guarantees. The objective of this project was to assess the quality of 14 CBD oils commercially available in European countries. An in-depth chemical profiling of cannabinoids, terpenes and oxidation products was conducted by means of GC-MS and HPLC-Q-Exactive-Orbitrap-MS in order to improve knowledge regarding the characteristics of CBD oils. Nine out of the 14 samples studied had concentrations that differed notably from the declared amount, while the remaining five preserved CBD within optimal limits. Our results highlighted a wide variability in cannabinoids profile that justifies the need for strict and standardized regulations. In addition, the terpenes fingerprint may serve as an indicator of the quality of hemp varieties, while the lipid oxidation products profile could contribute in evaluation of the stability of the oil used as milieu for CBD rich extracts."}
{"_id": "e387d77969483c744ed218f555572a356ea1e63a", "title": "Hybrid BFOA-PSO algorithm for automatic generation control of linear and nonlinear interconnected power systems", "text": "In the Bacteria Foraging Optimization Algorithm (BFAO), the chemotactic process is randomly set, imposing that the bacteria swarm together and keep a safe distance from each other. In hybrid bacteria foraging optimization algorithm and particle swarm optimization (hBFOA-PSO) algorithm the principle of swarming is introduced in the framework of BFAO. The hBFOA-PSO algorithm is based on the adjustment of each bacterium position according to the neighborhood environment. In this paper, the effectiveness of the hBFOA-PSO algorithm has been tested for Automatic Generation Control (AGC) of an interconnected power system. A widely used linear model of two area non-reheat thermal system equipped with Proportional-Integral (PI) controller is considered initially for the design and analysis purpose. At first, a conventional Integral Time multiply Absolute Error (ITAE) based objective function is considered and the performance of hBFOA-PSO algorithm is compared with PSO, BFOA and GA. Further a modified objective function using ITAE, damping ratio of dominant eigenvalues and settling time with appropriate weight coefficients is proposed to increase the performance of the controller. Further, robustness analysis is carried out by varying the operating load condition and time constants of speed governor, turbine, tieline power in the range of +50% to -50% as well as size and position of step load perturbation to demonstrate the robustness of the proposed hBFOA-PSO optimized PI controller. The proposed approach is also extended to a nonlinear power system model by considering the effect of governor dead band non-linearity and the superiority of the proposed approach is shown by comparing the results of Craziness based Particle Swarm Optimization (CRAZYPSO) approach for the identical interconnected power system. Finally, the study is extended to a three area system considering both thermal and hydro units with different PI coefficients and comparison between ANFIS and proposed approach has been provided."}
{"_id": "07d7acb0313511b35a7a2b479d6c49de0c229887", "title": "miRNAs and apoptosis: RNAs to die for", "text": "MicroRNAs (miRNAs) are small non-coding RNAs of about 18\u201324 nucleotides in length that negatively regulate gene expression. Discovered only recently, it has become clear that they are involved in many biological processes such as developmental timing, differentiation and cell death. Data that connect miRNAs to various kinds of diseases, particularly cancer, are accumulating. miRNAs can influence cancer development in many ways, including the regulation of cell proliferation, cell transformation, and cell death. In this review, we focus on miRNAs that have been shown to play a role in the regulation of apoptosis. We first describe in detail how Drosophila has been utilized as a model organism to connect several miRNAs with the cell death machinery. We discuss the genetic approaches that led to the identification of those miRNAs and subsequent work that helped to establish their function. In the second part of the review article, we focus on the involvement of miRNAs in apoptosis regulation in mammals. Intriguingly, many of the miRNAs that regulate apoptosis have been shown to affect cancer development. In the end, we discuss a virally encoded miRNA that influences the cell death response in the mammalian host cell. In summary, the data gathered over the recent years clearly show the potential and important role of miRNAs to regulate apoptosis at various levels and in several organisms."}
{"_id": "40cc00af42a4f4ffc11f09ecf3101dd7b2205664", "title": "DOES THE BALANCED SCORECARD WORK : AN EMPIRICAL INVESTIGATION Professor", "text": "Commentators suggest that between 30 and 60% of large US firms have adopted the Balanced Scorecard, first described by Bob Kaplan and David Norton in their seminal Harvard Business Review paper of 1992 (Kaplan and Norton, 1992; Marr et al, 2004). Empirical evidence that explores the performance impact of the balanced scorecard, however, is extremely rare and much that is available is anecdotal at best. This paper reports a study that set out to explore the performance impact of the balanced scorecard by employing a quasi-experimental design. Up to three years worth of financial data were collected from two sister divisions of an electrical wholesale chain based in the UK, one of which had implemented the balanced scorecard and one of which had not. The relative performance improvements of geographically matched pairs of branches were compared to establish what, if any, performance differentials existed between the branches that had implemented the balanced scorecard and those that had not. The key findings of the study are that while the Electrical division \u2013 the division that implemented the balanced scorecard \u2013 sees improvements in sales and gross profit; similar performance improvements are also observed in the sister division. Hence the performance impact of the balanced scorecard has to be questioned. Clearly further work on this important topic is required in similar settings where natural experiments occur."}
{"_id": "74c7d82031524e14de70b0e9e5a824c72742d3d7", "title": "A flexible architecture integrating monitoring and analytics for managing large-scale data centers", "text": "To effectively manage large-scale data centers and utility clouds, operators must understand current system and application behaviors. This requires continuous, real-time monitoring along with on-line analysis of the data captured by the monitoring system, i.e., integrated monitoring and analytics -- Monalytics [28]. A key challenge with such integration is to balance the costs incurred and associated delays, against the benefits attained from identifying and reacting to, in a timely fashion, undesirable or non-performing system states. This paper presents a novel, flexible architecture for Monalytics in which such trade-offs are easily made by dynamically constructing software overlays called Distributed Computation Graphs (DCGs) to implement desired analytics functions. The prototype of Monalytics implementing this flexible architecture is evaluated with motivating use cases in small scale data center experiments, and a series of analytical models is used to understand the above trade-offs at large scales. Results show that the approach provides the flexibility needed to meet the demands of autonomic management at large scale with considerably better performance/cost than traditional and brute force solutions."}
{"_id": "61212778c8f4fcc64d7b0ce00d5b4bd1933271fd", "title": "Learning Automata as a Basis for Multi Agent Reinforcement Learning", "text": "Learning Automata (LA) are adaptive decision making devices suited for operation in unknown environments [12]. Originally they were developed in the area of mathematical psychology and used for modeling observed behavior. In its current form, LA are closely related to Reinforcement Learning (RL) approaches and most popular in the area of engineering. LA combine fast and accurate convergence with low computational complexity, and have been applied to a broad range of modeling and control problems. However, the intuitive, yet analytically tractable concept of learning automata makes them also very suitable as a theoretical framework for Multi agent Reinforcement Learning (MARL). Reinforcement Learning (RL) is already an established and profound theoretical framework for learning in stand-alone or single-agent systems. Yet, extending RL to multi-agent systems (MAS) does not guarantee the same theoretical grounding. As long as the environment an agent is experiencing is Markov, and the agent can experiment sufficiently, RL guarantees convergence to the optimal strategy. In a MAS however, the reinforcement an agent receives, may depend on the actions taken by the other agents acting in the same environment. Hence, the Markov property no longer holds. And as such, guarantees of convergence are lost. In the light of the above problem it is important to fully understand the dynamics of multi-agent reinforcement learning. Although, they are not fully recognized as such, LA are valuable tools for current MARL research. LA are updated strictly on the basis of the response of the environment, and not on the basis of any knowledge regarding other automata, i.e. nor their strategies, nor their feedback. As such LA agents are very simple. Moreover, LA can be treated analytically. Convergence proofs do exist for a variety of settings ranging from a single automaton model acting in a simple stationary random environment to a distributed automata model interacting in a complex environment. In this paper we argue that LA are very interesting building blocks for learning in multi agent systems. The LA can be viewed as policy iterators, who update their action probabilities based on private information only. This is a very attractive property in applications where communication is expensive. LA are in particular appealing in games with stochastic payoffs. Then we move to collections of learning automata, that can independently converge to interesting solution concepts. We study the single stage setting, including the analytical results. Then we generalize to interconnected learning automata, that can deal with multi agent multi-stage problems. We also show how Ant Colony Optimization can be mapped to the interconnected Learning Automata setting. This extended abstract is a summary of an article published earlier [14]."}
{"_id": "78ebba855252e57ff2eb71f00fe4548701af6a77", "title": "A New Approach of copy move Forgery Detection using Rigorous Preprocessing and Feature Extraction", "text": "these days, advanced pictures are being used in an extensive variety of uses and for numerous reasons. They additionally assume an imperative part in the capacity and exchange of visual data, particularly the mystery ones. With this far reaching utilization of advanced pictures, notwithstanding the expanding number of devices and programming of computerized pictures altering, it has turned out to be anything but difficult to control and change the real data of the picture. In this way, it has turned out to be important to check the credibility and the respectability of the picture by utilizing present day and advanced methods, which add to examination and comprehension of the pictures\u2019 substance, and after that ensure their trustworthiness. There are many sorts of picture imitation, the most critical and prominent sort is called duplicate glue fabrication, which utilizes a similar picture during the time spent falsification. This sort of fraud is utilized for one of two things, first to cover a protest or scene by replicating the region of the picture and gluing it on another zone of a similar picture. In this paper we have presented a new approach of copy move forgery detection. proposed scheme uses Oriented FAST and rotated BRIEF(ORB) alternative of scale invariant feature transform (SIFT) technique which is integrated with modified local contrast modification-contrast limited adaptive histogram equalization(LCM-CLAHE). Experimental results shows that proposed scheme is more promising in terms of false positive rate(FPR) and true positive rate(TPR) compare to state of the art techniques. Keywords\u2014 Image Processing, Image Enhancement, Histogram Equalization, SIFT, TPR, FPR, Copy move forgery, ORB"}
{"_id": "525385c4fa990e75fa222a40118fd70429ba048d", "title": "An Ontology-based Summarization System for Arabic Documents (OSSAD)", "text": "With the problem of increased web resources and the huge amount of information available, the necessity of having automatic summarization systems appeared. Since summarization is needed the most in the process of searching for information on the web, where the user aims at a certain domain of interest according to his query, domain-based summaries would serve the best. Despite the existence of plenty of research work in the domain-based summarization in English, there is lack of them in Arabic due to the shortage of existing knowledge bases. In this paper an Ontology-based Summarization System for Arabic Documents, OSSAD, is introduced. Domain knowledge is extracted from an Arabic corpus and represented by topic related concepts/keywords and the lexical relations among them. The user\u2019s query is first expanded by using the Arabic WordNet and then by adding the domain-specific knowledge base to the expansion. For summarization, decision tree algorithm (C4.5) is used, which was trained by a set of features extracted from the original documents. For the testing dataset, Essex Arabic Summaries Corpus (EASC) was used. Recall Oriented Understudy for Gisting Evaluation (ROUGE) was used to compare OSSAD summaries with the human summaries along with other automatic summarization systems, showing that the proposed approach demonstrated promising results."}
{"_id": "f3c7e622c8d5a15e7bee42078cd00d0233c0c8b2", "title": "Adult hippocampal neurogenesis is involved in anxiety-related behaviors", "text": "Adult hippocampal neurogenesis is a unique example of structural plasticity, the functional role of which has been a matter of intense debate. New transgenic models have recently shown that neurogenesis participates in hippocampus-mediated learning. Here, we show that transgenic animals, in which adult hippocampal neurogenesis has been specifically impaired, exhibit a striking increase in anxiety-related behaviors. Our results indicate that neurogenesis plays an important role in the regulation of affective states and could be the target of new treatments for anxiety disorders."}
{"_id": "8e5c9917a1e59e57b970d1809c6c331d73b0e65c", "title": "A quasi fixed frequency constant on time controlled boost converter", "text": "Continuous conduction mode (CCM) boost converter is difficult to compensate due to the right half plane (RHP) zero. This paper presents a quasi fixed frequency constant on time (COT) controlled boost converter, it has no loop compensation requirement. With the proposed COT control, pulse frequency modulation (PFM) comes free at light load, which improves the light load efficiency compared with traditional PWM control. A compact on-time generator that controls the operation frequency is proposed. And a current sink based feedback circuit is used to improve the quality of feedback signal.The COT controller is designed with 1.5 mum BCD process. Simulation results demonstrate that the COT boost converter has fast load transient response and quasi fixed frequency operation."}
{"_id": "cc890e953f39939cb87a1fe5cfccde812a3efd9a", "title": "Credit decision tool using mobile application data for microfinance in agriculture", "text": "Smallholder agriculture is vital for food security and poverty reduction. Poor, unbanked smallholder farmers face challenges in obtaining loans for agriculture due to missing financial credit history. Although credit decisions tools with alternatives such as big data from mobile phones currently exist, these are mainly targeted towards consumer credit solutions. There is need for research focused on applying such data to credit decisions in agriculture, considering the unique challenges of the field. This research proposes a credit decision tool using alternative data from a mobile phone application used for reporting agricultural activities. 213 users submitted 41613 reports of farm activity by 11336 farmers over a 5-year period. Users were classified groups based on personal information, association with farmers, and reports. Three methods are applied for analysis: multiple logistic regression, a support vector machine with a linear kernel, and a support vector machine with a radial basis function kernel. Three sets of features, selected with different methods, are used. The results are evaluated by accuracy, and Area Under the Receiver operating curve (AUC). The support vector machine with a radial basis function kernel shows the best performance on the original data set with an Area Under the Receiver operating curve (AUC) value of 0.983. Feasibility of applying each method in agricultural microfinance is considered. Finally, further development of the models using Geographic Information System analysis of geospatial data which affects agricultural productivity, such as soil type and water resources, is suggested to incorporate the ability of the borrower to repay loans from farm activity."}
{"_id": "18b6dfb4f5d65f53217c9908195faadec337ec0a", "title": "A robot's spatial perception communicated via human touch", "text": "A robot perceives space in order to navigate, map, search and investigate its surroundings. One application area where humans interact with robots is search and rescue. The robot may have unique capabilities such as seeing outside the visible spectrum and being able to navigate in tight and hazardous spaces. In such an operation, the human may also be an effective search agent. Thus, it would be beneficial if the robot\u2019s spatial perception was conveyed to the human via a secondary peripheral modality. We have worked on developing a visual to tactile substitution.device for people who are visually impaired or blind. We propose to use this similar technique to the remote robot in order that it convey its spatial perception to the human operator or co-worker."}
{"_id": "98f4a8d8407f3918f98ee2889347b11191cf411c", "title": "A Structured Vector Space Model for Word Meaning in Context", "text": "We address the task of computing vector space representations for the meaning of word occurrences, which can vary widely according to context. This task is a crucial step towards a robust, vector-based compositional account of sentence meaning. We argue that existing models for this task do not take syntactic structure sufficiently into account. We present a novel structured vector space model that addresses these issues by incorporating the selectional preferences for words\u2019 argument positions. This makes it possible to integrate syntax into the computation of word meaning in context. In addition, the model performs at and above the state of the art for modeling the contextual adequacy of paraphrases."}
{"_id": "223dcd0e44532fc02444709e61327432c74fe46d", "title": "Phrase-Based Statistical Machine Translation", "text": "This paper is based on the work carried out in the framework of the Verbmobil project, which is a limited-domain speech translation task (German-English). In the final evaluation, the statistical approach was found to perform best among five competing approaches. In this paper, we will further investigate the used statistical translation models. A shortcoming of the single-word based model is that it does not take contextual information into account for the translation decisions. We will present a translation model that is based on bilingual phrases to explicitly model the local context. We will show that this model performs better than the single-word based model. We will compare monotone and non-monotone search for this model and we will investigate the benefit of using the sum criterion instead of the maximum approximation."}
{"_id": "2a954bdf203b327741ecf5d1c8fd70ccd15dcf73", "title": "Mining the Web for Bilingual Text", "text": "STRAND (Resnik, 1998) is a languageindependent system for automatic discovery of text in parallel translation on the World Wide Web. This paper extends the preliminary STRAND results by adding automatic language identification, scaling up by orders of magnitude, and formally evaluating performance. The most recent end-product is an automatically acquired parallel corpus comprising 2491 English-French document pairs, approximately 1.5 million words per language. 1 I n t r o d u c t i o n Text in parallel translation is a valuable resource in natural language processing. Statistical methods in machine translation (e.g. (Brown et al., 1990)) typically rely on large quantities of bilingual text aligned at the document or sentence level, and a number of approaches in the burgeoning field of crosslanguage information retrieval exploit parallel corpora either in place of or in addition to mappings between languages based on information from bilingual dictionaries (Davis and Dunning, 1995; Landauer and Littman, 1990; Hull and Oard, 1997; Oard, 1997). Despite the utility of such data, however, sources of bilingual text are subject to such limitations as licensing restrictions, usage fees, restricted domains or genres, and dated text (such as 1980's Canadian politics); or such sources simply may not exist for * This work was supported by Department of Defense contract MDA90496C1250, DARPA/ITO Contract N66001-97-C-8540, and a research grant from Sun Microsystems Laboratories. The author gratefully acknowledges the comments of the anonymous reviewers, helpful discussions with Dan Melamed and Doug Oard, and the assistance of Jeff Allen in the French-English experimental evaluation. language pairs of interest. Although the majority of Web content is in English, it also shows great promise as a source of multilingual content. Using figures from the Babel survey of multilinguality on the Web (h tZp : / /www. i s o c . o r g / ) , it is possible to estimate that as of June, 1997, there were on the order of 63000 primarily non-English Web servers, ranging over 14 languages. Moreover, a followup investigation of the non-English servers suggests that nearly a third contain some useful cross-language data, such as parallel English on the page or links to parallel English pages the follow-up also found pages in five languages not identified by the Babel study (Catalan, Chinese, Hungarian, Icelandic, and Arabic; Michael Littman, personal communication). Given the continued explosive increase in the size of the Web, the trend toward business organizations that cross national boundaries, and high levels of competition for consumers in a global marketplace, it seems impossible not to view multilingual content on the Web as an expanding resource. Moreover, it is a dynamic resource, changing in content as the world changes. For example, Diekema et al., in a presentation at the 1998 TREC-7 conference (Voorhees and Harman, 1998), observed that the performance of their cross-language information retrieval was hurt by lexical gaps such as Bosnia/Bosniethis illustrates a highly topical missing pair in their static lexical resource (which was based on WordNet 1.5). 
And Gey et al., also at TREC-7, observed that in doing cross-language retrieval using commercial machine translation systems, gaps in the lexicon (their example was acupuncture/Akupunktur) could make the difference between precision of 0.08 and precision of 0.83 on individual queries. ttesnik (1998) presented an algorithm called"}
{"_id": "39a31d4aedfb4d221e793b428123f26b15416fc7", "title": "\"I don't believe in word senses\"", "text": "Word sense disambiguation assumes word senses. Within the lexicography and linguistics literature, they are known to be very slippery entities. The paper looks at problems with existing accounts of \u2018word sense\u2019 and describes the various kinds of ways in which a word\u2019s meaning can deviate from its core meaning. An analysis is presented in which word senses are abstractions from clusters of corpus citations, in accordance with current lexicographic practice. The corpus citations, not the word senses, are the basic objects in the ontology. The corpus citations will be clustered into senses according to the purposes of whoever or whatever does the clustering. In the absence of such purposes, word senses do not exist. Word sense disambiguation also needs a set of word senses to disambiguate between. In most recent work, the set has been taken from a general-purpose lexical resource, with the assumption that the lexical resource describes the word senses of English/French/: : : , between which NLP applications will need to disambiguate. The implication of the paper is, by contrast, that word senses exist only relative to a task."}
{"_id": "3f8c6ecb15bbdf667ec6bbe1b132db1945110976", "title": "FACE-CHANGE: Application-Driven Dynamic Kernel View Switching in a Virtual Machine", "text": "Kernel minimization has already been established as a practical approach to reducing the trusted computing base. Existing solutions have largely focused on whole-system profiling - generating a globally minimum kernel image that is being shared by all applications. However, since different applications use only part of the kernel's code base, the minimized kernel still includes an unnecessarily large attack surface. Furthermore, once the static minimized kernel is generated, it is not flexible enough to adapt to an altered execution environment (e.g., new workload). FACE-CHANGE is a virtualization-based system to facilitate dynamic switching at runtime among multiple minimized kernels, each customized for an individual application. Based on precedent profiling results, FACE-CHANGE transparently presents a customized kernel view for each application to confine its reach ability of kernel code. In the event that the application exceeds this boundary, FACE-CHANGE is able to recover the missing code and back trace its attack/exception provenance to analyze the anomalous behavior."}
{"_id": "2b2cca00abbb8f68cd59fb734475d1afe294ec85", "title": "Lightweight application classification for network management", "text": "Traffic application classification is an essential step in the network management process to provide high availability of network services. However, network management has seen limited use of traffic classification because of the significant overheads of existing techniques. In this context we explore the feasibility and performance of lightweight traffic classification based on NetFlow records. In our experiments, the NetFlow records are created from packet-trace data and pre-tagged based upon packet content. This provides us with NetFlow records that are tagged with a high accuracy for ground-truth. Our experiments show that NetFlow records can be usefully employed for application classification. We demonstrate that our machine learning technique is able to provide an identification accuracy (\u2248 91%) that, while a little lower than that based upon previous packet-based machine learning work (> 95%), is significantly higher than the commonly used port-based approach (50--70%). Trade-offs such as the complexity of feature selection and packet sampling are also studied. We conclude that a lightweight mechanism of classification can provide application information with a considerably high accuracy, and can be a useful practice towards more effective network management."}
{"_id": "d36018859bbdac8797064a2f4c58e4661bbd4f02", "title": "A Compact Ka-Band Planar Three-Way Power Divider", "text": "A Ka-band planar three-way power divider which uses the coupled line instead of the transmission line is proposed to reduce chip size. The proposed planar topology, different from the conventional Wilkinson power divider, is analyzed and can provide not only compact but also dc block characteristics, which are very suitable for monolithic microwave integrated circuit applications. The divider implemented by a pHEMT process shows an insertion loss less than 5.1 dB and an output isolation better than 17 dB. A return loss less than 18 dB and a phase difference of 4.2deg at 30 GHz can be achieved. Finally, good agreements between the simulation and experimental results are shown."}
{"_id": "2012b864ce2b69039db429bea959751d0591e2cf", "title": "Optimal control of execution costs for portfolios", "text": "We derive dynamic optimal trading strategies that minimize the expected cost of trading blocks of securities over a xed time horizon. Given xed blocks si of shares of stock i to be traded within a nite number of periods T , i = 1; : : : ; n, and given price-impact functions that yield the execution price of an individual trade as a function of the shares of stock i traded and current market conditions, we obtain the optimal sequence of trades as a function of market conditions|closed-form expressions in some cases|that minimizes the expected cost of executing si within T periods. We also propose an approximation algorithm for incorporating constraints, a particularly important extension in practice. To illustrate the practical relevance of our methods, we apply them to a hypothetical portfolio of 25 stocks by estimating their price-impact functions using historical trade data from 1996 and deriving the optimal execution strategies for the portfolio, and by performing several Monte Carlo simulation experiments. This research is part of the MIT Laboratory for Financial Engineering's Transactions Costs Project. We are grateful to Investment Technology Group, the National Science Foundation (Grant Nos. DMI{9610486 and SBR{9709976), and the sponsors of the LFE for nancial support. We thank Dave Cushing, Chris Darnell, Robert Ferstenberg, Rohit D'Souza, John Heaton, Leonid Kogan, Bruce Lehmann, Greg Peterson, Jiang Wang, and seminar participants at Columbia, Harvard, London Business School, MIT, Northwestern, the NYSE Conference on Best Execution, the Society of Quantitative Analysts, and Yale for helpful comments and discussion. Boeing Professor of Operations Research, MIT Sloan School of Management and Operations Research Center, Massachusetts Institute of Technology, E53-359, Cambridge, MA 02142{1347. Portfolio Strategist, Long Term Capital Management, One East Weaver St., Greenwich, CT 06831. Harris & Harris Group Professor, MIT Sloan School of Management and Operations Research Center, Cambridge, MA 02142{1347."}
{"_id": "665a311c538fc021c27acd3953f171924cc5905c", "title": "Optimization of image description metrics using policy gradient methods", "text": "In this paper, we propose a novel training procedure for image captioning models based on policy gradient methods. This allows us to directly optimize for the metrics of interest, rather than just maximizing likelihood of human generated captions. We show that by optimizing for standard metrics such as BLEU, CIDEr, METEOR and ROUGE, we can develop a system that improve on the metrics and ranks first on the MSCOCO image captioning leader board, even though our CNN-RNN model is much simpler than state of the art models. We further show that by also optimizing for the recently introduced SPICE metric, which measures semantic quality of captions, we can produce a system that significantly outperforms other methods as measured by human evaluation. Finally, we show how we can leverage extra sources of information, such as pre-trained image tagging models, to further improve quality."}
{"_id": "b980ae538e12185ec1c8910557d23da00286527c", "title": "A Quasi-Passive Leg Exoskeleton for Load-Carrying Augmentation", "text": "A quasi-passive leg exoskeleton is presented for load-carrying augmentation during walking. The exoskeleton has no actuators, only ankle and hip springs and a knee variabledamper. Without a payload, the exoskeleton weighs 11.7kg and requires only 2 Watts of electrical power during loaded walking. For a 36 kg payload, we demonstrate that the quasi-passive exoskeleton transfers on average 80% of the load to the ground during the single support phase of walking. By measuring the rate of oxygen consumption on a study participant walking at a self-selected speed, we find that the exoskeleton slightly increases the walking metabolic cost of transport (COT) as compared to a standard loaded backpack (10% increase). However, a similar exoskeleton without joint springs or damping control (zero-impedance exoskeleton) is found to increase COT by 23% compared to the loaded backpack, highlighting the benefits of passive and quasi-passive joint mechanisms in the design of efficient, low-mass leg exoskeletons."}
{"_id": "f3e3f0d4371e10c58ef2682330e6f02b9aeb3e58", "title": "Face recognition using PCA-based method", "text": "The objective of this paper is to develop the image processing and recognize the faces using PCA-based face recognition technique. MATLAB based programs are implemented to identify the faces using Indian databases and the Face recognition data, University of Essex, UK. For matching unknown images with known images, different techniques like sum of absolute difference (SAD), sum of squared difference (SSD), normalized cross correlation (NCC) etc. can be used which has been shown in this paper. Finally, it has been shown that the performance of PCA-based face recognition system is quite satisfactory instead of having some shortcomings of the system. (Abstract)"}
{"_id": "38dc4cf97af1232fd85e68b6f01bfd7d5be1bea4", "title": "Learning input correlations through nonlinear temporally asymmetric Hebbian plasticity.", "text": "Triggered by recent experimental results, temporally asymmetric Hebbian (TAH) plasticity is considered as a candidate model for the biological implementation of competitive synaptic learning, a key concept for the experience-based development of cortical circuitry. However, because of the well known positive feedback instability of correlation-based plasticity, the stability of the resulting learning process has remained a central problem. Plagued by either a runaway of the synaptic efficacies or a greatly reduced sensitivity to input correlations, the learning performance of current models is limited. Here we introduce a novel generalized nonlinear TAH learning rule that allows a balance between stability and sensitivity of learning. Using this rule, we study the capacity of the system to learn patterns of correlations between afferent spike trains. Specifically, we address the question of under which conditions learning induces spontaneous symmetry breaking and leads to inhomogeneous synaptic distributions that capture the structure of the input correlations. To study the efficiency of learning temporal relationships between afferent spike trains through TAH plasticity, we introduce a novel sensitivity measure that quantifies the amount of information about the correlation structure in the input, a learning rule capable of storing in the synaptic weights. We demonstrate that by adjusting the weight dependence of the synaptic changes in TAH plasticity, it is possible to enhance the synaptic representation of temporal input correlations while maintaining the system in a stable learning regime. Indeed, for a given distribution of inputs, the learning efficiency can be optimized."}
{"_id": "e51834df62d013dcf4790abee94104345a5d15f5", "title": "Battery Charger for Electric Vehicle Traction Battery Switch Station with PV System", "text": "This paper presents the functionality of a commercialized fast charger for a lithium-ion electric vehicle propulsion battery. The device is intended to operate in a battery switch station, allowing an up-to 1-h recharge of a 25-kWh depleted battery, removed from a vehicle. The charger is designed as a dual-stage-controlled ac/dc converter. The input stage consists of a three-phase full-bridge diode rectifier combined with a reduced rating shunt active power filter. The input stage creates an uncontrolled pulsating dc bus while complying with the grid codes by regulating the total harmonic distortion and power factor according to the predetermined permissible limits. The output stage is formed by six interleaved groups of two parallel dc\u2013dc converters, fed by the uncontrolled dc bus and performing the battery charging process. The charger is capable of operating in any of the three typical charging modes: constant current, constant voltage, and constant power. Simulation results are shown to demonstrate the functionality of the device with including of Photovoltaic system. I.INTRODUCTION The traction battery is undoubtedly the most critical component of an electric vehicle (EV), since the cost and weight as well as the reliability and driving range of the vehicle are strongly influenced by the battery characteristics. Modern rechargeable lithium batteries, which are, by far, the most power or energy dense among modern batteries, are commonly used in traction applications. The high energy/power content requires appropriate battery management to ensure safety and optimal performance. In particular, proper recharging is essential in order to utilize the full capacity of the battery pack and preserve its nominal lifetime there are two common types of vehicle battery chargers. The onboard (often referred to as slow or low power) charger is located on board. The propulsion battery is recharged via the slow charger, plugged into a charging spot, while the vehicle is at parking lot . The off board (so-called fast or high power) charger is located at the battery switch station (BSS). The battery must be removed from the vehicle to be recharged via the fast charger (FC). The slow charger usually operates at 0.15\u20130.25-C rates, while the FC rate may typically reach 2-C rates, i.e., while charging a 25-kWh battery; the slow charger supplies 3\u20134 kW, while the FC peak power is typically 30\u201350 kW. The typical concept of EV includes urban driving only, where the full battery charge is sufficient for mediumrange routes of 50\u2013100 miles. Recharging is accomplished by plugging the car into charge spots placed at different city locations throughout the day and at driver\u2019s home during the night. Recently, a paradigm shift toward closing the gap between EV and conventional vehicles has occurred, forcing the infrastructure to support EV intercity driving as well. The following concept of BSS was developed: When out of charge, the EV battery can be replaced at a BSS, allowing nearly uninterrupted long range driving. The replacement process takes 2\u20134 min, similar to the duration of conventional refuelling process . The near empty battery, removed from a vehicle at the BSS, is recharged by an FC to be available as quickly as possible for the next customer. The charging time is obviously crucial, affecting the battery stock. 
For example, assuming a 4-min battery replacement time, 15 vehicles per hour may be processed by each service lane. If the battery charging time is 1 h, a minimum stock of 15 batteries per lane should be present at the station. Reducing the charging time obviously reduces the stock as well. It is worth noting that, since there is no human involvement in the fast charging process, galvanic isolation is usually not required to be present in an FC. The FC is basically a controlled ac/dc power supply, drawing the power from the three-phase ac utility grid, converting it to dc and injecting it into the traction battery. In order to create a feasible solution, the FC must both satisfy the grid code in terms of power factor (PF) and harmonic content from the utility side and support lithium-ion charging modes from the battery side. Since the BSS usually contains multiple FCs, its impact on the distribution grid is very significant, as shown by previous research. Therefore, the input stage of the FC usually performs PF correction (PFC) according to the regulation requirements in addition to rectification. It can be accomplished by employing either an active rectifier, or a diode rectifier combined with a PFC circuit. The well-known single-phase PFC approach, utilizing an uncontrolled rectifier followed by a full-rating boost dc\u2013dc converter, is unsuitable for the three-phase diode rectifier case. However, it can be modified by splitting the three-phase rectifier into either two single-phase legs followed by two independent PFC converters or three single-phase \u0394- or Y-connected stages. Alternatively, a more elegant approach employs a shunt-connected active power filter (APF) at the uncontrolled rectifier input, supplying the reactive current to the diode rectifier, thus achieving both near-unity PF and near-zero total harmonic distortion (THD) by letting the utility supply the active current only, which is in phase with the utility voltage and of the same shape. The use of either one three-phase or three single-phase APF configurations is potentially feasible for implementing a three-phase PFC stage. An additional advantage of the approach is the fact that, because of the shunt connection, the APF rating is less than one-third of the bridge rectifier rating, since the APF supplies the reactive and harmonic power only, while a series-connected PFC converter rating is equal to the load kilovolt-ampere rating. The resulting loss reduction is an extremely desirable feature for an FC, since the dissipated heat must be removed from the BSS by means of a cumbersome ventilation system, whose complexity is proportional to the amount of heat to be removed. From the battery side, conventional lithium-ion battery charging is characterized by two main phases: constant current (CC) and constant voltage (CV). Recently, constant power (CP) charging became popular in large vehicle battery packs. Hence, the charger output stage (typically consisting of dc\u2013dc converters) must be capable of operating as either a current or voltage source. Alternatively, it can be operated as a voltage supply with dynamic current limitation. CP mode is usually achieved by operating as a current source, constantly varying according to the power profile. 
Moreover, the charger output current ripple should be kept as low as possible in order to prevent undesired influence on the battery chemistry. The well-known solution, allowing splitting the load power between multiple modules in order to reduce both the conduction losses and current ripple, is interleaving. Interleaving employs parallel operation of converters, whose output current is equally shifted with respect to the others such that, when summed, the current ripples partially cancel each other, creating a low-ripple total output current. In addition, interleaving also reduces the implementation challenge of designing a single full-rating converter by using several lower-rating converters instead, at the expense of somewhat increased hardware cost, volume/weight, and more complex control circuitry. II. POWER QUALITY Electric power quality may be defined as a measure of how well electric power service can be utilized by customers. A power quality problem is an occurrence manifested as a nonstandard voltage, current or frequency that results in a failure or a misoperation of end-user equipment. To compensate harmonics, conventional passive filters are used for a specific number of harmonics. To suppress the total harmonic content, active power filters are used. For all types of power quality solutions at the distribution system voltage level, DFACTS devices, also called Custom Power Devices, are introduced to improve power quality. Power quality is certainly a major concern in the present era; it has become especially important with the introduction of sophisticated devices, whose performance is very sensitive to the quality of the power supply. Modern industrial processes are based on a large amount of electronic devices such as programmable logic controllers and adjustable speed drives. Electronic devices are very sensitive to disturbances, and thus industrial loads become less tolerant to power quality problems. Power quality has become an important issue since many loads at various distribution ends, like adjustable speed drives, process industries, printers, domestic utilities, computers, microprocessor-based equipment, etc., have become intolerant to voltage fluctuations, harmonic content and interruptions. Power quality mainly deals with issues like maintaining a fixed voltage at the point of common coupling for various distribution voltage levels irrespective of voltage fluctuations, maintaining near-unity power factor for the power drawn from the supply, blocking of voltage and current unbalance from passing upwards from various distribution levels, reduction of voltage and current harmonics in the system, and suppression of excessive supply neutral current. Recently, the importance of power quality issues has increased due to various reasons. First of all, there have been changes in the nature of electrical loads. On one hand, the characteristics of load have become more complex due to the increased use of power electronic equipment, which results in a deviation of voltage and current from its sinusoidal waveform."}
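The ripple-cancellation property of interleaving described above is easy to verify numerically. The sketch below sums N phase-shifted triangular ripple waveforms; the duty cycle and module count are arbitrary assumptions, and real converter ripple shapes will differ.

```python
import numpy as np

def summed_ripple_pp(n_modules, duty=0.35, points=6000):
    """Peak-to-peak value of the sum of n phase-shifted unit triangular ripples."""
    t = np.linspace(0.0, 1.0, points, endpoint=False)  # one switching period
    def ripple(phase):
        x = (t + phase) % 1.0
        # Asymmetric triangle: rises during the duty interval, falls after it.
        return np.where(x < duty, x / duty, (1.0 - x) / (1.0 - duty)) - 0.5
    total = sum(ripple(k / n_modules) for k in range(n_modules))
    return float(np.ptp(total))

print(summed_ripple_pp(1), summed_ripple_pp(6))  # 1.0 vs. a much smaller residual
```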
{"_id": "5f9d8263ab657a9985f8f5f252a3e2da49b6e15b", "title": "A stretchable PIFA antenna", "text": "This paper presents a stretchable Planar Inverted-F Antenna (PIFA) based on a new metallization technique of depositing nanoscale thin (50/100nm) Au film on elastomer Polydimethylsiloxane (PDMS). The thin metal films can be reversibly stretched up to 20% without losing electrical conduction. The PIFA antenna made of new materials can work under a 10% strain and return to its original state after removal of an applied stress. Simulation and measurement results for the antenna performance are given."}
{"_id": "addd4296263321fc8f1cfbd67a1e3313faaf5480", "title": "A secure triple level encryption method using cryptography and steganography", "text": "Cryptography is the practice for secure communication in the presence of third parties. Steganography is technique of writing hidden messages such that except the sender and receiver, no one even suspects the existence of the message. In today's hi-tech age, threats from intruders are very great such that usage of either of the above techniques separately may not be able to provide the intended protection. In order to increase the level of protection, both the techniques may be used in a combined manner. Multimedia techniques can also be used to hide the data. In this paper, we propose an encrypting system which combines techniques of cryptography and steganography with data hiding. Instead of using a single level of data encryption, the message is encrypted twice. Traditional techniques have been used for this purpose. Then the cipher is hidden inside the image in encrypted format for further use. It uses a reference matrix for selection of passwords depending on the properties of the image. The image with the hidden data is used for further purposes."}
{"_id": "b9acc402f77a2897314c1dcfafec0f5eb6ac1666", "title": "Achieving high navigation accuracy using inertial navigation systems in autonomous underwater vehicles", "text": "This paper presents a summary of the current state-of-the-art in INS-based navigation systems in AUVs manufactured by Bluefin Robotics Corporation. A detailed description of the successful integrations of the Kearfott T-24 Ring Laser Gyro and the IXSEA PHINS III Fiber Optic Gyro into recent Bluefin Robotics AUVs is presented. Both systems provide excellent navigation accuracy for high quality data acquisition. This paper provides a comprehensive assessment of the primary advantages and disadvantages of each INS, paying particular attention to navigation accuracy, power draw, physical size, and acoustic radiated noise. Additionally, a brief presentation of a recently integrated Synthetic Aperture Sonar system will be used to highlight how critical a high-performance INS is to hydrographic, mine countermeasures, and other SAS applications."}
{"_id": "a1f38df539a62bfc8b0886d91610cbc374336ab1", "title": "An empirical study of tokenization strategies for biomedical information retrieval", "text": "Due to the great variation of biological names in biomedical text, appropriate tokenization is an important preprocessing step for biomedical information retrieval. Despite its importance, there has been little study on the evaluation of various tokenization strategies for biomedical text. In this work, we conducted a careful, systematic evaluation of a set of tokenization heuristics on all the available TREC biomedical text collections for ad hoc document retrieval, using two representative retrieval methods and a pseudo-relevance feedback method. We also studied the effect of stemming and stop word removal on the retrieval performance. As expected, our experiment results show that tokenization can significantly affect the retrieval accuracy; appropriate tokenization can improve the performance by up to 96%, measured by mean average precision (MAP). In particular, it is shown that different query types require different tokenization heuristics, stemming is effective only for certain queries, and stop word removal in general does not improve the retrieval performance on biomedical text."}
{"_id": "6f4b00783bd55d4cd3912b64737d9e426438a0e6", "title": "The new blocs on the block: using community forums to foster new neighbourhoods", "text": "Research has consistently shown that online tools increase social capital. In the context of neighbourhoods Hampton and Wellman have shown that in newly developed areas residents effectively used mailing lists to connect with each other, circulate information, and ask for help. The research question of whether similar findings would hold in the larger context of a city for a long period of time is still open. To tackle this research question, we have gathered the complete dataset of the most popular neighbourhood online forum in Dublin. In this dataset, we have people sharing a common purpose (blocs) who live in the same neighbourhood and interact online to ask for help, engage in local activities, and, more generally, have a better understanding of their physical community. Our analysis highlights the particularly concentrated usage in newly established developments where a pre-existing community may be absent. Additionally, these communications provide a valuable resource to understand local issues relevant to the community."}
{"_id": "5a405ce49951778001456fd17b444ec717d87cc7", "title": "Learning MultiGoal Inverse Kinematics in Humanoid Robot", "text": "General Inverse Kinematic (IK) solvers may not guarantee real-time control of the end-effectors in external coordinates along with maintaining stability. This work addresses this problem by using Reinforcement Learning (RL) for learning an inverse kinematics solver for reachability tasks which ensures stability and self-collision avoidance while solving for end effectors. We propose an actor-critic based algorithm to learn joint space trajectories of stable configuration for solving inverse kinematics that can operate over continuous action spaces. Our approach is based on the idea of exploring the entire workspace and learning the best possible configurations. The proposed strategy was evaluated on the highly articulated upper body of a 27 degrees of freedom (DoF) humanoid for learning multi-goal reachability tasks of both hands along with maintaining stability in double support phase. We show that the trained model was able to solve inverse kinematics for both the hands, where the articulated torso contributed to both the tasks."}
{"_id": "c72deb198ed5a6b02ed64a59072e415317f50bcd", "title": "Park Here! a smart parking system based on smartphones' embedded sensors and short range Communication Technologies", "text": "Nowadays, the convergence of Internet of Things (IoT) networking and mobile applications is favoring the deployment of novel and advanced smart parking systems through which users can be informed in real-time about the presence of vacant parking spots close to their destinations. In this paper, we provide an example of such opportunity, by describing Park Here!, a novel mobile application that aims at mitigating the overhead caused by parking spot seeking operations in urban areas. Our solution targets common city environments, where no per-spot sensors are available, and there is no remote service allowing the reservation in-advance of a parking spot. For this scenario, we propose a novel algorithm for the automatic detection of parking actions performed by the user, through the analysis of smartphone embedded sensors' (accelerometer/gyroscope), and of the Bluetooth connectivity. Once a parking event has been detected, an adaptive strategy allows disseminating the information over the target scenario, using a combination of Internet connection to a remote server, and Device-to-Device (D2D) connections over WiFi Direct links. Preliminary experiments demonstrate the accuracy of the proposed algorithm in correctly identifying parking events in an automatic way, and hence in notifying information to other potentially interested users."}
{"_id": "513969abb237d2e33e05bbc562488d53173ecf17", "title": "Prediction of Rainfall Time Series Using Modular Artificial Neural Networks Coupled with Data Preprocessing Techniques", "text": "1 Dept. of Civil and Structural Engineering, Hong Kong Polytechnic University, 4 Hung Hom, Kowloon, Hong Kong, People\u2019s Republic of China 5 6 2 Changjiang Institute of Survey, Planning, Design and Research, 7 Changjiang Water Resources Commission, 8 430010, Wuhan, HuBei, People\u2019s Republic of China 9 10 3 Department of Civil Engineering, Ryerson University, 11 350 Victoria Street, Toronto, Ontario, Canada, M5B 2K3 12 13 *Email: cekwchau@polyu.edu.hk 14 15 ABSTRACT 16"}
{"_id": "de1a87431309ff9e9d3bce7fb0b2f3b1a62b45e3", "title": "INTEGRATION OF BIM AND GIS : THE DEVELOPMENT OF THE CITYGML GEOBIM EXTENSION", "text": "There is a growing interest in the integration of BIM and GIS. However, most of the research is focused on importing BIM data in GIS applications and vice versa. Real integration of BIM and GIS is using the strong parts of the GIS technology in BIM, and of course the strong parts from BIM technology in GIS. In this paper a mix of strong parts from both worlds is integrated in a single project. The paper describes the development of a CityGML extension called GeoBIM to get semantic IFC data into a GIS context. The conversion of IFC to CityGML (including the GeoBIM extension) is implemented in the open source Building Information Modelserver. This contribution was selected in a double blind review process to be published within the Lecture Notes in Geoinformation and Cartography series (Springer-Verlag, Heidelberg). Advances in 3D Geo-Information Sciences Kolbe, Thomas H.; K\u00f6nig, Gerhard; Nagel, Claus (Eds.) 2011, X ISBN 978-3-642-12669-7, Hardcover Date of Publication: January 5, 2011 Series Editors: Cartwright, W., Gartner, G., Meng, L., Peterson, M.P. ISSN: 1863-2246 International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXVIII-4/W15 5th International 3D GeoInfo Conference, November 3-4, 2010, Berlin, Germany 193"}
{"_id": "46d9b965c7084fd6440a67777b93dc23a5f598e5", "title": "FLEXIBLE GENERATION OF SEMANTIC 3 D BUILDING MODELS", "text": "The paper presents a new 3D semantic building model dedicated to urban development. It provides the following special semantic features: storeys, passages, opening objects, and building variants, modelling different levels of detail and design respectively temporal variants. The model also includes a parametric part supporting easy generation of buildings and building variants in the urban design phase. In order to make the powerful tools of the CA(A)D world accessible to city modeling, a toolbox is being developed for transformation of IFC data. These extensions and addition facilitate multiple utilization of city models and thus enhance sustainability of modeling."}
{"_id": "a9fa678efd18d22223ae345756857f90250e3aa5", "title": "An investigation into the applicability of building information models in geospatial environment in support of site selection and fire response management processes", "text": "1474-0346/$ see front matter 2008 Elsevier Ltd. A doi:10.1016/j.aei.2008.06.001 * Corresponding author. Tel.: +90 312 266 00 5 (U. Isikdag); tel.: +44 161 295 62 90; fax: +44 016 tel.: +44 0161 295 4992; fax: +44 0161 295 3173 (G. E-mail addresses: egetera@superonline.com, uisik j.underwood@salford.ac.uk (J. Underwood), g.aoauad@ Some tasks in the construction industry and urban management field such as site selection and fire response management are usually managed by using a Geographical Information System (GIS), as the tasks in these processes require a high level and amount of integrated geospatial information. Recently, a key element of this integrated geospatial information to emerge is detailed geometrical and semantic information about buildings. In parallel, Building Information Models (BIMs) of today have the capacity for storing and representing such detailed geometrical and semantic information. In this context, the research aimed to investigate the applicability of BIMs in geospatial environment by focusing specifically on these two domains; site selection and fire response management. In the first phase of the research two use case scenarios were developed in order to understand the processes in these domains in a more detailed manner and to establish the scope of a possible software development for transferring information from BIMs into the geospatial environment. In the following phase of the research two data models were developed \u2013 a Schema-Level Model View and a geospatial data model. The Schema-Level Model View was used in simplifying the information acquired from the BIM, while the geospatial data model acted as the template for creating physical files and databases in the geospatial environment. Following this, three software components to transfer building information into the geospatial environment were designed, developed, and validated. The first component served for acquiring the building information from the BIM, while the latter two served for transforming the information into the geospatial environment. The overall research demonstrated that it is possible to transfer (high level of geometric and semantic) information acquired from BIMs into the geospatial environment. The results also demonstrated that BIMs provide a sufficient level and amount of (geometric and semantic) information (about the building) for the seamless automation of data management tasks in the site selection and fire response management processes. 2008 Elsevier Ltd. All rights reserved."}
{"_id": "b0755b87f517810a904ef0d847426725f6d4126b", "title": "Credit Assignment Techniques in Stochastic Computation Graphs", "text": "Stochastic computation graphs (SCGs) provide a formalism to represent structured optimization problems arising in artificial intelligence, including supervised, unsupervised, and reinforcement learning. Previous work has shown that an unbiased estimator of the gradient of the expected loss of SCGs can be derived from a single principle. However, this estimator often has high variance and requires a full model evaluation per data point, making this algorithm costly in large graphs. In this work, we address these problems by generalizing concepts from the reinforcement learning literature. We introduce the concepts of value functions, baselines and critics for arbitrary SCGs, and show how to use them to derive lower-variance gradient estimates from partial model evaluations, paving the way towards general and efficient credit assignment for gradient-based optimization. In doing so, we demonstrate how our results unify recent advances in the probabilistic inference and reinforcement learning literature."}
{"_id": "19c3736da5116e0e80a64db35afe421663c4b4a8", "title": "How to Play any Mental Game or A Completeness Theorem for Protocols with Honest Majority", "text": "We present a polynomial-time algorithm that, given as a input the description of a game with incomplete information and any number of players, produces a protocol for playing the game that leaks no partial information, provided the majority of the players is honest.\nOur algorithm automatically solves all the multi-party protocol problems addressed in complexity-based cryptography during the last 10 years. It actually is a completeness theorem for the class of distributed protocols with honest majority. Such completeness theorem is optimal in the sense that, if the majority of the players is not honest, some protocol problems have no efficient solution [C]."}
{"_id": "2fcda810f3f43cd1aaeea04782ae4234cb61a3f8", "title": "Dictionary of protein secondary structure: pattern recognition of hydrogen-bonded and geometrical features.", "text": "For a successful analysis of the relation between amino acid sequence and protein structure, an unambiguous and physically meaningful definition of secondary structure is essential. We have developed a set of simple and physically motivated criteria for secondary structure, programmed as a pattern-recognition process of hydrogen-bonded and geometrical features extracted from x-ray coordinates. Cooperative secondary structure is recognized as repeats of the elementary hydrogen-bonding patterns \u201cturn\u201d and \u201cbridge.\u201d Repeating turns are \u201chelices,\u201d repeating bridges are \u201cladders,\u201d connected ladders are \u201csheets.\u201d Geometric structure is defined in terms of the concepts torsion and curvature of differential geometry. Local chain \u201cchirality\u201d is the torsional handedness of four consecutive Ca positions and is positive for right-handed helices and negative for ideal twisted @-sheets. Curved pieces are defined as \u201cbends.\u201d Solvent \u201cexposure\u201d is given as the number of water molecules in possible contact with a residue. The end result is a compilation of the primary structure, including SS bonds, secondary structure, and solvent exposure of 62 different globular proteins. The presentation is in linear form: strip graphs for an overall view and strip tables for the details of each of 10,925 residues. The dictionary is also available in computer-readable form for protein structure prediction work."}
{"_id": "f3c8439de032f3b0cb62940d55e2db811cae4de8", "title": "Improved segmentation of abnormal cervical nuclei using a graph-search based approach", "text": "Reliable segmentation of abnormal nuclei in cervical cytology is of paramount importance in automation-assisted screening techniques. This paper presents a general method for improving the segmentation of abnormal nuclei using a graph-search based approach. More specifically, the proposed method focuses on the improvement of coarse (initial) segmentation. The improvement relies on a transform that maps round-like border in the Cartesian coordinate system into lines in the polar coordinate system. The costs consisting of nucleus-specific edge and region information are assigned to the nodes. The globally optimal path in the constructed graph is then identified by dynamic programming. We have tested the proposed method on abnormal nuclei from two cervical cell image datasets, Herlev and H&E stained liquid-based cytology (HELBC), and the comparative experiments with recent state-of-the-art approaches demonstrate the superior performance of the proposed method."}
{"_id": "3aa4a74523ba0babec3311614fc3147f85bdf733", "title": "The inferior parietal lobule and temporoparietal junction: A network perspective", "text": "Information processing in specialized, spatially distributed brain networks underlies the diversity and complexity of our cognitive and behavioral repertoire. Networks converge at a small number of hubs - highly connected regions that are central for multimodal integration and higher-order cognition. We review one major network hub of the human brain: the inferior parietal lobule and the overlapping temporoparietal junction (IPL/TPJ). The IPL is greatly expanded in humans compared to other primates and matures late in human development, consistent with its importance in higher-order functions. Evidence from neuroimaging studies suggests that the IPL/TPJ participates in a broad range of behaviors and functions, from bottom-up perception to cognitive capacities that are uniquely human. The organization of the IPL/TPJ is challenging to study due to the complex anatomy and high inter-individual variability of this cortical region. In this review we aimed to synthesize findings from anatomical and functional studies of the IPL/TPJ that used neuroimaging at rest and during a wide range of tasks. The first half of the review describes subdivisions of the IPL/TPJ identified using cytoarchitectonics, resting-state functional connectivity analysis and structural connectivity methods. The second half of the article reviews IPL/TPJ activations and network participation in bottom-up attention, lower-order self-perception, undirected thinking, episodic memory and social cognition. The central theme of this review is to discuss how network nodes within the IPL/TPJ are organized and how they participate in human perception and cognition."}
{"_id": "e037fe8715debddd8bd32cba48aaa980bdd6a68e", "title": "A Comparison of learning algorithms on the Arcade Learning Environment", "text": "Reinforcement learning agents have traditionally been evaluated on small toy problems. With advances in computing power and the advent of the Arcade Learning Environment, it is now possible to evaluate algorithms on diverse and difficult problems within a consistent framework. We discuss some challenges posed by the arcade learning environment which do not manifest in simpler environments. We then provide a comparison of model-free, linear learning algorithms on this challenging problem set."}
{"_id": "72bf9c5787d7ff56a1697a3389f11d14654b4fcf", "title": "Robust Face Recognition Using Symmetric Shape-from-Shading", "text": "Sensitivity to variations in illumination is a fundamental and challenging problem in face recognition. In this paper, we describe a new method based on symmetric shape-from-shading (SFS) to develop a face recognition system that is robust to changes in illumination. The basic idea of this approach is to use the symmetric SFS algorithm as a tool to obtain a prototype image which is illumination-normalized. Applying traditional SFS algorithms to real images of complex objects (in terms of their shape and albedo variations) such as faces is very challenging. It is shown that the symmetric SFS algorithm has a unique point-wise solution. In practice, given a single real face image with complex shape and varying albedo, even the symmetric SFS algorithm cannot guarantee the recovery of accurate and complete shape information. For the particular problem of face recognition, we utilize the fact that all faces share a similar shape making the direct computation of the prototype image from a given face image feasible. The symmetry property has been used to develop a new model-based source-from-shading algorithm which is more accurate than existing algorithms. Using the symmetric property, we are also able to gain insight into how changes in illumination a ect eigen-subspace based face recognition systems. Finally, to demonstrate the e cacy of our method, we have applied it to several publicly available face databases. We rst compare a new illumination-invariant measure to the measure proposed in (Jacobs et al., 1998), and then demonstrate signi cant performance improvement over existing face recognition systems using PCA and/or LDA for images acquired under variable lighting conditions."}
{"_id": "95240b1df285f86a1bd9df1daaf6140ef46f0fcb", "title": "Real-time lane detection and forward collision warning system based on stereo vision", "text": "This paper presents a real-time and robust lane detection and forward collision warning technique based on stereo cameras. First, obstacles image is obtained through stereo matching and UV-disparity segmentation algorithm. Then, Inverse Perspective Mapping(IPM) and Sobel filtering are conducted to generate a low-noise top view of the road by fusing the obstacles image and the original image. Next, Hough Transformation for the top view map is completed and the extreme points(poles) are calculated as the detected lanes according to the traffic lanes model. Besides, the host lane is selected or supplemented among all the detected lanes and the nearest obstacle in this host lane is detected for the forward collision warning. Experimental results on the public data set indicate that our method can work effectively and real-timely in the normal structured environment."}
{"_id": "774f7b50e3901664caa66463cdb3af410c3440ad", "title": "DSP Implementation in the Loop of the Vector Control Drive of a Permanent Magnet Synchronous Machine", "text": "The aim of this work is to design and implement the FOC (Field Oriented Control) of the permanent magnet synchronous machine (PMSM) on DSP in Hardware In the Loop (HIL). The MATLAB /Simulink, the IDE Code Composer Studio and the DSP TMS320F28335 platform will be used to perform the simulation, the code generation and the HIL.\n The obtained performances of this command (a speed response time of 10 ms and a zero static error vis-\u00e0-vis the setpoint and disruption) converge to those obtained with conventional simulation on SIMULINK"}
{"_id": "20824179b18f2812f9cd1c98ca65bf2be5cc8bd5", "title": "Automatic Vehicle Accident Detection and Messaging System Using GSM and GPS Modem", "text": "The Rapid growth of technology and infrastructure has made our lives easier. The advent of technology has also increased the traffic hazards and the road accidents take place frequently which causes huge loss of life and property because of the poor emergency facilities. Our project will provide an optimum solution to this draw back. An accelerometer can be used in a car alarm application so that dangerous driving can be detected. It can be used as a crash or rollover detector of the vehicle during and after a crash. With signals from an accelerometer, a severe accident can be recognized. According to this project when a vehicle meets with an accident immediately Vibration sensor will detect the signal or if a car rolls over, and Micro electro mechanical system (MEMS) sensor will detects the signal and sends it to ARM controller. Microcontroller sends the alert message through the GSM MODEM including the location to police control room or a rescue team. So the police can immediately trace the location through the GPS MODEM, after receiving the information. Then after conforming the location necessary action will be taken. If the person meets with a small accident or if there is no serious threat to anyone`s life, then the alert message can be terminated by the driver by a switch provided in order to avoid wasting the valuable time of the medical rescue team. This paper is useful in detecting the accident precisely by means of both vibration sensor and Micro electro Mechanical system (MEMS) or accelerometer. As there is a scope for improvement and as a future implementation we can add a wireless webcam for capturing the images which will help in providing driver`s assistance."}
{"_id": "b117e3ab62ce5a405428bd3b9f76afd1293e0c85", "title": "Counting Graphlets: Space vs Time", "text": "Counting graphlets is a well-studied problem in graph mining and social network analysis. Recently, several papers explored very simple and natural approaches based on Monte Carlo sampling of Markov Chains (MC), and reported encouraging results. We show, perhaps surprisingly, that this approach is outperformed by a carefully engineered version of color coding (CC) [1], a sophisticated algorithmic technique that we extend to the case of graphlet sampling and for which we prove strong statistical guarantees. Our computational experiments on graphs with millions of nodes show CC to be more accurate than MC. Furthermore, we formally show that the mixing time of the MC approach is too high in general, even when the input graph has high conductance. All this comes at a price however. While MC is very efficient in terms of space, CC's memory requirements become demanding when the size of the input graph and that of the graphlets grow. And yet, our experiments show that a careful implementation of CC can push the limits of the state of the art, both in terms of the size of the input graph and of that of the graphlets."}
{"_id": "d6b5767a7329a65a68df34650b2ced2988a0fb14", "title": "CommonSense: Collaborative learning of scene semantics by robots and humans", "text": "The recent introduction of robots to everyday scenarios has revealed new opportunities for collaboration and social interaction between robots and people. However, high level interaction will require semantic understanding of the environment. In this paper, we advocate that co-existence of assistive robots and humans can be leveraged to enhance the semantic understanding of the shared environment, and improve situation awareness. We propose a probabilistic framework that combines human activity sensor data generated by smart wearables with low level localisation data generated by robots. Based on this low level information and leveraging colocation events between a user and a robot, it can reason about semantic information and track humans and robots across different rooms. The proposed system relies on two-way sharing of information between the robot and the user. In the first phase, user activities indicative of room utility are inferred from consumer wearable devices and shared with the robot, enabling it to gradually build a semantic map of the environment. This will enable natural language interaction and high-level tasks for both assistive and co-working robots. In a second phase, via colocation events, the robot is able to share semantic information with the user, by labelling raw user data with semantic information about room type. Over time, the labelled data is used for training an Hidden Markov Model for room-level localisation, effectively making the user independent from the robot."}
{"_id": "a66ac9c27be015b1432e0844fd8205f7600e9046", "title": "Optimal Dynamic Hotel Pricing \u2217", "text": "We analyze a confidential reservation database provided by a luxury hotel, \u201dhotel 0\u201d, based in a major US city that enables us to observe individual reservations and cancellations at a daily frequency over a 37 month period. We show how the hotel sets prices for various classes of customers and how its prices vary over time. Hotel pricing is a challenging high-dimensional problem since hotels must not only set prices for each current date, but they must also quote prices for a range of future dates, room types and customer types. Our data reveal the full path of room rates quoted for different types of rooms and customers in advance of the arrival date. We find large within and between week variability in room prices, as well as huge seasonal variations in average daily rates and occupancy rates, not only for the hotel we study but also for its direct competitors. We formulate and estimate a structural model of optimal dynamic hotel pricing using the Method of Simulated Moments (MSM). The estimated model provides accurate predictions of the actual prices set by this firm and resulting paths of bookings and cancellations. Prices quoted for bookings generally decline as the arrival date approaches on nonbusy days, but can increase dramatically in the final days before arrival on busy days when there is a high probability of sell-out. Hotel 0\u2019s prices co-move strongly with its competitors\u2019 prices and we show that a simple price-following strategy where hotel 0 undercuts its competitors\u2019 average price by a fixed percentage provides a good first approximation to its pricing behavior. However we show that simple price-following is suboptimal: when hotel 0 expects to sell out, it is optimal to depart from price-following and increase its price significantly above its competitors. On non-busy days, it is not optimal for hotel 0 to cut its prices in the final days before arrival to try to increase occupancy unless its competitors cut their prices. Though price-following has the superficial appearance of collusive behavior mediated by the use of a commercial revenue management system (RMS), our results suggest that hotel 0\u2019s pricing is competitive and is best described as a rational best response to its beliefs about demand and the prices set by its competitors. In fact hotel 0 regularly disregards the recommended prices of its RMS, which it regards as too low compared to the prices it actually sets."}
{"_id": "c3afdd2730dabd5ebac75a2cffce70e84a2a13ce", "title": "Boosting Recurrent Neural Networks for Time Series Prediction", "text": "We adapt a boosting algorithm to the problem of predicting future values of time series, using recurrent neural networks as base learners. The experiments we performed show that boosting actually provides improved results and that the weighted median is better for combining the learners than the weighted mean."}
{"_id": "c13df80b0c2e3cf75f7617ec5c7d146ab97cff7e", "title": "Establishing rigour in qualitative research: the decision trail.", "text": "The aim of this paper is to show the way in which the decision trail of a qualitative research process can be maintained. It is argued that the trustworthiness (rigour) of a study may be established if the reader is able to audit the events, influences and actions of the researcher. The actual study containing the recording of this decision trail aimed to express the concerns of older patients who were admitted to the acute care sector. The study took place in two care of the elderly wards in a 1000-bed National Health Service hospital in the UK, in 1991. Eventually, 14 patients were interviewed, each on several occasions, and their concerns are expressed in themes, namely: routine geriatric style of care, depersonalization, care deprivation and geriatric segregation. I describe the preparations that were undertaken before patient interviews could commence. The literature recording the process of the interviewer's experience as data in qualitative research is scarce. I show the researcher's participation in making the data as part of an existential phenomenological research process. Existential phenomenology relies on recording influences while generating data such as significant literature, media reports, my value position and journal data."}
{"_id": "9c14d9b3a717ed4e9bd006da3ec8bd38b69b40dd", "title": "A balanced psychology and a full life.", "text": "Psychology since World War II has been largely devoted to repairing weakness and understanding suffering. Towards that end, we have made considerable gains. We have a classification of mental illness that allows international collaboration, and through this collaboration we have developed effective psychotherapeutic or pharmacological treatments for 14 major mental disorders. However, while building a strong science and practice of treating mental illness, we largely forgot about everyday well-being. Is the absence of mental illness and suffering sufficient to let individuals and communities flourish? Were all disabling conditions to disappear, what would make life worth living? Those committed to a science of positive psychology can draw on the effective research methods developed to understand and treat mental illness. Results from a new randomized, placebo-controlled study demonstrate that people are happier and less depressed three months after completing exercises targeting positive emotion. The ultimate goal of positive psychology is to make people happier by understanding and building positive emotion, gratification and meaning. Towards this end, we must supplement what we know about treating illness and repairing damage with knowledge about nurturing well-being in individuals and communities."}
{"_id": "8fbc8880696464e05561e5f5737c2df4b9f53e35", "title": "Clustering versus faceted categories for information exploration", "text": "Information seekers often express a desire for a user interface that organizes search results into meaningful groups, in order to help make sense of the results, and to help decide what to do next. A longitudinal study in which participants were provided with the ability to group search results found they changed their search habits in response to having the grouping mechanism available [2]. There are many open research questions about how to generate useful groupings and how to design interfaces to support exploration using grouping. Currently two methods are quite popular: clustering and faceted categorization. Here, I describe both approaches and summarize their advantages and disadvantages based on the results of usability studies. Clustering refers to the grouping of items according to some measure of similarity. In document clustering, similarity is typically computed using associations and commonalities among features, where features are typically words and phrases [1]. One of the better implementations of clustering of Web results can be found at Clusty.com. The greatest advantage of clustering is that it is fully automatable and can be easily applied to any text collection. Clustering can also reveal interesting and potentially unexpected or new trends in a group of documents. A query on \u201cNew Orleans\u201d\u2019 run on Clusty.com on Sept. 16, 2005 (shortly after the devastation wreaked by Hurricane Katrina), revealed a top-ranked cluster titled Hurricane, followed by the more standard groupings of Hotels, Louisiana, University, and Mardi Gras. Clustering can be useful for clarifying and sharpening a vague query, by showing users the dominant themes of the returned results [2]. Clustering also works well for disambiguating ambiguous queries; particularly acronyms. For example, ACL can stand for Anterior Cruciate Ligament, Association for Computational Linguistics, Atlantic Coast Line Railroad, among others. Unfortunately, because clustering algorithms are imperfect, they do not neatly group all occurrences of each acronym into one cluster, nor do they allow users to issue follow-up queries that only return documents from the intended sense (for example, \u201cACL meeting\u201d will return meetings for multiple senses of the term). By Marti A. Hearst"}
{"_id": "6448ff6d8b4aa370969342ccc78004eb6266b10b", "title": "Management Dimension for Assessing Organizations' Readiness toward Business Intelligence Systems", "text": "The improvements and capabilities of Business Intelligence (BI) systems are making them highly required by organizations in many of the fields. Day by day, the demand is increasing on these systems. Figures show that billions of US dollars are spent annually on BI systems. However, half of these BI projects are ending with unrealized benefit, according to some studies. As a participation to reduce this failure rate, the authors have developed a conceptual model, based on the existing studies in the field, which may help organizations to assess their readiness toward BI systems. The conceptual model is tested empirically on the Context of Malaysian Organizations(CMOs) as they are emerging widely to BI systems and they lack for this kind of studies to be done them. The conceptual model is consisting of seven dimensions. Depending on the results of the conducted study, this paper will disclose the results of management dimension, which will help to know the degree of its importance for organizations' to be ready for such systems. A brief introduction on BI systems and the challenges of their implementation will be presented. In Addition, this paper will go through some important studies have been done in the field. The details of the study will be provided as well."}
{"_id": "a5b1aec261cc7e7e1bb0a6fedef7cf6a86d871d4", "title": "A Meta-Model for the Analysis and Design of Organizations in Multi-Agent Systems", "text": "This paper presents a generic meta-model of multi-agent systems based on organizational concepts such as groups, roles and structures. This model, called AALAADIN, defines a very simple description of coordination and negotiation schemes through multi-agent systems. Aalaadin is a meta-model of artificial organization by which one can build multi-agent systems with different forms of organizations such as market-like and hierarchical organizations. We show that this meta-model allows for agent heterogeneity in languages, applications and architectures. We also introduce the concept of organizational reflection which uses the same conceptual model to describe systemlevel tasks such as remote communication and migration of agents. Finally, we briefly describe a platform, called MADKIT, based on this model. It relies on a minimal agent kernel with platform-level services implemented as agents, groups and roles."}
{"_id": "ea0cb20d5f71c937abc20b00391141bf12cc6fd5", "title": "Coreference in Spoken vs. Written Texts: a Corpus-based Analysis", "text": "This paper describes an empirical study of coreference in spoken vs. written text. We focus on the comparison of two particular text types, interviews and popular science texts, as instances of spoken and written texts since they display quite different discourse structures. We believe in fact, that the correlation of difficulties in coreference resolution and varying discourse structures requires a deeper analysis that accounts for the diversity of coreference strategies or their sub-phenomena as indicators of text type or genre. In this work, we therefore aim at defining specific parameters that classify differences in genres of spoken and written texts such as the preferred segmentation strategy, the maximal allowed distance in or the length and size of coreference chains as well as the correlation of structural and syntactic features of coreferring expressions. We argue that a characterization of such genre dependent parameters might improve the performance of current state-of-art coreference resolution technology."}
{"_id": "f06fb35972f67d6d321ebb7eda7a8894d133af3c", "title": "HOW THE EYE MEASURES REALITY AND VIRTUAL REALITY", "text": "tures throughout our history. The paintings at Niaux, Altamira, and Lascaux (Clottes, 1995; Ruspoli, 1986), for example, are known to be about 14,000 years old, but with the recently discovered paintings in the Grotte Chauvet, the origin of representational art appears to have been pushed back even further (Chauvet, Brunel Deschamps, & Hillaire, 1995; Clottes, 1996), to 20,000 years ago if not longer.1 Thus, these paintings date from about the time at which homo sapiens sapiens first appeared in Europe (Nougier, 1969). We should remember these paintings in the context of virtual reality; our fascination with pictures is by no means recent. My intent is threefold: first, to discuss our perception of the cluttered layout, or space, that we normally find around us; second, to discuss the development of representational art up to our current appreciation of it; and third, to apply this knowledge to virtual reality systems. The first discussion focuses on the use of multiple sources of information specifying ordinal depth relations, within the theoretical framework that I have called directed perception (Cutting, 1986, 1991). The second discussion is embedded within the first, but it is steeped in neither history nor art history; instead, it is offered through the peculiar eyes of optics and psychology, particularly psychophysics. The third is addressed to one of the pressing problems of graphics and of virtual reality\u2014how we perceive the layout of the environments that we simulate."}
{"_id": "1ec59aece51a698bce34f393cf6474b926fd89ad", "title": "Exemplar Guided Unsupervised Image-to-Image Translation", "text": "Image-to-image translation task has become a popular topic recently. Most works focus on either one-to-one mapping in an unsupervised way or many-to-many mapping in a supervised way. However, a more practical setting is many-to-many mapping in an unsupervised way, which is harder due to the lack of supervision and the complex inner and cross-domain variations. To alleviate these issues, we propose the Exemplar Guided UNsupervised Image-to-image Translation (EGUNIT) network which conditions the image translation process on an image in the target domain. An image representation is assumed to comprise both content information which is shared across domains and style information specific to one domain. By applying exemplar-based adaptive instance normalization to the shared content representation, EG-UNIT manages to transfer the style information in the target domain to the source domain. Experimental results on various datasets show that EG-UNIT can indeed translate the source image to diverse instances in the target domain with semantic consistency."}
{"_id": "8c0dccb76249df1e152b30f48c2dee1fd146f7ac", "title": "Usability Dimensions for Mobile Applications-A Review", "text": "Usability has been increasingly recognized as a significant quality dimension to determine the success of mobile applications. Due to its importance, a number of usability guidelines have been proposed to direct the design of usable applications. The guidelines are intended particularly for desktop and web-based applications. Mobile applications on the other hand are different in many ways from those applications due to the mobility nature of mobile devices. To date, the usability guidelines for mobile applications are very limited. They in fact are isolated, which makes usability evaluation for mobile devices more difficult. This study aims to address this issue by proposing a set of usability dimensions that should be considered for designing and evaluating mobile applications. The dimensions are illustrated as a model that considers four contextual factors: user, environment, technology and task/activity. The model was proposed based on the reviews of previous related studies, which were analyzed by using content analysis approach. Twenty-five dimensions were found from the analysis. The dimensions however were synthesized and prioritized based on their importance towards designing usable mobile applications. As a result, ten most important dimensions were outlined in the model. The model can be used by practitioners and researchers as a guideline to design usable mobile applications and further research can be conducted in the near future."}
{"_id": "94eae870703e3ec9ce8a00a502228e9c47128b7b", "title": "SMARTMUSEUM: A mobile recommender system for the Web of Data", "text": "Semantic and context knowledge have been envisioned as an appropriate solution for addressing the content heterogeneity and information overload in mobile Web information access, but few have explored their full potential in mobile scenarios, where information objects refer to their physical counterparts, and retrieval is context-aware and personalized for users. We present SMARTMUSEUM, a mobile ubiquitous recommender system for the Web of Data, and its application to information needs of tourists in context-aware on-site access to cultural heritage. The SMARTMUSEUM system utilizes Semantic Web languages as the form of data representation. Ontologies are used to bridge the semantic gap between heterogeneous content descriptions, sensor inputs, and user profiles. The system makes use of an information retrieval framework wherein context data and search result clustering are used in recommendation of suitable content for mobile users. Results from laboratory experiments demonstrate that ontology-based reasoning, query expansion, search result clustering, and context knowledge lead to significant improvement in recommendation performance. The results from field trials show that the usability of the system meets users\u2019 expectations in real-world use. The results indicate that semantic content representation and retrieval can significantly improve the performance of mobile recommender systems in knowledge-rich domains."}
{"_id": "dbedcbb1eebd6c614a468d51417e3f88347ba1fd", "title": "A visitor's guide in an active museum: Presentations, communications, and reflection", "text": "Technology can play a crucial role in supporting museum visitors and enhancing their overall museum visit experiences. Visitors coming to a museum do not want to be overloaded with information, but to receive the relevant information, learn, and have an overall interesting experience. To serve this goal, a user-friendly and flexible system is needed. The design of such a system poses several challenges that need to be addressed in parallel. The user interface should be intuitive and let the visitors focus on the exhibits, not on the technology. Content and delivery must provide relevant information and at the same time allow visitors to get the level of detail and the perspectives in which they are interested. Personalization may play a key role in providing relevant information to individuals. Yet, since visitors tend to visit the museum in small groups, technology should also contribute to and facilitate during-the-visit communication or post-visit group interaction. The PIL project applied at the Hecht museum extended the research results of the PEACH project and tried to address all of these considerations. Evaluation involving users substantiated several aspects of the design."}
{"_id": "0565b38859f76df988248e6054f0ed72156371ab", "title": "Smart cities in Europe", "text": "Urban performance currently depends not only on the city\u2019s endowment of hard infrastructure (\u2018physical capital\u2019), but also, and increasingly so, on the availability and quality of knowledge communication and social infrastructure (\u2018human and social capital\u2019). The latter form of capital is decisive for urban competitiveness. Against this background, the concept of the \u2018smart city\u2019 has recently been introduced as a strategic device to encompass modern urban production factors in a common framework and, in particular, to highlight the importance of Information and Communication Technologies (ICTs) in the last 20 years for enhancing the competitive profile of a city."}
{"_id": "16c9dfcb9648474a911f2faa76d9aa6721920752", "title": "Verifiable source code documentation in controlled natural language", "text": "Writing documentation about software internals is rarely considered a rewarding activity. It is highly time-consuming and the resulting documentation is fragile when the software is continuously evolving in a multi-developer setting. Unfortunately, traditional programming environments poorly support the writing and maintenance of documentation. Consequences are severe as the lack of documentation on software structure negatively impacts the overall quality of the software product. We show that using a controlled natural language with a reasoner and a query engine is a viable technique for verifying the consistency and accuracy of documentation and source code. Using ACE, a state-of-the-art controlled natural language, we present positive results on the comprehensibility and the general feasibility of creating and verifying documentation. As a case study, we used automatic documentation verification to identify and fix severe flaws in the architecture of a non-trivial piece of software. Moreover, a user experiment shows that our language is faster and easier to learn and understand than other formal languages for software documentation."}
{"_id": "2d533eb6ca5d47d745ed49c863ce3d8499b4bf17", "title": "Context-aware query suggestion by mining click-through and session data", "text": "Query suggestion plays an important role in improving the usability of search engines. Although some recently proposed methods can make meaningful query suggestions by mining query patterns from search logs, none of them are context-aware - they do not take into account the immediately preceding queries as context in query suggestion. In this paper, we propose a novel context-aware query suggestion approach which is in two steps. In the offine model-learning step, to address data sparseness, queries are summarized into concepts by clustering a click-through bipartite. Then, from session data a concept sequence suffix tree is constructed as the query suggestion model. In the online query suggestion step, a user's search context is captured by mapping the query sequence submitted by the user to a sequence of concepts. By looking up the context in the concept sequence sufix tree, our approach suggests queries to the user in a context-aware manner. We test our approach on a large-scale search log of a commercial search engine containing 1:8 billion search queries, 2:6 billion clicks, and 840 million query sessions. The experimental results clearly show that our approach outperforms two baseline methods in both coverage and quality of suggestions."}
{"_id": "38cd636e280b78cedeb3efbbe9c44d12325b219d", "title": "Spectral\u2013Spatial Classification of Hyperspectral Imagery Based on Partitional Clustering Techniques", "text": "A new spectral-spatial classification scheme for hyperspectral images is proposed. The method combines the results of a pixel wise support vector machine classification and the segmentation map obtained by partitional clustering using majority voting. The ISODATA algorithm and Gaussian mixture resolving techniques are used for image clustering. Experimental results are presented for two hyperspectral airborne images. The developed classification scheme improves the classification accuracies and provides classification maps with more homogeneous regions, when compared to pixel wise classification. The proposed method performs particularly well for classification of images with large spatial structures and when different classes have dissimilar spectral responses and a comparable number of pixels."}
{"_id": "3856d1cb357dfea0fb55dbdf0922657c8076f8cf", "title": "Nonvolatile memory design based on ferroelectric FETs", "text": "Ferroelectric FETs (FEFETs) offer intriguing possibilities for the design of low power nonvolatile memories by virtue of their three-terminal structure coupled with the ability of the ferroelectric (FE) material to retain its polarization in the absence of an electric field. Utilizing the distinct features of FEFETs, we propose a 2-transistor (2T) FEFET-based nonvolatile memory with separate read and write paths. With proper co-design at the device, cell and array levels, the proposed design achieves non-destructive read and lower write power at iso-write speed compared to standard FERAM. In addition, the FEFET-based memory exhibits high distinguishability with six orders of magnitude difference in the read currents corresponding to the two states. Comparative analysis based on experimentally calibrated models shows significant improvement of access energy-delay. For example, at a fixed write time of 550ps, the write voltage and energy are 58.5% and 67.7% lower than FERAM, respectively. These benefits are achieved with 2.4 times the area overhead. Further exploration of the proposed FEFET memory in energy harvesting nonvolatile processors shows an average improvement of 27% in forward progress over FERAM."}
{"_id": "1be37ab7b64c78351e20952d4261033328ecd69c", "title": "Abstract Semantic Differencing for Numerical Programs", "text": "Semantic Differencing for Numerical Programs Nimrod Partush Eran Yahav"}
{"_id": "61a2b442a872dc4adc4a543adb0ba8a32168ae09", "title": "Comparative genomic analysis identifies structural features of CRISPR-Cas systems in Riemerella anatipestifer", "text": "Riemerella anatipestifer infection is a contagious disease that has resulted in major economic losses in the duck industry worldwide. This study attempted to characterize CRISPR-Cas systems in the disease-causing agent, Riemerella anatipestifer (R. anatipestifer). The CRISPR-Cas system provides adaptive immunity against foreign genetic elements in prokaryotes and CRISPR-cas loci extensively exist in the genomes of archaea and bacteria. However, the structure characteristics of R. anatipestifer CRISPR-Cas systems remains to be elucidated due to the limited availability of genomic data. To identify the structure and components associated with CRISPR-Cas systems in R. anatipestifer, we performed comparative genomic analysis of CRISPR-Cas systems in 25 R. anatipestifer strains using high-throughput sequencing. The results showed that most of the R. anatipestifer strains (20/25) that were analyzed have two CRISPR loci (CRISPR1 and CRISPR2). CRISPR1 was shown to be flanked on one side by cas genes, while CRISPR2 was designated as an orphan. The other analyzed strains harbored only one locus, either CRISPR1 or CRISPR2. The length and content of consensus direct repeat sequences, as well as the length of spacer sequences associated with the two loci, differed from each other. Only three cas genes (cas1, cas2 and cas9) were located upstream of CRISPR1. CRISPR1 was also shown to be flanked by a 107\u00a0bp-long putative leader sequence and a 16\u00a0nt-long anti-repeat sequence. Combined with analysis of spacer organization similarity and phylogenetic tree of the R. anatipestifer strains, CRISPR arrays can be divided into different subgroups. The diversity of spacer organization was observed in the same subgroup. In general, spacer organization in CRISPR1 was more divergent than that in CRISPR2. Additionally, only 8\u00a0% of spacers (13/153) were homologous with phage or plasmid sequences. The cas operon flanking CRISPR1 was observed to be relatively conserved based on multiple sequence alignments of Cas amino acid sequences. The phylogenetic analysis associated with Cas9 showed Cas9 sequence from R. anatipestifer was closely related to that of Bacteroides fragilis and formed part of the subtype II-C subcluster. Our data revealed for the first time the structural features of R. anatipestifer CRISPR-Cas systems. The illumination of structural features of CRISPR-Cas system may assist in studying the specific mechanism associated with CRISPR-mediated adaptive immunity and other biological functions in R. anatipestifer."}
{"_id": "4524d286e33b0902d8d42a51df36cd0902b03156", "title": "Neural network-based reputation model in a distributed system", "text": "Current centralized trust models are inappropriate to apply in a large distributed multi-agent system, due to various evaluation models and partial observations in local level reputation management. This paper proposes a distributed reputation management structure, and develops a global reputation model. The global reputation model is a novel application of neural network techniques in distributed reputation evaluations. The experimental results showed that the model has robust performance under various estimation accuracy requirements. More important, the model is adaptive to changes in distributed system structures and in local reputation evaluations."}
{"_id": "f1e5c4b0d51c67b6fa38212cd1ab0fc9d3ad3a27", "title": "Infants' preferences for toys, colors, and shapes: sex differences and similarities.", "text": "Girls and boys differ in their preferences for toys such as dolls and trucks. These sex differences are present in infants, are seen in non-human primates, and relate, in part, to prenatal androgen exposure. This evidence of inborn influences on sex-typed toy preferences has led to suggestions that object features, such as the color or the shape of toys, may be of intrinsically different interest to males and females. We used a preferential looking task to examine preferences for different toys, colors, and shapes in 120 infants, ages 12, 18, or 24 months. Girls looked at dolls significantly more than boys did and boys looked at cars significantly more than girls did, irrespective of color, particularly when brightness was controlled. These outcomes did not vary with age. There were no significant sex differences in infants' preferences for different colors or shapes. Instead, both girls and boys preferred reddish colors over blue and rounded over angular shapes. These findings augment prior evidence of sex-typed toy preferences in infants, but suggest that color and shape do not determine these sex differences. In fact, the direction of influence could be the opposite. Girls may learn to prefer pink, for instance, because the toys that they enjoy playing with are often colored pink. Regarding within sex differences, as opposed to differences between boys and girls, both boys and girls preferred dolls to cars at age 12-months. The preference of young boys for dolls over cars suggests that older boys' avoidance of dolls may be acquired. Similarly, the sex similarities in infants' preferences for colors and shapes suggest that any subsequent sex differences in these preferences may arise from socialization or cognitive gender development rather than inborn factors."}
{"_id": "e4502d1869c0924bc6686e7ddbb953f5402391da", "title": "On The Architecture Scheduling Problem Of Industry 4.0", "text": "The recently emerged Fourth Industrial Revolution (Industry 4.0) is characterised by the introduction of the new Cyber-Physical System (CPS) concepts and the Internet of Things (IoT) paradigm. These new collaborating computational entities offer a broad range of opportunity to consider with a different perspective. One of the perennial problems of the manufacturing operation is the scheduling problem of typical job-shop manufacturing systems. Starting from a comparison with the typical architecture of an operating systems scheduler module, we introduce a new manufacturing scheduling architecture. Overcoming the typical Full-Hierarchical configuration defined in the ANSI/ISA 95 in favour of a SemiHeterarchical one, the introduced scheduling architecture leads to a mixture of proactive and reactive approach to the Job-shop Scheduling Problem (JSP), taking advantage from both the common decentralised and the centralised methodology. Keywords\u2014 Industry 4.0; Cyber-Physical System (CPS); JobShop Scheduling Problem; Manufacturing System; System of"}
{"_id": "197f137c0174141d65418502e8702281281cdf3a", "title": "Path integral guided policy search", "text": "3Sergey Levine is with Google Brain, Mountain View, CA 94043, USA. We present a policy search method for learning complex feedback control policies that map from high-dimensional sensory inputs to motor torques, for manipulation tasks with discontinuous contact dynamics. We build on a prior technique called guided policy search (GPS), which iteratively optimizes a set of local policies for specific instances of a task, and uses these to train a complex, high-dimensional global policy that generalizes across task instances. We extend GPS in the following ways: (1) we propose the use of a model-free local optimizer based on path integral stochastic optimal control (PI2), which enables us to learn local policies for tasks with highly discontinuous contact dynamics; and (2) we enable GPS to train on a new set of task instances in every iteration by using on-policy sampling: this increases the diversity of the instances that the policy is trained on, and is crucial for achieving good generalization. We show that these contributions enable us to learn deep neural network policies that can directly perform torque control from visual input. We validate the method on a challenging door opening task and a pick-and-place task, and we demonstrate that our approach substantially outperforms the prior LQR-based local policy optimizer on these tasks. Furthermore, we show that on-policy sampling significantly increases the generalization ability of these policies."}
{"_id": "654f0dfe78333cc5824a507486f7cede751cd301", "title": "Transcutaneous measurement of glomerular filtration rate using FITC-sinistrin in rats.", "text": "BACKGROUND\nInulin/sinistrin (I/S) clearance is a gold standard for an accurate assessment of glomerular filtration rate (GFR). Here we describe and validate an approach for a transcutaneous determination of GFR by using fluorescein-isothiocyanate-labelled sinistrin (FITC-S) in rats.\n\n\nMETHODS\nUsing a small animal imager, fluorescence is measured over the depilated ear of a rat after the injection of FITC-S. The decay curve of fluorescence is used for the calculation of half-life and GFR. The thus obtained transcutaneous data were validated by simultaneously performed enzymatic and fluorometric measurements in plasma of both FITC-S and sinistrin.\n\n\nRESULTS\nThe results of enzymatic sinistrin determination versus transcutaneous half-life of FITC-S or plasma fluorescence correlated well with each other (R(2) > 0.90). Furthermore, Bland-Altman analyses proved a good degree of agreement of the three methods used. The measurements performed in healthy animals as well as different models of renal failure demonstrate its appropriateness in a wide range of renal function.\n\n\nCONCLUSIONS\nThe transcutaneous method described offers a precise assessment of GFR in small animals. As neither blood and/or urine sampling nor time-consuming lab work is required, GFR can be determined immediately after the clearance procedure is finished. This method, therefore, simplifies and fastens GFR determinations in small lab animals compared to conventional bolus clearance techniques based on blood sampling. A low-cost device for the measurement of transcutaneous fluorescence intensity over time is under construction."}
{"_id": "ff97ea322e04a21582c041ac3c6c8975a9d6b955", "title": "Periodontal diseases.", "text": "The periodontal diseases are highly prevalent and can affect up to 90% of the worldwide population. Gingivitis, the mildest form of periodontal disease, is caused by the bacterial biofilm (dental plaque) that accumulates on teeth adjacent to the gingiva (gums). However, gingivitis does not affect the underlying supporting structures of the teeth and is reversible. Periodontitis results in loss of connective tissue and bone support and is a major cause of tooth loss in adults. In addition to pathogenic microorganisms in the biofilm, genetic and environmental factors, especially tobacco use, contribute to the cause of these diseases. Genetic, dermatological, haematological, granulomatous, immunosuppressive, and neoplastic disorders can also have periodontal manifestations. Common forms of periodontal disease have been associated with adverse pregnancy outcomes, cardiovascular disease, stroke, pulmonary disease, and diabetes, but the causal relations have not been established. Prevention and treatment are aimed at controlling the bacterial biofilm and other risk factors, arresting progressive disease, and restoring lost tooth support."}
{"_id": "21ce866e3680e3c447453456c42ec26fafb80a64", "title": "Artificial neural networks in time series forecasting: a comparative analysis", "text": "Artificial neural networks (ANN) have received a great deal of attention in many fields of engineering and science. Inspired by the study of brain architecture, ANN represent a class of non-linear models capable of learning from data. ANN have been applied in many areas where statistical methods are traditionally employed. They have been used in pattern recognition, classification, prediction and process control. The purpose of this paper is to discuss ANN and compare them to non-linear time series models. We begin exploring recent developments in time series forecasting with particular emphasis on the use of non-linear models. Thereafter we include a review of recent results on the topic of ANN. The relevance of ANN models for the statistical methods is considered using time series prediction problems. Finally we construct asymptotic prediction intervals for ANN and show how to use prediction intervals to choose the number of nodes in the ANN."}
{"_id": "23c56db706ae5073b4d8da8240a18f4119fcc0d3", "title": "ntology matching with semantic verification ves", "text": "Automated Semantic Matching of Ontologies with Verification (ASMOV) is a novel algorithm that uses lexical and structural characteristics of two ontologies to iteratively calculate a similarity measure between them, derives an alignment, and then verifies it to ensure that it does not contain semantic inconsistencies. In this paper, we describe the ASMOV algorithm, and then present experimental results that measure vailable online xxx eywords: ntology ntology alignment ntology matching its accuracy using the OAEI 2008 tests, and that evaluate its use with two different thesauri: WordNet, and the Unified Medical Language System (UMLS). These results show the increased accuracy obtained by combining lexical, structural and extensional matchers with semantic verification, and demonstrate the advantage of using a domain-specific thesaurus for the alignment of specialized ontologies. \u00a9 2009 Elsevier B.V. All rights reserved."}
{"_id": "40110da208ff26b551a3b2eb01f1e447a361fff3", "title": "Reputation , Trust , and Rebates : How Online Auction Markets Can Improve Their Feedback Mechanisms", "text": "Trust and trustworthiness are crucial to the survival of online markets, and reputation systems that rely on feedback from traders help sustain trust. However, in current online auction markets only half of the buyers leave feedback after transactions, and nearly all of it is positive. In this paper, I propose a mechanism whereby sellers can provide rebates to buyers contingent on buyers provision of reports. Using a game theoretical model, I show how the rebate incentive mechanism can increase reporting. In both a pure adverse selection model, and a model with adverse selection and moral hazard, there exists a pooling equilibrium where both good and bad sellers choose the rebate option, even though their true types are revealed through feedback. In the presence of moral hazard, the mechanism induces bad sellers to improve the quality of the contract."}
{"_id": "238eaf6cd85e9ac91fede400153ce7c5bd113df7", "title": "iVAT and aVAT: Enhanced Visual Analysis for Cluster Tendency Assessment", "text": "Given a pairwise dissimilarity matrix D of a set of n objects, visual methods (such as VAT) for cluster tendency assessment generally represent D as an n \u00d7 n image I(D\u0303) where the objects are reordered to reveal hidden cluster structure as dark blocks along the diagonal of the image. A major limitation of such methods is the inability to highlight cluster structure in I(D\u0303) when D contains highly complex clusters. To address this problem, this paper proposes an improved VAT (iVAT) method by combining a path-based distance transform with VAT. In addition, an automated VAT (aVAT) method is also proposed to automatically determine the number of clusters from I(D\u0303). Experimental results on several synthetic and real-world data sets have demonstrated the effectiveness of our methods."}
{"_id": "4d3cbc5799d7963477da279dae9a08ac4d459157", "title": "Deep Learning for Nonlinear Diffractive Imaging", "text": "Image reconstruction under multiple light scattering is crucial for a number of important applications in cell microscopy and tissue imaging. The reconstruction problem is often formulated as a nonconvex optimization, where a nonlinear measurement model is used to account for multiple scattering and a regularizer is used to enforce the prior on the object. In this letter, We propose a powerful alternative to this optimization-based view of image reconstruction by designing and training a deep convolutional neural network (CNN) for inverting multiple scattering. Simulations show that the proposed formulation is substantially faster and achieves higher imaging quality compared to the state-of-the-art methods based on optimization."}
{"_id": "289b61d5b498929bf057943da02e40487d21c2f8", "title": "B2W2: N-Way Concurrent Communication for IoT Devices", "text": "The exponentially increasing number of internet of things (IoT) devices and the data generated by these devices introduces the spectrum crisis at the already crowded ISM 2.4 GHz band. To address this issue and enable more flexible and concurrent communications among IoT devices, we propose B2W2, a novel communication framework that enables N-way concurrent communication among WiFi and Bluetooth Low Energy (BLE) devices. Specifically, we demonstrate that it is possible to enable the BLE to WiFi cross-technology communication while supporting the concurrent BLE to BLE and WiFi to WiFi communications. We conducted extensive experiments under different real-world settings and results show that its throughput is more than 85X times higher than the most recently reported cross-technology communication system [22], which only supports one-way communication (i.e., broadcasting) at any specific time."}
{"_id": "ef40737ca8823dd55fc1b44f225f4c8feb0194bb", "title": "A study of big data processing constraints on a low-power Hadoop cluster", "text": "Big Data processing with Hadoop has been emerging recently, both on the computing cloud and enterprise deployment. However, wide-spread security exploits may hurt the reputation of public clouds. If Hadoop on the cloud is not an option, an organization has to build its own Hadoop clusters. But having a data center is not worth for a small organization both in terms of building and operating costs. Another viable solution is to build a cluster with low-cost ARM system-on-chip boards. This paper presents a study of a Hadoop cluster for processing Big Data built atop 22 ARM boards. The Hadoop's MapReduce was replaced by Spark and experiments on three different hardware configurations were conducted to understand limitations and constraints of the cluster. From the experimental results, it can be concluded that processing Big Data on an ARM cluster is highly feasible. The cluster could process a 34 GB Wikipedia article file in acceptable time, while generally consumed the power 0.061-0.322 kWh for all benchmarks. It has been found that I/O of the hardware is fast enough, but the power of CPUs is inadequate because they are largely spent for the Hadoop's I/O."}
{"_id": "4bba88ac970d88c7a4a6aa174d765649a3fcfb00", "title": "Selective Intervention and Internal Hybrids: Interpreting and Learning from the Rise and Decline of the Oticon Spaghetti Organization", "text": "Infusing hierarchies with elements of market control has become a much-used way of simultaneously increasing entrepreneurialism and motivation in firms. However, this paper argues that such \u201cinternal hybrids,\u201d particularly in their radical forms, are inherently hard to successfully design and implement, because of fundamental credibility problems related to managerial promises to not intervene in delegated decision-making \uf8e7 an incentive problem that is often referred to as the \u201cproblem of selective intervention.\u201d This theoretical theme is developed and illustrated, using the case of the world-leading Danish hearing aids producer, Oticon. In the beginning of the 1990s, Oticon became famous for its radical internal hybrid, the \u201dspaghetti organization.\u201d Recent work has interpreted the spaghetti organization as a radical attempt to foster dynamic capabilities by imposing loose coupling on the organization, neglecting, however, that about a decade later, the spaghetti organization has given way to a more traditional matrix organization. This paper presents an organizational economics interpretation of organizational changes in Oticon, and argues that a strong liability of the spaghetti organization was the above incentive problem. Motivation in Oticon was strongly harmed by selective intervention on the part of top-management Changing the organizational structure was one means of repairing these motivational problems. Refutable implications are developed, both for the understanding of efficient design of internal hybrids, and for the more general issue of the distinction between firms and markets, as well as the choice between internal and external hybrids."}
{"_id": "7bd19f37bd85824d52ecdd9a4141c841508dcb24", "title": "Optimizing 360 video delivery over cellular networks", "text": "As an important component of the virtual reality (VR) technology, 360-degree videos provide users with panoramic view and allow them to freely control their viewing direction during video playback. Usually, a player displays only the visible portion of a 360 video. Thus, fetching the entire raw video frame wastes bandwidth. In this paper, we consider the problem of optimizing 360 video delivery over cellular networks. We first conduct a measurement study on commercial 360 video platforms. We then propose a cellular-friendly streaming scheme that delivers only 360 videos' visible portion based on head movement prediction. Using viewing data collected from real users, we demonstrate the feasibility of our approach, which can reduce bandwidth consumption by up to 80% based on a trace-driven simulation."}
{"_id": "6e3b85ae5df3c9a47071de37d5ca9c0cf91ee63f", "title": "Investigation of safety in human-robot-interaction for a series elastic, tendon-driven robot arm", "text": "This paper presents the design of the lightweight BioRob manipulator with spring-loaded tendon-driven actuation developed for safe physical human-robot interaction. The safety of the manipulator is analyzed by an analytical worst-case estimation of impact and clamping forces in the absence of collision detection. As intrinsic joint compliance can pose a threat by storing energy, a safety evaluation method is proposed taking the potential energy stored in the elastic actuation into account. The evaluation shows that the robot arm design constrains the worst case clamping forces to only 25 N, while being able to handle loads up to 2 kg, and inherits extremely low impact properties, such as an effective mass of less than 0.4 kg in non near-singular configurations, enabling safe operation even in case of high velocities. The results are validated in simulation and experiments."}
{"_id": "b072327fea750918fbc4c263d1f2b342bd304d8c", "title": "Accidental scald burns in sinks.", "text": "Scald burns to the feet and lower extremities in children are described in the literature as often resulting from forced immersions. This report illustrates 3 cases of burns whose distribution and historical factors identify them as accidental. The location of these accidental burns is similar to those found in inflicted injury, but the patterns were indicative of flowing water burns, not forced immersions. Burns in these locations may be confused with abuse. Medical providers need to be aware of information that may enable them to distinguish the 2 causes. Effective caregiver education regarding the importance of lowering the temperature of water heaters and discouraging play in household sinks is critical to prevent additional tap water scald burn injuries."}
{"_id": "3f542cdde202ffb1b0c5173bdfa135bc2ff8af38", "title": "A Reconfigurable Antenna With Frequency and Polarization Agility", "text": "A new antenna with both frequency and polarization reconfigurability is presented. The antenna consists of a square microstrip patch with a single probe feed located along the diagonal line. The center of each edge of the patch is connected to a shorting post via a p-i-n diode for polarization switching and two varactor diodes for frequency tuning. By switching between the different states of the p-i-n diodes, the proposed antenna can produce radiation patterns with horizontal, vertical, or 45\u00b0 linear polarization. By varying the dc bias voltage, the operating frequency of each polarization of the antenna can be independently tuned. The frequency tuning range is from 1.35 to 2.25 GHz (|S11| <; -10 dB) for either horizontal or vertical polarization and from 1.35 to 1.9 GHz for the 45\u00b0 linear polarization. Measured results on frequency tuning ranges and radiation patterns agree well with numerical simulations."}
{"_id": "5e4fc4ff42d22c47c08d19a0a263d65271e4bc87", "title": "Wideband Circular Polarization Reconfigurable Antenna", "text": "This communication introduces a polarization reconfigurable wideband circularly polarized (CP) antenna, which consists of four radiating arms connected to a reconfigurable feeding network. The four radiating arms excited by the feeding network are able to generate wideband CP waves with bidirectional radiation patterns in free space. In order to increase the gain and obtain a broadside radiation pattern, the proposed antenna is placed above a metallic reflector with the distance of quarter wavelength at the center frequency. In addition, polarization reconfigurability is realized by utilizing PIN diodes in the feeding transmission lines such that left-handed circular polarization (LHCP) and right-handed circular polarization (RHCP) modes can be selectively excited by controlling the on/off states of the PIN diodes. The proposed antenna exhibits a wide impedance bandwidth of 80% and an overlapped axial ratio (AR) bandwidth of 23.5% for both modes. The antenna gain is stable across the operating bandwidth with the peak gain of 4.8 dBic. The antenna has a wide AR beamwidth of 90\u00b0. The presented work is suitable for GPS, CNSS, or RFID applications."}
{"_id": "8687feb71453c7b6fb7b73802d0b77800b7285b9", "title": "of Aperture Coupled Microstrip Antennas : History , Operation , Development , and Applications", "text": ""}
{"_id": "cfdb77b2cb9f8ca9616fa34a84d23685c0a4c45e", "title": "Reconfigurable Antennas", "text": "Reconfigurable antennas change polarization, operating frequency, or far-field pattern in order to cope with changing system parameters. This paper reviews some of the past and current technology applicable to reconfigurable antennas, with several examples of implementations. Mechanically movable parts and arrays are discussed, as well as more-recent semiconductor-component and tunable-material technologies applicable to reconfigurable antennas."}
{"_id": "aef5223dd82909a78e36895e1fe261d29704eb0e", "title": "High Beam Steering in Fabry\u2013P\u00e9rot Leaky-Wave Antennas", "text": "A high-gain low-profile Fabry-Perot (FP) leaky-wave antenna (LWA) presenting one-dimensional high beam steering properties is proposed in this letter. The structure consists of a ground plane and a varying inductive partially reflective surface (PRS). A microstrip patch antenna is embedded into the cavity to act as the primary feed. As design examples, antennas are designed to operate at 9.5 GHz. Subwavelength FP cavities with fixed overall thickness of \u03bb0 /6 (where \u03bb0 is the free-space operating wavelength) are fabricated and measured. The impact of varying the PRS inductance is analyzed. It is shown that a high beam steering angle from broadside toward endfire direction close to 60 can be obtained when judiciously designing the inductive grid of the PRS."}
{"_id": "b85e38e22a461644c775d7477ccb6c9542d46144", "title": "Giving voice to office customers: Best practices in how office handles verbatim text feedback", "text": "Microsoft Office users submit hundreds of thousands of pieces of verbatim feedback per month. How can an engineer or manager in Office find the signal in this data to make business decisions? This paper presents an overview of the Office Customer Voice (OCV) system. OCV combines classification, on-demand clustering and other machine learning techniques with a rich web UI to solve this problem. In this paper, we describe the different types of feedback received. Next, we outline the architecture used to build OCV. We then detail the text processing, classification and clustering done to reason on the data. Finally, we present challenges, future plans, and best practices that may be relevant to other teams analyzing customer feedback. We argue that this multi-pronged approach to handling customer feedback presents a pattern that other organizations can use to mature their handling of customer feedback."}
{"_id": "31cff1cdb1d85778edcd4afa80e192c356123cab", "title": "Evaluating data storage structures of MapReduce", "text": "MapReduce framework and its open-source implementation Hadoop, a scalable and fault-tolerant infrastructure for big data analysis on large clusters, can achieve different performance with different data storage structures. This paper evaluates the performance about three kinds of data storage structures of MapReduce, namely row-store, column-store, and RCFile. The evaluating experiments are designed to test three data storage structures in terms of data loading time, data storage space, and query execution time. The experimental results show that RCFile data storage structure can achieve better performance in most cases."}
{"_id": "4c3fccc1644ccc417283ecb69e6d2559e4b7d01c", "title": "An Optimal Algorithm for Mutual Exclusion in Computer Networks", "text": "An algorithm is proposed that creates mutual exclusion in a computer network whose nodes communicate only by messages and do not share memory. The algorithm sends only 2*(N 1) messages, where N is the number of nodes in the network per critical section invocation. This number of messages is at a minimum if parallel, distributed, symmetric control is used; hence, the algorithm is optimal in this respect. The time needed to achieve mutual exclusion is also minimal under some general assumptions. As in Lamport's \"bakery algorithm,\" unbounded sequence numbers are used to provide first-come firstserved priority into the critical section. It is shown that the number can be contained in a fixed amount of memory by storing it as the residue of a modulus. The number of messages required to implement the exclusion can be reduced by using sequential node-by-node processing, by using broadcast message techniques, or by sending information through timing channels. The \"readers and writers\" problem is solved by a simple modification of the algorithm and the modifications necessary to make the algorithm robust are described."}
{"_id": "b8448f4017cc50b24fe1561e34082c39b578426a", "title": "M2U-Net: Effective and Efficient Retinal Vessel Segmentation for Resource-Constrained Environments", "text": "In this paper, we present a novel neural network architecture for retinal vessel segmentation that improves over the state of the art on two benchmark datasets, is the first to run in real time on high resolution images, and its small memory and processing requirements make it deployable in mobile and embedded systems. The M2U-Net has a new encoder-decoder architecture that is inspired by the U-Net. It adds pretrained components of MobileNetV2 in the encoder part and novel contractive bottleneck blocks in the decoder part that, combined with bilinear upsampling, drastically reduce the parameter count to 0.55M compared to 31.03M in the original U-Net. We have evaluated its performance against a wide body of previously published results on three public datasets. On two of them, the M2U-Net achieves new state-of-the-art performance by a considerable margin. When implemented on a GPU, our method is the first to achieve real-time inference speeds on high-resolution fundus images. We also implemented our proposed network on an ARM-based embedded system where it segments images in between 0.6 and 15 sec, depending on the resolution. Thus, the M2U-Net enables a number of applications of retinal vessel structure extraction, such as early diagnosis of eye diseases, retinal biometric authentication systems, and robot assisted microsurgery."}
{"_id": "5c3e3009b902cb2e548684d5d66d3537f87894c5", "title": "GIS METHODS IN TIME-GEOGRAPHIC RESEARCH: GEOCOMPUTATION AND GEOVISUALIZATION OF HUMAN ACTIVITY PATTERNS", "text": "Over the past 40 years or so, human activities and movements in space-time have attracted considerable research interest in geography. One of the earliest analytical perspectives for the analysis of human activity patterns and movements in spacetime is time geography. Despite the usefulness of time geography in many areas of geographical research, there are very few studies that actually implemented its constructs as analytical methods up to the mid-1990s. With increasing availability of geo-referenced individual-level data and improvement in the geo-computational capabilities of Geographical Information Systems (GIS), it is now more feasible than ever before to operationalize and implement time-geographic constructs. This paper discusses recent applications of GIS-based geo-computation and three-dimensional (3-D) geo-visualization methods in time-geographic research. The usefulness of these methods is illustrated through examples drawn from the author\u2019s recent studies. The paper attempts to show that GIS provides an effective environment for implementing timegeographic constructs and for the future development of operational methods in time-geographic research."}
{"_id": "c190eb24b9aad123dd76a0a59b3019158acf1290", "title": "SHiP: Signature-based Hit Predictor for high performance caching", "text": "The shared last-level caches in CMPs play an important role in improving application performance and reducing off-chip memory bandwidth requirements. In order to use LLCs more efficiently, recent research has shown that changing the re-reference prediction on cache insertions and cache hits can significantly improve cache performance. A fundamental challenge, however, is how to best predict the re-reference pattern of an incoming cache line.\n This paper shows that cache performance can be improved by correlating the re-reference behavior of a cache line with a unique signature. We investigate the use of memory region, program counter, and instruction sequence history based signatures. We also propose a novel Signature-based Hit Predictor (SHiP) to learn the re-reference behavior of cache lines belonging to each signature. Overall, we find that SHiP offers substantial improvements over the baseline LRU replacement and state-of-the-art replacement policy proposals. On average, SHiP improves sequential and multiprogrammed application performance by roughly 10% and 12% over LRU replacement, respectively. Compared to recent replacement policy proposals such as Seg-LRU and SDBP, SHiP nearly doubles the performance gains while requiring less hardware overhead."}
{"_id": "99d03dc3437b5c4ed1b3e54d65a030330e32c6cb", "title": "Discrete-time signal processing (2nd ed.)", "text": ""}
{"_id": "074ca67956bce4e030b75b411dfcf943d93d694b", "title": "A Multiband Dual-Polarized Omnidirectional Antenna for 2G/3G/LTE Applications", "text": "A multiband dual-polarized omnidirectional antenna for 2G/3G/Long Term Evolution (LTE) mobile communications is proposed in this letter, which consists of horizontal polarization (HP) and vertical polarization (VP) element with separate feeds. The VP element consists of three polygonal radiation patches with three equally spaced legs shorted to the ground plane. The HP element consists of three wideband slot loop structures, which is nested on the top of the VP element. Three slot loop structures provide a 360\u00b0 coverage for HP and enhance its bandwidth. Both the simulated and measured results indicate that the frequency bands of 1650\u20132900\u00a0MHz for HP and 780\u20132700\u00a0MHz for VP can be achieved. The reflection of VP improves the gain of the HP element at least 1\u00a0dBi after nesting. The gain of HP element is more than 3\u00a0dBi for LTE, and the gain of VP element is more than 5\u00a0dBi in the LTE band and 1.5\u00a0dBi in the 2G band. Port isolation larger than 30\u00a0dB and low-gain variation levels are also obtained. The proposed antenna can be applied in mobile communications."}
{"_id": "3391f524ffa611cfa8334d70008f1a949fba0302", "title": "Automatic Detection and Quantification of WBCs and RBCs Using Iterative Structured Circle Detection Algorithm", "text": "Segmentation and counting of blood cells are considered as an important step that helps to extract features to diagnose some specific diseases like malaria or leukemia. The manual counting of white blood cells (WBCs) and red blood cells (RBCs) in microscopic images is an extremely tedious, time consuming, and inaccurate process. Automatic analysis will allow hematologist experts to perform faster and more accurately. The proposed method uses an iterative structured circle detection algorithm for the segmentation and counting of WBCs and RBCs. The separation of WBCs from RBCs was achieved by thresholding, and specific preprocessing steps were developed for each cell type. Counting was performed for each image using the proposed method based on modified circle detection, which automatically counted the cells. Several modifications were made to the basic (RCD) algorithm to solve the initialization problem, detecting irregular circles (cells), selecting the optimal circle from the candidate circles, determining the number of iterations in a fully dynamic way to enhance algorithm detection, and running time. The validation method used to determine segmentation accuracy was a quantitative analysis that included Precision, Recall, and F-measurement tests. The average accuracy of the proposed method was 95.3% for RBCs and 98.4% for WBCs."}
{"_id": "d2e47b04b4dd88397d5a19db27ba6a0aa5d1317e", "title": "Social Honeypots: Making Friends With A Spammer Near You", "text": "Social networking communities have become an important communications platform, but the popularity of these communities has also made them targets for a new breed of social spammers. Unfortunately, little is known about these social spammers, their level of sophistication, or their strategies and tactics. Thus, in this paper, we provide the first characterization of social spammers and their behaviors. Concretely, we make two contributions: (1) we introduce social honeypots for tracking and monitoring social spam, and (2) we report the results of an analysis performed on spam data that was harvested by our social honeypots. Based on our analysis, we find that the behaviors of social spammers exhibit recognizable temporal and geographic patterns and that social spam content contains various distinguishing characteristics. These results are quite promising and suggest that our analysis techniques may be used to automatically identify social spam."}
{"_id": "e7d8c4642eb7f08b66e8e591e5ba313a6cfeed57", "title": "Monolithic DC-DC boost converter with current-mode hysteretic control", "text": "A monolithic DC-DC boost converter with current-mode hysteretic control is designed and simulated in 0.18-\u03bcm CMOS technology. The system is simple, robust, and has a fast response to external changes. It does not require an external clock, and the output is regulated by voltage feedback in addition to limiting the inductor current by sensing it. A non-overlapping clock is internally generated to drive the power switches using buffers designed to minimize power dissipation. For conversion specifications of 1.8 V to 3.3 V at 150 mA, overall efficiency of 94.5% is achieved. Line regulation is 17.5 mV/V, load regulation is 0.33% for a 100 mA current step, while the output voltage ripple is below 30 mV for nominal conditions."}
{"_id": "ab86fdbc15c9a114b78d4a024a08e14e94c8cf2a", "title": "Least-Squares Fitting of Circles and Ellipses \u2217", "text": "Fitting circles and ellipses to given points in the plane is a problem that arises in many application areas, e.g. computer graphics [1], coordinate metrology [2], petroleum engineering [11], statistics [7]. In the past, algorithms have been given which fit circles and ellipses in some least squares sense without minimizing the geometric distance to the given points [1], [6]. In this paper we present several algorithms which compute the ellipse for which the sum of the squares of the distances to the given points is minimal. These algorithms are compared with classical simple and iterative methods. Circles and ellipses may be represented algebraically i.e. by an equation of the form F (x) = 0. If a point is on the curve then its coordinates x are a zero of the function F . Alternatively, curves may be represented in parametric form, which is well suited for minimizing the sum of the squares of the distances. 1 Preliminaries and Introduction Ellipses, for which the sum of the squares of the distances to the given points is minimal will be referred to as \u201cbest fit\u201d or \u201cgeometric fit\u201d, and the algorithms will be called \u201cgeometric\u201d. Determining the parameters of the algebraic equation F (x) = 0 in the least squares sense will be denoted by \u201calgebraic fit\u201d and the algorithms will be called \u201calgebraic\u201d. We will use the well known Gauss-Newton method to solve the nonlinear least squares problem (cf. [15]). Let u = (u1, . . . , un) T be a vector of unknowns and consider the nonlinear system of m equations f(u) = 0. If m > n, then we want to minimize"}
{"_id": "4d29d339d90b4f9a3b41f8ca0216c79af0559c2d", "title": "Measures of retaining digital evidence to prosecute computer-based cyber-crimes", "text": "With the rapid growth of computer and network systems in recent years, there has also been a corresponding increase in cyber-crime. Cybercrime takes many forms and has garnered much attention in the media, making information security a more urgent and important priority. In order to fight cyber-crime, criminal evidence must be gathered from these computer-based systems. This is quite different from the collection of conventional criminal evidence and can confuse investigators attempting to deal with the forensics of cyber-crime, highlighting the importance of computer forensics. In this paper, we offer solutions to guard against cyber-crime through the implementation of software toolkits for computerbased systems. In this way, those who engage in criminal acts in cyber-space can be more easily apprehended. \u00a9 2006 Elsevier B.V. All rights reserved."}
{"_id": "2e5d3eeb4fbcfa60203f2bba46cb0fbe3c3a2e84", "title": "Obesity, fat distribution, and weight gain as risk factors for clinical diabetes in men.", "text": "OBJECTIVE\nTo investigate the relation between obesity, fat distribution, and weight gain through adulthood and the risk of non-insulin-dependent diabetes mellitus (NIDDM).\n\n\nRESEARCH DESIGN AND METHODS\nWe analyzed data from a cohort of 51,529 U.S. male health professionals, 40-75 years of age in 1986, who completed biennial questionnaires sent out in 1986, 1988, 1990, and 1992. During 5 years of follow-up (1987-1992), 272 cases of NIDDM were diagnosed among men without a history of diabetes, heart disease, and cancer in 1986 and who provided complete health information. Relative risks (RRs) associated with different anthropometric measures were calculated controlling for age, and multivariate RRs were calculated controlling for smoking, family history of diabetes, and age.\n\n\nRESULTS\nWe found a strong positive association between overall obesity as measured by body mass index (BMI) and risk of diabetes. Men with a BMI of > or = 35 kg/m2 had a multivariate RR of 42.1 (95% confidence interval [CI] 22.0-80.6) compared with men with a BMI < 23.0 kg/m2. BMI at age 21 and absolute weight gain throughout adulthood were also significant independent risk factors for diabetes. Fat distribution, measured by waist-to-hip ratio (WHR), was a good predictor of diabetes only among the top 5%, while waist circumference was positively associated with the risk of diabetes among the top 20% of the cohort.\n\n\nCONCLUSIONS\nThese data suggest that waist circumference may be a better indicator than WHR of the relationship between abdominal adiposity and risk of diabetes. Although early obesity, absolute weight gain throughout adulthood, and waist circumference were good predictors of diabetes, attained BMI was the dominant risk factor for NIDDM; even men of average relative weight had significantly elevated RRs."}
{"_id": "d5157e8ee47a6fa19d7a70ec4ae990291f4f393f", "title": "A Highly Linear Two-Stage Amplifier Integrated Circuit Using InGaP/GaAs HBT", "text": "This paper presents a linearized two-stage amplifier integrated circuit (IC), having an output power level of about 1 Watt, used for general purpose applications. A predistortion method, based on alignment of the nonlinear characteristics between the first- and the second-stage amplifiers, has been proposed and analyzed in order to enhance the linearization aspects. The resistors in the active bias circuits of both the stages are optimized to achieve the best cancellation conditions for the third-order intermodulation components. The two-stage amplifier IC, which is based on an InGaP/GaAs hetero-junction bipolar transistor (HBT) technology, has been designed and implemented for the 900 MHz band. An output 1 dB compression point (P1dB) of 29.6 dBm, a maximum third-order output intercept point (OIP3) of as high as 48.7 dBm at a two-tone average output power of 21 dBm have been obtained while having a quiescent current of 375 mA and a single bias supply of 5.5 V. The implemented amplifier is able to maintain its IMD3 performance below -60 dBc up to an output power of 21 dBm."}
{"_id": "273fce2775328c01e035767517109d97e8b44059", "title": "REAL TIME HAND GESTURE RECOGNITION USING SIFT", "text": "The objective of the gesture recognition is to identify and distinguish the human gestures and utilizes these identified gestures for applications in specific domain. In this paper we propose a new approach to build a real time system to identify the standard gesture given by American Sign Language, or ASL, the dominant sign language of Deaf Americans, including deaf communities in the United States, in the English-speaking parts of Canada, and in some regions of Mexico. We propose a new method of improvised scale invariant feature transform (SIFT) and use the same to extract the features. The objective of the paper is to decode a gesture video into the appropriate alphabets."}
{"_id": "72a6044a0108e0f8f1e68cd70ada46c81a416324", "title": "Improved Training of Generative Adversarial Networks Using Representative Features", "text": "Despite the success of generative adversarial networks (GANs) for image generation, the trade-off between visual quality and image diversity remains a significant issue. This paper achieves both aims simultaneously by improving the stability of training GANs. The key idea of the proposed approach is to implicitly regularize the discriminator using representative features. Focusing on the fact that standard GAN minimizes reverse Kullback-Leibler (KL) divergence, we transfer the representative feature, which is extracted from the data distribution using a pre-trained autoencoder (AE), to the discriminator of standard GANs. Because the AE learns to minimize forward KL divergence, our GAN training with representative features is influenced by both reverse and forward KL divergence. Consequently, the proposed approach is verified to improve visual quality and diversity of state of the art GANs using extensive evaluations."}
{"_id": "053d5d6041959bc207d8c38ca599663b06533821", "title": "Functional inactivation of the amygdala before but not after auditory fear conditioning prevents memory formation.", "text": "Two competing theories predict different effects on memory consolidation when the amygdala is inactivated after fear conditioning. One theory, based on studies using inhibitory avoidance training, proposes that the amygdala modulates the strength of fear learning, and post-training amygdala manipulations interfere with memory consolidation. The other, based on studies using Pavlovian fear conditioning, hypothesizes that fear learning occurs in the amygdala, and post-training manipulations after acquisition will not affect memory consolidation. We infused the GABAA agonist muscimol (4.4 nmol/side) or vehicle into lateral and basal amygdala (LBA) of rats either before or immediately after tone-foot shock Pavlovian fear conditioning. Pre-training infusions eliminated acquisition, whereas post-training infusions had no effect. These findings indicate that synaptic activity in LBA is necessary during learning, but that amygdala inactivation directly after training does not affect memory consolidation. Results suggest that essential aspects of plasticity underlying auditory fear conditioning take place within LBA during learning."}
{"_id": "6abb32181ad80dae9b38170f6c026fcad03b6cac", "title": "BitDeposit: Deterring Attacks and Abuses of Cloud Computing Services through Economic Measures", "text": "Dependability in cloud computing applications can be negatively affected by various attacks or service abuses. To come ahead of this threat, we propose an economic measure to deter attacks and various service abuses in cloud computing applications. Our proposed defense is based on requiring a service user to pay a small deposit, using digital currency, before invoking the service. Once they are done using the service, and there has been no detected abuse or attack, the deposit is paid back by the service provider to the service user. If an attack or an abuse is detected, the service user is not paid back and the service provider gets to keep the deposit. We propose the use of micro payments with a decentralized nature and small transaction fees, such as the Bit coin digital currency. Moreover, thanks to the existence of money exchanges which convert the Bit coin currency to real world currency, service providers can recoup loses when they exchange the confiscated deposits for real world currency."}
{"_id": "719c68cfd88d980f061ae7b93b146c665b4ea8f1", "title": "Multi-channel ECG classification using forests of randomized shapelet trees", "text": "Data series of multiple channels occur at high rates and in massive quantities in several application domains, such as healthcare. In this paper, we study the problem of multi-channel ECG classification. We map this problem to multivariate data series classification and propose five methods for solving it, using a split-and-combine approach. The proposed framework is evaluated using three base-classifiers on real-world data for detecting Myocardial Infarction. Extensive experiments are performed on real ECG data extracted from the Physiobank data repository. Our findings emphasize the importance of selecting an appropriate base-classifier for multivariate data series classification, while demonstrating the superiority of the Random Shapelet Forest (0.825 accuracy) against competitor methods (0.664 accuracy for 1-NN under cDTW)."}
{"_id": "09e740b949b108cdc17538c587edfcdb0cc52647", "title": "Getting Physical with the Digital Investigation Process", "text": "In this paper, a process model for digital investigations is defined using the theories and techniques from the physical investigation world. While digital investigations have recently become more common, physical investigations have existed for thousands of years and the experience from them can be applied to the digital world. This paper introduces the notion of a digital crime scene with its own witnesses, evidence, and events that can be investigated using the same model as a physical crime scene. The proposed model integrates the physical crime scene investigation with the digital crime scene investigation to identify a person who is responsible for the digital activity. The proposed model applies to both law enforcement and corporate investigations."}
{"_id": "28605ab2a9444c8fbe37982fb4005c82b60003f2", "title": "Digital Forensic Model Based On Malaysian Investigation Process", "text": "Faculty Of Science & Technology Islamic Science University Of Malaysia Summary With the proliferation of the digital crime around the world, numerous digital forensic investigation models already being develop .In fact many of the digital forensic investigation model focus on technical implementation of the investigation process as most of it develop by traditional forensic expert and technologist. As an outcome of this problem most of the digital forensic practitioners focus on technical aspect and forget the core concept of digital forensic investigation model .In this paper we are introducing a new digital forensic model which will be capture a full scope of an investigation process based on Malaysia Cyber Law .The proposed model is also compared with the existing model which currently available and being apply in the investigation process."}
{"_id": "3c8047cbaeb62e760ddd643c3146064afd6cea5d", "title": "A hierarchical, objectives-based framework for the digital investigations process", "text": "Although digital forensics investigations can vary drastically in their level of complexity, each investigative process must follow a rigorous path. Previously proposed frameworks are predominantly single-tier, higher order process models, focusing on the abstract, rather than more concrete principles of the investigation. We contend that these frameworks, although useful in explaining overarching concepts, require additional detail in order to be useful to investigators and researchers. We therefore propose a multi-tier, hierarchical framework to guide digital investigations. Our framework includes objectivesbased sub-phases that are applicable to various layers of abstraction."}
{"_id": "3d05a61fe4e1ac297e23c4206bd2e2093943f666", "title": "An Extended Model of Cybercrime Investigations", "text": "A comprehensive model of cybercrime investigations is important for standardising terminology, defining requirements, and supporting the development of new techniques and tools for investigators. In this paper a model of investigations is presented which combines the existing models, generalises them, and extends them by explicitly addressing certain activities not included in them. Unlike previous models, this model explicitly represents the information flows in an investigation and captures the full scope of an investigation, rather than only the processing of evidence. The results of an evaluation of the model by practicing cybercrime investigators are presented. This new model is compared to some important existing models and applied to a real investigation."}
{"_id": "424daafd9ac88cf67efd046d02ed1eed4f65fd41", "title": "Defining Digital Forensic Examination and Analysis Tool Using Abstraction Layers", "text": "This paper uses the theory of abstraction layers to describe the purpose and goals of digital forensic analysis tools. Using abstraction layers, we identify where tools can introduce errors and provide requirements that the tools must follow. Categories of forensic analysis types are also defined based on the abstraction layers. Abstraction layers are not a new concept, but their usage in digital forensic analysis is not well documented."}
{"_id": "fef3feb39ed95a4376019af8dbfe604643fb7f25", "title": "Faster Dual-Key Stealth Address for Blockchain-Based Internet of Things Systems", "text": "Stealth address prevents public association of a blockchain transaction\u2019s output with a recipient\u2019s wallet address and hides the actual destination address of a transaction. While stealth address provides an effective privacy-enhancing technology for a cryptocurrency network, it requires blockchain nodes to actively monitor all the transactions and compute the purported destination addresses, which restricts its application for resource-constrained environments like Internet of Things (IoT). In this paper, we propose DKSAP-IoT, a faster dual-key stealth address protocol for blockchain-based IoT systems. DKSAP-IoT utilizes a technique similar to the TLS session resumption to improve the performance and reduce the transaction size at the same time between two communication peers. Our theoretical analysis as well as the extensive experiments on an embedded computing platform demonstrate that DKSAPIoT is able to reduce the computational overhead by at least 50% when compared to the state-of-the-art scheme, thereby paving the way for its application to blockchain-based IoT systems."}
{"_id": "a1b0cb03863365e81edadd874fd368075ed16a6b", "title": "The earth is flat (p\u00a0>\u00a00.05): significance thresholds and the crisis of unreplicable research", "text": "The widespread use of 'statistical significance' as a license for making a claim of a scientific finding leads to considerable distortion of the scientific process (according to the American Statistical Association). We review why degrading p-values into 'significant' and 'nonsignificant' contributes to making studies irreproducible, or to making them seem irreproducible. A major problem is that we tend to take small p-values at face value, but mistrust results with larger p-values. In either case, p-values tell little about reliability of research, because they are hardly replicable even if an alternative hypothesis is true. Also significance (p\u00a0\u2264\u00a00.05) is hardly replicable: at a good statistical power of 80%, two studies will be 'conflicting', meaning that one is significant and the other is not, in one third of the cases if there is a true effect. A replication can therefore not be interpreted as having failed only because it is nonsignificant. Many apparent replication failures may thus reflect faulty judgment based on significance thresholds rather than a crisis of unreplicable research. Reliable conclusions on replicability and practical importance of a finding can only be drawn using cumulative evidence from multiple independent studies. However, applying significance thresholds makes cumulative knowledge unreliable. One reason is that with anything but ideal statistical power, significant effect sizes will be biased upwards. Interpreting inflated significant results while ignoring nonsignificant results will thus lead to wrong conclusions. But current incentives to hunt for significance lead to selective reporting and to publication bias against nonsignificant findings. Data dredging, p-hacking, and publication bias should be addressed by removing fixed significance thresholds. Consistent with the recommendations of the late Ronald Fisher, p-values should be interpreted as graded measures of the strength of evidence against the null hypothesis. Also larger p-values offer some evidence against the null hypothesis, and they cannot be interpreted as supporting the null hypothesis, falsely concluding that 'there is no effect'. Information on possible true effect sizes that are compatible with the data must be obtained from the point estimate, e.g., from a sample average, and from the interval estimate, such as a confidence interval. We review how confusion about interpretation of larger p-values can be traced back to historical disputes among the founders of modern statistics. We further discuss potential arguments against removing significance thresholds, for example that decision rules should rather be more stringent, that sample sizes could decrease, or that p-values should better be completely abandoned. We conclude that whatever method of statistical inference we use, dichotomous threshold thinking must give way to non-automated informed judgment."}
{"_id": "1e58d7e5277288176456c66f6b1433c41ca77415", "title": "Bootstrapping Fine-Grained Classifiers : ! Active Learning with a Crowd in the Loop", "text": "We propose an iterative crowd-enabled active learning algorithm for building high-precision visual classifiers from unlabeled images. Our method employs domain experts to identify a small number of examples of a specific visual event. These expert-labeled examples seed a classifier, which is then iteratively trained by active querying of a non-expert crowd. These non-experts actively refine the classifiers at every iteration by answering simple binary questions about the classifiers\u2019 detections. The advantage of this approach is that experts efficiently shepherd an unsophisticated crowd into training a classifier capable of fine-grained distinctions. This obviates the need to label an entire dataset to obtain high-precision classifiers. We find these classifiers are advantageous for creating a large vocabulary of visual attributes for specialized taxonomies. We demonstrate our crowd active learning pipeline by creating classifiers for attributes related to North American birds and fashion."}
{"_id": "ff7059fdd4c215b881b655e84c21780d74fd3e9e", "title": "Outlooks and Insights on Group Decision and Negotiation", "text": "Community governance needs a small group discussion among community people to identify their concerns, to share them each other and to generate better alternatives for solving problems. A planner should manage the discussion to achieve these objectives. This study analyzed the small group discussion in the community disaster risk management by using text mining. Correspondence analysis was applied to the text data of the discussion. Analytical results revealed the characteristics and effects of small group discussion."}
{"_id": "a1f156dcd37479ec87a8554d1bc31eea9fde533e", "title": "Validating Formative Partial Least Squares (PLS) Models: Methodological Review and Empirical Illustration", "text": "The issue of formative constructs, as opposed to th e more frequently used reflective ones, has recently gained momentum among IS and Management re searchers. Most researchers maintain that formative constructs have been understudied, a n that there is paucity in methodological literature to guide researchers on how such constru cts should be developed and estimated. A survey of IS research has revealed that about 29% o f constructs were misspecified as reflective rather than formative constructs. Furthermore, guid elines about how models containing formative constructs should be indentified and estimated are fragmented and inconsistent. Thus, this paper aims to present a methodological review of formativ e model identification and evaluation. We bring a brief theoretical overview of formative con structs, and put together a guideline for estimating formative measurement and structural mod els. We then present a simplified model composed of three formative constructs and illustra te how it is assessed and estimated using"}
{"_id": "ebc7f59d83b4e7d851ef38460f08b1f3b350521a", "title": "Spatial Diversity 24-GHz FMCW Radar With Ground Effect Compensation for Automotive Applications", "text": "In automotive radar applications, a realistic road surface can cause multipath reflection, degrade the received power, and reduce detection probability. This paper proposes a one-transmitter\u2013two-receiver (1T2R) 24-GHz frequency-modulation continuous-wave (FMCW) radar architecture with spatial diversity and ground effect compensation for automotive applications. A ray-tracing technique was developed to evaluate the multipath effect of the road surface. In the proposed design, suitable spacing between the two receiving antennas compensated for the multipath effect of the road surface and improved the signal-to-ground clutter ratio of the receivers by a minimum of 20 dB at certain distances. The measurement and calculation results of the proposed radar are in close agreement."}
{"_id": "c1d7c99071bd382eea29bce054a4a222aa17b80a", "title": "PLINK: a tool set for whole-genome association and population-based linkage analyses.", "text": "Whole-genome association studies (WGAS) bring new computational, as well as analytic, challenges to researchers. Many existing genetic-analysis tools are not designed to handle such large data sets in a convenient manner and do not necessarily exploit the new opportunities that whole-genome data bring. To address these issues, we developed PLINK, an open-source C/C++ WGAS tool set. With PLINK, large data sets comprising hundreds of thousands of markers genotyped for thousands of individuals can be rapidly manipulated and analyzed in their entirety. As well as providing tools to make the basic analytic steps computationally efficient, PLINK also supports some novel approaches to whole-genome data that take advantage of whole-genome coverage. We introduce PLINK and describe the five main domains of function: data management, summary statistics, population stratification, association analysis, and identity-by-descent estimation. In particular, we focus on the estimation and use of identity-by-state and identity-by-descent information in the context of population-based whole-genome studies. This information can be used to detect and correct for population stratification and to identify extended chromosomal segments that are shared identical by descent between very distantly related individuals. Analysis of the patterns of segmental sharing has the potential to map disease loci that contain multiple rare variants in a population-based linkage analysis."}
{"_id": "3f4ed1c58b6053267c823d8e8b213500c94191e0", "title": "Future Shipboard MVdc System Protection Requirements and Solid-State Protective Device Topological Tradeoffs", "text": "The search for the optimum architecture for shipboard medium voltage dc integrated power systems must take into account the short-circuit protection in addition to overarching goals of efficiency, survivability, reliability of power, and cost effectiveness. Presently, accepted approaches to protection are \u201cunit-based,\u201d which means the power converter(s) feeding the bus coordinate with no-load electromechanical switches to isolate faulted portions of the bus. However, \u201cbreaker-based\u201d approaches, which rely upon solid-state circuit breakers for fault mitigation, can result in higher reliability of power and potentially higher survivability. The inherent speed of operation of solid-state protective devices will also play a role in fault isolation, hence reducing stress level on all system components. A comparison study is performed of protective device topologies that are suitable for shipboard distribution systems rated between 4 and 30 kVdc from the perspectives of size and number of passive components required to manage the commutation energy during sudden fault events and packaging scalability to higher current and voltage systems. The implementation assumes a multichip silicon carbide (SiC) 10-kV, 240-A MOSFET/junction barrier Schottkey (JBS) diode module."}
{"_id": "d1625afa8e71c866c1a3b3d99468ccf4de922be3", "title": "Informing the Design of Spoken Conversational Search: Perspective Paper", "text": "We conducted a laboratory-based observational study where pairs of people performed search tasks communicating verbally. Examination of the discourse allowed commonly used interactions to be identified for Spoken Conversational Search (SCS). We compared the interactions to existing models of search behaviour. We find that SCS is more complex and interactive than traditional search. This work enhances our understanding of different search behaviours and proposes research opportunities for an audio-only search system. Future work will focus on creating models of search behaviour for SCS and evaluating these against actual SCS systems."}
{"_id": "13330fd1aaa09e25708628e8aff96033622fb33c", "title": "Interference cancellation for cellular systems: a contemporary overview", "text": "Cellular networks today are interference-limited and only becomes increasingly so in the future due to the many users that need to share the spectrum to achieve high-rate multimedia communication. Despite the enormous amount of academic and industrial research in the past 20 years on interference-aware receivers and the large performance improvements promised by these multi-user techniques, today's receivers still generally treat interference as background noise. In this article, we enumerate the reasons for this widespread scepticism, and discuss how current and future trends increases the need for and viability of multi-user receivers for both the uplink, where many asynchronous users are simultaneously detected, and the downlink, where users are scheduled and largely orthogonalized; but the mobile handset still needs to cope with a few dominant interfering base stations. New results for interference cancelling receivers that use conventional front-ends are shown to alleviate many of the shortcomings of prior techniques, particularly for the challenging uplink. This article gives an overview of key recent research breakthroughs on interference cancellation and highlights system-level considerations for future multi-user receivers."}
{"_id": "095aa20a329c71d11224fe7f5cf1f34fa03261f0", "title": "Active learning for online bayesian matrix factorization", "text": "The problem of large-scale online matrix completion is addressed via a Bayesian approach. The proposed method learns a factor analysis (FA) model for large matrices, based on a small number of observed matrix elements, and leverages the statistical model to actively select which new matrix entries/observations would be most informative if they could be acquired, to improve the model; the model inference and active learning are performed in an online setting. In the context of online learning, a greedy, fast and provably near-optimal algorithm is employed to sequentially maximize the mutual information between past and future observations, taking advantage of submodularity properties. Additionally, a simpler procedure, which directly uses the posterior parameters learned by the Bayesian approach, is shown to achieve slightly lower estimation quality, with far less computational effort. Inference is performed using a computationally efficient online variational Bayes (VB) procedure. Competitive results are obtained in a very large collaborative filtering problem, namely the Yahoo! Music ratings dataset."}
{"_id": "d585376a5537beb81d93f0580fa2a74cab303f76", "title": "Related Key Chosen IV Attack on Grain-128a Stream Cipher", "text": "The well-known stream cipher Grain-128 is a variant version of Grain v1 with 128-bit secret key. Grain v1 is a stream cipher which has successfully been chosen as one of seven finalists by European eSTREAM project. Yet Grain-128 is vulnerable against some recently introduced attacks. A new version of Grain-128 with authentication, named Grain-128a, is proposed by \u00c5gren, Hell, Johansson, and Meier. The designers claimed that Grain-128a is strengthened against all known attacks and observations on the original Grain-128. So far there exists no attack on Grain-128a except a differential fault attack by Banik, Maitra, and Sarkar. In this paper, we give some observations on Grain-128a, and then propose a related key chosen IV attack on Grain-128a based on these observations. Our attack can recover the 128-bit secret key of Grain-128a with a computational complexity of $2^{96.322} $, requiring $2^{96} $ chosen IVs and $2^{103.613} $ keystream bits. The success probability of our attack is 0.632. This related key attack is \u201cminimal\u201d in the sense that it only requires two related keys. The result shows that our attack is much better than an exhaustive key search in the related key setting."}
{"_id": "15f10d4d32ad74d5f99b0d3118034db20e57cdc1", "title": "16-Way Radial Divider/Combiner for Solid State Power Amplifiers in the K Band", "text": "This paper presents the design and measurement results of a 16-way radial power divider/combiner for solid state power amplifiers in the K band using an original assembly technique of the combined ways. The design is based on a radial non-resonant cavity connected to 8 peripheral rectangular waveguides with two waveguide-to-microstrip transitions inserted in each peripheral rectangular waveguide. Each combined way includes on the same tray the input and output waveguide-to-microstrip transitions and the MMIC amplifier location. The trays can be individually tested before mounting in the 16-way divider/combiner. The low insertion loss of the 16-way power divider/combiner leads to an estimated combining efficiency better than 86.5% from 16.5 GHz to 21 GHz with a maximum value about 88% from 17.2 to 19.2 GHz."}
{"_id": "8d80439a7a85d2d0011a8d46b64320ad5007eadc", "title": "Accelerating and Compressing LSTM Based Model for Online Handwritten Chinese Character Recognition", "text": "With the development of deep learning tools, the online handwritten Chinese character recognition (HCCR) performance has been greatly improved by using deep neural networks (DNNs) especially for long short-term memory (LSTM). However, DNNs suffer from large consumption of computation and storage resources, which may cause problems for service providers, such as server pressure, longer service latency and higher energy consumption. To solve these problems, we propose a framework that combines singular value decomposition (SVD) and adaptive drop weight (ADW) to accelerate and compress LSTM based models. We first build an LSTM based model that achieves an accuracy of 97.83% on the ICDAR2013 online HCCR competition dataset. After restructuring the model with SVD and ADW, it can reduce the FLOPs (floating point operations per second) of the forward process by approximately 10 times and compress the model with 1/30 of the original size with only a 0.5% decrease in accuracy. Finally, integrated with our efficient forward implementation, the recognition of an online character requires only 2.7 ms in average on a CPU with a single thread, while requiring only 0.45 MB for model storage."}
{"_id": "655ba8fbbde231371203fa6b389bf6084660bf3e", "title": "ASSESSMENT AND COLLABORATION IN ONLINE LEARNING", "text": "Assessment can be seen as the engine that drives student course activity, online or off. It is particularly important in encouraging and shaping collaborative activity online. This paper discusses three sorts of online collaborative activity\u2014collaborative discussion, small group collaboration, and collaborative exams. In each of these areas, it provides both theoretical grounding and practical advice for assessing, and so encouraging, collaboration in online courses."}
{"_id": "a6baa5dd252a8fde180c59cd4e407a2f60fb7a30", "title": "Statistical Parsing for harmonic Analysis of Jazz Chord Sequences", "text": "Analysing music resembles natural language parsing in requiring the derivation of structure from an unstructured and highly ambiguous sequence of elements, whether they are notes or words. Such analysis is fundamental to many music processing tasks, such as key identification and score transcription. The focus of the present paper is on harmonic analysis. We use the three-dimensional tonal harmonic space developed by [4, 13, 14] to define a theory of tonal harmonic progression, which plays a role analogous to semantics in language. Our parser applies techniques from natural language processing (NLP) to the problem of analysing harmonic progression. It uses a formal grammar of jazz chord sequences of a kind that is widely used for NLP, together with the statistically based modelling techniques standardly used in wide-coverage parsing, to map music onto underlying harmonic progressions in the tonal space. Using supervised learning over a small corpus of jazz chord sequences annotated with harmonic analyses, we show that grammar-based musical parsing using simple statistical parsing models is more accurate than a baseline Markovian model trained on the same corpus."}
{"_id": "63503f37256d4eb059d4a1435e68e71b22da6867", "title": "Smart-cap for pedestrian safety", "text": "In the last few decades, the use of mobile phones have increased rapidly. Safety experts have considered dependency on mobile phones as a threat to the pedestrian safety. In this paper, a wearable smart-cap has been proposed to detect an obstacle in the path of a pedestrian mobile user. The vision of the proposed system is to provide a user-friendly, cost effective and a quick way to avoid pedestrian accidents due to mobile phones. The volunteers' feedback and testing results have been outstanding. The volunteers have reported that the proposed system is user-friendly and accurate."}
{"_id": "f3e999df19ad5ef6cd61e73742e362b872e51215", "title": "Towards a design generation methodology", "text": "Inspired by the recent advances of deep learning and computer graphics techniques in generating sophisticated artistic styles and architectural designs, we envision a general design generation methodology that uses various specification and generative methods and tools in an integral design cycle."}
{"_id": "fd570fa0a974cd47172a06c67181caeb199a6567", "title": "How data science can advance mental health research", "text": "Accessibility of powerful computers and availability of so-called big data from a variety of sources means that data science approaches are becoming pervasive. However, their application in mental health research is often considered to be at an earlier stage than in other areas despite the complexity of mental health and illness making such a sophisticated approach particularly suitable. In this Perspective, we discuss current and potential applications of data science in mental health research using the UK Clinical Research Collaboration classification: underpinning research; aetiology; detection and diagnosis; treatment development; treatment evaluation; disease management; and health services research. We demonstrate that data science is already being widely applied in mental health research, but there is much more to be done now and in the future. The possibilities for data science in mental health research are substantial. Russ et al. discuss the broad applications of data science to mental health research and consider future ways that big data can improve detection, diagnosis, treatment, healthcare provision and disease management."}
{"_id": "7535c5bd1b7a5905d548c98ec67d52d66a806e98", "title": "Joint Embedding Models for Textual and Social Analysis", "text": "In online social networks, users openly interact, share content, and endorse each other. Although the data is interconnected, previous research has primarily focused on modeling the social network behavior separately from the textual content. Here we model the data in a holistic way, taking into account connections between social behavior and content. Specifically, we define multiple decision tasks over the relationships between users and the content generated by them. We show, on a real world dataset, that a learning a joint embedding (over user characteristics and language) and using joint prediction (based on intraand inter-task constraints) produces consistent gains over (1) learning specialized embeddings, and (2) predicting locally w.r.t. a single task, with or without constraints."}
{"_id": "4912c18161cf35a066a9e70b4e4ef45ff9d19035", "title": "Differentially Private Online Learning", "text": "In this paper, we consider the problem of preserving privacy in the context of online learning. Online learning involves learning from data in real-time, due to which the learned model as well as its predictions are continuously changing. This makes preserving privacy of each data point significantly more challenging as its effect on the learned model can be easily tracked by observing changes in the subsequent predictions. Furthermore, with more and more online systems (e.g. search engines like Bing, Google etc.) trying to learn their customers\u2019 behavior by leveraging their access to sensitive customer data (through cookies etc.), the problem of privacy preserving online learning has become critical. We study the problem in the framework of online convex programming (OCP)\u2014a popular online learning setting with several theoretical and practical implications\u2014while using differential privacy as the formal measure of privacy. For this problem, we provide a generic framework that can be used to convert any given OCP algorithm into a private OCP algorithm with provable privacy as well as regret guarantees (utility), provided that the given OCP algorithm satisfies the following two criteria: 1) linearly decreasing sensitivity, i.e., the effect of the new data points on the learned model decreases linearly, 2) sub-linear regret. We then illustrate our approach by converting two popular OCP algorithms into corresponding differentially private algorithms while guaranteeing \u00d5( \u221a T ) regret for strongly convex functions. Next, we consider the practically important class of online linear regression problems, for which we generalize the approach by Dwork et al. (2010a) to provide a differentially private algorithm with just poly-log regret. Finally, we show that our online learning framework can be used to provide differentially private algorithms for the offline learning problem as well. For the offline learning problem, our approach guarantees better error bounds and is more practical than the existing state-of-the-art methods (Chaudhuri et al., 2011; Rubinstein et al., 2009)."}
{"_id": "54f9e7d3835444b03cad4fd483784e1c004c3c0b", "title": "The humble Bayesian: model checking from a fully Bayesian perspective.", "text": "Gelman and Shalizi (2012) criticize what they call the 'usual story' in Bayesian statistics: that the distribution over hypotheses or models is the sole means of statistical inference, thus excluding model checking and revision, and that inference is inductivist rather than deductivist. They present an alternative hypothetico-deductive approach to remedy both shortcomings. We agree with Gelman and Shalizi's criticism of the usual story, but disagree on whether Bayesian confirmation theory should be abandoned. We advocate a humble Bayesian approach, in which Bayesian confirmation theory is the central inferential method. A humble Bayesian checks her models and critically assesses whether the Bayesian statistical inferences can reasonably be called upon to support real-world inferences."}
{"_id": "08605f1e39ba0459a25a47b32db229aeacdac6aa", "title": "NANOPARTICLES \u2013 AN OVERVIEW", "text": "Recently particulate systems like nanoparticles have been used as a physical approach to alter and improve the pharmacokinetic and pharmacodynamic properties of various types of drug molecules. They have been used in vivo to protect the drug entity in the systemic circulation, restrict access of the drug to the chosen sites and to deliver the drug at a controlled and sustained rate to the site of action.Drug delivery research is clearly moving from the microto the nanosize scale. Nanotechnology is therefore emerging as a field in medicine that is expected to elicit significant therapeutic benefits. The development of effective nanodelivery systems capable of carrying a drug specifically and safely to a desired site of action is one of the most challenging tasks of pharmaceutical formulation investigators. They are attempting to reformulate and add new indications to the existing blockbuster drugs to maintain positive scientific outcomes and therapeutic breakthroughs. The nanodelivery systems mainly include nanoemulsions, lipid or polymeric Nanoparticles and liposomes.For the past few years, there has been a considerable research on the basis of Novel drug delivery system, using particulate vesicle systems as such drug carriers for small and large molecules. Nanoparticles have been improving the therapeutic effect of drugs and minimize the side effects. Basically, Nanoparticles have been prepared by using various techniques as such dispersion of preformed polymers, polymerization of monomers and ionic gelation or co-acervation of hydrophilic polymer."}
{"_id": "931478afaeb06b58b3888b1c78456c8a2bd8ee44", "title": "Clique percolation in random networks.", "text": "The notion of k-clique percolation in random graphs is introduced, where k is the size of the complete subgraphs whose large scale organizations are analytically and numerically investigated. For the Erdos-R\u00e9nyi graph of N vertices we obtain that the percolation transition of k-cliques takes place when the probability of two vertices being connected by an edge reaches the threshold p(c) (k) = [(k - 1)N](-1/(k - 1)). At the transition point the scaling of the giant component with N is highly nontrivial and depends on k. We discuss why clique percolation is a novel and efficient approach to the identification of overlapping communities in large real networks."}
{"_id": "2343169d2b0ea72c4160006414aa4c19e81728d1", "title": "Combining geometry and combinatorics: A unified approach to sparse signal recovery", "text": "There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach utilizes geometric properties of the measurement matrix Phi. A notable example is the Restricted Isometry Property, which states that the mapping Phi preserves the Euclidean norm of sparse signals; it is known that random dense matrices satisfy this constraint with high probability. On the other hand, the combinatorial approach utilizes sparse matrices, interpreted as adjacency matrices of sparse (possibly random) graphs, and uses combinatorial techniques to recover an approximation to the signal. In this paper we present a unification of these two approaches. To this end, we extend the notion of Restricted Isometry Property from the Euclidean lscr2 norm to the Manhattan lscr1 norm. Then we show that this new lscr1 -based property is essentially equivalent to the combinatorial notion of expansion of the sparse graph underlying the measurement matrix. At the same time we show that the new property suffices to guarantee correctness of both geometric and combinatorial recovery algorithms. As a result, we obtain new measurement matrix constructions and algorithms for signal recovery which, compared to previous algorithms, are superior in either the number of measurements or computational efficiency of decoders."}
{"_id": "98d5590fb9510199f599012b337d6d0505da7fa2", "title": "Chapter 3 Semantic Web Service Description", "text": "The convergence of semantic Web with service oriented computing is manifested by Semantic Web Services (SWS) technology. It addresses the major challenge of automated, interoperable and meaningful coordination of Web Services to be carried out by intelligent software agents. In this chapter, we briefly discuss prominent SWS description frameworks, that are the standard SAWSDL, OWL-S and WSML. This is complemented by main critics of Semantic Web Services, and selected references to further readings on the subject."}
{"_id": "0e790522e68e44a5c99515e009049831b15cf29f", "title": "Reconstructing Storyline Graphs for Image Recommendation from Web Community Photos", "text": "In this paper, we investigate an approach for reconstructing storyline graphs from large-scale collections of Internet images, and optionally other side information such as friendship graphs. The storyline graphs can be an effective summary that visualizes various branching narrative structure of events or activities recurring across the input photo sets of a topic class. In order to explore further the usefulness of the storyline graphs, we leverage them to perform the image sequential prediction tasks, from which photo recommendation applications can benefit. We formulate the storyline reconstruction problem as an inference of sparse time-varying directed graphs, and develop an optimization algorithm that successfully addresses a number of key challenges of Web-scale problems, including global optimality, linear complexity, and easy parallelization. With experiments on more than 3.3 millions of images of 24 classes and user studies via Amazon Mechanical Turk, we show that the proposed algorithm improves other candidate methods for both storyline reconstruction and image prediction tasks."}
{"_id": "8e9a75f08e1e7a59f636091c7ea4839f1161e07c", "title": "A note on multi-image denoising", "text": "Taking photographs under low light conditions with a hand-held camera is problematic. A long exposure time can cause motion blur due to the camera shaking and a short exposure time gives a noisy image. We consider the new technical possibility offered by cameras that take image bursts. Each image of the burst is sharp but noisy. In this preliminary investigation, we explore a strategy to efficiently denoise multi-images or video. The proposed algorithm is a complex image processing chain involving accurate registration, video equalization, noise estimation and the use of state-of-the-art denoising methods. Yet, we show that this complex chain may become risk free thanks to a key feature: the noise model can be estimated accurately from the image burst. Preliminary tests will be presented. On the technical side, the method can already be used to estimate a non parametric camera noise model from any image burst."}
{"_id": "3d23be971f17ac490ba276de93f587b5203fccfc", "title": "Air-ground localization and map augmentation using monocular dense reconstruction", "text": "We propose a new method for the localization of a Micro Aerial Vehicle (MAV) with respect to a ground robot. We solve the problem of registering the 3D maps computed by the robots using different sensors: a dense 3D reconstruction from the MAV monocular camera is aligned with the map computed from the depth sensor on the ground robot. Once aligned, the dense reconstruction from the MAV is used to augment the map computed by the ground robot, by extending it with the information conveyed by the aerial views. The overall approach is novel, as it builds on recent developments in live dense reconstruction from moving cameras to address the problem of air-ground localization. The core of our contribution is constituted by a novel algorithm integrating dense reconstructions from monocular views, Monte Carlo localization, and an iterative pose refinement. In spite of the radically different vantage points from which the maps are acquired, the proposed method achieves high accuracy whereas appearance-based, state-of-the-art approaches fail. Experimental validation in indoor and outdoor scenarios reported an accuracy in position estimation of 0.08 meters and real time performance. This demonstrates that our new approach effectively overcomes the limitations imposed by the difference in sensors and vantage points that negatively affect previous techniques relying on matching visual features."}
{"_id": "807014cff599a76a2caeed3c88f9d7c265dbfb14", "title": "Radar cross section measurement with 77 GHz automotive FMCW radar", "text": "In this paper, radar cross section (RCS) measurement for human subjects and vehicles in 77 GHz automotive frequency modulated continuous wave (FMCW) radar system is presented. In this system, it is impossible to utilize conventional RCS definition due to high frequency band and the modulation technique used. Therefore, we introduce a new parameter called pseudo-RCS that can replace the conventional RCS. Then, we conduct actual experiment in the road to measure the RCS of the human subjects and the vehicles. In the measurement, data of four human subjects and four different kinds of vehicles are recorded with the 77 GHz FMCW radar. From the actual measured data, we find RCS distributions of the human subjects and the vehicles. For the human subjects, the RCS values are distributed following the Nakagami distribution. On the other hand, the log-normal distribution is well fit for the RCS values in the case of the vehicle."}
{"_id": "214621f1d3aef3ebe1b661a205801c4c8e8ccb54", "title": "Treatment of molluscum contagiosum with cantharidin: a practical approach.", "text": "Molluscum contagiosum is very common. In this article we discuss the use of cantharidin as a treatment option for molluscum contagiosum and give detailed information about distribution sources, how to apply it, and caveats regarding its use.Molluscum contagiosum is a common viral disease of childhood caused by a poxvirus, which presents with small, firm, dome-shaped, umbilicated papules. It is generally benign and self-limited, with spontaneous resolution within 6 months to several years. Watchful waiting can often be an appropriate management strategy; however, some patients either desire or require treatment. Reasons for actively treating molluscum contagiosum may include alleviation of discomfort and itching (particularly in patients where an eczematous eruption - the so-called \"molluscum eczema\" - is seen in association) or in patients with ongoing atopic dermatitis where more lesions are likely to be present. Other reasons for treatment include limitation of spread to other areas and people, prevention of scarring and superinfection, and elimination of the social stigma of visible lesions. No one treatment is uniformly effective.Treatment options include destructive therapies (curettage, cryotherapy, cantharidin, and keratolytics, among others), immunomodulators (imiquimod, cimetidine, and Candida antigen), and antivirals (cidofovir). In this article we discuss and describe our first-line treatment approach for those molluscum needing treatment - cantharidin."}
{"_id": "3a7194dbec56500589a8b059cbc422f4bd6dafb4", "title": "Telemanipulation of Snake-Like Robots for Minimally Invasive Surgery of the Upper Airway", "text": "Abstract. This research focuses on developing and testing the highlevel control of a novel eight DOF hybrid robot using a DaVinci master manipulator. The teleoperation control is formulated as weighted, multi objective constrained least square (LS) optimization problems one for the master controller and the other for the slave controller. This allows us to incorporate various virtual fixtures in our control algorithm as constraints of the LS problem based on the robot environment. Experimental validation to ensure position tracking and sufficient dexterity to perform suturing in confined spaces such as throat are presented."}
{"_id": "19bd0041f6c11f422675ece7e4fd70867d12f4ef", "title": "Start Later, Sleep Later: School Start Times and Adolescent Sleep in Homeschool Versus Public/Private School Students.", "text": "Homeschooled students provide a naturalistic comparison group for later/flexible school start times. This study compared sleep patterns and sleep hygiene for homeschooled students and public/private school students (grades 6-12). Public/private school students (n = 245) and homeschooled students (n = 162) completed a survey about sleep patterns and sleep hygiene. Significant school group differences were found for weekday bedtime, wake time, and total sleep time, with homeschooled students waking later and obtaining more sleep. Homeschooled students had later school start times, waking at the same time that public/private school students were starting school. Public/private school students had poorer sleep hygiene practices, reporting more homework and use of technology in the hour before bed. Regardless of school type, technology in the bedroom was associated with shorter sleep duration. Later school start times may be a potential countermeasure for insufficient sleep in adolescents. Future studies should further examine the relationship between school start times and daytime outcomes, including academic performance, mood, and health."}
{"_id": "cc558648d9c76e99fb4f7c7f9ba919c9106fe98a", "title": "Robot navigation: Review of techniques and research challenges", "text": "Robot navigation has multi-spectral applications across a plethora of industries. Being one of the fastest evolving field of Artificial intelligence, it has garnered a lot of speculation and the problem of efficient Robot navigation has become a talking point in the \u201cArtificial Intelligence research community\u201d. Robot Navigation can be thought of as a collection of sub-problems that when solved efficiently will (most likely) give the complete and the most efficient solution. While the main Navigation problem can be divided into numerous smaller ones some of the prevalent (sub)-tasks associated with Robot Navigation are Path Planning, Collision Prevention, Search Algorithm and an appropriate pictorial (graphical) representation of the given environment. Path Planning gives us the most optimal path finding approach while keeping the dynamism of the environment and the objects in mind. Collision Prevention ensures the Robot reaches its goal without colliding with any of the objects present in the environment. Collision Prevention techniques' vitality increases considerably when the number of objects in the input space is large (>10, approximately). Search Algorithms are the functional units of this problem as they determine the path the Robot follows while traveling to its destination. Since the environment greatly affects the choice of technique for all the other tasks it is vital to have a clear, concise and a non-ambiguous representation of the same. The main aim of the paper is to summarise the development of various techniques in the multiple domains of the Robot Navigation problem in the last fifteen years (2000-2015) while stating the scope and the restriction of each of them."}
{"_id": "92b09fbf854caefdb465885b2ebd85d76331dcbf", "title": "Unsupervised Pattern Discovery in Speech", "text": "We present a novel approach to speech processing based on the principle of pattern discovery. Our work represents a departure from traditional models of speech recognition, where the end goal is to classify speech into categories defined by a prespecified inventory of lexical units (i.e., phones or words). Instead, we attempt to discover such an inventory in an unsupervised manner by exploiting the structure of repeating patterns within the speech signal. We show how pattern discovery can be used to automatically acquire lexical entities directly from an untranscribed audio stream. Our approach to unsupervised word acquisition utilizes a segmental variant of a widely used dynamic programming technique, which allows us to find matching acoustic patterns between spoken utterances. By aggregating information about these matching patterns across audio streams, we demonstrate how to group similar acoustic sequences together to form clusters corresponding to lexical entities such as words and short multiword phrases. On a corpus of academic lecture material, we demonstrate that clusters found using this technique exhibit high purity and that many of the corresponding lexical identities are relevant to the underlying audio stream."}
{"_id": "37b602e4097b435a0413fd6f99773b83862f4037", "title": "Hypothalamic Orexin Neurons Regulate Arousal According to Energy Balance in Mice", "text": "Mammals respond to reduced food availability by becoming more wakeful and active, yet the central pathways regulating arousal and instinctual motor programs (such as food seeking) according to homeostatic need are not well understood. We demonstrate that hypothalamic orexin neurons monitor indicators of energy balance and mediate adaptive augmentation of arousal in response to fasting. Activity of isolated orexin neurons is inhibited by glucose and leptin and stimulated by ghrelin. Orexin expression of normal and ob/ob mice correlates negatively with changes in blood glucose, leptin, and food intake. Transgenic mice, in which orexin neurons are ablated, fail to respond to fasting with increased wakefulness and activity. These findings indicate that orexin neurons provide a crucial link between energy balance and arousal."}
{"_id": "ddc5b36e1f952303de128e1d35ab20a9f80f9936", "title": "Design of wideband printed rectangular monopole patch antenna with band notch", "text": "In this paper, a microstrip-line fed printed monopole rectangular microstrip patch antenna with wide bandwidth having frequency band-notch function for wireless communication have been proposed. Bandwidth enhancement is obtained by rounding the corners of rectangular patch and modifying the ground plane. By cutting rectangular slot on the radiating element, the frequency band-notch characteristic is obtained. FR-4 substrate having dielectric constant of 4.4 and thickness of 1.6 mm is used to simulate printed rectangular monopole antenna (PRMA) having an overall size of substrate 55 \u00d7 45 mm2. The designed antenna is simulated using CAD FEKO simulation software. The proposed antenna has impedance bandwidth of 6.61 GHz (2.11 GHz to 8.72 GHz) with a rejection band from 3.03 GHz to 3.48 GHz. The observed gain of proposed antenna is 2.42 dBi. The proposed microstrip patch antenna is fabricated, tested and measured result is presented in this paper."}
{"_id": "4746f430e89262ccf8b59ece3f869c790e79c9ba", "title": "Co-Clustering with Side Information for Text mining", "text": "Many of the text mining applications contain a huge amount of information from document in the form of text. This text can be very helpful for Text Clustering. This text also includes various kind of other information known as Side Information or Metadata. Examples of this side information include links to other web pages, title of the document, author name or date of Publication which are present in the text document. Such metadata may possess a lot of information for the clustering purposes. But this Side information may be sometimes noisy. Using such Side Information for producing clusters without filtering it, can result to bad quality of Clusters. So we use an efficient Feature Selection method to perform the mining process to select that Side Information which is useful for Clustering so as to maximize the advantages from using it. The proposed technique, CCSI (Co-Clustering with Side Information) system makes use of the process of Co-Clustering or Two-mode clustering which is a data mining technique that allows concurrently clustering of the rows and columns of a matrix."}
{"_id": "21be31ffde44487ce2d69e36cc667edffceb87d0", "title": "Gaze tracking by Binocular Vision and LBP features", "text": "In this paper, a new method for eye gaze tracking is proposed under natural head movement. In this method, Local-Binary-Pattern Texture Feature (LBP) is adopted to calculate the eye features according to the characteristic of the eye, and a precise Binocular Vision approach is used to detect the space coordinate of the eye. The combined features of space coordinates and LBP features of the eyes are fed into Support Vector Regression (SVR) to match the gaze mapping function, in the hope of tracking gaze direction under natural head movement. The experimental results prove that the proposed method can determine the gaze direction accurately."}
{"_id": "ca20e2b6c65a9099f767a1325bd337a0c4e0605c", "title": "Bridging the Gap: From Research to Practical Advice", "text": "Software engineers must solve practical problems under deadline pressure. They rely on the best-codified knowledge available, turning to weaker results and their expert judgment when sound science is unavailable. Meanwhile, software engineering researchers seek fully validated results, resulting in a lag to practical guidance. To bridge this gap, research results should be systematically distilled into actionable guidance in a way that respects differences in strength and scope among the results. Starting with the practitioners\u2019 need for actionable guidance, this article reviews the evolution of software engineering research expectations, identifies types of results and their strengths, and draws on evidence-based medicine for a concrete example of deriving pragmatic guidance from mixed-strength research results. It advances a levels-of-evidence framework to allow researchers to clearly identify the strengths of their claims and the supporting evidence for their results and to work with practitioners to synthesize actionable recommendations from diverse types of evidence. This article is part of a special issue on software engineering\u2019s 50th anniversary."}
{"_id": "1f26c41f9d637f1e056355341d06472ad65a9a44", "title": "Why did TD-Gammon Work?", "text": "Although TD-Gammon is one of the major successes in machine learning, it has not led to similar impressive breakthroughs in temporal difference learning for other applications or even other games. We were able to replicate some of the success of TD-Gammon, developing a competitive evaluation function on a 4000 parameter feed-forward neural network, without using back-propagation, reinforcement or temporal difference learning methods. Instead we apply simple hill-climbing in a relative fitness environment. These results and further analysis suggest that the surprising success of Tesauro\u2019s program had more to do with the co-evolutionary structure of the learning task and the dynamics of the backgammon game itself."}
{"_id": "dce23f9f27f1237e1c39226e7105b45bad772818", "title": "Subdivision templates for converting a non-conformal hex-dominant mesh to a conformal hex-dominant mesh without pyramid elements", "text": "This paper presents a computational method for converting a non-conformal hex-dominant mesh to a conformal hex-dominant mesh without help of pyramid elements. During the conversion, the proposed method subdivides a non-conformal element by applying a subdivision template and conformal elements by a conventional subdivision scheme. Although many finite element solvers accept mixed elements, some of them require a mesh to be conformal without a pyramid element. None of the published automated methods could create a conformal hex-dominant mesh without help of pyramid elements, and therefore the applicability of the hex-dominant mesh has been significantly limited. The proposed method takes a non-conformal hex-dominant mesh as an input and converts it to a conformal hex-dominant mesh that consists only of hex, tet, and prism elements. No pyramid element will be introduced. The conversion thus considerably increases the applicability of the hex-dominant mesh in many finite element solvers."}
{"_id": "32e3fcd0b0fc569a8327ddf25f40e1975a32e636", "title": "Total parser combinators", "text": "A monadic parser combinator library which guarantees termination of parsing, while still allowing many forms of left recursion, is described. The library's interface is similar to those of many other parser combinator libraries, with two important differences: one is that the interface clearly specifies which parts of the constructed parsers may be infinite, and which parts have to be finite, using dependent types and a combination of induction and coinduction; and the other is that the parser type is unusually informative.\n The library comes with a formal semantics, using which it is proved that the parser combinators are as expressive as possible. The implementation is supported by a machine-checked correctness proof."}
{"_id": "0eac1d94c1aa7c3bd8a91b21d7df1b0e15a27a02", "title": "An Inside Look at Botnets", "text": "The continued growth and diversification of the Internet has been accompanied by an increasing prevalence of attacks and intrusions [40]. It can be argued, however, that a significant change in motivation for malicious activi ty has taken place over the past several years: from vandalism and recognition in th e hacker community, to attacks and intrusions for financial gain. This shift has bee n marked by a growing sophistication in the tools and methods used to conduct atta cks, thereby escalating the network security arms race. Our thesis is that the reactivemethods for network security that are predominant today are ultimately insufficient and that more proactivemethods are required. One such approach is to develop a foundational understanding of the mechanisms employed by malicious software (malware) which is often readi ly available in source form on the Internet. While it is well known that large IT secu rity companies maintain detailed databases of this information, these are not o penly available and we are not aware of any such open repository. In this paper we begin t he process of codifying the capabilities of malware by dissecting four widely-u sed Internet Relay Chat (IRC) botnet codebases. Each codebase is classified along se ven k y dimensions including botnet control mechanisms, host control mechani sms, propagation mechanisms, exploits, delivery mechanisms, obfuscation and de ception mechanisms. Our study reveals the complexity of botnet software, and we disc us es implications for defense strategies based on our analysis."}
{"_id": "025e2f8aa158df0bffe6ede4c61bf8662910de1e", "title": "Slow and Steady Feature Analysis: Higher Order Temporal Coherence in Video", "text": "How can unlabeled video augment visual learning? Existing methods perform \"slow\" feature analysis, encouraging the representations of temporally close frames to exhibit only small differences. While this standard approach captures the fact that high-level visual signals change slowly over time, it fails to capture how the visual content changes. We propose to generalize slow feature analysis to \"steady\" feature analysis. The key idea is to impose a prior that higher order derivatives in the learned feature space must be small. To this end, we train a convolutional neural network with a regularizer on tuples of sequential frames from unlabeled video. It encourages feature changes over time to be smooth, i.e., similar to the most recent changes. Using five diverse datasets, including unlabeled YouTube and KITTI videos, we demonstrate our method's impact on object, scene, and action recognition tasks. We further show that our features learned from unlabeled video can even surpass a standard heavily supervised pretraining approach."}
{"_id": "770eabc6310b6f1b201b9202f7ad5a9883819f29", "title": "Fast Computation of the Difference of Low-Pass Transform", "text": "This paper defines the difference of low-pass (DOLP) transform and describes a fast algorithm for its computation. The DOLP is a reversible transform which converts an image into a set of bandpass images. A DOLP transform is shown to require O(N2) multiplies and produce O(N log(N)) samples from an N sample image. When Gaussian low-pass filters are used, the result is a set of images which have been convolved with difference of Gaussian (DOG) filters from an exponential set of sizes. A fast computation technique based on ``resampling'' is described and shown to reduce the DOLP transform complexity to O(N log(N)) multiplies and O(N) storage locations. A second technique, ``cascaded convolution with expansion,'' is then defined and also shown to reduce the computational cost to O(N log(N)) multiplies. Combining these two techniques yields an algorithm for a DOLP transform that requires O(N) storage cells and requires O(N) multiplies."}
{"_id": "e05b7cea60db1f883303eb45acf51dafe093ab47", "title": "Three-way decisions with probabilistic rough sets", "text": "The rough set theory approximates a concept by three regions, namely, the positive, boundary and negative regions. Rules constructed from the three regions are associated with different actions and decisions, which immediately leads to the notion of three-way decision rules. A positive rule makes a decision of acceptance, a negative rule makes a decision of rejection, and a boundary rule makes a decision of abstaining. This paper provides an analysis of three-way decision rules in the classical rough set model and the decision-theoretic rough set model. The results enrich the rough set theory by ideas from Bayesian decision theory and hypothesis testing in statistics. The connections established between the levels of tolerance for errors and costs of incorrect decisions make the rough set theory practical in applications."}
{"_id": "ed81378eb0937a6425f6221800049c03dd8dd114", "title": "A method for normalizing histology slides for quantitative analysis", "text": "Inconsistencies in the preparation of histology slides make it difficult to perform quantitative analysis on their results. In this paper we provide two mechanisms for overcoming many of the known inconsistencies in the staining process, thereby bringing slides that were processed or stored under very different conditions into a common, normalized space to enable improved quantitative analysis."}
{"_id": "02ef0192da55d7243acc12b6b4a13f5e26ff59a8", "title": "Pattern-Reconfigurable Planar Circular Ultra-Wideband Monopole Antenna", "text": "A novel pattern-reconfigurable compact planar ultra-wideband monopole antenna is presented. By the incorporation of four p-i-n diode switches and two parasitic elements, the antenna's radiation patterns can be shaped to concentrate energy in specific directions while minimising the gain in other unwanted directions without significantly affecting the impedance bandwidth of the antenna. A fully functional prototype has been developed and tested. The measured results of the return loss, radiation patterns, and realised gain verify the effectiveness of the proposed antenna configuration. The antenna switches its radiation patterns between an omni-directional mode and two directional modes with opposite directions in the operating range from 3 to 6 GHz. The proposed antenna could be a suitable candidate for advanced and smart radio applications such as cognitive radio (CR) as it can enhance the radio front-end flexibility and performance by adding the benefits of pattern diversity, specifically in multipath environments."}
{"_id": "2ec47ffbfc4f7800919674804e15ae3a045a23b8", "title": "Genome plasticity a key factor in the success of polyploid wheat under domestication.", "text": "Wheat was domesticated about 10,000 years ago and has since spread worldwide to become one of the major crops. Its adaptability to diverse environments and end uses is surprising given the diversity bottlenecks expected from recent domestication and polyploid speciation events. Wheat compensates for these bottlenecks by capturing part of the genetic diversity of its progenitors and by generating new diversity at a relatively fast pace. Frequent gene deletions and disruptions generated by a fast replacement rate of repetitive sequences are buffered by the polyploid nature of wheat, resulting in subtle dosage effects on which selection can operate."}
{"_id": "d01e3414ca706eda917576d947ece811b5cbcdde", "title": "Empowerment - an Introduction", "text": "Is it better for you to own a corkscrew or not? If asked, you as a human being would likely say \" yes \" , but more importantly, you are somehow able to make this decision. You are able to decide this, even if your current acute problems or task do not include opening a wine bottle. Similarly, it is also unlikely that you evaluated several possible trajectories your life could take and looked at them with and without a corkscrew, and then measured your survival or reproductive fitness in each. When you, as a human cognitive agent, made this decision, you were likely relying on a behavioural \" proxy \" , an internal motivation that abstracts the problem of evaluating a decision impact on your overall life, but evaluating it in regard to some simple fitness function. One example would be the idea of curiosity, urging you to act so that your experience new sensations and learn about the environment. On average, this should lead to better and richer models of the world, which give you a better chance of reaching your ultimate goals of survival and reproduction. But how about questions such as, would you rather be rich than poor, sick or healthy, imprisoned or free? While each options offers some interesting new experience , there seems to be a consensus that rich, healthy and free is a preferable choice. We think that all these examples, in addition to the question of tool ownership above, share a common element of preparedness. Everything else being equal it is preferable to be prepared, to keep ones options open or to be in a state where ones actions have the greatest influence on ones direct environment. The concept of Empowerment, in a nutshell, is an attempt at formalizing and quantifying these degrees of freedom (or options) that an organism or agent has as a proxy for \" preparedness \" ; preparedness, in turn, is considered a proxy for prospective fitness via the hypothesis that preparedness would be a good indicator to distinguish promising from less promising regions in the prospective fitness landscape, without actually having to evaluate the full fitness landscape."}
{"_id": "7b1996d4446f7682fa0ae36527f3cbc5e46dad58", "title": "DroidMat: Android Malware Detection through Manifest and API Calls Tracing", "text": "Recently, the threat of Android malware is spreading rapidly, especially those repackaged Android malware. Although understanding Android malware using dynamic analysis can provide a comprehensive view, it is still subjected to high cost in environment deployment and manual efforts in investigation. In this study, we propose a static feature-based mechanism to provide a static analyst paradigm for detecting the Android malware. The mechanism considers the static information including permissions, deployment of components, Intent messages passing and API calls for characterizing the Android applications behavior. In order to recognize different intentions of Android malware, different kinds of clustering algorithms can be applied to enhance the malware modeling capability. Besides, we leverage the proposed mechanism and develop a system, called Droid Mat. First, the Droid Mat extracts the information (e.g., requested permissions, Intent messages passing, etc) from each application's manifest file, and regards components (Activity, Service, Receiver) as entry points drilling down for tracing API Calls related to permissions. Next, it applies K-means algorithm that enhances the malware modeling capability. The number of clusters are decided by Singular Value Decomposition (SVD) method on the low rank approximation. Finally, it uses kNN algorithm to classify the application as benign or malicious. The experiment result shows that the recall rate of our approach is better than one of well-known tool, Androguard, published in Black hat 2011, which focuses on Android malware analysis. In addition, Droid Mat is efficient since it takes only half of time than Androguard to predict 1738 apps as benign apps or Android malware."}
{"_id": "cce036ec83d9b0cb67ecd64ce6ea30a1a6bbde48", "title": "Anti-counterfeiting with hardware intrinsic security", "text": "Counterfeiting of goods and electronic devices is a growing problem that has a huge economic impact on the electronics industry. Sometimes the consequences are even more dramatic, when critical systems start failing due to the use of counterfeit lower quality components. Hardware Intrinsic security (i.e. security systems built on the unique electronic fingerprint of a device) offers the potential to reduce the counterfeiting problem drastically. In this paper we will show how Hardware Intrinsic Security (HIS) can be used to prevent various forms of counterfeiting and over-production. HIS technology can also be used to bind software or user data to specific hardware devices, which provides additional security to both soft- and hardware vendors as well as consumers using HIS-enabled products. Besides showing the benefits of HIS, we will also provide an extensive overview of the results (both scientific and industrial) that Intrinsic-ID has achieved studying and implementing HIS."}
{"_id": "0bf5b73d421b69c49de0665d581e1d3ebc8cb0bf", "title": "HDRF: Stream-Based Partitioning for Power-Law Graphs", "text": "Balanced graph partitioning is a fundamental problem that is receiving growing attention with the emergence of distributed graph-computing (DGC) frameworks. In these frameworks, the partitioning strategy plays an important role since it drives the communication cost and the workload balance among computing nodes, thereby affecting system performance. However, existing solutions only partially exploit a key characteristic of natural graphs commonly found in the real-world: their highly skewed power-law degree distributions. In this paper, we propose High-Degree (are) Replicated First (HDRF), a novel streaming vertex-cut graph partitioning algorithm that effectively exploits skewed degree distributions by explicitly taking into account vertex degree in the placement decision. We analytically and experimentally evaluate HDRF on both synthetic and real-world graphs and show that it outperforms all existing algorithms in partitioning quality."}
{"_id": "8e0d718f61136802cc66f4d30234447dcc91c72f", "title": "Mining and ranking users\u2019 intents behind queries", "text": "How to understand intents behind user queries is crucial towards improving the performance of Web search systems. NTCIR-11 IMine task focuses on this problem. In this paper, we address the NTCIR-11 IMine task with two phases referred to as Query Intent Mining (QIM) and Query Intent Ranking (QIR). (I) QIM is intended to mine users\u2019 potential intents by clustering short text fragments related to the given query. (II) QIR focuses on ranking those mined intents in a proper way. Two challenges exist in handling these tasks. (II) How to precisely estimate the intent similarity between user queries which only consist of a few words. (2) How to properly rank intents in terms of multiple factors, e.g. relevance, diversity, intent drift and so on. For the first challenge, we first investigate two interesting phenomena by analyzing query logs and document datasets, namely \u201cSame-Intent-Co-Click\u201d (SICC) and \u201cSame-Intent-Similar-Rank\u201d (SISR). SICC means that when users issue different queries, these queries represent the same intent if they click on the same URL. SISR means that if two queries denote the same intent, we should get similar search results when issuing them to a search engine. Then, we propose similarity functions for QIM based on the two phenomena. For the second challenge, we propose a novel intent ranking model which considers multiple factors as a whole. We perform extensive experiments and an interesting case study on the Chinese dataset of NTCIR-11 IMine task. Experimental results demonstrate the effectiveness of our proposed approaches in terms of both QIM and QIR."}
{"_id": "c6024938e8128b9219c05ee320121768f264f30a", "title": "Content-adaptive low rank regularization for image denoising", "text": "Prior knowledge plays an important role in image denoising tasks. This paper utilizes the data of the input image to adaptively model the prior distribution. The proposed scheme is based on the observation that, for a natural image, a matrix consisted of its vectorized non-local similar patches is of low rank. We use a non-convex smooth surrogate for the low-rank regularization, and view the optimization problem from the empirical Bayesian perspective. In such framework, a parameter-free distribution prior is derived from the grouped non-local similar image contents. Experimental results show that the proposed approach is highly competitive with several state-of-art denoising methods in PSNR and visual quality."}
{"_id": "f879556115284946637992191563849e840789d1", "title": "Geometry Guided Adversarial Facial Expression Synthesis", "text": "Facial expression synthesis has drawn much attention in the field of computer graphics and pattern recognition. It has been widely used in face animation and recognition. However, it is still challenging due to the high-level semantic presence of large and non-linear face geometry variations. This paper proposes a Geometry-Guided Generative Adversarial Network (G2-GAN) for continuously-adjusting and identity-preserving facial expression synthesis. We employ facial geometry (fiducial points) as a controllable condition to guide facial texture synthesis with specific expression. A pair of generative adversarial subnetworks is jointly trained towards opposite tasks: expression removal and expression synthesis. The paired networks form a mapping cycle between neutral expression and arbitrary expressions, with which the proposed approach can be conducted among unpaired data. The proposed paired networks also facilitate other applications such as face transfer, expression interpolation and expression-invariant face recognition. Experimental results on several facial expression databases show that our method can generate compelling perceptual results on different expression editing tasks."}
{"_id": "43e5ee363d1be2a5c85f9e9d715192154f6dd8b2", "title": "Elliptic curve cryptography", "text": "This paper describes the Elliptic Curve Cryptography algorithm and its suitability for smart cards."}
{"_id": "ea7dacaedb8ed426df92a8e432eda7178b9f5707", "title": "Fault Tolerance in Cloud Using Reactive and Proactive Techniques", "text": "Fault tolerance plays a vital role in ensuring high serviceability and reliability in cloud. A lot of research is currently under way to analyze how cloud can provide fault tolerance for an application. The work proposes a reactive fault tolerant technique that uses check pointing to tolerate the fault. The work proposes a VM\u03bc Checkpoint framework to protect both VMs and applications in the VMs against transient errors. The VM-\u03bcCheckpoint mechanism is implemented using CoW-PC (Copy on Write \u2013 Presave in cache) algorithm. The CoW-PC algorithm presaves all the tasks running on the VM\u2019s in a cache memory. When there is any transient failure happening in VMs, it is noted and it is recovered using last presaved checkpoint from the cache memory. Once the tasks are executed successfully, the presaved checkpoints are deleted automatically from the cache memory. Thus the algorithm uses in place and in memory recovery of checkpoints that reduces the checkpoint overhead and improves the performance."}
{"_id": "091003e11af2784e00283640c25ad6833333cdab", "title": "A High Efficiency two-stage ZVS AC/DC converter with all SiC MOSFET", "text": "A High Efficiency two-stage AC/DC converter is presented and discussed in this paper, which can be applied in on-board charger for electric vehicles. The front-end converter is a high efficiency ZVS half bridge power factor correction (PFC) topology with only one device in the current flowing path. Owing to the high voltage stress of the half bridge rectifier, the SiC MOSFET is applied to reduce the conduction and switching losses and achieve high frequency. The downstream stage is a LLC resonant DC/DC converter. The peak efficiency reaches 97.5%."}
{"_id": "14e97ea7de6662c67e28bce1595cf419aff6ad5f", "title": "CPR: Composable performance regression for scalable multiprocessor models", "text": "Uniprocessor simulators track resource utilization cycle by cycle to estimate performance. Multiprocessor simulators, however, must account for synchronization events that increase the cost of every cycle simulated and shared resource contention that increases the total number of cycles simulated. These effects cause multiprocessor simulation times to scale superlinearly with the number of cores. Composable performance regression (CPR) fundamentally addresses these intractable multiprocessor simulation times, estimating multiprocessor performance with a combination of uniprocessor, contention, and penalty models. The uniprocessor model predicts baseline performance of each core while the contention models predict interfering accesses from other cores. Uniprocessor and contention model outputs are composed by a penalty model to produce the final multiprocessor performance estimate. Trained with a production quality simulator, CPR is accurate with median errors of 6.63, 4.83 percent for dual-, quad-core multiprocessors. Furthermore, composable regression is scalable, requiring 0.33\u00d7 the simulations required by prior regression strategies."}
{"_id": "d3a0bb385387f55e7fe34ceeb62a3bd5e6833e3a", "title": "The Internet of Things - promise for the future? An introduction", "text": "The Internet is a living entity, always changing and evolving. New applications and businesses are created continuously. In addition to an evolving Internet, technology is also changing the landscape. Broadband connectivity is becoming cheap and ubiquitous; devices are becoming more powerful and smaller with a variety of on-board sensors. The proliferation of more devices becoming connected is leading to a new paradigm: the Internet of Things. The Internet of Things is driven by an expansion of the Internet through the inclusion of physical objects combined with an ability to provide smarter services to the environment as more data becomes available. Various application domains ranging from Green-IT and energy efficiency to logistics are already starting to benefit from Internet of Things concepts. There are challenges associated with the Internet of Things, most explicitly in areas of trust and security, standardisation and governance required to ensure a fair and trustworthy open Internet of Things which provides value to all of society. Internet of Things is high on the research agenda of several multinationals as well as the European Commission and countries such as China. The research conducted is driving the creation of a useful and powerful Internet of Things. The benefits of Internet of Things to the developing and emerging economies are significant, and strategies to realise these need to be found."}
{"_id": "48830e2e4272fa88dc256f1ac9cf81be14112bdb", "title": "Reinforcement Learning through Asynchronous Advantage Actor-Critic on a GPU", "text": "We introduce a hybrid CPU/GPU version of the Asynchronous Advantage ActorCritic (A3C) algorithm, currently the state-of-the-art method in reinforcement learning for various gaming tasks. We analyze its computational traits and concentrate on aspects critical to leveraging the GPU\u2019s computational power. We introduce a system of queues and a dynamic scheduling strategy, potentially helpful for other asynchronous algorithms as well. Our hybrid CPU/GPU version of A3C, based on TensorFlow, achieves a significant speed up compared to a CPU implementation; we make it publicly available to other researchers at https://github.com/NVlabs/GA3C."}
{"_id": "203c80902356a0ab088da8e5154d7097b3a51cf7", "title": "An Empirical Evaluation of Thompson Sampling", "text": "Thompson sampling is one of oldest heuristic to address the e xploration / exploitation trade-off, but it is surprisingly unpopular in t he literature. We present here some empirical results using Thompson sampling on simu lated and real data, and show that it is highly competitive. And since this heuris tic is very easy to implement, we argue that it should be part of the standard bas eline to compare against."}
{"_id": "ae5ad87abafef4b8331f5125cb1682949dd2abed", "title": "3-D TCAD simulation to optimize the trench termination design for higher and robust BVdss", "text": "Optimizing the edge termination design around the periphery of active area is critically important for achieving the highest and stable breakdown voltage (BVDSS) for any power devices. Active cell structures can be assumed as two dimensional (2-D) in the central part of the die, however as the active cells terminate to the termination regions at the periphery of the die, 2-D and 3-D transition regions are formed at different locations of the die layout with respect to the last edge termination trench. Optimization of the 3-D termination region is imperative to ascertain equal or higher BVDSS of the termination region than the active cell region. Synopsys advanced multi-dimensional TCAD device simulation tool - \u201cSentaurus Device Editor (SDE) [1]\u201d is adopted for designing and optimizing the 3-D termination transition region for a higher and robust BVDSS, which is validated by the experimental data."}
{"_id": "c5c0e3e8389c52dd7d4e02a6058769366d008c14", "title": "Dual-Band Filter Design With Flexible Passband Frequency and Bandwidth Selections", "text": "In this paper, improved dual-band filter design is studied. The dual-band resonators are composed of shunt open- and short-circuited stubs. In order to fulfil the requirements of dual-band inverters, a structure of stepped-impedance asymmetric coupled lines is proposed and its equivalent circuit is also derived. The dual-band filter is then designed based on this equivalent circuit. This type of filter can achieve relatively large practical passband center frequency ratios (in theory infinite), and it has more freedom of bandwidth ratio. The circuit size is also reduced. Detailed design procedure is presented and, finally, a filter example is given to validate the theoretical study"}
{"_id": "202b797c1ed589027978ed52997df67d53155738", "title": "A Comparison of Tactile, Visual, and Auditory Warnings for Rear-End Collision Prevention in Simulated Driving", "text": "OBJECTIVE\nThis study examined the effectiveness of rear-end collision warnings presented in different sensory modalities as a function of warning timing in a driving simulator.\n\n\nBACKGROUND\nThe proliferation of in-vehicle information and entertainment systems threatens driver attention and may increase the risk of rear-end collisions. Collision warning systems have been shown to improve inattentive and/or distracted driver response time (RT) in rear-end collision situations. However, most previous rear-end collision warning research has not directly compared auditory, visual, and tactile warnings.\n\n\nMETHOD\nSixteen participants in a fixed-base driving simulator experienced four warning conditions: no warning, visual, auditory, and tactile. The warnings activated when the time-to-collision (TTC) reached a critical threshold of 3.0 or 5.0 s. Driver RT was captured from a warning below critical threshold to brake initiation.\n\n\nRESULTS\nDrivers with a tactile warning had the shortest mean RT. Drivers with a tactile warning had significantly shorter RT than drivers without a warning and had a significant advantage over drivers with visual warnings.\n\n\nCONCLUSION\nTactile warnings show promise as effective rear-end collision warnings.\n\n\nAPPLICATION\nThe results of this study can be applied to the future design and evaluation of automotive warnings designed to reduce rear-end collisions."}
{"_id": "0b4b6932d5df74b366d9235b40334bc40d719c72", "title": "Temporal Ensembling for Semi-Supervised Learning", "text": "In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44% to 7.05% in SVHN with 500 labels and from 18.63% to 16.55% in CIFAR-10 with 4000 labels, and further to 5.12% and 12.16% by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels."}
{"_id": "21b25b025898bd1cabe60234434b49cf14016981", "title": "Gradient descent GAN optimization is locally stable", "text": "Despite the growing prominence of generative adversarial networks (GANs), optimization in GANs is still a poorly understood topic. In this paper, we analyze the \u201cgradient descent\u201d form of GAN optimization, i.e., the natural setting where we simultaneously take small gradient steps in both generator and discriminator parameters. We show that even though GAN optimization does not correspond to a convex-concave game (even for simple parameterizations), under proper conditions, equilibrium points of this optimization procedure are still locally asymptotically stable for the traditional GAN formulation. On the other hand, we show that the recently proposed Wasserstein GAN can have non-convergent limit cycles near equilibrium. Motivated by this stability analysis, we propose an additional regularization term for gradient descent GAN updates, which is able to guarantee local stability for both the WGAN and the traditional GAN, and also shows practical promise in speeding up convergence and addressing mode collapse."}
{"_id": "2444be7584d1f5a7e2aa9f65078de09154f14ea1", "title": "Born Again Neural Networks", "text": "Knowledge Distillation (KD) consists of transferring \u201cknowledge\u201d from one machine learning model (the teacher) to another (the student). Commonly, the teacher is a high-capacity model with formidable performance, while the student is more compact. By transferring knowledge, one hopes to benefit from the student\u2019s compactness, without sacrificing too much performance. We study KD from a new perspective: rather than compressing models, we train students parameterized identically to their teachers. Surprisingly, these Born-Again Networks (BANs), outperform their teachers significantly, both on computer vision and language modeling tasks. Our experiments with BANs based on DenseNets demonstrate state-of-the-art performance on the CIFAR-10 (3.5%) and CIFAR-100 (15.5%) datasets, by validation error. Additional experiments explore two distillation objectives: (i) Confidence-Weighted by Teacher Max (CWTM) and (ii) Dark Knowledge with Permuted Predictions (DKPP). Both methods elucidate the essential components of KD, demonstrating the effect of the teacher outputs on both predicted and nonpredicted classes."}
{"_id": "396785ef3914fb49df9f1396dc7637bbd738e4f3", "title": "Learning Internal Representations", "text": "Probably the most important, problem in machine learning is the preliminary biasing of a learner\u2019s hypothesis space so that it is small enough to ensure good generalisation from reasonable training sets, yet large enough that it contains a good solution to the problem being learnt, In this paper a mechanism for automailcall,y learning or biasing the learner\u2019s hypothesis space is introduced. It works by first learning an appropriate [nterna! represenlaiton for a learning en~-ironment and then using that representation to hiss the learner\u2019s hypothesis space for the learning of future tasks drawn from the same environment. Jln internal represent at ion must be learnt by sampling from many s2mtiar tasks, not just a single task as occurs in ordinary machine learning. It is proved that the number of examples m per task required to ensure good generalisation from a representation learner obeys }n = O(a + b/n) where n is the number of tasks being learnt, and a and b are constants. If the tasks are learnt independently (/. e. without a common representation ) then m = 0( a + b). It is argued that for learning environments such as speech and character recognition b >> a and hence representation learning in these environments can potentially yield a drastic reduction in the number of examples required per task. It is also proved that if n = O(b) (with m = O(a+b/n ) ) then the representation learnt will be good for learning novel tasks from the same environment, and that the number of examples required to generalise well on a novel task will be reduced to O(a) (as opposed to {)( a + b) if no represent ation is used). It, is shown that gradient descent can be used to train neural network representations and the results of an experiment are reported in which a neural network representation was learnt for an environment consisting of transJaf ion ail,y in uarlanf Boolean functions. The experiment, provides strong qualitative support for the theoretical results."}
{"_id": "6a12eef3ea13f837dec2dfd05ec462b9a9867be7", "title": "Self-Censorship on Facebook", "text": "We report results from an exploratory analysis examining \u201clast-minute\u201d self-censorship, or content that is filtered after being written, on Facebook. We collected data from 3.9 million users over 17 days and associate self-censorship behavior with features describing users, their social graph, and the interactions between them. Our results indicate that 71% of users exhibited some level of last-minute self-censorship in the time period, and provide specific evidence supporting the theory that a user\u2019s \u201cperceived audience\u201d lies at the heart of the issue: posts are censored more frequently than comments, with status updates and posts directed at groups censored most frequently of all sharing use cases investigated. Furthermore, we find that: people with more boundaries to regulate censor more; males censor more posts than females and censor even more posts with mostly male friends than do females, but censor no more comments than females; people who exercise more control over their audience censor more content; and, users with more politically and age diverse friends censor less, in general."}
{"_id": "27ba58c16e84d9c9887c2892e75b3691bc91184d", "title": "On the rate of channel polarization", "text": "A bound is given on the rate of channel polarization. As a corollary, an earlier bound on the probability of error for polar coding is improved. Specifically, it is shown that, for any binary-input discrete memoryless channel W with symmetric capacity I(W) and any rate R \u226a I(W), the polar-coding block-error probability under successive cancellation decoding satisfies Pe(N, R) \u2264 2\u2212N\u03b2 for any \u03b2 \u226a 1/2 when the block-length N is large enough."}
{"_id": "6b9a3fb2db7207821283abc52d4369c0a2b27420", "title": "Natural Language Generation in Interactive Systems", "text": "An informative and comprehensive overview of the state-of-the-art in natural language generation (NLG) for interactive systems, this guide introduces graduate students and new researchers to the field of natural language processing and artificial intelligence, while inspiring them with ideas for future research. Detailing the techniques and challenges of NLG for interactive applications, it focuses on research into systems that model collaborativity and uncertainty, are capable of being scaled incrementally, and can engage with the user effectively. A range of real-world case studies is also included. The book and the accompanying website feature a comprehensive bibliography, and refer the reader to corpora, data, software, and other resources for pursuing research on natural language generation and interactive systems, including dialogue systems, multimodal interfaces, and assistive technologies. It is an ideal resource for students and researchers in computational linguistics, natural language processing, and related fields."}
{"_id": "b20174e5c007f1fcd322aee56200d14506d8d842", "title": "Stretching Agile to fit CMMI Level 3 - the story of creating MSF for CMMI Process Improvement at Microsoft Corporation", "text": "Agile practitioners pride themselves on highly productive, responsive, low ceremony, lightweight, tacit knowledge processes with little waste, adaptive planning and frequent iterative delivery of value. It is often assumed that CMMI compliant processes need to be heavyweight, bureaucratic, slow moving, high ceremony and plan driven. Agile developers often skeptically perceive formal process improvement initiatives as management generated inefficiency that gets in the way of productivity. At Microsoft, we\u2019ve adopted the teachings of W. Edwards Deming and stretched our MSF for Agile Software Development method to fit the requirements for CMMI Level 3. The resultant MSF for CMMI Process Improvement is a highly iterative, adaptive planning method, light on documentation, and heavily automated through tooling. It enables management and organization of software engineering through use of agile metrics such as velocity and cumulative flow but with an added dimension of an understanding of variation \u2013 adapted from Deming\u2019s teachings. This is the story of how mixing Deming with Agile produced a lightweight CMMI solution for .Net developers everywhere."}
{"_id": "36c63a0effe1c12e6010b8c8d004d65b0ca4fdd5", "title": "Anomaly Detection in Text: The Value of Domain Knowledge", "text": "We consider the problem of detecting anomalies from text data. Our hypothesis is that as with classical anomaly detection algorithms, domain-specific features are more important than the linguistic features. We employ the use of first-order logic and demonstrate the effectiveness of useful domain knowledge in two domains. Our results show that the domain-specific features are more predictive and that the relational learning methods exhibit superior performance."}
{"_id": "4045ced70cba66af99b542bb3bb87578a4020ebf", "title": "Accuracy of automatic number plate recognition (ANPR) and real world UK number plate problems", "text": "This paper considers real world UK number plates and relates these to ANPR. It considers aspects of the relevant legislation and standards when applying them to real world number plates. The varied manufacturing techniques and varied specifications of component parts are also noted. The varied fixing methodologies and fixing locations are discussed as well as the impact on image capture."}
{"_id": "c8cff23dcba448f4af436d40d32e367ea0bbe9bc", "title": "Full 3-D Printed Microwave Termination: A Simple and Low-Cost Solution", "text": "This paper describes the realization and characterization of microwave 3-D printed loads in rectangular waveguide technology. Several commercial materials were characterized at X-band (8-12 GHz). Their dielectric properties were extracted through the use of a cavity-perturbation method and a transmission/reflection rectangular waveguide method. A lossy carbon-loaded Acrylonitrile Butadiene Styrene (ABS) polymer was selected to realize a matched load between 8 and 12 GHz. Two different types of terminations were realized by fused deposition modeling: a hybrid 3-D printed termination (metallic waveguide + pyramidal polymer absorber + metallic short circuit) and a full 3-D printed termination (self-consistent matched load). Voltage standing wave ratio of less than 1.075 and 1.025 were measured over X-band for the hybrid and full 3-D printed terminations, respectively. Power behavior of the full 3-D printed termination was investigated. A very linear evolution of reflected power as a function of incident power amplitude was observed at 10 GHz up to 11.5 W. These 3-D printed devices appear as a very low cost solution for the realization of microwave matched loads in rectangular waveguide technology."}
{"_id": "261764f15f5d81ff5da68d60c22edb40b1764fb8", "title": "Captology: A Critical Review", "text": "This critical review of B.J. Fogg\u2019s book Persuasive Technology regards captology as an eclectic and formative work. It summarises two other reviewers\u2019 work and identifies several new strengths. It scrutinises Fogg\u2019s functional triad computers functioning as tools, media and social actors and some categorical changes are recommended. It investigates further Johnson\u2019s concerns about specific ethical omissions, nominating a new term, compusuasion, for the resultant but unintended, exogenous behaviour/attitude change effects of captological design. The review commences to more carefully define what constitutes persuasion and draws attention to the distinction between persuasion techniques in general and the behavioural changes that result from advocacy and education. The reviewer concludes that a fundamental ethic be that the designer\u2019s intent be exposed at the commencement of the user\u2019s engagement with the program and proffers the idea of persuasion resulting in a new conviction, induced by others, as a helpful definition of"}
{"_id": "89c8f10a6546606a5667bb1d131e87b99deaa85c", "title": "Identifying the Scan and Attack Infrastructures Behind Amplification DDoS Attacks", "text": "Amplification DDoS attacks have gained popularity and become a serious threat to Internet participants. However, little is known about where these attacks originate, and revealing the attack sources is a non-trivial problem due to the spoofed nature of the traffic.\n In this paper, we present novel techniques to uncover the infrastructures behind amplification DDoS attacks. We follow a two-step approach to tackle this challenge: First, we develop a methodology to impose a fingerprint on scanners that perform the reconnaissance for amplification attacks that allows us to link subsequent attacks back to the scanner. Our methodology attributes over 58% of attacks to a scanner with a confidence of over 99.9%. Second, we use Time-to-Live-based trilateration techniques to map scanners to the actual infrastructures launching the attacks. Using this technique, we identify 34 networks as being the source for amplification attacks at 98\\% certainty."}
{"_id": "0aa2e697c0d901229e3c6b2495fed8ac990b0ec6", "title": "Finger Sleeve : A Wearable Navigation Device", "text": "In this paper we present a novel wearable navigation system along with implicit HCI (iHCI) model, where interaction with technology is dissolved into a day-today activity. In this type of HCI model a computer takes the input and tries to output an action that is a proactive anticipation of next action of a user. Usually, in urban areas people use voice assisted navigation systems or navigation guidelines displayed on a mobile phone. Some navigation systems are already installed on car dashboard, which needs explicit attention in order to make driving decisions. A navigation system using haptic perception to guide a user throughout a journey is the key contribution of this paper. It does not ask for explicit user attention and demonstrates the indolent form of technological interaction. This wearable device is an index finger sleeve, which consists of vibrator modules, Bluetooth communication module and Microcontroller Unit (MCU). A working prototype has been built and tested."}
{"_id": "ec10084d3998cc1ec7a1612a7a7c3faff42a185b", "title": "Learning to Detect Rooftops in Aerial Images", "text": "In this paper, we examine the use of machine learning to improve the robustness of systems for image analysis on the task of roof detection. We review the problem of analyzing aerial photographs, and describe an existing vision system that attempts to automate the identiication of buildings in aerial images. After this, we brieey review several well-known learning algorithms that represent a wide variety of inductive biases. We report three experiments designed to illuminate facets of applying machine learning methods to the image analysis task; one experiment focuses on within-image learning, another deals with the cost of diierent errors, and a third addresses between-image learning. Experimental results demonstrate that machine-learned classiiers meet or exceed the accuracy of handcrafted solutions and that useful generalization occurs when training and testing on data derived from diierent images."}
{"_id": "7fb4d06ac25d609c2d00e187aa1f29e6cf803616", "title": "SlangSD: Building and Using a Sentiment Dictionary of Slang Words for Short-Text Sentiment Classification", "text": "Sentiment in social media is increasingly considered as an important resource for customer segmentation, market understanding, and tackling other socio-economic issues. However, sentiment in social media is difficult to measure since user-generated content is usually short and informal. Although many traditional sentiment analysis methods have been proposed, identifying slang sentiment words remains untackled. One of the reasons is that slang sentiment words are not available in existing dictionaries or sentiment lexicons. To this end, we propose to build the first sentiment dictionary of slang words to aid sentiment analysis of social media content. It is laborious and time-consuming to collect and label the sentiment polarity of a comprehensive list of slang words. We present an approach to leverage web resources to construct an extensive Slang Sentiment word Dictionary (SlangSD) that is easy to maintain and extend. SlangSD is publicly available1 for research purposes. We empirically show the advantages of using SlangSD, the newly-built slang sentiment word dictionary for sentiment classification, and provide examples demonstrating its ease of use with an existing sentiment system."}
{"_id": "e0ab7c2f6068f0587a627d820c78ffa0cc72de91", "title": "Vehicle control system for automatic valet parking with infrastructure sensors", "text": "This paper presents vehicle control system for automatic valet parking with infrastructure sensors. First, we describe the automatic valet parking service system. In the service, vehicle moves autonomously based on sensing data generated by infrastructure sensors. Second, we implement vehicle control system for automatic valet parking. We design hardware and software components focusing on minimizing the add-on devices and maximizing data integration. Finally, we do a feasibility test for verification of the system."}
{"_id": "9429369078d62d69658c1aacc54c73f33d417e5e", "title": "Comparisons of centerline extraction methods for liver blood vessels in ImageJ and 3D slicer", "text": "In this paper, we introduce the centerline extraction processes of the liver vessel systems by using the plug-ins in ImageJ and 3D Slicer. The Skeletonize (2D/3D) plug-in in ImageJ is easy to use, and only based on the volumetric data. It performs fast and require no user interactions in terms of seed points selection. The VMTK toolkit in 3D slicer is more powerful. It uses the surface model and can provide more smooth and consistent results. Due to its complexities, it requires more computational resources and user specified seed points. Some possible improvements for these two methods are discussed."}
{"_id": "d5e1add7a30b6c1f3b0f3278e8b29acac1ffdcc3", "title": "Implantable Myoelectric Sensors (IMESs) for Intramuscular Electromyogram Recording", "text": "We have developed a multichannel electrogmyography sensor system capable of receiving and processing signals from up to 32 implanted myoelectric sensors (IMES). The appeal of implanted sensors for myoelectric control is that electromyography (EMG) signals can be measured at their source providing relatively cross-talk-free signals that can be treated as independent control sites. An external telemetry controller receives telemetry sent over a transcutaneous magnetic link by the implanted electrodes. The same link provides power and commands to the implanted electrodes. Wireless telemetry of EMG signals from sensors implanted in the residual musculature eliminates the problems associated with percutaneous wires, such as infection, breakage, and marsupialization. Each implantable sensor consists of a custom-designed application-specified integrated circuit that is packaged into a biocompatible RF BION capsule from the Alfred E. Mann Foundation. Implants are designed for permanent long-term implantation with no servicing requirements. We have a fully operational system. The system has been tested in animals. Implants have been chronically implanted in the legs of three cats and are still completely operational four months after implantation."}
{"_id": "7e48ecaee001358a19cda8cbdc9081239834cb54", "title": "Comparative Metagenomics Reveals Host Specific Metavirulomes and Horizontal Gene Transfer Elements in the Chicken Cecum Microbiome", "text": "BACKGROUND\nThe complex microbiome of the ceca of chickens plays an important role in nutrient utilization, growth and well-being of these animals. Since we have a very limited understanding of the capabilities of most species present in the cecum, we investigated the role of the microbiome by comparative analyses of both the microbial community structure and functional gene content using random sample pyrosequencing. The overall goal of this study was to characterize the chicken cecal microbiome using a pathogen-free chicken and one that had been challenged with Campylobacter jejuni.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nComparative metagenomic pyrosequencing was used to generate 55,364,266 bases of random sampled pyrosequence data from two chicken cecal samples. SSU rDNA gene tags and environmental gene tags (EGTs) were identified using SEED subsystems-based annotations. The distribution of phylotypes and EGTs detected within each cecal sample were primarily from the Firmicutes, Bacteroidetes and Proteobacteria, consistent with previous SSU rDNA libraries of the chicken cecum. Carbohydrate metabolism and virulence genes are major components of the EGT content of both of these microbiomes. A comparison of the twelve major pathways in the SEED Virulence Subsystem (metavirulome) represented in the chicken cecum, mouse cecum and human fecal microbiomes showed that the metavirulomes differed between these microbiomes and the metavirulomes clustered by host environment. The chicken cecum microbiomes had the broadest range of EGTs within the SEED Conjugative Transposon Subsystem, however the mouse cecum microbiomes showed a greater abundance of EGTs in this subsystem. Gene assemblies (32 contigs) from one microbiome sample were predominately from the Bacteroidetes, and seven of these showed sequence similarity to transposases, whereas the remaining sequences were most similar to those from catabolic gene families.\n\n\nCONCLUSION/SIGNIFICANCE\nThis analysis has demonstrated that mobile DNA elements are a major functional component of cecal microbiomes, thus contributing to horizontal gene transfer and functional microbiome evolution. Moreover, the metavirulomes of these microbiomes appear to associate by host environment. These data have implications for defining core and variable microbiome content in a host species. Furthermore, this suggests that the evolution of host specific metavirulomes is a contributing factor in disease resistance to zoonotic pathogens."}
{"_id": "53ddd8d657a963f70059827470bc403f335c4da3", "title": "A Survey and Analysis of Various Health-Related Knowledge Mining Techniques in Social Media", "text": "Smart extraction of knowledge from social media has received the recent interest of the Biomedical and Health Informatics community for the simultaneous improvement of healthcare outcomes and lessen the expenses making use of consumergenerated reviews. Social media provides chances for patients and doctors to share their views and experiences without any obtrusion through online communities that might generate information, which is much beyond what is known by the domain experts. Nonetheless, for conventional public health surveillance systems, it is difficult to detect and then monitor the concerns related to health and the changes seen in attitudes of the public towards health-related problems. To solve this problem, several studies have shown the usage of information in social media for the discovery of biomedical and healthrelated information. Several disease-specific knowledge exchanges are now available on Face book and other portals of online social networking. These kind of new sources of information, support, and engagement have gone to become significant for patients who are suffering with the disease, and still the quality and the content of the knowledge contributed in these digital areas are not properly comprehended. The existing research methodologies are discussed with their merits and demerits, so that the further research works can be concentrated more. The experimental tests conducted were on all the research works in mat lab simulation environment and it is compared against each other to find the better approach under various performance measures such as Accuracy, Precision and Recall."}
{"_id": "17677ce6c75bf00d6ab0bbbb742e7e1cbf391c93", "title": "Approaching the Limit of Predictability in Human Mobility", "text": "In this study we analyze the travel patterns of 500,000 individuals in Cote d'Ivoire using mobile phone call data records. By measuring the uncertainties of movements using entropy, considering both the frequencies and temporal correlations of individual trajectories, we find that the theoretical maximum predictability is as high as 88%. To verify whether such a theoretical limit can be approached, we implement a series of Markov chain (MC) based models to predict the actual locations visited by each user. Results show that MC models can produce a prediction accuracy of 87% for stationary trajectories and 95% for non-stationary trajectories. Our findings indicate that human mobility is highly dependent on historical behaviors, and that the maximum predictability is not only a fundamental theoretical limit for potential predictive power, but also an approachable target for actual prediction accuracy."}
{"_id": "16f63ebc5b393524b48932946cb1ba3b6ac5c702", "title": "Simple Customization of Recursive Neural Networks for Semantic Relation Classification", "text": "In this paper, we present a recursive neural network (RNN) model that works on a syntactic tree. Our model differs from previous RNN models in that the model allows for an explicit weighting of important phrases for the target task. We also propose to average parameters in training. Our experimental results on semantic relation classification show that both phrase categories and task-specific weighting significantly improve the prediction accuracy of the model. We also show that averaging the model parameters is effective in stabilizing the learning and improves generalization capacity. The proposed model marks scores competitive with state-of-the-art RNN-based models."}
{"_id": "397598e6eae070857808dfafbcc4ff3679f40197", "title": "Effects of communication styles on acceptance of recommendations in intercultural collaboration", "text": "The objective of this study is to investigate the impact of culture and communication style (explicit versus implicit) on people\u2019s reactions on recommendations in intercultural collaboration. The experimental results from three intercultural collaboration teams were studied: Chinese-American, Chinese-German, and Chinese-Korean. The results indicate that Chinese participants showed more positive evaluations (i.e., higher trust, higher satisfaction, and more future collaboration intention) on the implicit advisor than American and German participants. Compared with Chinese participants, Korean participants accepted explicit recommendations more often and showed more positive evaluations on the explicit advisor. The results also show that when Chinese express recommendations in an explicit way, their recommendations were accepted more often and were more positively evaluated by cross-cultural partners."}
{"_id": "53964b05929728398e2c67609de2438e36347691", "title": "R2O, an extensible and semantically based database-to-ontology mapping language", "text": "We present R2O, an extensible and declarative language to describe mappings between relational DB schemas and ontologies implemented in RDF(S) or OWL. R2O provides an extensible set of primitives with welldefined semantics. This language has been conceived expressive enough to cope with complex mapping cases arisen from situations of low similarity between the ontology and the DB models."}
{"_id": "ce1e8d04fdff25e91399e2fbb8bc8159dd1ea58a", "title": "Low-Dimensional Representation of Spectral Envelope Without Deterioration for Full-Band Speech Analysis/Synthesis System", "text": "A speech coding for a full-band speech analysis/synthesis system is described. In this work, full-band speech is defined as speech with a sampling frequency above 40 kHz, whose Nyquist frequency covers the audible frequency range. In prior works, speech coding has generally focused on the narrowband speech with a sampling frequency below 16 kHz. On the other hand, statistical parametric speech synthesis currently uses the full-band speech, and low-dimensional representation of speech parameters is being used. The purpose of this study is to achieve speech coding without deterioration for full-band speech. We focus on a high-quality speech analysis/synthesis system and mel-cepstral analysis using frequency warping. In the frequency warping function, we directly use three auditory scales. We carried out a subjective evaluation using the WORLD vocoder and found that the optimum number of dimensions was around 50. The kind of frequency warping did not significantly affect the sound quality in the dimensions."}
{"_id": "49b3256add6efdcd9ed2ea90c54b18bb8f5cee3e", "title": "Bayesian Regularization and Pruning Using a Laplace Prior", "text": "Standard techniques for improved generalization from neural networks include weight decay and pruning. Weight decay has a Bayesian interpretation with the decay function corresponding to a prior over weights. The method of transformation groups and maximum entropy suggests a Laplace rather than a gaussian prior. After training, the weights then arrange themselves into two classes: (1) those with a common sensitivity to the data error and (2) those failing to achieve this sensitivity and that therefore vanish. Since the critical value is determined adaptively during training, pruningin the sense of setting weights to exact zerosbecomes an automatic consequence of regularization alone. The count of free parameters is also reduced automatically as weights are pruned. A comparison is made with results of MacKay using the evidence framework and a gaussian regularizer."}
{"_id": "60e7c95e555ddfdd31b4f98f8c02bc59b925dafb", "title": "An MRF Model for Binarization of Natural Scene Text", "text": "Inspired by the success of MRF models for solving object segmentation problems, we formulate the binarization problem in this framework. We represent the pixels in a document image as random variables in an MRF, and introduce a new energy (or cost) function on these variables. Each variable takes a foreground or background label, and the quality of the binarization (or labelling) is determined by the value of the energy function. We minimize the energy function, i.e. find the optimal binarization, using an iterative graph cut scheme. Our model is robust to variations in foreground and background colours as we use a Gaussian Mixture Model in the energy function. In addition, our algorithm is efficient to compute, and adapts to a variety of document images. We show results on word images from the challenging ICDAR 2003 dataset, and compare our performance with previously reported methods. Our approach shows significant improvement in pixel level accuracy as well as OCR accuracy."}
{"_id": "b452a77c8a5153cc3b38de47227eb8f51116a6f6", "title": "A New Gradient Based Character Segmentation Method for Video Text Recognition", "text": "The current OCR cannot segment words and characters from video images due to complex background as well as low resolution of video images. To have better accuracy, this paper presents a new gradient based method for words and character segmentation from text line of any orientation in video frames for recognition. We propose a Max-Min clustering concept to obtain text cluster from the normalized absolute gradient feature matrix of the video text line image. Union of the text cluster with the output of Canny operation of the input video text line is proposed to restore missing text candidates. Then a run length algorithm is applied on the text candidate image for identifying word gaps. We propose a new idea for segmenting characters from the restored word image based on the fact that the text height difference at the character boundary column is smaller than that of the other columns of the word image. We have conducted experiments on a large dataset at two levels (word and character level) in terms of recall, precision and f-measure. Our experimental setup involves 3527 characters of English and Chinese, and this dataset is selected from TRECVID database of 2005 and 2006."}
{"_id": "0d3b177e8d027d44c191e739a3a70ccacc2eac82", "title": "Interactive Graph Cuts for Optimal Boundary and Region Segmentation of Objects in N-D Images", "text": "In this paper we describe a new technique for general purpose interactive segmentation of N-dimensional images . The user marks certain pixels as \u201cobject\u201d or \u201cbackground\u201d to provide hard constraints for segmentation. Additional soft constraints incorporate both boundary and region information. Graph cuts are used to find the globally optimal segmentation of the N-dimensional image. The obtained solution gives the best balance of boundary and region properties among all segmentations satisfying the constraints . The topology of our segmentation is unrestricted and both \u201cobject\u201d and \u201cbackground\u201d segments may consist of several isolated parts. Some experimental results are present ed in the context of photo/video editing and medical image segmentation. We also demonstrate an interesting Gestalt example. A fast implementation of our segmentation method is possible via a new max-flow algorithm in [2]."}
{"_id": "06f0c3b6bc1f03deb14f80eb3a528c9bcd2bc9ae", "title": "Corporate Governance and Corporate Social Responsibility Synergies and Interrelationships", "text": "Manuscript Type: Empirical Research Questions/Issue: This paper seeks to explore the interrelationships between corporate governance (CG) and corporate social responsibility (CSR): first, theoretically, by reviewing the literature and surveying various postulations on offer; second, empirically, by investigating the conception and interpretation of this relationship in the context of a sample of firms operating in Lebanon. Accordingly, the paper seeks to highlight the increasing cross-connects or interfaces between CG and CSR, capitalizing on fresh insights from a developing country perspective. Research Findings/Results: A qualitative interpretive research methodology was adopted, drawing on in-depth interviews with the top managers of eight corporations operating in Lebanon, with the findings suggesting that the majority of managers conceive of CG as a necessary pillar for sustainable CSR. These findings are significant and interesting, implying that recent preoccupation with CG in developing countries is starting to be counterbalanced by some interest/attention to CSR, with growing appreciation of their interdependencies and the need to move beyond CG conformance toward voluntary CSR performance. Theoretical Implications: This study makes two important contributions. First, it suggests that there is a salient two-way relationship and increasing overlap between CG and CSR. While much previous literature has researched CG and CSR independently, this paper makes the case for considering them jointly and systematically. Second, the paper outlines a number of theoretical propositions that can serve as the basis for future research on the topic, particularly in developing countries, given that the data and theoretical propositions are both derived from and tailored to developing country contexts. Practical Implications: This study can potentially alert managers to the increasing overlap between the CG and CSR agendas and the need to exert diligent systematic efforts on both fronts. CG and CSR share more in common than previously assumed, and this needs to be accounted for by practitioners. The research can also alert policy makers in developing countries to the need to increase the vigilance and capacity of the regulatory and judicial systems in the context of CG reform and to increase institutional pressures, particularly of the coercive and normative variety to enhance CSR adoption."}
{"_id": "bb4ef7f3980368c3b620de71a61957fee39519ed", "title": "Cultural adaptation and validation of the Health Literacy Questionnaire (HLQ): robust nine-dimension Danish language confirmatory factor model", "text": "Health literacy is an important construct in population health and healthcare requiring rigorous measurement. The Health Literacy Questionnaire (HLQ), with nine scales, measures a broad perception of health literacy. This study aimed to adapt the HLQ to the Danish setting, and to examine the factor structure, homogeneity, reliability and discriminant validity. The HLQ was adapted using forward-backward translation, consensus conference and cognitive interviews (n\u00a0=\u00a015). Psychometric properties were examined based on data collected by face-to-face interview (n\u00a0=\u00a0481). Tests included difficulty level, composite scale reliability and confirmatory factor analysis (CFA). Cognitive testing revealed that only minor re-wording was required. The easiest scale to respond to positively was 'Social support for health', and the hardest were 'Navigating the healthcare system' and 'Appraisal of health information'. CFA of the individual scales showed acceptably high loadings (range 0.49-0.93). CFA fit statistics after including correlated residuals were good for seven scales, acceptable for one. Composite reliability and Cronbach's \u03b1 were >0.8 for all but one scale. A nine-factor CFA model was fitted to items with no cross-loadings or correlated residuals allowed. Given this restricted model, the fit was satisfactory. The HLQ appears robust for its intended application of assessing health literacy in a range of settings. Further work is required to demonstrate sensitivity to measure changes."}
{"_id": "6b0fb5007f45ded85e723c93f7c57d1aa736e508", "title": "A Survey on Software Estimation Techniques in Traditional and Agile Development Models", "text": "Software projects mostly exceeds budget, delivered late and does not meet with the customer\u2019s satisfaction for years. In the past, many traditional development models like waterfall, spiral, iterative, and prototyping methods are used to build the software systems. In recent years, agile models are widely used in developing the software products. The major reasons are \u2013 simplicity, incorporating the requirement changes at any time, light-weight approach and delivering the working product early and in short duration. Whatever the development model used, it still remains a challenge for software engineer\u2019s to accurately estimate the size, effort and the time required for developing the software system. This survey focuses on the existing estimation models used in traditional as well in agile software development."}
{"_id": "9031a91f7ec5651119270f96768f7600392bbdf1", "title": "Coarse-to-fine volumetric segmentation of teeth in Cone-Beam CT", "text": "We consider the problem of localizing and segmenting individual teeth inside 3D Cone-Beam Computed Tomography (CBCT) images. To handle large image sizes we approach this task with a coarse-to-fine framework, where the whole volume is first analyzed as a 33-class semantic segmentation (adults have up to 32 teeth) in coarse resolution, followed by binary semantic segmentation of the cropped region of interest in original resolution. To improve the performance of the challenging 33-class segmentation, we first train the Coarse step model on a large weakly labeled dataset, then fine-tune it on a smaller precisely labeled dataset. The Fine step model is trained with precise labels only. Experiments using our inhouse dataset show significant improvement for both weaklysupervised pretraining and for the addition of the Fine step. Empirically, this framework yields precise teeth masks with low localization errors sufficient for many real-world applications."}
{"_id": "a7639cb373001dfbbbfa82672cdeb80b15b8b82d", "title": "Experience with SCRAM, a SCenario Requirements Analysis Method", "text": "A method of scenario based requirements engineering is described that uses a combination of early prototypes, scenario scripts and design rationale to elicit and validate user requirements. Experience in using the method on an EU project, Multimedia Broker, is reported. Quantitative data on requirements sessions is analysed to assess user participation and quality of requirements captured. The method worked well but there were problems in the use of design rationale and control of turn taking in RE sessions. Lessons learned in using the method are summarised and future improvements for the method are discussed."}
{"_id": "1f1293fad395357fb793fecc9e338d5b34e21c56", "title": "Constrained Arc-Eager Dependency Parsing", "text": "Arc-eager dependency parsers process sentences in a single left-to-right pass over the input and have linear time complexity with greedy decoding or beam search. We show how such parsers can be constrained to respect two different types of conditions on the output dependency graph: span constraints, which require certain spans to correspond to subtrees of the graph, and arc constraints, which require certain arcs to be present in the graph. The constraints are incorporated into the arc-eager transition system as a set of preconditions for each transition and preserve the linear time complexity of the parser."}
{"_id": "a3703f15a17a09224cdead60eae26888a266c197", "title": "Show Me You Care: Trait Empathy, Linguistic Style, and Mimicry on Facebook", "text": "Linguistic mimicry, the adoption of another\u2019s language patterns, is a subconscious behavior with pro-social benefits. However, some professions advocate its conscious use in empathic communication. This involves mutual mimicry; effective communicators mimic their interlocutors, who also mimic them back. Since mimicry has often been studied in face-to-face contexts, we ask whether individuals with empathic dispositions have unique communication styles and/or elicit mimicry in mediated communication on Facebook. Participants completed Davis\u2019s Interpersonal Reactivity Index and provided access to Facebook activity. We confirm that dispositional empathy is correlated to the use of particular stylistic features. In addition, we identify four empathy profiles and find correlations to writing style. When a linguistic feature is used, this often \u201ctriggers\u201d use by friends. However, the presence of particular features, rather than participant disposition, best predicts mimicry. This suggests that machine-human communications could be enhanced based on recently used features, without extensive user profiling."}
{"_id": "d142c1b2488ea054112187b347e1a5fa83a3d54e", "title": "Survey Research in HCI", "text": ""}
{"_id": "3f0bfc78e9fa507d306665613629ece88d4ff6fb", "title": "SemLens: visual analysis of semantic data with scatter plots and semantic lenses", "text": "Querying the Semantic Web and analyzing the query results are often complex tasks that can be greatly facilitated by visual interfaces. A major challenge in the design of these interfaces is to provide intuitive and efficient interaction support without limiting too much the analytical degrees of freedom. This paper introduces SemLens, a visual tool that combines scatter plots and semantic lenses to overcome this challenge and to allow for a simple yet powerful analysis of RDF data. The scatter plots provide a global overview on an object collection and support the visual discovery of correlations and patterns in the data. The semantic lenses add dimensions for local analysis of subsets of the objects. A demo accessing DBpedia data is used for illustration."}
{"_id": "9653d5c2c7844347343d073bbedd96e05d52f69b", "title": "Pointer Networks", "text": "We introduce a new neural architecture to learn the conditional probability of an output sequence with elements that are discrete tokens corresponding to positions in an input sequence. Such problems cannot be trivially addressed by existent approaches such as sequence-to-sequence [1] and Neural Turing Machines [2], because the number of target classes in each step of the output depends on the length of the input, which is variable. Problems such as sorting variable sized sequences, and various combinatorial optimization problems belong to this class. Our model solves the problem of variable size output dictionaries using a recently proposed mechanism of neural attention. It differs from the previous attention attempts in that, instead of using attention to blend hidden units of an encoder to a context vector at each decoder step, it uses attention as a pointer to select a member of the input sequence as the output. We call this architecture a Pointer Net (Ptr-Net). We show Ptr-Nets can be used to learn approximate solutions to three challenging geometric problems \u2013 finding planar convex hulls, computing Delaunay triangulations, and the planar Travelling Salesman Problem \u2013 using training examples alone. Ptr-Nets not only improve over sequence-to-sequence with input attention, but also allow us to generalize to variable size output dictionaries. We show that the learnt models generalize beyond the maximum lengths they were trained on. We hope our results on these tasks will encourage a broader exploration of neural learning for discrete problems."}
{"_id": "5f74ff4911ef3d720f5a6d427711414b59471d8e", "title": "Interactions between the Midbrain Superior Colliculus and the Basal Ganglia", "text": "An important component of the architecture of cortico-basal ganglia connections is the parallel, re-entrant looped projections that originate and return to specific regions of the cerebral cortex. However, such loops are unlikely to have been the first evolutionary example of a closed-loop architecture involving the basal ganglia. A phylogenetically older, series of subcortical loops can be shown to link the basal ganglia with many brainstem sensorimotor structures. While the characteristics of individual components of potential subcortical re-entrant loops have been documented, the full extent to which they represent functionally segregated parallel projecting channels remains to be determined. However, for one midbrain structure, the superior colliculus (SC), anatomical evidence for closed-loop connectivity with the basal ganglia is robust, and can serve as an example against which the loop hypothesis can be evaluated for other subcortical structures. Examination of ascending projections from the SC to the thalamus suggests there may be multiple functionally segregated systems. The SC also provides afferent signals to the other principal input nuclei of the basal ganglia, the dopaminergic neurones in substantia nigra and to the subthalamic nucleus. Recent electrophysiological investigations show that the afferent signals originating in the SC carry important information concerning the onset of biologically significant events to each of the basal ganglia input nuclei. Such signals are widely regarded as crucial for the proposed functions of selection and reinforcement learning with which the basal ganglia have so often been associated."}
{"_id": "3555ea301c393a03a12eb93f366f893dc55ea942", "title": "Internet of Things: A Comprehensive Study of Security Issues and Defense Mechanisms", "text": "The Internet of Things (IoT) is an evolving global trend in Web-based information architecture aiding in the exchange of services and goods over a network without necessitating human-to-human or human-to-computer interaction. It has the potential to revolutionize physical world interaction of individuals and the organizations. The application of IoT can be recognized significantly in many areas such as in healthcare, resource management, learning, knowledge processing, and many more. The practical realization of IoT is met with a plethora of security and privacy challenges that need to be tackled for IoT\u2019s successful deployment on a commercially viable large scale. This paper analyzes the security issues related to IoT networks through an analysis of the existing empirical researches to get an insight on the security requirements of the IoT networks. The findings of the study revealed that security threats are one of the biggest and ever-growing challenges for IoT, and it is essential to substantially mitigate them for the success of this platform."}
{"_id": "663ab3b9d5c60a49ded3977c8770911d57d754e9", "title": "Deep People Counting with Faster R-CNN and Correlation Tracking", "text": "Crowd counting is a key problem for many computer vision tasks while most existing methods try to count people based on regression with hand-crafted features. Recently, the fast development of deep learning has resulted in many promising detectors of generic object classes. In this paper, to effective leverage the discriminability of convolutional neural networks, we propose a method to people counting based on Faster R-CNN[9] head-shoulder detection and correlation tracking. Firstly, we train a Faster R-CNN head-shoulder detector with Zeiler model to detect people with multiple poses and views. Next, we employ kernelized correlation filter(KCF)[7] to track the people and obtain the trajectory. Considering the results of the detection and tracking, we fuse the two bounding box to obtain a continuous and stable trajectory. Extensive experiments and comparison show the promise of the proposed approach."}
{"_id": "3c3e8536965d80bd6a93300fdc4540af8ef38156", "title": "Design and Experimentation of WPT Charger for Electric City Car", "text": "Wireless power transfer systems (WPTSs) with inductive coupling are advantageously used to charge the battery pack of electric vehicles. They basically consist of coil coupling, power supply circuitry connected to the transmitting side of the coil coupling, and power-conditioning circuitry connected to the receiving side of the coil coupling. This paper presents the design and implementation of a wireless power transfer (WPT) battery charger for an electric city car. A short overview on the working principles of a series-series resonant WPTS is given before describing in detail the design procedure of the power circuitry needed for its operation, i.e., an alternating-current-direct-current converter cascaded by a high-frequency inverter in the transmitting section and a diode rectifier cascaded by a chopper in the receiving section. The coil coupling with spiral coils is designed with the help of a finite-element-method code. A prototype of the WPT battery charger is built up according to the design results, and experiments that validate the design procedure are carried out."}
{"_id": "889c94e7440b1d2ad2cc7ff4be2b72c1dafe6347", "title": "An Introduction to Logistic Regression Analysis and Reporting", "text": "The purpose of this article is to provide researchers, editors, and readers with a set of guidelines for what to expect in an article using logistic regression techniques. Tables, figures, and charts that should be included to comprehensively assess the results and assumptions to be verified are discussed. This article demonstrates the preferred pattern for the application of logistic methods with an illustration of logistic regression applied to a data set in testing a research hypothesis. Recommendations are also offered for appropriate reporting formats of logistic regression results and the minimum observation-to-predictor ratio. The authors evaluated the use and interpretation of logistic regression presented in 8 articles published in The Journal of Educational Research between 1990 and 2000. They found that all 8 studies met or exceeded recommended criteria."}
{"_id": "1e335bec91db57288b94ff6e5fd1aeb221c09055", "title": "The Antecedents of Customer Loyalty : An Empirical Investigation in Life Insurance Context", "text": "The present paper seeks to offer the most decipherable and widely applicable antecedents of customer loyalty. It explores the extant literature on customer loyalty and brings out seven variables which are responsible for formation of customer loyalty. Further, the relative importance of these variables has been ascertained through Multiple Regression Analysis which revealed that service quality and commitment are the strongest predictors of customer loyalty in the Indian life insurance industry. The paper also attempts to assess the loyalty status of life insurance customers in India and draw a comparison between public and private sector life insurance companies in order to provide significant insights to the life insurance companies that may assist them in devising better loyalty practices. The findings suggest that Indian customers do care about the public sector status of a financial service provider as it entails a sense of security and stability and thus creates a difference between customer loyalty of public sector life insurer and that of private sector life insurer. The paper holds significant implications for academicians interested in dynamics of customer loyalty as well as the marketers of life insurance services who are concerned with customer relationships."}
{"_id": "58e572fb17ed8d78d1ae57e8ae4a4d3f8dd23e50", "title": "The Interplay of Cross-Situational Word Learning and Sentence-Level Constraints", "text": "A variety of mechanisms contribute to word learning. Learners can track co-occurring words and referents across situations in a bottom-up manner (cross-situational word learning, CSWL). Equally, they can exploit sentential contexts, relying on top-down information such as verb-argument relations and world knowledge, offering immediate constraints on meaning (word learning based on sentence-level constraints, SLCL). When combined, CSWL and SLCL potentially modulate each other's influence, revealing how word learners deal with multiple mechanisms simultaneously: Do they use all mechanisms? Prefer one? Is their strategy context dependent? Three experiments conducted with adult learners reveal that learners prioritize SLCL over CSWL. CSWL is applied in addition to SLCL only if SLCL is not perfectly disambiguating, thereby complementing or competing with it. These studies demonstrate the importance of investigating word-learning mechanisms simultaneously, revealing important characteristics of their interaction in more naturalistic learning environments."}
{"_id": "a3cfa1a2bf3d7e6bd8a207aa6a3cf6145c36b28f", "title": "Anisotropic diffusion-based detail-preserving smoothing for image restoration", "text": "It is important in image restoration to remove noise while preserving meaningful details such as edges and fine features. The existing edge-preserving smoothing methods may inevitably take fine detail as noise or vice versa. In this paper, we propose a new edge-preserving smoothing technique based on a modified anisotropic diffusion. The proposed method can simultaneously preserve edges and fine details while filtering out noise in the diffusion process. Since the fine detail in the neighborhood of a small image window generally have a gray-level variance larger than that of the noisy background, the proposed diffusion model incorporates both local gradient and gray-level variance to preserve edges and fine details while effectively removing noise. Experimental results have shown that the proposed anisotropic diffusion scheme can effectively smooth noisy background, yet well preserve edge and fine details in the restored image. The proposed method has the best restoration result compared with other edge-preserving methods."}
{"_id": "6d856863ca4f076835ab5ad83524ebf78b9a6c3e", "title": "Interacting Multiple Model Filter-Based Sensor Fusion of GPS With In-Vehicle Sensors for Real-Time Vehicle Positioning", "text": "Vehicle position estimation for intelligent vehicles requires not only highly accurate position information but reliable and continuous information provision as well. A low-cost Global Positioning System (GPS) receiver has widely been used for conventional automotive applications, but it does not guarantee accuracy, reliability, or continuity of position data when GPS errors occur. To mitigate GPS errors, numerous Bayesian filters based on sensor fusion algorithms have been studied. The estimation performance of Bayesian filters primarily relies on the choice of process model. For this reason, the change in vehicle dynamics with driving conditions should be addressed in the process model of the Bayesian filters. This paper presents a positioning algorithm based on an interacting multiple model (IMM) filter that integrates low-cost GPS and in-vehicle sensors to adapt the vehicle model to various driving conditions. The model set of the IMM filter is composed of a kinematic vehicle model and a dynamic vehicle model. The algorithm developed in this paper is verified via intensive simulation and evaluated through experimentation with a real-time embedded system. Experimental results show that the performance of the positioning system is accurate and reliable under a wide range of driving conditions."}
{"_id": "c2d3af4d6ac90cbfc2f399c33e1d6d949e26298d", "title": "High-Integrity IMM-EKF-Based Road Vehicle Navigation With Low-Cost GPS/SBAS/INS", "text": "User requirements for the performance of Global Navigation Satellite System (GNSS)-based road applications have been significantly increasing in recent years. Safety systems based on vehicle localization, electronic fee-collection systems, and traveler information services are just a few examples of interesting applications requiring onboard equipment (OBE) capable of offering a high available accurate position, even in unfriendly environments with low satellite visibility such as built-up areas or tunnels and at low cost. In addition to that, users and service providers demand from the OBEs not only accurate continuous positioning but integrity information of the reliability of this position as well. Specifically, in life-critical applications, high-integrity monitored positioning is absolutely required. This paper presents a solution based on the fusion of GNSS and inertial sensors (a Global Positioning System/satellite-based augmentation system/inertial navigation system integrated system) running an extended Kalman filter combined with an interactive multimodel method (IMM-EKF). The solution developed in this paper supplies continuous positioning in marketable conditions and a meaningful trust level of the given solution. A set of tests performed in controlled and real scenarios proves the suitability of the proposed IMM-EKF implementation as compared with low-cost GNSS-based solutions, dead reckoning systems, single-model EKF, and other filtering approaches of the current literature."}
{"_id": "eb2bfe2b5539b9390a83b8d593449dbc6540168f", "title": "Lane-Level Integrity Provision for Navigation and Map Matching With GNSS, Dead Reckoning, and Enhanced Maps", "text": "Lane-level positioning and map matching are some of the biggest challenges for navigation systems. Additionally, in safety applications or in those with critical performance requirements (such as satellite-based electronic fee collection), integrity becomes a key word for the navigation community. In this scenario, it is clear that a navigation system that can operate at the lane level while providing integrity parameters that are capable of monitoring the quality of the solution can bring important benefits to these applications. This paper presents a pioneering novel solution to the problem of combined positioning and map matching with integrity provision at the lane level. The system under consideration hybridizes measurements from a global navigation satellite system (GNSS) receiver, an odometer, and a gyroscope, along with the road information stored in enhanced digital maps, by means of a multiple-hypothesis particle-filter-based algorithm. A set of experiments in real environments in France and Germany shows the very good results obtained in terms of positioning, map matching, and integrity consistency, proving the feasibility of our proposal."}
{"_id": "551fe3c247e2201f659ccd4e6fe933beb7d452ea", "title": "IMM object tracking for high dynamic driving maneuvers", "text": "Classical object tracking approaches use a Kalman-filter with a single dynamic model which is therefore optimised to a single driving maneuver. In contrast the interacting multiple model (IMM) filter allows for several parallel models which are combined to a weighted estimate. Choosing models for different driving modes, such as constant speed, acceleration and strong acceleration changes, the object state estimation can be optimised for highly dynamic driving maneuvers. The paper describes the analysis of Stop&Go situations and the systematic parametrisation of the IMM method based on these statistics. The evaluation of the IMM approach is presented based on real sensor measurements of laser scanners, a radar and a video image processing unit."}
{"_id": "855b2fe8de7e5038f1372640159390b2065826b6", "title": "Overview of the Lucy Project: Dynamic Stabilization of a Biped Powered by Pneumatic Artificial Muscles", "text": "This paper gives an overview of the Lucy project. What is special is that the biped is not actuated with the classical electrical drives, but with pleated pneumatic artificial muscles. In an antagonistic setup of such muscles both the torque and the compliance are controllable. From human walking there is evidence that joint compliance plays an important role in energy-efficient walking and running. To be able to walk at different walking speeds and step lengths, a trajectory generator and joint trajectory tracking controller are combined. The first generates dynamically stable trajectories based on the objective locomotion parameters which can be changed from step to step. The joint trajectory tracking unit controls the pressure inside the muscles so the desired motion is followed. It is based on a computed torque model and takes the torque\u2013 angle relation of the antagonistic muscle setup into account. With this strategy the robot is able to walk at a speed up to 0.15 m/s. A compliance controller is developed to reduce the energy consumption by combining active trajectory control with the exploitation of the natural dynamics. A mathematical formulation was developed to find an optimal compliance setting depending on the desired trajectory and physical properties of the system. This strategy is experimentally evaluated on a single pendulum structure and not implemented on the real robot because the walking speed of the robot is currently too slow. At the end a discussion is given about the pros and cons of building a pneumatic biped, and the control architecture used. \u00a9 Koninklijke Brill NV, Leiden and The Robotics Society of Japan, 2008"}
{"_id": "6caab3c4f692ba2f4d814a393c49eccad0fa8fce", "title": "DeepTag: inferring diagnoses from veterinary clinical notes", "text": "Large scale veterinary clinical records can become a powerful resource for patient care and research. However, clinicians lack the time and resource to annotate patient records with standard medical diagnostic codes and most veterinary visits are captured in free-text notes. The lack of standard coding makes it challenging to use the clinical data to improve patient care. It is also a major impediment to cross-species translational research, which relies on the ability to accurately identify patient cohorts with specific diagnostic criteria in humans and animals. In order to reduce the coding burden for veterinary clinical practice and aid translational research, we have developed a deep learning algorithm, DeepTag, which automatically infers diagnostic codes from veterinary free-text notes. DeepTag is trained on a newly curated dataset of 112,558 veterinary notes manually annotated by experts. DeepTag extends multitask LSTM with an improved hierarchical objective that captures the semantic structures between diseases. To foster human-machine collaboration, DeepTag also learns to abstain in examples when it is uncertain and defers them to human experts, resulting in improved performance. DeepTag accurately infers disease codes from free-text even in challenging cross-hospital settings where the text comes from different clinical settings than the ones used for training. It enables automated disease annotation across a broad range of clinical diagnoses with minimal preprocessing. The technical framework in this work can be applied in other medical domains that currently lack medical coding resources."}
{"_id": "3ccf752029540235806bdd0c5293b56ddc1254c2", "title": "Multi-Objective Resource Allocation for Secure Communication in Cognitive Radio Networks with Wireless Information and Power Transfer", "text": "In this paper, we study resource allocation for multiuser multiple-input single-output (MISO) secondary communication systems with multiple system design objectives. We consider cognitive radio (CR) networks where the secondary receivers are able to harvest energy from the radio frequency when they are idle. The secondary system provides simultaneous wireless power and secure information transfer to the secondary receivers. We propose a multi-objective optimization framework for the design of a Pareto optimal resource allocation algorithm based on the weighted Tchebycheff approach. In particular, the algorithm design incorporates three important system design objectives: total transmit power minimization, energy harvesting efficiency maximization, and interference power leakage-totransmit power ratio minimization. The proposed framework takes into account a quality of service (QoS) requirement regarding communication secrecy in the secondary system and the imperfection of the channel state information (CSI) of potential eavesdroppers (idle secondary receivers and primary receivers) at the secondary transmitter. The proposed framework includes total harvested power maximization and interference power leakage minimization as special cases. The adopted multiobjective optimization problem is non-convex and is recast as a convex optimization problem via semidefinite programming (SDP) relaxation. It is shown that the global optimal solution of the original problem can be constructed by exploiting both the primal and the dual optimal solutions of the SDP relaxed problem. Besides, two suboptimal resource allocation schemes for the case when the solution of the dual problem is unavailable for constructing the optimal solution are proposed. Numerical results not only demonstrate the close-to-optimal performance of the proposed suboptimal schemes, but also unveil an interesting trade-off between the considered conflicting system design objectives."}
{"_id": "5d62c70bdca5383a25d0b4d541934b0ebfb52e27", "title": "Latent Discriminant Analysis with Representative Feature Discovery", "text": "Linear Discriminant Analysis (LDA) is a well-known method for dimension reduction and classification with focus on discriminative feature selection. However, how to discover discriminative as well as representative features in LDA model has not been explored. In this paper, we propose a latent Fisher discriminant model with representative feature discovery in an semi-supervised manner. Specifically, our model leverages advantages of both discriminative and generative models by generalizing LDA with data-driven prior over the latent variables. Thus, our method combines multi-class, latent variables and dimension reduction in an unified Bayesian framework. We test our method on MUSK and Corel datasets and yield competitive results compared to baselines. We also demonstrate its capacity on the challenging TRECVID MED11 dataset for semantic keyframe extraction and conduct a human-factors ranking-based experimental evaluation, which clearly demonstrates our proposed method consistently extracts more semantically meaningful keyframes than chal-"}
{"_id": "6e73bb17cf29e8e79dfded2564e314b6265aa5a4", "title": "Approximate Orthogonal Sparse Embedding for Dimensionality Reduction", "text": "Locally linear embedding (LLE) is one of the most well-known manifold learning methods. As the representative linear extension of LLE, orthogonal neighborhood preserving projection (ONPP) has attracted widespread attention in the field of dimensionality reduction. In this paper, a unified sparse learning framework is proposed by introducing the sparsity or L1 -norm learning, which further extends the LLE-based methods to sparse cases. Theoretical connections between the ONPP and the proposed sparse linear embedding are discovered. The optimal sparse embeddings derived from the proposed framework can be computed by iterating the modified elastic net and singular value decomposition. We also show that the proposed model can be viewed as a general model for sparse linear and nonlinear (kernel) subspace learning. Based on this general model, sparse kernel embedding is also proposed for nonlinear sparse feature extraction. Extensive experiments on five databases demonstrate that the proposed sparse learning framework performs better than the existing subspace learning algorithm, particularly in the cases of small sample sizes."}
{"_id": "8d638a4af3ffb50a91e709352c56a1235a9fa82d", "title": "Piccolo: An Ultra-Lightweight Blockcipher", "text": "We propose a new 64-bit blockcipher Piccolo supporting 80 and 128-bit keys. Adopting several novel design and implementation techniques, Piccolo achieves both high security and notably compact implementation in hardware. We show that Piccolo offers a sufficient security level against known analyses including recent related-key differential attacks and meet-in-the-middle attacks. In our smallest implementation, the hardware requirements for the 80 and the 128-bit key mode are only 683 and 758 gate equivalents, respectively. Moreover, Piccolo requires only 60 additional gate equivalents to support the decryption function due to its involution structure. Furthermore, its efficiency on the energy consumption which is evaluated by energy per bit is also remarkable. Thus, Piccolo is one of the competitive ultra-lightweight blockciphers which are suitable for extremely constrained environments such as RFID tags and sensor nodes."}
{"_id": "7319c444035bec581ae059c4627f2fe428b7dbac", "title": "A Post-failure Analysis of Mobile Payment Platforms", "text": "Despite on-going efforts over the past decade, mobile payments are yet to take off successfully. Repeated failures show that mobile payment platforms are complex to launch. The objective of this paper is to unveil factors explaining the failure of past mobile payment platforms. The use of a multi-level framework aims at enriching the variety of issues investigated. In order to explore the reasons of failure, we selected four cases from different countries happening at different times. Our results show that these cases share many of the same failure factors: lack of collaboration between the stakeholders, no technology standard, and low value-added for consumers and merchants compared to existing payment solutions. Even though the list is not exhaustive, the uncovered factors in our study seem to be necessary (but not sufficient) for the further developments of mobile payment platforms."}
{"_id": "61ae424fb81457530a08171dcec2def041c1a2ae", "title": "Multi-dimensional pattern discovery in financial time series using sax-ga with extended robustness", "text": "This paper proposes a new Multi-Dimensional SAX-GA approach to pattern discovery using genetic algorithms (GA). The approach is capable of discovering patterns in multi-dimensional financial time series. First, the several dimensions of data are converted to a Symbolic Aggregate approXimation (SAX) representation, which is, then, feed to a GA optimization kernel. The GA searches for profitable patterns occurring simultaneously in the multi-dimensional time series. Based on the patterns found, the GA produces more robust investment strategies, since the simultaneity of patterns on different dimensions of the data, reinforces the strength of the trading decisions implemented. The proposed approach was tested using stocks from S&P500 index, and is compared to previous reference works of SAX-GA and to the Buy & Hold (B&H) classic investment strategy."}
{"_id": "9c65c48ed44e814408e8aa79766c19628debe13e", "title": "A survey on human activity recognition from videos", "text": "Understanding the activities of human from videos is demanding task in Computer Vision. Identifying the actions being accomplished by the human in the video sequence automatically and tagging their actions is the prime functionality of intelligent video systems. The goal of activity recognition is to identify the actions and objectives of one or more objects from a series of examination on the action of object and their environmental condition. The major applications of Human Activity Recognition varies from Content-based Video Analytics, Robotics, Human-Computer Interaction, Human fall detection, Ambient Intelligence, Visual Surveillance, Video Indexing etc. This paper collectively summarizes and deciphers the various methodologies, challenges and issues of Human Activity Recognition systems. Variants of Human Activity Recognition systems such as Human Object Interactions and Human-Human Interactions are also explored. Various benchmarking datasets and their properties are being explored. The Experimental Evaluation of various papers are analyzed efficiently with the various performance metrics like Precision, Recall, and Accuracy."}
{"_id": "287a7b8eeecc0f443da7cebac14106719aee6d8e", "title": "Towards Agile Implementation of Test Maturity Model Integration (TMMI) Level 2 using Scrum Practices", "text": "The software industry has invested the substantial effort to improve the quality of its products like ISO, CMMI and TMMI. Although applying of TMMI maturity criteria has a positive impact on product quality, test engineering productivity, and cycle-time effort. But it requires heavy software development processes and large investments in terms of cost and time that medium/small companies do not deal with it. At the same time, agile methods deal with changing requirements and continuous delivery of valuable software by short time iterations. So that the aim of this paper is to improve the testing process by applying a detailed mapping between TMMI and Scrum practices and verifying this mapping with a study providing empirical evidence of the obtained results. The research has been evaluated on a sample of 2 large-sized TMMI certified companies. In conclusion, the experimental results show the effectiveness of using this integrated approach compared to other existing approaches. Keywords\u2014Agile software development; Scrum; TMMI; Software Testing; CMMI"}
{"_id": "d290d921081607ef30713d57598a9478499b0799", "title": "An Experimental Study of Genetic Crossover Operators for the Job Shop Scheduling Problem", "text": "The aim of this paper is to show the influence of genetic crossover operators on the performance of a genetic algorithm (GA). The GA is applied to the job shop scheduling problem (JSSP). To achieve this aim an experimental study of a set of crossover operators is presented. The experimental study is based on a decision support system (DSS). To compare the abilities of different crossover operators, the DSS was designed giving all the operators the same opportunities. The genetic crossover operators are tested on a set of standard instances taken from the literature. The makespan is the measure used to evaluate the genetic crossover operators. The main conclusion is that there is a crossover operator having the best average performance on a specific set of solved instances. The DSS developed can be utilized in a common industrial or construction environment. Key-Words: Genetic algorithms, crossover operators, scheduling, heuristics, JSSP."}
{"_id": "9ebb1218b496aad144eafdda1a3daadf8fad43eb", "title": "Text mining in healthcare. Applications and opportunities.", "text": "Healthcare information systems collect massive amounts of textual and numeric information about patients, visits, prescriptions, physician notes and more. The information encapsulated within electronic clinical records could lead to improved healthcare quality, promotion of clinical and research initiatives, fewer medical errors and lower costs. However, the documents that comprise the health record vary in complexity, length and use of technical vocabulary. This makes knowledge discovery complex. Commercial text mining tools provide a unique opportunity to extract critical information from textual data archives. In this paper, we share our experience of a collaborative research project to develop predictive models by text mining electronic clinical records. We provide an overview of the text mining process, examples of existing studies, experiences of our collaborative project and future opportunities."}
{"_id": "166d3c07a253a59bfcdd03f5c03ace96e6fee6f4", "title": "Deep Visuo-Tactile Learning: Estimation of Material Properties from Images", "text": "Estimation of materials properties, such as softness or roughness from visual perception is an essential factor in deciding our way of interaction with our environment in e.g., object manipulation tasks or walking. In this research, we propose a method for deep visuo-tactile learning in which we train a encoder-decoder network with an intermediate layer in an unsupervised manner with images as input and tactile sequences as output. Materials properties are then represented in the intermediate layer as a continuous feature space and are estimated from image information. Unlike past studies utilizing tactile sensors focusing on classification for object recognition or recognizing material properties, our method does not require manually designing class labels or annotation, does not cause unknown objects to be classified into known discrete classes, and can be used without a tactile sensor after training. To collect data for training, we have attached a uSkin tactile sensor and a camera to the end-effector of a Sawyer robot to stroke surfaces of 21 different material surfaces. Our results after training show that features are indeed expressed continuously, and that our method is able to handle unknown objects in its feature space."}
{"_id": "3d54d1e4a9bff2397652321852b15d650ab70ac8", "title": "Dynamic modeling and inverse dynamic control of mobile robot", "text": "This paper presents a dynamic modeling and control studies of mobile robot. The robot is mainly built all around three wheels and platform. A mathematical model of unicycle mobile robot is determined by kinematic and dynamic model. The dynamic modelling equations are based on Lagrangian formulation. The motion control strategy is based on the inverse of dynamic control. This leads to accurate tracking of trajectory. The goal to reach in this paper is to improve the performances of inverse dynamic control of mobile robot. The validity of the proposed controller is demonstrated by the simulation of two wheels mobile robots case."}
{"_id": "41b5a56b824f3c535f2e650a320a10783c29e8d3", "title": "A Method for Building Personalized Ontology Summaries", "text": "In the context of ontology engineering, the ontology understanding is the basis for its further development and reuse. One intuitive effective approach to support ontology understanding is the process of ontology summarization which highlights the most important concepts of an ontology. Ontology summarization identifies an excerpt from an ontology that contains the most relevant concepts and produces an abridged ontology. In this article, we present a method for summarizing ontologies that represent a data source schema or describe a knowledge domain. We propose an algorithm to produce a personalized ontology summary based on user-defined parameters, e.g. summary size. The relevance of a concept is determined through user indication or centrality measures, commonly used to determine the relative importance of a vertex within a graph. The algorithm searches for the best ontology summary, i.e., the one containing the most relevant ontology concepts respecting all the original relationships between concepts and the parameters set by the user. Experiments were done comparing our generated ontology summaries against golden standard summaries as well as summaries produced by methods available in related work. We achieved in average more than 62.5% of similarity with golden standard summaries."}
{"_id": "387419eef734fe732df1eff613adc5d785cefd4c", "title": "Systematic Review of Clustering High-Dimensional and Large Datasets", "text": "Technological advancement has enabled us to store and process huge amount of data in relatively short spans of time. The nature of data is rapidly changing, particularly its dimensionality is more commonly multi- and high-dimensional. There is an immediate need to expand our focus to include analysis of high-dimensional and large datasets. Data analysis is becoming a mammoth task, due to incremental increase in data volume and complexity in terms of heterogony of data. It is due to this dynamic computing environment that the existing techniques either need to be modified or discarded to handle new data in multiple high-dimensions. Data clustering is a tool that is used in many disciplines, including data mining, so that meaningful knowledge can be extracted from seemingly unstructured data. The aim of this article is to understand the problem of clustering and various approaches addressing this problem. This article discusses the process of clustering from both microviews (data treating) and macroviews (overall clustering process). Different distance and similarity measures, which form the cornerstone of effective data clustering, are also identified. Further, an in-depth analysis of different clustering approaches focused on data mining, dealing with large-scale datasets is given. These approaches are comprehensively compared to bring out a clear differentiation among them. This article also surveys the problem of high-dimensional data and the existing approaches, that makes it more relevant. It also explores the latest trends in cluster analysis, and the real-life applications of this concept. This survey is exhaustive as it tries to cover all the aspects of clustering in the field of data mining."}
{"_id": "a27c3f0a249dc122104b937c5783f83b3585bb53", "title": "Scalable Distributed Stream Processing", "text": "Many stream-based applications are naturally distributed. Applications are often embedded in an environment with numerous connected computing devices with heterogeneous capabilities. As data travels from its point of origin (e.g., sensors) downstream to applications, it passes through many computing devices, each of which is a potential target of computation. Furthermore, to cope with time-varying load spikes and changing demand, many servers would be brought to bear on the problem. In both cases, distributed computation is the norm. Abstract"}
{"_id": "e0e731805c073c4589375c8b8f65769834201114", "title": "CLOUDS: A Decision Tree Classifier for Large Datasets", "text": "Classification for very large datasets has many practical applications in data mining. Techniques such as discretization and dataset sampling can be used to scale up decision tree classifiers to large datasets. Unfortunately, both of these techniques can cause a significant loss in accuracy. We present a novel decision tree classifier called CLOUDS, which samples the splitting points for numeric attributes followed by an estimation step to narrow the search space of the best split. CLOUDS reduces computation and I/O complexity substantially compared to state of the art classifters, while maintaining the quality of the generated trees in terms of accuracy and tree size. We provide experimental results with a number of real and synthetic data~ets."}
{"_id": "9799cb70e20634f4c117bd864b3726f10a66253e", "title": "What are the Gaps in Mobile Patient Portal? Mining Users Feedback Using Topic Modeling", "text": "Patient portals are positioned as a central component of patient engagement through the potential to change the physician-patient relationship and enable chronic disease self-management. In this article, we extend the existing literature by discovering design gaps for patient portals from a systematic analysis of negative users\u2019 feedback from the actual use of patient portals. Specifically, we adopt topic modeling approach, LDA algorithm, to discover design gaps from online low rating user reviews of a common mobile patient portal, EPIC\u2019s mychart. To validate the extracted gaps, we compared the results of LDA analysis with that of human analysis. Overall, the results revealed opportunities to improve collaboration and to enhance the design of portals intended for patientcentered care."}
{"_id": "4ff5d795889c71848ddec03f43873c6eb179f161", "title": "Text-to-3 D Scene Generation using Semantic Parsing and Spatial Knowledge with Rule Based System", "text": "Scene Generation plays an important role in digital media to represent a news or a specific domain to the viewers. It\u2019s not easy to produce a scene from a text. Text may not completely express the whole situation in digital media. Most of the people are not conscious about the news until it's not visualized to them. Text to 3D scene generation is a process where people do not need to read a news. The 3D Scene will represent the situation. It will help people to conscious about their life. In this paper, we introduce a rule-based framework where scene generated from text using semantic parsing and spatial knowledge. Semantic parsing has identified the templates, objects, and constraints and spatial knowledge have built the relation between object and template. Our rule based framework has identified the uncountable noun and some spatial relations to generate 3D scenes."}
{"_id": "6dfa1cc927369035171fa646472ce332fbe4b49d", "title": "A deadlock prevention method for railway networks using monitors for colored Petri nets", "text": "The real-time traffic control of railway networks authorizes movements of the trains and imposes safety constraints. The paper deals with the real time traffic control focusing on deadlock prevention problem. Colored Petri nets are used to model the dynamics of the railway network system: places represent tracks and stations, tokens are trains. The prevention policy is expressed by a set of linear inequality constraints, called colored Generalized Mutual Exclusion Constraints that are enforced by adding appropriate monitor places. Using digraph tools, deadlock situations are characterized and a strategy is established to define off-line a set of Generalized Mutual Exclusion Constraints that prevent deadlock. An example shows in detail the design of the proposed control logic."}
{"_id": "132f3ca098835f15845f6c7d7e228d00729bb161", "title": "User Preference Modeling and Exploitation in IoT Scenarios", "text": "Recommender Systems are commonly used in web applications to support users in finding items of their interest. We here propose to use Recommender Systems in Internet of Things scenarios to support human decision making in the physical world. For instance, users' choices (visit actions) for points of interests (POIs) while they explore a sensor enabled city can be tracked and used to generate recommendations for not yet visited POIs. In this PhD research, we propose a novel learning approach for generating an explainable human behaviour model and relevant recommendations in sensor enabled areas. Moreover, we propose techniques for simulating user behaviour and analyse the collective dynamics of a population of users."}
{"_id": "4159521bb401c139b440264049ce0af522033b5c", "title": "The Race Between Man and Machine : Implications of Technology for Growth , Factor Shares and Employment", "text": "The advent of automation and the simultaneous decline in the labor share and employment among advanced economies raises concerns that labor will be marginalized and made redundant by new technologies. This paper examines this proposition in a task-based framework wherein not only are tasks previously performed by labor automated, but also more complex versions of existing tasks, in which labor has a comparative advantage, can be created. We fully characterize the structure of equilibrium in this model, establishing how the allocation of factors to tasks and factor prices are determined by the available technology and the endogenous choices of firms between capital and labor. We then demonstrate that although automation tends to reduce employment and the share of labor in national income, the creation of more complex tasks has the opposite effect. Our full model endogenizes the direction of research and development towards automation and the creation of new complex tasks. We show that, under reasonable conditions, there exists a stable balanced growth path in which the two types of innovations go hand-in-hand. Consequently, an increase in automation reduces the wage to rental rate ratio, and thus discourages further automation, encourages greater creation of more labor-intensive tasks, and restores the share of labor in national income and the employment to population ratio back towards their initial values. Though the economy is self-correcting, the equilibrium allocation of research effort is not optimal: to the extent that wages reflect quasi-rents for workers, firms will engage in too much automation. Finally, we extend the model to include workers of different skills. We find that inequality increases during transitions, but the self-correcting forces also serve to limit the increase in inequality over longer periods. Still in Progress. Comments Welcome. \u2217We thank Philippe Aghion, David Autor, Erik Brynjolfsson, Chad Jones, and seminar participants at the AEA 2015, the MIT Initiative on the Digital Economy, and the NBER 2015 Economic Growth Group for their valuable comments."}
{"_id": "b68e8d2f4d2c709bb5919b82effcb6a7bbd3db37", "title": "Stock Market Forecasting Using Machine Learning Algorithms", "text": "Prediction of stock market is a long-time attractive topic to researchers from different fields. In particular, numerous studies have been conducted to predict the movement of stock market using machine learning algorithms such as support vector machine (SVM) and reinforcement learning. In this project, we propose a new prediction algorithm that exploits the temporal correlation among global stock markets and various financial products to predict the next-day stock trend with the aid of SVM. Numerical results indicate a prediction accuracy of 74.4% in NASDAQ, 76% in S&P500 and 77.6% in DJIA. The same algorithm is also applied with different regression algorithms to trace the actual increment in the markets. Finally, a simple trading model is established to study the performance of the proposed prediction algorithm against other benchmarks."}
{"_id": "d299f687a94cfd6b82ef1289079fe4c0038ad7e8", "title": "A LSTM-based method for stock returns prediction: A case study of China stock market", "text": "The presented paper modeled and predicted China stock returns using LSTM. The historical data of China stock market were transformed into 30-days-long sequences with 10 learning features and 3-day earning rate labeling. The model was fitted by training on 900000 sequences and tested using the other 311361 sequences. Compared with random prediction method, our LSTM model improved the accuracy of stock returns prediction from 14.3% to 27.2%. The efforts demonstrated the power of LSTM in stock market prediction in China, which is mechanical yet much more unpredictable."}
{"_id": "1a4999c918c6206cd9804c48f7dce1bac6ec5b4a", "title": "Learning to trade via direct reinforcement", "text": "We present methods for optimizing portfolios, asset allocations, and trading systems based on direct reinforcement (DR). In this approach, investment decision-making is viewed as a stochastic control problem, and strategies are discovered directly. We present an adaptive algorithm called recurrent reinforcement learning (RRL) for discovering investment policies. The need to build forecasting models is eliminated, and better trading performance is obtained. The direct reinforcement approach differs from dynamic programming and reinforcement algorithms such as TD-learning and Q-learning, which attempt to estimate a value function for the control problem. We find that the RRL direct reinforcement framework enables a simpler problem representation, avoids Bellman's curse of dimensionality and offers compelling advantages in efficiency. We demonstrate how direct reinforcement can be used to optimize risk-adjusted investment returns (including the differential Sharpe ratio), while accounting for the effects of transaction costs. In extensive simulation work using real financial data, we find that our approach based on RRL produces better trading strategies than systems utilizing Q-learning (a value function method). Real-world applications include an intra-daily currency trader and a monthly asset allocation system for the S&P 500 Stock Index and T-Bills."}
{"_id": "1dc53b91327cab503acc0ca5afb9155882b717a5", "title": "Multi-Interval Discretization of Continuous-Valued Attributes for Classification Learning", "text": "Since most real-world applications of classification learning involve continuous-valued attributes, properly addressing the discretization process is an important problem. This paper addresses the use of the entropy minimization heuristic for discretizing the range of a continuous-valued attribute into multiple intervals. We briefly present theoretical evidence for the appropriateness of this heuristic for use in the binary discretization algorithm used in ID3, C4 , CART, and other learning algorithms. The results serve to justify extending the algorithm to derive multiple intervals. We formally derive a criterion based on the minimum description length principle for deciding the partitioning of intervals. We demonstrate via empirical evaluation on several real-world data sets that better decision trees are obtained using the new multi-interval algorithm. Introduction Classification learning algorithms typically use heuristics to guide their search through the large space of possible relations between combinations of attribute' values and classes. One such heuristic uses the notion of selecting attributes locally minimizing the information entropy of the classes in a data set (d. the ID3 algorithm (13) and its extensions, e.g. GID3 (2), GID3* (5), and C4 (15), CART (1), CN2 (3) and others). See (11; 5; 6) for a general discussion of the attribute selection problem. The attributes in a learning problem may be nominal (categorical), or they may be continuous (numerical). The term continuous\" is used in the literature to refer to attributes taking on numerical values (integer or real); or in general an attribute with a linearly ordered range of values. The above mentioned attribute selection process assumes that all attributes are nominal. Continuous-valued attributes are discretized prior to selection , typically by paritioning the range of the attribute into subranges. In general, a discretization is simply a logical condition , in terms of one or more attributes, that serves to partition the data into at least two subsets. In this paper, we focus only on the discretization of continuous-valued attributes. We first present a result about the information entropy minimization heuristic for binary discretization (two-interval splits). This gives us: . a better understanding of the heuristic and its behavior 1022 Machine Learning :;. . formal evidence that supports the usage of the heuristic ;;' in this context , and . a gain in computational effciency that results in speeding . up the evaluation process for continuous-valued attribute discretization. We then proceed to extend the algorithm to divide the range of a continuous-valued attribute into multiple intervals rather than just two. We first motivate the need for such a capability, then we present the multiple interval generalization, and finally we present the empirical evaluation results confirming that the new capability does indeed result in producing better decision trees. Binary Discretization A continuous-valued attribute is typically discretized during decision tree generation by partitioning its range into two intervals. A threshold value for the continuous-valued attribute is determined, and the test ::"}
{"_id": "76ad6daa899a8657c9c17480e5fc440fda53acec", "title": "A Multi-task Deep Network for Person Re-identification", "text": "Person re-identification (ReID) focuses on identifying people across different scenes in video surveillance, which is usually formulated as a binary classification task or a ranking task in current person ReID approaches. In this paper, we take both tasks into account and propose a multi-task deep network (MTDnet) that makes use of their own advantages and jointly optimize the two tasks simultaneously for person ReID. To the best of our knowledge, we are the first to integrate both tasks in one network to solve the person ReID. We show that our proposed architecture significantly boosts the performance. Furthermore, deep architecture in general requires a sufficient dataset for training, which is usually not met in person ReID. To cope with this situation, we further extend the MTDnet and propose a cross-domain architecture that is capable of using an auxiliary set to assist training on small target sets. In the experiments, our approach outperforms most of existing person ReID algorithms on representative datasets including CUHK03, CUHK01, VIPeR, iLIDS and PRID2011, which clearly demonstrates the effectiveness of the proposed approach."}
{"_id": "54dcab4c995559b8268f40dbb7207d16fdf08636", "title": "T-S Fuzzy Path Controller Design for the Omnidirectional Mobile Robot", "text": "This paper focuses on the design of T-S fuzzy path controller for improving the high-speed tracking accuracy of a four-wheel omnidirectional mobile robot (ODMR). Usually, when an ODMR moves in high speed, the adverse effects induced by Coriolis force significantly deteriorate the tracking performances. In this paper, a T-S fuzzy control designed with both the state feedback and path tracking error is proposed to meet the design specifications. Moreover, a new approach of kinematics inversion considering the nonlinear term of coordinate transform is proposed to simplify the analysis of ODMR motion. Finally, an example is provided to compare the present T-S fuzzy path controller with the well-tuned PID controller. The simulation results indicate that the proposed approach provides much better tracking performances as the speed of ODMR increases"}
{"_id": "c968f87bb22e2064c7cc415df4cb6b9e7591e5be", "title": "Beam-steerable patch antenna array using parasitic coupling and reactive loading", "text": "A parasitically-coupled and reactively-loaded patch antenna array is demonstrated to achieve beam steering in an analog manner. The strong parasitic coupling between closely-spaced patch antennas is used to form a phased array with a single driven element. The phase shift between the array elements is adjustable by changing the reactive loading on the parasitic patch antenna. A prototype of this antenna array is demonstrated at 2.3 GHz."}
{"_id": "51a48bc88c9fdbbf2bcc6aaec9fa4cf762cae6ba", "title": "Food Image Recognition via Superpixel Based Low-Level and Mid-Level Distance Coding for Smart Home Applications", "text": "Food image recognition is a key enabler for many smart home applications such as smart kitchen and smart personal nutrition log. In order to improve living experience and life quality, smart home systems collect valuable insights of users\u2019 preferences, nutrition intake and health conditions via accurate and robust food image recognition. In addition, efficiency is also a major concern since many smart home applications are deployed on mobile devices where high-end GPUs are not available. In this paper, we investigate compact and efficient food image recognition methods, namely low-level and mid-level approaches. Considering the real application scenario where only limited and noisy data are available, we first proposed a superpixel based Linear Distance Coding (LDC) framework where distinctive low-level food image features are extracted to improve performance. On a challenging small food image dataset where only 12 training images are available per category, our framework has shown superior performance in both accuracy and robustness. In addition, to better model deformable food part distribution, we extend LDC\u2019s feature-to-class distance idea and propose a mid-level superpixel food parts-to-class distance mining framework. The proposed framework show superior performance on a benchmark food image datasets compared to other low-level and mid-level approaches in the literature."}
{"_id": "750a8ad7921ba82db2f6395d3dec379355ac45cf", "title": "Vulnerability , risk and adaptation : A conceptual framework", "text": "The purpose of this paper is to present a tentative conceptual framework for studies of vulnerability and adaptation to climate variability and change, generally applicable to a wide range of contexts, systems and hazards. Social vulnerability is distinguished from biophysical vulnerability, which is broadly equivalent to the natural hazards concept of risk. The IPCC definition of vulnerability is discussed within this context, which helps us to reconcile apparently contradictory definitions of vulnerability. A concise typology of physically defined hazards is presented; the relationship between the vulnerability and adaptive capacity of a human system depends critically on the nature of the hazard faced. Adaptation by a system may be inhibited by process originating outside the system; it is therefore important to consider \" external \" obstacles to adaptation, and links across scales, when assessing adaptive capacity."}
{"_id": "98ab89f55355a25a33df4378e63262fce0cb7f52", "title": "Boosting Neural Machine Translation", "text": "Training efficiency is one of the main problems for Neural Machine Translation (NMT). Deep networks need for very large data as well as many training iterations to achieve state-of-the-art performance. This results in very high computation cost, slowing down research and industrialisation. In this paper, we propose to alleviate this problem with several training methods based on data boosting and bootstrap with no modifications to the neural network. It imitates the learning process of humans, which typically spend more time when learning \u201cdifficult\u201d concepts than easier ones. We experiment on an English-French translation task showing accuracy improvements of up to 1.63 BLEU while saving 20% of training time."}
{"_id": "503a6d42cfb0174ca944053372153e21fec1111c", "title": "Markov Chain Monte Carlo with People", "text": "Many formal models of cognition implicitly use subjective probability distributions to capture the assumptions of human learners. Most applications of these models determine these distributions indirectly. We propose a method for directly determining the assumptions of human learners by sampling from subjective probability distributions. Using a correspondence between a model of human choice and Markov chain Monte Carlo (MCMC), we describe a method for sampling from the distributions over objects that people associate with different categories. In our task, subjects choose whether to accept or reject a proposed change to an object. The task is constructed so that these decisions follow an MCMC acceptance rule, defining a Markov chain for which the stationary distribution is the category distribution. We test this procedure for both artificial categories acquired in the laboratory, and natural categories acquired from experience."}
{"_id": "cdb0b71e3962bd8d84663954e231ea43c33cdc5f", "title": "An Approach to Capture Role-Based Access Control Models from Spring Web Applications", "text": "To mitigate potential misinterpretation and security violations, software developers should use tools that reflect the state of web applications and visualise them as graphical models. Modelling helps to ensure that functionality and access control mechanisms are consistently interconnected. In this paper, we propose an approach to support Web application development using the Spring platform. Our proposal is supported by the Eclipse IDE plugin tool, which recognises Spring Security configuration captures, its notations, and visualises them in the role-based access control (RBAC) models. The RBAC models are represented using SecureUML modelling language. The plugin is validated through survey taken by software developers."}
{"_id": "6ebc601bba067af64b57d59039319650168df4c9", "title": "CNN-based Real-time Dense Face Reconstruction with Inverse-rendered Photo-realistic Face Images.", "text": "With the powerfulness of convolution neural networks (CNN), CNN based face reconstruction has recently shown promising performance in reconstructing detailed face shape from 2D face images. The success of CNN-based methods relies on a large number of labeled data. The state-of-the-art synthesizes such data using a coarse morphable face model, which however has difficulty to generate detailed photo-realistic images of faces (with wrinkles). This paper presents a novel face data generation method. Specifically, we render a large number of photo-realistic face images with different attributes based on inverse rendering. Furthermore, we construct a fine-detailed face image dataset by transferring different scales of details from one image to another. We also construct a large number of video-type adjacent frame pairs by simulating the distribution of real video data. With these nicely constructed datasets, we propose a coarse-to-fine learning framework consisting of three convolutional networks. The networks are trained for real-time detailed 3D face reconstruction from monocular video as well as from a single image. Extensive experimental results demonstrate that our framework can produce high-quality reconstruction but with much less computation time compared to the state-of-the-art. Moreover, our method is robust to pose, expression and lighting due to the diversity of data."}
{"_id": "6f75435c7446d0d5450bccc17efa6459bae68284", "title": "Comparison of Shunt and Series/Shunt nMOS Single-Pole Double-Throw Switches for X-Band Phased Array T/R Modules", "text": "This paper compares the performance of shunt and series/shunt single-pole double-throw nMOS switches designed in a 0.13 mum SiGe BiCMOS process for X-band phased array transmit/receive modules. From 8.5 to 10.5 GHz, the worst case return loss, insertion loss, and isolation are 14.5, 1.89, and 20.5 dB, respectively, for the reflective shunt switch, and 22.2, 2.33, and 22.5 dB, respectively, for the absorptive series/shunt switch. Both switches exhibit an IIP3 of about 28 dBm and dissipate no dc power. The performance of these switches are comparable to other CMOS switches found in triple well technologies, on non-standard substrates, using special device structures, or using extra dc biases"}
{"_id": "e4bfcca730f16f925ad0f8d8deff37f04b423210", "title": "Is it possible to cure Internet addiction with the Internet?", "text": "Significant technological advancements over the last two decades have led to enhanced accessibility to computing devices and the Internet. Our society is experiencing an ever-growing integration of the Internet into everyday lives, and this has transformed the way we obtain and exchange information, communicate and interact with one another as well as conduct business. However, the term \u2018Internet addiction\u2019 (IA) has emerged from problematic and excessive Internet usage which leads to the development of addictive cyber-behaviours, causing health and social problems. The most commonly used intervention treatments such as motivational interviewing, cognitive-behavioural therapy, and retreat or inpatient care mix a variety of psychotherapy theories to treat such addictive behaviour and try to address underlying psychosocial issues that are often coexistent with IA, but the efficacy of these approaches is not yet proved. The aim of this paper is to address the question of whether it is possible to cure IA with the Internet. After detailing the current state-of-the-art including various IA definitions, risk factors, assessment methods and IA treatments, we outline the main research challenges that need to be solved. Moreover, we propose an Internet-based IA Recovery Framework (IARF) which uses AI to closely observe, visualize and analyse patient\u2019s Internet usage behaviour for possible staged intervention. The proposal to use smart Internet-based systems to control IA can be expected to be controversial. This paper is intended to stimulate further discussion and research in IA recovery through Internet-based frameworks."}
{"_id": "7ccc0bc4ed1d6c2e2814ca28ac35cc2aec605c95", "title": "LDAExplore: Visualizing Topic Models Generated Using Latent Dirichlet Allocation", "text": "We present LDAExplore, a tool to visualize topic distributions in a given document corpus that are generated using Topic Modeling methods. Latent Dirichlet Allocation (LDA) is one of the basic methods that is predominantly used to generate topics. One of the problems with methods like LDA is that users who apply them may not understand the topics that are generated. Also, users may find it difficult to search correlated topics and correlated documents. LDAExplore, tries to alleviate these problems by visualizing topic and word distributions generated from the document corpus and allowing the user to interact with them. The system is designed for users, who have minimal knowledge of LDA or Topic Modelling methods. To evaluate our design, we run a pilot study which uses the abstract\u2019s of 322 Information Visualization papers, where every abstract is considered a document. The topics generated are then explored by users. The results show that users are able to find correlated documents and group them based on topics that are similar."}
{"_id": "2bab122e886271733c3be851b2b11b040cefc213", "title": "Barriers to the acceptance of electronic medical records by physicians from systematic review to taxonomy and interventions", "text": "BACKGROUND\nThe main objective of this research is to identify, categorize, and analyze barriers perceived by physicians to the adoption of Electronic Medical Records (EMRs) in order to provide implementers with beneficial intervention options.\n\n\nMETHODS\nA systematic literature review, based on research papers from 1998 to 2009, concerning barriers to the acceptance of EMRs by physicians was conducted. Four databases, \"Science\", \"EBSCO\", \"PubMed\" and \"The Cochrane Library\", were used in the literature search. Studies were included in the analysis if they reported on physicians' perceived barriers to implementing and using electronic medical records. Electronic medical records are defined as computerized medical information systems that collect, store and display patient information.\n\n\nRESULTS\nThe study includes twenty-two articles that have considered barriers to EMR as perceived by physicians. Eight main categories of barriers, including a total of 31 sub-categories, were identified. These eight categories are: A) Financial, B) Technical, C) Time, D) Psychological, E) Social, F) Legal, G) Organizational, and H) Change Process. All these categories are interrelated with each other. In particular, Categories G (Organizational) and H (Change Process) seem to be mediating factors on other barriers. By adopting a change management perspective, we develop some barrier-related interventions that could overcome the identified barriers.\n\n\nCONCLUSIONS\nDespite the positive effects of EMR usage in medical practices, the adoption rate of such systems is still low and meets resistance from physicians. This systematic review reveals that physicians may face a range of barriers when they approach EMR implementation. We conclude that the process of EMR implementation should be treated as a change project, and led by implementers or change managers, in medical practices. The quality of change management plays an important role in the success of EMR implementation. The barriers and suggested interventions highlighted in this study are intended to act as a reference for implementers of Electronic Medical Records. A careful diagnosis of the specific situation is required before relevant interventions can be determined."}
{"_id": "1b7f225f769b7672d5ce3620ecd6c79bacc76ad7", "title": "Automated camera dysfunctions detection", "text": "Surveillance systems depend greatly on the robustness and availability of the video streams. The cameras must deliver reliable streams from an angle corresponding to the correct viewpoint. In other words, the field of view and video quality must remain unchanged after the initial installation of a surveillance camera. The paper proposes an approach to detect changes (such as displacement, out of focus, obstruction) automatically in a difficult environment with illumination variations and dynamic background and foreground objects."}
{"_id": "a94be030ccd68f3a5a3bf9245137fe114c549819", "title": "An analogue approach to the travelling salesman problem using an elastic net method", "text": "The travelling salesman problem1 is a classical problem in the field of combinatorial optimization, concerned with efficient methods for maximizing or minimizing a function of many independent variables. Given the positions of N cities, which in the simplest case lie in the plane, what is the shortest closed tour in which each city can be visited once? We describe how a parallel analogue algorithm, derived from a formal model2\u20133 for the establishment of topographically ordered projections in the brain4\u201310, can be applied to the travelling salesman problem1,11,12. Using an iterative procedure, a circular closed path is gradually elongated non-uniformly until it eventually passes sufficiently near to all the cities to define a tour. This produces shorter tour lengths than another recent parallel analogue algorithm13, scales well with the size of the problem, and is naturally extendable to a large class of optimization problems involving topographic mappings between geometrical structures14."}
{"_id": "c45d52239fdcc567669f12bfc41ead95dd61e4a7", "title": "Improving Text Proposals for Scene Images with Fully Convolutional Networks", "text": "Text Proposals have emerged as a class-dependent version of object proposals \u2013 efficient approaches to reduce the search space of possible text object locations in an image. Combined with strong word classifiers, text proposals currently yield top state of the art results in end-to-end scene text recognition. In this paper we propose an improvement over the original Text Proposals algorithm of [1], combining it with Fully Convolutional Networks to improve the ranking of proposals. Results on the ICDAR RRC and the COCO-text datasets show superior performance over current state-of-the-art."}
{"_id": "544d91882ef5acb9e9b79011b97057e658b0d2fc", "title": "Sum-Rate Analysis for High Altitude Platform (HAP) Drones With Tethered Balloon Relay", "text": "High altitude platform (HAP) drones can provide broadband wireless connectivity to ground users in rural areas by establishing line-of-sight links and exploiting effective beamforming techniques. However, at high altitudes, acquiring the channel state information (CSI) for HAPs, which is a key requirement to perform beamforming, is challenging. In this paper, by exploiting an interference alignment (IA) technique, a novel method for achieving the maximum sum-rate in HAP-based communications without CSI is proposed. In particular, to realize IA, a multiple-antenna tethered balloon is used as a relay between multiple HAP drones and ground stations (GSs). Here, a multiple-input multiple-output X network system is considered. The capacity of the considered $M \\times N$ X network with a tethered balloon relay is derived in closed-form. Simulation results corroborate the theoretical findings and show that the proposed approach yields the maximum sum-rate in multiple HAPs-GSs communications in absence of CSI. The results also show the existence of an optimal balloon\u2019s altitude for which the sum-rate is maximized."}
{"_id": "cb3f5defe2120076ebbcc89b9256bbfcb8b4d8a1", "title": "Feature Generating Networks for Zero-Shot Learning", "text": "Suffering from the extreme training data imbalance between seen and unseen classes, most of existing state-of-the-art approaches fail to achieve satisfactory results for the challenging generalized zero-shot learning task. To circumvent the need for labeled examples of unseen classes, we propose a novel generative adversarial network (GAN) that synthesizes CNN features conditioned on class-level semantic information, offering a shortcut directly from a semantic descriptor of a class to a class-conditional feature distribution. Our proposed approach, pairing a Wasserstein GAN with a classification loss, is able to generate sufficiently discriminative CNN features to train softmax classifiers or any multimodal embedding method. Our experimental results demonstrate a significant boost in accuracy over the state of the art on five challenging datasets - CUB, FLO, SUN, AWA and ImageNet - in both the zero-shot learning and generalized zero-shot learning settings."}
{"_id": "6009e0f79cff9fd33294ec75b27b63ab2b3cce88", "title": "Compact and Efficient Bipolar Coupler for Wireless Power Chargers: Design and Analysis", "text": "Compactness and efficiency are the two basic considerations of the wireless battery chargers for electric vehicles (EVs) and plug-in hybrid EVs. The double-sided LCC compensation topology for wireless power transfer (WPT) has been proved to be one of the efficient solutions lately. However, with the increase of the numbers of compensation components, the volume of the system may become larger, which makes it less attractive. To improve the compactness, a bipolar coupler structure with a compensation-integrated feature is proposed. The inductors of the LCC compensation networks are designed as planar-type and attached to the power-transferring main coils. Extra space and magnetic cores for the compensated inductors outside of the coupler are saved. The cost is that extra couplings between the compensated coils (inductors) and the main coils are induced. To validate the feasibility, the proposed coupler is modeled and investigated by 3-D finite-element analysis tool first. The positioning of the compensated coils, the range of the extra couplings, and the tolerance to misalignment are studied. This is followed by the circuit modeling and characteristic analysis of the proposed WPT topology based on the fundamental harmonic approximation. At last, a 600 mm \u00d7 600 mm with a nominal 150-mm-gap wireless charger prototype, operated at a resonant frequency of 95 kHz and a rated power of 5.6 kW has been built and tested. A peak efficiency of 95.36% from a dc power source to the battery load is achieved at rated operation condition."}
{"_id": "c07ea1464404a387319c5bab39f6a91d7ef9e4f1", "title": "Addressing Female Genital Mutilation/Cutting (FGM/C) in the Era of Clitoral Reconstruction: Plastic Surgery", "text": "Purpose of the Review\nThe aim of this review is to give an overview of the recent evidence on clitoral reconstruction and other relevant reconstructive plastic surgery measures after female genital mutilation/cutting (FGM/C).\n\n\nRecent Findings\nRecent publications present refinements and modifications of the surgical technique of clitoral reconstruction along with reconstruction of the labia majora and clitoral hood. Novel approaches with reposition of the clitoral nerve, anchoring of the labia majora, fat grafting, and full thickness mucosa grafts have been introduced. The current evidence on outcomes of clitoral reconstruction shows potential benefits. However, there is a risk of postoperative complications and a negative outcome. Experts in the field advocate for a multidisciplinary approach including psychosexual counseling and health education with or without subsequent clitoral reconstructive surgery.\n\n\nSummary\nThe evolution of reconstructive treatment for women with FGM/C is expanding, however at a slow rate. The scarcity of evidence on clitoral reconstruction halters availability of clinical guidelines and consensus regarding best practice. Clitoral reconstruction should be provided by multidisciplinary referral centers in a research setting with long-term follow-up on outcomes of postoperative morbidity and possible benefits."}
{"_id": "00514b5cd341ef128d216e86f2a795f218ef83db", "title": "Design and development of a heart rate measuring device using fingertip", "text": "In this paper, we presented the design and development of a new integrated device for measuring heart rate using fingertip to improve estimating the heart rate. As heart related diseases are increasing day by day, the need for an accurate and affordable heart rate measuring device or heart monitor is essential to ensure quality of health. However, most heart rate measuring tools and environments are expensive and do not follow ergonomics. Our proposed Heart Rate Measuring (HRM) device is economical and user friendly and uses optical technology to detect the flow of blood through index finger. Three phases are used to detect pulses on the fingertip that include pulse detection, signal extraction, and pulse amplification. Qualitative and quantitative performance evaluation of the device on real signals shows accuracy in heart rate estimation, even under intense of physical activity. We compared the performance of HRM device with Electrocardiogram reports and manual pulse measurement of heartbeat of 90 human subjects of different ages. The results showed that the error rate of the device is negligible."}
{"_id": "543ad4f3b3ec891023af53ef6fa2200ce886694f", "title": "HEART RATE MONITORING SYSTEM USING FINGER TIP THROUGH ARDUINO AND PROCESSING SOFTWARE", "text": "Technological innovations in the field of disease prevention and maintenance of patient health have enabled the evolution of fields such as monitoring systems. Heart rate is a very vital health parameter that is directly related to the soundness of the human cardiovascular system. Heart rate is the number of times the heart beats per minute, reflects different physiological conditions such as biological workload, stress at work and concentration on tasks, drowsiness and the active state of the autonomic nervous system. It can be measured either by the ECG waveform or by sensing the pulse the rhythmic expansion and contraction of an artery as blood is forced through it by the regular contractions of the heart. The pulse can be felt from those areas where the artery is close to the skin. This paper describes a technique of measuring the heart rate through a fingertip and Arduino. It is based on the principal of photophelthysmography (PPG) which is non-invasive method of measuring the variation in blood volume in tissue using a light source and detector. While the heart is beating, it is actually pumping blood throughout the body, and that makes the blood volume inside the finger artery to change too. This fluctuation of blood can be detected through an optical sensing mechanism placed around the fingertip. The signal can be amplified and is sent to arduino with the help of serial port communication. With the help of processing software heart rate monitoring and counting is performed. The sensor unit consists of an infrared light-emitting-diode (IR LED) and a photo diode. The IR LED transmits an infrared light into the fingertip, a part of which is reflected back from the blood inside the finger arteries. The photo diode senses the portion of the light that is reflected back. The intensity of reflected light depends upon the blood volume inside the fingertip. So, every time the heart beats the amount of reflected infrared light changes, which can be detected by the photo diode. With a high gain amplifier, this little alteration in the amplitude of the reflected light can be converted into a pulse."}
{"_id": "e35c466be82e1cb669027c587fb4f65a881f0261", "title": "Microcontroller Based Wireless Temperature And Heart Beat Read-Out", "text": "In this paper, we propose a Simple Wireless transmission System using common approach Sensor Platform called The wireless based Patient Sensor platform (WSP, Sensor Node) which has remote access capability. The goals of the WSP are to establish: Standard sensor node (System on module), a common software .The proposed platform architecture (Sensor Node) offers flexibility, easy customization for different vital parameter collecting and sending. An prototype has been established based on wireless communication channel. Wireless lan (IEEE .802.15.4)) has been used as communication channel on our prototype (Sensor node). Desire sensor information (vital parameter) can be viewed remotely, and also vital parameter can be adjusted to meet demand."}
{"_id": "c2c465c332ec57a4430ce5f2093915b4ded497ff", "title": "Augmented Reality for the Study of Human Heart Anatomy", "text": "Augmented reality is increasingly applied in medical education mainly because educators can share knowledge through virtual objects. This research describes the development of a web application, which enhances users' medical knowledge with regards to the anatomy of the human heart by means of augmented reality. Evaluation is conducted in two different facets. In the first one, the authors of this paper evaluate the feasibility of a three-dimensional human heart module using one investigator under the supervision of an expert. In the second, evaluation aims at identifying usability issues by means of the cognitive walkthrough method. Three medical students (naive users) are called upon three target tasks in the web application. Task completion is appreciated in the light of the standard set of cognitive walkthrough questions. Augmented reality content miss hits are revealed by means of the first evaluation in an effort to enhance the educational utility of the three-dimensional human heart. Cognitive walkthrough provides further improvement points, which may further enhance usability in the next software release. The current piece of work constitutes the pre-pilot evaluation. Standardized methodologies are utilized in an effort to improve the application before its wider piloting to proper student populations. Such evaluations are considered important in experiential learning methods aiding online education of anatomy courses."}
{"_id": "30a379ae42217e67ccd6d6d98dec76bcd38b21ef", "title": "Semantic Sensor Data Search in a Large-Scale Federated Sensor Network", "text": "Sensor network deployments are a primary source of massive amounts of data about the real world that surrounds us, measuring a wide range of physical properties in real time. However, in large-scale deployments it becomes hard to effectively exploit the data captured by the sensors, since there is no precise information about what devices are available and what properties they measure. Even when metadata is available, users need to know low-level details such as database schemas or names of properties that are specific to a device or platform. Therefore the task of coherently searching, correlating and combining sensor data becomes very challenging. We propose an ontology-based approach, that consists in exposing sensor observations in terms of ontologies enriched with semantic metadata, providing information such as: which sensor recorded what, where, when, and in which conditions. For this, we allow defining virtual semantic streams, whose ontological terms are related to the underlying sensor data schemas through declarative mappings, and can be queried in terms of a high level sensor network ontology."}
{"_id": "c6a2e194ad418074f64d935daa867639eabc6540", "title": "Design of fractal based microstrip rectangular patch antenna for multiband applications", "text": "In this paper a multiband fractal based rectangular microstrip patch antenna is designed. FR4 substrate having thickness of 1.58 mm is used as substrate material for the design of proposed antenna and microstrip feed line provides the excitation to the antenna. The antenna operating frequency range is from 1 to 10 GHz. The proposed antenna resonate at twelve different frequencies as 1.86, 2.33, 3.67, 4.57, 5.08, 6.06, 7.03, 7.75, 8.08, 8.84, 9.56 and 10 GHz and the return losses are -15.39, -16.48, -10.02, -17.29, -13.15, -23.41, -10.22, -11.28, -17.02, -10.94, -15.15 and -15.48 dB respectively. The proposed antenna is designed and simulated by using the Ansoft HFSS V13 (high frequency structure simulator) software."}
{"_id": "14d079a4f3655034083fc749ba2f8f370f28a81a", "title": "Multiview Stereo via Volumetric Graph-Cuts and Occlusion Robust Photo-Consistency", "text": "This paper presents a volumetric formulation for the multiview stereo problem which is amenable to a computationally tractable global optimization using Graph-cuts. Our approach is to seek the optimal partitioning of 3D space into two regions labeled as \"object\" and \"empty\" under a cost functional consisting of the following two terms: 1) A term that forces the boundary between the two regions to pass through photo-consistent locations; and 2) a ballooning term that inflates the \"object\" region. To take account of the effect of occlusion on the first term, we use an occlusion robust photo-consistency metric based on normalized cross correlation, which does not assume any geometric knowledge about the reconstructed object. The globally optimal 3D partitioning can be obtained as the minimum cut solution of a weighted graph."}
{"_id": "d5049a49ab605a6703b0461a330e4dbbcd7307fb", "title": "A Novel Design of $4 \\times 4$ Butler Matrix With Relatively Flexible Phase Differences", "text": "This letter presents a novel topology of a 4 \u00d74 Butler matrix, which can realize relatively flexible phase differences at the output ports. The proposed Butler matrix employs couplers with arbitrary phase-differences to replace quadrature couplers in the conventional Butler matrix. By controlling the phase differences of the applied couplers, the progressive phase differences among output ports of the proposed Butler matrix can be relatively flexible. To facilitate the design, closed-form design equations are derived and presented. For verifying the design concept, a planar 4\u00d74 Butler matrix with four unique progressive phase differences ( - 30\u00b0, + 150\u00b0, - 120\u00b0, and + 60\u00b0) is designed and fabricated. At the operating frequency, the amplitude imbalance is less than 0.75 dB, and the phase mismatch is within \u00b16\u00b0. The measured return loss is better than 16 dB, and the isolation is better than 18 dB. The bandwidth with 10 dB return loss is about 15%."}
{"_id": "28765b5468a7690654145e7afcb05ea660107e3f", "title": "Linked REST APIs: A Middleware for Semantic REST API Integration", "text": "Over the last decade, an exponentially increasing number of REST services have been providing a simple and straightforward syntax for accessing rich data resources. To use these services, however, developers have to understand \"information-use contracts\" specified in natural language, and, to build applications that benefit from multiple existing services they have to map the underlying resource schemas in their code. This process is difficult and error-prone, especially as the number and overlap of the underlying services increases, and the mappings become opaque, difficult to maintain, and practically impossible to reuse. The more recent advent of the Linked Data formalisms can offer a solution to the challenge. In this paper, we propose a conceptual framework for REST-service integration based on Linked Data models. In this framework, the data exposed by REST services is mapped to Linked Data schemas, based on these descriptions, we have developed a middleware that can automatically compose API calls to respond to data queries (in SPARQL). Furthermore, we have developed a RDF model for characterizing the access-control protocols of these APIs and the quality of the data they expose, so that our middleware can develop \"legal\" compositions with desired qualities. We report our experience with the implementation of a prototype that demonstrates the usefulness of our framework in the context of a research-data management application."}
{"_id": "3d79478898cb3c67b6a66675577334bd5286ec01", "title": "Cooperation between distributed agents through self-organisation", "text": "The paper discussu some antral irrw in the cooperation between disuibuted agcnu using lhe following case study: The objective is to explore a distant planet. more concretely to collect samples of a parricular typ of precious ruck. The location of the ruck .iamplu is unknown in advance but they arc rypiwlly clustered in cenain spm There i S a vehicle that can drive around on the planet and later reenter the s p x d (0 g0 back to ewlh There is no deuiled map of the t e m h although it h known hat the terrain is fuU of obstacles. hills. valleys. etc."}
{"_id": "032d59d75b26872d40081fb40d7a81c894455d91", "title": "Privacy Preserving Data", "text": "In this paper we introduce the concept of privacy preserving data mining. In our model, two parties owning conndential databases wish to run a data mining algorithm on the union of their databases, without revealing any unnecessary information. This problem has many practical and important applications, such as in medical research with conndential patient records. Data mining algorithms are usually complex, especially as the size of the input is measured in megabytes, if not gigabytes. A generic secure multi-party computation solution, based on evaluation of a circuit computing the algorithm on the entire input, is therefore of no practical use. We focus on the problem of decision tree learning and use ID3, a popular and widely used algorithm for this problem. We present a solution that is considerably more eecient than generic solutions. It demands very few rounds of communication and reasonable bandwidth. In our solution, each party performs by itself a computation of the same order as computing the ID3 algorithm for its own database. The results are then combined using eecient cryptographic protocols, whose overhead is only logarithmic in the number of transactions in the databases. We feel that our result is a substantial contribution, demonstrating that secure multi-party computation can be made practical, even for complex problems and large inputs."}
{"_id": "101c14f6a04663a7e2c5965c4e0a2d46cb465a08", "title": "The Role of Requirements Engineering Practices in Agile Development: An Empirical Study", "text": ""}
{"_id": "4d352696f60eaebf7ef941bb31173ba0a1bb9a41", "title": "Cross-modal Image-Graphics Retrieval by Neural Transfer Learning", "text": ""}
{"_id": "ab58774577016ce7fa69ff8646d478905d7dffe9", "title": "Gaze-Adaptive Above and On-Surface Interaction", "text": "We explore the combination of above-surface sensing with eye tracking to facilitate concurrent interaction with multiple regions on touch screens. Conventional touch input relies on positional accuracy, thereby requiring tight visual monitoring of one's own motor action. In contrast, above-surface sensing and eye tracking provides information about how user's hands and gaze are distributed across the interface. In these situations we facilitate interaction by 1) showing the visual feedback of the hand hover near user's gaze point and 2) decrease the requisite of positional accuracy by employing gestural information. We contribute input and visual feedback techniques that combine these modalities and demonstrate their use in example applications. A controlled study showed the effectiveness of our techniques for manipulation tasks against conventional touch, while the effectiveness in acquisition tasks depended on the amount of mid-air motion, leading to our conclusion that the techniques can benefit interacting with multiple interface regions."}
{"_id": "d5489b88db7dc3ffcbb78163d4d06bb4554e8075", "title": "Enterprise Content Management (ECM): Needs, challenges and recommendations", "text": "Availability of information, agility of business process, interoperability of diverse business functions and conformity of the legal requirements are few of the critical success factors for enterprises to survive in the modern and competing global market space. Enterprises while striving to achieve these critical success factors are further exposed to the complexity of massive volumes of variant data and information, which exists in a broad array of formats. Complex and extended business processes spanning across the business functions and partners around the globe, need for integration and interoperability, fulfillment of compliance to legal and regulatory requirements. Enterprise content management (ECM) answers the above questions and offers the capabilities to efficiently and effectively manage the challenges imposed by the demanding business requirements of the modern world. The study in-hand reviews and analyzes the existing literature on enterprise content management domain. In order to conduct a structured literature review, three broader areas of considerations for ECM are identified i.e. business needs & drivers, challenges and recommendations. The review inhand aims at consolidating valuable domain knowledge and research work accomplished in the recent years for the benefits of the industry and academia."}
{"_id": "28115c63c146e4b77fa690e5a55175a36dacc634", "title": "Web Application Testing: A Review on Techniques, Tools and State of Art", "text": "Web applications are meant to be viewed by human user. While this implies that quality of web application has importance in our daily life. Web application quality is our prime concern. To ensure the quality of web application, web testing is having a dandy role in Software Testing as well as Web Community. Web Applications are erring because of features provided for rising of web application. In the last years, various web testing problems have been addressed by research work. Several tools, techniques and methods have been determined to test w eb application eff icaciously. This paper w ill present the contribution of researchers in the f ield of web application in previous years and state of art of web testing and challenges primarily because of distributed and heterogeneous nature of web"}
{"_id": "52130ce75aec43c319a05bc656852653ae520fde", "title": "A study of interactive 3D point location in a computer simulated virtual environment", "text": "The present study investigated the ability to inter actively locate points in a three dimensional computer environment using a six degree of freedom input device. Four different visu al feedback modes were tested: fixed viewpoint monoscopic persp ective, fixed viewpoint stereoscopic perspective, head-tracked mo noscopic perspective and head-tracked stereoscopic perspecti ve. Targets were located at plus and minus 10cm along the X, Y or Z axes, from a fixed starting location. Data about the time to complete the task and positioning accuracy (error) are gathered for each trial. In addition, subjective feedback regarding the apparat us, visual mode and task difficulty was solicited from subject s. The results indicate that stereoscopic performance is superior to monoscopic performance and that asymmetries exist both across and within axes. Head tracking had no appreciable effect upon performance. Subjective feedback regarding performance is usuall y consistent with objective measures, but some inconsistencies a re present."}
{"_id": "6d3c37e938ab4bf27d8b5dd45361efd89a61fe5b", "title": "Pavlov: Programming by Stimulus-Response Demonstration", "text": "Pavlov is a Programming By Demonstration (PBD) system that allows animated interfaces to be created without programming. Using a drawing editor and a clock, designers specify the behavior of a target interface by demonstrating stimuli (end-user actions or time) and the (time-stamped) graphical transformations that should be executed in response. This stimulus-response model allows interaction and animation to be defined in a uniform manner, and it allows for the demonstration of interactive animation, i.e., game-like behaviors in which the end-user (player) controls the speed and direction of object movement."}
{"_id": "61225ab6ad556451355664bb27cff2760ad7108a", "title": "Contextual Equivalence for Probabilistic Programs with Continuous Random Variables and Scoring", "text": "We present a logical relation for proving contextual equivalence in a probabilistic programming language (PPL) with continuous random variables and with a scoring operation for expressing observations and soft constraints. Our PPL model is based on a big-step operational semantics that represents an idealized sampler with likelihood weighting. The semantics treats probabilistic non-determinism as a deterministic process guided by a source of entropy. We derive a measure on result values by aggregating (that is, integrating) the behavior of the operational semantics over the entropy space. Contextual equivalence is defined in terms of these measures, taking real events as observable behavior. We define a logical relation and prove it sound with respect to contextual equivalence. We demonstrate the utility of the logical relation by using it to prove several useful examples of equivalences, including the equivalence of a \u03b2v-redex and its contractum and a general form of expression re-ordering. The latter equivalence is sound for the sampling and scoring effects of probabilistic programming but not for effects like mutation or control."}
{"_id": "8315ab82ccf825e440395487a3fe025e4833c171", "title": "Data-Driven Vehicle Trajectory Prediction", "text": "Vehicle trajectory or route prediction is useful in online, data-driven transportation simulation to predict future traffic patterns and congestion, among other uses. The various approaches to route prediction have varying degrees of data required to predict future vehicle trajectories. Three approaches to vehicle trajectory prediction, along with extensions, are examined to assess their accuracy on an urban road network. These include an approach based on the intuition that drivers attempt to reduce their travel time, an approach based on neural networks, and an approach based on Markov models. The T-Drive trajectory data set consisting of GPS trajectories of over ten thousand taxicabs and including 15 million data points in Beijing, China is used for this evaluation. These comparisons illustrate that using trajectory data from other vehicles can substantially improve the accuracy of forward trajectory prediction in the T-Drive data set. These results highlight the benefit of exploiting dynamic data to improve the accuracy of transportation simulation predictions."}
{"_id": "e12083f6b62753e7e46fa466efd9f126b3310132", "title": "Survey of failures and fault tolerance in cloud", "text": "Cloud computing provides support for hosting client's application. Cloud is a distributed platform that provides hardware, software and network resources to both execute consumer's application and also to store and mange user's data. Cloud is also used to execute scientific workflow applications that are in general complex in nature when compared to other applications. Since cloud is a distributed platform, it is more prone to errors and failures. In such an environment, avoiding a failure is difficult and identifying the source of failure is also complex. Because of this, fault tolerance mechanisms are implemented on the cloud platform. This ensures that even if there are failures in the environment, critical data of the client is not lost and user's application running on cloud is not affected in any manner. Fault tolerance mechanisms also help in improving the cloud's performance by proving the services to the users as required on demand. In this paper a survey of existing fault tolerance mechanisms for the cloud platform are discussed. This paper also discusses the failures, fault tolerant clustering methods and fault tolerant models that are specific for scientific workflow applications."}
{"_id": "d6f1ffd0a90f405d7ab2af92b4c0474c02d149e6", "title": "Tracking the student's performance in Web-based education using Scrum methodology", "text": "The Internet, communication and mobile technologies are emerging as predominant paradigm in the learning of students. Various technologies have created numerous opportunities for the learners to explore in the learning landscape. Even though traditional teaching and learning are still popular in India, the scope for Web based education is positive. It is commonly believed that the Web based education is poised to penetrate deep into the educational system in India in near future. While global web-based education system faces different subjects, ideas and different kind of talented student's, the biggest challenge is communication and tracking the performance of student's. To deal with this challenge, we can use scrum. Scrum is the essence and one of the powerful agile methodologies. This paper presents about scrum and scrum backlog."}
{"_id": "7ba36b1159fde8eaef541c321f3fa9bb05bece7a", "title": "Design of Incoherent Frames via Convex Optimization", "text": "This paper describes a new procedure for the design of incoherent frames used in the field of sparse representations. We present an efficient algorithm for the design of incoherent frames that works well even when applied to the construction of relatively large frames. The main advantage of the proposed method is that it uses a convex optimization formulation that operates directly on the frame, and not on its Gram matrix. Solving a sequence of convex optimization problems allows for the introduction of constraints on the frame that were previously considered impossible or very hard to include, such as non-negativity. Numerous experimental results validate the approach."}
{"_id": "b0cc0694f71d08b02e6328c606a6d9028c757a1e", "title": "Hyper-parameter selection in non-quadratic regularization-based radar image formation", "text": "We consider the problem of automatic parameter selection in regularization-based radar image formation techniques. It has previously been shown that non-quadratic regularization produces feature-enhanced radar images; can yield superresolution; is robust to uncertain or limited data; and can generate enhanced images in non-conventional data collection scenarios such as sparse aperture imaging. However, this regularized imaging framework involves some hyper-parameters, whose choice is crucial because that directly affects the characteristics of the reconstruction. Hence there is interest in developing methods for automatic parameter choice. We investigate Stein\u2019s unbiased risk estimator (SURE) and generalized cross-validation (GCV) for automatic selection of hyper-parameters in regularized radar imaging. We present experimental results based on the Air Force Research Laboratory (AFRL) \u201cBackhoe Data Dome,\u201d to demonstrate and discuss the effectiveness of these methods."}
{"_id": "93da24aac5f17243d6d6b6ab8056ecb5e9b3006f", "title": "Automatic fusion and classification using random forests and features extracted with deep learning", "text": "Fusion of different sensor modalities has proven very effective in numerous remote sensing applications. However, in order to benefit from fusion, advanced feature extraction mechanisms that rely on domain expertise are typically required. In this paper we present an automated feature extraction scheme based on deep learning. The feature extraction is unsupervised and hierarchical. Furthermore, computational efficiency (often a challenge for deep learning methods) is a primary goal in order to make certain that the method can be applied in large remote sensing datasets. Promising classification results show the applicability of the approach for both reducing the gap between naive feature extraction and methods relying on domain expertise, as well as further improving the performance of the latter in two challenging datasets."}
{"_id": "087ab67119b7caf129e93d8daa170a7c12a2a8f6", "title": "Who, where, when and what: discover spatio-temporal topics for twitter users", "text": "Micro-blogging services, such as Twitter, and location-based social network applications have generated short text messages associated with geographic information, posting time, and user ids. The availability of such data received from users offers a good opportunity to study the user's spatial-temporal behavior and preference. In this paper, we propose a probabilistic model W4 (short for Who+Where+When+What) to exploit such data to discover individual users' mobility behaviors from spatial, temporal and activity aspects. To the best of our knowledge, our work offers the first solution to jointly model individual user's mobility behavior from the three aspects. Our model has a variety of applications, such as user profiling and location prediction; it can be employed to answer questions such as ``Can we infer the location of a user given a tweet posted by the user and the posting time?\" Experimental results on two real-world datasets show that the proposed model is effective in discovering users' spatial-temporal topics, and outperforms state-of-the-art baselines significantly for the task of location prediction for tweets."}
{"_id": "4ec6d5e2300da16f0e007d5a72d5c9f12aacb989", "title": "Modelling the scaling properties of human mobility", "text": "Individual human trajectories are characterized by fat-tailed distributions of jump sizes and waiting times, suggesting the relevance of continuous-time random-walk (CTRW) models for human mobility. However, human traces are barely random. Given the importance of human mobility, from epidemic modelling to traffic prediction and urban planning, we need quantitative models that can account for the statistical characteristics of individual human trajectories. Here we use empirical data on human mobility, captured by mobile-phone traces, to show that the predictions of the CTRW models are in systematic conflict with the empirical results. We introduce two principles that govern human trajectories, allowing us to build a statistically self-consistent microscopic model for individual human mobility. The model accounts for the empirically observed scaling laws, but also allows us to analytically predict most of the pertinent scaling exponents."}
{"_id": "1b233b82ba35c913e0ddcf19b11ec50d013a8149", "title": "Fused Matrix Factorization with Geographical and Social Influence in Location-Based Social Networks", "text": "Recently, location-based social networks (LBSNs), such as Gowalla, Foursquare, Facebook, and Brightkite, etc., have attracted millions of users to share their social friendship and their locations via check-ins. The available check-in information makes it possible to mine users\u2019 preference on locations and to provide favorite recommendations. Personalized Point-of-interest (POI) recommendation is a significant task in LBSNs since it can help targeted users explore their surroundings as well as help third-party developers to provide personalized services. To solve this task, matrix factorization is a promising tool due to its success in recommender systems. However, previously proposed matrix factorization (MF) methods do not explore geographical influence, e.g., multi-center check-in property, which yields suboptimal solutions for the recommendation. In this paper, to the best of our knowledge, we are the first to fuse MF with geographical and social influence for POI recommendation in LBSNs. We first capture the geographical influence via modeling the probability of a user\u2019s check-in on a location as a Multi-center Gaussian Model (MGM). Next, we include social information and fuse the geographical influence into a generalized matrix factorization framework. Our solution to POI recommendation is efficient and scales linearly with the number of observations. Finally, we conduct thorough experiments on a large-scale real-world LBSNs dataset and demonstrate that the fused matrix factorization framework with MGM utilizes the distance information sufficiently and outperforms other state-of-the-art methods significantly."}
{"_id": "2a442410d949be2ec1b9cc4b87dcbbf0ff259e21", "title": "The scaling laws of human travel", "text": "The dynamic spatial redistribution of individuals is a key driving force of various spatiotemporal phenomena on geographical scales. It can synchronize populations of interacting species, stabilize them, and diversify gene pools. Human travel, for example, is responsible for the geographical spread of human infectious disease. In the light of increasing international trade, intensified human mobility and the imminent threat of an influenza A epidemic, the knowledge of dynamical and statistical properties of human travel is of fundamental importance. Despite its crucial role, a quantitative assessment of these properties on geographical scales remains elusive, and the assumption that humans disperse diffusively still prevails in models. Here we report on a solid and quantitative assessment of human travelling statistics by analysing the circulation of bank notes in the United States. Using a comprehensive data set of over a million individual displacements, we find that dispersal is anomalous in two ways. First, the distribution of travelling distances decays as a power law, indicating that trajectories of bank notes are reminiscent of scale-free random walks known as L\u00e9vy flights. Second, the probability of remaining in a small, spatially confined region for a time T is dominated by algebraically long tails that attenuate the superdiffusive spread. We show that human travelling behaviour can be described mathematically on many spatiotemporal scales by a two-parameter continuous-time random walk model to a surprising accuracy, and conclude that human travel on geographical scales is an ambivalent and effectively superdiffusive process."}
{"_id": "0cd83be0270ee5ca08752d5bcced9326d8529740", "title": "Estimating surface normals in noisy point cloud data", "text": "In this paper we describe and analyze a method based on local least square fitting for estimating the normals at all sample points of a point cloud data (PCD) set, in the presence of noise. We study the effects of neighborhood size, curvature, sampling density, and noise on the normal estimation when the PCD is sampled from a smooth curve in R2 or a smooth surface in R3 and noise is added. The analysis allows us to find the optimal neighborhood size using other local information from the PCD. Experimental results are also provided."}
{"_id": "582ad10b10b13e9e97b2e8bac314cccbac9c60a0", "title": "Big data: the next frontier for innovation in therapeutics and healthcare.", "text": "Advancements in genomics and personalized medicine not only effect healthcare delivery from patient and provider standpoints, but also reshape biomedical discovery. We are in the era of the '-omics', wherein an individual's genome, transcriptome, proteome and metabolome can be scrutinized to the finest resolution to paint a personalized biochemical fingerprint that enables tailored treatments, prognoses, risk factors, etc. Digitization of this information parlays into 'big data' informatics-driven evidence-based medical practice. While individualized patient management is a key beneficiary of next-generation medical informatics, this data also harbors a wealth of novel therapeutic discoveries waiting to be uncovered. 'Big data' informatics allows for networks-driven systems pharmacodynamics whereby drug information can be coupled to cellular- and organ-level physiology for determining whole-body outcomes. Patient '-omics' data can be integrated for ontology-based data-mining for the discovery of new biological associations and drug targets. Here we highlight the potential of 'big data' informatics for clinical pharmacology."}
{"_id": "1b3efaaf72813b20ae1605db8674c419256e49ab", "title": "Forecasting Real Estate Prices \u2217", "text": "This chapter reviews the evidence of predictability in US residential and commercial real estate markets. First, we highlight the main methodologies used in the construction of real estate indices, their underlying assumptions and their impact on the stochastic properties of the resultant series. We then survey the key empirical findings in the academic literature, including short-run persistence and long-run reversals in the log changes of real estate prices. Next, we summarize the ability of local as well as aggregate variables to forecast real estate returns. We illustrate a number of these results by relying on six aggregate indexes of the prices of unsecuritized (residential and commercial) real estate and REITs. The effect of leverage and monetary policy is also discussed."}
{"_id": "e75696dab756f0fc703649e7043727d16ff24d2e", "title": "The Beta Generalized Exponential Distribution", "text": "We introduce the beta generalized exponential distribution that includes the beta exponential and generalized exponential distributions as special cases. We provide a comprehensive mathematical treatment of this distribution. We derive the moment generating function and the rth moment thus generalizing some results in the literature. Expressions for the density, moment generating function and rth moment of the order statistics also are obtained. We discuss estimation of the parameters by maximum likelihood and provide the information matrix. We observe in one application to real data set that this model is quite flexible and can be used quite effectively in analyzing positive data in place of the beta exponential and generalized exponential distributions. keywords: Beta exponential distribution, Information matrix, Generalized exponential distribution, Maximum likelihood estimation."}
{"_id": "b87b11481f7c798ea33f611e130984061b5f98a1", "title": "Motivation and job satisfaction", "text": "The movement of workers to act in a desired manner has always consumed the thoughts of managers. In many ways, this goal has been reached through incentive programs, corporate pep talks, and other types of conditional administrative policy. However, as the workers adjust their behaviour in response to one of the aforementioned stimuli, is job satisfaction actualized? The instilling of satisfaction within workers is a crucial task of management. Satisfaction creates confidence, loyalty and ultimately improved quality in the output of the employed. Satisfaction, though, is not the simple result of an incentive program. Employees will most likely not take any more pride in their work even if they win the weekend getaway for having the highest sales. This paper reviews the literature of motivational theorists and draws from their approaches to job satisfaction and the role of motivation within job satisfaction. The theories of Frederick Herzberg and Edwin Locke are presented chronologically to show how Locke\u2019s theory was a response to Herzberg\u2019s theory. By understanding these theories, managers can focus on strategies of creating job satisfaction. This is followed by a brief examination of Kenneth Blanchard and Paul Hersey\u2019s theory on leadership within management and how this art is changing through time. Herzberg and job satisfaction"}
{"_id": "3e66bfc9bf3068636bb2cd52bfb614004fb81b81", "title": "Recognizing Objects in Range Data Using Regional Point Descriptors", "text": "Recognition of three dimensional (3D) objects in noisy and cluttered scenes is a challenging problem in 3D computer vision. One approach that has been successful in past research is the regional shape descriptor. In this paper, we introduce two new regional shape descriptors: 3D shape contexts and harmonic shape contexts. We evaluate the performance of these descriptors on the task of recognizing vehicles in range scans of scenes using a database of 56 cars. We compare the two novel descriptors to an existing descriptor, the spin image, showing that the shape context based descriptors have a higher recognition rate on noisy scenes and that 3D shape contexts outperform the others on cluttered scenes."}
{"_id": "0c4b21c7d6ffeddda24ede8df99f5ac8cb89ad24", "title": "How Smart is Smart Money ? An Empirical Two-Sided Matching Model of Venture Capital", "text": "In capital markets, top-tier investors may have better abilities to monitor and manage their investments. In addition, there may be sorting in these markets, with top-tier investors investing in the best deals, second-tier investors investing in the second-best deals, and so forth. To separate and quantify these two effects, a structural model of the market for venture capital is developed and estimated. The model is a two-sided matching model that allows for sorting in equilibrium. It is found that more experienced venture capitalists make more successful investments. This is explained both by their value-adding influence on their investments, and by their access to late stage and biotechnology companies, companies that are more successful on average. Sorting is found to be prevalent and has general implications for the interpretation of empirical evidence of the impact of investors on their investments."}
{"_id": "45c8591533ee33e154167401d4f2e7331b82ada5", "title": "Supervised Topic Models", "text": "We introduce supervised latent Dirichlet allocation (sLDA), a statistical model of labelled documents. The model accommodates a variety of response types. We derive a maximum-likelihood procedure for parameter estimation, which relies on variational approximations to handle intractable posterior expectations. Prediction problems motivate this research: we use the fitted model to predict response values for new documents. We test sLDA on two real-world problems: movie ratings predicted from reviews, and web page popularity predicted from text descriptions. We illustrate the benefits of sLDA versus modern regularized regression, as well as versus an unsupervised LDA analysis followed by a separate regression."}
{"_id": "4102b0fd68fa13d851073df878e1e62498d59f53", "title": "A Circuit-Compatible SPICE model for Enhancement Mode Carbon Nanotube Field Effect Transistors", "text": "This paper presents a circuit-compatible compact model for short channel length (5 nm~100 nm), quasi-ballistic single wall carbon nanotube field-effect transistors (CNFETs). For the first time, a universal circuit-compatible CNFET model was implemented with HSPICE. This model includes practical device non-idealities, e.g. the quantum confinement effects in both circumferential and channel length direction, the acoustical/optical phonon scattering in channel region and the resistive source/drain, as well as the real time dynamic response with a transcapacitance array. This model is valid for CNFET for a wide diameter range and various chiralities as long as the carbon nanotube (CNT) is semiconducting"}
{"_id": "50f94bd748606e87623686b83ac04dfc3a90f922", "title": "Fear bradycardia and activation of the human periaqueductal grey", "text": "Animal models of predator defense distinguish qualitatively different behavioral modes that are activated at increasing levels of predation threat. A defense mode observed at intermediate threat levels is freezing: a cessation of locomotion that is characterized by a parasympathetically dominated autonomic nervous system response that causes heart rate deceleration, or fear bradycardia. Studies in rodents have shown that freezing depends on amygdalar projections to the periaqueductal grey (PAG). In humans, freezing-like behaviors are implicated in development and maintenance of psychopathology, but neural mechanisms underlying freezing or its characteristic autonomic response profile have not been identified. Here, we combined event-related blood oxygenation level-dependent functional MRI (BOLD-fMRI) with autonomic response measures in a picture viewing paradigm to probe activity and interconnectivity within the amygdala-PAG pathway and test for an association with parasympathetic as opposed to sympathetic activation. In response to negatively arousing pictures, we observed parasympathetic (bradycardia) and sympathetic (pupil dilation) autonomic responses, BOLD responses in the amygdala and PAG, and effective connectivity between these regions. Critically, BOLD responses in the PAG to negative pictures correlated on a trial-by-trial basis with bradycardia but not pupil dilation. This correlation with bradycardia remained significant when partialling out pupil dilation. Additionally, activity in regions associated with motor planning and inhibition mirrored the PAG response. Thus, our findings implicate the human PAG in a parasympathetically dominated defense mode that subserves a state of attentive immobility. Mechanistic insight into this qualitatively distinct defense mode may importantly advance translational models of anxiety disorders."}
{"_id": "a1ee310fc474f8cbea0f866b100dc70af780ec0b", "title": "RSSI informed phase method for distance calculations", "text": "In an attempt to find an accurate, environmentally robust, and fast process for RFID distance estimations, a method was developed called RSSI Informed Phase. This method uses both phase and RSSI measurements to estimate the distance, and potentially location, of a moving RFID tag. RSSI is initially used to find an approximate reader to tag separation distance. This distance value is applied to find an approximate slope of the phase angle vs. frequency curve. The estimated slope can inform the phase distance calculation, meaning fewer reads are required to find the actual phase angle vs. frequency slope. The reduction in the number of necessary reads accelerates the localization process and makes this method more robust for dynamic environments."}
{"_id": "49461870d3d074c8bd0fcc5410970ef87d3620e5", "title": "Teleoperated robot writing using EMG signals", "text": "In this paper, we have developed a method to tele-control a robot arm to imitate human writing skills using electromyography (EMG) signals. The control method is implemented on the Baxter\u00ae robot with a brush attached on the endpoint of its arm to imitate human writing skills, and a MYO sensor is used to measure the surface electromyographic (sEMG) signals generated by contractions of human muscles involved in writing tasks. A haptic device Sensable\u00ae Omni is used to control the motion of the Baxter\u00ae robot arm, and V-Rep\u00ae is employed to simulate the movement of the arm and to calculate the inverse kinematic of Baxter\u00ae robot arm. All the communications for Baxter\u00ae robot, V-Rep simulator, Omni device and MYO are integrated in MATLAB\u00ae/Simulink\u00ae. The main test is to make the Baxter\u00ae arm following the movement of a human subject when writing with Omni stylus, and the EMG signals are processed using low pass filter and moving average technique to extract the smoothed envelope which is utilised to control the variation of position instead of variation of force along the vertical Z axis, so when the operator writes with force variations, the Baxter\u00ae can draw lines with variable thickness. This imitation system is successfully tested to make the Baxter\u00ae arm to follow the writing movement of the operator's hand and would have potential applications in more complicated human robot teleoperation tasks such as tele-rehabilitation and tele-surgery."}
{"_id": "a16dc6af67ef9746068c63a56a580cb3b2a83e9c", "title": "Parallel computations for controlling an arm.", "text": "In order to control a reaching movement of the arm and body, several different computational problems must be solved. Some parallel methods that could be implemented in networks of neuron-like processors are described. Each method solves a different part of the overall task. First, a method is described for finding the torques necessary to follow a desired trajectory. The methods is more economical and more versatile than table look-up and requires very few sequential steps. Then a way of generating an internal representation of a desired trajectory is described. This method shows the trajectory one piece at a time by applying a large set of heuristic rules to a \"motion blackboard\" that represents the static and dynamic parameters of the state of the body at the current point in the trajectory. The computations are simplified by expressing the positions, orientations, and motions of parts of the body in terms of a single, non-accelerating, world-based frame of reference, rather than in terms of the joint-angles or an egocentric frame based on the body itself."}
{"_id": "921e27dd44595b0f4353ee6b981aa53251073e55", "title": "TimeMachine: Timeline Generation for Knowledge-Base Entities", "text": "We present a method called TIMEMACHINE to generate a timeline of events and relations for entities in a knowledge base. For example for an actor, such a timeline should show the most important professional and personal milestones and relationships such as works, awards, collaborations, and family relationships. We develop three orthogonal timeline quality criteria that an ideal timeline should satisfy: (1) it shows events that are relevant to the entity; (2) it shows events that are temporally diverse, so they distribute along the time axis, avoiding visual crowding and allowing for easy user interaction, such as zooming in and out; and (3) it shows events that are content diverse, so they contain many different types of events (e.g., for an actor, it should show movies and marriages and awards, not just movies). We present an algorithm to generate such timelines for a given time period and screen size, based on submodular optimization and web-co-occurrence statistics with provable performance guarantees. A series of user studies using Mechanical Turk shows that all three quality criteria are crucial to produce quality timelines and that our algorithm significantly outperforms various baseline and state-of-the-art methods."}
{"_id": "72a6c4dafe02277033af326d0a18c0a811aad897", "title": "Sustainable construction: construction and demolition waste reconsidered.", "text": "Construction activity in Europe has increased substantially in the past decade. Likewise, there has also been a commensurate rise in the generation of construction and demolition waste (C&DW). This, together with the fact that in many European countries the rate of recycling and reuse of C&DW is still quite low has engendered a serious environmental problem and a motivation to develop strategies and management plans to solve it. Due to its composition, there is a significant potential to reuse and/or recycle C&DW, and thereby, contribute to improving the sustainability of construction and development, but practical procedures are not yet widely known or practiced in the construction industry. This article (a) summarizes the different applications that are presently practiced to optimize the recovery and/or application of C&DW for reuse, and (b) proposes various measures and strategies to improve the processing of this waste. The authors suggest that to enhance environmental effectiveness, a conscious and comprehensive C&DW management plan should be implemented in each jurisdiction. More precisely, this study presents a holistic approach towards C&DW management, through which environmental benefits can be achieved through the application of new construction methods that can contribute to sustainable growth."}
{"_id": "a7966868b57323ff53b963312664cfd708ebcc39", "title": "Wireless image communication system for fire-fighting robots", "text": "Recently, it has sometimes been impossible for fire-fighting personnel to access the site of a fire, even as the fire causes tremendous property damage and loss of human life, due to high temperatures or the presence of explosive materials. In such environments, fire-fighting robots can be useful for extinguishing a fire, and they should be controlled by remote operators. In order to help a remote operator who is located far away from the fire-fighting robot, wireless image communication systems have been investigated in this study. The developed system uses a High-Speed Downlink Packet Access (HSDPA) network to transmit images, showing the view from a fire-fighting robot. The fire-fighting robots and remote controllers are equipped with industrial computers, and image-transmitting and image-receiving programs function according to various protocols to allow communication between the robots and controllers"}
{"_id": "c5e5d1bb3b124dbabe86eb3fbc95a94162a07b31", "title": "Low voltage low power analog circuit design OTA using signal attenuation technique in universal filter application", "text": "This paper presents a new configuration for a linear operational transconductance amplifier (OTA) using a signal attenuation technique. The OTA is designed to operate with a \u00b10.8V supply voltage and consumes 0.45mW power. All simulations are performed by ELDO model BSIM3v3 technology CMOS TSMC 0.18\u03bcm. The simulation results of this circuit showed a high DC gain of 73.6dB with a unity frequency of 50.19MHz and a total harmonic distortion of -60.81dB at 100 kHz for an input voltage of 1Vpp. Based on this circuit, a voltage mode universal filter has been implemented. The simulation results are in a good agreement with the theoretical calculations."}
{"_id": "38357398faffb094111dfb8558ffcf638ba78e5d", "title": "Competitive intelligence: construct exploration, validation and equivalence", "text": "Purpose \u2013 Little empirical research has been conducted on competitive intelligence (CI). This paper aims to contribute to the quantitative strand of the CI literature by exploring and validating the theoretical constructs of the CI process. Design/methodology/approach \u2013 Data from 601 questionnaires filled out by South African and Flemish exporters were subjected to exploratory factor analysis and construct equivalence analysis between the sub-samples. Findings \u2013 The results showed that the CI process consists of three constructs, while the context in which CI takes place consists of four constructs. This agrees to some extent with the literature. When verifying the constructs for both cultures it was found that all but one CI context construct can be viewed as equivalent in both groups. Bias analysis identified one item in the questionnaire that was biased. Via regression analysis it was also indicated that the context in which CI takes place influences the CI process to a large extent. The research identified size as an important influencing factor in a business\u2019 CI process. Practical implications \u2013 Businesses involved in CI should take note that an improvement in their formal infrastructure, employee involvement and internal information processes could enhance their CI capability. Originality/value \u2013 This paper contributes towards the formalising of the constructs of competitive intelligence."}
{"_id": "8c3aad5fe532344a7b0574d95e23fcd82a3322d6", "title": "Empirical studies of software engineering: a roadmap", "text": "In this article we summarize the strengths and weaknesses of empirical research in software engineering. We argue that in order to improve the current situation we must create better studies and draw more credible interpretations from them. We finally present a roadmap for this improvement, which includes a general structure for software empirical studies and concrete steps for achieving these goals: designing better studies, collecting data more effectively, and involving others in our empirical enterprises."}
{"_id": "c592abca4236ad3cb9cea9371d0cf35fd9dbd9b5", "title": "Hidost: a static machine-learning-based detector of malicious files", "text": "Malicious software, i.e., malware, has been a persistent threat in the information security landscape since the early days of personal computing. The recent targeted attacks extensively use non-executable malware as a stealthy attack vector. There exists a substantial body of previous work on the detection of non-executable malware, including static, dynamic, and combined methods. While static methods perform orders of magnitude faster, their applicability has been hitherto limited to specific file formats. This paper introduces Hidost, the first static machine-learning-based malware detection system designed to operate onmultiple file formats. Extending a previously published, highly effective method, it combines the logical structure of files with their content for even better detection accuracy. Our system has been implemented and evaluated on two formats, PDF and SWF (Flash). Thanks to its modular design and general feature set, it is extensible to other formats whose logical structure is organized as a hierarchy. Evaluated in realistic experiments on timestamped datasets comprising 440,000 PDF and 40,000 SWF files collected during several months, Hidost outperformed all antivirus engines deployed by the website VirusTotal to detect the highest number of malicious PDF files and ranked among the best on SWF malware."}
{"_id": "2b739fad8072544b2d45c7f9f9521a127376188f", "title": "Thinking in circuits: toward neurobiological explanation in cognitive neuroscience", "text": "Cognitive theory has decomposed human mental abilities into cognitive (sub) systems, and cognitive neuroscience succeeded in disclosing a host of relationships between cognitive systems and specific structures of the human brain. However, an explanation of why specific functions are located in specific brain loci had still been missing, along with a neurobiological model that makes concrete the neuronal circuits that carry thoughts and meaning. Brain theory, in particular the Hebb-inspired neurocybernetic proposals by Braitenberg, now offers an avenue toward explaining brain\u2013mind relationships and to spell out cognition in terms of neuron circuits in a neuromechanistic sense. Central to this endeavor is the theoretical construct of an elementary functional neuronal unit above the level of individual neurons and below that of whole brain areas and systems: the distributed neuronal assembly (DNA) or thought circuit (TC). It is shown that DNA/TC theory of cognition offers an integrated explanatory perspective on brain mechanisms of perception, action, language, attention, memory, decision and conceptual thought. We argue that DNAs carry all of these functions and that their inner structure (e.g., core and halo subcomponents), and their functional activation dynamics (e.g., ignition and reverberation processes) answer crucial localist questions, such as why memory and decisions draw on prefrontal areas although memory formation is normally driven by information in the senses and in the motor system. We suggest that the ability of building DNAs/TCs spread out over different cortical areas is the key mechanism for a range of specifically human sensorimotor, linguistic and conceptual capacities and that the cell assembly mechanism of overlap reduction is crucial for differentiating a vocabulary of actions, symbols and concepts."}
{"_id": "65293ecf6a4c5ab037a2afb4a9a1def95e194e5f", "title": "Face, Age and Gender Recognition using Local Descriptors", "text": "This thesis focuses on the area of face processing and aims at designing a reliable framework to facilitate face, age, and gender recognition. A Bag-of-Words framework has been optimized for the task of face recognition by evaluating different feature descriptors and different bag-of-words configurations. More specifically, we choose a compact set of features (e.g., descriptors, window locations, window sizes, dictionary sizes, etc.) in order to produce the highest possible rate of accuracy. Experiments on a challenging dataset shows that our framework achieves a better level of accuracy when compared to other popular approaches such as dimension reduction techniques, edge detection operators, and texture and shape feature extractors. The second contribution of this thesis is the proposition of a general framework for age and gender classification. Although the vast majority of the existing solutions focus on a single visual descriptor that often only encodes a certain characteristic of the image regions, this thesis aims at integrating multiple feature types. For this purpose, feature selection is employed to obtain more accurate and robust facial descriptors. Once descriptors have been computed, a compact set of features is chosen, which facilitates facial image processing for age and gender analysis. In addition to this, a new color descriptor (CLR-LBP) is proposed and the results obtained is shown to be comparable to those of other pre-existing color descriptors. The experimental results indicates that our age and gender framework outperforms other proposed methods when examined on two challenging databases, where face objects are present with different expressions and levels of illumination. This achievement demonstrates the effectiveness of our proposed solution and allows us to achieve a higher accuracy over the existing state-of-the-art methods."}
{"_id": "d2d277f91cd558d35bf62d71b54b819e0df0550a", "title": "Mate choice decisions: the role of facial beauty", "text": "For most people, facial beauty appears to play a prominent role in choosing a mate. Evidence from research on facial attractiveness indicates that physical beauty is a sexually selected trait mediated, in part, by pubertal facial hormone markers that signal important biological information about the displayer. Such signals would be ineffective if they did not elicit appropriate cognitive and/or emotional responses in members of the opposite sex. In this article, I argue that the effectiveness of these hormonal displays varies with perceivers' brains, which have been organized by the degree of steroid hormone exposure in the uterus, and activated by varying levels of circulating steroids following puberty. I further propose that the methodology used for examining mate choice decisions has general applicability for determining how cognitive and emotional evaluations enter into decision processes."}
{"_id": "17ba9019ec7eb1f03d64f4d80e47376a4f6f8583", "title": "Learning from the past: answering new questions with past answers", "text": "Community-based Question Answering sites, such as Yahoo! Answers or Baidu Zhidao, allow users to get answers to complex, detailed and personal questions from other users. However, since answering a question depends on the ability and willingness of users to address the asker's needs, a significant fraction of the questions remain unanswered. We measured that in Yahoo! Answers, this fraction represents 15% of all incoming English questions. At the same time, we discovered that around 25% of questions in certain categories are recurrent, at least at the question-title level, over a period of one year.\n We attempt to reduce the rate of unanswered questions in Yahoo! Answers by reusing the large repository of past resolved questions, openly available on the site. More specifically, we estimate the probability whether certain new questions can be satisfactorily answered by a best answer from the past, using a statistical model specifically trained for this task. We leverage concepts and methods from query-performance prediction and natural language processing in order to extract a wide range of features for our model. The key challenge here is to achieve a level of quality similar to the one provided by the best human answerers.\n We evaluated our algorithm on offline data extracted from Yahoo! Answers, but more interestingly, also on online data by using three \"live\" answering robots that automatically provide past answers to new questions when a certain degree of confidence is reached. We report the success rate of these robots in three active Yahoo! Answers categories in terms of both accuracy, coverage and askers' satisfaction. This work presents a first attempt, to the best of our knowledge, of automatic question answering to questions of social nature, by reusing past answers of high quality."}
{"_id": "5b660f6fb6b1277a5c8a311a7e688234cde909d9", "title": "Netprobe: a fast and scalable system for fraud detection in online auction networks", "text": "Given a large online network of online auction users and their histories of transactions, how can we spot anomalies and auction fraud? This paper describes the design and implementation of NetProbe, a system that we propose for solving this problem. NetProbe models auction users and transactions as a Markov Random Field tuned to detect the suspicious patterns that fraudsters create, and employs a Belief Propagation mechanism to detect likely fraudsters. Our experiments show that NetProbe is both efficient and effective for fraud detection. We report experiments on synthetic graphs with as many as 7,000 nodes and 30,000 edges, where NetProbe was able to spot fraudulent nodes with over 90% precision and recall, within a matter of seconds. We also report experiments on a real dataset crawled from eBay, with nearly 700,000 transactions between more than 66,000users, where NetProbe was highly effective at unearthing hidden networks of fraudsters, within a realistic response time of about 6 minutes. For scenarios where the underlying data is dynamic in nature, we propose IncrementalNetProbe, which is an approximate, but fast, variant of NetProbe. Our experiments prove that Incremental NetProbe executes nearly doubly fast as compared to NetProbe, while retaining over 99% of its accuracy."}
{"_id": "0aea7c981f3c0bf140bbe30b135ad1d87eab3503", "title": "Multicommodity max-flow min-cut theorems and their use in designing approximation algorithms", "text": "In this paper, we establish max-flow min-cut theorems for several important classes of multicommodity flow problems. In particular, we show that for any n-node multicommodity flow problem with uniform demands, the max-flow for the problem is within an O(log n) factor of the upper bound implied by the min-cut. The result (which is existentially optimal) establishes an important analogue of the famous 1-commodity max-flow min-cut theorem for problems with multiple commodities. The result also has substantial applications to the field of approximation algorithms. For example, we use the flow result to design the first polynomial-time (polylog n-times-optimal) approximation algorithms for well-known NP-hard optimization problems such as graph partitioning, min-cut linear arrangement, crossing number, VLSI layout, and minimum feedback arc set. Applications of the flow results to path routing problems, network reconfiguration, communication in distributed networks, scientific computing and rapidly mixing Markov chains are also described in the paper."}
{"_id": "ac6ce0babd322544ec28d2999ca45f25931dfe59", "title": "Requirements Traceability: a Systematic Review and Industry Case Study", "text": "Requirements traceability enables software engineers to trace a requirement from its emergence to its ful \u0304llment. In this paper we examine requirements traceability de \u0304nitions, challenges, tools and techniques, by the use of a systematic review performing an exhaustive search through the years 1997 2007. We present a number of common de \u0304nitions, challenges, available tools and techniques (presenting empirical evidence when found), while complementing the results and analysis with a static validation in industry through a series of interviews."}
{"_id": "4909210fc73cc4a432fbe1fd68b8445beb7f02f2", "title": "StudentLife: assessing mental health, academic performance and behavioral trends of college students using smartphones", "text": "Much of the stress and strain of student life remains hidden. The StudentLife continuous sensing app assesses the day-to-day and week-by-week impact of workload on stress, sleep, activity, mood, sociability, mental well-being and academic performance of a single class of 48 students across a 10 week term at Dartmouth College using Android phones. Results from the StudentLife study show a number of significant correlations between the automatic objective sensor data from smartphones and mental health and educational outcomes of the student body. We also identify a Dartmouth term lifecycle in the data that shows students start the term with high positive affect and conversation levels, low stress, and healthy sleep and daily activity patterns. As the term progresses and the workload increases, stress appreciably rises while positive affect, sleep, conversation and activity drops off. The StudentLife dataset is publicly available on the web."}
{"_id": "f0464db650f2357b181feb50df25f645619a1371", "title": "Finite Element Analysis Based Design of Mobile Robot for Removing Plug Oil Well", "text": "In order to develop the mobile robot for removing the plug oil well, the robot was designed based on the wheeltype and leg-type robot mechanism. A well functioning prototype has been manufactured. To demonstrate the validity and the benefit of the mobile robot, supporting mechanism and guiding rod were chosen to design based on the FEM. The mathematical model of the supporting mechanism is established and the mechanical property is analyzed using the FEM. The deformation and stress of some components of the supporting mechanism and the guiding rod is investigated. The results show that the supporting mechanism and the guiding rod have excellent performance with little displacement and small stress under working condition. The strength and rigidity of supporting mechanism and the guiding rod are good enough to ensure the reliability of the whole robot mechanism."}
{"_id": "2ab93b912b5dd28467e59e11fce7d409859dae47", "title": "Cognitive load and detection thresholds in car following situations: safety implications for using mobile (cellular) telephones while driving.", "text": "This study was aimed at investigating drivers' ability to detect a car ahead decelerating, while doing mobile phone related tasks. Nineteen participants aged between 20 and 29 years, (2000-125000 km driving experience) drove at 80 km/h, 50 m behind a lead car, on a 30 km section of motorway in normal traffic. During each trial the lead car started to decelerate at an average of 0.47 m/s2 while the participant either looked at the car in front (control), continuously dialed series of three random integers on a numeric keypad (divided visual attention), or performed a memory and addition task (non-visual attention). The results indicated that drivers' detection ability was impaired by about 0.5 s in terms of brake reaction time and almost 1 s in terms of time-to-collision, when they were doing the non-visual task whilst driving. This impairment was similar to when the drivers were dividing their visual attention between the road ahead and dialing numbers on the keypad. It was concluded that neither a hands-free option nor a voice controlled interface removes the safety problems associated with the use of mobile phones in a car."}
{"_id": "b3eaa5338a953c3564af0d46d6a8293393947d4e", "title": "Telerobotics, automation, and human supervisory control [Review]", "text": "In Telerobotics, Automation, and Human Supervisory Control, Thomas B. Sheridan shows that progress in robotics depends not only on change in technology, but also on advances in humans\u2019 relationships to machines. Sheridan, Professor of Engineering and Applied Psychology at M.I.T., has spent decades studying human-machine relations. His idea of \u201chuman supervisory control\u201d (just \u201csupervisory control\u201d for short) has the potential to bring robotics out of the laboratory and into the difficult and messy world. Traditionally, the public and researchers alike have focused on automation which seeks to build into a machine all necessary capabilities for achieving a particular task, including sensors, actuators, servos, and algorithms. In contrast to automatic control, supervisory control puts a human back \u201cin the loop,\u201d usually at the highest levels of abstraction, while maintaining lower-level functions within the machine. In a supervisory control system human and machine work together. The operator may intervene and operate manually in a particular situation, or she (Sheridan refers to the generic operator as female) might turn over control to an automatic controller for a time, acting as monitor of the task at hand. Typically the \u201csystem\u201d (including the human) will operate somewhere on the spectrum between manual and automatic, dividing the task in different degrees depending on the situation. When the operator and the machine are not collocated but are connected by some communication channel, the system is said to be telerobotic. Determining the proper balance of man and machine, manual and automatic control, or providing tools to do so, is the goal of Sheridan\u2019s book. An important modern example of a supervisory control system (although not a telerobotic one) is a modem aircraft. The pilot can command the flight surfaces directly and fly manually, or she can program an autopilot to fly a complete route, including nearly complete takeoff and landing. Typically, the pilot performs some"}
{"_id": "ab28c03cba8f7da5d83eeff0f7ceb0b7524763b8", "title": "Internet Traffic Classification by Aggregating Correlated Naive Bayes Predictions", "text": "This paper presents a novel traffic classification scheme to improve classification performance when few training data are available. In the proposed scheme, traffic flows are described using the discretized statistical features and flow correlation information is modeled by bag-of-flow (BoF). We solve the BoF-based traffic classification in a classifier combination framework and theoretically analyze the performance benefit. Furthermore, a new BoF-based traffic classification method is proposed to aggregate the naive Bayes (NB) predictions of the correlated flows. We also present an analysis on prediction error sensitivity of the aggregation strategies. Finally, a large number of experiments are carried out on two large-scale real-world traffic datasets to evaluate the proposed scheme. The experimental results show that the proposed scheme can achieve much better classification performance than existing state-of-the-art traffic classification methods."}
{"_id": "6936e74575df0ff0b9351c307c9f193c09d9c928", "title": "New relationships between breast microcalcifications and cancer", "text": "Background:Breast microcalcifications are key diagnostically significant radiological features for localisation of malignancy. This study explores the hypothesis that breast calcification composition is directly related to the local tissue pathological state.Methods:A total of 236 human breast calcifications from 110 patients were analysed by mid-Fouries transform infrared (FTIR) spectroscopy from three different pathology types (112 invasive carcinoma (IC), 64 in-situ carcinomas and 60 benign). The biochemical composition and the incorporation of carbonate into the hydroxyapatite lattice of the microcalcifications were studied by infrared microspectroscopy. This allowed the spectrally identified composition to be directly correlated with the histopathology grading of the surrounding tissue.Results:The carbonate content of breast microcalcifications was shown to significantly decrease when progressing from benign to malignant disease. In this study, we report significant correlations (P<0.001) between microcalcification chemical composition (carbonate content and protein matrix\u2009:\u2009mineral ratios) and distinct pathology grades (benign, in-situ carcinoma and ICs). Furthermore, a significant correlation (P<0.001) was observed between carbonate concentrations and carcinoma in-situ sub-grades. Using the two measures of pathology-specific calcification composition (carbonate content and protein matrix\u2009:\u2009mineral ratios) as the inputs to a two-metric discriminant model sensitivities of 79, 84 and 90% and specificities of 98, 82 and 96% were achieved for benign, ductal carcinoma in situ and invasive malignancies, respectively.Conclusions:We present the first demonstration of a direct link between the chemical nature of microcalcifications and the grade of the pathological breast disease. This suggests that microcalcifications have a significant association with cancer progression, and could be used for future objective analytical classification of breast pathology. A simple two-metric model has been demonstrated, more complex spectral analysis may yeild greater discrimination performance. Furthermore there appears to be a sequential progression of calcification composition."}
{"_id": "e8af75f5df4728157f70d2fa6f10bf9102e4d089", "title": "Link prediction in complex networks: a local na\u00efve Bayes model", "text": "Common-neighbor-based method is simple yet effective to predict missing links, which assume that two nodes are more likely to be connected if they have more common neighbors. In such method, each common neighbor of two nodes contributes equally to the connection likelihood. In this Letter, we argue that different common neighbors may play different roles and thus lead to different contributions, and propose a local n\u00e4\u0131ve Bayes model accordingly. Extensive experiments were carried out on eight real networks. Compared with the common-neighbor-based methods, the present method can provide more accurate predictions. Finally, we gave a detailed case study on the US air transportation network. Introduction. \u2013 The problem of link prediction aims at estimating the likelihood of the existence of a link between two nodes in a given network, based on the observed links [1]. Recently, the study of link prediction has attracted much attention from disparate scientific communities. In the theoretical aspect, accurate prediction indeed gives evidence to some underlying mechanisms that drives the network evolution [2]. Moveover, it is very possible to build a fair evaluation platform for network modeling under the framework of link prediction, which might be interested by network scientists [1, 3]. In the practical aspect, for biological networks such as protein-protein interaction networks and metabolic networks [4\u20136], the experiments of uncovering the new links or interactions are costly, and thus to predict in advance and focus on the links most likely to exist can sharply reduce the experimental costs [7]. In addition, some of the representative methods of link prediction have been successfully applied to address the classification problem in partially labeled networks [8, 9], as well as identified the spurious links resulted from the inaccurate information in the data [10]. Motivated by the theoretical interests and practical significance, many methods for link predication have been proposed. Therein some algorithms are based on Markov chains [11\u201313] and machine learning [14, 15]. Another group of algorithms are based on node similarity [1, 16]. (a)Corresponding author:linyuan.lue@unifr.ch Common Neighbors (CN) is one of the simplest similarity indices. Empirically, Kossinets and Watts [17] analyzed a large-scale social network, suggesting that two students having many mutual friends are very probable to be friend in the future. Also in online society like Facebook, the users tend to be friend if they have common friends and therefore form a community. Extensive analysis on disparate networks suggests that CN index is of good performance on predicting the missing links [16, 18]. L\u00fc et al. [19] suggested that in a network with large clustering coefficient, CN can provide competitively accurate predictions compared with the indices making use of global information. Very recently, Cui et al. [20] revealed that the nodes with more common neighbors are more likely to form new links in a growing network. The basic assumption of CN is that two nodes are more likely to be connected if they have more common neighbors. Simply counting the number of common neighbors indicates that each common neighbor gives equal contribution to the connection likelihood. However, sometimes different common neighbors may play different roles. 
For instance, the common close friends of two people who don\u2019t know each other may contribute more to their possibly future friendship than their common nodding acquaintances. In this Letter, we propose a probabilistic model based on the Bayesian theory, called Local N\u00e4\u0131ve Bayes (LNB) model, to predict the missing links in complex networks. Based on the LNB model,"}
{"_id": "7625126f603d03dd51d038d7c5cbc89f08d77948", "title": "A Rank-Switching, Open-Row DRAM Controller for Time-Predictable Systems", "text": "We introduce ROC, a Rank-switching, Open-row Controller for Double Data Rate Dynamic RAM (DDR DRAM). ROC is optimized for mixed-criticality multicore systems using modern DDR devices: compared to existing real-time memory controllers, it provides significantly lower worst case latency bounds for hard real-time tasks and supports throughput-oriented optimizations for soft real-time applications. The key to improved performance is an innovative rank-switching mechanism which hides the latency of write-read transitions in DRAM devices without requiring unpredictable request reordering. We further employ open row policy to take advantage of the data caching mechanism (row buffering) in each device. ROC provides complete timing isolation between hard and soft tasks and allows for compositional timing analysis over the number of cores and memory ranks in the system. We implemented and synthesized the ROC back end in Verilog RTL, and evaluated its performance on both synthetic tasks and a set of representative benchmarks."}
{"_id": "098463dcd780b59d966cc76336b9beb1c85f6499", "title": "Interprofessional education: effects on professional practice and health care outcomes.", "text": "BACKGROUND\nAs patient care becomes more complex, effective collaboration between health and social care professionals is required. However, evidence suggests that these professionals do not collaborate well together. Interprofessional education (IPE) offers a possible way forward in this area.\n\n\nOBJECTIVES\nTo assess the usefulness of IPE interventions compared to education in which the same professions were learning separately from one another.\n\n\nSEARCH STRATEGY\nWe searched the Cochrane Effective Practice and Organisation of Care Group specialised register, MEDLINE (1968 to 1998) and Cinahl (1982 to 1998). We also hand searched the Journal of Interprofessional Care (1992 to 1998), the Centre for the Advancement of Interprofessional Education Bulletin (1987 to 1998), conference proceedings, the 'grey literature' held by relevant organisations, and reference lists of articles.\n\n\nSELECTION CRITERIA\nRandomised trials, controlled before and after studies and interrupted time series studies of IPE interventions designed to improve collaborative practice between health/social care practitioners and/or the health/well being of patients/clients. The participants included chiropodists/podiatrists, complementary therapists, dentists, dietitians, doctors/physicians, hygienists, psychologists, psychotherapists, midwives, nurses, pharmacists, physiotherapists, occupational therapists, radiographers, speech therapists and/or social workers. The outcomes included objectively measured or self reported (validated instrument) patient/client outcomes and reliable (objective or validated subjective) health care process measures.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo reviewers independently assessed the eligibility of potentially relevant studies.\n\n\nMAIN RESULTS\nThe total yield from the search strategy was 1042, of which 89 were retained for further consideration. However none of these studies met the inclusion criteria.\n\n\nREVIEWER'S CONCLUSIONS\nDespite finding a large body of literature on the evaluation of IPE, these studies lacked the methodological rigour needed to begin to convincingly understand the impact of IPE on professional practice and/or health care outcomes."}
{"_id": "e7e070a929d7911ef2f8c6f6d4e3c5d62c5a9890", "title": "Confidence-Based Data Association and Discriminative Deep Appearance Learning for Robust Online Multi-Object Tracking", "text": "Online multi-object tracking aims at estimating the tracks of multiple objects instantly with each incoming frame and the information provided up to the moment. It still remains a difficult problem in complex scenes, because of the large ambiguity in associating multiple objects in consecutive frames and the low discriminability between objects appearances. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first define the tracklet confidence using the detectability and continuity of a tracklet, and decompose a multi-object tracking problem into small subproblems based on the tracklet confidence. We then solve the online multi-object tracking problem by associating tracklets and detections in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive association steps. For more reliable association between tracklets and detections, we also propose a deep appearance learning method to learn a discriminative appearance model from large training datasets, since the conventional appearance learning methods do not provide rich representation that can distinguish multiple objects with large appearance variations. In addition, we combine online transfer learning for improving appearance discriminability by adapting the pre-trained deep model during online tracking. Experiments with challenging public datasets show distinct performance improvement over other state-of-the-arts batch and online tracking methods, and prove the effect and usefulness of the proposed methods for online multi-object tracking."}
{"_id": "ac960d2baa95944a83b28410e5e299ebca8ddc37", "title": "The Mini Nutritional Assessment (MNA) review of the literature--What does it tell us?", "text": "UNLABELLED\nTo review the literature on the MNA to Spring 2006, we searched MEDLINE, Web of Science and Scopus, and did a manual search in J Nutr Health Aging, Clin Nutr, Eur J Clin Nutr and free online available publications.\n\n\nVALIDATION AND VALIDITY\nThe MNA was validated against two principal criteria, clinical status and comprehensive nutrition assessment using principal component and discriminant analysis. The MNA shortform (MNA-SF) was developed and validated to allow a 2-step screening process. The MNA and MNA-SF are sensitive, specific, and accurate in identifying nutrition risk.\n\n\nNUTRITIONAL SCREENING\nThe prevalence of malnutrition in community-dwelling elderly (21 studies, n = 14149 elderly) is 2 +/- 0.1% (mean +/- SE, range 0- 8%) and risk of malnutrition is 24 +/- 0.4% (range 8-76%). A similar pattern is seen in out-patient and home care elderly (25 studies, n = 3119 elderly) with prevalence of undernutrition 9 +/- 0.5% (mean +/- SE, range 0-30%) and risk of malnutrition 45 +/- 0.9% (range 8-65%). A high prevalence of undernutrition has been reported in hospitalized and institutionalized elderly patients: prevalence of malnutrition is 23 +/- 0.5% (mean +/- SE, range 1- 74%) in hospitals (35 studies, n = 8596) and 21 +/- 0.5% (mean +/- SE, range 5-71%) in institutions (32 studies, n = 6821 elderly). An even higher prevalence of risk of malnutrition was observed in the same populations, with 46 +/- 0.5% (range 8-63%) and 51 +/- 0.6% (range 27-70%), respectively. In cognitively impaired elderly subjects (10 studies, n = 2051 elderly subjects), detection using the MNA, prevalence of malnutrition was 15 +/- 0.8% (mean +/- SE, range 0-62%), and 44 +/- 1.1% (range 19-87%) of risk of malnutrition.\n\n\nCHARACTERISTICS\nThe large variability is due to differences in level of dependence and health status among the elderly. In hospital settings, a low MNA score is associated with an increase in mortality, prolonged length of stay and greater likelihood of discharge to nursing homes. Malnutrition is associated with functional and cognitive impairment and difficulties eating. The MNA(R) detects risk of malnutrition before severe change in weight or serum proteins occurs.\n\n\nNUTRITIONAL INTERVENTION\nIntervention studies demonstrate that timely intervention can stop weight loss in elderly at risk of malnutrition or undernourished and is associated with improvements in MNA scores. The MNA can also be used as a follow up assessment tool.\n\n\nCONCLUSION\nThe MNA is a screening and assessment tool with a reliable scale and clearly defined thresholds, usable by health care professionals. It should be included in the geriatric assessment and is proposed in the minimum data set for nutritional interventions."}
{"_id": "cdcab26ff762b2cfb2fc4bea7dd94699c8e75109", "title": "Examining students' online interaction in a live video streaming environment using data mining and text mining", "text": "0747-5632/$ see front matter 2012 Elsevier Ltd. A http://dx.doi.org/10.1016/j.chb.2012.07.020 \u21d1 Tel.: +1 757 683 5008; fax: +1 757 683 5639. E-mail address: whe@odu.edu This study analyses the online questions and chat messages automatically recorded by a live video streaming (LVS) system using data mining and text mining techniques. We apply data mining and text mining techniques to analyze two different datasets and then conducted an in-depth correlation analysis for two educational courses with the most online questions and chat messages respectively. The study found the discrepancies as well as similarities in the students\u2019 patterns and themes of participation between online questions (student\u2013instructor interaction) and online chat messages (student\u2013students interaction or peer interaction). The results also identify disciplinary differences in students\u2019 online participation. A correlation is found between the number of online questions students asked and students\u2019 final grades. The data suggests that a combination of using data mining and text mining techniques for a large amount of online learning data can yield considerable insights and reveal valuable patterns in students\u2019 learning behaviors. Limitations with data and text mining were also revealed and discussed in the paper. 2012 Elsevier Ltd. All rights reserved."}
{"_id": "60542b1a857024c79db8b5b03db6e79f74ec8f9f", "title": "Learning to Detect Human-Object Interactions", "text": "We study the problem of detecting human-object interactions (HOI) in static images, defined as predicting a human and an object bounding box with an interaction class label that connects them. HOI detection is a fundamental problem in computer vision as it provides semantic information about the interactions among the detected objects. We introduce HICO-DET, a new large benchmark for HOI detection, by augmenting the current HICO classification benchmark with instance annotations. To solve the task, we propose Human-Object Region-based Convolutional Neural Networks (HO-RCNN). At the core of our HO-RCNN is the Interaction Pattern, a novel DNN input that characterizes the spatial relations between two bounding boxes. Experiments on HICO-DET demonstrate that our HO-RCNN, by exploiting human-object spatial relations through Interaction Patterns, significantly improves the performance of HOI detection over baseline approaches."}
{"_id": "b0222abf3c67c2726001f1f7b849f0547ddfcb13", "title": "Biohashing: two factor authentication featuring fingerprint data and tokenised random number", "text": "Human authentication is the security task whose job is to limit access to physical locations or computer network only to those with authorisation. This is done by equipped authorised users with passwords, tokens or using their biometrics. Unfortunately, the first two suffer a lack of security as they are easy being forgotten and stolen; even biometrics also suffers from some inherent limitation and specific security threats. A more practical approach is to combine two or more factor authenticator to reap benefits in security or convenient or both. This paper proposed a novel two factor authenticator based on iterated inner products between tokenised pseudo-random number and the user specific fingerprint feature, which generated from the integrated wavelet and Fourier\u2013Mellin transform, and hence produce a set of user specific compact code that coined as BioHashing. BioHashing highly tolerant of data capture offsets, with same user fingerprint data resulting in highly correlated bitstrings. Moreover, there is no deterministic way to get the user specific code without having both token with random data and user fingerprint feature. This would protect us for instance against biometric fabrication by changing the user specific credential, is as simple as changing the token containing the random data. The BioHashing has significant functional advantages over solely biometrics i.e. zero equal error rate point and clean separation of the genuine and imposter populations, thereby allowing elimination of false accept rates without suffering from increased occurrence of false reject rates. 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved."}
{"_id": "590284d4b1d490344572c8ad337e0d4dbedfaad5", "title": "A weighted additive fuzzy programming approach for multi-criteria supplier selection", "text": "In supply chain management, to build strategic and strong relationships, firms should select best suppliers by applying appropriate method and selection criteria. In this paper, to handle ambiguity and fuzziness in supplier selection problem effectively, a new weighted additive fuzzy programming approach is developed. Firstly, linguistic values expressed as trapezoidal fuzzy numbers are used to assess the weights of the factors. By applying the distances of each factor between Fuzzy Positive Ideal Rating and Fuzzy Negative Ideal Rating, weights are obtained. Then applying suppliers\u2019 constraints, goals and weights of the factors, a fuzzy multi-objective linear model is developed to overcome the selection problem and assign optimum order quantities to each supplier. The proposed model is explained by a numerical example. 2010 Elsevier Ltd. All rights reserved."}
{"_id": "41487776364a6dddee91f1e2d2b78f46d0b93499", "title": "Artificial Neural Networks architectures for stock price prediction: comparisons and applications", "text": "We present an Artificial Neural Network (ANN) approach to predict stock market indices, particularly with respect to the forecast of their trend movements up or down. Exploiting different Neural Networks architectures, we provide numerical analysis of concrete financial time series. In particular, after a brief r\u00e9sum\u00e9 of the existing literature on the subject, we consider the Multi-layer Perceptron (MLP), the Convolutional Neural Networks (CNN), and the Long Short-Term Memory (LSTM) recurrent neural networks techniques. We focus on the importance of choosing the correct input features, along with their preprocessing, for the specific learning algorithm one wants to use. Eventually, we consider the S&P500 historical time series, predicting trend on the basis of data from the past days, and proposing a novel approach based on combination of wavelets and CNN, which outperforms the basic neural networks ones. We show, that neural networks are able to predict financial time series movements even trained only on plain time series data and propose more ways to improve results. Key\u2013Words: Artificial neural networks, Multi-layer neural network, Convolutional neural network, Long shortterm memory, Recurrent neural network, Deep Learning, Stock markets analysis, Time series analysis, financial forecasting"}
{"_id": "18744f1ff827685c6d8d2919aef8b3bd32291fcd", "title": "MIR_EVAL: A Transparent Implementation of Common MIR Metrics", "text": "Central to the field of MIR research is the evaluation of algorithms used to extract information from music data. We present mir_eval, an open source software library which provides a transparent and easy-to-use implementation of the most common metrics used to measure the performance of MIR algorithms. In this paper, we enumerate the metrics implemented by mir_eval and quantitatively compare each to existing implementations. When the scores reported by mir_eval differ substantially from the reference, we detail the differences in implementation. We also provide a brief overview of mir_eval\u2019s architecture, design, and intended use. 1. EVALUATING MIR ALGORITHMS Much of the research in Music Information Retrieval (MIR) involves the development of systems that process raw music data to produce semantic information. The goal of these systems is frequently defined as attempting to duplicate the performance of a human listener given the same task [5]. A natural way to determine a system\u2019s effectiveness might be for a human to study the output produced by the system and judge its correctness. However, this would yield only subjective ratings, and would also be extremely timeconsuming when evaluating a system\u2019s output over a large corpus of music. Instead, objective metrics are developed to provide a well-defined way of computing a score which indicates each system\u2019s output\u2019s correctness. These metrics typically involve a heuristically-motivated comparison of the system\u2019s output to a reference which is known to be correct. Over time, certain metrics have become standard for each \u2217Please direct correspondence to craffel@gmail.com c \u00a9 Colin Raffel, Brian McFee, Eric J. Humphrey, Justin Salamon, Oriol Nieto, Dawen Liang, Daniel P. W. Ellis. Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Attribution: Colin Raffel, Brian McFee, Eric J. Humphrey, Justin Salamon, Oriol Nieto, Dawen Liang, Daniel P. W. Ellis."}
{"_id": "25d8f151f05f39bce7acf04257c7a7c060d9972f", "title": "Fuzzy Cognitive Map software", "text": "Fuzzy Cognitive Map (FCM) is a soft computing modelling methodology for complex systems. Beyond the mathematical formulation of the FCM theory, there was a need of developing a software tool to facilitate the implementation of FCMs. This paper describes the use of a software tool that was developed to construct FCM models. Some theoretical elements of Fuzzy Cognitive Maps are presented. Then, it is discussed how the software tool is used to develop a model and simulate the use of Fuzzy Cognitive Maps. The user interface of the software tool is described and how the FCM-Analyst is used to facilitate the implementation and simulation of Fuzzy Cognitive Maps."}
{"_id": "2e6545996df2c292a1eb7ef891b3ebc7f762e5cc", "title": "In the mood: engaging teenagers in psychotherapy using mobile phones", "text": "Mental illness is a significant and growing problem throughout the world. Many mental health problems have their root in childhood, and early intervention is recommended. Engaging young people in psychotherapeutic activities is challenging, and treatment adherence is often poor. This paper presents a series of studies carried out as part of the development of a mobile and online symptom tracking tool for adolescents with mental health problems. Teenagers use the system to record symptoms on their mobile phones and can view this information in a clinical setting with a therapist. We focus on a clinical pilot of the system with ten users in public mental health clinics. As well as indicating that the mobile diary tool can increase client adherence to therapeutic activities, the study yields insight into the factors influencing the success of the design and informs the design of other systems to be used as adjuncts to therapy."}
{"_id": "e7b06002f079895b19b2c756227ed7a4696c6362", "title": "Perspectives on Live Streaming: Apps, Users, and Research", "text": "The emergence of mobile live streaming as a popular app over the past couple years provides a new platform for socio-technical interaction in the world. This panel brings together several different perspectives on live streaming: companies who are producing commercial apps, researchers studying how people are using live streaming, and users. We hope the discussion identifies design and research implications for the future."}
{"_id": "8121eac2d4a3ca372dff6b98f8f9f0c75e8a9f84", "title": "Lightweight Management of Resource-Constrained Sensor Devices in Internet of Things", "text": "It is predicted that billions of intelligent devices and networks, such as wireless sensor networks (WSNs), will not be isolated but connected and integrated with computer networks in future Internet of Things (IoT). In order to well maintain those sensor devices, it is often necessary to evolve devices to function correctly by allowing device management (DM) entities to remotely monitor and control devices without consuming significant resources. In this paper, we propose a lightweight RESTful Web service (WS) approach to enable device management of wireless sensor devices. Specifically, motivated by the recent development of IPv6-based open standards for accessing wireless resource-constrained networks, we consider to implement IPv6 over low-power wireless personal area network (6LoWPAN)/routing protocol for low power and lossy network (RPL)/constrained application protocol (CoAP) protocols on sensor devices and propose a CoAP-based DM solution to allow easy access and management of IPv6 sensor devices. By developing a prototype cloud system, we successfully demonstrate the proposed solution in efficient and effective management of wireless sensor devices."}
{"_id": "12d29accd24976eb4f27bcf26b5d5093dc2b24a7", "title": "2011 terminology of the vulva of the International Federation for Cervical Pathology and Colposcopy.", "text": "OBJECTIVE\nThis study aimed to present the clinical and colposcopic terminology of the vulva (including the anus) of the International Federation of Cervical Pathology and Colposcopy.\n\n\nMATERIALS AND METHODS\nThe terminology has been developed by the International Federation of Cervical Pathology and Colposcopy Nomenclature Committee during 2009-2011.\n\n\nRESULTS\nThe terminology is part of a comprehensive terminology of the lower genital tract, allowing for standardization of nomenclature by colposcopists, clinicians, and researchers taking care of women with lesions in these areas. The terminology includes basic definitions and normal findings that are important for the clinician lacking experience with management of vulvar disease. This terminology introduces definitions for abnormal findings recently accepted by the International Society for the Study of Vulvovaginal Disease and includes patterns to identify malignancy.\n\n\nCONCLUSIONS\nThe terminology differs from past terminologies in that it includes colposcopic patterns and anal colposcopy. Nevertheless, the role of the colposcope in the management of vulvar disease is limited."}
{"_id": "5ab321e0ea7893dda145331bfb95e102c0b61a5d", "title": "Single-Fed Broadband Circularly Polarized Stacked Patch Antenna With Horizontally Meandered Strip for Universal UHF RFID Applications", "text": "In this paper, a horizontally meandered strip (HMS) feed technique is proposed to achieve good impedance matching and symmetrical broadside radiation patterns for a single-fed broadband circularly polarized stacked patch antenna, which is suitable for universal ultrahigh frequency (UHF) RF identification (RFID) applications. The antenna is composed of two corner truncated patches and an HMS, all of which are printed on the upper side of the FR4 substrates. One end of the HMS is connected to the main patch by a probe, while the other end is connected to an SMA connector. Simulation results are compared with the measurements, and a good agreement is obtained. The measurements show that the antenna has an impedance bandwidth (VSWR <; 1.5) of about 25.8% (758-983 MHz), a 3-dB axial ratio (AR) bandwidth of about 13.5% (838-959 MHz), and a gain level of about 8.6 dBic or larger within the 3-dB AR bandwidth. Therefore, the proposed antenna can be a good candidate for universal UHF RFID readers operating at the UHF band of 840-955 MHz. In addition, a parametric study and a design guideline of the proposed antenna are presented to provide the engineers with information for designing, modifying, and optimizing such an antenna. At last, the proposed antenna is validated in RFID system applications."}
{"_id": "65077651b36a63d3ca4184137df348cc8b29776a", "title": "Asymmetric-Circular Shaped Slotted Microstrip Antennas for Circular Polarization and RFID Applications", "text": "Novel asymmetric-circular shaped slotted microstrip patch antennas with slits are proposed for circularly polarized (CP) radiation and radio frequency identification (RFID) reader applications. A single-feed configuration based asymmetric-circular shaped slotted square microstrip patches are adopted to realize the compact circularly polarized microstrip antennas. The asymmetric-circular shaped slot(s) along the diagonal directions are embedded symmetrically onto a square microstrip patch for CP radiation and small antenna size. The CP radiation can be achieved by slightly asymmetric (unbalanced) patch along the diagonal directions by slot areas. Four symmetric-slits are also embedded symmetrically along the orthogonal directions of the asymmetric-circular shaped slotted patch to further reduce antenna size. The operating frequency of the antenna can be tuned by varying the slit length while keeping the CP radiation unchanged. The measured 3-dB axial-ratio (AR) bandwidth of around 6.0 MHz with 17.0 MHz impedance bandwidth is achieved for the antenna on a RO4003C substrate. The overall antenna size is 0.27\u03bbo \u00d7 0.27\u03bbo \u00d7 0.0137\u03bbo at 900 MHz."}
{"_id": "6f3ffb1a7b6cb168caeb81a23b68bbf99fdab052", "title": "A circularly polarized short backfire antenna excited by an unbalance-fed cross aperture", "text": "An unbalance-fed cross aperture is developed to excite a short backfire antenna (SBA) for circular polarization. The cross aperture consists of two orthogonal H-shaped slots with a pair of capacitive stubs and is fed by a single probe that forms an unbalanced feed with a shorting pin. It is demonstrated that the cross-aperture-excited SBA can achieve an axial ratio (les 3 dB) bandwidth of 4.2% with a voltage standing wave ratio (VSWR) bandwidth of 6.5% (VSWR<1.2) and a gain of 14 dBi. The antenna structure is described and the simulation and experimental results are presented. The mechanisms for impedance matching and circular-polarization production are analyzed"}
{"_id": "838b107445e72d903f2217946c73a5d3d1e4344e", "title": "A Dual-Band Circularly Polarized Aperture-Coupled Stacked Microstrip Antenna for Global Positioning Satellite", "text": "This paper describes the design and testing of an aperture-coupled circularly polarized antenna for global positioning satellite (GPS) applications. The antenna operates at both the L1 and L2 frequencies of 1575 and 1227 MHz, which is required for differential GPS systems in order to provide maximum positioning accuracy. Electrical performance, lowprofile, and cost were equally important requirements for this antenna. The design procedure is discussed, and measured results are presented. Results from a manufacturing sensitivity analysis are also included."}
{"_id": "9639aa5fadb89ea5e8362dad52082745012c90aa", "title": "Wideband Circularly Polarized Patch Antenna Using Broadband Baluns", "text": "A novel 90deg broadband balun comprising a broadband 90deg Schiffman phase shifter is introduced as a means of enhancing the wideband circular polarization performance of dual-fed type microstrip antennas. The proposed 90deg broadband balun delivers good impedance matching, balanced power splitting and consistent 90deg (plusmn5deg) phase shifting, across a wide bandwidth (~57.5%). A circular patch antenna utilizing the proposed 90deg broadband balun is shown to attain measured impedance(S11< -10 dB) and axial ratio (AR < 3 dB) bandwidths of 60.24% and 37.7%, respectively, for the dual L-probe case; and 71.28% and 81.6% respectively, for the quadruple L-probe case."}
{"_id": "84f472b6d5516d95f592137056fe83dc2e914a70", "title": "vFPGAmanager: A Virtualization Framework for Orchestrated FPGA Accelerator Sharing in 5G Cloud Environments", "text": "Network operators are actively pushing towards the new 5G era and a crucial part to accomplish this is the Network Functions Virtualization (NFV). FPGAs and their hardware accelerators are a promising solution for NFV and 5G cloud environments because of their fast turnaround time and great speedup potential through application parallelism mapping on the reconfigurable fabric. Recently, consolidation reached a plateau in this field with lightweight virtualization techniques, that require a high overcommitment of FPGA accelerator resources to cope with numerous demands of guests. Although FPGAs can play an important role for the future 5G networks their capability to manage and control them from the upper layers of the software stack is inadequate. The lack of such support coupled with cloud integration and programmability issues can repel potential providers from utilizing FPGAs at their data centers. This paper presents the communication mechanism of the vFPGAmanager, an FPGA virtualization framework which can be orchestrated, monitored and enables accelerators overcommitment with direct guest access. These are key features to allow potential adopters of FPGA technology to include them in the next generation of NFV systems. The communication mechanism architecture is detailed and then benchmarked to show that even under heavy load on the system it demonstrates a minimal overhead to orchestrate and monitor the FPGA as a resource."}
{"_id": "6ce56097799b8c87dcb81e594a0e0dc69863365c", "title": "Refining Duplicate Detection for Improved Data Quality", "text": "Detecting duplicates is a pervasive data quality challenge that hinders organizations from extracting value from their data sooner. The increased complexity and heterogeneity of modern datasets has lead to the presence of varying record formats, missing values, and evolving data semantics. As data is integrated, duplicates inevitably occur in the integrated instance. One of the challenges in deduplication is determining whether two values are sufficiently close to be considered equal. Existing similarity functions often rely on counting the number of required edits to transform one value to the other. This is insufficient in attribute domains, such as time, where small syntactic differences do not always translate to \u2019closeness\u2019. In this paper, we propose a duplication detection framework, which adapts metric functional dependencies (MFDs) to improve the detection accuracy by relaxing the matching condition on numeric values to allow a permitted tolerance. We evaluate our techniques against two existing approaches using three real data collections, and show that we achieve an average 25% and 34% improvement in precision and recall, respectively, over non-MFD versions."}
{"_id": "067f491726b2fa0b583a36823dd5bc78e3d222a0", "title": "Automatic music transcription: challenges and future directions", "text": "Automatic music transcription is considered by many to be a key enabling technology in music signal processing. However, the performance of transcription systems is still significantly below that of a human expert, and accuracies reported in recent years seem to have reached a limit, although the field is still very active. In this paper we analyse limitations of current methods and identify promising directions for future research. Current transcription methods use general purpose models which are unable to capture the rich diversity found in music signals. One way to overcome the limited performance of transcription systems is to tailor algorithms to specific use-cases. Semi-automatic approaches are another way of achieving a more reliable transcription. Also, the wealth of musical scores and corresponding audio data now available are a rich potential source of training data, via forced alignment of audio to scores, but large scale utilisation of such data has yet to be attempted. Other promising approaches include the integration of information from multiple algorithms and different musical aspects."}
{"_id": "456ffd05807fab1ebb2b53a72478da280d38af52", "title": "Improving Interaction in HMD-Based Vehicle Simulators through Real Time Object Reconstruction", "text": "Bringing real objects into the virtual world has been shown to increase usability and presence in virtual reality applications. This paper presents a system to generate a real time virtual reconstruction of real world user interface elements for use in a head mounted display based driving simulator. Our system uses sensor fusion algorithms to combine data from depth and color cameras to generate an accurate, detailed, and fast rendering of the user's hands while using the simulator. We tested our system and show in our results that the inclusion of the participants real hands, the wheel, and the shifter in the virtual environment increases the immersion, presence, and usability of the simulation. Our system can also be used to bring other real objects into the virtual world, especially when accuracy, detail, and real time updates are desired."}
{"_id": "a6a0384d7bf8ddad303034fe691f324734409568", "title": "Business process management - lessons from European business", "text": "This paper reports on the findings of a survey and case study research into the understanding and application of business process management (BPM) in European companies. The process perspective is increasingly being seen as a mechanism for achieving competitive advantage through performance improvement and in response to market pressures, customer expectations for better and more reliable service and increasing competition. We reveal the level of importance European companies attach to BPM, what it means to them and what they have done in practice. The paper draws on a postal survey conducted with quality directors and business process managers in organisations which are members of the European Foundation for Quality Management (EFQM) and case studies in a number of organisations regarded as being leaders in the adoption of BPM. The study has helped to highlight some interesting approaches and reveal features which are important for BPM to be successful. Introduction One of the difficulties with business process management (BPM) is that of terminology. The term process can be found in many disciplines which contribute to our understanding of organisations in the management literature. An operational view is seen in quality improvement (Deming, 1986), total quality management (Oakland, 1989) and the concept of just-in-time (Harrison, 1992). Systems thinking (Jenkins, 1971; Checkland, 1981), cybernetics (Beer, 1966) and systems dynamics (Senge, 1990) give a richer meaning to the term. Organisational theorists have also talked in terms of social and organisational processes (Burrell and Morgan, 1979; Monge, 1990). A useful review of these antecedents has been provided by Peppard and Preece (1995). The domain in which the current study is centred develops out of recent approaches which seek to improve organisational effectiveness through the attachment of managerial thinking to Total Quality or Business Excellence models. These have been essentially practitioner driven and not grounded in academic theory. Examples include the European Foundation for Quality Management model (EFQM) (Hakes, 1995) and the Malcom Baldrige National Quality Award model (MBNQA) (George, 1992). While these models espouse multi-factorial and multi-constituency models of organisational effectiveness they remain essentially goal-based (Cameron, 1986). They have also developed from a strong operational framework and often have absorbed the approaches of We would like to thank Bob Dart, Simon Machin and Tony Grant from Royal Mail. We also gratefully appreciate the assistance of the EFQM, Rank Xerox, British Telecom, TNT and Nortel during the various stages of our research. D ow nl oa de d by S E L C U K U N IV E R SI T Y A t 0 2: 52 0 8 Fe br ua ry 2 01 5 (P T ) Lessons from European business 11 business process re-engineering (Hammer, 1990). They have also been influenced by strategic thinking in the strong process orientation of the value chain analysis (Porter, 1985) and they accommodate the resources based view of the firm (Grant, 1991). Use of the models may lead to questioning the design of an organisation at a strategic level by reassessing the value of functions in favour of processes which transcend functionality (Ghoshal and Bartlett, 1995; Galbraith, 1995). However, neither the EFQM nor the MBNQA provide direct guidance on how to deploy BPM. 
Approaches often incorporate attempts to identify business processes and to classify them as being operational, supporting or direction setting. This activity is often facilitated by consultants using a range of methods but commonly including aspects of process mapping at least at the top level of the organisation. There is evidence that adopting the process paradigm is favoured at least by senior managers (Garvin, 1995), although it is by no means clear that this is a widely held opinion throughout organisations. While it is possible to point to aspects of good practice in BPM, at least at an operational level (Armistead, 1996), we do not know how organisations apply the notion in practice and what they have found to be the key components of a BPM approach. Method The aim of the research has been to develop a better understanding of BPM and how it can be applied as a route to achieving organisational effectiveness. We have been especially interested to find out how companies have used a business process perspective as a way of managing their whole organisation, rather than just the application of process improvement techniques. In particular we have sought to explore the following questions: . How important is BPM for European managers? . Is there a common understanding of BPM among European organisations? . How are European organisations implementing BPM in practice? In addressing these questions we hope to shed light on how organisations conceptualise BPM and draw on their experiences in order to enlighten others both in terms of strategy formulation and deployment. This paper discusses the findings of the research and proposes lessons which can be learnt. During our research we have built up a rich databank of case study material. The case studies have been compiled using an open-ended interview format where senior executives (usually the quality director or business process manager) have been invited to elaborate on their organisation's approach to BPM. The interviews were recorded and transcribed and a cognitive map was developed to identify concepts. In some cases data from the interviews were supplemented by material used for internal self assessments against the EFQM model. Organisations were typically chosen because they were known to have adopted BPM approaches. This paper refers specifically to case studies with Rank Xerox, Nortel, British Telecom and TNT, all of whom have been winners in some form (either directly of through subsidiaries) of European Quality Awards. D ow nl oa de d by S E L C U K U N IV E R SI T Y A t 0 2: 52 0 8 Fe br ua ry 2 01 5 (P T )"}
{"_id": "56a5557a9bcaebfbfcd2c114131790c1b835ff98", "title": "Reconfigurable antennas in cognitive radio that can think for themselves?", "text": "This paper discusses the use of reconfigurable antennas in cognitive radio. Most of the emphasis on cognitive radio so far has been in the area of spectral estimation and signal classification. In this paper we show that once a cognitive device manages to learn the RF environment (cognition part), from past observations and decisions using machine learning techniques, we can use the collected data to train reconfigurable antennas to adapt to any change in the RF environment."}
{"_id": "241a9f34187fe3d351f939e7950d107ea02ca061", "title": "Self-regulation strategies improve self-discipline in adolescents : benefits of mental contrasting and implementation intentions", "text": "Self-regulation strategies improve self-discipline in adolescents: benefits of mental contrasting and implementation intentions Angela Lee Duckwortha; Heidi Grantb; Benjamin Loewa; Gabriele Oettingenac; Peter M. Gollwitzerad a Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA b Department of Psychology, Lehigh University, Bethlehem, PA, USA c Department of Psychology, University of Hamburg, Hamburg, Germany d Department of Psychology, University of Konstanz, Konstanz, Germany"}
{"_id": "59a35b63cf845ebf0ba31c290423e24eb822d245", "title": "The FaceSketchID System: Matching Facial Composites to Mugshots", "text": "Facial composites are widely used by law enforcement agencies to assist in the identification and apprehension of suspects involved in criminal activities. These composites, generated from witness descriptions, are posted in public places and media with the hope that some viewers will provide tips about the identity of the suspect. This method of identifying suspects is slow, tedious, and may not lead to the timely apprehension of a suspect. Hence, there is a need for a method that can automatically and efficiently match facial composites to large police mugshot databases. Because of this requirement, facial composite recognition is an important topic for biometrics researchers. While substantial progress has been made in nonforensic facial composite (or viewed composite) recognition over the past decade, very little work has been done using operational composites relevant to law enforcement agencies. Furthermore, no facial composite to mugshot matching systems have been documented that are readily deployable as standalone software. Thus, the contributions of this paper include: 1) an exploration of composite recognition use cases involving multiple forms of facial composites; 2) the FaceSketchID System, a scalable, and operationally deployable software system that achieves state-of-the-art matching accuracy on facial composites using two algorithms (holistic and component based); and 3) a study of the effects of training data on algorithm performance. We present experimental results using a large mugshot gallery that is representative of a law enforcement agency's mugshot database. All results are compared against three state-of-the-art commercial-off-the-shelf face recognition systems."}
{"_id": "da3bd6c69b2e1a511a6e4066ddb8d89909f16f11", "title": "Office Procedure as Practical Action: Models of Work and System Design", "text": "The design of office technology relies upon underlying conceptions of human organization and action. The goal of building office information systems requires a representation of office work and its relevant objects. The concern of this paper is that although system designers recognize the centrality of procedural tasks in the office, they tend to ignore the actual work involved in accomplishing those tasks. A perspicuous instance of work in an accounting office is used to recommend a new line of research into the practical problems of office work, and to suggest preliminary implications of that research for office systems design."}
{"_id": "591804251e15cfb571bc90c2fab2344f462e1617", "title": "Multilabel classification via calibrated label ranking", "text": "Label ranking studies the problem of learning a mapping from instances to rankings over a predefined set of labels. Hitherto existing approaches to label ranking implicitly operate on an underlying (utility) scale which is not calibrated in the sense that it lacks a natural zero point. We propose a suitable extension of label ranking that incorporates the calibrated scenario and substantially extends the expressive power of these approaches. In particular, our extension suggests a conceptually novel technique for extending the common learning by pairwise comparison approach to the multilabel scenario, a setting previously not being amenable to the pairwise decomposition technique. The key idea of the approach is to introduce an artificial calibration label that, in each example, separates the relevant from the irrelevant labels. We show that this technique can be viewed as a combination of pairwise preference learning and the conventional relevance classification technique, where a separate classifier is trained to predict whether a label is relevant or not. Empirical results in the area of text categorization, image classification and gene analysis underscore the merits of the calibrated model in comparison to state-of-the-art multilabel learning methods."}
{"_id": "9b5d79a92f510fd1e32b4e0e714b44ff9fb50ee3", "title": "Graph-based term weighting for information retrieval", "text": "A standard approach to Information Retrieval (IR) is to model text as a bag of words. Alternatively, text can be modelled as a graph, whose vertices represent words, and whose edges represent relations between the words, defined on the basis of any meaningful statistical or linguistic relation. Given such a text graph, graph theoretic computations can be applied to measure various properties of the graph, and hence of the text. This work explores the usefulness of such graph-based text representations for IR. Specifically, we propose a principled graph-theoretic approach of (1) computing term weights and (2) integrating discourse aspects into retrieval. Given a text graph, whose vertices denote terms linked by co-occurrence and grammatical modification, we use graph ranking computations (e.g. PageRank Page et\u00a0al. in The pagerank citation ranking: Bringing order to the Web. Technical report, Stanford Digital Library Technologies Project, 1998) to derive weights for each vertex, i.e. term weights, which we use to rank documents against queries. We reason that our graph-based term weights do not necessarily need to be normalised by document length (unlike existing term weights) because they are already scaled by their graph-ranking computation. This is a departure from existing IR ranking functions, and we experimentally show that it performs comparably to a tuned ranking baseline, such as BM25 (Robertson et\u00a0al. in NIST Special Publication 500-236: TREC-4, 1995). In addition, we integrate into ranking graph properties, such as the average path length, or clustering coefficient, which represent different aspects of the topology of the graph, and by extension of the document represented as a graph. Integrating such properties into ranking allows us to consider issues such as discourse coherence, flow and density during retrieval. We experimentally show that this type of ranking performs comparably to BM25, and can even outperform it, across different TREC (Voorhees and Harman in TREC: Experiment and evaluation in information retrieval, MIT Press, 2005) datasets and evaluation measures."}
{"_id": "6e487937856c3e70d4c5062548ce4533754a27c6", "title": "Confronting the Challenge of Quality Diversity", "text": "In contrast to the conventional role of evolution in evolutionary computation (EC) as an optimization algorithm, a new class of evolutionary algorithms has emerged in recent years that instead aim to accumulate as diverse a collection of discoveries as possible, yet where each variant in the collection is as fit as it can be. Often applied in both neuroevolution and morphological evolution, these new quality diversity (QD) algorithms are particularly well-suited to evolution's inherent strengths, thereby offering a promising niche for EC within the broader field of machine learning. However, because QD algorithms are so new, until now no comprehensive study has yet attempted to systematically elucidate their relative strengths and weaknesses under different conditions. Taking a first step in this direction, this paper introduces a new benchmark domain designed specifically to compare and contrast QD algorithms. It then shows how the degree of alignment between the measure of quality and the behavior characterization (which is an essential component of all QD algorithms to date) impacts the ultimate performance of different such algorithms. The hope is that this initial study will help to stimulate interest in QD and begin to unify the disparate ideas in the area."}
{"_id": "ff0f6c4c5913a909dc2f319fa68d064f4d93db5f", "title": "A Finger-Vein Based Cancellable Bio-cryptosystem", "text": "Irrevocability is one major issue in existing bio-cryptosystems. In this paper, we proposed a cancellable bio-cryptosystem by taking the full advantage of cancellable and non-invertible properties of bio-hashing biometrics. Specifically, two transformed templates are generated by using the bio-hashing algorithm and applied into two different secure sketches, fuzzy commitment sketch and fuzzy vault sketch, respectively. These two secure sketches can be fused in two different ways: AND fusion and OR fusion, so as to emphasis either on the recognition accuracy or the security level of the system. Experimental results and security analysis show the validity of the proposed scheme."}
{"_id": "696a082a2c086e60ee8668cfafbaa1c599568306", "title": "Protocol considerations for a prefix-caching proxy for multimedia streams", "text": "The increasing popularity of multimedia streaming applications introduces new challenges in content distribution. Web-initiated multimedia streams typically experience high start-up delay, due to large protocol overheads and the poor delay, throughput, and loss properties of the Internet. Internet service providers can improve performance by caching the initial segment (the pre x) of popular streams at proxies near the requesting clients. The proxy can initiate transmission to the client while simultaneously requesting the remainder of the stream from the server. This paper analyzes the challenges of realizing a pre x-caching service in the context of the IETF's Real-Time Streaming Protocol (RTSP), a multimedia streaming protocol that derives from HTTP. We describe how to exploit existing RTSP features, such as the Range header, and how to avoid several round-trip delays by caching protocol information at the proxy. Based on our experiences, we propose extensions to RTSP that would ease the development of new multimedia proxy services. In addition, we discuss how caching the partial contents of multimedia streams introduces new challenges in cache coherency and feedback control. Then, we brie y present our preliminary implementation of pre x caching on a Linux-based PC, and describe how the proxy interoperates with the RealNetworks server and client."}
{"_id": "2f97faa0888ff559bb1c5905c71b9b81b782b638", "title": "Planning for decentralized control of multiple robots under uncertainty", "text": "This paper presents a probabilistic framework for synthesizing control policies for general multi-robot systems that is based on decentralized partially observable Markov decision processes (Dec-POMDPs). Dec-POMDPs are a general model of decision-making where a team of agents must cooperate to optimize a shared objective in the presence of uncertainty. Dec-POMDPs also consider communication limitations, so execution is decentralized. While Dec-POMDPs are typically intractable to solve for real-world problems, recent research on the use of macro-actions in Dec-POMDPs has significantly increased the size of problem that can be practically solved. We show that, in contrast to most existing methods that are specialized to a particular problem class, our approach can synthesize control policies that exploit any opportunities for coordination that are present in the problem, while balancing uncertainty, sensor information, and information about other agents. We use three variants of a warehouse task to show that a single planner of this type can generate cooperative behavior using task allocation, direct communication, and signaling, as appropriate. This demonstrates that our algorithmic framework can automatically optimize control and communication policies for complex multi-robot systems."}
{"_id": "23bde3dadb4effccb3b539c5ce46c295a11615bb", "title": "SCL: Simplifying Distributed SDN Control Planes", "text": "We consider the following question: what consistency model is appropriate for coordinating the actions of a replicated set of SDN controllers? We first argue that the conventional requirement of strong consistency, typically achieved through the use of Paxos or other consensus algorithms, is conceptually unnecessary to handle unplanned network updates. We present an alternate approach, based on the weaker notion of eventual correctness, and describe the design of a simple coordination layer (SCL) that can seamlessly turn a set of single-image SDN controllers (that obey certain properties) into a distributed SDN system that achieves this goal (whereas traditional consensus mechanisms do not). We then show through analysis and simulation that our approach provides faster responses to network events. While our primary focus is on handling unplanned network updates, our coordination layer also handles policy updates and other situations where consistency is warranted. Thus, contrary to the prevailing wisdom, we argue that distributed SDN control planes need only be slightly more complicated than single-image controllers."}
{"_id": "eb448bb53372d14df4113f04fee813307f24d049", "title": "A compact 2.45 GHz, low power wireless energy harvester with a reflector-backed folded dipole rectenna", "text": "This paper paper describes the design procedure as well as the experimental performance of a 2.45GHz 10 \u03bcW wireless energy harvester (WEH) with a maximum total efficiency of \u2248 30% at 1 \u03bcW/cm2 incident power density. The WEH integrates a shunt high-speed rectifying diode with a folded dipole. A metal reflector increases the gain of the rectenna and a quarter wavelength differential line is used as a choke. Both a VDI WVD and a Skyworks GaAs Schottky diode are integrated with the antenna and their performance is compared."}
{"_id": "6820fe24af3b540ef220813614cb05a35ccee81d", "title": "A Comparison of Evolutionary Algorithms and Gradient-based Methods for the Optimal Control Problem", "text": "An experimental comparison of evolutionary algorithms and gradient-based methods for the optimal control problem is carried out. The problem is solved separately by Particle swarm optimization, Grey wolf optimizer, Fast gradient descent method, Marquardt method and Adam method. The simulation is performed on a jet aircraft model. The results of each algorithm performance are compared according to the best found value of the fitness function, the mean value and the standard deviation."}
{"_id": "44726d8c654f3cc21e7e6bbc01b4f2bd74cd658e", "title": "Improving speech recognition by revising gated recurrent units", "text": "Speech recognition is largely taking advantage of deep learning, showing that substantial benefits can be obtained by modern Recurrent Neural Networks (RNNs). The most popular RNNs are Long Short-Term Memory (LSTMs), which typically reach state-of-the-art performance in many tasks thanks to their ability to learn long-term dependencies and robustness to vanishing gradients. Nevertheless, LSTMs have a rather complex design with three multiplicative gates, that might impair their efficient implementation. An attempt to simplify LSTMs has recently led to Gated Recurrent Units (GRUs), which are based on just two multiplicative gates. This paper builds on these efforts by further revising GRUs and proposing a simplified architecture potentially more suitable for speech recognition. The contribution of this work is two-fold. First, we suggest to remove the reset gate in the GRU design, resulting in a more efficient single-gate architecture. Second, we propose to replace tanh with ReLU activations in the state update equations. Results show that, in our implementation, the revised architecture reduces the per-epoch training time with more than 30% and consistently improves recognition performance across different tasks, input features, and noisy conditions when compared to a standard GRU."}
{"_id": "0d939deb8e4bb239fb407410f36650516ccde69d", "title": "Kineticons: using iconographic motion in graphical user interface design", "text": "Icons in graphical user interfaces convey information in a mostly universal fashion that allows users to immediately interact with new applications, systems and devices. In this paper, we define Kineticons - an iconographic scheme based on motion. By motion, we mean geometric manipulations applied to a graphical element over time (e.g., scale, rotation, deformation). In contrast to static graphical icons and icons with animated graphics, kineticons do not alter the visual content or \"pixel-space\" of an element. Although kineticons are not new - indeed, they are seen in several popular systems - we formalize their scope and utility. One powerful quality is their ability to be applied to GUI elements of varying size and shape from a something as small as a close button, to something as large as dialog box or even the entire desktop. This allows a suite of system-wide kinetic behaviors to be reused for a variety of uses. Part of our contribution is an initial kineticon vocabulary, which we evaluated in a 200 participant study. We conclude with discussion of our results and design recommendations."}
{"_id": "21c2bd08b2111dcf957567b98e1c8dcad652e3dd", "title": "Sample Size in Factor Analysis", "text": "The factor analysis literature includes a range of recommendations regarding the minimum sample size necessary to obtain factor solutions that are adequately stable and that correspond closely to population factors. A fundamental misconception about this issue is that the minimum sample size, or the minimum ratio of sample size to the number of variables, is invariant across studies. In fact, necessary sample size is dependent on several aspects of any given study, including the level of communality of the variables and the level of overdetermination of the factors. The authors present a theoretical and mathematical framework that provides a basis for understanding and predicting these effects. The hypothesized effects are verified by a sampling study using artificial data. Results demonstrate the lack of validity of common rules of thumb and provide a basis for establishing guidelines for sample size in factor analysis."}
{"_id": "65ea05d3570ef7756eafbe2c4b51ec7652882dee", "title": "The Fox News Effect: Media Bias and Voting\u2217", "text": "Does media bias affect voting? We address this question by looking at the entry of Fox News in cable markets and its impact on voting. Between October 1996 and November 2000, the conservative Fox News Channel was introduced in the cable programming of 20 percent of US towns. Fox News availability in 2000 appears to be largely idiosyncratic. Using a data set of voting data for 9,256 towns, we investigate if Republicans gained vote share in towns where Fox News entered the cable market by the year 2000. We find a significant effect of the introduction of Fox News on the vote share in Presidential elections between 1996 and 2000. Republicans gain 0.4 to 0.6 percentage points in the towns which broadcast Fox News. The results are robust to town-level controls, district and county fixed effects, and alternative specifications. We also find a significant effect of Fox News on Senate vote share and, to a lesser extent, on voter turnout. Our estimates imply that Fox News convinced 3 to 8 percent of its viewers to vote Republican. The evidence is most consistent with voters suffering from persuasion bias. We also discuss a model of rational voter updating that fits most facts. \u2217George Akerlof, Stephen Ansolabehere, Larry M. Bartels, Matthew Gentzkow, Alan Gerber, Jay Hamilton, Alan Krueger, Marco Manacorda, Enrico Moretti, Torsten Persson, Sam Popkin, Riccardo Puglisi, Matthew Rabin, Jesse Shapiro, David Stromberg, and audiences at Bonn University, EUI (Florence), Fuqua, Harvard University, IIES (Stockholm), Princeton University, UC Berkeley, University of Chicago GSB, and the NBER 2005 Political Economy and Labor Studies Meetings provided useful comments. We would like to specially thank Jim Collins and Matthew Gentzkow for providing the Scarborough data. Shawn Bananzadeh, Jessica Chan, Marguerite Converse, Neil Dandavati, Tatyana Deryugina, Monica Deza, Dylan Fox, Melissa Galicia, Calvin Ho, Sudhamas Khanchanawong, Richard Kim, Martin Kohan, Vipul Kumar, Jonathan Leung, Clarice Li, Tze Yang Lim, Ming Mai, Sameer Parekh, Sharmini Radakrishnan, Rohan Relan, Chanda Singh, Matthew Stone, Nan Zhang, Sibo Zhao, and Liya Zhu helped collect the voting and the cable data. Dan Acland, Saurabh Bhargava, Avi Ebenstein, and Devin Pope provided excellent research assistance."}
{"_id": "15fb71a0385919c0b1cf05a9053a1d0cffba93f3", "title": "Validation of LiDAR point clouds for classification of high-value crops using geometric-and reflectance-based extraction algorithm", "text": "This study provides an experimental analysis of high-value crop classification using geometric and reflectance-based feature extraction algorithms which can be used to validate extensive classification. The main goal of this study is to identify the class based on the 3D reconstruction of small scale LiDAR scanner and RGB images. The reference used in this study was obtained from the initially classified classes of Mapua-Phil LiDAR2 Project. In validating high-value crops, the classification results were compared to the results obtained using the reference data. The validation procedures involve comparing the locations of previously classified crops and analyzing the crop cycle in the tested fields. The proposed methodology used to identify specific class based on the geometric feature alone are found to be acceptable with an average accuracy of 92.5% while the reflectance based classification alone provides an average accuracy of 90%."}
{"_id": "d0840190eb1b2153586cdbe7237e904580890436", "title": "Gain enhancement of butler matrix fed antenna array system by using planar circular EBG units", "text": "In this paper planar circular Electromagnetic Band Gap (EBG) structures have been implemented in the Butler matrix fed antenna array system to improve the radiation performance of the system. Initially a conventional 4\u00d74 Butler matrix fed antenna array system has been designed at 14.5 GHz. Asymmetricity in the gain values of major beams of the antenna array system has been observed both in simulation and measurement. Generally due to the design simplicity, light weight and compactness, the rectangular patch microstrip antenna has been used as the radiating element in the Butler matrix fed antenna array system. At higher frequencies microstrip antenna suffers from losses due to surface wave excitation. It degrades the radiation efficiency of the antenna array system and also increases the level of mutual coupling between the radiating elements in the antenna array. EBG (Electromagnetic Band Gap) units are used to improve the radiation performance of the system by the generation of stop band. The propagation of surface wave will be suppressed in this stop band region and may improve the performance of the system. To fulfill this requirement, 3 \u00d7 1 planar circular EBG unit has been designed. A stop band of 2.15 GHz with the centre frequency of 14.5 GHz has been obtained by the proposed EBG unit. The 3\u00d71 planar circular EBG unit has been integrated between the radiating elements of the antenna array system. As a result the gain enhancement has been achieved for the 1R, 2L, 2R and 1L beams both in simulation and measurement at 14.5 GHz."}
{"_id": "d1f80ad8a7015064632b9f1bd89faf53d36c9a49", "title": "Wiggling through complex traffic: Planning trajectories constrained by predictions", "text": "The vision of autonomous driving is piecewise becoming reality. Still the problem of executing the driving task in a safe and comfortable way in all possible environments, for instance highway, city or rural road scenarios is a challenging task. In this paper we present a novel approach to planning trajectories for autonomous vehicles. Hereby we focus on the problem of planning a trajectory, given a specific behavior option, e.g. merging into a specific gap at a highway entrance or a roundabout. Therefore we explicitly take arbitrary road geometry and prediction information of other traffic participants into account. We extend former contributions in this field by providing a flexible problem description and a trajectory planner without specialization to distinct classes of maneuvers beforehand. Using a carefully chosen representation of the dynamic free space, the method is capable of considering multiple lanes including the predicted dynamics of other traffic participants, while being real-time capable at the same time. The combination of those properties in one general planning method represents the novelty of the proposed method. We demonstrate the capability of our algorithm to plan safe trajectories in simulation and in real traffic in real-time."}
{"_id": "c42adbb77919328fad1fdbcc1ae7cdf12c118134", "title": "Privacy-Preserving Human Activity Recognition from Extreme Low Resolution", "text": "Privacy protection from unwanted video recordings is an important societal challenge. We desire a computer vision system (e.g., a robot) that can recognize human activities and assist our daily life, yet ensure that it is not recording video that may invade our privacy. This paper presents a fundamental approach to address such contradicting objectives: human activity recognition while only using extreme low-resolution (e.g., 16x12) anonymized videos. We introduce the paradigm of inverse super resolution (ISR), the concept of learning the optimal set of image transformations to generate multiple low-resolution (LR) training videos from a single video. Our ISR learns different types of sub-pixel transformations optimized for the activity classification, allowing the classifier to best take advantage of existing high-resolution videos (e.g., YouTube videos) by creating multiple LR training videos tailored for the problem. We experimentally confirm that the paradigm of inverse super resolution is able to benefit activity recognition from extreme low-resolution videos."}
{"_id": "94772d100b42a4c26903234943c4e0d8ad827230", "title": "Coverage-Guided Fuzzing for Deep Neural Networks", "text": "In company with the data explosion over the past decade, deep neural network (DNN) based software has experienced unprecedented leap and is becoming the key driving force of many novel industrial applications, including many safety-critical scenarios such as autonomous driving. Despite great success achieved in various human intelligence tasks, similar to traditional software, DNNs could also exhibit incorrect behaviors caused by hidden defects causing severe accidents and losses. In this paper, we propose DeepHunter, an automated fuzz testing framework for hunting potential defects of general-purpose DNNs. DeepHunter performs metamorphic mutation to generate new semantically preserved tests, and leverages multiple plugable coverage criteria as feedback to guide the test generation from different perspectives. To be scalable towards practical-sized DNNs, DeepHunter maintains multiple tests in a batch, and prioritizes the tests selection based on active feedback. The effectiveness of DeepHunter is extensively investigated on 3 popular datasets (MNIST, CIFAR-10, ImageNet) and 7 DNNs with diverse complexities, under large set of 6 coverage criteria as feedback. The large-scale experiments demonstrate that DeepHunter can (1) significantly boost the coverage with guidance; (2) generate useful tests to detect erroneous behaviors and facilitate the DNN model quality evaluation; (3) accurately capture potential defects during DNN quantization for platform migration."}
{"_id": "fbd7d342fb6245dc161594c889fec655a67a1c59", "title": "The design of early childhood makerspaces to support positive technological development: Two case studies", "text": "Purpose \u2013 With the advent of the maker movement, there has been a new push to explore how spaces of learning ought to be designed. The purpose of this paper is to integrate three approaches for thinking about the role of design of the learning environment: the makerspace movement, Reggio Emilia\u2019s Third Teacher approach, and the positive technological development (PTD) framework. Design/methodology/approach \u2013 This paper describes two case studies that involved the design of two different early childhood makerspaces (ECMSs) through a co-participatory design experience: the Kindergarten Creator Space at the International School of Billund in Denmark; and the ECMS at (removed for blind review), a resource library in Medford, MA. Findings \u2013 Based on the foundational education framework of PTD, and ideas from the field of interior design, this paper describes the design principles of several successful makerspaces, and case examples of children who use them. Originality/value \u2013 By grounding the theoretical discussion in three approaches, the authors aim to suggest design elements of physical spaces in schools and libraries that can promote young children\u2019s learning through making. Recommendations are discussed for practitioners and researchers interested in ECMSs."}
{"_id": "23e522329ef66f71e485172ecc808306fd95f23f", "title": "Predicting the Structure of Cooking Recipes", "text": "Cooking recipes exist in abundance; but due to their unstructured text format, they are hard to study quantitatively beyond treating them as simple bags of words. In this paper, we propose an ingredientinstruction dependency tree data structure to represent recipes. The proposed representation allows for more refined comparison of recipes and recipe-parts, and is a step towards semantic representation of recipes. Furthermore, we build a parser that maps recipes into the proposed representation. The parser\u2019s edge prediction accuracy of 93.5% improves over a strong baseline of 85.7% (54.5% error reduction)."}
{"_id": "a243583e159b4ee4f545184bbdb9c1b2a4e5a483", "title": "Convolutional Monte Carlo Rollouts in Go", "text": "In this work, we present a MCTS-based Go-playing program which uses convolutional networks in all parts. Our method performs MCTS in batches, explores the Monte Carlo search tree using Thompson sampling and a convolutional network, and evaluates convnet-based rollouts on the GPU. We achieve strong win rates against open source Go programs and attain competitive results against state of the art convolutional net-based Go-playing programs."}
{"_id": "de8421e737d8f8757385bb754917cb8cfb77dc18", "title": "AFRL-RQ-WP-TR-2014-0212 A BROADBAND HIGH-GAIN BILAYER LOG-PERIODIC DIPOLE ARRAY ( LPDA ) FOR ULTRA HIGH FREQUENCY ( UHF ) CONFORMAL LOAD BEARING ANTENNA STRUCTURES ( CLAS ) APPLICATIONS", "text": "A broadband high-gain bi-layer Log-PeriodicDipole-Array (LPDA) is introduced for Conformal Load Bearing Antenna Structures (CLAS) applications. Under the proposed scheme the two layers of the LPDA are printed on two separate thin dielectric substrates which are substantially separated from each other. A meander line geometry is adapted to achieve size reduction for the array. The fabricated and tested array easily exceeds more than an octave of gain, pattern and VSWR bandwidth. Index Terms \u2014 Broadband antenna, array, log-periodic, LPDA, endfire, ultrawideband, UWB, CLAS."}
{"_id": "b6b53d8c8790d668e799802444e31e90ac177479", "title": "Docker Layer Placement for On-Demand Provisioning of Services on Edge Clouds", "text": "Driven by the increasing popularity of the microservice architecture, we see an increase in services with unknown demand pattern located in the edge network. Pre-deployed instances of such services would be idle most of the time, which is economically infeasible. Also, the finite storage capacity limits the amount of deployed instances we can offer. Instead, we present an on-demand deployment scheme using the Docker platform. In Docker, service images consist of layers, each layer adding specific functionality. This allows different services to reuse layers, avoiding cluttering the storages with redundant replicas. We propose a layer placement method which allows users to connect to a server, retrieve all necessary layers -possibly from multiple locations- and deploy an instance of the requested service within the desired response time. We search for the best layer placement which maximizes the satisfied demand given the storage and delay constraints. We developed an iterative optimization heuristic which is less exhaustive by dividing the global problem in smaller subproblems. Our simulation results show that our heuristic is able to solve the problem with less system resources. Last, we present interesting use-cases to use this approach in real-life scenarios."}
{"_id": "bbc964cf09b35127f671628b53bdee5fa0780047", "title": "Support vector regression with modified firefly algorithm for stock price forecasting", "text": "The support vector regression (SVR) has been employed to deal with stock price forecasting problems. However, the selection of appropriate kernel parameters is crucial to obtaining satisfactory forecasting performance. This paper proposes a novel approach for forecasting stock prices by combining the SVR with the firefly algorithm (FA). The proposed forecasting model has two stages. In the first stage, to enhance the global convergence speed, a modified version of the FA, which is termed the MFA, is developed in which the dynamic adjustment strategy and the opposition-based chaotic strategy are introduced. In the second stage, a hybrid SVR model is proposed and combined with the MFA for stock price forecasting, in which the MFA is used to optimize the SVR parameters. Finally, comparative experiments are conducted to show the applicability and superiority of the proposed methods. Experimental results show the following: (1) Compared with other algorithms, the proposed MFA algorithm possesses superior performance, and (2) The proposed MFA-SVR prediction procedure can be considered as a feasible and effective tool for forecasting stock prices."}
{"_id": "994c88b567703f76696ff29ca0c5232268d06261", "title": "Women with hyperandrogenism in elite sports: scientific and ethical rationales for regulating.", "text": "The recent implementation by some major sports-governing bodies of policies governing eligibility of females with hyperandrogenism to compete in women's sports has raised a lot of attention and is still a controversial issue. This short article addresses two main subjects of controversy: the existing scientific basis supporting performance enhancing of high blood T levels in elite female athletes, and the ethical rationale and considerations about these policies. Given the recently published data about both innate and acquired hyperandrogenic conditions and their prevalence in elite female sports, we claim that the high level of androgens are per se performance enhancing. Regulating women with clinical and biological hyperandrogenism is an invitation to criticism because biological parameters of sex are not neatly divided into only two categories in the real world. It is, however, the responsibility of the sports-governing bodies to do their best to guarantee a level playing field to all athletes. In order not cloud the discussions about the policies on hyperandrogenism in sports, issues of sports eligibility and therapeutic options should always be considered and explained separately, even if they may overlap. Finally, some proposals for refining the existing policies are made in the present article."}
{"_id": "b7a9f189276a6d48b366b725a60d696e3d65b11e", "title": "Design of Boost-Flyback Single-Stage PFC converter for LED power supply without electrolytic capacitor for energy-storage", "text": "Light emitting diodes (LEDs) are likely to be used for general lighting applications due to their high efficiency and longer life. This paper presents the concept of applying large voltage ripple for energy storage into the Boost-Flyback Single -Stage PFC converter for the elimination of the electrical capacitor. Following proposed design procedure, the single stage PFC circuit with small energy storage capacitance can still achieve good output voltage regulation while preserving desired input power factor. The detailed theoretical analysis and design procedure of the Single-Stage PFC converter is presented. The experimental results obtained on a 60W prototype converter along with waveforms are presented."}
{"_id": "057de3e571a45631e0b2c3121f3f6b5e27c2e081", "title": "Multilevel single-phase inverter based on TNPC-type", "text": "A new single-phase 7-level pulse-width modulation (PWM) inverter using T-type Neutral-point-clamped (TNPC) celIs is presented to obtain high quality output voltage minimizing the number of switching devices. It includes two TNPC cells series-connected together and connected to with different voltage levels. To avoid inherent problems caused by voltage unbalance, modified Buck-boost converters are added at DC-link side to make DC voltage sources for the proposed inverter. Operational principles and the switching pattern are analyzed in this paper. To verify the performance of the proposed inverter, PSIM simulation and experimental results are also shown in this paper."}
{"_id": "5575a8fadf8e65d9e3441b33cd5d1c687aa53300", "title": "Development of a helical climbing modular snake robot", "text": "This research aims to study the grasping torque profile of the helical climbing robot on the cylindrical pole with constant radius. The modular snake robot is formed into a helical shape which can be described by various parameters such as the helical pitch angle, the radius and the pitch distance. The torque in each axis of rotation is affected by the helical pitch angle parameter. Five grasping configurations with different helical pitch angle were tested on the 18 degree of freedoms, 7 modules wheeled snake robot. The experimental result showed that the torque around yaw axis transferred to the pitch axis when the helical pitch angle was increasing. The profile of torque magnitude along the robot's body resulted in parabolic shape due to the unbalanced grasping force of the discrete points of contact between the robot and the pole."}
{"_id": "3e2eafd148ac8c1ad53b3bf14247f360979600f7", "title": "Privacy and human behavior in the age of information", "text": "This Review summarizes and draws connections between diverse streams of empirical research on privacy behavior. We use three themes to connect insights from social and behavioral sciences: people\u2019s uncertainty about the consequences of privacy-related behaviors and their own preferences over those consequences; the context-dependence of people\u2019s concern, or lack thereof, about privacy; and the degree to which privacy concerns are malleable\u2014manipulable by commercial and governmental interests. Organizing our discussion by these themes, we offer observations concerning the role of public policy in the protection of privacy in the information age."}
{"_id": "03efbbf1c8fa661f31fe94efcc894ad95eeff3e2", "title": "Classifying latent user attributes in twitter", "text": "Social media outlets such as Twitter have become an important forum for peer interaction. Thus the ability to classify latent user attributes, including gender, age, regional origin, and political orientation solely from Twitter user language or similar highly informal content has important applications in advertising, personalization, and recommendation. This paper includes a novel investigation of stacked-SVM-based classification algorithms over a rich set of original features, applied to classifying these four user attributes. It also includes extensive analysis of features and approaches that are effective and not effective in classifying user attributes in Twitter-style informal written genres as distinct from the other primarily spoken genres previously studied in the user-property classification literature. Our models, singly and in ensemble, significantly outperform baseline models in all cases. A detailed analysis of model components and features provides an often entertaining insight into distinctive language-usage variation across gender, age, regional origin and political orientation in modern informal communication."}
{"_id": "2ee4b8bc544020c14d8be093182093dc16327c26", "title": "Stochastic gradient boosted distributed decision trees", "text": "Stochastic Gradient Boosted Decision Trees (GBDT) is one of the most widely used learning algorithms in machine learning today. It is adaptable, easy to interpret, and produces highly accurate models. However, most implementations today are computationally expensive and require all training data to be in main memory. As training data becomes ever larger, there is motivation for us to parallelize the GBDT algorithm. Parallelizing decision tree training is intuitive and various approaches have been explored in existing literature. Stochastic boosting on the other hand is inherently a sequential process and have not been applied to distributed decision trees. In this work, we present two different distributed methods that generates exact stochastic GBDT models, the first is a MapReduce implementation and the second utilizes MPI on the Hadoop grid environment."}
{"_id": "391d9ef4395cf2f69e7a2f0483d40b6addd95888", "title": "Robust Sentiment Detection on Twitter from Biased and Noisy Data", "text": "In this paper, we propose an approach to automatically detect sentiments on Twitter messages (tweets) that explores some characteristics of how tweets are written and meta-information of the words that compose these messages. Moreover, we leverage sources of noisy labels as our training data. These noisy labels were provided by a few sentiment detection websites over twitter data. In our experiments, we show that since our features are able to capture a more abstract representation of tweets, our solution is more effective than previous ones and also more robust regarding biased and noisy data, which is the kind of data provided by these"}
{"_id": "9ff86ef08cbfe5f0ef15af42f62aea7673c62027", "title": "Recognizing Stances in Ideological On-Line Debates", "text": "This work explores the utility of sentiment and arguing opinions for classifying stances in ideological debates. In order to capture arguing opinions in ideological stance taking, we construct an arguing lexicon automatically from a manually annotated corpus. We build supervised systems employing sentiment and arguing opinions and their targets as features. Our systems perform substantially better than a distribution-based baseline. Additionally, by employing both types of opinion features, we are able to perform better than a unigrambased system."}
{"_id": "b335982aba6a92691e090294876bc4a4dcea3f06", "title": "A Security Risk Assessment Framework for Smart Car", "text": "As the automobile industry has recently adopted information technologies, the latter are being used to replace mechanical systems with electronically-controlled systems. Moreover, automobiles are evolving into smart cars or connected cars as they are connected to various IT devices and networks such as VANET (Vehicular Ad hoc NETwork). Although there were no concerns about the hacking of automobiles in the past, various security threats are now emerging as electronic systems are gradually filling up the interiors of many automobiles, which are in turn being connected to external networks. As such, researchers have begun studying smart car security, leading to the disclosure of security threats through the testing or development of various automobile security technologies. However, the security threats facing smart cars do not occur frequently and, practically speaking, it is unrealistic to attempt to cope with every possible security threat when considering such factors as performance, compatibility, and so forth. Moreover, the excessive application of security technology will increase the overall vehicle cost and lower the effectiveness of investment. Therefore, smart car security risks should be assessed and prioritized to establish efficient security measures. To that end, this study constructed a security risk assessment framework in a bid to establish efficient measures for smart car security. The proposed security risk assessment framework configured the assessment procedure based on the conventional security risk analysis model GMITS (ISO13335) and utilized 'attack tree analysis' to assess the threats and vulnerabilities."}
{"_id": "0c02d473ae301bab2d61cd196183fa0853c5bcda", "title": "Learning Policies for Partially Observable Environments: Scaling Up", "text": "Partially observable Markov decision processes (pomdp's) model decision problems in which an agent tries to maximize its reward in the face of limited and/or noisy sensor feedback. While the study of pomdp's is motivated by a need to address realistic problems, existing techniques for nding optimal behavior do not appear to scale well and have been unable to nd satisfactory policies for problems with more than a dozen states. After a brief review of pomdp's, this paper discusses several simple solution methods and shows that all are capable of nding near-optimal policies for a selection of extremely small pomdp's taken from the learning literature. In contrast, we show that none are able to solve a slightly larger and noisier problem based on robot navigation. We nd that a combination of two novel approaches performs well on these problems and suggest methods for scaling to even larger and more complicated domains."}
{"_id": "6b2db1f5fa266528cba6d7bfdb8079852d161b63", "title": "Acceptance of e-commerce services: the case of electronic brokerages", "text": "This paper examines human motivations underlying individual acceptance of business-to-consumer (B2C) electronic commerce services. Such acceptance is the key to the survival of firms in this intensely competitive industry. A modified theory of planned behavior (TPB) is used to hypothesize a model of e-commerce service acceptance, which is then tested using a field survey of 172 e-brokerage users. We found TPB useful in explaining e-commerce service acceptance; however acceptance motivations are significantly different from that of typical IS products. Based on a broader conceptualization of TPB\u2019s subjective norm to include both external (mass-media) and interpersonal influences, we report that subjective norm is an important predictor of e-commerce acceptance, behavioral control has minimal impact on e-commerce acceptance, and external influence is a significant determinant of subjective norm. Implications of these findings in light of e-commerce research and practice are discussed."}
{"_id": "d5967e408c135957ec7598c63e9b1ec587368f24", "title": "SURVEY OF BIOMETRIC RECOGNITION SYSTEMS AND THEIR", "text": "The term Biometrics is becoming highly important in computer security world. The human physical characteristics like fingerprints, face, hand geometry, voice and iris are known as biometrics. These features are used to provide an authentication for computer based security systems. The existing computer security systems used at various places like banking, passport, credit cards, smart cards, PIN , access control and network security are using username and passwords for person identification. The username and passwords can be replaced and/or provide double authentication by using any one of the biometric features. In this paper, the main focus is on the various biometrics, their applications and the existing biometrics recognition systems."}
{"_id": "1f68e974d2deda8f8a3313e3b68444b893ef7865", "title": "Highly sensitive sensor for detection of initial slip and its application in a multi-fingered robot hand", "text": "Tactile sensors for slip detection are essential for implementing human-like gripping in a robot hand. In previous studies, we proposed flexible, thin and lightweight slip detection sensors utilizing the characteristics of pressure-sensitive conductive rubber. This was achieved by using the high-frequency vibration component generated in the process of slipping of the gripped object in order to distinguish between slipping of the object and changes in the normal force. In this paper, we design a slip detection sensor for a multi-fingered robot hand and examine the influence of noise caused by the operation of such a hand. Finally, we describe an experiment focusing on the adjustment of the gripping force of a multi-fingered robot hand equipped with the developed sensors"}
{"_id": "e2d656639c8187246282650be6eb407b41f6096b", "title": "Multi-Information Source Optimization", "text": "We consider Bayesian methods for multi-information source optimization (MISO), in which we seek to optimize an expensive-to-evaluate black-box objective function while also accessing cheaper but biased and noisy approximations (\u201cinformation sources\u201d). We present a novel algorithm that outperforms the state of the art for this problem by using a Gaussian process covariance kernel better suited to MISO than those used by previous approaches, and an acquisition function based on a one-step optimality analysis supported by efficient parallelization. We also provide a novel technique to guarantee the asymptotic quality of the solution provided by this algorithm. Experimental evaluations demonstrate that this algorithm consistently finds designs of higher value at less cost than previous approaches."}
{"_id": "3d23b8a7f06be2af95216a39bb18c28b06cc0e4b", "title": "Narrowband Cooperative Spectrum Sensing in Cognitive Networks", "text": "Narrowband Cooperative Spectrum Sensing in Cognitive Networks"}
{"_id": "0a35160f347f990e95f517b65a920b1b53ec32df", "title": "Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion.", "text": "We used positron emission tomography to study neural mechanisms underlying intensely pleasant emotional responses to music. Cerebral blood flow changes were measured in response to subject-selected music that elicited the highly pleasurable experience of \"shivers-down-the-spine\" or \"chills.\" Subjective reports of chills were accompanied by changes in heart rate, electromyogram, and respiration. As intensity of these chills increased, cerebral blood flow increases and decreases were observed in brain regions thought to be involved in reward/motivation, emotion, and arousal, including ventral striatum, midbrain, amygdala, orbitofrontal cortex, and ventral medial prefrontal cortex. These brain structures are known to be active in response to other euphoria-inducing stimuli, such as food, sex, and drugs of abuse. This finding links music with biologically relevant, survival-related stimuli via their common recruitment of brain circuitry involved in pleasure and reward."}
{"_id": "65f1613a4f1b015fc64608b787227de0549f4cec", "title": "SPECTRE : Serialization of Proof-of-work Events : Confirming Transactions via Recursive Elections", "text": "Bitcoin utilizes Nakamoto Consensus to achieve agreement on a consistent set of transactions, in the permissionless setting, where anyone can participate in the protocol anonymously. Since its rise, many other permissionless consensus protocols have been proposed. We present SPECTRE, a new protocol for the consensus core of cryptocurrencies that remains secure even under high throughput and fast confirmation times. At any throughput, SPECTRE is resilient to attackers with up to 50% of the computational power (reaching the limit defined by network congestion and bandwidth constraints). SPECTRE can operate at arbitrarily high block creation rates, which implies that its transactions confirm in mere seconds (limited mostly by the round-trip-time in the network). SPECTRE\u2019s underlying model falls into the category of partial synchronous networks: its security depends on the existence of some bound on the delivery time of messages between honest participants, but the protocol itself does not contain any parameter that depends on this bound. Hence, while other protocols that do encode such parameters must operate with extreme safety margins, SPECTRE converges according to the actual network delay. Key to SPECTRE\u2019s achievements is the fact that it satisfies weaker properties than classic consensus requires. In the conventional paradigm, the order between any two transactions must be decided and agreed upon by all non-corrupt nodes. In contrast, SPECTRE only satisfies this with respect to transactions performed by honest users. We observe that in the context of money, two conflicting payments that are published concurrently could only have been created by a dishonest user, hence we can afford to delay the acceptance of such transactions without harming the usability of the system. Our framework formalizes this weaker set of requirements for a cryptocurrency\u2019s distributed ledger. We then provide a formal proof that SPECTRE satisfies these requirements."}
{"_id": "8fcb3084b6b1483b053655f1793067531ec2dea5", "title": "1 Marine anthropogenic litter on British beaches : a 10-year nationwide assessment using citizen science", "text": "Growing evidence suggests that anthropogenic litter, particularly plastic, represents a highly pervasive and persistent threat to global marine ecosystems. Multinational research is progressing to characterise its sources, distribution and abundance so that interventions aimed at reducing future inputs and clearing extant litter can be developed. Citizen science projects, whereby members of the public gather information, offer a low-cost method of collecting large volumes of data with considerable temporal and spatial coverage. Furthermore, such projects raise awareness of environmental issues and can lead to positive changes in behaviours and attitudes. We present data collected over a decade (2005-2014 inclusive) by Marine Conservation Society (MCS) volunteers during beach litter surveys carried along the British coastline, with the aim of increasing knowledge on the composition, spatial distribution and temporal trends of coastal debris. Unlike many citizen science projects, the MCS beach litter survey programme gathers information on the number of volunteers, duration of surveys and distances covered. This comprehensive information provides an opportunity to standardise data for variation in sampling effort among surveys, enhancing the value of outputs and robustness of findings. We found that plastic is the main constituent of anthropogenic litter on British beaches and the majority of traceable items originate from land-based sources, such as public littering. We identify the coast of the Western English Channel and Celtic Sea as experiencing the highest relative litter levels. Increasing trends over the 10-year time period were detected for a number of individual item categories, yet no statistically significant change in total (effort-corrected) litter was detected. We discuss the limitations of the dataset and make recommendations for future work. The study demonstrates the value of citizen science data in providing insights that would otherwise not be possible due to logistical and financial constraints of running government-funded sampling programmes on such large scales."}
{"_id": "27a2fad58dd8727e280f97036e0d2bc55ef5424c", "title": "Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking", "text": "To help accelerate progress in multi-target, multi-camera tracking systems, we present (i) a new pair of precision-recall measures of performance that treats errors of all types uniformly and emphasizes correct identification over sources of error; (ii) the largest fully-annotated and calibrated data set to date with more than 2 million frames of 1080 p, 60 fps video taken by 8 cameras observing more than 2,700 identities over 85 min; and (iii) a reference software system as a comparison baseline. We show that (i) our measures properly account for bottom-line identity match performance in the multi-camera setting; (ii) our data set poses realistic challenges to current trackers; and (iii) the performance of our system is comparable to the state of the art."}
{"_id": "9c4835a4d9a9143ee500b300d238d629c16b8fe3", "title": "Effective Design Parameters on the End Effect in Single-Sided Linear Induction Motors", "text": "Linear induction motors are used in various industries but they have some specific phenomena which are the causes for some problems. The most important phenomenon is called end effect. End effect decreases efficiency, power factor and output force and unbalances the phase currents. This phenomenon is more important in medium and high speeds machines. In this paper a factor, EEF , is obtained by an accurate equivalent circuit model, to determine the end effect intensity. In this way, all of effective design parameters on end effect is described. Accuracy of this equivalent circuit model is evaluated by two dimensional finite-element analysis using ANSYS. The results show the accuracy of the equivalent circuit model. Keywords\u2014Linear induction motor, end effect, equivalent circuit model, finite-element method."}
{"_id": "372f837c868ad984533d804e13260fac37768b8b", "title": "In-Cell Self-Capacitive-Type Mobile Touch System and Embedded Readout Circuit in Display Driver IC", "text": "A mobile display system, which includes a self- capacitive type in-cell touch screen panel and a display driver IC, in which the readout circuit is embedded, is developed for reduced cost, improved display performance, and superior touch performance. A touch electrode is implemented with a segmented common electrode of an LCD display. The display system of the in-cell touch panel is 5.7-in a-Si HD (720 \u00d7 1280), the number of touch electrodes is 576 (18 \u00d7 32), and the sensor pitch is approximately 4\u00a0mm. Compared to the display-only system, display performance degradation does not occur in the proposed system since in-cell technology is adopted. Despite using a self-capacitance sensing method in the panel, unlimited multitouch is available by using an all-point sensing scheme; this system shows a touch accuracy of less than 0.2\u00a0mm and signal-to-noise ratio is measured to be more than 48\u00a0dB in the entire panel without any digital filter. To enhance the touch performance, we proposed noise reduction techniques that are strongly related to the display operation, such as base line compensation, delta adding correlated double sampling, and the use of a high-voltage charge compensator. The power consumption including micro controller unit (MCU) is less than 30\u00a0mW."}
{"_id": "227639dba3974bcd6018f03dd2334a0e462fe341", "title": "LinkedIn skills: large-scale topic extraction and inference", "text": "\"Skills and Expertise\" is a data-driven feature on LinkedIn, the world's largest professional online social network, which allows members to tag themselves with topics representing their areas of expertise. In this work, we present our experiences developing this large-scale topic extraction pipeline, which includes constructing a folksonomy of skills and expertise and implementing an inference and recommender system for skills. We also discuss a consequent set of applications, such as Endorsements, which allows members to tag themselves with topics representing their areas of expertise and for their connections to provide social proof, via an \"endorse\" action, of that member's competence in that topic."}
{"_id": "26cfe4cb725e8cb939cd968f44218b2d9a36a794", "title": "A Problem of Dimensionality: A Simple Example", "text": "In pattern recognition problems it has been noted that beyond a certain point the inclusion of additional parameters (that have been estimated) leads to higher probabilities of error. A simple problem has been formulated where the probability of error approaches zero as the dimensionality increases and all the parameters are known; on the other hand, the probability of error approaches one-half as the dimensionality increases and parameters are estimated."}
{"_id": "629edc5dfc768ffceec4e23afd5b00febb72d483", "title": "Aggregates Caching in Columnar In-Memory Databases", "text": "The mixed database workloads found in enterprise applications are comprised of short-running transactional as well as analytical queries with resource-intensive data aggregations. In this context, caching the query results of long-running queries is desirable as it increases the overall performance. However, traditional caching approaches are inefficient in a way that changes in the base data result in invalidation or recalculation of cached results. Columnar in-memory databases with a main-delta architecture are optimized for a novel caching mechanism for aggregate queries that is the main contribution of this paper. With the separation into a readoptimized main storage and write-optimized delta storage, we do not invalidate cached query results when new data is inserted to the delta storage. Instead, we use the cached query result and combine it with the newly added records in the delta storage. We evaluate this caching mechanism with mixed database workloads and show how it compares to existing work in this area."}
{"_id": "33bc410bd8c724c8e8d19c984f4481abf4092684", "title": "Computer-assisted pop-up design for children: computationally enriched paper engineering", "text": "Computationally-enriched crafts are activities that blend the advantages of computational media with the affective, social, and cognitive affordances of children's crafts. In this paper, we describe a design application, Popup Workshop, whose purpose is to introduce children to the craft (and engineering discipline) of pop-up design in paper. We describe the fundamental ideas behind computational crafts in general, present our application in its current implementation and offer a scenario for its use, explore the particular ways in which pop-up design serves as fertile ground for integrating computation and tangible design, and discuss our early pilot-tests with elementary-school children, as well as ongoing and related work."}
{"_id": "dba7cf91feff6bdf26e9a3940015fd0c0cdc6520", "title": "Original Loop-Closure Detection Algorithm for Monocular vSLAM", "text": "Vision-based simultaneous localization and mapping (vSLAM) is a well-established problem in mobile robotics and monocular vSLAM is one of the most challenging variations of that problem nowadays. In this work we study one of the core post-processing optimization mechanisms in vSLAM, e.g. loop-closure detection. We analyze the existing methods and propose original algorithm for loop-closure detection, which is suitable for dense, semi-dense and feature-based vSLAM methods. We evaluate the algorithm experimentally and show that it contribute to more accurate mapping while speeding up the monocular vSLAM pipeline to the extent the latter can be used in real-time for controlling small multirotor vehicle (drone)."}
{"_id": "b5f8a85522580f05242973fd4b6158a6299d76b1", "title": "The Structure of the Relational Database Model", "text": "= (0,6., dom, M, SC)"}
{"_id": "f95bdf4129709f8900e920232a1e13aa12daca76", "title": "Detection of Subconscious Face Recognition Using Consumer-Grade Brain-Computer Interfaces", "text": "We test the possibility of tapping the subconscious mind for face recognition using consumer-grade BCIs. To this end, we performed an experiment whereby subjects were presented with photographs of famous persons with the expectation that about 20% of them would be (consciously) recognized; and since the photos are of famous persons, we expected that subjects would have seen before some of the 80% they didn\u2019t (consciously) recognize. Further, we expected that their subconscious would have recognized some of those in the 80% pool that they had seen before. An exit questionnaire and a set of criteria allowed us to label recognitions as conscious, false, no recognitions, or subconscious recognitions. We analyzed a number of event related potentials training and testing a support vector machine. We found that our method is capable of differentiating between no recognitions and subconscious recognitions with promising accuracy levels, suggesting that tapping the subconscious mind for face recognition is feasible."}
{"_id": "6a06f87d82975b873b3cd6130a60a26c1d0b181c", "title": "Going Wild: Large-Scale Classification of Open DNS Resolvers", "text": "Since several years, millions of recursive DNS resolvers are-deliberately or not-open to the public. This, however, is counter-intuitive, since the operation of such openly accessible DNS resolvers is necessary in rare cases only. Furthermore, open resolvers enable both amplification DDoS and cache snooping attacks, and can be abused by attackers in multiple other ways. We thus find open recursive DNS resolvers to remain one critical phenomenon on the Internet.\n In this paper, we illuminate this phenomenon by analyzing it from two different angles. On the one hand, we study the landscape of DNS resolvers based on empirical data we collected for over a year. We analyze the changes over time and classify the resolvers according to device type and software version. On the other hand, we take the viewpoint of a client and measure the response authenticity of these resolvers. Besides legitimate redirections (e.g., to captive portals or router login pages), we find millions of resolvers to deliberately manipulate DNS resolutions (i.e., return bogus IP address information). To understand this threat in more detail, we systematically analyze non-legitimate DNS responses and reveal open DNS resolvers that manipulate DNS resolutions to censor communication channels, inject advertisements, serve malicious files, perform phishing, or redirect to other kinds of suspicious or malicious activities."}
{"_id": "9155978dacc622a229c39e7117870b8883280f32", "title": "System-level Max POwer (SYMPO) - a systematic approach for escalating system-level power consumption using synthetic benchmarks", "text": "To effectively design a computer system for the worst case power consumption scenario, system architects often use hand-crafted maximum power consuming benchmarks at the assembly language level. These stressmarks, also called power viruses, are very tedious to generate and require significant domain knowledge. In this paper, we propose SYMPO, an automatic SYstem level Max POwer virus generation framework, which maximizes the power consumption of the CPU and the memory system using genetic algorithm and an abstract workload generation framework. For a set of three ISAs, we show the efficacy of the power viruses generated using SYMPO by comparing the power consumption with that of MPrime torture test, which is widely used by industry to test system stability. Our results show that the usage of SYMPO results in the generation of power viruses that consume 14-41% more power compared to MPrime on SPARC ISA. The genetic algorithm achieved this result in about 70 to 90 generations in 11 to 15 hours when using a full system simulator. We also show that the power viruses generated in the Alpha ISA consume 9-24% more power compared to the previous approach of stressmark generation. We measure and provide the power consumption of these benchmarks on hardware by instrumenting a quad-core AMD Phenom II X4 system. The SYMPO power virus consumes more power compared to various industry grade power viruses on x86 hardware. We also provide a microarchitecture independent characterization of various industry standard power viruses."}
{"_id": "10e69e21bec6cb3271acb4228933dba92fbb724e", "title": "Socially Aware Conference Participant Recommendation With Personality Traits", "text": "As a result of the importance of academic collaboration at smart conferences, various researchers have utilized recommender systems to generate effective recommendations for participants. Recent research has shown that the personality traits of users can be used as innovative entities for effective recommendations. Nevertheless, subjective perceptions involving the personality of participants at smart conferences are quite rare and have not gained much attention. Inspired by the personality and social characteristics of users, we present an algorithm called Socially and Personality Aware Recommendation of Participants (SPARP). Our recommendation methodology hybridizes the computations of similar interpersonal relationships and personality traits among participants. SPARP models the personality and social characteristic profiles of participants at a smart conference. By combining the aforementioned recommendation entities, SPARP then recommends participants to each other for effective collaborations. We evaluate SPARP using a relevant data set. Experimental results confirm that SPARP is reliable and outperforms other state-of-the-art methods."}
{"_id": "09779ea94f0035c1e5d5cf75f7dfca8c7966a17b", "title": "Planar MIMO antennas for IoT and CR applications", "text": "In this paper, a planar, compact, single-substrate, multiband 2 sets of 2-elements each multiple-input-multiple-output (MIMO) antenna system is presented. The MIMO antenna system consists of a tunable 2-element meandered and folded MIMO antenna to cover the LTE Band (698 MHz\u2013813 MHz) and a compact 2-element modified truncated cube wideband antenna to cover 754 MHz\u2013971 MHz, 1.65\u20131.83 GHz and 2\u20133.66 GHz, respectively. The ground plane of this antenna behaves as a sensing antenna operating in 0.76\u20131.92 GHz, and 3.0\u20135.2 GHz. The upper band antennas operate in 0.728\u20131.08 GHz, 1.64\u20131.84 GHz, 2.1\u20133.69 GHz, and 5.01\u20135.55 GHz range to develop a complete antenna platform for cognitive radios (CR) and Internet of Things (IoT) applications. The antenna is fabricated on a low cost FR-4 substrate (\u03b5r=4.4 tan\u03b4=0.02) of dimensions 65 \u00d7120 \u00d71.56 mm3."}
{"_id": "09ccba4404669089bbcc9ffd361e2d72794a494a", "title": "Learning Latent Personas of Film Characters", "text": "We present two latent variable models for learning character types, or personas, in film, in which a persona is defined as a set of mixtures over latent lexical classes. These lexical classes capture the stereotypical actions of which a character is the agent and patient, as well as attributes by which they are described. As the first attempt to solve this problem explicitly, we also present a new dataset for the text-driven analysis of film, along with a benchmark testbed to help drive future work in this area."}
{"_id": "1a311e9fddc2db40b978c3b5580b10a72aadccef", "title": "A MATLAB-Simulink-Based PV Module Model and Its Application Under Conditions of Nonuniform Irradiance", "text": "The performance of a photovoltaic (PV) module is mostly affected by array configuration, irradiance, and module temperature. It is important to understand the relationship between these effects and the output power of the PV array. This paper presents a MATLAB-Simulink-based PV module model which includes a controlled current source and an S-Function builder. The modeling scheme in S-Function builder is deduced by some predigested functions. Under the conditions of nonuniform irradiance, the model is practically validated by using different array configurations in testing platform. The comparison experiments indicate that I-V and P-V characteristic curves of simulation match the measurements from outdoor experiment well. Under the conditions of nonuniform irradiance, both simulation and experiment show that the output power of a PV array gets more complicated due to multiple peaks. Moreover, the proposed model can also simulate electric circuit and its maximum power point tracking (MPPT) in the environment of MATLAB-Simulink. The experiments show that the proposed model has good predictability in the general behaviors of MPPT under the conditions of both nonuniform and uniform irradiance."}
{"_id": "3e202588b345019f645f3af6ac41a86cbddab450", "title": "QuickCut: An Interactive Tool for Editing Narrated Video", "text": "We present QuickCut, an interactive video editing tool designed to help authors efficiently edit narrated video. QuickCut takes an audio recording of the narration voiceover and a collection of raw video footage as input. Users then review the raw footage and provide spoken annotations describing the relevant actions and objects in the scene. QuickCut time-aligns a transcript of the annotations with the raw footage and a transcript of the narration to the voiceover. These aligned transcripts enable authors to quickly match story events in the narration with semantically relevant video segments and form alignment constraints between them. Given a set of such constraints, QuickCut applies dynamic programming optimization to choose frame-level cut points between the video segments while maintaining alignments with the narration and adhering to low-level film editing guidelines. We demonstrate QuickCut's effectiveness by using it to generate a variety of short (less than 2 minutes) narrated videos. Each result required between 14 and 52 minutes of user time to edit (i.e. between 8 and 31 minutes for each minute of output video), which is far less than typical authoring times with existing video editing workflows."}
{"_id": "2a958633da47499e01723a24a8106fcce4849360", "title": "Towards improving bug tracking systems with game mechanisms", "text": "Low bug report quality and human conflicts pose challenges to keep bug tracking systems productive. This work proposes to address these issues by applying game mechanisms to bug tracking systems. We investigate the use of game mechanisms in Stack Overflow, an online community organized to resolve computer programming related problems, for which the improvements we seek for bug tracking systems also turn out to be relevant. The results of our Stack Overflow investigation show that its game mechanisms could be used to address these issues by motivating contributors to increase contribution frequency and quality, by filtering useful contributions, and by creating an agile and dependable moderation system. We proceed by mapping these mechanisms to open-source bug tracking systems, and find that most benefits are applicable. Additionally, our results motivate tailoring a reward and reputation system and summarizing bug reports as future directions for increasing the benefits of game mechanisms in bug tracking systems."}
{"_id": "5b110494639f71fa8354e61af04c0cb5e8bbae70", "title": "Anti-jamming performance of cognitive radio networks under multiple uncoordinated jammers in fading environment", "text": "In this paper, we study both the jamming capability of the cognitive-radio-based jammers and the anti-jamming capability of the cognitive radio networks (CRN), by considering multiple uncooperative jammers and independent Rayleigh flat-fading propagations. A Markov model of CRN transmission is set up for the cross-layer analysis of the anti-jamming performance. The transitional probabilities are derived analytically by considering a smart jamming attack strategy. Average throughput expression is obtained and verified by simulations. The results indicate that CRN communications can be extremely susceptible to smart jamming attacks targeting the CRN spectrum sensing and channel switching procedures."}
{"_id": "54f8ae5828615d1fe7d61c0038cc1ec77f4697b0", "title": "Usability Analysis of Visual Programming Environments: A 'Cognitive Dimensions' Framework", "text": "The cognitive dimensions framework is a broad-brush evaluation technique for interactive devices and for non-interactive notations. It sets out a small vocabulary of terms designed to capture the cognitively-relevant aspects of structure, and shows how they can be traded off against each other. The purpose of this paper is to propose the framework as an evaluation technique for visual programming environments. We apply it to two commercially-available dataflow languages (with further examples from other systems) and conclude that it is effective and insightful; other HCI-based evaluation techniques focus on different aspects and would make good complements. Insofar as the examples we used are representative, current VPLs are successful in achieving a good \u2018closeness of match\u2019, but designers need to consider the \u2018viscosity\u2019 (resistance to local change) and the \u2018secondary notation\u2019 (possibility of conveying extra meaning by choice of layout, colour, etc.)."}
{"_id": "482210feeb76e454f8fdcadddb0794df12db3fb0", "title": "Customer churn analysis : Churn determinants and mediation effects of partial defection in the Korean mobile telecommunications service industry", "text": "Retaining customers is one of the most critical challenges in the maturing mobile telecommunications service industry. Using customer transaction and billing data, this study investigates determinants of customer churn in the Korean mobile telecommunications service market. Results indicate that call quality-related factors influence customer churn; however, customers participating in membership card programs are also more likely to churn, which raises questions about program effectiveness. Furthermore, heavy users also tend to churn. In order to analyze partial and total defection, this study defines changes in a customer\u2019s status from active use (using the service on a regular basis) to non-use (deciding not to use it temporarily without having churned yet) or suspended (being suspended by the service provider) as partial defection and from active use to churn as total defection. Thus, mediating effects of a customer\u2019s partial defection on the relationship between the churn determinants and total defection are analyzed and their implications are discussed. Results indicate that some churn determinants influence customer churn, either directly or indirectly through a customer\u2019s status change, or both; therefore, a customer\u2019s status change explains the relationship between churn determinants and the probability of churn. r 2006 Elsevier Ltd. All rights reserved."}
{"_id": "b8aae299e926d8e6f547faea4b90619fc6361146", "title": "Building Skeleton Models via 3-D Medial Surface/Axis Thinning Algorithms", "text": ""}
{"_id": "9873d43696165b50fab955d27b9dde838c0a0152", "title": "Machine learning applications in genetics and genomics", "text": "The field of machine learning, which aims to develop computer algorithms that improve with experience, holds promise to enable computers to assist humans in the analysis of large, complex data sets. Here, we provide an overview of machine learning applications for the analysis of genome sequencing data sets, including the annotation of sequence elements and epigenetic, proteomic or metabolomic data. We present considerations and recurrent challenges in the application of supervised, semi-supervised and unsupervised machine learning methods, as well as of generative and discriminative modelling approaches. We provide general guidelines to assist in the selection of these machine learning methods and their practical application for the analysis of genetic and genomic data sets."}
{"_id": "36638aff184754db62547b75bade8fa2076b1b19", "title": "Real AdaBoost : boosting for credit scorecards and similarity to WOE logistic regression", "text": "Adaboost is a machine learning algorithm that builds a series of small decision trees, adapting each tree to predict difficult cases missed by the previous trees and combining all trees into a single model. We will discuss the AdaBoost methodology and introduce the extension called Real AdaBoost. Real AdaBoost comes from a strong academic pedigree: its authors are pioneers of machine learning and the method has well-established empirical and theoretical support spanning 15 years. Practically speaking, Real AdaBoost is able to produce readable credit scorecards and offers attractive features including variable interaction and adaptive, stage-wise binning. We will contrast Real AdaBoost to the dominant methodology for creating credit scorecards: stepwise weight of evidence logistic regression (SWOELR). Real AdaBoost is remarkably similar to SWOELR and is well positioned to serve as a benchmark for SWOELR models; it may even offer a statistical framework by which we can understand the power of SWOELR. We offer a macro to generate Real AdaBoost models in SAS. INTRODUCTION Financial institutions (FIs) must develop a wide range of models for marketing, fraud detection, loan adjudication, etc. Modeling has undergone a recent renaissance as machine learning has exploded \u2013 spurned by the availability of advanced statistical techniques, the ubiquity of powerful computers to execute these techniques, and the well-publicized successes of the companies who have embraced these methods (Parloff 2016). Modeling departments within some FIs face opposing demands: executives want some of the famed value of advanced methods, while government regulators, internal deployment teams and front-line staff want models that are easy to implement, interpret and understand. In this paper we review Real AdaBoost, a machine learning technique that may offer a middle-ground between powerful, but opaque machine learning methods and transparent conventional methods. CONSUMER RISK MODELS One field of modeling where FIs must often strike a balance between power and transparency is consumer risk modeling. Consumer risk modeling involves ranking customers by their credit worthiness (the likelihood they will repay a loan): first by identifying customer characteristics that indicate risk of delinquency, and then combining them mathematically to calculate a relative risk score for each customer (common characteristics include: past loan delinquency, high credit utilization, etc.). CREDIT SCORECARDS In order to keep the consumer risk models as transparent as possible, many FIs require that the final output of the model be in the form of a scorecard (an example is shown in Table 1). Credit scorecards are a popular way to represent customer risk models due to their simplicity, readability, and the ease with which business expertise can be incorporated during the modeling process (Maldonado et al. 2013). A scorecard lists a number of characteristics that indicate risk and each characteristic is subdivided into a small number of bins defined by ranges of values for that characteristic (e.g., credit utilization: 30-80% is a bin for the credit utilization characteristic). Each bin is assigned a number of score points, a value derived from a statistical model and proportional to the risk of that bin (SAS 2012). 
A customer will fall into one and only one bin per characteristic and the final score of the applicant is the sum of the points assigned by each bin (plus an intercept). This final score is proportional to consumer risk. The procedure for developing scorecards is termed stepwise weight of evidence logistic regression (SWOELR) and is implemented in the Credit Scoring add-on in SAS\u00ae Enterprise MinerTM."}
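For readers who want the mechanics rather than the SAS macro, the real-valued boosting step (Friedman, Hastie and Tibshirani's Real AdaBoost) is short enough to sketch in Python. This is an illustrative stand-in for the paper's implementation, with scikit-learn depth-1 trees playing the role of one-characteristic scorecard bins:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y01 = make_classification(n_samples=2000, n_features=10, random_state=0)
y = 2 * y01 - 1                          # labels in {-1, +1}

w = np.full(len(y), 1.0 / len(y))        # observation weights
F = np.zeros(len(y))                     # additive score (the scorecard total)
eps = 1e-6
for m in range(50):
    stump = DecisionTreeClassifier(max_depth=1)
    stump.fit(X, y01, sample_weight=w)
    p = np.clip(stump.predict_proba(X)[:, 1], eps, 1 - eps)
    f = 0.5 * np.log(p / (1 - p))        # real-valued vote of this stump
    F += f
    w *= np.exp(-y * f)                  # up-weight the cases this stump missed
    w /= w.sum()

print("training accuracy:", ((F > 0).astype(int) == y01).mean())
```

Each stump splits a single characteristic into two bins, and its half-log-odds vote plays the same role as weight-of-evidence score points, which is the similarity to SWOELR the paper emphasizes.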
{"_id": "2a0d9ece43a6de46b4218ed76fc7ecb706237a84", "title": "Approximation Algorithms for Degree-Constrained Minimum-Cost Network-Design Problems", "text": "We study network-design problems with two different design objectives: the total cost of the edges and nodes in the network and the maximum degree of any node in the network. A prototypical example is the degree-constrained node-weighted Steiner tree problem: We are given an undirected graph G(V,E) , with a nonnegative integral function d that specifies an upper bound d(v) on the degree of each vertex v \u2208 V in the Steiner tree to be constructed, nonnegative costs on the nodes, and a subset of k nodes called terminals. The goal is to construct a Steiner tree T containing all the terminals such that the degree of any node v in T is at most the specified upper bound d(v) and the total cost of the nodes in T is minimum. Our main result is a bicriteria approximation algorithm whose output is approximate in terms of both the degree and cost criteria\u2014the degree of any node v \u2208 V in the output Steiner tree is O(d(v) log\u00a0 k) and the cost of the tree is O(log\u00a0 k) times that of a minimum-cost Steiner tree that obeys the degree bound d(v) for each node v . Our result extends to the more general problem of constructing one-connected networks such as generalized Steiner forests. We also consider the special case in which the edge costs obey the triangle inequality and present simple approximation algorithms with better performance guarantees."}
{"_id": "d51379a0aa76b9283f78d2b90f04d5eb4cc2f757", "title": "Construction and Refinement of Panoramic Mosaics with Global and Local Alignment", "text": "This paper presents techniques for constructing full view panoramic mosaics from sequences of images. Our representation associates a rotation matrix (and optionally a focal length) with each input image, rather than explicitly projecting all of the images onto a common surface (e.g., a cylinder). In order to reduce accumulated registration errors, we apply global alignment (block adjustment) to the whole sequence of images, which results in an optimal image mosaic (in the least-squares sense). To compensate for small amounts of motion parallax introduced by translations of the camera and other unmodeled distortions, we develop a local alignment (deghosting) technique which warps each image based on the results of pairwise local image registrations. By combining both global and local alignment, we significantly improve the quality of our image mosaics, thereby enabling the creation of full view panoramic mosaics with hand-held cameras."}
{"_id": "c1abc3ade259fb9507063f489c590404ffb31282", "title": "A multi-temporal InSAR method incorporating both persistent scatterer and small baseline approaches", "text": "[1] Synthetic aperture radar (SAR) interferometry is a technique that provides high-resolution measurements of the ground displacement associated with many geophysical processes. Advanced techniques involving the simultaneous processing of multiple SAR acquisitions in time increase the number of locations where a deformation signal can be extracted and reduce associated error. Currently there are two broad categories of algorithms for processing multiple acquisitions, persistent scatterer and small baseline methods, which are optimized for different models of scattering. However, the scattering characteristics of real terrains usually lay between these two end-member models. I present here a new method that combines both approaches, to extract the deformation signal at more points and with higher overall signal-to-noise ratio than can either approach alone. I apply the combined method to data acquired over Eyjafjallaj\u00f6kull volcano in Iceland, and detect time-varying ground displacements associated with two intrusion events. Citation: Hooper, A., (2008), A multi-temporal InSAR method incorporating both persistent scatterer and small baseline approaches, Geophys. Res. Lett., 35, L16302, doi:10.1029/ 2008GL034654."}
{"_id": "ccbac9c9cda72b27bfda0d780be86da744b6ce7c", "title": "Machine Learning Methods for Optical Character Recognition", "text": "Success of optical character recognition depends on a number of factors, two of which are feature extraction and classi cation algorithms. In this paper we look at the results of the application of a set of classi ers to datasets obtained through various basic feature extraction methods."}
{"_id": "7ef50bc62345704996c358df6baa27fff0b6a8c7", "title": "Type-2 fuzzy ontology-based semantic knowledge for collision avoidance of autonomous underwater vehicles", "text": "The volume of obstacles encountered in the marine environment is rapidly increasing, which makes the development of collision avoidance systems more challenging. Several fuzzy ontology-based simulators have been proposed to provide a virtual platform for the analysis of maritime missions. However, due to the simulators\u2019 limitations, ontologybased knowledge cannot be utilized to evaluate maritime robot algorithms and to avoid collisions. The existing simulators must be equipped with smart semantic domain knowledge to provide an efficient framework for the decision-making system of AUVs. This article presents type-2 fuzzy ontology-based semantic knowledge (T2FOBSK) and a simulator for marine users that will reduce experimental time and the cost of marine robots and will evaluate algorithms intelligently. The system reformulates the user\u2019s query to extract the positions of AUVs and obstacles and convert them to a proper format for the simulator. The simulator uses semantic knowledge to calculate the degree of collision risk and to avoid obstacles. The available type-1 fuzzy ontology-based approach cannot extract intensively blurred data from the hazy marine environment to offer actual solutions. Therefore, we propose a type-2 fuzzy ontology to provide accurate information about collision risk and the marine environment during real-time marine operations. Moreover, the type-2 fuzzy ontology is designed using Prot\u00e9g\u00e9 OWL-2 tools. The DL query and SPARQL query are used to evaluate the ontology. The distance to closest point of approach (DCPA), time to closest point of approach (TCPA) and variation of compass degree (VCD) are used to calculate the degree of collision risk between AUVs and obstacles. The experimental and simulation results show that the proposed architecture is highly efficient and highly productive for marine missions and the real-time decision-making system of AUVs. 2014 Elsevier Inc. All rights reserved."}
{"_id": "27d157c2d75e5ebbfd48112756948d1bdf5c37f6", "title": "Tracking Time Evolving Data Streams for Short-Term Traffic Forecasting", "text": "Data streams have arisen as a relevant topic during the last few years as an efficient method for extracting knowledge from big data. In the robust layered ensemble model (RLEM) proposed in this paper for short-term traffic flow forecasting, incoming traffic flow data of all connected road links are organized in chunks corresponding to an optimal time lag. The RLEM model is composed of two layers. In the first layer, we cluster the chunks by using the Graded Possibilistic c-Means method. The second layer is made up by an ensemble of forecasters, each of them trained for short-term traffic flow forecasting on the chunks belonging to a specific cluster. In the operational phase, as a new chunk of traffic flow data presented as input to the RLEM, its memberships to all clusters are evaluated, and if it is not recognized as an outlier, the outputs of all forecasters are combined in an ensemble, obtaining in this a way a forecasting of traffic flow for a short-term time horizon. The proposed RLEM model is evaluated on a synthetic data set, on a traffic flow data simulator and on two real-world traffic flow data sets. The model gives an accurate forecasting of the traffic flow rates with outlier detection and shows a good adaptation to non-stationary traffic regimes. Given its characteristics of outlier detection, accuracy, and robustness, RLEM can be fruitfully integrated in traffic flow management systems."}
{"_id": "7119cc9d286af1da4bff2d24d3188a459047b632", "title": "Gujarati Script Recognition : A Review", "text": "This paper is step to locate the researchers, their track in the way for recognizing the characters from various regional scripts in India. This survey paper would provide a path for developing recognition tools for Indian scripts where there is still a scope of recognition accuracy. It provides the reasons for researchers to work for recognition of Indian scripts. In this paper, the various scripts along with their special properties, there feature extraction and recognition techniques are described. Simultaneously a detailed comparison is made on the fronts of techniques used and recognition of Gujarati script."}
{"_id": "c5869e967a0888ad901c94eefb56dd9d1b4c6175", "title": "Survey on security threats of smartphones in Internet of Things", "text": "Internet of Things (IoT) is an emerging concept which will interconnect billions of devices (such as smartphones, sensors and other networking devices), to communicate with each other. This interconnection of devices can be beneficial in many ways such as improving location based services and timely coordination. However, this also impose severe security concerns specially on smartphones which contains sensitive data. In this paper, we provide a comprehensive analysis of different security issues in smartphones which are integral part of IoT. We provide an easy and concise overview of different security threats. Our categorization helps in identifying specific security breach and applying relevant countermeasures."}
{"_id": "eee10a44039596bda9c56259dd8e977632907b91", "title": "Text Document Clustering based on Phrase Similarity using Affinity Propagation", "text": "Affinity propagation (AP) was recently introduced as an un-supervised learning algorithm for exemplar based clustering. In this paper novel text document clustering algorithm has been developed based on vector space model, phrases and affinity propagation clustering algorithm. Proposed algorithm can be called Phrase affinity clustering (PAC). PAC first finds the phrase by ukkonen suffix tree construction algorithm, second finds the vector space model using tf-idf weighting scheme of phrase. Third calculate the similarity matrix form VSD using cosine similarity . In Last affinity propagation algorithm generate the clusters . F-Measure ,Purity and Entropy of Proposed algorithm is better than GAHC ,ST-GAHC and ST-KNN on OHSUMED ,RCV1 and News group data sets."}
{"_id": "2fdba96ee1dbf23193e830c9ae44e646f46c296d", "title": "Development of a credible and integrated electronic voting machine based on contactless IC cards, biometrie fingerprint credentials and POS printer", "text": "In recent times there has been a decline in the confidence of common people over electronic voting machines (EVMs). More anticipated than what the reality actually is, today's automated vote casting methods have faced immense controversy related to being vulnerable to hacking and questions have been raised about their transparency and security. This paper proposes the design and development of a novel tamper resistant electronic voting system that aims to mitigate the recurring issues and flaws of existing voting machines of today. To elucidate the system in brief, multiple layered verification process would be carried out on a potential voter by the means of fingerprint recognition and a Near Field Communication (NFC) smart card entry in order to authenticate his or her identity. Subsequently, the person would cast the vote by pressing a button corresponding to a particular candidate which would be recorded in the system providing the vote caster a visual confirmation. The final vote would then be printed out spontaneously onto a ballot box using a POS (Point of Sales) printer for an added level of validation. This system not only prevents multiple vote casts, but also eliminates the discrepancies that commonly arise with a person claiming not to have voted, whereas his or her name is present in the list of vote casters. The project is an incremental extension of a previous research."}
{"_id": "e15ca6bcd9ecc190c3846f2862bbe8d1c52f0645", "title": "LLC resonant converter burst mode control with constant burst time and optimal switching pattern", "text": "In this paper, a novel burst mode control solution for LLC resonant converters is proposed for the highest efficiency and small output voltage ripple. The burst time is set to be constant, the off time is modulated by load conditions. During the burst time, three pulse switching pattern is implemented to keep output voltage low frequency ripple minimal. The switch first turned on is controlled to be last turned off to reduce the initial hard switching turn on loss when burst starts. The pulse width of first switch is optimized to settle LLC to the steady-state under the highest efficiency load condition in one pulse. Zero current detection (ZCD) of the resonant current is implemented to achieve first switch partial ZVS, further increasing light load efficiency."}
{"_id": "16f72d65abb6230cb11b19edcf4d9271d799d5b8", "title": "A self-powered ultra-fast DC solid state circuit breaker using a normally-on SiC JFET", "text": "This paper introduces a new self-powered solid state circuit breaker (SSCB) concept using a normally-on SiC JFET as the main static switch and a fast-starting isolated DC/DC converter as the protection driver. The new SSCB detects short circuit faults by sensing its drain-source voltage rise, and draws power from the fault condition to turn and hold off the SiC JFET. The new two-terminal SSCB can be directly placed in a circuit branch without requiring any external power supply or additional wiring. A unique low power isolated DC/DC converter is designed and optimized to provide a fast reaction to a short circuit event. The SSCB prototypes have experimentally demonstrated a fault current interruption capability up to 180 amperes at a DC bus voltage of 400 volts within 0.8 microseconds. DC circuit protection applications provide a unique market opportunity for wide bandgap power semiconductor devices outside the conventional focus on power electronic converter applications."}
{"_id": "c3424453e93be6757165ce04dbfda08ebfa98952", "title": "Noninvasive in vivo glucose sensing on human subjects using mid-infrared light.", "text": "Mid-infrared quantum cascade laser spectroscopy is used to noninvasively predict blood glucose concentrations of three healthy human subjects in vivo. We utilize a hollow-core fiber based optical setup for light delivery and collection along with a broadly tunable quantum cascade laser to obtain spectra from human subjects and use standard chemo-metric techniques (namely partial least squares regression) for prediction analysis. Throughout a glucose concentration range of 80-160 mg/dL, we achieve clinically accurate predictions 84% of the time, on average. This work opens a new path to a noninvasive in vivo glucose sensor that would benefit the lives of hundreds of millions of diabetics worldwide."}
{"_id": "64a877d135db3acbc23c295367927176f332595f", "title": "FCM : THE FUZZY c-MEANS CLUSTERING ALGORITHM", "text": "nThis paper transmits a FORTRAN-IV coding of the fuzzy c-means (FCM) clustering program. The FCM program is applicable to a wide variety of geostatistical data analysis problems. This program generates fuzzy partitions and prototypes for any set of numerical data. These partitions are useful for corroborating known substructures or suggesting substructure in unexplored data. The clustering criterion used to aggregate subsets is a generalized least-squares objective function. Features of this program include a choice of three norms (Euclidean, Diagonal, or Mahalonobis), an adjustable weighting factor that essentially controls sensitivity to noise, acceptance of variable numbers of clusters, and outputs that include several measures of cluster validity."}
{"_id": "674fe1bd71456e2d9f847f9ad210a91ee47f87bc", "title": "Relational versus non-relational database systems for data warehousing", "text": "Relational database systems have been the dominating technology to manage and analyze large data warehouses. Moreover, the ER model, the standard in database design has a close relationship with the relational model. Recently, there has been a surge of alternative technologies for large scale analytic processing, most of which are not based on the relational model. Out of these proposals, distributed file systems together with MapReduce have become strong competitors to relational database systems to analyze large data sets, exploiting parallel processing. Moreover, there is progress on using MapReduce to evaluate relational queries. With that motivation in mind, this panel will compare pros and cons of each technology for data warehousing and will identify research issues, considering practical aspects like ease of use, programming flexibility and cost; as well as technical aspects like data modeling, storage, hardware, scalability, query processing, fault tolerance and data mining."}
{"_id": "f89ee2c9c67858c00bd87df310994ff3a69de747", "title": "Variance Reduction Methods II 1 Importance Sampling", "text": "For this problem, however, the usual approach would be completely inadequate since approximating \u03b8 to any reasonable degree of accuracy would require n to be inordinately large. For example, on average we would have to set n \u2248 2.7014\u00d7 10 in order to obtain just one non-zero value of I. Clearly this is impractical and a much smaller value of n would have to be used. Using a much smaller value of n, however, would almost inevitably result in an estimate, \u03b8\u0302n = 0, and an approximate confidence interval [L,U ] = [0, 0]! So the naive approach does not work. We could try to use the variance reduction techniques we have seen in the course so far, but they would provide little, if any, help."}
{"_id": "2390d0ba96c89d60a15e1940c80a05f026508a39", "title": "The Effect of Internet Security Breach Announcements on Market Value: Capital Market Reactions for Breached Firms and Internet Security Developers", "text": "Assessing the value of information technology (IT) security is challenging because of the difficulty of measuring the cost of security breaches. An event-study analysis, using market valuations, was used to assess the impact of security breaches on the market value of breached firms. The information-transfer effect of security breaches (i.e., their effect on the market value of firms that develop security technology) was also studied. The results show that announcing an Internet security breach is negatively associated with the market value of the announcing firm. The breached firms in the sample lost, on average, 2.1 percent of their market value within two days of the announcement\u2014an average loss in market capitalization of $1.65 billion per breach. Firm type, firm size, and the year the breach occurred help explain the cross-sectional variations in abnormal returns produced by security breaches. The effects of security breaches are not restricted to the breached firms. The market value of security developers is positively associated with the disclosure of security breaches by other firms. The security developers in the sample realized an average abnormal return of 1.36 percent during the two-day period after the announcement\u2014an average gain of $1.06 billion in two days. The study suggests that the cost of poor security is very high for investors."}
{"_id": "69fffc3d23a5046b0a1cbc97c9e4d2aba99c1af7", "title": "Explaining and Predicting the Impact of Branding Alliances and Web Site Quality on Initial Consumer Trust of E-Commerce Web Sites", "text": "Trust is a crucial factor in e-commerce. However, consumers are less likely to trust unknown Web sites. This study explores how less-familiar e-commerce Web sites can use branding alliances and Web site quality to increase the likelihood of initial consumer trust. We use the associative network model of memory to explain brand knowledge and to show how the mere exposure effect can be leveraged to improve a Web site\u2019s brand image. We also extend information integration theory to explain how branding alliances are able to increase initial trust and transfer positive effects to Web sites. Testing of our model shows that the most important constructs for increasing initial trust in our experimental context are branding and Web site quality. Finally, we discuss future research ideas, limitations, implications, and ideas for practitioners."}
{"_id": "ab8bc000c544d83ee32e464c7259c05bba86ba62", "title": "A critical analysis of decision support systems research", "text": "This paper critically analyses the nature and state of decision support systems (DSS) research. To provide context for the analysis, a history of DSS is presented which focuses on the evolution of a number of sub-groupings of research and practice: personal decision support systems, group support systems, negotiation support systems, intelligent decision support systems, knowledge managementbased DSS, executive information systems/business intelligence, and data warehousing. To understand the state of DSS research an empirical investigation of published DSS research is presented. This investigation is based on the detailed analysis of 1,020 DSS articles published in 14 major journals from 1990 to 2003. The analysis found that DSS publication has been falling steadily since its peak in 1994 and the current publication rate is at early 1990s levels. Other findings include that personal DSS and group support systems dominate research activity and data warehousing is the least published type of DSS. The journal DSS is the major publishing outlet; US \u2018Other\u2019 journals dominate DSS publishing and there is very low exposure of DSS in European journals. Around two-thirds of DSS research is empirical, a much higher proportion than general IS research. DSS empirical research is 1 Acknowledgements: We would like to thank Gemma Dodson for research assistance, especially in obtaining articles and coding, and Peter O\u2019Donnell for discussions about DSS history. The research was partially supported by a Monash University Small Grant and a Curtin University Visiting Professorship. Preliminary results from the empirical study were presented at the 2004 IFIP International Conference on Decision Support Systems, Prato, Italy and the 2004 Australasian Conference on Information Systems, Hobart, Australia."}
{"_id": "15d776329b43804235843e22b22f18ced69409ec", "title": "A Critical Review of Construct Indicators and Measurement Model Misspecification in Marketing and Consumer Research", "text": "A review of the literature suggests that few studies use formative indicator measurement models, even though they should. Therefore, the purpose of this research is to (a) discuss the distinction between formative and reflective measurement models, (b) develop a set of conceptual criteria that can be used to determine whether a construct should be modeled as having formative or reflective indicators, (c) review the marketing literature to obtain an estimate of the extent of measurement model misspecification in the field, (d ) estimate the extent to which measurement model misspecification biases estimates of the relationships between constructs using a Monte Carlo simulation, and (e) provide recommendations for modeling formative indicator constructs."}
{"_id": "1f2b0942d24762c0c6eb1e7d37baabc6118b8ed0", "title": "Service quality delivery through Web sites : A critical review of extant know", "text": "Evidence exists that service quality delivery through Web sites is an essential strategy to success, possibly more important than low price and Web presence. To deliver superior service quality, managers of companies with Web presences must first understand how customers perceive and evaluate online customer service. Information on this topic is beginning to emerge from both academic and practitioner sources, but this information has not yet been examined as a whole. The goals of this article are to review and synthesize the literature about service quality delivery through Web sites, describe what is known about the topic, and develop an agenda for needed research."}
{"_id": "9cb81d1d2cb6fcfe415dd6968f2ef92fa82f9a48", "title": "Contrast enhancement using recursive mean-separate histogram equalization for scalable brightness preservation", "text": "Histogram equalization (HE) is widely used for contrast enhancement. However, it tends to change the brightness of an image and hence, not suitable for consumer electronic products, where preserving the original brightness is essential to avoid annoying artifacts. Bi-histogram equalization (BBHE) has been proposed and analyzed mathematically that it can preserve the original brightness to a certain extend. However, there are still cases that are not handled well by BBHE, as they require higher degree of preservation. This paper proposes a generalization of BBHE referred to as recursive mean-separate histogram equalization (RMSHE) to provide not only better but also scalable brightness preservation. BBHE separates the input image's histogram into two based on its mean before equalizing them independently. While the separation is done only once in BBHE, this paper proposes to perform the separation recursively; separate each new histogram further based on their respective mean. It is analyzed mathematically that the output image's mean brightness will converge to the input image's mean brightness as the number of recursive mean separation increases. Besides, the recursive nature of RMSHE also allows scalable brightness preservation, which is very useful in consumer electronics. Simulation results show that the cases which are not handled well by HE, BBHE and dualistic sub image histogram equalization (DSIHE), have been properly enhanced by RMSHE. Keyword: Bi-histogram equalization; Dualistic sub-image; Histogram equalization; Recursive mean-separate; Scalable brightness preservation"}
{"_id": "76a244f4da1d29170a9f91d381a5e12dc7ad2c0f", "title": "Sales Forecasting for Retail Chains", "text": "This paper presents a use case of data mining for sales forecasting in retail demand and sales prediction. In particular, the Extreme Gradient Boosting algorithm is used to design a prediction model to accurately estimate probable sales for retail outlets of a major European Pharmacy retailing company. The forecast of potential sales is based on a mixture of temporal and economical features including prior sales data, store promotions, retail competitors, school and state holidays, location and accessibility of the store as well as the time of year. The model building process was guided by common sense reasoning and by analytic knowledge discovered during data analysis and definitive conclusions were drawn. The performances of the XGBoost predictor were compared with those of more traditional regression algorithms like Linear Regression and Random Forest Regression. Findings not only reveal that the XGBoost algorithm outperforms the traditional modeling approaches with regard to prediction accuracy, but it also uncovers new knowledge that is hidden in data which help in building a more robust feature set and strengthen the sales prediction model."}
{"_id": "3c4dba924ca7b627fe481cc91c55bdd2601a32b7", "title": "\"Applying TAM to E-Services Adoption: The Moderating Role of Perceived Risk\"", "text": "Consumer adoption of e-services is an important goal for many service providers, however little is known about how different consumer segments perceive and evaluate them for adoption. The Technology Acceptance Model (TAM) explains information systems evaluation and adoption, however the Internetdelivered e-services context presents additional variance that requires supplemental measures to be added to TAM. This research extends TAM to include a perceived usage risk main effect and also tested whether perceived risk moderated several of TAM\u2019s relationships. Results indicate that higher levels of perceived risk deflated ease of use\u2019s effect and inflated subjective norm\u2019s effect on perceived usefulness and adoption intention."}
{"_id": "33526226231cce669317ece44e6af262b8395dd9", "title": "CRF-CNN: Modeling Structured Information in Human Pose Estimation", "text": "Deep convolutional neural networks (CNN) have achieved great success. On the other hand, modeling structural information has been proved critical in many vision problems. It is of great interest to integrate them effectively. In a classical neural network, there is no message passing between neurons in the same layer. In this paper, we propose a CRF-CNN framework which can simultaneously model structural information in both output and hidden feature layers in a probabilistic way, and it is applied to human pose estimation. A message passing scheme is proposed, so that in various layers each body joint receives messages from all the others in an efficient way. Such message passing can be implemented with convolution between features maps in the same layer, and it is also integrated with feedforward propagation in neural networks. Finally, a neural network implementation of endto-end learning CRF-CNN is provided. Its effectiveness is demonstrated through experiments on two benchmark datasets."}
{"_id": "ae2140ffe2f470e95aebcc4d672f76c771300b30", "title": "Diagnosis Code Prediction from Electronic Health Records as Multilabel Text Classification : A Survey", "text": "This article presents a survey on diagnosis code prediction from various information in Electronic Health Records (EHR): both unstructured free text and structured data. Particularly, our interests are in casting the problem as text classification with multiple sources and using neural network based models. We will first present previous work in this area and describe some simple baseline models for the relevant tasks."}
{"_id": "cc27ee23f43375e07de58aa0c430a4741a67c84d", "title": "Bandwidth Improvement of Microstrip Patch Antenna Using H- Shaped Patch", "text": "Despite the many advantages of microstrip patch antennas, they do have some considerable drawbacks. One of the main limitations with patch antennas is their inherently narrowband performance due to its resonant nature. With bandwidth as low as a few percent; broadband applications using conventional patch designs are limited. So for the antenna miniaturization and bandwidth improvement H-shaped microstrip patch antenna used. In this paper, authors cover two aspects of microstrip antenna designs. The first is the analysis of single element narrowband rectangular microstrip antenna which operates at the central frequency of 3.3 GHz. The second aspect is the analysis and design of slot cut H-shaped microstrip antenna. The simulation process has been done through high frequency structure simulator (HFSS). The properties of antenna such as bandwidth, S parameter, VSWR has been investigated and compared between a single element rectangular and H-shaped microstrip antenna."}
{"_id": "49dcc6a238f22831adece27b5f8b0643a00a6caa", "title": "A Low-Complexity Full-Duplex Radio Implementation With a Single Antenna", "text": "This paper presents a novel low-complexity full-duplex radio design, which only uses a single patch antenna without any duplexer or circulator for passive suppression of self-interference, and a computationally efficient technique for linear digital cancellation. The proposed full-duplex design is tested for IEEE 802.11g Wireless Standard on the WARP (v3) software-defined radio implementation platform. It is shown that this design provides a total suppression of 88\u00a0dB, which is sufficient for low-power or short-range full-duplex communication. The dual polarized slot coupled patch antenna used in our design provides an interport isolation as high as 60\u00a0dB in 2.4\u00a0GHz band. Additionally, the digital domain cancellation utilizes a frequency domain-based estimation and reconstruction approach, which not only offers up to $61\\%$ reduction in the computational complexity but also provides a $5-7$ \u00a0dB better digital cancellation performance in highly selective channel conditions, as compared to the time-domain-based techniques. The proposed full-duplex implementation can be easily applied in OFDM-based wireless systems, such as IEEE 802.11, which is the considered air interface in this paper."}
{"_id": "0b5f2898d7d6dc83b7caabdc0ecfca40699691b2", "title": "Multivariate Analysis of Ecological Data", "text": "2 Foreword This textbook provides study materials for the participants of the course named Multivariate Analysis of Ecological Data that we teach at our university for the third year. Material provided here should serve both for the introductory and the advanced versions of the course. We admit that some parts of the text would profit from further polishing, they are quite rough but we hope in further improvement of this text. We hope that this book provides an easy-to-read supplement for the more exact and detailed publications like the collection of the Dr. Ter Braak' papers and the Canoco for Windows 4.0 manual. In addition to the scope of these publications, this textbook adds information on the classification methods of the multivariate data analysis and introduces some of the modern regression methods most useful in the ecological research. Wherever we refer to some commercial software products, these are covered by trademarks or registered marks of their respective producers. This publication is far from being final and this is seen on its quality: some issues appear repeatedly through the book, but we hope this provides, at least, an opportunity to the reader to see the same topic expressed in different words."}
{"_id": "dca4eaacddb18ad44786c008b73296831502d27c", "title": "A route planning method based on improved artificial potential field algorithm", "text": "The route planning method based on artificial potential field is an important branch and the hot research focus of assignment planning. The traditional artificial potential field method is easy to fall into the local minimum, and the paper analyzed the causes, and then did some research on the improved artificial potential field method, and then proposed a method modifying the repulsion direction based on the improved artificial potential field to solve the three typical local minimum problem. The simulation results show that the proposed method is effective."}
{"_id": "14fe958ca38e14372e5e118ef9af095b9a041f0d", "title": "Building a Virtual HPC Cluster with Auto Scaling by the Docker", "text": "Solving the software dependency issue under the HPC environment has always been a difficult task for both computing system administrators and application scientists. This work would like to tackle the issue by introducing the modern container technology, the Docker, to be specific. By integrating the auto-scaling feature of service discovery with the light-weight virtualization tool, the Docker, the construction of a virtual cluster on top of physical cluster hardware is attempted. Thus, through the isolation of computing environment, a remedy of software dependency of HPC environment is possible. Keywords\u2014HPC; Docker; auto-scaling; service discovery; MPI; Consul;"}
{"_id": "d4b0e3b02e411c830fc3775b3ce519992d4aebd8", "title": "GeoClustering: A web service for geospatial clustering", "text": "Geospatial clustering is a very important research topic in the field of \u201eGeospatial Knowledge Discovery.\u201f Its main purpose is to group similar objects, from large geospatial datasets, into clusters, based on the objects\u201f spatial and non-spatial attributes. This capability is very useful for better understanding the patterns and distributions of geographical phenomena. With the emerging availability of large amounts of geospatial data on the Internet, the demand for powerful methods of extracting meaningful information from large amounts of web-based geospatial data is thus also rapidly increasing. In this chapter, we present a geospatial clustering web service called GeoClustering. With this service, users are able to cluster online data sources easily and to then conveniently visualize the clustering results through a friendly map-centric user interface. Since it is web-based, GeoClustering offers a platform that can be accessed by anyone from anywhere at any time. To achieve better interoperability, GeoClustering offers an API and follows the OGC standard data format for spatial data exchange."}
{"_id": "f3377a08b457abc05296e4499af4588a816904c9", "title": "Compact Duplexing for a 680-GHz Radar Using a Waveguide Orthomode Transducer", "text": "A compact 680-GHz waveguide orthomode transducer (OMT) and circular horn combination has been designed, tested, and characterized in a radar transceiver's duplexer. The duplexing capability is implemented by a hybrid waveguide quasi optical solution, combining a linear polarization OMT and an external grating polarizer. Isolation between the OMT's orthogonal ports' flanges was measured with a vector network analyzer to exceed 33 dB over a >10% bandwidth between 630 and 710 GHz. Calibrated Y-factor measurements using a mixer attached to the OMT ports reveal losses through the transmit and receive paths that sum to an average of 4.7 dB of two-way loss over 660-690 GHz. This is consistent with radar sensitivity measurements comparing the new OMT/horn with a quasi-optical wire grid beam splitter. Moreover, the radar performance assessment validates the OMT as a suitable compact substitute of the wire grid for the JPL's short-range 680-GHz imaging radar."}
{"_id": "f39fc7c420277616eea29754d0d367297c6f02c1", "title": "Feature Extraction Based on Co-occurrence of Adjacent Local Binary Patterns", "text": "In this paper, we propose a new image feature based on spatial co-occurrence among micropatterns, where each micropattern is represented by a Local Binary Pattern (LBP). In conventional LBP-based features such as LBP histograms, all the LBPs of micropatterns in the image are packed into a single histogram. Doing so discards important information concerning spatial relations among the LBPs, even though they may contain information about the image\u2019s global structure. To consider such spatial relations, we measure their co-occurrence among multiple LBPs. The proposed feature is robust against variations in illumination, a feature inherited from the original LBP, and simultaneously retains more detail of image. The significant advantage of the proposed method versus conventional LBP-based features is demonstrated through experimental results of face and texture recognition using public databases."}
{"_id": "682e22b67261db805842aa56eabf2a5ed78257a9", "title": "Online Footsteps to Purchase: Exploring Consumer Behaviors on Online Shopping Sites", "text": "As an important part of the Internet economy, online markets have gained much interest in research community as well as industry. Researchers have studied various aspects of online markets including motivations of consumer behaviors on online markets. However, due to the lack of log data of consumers' online behaviors including their purchase, it has not been thoroughly investigated or validated on what drives consumers to purchase products on online markets. Our research moves forward from prior studies by analyzing consumers' actual online behaviors that lead to actual purchases, and using datasets from multiple online shopping sites that can provide comparisons across different types of online shopping sites. We analyzed consumers' buying process and constructed consumers' behavior trajectory to gain deeper understanding of consumer behaviors on online markets. We find that a substantial portion (24%) of consumers in a general-purpose marketplace (like eBay) discover items from external sources (e.g., price comparison sites), while most (>95%) of consumers in a special-purpose shopping site directly access items from the site itself. We also reveal that item browsing patterns and cart usage patterns are the important predictors of the actual purchases. Using behavioral features identified by our analysis, we developed a prediction model to infer whether a consumer purchases item(s). Our prediction model of purchases achieved over 80% accuracy across four different online shopping sites."}
{"_id": "a245153fae777882a2fca97ca3bc6ca8582c0cf0", "title": "Using Associative Classifiers for Predictive Analysis in Health Care Data Mining", "text": "Association rule mining is one of the most important and well researched techniques of data mining for descriptive task, initially used for market basket analysis. It finds all the rules existing in the transactional database that satisfy some minimum support and minimum confidence constraints. Classification using Association rule mining is another major Predictive analysis technique that aims to discover a small set of rule in the database that forms an accurate classifier. In this paper, we introduce the combined approach that integrates association rule mining and classification rule mining called Associative Classification (AC). This is new classification approach. The integration is done by focusing on mining a special subset of association rules called classification association rule (CAR). And then classification is being performed using these CAR. Using association rule mining for constructing classification systems is a promising approach. Given the readability of the associative classifiers, they are especially fit to applications were the model may assist domain experts in their decisions. Medical field is a good example was such applications may appear. Consider an example were"}
{"_id": "58609a1eb552d52fdb9d1bd69f4ab6237ab7e615", "title": "Modeling and Optimization of Building Mix and Energy Supply Technology for Urban Districts", "text": "Reducing the energy consumption and associated greenhouse gas emissions of urban areas is paramount in research and practice, encompassing strategies to both reduce energy consumption and carbon intensity in both energy supply and demand. Most methods focus on one of these two approaches but few integrate decisions for supply and demand simultaneously. This paper presents a novel model that endogenously simulates energy supply and demand at a district scale on an hourly time scale. Demand is specified for a variety of building uses, and losses and municipal loads are calculated from the number of buildings in the district. Energy supply is modeled using technology-specific classes, allowing easy addition of specific equipment or types of energy generation. Standard interfaces allow expansion of the model to include new types of energy supply and demand. The model can be used for analysis of a single design alternative or optimization over a large design space, allowing exploration of various densities, mixes of uses, and energy supply technologies. An example optimization is provided for a community near San Francisco, California. This example uses 21 building types, 32 combined heat and power engines, and 16 chillers. The results demonstrate the ability to compare performance tradeoffs and optimize for three objectives: life cycle cost, annual carbon dioxide emissions, and overall system"}
{"_id": "56ba40849acfe4f6a8a2338f5bb31be2ce6df712", "title": "Deep Impression: Audiovisual Deep Residual Networks for Multimodal Apparent Personality Trait Recognition", "text": "Here, we develop an audiovisual deep residual network for multimodal apparent personality trait recognition. The network is trained end-to-end for predicting the Big Five personality traits of people from their videos. That is, the network does not require any feature engineering or visual analysis such as face detection, face landmark alignment or facial expression recognition. Recently, the network won the third place in the ChaLearn First Impressions Challenge with a test accuracy of 0.9109."}
{"_id": "6fa2da2383cbf3c620cdb3a89ce92d452a7475cf", "title": "Towards the Success of ERP Systems : Case Study in Two Moroccan Companies", "text": "For over a decade, ERP systems in Morocco have shown a significant increase. Companies make large investments in these scale systems in order to anticipate positive impacts on their organization. Measuring ERP systems success becomes an important criterion for the stakeholders in justifying the associated investment. In this paper, we will examine the success of these systems by using a qualitative research method based on a case study approach. We will provide an insight into how two Moroccan organizations perceive the success of their ERP systems. Findings indicated that Information, system quality and net benefits are the three main dimensions of success."}
{"_id": "89da44703829e54c46f5f8d0cb5f73b76520f56e", "title": "Forward-viewing CMUT arrays for medical imaging", "text": "This paper reports the design and testing of forward-viewing annular arrays fabricated using capacitive micromachined ultrasonic transducer (CMUT) technology. Recent research studies have shown that CMUTs have broad frequency bandwidth and high-transduction efficiency. One- and two-dimensional CMUT arrays of various sizes already have been fabricated, and their viability for medical imaging applications has been demonstrated. We fabricated 64-element, forward-viewing annular arrays using the standard CMUT fabrication process and carried out experiments to measure the operating frequency, bandwidth, and transmit/receive efficiency of the array elements. The annular array elements, designed for imaging applications in the 20 MHz range, had a resonance frequency of 13.5 MHz in air. The immersion pulse-echo data collected from a plane reflector showed that the devices operate in the 5-26 MHz range with a fractional bandwidth of 135%. The output pressure at the surface of the transducer was measured to be 24 kPa/V. These values translate into a dynamic range of 131.5 dB for 1-V excitation in 1-Hz bandwidth with a commercial low noise receiving circuitry. The designed, forward-viewing annular CMUT array is suitable for mounting on the front surface of a cylindrical catheter probe and can provide Doppler information for measurement of blood flow and guiding information for navigation through blood vessels in intravascular ultrasound imaging."}
{"_id": "b3f6f001a3792a55bf8e7a47cc7e1f2f21b45655", "title": "Exploration of Virtual and Augmented Reality for Visual Analytics and 3D Volume Rendering of Functional Magnetic Resonance Imaging (fMRI) Data", "text": "Statistical analysis of functional magnetic resonance imaging (fMRI), such as independent components analysis, is providing new scientific and clinical insights into the data with capabilities such as characterising traits of schizophrenia. However, with existing approaches to fMRI analysis, there are a number of challenges that prevent it from being fully utilised, including understanding exactly what a 'significant activity' pattern is, which structures are consistent and different between individuals and across the population, and how to deal with imaging artifacts such as noise. Interactive visual analytics has been presented as a step towards solving these challenges by presenting the data to users in a way that illuminates meaning. This includes using circular layouts that represent network connectivity and volume renderings with 'in situ' network diagrams. These visualisations currently rely on traditional 2D 'flat' displays with mouse-and-keyboard input. Due to the constrained screen space and an implied concept of depth, they are limited in presenting a meaningful, uncluttered abstraction of the data without compromising on preserving anatomic context. In this paper, we present our ongoing research on fMRI visualisation and discuss the potential for virtual reality (VR) and augmented reality (AR), coupled with gesture-based inputs to create an immersive environment for visualising fMRI data. We suggest that VR/AR can potentially overcome the identified challenges by allowing for a reduction in visual clutter and by allowing users to navigate the data abstractions in a 'natural' way that lets them keep their focus on the visualisations. We present exploratory research we have performed in creating immersive VR environments for fMRI data."}
{"_id": "c28191ebadd076e1f6954894c912681241c0e137", "title": "A scrum-based approach to CMMI maturity level 2 in web development environments", "text": "Scrum has become one of the most popular agile methodologies, either alone or combined with other agile practices. Besides, CMMI (Capability Maturity Model Integration) is accepted as a suitable model to measure the maturity of the organizations when developing or acquiring software. Although these two approaches are often considered antagonist, the use of an agile approach to reach certain CMMI maturity levels may result beneficial to organizations that develop Web systems, since they would take the advantages of both approaches. In Web community, this union may be very interesting, because agile approaches fits with the special needs of Web development, and they could be a useful tool for companies getting a certain grade of maturity. This work analyzes the goals of CMMI maturity level 2 and the feasibility of achieving them using the practices proposed by Scrum, trying to assess whether the use of this methodology is suitable for meeting the CMMI generic and specific goals or not. Finally, and based on this analysis, this paper raises a possible extension of Scrum, based on agile techniques, to accommodate the CMMI maturity level 2."}
{"_id": "590159c5f3bdfaf785a31b9f0c814d2c8f6f05d0", "title": "Non-Intrusive Load Monitoring Using Prior Models of General Appliance Types", "text": "Non-intrusive appliance load monitoring is the process of disaggregating a household\u2019s total electricity consumption into its contributing appliances. In this paper we propose an approach by which individual appliances can be iteratively separated from an aggregate load. Unlike existing approaches, our approach does not require training data to be collected by sub-metering individual appliances, nor does it assume complete knowledge of the appliances present in the household. Instead, we propose an approach in which prior models of general appliance types are tuned to specific appliance instances using only signatures extracted from the aggregate load. The tuned appliance models are then used to estimate each appliance\u2019s load, which is subsequently subtracted from the aggregate load. This process is applied iteratively until all appliances for which prior behaviour models are known have been disaggregated. This paper summarises the contribution described in full in [9]."}
{"_id": "69314ab4fa664bd5965c093574f2b4fc94c0d496", "title": "The hitchhiker's guide to successful residential sensing deployments", "text": "Homes are rich with information about people's energy consumption, medical health, and personal or family functions. In this paper, we present our experiences deploying large-scale residential sensing systems in over 20 homes. Deploying small-scale systems in homes can be deceptively easy, but in our deployments we encountered a phase transition in which deployment effort increases dramatically as residential deployments scale up in terms of 1) the number of nodes, 2) the length of time, and 3) the number of houses. In this paper, we distill our experiences down to a set of guidelines and design principles to help future deployments avoid the potential pitfalls of large-scale sensing in homes."}
{"_id": "b2145536093caf72e3ce0bc89d507fd85484647c", "title": "Duty-cycling buildings aggressively: The next frontier in HVAC control", "text": "Buildings are known to be the largest consumers of electricity in the United States, and often times the dominant energy consumer is the HVAC system. Despite this fact, in most buildings the HVAC system is run using primitive static control algorithms based on fixed work schedules causing wasted energy during periods of low occupancy. In this paper we present a novel control architecture that uses occupancy sensing to guide the operation of a building HVAC system. We show how we can enable aggressive duty-cycling of building HVAC systems \u2014 that is, turn them ON or OFF \u2014 to save energy while meeting building performance requirements using inexpensive sensing and control methods. We have deployed our occupancy sensor network across an entire floor of a university building and our data shows several periods of low occupancy with significant opportunities to save energy over normal HVAC schedules. Furthermore, by interfacing with the building Energy Management System (EMS) directly and using real-time occupancy data collected by our occupancy nodes, we measure electrical energy savings of 9.54% to 15.73% and thermal energy savings of 7.59% to 12.85% for the HVAC system by controlling just one floor of our four floor building."}
{"_id": "b8908ed7d294177ba43833421ea06958f67a1004", "title": "AMPds: A public dataset for load disaggregation and eco-feedback research", "text": "A home-based intelligent energy conservation system needs to know what appliances (or loads) are being used in the home and when they are being used in order to provide intelligent feedback or to make intelligent decisions. This analysis task is known as load disaggregation or non-intrusive load monitoring (NILM). The datasets used for NILM research generally contain real power readings, with the data often being too coarse for more sophisticated analysis algorithms, and often covering too short a time period. We present the Almanac of Minutely Power dataset (AMPds) for load disaggregation research; it contains one year of data that includes 11 measurements at one minute intervals for 21 sub-meters. AMPds also includes natural gas and water consumption data. Finally, we use AMPds to present findings from our own load disaggregation algorithm to show that current, rather than real power, is a more effective measure for NILM."}
{"_id": "c938c0a255ba0357210d7ec7d11df4452b58c938", "title": "Time Expression Recognition Using a Constituent-based Tagging Scheme", "text": "Wefind from four datasets that time expressions are formed by loose structure and the words used to express time information can differentiate time expressions from common text. The findings drive us to design a learning method named TOMN to model time expressions. TOMN defines a constituent-based tagging scheme named TOMN scheme with four tags, namely T, O, M, and N, indicating the constituents of time expression, namely Time token,Modifier,Numeral, and the words Outside time expression. In modeling, TOMN assigns a word with a TOMN tag under conditional random fields with minimal features. Essentially, our constituent-based TOMN scheme overcomes the problem of inconsistent tag assignment that is caused by the conventional position-based tagging schemes (e.g., BIO scheme and BILOU scheme). Experiments show that TOMN is equally or more effective than state-of-the-art methods on various datasets, and much more robust on cross-datasets. Moreover, our analysis can explain many empirical observations in other works about time expression recognition and named entity recognition. ACM Reference Format: Xiaoshi Zhong and Erik Cambria. 2018. Time Expression Recognition Using a Constituent-based Tagging Scheme. In Proceedings of The Web Conference 2018 (WWW\u201918). ACM, New York, NY, USA, 10 pages. https://doi.org/10. 1145/3178876.3185997"}
{"_id": "9a7784eea6bfa62bf2834ee0b87a3cdda46006f2", "title": "Digital Comics Image Indexing Based on Deep Learning", "text": "The digital comic book market is growing every year now, mixing digitized and digital-born comics. Digitized comics suffer from a limited automatic content understanding which restricts online content search and reading applications. This study shows how to combine state-of-the-art image analysis methods to encode and index images into an XML-like text file. Content description file can then be used to automatically split comic book images into sub-images corresponding to panels easily indexable with relevant information about their respective content. This allows advanced search in keywords said by specific comic characters, action and scene retrieval using natural language processing. We get down to panel, balloon, text, comic character and face detection using traditional approaches and breakthrough deep learning models, and also text recognition using LSTM model. Evaluations on a dataset composed of online library content are presented, and a new public dataset is also proposed."}
{"_id": "9f246622099ce9fb2199ff4606edd5da7b6b43ba", "title": "Piecewise 3D Euler spirals", "text": "3D Euler spirals are visually pleasing, due to their property of having their curvature and their torsion change linearly with arc-length. This paper presents a novel algorithm for fitting piecewise 3D Euler spirals to 3D curves with G2 continuity and torsion continuity. The algorithm can also handle sharp corners. Our piecewise representation is invariant to similarity transformations and it is close to the input curves up to an error tolerance."}
{"_id": "a5366f4d0e17dce1cdb59ddcd90e806ef8741fbc", "title": "Legged Robots That Balance", "text": ""}
{"_id": "31c0968fb5f587918f1c49bf7fa51453b3e89cf7", "title": "Deep Transfer Learning for Person Re-Identification", "text": "Person re-identification (Re-ID) poses an inevitable challenge to deep learning: how to learn a robust deep model with millions of parameters on a small training set of few or no labels. In this paper, two deep transfer learning methods are proposed to address the training data sparsity problem, respectively from the supervised and unsupervised settings. First, a two-stepped fine-tuning strategy with proxy classifier learning is developed to transfer knowledge from auxiliary datasets. Second, given an unlabelled Re-Iddataset, an unsupervised deep transfer learning model is proposed based on a co-training strategy. Extensive experiments show that the proposed models achieve a good performance of deep Re-ID models."}
{"_id": "27cf57c1a2fc2a673e7435c9bcd4720ff5baa125", "title": "3 D Indoor Object Recognition by Holistic Scene Understanding", "text": "In this project, we jointly solve the problem of indoor object recognition and scene classification. The joint problem is modeled by a Conditional Random Field (CRF). Specifically, we implement unary potential on scene appearance, unary potential on object appearance, and binary potential on geometry between objects. The two appearance potentials are obtained by training two neural networks respectively and extracting features from them. Binary potential on object-object geometry relationship is inferred by projecting one object onto the other one\u2019s surface and perceiving the overlap. Other potentials include object cuboid geometry, binary potential on scene-object concurrence, and binary potential on object-object concurrence. Experiments on the challenging NYU-RGBD v2 dataset show that the approach of jointly solving object detection and scene classification problems by integrating several feature representations achieves better performance than vanilla classification. Meanwhile, improving the potentials on scene appearance and object-object geometry relationship achieve fairly satisfactory improvements compared with the performance in [12]. We achieve object recognition precision of 0.3859 and scene classification accuracy of 0.6208 among 21 classes of objects and 13 classes of scenes. Both improve obviously comparing to 0.3829 and 0.6086 in baseline [12]. This result is also comparable to the state-of-art method implemented on the identical dataset [18]."}
{"_id": "45b321f73129f712fb70a4b8a87dfe5db32967ea", "title": "CAVE-SOM: Immersive visual data mining using 3D Self-Organizing Maps", "text": "Data mining techniques are becoming indispensable as the amount and complexity of available data is rapidly growing. Visual data mining techniques attempt to include a human observer in the loop and leverage human perception for knowledge extraction. This is commonly allowed by performing a dimensionality reduction into a visually easy-to-perceive 2D space, which might result in significant loss of important spatial and topological information. To address this issue, this paper presents the design and implementation of a unique 3D visual data mining framework - CAVE-SOM. The CAVE-SOM system couples the Self-Organizing Map (SOM) algorithm with the immersive Cave Automated Virtual Environment (CAVE). The main advantages of the CAVE-SOM system are: i) utilizing a 3D SOM to perform dimensionality reduction of large multi-dimensional datasets, ii) immersive visualization of the trained 3D SOM, iii) ability to explore and interact with the multi-dimensional data in an intuitive and natural way. The CAVE-SOM system uses multiple visualization modes to guide the visual data mining process, for instance the data histograms, U-matrix, connections, separations, uniqueness and the input space view. The implemented CAVE-SOM framework was validated on several benchmark problems and then successfully applied to analysis of wind-power generation data. The knowledge extracted using the CAVE-SOM system can be used for further informed decision making and machine learning."}
{"_id": "a557dd1e40058cf97fcffb4dfd2706a8935678a7", "title": "Experimental Results of High-Efficiency Resonant Coupling Wireless Power Transfer Using a Variable Coupling Method", "text": "We experimentally demonstrate that the efficiency of wireless power transfer utilizing resonant coupling is improved by applying the \u201cmatching condition.\u201d The `matching condition' is extracted from an equivalent circuit model of a magnetically coupled wireless power transfer system. This condition is achieved by varying the coupling factor between the source (load) and the internal resonator. Applying this technique results in efficiency improvements of 46.2% and 29.3% at distances of 60 cm and 1 m, respectively. The maximum efficiency is 92.5% at 15 cm. A circuit model based on the extracted parameters of the resonators produces results similar to the experimental data."}
{"_id": "1e24be7414757ef13aac0b641844f14c2fda824f", "title": "Improved Named Entity Translation and Bilingual Named Entity Extraction", "text": "Translation of named entities (NE), including proper names, temporal and numerical expressions, is very important in multilingual natural language processing, like crosslingual information retrieval and statistical machine translation. In this paper we present an integrated approach to extract a named entity translation dictionary from a bilingual corpus while at the same time improving the named entity annotation quality.Starting from a bilingual corpus where the named entities are extracted independently for each language, a statistical alignment model is used to align the named entities. An iterative process is applied to extract named entity pairs with higher alignment probability. This leads to a smaller but cleaner named entity translation dictionary and also to a significant improvement of the monolingual named entity annotation quality for both languages. Experimental result shows that the dictionary size is reduced by 51.8% and the annotation quality is improved from70.03 to 78.15 for Chinese and 73.38 to 81.46 in terms of F-score."}
{"_id": "80079ed707328a7c05e66f8292f4d8defd4d7059", "title": "Zero-Field Readout Electronics for Planar Fluxgate Sensors Without Compensation Coil", "text": "A simple and sensitive readout electronics for planar fluxgate sensors is presented. The system exploits the sense coils to directly generate the compensation static field avoiding the additional coil/s required in standard closed loop configuration, thus providing clear advantages in terms of size and cost. Moreover, feedback configurations are known to provide better linearity and stability of the system. The sensitivity of the developed demonstration system can be easily set from 13.3 to 104.9 mV/\u03bcT with nonlinearity ranging from 0.17% to 0.38% of the measuring interval, whereas the corresponding measuring intervals vary from \u00b1301 to \u00b138 \u03bcT. The measuring uncertainty, the noise field spectral density, and the system bandwidth have been estimated in 12.2 nT, \u2248 10 nT/\u221a{Hz}, and \u22481.5 Hz, respectively. The proposed measuring instrument is extremely easy to use and versatile. Moreover, due to the use of commercially available ferromagnetic material and the simple and via-less design, the proposed fluxgate sensor results in a very low cost and reliable device."}
{"_id": "d650c8dc578021609e36140bc86144ff895a591b", "title": "Efficient Keyword-Aware Representative Travel Route Recommendation", "text": "With the popularity of social media (e.g., Facebook and Flicker), users can easily share their check-in records and photos during their trips. In view of the huge number of user historical mobility records in social media, we aim to discover travel experiences to facilitate trip planning. When planning a trip, users always have specific preferences regarding their trips. Instead of restricting users to limited query options such as locations, activities, or time periods, we consider arbitrary text descriptions as keywords about personalized requirements. Moreover, a diverse and representative set of recommended travel routes is needed. Prior works have elaborated on mining and ranking existing routes from check-in data. To meet the need for automatic trip organization, we claim that more features of Places of Interest (POIs) should be extracted. Therefore, in this paper, we propose an efficient Keyword-aware Representative Travel Route framework that uses knowledge extraction from users\u2019 historical mobility records and social interactions. Explicitly, we have designed a keyword extraction module to classify the POI-related tags, for effective matching with query keywords. We have further designed a route reconstruction algorithm to construct route candidates that fulfill the requirements. To provide befitting query results, we explore Representative Skyline concepts, that is, the Skyline routes which best describe the trade-offs among different POI features. To evaluate the effectiveness and efficiency of the proposed algorithms, we have conducted extensive experiments on real location-based social network datasets, and the experiment results show that our methods do indeed demonstrate good performance compared to state-of-the-art works."}
{"_id": "3d74f5201b30772620015b8e13f4da68ea559dfe", "title": "Sudoku as a SAT Problem", "text": "Sudoku is a very simple and well-known puzzle that has achieved international popularity in the recent past. This paper addresses the problem of encoding Sudoku puzzles into conjunctive normal form (CNF), and subsequently solving them using polynomial-time propositional satisfiability (SAT) inference techniques. We introduce two straightforward SAT encodings for Sudoku: the minimal encoding and the extended encoding. The minimal encoding suffices to characterize Sudoku puzzles, whereas the extended encoding adds redundant clauses to the minimal encoding. Experimental results demonstrate that, for thousands of very hard puzzles, inference techniques struggle to solve these puzzles when using the minimal encoding. However, using the extended encoding, unit propagation is able to solve about half of our set of puzzles. Nonetheless, for some puzzles more sophisticated inference techniques are required."}
{"_id": "3b07d6a163c3b6cc75c3805dfbff60f1458e121b", "title": "Variational Inference using Implicit Distributions", "text": "Generative adversarial networks (GANs) have given us a great tool to fit implicit generative models to data. Implicit distributions are ones we can sample from easily, and take derivatives of samples with respect to model parameters. These models are highly expressive and we argue they can prove just as useful for variational inference (VI) as they are for generative modelling. Several papers have proposed GAN-like algorithms for inference, however, connections to the theory of VI are not always well understood. This paper provides a unifying review of existing algorithms establishing connections between variational autoencoders, adversarially learned inference, operator VI, GAN-based image reconstruction, and more. Secondly, the paper provides a framework for building new algorithms: depending on the way the variational bound is expressed we introduce prior-contrastive and jointcontrastive methods, and show practical inference algorithms based on either density ratio estimation or denoising."}
{"_id": "e60676de42490b282e73a8de9c96b05ba64d3e72", "title": "Hand3D: Hand Pose Estimation using 3D Neural Network", "text": "We propose a novel 3D neural network architecture for 3D hand pose estimation from a single depth image. Different from previous works that mostly run on 2D depth image domain and require intermediate or post process to bring in the supervision from 3D space, we convert the depth map to a 3D volumetric representation, and feed it into a 3D convolutional neural network(CNN) to directly produce the pose in 3D requiring no further process. Our system does not require the ground truth reference point for initialization, and our network architecture naturally integrates both local feature and global context in 3D space. To increase the coverage of the hand pose space of the training data, we render synthetic depth image by transferring hand pose from existing real image datasets. We evaluation our algorithm on two public benchmarks and achieve the state-ofthe-art performance. The synthetic hand pose dataset will be available."}
{"_id": "26458da3a7afa04ed37f2b15edfa22e2f52b116d", "title": "\"If It Matters, I Can Explain It\": Social Desirability of Knowledge Increases the Illusion of Explanatory Depth", "text": "This paper explores whether social desirability affects the illusion of explanatory depth (IEOD) by comparing the magnitude of this illusion in topics with different levels of social desirability within several domains. This question was chosen because prior literature shows that social expectations about how much a person should know about a certain topic affect the magnitude of the IOED. Previous research shows also that social desirability has an effect on a similar illusion related to argumentation, and that the IOED is affected by the way a person thinks knowledge is distributed in his or her social group. In order to do so, 184 participants were assigned randomly to three knowledge domains (history, economics, and devices) and in each domain they rated their understanding of a high-desirability and a low-desirability topic following a standard IOED procedure. Results show that social desirability has an effect on the IOED magnitude and that overestimation of understanding varies among domains. Particularly, participants tend to overestimate their understanding of high desirability topics only. This effect was stronger in the historical domain."}
{"_id": "57ef97df1c7de77067806e231eec8be7a6418e75", "title": "Behavioral oscillation in priming: competing perceptual predictions conveyed in alternating theta-band rhythms.", "text": "The brain constantly creates perceptual predictions about forthcoming stimuli to guide perception efficiently. Abundant studies have demonstrated that perceptual predictions modulate sensory activities depending on whether the actual inputs are consistent with one particular prediction. In real-life contexts, however, multiple and even conflicting predictions might concurrently exist to be tested, requiring a multiprediction coordination process. It remains largely unknown how multiple hypotheses are conveyed and harmonized to guide moment-by-moment perception. Based on recent findings revealing that multiple locations are sampled alternatively in various phase of attentional rhythms, we hypothesize that this oscillation-based temporal organization mechanism may also underlie the multiprediction coordination process. To address the issue, we used well established priming paradigms in combination with a time-resolved behavioral approach to investigate the fine temporal dynamics of the multiprediction harmonization course in human subjects. We first replicate classical priming effects in slowly developing trends of priming time courses. Second, after removing the typical priming patterns, we reveal a new theta-band (\u223c4 Hz) oscillatory component in the priming behavioral data regardless of whether the prime was masked. Third, we show that these theta-band priming oscillations triggered by congruent and incongruent primes are in an out-of-phase relationship. These findings suggest that perceptual predictions return to low-sensory areas not continuously but recurrently in a theta-band rhythm (every 200-300 ms) and that multiple predictions are dynamically coordinated in time by being conveyed in different phases of the theta-band oscillations to achieve dissociated but temporally organized neural representations."}
{"_id": "e5ebe01c9348f60eb6dbb277d71de4262dadf056", "title": "The Impact of Experimental Setup in Prepaid Churn Prediction for Mobile Telecommunications: What to Predict, for Whom and Does the Customer Experience Matter?", "text": "Prepaid customers in mobile telecommunications are not bound by a contract and can therefore change operators (\u2018churn\u2019) at their convenience and without notification. This makes the task of predicting prepaid churn both challenging and financially rewarding. This paper presents an explorative, real world study of prepaid churn modeling by varying the experimental setup on three dimensions: data, outcome definition and population sample. Firstly, we add Customer Experience Management (CEM) data to data traditionally used for prepaid churn prediction. Secondly, we vary the outcome definition of prepaid churn. Thirdly, we alter the sample of the customers included, based on their inactivity period at the time of recording. While adding CEM parameters did not influence the predictability of churn, one variation on the sample and especially a particular change in the outcome definition had a substantial"}
{"_id": "4fd24105e1956752e19bca5ec1b75f511c0d34e1", "title": "A Comparative Study of Staff Removal Algorithms", "text": "This paper presents a quantitative comparison of different algorithms for the removal of stafflines from music images. It contains a survey of previously proposed algorithms and suggests a new skeletonization-based approach. We define three different error metrics, compare the algorithms with respect to these metrics, and measure their robustness with respect to certain image defects. Our test images are computer-generated scores on which we apply various image deformations typically found in real-world data. In addition to modern western music notation, our test set also includes historic music notation such as mensural notation and lute tablature. Our general approach and evaluation methodology is not specific to staff removal but applicable to other segmentation problems as well."}
{"_id": "727a8deb17701dd07f4e74af37b8d2e8cb8cb35b", "title": "Real-Time Object Detection and Recognition System Using OpenCV via SURF Algorithm in Emgu CV for Robotic Handling in Libraries", "text": ""}
{"_id": "130822f195826263547147029e4fc9a90f877355", "title": "PUSHOVER ANALYSIS OF A MULTISTORIED BUILDING", "text": "To model the advanced behaviour of reinforced concrete analytically in its non-linear zone is difficult. This has led engineers in the past to believe heavily on empirical formulas that were derived from numerous experiments for the design of reinforced concrete structures. For structural design and assessment of reinforced concrete members, the nonlinear analysis has become a vital tool. The method can be used to study the behaviour of reinforced concrete structures as well as force redistribution. This analysis of the nonlinear response of RC structures to be distributed out in an exceedingly routine fashion. It helps within the investigation of the behaviour of the structure below completely different loading conditions and also the cracks pattern. In the present study, the non-linear response of RCC frame using the loading has been carried out with the intention to investigate the relative importance of many factors in the non-linear analysis of RCC frames. The structure was modelled and analyzed in STAAD Pro V8i, SAP2000 and designed manually. The description of the reinforcement was done in AutoCAD 2010. Keywordsmulti-storied building, STAAD PRO, Floor plan, SAP 2000, Push over analysis."}
{"_id": "e99f72bc1d61bc7c8acd6af66880d9a815846653", "title": "Automated Irrigation System-IoT Based Approach", "text": "Agriculture is a major source of earning of Indians and agriculture has made a big impact on India's economy. The development of crops for a better yield and quality deliver is exceptionally required. So suitable conditions and suitable moisture in beds of crop can play a major role for production.. Mostly irrigation is done by tradition methods of stream flows from one end to other. Such supply may leave varied moisture levels in filed. The administration of the water system can be enhanced utilizing programmed watering framework This paper proposes a programmed water system with framework for the terrains which will reduce manual labour and optimizing water usage increasing productivity of crops. For formulating the setup, Arduino kit is used with moisture sensor with Wi-Fi module. Our experimental setup is connected with cloud framework and data is acquisition is done. Then data is analysed by cloud services and appropriate recommendations are given."}
{"_id": "43c9812beb3c36189c6a2c8299b5b5a7ae80c472", "title": "Self-Autonomous Wireless Sensor Nodes With Wind Energy Harvesting for Remote Sensing of Wind-Driven Wildfire Spread", "text": "The satellite-based remote sensing technique has been widely used in monitoring wildfire spread. There are two prominent drawbacks with this approach of using satellites located in space: (1) very low sampling rate (temporal resolution problem) and (2) lack of accuracy (spatial and spectral resolution problem). To address these challenges, a wireless sensor network deployed at ground level with high-fidelity and low-altitude atmospheric sensing for wind speed of local wildfire spread has been used. An indirect approach in sensing wind speed has been proposed in this paper as an alternative to the bulky conventional wind anemometer to save cost and space. The wind speed is sensed by measuring the equivalent electrical output voltage of the wind turbine generator (WTG). The percentage error in the wind speed measurement using the proposed indirect method is measured to be well within the \u00b14% limit with respect to wind anemometer accuracy. The same WTG also functions as a wind energy harvesting (WEH) system to convert the available wind energy into electrical energy to sustain the operation of the wireless sensor node. The experimental results show that the designed WEH system is able to harvest an average electrical power of 7.7 mW at an average wind speed of 3.62 m/s for powering the operation of the wireless sensor node that consumes 3.5 mW for predicting the wildfire spread. Based on the sensed wind speed information, the fire control management system determines the spreading condition of the wildfire, and an adequate fire suppression action can be performed by the fire-fighting experts."}
{"_id": "2e4c3e053452b30b73dfab598bb0862b7c7181b2", "title": "Real-parameter optimization with differential evolution", "text": "This study reports how the differential evolution (DE) algorithm performed on the test bed developed for the CEC05 contest for real parameter optimization. The test bed includes 25 scalable functions, many of which are both non-separable and highly multi-modal. Results include DE's performance on the 10 and 30-dimensional versions of each function"}
{"_id": "6b5a99dd7a5adb37a7ed99006ed8f080be9380f3", "title": "Opposition-based Magnetic Optimization Algorithm with parameter adaptation strategy", "text": "Magnetic Optimization Algorithm (MOA) has emerged as a promising optimization algorithm that is inspired by the principles of magnetic field theory. In this paper we improve the performance of the algorithm in two aspects. First an Opposition-Based Learning (OBL) approach is proposed for the algorithm which is applied to the movement operator of the algorithm. Second, by learning from the algorithm\u2019s past experience, an adaptive parameter control strategy which dynamically sets the parameters of the algorithm during the optimization is proposed. To show the significance of the proposed parameter adaptation strategy, we compare the algorithm with two well-known parameter setting techniques on a number of benchmark problems. The results indicate that although the proposed algorithm with the adaptation strategy does not require to set the parameters of the algorithm prior to the optimization process, it outperforms MOA with other parameter setting strategies in most large-scale optimization problems. We also study the algorithm while employing the OBL by comparing it with the original version of MOA. Furthermore, the proposed algorithm is tested and compared with seven traditional population-based algorithms and eight state-of-theart optimization algorithms. The comparisons demonstrate that the proposed algorithm outperforms the traditional algorithms in most benchmark problems, and its results is comparative to those obtained by the state-of-the-art algorithms."}
{"_id": "eaad37c86a2bd3f5aa6e802cf3a9b5bcec6d7658", "title": "Lightweight IPS for port scan in OpenFlow SDN networks", "text": "Security has been one of the major concerns for the computer network community due to resource abuse and malicious flows intrusion. Before a network or a system is attacked, a port scan is typically performed to discover vulnerabilities, like open ports, which may be used to access and control them. Several studies have addressed Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) methods for detecting malicious activities, based on received flows or packet data analysis. However, those methods lead to an increase in switching latency, due to the need to analyze flows or packets before routing them. This may also increase network overhead when flows or packets are duplicated to be parsed by an external IDS. On the one hand, an IDS/IPS may be a bottleneck on the network and may not be useful. On the other hand, the new paradigm called Software Defined Networking (SDN) and the OpenFlow protocol provide some statistical information about the network that may be used for detecting malicious activities. Hence, this work presents a new port scan IPS for SDN based on the OpenFlow switch counters data. A non-intrusive and lightweight method was developed and implemented, with low network overhead, and low memory and processing power consumption. The results showed that our method is effective on detecting and preventing port scan attacks."}
{"_id": "fa4377bc57379cdb7284eeeefa96617aa594f846", "title": "Improved Option Pricing Using Artificial Neural Networks and Bootstrap Methods", "text": "A hybrid neural network is used to predict the difference between the conventional option-pricing model and observed intraday option prices for stock index option futures. Confidence intervals derived with bootstrap methods are used in a trading strategy that only allows trades outside the estimated range of spurious model fits to be executed. Whilst hybrid neural network option pricing models can improve predictions they have bias. The hybrid option-pricing bias can be reduced with bootstrap methods. A modified bootstrap predictor is indexed by a parameter that allows the predictor to range from a pure bootstrap predictor, to a hybrid predictor, and finally the bagging predictor. The modified bootstrap predictor outperforms the hybrid and bagging predictors. Greatly improved performance was observed on the boundary of the training set and where only sparse training data exists. Finally, bootstrap bias estimates were studied."}
{"_id": "4bc8b0e0f0a2a370d3f52e227d560031be3b3d87", "title": "High quality & low thermal resistance eutectic flip chip LED bonding", "text": "Die attach process plays a key role in the thermal management of high power LED packages by governing the thermal resistance of the solder interfaces between the LED chips and the substrate materials. In this paper, comparison was made between the GaN based flip chip LED packages fabricated by (1) heated collet and (2) flux reflow eutectic die attach process. Thermal transient characteristics of the samples had been investigated based on the evaluation of the differential structure function and chip temperature measurement by infrared microscopy. It was shown that LEDs with higher void content would result in higher thermal resistance and chip temperature. Under high power operation at 6W, it was estimated that the junction temperature would exceed 80\u00b0C with void content 40%, which was at least 10\u00b0C higher comparing to that with void content 20%. Based on the result, LEDs samples bonded by heated collet die attach process had an average void content of 8.8% with 0.9% standard deviation, which were smaller comparing to the samples bonded by flux reflow die attach process (40% void content with standard deviation 20.5%). Due to the advantage of small void content control, it is possible to fabricate high power LED package with low thermal resistance, consistent and reliable die attach quality by using heated collet die attach process."}
{"_id": "4cb2f596c796fda6d12248824f68460db84100e4", "title": "Unconstrained Pose-Invariant Face Recognition Using 3D Generic Elastic Models", "text": "Classical face recognition techniques have been successful at operating under well-controlled conditions; however, they have difficulty in robustly performing recognition in uncontrolled real-world scenarios where variations in pose, illumination, and expression are encountered. In this paper, we propose a new method for real-world unconstrained pose-invariant face recognition. We first construct a 3D model for each subject in our database using only a single 2D image by applying the 3D Generic Elastic Model (3D GEM) approach. These 3D models comprise an intermediate gallery database from which novel 2D pose views are synthesized for matching. Before matching, an initial estimate of the pose of the test query is obtained using a linear regression approach based on automatic facial landmark annotation. Each 3D model is subsequently rendered at different poses within a limited search space about the estimated pose, and the resulting images are matched against the test query. Finally, we compute the distances between the synthesized images and test query by using a simple normalized correlation matcher to show the effectiveness of our pose synthesis method to real-world data. We present convincing results on challenging data sets and video sequences demonstrating high recognition accuracy under controlled as well as unseen, uncontrolled real-world scenarios using a fast implementation."}
{"_id": "1733583ca8de32ec5ff0526443b46db44f677b3e", "title": "Deep Learning for Lung Cancer Detection: Tackling the Kaggle Data Science Bowl 2017 Challenge", "text": "We present a deep learning framework for computer-aided lung cancer diagnosis. Our multi-stage framework detects nodules in 3D lung CAT scans, determines if each nodule is malignant, and \u0080nally assigns a cancer probability based on these results. We discuss the challenges and advantages of our framework. In the Kaggle Data Science Bowl 2017, our framework ranked 41st out of 1972 teams."}
{"_id": "a229a5ebba7275c372473071463bee14f6aa530b", "title": "Multiple templates auto exposure control based on luminance histogram for onboard camera", "text": "In order to adjust the exposure of high resolution and frequency images from an on-board camera in real time, a simple exposure control approach based on luminance histogram analysis is presented to realize light control precisely and rapidly. The algorithm divides the initial image into nine blocks and calculates the mean brightness of every part. It uses the luminance histogram to analyze the degree of exposure of both: the global image and every part, especially interest areas. According to the area size of the neighborhood on the two edges of the histogram, the different weight matrix can be decided to quickly calculate the optimal brightness for the next frame, and then the optimal exposure time can be determined accurately. The algorithm does not need to determine an expensive quadratic function to decide the weight of every part so as to fast perform auto exposure control. The experimental results show that the algorithm can correctly and fast adjust the exposure time of the on-board camera on an autonomous robot, and has strong adaptability for various shootings outdoor."}
{"_id": "24f4a4dba6888730e77ee909bfca7fbed48ccbc4", "title": "What campus-based students think about the quality and benefits of e-learning", "text": "There is a trend in Irish universities to utilise the benefits of the e-learning as a mechanism to improve learning performance of campus-based students. Whilst traditional methods, such as face-to-face lectures, tutorials, and mentoring, remain dominant in the educational sector, universities are investing heavily in learning technologies, to facilitate improvements with respect to the quality of learning. The technology to support reuse and sharing of educational resources, or learning objects, is becoming more stable, with interoperability standards maturing. However, debate has raged about what constitutes effective use of learning technology. This research expands upon a study carried out in 2003 examining students\u2019 perceptions of e-learning in a large undergraduate accounting class environment. As a result, improvements were made to the instructional design of the course, to enable students to engage interactively with content. The subsequent study, reported in this paper, adopted a broad range of techniques to understand students\u2019 learning experience in depth. The findings of this research provide an insight into how these students really work and learn using technologies, if at all. It is hoped that our findings will improve the experience for both students and lecturers who engage in teaching and learning through this medium. 502 British Journal of Educational Technology Vol 36 No 3 2005 \u00a9 British Educational Communications and Technology Agency, 2005. Introduction A recent trend in higher education is to create and provide online access to course materials. Over the past two decades academics and institutes of higher education have been diversifying their delivery of instruction through new Internet media such as learning management systems, asynchronous distance learning, and online classrooms amongst a myriad of other burgeoning educational technologies. This combination of traditional face-to-face lectures or tutorials, and web-based course content is better known as \u201cblended learning,\u201d purporting to blend the best aspects of real and virtual environments. Many Irish universities have invested in architectures and platforms to support their teaching staff in delivering material to students in a blended manner. Other institutions have adopted a lecturer-driven approach, whereby teaching staff are left to their own devices to supplement their lectures and tutorials with online material, hosted via their own web servers and typically open source, freeware software, or basic web pages. The reasons behind the drive to incorporate technology into the educational process are threefold. First, pressure to utilise information and communications technology (ICT) at university level comes from changes in the student demography. The rise of \u201cfull time part time students\u201d is a phenomenon of recent years, where school leavers take part-time jobs whilst attending university, leaving less time for evening tutorials or weekend study. In addition, there is a drive for what is known as lifelong learning, whereby adults are increasingly returning to institutions of higher education to take supplementary courses whilst in full-time employment, or during short career breaks. In this regard, the Irish government has made a commitment to have 15% of adults in continuing higher education by 2006. 
The figure currently stands at less than 5% (Department of Education and Science, 2000). This movement to adult continuing education was also reflected in a statement made by Information Commission Society (Ireland): \u201cThe issue of integration IT to the teaching process is an important part of future improvements, which it will be crucial to pursue\u201d (Boylan, 2000, p. 31). Second, changes in the market for delivery of education is also shifting. Private for-profit higher education institutions are offering a wide range of certificate and degree courses. The Irish Open University Initiative \u201cOscail\u201d and its UK equivalent offer distance learning diploma and degree courses. Many of these courses are delivered through ICT, along with weekend face-to-face tutorials. This distance learning market has also seen the arrival of new entrants, such as the Atlantic University Alliance, a collaboration between National University of Ireland, Galway, University College Cork, and the University of Limerick. Similar undertakings are underway in other institutes of higher education, whereby expertise in delivering blended learning within the institution is being extended towards distance learning courses by using a significant amount of ICT to overcome geographic and time barriers. Third, another pressure on traditional institutions of higher education comes from innovations in new technologies. John Seely Brown and Paul Duiguid claim that these technologies \u201coffer new ways to think of producing, distributing and consuming acaWhat students think about e-learning 503 \u00a9 British Educational Communications and Technology Agency, 2005. demic material. As with so many other institutions, new technologies have caused universities to rethink not simply isolated features but their entire mission and how they go about it\u201d (Seely Brown & Duiguid, 2000). This sentiment is also being echoed by government policy towards higher education where the Irish State is attempting to set certain parameters of performance covering such areas as equity of access, commitment to e-learning and to ICT, commitment to life-long learning, and outreach to other communities. In the subject domain of accountancy, the integration of technology into the classroom is also becoming a central issue for accounting educators for an additional reason. Accountancy professionals in industry require competency in accountancy-related information technology (IT) skills. Higher education institutions, in response, have tried to simulate authentic learning scenarios in the accountancy class and incorporate IT topics into their programmes. However, the efficacy of introducing technology in accounting education remains unclear. Positive learning effects associated with IT in accounting education have been noted (Abdolmohammadi et al , 1998; Fetters, McKenzie & Callaghan, 1986; Friedman, 1981). Equally, evidence has shown that technology has negatively impacted on learning (Dickens & Harper, 1986; Togo & McNamee, 1997). Indeed, some critics harshly criticise its introduction to educational contexts. Selwyn (2002, p. 172) claims in generic terms that \u201c...in the final analysis educational technology remains a diverting pastime and \u2018add-on\u2019 to the curriculum for some teaching and learners, whilst for most in education its impact has been slight.\u201d This paper hopes to look at some of the issues raised by individual students with regard to the use of technology in the teaching and learning process. 
Rather than focusing on macro issues, such as the political decisions behind the integration of technology, this research aims to examine individuals\u2019 acceptance or rejection of using ICT. Exploring the perspective of the individual student when discussing the use of technology is paramount to beginning to understand the nature of ICT in higher educational settings. This is of importance both to policy makers and education technologists in the debate on the role of ICT in education at an organisational level. The module and the medium This research focused on one module of The Principles of Accounting being taught to approximately 600 first-year undergraduate students at the University of Limerick. The module under investigation was part of a wider course, leading to a degree in Business Studies or Law and Accounting. The majority of students were between 17 and 19 years of age, coming from distributed second-level schools in Ireland. A minority were Erasmus students from various European countries, or mature students over the age of 23. For all, except overseas students, this was their first experience in using an e-learning platform, although most had previously informal used the web to gather information, or prepare coursework in second-level education, prior to entering university. This module was delivered by using a blended learning approach, supplementing weekly lectures, tutorials, and laboratory sessions with online course content, interac504 British Journal of Educational Technology Vol 36 No 3 2005 \u00a9 British Educational Communications and Technology Agency, 2005. tive quizzes, and Excel tasks. A range of different e-learning assessments complemented the traditional weekly meetings. Students were asked to submit a compulsory paperbased accounting case study, and additionally were offered two optional ICT assignments, a stream of online multiple-choice quizzes, and an Excel project. A module web site provided course details, additional readings, and supplementary links. Employing multiple teaching methods simultaneously is a form of blended learning (Saunders & Werner, 2003). Alternative instructional resources can stimulate positive effects on accounting students\u2019 learning experiences, according to (Rebele et al , 1998). Furthermore, instructional innovations are desirable to develop accounting students\u2019 IT competencies (Albrecht & Sack, 2000). Figure 1 illustrates the staged delivery of both the traditional and online elements of the module. The university does not provide institutional support for e-learning on a campus-wide scale, but two other modules in their first year also used a blended learning delivery approach, providing lecture notes online, or sample answers, in the subject area of economics and mathematics. Both of these other modules differ in marking scheme and in the extent to which online supports are provided such as discussion boards, or positive marking schemes for continuous online assessments. As such, blended learning is not yet a norm across modules for first-year students at this current time. Methods The research developed on experience gained in a previous study on a prior"}
{"_id": "fe50efe9e282c63941ec23eb9b8c7510b6283228", "title": "A Facial Expression Recognition System Using Convolutional Networks", "text": "Facial expression recognition has been an active research area in the past ten years, with a growing application area like avatar animation and neuromarketing. The recognition of facial expressions is not an easy problem for machine learning methods, since different people can vary in the way that they show their expressions. And even an image of the same person in one expression can vary in brightness, background and position. Therefore, facial expression recognition is still a challenging problem in computer vision. In this work, we propose a simple solution for facial expression recognition that uses a combination of standard methods, like Convolutional Network and specific image pre-processing steps. Convolutional networks, and the most machine learning methods, achieve better accuracy depending on a given feature set. Therefore, a study of some image pre-processing operations that extract only expression specific features of a face image is also presented. The experiments were carried out using a largely used public database for this problem. A study of the impact of each image pre-processing operation in the accuracy rate is presented. To the best of our knowledge, our method achieves the best result in the literature, 97.81% of accuracy, and takes less time to train than state-of-the-art methods."}
{"_id": "32af32d3f79d0f8130b5170238e9c9695e980097", "title": "Analysis of user keyword similarity in online social networks", "text": "How do two people become friends? What role does homophily play in bringing two people closer to help them forge friendship? Is the similarity between two friends different from the similarity between any two people? How does the similarity between a friend of a friend compare to similarity between direct friends? In this work, our goal is to answer these questions. We study the relationship between semantic similarity of user profile entries and the social network topology. A user profile in an on-line social network is characterized by its profile entries. The entries are termed as user keywords. We develop a model to relate keywords based on their semantic relationship and define similarity functions to quantify the similarity between a pair of users. First, we present a \u2018forest model\u2019 to categorize keywords across multiple categorization trees and define the notion of distance between keywords. Second, we use the keyword distance to define similarity functions between a pair of users. Third, we analyze a set of Facebook data according to the model to determine the effect of homophily in on-line social networks. Based on our evaluations, we conclude that direct friends are more similar than any other user pair. However, the more striking observation is that except for direct friends, similarities between users are approximately equal, irrespective of the topological distance between them."}
{"_id": "e4e9e923be7dba92d431cb70db67719160949053", "title": "\"Industrie 4.0\" and Smart Manufacturing - A Review of Research Issues and Application Examples", "text": ""}
{"_id": "d605d6981960d0ba9e50d60de9ba72e5f73010da", "title": "Effects of energy drinks on the cardiovascular system", "text": "Throughout the last decade, the use of energy drinks has been increasingly looked upon with caution as potentially dangerous due to their perceived strong concentration of caffeine aside from other substances such as taurine, guarana, and L-carnitine that are largely unknown to the general public. In addition, a large number of energy drink intoxications have been reported all over the world including cases of seizures and arrhythmias. In this paper, we focus on the effect of energy drinks on the cardiovascular system and whether the current ongoing call for the products' sales and regulation of their contents should continue."}
{"_id": "73d749e30b41452873003a1286e8fee80bf5b69e", "title": "Boosting item keyword search with spreading activation", "text": "Most keyword search engines return directly matching keyword phrases. However, publishers cannot anticipate all possible ways in which users would search for the items in their documents. In fact, many times, there may be no direct keyword match between a keyword search phrase and descriptions of relevant items that are perfect matches for the search. We present an automated, high precision-based information retrieval solution to boost item find-ability by bridging the semantic gap between item information and popular keyword search phrases. Our solution achieves an average of 80% F-measure for various boosted matches for keyword search phrases in various categories."}
{"_id": "4e8e71312fea5688df34ca1272e19aea9893ce93", "title": "Advanced Cooling for Power Electronics", "text": "Power electronics devices such as MOSFET's, GTO's, IGBT's, IGCT's etc. are now widely used to efficiently deliver electrical power in home electronics, industrial drives, telecommunication, transport, electric grid and numerous other applications. This paper discusses cooling technologies that have evolved in step to remove increasing levels of heat dissipation and manage junction temperatures to achieve goals for efficiency, cost, and reliability. Cooling technologies rely on heat spreading and convection. In applications that use natural or forced air cooling, water heat pipes provide efficient heat spreading for size and weight reduction. Previous concepts are reviewed and an improved heat sink concept with staggered fin density is described for more isothermal cooling. Where gravity can drive liquid flow, thermosiphons provide efficient heat transport to remote fin volumes that can be oriented for natural and/or forced air cooling. Liquid cold plates (LCP's) offer the means to cool high heat loads and heat fluxes including double sided cooling for the highest density packaging. LCP's can be used both in single phase cooling systems with aqueous or oil based coolants and in two-phase cooling systems with dielectric fluids and refrigerants. Previous concepts are reviewed and new concepts including an air cooled heat sink, a thermosiphon heat sink, a vortex flow LCP and a shear flow direct contact cooling concept are described."}
{"_id": "797f359b211c072a5b754e7a8f48a3b1ecf9b8be", "title": "Evaluating a PID, pure pursuit, and weighted steering controller for an autonomous land vehicle", "text": "Wright Laboratory, at Tyndall AFB, Florida, has contracted the University ofFlorida to develop autonomous navigation systems for a variety ofrobotic vehicles, capable ofperforming tasks associated with the location and removal of bombs and mines. One ofthe tasks involves surveying closed target ranges for unexploded buried munitions. Accuracy in path following is critical to the task. There are hundreds of acres that currently require surveying. The sites are typically divided into regions, where each mission can take up to 4.5 hours. These sites are usually surveyed along parallel rows. By improving the accuracy ofpath following, the distance betweenthe rows can be increased to nearly the detection width ofthe ground penetrating sensors, resulting in increased acreage surveyed per mission. This paper evaluates a high-level PID and a pure pursuit steering controller. The controllers were combined into a weighted solution so that the desirable features of each controller is preserved. This strategy was demonstrated in simulation and implemented on a Navigation Test Vehicle (NTV). For a test path ofvarying curvature, the average lateral control error was 2 cm at a vehicle speed of 1.34 mIs."}
{"_id": "06b919f865d0a0c3adbc10b3c34cbfc35fb98d43", "title": "A Sensitivity Analysis of (and Practitioners' Guide to) Convolutional Neural Networks for Sentence Classification", "text": "Convolutional Neural Networks (CNNs) have recently achieved remarkably strong performance on sentence classification tasks (Kim, 2014; Kalchbrenner et al., 2014). However, these models require practitioners to specify the exact model architecture and accompanying hyperparameters, e.g., the choice of filter region size, regularization parameters, and so on. It is currently unknown how sensitive model performance is to changes in these configurations for the task of sentence classification. We thus conduct a sensitivity analysis of one-layer CNNs to explore the effect of architecture components on model performance; our aim is to distinguish between important and comparatively inconsequential design decisions for sentence classification. We focus on one-layer CNNs (to the exclusion of more complex models) due to their comparative simplicity and strong empirical performance (Kim, 2014). We derive practical advice from our extensive empirical results for those interested in getting the most out of CNNs for sentence classification. One important observation borne out by our experimental results is that researchers should report performance variances, as these can be substantial due to stochastic initialization and inference."}
{"_id": "364da079f91a6cb385997be990af06e9ddf6e888", "title": "Effective Use of Word Order for Text Categorization with Convolutional Neural Networks", "text": "Convolutional neural network (CNN) is a neural network that can make use of the internal structure of data such as the 2D structure of image data. This paper studies CNN on text categorization to exploit the 1D structure (namely, word order) of text data for accurate prediction. Instead of using low-dimensional word vectors as input as is often done, we directly apply CNN to high-dimensional text data, which leads to directly learning embedding of small text regions for use in classification. In addition to a straightforward adaptation of CNN from image to text, a simple but new variation which employs bag-ofword conversion in the convolution layer is proposed. An extension to combine multiple convolution layers is also explored for higher accuracy. The experiments demonstrate the effectiveness of our approach in comparison with state-of-the-art methods."}
{"_id": "39020bbb4acd71ec95cd047f42d7c2c6a3223fd2", "title": "Improving TCP Throughput over Two-Way Asymmetric Links: Analysis and Solutions", "text": "The sharing of a common buffer by TCP data segments and acknowledgments in a network or internet has been known to produce the effect of ack compression, often causing dramatic reductions in throughput. We study several schemes for improving the performance of two-way TCP traffic over asymmetric links where the bandwidths in the two directions may differ substantially, possibly by many orders of magnitude. These approaches reduce the effect of ack compression by carefully controlling the flow of data packets and acknowledgments. We first examine a scheme where acknowledgments are transmitted at a higher priority than data. By analysis and simulation, we show that prioritizing acks can lead to starvation of the low-bandwidth connection. Next, we introduce and analyze a connection-level backpressure mechanism designed to limit the maximum amount of data buffered in the outgoing IP queue of the source of the low-bandwidth connection. We show that this approach, while minimizing the queueing delay for acks, results in unfair bandwidth allocation on the slow link. Finally, our preferred solution separates the acks from data packets in the outgoing queue, and makes use of a connection-level bandwidth allocation mechanism to control their bandwidth shares. We show that this scheme overcomes the limitations of the previous approaches, provides isolation, and enables precise control of the connection throughputs. We present analytical models of the dynamic behavior of each of these approaches, derive closed-form expressions for the expected connection efficiencies in each case, and validate them with simulation results."}
{"_id": "12ce0e39d13b2e96d5b66c61d0c83764b51cecc3", "title": "Multiagent Systems Engineering", "text": "This paper describes the Multiagent Systems Engineering (MaSE) methodology. MaSE is a general purpose, methodology for developing heterogeneous multiagent systems. MaSE uses a number of graphically based models to describe system goals, behaviors, agent types, and agent communication interfaces. MaSE also provides a way to specify architecture-independent detailed definition of the internal agent design. An example of applying the MaSE methodology is also presented."}
{"_id": "da03683b9fa40acc3c0645b9dd5df1e7d1cf6f38", "title": "RoXSum: Leveraging Data Aggregation and Batch Processing for XML Routing", "text": "Content-based routing is the primary form of communication within publish/subscribe systems. In those systems data transmission is performed by sophisticated overlay networks of content-based routers, which match data messages against registered subscriptions and forward them based on this matching. Despite their inherent complexities, such systems are expected to deliver information in a timely and scalable fashion. As a result, their successful deployment is a strenuous task. Relevant efforts have so far focused on the construction of the overlay network and the filtering of messages at each broker. However, the efficient transmission of messages has received less attention. In this work, we propose a solution that gracefully handles the transmission task, while providing performance benefits for the matching task as well. Along those lines, we design RoXSum, a message representation scheme that aggregates the routing information from multiple documents in a way that permits subscription matching directly on the aggregated content. Our performance study shows that RoXSum is a viable and effective technique, as it speeds up message routing for more than an order of magnitude."}
{"_id": "2d38ff7205c4bf57c16b56c09279c347dfed0767", "title": "Relation Extraction Using Multi-Encoder LSTM Network on a Distant Supervised Dataset", "text": "Relation extraction techniques are used to find potential relational facts in textual content. Relation Extraction systems require huge amount of training data to model semantics of sentences and identify relations. Distant supervision, often used to construct training data for relation extraction, produces noisy alignments that can hurt the performance of relation extraction systems. To this end, we propose a simple, yet effective, technique to automatically compute confidence levels of alignments. We compute the confidence values of automatically labeled content using co-occurrence statistics of relations and dependency patterns of aligned sentences. Thereafter, we propose a novel multi-encoder bidirectional Long Short Term Memory (LSTM) model to identify relations in a given sentence. We use different features (words, part-of-speech (POS) tags and dependency paths) in each encoder and concatenate the hidden states of all the encoders to predict the relations. Our experiments show that a multi-encoder network can handle different features together to predict relations more accurately (~9% improvement over a single encoder model). We also conduct visualization experiments to show that our model learns intermediate representations effectively to identify relations in sentences."}
{"_id": "5b126a3d59242728d615213584efe699c8f09bfa", "title": "REFLECTIONS ON FREQUENCY EFFECTS IN LANGUAGE PROCESSING", "text": "This response addresses the following points raised in the commentaries: (a) complementary learning mechanisms, the distinction between explicit and implicit memory, and the neuroscience of \u201cnoticing\u201d; (b) what must and what need not be noticed for learning; (c) when frequency fails to drive learning, which addresses factors such as failing to notice cues, perseveration, transfer from L1, developmental readiness, thinking too hard, pedagogical input, and practicing; (d) attention and form-focused instruction; (e) conscious and unconscious knowledge of frequency; (f) sequences of acquisition\u2014from formula, through low-scope pattern, to construction; (g) the Fundamental Difference hypothesis; (h) the blind faith of categorical grammar; (i) Labovian variationist perspectives; (j) parsimony and theory testing; (k) universals and predispositions; and (l) wanna-contractions. It concludes by emphasizing that language acquisition is a process of dynamic emergence and that learners\u2019 language is a product of their history of usage in communicative interaction."}
{"_id": "d38948f54500e79c2ee091c77e73829413c38392", "title": "OPAQUE: An Asymmetric PAKE Protocol Secure Against Pre-Computation Attacks", "text": "Password-Authenticated Key Exchange (PAKE) protocols allow two parties that only share a password to establish a shared key in a way that is immune to offline attacks. Asymmetric PAKE (aPAKE) strengthens this notion for the more common client-server setting where the server stores a mapping of the password and security is required even upon server compromise, that is, the only allowed attack in this case is an (inevitable) offline exhaustive dictionary attack against individual user passwords. Unfortunately, current aPAKE protocols (that dispense with the use of servers\u2019 public keys) allow for pre-computation attacks that lead to the instantaneous compromise of user passwords upon server compromise, thus forgoing much of the intended aPAKE security. Indeed, these protocols use \u2013 in essential ways \u2013 deterministic password mappings or use random \u201csalt\u201d transmitted in the clear from servers to users, and thus are vulnerable to pre-computation attacks. We initiate the study of Strong aPAKE protocols that are secure as aPAKE\u2019s but are also secure against pre-computation attacks. We formalize this notion in the Universally Composable (UC) settings and present two modular constructions using an Oblivious PRF as a main tool. The first builds a Strong aPAKE from any aPAKE (which in turn can be constructed from any PAKE [23]) while the second builds a Strong aPAKE from any authenticated key-exchange protocol secure against reverse impersonation (a.k.a. KCI). Using the latter transformation, we show a practical instantiation of a UC-secure Strong aPAKE in the Random Oracle model. The protocol (\u201cOPAQUE\u201d) consists of 2 messages (3 with mutual authentication), requires 3 and 4 exponentiations for server and client, respectively (2 to 4 of which can be fixed-base depending on optimizations), provides forward secrecy, is PKI-free, supports user-side hash iterations, has a built-in facility for password-based storage and retrieval of secrets and credentials, and accommodates a user-transparent server-side threshold implementation."}
{"_id": "516ac4ab74a3b8a94540fe8a2bab082ea9bb9aaa", "title": "Death by complete decapitation of motorcyclist wearing full face helmet: case report.", "text": "We describe a case of complete decapitation following a motorcycle accident in which the victim was wearing a full face helmet. A young man lost control of his motorcycle and was thrown about 20 m, hitting his head against the barrier separating a tramline from the road. The resulting trauma caused his decapitation, the only fatal wound ascertained by the various forensic investigations. The authors present this rare case and compare it against the other two cases reported in the literature, providing some observations on the ways in which this injury can come about. The absence of abrasions or signs that the wound edges came into contact with a metal structure, the presence of signs of impact on the side of the helmet and the finding of a transversal fracture at the base of the skull point to the violent action of a side-to-side opposite force, due to the resistance provided by the lower edge of the protective helmet."}
{"_id": "28f744bf3aa4b4fc8d8ccf61016f2c40b3cc94f7", "title": "A Regularization Method with Inference of Trust and Distrust in Recommender Systems", "text": "In this study we investigate the recommendation problem with trust and distrust relationships to overcome the sparsity of users\u2019 preferences, accounting for the fact that users trust the recommendations of their friends, and they do not accept the recommendations of their foes. In addition, not only users\u2019 preferences are sparse, but also users\u2019 social relationships. So, we first propose an inference step with multiple random walks to predict the implicit-missing trust relationships that users might have in recommender systems, while considering users\u2019 explicit trust and distrust relationships during the inference. We introduce a regularization method and design an objective function with a social regularization term to weigh the influence of friends\u2019 trust and foes\u2019 distrust degrees on users\u2019 preferences. We formulate the objective function of our regularization method as a minimization problem with respect to the users\u2019 and items\u2019 latent features and then we solve our recommendation problem via gradient descent. Our experiments confirm that our approach preserves relatively high recommendation accuracy in the presence of sparsity in both the users\u2019 preferences and social relationships, significantly outperforming several state-of-the-art methods."}
{"_id": "b970eae72090c5c85efe386629630f6409065998", "title": "A New Algorithm of Fully-automatic Red-eye Removal", "text": "the redeye effect is typically formed in consumer photos taken with a built-in camera flash. The paper presents a new algorithm of the automatic removal of redeye from digital photos. The proposed algorithm can remove red-eye automatically without manual intervention. First, Detecting faces by using the AdaBoost algorithm. Second, the red-eyes are located by using operations of segmentation, morphology and geometric constraint. Finally, correcting the located red-eyes completely. The experimental results are satisfied with high correction rates, relatively low computational complexity and robustness."}
{"_id": "e9f992b026def0533eb656ecd89eb4bb103f7126", "title": "Autonomous Underwater Vehicles : Trends and Transformations", "text": "Three examples of inter-agency cooperation utilizing current generation, individual Autonomous Underwater Vehicles (AUVs) are described consistent with recent recommendations of the U.S. Commission on Ocean Policy. The first steps in transforming individual AUVs into adaptive, networked systems are underway. To realize an affordable and deployable system, a network-class AUV must be designed with cost\u2013size constraints not necessarily applied in developing solo AUVs. Vehicle types are suggested based on function and ocean operating regime: surface layer, interior and bottom layer. Implications for platform, navigation and control subsystems are explored and practical formulations for autonomy and intelligence are postulated for comparing performance and judging behavior. Laws and conventions governing intelligent maritime navigation are reviewed and an autonomous controller with conventional collision avoidance behavior is described. Network-class cost constraints can be achieved through economies of scale. Productivity and efficiency in AUV manufacturing will increase if constructive competition is maintained. Constructive strategies include interface and operating standards. Professional societies and industry trade groups have a leadership role to play in establishing public, open standards. cations are described at many conferences and in an expanding literature of journal publications. Griffiths (2003) provides recent developments in AUV design, construction and operation. A number of commercial manufacturers have emerged to supply the growing market. Clearly, individual AUVs are evolving into useful tools that extend current measurement methods. Three examples involving current generation, individual AUVs will serve to illustrate trends in inter-agency cooperation utilizing this technology. Following these examples, we examine factors that will transform current measurement methods. Network externalities associated with interagency cooperation will play a role in driving this transformation. Recently the Navy joined with the U. S. Environmental Protection Agency, the Narragansett Bay Estuary Program, and the Autonomous Undersea Systems Institute to demonstrate the effectiveness of using a Solar Powered Autonomous Underwater Vehicle (SAUV II) to measure dissolved oxgyen concentrations in Greenwich Bay, Rhode Island. Utilization of an AUV to rapidly move continuously-recording dissolved oxygen sensors over large spatial areas to map the frequency and extent of hypoxia demonstrates cost-effective water quality monitoring (Crimmins, 2005). Real-time monitoring of dissolved oxygen over large spatial scales is important for predicting the onset of hypoxia and mapping its extent (RI Dept Env Mgmt, 2004). The SAUV II (Figure 1) is a solar powered AUV designed for long endurance missions such as monitoring, surveillance, or station keeping where real time bi-directional communications to shore are critical. The SAUV II operates continuously for several months using solar energy to recharge its lithium ion batteries during daylight hours. The SAUV II was fitted with a fast-response galvanic oxygen micro-sensor (AMT Analysenmesstechnik GmbH). This sensor provides rapid in situ profiling of dissolved oxygen at depths of up to 100 m with a response time of a few hundred milliseconds. 
The SAUV II was also fitted with a NXIC conductivity-temperature-depth (CTD) sensor (Falmouth Scientific Inc.). CTD data are essential for detecting water column stratification, which affects the development and breakdown of hypoxia (Crimmins, 2005). T T R E N D S o move forward wisely with ocean science and technology, the United States needs a unified strategy that engages national and international collaboration. Part VII of the Report by the U.S. Commission on Ocean Policy, \u201cScience-Based Decisions: Understanding of the Oceans,\u201d not only affirms this assertion, but also identifies a framework for federal agencies to proactively develop and implement policies that embrace the spirit of inter-agency cooperation and cost-effective shared technology. The Commission states that a larger, coordinated investment in infrastructure and technology development is necessary to conduct and support ocean science (Commissioners, 2004; Crimmins, 2004a). Future ocean sampling and surveillance systems should be capable of global deployment, sustained presence, three dimensional adaptive aperture, real time control and robust performance. These requirements can be met affordably by a network of Autonomous Underwater Vehicles (AUVs). The first generation of AUVs has transitioned from dream to reality (Griffiths, 2003). Engineering progress is now regularly reported and appli-"}
{"_id": "2e0b52e5d66c174a6c18ebc7ed15a0b5e601a687", "title": "Ensemble Classifiers for Network Intrusion Detection System", "text": "Two of the major challenges in designing anomaly intrusion detection are to maximize detection accuracy and to minimize false alarm rate. In addressing this issue, this paper proposes an ensemble of one-class classifiers where each adopts different learning paradigms. The techniques deployed in this ensemble model are; Linear Genetic Programming (LGP), Adaptive Neural Fuzzy Inference System (ANFIS) and Random Forest (RF). The strengths from the individual models were evaluated and ensemble rule was formulated. Prior to classification, a 2-tier feature selection process was performed to expedite the detection process. Empirical results show an improvement in detection accuracy for all classes of network traffic; Normal, Probe, DoS, U2R and R2L. Random Forest, which is an ensemble learning technique that generates many classification trees and aggregates the individual result was also able to address imbalance dataset problem that many of machine learning techniques fail to sufficiently address it."}
{"_id": "e89b647e5ad2cff5e024a5dedf20a37b964df7e7", "title": "Design and implementation of digital fractional order PID controller using optimal pole-zero approximation method for magnetic levitation system", "text": "The aim of this paper is to employ fractional order proportional integral derivative ( FO-PID ) controller and integer order PID controller to control the position of the levitated object in a magnetic levitation system ( MLS ), which is inherently nonlinear and unstable system. The proposal is to deploy discrete optimal pole-zero approximation method for realization of digital fractional order controller. An approach of phase shaping by slope cancellation of asymptotic phase plots for zeros and poles within given bandwidth is explored. The controller parameters are tuned using dynamic particle swarm optimization ( dPSO ) technique. Effectiveness of the proposed control scheme is verified by simulation and experimental results. The performance of realized digital FO-PID controller has been compared with that of the integer order PID controllers. It is observed that effort required in fractional order control is smaller as compared with its integer counterpart for obtaining the same system performance."}
{"_id": "3644b532d32108e8c0e81ddb1f5aaacfff3a3d7a", "title": "Buddha ' s Brain : Neuroplasticity and Meditation", "text": "In a recent visit to the United States, the Dalai Lama gave a speech at the Society for Neuroscience's annual meeting in Washington, D.C. Over the past several years, he has helped recruit Tibetan Buddhist monks for, and directly encouraged research on the brain and meditation in the Waisman Laboratory for Brain Imaging and Behavior at the University of Wisconsin-Madison. The findings from studies in this unusual sample as well as related research efforts, suggest that, over the course of meditating for tens of thousands of hours, the long-term practitioners had actually altered the structure and function of their brains. In this article we discuss neuroplasticity, which encompasses such alterations, and the findings from these studies. Further, we comment on the associated signal processing (SP) challenges, current status and how SP can contribute to advance these studies."}
{"_id": "b4ed1f783a5f8ae219dd9042ee0b180fd9e1d261", "title": "The most effective exercise for strengthening the supraspinatus muscle: evaluation by magnetic resonance imaging.", "text": "BACKGROUND\nElectromyography has been used to determine the best exercise for strengthening the supraspinatus muscle, but conflicting results have been reported. Magnetic resonance imaging T2 relaxation time appears to be more accurate in determining muscle activation.\n\n\nPURPOSE\nTo determine the best exercises for strengthening the supraspinatus muscle.\n\n\nSTUDY DESIGN\nCriterion standard.\n\n\nMETHODS\nSix male volunteers performed three exercises: the empty can, the full can, and horizontal abduction. Immediately before and after each exercise, magnetic resonance imaging examinations were performed and changes in relaxation time for the subscapularis, supraspinatus, infraspinatus, teres minor, and deltoid muscles were recorded.\n\n\nRESULTS\nThe supraspinatus muscle had the greatest change among the studied muscles in relaxation time for the empty can (10.5 ms) and full can (10.5 ms) exercises. After the horizontal abduction exercise the change in relaxation time for the supraspinatus muscle (3.6 ms) was significantly smaller than that for the posterior deltoid muscle (11.5 ms) and not significantly different from that of the other muscles studied.\n\n\nCONCLUSION\nThe empty can and full can exercises were most effective in activating the supraspinatus muscle."}
{"_id": "d41008671aae2bb342adae39720bfcb14921e591", "title": "Neural Network Model for Earthquake Prediction Using DMETER Data and Seismic Belt Information", "text": "The mechanism of the earthquake remains to be investigated, though some anomalies associated with earthquakes have been found by DEMETER satellite observations. The next step is to use the DEMETER data for earthquake prediction. It is a useful and feasible way to use the self-adaptive artificial neural network to construct relations between various symptom factors and earthquake occurrences. The back-propagation neural network is quite suitable to express the nonlinear relation between earthquake and various anomalies. In this paper a series of physical quantities measured by the DEMETER satellite including Electron density, Electron temperature, ions temperature and oxygen ion density, together with seismic belt information are used to form sample sets for a back-propagation neural network. The neural network model then can be used to conduct the prediction. In the end, validation tests are performed based on those important seismic events happened in 2008."}
{"_id": "2d32bd73265f09053b40c50a5d58f7b7adc868d9", "title": "Collaborative computer-aided design - research and development status", "text": "In order to facilitate product design and realization processes, presently, research is actively carried out for developing methodologies and technologies of collaborative computer-aided design systems to support design teams geographically dispersed based on the quickly evolving information technologies. In this paper, the developed collaborative systems, methodologies and technologies, which are organized as a horizontal or a hierarchical manner, are reviewed. Meanwhile, a 3D streaming technology, which can effectively transmit visualization information across networks for Web applications, is highlighted and the algorithms behind it are disclosed."}
{"_id": "a878b5528bd6478a04f24056583e2ddffe43085d", "title": "Analyzing the performance differences between pattern matching and compressed pattern matching on texts", "text": "In this study the statistics of pattern matching on text data and the statistics of compressed pattern matching on compressed form of the same text data are compared. A new application has been developed to count the character matching numbers in compressed and uncompressed texts individually. Also a new text compression algorithm that allows compressed pattern matching by using classical pattern matching algorithms without any change is presented in this paper. In this paper while the presented compression algorithm based on digram and trigram substitution has been giving about 30-35% compression factor, the duration of compressed pattern matching on compressed text is calculated less than the duration of pattern matching on uncompressed text. Also it is confirmed that the number of character comparison on compressed texts while doing a compressed pattern matching is less than the number of character comparison on uncompressed texts. Thus the aim of the developed compression algorithm is to point out the difference in text processing between compressed and uncompressed text and to form opinions for another applications."}
{"_id": "4c2ebacd84fab0e0180e9966837c5c57eb7ab0fb", "title": "A restricted Boltzmann machine based two-lead electrocardiography classification", "text": "An restricted Boltzmann machine learning algorithm were proposed in the two-lead heart beat classification problem. ECG classification is a complex pattern recognition problem. The unsupervised learning algorithm of restricted Boltzmann machine is ideal in mining the massive unlabelled ECG wave beats collected in the heart healthcare monitoring applications. A restricted Boltzmann machine (RBM) is a generative stochastic artificial neural network that can learn a probability distribution over its set of inputs. In this paper a deep belief network was constructed and the RBM based algorithm was used in the classification problem. Under the recommended twelve classes by the ANSI/AAMI EC57: 1998/(R)2008 standard as the waveform labels, the algorithm was evaluated on the two-lead ECG dataset of MIT-BIH and gets the performance with accuracy of 98.829%. The proposed algorithm performed well in the two-lead ECG classification problem, which could be generalized to multi-lead unsupervised ECG classification or detection problems."}
{"_id": "929ea29f81c60f8d6e333eac7e0bfb72905865ac", "title": "Energy-Aware Genetic Algorithms for Task Scheduling in Cloud Computing", "text": "For the cloud computing, task scheduling problems are of paramount importance. It becomes more challenging when takes into account energy consumption, traditional make span criteria and users QoS as objectives. This paper considers independent tasks scheduling in cloud computing as a bi-objective minimization problem with make span and energy consumption as the scheduling criteria. We use Dynamic Voltage Scaling (DVS) to minimize energy consumption and propose two algorithms. These two algorithms use the methods of unify and double fitness to define the fitness function and select individuals. They adopt the genetic algorithm to parallel find the reasonable scheduling scheme. The simulation results demonstrate the two algorithms can efficiently find the right compromise between make span and energy consumption."}
{"_id": "0bdaf705c237a53b8db50dbdb62f1451774cb2a3", "title": "A* Parsing: Fast Exact Viterbi Parse Selection", "text": "A PCFGparsingcandramaticallyreducethetime requiredto find theexactViterbi parseby conservatively estimatingoutsideViterbi probabilities.Wediscussvariousestimatesandgive efficientalgorithmsfor computingthem.On Penntreebanksentences, ourmostdetailedestimate reducesthetotalnumberof edgesprocessedto lessthan3%of thatrequiredby exhausti ve parsing,andevenasimplerestimatewhichcanbepre-computedin underaminutestill reduces thework by a factorof 5. ThealgorithmextendstheclassicA graphsearchprocedureto a certainhypergraphassociatedwith parsing.Unlike best-firstandfinite-beammethodsfor achieving thiskind of speed-up, theA parseris guaranteedto returnthemostlikely parse,not justanapproximation.Thealgorithmis alsocorrectfor awide rangeof parsercontrolstrategies andmaintainsaworst-casecubictime bound. A Parsing: FastExact Viterbi ParseSelection"}
{"_id": "0dd6795ae207ae4bc455c9ac938c3eebd84897c8", "title": "Statistical language learning", "text": "The $64,000 question in computational linguistics these days is: \u201cWhat should I read to learn about statistical natural language processing?\u201d I have been asked this question over and over, and each time I have given basically the same reply: there is no text that addresses this topic directly, and the best one can do is find a good probability-theory textbook and a good information-theory textbook, and supplement those texts with an assortment of conference papers and journal articles. Understanding the disappointment this answer provoked, I was delighted to hear that someone had finally written a bookdirectly addressing this topic. However, after reading Eugene Charniak\u2019s Statistical Language Learning, I have very mixed feelings about the impact this bookmight have on the ever-growing field of statistical NLP. The book begins with a very brief description of the classic artificial intelligence approach toNLP (chapter 1), including morphology, syntax, semantics, and pragmatics. It presents a few definitions from probability theory and information theory (chapter 2), then proceeds to introduce hidden Markov models (chapters 3\u20134) and probabilistic context-free grammars (chapters 5\u20136). The book concludes with a few chapters discussing advanced topics in statistical language learning, such as grammar induction (chapter 7), syntactic disambiguation (chapter 8), word clustering (chapter 9), andword sense disambiguation (chapter 10). To its credit, the book serves as an interesting popular discussion of statistical modeling in NLP. It is well-written and entertaining, and very accessible to the reader with a limited mathematical background. It presents a good selection of statistical NLP topics to introduce the reader to the field. And the descriptions of the forward-backward algorithm for hidden Markov models and the inside-outside algorithm for probabilistic context-free grammars are intuitive and easy to follow. However, as a resource for someone interested in entering this area of research, this book falls far short of its author\u2019s goals. These goals are clearly stated in the preface:"}
{"_id": "27e5bd13d581ef682b96038dce4c18f260122352", "title": "The PASCAL Recognising Textual Entailment Challenge", "text": "This paper describes the PASCAL Network of Excellence first Recognising Textual Entailment (RTE-1) Challenge benchmark. The RTE task is defined as recognizing, given two text fragments, whether the meaning of one text can be inferred (entailed) from the other. This application-independent task is suggested as capturing major inferences about the variability of semantic expression which are commonly needed across multiple applications. The Challenge has raised noticeable attention in the research community, attracting 17 submissions from diverse groups, suggesting the generic relevance of the task."}
{"_id": "48ef181956a748301dd40f3f97c690795a11c886", "title": "Adaptive duplicate detection using learnable string similarity measures", "text": "The problem of identifying approximately duplicate records in databases is an essential step for data cleaning and data integration processes. Most existing approaches have relied on generic or manually tuned distance metrics for estimating the similarity of potential duplicates. In this paper, we present a framework for improving duplicate detection using trainable measures of textual similarity. We propose to employ learnable text distance functions for each database field, and show that such measures are capable of adapting to the specific notion of similarity that is appropriate for the field's domain. We present two learnable text similarity measures suitable for this task: an extended variant of learnable string edit distance, and a novel vector-space based measure that employs a Support Vector Machine (SVM) for training. Experimental results on a range of datasets show that our framework can improve duplicate detection accuracy over traditional techniques."}
{"_id": "c42c34f4e71850cce97c857d6c094283be235ce7", "title": "Scaling Textual Inference to the Web", "text": "Most Web-based Q/A systems work by finding pages that contain an explicit answer to a question. These systems are helpless if the answer has to be inferred from multiple sentences, possibly on different pages. To solve this problem, we introduce the HOLMES system, which utilizes textual inference (TI) over tuples extracted from text. Whereas previous work on TI (e.g., the literature on textual entailment) has been applied to paragraph-sized texts, HOLMES utilizes knowledge-based model construction to scale TI to a corpus of 117 million Web pages. Given only a few minutes, HOLMES doubles recall for example queries in three disparate domains (geography, business, and nutrition). Importantly, HOLMES\u2019s runtime is linear in the size of its input corpus due to a surprising property of many textual relations in the Web corpus\u2014they are \u201capproximately\u201d functional in a well-defined sense."}
{"_id": "4f0a8597130c327c7c139a32e836619bd392b7a9", "title": "Diversified Visual Attention Networks for Fine-Grained Object Classification", "text": "Fine-grained object classification attracts increasing attention in multimedia applications. However, it is a quite challenging problem due to the subtle interclass difference and large intraclass variation. Recently, visual attention models have been applied to automatically localize the discriminative regions of an image for better capturing critical difference, which have demonstrated promising performance. Unfortunately, without consideration of the diversity in attention process, most of existing attention models perform poorly in classifying fine-grained objects. In this paper, we propose a diversified visual attention network (DVAN) to address the problem of fine-grained object classification, which substantially relieves the dependency on strongly supervised information for learning to localize discriminative regions com-pared with attention-less models. More importantly, DVAN explicitly pursues the diversity of attention and is able to gather discriminative information to the maximal extent. Multiple attention canvases are generated to extract convolutional features for attention. An LSTM recurrent unit is employed to learn the attentiveness and discrimination of attention canvases. The proposed DVAN has the ability to attend the object from coarse to fine granularity, and a dynamic internal representation for classification is built up by incrementally combining the information from different locations and scales of the image. Extensive experiments conducted on CUB-2011, Stanford Dogs, and Stanford Cars datasets have demonstrated that the pro-posed DVAN achieves competitive performance compared to the state-of-the-art approaches, without using any prior knowledge, user interaction, or external resource in training and testing."}
{"_id": "b2fc430d7606ebc9199b08232dc9c024a303dc55", "title": "DB2 Design Advisor: Integrated Automatic Physical Database Design", "text": "The DB2 Design Advisor in IBM\u00ae DB2\u00ae Universal DatabaseTM (DB2 UDB) Version 8.2 for Linux\u00ae, UNIX\u00ae and Windows\u00ae is a tool that, for a given workload, automatically recommends physical design features that are any subset of indexes, materialized query tables (also called materialized views), shared-nothing database partitionings, and multidimensional clustering of tables. Our work is the very first industrial-strength tool that covers the design of as many as four different features, a significant advance to existing tools, which support no more than just indexes and materialized views. Building such a tool is challenging, because of not only the large search space introduced by the interactions among features, but also the extensibility needed by the tool to support additional features in the future. We adopt a novel \u201chybrid\u201d approach in the Design Advisor that allows us to take important interdependencies into account as well as to encapsulate design features as separate components to lower the reengineering cost. The Design Advisor also features a built-in module that automatically reduces the given workload, and therefore provides great scalability for the tool. Our experimental results demonstrate that our tool can quickly provide good physical design recommendations that satisfy users\u2019 requirements."}
{"_id": "82bcb524a2036676bfa4ebd3324fe76013dced54", "title": "Differential Privacy as a Mutual Information Constraint", "text": "Differential privacy is a precise mathematical constraint meant to ensure privacy of individual pieces of information in a database even while queries are being answered about the aggregate. Intuitively, one must come to terms with what differential privacy does and does not guarantee. For example, the definition prevents a strong adversary who knows all but one entry in the database from further inferring about the last one. This strong adversary assumption can be overlooked, resulting in misinterpretation of the privacy guarantee of differential privacy. Herein we give an equivalent definition of privacy using mutual information that makes plain some of the subtleties of differential privacy. The mutual-information differential privacy is in fact sandwiched between \u03b5-differential privacy and (\u03b5,\u03b4)-differential privacy in terms of its strength. In contrast to previous works using unconditional mutual information, differential privacy is fundamentally related to conditional mutual information, accompanied by a maximization over the database distribution. The conceptual advantage of using mutual information, aside from yielding a simpler and more intuitive definition of differential privacy, is that its properties are well understood. Several properties of differential privacy are easily verified for the mutual information alternative, such as composition theorems."}
{"_id": "50b6bf5e79a50e5d68f06047d850644554164ddc", "title": "Large-signal-modulation of high-efficiency light-emitting diodes for optical communication", "text": "The dynamic behavior of high-efficiency light-emitting diodes (LEDs) is investigated theoretically and experimentally. A detailed theoretical description of the switch-on and switch-off transients of LEDs is derived. In the limit of small-signal modulation, the well-established exponential behavior is obtained. However, in the case of high injection, which is easily reached for thin active layer LEDs, the small-signal time constant is found to be up to a factor of two faster than the radiative recombination lifetime. Using such quantum-well LEDs, we have demonstrated optical data transfer with wide open eye diagrams at bit rates up to 2 Gbit/s. In addition, we have combined the use of thin active layers with the concept of surface-textured thin-film LEDs, which allow a significant improvement in the light extraction efficiency. With LEDs operating at 0.5 Gbit/s and 1 Gbit/s, we have achieved external quantum efficiencies of 36% and 29%, respectively."}
{"_id": "ecc4b0701f09415f6bf97084c0ed8dd05346599d", "title": "Highly Efficient Doherty Amplifier Based on Class-E Topology for WCDMA Applications", "text": "This letter reports a high-efficiency gallium nitride (GaN) high-electron mobility transistor (HEMT) Doherty power amplifier (DPA) based on the class-E topology for wideband code-division multiple-access (WCDMA) applications. The class-E topology is employed as the carrier and peaking cells of the Doherty configuration. For validations, the proposed DPA is designed and implemented with 25-W GaN HEMTs at 2.14 GHz. For the proposed DPA, the power-added efficiency (PAE) and drain efficiency of 56.1% and 61.2% are achieved at 40 dBm (6-dB backoff power from Psat) for a continuous wave. For a 1-carrier WCDMA signal, the PAE of 44.8% is obtained with an adjacent channel leakage ratio (ACLR) of -31 dBc at 37 dBm, which is an 8.9% improvement over the conventional DPA with an ACLR of -36.4 dBc."}
{"_id": "8acb187e276d8ed5066632f8e8d3938ce4b58e45", "title": "Botanical insecticides, deterrents, and repellents in modern agriculture and an increasingly regulated world.", "text": "Botanical insecticides have long been touted as attractive alternatives to synthetic chemical insecticides for pest management because botanicals reputedly pose little threat to the environment or to human health. The body of scientific literature documenting bioactivity of plant derivatives to arthropod pests continues to expand, yet only a handful of botanicals are currently used in agriculture in the industrialized world, and there are few prospects for commercial development of new botanical products. Pyrethrum and neem are well established commercially, pesticides based on plant essential oils have recently entered the marketplace, and the use of rotenone appears to be waning. A number of plant substances have been considered for use as insect antifeedants or repellents, but apart from some natural mosquito repellents, little commercial success has ensued for plant substances that modify arthropod behavior. Several factors appear to limit the success of botanicals, most notably regulatory barriers and the availability of competing products (newer synthetics, fermentation products, microbials) that are cost-effective and relatively safe compared with their predecessors. In the context of agricultural pest management, botanical insecticides are best suited for use in organic food production in industrialized countries but can play a much greater role in the production and postharvest protection of food in developing countries."}
{"_id": "740c9a0696bd7d4218793ad059c0794012715641", "title": "Event-based Sensing for Space Situational Awareness", "text": "A revolutionary type of imaging device, known as a silicon retina or event-based sensor, has recently been developed and is gaining in popularity in the field of artificial vision systems. These devices are inspired by a biological retina and operate in a significantly different way to traditional CCD-based imaging sensors. While a CCD produces frames of pixel intensities, an event-based sensor produces a continuous stream of events, each of which is generated when a pixel detects a change in log light intensity. These pixels operate asynchronously and independently, producing an event-based output with high temporal resolution. There are also no fixed exposure times, allowing these devices to offer a very high dynamic range independently for each pixel. Additionally, these devices offer high-speed, low power operation and a sparse spatiotemporal output. As a consequence, the data from these sensors must be interpreted in a significantly different way to traditional imaging sensors and this paper explores the advantages this technology provides for space imaging. The applicability and capabilities of event-based sensors for SSA applications are demonstrated through telescope field trials. Trial results have confirmed that the devices are capable of observing resident space objects from LEO through to GEO orbital regimes. Significantly, observations of RSOs were made during both day-time and nighttime (terminator) conditions without modification to the camera or optics. The event based sensor\u2019s ability to image stars and satellites during day-time hours offers a dramatic capability increase for terrestrial optical sensors. This paper shows the field testing and validation of two different architectures of event-based imaging sensors. An eventbased sensor\u2019s asynchronous output has an intrinsically low data-rate. In addition to low-bandwidth communications requirements, the low weight, low-power and high-speed make them ideally suitable to meeting the demanding challenges required by space-based SSA systems. Results from these experiments and the systems developed highlight the applicability of event-based sensors to ground and space-based SSA tasks."}
{"_id": "c92b11e7d8407338721e058d65bf80640d72101c", "title": "Separable spectro-temporal Gabor filter bank features: Reducing the complexity of robust features for automatic speech recognition.", "text": "To test if simultaneous spectral and temporal processing is required to extract robust features for automatic speech recognition (ASR), the robust spectro-temporal two-dimensional-Gabor filter bank (GBFB) front-end from Sch\u00e4dler, Meyer, and Kollmeier [J. Acoust. Soc. Am. 131, 4134-4151 (2012)] was de-composed into a spectral one-dimensional-Gabor filter bank and a temporal one-dimensional-Gabor filter bank. A feature set that is extracted with these separate spectral and temporal modulation filter banks was introduced, the separate Gabor filter bank (SGBFB) features, and evaluated on the CHiME (Computational Hearing in Multisource Environments) keywords-in-noise recognition task. From the perspective of robust ASR, the results showed that spectral and temporal processing can be performed independently and are not required to interact with each other. Using SGBFB features permitted the signal-to-noise ratio (SNR) to be lowered by 1.2\u2009dB while still performing as well as the GBFB-based reference system, which corresponds to a relative improvement of the word error rate by 12.8%. Additionally, the real time factor of the spectro-temporal processing could be reduced by more than an order of magnitude. Compared to human listeners, the SNR needed to be 13\u2009dB higher when using Mel-frequency cepstral coefficient features, 11 dB higher when using GBFB features, and 9\u2009dB higher when using SGBFB features to achieve the same recognition performance."}
{"_id": "f70ed78fe16021f168ebbdd5265e39f94f1f8cc1", "title": "Achievements : Re \u0304 ections on the Measurement of Women ' s Empowerment", "text": "This paper begins from the understanding that women's empowerment is about the process by which those who have been denied the ability to make strategic life choices acquire such an ability. Awide gap separates this processual understanding of empowerment from the more instrumentalist forms of advocacy which have required the measurement and quanti\u00aecation of empowerment. The ability to exercise choice incorporates three inter-related dimensions: resources (de\u00aened broadly to include not only access, but also future claims, to both material and human and social resources); agency (including processes of decision making, as well as less measurable manifestations of agency such as negotiation, deception and manipulation); and achievements (well-being outcomes). A number of studies of women's empowerment are analysed to make some important methodological points about the measurement of empowerment. The paper argues that these three dimensions of choice are indivisible in determining the meaning of an indicator and hence its validity as a measure of empowerment. The notion of choice is further quali\u00aeed by referring to the conditions of choice, its content and consequences. These quali\u00aecations represent an attempt to incorporate the structural parameters of individual choice in the analysis of women's empowerment. CONCEPTUALIZING EMPOWERMENT"}
{"_id": "b0888d997588946f9b4b44ecddfbfd960e281bb9", "title": "Low power LVDS transmitter with low common mode variation for 1GB/s-per pin operation", "text": "This paper presents the design of a low voltage differential signalling (LVDS) transmitter intended using a LCD panel interface and is capable of transmitting 1GB/s per pin data rate. The transmitter is fully LVDS standard compatible which is achieved by employing a common mode feedback circuit to meet all Ac and DC specifications over PVT variation, specifically due to the implementation of feedback circuit, the common mode variation is made very less quiescent current of 5.5mA. Further, the implementation incorporates the idea of sharing of bias blocks among different transmitters in full chip resulting in less active area. The transmitter is implemented in 3.3V, 0.35/spl mu/ CMOS technology. In this implementation the active area (excluding ESD) per transmitter is 0.039 mm square. The initial silicon data (output offset voltage of 1.20V and differential voltage of 320mV) at room temperature shows the consistency of the simulation results."}
{"_id": "9b277dedf6e725c7f7f857904c55ebd29e0bfa73", "title": "MAGIC\u2014Memristor-Aided Logic", "text": "Memristors are passive components with a varying resistance that depends on the previous voltage applied across the device. While memristors are naturally used as memory, memristors can also be used for other applications, including logic circuits. In this brief, a memristor-only logic family, i.e., memristor-aided logic (MAGIC), is presented. In each MAGIC logic gate, memristors serve as an input with previously stored data, and an additional memristor serves as an output. The topology of a MAGIC nor gate is similar to the structure of a common memristor-based crossbar memory array. A MAGIC nor gate can therefore be placed within memory, providing opportunities for novel non-von Neumann computer architectures. Other MAGIC gates also exist (e.g., and, or, not, and nand) and are described in this brief."}
{"_id": "32c20afb5c91ed7cdbafb76408c3a62b38dd9160", "title": "Viewing Real-World Faces in 3D", "text": "We present a data-driven method for estimating the 3D shapes of faces viewed in single, unconstrained photos (aka \"in-the-wild\"). Our method was designed with an emphasis on robustness and efficiency - with the explicit goal of deployment in real-world applications which reconstruct and display faces in 3D. Our key observation is that for many practical applications, warping the shape of a reference face to match the appearance of a query, is enough to produce realistic impressions of the query's 3D shape. Doing so, however, requires matching visual features between the (possibly very different) query and reference images, while ensuring that a plausible face shape is produced. To this end, we describe an optimization process which seeks to maximize the similarity of appearances and depths, jointly, to those of a reference model. We describe our system for monocular face shape reconstruction and present both qualitative and quantitative experiments, comparing our method against alternative systems, and demonstrating its capabilities. Finally, as a testament to its suitability for real-world applications, we offer an open, on-line implementation of our system, providing unique means of instant 3D viewing of faces appearing in web photos."}
{"_id": "b36907df813c42213360348375eed564ee3f39f8", "title": "Sensorized Garments and Textrode-Enabled Measurement Instrumentation for Ambulatory Assessment of the Autonomic Nervous System Response in the ATREC Project", "text": "Advances in textile materials, technology and miniaturization of electronics for measurement instrumentation has boosted the development of wearable measurement systems. In several projects sensorized garments and non-invasive instrumentation have been integrated to assess on emotional, cognitive responses as well as physical arousal and status of mental stress through the study of the autonomous nervous system. Assessing the mental state of workers under stressful conditions is critical to identify which workers are in the proper state of mind and which are not ready to undertake a mission, which might consequently risk their own life and the lives of others. The project Assessment in Real Time of the Stress in Combatants (ATREC) aims to enable real time assessment of mental stress of the Spanish Armed Forces during military activities using a wearable measurement system containing sensorized garments and textile-enabled non-invasive instrumentation. This work describes the multiparametric sensorized garments and measurement instrumentation implemented in the first phase of the project required to evaluate physiological indicators and recording candidates that can be useful for detection of mental stress. For such purpose different sensorized garments have been constructed: a textrode chest-strap system with six repositionable textrodes, a sensorized glove and an upper-arm strap. The implemented textile-enabled instrumentation contains one skin galvanometer, two temperature sensors for skin and environmental temperature and an impedance pneumographer containing a 1-channel ECG amplifier to record cardiogenic biopotentials. With such combinations of garments and non-invasive measurement devices, a multiparametric wearable measurement system has been implemented able to record the following physiological parameters: heart and respiration rate, skin galvanic response, environmental and peripheral temperature. To ensure the proper functioning of the implemented garments and devices the full series of 12 sets have been functionally tested recording cardiogenic biopotential, thoracic impedance, galvanic skin response and temperature values. The experimental results indicate that the implemented wearable measurement systems operate according to the specifications and are ready to be used for mental stress experiments, which will be executed in the coming phases of the project with dozens of healthy volunteers."}
{"_id": "2c075293886b601570024b638956828b4fbc6a24", "title": "CPU and/or GPU: Revisiting the GPU Vs. CPU Myth", "text": "Parallel computing using accelerators has gained widespre ad research attention in the past few years. In particular, using GPUs for general purpose computing has brought forth several success stories with respect to time taken, cost, power, and other metrics. Howev er, accelerator based computing has significantly relegated the role of CPUs in computation. As CPUs evo lv and also offer matching computational resources, it is important to also include CPUs in the comput ation. We call this thehybrid computing model. Indeed, most computer systems of the present age offe r a d gree of heterogeneity and therefore such a model is quite natural. We reevaluate the claim of a recent paper by Lee et al.(ISCA 20 10). We argue that the right question arising out of Lee et al. (ISCA 2010) should be how to use a CPU+ GPU platform efficiently, instead of whether one should use a CPU or a GPU exclusively. To this end, we experiment with a set of 13 diverse workloads ranging from databases, image processing, spars e matrix kernels, and graphs. We experiment with two different hybrid platforms: one consisting of a 6-c ore Intel i7-980X CPU and an NVidia Tesla T10 GPU, and another consisting of an Intel E7400 dual core CP U with an NVidia GT520 GPU. On both these platforms, we show that hybrid solutions offer good ad vantage over CPU or GPU alone solutions. On both these platforms, we also show that our solutions are 9 0% resource efficient on average. Our work therefore suggests that hybrid computing can offer tremendous advantages at not only research-scale platforms but also the more realistic scale yst ms with significant performance gains and resource efficiency to the large scale user community."}
{"_id": "79512622696b3e95c8a2507e30fb5219c4f5e34a", "title": "Research on data augmentation for image classification based on convolution neural networks", "text": "The performance of deep convolution neural networks will be further enhanced with the expansion of the training data set. For the image classification tasks, it is necessary to expand the insufficient training image samples through various data augmentation methods. This paper explores the impact of various data augmentation methods on image classification tasks with deep convolution Neural network, in which Alexnet is employed as the pre-training network model and a subset of CIFAR10 and ImageNet (10 categories) are selected as the original data set. The data augmentation methods used in this paper include: GAN/WGAN, Flipping, Cropping, Shifting, PCA jittering, Color jittering, Noise, Rotation, and some combinations. Experimental results show that, under the same condition of multiple increasing, the performance evaluation on small-scale data sets is more obvious, the four individual methods (Cropping, Flipping, WGAN, Rotation) perform generally better than others, and some appropriate combination methods are slightly more effective than the individuals."}
{"_id": "27e9a525e9bbd087655795284427807aea1e16ae", "title": "Ricci Flow for 3D Shape Analysis", "text": "Ricci flow is a powerful curvature flow method, which is invariant to rigid motion, scaling, isometric, and conformal deformations. We present the first application of surface Ricci flow in computer vision. Previous methods based on conformal geometry, which only handle 3D shapes with simple topology, are subsumed by the Ricci flow-based method, which handles surfaces with arbitrary topology. We present a general framework for the computation of Ricci flow, which can design any Riemannian metric by user-defined curvature. The solution to Ricci flow is unique and robust to noise. We provide implementation details for Ricci flow on discrete surfaces of either Euclidean or hyperbolic background geometry. Our Ricci flow-based method can convert all 3D problems into 2D domains and offers a general framework for 3D shape analysis. We demonstrate the applicability of this intrinsic shape representation through standard shape analysis problems, such as 3D shape matching and registration, and shape indexing. Surfaces with large nonrigid anisotropic deformations can be registered using Ricci flow with constraints of feature points and curves. We show how conformal equivalence can be used to index shapes in a 3D surface shape space with the use of Teichmuller space coordinates. Experimental results are shown on 3D face data sets with large expression deformations and on dynamic heart data."}
{"_id": "34c84bc46dbac879d82b675d2adf2b7f1911d387", "title": "DARPA's explainable artificial intelligence (XAI) program", "text": "The DARPA's Explainable Artificial Intelligence (XAI) program endeavors to create AI systems whose learned models and decisions can be understood and appropriately trusted by end users. This talk will summarize the XAI program and present highlights from these Phase 1 evaluations."}
{"_id": "505342c1f45289c28bba0b52dca5a4b559257a1e", "title": "Automatic Identification of Retinal Arteries and Veins in Fundus Images using Local Binary Patterns", "text": "Artery and vein (AV) classification of retinal images is a key to necessary tasks, such as automated measurement of arteriolar-to-venular diameter ratio (AVR). This paper comprehensively reviews the state-of-the art in AV classification methods. To improve on previous methods, a new Local Binary Pattern-based method (LBP) is proposed. Beside its simplicity, LBP is robust against low contrast and low quality fundus images; and it helps the process by including additional AV texture and shape information. Experimental results compare the performance of the new method with the state-of-the art; and also methods with different feature extraction and classification schemas."}
{"_id": "362fd0d33c725a2508698f1d4c989fa91049418f", "title": "Improving code review effectiveness through reviewer recommendations", "text": "Effectively performing code review increases the quality of software and reduces occurrence of defects. However, this requires reviewers with experiences and deep understandings of system code. Manual selection of such reviewers can be a costly and time-consuming task. To reduce this cost, we propose a reviewer recommendation algorithm determining file path similarity called FPS algorithm. Using three OSS projects as case studies, FPS algorithm was accurate up to 77.97%, which significantly outperformed the previous approach."}
{"_id": "f1ca01eedf3514bbc19677b2baf4c2bf29ac7946", "title": "Side-channel leakage aware instruction scheduling", "text": "Speed-optimized side-channel protected software implementations of block ciphers are important for the security of embedded IoT devices based on general-purpose microcontrollers. The recent work of Schwabe et al. published at SAC 2016 introduced a bit-sliced implementation of AES and a first-order Boolean-masked version of it, targeting ARM Cortex-M CPU cores. The authors claim to be secure against timing as well as first-order power and electromagnetic side-channel attacks. However, the author's security claims are not taking the actual leakage characteristics of the underlying CPU architecture into account, hence making the scheme potentially vulnerable to first-order attacks in practice. In this work we show indeed that such a masking scheme can be attacked very easily by first-order electromagnetic side-channel attacks. In order to fix the issue and provide practical first-order security, we provide a strategy to schedule program instructions in way that the specific leakage of the CPU does not impair the side-channel countermeasure."}
{"_id": "9cb0d830a8856d59d0a5e9210d3ba1c3e1def707", "title": "Rationalizing Sentiment Analysis in Tensorflow", "text": "Sentiment analysis using deep learning models is a leading subject of interest in Natural Language Processing that is as powerful as it is opaque. Current state-ofthe-art models can produce accurate predictions, but they provide little insight as to why the model predicted this sentiment. Businesses relying on these models might be less likely to act on insight given the lack of evidence for predictions. These people would be more likely to trust such predictions if a brief explanation of the outcome is provided. Recent work by Lei et al [4]. has set forth a framework for a multi-aspect sentiment analysis concurrently providing text rationalization with each prediction. This framework sets forth a two-part approach, which summarizes a review and predicts a sentiment. In this paper, we explore the performance of this framework, seeking to recreate and improve upon it in TensorFlow."}
{"_id": "95cd6ef496b4b4ffd249581dc76856f2af9e7a66", "title": "Grid Connected PV Systems with Smart Grid functionality", "text": "..................................................................................................................................... ii Declaration ................................................................................................................................ iii Acknowledgements ................................................................................................................... iv Table of"}
{"_id": "b5e7f5284f11b27b8d7de0ee2b304604377ef8e0", "title": "Speech Enhancement Using a-Minimum Mean-Square Error Short-Time Spectral Amplitude Estimator", "text": "Absstroct-This paper focuses on the class of speech enhancement systems which capitalize on the major importance of the short-time spectral amplitude (STSA) of the speech signal in its perception. A system which utilizes a minimum mean-square error (MMSE) STSA estimator is proposed and then compared with other widely used systems which are based on Wiener filtering and the \" spectral subtraction \" algorithm. In this paper we derive the MMSE STSA estimator, based on modeling speech and noise spectral components as statistically independent Gaussian random variables. We analyze the performance of the proposed STSA estimator and compare it with a STSA estimator derived from the Wiener estimator. We also examine the MMSE STSA estimator under uncertainty of signal presence in the noisy observations. In constructing the enhanced signal, the MMSE STSA estimator is combined with the complex exponential of the noisy phase. It is shown here that the latter is the MMSE estimator of the complex exponential of the original phase, which does not affect the STSA estimation. The proposed approach results in a significant reduction of the noise, and provides enhanced speech with colorless residual noise. The complexity of the proposed algorithm is approximately that of other systems in the discussed class."}
{"_id": "b79676abcd5c7c2009e07a569d76538d1f3947bd", "title": "Elimination of the musical noise phenomenon with the Ephraim and Malah noise suppressor", "text": "This paper presents a study of the noise suppression technique proposed by Ephraim and Malah. This technique has been used recently for the restoration of degraded audio recordings because it is free of the frequently encountered \u2018musical noise\u2019 artifact. It is demonstrated how this artifact is actually eliminated without bringing distortion to the recorded signal even if the noise is only poorly stationary."}
{"_id": "4ad35158e11f8def2ba3c389df526f5664ab5d65", "title": "Speech enhancement using a minimum mean-square error log-spectral amplitude estimator", "text": ""}
{"_id": "97ae7cfd5583f9282dbc3083c6d0b3f01400cd1a", "title": "Enhancement and bandwidth compression of noisy speech", "text": "Over the past several years there has been considerable attention focused on the problem of enhancement and bandwidth compression of speech degraded by additive background noise. This interest is motivated by several factors including a broad set of important applications, the apparent lack of robustness in current speech-compression systems and the development of several potentially promising and practical solutions. One objective of this paper is to provide an overview of the variety of techniques that have been proposed for enhancement and bandwidth compression of speech degraded by additive background noise. A second objective is to suggest a unifying framework in terms of which the relationships between these systems is more visible and which hopefully provides a structure which will suggest fruitful directions for further research."}
{"_id": "c1169df27b21cc39e1f032f142f28ef6cc8ce883", "title": "Narrative Schema Stability in News Text", "text": "We investigate the stability of narrative schemas (Chambers & Jurafsky, 2009) automatically induced from a news corpus, representing recurring narratives in a corpus. If such techniques produce meaningful results, we should expect that small changes to the corpus will result in only small changes to the induced schemas. We describe experiments involving successive ablation of a corpus and cross-validation at each stage of ablation, on schemas generated by three different techniques over a general news corpus and topically-specific subcorpora. We also develop a method for evaluating the similarity between sets of narrative schemas, and thus the stability of the schema induction algorithms. This stability analysis affirms the heterogeneous/homogeneous document category hypothesis first presented in Simonson & Davis (2016), whose technique is problematically limited. Additionally, increased ablation leads to increasing stability, so the smaller the remaining corpus, the more stable schema generation appears to be. We surmise that as a corpus grows larger, novel and more varied narratives continue to appear and stability declines, though at some point this decline levels off as new additions to the corpus consist essentially of \u201cmore of the same.\u201d"}
{"_id": "6a131d921a6633699d050ca25c93e6e94ab95cf7", "title": "Content Centric Networking in tactical and emergency MANETs", "text": "Reliable and secure content distribution in a disruptive environment is a critical challenge due to high mobile and lossy channels. Traditional IP networking and wireless protocols tend to perform poorly. In this paper, we propose Content Centric Networking (CCN) for emergency wireless ad hoc environments. CCN is a novel communication architecture capable to access and retrieve content by name. This new approach achieves scalability, security, and efficient network resource management in large scale disaster recovery and battlefield networks. Existing Internet CCN schemes cannot be directly applied to wireless mobile ad hoc networks due to different environments and specific limitations. Thus, we must extend the CCN architecture by introducing features and requirements especially designed for disruptive networks. We prove feasibility and performance gain of the new design via implementation and experimentation."}
{"_id": "ae5470e3bccfa343f0c6de40e144db6109f377dc", "title": "Tunable VHF Miniaturized Helical Filters", "text": "In this paper, a miniaturization technique and a tuning concept that enable the realization of highly miniaturized and frequency reconfigurable helical filters are reported for the first time. The proposed filter design is based on tunable helical resonators with an internal volume of only 8.6 cm 3 and variable capacitive loading. A second-order bandpass filter was built and measured as a proof-of-concept demonstrator. The helical filter is manufactured with a CNC process and tuning is achieved through piezoelectrically actuated tuners, which are bonded over the helix apex. A measured center frequency tuning between 257-365 MHz was realized with an insertion loss between 0.3-3.2 dB including connector losses. The measured filter bandwidth varied between 6%-7.7%. Tunable resonators were also manufactured so as to extract the unloaded quality factor. It was measured between 392-639 for a center frequency tuning of 2.5:1 (156.5-395 MHz)."}
{"_id": "dd88a853f8f13602c98dec3150709998e831308a", "title": "Blind Speech Separation and Enhancement With GCC-NMF", "text": "We present a blind source separation algorithm named GCC-NMF that combines unsupervised dictionary learning via non-negative matrix factorization NMF with spatial localization via the generalized cross correlation GCC method. Dictionary learning is performed on the mixture signal, with separation subsequently achieved by grouping dictionary atoms, at each point in time, according to their spatial origins. The resulting source separation algorithm is simple yet flexible, requiring no prior knowledge or information. Separation quality is evaluated for three tasks using stereo recordings from the publicly available SiSEC signal separation evaluation campaign: 3 and 4 concurrent speakers in reverberant environments, speech mixed with real-world background noise, and noisy recordings of a moving speaker. Performance is quantified using perceptually motivated and SNR-based measures with the PEASS and BSS Eval toolkits, respectively. We evaluate the effects of model parameters on separation quality, and compare our approach with other unsupervised and semi-supervised speech separation and enhancement approaches. We show that GCC-NMF is a flexible source separation algorithm, outperforming task-specific approaches in each of the three settings, including both blind as well as several informed approaches that require prior knowledge or information."}
{"_id": "58a34752553d41133f807ee37a6796c5193233f2", "title": "Anomaly detection approach using hybrid algorithm of data mining technique", "text": "The excessive use of the communication networks, rising of Internet of Things leads to increases the vulnerability to the important and secret information. advance attacking techniques and number of attackers are increasing radically. Intrusion is one of the main threats to the internet. Hence security issues had been big problem, so that various techniques and approaches have been presented to address the limitations of intrusion detection system such as low accuracy, high false alarm rate, and time consuming. This paper proposes a hybrid machine learning technique for network intrusion detection based on combination of K-means clustering and Sequential Minimal Optimization (SMO) classification. It introduces hybrid approach that able to reduce the rate of false positive alarm, false negative alarm rate, to improve the detection rate and detect zero-day attackers. The NSL-KDD dataset has been used in the proposed technique.. The classification has been performed by using Sequential Minimal Optimization. After training and testing the proposed hybrid machine learning technique, the results have shown that the proposed technique (K-mean + SMO) has achieved a positive detection rate of (94.48%) and reduce the false alarm rate to (1.2%) and achieved accuracy of (97.3695%)."}
{"_id": "ac532b0476f06afa6c2772f130c70740debe4f35", "title": "A Deep Network with Visual Text Composition Behavior", "text": "While natural languages are compositional, how state-of-the-art neural models achieve compositionality is still unclear. We propose a deep network, which not only achieves competitive accuracy for text classification, but also exhibits compositional behavior. That is, while creating hierarchical representations of a piece of text, such as a sentence, the lower layers of the network distribute their layer-specific attention weights to individual words. In contrast, the higher layers compose meaningful phrases and clauses, whose lengths increase as the networks get deeper until fully composing the sentence."}
{"_id": "d71d5e4869bee1816907a0aa528abc533f600b79", "title": "Balanced-to-Unbalanced Bandpass Filters and the Antenna Application", "text": "Novel balanced-to-unbalanced (balun) bandpass filters are presented and carefully examined. By properly converting a symmetric four-port balanced-to-balanced bandpass filter to a three-port device, with two ports forming a balanced output and one port remaining unbalanced, a novel three-port balun bandpass filter with both functions of balun and bandpass filter may be realized. Specifically, a balun bandpass filter is implemented with its center frequency at 0.99 GHz and a 3-dB fractional bandwidth (Bomega) of 10.1%. The implemented balun bandpass filter presents an excellent in-band balance performance with common-mode rejection ratio (Rcm) better than 41 dB (the corresponding amplitude and phase imbalances are within 0.003deg and 1deg) over the pass- band. In this study, a wideband balun bandpass filter is also implemented so as to improve the wideband performance of the quasi- Yagi antenna. Being integrated with the proposed wideband balun bandpass filter, the implemented quasi-Yagi antenna achieves a bandwidth of 48% for IS11 | < -10 dB, with front-to-back ratio better than 13 dB, cross-polarization smaller than -20 dB, and peak gain of 3-5 dBi within the wide operating bandwidth."}
{"_id": "f1130c28890ddcf147d1ed3476ebb06b4ea2b494", "title": "Active defence through deceptive IPS", "text": "Modern security mechanisms such as Unified Threat Management (UTM), Next-Generation Firewalls and Security Information and Event Management (SIEM) have become more sophisticated over recent years, promising advanced security features and immediate mitigation of the most advanced threats. While this appears promising, in practice even this cutting-edge technology often fails to protect modern organisations as they are being targeted by attacks that were previously unknown to the security industry. Most security mechanisms are based on a database of previously known attack artefacts (signatures) and they will fail on slightly modified or new attacks. The need for threat intelligence is in complete contrast with the way current security solutions are responding to the threats they identify, as they immediately block them without attempting to acquire any further information. In this report, we present and evaluate a security mechanism that operates as an intrusion prevention system which uses honeypots to deceive an attacker, prevent a security breach and which allows the potential acquisition of intelligence on each intrusion attempt. a aThis article is published online by Computer Weekly as part of the 2017 Royal Holloway information security thesis series http://www.computerweekly.com/ehandbook/Active-defence-through-deceptive-IPS. It is based on an MSc dissertation written as part of the MSc in Information Security at the ISG, Royal Holloway, University of London. The full thesis is published on the ISG\u2019s website at https://www.royalholloway.ac.uk/isg/."}
{"_id": "cb7102a4d53f964db3e1fe11d6218cd09b21abb4", "title": "Gait Generation With Smooth Transition Using CPG-Based Locomotion Control for Hexapod Walking Robot", "text": "This paper presents a locomotion control method based on central pattern generator (CPG) for hexapod walking robot to achieve gait generation with smooth transition. By deriving an analytical limit cycle approximation of the Van der Pol oscillator, a simple diffusive coupling scheme is proposed to construct a ring-shape CPG network with phase-locked behavior. The stability of the proposed network is proved using synchronization analysis with guaranteed uniform ultimate boundedness of the synchronous errors. In contrast to conventional numerical methods in tuning parameters of the CPG network, our method provides an explicit result that could analytically determine the network parameters to yield the rhythmic waveforms with prescribed frequency, amplitude, and phase relationship among neurons. Employing the proposed network to govern the swing/stance phase according to the profile of the resulting CPG signals, a locomotion control strategy for the hexapod robot is further developed to manipulate the leg movements during the gait cycle. By simply adjusting the phase lags of the CPG network, the proposed control strategy is capable of generating stable walking gaits for hexapod robots and achieving smooth transition among the generated gaits. The simulation and experimental results have demonstrated the effectiveness of the proposed locomotion control method."}
{"_id": "a6459c7ad0491655158cd7e05a15feeedbea58fb", "title": "Microbial Fuel Cells for Wastewater Treatment", "text": "1.1 Screening The influent is strained to remove all large objects carried in the sewage stream. This is most commonly performed with an automated mechanically-raked bar screen in modern plants serving large populations, whilst in smaller or less modern plants a manually-cleaned screen may be used. The raking action of a mechanical bar screen is typically paced according to the accumulation on the bar screens and/or flow rate. The solids are collected and later disposed of in landfill or incinerated. Bar screens or mesh screens of varying sizes may be used to optimise solids removal, so as to trap and remove the floating matter, such as pieces of cloth, paper, wood, kitchen refuse, etc. These floating materials will choke pipes or adversely affect the working of the pumps if not removed. They should be placed before the grit chambers. However, if the quality of grit is not of much importance, as in the case of landfilling etc., screens may even be placed after the grit chambers. They may sometimes be accommodated in the body of the grit chambers themselves."}
{"_id": "7f90d5bdbc4caa78d1cec17326e417b98b07b399", "title": "Detecting hot topics from Twitter: A multiview approach", "text": "Twitter is widely used all over the world, and a huge number of hot topics are generated by Twitter users in real time. These topics are able to reflect almost every aspect of people\u2019s daily lives. Therefore, the detection of topics in Twitter can be used in many real applications, such as monitoring public opinion, hot product recommendation and incidence detection. However, the performance of traditional topic detection methods is still far from perfect largely owing to the tweets\u2019 features, such as their limited length and arbitrary abbreviations. To address these problems, we propose a novel framework (MVTD) for Twitter topic detection using multiview clustering, which can integrate multirelations among tweets, such as semantic relations, social tag relations and temporal relations. We also propose some methods for measuring relations among tweets. In particular, to better measure the semantic similarity of tweets, we propose a new document similarity measure based on a suffix tree (STVSM). In addition, a new keyword extraction method based on a suffix tree is proposed. Experiments on real datasets show that the performance of MVTD is much better than that of a single view, and it is useful for detecting topics from Twitter."}
{"_id": "9608a11dce79321d1b5fd8ccc1343c9bb2481a31", "title": "Job insecurity and organizational citizenship behavior: exploring curvilinear and moderated relationships.", "text": "This article examined a curvilinear relationship between job insecurity and organizational citizenship behavior (OCB). Drawing from social exchange theory and research on personal control, we developed and tested an explanation for employees' reactions to job insecurity based on their conceptualization of their social exchange relationship with the organization at different levels of job insecurity. Using data from 244 Chinese employees and 102 supervisory ratings of OCB, we found support for a U-shaped relationship between job insecurity and OCB. Moreover, 2 factors--psychological capital and subordinate-supervisor guanxi--moderated the curvilinear relationship, such that the curvilinear relationship is more pronounced among those with lower psychological capital or less positive subordinate-supervisor guanxi."}
{"_id": "165179ea85512f3d879b973393676e08861241dc", "title": "A study on the road accidents using data investigation and visualization in Los Ba\u00f1os, Laguna, Philippines", "text": "Road safety is one of the most vital and crucial part of every countries daily economic growth. It gives so much impact in public health specially in the Philippines. Securing its safety will be a big help to a particular country's economic growth. The main objective of this paper is to analyze road accident's data to discover hidden pattern that can be used as precautionary measure to at least lessen the accident that occur yearly in Los Ba\u00f1os, Laguna, Philippines. Predicting algorithms such as Decision Tree, Na\u00efve Bayes and Rule induction were used to identify factors affecting accident in Los Ba\u00f1os, Laguna. Using these three classifier the following are the results obtained by the researchers; for Decision Tree 92.84% accuracy occurred with 0.797 kappa while in Na\u00efve Bayes 91.50% accuracy was generated with 0.741 kappa and 92.50% accuracy for Rule Induction and 0.783 kappa was produced. The researchers discovered that the place where an accident happened don't have significant correlation on the fatality of the victim. On the other hand, the researchers also found that the time and day play a vital role on the fatality or severity of victims of road accident particularly car collision."}
{"_id": "f8af6bcfc71614d762d7f724a90677abc46b67c0", "title": "Feature reduction and selection for EMG signal classification", "text": "0957-4174/$ see front matter 2012 Elsevier Ltd. A doi:10.1016/j.eswa.2012.01.102 \u21d1 Corresponding author. Tel.: +66 74 55 8831; fax: E-mail address: angkoon.p@hotmail.com (A. Phiny Feature extraction is a significant method to extract the useful information which is hidden in surface electromyography (EMG) signal and to remove the unwanted part and interferences. To be successful in classification of the EMG signal, selection of a feature vector ought to be carefully considered. However, numerous studies of the EMG signal classification have used a feature set that have contained a number of redundant features. In this study, most complete and up-to-date thirty-seven time domain and frequency domain features have been proposed to be studied their properties. The results, which were verified by scatter plot of features, statistical analysis and classifier, indicated that most time domain features are superfluity and redundancy. They can be grouped according to mathematical property and information into four main types: energy and complexity, frequency, prediction model, and time-dependence. On the other hand, all frequency domain features are calculated based on statistical parameters of EMG power spectral density. Its performance in class separability viewpoint is not suitable for EMG recognition system. Recommendation of features to avoid the usage of redundant features for classifier in EMG signal classification applications is also proposed in this study. 2012 Elsevier Ltd. All rights reserved."}
{"_id": "53c173cdaff795114faa2aebc080c2cec6d2c9a2", "title": "An Introduction to the Theory of Mechanism Design", "text": "1 The authors will donate all payments that they receive from the publisher for this book to Amnesty International."}
{"_id": "80e15425716b76920b5c1278953fb2ea615aae54", "title": "Semantic exploitation of implicit patent information", "text": "In recent years patents have become increasingly important for businesses to protect their intellectual capital and as a valuable source of information. Patent information is, however, not employed to its full potential and the interpretation of structured and unstructured patent information in large volumes remains a challenge. We address this by proposing an integrated interdisciplinary approach that uses natural language processing and machine learning techniques to formalize multilingual patent information in an ontology. The ontology further contains patent and domain specific knowledge, which allows for aligning patents with technological fields of interest and other business-related artifacts. Our empirical evaluation shows that for categorizing patents according to well-known technological fields of interest, the approach achieves high accuracy with selected feature sets compared to related work focussing on monolingual patents. We further show that combining OWL RL reasoning with SPARQL querying over the patent knowledge base allows for answering complex business queries and illustrate this with real-world use cases from the automotive domain."}
{"_id": "70912aecc5e4edd517263b7092eaf8d73e16f116", "title": "Dynamic Pricing in Spatial Crowdsourcing: A Matching-Based Approach", "text": "In spatial crowdsourcing, requesters submit their task-related locations and increase the demand of a local area. The platform prices these tasks and assigns spatial workers to serve if the prices are accepted by requesters. There exist mature pricing strategies which specialize in tackling the imbalance between supply and demand in a local market. However, in global optimization, the platform should consider the mobility of workers; that is, any single worker can be the potential supply for several areas, while it can only be the true supply of one area when assigned by the platform. The hardness lies in the uncertainty of the true supply of each area, hence the existing pricing strategies do not work. In the paper, we formally define this Global Dynamic Pricing(GDP) problem in spatial crowdsourcing. And since the objective is concerned with how the platform matches the supply to areas, we let the matching algorithm guide us how to price. We propose a MAtching-based Pricing Strategy (MAPS) with guaranteed bound. Extensive experiments conducted on the synthetic and real datasets demonstrate the effectiveness of MAPS."}
{"_id": "2be709dc223d12c88b68ba53d4c949577b65c76e", "title": "Mechanics and energetics of human locomotion on sand.", "text": "Moving about in nature often involves walking or running on a soft yielding substratum such as sand, which has a profound effect on the mechanics and energetics of locomotion. Force platform and cinematographic analyses were used to determine the mechanical work performed by human subjects during walking and running on sand and on a hard surface. Oxygen consumption was used to determine the energetic cost of walking and running under the same conditions. Walking on sand requires 1.6-2.5 times more mechanical work than does walking on a hard surface at the same speed. In contrast, running on sand requires only 1.15 times more mechanical work than does running on a hard surface at the same speed. Walking on sand requires 2.1-2.7 times more energy expenditure than does walking on a hard surface at the same speed; while running on sand requires 1.6 times more energy expenditure than does running on a hard surface. The increase in energy cost is due primarily to two effects: the mechanical work done on the sand, and a decrease in the efficiency of positive work done by the muscles and tendons."}
{"_id": "330008af4074ef0e2b21787677783827d6a15056", "title": "Open Set Learning with Counterfactual Images", "text": "In open set recognition, a classifier must label instances of known classes while detecting instances of unknown classes not encountered during training. To detect unknown classes while still generalizing to new instances of existing classes, we introduce a dataset augmentation technique that we call counterfactual image generation. Our approach, based on generative adversarial networks, generates examples that are close to training set examples yet do not belong to any training category. By augmenting training with examples generated by this optimization, we can reformulate open set recognition as classification with one additional class, which includes the set of novel and unknown examples. Our approach outperforms existing open set recognition algorithms on a selection of image classification tasks."}
{"_id": "d0ba8b60980b940446f581ba87f871a325cf3cca", "title": "Obstacle Detection on Railway Track by Fusing Radar and Image Sensor", "text": "Novel technology to recognize the situation in distant place is necessary to develop a railway safety monitoring system by which human being having fallen onto the tracks from a platform and obstacles in the level crossing can be detected. In this research, we propose a method for detecting a stationary or moving obstacle by the technology which employs the super resolution radar techniques, the image recognition techniques, and the technology to fuse these techniques. Our method is designed for detecting obstacles such as cars, bicycles, and human on the track in the range up to hundreds of meters ahead by using sensors mounted on a train. In super resolution radar techniques, novel stepped multiple frequency radar is confirmed to provide the expected high resolution performance by experimental study using software defined radar. We designed the parameters of the stepped multiple frequency radar. The signal processing processor which employs A/Ds, D/As of high-sampling rate and the highest performance FPGAs has been designed and developed. In image recognition techniques, the main algorithm which detects a pair of rails is based on information about rail edges and intensity gradients. As the obstacle exists in the vicinity of rails, all the detection processes can be limited to the region in the vicinity of rails. As an example, we detect an obstacle by estimating image boundaries which enclose a group of feature points exactly on the obstacle. In order to determine the group of feature points, feature points of the whole image are detected at each frames and tracked over image sequence. For robustness estimation of boundary, we use radar so as to detect the obstacle\u2019s rough position in an image region where an obstacle would exist; the motion segmentation technique is applied to the tracks that is located in the region. Note that an obstacle's position detected by radar is remarkably rough because radar has low transverse resolution. Nevertheless, the position detected by radar contributes to rapid and robust estimation of satisfactory image boundaries. To allow for monitoring in large distances, a prototype for an onboard pan/tilt camera platform with directional control was designed and manufactured as required in a railway situation. To detect obstacles in real time, we have built a high-performance test machine with a GPU and an image processing environment."}
{"_id": "8b4919ca7362e575dfa760cdf5d54b713be8fbc4", "title": "The potential distribution measurement on the stress grading system of high-voltage rotating machines by using pockels effect", "text": "A surface potential measuring system was developed by utilizing Pockels sensor for measuring the potential distribution on the stress grading (SG) system of form wound motor coils. A Bi4Ge3O12 (BGO) Pockels crystal is put close to a measuring object and the capacitively-induced voltage on the end surface of the BGO crystal is measured by using Pockels effect of the crystal itself. The spatial resolution of the measuring system is 5 mm, and the frequency response reaches up to GHz-range. With this system, the surface potential distributions on a model bar coil were measured. According to the measuring results, it is shown that the newly developed potential measuring system here is very useful for evaluating the reliability of the SG system of motors."}
{"_id": "3df916713a73c726aa6fc30e135df69eb616be05", "title": "Polarization-agile antennas", "text": "A polarization-agile antenna is a type of antenna the polarization state of which can be changed dynamically, i.e., it can have either linear (vertical or horizontal) or circular polarization (left hand or right hand), depending on the requirements of its specific application. This special characteristic offers great flexibility for antenna systems, because a single antenna could be used to satisfy the different requirements of several systems. This paper gives a comprehensive review of polarization-agile antennas. The basics of polarization in antennas and polarization-agile antennas are explained. Various examples of recent developments are then provided, and the antenna configurations and design principles are illustrated and discussed. Finally, the future development of polarization-agile antennas for wireless systems is discussed"}
{"_id": "31b8f64435ae710c364b21b4c7ff3543f0189feb", "title": "Hesitation Disfluencies in Spontaneous Speech: The Meaning of um", "text": "Human speech is peppered with ums and uhs, among other signs of hesitation in the planning process. But are these so-called fillers (or filled pauses) intentionally uttered by speakers, or are they side-effects of difficulties in the planning process? And how do listeners respond to them? In the present paper, we review evidence concerning the production and comprehension of fillers such as um and uh, in an attempt to determine whether they can be said to be \u2018words\u2019 with \u2018meanings\u2019 that are understood by listeners. We conclude that, whereas listeners are highly sensitive to hesitation disfluencies in speech, there is little evidence to suggest that they are intentionally produced, or should be considered to be words in the conventional sense. When humans communicate, their messages are conveyed by more than just the words they use. They can, for example, use gesture to indicate what a phrase such as \u2018this one\u2019 refers to, or change their tone of voice to show that the assertion that they \u2018love linguistics\u2019 is not to be taken at face value. As well as these (presumably deliberate) additional components of communication, the words of unprepared spoken language are likely to be accompanied by a range of unintentional errors. If, for example, a speaker exchanges the onsets of two words, by perhaps saying \u2018darn bore\u2019 when intending to say \u2018barn door\u2019, it is highly unlikely that the exchange is intentional, and the occurrence of accidental speech errors like this may inform us about the nature of speech planning (e.g. Hartsuiker et al. 2005). Between these extremes of intentionality are disfluencies, or the false starts, repetitions, and hesitations that accompany the words that speakers plan and utter. Averaging across several studies, Fox Tree (1995) estimated that approximately 6% of words uttered are, or are affected by, some form of disfluency (see also Bortfeld et al. 2001). These disfluencies may not always be accidental: It has been argued that some types of disfluency should be counted among the tools the speaker has for communicating to the listener, alongside things such as tone of voice (others, such as false starts, may be due to speakers editing their own speech). Chief among the potentially communicative disfluencies are the so-called fillers, such as um 590 Martin Corley and Oliver W. Stewart \u00a9 2008 The Authors Language and Linguistics Compass 2/4 (2008): 589\u2013602, 10.1111/j.1749-818x.2008.00068.x Journal Compilation \u00a9 2008 Blackwell Publishing Ltd and uh, which (together with prolongations and pauses) mark a hesitation on the part of the speaker. In this paper, we investigate the role played by hesitation in human communication, with a particular focus on fillers and the communicative goals they may serve. Producing Hesitation Disfluencies Hesitation phenomena such as fillers are most likely to occur at the beginning of an utterance or phrase, presumably as a consequence of the greater demand on planning processes at these junctures (Maclay and Osgood 1959; Beattie 1979; Barr 2001). The view that cognitive load is an important predictor of disfluency is supported by the fact that disfluencies are found to occur more often before longer utterances (Oviatt 1995; Shriberg 1996), and when the topic is unfamiliar (Bortfeld et al. 2001; Merlo and Mansur 2004). 
Cognitive load is also implicated when we look at hesitations on a word-by-word basis. Investigations of where disfluencies such as fillers occur throughout utterances have established that they are more likely to occur before content words (Maclay and Osgood 1959), such as low-frequency color names (Levelt 1983). However, Beattie and Butterworth (1979) came to a different conclusion when they investigated the distributional properties of disfluencies across a set of recordings of two-person conversations. They showed that both low-frequency content words and those rated as contextually improbable were likely to be preceded by hesitations such as fillers; when frequency was held constant, contextual probability still predicted disfluency. Rather than attributing disfluency to cognitive load, Beattie and Butterworth suggested that speakers might be aware of an element of choice in selecting words with low contextual probability, and were more likely to be disfluent for this reason. Choice was also implicated in a study by Schachter et al. (1991) in which lectures in the natural sciences, social sciences, and humanities were recorded and analyzed for numbers of fillers per minute. Disfluency differed between topics, with the natural sciences resulting in the least and the humanities in the most frequent use of fillers. However, when the lecturers were interviewed on general topics, their rates of disfluency did not differ. Schachter et al. attributed the differences in lecture disfluency rates to the fact that there were fewer linguistic options in the sciences, causing lecturers to hesitate less as they selected appropriate terms. They later corroborated their claims by measuring vocabulary size in lectures, learned articles, and topic-related journalism (Schachter et al. 1994), showing that there were indeed fewer terms used in the sciences. So far, we have been speaking about cognitive load and choice as if they were different, but it is of course the case that a higher number of options to choose from could result in an increased cognitive load. In a detailed experimental investigation, Oomen and Postma (2001) manipulated speech rate, using a task modified from Levelt (1983; see also Martin"}
{"_id": "11e279e93e29a0ee66cdd24394c804d123bf5bcb", "title": "A Method for the Analytical Extraction of the Single-Diode PV Model Parameters", "text": "Determination of PV model parameters usually requires time consuming iterative procedures, prone to initialization and convergence difficulties. In this paper, a set of analytical expressions is introduced to determine the five parameters of the single-diode model for crystalline PV modules at any operating conditions, in a simple and straightforward manner. The derivation of these equations is based on a newly found relation between the diode ideality factor and the open circuit voltage, which is explicitly formulated using the temperature coefficients. The proposed extraction method is robust, cost-efficient, and easy-to-implement, as it relies only on datasheet information, while it is based on a solid theoretical background. Its accuracy and computational efficiency is verified and compared to other methods available in the literature through both simulation and outdoor measurements."}
{"_id": "6c854bc409dfb96517356174fc59c7bcfd1719ea", "title": "Comprehensive Approach to Modeling and Simulation of Photovoltaic Arrays", "text": "This paper proposes a method of modeling and simulation of photovoltaic arrays. The main objective is to find the parameters of the nonlinear I-V equation by adjusting the curve at three points: open circuit, maximum power, and short circuit. Given these three points, which are provided by all commercial array data sheets, the method finds the best I-V equation for the single-diode photovoltaic (PV) model including the effect of the series and parallel resistances, and warranties that the maximum power of the model matches with the maximum power of the real array. With the parameters of the adjusted I-V equation, one can build a PV circuit model with any circuit simulator by using basic math blocks. The modeling method and the proposed circuit model are useful for power electronics designers who need a simple, fast, accurate, and easy-to-use modeling method for using in simulations of PV systems. In the first pages, the reader will find a tutorial on PV devices and will understand the parameters that compose the single-diode PV model. The modeling method is then introduced and presented in details. The model is validated with experimental data of commercial PV arrays."}
{"_id": "71b403edc3342fee24c79ad628bb5efe87962320", "title": "Identification of Photovoltaic Source Models", "text": "This paper presents a photovoltaic (PV) model estimation method from the PV module data. The model is based on a single-diode model of a PV cell. The cell model of a module is extended to estimate the parameters of arrays. The variation of the parameters with change in temperature and irradiance is also studied. Finally, estimation of the maximum power point from the estimated parameters under varying environmental conditions is presented."}
{"_id": "880091b99fc1d39b740e062e22dcf49ef4f3795b", "title": "On the Parameter Extraction of a Five-Parameter Double-Diode Model of Photovoltaic Cells and Modules", "text": "The main contribution of this paper is to present a new set of approximate analytical solutions for the parameters of a photovoltaic (PV) five-parameter double-diode model that can be used as initial values for the numerical solutions based on the Newton-Raphson method. The proposed formulations are developed based on only the limited information given by the PV manufacturers, i.e., the open-circuit voltage ( Voc), the short circuit current ( Isc), and the current and voltage at the maximum power point (Im and Vm). Compared with the existing techniques that require the entire experimental I-V curve or additional information such as the slope of the I-V curves of the open circuit and the short circuit points, the proposed technique is quite independent of these additional data, and, it is therefore, a low cost and fast parameter extraction method. The accuracy of the theoretical I-V curves is evaluated through the comparison of the simulation results and experimental data. The results of the application of the proposed technique to different PV modules show the accuracy and validity of the proposed analytical-numerical method."}
{"_id": "12f9db9d7b63694a3ce60df9e9f01d8fc3208cba", "title": "PV panel model based on datasheet values", "text": "This work presents the construction of a model for a PV panel using the single-diode five-parameters model, based exclusively on data-sheet parameters. The model takes into account the series and parallel (shunt) resistance of the panel. The equivalent circuit and the basic equations of the PV cell/panel in Standard Test Conditions (STC)1 are shown, as well as the parameters extraction from the data-sheet values. The temperature dependence of the cell dark saturation current is expressed with an alternative formula, which gives better correlation with the datasheet values of the power temperature dependence. Based on these equations, a PV panel model, which is able to predict the panel behavior in different temperature and irradiance conditions, is built and tested."}
{"_id": "8711a402d3b4e9133884116e5aaf6931c86ae46b", "title": "Practical Malware Analysis: The Hands-On Guide to Dissecting Malicious Software", "text": ""}
{"_id": "55d75e48bf690033d9776adc85d4e7afb9d94661", "title": "A Practical Method for Modeling PCB Transmission Lines with Conductor Surface Roughness and Wideband Dielectric Properties", "text": "This paper discusses the modeling techniques to account for transmission line high frequency effects, and proposes a method to integrate these techniques into a practical and design-worthy model generation flow. The frequency tabulated transmission line models are capable of predicting wideband dielectric characteristics and high frequency conductor losses due to skin effect and surface roughness. They can be used to accurately model high volume, low cost printed circuit board traces which often have roughened trace surfaces and pronounced frequency variation in dielectric properties. The model accuracy is verified up to 20 GHz in frequency domain, thus suitable for multi-gigabit signaling analysis"}
{"_id": "7bcc12180aa581b424fdc46a4f696db93ecbfb9d", "title": "Moving Right Along: A Computational Model of Metaphoric Reasoning about Events", "text": "This paper describes the results of an implemented computational model that cashes out the belief that metaphor interpretation is grounded in embodied primitives. The speci c task addressed is the interpretation of simple causal narratives in the domains of Politics and Economics. The stories are taken from newspaper articles in these domains. When presented with a preparsed version of these narratives as input, the system described is able to generate commonsense inferences consistent with the input."}
{"_id": "9d00c2e68ccb6017b8d68d57657c4267e607df7a", "title": "A service-oriented mobile augmented reality architecture for personalized museum environments", "text": "This paper presents a mobile augmented reality platform that exploits service-orientation for the visualisation of cultural media objects in personalised interactive museum environments. The service orientated architecture is composed of a mobile client, web service framework and service providers utilized to perform augmented reality tasks as well as support for media contents acquisition and consumption via a mobile or wireless network. The mobile client performs augmented reality tasks including multiple object tracking of physical 3D objects concurrently and then visualizes associated media contents (e.g. 3D models, videos, images, text, social media data, etc.) that are associated with the tracked physical (or reference) objects and then overlaid in the correct perspective in the real scene in a personalised museum visualisation scenario. A unique feature includes media content acquisition from open service providers through a web service framework. Typical services include, a photogrammetry service that allows users to obtain virtual 3D models of preferred cultural objects through image-based reconstruction techniques. These acquired 3D reconstruction contents can then be integrated with existing associated contents in a personalized augmented reality museum environment that can be stored and saved for later viewing, such as in the home after visiting a particular augmented reality based museum interactive. This paper focuses on describing the service orientation architectural components that supports personalized mobile augmented reality museum environments."}
{"_id": "f7be5f48fa5c85cdd85b918548e9e618e418f2d7", "title": "Firefly Algorithm for Continuous Constrained Optimization Tasks", "text": "The paper provides an insight into the improved novel metaheuristics of the Firefly Algorithm for constrained continuous optimization tasks. The presented technique is inspired by social behavior of fireflies and the phenomenon of bioluminescent communication. The first part of the paper is devoted to the detailed description of the existing algorithm. Then some suggestions for extending the simple scheme of the technique under consideration are presented. Subsequent sections concentrate on the performed experimental parameter studies and a comparison with existing Particle Swarm Optimization strategy based on existing benchmark instances. Finally some concluding remarks on possible algorithm extensions are given, as well as some properties of the presented approach and comments on its performance in the constrained continuous optimization tasks."}
{"_id": "8b91bf2114b11d23890e68e7c1e05a3acaca2a8b", "title": "Optimizing long short-term memory recurrent neural networks using ant colony optimization to predict turbine engine vibration", "text": "This article expands on research that has been done to develop a recurrent neural network (RNN) capable of predicting aircraft engine vibrations using long short-term memory (LSTM) neurons. LSTM RNNs can provide a more generalizable and robust method for prediction over analytical calculations of engine vibration, as analytical calculations must be solved iteratively based on specific empirical engine parameters, making this approach ungeneralizable across multiple engines. In initial work, multiple LSTM RNN architectures were proposed, evaluated and compared. This research improves the performance of the most effective LSTM network design proposed in the previous work by using a promising neuroevolution method based on ant colony optimization (ACO) to develop and enhance the LSTM cell structure of the network. A parallelized version of the ACO neuroevolution algorithm has been developed and the evolved LSTM RNNs were compared to the previously used fixed topology. The evolved networks were trained on a large database of flight data records obtained from an airline containing flights that suffered from excessive vibration. Results were obtained using MPI (Message Passing Interface) on a high performance computing (HPC) cluster, evolving 1000 different LSTM cell structures using 168 cores over 4 days. The new evolved LSTM cells showed an improvement of 1.35%, reducing prediction error from 5.51% to 4.17% when predicting excessive engine vibrations 10 seconds in the future, while at the same time dramatically reducing the number of weights from 21,170 to 11,810."}
{"_id": "5c86484627785af733e5be09f1f8f3e8eee5442a", "title": "Innovations in the Field of Child Abuse and Neglect Prevention : A Review of the Literature", "text": "ii Acknowledgments I would like to thank all of the experts I interviewed during the process of writing this paper. Their deep understanding of the history and visions for the future of child maltreatment prevention provided a strong foundation for this work and guided the direction and structure of the project overall. I am grateful for how generous they were with their time and wisdom. The names and affiliations of all the professionals that participated in an interview are listed in Appendix A. Additionally, I would like to thank my mentor, Dr. Deborah Daro. She is a creative and sophisticated thinker, an eloquent and articulate writer, and a passionate and inspiring leader moving child abuse and neglect prevention efforts forward. Lastly, I would like to thank Anne Clary and Matthew Brenner for lending their editing expertise, and the greater Chapin Hall community for their support. Trend #2: Social context and culture can protect the developing child and strengthen parental capacity in important ways that can buffer against individual and contextual risk factors.. Trend #6: Implementation science offers program managers effective research frameworks to monitor and strengthen the service delivery process and to improve the odds of replicating model programs with fidelity and quality.. Trend #7: Maximizing population-level change requires new understanding of how to construct and sustain effective state systems, local community collaboration, and robust community-based Trend #8: New technologies offer important, cost-effective opportunities for advancing our reach into new populations and supporting direct service providers.. 1 Introduction Child maltreatment prevention policy and practice have been driven by a continuous stream of innovation generated by researchers, advocates, and practitioners working across disciplines to improve child safety and well-being. Settlement homes were established in the late 19 th century, social work was professionalized in the early 20 th century, child abuse injuries were identified and findings published by pediatric radiologist John Caffey and pediatrician Henry Kempe in the 1960s, and the federal government passed the Child Abuse Prevention and Treatment Act of 1972. Progress in the field has always been determined by the diligence of thought leaders in a multitude of professions and the collaboration of diverse and unlikely partners (Myers, 2011, p. 1166). Today, the nation faces an excruciating economic climate, and many parents find it hard to meet their children's most basic needs. Simultaneously, political pressure is mounting to cut social services and reduce government spending, and, as a \u2026"}
{"_id": "491e524ff23cb17629e54451c9c402bb08e5c0fd", "title": "Gaps as characters in sequence-based phylogenetic analyses.", "text": "In the analysis of sequence-based data matrices, the use of different methods of treating gaps has been demonstrated to influence the resulting phylogenetic hypotheses (e.g., Eernisse and Kluge, 1993; Vogler and DeSalle, 1994; Simons and May den, 1997). Despite this influence, a well-justified, uniformly applied method of treating gaps is lacking in sequence-based phylogenetic studies. Treatment of gaps varies widely from secondarily mapping gaps onto the tree inferred from base characters to treating all gaps as separate characters or character states (Gonzalez, 1996). This diversity of approaches demonstrates the need for a comprehensive discussion of indel (insertion or deletion) coding and a robust method with which to incorporate gap characters into tree searches. We use the term \"indel coding\" instead of \"gap coding\" because the term \"gap coding\" has already been applied to coding quantitative characters (Mickevich and Johnson, 1976; Archie, 1985). Although \"indel coding\" undesirably refers to processes that are not observed (insertions and deletions) instead of patterns that are observed (gaps), the term is unambiguous and does not co-opt established terminology. The purpose of this paper is to discuss the implications of each of the methods of treating gaps in phylogenetic analyses, to allow workers to make informed choices among them. We suggest that gaps should be coded as characters in phylogenetic analyses, and we propose two indel-coding methods. We discuss four main points: (1) the logical independence of alignment and tree search; (2) why gaps are properly coded as characters; (3) how gaps should be coded as characters; and (4) problems with a priori weighting of gap characters during tree search. LOGICAL INDEPENDENCE"}
{"_id": "311b59d19a815dbaf1d466c25d126a57fdbdaf65", "title": "Depth Estimation From a Light Field Image Pair With a Generative Model", "text": "In this paper, we propose a novel method to estimate the disparity maps from a light field image pair captured by a pair of light field cameras. Our method integrates two types of critical depth cues, which are separately inferred from the epipolar plane images and binocular stereo vision into a global solution. At the same time, in order to produce highly accurate disparity maps, we adopt a generative model, which can estimate a light field image only with the central subaperture view and corresponding hypothesized disparity map. The objective function of our method is formulated to minimize two energy terms/differences. One is the difference between the two types of previously extracted disparity maps and the target disparity maps, directly optimized in the gray-scale disparity space. The other indicates the difference between the estimated light field images and the input light field images, optimized in the RGB color space. Comprehensive experiments conducted on real and virtual scene light field image pairs demonstrate the effectiveness of our method."}
{"_id": "cc8fe2d1085f8c80a5993d1171b2aed6613d9df7", "title": "Ezetimibe - a new approach in hypercholesterolemia management.", "text": "Suchy et al. [10] discuss the benefits of ezetimibe beyond lipid-lowering as well as its effects on atherosclerosis. Some comments may be of interest. Reducing low density lipoprotein cholesterol (LDL-C) is the main indication for ezetimibe [2, 7, 8]. This drug is useful in patients who cannot achieve LDL-C goals on statins or those who are statin intolerant [2, 4, 7, 8]. Meta-analyses suggested that adding ezetimibe decreases LDL-C levels more than statin monotherapy, after doubling the dose of a statin or selecting a more potent statin [2, 7, 8]. It was also suggested that ezetimibe can increase LDL particle size, especially in patients with high triglyceride levels [3, 11]. Ezetimibe may exert other beneficial actions which may or may not be related to its hypolipidemic capacity (e.g., for non-alcoholic fatty liver disease and renoprotection) [6]. Ezetimibe monotherapy reduced the plasma mass and activity of lipoprotein-associated phospholipase A2 (Lp-PLA2), which is considered risk predictor of cardiovascular (CV) events [9]. Furthermore, ezetimibe may lower the levels of oxidative stress markers [5]. In the end, the most important issue is reducing vascular events. In this context, the Study of Heart and Renal Protection (SHARP) [1] reported that the combination of ezetimibe with simvastatin was associated with reduced risk of CV events compared with placebo after 4.9 years in patients (n = 9,270) who are pre-dialysis or on dialysis. The clinical relevance of these points and those addressed by Suchy et al. [10] requires to be established by future studies."}
{"_id": "9bf8557d0bc24721e841aa3b8bd778e9c69b06eb", "title": "Model Predictive Control for Distributed Microgrid Battery Energy Storage Systems", "text": "This brief proposes a new convex model predictive control (MPC) strategy for dynamic optimal power flow between battery energy storage (ES) systems distributed in an ac microgrid. The proposed control strategy uses a new problem formulation, based on a linear $d$ \u2013 $q$ reference frame voltage-current model and linearized power flow approximations. This allows the optimal power flows to be solved as a convex optimization problem, for which fast and robust solvers exist. The proposed method does not assume that real and reactive power flows are decoupled, allowing line losses, voltage constraints, and converter current constraints to be addressed. In addition, nonlinear variations in the charge and discharge efficiencies of lithium ion batteries are analyzed and included in the control strategy. Real-time digital simulations were carried out for an islanded microgrid based on the IEEE 13 bus prototypical feeder, with distributed battery ES systems and intermittent photovoltaic generation. It is shown that the proposed control strategy approaches the performance of a strategy based on nonconvex optimization, while reducing the required computation time by a factor of 1000, making it suitable for a real-time MPC implementation."}
{"_id": "9e738879b40754bd6eb45c7e14043d428161694e", "title": "A Slacker Coherence Protocol for Pull-based Monitoring of On-line Data Sources \ufffd", "text": "An increasing number of online applications operate on data from disparate, and often wide-spread, data sources. This paper studies the design of a system for the automated monitoring of on-line data sources. In this system a number of ad-hoc data warehouses, which maintain client-specified views, are interposed between clients and data sources. We present a model of coherence, referred to here as slacker coherence, to address the freshness problem in the context of pull-based protocols. We experimentally examine various techniques for estimating update rates and polling adaptively. We also look at the impact on the coherence model performance of the request scheduling algorithm at the source."}
{"_id": "18ac098956e2a7cced85566c88194482245d9219", "title": "Analyzing the Held-Karp TSP Bound: A Monotonicity Property with Application", "text": "In their 1971 paper on the Traveling Salesman Problem and Minimum Spanning Trees, Held and Karp showed that finding an optimally weighted 1-tree is equivalent to solving a linear program for the Traveling Salesman Problem (TSP) with only node-degree constraints and subtour elimination constraints. In this paper we show that the Held-Karp 1-trees have a certain monotonicity property: given a particular instance of the symmetric TSP with triangle inequality, the cost of the minimum weighted 1-tree is monotonic with respect to the set of nodes included. As a consequence, we obtain an alternate proof of a result of Wolsey and show that linear programs with node-degree and subtour elimination constraints must have a cost at least 23OPT , where OPT is the cost of the optimum solution to the TSP instance. The traveling salesman problem is one of the most notorious in the field of combinatorial optimization, and one of the most well-studied [7]. Currently, the most successful approach to finding optimal solutions to large-scale problems is based on formulating the problem as a linear program and finding explicit partial descriptions of this linear polytope [5], [8]. The most natural constraints are derived from an integer linear programming formulation that uses nodedegree constraints and subtour elimination constraints. We focus our attention on symmetric instances of the TSP that obey the triangle inequality. Let V = {1, 2, . . . , n} denote the set of nodes. For any distinct i and j, assign a cost cij such that cij = cji, and for any k distinct from i and j, cij \u2264 cik + ckj . Then the Subtour LP on this instance is B = min \u2211 1\u2264ii xij + \u2211 j16,000 genes and 13,000 diseases, which makes it one of the largest repositories currently available of its kind. DisGeNET integrates expert-curated databases with text-mined data, covers information on Mendelian and complex diseases, and includes data from animal disease models. It features a score based on the supporting evidence to prioritize gene-disease associations. It is an open access resource available through a web interface, a Cytoscape plugin and as a Semantic Web resource. The web interface supports user-friendly data exploration and navigation. DisGeNET data can also be analysed via the DisGeNET Cytoscape plugin, and enriched with the annotations of other plugins of this popular network analysis software suite. Finally, the information contained in DisGeNET can be expanded and complemented using Semantic Web technologies and linked to a variety of resources already present in the Linked Data cloud. Hence, DisGeNET offers one of the most comprehensive collections of human gene-disease associations and a valuable set of tools for investigating the molecular mechanisms underlying diseases of genetic origin, designed to fulfill the needs of different user profiles, including bioinformaticians, biologists and health-care practitioners. Database URL: http://www.disgenet.org/"}
{"_id": "1a2c6843b9e781f2f77e875f3d073ab686f6fae3", "title": "A visual tool for ontology alignment to enable geospatial interoperability", "text": "In distributed geospatial applications with heterogeneous databases, an ontology-driven approach to data integration relies on the alignment of the concepts of a global ontology that describe the domain, with the concepts of the ontologies that describe the data in the distributed databases. Once the alignment between the global ontology and each distributed ontology is established, agreements that encode a variety of mappings between concepts are derived. In this way, users can potentially query hundreds of geospatial databases using a single query. Using our approach, querying can be easily extended to new data sources and, therefore, to new regions. In this paper, we describe the AgreementMaker, a tool that displays the ontologies, supports several mapping layers visually, presents automatically generated mappings, and finally produces the agreements. r 2007 Elsevier Ltd. All rights reserved."}
{"_id": "09c1ba685a08632e4a35eac23f2888ffa4572746", "title": "How to Prove Yourself: Practical Solutions to Identification and Signature Problems", "text": "In this paper we describe simple identification and signature schemes which enable any user to prove his identity and the authenticity of his messages to any other user without shared or public keys. The schemes are provably secure against any known or chosen message attack if factoring is difficult, and typical implementations require only 1% to 4% of the number of modular multiplications required by the RSA scheme. Due to their simplicity, security and speed, these schemes are ideally suited for microprocessor-based devices such as smart cards, personal computers, and remote control systems."}
{"_id": "8d69c06d48b618a090dd19185aea7a13def894a5", "title": "Efficient Identification and Signatures for Smart Cards", "text": ""}
{"_id": "02dc2a93a48d38deae9f1369d5b33ce98af2a3f2", "title": "Compact E-Cash", "text": "This paper presents efficient off-line anonymous e-cash schemes where a user can withdraw a wallet containing 2 coins each of which she can spend unlinkably. Our first result is a scheme, secure under the strong RSA and the y-DDHI assumptions, where the complexity of the withdrawal and spend operations is O( + k) and the user\u2019s wallet can be stored using O( +k) bits, where k is a security parameter. The best previously known schemes require at least one of these complexities to be O(2 \u00b7 k). In fact, compared to previous e-cash schemes, our whole wallet of 2 coins has about the same size as one coin in these schemes. Our scheme also offers exculpability of users, that is, the bank can prove to third parties that a user has double-spent. We then extend our scheme to our second result, the first e-cash scheme that provides traceable coins without a trusted third party. That is, once a user has double spent one of the 2 coins in her wallet, all her spendings of these coins can be traced. However, the price for this is that the complexity of the spending and of the withdrawal protocols becomes O( \u00b7 k) and O( \u00b7 k + k) bits, respectively, and wallets take O( \u00b7 k) bits of storage. All our schemes are secure in the random oracle model."}
{"_id": "7b21e59443c059066c42249d10c948eabe45553f", "title": "Chain : Operator Scheduling for Memory Minimization in Data Stream Systems", "text": "In many applications involving continuous data streams, data arrival is bursty and data rate fluctuates over time. Systems that seek to give rapid or real-time query responses in such an environment must be prepared to deal gracefully with bursts in data arrival without compromising system performance. We discuss one strategy for processing bursty streams --- adaptive, load-aware scheduling of query operators to minimize resource consumption during times of peak load. We show that the choice of an operator scheduling strategy can have significant impact on the run-time system memory usage. We then present Chain scheduling, an operator scheduling strategy for data stream systems that is near-optimal in minimizing run-time memory usage for any collection of single-stream queries involving selections, projections, and foreign-key joins with stored relations. Chain scheduling also performs well for queries with sliding-window joins over multiple streams, and multiple queries of the above types. A thorough experimental evaluation is provided where we demonstrate the potential benefits of Chain scheduling, compare it with competing scheduling strategies, and validate our analytical conclusions."}
{"_id": "0ccdcc21528f9f2e19d96faaf88de430082ca7a2", "title": "LaBB-CAT: an Annotation Store", "text": "\u201cONZE Miner\u201d, an open-source tool for storing and automatically annotating Transcriber transcripts, has been redeveloped to use \u201cannotation graphs\u201d as its data model. The annotation graph framework provides the new software, \u201cLaBB-CAT\u201d, greater flexibility for automatic and manual annotation of corpus data at various independent levels of granularity, and allows more sophisticated annotation structures, opening up new possibilities for corpus mining and conversion between tool formats."}
{"_id": "5274ee1a11da397d2e93a52a67adea67f4ad77df", "title": "Agile in Public Administration: Oxymoron or Reality? An Experience Report", "text": "In the last 10 years, Agile methods and practices have emerged as an alternative for software development. Different \"flavors\" of Agile have appeared ranging from project management to tests organization. These approaches have being gaining popularity and involve now a solid option for organizations developing software, but what about Public Administrations? Is Agile a suitable option for developing software in Public Administrations? Even if Public Administrations have been traditionally regarded as changeresistant, Agile approach can also provide them with the benefits of quick adaptation and frequent value delivery. This paper presents the results of two different projects, which use an Agile framework based on Scrum, developed by a Spanish Public Administration. Additionally, after considering the obtained results, it takes out some relevant learned lessons on the suitability of applying Agile approaches to Public Administration environments."}
{"_id": "b87c0cf95208caacb025bf87d9ba451a87aacaca", "title": "Machine Health Monitoring Using Local Feature-Based Gated Recurrent Unit Networks", "text": "In modern industries, machine health monitoring systems (MHMS) have been applied wildly with the goal of realizing predictive maintenance including failures tracking, downtime reduction, and assets preservation. In the era of big machinery data, data-driven MHMS have achieved remarkable results in the detection of faults after the occurrence of certain failures (diagnosis) and prediction of the future working conditions and the remaining useful life (prognosis). The numerical representation for raw sensory data is the key stone for various successful MHMS. Conventional methods are the labor-extensive as they usually depend on handcrafted features, which require expert knowledge. Inspired by the success of deep learning methods that redefine representation learning from raw data, we propose local feature-based gated recurrent unit (LFGRU) networks. It is a hybrid approach that combines handcrafted feature design with automatic feature learning for machine health monitoring. First, features from windows of input time series are extracted. Then, an enhanced bidirectional GRU network is designed and applied on the generated sequence of local features to learn the representation. A supervised learning layer is finally trained to predict machine condition. Experiments on three machine health monitoring tasks: tool wear prediction, gearbox fault diagnosis, and incipient bearing fault detection verify the effectiveness and generalization of the proposed LFGRU."}
{"_id": "664a2c6bff5fb2708f30a116745fad9470ef317a", "title": "Sparse Probabilistic Principal Component Analysis", "text": "Principal component analysis (PCA) is a popular dimensionality reduction algorithm. However, it is not easy to interpret which of the original features are important based on the principal components. Recent methods improve interpretability by sparsifying PCA through adding an L1 regularizer. In this paper, we introduce a probabilistic formulation for sparse PCA. By presenting sparse PCA as a probabilistic Bayesian formulation, we gain the benefit of automatic model selection. We examine three different priors for achieving sparsification: (1) a two-level hierarchical prior equivalent to a Laplacian distribution and consequently to an L1 regularization, (2) an inverse-Gaussian prior, and (3) a Jeffrey\u2019s prior. We learn these models by applying variational inference. Our experiments verify that indeed our sparse probabilistic model results in a sparse PCA solution."}
{"_id": "281eeacc1d2fdd9a79f652b77afb14d8a3c2d7b6", "title": "Template-Free Construction of Poems with Thematic Cohesion and Enjambment", "text": "Existing poetry generation systems usually focus on particular features of poetry (such as rhythm, rhyme, metaphor) and specific techniques to achieve them. They often resort to template-base solutions, in which it is not always clear how many of the alleged features of the outputs were already present in the template employed. The present paper considers two specific features \u2013 thematic consistency, and enjambment \u2013 and presents an ngram based construction method that achieves these features without the use of templates. The construction procedure is not intended to produce poetry of high quality, only to achieve the features considered specifically in its design. A set of metrics is defined to capture these features in quantitative terms, and the metrics are applied to system outputs and to samples of both human and computer generated poetry. The results of these tests are discussed in terms of the danger of to ignoring certain features when designing construction procedures but valuing them during evaluation even if they arise from hard-wiring in the resources or serendipitous emergence, and the fundamental need for poets to develop a personal voice \u2013 fundamental for human poets and inconsistent with the application of Turing tests."}
{"_id": "7bdd64656d89667dc0f4184868fb92b9077cfacd", "title": "MovieGEN : A Movie Recommendation System", "text": "In this paper we introduce MovieGEN, an expert system for movie recommendation. We implement the system using machine learning and cluster analysis based on a hybrid recommendation approach. The system takes in the users\u2019 personal information and predicts their movie preferences using well-trained support vector machine (SVM) models. Based on the SVM prediction it selects movies from the dataset, clusters the movies and generates questions to the users. Based on the users\u2019 answers, it refines its movie set and it finally recommends movies for the users. In the process we traverse the parameter space, thus enabling the customizability of the system."}
{"_id": "d323e4983d4c5f95b98ba55239b015e6e1b3176b", "title": "A harmonic suppression circularly polarized patch antenna for an RF ambient energy harvesting system", "text": "This paper presented a new design of harmonic suppression microstrip patch antenna with the circular polarization properties at frequency 900MHz for an RF ambient energy harvesting system. The purpose of harmonic suppression antenna is to eliminate the using of a harmonic filter circuit in filtering the unwanted signal at the harmonic frequencies in a rectenna system. By introducing a truncating to corner of the rectangular microstrip patch antenna to achieved the required axial ratio, an antenna with circular polarization property is presented. The dimension of patch is reduced by applying partially ground plane. Then an implementation of a defected ground structure (DGS) with a circular shape slot, the unwanted radiation at harmonic frequencies are successfully suppressed. The patch antenna is designed on an FR4 substrate with \u03b5r=4.7 and a thickness of 1.6 mm has -20dB return loss at 900 MHz and suppress the second and third harmonic up to -2dB. The proposed circularly polarized harmonic suppression antenna is then fabricated and measured for the verification of the design."}
{"_id": "f5e267b5a69f55fc686e9810d9191cc8471f578e", "title": "Conceptualizing and testing formative constructs: tutorial and annotated example", "text": "Although abundant advice is available for how to develop and validate multi-item scales based on reflective constructs, scant attention has been directed to how to construct and validate formative constructs. Such advice is important because (1) theory suggests many constructs are formative and (2) recent advances in software render testing models with formative constructs more tractable. In this tutorial, our goal is to enhance understanding of formative constructs at the conceptual, statistical and methodological levels. Specifically, we (1) provide general principles for specifying whether a construct should be conceptually modeled as reflective or formative, (2) discuss the statistical logic behind formative constructs, and (3) illustrate how to model and evaluate formative constructs. In particular, we provide a tutorial in which we test and validate professional reward structure, a formative construct, in two popular structural equation modeling programs: EQS and PLS. We conclude with a summary of guidelines for how to conduct and evaluate research using formative constructs."}
{"_id": "e86cdad345b446d26a58863523455f7fec67634a", "title": "Exemplar-Based Portrait Style Transfer", "text": "Transferring the style of an example image to a content image opens the door of artistic creation for end users. However, it is especially challenging for portrait photos since human vision system is sensitive to the slight artifacts on portraits. Previous methods use facial landmarks to densely align the content face with the style face to reduce the artifacts. However, they can only handle the facial region. As for the whole image, building the dense correspondence is difficult and may easily introduce errors. In this paper, we propose a robust approach for portrait style transfer that gets rid of dense correspondence. Our approach is based on the guided image synthesis framework. We propose three novel guidance maps for the synthesis process. Contrary to former methods, these maps do not require the dense correspondence between content image and style image, which allows our method to handle the whole portrait photo instead of facial region only. In comparison with recent neural style transfer methods, our method achieves more pleasing results and preserves more texture details. Extensive experiments demonstrate our advantage over former methods on portrait style transfer."}
{"_id": "6d7a6064e2c3cf893380b04d484c3c4277af7ea5", "title": "Tracking football player movement from a single moving camera using particle filters", "text": "This paper deals with the problem of tracking football players in a football match using data from a single moving camera. Tracking footballers from a single video source is di cult: not only do the football players occlude each other, but they frequently enter and leave the camera's eld of view, making initialisation and destruction of a player's tracking a di cult task. The system presented here uses particle lters to track players. The multiple state estimates used by a particle lter provide an elegant method for maintaining tracking of players following an occlusion. Automated tracking can be achieved by creating and stopping particle lters depending on the input player data."}
{"_id": "025bb5afe6d3d1ab1f724513745e7cc11d96afb2", "title": "Compact hashing with joint optimization of search accuracy and time", "text": "Similarity search, namely, finding approximate nearest neighborhoods, is the core of many large scale machine learning or vision applications. Recently, many research results demonstrate that hashing with compact codes can achieve promising performance for large scale similarity search. However, most of the previous hashing methods with compact codes only model and optimize the search accuracy. Search time, which is an important factor for hashing in practice, is usually not addressed explicitly. In this paper, we develop a new scalable hashing algorithm with joint optimization of search accuracy and search time simultaneously. Our method generates compact hash codes for data of general formats with any similarity function. We evaluate our method using diverse data sets up to 1 million samples (e.g., web images). Our comprehensive results show the proposed method significantly outperforms several state-of-the-art hashing approaches."}
{"_id": "388341c65f2bb0792c24cc5bdfca7b6979837065", "title": "Blockchain as a Platform for Secure Inter-Organizational Business Processes", "text": "Today, most of the services one may think of are based on a collaborative paradigm (e.g., social media services, IoT-based services, etc.). One of the most relevant representative of such class of services are inter-organizational processes, where an organized group of joined activities is carried out by two or more organizations to achieve a common business goal. Inter-organizational processes are therefore vital to achieve business partnerships among different organizations. However, they may also pose serious security and privacy threats to the data each organization exposes. This is mainly due to the weak trust relationships that may hold among the collaborating parties, which result in a potential lack of trust on how data/operations are managed. In this paper, we discuss, how blockchain, one of today hottest technology, can be used in support of secure inter-organizational processes. We further point out which additional security issues the use of blockchain can bring, illustrate the ongoing research projects in the area and discuss future research directions."}
{"_id": "bdad010cc2a5250c9f7b3a6bd183c1e7d8788d9e", "title": "Exploiting 2D Floorplan for Building-Scale Panorama RGBD Alignment", "text": "This paper presents a novel algorithm that utilizes a 2D floorplan to align panorama RGBD scans. While effective panorama RGBD alignment techniques exist, such a system requires extremely dense RGBD image sampling. Our approach can significantly reduce the number of necessary scans with the aid of a floorplan image. We formulate a novel Markov Random Field inference problem as a scan placement over the floorplan, as opposed to the conventional scan-to-scan alignment. The technical contributions lie in multi-modal image correspondence cues (between scans and schematic floorplan) as well as a novel coverage potential avoiding an inherent stacking bias. The proposed approach has been evaluated on five challenging large indoor spaces. To the best of our knowledge, we present the first effective system that utilizes a 2D floorplan image for building-scale 3D pointcloud alignment. The source code and the data are shared with the community to further enhance indoor mapping research."}
{"_id": "cf03950cbed331c1a7a8770cfaa64b7e005facfc", "title": "Impact of singular excessive computer game and television exposure on sleep patterns and memory performance of school-aged children.", "text": "OBJECTIVE\nTelevision and computer game consumption are a powerful influence in the lives of most children. Previous evidence has supported the notion that media exposure could impair a variety of behavioral characteristics. Excessive television viewing and computer game playing have been associated with many psychiatric symptoms, especially emotional and behavioral symptoms, somatic complaints, attention problems such as hyperactivity, and family interaction problems. Nevertheless, there is insufficient knowledge about the relationship between singular excessive media consumption on sleep patterns and linked implications on children. The aim of this study was to investigate the effects of singular excessive television and computer game consumption on sleep patterns and memory performance of children.\n\n\nMETHODS\nEleven school-aged children were recruited for this polysomnographic study. Children were exposed to voluntary excessive television and computer game consumption. In the subsequent night, polysomnographic measurements were conducted to measure sleep-architecture and sleep-continuity parameters. In addition, a visual and verbal memory test was conducted before media stimulation and after the subsequent sleeping period to determine visuospatial and verbal memory performance.\n\n\nRESULTS\nOnly computer game playing resulted in significant reduced amounts of slow-wave sleep as well as significant declines in verbal memory performance. Prolonged sleep-onset latency and more stage 2 sleep were also detected after previous computer game consumption. No effects on rapid eye movement sleep were observed. Television viewing reduced sleep efficiency significantly but did not affect sleep patterns.\n\n\nCONCLUSIONS\nThe results suggest that television and computer game exposure affect children's sleep and deteriorate verbal cognitive performance, which supports the hypothesis of the negative influence of media consumption on children's sleep, learning, and memory."}
{"_id": "20cb6d4fafa43f83ef3d63582ccb865220fd6e59", "title": "A Nonisolated Three-Port DC\u2013DC Converter and Three-Domain Control Method for PV-Battery Power Systems", "text": "In order to interface one photovoltaic (PV) port, one bidirectional battery port, and one load port of a PV-battery dc power system, a novel nonisolated three-port dc/dc converter named boost bidirectional buck converter (B3C) and its control method based on three-domain control are proposed in this paper. The power flow and operating principles of the proposed B3C are analyzed in detail, and then, the dc voltage relation between three ports is deduced. The proposed converter features high integration and single-stage power conversion from both PV and battery ports to the load port, thus leading to high efficiency. The current of all three ports is continuous; hence, the electromagnetic noise can be reduced. Furthermore, the control and modulation method for B3C has been proposed for realizing maximum power point tracking (MPPT), battery management, and bus voltage regulation simultaneously. The operation can be transited between conductance mode and MPPT mode automatically according to the load power. Finally, experimental verifications are given to illustrate the feasibility and effectiveness of the proposed topology and control method."}
{"_id": "cfcddb3d7de314442ccd36724da525732a90c425", "title": "A CRISPR/Cas-Mediated Selection-free Knockin Strategy in Human Embryonic Stem Cells", "text": "The development of new gene-editing tools, in particular the CRISPR/Cas system, has greatly facilitated site-specific mutagenesis in human embryonic stem cells (hESCs), including the introduction or correction of patient-specific mutations for disease modeling. However, integration of a reporter gene into an endogenous locus in hESCs still requires a lengthy and laborious two-step strategy that involves first drug selection to identify correctly targeted clones and then excision of the drug-resistance cassette. Through the use of iCRISPR, an efficient gene-editing platform we recently developed, this study demonstrates a knockin strategy without drug selection for both active and silent genes in hESCs. Lineage-specific hESC reporter lines are useful for real-time monitoring of cell-fate decisions and lineage tracing, as well as enrichment of specific cell populations during hESC differentiation. Thus, this selection-free knockin strategy is expected to greatly facilitate the use of hESCs for developmental studies, disease modeling, and cell-replacement therapy."}
{"_id": "81a31129d1445e35027ca3de44b5126a2cd12333", "title": "Computer vision technology for food quality assurance Sundaram Gunasekaran", "text": "Quality assurance is one of the most important goals of any industry. The ability to manufacture high-quality products consistently is the basis for success in the highly competitive food industry. It encourages loyalty in customers and results in an expanding market share. The quality assurance methods used in the food industry have traditionally involved human visual inspection. Such methods are tedious, laborious, time-consuming and inconsistent. As plant throughput increased and quality tolerance tightened, it became necessary to employ automatic methods for quality assurance and quality control. During the past decade, computers have been introduced widely for automation of the quality assurance task. The food industry has been traditionally slow in adopting this new technology and moving towards programmable automation\u2019. Renard? cited the following barriers to automating food processing plants: diverging opinions of what to measure because manufacturing processes themselves are not well understood; lack of appropriate standards against which properties can be compared; and the age of plants and equipment. However, the speed, reliability and cost savings inherent to process automation have provided sufficient impetus to the food industry to pursue aggressively the automatic, on-line control of all plant operations. Innovation of new technology often undergoes four phases3. The first is a research or discovery phase, in which new knowledge results in advances in the technology. Phase two is the early commercialization phase, in which a small market for the technology develops. Phase three represents the emergence of niche-specific applications of the technology. The last phase is that of widespread proliferation. A key characteristic of phase four is that the technology is transparent to users, that is,"}
{"_id": "6deaa9d6a17eab8232631ac8c58d3cf6103d81b3", "title": "Partial entity structure: a compact non-manifold boundary representation based on partial topological entities", "text": "Non-manifold boundary representations have gained a great deal of popularity in recent years and various representation schemes have been proposed because they allow an even wider range of objects for various applications than conventional manifold representations. However, since these schemes are mainly interested in describing sufficient adjacency relationships of topological entities, the models represented in these schemes occupy too much storage space redundantly although they are very efficient in answering queries on topological adjacency relationships. Storage requirement can arise as a crucial problem in models in which topological data is more dominant than geometric data, such as tessellated or mesh models.\nTo solve this problem, in this paper, we propose a compact non-manifold boundary representation, called the partial entity structure, which allows the reduction of the storage size to half that of the radial edge structure, which is known as a time efficient non-manifold data structure, while allowing full topological adjacency relationships to be derived without loss of efficiency. This representation contains not only the conventional primitive entities like the region, face, edge, and vertex, but also the partial topological entities such as the partial-face, partial-edge, and partial-vertex for describing non-manifold conditions at vertices, edges, and faces. In order to verify the time and storage efficiency of the partial entity structure, the time complexity of basic query procedures and the storage requirement for typical geometric models are derived and compared with those of existing schemes. Furthermore, a set of the generalized Euler operators and typical high-level modeling capabilities such as Boolean operations are also implemented to confirm that our data structure is sound and easy to be manipulated."}
{"_id": "cacbc29fc8082186c6efbc6d49841afddbee8f3e", "title": "Blood pressure estimation from pulse wave velocity measured on the chest", "text": "Recently, monitoring of blood pressure fluctuation in the daily life is focused on in the hypertension care area to predict the risk of cardiovascular and cerebrovascular disease events. In this paper, in order to propose an alternative system to the existed ambulatory blood pressure monitoring (ABPM) sphygmomanometer, we have developed a prototype of small wearable device consisting of electrocardiogram (ECG) and photopelthysmograph (PPG) sensors. In addition, it was examined whether blood pressure can be estimated based on pulse wave transit time (PWTT) only by attaching that device on the surface of the chest. We indicated that our system could also sense tendency of time-dependent change of blood pressure by measuring pulse of vessel over the sternum while its propagation distance is short."}
{"_id": "4c359447d4759efcdfe2ac37a7bdfa7c0d56544a", "title": "School burnout and engagement in the context of demands-resources model.", "text": "BACKGROUND\nA four-wave longitudinal study tested the demands-resources model in the school context.\n\n\nAIM\nTo examine the applicability of the demands-resources to the school context.\n\n\nMETHOD\nData of 1,709 adolescents were gathered, once during the transition from comprehensive to post-comprehensive education, twice during post-comprehensive education, and once 2 years later.\n\n\nRESULTS\nThe hypotheses were supported, path analysis showing that study demands were related to school burnout a year later, while study resources were related to schoolwork engagement. Self-efficacy was positively related to engagement and negatively to burnout. School burnout predicted schoolwork engagement negatively 1 year later. Engagement was positively related to life satisfaction 2 years later, while burnout was related to depressive symptoms. Finally, burnout mediated the relationship between study demands and mental health outcomes.\n\n\nCONCLUSIONS\nThe demands-resources model can usefully be applied to the school context, including the associations between school-related burnout and engagement among adolescents. The model comprises two processes, the energy-depleting process and the motivational process."}
{"_id": "326fd35a67dd1af3afb05ac5a751130aec352fb3", "title": "Search , Obfuscation , and Price Elasticities on the Internet 1", "text": "We examine the competition between a group of Internet retailers who operate in an environment where a price search engine plays a dominant role. We show that for some products in this environment, the easy price search makes demand tremendously pricesensitive. Retailers, though, engage in obfuscation\u2014practices that frustrate consumer search or make it less damaging to firms\u2014resulting in much less price sensitivity on some other products. We discuss several models of obfuscation and examine its effects on demand and markups empirically."}
{"_id": "8d4bbf30de2093b80bad2c83e4d22892d8a23342", "title": "Cost-aware depth map estimation for Lytro camera", "text": "Since commercial light field cameras became available, the light field camera has aroused much interest from computer vision and image processing communities due to its versatile functions. Most of its special features are based on an estimated depth map, so reliable depth estimation is a crucial step. However, estimating depth on real light field cameras is a challenging problem due to noise and short baselines among sub-aperture images. We propose a depth map estimation method for light field cameras by exploiting correspondence and focus cues. We aggregate costs among all the sub-aperture images on cost volume to alleviate noise effects. With efficiency of the cost volume, cost-aware depth estimation is quickly achieved by discrete-continuous optimization. In addition, we analyze each property of correspondence and focus cues and utilize them to select reliable anchor points. A well reconstructed initial depth map from the anchors is shown to enhance convergence. We show our method outperforms the state-of-the-art methods by validating it on real datasets acquired with a Lytro camera."}
{"_id": "3dde3fec553b8d24a85d7059a3cc629ab33f7578", "title": "OpenFlow: enabling innovation in campus networks", "text": "This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. OpenFlow is based on an Ethernet switch, with an internal flow-table, and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors to add OpenFlow to their switch products for deployment in college campus backbones and wiring closets. We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density; while on the other hand, vendors do not need to expose the internal workings of their switches. In addition to allowing researchers to evaluate their ideas in real-world traffic settings, OpenFlow could serve as a useful campus component in proposed large-scale testbeds like GENI. Two buildings at Stanford University will soon run OpenFlow networks, using commercial Ethernet switches and routers. We will work to encourage deployment at other schools; and We encourage you to consider deploying OpenFlow in your university network too"}
{"_id": "358331e956573abd252784ac30b9ce40ab1d523b", "title": "Software-Defined Wireless Networking Opportunities and Challenges for Internet-of-Things: A Review", "text": "With the emergence of Internet-of-Things (IoT), there is now growing interest to simplify wireless network controls. This is a very challenging task, comprising information acquisition, information analysis, decision-making, and action implementation on large scale IoT networks. Resulting in research to explore the integration of software-defined networking (SDN) and IoT for a simpler, easier, and strain less network control. SDN is a promising novel paradigm shift which has the capability to enable a simplified and robust programmable wireless network serving an array of physical objects and applications. This paper starts with the emergence of SDN and then highlights recent significant developments in the wireless and optical domains with the aim of integrating SDN and IoT. Challenges in SDN and IoT integration are also discussed from both security and scalability perspectives."}
{"_id": "6332b9cff7aff4391c6fbaa94ff6e8ffbc3d2e4f", "title": "Publish/subscribe-enabled software defined networking for efficient and scalable IoT communications", "text": "The Internet of Things (IoT) is the result of many different enabling technologies such as embedded systems, wireless sensor networks, cloud computing, big-data, etc., which are used to gather, process, infer, and transmit data. Combining all these technologies requires a research effort to address all the challenges of these technologies, especially for sensing and delivering information from the physical world to cloud-hosted services. In this article we outline the most important issues related to standardization efforts, mobility of objects, networking and gateway access, and QoS support. In particular, we describe a novel IoT network architecture that integrates software defined networking (SDN) and data distribution service (DDS) middleware. The proposed architecture will improve service delivery of IoT systems and will bring flexibility to the network."}
{"_id": "6a6c794083cbdf79de0fcd2065699477290b5546", "title": "Maestro: A System for Scalable OpenFlow Control", "text": "The fundamental feature of an OpenFlow network is that the controller is responsible for the initial establishment of every flow by contacting related switches. Thus the performance of the controller could be a bottleneck. This paper shows how this fundamental problem is addressed by parallelism. The state of the art OpenFlow controller, called NOX, achieves a simple programming model for control function development by having a single-threaded event-loop. Yet NOX has not considered exploiting parallelism. We propose Maestro which keeps the simple programming model for programmers, and exploits parallelism in every corner together with additional throughput optimization techniques. We experimentally show that the throughput of Maestro can achieve near linear scalability on an eight core server machine. Keywords-OpenFlow, network management, multithreading, performance optimization"}
{"_id": "9263499d59fdc184ff097b37d0b00a35927e4632", "title": "A Survey of Fog Computing: Concepts, Applications and Issues", "text": "Despite the increasing usage of cloud computing, there are still issues unsolved due to inherent problems of cloud computing such as unreliable latency, lack of mobility support and location-awareness. Fog computing can address those problems by providing elastic resources and services to end users at the edge of network, while cloud computing are more about providing resources distributed in the core network. This survey discusses the definition of fog computing and similar concepts, introduces representative application scenarios, and identifies various aspects of issues we may encounter when designing and implementing fog computing systems. It also highlights some opportunities and challenges, as direction of potential future work, in related techniques that need to be considered in the context of fog computing."}
{"_id": "bca194cd761bc35e82ca742e2058577cdbc5ef67", "title": "Grounding New Words on the Physical World in Multi-Domain Human-Robot Dialogues", "text": "This paper summarizes our ongoing project on developing an architecture for a robot that can acquire new words and their meanings while engaging in multidomain dialogues. These two functions are crucial in making conversational service robots work in real tasks in the real world. Household robots and office robots need to be able to work in multiple task domains and they also need to engage in dialogues in multiple domains corresponding to those task domains. Lexical acquisition is necessary because speech understanding cannot be done without enough knowledge on words that are possibly spoken in the task domain. Our architecture is based on a multi-expert model in which multiple domain experts are employed and one of them is selected based on the user utterance and the situation to engage in the control of the dialogue and physical behaviors. We incorporate experts that have an ability to acquire new lexical entries and their meanings grounded on the physical world through spoken interactions. By appropriately selecting those experts, lexical acquisition in multi-domain dialogues becomes possible. An example robotic system based on this architecture that can acquire object names and location names demonstrates the viability of the architecture."}
{"_id": "dd47c68b65ed57df83a9f9e3057d1f9c5a14351d", "title": "A Brief Tour Through Provenance in Scientific Workflows and Databases", "text": "Within computer science, the term provenance has multiple meanings, due to different motivations, perspectives, and assumptions prevalent in the respective communities. This chapter provides a high-level \u201csightseeing tour\u201d of some of those different notions and uses of provenance in scientific workflows and databases."}
{"_id": "ec994b0a82d716e9a02428dcd8a0bf33b9916f4b", "title": "The raven roosting optimisation algorithm", "text": "A significant stream of literature which draws inspiration from the foraging activities of various organisms to design optimisation algorithms has emerged over the past decade. The success of these algorithms across a wide variety of application domains has spurred interest in the examination of the foraging behaviours of other organisms to develop novel and powerful, optimisation algorithms. A variety of animals, including some species of birds and bats, engage in social roosting whereby large numbers of conspecifics gather together to roost, either overnight or for longer periods. It has been claimed that these roosts can serve as information centres to spread knowledge concerning the location of food resources in the environment. In this paper we look at the social roosting and foraging behaviour of one species of bird, the common raven, and take inspiration from this to design a novel optimisation algorithm which we call the raven roosting optimisation algorithm. The utility of the algorithm is assessed on a series of benchmark problems and the results are found to be competitive. We also provide a novel taxonomy which classifies foraging-inspired optimisation algorithms based on the underlying social communication mechanism embedded in the algorithms."}
{"_id": "afde48d14d4b6783b6aef376a1bb4a47ffccc071", "title": "Quantifying driver stress: developing a system for collecting and processing bio-metric signals in natural situations.", "text": "A system for quantifying the physiological features of emotional stress is being developed for use during a driving task. Two prototypes, using sensors that measure the driver's skin conductance, respiration, muscle activity, and heart activity are presented. The first system allows sampling rates of 200 Hz on two fast channels and 20 Hz on six additional channels. It uses a wearable computer to do real-time processing on the signals and has an attached digital camera which was used to capture images of the driver's facial expression once every minute. The second system uses a car-based computer that allows a sampling rate of 1984 samples per second on eight channels. This system uses multiple video cameras to continuously capture the driver's facial expression and road conditions. The data is then synchronized with the physiological signals using a video quad-splitter. The methods for extracting physiological features in the driving environment are discussed, including measurement of the skin conductance orienting response, muscle activity, pulse, and respiration patterns. Preliminary studies show how using multiple modalities of sensors can help discriminate reactions to driving events and how individual's response to similar driving conditions can vary from day to day."}
{"_id": "57e1028e004dbd48a469a13be1616a3205ec5d2b", "title": "SOCIAL SPACE AND SYMBOLIC POWER *", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/asa.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission."}
{"_id": "9f75de328947af808d4a133a791041c77190864e", "title": "A customizable pipeline for social media text normalization", "text": "Social networks are persistently generating text-based data that encapsulate vast amounts of knowledge. However, the presence of non-standard terms and misspellings in texts originating from social networks poses a crucial challenge for natural language processing and machine learning systems that attempt to mine this knowledge. To address this problem, we propose a sequential, modular, and hybrid pipeline for social media text normalization. In the first phase, text preprocessing techniques and social media-specific vocabularies gathered from publicly available sources are used to transform, with high precision, out-of-vocabulary terms into in-vocabulary terms. A sequential language model, generated using the partially normalized texts from the first phase, is then utilized to normalize short, high-frequency, ambiguous terms. A supervised learning module is employed to normalize terms based on a manually annotated training corpus. Finally, a tunable, distributed language model-based backoff module at the end of the pipeline enables further customization of the system to specific domains of text. We performed intrinsic evaluations of the system on a publicly available domain-independent dataset from Twitter, and our system obtained an F-score of 0.836, outperforming other benchmark systems for the task. We further performed brief, task-oriented evaluations of the system to illustrate the customizability of the system to domain-specific tasks and the effects of normalization on downstream applications. The modular design enables the easy customization of the system to distinct types domain-specific social media text, in addition to its off-the-shelf application to generic social media text."}
{"_id": "73d3149684ccb500e28fe22f778017f80a79ac4a", "title": "Friend or Foe: Cyberbullying in Social Network Sites", "text": "As the use of social media technologies proliferates in organizations, it is important to understand the nefarious behaviors, such as cyberbullying, that may accompany such technology use and how to discourage these behaviors. We draw from neutralization theory and the criminological theory of general deterrence to develop and empirically test a research model to explain why cyberbullying may occur and how the behavior may be discouraged. We created a research model of three second-order formative constructs to examine their predictive influence on intentions to cyberbully. We used PLS- SEM to analyze the responses of 174 Facebook users in two different cyberbullying scenarios. Our model suggests that neutralization techniques enable cyberbullying behavior and while sanction certainty is an important deterrent, sanction severity appears ineffective. We discuss the theoretical and practical implications of our model and results."}
{"_id": "7c27c259a5a0ee9ed0624f1d4a89d5755b70cbe5", "title": "Face Recognition: Features Versus Templates", "text": "Over the last 20 years, several different techniques have been proposed for computer recognition of human faces. The purpose of this paper is to compare two simple but general strategies on a common database (frontal images of faces of 47 people: 26 males and 21 females, four images per person). We have developed and implemented two new algorithms; the first one is based on the computation of a set of geometrical features, such as nose width and length, mouth position, and chin shape, and the second one is based on almost-grey-level template matching. The results obtained on the testing sets (about 90% correct recognition using geometrical features and perfect recognition using template matching) favor our implementation of the template-matching approach."}
{"_id": "27e4b65121d3c88643d86dc91a9bdafdf223b988", "title": "Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond", "text": "ive Text Summarization using Sequence-to-sequence RNNs and Beyond Ramesh Nallapati IBM Watson nallapati@us.ibm.com Bowen Zhou IBM Watson zhou@us.ibm.com Cicero dos Santos IBM Watson"}
{"_id": "e806b6b4d91cb4ec514eaf13df3918cfe22fa1c9", "title": "QR Code Security: A Survey of Attacks and Challenges for Usable Security", "text": "QR (Quick Response) codes are two-dimensional barcodes with the ability to encode different types of information. Because of their high information density and robustness, QR codes have gained popularity in various fields of application. Even though they offer a broad range of advantages, QR codes pose significant security risks. Attackers can encode malicious links that lead e.g. to phishing sites. Such malicious QR codes can be printed on small stickers and replace benign ones on billboard advertisements. Although many real world examples of QR code based attacks have been reported in the media, only little research has been conducted in this field and almost no attention has been paid on the interplay of security and human-computer interaction. In this work, we describe the manifold use cases of QR codes. Furthermore, we analyze the most significant attack scenarios with respect to the specific use cases. Additionally, we systemize the research that has already been conducted and identified usable security and security awareness as the main research challenges. Finally we propose design requirements with respect to the QR code itself, the reader application and usability aspects in order to support further research into to making QR code processing both secure and usable."}
{"_id": "078e6aee9d0327a851ea04709452d642df8fbafd", "title": "Assessment of exposure to sexually explicit materials and factors associated with exposure among preparatory school youths in Hawassa City, Southern Ethiopia: a cross-sectional institution based survey", "text": "BACKGROUND\nAccording to the 2007 Ethiopian census, youths aged 15-24 years were more than 15.2 million which contributes to 20.6% of the whole population. These very large and productive groups of the population are exposed to various sexual and reproductive health risks. The aim of this study was to assess exposure to Sexually Explicit Materials (SEM) and factors associated with exposure among preparatory school students in Hawassa city, Southern Ethiopia.\n\n\nMETHODOLOGY\nA cross-sectional institution based study involving 770 randomly selected youth students of preparatory schools at Hawassa city. Multi stage sampling technique was used to select study subjects. Data was collected using pre-tested and self-administered questionnaire. Data was entered by EPI INFO version 3.5.1 and analyzed using SPSS version 20.0 statistical software packages. The result was displayed using descriptive, bivariate and multivariate analysis. Statistical association was done for independent predictors (at p\u2009<\u20090.05).\n\n\nRESULT AND DISCUSSION\nAbout 750 students were participated in this study with a response rate of 97.4%. Among this, about 77.3% of students knew about the presence of SEM and most of the respondents 566(75.5%) were watched SEM films/movies and 554(73.9%) were exposed to SE texts. The overall exposure to SEM in school youths was 579(77.2%). Among the total respondents, about 522(70.4%) claimed as having no open discussion on sexual issues with in their family. Furthermore, About 450 (60.0%) respondents complained for having no sexual and reproductive health education at their school. Male students had faced almost two times higher exposure to SEM than female students (95 % CI: AOR 1.84(C.I\u2009=\u20091.22, 2.78). Students who attended private school were more than two times more likely exposed to SEM than public schools (95 % CI: AOR 2.07(C.I\u2009=\u20091.29, 3.30). Students who drink alcohol and labelled as 'sometimes' were two times more likely exposed to SEM than those who never drink alcohol (95 % CI\u2009=\u2009AOR 2.33(C.I\u2009=\u20091.26, 4.30). Khat chewers who labelled \"rarely\", \"sometimes\" and \"often\" had shown higher exposure (95 % CI: AOR 3.02(CI\u2009=\u20091.65, 5.52), (95 % CI: AOR 3.40(CI\u2009=\u20091.93, 6.00) and (95 % CI: AOR 2.67(CI\u2009=\u20091.46, 4.86) than those who never chew khat, respectively. Regarding SEM access, school youths with label 'easy access were exposed in odds of six folds than youths of no access (95 % CI: AOR 5.64(C.I\u2009=\u20093.56, 8.9).\n\n\nCONCLUSION\nHigh number of students was exposed to sexually explicit materials. Sex, school type, substance use and access to SEM were observed independent predictors of exposure to SEM.\n\n\nMOTIVATION\nThe current generation of young people is the healthiest, most educated, and most urbanized in history. However, there still remain some serious concerns. Most people become sexually active during adolescence. Premarital sexual activity is common and is on the rise worldwide. Rates are highest in sub Saharan Africa, where more than half of girls aged 15-19 are sexually experienced. Millions of adolescents are bearing children, in sub-Saharan Africa. More than half of women give birth before age 20. 
The need for improved health and social services aimed at adolescents, including reproductive health services, is being increasingly recognized throughout the world. Approximately 85 % of world adolescents live in developing countries. Each year, up to 100 million becomes infected with a curable sexually transmitted disease (STI). About 40 % of all new global human immunodeficiency virus (HIV) infections occur among 15-24 year olds; with recent estimates of 7000 infected each day. These health risks are influenced by many interrelated factors, such as expectations concerning early marriage and sexual relationships, access to education and employment, gender inequities, sexual violence, and the influence of mass media and popular culture. Furthermore, many adolescents lack strong stable relationships with parents or other adults whom they can talk to about their reproductive health concerns. Despite these challenges, programs that meet the information and service needs of adolescents can make a real difference. Successful programs help young people develop life-planning skill, respect the needs and concerns of young people, involve communities in their efforts, and provide respectful and confidential clinical services. Accordingly, the government of Ethiopia now works on improving adolescent's health as one part of MDG (Goal VI-halting transmission of HIV/AIDS, STI, and other communicable diseases) with a focus on adolescents, since they are most affected population. This finding, therefore, will benefit the government to partly evaluate the goal achieving through adolescents exposure status to sexually explicit materials and improvement of sexual issues free talk with in school with class mates and their family at home. For that matter, we authors decided to publish this finding in BMC Reproductive Health Journal so that on line access will be easy to all governing bodies that they use to re-plan their strategies for better product of plan. Moreover, Researchers, Practitioners, policy makers, Students, school leaders and professionals will also benefit from this finding for their future researches references, knowledge gain and practice."}
{"_id": "655df04cb172fb2c5e7c49261dcd0840fd8f674e", "title": "Additive Manufacturing Defect Detection using Neural Networks", "text": "Currently defect detection in additive manufacturing is predominately done by traditional image processing, approximation, and statistical methods. Two important aspects of defect detection are edge detection and porosity detection. Both problems appear to be good candidates to apply machine learning. This project describes the implementation of neural networks as an alternative method for defect detection. Results from current methods and the neural networks are compared for both speed and accuracy."}
{"_id": "8bffc3f546ecb3fab1333829ff8f61851d08e5e5", "title": "Secure and light IoT protocol (SLIP) for anti-hacking", "text": "In the elemental technologies, it is necessary to realize the Internet service of things (IoT), sensors and devices, network, platform (hardware platforms, open software platform, such as specific OS platforms). Web services, data analysis and prediction, big data processing, such as security and privacy protection technology, there are a variety of techniques. These elements technology provide a specific function. The element technology is integrated with each other. However, by several techniques are integrated, it can be problems with integration of security technologies that existed for each element technology. Even if individual technologies basic security features are constituting Internet Services (CIA: Confidentiality, integrity, authentication or authorization). It also offers security technology not connected to each other. Therefore, I will look at the security technology and proposes a lightweight routing protocol indispensable for realizing a secure Internet services things."}
{"_id": "107a2c807e8aacbcd9afcdeb2ddc9222ac25b15b", "title": "A search engine for 3D models", "text": "As the number of 3D models available on the Web grows, there is an increasing need for a search engine to help people find them. Unfortunately, traditional text-based search techniques are not always effective for 3D data. In this article, we investigate new shape-based search methods. The key challenges are to develop query methods simple enough for novice users and matching algorithms robust enough to work for arbitrary polygonal models. We present a Web-based search engine system that supports queries based on 3D sketches, 2D sketches, 3D models, and/or text keywords. For the shape-based queries, we have developed a new matching algorithm that uses spherical harmonics to compute discriminating similarity measures without requiring repair of model degeneracies or alignment of orientations. It provides 46 to 245% better performance than related shape-matching methods during precision--recall experiments, and it is fast enough to return query results from a repository of 20,000 models in under a second. The net result is a growing interactive index of 3D models available on the Web (i.e., a Google for 3D models)."}
{"_id": "ec134727ab91752d35567f48044211dea89aca4d", "title": "Robotic excavation in construction automation", "text": "C onstruction is of prime economic significance to many industry sectors. Intense competition, shortages of skilled labor, and technological advances are forcing rapid change in the construction industry, thus motivating construction automation [1]. Earthmoving machines, such as bulldozers, wheel loaders, excavators, scrapers, and graders, are common in construction. Excavationbased operations are used in general earthmoving, digging, and sheet-piling for displacing large amounts of material. On a smaller scale, operations such as trenching and footing formation require precisely controlled excavation. Although the fully automated construction site is still a dream of some civil engineers, research developments have shown the promise of robotics and automation in construction [2]. Despite the apparent economic importance of excavation in construction, there have been few implementations of autonomous or teleoperated excavators. A number of researchers have investigated the feasibility of automating excavation. Many of these studies have addressed the possible use of autonomous excavators during unmanned phases of establishing manned Lunar or Martian research stations [3], [4]. Much of the work on terrestrial excavation has focused on teleoperation, rather than on the system requirements for autonomous operation. Although there have been a number of valuable theoretical and experimental contributions to the field of autonomous, robotic or teleoperated excavation [5]-[8], autonomous operation of a full-scale excavator has not been commercially demonstrated. Many of the experimental studies reported in the literature involve using conventional industrial robots fitted with buckets to excavate in a bed of loose sand. While there are parallels between \u201cclassical\u201d robotics and robotic excavation, there are also some pronounced differences. In particular, an excavator is not fixed relative to the work piece; it plastically deforms the work piece by applying large forces and is caused to move relative to the soil by the same large forces. Furthermore, strategic and bucket trajectory planning must necessarily occur in a dynamic environment; if the excavator is not changing the profile of the soil being worked, it is not doing useful work. This article presents some results of the autonomous excavation project conducted at the Australian Centre for Field Robotics (ACFR) with a focus on construction automation. The application of robotic technology and computer control is one key to construction industry automation. Excavation automation is a multidisciplinary task, encompassing a broad area of research and development planning monitoring environment sensing and modeling navigation machine modeling and control. The ultimate goal of the ACFR excavation project is to demonstrate fully autonomous execution of excavation tasks in"}
{"_id": "b025235adf05e92ba7df7da1ced3c1d3b569d65e", "title": "Convolutional CRFs for Semantic Segmentation", "text": "For the challenging semantic image segmentation task the most efficient models have traditionally combined the structured modelling capabilities of Conditional Random Fields (CRFs) with the feature extraction power of CNNs. In more recent works however, CRF post-processing has fallen out of favour. We argue that this is mainly due to the slow training and inference speeds of CRFs, as well as the difficulty of learning the internal CRF parameters. To overcome both issues we propose to add the assumption of conditional independence to the framework of fully-connected CRFs. This allows us to reformulate the inference in terms of convolutions, which can be implemented highly efficiently on GPUs. Doing so speeds up inference and training by two orders of magnitude. All parameters of the convolutional CRFs can easily be optimized using backpropagation. To facilitating further CRF research we make our implementation publicly available. Please visit: https://github.com/MarvinTeichmann/ConvCRF"}
{"_id": "d76bb42ad0d29ec1641adb2d6227e1240eca7378", "title": "CBR: Strengths and Weaknesses", "text": "There is considerable enthusiasm about Case-Based Reasoning as a means of developing knowledge-based systems. There are two broad reasons for this enthusiasm. First, it is evident that much of human expert competence is experience based and it makes sense to adopt a reuse-based methodology for developing knowledge based systems. The other reason is the expectation that using Case-Based Reasoning to develop knowledge based systems will i nvolve less knowledge engineering than alternative \u2018 first-principles\u2019 based approaches. In this paper I explore the veracity of this assertion and outline the types of situation in which it will be true. CBR is perceived to have this knowledge engineering advantage because it allows the development of knowledge based systems in weak theory domains. If CBR can work without formalising a domain theory then there is a question about the quality of solutions produced by case-based systems. This is the other issue discussed in this paper and situations where CBR will and will not produce good quality solutions are outlined."}
{"_id": "d026f38261009aab19eba5934fb02e323e893956", "title": "Modeling of braided pneumatic actuators for robotic control", "text": "Braided Pneumatic Actuators exhibit non-linear forcelength properties grossly similar to muscle, and have a high strength-to-weight ratio. These properties make them desirable for legged robots. This work emphasizes understanding the actuator properties for later use in simulation and control of legged robots. Static and dynamic mathematical models for the actuators are reported and verified through testing and simulation. In addition, a dynamic air flow model modulated by a solenoid valve was developed to provide modular actuation subroutines for simulation."}
{"_id": "30708e26d81644b9a31b15ba6dc8534a5c00ea89", "title": "Review of performance metrics for green data centers: a taxonomy study", "text": "Data centers now play an important role in modern IT infrastructures. Although much research effort has been made in the field of green data center computing, performance metrics for green data centers have been left ignored. This paper is devoted to categorization of green computing performance metrics in data centers, such as basic metrics like power metrics, thermal metrics and extended performance metrics i.e. multiple data center indicators. Based on a taxonomy of performance metrics, this paper summarizes features of currently available metrics and presents insights for the study on green data center computing."}
{"_id": "fcff8436ae4cdfbb92cb992da60cb02fcc63aad0", "title": "Camera Augmented Mobile C-Arm (CAMC): Calibration, Accuracy Study, and Clinical Applications", "text": "Mobile C-arm is an essential tool in everyday trauma and orthopedics surgery. Minimally invasive solutions, based on X-ray imaging and coregistered external navigation created a lot of interest within the surgical community and started to replace the traditional open surgery for many procedures. These solutions usually increase the accuracy and reduce the trauma. In general, they introduce new hardware into the OR and add the line of sight constraints imposed by optical tracking systems. They thus impose radical changes to the surgical setup and overall procedure. We augment a commonly used mobile C-arm with a standard video camera and a double mirror system allowing real-time fusion of optical and X-ray images. The video camera is mounted such that its optical center virtually coincides with the C-arm's X-ray source. After a one-time calibration routine, the acquired X-ray and optical images are coregistered. This paper describes the design of such a system, quantifies its technical accuracy, and provides a qualitative proof of its efficiency through cadaver studies conducted by trauma surgeons. In particular, it studies the relevance of this system for surgical navigation within pedicle screw placement, vertebroplasty, and intramedullary nail locking procedures. The image overlay provides an intuitive interface for surgical guidance with an accuracy of <;1 mm, ideally with the use of only one single X-ray image. The new system is smoothly integrated into the clinical application with no additional hardware especially for down-the-beam instrument guidance based on the anteroposterior oblique view, where the instrument axis is aligned with the X-ray source. Throughout all experiments, the camera augmented mobile C-arm system proved to be an intuitive and robust guidance solution for selected clinical routines."}
{"_id": "1762baa638866a13dcc6d146fd5a49b36cbd9c30", "title": "Semi-Supervised Classification with Graph Convolutional Networks", "text": "We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin."}
{"_id": "58a63086b209374d5cf625d27617eba1e96288ef", "title": "ArnetMiner: extraction and mining of academic social networks", "text": "This paper addresses several key issues in the ArnetMiner system, which aims at extracting and mining academic social networks. Specifically, the system focuses on: 1) Extracting researcher profiles automatically from the Web; 2) Integrating the publication data into the network from existing digital libraries; 3) Modeling the entire academic network; and 4) Providing search services for the academic network. So far, 448,470 researcher profiles have been extracted using a unified tagging approach. We integrate publications from online Web databases and propose a probabilistic framework to deal with the name ambiguity problem. Furthermore, we propose a unified modeling approach to simultaneously model topical aspects of papers, authors, and publication venues. Search services such as expertise search and people association search have been provided based on the modeling results. In this paper, we describe the architecture and main features of the system. We also present the empirical evaluation of the proposed methods."}
{"_id": "009dbf3187862352aac542bf7d61e27bce6b27f5", "title": "SimRank: a measure of structural-context similarity", "text": "The problem of measuring \"similarity\" of objects arises in many applications, and many domain-specific measures have been developed, e.g., matching text across documents or computing overlap among item-sets. We propose a complementary approach, applicable in any domain with object-to-object relationships, that measures similarity of the structural context in which objects occur, based on their relationships with other objects. Effectively, we compute a measure that says \"two objects are similar if they are related to similar objects:\" This general similarity measure, called SimRank, is based on a simple and intuitive graph-theoretic model. For a given domain, SimRank can be combined with other domain-specific similarity measures. We suggest techniques for efficient computation of SimRank scores, and provide experimental results on two application domains showing the computational feasibility and effectiveness of our approach."}
{"_id": "0853c2a59d44fe97e0d21f89d80fa2f5a220e3b9", "title": "Inductive Conformal Prediction: Theory and Application to Neural Networks", "text": "Traditional machine learning algorithms for pattern recognition just output simple predictions, without any associated confidence values. Confidence values are an indication of how likely each prediction is of being correct. In the ideal case, a confidence of 99% or higher for all examples in a set, means that the percentage of erroneous predictions in that set will not exceed 1%. Knowing the likelihood of each prediction enables us to assess the extent to which we can rely on it. For this reason, predictions that are associated with some kind of confidence values are highly desirable in many risk-sensitive applications, such as those used for medical diagnosis or financial analysis. In fact, such information can benefit any application that requires human-computer interaction, as confidence values can be used to determine the way in which each prediction will be treated. For instance, a filtering mechanism can be employed so that only predictions which satisfy a certain level of confidence will be taken into account, while the rest can be discarded or passed on to a human for judgement. There are two main areas in mainstream machine learning that can be used in order to obtain some kind of confidence values; the Bayesian framework and the theory of Probably Approximately Correct learning (PAC theory). Quite often the Bayesian framework is used for producing algorithms that complement individual predictions with probabilistic measures of their quality. On the other hand, PAC theory can be used for producing upper bounds on the probability of error for a given algorithm with respect to some confidence level 1 \u2212 \u03b4. Both of these approaches however, have their drawbacks. In order to apply the Bayesian framework one is required to have some prior knowledge about the distribution that generates the data. When the correct prior is known, Bayesian methods provide optimal decisions. For real world data sets though, as the required knowledge is not available, one has to assume the existence of an arbitrarily chosen prior. In this case, if the assumed prior is incorrect, the resulting confidence levels may also be \u201cincorrect\u201d; for example the predictive regions output for the 95% confidence level may contain the true label in much less than 95% of the cases. This signifies a major failure, as we would expect confidence levels to bound the percentage of expected errors. An experimental demonstration of how misleading Bayesian methods can become when their assumptions are violated can be found in (Melluish et al., 2001)."}
{"_id": "08109a04d0386a43781669602547f59bc696a98f", "title": "An unsupervised long short-term memory neural network for event detection in cell videos", "text": "We propose an automatic unsupervised cell event detection and classification method, which expands convolutional Long Short-Term Memory (LSTM) neural networks, for cellular events in cell video sequences. Cells in images that are captured from various biomedical applications usually have different shapes and motility, which pose difficulties for the automated event detection in cell videos. Current methods to detect cellular events are based on supervised machine learning and rely on tedious manual annotation from investigators with specific expertise. So that our LSTM network could be trained in an unsupervised manner, we designed it with a branched structure where one branch learns the frequent, regular appearance and movements of objects and the second learns the stochastic events, which occur rarely and without warning in a cell video sequence. We tested our network on a publicly available dataset of densely packed stem cell phase-contrast microscopy images undergoing cell division. This dataset is considered to be more challenging that a dataset with sparse cells. We compared our method to several published supervised methods evaluated on the same dataset and to a supervised LSTM method with a similar design and configuration to our unsupervised method. We used an F1-score, which is a balanced measure for both precision and recall. Our results show that our unsupervised method has a higher or similar F1-score when compared to two fully supervised methods that are based on Hidden Conditional Random Fields (HCRF), and has comparable accuracy with the current best supervised HCRF-based method. Our method was generalizable as after being trained on one video it could be applied to videos where the cells were in different conditions. The accuracy of our unsupervised method approached that of its supervised counterpart."}
{"_id": "f9823bc7eec44a9a6cd7b629c8f6430fe82877fd", "title": "Ensuring Data Integrity in Cloud Computing", "text": "Cloud computing provides convenient on-demand network access to a shared pool of configurable computing resources. The resources can be rapidly deployed with great efficiency and minimal management overhead. Cloud is an insecure computing platform from the view point of the cloud users, the system must design mechanisms that not only protect sensitive information by enabling computations with encrypted data, but also protect users from malicious behaviours by enabling the validation of the computation result. In this paper, we propose a new data encoding scheme called layered interleaving, designed for time-sensitive packet recovery in the presence of bursty loss. It is high-speed data recovery scheme with minimal loss probability and using a forward error correction scheme to handle bursty loss. The proposed approach is highly efficient in recovering the singleton losses almost immediately and from bursty data losses."}
{"_id": "25dfda0ffe750e2a57b1d8c0275716619beb490b", "title": "Self-organized structures in a superorganism : do ants \u201c behave \u201d like molecules ?", "text": "While the striking structures (e.g. nest architecture, trail networks) of insect societies may seem familiar to many of us, the understanding of pattern formation still constitutes a challenging problem. Over the last two decades, self-organization has dramatically changed our view on how collective decision-making and structures may emerge out of a population of ant workers having each their own individuality as well as a limited access to information. A variety of collective behaviour spontaneously outcome from multiple interactions between nestmates, even when there is no directing influence imposed by an external template, a pacemaker or a leader. By focussing this review on foraging structures, we show that ant societies display some properties which are usually considered in physico-chemical systems, as typical signatures of self-organization. We detail the key role played by feed-back loops, fluctuations, number of interacting units and sensitivity to environmental factors in the emergence of a structured collective behaviour. Nonetheless, going beyond simple analogies with non-living self-organized patterns, we stress on the specificities of social structures made of complex living units of which the biological features have been selected throughout the evolution depending on their adaptive value. In particular, we consider the ability of each ant individual to process information about environmental and social parameters, to accordingly tune its interactions with nestmates and ultimately to determine the final pattern emerging at the collective level. We emphasize on the parsimony and simplicity of behavioural rules at the individual level which allow an efficient processing of information, energy and matter within the whole colony. \u00a9 2006 Elsevier B.V. All rights reserved. PACS: 87.18.Bb; 87.23.Cc; 87.23.Ge; 89.75.-k; 89.75.Fb; 89.75.Kd"}
{"_id": "3bc46849083e55e690fb3e2b8e3a18b3adc19a32", "title": "One Vector is Not Enough: Entity-Augmented Distributed Semantics for Discourse Relations", "text": "Discourse relations bind smaller linguistic units into coherent texts. Automatically identifying discourse relations is difficult, because it requires understanding the semantics of the linked arguments. A more subtle challenge is that it is not enough to represent the meaning of each argument of a discourse relation, because the relation may depend on links between lowerlevel components, such as entity mentions. Our solution computes distributed meaning representations for each discourse argument by composition up the syntactic parse tree. We also perform a downward compositional pass to capture the meaning of coreferent entity mentions. Implicit discourse relations are then predicted from these two representations, obtaining substantial improvements on the Penn Discourse Treebank."}
{"_id": "f0111dd237b88cc5433f83b6520ff4e5a4286e66", "title": "Analysis of blood samples for counting leukemia cells using Support vector machine and nearest neighbour", "text": "analysis of blood samples for counting leukemia cells is discussed in this paper. support vector machine and nearest neighbour concept is presented in this paper. The identification of blood disorders, it can lead to classification of certain diseases related to blood. Several classification techniques are investigated till today. One of the best methods for classification techniques nearest neighbour and SVM (Support Vector Machine).By identifying and counting blood cell within the blood smear using classification techniques it\u2019s quite possible to detect so many diseases. If one of the new classifier is used i.e. nearest neighbour and SVM it is quiet possible to detect the cancer cell from the blood cell counting. Keyword: Blood, Classification technique, Nearest Neighbour Network, SVM"}
{"_id": "239e7a7f8127b25e046e53fd5f1951dd2af473cd", "title": "The Fast Fourier Transform and Its Applications", "text": "The advent of the fast Fourier transform method has greatly extended our ability to implement Fourier methods on digital computers. A description of the alogorithm and its programming is given here and followed by a theorem relating its operands, the finite sample sequences, to the continuous functions they often are intended to approximate. An analysis of the error due to discrete sampling over finite ranges is given in terms of aliasing. Procedures for computing Fourier integrals, convolutions and lagged products are outlined. HISTORICAL BACKGROUND T HE FAST Fourier transform algorithm has an interesting history which has been described in [3]. Time does not permit repeating this history here in detail. The essentials are, however, that until the recent publication of fast Fourier transform methods, computer programs were using up hundreds of hours of computer time with procedures requiring something proportional to N2 operations to compute Fourier transforms of N data points. It is not surprising then that the \"new\" methods requiring a number of operations proportional to N log N received considerable attention and led to revisions in computer programs and in problemn-solving techniques using Fourier methods. It was discovered later that the base 2 form of the fast Fourier transform algorithm had been published many years ago by Runge and Konig [10] and by Stumpff [12], [13]. These are authors whose works are widely read and their papers certainly were used by those computing Fourier series. How then could these important algorithms have gone unnoticed? The answer is that the papers of Runge, K6nig, and Stumpif described primarily how one could use symmetries of the sine-cosine functions to reduce the amount of computation by factors of 4, 8, or even more. Relatively small portions of these papers mentioned the successive doubling algorithm which permitted one to take two Fourier analyses of N-point samples of data and combine them in N operations to obtain an analysis of a 2N-point sampling of the same data. Successive application of this algorithm obviously yields an N-point Fourier analysis in 10g2 N doublings, and therefore, takes N log2 N operations. Thus, while the computational method using symmetries reduced the proportionality factor in the KN2 operations required to transform an N-point sequence, the method based on the doubling algorithm took a number of operations proportional to N log2 N. Manuscript received August 16, 1968. The authors are with the IBM Watson Research Center, Yorktown Heights, N.Y. It is most likely that with the relatively small values of N used in preelectronic computer days, the former methods were easier to use and took fewer operations. Consequently, the methods requiring N log N operations were neglected. With the arrival of electronic computers capable of doing calculations of Fourier transforms with large values of N, the N log N methods were overlooked and the well-known hand calculator methods were programmed for the computers. Perhaps there is something to be learned from this experience, namely, that there may exist numerical methods in the older literature which should be reappraised whenever computing devices undergo radical changes. FAST FOURIER ALGORITHMS In the interest of coherent presentation, the definitions and procedures utilized herein will be developed in blocks and shown as figures. 
Thus, the discrete Fourier series is defined in Fig. 1 with A (n) being a sequence which gives the complex Fourier amplitudes as a function of frequency n. The X(j), j=0, 1, * * *, N-1 are regarded here as a complex sequence, and in a problem may represent a sampling of a signal at N sampling points. Also, (2iri' 2r 2r WN= exp .= cos-+ isinN1 N N is the principle Nth root of unity and if we substitute the expression for WN in terms of sines and cosines, we obtain the perhaps more familiar sine-cosine Fourier series. Complex Fourier series is employed for ease of notation and derivation of formulas. One should note next the inversion formula in Fig. 1, giving the A (n)'s in terms of the X(j)'s. Since A (n) is also a Fourier series, an algorithm or a program for computing the A (n)'s from the X(j)'s can be used to compute the X(j)'s from the A(n)'s. Since WNN= 1, the exponent of WN is to be interpreted modulo N. This leads to an essential property of the sequences X(j) and A (n), i.e., that they are periodic functions of j and n, respectively, with period N. It is shown in Fig. 2 that when N is a product, N= rs, the Fourier series can be calculated in a two-stage process. This is done just as though the sequences A (n) and X(j) were defined on two-dimensional rXs arrays with the array indices (ji, jo) and (n1, no) being defined as shown. When we substitute for j and n in WNin, and jn is reduced modulo N, it is found that the series can be 27 Authorized licensed use limited to: Peking University. Downloaded on March 24, 2009 at 01:10 from IEEE Xplore. Restrictions apply. IEEE TRANSACTIONS ON EDUCATION, MARCH 1969 Discrete Fourier Series: N-1 X(j) E A(n)wjn n,O N where WN exp(27riIN) Inversion Formula: N-1 N j.0 N Let us write: X(tjt4-A(n) Periodicity: WNN * I, Wj'N+N."}
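The successive-doubling step described above, combining two N/2-point transforms into one N-point transform with N extra operations, is short enough to sketch. Note the sketch uses the common exp(-2 pi i/N) engineering sign convention, whereas the paper writes its series with W_N = exp(+2 pi i/N); the two transforms differ only by conjugation.

```python
import cmath

def fft(x):
    """Radix-2 decimation-in-time FFT; len(x) must be a power of two.

    Two recursive N/2-point transforms are merged with N extra
    multiply-adds, giving the N log N operation count discussed above.
    """
    N = len(x)
    if N == 1:
        return list(x)
    even = fft(x[0::2])   # transform of even-indexed samples
    odd = fft(x[1::2])    # transform of odd-indexed samples
    out = [0j] * N
    for k in range(N // 2):
        w = cmath.exp(-2j * cmath.pi * k / N)  # twiddle factor
        out[k] = even[k] + w * odd[k]
        out[k + N // 2] = even[k] - w * odd[k]
    return out
```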
{"_id": "fd55ebf25d396090f475ac1f2bd9615e391a7d5f", "title": "MOSFET switched 20 kV, 500 A, 100 ns pulse generator with series connected pulse transformers", "text": "This paper describes a MOSFET switched pulser, which is composed of 30 series connections of pulse generating modules (PGMs) that can produce a 700 V/500 A/100 ns pulse with the rising time of 60 ns or less. All PGMs are connected in series through the specially designed pulse transformers to obtain 20 kV/500 A/100 ns pulses at repetition rates up to 20 kHz. To obtain the fast rising time pulse, it is essential to minimize the leakage inductance of the pulse transformer and the wiring inductance between pulse transformers. In this paper, two kinds of layout for the series connection of pulse transformers are introduced and compared to the conventional methods. Also, the authors minimized the losses by adopting an energy recovery circuit that returns to the source the stored energy in the clamp circuit of the primary windings of the pulse transformers. Experimental results are given for the proposed pulser under the resistor load and the wire-cylinder plasma reactor load, respectively."}
{"_id": "149564dacea3dc9ba9cff5534f87eba7eb07b918", "title": "Provisioning Software-Defined IoT Cloud Systems", "text": "Cloud computing is ever stronger converging with the Internet of Things (IoT) offering novel techniques for IoT infrastructure virtualization and its management on the cloud. However, system designers and operations managers face numerous challenges to realize IoT cloud systems in practice, mainly due to the complexity involved with provisioning large-scale IoT cloud systems and diversity of their requirements in terms of IoT resources consumption, customization of IoT capabilities and runtime governance. In this paper, we introduce the concept of software-defined IoT units - a novel approach to IoT cloud computing that encapsulates fine-grained IoT resources and IoT capabilities in well-defined APIs in order to provide a unified view on accessing, configuring and operating IoT cloud systems. Our software-defined IoT units are the fundamental building blocks of software-defined IoT cloud systems. We present our framework for dynamic, on-demand provisioning and deploying such software-defined IoT cloud systems. By automating provisioning processes and supporting managed configuration models, our framework simplifies provisioning and enables flexible runtime customizations of software-defined IoT cloud systems. We demonstrate its advantages on a real-world IoT cloud system for managing electric fleet vehicles."}
{"_id": "2f8b034ac986d4007fa4801312ffbb2dbcb556c2", "title": "Determining embedding dimension for phase-space reconstruction using a geometrical construction.", "text": "A, et al. Nonlinear analysis of electroencephalogram at rest and during cognitive tasks in patients with schizophrenia. One-dimensional electroencephalography (EEG) data were transformed into multidimensional phase space. The phase space concept is crucial for nonlinear dynamics analysis. In a hypothetical system governed by n variables, the phase space is n-dimensional. Each state of the system corresponds to a point in phase space, with n coordinates that are the values assumed by the governing variables for this specific state. If the system is observed for a period of time, the sequence of the point in phase space converges into a subspace, called system's attractor. The technique of delay coordinates is carried out to reconstruct the attractor's trajectory. To unfold the projection back to a multivariate state space, a representation of the original system, the delay coordinates y(t) = [xj(t), xj(t + T), \u2026 xj(t + [d-1] T)] from a single time series xj and the embedding procedure were performed. In this formula, y(t) is a point of the trajectory in the phase space at time t, x (t is iT) are the coordinates in the phase space corresponding to the time-delayed values of the time series, T is the time delay between the points of the time series considered, and d is the embedding dimension. 1,2 The choice of the optimal time delay T and embedding dimension d are important to the success of reconstruction with finite data. For the time delay, the first local minimum of the average mutual information between the set of measurements v(t) and v(t + T) was used in the present study. For the embedding dimension d the minimum (optimal) embedding was computed in the reconstruction procedure. 3 The attractor's reconstruction is necessary for the analysis of mathematical quantities characterizing the attractor itself. One important metric property of the attractor is the correlation dimension (D2), which estimates the degrees of freedom, such as the number of independent variables necessary to describe the dynamic of the original system. The larger the D2 values of the attractor, the more complicated the behavior of the nonlinear system. The D2 is thus a measure of the complexity of the process being investigated, and it characterizes the distribution of points in phase space. 4 The Gassberger\u2013Procaccia algorithm 5 was computed to evaluate D2 values of the attractors from EEG data. With this algorithm, D2 is derived by determining the relative number \u2026"}
{"_id": "79905228c61b3a328496c597fa371e94c8e9b115", "title": "Obstacle Avoidance System for a Quadrotor UAV", "text": "This is the summary of the research being done at Cal Poly Pomona at an undergraduate level on the development of obstacle avoidance capability for unmanned aerial vehicles (UAVs). A quadrotor UAV is used as the research platform. Push and Rapidly Exploring Random Tree (RRT) algorithms are used for the obstacle avoidance capability. The paper shows the use of cheaper commercial off-the-shelf sensors and processor board in the design and implementation of obstacle avoidance algorithm."}
{"_id": "90c562f186022b8f38f412444ad3cdf05d445d3f", "title": "A new linearization method of unbalanced electrical distribution networks", "text": "With increasing penetration of distributed generation in the distribution networks (DN), the secure and optimal operation of DN has become an important concern. As DN control and operation strategies are mostly based on the linearized sensitivity coefficients between controlled variables (e.g., node voltages, line currents, power loss) and control variables (e.g., power injections, transformer tap positions), efficient and precise calculation of these sensitivity coefficients, i.e. linearization of DN, is of fundamental importance. In this paper, the derivation of the node voltages and power loss as functions of the nodal power injections and transformers' tap-changers positions is presented, and then solved by a Gauss-Seidel method. Compared to other approaches presented in the literature, the proposed method takes into account different load characteristics (e.g., constant PQ, constant impedance, constant current and any combination of above) of a generic multi-phase unbalanced DN and improves the accuracy of linearization. Numerical simulations on both IEEE 13 and 34 nodes test feeders show the efficiency and accuracy of the proposed method."}
{"_id": "410a1c7f2ae53dfcef45e9f54426f01fb046bf8e", "title": "Dielectric properties estimation of normal and malignant skin tissues at millimeter-wave frequencies using effective medium theory", "text": "Millimeter-wave reflectometry can be used as a non-invasive method for early detection of skin cancers. Less penetration and higher resolution are only some reasons to use mm-waves for skin cancer diagnosis. The dielectric parameters data of human skin at mm-wave frequencies are not widely addressed in the literature due to the expensive instruments and the complicated structure of human skin. Available dielectric properties of cancerous tissues in the literature are for other tissues rather than skin and are restricted to the frequencies below 20 GHz. In this paper, dielectric properties of healthy and malignant skin tissues were estimated in the range of 20-100 GHz by applying the effective medium theory. A stratified skin model and some skin tumors were presented. The possibility of mm-wave reflectometry for detection of skin cancers has also been studied. The results of this paper can be used as a theoretical basis for taking further steps in skin cancer research."}
{"_id": "472b179ecda85504166b5c6cdfa562e89bd62870", "title": "Ontology-Based Data Access for Extracting Event Logs from Legacy Data: The onprom Tool and Methodology", "text": "Process mining aims at discovering, monitoring, and improving business processes by extracting knowledge from event logs. In this respect, process mining can be applied only if there are proper event logs that are compatible with accepted standards, such as extensible event stream (XES). Unfortunately, in many real world set-ups, such event logs are not explicitly given, but instead are implicitly represented in legacy information systems. In this work, we exploit a framework and associated methodology for the extraction of XES event logs from relational data sources that we have recently introduced. Our approach is based on describing logs by means of suitable annotations of a conceptual model of the available data, and builds on the ontology-based data access (OBDA) paradigm for the actual log extraction. Making use of a realworld case study in the services domain, we compare our novel approach with a more traditional extract-transform-load based one, and are able to illustrate its added value. We also present a set of tools that we have developed and that support the OBDA-based log extraction framework. The tools are integrated as plugins of the ProM process mining suite."}
{"_id": "6b2a667fb351aef4872b5a21d5fb0cabb529e15a", "title": "Identifying the Higgs Boson", "text": "This paper discusses the identification of the Higgs boson subatomic particle from jet pull energy colorflow images of the particles\u2019 decay, as modeled by the ATLAS Experiment at the Large Hadron Collider in CERN [1]. The Higgs field is an hypothesized energy field thought to permeate the entire universe. Without it, the Standard Model of particle physics would break down, as atomic particles would not have the required mass to attract each other, leading them to simply float around in the universe at the speed of light [20]. Its proof would completely alter our understanding of mass as a physical property, making the discovery of the Higgs field the fundamental unanswered question in particle physics in the last halfcentury [13]. The Standard Model suggests that if the Higgs field were to exist, then its quantum excitation, a particle referred to as the Higgs boson, would also have to exist [20]. The Large Hadron Collider (LHC), tasked with finding this particle, consisting of multiple super-powered electromagnets, collides charged particles traveling at near lightspeed. These collisions deform space upon impact, breaking the charged particles into subatomic constituents. It has been hypothesized that with a proton collision at high enough energy, the Higgs boson would decay in observable ways. The ATLAS detector at the LHC records 40 million proton collisions a second, making human curation of these events unfeasible [14]. An accurate classification system that would label the most promising observations as Higgs boson particle decays is, therefore, required. In collaboration with SLAC and the ATLAS Experiment at CERN we were given access to energy images for both Higgs boson and gluon decay, referred to as signal and background respectively, seen in Figures 1 and 2. The purpose of this project was to build a binary classifier which, given the colorflow energy image of the decay of an unknown particle, would accurately distinguish whether or not that particle was a Higgs boson. While we tested a wide array of supervised learning models, we focused primarily on ensemble methods. In particular we developed and fine-tuned an Adaptive Boosting classifier (AdaBoost) with a Random Forest classifier as its base estimator [8, 10]. We also utilized a number of image feature extraction mechanisms includ-"}
{"_id": "c33a7557e2c23bc62ae448e67148a2ab9f3756b1", "title": "A new cell search scheme in 3GPP long term evolution downlink, OFDMA systems", "text": "This paper presents a cell search procedure in 3rd Generation Partnership Project (3GPP) Long Term Evolution (LTE) downlink systems. The proposed cell search procedure contains three steps. In the beginning, conventional mode detection, symbol timing detection and fractional carrier frequency offset (CFO) estimation methods are adopted so that frequency-domain signal processing can be performed subsequently. Secondly, a new joint detection method for integer carrier frequency offset and sector cell index information is proposed, which is shown to be capable of resisting symbol timing error. Finally, we also proposed a detection method for frame timing and cell identity (ID) group information without explicit channel estimates. Simulation results demonstrate that the proposed scheme with lower detection error probability is effective and robust compared with some conventional approaches."}
{"_id": "2ae8ebe9053522a48be23d66bbc65d0ae4eaf78e", "title": "Approximate graph mining with label costs", "text": "Many real-world graphs have complex labels on the nodes and edges. Mining only exact patterns yields limited insights, since it may be hard to find exact matches. However, in many domains it is relatively easy to define a cost (or distance) between different labels. Using this information, it becomes possible to mine a much richer set of approximate subgraph patterns, which preserve the topology but allow bounded label mismatches. We present novel and scalable methods to efficiently solve the approximate isomorphism problem. We show that approximate mining yields interesting patterns in several real-world graphs ranging from IT and protein interaction networks to protein structures."}
{"_id": "1de115045b0e2741f76f1ac1190c150ef12418c6", "title": "Evaluating features and classifiers for road weather condition analysis", "text": "Weather-dependent road conditions are a major factor in many automobile incidents; computer vision algorithms for automatic classification of road conditions can thus be of great benefit. This paper presents a system for classification of road conditions using still-frames taken from an uncalibrated dashboard camera. The problem is challenging due to variability in camera placement, road layout, weather and illumination conditions. The system uses a prior distribution of road pixel locations learned from training data then fuses normalized luminance and texture features probabilistically to categorize the segmented road surface. We attain an accuracy of 80% for binary classification (bare vs. snow/ice-covered) and 68% for 3 classes (dry vs. wet vs. snow/ice-covered) on a challenging dataset, suggesting that a useful system may be viable."}
{"_id": "fa2357e09a5c2c41d263c28ba33a6ad051ab8698", "title": "Multi-modal and multi-brain-computer interfaces: A review", "text": "In this short paper we survey recent research views in non-traditional brain-computer interfaces. That is, interfaces that can process brain activity input, but that are designed for non-clinical purposes, that are meant to be used by `healthy' users, that process other user input as well, and that even allow input, voluntarily or involuntarily, from multiple users. This is a rather new research and BCI application area. Control of applications can be made more robust by fusing brain activity information with other information, either explicitly provided by a user (such as commands) or extracted from the user by interpreting his or her behavior (movements, posture, gaze, facial expression, nonverbal speech) or sensing (neuro-)physiological characteristics (skin conductivity, heart rate, brain activity). We see also research where brain activity of multiple users is fused to get a (sometimes critical) control task done. We see also the emergence of applications where BCI measurements help to improve group performance, for example, during meetings, or where information obtained from these measurements helps to learn about audience preferences. Multi-brain BCI has also become a paradigm in artistic and game applications. Artistic interactive BCI applications may require audience participation. In game environments brain activity of various players can be used to control or adapt the game. Both competition and collaboration in serious games, entertainment games and artistic installations requires fusion of EEG measurements from different subjects."}
{"_id": "6d3e50661962fe87c2b14eb8ef9cd61d3126e0bf", "title": "FACTORS AFFECTING CORPORATE SOCIAL RESPONSIBILITY DISCLOSURE IN EGYPT", "text": "The study makes a significant contribution to the corporate social responsibility (CSR) disclosure literature by offering the first study of its type undertaken in Egypt as an example of a developing country that examines the determinants of individual and aggregated types of CSR information. Using a sample of 111 Egyptian listed companies for the period of 2005\u20132010, we find that 66% of the Egyptian listed companies disclose on average 10\u201350 CSR statements. In addition, we find that product/customer information is used extensively by Egyptian listed companies compared with other types of CSR information. Finally we find that profitability is the main determinant for the aggregated and most of individual CSR information in Egypt."}
{"_id": "1fc0fadc26eb6adef3692fb5dfa60cc177542986", "title": "Automatic fault tree generation from SysML system models", "text": "In this paper, a methodology is proposed to integrate safety analysis within a systems engineering approach. This methodology is based on SysML models and aims at generating (semi-) automatically safety analysis artifacts, mainly FMEA and FTA, from system models. Preliminary functional and component FMEA are automatically generated from the functional and structural models respectively, then completed by safety experts. By representing SysML structural diagram as a directed multi-graph, through a graph traversal algorithm and some identified patterns, generic fault trees are automatically derived with corresponding logic gates and events. The proposed methodology provides the safety expert with assistance during safety analysis. It helps reducing time and error proneness of the safety analysis process. It also helps ensuring consistency since the safety analysis artifacts are automatically generated from the latest system model version. The methodology is applied to a real case study, the electromechanical actuator EMA."}
{"_id": "1e40dcef601dcb09d541b9613e55a24324cd2715", "title": "Securing communication in 6LoWPAN with compressed IPsec", "text": "Real-world deployments of wireless sensor networks (WSNs) require secure communication. It is important that a receiver is able to verify that sensor data was generated by trusted nodes. It may also be necessary to encrypt sensor data in transit. Recently, WSNs and traditional IP networks are more tightly integrated using IPv6 and 6LoWPAN. Available IPv6 protocol stacks can use IPsec to secure data exchange. Thus, it is desirable to extend 6LoWPAN such that IPsec communication with IPv6 nodes is possible. It is beneficial to use IPsec because the existing end-points on the Internet do not need to be modified to communicate securely with the WSN. Moreover, using IPsec, true end-to-end security is implemented and the need for a trustworthy gateway is removed. In this paper we provide End-to-End (E2E) secure communication between IP enabled sensor networks and the traditional Internet. This is the first compressed lightweight design, implementation, and evaluation of 6LoWPAN extension for IPsec. Our extension supports both IPsec's Authentication Header (AH) and Encapsulation Security Payload (ESP). Thus, communication endpoints are able to authenticate, encrypt and check the integrity of messages using standardized and established IPv6 mechanisms."}
{"_id": "902e8f2017b9914b8afbecee0b4abfa66671c936", "title": "Structural equations modeling : Fit Indices , sample size , and advanced topics", "text": "This article is the second of two parts intended to serve as a primer for structural equations models for the behavioral researcher. The first article introduced the basics: the measurement model, the structural model, and the combined, full structural equations model. In this second article, advanced issues are addressed, including fit indices and sample size, moderators, longitudinal data, mediation, and so forth. \u00a9 2009 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved."}
{"_id": "1c57beecca34dcda311b7c1eb23a8cecd8f495ff", "title": "A series elastic actuator as a new load-sensitive continuously variable transmission mechanism for control actuation systems", "text": "Electrical motors are often used to actuate mechanisms with varying speed and under varying load e.g., missile control actuation systems or robot extremities. Sizing of the motor is critical in these applications where space is limited. Varying speed and load cause poor utilization of the motor if the reduction ratio is constant. This requires employment of a larger motor than the smallest possible with ideal variable transmission. To realize better utilization of the motor, we propose a series elastic actuator that works as a load sensitive continuously variable transmission system. This is achieved by combining a pin in slot mechanism with an elastic output shaft. The proposed mechanism is modeled and simulated using example missile flight data and a typical robot finger loading scenario. Results are compared to the constant reduction ratio case, and advantages of the proposed mechanism are shown as decreased actuator size and power."}
{"_id": "1bfa073dfca23d870378949d4774717b75904c27", "title": "From Manual to Semi-Automatic Semantic Annotation: About Ontology-Based Text Annotation Tools", "text": "Semantic Annotation is a basic technology for intelligent content and is bene cial in a wide range of content-oriented intelligent applications, esp. in the area of the Semantic Web. In this paper we present our work in ontology-based semantic annotation, which is embedded in a scenario of a knowledge portal application. Starting with seemingly good and bad manual semantic annotation, we describe our experiences made within the KAinitiative. The experiences gave us the starting point for developing an ergonomic and knowledge base-supported annotation tool. Furthermore, the annotation tool described are currently extended with mechanisms for semi-automatic information-extraction based annotation. Supporting the evolving nature of semantic content we additionally describe our idea of evolving ontologies supporting semantic annotation. This paper has been presented at the COLING-2000 Workshop on Semantic Annotation and Intelligent Content, Centre Universitaire, Luxembourg, 5.-6. August, 2000."}
{"_id": "4adef6d5951172dfce9d49e8672d960d11b6f8de", "title": "Towards Text Knowledge Engineering", "text": "We introduce a methodology for automating the maintenance of domain-specific taxonomies based on natural language text understanding. A given ontology is incrementally updated as new concepts are acquired from real-world texts. The acquisition process is centered around the linguistic and conceptual \u201cquality\u201d of various forms of evidence underlying the generation and refinement of concept hypotheses. On the basis of the quality of evidence, concept hypotheses are ranked according to credibility and the most credible ones are selected for assimilation into the domain knowledge base."}
{"_id": "266de78ed10439c448a125f6f783c1a97e608401", "title": "Introduction to WordNet: An On-line Lexical Database", "text": "Standard alphabetical procedures for organizing lexical information put together words that are spelled alike and scatter words with similar or related meanings haphazardly through the list. Unfortunately, there is no obvious alternative, no other simple way for lexicographers to keep track of what has been done or for readers to find the word they are looking for. But a frequent objection to this solution is that finding things on an alphabetical list can be tedious and time-consuming. Many people who would like to refer to a dictionary decide not to bother with it because finding the information would interrupt their work and break their train of thought."}
{"_id": "be36e06077d1fb0f443030f9817cd0c9ea5a3829", "title": "Exploring Students' Computational Thinking Skills in Modeling and Simulation Projects: a Pilot Study", "text": "Computational Thinking (CT) is gaining a lot of attention in education. We explored how to discern the occurrences of CT in the projects of 12th grade high school students in the computer science (CS) course. Within the projects, they constructed models and ran simulations of phenomena from other (STEM) disciplines. We examined which CT aspects occurred in students' activities and how to assess students' CT accomplishments. For this purpose we employed a framework based on CT characterizations by Wing [14, 15], CSTA [4] and Comer et al. [3]. We analyzed students' project documentation, survey results and interviews with individual students. The findings indicate that this framework is suitable for detection of occurrences of CT aspects in students' data. Moreover, our preliminary results suggest that the framework is useful in assessment of the quality of the students' CT performance."}
{"_id": "070cfe6c83858b8b4ca4be0eb2cac3114d9daaae", "title": "Influence and passivity in social media", "text": "The ever-increasing amount of information flowing through Social Media forces the members of these networks to compete for attention and influence by relying on other people to spread their message. A large study of information propagation within Twitter reveals that the majority of users act as passive information consumers and do not forward the content to the network. Therefore, in order for individuals to become influential they must not only obtain attention and thus be popular, but also overcome user passivity. We propose an algorithm that determines the influence and passivity of users based on their information forwarding activity. An evaluation performed with a 2.5 million user dataset shows that our influence measure is a good predictor of URL clicks, outperforming several other measures that do not explicitly take user passivity into account. We demonstrate that high popularity does not necessarily imply high influence and vice-versa."}
{"_id": "e0067a9c3f30aab46d9e6f8844dc36e83c85869f", "title": "Automatic Brain Tumor Detection and Segmentation Using U-Net Based Fully Convolutional Networks", "text": "A major challenge in brain tumor treatment planning and quantit ative evaluation is determination of the tumor extent. The noninva s ve magnetic resonance imaging (MRI) technique has emerged as a front-line diag nostic tool for brain tumors without ionizing radiation. Manual segmentation of brain t umor extent from 3D MRI volumes is a very time-consuming task and the performance is highly relied on operator\u2019s experience. In this context, a reliable fully automatic segmentation method for the brain tumor segmentation is necessary for an efficient measurement of the tumor extent. In this study, w e propose a fully automatic method for brain tumor segmentation, which is develo ped using U-Net based deep convolutional networks. Our method was evalu ated on Multimodal Brain Tumor Image Segmentation (BRATS 2015) datasets, which contain 220 high-grade brain tumor and 54 low-grade tumor cases. Cro svalidation has shown that our method can obtain promising segmentation effi-"}
{"_id": "cacc5529928cf7b091fa6194704d608ced2c514c", "title": "COMPENDIUM : a text summarisation tool for generating summaries of multiple purposes , domains , and genres", "text": "In this paper, we present a Text Summarisation tool, compendium, capable of generating the most common types of summaries. Regarding the input, singleand multi-document summaries can be produced; as the output, the summaries can be extractive or abstractiveoriented; and finally, concerning their purpose, the summaries can be generic, query-focused, or sentiment-based. The proposed architecture for compendium is divided in various stages, making a distinction between core and additional stages. The former constitute the backbone of the tool and are common for the generation of any type of summary, whereas the latter are used for enhancing the capabilities of the tool. The main contributions of compendium with respect to the state-of-the-art summarisation systems are that (i) it specifically deals with the problem of redundancy, by means of textual entailment; (ii) it combines statistical and cognitive-based techniques for determining relevant content; and (iii) it proposes an abstractive-oriented approach for facing the challenge of abstractive summarisation. The evaluation performed in different domains and textual genres, comprising traditional texts, as well as texts extracted from the Web 2.0, shows that compendium is very competitive and appropriate to be used as a tool for generating summaries."}
{"_id": "a758a733d56c13429ad01677d5d22790f20241ae", "title": "A low noise readout circuit for integrated electrochemical biosensor arrays", "text": "This paper presents a low noise electrochemical interface circuit that is tuned to the needs of protein-based biosensor arrays and compatible with the formation of fully integrated biochemical microsystems. The circuit includes an integrated potentiostat and highly sensitive amperometric readout amplifier. It achieves good noise performance while supporting biosensor output currents from 10 pA to 10 /spl mu/A to suit a wide range of sensitivities and electrode areas. Implemented in 0.5 /spl mu/m CMOS process, it is compatible with a post-CMOS process to realize an integrated biosensor array."}
{"_id": "2ae24faec0e4fd7a07f5730098c4f246ab7545e5", "title": "A meta-analytic review of gender differences in perceptions of sexual harassment.", "text": "Research on gender differences in perceptions of sexual harassment informs an ongoing legal debate regarding the use of a reasonable person standard instead of a reasonable woman standard to evaluate sexual harassment claims. The authors report a meta-analysis of 62 studies of gender differences in harassment perceptions. An earlier quantitative review combined all types of social-sexual behaviors for a single meta-analysis; the purpose of this study was to investigate whether the magnitude of the female-male difference varies by type of behavior. An overall standardized mean difference of 0.30 was found, suggesting that women perceive a broader range of social-sexual behaviors as harassing. However, the meta-analysis also found that the female-male difference was larger for behaviors that involve hostile work environment harassment, derogatory attitudes toward women, dating pressure, or physical sexual contact than sexual propositions or sexual coercion."}
{"_id": "1606c60131470429c3449d5ac27b647bde2a0bb2", "title": "Deep Learning and Data Labeling for Medical Applications", "text": "This study addresses the recognition problem of the HEp-2 cell using indirect immunofluorescent (IIF) image analysis, which can facilitate the diagnosis of many autoimmune diseases by finding antibodies in the patient serum. Recently, a lot of automatic HEp-2 cell classification strategies including both shallow and deep methods have been developed, wherein the deep Convolutional Neural Networks (CNNs) have been proven to achieve impressive performance. However, the deep CNNs in general requires a fixed size of image as the input. In order to conquer the limitation of the fixed size problem, a spatial pyramid pooling (SPP) strategy has been proposed in general object recognition and detection. The SPP-net usually exploit max pooling strategies for aggregating all activated status of a specific neuron in a predefined spatial region by only taking the maximum activation, which achieved superior performance compared with mean pooling strategy in the traditional state-of-the-art coding methods such as sparse coding, linear locality-constrained coding and so on. However, the max pooling strategy in SPP-net only retains the strongest activated pattern, and would completely ignore the frequency: an important signature for identifying different types of images, of the activated patterns. Therefore, this study explores a generalized spatial pooling strategy, called K-support spatial pooling, in deep CNNs by integrating not only the maximum activated magnitude but also the response magnitude of the relatively activated patterns of a specific neuron together. This proposed K-support spatial pooling strategy in deep CNNs combines the popularly applied mean and max pooling methods, and then avoid awfully emphasizing of the maximum activation but preferring a group of activations in a supported region. The deep CNNs with the proposed K-support spatial pooling is applied for HEp-2 cell classification, and achieve promising performance compared with the state-of-the-art approaches."}
{"_id": "79f50173782dac0abb7e7f37dbabf4605494f722", "title": "Participatory Visualization with Wordle", "text": "We discuss the design and usage of ldquoWordle,rdquo a Web-based tool for visualizing text. Wordle creates tag-cloud-like displays that give careful attention to typography, color, and composition. We describe the algorithms used to balance various aesthetic criteria and create the distinctive Wordle layouts. We then present the results of a study of Wordle usage, based both on spontaneous behaviour observed in the wild, and on a large-scale survey of Wordle users. The results suggest that Wordles have become a kind of medium of expression, and that a ldquoparticipatory culturerdquo has arisen around them."}
{"_id": "d046b97ff711c32b25eff210c417dd3c5916e816", "title": "IncludeOS: A Minimal, Resource Efficient Unikernel for Cloud Services", "text": "The emergence of cloud computing as a ubiquitous platform for elastically scaling services has generated need and opportunity for new types of operating systems. A service that needs to be both elastic and resource efficient needs A) highly specialized components, and B) to run with minimal resource overhead. Classical general purpose operating systems designed for extensive hardware support are by design far from meeting these requirements. In this paper we present IncludeOS, a single tasking library operating system for cloud services, written from scratch in C++. Key features include: extremely small disk-and memory footprint, efficient asynchronous I/O, OS-library where only what your service needs gets included, and only one device driver by default (virtio). As a test case a bootable disk image consisting of a simple DNS server with OS included is shown to require only 158 kb of disk space and to require 5-20% less CPU-time, depending on hardware, compared to the same binary running on Linux."}
{"_id": "5165aa6cf35761dc50cbd57a61ab067901211cb8", "title": "FPGA-Based Sensorless PMSM Speed Control Using Reduced-Order Extended Kalman Filters", "text": "This paper presents the design and implementation of a field-programmable gate array (FPGA)-based architecture for the speed control of sensorless permanent-magnet synchronous motor (PMSM) drives. For the reduction of computation resources, as well as accuracy improvement in the rotor position estimation, a parallel reduced-order extended Kalman filter (EKF) is proposed in this work. Compared with an EKF, the system order is reduced and the iteration process is greatly simplified, resulting in significant savings of resource utility, while maintaining high estimation performance. The whole control system includes a current-control-and-coordinate-transformation unit, a proportional-integral (PI) speed controller, and other accessory modules, all implemented in a single FPGA chip. A hardware description language is adopted to describe advantageous features of the proposed control system. Moreover, the finite-state-machine method is applied with the purpose to reduce logic elements used in the FPGA chip. The validity of the approach is verified through simulation based on the Modelsim/Simulink cosimulation method. Finally, experimental results are obtained on an FPGA platform with an inverter-fed PMSM to show the feasibility and effectiveness of the proposed system-on-programmable-chip for PMSM drives."}
{"_id": "6a0d253053ce8646e49904efe0e062bcc30d8257", "title": "Learning to Remember Rare Events", "text": "Despite recent advances, memory-augmented deep neural networks are still limited when it comes to life-long and one-shot learning, especially in remembering rare events. We present a large-scale life-long memory module for use in deep learning. The module exploits fast nearest-neighbor algorithms for efficiency and thus scales to large memory sizes. Except for the nearest-neighbor query, the module is fully differentiable and trained end-to-end with no extra supervision. It operates in a life-long manner, i.e., without the need to reset it during training. Our memory module can be easily added to any part of a supervised neural network. To show its versatility we add it to a number of networks, from simple convolutional ones tested on image classification to deep sequence-to-sequence and recurrent-convolutional models. In all cases, the enhanced network gains the ability to remember and do life-long one-shot learning. Our module remembers training examples shown many thousands of steps in the past and it can successfully generalize from them. We set new state-of-the-art for one-shot learning on the Omniglot dataset and demonstrate, for the first time, life-long one-shot learning in recurrent neural networks on a large-scale machine translation task."}
{"_id": "e45213ef5a0d0cad7a9ebfa117a178f60c763d17", "title": "A 1.2 V Low-Noise-Amplifier with Double Feedback for High Gain and Low Noise Figure", "text": "Faculdade de Ci\u00eancias e Tecnologia Departamento de Engenharia Eletrot\u00e9cnica e de Computadores Mestrado Integrado em Engenharia Eletrot\u00e9cnica e de Computadores por David Jorge Tiago Amo\u00eado In this thesis we present a balun low noise amplifier (LNA) in which the gain is boosted using a double feedback structure. The circuit is based in a Balun LNA with noise and distortion cancellation. The LNA is based in two basic stages: common-gate (CG) and common-source (CS). We propose to replace the resistors by active loads, which have two inputs that will be used to provide the feedback (in the CG and CS stages). This proposed methodology will boost the gain and reduce the NF (Noise Figure). Simulation results, with a 130 nm CMOS technology, show that the gain is 19.65 dB and the NF is less than 2.17 dB. The total power dissipation is only 5 mW (since no extra blocks are required), leading to an FOM (Figure of Merit) of 3.13 mW from a nominal 1.2 supply."}
{"_id": "f3dab33e5f00e80ed4ce97d3443e96eb0ee96301", "title": "AdapNet: Adaptive semantic segmentation in adverse environmental conditions", "text": "Robust scene understanding of outdoor environments using passive optical sensors is a onerous and essential task for autonomous navigation. The problem is heavily characterized by changing environmental conditions throughout the day and across seasons. Robots should be equipped with models that are impervious to these factors in order to be operable and more importantly to ensure safety in the real-world. In this paper, we propose a novel semantic segmentation architecture and the convoluted mixture of deep experts (CMoDE) fusion technique that enables a multi-stream deep neural network to learn features from complementary modalities and spectra, each of which are specialized in a subset of the input space. Our model adaptively weighs class-specific features of expert networks based on the scene condition and further learns fused representations to yield robust segmentation. We present results from experimentation on three publicly available datasets that contain diverse conditions including rain, summer, winter, dusk, fall, night and sunset, and show that our approach exceeds the state-of-the-art. In addition, we evaluate the performance of autonomously traversing several kilometres of a forested environment using only the segmentation for perception."}
{"_id": "6b8dbd0c4eb11d5772b3b44f31d55a94afd7ed8f", "title": "Statistical compression-based models for text classification", "text": "Text classification is the task of assigning predefined categories to text documents. It is a common machine learning problem. Statistical text classification that makes use of machine learning methods to learn classification rules are particularly known to be successful in this regard. In this research project we are trying to re-invent the text classification problem with a sound methodology based on statistical data compression technique-the Minimum Message Length (MML) principle. To model the data sequence we have used the Probabilistic Finite State Automata (PFSAs). We propose two approaches for text classification using the MML-PFSAs. We have tested both the approaches with the Enron spam dataset and the results of our empirical evaluation has been recorded in terms of the well known classification measures i.e. recall, precision, accuracy and error. The results indicate good classification accuracy that can be compared with the state of art classifiers."}
{"_id": "640420df9138915a1c165c8f53fb2974b9980dee", "title": "A SEM-neural network approach for predicting antecedents of m-commerce acceptance", "text": "Higher penetration of powerful mobile devices \u2013 especially smartphones \u2013 and high-speed mobile internet access are leading to better offer and higher levels of usage of these devices in commercial activities, especially among young generations. The purpose of this paper is to determine the key factors that influence consumers\u2019 adoption of mobile commerce. The extended model incorporates basic TAM predictors, such as perceived usefulness and perceived ease of use, but also several external variables, such as trust, mobility, customization and customer involvement. Data was collected from 224 m-commerce consumers. First, structural equation modeling (SEM) was used to determine which variables had signif-"}
{"_id": "33625011888a2f86233a4e5bd07643ad6f7041b7", "title": "Cortical hubs revealed by intrinsic functional connectivity: mapping, assessment of stability, and relation to Alzheimer's disease.", "text": "Recent evidence suggests that some brain areas act as hubs interconnecting distinct, functionally specialized systems. These nexuses are intriguing because of their potential role in integration and also because they may augment metabolic cascades relevant to brain disease. To identify regions of high connectivity in the human cerebral cortex, we applied a computationally efficient approach to map the degree of intrinsic functional connectivity across the brain. Analysis of two separate functional magnetic resonance imaging datasets (each n = 24) demonstrated hubs throughout heteromodal areas of association cortex. Prominent hubs were located within posterior cingulate, lateral temporal, lateral parietal, and medial/lateral prefrontal cortices. Network analysis revealed that many, but not all, hubs were located within regions previously implicated as components of the default network. A third dataset (n = 12) demonstrated that the locations of hubs were present across passive and active task states, suggesting that they reflect a stable property of cortical network architecture. To obtain an accurate reference map, data were combined across 127 participants to yield a consensus estimate of cortical hubs. Using this consensus estimate, we explored whether the topography of hubs could explain the pattern of vulnerability in Alzheimer's disease (AD) because some models suggest that regions of high activity and metabolism accelerate pathology. Positron emission tomography amyloid imaging in AD (n = 10) compared with older controls (n = 29) showed high amyloid-beta deposition in the locations of cortical hubs consistent with the possibility that hubs, while acting as critical way stations for information processing, may also augment the underlying pathological cascade in AD."}
{"_id": "e4ac0dcbcb701bfd36128296751fe9cb25e1e871", "title": "Low-power, folded cascode near rail-to-rail OTA for moderate frequency signal processing", "text": "This paper presents a gate-driven, folded-cascode operational transconductance amplifier (FCOTA), operating in strong inversion region. The input core utilizes two complementary PMOS and NMOS input pairs to ensure rail-to-rail input common mode range (ICMR). The cascoded load structure at output port performs the job of current summer and provides enhanced gain due to high output impedance for FCOTA. The proposed OTA has been employed to implement universal biquadratic low-pass, high-pass, band-pass and notch transfer functions, simultaneously available at different nodes of the filter. The proposed FCOTA used dual supply voltage of \u00b1 0.9V and dissipates around 93.6 \u03bcW power and provides 80.24 dB open loop gain and gain bandwidth (GBW) of 6.03 MHz. The Cadence VIRTUOSO environment using UMC 0.18 \u03bcm CMOS process technology has been used to simulate the proposed circuit."}
{"_id": "6b41f80de9a314af67b909b509d7119d0d29c787", "title": "Cyber-Crimes and their Impacts : A Review 1", "text": "In the current era of online processing, maximum of the information is online and prone to cyber threats. There are a huge number of cyber threats and their behavior is difficult to early understanding hence difficult to restrict in the early phases of the cyber attacks. Cyber attacks may have some motivation behind it or may be processed unknowingly. The attacks those are processed knowingly can be considered as the cyber crime and they have serious impacts over the society in the form of economical disrupt, psychological disorder, threat to National defense etc. Restriction of cyber crimes is dependent on proper analysis of their behavior and understanding of their impacts over various levels of society. Therefore, the current manuscript provides the understanding of cyber crimes and their impacts over society with the future trends of cyber crimes."}
{"_id": "82aafd8ad5bfdc62ecf552e6294322e01a6cdbf7", "title": "Evolving Personalized Content for Super Mario Bros Using Grammatical Evolution", "text": "Adapting game content to a particular player\u2019s needs and expertise constitutes an important aspect in game design. Most research in this direction has focused on adapting game difficulty to keep the player engaged in the game. Dynamic difficulty adjustment, however, focuses on one aspect of the gameplay experience by adjusting the content to increase or decrease perceived challenge. In this paper, we introduce a method for automatic level generation for the platform game Super Mario Bros using grammatical evolution. The grammatical evolution-based level generator is used to generate player-adapted content by employing an adaptation mechanism as a fitness function in grammatical evolution to optimize the player experience of three emotional states: engagement, frustration and challenge. The fitness functions used are models of player experience constructed in our previous work from crowd-sourced gameplay data collected from over 1500 game sessions."}
{"_id": "4f27aa2646e7c09b9889e6fb922da31bcbb69602", "title": "Efficient matrix completion for seismic data reconstruction", "text": "Despite recent developments in improved acquisition, seismic data often remains undersampled along source and receiver coordinates, resulting in incomplete data for key applications such as migration and multiple prediction. We interpret the missing-trace interpolation problem in the context of matrix completion and outline three practical principles for using low-rank optimization techniques to recover seismic data. Specifically, we strive for recovery scenarios wherein the original signal is low rank and the subsampling scheme increases the singular values of the matrix. We employ an optimization program that restores this low rank structure to recover the full volume. Omitting one or more of these principles can lead to poor interpolation results, as we show experimentally. In light of this theory, we compensate for the high-rank behavior of data in the source-receiver domain by employing the midpoint-offset transformation for 2D data and a source-receiver permutation for 3D data to reduce the overall singular values. Simultaneously, in order to work with computationally feasible algorithms for large scale data, we use a factorization-based approach to matrix completion, which significantly speeds up the computations compared to repeated singular value decompositions without reducing the recovery quality. In the context of our theory and experiments, we also show that windowing the data too aggressively can have adverse effects on the recovery quality. To overcome this problem, we carry out our interpolations for each frequency independently while working with the entire frequency slice. The result is a computationally efficient, theoretically motivated framework for interpolating missing-trace data. Our tests on realistic twoand three-dimensional seismic data sets show that our method compares favorably, both in terms of computational speed and recovery quality, to existing curvelet-based and tensor-based techniques. Geophysics This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c \u00a9Mitsubishi Electric Research Laboratories, Inc., 2015 201 Broadway, Cambridge, Massachusetts 02139"}
{"_id": "1ff107c3230c51ae3cc8e0f14dced3eaebea9a8e", "title": "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems", "text": "An encryption method is presented with the novel property that publicly revealing an encryption key does not thereby reveal the corresponding decryption key. This has two important consequences: (1) Couriers or other secure means are not needed to transmit keys, since a message can be enciphered using an encryption key publicly revealed by the intented recipient. Only he can decipher the message, since only he knows the corresponding decryption key. (2) A message can be \u201csigned\u201d using a privately held decryption key. Anyone can verify this signature using the corresponding publicly revealed encryption key. Signatures cannot be forged, and a signer cannot later deny the validity of his signature. This has obvious applications in \u201celectronic mail\u201d and \u201celectronic funds transfer\u201d systems. A message is encrypted by representing it as a number M, raising M to a publicly specified power e, and then taking the remainder when the result is divided by the publicly specified product, n, of two large secret primer numbers p and q. Decryption is similar; only a different, secret, power d is used, where e * d \u2261 1(mod (p - 1) * (q - 1)). The security of the system rests in part on the difficulty of factoring the published divisor, n."}
{"_id": "3b03935dfc89c0cad63e05976c21fef6c9fb4190", "title": "New directions in cryptography", "text": "Two kinds of contemporary developments in crypcommunications over an insecure channel order to use cryptogtography are examined. Widening applications of teleprocessraphy to insure privacy, however, it currently necessary for the ing have given rise to a need for new types of cryptographic communicating parties to share a key which is known to no systems, which minimize the need for secure key distribution one else. This is done by sending the key in advance over some channels and supply the equivalent of a written signature. This secure channel such a private courier or registered mail. A paper suggests ways to solve these currently open problems. private conversation between two people with no prior acquainIt also discusses how the theories of communication and computance is a common occurrence in business, however, and it is tation are beginning to provide the tools to solve cryptographic unrealistic to expect initial business contacts to be postponed problems of long standing. long enough for keys to be transmitted by some physical means. The cost and delay imposed by this key distribution problem is a major barrier to the transfer of business communications"}
{"_id": "d21f261bf5a9d7333337031a3fa206eaf0c6082c", "title": "An improved algorithm for computing logarithms over GF(p) and its cryptographic significance (Corresp.)", "text": ""}
{"_id": "e0339a61865527b64dba0c18f4f9a11af24c8a97", "title": "Constraint Propagation Algorithms for Temporal Reasoning", "text": "This paper considers computational aspects of several temporal representation languages. It investigates an interval-based representation, and a point-based one. Computing the consequences of temporal assertions is shown to be computational/y intractable in the interval-based representation, but not in the poinfbased one. However, a fragment of the interval language can be expressed using the point language and benefits from the tractability of the latter.\u2019 The representation of time has been a recurring concern of Artificial Intelligence researchers. Many representation schemes have been proposed for temporal reasoning; of these, one of the most attractive is James Allen\u2019s algebra of temporal intervals [Allen 831. This representation scheme is particularly appealing for its simplicity and for its ease of implementation with constraint propagation algorithms. Reasoners based on this algebra have been put to use in several ways. For example, the planning system of Allen and Koomen [1983] relies heavily on the temporal algebra to perform reasoning about the ordering of actions. Elegant approaches such as this one may be compromised, however, by computational characteristics of the interval algebra. This paper concerns itself with these computational aspects of Allen\u2019s algebra, and of a simpler algebra of time points. Our perspective here is primarily computation-theoretic. We approach the problem of temporal representation by asking questions of complexity and tractability. In this light, this paper examines Allen\u2019s interval algebra, and the simpler algebra of time points. The bulk of the paper establishes some formal results about the temporal algebras. In brief these results are: l Determining consistency of statements in the interval algebra is NP-hard, as is determining all consequences of these statements. Allen\u2019s polynomial-time constraint propagation algorithm is sound but not complete for these tasks. l In contrast, constraint propagation is sound and complete for computing consistency and consequences of assertions in the time point algebra. It operates in O(n3) time and O(n2) space. l A restricted form of the interval algebra can be formulated in terms of the time point algebra. Constraint propagation is sound and complete for this fragment. Throughout the paper, we consider how these formal results affect practical Artificial Intelligence programs. \u2018This research was supported in part by the Defense Advanced Research Agency, under contracts NOOOl4-85-C-0079 and N-0001 4-77-C-0378. Projects The Interval Algebra Allen\u2019s interval algebra has been described in detail in [Allen 831. In brief, the elements of the algebra are relations that may exist between intervals of time. Because the algebra allows for indefiniteness in temporal relations, it admits many possible relations between intervals (213 in fact). But all of these relations can be expressed as vectors of definite simple relations, of which there are only thirteen, 2 The thirteen simple relations, whose definitions appear in Figure 1, precisely characterize the relative starting and ending points of two temporal intervals. If the relation between two intervals is completely defined, then it can be exactly described with a simple relation. Alternatively, vectors of simple relations introduce indefiniteness in the description of how two temporal intervals relate. 
Vectors are interpreted as the disjunction of their constituent simple relations. A BEFORE B B AFTER A G--v* A MEETS B B MET-BY A A ,B /"}
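Since the thirteen simple relations are fully determined by endpoint comparisons, a small function can name the relation holding between two definite intervals. A sketch for proper intervals with start < end:

```python
def allen_relation(a, b):
    """Name the simple Allen relation between proper intervals a and b,
    each given as (start, end) with start < end."""
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:               return "before"
    if a2 == b1:              return "meets"
    if a1 < b1 < a2 < b2:     return "overlaps"
    if a1 == b1 and a2 < b2:  return "starts"
    if a1 > b1 and a2 < b2:   return "during"
    if a1 > b1 and a2 == b2:  return "finishes"
    if a1 == b1 and a2 == b2: return "equals"
    # Remaining six cases are inverses of the ones above.
    inverse = {"before": "after", "meets": "met-by",
               "overlaps": "overlapped-by", "starts": "started-by",
               "during": "contains", "finishes": "finished-by"}
    return inverse[allen_relation(b, a)]

assert allen_relation((0, 2), (3, 5)) == "before"
assert allen_relation((0, 4), (1, 3)) == "contains"
```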
{"_id": "cc4a4f6032d77f5f14a127ad2fac38f2ddc80d91", "title": "Natural Language Aggregate Query over RDF Data", "text": "Natural language question/answering over RDF data has received widespread attention. Although there have been several studies that have dealt with a small number of aggregate queries, they have many restrictions (i.e., interactive information, controlled question or query template). Thus far, there has been no natural language querying mechanism that can process general aggregate queries over RDF data. Therefore, we propose a framework called NLAQ (Natural Language Aggregate Query). First, we propose a novel algorithm to automatically understand a user\u2019s query intention, which mainly contains semantic relations and aggregations. Second, to build a better bridge between the query intention and RDF data, we propose an extended paraphrase dictionary ED to obtain more candidate mappings for semantic relations, and we introduce a predicate-type adjacent set PT to filter out inappropriate candidate mapping combinations in semantic relations and basic graph patterns. Third, we design a suitable translation plan for each aggregate category and effectively distinguish whether an aggregate item is numeric or not, which will greatly affect the aggregate result. Finally, we conduct extensive experiments over real datasets (QALD benchmark and DBpedia), and the experimental results demonstrate that our solution is effective."}
{"_id": "f6a88b3099300f17163a367b9638254359178412", "title": "A new method on designing and simulating CNTFET_based ternary gates and arithmetic circuits", "text": "This paper presents a new design of three-valued logic gates on the basis of carbon nanotube transistors. Next, the proposed circuit is compared with the existing models of circuits. The simulation results, using HSPICE indicate that the proposed model outperform the existing models in terms of power and delay. Moreover, using load-transistor results in 48.23% reductions in terms of power-delay-product (PDP) compared with the best model ever observed. Furthermore, the implementation of ternary arithmetic circuits such as half adder and multiplier using load-transistor has been investigated. An added common term between Sum and Carry output in half-adder (HA) and Carry and Product output in lower multiplier (MUL) has been used in arithmetic circuits. Considering the fact that a better proposal with higher performance has been utilized for the implementation of three-valued gates, our model results in the reduction of 9.95% and 23.87% PDP in HA and MUL respectively compared with the best models observed."}
{"_id": "44bc1416ebf24ccc024defbe61c128fb40372590", "title": "A cascade filter for pulse wave baseline drift elimination", "text": "Analyzing the pulse wave is an important method to diagnosis the status of human health. Generally the directly acquired signal contains varying degrees of baseline drift which is usually caused by breathing and physical movement, it mostly influences the accuracy of signal analysis and must be eliminated. This article presented a cascade filter framework to remove the baseline drift of pulse wave. The first stage zero phase filter (ZPF) roughly reduced the trend of the baseline drift. The second stage of the removed method considered the signal features. Through the pulse waveform reference points' accurate location and cubic spline estimation, we completely eliminated the rest drift. Compared with the other methods, e.g. wavelet-based filtering, morphological filtering, empirical mode decomposition (EMD) filtering, the experimental result showed that the proposed method had not only better effect, but also small calculation cost, and the scheme was more suitable for actual application."}
{"_id": "fb626f42be72f6520a63e61106b558b4224c48f5", "title": "Text-Based Deep Learning for Artificial Jazz Improvisation", "text": "Deep learning has exploded in popularity in the past several years. Picking up from recent projects using recurrent neural networks to generate text, we attempted to use the same principles to generate jazz-like improvisations. We provide an RNN using torch-rnn, trained on freely available MIDI files of jazz transcriptions, which can generate music that captures rhythms, and to a lesser extend harmony and melody. Listener feedback shows that our generated music is easily recognizable as \u201cjazz\u201d, though lacking macro-level structure as can be expected for a basic RNN generating entirely sequential patterns. We also provide a new ASCII-based file format capable of storing music in a compact but not totally opaque way. Our format allows for close to a 7-times size decrease from existing MIDI-to-CSV converters. We show the musical principles behind the information we chose to include and leave out in converting MIDI to our new format."}
{"_id": "717449dd98b6b5ea0436057d16d8229ac1d7bece", "title": "Design of an experimental setup for evaluation of inductive transcutaneous energy transfer systems", "text": "Work objective \u2014 is to develop an experimental setup for evaluation characteristics of inductive transcutaneous energy transfer systems, such as comparative studies of various topological solutions; studies of systems' thermal safety; influence of geometry of a coils couple on the parameters of energy transfer. The system consists of planar spiral coils, device for positioning, receiving and transmitting circuit boards, frequency generator, and National Instruments USB-6361 DAQ. Mathematical apparatus has been developed that makes it possible to obtain self-inductance and the internal equivalent resistance of the coils from the measured voltage drop in the transmitting circuit, as well as the dependence of the mutual inductance on displacements. LabVIEW virtual instrument for automation of measurements is designed. The values of the self-inductances of planar spiral coils and the practical dependences of the mutual inductance of spiral coils on displacements without and with using automation tools were experimentally found. The results corresponds theoretical calculations."}
{"_id": "bf1a461b5c766e2021e0aa1b0b4ec8528de34ea6", "title": "Antenna Selection for MIMO Nonorthogonal Multiple Access Systems", "text": "This paper considers the antenna selection (AS) problem for a multiple-input multiple-output nonorthogonal multiple access system. In particular, we develop new computationally efficient AS algorithms for two commonly used scenarios: NOMA with fixed power allocation (F-NOMA) and NOMA with cognitive radio-inspired power allocation (CR-NOMA). For the F-NOMA scenario, a new max\u2013max\u2013max AS (A$^3$ -AS) scheme is first proposed to maximize the system sum-rate. This is achieved by selecting one antenna at the base station (BS) and corresponding best receive antenna at each user that maximizes the channel gain of the resulting strong user. To improve the user fairness, a new max\u2013min\u2013max AS (AIA-AS) scheme is subsequently developed, in which we jointly select one transmit antenna at BS and corresponding best receive antennas at users to maximize the channel gain of the resulting weak user. For the CR-NOMA scenario, we propose another new AS algorithm, termed maximum-channel-gain-based antenna selection (MCG-AS), to maximize the achievable rate of the secondary user, under the condition that the primary user's quality-of-service requirement is satisfied. The asymptotic closed-form expressions of the average sum-rate for A $^3$-AS and AIA-AS and that of the average rate of the secondary user for MCG-AS are derived. Numerical results demonstrate that the AIA-AS provides better user fairness, whereas the A$^3$-AS achieves a near-optimal sum-rate in F-NOMA systems. For the CR-NOMA scenario, MCG-AS achieves a near-optimal performance in a wide signal-to-noise-ratio regime. Furthermore, all the proposed AS algorithms yield a significant computational complexity reduction, compared to exhaustive search-based counterparts."}
{"_id": "6a94f16627379a7163f39a5b909c41b5167d3889", "title": "Session-based Recommendations with Recurrent Neural Networks", "text": "Session-based recommendation Permanent cold start: where personalized recommendations fail \u2022 User identification: Many sites (e.g. classifieds, video services) don\u2019t require users to log in. Although some form of identification is possible, it is not reliable. \u2022 Intent/theme: Sessions usually have a goal or a specific theme. Different sessions of the same user center around different concepts. The entire user history may not help much in identifying the user\u2019s current needs. \u2022 Never/rarely returning users: High percentage of the users of webshops come from search engines in search for some products and rarely return. Workaround in practice \u2022 Item-to-item recommendations: Recommend similar or frequently co-occurring items. We explore item-to-session recommendations. By modeling the whole session, more accurate recommendations can be provided. We propose an RNN-based approach to model the session and provide session-based recommendations. domonkos.tikk@gravityrd.com @domonkostikk balazs.hidasi@gravityrd.com @balazshidasi Bal\u00e1zs Hidasi"}
{"_id": "692a911d3995f19b276aba0efc07e66271535942", "title": "Model-predictive target defense by team of unmanned surface vehicles operating in uncertain environments", "text": "In this paper, we present a heuristic planning approach for guarding a valuable asset by a team of autonomous unmanned surface vehicles (USVs) operating in a continuous state-action space. The team's objective is to maximize the amount of time it takes an intruder boat to reach the asset. The team must cooperatively deal with uncertainty about which boats are actual intruders, employ active blocking to slow down intruders' movement towards the asset, and intelligently distribute themselves around the target to optimize future guarding opportunities. Our planner incorporates a market-based algorithm for allocating tasks to individual USVs by forward-simulating the mission and assigning estimated utilities to candidate task-allocation plans. The planner can be automatically adapted to a specific mission by optimizing the behaviors used to fulfil individual tasks. We present detailed simulation results that demonstrate the effectiveness of our approach."}
{"_id": "c85597fb5ee96d38fd05d94067f0e1525fafff3d", "title": "The Effects of Personal Susceptibility and Social Support on Internet Addiction: an Application of Adler\u2019s Theory of Individual Psychology", "text": "The aim of the present study was to examine the addictive effects of personal susceptibility and social support from family, friends, and significant other on Internet addiction. In this study, 207 medical students completed the UCLA Loneliness Scale, the Academic Expectations Stress Inventory, the Multidimensional Scale of Perceived Social Support, and the Young\u2019s Diagnostic Questionnaire for Internet addiction. Participants were recruited via simple random sampling technique. Personal susceptibility variables such as loneliness, academic stress due to other expectations, and academic stress due to self-expectations were significantly positively correlated to, and all domains of social support were significantly negatively correlated to, Internet addiction. After adjusting for demographic variables, social support emerged as a significant predictor of Internet addiction in the hierarchical regression analysis. In support of Adler\u2019s individual psychology theory, the present findings suggest that social support from family is a valuable adjunct to prevention and intervention programs aimed at alleviating Internet addiction in medical students."}
{"_id": "610eaed09988471e49fc458a9dd828889dff23b0", "title": "Connectivity Learning in Multi-Branch Networks", "text": "The CIFAR-10 dataset consists of color images of size 32x32. The training set contains 50,000 images, the testing set 10,000 images. Each image in CIFAR-10 is categorized into one of 10 possible classes. In Table 3, we report the performance of different models trained on CIFAR-10. From these results we can observe that our models using learned connectivity achieve consistently better performance over the equivalent models trained with the fixed connectivity (Xie et al., 2017)."}
{"_id": "4beef78e9b21611a59237b63d512014e47f32d5e", "title": "Processing Analytical Queries over Encrypted Data", "text": "MONOMI is a system for securely executing analytical workloads over sensitive data on an untrusted database server. MONOMI works by encrypting the entire database and running queries over the encrypted data. MONOMI introduces split client/server query execution, which can execute arbitrarily complex queries over encrypted data, as well as several techniques that improve performance for such workloads, including per-row precomputation, space-efficient encryption, grouped homomorphic addition, and pre-filtering. Since these optimizations are good for some queries but not others, MONOMI introduces a designer for choosing an efficient physical design at the server for a given workload, and a planner to choose an efficient execution plan for a given query at runtime. A prototype of MONOMI running on top of Postgres can execute most of the queries from the TPC-H benchmark with a median overhead of only 1.24\u00d7 (ranging from 1.03\u00d7 to 2.33\u00d7) compared to an un-encrypted Postgres database where a compromised server would reveal all data."}
{"_id": "251962a24ddd6f2fd6be73f8ccc28a7eccb54bff", "title": "CryptoKnight: Generating and Modelling Compiled Cryptographic Primitives", "text": "Cryptovirological augmentations present an immediate, incomparable threat. Over the last decade, the substantial proliferation of crypto-ransomware has had widespread consequences for consumers and organisations alike. Established preventive measures perform well, however, the problem has not ceased. Reverse engineering potentially malicious software is a cumbersome task due to platform eccentricities and obfuscated transmutation mechanisms, hence requiring smarter, more efficient detection strategies. The following manuscript presents a novel approach for the classification of cryptographic primitives in compiled binary executables using deep learning. The model blueprint, a Dynamic Convolutional Neural Network (DCNN), is fittingly configured to learn from variable-length control flow diagnostics output from a dynamic trace. To rival the size and variability of equivalent datasets, and to adequately train our model without risking adverse exposure, a methodology for the procedural generation of synthetic cryptographic binaries is defined, using core primitives from OpenSSL with multivariate obfuscation, to draw a vastly scalable distribution. The library, CryptoKnight, rendered an algorithmic pool of AES, RC4, Blowfish, MD5 and RSA to synthesise combinable variants which automatically fed into its core model. Converging at 96% accuracy, CryptoKnight was successfully able to classify the sample pool with minimal loss and correctly identified the algorithm in a real-world crypto-ransomware application."}
{"_id": "c15865cd60b0a8ab4184727d51b87f43272d7c5a", "title": "Autoencoder-based holographic image restoration", "text": "We propose a holographic image restoration method using an autoencoder, which is an artificial neural network. Because holographic reconstructed images are often contaminated by direct light, conjugate light, and speckle noise, the discrimination of reconstructed images may be difficult. In this paper, we demonstrate the restoration of reconstructed images from holograms that record page data in holographic memory and quick response codes by using the proposed method."}
{"_id": "5e1991fc2cb48c4a3eb235e3c6106cf0c560ba7d", "title": "ICE: Item Concept Embedding via Textual Information", "text": "This paper proposes an item concept embedding (ICE) framework to model item concepts via textual information. Specifically, in the proposed framework there are two stages: graph construction and embedding learning. In the first stage, we propose a generalized network construction method to build a network involving heterogeneous nodes and a mixture of both homogeneous and heterogeneous relations. The second stage leverages the concept of neighborhood proximity to learn the embeddings of both items and words. With the proposed carefully designed ICE networks, the resulting embedding facilitates both homogeneous and heterogeneous retrieval, including item-to-item and word-to-item retrieval. Moreover, as a distributed embedding approach, the proposed ICE approach not only generates related retrieval results but also delivers more diverse results than traditional keyword-matching-based approaches. As our experiments on two real-world datasets show, ICE encodes useful textual information and thus outperforms traditional methods in various item classification and retrieval tasks."}
{"_id": "43d8eee46acec9d6e802db822d3b802f790b42a8", "title": "Attributing physical and biological impacts to anthropogenic climate change", "text": "Significant changes in physical and biological systems are occurring on all continents and in most oceans, with a concentration of available data in Europe and North America. Most of these changes are in the direction expected with warming temperature. Here we show that these changes in natural systems since at least 1970 are occurring in regions of observed temperature increases, and that these temperature increases at continental scales cannot be explained by natural climate variations alone. Given the conclusions from the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report that most of the observed increase in global average temperatures since the mid-twentieth century is very likely to be due to the observed increase in anthropogenic greenhouse gas concentrations, and furthermore that it is likely that there has been significant anthropogenic warming over the past 50\u2009years averaged over each continent except Antarctica, we conclude that anthropogenic climate change is having a significant impact on physical and biological systems globally and in some continents."}
{"_id": "f163d3971dd7abfe39bdd143a67910543ffdead2", "title": "Life Cycle of Smart Contracts in Blockchain Ecosystems", "text": "This paper discusses the life cycle of decentralized smart contracts, i.e. digital and executable representations of rights and obligations in a multi-party environment. The life cycle relies on blockchain technology, i.e. a distributed digital ledger, to ensure proper implementation and integrity of the smart contracts. The life cycle consists of four subsequent phases: Creation, freezing, execution, and finalization. For each phase actors and technological services are identified and explained in detail. With the life cycle at hand, risks and limitations of smart contracts and the underlying blockchain technology are briefly discussed."}
{"_id": "5f77b4e45982edfcc4c795e763ea128998b98c9c", "title": "SciPlore Xtract: Extracting Titles from Scientific PDF Documents by Analyzing Style Information (Font Size)", "text": "Extracting titles from a PDF\u2019s full text is an important task in information retrieval to identify PDFs. Existing approaches apply complicated and expensive (in terms of calculating power) machine learning algorithms such as Support Vector Machines and Conditional Random Fields. In this paper we present a simple rule based heuristic, which considers style information (font size) to identify a PDF\u2019s title. In a first experiment we show that this heuristic delivers better results (77.9% accuracy) than a support vector machine by CiteSeer (69.4% accuracy) in an \u2018academic search engine\u2019 scenario and better run times (8:19 minutes vs. 57:26 minutes)."}
{"_id": "5cb8dfdd2718e0b8058712f7d8231994013fe7fc", "title": "Modified dual band gysel power divider with isolation bandwidth improvement", "text": "This paper presents a new design of a dual band power divider with high power handling capability and isolation bandwidth improvement, which contains six branch line sections, two grounded resistors, one resistor between output ports, an extension line at the input, and a single stub line at the end. Exact closed-form equations are derived for the proposed structure using even-mode and odd-mode analysis. In this design, flexibility in choosing two grounded resistors enables us to determine the frequency ratio range, bandwidth of the circuit, and the level of power handling capability. In addition, line admittance analysis shows that the proposed structure can support the acceptable frequency ratio range. For verification, the proposed structure is simulated, fabricated and measured using microstrip lines. Simulation and measurement results show wide isolation bandwidth, while maintaining high power handling capability over the Wilkinson power divider."}
{"_id": "00b4ed20f1839fbe8a168c2daac2b3be06d4085e", "title": "Extended Object and Group Tracking: A Comparison of Random Matrices and Random Hypersurface Models", "text": "Intelligent Sensor-Actuator-Systems Laboratory (ISAS), Institute for Anthropomatics, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany \u2020Dept. Sensor Data and Information Fusion, Fraunhofer FKIE, Wachtberg, Germany \u2021Data Fusion Algorithms & Software, EADS Deutschland GmbH, Ulm, Germany marcus.baum@kit.edu, {michael.feldmann |wolfgang.koch}@fkie.fraunhofer.de dietrich.fraenken@eads.com, uwe.hanebeck@ieee.org"}
{"_id": "06865d55b7ceef648269eefa0c0dbb244fed9113", "title": "Evolutionary Algorithms for Constrained Parameter Optimization Problems", "text": "Evolutionary computation techniques have received a great deal of attention regarding their potential as optimization techniques for complex numerical functions. However, they have not produced a significant breakthrough in the area of nonlinear programming due to the fact that they have not addressed the issue of constraints in a systematic way. Only recently have several methods been proposed for handling nonlinear constraints by evolutionary algorithms for numerical optimization problems; however, these methods have several drawbacks, and the experimental results on many test cases have been disappointing. In this paper we (1) discuss difficulties connected with solving the general nonlinear programming problem; (2) survey several approaches that have emerged in the evolutionary computation community; and (3) provide a set of 11 interesting test cases that may serve as a handy reference for future methods."}
{"_id": "1db22351639f5cb43b227d456567e15560fec07a", "title": "Hardening soft information sources", "text": "\" $#\"% & &' ()% & % &% ! *+ (, . \" & 0/213 . '4 5 6 7 98: 5 & ; < !%; & = ' >\"& ! 5 & % &%; ! ?*@ (, . \" & 06A % & ! &% & *; 5 \" ! 7B& (, C;D0 & ') 7 & 3 ? E *F*G%;8 \" & H6 * 5 JI$% ; #\"% \"6K & & '\"-,% *L M ? N *; & O ! / PK>G . 8; % *; ! \" Q ; 5 ! 8 ; R S*; 5 \" H !T\" ? *U(R ! . ! ! & O 78 8: ! F V*; 5 \" F & % * ' . ! W ? ! 5 % BX !* C$*; 5 \" /$YN ! ! 9 A(, . ' . G* S W (, * \" 5 N 5 H E '7T5 ! & Z ( . [% IG 7 !*7* \" 5 \"/ \\$ 7 & ] & *; ! 7 & U !* 8 ! . 6^ ,/ \"/ 6: & A8 ! . (Q (, & . 5 E I_ '`% *; '\" !* * \" 5 7 T5 W8 ! & !%; W (, F*; 5 \" \"/$abIG ?'`(, ? \" &% ! W ([ % c 8 8; ! 5 \" ) & \" S !*; ! ; W ) Qd . ' % ! ) ( ?T\" *; A(e < T\" f !*L(, 5 W ! IG 4 5 % /g\\ (, . % \" !* Z \" 8 & . h? \" & ]8; ! ; . *Z T5 A ) & T\" 'W ) & . F 5 & . (, EO *i W A G ? H 8 & . % . / Categories and Subject Descriptors YF/ j / .gk l?mon p:q r s t u3p:m\"& ! 5 H & % &%; ! ?*U (, . \" & 06G ;% H & Q ! &% & )* \" 5 Q ! B| (, C D^ & '} 7*G%;8 5 & 3 W *N 5 JI)% #\"% ? & & '\"% * ;M~ *; ! & O ! / a} 7 \u007f >G . 8 () $ (e A* \" 5 \"6S & * Z & c ! \" W 5 ! 8; L*; 5 \" !T\" ? *@(R ! . ! \u0080 ! & O 8 8: ! 5\u0081 %; ! ] \"* *i ! D\u0082\\\u0083 h 2\u00840 &-&\u0085S ? ! ^6 PK \" 6cj5\u0086;\u0087?\u0086 Y} !'W\u0088G 6 \u0089H & %; ! ^60\u0089:a\u008a\u0087\u008c\u008b\"\u008di\u0087?\u008e;/ \u0081 % ! Z \"* *i ! D]\u008fN 8 / ( \u0081 . 8;% $\u0088G \"6}\u0090E T /f ( \\$ \" & \" 06\u0091\u0088i 5 & A\\ aF/ KDD \u201900 Boston, Massachusetts USA \u0092\u0094\u0093X\u0095?\u0096 \u0097 \u0098 \u0099)\u009aG\u009bG\u0098,\u009bG\u009c \u009b\"\u0095?\u009d \u009e \u009f \u009bG \u0098=\u00a1 \u0096G\u00a2?\u00a3 \u0093X\u00a4S\u009bG\u00a2 \u0098:\u00a5:\u009d \u00a6 \u00a7A\u009bG \u0308G\u0099 \u00a9U\u0093 aH\u00a2 \u00abR\u0098X\u00abR\u00ac \u009b\"\u00a6:\u009c!\u009d!\u00a1 \u009bG\u00ad \u00ab3\u0096G\u00a2S\u00ab3 \u0308W\u0095 \u009bG\u0098,\u00ab=\u0095R\u00ae \u009bG\u009c \u00ab=\u00a6 \u00ab \u0098, \u0304;\u0099&\u00b05\u00b1 \u009b 2\u0083\u00a6 \u00abR\u009bG\u0098X\u00abR\u0096G \u0308:\u00a3;\u0093X\u00a4[\u009b\"\u00a2!\u0098o\u00a5:\u009d \u00a6 \u00a7F\u009bG \u0308i\u0099 \u00a9U\u0093 aH\u0096G\u00a2! 
\u0308 \u009d \u00a63\u00a6 3H \u0308 \u00ab3\u00ad \u009d \u00a2 \u0095?\u00ab3\u0098, \u0304;\u0099 \u00b05\u00b1 \u009bG \u0098=\u00a1 \u0096G\u00a2?\u00a3 \u0093X\u00a4S\u00b1 \u00a5:\u009d \u00a6 \u00a7A\u009bG \u0308G\u0099 \u00a9F\u0093 a\u0091\u00a2 \u00ab3\u0098,\u00ab3\u00ac!\u009b\"\u00a6:\u009c!\u009d!\u00a1 \u009bG\u00ad \u00abR\u0096G\u00a2o\u0097?\u0096G\u00a2E\u0095?\u009bG\u0098X\u00ab \u00953\u00ae \u009bG\u009c \u00ab=\u00a6 \u00abR\u00983 \u0304;\u0099 \u00b05\u00b1 \u009bG \u0098=\u00a1 \u0096G\u00a2?\u00a3 \u0093X\u00a4S\u00b1 \u00a5:\u009d \u00a6 \u00a7A\u009bG \u0308G\u0099 \u00a9F\u0093X\u00a4 \u0301:\u00927a^\u03bcN\u00a4V\u00b6^\u00b7 \u0327\u0098 \u00a1 \u009d!\u0096G\u00a2&\u009d \u00a7410\u00a2&\u0096G\u00ad \u00ab3 \u0308Go \u0099 \u00b05\u00b1 \u0092\u0094\u00933\u00a1 \u009bG\u00a2|\u009a \u0099)\u009aG\u009bG\u0098,\u009bG\u009c \u009b\"\u0095?\u009d \u00bb \u009f \u009bG \u0098=\u00a1 \u0096G\u00a2?\u00a3 \u009c \u009bG\u00a2!\u0098 \u0095?\u009d \u00a6 \u00a7A\u009bG \u0308;\u00a9\u0091\u00ac \u00a2 \u00ab3\u0098,\u00ab3\u00ac!\u009b\"\u00a6 \u00b05\u00b1 \u009b 2\u0083\u00a6 \u00abR\u009bG\u0098X\u00abR\u0096G \u0308:\u00a3J\u009c!\u009b\"\u00a2!\u0098 \u0095?\u009d \u00a6 \u00a7A\u009bG \u0308 \u00a9H\u00ac!\u0096G\u00a2 \u0308 \u009d \u00a63\u00a6 \u00b05\u00b1 \u009bG \u0098=\u00a1 \u0096G\u00a2?\u00a3 \u009c \u009bG\u00a2!\u0098 \u0095?\u009d \u00a6 \u00a7A\u009bG \u0308;\u00a9\u0091\u009c \u00a6 \u009bG\u00ac 1\u20444G\u009c!\u0096 1\u20442 \u00b0\"\u00b1 3\u20444 u3\u00bf:\u00c0Hq yL\u00c1;\u00c2 \u00c3\u00c5\u00c4ix?p;n t \u00c6\u00c8\u00c7Hu,\u00c7H\u00c9,uep \u00bf:q s \u00caH\u00cbHu=\u00cc \u00cdHs t s \u00c7^s;x?yLs m^\u00cdft\u008c\u00cb^y \u00c0Hm^\u00cdHy q \u00c93w^u,m^\u00bf<\u00c4 \u00cb^s q\u008c\u00cdH\u00c6]\u00cdHs;t s '$ 'G . F &% \u007f \" F\u0085E ? ! :13 * > k \u0087\"\u0087 z \u0081 ! k j z /Z * \" \" ? E ? (, . \" & $ : % S & % *; S (\u0091 ; 8 8: ! /\u00ceYN T5 6: N ) (, $ !*$ W* . 7 ( | W & H ! (, K } & S . Q8 8: 6G H (0 & . S\u00cf, \"/ ;/)B|\\\u0083 . \u0081 CA *\u007fB&\\+/_PE/ \u0081 CG\u00d0Q ! (e ) & N . E8\u00d1 ! 0/K [ . IG Q *i \u00d2c !%; S )8: (, . \" & N 8: ! 5 & N &% J Z \" % &9 & E G% . : } (K 5 & N F A8 ! & % )8 8: ! N 8: ! ^/ YN ! ! 7 )(, . h W c (, )*; 5 \" c \" ) 7 ] ; <*G & * & O ! ! . '\u00d3 ! !(, S F & ) . ) & |'\"/K\\ N(, . h? U !*G] \" 7 & 98 ! . (V*; ? . < & \u00cf . 5 U I_ ' \u00d09 ! (, ! ! ? : & & N (, Q * & O ! &d\u007f & : \" N 6 *; ! . ; ; 48 ! \u00ce () (, ) *; & O ! c ! (, \u00ce & W . 9 ! ? -| * M ? /K [ E ! ? &%; (H & U !*$\u00cf, T5 & R\u00d0S*; 5 \" \"/ aE \u007f >G . 8 W 7 & < L\u00d4^ %; ! \u00875/ \u00d5N 5 & W & \" . ! Z % & 7 : \u00ce*i ! (R ! . & A !*]*; 5 \" Z\u00bb\u00d6 & ] & (e E* \" 5 c\u009eQ/S13 W8 ! & % 6\u00d7\u00bb\u00d8 . 8 A & \" } \u00d9 % & ) ( & E8 8: \u00d3B&\u00da \u00840a \u0081 \u00db \u00daS\u00dcS\u00ddL & ? ! . 8; ! T\" \"C7 \u00d2\u00ce \" * & \u0081 ,6: ! ! ) \" }\u009e\u0083* G 5 / aE & % W N% W ; \" ! \"8; R *; 5 \" c % ; < >\". 8; \"60 F & c . 8 \" & h c & \" N (, N* \" c U . ' *G \u00de: ! $ ?>G / \u0327\u00d4 Z ? \"6F (, Z*; 5 . W : ! ! \" ?*\u0083 '4 % . \" & ? ' >G & ! \" & (, \" ? \u00d3(R ! . 5 & O *\u00c8 \"* k \u008d z 6^ ) S ! % 8\u00838\u00d1 \" & \" k \u008b z /7a\u00df (, )* \" 5 . A (3 ! .\u0094. ! < & ? Z () ?T\" ! ) ! ! \" \" ! % 6 % . % & '\"-& ! ! \" *\u00c8BX !* C9* \" 5 / \u00d4 . '\"6_ Q \" &% . (, o(, 5 ^ ! Q T\" \u00d3 N & \u00ce . . 5 &%; ! c ( 2 IG ' | 9 * & O ! c ! 7 W \u00d1 7 -, ! (e ! / \u00cf,\u00d4 S >G . 
8; \"6 & } & 5 cB&\\+/ PS/ \u0081 ! C7 *G * ? N ( ! \" & 7 * 6 6 ! F ! (, ! /\u00ce\u0090E IG ? T\" ! & N !*<* \" 5 \"6 c S g*G & 7 ! (e ! ? N ! (, K ) & N . ! -| *c & &'\"/K\\$ N* O } 7\u0095 \u0096 \u0097 \u0098 \u009aG\u009bG\u0098,\u009bG\u009c!\u009b\"\u0095 \u009dF 9 : c ) (} &%;8 7 (S & 7(, . U\u00cf \u00d0 6 ! @ ! 5 S T5 } . NO >G * S (^ ! 5 & ) * 6 6 W ! N ! (, ! ! ? / 13 < & \u00d3 \" ! 8 3 * . L & c . c & . A* \" *G \u00de: ! ! 9 ! & & c [ <% * L*G \u00de: ! c >G 6 \"/ ;/ 6H & & 4B|{W J \" ! \" !*; C . )*; \" F c*i \u00de\u00d1 ! N8: ! c ? E ) !' & \u00d38 8\u00d1 7 ! 8 ; ? . G* /c H c !% . T\" \u00ce & ; )8; ! ; . c < & % A ! (, ! ! ? U '$ ;? \" \" & 7 N & F [ & Z . *G \" & $ (: & S ?>G \u00d7(3 ! . J & \" c & < \" c >G & ! 5 * N(, c \"6} & % & . \u0083B|{W J \" ! \" !*; C ?>G & ! \" *$(3 ! . 98\u00d1 \" 8 A8 8: % *9 : N ! !8; ! ! *W 5 E & E ! !(, ! B&{W \" \" !*; C 6 *c %; *c : S*i & S(3 ! . & [ ! !(, ! WB|{W J \" ! ! \" !* C#\" ?>G & ! \" *W(R ! . A S S !'%$ / H F ! &%; \u00d9 7 & ) >G . 8 F (K\u00d4^ % ! c\u0087\"6 &% 8 8: \" F U \" 8 ! ? *W | )8: \" 8 E8 8\u00d1 ! \u0087N * \u008di/K N(, 7(, 5 T\" ) : \u00d9 >G & ! \" ? *W(3 ! . & ) & & )8 5 \" F ( \u0087\"D \u009bG \u0098 \u00a1 \u0096G\u00a2 \u00a3 \u0093X\u00a4S\u009bG\u00a2!\u0098o\u00a5:\u009d \u00a6 \u00a7F\u009bG \u0308G\u0099& ' !\u00a9\u00ce\u0093 aH\u00a2!\u00ab3\u0098,\u00ab3\u00ac \u009b\"\u00a60\u009c!\u009d!\u00a1 \u009bG\u00ad \u00abR\u0096G\u00a2 \u00ab3 \u0308W\u0095?\u009bG\u0098X\u00ab \u00953\u00ae \u009bG\u009c \u00ab=\u00a6 \u00abR\u00983 \u0304;\u0099& ( 3\u00b05\u00b1 \u009b 2<\u00a6R\u00a3;\u0093X\u00a4[\u009bG\u00a2 \u0098^\u00a5:\u009d \u00a6 \u00a7F\u009bG \u0308G\u0099& ' !\u00a9\u00ce\u0093 aH\u0096G\u00a2! \u0308 \u009d \u00a63\u00a6S3H \u0308 \u00ab3\u00ad \u009d \u00a2 \u0095 \u00ab3\u0098, \u0304 \u0099) ( ,\u00b0\"\u00b1 *\u007f &%;8 8: 5 & c(, (, 5 A T\" 9 \u00d1 L ?>G & ! \" * (3 ! . & N ; 5 ! \"8 ' ? & $ ( \u008diD \u009bG \u0098 \u00a1 \u0096G\u00a2 \u00a3 \u0093X\u00a4S\u00b1 \u00a5:\u009d \u00a6 \u00a7F\u009bG \u0308i\u0099& * \u00a9\u00ce\u0093 aH\u00a2 \u00abR\u0098X\u00abR\u00ac \u009b\"\u00a60\u009c!\u009d \u00a1 \u009bG\u00ad \u00ab3\u0096G\u00a2 \u0097 \u0096G\u00a2N\u0095 \u009bG\u0098,\u00ab=\u0095R\u00ae \u009bG\u009c \u00ab=\u00a6 \u00ab3\u0098, \u0304;\u0099) * \u00b05\u00b1 \u009bG \u0098 \u00a1 \u0096G\u00a2 \u00a3 \u0093X\u00a4S\u00b1 \u00a5:\u009d \u00a6 \u00a7F\u009bG \u0308i\u0099& * \u00a9\u00ce\u0093X\u00a4 \u0301:\u00927a^\u03bc}\u00a4)\u00b60\u00b7A\u009f0\u0092[1G1:\u00a6 \u0304\"\u00abR \u0308GoA\u0098 \u00a1 \u009d!\u0096G\u00a2&\u009d \u00a7 10\u00a2&\u0096G\u00ad \u00ab3 \u0308Go7\u0098,\u0096N10\u00a2&\u0096G\u009c \u00a6 \u009d \u00a7\u008a\u0095?\u0096\"\u00a6 \u00ad \u00ab3 \u0308Go \u0099& *|\u00b0\"\u00b1 H 5 \" & } & U(e \" (, . 7 (, S*; 5 \" \u00ce\u009eQ/ Y} !* )* . H & Q ? -, ! (, ! ! H ! 5 & & ; 8 Q : | ! !(, ! S $ & F (, S* \" 5 \"/S F -, ! (e ! N ! \" & 8\u00d1 8 . \" E \" &% ! ' ! 8 ! * 'c $ #\"% T c ! & \u007f ] & c (e N ! (, ! /ZYN ?T\" 6^ ! A ) F J ' . ! ) T\" N F IW & \u00d9 c 8 ! \" & (3% & d W(R%; & . 8 8; 5 J ] (e U ! (e ! 7 9 !*$ 8; ! !5 & 0/ 1e \u007f !*; \u00d3 ! ! . 'L & % ? \u00d3 ! 8; ! 5 & (R%; & c 7(, . % \" W < 8; ! ? \" & L(R%; & \u007f 5 7 (c\u00ab3 \u0308 \u0098X\u009d \u00a2310\u00a2&\u009d \u0098,\u009bG\u0098,\u00ab3\u0096G \u0308 \u009bG\u00a2&\u00ac \u0095c 5 J (} ; 4 c \u007f ! W (} & 9(, . + , * ! * * ! A*G & ? U ! (e ! 6: * -@ } -X 5 \" & T\" N ! ? G% . \u00d1 N\u00cf, ? \" !\u00d0^*G % ?*9 . ! 
* K : )/)aE \u007f\u00abR \u0308 \u0098,\u009d \u00a2310\u00a2&\u009d \u0098,\u009bG\u0098,\u00ab3\u0096G \u0308/.\u00ce ) 5 'G c } (K 8 ! \" & 4 ! 6H &% J & 5 U(, 7 Q ! !(, ! % A 8;8\u00d1 ` .W & ! 7 A 5 . 5 A c ! 9 (S & \u00d3(e . + , 0*213.;/c [ * O; & S E 8 ! \" & < ! ? E F : A *o6 ,/ \"/ 6; S 8: 5 & ; $ & 5 + , * 13.` * * + 4 , 05617.;/$13 < & 7 \" N S *G ! & '$ 8; ! ? * \" 8 5 /}\u00d4 V ' T\" $ 8 ! & 9.) *c 'A ! (e ! \"6G S* O 8.;\u00cf \u00d0H N : } & S%; & . 5 8 ! \" & < (: \"D .;\u00cf \u00d0<;>= .;\u00cf ? \u00d0 (@.U + , ? \" & ! \u00d5N 5 A & \" } 8 ! 5 & .7* O; ) 7 !* * \" \" .;\u00cf|\u009eH\u00d0 * T\" * '9 ! !8; 5 $ 5 J ! (e ! U ]\u009e\u0083 'c E% & . \" 7 8 ! \" & % *; 8.;/ \u00dcE(} % ! 7 5 7 8; ! ? \" & W ! 78: \" & Wd 7 %; * \" W 8 ! +B|\u00daN/}\u0088G . CL \" LB|YA/ \u00db % h C / \u0327 6-b & ! >+ , * 9 & 5 `\u00cf, c%; IG !\u00d0 ( 8; ! ? & ) 5 0*G/A\u00d4 . ' F \" &% . c & 7 ? \" ) ! 8 ! T\" * * & N(, . (\u0091 A1 \u0096G\u0098X\u009d \u0308 \u0098,\u00ab3\u009b\"\u00a6 \u00abR \u0308 \u0098,\u009d \u00a2310\u00a2&\u009d \u0098,\u009bG\u0098,\u00ab3\u0096G \u0308A\u0095?\u009d \u0098 6\" ,/ \"/=6 .0A'B C) ()\u00cf, ! * \u00d0V8\u00d1 \" & E ! 8; ! 5 & ! /9 E T\" F (, * \" 5 7\u009e$ *W F D.0A'B CK (08: \" ! & H 8 ! \" & % H 5 \" [ : K O; *A ) 8 ! \" & /. E .FA'B C . ; . h & W 5 A(3% & HG\"\u00cf .G\u00d0)* O; *\u007f \" A(, E c ! 6I \u00ce *HIJ* ! W8 ! . ! c () & \" A(R%; & %K .;\u00cf|\u009eH\u00d0 K^ 9 & 9 G% . : ([*i & F &%;8 ) < & F !* * \" 5 2.;\u00cf|\u009eH\u00d00 \u0091 *LK .MK; ) & G% . : ! ) (H ! 8; ! 5 & ] ! .;/ G5\u00cf .G\u00d0ON -U\u00cf .G\u00d0 PQI K .;\u00cf|\u009eH\u00d0 K PQI * K .MK \u00cf \u0087\u008c\u00d0 -)\u00cf .G\u00d0ON R SUTWV X S&Y Z\\[ \u00d5N 5 ) & 5 S \" . ! ) ! S ! ) \"* *; * .A E T\" ) & \" D-U\u00cf .G\u00d0]P IJ*WK .MKi ! \" ? 7 ; /I K .;\u00cf|\u009eH\u00d0 K * ! 5 /7 7 \" N(R%; & ? \u00d3 ! !8; ! ! 7 W & ! 5*; \u00de4 : ? & ! & 7%; IG W (S & 8 ! \" & \u00df *+ & ? . 8 \" & $ (\u00ce & ! &% & 4 !* * \" \" 5/) ; } M & T5 A(3% ? & $ ) 8 8; ! >\" . 5 ' * T\" * (3 ! . A8; ! ; & R . G* ^ $ & N(, W & ^/ 3. A PROBABILISTIC MODEL 1e \u00d9 & ; } & $ ) T5 A 8 8 ! >\" . \" A8 ! & c* T & $ (K & ) ? \" (3% & 4\u00cf \u0087\u008c\u00d0 /K\u0084^ D^ \"\u00cf|\u00bb2 F.J \u009eH\u00d0S 7*G & ;% & !* * \" 5 \u00ce\u00bb 6; 8 ! \" & .;6 * (, N* \" 5 \u009eQ/KaE Z 8 & . ^ !* ; c (\u0091\u009e )8 )\u00bb2 F.) & \" . 5>\" . h ^ \"\u00cf|\u00bb2 F.MK \u009eH\u00d0 /\u0091\u00d5N \" & ? N & \" H(, UO >G *Z\u009e & } 8 & . : !* ; : N(, % *W '9 & . 8; ' . 5>\" . h & M~ E8 ! &'$ ( \u00bb2 F.A *$\u009eQ/ H N* O _^ _\u00cf|\u00bb2 F.J J\u009eH\u00d0^ 5 &% . E & \" H & ! Q K EO E ? `4 ( 8: \" & : ! -| *9 M H *7 NO E ? H ( ! 5 & 6 5 J < ( O >G * |'\"/c ? 9 5 &% . 8 & A . 8; ' & \" F & ! 7 WO ; F (S8: 5 & ; &% 8 /a J \u007f %; *\u007f 8 8: 7 \u00bb / \\$ W /b : & W G% . : \u00d9 (N8\u00d1 \" & !*L &%;8 /\u00c81e & 8 ! & . G* : S \" &% . } & 5 \u00d7 : 5 & Z & !*c*; 5 U \" \u00bb *+ & $ (, c* \" \" \u009eb \u00c8*i% 8 \" < &% 8; / \\$ W K \u00bbcKH *>K \u009edK\u0091 \u00d1 & W G% . : Z (F &%;8 9 \u00c8\u00bb * \u009e ! &8: ? & T\" '\u00ce [ ! *G & G % ! K (: & . &% 8; S ! & ! ? \" *Z \" S 8 ! \" ) ! . / \\ } 7 K .MK : ) & S G% . : (: ! ? ^ c 7 ! 8; ! 5 & 9.Ed 5 & \" K*i% 8; ? \" & \u00d9 (: ! 9.N \u00d7 % *Z % \u00d7 '7 & ! #\"% ! . & \" T\" !') ! (, ! ! ? E \" 5 . \" K E % 5 A ! \"/K\\ \u0083 5 &% . N & \" ^ _\u00cf `d !.J \u009eH\u00d0 ? W \u00d1 A* . 8: \" ?* \" (, S N ! V6 [ 6; * 7 ! N ! G% . : N8 ! . ! S $ & N ! 
!T S\u00cf J \u0087\u008c\u00d0 * & N G% . \u00d1 } ?* 7 & 5 D^ _\u00cf .G\u00d0 &% . S W\u0087\"/ ^ \"\u00cf|\u00bb2 F.J !\u009eH\u00d0 ; ^ \"\u00cf|\u009edK .J J\u00bb \u00d0 (^ \"\u00cf .G\u00d0 ^ \"\u00cf|\u00bb \u00d0 \u00cf|\u008d\"\u00d0 ^ \"\u00cf|\u00bb \u00d0O; \u00cf \u0087 \u00d0 \u0087 b \u00cf,\u008eG\u00d0 ^ \"\u00cf .G\u00d0O; \u0087 [ \u00cf \u0087 [ \u00d0 [ S T V X S Y Z\\[ + \u00cf,jG\u00d0 ^ \"\u00cf|\u009edK .J J\u00bb \u00d0O; \u00cf \u0087 ! \u00d0 \u0087 K . \u00cf|\u00bb \u00d0 K\" \u00cf|\u008b\"\u00d0 13 <\u00cf|\u008b\"\u00d0S N T5 A & \" 8. \u00cf|\u00bb \u00d0Q ) & A S ( (, S &%;8 a+ &% J & \" D.;\u00cf aS\u00d0<1$\u00bb *QK . \u00cf|\u00bb \u00d0 K\" N & ) !*G &'$ (K & N ? / \u00d1 T5 #\"% \" & ? ! &8: *c S N ? ! 78 ! G ^(e K \" ! ;! ! \" & ) & & 8 )\u00bb2 .J !\u009eQ/K\u00d4^ ! K\u00bb\u008a H \" ! 5 *A 'N ! 8: ? \" *G ' ! & 7 ( & %b\u00808: \" & 9 !* &% 8 ) \" N ! * . /\u00ceaS ? \" 4 ! 5 & & W ? & L8 ! G c 8 7 & 48 ! &' b *< & G% c & 8 ! 3 |'+\u0087 # N/$a \u0327 & . c8 ! ? E % * 7 \" ! ! 5 .A 'c ? & Z ! S(3 ! . .0A(B C ! & 98 ! &'L (} & Z ! 6 + , 0*c A8; ! 8: ! & F + *Z ! 56; 5 J \u00d9 & ! \" & ^6: & N ! & 8 ! G ! . 5 ) & 8; ! ; |'$ [ /[1e $ & ) \" F (K 5 ! ! \" & 6.;6 T5 6^ (, F & A ! N T5 A : ! < ?* F J I$ & 5 . S ! o(e . *:6G ,/ 5/=6 & \" Q Q S 5 'G F *9 & \" S 5 J c ! (e ! \" S 5 . \" S ) % \" c * \" \"/[1e(:.U 5 E ^(, . *Z & ) ! N T5 /S13(K & F E"}
{"_id": "247feaf26d9605f9adf0ee1364790f006adff445", "title": "Finding Alternative Translations in a Large Corpus of Movie Subtitle", "text": "OpenSubtitles.org provides a large collection of user contributed subtitles in various languages for movies and TV programs. Subtitle translations are valuable resources for cross-lingual studies and machine translation research. A less explored feature of the collection is the inclusion of alternative translations, which can be very useful for training paraphrase systems or collecting multi-reference test suites for machine translation. However, differences in translation may also be due to misspellings, incomplete or corrupt data files, or wrongly aligned subtitles. This paper reports our efforts in recognising and classifying alternative subtitle translations with language independent techniques. We use time-based alignment with lexical re-synchronisation techniques and BLEU score filters and sort alternative translations into categories using edit distance metrics and heuristic rules. Our approach produces large numbers of sentence-aligned translation alternatives for over 50 languages provided via the OPUS corpus collection."}
{"_id": "28196eb5d53a1129133b2977b7613c277596ccc6", "title": "Language as a Latent Variable: Discrete Generative Models for Sentence Compression", "text": "In this work we explore deep generative models of text in which the latent representation of a document is itself drawn from a discrete language model distribution. We formulate a variational auto-encoder for inference in this model and apply it to the task of compressing sentences. In this application the generative model first draws a latent summary sentence from a background language model, and then subsequently draws the observed sentence conditioned on this latent summary. In our empirical evaluation we show that generative formulations of both abstractive and extractive compression yield state-of-the-art results when trained on a large amount of supervised data. Further, we explore semi-supervised compression scenarios where we show that it is possible to achieve performance competitive with previously proposed supervised models while training on a fraction of the supervised data."}
{"_id": "7a67159fc7bc76d0b37930b55005a69b51241635", "title": "Abstractive Sentence Summarization with Attentive Recurrent Neural Networks", "text": "Abstractive Sentence Summarization generates a shorter version of a given sentence while attempting to preserve its meaning. We introduce a conditional recurrent neural network (RNN) which generates a summary of an input sentence. The conditioning is provided by a novel convolutional attention-based encoder which ensures that the decoder focuses on the appropriate input words at each step of generation. Our model relies only on learned features and is easy to train in an end-to-end fashion on large data sets. Our experiments show that the model significantly outperforms the recently proposed state-of-the-art method on the Gigaword corpus while performing competitively on the DUC-2004 shared task.ive Sentence Summarization generates a shorter version of a given sentence while attempting to preserve its meaning. We introduce a conditional recurrent neural network (RNN) which generates a summary of an input sentence. The conditioning is provided by a novel convolutional attention-based encoder which ensures that the decoder focuses on the appropriate input words at each step of generation. Our model relies only on learned features and is easy to train in an end-to-end fashion on large data sets. Our experiments show that the model significantly outperforms the recently proposed state-of-the-art method on the Gigaword corpus while performing competitively on the DUC-2004 shared task."}
{"_id": "d1b04260c552307a9bfd993b2e5c6e4b964539a7", "title": "Selective Encoding for Abstractive Sentence Summarization", "text": "We propose a selective encoding model to extend the sequence-to-sequence framework for abstractive sentence summarization. It consists of a sentence encoder, a selective gate network, and an attention equipped decoder. The sentence encoder and decoder are built with recurrent neural networks. The selective gate network constructs a second level sentence representation by controlling the information flow from encoder to decoder. The second level representation is tailored for sentence summarization task, which leads to better performance. We evaluate our model on the English Gigaword, DUC 2004 and MSR abstractive sentence summarization datasets. The experimental results show that the proposed selective encoding model outperforms the state-ofthe-art baseline models."}
{"_id": "30786fe2525573e7f7c5669dc4992950005d59e8", "title": "Simulating crowd movement in agent-based model of large-scale flood", "text": "Crowd movement during natural disasters and major accidents can affect the success of evacuation procedure. Thus to develop an effective evacuation plan, a wide-range of scenarios causing different routes and patterns of crowd movement should be considered. Recent researches in agent-based simulation have achieved techniques to simulate crowd movement in emergency scenarios at city scale. However simulating crowd movement at a macroscopic level for disasters which might affect several cities, like large-scale flood, is still a challenge. This paper addresses this issue and makes three contributions. First, the development of an agent-based layered model to simulate large-scale flood using GIS is demonstrated. Second, the simulation of crowd agents' movement on available roads is presented. Third, the preliminary experiments running on private Cloud server is reported. The experiments cover case studies illustrated the movement of crowd agents around Thailand while several parts of the country were inundated. The 2D animation depict the movement of crowd; and the simulation results show the status of the agents and the amount of individuals which required shelters. To simulate one day events, the simulator took 4-9 hours execution time depending on the severity of floods and available facilities."}
{"_id": "16a76e9193ac8ecf4d59024fd82d8b43784253df", "title": "Secure routing for internet of things: A survey", "text": "The Internet of Things (IoT) could be described as the pervasive and global network which aids and provides a system for the monitoring and control of the physical world through the collection, processing and analysis of generated data by IoT sensor devices. It is projected that by 2020 the number of connected devices is estimated to grow exponentially to 50 billion. The main drivers for this growth are our everyday devices such as cars, refrigerators, fans, lights, mobile phones and other operational technologies including the manufacturing infrastructures which are now becoming connected systems across the world. It is apparent that security will pose a fundamental enabling factor for the successful deployment and use of most IoT applications and in particular secure routing among IoT sensor nodes thus, mechanisms need to be designed to provide secure routing communications for devices enabled by the IoT technology. This survey analyzes existing routing protocols and mechanisms to secure routing communications in IoT, as well as the open research issues. We further analyze how existing approaches ensure secure routing in IoT, their weaknesses, threats to secure routing in IoT and the open challenges and strategies for future research work for a better secure IoT routing."}
{"_id": "4987aaf293f1715aeda9387f832e3630a79fe74b", "title": "Practical Identity-Based Encryption Without Random Oracles", "text": "We present an Identity Based Encryption (IBE) system that is fully secure in the standard model and has several advantages over previous such systems \u2013 namely, computational efficiency, shorter public parameters, and a \u201ctight\u201d security reduction, albeit to a stronger assumption that depends on the number of private key generation queries made by the adversary. Our assumption is a variant of Boneh et al.\u2019s decisional Bilinear Diffie-Hellman Exponent assumption, which has been used to construct efficient hierarchical IBE and broadcast encryption systems. The construction is remarkably simple. It also provides recipient anonymity automatically, providing a second (and more efficient) solution to the problem of achieving anonymous IBE without random oracles. Finally, our proof of CCA2 security, which has more in common with the security proof for the Cramer-Shoup encryption scheme than with security proofs for other IBE systems, may be of independent interest."}
{"_id": "504b7d4ef693ffca26e52976382ed8d31dfc10d6", "title": "Empirical Evaluation of User Experience in two Adaptive Mobile Application Prototypes", "text": "Today\u2019s applications such as ubiquitous systems are more and more aware of user\u2019s habits and the context of use. The features of products and the context of use will affect the human\u2019s experiences and preferences about the use of device. Thus, user experience in user-product interaction has been regarded as an important research topic in the mobile application design area. The purpose of this paper is to clarify how user experience can be evaluated in adaptive mobile applications. The user experience evaluations were performed through interviews and observation while test users were using PDA-based adaptive mobile application prototypes. As a result, this paper presents the analysis of the test methods for further user experience evaluations. CR Categories: J.m [Computer Applications]: Miscellaneous; Experimentation; Human Factors"}
{"_id": "ea0f9280637fdbb528aa09754e8cfad99672510a", "title": "Changes in adult attachment styles in American college students over time: a meta-analysis.", "text": "The current article examines changes over time in a commonly used measure of adult attachment style. A cross-temporal meta-analysis was conducted on 94 samples of American college students (total N = 25,243, between 1988 and 2011) who chose the most representative description of four possible attachment styles (Secure, Dismissing, Preoccupied, and Fearful) on the Relationship Questionnaire. The percentage of students with Secure attachment styles has decreased in recent years (1988: 48.98%; 2011: 41.62%), whereas the percentage of students with Insecure attachment styles (sum of Dismissing, Preoccupied, Fearful) has increased in recent years (1988: 51.02%; 2011: 58.38%). The percentage of students with Dismissing attachment styles has increased over time (1988: 11.93%; 2011: 18.62%), even after controlling for age, gender, race, and publication status. Positive views of others have declined across the same time period. We discuss possible implications and explanations for these changes."}
{"_id": "5b431cbb98633e09714079bb757f4fdcab26cece", "title": "Zero moment point-measurements from a human walker wearing robot feet as shoes", "text": "The anthropomorphic biped robot Bip is equipped with sensors for measuring the ground/feet forces in order to localize the center of pressure (CoP) and zero moment point (ZMP). This paper focuses on experimental results regarding the evolution of the ground contact forces, obtained from a human walker wearing the robot feet as shoes. First, one determines the influence of heavy and rigid metallic shoes on the gait of the subject, during experiments carried out on flat ground. Second, the evolution of the contact forces is studied while walking on parallel planes with different elevations (stairs), then one addresses the case where the feet are supported by two nonparallel planes (uneven terrain). The corresponding analysis is founded on the concepts of virtual supporting surface and pseudo-CoP-ZMP introduced in a companion paper, discussing the theoretical aspects of CoP and ZMP (Sardain and Bessonnet, 2004). Beyond the academic contribution, the data analyzed in this paper could constitute an incitement to design truly anthropomorphic feet, for Bip as well as for other biped robots, because all the current robot feet are functionally rather poor."}
{"_id": "b2ec49cfc4486cfb189e463abb33c1e14bfa1b58", "title": "Evolutionary algorithms in theory and practice - evolution strategies, evolutionary programming, genetic algorithms", "text": "Find the secret to improve the quality of life by reading this evolutionary algorithms in theory and practice evolution strategies evolutionary programming genetic algorithms. This is a kind of book that you need now. Besides, it can be your favorite book to read after having this book. Do you ask why? Well, this is a book that has different characteristic with others. You may not need to know who the author is, how well-known the work is. As wise word, never judge the words from who speaks, but make the words as your good value to your life."}
{"_id": "a3a49588ca22b257e44267addc6eba77a7d6948f", "title": "Quality Metrics in optical modulation analysis: EVM and its relation to Q-factor, OSNR, and BER", "text": "The quality of optical signals is a very important parameter in optical communications. Several metrics are in common use, like optical signal-to-noise power ratio (OSNR), Q-factor, error vector magnitude (EVM) and bit error ratio (BER). A measured raw BER is not necessarily useful to predict the final BER after soft-decision forward error correction (FEC), if the statistics of the noise leading to errors is unknown. In this respect the EVM is superior, as it allows an estimation of the error statistics. We compare various metrics analytically, by simulation, and through experiments. We employ six quadrature amplitude modulation (QAM) formats at symbol rates of 20 GBd and 25 GBd. The signals were generated by a software-defined transmitter. We conclude that for optical channels with additive Gaussian noise the EVM metric is a reliable quality measure. For nondata-aided QAM reception, BER in the range 10-6...10-2 can be reliably estimated from measured EVM."}
{"_id": "7b183c11f1ba13f05db2050ab1abb1fa9f52c9a8", "title": "Deep Marching Cubes: Learning Explicit Surface Representations", "text": "Existing learning based solutions to 3D surface prediction cannot be trained end-to-end as they operate on intermediate representations (e.g., TSDF) from which 3D surface meshes must be extracted in a post-processing step (e.g., via the marching cubes algorithm). In this paper, we investigate the problem of end-to-end 3D surface prediction. We first demonstrate that the marching cubes algorithm is not differentiable and propose an alternative differentiable formulation which we insert as a final layer into a 3D convolutional neural network. We further propose a set of loss functions which allow for training our model with sparse point supervision. Our experiments demonstrate that the model allows for predicting sub-voxel accurate 3D shapes of arbitrary topology. Additionally, it learns to complete shapes and to separate an object's inside from its outside even in the presence of sparse and incomplete ground truth. We investigate the benefits of our approach on the task of inferring shapes from 3D point clouds. Our model is flexible and can be combined with a variety of shape encoder and shape inference techniques."}
{"_id": "3a470684d4a2c512ae04fc36b4c06c10ac0f8046", "title": "Causal Generative Domain Adaptation Networks", "text": "An essential problem in domain adaptation is to understand and make use of distribution changes across domains. For this purpose, we first propose a flexible Generative Domain Adaptation Network (G-DAN) with specific latent variables to capture changes in the generating process of features across domains. By explicitly modeling the changes, one can even generate data in new domains using the generating process with new values for the latent variables in G-DAN. In practice, the process to generate all features together may involve high-dimensional latent variables, requiring dealing with distributions in high dimensions and making it difficult to learn domain changes from few source domains. Interestingly, by further making use of the causal representation of joint distributions, we then decompose the joint distribution into separate modules, each of which involves different low-dimensional latent variables and can be learned separately, leading to a Causal G-DAN (CG-DAN). This improves both statistical and computational efficiency of the learning procedure. Finally, by matching the feature distribution in the target domain, we can recover the target-domain joint distribution and derive the learning machine for the target domain. We demonstrate the efficacy of both G-DAN and CG-DAN in domain generation and cross-domain prediction on both synthetic and real data experiments."}
{"_id": "3b6f2f5a97ff34a94c66e19fbed9858e714db113", "title": "Migrating code with statistical machine translation", "text": "In the era of mobile computing, developers often need to migrate code written for one platform in a programming language to another language for a different platform, e.g., from Java for Android to C# for Windows Phone. The migration process is often performed manually or semi-automatically, in which developers are required to manually define translation rules and API mappings. This paper presents semSMT, an automatic tool to migrate code written in Java to C#. semSMT utilizes statistical machine translation to automatically infer translation rules from existing migrated code, thus, requires no manual defining of rules. The video demonstration on semSMT can be found on YouTube at http://www.youtube.com/watch?v=aRSnl5-7vNo."}
{"_id": "34f2ab81056d9129c447706bd52ac794107f63c6", "title": "A Faster and More Realistic Flush+Reload Attack on AES", "text": "Cloud\u2019s unrivaled cost effectiveness and on the fly operation versatility is attractive to enterprise and personal users. However, the cloud inherits a dangerous behavior from virtualization systems that poses a serious security risk: resource sharing. This work exploits a shared resource optimization technique called memory deduplication to mount a powerful known-ciphertext only cache side-channel attack on a popular OpenSSL implementation of AES. In contrast to the other cross-VM cache attacks, our attack does not require synchronization with the target server and is fully asynchronous, working in a more realistic scenario with much weaker assumption. Also, our attack succeeds in just 15 seconds working across cores in the cross-VM setting. Our results show that there is strong information leakage through cache in virtualized systems and the memory deduplication should be approached with caution."}
{"_id": "0d52475d3ee562a466c037c4f75ebc1018bfcf8a", "title": "Time-Varying Graphs and Social Network Analysis: Temporal Indicators and Metrics", "text": "Most instruments formalisms, concepts, and metrics for social networks analysis fail to capture their dynamics. Typical systems exhibit different scales of dynamics, ranging from the fine-grain dynamics of interactions (which recently led researchers to consider temporal versions of distance, connectivity, and related indicators), to the evolution of network properties over longer periods of time. This paper proposes a general approach to study that evolution for both atemporal and temporal indicators, based respectively on sequences of static graphs and sequences of time-varying graphs that cover successive time-windows. All the concepts and indicators, some of which are new, are expressed using a time-varying graph formalism recently proposed in [10]."}
{"_id": "99f07e194197a55fd017657d4cd1a8d9c349de05", "title": "Speech Synthesis for Mixed-Language Navigation Instructions", "text": "Text-to-Speech (TTS) systems that can read navigation instructions are one of the most widely used speech interfaces today. Text in the navigation domain may contain named entities such as location names that are not in the language that the TTS database is recorded in. Moreover, named entities can be compound words where individual lexical items belong to different languages. These named entities may be transliterated into the script that the TTS system is trained on. This may result in incorrect pronunciation rules being used for such words. We describe experiments to extend our previous work in generating code-mixed speech to synthesize navigation instructions, with a mixed-lingual TTS system. We conduct subjective listening tests with two sets of users, one being students who are native speakers of an Indian language and very proficient in English, and the other being drivers with low English literacy, but familiarity with location names. We find that in both sets of users, there is a significant preference for our proposed system over a baseline system that synthesizes instructions in English."}
{"_id": "a87f545ed1ebf88e003c4b8d39d0324ded1a0be6", "title": "Deep-learning: A potential method for tuberculosis detection using chest radiography", "text": "Tuberculosis (TB) is a major health threat in the developing countries. Many patients die every year due to lack of treatment and error in diagnosis. Developing a computer-aided diagnosis (CAD) system for TB detection can help in early diagnosis and containing the disease. Most of the current CAD systems use handcrafted features, however, lately there is a shift towards deep-learning-based automatic feature extractors. In this paper, we present a potential method for tuberculosis detection using deep-learning which classifies CXR images into two categories, that is, normal and abnormal. We have used CNN architecture with 7 convolutional layers and 3 fully connected layers. The performance of three different optimizers has been compared. Out of these, Adam optimizer with an overall accuracy of 94.73% and validation accuracy of 82.09% performed best amongst them. All the results are obtained using Montgomery and Shenzhen datasets which are available in public domain."}
{"_id": "24bd8a844f6d07dc2ec5a0211496e2741506c588", "title": "MARKETING AND THE INTERNET : A RESEARCH REVIEW", "text": "This paper reviews the research to-date on how the Internet is impacting marketing. It focuses mainly on academic research published in US journals, but includes selected material from other sources. It covers internet adoption and usage; online purchasing behavior; internet advertising; internet economics and pricing; channels and intermediaries and online marketing strategy. The final section gives overall conclusions and briefly lists some opportunities for future research. Contrary to earlier hype, the Internet does not change the fundamental principles of marketing. Nor has its impact to-date \u2013 e.g. on consumer behavior, advertising, pricing, channels, intermediaries, strategy and globalization \u2013 been anything like as dramatic as predicted. Nevertheless, it has already had some effect on all of these areas. For instance, it is emerging as a flexible, fast-growing advertising medium and as a significant direct channel in many markets (PCs, books, music, travel, and even cars). It has led to some reduction in average prices and in price dispersion. It is bringing into the mainstream previously exotic business models such as auctions, electronic markets, and brand communities. Most of its successes have come from established businesses with \u2018bricks and clicks\u2019 strategies which use the Internet to supplement existing activities but rarely to generate much direct revenue We expect its impact to increase steadily over the next ten years, as the technology improves, its availability and usage continue to grow, and businesses learn better how to use it within their wider marketing strategy. Earlier expectations about the Internet were so overblown that the reality was bound to disappoint. But by any normal standards, it is a fast-growing young medium with wide-ranging effects across the whole marketing spectrum."}
{"_id": "aa33f96e37f5222ef61403ac13dab54247120bdb", "title": "Social-Aware Movie Recommendation via Multimodal Network Learning", "text": "With the rapid development of Internet movie industry social-aware movie recommendation systems (SMRs) have become a popular online web service that provide relevant movie recommendations to users. In this effort many existing movie recommendation approaches learn a user ranking model from user feedback with respect to the movie's content. Unfortunately this approach suffers from the sparsity problem inherent in SMR data. In the present work we address the sparsity problem by learning a multimodal network representation for ranking movie recommendations. We develop a heterogeneous SMR network for movie recommendation that exploits the textual description and movie-poster image of each movie as well as user ratings and social relationships. With this multimodal data we then present a heterogeneous information network learning framework called SMR-multimodal network representation learning (MNRL) for movie recommendation. To learn a ranking metric from the heterogeneous information network we also developed a multimodal neural network model. We evaluated this model on a large-scale dataset from a real world SMR Web site and we find that SMR-MNRL achieves better performance than other state-of-the-art solutions to the problem."}
{"_id": "7a63dd6b58db9e6557ed7a2c47637d95d6dd7863", "title": "A hyper-heuristic evolutionary algorithm for automatically designing decision-tree algorithms", "text": "Decision tree induction is one of the most employed methods to extract knowledge from data, since the representation of knowledge is very intuitive and easily understandable by humans. The most successful strategy for inducing decision trees, the greedy top-down approach, has been continuously improved by researchers over the years. This work, following recent breakthroughs in the automatic design of machine learning algorithms, proposes a hyper-heuristic evolutionary algorithm for automatically generating decision-tree induction algorithms, named HEAD-DT. We perform extensive experiments in 20 public data sets to assess the performance of HEAD-DT, and we compare it to traditional decision-tree algorithms such as C4.5 and CART. Results show that HEAD-DT can generate algorithms that significantly outperform C4.5 and CART regarding predictive accuracy and F-Measure."}
{"_id": "2d3e12639b809060748773c259f40ec15710b336", "title": "On the tradeoff between privacy and utility in data publishing", "text": "In data publishing, anonymization techniques such as generalization and bucketization have been designed to provide privacy protection. In the meanwhile, they reduce the utility of the data. It is important to consider the tradeoff between privacy and utility. In a paper that appeared in KDD 2008, Brickell and Shmatikov proposed an evaluation methodology by comparing privacy gain with utility gain resulted from anonymizing the data, and concluded that \"even modest privacy gains require almost complete destruction of the data-mining utility\". This conclusion seems to undermine existing work on data anonymization. In this paper, we analyze the fundamental characteristics of privacy and utility, and show that it is inappropriate to directly compare privacy with utility. We then observe that the privacy-utility tradeoff in data publishing is similar to the risk-return tradeoff in financial investment, and propose an integrated framework for considering privacy-utility tradeoff, borrowing concepts from the Modern Portfolio Theory for financial investment. Finally, we evaluate our methodology on the Adult dataset from the UCI machine learning repository. Our results clarify several common misconceptions about data utility and provide data publishers useful guidelines on choosing the right tradeoff between privacy and utility."}
{"_id": "a01fee3d8ba37f4e51c371ee91409d69da725c88", "title": "Wideband Tapered-Slot Antenna with Corrugated Edges for GPR Applications", "text": "A novel Wideband tapered-slot antenna for GPR applications is proposed. A conventional approach to reduce unwanted interactions between the antenna and the ground surface requires extensive microwave absorbers treatment. For the same purpose we propose to use corrugations along the antenna edges. The corrugations improve directivity and wideband matching of the antenna with free space retaining also good radiation efficiency. Further improvement of the antenna performance has been achieved by patching the corrugated areas of the antenna with the resistance card strips. The elimination of the foam-type absorbers makes the antenna more robust and easier in fabrication."}
{"_id": "16bd1fbe3694173eda4ad4338a85f8288d19bf02", "title": "Relational Learning of Pattern-Match Rules for Information Extraction", "text": "Information extraction is a form of shallow text processing which locates a specified set of relevant items in natural language documents. Such systems can be useful, but require domain-specific knowledge and rules, and are time-consuming and difficult to build by hand, making infomation extraction a good testbed for the application of machine learning techniques to natural language processing. This paper presents a system, RAPIER, that takes pairs of documents and filled templates and induces pattern-match rules that directly extract fillers for the slots in the template. The learning algorithm incorporates techniques from several inductive logic programming systems and learns unbounded patterns that include constraints on the words and part-of-speech tags surrounding the filler. Encouraging results are presented on learning to extract information from computer job postings from the newsgroup misc. jobs. offered."}
{"_id": "5dac19b9d00ae6748132c033608b0449d127a1b6", "title": "Concept-Based Knowledge Discovery in Texts Extracted from the Web", "text": "This paper presents an approach for knowledge discovery in texts extracted from the Web. Instead of analyzing words or attribute values, the approach is based on concepts, which are extracted from texts to be used as characteristics in the mining process. Statistical techniques are applied on concepts in order to find interesting patterns in concept distributions or associations. In this way, users can perform discovery in a high level, since concepts describe real world events, objects, thoughts, etc. For identifying concepts in texts, a categorization algorithm is used associated to a previous classification task for concept definitions. Two experiments are presented: one for political analysis and other for competitive intelligence. At the end, the approach is discussed, examining its problems and advantages in the Web context."}
{"_id": "6665e03447f989c9bdb3432d93e89b516b9d18a7", "title": "Fast Effective Rule Induction", "text": ""}
{"_id": "381f43ee885b78ec2b2264c915135e19b7dde8b6", "title": "A Survey of Data Mining and Knowledge Discovery Software Tools", "text": "Knowledge discovery in databases is a rapidly growing field, whose development is driven by strong research interests as well as urgent practical, social, and economical needs. While the last few years knowledge discovery tools have been used mainly in research environments, sophisticated software products are now rapidly emerging. In this paper, we provide an overview of common knowledge discovery tasks and approaches to solve these tasks. We propose a feature classification scheme that can be used to study knowledge and data mining software. This scheme is based on the software's general characteristics, database connectivity, and data mining characteristics. We then apply our feature classification scheme to investigate 43 software products, which are either research prototypes or commercially available. Finally, we specify features that we consider important for knowledge discovery software to possess in order to accommodate its users effectively, as well as issues that are either not addressed or insufficiently solved yet."}
{"_id": "c14663df0457580c002572b27a9246e2756e2c30", "title": "Prosthetic mesh used for inguinal and ventral hernia repair: normal appearance and complications in ultrasound and CT.", "text": "The use of prosthetic mesh has now become accepted practice in the treatment of patients with both inguinal and ventral hernias. This pictorial review illustrates the various radiological appearances of these meshes and also demonstrates the post-operative complications associated with their implantation."}
{"_id": "77d9d8d4f1179cbb02edf35393f13be1ded1c684", "title": "Design and Roadmap Methodology for Integrated Power Modules in High Switching Frequency Synchronous Buck Voltage Regulators", "text": "The proposed methodology encompasses four study phases that address the following key design aspects of integrated power modules (IPM) for voltage regulators: Fundamental switching understanding, determination of compact/accurate loss expressions and derivation of design rules, guidelines and figure of merits for converter optimisation. The applied methods are based on several modelling approaches suitable for the different study phases. The simplified loss model proves to be effective to breakdown the power losses of a state of the art IPM. Analysis of an optimised design leads to the identification of critical circuit and device parameters that may well condition the layout of future technological roadmap targets."}
{"_id": "b3e3f63c25e22d5a13cb3d73561fc752b913b7dd", "title": "Analysing B2B electronic procurement benefits: information systems perspective", "text": "Abstract Purpose \u2013 This paper aims to present electronic procurement benefits identified in four case companies from the information technology (IT), hi-tech sector. Design/methodology/approach \u2013 Multi-case study design was applied. The benefits reported in the companies were analysed and classified according to taxonomies from the information systems discipline. Finally, a new benefits classification was proposed. The framework was developed based on information systems literature. Findings \u2013 The research confirmed difficulties with benefits evaluation, as, apart from operational benefits, non-financial, intangible benefits at strategic level were also identified. Traditional evaluation methods are unable to capture all benefits categories, especially at strategic level. New taxonomy was created, which allows evaluation of the complex e-procurement impact. In the proposed taxonomy, e-procurement benefits are classified according to their level (operational, tactical, strategic), area of impact, applying scorecard dimensions (customer, process, financial, learning and growth). In addition the benefits characteristic is captured (tangible, intangible, financial and non-financial). Research limitations/implications \u2013 Research is based on four case studies only. Findings are specific to case companies and the environment in which they operate. The framework should be tested further in different contexts. Practical implications \u2013 The new taxonomy allows evaluation of the complex e-procurement impact, demonstrating that benefits achieved do not concern merely the financial impact. The framework can be applied to preparing new systems implementation as well as to evaluating existing systems. Originality/value \u2013 The paper applies information systems frameworks to the electronic procurement field, which allows one to look at e-procurement systems considering its complex impact. The framework can also be used to evaluate different systems, not simply e-procurement."}
{"_id": "90a6f53bf0eb10fe53f908419c9ac644b16d6065", "title": "Riemannian Geometry and Geometric Analysis (5. ed.)", "text": ""}
{"_id": "b51523055d8d73cbade282be22dad8e66cd80c27", "title": "Improving Comprehension of Measurements Using Concrete Re-expression Strategies", "text": "It can be difficult to understand physical measurements (e.g., 28 lb, 600 gallons) that appear in news stories, data reports, and other documents. We develop tools that automatically re-express unfamiliar measurements using the measurements of familiar objects. Our work makes three contributions: (1) we identify effectiveness criteria for objects used in concrete measurement re-expressions; (2) we operationalize these criteria in a scalable method for mining a large dataset of concrete familiar objects with their physical dimensions from Amazon and Wikipedia; and (3) we develop automated concrete re-expression tools that implement three common re-expression strategies (adding familiar context, reunitization and proportional analogy) as energy minimization algorithms. Crowdsourced evaluations of our tools indicate that people find news articles with re-expressions more helpful and re- expressions help them to better estimate new measurements."}
{"_id": "bc3479f0bef8c6b154f78e8f41879c9e5e9cea95", "title": "GPTIPS 2: an open-source software platform for symbolic data mining", "text": "GPTIPS is a free, open source MATLAB based software platform for symbolic data mining (SDM). It uses a \u2018multigene\u2019 variant of the biologically inspired machine learning method of genetic programming (MGGP) as the engine that drives the automatic model discovery process. Symbolic data mining is the process of extracting hidden, meaningful relationships from data in the form of symbolic equations. In contrast to other data-mining methods, the structural transparency of the generated predictive equations can give new insights into the physical systems or processes that generated the data. Furthermore \u2013 this transparency makes the models very easy to deploy outside of MATLAB. The rationale behind GPTIPS is to reduce the technical barriers to using, understanding, visualising and deploying GP based symbolic models of data, whilst at the same time remaining highly customisable and delivering robust numerical performance for \u2018power users\u2019. In this chapter, notable new features of the latest version of the software GPTIPS 2 are discussed with these aims in mind. Additionally, a simplified variant of the MGGP high level gene crossover mechanism is proposed. It is demonstrated that the new functionality of GPTIPS 2 (a) facilitates the discovery of compact symbolic relationships from data using multiple approaches, e.g. using novel \u2018gene-centric\u2019 visualisation analysis to mitigate horizontal bloat and reduce complexity in multigene symbolic regression models (b) provides numerous methods for visualising the properties of symbolic models (c) emphasises the generation of graphically navigable \u2018libraries\u2019 of models that are optimal in terms of the Pareto trade off surface of model performance and complexity and (d) expedites 'real world' applications by the simple, rapid and robust deployment of symbolic models outside the software environment they were developed in."}
{"_id": "f67acaa10ad4a0eb7130cd1f0b953478056f32af", "title": "Millimeter-Wave Receiver Concepts for 77 GHz Automotive Radar in Silicon-Germanium Technology", "text": "The first \u20ac price and the \u00a3 and $ price are net prices, subject to local VAT. Prices indicated with * include VAT for books; the \u20ac(D) includes 7% for Germany, the \u20ac(A) includes 10% for Austria. Prices indicated with ** include VAT for electronic products; 19% for Germany, 20% for Austria. All prices exclusive of carriage charges. Prices and other details are subject to change without notice. All errors and omissions excepted. D. Kissinger Millimeter-Wave Receiver Concepts for 77 GHz Automotive Radar in Silicon-Germanium Technology"}
{"_id": "83009e2fa390a5935b88d1d91d2948de890ee1dd", "title": "Polygon-Invariant Generalized Hough Transform for High-Speed Vision-Based Positioning", "text": "The generalized Hough transform (GHT) is widely used for detecting or locating objects under similarity transformation. However, a weakness of the traditional GHT is its large storage requirement and time-consuming computational complexity due to the 4-D parameter space voting strategy. In this paper, a polygon-invariant GHT (PI-GHT) algorithm, as a novel scale- and rotation-invariant template matching method, is presented for high-speed object vision-based positioning. To demonstrate the performance of PI-GHT, several experiments were carried out to compare this novel algorithm with the other five popular matching methods. Experimental results show that the computational effort required by PI-GHT is smaller than that of the common methods due to the similarity transformations applied to the scale- and rotation-invariant triangle features. Moreover, the proposed PI-GHT maintains inherent robustness against partial occlusion, noise, and nonlinear illumination changes, because the local triangle features are based on the gradient directions of edge points. Consequently, PI-GHT is implemented in packaging equipment for radio frequency identification devices at an average time of 4.13 ms and 97.06% matching rate, to solder paste printing at average time nearly 5 ms with 99.87%. PI-GHT is applied to LED manufacturing equipment to locate multiobjects at least five times improvement in speed with a 96% matching rate."}
{"_id": "f82b4bba7137450cba231aae466512e21e73961e", "title": "Dual Business Models: Going beyond Spatial Separation", "text": "Despite substantial literature on managing two conflicting business models within one company, little has been done regarding qualitative studies investigating the organizational context and the installed integration and separation mechanisms between these two. Based on organizational context factors and business model elements, using the organizational ambidexterity as theoretical lens, this research paper presents a longitudinal case study conducted at a magazine publisher that intends to advance the comprehension of the interplay between traditional and online business models. Our analysis suggests that integration and separation mechanisms occur along all elements of a business model. The organizational context factors moderate the integration and separation mechanisms between the traditional and the online business model and uncover details regarding the conflict inherent in those dual business models."}
{"_id": "6b8e362385c018f16e9a083ba952a7cae659f166", "title": "Multi-source Neural Automatic Post-Editing: FBK's participation in the WMT 2017 APE shared task", "text": "Previous phrase-based approaches to Automatic Post-editing (APE) have shown that the dependency of MT errors from the source sentence can be exploited by jointly learning from source and target information. By integrating this notion in a neural approach to the problem, we present the multi-source neural machine translation (NMT) system submitted by FBK to the WMT 2017 APE shared task. Our system implements multi-source NMT in a weighted ensemble of 8 models. The n-best hypotheses produced by this ensemble are further re-ranked using features based on the edit distance between the original MT output and each APE hypothesis, as well as other statistical models (n-gram language model and operation sequence model). This solution resulted in the best system submission for this round of the APE shared task for both en-de and de-en language directions. For the former language direction, our primary submission improves over the MT baseline up to -4.9 TER and +7.6 BLEU points. For the latter, where the higher quality of the original MT output reduces the room for improvement, the gains are lower but still significant (-0.25 TER and +0.3 BLEU)."}
{"_id": "6695ef22b146f1669b7530236e8859e0a0e527ed", "title": "A review of facial nerve anatomy.", "text": "An intimate knowledge of facial nerve anatomy is critical to avoid its inadvertent injury during rhytidectomy, parotidectomy, maxillofacial fracture reduction, and almost any surgery of the head and neck. Injury to the frontal and marginal mandibular branches of the facial nerve in particular can lead to obvious clinical deficits, and areas where these nerves are particularly susceptible to injury have been designated danger zones by previous authors. Assessment of facial nerve function is not limited to its extratemporal anatomy, however, as many clinical deficits originate within its intratemporal and intracranial components. Similarly, the facial nerve cannot be considered an exclusively motor nerve given its contributions to taste, auricular sensation, sympathetic input to the middle meningeal artery, and parasympathetic innervation to the lacrimal, submandibular, and sublingual glands. The constellation of deficits resulting from facial nerve injury is correlated with its complex anatomy to help establish the level of injury, predict recovery, and guide surgical management."}
{"_id": "fcd0807f968a99cdbfa7c2c099686e5286b63bb1", "title": "Low-Cost Binary128 Floating-Point FMA Unit Design with SIMD Support", "text": "Binary64 arithmetic is rapidly becoming inadequate to cope with today's large-scale computations due to an accumulation of errors. Therefore, binary128 arithmetic is now required to increase the accuracy and reliability of these computations. At the same time, an obvious trend emerging in modern processors is to extend their instruction sets by allowing single instruction multiple data (SIMD) execution, which can significantly accelerate the data-parallel applications. To address the combined demands mentioned above, this paper presents the architecture of a low-cost binary128 floating-point fused multiply add (FMA) unit with SIMD support. The proposed FMA design can execute a binary128 FMA every other cycle with a latency of four cycles, or two binary64 FMAs fully pipelined with a latency of three cycles, or four binary32 FMAs fully pipelined with a latency of three cycles. We use two binary64 FMA units to support binary128 FMA which requires much less hardware than a fully pipelined binary128 FMA. The presented binary128 FMA design uses both segmentation and iteration hardware vectorization methods to trade off performance, such as throughput and latency, against area and power. Compared with a standard binary128 FMA implementation, the proposed FMA design has 30 percent less area and 29 percent less dynamic power dissipation."}
{"_id": "a569455d692792e3445cbbd57940d8d4413b7b80", "title": "Applications of RFID in Pharmaceutical Industry", "text": "Security and safety are two important features desired in pharmaceutical supply chain and achieving the same is a challenging task. The need to secure and authenticate pharmaceutical products has increased tremendously with the emerging counterfeit product market. The motivation to introduce counterfeit pharmaceutical products in the supply chain could be to gain rapid economic benefits or affecting the reputation of strong brand in the pharmaceutical industry. RFID technology can be used to deter counterfeiting attempts. It can also be used in various other domains in the pharmaceutical industry. The main aim of this paper is to outline all the applications of RFID in the pharmaceutical industry. After explaining the main applications, we discuss how information hiding techniques could be used with RFID to offer efficient expiry date management, pharmaceutical tamper detection, and fraud detection and prevention."}
{"_id": "b1f6fffccfd5a2d0f618d5b6e73ccfaf370a152d", "title": "Feeding the microbiota-gut-brain axis: diet, microbiome, and neuropsychiatry.", "text": "The microbial population residing within the human gut represents one of the most densely populated microbial niche in the human body with growing evidence showing it playing a key role in the regulation of behavior and brain function. The bidirectional communication between the gut microbiota and the brain, the microbiota-gut-brain axis, occurs through various pathways including the vagus nerve, the immune system, neuroendocrine pathways, and bacteria-derived metabolites. This axis has been shown to influence neurotransmission and the behavior that are often associated with neuropsychiatric conditions. Therefore, research targeting the modulation of this gut microbiota as a novel therapy for the treatment of various neuropsychiatric conditions is gaining interest. Numerous factors have been highlighted to influence gut microbiota composition, including genetics, health status, mode of birth, and environment. However, it is diet composition and nutritional status that has repeatedly been shown to be one of the most critical modifiable factors regulating the gut microbiota at different time points across the lifespan and under various health conditions. Thus the microbiota is poised to play a key role in nutritional interventions for maintaining brain health."}
{"_id": "97a18d0c88d72bac9fbdfe9d19485ac37175177b", "title": "Small-size circular polarized patch antenna", "text": "Designing of an microstrip patch antenna with circular polarization (CP) and reduced size is a challenging and complex task. Here such an antenna is proposed and experimentally studied. Deployment of meandering technique and shorting pins together at the four corners of patch gives less back radiation along with acheiving CP and fairly reduce the size of antenna. Also it can provide inductive and capacitive loading effect to the patch which in turn control the frequency of operation. In this paper, a study is conducted with different shorting strip structures such as rectangle, U-shape, and meandering. Simulations were carried out in HFSS and compared the simulated results with different structures. It is found that meandering technique gives better size reduction than other two because the strongest current strengths concentrates at the meandering shorting strips. In addition, meandering technique provides high front-to-back ratio than others."}
{"_id": "05393361e6d9e56ee7dbabb1e5ef6c1c212fc34d", "title": "A Practical Guide to Support Vector Classification", "text": "The support vector machine (SVM) is a popular classification technique. However, beginners who are not familiar with SVM often get unsatisfactory results since they miss some easy but significant steps. In this guide, we propose a simple procedure which usually gives reasonable results."}
{"_id": "51853d0e2b9ca6962bd8fa1272c77811e5ad18ab", "title": "Twitter Catches The Flu: Detecting Influenza Epidemics using Twitter", "text": "With the recent rise in popularity and scale of social media, a growing need exists for systems that can extract useful information from huge amounts of data. We address the issue of detecting influenza epidemics. First, the proposed system extracts influenza related tweets using Twitter API. Then, only tweets that mention actual influenza patients are extracted by the support vector machine (SVM) based classifier. The experiment results demonstrate the feasibility of the proposed approach (0.89 correlation to the gold standard). Especially at the outbreak and early spread (early epidemic stage), the proposed method shows high correlation (0.97 correlation), which outperforms the state-of-the-art methods. This paper describes that Twitter texts reflect the real world, and that NLP techniques can be applied to extract only tweets that contain useful information."}
{"_id": "5f47dff76174f0a0cda429e2ed0e733b4026ddea", "title": "You Are What You Tweet: Analyzing Twitter for Public Health", "text": "Analyzing user messages in social media can measure different population characteristics, including public health measures. For example, recent work has correlated Twitter messages with influenza rates in the United States; but this has largely been the extent of mining Twitter for public health. In this work, we consider a broader range of public health applications for Twitter. We apply the recently introduced Ailment Topic Aspect Model to over one and a half million health related tweets and discover mentions of over a dozen ailments, including allergies, obesity and insomnia. We introduce extensions to incorporate prior knowledge into this model and apply it to several tasks: tracking illnesses over times (syndromic surveillance), measuring behavioral risk factors, localizing illnesses by geographic region, and analyzing symptoms and medication usage. We show quantitative correlations with public health data and qualitative evaluations of model output. Our results suggest that Twitter has broad applicability for public health research."}
{"_id": "d08e416c1fee72adc3f6b4aa6e0e9098dd89f1f3", "title": "Automatic Classification of Change Requests for Improved IT Service Quality", "text": "Faulty changes to the IT infrastructure can lead to critical system and application outages, and therefore cause serious economical losses. In this paper, we describe a change planning support tool that aims at assisting the change requesters in leveraging aggregated information associated with the change, like past failure reasons or best implementation practices. The thus gained knowledge can be used in the subsequent planning and implementation steps of the change. Optimal matching of change requests with the aggregated information is achieved through the classification of the change request into about 200 fine-grained activities. We propose to automatically classify the incoming change requests using various information retrieval and machine learning techniques. The cost of building the classifiers is reduced by employing active learning techniques or by leveraging labeled features. Historical tickets from two customers were used to empirically assess and compare the accuracy of the different classification approaches (Lucene index, multinomial logistic regression, and generalized expectation criteria)."}
{"_id": "005b12da7075264fb784c9ca1eb8115c4cce196e", "title": "Automatic fire detection based on soft computing techniques: review from 2000 to 2010", "text": "Automatic fire detection system is a system that is capable of assessing environmental factors and their effects on the environment as well as predicting the occurrence of fire in the early stages and even before the outbreak. There are two perspectives in fire detection: fire detection in forests or jungles and fire detection in occupied or residential areas. Automatic fire detection has attracted increased attention due to its importance in decreasing fire damage. There are many studies that have considered appropriate techniques for early fire detection. In recent years researches have been studying technical developments in this field aimed at exploiting wireless communications networks, detection systems and fire prediction systems design. In this paper the automatic fire detection researches using intelligent techniques from 2000 to 2010 is reviewed. We could classify researches to four categories: fire detectors, reduce false alarms systems, fire data analysis and fire predictors. We also classify the intelligent techniques outlined in the researches for each category."}
{"_id": "740f94e0325b67e6ff5efba0f5112c977e21a75a", "title": "Punctuation Prediction Model for Conversational Speech", "text": "An ASR system usually does not predict any punctuation or capitalization. Lack of punctuation causes problems in result presentation and confuses both the human reader and off-the-shelf natural language processing algorithms. To overcome these limitations, we train two variants of Deep Neural Network (DNN) sequence labelling models a Bidirectional Long Short-Term Memory (BLSTM) and a Convolutional Neural Network (CNN), to predict the punctuation. The models are trained on the Fisher corpus which includes punctuation annotation. In our experiments, we combine time-aligned and punctuated Fisher corpus transcripts using a sequence alignment algorithm. The neural networks are trained on Common Web Crawl GloVe embedding of the words in Fisher transcripts aligned with conversation side indicators and word time infomation. The CNNs yield a better precision and BLSTMs tend to have better recall. While BLSTMs make fewer mistakes overall, the punctuation predicted by the CNN is more accurate especially in the case of question marks. Our results constitute significant evidence that the distribution of words in time, as well as pre-trained embeddings, can be useful in the punctuation prediction task."}
{"_id": "67301c286439a7c24368300ea13e9785bd666aed", "title": "Velocity determination in scenes containing several moving objects", "text": "A method is described which quantifies the speed and direction of several moving objects in a sequence of digital images. A relationship between the time variation of intensity, the spatial gradient, and velocity has been developed which allows the determination of motion using clustering techniques. This paper describes these relationships, the clustering technique, and provides examples of the technique on real images containing several moving objects."}
{"_id": "4c8e53712f54587d7ae73bede360473007728546", "title": "Food recognition using statistics of pairwise local features", "text": "Food recognition is difficult because food items are de-formable objects that exhibit significant variations in appearance. We believe the key to recognizing food is to exploit the spatial relationships between different ingredients (such as meat and bread in a sandwich). We propose a new representation for food items that calculates pairwise statistics between local features computed over a soft pixel-level segmentation of the image into eight ingredient types. We accumulate these statistics in a multi-dimensional histogram, which is then used as a feature vector for a discriminative classifier. Our experiments show that the proposed representation is significantly more accurate at identifying food than existing methods."}
{"_id": "fd26732084b318aee34651c4ccbcb0cf8d7f9d8f", "title": "A high-performance silicon-on-insulator MEMS gyroscope operating at atmospheric pressure", "text": "This paper presents a new, high-performance silicon-on-insulator (SOI) MEMS gyroscope with decoupled oscillation modes. The gyroscope tructure allows it to achieve matched-resonance-frequencies, large drive-mode oscillation amplitude, high sense-mode quality factor, and low echanical cross-talk. The gyroscope is fabricated through the commercially available SOIMUMPS process of MEMSCAP Inc. The fabricated yroscope has minimum capacitive sense gaps of 2.6 m and a structural silicon thickness of 25 m, and it fits into a chip area smaller than mm \u00d7 3 mm. The fabricated gyroscope is hybrid connected to a CMOS capacitive interface ASIC chip, which is fabricated in a standard 0.6 m MOS process. The characterization of the hybrid-connected gyroscope demonstrates a low measured noise-equivalent rate of 90\u25e6/h/Hz1/2 at tmospheric pressure, eliminating the need for a vacuum package for a number of applications. R2-non-linearity of the gyroscope is measured to e better than 0.02%. The gyroscope has a low quadrature signal of 70\u25e6/s and a short-term bias stability of 1.5\u25e6/s. The angular rate sensitivity of the yroscope is 100 V/(\u25e6/s) at atmospheric pressure, which improves 24 times to 2.4 mV/(\u25e6/s) at vacuum. The noise-equivalent rate of the gyroscope t 20 mTorr vacuum is measured to be 35\u25e6/h/Hz1/2, which can be improved further by reducing the electromechanical noise. 2006 Elsevier B.V. All rights reserved."}
{"_id": "b328fb4ef092ccdcdad0782d540e8546d1f5c5ec", "title": "A High-Efficiency Ultra-Low-Power CMOS Rectifier for RF Energy Harvesting Applications", "text": "This paper presents a novel ultra-low power rectifier for RF energy harvester, designed and implemented in standard 130 nm CMOS technology. The proposed 915 MHz ISM band RF energy harvester is designed for wearable medical devices and internet of things (IoT) applications. An off-chip differential matching network passively boosts the low-level incoming AC signal generated by the antenna. Then, a novel self-compensated cross-coupled rectifier is designed to convert the AC signal into a DC output voltage. The rectifier is comprised of 10 stages and it uses both dynamic and static bias compensation to decrease the transistors forward voltage drop. The post-layout simulation results demonstrate a sensitivity of \u221230.5 dBm for 1 V output at a capacitive load which is lower than the current state-of-the-art. The peak end-to-end efficiency is 42.8 % at \u221216 dBm input power, delivering 2.32 V at 0.5 M\u03a9 resistor load."}
{"_id": "52b5eab895a2d9ae24ea72ca72782974a52f90c4", "title": "Relation Extraction with Matrix Factorization and Universal Schemas", "text": "Traditional relation extraction predicts relations within some fixed and finite target schema. Machine learning approaches to this task require either manual annotation or, in the case of distant supervision, existing structured sources of the same schema. The need for existing datasets can be avoided by using a universal schema: the union of all involved schemas (surface form predicates as in OpenIE, and relations in the schemas of preexisting databases). This schema has an almost unlimited set of relations (due to surface forms), and supports integration with existing structured data (through the relation types of existing databases). To populate a database of such schema we present matrix factorization models that learn latent feature vectors for entity tuples and relations. We show that such latent models achieve substantially higher accuracy than a traditional classification approach. More importantly, by operating simultaneously on relations observed in text and in pre-existing structured DBs such as Freebase, we are able to reason about unstructured and structured data in mutually-supporting ways. By doing so our approach outperforms stateof-the-art distant supervision."}
{"_id": "4f6d882467d1a567db604a2f94496e4c70422410", "title": "Benign violations: making immoral behavior funny.", "text": "Humor is an important, ubiquitous phenomenon; however, seemingly disparate conditions seem to facilitate humor. We integrate these conditions by suggesting that laughter and amusement result from violations that are simultaneously seen as benign. We investigated three conditions that make a violation benign and thus humorous: (a) the presence of an alternative norm suggesting that the situation is acceptable, (b) weak commitment to the violated norm, and (c) psychological distance from the violation. We tested the benign-violation hypothesis in the domain of moral psychology, where there is a strong documented association between moral violations and negative emotions, particularly disgust. Five experimental studies show that benign moral violations tend to elicit laughter and amusement in addition to disgust. Furthermore, seeing a violation as both wrong and not wrong mediates behavioral displays of humor. Our account is consistent with evolutionary accounts of laughter, explains humor across many domains, and suggests that humor can accompany negative emotion."}
{"_id": "9d4adf5e75cea94b4c7f09c618e4462b543d9c87", "title": "An approach to discriminate GNSS spoofing from multipath fading", "text": "GNSS signals are vulnerable to various types of interference including jamming and spoofing attacks. Spoofing signals are designed to deceive target GNSS receivers without being detected by conventional receiver quality monitoring metrics. This paper focuses on detecting an overlapped spoofing attack where the correlation peaks of the authentic and spoofing signals interact during the attack. Several spoofing detection and signal quality monitoring (SQM) metrics are introduced. This paper proposes a spoofing detection architecture utilizing combination of different metrics to detect spoofing signals and distinguish them from multipath signals. Experimental results show that the pre-despreading spoofing detection metrics such as variance analysis are not sensitive to multipath propagation and can be used in conjunction with post-despreading methods to correctly detect spoofing signals. Several test scenarios based on different spoofing and multipath cases are performed to demonstrate the effectiveness of the proposed architecture to correctly detect spoofing attack ands distinguish them from multipath."}
{"_id": "5421d83992eb058505db23b5fb21e762ee466d9f", "title": "PAST-TENSE GENERATION FROM FORM VERSUS MEANING: BEHAVIOURAL DATA AND SIMULATION EVIDENCE.", "text": "The standard task used to study inflectional processing of verbs involves presentation of the stem form from which the participant is asked to generate the past tense. This task reveals a processing disadvantage for irregular relative to regular English verbs, more pronounced for lower-frequency items. Dual- and single-mechanism theories of inflectional morphology are both able to account for this pattern; but the models diverge in their predictions concerning the magnitude of the regularity effect expected when the task involves past-tense generation from meaning. In this study, we asked normal speakers to generate the past tense from either form (verb stem) or meaning (action picture). The robust regularity effect observed in the standard form condition was no longer reliable when participants were required to generate the past tense from meaning. This outcome would appear problematic for dual-mechanism theories to the extent that they assume the process of inflection requires stem retrieval. By contrast, it supports single-mechanism models that consider stem retrieval to be task-dependent. We present a single-mechanism model of verb inflection incorporating distributed phonological and semantic representations that reproduces this task-dependent pattern."}
{"_id": "1d1ebcb8de65575d870ceacf4af613961caa2257", "title": "Deep Learning Approaches to Text Production", "text": "Text production is a key component of many NLP applications. In data-driven approaches, it is used for instance, to generate dialogue turns from dialogue moves [Wen et al., 2015, Wen et al., 2016, Novikova et al., 2017], to verbalise the content of Knowledge bases [Gardent et al., 2017a, Gardent et al., 2017b] or to generate natural English sentences from rich linguistic representations, such as dependency trees [Belz et al., 2011, Mille et al., 2018] or Abstract Meaning Representations [May and Priyadarshi, 2017, Konstas et al., 2017, Song et al., ]. In text-driven methods on the other hand, text production is at work in sentence compression [Knight and Marcu, 2000, Cohn and Lapata, 2008, Filippova and Strube, 2008, Pitler, 2010, Filippova et al., 2015, Toutanova et al., 2016]; sentence fusion [McKeown et al., 2010, Filippova, 2010, Thadani and McKeown, 2013]; paraphrasing [Dras, 1999, Barzilay and McKeown, 2001, Bannard and Callison-Burch, 2005, Wubben et al., 2010, Narayan et al., 2016, Dong et al., 2017, Mallinson et al., 2017]; sentence (or text) simplification [Siddharthan et al., 2004, Zhu et al., 2010, Woodsend and Lapata, 2011, Wubben et al., 2012, Narayan and Gardent, 2014, Xu et al., 2015, Narayan and Gardent, 2016, Zhang and Lapata, 2017, Narayan et al., 2017], text summarisation [Wan, 2010, Nenkova and McKeown, 2011, Woodsend and Lapata, 2010, Rush et al., 2015, Cheng and Lapata, 2016, Nallapati et al., 2016, Chen et al., 2016, Tan and Wan, 2017, See et al., 2017, Nallapati et al., 2017, Paulus et al., 2017, Yasunaga et al., 2017, Narayan et al., 2018a, Narayan et al., 2018b, Pasunuru and Bansal, 2018, Celikyilmaz et al., 2018] and end-to-end dialogue systems [Li et al., 2017]. Following the success of encoder-decoder models in modeling sequence-rewriting tasks such as machine translation [Sutskever et al., 2011, Bahdanau et al., 2014], deep learning models have successfully been applied to the various text production tasks. For instance, [Rush et al., 2015] utilize a local attention-based model for abstractive summarisation, [Shang et al., 2015] propose an encoder-decoder model for response generation, [See et al., 2017] uses a hybrid of encoder-decoder model and pointer network [Vinyals et al., 2015] for story highlight generation, [Dong et al., 2017] exploits an encoder-decoder model for question rephrasing and [Konstas et al., 2017] for AMR generation. In this tutorial, we will cover the fundamentals and the state-of-the-art research on neural models for text production. Each text production task raises a slightly different communication goal (e.g, how to take the dialogue context into account when producing a dialogue turn; how to detect and merge relevant information when summarising a text; or how to produce a well-formed text that correctly capture the information contained in some input data in the case of data-to-text generation). We will outline the constraints specific to each subtasks and examine how the existing neural models account for them."}
{"_id": "ccab8d39a252c43803d1e3273a48db162811ae5c", "title": "A Single-Stage Single-Switch LED Driver Based on Class-E Converter", "text": "This paper proposes a single-stage single-switch light-emitting diode (LED) driver that integrates a buck-boost circuit with a Class-E resonant converter by sharing single-switch device. The buck-boost circuit, working as a power-factor correction (PFC) stage, operates in the discontinuous conduction mode (DCM) to shape the input current. The Class-E converter steps down the voltage to drive the LED. For specific component parameters, the Class-E converter can achieve soft-switching on the power switch and two rectifier diodes, and reduce switch losses at high frequencies. Electrolytic capacitors are used in the proposed converter to achieve a lower cost, but the system reliability decreases. To overcome this disadvantage, film capacitors can be used, but the current ripple increases. Neglecting the cost, multilayered film capacitors are the best option, if higher reliability and lower current ripple are required. The proposed driver features high efficiency (90.8%), and a high power factor (PF) (0.995). In this paper, analytical results and design considerations at 100 kHz are presented, and a 100-W prototype with 110 VAC input was built to validate the theoretical analysis."}
{"_id": "b9c8819d0238f0463c01de6ad393c89d93266f46", "title": "Developing and validating an instrument for measuring user-perceived web quality", "text": "Many of the instruments to measure information and system quality were developed in the context of mainframe and PC-based technologies of yesteryears. With the proliferation of the Internet and World Wide Web applications, users are increasingly interfacing and interacting with web-based applications. It is, therefore, important to develop new instruments and scales, which are directly targeted to these new interfaces and applications. In this article, we report on the development of an instrument that captures key characteristics of web site quality from the user\u00e2\u20acTMs perspective. The 25item instrument measures four dimensions of web quality: specific content, content quality, appearance and technical adequacy. While improvements are possible, the instrument exhibits excellent psychometric properties. The instrument would be useful to organizations and web designers as it provides an aggregate measure of web quality, and to researchers in related web research. a 1, b Purchase Export Previous article Next article Check if you have access through your login credentials or your institution."}
{"_id": "363c01593f54d2ab08b2580bca0362976ef5dcf4", "title": "An Implementation of Scoped Memory for Real-Time Java", "text": "This paper presents our experience implementing the memory management extensions in the Real-Time Specification for Java. These extensions are designed to given real-time programmers the control they need to obtain predictable memory system behavior while preserving Java\u2019s safe memory model. We describe our implementation of certain dynamic checks required by the Real-Time Java extensions. In particular, we describe how to perform these checks in a way that avoids harmful interactions between the garbage collector and the memory management system. We also found that extensive debugging support was necessary during the development of Real-Time Java programs. We therefore used a static analysis and a dynamic debugging package during the development of our benchmark applications."}
{"_id": "c174e7929d52ee9153a2252b1d5e3c370cb919c8", "title": "Behavior-driven visualization recommendation", "text": "We present a novel approach to visualization recommendation that monitors user behavior for implicit signals of user intent to provide more effective recommendation. This is in contrast to previous approaches which are either insensitive to user intent or require explicit, user specified task information. Our approach, called Behavior-Driven Visualization Recommendation (BDVR), consists of two distinct phases: (1) pattern detection, and (2) visualization recommendation. In the first phase, user behavior is analyzed dynamically to find semantically meaningful interaction patterns using a library of pattern definitions developed through observations of real-world visual analytic activity. In the second phase, our BDVR algorithm uses the detected patterns to infer a user's intended visual task. It then automatically suggests alternative visualizations that support the inferred visual task more directly than the user's current visualization. We present the details of BDVR and describe its implementation within our lab's prototype visual analysis system. We also present study results that demonstrate that our approach shortens task completion time and reduces error rates when compared to behavior-agnostic recommendation."}
{"_id": "d21617565e2d6e5ff95f94daed8499d235f4a6a4", "title": "Small-signal analysis of closed-loop PWM boost converter in CCM with complex impedance load", "text": "The following closed-loop transfer functions of the boost converter operating in continuous-conduction mode (CCM) supplying a complex impedance load are derived and analyzed: input-to-output voltage Mvcl and reference-to-output Tcl. The load of the boost dc-dc converter is composed of a series-connected resistance and inductance. The dynamic characteristics of the closed-loop boost converter with a third-order double-lead integral compensator are evaluated for different load inductances. The theoretically predicted results are validated through switching-circuit simulations using a suitable converter design example."}
{"_id": "70ca66188f98537ba9e38d87ee2e5c594ef4196d", "title": "A 77-GHz FMCW MIMO Radar Based on an SiGe Single-Chip Transceiver", "text": "This paper describes a novel frequency-modulated continuous-wave radar concept, where methods like nonuniform sparse antenna arrays and multiple-input multiple-output techniques are used to improve the angular resolution of the proposed system. To demonstrate the practical feasibility using standard production techniques, a prototype sensor using a novel four-channel single-chip radar transceiver in combination with differential patch antenna arrays was realized on off-the-shelf RF substrate. Furthermore, to demonstrate its practical applicability, the assembled system was tested in real world measurement scenarios in conjunction with the presented efficient signal processing algorithms."}
{"_id": "7be83b992da56d892140f2b83d6672eb21b2465e", "title": "An IQ-modulator based heterodyne 77-GHz FMCW colocated MIMO radar system", "text": "In this work the realization of a 77-GHz frequency-division multiple-access-based frequency-modulated continuous-wave multiple-input multiple-output radar with four transceiver channels in conjunction with a non-uniform linear antenna array is presented. The radar system, consisting of an RF-frontend utilizes transceiver chips with integrated inphase/quadrature-modulators to generate the frequency shifted transmit signals and a field programmable gate array-based baseband board, is used for test measurements to verify the system performance and to demonstrate the beamforming capability as well as the accuracy of the digital-beamforming method."}
{"_id": "8da84ea04a289d06d314be75898d9aa96cdf7b55", "title": "MIMO radar theory and experimental results", "text": "The continuing progress of Moore's law has enabled the development of radar systems that simultaneously transmit and receive multiple coded waveforms from multiple phase centers and to process them in ways that have been unavailable in the past. The signals available for processing from these multiple-input multiple-output (MIMO) radar systems appear as spatial samples corresponding to the convolution of the transmit and receive aperture phase centers. The samples provide the ability to excite and measure the channel that consists of the transmit/receive propagation paths, the target and incidental scattering or clutter. These signals may be processed and combined to form an adaptive coherent transmit beam, or to search an extended area with high resolution in a single dwell. Adaptively combining the received data provides the effect of adaptively controlling the transmit beamshape and the spatial extent provides improved track-while-scan accuracy. This paper describes the theory behind the improved surveillance radar performance and illustrates this with measurements from experimental MIMO radars."}
{"_id": "df168c45654bf1d62b8e066e68be5ba1450a976a", "title": "Motion compensation and efficient array design for TDMA FMCW MIMO radar systems", "text": "In this paper we present methods for the design of planar frequency-modulated continuous-wave (FMCW) multiple-input multiple-output (MIMO) arrays with an emphasis on the problem of moving targets in time-division multiple-access (TDMA) systems. We discuss the influence of target motion and boundaries of operation and present a method to compensate for its effects, which requires special attention in the array design and in signal processing. Array design techniques, examples including an implementation, and measurement results are also covered in this article."}
{"_id": "1c8203e8b826de9d056dffbe6eaffd4fc245d7e1", "title": "An IQ-modulator based heterodyne 77-GHz FMCW radar", "text": "In this paper a method to realize a heterodyne frequency-modulated continuous-wave (FMCW) radar is presented. The system operates at a frequency of 77 GHz and is based on an in-phase quadrature-phase (IQ) modulator in the transmit (TX)-path of the system. This IQ modulator is used as a single-sideband mixer, which allows the realization of an offset frequency between the radar's TX and receive signal. In this way it is possible to shift the frequency band containing the target information away from dc, which simplifies FMCW range measurement of targets close to the radar."}
{"_id": "32253f9e37d5f2b6e9454809a83eee8f012d660d", "title": "CP-nets: A Tool for Representing and Reasoning with Conditional Ceteris Paribus Preference Statements", "text": "Information about user preferences plays a key role in automated decision making. In many domains it is desirable to assess such preferences in a qualitative rather than quantitative way. In this paper, we propose a qualitative graphical representation of preferences that reflects conditional dependence and independence of preference statements under a ceteris paribus (all else being equal) interpretation. Such a representation is often compact and arguably quite natural in many circumstances. We provide a formal semantics for this model, and describe how the structure of the network can be exploited in several inference tasks, such as determining whether one outcome dominates (is preferred to) another, ordering a set outcomes according to the preference relation, and constructing the best outcome subject to available evidence."}
{"_id": "ecea4a8d6c4f61be1819c07d30cdc043d1271c3b", "title": "Fracking Sarcasm using Neural Network", "text": "Precise semantic representation of a sentence and definitive information extraction are key steps in the accurate processing of sentence meaning, especially for figurative phenomena such as sarcasm, Irony, and metaphor cause literal meanings to be discounted and secondary or extended meanings to be intentionally profiled. Semantic modelling faces a new challenge in social media, because grammatical inaccuracy is commonplace yet many previous state-of-the-art methods exploit grammatical structure. For sarcasm detection over social media content, researchers so far have counted on Bag-of-Words(BOW), N-grams etc. In this paper, we propose a neural network semantic model for the task of sarcasm detection. We also review semantic modelling using Support Vector Machine (SVM) that employs constituency parsetrees fed and labeled with syntactic and semantic information. The proposed neural network model composed of Convolution Neural Network(CNN) and followed by a Long short term memory (LSTM) network and finally a Deep neural network(DNN). The proposed model outperforms state-of-the-art textbased methods for sarcasm detection, yielding an F-score of .92."}
{"_id": "0c93f357cfd0f985b4cc225fd2d0836531ed9d0a", "title": "Empirical Power Computation Using SAS \u00ae for Schuirmann \u2019 s Two One-Sided Tests Procedure in Clinical Pharmacokinetic Drug-Drug Interaction Studies", "text": "Drug-drug interaction studies are becoming increasingly crucial in drug development. An FDA survey of recent new drug application submissions showed that 70% of drug-drug interaction studies were conducted using fixedsequence designs. Schuirmann\u2019s Two One-sided Tests (TOST) procedure is the FDA preferred statistical method for evaluating drug-drug interaction. However, calculating the exact power of the TOST procedure generally requires sophisticated numerical integration. Monte Carlo simulation is a good alternative approach to the power calculation if a closed algerbraic formula is not available. This paper presents a SAS Macro to compute the empirical power of the TOST procedure under a fixed-sequence design. Empirical power curve plot and numerical table can be produced through this SAS macro."}
{"_id": "1cd8ee3bfead2964a3e4cc375123bb594949aa0b", "title": "Training verified learners with learned verifiers", "text": "This paper proposes a new algorithmic framework, predictor-verifier training, to train neural networks that are verifiable, i.e., networks that provably satisfy some desired input-output properties. The key idea is to simultaneously train two networks: a predictor network that performs the task at hand, e.g., predicting labels given inputs, and a verifier network that computes a bound on how well the predictor satisfies the properties being verified. Both networks can be trained simultaneously to optimize a weighted combination of the standard data-fitting loss and a term that bounds the maximum violation of the property. Experiments show that not only is the predictor-verifier architecture able to train networks to achieve state of the art verified robustness to adversarial examples with much shorter training times (outperforming previous algorithms on small datasets like MNIST and SVHN), but it can also be scaled to produce the first known (to the best of our knowledge) verifiably robust networks for CIFAR-10."}
{"_id": "501e412d6fc0fd2fcc6cd93bc21e100cb6ce3c0b", "title": "Fingerprint Template Protection: From Theory to Practice", "text": "One of the potential vulnerabilities in a biometric system is the leakage of biometric template information, which may lead to serious security and privacy threats. Most of the available template protection techniques fail to meet all the desired requirements of a practical biometric system like revocability, security, privacy, and high matching accuracy. In particular, protecting the fingerprint templates has been a difficult problem due to large intra-user variations (e.g., rotation, translation, nonlinear deformation, and partial prints). There are two fundamental challenges in any fingerprint template protection scheme. First, we need to select an appropriate representation scheme that captures most of the discriminatory information, but is sufficiently invariant to changes in finger placement and can be secured using available template protection algorithms. Secondly, we need to automatically align or register the fingerprints obtained during enrollment and matching without using any information that could reveal the features, which uniquely characterize a fingerprint. This chapter analyzes how these two challenges are being addressed in practice and how the design choices affect the trade-off between the security and matching accuracy. Though much progress has been made over the last decade, we believe that fingerprint template protection algorithms are still not sufficiently robust to be incorporated into practical fingerprint recognition systems. Anil K. Jain Department of Computer Science & Engineering, Michigan State University, East Lansing, MI 48824, USA and Department of Brain & Cognitive Engineering, Korea University, Seoul. e-mail: jain@cse.msu.edu, Karthik Nandakumar Institute for Infocomm Research, A*STAR, Fusionopolis, Singapore. e-mail: knandakumar@i2r.astar.edu.sg, Abhishek Nagar Department of Computer Science & Engineering, Michigan State University, East Lansing, MI 48824, USA. e-mail: nagarabh@cse.msu.edu"}
{"_id": "b8cb8ba4c4e1f2f7ede6a769ef83f20a5c99d640", "title": "Programming cells by multiplex genome engineering and accelerated evolution", "text": "The breadth of genomic diversity found among organisms in nature allows populations to adapt to diverse environments. However, genomic diversity is difficult to generate in the laboratory and new phenotypes do not easily arise on practical timescales. Although in vitro and directed evolution methods have created genetic variants with usefully altered phenotypes, these methods are limited to laborious and serial manipulation of single genes and are not used for parallel and continuous directed evolution of gene networks or genomes. Here, we describe multiplex automated genome engineering (MAGE) for large-scale programming and evolution of cells. MAGE simultaneously targets many locations on the chromosome for modification in a single cell or across a population of cells, thus producing combinatorial genomic diversity. Because the process is cyclical and scalable, we constructed prototype devices that automate the MAGE technology to facilitate rapid and continuous generation of a diverse set of genetic changes (mismatches, insertions, deletions). We applied MAGE to optimize the 1-deoxy-d-xylulose-5-phosphate (DXP) biosynthesis pathway in Escherichia coli to overproduce the industrially important isoprenoid lycopene. Twenty-four genetic components in the DXP pathway were modified simultaneously using a complex pool of synthetic DNA, creating over 4.3 billion combinatorial genomic variants per day. We isolated variants with more than fivefold increase in lycopene production within 3\u2009days, a significant improvement over existing metabolic engineering techniques. Our multiplex approach embraces engineering in the context of evolution by expediting the design and evolution of organisms with new and improved properties."}
{"_id": "42a6372f9e447d086f51b1a0ab629d5775fdc85d", "title": "Anti-inflammatory effect of Moringa oleifera Lam. seeds on acetic acid-induced acute colitis in rats", "text": "OBJECTIVE\nAnti-inflammatory, immuno-modulatory, and antioxidant properties of Moringa oleifera Lam. suggest that it might have beneficial effects on colitis. The present study was performed to investigate the anticolitis effect of Moringa oleifera seeds hydro-alcoholic extract (MSHE) and its chloroform fraction (MCF) on acetic acid-induced colitis in rats.\n\n\nMATERIALS AND METHODS\nBoth MSHE and MCF with three increasing doses (50, 100, and 200 mg/kg) were administered orally to separate groups of male Wistar rats, 2 h before ulcer induction (using acetic acid 4%) and continued for 5 days. Prednisolone (4 mg/kg) and normal saline (1 ml/kg) were used in reference and control groups, respectively. All rats were sacrificed 24 h after the last dose (at day 6) and tissue injuries were assessed macroscopically and pathologically.\n\n\nRESULTS\nExtracts with three doses mentioned before were effective to reduce weight of distal colon (8 cm) as a marker for inflammation and tissue edema. Three doses of MSHE and two greater doses of MCF (100 and 200 mg/kg) were effective to reduce ulcer severity, area, and index as well as mucosal inflammation severity and extent, crypt damage, invasion involvement, total colitis index, and MPO activity compared with controls. MCF (50 mg/kg) was not significantly effective in reducing evaluated parameters of colitis compared with controls.\n\n\nCONCLUSION\nIt is concluded that MSHE and MCF were both effective to treat experimental colitis and this might be attributed to their similar major components, biophenols and flavonoids. Since the efficacy was evident even in low doses of MSHE, presence of active constituents with high potency in seeds is persuasive."}
{"_id": "a252992313e9e022fb284d7caed05d3e2d607acb", "title": "A vision-based robotic grasping system using deep learning for 3D object recognition and pose estimation", "text": "Pose estimation of object is one of the key problems for the automatic-grasping task of robotics. In this paper, we present a new vision-based robotic grasping system, which can not only recognize different objects but also estimate their poses by using a deep learning model, finally grasp them and move to a predefined destination. The deep learning model demonstrates strong power in learning hierarchical features which greatly facilitates the recognition mission. We apply the Max-pooling Convolutional Neural Network (MPCNN), one of the most popular deep learning models, in this system, and assign different poses of objects as different classes in MPCNN. Besides, a new object detection method is also presented to overcome the disadvantage of the deep learning model. We have built a database comprised of 5 objects with different poses and illuminations for experimental performance evaluation. The experimental results demonstrate that our system can achieve high accuracy on object recognition as well as pose estimation. And the vision-based robotic system can grasp objects successfully regardless of different poses and illuminations."}
{"_id": "7e33e5ce94bcca51a1c33bcc7ab7f1382c8f7782", "title": "Faster discovery of faster system configurations with spectral learning", "text": "Despite the huge spread and economical importance of configurable software systems, there is unsatisfactory support in utilizing the full potential of these systems with respect to finding performance-optimal configurations. Prior work on predicting the performance of software configurations suffered from either (a) requiring far too many sample configurations or (b) large variances in their predictions. Both these problems can be avoided using the WHAT spectral learner. WHAT\u2019s innovation is the use of the spectrum (eigenvalues) of the distance matrix between the configurations of a configurable software system, to perform dimensionality reduction. Within that reduced configuration space, many closely associated configurations can be studied by executing only a few sample configurations. For the subject systems studied here, a few dozen samples yield accurate and stable predictors\u2014less than 10% prediction error, with a standard deviation of less than 2%. When compared to the state of the art, WHAT (a) requires 2\u201310 times fewer samples to achieve similar prediction accuracies, and (b) its predictions are more stable (i.e., have lower standard deviation). Furthermore, we demonstrate that predictive models generated by WHAT can be used by optimizers to discover system configurations that closely approach the optimal performance."}
{"_id": "48a35d457f8fc2e9b49679ed5b6c81a42c67d069", "title": "Intelligent video surveillance for monitoring fall detection of elderly in home environments", "text": "Video surveillance is an omnipresent topic when it comes to enhancing security and safety in the intelligent home environments. In this paper, we propose a novel method to detect various posture-based events in a typical elderly monitoring application in a home surveillance scenario. These events include normal daily life activities, abnormal behaviors and unusual events. Due to the fact that falling and its physical-psychological consequences in the elderly are a major health hazard, we monitor human activities with a particular interest to the problem of fall detection. Combination of best-fit approximated ellipse around the human body, projection histograms of the segmented silhouette and temporal changes of head position, would provide a useful cue for detection of different behaviors. Extracted feature vectors are fed to a MLP neural network for precise classification of motions and determination of fall event. Reliable recognition rate of experimental results underlines satisfactory performance of our system."}
{"_id": "4955a8cbb3aabbb4f83c3cc302a55897e976f784", "title": "Binomial filters", "text": "Binomial filters are simple and efficient structures based on the binomial coefficients for implementing Gaussian filtering. They do not require multipliers and can therefore be implemented efficiently in programmable hardware. There are many possible variations of the basic binomial filter structure, and they provide a wide range of space-time trade-offs; a number of these designs have been captured in a parametrised form and their features are compared. This technique can be used for multi-dimensional filtering, provided that the filter is separable. The numerical performance of binomial filters, and their implementation using field-programmable devices for an image processing application, are also discussed."}
{"_id": "1867459df0d6cd15d9c0bba81c35c4db6d813fec", "title": "Cyber Bullying Prevention: Intervention in Taiwan", "text": "BACKGROUND\nThis study aimed to explore the effectiveness of the cyber bullying prevention WebQuest course implementation.\n\n\nMETHODOLOGY/FINDINGS\nThe study adopted the quasi-experimental design with two classes made up of a total of 61 junior high school students of seventh grade. The study subjects comprised of 30 students from the experimental group and 31 students from the control group. The experimental group received eight sessions (total 360 minutes) of the teaching intervention for four consecutive weeks, while the control group did not engage in any related courses. The self-compiled questionnaire for the student's knowledge, attitudes, and intentions toward cyber bullying prevention was adopted. Data were analysed through generalized estimating equations to understand the immediate results on the student's knowledge, attitudes, and intentions after the intervention. The results show that the WebQuest course immediately and effectively enhanced the knowledge of cyber bullying, reduced the intentions, and retained the effects after the learning. But it produced no significant impact on the attitude toward cyber bullying.\n\n\nCONCLUSIONS/SIGNIFICANCE\nThe intervention through this pilot study was effective and positive for cyber bulling prevention. It was with small number of students. Therefore, studies with large number of students and long experimental times, in different areas and countries are warranted."}
{"_id": "05b18ede7f46c29c2722fed3376d277a1d286c55", "title": "Applications of Dual Quaternions in Three Dimensional Transformation and Interpolation", "text": "Quaternions have long been integral to the field of computer graphics, due to their minimal and robust representation of rotations in three dimensional space. Dual quaternions represent a compact method of representing rigid body transformations (that is rotations and translations) with similar interpolation and combination properties. By comparing them to two other kinds of rigid transformations, we examine their properties and evaluate their usefulness in a real time environment. These properties include accuracy of operations, efficiency of operations, and the paths that interpolation and blending methods using those transformation methods take. The blending and interpolation methods are of particular interest as we constructed a skeletal animation system to highlight a potential application of dual quaternions. The bone hierarchy was constructed with dual quaternions and a sequence of identical hierarchies with different transformations at each bone can be interpolated as though they were keyframes to produce animations. Weighted transformations required in skinning the skeleton structure to a triangular mesh also prove an effective application of dual quaternions. Our findings show that while dual quaternions are useful in the context of skeletal animation, other applications may favour other representations, due to simplicity or speed."}
{"_id": "889e6ee95eba2d0ce8535b8e9b7bd61914de5946", "title": "Depth camera based indoor mobile robot localization and navigation", "text": "The sheer volume of data generated by depth cameras provides a challenge to process in real time, in particular when used for indoor mobile robot localization and navigation. We introduce the Fast Sampling Plane Filtering (FSPF) algorithm to reduce the volume of the 3D point cloud by sampling points from the depth image, and classifying local grouped sets of points as belonging to planes in 3D (the \u201cplane filtered\u201d points) or points that do not correspond to planes within a specified error margin (the \u201coutlier\u201d points). We then introduce a localization algorithm based on an observation model that down-projects the plane filtered points on to 2D, and assigns correspondences for each point to lines in the 2D map. The full sampled point cloud (consisting of both plane filtered as well as outlier points) is processed for obstacle avoidance for autonomous navigation. All our algorithms process only the depth information, and do not require additional RGB data. The FSPF, localization and obstacle avoidance algorithms run in real time at full camera frame rates (30Hz) with low CPU requirements (16%). We provide experimental results demonstrating the effectiveness of our approach for indoor mobile robot localization and navigation. We further compare the accuracy and robustness in localization using depth cameras with FSPF vs. alternative approaches that simulate laser rangefinder scans from the 3D data."}
{"_id": "c6a25f214e4c2fa100697235d62d78759a2fbf85", "title": "1 TOURISM ONTOLOGY AND SEMANTIC MANAGEMENT SYSTEM : STATE-OFTHE-ARTS ANALYSIS", "text": "The global importance of tourism is steadily rising,creating new job opportunities in many countries. Today\u2019s information management solutions for the complex tasks of tourism intermediaries are still at an early stage from a semantic point of view. This paper presents some preliminary results of OnTourism Project. OnTourism is aimed at (1) applying, concretizing and evaluating Semantic Web technologies such as ontologies, semantic annotation of content, and semantic search to the information-rich and economically important tourism domain, (2) identifying, developing and integrating reference ontologies for the tourism industry, and (3) showing the proof-of-concept in a real-world scenario of the Austrian tourism industry. First results presented in this paper identify publicly available tourism ontologies and existing freely available ontology management tools for the tourism domain. We identify seven tourism ontologies which are suitable as a basis for creating problem-specific ontologies. Furthermore we review and evaluate five freely available ontology management tools that are suited for application in the tourism domain."}
{"_id": "7a2fc025463d03b17a1d0fa4941b00db3ce71f26", "title": "Compact Personalized Models for Neural Machine Translation", "text": "We propose and compare methods for gradientbased domain adaptation of self-attentive neural machine translation models. We demonstrate that a large proportion of model parameters can be frozen during adaptation with minimal or no reduction in translation quality by encouraging structured sparsity in the set of offset tensors during learning via group lasso regularization. We evaluate this technique for both batch and incremental adaptation across multiple data sets and language pairs. Our system architecture\u2014combining a state-of-the-art self-attentive model with compact domain adaptation\u2014provides high quality personalized machine translation that is both space and time efficient."}
{"_id": "98c548a4be0d3b62971e75259d7514feab14f884", "title": "Deep generative-contrastive networks for facial expression recognition", "text": "As the expressive depth of an emotional face differs with individuals or expressions, recognizing an expression using a single facial image at a moment is difficult. A relative expression of a query face compared to a reference face might alleviate this difficulty. In this paper, we propose to utilize contrastive representation that embeds a distinctive expressive factor for a discriminative purpose. The contrastive representation is calculated at the embedding layer of deep networks by comparing a given (query) image with the reference image. We attempt to utilize a generative reference image that is estimated based on the given image. Consequently, we deploy deep neural networks that embed a combination of a generative model, a contrastive model, and a discriminative model with an end-to-end training manner. In our proposed networks, we attempt to disentangle a facial expressive factor in two steps including learning of a generator network and a contrastive encoder network. We conducted extensive experiments on publicly available face expression databases (CK+, MMI, Oulu-CASIA, and in-the-wild databases) that have been widely adopted in the recent literatures. The proposed method outperforms the known state-of-the art methods in terms of the recognition accuracy."}
{"_id": "8f67e7287f545f644abb9086eb33e25f978bb87e", "title": "Continuous noninvasive glucose monitoring technology based on \"occlusion spectroscopy\".", "text": "BACKGROUND\nA truly noninvasive glucose-sensing device could revolutionalize diabetes treatment by leading to improved compliance with recommended glucose levels, thus reducing the long-term complications and cost of diabetes. Herein, we present the technology and evaluate the efficacy of a truly noninvasive device for continuous blood glucose monitoring, the NBM (OrSense Ltd.).\n\n\nMETHODS\nIn vitro analysis was used to validate the technology and algorithms. A clinical study was performed to quantify the in vivo performance of the NBM device. A total of 23 patients with type 1 (n = 12) and type 2 (n = 11) diabetes were enrolled in the clinical study and participated in 111 sessions. Accuracy was assessed by comparing NBM data with paired self-monitoring of blood glucose meter readings.\n\n\nRESULTS\nIn vitro experiments showed a strong correlation between calculated and actual glucose concentrations. The clinical trial produced a total of 1690 paired glucose values (NBM vs reference). In the paired data set, the reference glucose range was 40-496 mg/dl. No systematic bias was found at any of the glucose levels examined (70, 100, 150, and 200 mg/dl). The mean relative absolute difference was 17.2%, and a Clarke error grid analysis showed that 95.5% of the measurements fall within the clinically acceptable A&B regions (zone A, 69.7%; and zone B, 25.7%).\n\n\nCONCLUSIONS\nThis study indicates the potential use of OrSense's NBM device as a noninvasive sensor for continuous blood glucose evaluation. The device was safe and well tolerated."}
{"_id": "dd7b3d17f622afb950b9d593353e9bdd83fa5dbb", "title": "Extra-Specific Multiword Expressions for Language-Endowed Intelligent Agents", "text": "Language-endowed intelligent agents benefit from leveraging lexical knowledge falling at different points along a spectrum of compositionality. This means that robust computational lexicons should include not only the compositional expectations of argument-taking words, but also non-compositional collocations (idioms), semi-compositional collocations that might be difficult for an agent to interpret (e.g., standard metaphors), and even collocations that could be compositionally analyzed but are so frequently encountered that recording their meaning increases the efficiency of interpretation. In this paper we argue that yet another type of stringto-meaning mapping can also be useful to intelligent agents: remembered semantic analyses of actual text inputs. These can be viewed as super-specific multi-word expressions whose recorded interpretations mimic a person\u2019s memories of knowledge previously learned from language input. These differ from typical annotated corpora in two ways. First, they provide a full, context-sensitive semantic interpretation rather than select features. Second, they are are formulated in the ontologically-grounded metalanguage used in a particular agent environment, meaning that the interpretations contribute to the dynamically evolving cognitive capabilites of agents configured in that environment."}
{"_id": "792f19f137323e335b144e548465d483b9786068", "title": "Lifelong Multi-Agent Path Finding for Online Pickup and Delivery Tasks", "text": "The multi-agent path-finding (MAPF) problem has recently received a lot of attention. However, it does not capture important characteristics of many real-world domains, such as automated warehouses, where agents are constantly engaged with new tasks. In this paper, we therefore study a lifelong version of the MAPF problem, called the multiagent pickup and delivery (MAPD) problem. In the MAPD problem, agents have to attend to a stream of delivery tasks in an online setting. One agent has to be assigned to each delivery task. This agent has to first move to a given pickup location and then to a given delivery location while avoiding collisions with other agents. We present two decoupled MAPD algorithms, Token Passing (TP) and Token Passing with Task Swaps (TPTS). Theoretically, we show that they solve all well-formed MAPD instances, a realistic subclass of MAPD instances. Experimentally, we compare them against a centralized strawman MAPD algorithm without this guarantee in a simulated warehouse system. TP can easily be extended to a fully distributed MAPD algorithm and is the best choice when real-time computation is of primary concern since it remains efficient for MAPD instances with hundreds of agents and tasks. TPTS requires limited communication among agents and balances well between TP and the centralized MAPD algorithm."}
{"_id": "2f77f12249c5ca4f1935b73f0005f2ad26d391aa", "title": "A trust-risk perspective on social commerce use: an examination of the biasing role of habit", "text": "Purpose \u2013 Social commerce websites have emerged as new platforms which integrate social media features with traditional commerce aspects to enhance users\u2019 purchasing experience. The purpose of this paper is to examine the role of social factors such as trust toward site members in determining users\u2019 trust and risk evaluations, and the role of social commerce use habit in attenuating users\u2019 rational risk and trust considerations for developing purchase intentions. Design/methodology/approach \u2013 Relying on the risk deterrence perspective and rational decision-making models involving trust and habit, this study proposes a set of hypotheses which are tested through analyzing survey data using structural equation modeling techniques. Findings \u2013 Results show that commerce risk deters purchasing intentions; trust toward the social commerce website increases users\u2019 purchasing intentions; and trust toward the site members indirectly increases purchasing intentions. Moreover, trust toward site members reduces perceived commerce risk. Findings also show that habit modulates trust and risk effects on use decisions in this context; habit moderates (weakens) the relationships between commerce risk and purchase intentions and between trust toward the social commerce site and purchase intentions. Originality/value \u2013 This study extends theories on decision making in social settings such as in the case of social commerce. It does so by accounting for unique modulating effects of habit in social settings in which social aspects such as trust in other members and risk are unique and important."}
{"_id": "dae8e198722778ba192eb4b553658a3522e9cfdd", "title": "IEEE 802 . 11 ah : A Long Range 802 . 11 WLAN at Sub 1 GHz", "text": "IEEE 802.11ah is an emerging Wireless LAN (WLAN) standard that defines a WLAN system operating at sub 1 GHz license-exempt bands. Thanks to the favorable propagation characteristics of the low frequency spectra, 802.11ah can provide much improved transmission range compared with the conventional 802.11 WLANs operating at 2.4 GHz and 5 GHz bands. 802.11ah can be used for various purposes including large scale sensor networks, extended range hotspot, and outdoor Wi-Fi for cellular traffic offloading, whereas the available bandwidth is relatively narrow. In this paper, we give a technical overview of 802.11ah Physical (PHY) layer and Medium Access Control (MAC) layer. For the 802.11ah PHY, which is designed based on the down-clocked operation of IEEE 802.11ac\u2019s PHY layer, we describe its channelization and transmission modes. Besides, 802.11ah MAC layer has adopted some enhancements to fulfill the expected system requirements. These enhancements include the improvement of power saving features, support of large number of stations, efficient medium access mechanisms and throughput enhancements by greater compactness of various frame formats. Through the numerical analysis, we evaluate the transmission range for indoor and outdoor environments and the theoretical throughput with newly defined channel access mechanisms."}
{"_id": "2e151b36863baaca9e9c274973842f9ce428badf", "title": "Hypertool: A Programming Aid for Message-Passing Systems", "text": "|As both the number of processors and the complexity of problems to be solved increase, programming multiprocessing systems becomes more di cult and errorprone. This paper discusses programming assistance and automation concepts and their application to a program development tool for message-passing systems called Hypertool. It performs scheduling and handles the communication primitive insertion automatically. Two algorithms, based on the critical-path method, are presented for scheduling processes statically. Hypertool also generates the performance estimates and other program quality measures to help programmers in improving their algorithms and programs."}
{"_id": "95e59a42a690a932ce94ed3daf78e8a461795718", "title": "A Taxonomy of Scheduling in General-Purpose Distributed Computing Systems", "text": "One measure of usefulness of a general-purpose distributed computing system is the system\u2019s ability to provide a level of performance commensurate to the degree of multiplicity of resources present in the system. Many different approaches and metrics of performance have been proposed in an attempt to achieve this goal in existing systems. In addition, analogous problem formulations exist in other fields such as control theory, operations research, and production management. However, due to the wide variety of approaches to this problem, it is difficult to meaningfully compare different systems since there is no uniform means for qualitatively or quantitatively evaluating them. It is difficult to successfully build upon existing work or identify areas worthy of additional effort without some understanding of the relationships between past efforts. In this paper, a taxonomy of approaches to the resource management problem is presented in an attempt to provide a common terminology and classification mechanism necessary in addressing this problem. The taxonomy, while presented and discussed in terms of distributed scheduling, is also applicable to most types of resource management. As an illustration of the usefulness of the taxonomy an annotated bibliography is given which classifies a large number of distributed scheduling approaches according to the taxonomy."}
{"_id": "421189db3c9a8ad3b84750bc3e850b9cf76e6d45", "title": "A Microprocessor-based Hypercube Supercomputer", "text": "Each node in the NCUBE/ten parallel processor is organized around a custom, VAX-like, 32-bit CPU chip. With 1024 nodes, the NCUBE/ten provides a throughput of 500 MELOPS."}
{"_id": "7dad526fc681a0808b0163c29d8e4bd92785b952", "title": "Scheduling Precedence Graphs in Systems with Interprocessor Communication Times", "text": "The problem of nonpreemptively scheduling a set of rn partially ordered tasks on n identical processors subject to interprocessor communication delays is studied in an effort to minimize the makespan. A new heuristic, called Earliest Task First (ETF), is designed and analyzed. It is shown that the makespan wEx-generated by ETF always satisfies to Eq.. (2 1/n wopti) + C, where Wopti) is the optimal makespan without considering communication delays and C is the communication requirements over some immediate predecessor-immediate successor pairs along one chain. An algorithm is also provided to calculate C. The time complexity of Algorithm ETF is O(nm2)."}
{"_id": "0bc1ad4f69233fce02ac03c19ae72b01115c5b39", "title": "GIPSY: Automated Geographic Indexing of Text Documents", "text": "In this paper we present an algorithm which automatically extracts geopositional coordinate index terms from text to support georeferenced document indexing and retrieval. Under this algorithm, words and phrases containing geographic place names or characteristics are extracted from a text document and used as input to database functions which use spatial reasoning to approximate statistically the geoposition being referenced in the text. We conclude with a discussion of preliminary results and future work."}
{"_id": "8e44163452e3414ab7d69ccb207bb9cec40118f7", "title": "Hybrid supervised clustering based ensemble scheme for text classification", "text": "PurposeThe immense quantity of available unstructured text documents serve as one of the largest source of information. Text classification can be an essential task for many purposes in information retrieval, such as document organization, text filtering and sentiment analysis. Ensemble learning has been extensively studied to construct efficient text classification schemes with higher predictive performance and generalization ability. Providing diversity among the classification algorithms is a key issue in the ensemble design. Design/methodology/approachAn ensemble scheme based on hybrid supervised clustering is presented for text classification. In the presented scheme, supervised hybrid clustering, which is based on cuckoo search algorithm and k-means, is introduced to partition the data samples of each class into clusters so that training subsets with higher diversities can be provided. Each classifier is trained on the diversified training subsets and the predictions of individual classifiers are combined by the majority voting rule. The predictive performance of the proposed classifier ensemble is compared to conventional classification algorithms (such as Na\u00efve Bayes, logistic regression, support vector machines and C4.5 algorithm) and ensemble learning methods (such as AdaBoost, Bagging and Random Subspace) using eleven text benchmarks. FindingThe experimental results indicate that the presented classifier ensemble outperforms the conventional classification algorithms and ensemble learning methods for text classification. Originality/valueThe presented ensemble scheme is the first to use supervised clustering to obtain diverse ensemble for text classification"}
{"_id": "a93322db50d0847d64446a71a5d18dbbbf7fb724", "title": "Web evaluation: heuristic evaluation vs. user testing.", "text": "It is very important that designers recognize the benefits and limitations of different usability inspection methods. This is because the quality of the usability evaluation is dependent on the method used. Two of the most popular usability evaluation techniques are user testing and heuristic analysis. The main objective of this study was to compare the efficiency and effectiveness between user testing and heuristic analysis in evaluating four different commercial web sites. The results showed that both user testing and heuristic analysis addressed different usability problems. Analysis by severity of problems found and diminishing return analysis model on the relationship between the number of new problems discovered with users and evaluators used showed that both methods are equally efficient and effective in addressing different categories of usability problems. These significant differences found between these two methods suggested that the two methods are complimentary and should not be competing. In order for better evaluation results, both user testing and heuristic analysis are still needed. Relevance to industry: The research findings from this study will be of particular value to the web development industry and communities. Knowledge regarding the differences between user testing and heuristic evaluation will enable appropriate business decisions to be made on when and how to apply these methods to improve the overall efficiency of the design process. Published by Elsevier B.V."}
{"_id": "479b0b7a912083105c2152a95e947094dacd5cb4", "title": "Automated Evaluation of Out-of-Context Errors", "text": "We present a new approach to evaluate computational models for the task of text understanding by the means of out-of-context error detection. Through the novel design of our automated modification process, existing large-scale data sources can be adopted for a vast number of text understanding tasks. The data is thereby altered on a semantic level, allowing models to be tested against a challenging set of modified text passages that require to comprise a broader narrative discourse. Our newly introduced task targets actual real-world problems of transcription and translation systems by inserting authentic out-of-context errors. The automated modification process is applied to the 2016 TEDTalk corpus. Entirely automating the process allows the adoption of complete datasets at low cost, facilitating supervised learning procedures and deeper networks to be trained and tested. To evaluate the quality of the modification algorithm a language model and a supervised binary classification model are trained and tested on the altered dataset. A human baseline evaluation is examined to compare the results with human performance. The outcome of the evaluation task indicates the difficulty to detect semantic errors for machine-learning algorithms and humans, showing that the errors cannot be identified when limited to a single sentence."}
{"_id": "bec3c3e6bb9c738dad942f00fc69848018c3b1cc", "title": "Part-Activated Deep Reinforcement Learning for Action Prediction", "text": "In this paper, we propose a part-activated deep reinforcement learning (PA-DRL) method for action prediction. Most existing methods for action prediction utilize the evolution of whole frames to model actions, which cannot avoid the noise of the current action, especially in the early prediction. Moreover, the loss of structural information of human body diminishes the capacity of features to describe actions. To address this, we design the PA-DRL to exploit the structure of the human body by extracting skeleton proposals under a deep reinforcement learning framework. Specifically, we extract features from different parts of the human body individually and activate the action-related parts in features to enhance the representation. Our method not only exploits the structure information of the human body, but also considers the saliency part for expressing actions. We evaluate our method on three popular action prediction datasets: UT-Interaction, BIT-Interaction and UCF101. Our experimental results demonstrate that our method achieves the performance with state-of-the-arts."}
{"_id": "254eada1041dd64c9635234b69481890d8558b98", "title": "Design patterns: elements of reuseable object-oriented software", "text": "Factory (87) Provide an interface for creating families of related or dependent objects without specifying their concrete classes. Adapter (139) Convert the interface of a class into another interface clients expect. Adapter lets classes work together that couldn't otherwise because of incompatible interfaces. Bridge (151) Decouple an abstraction from its implementation so that the two can vary independently. Builder (97) Separate the construction of a complex object from its representation so that the same construction process can create different representations. Chain of Responsibility (223) Avoid coupling the sender of a request to its receiver by giving more than one object a chance to handle the request. Chain the receiving objects and pass the request along the chain until an object handles it. Command (233) Encapsulate a request as an object, thereby letting you parameterize clients with different requests, queue or log requests, and support undoable operations. Composite (163) Compose objects into tree structures to represent part-whole hierarchies. Composite lets clients treat individual objects and compositions of objects uniformly. Decorator (175) Attach additional responsibilities to an object dynamically. Decorators provide a flexible alternative to subclassing for extending functionality. Facade (185) Provide a unified interface to a set of interfaces in a subsystem. Facade defines a higher-level interface that makes the subsystem easier to use. Factory Method (107) Define an interface for creating an object, but let subclasses decide which class to instantiate. Factory Method lets a class defer instantiation to subclasses. Flyweight (195) Use sharing to support large numbers of fine-grained objects efficiently. Interpreter (243) Given a language, define a represention for its grammar along with an interpreter that uses the representation to interpret sentences in the language. Iterator (257) Provide a way to access the elements of an aggregate object sequentially without exposing its underlying representation. Mediator (273) Define an object that encapsulates how a set of objects interact. Mediator promotes loose coupling by keeping objects from referring to each other explicitly, and it lets you vary their interaction independently. Memento (283) Without violating encapsulation, capture and externalize an object's internal state so that the object can be restored to this state later. Observer (293) Define a one-to-many dependency between objects so that when one object changes state, all its dependents are notified and updated automatically. Prototype (117) Specify the kinds of objects to create using a prototypical instance, and create new objects by copying this prototype. Proxy (207) Provide a surrogate or placeholder for another object to control access to it. Singleton (127) Ensure a class only has one instance, and provide a global point of access to it. State (305) Allow an object to alter its behavior when its internal state changes. The object will appear to change its class. Strategy (315) Define a family of algorithms, encapsulate each one, and make them interchangeable. Strategy lets the algorithm vary independently from clients that use it. Template Method (325) Define the skeleton of an algorithm in an operation, deferring some steps to subclasses. 
Template Method lets subclasses redefine certain steps of an algorithm without changing the algorithm's structure. Visitor (331) Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates. Organizing the Catalog Design patterns vary in their granularity and level of abstraction. Because there are many design patterns, we need a way to organize them. This section classifies design patterns so that we can refer to families of related patterns. The classification helps you learn the patterns in the catalog faster, and it can direct efforts to find new patterns as well. We classify design patterns by two criteria (Table 1.1). The first criterion, called purpose, reflects what a pattern does. Patterns can have either creational, structural, or behavioral purpose. Creational patterns concern the process of object creation. Structural patterns deal with the composition of classes or objects. Behavioral patterns characterize the ways in which classes or objects interact and distribute responsibility."}
{"_id": "813cbb851da72df44e9b6cd322549aa711e7195d", "title": "Double-sided cooling and thermo-electrical management of power transients for silicon chips on DCB-substrates for converter applications: Design, technology and test", "text": "This paper deals with the system design, technology and test of a novel concept of integrating Si and SiC power dies along with thermo-electric coolers in order to thermally manage transients occurring during operation. The concept features double-sided cooling as well as new materials and joining technologies to integrate the dies such as transient liquid phase bonding/soldering and sintering. Coupled-field simulations are used to predict thermal performance and are verified by especially designed test stands to very good agreement. This paper is the second in a series of publications on the ongoing work."}
{"_id": "5b8189d15c6c0ccafcbea94187f188ce09153aad", "title": "Markov Decision Processes With Applications in Wireless Sensor Networks: A Survey", "text": "Wireless sensor networks (WSNs) consist of autonomous and resource-limited devices. The devices cooperate to monitor one or more physical phenomena within an area of interest. WSNs operate as stochastic systems because of randomness in the monitored environments. For long service time and low maintenance cost, WSNs require adaptive and robust methods to address data exchange, topology formulation, resource and power optimization, sensing coverage and object detection, and security challenges. In these problems, sensor nodes are used to make optimized decisions from a set of accessible strategies to achieve design goals. This survey reviews numerous applications of the Markov decision process (MDP) framework, a powerful decision-making tool to develop adaptive algorithms and protocols for WSNs. Furthermore, various solution methods are discussed and compared to serve as a guide for using MDPs in WSNs."}
{"_id": "12161e5c5daa594fdab296356b3424c3bf4c8e9e", "title": "Production-run software failure diagnosis via hardware performance counters", "text": "Sequential and concurrency bugs are widespread in deployed software. They cause severe failures and huge financial loss during production runs. Tools that diagnose production-run failures with low overhead are needed. The state-of-the-art diagnosis techniques use software instrumentation to sample program properties at run time and use off-line statistical analysis to identify properties most correlated with failures. Although promising, these techniques suffer from high run-time overhead, which is sometimes over 100%, for concurrency-bug failure diagnosis and hence are not suitable for production-run usage.\n We present PBI, a system that uses existing hardware performance counters to diagnose production-run failures caused by sequential and concurrency bugs with low overhead. PBI is designed based on several key observations. First, a few widely supported performance counter events can reflect a wide variety of common software bugs and can be monitored by hardware with almost no overhead. Second, the counter overflow interrupt supported by existing hardware and operating systems provides a natural and effective mechanism to conduct event sampling at user level. Third, the noise and non-determinism in interrupt delivery complements well with statistical processing.\n We evaluate PBI using 13 real-world concurrency and sequential bugs from representative open-source server, client, and utility programs, and 10 bugs from a widely used software-testing benchmark. Quantitatively, PBI can effectively diagnose failures caused by these bugs with a small overhead that is never higher than 10%. Qualitatively, PBI does not require any change to software and presents a novel use of existing hardware performance counters."}
{"_id": "5817f5d2288e2eacff71704eb940d165e5f07d50", "title": "Predicting Online Extremism, Content Adopters, and Interaction Reciprocity", "text": "We present a machine learning framework that leverages a mixture of metadata, network, and temporal features to detect extremist users, and predict content adopters and interaction reciprocity in social media. We exploit a unique dataset containing millions of tweets generated by more than 25 thousand users who have been manually identified, reported, and suspended by Twitter due to their involvement with extremist campaigns. We also leverage millions of tweets generated by a random sample of 25 thousand regular users who were exposed to, or consumed, extremist content. We carry out three forecasting tasks, (i) to detect extremist users, (ii) to estimate whether regular users will adopt extremist content, and finally (iii) to predict whether users will reciprocate contacts initiated by extremists. All forecasting tasks are set up in two scenarios: a post hoc (time independent) prediction task on aggregated data, and a simulated real-time prediction task. The performance of our framework is extremely promising, yielding in the different forecasting scenarios up to 93% AUC for extremist user detection, up to 80% AUC for content adoption prediction, and finally up to 72% AUC for interaction reciprocity forecasting. We conclude by providing a thorough feature analysis that helps determine which are the emerging signals that provide predictive power in different scenarios."}
{"_id": "c8423def833791630c3fb7d26f85306625a25c71", "title": "The Moderating Role of Organizational Culture in the Relationship between Power, Trust, and eSCMS Adoption Intention", "text": "Building on multiple theoretical perspectives, we examined how organizational culture moderates the relationship between power, trust, and a firm\u2019s eSCMS adoption intention. We tested the hypotheses using survey data collected from senior executives in China. Our findings reveal that a target firm\u2019s perceived mediated power would negatively impact its trust toward a dominant firm, while its perceived non-mediated power would positively affect its trust. Meanwhile, trust can positively influence the target firm\u2019s eSCMS adoption intention. Further, an internally focused culture weakens the negative effects of mediated power on trust. Meanwhile, an externally focused culture weakens the positive relationships between non-mediated power and trust, and between trust and eSCMS adoption intention. The externally focused culture could weaken the negative relationship between mediated power and trust either. The theoretical contributions and managerial implications of the study are discussed."}
{"_id": "0367903f0fd45046f67a1b52702cc80926c0da40", "title": "An Injectable 64 nW ECG Mixed-Signal SoC in 65 nm for Arrhythmia Monitoring", "text": "A syringe-implantable electrocardiography (ECG) monitoring system is proposed. The noise optimization and circuit techniques in the analog front-end (AFE) enable 31 nA current consumption while a minimum energy computation approach in the digital back-end reduces digital energy consumption by 40%. The proposed SoC is fabricated in 65 nm CMOS and consumes 64 nW while successfully detecting atrial fibrillation arrhythmia and storing the irregular waveform in memory in experiments using an ECG simulator, a live sheep, and an isolated sheep heart."}
{"_id": "0a245098455a6663f922a83d318f7b61d357ab1f", "title": "Convolutional deep maxout networks for phone recognition", "text": "Convolutional neural networks have recently been shown to outperform fully connected deep neural networks on several speech recognition tasks. Their superior performance is due to their convolutional structure that processes several, slightly shifted versions of the input window using the same weights, and then pools the resulting neural activations. This pooling operation makes the network less sensitive to translations. The convolutional network results published up till now used sigmoid or rectified linear neurons. However, quite recently a new type of activation function called the maxout activation has been proposed. Its operation is closely related to convolutional networks, as it applies a similar pooling step, but over different neurons evaluated on the same input. Here, we combine the two technologies, and experiment with deep convolutional neural networks built from maxout neurons. Phone recognition tests on the TIMIT database show that switching to maxout units from rectifier units decreases the phone error rate for each network configuration studied, and yields relative error rate reductions of between 2% and 6%."}
{"_id": "09d7d6c44f3a54d62abb95b79c632e875c698bc8", "title": "Automatic quantification of crack patterns by image processing", "text": "Image processing technologies are proposed to quantify crack patterns. On the basis of the technologies, a software \u201cCrack Image Analysis System\u201d (CIAS) has been developed. An image of soil crack network is used as an example to illustrate the image processing technologies and the operations of the CIAS. The quantification of the crack image involves the following three steps: image segmentation, crack identification and measurement. First, the image is converted to a binary image using a cluster analysis method; noise in the binary image is removed; and crack spaces are fused. Then, the medial axis of the crack network is extracted from the binary image, with which nodes and crack segments can be identified. Finally, various geometric parameters of the crack network can be calculated automatically, such as node number, crack number, clod area, clod perimeter, crack area, width, length, and direction. The thresholds used in the operations are specified by cluster analysis and other innovative methods. As a result, the objects (nodes, cracks and clods) in the crack network can be quantified automatically. The software may be used to study the generation and development of soil crack patterns and rock fractures. & 2013 Elsevier Ltd. All rights reserved."}
{"_id": "60a7338cc9e54dd0eb6a19f1b11022af3375e392", "title": "BuzzSaw at SemEval-2017 Task 7: Global vs. Local Context for Interpreting and Locating Homographic English Puns with Sense Embeddings", "text": "This paper describes our system participating in the SemEval-2017 Task 7, for the subtasks of homographic pun location and homographic pun interpretation. For pun interpretation, we use a knowledgebased Word Sense Disambiguation (WSD) method based on sense embeddings. Punbased jokes can be divided into two parts, each containing information about the two distinct senses of the pun. To exploit this structure we split the context that is input to the WSD system into two local contexts and find the best sense for each of them. We use the output of pun interpretation for pun location. As we expect the two meanings of a pun to be very dissimilar, we compute sense embedding cosine distances for each sense-pair and select the word that has the highest distance. We describe experiments on different methods of splitting the context and compare our method to several baselines. We find evidence supporting our hypotheses and obtain competitive results for pun interpretation."}
{"_id": "a166168505d15a2f7477f23ebd4c843388647fd4", "title": "An efficient probabilistic occupancy map-based people localization approach", "text": "The widespread use of vision-based video surveillance systems has inspired many research efforts on people localization. One of the current main trends in this field is based on probabilistic occupancy map (POM) obtained from multiple camera views. Although the POM-based approaches are robust against noisy foregrounds and can achieve great localization accuracy, they require high computation complexity. In this paper, two enhancement schemes are proposed to improve the efficiency of the POM-based people localization: (i) quick screening of potential people locations, and (ii) timely termination of iterations for occupancy probability estimation. Experimental results show that the proposed approach achieves up to 7.25 times speed-up compared to the standard POM-based approach, while delivering comparable people localization accuracy."}
{"_id": "20b4e8135314cee0af40d48158b3bf8535417227", "title": "Tracing female gamer identity. An empirical study into gender and stereotype threat perceptions", "text": "It seems that women stand outside game culture resulting in a low gamer identity profile. A nuanced and detailed examination of how gender identity and threatening experiences tap into their play practices has hitherto been lacking however. The present study fills this gap by examining how female players express a gamer identity and how this relates to perceptions of threat and stigmatization. Based on a large-scale survey directed at female players, a statistical model is specified taking into account how respondents attribute a gamer label to their self-concept. Results suggest that the cognitive, evaluative, and affective dimensions of female identity predict gamer identification in distinct ways. Whereas the mere cognitive process of categorizing as a woman is not related to gamer identity, women who feel closely connected to other women are less inclined to self-identify as gamer. However, group appraisal of womanhood is positively related to identifying as a gamer. When taking into account stigmatization, experienced discrimination by male players seems to discourage women to label themselves as gamer. \u00a9 2017 Elsevier Ltd. All rights reserved."}
{"_id": "9d1f478173839ca0e15bd9e797f51986ee1685de", "title": "A New Compact Microstrip-Fed Dual-Band Coplanar Antenna for WLAN Applications", "text": "A novel compact microstrip fed dual-band coplanar antenna for wireless local area network is presented. The antenna comprises of a rectangular center strip and two lateral strips printed on a dielectric substrate and excited using a 50 Omega microstrip transmission line. The antenna generates two separate resonant modes to cover 2.4/5.2/5.8 GHz WLAN bands. Lower resonant mode of the antenna has an impedance bandwidth (2:1 VSWR) of 330 MHz (2190-2520 MHz), which easily covers the required bandwidth of the 2.4 GHz WLAN, and the upper resonant mode has a bandwidth of 1.23 GHz (4849-6070 MHz), covering 5.2/5.8 GHz WLAN bands. The proposed antenna occupy an area of 217 mm2 when printed on FR4 substrate (epsivr=4.7). A rigorous experimental study has been conducted to confirm the characteristics of the antenna. Design equations for the proposed antenna are also developed"}
{"_id": "1e095362f820077f60994daa60f6d0a5278fd048", "title": "Privacy-Preserving Multiple Linear Regression of Vertically Partitioned Real Medical Datasets", "text": "This paper studies the feasibility of privacy-preservingdata mining in epidemiological study. As for the data-miningalgorithm, we focus to a linear multiple regression thatcan be used to identify the most significant factorsamong many possible variables, such as the historyof many diseases. We try to identify the linear model to estimate a lengthof hospital stay from distributed dataset related tothe patient and the disease information. In this paper, we have done experiment usingthe real medical dataset related to stroke andattempt to apply multiple regression with sixpredictors of age, sex, the medical scales, e.g., Japan Coma Scale, and the modified Rankin Scale. Our contributions of this paper include(1) to propose a practical privacy-preserving protocols for linear multiple regressionwith vertically partitioned datasets, and(2) to show the feasibility of the proposed system usingthe real medical dataset distributed into two parties, the hospital who knows the technical details of diseasesduring the patients are in the hospital, and the local government who knows the residence even afterthe patients left hospital. (3) to show the accuracy and the performance of thePPDM system which allows us to estimate the expectedprocessing time with arbitrary number of predictors."}
{"_id": "46149a6ebda78923a4d0b79e5276b3bb508d89a5", "title": "Local potential connectivity in cat primary visual cortex.", "text": "Time invariant description of synaptic connectivity in cortical circuits may be precluded by the ongoing growth and retraction of dendritic spines accompanied by the formation and elimination of synapses. On the other hand, the spatial arrangement of axonal and dendritic branches appears stable. This suggests that an invariant description of connectivity can be cast in terms of potential synapses, which are locations in the neuropil where an axon branch of one neuron is proximal to a dendritic branch of another neuron. In this paper, we attempt to reconstruct the potential connectivity in local cortical circuits of the cat primary visual cortex (V1). Based on multiple single-neuron reconstructions of axonal and dendritic arbors in 3 dimensions, we evaluate the expected number of potential synapses and the probability of potential connectivity among excitatory (pyramidal and spiny stellate) neurons and inhibitory basket cells. The results provide a quantitative description of structural organization of local cortical circuits. For excitatory neurons from different cortical layers, we compute local domains, which contain their potentially pre- and postsynaptic excitatory partners. These domains have columnar shapes with laminar specific radii and are roughly of the size of the ocular dominance column. Therefore, connections between most excitatory neurons in the ocular dominance column can be implemented by local synaptogenesis. Structural connectivity involving inhibitory basket cells is generally weaker than excitatory connectivity. Here, only nearby neurons are capable of establishing more than one potential synapse, implying that within the ocular dominance column these connections have more limited potential for circuit remodeling."}
{"_id": "d3b5dbc9e94c0f49c6f5178dc689de65c5282160", "title": "Synthetic aperture ultrasound imaging.", "text": "The paper describes the use of synthetic aperture (SA) imaging in medical ultrasound. SA imaging is a radical break with today's commercial systems, where the image is acquired sequentially one image line at a time. This puts a strict limit on the frame rate and the possibility of acquiring a sufficient amount of data for high precision flow estimation. These constrictions can be lifted by employing SA imaging. Here data is acquired simultaneously from all directions over a number of emissions, and the full image can be reconstructed from this data. The paper demonstrates the many benefits of SA imaging. Due to the complete data set, it is possible to have both dynamic transmit and receive focusing to improve contrast and resolution. It is also possible to improve penetration depth by employing codes during ultrasound transmission. Data sets for vector flow imaging can be acquired using short imaging sequences, whereby both the correct velocity magnitude and angle can be estimated. A number of examples of both phantom and in vivo SA images will be presented measured by the experimental ultrasound scanner RASMUS to demonstrate the many benefits of SA imaging."}
{"_id": "f7a4a0b2ed6557e4eb9e09a8e74d94de23994c60", "title": "Facial soft-tissue spaces and retaining ligaments of the midcheek: defining the premaxillary space.", "text": "BACKGROUND\nThis anatomical study was undertaken to define the soft-tissue spaces, retaining ligaments, and their relations in the midcheek.\n\n\nMETHODS\nSixty fresh hemifaces were dissected. The retaining ligaments and facial spaces were defined and their dimensions recorded. The course of the key vessels and branches of the facial and infraorbital nerves were defined and their anatomical relations noted.\n\n\nRESULTS\nThe preseptal and prezygomatic spaces underlie the lid-cheek and malar segments of the midcheek. A previously undocumented soft-tissue space, the premaxillary space, was found to underlie the nasolabial segment. The retaining ligaments of the midcheek are the tear trough-orbicularis retaining ligament complex in the upper midcheek and the zygomatic and maxillary ligaments in the lower midcheek. The tear trough-orbicularis retaining ligament complex separates the preseptal space above from the prezygomatic and premaxillary spaces below. Facial nerve branches in the midcheek are closely associated with the zygomatic ligaments located outside the lower boundary of the prezygomatic space and are protected so long as the dissection is kept within this space. The infraorbital nerve is protected by the floor of the premaxillary space, formed by the levator labii superioris and, at the inferior boundary of the space, by the close relation with the maxillary ligaments.\n\n\nCONCLUSIONS\nThis study completely defined the spaces and retaining ligaments of the midcheek. Knowledge of this anatomy is key to safe and atraumatic suborbicular dissection for effective midcheek lifts."}
{"_id": "22ae02d81c21cb90b0de071550cfb99e6a623e62", "title": "Deep Sentence Embedding Using Long Short-Term Memory Networks: Analysis and Application to Information Retrieval", "text": "This paper develops a model that addresses sentence embedding, a hot topic in current natural language processing research, using recurrent neural networks (RNN) with Long Short-Term Memory (LSTM) cells. The proposed LSTM-RNN model sequentially takes each word in a sentence, extracts its information, and embeds it into a semantic vector. Due to its ability to capture long term memory, the LSTM-RNN accumulates increasingly richer information as it goes through the sentence, and when it reaches the last word, the hidden layer of the network provides a semantic representation of the whole sentence. In this paper, the LSTM-RNN is trained in a weakly supervised manner on user click-through data logged by a commercial web search engine. Visualization and analysis are performed to understand how the embedding process works. The model is found to automatically attenuate the unimportant words and detect the salient keywords in the sentence. Furthermore, these detected keywords are found to automatically activate different cells of the LSTM-RNN, where words belonging to a similar topic activate the same cell. As a semantic representation of the sentence, the embedding vector can be used in many different applications. These automatic keyword detection and topic allocation abilities enabled by the LSTM-RNN allow the network to perform document retrieval, a difficult language processing task, where the similarity between the query and documents can be measured by the distance between their corresponding sentence embedding vectors computed by the LSTM-RNN. On a web search task, the LSTM-RNN embedding is shown to significantly outperform several existing state of the art methods. We emphasize that the proposed model generates sentence embedding vectors that are specially useful for web document retrieval tasks. A comparison with a well known general sentence embedding method, the Paragraph Vector, is performed. The results show that the proposed method in this paper significantly outperforms Paragraph Vector method for web document retrieval task."}
{"_id": "5324ba064dc1656dd51c04122c2c802ef9ec28ce", "title": "Recurrent Recommender Networks", "text": "Recommender systems traditionally assume that user profiles and movie attributes are static. Temporal dynamics are purely reactive, that is, they are inferred after they are observed, e.g. after a user's taste has changed or based on hand-engineered temporal bias corrections for movies. We propose Recurrent Recommender Networks (RRN) that are able to predict future behavioral trajectories. This is achieved by endowing both users and movies with a Long Short-Term Memory (LSTM) autoregressive model that captures dynamics, in addition to a more traditional low-rank factorization. On multiple real-world datasets, our model offers excellent prediction accuracy and it is very compact, since we need not learn latent state but rather just the state transition function."}
{"_id": "c4d7be4bddb5326d03d43a0763b7b31ffc118627", "title": "A Survey of Point-of-interest Recommendation in Location-based Social Networks", "text": "Point-of-interest (POI) recommendation that suggests new places for users to visit arises with the popularity of location-based social networks (LBSNs). Due to the importance of POI recommendation in LBSNs, it has attracted much academic and industrial interest. In this paper, we offer a systematic review of this field, summarizing the contributions of individual efforts and exploring their relations. We discuss the new properties and challenges in POI recommendation, compared with traditional recommendation problems, e.g., movie recommendation. Then, we present a comprehensive review in three aspects: influential factors for POI recommendation, methodologies employed for POI recommendation, and different tasks in POI recommendation. Specifically, we propose three taxonomies to classify POI recommendation systems. First, we categorize the systems by the influential factors check-in characteristics, including the geographical information, social relationship, temporal influence, and content indications. Second, we categorize the systems by the methodology, including systems modeled by fused methods and joint methods. Third, we categorize the systems as general POI recommendation and successive POI recommendation by subtle differences in the recommendation task whether to be bias to the recent check-in. For each category, we summarize the contributions and system features, and highlight the representative work. Moreover, we discuss the available data sets and the popular metrics. Finally, we point out the possible future directions in this area and conclude this survey."}
{"_id": "e9fac1091d9a1646314b1b91e58f40dae3a750cd", "title": "The Vanishing Gradient Problem During Learning Recurrent Neural Nets and Problem Solutions", "text": "Received () Revised () Recurrent nets are in principle capable to store past inputs to produce the currently desired output. Because of this property recurrent nets are used in time series prediction and process control. Practical applications involve temporal dependencies spanning many time steps, e.g. between relevant inputs and desired outputs. In this case, however, gradient based learning methods take too much time. The extremely increased learning time arises because the error vanishes as it gets propagated back. In this article the decaying error ow is theoretically analyzed. Then methods trying to overcome vanishing gradients are brieey discussed. Finally, experiments comparing conventional algorithms and alternative methods are presented. With advanced methods long time lag problems can be solved in reasonable time."}
{"_id": "8924a32c1b0769b1d523739217fe07c876571700", "title": "An Empirical Analysis of the Influence of Fault Space on Search-Based Automated Program Repair", "text": "Automated program repair (APR) has attracted great research attention, and various techniques have been proposed. Search-based APR is one of the most important categories among these techniques. Existing researches focus on the design of effective mutation operators and searching algorithms to better find the correct patch. Despite various efforts, the effectiveness of these techniques are still limited by the search space explosion problem. One of the key factors attribute to this problem is the quality of fault spaces as reported by existing studies. This motivates us to study the importance of the fault space to the success of finding a correct patch. Our empirical study aims to answer three questions. Does the fault space significantly correlate with the performance of search-based APR? If so, are there any indicative measurements to approximate the accuracy of the fault space before applying expensive APR techniques? Are there any automatic methods that can improve the accuracy of the fault space? We observe that the accuracy of the fault space affects the effectiveness and efficiency of search-based APR techniques, e.g., the failure rate of GenProg could be as high as 60% when the real fix location is ranked lower than 10 even though the correct patch is in the search space. Besides, GenProg is able to find more correct patches and with fewer trials when given a fault space with a higher accuracy. We also find that the negative mutation coverage, which is designed in this study to measure the capability of a test suite to kill the mutants created on the statements executed by failing tests, is the most indicative measurement to estimate the efficiency of search-based APR. Finally, we confirm that automated generated test cases can help improve the accuracy of fault spaces, and further improve the performance of search-based APR techniques. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. c \u00a9 2017 ACM. ISBN 978-1-4503-2138-9. DOI: 10.1145/1235 CCS Concepts \u2022Software and its engineering\u2192Automatic programming; Empirical software validation;"}
{"_id": "854146b2b4dc4bb12df9d6a2d66aa3a392ae0c40", "title": "Software Reuse Through Information Retrieval", "text": "There is widespread need for safe, verifiable, efficient, and reliable software that can be delivered in a timely manner. Software reuse can make a valuable contrbution toward this goal by increasing programmer productivity and software quality. Unfortunately, the amount of software reuse currently done is quite small. DeMarco [1] estimates that in the average software development environment only about five percent of code is reused."}
{"_id": "d6a22e135a9c214f2dd4f71a50b5cf7e082853f0", "title": "Unsupervised speech representation learning using WaveNet autoencoders", "text": "We consider the task of unsupervised extraction of meaningful latent representations of speech by applying autoencoding neural networks to speech waveforms. The goal is to learn a representation able to capture high level semantic content from the signal, e.g. phoneme identities, while being invariant to confounding low level details in the signal such as the underlying pitch contour or background noise. The behavior of autoencoder models depends on the kind of constraint that is applied to the latent representation. We compare three variants: a simple dimensionality reduction bottleneck, a Gaussian Variational Autoencoder (VAE), and a discrete Vector Quantized VAE (VQVAE). We analyze the quality of learned representations in terms of speaker independence, the ability to predict phonetic content, and the ability to accurately reconstruct individual spectrogram frames. Moreover, for discrete encodings extracted using the VQVAE, we measure the ease of mapping them to phonemes. We introduce a regularization scheme that forces the representations to focus on the phonetic content of the utterance and report performance comparable with the top entries in the ZeroSpeech 2017 unsupervised acoustic unit discovery task."}
{"_id": "0d8e2f4d60c102975055080427731f2d14dcebc3", "title": "Monotony of road environment and driver fatigue: a simulator study.", "text": "Studies have shown that drowsiness and hypovigilance frequently occur during highway driving and that they may have serious implications in terms of accident causation. This paper focuses on the task induced factors that are involved in the development of these phenomena. A driving simulator study was conducted in order to evaluate the impact of the monotony of roadside visual stimulation using a steering wheel movement (SWM) analysis procedure. Fifty-six male subjects each drove during two different 40-min periods. In one case, roadside visual stimuli were essentially repetitive and monotonous, while in the other one, the environment contained disparate visual elements aiming to disrupt monotony without changing road geometry. Subject's driving performance was compared across these conditions in order to determine whether disruptions of monotony can have a positive effect and help alleviate driver fatigue. Results reveal an early time-on-task effect on driving performance for both driving periods and more frequent large SWM when driving in the more monotonous road environment, which implies greater fatigue and vigilance decrements. Implications in terms of environmental countermeasures for driver fatigue are discussed."}
{"_id": "3e677c2114908857885c0199ba41f6594ee017be", "title": "Speech responsive mobile robot for transporting objects of different weight categories", "text": "In this paper we present a robot with mechanical gripper which has the capability of transporting limited sized objects from one place to another with pick-up and drop capabilities. The robot's responses are based on speech recognition of verbal commands. In our project we have used Google speech recognition module in order to understand verbal commands. We also categorized the objects into six specific categories according to the amount of gripping force required to lift the objects. The categories are divided according to object stiffness. An android application was used to communicate with the robot through Bluetooth communication. The application decodes the human speech into an array of characters which are transmitted to the robot using Bluetooth technology. The robot uses microcontroller which decodes the messages into executable functions. In this project we have designed the robot such that it understands only fifteen distinct verbal commands and ignores others. Being a speech responsive mobile robot it can be effectively used to move objects from one place to another by people with disability or handicap."}
{"_id": "19d8b1dd85080d25c0670ecaab58c52fdc41d54c", "title": "The A2iA Arabic Handwritten Text Recognition System at the Open HaRT2013 Evaluation", "text": "This paper describes the Arabic handwriting recognition systems proposed by A2iA to the NIST OpenHaRT2013 evaluation. These systems were based on an optical model using Long Short-Term Memory (LSTM) recurrent neural networks, trained to recognize the different forms of the Arabic characters directly from the image, without explicit feature extraction nor segmentation.Large vocabulary selection techniques and n-gram language modeling were used to provide a full paragraph recognition, without explicit word segmentation. Several recognition systems were also combined with the ROVER combination algorithm. The best system exceeded 80% of recognition rate."}
{"_id": "2ad062fa4cbf5cb59104a5b02d563d99d74240b8", "title": "Dynamic resource management in SDN-based virtualized networks", "text": "Network virtualization allows for an abstraction between user and physical resources by letting a given physical infrastructure to be shared by multiple service providers. However, network virtualization presents some challenges, such as, efficient resource management, fast provisioning and scalability. By separating a network's control logic from the underlying routers and switches, software defined networking (SDN) promises an unprecedented simplification in network programmability, management and innovation by service providers, and hence, its control model presents itself as a candidate solution to the challenges in network virtualization. In this paper, we use the SDN control plane to efficiently manage resources in virtualized networks by dynamically adjusting the virtual network (VN) to substrate network (SN) mappings based on network status. We extend an SDN controller to monitor the resource utilisation of VNs, as well as the average loading of SN links and switches, and use this information to proactively add or remove flow rules from the switches. Simulations show that, compared with three state-of-art approaches, our proposal improves the VN acceptance ratio by about 40% and reduces VN resource costs by over 10%."}
{"_id": "569e269798afcd9586e0144e0cdd26b21828587c", "title": "CIRCULAR POLARIZATION DUAL FEED MICROSTRIP PATCH ANTENNA WITH 3 dB HYBRID COUPLER FOR WLAN ( 2 . 4 GHZ", "text": "Microstrip patch antennas represent one family of compact antennas that offer a conformal nature and the capability of ready integration with communication system\u2019s printed circuitry. In this project, a 2.4 GHz circular polarization microstrip antenna is designed, constructed and measured. The selected microstrip antenna is a dual \u2013fed circular polarized microstrip antenna. The antenna consists of square patch and 3 dB hybrid coupler. The dual \u2013 fed circular polarized microstrip antenna is etched on a FR4 with dielectric substrate of 4.6 with the height of 1.6 mm. Circular polarization is obtained when two orthogonal modes are equally excited with 90\u00b0 phase difference between them. Circular polarization is important because regardless of the receiver orientation, it will always be receiving a component of the signal. This is due to the resulting wave having an angular variation. KEYWORD:MICROSTRIP,CIRCULAR POLARIZATION"}
{"_id": "3e903f452549d3f9e49f820ae6b93e8785e50c67", "title": "Social semantic query expansion", "text": "Weak semantic techniques rely on the integration of Semantic Web techniques with social annotations and aim to embrace the strengths of both. In this article, we propose a novel weak semantic technique for query expansion. Traditional query expansion techniques are based on the computation of two-dimensional co-occurrence matrices. Our approach proposes the use of three-dimensional matrices, where the added dimension is represented by semantic classes (i.e., categories comprising all the terms that share a semantic property) related to the folksonomy extracted from social bookmarking services, such as delicious and StumbleUpon. The results of an indepth experimental evaluation performed on both artificial datasets and real users show that our approach outperforms traditional techniques, such as relevance feedback and personalized PageRank, so confirming the validity and usefulness of the categorization of the user needs and preferences in semantic classes. We also present the results of a questionnaire aimed to know the users opinion regarding the system. As one drawback of several query expansion techniques is their high computational costs, we also provide a complexity analysis of our system, in order to show its capability of operating in real time."}
{"_id": "b7f96e579f96e3c64a10b2d2eb4a051be5978d7e", "title": "A Novel Low-Cost, Large Curvature Bend Sensor Based on a Bowden-Cable", "text": "Bend sensors have been developed based on conductive ink, optical fiber, and electronic textiles. Each type has advantages and disadvantages in terms of performance, ease of use, and cost. This study proposes a new and low-cost bend sensor that can measure a wide range of accumulated bend angles with large curvatures. This bend sensor utilizes a Bowden-cable, which consists of a coil sheath and an inner wire. Displacement changes of the Bowden-cable's inner wire, when the shape of the sheath changes, have been considered to be a position error in previous studies. However, this study takes advantage of this position error to detect the bend angle of the sheath. The bend angle of the sensor can be calculated from the displacement measurement of the sensing wire using a Hall-effect sensor or a potentiometer. Simulations and experiments have shown that the accumulated bend angle of the sensor is linearly related to the sensor signal, with an R-square value up to 0.9969 and a root mean square error of 2% of the full sensing range. The proposed sensor is not affected by a bend curvature of up to 80.0 m(-1), unlike previous bend sensors. The proposed sensor is expected to be useful for various applications, including motion capture devices, wearable robots, surgical devices, or generally any device that requires an affordable and low-cost bend sensor."}
{"_id": "f587e64e5659eab13a3768f12b1a02d7575c8bc2", "title": "Slot-Coupled Multisection Quadrature Hybrid for UWB Applications", "text": "We present a three-section quadrature hybrid based on slot-coupled directional couplers, which operates over a bandwidth from 3.1 to 10.6 GHz. A prototype has been fabricated which exhibits a return loss better than 20 dB, isolation around 20 dB, an amplitude imbalance between output ports of less than plusmn0.75 dB and a phase imbalance between +1deg and -3deg across the 3.1-10.6 GHz band. This design outperforms previously reported results for ultra wide band operation."}
{"_id": "d522a9452a436dce6770443d9ac8c1245e7e3510", "title": "A High-Performance FPGA-Based Implementation of the LZSS Compression Algorithm", "text": "The increasing growth of embedded networking applications has created a demand for high-performance logging systems capable of storing huge amounts of high-bandwidth, typically redundant data. An efficient way of maximizing the logger performance is doing a real-time compression of the logged stream. In this paper we present a flexible high-performance implementation of the LZSS compression algorithm capable of processing up to 50 MB/s on a Virtex-5 FPGA chip. We exploit the independently addressable dual-port block RAMs inside the FPGA chip to achieve an average performance of 2 clock cycles per byte. To make the compressed stream compatible with the ZLib library [1] we encode the LZSS algorithm output using a fixed Huffman table defined by the Deflate specification [2]. We also demonstrate how changing the amount of memory allocated to various internal tables impacts the performance and compression ratio. Finally, we provide a cycle-accurate estimation tool that allows finding a trade-off between FPGA resource utilization, compression ratio and performance for a specific data sample."}
{"_id": "ab1b2029dafea99f4c98aa50767ac1df447fe48a", "title": "Mining (Social) Network Graphs to Detect Random Link Attacks", "text": "Modern communication networks are vulnerable to attackers who send unsolicited messages to innocent users, wasting network resources and user time. Some examples of such attacks are spam emails, annoying tele-marketing phone calls, viral marketing in social networks, etc. Existing techniques to identify these attacks are tailored to certain specific domains (like email spam filtering), but are not applicable to a majority of other networks. We provide a generic abstraction of such attacks, called the Random Link Attack (RLA), that can be used to describe a large class of attacks in communication networks. In an RLA, the malicious user creates a set of false identities and uses them to communicate with a large, random set of innocent users. We mine the social networking graph extracted from user interactions in the communication network to find RLAs. To the best of our knowledge, this is the first attempt to conceptualize the attack definition, applicable to a variety of communication networks. In this paper, we formally define RLA and show that the problem of finding an RLA is NP-complete. We also provide two efficient heuristics to mine subgraphs satisfying the RLA property; the first (GREEDY) is based on greedy set-expansion, and the second (TRWALK) on randomized graph traversal. Our experiments with a real-life data set demonstrate the effectiveness of these algorithms."}
{"_id": "377ce926004ff65cfdcc2c4c6a7bac11ea2f6221", "title": "A Solution to Forecast Demand Using Long Short-Term Memory Recurrent Neural Networks for Time Series Forecasting", "text": "This study focuses on predicting demand based on data collected which spans across many periods. To help our client build a solution to forecast demand effectively, we developed a model using Long Short-Term Memory (LSTM) Networks, a type of Recurrent Neural Network, to estimate demand based on historical patterns. While there may be many available models for dealing with a time series problem, the LSTM model is relatively new and highly sophisticated to its counterpart. By comparing this study which works excellently for sequential learning, to the other existing models and techniques, we are now closer to solving at least one of many complications apparent across industries. The study becomes even more important for supply chain professionals, especially those in the purchase department, as they can now rely on a highly accurate model instead of basing their forecasts purely on intuition and recent customer behavior. Using data from the M3-Competition, which is a competition conducted by the International Institute of Forecasters, we develop a working framework to help our client compare their existing models (feedforward neural network and exponential smoothing model) with our LSTM model."}
{"_id": "53633cc15e299d5ad3c726f677cbd9f7746a553a", "title": "CRISPR interference (CRISPRi) for sequence-specific control of gene expression", "text": "Sequence-specific control of gene expression on a genome-wide scale is an important approach for understanding gene functions and for engineering genetic regulatory systems. We have recently described an RNA-based method, CRISPR interference (CRISPRi), for targeted silencing of transcription in bacteria and human cells. The CRISPRi system is derived from the Streptococcus pyogenes CRISPR (clustered regularly interspaced palindromic repeats) pathway, requiring only the coexpression of a catalytically inactive Cas9 protein and a customizable single guide RNA (sgRNA). The Cas9-sgRNA complex binds to DNA elements complementary to the sgRNA and causes a steric block that halts transcript elongation by RNA polymerase, resulting in the repression of the target gene. Here we provide a protocol for the design, construction and expression of customized sgRNAs for transcriptional repression of any gene of interest. We also provide details for testing the repression activity of CRISPRi using quantitative fluorescence assays and native elongating transcript sequencing. CRISPRi provides a simplified approach for rapid gene repression within 1\u20132 weeks. The method can also be adapted for high-throughput interrogation of genome-wide gene functions and genetic interactions, thus providing a complementary approach to RNA interference, which can be used in a wider variety of organisms."}
{"_id": "35c0559556471f6bcda1c2b7a70a64f029a63421", "title": "Serendipitous recommendation for scholarly papers considering relations among researchers", "text": "Serendipity occurs when one finds an interesting discovery while searching for something else. While search engines seek to report work relevant to a targeted query, recommendation engines are particularly well-suited for serendipitous recommendations as such processes do not need to fulfill a targeted query. Junior researchers can use such an engine to broaden their horizon and learn new areas, while senior researchers can discover interdisciplinary frontiers to apply integrative research. We adapt a state-of-the-art scholarly paper recommendation system's user profile construction to make use of information drawn from 1) dissimilar users and 2) co-authors to specifically target serendipitous recommendation."}
{"_id": "4658a728225d2408a93c8706c2418dc4379d6be7", "title": "Cross-Lingual Ontology Mapping - An Investigation of the Impact of Machine Translation", "text": "Ontologies are at the heart of knowledge management and make use of information that is not only written in English but also in many other natural languages. In order to enable knowledge discovery, sharing and reuse of these multilingual ontologies, it is necessary to support ontology mapping despite natural language barriers. This paper examines the soundness of a generic approach that involves machine translation tools and monolingual ontology matching techniques in cross-lingual ontology mapping scenarios. In particular, experimental results collected from case studies which engage mappings of independent ontologies that are labeled in English and Chinese are presented. Based on findings derived from these studies, limitations of this generic approach are discussed. It is shown with evidence that appropriate translations of conceptual labels in ontologies are of crucial importance when applying monolingual matching techniques in cross-lingual ontology mapping. Finally, to address the identified challenges, a semantic-oriented cross-lingual ontology mapping (SOCOM) framework is proposed and discussed."}
{"_id": "eb16157bceceac9bfa66c85c8975658d6d71df4b", "title": "Usability and Design Guidelines of Smart Canes for Users with Visual Impairments", "text": "The percentage of the population with visual impairments is increasing rapidly. Every year, the number of visually impaired people grows by up to 2 million worldwide. The World Health Organization (WHO, 2011) estimates that there are 39 million blind people and 246 million visually impaired people internationally. In addition, visual impairments are closely associated with aging. About 65% of the visually impaired are 50 years of age or older, with about 20% of the world\u2019s population in this age group (WHO, 2011). In Europe, in particular, about 90% of visually impaired and partially sighted people are over the age of 60. One of the most severe difficulties for the visually impaired is safe independent mobility. Having independent mobility is a significant factor in ensuring that this aging group can perform simple daily tasks without depending on others. Clark-Carter (1985) indicates that visually impaired people have a low level of mobility. Further, a recent survey in France revealed that only 2% of the overall visually impaired population uses mobility aids (Sander, Lelievre, & Tallec, 2005). One model of an independent mobility aid for the visually impaired is the traditional white cane, which is inexpensive, lightweight and foldable. Unfortunately, white cane users have difficulty detecting protruding bars or moving vehicles until they are dangerously close, which can lead to collisions and falls. The limited capability of the white cane corresponds to its length and a user\u2019s maneuvering skills. As such, users rarely detect overhanging obstacles at head-level or ranges further than approximately 1 m from the user. Manduchi and Kurniawan (2011) report in a recent survey of 300 visually impaired people that over 40% of the respondents experienced head-level accidents at least once a month. Further, Clark-Carter (1985) found that visually impaired people were shown to be potentially dangerous when using a white cane. To address these difficulties, many \u201csmart\u201d products for the visually impaired have been introduced in the last four decades, including smart canes and handheld or wearable devices that are equipped with a sensor system. A smart cane offers an improvement over a traditional white cane because it has the ability to detect objects above the cane and up to a range of 2 m away using an ultrasonic sensor. A white cane allows objects to be sensed through touch and echolocation from tapping. A smart cane has the same capabilities, except that it uses vibrotactile information and produces vibration alerts for obstacles in front of the user. However, these smart products have not been successfully adopted and used by a large number of visually impaired people (Roentgen, Gelderblom, Soede, & de Witte, 2008). Several ORIGINAL ARTICLE"}
{"_id": "a6614dc9ebf183b5accfe13d43f46f68d2ce2cac", "title": "Heart Rate Monitoring as an Easy Way to Increase Engagement in Human-Agent Interaction", "text": "Physiological sensors are gaining the attention of manufacturers and users. As denoted by devices such as smartwatches or the newly released Kinect 2 \u2013 which can covertly measure heartbeats \u2013 or by the popularity of smartphone apps that track heart rate during fitness activities. Soon, physiological monitoring could become widely accessible and transparent to users. We demonstrate how one could take advantage of this situation to increase users\u2019 engagement and enhance user experience in human-agent interaction. We created an experimental protocol involving embodied agents \u2013 \u201cvirtual avatars\u201d. Those agents were displayed alongside a beating heart. We compared a condition in which this feedback was simply duplicating the heart rates of users to another condition in which it was set to an average heart rate. Results suggest a superior social presence of agents when they display feedback similar to users\u2019 internal state. This physiological \u201csimilarity-attraction\u201d effect may lead, with little effort, to a better acceptance of agents and robots by the general public."}
{"_id": "1b4fba54272e3d5431ba54ebc111247d111d2458", "title": "Ultrasound guided posterior femoral cutaneous nerve block.", "text": "The posterior femoral cutaneous nerve (PFCN) is a branch of the sacral plexus. It needs to be implemented as a complementary block for anesthesia or in the surgeries necessitating tourniquet in the suitable cases. We consider target oriented block concept within the PFCN block in the anesthesia implementations with the emergence of ultrasonic regional anesthesia in the practice and with the better understanding of sonoanatomy."}
{"_id": "9c2297dfd6b37c0d75554ad6b343b91c70faf682", "title": "Dynamic Resource Discovery Based on Preference and Movement Pattern Similarity for Large-Scale Social Internet of Things", "text": "Given the wide range deployment of disconnected delay-tolerant social Internet of Things (SIoT), efficient resource discovery remains a fundamental challenge for large-scale SIoT. The existing search mechanisms over the SIoT do not consider preference similarity and are designed in Cartesian coordinates without sufficient consideration of real-world network deployment environments. In this paper, we propose a novel resource discovery mechanism in a 3-D Cartesian coordinate system with the aim of enhancing the search efficiency over the SIoT. Our scheme is based on both of preference and movement pattern similarity to achieve higher search efficiency and to reduce the system overheads of SIoT. Simulation experiments have been conducted to evaluate this new scheme in a large-scale SIoT environment. The simulation results show that our proposed scheme outperforms the state-of-the-art resource discovery schemes in terms of search efficiency and average delay."}
{"_id": "3fe53582d5c5721ae0db0e3020849d0e5e05ef0c", "title": "Through mold vias for stacking of mold embedded packages", "text": "The constant drive towards further miniaturization and heterogeneous system integration leads to a need for new packaging technologies which also allow large area processing and 3D integration with potential for low cost applications. Large area mold embedding technologies and embedding of active components into printed circuit boards (Chip-in-Polymer) are two major packaging trends in this area. This paper describes the use of a novel S2iP (Stacked System in Package) interconnect technique using advanced molding process for multi chip embedding in combination with large area and low cost redistribution technology derived from printed circuit board manufacturing with a focus on integration of through mold vias for package stacking. The use of compression molding equipment with liquid or granular epoxy molding compounds for the targeted integration process flow is a new technology that has been especially developed to allow large area embedding of single chips but also of multiple chips or heterogeneous systems on wafer scale, typically 8\u201d to 12\u201d. Future developments will deal with panel sizes up to 470 \u00d7 370 mm\u00b2. The wiring of the embedded components in this novel type of SiP is done using PCB manufacturing technologies, i.e. a resin coated copper (RCC) film is laminated over the embedded components \u2014 whichever no matter which shape they are: a compression molded wafer or a larger rectangular area of a Molded Array Package (MAP). Interconnects are formed by laser drilling to die pads and electroplating \u2014 all of them making use of standard PCB processes. Thus, through vias which are standard features in PCB manufacturing and can be also integrated in the proposed process flow for mold embedding in combination with RCC based redistribution. Vias were drilled by laser or mechanically after RCC lamination and were metalized together with the vias for chip interconnection. Within this study different liquid and granular molding compounds have been intensively evaluated on their processability. Via drilling process by laser and mechanical drilling is systematically developed and analyzed with focus on via diameter, pitch, mold thickness and molding compound composition and here especially on filler particle sizes and distribution. The feasibility of the entire process chain is demonstrated by fabrication of a Ball Grid Array (BGA) type of system package with two embedded dies and through mold vias allowing the stacking of these BGA packages. Finally, a technology demonstrator is described consisting of two BGAs stacked on each other and mounted on a base substrate enabling the electrical test of a daisy chain structure through the stacked module, allowing the evaluation of the technology and the applied processes."}
{"_id": "e6caaf79e301c50beafab5dfb801136da91e5c17", "title": "Mindfulness-based cognitive therapy for depression: a new approach to preventing relapse.", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the \u201cContent\u201d) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content."}
{"_id": "3e090dac6019963715df50dc23d830d97a0e25ba", "title": "Practical Variational Inference for Neural Networks", "text": "Variational methods have been previously explored as a tractable approximation to Bayesian inference for neural networks. However the approaches proposed so far have only been applicable to a few simple network architectures. This paper introduces an easy-to-implement stochastic variational method (or equivalently, minimum description length loss function) that can be applied to most neural networks. Along the way it revisits several common regularisers from a variational perspective. It also provides a simple pruning heuristic that can both drastically reduce the number of network weights and lead to improved generalisation. Experimental results are provided for a hierarchical multidimensional recurrent neural network applied to the TIMIT speech corpus."}
{"_id": "652d159bf64a70194127722d19841daa99a69b64", "title": "Generating Sequences With Recurrent Neural Networks", "text": "This paper shows how Long Short-term Memory recurrent neural networks can be used to generate complex sequences with long-range structure, simply by predicting one data point at a time. The approach is demonstrated for text (where the data are discrete) and online handwriting (where the data are real-valued). It is then extended to handwriting synthesis by allowing the network to condition its predictions on a text sequence. The resulting system is able to generate highly realistic cursive handwriting in a wide variety of styles."}
{"_id": "4cd4a796666ef48069a7b524b397cff021497b99", "title": "Modeling and Simulating Apache Spark Streaming Applications", "text": "Stream processing systems are used to analyze big data streams with low latency. The performance in terms of response time and throughput is crucial to ensure all arriving data are processed in time. This depends on various factors such as the complexity of used algorithms and configurations of such distributed systems and applications. To ensure a desired system behavior, performance evaluations should be conducted to determine the throughput and required resources in advance. In this paper, we present an approach to predict the response time of Apache Spark Streaming applications by modeling and simulating them. In a preliminary controlled experiment, our simulation results suggest accurate prediction values for an upscaling scenario."}
{"_id": "3cbd2d8d6f925e1bbc622ee9dbef6338c755ca5e", "title": "An intelligent event-driven approach for efficient energy consumption in commercial buildings: smart office use case", "text": "In this paper we present a use case related to the intelligent processing of events coming from the conventional (\"cheap\") sensors in order to support better energy consumption in commercial buildings. The approach has been implemented using our iCEP framework and deployed in the office space of a real working environment. This research is a kind of the proof of the concept for a new technology that the industry partner would like to exploit.\n The results are very encouraging: smart decisions for the efficient usage of energy can be made by the intelligent processing of \"cheap\" sensor events."}
{"_id": "115dd17ea385d40c2c53ebe35ccaf97e09e694be", "title": "IT Governance and Sarbanes-Oxley: The Latest Sales Pitch or Real Challenges for the IT Function?", "text": "Building on an analysis of the Sarbanes-Oxley Act, literature, as well as private and public interviews, this paper hopes to shed light on the potential impact of Sarbanes-Oxley for IT governance, IT budgets, and relationships with vendors and outsourcers. Findings have implications for research as well as practical lessons-learned for American firms, and for IT vendors or other companies doing business with American companies."}
{"_id": "7302ffd9387199ece08b2b3510ed0467d74a41db", "title": "Attention Strategies for Multi-Source Sequence-to-Sequence Learning", "text": "Modeling attention in neural multi-source sequence-to-sequence learning remains a relatively unexplored area, despite its usefulness in tasks that incorporate multiple source languages or modalities. We propose two novel approaches to combine the outputs of attention mechanisms over each source sequence, flat and hierarchical. We compare the proposed methods with existing techniques and present results of systematic evaluation of those methods on the WMT16 Multimodal Translation and Automatic Post-editing tasks. We show that the proposed methods achieve competitive results on both tasks."}
{"_id": "8f688e1fee7180e342791629b5ccac53f2e1c975", "title": "Printed $\\lambda/8$ -PIFA for Penta-Band WWAN Operation in the Mobile Phone", "text": "A small-size printed planar inverted-F antenna (PIFA) operated at its one-eighth wavelength (lambda/8) mode as the fundamental resonant mode for achieving WWAN (wireless wide area network) operation in the mobile phone is presented. The proposed PIFA has a simple structure of comprising two radiating strips of length about lambda/8 at 900 MHz and is fed using a coupling feed. Compared to the traditional PIFA using a direct feed, the coupling feed greatly decreases the very large input impedance seen at the lambda/8 mode for the traditional PIFA and results in successful excitation of the lambda/8 mode for the proposed PIFA. Two lambda/8 modes are generated by the two radiating strips and occur at close frequencies at about 900 MHz to form a wide lower band to cover GSM850/900 operation. The two radiating strips also generate two higher-order modes or lambda/4 modes at about 1900 MHz to form a wide upper band for GSM1800/1900/UMTS operation. Penta-band WWAN operation is hence achieved, yet the proposed PIFA only occupies a small printed area of 15times31 mm2 or 465 mm2 on the system circuit board of the mobile phone, which is about the smallest for the internal uniplanar printed antenna capable of penta-band operation that have been reported. Details of the proposed PIFA are presented. The specific absorption rate (SAR) and hearing aid compatibility (HAC) results for the proposed PIFA are also analyzed."}
{"_id": "307f6c9252b2ce92a51e09ca0640ce814356c68c", "title": "Learning to Doodle with Stroke Demonstrations and Deep Q-Networks", "text": "Doodling is a useful and common intelligent skill that people can learn and master. In this work, we propose a two-stage learning framework to teach a machine to doodle in a simulated painting environment via Stroke Demonstration and deep Q-learning (SDQ). The developed system, Doodle-SDQ, generates a sequence of pen actions to reproduce a reference drawing and mimics the behavior of human painters. In the first stage, it learns to draw simple strokes by imitating in supervised fashion from a set of strokeaction pairs collected from artist paintings. In the second stage, it is challenged to draw real and more complex doodles without ground truth actions; thus, it is trained with Qlearning. Our experiments confirm that (1) doodling can be learned without direct stepby-step action supervision and (2) pretraining with stroke demonstration via supervised learning is important to improve performance. We further show that Doodle-SDQ is effective at producing plausible drawings in different media types, including sketch and watercolor. A short video can be found at https://www.youtube.com/watch? v=-5FVUQFQTaE."}
{"_id": "1a43325def098ee57ff6f1c0b19a30811fe92304", "title": "Automated Music Emotion Recognition: A Systematic Evaluation", "text": "Automated music emotion recognition (MER) is a challenging task in Music Information Retrieval (MIR) with wide-ranging applications. Some recent studies pose MER as a continuous regression problem in the ArousalValence (AV) plane. These consist of variations on a common architecture having a universal model of emotional response, a common repertoire of low-level audio features, a bag-of-frames approach to audio analysis, and relatively small data sets. These approaches achieve some success at MER and suggest that further improvements are possible with current technology. Our contribution to the state of the art is to examine just how far one can go within this framework, and to investigate what the limitations of this framework are. We present the results of a systematic study conducted in an attempt to maximize the prediction performance of an automated MER system using the architecture described. We begin with a carefully constructed data set, emphasizing quality over quantity. We address affect induction rather than affect attribution. We consider a variety of algorithms at each stage of the training process, from preprocessing to feature selection and model selection, and we report the results of extensive testing. We found that: (1) none of the variations we considered leads to a substantial improvement in performance, which we present as evidence of a limit on what is achievable under this architecture, and (2) the size of the small data sets that are commonly used in the MER literature limits the possibility of improving the set of features used in MER due to the phenomenon of Subset Selection Bias. We conclude with some proposals for advancing the state of the art."}
{"_id": "5e1615685ec12383ceef42a75b31bf8d0845c4d7", "title": "Efficient Hardware Architectures for Deep Convolutional Neural Network", "text": "Convolutional neural network (CNN) is the state-of-the-art deep learning approach employed in various applications. Real-time CNN implementations in resource limited embedded systems are becoming highly desired recently. To ensure the programmable flexibility and shorten the development period, field programmable gate array is appropriate to implement the CNN models. However, the limited bandwidth and on-chip memory storage are the bottlenecks of the CNN acceleration. In this paper, we propose efficient hardware architectures to accelerate deep CNN models. The theoretical derivation of parallel fast finite impulse response algorithm (FFA) is introduced. Based on FFAs, the corresponding fast convolution units (FCUs) are developed for the computation of convolutions in the CNN models. Novel data storage and reuse schemes are proposed, where all intermediate pixels are stored on-chip and the bandwidth requirement is reduced. We choose one of the largest and most accurate networks, VGG16, and implement it on Xilinx Zynq ZC706 and Virtex VC707 boards, respectively. We achieve the top-5 accuracy of 86.25% using an equal distance non-uniform quantization method. It is estimated that the average performances are 316.23 GOP/s under 172-MHz working frequency on Xilinx ZC706 and 1250.21 GOP/s under 170-MHz working frequency on VC707, respectively. In brief, the proposed design outperforms the existing works significantly, in particular, surpassing related designs by more than two times in terms of resource efficiency."}
{"_id": "6525905fbc9e22b6b1c6909909560eeda8ca8fb1", "title": "Inverse Combinatorial Optimization: A Survey on Problems, Methods, and Results", "text": "Given a (combinatorial) optimization problem and a feasible solution to it, the corresponding inverse optimization problem is to find a minimal adjustment of the cost function such that the given solution becomes optimum. Several such problems have been studied in the last twelve years. After formalizing the notion of an inverse problem and its variants, we present various methods for solving them. Then we discuss the problems considered in the literature and the results that have been obtained. Finally, we formulate some open problems."}
{"_id": "e79eb95b7c73a510e59dc7c7c750cc4c685578d5", "title": "The Iowa Model of Evidence-Based Practice to Promote Quality Care.", "text": "The UIHC Department of Nursing is nationally known for its work on use of research to improve patient care. This reputation is attributable to staff members who continue to question \"how can we improve practice?\" or \"what does the latest evidence tell us about this patient problem?\" and to administrators who support, value, and reward EBP. The revisions made in the original Iowa Model are based on suggestions from staff at UIHC and other practitioners across the country who have implemented the model. We value their feedback and have set forth this revised model for evaluation and adoption by others."}
{"_id": "2d208d551ff9000ca189034fa683edb826f4c941", "title": "Coupled semi-supervised learning for information extraction", "text": "We consider the problem of semi-supervised learning to extract categories (e.g., academic fields, athletes) and relations (e.g., PlaysSport(athlete, sport)) from web pages, starting with a handful of labeled training examples of each category or relation, plus hundreds of millions of unlabeled web documents. Semi-supervised training using only a few labeled examples is typically unreliable because the learning task is underconstrained. This paper pursues the thesis that much greater accuracy can be achieved by further constraining the learning task, by coupling the semi-supervised training of many extractors for different categories and relations. We characterize several ways in which the training of category and relation extractors can be coupled, and present experimental results demonstrating significantly improved accuracy as a result."}
{"_id": "1f2083043b7e651a95ff57a7b24e50e8d2ab3432", "title": "Context adaptive deep neural networks for fast acoustic model adaptation", "text": "Deep neural networks (DNNs) are widely used for acoustic modeling in automatic speech recognition (ASR), since they greatly outperform legacy Gaussian mixture model-based systems. However, the levels of performance achieved by current DNN-based systems remain far too low in many tasks, e.g. when the training and testing acoustic contexts differ due to ambient noise, reverberation or speaker variability. Consequently, research on DNN adaptation has recently attracted much interest. In this paper, we present a novel approach for the fast adaptation of a DNN-based acoustic model to the acoustic context. We introduce a context adaptive DNN with one or several layers depending on external factors that represent the acoustic conditions. This is realized by introducing a factorized layer that uses a different set of parameters to process each class of factors. The output of the factorized layer is then obtained by weighted averaging over the contribution of the different factor classes, given posteriors over the factor classes. This paper introduces the concept of context adaptive DNN and describes preliminary experiments with the TIMIT phoneme recognition task showing consistent improvement with the proposed approach."}
{"_id": "d20285ae8cdbb1c00580e01b08b3e9a633478e3a", "title": "ACQR: A Novel Framework to Identify and Predict Influential Users in Micro-Blogging", "text": "As key roles of online social networks, influential users in micro-blogging have the ability to influence the attitudes or behaviour of others. When it comes to marketing, the users\u2019 influence should be associated with a certain topic or field on which people have different levels of preference and expertise. In order to identify and predict influential users in a specific topic more effectively, users\u2019 actual influential capability on a certain topic and potential influence unlimited by topics is combined into a novel comprehensive framework named \u201cACQR\u201d in this research. ACQR framework depicts the attributes of the influentials from four aspects, including activeness (A), centrality (C), quality of post (Q) and reputation (R). Based on this framework, a data mining method is developed for discovering and forecasting the top influentials. Empirical results reveal that our ACQR framework and the data mining method by TOPSIS and SVMs (with polynomial and RBF kernels) can perform very well in identifying and predicting influential users in a certain topic (such as iPhone 5). Furthermore, the dynamic change processes of users\u2019 influence from longitudinal perspective are analysed and suggestions to the sales managers are provided."}
{"_id": "f11609acf88a327f515b9b9461096dde96e10e3f", "title": "CYC: Using Common Sense Knowledge to Overcome Brittleness and Knowledge Acquisition Bottlenecks", "text": "MC& CYC project is the building, over the coming decade, of a large knowledge base (or KB) of real world facts and heuristics and-as a part of the KB itself-methods for efficiently reasoning over the KB. As the title of this article suggests, our hypothesis is that the two major limitations to building large intelligent programs might be overcome by using such a system. We briefly illustrate how common sense reasoning and analogy can widen the knowledge acquisition bottleneck The next section (\u201cHow CYC Works\u201d) illustrates how those same two abilities can solve problems of the type that stymie current expert systems. We then report how the project is being conducted currently: its strategic philosophy, its tactical methodology, and a case study of how we are currently putting that into practice. We conclude with a discussion of the project\u2019s feasibility and timetable."}
{"_id": "8e5af94585e2d16fd2175ad4a22086e5b4137c2f", "title": "Privacy-Preserving Attribute-Based Access Control Model for XML-Based Electronic Health Record System", "text": "Cloud-based electronic health record (EHR) systems enable medical documents to be exchanged between medical institutions; this is expected to contribute to improvements in various medical services in the future. However, as the system architecture becomes more complicated, cloud-based EHR systems may introduce additional security threats when compared with the existing singular systems. Thus, patients may experience exposure of private data that they do not wish to disclose. In order to protect the privacy of patients, many approaches have been proposed to provide access control to patient documents when providing health services. However, most current systems do not support fine-grained access control or take into account additional security factors such as encryption and digital signatures. In this paper, we propose a cloud-based EHR model that performs attribute-based access control using extensible access control markup language. Our EHR model, focused on security, performs partial encryption and uses electronic signatures when a patient\u2019s document is sent to a document requester. We use XML encryption and XML digital signature technology. Our proposed model works efficiently by sending only the necessary information to the requesters who are authorized to treat the patient in question."}
{"_id": "ce334946188479f095ab0366c95f14888181cfc3", "title": "An empirical approach to the experience of architectural space in virtual reality \u2014 exploring relations between features and affective appraisals of rectangular indoor spaces", "text": "In the presented exploratory study, quantitative relations between the experience of architectural spaces and physical properties were investigated using virtual reality (VR) simulations. Geometric properties from a component-based description of rectangular rooms were tested on their suitability as predictor variables for experiential qualities. In a virtual-reality-based perceptual experiment, qualities of 16 vacant rectangular interiors were rated in eight principal categories by 16 participants using the semantic differential scaling technique. The simulated scenes were automatically generated by means of a custom made utility that also provided for the component-based room descriptions. The data analysis revealed strong correlations between several scene features and the averaged rated experience. For example, a preference for ratios near to the golden section could be observed for spatial proportions, which are not directly perceivable. Altogether, a set of five widely independent factors (openness, two room proportions, room area, and balustrade height) turned out to be most effective for describing the observed variance in the attributed experiential qualities. The combination of realistic virtual reality simulations and psychophysical data raising methods proved to be an effective means for basic architectural research. Several quantitative relations between physical properties and the emotional experience of architectural space could be demonstrated. D 2004 Elsevier B.V. All rights reserved."}
{"_id": "36a37ccd62757523796838b7df549a388f134a3e", "title": "Mutual Coupling Reduction Using F-Shaped Stubs in UWB-MIMO Antenna", "text": "A compact, high performance, and novel-shaped ultra-wideband (UWB) multiple-input multiple-output (MIMO) antenna with low mutual coupling is presented in this paper. The proposed antenna consists of two radiating elements with shared ground plane having an area of $50\\times 30$ mm2. F-shaped stubs are introduced in the shared ground plane of the proposed antenna to produce high isolation between the MIMO antenna elements. The designed MIMO antenna has very low mutual coupling of (S21 < \u221220 dB), low envelop correlation coefficient (ECC <0.04), high diversity gain (DG >7.4 dB), high multiplexing efficiency ( ${\\eta }_{\\mathrm {Mux}}> -3.5$ ), and high peak gain over the entire UWB frequencies. The antenna performance is studied in terms of S-Parameters, radiation properties, peak gain, diversity gain, envelop correlation coefficient, and multiplexing efficiency. A good agreement between the simulated and measured results is observed."}
{"_id": "2a7b4ff5752b96e2a7b46cff10d78d2a1e7ec5b9", "title": "Building a Literal Bridge Between Robotics and Neuroscience using Functional Near Infrared Spectroscopy ( NIRS )", "text": "Functional near infrared spectroscopy (NIRS) is a promising new tool for research in human-robot interaction (HRI). The use of NIRS in HRI has already been demonstrated both as a means for investigating brain activity during human-robot interactions, as well as in the development of brain-robot interfaces that passively monitor a person\u2019s cognitive state to adapt robots\u2019 behaviors accordingly. In this paper, we survey the utility of NIRS and its range of applications in HRI. We discuss both some exemplary applications as well as several pressing challenges to the deployment of NIRS in more realistic settings."}
{"_id": "acaa0541c7ba79049a6e39c791b9da4b740b7f4a", "title": "Automatic Code Completion", "text": "Code completion software is an important tool for many developers, but it traditionally fails to model any long term program dependencies such as scoped variables, instead settling for suggestions based on static libraries. In this paper we present a deep learning approach to code completion for non-terminals (program structural components) and terminals (program text) that takes advantage of running dependencies to improve predictions. We develop an LSTM model and augment it with several approaches to Attention in order to better capture the relative value of the input, hidden state, and context. After evaluating on a large dataset of JavaScript programs, we demonstrate that our Gated LSTM model significantly improves over a Vanilla LSTM baseline, achieving an accuracy of 77% on the non-terminal prediction problem and 46% accuracy on the terminal prediction problem."}
{"_id": "d68c5625a09aa2e1d56e432fc86fdb61311d30f5", "title": "Could on-line voting boost desire to vote? - Technology acceptance perceptions of young Hungarian citizens", "text": "Article history: Received 24 May 2015 Received in revised form 2 October 2016 Accepted 18 November 2016 In our paper we develop and test the argument that intent to i-vote (to use on-line voting systems) drives intent to vote, while intent to i-vote is influenced by four key attitudes: performance expectation, perception on ease of use, trust in the internet and trust in the government.We show that these findings contradict thosewhich exclusively identified economical, legal, and cultural drivers to enhance democratic participation in the Central and Eastern European region. Rooted cardinally in the Technology Acceptance Model (TAM) six hypotheses were set, and then tested with partial least square (PLS) structural equation modelling. In the context of young, educated and internet-ready Hungarian voters the testing of the hypotheses has shown high level of on-line voting intent and that perception of on-line voting would enhance voting desire amongst young Hungarian internet users. Also, our findings show that performance expectation, perception on ease of use and trust in the internet are positively associated with i-voting intent. \u00a9 2016 Elsevier Inc. All rights reserved."}
{"_id": "73a4958ca38e4ed5970c22168e021fb9e4793d9e", "title": "Feature selection with harmony search and its applications", "text": "Feature selection is a term given to the problem of selecting important domain attributes which are most predictive of a given outcome. Unlike other dimensionality reduction methods, feature selection approaches seek to preserve the semantics of the original data following reduction. Many strategies have been exploited for this task in an effort to identify more compact and better quality feature subsets. A number of group-based feature subset evaluation measures have been developed, which have the ability to judge the quality of a given feature subset as a whole, rather than assessing the qualities of individual features. Techniques of stochastic nature have also emerged, which are inspired by nature phenomena or social behaviour, allowing good solutions to be discovered without resorting to exhaustive search. In this thesis, a novel feature subset search algorithm termed \u201cfeature selection with harmony search\u201d is presented. The proposed approach utilises a recently developed meta-heuristic: harmony search, that is inspired by the improvisation process of musical players. The proposed approach is general, and can be employed in conjunction with many feature subset evaluation measures. The simplicity of harmony search is exploited to reduce the overall complexity of the search process. The stochastic nature of the resultant technique also allows the search process to escape from local optima, while identifying multiple, distinctive candidate solutions. Additional parameter control schemes are introduced to reduce the effort and impact of static parameter configuration of HS, which are further combined with iterative refinement, in order to enforce the discovery of more compact feature subsets. The flexibility of the proposed approach, and its powerful performance in selecting multiple, good quality feature subsets lead to a number of further theoretical developments. These include the generation and reduction of feature subset-based classifier ensembles; feature selection and adaptive classifier ensemble for dynamic data; hybrid rule induction on the basis of fuzzy-rough set theory; and antecedent selection for fuzzy rule interpolation. The resultant techniques are experimentally evaluated using data sets drawn from real-world problem domains, and systematically compared with leading methodologies in their respective areas, demonstrating the efficacy and competitive performance of the present work."}
{"_id": "76a817f932229bca6356f541a46b84c884336f4c", "title": "Text summarization using enhanced MMR technique", "text": "Now a day when huge amount of documents and web contents are available, so reading of full content is somewhat difficult. Summarization is a way to give abstract form of large document so that the moral of the document can be communicated easily. Current research in automatic summarization is dominated by some effective, yet naive approaches: summarization through extraction, summarization through Abstraction and multi-document summarization. These techniques are used to building a summary of a document. Although there are a number of techniques implemented for the summarization of text for the single document or for the online web data or for any language. Here in this paper we are implemented an efficient technique for text summarization to reduce the computational cost and time and also the storage capacity."}
{"_id": "1d3f159c6b14775be88b0f5e8718fa1046a40077", "title": "Modeling temporal effects of human mobile behavior on location-based social networks", "text": "The rapid growth of location-based social networks (LBSNs) invigorates an increasing number of LBSN users, providing an unprecedented opportunity to study human mobile behavior from spatial, temporal, and social aspects. Among these aspects, temporal effects offer an essential contextual cue for inferring a user's movement. Strong temporal cyclic patterns have been observed in user movement in LBSNs with their correlated spatial and social effects (i.e., temporal correlations). It is a propitious time to model these temporal effects (patterns and correlations) on a user's mobile behavior. In this paper, we present the first comprehensive study of temporal effects on LBSNs. We propose a general framework to exploit and model temporal cyclic patterns and their relationships with spatial and social data. The experimental results on two real-world LBSN datasets validate the power of temporal effects in capturing user mobile behavior, and demonstrate the ability of our framework to select the most effective location prediction algorithm under various combinations of prediction models."}
{"_id": "0c023447f0f230a083f19817a1c363ff96d1cd3d", "title": "Harnessing the Crowdsourcing Power of Social Media for Disaster Relief", "text": "This article briefly describes the advantages and disadvantages of crowdsourcing applications applied to disaster relief coordination. It also discusses several challenges that must be addressed to make crowdsourcing a useful tool that can effectively facilitate the relief progress in coordination, accuracy, and security."}
{"_id": "1ba912f2bf7ade8760f7d5cac300c2c9df7696f9", "title": "BeAMS: A Beacon-Based Angle Measurement Sensor for Mobile Robot Positioning", "text": "Positioning is a fundamental issue in mobile robot applications, and it can be achieved in multiple ways. Among these methods, triangulation based on angle measurements is widely used, robust, accurate, and flexible. This paper presents BeAMS, which is a new active beacon-based angle measurement system used for mobile robot positioning. BeAMS introduces several major innovations. One innovation is the use of a unique unsynchronized channel with on-off keying modulated infrared signals to measure angles and to identify the beacons. We also introduce a new mechanism to measure angles: Our system detects a beacon when it enters and leaves an angular window. We show that the estimator resulting from the center of this angular window provides an unbiased estimate of the beacon angle. A theoretical framework for a thorough performance analysis of BeAMS is provided. We establish the upper bound of the variance and validate this bound through experiments and simulations; the overall error measure of BeAMS is lower than 0.24\u00b0 for an acquisition rate of 10 Hz. In conclusion, BeAMS is a low-power, flexible, and robust solution for angle measurement and a reliable component for robot positioning."}
{"_id": "35b38d29b0eb7fdb3a9a694197a0af5704d68636", "title": "Brain impalement by an angle metal bar", "text": "The author reports the case of a 37-year-old right-handed man who was impaled in the head by an angle metal bar at a construction work site. Impalement injuries of the brain are rare, and their management is complex. The surgical treatment of the injury and the medical management of complications are described in detail. The patient made a good recovery although he has functional deficits related to the injury to his frontal lobes."}
{"_id": "2f9c09120193c1215b1a1ad9c2d81c04a322af92", "title": "CGBoost : Conjugate Gradient in Function Space", "text": "The superior out-of-sample performance of AdaBoost has been attributed to the fact that it minimizes a cost function based on margin, in that it can be viewed as a special case of AnyBoost, an abstract gradient descent algorithm. In this paper, we provide a more sophisticated abstract boosting algorithm, CGBoost, based on conjugate gradient in function space. When the AdaBoost exponential cost function is optimized, CGBoost generally yields much lower cost and training error but higher test error, which implies that the exponential cost is vulnerable to overfitting. With the optimization power of CGBoost, we can adopt more \u201cregularized\u201d cost functions that have better out-of-sample performance but are difficult to optimize. Our experiments demonstrate that CGBoost generally outperforms AnyBoost in cost reduction. With suitable cost functions, CGBoost can have better out-of-sample performance."}
{"_id": "587c46469a8c025a3f18fd1907330a07aa45eac4", "title": "An Efficient Approach to Model-Based Hierarchical Reinforcement Learning", "text": "We propose a model-based approach to hierarchical reinforcement learning that exploits shared knowledge and selective execution at different levels of abstraction, to efficiently solve large, complex problems. Our framework adopts a new transition dynamics learning algorithm that identifies the common action-feature combinations of the subtasks, and evaluates the subtask execution choices through simulation. The framework is sample efficient, and tolerates uncertain and incomplete problem characterization of the subtasks. We test the framework on common benchmark problems and complex simulated robotic environments. It compares favorably against the stateof-the-art algorithms, and scales well in very large problems."}
{"_id": "da29d52adce6dbe02468098a71d79811019fb68c", "title": "Introducing 'Bones': a parallelizing source-to-source compiler based on algorithmic skeletons", "text": "Recent advances in multi-core and many-core processors requires programmers to exploit an increasing amount of parallelism from their applications. Data parallel languages such as CUDA and OpenCL make it possible to take advantage of such processors, but still require a large amount of effort from programmers.\n A number of parallelizing source-to-source compilers have recently been developed to ease programming of multi-core and many-core processors. This work presents and evaluates a number of such tools, focused in particular on C-to-CUDA transformations targeting GPUs. We compare these tools both qualitatively and quantitatively to each other and identify their strengths and weaknesses.\n In this paper, we address the weaknesses by presenting a new classification of algorithms. This classification is used in a new source-to-source compiler, which is based on the algorithmic skeletons technique. The compiler generates target code based on skeletons of parallel structures, which can be seen as parameterisable library implementations for a set of algorithm classes. We furthermore demonstrate that the presented compiler requires little modifications to the original sequential source code, generates readable code for further fine-tuning, and delivers superior performance compared to other tools for a set of 8 image processing kernels."}
{"_id": "52aa38ffa5011d84cb8aae9f1112ce53343bf32c", "title": "Data-Driven De-Anonymization in Bitcoin", "text": "We analyse the performance of several clustering algorithms in the digital peerto-peer currency Bitcoin. Clustering in Bitcoin refers to the task of finding addresses that belongs to the same wallet as a given address. In order to assess the effectiveness of clustering strategies we exploit a vulnerability in the implementation of Connection Bloom Filtering to capture ground truth data about 37,585 Bitcoin wallets and the addresses they own. In addition to well-known clustering techniques, we introduce two new strategies, apply them on addresses of the collected wallets and evaluate precision and recall using the ground truth. Due to the nature of the Connection Bloom Filtering vulnerability the data we collect is not without errors. We present a method to correct the performance metrics in the presence of such inaccuracies. Our results demonstrate that even modern wallet software can not protect its users properly. Even with the most basic clustering technique known as multiinput heuristic, an adversary can guess on average 68.59% addresses of a victim. We show that this metric can be further improved by combining several more sophisticated heuristics."}
{"_id": "0cf71398f7d26a0e7bb757c4dfc35f0570c2f7b8", "title": "A probabilistic model for component-based shape synthesis", "text": "We present an approach to synthesizing shapes from complex domains, by identifying new plausible combinations of components from existing shapes. Our primary contribution is a new generative model of component-based shape structure. The model represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation that can be effectively learned without supervision from a set of compatibly segmented shapes. We evaluate the model on a number of shape datasets with complex structural variability and demonstrate its application to amplification of shape databases and to interactive shape synthesis."}
{"_id": "9b0e2a6af3a6d2757185af42ed79bdf8add9ae1c", "title": "Analysis and Design of an Ultrabroadband Stacked Power Amplifier in CMOS Technology", "text": "This brief presents the analysis and design of a two-stage stacked power amplifier (PA) with very broadband gain frequency response and power performance in a small chip size. The broadband load impedance match is realized using modified stacked field-effect transistors (FETs) with a resistive feedback by analyzing the matching condition of the source input impedance of the stacked FET. In order to further improve the broadband gain frequency response, the effectiveness of a gain expansion from a stacked driver amplifier is demonstrated to compensate the gain compression of the last-stage amplifier. To verify the design concept, a two-stage three-stacked PA has been implemented in a 0.18-\u03bcm CMOS technology. The PA achieves a saturated output power of 22-24.3 dBm and a power added efficiency of 13%-20% within a 194% fractional bandwidth from 0.1 to 6.5 GHz. It also demonstrates better than 11-dB input return loss (RL) and better than 5.1-dB output RL. This PA occupies a chip size of 0.64 mm2 including pads."}
{"_id": "b55a4a4ff3f635af764fe91640973d0c9cdcb76d", "title": "Designing an Indonesian part of speech tagset and manually tagged Indonesian corpus", "text": "We describe our work on designing a linguistically principled part of speech (POS) tagset for the Indonesian language. The process involves a detailed study and analysis of existing tagsets and the manual tagging of an Indonesian corpus. The results of this work are an Indonesian POS tagset consisting of 23 tags and an Indonesian corpus of over 250.000 lexical tokens that have been manually tagged using this tagset."}
{"_id": "ee6941b723349553755827b2b11229246da01854", "title": "Radar Communication for Combating Mutual Interference of FMCW Radars", "text": "Commercial automotive radars used today are based on frequency modulated continuous wave signals due to the simple and robust detection method and good accuracy. However, the increase in both the number of radars deployed per vehicle and the number of such vehicles leads to mutual interference, cutting short future plans for autonomous driving and active safety functionality. We propose and analyze a radar communication (RadCom) approach to reduce this mutual interference while simultaneously offering communication functionality. We achieve this by frequency division multiplexing radar and communication, where communication is built on a decentralized carrier sense multiple access protocol and is used to adjust the timing of radar transmissions. Our simulation results indicate that radar interference can be significantly reduced, at no cost in radar accuracy."}
{"_id": "40eea5e85514360e7efb7de36502937c5e16f20d", "title": "Wealth and happiness across the world: material prosperity predicts life evaluation, whereas psychosocial prosperity predicts positive feeling.", "text": "The Gallup World Poll, the first representative sample of planet Earth, was used to explore the reasons why happiness is associated with higher income, including the meeting of basic needs, fulfillment of psychological needs, increasing satisfaction with one's standard of living, and public goods. Across the globe, the association of log income with subjective well-being was linear but convex with raw income, indicating the declining marginal effects of income on subjective well-being. Income was a moderately strong predictor of life evaluation but a much weaker predictor of positive and negative feelings. Possessing luxury conveniences and satisfaction with standard of living were also strong predictors of life evaluation. Although the meeting of basic and psychological needs mediated the effects of income on life evaluation to some degree, the strongest mediation was provided by standard of living and ownership of conveniences. In contrast, feelings were most associated with the fulfillment of psychological needs: learning, autonomy, using one's skills, respect, and the ability to count on others in an emergency. Thus, two separate types of prosperity-economic and social psychological-best predict different types of well-being."}
{"_id": "1e6f8a2cadc4b475bd2f06d2efb0fe1095048c6f", "title": "A Survey of Recent Results in Networked Control Systems", "text": "Networked control systems (NCSs) are spatially distributed systems for which the communication between sensors, actuators, and controllers is supported by a shared communication network. We review several recent results on estimation, analysis, and controller synthesis for NCSs. The results surveyed address channel limitations in terms of packet-rates, sampling, network delay, and packet dropouts. The results are presented in a tutorial fashion, comparing alternative methodologies"}
{"_id": "f69faa33b0ce5597dbdfb772ef12495f4f1391eb", "title": "Advanced image fusion to overlay coronary sinus anatomy with real-time fluoroscopy to facilitate left ventricular lead implantation in CRT.", "text": "BACKGROUND\nFailure rate for left ventricular (LV) lead implantation in cardiac resynchronization therapy (CRT) is up to 12%. The use of segmentation tools, advanced image registration software, and high-fidelity images from computerized tomography (CT) and cardiac magnetic resonance (CMR) of the coronary sinus (CS) can guide LV lead implantation. We evaluated the feasibility of advanced image registration onto live fluoroscopic images to allow successful LV lead placement.\n\n\nMETHODS\nTwelve patients (11 male, 59 \u00b1 16.8 years) undergoing CRT had three-dimensional (3D) whole-heart imaging (six CT, six CMR). Eight patients had at least one previously failed LV lead implant. Using segmentation software, anatomical models of the cardiac chambers, CS, and its branches were overlaid onto the live fluoroscopy using a prototype version of the Philips EP Navigator software to guide lead implantation.\n\n\nRESULTS\nWe achieved high-fidelity segmentations of cardiac chambers, coronary vein anatomy, and accurate registration between the 3D anatomical models and the live fluoroscopy in all 12 patients confirmed by balloon occlusion angiography. The CS was cannulated successfully in every patient and in 11, an LV lead was implanted successfully. (One patient had no acceptable lead values due to extensive myocardial scar).\n\n\nCONCLUSION\nUsing overlaid 3D segmentations of the CS and cardiac chambers, it is feasible to guide CRT implantation in real time by fusing advanced imaging and fluoroscopy. This enabled successful CRT in a group of patients with previously failed implants. This technology has the potential to facilitate CRT and improve implant success."}
{"_id": "8a9b53ed0399a43f4cb9ed1b43791d7ec6483d43", "title": "Extracting Aspect-Evaluation and Aspect-Of Relations in Opinion Mining", "text": "The technology of opinion extraction allows users to retrieve and analyze people\u2019s opinions scattered over Web documents. We define an opinion unit as a quadruple consisting of the opinion holder, the subject being evaluated, the part or the attribute in which the subject is evaluated, and the value of the evaluation that expresses a positive or negative assessment. We use this definition as the basis for our opinion extraction task. We focus on two important subtasks of opinion extraction: (a) extracting aspect-evaluation relations, and (b) extracting aspect-of relations, and we approach each task using methods which combine contextual and statistical clues. Our experiments on Japanese weblog posts show that the use of contextual clues improve the performance for both tasks."}
{"_id": "628846cfeb03c294626dba3272e9b552fd50b77a", "title": "Being overweight has limited effect on SCARF osteotomy outcome for hallux valgus correction", "text": "The purpose of this study was to investigate the association between body mass index (BMI) and the results of SCARF osteotomy of the first metatarsal for hallux valgus (HV) correction, as the literature on this is scant. This prospective study was carried out between 2011 and 2015. One hundred and thirty-three patients diagnosed with moderate to severe HV underwent a SCARF corrective osteotomy. We divided the patients into two groups according to their BMI: normal and overweight. Postoperative follow-up was two years. All patients were examined twice by two medical doctors simultaneously: pre-operatively and post-operatively at two years\u2019 follow-up. Data collected included biometrical records, X-rays [HV angle (HVA), intermetatarsal angle (IMA), American Orthopaedic Foot and Ankle Society Hallux Metatarsophalangeal Index (AOFAS-HMI) and visual analogue scale (VAS) for pain and satisfaction]. There was a significant difference between patient age (p\u2009=\u20090.001), age at onset (p\u2009<\u20090.001) and AOFAS-HMI (p\u2009=\u20090.035) at follow-up. Other parameters were similar in both groups. Regardless of BMI, the radiological outcome was comparable. Despite a significant difference in AOFAS-HMI results, pain and satisfaction level were similar. The authors agreed that high BMI has protective role in the prevalence of HV."}
{"_id": "f824415989a7863a37e581fdeec2f1d9f4d54f62", "title": "Tweeties Squabbling: Positive and Negative Results in Applying Argument Mining on Social Media", "text": ""}
{"_id": "4abdf7f981612216de354f3dc6ed2b07b5e9f114", "title": "A honeycomb-shaped planar monopole antenna for broadband millimeter-wave applications", "text": "This paper investigates a planar monopole antenna for fifth generation (5G) wireless communication networks. The proposed antenna has an ultra-wide band impedance response in millimeter wave (mmW) spectrum, 25\u201339 GHz covering Ka band. The antenna has unique structural layout resembling hexagonal honeycomb and has low profile (8\u00d77 mm2) on 0.254 mm thick Rogers substrate, enabling the design for incorporation into future mobile phones. This antenna provides peak gain of 4.15 dBi along with 90% efficiency in the working band. The design is also extended to an 8\u00d71 element array presenting maximum gain of 12.7 dBi at central frequency of the antenna."}
{"_id": "2c08a799e591bd7f857903e0e109d399c8ffb634", "title": "Learning to Collaborate: Multi-Scenario Ranking via Multi-Agent Reinforcement Learning", "text": "Ranking is a fundamental and widely studied problem in scenarios such as search, advertising, and recommendation. However, joint optimization for multi-scenario ranking, which aims to improve the overall performance of several ranking strategies in different scenarios, is rather untouched. Separately optimizing each individual strategy has two limitations. The first one is lack of collaboration between scenariosmeaning that each strategy maximizes its own objective but ignores the goals of other strategies, leading to a suboptimal overall performance. The second limitation is the inability of modeling the correlation between scenarios meaning that independent optimization in one scenario only uses its own user data but ignores the context in other scenarios. In this paper, we formulate multi-scenario ranking as a fully cooperative, partially observable, multi-agent sequential decision problem. We propose a novel model named Multi-Agent Recurrent Deterministic Policy Gradient (MA-RDPG) which has a communication component for passing messages, several private actors (agents) for making actions for ranking, and a centralized critic for evaluating the overall performance of the co-working actors. Each scenario is treated as an agent (actor). Agents collaborate with each other by sharing a global action-value function (the critic) and passing messages that encodes historical information across scenarios. The model is evaluated with online settings on a large E-commerce platform. Results show that the proposed model exhibits significant improvements against baselines in terms of the overall performance."}
{"_id": "5ae02cc91484688c38d8c08bffc15958020518c4", "title": "Deep Multi-task Gaussian Processes for Survival Analysis with Competing Risks", "text": "Designing optimal treatment plans for patients with comorbidities requires accurate cause-specific mortality prognosis. Motivated by the recent availability of linked electronic health records, we develop a nonparametric Bayesian model for survival analysis with competing risks, which can be used for jointly assessing a patient\u2019s risk of multiple (competing) adverse outcomes. The model views a patient\u2019s survival times with respect to the competing risks as the outputs of a deep multi-task Gaussian process (DMGP), the inputs to which are the patients\u2019 covariates. Unlike parametric survival analysis methods based on Cox and Weibull models, our model uses DMGPs to capture complex non-linear interactions between the patients\u2019 covariates and cause-specific survival times, thereby learning flexible patient-specific and cause-specific survival curves, all in a data-driven fashion without explicit parametric assumptions on the hazard rates. We propose a variational inference algorithm that is capable of learning the model parameters from time-to-event data while handling right censoring. Experiments on synthetic and real data show that our model outperforms the state-of-the-art survival models."}
{"_id": "ce8fdd05afa2f2306a36f93bf8e25a81db8fa3ae", "title": "Fairness in Package-to-Group Recommendations", "text": "Recommending packages of items to groups of users has several applications, including recommending vacation packages to groups of tourists, entertainment packages to groups of friends, or sets of courses to groups of students. In this paper, we focus on a novel aspect of package-to-group recommendations, that of fairness. Specifically, when we recommend a package to a group of people, we ask that this recommendation is fair in the sense that every group member is satisfied by a sufficient number of items in the package. We explore two definitions of fairness and show that for either definition the problem of finding the most fair package is NP-hard. We exploit the fact that our problem can be modeled as a coverage problem, and we propose greedy algorithms that find approximate solutions within reasonable time. In addition, we study two extensions of the problem, where we impose category or spatial constraints on the items to be included in the recommended packages. We evaluate the appropriateness of the fairness models and the performance of the proposed algorithms using real data from Yelp, and a user study."}
{"_id": "ad967c33442eba19e8012d1c79efa969a5814839", "title": "Natural Image Colorization", "text": "In this paper, we present an interactive system for users to easily colorize the natural images of complex scenes. In our system, colorization procedure is explicitly separated into two stages: Color labeling and Color mapping. Pixels that should roughly share similar colors are grouped into coherent regions in the color labeling stage, and the color mapping stage is then introduced to further fine-tune the colors in each coherent region. To handle textures commonly seen in natural images, we propose a new color labeling scheme that groups not only neighboring pixels with similar intensity but also remote pixels with similar texture. Motivated by the insight into the complementary nature possessed by the highly contrastive locations and the smooth locations, we employ a smoothness map to guide the incorporation of intensity-continuity and texture-similarity constraints in the design of our labeling algorithm. Within each coherent region obtained from the color labeling stage, the color mapping is applied to generate vivid colorization effect by assigning colors to a few pixels in the region. A set of intuitive interface tools is designed for labeling, coloring and modifying the result. We demonstrate compelling results of colorizing natural images using our system, with only a modest amount of user input."}
{"_id": "0dc46bb1828cce90d3fbf240e4ed3a2249a5b79c", "title": "An Optimized Transformerless Photovoltaic Grid-Connected Inverter", "text": "Unipolar sinusoidal pulsewidth modulation (SPWM) full-bridge inverter brings high-frequency common-mode voltage, which restricts its application in transformerless photovoltaic grid-connected inverters. In order to solve this problem, an optimized full-bridge structure with two additional switches and a capacitor divider is proposed in this paper, which guarantees that a freewheeling path is clamped to half input voltage in the freewheeling period. Sequentially, the high-frequency common-mode voltage has been avoided in the unipolar SPWM full-bridge inverter, and the output current flows through only three switches in the power processing period. In addition, a clamping branch makes the voltage stress of the added switches be equal to half input voltage. The operation and clamping modes are analyzed, and the total losses of power device of several existing topologies and proposed topology are fairly calculated. Finally, the common-mode performance of these topologies is compared by a universal prototype inverter rated at 1 kW."}
{"_id": "176cc8ef67871579ca4206d0a59f736fe147b80c", "title": "A New High-Efficiency Single-Phase Transformerless PV Inverter Topology", "text": "There is a strong trend in the photovoltaic inverter technology to use transformerless topologies in order to acquire higher efficiencies combining with very low ground leakage current. In this paper, a new topology, based on the H-bridge with a new ac bypass circuit consisting of a diode rectifier and a switch with clamping to the dc midpoint, is proposed. The topology is simulated and experimentally validated, and a comparison with other existing topologies is performed. High conversion efficiency and low leakage current are demonstrated."}
{"_id": "2555b5ff7c8b3dc16541ff76b9b073bea9f11d29", "title": "Leakage Current Analytical Model and Application in Single-Phase Transformerless Photovoltaic Grid-Connected Inverter", "text": "Due to the characteristics of low cost and high efficiency, the transformerless photovoltaic (PV) grid-connected inverters have been popularized in the application of solar electric generation system in residential market. Unfortunately, the leakage current through the stray capacitors between the PV array and the ground is harmful. This paper focuses on the leakage current suppressing method, in which all common-mode paths are considered. First, the common-mode analytical model at switching frequency is developed, and the rules of eliminating switching frequency common-mode source are summarized based on this model. The existing full-bridge- and half-bridge-type converters have been analyzed by using the developed model and rules, and then, a new full-bridge-type converter structure and a compensation strategy for half-bridge-type inverter have been presented finally."}
{"_id": "74a1a96742a869d8dc777a9a47a2de6d4ff97554", "title": "Cyber-Physical System Security of a Power Grid : State-ofthe-Art", "text": "As part of the smart grid development, more and more technologies are developed and deployed on the power grid to enhance the system reliability. A primary purpose of the smart grid is to significantly increase the capability of computer-based remote control and automation. As a result, the level of connectivity has become much higher, and cyber security also becomes a potential threat to the cyber-physical systems (CPSs). In this paper, a survey of the state-of-the-art is conducted on the cyber security of the power grid concerning issues of: (1) the structure of CPSs in a smart grid; (2) cyber vulnerability assessment; (3) cyber protection systems; and (4) testbeds of a CPS. At Washington State University (WSU), the Smart City Testbed (SCT) has been developed to provide a platform to test, analyze and validate defense mechanisms against potential cyber intrusions. A test case is provided in this paper to demonstrate how a testbed helps the study of cyber security and the anomaly detection system (ADS) for substations."}
{"_id": "cc6cdc2dfda4c23e7fb60eae63e96eda44630161", "title": "The paradox of received social support: the importance of responsiveness.", "text": "Although the perception of available support is associated with positive outcomes, the receipt of actual support from close others is often associated with negative outcomes. In fact, support that is \"invisible\" (not perceived by the support recipient) is associated with better outcomes than \"visible\" support. To investigate this paradox, we proposed that received support (both visible and invisible) would be beneficial when it was responsive to the recipient's needs. Sixty-seven cohabiting couples participated in a daily-experience study in which they reported on the support they provided and received each day. Results indicated that both visible and invisible support were beneficial (i.e., associated with less sadness and anxiety and with greater relationship quality) only when the support was responsive. These findings suggest that the nature of support is an important determinant of when received support will be beneficial."}
{"_id": "813ea9404d3483c7b3172c5579946f7cd8cf38ab", "title": "The study of evasion of packed PE from static detection", "text": "Static detection of packed portable executables (PEs) relies primarily on structural properties of the packed PE, and also on anomalies in the packed PE caused by packing tools. This paper outlines weaknesses in this method of detection. We show that these structural properties and anomalies are contingent features of the executable, and can be more or less easily modified to evade static detection."}
{"_id": "73d3006f7e41ea084b464f868091564432c46e28", "title": "BAliBASE: a benchmark alignment database for the evaluation of multiple alignment programs", "text": "SUMMARY\nBAliBASE is a database of manually refined multiple sequence alignments categorized by core blocks of conservation sequence length, similarity, and the presence of insertions and N/C-terminal extensions.\n\n\nAVAILABILITY\nFrom http://www-igbmc. u-strasbg.fr/BioInfo/BAliBASE/index.html"}
{"_id": "11f1f4af26e9b8a033e06c9fb00e5fb8794156e2", "title": "Pneumatic muscle actuators within robotic and mechatronic systems", "text": "Pneumatic artificial muscles (PAMs) as soft, lightweight and compliant actuators have great potential in applications for the actuations of new types of robots and manipulators. The favourable characteristics of fluidic muscles, such as high power-to-weight ratio and safe interaction with humans are also very suitable during the process of musculoskeletally rehabilitating patients and are often used in making artificial orthoses. This technology, despite the problems of control relatng to nonlinear phenomena, may also have wide future applications within industrial and mechatronic systems. This paper presents several experimental systems actuated by PAMs, which have been designed as test models within the fields of mobile robots, mechatronics, fluid power systems and the feedback control education of mechanical engineering students. This paper first presents the design and construction of a four legged walking robot actuated by pneumatic muscles. The robot has a fully autonomous system with a wireless application platform and can be controlled using a cell phone. Then the paper describes the design and construction of the prototype of an actively-powered ankle foot orthosis. This orthosis device actuated by a single PAM is able to provide the appropriate functions required during the rehabilitations of patients and the loss of mobility. Then the paper focuses on the design and control of a ball and beam system with an antagonistic muscle pair for generating the necessary torque for beam rotation. This mechatronic balancing mechanism falls into the category of unstable, under-actuated, multivariable systems with highly nonlinear dynamics. The final section of the article presents the design and control of a single-joint manipulator arm with pneumatic muscle actuators that enable some new features for the controlled systems. Fluid Power 2015 176"}
{"_id": "5cb332dbb797cadd573ebc5f6d2149786e09cc47", "title": "Diabetes management in the primary care setting: a comparison of physicians' performance by gender.", "text": "BACKGROUND\nA major shift in the gender of the medical-doctor workforce is now underway, and all over the world it is expected that an average 65% of the medical workforce will be women by 2030. In addition, an aging population means that chronic diseases, such as diabetes, are becoming more prevalent and the demand for care is rising. There is growing evidence of female physicians performing better than male physicians.AimOur study aimed to investigate whether any differences in diabetes process indicators are associated with gender, and/or the interaction between gender and different organizational models.Design and settingA population-based cross-sectional analysis was conducted on a large data set obtained by processing the public health administration databases of seven Italian local health units (LHUs). The seven LHUs, distributed all over the Italian peninsula in seven different regions, took part in a national project called MEDINA, with the focus on chronic disease management in primary care (PC).\n\n\nMETHODS\nA total score was calculated for the average performance in the previously listed five indicators, representing global adherence to a quality management of patients with diabetes. A multilevel analysis was applied to see how LHUs affected the outcome. A quantile regression model was also fitted.\n\n\nRESULTS\nOur study included 2287 Italian general practitioners (586 of them female) caring for a total of 2 646 059 patients. Analyzing the performance scores confirmed that female general practitioners obtained better results than males. The differences between males and females were stronger on the 25th and 75th percentiles of the score than on the median values. The interaction between gender and LHU was not significant.\n\n\nCONCLUSION\nOur study evidenced that female physicians perform better than males in providing PC for diabetes independently by the different organizational models. Further research to understand the reasons for these gender differences is needed."}
{"_id": "fc67c8e8894e58f6211a33d4dbdf13371d09c33e", "title": "Direct type-specific conic fitting and eigenvalue bias correction", "text": "A new method to fit specific types of conics to scattered data points is introduced. Direct, specific fitting of ellipses and hyperbolae is achieved by imposing a quadratic constraint on the conic coefficients, whereby an improved partitioning of the design matrix is devised so as to improve computational efficiency and numerical stability by eliminating redundant aspects of the fitting procedure. Fitting of parabolas is achieved by determining an orthogonal basis vector set in the Grassmannian space of the quadratic terms\u2019 coefficients. The linear combination of the basis vectors that fulfills the parabolic condition and has a minimum residual norm is determined using Lagrange multipliers. This is the first known direct solution for parabola specific fitting. Furthermore, the inherent bias of a linear conic fit is addressed. We propose a linear method of correcting this bias, producing better geometric fits which are still constrained to specific conic type. 2007 Published by Elsevier B.V."}
{"_id": "392fdd9c8944b27ff1f89ae53ab658989452de38", "title": "Articulatory feature-based pronunciation modeling", "text": "Spoken language, especially conversational speech, is characterized by great variability in word pronunciation, including many variants that differ grossly from dictionary prototypes. This is one factor in the poor performance of automatic speech recognizers on conversational speech, and it has been very difficult to mitigate in traditional phonebased approaches to speech recognition. An alternative approach, which has been studied by ourselves and others, is one based on sub-phonetic features rather than phones. In such an approach, a word\u2019s pronunciation is represented as multiple streams of phonological features rather than a single stream of phones. Features may correspond to the positions of the speech articulators, such as the lips and tongue, or may be more abstract categories such as manner and place. This article reviews our work on a particular type of articulatory feature-based pronunciation model. The model allows for asynchrony between features, as well as per-feature substitutions, making it more natural to account for many pronunciation changes that are difficult to handle with phone-based models. Such models can be efficiently represented as dynamic Bayesian networks. The feature-based models improve significantly over phone-based counterparts in terms of frame perplexity and lexical access accuracy. The remainder of the article discusses related work and future directions."}
{"_id": "4d0c1c5005d0dd48374ce5ff78a4eb30ea22d58d", "title": "Attacking an AES-Enabled NFC Tag: Implications from Design to a Real-World Scenario", "text": "Radio-frequency identification (RFID) technology is the enabler for applications like the future internet of things (IoT), where security plays an important role. When integrating security to RFID tags, not only the cryptographic algorithms need to be secure but also their implementation. In this work we present differential power analysis (DPA) and differential electromagnetic analysis (DEMA) attacks on a securityenabled RFID tag. The attacks are conducted on both an ASIC-chip version and on an FPGA-prototype version of the tag. The design of the ASIC version equals that of commercial RFID tags and has analog and digital part integrated on a single chip. Target of the attacks is an implementation of the Advanced Encryption Standard (AES) with 128-bit key length and DPA countermeasures. The countermeasures are shuffling of operations and insertion of dummy rounds. Our results illustrate that the effort for successfully attacking the ASIC chip in a real-world scenario is only 4.5 times higher than for the FPGA prototype in a laboratory environment. This let us come to the conclusion that the effort for attacking contactless devices like RFID tags is only slightly higher than that for contact-based devices. The results further underline that the design of countermeasures like the insertion of dummy rounds has to be done with great care, since the detection of patterns in power or electromagnetic traces can be used to significantly lower the attacking effort."}
{"_id": "47225c992d7086cf5d113942212edb4a57401130", "title": "Building Program Vector Representations for Deep Learning", "text": "Deep learning has made significant breakthroughs in various fields of artificial intelligence. Advantages of deep learning include the ability to capture highly complicated features, weak involvement of human engineering, etc. However, it is still virtually impossible to use deep learning to analyze programs since deep architectures cannot be trained effectively with pure back propagation. In this pioneering paper, we propose the \u201ccoding criterion\u201d to build program vector representations, which are the premise of deep learning for program analysis. Our representation learning approach directly makes deep learning a reality in this new field. We evaluate the learned vector representations both qualitatively and quantitatively. We conclude, based on the experiments, the coding criterion is successful in building program representations. To evaluate whether deep learning is beneficial for program analysis, we feed the representations to deep neural networks, and achieve higher accuracy in the program classification task than \u201cshallow\u201d methods, such as logistic regression and the support vector machine. This result confirms the feasibility of deep learning to analyze programs. It also gives primary evidence of its success in this new field. We believe deep learning will become an outstanding technique for program analysis in the near future."}
{"_id": "29982474bf1c57e4c84b0b0850dd89675fcb9162", "title": "A case against convexity in conceptual spaces", "text": "The notion of conceptual space, proposed by G\u00e4rdenfors as a framework for the representation of concepts and knowledge, has been highly influential over the last decade or so. One of the main theses involved in this approach is that the conceptual regions associated with properties, concepts, verbs, etc. are convex. The aim of this paper is to show that such a constraint\u2014that of the convexity of the geometry of conceptual regions\u2014is problematic; both from a theoretical perspective and with regard to the inner workings of the theory itself. On the one hand, all the arguments provided in favor of the convexity of conceptual regions rest on controversial assumptions. Additionally, his argument in support of a Euclidean metric, based on the integral character of conceptual dimensions, is weak, and under non-Euclidean metrics the structure of regions may be non-convex. Furthermore, even if the metric were Euclidean, the convexity constraint could be not satisfied if concepts were differentially weighted. On the other hand, G\u00e4rdenfors\u2019 convexity constraint is brought into question by the own inner workings of conceptual spaces because: (i) some of the allegedly convex properties of concepts are not convex; (ii) the conceptual regions resulting from the combination of convex properties can be non-convex; (iii) convex regions may co-vary in non-convex ways; and (iv) his definition of verbs is incompatible with a definition of properties in terms of convex regions. Therefore, the mandatory character of the convexity requirement for regions in a conceptual space theory should be reconsidered."}
{"_id": "f09009dd03600485517c373b55eeea34fe83b72c", "title": "Inductive Design of Maturity Models: Applying the Rasch Algorithm for Design Science Research", "text": "Maturity models are an established means to systematically document and guide the development of organizations using archetypal capability levels. Often, these models lack a sound foundation and/or are derived on the basis of an arbitrary design method. In order to foster the design of relevant and rigorous artifacts, this paper presents a method for maturity model construction that applies the Rasch algorithm and cluster analysis as a sound methodical foundation. The Rasch algorithm is widely used to improve scholarly intelligence and attainment tests. In order to demonstrate the application of the proposed method and to evaluate its usability and applicability, we present a design exemplar in the business intelligence domain."}
{"_id": "01a59bbc0be0635fb9aa2d23f71cd4cc06175685", "title": "Increased prefrontal and parietal activity after training of working memory", "text": "Working memory capacity has traditionally been thought to be constant. Recent studies, however, suggest that working memory can be improved by training. In this study, we have investigated the changes in brain activity that are induced by working memory training. Two experiments were carried out in which healthy, adult human subjects practiced working memory tasks for 5 weeks. Brain activity was measured with functional magnetic resonance imaging (fMRI) before, during and after training. After training, brain activity that was related to working memory increased in the middle frontal gyrus and superior and inferior parietal cortices. The changes in cortical activity could be evidence of training-induced plasticity in the neural systems that underlie working memory."}
{"_id": "8e87d95d6eb52f5245251fc30a4283f143074fbf", "title": "Aerial-Image Denoising Based on Convolutional Neural Network with Multi-Scale Residual Learning Approach", "text": "Aerial images are subject to various types of noise, which restricts the recognition and analysis of images, target monitoring, and search services. At present, deep learning is successful in image recognition. However, traditional convolutional neural networks (CNNs) extract the main features of an image to predict directly and are limited by the requirements of the training sample size (i.e., a small data size is not successful enough). In this paper, using a small sample size, we propose an aerial-image denoising recognition model based on CNNs with a multi-scale residual learning approach. The proposed model has the following three advantages: (1) Instead of directly learning latent clean images, the proposed model learns the noise from noisy images and then subtracts the learned residual from the noisy images to obtain reconstructed (denoised) images; (2) The developed image denoising recognition model is beneficial to small training datasets; (3) We use multi-scale residual learning as a learning approach, and dropout is introduced into the model architecture to force the network to learn to generalize well enough. Our experimental results on aerial-image denoising recognition reveal that the proposed approach is highly superior to the other state-of-the-art methods."}
{"_id": "bcffdf8f1b8214f249cfe7b2d68f433546ac3d51", "title": "RECONCEPTUALISING GAMIFICATION : PLAY AND PEDAGOGY", "text": "Gamification is a complex and controversial concept. It has been both embraced as a marketing and education revolution, and dismissed as practice of exploitation. Contested within the debate around gamification has been the very concept of what a game is, what the core mechanics of games are, and whether gamification truly mobilises these core mechanics. This paper will challenge the foundation of this debate through reconceptualising gamification not as a simple set of techniques and mechanics, but as a pedagogic heritage, an alternative framework for training and shaping participant behaviour that has at its core the concepts of entertainment and engagement. In doing so it will recontextualise current practices of gamification into a longer and deeper history, and suggest potential pathways for more sophisticated gamification in the future."}
{"_id": "51cb48aa9022983a1b4f7fcfe994f224dda8b8a5", "title": "Minimum Near-Convex Shape Decomposition.", "text": "Shape decomposition is a fundamental problem for part-based shape representation. We propose the Minimum Near-Convex Decomposition (MNCD) to decompose arbitrary shapes into minimum number of "near-convex" parts. The near-convex shape decomposition is formulated as a discrete optimization problem by minimizing the number of non-intersecting cuts. Two perception rules are imposed as constraints into our objective function to improve the visual naturalness of the decomposition. With the degree of near-convexity a user specified parameter, our decomposition is robust to local distortions and shape deformation. The optimization can be efficiently solved via Binary Integer Linear Programming. Both theoretical analysis and experiment results show that our approach outperforms the state-of-the-art results without introducing redundant parts, and thus leads to robust shape representation."}
{"_id": "1aa9c2a204196952580ec661bae2a8aa17cee6d3", "title": "A 3-kW Wireless Power Transfer System for Sightseeing Car Supercapacitor Charge", "text": "Wireless power transfer (WPT) is the preferred charging method for electric vehicles (EVs) powered by battery and supercapacitor. In this paper, a novel WPT system with constant current charging capability for sightseeing car with supercapacitor storage is designed. First, an optimized magnetic coupler using ferrite cores and magnetic shielding structure is proposed to ensure stable power transfer and high efficiency. Compared with the traditional planar shape ferrite core coupler, the proposed magnetic coupler requires lesser ferrite material without degrading the performance of the WPT system. Second, the model of supercapacitor is applied to the WPT system and the relationship between equivalent load resistances of supercapacitor and charging time is analyzed in detail. Then, a Buck converter with proportional integral (PI) controller is implemented on the secondary side to maintain constant charging current for the variable load. Finally, the proposed design is verified by experiments. Constant charging current of 31.5\u00a0A across transfer distance of 15\u00a0cm is achieved. The peak transfer power and system efficiency are 2.86\u00a0kW and 88.05%, respectively."}
{"_id": "42d2a8508953dad947aa39ca348fabda02bc944e", "title": "Step Climbing Omnidirectional Mobile Robot with Passive Linkages", "text": "In this paper, we develop a holonomic omni-directional mobile vehicle with step-climbing ability. This system realizes omni-directional motion on flat floor using special wheels and passes over the step in forward or backward direction using the passive linkage mechanism. Our key ideas are the following topics. First topic is new omnidirectional mobile mechanism which consists of special wheels and passive linkage mechanism. Second topic is new passive linkage mechanism which can change its body configuration easily when the vehicle passes over the step. Third topic is wheel control reference derivation based on the body configuration, which changes passively during step climbing for reducing wheel slippage. Last topic is wheel control method which keeps the rotation velocity coordination among the wheels for reducing wheel slippage and increasing the step-climbing performance. We verified the effectiveness of our proposed method by the computer simulations and experiments. Utilizing our proposed mechanism and control systems, the vehicle has both omnidirectional mobility and step overcoming function. Furthermore, our developing vehicle can pass over the 128[mm] height step using the wheel which radius is 66[mm]. Its performance is three times larger than one of general wheeled vehicle."}
{"_id": "624527d8b92bc6d423784113ded0b9fd639add00", "title": "Pay Attention to the Ending: Strong Neural Baselines for the ROC Story Cloze Task", "text": "We consider the ROC story cloze task (Mostafazadeh et al., 2016) and present several findings. We develop a model that uses hierarchical recurrent networks with attention to encode the sentences in the story and score candidate endings. By discarding the large training set and only training on the validation set, we achieve an accuracy of 74.7%. Even when we discard the story plots (sentences before the ending) and only train to choose the better of two endings, we can still reach 72.5%. We then analyze this \u201cending-only\u201d task setting. We estimate human accuracy to be 78% and find several types of clues that lead to this high accuracy, including those related to sentiment, negation, and general ending likelihood regardless of the story context."}
{"_id": "bb829ff159aef1861ee7e9ce1531ef31aa6b4674", "title": "Convenient or Useful ? Consumer Adoption of Smartphones for Mobile Commerce", "text": "Merchants have developed apps for the smartphone that assist consumers through the buying process, from when they gather information to when they decide to purchase. Sharing information, such as location, shopping preferences and financial data, can enhance consumers\u2019 experience. However, they may be reluctant to disclose these details unless they perceive that the benefits gained are more than the risk of privacy loss. This privacy calculus is added to the unified theory of acceptance and use of technology (UTAUT2) in order to explain consumers\u2019 willingness to exchange the disclosure of personal information for additional value. Sharing information makes mobile commerce more convenient by saving time and effort. Companies are able to send offers that are tailored to a specific customer. Payments are processed faster because the merchant already has the financial data on hand. UTAUT2 is further extended with the Theory of Convenience. Results from a survey of over 300 consumers show that perceived value and perceived convenience are influencing variables and that perceived value mediates the influence of perceived convenience on intention to use."}
{"_id": "958340c7ccd205ed7670693fa9519f9c140e372d", "title": "DeepLogo: Hitting Logo Recognition with the Deep Neural Network Hammer", "text": "Recently, there has been a flurry of industrial activity around logo recognition, such as Ditto\u2019s service for marketers to track their brands in user-generated images, and LogoGrab\u2019s mobile app platform for logo recognition. However, relatively little academic or open-source logo recognition progress has been made in the last four years. Meanwhile, deep convolutional neural networks (DCNNs) have revolutionized a broad range of object recognition applications. In this work, we apply DCNNs to logo recognition. We propose several DCNN architectures, with which we surpass published state-of-art accuracy on a popular logo recognition dataset."}
{"_id": "df682aa90fbbbf665a8b273a57ca87d6cea9ff99", "title": "Hidden Markov Models for Speech Recognition", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "16c8421040aabc6d8b1c8b2cb48bf03bf565d672", "title": "Procedural Generation of Roads", "text": "In this paper, we propose an automatic method for generating roads based on a weighted anisotropic shortest path algorithm. Given an input scene, we automatically create a path connecting an initial and a final point. The trajectory of the road minimizes a cost function that takes into account the different parameters of the scene including the slope of the terrain, natural obstacles such as rivers, lakes, mountains and forests. The road is generated by excavating the terrain along the path and instantiating generic parameterized models."}
{"_id": "442fa58964d1fb30ffb335ee1e96701fbb4e8b78", "title": "Arc-Length Parameterized Spline Curves for Real-Time Simulation", "text": "Parametric curves are frequently used in computer animation and virtual environments to control the movements of synthetic objects. In many of these applications it is essential to efficiently relate parameter values to the arc length of the curve. Current approaches to compute arc length or to construct an arc-length parameterized curve are impractical to use in real-time applications. This paper presents a simple and efficient technique to generate approximately arc-length parameterized spline curves that closely match spline curves typically used to model roads in high-fidelity driving simulators."}
{"_id": "5d05aad3241323e8da2959b6499c294e2b900798", "title": "Congested traffic states in empirical observations and microscopic simulations", "text": "We present data from several German freeways showing different kinds of congested traffic forming near road inhomogeneities, specifically lane closings, intersections, or uphill gradients. The states are localized or extended, homogeneous or oscillating. Combined states are observed as well, like the coexistence of moving localized clusters and clusters pinned at road inhomogeneities, or regions of oscillating congested traffic upstream of nearly homogeneous congested traffic. The experimental findings are consistent with a recently proposed theoretical phase diagram for traffic near on-ramps [D. Helbing, A. Hennecke, and M. Treiber, Phys. Rev. Lett. 82, 4360 (1999)]. We simulate these situations with a continuous microscopic single-lane model, the \"intelligent driver model,\" using empirical boundary conditions. All observations, including the coexistence of states, are qualitatively reproduced by describing inhomogeneities with local variations of one model parameter. We show that the results of the microscopic model can be understood by formulating the theoretical phase diagram for bottlenecks in a more general way. In particular, a local drop of the road capacity induced by parameter variations has essentially the same effect as an on-ramp."}
{"_id": "6059d75e3dba73e43cc93762c677a962f5d903d9", "title": "Computing the arc length of parametric curves", "text": "Specifying constraints on motion is simpler if the curve is parameterized by arc length, but many parametric curves of practical interest cannot be parameterized by arc length. An approximate numerical reparameterization technique that improves on a previous algorithm by using a different numerical integration procedure that recursively subdivides the curve and creates a table of the subdivision points is presented. The use of the table greatly reduces the computation required for subsequent arc length calculations. After table construction, the algorithm takes nearly constant time for each arc length calculation. A linear increase in the number of control points can result in a more than linear increase in computation. Examples of this type of behavior are shown.<>"}
{"_id": "786043deb56d3f9dec4f825b8db8940a28df20c6", "title": "Approximate Arc Length Parametrization", "text": "Current approaches to compute the arc length of a parametric curve rely on table lookup schemes. We present an approximate closed-form solution to the problem of computing an arc length parametrization for any given parametric curve. Our solution outputs a one or two-span B\u00e9zier curve which relates the length of the curve to the parametric variable. The main advantage of our approach is that we obtain a simple continuous function relating the length of the curve and the parametric variable. This allows the length to be easily computed given the parametric values. Tests with our algorithm on several thousand curves show that the maximum error in our approximation is 8.7% and that the average of maximum errors is 1.9%. Our algorithm is fast enough to compute the closed-form solution in a fraction of a second. After that a user can interactively get an approximation of the arc length for an arbitrary parameter value."}
{"_id": "d41111f03e0765bb9dfca00e35ca7bbd0402ee29", "title": "NIH Image to ImageJ: 25 years of image analysis", "text": "For the past 25 years NIH Image and ImageJ software have been pioneers as open tools for the analysis of scientific images. We discuss the origins, challenges and solutions of these two programs, and how their history can serve to advise and inform other software projects."}
{"_id": "322a6e1f464330751dea2eb6beecac24466322ad", "title": "Graph Database Applications and Concepts with Neo4j", "text": "Graph databases (GDB) are now a viable alternative to Relational Database Systems (RDBMS). Chemistry, biology, semantic web, social networking and recommendation engines are all examples of applications that can be represented in a much more natural form. Comparisons will be drawn between relational database systems (Oracle, MySQL) and graph databases (Neo4J) focusing on aspects such as data structures, data model features and query facilities. Additionally, several of the inherent and contemporary limitations of current offerings comparing and contrasting graph vs. relational database implementations will be explored."}
{"_id": "087337fdad69caaab8ebd8ae68a731c5bf2e8b14", "title": "Fully Convolutional Networks for Semantic Segmentation", "text": "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, improve on the previous best result in semantic segmentation. Our key insight is to build \u201cfully convolutional\u201d networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional networks achieve improved segmentation of PASCAL VOC (30% relative improvement to 67.2% mean IU on 2012), NYUDv2, SIFT Flow, and PASCAL-Context, while inference takes one tenth of a second for a typical image."}
{"_id": "7cc3b68f4af1bf23580a26c9d180ca3f9ca2bcca", "title": "Computational Decomposition of Style for Controllable and Enhanced Style Transfer", "text": "Neural style transfer has been demonstrated to be powerful in creating artistic image with help of Convolutional Neural Networks (CNN). However, there is still lack of computational analysis of perceptual components of the artistic style. Different from some early attempts which studied the style by some pre-processing or post-processing techniques, we investigate the characteristics of the style systematically based on feature map produced by CNN. First, we computationally decompose the style into basic elements using not only spectrum based methods including Fast Fourier Transform (FFT), Discrete Cosine Transform (DCT) but also latent variable models such Principal Component Analysis (PCA), Independent Component Analysis (ICA). Then, the decomposition of style induces various ways of controlling the style elements which could be embedded as modules in state-of-the-art style transfer algorithms. Such decomposition of style brings several advantages. It enables the computational coding of different artistic styles by our style basis with similar styles clustering together, and thus it facilitates the mixing or intervention of styles based on the style basis from more than one styles so that compound style or new style could be generated to produce styled images. Experiments demonstrate the effectiveness of our method on not only painting style transfer but also sketch style transfer which indicates possible applications on picture-to-sketch problems."}
{"_id": "fcd48a3e79f5507db52624353f36ba2afb294b37", "title": "A CMOS low-power low-offset and high-speed fully dynamic latched comparator", "text": "This paper presents a novel dynamic latched comparator that demonstrates lower offset voltage and higher load drivability than the conventional dynamic latched comparators. With two additional inverters inserted between the input- and output-stage of the conventional double-tail dynamic comparator, the gain preceding the regenerative latch stage is improved. The complementary version of the regenerative latch stage, which provides larger output drive current than the conventional one at a limited area, is implemented. The proposed circuit is designed using 90nm CMOS technology and 1V power supply voltage, and it demonstrates up to 19% less offset voltage and 62% less sensitivity of the delay to the input voltage difference (17ps/decade) than the conventional double-tail latched comparator at approximately the same area and power consumption."}
{"_id": "2d8123fe4a8bb0db2578a53228afc1eaf607971e", "title": "A DYNAMIC FUZZY-COGNITIVE-MAP APPROACH BASED ON RANDOM NEURAL NETWORKS", "text": "A fuzzy cognitive map is a graphical means of representing arbitrarily complex models of interrelations between concepts. The purpose of this paper is to describe a dynamic fuzzy cognitive map based on the random neural network model. Previously, we have developed a random fuzzy cognitive map and illustrated its application in the modeling of processes. The adaptive fuzzy cognitive map changes its fuzzy causal web as causal patterns change and as experts update their causal knowledge. Our model carries out inferences via numerical calculation instead of symbolic deduction. We show how the dynamic random fuzzy cognitive map can reveal implications of models composed of dynamic processes. Copyright c \u00a92003 Yang\u2019s Scientific Research Institute, LLC. All rights reserved."}
{"_id": "12e736212dcf79e3a03356c1100a2f6bcabf5aaf", "title": "Eclectic extraction of propositional rules from Neural Networks", "text": "Artificial Neural Network is among the most popular algorithm for supervised learning. However, Neural Networks have a well-known drawback of being a \u201cBlack Box\u201d learner that is not comprehensible to the Users. This lack of transparency makes it unsuitable for many high risk tasks such as medical diagnosis that requires a rational justification for making a decision. Rule Extraction methods attempt to curb this limitation by extracting comprehensible rules from a trained Network. Many such extraction algorithms have been developed over the years with their respective strengths and weaknesses. They have been broadly categorized into three types based on their approach to use internal model of the Network. Eclectic Methods are hybrid algorithms that combine the other approaches to attain more performance. In this paper, we present an Eclectic method called HERETIC. Our algorithm uses Inductive Decision Tree learning combined with information of the neural network structure for extracting logical rules. Experiments and theoretical analysis show HERETIC to be better in terms of speed and performance."}
{"_id": "fbd578f0059bcabe6c17315dc83457093f8dde17", "title": "High Efficiency AC \u2013 AC Power Electronic Converter Applied To Domestic Induction Heating", "text": "Induction heating is a well-known technique to produce very high temperature for applications. A large number of topologies have been developed in this area such as voltage and current source inverter. This project presents the analysis and design of a new ac\u2013ac resonant converter applied to domestic induction heating. The proposed topology, based on the halfbridge series resonant inverter, uses only two diodes to rectify the mains voltage.The proposed converter can operate with zerovoltage switching during both switches-on and switch-off transitions. This topology doubles the output voltage, and therefore, the current in the load is reduced for the same output power. As a consequence, the converter efficiency is significantly improved.The output power is controlled using the PWM technique. This results in an increase of the net efficiency of the induction heating system. The circuit is simulated using MATLAB. The circuit is implemented using PIC controller. Both simulation and hardware results are compared."}
{"_id": "08a4fa5caead14285131f6863b6cd692540ea59a", "title": "Censoring Representations with an Adversary", "text": "In practice, there are often explicit constraints on what representations or decisions are acceptable in an application of machine learning. For example it may be a legal requirement that a decision must not favour a particular group. Alternatively it can be that that representation of data must not have identifying information. We address these two related issues by learning flexible representations that minimize the capability of an adversarial critic. This adversary is trying to predict the relevant sensitive variable from the representation, and so minimizing the performance of the adversary ensures there is little or no information in the representation about the sensitive variable. We demonstrate this adversarial approach on two problems: making decisions free from discrimination and removing private information from images. We formulate the adversarial model as a minimax problem, and optimize that minimax objective using a stochastic gradient alternate min-max optimizer. We demonstrate the ability to provide discriminant free representations for standard test problems, and compare with previous state of the art methods for fairness, showing statistically significant improvement across most cases. The flexibility of this method is shown via a novel problem: removing annotations from images, from separate training examples of annotated and unannotated images, and with no a priori knowledge of the form of annotation provided to the model."}
{"_id": "75b9ed937f20d1472f4fcd2da9349ff78319117c", "title": "EHRs Connect Research and Practice: Where Predictive Modeling, Artificial Intelligence, and Clinical Decision Support Intersect", "text": "Objectives: Electronic health records (EHRs) are only a first step in capturing and utilizing healthrelated data\u2014the challenge is turning that data into useful information. Furthermore, EHRs are increasingly likely to include data relating to patient outcomes, functionality such as clinical decision support, and genetic information as well, and, as such, can be seen as repositories of increasingly valuable information about patients\u2019 health conditions and responses to treatment over time. Methods: We describe a case study of 423 patients treated by Centerstone within Tennessee and Indiana in which we utilized electronic health record data to generate predictive algorithms of individual patient treatment response. Multiple models were constructed using predictor variables derived from clinical, financial and geographic data. Results: For the 423 patients, 101 deteriorated, 223 improved and in 99 there was no change in clinical condition. Based on modeling of various clinical indicators at baseline, the highest accuracy in predicting individual patient response ranged from 70% to 72% within the models tested. In terms of individual predictors, the Centerstone Assessment of Recovery Level\u2014Adult (CARLA) baseline score was most significant in predicting outcome over time (odds ratio 4.1+2.27). Other variables with consistently significant impact on outcome included payer, diagnostic category, location and provision of case management services. Conclusions: This approach represents a promising avenue toward reducing the current gap between research and practice across healthcare, developing data-driven clinical decision support based on real-world populations, and serving as a component of embedded clinical artificial intelligences that \u2018\u2018learn\u2019\u2019 over time. & 2012 Fellowship of Postgraduate Medicine. Published by Elsevier Ltd. All rights reserved. 2211-8837/$ see front matter & 2012 Fellowship of Postgraduate Medicine. Published by Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.hlpt.2012.03.001 Corresponding author at: Department of Informatics, Centerstone Research Institute, 365 South Park Ridge Road, Suite 103, Bloomington, IN 47401, United States. Tel.: +1 812 337 2302. E-mail addresses: Casey.Bennett@CenterstoneResearch.org (C.C. Bennett), Tom.Doub@CenterstoneResearch.org (T.W. Doub), RebeccaSelove@CenterstoneResearch.org (R. Selove). Health Policy and Technology (2012) 1, 105\u2013114 Author's personal copy"}
{"_id": "e0fee2e4c9f919344e5f50f2db9cd03ecdb3c535", "title": "Full-wave design of H-plane contiguous manifold output multiplexers using the fictitious reactive load concept", "text": "In this paper, a new procedure for the design of contiguous band manifold output multiplexers in H-plane configuration is presented. Such a configuration eliminates the slots between the T-junctions and the filters allowing high power handling, as well as dramatically reducing the risk of multipactor. This paper pursues the division of the design into tasks with low computational effort. The approach comprises: 1) the singly terminated synthesis of the channel filters and their corresponding full-wave responses; 2) the manifold design using fictitious reactive loads simulating the phase response of every channel out of its corresponding band; and 3) the final full-wave optimization of the whole structure using the simulated-annealing method. In order to validate the above procedure, a five-channel Ku-band multiplexer has been designed, manufactured, and measured."}
{"_id": "505b42d36db44a97a23bada015ae781d2e1dc767", "title": "Highly Parallel Genome-wide Expression Profiling of Individual Cells Using Nanoliter Droplets", "text": "Cells, the basic units of biological structure and function, vary broadly in type and state. Single-cell genomics can characterize cell identity and function, but limitations of ease and scale have prevented its broad application. Here we describe Drop-seq, a strategy for quickly profiling thousands of individual cells by separating them into nanoliter-sized aqueous droplets, associating a different barcode with each cell's RNAs, and sequencing them all together. Drop-seq analyzes mRNA transcripts from thousands of individual cells simultaneously while remembering transcripts' cell of origin. We analyzed transcriptomes from 44,808 mouse retinal cells and identified 39 transcriptionally distinct cell populations, creating a molecular atlas of gene expression for known retinal cell classes and novel candidate cell subtypes. Drop-seq will accelerate biological discovery by enabling routine transcriptional profiling at single-cell resolution. VIDEO ABSTRACT."}
{"_id": "63aa6b35e392e26c94ae252623863b4934c0a958", "title": "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks", "text": "Recent trojan attacks on deep neural network (DNN) models are one insidious variant of data poisoning attacks. Trojan attacks exploit an effective backdoor created in a DNN model by leveraging the difficulty in interpretability of the learned model to misclassify any inputs signed with the attacker\u2019s chosen trojan trigger input. Trojan attacks are easy to craft; survive even in adverse conditions such as different viewpoints, and lighting conditions on images; and threaten real world applications such as autonomous vehicles and robotics. The trojan trigger is a secret guarded and exploited by the attacker. Therefore, detecting such trojaned inputs is a challenge, especially at run-time when models are in active operation. We focus on vision systems and build the STRong Intentional Perturbation (STRIP) based run-time trojan attack detection system. We intentionally perturb the incoming input, for instance by superimposing various image patterns, and observe the randomness of the predicted classes for the perturbed inputs for a given model\u2014malicious or benign\u2014under deployment. A low entropy in the predicted classes violates the input-dependence property of a benign model and implies the presence of a malicious input\u2014a characteristic of a trojaned input. The high efficacy of our method is validated through case studies on two popular and contrasting datasets: MNIST and CIFAR10. We achieve an overall false acceptance rate (FAR) of less than 1%, given a preset false rejection rate (FRR) of 1%, for four different tested trojan trigger types\u2014three triggers are identified in previous attack works and one dedicated trigger is crafted by us to demonstrate the trigger-size insensitivity of the STRIP detection approach. In particular, on the dataset of natural images in CIFAR10, we have empirically achieved the desired result of 0% for both FRR and FAR."}
{"_id": "d7805eee3daef814140001a6c59fda004266b3c8", "title": "Introduction to This is Watson", "text": ""}
{"_id": "9fde9bd944feb74fde14d1c89510e9ac82540f68", "title": "A formal ontological perspective on the behaviors and functions of technical artifacts", "text": "In this paper we present a formal characterization of the engineering concepts of behavior and function of technical artifacts. We capture the meanings that engineers attach to these concepts by formalizing, within the formal ontology DOLCE, the five meanings of artifact behavior and the two meanings of function that Chandrasekaran and Josephson identified in 2000 within the Functional Representation approach. We begin our formalization by reserving the term \u2018behavior\u2019 of a technical artifact as \u2018the specific way in which the artifact occurs in an event.\u2019 This general notion is characterized formally, and used to provide definitions of actual behaviors of artifacts, and the physically possible and physically impossible behaviors that rational agents believe that artifacts have. We also define several other notions, e.g., input and output behaviors of artifacts, and then show that these ontologically characterizedconcepts give a general framework in which Chandrasekaran and Josephson\u2019s meanings of behavior can be explicitly formalized. Finally we show how Chandrasekaran and Josephson\u2019s two meanings of artifact functions, namely, device-centric and environmental-centric functions, can be captured in DOLCE via the concepts of behavioral constraint and mode of deployment of an artifact. A more general goal of this work is to show that foundational ontologies are suited to the engineering domain: they can facilitate information sharing and exchange in the various engineering domains by providing concept structures and clarifications that make explicit and precise important engineering notions. The meanings of the terms \u2018behavior\u2019 and \u2018function\u2019 in domains like designing, redesigning, reverse engineering, product architecture, and engineering knowledge bases are often ambiguous or overloaded. Our results show that foundational ontologies can accommodate the variety of denotations these terms have and can explain their relationships."}
{"_id": "4dd6d59ae1310e9fbb15d441d2451e90ab0a6bd7", "title": "THE DEVELOPMENT OF THE BALANCED SCORECARD AS A STRATEGIC MANAGEMENT TOOL", "text": "The Balanced Scorecard is a widely adopted performance management framework first described in the early 1990s. More recently it has been proposed as the basis for a \u2018strategic management system\u2019. This paper describes its evolution, recognising three distinct generations of Balanced Scorecard design. The paper relates the empirically driven developments in Balanced Scorecard thinking with literature concerning strategic management within organisations. It concludes that developments to date have been worthwhile, highlights potential areas for further refinement, and sets out some possible topics for future research into the field. The Balanced Scorecard and its development The Balanced Scorecard was first introduced in the early 1990s through the work of Robert Kaplan and David Norton of the Harvard Business School. Since then, the concept has become well known and its various forms widely adopted across the world (Rigby, 2001). By combining financial measures and non-financial measures in a single report, the Balanced Scorecard aims to provide managers with richer and more relevant information about activities they are managing than is provided by financial measures alone. To aid clarity and utility, Kaplan and Norton proposed that the number of measures on a Balanced Scorecard should also be constrained in number, and clustered into four groups (Kaplan and Norton, 1992, 1993). Beyond this, the original definition of Balanced Scorecard was sparse. But from the outset it was clear that the selection of measures, both in terms of filtering (organisations typically had access to many more measures than were needed to populate the Balanced Scorecard) and clustering (deciding which measures should appear in which perspectives) would be a key activity. Kaplan and Norton proposed that measure selection should focus on information relevant to the implementation of strategic plans, and that simple attitudinal questions be used to help determine the appropriate allocation of measures to perspectives (Kaplan and Norton, 1992). In essence the Balanced Scorecard has remained unchanged since these early papers, having at its core a limited number of measures clustered into groups, and an underlying strategic focus. But modern Balanced Scorecard designs also have a number of features that clearly differentiate them from earlier examples. This paper describes these changes as an evolution through three distinct \u2018generations\u2019 of Balanced Scorecard design. 1 Generation Balanced Scorecard Balanced Scorecard was initially described as a simple, \u201c4 box\u201d approach to performance measurement (Kaplan and Norton, 1992). In addition to financial measures, managers were encouraged to look at measures drawn from three other \u201cperspectives\u201d of the business: Learning and Growth; Internal Business Process; and Customer, chosen to represent the major stakeholders in a business (Mooraj et al, 1999). Definition of what comprised a Balanced Scorecard was sparse and focused on the high level structure of the device. Simple \u2018causality\u2019 between the four perspectives was illustrated but not used for specific purpose. Kaplan and Norton\u2019s original paper\u2019s focus was on the selection and reporting of a limited number of measures in each of the four perspectives (Kaplan and Norton, 1992). 
The paper suggested use of attitudinal questions relating to the vision and goals of the organisation to help in the selection of measures to be used, and also encouraged the consideration of \u2018typical\u2019 areas of interest in this process. Kaplan and Norton\u2019s original work makes no specific observations concerning how the Balanced Scorecard might improve the performance of organisations; the implication is that the provision of accessible relevant measurement data itself will trigger improved organisational performance. However, they do imply that the source of these improvements is changes in behaviour: \u201cIt establishes goals but assumes that people will adopt whatever behaviours and take whatever actions are necessary to arrive at those goals\u201d. In the light of this, the basis for selecting the goals represented by the Balanced Scorecard is of some importance. But in their first paper Kaplan and Norton say little about how a Balanced Scorecard could be developed in practice beyond a general assertion that design involved \u201cputting vision and strategy at the centre of the measurement system\u201d (1992). Later writing includes increasing amounts of proscription about development methods, concluding with a lengthy description of one such process in their first book on the subject published in 1996. Figure 1 \u2013 1 Generation Balanced Scorecard Figure 1 shows a diagrammatic representation of Kaplan and Norton\u2019s original Balanced Scorecard design. We will subsequently refer to this type of Balanced Scorecard design as a \u20181 Generation\u2019 Balanced Scorecard. Key attributes of the design and suggested development process for this type of Balanced Scorecard are summarised in the table below. Practical Experiences with 1 Generation Balanced Scorecards The authors\u2019 professional experience suggests that 1 Generation Balanced Scorecards are still being developed, and that they probably still form the large majority of Balanced Scorecard designs introduced into organisations. This is reflected in the literature, where books and articles that use more advanced representations of Balanced Scorecard are only recently appearing (Olve et al, 1999, Kaplan and Norton, 2000, Niven, 2002). But despite its huge popularity as a concept, and apparently widespread adoption, relatively few detailed case studies concerning Balanced Scorecard implementation experiences appear to exist in the academic literature. Those few that do focus primarily on the architecture of the Balanced Scorecard design (e.g. Butler et al, 1997), and associated organisational experiences (e.g. Ahn, 2001). Commercial / practitioner writing on Balanced Scorecard is more extensive (e.g. Schneiderman, 1999), but often more partisan (e.g. Lingle et al 1996). But in general the literature endorses the utility of the approach (Epstein et al, 1997) but notes weaknesses in the initial design proposition, and recommends improvements (e.g. Eagleson et al, 2000, Kennerley et al, 2000). 2 Generation Balanced Scorecard The practical difficulties associated with the design of 1 Generation Balanced Scorecards are significant, in part because the definition of a Balanced Scorecard was initially vague, allowing for considerable interpretation. Two significant areas of concern were filtering (the process of choosing specific measures to report), and clustering (deciding how to group measures into \u2018perspectives\u2019). Discussions relating to clustering continue to be rehearsed in the literature (e.g. 
Butler et al, 1997, Kennerley et al, 2000), but discussions relating to filtering are less common, and usually appear as part of descriptions of methods of Balanced Scorecard design (e.g. Kaplan and Norton, 1996, Olve et al, 1999). Perhaps the most significant early change translated the attitudinal approach to measure selection proposed initially be Kaplan and Norton (e.g. \u201cTo succeed financially, how should we appear to our shareholders?\u201d) into a process that yielded a few appropriate key measures of performance in each perspective. A solution was the introduction of the concept of \u2018strategic objectives\u2019 (Kaplan and Norton, 1993). Initially these were represented as short sentences attached to the four perspectives, and were used to capture the essence of the organisation\u2019s strategy material to each of the areas: measures were then selected that reflected achievement of these strategic objectives. Although subtle, this approach to measure selection quite different from that initially proposed, since strategic objectives were developed directly from strategy statements based on a corporate vision or a strategic plan. Another key development concerned causality. Causality between the perspectives had been introduced in early \u20181st Generation\u2019 Balanced Scorecard thinking (see Figure 1). \u20182nd Generation\u2019 Balanced Scorecard saw the idea of causality developed further. Instead of simply highlighting causal links between perspectives, internal documents from one consulting firm\u2019s work in 1993 shows an early attempt to indicate linkages between the measures themselves. This improvement was also proposed later by others (Newing, 1995). Measure based linkages provided a richer model of causality than before, but presented conceptual problems \u2013 for example, the use of measures encouraged attempts to \u2018prove\u2019 the causality between measures using various forms of analysis (indeed this is still the case \u2013 e.g. Brewer, 2002). Collectively the changes in design described here represent a materially different definition of what comprises a Balanced Scorecard compared to Kaplan and Norton\u2019s original work we will refer to Balanced Scorecards that incorporate these developments as \u20182 Generation Balanced Scorecards\u2019. The impact of these changes were characterised by Kaplan and Norton in 1996 as enabling the Balanced Scorecard to evolve from \u201can improved measurement system to a core management system\u201d (Kaplan and Norton 1996). Maintaining the focus that Balanced Scorecard was intended to support the management of strategic implementation, Kaplan and Norton further described the use of this development of the Balanced Scorecard as the central element of \u201ca strategic management system\u201d. One consequence of this change in emphasis was to increase the pressure on the design process to accurately reflect the organisation\u2019s strategic goals. Over time the idea of strategic linkage became an increasingly important element of Balanced Scorecard design methodology, and in the mid 1990\u2019s Balanced Scorecard documentation began to show graphically linkages between the strategic objectives themselves (rather than the measures) with causality linking across the perspectives toward key objectives re"}
{"_id": "4e5da90c1fe907f1dab34637a1e35157ea937d95", "title": "An agent-based decision support system for wholesale electricity market", "text": "Application software has been developed for analyzing and understanding a dynamic price change in the US wholesale power market. Traders can use the software as an effective decision-making tool by modeling and simulating a power market. The software uses different features of a decision support system by creating a framework for assessing new trading strategies in a competitive electricity trading environment. The practicality of the software is confirmed by comparing its estimation accuracy with those of other methods (e.g., neural network and genetic algorithm). The software has been applied to a data set regarding the California electricity crisis in order to examine whether the learning (convergence) speed of traders is different between the two periods (before and during the crisis). Such an application confirms the validity of the proposed software. \u00a9 2007 Elsevier B.V. All rights reserved."}
{"_id": "b8a36458cd5f3e6b0874c42bdf03caad6640a206", "title": "Tuning of PID controller using Ziegler-Nichols method for speed control of DC motor", "text": "In this paper, a weighted tuning methods of a PID speed controller for separately excited Direct current motor is presented, based on Empirical Ziegler-Nichols tuning formula and modified Ziegler-Nichol PID tuning formula. Both these methods are compared on the basis of output response, minimum settling time, and minimum overshoot for speed demand application of DC motor. Computer simulation shows that the performance of PID controller using Modified Ziegler-Nichols technique is better than that of traditional Ziegler-Nichols technique."}
{"_id": "8eedf5059d5c49ecc39019c005f208d46bc6039f", "title": "Comparative analysis of RADAR-IR sensor fusion methods for object detection", "text": "This paper presents the Radar and IR sensor fusion method for objection detection. The infrared camera parameter calibration with Levenberg-Marquardt (LM) optimization method is proposed based on the Radar ranging data represented by Cartesian coordinate compared with 6 fusion methods. The proposed method firstly performs the estimation of the intrinsic parameter matrix of infrared camera with some optical trick. Then the method searches the extrinsic parameters using the generative approach. The initial angle and translation of the extrinsic parameters are optimized by the LM method with the geometrical cost function. In the experiments, the performance of proposed method outperforms by a maximum 13 times the performance of the other baseline methods on the averaged Euclidian distance error. In future work, the angular noise of the Radar information will be improved and the proposed method will provide the effective proposals for the deep neural network."}
{"_id": "733a27d423e4f26b2ed64d85561df85d066960a2", "title": "Noise-Immune Embedded NAND-ROM Using a Dynamic Split Source-Line Scheme for VDDmin and Speed Improvements", "text": "Embedded NAND-type read-only-memory (NAND-ROM) provides large-capacity, high-reliability, on-chip non-volatile storage. However, NAND-ROM suffers from code-dependent read noises and cannot survive at low supply voltages (VDDs). These code-dependent read noises are primarily due to the charge-sharing effect, bitline leakage current, and crosstalk between bitlines, which become worse at lower VDD. This study proposes a dynamic split source-line (DSSL) scheme for NAND-ROM. The proposed scheme overcomes code-dependent read noises while improving the read access time and suppressing the active-mode gate leakage current, with only a 1% area penalty in the cell array. Experiments on a fabricated 256 Kb macro using a 90 nm industrial logic process demonstrate that the proposed DSSL scheme achieves 100% code-pattern coverage under a small sensing margin. Additionally, the DSSL NAND-ROM works with a wide range of supply voltages (1-0.31 V) with a 38%, 45.8%, and 37% improvement in speed, power, and standby current, respectively, at VDD = 1 V."}
{"_id": "988c10748a66429dda79d02bc5eb57c64f9768fb", "title": "FlowQA: Grasping Flow in History for Conversational Machine Comprehension", "text": "Conversational machine comprehension requires a deep understanding of the conversation history. To enable traditional, single-turn models to encode the history comprehensively, we introduce FLOW, a mechanism that can incorporate intermediate representations generated during the process of answering previous questions, through an alternating parallel processing structure. Compared to shallow approaches that concatenate previous questions/answers as input, FLOW integrates the latent semantics of the conversation history more deeply. Our model, FLOWQA, shows superior performance on two recently proposed conversational challenges (+7.2% F1 on CoQA and +4.0% on QuAC). The effectiveness of FLOW also shows in other tasks. By reducing sequential instruction understanding to conversational machine comprehension, FLOWQA outperforms the best models on all three domains in SCONE, with +1.8% to +4.4% improvement in accuracy."}
{"_id": "135c89b491f82bd4fd7de175ac778207f598342b", "title": "Not All Neural Embeddings are Born Equal", "text": "Neural language models learn word representations that capture rich linguistic and conceptual information. Here we investigate the embeddings learned by neural machine translation models. We show that translation-based embeddings outperform those learned by cutting-edge monolingual models at single-language tasks requiring knowledge of conceptual similarity and/or syntactic role. The findings suggest that, while monolingual models learn information about how concepts are related, neural-translation models better capture their true ontological status. It is well known that word representations can be learned from the distributional patterns in corpora. Originally, such representations were constructed by counting word co-occurrences, so that the features in one word\u2019s representation corresponded to other words [11, 17]. Neural language models, an alternative means to learn word representations, use language data to optimise (latent) features with respect to a language modelling objective. The objective can be to predict either the next word given the initial words of a sentence [4, 14, 8], or simply a nearby word given a single cue word [13, 15]. The representations learned by neural models (sometimes called embeddings) generally outperform those acquired by co-occurrence counting models when applied to NLP tasks [3]. Despite these clear results, it is not well understood how the architecture of neural models affects the information encoded in their embeddings. Here, we explore this question by considering the embeddings learned by architectures with a very different objective function to monolingual language models: neural machine translation models. We show that translation-based embeddings outperform monolingual embeddings on two types of task: those that require knowledge of conceptual similarity (rather than simply association or relatedness), and those that require knowledge of syntactic role. We discuss what the findings indicate about the information content of different embeddings, and suggest how this content might emerge as a consequence of the translation objective. 1 Learning embeddings from language data Both neural language models and translation models learn real-valued embeddings (of specified dimension) for words in some pre-specified vocabulary, V , covering many or all words in their training corpus. At each training step, a \u2018score\u2019 for the current training example (or batch) is computed based on the embeddings in their current state. This score is compared to the model\u2019s objective function, and the error is backpropagated to update both the model weights (affecting how the score is computed from the embeddings) and the embedding features. At the end of this process, the embeddings should encode information that enables the model to optimally satisfy its objective. 1.1 Monolingual models In the original neural language model [4] and subsequent variants [8], each training example consists of n subsequent words, of which the model is trained to predict the n-th word given the first n \u2212 1 ar X iv :1 41 0. 07 18 v2 [ cs .C L ] 1 3 N ov 2 01 4 1 words. The model first represents the input as an ordered sequence of embeddings, which it transforms into a single fixed length \u2018hidden\u2019 representation by, e.g., concatenation and non-linear projection. 
Based on this representation, a probability distribution is computed over the vocabulary, from which the model can sample a guess at the next word. The model weights and embeddings are updated to maximise the probability of correct guesses for all sentences in the training corpus. More recent work has shown that high quality word embeddings can be learned via models with no nonlinear hidden layer [13, 15]. Given a single word in the corpus, these models simply predict which other words will occur nearby. For each word w in V , a list of training cases (w, c) : c \u2208 V is extracted from the training corpus. For instance, in the skipgram approach [13], for each \u2018cue word\u2019 w the \u2018context words\u2019 c are sampled from windows either side of tokens of w in the corpus (with c more likely to be sampled if it occurs closer to w).1 For each w in V , the model initialises both a cue-embedding, representing the w when it occurs as a cue-word, and a context-embedding, used when w occurs as a context-word. For a cue word w, the model can use the corresponding cueembedding and all context-embeddings to compute a probability distribution over V that reflects the probability of a word occurring in the context of w. When a training example (w, c) is observed, the model updates both the cue-word embedding of w and the context-word embeddings in order to increase the conditional probability of c. 1.2 Translation-based embeddings Neural translation models generate an appropriate sentence in their target language St given a sentence Ss in their source language [see, e.g., 16, 6]. In doing so, they learn distinct sets of embeddings for the vocabularies Vs and Vt in the source and target languages respectively. Observing a training case (Ss, St), such a model represents Ss as an ordered sequence of embeddings of words from Vs. The sequence for Ss is then encoded into a single representation RS .2 Finally, by referencing the embeddings in Vt, RS and a representation of what has been generated thus far, the model decodes a sentence in the target language word by word. If at any stage the decoded word does not match the corresponding word in the training target St, the error is recorded. The weights and embeddings in the model, which together parameterise the encoding and decoding process, are updated based on the accumulated error once the sentence decoding is complete. Although neural translation models can differ in low-level architecture [7, 2], the translation objective exerts similar pressure on the embeddings in all cases. The source language embeddings must be such that the model can combine them to form single representations for ordered sequences of multiple words (which in turn must enable the decoding process). The target language embeddings must facilitate the process of decoding these representations into correct target-language sentences. 2 Comparing Mono-lingual and Translation-based Embeddings To learn translation-based embeddings, we trained both the RNN encoder-decoder [RNNenc, 7] and the RNN Search architectures [2] on a 300m word corpus of English-French sentence pairs. We conducted all experiments with the resulting (English) source embeddings from these models. For comparison, we trained a monolingual skipgram model [13] and its Glove variant [15] for the same number of epochs on the English half of the bilingual corpus. We also extracted embeddings from a full-sentence language model [CW, 8] trained for several months on a larger 1bn word corpus. 
As in previous studies [1, 5, 3], we evaluate embeddings by calculating pairwise (cosine) distances and correlating these distances with (gold-standard) human judgements. Table 1 shows the correlations of different model embeddings with three such gold-standard resources, WordSim-353 [1], MEN [5] and SimLex-999 [10]. Interestingly, translation embeddings perform best on SimLex-999, while the two sets of monolingual embeddings perform better on modelling the MEN and WordSim353. To interpret these results, it should be noted that SimLex-999 evaluation quantifies conceptual similarity (dog wolf ), whereas MEN and WordSim-353 (despite its name) quantify more general relatedness (dog collar) [10]. The results seem to indicate that translation-based embeddings better capture similarity, while monolingual embeddings better capture relatedness. 1 Subsequent variants use different algorithms for selecting the (w, c) from the training corpus [9, 12] Alternatively, subsequences (phrases) of Ss may be encoded at this stage in place of the whole sentence [2]."}
{"_id": "1d2a4018b4fc6a5f498e65d68260615dbc9e7ec6", "title": "Entity Linking meets Word Sense Disambiguation: a Unified Approach", "text": "Entity Linking (EL) and Word Sense Disambiguation (WSD) both address the lexical ambiguity of language. But while the two tasks are pretty similar, they differ in a fundamental respect: in EL the textual mention can be linked to a named entity which may or may not contain the exact mention, while in WSD there is a perfect match between the word form (better, its lemma) and a suitable word sense. In this paper we present Babelfy, a unified graph-based approach to EL and WSD based on a loose identification of candidate meanings coupled with a densest subgraph heuristic which selects high-coherence semantic interpretations. Our experiments show state-of-the-art performances on both tasks on 6 different datasets, including a multilingual setting. Babelfy is online at http://babelfy.org"}
{"_id": "32cf9f4c97d74f8ee0d8a16e01cd4fd62d330650", "title": "Align, Disambiguate and Walk: A Unified Approach for Measuring Semantic Similarity", "text": "Semantic similarity is an essential component of many Natural Language Processing applications. However, prior methods for computing semantic similarity often operate at different levels, e.g., single words or entire documents, which requires adapting the method for each data type. We present a unified approach to semantic similarity that operates at multiple levels, all the way from comparing word senses to comparing text documents. Our method leverages a common probabilistic representation over word senses in order to compare different types of linguistic data. This unified representation shows state-ofthe-art performance on three tasks: semantic textual similarity, word similarity, and word sense coarsening."}
{"_id": "e5ffd0d05ba5f0171721de5f770f61f2663afe39", "title": "Cycling Down Under : A Comparative Analysis of Bicycling Trends and Policies in Sydney and Melbourne", "text": "The purpose of this paper is to document and explain differences in cycling between Australia\u2019s two largest cities. Our comparative case study analysis is based on a wide range of statistical datasets, secondary reports, and interviews with a panel of 22 bicycling policy and planning experts. The main finding is that cycling levels in Melbourne are roughly twice as high as in Sydney and have been growing three times as fast in recent years. The difference is due to Melbourne\u2019s more favorable topography, climate, and road network as well as more supportive public policies. In particular, Melbourne has more and better integrated cycling infrastructure as well as more extensive cycling programs, advocacy, and promotional events. Melbourne also benefits from safer cycling than Sydney, which suffers from a lack of traffic-protected cycling facilities and aggressive motorist behavior toward cyclists on the road. While cycling has been increasing in Australia, it remains at very low levels relative to northern Europe, where both land use and transport policies are far more supportive of bicycling while discouraging car use through numerous restrictions and financial disincentives."}
{"_id": "324b3e5f126c9d6b7438a1336484fd2c07942c4c", "title": "An attack scenario and mitigation mechanism for enterprise BYOD environments", "text": "The recent proliferation of the Internet of Things (IoT) technology poses major security and privacy concerns. Specifically, the use of personal IoT devices, such as tablets, smartphones, and even smartwatches, as part of the Bring Your Own Device (BYOD) trend, may result in severe network security breaches in enterprise environments. Such devices increase the attack surface by weakening the digital perimeter of the enterprise network and opening new points of entry for malicious activities. In this paper we demonstrate a novel attack scenario in an enterprise environment by exploiting the smartwatch device of an innocent employee. Using a malicious application running on a suitable smartwatch, the device imitates a real Wi-Fi direct printer service in the network. Using this attack scenario, we illustrate how an advanced attacker located outside of the organization can leak/steal sensitive information from the organization by utilizing the compromised smartwatch as a means of attack. An attack mitigation process and countermeasures are suggested in order to limit the capability of the remote attacker to execute the attack on the network, thus minimizing the data leakage by the smartwatch."}
{"_id": "31181e73befea410e25de462eccd0e74ba8fea0b", "title": "Introduction to Algorithms, 3rd Edition", "text": ""}
{"_id": "0e6f5abd7e4738b765cd48f4c272093ecb5fd0bc", "title": "Impact of an augmented reality system on students' motivation for a visual art course", "text": ""}
{"_id": "b4320a830664e3e80155d943f628c97053c5f1bc", "title": "OTPaaS\u2014One Time Password as a Service", "text": "Conventional password-based authentication is considered inadequate by users as many online services started to affect each other. Online credentials are used to recover other credentials and complex attacks are directed to the weakest one of many of these online credentials. As researchers are looking for new authentication techniques, one time passwords, which is a two-factor authentication scheme, looks like a natural enhancement over conventional username/password schemes. The manuscript places the OTP verifier to the cloud to ease adoption of its usage by cloud service providers. When the OTP verifier is placed on the cloud as a service, other cloud service providers could outsource their OTP deployments as well as cloud users could activate their respective account on the OTP provider on several cloud services. This enables them to use several cloud services without the difficulty of managing several OTP accounts for each cloud service. On the other hand, OTP service provision saves inexperienced small to medium enterprises from spending extra costs for OTP provisioning hardware, software, and employers. The paper outlines architecture to build a secure, privacy-friendly, and sound OTP provider in the cloud to outsource the second factor of authentication. Cloud user registration to OTP provider, service provider activation, and authentication phases are inspected. The security and privacy considerations of the proposed architecture are defined and analyzed. Attacks from outsiders, unlinkability properties of user profiles, attacks from curious service providers or OTP verifiers are mitigated within the given assumptions. The proposed solution, which locates the OTP provider in the cloud, is rendered robust and sound as a result of the analysis."}
{"_id": "6b69fd4f192d655f5f0fd94bdae81e85ee3d4c75", "title": "POLSAR Image Classification via Wishart-AE Model or Wishart-CAE Model", "text": "Neural network such as an autoencoder (AE) and a convolutional autoencoder (CAE) have been successfully applied in image feature extraction. For the statistical distribution of polarimetric synthetic aperture radar (POLSAR) data, we combine the Wishart distance measurement into the training process of the AE and the CAE. In this paper, a new type of AE and CAE is specially defined, which we name them Wishart-AE (WAE) and Wishart-CAE (WCAE). Furthermore, we connect the WAE or the WCAE with a softmax classifier to compose a classification model for the purpose of POLSAR image classification. Compared with AE and CAE models, WAE and WCAE models can achieve higher classification accuracy because they could obtain the classification features, which are more suitable for POLSAR data. What is more, the WCAE model utilizes the local spatial information of a POLSAR image when compared with the WAE model. A convolutional natural network (CNN), which also makes use of the spatial information, has been widely applied in image classification, but our WCAE model is time-saving than the CNN model. Given the above, our methods not only improve the classification performance but also save the experimental time. Experimental results on four POLSAR datasets also demonstrate that our proposed methods are significantly effective."}
{"_id": "e216d9a5d5801f79ce88efa9c5ca7fa503f0cc75", "title": "Comparison of database replication techniques based on total order broadcast", "text": "In this paper, we present a performance comparison of database replication techniques based on total order broadcast. While the performance of total order broadcast-based replication techniques has been studied in previous papers, this paper presents many new contributions. First, it compares with each other techniques that were presented and evaluated separately, usually by comparing them to a classical replication scheme like distributed locking. Second, the evaluation is done using a finer network model than previous studies. Third, the paper compares techniques that offer the same consistency criterion (one-copy serializability) in the same environment using the same settings. The paper shows that, while networking performance has little influence in a LAN setting, the cost of synchronizing replicas is quite high. Because of this, total order broadcast-based techniques are very promising as they minimize synchronization between replicas."}
{"_id": "1de3667abbed271d2ae933ee9bdf8de3ee010f58", "title": "A printed 16 ports massive MIMO antenna system with directive port beams", "text": "The design of a massive multiple-input-multiple-output (mMIMO) antenna system based on patch antennas is presented in this paper. The array consists of 16 ports, each port consists of a 2\u00d72 patch antenna array with different phase excitation at each element to tilt the beam toward different directions to provide lower correlation coefficient values. A fixed progressive phase feed network is designed to provide the beam tilts. The proposed antenna system is designed using a 3-layer FR4 substrate with total size of 33.33\u00d733.33\u00d70.16 cm3."}
{"_id": "2787fafe4aa765d8716821df9ef9183dd9372323", "title": "Social influence based clustering of heterogeneous information networks", "text": "Social networks continue to grow in size and the type of information hosted. We witness a growing interest in clustering a social network of people based on both their social relationships and their participations in activity based information networks. In this paper, we present a social influence based clustering framework for analyzing heterogeneous information networks with three unique features. First, we introduce a novel social influence based vertex similarity metric in terms of both self-influence similarity and co-influence similarity. We compute self-influence and co-influence based similarity based on social graph and its associated activity graphs and influence graphs respectively. Second, we compute the combined social influence based similarity between each pair of vertices by unifying the self-similarity and multiple co-influence similarity scores through a weight function with an iterative update method. Third, we design an iterative learning algorithm, SI-Cluster, to dynamically refine the K clusters by continuously quantifying and adjusting the weights on self-influence similarity and on multiple co-influence similarity scores towards the clustering convergence. To make SI-Cluster converge fast, we transformed a sophisticated nonlinear fractional programming problem of multiple weights into a straightforward nonlinear parametric programming problem of single variable. Our experiment results show that SI-Cluster not only achieves a better balance between self-influence and co-influence similarities but also scales extremely well for large graph clustering."}
{"_id": "4d01dc1a9709f1b491ea2c3ceeafdcdf3875cef3", "title": "Unemployment alters the set point for life satisfaction.", "text": "According to set-point theories of subjective well-being, people react to events but then return to baseline levels of happiness and satisfaction over time. We tested this idea by examining reaction and adaptation to unemployment in a 15-year longitudinal study of more than 24,000 individuals living in Germany. In accordance with set-point theories, individuals reacted strongly to unemployment and then shifted back toward their baseline levels of life satisfaction. However, on average, individuals did not completely return to their former levels of satisfaction, even after they became reemployed. Furthermore, contrary to expectations from adaptation theories, people who had experienced unemployment in the past did not react any less negatively to a new bout of unemployment than did people who had not been previously unemployed. These results suggest that although life satisfaction is moderately stable over time, life events can have a strong influence on long-term levels of subjective well-being."}
{"_id": "3b3b5184573d7748a4f84fe9c5ccccb6838a203d", "title": "Towards an intelligent textbook: eye gaze based attention extraction on materials for learning and instruction in physics", "text": "In this paper, we propose the attention extraction method on a textbook of physics by using eye tracking glasses. We prepare a document including text and tasks in physics, and record reading behavior on the document from 6-grade students. The result confirms that students pay attention to a different region depending on situations (using the text as a material to learn the content, or using it as hints to solve tasks) and their comprehension."}
{"_id": "439f282365fb5678f8e891cf92fbc0c5b83e030c", "title": "A single-phase grid-connected PV converter with minimal DC-link capacitor and low-frequency ripple-free maximum power point tracking", "text": "This paper presents a novel single-phase grid-connected photovoltaic (PV) converter based on current-fed dual active bridge (CF-DAB) dc-dc converter. The proposed converter allows a minimal DC-link capacitor with large voltage variation without low-frequency effect on maximum power point tracking (MPPT). An advanced control system is developed based on the converter operation principles. Minimized transformer peak current is capable to be achieved through a symmetrical dc voltage control. Simulation results are presented to validate the performance of the proposed PV converter. A 5kW laboratorial prototype is also implemented and the results will be presented later."}
{"_id": "743bbf46557ec767f389c1ec5ac901ef64ffab37", "title": "DeepV2D: Video to Depth with Differentiable Structure from Motion", "text": "We propose DeepV2D, an end-to-end differentiable deep learning architecture for predicting depth from a video sequence. We incorporate elements of classical Structure from Motion into an end-to-end trainable pipeline by designing a set of differentiable geometric modules. Our full system alternates between predicting depth and refining camera pose. We estimate depth by building a cost volume over learned features and apply a multi-scale 3D convolutional network for stereo matching. The predicted depth is then sent to the motion module which performs iterative pose updates by mapping optical flow to a camera motion update. We evaluate our proposed system on NYU, KITTI, and SUN3D datasets and show improved results over monocular baselines and deep and classical stereo reconstruction."}
{"_id": "5f7da8725dd6e79228d156df929bcfa450afdfa8", "title": "Verification of Periodically Controlled Hybrid Systems: Application to an Autonomous Vehicle", "text": "This article introduces Periodically Controlled Hybrid Automata (PCHA) for modular specification of embedded control systems. In a PCHA, control actions that change the control input to the plant occur roughly periodically, while other actions that update the state of the controller may occur in the interim. Such actions could model, for example, sensor updates and information received from higher-level planning modules that change the set point of the controller. Based on periodicity and subtangential conditions, a new sufficient condition for verifying invariant properties of PCHAs is presented. For PCHAs with polynomial continuous vector fields, it is possible to check these conditions automatically using, for example, quantifier elimination or sum of squares decomposition. We examine the feasibility of this automatic approach on a small example. The proposed technique is also used to manually verify safety and progress properties of a fairly complex planner-controller subsystem of an autonomous ground vehicle. Geometric properties of planner-generated paths are derived which guarantee that such paths can be safely followed by the controller."}
{"_id": "2be92d9137f4af64a059da33a58b148d153fc446", "title": "Prediction of Student Academic Performance by an Application of K-Means Clustering Algorithm", "text": "Data Clustering is used to extract meaningful information and to develop significant relationships among variables stored in large data set/data warehouse. In this paper data clustering technique named k-means clustering is applied to analyze student\u2019s learning behavior. The student\u2019s evaluation factor like class quizzes, mid and final exam assignment are studied. It is recommended that all these correlated information should be conveyed to the class advisor before the conduction of final exam. This study will help the teachers to reduce the drop out ratio to a significant level and improve the performance of students."}
{"_id": "fd1432a21eaeb1f231781e3f05fe4bb9ff613cd5", "title": "Scalable Realistic Rendering with Many-Light Methods", "text": "Recent years have seen increasing attention and significant progress in many-light rendering, a class of methods for the efficient computation of global illumination. The many-light formulation offers a unified mathematical framework for the problem reducing the full lighting transport simulation to the calculation of the direct illumination from many virtual light sources. These methods are unrivaled in their scalability: they are able to produce artifact-free images in a fraction of a second but also converge to the full solution over time. In this state-of-the-art report, we have three goals: give an easy-to-follow, introductory tutorial of many-light theory; provide a comprehensive, unified survey of the topic with a comparison of the main algorithms; and present a vision to motivate and guide future research. We will cover both the fundamental concepts as well as improvements, extensions, and applications of many-light rendering."}
{"_id": "023225d739460b96a32cecab31db777fe353dc64", "title": "Accelerated training of conditional random fields with stochastic gradient methods", "text": "We apply Stochastic Meta-Descent (SMD), a stochastic gradient optimization method with gain vector adaptation, to the training of Conditional Random Fields (CRFs). On several large data sets, the resulting optimizer converges to the same quality of solution over an order of magnitude faster than limited-memory BFGS, the leading method reported to date. We report results for both exact and inexact inference techniques."}
{"_id": "cb3381448d1e79ab28796d74218a988f203b12ee", "title": "HURRICANE TRAJECTORY PREDICTION VIA A SPARSE RECURRENT NEURAL NETWORK", "text": "A proposed sparse recurrent neural network with flexible topology is used for trajectory prediction of the Atlantic hurricanes. For prediction of the future trajectories of a target hurricane, the most similar hurricanes to the target hurricane are found by comparing directions of the hurricanes. Then, the first and second differences of their positions over their life time are used for training the proposed network. Comparison of the obtained predictions with actual trajectories of Sandy and Humberto hurricanes show that our approach is quite promising for this aim."}
{"_id": "72d1423fdb2e2587a230356b7c1313a3571d9993", "title": "Mindful consumption : a customer-centric approach to sustainability", "text": "How effectively business deals with the challenges of sustainability will define its success for decades to come. Current sustainability strategies have three major deficiencies: they do not directly focus on the customer, they do not recognize the looming threats from rising global over-consumption, and they do not take a holistic approach. We present a framework for a customer-centric approach to sustainability. This approach recasts the sustainability metric to emphasize the outcomes of business actions measured holistically in term of environmental, personal and economic well-being of the consumer. We introduce the concept of mindful consumption (MC) as the guiding principle in this approach. MC is premised on a consumer mindset of caring for self, for community, and for nature, that translates behaviorally into tempering the selfdefeating excesses associated with acquisitive, repetitive and aspirational consumption. We also make the business case for fostering mindful consumption, and illustrate how the marketing function can be harnessed to successfully implement the customer-centric approach to sustainability."}
{"_id": "0501336bc04470489529b4928c5b6ba0f1bdf5f2", "title": "Quality-biased ranking for queries with commercial intent", "text": "Modern search engines are good enough to answer popular commercial queries with mainly highly relevant documents. However, our experiments show that users behavior on such relevant commercial sites may differ from one to another web-site with the same relevance label. Thus search engines face the challenge of ranking results that are equally relevant from the perspective of the traditional relevance grading approach. To solve this problem we propose to consider additional facets of relevance, such as trustability, usability, design quality and the quality of service. In order to let a ranking algorithm take these facets in account, we proposed a number of features, capturing the quality of a web page along the proposed dimensions. We aggregated new facets into the single label, commercial relevance, that represents cumulative quality of the site. We extrapolated commercial relevance labels for the entire learning-to-rank dataset and used weighted sum of commercial and topical relevance instead of default relevance labels. For evaluating our method we created new DCG-like metrics and conducted off-line evaluation as well as on-line interleaving experiments demonstrating that a ranking algorithm taking the proposed facets of relevance into account is better aligned with user preferences."}
{"_id": "4a87972b28143b61942a0eb011b60f76be0ebf2e", "title": "Scalable Graph Exploration on Multicore Processors", "text": "Many important problems in computational sciences, social network analysis, security, and business analytics, are data-intensive and lend themselves to graph-theoretical analyses. In this paper we investigate the challenges involved in exploring very large graphs by designing a breadth-first search (BFS) algorithm for advanced multi-core processors that are likely to become the building blocks of future exascale systems. Our new methodology for large-scale graph analytics combines a highlevel algorithmic design that captures the machine-independent aspects, to guarantee portability with performance to future processors, with an implementation that embeds processorspecific optimizations. We present an experimental study that uses state-of-the-art Intel Nehalem EP and EX processors and up to 64 threads in a single system. Our performance on several benchmark problems representative of the power-law graphs found in real-world problems reaches processing rates that are competitive with supercomputing results in the recent literature. In the experimental evaluation we prove that our graph exploration algorithm running on a 4-socket Nehalem EX is (1) 2.4 times faster than a Cray XMT with 128 processors when exploring a random graph with 64 million vertices and 512 millions edges, (2) capable of processing 550 million edges per second with an R-MAT graph with 200 million vertices and 1 billion edges, comparable to the performance of a similar graph on a Cray MTA-2 with 40 processors and (3) 5 times faster than 256 BlueGene/L processors on a graph with average degree 50."}
{"_id": "4e2ddd2e9ebaf293c730ef3dda22d58de5e695aa", "title": "Thirteen Ways to Look at the Correlation Coefficient", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://dv1litvip.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "50ac4c9c4409438719bcb8b1bb9e5d1a0dbedb70", "title": "Unicorn: A System for Searching the Social Graph", "text": "Unicorn is an online, in-memory social graph-aware indexing system designed to search trillions of edges between tens of billions of users and entities on thousands of commodity servers. Unicorn is based on standard concepts in information retrieval, but it includes features to promote results with good social proximity. It also supports queries that require multiple round-trips to leaves in order to retrieve objects that are more than one edge away from source nodes. Unicorn is designed to answer billions of queries per day at latencies in the hundreds of milliseconds, and it serves as an infrastructural building block for Facebook\u2019s Graph Search product. In this paper, we describe the data model and query language supported by Unicorn. We also describe its evolution as it became the primary backend for Facebook\u2019s search offerings."}
{"_id": "26743afd04f8ffe3e166e329e62bd414e19b5e70", "title": "The anatomy of a large-scale social search engine", "text": "We present Aardvark, a social search engine. With Aardvark, users ask a question, either by instant message, email, web input, text message, or voice. Aardvark then routes the question to the person in the user's extended social network most likely to be able to answer that question. As compared to a traditional web search engine, where the challenge lies in finding the right document to satisfy a user's information need, the challenge in a social search engine like Aardvark lies in finding the right person to satisfy a user's information need. Further, while trust in a traditional search engine is based on authority, in a social search engine like Aardvark, trust is based on intimacy. We describe how these considerations inform the architecture, algorithms, and user interface of Aardvark, and how they are reflected in the behavior of Aardvark users."}
{"_id": "59a6fa9ef3841302aef0efb17d46579841b80031", "title": "Bidirectional Expansion For Keyword Search on Graph Databases", "text": "Relational, XML and HTML data can be represented as graphs with entities as nodes and relationships as edges. Text is associated with nodes and possibly edges. Keyword search on such graphs has received much attention lately. A central problem in this scenario is to efficiently extract from the data graph a small number of the \u201cbest\u201d answer trees. A Backward Expanding search, starting at nodes matching keywords and working up toward confluent roots, is commonly used for predominantly text-driven queries. But it can perform poorly if some keywords match many nodes, or some node has very large degree. In this paper we propose a new search algorithm, Bidirectional Search, which improves on Backward Expanding search by allowing forward search from potential roots towards leaves. To exploit this flexibility, we devise a novel search frontier prioritization technique based on spreading activation. We present a performance study on real data, establishing that Bidirectional Search significantly outperforms Backward Expanding search."}
{"_id": "0489975dfb31c2f6a15c95d149536960a117a7d1", "title": "A comparative study of game mechanics and control laws for an adaptive physiological game", "text": "We present an adaptive biofeedback game that aims to maintain the player\u2019s arousal by modifying game difficulty in response to the player\u2019s physiological state, as measured with wearable sensors. Our approach models the interaction between human physiology and game difficulty during gameplay as a control problem, where game difficulty is the system input and player arousal its output. We validate the approach on a car-racing game with real-time adaptive game mechanics. Specifically, we use (1) car speed, road visibility, and steering jitter as three mechanisms to manipulate game difficulty, (2) electrodermal activity as physiological correlate of arousal, and (3) two types of control law: proportional (P) control, and proportional-integral-derivative (PID) control. We also propose quantitative measures to characterize the effectiveness of these game adaptations and controllers in manipulating the player\u2019s arousal. Experimental trials with 25 subjects in both open-loop (no feedback) and closed-loop (negative feedback) conditions show statistically significant differences in effectiveness among the three game mechanics and also between the two control laws. Specifically, manipulating car speed provides higher control of arousal levels than changing road visibility or vehicle steering. Our results also confirm that PID control leads to lower error and reduced oscillations in the closed-loop response compared to proportional-only control. Finally, we discuss the theoretical and practical implications of our approach."}
{"_id": "e2ae162dfc77c404ec79363f6ead95a2286a16fc", "title": "A summary-statistic representation in peripheral vision explains visual crowding.", "text": "Peripheral vision provides a less faithful representation of the visual input than foveal vision. Nonetheless, we can gain a lot of information about the world from our peripheral vision, for example in order to plan eye movements. The phenomenon of crowding shows that the reduction of information available in the periphery is not merely the result of reduced resolution. Crowding refers to visual phenomena in which identification of a target stimulus is significantly impaired by the presence of nearby stimuli, or flankers. What information is available in the periphery? We propose that the visual system locally represents peripheral stimuli by the joint statistics of responses of cells sensitive to different position, phase, orientation, and scale. This \"textural\" representation by summary statistics predicts the subjective \"jumble\" of features often associated with crowding. We show that the difficulty of performing an identification task within a single pooling region using this representation of the stimuli is correlated with peripheral identification performance under conditions of crowding. Furthermore, for a simple stimulus with no flankers, this representation can be adequate to specify the stimulus with some position invariance. This provides evidence that a unified neuronal mechanism may underlie peripheral vision, ordinary pattern recognition in central vision, and texture perception. A key component of our methodology involves creating visualizations of the information available in the summary statistics of a stimulus. We call these visualizations \"mongrels\" and show that they are highly useful in examining how the early visual system represents the visual input. Mongrels enable one to study the \"equivalence classes\" of our model, i.e., the sets of stimuli that map to the same representation according to the model."}
{"_id": "6ebd85d449cb3e023bc5bf2a4f1444b7a06b0351", "title": "Enhancing SDN security for IoT-related deployments through blockchain", "text": "The majority of business activity of our integrated and connected world takes place in networks based on cloud computing infrastructure that cross national, geographic and jurisdictional boundaries. Such an efficient entity interconnection is made possible through an emerging networking paradigm, Software Defined Networking (SDN) that intends to vastly simplify policy enforcement and network reconfiguration in a dynamic manner. However, despite the obvious advantages this novel networking paradigm introduces, its increased attack surface compared to traditional networking deployments proved to be a thorny issue that creates skepticism when safety-critical applications are considered. Especially when SDN is used to support Internet-of-Things (IoT)-related networking elements, additional security concerns rise, due to the elevated vulnerability of such deployments to specific types of attacks and the necessity of inter-cloud communication any IoT application would require. The overall number of connected nodes makes the efficient monitoring of all entities a real challenge, that must be tackled to prevent system degradation and service outage. This position paper provides an overview of common security issues of SDN when linked to IoT clouds, describes the design principals of the recently introduced Blockchain paradigm and advocates the reasons that render Blockchain as a significant security factor for solutions where SDN and IoT are involved."}
{"_id": "fc72945cd68fe529222a44846c5fadeb791b2e62", "title": "Measuring Negative and Positive Thoughts in Children: An Adaptation of the Children\u2019s Automatic Thoughts Scale (CATS)", "text": "The aim of this study is to describe the factor structure and psychometric properties of an extended version of the Children\u2019s Automatic Thoughts Scale (CATS), the CATS-Negative/Positive (CATS-N/P). The CATS was originally designed to assess negative self-statements in children and adolescents. However, positive thoughts also play a major role in childhood disorders such as anxiety and depression. Therefore, positive self-statements were added to the CATS. The CATS-N/P was administered to a community sample of 554 children aged 8\u201318\u00a0years. The results of a confirmatory factor analysis revealed that the positive self-statements formed a separate and psychometrically sound factor. Internal and short-term test\u2013retest reliability was good. Boys reported more hostile and positive thoughts than girls; and younger children reported more negative thoughts concerning physical threat, social threat, and failure than older children. In conclusion, the results of the current study support the use of the CATS-N/P for the measurement of positive and negative thoughts in children. The application of the CATS-N/P can facilitate further research on cognitive factors in different childhood disorders."}
{"_id": "345aa38e5ac0de40afc471700bbc7eb64e2f1f4c", "title": "MODIFIED MULTILOOK CROSS CORRELATION ( MLCC ) ALGORITHM FOR DOPPLER CENTROID ESTIMATION IN SYNTHETIC APERTURE RADAR SIGNAL PROCESS -", "text": "The Multilook Cross Correlation (MLCC) is one of the most reliable algorithms used for Doppler ambiguity number estimation of the Doppler centroid parameter. However, the existing MLCC algorithm is only suitable for low contrast scenes. In high contrast scenes, the estimated result is not reliable, and the error is unacceptable. Besides, the Doppler centroid estimation processing time is long and can only be used in offline processing. In this paper, we introduce a modified MLCC algorithm that has better sensitivity which is suitable not only for low contrast scenes, but also for high contrast scenes. In addition, the modified MLCC algorithm can be implemented on parallel signal processing units for better time efficiency. Experiments with RADARSAT-1 data show that the modified algorithm works well in both high and low contrast scenes."}
{"_id": "c6105e27f253cdddaf9708b927dd8655353e2795", "title": "Author ' s personal copy A new hybrid arti fi cial neural networks for rainfall \u2013 runoff process modeling", "text": "This paper proposes a hybrid intelligent model for runoff prediction. The proposed model is a combination of data preprocessing methods, genetic algorithms and levenberg\u2013marquardt (LM) algorithm for learning feed forward neural networks. Actually it evolves neural network initial weights for tuning with LM algorithm by using genetic algorithm. We also use data pre-processing methods such as data transformation, input variables selection and data clustering for improving the accuracy of the model. The capability of the proposed method is tested by applying it to predict runoff at the Aghchai watershed. The results show that this approach is able to predict runoff more accurately than Artificial Neural Network (ANN) and Adaptive Neuro Fuzzy Inference System (ANFIS) models. & 2013 Elsevier B.V. All rights reserved."}
{"_id": "fd11e7b2ad5e96de57516b3e65ef9c10389fcb38", "title": "Evaluating the Effect of Total Quality Management Practices on Business Performance : A Study of Manufacturing Firms of Pakistan", "text": "The present study has evaluated the effect of TQM practices on business performance of manufacturing firms in Pakistan. A sample of 65 managers working in the quality assurance department was taken from these manufacturing firms and their responses were collected using a structured questionnaire. Descriptive statistics were used to demonstrate the effect of different levels of TQM practices on business performance in terms of financial performance, product quality performance, customer satisfaction. The study had also considered organizational level variables like size of firm, year of existence, top management commitment as moderating variables, the results have shown that TQM was implemented only at the first three levels i.e. quality control ,quality assurance and continuous improvement in the selected firms and that top management commitment is the most important variable effecting on TQM implementation which then directly effect on business performance in these manufacturing firms."}
{"_id": "376bc5e0dc7131ff15d33e11057d0a391043b09b", "title": "Food Balance Estimation by Using Personal Dietary Tendencies in a Multimedia Food Log", "text": "We have investigated the \u201cFoodLog\u201d multimedia food-recording tool, whereby users upload photographs of their meals and a food diary is constructed using image-processing functions such as food-image detection and food-balance estimation. In this paper, following a brief introduction to FoodLog, we propose a Bayesian framework that makes use of personal dietary tendencies to improve both food-image detection and food-balance estimation. The Bayesian framework facilitates incremental learning. It incorporates three personal dietary tendencies that influence food analysis: likelihood, prior distribution, and mealtime category. In the evaluation of the proposed method using images uploaded to FoodLog, both food-image detection and food-balance estimation are improved. In particular, in the food-balance estimation, the mean absolute error is significantly reduced from 0.69 servings to 0.28 servings on average for two persons using more than 200 personal images, and 0.59 servings to 0.48 servings on average for four persons using 100 personal images. Among the works analyzing food images, this is the first to make use of statistical personal bias to improve the performance of the analysis."}
{"_id": "7b0b8e56fa8ff96c4f0ca26f21a5176d9cd1f78d", "title": "IoT in Agriculture: Designing a Europe-Wide Large-Scale Pilot", "text": "The technologies associated with the Internet of Things have great potential for application in the domain of food and agriculture, especially in view of the societal and environmental challenges faced by this sector. From farm to fork, IoT technologies could transform the sector, contributing to food safety, and the reduction of agricultural inputs and food waste. A major step toward greater uptake of these technologies will be the execution of IoT-based large-scale pilots (LSPs) in the entire supply chain. This article outlines the challenges and constraints that an LSP deployment of IoT in this domain must consider. Sectoral and technological challenges are described in order to identify a set of technological and agrifood requirements. An architecture based on a system of systems approach is briefly presented, the importance of addressing the interoperability challenges faced by this sector is highlighted, and we elaborate on requirements for new business models, security, privacy, and data governance. A description of the technologies and solutions involved in designing pilots for four agrifood domains (dairy, fruit, arable, meat and vegetable supply chain) is eventually provided. In conclusion, it is noted that for IoT to be successful in this domain, a significant change of culture is needed."}
{"_id": "4b0d321b796d2ed4705617be64d8c70a4653704f", "title": "Tapping the power of text mining", "text": "Sifting through vast collections of unstructured or semistructured data beyond the reach of data mining tools, text mining tracks information sources, links isolated concepts in distant documents, maps relationships between activities, and helps answer questions."}
{"_id": "d8c662e56aac2688454a55e5df9040cd6e391abd", "title": "Designed Interdependence, Growing Social Capital and Knowledge Sharing in IS Project Teams", "text": "This study brings in the task interdependence as the antecedent for growing team social capital and increasing knowledge sharing among team members in IS project context in which heterogeneous members need to interact very closely. Data collected from 203 IS projects largely confirm the proposed research model. Between the task interdependence and team performance, the team social capital as well as the knowledge sharing among members fully mediates the relationship. Interestingly, direct path from task interdependence onto performance is found to be insignificant, rebutting the suggestions of the work design theory which says well designed interdependence directly increase performance in heterogeneous teams. Findings of our study signify the criticality of the mediating roles of social capital and knowledge sharing. Additionally, it is also confirmed that the social capital grows with time. Also, longer term projects revealed a bit different mechanism than shorter ones. Implications are discussed with limitations and further studies."}
{"_id": "392adb8961470d63dfbffa8d5f4abc206ea3f747", "title": "BAG: Managing GPU as Buffer Cache in Operating Systems", "text": "This paper presents the design, implementation and evaluation of BAG, a system that manages GPU as the buffer cache in operating systems. Unlike previous uses of GPUs, which have focused on the computational capabilities of GPUs, BAG is designed to explore a new dimension in managing GPUs in heterogeneous systems where the GPU memory is an exploitable but always ignored resource. With the carefully designed data structures and algorithms, such as concurrent hashtable, log-structured data store for the management of GPU memory, and highly-parallel GPU kernels for garbage collection, BAG achieves good performance under various workloads. In addition, leveraging the existing abstraction of the operating system not only makes the implementation of BAG non-intrusive, but also facilitates the system deployment."}
{"_id": "38fa4de5838fe5de5c283f49c32f865f67b23bb4", "title": "Packaging a $W$ -Band Integrated Module With an Optimized Flip-Chip Interconnect on an Organic Substrate", "text": "This paper, for the first time, presents successful integration of a W-band antenna with an organically flip-chip packaged silicon-germanium (SiGe) low-noise amplifier (LNA). The successful integration requires an optimized flip-chip interconnect. The interconnect performance was optimized by modeling and characterizing the flip-chip transition on a low-loss liquid crystal polymer organic substrate. When the loss of coplanar waveguide (CPW) lines is included, an insertion loss of 0.6 dB per flip-chip-interconnect is measured. If the loss of CPW lines is de-embedded, 0.25 dB of insertion loss is observed. This kind of low-loss flip-chip interconnect is essential for good performance of W-band modules. The module, which we present in this paper, consists of an end-fire Yagi-Uda antenna integrated with an SiGe BiCMOS LNA. The module is 3 mm \u00d7 1.9 mm and consumes only 19.2 mW of dc power. We present passive and active E- and H-plane radiation pattern measurements at 87, 90, and 94 GHz. Passive and active antennas both showed a 10-dB bandwidth of 10 GHz. The peak gain of passive and active antennas was 5.2 dBi at 90 GHz and 21.2 dBi at 93 GHz, respectively. The measurements match well with the simulated results."}
{"_id": "0d4dbd59e42e615ccf6cd4f71203be97afac48fb", "title": "End-to-End Joint Semantic Segmentation of Actors and Actions in Video", "text": "Traditional video understanding tasks include human action recognition and actor/object semantic segmentation. However, the combined task of providing semantic segmentation for different actor classes simultaneously with their action class remains a challenging but necessary task for many applications. In this work, we propose a new end-to-end architecture for tackling this task in videos. Our model effectively leverages multiple input modalities, contextual information, and multitask learning in the video to directly output semantic segmentations in a single unified framework. We train and benchmark our model on the Actor-Action Dataset (A2D) for joint actor-action semantic segmentation, and demonstrate state-of-the-art performance for both segmentation and detection. We also perform experiments verifying our approach improves performance for zero-shot recognition, indicating generalizability of our jointly learned feature space."}
{"_id": "fc42e5f18f3ff6b88bbd0d07e6bc0e0cbeb20a88", "title": "Comparison of Longley-Rice, ITM and ITWOM propagation models for DTV and FM broadcasting", "text": "With the rapid deployment of digital TV, there is an increasing need for accurate point-to-area prediction tools. There is a great deal of propagation models for coverage prediction of DTV. Some of them are pure empirical models, and others are mixed, empirical-analytical models, based on measurement campaigns and electromagnetic theory. The aim of this paper is to compare accurate measurements taken by a Rohde & Schwarz FSH-3 portable spectrum analyzer and precision antennas (biconical and log-periodic), with simulation results derived from coverage prediction models, like the NTIA-ITS Longley-Rice model, the ITM (Irregular Terrain Model) using the 3-arc-second SRTM (Satellite Radar Topography Mission) data that is available freely, and the newer ITWOM (Irregular Terrain with Obstructions Model) model which combines equations from ITU-R P.1546 model with Beer's law and Snell's law. Furthermore, measurements for analog FM broadcasting are compared to predictions from the above mentioned models."}
{"_id": "b1ec6643d3fe3beb7181e8d358500ca86e69be94", "title": "iDocChip: A Configurable Hardware Architecture for Historical Document Image Processing: Percentile Based Binarization", "text": "End-to-end Optical Character Recognition (OCR) systems are heavily used to convert document images into machine-readable text. Commercial and open-source OCR systems (like Abbyy, OCRopus, Tesseract etc.) have traditionally been optimized for contemporary documents like books, letters, memos, and other end-user documents. However, these systems are difficult to use equally well for digitizing historical document images, which contain degradations like non-uniform shading, bleed-through, and irregular layout; such degradations usually do not exist in contemporary document images.\n The open-source anyOCR is an end-to-end OCR pipeline, which contains state-of-the-art techniques that are required for digitizing degraded historical archives with high accuracy. However, high accuracy comes at a cost of high computational complexity that results in 1) long runtime that limits digitization of big collection of historical archives and 2) high energy consumption that is the most critical limiting factor for portable devices with constrained energy budget. Therefore, we are targeting energy efficient and high throughput acceleration of the anyOCR pipeline. Generalpurpose computing platforms fail to meet these requirements that makes custom hardware design mandatory. In this paper, we are presenting a new concept named iDocChip. It is a portable hybrid hardware-software FPGA-based accelerator that is characterized by low footprint meaning small size, high power efficiency that will allow using it in portable devices, and high throughput that will make it possible to process big collection of historical archives in real time without effecting the accuracy.\n In this paper, we focus on binarization, which is the second most critical step in the anyOCR pipeline after text-line recognizer that we have already presented in our previous publication [21]. The anyOCR system makes use of a Percentile Based Binarization method that is suitable for overcoming degradations like non-uniform shading and bleed-through. To the best of our knowledge, we propose the first hardware architecture of the PBB technique. Based on the new architecture, we present a hybrid hardware-software FPGA-based accelerator that outperforms the existing anyOCR software implementation running on i7-4790T in terms of runtime by factor of 21, while achieving energy efficiency of 10 Images/J that is higher than that achieved by low power embedded processors with negligible loss of recognition accuracy."}
{"_id": "3854f1a53090479359e39fee0baed5ae2040233a", "title": "Architecture Technical Debt: Understanding Causes and a Qualitative Model", "text": "A known problem in large software companies is to balance the prioritization of short-term with long-term responsiveness. Specifically, architecture violations (Architecture Technical Debt) taken to deliver fast might hinder future feature development, which would hinder agility. We conducted a multiple-case embedded case study in 7 sites at 5 large companies in order to shed light on the current causes for the accumulation of Architectural Technical Debt that causes effort. We provide a taxonomy of the factors and their influence in the accumulation of debt, and we provide a qualitative model of how the debt is accumulated and recovered over time."}
{"_id": "c7d59c8e3e5745fef1473df8f4f78a86fc4f7c87", "title": "E-S-QUAL A Multiple-Item Scale for Assessing Electronic Service Quality", "text": "Using the means-end framework as a theoretical foundation, this article conceptualizes, constructs, refines, and tests a multiple-item scale (E-S-QUAL) for measuring the service quality delivered by Web sites on which customers shop online. Two stages of empirical data collection revealed that two different scales were necessary for capturing electronic service quality. The basic E-S-QUAL scale developed in the research is a 22-item scale of four dimensions: efficiency, fulfillment, system availability, and privacy. The second scale, E-RecS-QUAL, is salient only to customers who had nonroutine encounters with the sites and contains 11 items in three dimensions: responsiveness, compensation, and contact. Both scales demonstrate good psychometric properties based on findings from a variety of reliability and validity tests and build on the research already conducted on the topic. Directions for further research on electronic service quality are offered. Managerial implications stemming from the empirical findings about E-S-QUAL are also discussed."}
{"_id": "94c817e196e71c03b3425f905ebd1793dc6469c2", "title": "Visual Analysis of Large Graphs", "text": "The analysis of large graphs plays a prominent role in various fields of research and is relevant in many important application areas. Effective visual analysis of graphs requires appropriate visual presentations in combination with respective user interaction facilities and algorithmic graph analysis methods. How to design appropriate graph analysis systems depends on many factors, including the type of graph describing the data, the analytical task at hand, and the applicability of graph analysis methods. The most recent surveys of graph visualization and navigation techniques were presented by Herman et al. [HMM00] and Diaz [DPS02]. The first work surveyed the main techniques for visualization of hierarchies and graphs in general that had been introduced until 2000. The second work concentrated on graph layouts introduced until 2002. Recently, new techniques have been developed covering a broader range of graph types, such as time-varying graphs. Also, in accordance with ever growing amounts of graph-structured data becoming available, the inclusion of algorithmic graph analysis and interaction techniques becomes increasingly important. In this State-of-the-Art Report, we survey available techniques for the visual analysis of large graphs. Our review firstly considers graph visualization techniques according to the type of graphs supported. The visualization techniques form the basis for the presentation of interaction approaches suitable for visual graph exploration. As an important component of visual graph analysis, we discuss various graph algorithmic aspects useful for the different stages of the visual graph analysis process."}
{"_id": "7153074f9f112db794825b7d13aeac1e79f5f0fa", "title": "Internet Banking Adoption Among Young Intellectuals", "text": "The aim of this paper is to study technology acceptance of internet banking among undergraduate students in Malaysia. Thus, the theoretical framework of the paper is based on modified version of Technology Acceptance Model (TAM). This paper develops a technology acceptance model for internet banking, a conceptual framework to explain the factors influencing undergraduate students' acceptance of internet banking. The model employs perceived usefulness (PU), perceived ease of use (PEOU), perceived credibility (PC) and computer self-efficacy (CSE). The first two are two initial constructs for TAM model. The rest are new constructs to be included in the model in order to have an extension of TAM model that better reflects the students\u2019 view. The results suggest that PU, PEOU and PC had a significant relationship with behavioral intention. Further, these measures are good determinant for undergraduate acceptance for internet banking. Results also suggest that PU and PEOU had a significant relationship with CSE. On the contrary, CSE did not associate with PC. Also, PEOU had JIBC December 2007, Vol. 12, No. 3 2 a significant relationship with PU and PC that indicate these scales are related to PEOU in explaining undergraduate preference. The paper is useful in providing the understanding of the TAM among undergraduate from Malaysians\u2019 perspective."}
{"_id": "993a4e87a550b3751b875abe6167f0e2dd33418e", "title": "Best Practices for Data Centers : Lessons Learned from Benchmarking 22 Data Centers", "text": "Over the past few years, the authors benchmarked 22 data center buildings. From this effort, we have determined that data centers can be over 40 times as energy intensive as conventional office buildings. Studying the more efficient of these facilities enabled us to compile a set of \u201cbest-practice\u201d technologies for energy efficiency. These best practices include: improved air management, emphasizing control and isolation of hot and cold air streams; rightsizing central plants and ventilation systems to operate efficiently both at inception and as the data center load increases over time; optimized central chiller plants, designed and controlled to maximize overall cooling plant efficiency, central air-handling units, in lieu of distributed units; \u201cfree cooling\u201d from either air-side or water-side economizers; alternative humidity control, including elimination of control conflicts and the use of direct evaporative cooling; improved uninterruptible power supplies; high-efficiency computer power supplies; on-site generation combined with special chillers for cooling using the waste heat; direct liquid cooling of racks or computers; and lowering the standby losses of standby generation systems. Other benchmarking findings include power densities from 5 to nearly 100 Watts per square foot; though lower than originally predicted, these densities are growing. A 5:1 variation in cooling effectiveness index (ratio of cooling power to computer power) was found, as well as large variations in power distribution efficiency and overall center performance (ratio of computer power to total building power). These observed variations indicate the potential of energy savings achievable through the implementation of best practices in the design and operation of data centers. The Data Center Challenge The energy used by a typical rack of state-of-the art servers, drawing 20 kilowatts of power at 10 cents per kWh, uses more that $17,000 per year in electricity. Given that data centers can hold hundreds of such racks, they constitute a very energy-intensive building type. Clearly, efforts to improve energy efficiency in data centers can pay big dividends. But, where to start? \u201cYou can\u2019t manage what you don\u2019t measure\u201d is a mantra often recited by corporate leaders (Tschudi et al 2005). Similarly, the high-availability needs of data center facilities can be efficiently met using this philosophy. This paper presents a selected distillation of the extensive benchmarking in 22 data centers which formed the basis for the development of 10 best-practice guides for design and operation that summarized how better energy performance was obtained in these actual facilities (LBNL 2006). With typical annual energy costs per square foot 15 times (and in some cases over 40 times) that of typical office buildings, data centers are an important target for energy savings. They operate continuously, which means that their electricity demand is always contributing to 3-76 \u00a9 2006 ACEEE Summer Study on Energy Efficiency in Buildings peak utility system demands, an important fact given that utility pricing increasingly reflects time-dependent tariffs. Energy-efficiency best practices can realize significant energy and peakpower savings while maintaining or improving reliability, and yield other non-energy benefits. 
Data centers ideally are buildings built primarily for computer equipment: to house it and provide power and cooling to it. They can also be embedded in other buildings including highrise office buildings. In any event, a useful metric to help gauge the energy efficiency of the data center is the computer power consumption index, or the fraction of the total data center power (including computers, power conditioning, HVAC, lighting, and miscellaneous) to that used by the computers (higher is better; a realistic maximum is in the 0.8 to 0.9 range, depending on climate). As shown in Figure 1, this number varies by more than a factor of 2 from the worst centers to the best we\u2019ve benchmarked to date. Even the best have room for improvement, suggesting that many centers have tremendous opportunities for greater energy efficiency. Figure 1. Computer Power Consumption Index Computer Power Consumption Index"}
{"_id": "3032a54029d06298ca9b06c17a7b4b813f52c834", "title": "Mesofluidic actuation for articulated finger and hand prosthetics", "text": "The loss of fingers and hands severely limits career and lifestyle options for the amputee. Unfortunately, while there have been strides made in advancements of upper arm and leg prosthetics, the state of the art in prosthetic hands is lagging far behind. Options are generally limited to claw like devices that provide limited gripping capacity. The overall objective of this paper is to demonstrate a path towards a low-cost prosthetic hand with multiple articulated fingers and a thumb that rivals the human hand in terms of weight, size, dexterity, range of motion, force carrying capacity and speed. We begin with a description of the functional requirements for a human hand. When comparing requirements with actuation technologies, the fluid power approach has the potential to realize a prosthetic hand that rivals a human hand in size, strength and dexterity. We introduce a new actuation technology, mesofluidics, that focuses on miniaturization of fluid power to the meso-scale (mm to cm). As a novel demonstration of the potential for this technology, we describe a proof-of-principle mesofluidic finger that has intrinsic actuation and control (actuators and control valves within the volume of the finger). This finger weighs 63 grams, is sized to the 50th percentile male finger, has a total of 25 mechanical parts and is capable of providing 10 kg (22 lbs) of pinch force."}
{"_id": "2748dc51ba8dd9d2a7899caadbef2e3269b8b0b9", "title": "A Learning-Based Framework for Velocity Control in Autonomous Driving", "text": "We present a framework for autonomous driving which can learn from human demonstrations, and we apply it to the longitudinal control of an autonomous car. Offline, we model car-following strategies from a set of example driving sequences. Online, the model is used to compute accelerations which replicate what a human driver would do in the same situation. This reference acceleration is tracked by a predictive controller which enforces a set of comfort and safety constraints before applying the final acceleration. The controller is designed to be robust to the uncertainty in the predicted motion of the preceding vehicle. In addition, we estimate the confidence of the driver model predictions and use it in the cost function of the predictive controller. As a result, we can handle cases where the training data used to learn the driver model does not provide sufficient information about how a human driver would handle the current driving situation. The approach is validated using a combination of simulations and experiments on our autonomous vehicle."}
{"_id": "42a6ae6827f8cc92e15191e53605b0aa4f875fb9", "title": "Challenges in Autonomous Vehicle Testing and Validation", "text": "Software testing is all too often simply a bug hunt rather than a wellconsidered exercise in ensuring quality. A more methodical approach than a simple cycle of system-level test-fail-patch-test will be required to deploy safe autonomous vehicles at scale. The ISO 26262 development V process sets up a framework that ties each type of testing to a corresponding design or requirement document, but presents challenges when adapted to deal with the sorts of novel testing problems that face autonomous vehicles. This paper identifies five major challenge areas in testing according to the V model for autonomous vehicles: driver out of the loop, complex requirements, non-deterministic algorithms, inductive learning algorithms, and failoperational systems. General solution approaches that seem promising across these different challenge areas include: phased deployment using successively relaxed operational scenarios, use of a monitor/actuator pair architecture to separate the most complex autonomy functions from simpler safety functions, and fault injection as a way to perform more efficient edge case testing. While significant challenges remain in safety-certifying the type of algorithms that provide high-level autonomy themselves, it seems within reach to instead architect the system and its accompanying design process to be able to employ existing software safety approaches."}
{"_id": "64c83def2889146beb7ca2dddee2dae21d9ca6de", "title": "A machine learning approach for personalized autonomous lane change initiation and control", "text": "We study an algorithm that allows a vehicle to autonomously change lanes in a safe but personalized fashion without the driver's explicit initiation (e.g. activating the turn signals). Lane change initiation in autonomous driving is typically based on subjective rules, functions of the positions and relative velocities of surrounding vehicles. This approach is often arbitrary, and not easily adapted to the driving style preferences of an individual driver. Here we propose a data-driven modeling approach to capture the lane change decision behavior of human drivers. We collect data with a test vehicle in typical lane change situations and train classifiers to predict the instant of lane change initiation with respect to the preferences of a particular driver. We integrate this decision logic into a model predictive control (MPC) framework to create a more personalized autonomous lane change experience that satisfies safety and comfort constraints. We show the ability of the decision logic to reproduce and differentiate between two lane changing styles, and demonstrate the safety and effectiveness of the control framework through simulations."}
{"_id": "d1cdee2aa2d92211d20df820aa4774fa0563a726", "title": "Kinematic and dynamic vehicle models for autonomous driving control design", "text": "We study the use of kinematic and dynamic vehicle models for model-based control design used in autonomous driving. In particular, we analyze the statistics of the forecast error of these two models by using experimental data. In addition, we study the effect of discretization on forecast error. We use the results of the first part to motivate the design of a controller for an autonomous vehicle using model predictive control (MPC) and a simple kinematic bicycle model. The proposed approach is less computationally expensive than existing methods which use vehicle tire models. Moreover it can be implemented at low vehicle speeds where tire models become singular. Experimental results show the effectiveness of the proposed approach at various speeds on windy roads."}
{"_id": "00b76a334337cc4289f7e05fb3a86f73b2b4e60b", "title": "Apprenticeship learning for motion planning with application to parking lot navigation", "text": "Motion and path-planning algorithms often use complex cost functions for both global navigation and local smoothing of trajectories. Obtaining good results typically requires carefully hand-engineering the trade-offs between different terms in the cost function. In practice, it is often much easier to demonstrate a few good trajectories. In this paper, we describe an efficient algorithm which - when given access to a few trajectory demonstrations - can automatically infer good trade-offs between the different costs. In our experiments, we apply our algorithm to the problem of navigating a robotic car in a parking lot."}
{"_id": "2087c23fbc7890c1b27fe3f2914299cc0693306e", "title": "Deep or shallow, NLP is breaking out", "text": "Neural net advances improve computers' language ability in many fields."}
{"_id": "dd19e8fd985972c5d5e9dc555df9ceb2a88f78e0", "title": "Long-Term Amnesia: A Review and Detailed Illustrative Case Study", "text": "Long-term amnesia is a slowly developing form of anterograde amnesia accompanied by retrograde amnesia of variable severity (Kapur, 1996; 1997) often associated with damage to the anterior temporal neocortex and epileptic seizures. The precise neural and functional deficits that underlie this condition are unknown. A patient, JL, who has this condition following a closed-head injury, is described in detail. Her injury caused bilateral anterior temporal neocortex damage that was more extensive on the left and right-sided damage to the perirhinal and orbitofrontal cortices. The hippocampus appeared to be intact bilaterally. Epilepsy developed within two years of JL's injury. Apart from her memory impairments, JL's cognitive functions, including high-level visual perception, attention, semantic memory and executive functions were well preserved. Her memory also seemed well preserved for at least 30 minutes following encoding. The one exception was the patient's relatively greater impairment at difficult visual recognition tests for which verbalization may not have been an effective strategy. This problem may have been caused by JL's right-sided perirhinal and orbitofrontal cortex damage. Her recall and recognition was clearly impaired after a three-week delay. She also showed a retrograde amnesia, which appeared to be milder than her remote post-morbid memory deficit. JL's remote memory was preserved for information first encountered in either the pre- or post-morbid period provided the information had received sufficient rehearsal over long periods of time. Her long-term amnesia may have been caused by anterior temporal neocortex damage, possibly in association with her epileptic seizures. Whether the condition is heterogeneous, involves a deficit in slow consolidation, disruption of unconsolidated memories, or blockage of maintenance or disruption of insufficiently rehearsed memories whether or not these have been slowly consolidated is discussed."}
{"_id": "95485dede6d7118f0577662bd6fac396f3d380a4", "title": "met*: A Method for Discriminating Matonymy and Metaphor by Computer", "text": "The met* method distinguishes selected examples of metonymy from metaphor and from literalness and anomaly in short English sentences. In the met* method, literalness is distinguished because it satisfies contextual constraints that the nonliteral others all violate. Metonymy is discriminated from metaphor and anomaly in a way that [1] supports Lakoff and Johnson's (1980) view that in metonymy one entity stands for another whereas in metaphor one entity is viewed as another, [2] permits chains of metonymies (Reddy 1979), and [3] allows metonymies to co-occur with instances of either literalness, metaphor, or anomaly. Metaphor is distinguished from anomaly because the former contains a relevant analogy, unlike the latter. The met* method is part of Collative Semantics, a semantics for natural language processing, and has been implemented in a computer program called meta5. Some examples of meta5's analysis of metaphor and metonymy are given. The met* method is compared with approaches from artificial intelligence, linguistics, philosophy, and psychology."}
{"_id": "3679f082c93b8ebdd80410fda965015d118cfb87", "title": "Genetic engineering of algae for enhanced biofuel production.", "text": "There are currently intensive global research efforts aimed at increasing and modifying the accumulation of lipids, alcohols, hydrocarbons, polysaccharides, and other energy storage compounds in photosynthetic organisms, yeast, and bacteria through genetic engineering. Many improvements have been realized, including increased lipid and carbohydrate production, improved H(2) yields, and the diversion of central metabolic intermediates into fungible biofuels. Photosynthetic microorganisms are attracting considerable interest within these efforts due to their relatively high photosynthetic conversion efficiencies, diverse metabolic capabilities, superior growth rates, and ability to store or secrete energy-rich hydrocarbons. Relative to cyanobacteria, eukaryotic microalgae possess several unique metabolic attributes of relevance to biofuel production, including the accumulation of significant quantities of triacylglycerol; the synthesis of storage starch (amylopectin and amylose), which is similar to that found in higher plants; and the ability to efficiently couple photosynthetic electron transport to H(2) production. Although the application of genetic engineering to improve energy production phenotypes in eukaryotic microalgae is in its infancy, significant advances in the development of genetic manipulation tools have recently been achieved with microalgal model systems and are being used to manipulate central carbon metabolism in these organisms. It is likely that many of these advances can be extended to industrially relevant organisms. This review is focused on potential avenues of genetic engineering that may be undertaken in order to improve microalgae as a biofuel platform for the production of biohydrogen, starch-derived alcohols, diesel fuel surrogates, and/or alkanes."}
{"_id": "14d2345c04e4f68d49603c55cda23f78fd8efc70", "title": "The use of time-variant EEG Granger causality for inspecting directed interdependencies of neural assemblies", "text": "Understanding of brain functioning requires the investigation of activated cortical networks, in particular the detection of interactions between different cortical sites. Commonly, coherence and correlation are used to describe interrelations between EEG signals. However, on this basis, no statements on causality or the direction of their interrelations are possible. Causality between two signals may be expressed in terms of upgrading the predictability of one signal by the knowledge of the immediate past of the other signal. The best-established approach in this context is the so-called Granger causality. The classical estimation of Granger causality requires the stationarity of the signals. In this way, transient pathways of information transfer stay hidden. The study presents an adaptive estimation of Granger causality. Simulations demonstrate the usefulness of the time-variant Granger causality for detecting dynamic causal relations within time intervals of less than 100 ms. The time-variant Granger causality is applied to EEG data from the Stroop task. It was shown that conflict situations generate dense webs of interactions directed from posterior to anterior cortical sites. The web of directed interactions occurs mainly 400 ms after the stimulus onset and lasts up to the end of the task."}
{"_id": "3b86506f0562c1453461509c5dc43359fe908db5", "title": "An application-specific protocol architecture for wireless microsensor networks", "text": "In recent years advances in energy e cient design and wireless technologies have enabled exciting new applications for wireless devices These applications span a wide range including real time and streaming video and audio delivery remote monitoring using networked microsensors personal medical monitoring and home networking of everyday appliances While these applications require high performance from the network they su er from resource constraints that do not appear in more traditional wired computing environments In particular wireless spectrum is scarce often limiting the bandwidth available to applications and making the channel error prone and the nodes are battery operated often limiting available energy My thesis is that this harsh environment with severe resource constraints requires an application speci c protocol architecture rather than the traditional layered approach to obtain the best pos sible performance This dissertation supports this claim using detailed case studies on microsensor networks and wireless video delivery The rst study develops LEACH Low Energy Adaptive Clustering Hierarchy an architecture for remote microsensor networks that combines the ideas of energy e cient cluster based routing and media access together with application speci c data aggregation to achieve good performance in terms of system lifetime latency and application perceived quality This approach improves system lifetime by an order of magnitude compared to general purpose approaches when the node energy is limited The second study develops an unequal error protection scheme for MPEG compressed video delivery that adapts the level of protection applied to portions of a packet to the degree of importance of the corresponding bits This approach obtains better application perceived performance than current approaches for the same amount of transmission bandwidth These two systems show that application speci c protocol architectures achieve the energy and latency e ciency and error robustness needed for wireless networks Thesis Supervisor Anantha P Chandrakasan Title Associate Professor Thesis Supervisor Hari Balakrishnan Title Assistant Professor This thesis is dedicated to the memory of Robert H Rabiner The rst Rabiner at the Tute"}
{"_id": "3a8db4034b239220ff27d9c657e128225be51e57", "title": "'Strong is the new skinny': A content analysis of #fitspiration images on Instagram.", "text": "'Fitspiration' is an online trend designed to inspire viewers towards a healthier lifestyle by promoting exercise and healthy food. This study provides a content analysis of fitspiration imagery on the social networking site Instagram. A set of 600 images were coded for body type, activity, objectification and textual elements. Results showed that the majority of images of women contained only one body type: thin and toned. In addition, most images contained objectifying elements. Accordingly, while fitspiration images may be inspirational for viewers, they also contain a number of elements likely to have negative effects on the viewer's body image."}
{"_id": "edf087a889aba7706c46075ee70212aadb21262e", "title": "Service Science", "text": "This paper is a first exploration of the relationship between service science and Grid computing. Service science is the study of value co-creation interactions among entities, known as service systems. Within the emerging service science community, service is often defined as the application of competences (resources) for the benefit of another. Grid computing is the study of resource sharing among entities, known as virtual organizations, which solve complex business, societal, scientific, and engineering problems. Within the Grid computing community, service is sometimes defined as protocols plus behavior. Both Grid computing and service science are connecting academic, industry, government, and volunteer sector collaborators on a range of projects including eScience, healthcare, environmental sustainability, and more. This paper compares and contrasts the notions of resource, entity, service, interaction, and success criteria for the two areas of study. In conclusion, new areas for collaborative inquiry are proposed."}
{"_id": "b427876c03141fa24094e59bfa299c719ae233a3", "title": "Global Changes and Drivers of the Water Footprint of Food Consumption : A Historical Analysis", "text": "Water is one of the most important limiting resources for food production. How much water is needed for food depends on the size of the population, average food consumption patterns and food production per unit of water. These factors show large differences around the world. This paper analyzes sub-continental dynamics of the water footprint of consumption (WFcons) for the prevailing diets from 1961 to 2009 using data from the Food and Agriculture Organization (FAO). The findings show that, in most regions, the water needed to feed one person decreased even if diets became richer, because of the increase in water use efficiency in food production during the past half-century. The logarithmic mean Divisia index (LMDI) decomposition approach is used to analyze the contributions of the major drivers of WFcons for food: population, diet and agricultural practices (output per unit of water). We compare the contributions of these drivers through different subcontinents, and find that population growth still was the major driver behind increasing WFcons for food until now and that potential water savings through agricultural practice improvements were offset by population growth and diet change. The changes of the factors mentioned above were the largest in most developing areas with rapid economic development. With the development of globalization, the international food trade has brought more and more water savings in global water use over time. The results indicate that, in the near future and in many regions, diet change is likely to override population growth as the major driver behind WFcons for food. OPEN ACCESS"}
{"_id": "8dec9e54c40d0aabe6b5ecf2fa4a6b6089ed7e3d", "title": "Optimal Sink Deployment for Distributed Sensing of Spatially Nonstationary Phenomena", "text": "The optimal deployment of sinks in a sensor region for power efficient data gathering of a physical phenomenon is investigated in this work. In the system of consideration, nodes perform lossless distributed coding of the sensed data and the spatial statistics of the monitored phenomenon are possibly nonstationary due to heterogeneity in the sensing environment (e.g. variations in the altimetric profile). Non-stationary spatial statistics lead to uneven spatial profiles of the bit rates, unlike stationary statistics. The properties of rate profiles and the consequent optimal sink locations for a broad class of spatially non-stationary covariance models are studied by mathematical analysis and numerical examples."}
{"_id": "94f039e2e825d0ad343fdcf30ac974be318fa9c7", "title": "3D convolutional neural networks by modal fusion", "text": "We propose multi-view and volumetric convolutional neural networks (ConvNets) for 3D shape recognition, which combines surface normal and height fields to capture local geometry and physical size of an object. This strategy helps distinguishing between objects with similar geometries but different sizes. This is especially useful for enhancing volumetric ConvNets and classifying 3D scans with insufficient surface details. Experimental results on CAD and real-world scan datasets showed that our technique outperforms previous approaches."}
{"_id": "1614efe6116dd83073ed4c99b0da61e9a111a69f", "title": "Using ultra-low parasitic hybrid packaging method to reduce high frequency EMI noise for SiC power module", "text": "In order to reduce high frequency electromagnetic interference (EMI) noise for SiC devices, this paper presents a hybrid structure SiC power module with low parasitic inductance and low ground capacitance that can suppress EMI efficiently. To certify the improvement on EMI, two buck converters, using the same power loop and gate drive, are built for experiment. Two converters use the proposed power module and TO-247 packaged commercial devices respectively. The experimental results show that low parasitic inductance can increase switching speed considerably while have only slight impact on EMI. And low parasitic ground capacitance can decrease the ground current effectively. Considering soft switching can also reduce EMI, this paper further compares EMI noise of two converters under two conditions: continuous current modulation (CCM) and triangle current modulation (TCM). The result shows that the proposed power module can suppress the common mode (CM) noise effectively at both situations. Furthermore, this paper verifies the noise mode transformation from differential mode (DM) EMI to CM EMI."}
{"_id": "b868b510a8b59c335816558c4f87a8b6533c03a1", "title": "MVR: An Architecture for Computation Offloading in Mobile Edge Computing", "text": "As communication and sensing capabilities of mobile devices increase, mobile applications are becoming increasingly complex. The ability of computation offloading, which is one of the main features of mobile edge computing gains relevance as a technique to improve the battery lifetime of mobile devices and increase the performance of applications. In this paper, we describe the offloading system model and present an innovative architecture, called \"MVR\", contributing to computation offloading in mobile edge computing."}
{"_id": "9fd59c2ae6bd74e4918a42d58845e87b7bcca9e5", "title": "A Better Method to Analyze Blockchain Consistency", "text": "The celebrated Nakamoto consensus protocol [16] ushered in several new consensus applications including cryptocurrencies. A few recent works [7, 17] have analyzed important properties of blockchains, including most significantly, consistency, which is a guarantee that all honest parties output the same sequence of blocks throughout the execution of the protocol. To establish consistency, the prior analysis of Pass, Seeman and Shelat [17] required a careful counting of certain combinatorial events that was difficult to apply to variations of Nakamoto. The work of Garay, Kiayas, and Leonardas [7] provides another method of analyzing the blockchain under the simplifying assumption that the network was synchronous. The contribution of this paper is the development of a simple Markov-chain based method for analyzing consistency properties of blockchain protocols. The method includes a formal way of stating strong concentration bounds as well as easy ways to concretely compute the bounds. We use our new method to answer a number of basic questions about consistency of blockchains: Our new analysis provides a tighter guarantee on the consistency property of Nakamoto's protocol, including for parameter regimes which [17] could not consider; We analyze a family of delaying attacks first presented in [17], and extend them to other protocols; We analyze how long a participant should wait before considering a high-value transaction \"confirmed\"; We analyze the consistency of CliqueChain, a variation of the Chainweb [14] system; We provide the first rigorous consistency analysis of GHOST [20] and also analyze a folklore \"balancing\"-attack. In each case, we use our framework to experimentally analyze the consensus bounds for various network delay parameters and adversarial computing percentages. We hope our techniques enable authors of future blockchain proposals to provide a more rigorous analysis of their schemes."}
{"_id": "586ae174454a7c2fe5a4d9e08fcb29fdc66cde47", "title": "Cloaking Malware with the Trusted Platform Module", "text": "The Trusted Platform Module (TPM) is commonly thought of as hardware that can increase platform security. However, it can also be used for malicious purposes. The TPM, along with other hardware, can implement acloaked computation , whose memory state cannot be observed by any other software, including the operating system and hypervisor. We show that malware can use cloaked computations to hide essential secrets (like the target of an attack) from a malware analyst. We describe and implement a protocol that establishes an encryption key under control of the TPM that can only be used by a specific infection program. An infected host then proves the legitimacy of this key to a remote malware distribution platform, and receives and executes an encrypted payload in a way that prevents software visibility of the decrypted payload. We detail how malware can benefit from cloaked computations and discuss defenses against our protocol. Hardening legitimate uses of the TPM against attack improves the resilience of our malware, creating a Catch-22 for secure computing technology."}
{"_id": "dd7db71cd4c7fc0db06871a0d8fe0ab69d47008a", "title": "Question Generation Shared Task and Evaluation Challenge - Status Report", "text": "The First Shared Task Evaluation Challenge on Question Generation took place in 2010 as part of the 3 workshop on Question Generation. The campaign included two tasks: Question Generation from Sentences and Question Generation from Paragraphs. This status report briefly summarizes the motivation, tasks and results. Lessons learned relevant to future QG-STECs are also offered."}
{"_id": "e3decffad93a91ecd2b687ff9723a26212d00780", "title": "Sensor observation streams within cloud-based IoT platforms: Challenges and directions", "text": "Observation streams can be considered as a special case of data streams produced by sensors. With the growth of the Internet of Things (IoT), more and more connected sensors will produce unbounded observation streams. In order to bridge the gap between sensors and observation consumers, we have witnessed the design and the development of Cloud-based IoT platforms. Such systems raise new research challenges, in particular regarding observation collection, processing and consumption. These new research challenges are related to observation streams and should be addressed from the implementation phase by developers to build platforms able to meet other non-functional requirements later. Unlike existing surveys, this paper is intended for developers that would like to design and implement a Cloud-based IoT platform capable of handling sensor observation streams. It provides a comprehensive way to understand main observation-related challenges, as well as non-functional requirements of IoT platforms such as platform adaptation, scalability and availability. Last but not the least, it gives recommendations and compares some relevant open-source software that can speed up the development process."}
{"_id": "680f268973fc8efd775a6bfe08487ee1c3cb9e61", "title": "The Effects of Perceived Corporate Social Responsibility on Employee Attitudes", "text": "We explore the impact on employee attitudes of their perceptions of how others outside the organization are treated (i.e., corporate social responsibility) above and beyond the impact of how employees are directly treated by the organization. Results of a study of 827 employees in eighteen organizations show that employee perceptions of corporate social responsibility (CSR) are positively related to (a) organizational commitment with the relationship being partially mediated by work meaningfulness and perceived organizational support (POS) and (b) job satisfaction with work meaningfulness partially mediating the relationship but not POS. Moreover, in order to address limited micro-level research in CSR, we develop a measure of employee perceptions of CSR through four pilot studies. Employing a bifactor model, we find that social responsibility has an additional effect on employee attitudes beyond environmental responsibility, which we posit is due to the relational component of social responsibility (e.g., relationships with community)."}
{"_id": "e2dfcc0bcbc62060f517ae65f426baa0c17bd288", "title": "Steady-State Analysis of the Series Resonant Converter", "text": "The series resonant converter is analyzed in steady state, and for constant switching frequency the output current and voltage characteristics are found to be ellipses. The converter operating point can then be easily obtained by superimposing a load line on these elliptical characteristics. Peak resonant capacitor voltage and inductor current are also plotted in the output plane and are dependent to first order only on output current. When peak voltage and current are plotted in this manner, the dependence of component stresses on operating point is clearly revealed. The output characteristics are modified to include the effect of transistor and diode voltage drops, and experimental verification is presented."}
{"_id": "9ba2756d4412ababf44cf7c167a4ffbcdf39b0d7", "title": "CSIFT based locality-constrained linear coding for image classification", "text": "In the past decade, SIFT descriptor has been witnessed as one of the most robust local invariant feature descriptors and widely used in various vision tasks. Most traditional image-classification systems depend on the gray-based SIFT descriptors, which only analyze the gray level variations of the images. Misclassification may happen since their color contents are ignored. In this article, we concentrate on improving the performance of existing image-classification algorithms by adding color information. To achieve this purpose, different kinds of colored SIFT descriptors are introduced and implemented. locality-constrained linear coding (LLC), a state-of-the-art sparse coding technology, is employed to construct the image-classification system for the evaluation. Moreover, we propose a simple $$\\ell _2$$ \u2113 2 -norm regularized local distance to improve the traditional LLC method. The real experiments are carried out on several benchmarks. With the enhancements to color SIFT and $$\\ell _2$$ \u2113 2 -norm regularization, the proposed image-classification system obtains approximately $$2\\,\\%$$ 2 % improvement of classification accuracy on the Caltech-101 dataset and approximately $$5\\,\\%$$ 5 % improvement of classification accuracy on the Caltech-256 dataset."}
{"_id": "c75180ab22b80b7ac3c8853a934ac515313b9aad", "title": "GOT-10k: A Large High-Diversity Benchmark for Generic Object Tracking in the Wild", "text": "In this work, we introduce a large high-diversity database for generic object tracking, called GOT-10k. GOT-10k is backboned by the semantic hierarchy of WordNet [1]. It populates a majority of 563 object classes and 87 motion patterns in real-world, resulting in a scale of over 10 thousand video segments and 1.5 million bounding boxes. To our knowledge, GOT-10k is by far the richest motion trajectory dataset, and its coverage of object classes is more than a magnitude wider than similar scale counterparts [20], [23]. By publishing GOT-10k, we hope to encourage the development of generic purposed trackers that work for a wide range of moving objects and under diverse real-world scenarios. To promote generalization and avoid the evaluation results biased to seen classes, we follow the one-shot principle [35] in dataset splitting where training and testing classes are zero-overlapped. We also carry out a series of analytical experiments to select a compact while highly representative testing subset \u2013 it embodies 84 object classes and 32 motion patterns with only 180 video segments, allowing for efficient evaluation. Finally, we train and evaluate a number of representative trackers on GOT-10k and analyze their performance. The evaluation results suggest that tracking in real-world unconstrained videos is far from being solved, and only 40% of frames are successfully tracked using top ranking trackers. The database and toolkits are publicly available at https://got-10k.github.io."}
{"_id": "20d2996f3c102b9ec63701e0cd55eab94bf666dc", "title": "Thermal Facial Analysis for Deception Detection", "text": "Thermal imaging technology can be used to detect stress levels in humans based on the radiated heat from their face. In this paper, we use thermal imaging to monitor the periorbital region's thermal variations and test whether it can offer a discriminative signature for detecting deception. We start by presenting an overview on automated deception detection and propose a novel methodology, which we validate experimentally on 492 thermal responses (249 lies and 243 truths) extracted from 25 participants. The novelty of this paper lies in scoring a larger number of questions per subject, emphasizing a within-person approach for learning from data, proposing a framework for validating the decision making process, and correct evaluation of the generalization performance. A $k$ -nearest neighbor classifier was used to classify the thermal responses using different strategies for data representation. We report an 87% ability to predict the lie/truth responses based on a within-person methodology and fivefold cross validation. Our results also show that the between-person approach for modeling deception does not generalize very well across the training data."}
{"_id": "2d44ec5efc7c96b83364cb54521282abd3bc0240", "title": "Deception detection using a multimodal approach", "text": "In this paper we address the automatic identification of deceit by using a multimodal approach. We collect deceptive and truthful responses using a multimodal setting where we acquire data using a microphone, a thermal camera, as well as physiological sensors. Among all available modalities, we focus on three modalities namely, language use, physiological response, and thermal sensing. To our knowledge, this is the first work to integrate these specific modalities to detect deceit. Several experiments are carried out in which we first select representative features for each modality, and then we analyze joint models that integrate several modalities. The experimental results show that the combination of features from different modalities significantly improves the detection of deceptive behaviors as compared to the use of one modality at a time. Moreover, the use of non-contact modalities proved to be comparable with and sometimes better than existing contact-based methods. The proposed method increases the efficiency of detecting deceit by avoiding human involvement in an attempt to move towards a completely automated non-invasive deception detection process."}
{"_id": "9406ee01e3fda0932168f31cd3835a7d7a943fc6", "title": "Multiple View Geometry in Computer Vision", "text": ""}
{"_id": "a16ae424938e2e49396d5662a98e0c02166cdb02", "title": "Automatic Sublingual Vein Feature Extraction System", "text": "The quintessence of the diagnosis in traditional Chinese medicine is syndrome differentiation and treatment. Syndrome differentiation consists of four methods: observing, hearing as well as smelling, asking, and touching. The examination of the observing is the most important procedure in the method of \"tongue.\" In recent years, numerous medical studies have identified the close relations between sublingual veins and human organs. Therefore, sublingual pathological symptoms, as well as demographical information of patients, imply pathological changes in the organs, and early diagnosis is beneficial for early treatment. However, the diagnosis of sublingual pathological symptoms is usually influenced by the doctor's subjective interpretation, experience, and environmental factors. The results can easily be limited by subjective factors such as knowledge, experience, mentality, diagnostic techniques, color perception and interpretation. Different doctors may make different judgments on the same tongue, presenting less than desirable repeatability. Therefore, assisting doctors' diagnoses with scientific methods and standardizing the differentiating process to obtain reliable diagnoses and enhance the clinical applicability of Chinese medicine is an important issue. In its wake, this study aims to construct an Automatic Sublingual Vein Feature Extraction System based on image processing technologies to allow objective and quantified computer readings. The extraction of sublingual vein features mainly captures the back of the tongue and extract the sublingual vein area for feature expression analysis. Firstly, the patient's back of the tongue is photographed and color-graded to compensate for color distortion, and then the tongue-back area is extracted. This study extracts tongue-back imagery by analyzing the RGB color expression of the back of the tongue, lips, teeth and skin, translating it into the HSI color space easily perceived by the human eye, along with skin area removal, rectangle detection, teeth area removal, black area removal and control point detection. The captured tongue-back image goes through histogram equalization and hue shift to enhance color contrast. Sublingual veins are extracted through analyzing RGB color component shift, hues, saturation and brightness. Then the sublingual vein color information and positioning are used to differentiate hues, lengths and branches. Thinning analysis is used to determine the presence of varicose veins. At the same time, the surrounding features of sublingual veins, such as columnar vein, bubbly vein, petechiae and bloodshot, are extracted. The information regarding features and lingual vein conditions are integrated and analyzed for doctors' clinical reference. This study utilizes 199 lingual images for statistic testing and three lingual diagnostic experts in Chinese medicine for lingual reading. The accuracy for the extractions are: tongue back 86%, sublingual vein 80%, varicose veins 90%, branches 87%, and the accuracy rates for columnar veins and bubbly veins are 87%, 88% and 73% respectively."}
{"_id": "dec856c042e06d9f9ad5f5b7d5b4c77fb0699704", "title": "Robust camera calibration of soccer video using genetic algorithm", "text": "This paper proposes an exact and robust genetic algorithm-based method for the calibration of soccer camera. According to the FIFA official soccer field layout we defined a field model for the soccer court. Camera calibration is done through finding the homography transform between the field model and the input frame which is followed with DLT decomposition. The intersection of lines in the field model and input frame form feature points and by means of a genetic algorithm, we found the correspondence between those points. Our algorithm was applied to a couple of soccer video frames and the achieved results demonstrate its robustness, accuracy and high performance."}
{"_id": "2f95e2ca11610cb334d8d777d7b0f0d5561e67bc", "title": "You are what you include: large-scale evaluation of remote javascript inclusions", "text": "JavaScript is used by web developers to enhance the interactivity of their sites, offload work to the users' browsers and improve their sites' responsiveness and user-friendliness, making web pages feel and behave like traditional desktop applications. An important feature of JavaScript, is the ability to combine multiple libraries from local and remote sources into the same page, under the same namespace. While this enables the creation of more advanced web applications, it also allows for a malicious JavaScript provider to steal data from other scripts and from the page itself. Today, when developers include remote JavaScript libraries, they trust that the remote providers will not abuse the power bestowed upon them.\n In this paper, we report on a large-scale crawl of more than three million pages of the top 10,000 Alexa sites, and identify the trust relationships of these sites with their library providers. We show the evolution of JavaScript inclusions over time and develop a set of metrics in order to assess the maintenance-quality of each JavaScript provider, showing that in some cases, top Internet sites trust remote providers that could be successfully compromised by determined attackers and subsequently serve malicious JavaScript. In this process, we identify four, previously unknown, types of vulnerabilities that attackers could use to attack popular web sites. Lastly, we review some proposed ways of protecting a web application from malicious remote scripts and show that some of them may not be as effective as previously thought."}
{"_id": "27309b61fd01532dac1e6d88acb6fcc29bad9190", "title": "Memory engram storage and retrieval", "text": "A great deal of experimental investment is directed towards questions regarding the mechanisms of memory storage. Such studies have traditionally been restricted to investigation of the anatomical structures, physiological processes, and molecular pathways necessary for the capacity of memory storage, and have avoided the question of how individual memories are stored in the brain. Memory engram technology allows the labeling and subsequent manipulation of components of specific memory engrams in particular brain regions, and it has been established that cell ensembles labeled by this method are both sufficient and necessary for memory recall. Recent research has employed this technology to probe fundamental questions of memory consolidation, differentiating between mechanisms of memory retrieval from the true neurobiology of memory storage."}
{"_id": "69660235186b725c452ea5a7d8621089a713fabc", "title": "Dimension Reduction and Data Visualization Using Neural Networks", "text": "The problem of visual presentation of multidimensional data is discussed. The projection methods for dimension reduction are reviewed. The chapter deals with the artificial neural networks that may be used for reducing dimension and data visualization, too. The stress is put on combining the selforganizing map (SOM) and Sammon mapping and on the neural network for Sammon\u2019s mapping SAMANN. Large scale applications are discussed: environmental data analysis, statistical analysis of curricula, comparison of schools, analysis of the economic and social conditions of countries, analysis of data on the fundus of eyes and analysis of physiological data on men\u2019s health."}
{"_id": "2402066417256a70d7bf36ee163af5eba0aed211", "title": "Stochastic Language Generation in Dialogue using Recurrent Neural Networks with Convolutional Sentence Reranking", "text": "The natural language generation (NLG) component of a spoken dialogue system (SDS) usually needs a substantial amount of handcrafting or a well-labeled dataset to be trained on. These limitations add significantly to development costs and make cross-domain, multi-lingual dialogue systems intractable. Moreover, human languages are context-aware. The most natural response should be directly learned from data rather than depending on predefined syntaxes or rules. This paper presents a statistical language generator based on a joint recurrent and convolutional neural network structure which can be trained on dialogue act-utterance pairs without any semantic alignments or predefined grammar trees. Objective metrics suggest that this new model outperforms previous methods under the same experimental conditions. Results of an evaluation by human judges indicate that it produces not only high quality but linguistically varied utterances which are preferred compared to n-gram and rule-based systems."}
{"_id": "cedfd39777d9a5e8bd68a9149c20f3e860483a40", "title": "Adaptive bootstrapping of recommender systems using decision trees", "text": "Recommender systems perform much better on users for which they have more information. This gives rise to a problem of satisfying users new to a system. The problem is even more acute considering that some of these hard to profile new users judge the unfamiliar system by its ability to immediately provide them with satisfying recommendations, and may quickly abandon the system when disappointed. Rapid profiling of new users by a recommender system is often achieved through a bootstrapping process - a kind of an initial interview - that elicits users to provide their opinions on certain carefully chosen items or categories. The elicitation process becomes particularly effective when adapted to users' responses, making best use of users' time by dynamically modifying the questions to improve the evolving profile. In particular, we advocate a specialized version of decision trees as the most appropriate tool for this task. We detail an efficient tree learning algorithm, specifically tailored to the unique properties of the problem. Several extensions to the tree construction are also introduced, which enhance the efficiency and utility of the method. We implemented our methods within a movie recommendation service. The experimental study delivered encouraging results, with the tree-based bootstrapping process significantly outperforming previous approaches."}
{"_id": "e56fde174f9958b6d6c71371b76fd0c2c9fe8aff", "title": "Segmenting Unknown 3D Objects from Real Depth Images using Mask R-CNN Trained on Synthetic Point Clouds", "text": "The ability to segment unknown objects in depth images has potential to enhance robot skills in grasping and object tracking. Recent computer vision research has demonstrated that Mask R-CNN can be trained to segment specific categories of objects in RGB images when massive handlabeled datasets are available. As generating these datasets is time-consuming, we instead train with synthetic depth images. Many robots now use depth sensors, and recent results suggest training on synthetic depth data can generalize well to the real world. We present a method for automated dataset generation and rapidly generate a training dataset of 50k depth images and 320k object masks synthetically using simulated scenes of 3D CAD models. We train a variant of Mask R-CNN on the generated dataset to perform category-agnostic instance segmentation without hand-labeled data. We evaluate the trained network, which we refer to as Synthetic Depth (SD) Mask R-CNN, on a set of real, high-resolution images of challenging, denselycluttered bins containing objects with highly-varied geometry. SD Mask R-CNN outperforms point cloud clustering baselines by an absolute 15% in Average Precision and 20% in Average Recall, and achieves performance levels similar to a Mask RCNN trained on a massive, hand-labeled RGB dataset and fine-tuned on real images from the experimental setup. The network also generalizes well to a lower-resolution depth sensor. We deploy the model in an instance-specific grasping pipeline to demonstrate its usefulness in a robotics application. Code, the synthetic training dataset, and supplementary material are available at https://bit.ly/2letCuE."}
{"_id": "07a6112414db5983f5b6c1d9b7feb733b390fdd7", "title": "Disruption of Large-Scale Brain Systems in Advanced Aging", "text": "Cognitive decline is commonly observed in advanced aging even in the absence of disease. Here we explore the possibility that normal aging is accompanied by disruptive alterations in the coordination of large-scale brain systems that support high-level cognition. In 93 adults aged 18 to 93, we demonstrate that aging is characterized by marked reductions in normally present functional correlations within two higher-order brain systems. Anterior to posterior components within the default network were most severely disrupted with age. Furthermore, correlation reductions were severe in older adults free from Alzheimer's disease (AD) pathology as determined by amyloid imaging, suggesting that functional disruptions were not the result of AD. Instead, reduced correlations were associated with disruptions in white matter integrity and poor cognitive performance across a range of domains. These results suggest that cognitive decline in normal aging arises from functional disruption in the coordination of large-scale brain systems that support cognition."}
{"_id": "c3a0686fdae078336c8165d844263f34b2c75ada", "title": "Use of physiological signals to predict cybersickness", "text": "http://dx.doi.org/10.1016/j.displa.2016.07.002 0141-9382/ 2016 Elsevier B.V. All rights reserved. q This paper was recommended for publication by Richard H.Y. So. \u21d1 Corresponding author at: University of California, Irvine, 2201 Social & Behavioral Sciences Gateway Building, Irvine, CA 92697-5100, USA. E-mail addresses: mdenniso@uci.edu (M.S. Dennison), awisti@uci.edu (A.Z. Wisti), mdzmura@uci.edu (M. D\u2019Zmura). Mark S. Dennison \u21d1, A. Zachary Wisti, Michael D\u2019Zmura"}
{"_id": "be98ba27ffdaa99834d12a1aa9c905c7bc6848c1", "title": "Secure Fine-Grained Access Control and Data Sharing for Dynamic Groups in the Cloud", "text": "Cloud computing is an emerging computing paradigm that enables users to store their data in a cloud server to enjoy scalable and on-demand services. Nevertheless, it also brings many security issues, since cloud service providers (CSPs) are not in the same trusted domain as users. To protect data privacy against untrusted CSPs, existing solutions apply cryptographic methods (e.g., encryption mechanisms) and provide decryption keys only to authorized users. However, sharing cloud data among authorized users at a fine-grained level is still a challenging issue, especially when dealing with dynamic user groups. In this paper, we propose a secure and efficient fine-grained access control and data sharing scheme for dynamic user groups by: 1) defining and enforcing access policies based on the attributes of the data; 2) permitting the key generation center to efficiently update user credentials for dynamic user groups; and 3) allowing some expensive computation tasks to be performed by untrusted CSPs without requiring any delegation key. Specifically, we first design an efficient revocable attribute-based encryption (ABE) scheme with the property of ciphertext delegation by exploiting and uniquely combining techniques of identity-based encryption, ABE, subset-cover framework, and ciphertext encoding mechanism. We then present a fine-grained access control and data sharing system for on-demand services with dynamic user groups in the cloud. The experimental data show that our proposed scheme is more efficient and scalable than the state-of-the-art solution."}
{"_id": "d14ad611a79df4e5232d29a34a2386b9ca2b6659", "title": "Machine Learning-Based Software Quality Prediction Models: State of the Art", "text": "Quantification of parameters affecting the software quality is one of the important aspects of research in the field of software engineering. In this paper, we present a comprehensive literature survey of prominent quality molding studies. The survey addresses two views: (1) quantification of parameters affecting the software quality; and (2) using machine learning techniques in predicting the software quality. The paper concludes that, model transparency is a common shortcoming to all the surveyed studies."}
{"_id": "8fe7d4afad871a2926b1fe83c3698a72f9fe1ffc", "title": "Design and Implementation of Energy Management System With Fuzzy Control for DC Microgrid Systems", "text": "This paper presents the design and implementation of an energy management system (EMS) with fuzzy control for a dc microgrid system. Modeling, analysis, and control of distributed power sources and energy storage devices with MATLAB/Simulink are proposed, and the integrated monitoring EMS is implemented with LabVIEW. To improve the life cycle of the battery, fuzzy control manages the desired state of charge. The RS-485/ZigBee network has been designed to control the operating mode and to monitor the values of all subsystems in the dc microgrid system."}
{"_id": "551e277ae9413bda5fb810842e2f2effdf6d5750", "title": "3-Level high power converter with press pack IGBT", "text": "Converteam recently has introduced the MV7000 series of medium voltage drives, designed with Press Pack IGBT (PPI) in 3-level NPC topology in the power range from 4 MW to 32 MW [3]. PPI combines the benefits of disk type device technology with state-of-the-art semiconductor technology. This paper explains relevant technical features and design issues related to MV7000 including direct series connection of devices."}
{"_id": "410bc41c4372fd78a68cd08ad39f2a2c5156fc52", "title": "Bayesian Unsupervised Topic Segmentation", "text": "This paper describes a novel Bayesian approach to unsupervised topic segmentation. Unsupervised systems for this task are driven by lexical cohesion: the tendency of wellformed segments to induce a compact and consistent lexical distribution. We show that lexical cohesion can be placed in a Bayesian context by modeling the words in each topic segment as draws from a multinomial language model associated with the segment; maximizing the observation likelihood in such a model yields a lexically-cohesive segmentation. This contrasts with previous approaches, which relied on hand-crafted cohesion metrics. The Bayesian framework provides a principled way to incorporate additional features such as cue phrases, a powerful indicator of discourse structure that has not been previously used in unsupervised segmentation systems. Our model yields consistent improvements over an array of state-of-the-art systems on both text and speech datasets. We also show that both an entropy-based analysis and a well-known previous technique can be derived as special cases of the Bayesian framework.1"}
{"_id": "ff5c5d0ff736df0234590dafafb7358ad31292c8", "title": "Block diagonalization for multi-user MIMO with other-cell interference", "text": "Block diagonalization is one approach for linear preceding in the multiple-input multiple-output broadcast channel that sends multiple interference free data streams to different users in the same cell. Unfortunately, block diagonalization neglects other-cell interference (OCI), which limits the performance of users at the edge of the cell. This paper presents an OCI-aware enhancement to block diagonalization that uses a whitening filter for interference suppression at the receiver and a novel precoder using the interference-plus-noise covariance matrix for each user at the transmitter. For complex Gaussian matrix channels, the asymptotic sum rate of the proposed system is analyzed under a large antenna assumption for isotropic inputs and compared to conventional block diagonalization. The capacity loss due to OCI is quantified in terms of results from single-user MIMO capacity. Several numerical examples compare achievable sum rates, the proposed asymptotic rates, and the capacity loss, in low and high interference regimes."}
{"_id": "d781b74cf002f9fffcb7f60c3c319c41797d702e", "title": "Design and deployment of Aqua monitoring system using wireless sensor networks and IAR-Kick", "text": "In Aquaculture, the yields (shrimp, fish etc.) depend on the water characteristics of the aquaculture pond. For maximizing fish yields, the parameters which are to be kept at certain optimal levels in water. The parameters can vary a lot during the period of a day and can rapidly change depending on the external environmental conditions. Hence it is necessary to monitor these parameters with high frequency . Wireless sensor networks are used to monitor aqua farms for relevant parameters this system consists of two modules which are transmitter station and receiver station. The data transmits through GSM to the Database at receiver station. The graphical user interface was designed, to convey the data in the form of a message to the farmers in their respective local languages to their Mobile Phones and alerts them in unhygienic environmental conditions, in order to take suitable actions. Keywords; aquaculture; wireless sensor networks; IAR-Kick;pH;"}
{"_id": "784376563c94e231241fbcf71d4d2774aec4b935", "title": "A Comparison over Focused Web Crawling Strategies", "text": "In this paper we review and compare focused crawling strategies, studied and published during the past decade. Despite giant leaps in communication, storage and computing power in recent years, crawlers have always struggled to keep up with Web content generation and modification. Focused crawlers attempt to i) accelerate the crawling process, ii) maximize the harvest of high quality pages, iii) assign appropriate credit to different documents along a crawling path, such that short-term gains are not pursued at the expense of less obvious paths that ultimately yield larger sets of valuable pages. Beyond the review and comparison of the focused crawling strategies, we additionally propose additions to the corresponding architectures for further research."}
{"_id": "5ad900cc76088bd1f48f29b4940710938fb9206b", "title": "Penetration Testing with Improved Input Vector Identification", "text": "Penetration testing is widely used to help ensure the security of web applications. It discovers vulnerabilities by simulating attacks from malicious users on a target application. Identifying the input vectors of a web application and checking the results of an attack are important parts of penetration testing, as they indicate where an attack could be introduced and whether an attempted attack was successful. Current techniques for identifying input vectors and checking attack results are typically ad-hoc and incomplete, which can cause parts of an application to be untested and leave vulnerabilities undiscovered. In this paper, we propose a new approach to penetration testing that addresses these limitations by leveraging two recently-developed analysis techniques. The first is used to identify a web application's possible input vectors, and the second is used to automatically check whether an attack resulted in an injection. To empirically evaluate our approach, we compare it against a state-of-the-art, alternative technique. Our results show that our approach performs a more thorough penetration testing and leads to the discovery of more vulnerabilities."}
{"_id": "5d404211336ec535ac4f6e288cc5047cf433a327", "title": "Crawling AJAX by Inferring User Interface State Changes", "text": "AJAX is a very promising approach for improving rich interactivity and responsiveness of web applications. At the same time, AJAX techniques shatter the metaphor of a web \"page\" upon which general search crawlers are based. This paper describes a novel technique for crawling AJAX applications through dynamic analysis and reconstruction of user interface state changes. Our method dynamically infers a state-flow graph modeling the various navigation paths and states within an AJAX application. This reconstructed model can be used to generate linked static pages. These pages could be used to expose AJAX sites to general search engines. Moreover, we believe that the crawling techniques that are part of our solution have other applications, such as within general search engines, accessibility improvements, or in automatically exercising all user interface elements and conducting state-based testing of AJAX applications. We present our open source tool called CRAWLJAX which implements the concepts discussed in this paper. Additionally, we report a case study in which we apply our approach to a number of representative AJAX applications and elaborate on the obtained results."}
{"_id": "035da7cc36b2071cd07e093151be743f56c97c34", "title": "Flow-Sensitive Type Qualifiers", "text": "We present a system for extending standard type systems with flow-sensitive type qualifiers. Users annotate their programs with type qualifiers, and inference checks that the annotations are correct. In our system only the type qualifiers are modeled flow-sensitively---the underlying standard types are unchanged, which allows us to obtain an efficient constraint-based inference algorithm that integrates flow-insensitive alias analysis, effect inference, and ideas from linear type systems to support strong updates. We demonstrate the usefulness of flow-sensitive type qualifiers by finding a number of new locking bugs in the Linux kernel."}
{"_id": "184b5c5d0dcbe1d4685cccbfc729d759d5a67c29", "title": "A Comparison of Push and Pull Techniques for AJAX", "text": "AJAX applications are designed to have high user interactivity and low user-perceived latency. Real-time dynamic Web data such as news headlines, stock tickers, and auction updates need to be propagated to the users as soon as possible. However, AJAX still suffers from the limitations of the Web's request/response architecture which prevents servers from pushing real-time dynamic web data. Such applications usually use a pull style to obtain the latest updates, where the client actively requests the changes based on a predefined interval. It is possible to overcome this limitation by adopting a push style of interaction where the server broadcasts data when a change occurs on the server side. Both these options have their own trade-offs. This paper explores the fundamental limits of browser-based applications and analyzes push solutions for AJAX technology. It also shows the results of an empirical study comparing push and pull."}
{"_id": "7f43019a1a48b1cb80544366c571163e1ea545dc", "title": "Tractable Queries for Lightweight Description Logics", "text": "It is a classic result in database theory that conjunctive query (CQ) answering, which is NP-complete in general, is feasible in polynomial time when restricted to acyclic queries. Subsequent results identified more general structural properties of CQs (like bounded treewidth) which ensure tractable query evaluation. In this paper, we lift these tractability results to knowledge bases formulated in the lightweight description logics DL-Lite and ELH. The proof exploits known properties of query matches in these logics and involves a querydependent modification of the data. To obtain a more practical approach, we propose a concrete polynomial-time algorithm for answering acyclic CQs based on rewriting queries into datalog programs. A preliminary evaluation suggests the interest of our approach for handling large acyclic CQs."}
{"_id": "c2ad8ddc8b872d751de746fbbd5deff5ad336194", "title": "Twitter Sentiment Classification on Sanders Data using Hybrid Approach", "text": "Sentiment analysis is very perplexing and massive issue in the field of social data mining. Twitter is one of the mostly used social media where people discuss on various issues in a dense way. The tweets about a particular topic give peoples\u2019 views, opinions, orientations, inclinations about that topic. In this work, we have used pre-labeled (with positive, negative and neutral opinion) tweets on particular topics for sentiment classification. Opinion score of each tweet is calculated using feature vectors. These opinion score is used to classify the tweets into positive, negative and neutral classes. Then using various machine learning classifiers the accuracy of predicted classification with respect to actual classification is being calculated and compared using supervised learning model. Along with building a sentiment classification model, analysis of tweets is being carried out by visualizing the wordcloud of tweets using R."}
{"_id": "c9e301c367fdc9c2dbd6d59780a202f7be4e78d8", "title": "Multiple Kernel Fuzzy Clustering", "text": "While fuzzy c-means is a popular soft-clustering method, its effectiveness is largely limited to spherical clusters. By applying kernel tricks, the kernel fuzzy c-means algorithm attempts to address this problem by mapping data with nonlinear relationships to appropriate feature spaces. Kernel combination, or selection, is crucial for effective kernel clustering. Unfortunately, for most applications, it is uneasy to find the right combination. We propose a multiple kernel fuzzy c-means (MKFC) algorithm that extends the fuzzy c-means algorithm with a multiple kernel-learning setting. By incorporating multiple kernels and automatically adjusting the kernel weights, MKFC is more immune to ineffective kernels and irrelevant features. This makes the choice of kernels less crucial. In addition, we show multiple kernel k-means to be a special case of MKFC. Experiments on both synthetic and real-world data demonstrate the effectiveness of the proposed MKFC algorithm."}
{"_id": "79d9e2d5fb1422dd4a6e6f3e4137a4acc1bf2d32", "title": "Helicopter induced propeller injuries.", "text": "Following routine maintenance a military helicopter was returned to service. Due to negligence some hoses had not been connected correctly. The wrongly connected hoses resulted in a malfunction, causing the rotor blades of the helicopter to smash through the windows of the helicopter cockpit. Both the pilot and the co-pilot were hit by the rotor blades and decapitated. Other passengers in the helicopter were injured. Due to vibrations the helicopter fell to the ground and its tail partly broke away (Figs. 1, 2). The bodies and body parts of the pilot and co-pilot were found on the ground outside the helicopter cockpit."}
{"_id": "3f877fe690e31ecdd1971febb336460c9ec99849", "title": "Sonoliards: Rendering Audible Sound Spots by Reflecting the Ultrasound Beams", "text": "This paper proposes a dynamic acoustic field generation system for a spot audio towards particular person indoors. Spot audio techniques have been explored by generating the ultrasound beams toward the target person in certain area however everyone in this area can hear the sound. Our system recognizes the positions of each person indoor using motion capture and 3D model data of the room. After that we control direction of parametric speaker in real-time so that sound reach only particular person by calculating the reflection of sound on surfaces such as wall and ceiling. We calculate direction of parametric speaker using a beam tracing method. We present generating methods of dynamic acoustic field in our system and conducted the experiments on human factor to evaluate performance of proposed system."}
{"_id": "e4fa93b6e9f0a6f5d8414340c9d90712364c4878", "title": "Emotions , Partisanship , and Misperceptions : How Anger and Anxiety Moderate the Effect of Partisan Bias on Susceptibility to Political Misinformation", "text": "Citizens are frequently misinformed about political issues and candidates but the circumstances under which inaccurate beliefs emerge are not fully understood. This experimental study demonstrates that the independent experience of two emotions, anger and anxiety, in part determines whether citizens consider misinformation in a partisan or open-minded fashion. Anger encourages partisan, motivated evaluation of uncorrected misinformation that results in beliefs consistent with the supported political party, while anxiety at times promotes initial beliefs based less on partisanship and more on the information environment. However, exposure to corrections improves belief accuracy, regardless of emotion or partisanship. The results indicate that the unique experience of anger and anxiety can affect the accuracy of political beliefs by strengthening or attenuating the influence of partisanship."}
{"_id": "a4cc0c4135940662d66ce365bc8e843255605117", "title": "Sensation seeking in England and America: cross-cultural, age, and sex comparisons.", "text": "This study compared the factor structure of the Sensation-Seeking Scale (SSS) in English and American samples, and a new form of the SSS, applicable to both samples, was constructed. Three of the four factors showed good crossnational and cross-sex reliability. English and American males did not differ on the total SSS score, but American females scored higher than English females. Males in both countries scored higher than females on the total SSS score and on the Thrill and Adventure-Seeking and Disinhibition subscales. Significant age declines occurred for both sexes, particularly on Thrill and Adventure Seeking and Disinhibition."}
{"_id": "6fb3940ddd658e549a111870f10ca77ba3c4cf37", "title": "A Better Baseline for AVA", "text": "We introduce a simple baseline for action localization on the AVA dataset. The model builds upon the Faster R-CNN bounding box detection framework, adapted to operate on pure spatiotemporal features \u2013 in our case produced exclusively by an I3D model pretrained on Kinetics. This model obtains 21.9% average AP on the validation set of AVA v2.1, up from 14.5% for the best RGB spatiotemporal model used in the original AVA paper (which was pretrained on Kinetics and ImageNet), and up from 11.3% of the publicly available baseline using a ResNet101 image feature extractor, that was pretrained on ImageNet. Our final model obtains 22.8%/21.9% mAP on the val/test sets and outperforms all submissions to the AVA challenge at CVPR 2018."}
{"_id": "20589b37883fbae68d060cf4d4143d491677b096", "title": "Deep predictive policy training using reinforcement learning", "text": "Skilled robot task learning is best implemented by predictive action policies due to the inherent latency of sensorimotor processes. However, training such predictive policies is challenging as it involves finding a trajectory of motor activations for the full duration of the action. We propose a data-efficient deep predictive policy training (DPPT) framework with a deep neural network policy architecture which maps an image observation to a sequence of motor activations. The architecture consists of three sub-networks referred to as the perception, policy and behavior super-layers. The perception and behavior super-layers force an abstraction of visual and motor data trained with synthetic and simulated training samples, respectively. The policy super-layer is a small subnetwork with fewer parameters that maps data in-between the abstracted manifolds. It is trained for each task using methods for policy search reinforcement learning. We demonstrate the suitability of the proposed architecture and learning framework by training predictive policies for skilled object grasping and ball throwing on a PR2 robot. The effectiveness of the method is illustrated by the fact that these tasks are trained using only about 180 real robot attempts with qualitative terminal rewards."}
{"_id": "920d6cedff215711b1d0408457048e5d1d0b6b01", "title": "Piriformis syndrome, diagnosis and treatment.", "text": "Piriformis syndrome (PS) is an uncommon cause of sciatica that involves buttock pain referred to the leg. Diagnosis is often difficult, and it is one of exclusion due to few validated and standardized diagnostic tests. Treatment for PS has historically focused on stretching and physical therapy modalities, with refractory patients also receiving anesthetic and corticosteroid injections into the piriformis muscle origin, belly, muscle sheath, or sciatic nerve sheath. Recently, the use of botulinum toxin (BTX) to treat PS has gained popularity. Its use is aimed at relieving sciatic nerve compression and inherent muscle pain from a tight piriformis. BTX is being used increasingly for myofascial pain syndromes, and some studies have demonstrated superior efficacy to corticosteroid injection. The success of BTX in treating PS supports the prevailing pathoanatomic etiology of the condition and suggests a promising future for BTX in the treatment of other myofascial pain syndromes."}
{"_id": "189f893da626e37c8aaee3522519ccc87ba1a8a2", "title": "A high-level 3D visualization API for Java and ImageJ", "text": "Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de ."}
{"_id": "34cbfea7d7e065bbd541e2915dd272355f12e865", "title": "Psychological first-aid: a practical aide-memoire.", "text": "Despite advances made in recent years in medical first aid, psychiatric intervention, survival training and equipment design, many people still perish quickly during and immediately following a disastrous event. In this study, individuals and groups of survivors of life-threatening events were debriefed and the behavior of those who coped well during such a threat to life were compared with those who did not. The behaviors of those who coped well were distilled into a set of principles for psychological first aid; that is, a series of simple actions for use within a disaster which serves to recover victims to functional behavior as quickly as possible, thus increasing their chance for survival. These principles of psychological first aid have recently been introduced into basic first aid and survival training courses for both military and civilian units."}
{"_id": "2060441ed47f6cee9bab6c6597a7709836691da3", "title": "Iterative Thresholding Algorithm for Sparse Inverse Covariance Estimation", "text": "The `1-regularized maximum likelihood estimation problem has recently become a topic of great interest within the machine learning, statistics, and optimization communities as a method for producing sparse inverse covariance estimators. In this paper, a proximal gradient method (G-ISTA) for performing `1-regularized covariance matrix estimation is presented. Although numerous algorithms have been proposed for solving this problem, this simple proximal gradient method is found to have attractive theoretical and numerical properties. G-ISTA has a linear rate of convergence, resulting in an O(log \u03b5) iteration complexity to reach a tolerance of \u03b5. This paper gives eigenvalue bounds for the G-ISTA iterates, providing a closed-form linear convergence rate. The rate is shown to be closely related to the condition number of the optimal point. Numerical convergence results and timing comparisons for the proposed method are presented. G-ISTA is shown to perform very well, especially when the optimal point is well-conditioned."}
{"_id": "4a20823dd4ce6003e31f7d4e0649fe8c719926f2", "title": "A gene-coexpression network for global discovery of conserved genetic modules.", "text": "To elucidate gene function on a global scale, we identified pairs of genes that are coexpressed over 3182 DNA microarrays from humans, flies, worms, and yeast. We found 22,163 such coexpression relationships, each of which has been conserved across evolution. This conservation implies that the coexpression of these gene pairs confers a selective advantage and therefore that these genes are functionally related. Many of these relationships provide strong evidence for the involvement of new genes in core biological functions such as the cell cycle, secretion, and protein expression. We experimentally confirmed the predictions implied by some of these links and identified cell proliferation functions for several genes. By assembling these links into a gene-coexpression network, we found several components that were animal-specific as well as interrelationships between newly evolved and ancient modules."}
{"_id": "25c760c11c7803b2aefd6b6ae36f15908f76b544", "title": "Sparse inverse covariance estimation with the graphical lasso.", "text": "We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm--the graphical lasso--that is remarkably fast: It solves a 1000-node problem ( approximately 500,000 parameters) in at most a minute and is 30-4000 times faster than competing methods. It also provides a conceptual link between the exact problem and the approximation suggested by Meinshausen and B\u00fchlmann (2006). We illustrate the method on some cell-signaling data from proteomics."}
{"_id": "8f944c55cccf7b8e6911d39ba78d711c35a40020", "title": "Anatomical and biomechanical mechanisms of subacromial impingement syndrome.", "text": "Subacromial impingement syndrome is the most common disorder of the shoulder, resulting in functional loss and disability in the patients that it affects. This musculoskeletal disorder affects the structures of the subacromial space, which are the tendons of the rotator cuff and the subacromial bursa. Subacromial impingement syndrome appears to result from a variety of factors. Evidence exists to support the presence of the anatomical factors of inflammation of the tendons and bursa, degeneration of the tendons, weak or dysfunctional rotator cuff musculature, weak or dysfunctional scapular musculature, posterior glenohumeral capsule tightness, postural dysfunctions of the spinal column and scapula and bony or soft tissue abnormalities of the borders of the subacromial outlet. These entities may lead to or cause dysfunctional glenohumeral and scapulothoracic movement patterns. These various mechanisms, singularly or in combination may cause subacromial impingement syndrome."}
{"_id": "38e574280045bd46f31bce121b3ae79526eb4af1", "title": "Artificial Intelligence in Education", "text": "In a life-long learning society, learning scenarios can be categorized into five types, which are \u201cclassroom learning\u201d, \u201cself-learning\u201d, \u201cinquiry learning\u201d, \u201clearning in doing\u201d and \u201clearning in working\u201d. From a life-wide learning perspective, all these scenarios play vital roles for personal development. How to recognize these learning scenarios (including learning time, learning place, learning peers, learning activities, etc.) and provide the matched learning ways (including learning path, resources, peers, teachers, etc.) are the basis for smart learning environments, however few research could be found to address this problem. In order to solve this problem, we propose a conceptual framework of smart learning engine that is the core of integrated, interactive and intelligent (i) learning environments. The smart learning engine consists of three main functions. The first function is to identify data from student, teacher, subject area, and the environment using wireless sensors, the established learning resources and scenarios, and a learner modeling technology. The acquired data includes prior knowledge, theme-based context, leaner/teacher profile, physical environments, etc. The second function is to compute the best ways of learning based on the learning scenario and learning preference. In detail, this function includes modeling learner\u2019s affective data, building knowledge structure, optimizing knowledge module, and connecting learners. The third function is to deploy personalized and adaptive strategy, resources and tools for students and teachers based on the computed results in the second function. Deploy interactive strategies, learning paces, learning resources, and delivery approaches are the core elements for this function."}
{"_id": "8a5411878433127a81807f9ba051651024f115cf", "title": "Evolutionary reinforcement learning in FX order book and order flow analysis", "text": "As macroeconomic fundamentals based modelling of FX timeseries have been shown not to fit the empirical evidence at horizons of less than one year, interest has moved towards microstructure-based approaches. Order flow data has recently been receiving an increasing amount of attention in equity market analyses and thus increasingly in foreign exchange as well. In this paper, order flow data is coupled with order book derived indicators and we explore whether pattern recognition techniques derived from computational learning can be applied to successfully infer trading strategies on the underlying timeseries. Due to the limited amount of data available the results are preliminary. However, the approach demonstrates promise and it is shown that using order flow and order book data is usually superior to trading on technical signals alone."}
{"_id": "4a8a62bab0c006b88dc910341c0f7be2e27b105e", "title": "Enterprise Architecture for Digital Transformation", "text": "Digital transformation requires an altogether new institutional logic and effective response at a requisite organizational level. Sensing and seizing fleeting market opportunities and reconfiguring the business in line with the shifting value proposition requires increasingly specialized resources, more dynamic capabilities, and in-built resilience in the face of change. While Enterprise Architecture (EA) has been suggested to facilitate enterprise transformation, the focus has traditionally been on process standardization and integration rather than on continuous adaptation to the changing business and technological landscape. For EA to have a desired impact, more adaptive conceptualizations of EA that address the requirements of the new digital environment are called for. In this conceptual paper, we explore the implications of digital transformation on enterprise architecture. In particular, we note that existing approaches to EA address integration and coherence within a single organization but typically fall short in the face of complexities pertaining to digital ecosystems. We suggest a vertical typology of organizational capabilities and postulate that today's digital environment increasingly requires adaptive capabilities that transcend the traditional notion of dynamic capabilities. We then investigate how EA can help build flexibility and resilience in the organization."}
{"_id": "cd418c6851336a19651ff74f4b859203c25c75c5", "title": "Looking under the bonnet : factors affecting student adoption of e-learning systems in", "text": "The primary questions addressed in this paper are the following: what are the factors that affect students\u201f adoption of an e-learning system and what are the relationships among these factors? This paper investigates and identifies some of the major factors affecting students\u201f adoption of an e-learning system in a university in Jordan. E-learning adoption is approached from the information systems acceptance point of view. This suggests that a prior condition for learning effectively using e-learning systems is that students must actually use them. Thus, a greater knowledge of the factors that affect IT adoption and their interrelationships is a pre-cursor to a better understanding of student acceptance of e-learning systems. In turn, this will help and guide those who develop, implement, and deliver e-learning systems. In this study, an extended version of the Technology Acceptance Model (TAM) was developed to investigate the underlying factors that influence students\u201f decisions to use an e-learning system. The TAM was populated using data gathered from a survey of 486 undergraduate students, who were using the Moodle based e-learning system at the Arab Open University. The model was estimated using Structural Equation Modelling (SEM). A path model was developed to analyze the relationships between the factors to explain students\u201f adoption of the e-learning system. Whilst findings support existing literature about prior experience affecting perceptions, they also point to surprising group effects, which may merit future exploration."}
{"_id": "256f63cba7ede2a58d56a089122466bc35ce6abf", "title": "Video semantic object segmentation by self-adaptation of DCNN", "text": "This paper proposes a new framework for semantic segmentation of objects in videos. We address the label inconsistency problem of deep convolutional neural networks (DCNNs) by exploiting the fact that videos have multiple frames; in a few frames the object is confidently-estimated (CE) and we use the information in them to improve labels of the other frames. Given the semantic segmentation results of each frame obtained from DCNN, we sample several CE frames to adapt the DCNN model to the input video by focusing on specific instances in the video rather than general objects in various circumstances. We propose offline and online approaches under different supervision levels. In experiments our method achieved great improvement over the original model and previous state-of-the-art methods. c \u00a9 2016 Elsevier Ltd. All rights reserved."}
{"_id": "4e51f0102e58ab10bc9552c92f385dd060010537", "title": "Vision: towards real time epidemic vigilance through online social networks: introducing SNEFT -- social network enabled flu trends", "text": "Our vision is to achieve faster and near real time detection and prediction of the emergence and spread of an influenza epidemic, through sophisticated data collection and analysis of Online Social Networks (OSNs) such as Facebook, MySpace, and Twitter. In particular, we present the design of a system called SNEFT (Social Network Enabled Flu Trends), which will be developed in a 12-month SBIR (Small Business Innovation Research) project funded by the National Institutes of Health (NIH). We describe the innovative technologies that will be developed in this project for collecting and aggregating OSN data, extracting information from it, and integrating it with mathematical models of influenza. One of the monitoring tools used by the Centers for Disease Control and Prevention (CDC) is reports of Influenza-Like Illness (ILI) cases; these reports are authoritative but typically have a delay of one to two weeks due to the largely manual process. We describe the SNEFT prototype in the context of predicting ILI cases well in advance of the CDC reports. We observe that OSN data is individually noisy but collectively revealing, and speculate on other applications that can potentially be enabled by OSN data collection and analysis."}
{"_id": "37a29397c51828ed4369cc6e7ed3db56afc4b35c", "title": "Speaking in tongues: SQL access to NoSQL systems", "text": "Non-relational data stores, which are usually called NoSQL systems, have become an important class of data management systems. They often outperform the relational systems. Yet there is no common way to interface with NoSQL systems. The Structured Query Language (SQL) has already proven to be useful to provide a uniform query language for all the relational systems. We identify a subset of SQL for access to NoSQL systems. Our extensible middleware translates SQL queries to the query languages of the connected NoSQL systems. The migration between these systems is thereby greatly simplified as well as the comparison of the supported NoSQL systems."}
{"_id": "95e7728506bab271014618f8a9b0eafbb1d4dbf8", "title": "Upgrading Wireless Home Routers for Enabling Large-Scale Deployment of Cloudlets", "text": "Smartphones become more and more popular over recent years due to their small form factors. However, such mobile systems are resource-constrained in view of computational power, storage and battery life. Offloading resource-intensive tasks (aka mobile cloud computing) to distant (e.g., cloud computing) or closely located data centers (e.g., cloudlet) overcomes these issues. Especially, cloudlets provide computational power with low latency for responsive applications due to their proximity to mobile users. However, a large-scale deployment of rangerestricted cloudlets is still an open challenge. In this paper, we propose a novel concept for a large-scale deployment of cloudlets by upgrading wireless home routers. Beside router\u2019s native purpose of routing data packets through the network, it can now offer computing resources with low latency and high bandwidth without additional hardware. Proving our concept, we conducted comprehensive benchmark tests against existing concepts. As result, the feasibility of this concept is shown and provide a promising way to large-scale deploy cloudlets in existing infrastructures."}
{"_id": "6ad054b4401d57e04307b74eef5dff6ba6274007", "title": "Sentiment expression via emoticons on social media", "text": "Emoticons (e.g., :) and :( ) have been widely used in sentiment analysis and other NLP tasks as features to machine learning algorithms or as entries of sentiment lexicons. In this paper, we argue that while emoticons are strong and common signals of sentiment expression on social media the relationship between emoticons and sentiment polarity are not always clear. Thus, any algorithm that deals with sentiment polarity should take emoticons into account but extreme caution should be exercised in which emoticons to depend on. First, to demonstrate the prevalence of emoticons on social media, we analyzed the frequency of emoticons in a large recent Twitter data set. Then we carried out four analyses to examine the relationship between emoticons and sentiment polarity as well as the contexts in which emoticons are used. The first analysis surveyed a group of participants for their perceived sentiment polarity of the most frequent emoticons. The second analysis examined clustering of words and emoticons to better understand the meaning conveyed by the emoticons. The third analysis compared the sentiment polarity of microblog posts before and after emoticons were removed from the text. The last analysis tested the hypothesis that removing emoticons from text hurts sentiment classification by training two models with and without emoticons in the text, respectively. The results confirms the arguments that: 1) a few emoticons are strong and reliable signals of sentiment polarity and one should take advantage of them in any sentiment analysis; 2) a large group of the emoticons conveys complicated sentiment hence they should be treated with extreme caution."}
{"_id": "da411a876b4037434e4f47f7d14f0fca1ca0cad8", "title": "The Propulsion Committee Final Report and Recommendations to the 25 th ITTC 1", "text": ""}
{"_id": "6e3a8b3d7ae93091609337221f8f252480188e7b", "title": "Thalamic relays and cortical functioning.", "text": "Studies on the visual thalamic relays, the lateral geniculate nucleus and pulvinar, provide three key properties that have dramatically changed the view that the thalamus serves as a simple relay to get information from subcortical sites to cortex. First, the retinal input, although a small minority (7%) in terms of numbers of synapses onto geniculate relay cells, dominates receptive field properties of these relay cells and strongly drives them; 93% of input thus is nonretinal and modulates the relay in dynamic and important ways related to behavioral state, including attention. We call the retinal input the driver input and the nonretinal, modulator input, and their unique morphological and functional differences allow us to recognize driver and modulator input to many other thalamic relays. Second, much of the modulation is related to control of a voltage-gated, low threshold Ca(2+) conductance that determines response properties of relay cells -burst or tonic - and this, among other things, affects the salience of information relayed. Third, the lateral geniculate nucleus and pulvinar (a massive but generally mysterious and ignored thalamic relay), are examples of two different types of relay: the LGN is a first order relay, transmitting information from a subcortical driver source (retina), while the pulvinar is mostly a higher order relay, transmitting information from a driver source emanating from layer 5 of one cortical area to another area. Higher order relays seem especially important to general corticocortical communication, and this view challenges the conventional dogma that such communication is based on direct corticocortical connections. In this sense, any new information reaching a cortical area, whether from a subcortical source or another cortical area, benefits from a thalamic relay. Other examples of first and higher order relays also exist, and generally higher order relays represent the majority of thalamus. A final property of interest emphasized in chapter 17 by Guillery (2005) is that most or all driver inputs to thalamus, whether from a subcortical source or from layer 5 of cortex, are axons that branch, with the extrathalamic branch innervating a motor or premotor region in the brainstem, or in some cases, spinal cord. This suggests that actual information relayed by thalamus to cortex is actually a copy of motor instructions (Guillery, 2005). Overall, these features of thalamic relays indicate that the thalamus not only provides a behaviorally relevant, dynamic control over the nature of information relayed, it also plays a key role in basic corticocortical communication."}
{"_id": "b6d1230bb36abccd96b410a5f776f650d9362f77", "title": "The CRISPR/Cas9 system produces specific and homozygous targeted gene editing in rice in one generation.", "text": "The CRISPR/Cas9 system has been demonstrated to efficiently induce targeted gene editing in a variety of organisms including plants. Recent work showed that CRISPR/Cas9-induced gene mutations in Arabidopsis were mostly somatic mutations in the early generation, although some mutations could be stably inherited in later generations. However, it remains unclear whether this system will work similarly in crops such as rice. In this study, we tested in two rice subspecies 11 target genes for their amenability to CRISPR/Cas9-induced editing and determined the patterns, specificity and heritability of the gene modifications. Analysis of the genotypes and frequency of edited genes in the first generation of transformed plants (T0) showed that the CRISPR/Cas9 system was highly efficient in rice, with target genes edited in nearly half of the transformed embryogenic cells before their first cell division. Homozygotes of edited target genes were readily found in T0 plants. The gene mutations were passed to the next generation (T1) following classic Mendelian law, without any detectable new mutation or reversion. Even with extensive searches including whole genome resequencing, we could not find any evidence of large-scale off-targeting in rice for any of the many targets tested in this study. By specifically sequencing the putative off-target sites of a large number of T0 plants, low-frequency mutations were found in only one off-target site where the sequence had 1-bp difference from the intended target. Overall, the data in this study point to the CRISPR/Cas9 system being a powerful tool in crop genome engineering."}
{"_id": "e3734bc00eb28e697011d6be04920df3d8892062", "title": "Learning Inverse Dynamics for Robot Manipulator Control", "text": "Model-based control strategies for robot manipulators can present numerous performance advantages when an accurate model of the system dynamics is available. In practice, obtaining such a model is a challenging task which involves modeling such physical processes as friction, which may not be well understood and difficult to model. Furthermore, uncertainties in the physical parameters of a system may be introduced from significant discrepancies between the manufacturer data and the actual system. Traditionally, adaptive and robust control strategies have been developed to deal with parametric uncertainty in the dynamic model, but often require knowledge of the structure of the dynamics. Recent approaches to model-based manipulator control involve data-driven learning of the inverse dynamics relationship, eliminating the need for any a-priori knowledge of the system model. Locally Weighted Projection Regression (LWPR) has been proposed for learning the inverse dynamics function of a manipulator. Due to its use of simple local, linear models, LWPR is suitable for online and incremental learning. Although global regression techniques such as Gaussian Process Regression (GPR) have been shown to outperform LWPR in terms of accuracy, due to its heavy computational requirements, GPR has been applied mainly to offline learning of inverse dynamics. More recent efforts in making GPR computationally tractable for real-time control have resulted in several approximations which operate on a select subset, or sparse representation of the entire training data set. Despite the significant advancements that have been made in the area of learning control, there has not been much work in recent years to evaluate these newer regression techniques against traditional model-based control strategies such as adaptive control. Hence, the first portion of this thesis provides a comparison between a fixed model-based control strategy, an adaptive controller and the LWPR-based learning controller. Simulations are carried out in order to evaluate the position and orientation tracking performance of each controller under varied end effector loading, velocities and inaccuracies in the known dynamic parameters. Both the adaptive controller and LWPR controller are shown to have comparable performance in the presence of parametric uncertainty. However, it is shown that the learning controller is unable to generalize well outside of the regions in which it has been trained. Hence, achieving good performance requires significant amounts of training in the anticipated region of operation. In addition to poor generalization performance, most learning controllers commence learning entirely from \u2018scratch,\u2019 making no use of any a-priori knowledge which may be available from the well-known rigid body dynamics (RBD) formulation. The second portion of this thesis develops two techniques for online, incremental learning algorithms which incorporate prior knowledge to improve generalization performance. First, prior knowledge"}
{"_id": "81342e852f57fe6a267e241cdaabbd80c2da3f85", "title": "The Impact of Interactive Touchscreens on Physics Education in Upper Secondary School \u2013 A systematic literature review", "text": "Interactive touchscreens such as tablet PCs (TPC) and interactive whiteboards (IWB) are becoming more and more common in classrooms around the world. To date, very little research has been conducted on the impact of the new technology on physics education. This systematic literature review aims to investigate research on what impact tablet PCs and interactive whiteboards might have on the education in upper Secondary School. The review was performed in response to the following questions: 1. What is the influence of IWBs and TPCs on students\u2019 active participation in physics education? 2. How can an IWB or TPC improve students\u2019 learning about physics concepts? 3. How can educational research on touchscreen technology help inform effective teaching strategies in physics education? To respond to the questions of the study, relevant research about interactive whiteboards and/or tablet PCs was consulted and analysed. Twelve articles were located, mainly through the ERIC and Scopus databases, but also through Google Scholar. The included articles reported empirical research about physics education with interactive whiteboards or tablet PCs. The results from the articles indicate that interactive touchscreens might help improve learners\u2019 active participation in physics education. Interactive whiteboards can, for example, be used to display interactive simulations during group work, something students are found to appreciate and easily engage in. A tablet PC can be used in the same way, but also allows students to receive anonymous support and feedback from the teacher during class which seems to be beneficial for learning. Results show that it is possible to improve students\u2019 understanding of physics concepts by using interactive whiteboards or tablet PCs. However, further research is required to compare results from students using touch technology and students taught in traditional manner to be able to draw any general conclusions about observed learning effects."}
{"_id": "cab78a90dfac2f271e6764205e23af35a525bee0", "title": "Mining Frequent Patterns by Pattern-Growth: Methodology and Implications", "text": "Mining frequent patterns has been a focused topic in data mining research in recent years, with the developmeht of numerous interesting algorithms for mining association, correlation, causality, sequential patterns, partial periodicity, constraint-based frequent pattern mining, associative classification, emerging patterns, etc. Most of the previous studies adopt an Apriori-like, candidate generation-and-test approach. However, based on our analysis, candidate generation and test may still be expensive, especially when encountering long and numerous patterns. A new methodology, called f r e q u e n t p a t t e r n g rowth , which mines frequent patterns without candidate generation, has been developed. The method adopts a divide-andconquer philosophy to project and partition databases based on the currently discovered frequent patterns and grow such patterns to longer ones in the projected databases. Moreover, efficient data structures have been developed for effective database compression and fast in-memory traversal. Such a methodology may eliminate or substantially reduce the number of candidate sets to be generated and also reduce the size of the database to be iteratively examined, and, therefore, lead to high performance. In this paper, we provide an overview of this approach and examine its methodology and implications for mining several kinds of frequent patterns, including association, frequent closed itemsets, max-patterns, sequential patterns, and constraint-based mining of frequent patterns. We show that frequent pattern growth is efficient at mining large databases and its further development may lead to scalable mining of many other kinds of patterns as well."}
{"_id": "90bb83a7ac66aed225de860b9754fc7162516b29", "title": "Alike people, alike interests? Inferring interest similarity in online social networks", "text": "Understanding how much two individuals are alike in their interests (i.e., interest similarity) has become virtually essential for many applications and services in Online Social Networks (OSNs). Since users do not always explicitly elaborate their interests in OSNs like Facebook, how to determine users\u2019 interest similarity without fully knowing their interests is a practical problem. In this paper, we investigate how users\u2019 interest similarity relates to various social features (e.g. geographic distance); and accordingly infer whether the interests of two users are alike or unalike where one of the users\u2019 interests are unknown. Relying on a large Facebook dataset, which contains 479, 048 users and 5, 263, 351 user-generated interests, we present comprehensive empirical studies and verify the homophily of interest similarity across three interest domains (movies, music and TV shows). The homophily reveals that people tend to exhibit more similar tastes if they have similar demographic information (e.g., age, location), or if they are friends. It also shows that the individuals with a higher interest entropy usually share more interests with others. Based on these results, we provide a practical prediction model under a real OSN environment. For a given user with no interest information, this model can select some individuals who not only exhibit many interests but also probably achieve high interest similarities with the given user. Eventually, we illustrate a use case to demonstrate that the proposed prediction model could facilitate decision-making for OSN applications and services."}
{"_id": "09750ce4a8fa0a0fc596bdda8bf58db74fa9a0e1", "title": "Synthesizing Training Images for Boosting Human 3D Pose Estimation", "text": "Human 3D pose estimation from a single image is a challenging task with numerous applications. Convolutional Neural Networks (CNNs) have recently achieved superior performance on the task of 2D pose estimation from a single image, by training on images with 2D annotations collected by crowd sourcing. This suggests that similar success could be achieved for direct estimation of 3D poses. However, 3D poses are much harder to annotate, and the lack of suitable annotated training images hinders attempts towards end-to-end solutions. To address this issue, we opt to automatically synthesize training images with ground truth pose annotations. Our work is a systematic study along this road. We find that pose space coverage and texture diversity are the key ingredients for the effectiveness of synthetic training data. We present a fully automatic, scalable approach that samples the human pose space for guiding the synthesis procedure and extracts clothing textures from real images. Furthermore, we explore domain adaptation for bridging the gap between our synthetic training images and real testing photos. We demonstrate that CNNs trained with our synthetic images out-perform those trained with real photos on 3D pose estimation tasks."}
{"_id": "0f5b44e6ad77a5555e28d86e2391ee8eb988dcdf", "title": "\"Constant, constant, multi-tasking craziness\": managing multiple working spheres", "text": "Most current designs of information technology are based on the notion of supporting distinct tasks such as document production, email usage, and voice communication. In this paper we present empirical results that suggest that people organize their work in terms of much larger and thematically connected units of work. We present results of fieldwork observation of information workers in three different roles: analysts, software developers, and managers. We discovered that all of these types of workers experience a high level of discontinuity in the execution of their activities. People average about three minutes on a task and somewhat more than two minutes using any electronic tool or paper document before switching tasks. We introduce the concept of working spheres to explain the inherent way in which individuals conceptualize and organize their basic units of work. People worked in an average of ten different working spheres. Working spheres are also fragmented; people spend about 12 minutes in a working sphere before they switch to another. We argue that design of information technology needs to support people's continual switching between working spheres."}
{"_id": "b86d71ceca26fbb747093aeb5669e3df9a9fbee2", "title": "A Knowledge Hunting Framework for Common Sense Reasoning", "text": "We introduce an automatic system that achieves state-of-the-art results on the Winograd Schema Challenge (WSC), a common sense reasoning task that requires diverse, complex forms of inference and knowledge. Our method uses a knowledge hunting module to gather text from the web, which serves as evidence for candidate problem resolutions. Given an input problem, our system generates relevant queries to send to a search engine, then extracts and classifies knowledge from the returned results and weighs them to make a resolution. Our approach improves F1 performance on the full WSC by 0.21 over the previous best and represents the first system to exceed 0.5 F1. We further demonstrate that the approach is competitive on the Choice of Plausible Alternatives (COPA) task, which suggests that it is generally applicable."}
{"_id": "202b4116e06243f8cd686643bdab143c6f999203", "title": "The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis", "text": ", doi: 10.1098/rspa.1998.0193 454 1998 Proc. R. Soc. Lond. A Yen, Chi Chao Tung and Henry H. Liu Norden E. Huang, Zheng Shen, Steven R. Long, Manli C. Wu, Hsing H. Shih, Quanan Zheng, Nai-Chyuan nonlinear and non-stationary time series analysis The empirical mode decomposition and the Hilbert spectrum for References http://rspa.royalsocietypublishing.org/content/454/1971/903#related-urls Article cited in: Email alerting service here corner of the article or click Receive free email alerts when new articles cite this article sign up in the box at the top right-hand"}
{"_id": "127a818c2ba1bbafbabc62d4163b0dd98364f64a", "title": "NFC Antenna With Nonuniform Meandering Line and Partial Coverage Ferrite Sheet for Metal Cover Smartphone Applications", "text": "This paper proposes a near-field communication (NFC) antenna solution for metal-cover smartphone applications. In this NFC antenna solution, a narrow slot is initially loaded into the metal cover, and the position of this slot can be altered (with flexibility) according to the design of the smartphone\u2019s external appearance. Next, an unconventional six-turn coil (with a six-sided irregular hexagonal shape) is designed that has a nonuniform linewidth and a nonuniform line gap between two lines, and it is partially loaded with a rectangular ferrite composite. In this design, an enhanced magnetic line of forces can be realized in certain specific locations, and an excellent inductively coupled near-field receiver is achieved. Notably, this proposed NFC antenna can pass the tests required by NFC forum certification, and its performances are comparable with the traditional NFC antenna that has nonmetallic cover."}
{"_id": "3786308bf65cde7e5c0b320ab6cc01a8ab0abfff", "title": "Miniaturized NFC Antenna Design for a Tablet PC With a Narrow Border and Metal Back-Cover", "text": "A novel structure of the near-field communication (NFC) antenna design for a tablet PC is proposed. This tablet PC has a narrow border and full metallic back-cover. A miniaturized loop antenna design is achieved by attaching ferrite sheets on both sides of the loop antenna. The ferrite sheets may reduce eddy currents induced on the adjacent metallic back-cover by the loop antenna to improve the communication range of the NFC. Only the edge of the tablet PC allows the antenna to radiate due to the full metallic back-cover. Thus, the NFC antenna needs to be narrow to be installed on the edge of the tablet PC. Therefore, we propose a miniaturized NFC antenna with the dimensions of 41.5 (L) \u00d7 7.5 (W) \u00d7 0.45 (T) mm3 only. Simulated magnetic field distributions are consistent with measured voltage distributions. This design has a good communication range more than 6 cm in front of the touchscreen panel and reaches 2 cm over the other side above the metal back-cover."}
{"_id": "39378ac6b6eb081d10badae100139e408c5232e4", "title": "A silver inkjet printed ferrite NFC antenna", "text": "This paper presents a near field communication (NFC) antenna that is printed directly on a ferrite substrate using a silver inkjet printing process. Such a silver inkjet printed ferrite NFC antenna provides a minimum substrate thickness, a good operation on metal objects, and a small assembly effort. The NFC antenna performance is analysed by measurements based on the ISO/IEC standard 10373-6 for proximity identification cards test methods. For this, the antenna is connected to an NFC microchip. Measurements in a non-metal environment show that the ferrite antenna performs equally good a custom-built NFC antenna printed on photo paper substrate, despite additional losses in the ferrite substrate. In a metal environment the ferrite antenna clearly outperforms the photo paper antenna."}
{"_id": "39b3f4769d9478db2bfc687d72390689e99bd092", "title": "NFC Antenna Design for Low-Permeability Ferromagnetic Material", "text": "A novel structure of a near-field communication (NFC) antenna for a mobile handset is proposed by changing the sequence of loop winding and modifying the inner loop structure. In general, the sintered ferrite sheets with higher relative permeability (\u03bcr \u2248 200) have been used to reduce the performance deterioration of NFC antennas due to the eddy current in the battery pack of a mobile handset. However, their costs are high, and they are considerably breakable. In this letter, we effectively enhance the performance of an NFC antenna by employing the ferrite-polymer composite with lower relative permeability (\u03bcr \u2248 55). The proposed antenna improves up to 23% more reading range and up to 65% better load modulation performance."}
{"_id": "2145960516ca6693a672e7400103a255b40722a0", "title": "Material property of on-metal magnetic sheet attached on NFC/HF-RFID antenna and research of its proper pattern and size on", "text": "When a payment system with NFC/HF-RFID is installed in smart-phone, their communication with 13.56MHz is blocked by such metal as battery, coil and PCB. To solve this problem, magnetic sheet with high permeability is set between NFC/HF-RFID antenna and near-by metal. So far thin compound magnetic sheet or thin sintered ferrite has been used. In order to make thickness of these materials thinner, we propose amorphous magnetic sheet. Besides this amorphous magnetic sheet causes to increase the transmission effect."}
{"_id": "084459cc2b4499ae8ae4a0edea46687900a0c4d4", "title": "On neural networks and learning systems for business computing", "text": ""}
{"_id": "d9e1e1f63e7c9e0478a0ba49eb8cb426b1cb42ca", "title": "Going wireless: behavior & practice of new mobile phone users", "text": "We report on the results of a study in which 19 new mobile phone users were closely tracked for the first six weeks after service acquisition. Results show that new users tend to rapidly modify their perceptions of social appropriateness around mobile phone use, that actual nature of use frequently differs from what users initially predict, and that comprehension of service-oriented technologies can be problematic. We describe instances and features of mobile telephony practice. When in use, mobile phones occupy multiple social spaces simultaneously, spaces with norms that sometimes conflict: the physical space of the mobile phone user and the virtual space of the conversation."}
{"_id": "31159c6900ffb3f29af00d55b2a824d2a5535381", "title": "Evolution of Architectural Floor Plans", "text": "Layout planning is a process of sizing and placing rooms (e.g. in a house) while attempting to optimize various criteria. Often there are conflicting criteria such as construction cost, minimizing the distance between related activities, and meeting the area requirements for these activities. The process of layout planning has mostly been done by hand, with a handful of attempts to automate the process. This thesis explores some of these past attempts and describes several new techniques for automating the layout planning process using evolutionary computation. These techniques are inspired by the existing methods, while adding some of their own innovations. Additional experiments are done to test the possibility of allowing polygonal exteriors with rectilinear interior walls. Several multi-objective approaches are used to evaluate and compare fitness. The evolutionary representation and requirements specification used provide great flexibility in problem scope and depth and is worthy of considering in future layout and design attempts. The system outlined in this thesis is capable of evolving a variety of floor plans conforming to functional and geometric specifications. Many of the resulting plans look reasonable even when compared to a professional floor plan. Additionally polygonal and multi-floor buildings were also generated."}
{"_id": "712f45af0a205bcdabd9605c9af7d307dd34d493", "title": "Overview of the TREC 2017 Real-Time Summarization Track", "text": "Jimmy Lin, Adam Roegiest, Luchen Tan, Richard McCreadie, Ellen Voorhees, and Fernando Diaz 1 David R. Cheriton School of Computer Science, University of Waterloo, Ontario, Canada 2 School of Computing Science, University of Glasgow, Scotland, the United Kingdom 3 National Institute for Standards and Technology, Maryland, USA 4 Microsoft Research, New York, USA {jimmylin, aroegies, luchen.tan}@uwaterloo.ca richard.mccreadie@glasgow.ac.uk, ellen.voorhees@nist.gov, fdiaz@microsoft.com"}
{"_id": "e241941dde0bbee962630d8202ea67e886d106ea", "title": "Human-comfortable navigation for an autonomous robotic wheelchair", "text": "Reliable autonomous navigation is an active research topic that has drawn the attention for decades, however, human factors such as navigational comfort has not received the same level of attention. This work proposes the concept of \u201ccomfortable map\u201d and presents a navigation approach for autonomous passenger vehicles which in top of being safe and reliable is comfortable. In our approach we first extract information from users preference related to comfort while sitting on a robotic wheelchair under different conditions in an indoor corridor environment. Human-comfort factors are integrated to a geometric map generated by SLAM framework. Then a global planner computes a safe and comfortable path which is followed by the robotic wheelchair. Finally, an evaluation with 29 participants using a fully autonomous robotic wheelchair, showed that more than 90% of them found the proposed approach more comfortable than a shortest-path state of the art approach."}
{"_id": "bee16cae891faf2b0a1782c6c0869d2ad6c291de", "title": "An ant system based on moderate search for TSP", "text": "Ant Colony Optimization (ACO) algorithms often suffer from criticism for the local optimum and premature convergence. In order to overcome these inherent shortcomings shared by most ACO algorithms, we divide the ordinary ants into two types: the utilizationoriented ants and the exploration-oriented ants. The utilization-oriented ants focus on constructing solutions based on the learned experience like ants in many other ACO algorithms. On the other hand, inspired by the adaptive behaviors of some real-world Monomorium ant species who tend to select paths with moderate pheromone concentration, a novel search strategy, that is, a completely new transition rule is designed for the exploration-oriented ants to explore more unknown solutions. In addition, a new corresponding update strategy is also employed. Moreover, applying the new search strategy and update strategy, we propose an improved version of ACO algorithm\u2014 Moderate Ant System. This improved algorithm is experimentally turned out to be effective and competitive."}
{"_id": "e77c39c461c8fcad812f4541215b1ecf1a2cc61f", "title": "A new ART-counterpropagation neural network for solving a forecasting problem", "text": "This study presents a novel Adaptive resonance theory-Counterpropagation neural network (ART-CPN) for solving forecasting problems. The network is based on the ART concept and the CPN learning algorithm for constructing the neural network. The vigilance parameter is used to automatically generate the nodes of the cluster layer for the CPN learning process. This process improves the initial weight problem and the adaptive nodes of the cluster layer (Kohonen layer). ART-CPN involves real-time learning and is capable of developing a more stable and plastic prediction model of input patterns by self-organization. The advantages of ART-CPN include the ability to cluster, learn and construct the network model for forecasting problems. The network was applied to solve the real forecasting problems. The learning algorithm revealed better learning efficiency and good prediction performance. q 2004 Elsevier Ltd. All rights reserved."}
{"_id": "e11cc2a3cd150927ec61e596296ceb9b0c20bda4", "title": "Search-Free DOD, DOA and Range Estimation for Bistatic FDA-MIMO Radar", "text": "Frequency diverse array (FDA) is an emerging technology, the hybrid of FDA and multiple-input-multiple-output (FDA-MIMO) under monostatic scenarios has received much attention in recent years. However, little work have been done for bistatic FDA-MIMO radar. In this paper, we investigate strategies on estimating direction-of-departure (DOD), direction-of-arrival and range for bistatic FDA-MIMO radar. Our strategies have two aspects. First, nonlinear frequency increments including both subarray and random modes are employed to overcome the problem that the DOD and range parameters of FDA transmitting steering vectors are coupled. Second, in order to reduce the computational complexity associated with the 3-D spectral peak searching algorithms, estimation of signal parameters via rotational invariance technique and parallel factor algorithms with their corresponding phase ambiguity resolving methods, are proposed for subarray and random modes, respectively. Both of the two algorithms perform well while the range parameter of targets satisfy a range constraint criterion. This criterion can also be used for designing frequency increments of bistatic FDA-MIMO radar. Additionally, the Cram\u00e9r-Rao bound of bistatic FDA-MIMO radar and the algorithm performance analysis consist of identifiability and complexity are derived. All the proposed methods are verified by both theoretical analysis and numerical simulations. And satisfactory results are achieved."}
{"_id": "de4a8fd957cfdc73b3df3c51f1845238ca4a58bc", "title": "Yoga for anxiety: a systematic review of the research evidence.", "text": "Between March and June 2004, a systematic review was carried out of the research evidence on the effectiveness of yoga for the treatment of anxiety and anxiety disorders. Eight studies were reviewed. They reported positive results, although there were many methodological inadequacies. Owing to the diversity of conditions treated and poor quality of most of the studies, it is not possible to say that yoga is effective in treating anxiety or anxiety disorders in general. However, there are encouraging results, particularly with obsessive compulsive disorder. Further well conducted research is necessary which may be most productive if focused on specific anxiety disorders."}
{"_id": "34abda2f748abdaeef33d71c7fe9bdc6e6815874", "title": "A fully portable robot system for cleaning solar panels", "text": "Dust and dirt particles accumulating on PV panels decrease the solar energy reaching the cells, thereby reducing their overall power output. Hence, cleaning the PV panels is a problem of great practical engineering interest in solar PV power generation. In this paper, the problem is reviewed and methods for dust removal are discussed. A portable robotic cleaning device is developed and features a versatile platform which travels the entire length of a panel. An Arduino microcontroller is used to implement the robot's control system. Initial testing of the robot has provided favorable results and shows that such a system is viable. Future improvements on the design are discussed, especially the different methods of transporting the robot from one panel to another. In conclusion, it is found that robotic cleaning solution is practical and can help in maintaining the clean PV panel efficiency."}
{"_id": "4910c4d7eea372034339f21141550f6d7cb28665", "title": "Look Deeper into Depth: Monocular Depth Estimation with Semantic Booster and Attention-Driven Loss", "text": "Monocular depth estimation benefits greatly from learning based techniques. By studying the training data, we observe that the per-pixel depth values in existing datasets typically exhibit a long-tailed distribution. However, most previous approaches treat all the regions in the training data equally regardless of the imbalanced depth distribution, which restricts the model performance particularly on distant depth regions. In this paper, we investigate the long tail property and delve deeper into the distant depth regions (i.e. the tail part) to propose an attentiondriven loss for the network supervision. In addition, to better leverage the semantic information for monocular depth estimation, we propose a synergy network to automatically learn the information sharing strategies between the two tasks. With the proposed attention-driven loss and synergy network, the depth estimation and semantic labeling tasks can be mutually improved. Experiments on the challenging indoor dataset show that the proposed approach achieves state-of-the-art performance on both monocular depth estimation and semantic labeling tasks."}
{"_id": "32500c29af9887a98a441b06262ce48f93f61388", "title": "Nutrition Therapy Recommendations for the Management of Adults With\nDiabetes", "text": "There is no standard meal plan or eating pattern that works universally for all people with diabetes. In order to be effective, nutrition therapy should be individualized for each patient/client based on his or her individual health goals; personal and cultural preferences; health literacy and numeracy; access to healthful choices; and readiness, willingness, and ability to change. Nutrition interventions should emphasize a variety of minimally processed nutrient dense foods in appropriate portion sizes as part of a healthful eating pattern and provide the individual with diabetes with practical tools for day-to-day food plan and behavior change that can be maintained over the long term."}
{"_id": "b40234457d7a0dc8695b4c8a2b225cd68b72a106", "title": "Impact Of Online Advertising On Consumer Attitudes And Interests Buy Online ( Survey On Students Of Internet Users In Makassar )", "text": "The rapid development of technology today makes Internet users continues to increase. This is supported by the ease of internet users access the internet either through a PC, laptop, mobile phones, tablets and other media. The increase in Internet users this makes the internet into a proper promotion using online advertising. With a wide reach and touch the various layers of the Internet media communities may be appropriate advice for company promotion. However, an increasing number of Internet users, especially in the city of Makassar is not accompanied by an increase in the number of online purchases. Based on that it is necessary to examine how the effect of online advertising on consumer buying behavior and online, as well as how to control the behavior and subjective norms influence the attitudes and interests of consumers buy online. This study aims to analyze and test the effect of online advertising on consumer attitudes and purchase interest online, a survey conducted on students of Internet users in the city of Makassar. The study was conducted on students of public and private universities in the city of Makassar. The method used was a quantitative analysis using the technique of purposive sampling method with a sample of 340 people. Testing this hypothesis using structural equation modeling (SEM). The results showed that online advertising has an influence on consumer buying behavior and online. Dimensions interactivity of online advertising provides the highest influence on the attitudes and interests of consumers purchasing online."}
{"_id": "95f45f7c9c4d8effb459921897b49dd7583710bd", "title": "Bernoulli Embeddings for Graphs", "text": "Just as semantic hashing (Salakhutdinov and Hinton 2009) can accelerate information retrieval, binary valued embeddings can significantly reduce latency in the retrieval of graphical data. We introduce a simple but effective model for learning such binary vectors for nodes in a graph. By imagining the embeddings as independent coin flips of varying bias, continuous optimization techniques can be applied to the approximate expected loss. Embeddings optimized in this fashion consistently outperform the quantization of both spectral graph embeddings and various learned real-valued embeddings, on both ranking and pre-ranking tasks for a variety of datasets."}
{"_id": "b596c56f5f8ac503c79a05351d8a770723842a69", "title": "Playful bottle: a mobile social persuasion system to motivate healthy water intake", "text": "This study of mobile persuasion system explores the use of a mobile phone, when attached to an everyday object used by an everyday behavior, becomes a tool to sense and influence that behavior. This mobile persuasion system, called Playful Bottle system, makes use of a mobile phone attached to an everyday drinking mug and motivates office workers to drink healthy quantities of water. A camera and accelerometer sensors in the phone are used to build a vision/motion-based water intake tracker to detect the amount and regularity of water consumed by the user. Additionally, the phone includes hydration games in which natural drinking actions are used as game input. Two hydration games are developed: a single-user TreeGame with automated computer reminders and a multi-user ForestGame with computer-mediated social reminders from members of the group playing the game. Results from 7-week user study with 16 test subjects suggest that both hydration games are effective for encouraging adequate and regular water intake by users. Additionally, results of this study suggest that adding social reminders to the hydration game is more effective than system reminders alone."}
{"_id": "ec3b99525caeeee92887e4dcae8731377d37ddb2", "title": "Design of Organic TFT Pixel Electrode Circuit for Active-Matrix OLED Displays", "text": "A new current-programming pixel circuit for active-matrix organic light-emitting diode (AM-OLED) displays, composed of four organic thin-film transistors (OTFTs) and one capacitor, has been designed, simulated and evaluated. The most critical issue in realizing AMOLED displays with OTFTs is the variation and aging of the driving-TFTs and degradation of OLEDs. This can cause image sticking or degradation of image quality. These problems require compensation methods for high-quality display applications, and pixel level approach is considered to be one of the most important factors for improving display image quality. Our design shows that the current OLED and OTFT technology can be implemented for AMOLED displays, compensating the degradation of OTFT device characteristics."}
{"_id": "c48c33d9f0ca7882bc6b917efe5288dd0537ae05", "title": "Deep Optical Flow Estimation Via Multi-Scale Correspondence Structure Learning", "text": "As an important and challenging problem in computer vision, learning based optical flow estimation aims to discover the intrinsic correspondence structure between two adjacent video frames through statistical learning. Therefore, a key issue to solve in this area is how to effectively model the multi-scale correspondence structure properties in an adaptive end-to-end learning fashion. Motivated by this observation, we propose an endto-end multi-scale correspondence structure learning (MSCSL) approach for optical flow estimation. In principle, the proposed MSCSL approach is capable of effectively capturing the multi-scale inter-image-correlation correspondence structures within a multi-level feature space from deep learning. Moreover, the proposed MSCSL approach builds a spatial Conv-GRU neural network model to adaptively model the intrinsic dependency relationships among these multi-scale correspondence structures. Finally, the above procedures for correspondence structure learning and multi-scale dependency modeling are implemented in a unified end-to-end deep learning framework. Experimental results on several benchmark datasets demonstrate the effectiveness of the proposed approach."}
{"_id": "12acaac349edb5cccbf799cf0c00b32294c8465d", "title": "Perspectives and Visions of Computer Science Education in Primary and Secondary (K-12) Schools", "text": "In view of the recent developments in many countries, for example, in the USA and in the UK, it appears that computer science education (CSE) in primary or secondary schools (K-12) has reached a significant turning point, shifting its focus from ICT-oriented to rigorous computer science concepts. The goal of this special issue is to offer a publication platform for soundly based in-depth experiences that have been made around the world with concepts, approaches, or initiatives that aim at supporting this shift. For this purpose, the article format was kept as large as possible, enabling the authors to explain many facets of their concepts and experiences in detail. Regarding the structure of the articles, we had encouraged the authors to lean on the Darmstadt Model, a category system that was developed to support the development, improvement, and investigation of K-12 CSE across regional or national boundaries. This model could serve as a unifying framework that might provide a proper structure for a well-founded critical discussion about the future of K-12 CSE. Curriculum designers or policy stakeholders, who have to decide, which approach an upcoming national initiative should follow, could benefit from this discussion as well as researchers who are investigating K12 CSE in any regard. With this goal in mind, we have selected six extensive and two short case studies from the UK, New Zealand, USA/Israel, France, Sweden, Georgia (USA), Russia, and Italy that provide an in-depth analysis of K-12 CSE in their respective country or state."}
{"_id": "13712f31d0f2dc0049ccaa3cce300c088bbbf88e", "title": "Design and development of a mobile crawling robot with novel halbach array based magnetic wheels", "text": "Higher efficiency and the safety of hull inspection can be achieved by replacing the conventional methods with modern crawling robots. This paper consists of detailed study of the design and development of a crawling robot with a novel permanent magnetic wheel based on Halbach magnetic array. The magnetic array and the wheel assembly were designed based on the magnetic simulation results. Maintaining the adequate adhesion force as well as the friction force of the wheels were considered in designing the wheels. The crawling robot is equipped with a steering system based on Ackermann steering method hence the steering angles are controlled by an algorithm and actuated using servo motors. The driving system of the robot is equipped with four geared stepper motors and they are controlled separately by the algorithm according to the inputs of the controller. The control architecture and the components used for the robot control system are explained. The operator can remotely control the robot and inspect the hull surface via the camera feed available in the HMI (Human Machine Interface) of the robot. The prototype of the crawling robot was tested in a similar environment and the test results are included in the paper."}
{"_id": "6260223838fb3b0c788ac183b4ebbd2d11a02c3a", "title": "Security as a Service Model for Cloud Environment", "text": "Cloud computing is becoming increasingly important for provision of services and storage of data in the Internet. However there are several significant challenges in securing cloud infrastructures from different types of attacks. The focus of this paper is on the security services that a cloud provider can offer as part of its infrastructure to its customers (tenants) to counteract these attacks. Our main contribution is a security architecture that provides a flexible security as a service model that a cloud provider can offer to its tenants and customers of its tenants. Our security as a service model while offering a baseline security to the provider to protect its own cloud infrastructure also provides flexibility to tenants to have additional security functionalities that suit their security requirements. The paper describes the design of the security architecture and discusses how different types of attacks are counteracted by the proposed architecture. We have implemented the security architecture and the paper discusses analysis and performance evaluation results."}
{"_id": "2e0fbdc57b0f3e1a05333e27efc13b080cad7ca9", "title": "Surface-Wave Launchers for Beam Steering and Application to Planar Leaky-Wave Antennas", "text": "A linear array of surface-wave launchers (SWLs) is presented for surface-wave (SW) beam steering on a grounded dielectric slab (GDS). Specifically, a two-element array of directive Yagi-Uda like SWL antennas (operating between 21-24 GHz) is investigated. By varying the relative phase difference between SWL elements, the excited SW field distribution can be steered. In addition, a six-element linear array of SWLs with nonuniform weighting is also presented. If metallic gratings are placed on top of the GDS, the bound SW mode can be transformed into a radiated leaky-wave (LW) mode and directive beam patterns can be generated in the far field. A specific elliptical grating configuration is also investigated demonstrating single frequency cylindrical-sector LW beam steering. These slotted SWL arrays can be useful for novel SW beam scanning designs, millimeter wave power distribution systems and new LW antennas."}
{"_id": "0fe7e5e0493ff789a66c0a802052d90113a25523", "title": "Holographic artificial impedance surfaces for conformal antennas", "text": "We have developed a method for generating arbitrary radiation patterns from antennas on complex objects. The object is coated with an artificial impedance surface consisting of a lattice of sub-wavelength metal patches on a grounded dielectric substrate. The effective surface impedance depends on the size of the patches, and can be varied as a function of position. Using holography, the surface impedance is designed to generate any desired radiation pattern from currents in the surface. With this technique we can create antennas with novel properties such as radiation toward angles that would otherwise be shadowed"}
{"_id": "6723549d2e0681f63482ba2883267b91c324b518", "title": "Planar Leaky-Wave Antenna Designs Offering Conical-Sector Beam Scanning and Broadside Radiation Using Surface-Wave Launchers", "text": "Two planar antenna designs that utilize surface-waves for leaky-wave excitation are investigated. Specifically, a surface-wave launcher is implemented to excite cylindrical surface-waves, which are bound to a grounded dielectric slab. By the addition of circular metallic strip gratings, a partially reflecting surface is realized, providing appropriate conditions for 2-D leaky-wave radiation. In particular, two designs are investigated: a continuous circular strip and a segmented circular strip grating. Results illustrate conical-sector beam scanning for the continuous circular strip grating between 2022 GHz, while broadside radiation is observed at 21.2 GHz by the segmented circular strip design."}
{"_id": "679a256767d557d4bd9f02d6057a33e640bb5c08", "title": "Low-Cost Phased-Array Antenna Using Compact Tunable Phase Shifters Based on Ferroelectric Ceramics", "text": "A low-cost phased-array antenna at 10 GHz is presented for a scan angle of \u00b150\u00b0. The array employs continuously tunable phase shifters based on a screen printed barium-strontium-titanate thick-film ceramic. Due to the use of artificial transmission line topology, the proposed phase-shifter design has a very compact size (3 mm \u00d7 2.8 mm) for 342\u00b0 total phase shift. In the frequency range from 8 to 10 GHz, it exhibits a figure of merit >;52\u00b0/dB, which is among the best of phase shifters based on ferroelectric thick films. In a prototyped phased array, the RF circuit consists of a feeding network, phase shifters, and antenna elements, which are integrated into one planar metallization layer. Furthermore, a simple way for routing bias lines for phase shifters is demonstrated using high resistive electrodes. Using screen printed thick films and applying a simplified fabrication process for the RF and bias circuitry can reduce the total expense of phased arrays considerably."}
{"_id": "b3f55c313212fd4f3fd9774a1e84ba7b88ad90f7", "title": "Application of antenna arrays to mobile communications. II. Beam-forming and direction-of-arrival considerations", "text": "Array processing involves manipulation of signals induced on various antenna elements. Its capabilities of steering nulls to reduce cochannel interferences and pointing independent beams toward various mobiles, as well as its ability to provide estimates of directions of radiating sources, make it attractive to a mobile communications system designer. Array processing is expected to play an important role in fulfilling the increased demands of various mobile communications services. Part I of this paper showed how an array could be utilized in different configurations to improve the performance of mobile communications systems, with references to various studies where feasibility of an array system for mobile communications is considered. This paper provides a comprehensive and detailed treatment of different beam-forming schemes, adaptive algorithms to adjust the required weighting on antennas, direction-of-arrival estimation methods\u2014including their performance comparison\u2014and effects of errors on the performance of an array system, as well as schemes to alleviate them. This paper brings together almost all aspects of array signal processing. It is presented at a level appropriate to nonexperts in the field and contains a large reference list to probe further."}
{"_id": "0a0d6d33de7b58bb76d67fa08fba949a991d6e29", "title": "Architectural Mismatch or Why it's hard to build systems out of existing parts", "text": "Many would argue that future breakthroughs in software productivity will depend on our ability to combine existing pieces of software to produce new applications. An important step towards this goal is the development of new techniques to detect and cope with mismatches in the assembled parts. Some problems of composition are due to low-level issues of interoperability, such as mismatches in programming languages or database schemas. However, in this paper we highlight a different, and in many ways more pervasive, class of problem: architectural mismatch. Specifically, we use our experience in building a family of software design environments from existing parts to illustrate a variety of types of mismatch that center around the assumptions a reusable part makes about the structure of the application in which is to appear. Based on this experience we show how an architectural view of the mismatch problem exposes some fundamental, thorny problems for software composition and suggests possible research avenues needed to solve them."}
{"_id": "670b4ed3852e1c067e72740f93b265cc53ed9397", "title": "Comparison of Discovery Service Architectures for the Internet of Things", "text": "In the emerging Internet of Things rich data on real-world objects and events will be generated in vast amounts and stored in widely distributed databases. In truly global and dynamic application scenarios, intermediate brokers are needed to find these data, even if the exact location and form of storage are initially unknown to the requester. Discovery Services are aimed to fill this gap: they respond to requests for data on specific objects with a list of corresponding data providers. In this paper, we frame functional requirements for Discovery Services, and perform an overview and analysis of five established approaches for implementing Discovery Services that are taken from literature and industrial practice. In order to compare their characteristics, we develop a quality framework based on literature review and an ISO standard for software quality."}
{"_id": "22b9856a01613fdacfad2335e039efec3f9e4e11", "title": "CatchTartan: Representing and Summarizing Dynamic Multicontextual Behaviors", "text": "Representing and summarizing human behaviors with rich contexts facilitates behavioral sciences and user-oriented services. Traditional behavioral modeling represents a behavior as a tuple in which each element is one contextual factor of one type, and the tensor-based summaries look for high-order dense blocks by clustering the values (including timestamps) in each dimension. However, the human behaviors are multicontextual and dynamic: (1) each behavior takes place within multiple contexts in a few dimensions, which requires the representation to enable non-value and set-values for each dimension; (2) many behavior collections, such as tweets or papers, evolve over time. In this paper, we represent the behavioral data as a two-level matrix (temporal-behaviors by dimensional-values) and propose a novel representation for behavioral summary called Tartan that includes a set of dimensions, the values in each dimension, a list of consecutive time slices and the behaviors in each slice. We further develop a propagation method CatchTartan to catch the dynamic multicontextual patterns from the temporal multidimensional data in a principled and scalable way: it determines the meaningfulness of updating every element in the Tartan by minimizing the encoding cost in a compression manner. CatchTartan outperforms the baselines on both the accuracy and speed. We apply CatchTartan to four Twitter datasets up to 10 million tweets and the DBLP data, providing comprehensive summaries for the events, human life and scientific development."}
{"_id": "54113d65b26940b290c1fe3f6324e012b3ae77d6", "title": "Stronger Security of Authenticated Key Exchange", "text": "Recent work by Krawczyk [12] and Menezes [16] has highlighted the importance of understanding well the guarantees and limitations of formal security models when using them to prove the security of protocols. In this paper we focus on security models for authenticated key exchange (AKE) protocols. We observe that there are several classes of attacks on AKE protocols that lie outside the scope of the Canetti-Krawczyk model. Some of these additional attacks have already been considered by Krawczyk [12]. In an attempt to bring these attacks within the scope of the security model we extend the Canetti-Krawczyk model for AKE security by providing significantly greater powers to the adversary. Our contribution is a more compact, integrated, and comprehensive formulation of the security model. We then introduce a new AKE protocol called NAXOS and prove that it is secure against these stronger"}
{"_id": "888ea80404357b3a13e7006bafc59d1a65fd8019", "title": "Ensemble Coordination Approach in Multi-AGV Systems Applied to Industrial Warehouses", "text": "This paper deals with a holistic approach to coordinate a fleet of automated guided vehicles (AGVs) in an industrial environment. We propose an ensemble approach based on a two layer control architecture and on an automatic algorithm for the definition of the roadmap. The roadmap is built by considering the path planning algorithm implemented on the hierarchical architecture and vice versa. In this way, we want to manage the coordination of the whole system in order to increase the flexibility and the global efficiency. Furthermore, the roadmap is computed in order to maximize the redundancy, the coverage and the connectivity. The architecture is composed of two layers. The low-level represents the roadmap itself. The high-level describes the topological relationship among different areas of the environment. The path planning algorithm works on both these levels and the subsequent coordination among AGVs is obtained exploiting shared resource (i.e., centralized information) and local negotiation (i.e., decentralized coordination). The proposed approach is validated by means of simulations and comparison using real plants. Note to Practitioners-The motivation of this work grows from the need to increase the flexibility and efficiency of current multi-AGV systems. In particular, in the current state-of-the-art the AGVs are guided through a manually defined roadmap with ad-hoc strategies for coordination. This is translated into high setup time requested for installation and the impossibility to easily respond to dynamic changes of the environment. The proposed method aims at managing all the design, setup and control process in an automatic manner, decreasing the time for setup and installation. The flexibility is also increased by considering a coordination strategy for the fleet of AGVs not based on manual ad-hoc rules. Simulations are performed in order to compare the proposed approach to the current industrial one. In the future, industrial aspects, as the warehouse management system, will be integrated in order to achieve a real and complete industrial functionality."}
{"_id": "15ecc29f2fe051b7a3f5c23cd82137d1513dbca0", "title": "Robust odometry estimation for RGB-D cameras", "text": "The goal of our work is to provide a fast and accurate method to estimate the camera motion from RGB-D images. Our approach registers two consecutive RGB-D frames directly upon each other by minimizing the photometric error. We estimate the camera motion using non-linear minimization in combination with a coarse-to-fine scheme. To allow for noise and outliers in the image data, we propose to use a robust error function that reduces the influence of large residuals. Furthermore, our formulation allows for the inclusion of a motion model which can be based on prior knowledge, temporal filtering, or additional sensors like an IMU. Our method is attractive for robots with limited computational resources as it runs in real-time on a single CPU core and has a small, constant memory footprint. In an extensive set of experiments carried out both on a benchmark dataset and synthetic data, we demonstrate that our approach is more accurate and robust than previous methods. We provide our software under an open source license."}
{"_id": "cb638a997e994e84cfd51a01ef1e45d3ccab0457", "title": "Standard high-resolution pelvic MRI vs. low-resolution pelvic MRI in the evaluation of deep infiltrating endometriosis", "text": "To compare the capabilities of standard pelvic MRI with low-resolution pelvic MRI using fast breath-hold sequences to evaluate deep infiltrating endometriosis (DIE). Sixty-eight consecutive women with suspected DIE were studied with pelvic MRI. A double-acquisition protocol was carried out in each case. High-resolution (HR)-MRI consisted of axial, sagittal, and coronal TSE T2W images, axial TSE T1W, and axial THRIVE. Low-resolution (LR)-MRI was acquired using fast single shot (SSH) T2 and T1 images. Two radiologists with 10 and 2\u00a0years of experience reviewed HR and LR images in two separate sessions. The presence of endometriotic lesions of the uterosacral ligament (USL), rectovaginal septum (RVS), pouch of Douglas (POD), and rectal wall was noted. The accuracies of LR-MRI and HR-MRI were compared with the laparoscopic and histopathological findings. Average acquisition times were 24\u00a0minutes for HR-MRI and 7\u00a0minutes for LR-MRI. The more experienced radiologist achieved higher accuracy with both HR-MRI and LR-MRI. The values of sensitivity, specificity, PPV, NPV, and accuracy did not significantly change between HR and LR images or interobserver agreement for all of the considered anatomic sites. LR-MRI performs as well as HR-MRI and is a valuable tool for the detection of deep endometriosis extension. \u2022 High- and low-resolution MRI perform similarly in deep endometriosis evaluation \u2022 Low-resolution MRI significantly reduces the duration of the examination \u2022 Radiologist experience is fundamental for evaluating deep pelvic endometriosis"}
{"_id": "ecf86691b5cbc47362703992d6b664c7bfa4325c", "title": "Group discussion as interactive dialogue or as serial monologue: the influence of group size.", "text": "Current models draw a broad distinction between communication as dialogue and communication as monologue. The two kinds of models have different implications for who influences whom in a group discussion. If the discussion is like interactive dialogue, group members should be influenced most by those with whom they interact in the discussion; if it is like serial monologue, they should be influenced most by the dominant speaker. The experiments reported here show that in small, 5-person groups, the communication is like dialogue and members are influenced most by those with whom they interact in the discussion. However, in large, 10-person groups, the communication is like monologue and members are influenced most by the dominant speaker. The difference in mode of communication is explained in terms of how speakers in the two sizes of groups design their utterances for different audiences."}
{"_id": "37d09142fe774813a124015b799439de0a544895", "title": "Fuzzy Cognitive Maps Tool for Scenario Analysis and Pattern Classification", "text": "After 30 years of research, challenges and solutions, Fuzzy Cognitive Maps (FCMs) have become a suitable knowledgebased methodology for modeling and simulation. This technique is especially attractive when modeling systems that are characterized by ambiguity, complexity and non-trivial causality. FCMs are well-known due to the transparency achieved during modeling tasks. The literature reports successful studies related to the modeling of complex systems using FCMs. However, the situation is not the same when it comes to software implementations where domain experts can design FCM-based systems, run simulations or perform more advanced experiments. The existing implementations are not proficient in providing many options to adjust essential parameters during the modeling steps. The gap between the theoretical advances and the development of accurate, transparent and sound FCM-based systems advocates for the creation of more complete and flexible software products. Therefore, the goal of this paper is to introduce FCM Expert, a software tool for fuzzy cognitive modeling oriented to scenario analysis and pattern classification. The main features of FCM Expert rely on Machine Learning algorithms to compute the parameters defining the model, optimize the network topology and improve the system convergence without losing information. On the other hand, FCM Expert allows performing WHAT-IF simulations and studying the system behavior through a friendly, intuitive and easy-to-use graphical user interface."}
{"_id": "cca87ecd26106502f6d2510575367516210f4d6f", "title": "Web-based statistical fact checking of textual documents", "text": "User generated content has been growing tremendously in recent years. This content reflects the interests and the diversity of online users. In turn, the diversity among internet users is also reflected in the quality of the content being published online. This increases the need to develop means to gauge the support available for content posted online. In this work, we aim to make use of the web-content to calculate a statistical support score for textual documents. In the proposed algorithm, phrases representing key facts are extracted to construct basic elements of the document. Search is used thereon to validate the support available for these elements online, leading to assigning an overall score for each document. Experimental results have shown a difference between the score distribution of factual news data and false facts data. This indicates that the approach seems to be a promising seed for distinguishing different articles based on the content."}
{"_id": "5fe44882306f659991319365a70a7b8ba5c3f7f8", "title": "Aspect identification and sentiment analysis based on NLP", "text": "This paper proposes method for aspect identification and sentiment analysis. This paper deals with the problem of sentence level. We use semantic syntax tree to extract NP nodes, a method based on slack to identify aspect and mini-support function to narrow aspect; to analyze the sentiment for the aspect we use dictionary-based method. Experimental results show that the proposed approach achieves the state-of-the-art."}
{"_id": "1ccef9fa75e519daa10618fe9f2d7a46a34a7040", "title": "The Bitcoin Backbone Protocol: Analysis and Applications", "text": "Bitcoin is the first and most popular decentralized cryptocurrency to date. In this work, we extract and analyze the core of the Bitcoin protocol, which we term the Bitcoin backbone, and prove two of its fundamental properties which we call common prefix and chain quality in the static setting where the number of players remains fixed. Our proofs hinge on appropriate and novel assumptions on the \u201chashing power\u201d of the adversary relative to network synchronicity; we show our results to be tight under high synchronization. Next, we propose and analyze applications that can be built \u201con top\u201d of the backbone protocol, specifically focusing on Byzantine agreement (BA) and on the notion of a public transaction ledger. Regarding BA, we observe that Nakamoto\u2019s suggestion falls short of solving it, and present a simple alternative which works assuming that the adversary\u2019s hashing power is bounded by 1/3. The public transaction ledger captures the essence of Bitcoin\u2019s operation as a cryptocurrency, in the sense that it guarantees the liveness and persistence of committed transactions. Based on this notion we describe and analyze the Bitcoin system as well as a more elaborate BA protocol, proving them secure assuming high network synchronicity and that the adversary\u2019s hashing power is strictly less than 1/2, while the adversarial bound needed for security decreases as the network desynchronizes. Finally, we show that our analysis of the Bitcoin backbone protocol for synchronous networks extends with relative ease to the recently considered \u201cpartially synchronous\u201d model, where there is an upper bound in the delay of messages that is unknown to the honest parties."}
{"_id": "2c9cb9d1aa1f4830ed2841097edd4de6c7ef3ff9", "title": "On the use of MapReduce for imbalanced big data using Random Forest", "text": "In this age, big data applications are increasingly becoming the main focus of attention because of the enormous increment of data generation and storage that has taken place in the last years. This situation becomes a challenge when huge amounts of data are processed to extract knowledge because the data mining techniques are not adapted to the new space and time requirements. Furthermore, real-world data applications usually present a class distribution where the samples that belong to one class, which is precisely the main interest, are hugely outnumbered by the samples of the other classes. This circumstance , known as the class imbalance problem, complicates the learning process as the standard learning techniques do not correctly address this situation. In this work, we analyse the performance of several techniques used to deal with imbal-anced datasets in the big data scenario using the Random Forest classifier. Specifically, oversampling, undersampling and cost-sensitive learning have been adapted to big data using MapReduce so that these techniques are able to manage datasets as large as needed providing the necessary support to correctly identify the underrepresented class. The Random Forest classifier provides a solid basis for the comparison because of its performance, robustness and versatility. An experimental study is carried out to evaluate the performance of the diverse algorithms considered. The results obtained show that there is not an approach to imbalanced big data classification that outperforms the others for all the data considered when using Random Forest. Moreover, even for the same type of problem, the best performing method is dependent on the number of mappers selected to run the experiments. In most of the cases, when the number of splits is increased, an improvement in the running times can be observed, however, this progress in times is obtained at the expense of a slight drop in the accuracy performance obtained. This decrement in the performance is related to the lack of density problem, which is evaluated in this work from the imbalanced data point of view, as this issue degrades the performance of classifiers in the imbalanced scenario more severely than in standard learning."}
{"_id": "d15747c093b4e24eb8ca898d705184741c02dd20", "title": "PhoneyC: A Virtual Client Honeypot", "text": "The number of client-side attacks has grown significantly in the past few years, shifting focus away from defendable positions to a broad, poorly defended space filled with vulnerable clients. Just as honeypots enabled deep research into server-side attacks, honeyclients can permit the deep study of client-side attacks. A complement to honeypots, a honeyclient is a tool designed to mimic the behavior of a userdriven network client application, such as a web browser, and be exploited by an attacker\u2019s content. These systems are instrumented to discover what happened and how. This paper presents PhoneyC, a honeyclient tool that can provide visibility into new and complex client-side attacks. PhoneyC is a virtual honeyclient, meaning it is not a real application but rather an emulated client. By using dynamic analysis, PhoneyC is able to remove the obfuscation from many malicious pages. Furthermore, PhoneyC emulates specific vulnerabilities to pinpoint the attack vector. PhoneyC is a modular framework that enables the study of malicious HTTP pages and understands modern vulnerabilities and attacker techniques."}
{"_id": "4a8c279e01ca4c713f1be72f4fde0031a53d0c14", "title": "The treatment of osteoporotic thoracolumbar severe burst fractures with short pedicle screw fixation and vertebroplasty.", "text": "BACKGROUND\nTo investigate the clinical and radiological results of short pedicle screw fixation and vertebroplasty in osteoporotic thoracolumbar severe burst fractures.\n\n\nMETHODS\nFrom September 2006 to August 2010, 19 consecutive patients sustained osteoporotic thoracolumbar severe burst fractures with or without neurologic deficit and were included in this prospective study. All patients underwent short pedicle screw fixation and vertebroplasty. Segmental kyphosis, AVBHr and PVBHr, and Canal compromise were calculated on radiographs pre-operatively, post-operative and at final follow up. VAS, ODI and SF-36 were calculated pre-operatively and at final follow up.\n\n\nRESULTS\nMean operative time was 70.8 min (range 60~100 min) and mean blood loss was 92 ml (range 60~160 ml). The mean duration of their hospital stay was 4.5 days (range 3-7 days). The operative incisions were healing well. Average follow up time was 40.1 months (range 24~72 months). The AVBHr was corrected from preoperative (48.1 \u00b1 6.8) % to postoperative (94.1 \u00b1 1.7) % (P < 0.001). The PVBHr was corrected from preoperative (62.7 \u00b1 4.8) % to postoperative (92.8 \u00b1 1.8) % (P < 0.001). Canal compromise was corrected from preoperative (37.3 \u00b1 5.8) % to postoperative (5.9 \u00b1 2.3) % (P < 0.001). The segmental kyphosis was corrected from preoperative (20.6 \u00b1 5.3) degree to postoperative (2.0 \u00b1 3.2) degree (P < 0.001). VAS scores were reduced from preoperative 7.21 \u00b1 0.86 to 2.21 \u00b1 0.98 at final follow up (P < 0.001). SF-36 Bodily pain was reduced from preoperative 75.31 \u00b1 13.85 to 13.74 \u00b1 13.24 at final follow up (P < 0.001), and SF-36 Role Physical was reduced from preoperative 59.21 \u00b1 26.63 to 19.74 \u00b1 22.94 at final follow up (P < 0.001). The ODI scores were reduced from preoperative 81.68 \u00b1 4.44 to 15.37 \u00b1 5.54 at final follow up (P < 0.001). All 4 patients with partial neurological deficit initially had improvement. Cement leakage was observed in 3 cases (two anterior to vertebral body and one into the disc without sequela). There were no instances of instrumentation failure and no patient had persistent postoperative back pain.\n\n\nCONCLUSIONS\nVertebroplasty and short pedicle screw fixation has the advantages of both radiographic and functional results for treating osteoporotic thoracolumbar severe burst fractures using a purely posterior approach."}
{"_id": "f2b6b80ce32b365a1b7498b48a1417592264eee6", "title": "An efficient centralized scheduling algorithm in IEEE 802.15.4e TSCH networks", "text": "IEEE 802.15.4e Time Slotted Channel Hopping (TSCH) standard has gained a lot of attention within the Industrial Internet of Things research community due to its effectiveness in improving reliability and providing ultra-low power consumption for industrial applications, and in which its communication is orchestrated by a schedule. Despite its relevance, the standard leaves out of its scope in defining how the schedule is built, updated and maintained. This open issue is one of the trending topics in the IETF 6TiSCH WG, that still need to be addressed. This work focuses on scheduling in TSCH networks in a centralized manner where the gateway makes time and frequency slot allocation. This paper formulates the scheduling problem as a throughput maximization problem and delay minimization problem. We propose a graph theoretical approach to solve the throughput maximization problem in a centralized way. The combinatorial properties of the scheduling problem are addressed by providing an equivalent maximum weighted bipartite matching (MWBM) problem to reduce the computational complexity and also adopting the Hungarian algorithm in polynomial time. Simulation results are provided to evaluate the performance of the proposed scheme."}
{"_id": "3d0610e3cdd8933b72397d4a387f655a97bcc2a8", "title": "Effects of Presowing Pulsed Electromagnetic Treatment of Tomato Seed on Growth, Yield, and Lycopene Content", "text": "The use of magnetic field as a presowing treatment has been adopted by researchers as a new environmental friendly technique. The aim of this study was to determine the effect of magnetic field exposure on tomato seeds covering a range of parameters such as transplanting percentage, plant height, shoot diameter, number of leaves per plant, fresh weight, dry weight, number of flowers, yield, and lycopene content. Pulsed electromagnetic field was used for 0, 5, 10, and 15 minutes as a presowing treatment of tomato seeds in a field experiment for two years. Papimi device (amplitude on the order of 12.5 mT) has been used. The use of pulsed electromagnetic field as a presowing treatment was found to enhance plant growth in tomato plants at certain duration of exposure. Magnetic field treatments and especially the exposure of 10 and 15 minutes gave the best results in all measurements, except plant height and lycopene content. Yield per plant was higher in magnetic field treatments, compared to control. MF-15 treatment yield was 80.93% higher than control treatment. Lycopene content was higher in magnetic field treatments, although values showed no statistically significant differences."}
{"_id": "dc52d1ede1b90bf9d296bc5b34c9310b7eaa99a2", "title": "The mnist database of handwritten digits", "text": "please note that your browser may uncompress these files without telling you. If the files you downloaded have a larger size than the above, they have been uncompressed by your browser. Simply rename them to remove the .gz extension. Some people have asked me \"my application can't open your image files\". These files are not in any standard image format. You have to write your own (very simple) program to read them. The file format is described at the bottom of this page."}
{"_id": "138f8fc3e05509eb9d43d0446fcff21a73cf06ae", "title": "Statistical Pattern Recognition", "text": "This course will provide an introduction to statistical pattern recognition. The lectures will focus on different techniques including methods for feature extraction, dimensionality reduction, data clustering and pattern classification. State-of-art approaches such as ensemble learning and sparse modelling will be introduced. Selected real-world applications will illustrate how the techniques are applied in practice."}
{"_id": "d45556175c94541a6502bf43320a9d6dce11433a", "title": "Issues and challenges of data security in a cloud computing environment", "text": "Now customers can opt for software and information technology services according to his requirements and can get these services on a leased basis from the network service provider and this has the facility to scale its requirements to up or down. This service is known as cloud computing, provided by the infrastructure provider which is a third party provider. Cloud computing provides many advantages to the customer like scalability, better economics of scaling, its ability to recover from problems, its ability to outsource non-core activities and flexibility. Cloud computing is a better option for the organizations to take as their best option without any initial investment and day by day frequent and heavy use of cloud computing is increasing but despite all the benefits a cloud offers to an organization there are certain doubts and threats regarding the security issues associated with a cloud computing platform. The security issues primarily involve the external control over organizational structure and management and personal and private data of the organization can be compromised. Personal and private data in this computing environment has a very high risk of breach of confidentiality, integrity and availability. Growth of cloud computing is mainly hampered due to these security concerns and challenges. Proper security arrangements are need to be placed before selecting the service provider for any cloud computing service and customers need to be very careful about understanding the risks of security breaches and challenges of using this new computing environment. This detailed study discusses about some of the challenges associated with cloud computing services and security issues related to this platform."}
{"_id": "5b58277bc34d15e345e143720614134945ede942", "title": "Radar target detection and Doppler ambiguity resolution", "text": "A Continuous Wave (CW) radar with a linear Frequency Modulation (FM) and a very short chirp duration is considered in this paper. A single target range and Doppler frequency measurement consits of a sequence of L chirp signals with the same chirp duration. The maximum unambiguously measurable radial velocity in this case is limited by a minimum chirp duration. Therefore fast moving targets will cause ambiguous measurement results, that have to be resolved. The technique of resolving ambiguities on the basis of several subsequent measurements with different ambiguity interval lengths is well known as Chinese Remainder Theorem (CRT). It has been extensively described in literature [2]. Two different signal processing approaches to apply this technique for the resolution of ambiguities are compared in this paper. These do not differ in terms of the ambiguity resolution itself, because they are based on the same principle. The detection performance however is different."}
{"_id": "04250e037dce3a438d8f49a4400566457190f4e2", "title": "A direct LDA algorithm for high-dimensional data - with application to face recognition", "text": "However, for a task with very high-dimensional data such as images, the traditional LDA algorithm encounters several di$culties. Consider face recognition for example. A low-de\"nition face image of size 64 64 implies a feature space of 64 64\"4096 dimensions, and therefore scatter matrices of size 4096 4096\"16M. First, it is computationally challenging to handle big matrices (such as computing eigenvalues). Second, those matrices are almost always singular, as the number of training images needs to be at least 16M for them to be non-degenerate. Due to these di$culties, it is commonly believed that a direct LDA solution for such high-dimensional data is infeasible. Thus, ironically, before LDA can be used to reduce dimensionality, another procedure has to be \"rst applied for dimensionality reduction. In face recognition, many techniques have been proposed (for a good review, see Ref. [1]). Among them, the most notable is a two-stage PCA#LDA approach [2,3]:"}
{"_id": "99d9812c4e769864acbcc57d764a842a52d95c0b", "title": "A Wideband SiGe BiCMOS Frequency Doubler With 6.5-dBm Peak Output Power for Millimeter-Wave Signal Sources", "text": "This paper presents a balanced frequency doubler with 6.5-dBm peak output power at 204 GHz in 130-nm SiGe BiCMOS technology ( $f_{T}/f_{\\max }=210$ /250 GHz). To convert the single-ended input signal to a differential signal for balanced operation, an on-chip transformer-based balun is employed. Detailed design procedure and compensation techniques to lower the imbalance at the output ports, based on mixed mode S parameters are proposed and verified analytically and through electromagnetic simulations. The use of optimized harmonic reflectors at the input port results in a 2-dBm increase in output power without sacrificing the bandwidth of interest. The measured conversion loss of the frequency doubler is 9 dB with 6-dBm input power at 204-GHz output. The measured peak output power is 6.5 dBm with an on-chip power amplifier stage. The 3-dB output power bandwidth is measured to be wider than 50 GHz (170\u2013220 GHz). The total chip area of the doubler is 0.09 mm2 and the dc power consumption is 90 mW from a 1.8-V supply, which corresponds to a 5% collector efficiency."}
{"_id": "4e381dfece765ba0c9689bd57ea40f9280dc287e", "title": "Crowdfunding: A Spimatic application of digital fandom", "text": "The digital environment has opened up new opportunities for fans to engage with the production process of their favoured texts. The successful Kickstarter campaigns for Rob Thomas and Kristen Bell\u2019s Veronica Mars film and Zach Braff\u2019s film Wish I Was Here, and the failed Kickstarter campaign for Melissa Joan Hart\u2019s film Darci\u2019s Walk of Shame, each illustrate how the entertainers drew (or did not draw) on specific support from his or her own fan communities to generate funds and word-of-mouth publicity. In this paper I augment studies of digital fandom by integrating the technosocial concept of the Spime into the relationships forged between the technology of crowdfunding and the affect of particular audiences. I illustrate how \u2018crowdfunding fandom\u2019 changes the unique posthuman experience of 21 century participatory culture by interpreting fandom in a Spimatic network. This ultimately more fully integrates the fan into the production process, creating a truly \u2018participatory culture\u2019."}
{"_id": "e7d868486a29b74a7e7cc94776a353196bf63a0f", "title": "A computational cognition model of perception, memory, and judgment", "text": "The mechanism of human cognition and its computability provide an important theoretical foundation to intelligent computation of visual media. This paper focuses on the intelligent processing of massive data of visual media and its corresponding processes of perception, memory, and judgment in cognition. In particular, both the human cognitive mechanism and cognitive computability of visual media are investigated in this paper at the following three levels: neurophysiology, cognitive psychology, and computational modeling. A computational cognition model of Perception, Memory, and Judgment (PMJ model for short) is proposed, which consists of three stages and three pathways by integrating the cognitive mechanism and computability aspects in a unified framework. Finally, this paper illustrates the applications of the proposed PMJ model in five visual media research areas. As demonstrated by these applications, the PMJ model sheds some light on the intelligent processing of visual media, and it would be innovative for researchers to apply human cognitive mechanism to computer science."}
{"_id": "4b80f7c6ac8160897dd887ccea40c1a014b27368", "title": "A Flexible Architecture for Ray Tracing Terrain Heightfields", "text": "High-quality interactive rendering of terrain surfaces is a challenging task, which requires compromises between rendering quality, rendering time and available resources. However, current solutions typically provide optimized strategies tailored to particular constraints. In this paper we propose a more scalable approach based on functional programming and introduce a flexible ray tracer for rendering terrain heightfields. This permits the dynamic composition of complex and recursive shaders. In order to exploit the concurrency of the GPU for a large number of dynamically created tasks with inter-dependencies, the functional model is represented as a token stream and is iteratively rewritten via pattern matching on multiple shader cores in parallel. A first prototype demonstrates the feasibility of our approach."}
{"_id": "258e80bbf3c013fda6b3f66d0b459d0da3492044", "title": "Presurgical evaluation of epilepsy.", "text": "An overview of the following six cortical zones that have been defined in the presurgical evaluation of candidates for epilepsy surgery is given: the symptomatogenic zone; the irritative zone; the seizure onset zone; the epileptogenic lesion; the epileptogenic zone; and the eloquent cortex. The stepwise historical evolution of these different zones is described. The current diagnostic techniques used in the definition of these cortical zones, such as video-EEG monitoring, MRI and ictal single photon emission computed tomography, are discussed. Established diagnostic tests are set apart from procedures that should still be regarded as experimental, such as magnetoencephalography, dipole source localization and spike-triggered functional MRI. Possible future developments that might lead to a more direct definition of the epileptogenic zone are presented."}
{"_id": "16c6a5df8185ca12e5d1a280cea23fb55e27f2f1", "title": "Detecting Encrypted Traffic: A Machine Learning Approach", "text": "Detecting encrypted traffic is increasingly important for deep packet inspection (DPI) to improve the performance of intrusion detection systems. We propose a machine learning approach with several randomness tests to achieve high accuracy detection of encrypted traffic while requiring low overhead incurred by the detection procedure. To demonstrate how effective the proposed approach is, the performance of four classification methods (N\u00e4\u0131ve Bayesian, Support Vector Machine, CART and AdaBoost) are explored. Our recommendation is to use CART which is not only capable of achieving an accuracy of 99.9% but also up to about 2.9 times more efficient than the second best candidate (N\u00e4\u0131ve Bayesian)."}
{"_id": "3bad812dac5b2db81c464ff80e2db7595bd902be", "title": "Topic Modeling using Topics from Many Domains, Lifelong Learning and Big Data", "text": "Topic modeling has been commonly used to discover topics from document collections. However, unsupervised models can generate many incoherent topics. To address this problem, several knowledge-based topic models have been proposed to incorporate prior domain knowledge from the user. This work advances this research much further and shows that without any user input, we can mine the prior knowledge automatically and dynamically from topics already found from a large number of domains. This paper first proposes a novel method to mine such prior knowledge dynamically in the modeling process, and then a new topic model to use the knowledge to guide the model inference. What is also interesting is that this approach offers a novel lifelong learning algorithm for topic discovery, which exploits the big (past) data and knowledge gained from such data for subsequent modeling. Our experimental results using product reviews from 50 domains demonstrate the effectiveness of the proposed approach."}
{"_id": "6896b672970b05fd90e245919429dbd379bd050a", "title": "Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design", "text": "Flow-based generative models are powerful exact likelihood models with efficient sampling and inference. Despite their computational efficiency, flowbased models generally have much worse density modeling performance compared to state-of-the-art autoregressive models. In this paper, we investigate and improve upon three limiting design choices employed by flow-based models in prior work: the use of uniform noise for dequantization, the use of inexpressive affine flows, and the use of purely convolutional conditioning networks in coupling layers. Based on our findings, we propose Flow++, a new flow-based model that is now the state-of-the-art non-autoregressive model for unconditional density estimation on standard image benchmarks. Our work has begun to close the significant performance gap that has so far existed between autoregressive models and flow-based models. Our implementation is available at https://github.com/aravind0706/flowpp."}
{"_id": "32a9fb2ec387dfd3d4ca662430a69d5b8f887793", "title": "Engineering exosomes as refined biological nanoplatforms for drug delivery", "text": "Exosomes, a subgroup of extracellular vesicles (EVs), have been recognized as important mediators of long distance intercellular communication and are involved in a diverse range of biological processes. Because of their ideal native structure and characteristics, exosomes are promising nanocarriers for clinical use. Exosomes are engineered at the cellular level under natural conditions, but successful exosome modification requires further exploration. The focus of this paper is to summarize passive and active loading approaches, as well as specific exosome modifications and examples of the delivery of therapeutic and imaging molecules. Examples of exosomes derived from a variety of biological origins are also provided. The biocompatible characteristics of exosomes, with suitable modifications, can increase the stability and efficacy of imaging probes and therapeutics while enhancing cellular uptake. Challenges in clinical translation of exosome-based platforms from different cell sources and the advantages of each are also reviewed and discussed."}
{"_id": "7dd5e5c8561ba1813ea524907c9a79cceeace95d", "title": "A Compositional Interpretation of Biomedical Event Factuality", "text": "We propose a compositional method to assess the factuality of biomedical events extracted from the literature. The composition procedure relies on the notion of semantic embedding and a fine-grained classification of extrapropositional phenomena, including modality and valence shifting, and a dictionary based on this classification. The event factuality is computed as a product of the extra-propositional operators that have scope over the event. We evaluate our approach on the GENIA event corpus enriched with certainty level and polarity annotations. The results indicate that our approach is effective in identifying the certainty level component of factuality and is less successful in recognizing the other element, negative polarity."}
{"_id": "500f88036ad6b05fb5bfad419ec7a6dc10305e2c", "title": "CIM: Community-Based Influence Maximization in Social Networks", "text": "Given a social graph, the problem of influence maximization is to determine a set of nodes that maximizes the spread of influences. While some recent research has studied the problem of influence maximization, these works are generally too time consuming for practical use in a large-scale social network. In this article, we develop a new framework, community-based influence maximization (CIM), to tackle the influence maximization problem with an emphasis on the time efficiency issue. Our proposed framework, CIM, comprises three phases: (i) community detection, (ii) candidate generation, and (iii) seed selection. Specifically, phase (i) discovers the community structure of the network; phase (ii) uses the information of communities to narrow down the possible seed candidates; and phase (iii) finalizes the seed nodes from the candidate set. By exploiting the properties of the community structures, we are able to avoid overlapped information and thus efficiently select the number of seeds to maximize information spreads. The experimental results on both synthetic and real datasets show that the proposed CIM algorithm significantly outperforms the state-of-the-art algorithms in terms of efficiency and scalability, with almost no compromise of effectiveness."}
{"_id": "920a208d36319b8b5343c66d768693400ad826b7", "title": "A Dictionary Approach to EBSD Indexing", "text": "We propose a framework for indexing of grain and sub-grain structures in electron backscatter diffraction (EBSD) images of polycrystalline materials. The framework is based on a previously introduced physics-based forward model by Callahan and De Graef (2013) relating measured patterns to grain orientations (Euler angle). The forward model is tuned to the microscope and the sample symmetry group. We discretize the domain of the forward model onto a dense grid of Euler angles and for each measured pattern we identify the most similar patterns in the dictionary. These patterns are used to identify boundaries, detect anomalies, and index crystal orientations. The statistical distribution of these closest matches is used in an unsupervised binary decision tree (DT) classifier to identify grain boundaries and anomalous regions. The DT classifies a pattern as an anomaly if it has an abnormally low similarity to any pattern in the dictionary. It classifies a pixel as being near a grain boundary if the highly ranked patterns in the dictionary differ significantly over the pixels 3\u00d7 3 neighborhood. Indexing is accomplished by computing the mean orientation of the closest dictionary matches to each pattern. The mean orientation is estimated using a maximum likelihood approach that models the orientation distribution as a mixture of Von MisesFisher distributions over the quaternionic 3-sphere. The proposed dictionary matching approach permits segmentation, anomaly detection, and indexing to be performed in a unified manner with the additional benefit of uncertainty quantification. We demonstrate the proposed dictionary-based approach on a Ni-base IN100 alloy. c1 c1 Part of this work was reported in the Proceedings of the IEEE International Conference on Image Processing (ICIP), Melbourne Australia, Sept 2013."}
{"_id": "f4e481c143e52cbdabf523bbd28ae0dd79e2aa66", "title": "The Impact of Power Allocation on Cooperative Non-orthogonal Multiple Access Networks With SWIPT", "text": "In this paper, a cooperative non-orthogonal multiple access (NOMA) network is considered, where a source communicates with two users through an energy harvesting relay. The impact of two types of NOMA power allocation policies, namely NOMA with fixed power allocation (F-NOMA) and cognitive radio inspired NOMA (CR-NOMA), on the considered cooperative simultaneous wireless information and power transfer (SWIPT) system is investigated. In particular, closed-form expressions for the outage probability and their high SNR approximations are derived to characterize the performance of SWIPT-F-NOMA and SWIPT-CR-NOMA. These developed analytical results demonstrate that the two power allocation policies realize different tradeoffs between the reception reliability, user fairness and system complexity. Compared with the conventional SWIPT relaying networks with orthogonal multiple access (OMA), the proposed NOMA schemes can effectively reduce the outage probability, although all of them realize the same diversity gain."}
{"_id": "391c00495de723dd459e9834c8af0a353fe44b92", "title": "Boosting automatic event extraction from the literature using domain adaptation and coreference resolution", "text": "MOTIVATION\nIn recent years, several biomedical event extraction (EE) systems have been developed. However, the nature of the annotated training corpora, as well as the training process itself, can limit the performance levels of the trained EE systems. In particular, most event-annotated corpora do not deal adequately with coreference. This impacts on the trained systems' ability to recognize biomedical entities, thus affecting their performance in extracting events accurately. Additionally, the fact that most EE systems are trained on a single annotated corpus further restricts their coverage.\n\n\nRESULTS\nWe have enhanced our existing EE system, EventMine, in two ways. First, we developed a new coreference resolution (CR) system and integrated it with EventMine. The standalone performance of our CR system in resolving anaphoric references to proteins is considerably higher than the best ranked system in the COREF subtask of the BioNLP'11 Shared Task. Secondly, the improved EventMine incorporates domain adaptation (DA) methods, which extend EE coverage by allowing several different annotated corpora to be used during training. Combined with a novel set of methods to increase the generality and efficiency of EventMine, the integration of both CR and DA have resulted in significant improvements in EE, ranging between 0.5% and 3.4% F-Score. The enhanced EventMine outperforms the highest ranked systems from the BioNLP'09 shared task, and from the GENIA and Infectious Diseases subtasks of the BioNLP'11 shared task.\n\n\nAVAILABILITY\nThe improved version of EventMine, incorporating the CR system and DA methods, is available at: http://www.nactem.ac.uk/EventMine/."}
{"_id": "a4757879a8e4fdda20aa1921f337846127bb7a9c", "title": "Dynamic Resource Allocation in Ad-Hoc Mobile Cloud Computing", "text": "Mobile Cloud Computing (MCC) has draw extensive research attentions due to the increasingly demand on the energy consumption and execution time constraints on Mobile Devices (MDs). MCC has been well investigated in the scenarios when workload is offloaded to a remote cloud or a cloudlet. However, in those scenarios, infrastructure is required to provide connections between MDs and the cloud#x002F;cloudlet. To facilitate MCC for environment when infrastructure is not available, Ad-hoc MCC which allows MDs to share and process workload coordinately is discussed [1]. In this study, the detailed resource allocation problem in Ad-hoc MCC, including how to assign tasks to MDs, how tasks are executed on each MD, and how to arrange task and task result return transmissions so that interference among different MDs can be avoided are carefully studied. Since the problem is NP-hard, a heuristic algorithm is then introduced. As shown in the evaluation section, when no infrastructure is available, compared with the case when an application is executed solely on a single MD, the proposed algorithm can reduce an application's response time significantly."}
{"_id": "4624a22816daf82313f48a978afc7d53ed0ec368", "title": "Bayesian face recognition", "text": "We propose a new technique for direct visual matching of images for the purposes of face recognition and image retrieval, using a probabilistic measure of similarity, based primarily on a Bayesian (MAP) analysis of image differences. The performance advantage of this probabilistic matching technique over standard Euclidean nearest-neighbor eigenface matching was demonstrated using results from DARPA\u2019s 1996 \u201cFERET\u201d face recognition competition, in which this Bayesian matching algorithm was found to be the top performer. In addition, we derive a simple method of replacing costly computation of nonlinear (on-line) Bayesian similarity measures by inexpensive linear (off-line) subspace projections and simple Euclidean norms, thus resulting in a significant computational speed-up for implementation with very large databases. Appears in: Pattern Recognition, Vol. 33, No. 11, pps. 1771-1782, November, 2000. This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c Mitsubishi Electric Research Laboratories, Inc., 2002 201 Broadway, Cambridge, Massachusetts 02139 Appears in: Pattern Recognition, Vol. 33, No. 11, pps. 1771-1782, November, 2000."}
{"_id": "3efb459577a261d12fc7bab6b964858b720544be", "title": "Generative adversarial networks and adversarial methods in biomedical image analysis", "text": "Generative adversarial networks (GANs) and other adversarial methods are based on a game-theoretical perspective on joint optimization of two neural networks as players in a game. Adversarial techniques have been extensively used to synthesize and analyze biomedical images. We provide an introduction to GANs and adversarial methods, with an overview of biomedical image analysis tasks that have benefited from such methods. We conclude with a discussion of strengths and limitations of adversarial methods in biomedical image analysis, and propose potential future research directions."}
{"_id": "48bb59ff4e72fbaf9a78e292b54c0cb1cd547ec3", "title": "Co-Clustering Structural Temporal Data with Applications to Semiconductor Manufacturing", "text": "Recent years have witnessed data explosion in semiconductor manufacturing due to advances in instrumentation and storage techniques. The large amount of data associated with process variables monitored over time form a rich reservoir of information, which can be used for a variety of purposes, such as anomaly detection, quality control, and fault diagnostics. In particular, following the same recipe for a certain Integrated Circuit device, multiple tools and chambers can be deployed for the production of this device, during which multiple time series can be collected, such as temperature, impedance, gas flow, electric bias, etc. These time series naturally fit into a two-dimensional array (matrix), i.e., each element in this array corresponds to a time series for one process variable from one chamber. To leverage the rich structural information in such temporal data, in this article, we propose a novel framework named C-Struts to simultaneously cluster on the two dimensions of this array. In this framework, we interpret the structural information as a set of constraints on the cluster membership, introduce an auxiliary probability distribution accordingly, and design an iterative algorithm to assign each time series to a certain cluster on each dimension. Furthermore, we establish the equivalence between C-Struts and a generic optimization problem, which is able to accommodate various distance functions. Extensive experiments on synthetic, benchmark, as well as manufacturing datasets demonstrate the effectiveness of the proposed method."}
{"_id": "8293ad6f45ce1483f5695da5dc3ca5888bf5afff", "title": "A Narrative Literature Review and E-Commerce Website Research", "text": "In this study, a narrative literature review regarding culture and e-commerce website design has been introduced. Cultural aspect and e-commerce website design will play a significant role for successful global e-commerce sites in the future. Future success of businesses will rely on e-commerce. To compete in the global e-commerce marketplace, local businesses need to focus on designing culturally friendly e-commerce websites. To the best of my knowledge, there has been insignificant research conducted on correlations between culture and e-commerce website design. The research shows that there are correlations between e-commerce, culture, and website design. The result of the study indicates that cultural aspects influence e-commerce website design. This study aims to deliver a reference source for information systems and information technology researchers interested in culture and e-commerce website design, and will show lessfocused research areas in addition to future directions."}
{"_id": "3b6c2c76420dee7413691c2fcd157a13bd1ce69f", "title": "C R ] 4 J ul 2 01 7 A general framework for Bitcoin analytics", "text": "Modern cryptocurrencies exploit decentralised ledgers \u2014 the so-called blockchains \u2014 to record a public and unalterable history of transactions. These ledgers represent a rich, and increasingly growing, source of information, in part of difficult interpretation and undisclosed meaning. Many analytics, mostly based on ad-hoc engineered solutions, are being developed to discover relevant knowledge from these data. We introduce a framework for the development of custom analytics on Bitcoin \u2014 the most preeminent cryptocurrency \u2014 which also allows to integrate data within the blockchain with data retrieved form external sources. We illustrate the flexibility and effectiveness of our analytics framework by means of paradigmatic use cases."}
{"_id": "dac431ea817da7833d2b6535455219ffa1af3108", "title": "ON STAGNATION OF THE DIFFERENTIAL EVOLUTION ALGORITHM", "text": "This article discusses the stagnation of an evolutionary optimization algorithm called Differential Evolution. Stagnation problem refers to a situation in which the optimum seeking process stagnates before finding a globally optimal solution. Typically, stagnation occurs virtually without any obvious reason. The stagnation differs from the premature convergence so that the population remains diverse and unconverged after stagnation, but the optimization process does not progress anymore. The reasons for this problem have remained unknown so far. This article uncovers this problem describing the basic nature of stagnation phenomena, a mechanism behind it and some reasons for stagnation. Advices for reducing the risk of stagnation are concluded on basis of the new findings."}
{"_id": "e245ef09fe9e4f56d0fa7f75257298588f4d0392", "title": "Agent-Agnostic Human-in-the-Loop Reinforcement Learning", "text": "Providing Reinforcement Learning agents with expert advice can dramatically improve various aspects of learning. Prior work has developed teaching protocols that enable agents to learn efficiently in complex environments; many of these methods tailor the teacher\u2019s guidance to agents with a particular representation or underlying learning scheme, offering effective but specialized teaching procedures. In this work, we explore protocol programs, an agent-agnostic schema for Human-in-the-Loop Reinforcement Learning. Our goal is to incorporate the beneficial properties of a human teacher into Reinforcement Learning without making strong assumptions about the inner workings of the agent. We show how to represent existing approaches such as action pruning, reward shaping, and training in simulation as special cases of our schema and conduct preliminary experiments on simple domains."}
{"_id": "4d3c779b5a224133bd5c69e05103fedbd904590a", "title": "Analyzing the effects of disk-pointer corruption", "text": "The long-term availability of data stored in a file system depends on how well it safeguards on-disk pointers used to access the data. Ideally, a system would correct all pointer errors. In this paper, we examine how well corruption-handling techniques work in reality. We develop a new technique called type-aware pointer corruption to systematically explore how a file system reacts to corrupt pointers. This approach reduces the exploration space for corruption experiments and works without source code. We use type-aware pointer corruption to examine Windows NTFS and Linux ext3. We find that they rely on type and sanity checks to detect corruption, and NTFS recovers using replication in some instances. However, NTFS and ext3 do not recover from most corruptions, including many scenarios for which they possess sufficient redundant information, leading to further corruption, crashes, and unmountable file systems. We use our study to identify important lessons for handling corrupt pointers."}
{"_id": "9afe1fdb837ffede87b08d40f60efcf734825b55", "title": "The trade-off between taxi time and fuel consumption in airport ground movement", "text": "Environmental issues play an important role across many sectors. This is particularly the case in the air transportation industry. One area which has remained relatively unexplored in this context is the ground movement problem for aircraft on the airport\u2019s surface. Aircraft have to be routed from a gate to a runway and vice versa and a key area of study is whether fuel burn and environmental impact improvements will best result from purely minimising the taxi times or whether it is also important to avoid multiple acceleration phases. This paper presents a newly developed multiobjective approach for analysing the trade-off between taxi time and fuel consumption during taxiing. The approach consists of a combination of a graph-based routing algorithm and a population adaptive immune algorithm to discover different speed profiles of aircraft. Analysis with data from a European hub airport has highlighted the impressive performance of the new approach. Furthermore, it is shown that the S. Ravizza ( ) \u00b7 J.A.D. Atkin School of Computer Science, University of Nottingham, Jubilee Campus, Nottingham NG8 1BB, UK e-mail: smr@cs.nott.ac.uk J.A.D. Atkin e-mail: jaa@cs.nott.ac.uk J. Chen \u00b7 P. Stewart School of Engineering, University of Lincoln, Brayford Pool, Lincoln LN6 7TS, UK J. Chen e-mail: juchen@lincoln.ac.uk P. Stewart e-mail: pstewart@lincoln.ac.uk E.K. Burke Department of Computing and Mathematics, University of Stirling, Cottrell Building, Stirling FK9 4LA, UK e-mail: e.k.burke@stir.ac.uk 26 S. Ravizza et al. trade-off between taxi time and fuel consumption is very sensitive to the fuel-related objective function which is used."}
{"_id": "cb89938cb50beabd23d92395da09ada7c7400ea9", "title": "Competitive intelligence process and tools for intelligence analysis", "text": "Purpose \u2013 The purpose of this survey research is twofold. First, to study and report the process that is commonly used to create and maintain a competitive intelligence (CI) program in organizations. And second, to provide an analysis of several emergent text mining, web mining and visualization-based CI tools, which are specific to collection and analysis of intelligence. Design/methodology/approach \u2013 A range of recently published research literature on CI processes, applications, tools and technologies to collect and analyze competitive information within organizations is reviewed to explore their current state, issues and challenges learned from their practice. Findings \u2013 The paper provides executive decision makers and strategic managers a better understanding of what methods are available and appropriate to the decisions they must make and the steps involved in CI undertaking. Originality/value \u2013 The findings of this research provide the managers of CI programs a context for understanding which tools and techniques are better suited to their specific types of problems; and help them develop and evaluate a usable set of tools and best practices to apply to their industry."}
{"_id": "76415cffc9c0c9585307696da09a0d082f8e8df4", "title": "A clustering algorithm based on graph connectivity", "text": ""}
{"_id": "161a8e11dd11a4f47c1cb66d4015d1b4afbab0fd", "title": "A finger on the pulse: temporal rhythms and information seeking in medical work", "text": "Most cooperative work takes place in information-rich environments. However, studies of \"information work\" tend to focus on the decontextualized access and retrieval problems faced by individual information seekers. Our work is directed towards understanding how information management is seamlessly integrated into the course of everyday activities. Drawing on an ethnographic study of medical work, we explore the relationship between information and temporal coordination and discuss the role of temporal patterns or \"rhythms\" in providing individuals with the means to coordinate information and work."}
{"_id": "029e930160bb7fcdab920e68526a79b1960ad89c", "title": "Comprehensive and Efficient Protection of Kernel Control Data", "text": "Protecting kernel control data (e.g., function pointers and return addresses) has been a serious issue plaguing rootkit defenders. In particular, rootkit authors only need to compromise one piece of control data to launch their attacks, while defenders need to protect thousands of such values widely scattered across kernel memory space. Worse, some of this data (e.g., return addresses) is volatile and can be dynamically generated at run time. Existing solutions, however, offer either incomplete protection or excessive performance overhead. To overcome these limitations, we present indexed hooks, a scheme that greatly facilitates kernel control-flow enforcement by thoroughly transforming and restricting kernel control data to take only legal jump targets (allowed by the kernel's control-flow graph). By doing so, we can severely limit the attackers' possibility of exploiting them as an infection vector to launch rootkit attacks. To validate our approach, we have developed a compiler-based prototype that implements this technique in the FreeBSD 8.0 kernel, transforming 49 025 control transfer instructions (~7.25% of the code base) to use indexed hooks instead of direct pointers. Our evaluation results indicate that our approach is generic, effective, and can be implemented on commodity hardware with a low performance overhead (<;5% based on benchmarks)."}
{"_id": "3bc05bd6564e0305dae868c51178666bd332b9b6", "title": "Fine-grained accelerators for sparse machine learning workloads", "text": "Text analytics applications using machine learning techniques have grown in importance with ever increasing amount of data being generated from web-scale applications, social media and digital repositories. Apart from being large in size, these generated data are often unstructured and are heavily sparse in nature. The performance of these applications on current systems is hampered by hard to predict branches and low compute-per-byte ratio. This paper proposes a set of fine-grained accelerators that improve the performance and energy-envelope of these applications by an order of magnitude."}
{"_id": "496691c466c736ac02c89e36491c7da2c4d58650", "title": "Filter technologies for wireless base stations", "text": "Phenomenal growth in the telecommunication industry in recent years has brought significant advances in filter technology as new communication systems emerged, demanding more stringent filter characteristics. In particular, the growth of the wireless communication industry has spurred tremendous activity in the area of microwave filter miniaturization and has been responsible for many advances made in this field. The filters that are currently being used in wireless base stations can be divided into two main categories: coaxial cavity resonator filters and dielectric resonator (DR) filters. While coaxial cavity filters have limited quality factor (Q) values, they offer the lowest cost design and are still being widely employed, particularly in wide bandwidth applications. With increased demands for high performance wireless systems, dielectric resonator filters are emerging as the baseline design for wireless base stations. Over the next five years, dielectric resonator filters are expected to have a significant share of the overall wireless base station filter market. High-temperature superconductor (HTS) filters are also expected to have a share of this market, particularly for systems, which have very stringent requirements for out-of-band interference. In this article, we begin by reviewing the main filter requirements, highlighting the technologies that are being currently employed. Emerging filter technologies that have the potential to replace the existing technologies are then described."}
{"_id": "36af4ce59b7d2cb2ca2743b010ea0b300ee9b10d", "title": "Exploring the physical layer frontiers of cellular uplink", "text": "Communication systems in practice are subject to many technical/technological constraints and restrictions. Multiple input, multiple output (MIMO) processing in current wireless communications, as an example, mostly employs codebook-based pre-coding to save computational complexity at the transmitters and receivers. In such cases, closed form expressions for capacity or bit-error probability are often unattainable; effects of realistic signal processing algorithms on the performance of practical communication systems rather have to be studied in simulation environments. The Vienna LTE-A Uplink Simulator is a 3GPP LTE-A standard compliant MATLAB-based link level simulator that is publicly available under an academic use license, facilitating reproducible evaluations of signal processing algorithms and transceiver designs in wireless communications. This paper reviews research results that have been obtained by means of the Vienna LTE-A Uplink Simulator, highlights the effects of single-carrier frequency-division multiplexing (as the distinguishing feature to LTE-A downlink), extends known link adaptation concepts to uplink transmission, shows the implications of the uplink pilot pattern for gathering channel state information at the receiver and completes with possible future research directions."}
{"_id": "d020eb83f03a9f9c97e728355c4a9010fa65d8ef", "title": "A Semantic Similarity Approach to Paraphrase Detection", "text": "This paper presents a novel approach to the problem of paraphrase identification. Although paraphrases often make use of synonymous or near synonymous terms, many previous approaches have either ignored or made limited use of information about similarities between word meanings. We present an algorithm for paraphrase identification which makes extensive use of word similarity information derived from WordNet (Fellbaum, 1998). The approach is evaluated using the Microsoft Research Paraphrase Corpus (Dolan et al., 2004), a standard resource for this task, and found to outperform previously published methods."}
{"_id": "e7690d6c20d4bfa71e527715e5959c25587722da", "title": "A keyword extraction method from twitter messages represented as graphs", "text": "Twitter is a microblog service that generates a huge amount of textual content daily. All this content needs to be explored by means of text mining, natural language processing, information retrieval, and other techniques. In this context, automatic keyword extraction is a task of great usefulness. A fundamental step in text mining techniques consists of building a model for text representation. The model known as vector space model, VSM, is the most well-known and used among these techniques. However, some difficulties and limitations of VSM, such as scalability and sparsity, motivate the proposal of alternative approaches. This paper proposes a keyword extraction method for tweet collections that represents texts as graphs and applies centrality measures for finding the relevant vertices (keywords). To assess the performance of the proposed approach, three different sets of experiments are performed. The first experiment applies TKG to a text from the Time magazine and compares its performance with that of the literature. The second set of experiments takes tweets from three different TV shows, applies TKG and compares it with TFIDF and KEA, having human classifications as benchmarks. Finally, these three algorithms are applied to tweets sets of increasing size and their computational running time is measured and compared. Altogether, these experiments provide a general overview of how TKG can be used in practice, its performance when compared with other standard approaches, and how it scales to larger data instances. The results show that TKG is a novel and robust proposal to extract keywords from texts, particularly from short messages, such"}
{"_id": "2401cd5606c6bc5390acc352d00c1685f0c8af60", "title": "Consensus-Driven Propagation in Massive Unlabeled Data for Face Recognition", "text": "Face recognition has witnessed great progress in recent years, mainly attributed to the high-capacity model designed and the abundant labeled data collected. However, it becomes more and more prohibitive to scale up the current million-level identity annotations. In this work, we show that unlabeled face data can be as effective as the labeled ones. Here, we consider a setting closely mimicking the real-world scenario, where the unlabeled data are collected from unconstrained environments and their identities are exclusive from the labeled ones. Our main insight is that although the class information is not available, we can still faithfully approximate these semantic relationships by constructing a relational graph in a bottom-up manner. We propose Consensus-Driven Propagation (CDP) to tackle this challenging problem with two modules, the \u201ccommittee\u201d and the \u201cmediator\u201d, which select positive face pairs robustly by carefully aggregating multi-view information. Extensive experiments validate the effectiveness of both modules to discard outliers and mine hard positives. With CDP, we achieve a compelling accuracy of 78.18% on MegaFace identification challenge by using only 9% of the labels, comparing to 61.78% when no unlabeled data are used and 78.52% when all labels are employed."}
{"_id": "9b4ae1b16915953fbc8b0e98e02af41d40950183", "title": "Sleep Staging by Modeling Sleep Stage Transitions using Deep CRF", "text": "Sleep plays a vital role in human health, both mental and physical. Sleep disorders like sleep apnea are increasing in prevalence, with the rapid increase in factors like obesity. Sleep apnea is most commonly treated with Continuous Positive Air Pressure (CPAP) therapy, which maintains the appropriate pressure to ensure continuous airflow. It is widely accepted that in addition to preventing air passage collapse, increase in deep and REM sleep stages would be good metrics for how well the CPAP therapy is working in improving sleep health. Presently, however, there is no mechanism to easily detect a patient\u2019s sleep stages from CPAP flow data alone. We propose, for the first time, an automated sleep staging model based only on the flow signal. Recently deep neural networks have shown high accuracy on sleep staging by eliminating handcrafted features. However, these methods focus exclusively on extracting informative features from the input signal, without paying much attention to the dynamics of sleep stages in the output sequence. We propose an end-to-end framework that uses a combination of deep convolution and recurrent neural networks to extract high-level features from raw flow signal and then uses a structured output layer based on a conditional random field to model the temporal transition structure of the sleep stages. We improve upon the previous methods by 10% using our model, that can be augmented to the previous sleep staging deep learning methods. We also show that our method can be used to accurately track sleep metrics like sleep efficiency calculated from sleep stages that can be deployed for monitoring the response of CPAP therapy on sleep apnea patients. Apart from the technical contributions, we expect this study to motivate new research questions in sleep science, especially towards the understanding of sleep architecture trajectory among patients under CPAP therapy."}
{"_id": "5650c04c7e10f908b04dd29c93b4bb768d267453", "title": "Optimisation of quasi-resonant induction cookers", "text": "In this paper, the optimisation possibilities of a quasi-resonant induction cooker are investigated. Simulations show that main losses of the induction cooker occur in the heating coil and the RC-IGBT (power switch). The performance of the coil can be improved mainly by minimising the coil resistance. The IGBT-optimisation is based on the reduction of tail current in soft switching mode. The tail current is reduced by using lifetime engineering and reduction of the IGBT-thickness. The losses of the optimised IGBT are about 30% lower. As a result, the cooling system of the IGBT can be smaller and cheaper. Together with some optimisations of the circuit the overall losses of the system could be improved by almost 10% or 1 percentage point."}
{"_id": "e901162b6dca552b213a010bfab318d5445227a6", "title": "Penile length, digit length, and anogenital distance according to birth weight in newborn male infants", "text": "PURPOSE\nAnogential distance (AGD) and the 2:4 digit length ratio appear to provide a reliable guide to fetal androgen exposure. We intended to investigate the current status of penile size and the relationship between penile length and AGD or digit length according to birth weight in Korean newborn infants.\n\n\nMATERIALS AND METHODS\nBetween May 2013 and February 2014, among a total of 78 newborn male infants, 55 infants were prospectively included in this study. Newborn male infants with a gestational age of 38 to 42 weeks and birth weight>2.5 kg were assigned to the NW group (n=24) and those with a gestational age<38 weeks and birth weight<2.5 kg were assigned to the LW group (n=31). Penile size and other variables were compared between the two groups.\n\n\nRESULTS\nStretched penile length of the NW group was 3.3 \u00b1 0.2 cm, which did not differ significantly from that reported in 1987. All parameters including height, weight, penile length, testicular size, AGD, and digit length were significantly lower in the LW group than in the NW group. However, there were no significant differences in AGD ratio or 2:4 digit length ratio between the two groups.\n\n\nCONCLUSIONS\nThe penile length of newborn infants has not changed over the last quarter century in Korea. With normal penile appearance, the AGD ratio and 2:4 digit length ratio are consistent irrespective of birth weight, whereas AGD, digit length, and penile length are significantly smaller in newborns with low birth weight."}
{"_id": "a97acca3bd5676d46a25a4b2aff12d10bb4452c8", "title": "Ultra-wide bandwidth time-hopping spread-spectrum impulse radio for wireless multiple-access communications", "text": "Attractive features of time-hopping spread-spectrum multiple-access systems employing impulse signal technology are outlined, and emerging design issues are described. Performance of such communications systems in terms of achievable transmission rate and multiple-access capability are estimated for both analog and digital data modulation formats under ideal multiple-access channel conditions."}
{"_id": "f78d78508dd814fd3c90fd75ae8d2ef0f685283c", "title": "MODELING AND CONTROL OF BALL AND BEAM SYSTEM USING MODEL BASED AND NON-MODEL BASED CONTROL APPROACHES", "text": "The ball and beam system is a laboratory equipment with high nonlinearity in its dynamics. The aims of this research are to model the ball and beam system considering nonlinear factors and coupling effect and to design controllers to control the ball position. The LQR is designed considering two Degrees-of-Freedom and"}
{"_id": "4515d348e13fd31e2e7f3df1ab8c04899dd77d59", "title": "Cloud Computing - The Business Perspective", "text": "If cloud computing (CC) is to achieve its potential, there needs to be a clear understanding of the various issues involved, both from the perspectives of the providers and the consumers of the technology. There is an equally urgent need for understanding the business-related issues surrounding CC. We interviewed several industry executives who are either involved as developers or are evaluating CC as an enterprise user. We identify the strengths, weaknesses, opportunities and threats for the industry. We also identify the various issues that will affect the different stakeholders of CC. We issue a set of recommendations for the practitioners who will provide and manage this technology. For IS researchers, we outline the different areas of research that need attention so that we are in a position to advise the industry in the years to come. Finally, we outline some of the key issues facing governmental agencies who will be involved in the regulation of cloud computing."}
{"_id": "ae24488111896527432ff51653e4d6284fe60e83", "title": "Service-Oriented Cloud Computing Architecture", "text": "Cloud computing is getting popular and IT giants such as Google, Amazon, Microsoft, IBM have started their cloud computing infrastructure. However, current cloud implementations are often isolated from other cloud implementations. This paper gives an overview survey of current cloud computing architectures, discusses issues that current cloud computing implementations have and proposes a Service-Oriented Cloud Computing Architecture (SOCCA) so that clouds can interoperate with each other. Furthermore, the SOCCA also proposes high level designs to better support multi-tenancy feature of cloud computing."}
{"_id": "0f44833eb9047158221e7b3128cde1347b58ccd6", "title": "The Case for Energy-Proportional Computing", "text": "Energy-proportional designs would enable large energy savings in servers, potentially doubling their efficiency in real-life use. Achieving energy proportionality will require significant improvements in the energy usage profile of every system component, particularly the memory and disk subsystems."}
{"_id": "12a37f235bb1fc7f5572924add2993058931915d", "title": "The Chinese Wall Security Policy", "text": "Everyone who has seen the movie Wall Street wi~l have seen a commercial security policy in action. The recent work of Clark and Wilson and the WIPCIS initiative (the Workshop on Integrity Policy for Computer Information Systems) has drawn attention to the existence of a wide range of commercial security policies which are both significantly different from each other and quite alien to current \u201cmilitary\u201d thin king as implemented in products for the security marin symbols over an alphabet \u03c3, where each symbol is encoded by lg|\u03c3| bits. We show that compressed suffix arrays use just nHh + \u03c3 bits, while retaining full text indexing functionalities, such as searching any pattern sequence of length m in O(m lg |\u03c3| + polylog(n)) time. The term Hh \u2264 lg |\u03c3| denotes the hth-order empirical entropy of the text, which means that our index is nearly optimal in space apart from lower-order terms, achieving asymptotically the empirical entropy of the text (with a multiplicative constant 1). If the text is highly compressible so that Hn = o(1) and the alphabet size is small, we obtain a text index with o(m) search time that requires only o(n) bits. Further results and tradeoffs are reported in the paper."}
{"_id": "9f3b09e9ebe4f34c264ecba9fa1d95b822c746be", "title": "MBMF: Model-Based Priors for Model-Free Reinforcement Learning", "text": "Reinforcement Learning is divided in two main paradigms: model-free and model-based. Each of these two paradigms has strengths and limitations, and has been successfully applied to real world domains that are appropriate to its corresponding strengths. In this paper, we present a new approach aimed at bridging the gap between these two paradigms. We aim to take the best of the two paradigms and combine them in an approach that is at the same time dataefficient and cost-savvy. We do so by learning a probabilistic dynamics model and leveraging it as a prior for the intertwined model-free optimization. As a result, our approach can exploit the generality and structure of the dynamics model, but is also capable of ignoring its inevitable inaccuracies, by directly incorporating the evidence provided by the direct observation of the cost. As a proof-of-concept, we demonstrate on simulated tasks that our approach outperforms purely model-based and model-free approaches, as well as the approach of simply switching from a model-based to a model-free setting."}
{"_id": "e3806f6849f51442a58eae5bd26cce1a5a041fa3", "title": "Variable flux permanent magnet synchronous machine (VF-PMSM) design to meet electric vehicle traction requirements with reduced losses", "text": "Variable flux permanent magnet synchronous machines (VF-PMSMs) in which the magnetization state (MS) of low coercive force (low-Hc) permanent magnets can be actively controlled to reduce losses in applications that require wide-speed operation have been proposed recently. While prior focus has been on achieving MS manipulation without over-sizing the inverter and obtaining higher torque capability, this paper extends the design objectives to include the power requirements of an electric vehicle traction motor over its entire speed range. Finite element methods are used to study the effect of combinations of low-Hc and high-Hc permanent magnets arranged in either series or parallel on the performance of VF-PMSMs. It is shown that while both configurations help improve the torque density, only the series configuration can help improve the high speed power capability. Experimental results showing the variable MS property, torque-speed capability and loss reduction capability of a series magnet configuration VF-PMSM test machine are presented."}
{"_id": "4087110a5adb3a42582d1808bee8721f738580b4", "title": "Sudoku Squares and Chromatic Polynomials", "text": "T he Sudoku puzzle has become a very popular puzzle that many newspapers carry as a daily feature. The puzzle consists of a 9\u00d79 grid in which some of the entries of the grid have a number from 1 to 9. One is then required to complete the grid in such a way that every row, every column, and every one of the nine 3\u00d7 3 sub-grids contain the digits from 1 to 9 exactly once. The sub-grids are shown in Figure 1."}
{"_id": "f8b12b0a138271c245097f2c70ed9a0b61873968", "title": "Simultaneous algebraic reconstruction technique (SART): a superior implementation of the art algorithm.", "text": "In this paper we have discussed what appears to be a superior implementation of the Algebraic Reconstruction Technique (ART). The method is based on 1) simultaneous application of the error correction terms as computed by ART for all rays in a given projection; 2) longitudinal weighting of the correction terms back-distributed along the rays; and 3) using bilinear elements for discrete approximation to the ray integrals of a continuous image. Since this implementation generates a good reconstruction in only one iteration, it also appears to have a computational advantage over the more traditional implementation of ART. Potential applications of this implementation include image reconstruction in conjunction with ray tracing for ultrasound and microwave tomography in which the curved nature of the rays leads to a non-uniform ray density across the image."}
{"_id": "1b3eb4d8d6be74c821ac035c39a880b2d88ec22a", "title": "Finding Steiner Forests in Planar Graphs", "text": "Several routing problems such as VLSI river routing and single-layer routing can be formulated as a problem of finding a Steiner forest in a planar (grid) graph. Given an unweighted planar graph G together with nets of terminals, our problem is to find a Steiner forest, i.e., vertex-disjoint trees, each of which interconnects all the terminals of a net. This paper gives an efficient algorithm to solve the problem for the case all terminals lie on two face boundaries of G. Also obtained is an algorithm for finding a maximum number of internally vertexdisjoint paths connecting two specified vertices in a planar graph G. Both run in 0( nlog n) time if G has n vertices."}
{"_id": "538c4e4327c706a9346472661b3256c39b07df08", "title": "Causality in thought.", "text": "Causal knowledge plays a crucial role in human thought, but the nature of causal representation and inference remains a puzzle. Can human causal inference be captured by relations of probabilistic dependency, or does it draw on richer forms of representation? This article explores this question by reviewing research in reasoning, decision making, various forms of judgment, and attribution. We endorse causal Bayesian networks as the best normative framework and as a productive guide to theory building. However, it is incomplete as an account of causal thinking. On the basis of a range of experimental work, we identify three hallmarks of causal reasoning-the role of mechanism, narrative, and mental simulation-all of which go beyond mere probabilistic knowledge. We propose that the hallmarks are closely related. Mental simulations are representations over time of mechanisms. When multiple actors are involved, these simulations are aggregated into narratives."}
{"_id": "07f83b84f97735037ad37f6e691c937770871994", "title": "Get Out of my Picture! Internet-based Inpainting", "text": "We present a method to replace a user specified target region of a photograph by using other photographs of the same scene downloaded from the Internet via viewpoint invariant image search. Each of the retrieved images is first geometrically then photometrically registered with the query photograph. Geometric registration is achieved using multiple homographies and photometric registration is performed using a global affine transformation on image intensities. Each retrieved image proposes a possible solution for the target region. In the final step we combine these proposals into a single visually consistent result, using a Markov random field optimisation to choose seams between proposals, followed by gradient domain fusion. We demonstrate removal of objects and people in challenging photographs of Oxford landmarks containing complex image structures."}
{"_id": "e4d24a7c9f1bcbf2067277b9061215e9dd85d26d", "title": "Biodegradable polymeric nanoparticles based drug delivery systems.", "text": "Biodegradable nanoparticles have been used frequently as drug delivery vehicles due to its grand bioavailability, better encapsulation, control release and less toxic properties. Various nanoparticulate systems, general synthesis and encapsulation process, control release and improvement of therapeutic value of nanoencapsulated drugs are covered in this review. We have highlighted the impact of nanoencapsulation of various disease related drugs on biodegradable nanoparticles such as PLGA, PLA, chitosan, gelatin, polycaprolactone and poly-alkyl-cyanoacrylates."}
{"_id": "b2e5db91a09513c3d83bcf0b877110d54a0339e7", "title": "DeepBE: Learning Deep Binary Encoding for Multi-label Classification", "text": "The track 2 and track 3 of ChaLearn 2016 can be considered as Multi-Label Classification problems. We present a framework of learning deep binary encoding (DeepBE) to deal with multi-label problems by transforming multi-labels to single labels. The transformation of DeepBE is in a hidden pattern, which can be well addressed by deep convolutions neural networks (CNNs). Furthermore, we adopt an ensemble strategy to enhance the learning robustness. This strategy is inspired by its effectiveness in fine-grained image recognition (FGIR) problem, while most of face related tasks such as track 2 and track 3 are also FGIR problems. By DeepBE, we got 5.45% and 10.84% mean square error for track 2 and track 3 respectively. Additionally, we proposed an algorithm adaption method to treat the multiple labels of track 2 directly and got 6.84% mean square error."}
{"_id": "255bf72455ce94bfca01139968911ce8f3d85b26", "title": "Deep neural architectures for large scale android malware analysis", "text": "Android is arguably the most widely used mobile operating system in the world. Due to its widespead use and huge user base, it has attracted a lot of attention from the unsavory crowd of malware writers. Traditionally, techniques to counter such malicious software involved manually analyzing code and figuring out whether it was malicious or benign. However, due to the immense pace at which newer malware families are surfacing, such an approach is no longer feasible. Machine learning offers a way to tackle this issue of speed by automating the classification task. While several efforts have been made to use traditional machine learning techniques to Android malware detection, no reasonable effort has been made to utilize the newer, deep learning models in this domain. In this paper, we apply several deep learning models including fully connected, convolutional and recurrent neural networks as well as autoencoders and deep belief networks to detect Android malware from a large scale dataset of more than 55 GBs of Android malware. Further, we apply Bayesian machine learning to this problem domain to see how it fares with the deep learning based models while also providing insights into the dataset. We show that we are able to achieve better results using these models as compared to the state-of-the-art approaches. Our best model gets an F1 score of 0.986 with an AUC of 0.983 as compared to the existing best F1 score of 0.875 and AUC of 0.953."}
{"_id": "12dbbc1f31d302b528e7d260b6a51fb280112ab3", "title": "Temporal Information Extraction", "text": "Research on information extraction (IE) seeks to distill relational tuples from natural language text, such as the contents of the WWW. Most IE work has focussed on identifying static facts, encoding them as binary relations. This is unfortunate, because the vast majority of facts are fluents, only holding true during an interval of time. It is less helpful to extract PresidentOf(Bill-Clinton, USA) without the temporal scope 1/20/93 1/20/01. This paper presents TIE, a novel, information-extraction system, which distills facts from text while inducing as much temporal information as possible. In addition to recognizing temporal relations between times and events, TIE performs global inference, enforcing transitivity to bound the start and ending times for each event. We introduce the notion of temporal entropy as a way to evaluate the performance of temporal IE systems and present experiments showing that TIE outperforms three alternative approaches."}
{"_id": "a471336c2f07fd35199be15331f07771a8dfdd2b", "title": "A comparison of Haar-like, LBP and HOG approaches to concrete and asphalt runway detection in high resolution imagery", "text": "In this article, the three most used object detection approaches, Linear Binary Pattern cascade, Haarlike cascade, and Histogram of Oriented Gradients with Support Vector Machine are applied to automatic runway detection in high resolution satellite imagery and their results are compared. They have been employed predominantly for human feature recognition and this paper tests theirs applicability to runways. The results show that they can be indeed employed for this purpose with LBP and Haar approaches performing better than HOG+SVM."}
{"_id": "c20baa16cb57ff4979569871d15294fa720bbc23", "title": "Life beyond Distributed Transactions: an Apostate's Opinion", "text": "Many decades of work have been invested in the area of distributed transactions including protocols such as 2PC, Paxos, and various approaches to quorum. These protocols provide the application programmer a fa\u00e7ade of global serializability. Personally, I have invested a nontrivial portion of my career as a strong advocate for the implementation and use of platforms providing guarantees of global serializability. My experience over the last decade has led me to liken these platforms to the Maginot Line. In general, application developers simply do not implement large scalable applications assuming distributed transactions. When they attempt to use distributed transactions, the projects founder because the performance costs and fragility make them impractical. Natural selection kicks in... 1 The Maginot Line was a huge fortress that ran the length of the Franco-German border and was constructed at great expense between World War I and World War II. It successfully kept the German army from directly crossing the border between France and Germany. It was quickly bypassed by the Germans in 1940 who invaded through Belgium. This article is published under a Creative Commons License Agreement (http://creativecommons.org/licenses/by/2.5/). You may copy, distribute, display, and perform the work, make derivative works and make commercial use of the work, but you must attribute the work to the author and CIDR 2007. 3 Biennial Conference on Innovative DataSystems Research (CIDR) January 7-10, Asilomar, California USA. Instead, applications are built using different techniques which do not provide the same transactional guarantees but still meet the needs of their businesses. This paper explores and names some of the practical approaches used in the implementations of large-scale mission-critical applications in a world which rejects distributed transactions. We discuss the management of fine-grained pieces of application data which may be repartitioned over time as the application grows. We also discuss the design patterns used in sending messages between these repartitionable pieces of data. The reason for starting this discussion is to raise awareness of new patterns for two reasons. First, it is my belief that this awareness can ease the challenges of people hand-crafting very large scalable applications. Second, by observing the patterns, hopefully the industry can work towards the creation of platforms that make it easier to build these very large applications."}
{"_id": "8a74e166e96eca6db7a45b84d85a6b4181cf3516", "title": "Lane detection using spline model", "text": "In this paper, a Catmull\u00b1Rom spline-based lane model which describes the perspective e\u0080ect of parallel lines has been proposed for generic lane boundary. Since Catmull\u00b1Rom spline can form arbitrary shapes by di\u0080erent sets of control points, it can describe a wider range of lane structures compared with other lane models, i.e. straight and parabolic models. Moreover, the lane detection problem has been formulated here as the problem of determining the set of control points of lane model. The proposed algorithm \u00aerst detects the vanishing point (line) by using a Hough-like technique and then solves the lane detection problem by suggesting a maximum likelihood approach. Also, we have employed a multi-resolution strategy for rapidly achieving an accurate solution. This coarse-to-\u00aene matching o\u0080ers us an acceptable solution at an a\u0080ordable computational cost, and thus speeds up the process of lane detection. As a result, the proposed method is robust to noise, shadows, and illumination variations in the captured road images, and is also applicable to both the marked and the unmarked roads. \u00d3 2000 Elsevier Science B.V. All rights reserved."}
{"_id": "2e00c4b7e9b7ad6a5fd959dc50796aee7fbd6ce3", "title": "The Ant System Applied to the Quadratic Assignment Problem", "text": "In recent years there has been growing interest in algorithms inspired by the observation of natural phenomena to define computational procedures which can solve complex problems. In this article we introduce a distributed heuristic algorithm which was inspired by the observation of the behavior of ant colonies and we propose its use for the Quadratic Assignment Problem. Finally the results obtained in solving some classical instances of the problem are compared with those obtained from other evolutionary heuristics to evaluate the quality of the system proposed. ^Dipartimento di Scienze dell'Informazione, Universit\u00e0 di Bologna, Via Sacchi 3, 47023, Cesena, Italy, EU, maniezzo@csr.unibo.it & Dipartimento di Elettronica e Informazione, Politecnico di Milano, Via Ponzio 34/5, 20133, Milano, Italy, EU, colorni@elet.polimi.it #IRIDIA, Universit\u00e9 Libre de Bruxelles, Av. Franklin Roosevelt 50, CP 194/6, 1050 Brussels, Belgium, EU, mdorigo@ulb.ac.be"}
{"_id": "2b735a5cd94b0b5868e071255bd187a901cb975a", "title": "Optimization, Learning and Natural Algorithms", "text": ""}
{"_id": "4017f984d1b4b8748a06da2739183782bbe9b46d", "title": "Optimization by simulated annealing", "text": ""}
{"_id": "664025aae372a2dc5d8f06e877b721b1461c4214", "title": "1 Positive Feedback as a Search Strategy", "text": "A combination of distributed computation, positive feedback and constructive greedy heuristic is proposed as a new approach to stochastic optimization and problem solving. Positive feedback accounts for rapid discovery of very good solutions, distributed computation avoids premature convergence, and greedy heuristic helps the procedure to find acceptable solutions in the early stages of the search process. An application of the proposed methodology to the classical travelling salesman problem shows that the system can rapidly provide very good, if not optimal, solutions. We report on many simulation results and discuss the working of the algorithm. Some hints about how this approach can be applied to a variety of optimization problems are also given."}
{"_id": "34d379ff7bf5464f17cdd5e4396488e2c849cb3c", "title": "A Multi-Dialect, Multi-Genre Corpus of Informal Written Arabic", "text": "This paper presents a multi-dialect, multi-genre, human annotated corpus of dialectal Arabic with data obtained from both online newspaper commentary and Twitter. Most Arabic corpora are small and focus on Modern Standard Arabic (MSA). There has been recent interest, however, in the construction of dialectal Arabic corpora (Zaidan and Callison-Burch, 2011a; Al-Sabbagh and Girju, 2012). This work differs from previously constructed corpora in two ways. First, we include coverage of five dialects of Arabic: Egyptian, Gulf, Levantine, Maghrebi and Iraqi. This is the most complete coverage of any dialectal corpus known to the authors. In addition to data, we provide results for the Arabic dialect identification task that outperform those reported in Zaidan and Callison-Burch (2011a)."}
{"_id": "76f48c44f7eaa31e88d32c169389b71a1512955f", "title": "Building Rome on a Cloudless Day", "text": "This paper introduces an approach for dense 3D reconstruction from unregistered Internet-scale photo collections with about 3 million images within the span of a day on a single PC (\u201ccloudless\u201d). Our method advances image clustering, stereo, stereo fusion and structure from motion to achieve high computational performance. We leverage geometric and appearance constraints to obtain a highly parallel implementation on modern graphics processors and multi-core architectures. This leads to two orders of magnitude higher performance on an order of magnitude larger dataset than competing state-of-the-art approaches."}
{"_id": "454375e01d9127999ebdc9d41774f9f1dd3fb13f", "title": "Device-free gesture tracking using acoustic signals: demo", "text": "In this demo, we present LLAP, a hand tracking system that uses ultrasound to localize the hand of the user to enable device-free gesture inputs. LLAP utilizes speakers and microphones on Commercial-Off-The-Shelf (COTS) mobile devices to play and record sound waves that are inaudible to humans. By measuring the phase of the sound signal reflected by the hands or fingers of the user, we can accurately measure the gesture movements. With a single pair of speaker/microphone, LLAP can track hand movement with accuracy of 3.5 mm. For devices with two microphones, LLAP enables drawing-in-the air capability with tracking accuracy of 4.6 mm. Moreover, the latency for LLAP is smaller than 15 ms for both the Android and the iOS platforms so that LLAP can be used for real-time applications."}
{"_id": "d7f958a82bfa7b5bec4b86847c8d60fca8a7e76c", "title": "Compressive coded aperture imaging", "text": "Nonlinear image reconstruction based upon sparse representations of images has recently received widespread attention with the emerging framework of compressed sensing (CS). This theory indicates that, when feasible, judicious selection of the type of distortion induced by measurement systems may dramatically improve our ability to perform image reconstruction. However, applying compressed sensing theory to practical imaging systems poses a key challenge: physical constraints typically make it infeasible to actually measure many of the random projections described in the literature, and therefore, innovative and sophisticated imaging systems must be carefully designed to effectively exploit CS theory. In video settings, the performance of an imaging system is characterized by both pixel resolution and field of view. In this work, we propose compressive imaging techniques for improving the performance of video imaging systems in the presence of constraints on the focal plane array size. In particular, we describe a novel yet practical approach that combines coded aperture imaging to enhance pixel resolution with superimposing subframes of a scene onto a single focal plane array to increase field of view. Specifically, the proposed method superimposes coded observations and uses wavelet-based sparsity recovery algorithms to reconstruct the original subframes. We demonstrate the effectiveness of this approach by reconstructing with high resolution the constituent images of a video sequence."}
{"_id": "40a938f61314f0f79319656d11c040044f883a07", "title": "A multimodal feature learning approach for sentiment analysis of social network multimedia", "text": "In this paper we investigate the use of a multimodal feature learning approach, using neural network based models such as Skip-gram and Denoising Autoencoders, to address sentiment analysis of micro-blogging content, such as Twitter short messages, that are composed by a short text and, possibly, an image. The approach used in this work is motivated by the recent advances in: i) training language models based on neural networks that have proved to be extremely efficient when dealing with web-scale text corpora, and have shown very good performances when dealing with syntactic and semantic word similarities; ii) unsupervised learning, with neural networks, of robust visual features, that are recoverable from partial observations that may be due to occlusions or noisy and heavily modified images. We propose a novel architecture that incorporates these neural networks, testing it on several standard Twitter datasets, and showing that the approach is efficient and obtains good classification results."}
{"_id": "7be0f21125b4accf24ed8884e32f723606a48b9e", "title": "Child murder by parents: a psychiatric review of filicide.", "text": ""}
{"_id": "8e614b4534a9db140deba6f7a4f51754689ea739", "title": "A lifespan database of adult facial stimuli.", "text": "Faces constitute a unique and widely used category of stimuli. In spite of their importance, there are few collections of faces for use in research, none of which adequately represent the different ages of faces across the lifespan. This lack of a range of ages has limited the majority of researchers to using predominantly young faces as stimuli even when their hypotheses concern both young and old participants. We describe a database of 575 individual faces ranging from ages 18 to 93. Our database was developed to be more representative of age groups across the lifespan, with a special emphasis on recruiting older adults. The resulting database has faces of 218 adults age 18-29, 76 adults age 30-49, 123 adults age 50-69, and 158 adults age 70 and older. These faces may be acquired for research purposes from http://agingmind.cns.uiuc.edu/facedb/. This will allow researchers interested in using facial stimuli access to a wider age range of adult faces than has previously been available."}
{"_id": "530a930d96bded46b13b086c23a1268a635cb47a", "title": "Quantifying Trading Behavior in Financial Markets Using Google Trends", "text": "Crises in financial markets affect humans worldwide. Detailed market data on trading decisions reflect some of the complex human behavior that has led to these crises. We suggest that massive new data sources resulting from human interaction with the Internet may offer a new perspective on the behavior of market participants in periods of large market movements. By analyzing changes in Google query volumes for search terms related to finance, we find patterns that may be interpreted as \"early warning signs\" of stock market moves. Our results illustrate the potential that combining extensive behavioral data sets offers for a better understanding of collective human behavior."}
{"_id": "57c2087ae2680d63e661ab1c7e1cfd9b00fd7aaf", "title": "Low-power high-performance SAR ADC with redundancy and digital background calibration", "text": "As technology scales, the improved speed and energy efficiency make the successiveapproximation-register (SAR) architecture an attractive alternative for applications that require high-speed and high-accuracy analog-to-digital converters (ADCs). In SAR ADCs, the key linearity and speed limiting factors are capacitor mismatch and incomplete digital-to-analog converter (DAC)/reference voltage settling. In this thesis, a sub-radix-2 SAR ADC is presented with several new contributions. The main contributions include investigation of using digital error correction (redundancy) in SAR ADCs for dynamic error correction and speed improvement, development of two new calibration algorithms to digitally correct for manufacturing mismatches, design of new architecture to incorporate redundancy within the architecture itself while achieving 94% better energy efficiency compared to conventional switching algorithm, development of a new capacitor DAC structure to improve the SNR by four times with improved matching, joint design of the analog and digital circuits to create an asynchronous platform in order to reach the targeted performance, and analysis of key circuit blocks to enable the design to meet noise, power and timing requirements. The design is fabricated in standard 1P9M 65nm CMOS technology with 1.2V supply. The active die area is 0.083mm with full rail-to-rail input swing of 2.4VP\u2212P . A 67.4dB SNDR, 78.1dB SFDR, +1.0/-0.9 LSB12 INL and +0.5/-0.7 LSB12 DNL are achieved at 50MS/s at Nyquist rate. The total power consumption, including the estimated calibration and reference power, is 2.1mW, corresponding to 21.9fJ/conv.step FoM. This ADC achieves the best FoM of any ADCs with greater than 10b ENOB and 10MS/s sampling rate. Thesis Supervisor: Duane S. Boning Title: Professor of Electrical Engineering and Computer Science Thesis Supervisor: Hae-Seung Lee Title: Professor of Electrical Engineering and Computer Science"}
{"_id": "c24ae0a7b8c0b4557690c247accee55747e39acc", "title": "Effects of time-to-trigger parameter on handover performance in SON-based LTE systems", "text": "In this paper, we evaluate handover (HO) performance when we apply various time-to-trigger (TTT) methods in self-organizing network (SON)-based long term evolution (LTE) systems. Although TTT can mitigate the wasteful ping-pong HO effect, it also can cause undesirable radio link failure (RLF) due to delayed HO. The optimal HO timings that produce the lowest ping-pong rate within allowable RLF rate vary depending on user equipment (UE) speeds and neighboring cell configurations. To achieve efficient HO timings, we propose and investigate two methods: \u201cadaptive\u201d and \u201cgrouping.\u201d In the \u201cadaptive method,\u201d we select the adaptive TTT value for each UE speed based on RLF rate of 2%. The \u201cgrouping method\u201d classifies UE speeds into three ranges and assigns the proper TTT value to each range. To apply the LTE specification more effectively, we suggest the criteria of grouping, and propose the proper TTT value for each range. We consider HO in two neighboring cell configurations: from a macro cell to either a macro cell or a pico cell. Simulation results show that the HO performance of the \u201cadaptive method\u201d is greatly improved compared to that of applying fixed TTT values. The results also show that the performance of the \u201cgrouping method\u201d, using proposed criteria and proper TTT values, is comparable to that of the \u201cadaptive method.\u201d"}
{"_id": "16d42b7ac4ed284cb8200488eefec644a4a54d57", "title": "A compact filtering power divider based on SIW triangular cavities", "text": "A compact substrate-integrated waveguide (SIW) filtering power divider with high isolation is presented in this paper based on SIW triangular cavities. An isolation resistance is exploited to achieve high isolation between two output ports. The design parameters can be realized by determining proper offset positions of the feeding ports and coupling windows, which can be fixed by the synthesized Qe and Mij. By using three triangular cavities as the basic elements, a filtering power divider is designed and fabricated on a single-layer substrate with the standard printed circuit boards (PCB) process. This filtering power divider operates at the center frequency of 11.4 GHz with the bandwidths of 0.75 GHz. There are good agreements in the simulated and measured results."}
{"_id": "4c46250280d25cad47000f4175889495818ef111", "title": "Rethinking the fear circuit: the central nucleus of the amygdala is required for the acquisition, consolidation, and expression of Pavlovian fear conditioning.", "text": "In the standard model of pavlovian fear learning, sensory input from neutral and aversive stimuli converge in the lateral nucleus of the amygdala (LA), in which alterations in synaptic transmission encode the association. During fear expression, the LA is thought to engage the central nucleus of the amygdala (CE), which serves as the principal output nucleus for the expression of conditioned fear responses. In the present study, we reexamined the roles of LA and CE. Specifically, we asked whether CE, like LA, might also be involved in fear learning and memory consolidation. Using functional inactivation methods, we first show that CE is involved not only in the expression but also the acquisition of fear conditioning. Next, we show that inhibition of protein synthesis in CE after training impairs fear memory consolidation. These findings indicate that CE is not only involved in fear expression but, like LA, is also involved in the learning and consolidation of pavlovian fear conditioning."}
{"_id": "e0b3a62944afc0675ebaf8993705d6216b8d3daf", "title": "Deconstructing the Blockchain to Approach Physical Limits", "text": "The concept of a blockchain was invented by Satoshi Nakamoto to maintain a distributed ledger for an electronic payment system, Bitcoin. In addition to its security, important performance measures of a blockchain protocol are its transaction throughput, confirmation latency and confirmation reliability. Existing systems operate far away from these physical limits. In this work we introduce Prism , a new proof-of-work blockchain protocol, which can achieve 1) security against up to 50% adversarial hashing power; 2) optimal throughput up to the capacity C of the network; 3) confirmation latency for honest transactions proportional to the propagation delay D, with confirmation error probability exponentially small in the bandwidth-delay product CD; 4) eventual total ordering of all transactions. Our approach to the design of this protocol is based on deconstructing the blockchain into its basic functionalities and systematically scaling up these functionalities to approach their physical limits."}
{"_id": "957e017bebb0e90f191c29f88796dc07420baa9b", "title": "A comparative study of classification algorithms for forecasting rainfall", "text": "India is an agricultural country which largely depends on monsoon for irrigation purpose. A large amount of water is consumed for industrial production, crop yield and domestic use. Rainfall forecasting is thus very important and necessary for growth of the country. Weather factors including mean temperature, dew point temperature, humidity, pressure of sea and speed of wind and have been used to forecasts the rainfall. The dataset of 2245 samples of New Delhi from June to September (rainfall period) from 1996 to 2014 has been collected from a website named Weather Underground. The training dataset is used to train the classifier using Classification and Regression Tree algorithm, Naive Bayes approach, K nearest Neighbour and 5-10-1 Pattern Recognition Neural Network and its accuracy is tested on a test dataset. Pattern Recognition networks has given 82.1% accurate results, KNN with 80.7% correct forecasts ranks second, Classification and Regression Tree(CART) gives 80.3% while Naive Bayes provides 78.9% correctly classified samples."}
{"_id": "89392718100ef1a461480bd7885ee09b0337ef24", "title": "Vertical ridge augmentation in the esthetic zone.", "text": "The reconstruction of deficient alveolar ridges using vertical and/or horizontal guided bone regeneration techniques allows for ideal implant placement, which is crucial for function and also for esthetically successful outcomes. Unlike in the past, when meeting a patient's functional demands was sufficient, many patients now have greater expectations from their implant restoration. Hence, it is no longer enough simply to restore the edentulous space with a functioning tooth or teeth. It has been suggested that patients now measure their final restoration using the contralateral natural tooth as the gold standard. Both subjective and objective levels of patient information on dental implants have increased significantly in the last decade. As a result of this demand, implant literature has inherited and developed specific esthetic parameters and patient-centered outcomes from studies in the restorative field. Unfortunately, studies reporting on guided bone regeneration in the esthetic zone entirely lack such parameters and outcomes. Currently, there is a strong need for a consensus on objective and well-defined parameters to assess the esthetics in bone regeneration and subsequently on implant dentistry."}
{"_id": "cef26aee0938fe2abb7944888814526323720ec9", "title": "A New Discrete Particle Swarm Optimization Algorithm", "text": "Particle Swarm Optimization (PSO) has been shown to perform very well on a wide range of optimization problems. One of the drawbacks to PSO is that the base algorithm assumes continuous variables. In this paper, we present a version of PSO that is able to optimize over discrete variables. This new PSO algorithm, which we call Integer and Categorical PSO (ICPSO), incorporates ideas from Estimation of Distribution Algorithms (EDAs) in that particles represent probability distributions rather than solution values, and the PSO update modifies the probability distributions. In this paper, we describe our new algorithm and compare its performance against other discrete PSO algorithms. In our experiments, we demonstrate that our algorithm outperforms comparable methods on both discrete benchmark functions and NK landscapes, a mathematical framework that generates tunable fitness landscapes for evaluating EAs."}
{"_id": "842ca93d770edef147e9ca117e1c0294a596cb82", "title": "Transferring Deep Convolutional Neural Networks for the Scene Classification of High-Resolution Remote Sensing Imagery", "text": "Learning efficient image representations is at the core of the scene classification task of remote sensing imagery. The existing methods for solving the scene classification task, based on either feature coding approaches with low-level hand-engineered features or unsupervised feature learning, can only generate mid-level image features with limited representative ability, which essentially prevents them from achieving better performance. Recently, the deep convolutional neural networks (CNNs), which are hierarchical architectures trained on large-scale datasets, have shown astounding performance in object recognition and detection. However, it is still not clear how to use these deep convolutional neural networks for high-resolution remote sensing (HRRS) scene classification. In this paper, we investigate how to transfer features from these successfully pre-trained CNNs for HRRS scene classification. We propose two scenarios for generating image features via extracting CNN features from different layers. In the first scenario, the activation vectors extracted from fully-connected layers are regarded as the final image features; in the second scenario, we extract dense features from the last convolutional layer at multiple scales and then encode the dense features into global image features through commonly used feature coding approaches. Extensive experiments on two public scene classification datasets demonstrate that the image features obtained by the two proposed scenarios, even with a simple linear classifier, can result in remarkable performance and improve the state-of-the-art by a significant margin. The results reveal that the features Remote Sens. 2015, 7 14681 from pre-trained CNNs generalize well to HRRS datasets and are more expressive than the lowand mid-level features. Moreover, we tentatively combine features extracted from different CNN models for better performance."}
{"_id": "4f5f2b7740cf53bacf502856461601c881c60c74", "title": "Memory Leak Analysis by Contradiction", "text": "We present a novel leak detection algorithm. To prove the absence of a memory leak, the algorithm assumes its presence and runs a backward heap analysis to disprove this assumption. We have implemented this approach in a memory leak analysis tool and used it to analyze several routines that manipulate linked lists and trees. Because of the reverse nature of the algorithm, the analysis can locally reason about the absence of memory leaks. We have also used the tool as a scalable, but unsound leak detector for C programs. The tool has found several bugs in larger programs from the SPEC2000 suite."}
{"_id": "88481350c0d695d60202eeecfc53907cb4770d4b", "title": "Sign language recognition using Microsoft Kinect", "text": "In last decade lot of efforts had been made by research community to create sign language recognition system which provide a medium of communication for differently-abled people and their machine translations help others having trouble in understanding such sign languages. Computer vision and machine learning can be collectively applied to create such systems. In this paper, we present a sign language recognition system which makes use of depth images that were captured using a Microsoft Kinect\u00ae camera. Using computer vision algorithms, we develop a characteristic depth and motion profile for each sign language gesture. The feature matrix thus generated was trained using a multi-class SVM classifier and the final results were compared with existing techniques. The dataset used is of sign language gestures for the digits 0-9."}
{"_id": "35b1a82c3399a58da236a5bb340fbcc593f73343", "title": "Injury Risk Quantification for Industrial Robots in Collaborative Operation with Humans", "text": "While the topic of human-safe robots has an appreciable academic history, only today are various solutions to this challenge becoming economically relevant. We are witnessing a maturing of technology and market expectations, paired with initiatives in the area of standardization. One of the remaining hurdles to routine deployment of robots for collaborative operation with humans is obtaining a well-founded understanding of low-level injuries of the type that such robots can potentially inflict. For the assessment of collaborative robot designs and systems, the low-level injury potential must be unambiguously quantified. A classification scheme for holding such information is presented and an outlook on its use is given."}
{"_id": "3273f6c7d8a16d75c5e1ea8ae3602cf6f61a5fd9", "title": "Acceptance Model in the Domains of LIS and Education : A Review of Selected Literature", "text": "The Technology Acceptance Model (TAM) is a theoretical framework that is most extensively utilized in explaining an individual\u2019s acceptance of an information technology or an information system. This study reviews numerous literature related to the TAM focusing on TAM applications in the domains of Library and Information Science (LIS) and Education. The different studies in these areas were evaluated to understand the modifications incorporated into this model. The study attempts provide insight on future trends in the technology acceptance model as well to help identify gaps in literature where future research could be conducted."}
{"_id": "2c56368255ca6a86ef0a26466b9d0a2425f55e9d", "title": "Risk Aversion and Expected-Utility Theory : A Calibration Theorem", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "2d7622afe72922de5c430014965cfddf33692885", "title": "A review of empirical research on manufacturing flexibility", "text": "Manufacturing flexibility is widely recognized as a critical component to achieving a competitive advantage in the marketplace. A comprehensive look at the empirical research pertaining to manufacturing flexibility highlights the very fragmented nature of this body of work. We present a comprehensive contingency-based framework for examining the content related issues involving the relationships and variables included in past studies. We also examine several important \u017d . research designrmethodology issues e.g., sampling, data collection and measurement and propose solutions to some identified problems. q 2000 Elsevier Science B.V. All rights reserved."}
{"_id": "df7c11bf2eaec672904440901efaeba2ef601fef", "title": "Risk Management : Coordinating Corporate Investment and Financing Policies", "text": "This paper develops a general framework for analyzing corporate risk management policies. We begin by observing that if external sources of finance are more costly to corporations than internally generated funds, there will typically be a benefit to hedging: hedging adds value to the extent that it helps ensure that a corporation has sufficient internal funds available to take advantage of attractive investment opportunities. We then argue that this simple observation has wide ranging implications for the design of risk management strategies. We delineate how these strategies should depend on such factors as shocks to investment and financing opportunities. We also discuss exchange rate hedging strategies for multinationals, as well as strategies involving \"nonlinear\" instruments like options. CORPORATIONS TAKE RISK MANAGEMENT very seriously\u2014recent surveys find that risk management is ranked by financial executives as one of their most important objectives.^ Given its real-world prominence, one might guess that the topic of risk management would command a great deal of attention from researchers in finance, and that practitioners would therefore have a welldeveloped body of wisdom from which to draw in formulating hedging strategies. Such a guess would, however, be at best only partially correct. Finance theory does do a good job of instructing firms on the implementation of hedges. For example, if a refining company decides that it wants to use options to reduce its exposure to oil prices by a certain amount, a BlackScholes type model can help the company calculate the number of contracts needed. Indeed, there is an extensive literature that covers numerous practical aspects of what might be termed \"hedging mechanics,\" from the computation of hedge ratios to the institutional peculiarities of individual contracts. Unfortunately, finance theory has had much less clear cut guidance to offer on the logically prior questions of hedging strategy: What sorts of risks * Froot is from Harvard and NBER, Scharfstein is from MIT and NBER, and Stein is from MIT and NBER. We thank Don Lessard, Tim Luehrman, Andre Perold, Raghuram Rajan, Julio Rotemberg, and Stew Myers for helpful discussions. We are also grateful to the IFSRC and the Center for Energy Policy Research at MIT, the Department of Research at Harvard Business School, the National Science Foundation, and Batterymarch Financial Management for generous financial support. ' See Rawls and Smithson (1990)."}
{"_id": "3ab1c861c4be472a9b672214f472e37ad82931bb", "title": "Financing Constraints and Corporate Investment", "text": "EMPIRICAL models of business investment rely generally on the assumption of a \"representative firm\" that responds to prices set in centralized securities markets. Indeed, if all firms have equal access to capital markets, firms' responses to changes in the cost of capital or tax-based investment incentives differ only because of differences in investment demand. A firm's financial structure is irrelevant to investment because external funds provide a perfect substitute for internal capital. In general, with perfect capital markets, a firm's investment decisions are independent of its financial condition. An alternative research agenda, however, has been based on the view that internal and external capital are not perfect substitutes. According to this view, investment may depend on financial factors, such as the availability of internal finance, access to new debt or equity finance, or the functioning of particular credit markets. For example, a firm's internal cash flow may affect investment spending because of a \"financ-"}
{"_id": "90c365effd3b2f3a31082d32cba45ef10f1c515c", "title": "Visual landmarks sharpen grid cell metric and confer context specificity to neurons of the medial entorhinal cortex", "text": "Neurons of the medial entorhinal cortex (MEC) provide spatial representations critical for navigation. In this network, the periodic firing fields of grid cells act as a metric element for position. The location of the grid firing fields depends on interactions between self-motion information, geometrical properties of the environment and nonmetric contextual cues. Here, we test whether visual information, including nonmetric contextual cues, also regulates the firing rate of MEC neurons. Removal of visual landmarks caused a profound impairment in grid cell periodicity. Moreover, the speed code of MEC neurons changed in darkness and the activity of border cells became less confined to environmental boundaries. Half of the MEC neurons changed their firing rate in darkness. Manipulations of nonmetric visual cues that left the boundaries of a 1D environment in place caused rate changes in grid cells. These findings reveal context specificity in the rate code of MEC neurons."}
{"_id": "8ffd906129ad079e1df379a0be17d3f9f0d80b9c", "title": "Synergistic Analysis of Evolving Graphs", "text": "Evolving graph processing involves repeating analyses, which are often iterative, over multiple snapshots of the graph corresponding to different points in time. Since the snapshots of an evolving graph share a great number of vertices and edges, traditional approaches that process these snapshots one at a time without exploiting this overlap contain much wasted effort on both data loading and computation, making them extremely inefficient. In this article, we identify major sources of inefficiencies and present two optimization techniques to address them. First, we propose a technique for amortizing the fetch cost by merging fetching of values for different snapshots of the same vertex. Second, we propose a technique for amortizing the processing cost by feeding values computed by earlier snapshots into later snapshots. We have implemented these optimizations in two distributed graph processing systems, namely, GraphLab and ASPIRE. Our experiments with multiple real evolving graphs and algorithms show that, on average fetch amortization speeds up execution of GraphLab and ASPIRE by 5.2\u00d7 and 4.1\u00d7 , respectively. Amortizing the processing cost yields additional average speedups of 2\u00d7 and 7.9\u00d7, respectively."}
{"_id": "7721cd34379cfebfce7925e38f4832364f404eda", "title": "Assessment of methodological quality of primary studies by systematic reviews: results of the metaquality cross sectional study.", "text": "OBJECTIVES\nTo describe how the methodological quality of primary studies is assessed in systematic reviews and whether the quality assessment is taken into account in the interpretation of results.\n\n\nDATA SOURCES\nCochrane systematic reviews and systematic reviews in paper based journals.\n\n\nSTUDY SELECTION\n965 systematic reviews (809 Cochrane reviews and 156 paper based reviews) published between 1995 and 2002.\n\n\nDATA SYNTHESIS\nThe methodological quality of primary studies was assessed in 854 of the 965 systematic reviews (88.5%). This occurred more often in Cochrane reviews than in paper based reviews (93.9% v 60.3%, P < 0.0001). Overall, only 496 (51.4%) used the quality assessment in the analysis and interpretation of the results or in their discussion, with no significant differences between Cochrane reviews and paper based reviews (52% v 49%, P = 0.58). The tools and methods used for quality assessment varied widely.\n\n\nCONCLUSIONS\nCochrane reviews fared better than systematic reviews published in paper based journals in terms of assessment of methodological quality of primary studies, although they both largely failed to take it into account in the interpretation of results. Methods for assessment of methodological quality by systematic reviews are still in their infancy and there is substantial room for improvement."}
{"_id": "7d2edb30786562e0d0f2f1857c02fcd38983b622", "title": "An Ontology-Based Approach to Model-Driven Software Product Lines", "text": "Software development in highly variable domains constrained by tight regulations and with many business concepts involved results in hard to deliver and maintain applications, due to the complexity of dealing with the large number of concepts provided by the different parties and system involved in the process. One way to tackle these problems is thru combining software product lines and model-driven software development supported by ontologies. Software product lines and model-driven approaches would promote reuse on the software artifacts and, if supported by an ontological layer, those artifacts would be domain-validated. We intend to create a new conceptual framework for software development with domain validated models in highly variable domains. To define such a framework we will propose a model that relates several dimensions and areas of software development thru time and abstraction levels. This model would guarantee to the software house traceability of components, domain validated artifacts, easy to maintain and reusable components, due to the relations and mappings we propose to establish in the conceptual framework, between the software artifacts and the ontology."}
{"_id": "89781726d303b54b908029cded8229ee7a42770c", "title": "Pixel-Oriented Visualization of Change in Social Networks", "text": "We propose a new approach to visualize social networks. Most common network visualizations rely on graph drawing. While without doubt useful, graphs suffer from limitations like cluttering and important patterns may not be realized especially when networks change over time. Our approach adapts pixel-oriented visualization techniques to social networks as an addition to traditional graph visualizations. The visualization is exemplified using social networks based on corporate wikis."}
{"_id": "329cf7a6528f339560ff9fb617f140b1a54a067e", "title": "Beyond Recommender Systems: Helping People Help Each Other", "text": "The Internet and World Wide Web have brought us into a world of endless possibiliti es: interactive Web sites to experience, music to li sten to, conversations to participate in, and every conceivable consumer item to order. But this world also is one of endless choice: how can we select from a huge universe of items of widely varying quality? Computational recommender systems have emerged to address this issue. They enable people to share their opinions and benefit from each other\u2019s experience. We present a framework for understanding recommender systems and survey a number of distinct approaches in terms of this framework. We also suggest two main research challenges: (1) helping people form communities of interest while respecting personal privacy, and (2) developing algorithms that combine multiple types of information to compute recommendations."}
{"_id": "ce3c1b0e4868d03879aefb6e2198767c4e3e7943", "title": "Context-aware application scheduling in mobile systems: what will users do and not do next?", "text": "Usage patterns of mobile devices depend on a variety of factors such as time, location, and previous actions. Hence, context-awareness can be the key to make mobile systems to become personalized and situation dependent in managing their resources. We first reveal new findings from our own Android user experiment: (i) the launching probabilities of applications follow Zipf's law, and (ii) inter-running and running times of applications conform to log-normal distributions. We also find context-dependency in application usage patterns, for which we classify contexts in a personalized manner with unsupervised learning methods. Using the knowledge acquired, we develop a novel context-aware application scheduling framework, CAS that adaptively unloads and preloads background applications in a timely manner. Our trace-driven simulations with 96 user traces demonstrate the benefits of CAS over existing algorithms. We also verify the practicality of CAS by implementing it on the Android platform."}
{"_id": "c88e9238b8e564b6554667130996a7c6fa8b2564", "title": "Multimodal feature extraction and fusion for semantic mining of soccer video: a survey", "text": "This paper presents a classified review of soccer video analysis works. The existing approaches in the aspects of highlight event detection, video summarization and retrieval based on video stream, ball and player tracking for provision of match statistics, technical and tactical analysis and application of different sources in soccer video analysis have been surveyed. In addition, some major existing commercial softwares developed for video analysis are introduced and compared. With regard to the existing challenge for automatic and realtime provision of video analysis, different computer vision approaches are discussed and compared. Audio, video and text feature extraction methods have been investigated and the future trends for improvement of the reviewed systems have been introduced in terms of response time optimization, increase of precision and eliminating the need of human intervention for video analysis."}
{"_id": "87d2c54d5059be201f1701f2b8bc8b4dc2038ba6", "title": "Confusion Matrix", "text": "A confusion matrix (Kohavi and Provost, 1998) contains information about actual and predicted classifications done by a classification system. Performance of such systems is commonly evaluated using the data in the matrix. The following table shows the confusion matrix for a two class classifier. The entries in the confusion matrix have the following meaning in the context of our study: \u25cf a is the number of correct predictions that an instance is negative, \u25cf b is the number of incorrect predictions that an instance is positive, \u25cf c is the number of incorrect of predictions that an instance negative, and \u25cf d is the number of correct predictions that an instance is positive. Predicted Negative Positive Actual Negative a b Positive c d"}
{"_id": "462881e12a6708ddc520fa4d58c99ba73b18a6ab", "title": "Continuous Electrowetting of Non-toxic Liquid Metal for RF Applications", "text": "Continuous electrowetting (CEW) is demonstrated to be an effective actuation mechanism for reconfigurable radio frequency (RF) devices that use non-toxic liquid-metal tuning elements. Previous research has shown CEW is an efficient means of electrically inducing motion in a liquid-metal slug, but precise control of the slug's position within fluidic channels has not been demonstrated. Here, the precise positioning of liquid-metal slugs is achieved using CEW actuation in conjunction with channels designed to minimize the liquid-metal surface energy at discrete locations. This approach leverages the high surface tension of liquid metal to control its resting position with submillimeter accuracy. The CEW actuation and fluidic channel design were optimized to create reconfigurable RF devices. In addition, solutions for the reliable actuation of a gallium-based, non-toxic liquid-metal alloy (Galinstan) are presented that mitigate the tendency of the alloy to form a surface oxide layer capable of wetting to the channel walls, inhibiting motion. A reconfigurable slot antenna utilizing these techniques to achieve a 15.2% tunable frequency bandwidth is demonstrated."}
{"_id": "29dbe7b48f5467370cdf2c017e281a6d7bcae77d", "title": "The Impact of E-Commerce Announcements on the Market Value of Firms", "text": "Firms are undertaking growing numbers of e-commerce initiatives and increasinglymaking significant investments required to participate in the growing online market. However, empirical support for the benefits to firms from e-commerce is weaker than glowing accounts in the popular press, based on anecdotal evidence, would lead us to believe. In this paper, we explore the following questions: What are the returns to shareholders in firms engaging in e-commerce? How do the returns to conventional, brick and mortar firms from e-commerce initiatives compare with returns to the new breed of net firms? How do returns from businessto-business e-commerce compare with returns from business-to-consumer e-commerce? How do the returns to e-commerce initiatives involving digital goods compare to initiatives involving tangible goods? We examine these issues using event study methodology and assess the cumulative abnormal returns to shareholders (CARs) for 251 e-commerce initiatives announced by firms between October and December 1998. The results suggest that e-commerce initiatives do indeed lead to significant positive CARs for firms\u2019 shareholders. While the CARs for conventional firms are not significantly different from those for net firms, the CARs for businessto-consumer (B2C) announcements are higher than those for business-to-business (B2B) announcements. Also, the CARs with respect to e-commerce initiatives involving tangible goods are higher than for those involving digital goods. Our data were collected in the last quarter of 1998 during a unique bull market period and the magnitudes of CARs (between 4.9 and 23.4% for different subsamples) in response to e-commerce announcements are larger than those reported for a variety of other firm actions in prior event studies. This paper presents the first empirical test of the dot com effect, validating popular anticipations of significant future benefits to firms entering into e-commerce arrangements. (Event Study; Electronic Commerce;Market Value; Resource-Based View; Business-to-Business;Business-to-Consumer; Digital Goods; Tangible Goods)"}
{"_id": "11821fd15b7cb58a8fb3e411f140b30683c35c6e", "title": "Facebook C2C social commerce: A study of online impulse buying", "text": "Facebook users are increasingly using the site to conduct commercial activities, by posting advertisements in groups and then buying or selling items from each other. This type of group is called as a C2C Facebook \u201cbuy and sell\u201d group in the current work. Drawing from Latent State Trait theory, Heuristic Information Processing, and Observational Learning, we conducted an online field experiment to empirically investigate the effect of the information quality of the advertisement, the trait of the impulsiveness, and the number of \u201cLikes\u201d it receives on consumers\u2019 urge to buy impulsively. The findings and implications of our study are discussed in the"}
{"_id": "334d6c71b6bce8dfbd376c4203004bd4464c2099", "title": "Biconvex Relaxation for Semidefinite Programming in Computer Vision", "text": "References [1] Wang, P., Shen, C., van den Hengel, A., A fast semidefinite approach to solving binary quadratic problems. CVPR 2013. [2] Candes, E.J., Li, X., Soltanolkotabi, M., Phase retrieval via wirtinger flow: Theory and algorithms. IEEE Transaction on Information Theory, 2015. [3] Zheng, Q., Lafferty, J., A convergent gradient descent algorithm for rank minimization and semidefinite programming from random linear measurements. NIPS 2015. [4] Netrapalli, P., Jain, P., Sanghavi, S., Phase retrieval using alternating minimization. NIPS 2013. Many important problems requires solving for low rank SDPs with PSD constraint matrices."}
{"_id": "ab4e3d9ec9bdb389a6077985d328456fc9e699a2", "title": "Information Feedback, Targeting, and Coordination: An Experimental Study", "text": "We experimentally study the role of information targeting and its effect on coordination in a multi-threshold public goods game. To the best of our knowledge, ours is the first to consider this problem. In a lab setting, we consider four treatments, one in which no information is provided and three others that vary in whom we provide the information to: a random sample of subjects; those whose contributions are below the average of their group, and those whose contributions are above the average of their group. We find that random provision of information is no better than providing no information at all. More importantly, the average contributions improve with targeted treatments. Coordination waste is also lower with targeted treatments. The insights from this research may also be relevant to management in contexts such as piracy, teendrinking, among others, where positively or negatively affecting coordination between consumers is of interest."}
{"_id": "b51e6bf2f9dc38f76c55ea2caa185b11833901ee", "title": "Sirolimus-eluting stents versus standard stents in patients with stenosis in a native coronary artery.", "text": "BACKGROUND\nPreliminary reports of studies involving simple coronary lesions indicate that a sirolimus-eluting stent significantly reduces the risk of restenosis after percutaneous coronary revascularization.\n\n\nMETHODS\nWe conducted a randomized, double-blind trial comparing a sirolimus-eluting stent with a standard stent in 1058 patients at 53 centers in the United States who had a newly diagnosed lesion in a native coronary artery. The coronary disease in these patients was complex because of the frequent presence of diabetes (in 26 percent of patients), the high percentage of patients with longer lesions (mean, 14.4 mm), and small vessels (mean, 2.80 mm). The primary end point was failure of the target vessel (a composite of death from cardiac causes, myocardial infarction, and repeated percutaneous or surgical revascularization of the target vessel) within 270 days.\n\n\nRESULTS\nThe rate of failure of the target vessel was reduced from 21.0 percent with a standard stent to 8.6 percent with a sirolimus-eluting stent (P<0.001)--a reduction that was driven largely by a decrease in the frequency of the need for revascularization of the target lesion (16.6 percent in the standard-stent group vs. 4.1 percent in the sirolimus-stent group, P<0.001). The frequency of neointimal hyperplasia within the stent was also decreased in the group that received sirolimus-eluting stents, as assessed by both angiography and intravascular ultrasonography. Subgroup analyses revealed a reduction in the rates of angiographic restenosis and target-lesion revascularization in all subgroups examined.\n\n\nCONCLUSIONS\nIn this randomized clinical trial involving patients with complex coronary lesions, the use of a sirolimus-eluting stent had a consistent treatment effect, reducing the rates of restenosis and associated clinical events in all subgroups analyzed."}
{"_id": "c795c035d563231782711f931e264e629c52de44", "title": "Attention-Based Convolutional Neural Network for Semantic Relation Extraction", "text": "Nowadays, neural networks play an important role in the task of relation classification. In this paper, we propose a novel attention-based convolutional neural network architecture for this task. Our model makes full use of word embedding, part-of-speech tag embedding and position embedding information. Word level attention mechanism is able to better determine which parts of the sentence are most influential with respect to the two entities of interest. This architecture enables learning some important features from task-specific labeled data, forgoing the need for external knowledge such as explicit dependency structures. Experiments on the SemEval-2010 Task 8 benchmark dataset show that our model achieves better performances than several stateof-the-art neural network models and can achieve a competitive performance just with minimal feature engineering."}
{"_id": "6db97354117e760edb8fb073911a30ca471c5690", "title": "Autonomous discovery of the goal space to learn a parameterized skill", "text": "A parameterized skill is a mapping from multiple goals/task parameters to the policy parameters to accomplish them. Existing works in the literature show how a parameterized skill can be learned given a task space that defines all the possible achievable goals. In this work, we focus on tasks defined in terms of final states (goals), and we face on the challenge where the agent aims to autonomously acquire a parameterized skill to manipulate an initially unknown environment. In this case, the task space is not known a priori and the agent has to autonomously discover it. The agent may posit as a task space its whole sensory space (i.e. the space of all possible sensor readings) as the achievable goals will certainly be a subset of this space. However, the space of achievable goals may be a very tiny subspace in relation to the whole sensory space, thus directly using the sensor space as task space exposes the agent to the curse of dimensionality and makes existing autonomous skill acquisition algorithms inefficient. In this work we present an algorithm that actively discovers the manifold of the achievable goals within the sensor space. We validate the algorithm by employing it in multiple different simulated scenarios where the agent actions achieve different types of goals: moving a redundant arm, pushing an object, and changing the color of an object."}
{"_id": "7a4cb5714ccd5212f6c9de9829cb17b77fd5fa99", "title": "Cannabidiol Claims and Misconceptions.", "text": "Once a widely ignored phytocannabinoid, cannabidiol now attracts great therapeutic interest, especially in epilepsy and cancer. As with many rising trends, various myths and misconceptions have accompanied this heightened public interest and intrigue. This forum article examines and attempts to clarify some areas of contention."}
{"_id": "23a317ae53e70094e849f836f9ec21e025903073", "title": "Rethinking Visualization: A High-Level Taxonomy", "text": "We present the novel high-level visualization taxonomy. Our taxonomy classifies visualization algorithms rather than data. Algorithms are categorized based on the assumptions they make about the data being visualized; we call this set of assumptions the design model. Because our taxonomy is based on design models, it is more flexible than existing taxonomies and considers the user's conceptual model, emphasizing the human aspect of visualization. Design models are classified according to whether they are discrete or continuous and by how much the algorithm designer chooses display attributes such as spatialization, timing, colour, and transparency. This novel approach provides an alternative view of the visualization field that helps explain how traditional divisions (e.g., information and scientific visualization) relates and overlap, and that may inspire research ideas in hybrid visualization areas"}
{"_id": "15b9b4e23efe9d1c58217cbe0d3dd154fb65baae", "title": "A Dataset for Quality Assessment of Camera Captured Document Images", "text": "With the proliferation of cameras on mobile devices there is an increased desire to image document pages as an alternative to scanning. However, the quality of captured document images is often lower than its scanned equivalent due to hardware limitations and stability issues. In this context, automatic assessment of the quality of captured images is useful for many applications. Although there has been a lot of work on developing computational methods and creating standard datasets for natural scene image quality assessment, until recently quality estimation of camera captured document images has not been given much attention. One traditional quality indicator for document images is the Optical Character Recognition (OCR) accuracy. In this work, we present a dataset of camera captured document images containing varying levels of focal-blur introduced manually during capture. For each image we obtained the character level OCR accuracy. Our dataset can be used to evaluate methods for predicting OCR quality of captured documents as well as enhancements. In order to make the dataset publicly and freely available, originals from two existing datasets University of Washington dataset and Tobacco Database were selected. We present a case study with three recent methods for predicting the OCR quality of images on our dataset."}
{"_id": "5ac65efcf8db05e8f1f70e09d51f275caa9d0aae", "title": "Convolutional Neural Networks for No-Reference Image Quality Assessment", "text": "In this work we describe a Convolutional Neural Network (CNN) to accurately predict image quality without a reference image. Taking image patches as input, the CNN works in the spatial domain without using hand-crafted features that are employed by most previous methods. The network consists of one convolutional layer with max and min pooling, two fully connected layers and an output node. Within the network structure, feature learning and regression are integrated into one optimization process, which leads to a more effective model for estimating image quality. This approach achieves state of the art performance on the LIVE dataset and shows excellent generalization ability in cross dataset experiments. Further experiments on images with local distortions demonstrate the local quality estimation ability of our CNN, which is rarely reported in previous literature."}
{"_id": "589b1677d6de28c47693c5816c32698860c32d10", "title": "Tri-modal Person Re-identification with RGB, Depth and Thermal Features", "text": "Person re-identification is about recognizing people who have passed by a sensor earlier. Previous work is mainly based on RGB data, but in this work we for the first time present a system where we combine RGB, depth, and thermal data for re-identification purposes. First, from each of the three modalities, we obtain some particular features: from RGB data, we model color information from different regions of the body, from depth data, we compute different soft body biometrics, and from thermal data, we extract local structural information. Then, the three information types are combined in a joined classifier. The tri-modal system is evaluated on a new RGB-D-T dataset, showing successful results in re-identification scenarios."}
{"_id": "3faebe9d5c47fc90998811c4ac768706283d605c", "title": "Semi-Supervised Detection of Extreme Weather Events in Large Climate Datasets", "text": "The detection and identification of extreme weather events in large scale climate simulations is an important problem for risk management, informing governmental policy decisions and advancing our basic understanding of the climate system. Recent work has shown that fully supervised convolutional neural networks (CNNs) can yield acceptable accuracy for classifying well-known types of extreme weather events when large amounts of labeled data are available. However, there are many different types of spatially localized climate patterns of interest (including hurricanes, extra-tropical cyclones, weather fronts, blocking events, etc.) found in simulation data for which labeled data is not available at large scale for all simulations of interest. We present a multichannel spatiotemporal encoder-decoder CNN architecture for semi-supervised bounding box prediction and exploratory data analysis. This architecture is designed to fully model multichannel simulation data, temporal dynamics and unlabelled data within a reconstruction and prediction framework so as to improve the detection of a wide range of extreme weather events. Our architecture can be viewed as a 3D convolutional autoencoder with an additional modified one-pass bounding box regression loss. We demonstrate that our approach is able to leverage temporal information and unlabelled data to improve localization of extreme weather events. Further, we explore the representations learned by our model in order to better understand this important data, and facilitate further work in understanding and mitigating the effects of climate change."}
{"_id": "24aa2f1004aab06000f24d5b650b891d6dc68818", "title": "Implementation of a Practical Distributed Calculation System with Browsers and JavaScript, and Application to Distributed Deep Learning", "text": "Deep learning can achieve outstanding results in various fields. However, it requires so significant computational power that graphics processing units (GPUs) and/or numerous computers are often required for the practical application. We have developed a new distributed calculation framework called \u201dSashimi\u201d that allows any computer to be used as a distribution node only by accessing a website. We have also developed a new JavaScript neural network framework called \u201dSukiyaki\u201d that uses general purpose GPUs with web browsers. Sukiyaki performs 30 times faster than a conventional JavaScript library for deep convolutional neural networks (deep CNNs) learning. The combination of Sashimi and Sukiyaki, as well as new distribution algorithms, demonstrates the distributed deep learning of deep CNNs only with web browsers on various devices. The libraries that comprise the proposed methods are available under MIT license athttp://mil-tokyo.github.io/."}
{"_id": "62d37831e00bc0511480f7cb78874ffb30331382", "title": "Estimating Fingerprint Deformation", "text": "Fingerprint matching is affected by the nonlinear distortion introduced in fingerprint impressions during the image acquisition process. This nonlinear deformation causes fingerprint features such as minutiae points and ridge curves to be distorted in a complex manner. In this paper we develop an average deformation model for a fingerprint impression (baseline impression) by observing its relative distortion with respect to several other impressions of the same finger. The deformation is computed using a Thin Plate Spline (TPS) model that relies on ridge curve correspondences between image pairs. The estimated average deformation is used to distort the minutiae template of the baseline impression prior to matching. An index of deformation has been proposed to select the average deformation model with the least variability corresponding to a finger. Preliminary results indicate that the average deformation model can improve the matching performance of a fingerprint matcher."}
{"_id": "0f32f3a8e401f863899b0ef623a721a0996e7ae3", "title": "Vertical distance from the crest of bone to the height of the interproximal papilla between adjacent implants.", "text": "BACKGROUND\nAs patient demand increases for more natural restorations in the esthetic zone, clinicians must have the highest level of skill and knowledge to maintain or reform the interdental papilla between teeth, between implants and teeth, and between adjacent implants. To date, there are no reports that have measured the distance from the contact point to the bony crest between implants. One reason for this may be the fact that, with two adjacent implants, the contact point of the crown can be established at any distance from the gingival margin according to the restorative dentist's specifications. Therefore, in this study, the height of the soft tissue to the crest of bone was measured between two adjacent implants independent of the location of the contact point. The purpose of this study was to determine the range and average height of tissue between two adjacent implants.\n\n\nMETHODS\nA total of 136 interimplant papillary heights were examined in 33 patients by eight different examiners in five private dental offices. After administration of appropriate local anesthesia, a standardized periodontal probe was placed vertically from the height of the papilla to the crest of bone. The measurements were rounded off to the nearest millimeter.\n\n\nRESULTS\nThe mean height of papillary tissue between two adjacent implants was 3.4 mm, with a range of 1 mm to 7 mm.\n\n\nCONCLUSIONS\nClinicians should proceed with great caution when placing two implants adjacent to each other in the esthetic zone. In most cases, only 2, 3, or 4 mm of soft tissue height (average 3.4 mm) can be expected to form over the interimplant crest of bone. These results showed that modification of treatment plans may be necessary when esthetics are critical for success."}
{"_id": "7d982bdcf768ffd3199be289db2b2f6e55777b12", "title": "Interpolation error compensation method for look-up table based IPMSM drive", "text": "In this paper, interpolation error compensation method for look-up table(LUT) based IPMSM(Interior Permanent Magnet Synchronous Motor) drive is proposed. Generally, the LUT based IPMSM drive is widely used on automotive application because of its reliable current control. However, the LUT data is memorized discreetly, so the linear interpolation method is used for the absent data condition. Because this interpolation method does not follow the PMSM operation trace, the error is unavoidable between the capable operation point and the interpolated output. This paper proposes the compensation method for this interpolation error. Simulation and Experiment are performed for verify the reliability of the proposed control method."}
{"_id": "65a568aa9482c9ad6c21c7444861639432b0efe4", "title": "Image recognition for visually impaired people by sound", "text": "The aim of this paper is to represent a method where a blind person can get information about the shape of an image through speech signal. In this paper we proposed an algorithm for image recognition by speech sound. Blind people face a number of challenges when interacting with their environments because so much information is encoded visually. The proposed method enables the visually impaired people to see with the help of ears. The novelty of this paper is to convert the image to sound using the methodology of edge detection."}
{"_id": "1454c2f35dd729083f7e3715d8e6c4a00e9f55bb", "title": "Dual Decomposition with Many Overlapping Components", "text": "Dual decomposition has been recently proposed as a way of combining complementary models, with a boost in predictive power. However, in cases where lightweight decompositions are not readily available (e.g., due to the presence of rich features or logical constraints), the original subgradient algorithm is inefficient. We sidestep that difficulty by adopting an augmented Lagrangian method that accelerates model consensus by regularizing towards the averaged votes. We show how first-order logical constraints can be handled efficiently, even though the corresponding subproblems are no longer combinatorial, and report experiments in dependency parsing, with state-of-the-art results."}
{"_id": "cf20e34a1402a115523910d2a4243929f6704db1", "title": "AN IDEA BASED ON HONEY BEE SWARM FOR NUMERICAL OPTIMIZATION", "text": ""}
{"_id": "4938e8c8c9ea3d351d283181819af5e5801efbed", "title": "Deep Learning for Event-Driven Stock Prediction", "text": "Neural Tensor Network for Learning Event Embeddings Event Representation E = (O1, P, O2, T), where P is the action, O1 is the actor, O2 is the object and T is the timestamp (T is mainly used for aligning stock data with news data). For example, the event \u201cSep 3, 2013 Microsoft agrees to buy Nokia\u2019s mobile phone business for $7.2 billion.\u201d is modeled as: (Actor = Microsoft, Action = buy, Object = Nokia\u2019s mobile phone business, Time = Sep 3, 2013) Event Embedding"}
{"_id": "b9843426b745ea575276fce3b527a2dc4d2eebb3", "title": "PAMS: A new position-aware multi-sensor dataset for human activity recognition using smartphones", "text": "Nowadays smartphones are ubiquitous in various aspects of our lives. The processing power, communication bandwidth, and the memory capacity of these devices have surged considerably in recent years. Besides, the variety of sensor types, such as accelerometer, gyroscope, humidity sensor, and bio-sensors, which are embedded in these devices, opens a new horizon in self-monitoring of physical daily activities. One of the primary steps for any research in the area of detecting daily life activities is to test a detection method on benchmark datasets. Most of the early datasets limited their work to collecting only a single type of sensor data such as accelerometer data. While some others do not consider age, weight, and gender of the subjects who have participated in collecting their activity data. Finally, part of the previous works collected data without considering the smartphone's position. In this paper, we introduce a new dataset, called Position-Aware Multi-Sensor (PAMS). The dataset contains both accelerometer and gyroscope data. The gyroscope data boosts the accuracy of activity recognition methods as well as enabling them to detect a wider range of activities. We also take the user information into account. Based on the biometric attributes of the participants, a separate learned model is generated to analyze their activities. We concentrate on several major activities, including sitting, standing, walking, running, ascending/descending stairs, and cycling. To evaluate the dataset, we use various classifiers, and the outputs are compared to the WISDM. The results show that using aforementioned classifiers, the average precision for all activities is above 88.5%. Besides, we measure the CPU, memory, and bandwidth usage of the application collecting data on the smartphone."}
{"_id": "55cc17d32cc001a2c36b3156bfa0748daf5d0fb4", "title": "A Corpus of Proverbs Annotated with Metaphors", "text": "Proverbs are commonly metaphoric in nature and the mapping across domains is commonly established in proverbs. The abundance of proverbs in terms of metaphors makes them an extremely valuable linguistic resource since they can be utilized as a gold standard for various metaphor related linguistic tasks such as metaphor identification or interpretation. Besides, a collection of proverbs from various languages annotated with metaphors would also be essential for social scientists to explore the cultural differences between those languages. In this paper, we introduce PROMETHEUS, a dataset consisting of English proverbs and their equivalents in Italian. In addition to the word-level metaphor annotations for each proverb, PROMETHEUS contains other types of information such as the metaphoricity degree of the overall proverb, its meaning, the century that it was first recorded in and a pair of subjective questions responded by the annotators. To the best of our knowledge, this is the first multi-lingual and open-domain corpus of proverbs annotated with word-level metaphors."}
{"_id": "d88fd2ad7c378fff8536e5c2b26034e9917ee8a4", "title": "Taming Google-Scale Continuous Testing", "text": "Growth in Google's code size and feature churn rate has seen increased reliance on continuous integration (CI) and testing to maintain quality. Even with enormous resources dedicated to testing, we are unable to regression test each code change individually, resulting in increased lag time between code check-ins and test result feedback to developers. We report results of a project that aims to reduce this time by: (1) controlling test workload without compromising quality, and (2) distilling test results data to inform developers, while they write code, of the impact of their latest changes on quality. We model, empirically understand, and leverage the correlations that exist between our code, test cases, developers, programming languages, and code-change and test-execution frequencies, to improve our CI and development processes. Our findings show: very few of our tests ever fail, but those that do are generally \"closer\" to the code they test, certain frequently modified code and certain users/tools cause more breakages, and code recently modified by multiple developers (more than 3) breaks more often."}
{"_id": "22d35b27bab295efe9d5a28cdf15ed7c4fbcf25c", "title": "Algorithm Engineering for Parallel Computation", "text": "The emerging discipline of algorithm engineering has primarily focussed on transforming pencil-and-paper sequential algorithms into robust, efficient, well tested, and easily used implementations. As parallel computing becomes ubiquitous, we need to extend algorithm engineering techniques to parallel computation. Such an extension adds significant complications. After a short review of algorithm engineering achievements for sequential computing, we review the various complications caused by parallel computing, present some examples of successful efforts, and give a personal view of possible future research."}
{"_id": "b454d1385696f4629206e5c8bd190bc707ae75b7", "title": "Designing the Future of Personal Fashion", "text": "Advances in computer vision and machine learning are changing the way people dress and buy clothes. Given the vast space of fashion problems, where can data-driven technologies provide the most value? To understand consumer pain points and opportunities for technological interventions, this paper presents the results from two independent need-finding studies that explore the gold-standard of personalized shopping: interacting with a personal stylist. Through interviews with five personal stylists, we study the range of problems they address and their in-person processes for working with clients. In a separate study, we investigate how styling experiences map to online settings by building and releasing a chatbot that connects users to one-on-one sessions with a stylist, acquiring more than 70 organic users in three weeks. These conversations reveal that in-person and online styling sessions share similar goals, but online sessions often involve smaller problems that can be resolved more quickly. Based on these explorations, we propose future highly personalized, online interactions that address consumer trust and uncertainty, and discuss opportunities for automation."}
{"_id": "d59694a277deeb3ea5b4191b21a93b8de24c0f62", "title": "A Secure Low-Cost Edge Device Authentication Scheme for the Internet of Things", "text": "Because of the enhanced capability of adversaries, edge devices of Internet of Things (IoT) infrastructure are now increasingly vulnerable to counterfeiting and piracy. Ensuring the authenticity of such devices is of great concern since an adversary can create a backdoor either to bypass the security, and/or to leak secret information over an unsecured communication channel. The reliability of such devices could also be called into question because they might be counterfeit, defective and/or of inferior quality. It is of prime importance to design and develop solutions for authenticating such edge devices. In this paper, we present a novel low-cost solution for authenticating edge devices. We use SRAM based PUF to generate unique \"digital fingerprints\" for every device, which can be used to generate a unique device ID. We propose a novel ID matching scheme to verify the identity of an edge device even though the PUF is extremely unreliable. We show that the probability of impersonating an ID by an adversary is extremely low. In addition, our proposed solution is resistant to various known attacks."}
{"_id": "47d91765ea424816bd0320161ae3cefcce462ce6", "title": "A Survey of Secure Routing Protocols in Multi-Hop Cellular Networks", "text": "Multi-hop networks are expected to be an important part of 5G mobile systems. In a multi-hop cellular network (MCN), the nodes assist each other in relaying packets towards their destinations. The structures and operations of multi-hop networks make them particularly vulnerable to certain types of attack. Hence, security measures to counter these attacks are necessary. In this paper, we provide an overview of the secure routing protocols for multi-hop networks and classify them into MCN Type-1 and Type-0 categories for device-to-device communications, and the Internet-of-Things category for machine-to-machine communications. Our focus is on the applied cryptographic techniques and the security mechanisms in secure routing protocols. We propose an evaluation framework incorporating different security aspects, vulnerabilities, and levels of deployability to compare a number of secure routing protocols. Moreover, we review the secure routing paths in software-defined networking as a solution for existing challenges in multi-hop networks. Some open research problems are highlighted as possible directions for future studies."}
{"_id": "26aa0aff1ea1baf848a521363cc455044690e090", "title": "A 2d + 3d rich data approach to scene understanding", "text": "On your one-minute walk from the coffee machine to your desk each morning, you pass by dozens of scenes \u2013 a kitchen, an elevator, your office \u2013 and you effortlessly recognize them and perceive their 3D structure. But this one-minute scene-understanding problem has been an open challenge in computer vision since the field was first established 50 years ago. In this dissertation, we aim to rethink the path researchers took over these years, challenge the standard practices and implicit assumptions in the current research, and redefine several basic principles in computational scene understanding. The key idea of this dissertation is that learning from rich data under natural setting is crucial for finding the right representation for scene understanding. First of all, to overcome the limitations of object-centric datasets, we built the Scene Understanding (SUN) Database, a large collection of real-world images that exhaustively spans all scene categories. This scene-centric dataset provides a more natural sample of human visual world, and establishes a realistic benchmark for standard 2D recognition tasks. However, while an image is a 2D array, the world is 3D and our eyes see it from a viewpoint, but this is not traditionally modeled. To obtain a 3D understanding at high-level, we reintroduce geometric figures using modern machinery. To model scene viewpoint, we propose a panoramic place representation to go beyond aperture computer vision and use data that is close to natural input for human visual system. This paradigm shift toward rich representation also opens up new challenges that require a new kind of big data \u2013 data with extra descriptions, namely rich data. Specifically, we focus on a highly valuable kind of rich data \u2013 multiple viewpoints in 3D \u2013 and we build the SUN3D database to obtain an integrated place-centric representation of scenes. We argue for the great importance of modeling the computer\u2019s role as an agent in a 3D scene, and demonstrate the power of place-centric scene representation. Thesis Supervisor: Antonio Torralba Title: Associate Professor"}
{"_id": "bdb806e7a5a1b7982440d74127f8065d6842d7f5", "title": "A Deep Reinforcement Learning Algorithm with Expert Demonstrations and Supervised Loss and its application in Autonomous Driving", "text": "In this paper, we propose a deep reinforcement learning(DRL) algorithm which combines Deep Deterministic Policy Gradient (DDPG) with expert demonstrations and supervised loss for decision making for autonomous driving. Training DRL agent with supervised learning is adopted to accelerate the exploration process and increase the stability. A supervised loss function is introduced in the algorithm to update the actor networks. In addition, reward construction is combined to make the training process more stable and efficient. The proposed algorithm is applied to a popular autonomous driving simulator called TORCS. The experimental results show that the training efficiency and stability are improved by utilizing our algorithm in autonomous driving."}
{"_id": "1b742a290c23c2ed0b3c6c99c3645535882912f6", "title": "Residual Useful Life Prediction for Rolling Element Bearings Based on Multi-feature Fusion Regression", "text": "Residual useful life (RUL) prediction is critical in efficient implementation of condition-based maintenances for rolling element bearings (REBs). A multi-feature fusion regression method is reported for predicting the RUL of REBs in this work. In the proposed approach, locally linear embedding technique is employed to fuse original features for erecting a condition indicator of the REBs. The fused condition indicator is then modeled by an adaptive network-based fuzzy inference system for predicting the RUL. The present approach is applied to experimental data collected from an REB. Experimental results show that the proposed approach exhibits better performance than peer approaches. It is capable of accurately predicting the RUL of the REB."}
{"_id": "5951d6a9d5e2b27c05384204dad7e64f06ee8d2c", "title": "Deep residual coalesced convolutional network for efficient semantic road segmentation", "text": "This paper proposes a deep learning-based efficient and compact solution for road scene segmentation problem, named deep residual coalesced convolutional network (RCC-Net). Initially, the RCC-Net performs dimensionality reduction to compress and extract relevant features, from which it is subsequently delivered to the encoder. The encoder adopts the residual network style for efficient model size. In the core of each residual network, three different convolutional layers are simultaneously coalesced for obtaining broader information. The decoder is then altered to upsample the encoder for pixel-wise mapping from the input images to the segmented output. Experimental results reveal the efficacy of the proposed network over the state-of-the-art methods and its capability to be deployed in an average system."}
{"_id": "7f44973b8cb78be47d55d335f40a54aa00ef814c", "title": "Text-Independent Writer Identification via CNN Features and Joint Bayesian", "text": "This paper proposes a novel method for offline text-independent writer identification by using convolutional neural network (CNN) and joint Bayesian, which consists of two stages, i.e. feature extraction and writer identification. In the stage of feature extraction, since a large number of data is essential to train an effective CNN model with high generalizability and the amount of handwriting is limited in writer identification, a data augmentation technique is first developed to generate thousands of handwriting images for each writer. Then a deep CNN network is designed to extract discriminative features to represent the properties of different writing styles, which is trained by using the generated handwriting images. In the stage of writer identification, the training dataset is used to train the CNN model for feature extraction and the joint Bayesian technique is employed to accomplish the task of writer identification based on the extracted CNN features. The proposed method is tested on two standard benchmark datasets, i.e. ICDAR2013 and CVL dataset. Experimental results demonstrate that the proposed method gets the best performance compared to the state-of-the-art approaches."}
{"_id": "f8cd248abc151a5b972cf7797be94aac0bd4f5e1", "title": "Mental health of hospital consultants: the effects of stress and satisfaction at work", "text": "BACKGROUND\nBurnout and psychiatric morbidity among gastroenterologists, surgeons, radiologists, and oncologists in the UK have been estimated by means of a questionnaire-based survey. The relationship between consultants' mental health and their job stress and satisfaction, as well as their job and demographic characteristics, were also examined.\n\n\nMETHODS\nPsychiatric morbidity was estimated using the 12-item General Health Questionnaire. The three components of burnout-emotional exhaustion, depersonalization, and low personal accomplishment-were assessed using the Maslach Burnout Inventory. Job stress and satisfaction were measured using study-specific questions.\n\n\nFINDINGS\nOf 1133 consultants, 882 (78%) returned questionnaires. The estimated prevalence of psychiatric morbidity was 27%, with no significant differences between the four specialist groups. Radiologists reported the highest level of burnout in terms of low personal accomplishment. Job satisfaction significantly protected consultants' mental health against job stress. Three sources of stress were associated with both burnout and psychiatric morbidity; feeling overloaded, and its effect on home life; feeling poorly managed and resourced; and dealing with patients' suffering. Burnout was also associated with low satisfaction in three domains: relationships with patients, relatives and staff; professional status/esteem; intellectual stimulation. In addition, being aged 55 years or less and being single were independent risk factors for burnout. Burnout was also more prevalent among consultants who felt insufficiently trained in communication and management skills.\n\n\nINTERPRETATION\nConsultants' mental health is likely to be protected against the high demands of medical practice by maintaining or enhancing job satisfaction, and by providing training in communication and management skills."}
{"_id": "7fdf8e2393ee280c9a3aeb94aae375411358e45a", "title": "Real-time traffic sign recognition from video by class-specific discriminative features", "text": "Article history: Received 1 August 2007 Received in revised form 22 May 2009 Accepted 26 May 2009"}
{"_id": "28e702e1a352854cf0748b9a6a9ad6679b1d4e83", "title": "Progressive skyline computation in database systems", "text": "The skyline of a d-dimensional dataset contains the points that are not dominated by any other point on all dimensions. Skyline computation has recently received considerable attention in the database community, especially for progressive methods that can quickly return the initial results without reading the entire database. All the existing algorithms, however, have some serious shortcomings which limit their applicability in practice. In this article we develop branch-and-bound skyline (BBS), an algorithm based on nearest-neighbor search, which is I/O optimal, that is, it performs a single access only to those nodes that may contain skyline points. BBS is simple to implement and supports all types of progressive processing (e.g., user preferences, arbitrary dimensionality, etc). Furthermore, we propose several interesting variations of skyline computation, and show how BBS can be applied for their efficient processing."}
{"_id": "5dbe84872b42cb6167d3b601fdf68b4eb2d7f5d9", "title": "The Skyline Operator", "text": "We propose to extend database systems by a Skyline operation. This operation $filters out a set of interesting points from a potentially large set of data points. A point is interesting if it is not dominated by any other point. For example, a hotel might be interesting for somebody traveling to Nassau if no other hotel is both cheaper and closer to the beach. We show how SQL can be extended to pose Skyline queries, present and evaluate alternative algorithms to implement the Skyline operation, and show how this operation can be combined with other database operations, e.g., join."}
{"_id": "1548a4a9dedede835a6c095206eb27f3fdb8394f", "title": "PEDOT:PSS \"Wires\" Printed on Textile for Wearable Electronics.", "text": "Herein, the fabrication of all-organic conductive wires is demonstrated by utilizing patterning techniques such as inkjet printing and sponge stencil to apply poly(3,4-ethylenedioxythiophene) polystyrenesulfonate (PEDOT:PSS) onto nonwoven polyethylene terephthalate (PET) fabric. The coating of the conducting polymer is only present on the surface of the substrate (penetration depth \u223c 200 \u03bcm) to retain the functionality and wearability of the textile. The wires fabricated by different patterning techniques provide a wide range of resistance, i.e., tens of k\u03a9/\u25a1 to less than 2 \u03a9/\u25a1 that allows the resistance to be tailored to a specific application. The sheet resistance is measured to be as low as 1.6 \u03a9/\u25a1, and the breakdown current is as high as 0.37 A for a 1 mm wide line. The specific breakdown current exceeds the previously reported values of macroscopic carbon nanotube based materials. Simple circuits composed of the printed wires are demonstrated, and resistance of the circuit from the measurement agrees with the calculated value based on Kirchhoff's rules. Additionally, the printed PEDOT:PSS wires show less than 6.2% change in sheet resistance after three washing and drying cycles using detergent."}
{"_id": "fdb96b838c139db95d78f205625b5bcfbf89e7b1", "title": "A neurocognitive approach to understanding the neurobiology of addiction", "text": "Recent concepts of addiction to drugs (e.g. cocaine) and non-drugs (e.g. gambling) have proposed that these behaviors are the product of an imbalance between three separate, but interacting, neural systems: an impulsive, largely amygdala-striatum dependent, neural system that promotes automatic, habitual and salient behaviors; a reflective, mainly prefrontal cortex dependent, neural system for decision-making, forecasting the future consequences of a behavior, and inhibitory control; and the insula that integrates interoception states into conscious feelings and into decision-making processes that are involved in uncertain risk and reward. These systems account for poor decision-making (i.e. prioritizing short-term consequences of a decisional option) leading to more elevated addiction risk and relapse. This article provides neural evidence for this three-systems neural model of addiction."}
{"_id": "1d96ff7c3059cd121ccee0533c923b27d258c90a", "title": "Analysis of Word Embeddings and Sequence Features for Clinical Information Extraction", "text": "This study investigates the use of unsupervised features derived from word embedding approaches and novel sequence representation approaches for improving clinical information extraction systems. Our results corroborate previous findings that indicate that the use of word embeddings significantly improve the effectiveness of concept extraction models; however, we further determine the influence that the corpora used to generate such features have. We also demonstrate the promise of sequence-based unsupervised features for further improving concept extraction."}
{"_id": "a2e738c4107a8d123a6be42d34c02b9f9939b50d", "title": "Fog computing: Security issues, solutions and robust practices", "text": "Fog Computing (FC) has extended the services of cloud computing to the edge of the network. It inherits some of the characteristics from cloud computing but FC also have some distinguished features such as geo-distribution, location awareness and low latency. Along with the inherited characteristics, it also inherits the issues and problems of cloud computing like energy efficiency, resource management and security issues. This paper presents the critical analysis of the fog architecture with respect to security. The state of the art work done since 2012 is critical analyzed on the bases of security techniques and security threats. We grouped the existing security techniques on the bases of security goals achieved by each. It will provide a clear and comprehensive distinction between the security areas explored and those, which still need researchers' attention."}
{"_id": "e2e9eb8d0ac182b9db04a4fa833ee078e04a10c3", "title": "Specification Inference from Demonstrations", "text": "Learning from expert demonstrations has received a lot of attention in artificial intelligence and machine learning. The goal is to infer the underlying reward function that an agent is optimizing given a set of observations of the agent\u2019s behavior over time in a variety of circumstances, the system state trajectories, and a plant model specifying the evolution of the system state for different agent\u2019s actions. The system is often modeled as a Markov decision process (Puterman 2014), that is, the next state depends only on the current state and agent\u2019s action, and the the agent\u2019s choice of action depends only on the current state. While the former is a Markovian assumption on the evolution of system state, the later assumes that the target reward function is itself Markovian. In this work, we explore learning a class of non-Markovian reward functions, known in the formal methods literature as specifications. These specifications offer better composition, transferability, and interpretability. We then show that inferring the specification can be done efficiently without unrolling the transition system. We demonstrate on a 2-d grid world example."}
{"_id": "0da677ac81956a3bdb28605616f6268ad1c31381", "title": "Comparative study of trust and reputation systems for wireless sensor networks", "text": "Wireless sensor networks (WSNs) are emerging as useful technology for information extraction from the surrounding environment by using numerous small-sized sensor nodes that are mostly deployed in sensitive, unattended, and (sometimes) hostile territories. Traditional cryptographic approaches are widely used to provide security in WSN. However, because of unattended and insecure deployment, a sensor node may be physically captured by an adversary who may acquire the underlying secret keys, or a subset thereof, to access the critical data and/or other nodes present in the network. Moreover, a node may not properly operate because of insufficient resources or problems in the network link. In recent years, the basic ideas of trust and reputation have been applied to WSNs to monitor the changing behaviors of nodes in a network. Several trust and reputation monitoring (TRM) systems have been proposed, to integrate the concepts of trust in networks as an additional security measure, and various surveys are conducted on the aforementioned system. However, the existing surveys lack a comprehensive discussion on trust application specific to the WSNs. This survey attempts to provide a thorough understanding of trust and reputation as well as their applications in the context of WSNs. The survey discusses the components required to build a TRM and the trust computation phases explained with a study of various security attacks. The study investigates the recent advances in TRMs and includes a concise comparison of various TRMs. Finally, a discussion on open issues and challenges in the implementation of trust-based systems is also presented. Copyright \u00a9 2012 John Wiley & Sons, Ltd."}
{"_id": "a73c4c222a03fcaca869623ea1d0e8b617416742", "title": "Cognitive signals for brain-machine interfaces in posterior parietal cortex include continuous 3D trajectory commands.", "text": "Cortical neural prosthetics extract command signals from the brain with the goal to restore function in paralyzed or amputated patients. Continuous control signals can be extracted from the motor cortical areas, whereas neural activity from posterior parietal cortex (PPC) can be used to decode cognitive variables related to the goals of movement. Because typical activities of daily living comprise both continuous control tasks such as reaching, and tasks benefiting from discrete control such as typing on a keyboard, availability of both signals simultaneously would promise significant increases in performance and versatility. Here, we show that PPC can provide 3D hand trajectory information under natural conditions that would be encountered for prosthetic applications, thus allowing simultaneous extraction of continuous and discrete signals without requiring multisite surgical implants. We found that limb movements can be decoded robustly and with high accuracy from a small population of neural units under free gaze in a complex 3D point-to-point reaching task. Both animals' brain-control performance improved rapidly with practice, resulting in faster target acquisition and increasing accuracy. These findings disprove the notion that the motor cortical areas are the only candidate areas for continuous prosthetic command signals and, rather, suggests that PPC can provide equally useful trajectory signals in addition to discrete, cognitive variables. Hybrid use of continuous and discrete signals from PPC may enable a new generation of neural prostheses providing superior performance and additional flexibility in addressing individual patient needs."}
{"_id": "1f5dc8bee4f9f34d0708558498e706fefd2ab6c1", "title": "InverseRenderNet: Learning single image inverse rendering", "text": "We show how to train a fully convolutional neural network to perform inverse rendering from a single, uncontrolled image. The network takes an RGB image as input, regresses albedo and normal maps from which we compute lighting coefficients. Our network is trained using large uncontrolled image collections without ground truth. By incorporating a differentiable renderer, our network can learn from self-supervision. Since the problem is ill-posed we introduce additional supervision: 1. We learn a statistical natural illumination prior, 2. Our key insight is to perform offline multiview stereo (MVS) on images containing rich illumination variation. From the MVS pose and depth maps, we can cross project between overlapping views such that Siamese training can be used to ensure consistent estimation of photometric invariants. MVS depth also provides direct coarse supervision for normal map estimation. We believe this is the first attempt to use MVS supervision for learning inverse rendering."}
{"_id": "42d3357e13896b8075e4dd317421f1d4fadcffee", "title": "Design of ultra high-speed CMOS CML buffers and latches", "text": "A comprehensive study of ultra high-speed current-mode logic (CML) buffers and regenerative CML latches will be illustrated. A new design procedure to systematically design a chain of tapered CML buffers is proposed. Next, a new 20GHz regenerative latch circuit will be introduced. Experimental results show a higher performance for the new latch architecture compared to a conventional CML latch circuit at ultra high-frequencies. It is also shown, both through the experiments and by using efficient analytical models, why CML buffers are better than CMOS inverters in high-speed low-voltage applications."}
{"_id": "a88950630aeecf3043502c72969cd17cd46e0825", "title": "Uniform Robust Scale-Invariant Feature Matching for Optical Remote Sensing Images", "text": "Extracting well-distributed, reliable, and precisely aligned point pairs for accurate image registration is a difficult task, particularly for multisource remote sensing images that have significant illumination, rotation, and scene differences. The scale-invariant feature transform (SIFT) approach, as a well-known feature-based image matching algorithm, has been successfully applied in a number of automatic registration of remote sensing images. Regardless of its distinctiveness and robustness, the SIFT algorithm suffers from some problems in the quality, quantity, and distribution of extracted features particularly in multisource remote sensing imageries. In this paper, an improved SIFT algorithm is introduced that is fully automated and applicable to various kinds of optical remote sensing images, even with those that are five times the difference in scale. The main key of the proposed approach is a selection strategy of SIFT features in the full distribution of location and scale where the feature qualities are quarantined based on the stability and distinctiveness constraints. Then, the extracted features are introduced to an initial cross-matching process followed by a consistency check in the projective transformation model. Comprehensive evaluation of efficiency, distribution quality, and positional accuracy of the extracted point pairs proves the capabilities of the proposed matching algorithm on a variety of optical remote sensing images."}
{"_id": "137a9ba6deee9ee6beda4cd2b0ccd93f7088b047", "title": "Parallelizing Exploration-Exploitation Tradeoffs with Gaussian Process Bandit Optimization", "text": "Can one parallelize complex exploration\u2013 exploitation tradeoffs? As an example, consider the problem of optimal highthroughput experimental design, where we wish to sequentially design batches of experiments in order to simultaneously learn a surrogate function mapping stimulus to response and identify the maximum of the function. We formalize the task as a multiarmed bandit problem, where the unknown payoff function is sampled from a Gaussian process (GP), and instead of a single arm, in each round we pull a batch of several arms in parallel. We develop GP-BUCB, a principled algorithm for choosing batches, based on the GP-UCB algorithm for sequential GP optimization. We prove a surprising result; as compared to the sequential approach, the cumulative regret of the parallel algorithm only increases by a constant factor independent of the batch size B. Our results provide rigorous theoretical support for exploiting parallelism in Bayesian global optimization. We demonstrate the effectiveness of our approach on two real-world applications."}
{"_id": "40ca6ed9d60f1cc538801496ded50f66002f85a8", "title": "Using Computer Programming to Enhance Science Learning for 5th Graders in Taipei", "text": "The purpose of this study is to evaluate the effectiveness of using Scratch Programming in science learning for 5th graders in Taipei. A quasi-experimental design with a single group was used in this study. 96 fifth graders at an elementary school in Taipei participated in 15 weeks of Scratch Programming activities to enrich computer learning during their science learning. Three research instruments were used in this study, including a logical thinking test, a problem solving test, and a learning questionnaire. The results were subjected to t-test and frequency analysis. The findings of this study include (1) the outcomes of the logical thinking test and the problem solving test indicate that students had better performance in logical thinking and problem solving, (2) feedback from the study questionnaire indicates that more than 71.4% of students favored using Scratch Programming in their computer learning, and more than 58.4% of students favored designing Scratch Programming in their science learning, (3) more than 54.7% of students reported that they are willing to do more similar computer programming projects for other science units in the future. In view of the above findings, this study concludes that Scratch Programming is effective for 5th graders in Taipei."}
{"_id": "85927fb622a699bc7683acae0044d37fbe6991c0", "title": "New E-Commerce User Interest Patterns", "text": "The number of online purchases is increasing constantly. Companies have recognized the related opportunities and they are using online channels progressively. In order to acquire potential customers, companies often try to gain a better understanding through the use of web analytics. One of the most useful sources are web log files. Basically, these provide an abundance of important information about the user behavior on a website, such as the path or access time. Mining this so-called clickstream data in the most comprehensive way has become an important task in order to predict the behavior of online customers, optimize webpages, and give personalized recommendations. As the number of customers constantly rises, the volume of the generated data log files also increases, both in terms of size and quantity. Thus, for certain companies, the currently used technologies are no longer sufficient. In this work, a comprehensive workflow will be proposed using a clustering algorithm in a Hadoop ecosystem to investigate user interest patterns. The complete workflow will be demonstrated on an application scenario of one of the largest business-to-business (B2B) electronic commerce websites in Germany. Furthermore, an experimental evaluation method will be applied to verify the applicability and efficiency of the used algorithm, along with the associated framework."}
{"_id": "7ecc89ee1c0815101eb5aea27a59fd23d4f2237c", "title": "S-CNN-BASED SHIP DETECTION FROM HIGH-RESOLUTION REMOTE SENSING IMAGES", "text": "Reliable ship detection plays an important role in both military and civil fields. However, it makes the task difficult with high-resolution remote sensing images with complex background and various types of ships with different poses, shapes and scales. Related works mostly used gray and shape features to detect ships, which obtain results with poor robustness and efficiency. To detect ships more automatically and robustly, we propose a novel ship detection method based on the convolutional neural networks (CNNs), called SCNN, fed with specifically designed proposals extracted from the ship model combined with an improved saliency detection method. Firstly we creatively propose two ship models, the \u201cV\u201d ship head model and the \u201c ||\u201d ship body one, to localize the ship proposals from the line segments extracted from a test image. Next, for offshore ships with relatively small sizes, which cannot be efficiently picked out by the ship models due to the lack of reliable line segments, we propose an improved saliency detection method to find these proposals. Therefore, these two kinds of ship proposals are fed to the trained CNN for robust and efficient detection. Experimental results on a large amount of representative remote sensing images with different kinds of ships with varied poses, shapes and scales demonstrate the efficiency and robustness of our proposed S-CNN-Based ship detector."}
{"_id": "10fe1adf25ee4bfc6acd93d1f978a57ceade86bc", "title": "A requirements monitoring framework for enterprise systems", "text": "Requirements compliant software is becoming a necessity. Fewer and fewer organizations will run their critical transactions on software that has no visible relationship to its requirements. Businesses wish to see their software being consistent with their policies. Moreover, partnership agreements are pressuring less mature organizations to improve their systems. Businesses that rely on web services, for example, are vulnerable to the problems of their web service providers. While electronic commerce has increased the speed of on-line transactions, the technology for monitoring requirements compliance\u2014especially for transactions\u2014has lagged behind. To address the requirements monitoring problem for enterprise information systems, we integrate techniques for requirements analysis and software execution monitoring. Our framework assists analysts in the development of requirements monitors for enterprise services. The deployed system raises alerts when services succeed or fail to satisfy their specified requirements, thereby making software requirements visible. The framework usage is demonstrated with an analysis of ebXML marketplace specifications. An analyst applies goal analysis to discover potential service obstacles, and then derives requirements monitors and a distributed monitoring system. Once deployed, the monitoring system provides alerts when obstacles occur. A summary of the framework implementation is presented, along with analysis of two monitor component implementations. We conclude that the approach implemented in the framework, called ReqMon, provides real-time feedback on requirements satisfaction, and thereby provides visibility into requirements compliance of enterprise information systems."}
{"_id": "f5df61effe8047eb9ea1702cfcc268dbba678567", "title": "Differential Dataflow", "text": "Existing computational models for processing continuously changing input data are unable to efficiently support iterative queries except in limited special cases. This makes it difficult to perform complex tasks, such as social-graph analysis on changing data at interactive timescales, which would greatly benefit those analyzing the behavior of services like Twitter. In this paper we introduce a new model called differential computation, which extends traditional incremental computation to allow arbitrarily nested iteration, and explain\u2014with reference to a publicly available prototype system called Naiad\u2014how differential computation can be efficiently implemented in the context of a declarative dataparallel dataflow language. The resulting system makes it easy to program previously intractable algorithms such as incrementally updated strongly connected components, and integrate them with data transformation operations to obtain practically relevant insights from real data streams."}
{"_id": "20598061fb258a575e096ac470a6bcb854b66a7a", "title": "ioCane: A Smart-Phone and Sensor-Augmented Mobility Aid for the Blind", "text": "We present the design, implementation, and early results of ioCane, a mobility aid for blind cane users that uses detachable cane-mounted ultrasonic sensors connected to a circuit board to send contextual data wirelessly to an Android phone application. The system uses the built-in mobile phone modalities of vibrations and chimes to alert the user to object height and proximity. We believe this plug-and-play solution for visually impaired users has the potential to enhance user mobility and object avoidance with a minimal learning curve. A pilot study testing the performance of the ioCane with blind cane users showed a 47.3% improvement in obstacle avoidance. To our knowledge, the ioCane is the first sensor-based mobility assistance system to integrate natively with a mobile phone without any modifications to the phone or the system, as well as the first system of its kind to be evaluated by actual visually-impaired cane users. ACM Classification"}
{"_id": "5566a38ab58a169d96534752594c884ce91cab52", "title": "\"Soft\" Exoskeletons for Upper and Lower Body Rehabilitation - Design, Control and Testing", "text": "The basic concepts for exoskeletal systems have been suggested for some time with applications ranging from construction, manufacturing and mining to rescue and emergency services. In recent years, research has been driven by possible uses in medical/rehabilitation and military applications. Yet there are still significant barriers to the effective use and exploitation of this technology. Among the most pertinent of these factors is the power and actuation system and its impact of control, strength, speed and, perhaps most critically, safety. This work describes the design, construction and testing of an ultra low-mass, full-body exoskeleton system having seven degrees of freedom (DOFs) for the upper limbs and five degrees of freedom (DOFs) for each of the lower limbs. This low mass is primarily due to the use of a new range of pneumatic muscle actuators as the power source for the system. The work presented will show how the system takes advantage of the inherent controllable compliance to produce a unit that is powerful, providing a wide range of functionality (motion and forces over an extended range) in a manner that has high safety integrity for the user. The general layout of both the upper and the lower body exoskeleton is presented together with results from preliminary experiments to demonstrate the potential of the device in limb retraining, rehabilitation and power assist (augmentation) operations."}
{"_id": "74876176f70ed354c5e4f7d273a966e32f9d1252", "title": "Synthetic Occlusion Augmentation with Volumetric Heatmaps for the 2018 ECCV PoseTrack Challenge on 3D Human Pose Estimation", "text": "In this paper we present our winning entry at the 2018 ECCV PoseTrack Challenge on 3D human pose estimation. Using a fully-convolutional backbone architecture, we obtain volumetric heatmaps per body joint, which we convert to coordinates using soft-argmax. Absolute person center depth is estimated by a 1D heatmap prediction head. The coordinates are back-projected to 3D camera space, where we minimize the L1 loss. Key to our good results is the training data augmentation with randomly placed occluders from the Pascal VOC dataset. In addition to reaching first place in the Challenge, our method also surpasses the state-of-the-art on the full Human3.6M benchmark when considering methods that use no extra pose datasets in training. Code for applying synthetic occlusions is availabe at https://github.com/ isarandi/synthetic-occlusion."}
{"_id": "bfab1ad6f4179bb1d75487747f060c785c04e4aa", "title": "The effect of site quality on repurchase intention in Internet shopping through mediating variables: The case of university students in South Korea", "text": "We performed a study to determine the influence that site quality has on repurchase intention of Internet shopping through customer satisfaction, customer trust, and customer commitment. Appropriate measures were developed and tested on 230 university students of Gyeongnam province in South Korea with a cross-sectional questionnaire survey. The results of the empirical analysis confirmed that site quality can be conceptualized as a composite of six dimensions of shopping convenience, site design, information usefulness, transaction security, payment system, and customer communication. Second, site quality positively affected customer satisfaction and customer trust, but did not affect customer commitment and repurchase intention. Third, site quality can affect repurchase intention by enhancing or attenuating customer satisfaction, customer trust, and customer commitment in online transaction situation. The mediating effect of customer satisfaction, customer trust, and customer commitment between site quality and repurchase intention is identified. Fourth, site quality indirectly affected customer commitment through customer satisfaction. Customer satisfaction indirectly affected repurchase intention through customer trust and customer commitment. Thus, it is found that site quality can be a very important factor to enhance repurchase intention in the customer perspective. \u00a9 2013 Elsevier Ltd. All rights reserved."}
{"_id": "5378b70429877b9e0687878a905e66ba479391a1", "title": "DocTag2Vec: An Embedding Based Multi-label Learning Approach for Document Tagging", "text": "Tagging news articles or blog posts with relevant tags from a collection of predefined ones is coined as document tagging in this work. Accurate tagging of articles can benefit several downstream applications such as recommendation and search. In this work, we propose a novel yet simple approach called DocTag2Vec to accomplish this task. We substantially extend Word2Vec and Doc2Vec\u2014two popular models for learning distributed representation of words and documents. In DocTag2Vec, we simultaneously learn the representation of words, documents, and tags in a joint vector space during training, and employ the simple k-nearest neighbor search to predict tags for unseen documents. In contrast to previous multi-label learning methods, DocTag2Vec directly deals with raw text instead of provided feature vector, and in addition, enjoys advantages like the learning of tag representation, and the ability of handling newly created tags. To demonstrate the effectiveness of our approach, we conduct experiments on several datasets and show promising results against state-of-the-art methods."}
{"_id": "0633015006fd8088c9089a94848ef3f21ac3881c", "title": "QUASY: Quantitative Synthesis Tool", "text": "We present the tool Q UASY, a quantitative synthesis tool. Q UASY takes qualitative and quantitative specifications and automatic ally onstructs a system that satisfies the qualitative specification and optimizes the qu antitative specification, if such a system exists. The user can choose between a system that satisfies and optimi zes the specifications (a) under all possible environment behaviors or (b) under th most-likely environment behaviors given as a probability distribution on the po ssible input sequences. QUASY solves these two quantitative synthesis problems by reduct ion to instances of 2-player games and Markov Decision Processes (MDPs) with qua ntitative winning objectives. QUASY can also be seen as a game solver for quantitative games. Most notable, it can solve lexicographic mean-payoff games with 2 players, MDPs with meanpayoff objectives, and ergodic MDPs with mean-payoff parit y objectives."}
{"_id": "1199c44a5834cce889e52ba128068b4d2224b067", "title": "Gist: A Solver for Probabilistic Games", "text": "Gist is a tool that (a) solves the qualitative analysis problem of turn-based probabilistic games with \u03c9-regular objectives; and (b) synthesizes reasonable environment assumptions for synthesis of unrealizable specifications. Our tool provides the first and efficient implementations of several reduction-based techniques to solve turn-based probabilistic games, and uses the analysis of turn-based probabilistic games for synthesizing environment assumptions for unrealizable specifications."}
{"_id": "38c6ff59aadeab427024b62f69c8f41d68e7cb37", "title": "Playing Stochastic Games Precisely", "text": "We study stochastic two-player games where the goal of one player is to achieve precisely a given expected value of the objective function, while the goal of the opponent is the opposite. Potential applications for such games include controller synthesis problems where the optimisation objective is to maximise or minimise a given payoff function while respecting a strict upper or lower bound, respectively. We consider a number of objective functions including reachability, \u03c9-regular, discounted reward, and total reward. We show that precise value games are not determined, and compare the memory requirements for winning strategies. For stopping games we establish necessary and sufficient conditions for the existence of a winning strategy of the controller for a large class of functions, as well as provide the constructions of compact strategies for the studied objectives."}
{"_id": "5b9631561a89a3e071d8ec386a616a120220bfd9", "title": "PRISM 4.0: Verification of Probabilistic Real-Time Systems", "text": "This paper describes a major new release of the PRISM probabilistic model checker, adding, in particular, quantitative verification of (priced) probabilistic timed automata. These model systems exhibiting probabilistic, nondeterministic and real-time characteristics. In many application domains, all three aspects are essential; this includes, for example, embedded controllers in automotive or avionic systems, wireless communication protocols such as Bluetooth or Zigbee, and randomised security protocols. PRISM, which is open-source, also contains several new components that are of independent use. These include: an extensible toolkit for building, verifying and refining abstractions of probabilistic models; an explicit-state probabilistic model checking library; a discrete-event simulation engine for statistical model checking; support for generation of optimal adversaries/strategies; and a benchmark suite."}
{"_id": "8a6818f092710d5b03bd713bf74059346cff1678", "title": "The Ins and Outs of the Probabilistic Model Checker MRMC", "text": "The Markov Reward Model Checker (MRMC) is a software tool for verifying properties over probabilistic models. It supports PCTL and CSL model checking, and their reward extensions. Distinguishing features of MRMC are its support for computing timeand reward-bounded reachability probabilities, (property-driven) bisimulation minimization, and precise on-the-fly steady-state detection. Recent tool features include time-bounded reachability analysis for uniform CTMDPs and CSL model checking by discrete-event simulation. This paper presents the tool\u2019s current status and its implementation details."}
{"_id": "4c4d6fec361b83b4f3575aa78612010de94d2fc7", "title": "Connectivity in a citation network: The development of DNA theory", "text": "The study of citation networks for both articles and journals is routine. In general, these analyses proceed by considering the similarity of articles or journals and submitting the set of similarity measures to some clustering or scaling procedure. Two common methods are found in bibliomettic coupling, where two citing articles are similar to the extent they cite the same literature, and co-citation analysis where cited articles are similar to the extent they are cited by the same citing articles. Methods based on structural and regular equivalence also seek to partition the article based on their positional location. Such methods have in common focus on the articles and partitions of them. We propose a quite different approach where the connective threads through a network are preserved and the focus is on the links in the network rather than on the nodes. Variants of the depth first search algorithm are used to detect and represent the mainstream of the literature of a clearly delineated area of scientific research. The specific citation network is one that consists of ties among the key events and papers that lead to the discovery and modeling of DNA together with the final experimental confirmation of its representation."}
{"_id": "a39ad25c7df407ef71d9ea33c750641853d0ead3", "title": "Semantic similarity based evaluation for C programs through the use of symbolic execution", "text": "Automatic grading of programs has existed in various fields for many years ago. Within this paper, we propose a method for evaluating C programs. Two approaches are distinguished in this context: static and dynamic analysis methods. Unlike the dynamic analysis that requires an executable program to be evaluated, static analysis could evaluate a program even if it is not totally correct. The proposed method is based on static analysis of programs. It consists of comparing the evaluated program with the evaluator-provided program through their Control Flow Graphs. Here, the great challenge is to deal with the multiplicity of solutions that exists for the same programming problem. As a solution to this weakness, we propose an innovative similarity measure that compares two programs according to their semantic executions. In fact, the evaluated program is compared to the evaluator-provided program called model program by using the symbolic execution technique. The experimentations presented in this work are performed by using a basic implementation of the proposed method. The obtained results reveal a promising realization in the field of automated evaluation of programs. They also show that the proposed method guarantees a considerable approximation to the human program evaluation."}
{"_id": "73209d66b1894ab5ce25ba70e1e39ed47dbbc11e", "title": "Topic Models for Image Annotation and Text Illustration", "text": "Image annotation, the task of automatically generating description words for a picture, is a key component in various image search and retrieval applications. Creating image databases for model development is, however, costly and time consuming, since the keywords must be hand-coded and the process repeated for new collections. In this work we exploit the vast resource of images and documents available on the web for developing image annotation models without any human involvement. We describe a probabilistic model based on the assumption that images and their co-occurring textual data are generated by mixtures of latent topics. We show that this model outperforms previously proposed approaches when applied to image annotation and the related task of text illustration despite the noisy nature of our dataset."}
{"_id": "843b208402a8c52883e7e96d3a5d218fda39e541", "title": "Home telehealth - Current state and future trends", "text": "OBJECTIVE\nThe purpose of this paper is to give an overview about the state of the art in research on home telehealth in an international perspective.\n\n\nMETHOD\nThe study is based on a review of the scientific literature published between 1990 and 2003 and retrieved via Medline in January/February 2004. All together, the abstracts of 578 publications have been analyzed.\n\n\nRESULTS\nThe majority of publications (44%) comes from the United States, followed by UK and Japan. Most publications deal with vital sign parameter (VSP) measurement and audio/video consultations (\"virtual visits\"). Publications about IT tools for improved information access and communication as well as decision support for staff, patients and relatives are relatively sparse. Clinical application domains are mainly chronic diseases, the elderly population and paediatrics.\n\n\nCONCLUSIONS\nInternationally, we observe a trend towards tools and services not only for professionals but also for patients and citizens. However, their impact on the patient-provider relationship and their design for special user groups, such as elderly and/or disabled needs to be further explored. In general, evaluation studies are rare and further research is critical to determine the impacts and benefits, and limitations, of potential solutions and to overcome a number of hinders and restrictions, such as - the lack of standards to combine incompatible information systems; - the lack of an evaluation framework considering legal, ethical, organisational, clinical, usability and technical aspects; - the lack of proper guidelines for practical implementation of home telehealth solutions."}
{"_id": "c53a27f77d89badc5457391c297eefb415d49861", "title": "Intelligent stock data prediction using predictive data mining techniques", "text": "Cloud computing is the one of the admired paradigms of current era, which facilitates the users with on demand services and pay as you use services. It has tremendous applications in almost every sphere such as education, gaming, social networking, transportation, medical, business, stock market, pattern matching, etc. Stock market is such an industry where lots of data is generated and benefits are reaped on the basis of accurate prediction. So prediction is a vital part of stock market. Therefore, an attempt is being made to predict the stock market based on the given data set of stock market along with some features; using the techniques available for predictive data mining. Machine learning is one of the upcoming trends of data mining; hence few machine learning algorithms have been used such as Decision tree, Linear model, Random forest and further their results have been compared using the classification evaluation parameters such as H, AUC, ROC, TPR, FPR, etc. Random forest have been consider as the most effective model as it yield the highest accuracy of 54.12% whereas decision tree and linear model gives the accuracy of 51.87% and 52.83% respectively."}
{"_id": "df0ccc8d04b29fed302eeb11bb82712bfde0cfb5", "title": "New VF-power system architecture and evaluation for future aircraft", "text": "Conventional aircraft power system is a constant-frequency (CF) supply based on mechanical-regulated constant-speed mechanism that has relatively low efficiency. Replacing the CF system with variable-frequency (VF) power improves overall efficiency, and reduce the system's weight and volume. However, this creates a new tier of requirements and design challenges. Novel VF-power architecture is developed with minimization of the power losses throughout the stages of power conversions. Optimal partitioning and grouping of onboard AC loads has been discussed with specific system data. New VF-input multi-functional power converters are also briefly discussed."}
{"_id": "7d97e323bbcacac3d3b38fb4a945abc5d795d51a", "title": "A State-of-the-Art of Semantic Change Computation", "text": "This paper reviews state-of-the-art of one emerging field in computational linguistics \u2014 semantic change computation, proposing a framework that summarizes the literature by identifying and expounding five essential components in the field: diachronic corpus, diachronic word sense characterization, change modelling, evaluation data and data visualization. Despite the potential of the field, the review shows that current studies are mainly focused on testifying hypotheses proposed in theoretical linguistics and that several core issues remain to be solved: the need for diachronic corpora of languages other than English, the need for comprehensive evaluation data for evaluation, the comparison and construction of approaches to diachronic word sense characterization and change modelling, and further exploration of data visualization techniques for hypothesis"}
{"_id": "5226b9ea9cb26f13681a7d09bc2c1de9deed848e", "title": "A DC-90 GHz 4-Vpp differential linear driver in a 0.13 \u03bcm SiGe:C BiCMOS technology for optical modulators", "text": "In this paper, a linear driver for optical modulators in a 0.13 \u03bcm SiGe:C BiCMOS technology with fT/fmax of 300/500 GHz is presented. The driver is implemented following a distributed amplifier topology in a differential manner. In a 50-\u03a9 environment, the circuit delivers a maximum differential output amplitude of 4 Vpp, featuring a small-signal gain of 13 dB and 3-dB bandwidth of 90 GHz. Time-domain measurements using OOK (up to 56 Gb/s) and PAM-4 (at 30Gbaud) are performed, demonstrating the maximum output swing and linearity of the driver. The output power to power dissipation ratio is 3.6%. To the best knowledge of the authors, this is the first time a linear driver for optical modulators demonstrates such bandwidth."}
{"_id": "1753c2dc85cc40e0a2e8b4a405c1690eab066d8d", "title": "FENNEL: streaming graph partitioning for massive scale graphs", "text": "Balanced graph partitioning in the streaming setting is a key problem to enable scalable and efficient computations on massive graph data such as web graphs, knowledge graphs, and graphs arising in the context of online social networks. Two families of heuristics for graph partitioning in the streaming setting are in wide use: place the newly arrived vertex in the cluster with the largest number of neighbors or in the cluster with the least number of non-neighbors.\n In this work, we introduce a framework which unifies the two seemingly orthogonal heuristics and allows us to quantify the interpolation between them. More generally, the framework enables a well principled design of scalable, streaming graph partitioning algorithms that are amenable to distributed implementations. We derive a novel one-pass, streaming graph partitioning algorithm and show that it yields significant performance improvements over previous approaches using an extensive set of real-world and synthetic graphs.\n Surprisingly, despite the fact that our algorithm is a one-pass streaming algorithm, we found its performance to be in many cases comparable to the de-facto standard offline software METIS and in some cases even superiror. For instance, for the Twitter graph with more than 1.4 billion of edges, our method partitions the graph in about 40 minutes achieving a balanced partition that cuts as few as 6.8% of edges, whereas it took more than 81/2 hours by METIS to produce a balanced partition that cuts 11.98% of edges. We also demonstrate the performance gains by using our graph partitioner while solving standard PageRank computation in a graph processing platform with respect to the communication cost and runtime."}
{"_id": "0650df86ad901fb9aadd9033a83c328a6f595666", "title": "A Language Modeling Approach to Predicting Reading Difficulty", "text": "We demonstrate a new research approach to the problem of predicting the reading difficulty of a text passage, by recasting readability in terms of statistical language modeling. We derive a measure based on an extension of multinomial na\u00efve Bayes classification that combines multiple language models to estimate the most likely grade level for a given passage. The resulting classifier is not specific to any particular subject and can be trained with relatively little labeled data. We perform predictions for individual Web pages in English and compare our performance to widely-used semantic variables from traditional readability measures. We show that with minimal changes, the classifier may be retrained for use with French Web documents. For both English and French, the classifier maintains consistently good correlation with labeled grade level (0.63 to 0.79) across all test sets. Some traditional semantic variables such as type-token ratio gave the best performance on commercial calibrated test passages, while our language modeling approach gave better accuracy for Web documents and very short passages (less than 10 words)."}
{"_id": "4aa0522f8efc370241a0f1acb7c888787e5ab70b", "title": "Dynamic projection mapping onto a deformable object with occlusion based on high-speed tracking of dot marker array", "text": "In recent years, projection mapping has attracted much attention in a variety of fields. Generally, however, the objects in projection mapping are limited to rigid and static or quasistatic objects. Dynamic projection mapping onto a deformable object could remarkably expand the possibilities. In order to achieve such a projection mapping, it is necessary to recognize the deformation of the object even when it is occluded. However, it is still a challenging problem to achieve this task in real-time with low latency. In this paper, we propose an efficient, high-speed tracking method utilizing high-frame-rate imaging. Our method is able to track an array of dot markers arranged on a deformable object even when there is external occlusion caused by the user interaction and self-occlusion caused by the deformation of the object itself. Additionally, our method can be applied to a stretchable object. Dynamic projection mapping with our method showed robust and consistent display onto a sheet of paper and cloth with a tracking performance of about 0.2 ms per frame, with the result that the projected pattern appeared to be printed on the deformable object."}
{"_id": "812b169bf22777b30823c46312fa9425237bca35", "title": "Novel Online Methods for Time Series Segmentation", "text": "To efficiently and effectively mine massive amounts of data in the time series, approximate representation of the data is one of the most commonly used strategies. Piecewise linear approximation is such an approach, which represents a time series by dividing it into segments and approximating each segment with a straight line. In this paper, we first propose a new segmentation criterion that improves computing efficiency. Based on this criterion, two novel online piecewise linear segmentation methods are developed, the feasible space window method and the stepwise feasible space window method. The former usually produces much fewer segments and is faster and more reliable in the running time than other methods. The latter can reduce the representation error with fewer segments. It achieves the best overall performance on the segmentation results compared with other methods. Extensive experiments on a variety of real-world time series have been conducted to demonstrate the advantages of our methods."}
{"_id": "ed51689477c76193c09bf41700d43491c0b26cd6", "title": "Broadcast Encryption with Traitor Tracing", "text": "In this thesis, we look at definitions and black-box constructions with efficient instantiations for broadcast encryption and traitor tracing. We begin by looking at the security notions for broadcast encryption found in the literature. Since there is no easy way to compare these existing notions, we propose a framework of security notions for which we establish relationships. We then show where existing notions fit within this framework. Second, we present a black-box construction of a decentralized dynamic broadcast encryption scheme. This scheme does not rely on any trusted authorities, and new users can join at any time. It achieves the strongest security notion based on the security of its components and has an efficient instantiation that is fully secure under the DDH assumption in the standard model. Finally, we give a black-box construction of a message-based traitor tracing scheme, which allows tracing not only based on pirate decoders but also based on watermarks contained in a message. Our scheme is the first one to obtain the optimal ciphertext rate of 1 asymptotically. We then show that at today\u2019s data rates, the scheme is already practical for standard choices of values."}
{"_id": "3d67f52a871c0a31ce6fa3f3162ab9affd13ed05", "title": "Cross-domain security of cyber-physical systems", "text": "The interaction between the cyber domain and the physical domain components and processes can be leveraged to enhance the security of the cyber-physical system. In order to do so, we must first analyze various cyber domain and physical domain information flows, and characterize the relation between them using model functions. In this paper, we present a notion of cross-domain security of cyber-physical systems, whereby we present a security analysis framework that can be used for generating novel cross-domain attack models, attack detection methods, etc. We demonstrate how information flows such as discrete domain signal flows and continuous domain energy flows in the cyber and physical domain can be used to generate model functions using data-driven estimation, and use this model functions for performing various cross-domain security analysis. We also demonstrate the practical applicability of the cross-domain security analysis framework using the cyber-physical manufacturing system as a case study."}
{"_id": "af8a14f2698afffad73095255f680632b3dd25ef", "title": "Facial reconstruction: soft tissue thickness values for South African black females.", "text": "In forensic science, investigators frequently have to deal with unidentified skeletonised remains. When conventional methods of identification are unsuccessful, forensic facial reconstruction (FFR) may be used, often as a last resort, to assist the process. FFR relies on the relationships between the facial features, subcutaneous soft tissues and underlying bony structure of the skull. The aim of this study was to develop soft tissue thickness (STT) values for South African black females for application to FFR, to compare these values to existing literature or databases and to add these values to existing population data. Computerised tomography scanning was used to determine average population-specific STT values at 28 facial landmarks of 154 black females. Descriptive statistics are provided for these STT values, which were also compared to those reported in three other comparable databases. Many of these STT values are significantly different from those reported for comparable groups, suggesting that individuals from different geographical areas have unique facial features thus requiring population-specific STT values. Repeatability tests indicated that most measurements could be recorded with a high degree of reliability."}
{"_id": "a325d5ea42a0b6aeb0390318e9f65f584bd67edd", "title": "Fine-Grained Visual Comparisons with Local Learning", "text": "Given two images, we want to predict which exhibits a particular visual attribute more than the other-even when the two images are quite similar. Existing relative attribute methods rely on global ranking functions; yet rarely will the visual cues relevant to a comparison be constant for all data, nor will humans' perception of the attribute necessarily permit a global ordering. To address these issues, we propose a local learning approach for fine-grained visual comparisons. Given a novel pair of images, we learn a local ranking model on the fly, using only analogous training comparisons. We show how to identify these analogous pairs using learned metrics. With results on three challenging datasets-including a large newly curated dataset for fine-grained comparisons-our method outperforms stateof-the-art methods for relative attribute prediction."}
{"_id": "8ef4c64cb734a68f4e3c7c2ffaf7d4d403a5aab3", "title": "Quantity versus quality: A new approach to examine the relationship between technology use and student outcomes", "text": "The author argues that to examine the relationship between technology use and student outcomes, the quality of technology use\u2014how, and what, technology is used\u2014is a more significant factor than the quantity of technology use\u2014how much technology is used. This argument was exemplified by an empirical study that used both angles to examine the association between technology use and student outcomes. When only the quantity of technology use was examined, no significant association was observed. However, when the quality of technology was examined by investigating the specific types of technology uses, a significant association was identified between technology use and all student outcomes. Furthermore, different types of technology use showed different influences on specific student outcomes. General technology uses were positively associated with student technology proficiency, while subject-specific technology uses were negatively associated with student technology proficiency. Social-communication technology uses were significantly positively associated with developmental outcomes such as self-esteem and positive attitude towards school. Entertainment/exploration technology use showed significant positive association with student learning habits. None of these technology uses had significant influence on student academic outcome. Specific suggestions for integrating technology into schools and future research were provided. Introduction In the last two decades, generous investments have been made in educational technology around the world. For example, the US had invested more than $66 billion in school technology in just 10 years (Quality Education Data, 2004). By 2004, China had spent 100 billion Yuan (about $13.2 billion) on educational technology (Zhao, 2005), and British Journal of Educational Technology Vol 41 No 3 2010 455\u2013472 doi:10.1111/j.1467-8535.2009.00961.x \u00a9 2009 The Author. Journal compilation \u00a9 2009 Becta. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. the annual expense on educational technology was projected to reach 35.5 billion Yuan in 2007 (Okokok Report, 2004). Ireland\u2019s second national educational technology plan proposed to invest 107.92 million pounds in educational technology (Ireland Ministry of Education and Science, 2001). The generous investments were supported by the strongly held premise that technology can help students learn more efficiently and effectively, and as a result increase student academic achievement. This belief in the connection between technology and student achievement is a theme commonly emphasised in mission statements of educational technology projects and arguments to support educational technology investment (Zhao & Conway, 2001). For example, the first US national educational technology plan claims, \u2018Properly used, technology increases students\u2019 learning opportunities, motivation, and achievement\u2019 (U.S. Department of Education, 1996, p. 10). The second plan assures that technology would \u2018enhance learning and improve student achievement for all students\u2019 (U.S. Department of Education, 2000, p. 4). 
The third plan further states that with new technologies, \u201810 years from now we could be looking at the greatest leap forward in achievement in the history of education. By any measure, the improvements will be dramatic\u2019 (U.S. Department of Education, p. 11). However, this premise on the crucial role of technology in student achievement has not been substantially supported by empirical evidence. In fact, findings from different empirical studies focusing on the effect of technology on learning have been inconsistent and contradictory. On the one hand, some studies have identified significant positive impact of technology use on student outcomes in academic areas such as literacy development (Blasewitz & Taylor, 1999; Tracey & Young, 2006), reading comprehension and vocabulary (Scrase, 1998; Stone, 1996; Woehler, 1994), writing (Nix, 1998), mathematics (Elliott & Hall, 1997; Mac Iver, Balfanz & Plank, 1999) and science (Harmer & Cates, 2007; Lazarowitz & Huppert, 1993; Liu, Hsieh, Cho & Schallert, 2006; Reid-Griffin, 2003). For example, Tienken and Wilson (2007) compared seventhgrade students whose teachers used mathematics websites and presentation software in their classrooms with students whose teachers did not teach with these technology tools. They found that the use of these technology tools had a positive effect on students\u2019 learning of basic mathematic skills. In addition, positive impacts have been identified in student developmental areas, including attitude towards learning and self-esteem (Nguyen, Hsieh & Allen, 2006; Sivin-Kachala & Bialo, 2000), motivation, attendance and discipline (eg, Matthew, 1997). For example, using a mixed-method design, Wighting (2006) reported that using computers in the classroom positively affected students\u2019 sense of learning in a community. Similarly, in the UK, a series of research studies have been conducted to examine the effect of the large-scale Tablet PC programmes, and findings reveal that the use of technology has improved student access to curriculum, communication and motivation (eg, Sheehy et al, 2005; Twining et al, 2006). On the other hand, several other researchers have come to very different conclusions. Some argue that technology use may not have any positive impact on student outcomes. For example, The Organization for Economic Co-operation and Development\u2019s (OECD) Programme for International Student Assessment 2003 study found that stu456 British Journal of Educational Technology Vol 41 No 3 2010 \u00a9 2009 The Author. Journal compilation \u00a9 2009 Becta. dents using computers most frequently at school did not necessarily perform better than students using technology less frequently, and the impact of technology use on student math achievement varied by countries (OECD, 2005). In March 2007, the Institute of Education Sciences released an influential report titled Effectiveness of Reading and Mathematics Software Products: Findings from the First Student Cohort. This study, intended to assess the effects of 16 computer software products designed to teach firstand fourthgrade reading and sixth-grade math, using a rigorous random assignment design, found that \u2018test scores in treatment classrooms that were randomly assigned to use products did not differ from test scores in control classrooms by statistically significant margins\u2019 (Dynarski et al, 2007, p. xiii). Furthermore, some studies suggest that technology use might even harm children and their learning (eg, Healy, 1998; Stoll, 1999). 
For example, Waight and Abd-El-Khalick (2007) found that the use of computer technology restricted rather than promoted \u2018inquiry\u2019 in a sixth-grade science classroom. Mixed findings have also emerged from large-scale international studies. A study of the Trends in International Mathematics and Science Study (TIMSS) reported that technology use was negatively related to science achievement among eighth graders in Turkey (Aypay, Erdogan & Sozer, 2007). Another TIMSS study found that while medium use of computer technology was related to higher science scores, extensive use was related to lower science scores (Antonijevic, 2007). Similarly, based on data collected from 175 000 15-year-old students in 31 countries, researchers at the University of Munich announced that performance in math and reading had suffered significantly among students who had more than one computer at home (MacDonald, 2004). Schacter (1999) also identified some negative impacts on student achievement through the review of five large-scale studies that employed diverse research methods to examine the impact of educational technology. Discouraged by these findings, some people have come to the conclusion that putting computers in classrooms has been wasteful and pointless (Oppenheimer, 2003). Existing research on the relationship technology on student learning presents a mixed message (Andrews et al, 2007; O\u2019Dwyer, Russell, Bebell & Tucker-Seeley, 2005; Torgerson & Zhu, 2003). Such mixed and often conflicting findings make it difficult to draw conclusions about the effects of technology, to provide meaningful advice to those who make decisions about technology investment in education and to make practical suggestions for integrating technology into schools. There are at least two problems contributing to the controversy over the relationship between technology use and student outcomes. The first is that technology is often examined at a very general level (Zhao, 2003). Many studies \u2018treat technology as an undifferentiated characteristic of schools and classrooms. No distinction is made between different types of technology programs\u2019 (Wenglinsky, 1998, p. 3). We know that technology is a very broad term that includes many kinds of hardware and software. These technologies may have different impacts on student outcomes. Even the same technology can be used differently in various contexts to solve all kinds of problems (Zhao) and thus have \u2018different meanings in different settings\u2019 (Peyton & Bruce, Quantity versus quality on technology use 457 \u00a9 2009 The Author. Journal compilation \u00a9 2009 Becta. 1993, p. 10). Treating technology as if it is a single thing obscures the unique characteristics of different technologies and their uses. The second problem is the focus of the studies. Most studies focus on the impact of the quantity of technology use, in other words, how much or how frequently technology is used, but ignore the quality of technology use, that is, how technology is used. For example, many studies examine the relationship between how much time students spend on using computers or how often they use computers and their achievement (eg, Du, Havard, Yu & Adams, 2004; Mann, Shakeshaft, Becker & Kottkamp, 1999). However, research suggests that the quality of technology use is more critical"}
{"_id": "1c82ffbf057067bee2b637f4b25422684c9a4feb", "title": "Spatial Correlation and Mobility-Aware Traffic Modeling for Wireless Sensor Networks", "text": "Recently, there has been a great deal of research on using mobility in wireless sensor networks (WSNs) to facilitate surveillance and reconnaissance in a wide deployment area. Besides providing an extended sensing coverage, node mobility along with spatial correlation introduces new network dynamics, which could lead to the traffic patterns fundamentally different from the traditional (Markovian) models. In this paper, a novel traffic modeling scheme for capturing these dynamics is proposed that takes into account the statistical patterns of node mobility and spatial correlation. The contributions made in this paper are twofold. First, it is shown that the joint effects of mobility and spatial correlation can lead to bursty traffic. More specifically, a high mobility variance and small spatial correlation can give rise to pseudo-long-range-dependent (LRD) traffic (high bursty traffic), whose autocorrelation function decays slowly and hyperbolically up to a certain cutoff time lag. Second, due to the ad hoc nature of WSNs, certain relay nodes may have several routes passing through them, necessitating local traffic aggregations. At these relay nodes, our model predicts that the aggregated traffic also exhibits the bursty behavior characterized by a scaled power-law decayed autocovariance function. According to these findings, a novel traffic shaping protocol using movement coordination is proposed to facilitate effective and efficient resource provisioning strategy. Finally, simulation results reveal a close agreement between the traffic pattern predicted by our theoretical model and the simulated transmissions from multiple independent sources, under specific bounds of the observation intervals."}
{"_id": "84b9b635ef17924e99f7f551048b55df176043fe", "title": "A Review of Facebook Research in the Social Sciences.", "text": "With over 800 million active users, Facebook is changing the way hundreds of millions of people relate to one another and share information. A rapidly growing body of research has accompanied the meteoric rise of Facebook as social scientists assess the impact of Facebook on social life. In addition, researchers have recognized the utility of Facebook as a novel tool to observe behavior in a naturalistic setting, test hypotheses, and recruit participants. However, research on Facebook emanates from a wide variety of disciplines, with results being published in a broad range of journals and conference proceedings, making it difficult to keep track of various findings. And because Facebook is a relatively recent phenomenon, uncertainty still exists about the most effective ways to do Facebook research. To address these issues, the authors conducted a comprehensive literature search, identifying 412 relevant articles, which were sorted into 5 categories: descriptive analysis of users, motivations for using Facebook, identity presentation, the role of Facebook in social interactions, and privacy and information disclosure. The literature review serves as the foundation from which to assess current findings and offer recommendations to the field for future research on Facebook and online social networks more broadly."}
{"_id": "0f3e25b3567b5cdcff79992fd2e7c79d24b20645", "title": "Information Disclosure and Control on Facebook: Are They Two Sides of the Same Coin or Two Different Processes?", "text": "Facebook, the popular social network site, is changing the nature of privacy and the consequences of information disclosure. Despite recent media reports regarding the negative consequences of disclosing information on social network sites such as Facebook, students are generally thought to be unconcerned about the potential costs of this disclosure. The current study explored undergraduate students' information disclosure and information control on Facebook and the personality factors that influence levels of disclosure and control. Participants in this online survey were 343 undergraduate students who were current users of Facebook. Results indicated that participants perceived that they disclosed more information about themselves on Facebook than in general, but participants also reported that information control and privacy were important to them. Participants were very likely to have posted information such as their birthday and e-mail address, and almost all had joined an online network. They were also very likely to post pictures such as a profile picture, pictures with friends, and even pictures at parties and drinking with friends. Contrary to expectations, information disclosure and information control were not significantly negatively correlated, and multiple regression analyses revealed that while disclosure was significantly predicted by the need for popularity, levels of trust and self-esteem predicted information control. Therefore, disclosure and control on Facebook are not as closely related as expected but rather are different processes that are affected by different aspects of personality. Implications of these findings and suggestions for future research are discussed."}
{"_id": "2891da32932a45f8b14bb95f7e26b5ae9677f430", "title": "On the evolution of user interaction in Facebook", "text": "Online social networks have become extremely popular; numerous sites allow users to interact and share content using social links. Users of these networks often establish hundreds to even thousands of social links with other users. Recently, researchers have suggested examining the activity network - a network that is based on the actual interaction between users, rather than mere friendship - to distinguish between strong and weak links. While initial studies have led to insights on how an activity network is structurally different from the social network itself, a natural and important aspect of the activity network has been disregarded: the fact that over time social links can grow stronger or weaker. In this paper, we study the evolution of activity between users in the Facebook social network to capture this notion. We find that links in the activity network tend to come and go rapidly over time, and the strength of ties exhibits a general decreasing trend of activity as the social network link ages. For example, only 30% of Facebook user pairs interact consistently from one month to the next. Interestingly, we also find that even though the links of the activity network change rapidly over time, many graph-theoretic properties of the activity network remain unchanged."}
{"_id": "ccd9afd91ac7e6013cd07643cfe620764fac4892", "title": "Spectral collaborative filtering", "text": "Despite the popularity of Collaborative Filtering (CF), CF-based methods are haunted by the cold-start problem, which has a significantly negative impact on users' experiences with Recommender Systems (RS). In this paper, to overcome the aforementioned drawback, we first formulate the relationships between users and items as a bipartite graph. Then, we propose a new spectral convolution operation directly performing in the spectral domain, where not only the proximity information of a graph but also the connectivity information hidden in the graph are revealed. With the proposed spectral convolution operation, we build a deep recommendation model called Spectral Collaborative Filtering (SpectralCF). Benefiting from the rich information of connectivity existing in the spectral domain, SpectralCF is capable of discovering deep connections between users and items and therefore, alleviates the cold-start problem for CF. To the best of our knowledge, SpectralCF is the first CF-based method directly learning from the spectral domains of user-item bipartite graphs. We apply our method on several standard datasets. It is shown that SpectralCF significantly out-performs state-of-the-art models. Code and data are available at https://github.com/lzheng21/SpectralCF."}
{"_id": "6aedad2c0bbf5a5a36cf88807694aa1b2c671842", "title": "Eco-Routing Navigation System Based on Multisource Historical and Real-Time Traffic Information", "text": "Due to increased public awareness on global climate change and other energy and environmental problems, a variety of strategies are being developed and used to reduce the energy consumption and environmental impact of roadway travel. In advanced traveler information systems, recent efforts have been made in developing a new navigation concept called \u201ceco-routing,\u201d which finds a route that requires the least amount of fuel and/or produces the least amount of emissions. This paper presents an eco-routing navigation system that determines the most eco-friendly route between a trip origin and a destination. It consists of the following four components: 1) a Dynamic Roadway Network database, which is a digital map of a roadway network that integrates historical and real-time traffic information from multiple data sources through an embedded data fusion algorithm; 2) an energy/emissions operational parameter set, which is a compilation of energy/emission factors for a variety of vehicle types under various roadway characteristics and traffic conditions; 3) a routing engine, which contains shortest path algorithms used for optimal route calculation; and 4) user interfaces that receive origin-destination inputs from users and display route maps to the users. Each of the system components and the system architecture are described. Example results are also presented to prove the validity of the eco-routing concept and to demonstrate the operability of the developed eco-routing navigation system. In addition, current limitations of the system and areas for future improvements are discussed."}
{"_id": "db28b36213e42690bd71370a8f74e0571d0304cf", "title": "Popularity prediction of images and videos on Instagram", "text": "We live in a world surrounded by numerous social media platforms, applications and websites which produce various texts, images and videos (posts) daily. People share their moments with their friends and families via these tools to keep in touch. This extensiveness of social media has led to an expansion of information in various forms. It is difficult to imagine someone totally unfamiliar with these concepts and not having posted any content on a platform. All users, ranging from individuals to large companies, want to get the most of their audiences' attention. Nevertheless, the problem is that not all these posts are admired and noticed by their audience. Therefore, it would be important to know what characteristics a post should have to become the most popular. Studying this enormous data will develop a knowledge from which we can understand the best way to publish our posts. To this end, we gathered images and videos from Instagram accounts and we used some image/video context features to predict the number of likes a post obtains as a meaning of popularity through some regression and classification methods. By the experiments with 10-fold cross-validation, we get the results of Popularity Score prediction with 0.002 in RMSE and Popularity Class prediction with 90.77% accuracy. As we know, this study is the first exploring of Iranian Instagram users for popularity prediction."}
{"_id": "c977ed640f9a3aa6074791e63fe4e754b9c85a69", "title": "Optimization Using Simulation of Traffic Light Signal Timings", "text": "Traffic congestion has become a great challenge and a large burden on both the governments and the citizens in vastly populated cities. The main problem, originally initiated by several factors, continues to threaten the stability of modern cities and the livelihood of its habitants. Hence, improving the control strategies that govern the traffic operations is a powerful solution that can solve the congestion problem. These improvements can be achieved by enhancing the traffic control performance through adjusting the traffic signal timings. This paper focuses on finding various solutions for the addressed problem through the optimization using simulation of traffic signal timings under oversaturated conditions. The system under study is an actual road network in Alexandria, Egypt; where, numerous data have been collected in different time intervals. A series of computer simulation models to represent the actual system as well as proposed solutions have been developed using the ExtendSim simulation environment. Furthermore, an evolutionary optimizer is utilized to attain a set of optimum/near-optimum signal timings to minimize the total time in system of the vehicles, resulting in an improved performance of the road network. Analysis of experimentation results shows that the adopted methodology optimizes the vehicular flow in a multiple-junction urban traffic network."}
{"_id": "4934ac57123efbfbde139ec846452acd95520b40", "title": "Emergence of neural encoding of auditory objects while listening to competing speakers.", "text": "A visual scene is perceived in terms of visual objects. Similar ideas have been proposed for the analogous case of auditory scene analysis, although their hypothesized neural underpinnings have not yet been established. Here, we address this question by recording from subjects selectively listening to one of two competing speakers, either of different or the same sex, using magnetoencephalography. Individual neural representations are seen for the speech of the two speakers, with each being selectively phase locked to the rhythm of the corresponding speech stream and from which can be exclusively reconstructed the temporal envelope of that speech stream. The neural representation of the attended speech dominates responses (with latency near 100 ms) in posterior auditory cortex. Furthermore, when the intensity of the attended and background speakers is separately varied over an 8-dB range, the neural representation of the attended speech adapts only to the intensity of that speaker but not to the intensity of the background speaker, suggesting an object-level intensity gain control. In summary, these results indicate that concurrent auditory objects, even if spectrotemporally overlapping and not resolvable at the auditory periphery, are neurally encoded individually in auditory cortex and emerge as fundamental representational units for top-down attentional modulation and bottom-up neural adaptation."}
{"_id": "1a202ce3b60fc7d9a758c15bc6a5d66eb394b390", "title": "The NASD Securities Observation , News Analysis & Regulation System ( SONAR )", "text": "The Securities Observation, News Analysis, and Regulation (SONAR) system was developed by NASD to monitor the Nasdaq, Over the Counter (OTC), and Nasdaq-Liffe (futures) stock markets for potential insider trading and fraud through misrepresentation. SONAR has been in operational use at NASD since December 2001, processing approximately 10,000 news wires stories and SEC filings, evaluating price/volume models for 25,000 securities, and generating 50-60 alerts (or \u201cbreaks\u201d) per day for review by several groups of regulatory analysts and investigators. In addition, SONAR has greatly expanded surveillance coverage to new areas of the market and increased accuracy significantly over an earlier break detection system. SONAR makes use of several AI and statistical techniques, including NLP text mining, statistical regression, rule-based inference, uncertainty, and fuzzy matching. Sonar combines these enabling technologies in a system designed to deliver a steady stream of high-quality breaks to the analysts for further investigation. Additional components including visualization, text search agents, and flexible displays add to the system\u2019s utility. Finally, SONAR is designed as a knowledge-based system. Domain knowledge is maintained by knowledge engineers working closely with the regulatory analysts. In this way, SONAR is adaptable to new market conditions, data sources, and regulatory concerns."}
{"_id": "6bd3544905ec46cb321bc06c7937110da44f90ed", "title": "Mood Effects on Cognition : Affective Influences on the Content and Process of Information Processing and Behavior", "text": ""}
{"_id": "5b823d0bdcb2c5dddfb84b4f92353bc5aa8f2644", "title": "In Defense of Single-column Networks for Crowd Counting", "text": "Crowd counting usually addressed by density estimation becomes an increasingly important topic in computer vision due to its widespread applications in video surveillance, urban planning, and intelligence gathering. However, it is essentially a challenging task because of the greatly varied sizes of objects, coupled with severe occlusions and vague appearance of extremely small individuals. Existing methods heavily rely on multi-column learning architectures to extract multi-scale features, which however suffer from heavy computational cost, especially undesired for crowd counting. In this paper, we propose the single-column counting network (SCNet) for efficient crowd counting without relying on multi-column networks. SCNet consists of residual fusion modules (RFMs) for multi-scale feature extraction, a pyramid pooling module (PPM) for information fusion, and a sub-pixel convolutional module (SPCM) followed by a bilinear upsampling layer for resolution recovery. Those proposed modules enable our SCNet to fully capture multi-scale features in a compact single-column architecture and estimate high-resolution density map in an efficient way. In addition, we provide a principled paradigm for density map generation and data augmentation for training, which shows further improved performance. Extensive experiments on three benchmark datasets show that our SCNet delivers new state-of-the-art performance and surpasses previous methods by large margins, which demonstrates the great effectiveness of SCNet as a singlecolumn network for crowd counting. c \u00a9 2018. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms. *Equal contribution 2 WANG et al.: IN DEFENSE OF SINGLE-COLUMN NETWORKS FOR CROWD COUNTING"}
{"_id": "c5d2dbd4454fe9610cb2a7b19772fba8fa229567", "title": "What Users Ask a Search Engine: Analyzing One Billion Russian Question Queries", "text": "We analyze the question queries submitted to a large commercial web search engine to get insights about what people ask, and to better tailor the search results to the users' needs. Based on a dataset of about one billion question queries submitted during the year 2012, we investigate askers' querying behavior with the support of automatic query categorization. While the importance of question queries is likely to increase, at present they only make up 3-4% of the total search traffic.\n Since questions are such a small part of the query stream, and are more likely to be unique than shorter queries, click-through information is typically rather sparse. Thus, query categorization methods based on the categories of clicked web documents do not work well for questions. As an alternative, we propose a robust question query classification method that uses the labeled questions from a large community question answering platform (CQA) as a training set. The resulting classifier is then transferred to the web search questions. Even though questions on CQA platforms tend to be different to web search questions, our categorization method proves competitive with strong baselines with respect to classification accuracy.\n To show the scalability of our proposed method we apply the classifiers to about one billion question queries and discuss the trade-offs between performance and accuracy that different classification models offer."}
{"_id": "53ad6a3c95c8be67bddbae3ef76c938adcd9775d", "title": "Cache-Conscious Data Placement", "text": "As the gap between memory and processor speeds continues to widen, cache eficiency is an increasingly important component of processor performance. Compiler techniques have been used to improve instruction cache pet$ormance by mapping code with temporal locality to different cache blocks in the virtual address space eliminating cache conflicts. These code placement techniques can be applied directly to the problem of placing data for improved data cache pedormance.In this paper we present a general framework for Cache Conscious Data Placement. This is a compiler directed approach that creates an address placement for the stack (local variables), global variables, heap objects, and constants in order to reduce data cache misses. The placement of data objects is guided by a temporal relationship graph between objects generated via profiling. Our results show that profile driven data placement significantly reduces the data miss rate by 24% on average."}
{"_id": "75a739c1dd74835c519e9b701cd1e60e38fc0b27", "title": "Approaches to Interpreter Composition", "text": "In this paper, we compose six different Python and Prolog VMs into 4 pairwise compositions: one using C interpreters; one running on the JVM; one using meta-tracing interpreters; and one using a C interpreter and a meta-tracing interpreter. We show that programs that cross the language barrier frequently execute faster in a meta-tracing composition, and that meta-tracing imposes a significantly lower overhead on composed programs relative to mono-language programs."}
{"_id": "d83233cd007d740272c58bb75f947d933d72aa9b", "title": "A framework for efficient network anomaly intrusion detection with features selection", "text": "An intrusion Detection System (IDS) provides alarts against intrusion attacks where a traditional firewall fails. Machine learning algorithms aim to detect anomalies using supervised and unsupervised approaches. Features selection techniques identify important features and remove irrelevant and redundant attributes to reduce the dimensionality of feature space. This paper presents a features selection framework for efficient network anomaly detection using different machine learning classifiers. The framework applies different strategies by using filter and wrapper features selection methodologies. The aim of this framework is to select the minimum number of features that achieve the highest accuracy. UNSW-NB15 dataset is used in the experimental results to evaluate the proposed framework. The results show that by using 18 features from one of the filter ranking methods and applying J48 as a classifier, an accuracy of 88% is achieved."}
{"_id": "1d1e6d7299ac2e8e0a37a594678e0c39542ae53d", "title": "Application-specific resource provisioning for wide-area distributed computing", "text": "Some modern distributed applications require cooperation among multiple geographically separated computing facilities to perform intensive computing at the end sites and large-scale data transfers in the wide area network. It has been widely recognized that WDM networks are cost-effective means to support data transfers in this type of data-intensive applications. However, neither the traditional approaches to establishing lightpaths between given source destination pairs nor the existing application-level approaches that only consider computing resources but take the underlying connectivity for granted are sufficient. In this article we identify key limitations and issues in existing systems, and focus on joint resource allocation of both computing resources and network resources in federated computing and network systems. A variety of resource allocation schemes that provide modern distributed computing applications with performance and reliability guarantees are presented."}
{"_id": "725aa3f0e394bf102d25200507074beb64a86ef1", "title": "The Multidimensional Work Motivation Scale : Validation evidence in seven languages and nine countries", "text": "The Multidimensional Work Motivation Scale: Validation evidence in seven languages and nine countries Maryl\u00e8ne Gagn\u00e9, Jacques Forest, Maarten Vansteenkiste, Laurence Crevier-Braud, Anja Van den Broeck, Ann Kristin Aspeli, Jenny Bellerose, Charles Benabou, Emanuela Chemolli, Stefan Tomas G\u00fcntert, Hallgeir Halvari, Devani Laksmi Indiyastuti, Peter A. Johnson, Marianne Hauan Molstad, Mathias Naudin, Assane Ndao, Anja Hagen Olafsen, Patrice Roussel, Zheni Wang & Cathrine Westbye a School of Psychology, University of Western Australia, Perth, Australia b Department of Organization and Human Resource Management, UQAM School of Management Sciences, Universit\u00e9 du Qu\u00e9bec \u00e0 Montr\u00e9al, Montr\u00e9al, Canada c Department of Developmental, Personality and Social Psychology, University of Gent, Gent, Belgium d Department of Psychology, Universit\u00e9 du Qu\u00e9bec \u00e0 Montr\u00e9al, Montr\u00e9al, Canada e Department of Economy and Management, Hogeschool Universiteit Brussel, Brussels, and KU Leuven, Leuven, Belgium f Faculty of Economics and Social Science, H\u00f8gskolen i Buskerud og Vestfold, Kongsberg, Norway g Department of Psychology, Universit\u00e9 de Montr\u00e9al, Montr\u00e9al, Canada h Psychology of Work in Organization and Society Research Group, Swiss Federal Institute of Technology, Zurich, Switzerland i Faculty of Economics and Business, Universitas Gadjah Mada, Yogyakarta, Indonesia j Institute of Work Psychology, University of Sheffield, Sheffield, United Kingdom k International Institute of Commerce and Development, Universit\u00e9 Panth\u00e9on Assas \u2014 Paris II, Paris, France l Faculty of Economics and Management Sciences, Universit\u00e9 Cheikh Anta DIOP, Dakkar, S\u00e9n\u00e9gal m Department of Management and Human Resources, Universit\u00e9 Toulouse 1, Toulouse, France Published online: 05 Feb 2014."}
{"_id": "8aedd2af730d8f3bce2ea46162d004a97fd9d615", "title": "Beyond budgeting and Agile Software Development: a Conceptual Framework for the Performance Management of Agile Software Development Teams", "text": "Around the same time as the emergence of agile methods as a formalized concept, the management accounting literature introduced the concept of Beyond Budgeting as a performance management model for changing business environments. Both concepts share many similarities with both having a distinctly agile or adaptive perspective. The Beyond Budgeting model promises to enable companies to keep pace with changing business environments, quickly create and adapt strategy and empower people throughout the organization to make effective changes. This research in progress paper attempts to develop the Beyond Budgeting model within the context of agile software development teams. The twelve Beyond Budgeting principles are discussed and a research framework is presented. This framework is being used in two case studies to investigate the organizational issues and challenges that affect the performance of agile software"}
{"_id": "3b9484449d77317ca1cb6a6c44c50c99879a8f0e", "title": "Using Recurrent Neural Networks for Slot Filling in Spoken Language Understanding", "text": "Semantic slot filling is one of the most challenging problems in spoken language understanding (SLU). In this paper, we propose to use recurrent neural networks (RNNs) for this task, and present several novel architectures designed to efficiently model past and future temporal dependencies. Specifically, we implemented and compared several important RNN architectures, including Elman, Jordan, and hybrid variants. To facilitate reproducibility, we implemented these networks with the publicly available Theano neural network toolkit and completed experiments on the well-known airline travel information system (ATIS) benchmark. In addition, we compared the approaches on two custom SLU data sets from the entertainment and movies domains. Our results show that the RNN-based models outperform the conditional random field (CRF) baseline by 2% in absolute error reduction on the ATIS benchmark. We improve the state-of-the-art by 0.5% in the Entertainment domain, and 6.7% for the movies domain."}
{"_id": "c37fe13f94dfc2f3494a35a63336689ce4392135", "title": "MPNET: An End-to-End Deep Neural Network for Object Detection in Surveillance Video", "text": "Object detection is one of the most important topics in computer vision task and has obtained impressive performance thanks to the use of deep convolutional neural network. For object detection, especially in still image, it has achieved excellent performance during past two years, such as the series of R-CNN which plays a vital role in improving performance. However, with the number of surveillance videos increasing, the current methods may not meet the growing demand. In this paper, we propose a new framework named moving-object proposals generation and prediction framework (MPGP) to reduce the searching space and generate some accurate proposals which can reduce computational cost. In addition, we explore the relation of moving regions in feature map of different layers and predict candidates according to the results of previous frames. Last but not least, we utilize spatial-temporal information to strengthen the detection score and further adjust the location of the bounding boxes. Our MPGP framework can be applied to different region-based networks. Experiments on CUHK data set, XJTU data set, and AVSS data set, show that our approach outperforms the state-of-the-art approaches."}
{"_id": "70e9c56188d6d32d774b21c05160e922c5ee28ab", "title": "Lifetime Portfolio Selection By Dynamic Stochastic Programming", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "6dcf0af1a7f59be10e4894eddb4999f554798605", "title": "Advocating an ethical memory model for artificial companions from a human-centred perspective", "text": "This paper considers the ethical implications of applying three major ethical theories to the memory structure of an artificial companion that might have different embodiments such as a physical robot or a graphical character on a hand-held device. We start by proposing an ethical memory model and then make use of an action-centric framework to evaluate its ethical implications. The case that we discuss is that of digital artefacts that autonomously record and store user data, where this data are used as a resource for future interaction with users."}
{"_id": "a6d9e9d9dc7adf72e06cd7489807514320ecf730", "title": "Efficiently exploring architectural design spaces via predictive modeling", "text": "Architects use cycle-by-cycle simulation to evaluate design choices and understand tradeoffs and interactions among design parameters. Efficiently exploring exponential-size design spaces with many interacting parameters remains an open problem: the sheer number of experiments renders detailed simulation intractable. We attack this problem via an automated approach that builds accurate, confident predictive design-space models. We simulate sampled points, using the results to teach our models the function describing relationships among design parameters. The models produce highly accurate performance estimates for other points in the space, can be queried to predict performance impacts of architectural changes, and are very fast compared to simulation, enabling efficient discovery of tradeoffs among parameters in different regions. We validate our approach via sensitivity studies on memory hierarchy and CPU design spaces: our models generally predict IPC with only 1-2% error and reduce required simulation by two orders of magnitude. We also show the efficacy of our technique for exploring chip multiprocessor (CMP) design spaces: when trained on a 1% sample drawn from a CMP design space with 250K points and up to 55x performance swings among different system configurations, our models predict performance with only 4-5% error on average. Our approach combines with techniques to reduce time per simulation, achieving net time savings of three-four orders of magnitude."}
{"_id": "df1d1e45714cae031091797cf24b6b2a0751fb46", "title": "Adaptive Neuro-Fuzzy Inference System based speed controller for brushless DC motor", "text": "In this paper, a novel controller for brushless DC (BLDC) motor has been presented. The proposed controller is based on Adaptive Neuro-Fuzzy Inference System (ANFIS) and the rigorous analysis through simulation is performed using simulink tool box in MATLAB environment. The performance of the motor with proposed ANFIS controller is analyzed and compared with classical Proportional Integral (PI) controller, Fuzzy Tuned PID controller and Fuzzy Variable Structure controller. The dynamic characteristics of the brushless DC motor is observed and analyzed using the developed MATLAB/simulink model. Control system response parameters such as overshoot, undershoot, rise time, recovery time and steady state error are measured and compared for the above controllers. In order to validate the performance of the proposed controller under realistic working environment, simulation result has been obtained and analyzed for varying load and varying set speed conditions. & 2014 Elsevier B.V. All rights reserved."}
{"_id": "83bfb38f5370c6744240b21d12f7f1b0fddb0d33", "title": "High Performance Rogowski Current Transducers", "text": "A Rogowski current transducer is an invaluable tool for semiconductor and power electronic circuit development since it is non-intrusive and does not saturate at high currents. This paper reviews the operating principles, performance limitations and development of this measurement technology and outlines improvements to the integrator design that enables bandwidths of 10MHz to be achieved."}
{"_id": "016c6b6f33ec1ebac023f1f399ab5cbb9758c485", "title": "A preamplifier-latch comparator for high accuracy high speed switched-capacitors pipelined ADC", "text": "In this paper, a gain-boosting preamplifier-latch comparator with minimum input referred offset voltage is proposed. The preamplifier utilizes telescope structure with cross-coupled active loading, which enhances the gain of preamplifier. In order to minimize the delay time of comparator, the input DC range and the ratio of PMOS/NMOS are selected carefully. The circuit is fabricated in TSMC 0.18um with 1.8V power supply."}
{"_id": "691d0a287a2515ebe5019cda498dcb6d24dfd5a4", "title": "Least-Squares Fitting of Two 3-D Point Sets", "text": "Two point sets {pi} and {p'i}; i = 1, 2,..., N are related by p'i = Rpi + T + Ni, where R is a rotation matrix, T a translation vector, and Ni a noise vector. Given {pi} and {p'i}, we present an algorithm for finding the least-squares solution of R and T, which is based on the singular value decomposition (SVD) of a 3 \u00d7 3 matrix. This new algorithm is compared to two earlier algorithms with respect to computer time requirements."}
{"_id": "1d50b77cde6c2c9f6b28b3bbdda1fd912383ce37", "title": "Area-Preservation Mapping using Optimal Mass Transport", "text": "We present a novel area-preservation mapping/flattening method using the optimal mass transport technique, based on the Monge-Brenier theory. Our optimal transport map approach is rigorous and solid in theory, efficient and parallel in computation, yet general for various applications. By comparison with the conventional Monge-Kantorovich approach, our method reduces the number of variables from O(n2) to O(n), and converts the optimal mass transport problem to a convex optimization problem, which can now be efficiently carried out by Newton's method. Furthermore, our framework includes the area weighting strategy that enables users to completely control and adjust the size of areas everywhere in an accurate and quantitative way. Our method significantly reduces the complexity of the problem, and improves the efficiency, flexibility and scalability during visualization. Our framework, by combining conformal mapping and optimal mass transport mapping, serves as a powerful tool for a broad range of applications in visualization and graphics, especially for medical imaging. We provide a variety of experimental results to demonstrate the efficiency, robustness and efficacy of our novel framework."}
{"_id": "6fe387e0891c47dffc0793ccfdbe92e41f984056", "title": "Hierarchical Reinforcement Learning", "text": "A hierarchical representation of the input-output transition function in a learning system is suggested. The choice of either representing the knowledge in a learning system as a discrete set of input-output pairs or as a continuous input-output transition function is discussed. The conclusion that both representations could be e cient, but at di erent levels is made. The di erence between strategies and actions is de ned. An algorithm for using adaptive critic methods in a two-level reinforcement learning system is presented. Two problems that are faced, the hierarchical credit assignment problem and the equalized state problem are described. Simulations of a one dimensional hierarchical reinforcement learning system is presented."}
{"_id": "27f3c2b0bb917091f92e4161863ec3559452280f", "title": "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "text": "We present a new estimation principle for parameterized statistical models. The idea is to perform nonlinear logistic regression to discriminate between the observed data and some artificially generated noise, using the model log-density function in the regression nonlinearity. We show that this leads to a consistent (convergent) estimator of the parameters, and analyze the asymptotic variance. In particular, the method is shown to directly work for unnormalized models, i.e. models where the density function does not integrate to one. The normalization constant can be estimated just like any other parameter. For a tractable ICA model, we compare the method with other estimation methods that can be used to learn unnormalized models, including score matching, contrastive divergence, and maximum-likelihood where the normalization constant is estimated with importance sampling. Simulations show that noise-contrastive estimation offers the best trade-off between computational and statistical efficiency. The method is then applied to the modeling of natural images: We show that the method can successfully estimate a large-scale two-layer model and a Markov random field."}
{"_id": "a6580c11678bc2a2cfe711348eb9c0edfe484fc0", "title": "Social Dominance An Intergroup Theory of Social Hierarchy and Oppression", "text": "Social dominance : an intergroup theory of social hierarchy and oppression / Jim Sidanius, Felicia Pratto. p. cm. Includes bibliographical references and index."}
{"_id": "ad24460dcb4cde52562d6eab97dce2edaa5ed0d3", "title": "D-Sempre: Learning Deep Semantic-Preserving Embeddings for User interests-Social Contents Modeling", "text": "Exponential growth of social media consumption demands e\u0082ective user interests-social contents modeling for more personalized recommendation and social media summarization. However, due to the heterogeneous nature of social contents, traditional approaches lack the ability of capturing the hidden semantic correlations across these multi-modal data, which leads to semantic gaps between social content understanding and user interests. To e\u0082ectively bridge the semantic gaps, we propose a novel deep learning framework for user interests-social contents modeling. We \u0080rst mine and parse data, i.e. textual content, visual content, social context and social relation, from heterogeneous social media feeds. \u008cen, we design a two-branch network to map the social contents and users into a same latent space. Particularly, the network is trained by a largemargin objective that combines a cross-instance distance constraint with a within-instance semantic-preserving constraint in an endto-end manner. At last, a Deep Semantic-Preserving Embedding (D-Sempre) is learned, and the ranking results can be given by calculating distances between social contents and users. To demonstrate the e\u0082ectiveness of D-Sempre in user interests-social contents modeling, we construct a Twi\u008aer dataset and conduct extensive experiments on it. As a result, D-Sempre e\u0082ectively integrates the multimodal data from heterogeneous social media feeds and captures the hidden semantic correlations between users\u2019 interests and social contents."}
{"_id": "a9e1059254889fa6d5105dfc458bfd43898a74ac", "title": "A fuzzy classification system for prediction of the results of the basketball games", "text": "Prediction of the sports game results is an interesting topic that has gained attention lately. Mostly there are used stochastical methods of uncertainty description. In this work it is presented a preliminary approach to build a fuzzy model to basketball game results prediction. Ten fuzzy rule learning algorithms are selected, conducted and compared against standard linear regression with use of the KEEL system. Feature selection algorithms are applied and a majority voting is used in order to select the most representative features."}
{"_id": "b44f36231ec7bfae79803ce40738e7eb2a1264cb", "title": "Inpainting by Flexible Haar-Wavelet Shrinkage", "text": "We present novel wavelet-based inpainting algorithms. Applying ideas from anisotropic regularization and diffusion, our models can better handle degraded pixels at edges. We interpret our algorithms within the framework of forward-backward splitting methods in convex analysis and prove that the conditions for ensuring their convergence are fulfilled. Numerical examples illustrate the good performance of our algorithms."}
{"_id": "6247c0acba0dd88204a0d8d23b89a293f9d58e20", "title": "Tutorial on Machine Learning for Spectrum Sharing in Wireless Networks", "text": "As spectrum utilization efficiency is the major bottleneck in current wireless networking, many stakeholders discuss that spectrum should be shared rather than being exclusively allocated. Shared spectrum access raises many challenges which, if not properly addressed, degrades the performance level of the co-existing networks. Coexistence scenarios may involve two or more networks: same or different types; operated by the same or different operators. The complex interactions among the coexisting networks can be addressed by the machine learning tools which by their nature embrace uncertainty and can model the complex interactions. In this tutorial, we start with the basics of coexistence of wireless networks in the unlicensed bands. Then, we focus on WiFi and LTE-U coexistence. After providing a brief overview of machine learning topics such as supervised learning, unsupervised learning, reinforcement learning, we overview five particular examples which exploit learning schemes to enable efficient spectrum sharing entailing a generic cognitive radio setting as well as LTE and WiFi coexistence scenarios. We conclude with a list of challenges and research directions. I. SPECTRUM SHARING IN WIRELESS NETWORKS Spectrum sharing is the situation where at least two users or technologies are authorized to use the same portion of the radio spectrum on a non-exclusive manner1. We overview the current state of spectrum sharing and provide a taxonomy of spectrum sharing scenarios. We can list the main challenges in providing peaceful coexistence as follows: (i) scarcity of the resources, (ii) heterogeneity of the coexisting networks, (iii) power asymmetry, and (iv) lack of coordination, communication, and cooperation among the coexisting networks. II. COEXISTENCE IN THE UNLICENSED BANDS: THE CASE OF WIFI AND LTE-U The success of IEEE 802.11 networks in the unlicensed bands, i.e., 2.4 GHz and 5 GHz, has proved the efficiency and feasibility of using spectrum in a license-exempt manner. Currently, even the cellular providers consider expanding their network\u2019s capacity with unlicensed spectrum to cope with the increasing wireless traffic demand. More particularly, Qualcomm\u2019s 2013 proposal [1] of aggregating 5GHz bands with the licensed carriers of an LTE network has paved the way for LTE unlicensed networks. However, operation in the unlicensed bands has to address the coexistence challenges. For example, WiFi networks at 2.4 GHz bands, e.g., 802.11b/g/n, have to find the best channel 1S. Forge, R. Horvitz, and C. Blackman. Perspectives on the value of shared spectrum access. Final Report for the European Commission, 2012. among three non-overlapping channels for operation in a very-dense WLAN deployment. Additionally, 2.4 GHz band accommodates also non-WiFi technologies such as Bluetooth, ZigBee, or microwave ovens which all create interference on WLANs. As for 5GHz which has many more non-overlapping channels compared to 2.4 GHz, the more severe challenge is to coexist with technologies other than 802.11n/ac/ax networks, namely unlicensed LTE networks and radars. The main coexistence mechanism of WiFi is listen-beforetalk (LBT) which is also known as carrier sense multiple access with collision avoidance (CSMA/CA). A station with a traffic to transmit has to first check whether the medium is free or not. 
To decide on the state of the medium, two approaches exist: carrier sense and energy detection. In carrier sensing, a WiFi node decodes the preamble of a WiFi frame that is received above some energy detection level. The node extracts the information from the PLCP header which carries information about the occupancy duration of the medium by that ongoing flow. This mechanism is also referred to as channel reservation. With this information, a WiFi node knows when to re-start sensing the medium for a transmission opportunity. Energy detection (ED) is a simpler approach in which a candidate transmitter decides that the air is free if the signal level is below a predefined ED threshold. This approach is used for detecting inter-technology signals, where the received signal is not decodable, i.e., it belongs to other technologies (or corrupted WiFi signals). Despite its simplicity, ED requires more effort on a potential transmitter as it must constantly sense the energy level in the air to detect a transmission opportunity. As LTE follows a scheduled medium access on the licensedspectrum, there is no notion or necessity of politeness or LBT in more technical terms. However, it is vital for LTE unlicensed to implement such mechanisms for coexistence with WiFi and other unlicensed LTE networks at 5 GHz bands. Currently, frequency-domain sharing is a first step only. In other words, an LTE small cell first checks the channel activities and selects a clear channel, if any. For time sharing, there are two approaches taken by two variants of LTE unlicensed: duty cycling by LTE-U and LBT by License-Assisted-Access (LAA). LTE-U which is an industry-led effort lets small cells apply duty cycling where during the OFF periods WiFi can access the medium. As this approach does not mandate LBT before turning small cell transmissions on, it may degrade WiFi performance drastically. LAA requires LBT similar to WiFi\u2019s CSMA/CA. LAA speciCROWNCOM2017 TUTORIAL 2 fication is led by 3GPP and aims to develop a global solution in contrast to LTE-U which is only compliant to countries like US, Korea, China where LBT is not mandatory. We overview basics of these two variants and list the major issues in their peaceful coexistence with WiFi networks. III. BACKGROUND ON MACHINE LEARNING We provide a sparse overview of learning approaches: supervised, unsupervised, and reinforcement learning. IV. THE ROLE OF MACHINE LEARNING IN SPECTRUM SHARING AND COEXISTENCE In this part, we examine the literature using ML approaches to solve the coexistence issues as our case studies. Is the channel idle or busy?This question is at the heart of coexistence of networks in a multi-channel environment, as the first step of coexistence is to choose a channel that is clear. For cognitive radio networks, it is mandatory to detect the channel state to avoid violating the rules of secondary spectrum access. Casting this question into a binary classification problem, authors [2] introduce several (un)supervised learning algorithms to correctly identify the state of a channel. While supervised approaches require the real channel state information from the Primary Users, unsupervised learning such as K-means does not require any input from the PUs which is a desirable property of classification scheme in a practical setting. 
Which unlicensed channel to select for each LAA SBS for inter-operator coexistence?As we expect multiple LAA operators deploy their small cells independently, there is surely the question of how to select an unlicensed channel to aggregate, particularly in case there are more cells than the number of available channels. One way of channel selection is to let every LAA BS learn from its own observations via trial-and-error, Q-learning [3]. Which unlicensed carrier to aggregate and how long to use this carrier?Q-learning framework can also be applied to an LAA setting where an LAA BS needs to select an unlicensed carrier and the transmission duration on the selected carrier [4]. Can WiFi exploit ML for defending itself against LTEU interference?Different than the literature which develops coexistence solutions to be deployed at the LTE base stations for WiFi/LTE setting, [5] proposes to also equip the WiFi APs with a tool that estimates the ON-duration of an existing LTE-U network in the neighborhood. Moreover, the developed solution can estimate the remaining airtime for the WiFi AP based on the LTE\u2019s predicted ON duration. Key idea of WiPLUS is to detect the times where LTE-U has an ongoing transmission using the data passively collected from the MAC FSM of the NIC. However, although LTE-U signal may not be detected above the ED level, it may still have a severe impact on WiFi. Thus, PHY-layer analysis solely on signal level is short of detecting the moderate interference regime. WiPLUS overcomes this challenge by combining data from MAC FSM states and ARQ missing acknowledgments. Sampled data from a testbed has a lot of noise due to imperfections of the measuring devices and the complex interactions among the coexisting systems as well as PHY and MAC layers. WiPLUS applies K-means clustering to detect outliers on the estimated LTE-U on-durations. After filtering the data points based on the signal\u2019s frequency harmonics, WiPLUS calculates the LTEU on-time as the average of the data points, each of which corresponds to an estimate of LTE-U on-time. Can we estimate WiFi link performance by learning from real-world link capacity measurements?In a multiAP setting, an AP can select the operation channel based on the expected capacity of the existing links. The traditional way is to take the SNIR-based capacity estimate into account, i.e., Shannon\u2019s capacity formula. However, this capacity model may sometimes fail to represent the complex interactions between PHY and MAC layers, e.g., partially-overlapping channels in case of channel bonding in new 802.11ac/ax standards. The idea of [6] is to use supervised learning as a tool to model the complex interactions among many factors such as power and PHY rate of a neighboring WiFi link implicitly rather than modelling it explicitly. V. OPEN RESEARCH DIRECTIONS Machine learning-based solutions can embrace the complexity and uncertainty prevalent in the complex scenarios, especially hybrid horizontal spectrum sharing, by learning from the observations. However, a wireless network poses peculiar challenges such as the energy limitations, real-time operation, and sometimes fast changes in the operation environment that render learning less effective. We overview such challenges and conclude with some open questions in this"}
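Several of the case studies above lean on Q-learning for channel or carrier selection. As a hedged illustration of the general idea only, not the cited works' exact formulation, here is a stateless Q-learning sketch in which a base station learns per-channel values from observed rewards; `reward_fn` and all parameter values are placeholders.

```python
import random

def q_learn_channel(n_channels, reward_fn, episodes=5000,
                    alpha=0.1, epsilon=0.1):
    """Stateless Q-learning for unlicensed-channel selection.

    Each episode the base station picks a channel (epsilon-greedy),
    observes a reward (a stand-in for measured throughput under
    interference), and nudges that channel's Q-value toward it.
    """
    Q = [0.0] * n_channels
    for _ in range(episodes):
        if random.random() < epsilon:
            ch = random.randrange(n_channels)               # explore
        else:
            ch = max(range(n_channels), key=Q.__getitem__)  # exploit
        Q[ch] += alpha * (reward_fn(ch) - Q[ch])            # TD-style update
    return Q

# Toy run: channel 2 offers the best average reward and should win.
print(q_learn_channel(4, lambda ch: random.gauss([1, 2, 5, 3][ch], 1.0)))
```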
{"_id": "b1f53c9ca221f4772155c7e49cf3e298e8648840", "title": "Obfuscation procedure based in dead code insertion into crypter", "text": "The threat that attacks cyberspace is known as malware. In order to infect the technologic devices that are attacked, malware needs to evade the different antivirus systems. To avoid detection, an obfuscation technique must be applied so malware is updated and ready to be performed. No obstant, the technique implementation presents difficulties in terms of its required ability, evasion tests and infection functionality that turn outs to be a problem to keep malware updated. Therefore, a procedure is proposed that allows applying AVFUCKER or DSPLIT techniques. The purpose is to optimize the required technical means, reduce the antivirus analysis and malware functionality check times."}
{"_id": "a07c0952053f83ae9f32d955d6bb36e60f7a8dd0", "title": "GRAPHITE : Polyhedral Analyses and Optimizations for GCC", "text": "We present a plan to add loop nest optimizations in GCC based on polyhedral representations of loop nests. We advocate a static analysis approach based on a hierarchy of interchangeable abstractions with solvers that range from the exact solvers such as OMEGA, to faster but less precise solvers based on more coarse abstractions. The intermediate representation GRAPHITE1 (GIMPLE Represented as Polyhedra with Interchangeable Envelopes), built on GIMPLE and the natural loops, hosts the high level loop transformations. We base this presentation on the WRaP-IT project developed in the Alchemy group at INRIA Futurs and Paris-Sud University, on the PIPS compiler developed at \u00c9cole des mines de Paris, and on a joint work with several members of the static analysis and polyhedral compilation community in France. The main goal of this project is to bring more high level loop optimizations to GCC: loop fusion, tiling, strip mining, etc. Thanks to the 1This work was partially supported by ACI/APRON. WRaP-IT experience, we know that the polyhedral analyzes and transformations are affordable in a production compiler. A second goal of this project is to experiment with compile time reduction versus attainable precision when replacing operations on polyhedra with faster operations on more abstract domains. However, the use of a too coarse representation for computing might also result in an over approximated solution that cannot be used in subsequent computations. There exists a trade off between speed of the computation and the attainable precision that has not yet been analyzed for real world programs."}
{"_id": "b9f75b2fa01347b1f521935354f6724a418f8fc8", "title": "From piecemeal to configurational representation of faces.", "text": "Unlike older children and adults, children of less than about 10 years of age remember photographs of faces presented upside down almost as well as those shown upright and are easily fooled by simple disguises. The development at age 10 of the ability to encode orientation-specific configurational aspects of a face may reflect completion of certain maturational changes in the right cerebral hemisphere."}
{"_id": "ecfc3bc6294e34f3f9c9d1c0010d756ad345c6a1", "title": "Learning and Recognition of Clothing Genres From Full-Body Images", "text": "According to the theory of clothing design, the genres of clothes can be recognized based on a set of visually differentiable style elements, which exhibit salient features of visual appearance and reflect high-level fashion styles for better describing clothing genres. Instead of using less-discriminative low-level features or ambiguous keywords to identify clothing genres, we proposed a novel approach for automatically classifying clothing genres based on the visually differentiable style elements. A set of style elements, that are crucial for recognizing specific visual styles of clothing genres, were identified based on the clothing design theory. In addition, the corresponding salient visual features of each style element were identified and formulated with variables that can be computationally derived with various computer vision algorithms. To evaluate the performance of our algorithm, a dataset containing 3250 full-body shots crawled from popular online stores was built. Recognition results show that our proposed algorithms achieved promising overall precision, recall, and ${F}$ -score of 88.76%, 88.53%, and 88.64% for recognizing upperwear genres, and 88.21%, 88.17%, and 88.19% for recognizing lowerwear genres, respectively. The effectiveness of each style element and its visual features on recognizing clothing genres was demonstrated through a set of experiments involving different sets of style elements or features. In summary, our experimental results demonstrate the effectiveness of the proposed method in clothing genre recognition."}
{"_id": "448787207f4c35772dafd2270f545761caed57f4", "title": "Teaching Introductory Artificial Intelligence with Pac-Man", "text": "The projects that we have developed for UC Berkeley\u2019s introductory artificial intelligence (AI) course teach foundational concepts using the classic video game Pac-Man. There are four project topics: state-space search, multi-agent search, probabilistic inference, and reinforcement learning. Each project requires students to implement general-purpose AI algorithms and then to inject domain knowledge about the PacMan environment using search heuristics, evaluation functions, and feature functions. We have found that the Pac-Man theme adds consistency to the course, as well as tapping in to students\u2019 excitement about video games."}
{"_id": "c9dd44cf1ee964d8794a3cc51471ae1d95fd66bf", "title": "Considering Social and Emotional Artificial Intelligence", "text": "This paper presents the concepts of social and emotional intelligence as elements of human intelligence that are complementary to the intelligence assessed by the Turing Test. We argue that these elements are gaining importance as human users are increasingly conceptualising machines as social entities. We describe an implementation of Sensitive Artificial Listeners which provides a hands-on example of technology with some emotional and social skills, and discuss first elements of test methodologies for such technology."}
{"_id": "02413be569417baf09b1b3e1132bb9b94eb19733", "title": "Low numeracy and dyscalculia : identification and intervention", "text": "One important factor in the failure to learn arithmetic in the normal way is an endogenous core deficit in the sense of number. This has been associated with low numeracy in general (e.g. Halberda et al. in Nature 455:665\u2013668, 2008) and with dyscalculia more specifically (e.g. Landerl et al. in Cognition 93:99\u2013125, 2004). Here, we describe straightforward ways of identifying this deficit, and offer some new ways of strengthening the sense of number using learning technologies."}
{"_id": "c5dc853ea2c2e00af7e83a47ea12082a8f0a470c", "title": "Visualizing LSTM decisions", "text": "Long Short-Term Memory (LSTM) recurrent neural networks are renowned for being uninterpretable \"black boxes\". In the medical domain where LSTMs have shown promise, this is specifically concerning because it is imperative to understand the decisions made by machine learning models in such acute situations. This study employs techniques used in the Convolutional Neural Network domain to elucidate the operations that LSTMs perform on time series. The visualization techniques include input saliency by means of occlusion and derivatives, class mode visualization, and temporal outputs. Moreover, we demonstrate that LSTMs appear to extract features similar to those extracted by wavelets. It was found that deriving the inputs for saliency is a poor approximation and occlusion is a better approach. Moreover, analyzing LSTMs on different sets of data provide novel interpretations."}
{"_id": "69cf637170453ed80625559326d417c291dded4a", "title": "A survey of performance measures to evaluate ego-lane estimation and a novel sensor-independent measure along with its applications", "text": "Lane estimation plays a central role for Driver Assistance Systems, therefore many approaches have been proposed to measure its performance. However, no commonly agreed metric exists. In this work, we first present a detailed survey of the current measures. Most of them apply pixel-level benchmarks on camera images and require a time-consuming and fault-prone labeling process. Moreover, these metrics cannot be used to assess other sources such as the detected guardrails, curbs or other vehicles. Therefore, we introduce an efficient and sensor-independent metric, which provides an objective and intuitive self-assessment for the entire road estimation process at multiple levels: individual detectors, lane estimation itself, and the target applications (e.g., lane keeping system). Our metric does not require a high labeling effort and can be used both online and offline. By selecting the evaluated points in specific distances, it can be applied to any road model representation. By comparing in 2D vehicle coordinate system, two possibilities exist to generate the ground-truth: the human-driven path or the expensive alternative with DGPS and detailed maps. This paper applies both methods and reveals that the human-driven path also qualifies for this task and it is applicable to scenarios without GPS signal, e.g., tunnel. Although the lateral offset between reference and detection is widely used in the majority of works, this paper shows that another criterion, the angle deviation, is more appropriate. Finally, we compare our metric with other state-of-the-art metrics using real data recordings from different scenarios."}
{"_id": "caf31ab71b3b45e9881ec4b036067b5854b6841c", "title": "Why and How Investors Use ESG Information : Evidence from a Global Survey", "text": "Using survey data from a sample of senior investmen professionals from mainstream (i.e. not SRI funds ) investment organizations we provide insights into w hy and how investors use reported environmental, social and governance (ESG) information. The primar y reason survey respondents consider ESG information in investment decisions is because they consider it financially material to investment performance. ESG information is perceived to provid e information primarily about risk rather than a company\u2019s competitive positioning. There is no one siz fits all, with the financial materiality of di fferent ESG issues varying across sectors. Lack of comparab ility due to the lack of reporting standards is the primary impediment to the use of ESG information. M ost frequently, the information is used to screen companies with the most often used method being neg ative screening. However, negative screening is perceived as the least investment beneficial while ful integration into stock valuation and positive screening considered more beneficial. Respondents expect nega tiv screening to be used less in the future, while positive screening and active ownership to be used more."}
{"_id": "bd483defd23589a08065e9dcc7713e928abacfbd", "title": "The trouble with overconfidence.", "text": "The authors present a reconciliation of 3 distinct ways in which the research literature has defined overconfidence: (a) overestimation of one's actual performance, (b) overplacement of one's performance relative to others, and (c) excessive precision in one's beliefs. Experimental evidence shows that reversals of the first 2 (apparent underconfidence), when they occur, tend to be on different types of tasks. On difficult tasks, people overestimate their actual performances but also mistakenly believe that they are worse than others; on easy tasks, people underestimate their actual performances but mistakenly believe they are better than others. The authors offer a straightforward theory that can explain these inconsistencies. Overprecision appears to be more persistent than either of the other 2 types of overconfidence, but its presence reduces the magnitude of both overestimation and overplacement."}
{"_id": "24aad39e3c07e39478a85947d674df7248c135e9", "title": "Dynamic XML documents with distribution and replication", "text": "The advent of XML as a universal exchange format, and of Web services as a basis for distributed computing, has fostered the apparition of a new class of documents: dynamic XML documents. These are XML documents where some data is given explicitly while other parts are given only intensionally by means of embedded calls to web services that can be called to generate the required information. By the sole presence of Web services, dynamic documents already include inherently some form of distributed computation. A higher level of distribution that also allows (fragments of) dynamic documents to be distributed and/or replicated over several sites is highly desirable in today's Web architecture, and in fact is also relevant for regular (non dynamic) documents.The goal of this paper is to study new issues raised by the distribution and replication of dynamic XML data. Our study has originated in the context of the Active XML system [1, 3, 22] but the results are applicable to many other systems supporting dynamic XML data. Starting from a data model and a query language, we describe a complete framework for distributed and replicated dynamic XML documents. We provide a comprehensive cost model for query evaluation and show how it applies to user queries and service calls. Finally, we describe an algorithm that, for a given peer, chooses data and services that the peer should replicate to improve the efficiency of maintaining and querying its dynamic data."}
{"_id": "738feaca3e125c6c6eae4c160b60fdedd12c89aa", "title": "A measurement study of google play", "text": "Although millions of users download and use third-party Android applications from the Google Play store, little information is known on an aggregated level about these applications. We have built PlayDrone, the first scalable Google Play store crawler, and used it to index and analyze over 1,100,000 applications in the Google Play store on a daily basis, the largest such index of Android applications. PlayDrone leverages various hacking techniques to circumvent Google's roadblocks for indexing Google Play store content, and makes proprietary application sources available, including source code for over 880,000 free applications. We demonstrate the usefulness of PlayDrone in decompiling and analyzing application content by exploring four previously unaddressed issues: the characterization of Google Play application content at large scale and its evolution over time, library usage in applications and its impact on application portability, duplicative application content in Google Play, and the ineffectiveness of OAuth and related service authentication mechanisms resulting in malicious users being able to easily gain unauthorized access to user data and resources on Amazon Web Services and Facebook."}
{"_id": "8961cbea2ebb3d1609fb1ca60155685a5e34597d", "title": "Asymmetrical Duty Cycle-Controlled LLC Resonant Converter With Equivalent Switching Frequency Doubler", "text": "In the conventional full-bridge LLC converter, the duty cycle is kept as 0.5 and the phase-shift angle of the half-bridge modules is 0\u00b0 to be a symmetrical operation, which makes the resonant tank operating frequency only equal to the switching frequency of the power devices. By regulating the duty cycles of the upper and lower switches in each half-bridge module to be 0.75 and 0.25 and the phase-shift angle of the half-bridge modules to be 180\u00b0, the asymmetrical duty cycle controlled full-bridge LLC resonant converter is derived. The proposed asymmetrical duty cycle controlled scheme halves the switching frequency of the primary switches. As a result, the driving losses are effectively reduced. Compared with the conventional full-bridge LLC converter, the soft-switching condition of the derived asymmetrical controlled LLC converter becomes easier to reduce the resonant current. Consequently, the conduction losses are decreased and the conversion efficiency is improved. The asymmetrical control scheme can be also extended to the stacked structure for high input voltage applications. Finally, two LLC converter prototypes both with 200-kHz resonant frequency for asymmetrical and symmetrical control schemes are built and compared to validate the effectiveness of the proposed control strategy."}
{"_id": "2a51e6646261ed6dd4adde885f6e7d76c94dc11e", "title": "Behavioural Analytics using Process Mining in On-line Advertising", "text": "Online behavioural targeting is one of the most popular business strategies on the display advertising today. It is based primarily on analysing web user behavioural data with the usage of machine learning techniques with the aim to optimise web advertising. Being able to identify \u201cunknown\u201d and \u201cfirst time seen\u201d customers is of high importance in online advertising since a successful guess could identify \u201cpossible prospects\u201d who would be more likely to purchase an advertisement\u2019s product. By identifying prospective customers, online advertisers may be able to optimise campaign performance, maximise their revenue as well as deliver advertisements tailored to a variety of user interests. This work presents a hybrid approach benchmarking machine-learning algorithms and attribute preprocessing techniques in the context of behavioural targeting in process oriented environments. The performance of our suggested methodology is evaluated using the key performance metric in online advertising which is the predicted conversion rate. Our experimental results indicate that the presented process mining framework can significantly identify prospect customers in most cases. Our results seem promising, indicating that there is a need for further workflow research in online display"}
{"_id": "2151a214aca6e72ee2980ae8cbf7be47fed0cb7a", "title": "Design and use paradigms for Gazebo, an open-source multi-robot simulator", "text": "Simulators have played a critical role in robotics research as tools for quick and efficient testing of new concepts, strategies, and algorithms. To date, most simulators have been restricted to 2D worlds, and few have matured to the point where they are both highly capable and easily adaptable. Gazebo is designed to fill this niche by creating a 3D dynamic multi-robot environment capable of recreating the complex worlds that would be encountered by the next generation of mobile robots. Its open source status, fine grained control, and high fidelity place Gazebo in a unique position to become more than just a stepping stone between the drawing board and real hardware: data visualization, simulation of remote environments, and even reverse engineering of blackbox systems are all possible applications. Gazebo is developed in cooperation with the Player and Stage projects (Gerkey, B. P., et al., July 2003), (Gerkey, B. P., et al., May 2001), (Vaughan, R. T., et al., Oct. 2003), and is available from http://playerstage.sourceforge.net/gazebo/ gazebo.html."}
{"_id": "33691b7e8c980ea2dea5b5d3d7bee661e9623715", "title": "Probabilistic Semi-Dense Mapping from Highly Accurate Feature-Based Monocular SLAM", "text": "In the last years several direct (i.e. featureless) monocular SLAM approaches have appeared showing impressive semi-dense or dense scene reconstructions. These works have questioned the need of features, in which consolidated SLAM techniques of the last decade were based. In this paper we present a novel feature-based monocular SLAM system that is more robust, gives more accurate camera poses, and obtains comparable or better semi-dense reconstructions than the current state of the art. Our semi-dense mapping operates over keyframes, optimized by local bundle adjustment, allowing to obtain accurate triangulations from wide baselines. Our novel method to search correspondences, the measurement fusion and the inter-keyframe depth consistency tests allow to obtain clean reconstructions with very few outliers. Against the current trend in direct SLAM, our experiments show that by decoupling the semi-dense reconstruction from the trajectory computation, the results obtained are better. This opens the discussion on the benefits of features even if a semi-dense reconstruction is desired."}
{"_id": "448babca3ceca4a4f2b2ce1d2e87cd4ff43e337a", "title": "SeqSLAM: Visual route-based navigation for sunny summer days and stormy winter nights", "text": "Learning and then recognizing a route, whether travelled during the day or at night, in clear or inclement weather, and in summer or winter is a challenging task for state of the art algorithms in computer vision and robotics. In this paper, we present a new approach to visual navigation under changing conditions dubbed SeqSLAM. Instead of calculating the single location most likely given a current image, our approach calculates the best candidate matching location within every local navigation sequence. Localization is then achieved by recognizing coherent sequences of these \u201clocal best matches\u201d. This approach removes the need for global matching performance by the vision front-end - instead it must only pick the best match within any short sequence of images. The approach is applicable over environment changes that render traditional feature-based techniques ineffective. Using two car-mounted camera datasets we demonstrate the effectiveness of the algorithm and compare it to one of the most successful feature-based SLAM algorithms, FAB-MAP. The perceptual change in the datasets is extreme; repeated traverses through environments during the day and then in the middle of the night, at times separated by months or years and in opposite seasons, and in clear weather and extremely heavy rain. While the feature-based method fails, the sequence-based algorithm is able to match trajectory segments at 100% precision with recall rates of up to 60%."}
{"_id": "dda1822872942f658b89e7e1c1ffe08c35e7b290", "title": "Can you fool AI with adversarial examples on a visual Turing test?", "text": "Deep learning has achieved impressive results in many areas of Computer Vision and Natural Language Processing. Among others, Visual Question Answering (VQA), also referred to a visual Turing test, is considered one of the most compelling problems, and recent deep learning models have reported significant progress in vision and language modeling. Although Artificial Intelligence (AI) is getting closer to passing the visual Turing test, at the same time the existence of adversarial examples to deep learning systems may hinder the practical application of such systems. In this work, we conduct the first extensive study on adversarial examples for VQA systems. In particular, we focus on generating targeted adversarial examples for a VQA system while the target is considered to be a question-answer pair. Our evaluation shows that the success rate of whether a targeted adversarial example can be generated is mostly dependent on the choice of the target question-answer pair, and less on the choice of images to which the question refers. We also report the language prior phenomenon of a VQA model, which can explain why targeted adversarial examples are hard to generate for some question-answer targets. We also demonstrate that a compositional VQA architecture is slightly more resilient to adversarial attacks than a non-compositional one. Our study sheds new light on how to build deep vision and language resilient models robust against adversarial examples."}
{"_id": "4ce67cfe2a9b563c2aff4e99aad2be84096f66ee", "title": "Designing a mobile game to thwarts malicious IT threats: A phishing threat avoidance perspective", "text": "Phishing is an online identity theft, which aims to steal sensitive information such as username, password and online banking details from victims. To prevent this, phishing education needs to be considered. Game based education is becoming more and more popular. This paper introduces a mobile game prototype for the android platform based on a story, which simplifies and exaggerates real life. The elements of a game design framework for avoiding phishing attacks were used to address the game design issues and game design principles were used as a set of guidelines for structuring and presenting information. The overall mobile game design was aimed to enhance the user\u2019s avoidance behaviour through motivation to protect themselves against phishing threats. The prototype mobile game design was presented on MIT App Inventor Emulator."}
{"_id": "201f8165d56e65a77eaf1946c4891a4d22a0aa87", "title": "Fibonacci Exposure Bracketing for High Dynamic Range Imaging", "text": "Exposure bracketing for high dynamic range (HDR) imaging involves capturing several images of the scene at different exposures. If either the camera or the scene moves during capture, the captured images must be registered. Large exposure differences between bracketed images lead to inaccurate registration, resulting in artifacts such as ghosting (multiple copies of scene objects) and blur. We present two techniques, one for image capture (Fibonacci exposure bracketing) and one for image registration (generalized registration), to prevent such motion-related artifacts. Fibonacci bracketing involves capturing a sequence of images such that each exposure time is the sum of the previous N(N > 1) exposures. Generalized registration involves estimating motion between sums of contiguous sets of frames, instead of between individual frames. Together, the two techniques ensure that motion is always estimated between frames of the same total exposure time. This results in HDR images and videos which have both a large dynamic range and minimal motion-related artifacts. We show, by results for several real-world indoor and outdoor scenes, that the proposed approach significantly outperforms several existing bracketing schemes."}
{"_id": "4bfce93a0d431e9f9c908bdf467f8b58fbb719f1", "title": "Harvesting Mid-level Visual Concepts from Large-Scale Internet Images", "text": "Obtaining effective mid-level representations has become an increasingly important task in computer vision. In this paper, we propose a fully automatic algorithm which harvests visual concepts from a large number of Internet images (more than a quarter of a million) using text-based queries. Existing approaches to visual concept learning from Internet images either rely on strong supervision with detailed manual annotations or learn image-level classifiers only. Here, we take the advantage of having massive well organized Google and Bing image data, visual concepts (around 14, 000) are automatically exploited from images using word-based queries. Using the learned visual concepts, we show state-of-the-art performances on a variety of benchmark datasets, which demonstrate the effectiveness of the learned mid-level representations: being able to generalize well to general natural images. Our method shows significant improvement over the competing systems in image classification, including those with strong supervision."}
{"_id": "4ca3f1570d146d01395e5c16081f5fb6a748e7f8", "title": "Viewpoint-independent book spine segmentation", "text": "We propose a method to precisely segment books on bookshelves in images taken from general viewpoints. The proposed segmentation algorithm overcomes difficulties due to text and texture on book spines, various book orientations under perspective projection, and book proximity. A shape dependent active contour is used as a first step to establish a set of book spine candidates. A subset of these candidates are selected using spatial constraints on the assembly of spine candidates by formulating the selection problem as the maximal weighted independent set (MWIS) of a graph. The segmented book spines may be used by recognition systems (e.g., library automation), or rendered in computer graphics applications. We also propose a novel application that uses the segmented book spines to assist users in bookshelf reorganization or to modify the image to create a bookshelf with a tidier look. Our method was successfully tested on challenging sets of images."}
{"_id": "f1862403e388297fe6d53382762927e32af0a7aa", "title": "An improved fuzzy clustering method using modified Fukuyama-Sugeno cluster validity index", "text": "The objective of clustering algorithms is to group similar patterns in one class and dissimilar patterns in disjoint classes. This article proposes a novel algorithm for fuzzy partitional clustering with an aim to minimize a composite objective function, defined using the Fukuyama-Sugeno cluster validity index. The optimization of this objective function tries to minimize the separation between clusters of a data set and maximize the compactness of a certain cluster. But in certain cases, such as a data set having overlapping clusters, this approach leads to poor clustering results. Thus we introduce a new parameter in the objective function which enables us to yield more accurate clustering results. The algorithm has been validated with some artificial and real world datasets."}
{"_id": "2d2115000efc5dea918c2ff2f6a7f6c8645397fa", "title": "Secure Conjunctive Keyword Search over Encrypted Data", "text": "We study the setting in which a user stores encrypted documents (e.g. e-mails) on an untrusted server. In order to retrieve documents satisfying a certain search criterion, the user gives the server a capability that allows the server to identify exactly those documents. Work in this area has largely focused on search criteria consisting of a single keyword. If the user is actually interested in documents containing each of several keywords (conjunctive keyword search) the user must either give the server capabilities for each of the keywords individually and rely on an intersection calculation (by either the server or the user) to determine the correct set of documents, or alternatively, the user may store additional information on the server to facilitate such searches. Neither solution is desirable; the former enables the server to learn which documents match each individual keyword of the conjunctive search and the latter results in exponential storage if the user allows for searches on every set of keywords. We define a security model for conjunctive keyword search over encrypted data and present the first schemes for conducting such searches securely. We propose first a scheme for which the communication cost is linear in the number of documents, but that cost can be incurred \u201coffline\u201d before the conjunctive query is asked. The security of this scheme relies on the Decisional Diffie-Hellman (DDH) assumption. We propose a second scheme whose communication cost is on the order of the number of keyword fields and whose security relies on a new hardness assumption."}
{"_id": "47ba21d48212bb7ac3969ee7dbb8a3fb4a9d15c6", "title": "How to Enhance Cloud Architectures to Enable Cross-Federation: Towards Interoperable Storage Providers", "text": "Small/medium cloud storage providers can hardly compete with the biggest cloud players such as Google, Amazon, Dropbox, etc. As a consequence, the cloud storage market depends on such mega-providers and each small/medium provider cannot face alone the challenge of Big Data storage. A possible solution consists in establishing stronger partnerships among small-medium providers where they can borrow/lend resources each other, according to the rules of the federated cloud ecosystem they belong to. According to such an approach, the challenge consists in creating federated cloud ecosystems able to compete with mega-provides and one of the major problems for the achievement of such an ecosystem is the management of inter-domain communications. In this paper, we propose an architecture addressing such an issue. In particular, we present and test a solution integrating the CLEVER Message Oriented Middleware (MOM) with the Hadoop Distribute File System (HDFS), i.e., one of the major massive storage solutions currently available on the market."}
{"_id": "8f343dccd9c87a1d388e8fbc0fa716f623d6e690", "title": "Monitoring and depth of strategy use in computer-based learning environments for science and history.", "text": "BACKGROUND\nSelf-regulated learning (SRL) models position metacognitive monitoring as central to SRL processing and predictive of student learning outcomes (Winne & Hadwin, 2008; Zimmerman, 2000). A body of research evidence also indicates that depth of strategy use, ranging from surface to deep processing, is predictive of learning performance.\n\n\nAIMS\nIn this study, we investigated the relationships among the frequency of metacognitive monitoring and the utilization of deep and surface-level strategies, and the connections between these SRL processes and learning outcomes across two academic domains, science and history.\n\n\nSAMPLE\nThis was a secondary data analysis of two studies. The first study sample was 170 undergraduate students from a University in the south-eastern United States. The second study sample consisted of 40 US high school students in the same area.\n\n\nMETHODS\nWe collected think-aloud protocol SRL and knowledge measure data and conducted both structural equation\u00a0modelling and path analysis to investigate our research questions.\n\n\nRESULTS\nFindings showed across both studies and two distinct academic domains, students who enacted more frequent monitoring also enacted more frequent deep strategies resulting in better performance on academic evaluations.\n\n\nCONCLUSIONS\nThese findings suggest the importance of measuring not only what depth of strategies learners use, but also the degree to which they monitor their learning. Attention to both is needed in research and practice."}
{"_id": "3ab69a0a22a8c53a226ec73b11b51830a3adc351", "title": "Question Type Classification Using a Part-of-Speech Hierarchy", "text": "Question type (or answer type) classification is the task of determining the correct type of the answer expected to a given query. This is often done by defining or discovering syntactic patterns that represent the structure of typical queries of each type, and classify a given query according to which pattern they satisfy. In this paper, we combine the idea of using informer spans as patterns with our own part-of-speech hierarchy in order to propose both a new approach to pattern-based question type classification and a new way of discovering the informers to be used as patterns. We show experimentally that using our part-of-speech hierarchy greatly improves type classification results, and allows our system to learn valid new informers."}
{"_id": "15f795d436aaa9e77ccccb00b9df49bf0127f8b1", "title": "Dissociation between face perception and face memory in adults, but not children, with developmental prosopagnosia", "text": "Cognitive models propose that face recognition is accomplished through a series of discrete stages, including perceptual representation of facial structure, and encoding and retrieval of facial information. This implies that impaired face recognition can result from failures of face perception, face memory, or both. Studies of acquired prosopagnosia, autism spectrum disorders, and the development of normal face recognition support the idea that face perception and face memory are distinct processes, yet this distinction has received little attention in developmental prosopagnosia (DP). To address this issue, we tested the face perception and face memory of children and adults with DP. By definition, face memory is impaired in DP, so memory deficits were present in all participants. However, we found that all children, but only half of the adults had impaired face perception. Thus, results from adults indicate that face perception and face memory are dissociable, while the results from children provide no evidence for this division. Importantly, our findings raise the possibility that DP is qualitatively different in childhood versus adulthood. We discuss theoretical explanations for this developmental pattern and conclude that longitudinal studies are necessary to better understand the developmental trajectory of face perception and face memory deficits in DP."}
{"_id": "1ed2f698fe40a689309df8af4b9ccb448440a333", "title": "The public health risks of media violence: a meta-analytic review.", "text": "OBJECTIVE\nTo conduct a meta-analytic review of studies that examine the impact of violent media on aggressive behavior and to determine whether this effect could be explained through methodological problems inherent in this research field.\n\n\nSTUDY DESIGN\nA detailed literature search identified peer-reviewed articles addressing media violence effects. Effect sizes were calculated for all studies. Effect sizes were adjusted for observed publication bias.\n\n\nRESULTS\nPublication bias was a problem for studies of aggressive behavior, and methodological problems such as the use of poor aggression measures inflated effect size. Once corrected for publication bias, studies of media violence effects provided little support for the hypothesis that media violence is associated with higher aggression. The corrected overall effect size for all studies was r = .08.\n\n\nCONCLUSIONS\nResults from the current analysis do not support the conclusion that media violence leads to aggressive behavior. It cannot be concluded at this time that media violence presents a significant public health risk."}
{"_id": "ce32df632ab7d448646e5453f68665d6fad74293", "title": "A UHF class E2 DC/DC converter using GaN HEMTs", "text": "In this paper, the design of a class E2 resonant DC/DC converter, operating at UHF band, is proposed. Combining the use of GaN HEMT devices, both for the inverter and the synchronous rectifier, with high Q lumped-element multi-harmonic matching networks, a peak efficiency value of 72% has been obtained at 780 MHz with a 10.3 W output power. By means of a Pulse Width Modulation (PWM) over the gate driving envelope, the output voltage may be controlled while keeping low switching losses, with an estimated small-signal bandwidth (BW) and a slew rate of 11 MHz and 630 V/\u00b5Seg, respectively."}
{"_id": "f1d72a08c83d9ffb4fbd5eea529f2ad326a7289e", "title": "Vaping on Instagram: cloud chasing, hand checks and product placement.", "text": "INTRODUCTION\nThis study documented images posted on Instagram of electronic cigarettes (e-cigarette) and vaping (activity associated with e-cigarette use). Although e-cigarettes have been studied on Twitter, few studies have focused on Instagram, despite having 500 million users. Instagram's emphasis on images warranted investigation of e-cigarettes, as past tobacco industry strategies demonstrated that images could be used to mislead in advertisements, or normalise tobacco-related behaviours. Findings should prove informative to tobacco control policies in the future.\n\n\nMETHODS\n3\u2005months of publicly available data were collected from Instagram, including images and associated metadata (n=2208). Themes of images were classified as (1) activity, for example, a person blowing vapour; (2) product, for example, a personal photo of an e-cigarette device; (3) advertisement; (4) text, for example, 'meme' or image containing mostly text and (5) other. User endorsement (likes) of each type of image was recorded. Caption text was analysed to explore different trends in vaping and e-cigarette-related text.\n\n\nRESULTS\nAnalyses found that advertisement-themed images were most common (29%), followed by product (28%), and activity (18%). Likes were more likely to accompany activity and product-themed images compared with advertisement or text-themed images (p<0.01). Vaping-related text greatly outnumbered e-cigarette-related text in the image captions.\n\n\nCONCLUSIONS\nInstagram affords its users the ability to post images of e-cigarette-related behaviours and gives advertisers the opportunity to display their product. Future research should incorporate novel data streams to improve public health surveillance, survey development and educational campaigns."}
{"_id": "4c059a8900d24058c9cb27b85df96cc430a79970", "title": "Making Scheduling \"Cool\": Temperature-Aware Workload Placement in Data Centers", "text": "Trends towards consolidation and higher-density computing configurations make the problem of heat management one of the critical challenges in emerging data centers. Conventional approaches to addressing this problem have focused at the facilities level to develop new cooling technologies or optimize the delivery of cooling. In contrast to these approaches, our paper explores an alternate dimension to address this problem, namely a systems-level solution to control the heat generation through temperatureaware workload placement. We first examine a theoretic thermodynamic formulation that uses information about steady state hot spots and cold spots in the data center and develop real-world scheduling algorithms. Based on the insights from these results, we develop an alternate approach. Our new approach leverages the non-intuitive observation that the source of cooling inefficiencies can often be in locations spatially uncorrelated with its manifested consequences; this enables additional energy savings. Overall, our results demonstrate up to a factor of two reduction in annual data center cooling costs over location-agnostic workload distribution, purely through software optimizations without the need for any costly capital investment."}
{"_id": "55877f7c574b4686c98e70b96971722c1593ba9f", "title": "Calorie restriction extends Saccharomyces cerevisiae lifespan by increasing respiration", "text": "Calorie restriction (CR) extends lifespan in a wide spectrum of organisms and is the only regimen known to lengthen the lifespan of mammals. We established a model of CR in budding yeast Saccharomyces cerevisiae. In this system, lifespan can be extended by limiting glucose or by reducing the activity of the glucose-sensing cyclic-AMP-dependent kinase (PKA). Lifespan extension in a mutant with reduced PKA activity requires Sir2 and NAD (nicotinamide adenine dinucleotide). In this study we explore how CR activates Sir2 to extend lifespan. Here we show that the shunting of carbon metabolism toward the mitochondrial tricarboxylic acid cycle and the concomitant increase in respiration play a central part in this process. We discuss how this metabolic strategy may apply to CR in animals."}
{"_id": "55e1a5110f0d6eaf814afc886cd28a822dd4f484", "title": "Internet of Things: A review to support IoT architecture's design", "text": "Internet of Things (IoT) is a state-of-the-art field that is affecting every aspect of human life. This paper presents a state-of-the-art overview of IoT challenges, available standards, and popular architectures. The aim is to guide future researchers in the field of IoT by highlighting the main research gaps. Moreover, it provides research recommendations to help in designing future IoT architectures that fit IoT environment characteristics."}
{"_id": "6cc9b854d4d55ff76881685b090569da499022d0", "title": "Sampling-oscilloscope measurement of a microwave mixer with single-digit phase accuracy", "text": "We describe a straightforward method of separately characterizing up- and down-conversion in microwave mixers using a sampling oscilloscope. The method mismatch-corrects the results, determines both magnitude and phase, and uses a novel time-base correction scheme to improve the accuracy of the measurements. We estimate our measurement accuracy to be on the order of a tenth of a decibel in magnitude and a few degrees in phase. We use the method to characterize the magnitude and phase reciprocity of a microwave mixer."}
{"_id": "a551a1d57dd41603acd635c9bd6ccc27840932de", "title": "Feedforward neural network and ANFIS-based approaches to forecasting the off-cam energy characteristics of Kaplan turbine", "text": "The determination of the energy characteristics of a Kaplan hydraulic turbine is based on numerous measuring points during extensive and expensive experimental model tests in laboratory and on-site prototype tests at the hydropower plant. The results of those experimental researches are valuable insofar as they are detailed and comprehensive. In order to reduce the number of modes, in which the double-regulated turbine has to be tested with the aim of obtaining the off-cam energy characteristics in unknown operating modes, the application of contemporary artificial neural networks models is presented in the paper. The rationalization of the turbine test conditions may not be at the expense of the quality of the obtained characteristics. Two types of neural networks, feedforward neural networks and adaptive network-based fuzzy inference system with different partitioning methods, were used. The reliability of applied method was considered by analyzing and validating the predicted turbine energy parameters with the results obtained in the highly sophisticated laboratory."}
{"_id": "4ba01f5f01a3e6cc9f7f08529a49b024148c8843", "title": "Developing a data quality framework for asset management in engineering organisations", "text": "Data Quality (DQ) is seen as critical to effective business decision-making. However, maintaining DQ is often acknowledged as problematic. Asset data is the key enabler in gaining control of assets. The quality asset data provides the foundation for effective Asset Management (AM). Researches have indicated that achieving AM DQ is the key challenge engineering organisations face today. This paper investigates the DQ issues emerging from the unique nature of engineering AM data. It presents an exploratory research of a large scale national-wide DQ survey on how Australian engineering organisations address DQ issues, and proposes an AM specific DQ framework."}
{"_id": "eecc54b1a2c1ded1997dcbfd9188ba52e88e1217", "title": "Maintenance scheduling of geographically distributed assets with prognostics information", "text": "Maintenance scheduling for high value assets has been studied for decades and is still a crucial area of research with new technological advancements. The main dilemma of maintenance scheduling is to avoid failures while preventing unnecessary maintenance. The technological advancements in real time monitoring and computational science make tracking asset health and forecasting asset failures possible. The usage and maintenance of assets can be planned more efficiently with the forecasted failure probability and remaining useful life (i.e., prognostic information). The prognostic information is time sensitive. Geographically distributed assets such as off-shore wind farms and railway switches add another complexity to the maintenance scheduling problem with the required time of travel to reach these assets. Thus, the travel time between geographically distributed assets should be incorporated in the maintenance scheduling when one technician (or team) is responsible for the maintenance of multiple assets. This paper presents a methodology to schedule the maintenance of geographically distributed assets using their prognostic information. Genetic Algorithm based solution incorporating the daily work duration of the maintenance team is also presented in the paper. \u00a9 2015 Elsevier B.V. All rights reserved."}
{"_id": "7b9f0330d698580902d4ecd65b0dc1d86d78d345", "title": "PNUTS: Yahoo!'s hosted data serving platform", "text": "We describe PNUTS, a massively parallel and geographically distributed database system for Yahoo!\u2019s web applications. PNUTS provides data storage organized as hashed or ordered tables, low latency for large numbers of concurrent requests including updates and queries, and novel per-record consistency guarantees. It is a hosted, centrally managed, and geographically distributed service, and utilizes automated load-balancing and failover to reduce operational complexity. The first version of the system is currently serving in production. We describe the motivation for PNUTS and the design and implementation of its table storage and replication layers, and then present experimental results."}
{"_id": "00f7b192212078fc8afcbe504cc8caf57d8f73b5", "title": "The Part-Time Parliament", "text": "Recent archaeological discoveries on the island of Paxos reveal that the parliament functioned despite the peripatetic propensity of its part-time legislators. The legislators maintained consistent copies of the parliamentary record, despite their frequent forays from the chamber and the forgetfulness of their messengers. The Paxon parliament's protocol provides a new way of implementing the state machine approach to the design of distributed systems."}
{"_id": "85c6933942f7d1954c42e2fc031877fe1b44314c", "title": "Performance Analysis of IEEE 802.11ac DCF with Hidden Nodes", "text": "Recently, the IEEE 802.11 standard based Wireless Local Area Networks (WLAN) have become more popular and are widely deployed. It is anticipated that WLAN will play an important rule in the future wireless communication systems in order to provide several gigabits data rate. IEEE 802.11ac is one of the ongoing WLAN standard aiming to support very high throughput (VHT) with data rate of up to 6 Gbps below the 6 GHz band. In the development of IEEE 802.11ac standard, several new physical layer (PHY) and medium access control layer (MAC) features are taken into consideration, such as employing wider bandwidth in PHY and incrementing the limits of frame aggregation in MAC. However, due to the newly introduced features, some traditional techniques used in previous standards could face some problems. This paper presents a performance analysis of 802.11ac Distributed Coordination Function (DCF) in presence of hidden nodes in overlapping BSS (OBSS) environment. The effectiveness of DCF in IEEE 802.11ac WLAN when using different primary channels and different frequency bandwidth has also been discussed. Our results indicate that the traditional RTS/CTS handshake mechanism faces shortcomings and needs to be modified in order to support the newly defined 802.11ac amendment."}
{"_id": "4a7bbb5718449555f63eb45a1ab2c71fd212a75c", "title": "Advanced Compiler Optimizations for Supercomputers", "text": "Compilers for vector or multiprocessor computers must have certain optimization features to successfully generate parallel code."}
{"_id": "92bb2dbb32aeac65ebdb71988e99bbe7360574fc", "title": "Real time lane detection for autonomous vehicles", "text": "An increasing safety and reducing road accidents, thereby saving lives are one of great interest in the context of Advanced Driver Assistance Systems. Apparently, among the complex and challenging tasks of future road vehicles is road lane detection or road boundaries detection. It is based on lane detection (which includes the localization of the road, the determination of the relative position between vehicle and road, and the analysis of the vehiclepsilas heading direction). One of the principal approaches to detect road boundaries and lanes using vision system on the vehicle. However, lane detection is a difficult problem because of the varying road conditions that one can encounter while driving. In this paper, a vision-based lane detection approach capable of reaching real time operation with robustness to lighting change and shadows is presented. The system acquires the front view using a camera mounted on the vehicle then applying few processes in order to detect the lanes. Using a pair of hyperbolas which are fitting to the edges of the lane, those lanes are extracted using Hough transform. The proposed lane detection system can be applied on both painted and unpainted road as well as curved and straight road in different weather conditions. This approach was tested and the experimental results show that the proposed scheme was robust and fast enough for real time requirements. Eventually, a critical overview of the methods were discussed, their potential for future deployment were assist."}
{"_id": "f09a953114744dd234a24b4bf6cab103f44e1788", "title": "Cloud detection of remote sensing images by deep learning", "text": "Cloud detection plays a major role for remote sensing image processing. Most of the existed cloud detection methods use the low-level feature of the cloud, which often cause error result especially for thin cloud and complex scene. In this paper, a novel cloud detection method based on deep learning framework is proposed. The designed deep Convolutional Neural Networks (CNNs) consists of four convolutional layers and two fully-connected layers, which can mine the deep features of cloud. The image is firstly clustered into superpixels as sub-region through simple linear iterative cluster (SLIC) method. Through the designed network model, the probability of each superpixel that belongs to cloud region is predicted, so that the cloud probability map of the image is generated. Lastly, the cloud region is obtained according to the gradient of the cloud map. Through the proposed method, both thin cloud and thick cloud can be detected well, and the result is insensitive to complex scene. Experimental results indicate that the proposed method is more robust and effective than compared methods."}
{"_id": "55d8a79ec7ffcb95ded1531104b173a2995e45e6", "title": "A Case for Protecting Computer Games With SGX", "text": "The integrity and confidentiality of computer games has long been a concern of game developers, both in preventing players from cheating and from obtaining unlicensed copies of the software. Recently, Intel released SGX, which can provide new security guarantees for software developers to achieve an unprecedented level of software integrity and confidentiality. To explore how SGX can protect a computer game in practice, in this paper we make a first step of exploring new ways to protect the integrity and confidentiality of game code and data, and in doing so we have developed a framework and design principles for integrating games with SGX. We have applied our framework to demonstrate how it can be used to protect a real world computer game."}
{"_id": "81779d7aae4475502a9904be4b45d5cad541e1cb", "title": "Semantic Web Services-A Survey", "text": "The technology where the meaning of the information and the service of the web is defined by making the web to understand and satisfies the request of the people is called Semantic Web Services. That is the idea of having data on the web defined and linked in a way that it can be used by machines not just for display purpose, but for automation, integration and reuse of data across various application .The idea of the semantic is raised to overcome the limitation of the Web services such as Average WWW searches examines only about 25% of potentially relevant sites and return a lot of unwanted information, Information on web is not suitable for software agent and Doubling of size. It is built on top of the Web Services extended with rich semantic representations along with capabilities for automatic reasoning developed in the field of artificial intelligence. This survey attempts to give an overview of the underlying concepts and technologies along with the categorization, selection and discovery of services based on semantic."}
{"_id": "ac0d4a9ef44957db68e04729e4ca62dd292ac563", "title": "High quality lip-sync animation for 3D photo-realistic talking head", "text": "We propose a new 3D photo-realistic talking head with high quality, lip-sync animation. It extends our prior high-quality 2D photo-realistic talking head to 3D. An a/v recording of a person speaking a set of prompted sentences with good phonetic coverage for ~20-minutes is first made. We then use a 2D-to-3D reconstruction algorithm to automatically adapt a general 3D head mesh model to the person. In training, super feature vectors consisting of 3D geometry, texture and speech are augmented together to train a statistical, multi-streamed, Hidden Markov Model (HMM). The HMM is then used to synthesize both the trajectories of head motion animation and the corresponding dynamics of texture. The resultant 3D talking head animation can be controlled by the model predicted geometric trajectory while the articulator movements, e.g., lips, are rendered with dynamic 2D texture image sequences. Head motions and facial expression can also be separately controlled by manipulating corresponding parameters. In a real-time demonstration, the life-like 3D talking head can take any input text, convert it into speech and render lip-synced speech animation photo-realistically."}
{"_id": "90a754f597958a2717862fbaa313f67b25083bf9", "title": "A Review of Human Activity Recognition Methods", "text": "Recognizing human activities from video sequences or still images is a challenging task due to problems, such as background clutter, partial occlusion, changes in scale, viewpoint, lighting, and appearance. Many applications, including video surveillance systems, human-computer interaction, and robotics for human behavior characterization, require a multiple activity recognition system. In this work, we provide a detailed review of recent and state-of-the-art research advances in the field of human activity classification. We propose a categorization of human activity methodologies and discuss their advantages and limitations. In particular, we divide human activity classification methods into two large categories according to whether they use data from different modalities or not. Then, each of these categories is further analyzed into sub-categories, which reflect how they model human activities and what type of activities they are interested in. Moreover, we provide a comprehensive analysis of the existing, publicly available human activity classification datasets and examine the requirements for an ideal human activity recognition dataset. Finally, we report the characteristics of future research directions and present some open issues on human activity recognition."}
{"_id": "da62b00035a0bd2041a332d5c3ac6df4d52a7e3e", "title": "A Framework for Network Function Virtualization by Chang Lan Research Project", "text": "The proliferation of network processing appliances (\u201cMiddleboxes\u201d) has been accompanied by a growing recognition of the problems they bring, including expensive hardware and complex management. This recognition led the networking industry to launch a concerted effort towards Network Function Virtualization (NFV) with the goal of bringing greater openness and agility to network dataplanes. However, a closer look under the hood reveals a less rosy picture: NFV is currently replacing, on a one-to-one basis, monolithic hardware with monolithic software. Furthermore, while several efforts are exploring a new model for middlebox deployment in which a third-party offers middlebox processing as a service, no solutions address the confidentiality concerns. In order to process an organizations traffic, the cloud sees the traffic unencrypted. This means that the cloud now has access to potentially sensitive packet payloads and headers. In the first part of this thesis, we present E2, a scalable and application-agnostic scheduling framework for packet processing, and compare its performance to current approaches. E2 brings two benefits: (i) it allows developers to rely on external frameworkbased mechanisms for common tasks, freeing them to focus on their core application logic and (ii) it simplifies the operators responsibilities, as it both automates and consolidates common management tasks. In the following chapter, we then present Embark, the first system that enables a cloud provider to support middlebox outsourcing while maintaining the clients confidentiality. Embark supports a wide-range of middleboxes. Our evaluation shows that Embark supports these applications with competitive performance."}
{"_id": "75199dea8bd73cf93b64299aacd03ed58d567329", "title": "A Novel System for Visual Navigation of Educational Videos Using Multimodal Cues", "text": "With recent developments and advances in distance learning and MOOCs, the amount of open educational videos on the Internet has grown dramatically in the past decade. However, most of these videos are lengthy and lack of high-quality indexing and annotations, which triggers an urgent demand for efficient and effective tools that facilitate video content navigation and exploration. In this paper, we propose a novel visual navigation system for exploring open educational videos. The system tightly integrates multimodal cues obtained from the visual, audio and textual channels of the video and presents them with a series of interactive visualization components. With the help of this system, users can explore the video content using multiple levels of details to identify content of interest with ease. Extensive experiments and comparisons against previous studies demonstrate the effectiveness of the proposed system."}
{"_id": "34912edb1cf0576ff36ca9c4f651237f9115deed", "title": "Musical Instrument Recognition in User-generated Videos using a Multimodal Convolutional Neural Network Architecture", "text": "This paper presents a method for recognizing musical instruments in user-generated videos. Musical instrument recognition from music signals is a well-known task in the music information retrieval (MIR) field, where current approaches rely on the analysis of the good-quality audio material. This work addresses a real-world scenario with several research challenges, i.e. the analysis of user-generated videos that are varied in terms of recording conditions and quality and may contain multiple instruments sounding simultaneously and background noise. Our approach does not only focus on the analysis of audio information, but we exploit the multimodal information embedded in the audio and visual domains. In order to do so, we develop a Convolutional Neural Network (CNN) architecture which combines learned representations from both modalities at a late fusion stage. Our approach is trained and evaluated on two large-scale video datasets: YouTube-8M and FCVID. The proposed architectures demonstrate state-of-the-art results in audio and video object recognition, provide additional robustness to missing modalities, and remains computationally cheap to train."}
{"_id": "e98d0ac651dd00ba690647ad62e2c8832e5b3434", "title": "VINERy: A Visual IDE for Information Extraction", "text": "Information Extraction (IE) is the key technology enabling analytics over unstructured and semi-structured data. Not surprisingly, it is becoming a critical building block for a wide range of emerging applications. To satisfy the rising demands for information extraction in real-world applications, it is crucial to lower the barrier to entry for IE development and enable users with general computer science background to develop higher quality extractors. In this demonstration, we present VINERY, an intuitive yet expressive visual IDE for information extraction. We show how it supports the full cycle of IE development without requiring a single line of code and enables a wide range of users to develop high quality IE extractors with minimal efforts. The extractors visually built in VINERY are automatically translated into semantically equivalent extractors in a state-of-the-art declarative language for IE. We also demonstrate how the auto-generated extractors can then be imported into a conventional Eclipse-based IDE for further enhancement. The results of our user studies indicate that VINERY is a significant step forward in facilitating extractor development for both expert and novice IE developers."}
{"_id": "48eac4c7c72650b768fb0e6451632f418f6de6a9", "title": "Modeling and Analyzing the Reliability and Cost of Service Composition in the IoT: A Probabilistic Approach", "text": "Recently, many efforts have been devoted to explore the integration of Internet of Things (IoT) and Service-Oriented Computing (SOC). These works allow the real-world devices to provide their functionality as web services. However, two important issues, unreliable service providing and resource constraints, make the modeling and analysis of service composition in IoT a big challenge. In this paper, we propose a probabilistic approach to formally describe and analyze the reliability and cost-related properties of the service composition in IoT. First, a service composition in IoT is modeled as a finite state machine (FSM) which focuses on the functional part. Then, we extend this FSM model to a Markov Decision Process (MDP), which can specify the reliability of service operations. Furthermore, we extend MDP with cost structure, which can represent the different service quality attributes for each operation, such as energy consumption, communication cost, etc. The desirable quality properties of the service composition are specified by a probabilistic extension of temporal logic PCTL. We adopt a well-established probabilistic model checker PRISM to verify and analyze those properties of our service composition models."}
{"_id": "3cfd1a51402700f5785eb795bf0489788f037de8", "title": "A Simulink-based robotic toolkit for simulation and control of the PUMA 560 robot manipulator", "text": "In this paper, a Simulink Robotic Toolkit (SRTK) for the Puma 560 robot manipulator is developed on the MATLAB/SIMULINK-based platform. Through the use of the Real-Time Linux Target and the Real-Time Windows Target, the SRTK can be executed on the Linux or Win32-based operating systems (e.g., Windows 95/98/NT) in real-time. Moreover, the graphical user-friendly nature of Simulink allows the SRTK to be a flexible tool that can easily be customized to fit the specific needs of the user. That is, based on the layered approach of the SRTK, the user can perform operations such as calibration, joint control, Cartesian control, Cartesian PD control, impedance control, some trajectory generation tasks, and real-time simulation of the Puma 560 through a user-friendly MATLAB-based graphical user interface (GUI) without writing any code. One of the key features of the toolkit is that users can easily incorporate additional functionality and hardware through the simple block diagram interface that Simulink provides. Moreover, the MATLABbased GUI can also easily be modified to allow the user to exploit the added functionality with the GUI by using straightforward MATLAB script files. Hence, the SRTK allows a researcher to use the Puma 560 without the burden of the external issues related to the control, interface, and software issues, while providing for the flexibility for easily modifying components for increased functionality."}
{"_id": "6943c053b8d7b29e384c45d180d2da4bac8434cc", "title": "A compendium of RNA-binding motifs for decoding gene regulation", "text": "RNA-binding proteins are key regulators of gene expression, yet only a small fraction have been functionally characterized. Here we report a systematic analysis of the RNA motifs recognized by RNA-binding proteins, encompassing 205 distinct genes from 24 diverse eukaryotes. The sequence specificities of RNA-binding proteins display deep evolutionary conservation, and the recognition preferences for a large fraction of metazoan RNA-binding proteins can thus be inferred from their RNA-binding domain sequence. The motifs that we identify in vitro correlate well with in vivo RNA-binding data. Moreover, we can associate them with distinct functional roles in diverse types of post-transcriptional regulation, enabling new insights into the functions of RNA-binding proteins both in normal physiology and in human disease. These data provide an unprecedented overview of RNA-binding proteins and their targets, and constitute an invaluable resource for determining post-transcriptional regulatory mechanisms in eukaryotes."}
{"_id": "e8e325674437b3cea597f77f4b5419b5124dee93", "title": "MPLS based hybridization in SDN", "text": "The new paradigm of Software Defined Networking (SDN) although has great potential to address the complex problems presented by enterprise networks, it has its own deployment and scalability issues. Further, a full SDN deployment has its own business and economic challenges. A smooth transition from legacy networks to SDN (disruption free, accommodating budget constraints, with progressive improvement in network management) requires a hybrid networking model as an inevitable intermediate step; that allows heterogeneous paradigms to function together while the full transition is realized in phases. Therefore, the need of the hour is to develop an incremental deployment strategy that caters to the needs of the organization. We present here a class-based hybrid SDN model for Multi Protocol Label Switching (MPLS) networks. We discuss the model, design, components, their interactions, advantages and drawbacks. We also present a n implementation and evaluation of a prototype. In legacy networks, MPLS architecture closely resembles SDN paradigm in terms of separation of control and data planes, flow-abstraction etc. Moreover, ISPs have preferred MPLS over the years due to benefits of virtual private networks and traffic engineering. The central idea is to partition traffic using forwarding equivalence classes at the ingress router, the rules of which can be updated via a centralized controller using OpenFlow. Therefore, we aim to use the standard MPLS data-plane together with a control-plane based on OpenFlow to come up with a systematic incremental deployment methodology as well as a hybrid operation model"}
{"_id": "03ed4064310cf3a1b93187e26eeaa4ecf4539532", "title": "Accurate and efficient stereo processing by semi-global matching and mutual information", "text": "This paper considers the objectives of accurate stereo matching, especially at object boundaries, robustness against recording or illumination changes and efficiency of the calculation. These objectives lead to the proposed semi-global matching method that performs pixelwise matching based on mutual information and the approximation of a global smoothness constraint. Occlusions are detected and disparities determined with sub-pixel accuracy. Additionally, an extension for multi-baseline stereo images is presented. There are two novel contributions. Firstly, a hierarchical calculation of mutual information based matching is shown, which is almost as fast as intensity based matching. Secondly, an approximation of a global cost calculation is proposed that can be performed in a time that is linear to the number of pixels and disparities. The implementation requires just 1 second on typical images."}
{"_id": "225c758d34658c6903e8cc59b811338019ec49db", "title": "Evaluation of Cost Functions for Stereo Matching", "text": "Stereo correspondence methods rely on matching costs for computing the similarity of image locations. In this paper we evaluate the insensitivity of different matching costs with respect to radiometric variations of the input images. We consider both pixel-based and window-based variants and measure their performance in the presence of global intensity changes (e.g., due to gain and exposure differences), local intensity changes (e.g., due to vignetting, non-Lambertian surfaces, and varying lighting), and noise. Using existing stereo datasets with ground-truth disparities as well as six new datasets taken under controlled changes of exposure and lighting, we evaluate the different costs with a local, a semi-global, and a global stereo method."}
{"_id": "c4f512aabfbff79d150ca49a1c43c996d0ec9029", "title": "Mutual Information Based Semi-Global Stereo Matching on the GPU", "text": "Real-time stereo matching is necessary for many practical applications, including robotics. There are already many real-time stereo systems, but they typically use local approaches that cause object boundaries to be blurred and small objects to be removed. We have selected the Semi-Global Matching (SGM) method for implementation on graphics hardware, because it can compete with the currently best global stereo methods. At the same time, it is much more efficient than most other methods that produce a similar quality. In contrast to previous work, we have fully implemented SGM including matching with mutual information, which is partly responsible for the high quality of disparity images. Our implementation reaches 4.2 fps on a GeForce 8800 ULTRA with images of 640\u00d7480 pixel size and 128 pixel disparity range and 13 fps on images of 320\u00d7240 pixel size and 64 pixel disparity range."}
{"_id": "12e4b40b2de6de40088fb33cf0d860bf90e22b17", "title": "Visual Correspondence Using Energy Minimization and Mutual Information", "text": "We address visual correspondence problems without assuming that scene points have similar intensities in different views.This situation is common, usually due to non-lambertian scenes or to differences between cameras. We use maximization of mutual information, a powerful technique for registering images that requires no a priori model of the relationship between scene intensities in different views. However, it has proven difficult to use mutual information to compute dense visual correspondence. Comparing fixed-size windows via mutual information suffers from the well-known problems of fixed windows, namely poor performance at discontinuities and in low-texture regions. In this paper, we show how to compute visual correspondence using mutual information without suffering from these problems. Using a simple approximation, mutual information can be incorporated into the standard energy minimization framework used in early vision. The energy can then be efficiently minimized using graph cuts, which preserve discontinuities and handle low-texture regions. The resulting algorithm combines the accurate disparity maps that come from graph cuts with the tolerance for intensity changes that comes from mutual information."}
{"_id": "2d2d97c19f284d13b377e6b936c6b26f007067b1", "title": "Robust feature selection and robust PCA for internet traffic anomaly detection", "text": "Robust statistics is a branch of statistics which includes statistical methods capable of dealing adequately with the presence of outliers. In this paper, we propose an anomaly detection method that combines a feature selection algorithm and an outlier detection method, which makes extensive use of robust statistics. Feature selection is based on a mutual information metric for which we have developed a robust estimator; it also includes a novel and automatic procedure for determining the number of relevant features. Outlier detection is based on robust Principal Component Analysis (PCA) which, opposite to classical PCA, is not sensitive to outliers and precludes the necessity of training using a reliably labeled dataset, a strong advantage from the operational point of view. To evaluate our method we designed a network scenario capable of producing a perfect ground-truth under real (but controlled) traffic conditions. Results show the significant improvements of our method over the corresponding classical ones. Moreover, despite being a largely overlooked issue in the context of anomaly detection, feature selection is found to be an important preprocessing step, allowing adaption to different network conditions and inducing significant performance gains."}
{"_id": "d6681ebd5255d9539fe19e77cfe0859f89e3482e", "title": "On the Analysis and Design of Low-Loss Single-Pole Double-Throw W-Band Switches Utilizing Saturated SiGe HBTs", "text": "This paper describes the analysis and design of saturated silicon-germanium (SiGe) heterojunction bipolar transistor (HBT) switches for millimeter-wave applications. A switch optimization procedure is developed based on detailed theoretical analysis and is then used to design multiple switch variants. The switches utilize IBM's 90-nm 9HP technology, which features SiGe HBTs with peak f T/ fmax of 300/350 GHz. Using a reverse-saturated configuration, a single-pole double-throw switch with a measured insertion loss of 1.05 dB and isolation of 22 dB is achieved at 94 GHz after de-embedding pad losses. The switch draws 5.2 mA from a 1.1-V supply, limiting power consumption to less than 6 mW. The switching speed is analyzed and the simulated turn-on and turn-off times are found to be less than 200 ps. A technique is also introduced to significantly increase the power-handling capabilities of saturated SiGe switches up to an input-referred 1-dB compression point of 22 dBm. Finally, the impact of RF stress on this novel configuration is investigated and initial measurements over a 48-h period show little performance degradation. These results demonstrate that SiGe-based switches may provide significant benefits to millimeter-wave systems."}
{"_id": "61d30ec2549e5c14fbea1b7dfa4fd4fa4e75b592", "title": "New Canonical Representations by Augmenting OBDDs with Conjunctive Decomposition (Extended Abstract)", "text": "We identify two families of canonical representations called ROBDD[\u2227\u00ee]C and ROBDD[\u2227T\u0302 ,i]T by augmenting ROBDD with two types of conjunctive decompositions. These representations cover the three existing languages ROBDD, ROBDD with as many implied literals as possible (ROBDDL\u221e), and AND/OR BDD. We introduce a new time efficiency criterion called rapidity which reflects the idea that exponential operations may be preferable if the language can be exponentially more succinct. Then we demonstrate that the expressivity, succinctness and operation rapidity do not decrease from ROBDD[\u2227T\u0302 ,i]T to ROBDD[\u2227\u00ee]C , and then to ROBDD[\u2227 \u00ee+1 ]C . We also demonstrate that ROBDD[\u2227\u00ee]C (i > 1) and ROBDD[\u2227T\u0302 ,i]T are not less tractable than ROBDD-L\u221e and ROBDD, respectively. Finally, we develop a compiler for ROBDD[\u2227\u221e\u0302]C which significantly advances the compiling efficiency of canonical representations."}
{"_id": "cedd1940d862b7ee2ebe9afb07a2aac0508ff183", "title": "A new dynamic foot abduction orthosis for clubfoot treatment.", "text": "Recurrent clubfoot deformity after successful initial correction with the use of the Ponseti method continues to be a common problem and is often caused by noncompliance with wear of the traditional foot abduction brace. The purpose of this study was to assess the results of a newly designed dynamic foot abduction orthosis in terms of (1) parental compliance and (2) effectiveness in preventing recurrent clubfoot deformities. Twenty-eight patients (49 clubfeet) who were treated with a dynamic foot abduction orthosis in accordance with the Ponseti method were included in this study. Of the 28 patients, 18 had idiopathic clubfeet (31 clubfeet), 2 had complex idiopathic clubfeet (4 clubfeet), 5 had myelodysplasia (8 clubfeet), and 3 were syndromic (6 clubfeet). The mean duration of follow-up was 29 months (range, 24-36 months). Noncompliance was reported in only 2 (7.1%) of the 28 patients in the new orthosis compared with the authors' previously reported 41% (21/51) noncompliance rate in patients treated with the use of the traditional foot abduction brace. The two patients in this study, in which parents were noncompliant with orthosis wear, developed recurrent deformities. There were 2 patients (7%) who experienced skin blistering in the new orthosis compared with 12 (23.5%) of 51 patients who experienced blistering with the use of traditional abduction brace in the authors' previously reported study. Logistic regression modeling compliance and recurrence revealed that noncompliance with the foot abduction orthosis was most predictive of recurrence of deformity (odds ratio, 27; 95% confidence interval, 2.2-326; P = 0.01). The articulating foot abduction orthosis is well tolerated by patients and parents and results in a higher compliance rate and a lower complication rate than what were observed with the traditional foot abduction orthosis."}
{"_id": "4c4077f9b37c3631548d56f6ddd63c9c9394cec8", "title": "A review on License Plate Recognition system algorithms", "text": "License plate recognition(LPR)system contributed to facilitate the human work and make it easier. It is an image processing technology, which uses number plate to identify the vehicles. This paper aims to investigate in the existing LPR research by categorizing existing methods by analyzing the pros/cons of these features, and comparing them in terms of the algorithms that involve and recognition performance. We focus on the recent studies and their limitations, since each country has a different environment and format of licence plate."}
{"_id": "4f8b3a4ca73f29b6b6dd5643a6da09c885d7f108", "title": "A Review of the Evolution of Vision-Based Motion Analysis and the Integration of Advanced Computer Vision Methods Towards Developing a Markerless System", "text": "BACKGROUND\nThe study of human movement within sports biomechanics and rehabilitation settings has made considerable progress over recent decades. However, developing a motion analysis system that collects accurate kinematic data in a timely, unobtrusive and externally valid manner remains an open challenge.\n\n\nMAIN BODY\nThis narrative review considers the evolution of methods for extracting kinematic information from images, observing how technology has progressed from laborious manual approaches to optoelectronic marker-based systems. The motion analysis systems which are currently most widely used in sports biomechanics and rehabilitation do not allow kinematic data to be collected automatically without the attachment of markers, controlled conditions and/or extensive processing times. These limitations can obstruct the routine use of motion capture in normal training or rehabilitation environments, and there is a clear desire for the development of automatic markerless systems. Such technology is emerging, often driven by the needs of the entertainment industry, and utilising many of the latest trends in computer vision and machine learning. However, the accuracy and practicality of these systems has yet to be fully scrutinised, meaning such markerless systems are not currently in widespread use within biomechanics.\n\n\nCONCLUSIONS\nThis review aims to introduce the key state-of-the-art in markerless motion capture research from computer vision that is likely to have a future impact in biomechanics, while considering the challenges with accuracy and robustness that are yet to be addressed."}
{"_id": "8c75355187407006fcfd218c3759419a9ed85e77", "title": "On the Efficiency of Far-Field Wireless Power Transfer", "text": "Far-field wireless power transfer (WPT) is a promising technique to resolve the painstaking power-charging problem inherent in various wireless terminals. This paper investigates the power transfer efficiency of the WPT segment in future communication systems in support of simultaneous power and data transfer, by means of analytically computing the time-average output direct current (DC) power at user equipments (UEs). In order to investigate the effect of channel variety among UEs on the average output DC power, different policies for the scheduling of the power transfer among the users are implemented and compared in two scenarios: homogeneous, whereby users are symmetric and experience similar path loss, and heterogeneous, whereby users are asymmetric and exhibit different path losses. Specifically, if opportunistic scheduling is performed among N symmetric/asymmetric UEs, the power scaling laws are attained by using extreme value theory and reveal that the gain in power transfer efficiency is lnN if UEs are symmetric whereas the gain is N if UEs are asymmetric, compared with that of conventional round-robin scheduling. Thus, the channel variety among UEs inherent to the wireless environment can be exploited by opportunistic scheduling to significantly improve the power transfer efficiency when designing future wireless communication systems in support of simultaneous power and data transfer."}
{"_id": "d4adf371467fa44832b7cdc67e8168fa59de0d29", "title": "Scalable Neural Network Compression and Pruning Using Hard Clustering and L1 Regularization", "text": "We propose a simple and easy to implement neural network compression algorithm that achieves results competitive with more complicated state-of-the-art methods. The key idea is to modify the original optimization problem by adding K independent Gaussian priors (corresponding to the k-means objective) over the network parameters to achieve parameter quantization, as well as an `1 penalty to achieve pruning. Unlike many existing quantization-based methods, our method uses hard clustering assignments of network parameters, which adds minimal change or overhead to standard network training. We also demonstrate experimentally that tying neural network parameters provides less gain in generalization performance than changing network architecture and connectivity patterns entirely."}
{"_id": "941f10d1f4a2c71742be16e796b203078a0a3d30", "title": "Trust and risk in e-government adoption", "text": "Citizen confidence in the competence of the government and the reliability of the technology used to implement egovernment initiatives is imperative to the wide-spread adoption of e-government. This study analyzes how citizens\u2019 trust in technology and government affect their willingness to engage in e-government transactions. We proposes a model of egovernment trust composed of disposition to trust, institution-based trust (IBT), characteristic-based trust (CBT) and perceived risk. Data were collected via a survey of 214 citizens ranging in age from 14 to 83 old. The model was tested using Structural Equation Modeling techniques. Results indicate that disposition to trust positively affects IBT and CBT trusts, which in turn affect intentions to use an e-government service. CBT trust also affects negatively perceived risk, which affects use intentions as well. Implications for practice and research are discussed."}
{"_id": "0f8ada60f594aebb993e53f4338bd0ee34da9c16", "title": "Recovery for overloaded mobile edge computing", "text": "Mobile edge computing (MEC) allows the use of its services with low latency, location awareness and mobility support to make up for the disadvantages of cloud computing. But an overloaded MEC system or any MEC system failure significantly degrades quality of experience (QoE) and negates the advantages of MEC. In this paper, we propose two different recovery schemes for overloaded or broken MEC. One recovery scheme is where an overloaded MEC offloads its work to available neighboring MECs within transfer range. The other recovery scheme is for situations when there is no available neighboring MEC within transfer range. The second scheme accesses user devices of a neighboring MEC adjacent to the overloaded MEC as ad-hoc relay nodes in order to bridge the transfer disconnect between two MECs. Numerical experiments demonstrated the performance and verified the possibility of the proposed schemes, providing insightful results for potential future"}
{"_id": "c9fa081b87f1d4ecab19e42be1a5473e8c31630c", "title": "Encoding Sentiment Information into Word Vectors for Sentiment Analysis", "text": "Santa Fe, New Mexico, USA, August 20-26, 2018. 997 Encoding Sentiment Information into Word Vectors for Sentiment Analysis Zhe Ye\u2660 Fang Li\u2660 Timothy Baldwin\u2665 \u2660 Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China \u2665 School of Computing and Information Systems, The University of Melbourne, Victoria, 3010, Australia {yezhejack, fli}@sjtu.edu.cn, tb@ldwin.net"}
{"_id": "fd25154684d9c2e31772949a49e3d6c04d3bd6be", "title": "Seventh sense technology", "text": "Seventh Sense Technology is a technology used for the human electronic interaction using natural hand gestures. The hand gestures can be used to control any micro controller based devices or robots at distant places using Zigbee or GSM Module or Satellite transceiver. The \"Seventh Sense Technology\" is easy to implement with reduced implementation cost and it opens a wide area of research for the next generation and a greater development in almost all areas of current interest. Seventh Sense Technology uses the Sixth Sense Technology and its advancements to control the autonomous robots or any microcontroller based devices on human will. Currently autonomous robots or any microcontroller based devices are designed for a single purpose, but with this technology robots and those devices can be made to achieve multidimensional purposes. Even the costlier sensors can be replaced with the low cost night vision cameras. It opens a way to control robots or any microcontroller based devices at any distance with natural hand gestures of human beings. It also has the provision of customizable virtual keyboard for giving more actions to distant devices. It is going to be a greater field of research that also paves a lot of attention in this current world of technology."}
{"_id": "09e8479c593f0b11a26f23b09883dcbcd9001176", "title": "Semantic labeling of 3D point clouds with object affordance for robot manipulation", "text": "When a robot is deployed it needs to understand the nature of its surroundings. In this paper, we address the problem of semantic labeling 3D point clouds by object affordance (e.g., `pushable', `liftable'). We propose a technique to extract geometric features from point cloud segments and build a classifier to predict associated object affordances. With the classifier, we have developed an algorithm to enhance object segmentation and reduce manipulation uncertainty by iterative clustering, along with minimizing labeling entropy. Our incremental multiple view merging technique shows improved object segmentation. The novel feature of our approach is the semantic labeling that can be directly applied to manipulation planning. In our experiments with 6 affordance labels, an average of 81.8% accuracy of affordance prediction is achieved. We demonstrate refined object segmentation by applying the classifier to data from the PR2 robot using a Microsoft Kinect in an indoor office environment."}
{"_id": "beeebb7a82446e7a19671491d0b654d4ec4a7bc2", "title": "Patterns, Causes, and Consequences of Anthropocene Defaunation", "text": "Anthropocene defaunation, the global extinction of faunal species and populations and the decline in abundance of individuals within populations, has been predominantly documented in terrestrial ecosystems, but indicators suggest defaunation has been more severe in freshwater ecosystems. Marine defaunation is in a more incipient stage, yet pronounced effects are already apparent and its rapid acceleration seems likely. Defaunation now impacts the planet\u2019s wildlife with profound cascading consequences, ranging from local to global coextinctions of interacting species to the loss of ecological services critical for humanity. Slowing defaunation will require aggressively reducing animal overexploitation and habitat destruction; mitigating climate disruption; and stabilizing the impacts of human population growth and uneven resource consumption. Given its omnipresence, defaunation should receive status of major global environmental change and should be addressed with the same urgency as deforestation, pollution, and climatic change. Global action is needed to prevent defaunation\u2019s current trajectory from catalyzing the planet\u2019s sixth major extinction. 333 Review in Advance first posted online on August 26, 2016. (Changes may still occur before final publication online and in print.) Changes may still occur before final publication online and in print A nn u. R ev . E co l. E vo l. Sy st . 2 01 6. 47 . D ow nl oa de d fr om w w w .a nn ua lr ev ie w s. or g A cc es s pr ov id ed b y U ni ve rs ity o f C al if or ni a Sa nt a B ar ba ra o n 10 /2 0/ 16 . F or p er so na l u se o nl y. ES47CH15-Dirzo ARI 19 August 2016 14:55"}
{"_id": "32f7f3c108ea53793354e3760b2e4f4a7cf8f581", "title": "Haptic Virtual Fixtures for Robot-Assisted Manipulation", "text": "Haptic virtual fixtures are software-generated force and position signals applied to human operators in order to improve the safety, accuracy, and speed of robot-assisted manipulation tasks. Virtual fixtures are effective and intuitive because they capitalize on both the accuracy of robotic systems and the intelligence of human operators. In this paper, we discuss the design, analysis, and implementation of two categories of virtual fixtures: guidance virtual fixtures, which assist the user in moving the manipulator along desired paths or surfaces in the workspace, and forbidden-region virtual fixtures, which prevent the manipulator from entering into forbidden regions of the workspace. Virtual fixtures are analyzed in the context of both cooperative manipulation and telemanipulation systems, considering issues related to stability, passivity, human modeling, and applications."}
{"_id": "98bbd141e5bd025c825c41798a5bca364ca44b5c", "title": "Hard Drive Failure Prediction Using Big Data", "text": "We design a general framework named Hdoctor for hard drive failure prediction. Hdoctor leverages the power of big data to achieve a significant improvement comparing to all previous researches that used sophisticated machine learning algorithms. Hdoctor exhibits a series of engineering innovations: (1) constructing time dependent features to characterize the Self-Monitoring, Analysis and Reporting Technology (SMART) value transitions during disk failures, (2) combining features to enable the model to learn the correlation among different SMART attributes, (3) regarding circumstance data such as cluster workload, temperature, humidity, location as related features. Meanwhile, Hdoctor collects/labels samples and updates model automatically, and works well for all kinds of disk failure prediction in our intelligent data center. In this work, we use Hdoctor to collect 74,477,717 training records from our clusters involving 220,022 disks. By training a simple and scalable model, our system achieves a detection rate of 97.82%, with a false alarm rate (FAR) of 0.3%, which hugely outperforms all previous algorithms. In addition, Hdoctor is an excellent indicator for how to predict different hardware failures efficiently under various circumstances."}
{"_id": "e5096686a6bdd34fb4b8caf2cb20a1032c6b882a", "title": "Introduction to Nonlinear Optimization - Theory, Algorithms, and Applications with MATLAB", "text": "introduction to nonlinear optimization theory algorithms introduction to nonlinear optimization theory algorithms introduction to nonlinear optimization theory algorithms introduction to nonlinear optimization theory algorithms chapter 16: introduction to nonlinear programming nonlinear programming: concepts, algorithms and applications theory, algorithms, and applications with matlab introduction to nonlinear optimization epubsam course: math 657|optimization theory time: spring 2016, 5 convex optimization: modeling and algorithms nonlinear programming 13 mit massachusetts institute an introduction to nonlinear optimization theory ebook nonlinear optimization lecture notes for the course mat a and trends in optimization with e applications solving optimization problems using the matlab siam bestsellers 2000\u20132015 order author title code 2015 optimization an introduction imperial college london [review published in siam review, vol. 58, issue 1, pp. 164.] nonlinear optimization and applications springer convex analysis and nonlinear optimization theory and examples appm 4720/5720 advanced topics in convex optimization lecture slides on nonlinear programming based on lectures convex optimization \u2014 boyd & vandenberghe"}
{"_id": "2e754898f3a4035793620e018abe1687701eb091", "title": "A Survey of Game Theory as Applied to Network Security", "text": "Network security is a complex and challenging problem. The area of network defense mechanism design is receiving immense attention from the research community for more than two decades. However, the network security problem is far from completely solved. Researchers have been exploring the applicability of game theoretic approaches to address the network security issues and some of these approaches look promising. This paper surveys the existing game theoretic solutions which are designed to enhance network security and presents a taxonomy for classifying the proposed solutions. This taxonomy should provide the reader with a better understanding of game theoretic solutions to a variety of cyber security problems."}
{"_id": "5eec231709381c901209e5dac97e0784e80ebc7a", "title": "Root coverage in molar teeth: a comparative controlled randomized clinical trial.", "text": "AIM\nTo compare the clinical outcomes of laterally moved, coronally advanced flap (LMCAF) versus Bilaminar technique (BT) in the treatment of single gingival recession on molar teeth.\n\n\nMATERIAL AND METHODS\nFifty patients showing Miller I and II gingival recessions at first molar teeth were treated: 25 were randomly assigned to the BT group and 25 belonged to the LMCAF group. Patient's post-operative morbidity was assessed 1 week after the surgery, while aesthetic evaluation and the clinical evaluation were made 1 year later.\n\n\nRESULTS\nNo statistically significant difference was demonstrated in terms of recession and PPD reduction. Statistically greater probability of complete root coverage (CRC, Odds Ratio 22.1) and greater increase in gingival thickness were observed in the BT group. Greater increase in keratinized tissue was obtained in the LMCAF. Patient satisfaction with aesthetics was very high in both treatment groups. Better post-operative course was observed in the LMCAF, while better post-operative sensitivity and root coverage evaluation were demonstrated in patients treated with BT.\n\n\nCONCLUSIONS\nGingival recession at first molar teeth can be successfully treated with LMCAF and BT. Better CRC was achieved with BT, while more comfortable post-operative course was associated with the LMCAF."}
{"_id": "7819386cf33eb2a4c5cfbd46506473167a02d336", "title": "Ontology-based text summarization. The case of Texminer", "text": "Purpose \u2013 The purpose of this paper is to look into the latest advances in ontology-based text summarization systems, with emphasis on the methodologies of a socio-cognitive approach, the structural discourse models and the ontology-based text summarization systems. Design/methodology/approach \u2013 The paper analyzes the main literature in this field and presents the structure and features of Texminer, a software that facilitates summarization of texts on Port and Coastal Engineering. Texminer entails a combination of several techniques, including: socio-cognitive user models, Natural Language Processing, disambiguation and ontologies. After processing a corpus, the system was evaluated using as a reference various clustering evaluation experiments conducted by Arco (2008) and Hennig et al. (2008). The results were checked with a support vector machine, Rouge metrics, the F-measure and calculation of precision and recall. Findings \u2013 The experiment illustrates the superiority of abstracts obtained through the assistance of ontology-based techniques. Originality/value \u2013 The authors were able to corroborate that the summaries obtained using Texminer are more efficient than those derived through other systems whose summarization models do not use ontologies to summarize texts. Thanks to ontologies, main sentences can be selected with a broad rhetorical structure, especially for a specific knowledge domain."}
{"_id": "3e58d93502268d68840d0b4fbb4a12ebacc10c22", "title": "Model-based Security Testing: An Empirical Study on OAuth 2.0 Implementations", "text": "Motivated by the prevalence of OAuth-related vulnerabilities in the wild, large-scale security testing of real-world OAuth 2.0 implementations have received increasing attention lately [31,37,42]. However, these existing works either rely on manual discovery of new vulnerabilities in OAuth 2.0 implementations or perform automated testing for specific, previously-known vulnerabilities across a large number of OAuth implementations. In this work, we propose an adaptive model-based testing framework to perform automated, large-scale security assessments for OAuth 2.0 implementations in practice. Key advantages of our approach include (1) its ability to identify existing vulnerabilities and discover new ones in an automated manner; (2) improved testing coverage as all possible execution paths within the scope of the model will be checked and (3) its ability to cater for the implementation differences of practical OAuth systems/ applications, which enables the analyst to offload the manual efforts for large-scale testing of OAuth implementations. We have designed and implemented OAuthTester to realize our proposed framework. Using OAuthTester, we examine the implementations of 4 major Identity Providers as well as 500 top-ranked US and Chinese websites which use the OAuth-based Single-Sign-On service provided by the formers. Our empirical findings demonstrate the efficacy of adaptive model-based testing on OAuth 2.0 deployments at scale. More importantly, OAuthTester not only manages to rediscover various existing vulnerabilities but also identify several previously unknown security flaws and new exploits for a large number of eal-world applications implementing OAuth 2.0."}
{"_id": "24ff26b9cf71d9f9a8e7dedf0cfe56d105363cd3", "title": "The Berkeley FrameNet Project", "text": "FrameNet is a three-year NSF-supported project in corpus-based computational lexicography, now in its second year (NSF IRI-9618838, \"Tools for Lexicon Building\"). The project's key features are (a) a commitment to corpus evidence for semantic and syntactic generalizations, and (b) the representation of the valences of its target words (mostly nouns, adjectives, and verbs) in which the semantic portion makes use of frame semantics. The resulting database will contain (a) descriptions of the semantic frames underlying the meanings of the words described, and (b) the valence representation (semantic and syntactic) of several thousand words and phrases, each accompanied by (c) a representative collection of annotated corpus attestations, which jointly exemplify the observed linkings between \"frame elements\" and their syntactic realizations (e.g. grammatical function, phrase type, and other syntactic traits). This report will present the project's goals and workflow, and information about the computational tools that have been adapted or created in-house for this work. 1 I n t r o d u c t i o n The Berkeley FrameNet project 1 is producing frame-semantic descriptions of several thousand English lexical items and backing up these descriptions with semantically annotated attestations from contemporary English corpora 2. 1The project is based at the International Computer Science Institute (1947 Center Street, Berkeley, CA). A fuller bibliography may be found in (Lowe et ai., 1997) 2Our main corpus is the British National Corpus. We have access to it through the courtesy of Oxford University Press; the POS-tagged and lemmatized version we use was prepared by the Institut flit Maschinelle Sprachverarbeitung of the University of Stuttgart). The These descriptions are based on hand-tagged semantic annotations of example sentences extracted from large text corpora and systematic analysis of the semantic patterns they exemplify by lexicographers and linguists. The primary emphasis of the project therefore is the encoding, by humans, of semantic knowledge in machine-readable form. The intuition of the lexicographers is guided by and constrained by the results of corpus-based research using highperformance software tools. The semantic domains to be covered are\" HEALTH CARE, CHANCE, PERCEPTION, COMMUNICATION, TRANSACTION, TIME, SPACE, BODY (parts and functions of the body), MOTION, LIFE STAGES, SOCIAL CONTEXT, EMOTION and COGNITION. 1.1 Scope of t h e P r o j e c t The results of the project are (a) a lexical resource, called the FrameNet database 3, and (b) associated software tools. The database has three major components (described in more detail below: \u2022 Lexicon containing entries which are composed of: (a) some conventional dictionary-type data, mainly for the sake of human readers; (b) FORMULAS which capture the morphosyntactic ways in which elements of the semantic frame can be realized within the phrases or sentences built up around the word; (c) links to semantically ANNOTATED EXAMEuropean collaborators whose participation has made this possible are Sue Atkins, Oxford University Press, and Ulrich Held, IMS-Stuttgart. SThe database will ultimately contain at least 5,000 lexical entries together with a parallel annotated corpus, these in formats suitable for integration into applications which use other lexical resources such as WordNet and COMLEX. 
The final design of the database will be selected in consultation with colleagues at Princeton (WordNet), ICSI, and IMS, and with other members of the NLP community."}
{"_id": "b4317b8a4490c84301907a61f5b8ebb26ab8828d", "title": "Discovery of inference rules for question-answering", "text": "One of the main challenges in question-answering is the potential mismatch between the expressions in questions and the expressions in texts. While humans appear to use inference rules such as \u0093X writes Y\u0094 implies \u0093X is the author of Y\u0094 in answering questions, such rules are generally unavailable to question-answering systems due to the inherent difficulty in constructing them. In this paper, we present an unsupervised algorithm for discovering inference rules from text. Our algorithm is based on an extended version of Harris\u0092 Distributional Hypothesis, which states that words that occurred in the same contexts tend to be similar. Instead of using this hypothesis on words, we apply it to paths in the dependency trees of a parsed corpus. Essentially, if two paths tend to link the same set of words, we hypothesize that their meanings are similar. We use examples to show that our system discovers many inference rules easily missed by humans."}
{"_id": "2fa2af72590819d7a4be995baa9809060f9c815a", "title": "Efficient clustering of high-dimensional data sets with application to reference matching", "text": "Many important problems involve clustering large datasets. Although naive implementations of clustering are computationally expensive, there are established efficient techniques for clustering when the dataset has either (1) a limited number of clusters, (2) a low feature dimensionality, or (3) a small number of data points. However, there has been much less work on methods of efficiently clustering datasets that are large in all three ways at once\u2014for example, having millions of data points that exist in many thousands of dimensions representing many thousands of clusters. We present a new technique for clustering these large, highdimensional datasets. The key idea involves using a cheap, approximate distance measure to efficiently divide the data into overlapping subsets we call canopies. Then clustering is performed by measuring exact distances only between points that occur in a common canopy. Using canopies, large clustering problems that were formerly impossible become practical. Under reasonable assumptions about the cheap distance metric, this reduction in computational cost comes without any loss in clustering accuracy. Canopies can be applied to many domains and used with a variety of clustering approaches, including Greedy Agglomerative Clustering, K-means and Expectation-Maximization. We present experimental results on grouping bibliographic citations from the reference sections of research papers. Here the canopy approach reduces computation time over a traditional clustering approach by more than an order of magnitude and decreases error in comparison to a previously used algorithm by 25%."}
{"_id": "8ce4a669358dadbfac69ff0d313af042aeb94de1", "title": "Pedestrian intention recognition using Latent-dynamic Conditional Random Fields", "text": "We present a novel approach for pedestrian intention recognition for advanced video-based driver assistance systems using a Latent-dynamic Conditional Random Field model. The model integrates pedestrian dynamics and situational awareness using observations from a stereo-video system for pedestrian detection and human head pose estimation. The model is able to capture both intrinsic and extrinsic class dynamics. Evaluation of our method is performed on a public available dataset addressing scenarios of lateral approaching pedestrians that might cross the road, turn into the road or stop at the curbside. During experiments, we demonstrate that the proposed approach leads to better stability and class separation compared to state-of-the-art pedestrian intention recognition approaches."}
{"_id": "306e426b6e9baa613892cfb7fbb2907d70ebaea4", "title": "Explainable User Clustering in Short Text Streams", "text": "User clustering has been studied from different angles: behavior-based, to identify similar browsing or search patterns, and content-based, to identify shared interests. Once user clusters have been found, they can be used for recommendation and personalization. So far, content-based user clustering has mostly focused on static sets of relatively long documents. Given the dynamic nature of social media, there is a need to dynamically cluster users in the context of short text streams. User clustering in this setting is more challenging than in the case of long documents as it is difficult to capture the users' dynamic topic distributions in sparse data settings. To address this problem, we propose a dynamic user clustering topic model (or UCT for short). UCT adaptively tracks changes of each user's time-varying topic distribution based both on the short texts the user posts during a given time period and on the previously estimated distribution. To infer changes, we propose a Gibbs sampling algorithm where a set of word-pairs from each user is constructed for sampling. The clustering results are explainable and human-understandable, in contrast to many other clustering algorithms. For evaluation purposes, we work with a dataset consisting of users and tweets from each user. Experimental results demonstrate the effectiveness of our proposed clustering model compared to state-of-the-art baselines."}
{"_id": "27266e3be4d9b9572e8e11f4ef34110a2a00f515", "title": "Clustering of Gaze During Dynamic Scene Viewing is Predicted by Motion", "text": "Where does one attend when viewing dynamic scenes? Research into the factors influencing gaze location during static scene viewing have reported that low-level visual features contribute very little to gaze location especially when opposed by high-level factors such as viewing task. However, the inclusion of transient features such as motion in dynamic scenes may result in a greater influence of visual features on gaze allocation and coordination of gaze across viewers. In the present study, we investigated the contribution of low- to mid-level visual features to gaze location during free-viewing of a large dataset of videos ranging in content and length. Signal detection analysis on visual features and Gaussian Mixture Models for clustering gaze was used to identify the contribution of visual features to gaze location. The results show that mid-level visual features including corners and orientations can distinguish between actual gaze locations and a randomly sampled baseline. However, temporal features such as flicker, motion, and their respective contrasts were the most predictive of gaze location. Additionally, moments in which all viewers\u2019 gaze tightly clustered in the same location could be predicted by motion. Motion and mid-level visual features may influence gaze allocation in dynamic scenes, but it is currently unclear whether this influence is involuntary or due to correlations with higher order factors such as scene semantics."}
{"_id": "642b12db4da2e0c2c5e051a05a22c439146e8deb", "title": "Variable Selection via Penalized Neural Network : a Drop-Out-One Loss Approach", "text": "We propose a variable selection method for high dimensional regression models, which allows for complex, nonlinear, and high-order interactions among variables. The proposed method approximates this complex system using a penalized neural network and selects explanatory variables by measuring their utility in explaining the variance of the response variable. This measurement is based on a novel statistic called Drop-Out-One Loss. The proposed method also allows (overlapping) group variable selection. We prove that the proposed method can select relevant variables and exclude irrelevant variables with probability one as the sample size goes to infinity, which is referred to as the Oracle Property. Experimental results on simulated and real world datasets show the efficiency of our method in terms of variable selection and prediction accuracy."}
{"_id": "23b5edcb523441e04c48af59e48f2a9e27163f5c", "title": "Abstraction and reformulation in artificial intelligence.", "text": "This paper contributes in two ways to the aims of this special issue on abstraction. The first is to show that there are compelling reasons motivating the use of abstraction in the purely computational realm of artificial intelligence. The second is to contribute to the overall discussion of the nature of abstraction by providing examples of the abstraction processes currently used in artificial intelligence. Although each type of abstraction is specific to a somewhat narrow context, it is hoped that collectively they illustrate the richness and variety of abstraction in its fullest sense."}
{"_id": "328a523c0fc9a3e6ea9615fe7b9a36037a1b4f63", "title": "Visual Tracking with Fully Convolutional Networks", "text": "We propose a new approach for general object tracking with fully convolutional neural network. Instead of treating convolutional neural network (CNN) as a black-box feature extractor, we conduct in-depth study on the properties of CNN features offline pre-trained on massive image data and classification task on ImageNet. The discoveries motivate the design of our tracking system. It is found that convolutional layers in different levels characterize the target from different perspectives. A top layer encodes more semantic features and serves as a category detector, while a lower layer carries more discriminative information and can better separate the target from distracters with similar appearance. Both layers are jointly used with a switch mechanism during tracking. It is also found that for a tracking target, only a subset of neurons are relevant. A feature map selection method is developed to remove noisy and irrelevant feature maps, which can reduce computation redundancy and improve tracking accuracy. Extensive evaluation on the widely used tracking benchmark [36] shows that the proposed tacker outperforms the state-of-the-art significantly."}
{"_id": "488ab1e313f5109153f2c74e3b5d86d41e9b4b71", "title": "Hybrid Fuzz Testing: Discovering Software Bugs via Fuzzing and Symbolic Execution", "text": "Random mutational fuzz testing (fuzzing) and symbolic executions are program testing techniques that have been gaining popularity in the security research community. Fuzzing finds bugs in a target program by natively executing it with random inputs while monitoring the execution for abnormal behaviors such as crashes. While fuzzing may have a reputation of being able to explore deep into a program\u2019s state space efficiently, na\u0131\u0308ve fuzzers usually have limited code coverage for typical programs since unconstrained random inputs are unlikely to drive the execution down many different paths. In contrast, symbolic execution tests a program by treating the program\u2019s input as symbols and interpreting the program over such symbolic inputs. Although in theory symbolic execution is guaranteed to be effective in achieving code coverage if we explore all possible paths, this generally requires exponential resource and is thus not practical for many real-world programs. This thesis presents our attempt to attain the best of both worlds by combining fuzzing with symbolic execution in a novel manner. Our technique, called hybrid fuzzing, first uses symbolic execution to discover frontier nodes that represent unique paths in the program. After collecting as many frontier nodes as possible under a user-specifiable resource constraint, it transits to fuzz the program with preconditioned random inputs, which are provably random inputs that respect the path predicate leading to each frontier node. Our current implementation supports programs with linear path predicates and can automatically generate preconditioned random inputs from a polytope model of the input space extracted from binaries. These preconditioned random inputs can then be used with any fuzzer. Experiments show that our implementation is efficient in both time and space, and the inputs generated by it are able to gain extra breadth and depth over previous approaches."}
{"_id": "ca20b05926f4f754912dbbfa79aa3de38640c652", "title": "Perception and Emotion How We Recognize Facial Expressions", "text": "Perception and emotion interact, as is borne out by studies of how people recognize emotion from facial expressions. Psychological and neurological research has elucidated the processes, and the brain structures, that participate in facial emotion recognition. Studies have shown that emotional reactions to viewing faces can be very rapid and that these reactions may, in turn, be used to judge the emotion shown in the face. Recent experiments have argued that people actively explore facial expressions in order to recognize the emotion, a mechanism that emphasizes the instrumental nature of social cognition. KEYWORDS\u2014face perception; faces; emotion; amygdala An influential psychological model of face processing argued that early perception (construction of a geometric representation of the face based on its features) led to subsequently separate processing of the identity of the face and of the emotional expression of the face (Bruce & Young, 1986). That model has received considerable support from neuroscience studies suggesting that the separate processes are based on separable neuroanatomical systems: In neuroimaging studies, different parts of the brain are activated in response to emotional expressions or identity changes; and brain damage can result in impairments in recognizing identity but not in recognizing emotional expressions, or the reverse. SOME PROCESSING IS RAPID AND AUTOMATIC AND CAN OCCUR IN THE ABSENCE OF AWARENESS OF THE STIMULI Some responses in the brain to emotional facial expressions are so rapid (less than 100 ms) that they could not plausibly be based on conscious awareness of the stimulus, although the responses, in turn, may contribute to conscious awareness. Evidence comes from studies using event-related potentials, measures of the brain\u2019s electrical activity recorded at the scalp (or, much more rarely, directly from the brain in surgical patients). In those experiments, the responses to many presentations of emotional stimuli are averaged across stimuli, and the changes in electrical potential can be accurately timed in relation to stimulus onset. Evidence has also come from studies in which viewers were shown facial expressions subliminally. Especially salient aspects of faces are most potent in driving nonconscious responses: For instance, subliminal presentation of only the whites of the eyes of fearful faces results in measurable brain activation (Whalen et al., 2004). One specific structure involved in such rapid and automatic neural responses is the amygdala, a structure in the medial temporal lobe that is known to be involved in many aspects of emotion processing. Findings such as these have been of interest to psychologists, as they provide a potential mechanism consistent with two-factor theories of human behavior. For instance, the theory that affect and cognitive judgment are separate processes, and that affect can precede cognitive judgment, receives some support from the neuroscience findings (Zajonc, 1980). The data also add detail to theories of visual consciousness. In one set of studies, emotional facial expressions were shown to neurological patients who, because of their brain damage, were unable to report seeing the stimuli. 
An individual with damage to the visual cortex had \u201cblindsight\u201d for emotional faces: He could discriminate the emotion shown in faces by guessing, even though he reported no visual experience of seeing the faces. A neuroimaging experiment in this same individual revealed that the amygdala was activated differentially by different emotional faces, despite the absence of conscious visual experience (Morris, deGelder, Weiskrantz, & Dolan, 2001). These and other studies have suggested a distinction between subcortical processing of emotional visual stimuli (e.g., involving the amygdala and brainstem nuclei such as the superior colliculus), which may be independent of conscious vision (Johnson, 2005), and cortical processing, which is usually accompanied by conscious experience. It is interesting to note that amphibians and reptiles have only subcortical visual processing, since they lack a neocortex. One broad interpretation of these observations is thus that the subcortical route for processing emotional stimuli is the more ancient one, and that in mammals an additional, cortical route has evolved that probably allows more flexible behaviors based on learning and conscious deliberation. A final wrinkle has come from psychological studies of the relationship between attention and emotion. It has been known for some time that emotionally salient stimuli can capture attention, an interaction that makes much adaptive sense (Ohman, Flykt, & Esteves, 2001). But more recent studies have also shown the reverse: that volitionally allocating attention to stimuli can influence their emotional evaluation. Inhibiting attention to visual stimuli that are distractors in a search task, for example, results in devaluation of those stimuli when subjects are asked to rate them on affective dimensions (Raymond, Fenske, & Tavassoli, 2003). Emotions thus represent the value of stimuli\u2014what people approach or avoid, cognitively or behaviorally, volitionally or automatically. FEAR AND DISGUST MAY BE PROCESSED SPECIALLY Several studies have found evidence that the amygdala, the subcortical structure discussed earlier, is disproportionately important for processing facial expressions of fear, although by no means exclusively so. The only other emotion for which a similar neuroanatomical specificity has been reported is disgust. A region of the cortex called the insula represents motor and sensory aspects of that emotional response (Calder, Lawrence, & Young, 2001). In lesion studies, damage to the amygdala is shown to result in impairment in the ability to recognize fear from facial expressions. Similarly, damage to the insula can result in impairment in the ability to recognize disgust from facial expressions. However, questions remain regarding exactly which emotion category or dimension is being encoded in these brain regions, since other emotions are often also variably impaired. Especially informative have been studies in a rare patient, SM, who has total damage to the amygdala on both sides of the brain. The subject can accurately judge age and gender from faces, and she has no difficulty recognizing familiar individuals from their faces. She also has little difficulty recognizing most facial expressions, with the notable exception of fear. 
When asked to judge the fearfulness of faces, SM is selectively and severely impaired (Adolphs, Tranel, Damasio, & Damasio, 1994). However, other patients with similar damage are impaired on a wider array of emotions; in functional neuroimaging studies, activation of the amygdala is seen in response to several emotions in addition to fear; and even SM\u2019s highly specific impairment depends on the questions she is asked. For example, when asked to classify faces into categories of basic emotions, SM is rather selectively impaired on fear; but when asked to rate how arousing the emotion shown in the face is, she is impaired on all emotions of negative valence (Adolphs, Russell, & Tranel, 1999). These data suggest that, while the amygdala is critical for recognizing fear in faces, its role encompasses a broader, more abstract, and perhaps dimensional rather than categorical aspect of emotions, for which fear is but one instance\u2014an issue I take up again below. MANY BRAIN STRUCTURES ARE INVOLVED Just as there is good evidence that the amygdala does more than solely detect fear, there are of course brain structures other than the amygdala that participate in perceiving emotion from faces. Brain-imaging studies, in particular, have found evidence for a large number of other brain structures that may come into play, depending on the emotion shown in the face and on the demands of the experimental task. One framework that summarizes these data runs as follows. Visual areas in the temporal cortex encode the face\u2019s features and bind them into a global perceptual representation of the face. Subcortical visual areas (such as the superior colliculus, prominent in animals such as frogs) carry out coarser but faster processing of the face. Both routes, the cortical and the subcortical, feed visual information about the face into the amygdala. The amygdala then associates the visual representation of the face with an emotional response in the body, as well as with changes in the operation of other brain structures. For instance, it likely triggers autonomic responses (such as skin-conductance response, the sympathetic autonomic response of sweating on the palms of the hands) to the face, and it also modulates visual attention to the face. Tracing the path from initial perception of the face to recognizing the emotion it expresses is complicated by feedback loops and by multiple pathways that can be engaged. One class of proposals, as influential as they are controversial, argues that the emotional response elicited by the amygdala can in fact be used by the viewer to reconstruct knowledge about the emotion shown in the face. Roughly: if I experience a pang of fear within myself upon seeing a fearful face, I can use the knowledge of my own emotion to infer what the emotional state of the person whose face is shown in the stimulus might be. Broader theories in a similar vein do not restrict themselves to the amygdala or to fear, but more generally propose that we make inferences about other people\u2019s emotional states by simulating within ourselves aspects of those states (Goldman & Sripada, 2005). Emotional contagion and imitation may be the earliest aspects of such a mechanism that can already be seen in infants"}
{"_id": "f1e19852781c8135cd1a9dfabb7a55ed01622b8e", "title": "DeepQA Jeopardy! Gamification: A Machine-Learning Perspective", "text": "DeepQA is a large-scale natural language processing (NLP) question-and-answer system that responds across a breadth of structured and unstructured data, from hundreds of analytics that are combined with over 50 models, trained through machine learning. After the 2011 historic milestone of defeating the two best human players in the Jeopardy! game show, the technology behind IBM Watson, DeepQA, is undergoing gamification into real-world business problems. Gamifying a business domain for Watson is a composite of functional, content, and training adaptation for nongame play. During domain gamification for medical, financial, government, or any other business, each system change affects the machine-learning process. As opposed to the original Watson Jeopardy!, whose class distribution of positive-to-negative labels is 1:100, in adaptation the computed training instances, question-and-answer pairs transformed into true-false labels, result in a very low positive-to-negative ratio of 1:100 000. Such initial extreme class imbalance during domain gamification poses a big challenge for the Watson machine-learning pipelines. The combination of ingested corpus sets, question-and-answer pairs, configuration settings, and NLP algorithms contribute toward the challenging data state. We propose several data engineering techniques, such as answer key vetting and expansion, source ingestion, oversampling classes, and question set modifications to increase the computed true labels. In addition, algorithm engineering, such as an implementation of the Newton-Raphson logistic regression with a regularization term, relaxes the constraints of class imbalance during training adaptation. We conclude by empirically demonstrating that data and algorithm engineering are complementary and indispensable to overcome the challenges in this first Watson gamification for real-world business problems."}
{"_id": "eff1e8bf5d1b8cfe0fec1b076c42393b8a8c11bb", "title": "Deep Constrained Clustering - Algorithms and Advances", "text": "The area of constrained clustering has been extensively explored by researchers and used by practitioners. Constrained clustering formulations exist for popular algorithms such as k-means, mixture models, and spectral clustering but have several limitations. We explore a deep learning formulation of constrained clustering and in particular explore how it can extend the field of constrained clustering. We show that our formulation can not only handle standard together/apart constraints without the well documented negative effects reported but can also model instance level constraints (level-of-difficulty), cluster level constraints (balancing cluster size) and triplet constraints. The first two are new ways for domain experts to enforce guidance whilst the later importantly allows generating ordering constraints from continuous side-information."}
{"_id": "da980cc43c11c540188607c2372a8fa57d878090", "title": "MOTHERS WHO KILL THEIR CHILDREN A literature review", "text": "Maternal filicide, the murder of a child by its mother, is a complex phenomenon with various causes and characteristics. Research, by means of the development of several classification systems and in identifying particular risk factors, has been conducted with the aim of better prevention of this emotionally evocative crime. Various disciplines have offered a wide range of perspectives on why women kill their biological children. These are intended to yield a better understanding of the aetiology of this crime. This literature review delineates three dominant perspectives: psychiatric, psychological, and sociological. The main findings of each perspective are discussed. However, these three perspectives frequently operate in conjunction with each other in that both intrapsychic and interpersonal dynamics play a role in acts of maternal filicide. The most vulnerable women appear to be those who have had a severely deficient developmental history (trauma and/or grossly inadequate parenting), those who experience current difficult psychosocial circumstances, and those who have been diagnosed with a psychiatric illness. However, not all women who experience such problems kill their children. In this regard, individual differences have an important role to play and more carefully delineated future research is suggested. One of the most significant findings of this literature review is that there appears to be a paucity of systematic research on the South African phenomenon of parental child homicide."}
{"_id": "19051ff1822758696ba535405765bde3d2df6b5c", "title": "Fast trajectory clustering using Hashing methods", "text": "There has been an explosion in the usage of trajectory data. Clustering is one of the simplest and most powerful approaches for knowledge discovery from trajectories. In order to produce meaningful clusters, well-defined metrics are required to capture the essence of similarity between trajectories. One such distance function is Dynamic Time Warping (DTW), which aligns two trajectories together in order to determine similarity. DTW has been widely accepted as a very good distance measure for trajectory data. However, trajectory clustering is very expensive due to the complexity of the similarity functions, for example, DTW has a high computational cost O(n2), where n is the average length of the trajectory, which makes the clustering process very expensive. In this paper, we propose the use of hashing techniques based on Distance-Based Hashing (DBH) and Locality Sensitive Hashing (LSH) to produce approximate clusters and speed up the clustering process."}
{"_id": "59bb1b423a17bdac54d9b4412ac216618cb68c53", "title": "Human-computer interaction for hybrid carving", "text": "In this paper we explore human-computer interaction for carving, building upon our previous work with the FreeD digital sculpting device. We contribute a new tool design (FreeD V2), with a novel set of interaction techniques for the fabrication of static models: personalized tool paths, manual overriding, and physical merging of virtual models. We also present techniques for fabricating dynamic models, which may be altered directly or parametrically during fabrication. We demonstrate a semi-autonomous operation and evaluate the performance of the tool. We end by discussing synergistic cooperation between human and machine to ensure accuracy while preserving the expressiveness of manual practice."}
{"_id": "adf726bdcdddacee1c70d911b8f84b6a16841a32", "title": "ExtRA: Extracting Prominent Review Aspects from Customer Feedback", "text": "Many existing systems for analyzing and summarizing customer reviews about products or service are based on a number of prominent review aspects. Conventionally, the prominent review aspects of a product type are determined manually. This costly approach cannot scale to large and cross-domain services such as Amazon.com, Taobao.com or Yelp.com where there are a large number of product types and new products emerge almost everyday. In this paper, we propose a novel framework, for extracting the most prominent aspects of a given product type from textual reviews. The proposed framework, ExtRA, extracts K most prominent aspect terms or phrases which do not overlap semantically automatically without supervision. Extensive experiments show that ExtRA is effective and achieves the state-of-the-art performance on a dataset consisting of different product types."}
{"_id": "397edb026ba0500df10ac813229d900e7880f307", "title": "A general framework for benchmarking firewall optimization techniques", "text": "Firewalls are among the most pervasive network security mechanisms, deployed extensively from the borders of networks to end systems. The complexity of modern firewall policies has raised the computational requirements for firewall implementations, potentially limiting the throughput of networks. Administrators currently rely on ad hoc solutions to firewall optimization. To address this problem, a few automatic firewall optimization techniques have been proposed, but there has been no general approach to evaluate the optimality of these techniques. In this paper we present a general framework for rule-based firewall optimization. We give a precise formulation of firewall optimization as an integer programming problem and show that our framework produces optimal reordered rule sets that are semantically equivalent to the original rule set. Our framework considers the complex interactions among the rules in firewall configurations and relies on a novel partitioning of the packet space defined by the rules themselves. For validation, we employ this framework on real firewall rule sets for a quantitative evaluation of existing heuristic approaches. Our results indicate that the framework is general and faithfully captures performance benefits of firewall optimization heuristics."}
{"_id": "9ebecb9581185af467e8c770254a269244513d4b", "title": "Graph Data Management and Mining: A Survey of Algorithms and Applications", "text": "Graph mining and management has become a popular area of research in recent years because of its numerous applications in a wide variety of practical fields, including computational biology, software bug localization and computer networking. Different applications result in graphs of different sizes and complexities. Correspondingly, the applications have different requirements for the underlying mining algorithms. In this chapter, we will provide a survey of different kinds of graph mining and management algorithms. We will also discuss a number of applications, which are dependent upon graph representations. We will discuss how the different graph mining algorithms can be adapted for different applications. Finally, we will discuss important avenues of future research"}
{"_id": "e89523cc72f494c9b77f590951a2a8eccd8a41fd", "title": "A mixed-signal X-band SiGe multi-function control MMIC for phased array radar applications", "text": "In this paper is reported the design, fabrication and test of a mixed-signal SiGe X-band multi-function control MMIC for phased array radar applications. Said MMIC, fabricated, with the ST-Microelectronics BiCMOS7RF SiGe technology, comprises a 5-bit phase shifter, 5-bit attenuator, SPDT switches, several gain amplifiers and a digital serial to parallel converter to reduce the number of MMIC I/O control lines. The gain amplifiers are implemented using SiGe HBTs, while phase shifter, attenuator and SPDT switches are based on CMOS transistors. The measurement results show a return loss better than 15 dB and a gain of 17 dB, equal in both RX and TX state. In RX mode the obtained noise figure is lower than 10 dB, while in TX mode the output P1dB is higher than 12 dBm. The achieved RF performance, the low power consumption and associated low cost, make this SiGe control-chip an attractive solution for high performance/low cost Tx/Rx components for phased array radar applications."}
{"_id": "8c2499a0a4c08f5ee4424ece0e4f680d8faae285", "title": "One-Shot Segmentation in Clutter", "text": "We tackle the problem of one-shot segmentation: finding and segmenting a previously unseen object in a cluttered scene based on a single instruction example. We propose a novel dataset, which we call cluttered Omniglot. Using a baseline architecture combining a Siamese embedding for detection with a U-net for segmentation we show that increasing levels of clutter make the task progressively harder. Using oracle models with access to various amounts of ground-truth information, we evaluate different aspects of the problem and show that in this kind of visual search task, detection and segmentation are two intertwined problems, the solution to each of which helps solving the other. We therefore introduce MaskNet, an improved model that attends to multiple candidate locations, generates segmentation proposals to mask out background clutter and selects among the segmented objects. Our findings suggest that such image recognition models based on an iterative refinement of object detection and foreground segmentation may provide a way to deal with highly cluttered scenes."}
{"_id": "a334fc2ae1b12bbdedf0b5ed4e40e9e8c93a5274", "title": "Modelling consumer credit risk via survival analysis", "text": "Credit risk models are used by financial companies to evaluate in advance the insolvency risk caused by credits that enter into default. Many models for credit risk have been developed over the past few decades. In this paper, we focus on those models that can be formulated in terms of the probability of default by using survival analysis techniques. With this objective three different mechanisms are proposed based on the key idea of writing the default probability in terms of the conditional distribution function of the time to default. The first method is based on a Cox\u2019s regression model, the second approach uses generalized linear models under censoring and the third one is based on nonparametric kernel estimation, using the product-limit conditional distribution function estimator by Beran. The resulting nonparametric estimator of the default probability is proved to be consistent and asymptotically normal. An empirical study, based on modified real data, illustrates the three methods. MSC: 62P20, 91B30, 62G05, 62N01"}
{"_id": "1004b8bfcf19eecb957b1de4af328f41aa097042", "title": "Translating evidence-based decision making into practice: EBDM concepts and finding the evidence.", "text": "This is the first of 2 articles that focuses on strategies that can be used to integrate an evidence-based decision making [EBDM] approach into practice. The articles will focus on EBDM methodology and enhancing skills, including how to find valid evidence to answer clinical questions, critically appraise the evidence found and determine if it applies. In addition, online resources will be identified to supplement information presented in each article. The purpose of this article is to define evidence-based decision making and discuss skills necessary for practitioners to efficiently adopt EBDM. It will provide a guide for finding evidence to answer a clinical question using PubMed's specialized searching tools under Clinical Queries."}
{"_id": "3899e1a75b3b172977c1fc7790327376dccd78e0", "title": "Revisiting Complex Moments for 2-D Shape Representation and Image Normalization", "text": "When comparing 2-D shapes, a key issue is their normalization. Translation and scale are easily taken care of by removing the mean and normalizing the energy. However, defining and computing the orientation of a 2-D shape is not so simple. In fact, although for elongated shapes the principal axis can be used to define one of two possible orientations, there is no such tool for general shapes. As we show in the paper, previous approaches fail to compute the orientation of even noiseless observations of simple shapes. We address this problem and show how to uniquely define the orientation of an arbitrary 2-D shape, in terms of what we call its Principal Moments. We start by showing that a small subset of these moments suffices to describe the underlying 2-D shape, i.e., that they form a compact representation, which is particularly relevant when dealing with large databases. Then, we propose a new method to efficiently compute the shape orientation: Principal Moment Analysis. Finally, we discuss how this method can further be applied to normalize gray-level images. Besides the theoretical proof of correctness, we describe experiments demonstrating robustness to noise and illustrating with real images."}
{"_id": "522da4de436d505583930d23b517421b80c191dd", "title": "IoT Security Framework for Smart Cyber Infrastructures", "text": "The Internet of Things (IoT) will connect not only computers and mobile devices, but it will also interconnect smart buildings, homes, and cities, as well as electrical grids, gas, and water networks, automobiles, airplanes, etc. IoT will lead to the development of a wide range of advanced information services that need to be processed in real-time and require data centers with large storage and computing power. The integration of IoT with Cloud and Fog Computing can bring not only the required computational power and storage capacity, but they enable IoT services to be pervasive, cost-effective, and can be accessed from anywhere using any device (mobile or stationary). However, IoT infrastructures and services will introduce grand security challenges due to the significant increase in the attack surface, complexity, heterogeneity and number of resources. In this paper, we present an IoT security framework for smart infrastructures such as Smart Homes (SH) and smart buildings (SB). We also present a general threat model that can be used to develop a security protection methodology for IoT services against cyber-attacks (known or unknown). Additionally, we show that Anomaly Behavior Analysis (ABA) Intrusion Detection System (ABA-IDS) can detect and classify a wide range of attacks against IoT sensors."}
{"_id": "280bb91e1f8f84eec3979c5d592396d3a4a2963c", "title": "Analysis and Design of a Single-Stage Parallel AC-to-DC Converter", "text": "In this paper, a single-stage (S2) parallel ac-to-dc converter based on single-switch two-output boost-flyback converter is presented. The converter contains two semistages. One is the boost-flyback semistage, which transfers partial input power transferred to load directly through one power flow path and has excellent self-power factor correction property when operating in discontinuous conduction mode even though the boost output is close to the peak value of the line voltage. The other one is the flyback dc-to-dc (dc/dc) semistage that provides the output regulation on another parallel power flow path. With this design, the power conversion efficiency is improved and the current stress of control switch is reduced. Furthermore, the calculation process of power distribution and bulk capacitor voltage, design equations, and design procedure for key parameters are also presented. By following the procedure, an 80 W prototype converter has been built and tested. The experimental results show that the measured line harmonic current at the worst condition complies with the IEC61000-3-2 class D limits, the maximum bulk capacitor voltage is about 415.4 V, and the maximum efficiency is about 85.8%. Hence, the proposed S2 converter is suitable for universal input usage."}
{"_id": "3d7348c63309ddb68b4e69782bc6bf516bb1ced7", "title": "A High Step-Down Transformerless Single-Stage Single-Switch AC/DC Converter", "text": "This paper presents a high step-down tranformerless single-stage single-switch ac/dc converter suitable for universal line applications (90-270 Vrms) . The topology integrates a buck-type power-factor correction (PFC) cell with a buck-boost dc/dc cell and part of the input power is coupled to the output directly after the first power processing. With this direct power transfer feature and sharing capacitor voltages, the converter is able to achieve efficient power conversion, high power factor, low voltage stress on intermediate bus (less than 130 V) and low output voltage without a high step-down transformer. The absence of transformer reduces the component counts and cost of the converter. Unlike most of the boost-type PFC cell, the main switch of the proposed converter only handles the peak inductor current of dc/dc cell rather than the superposition of both inductor currents. Detailed analysis and design procedures of the proposed circuit are given and verified by experimental results."}
{"_id": "43d09c0ebcde0c8d06ab1d6ea50701cb15f437f8", "title": "A Novel Single-Stage High-Power-Factor AC-to-DC LED Driving Circuit With Leakage Inductance Energy Recycling", "text": "This paper proposes a novel single-stage ac-to-dc light-emitting-diode (LED) driver circuit. A buck-boost power factor (PF) corrector is integrated with a flyback converter. A recycling path is built to recover the inductive leakage energy. In this way, the presented circuit can provide not only high PF and low total harmonic distortion but also high conversion efficiency and low switching voltage spikes. Above 0.95 PF and 90% efficiency are obtained from an 8-W LED lamp driver prototype."}
{"_id": "beb38cf2b03e3c22f5fe4e75999bbea52ed3ceee", "title": "Flicker-Free Electrolytic Capacitor-Less Universal Input Offline LED Driver With PFC", "text": "Recent developments in improving lighting efficiency and cost reduction of LEDs have made them suitable alternatives to the current lighting systems. In this paper, a novel offline structure is proposed to drive LEDs. The proposed circuit has a high-input power factor, high efficiency, a long lifetime, and it produces no flicker. To increase the lifetime of the converter, the proposed circuit does not include any electrolytic capacitors in the power stage. The proposed circuit consists of a transition mode flyback converter in order to improve power factor. Additionally, a buck converter is added to the third winding of the flyback transformer in order to create two parallel paths for the electrical power to feed the output load. DC power reaches the load through one stage (flyback) and ac power reaches the load through two stages of conversion (flyback + buck). Therefore, in the proposed one-and-a-half stage circuit, the efficiency is improved compared to a regular two-stage circuit. Although the proposed structure has some output current ripple, it is low enough (less than 8%) that the structure can be rendered flicker free, as shall be discussed. Principles of operation and design equations are presented as well as experimental results for a 700 mA/20 W universal input prototype."}
{"_id": "a3faa972480bb9734eb6bafc54c4cfc190b4595e", "title": "Large air gap coupler for inductive charger", "text": "A novel magnetic coupler of large r r gap IS presented It IS developed for the eleehlo relucle s aumahc mduchve charger The ncw induchve coupler proposed hqe has sUmcient mung ~oducmnce ven rf I has a large a s gap Calculated exrung inductance 18 40 pH at 2 nuns of w d m g and 5 mm ay gap, whch apees well with measured value In order to assess the gmerahon of heat c w e d by the eddy cment, the magneuc flux densrhes in the lnduchve chaser and also a flat non plate, to wbch the inductive charger IS attached, Bre calculated The conyersion efXciency wth the coupler and a MOSFETs full-bndge inverier of IW lrHq IS 97% at 8 3 kW output"}
{"_id": "5f55f327c7fb90cf1831aa54c789d420628191ee", "title": "Implementing Remote Procedure Calls", "text": "Remote procedure calls (RPC) appear to be a useful paradig m for providing communication across a network between programs written in a high-level language. This paper describes a package providing a remote procedure call facility, the options that face the designer of such a package, and the decisions ~we made. We describe the overall structure of our RPC mechanism, our facilities for binding RPC clients, the transport level communication protocol, and some performance measurements. We include descriptioro~ of some optimizations used to achieve high performance and to minimize the load on server machines that have many clients."}
{"_id": "791aefc89307138bbda68f0f78c30dbe288cc20f", "title": "Somatization vs . Psychologization of Emotional Distress : A Paradigmatic Example for Cultural Psychopathology", "text": "This paper describes the developing area of cultural psychopathology, an interdisciplinary field of study focusing on the ways in which cultural factors contribute to the experience and expression of psychological distress. We begin by outlining two approaches, often competing, in order to provide a background to some of the issues that complicate the field. The main section of the paper is devoted to a discussion of depression in Chinese culture as an example of the types of questions that can be studied. Here, we start with a review of the epidemiological literature, suggesting low rates of depression in China, and move to the most commonly cited explanation, namely that Chinese individuals with depression present this distress in a physical way. Different explanations of this phenomenon, known as somatization, are explored and reconceptualized according to an increasingly important model for cross-cultural psychologists: the cultural constitution of the self. We close by discussing some of the contributions, both theoretical and methodological, that can be made by cross-cultural psychologists to researchers in cultural psychopathology. Creative Commons License This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License. This article is available in Online Readings in Psychology and Culture: https://scholarworks.gvsu.edu/orpc/vol10/iss2/3"}
{"_id": "e19497cc905346401fc969187ce5c0b8f4656c65", "title": "Privacy-Aware Tag Recommendation for Image Sharing", "text": "Image tags are very important for indexing, sharing, searching, and surfacing images with private content that needs protection. As the tags are at the sole discretion of users, they tend to be noisy and incomplete. In this paper, we present a privacy-aware approach to automatic image tagging, which aims at improving the quality of user annotations, while also preserving the images' original privacy sharing patterns. Precisely, we recommend potential tags for each target image by mining privacy-aware tags from the most similar images of the target image obtained from a large collection. Experimental results show that privacy-aware approach is able to predict accurate tags that can improve the performance of a downstream application on image privacy prediction. Crowd-sourcing predicted tags exhibit the quality of the recommended tags."}
{"_id": "3f4b312d0bc1c954064eca04f2663d8ee72b5dcf", "title": "A study to evaluate the safety, tolerability, and efficacy of brodalumab in subjects with rheumatoid arthritis and an inadequate response to methotrexate.", "text": "OBJECTIVE\nTo evaluate the efficacy and safety of brodalumab, a human monoclonal antibody inhibitor of the interleukin 17 receptor, in subjects with rheumatoid arthritis (RA).\n\n\nMETHODS\nPatients (n = 252) with inadequate response to methotrexate (MTX) were randomized to receive subcutaneous injections of brodalumab (70 mg, 140 mg, or 210 mg) or placebo. The primary endpoint was the American College of Rheumatology 50% response (ACR50) at Week 12.\n\n\nRESULTS\nDemographics and baseline characteristics were generally balanced among treatment groups. At Week 12, ACR50 occurred in 16% (70 mg), 16% (140 mg), 10% (210 mg), and 13% (placebo; all nonsignificant vs placebo) of subjects. No significant treatment effects were observed for the secondary endpoints, including ACR20, ACR70, and Disease Activity Score in 28 joints. Incidences of all adverse events (AE), including serious AE (SAE), were similar across treatment groups. A total of 7 subjects reported SAE during the study (2 in the placebo group and 5 in the brodalumab groups), none of which was treatment related. There was 1 death (cardiopulmonary failure) \u223c1 week after the last dose in the 140 mg group.\n\n\nCONCLUSION\nOur study failed to find evidence of meaningful clinical efficacy with brodalumab treatment in subjects with RA who had an inadequate response to MTX. These preliminary results do not support further evaluation of brodalumab as a treatment for RA. Clinicaltrials.gov number: NCT00950989."}
{"_id": "40c555885ac9d1df63ff2c2b75c136fe03a35c16", "title": "Alternative seating for young children with Autism Spectrum Disorder: effects on classroom behavior.", "text": "A single subject, withdrawal design was used to investigate the effects of therapy balls as seating on engagement and in-seat behavior of young children with Autism Spectrum Disorder (ASD). In addition, social validity was assessed to evaluate teachers' opinions regarding the intervention. During baseline and withdrawal (A phases) participants used their typical classroom seating device (chair, bench or carpet square). During the intervention (B phases) participants sat on therapy balls. Results indicated substantial improvements in engagement and in-seat behavior when participants were seated on therapy balls. Social validity findings indicated that the teachers' preferred the therapy balls. This study suggests therapy balls as classroom seating may facilitate engagement and in-seat behavior and create opportunities to provide effective instruction."}
{"_id": "a5e4c83b816f2f004ae5dfd600145cea9ea15724", "title": "Automatic Analysis of Rhythmic Poetry with Applications to Generation and Translation", "text": "We employ statistical methods to analyze, generate, and translate rhythmic poetry. We first apply unsupervised learning to reveal word-stress patterns in a corpus of raw poetry. We then use these word-stress patterns, in addition to rhyme and discourse models, to generate English love poetry. Finally, we translate Italian poetry into English, choosing target realizations that conform to desired rhythmic patterns."}
{"_id": "a3c7af4ffc4f04dd2cbb32f108af6d7ba823f35a", "title": "In vivo 3-dimensional analysis of scapular kinematics: comparison of dominant and nondominant shoulders.", "text": "BACKGROUND\nAlterations in scapular motion frequently are seen in association with various shoulder disorders. It is common clinically to compare the pathological shoulder with the contralateral shoulder, in spite of arm dominance, to characterize the disorder. However, there have been few articles that test the underlying assumption that dominant and nondominant shoulders exhibit comparable dynamic kinematics. The purpose of this study was to compare the 3-dimensional (3-D) scapular kinematics of dominant and nondominant shoulders during dynamic scapular plane elevation using 3-D-2-D (2-dimensional) registration techniques.\n\n\nMATERIALS AND METHODS\nTwelve healthy males with a mean age of 32 years (range, 27-36) were enrolled in this study. Bilateral fluoroscopic images during scapular plane elevation and lowering were taken, and CT-derived 3-D bone models were matched with the silhouette of the bones in the fluoroscopic images using 3-D-2-D registration techniques. Angular values of the scapula and scapulohumeral rhythm were compared between dominant and nondominant shoulders with statistical analysis.\n\n\nRESULTS\nThere was a significant difference in upward rotation angles between paired shoulders (P < .001), while significant differences were not found in the other angular values and scapulohumeral rhythm. The dominant scapulae were 10\u00b0 more downwardly rotated at rest and 4\u00b0 more upwardly rotated during elevation compared to the nondominant scapulae.\n\n\nDISCUSSION/CONCLUSION\nScapular motion was not the same between dominant and nondominant arms in healthy subjects. The dominant scapula was rotated further downward at rest and reached greater upward rotation with abduction. These differences should be considered in clinical assessment of shoulder pathology."}
{"_id": "c0898ad1ead40c3d863661424decd778db6ed437", "title": "A Low-Power Text-Dependent Speaker Verification System with Narrow-Band Feature Pre-Selection and Weighted Dynamic Time Warping", "text": "To fully enable voice interaction in wearable devices, a system requires low-power, customizable voice-authenticated wake-up. Existing speaker-verification (SV) methods have shortcomings relating to power consumption and noise susceptibility. To meet the application requirements, we propose a low-power, text-dependent SV system comprising a sparse spectral feature extraction front-end showing improved noise robustness and accuracy at low power, and a back-end running an improved dynamic time warping (DTW) algorithm that preserves signal envelope while reducing misalignments. Without background noise, the proposed system achieves an equal-errorrate (EER) of 1.1%, compared to 1.4% with a conventional Mel-frequency cepstral coefficients (MFCC)+DTW system and 2.6% with a Gaussian mixture universal background (GMMUBM) based system. At 3dB signal-to-noise ratio (SNR), the proposed system achieves an EER of 5.7%, compared to 13% with a conventional MFCC+DTW system and 6.8% with a GMM-UBM based system. The proposed system enables simple, low-power implementation such that the power consumption of the end-to-end system, which includes a voice activity detector, feature extraction front-end, and back-end decision unit, is under 380 \u03bcW."}
{"_id": "a904ec1184cdbef0a2a362032b84885779574858", "title": "Fast Method to Optimize RF Bumper Transparency for Wide-Band Automotive Radar", "text": "This article describes novel methods to enhance the RF integration of wideband automotive radars. The first part of the paper is introducing the context and the considered scenario. Whereas the second part is focusing on two solutions to improve the radar integration in car chassis environment. The aim is to have good RF performances on wideband operation (76\u201381 GHz) and on a wide range of angle of incidence for the radar having wide field of view (e.g. Blind spot radar, Pedestrian detection, \u2026)."}
{"_id": "59cce25151fe4d70383624dd535f533617dd8025", "title": "Unsupervised prediction of citation influences", "text": "Publication repositories contain an abundance of information about the evolution of scientific research areas. We address the problem of creating a visualization of a research area that describes the flow of topics between papers, quantifies the impact that papers have on each other, and helps to identify key contributions. To this end, we devise a probabilistic topic model that explains the generation of documents; the model incorporates the aspects of topical innovation and topical inheritance via citations. We evaluate the model's ability to predict the strength of influence of citations against manually rated citations."}
{"_id": "5a4b3297dbced87babc1b4f9f673191e007a3e03", "title": "Force to Rebalance Control of HRG and Suppression of Its Errors on the Basis of FPGA", "text": "A novel design of force to rebalance control for a hemispherical resonator gyro (HRG) based on FPGA is demonstrated in this paper. The proposed design takes advantage of the automatic gain control loop and phase lock loop configuration in the drive mode while making full use of the quadrature control loop and rebalance control loop in controlling the oscillating dynamics in the sense mode. First, the math model of HRG with inhomogeneous damping and frequency split is theoretically analyzed. In addition, the major drift mechanisms in the HRG are described and the methods that can suppress the gyro drift are mentioned. Based on the math model and drift mechanisms suppression method, four control loops are employed to realize the manipulation of the HRG by using a FPGA circuit. The reference-phase loop and amplitude control loop are used to maintain the vibration of primary mode at its natural frequency with constant amplitude. The frequency split is readily eliminated by the quadrature loop with a DC voltage feedback from the quadrature component of the node. The secondary mode response to the angle rate input is nullified by the rebalance control loop. In order to validate the effect of the digital control of HRG, experiments are carried out with a turntable. The experimental results show that the design is suitable for the control of HRG which has good linearity scale factor and bias stability."}
{"_id": "3c55c334d34b611a565683ea42a06d4e1f01db47", "title": "Topic-Oriented Exploratory Search Based on an Indexing Network", "text": "An exploratory search may be driven by a user's curiosity or desire for specific information. When users investigate unfamiliar fields, they may want to learn more about a particular subject area to increase their knowledge rather than solve a specific problem. This work proposes a topic-oriented exploratory search method that provides browse guidance to users. It allows them to discover new associations and knowledge, and helps them find their interested information and knowledge. Since an exploratory search needs to judge the ability to discover new knowledge, the existing commonly used metrics fail to capture it. This paper thus defines a new set of criteria containing clarity, relevance, novelty, and diversity to analyze the effectiveness of an exploratory search. Experiments are designed to compare results from the proposed method and Google's \u201csearch related to ....\u201d The results show that the proposed one is more suitable for learning new associations and discovering new knowledge with highly likely relevance to a query. This work concludes that it is more suitable than Google for an exploratory search."}
{"_id": "0889019b395890f57bfae3ce7d8391649ae68de4", "title": "Word Embedding based Correlation Model for Question/Answer Matching", "text": "The large scale of Q&A archives accumulated in community based question answering (CQA) servivces are important information and knowledge resource on the web. Question and answer matching task has been attached much importance to for its ability to reuse knowledge stored in these systems: it can be useful in enhancing user experience with recurrent questions. In this paper, a Word Embedding based Correlation (WEC) model is proposed by integrating advantages of both the translation model and word embedding. Given a random pair of words, WEC can score their co-occurrence probability in Q&A pairs, while it can also leverage the continuity and smoothness of continuous space word representation to deal with new pairs of words that are rare in the training parallel text. An experimental study on Yahoo! Answers dataset and Baidu Zhidao dataset shows this new method\u2019s promising"}
{"_id": "0a0e7baec6a4d2f3100c856c631bbc50e08fbad5", "title": "OpenCL framework for ARM processors with NEON support", "text": "The state-of-the-art ARM processors provide multiple cores and SIMD instructions. OpenCL is a promising programming model for utilizing such parallel processing capability because of its SPMD programming model and built-in vector support. Moreover, it provides portability between multicore ARM processors and accelerators in embedded systems. In this paper, we introduce the design and implementation of an efficient OpenCL framework for multicore ARM processors. Computational tasks in a program are implemented as OpenCL kernels and run on all CPU cores in parallel by our OpenCL framework. Vector operations and built-in functions in OpenCL kernels are optimized using the NEON SIMD instruction set. We evaluate our OpenCL framework using 37 benchmark applications. The result shows that our approach is effective and promising."}
{"_id": "f3381a72a5ed288d54a93d92a85e96f7ba2ab36c", "title": "LEARNING TO SOLVE CIRCUIT-SAT: AN UNSUPERVISED DIFFERENTIABLE APPROACH", "text": "Recent efforts to combine Representation Learning with Formal Methods, commonly known as Neuro-Symbolic Methods, have given rise to a new trend of applying rich neural architectures to solve classical combinatorial optimization problems. In this paper, we propose a neural framework that can learn to solve the Circuit Satisfiability problem. Our framework is built upon two fundamental contributions: a rich embedding architecture that encodes the problem structure, and an end-to-end differentiable training procedure that mimics Reinforcement Learning and trains the model directly toward solving the SAT problem. The experimental results show the superior out-of-sample generalization performance of our framework compared to the recently developed NeuroSAT method."}
{"_id": "0052c9768f873bebf4b1788cc94bb8b0b3526be7", "title": "Interaction control of an UAV endowed with a manipulator", "text": "In this paper, we present the design, simulation and experimental validation of a control architecture for an unmanned aerial vehicle endowed with a manipulation system and interacting with a remote environment. The goal of this work is to show that the interaction control allows the manipulator to track a desired force, normal to a vertical wall, while still maintaining the possibility of moving on the wall. The control strategy has been implemented and validated in simulations and experiments on the manipulator standalone, i.e., attached to a fixed base, and on the manipulator attached to the aerial vehicle."}
{"_id": "6b3bafa55e812213a0caecc06d9a8c625d75f656", "title": "Negative pressure wave based pipeline Leak Detection: Challenges and algorithms", "text": "Negative pressure wave is a popular method to detect the occurrence and location of leak incidents in oil/gas pipeline. Three core technical challenges and related algorithm are discussed in this paper. The first is data quality. The balance between noise level and locating precision is discussed in filter design. The second one is dynamic slope in anomaly detection, whence a bi-SPC (Static Process Control) algorithms is proposed to make the threshold be adaptive. The third one is the false alarm caused normal working condition changes. Multiple-sensor paring algorithms is presented. With these algorithms, the robustness and locating precision of LDS (Leak Detection System) can be ensured."}
{"_id": "ac7d747a78ca2ad42627bd6360064d8c694a92e6", "title": "Data mining techniques - for marketing, sales, and customer support", "text": "When there are many people who don't need to expect something more than the benefits to take, we will suggest you to have willing to reach all benefits. Be sure and surely do to take this data mining techniques for marketing sales and customer support that gives the best reasons to read. When you really need to get the reason why, this data mining techniques for marketing sales and customer support book will probably make you feel curious."}
{"_id": "819d2e6855992a3efbf97750d9ad0b5379c9cc4e", "title": "An On-Chip Temperature Sensor With a Self-Discharging Diode in 32-nm SOI CMOS", "text": "We report a 39 \u03bcm \u00d7 27 \u03bcm on-chip temperature sensor which uses the temperature-dependent reverse-bias leakage current of a lateral silicon on insulator (SOI) CMOS p-n diode to monitor the thermal profile of a 32-nm microprocessor core. In this sensor, the diode junction capacitance is first charged to a fixed voltage. Subsequently, the diode capacitance is allowed to self-discharge through its temperature-dependent reverse-bias current. Next, by using a time-to-digital-converter circuit, the discharge voltage is converted to a temperature-dependent time pulse, and finally, its width is measured by using a digital counter. This compact temperature sensor demonstrates a 3\u03c3 measurement inaccuracy of \u00b11.95\u00b0C across the 5\u00b0C-100\u00b0C temperature range while consuming only 100 \u03bcW from a single 1.65-V supply."}
{"_id": "5e095981ebf4d389e9356bd56e59e0ade1b42e88", "title": "2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text", "text": "The 2010 i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records presented three tasks: a concept extraction task focused on the extraction of medical concepts from patient reports; an assertion classification task focused on assigning assertion types for medical problem concepts; and a relation classification task focused on assigning relation types that hold between medical problems, tests, and treatments. i2b2 and the VA provided an annotated reference standard corpus for the three tasks. Using this reference standard, 22 systems were developed for concept extraction, 21 for assertion classification, and 16 for relation classification. These systems showed that machine learning approaches could be augmented with rule-based systems to determine concepts, assertions, and relations. Depending on the task, the rule-based systems can either provide input for machine learning or post-process the output of machine learning. Ensembles of classifiers, information from unlabeled data, and external knowledge sources can help when the training data are inadequate."}
{"_id": "673a851db00c835d17788b8a6cb7b470de493fb8", "title": "Heterogenous Uncertainty Sampling for Supervised Learning", "text": "Uncertainty sampling methods iteratively request class labels for training instances whose classes are uncertain despite the previous labeled instances. These methods can greatly reduce the number of instances that an expert need label. One problem with this approach is that the classifier best suited for an application may be too expensive to train or use during the selection of instances. We test the use of one classifier (a highly efficient probabilistic one) to select examples for training another (the C4.5 rule induction program). Despite being chosen by this heterogeneous approach, the uncertainty samples yielded classifiers with lower error rates than random samples ten times larger."}
{"_id": "2046412fecff64e095cc5190b69172055afd2094", "title": "Information-Based Objective Functions for Active Data Selection", "text": "Learning can be made more efficient if we can actively select particularly salient data points. Within a Bayesian learning framework, objective functions are discussed that measure the expected informativeness of candidate measurements. Three alternative specifications of what we want to gain information about lead to three different criteria for data selection. All these criteria depend on the assumption that the hypothesis space is correct, which may prove to be their main weakness."}
{"_id": "96dfb38a00a9c78cd39e8a2f38494a38ed364cc8", "title": "The PhysIO Toolbox for Modeling Physiological Noise in fMRI Data", "text": "BACKGROUND\nPhysiological noise is one of the major confounds for fMRI. A common class of correction methods model noise from peripheral measures, such as ECGs or pneumatic belts. However, physiological noise correction has not emerged as a standard preprocessing step for fMRI data yet due to: (1) the varying data quality of physiological recordings, (2) non-standardized peripheral data formats and (3) the lack of full automatization of processing and modeling physiology, required for large-cohort studies.\n\n\nNEW METHODS\nWe introduce the PhysIO Toolbox for preprocessing of physiological recordings and model-based noise correction. It implements a variety of noise models, such as RETROICOR, respiratory volume per time and heart rate variability responses (RVT/HRV). The toolbox covers all intermediate steps - from flexible read-in of data formats to GLM regressor/contrast creation - without any manual intervention.\n\n\nRESULTS\nWe demonstrate the workflow of the toolbox and its functionality for datasets from different vendors, recording devices, field strengths and subject populations. Automatization of physiological noise correction and performance evaluation are reported in a group study (N=35).\n\n\nCOMPARISON WITH EXISTING METHODS\nThe PhysIO Toolbox reproduces physiological noise patterns and correction efficacy of previously implemented noise models. It increases modeling robustness by outperforming vendor-provided peak detection methods for physiological cycles. Finally, the toolbox offers an integrated framework with full automatization, including performance monitoring, and flexibility with respect to the input data.\n\n\nCONCLUSIONS\nThrough its platform-independent Matlab implementation, open-source distribution, and modular structure, the PhysIO Toolbox renders physiological noise correction an accessible preprocessing step for fMRI data."}
{"_id": "48c2f63d8f78387e0481e7bde3e54da8883fcbb5", "title": "Eratosthenes sieve based key-frame extraction technique for event summarization in videos", "text": "The rapid growth of video data demands both effective and efficient video summarization methods so that users are empowered to quickly browse and comprehend a large amount of video content. It is a herculean task to manage access to video content in real time where humongous amount of audiovisual recorded data is generated every second. In this paper we propose an Eratosthenes Sieve based key-frame extraction approach for video summarization (VS) which can work better for real-time applications. Here, Eratosthenes Sieve is used to generate sets of all Prime number frames and nonprime number frames up to total N frames of a video. k-means clustering procedure is employed on these sets to extract the key\u2013frames quickly. Here, the challenge is to find the optimal set of clusters, achieved by employing Davies-Bouldin Index (DBI). DBI a cluster validation technique which allows users with free parameter based VS approach to choose the desired number of key-frames without incurring additional computational costs. Moreover, our proposed approach includes likes of both local and global perspective videos. The method strongly enhances clustering procedure performance trough engagement of Eratosthenes Sieve. Qualitative and quantitative evaluation and complexity computation are done in order to compare the performances of the proposed model and state-of-the-art models. Experimental results on two benchmark datasets with various types of videos exhibit that the proposed methods outperform the state-of-the-art models on F-measure."}
{"_id": "36614d27abacada826afc5e5fdef3dac97bbe55e", "title": "Quantifying the total wetted surface area of the world fleet: a first step in determining the potential extent of ships\u2019 biofouling", "text": "Ships\u2019 hulls can transport aquatic nuisance species, but there is little quantitative information about the magnitude of vessel biofouling on a global or regional scale. There does not exist a robust method to estimate the wetted surface area (WSA) of a particular fleet of ships, especially across the diversity of possible vessel types. An estimate of the total WSA of ship arrivals into a port or region is essential to determine the potential scope of biofouling and to inform management strategies to reduce the future invasions. Multiple statistical models were developed so commonly available ships\u2019 parameters could be used to estimate the WSA for a given set of fleet data. Using individual ship characteristics and publicly available data from\u00a0 ~120,000 active commercial ships in the world fleet, the method results in a total global minimum WSA estimate of approximately 325\u00a0\u00d7\u00a0106\u00a0m2. The size of the global fleet employed here is greater than the commonly cited vessel number of approximately 80,000\u201390,000, as we include ships <100 gross tons. Over 190,000 vessels were initially identified, representing a WSA of 571\u00a0\u00d7\u00a0106\u00a0m2, but active status of only 120,000 vessels could be confirmed. Reliable data were unavailable on the operating status of many additional and especially smaller vessels. This approach, along with a contemporary and comprehensive estimate of global WSA, when combined with knowledge of the different operational profiles of ships that may reduce biofouling (port residence times, steaming speeds, extent of antifouling coatings, cleaning frequency, etc.), can inform current numerical models and risk assessments of bioinvasions."}
{"_id": "bf5a245180f45e7025ba7497b3ef62a1d160235e", "title": "Documenting Software Architectures: Views and Beyond", "text": "* This author is with the Software Engineering Institute at Carnegie Mellon University. ** This author is with the School of Computer Science at Carnegie Mellon University. Software architecture has emerged as a foundational concept for the successful development of large, complex systems. Signs that the field is maturing to become an engineering discipline include textbooks on the subject; the work in repeatable design, exemplified by architectural styles and patterns; robust methods for evaluating software architectures; international conferences devoted to it; and recognition of architecture as a professional practice. However, treatment of architecture to date has largely concentrated on its design, and to a lesser extent, its validation. But effectively documenting an architecture is as important as crafting it, because if the architecture is not understood (or worse, misunderstood) it cannot meet its goals as the unifying vision for system and software development. Three years ago, researchers at the Software Engineering Institute and the Carnegie Mellon School of Computer Science set out to answer the question: \u201cHow should you document an architecture so that others can successfully use it, maintain it, and build a system from it?\u201d The result of that work is an approach we loosely call \u201cviews and beyond.\u201d [1] Modern software architecture practice embraces the concept of architectural views. A view is a representation of a set of system elements and relations associated with them. Modern systems are more than complex enough to make it difficult to grasp them all at once. Instead, we restrict our attention at any one moment to one (or a small number) of the software system\u2019s structures, which we represent as views. Some authors prescribe a fixed set of views with which to engineer and communicate an architecture. Rational\u2019s Unified Process, for example, is based on Kruchten\u2019s 4+1 view approach to software [4]. The Siemens Four Views model [3] is another example. A recent trend, however, is to recognize that architects should produce whatever views are useful for the system at hand. IEEE 1471 exemplifies this philosophy; it holds that an architecture description consists of a set of views, each of which conforms to a viewpoint, which in turn is a realization of the concerns of one or more stakeholders. This philosophy about views leads to the fundamental principle of the views-and-beyond approach:"}
{"_id": "eda5e8bdda9e6012198d62fed9a2f09ea4f82003", "title": "VeTrack: Real Time Vehicle Tracking in Uninstrumented Indoor Environments", "text": "Although location awareness and turn-by-turn instructions are prevalent outdoors due to GPS, we are back into the darkness in uninstrumented indoor environments such as underground parking structures. We get confused, disoriented when driving in these mazes, and frequently forget where we parked, ending up circling back and forth upon return.In this paper, we propose VeTrack, a smartphone-only system that tracks the vehicle's location in real time using the phone's inertial sensors. It does not require any environment instrumentation or cloud backend. It uses a novel \"shadow\" tracing method to accurately estimate the vehicle's trajectories despite arbitrary phone/vehicle poses and frequent disturbances. We develop algorithms in a Sequential Monte Carlo framework to represent vehicle states probabilistically, and harness constraints by the garage map and detected landmarks to robustly infer the vehicle location. We also find landmark (e.g., speed bumps, turns) recognition methods reliable against noises, disturbances from bumpy rides and even hand-held movements. We implement a highly efficient prototype and conduct extensive experiments in multiple parking structures of different sizes and structures, with multiple vehicles and drivers. We find that VeTrack can estimate the vehicle's real time location with almost negligible latency, with error of 2-4 parking spaces at 80-percentile."}
{"_id": "47154ab3630bba115d53bbc5d9cbdd85548f2230", "title": "Prediction and Simulation of Human Mobility Following Natural Disasters", "text": "In recent decades, the frequency and intensity of natural disasters has increased significantly, and this trend is expected to continue. Therefore, understanding and predicting human behavior and mobility during a disaster will play a vital role in planning effective humanitarian relief, disaster management, and long-term societal reconstruction. However, such research is very difficult to perform owing to the uniqueness of various disasters and the unavailability of reliable and large-scale human mobility data. In this study, we collect big and heterogeneous data (e.g., GPS records of 1.6 million users1 over 3 years, data on earthquakes that have occurred in Japan over 4 years, news report data, and transportation network data) to study human mobility following natural disasters. An empirical analysis is conducted to explore the basic laws governing human mobility following disasters, and an effective human mobility model is developed to predict and simulate population movements. The experimental results demonstrate the efficiency of our model, and they suggest that human mobility following disasters can be significantly more predictable and be more easily simulated than previously thought."}
{"_id": "0f4f3a06af5e2bcb45711bbde7554144516d97a8", "title": "Clarithromycin and Tetracycline Binding to Soil Humic Acid in the Absence and Presence of Calcium.", "text": "Numerous ionizable organic micropollutants contain positively charged moieties at pH values typical of environmental systems. Describing organic cation and zwitterion interaction with dissolved natural organic matter requires explicit consideration of the pH-dependent speciation of both sorbate and sorbent. We studied the pH-, ionic strength-, and concentration-dependent binding of relatively large, organic cations and zwitterions (viz., the antibiotics clarithromycin and tetracycline) to dissolved humic acid in the absence and presence of Ca(2+) and evaluated the ability of the NICA-Donnan model to describe the data. Clarithromycin interaction with dissolved humic acid was well described by the model including the competitive effect of Ca(2+) on clarithromycin binding over a wide range of solution conditions by considering only the binding of the cationic species to low proton-affinity sites in humic acid. Tetracycline possesses multiple ionizable moieties and forms complexes with Ca(2+). An excellent fit to experimental data was achieved by considering tetracycline cation interaction with both low and high proton-affinity sites of humic acid and zwitterion interaction with high proton-affinity sites. In contrast to clarithromycin, tetracycline binding to humic acid increased in the presence of Ca(2+), especially under alkaline conditions. Model calculations indicate that this increase is due to electrostatic interaction of positively charged tetracycline-Ca complexes with humic acid rather than due to the formation of ternary complexes, except at very low TC concentrations."}
{"_id": "b138b5ffb82d9ba195400b35dcf837f0e500c89e", "title": "Survey of clustering algorithms for MANET", "text": "Many clustering schemes have been proposed for ad hoc networks. A systematic classification of these clustering schemes enables one to better understand and make improvements. In mobile ad hoc networks, the movement of the network nodes may quickly change the topology resulting in the increase of the overhead message in topology maintenance. Protocols try to keep the number of nodes in a cluster around a pre-defined threshold to facilitate the optimal operation of the medium access control protocol. The clusterhead election is invoked on-demand, and is aimed to reduce the computation and communication costs. A large variety of approaches for ad hoc clustering have been developed by researchers which focus on different performance metrics. This paper presents a survey of different clustering schemes."}
{"_id": "edab384ff0d582807b7b819bcc79eff8cda8a0ef", "title": "Efficient reinforcement learning using Gaussian processes", "text": "IX"}
{"_id": "1e2aac6424413ede3380957c289daeaff4b7f2a4", "title": "Learning Sparsely Used Overcomplete Dictionaries", "text": "We consider the problem of learning sparsely used overcomplete dictionaries, where each observation is a sparse combination of elements from an unknown overcomplete dictionary. We establish exact recovery when the dictionary elements are mutually incoherent. Our method consists of a clustering-based initialization step, which provides an approximate estimate of the true dictionary with guaranteed accuracy. This estimate is then refined via an iterative algorithm with the following alternating steps: 1) estimation of the dictionary coefficients for each observation through l1 minimization, given the dictionary estimate, and 2) estimation of the dictionary elements through least squares, given the coefficient estimates. We establish that, under a set of sufficient conditions, our method converges at a linear rate to the true dictionary as well as the true coefficients for each observation."}
{"_id": "cc2ac69f43da51ae8713973b072e084067af2c31", "title": "Fuzzy controlled fast charging system for lithium-ion batteries", "text": "A DSP is adopted to construct a fuzzy controlled lithium-ion battery charging system. By using this intelligent charging system, the data collection, calculation and peripheral circuit control are performed for the battery charging status. According to the lithium-ion battery charging specifications, two hours are required for battery charging. A fuzzy logic controller (FLC) is constructed by using the battery protection cell voltage and voltage difference among the batteries. They are used as the input variable to shorten the charging and equalizing time and assure that the battery will be operated within the safety voltage range."}
{"_id": "43b5a8ccdb652a709f2c79dabc5616ccaf30fec6", "title": "A Particle Swarm Optimization ( PSO )-based Heuristic for Scheduling Work fl ow Applications in Cloud Computing Environments", "text": "Cloud computing environments facilitate applications by providing virtualized resources that can be provisioned dynamically. However, users are charged on a pay-per-use basis. User applications may incur large data retrieval and execution costs when they are scheduled taking into account only the \u2018execution time\u2019. In addition to optimizing execution time, the cost arising from data transfers between resources as well as execution costs must also be taken into account. In this paper, we present a particle swarm optimization (PSO) based scheduling heuristic for data intensive applications that takes into account both computation cost and data transmission cost. We experiment with a workflow application by varying its computation and communication costs. We analyze the cost savings when using PSO as compared to using existing \u2018Best Resource Selection\u2019 (BRS) algorithm. Our results show that we can achieve: a) as much as 3 times cost savings as compared to BRS, b) good distribution of workload onto resources, when using PSO based scheduling heuristic."}
{"_id": "3eb2f78a3127a9c5f09fe2c455e71509fab30442", "title": "Community-Affiliation Graph Model for Overlapping Network Community Detection", "text": "One of the main organizing principles in real-world networks is that of network communities, where sets of nodes organize into densely linked clusters. Communities in networks often overlap as nodes can belong to multiple communities at once. Identifying such overlapping communities is crucial for the understanding the structure as well as the function of real-world networks. Even though community structure in networks has been widely studied in the past, practically all research makes an implicit assumption that overlaps between communities are less densely connected than the non-overlapping parts themselves. Here we validate this assumption on 6 large scale social, collaboration and information networks where nodes explicitly state their community memberships. By examining such ground-truth communities we find that the community overlaps are more densely connected than the non-overlapping parts, which is in sharp contrast to the conventional wisdom that community overlaps are more sparsely connected than the communities themselves. Practically all existing community detection methods fail to detect communities with dense overlaps. We propose Community-Affiliation Graph Model, a model-based community detection method that builds on bipartite node-community affiliation networks. Our method successfully captures overlapping, non-overlapping as well as hierarchically nested communities, and identifies relevant communities more accurately than the state-of-the-art methods in networks ranging from biological to social and information networks."}
{"_id": "4a2aef701393166d17b1e38f553e6935509bed63", "title": "The head direction signal: origins and sensory-motor integration.", "text": "Navigation first requires accurate perception of one's spatial orientation within the environment, which consists of knowledge about location and directional heading. Cells within several limbic system areas of the mammalian brain discharge allocentrically as a function of the animal's directional heading, independent of the animal's location and ongoing behavior. These cells are referred to as head direction (HD) cells and are believed to encode the animal's perceived directional heading with respect to its environment. Although HD cells are found in several areas, the principal circuit for generating this signal originates in the dorsal tegmental nucleus and projects serially, with some reciprocal connections, to the lateral mammillary nucleus --> anterodorsal thalamus --> PoS, and terminates in the entorhinal cortex. HD cells receive multimodal information about landmarks and self-generated movements. Vestibular information appears critical for generating the directional signal, but motor/proprioceptive and landmark information are important for updating it."}
{"_id": "6c7ca39ece117890c9f793af9d97536958d25f35", "title": "A hybrid method for imputation of missing values using optimized fuzzy c-means with support vector regression and a genetic algorithm", "text": "Missing values in datasets should be extracted from the datasets or should be estimated before they are used for classification, association rules or clustering in the preprocessing stage of data mining. In this study, we utilize a fuzzy c-means clustering hybrid approach that combines support vector regression and a genetic algorithm. In this method, the fuzzy clustering parameters, cluster size and weighting factor are optimized and missing values are estimated. The proposed novel hybrid method yields sufficient and sensible imputation performance results. The results are compared with those of fuzzy c-means genetic algorithm imputation, support vector regression genetic algorithm imputation and zero imputation. 2013 Elsevier Inc. All rights reserved."}
{"_id": "0d645b2254be58a788e811da7d12beed5d19ea3c", "title": "Task driven perceptual organization for extraction of rooftop polygons", "text": "A new method for extracting planar polygonal rooftops in monocular aerial imagery is proposed. Structural features are extracted and hierarchically related using perceptual grouping techniques. Top-down feature veri cation is used so that features, and links between the features, are veri ed with local information in the image and weighed in a graph. Cycles in the graph correspond to possible building rooftop hypotheses. Virtual features are hypothesized for the perceptual completion of partially occluded rooftops. Extraction of the \\best\" grouping of features into a building rooftop hypothesis is posed as a graph search problem. The maximally weighted, independent set of cycles in the graph is extracted as the nal set of roof"}
{"_id": "871e8debb37e25bb958ee132828da4411a95944d", "title": "Didn't you see my message?: predicting attentiveness to mobile instant messages", "text": "Mobile instant messaging (e.g., via SMS or WhatsApp) often goes along with an expectation of high attentiveness, i.e., that the receiver will notice and read the message within a few minutes. Hence, existing instant messaging services for mobile phones share indicators of availability, such as the last time the user has been online. However, in this paper we not only provide evidence that these cues create social pressure, but that they are also weak predictors of attentiveness. As remedy, we propose to share a machine-computed prediction of whether the user will view a message within the next few minutes or not. For two weeks, we collected behavioral data from 24 users of mobile instant messaging services. By the means of machine-learning techniques, we identified that simple features extracted from the phone, such as the user's interaction with the notification center, the screen activity, the proximity sensor, and the ringer mode, are strong predictors of how quickly the user will attend to the messages. With seven automatically selected features our model predicts whether a phone user will view a message within a few minutes with 70.6% accuracy and a precision for fast attendance of 81.2%"}
{"_id": "44c2b79b31461d2a0d957f5c121454d57cf1362c", "title": "Conjugated linoleic acid and disease prevention: a review of current knowledge.", "text": "Conjugated linoleic acid (CLA), a derivative of a fatty acid linoleic acid (LA), has been reported to decrease tumorigenesis in animals. CLA is unique because unlike most antioxidants which are components of plant products, it is present in food from animal sources such as dairy foods and meats. CLA concentrations in dairy products typically range from 2.9 to 8.92 mg/g fat of which the 9-cis, 11-trans isomer makes up to 73% to 93% of the total CLA. Low concentrations of CLA are found in human blood and tissues. In vitro results suggest that CLA is cytotoxic to MCF-7 cells and it inhibits the proliferation of human malignant melanoma and colorectal cancer cells. In animal studies, CLA has inhibited the development of mouse epidermal tumors, mouse forestomach cancer and rat mammary cancer. Hamsters fed CLA collectively had significantly reduced levels of plasma total cholesterol, non-high-density lipoprotein cholesterol, (combined very-low and low-density lipoprotein) and triglycerides with no effect on high-density lipoprotein cholesterol, as compared to controls. Dietary CLA modulated certain aspects of the immune defense but had no obvious effect on the growth of an established, aggressive mammary tumor in mice. It is now thought that CLA itself may not have anti-oxidant capabilities but may produce substances which protect cells from the detrimental effects of peroxides. There is, however, insufficient evidence from human epidemiological data, and very few of the animal studies have shown a dose-response relationship with the quantity of CLA feed and the extent of tumor growth. Further research with tumor models is needed to test the efficacy and utility of CLA in cancer and other disease prevention and form the basis of evaluating its effect in humans by observational studies and clinical trials."}
{"_id": "199d73ceef4a3ecf8cde17fa25db9e5d58f062a4", "title": "Gore Galore: Literary Theory and Computer Games", "text": "Computer games have not been adequately theorized within the humanities. In this paper a brief history of computer games is presented as a starting point for developing a topology of games and a theory of computer games as rhetorical artifacts suitable for critical study. The paper addresses the question of why games should be treated seriously and suggests a theoretical approach based on Backhtin\u2019s poetics of the novel where the experience of time and space (the chronotope) provides a framework of questions for discussing computer games."}
{"_id": "c74e735826d50018235b98dd4723a9fad28b3956", "title": "Qualitative Methods in Research on Teaching", "text": "This chapter reviews basic issues of theory and method in approaches to research on teaching that are alternatively called ethnographic, qualitative, participant observational, case study, symbolic interactionist, phenomenological, constructivist, or interpretive. These approaches are all slightly different, but each bears strong family resemblance to the others. The set of related approaches is relatively new in the field of research on teaching. The approaches have emerged as significant in the decade of the 1960s in England and in the 1970s in the United States, Australia, New Zealand, and Germany. Because interest in these approaches is so recent, the previous editions of the Handbook of Research on Teaching do not contain a chapter devoted to participant observational research. Accordingly, this chapter attempts to describe r~s and their theoretical _nresup~s in considerable detail and does not attempt an exhaustive review of the rapidly growing literature in the field. Such a review will be appropriate for the next edition of this handbook. From this point on I will use the term interpretive to refer to the whole family of approaches to participant observational research. I adopt this term for three reasons: (a) It is more inclusive than many of the others (e.g., ethnography, case study); (b) W Blake"}
{"_id": "0fc0a9260afbf0ba48c8343221713b7bf73616d2", "title": "Breast disorders in pregnant and lactating women.", "text": "The breast undergoes extensive changes during pregnancy and lactation that can create diagnostic challenges. This article reviews the anatomy of the breast, breast changes associated with pregnancy and lactation, and breast imaging techniques for pregnant and lactating women. Various benign breast conditions in this patient population also are discussed, such as lactating adenomas, galactoceles, and granulomatous mastitis. Finally, pregnancy-associated breast cancer is presented, including its epidemiology, diagnosis, staging, treatment, and prognosis."}
{"_id": "16db62aeefef00907b47072d658b8b9fcd16bdde", "title": "A traffic-aware street lighting scheme for Smart Cities using autonomous networked sensors", "text": "Street lighting is a ubiquitous utility, but sustaining its operation presents a heavy financial and environmental burden. Many schemes have been proposed which selectively dim lights to improve energy efficiency, but little consideration has been given to the usefulness of the resultant street lighting system. This paper proposes a real-time adaptive lighting scheme, which detects the presence of vehicles and pedestrians and dynamically adjusts their brightness to the optimal level. This improves the energy efficiency of street lighting and its usefulness; a streetlight utility model is presented to evaluate this. The proposed scheme is simulated using an environment modelling a road network, its users, and a networked communication system and considers a real streetlight topology from a residential area. The proposed scheme achieves similar or improved utility to existing schemes, while consuming as little as 1-2% of the energy required by conventional and state-of-the-art techniques."}
{"_id": "46e678c863fa00b178ed17b594da8cf1dab6d5a0", "title": "Road Network Extraction and Intersection Detection From Aerial Images by Tracking Road Footprints", "text": "In this paper, a new two-step approach (detecting and pruning) for automatic extraction of road networks from aerial images is presented. The road detection step is based on shape classification of a local homogeneous region around a pixel. The local homogeneous region is enclosed by a polygon, called the footprint of the pixel. This step involves detecting road footprints, tracking roads, and growing a road tree. We use a spoke wheel operator to obtain the road footprint. We propose an automatic road seeding method based on rectangular approximations to road footprints and a toe-finding algorithm to classify footprints for growing a road tree. The road tree pruning step makes use of a Bayes decision model based on the area-to-perimeter ratio (the A/P ratio) of the footprint to prune the paths that leak into the surroundings. We introduce a lognormal distribution to characterize the conditional probability of A/P ratios of the footprints in the road tree and present an automatic method to estimate the parameters that are related to the Bayes decision model. Results are presented for various aerial images. Evaluation of the extracted road networks using representative aerial images shows that the completeness of our road tracker ranges from 84% to 94%, correctness is above 81%, and quality is from 82% to 92%."}
{"_id": "d77ddbb81886d07941ec4679ffe83721e3f5ef2b", "title": "UAV borne real-time road mapping system", "text": "Road information is useful in many fields. In this paper, a real-time mapping system is presented to acquire the image and determine the geometry of the road. These include designs of platform and instruments, data transmission, processing and archiving. Compare with the traditional platforms, the UAV (Unmanned Aerial Vehicle) platforms offer greater flexibility, shorter response time and is able to generate very high resolution data. It is inexpensive to operate, and its operation is not limited by air traffic constraints. With autonomous flight control system, it is easy to navigate the aircraft along the planned lines strictly. In this system, an unmanned fixed-wing aircraft with double engine and double generator is used, which helps to improve the reliability and capability of the platform. A digital three-axis stabilized platform is developed, which performs an important role in camera attitude controlling. To ensure there is no coverage hole, a data manage system is adopted: after an instantaneous inspection of data integrity, the data will be archived as they are received. If any anomaly is detected, the mission planning will be adapted to re-image the concerned area. Vehicle information extraction from image sequence is also described in this paper. The experiment showed the presented system worked well in real-time road mapping and vehicle information extraction."}
{"_id": "e429e4874f4b3a397a754f28a919362d282b7966", "title": "Road Extraction Using SVM and Image Segmentation", "text": "In this paper, a unique approach for road extraction utilizing pixel spectral information for classification and image segmentation-derived object features was developed. In this approach, road extraction was performed in two steps. In the first step, support vector machine (SVM) was employed merely to classify the image into two groups of categories: a road group and a non-road group. For this classification, support vector machine (SVM) achieved higher accuracy than Gaussian maximum likelihood (GML). In the second step, the road group image was segmented into geometrically homogeneous objects using a region growing technique based on a similarity criterion, with higher weighting on shape factors over spectral criteria. A simple thresholding on the shape index and density features derived from these objects was performed to extract road features, which were further processed by thinning and vectorization to obtain road centerlines. The experiment showed the proposed approach worked well with images comprised by both rural and urban area features. Introduction Road information not only plays a central role in the transportation application, but also is an important data layer in Geographical Information Systems (GIS). Automated road extraction can save time and labor to a great degree in updating a road spatial database. Various road extraction approaches have been developed. Xiong (2001) grouped these methods into five categories: ridge finding, heuristic reasoning, dynamic programming, statistical inference, and map matching. In ridge finding, edge operators are performed on images to derive edge magnitude and direction, followed by a thresholding and thinning process to obtain ridge pixels (Nevatia and Babu, 1980; Treash and Amaratunga, 2000). Alternatively, gradient direction profile analysis can be performed to generate edge pixels (Gong and Wang, 1997). Ridge points are linked to produce the road segments. Heuristic reasoning is a knowledge-based method in which a series of pre-set rules on road characteristics such as shape index, the distance between image primitives, fragments trend, and contextual information are employed to detect and connect image primitives or antiparallel linear edges to road segments (McKeown, et al., 1985; Zhu and Yeh, 1986). In the dynamic programming method, roads are modeled with a set of mathematical equations on the derivatives of gray values and select characteristics of roads, such as smooth curves, homogeneous surface, narrow linear features, and relatively constant width. Dynamic programming is employed to solve the optimization problem Road Extraction Using SVM and Image Segmentation Mingjun Song and Daniel Civco (Gruen and Li, 1995). In the statistical inference method, linear features are modeled as a Markov point process or a geometric-stochastic model on the road width, direction, intensity and background intensity, and maximum a posteriori probability is used to estimate the road network (Barzohar and Cooper, 1996; Stoica, et al., 2000). In a map matching method, existing road maps are used as starting point to update the road network. In general, two steps are involved: first, a mapimage matching algorithm is employed to match the roads on the map to the image; second, new roads are searched based on the assumption that they are connected to existing roads (Stilla, 1995). 
Xiong\u2019s classification on road extraction methods is only a generalization, and some other methods may combine different techniques. Active contour models, known as snakes, are also used in road extraction (Gruen and Li, 1997; Agouris, et al., 2001). A snake is a spline with minimized energy driven by internal spline and external image forces (Park, et al., 2001). In general, external image forces are represented by the gradient magnitude of an image, which attracts snakes to contours with strong edges. Internal forces are given by a continuity term and a curvature term expressed by the differences of adjacent snaxels, which are vertex nodes of the snake, with weights coming from training data, which control the shape and smoothness of the snakes. Through the optimization, the snake evolves from its initial position to desired position with minimized energy. Park and Kim (2001) used template matching to extract road segments in which a road template was formed around the road seed, and an adaptive least squares matching algorithm was used to detect a target window with similar transformation. This method assumes a small difference in brightness values between template and target windows. Most of these road extraction methods require some road seeds as starting points, which are in general provided by users, and road segments evolve under a certain model. Sometimes control points are needed to correct the evolution of roads (Zhao, et al., 2002). Further, these methods use black-and-white aerial photographs or the panchromatic band of high-resolution satellite images and therefore the geometric characteristics of roads alone play a critical role. Boggess (1993) used a classification method incorporating texture and neural networks to extract roads by classifying roads and other features from Landsat TM imagery, but obtained numerous false-inclusions. Roberts, et al. (2001) developed a spectral mixture library using hyperspectral images to extract roads, but the use of spectral information alone does not capture the spatial properties of these curvilinear features."}
{"_id": "f6e1ce05762144ecceb38be94091da4a23b47769", "title": "Effect of temporal envelope smearing on speech reception.", "text": "The effect of smearing the temporal envelope on the speech-reception threshold (SRT) for sentences in noise and on phoneme identification was investigated for normal-hearing listeners. For this purpose, the speech signal was split up into a series of frequency bands (width of 1/4, 1/2, or 1 oct) and the amplitude envelope for each band was low-pass filtered at cutoff frequencies of 0, 1/2, 1, 2, 4, 8, 16, 32, or 64 Hz. Results for 36 subjects show (1) a severe reduction in sentence intelligibility for narrow processing bands at low cutoff frequencies (0-2 Hz); and (2) a marginal contribution of modulation frequencies above 16 Hz to the intelligibility of sentences (provided that lower modulation frequencies are completely present). For cutoff frequencies above 4 Hz, the SRT appears to be independent of the frequency bandwidth upon which envelope filtering takes place. Vowel and consonant identification with nonsense syllables were studied for cutoff frequencies of 0, 2, 4, 8, or 16 Hz in 1/4-oct bands. Results for 24 subjects indicate that consonants are more affected than vowels. Errors in vowel identification mainly consist of reduced recognition of diphthongs and of confusions between long and short vowels. In case of consonant recognition, stops appear to suffer most, with confusion patterns depending on the position in the syllable (initial, medial, or final)."}
{"_id": "8c034d0135ba656340ec5220bc2dc0f294e9dd96", "title": "A MapReduce Approach to NoSQL RDF Databases", "text": "In recent years, the increased need to house and process large volumes of data has prompted the need for distributed storage and querying systems. The growth of machine-readable RDF triples has prompted both industry and academia to develop new database systems, called \u201cNoSQL,\u201d with characteristics that differ from classical databases. Many of these systems compromise ACID properties for increased horizontal scalability and data availability. This thesis concerns the development and evaluation of a NoSQL triplestore. Triplestores are database management systems central to emerging technologies such as the Semantic Web and linked data. A triplestore comprises data storage using the RDF (resource description framework) data model and the execution of queries written in SPARQL. The triplestore developed here exploits an opensource stack comprising, Hadoop, HBase, and Hive. The evaluation spans several benchmarks, including the two most commonly used in triplestore evaluation, the Berlin SPARQL Benchmark, and the DBpedia benchmark, a query workload that operates an RDF representation of Wikipedia. Results reveal that the join algorithm used by the system plays a critical role in dictating query runtimes. Distributed graph databases must carefully optimize queries before generating MapReduce query plans as network traffic for large datasets can become prohibitive if the query is executed naively."}
{"_id": "c77a84cd5a53343e6977bcf1878c0e4cb9263780", "title": "NOBLE \u2013 Flexible concept recognition for large-scale biomedical natural language processing", "text": "Natural language processing (NLP) applications are increasingly important in biomedical data analysis, knowledge engineering, and decision support. Concept recognition is an important component task for NLP pipelines, and can be either general-purpose or domain-specific. We describe a novel, flexible, and general-purpose concept recognition component for NLP pipelines, and compare its speed and accuracy against five commonly used alternatives on both a biological and clinical corpus. NOBLE Coder implements a general algorithm for matching terms to concepts from an arbitrary vocabulary set. The system\u2019s matching options can be configured individually or in combination to yield specific system behavior for a variety of NLP tasks. The software is open source, freely available, and easily integrated into UIMA or GATE. We benchmarked speed and accuracy of the system against the CRAFT and ShARe corpora as reference standards and compared it to MMTx, MGrep, Concept Mapper, cTAKES Dictionary Lookup Annotator, and cTAKES Fast Dictionary Lookup Annotator. We describe key advantages of the NOBLE Coder system and associated tools, including its greedy algorithm, configurable matching strategies, and multiple terminology input formats. These features provide unique functionality when compared with existing alternatives, including state-of-the-art systems. On two benchmarking tasks, NOBLE\u2019s performance exceeded commonly used alternatives, performing almost as well as the most advanced systems. Error analysis revealed differences in error profiles among systems. NOBLE Coder is comparable to other widely used concept recognition systems in terms of accuracy and speed. Advantages of NOBLE Coder include its interactive terminology builder tool, ease of configuration, and adaptability to various domains and tasks. NOBLE provides a term-to-concept matching system suitable for general concept recognition in biomedical NLP pipelines."}
{"_id": "c500a6168a4767998b3cd2ed7c03fe6e1d6f352b", "title": "Analysis and optimization of software requirements prioritization techniques", "text": "Prioritizing requirements helps the project team to understand which requirements are most important and most urgent. Based on this finding a software engineer can decide what to develop/implement in the first release and what on the coming releases. Prioritization is also a useful activity for decision making in other phases of software engineering like development, testing, and implementation. There are a number of techniques available to prioritize the requirements with their associated strengths and limitations. In this paper we will examine state of the art techniques and analyze their applicability on software requirements domain. At the end we present a framework that will help the software engineer of how to perform prioritization process by combining existing techniques and approaches."}
{"_id": "9d6a786d48a1fd63715f1b9a0df8dcdc8f84708e", "title": "The academic social network", "text": "By means of their academic publications, authors form a social network. Instead of sharing casual thoughts and photos (as in Facebook), authors select co-authors and reference papers written by other authors. Thanks to various efforts (such as Microsoft Academic Search and DBLP), the data necessary for analyzing the academic social network is becoming more available on the Internet. What type of information and queries would be useful for users to discover, beyond the search queries already available from services such as Google Scholar? In this paper, we explore this question by defining a variety of ranking metrics on different entities\u2014authors, publication venues, and institutions. We go beyond traditional metrics such as paper counts, citations, and h-index. Specifically, we define metrics such as influence, connections, and exposure for authors. An author gains influence by receiving more citations, but also citations from influential authors. An author increases his or her connections by co-authoring with other authors, and especially from other authors with high connections. An author receives exposure by publishing in selective venues where publications have received high citations in the past, and the selectivity of these venues also depends on the influence of the authors who publish there. We discuss the computation aspects of these metrics, and the similarity between different metrics. With additional information of author-institution relationships, we are able to study institution rankings based on the corresponding authors\u2019 rankings for each type of metric as well as different domains. We are prepared to demonstrate these ideas with a web site ( http://pubstat.org ) built from millions of publications and authors."}
{"_id": "eb6c03696c39fd0f15eb5a2d196af79cef985baf", "title": "Beamspace channel estimation for millimeter-wave massive MIMO systems with lens antenna array", "text": "By employing the lens antenna array, beamspace MIMO can utilize beam selection to reduce the number of required RF chains in mmWave massive MIMO systems without obvious performance loss. However, to achieve the capacity-approaching performance, beam selection requires the accurate information of beamspace channel of large size, which is challenging, especially when the number of RF chains is limited. To solve this problem, in this paper we propose a reliable support detection (SD)-based channel estimation scheme. Specifically, we propose to decompose the total beamspace channel estimation problem into a series of sub-problems, each of which only considers one sparse channel component. For each channel component, we first reliably detect its support by utilizing the structural characteristics of mmWave beamspace channel. Then, the influence of this channel component is removed from the total beamspace channel estimation problem. After the supports of all channel components have been detected, the nonzero elements of the sparse beamspace channel can be estimated with low pilot overhead. Simulation results show that the proposed SD-based channel estimation outperforms conventional schemes and enjoys satisfying accuracy, even in the low SNR region."}
{"_id": "b6a7e07e14178ef46e764ecab9a71202f8f407bd", "title": "The Use of Wearable Inertial Motion Sensors in Human Lower Limb Biomechanics Studies: A Systematic Review", "text": "Wearable motion sensors consisting of accelerometers, gyroscopes and magnetic sensors are readily available nowadays. The small size and low production costs of motion sensors make them a very good tool for human motions analysis. However, data processing and accuracy of the collected data are important issues for research purposes. In this paper, we aim to review the literature related to usage of inertial sensors in human lower limb biomechanics studies. A systematic search was done in the following search engines: ISI Web of Knowledge, Medline, SportDiscus and IEEE Xplore. Thirty nine full papers and conference abstracts with related topics were included in this review. The type of sensor involved, data collection methods, study design, validation methods and its applications were reviewed."}
{"_id": "e813708e55be12d41812fc05ff9be3fa21f8fc91", "title": "An empirical study on the impact of static typing on software maintainability", "text": "Static type systems play an essential role in contemporary programming languages. Despite their importance, whether static type systems impact human software development capabilities remains open. One frequently mentioned argument in favor of static type systems is that they improve the maintainability of software systems\u2014an often-used claim for which there is little empirical evidence. This paper describes an experiment that tests whether static type systems improve the maintainability of software systems, in terms of understanding undocumented code, fixing type errors, and fixing semantic errors. The results show rigorous empirical evidence that static types are indeed beneficial to these activities, except when fixing semantic errors. We further conduct an exploratory analysis of the data in order to understand possible reasons for the effect of type systems on the three kinds of tasks used in this experiment. From the exploratory analysis, we conclude that developers using a dynamic type system tend to look at different files more frequently when doing programming tasks\u2014which is a potential reason for the observed differences in time."}
{"_id": "b6203f82bf276fff9a0082ee1b51d37ac90f4b79", "title": "A 12.77-MHz 31 ppm/\u00b0C On-Chip RC Relaxation Oscillator With Digital Compensation Technique", "text": "The design of a 12.77-MHz on-chip RC relaxation oscillator with digital compensation technique is presented. To maintain the frequency stability versus temperature and supply voltage variations, loop delay tuning by a digital feedback loop is developed in this system. In order to generate an on-chip reference for digital calibration, a replica comparator is added. The on-chip relaxation oscillator is fabricated in 0.18-\u03bcm CMOS process. The measured output frequency variation is 31 ppm/\u00b0C across -30 to 120 \u00b0C temperature range after compensation. The frequency variation over the supply voltage from 0.6 V to 1.1 V is \u00b10.5%/V. The measured total power consumption is 56.2 \u03bcW at 0.9-V supply voltage when the digital compensation blocks are enabled. After digital compensation, the compensation blocks can be shutdown for power saving, and the main oscillator consumes only 12.8 \u03bcW."}
{"_id": "31ace8c9d0e4550a233b904a0e2aabefcc90b0e3", "title": "Learning Deep Face Representation", "text": "Face representation is a crucial step of face recognition systems. An optimal face representation should be discriminative, robust, compact, and very easyto-implement. While numerous hand-crafted and learning-based representations have been proposed, considerable room for improvement is still present. In this paper, we present a very easy-to-implement deep learning framework for face representation. Our method bases on a new structure of deep network (called Pyramid CNN). The proposed Pyramid CNN adopts a greedy-filter-and-down-sample operation, which enables the training procedure to be very fast and computationefficient. In addition, the structure of Pyramid CNN can naturally incorporate feature sharing across multi-scale face representations, increasing the discriminative ability of resulting representation. Our basic network is capable of achieving high recognition accuracy (85.8% on LFW benchmark) with only 8 dimension representation. When extended to feature-sharing Pyramid CNN, our system achieves the state-of-the-art performance (97.3%) on LFW benchmark. We also introduce a new benchmark of realistic face images on social network and validate our proposed representation has a good ability of generalization."}
{"_id": "ccaf1eeb48ced2dc35c59356439fc128ca571d25", "title": "On the minimum node degree and connectivity of a wireless multihop network", "text": "This paper investigates two fundamental characteristics of a wireless multi -hop network: its minimum node degree and its k--connectivity. Both topology attributes depend on the spatial distribution of the nodes and their transmission range. Using typical modeling assumptions :--- :a random uniform distribution of the nodes and a simple link model :--- :we derive an analytical expression that enables the determination of the required range r0 that creates, for a given node density \u03c1, an almost surely k--connected network. Equivalently, if the maximum r0 of the nodes is given, we can find out how many nodes are needed to cover a certain area with a k--connected network. We also investigate these questions by various simulations and thereby verify our analytical expressions. Finally, the impact of mobility is discussed.The results of this paper are of practical value for researchers in this area, e.g., if they set the parameters in a network--level simulation of a mobile ad hoc network or if they design a wireless sensor network."}
{"_id": "ad25484abd3fd91cda4f609a022ab7fcd1e81ab4", "title": "Imagination-Based Decision Making with Physical Models in Deep Neural Networks", "text": "Decision-making is challenging in continuous settings where complex sequences of events determine rewards, even when these event sequences are largely observable. In particular, traditional trial-and-error learning strategies may have a hard time associating continuous actions with their reward because of the size of the state space and the complexity of the reward function. Given a model of the world, a different strategy is to use imagination to exploit the knowledge embedded in that model. In this regime, the system directly optimizes the decision for each episode based on predictions from the model. We extend deep learning methods that have been previously used for model-free learning and apply them towards a model-based approach in which an expert is consulted multiple times in the agents\u2019 imagination before it takes an action in the world. We show preliminary results on a difficult physical reasoning task where our model-based approach outperforms a model-free baseline, even when using an inaccurate expert."}
{"_id": "b605d2e31eb65c23e1eeaaa8a093a1aa02e974f9", "title": "Help seeking behavior and the Internet: A national survey", "text": "Health-related websites have the potential to powerfully influence the attitudes and behavior of consumers. Access to reliable disease information online has been linked to reduced anxiety, increased feelings of self-efficacy, and decreases in utilization of ambulatory care. Studies report that Internet health information seekers are more likely to have health concerns; adult seekers are more likely to rate themselves as having poor health status and adolescent seekers are more likely to demonstrate clinical impairment or depressive symptomatology compared to non-seekers. Although more and more Americans are using the Internet for healthcare information, little is known about how this information affects their health behaviors. The current study extends the literature by examining characteristics associated with help seeking, either from a healthcare provider or from peers, as a direct result of health information found online. Medical care seekers appear to be using the Internet to enhance their medical care; they report using the information online to diagnose a problem and feel more comfortable about their health provider's advice given the information found on the Internet. Support seekers tend to be of slightly lower income compared to non-support seekers. They are also significantly more likely to have searched for information about a loved one's medical or health condition, signaling that many of these consumers may be caretakers."}
{"_id": "850de93d6812b5913f0ce63d3b77f3c368f493c7", "title": "What is the function of the claustrum?", "text": "The claustrum is a thin, irregular, sheet-like neuronal structure hidden beneath the inner surface of the neocortex in the general region of the insula. Its function is enigmatic. Its anatomy is quite remarkable in that it receives input from almost all regions of cortex and projects back to almost all regions of cortex. We here briefly summarize what is known about the claustrum, speculate on its possible relationship to the processes that give rise to integrated conscious percepts, propose mechanisms that enable information to travel widely within the claustrum and discuss experiments to address these questions."}
{"_id": "265f49bf5930329da160cb677b937374eb71574a", "title": "Distinct mechanisms regulate slow-muscle development", "text": "Vertebrate muscle development begins with the patterning of the paraxial mesoderm by inductive signals from midline tissues [1, 2]. Subsequent myotome growth occurs by the addition of new muscle fibers. We show that in zebrafish new slow-muscle fibers are first added at the end of the segmentation period in growth zones near the dorsal and ventral extremes of the myotome, and this muscle growth continues into larval life. In marine teleosts, this mechanism of growth has been termed stratified hyperplasia [3]. We have tested whether these added fibers require an embryonic architecture of muscle fibers to support their development and whether their fate is regulated by the same mechanisms that regulate embryonic muscle fates. Although Hedgehog signaling is required for the specification of adaxial-derived slow-muscle fibers in the embryo [4, 5], we show that in the absence of Hh signaling, stratified hyperplastic growth of slow muscle occurs at the correct time and place, despite the complete absence of embryonic slow-muscle fibers to serve as a scaffold for addition of these new slow-muscle fibers. We conclude that slow-muscle-stratified hyperplasia begins after the segmentation period during embryonic development and continues during the larval period. Furthermore, the mechanisms specifying the identity of these new slow-muscle fibers are different from those specifying the identity of adaxial-derived embryonic slow-muscle fibers. We propose that the independence of early, embryonic patterning mechanisms from later patterning mechanisms may be necessary for growth."}
{"_id": "152f41f32c495276159e484de760ce6d8002915e", "title": "A practical approach for computing the diameter of a point set", "text": "We present an approximation algorithm for computing the diameter of a point-set in $d$-dimensions. The new algorithm is sensitive to the \u201chardness\u201d of computing the diameter of the given input, and for most inputs it is able to compute the {\\em exact} diameter extremely fast. The new algorithm is simple, robust, has good empirical performance, and can be implemented quickly. As such, it seems to be the algorithm of choice in practice for computing/approximating the diameter."}
{"_id": "b1b7d431f04db28e568beda816e2f7b536da0b3b", "title": "A Trigraph Based Centrality Approach Towards Text Summarization", "text": "As the electronic documents are increasing due to the revolution of information there is an urgent need for summarizing the text documents. From the previous works we observed that there is no generalized graph model for text summarization and low order ngrams could not preserve the contextual meaning. This paper focuses on an extractive based graphical approach for text summarization, based on trigrams and graph based centrality measure. Trigraph is generated and the centrality of the connected trigraph is taken to extract the important trigrams. A mapping is done between the original words and the trigrams to regain the link between the words. And after comparing the centrality from the graph, the summary is extracted. The ROUGE-SU4 F-measure obtained for the proposed approach is 0.036 which is significantly better than the previous approaches."}
{"_id": "4281046803e75e1ad7144bc1adec7a3757de7e8d", "title": "State of the Art in Example-based Texture Synthesis", "text": "Recent years have witnessed significant progress in example-based texture synthesis algorithms. Given an example texture, these methods produce a larger texture that is tailored to the user\u2019s needs. In this state-of-the-art report, we aim to achieve three goals: (1) provide a tutorial that is easy to follow for readers who are not already familiar with the subject, (2) make a comprehensive survey and comparisons of different methods, and (3) sketch a vision for future work that can help motivate and guide readers that are interested in texture synthesis research. We cover fundamental algorithms as well as extensions and applications of texture synthesis."}
{"_id": "21d470547b836d6e561a1cc86f24bbb6d1ee83b1", "title": "The back squat: A proposed assessment of functional deficits and technical factors that limit performance.", "text": "Fundamental movement competency is essential for participation in physical activity and for mitigating the risk of injury, which are both key elements of health throughout life. The squat movement pattern is arguably one of the most primal and critical fundamental movements necessary to improve sport performance, to reduce injury risk and to support lifelong physical activity. Based on current evidence, this first (1 of 2) report deconstructs the technical performance of the back squat as a foundation training exercise and presents a novel dynamic screening tool that incorporates identification techniques for functional deficits that limit squat performance and injury resilience. The follow-up report will outline targeted corrective methodology for each of the functional deficits presented in the assessment tool."}
{"_id": "4e42c4799ba2bae521406e4ded23a475b69c8c00", "title": "Bioinformatics Curriculum Guidelines: Toward a Definition of Core Competencies", "text": "1 School of Electrical Engineering and Computer Science, Ohio University, Athens, Ohio, United States of America, 2 Bioinformatics and Research Computing, Whitehead Institute, Cambridge, Massachusetts, United States of America, 3 Department of Biological Sciences and School of Computer Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America, 4 European Molecular Biology Laboratory, European Bioinformatics Institute, Wellcome Trust Genome Campus, Hinxton, Cambridge, United Kingdom, 5 School of Informatics and Computing, Indiana University, Bloomington, Indiana, United States of America, 6 School of Computer Science and Engineering, The University of New South Wales, Sydney, New South Wales, Australia, 7 The Genome Analysis Centre, Norwich Research Park, Norwich, United Kingdom"}
{"_id": "a86161e8fda5b12beeab9c7e1ee4d40a76fe3b2f", "title": "A Mixed Filtering Approach for Track Condition Monitoring Using Accelerometers on the Axle Box and Bogie", "text": "This paper describes a method of estimating irregularities in railway tracks using acceleration data measured from high-speed trains. Track irregularities are the main causes of the vibration of high-speed trains and thus should be carefully monitored to maintain the stability and ride quality of the trains. A mixed filtering approach is proposed for stable displacement estimation and waveband classification of the irregularities in the measured acceleration. Accelerometers are mounted on the axle box and the bogie of a high-speed train to measure the acceleration in the lateral and vertical directions. The estimated results are compared with those of a commercial track geometry measurement system. Finally, the performance of the proposed approach and the relationship between the mounted location of the accelerometers and the estimated track irregularities are discussed."}
{"_id": "09a1e1aabad909b666f4ef7fb096e497d9472c7c", "title": "A Semantic Web Services Architecture", "text": "Formed in February 2003, the Semantic Web Services Initiative Architecture (SWSA) committee\u2019s mission is to develop the necessary abstractions for an architecture that supports Semantic Web services. The resultant framework builds on the W3C Web Services Architecture working group report (and is motivated in part by Tim Berners-Lee\u2019s vision for the Semantic Web1). Other groups developing Semantic Web services frameworks contributed to the discussions, including the Web Ontology Language for Services (OWL-S) consortium, the Web Service Modeling Ontology (WSMO; www.wsmo. org) group at the Digital Enterprise Research Institute (DERI), and the Managing End-to-End Operations-Semantics (METEOR-S; http://lsdis.cs.uga.edu/projects/meteor-s/) group at the University of Georgia.2,3 In this article, we describe the protocols exchanged between the interacting entities or agents that interpret and reason with semantic descriptions in the deployment of Semantic Web services. We focus specifically on those capabilities that extend the potential range of Web services; we also discuss security, reliability, and a flexible means of recovery from the problems that can occur in open and evolving environments. The SWSA architectural framework attempts to address five classes of Semantic Web agent requirements \u2014 dynamic service discovery, service engagement, service process enactment and management, community support services, and quality of service (QoS) \u2014 which we cover in detail here as well."}
{"_id": "a365f227c13d2699be3300cda559ff6e6b7d7c98", "title": "An Improved Algorithm for Harris Corner Detection", "text": "Corner points are formed from two or more edges and edges usually define the boundary between two different objects or parts of the same objects. In this novel we discuss the theory of the Harris corner detection and indicate its disadvantage. Then it proposes an improved algorithm of Harris detection algorithm based on the neighboring point eliminating method. It reduces the time of the detection, and makes the corners distributing more homogenous so that avoids too many corners stay together. Experimental results show that the algorithm can detect corner more equality distributing, and can be used in some fact applications such as image registration well. Key words-Corner point; Harris corner; neighboring point; homogenous distributing 1. Introduce Corner point is a maximum curvature point of the two-dimensional image brightness change or the edge of sharp curves in a image, which haves widely used in such as the camera calibration, virtual scene reconstruction, motion estimation, image registration and more computer vision tasks. These points keep important features of image and can also be effective in reducing the amount of data information, allow real-time processing possible. Now, there are two corner detection methods: based on the image edge method and based on the image gray scale method. The former always need to encode the image edge, which is heavily dependent on image segmentation and edge extraction, have large difficulty and calculate, and if the detected area haves some change, it will lead to the failure of the operation, therefore, this method is not used widely. The latter method detect corner by calculating the curvature and gradient, it avoid these disadvantage. In currently, it haves Moravec operator, Harris operator, Susan operator and so on. Experimental shows that Harris corner detection method haves the best result in above mentioned. But in fact applications, Harris algorithm must to give a proper threshold T, and the algorithm based on the provided threshold to detect the ideal corner. T always depend on the image, specifically it is difficult to determine under the different colors. And it gets the ideal T after user blind set T more times. In addition, the points which have the larger eigenvalues are concentrated in some regions cause the detect corners are non-uniform distribution; and if T dropped, even though the overall distribution of the corner towards reasonable, but it also leads to the points close together, have the clustering phenomenon, it will affect the later analysis and processing. 2. Harris Corner Detection Algorithm 2.1 Detection Theory Harris corner detection algorithm is based on the point feature extraction of signal. It makes the window \u03c9 (usually rectangular area) to move infinitesimal displacement to any direction, and the variation of gray can be defined as: \u2212 = + + 2 , , , , ] [ v u v y u x v u y x I I w E + + = v u v u y x O yY xX w , 2 2 2 , )] , ( [ 2 2 2 By Cxy Ax + + = T y x M y x ) , ( ) , ( = (1) Where X and Y are one-order gray-level gradient, they can be got by convolution with 978-1-4244-4131-0/09/$25.00 \u00a92009 IEEE"}
{"_id": "c9656e79c992e39cc9f8d07075d95773a5d61371", "title": "Transforming Cooling Optimization for Green Data Center via Deep Reinforcement Learning", "text": "Cooling system plays a critical role in a modern data center (DC). Developing an optimal control policy for DC cooling system is a challenging task. The prevailing approaches often rely on approximating system models that are built upon the knowledge of mechanical cooling, electrical and thermal management, which is difficult to design and may lead to suboptimal or unstable performances. In this paper, we propose utilizing the large amount of monitoring data in DC to optimize the control policy. To do so, we cast the cooling control policy design into an energy cost minimization problem with temperature constraints, and tap it into the emerging deep reinforcement learning (DRL) framework. Specifically, we propose an end-toend cooling control algorithm (CCA) that is based on the actorcritic framework and an off-policy offline version of the deep deterministic policy gradient (DDPG) algorithm. In the proposed CCA, an evaluation network is trained to predict an energy cost counter penalized by the cooling status of the DC room, and a policy network is trained to predict optimized control settings when gave the current load and weather information. The proposed algorithm is evaluated on the EnergyPlus simulation platform and on a real data trace collected from the National Super Computing Centre (NSCC) of Singapore. Our results show that the proposed CCA can achieve about 11% cooling cost saving on the simulation platform compared with a manually configured baseline control algorithm. In the trace-based study, we propose a de-underestimation validation mechanism as we cannot directly test the algorithm on a real DC. Even though with DUE the results are conservative, we can still achieve about 15% cooling energy saving on the NSCC data trace if we set the inlet temperature threshold at 26.6 degree Celsius."}
{"_id": "0c147fe39dfef281f228200e1ac47faf653c4d85", "title": "Sleep deprivation amplifies reactivity of brain reward networks, biasing the appraisal of positive emotional experiences.", "text": "Appropriate interpretation of pleasurable, rewarding experiences favors decisions that enhance survival. Conversely, dysfunctional affective brain processing can lead to life-threatening risk behaviors (e.g., addiction) and emotion imbalance (e.g., mood disorders). The state of sleep deprivation continues to be associated with maladaptive emotional regulation, leading to exaggerated neural and behavioral reactivity to negative, aversive experiences. However, such detrimental consequences are paradoxically aligned with the perplexing antidepressant benefit of sleep deprivation, elevating mood in a proportion of patients with major depression. Nevertheless, it remains unknown how sleep loss alters the dynamics of brain and behavioral reactivity to rewarding, positive emotional experiences. Using functional magnetic resonance imaging (fMRI), here we demonstrate that sleep deprivation amplifies reactivity throughout human mesolimbic reward brain networks in response to pleasure-evoking stimuli. In addition, this amplified reactivity was associated with enhanced connectivity in early primary visual processing pathways and extended limbic regions, yet with a reduction in coupling with medial frontal and orbitofrontal regions. These neural changes were accompanied by a biased increase in the number of emotional stimuli judged as pleasant in the sleep-deprived group, the extent of which exclusively correlated with activity in mesolimbic regions. Together, these data support a view that sleep deprivation not only is associated with enhanced reactivity toward negative stimuli, but imposes a bidirectional nature of affective imbalance, associated with amplified reward-relevant reactivity toward pleasure-evoking stimuli also. Such findings may offer a neural foundation on which to consider interactions between sleep loss and emotional reactivity in a variety of clinical mood disorders."}
{"_id": "3dda8abcbd5ade5b1e6846f73dc43b1eab8254e0", "title": "Factors contributing to attrition behavior in diabetes self-management programs: A mixed method approach", "text": "BACKGROUND\nDiabetes self-management education is a critical component in diabetes care. Despite worldwide efforts to develop efficacious DSME programs, high attrition rates are often reported in clinical practice. The objective of this study was to examine factors that may contribute to attrition behavior in diabetes self-management programs.\n\n\nMETHODS\nWe conducted telephone interviews with individuals who had Type 2 diabetes (n = 267) and attended a diabetes education centre. Multivariable logistic regression was performed to identify factors associated with attrition behavior. Forty-four percent of participants (n = 118) withdrew prematurely from the program and were asked an open-ended question regarding their discontinuation of services. We used content analysis to code and generate themes, which were then organized under the Behavioral Model of Health Service Utilization.\n\n\nRESULTS\nWorking full and part-time, being over 65 years of age, having a regular primary care physician or fewer diabetes symptoms were contributing factors to attrition behaviour in our multivariable logistic regression. The most common reasons given by participants for attrition from the program were conflict between their work schedules and the centre's hours of operation, patients' confidence in their own knowledge and ability when managing their diabetes, apathy towards diabetes education, distance to the centre, forgetfulness, regular physician consultation, low perceived seriousness of diabetes, and lack of familiarity with the centre and its services. There was considerable overlap between our quantitative and qualitative results.\n\n\nCONCLUSION\nReducing attrition behaviour requires a range of strategies targeted towards delivering convenient and accessible services, familiarizing individuals with these services, increasing communication between centres and their patients, and creating better partnerships between centres and primary care physicians."}
{"_id": "597cecf0ebd50a62bead2f97f3eaad5126eb905c", "title": "Bit-Cell Level Optimization for Non-volatile Memories Using Magnetic Tunnel Junctions and Spin-Transfer Torque Switching", "text": "Spin-transfer torque magnetic random access memories (STT-MRAM), using magnetic tunnel junctions (MTJ), is a resistive memory technology that has spurred significant research interest due to its potential for on-chip, high-density, high-speed, low-power, and non-volatile memory. However, due to conflicting read and write requirements, there is a need to develop optimization techniques for designing STT-MRAM bit-cells to minimize read and write failures. We propose an optimization technique that minimizes read and write failures by proper selection of bit-cell configuration and by proper access transistor sizing. A mixed-mode simulation framework was developed to evaluate the effectiveness of our optimization technique. Our simulation framework captures the transport physics in the MTJ using Non-Equilibrium Green's Function method and self-consistently solves the MTJ magnetization dynamics using Landau-Lifshitz-Gilbert equation augmented with the full Slonczewski spin-torque term. The electrical parameters of the MTJ are then encapsulated in a Verilog-A model and used in HSPICE to perform bit-cell level optimization. The optimization technique is applied to STT-MRAM bit-cells designed using 45 nm bulk and 45 nm silicon-on-insulator CMOS technologies. Finally, predictions are made for optimized STT-MRAM bit-cells designed in 16 nm predictive technology."}
{"_id": "88ca567b2d3e0e674003f2bb236484e41419f105", "title": "State of health monitoring of Li-ion batteries using dynamic resistance mapping and regression", "text": "The increase in speed and accuracy of computation methods and the advancement in sensor technologies has shifted the focus towards health monitoring systems of vehicles. Li-ion batteries which are used in electric vehicles, tend to get damaged and pose potential safety threats when ill-used. An adaptive control system called battery management system is needed to ensure the safety and life optimization of batteries in electric vehicles. This system takes care of both diagnosis and prognosis, giving the user an idea of current status as well as remaining lifetime of the battery. A novel algorithm based on dynamic resistance mapping is used to develop a data driven computational model for estimating the state of health of battery. A model with a correlation co-efficient of 99.45% is developed for state of health estimation to find the health of the battery during run time."}
{"_id": "2edaa17229438130a487af1fa5c0bdf66df13e4b", "title": "Metamorphic Testing: A Review of Challenges and Opportunities", "text": "Metamorphic testing is an approach to both test case generation and test result verification. A central element is a set of metamorphic relations, which are necessary properties of the target function or algorithm in relation to multiple inputs and their expected outputs. Since its first publication, we have witnessed a rapidly increasing body of work examining metamorphic testing from various perspectives, including metamorphic relation identification, test case generation, integration with other software engineering techniques, and the validation and evaluation of software systems. In this article, we review the current research of metamorphic testing and discuss the challenges yet to be addressed. We also present visions for further improvement of metamorphic testing and highlight opportunities for new research."}
{"_id": "cf4e74e003ad07f0a4f9940c420df54d45a16f03", "title": "Adapted Neuro-Fuzzy Inference System on indirect approach TSK fuzzy rule base for stock market analysis", "text": "Nowadays because of the complicated nature of making decision in stock market and making real-time strategy for buying and selling stock via portfolio selection and maintenance, many research papers has involved stock price prediction issue. Low accuracy resulted by models may increase trade cost such as commission cost in more sequenced buy and sell signals because of insignificant alarms and otherwise bad diagnosis in price trend do not satisfy trader\u2019s expectation and may involved him/her in irrecoverable cost. Therefore, in this paper, Neuro-Fuzzy Inference System adopted on a Takagi\u2013Sugeno\u2013Kang (TSK) type Fuzzy Rule Based System is developed for stock price prediction. The TSK fuzzy model applies the technical index as the input variables and the consequent part is a linear combination of the input variables. Fuzzy C-Mean clustering implemented for identifying number of rules. Initial membership function of the premise part approximately defined as Gaussian function. TSK parameters tuned by Adaptive NeroFuzzy Inference System (ANFIS). Proposed model is tested on the Tehran Stock Exchange Indexes (TEPIX). This index with high accuracy near by 97.8% has successfully forecasted with several experimental tests from different sectors. 2009 Published by Elsevier Ltd."}
{"_id": "21bcb27995ae1007f4dabe5973c5fa6df7706f3e", "title": "SIGNet: Scalable Embeddings for Signed Networks", "text": "Recent successes in word embedding and document embedding have motivated researchers to explore similar representations for networks and to use such representations for tasks such as edge prediction, node label prediction, and community detection. Existing methods are largely focused on finding distributed representations for unsigned networks and are unable to discover embeddings that respect polarities inherent in edges. We propose SIGNet, a fast scalable embedding method suitable for signed networks. Our proposed objective function aims to carefully model the social structure implicit in signed networks by reinforcing the principles of social balance theory. Our method builds upon the traditional word2vec family of embedding approaches but we propose a new targeted node sampling strategy to maintain structural balance in higher-order neighborhoods. We demonstrate the superiority of SIGNet over state-of-the-art methods proposed for both signed and unsigned networks on several real world datasets from different domains. In particular, SIGNet offers an approach to generate a richer vocabulary of features of signed networks to support representation and reasoning."}
{"_id": "09afe2f75b161377b51e3a7e0aea1d7fd2b50174", "title": "Sliding mode control strategy for three-phase DVR employing twelve-switch voltage source converter", "text": "In this study, a new method is proposed for the suppression of voltage amplitude fluctuations on the load terminals. Moreover, the voltage harmonics on the load terminals are effectively reduced below the defined limits in the IEEE-519 standard. In the studied topology, a twelve-switch three-phase voltage source converter is used for effective control on the zero-sequence components due to the unbalanced voltages or single phase voltage sags. In order to obtain a fast dynamic response, sliding mode control strategy is used to control the system. The existence condition of the sliding mode is presented. The performance of the closed-loop system is verified with cases of voltage sag and utility voltage harmonics. The overall system is explained in detail and verified by using MATLAB/Simulink."}
{"_id": "709a4fe9d79a1fd9fbfc80eef56d92a60bde4db4", "title": "Employing Aesthetic Principles for Automatic Photo Book Layout", "text": "Photos are a common way of preserving our personal memories. The visual souvenir of a personal event is often composed into a photo collage or the pages of a photo album. Today we find many tools to help users creating such compositions by different tools for authoring photo compositions. Some template-based approaches generate nice presentations, however, come mostly with limited design variations. Creating complex and fancy designs for, e.g., a personal photo book, still demands design and composition skills to achieve results that are really pleasing to the eye \u2013 skills which many users simply lack. Professional designers instead would follow general design principles such as spatial layout rules, symmetry, balance among the element as well color schemes and harmony. In this paper, we propose an approach to deliver principles of design and composition to the end user by embedding it into an automatic composition application. We identify and analyze common design and composition principles and transfer these to the automatic creation of pleasant photo compositions by employing genetic algorithms. In contrast to other approaches, we strictly base our design system on common design principles, consider additional media types besides text in the photo book and specifically take the content of photos into account. Our approach is both implemented in a web-based rich media application and a tool for the automatic transformation from blogs into photo books."}
{"_id": "964c0b98fcd0052102278a4b367c263daf6078f6", "title": "Textual Spatial Cosine Similarity", "text": "When dealing with document similarity many methods exist today, like cosine similarity. More complex methods are also available based on the semantic analysis of textual information, which are computationally expensive and rarely used in the real time feeding of content as in enterprisewide search environments. To address these real-time constraints, we developed a new measure of document similarity called Textual Spatial Cosine Similarity, which is able to detect similitude at the semantic level using word placement information contained in the document. We will see in this paper that two degenerate cases exist for this model, which coincide with Cosine Similarity on one side and with a paraphrasing detection model to the other."}
{"_id": "647a288efdf44a79da75df7027f88610372d32a1", "title": "SVM Based Effective Malware Detection System", "text": "Malware is coined as an instance of malicious code that has the potential to harm a computer or network. Recent years have encountered massive growth in malwares as existing signature based malware detection approaches are becoming ineffective and intractable. Cyber criminals and malware developers have adapted code obfuscation techniques which undermines the effectiveness of malware defense mechanism. Hence we propounded a system which focuses on static analysis in addition with automated behavior analysis in emulated environment generating behavior reports to investigate malwares. The proposed method uses programs as opcode density histograms and reduces the explosion of features. We employed eigen vector subspace analysis to filter and diminish the misclassification and interference of features. Our system uses a hybrid approach for discovering malware based on support vector machine classifier so that potential of malware detection system can be leveraged to combat with diverse forms of malwares while attaining high accuracy and low false alarms. Keywords\u2014Behavior Analysis, Static Analysis, Opcode Extraction, Malware Detection, Support Vector Machine."}
{"_id": "c747c38adc6ec1badac861ba714847ca5fa04dbb", "title": "Trends in ship electric propulsion", "text": "Since the 1980s there has been an explosion in the number and variety of electric propulsion ships being built around the world with everything from cruise liners to amphibious assault ships adopting electric propulsion. This technology revolution has occurred, largely un-noticed outside the marine industry. This paper briefly describes what has transpired in recent decades, why the resurgence is occurring and what the future may hold for electrically propelled ships."}
{"_id": "62106dee68b8f19fb80929d46ff27b0a151df62a", "title": "An Analysis of Mobile Banking User Behavior Using Customer Segmentation", "text": "This study proposed an integrated data mining and customer behavior scoring model to manage existing mobile banking users in an Iranian bank. This segmentation model was developed to identify groups of customers based on transaction history, recency, frequency, monetary background. It classified mobile banking users into six groups. This study demonstrated that identifying customers by a behavioral scoring facilitates marketing strategy assignment. Then the bank can develop its marketing actions. Thus, the bank can attract more customers, maintain its customers, and keep high customers' satisfaction. Keywords\u2014Data mining; mobile data, mobile banking; customer segmentation Introduction Online banking is increasingly common. Financial institutions deliver online services via various electronic channels, subsequently diminishing the importance of conventional branch networks. The newly emerging channels of online banking and rapidly increasing penetration rates of mobile phones motivate this study (C. S. Chen, 2013). The internet has had a significant impact on financial institutions, allowing consumers to access many bank facilities 24 hours a day, while allowing banks to significantly cut their costs. Research has shown that online banking is the cheapest delivery channel for many banking services (KoenigLewis, Palmer, & Moll, 2010; Robinson, 2000). A number of studies have identified advantages to bank customers, including cost and time savings as well as spatial independence benefits (Koenig-Lewis et al., 2010). According to Gartner\u2019s prediction of leading trends of 2012 in mobile applications, mobile commerce (m-commerce) remains the most important one. Gartner further forecasts that mobile devices will replace PCs as the main device to access the internet. As for the third quarter of 2012, IPSOS indicated that \u201cThe era of Multi-Screen has come, and smartphones account for the purchasing behavior of 65% of mobile device users.\u201d According to that report, 66 percent of the smartphone holders in Taiwan access the internet via a smartphone at least once daily; approximately 57 percent of the customers perform mobile searches; and 40 percent of the customers shop via mobile phones (IPSOS, 2012). These statistics reflect vigorous growth in the scale of m-commerce. However, mobile banking remains in its infancy, and international adoption rates demonstrate the strong potential of m-commerce (FRB, 2012). Therefore, data mining for mobile banking is of priority concern for further developing mobile banking services (MBSs) (C."}
{"_id": "40e467e1b73bb1c33997d0a18dfd789a3bbc5460", "title": "MFCC and VQ voice recognition based ATM security for the visually disabled", "text": "A biometrie based automatic teller machine which works with a two-tier security using voice and fingerprint recognition can help users which are visually challenged, allowing them to use the machine using only their biometric characteristics. An automated teller machine (ATM) requires a user to pass an identity test using their PIN before doing any financial transactions. The current method available for access control in ATM is based on cards and pins which increases the issues of unauthorized access on accounts via card skimming. It is eminently difficult to avoid another person from attaining and using someone else's card also regular smartcards can be lost, duplicated, stolen or falsified with accuracy. Another concern is the accessibility of ATM to differently abled people. These concerns can be overcome by using fingerprint recognition for authentication, as discussed from the researchers' previous study, and by adding up an additional voice recognition system feature as discussed on this paper. The four fingerprint sample patterns of an individual are completely separate and uncorrelated. The action of fingerprint recognition involves pre-processing, feature extraction and minutiae matching. Matching is done by comparing the user's fingerprint with the existing fingerprint database, images which were acquired at the time of opening an account in the bank account. Once the fingerprint of the user passes the authentication procedures of the system the user is now able to carry out further transactions using voice-based commands by speaking through a microphone. This model not only provides security but also accessibility to certain sections of the population like people with visual impairment and eye disabilities. The system uses biometric based user recognition. The authenticity of the account will be checked by the input of the user's fingerprint, this will then allow further transaction via voice recognition that implements a MFCC, DWT and VQ to continue. ATM users are identified by the lowest VQ distortion of each voice input. Out of 50 different legitimate user trials, 42 tries were identified while 8 legitimate user tries were denied of access in the system producing 84% accuracy."}
{"_id": "831d661d657d97a07894da8639a048c430c5536d", "title": "Weakly Supervised Facial Analysis with Dense Hyper-Column Features", "text": "Weakly supervised methods have recently become one of the most popular machine learning methods since they are able to be used on large-scale datasets without the critical requirement of richly annotated data. In this paper, we present a novel, self-taught, discriminative facial feature analysis approach in the weakly supervised framework. Our method can find regions which are discriminative across classes yet consistent within a class and can solve many face related problems. The proposed method first trains a deep face model with high discriminative capability to extract facial features. The hypercolumn features are then used to give pixel level representation for better classification performance along with discriminative region detection. In addition, calibration approaches are proposed to enable the system to deal with multi-class and mixed-class problems. The system is also able to detect multiple discriminative regions from one image. Our uniform method is able to achieve competitive results in various face analysis applications, such as occlusion detection, face recognition, gender classification, twins verification and facial attractiveness analysis."}
{"_id": "39ad191890465009d3ee5f57f00536aaf5cea81a", "title": "Towards Good Practices for Image Retrieval Based on CNN Features", "text": "Recent works have demonstrated that Convolutional Neural Networks (CNNs) achieve state-of-the-art results in several computer vision tasks. CNNs have also shown their ability to provide effective descriptors for image retrieval. In this paper, we focus on CNN feature extraction for instance-level image search. We started by studying in depth several methods proposed to improve the Regional Maximal Activation (RMAC) approach. Then, we selected some of these advances and introduced a new approach that combines multi-scale and multi-layer feature extraction with feature selection. We also propose an approach for local RMAC descriptor extraction based on class activation maps. Our parameter-free approach provides short descriptors and achieves state-of-the-art performance without the need of CNN finetuning or additional data in any way. In order to demonstrate the effectiveness of our approach, we conducted extensive experiments on four well known instance-level image retrieval benchmarks (the INRIA Holidays dataset, the University of Kentucky Benchmark, Oxford5k and Paris6k)."}
{"_id": "8d6de86e4873d0149eca7dffda28e1248cf162a2", "title": "Real-life voice activity detection with LSTM Recurrent Neural Networks and an application to Hollywood movies", "text": "A novel, data-driven approach to voice activity detection is presented. The approach is based on Long Short-Term Memory Recurrent Neural Networks trained on standard RASTA-PLP frontend features. To approximate real-life scenarios, large amounts of noisy speech instances are mixed by using both read and spontaneous speech from the TIMIT and Buckeye corpora, and adding real long term recordings of diverse noise types. The approach is evaluated on unseen synthetically mixed test data as well as a real-life test set consisting of four full-length Hollywood movies. A frame-wise Equal Error Rate (EER) of 33.2% is obtained for the four movies and an EER of 9.6% is obtained for the synthetic test data at a peak SNR of 0 dB, clearly outperforming three state-of-the-art reference algorithms under the same conditions."}
{"_id": "056713e422a0753c5eb1733d73e9f8185e2015d4", "title": "Explaining nonlinear classification decisions with deep Taylor decomposition", "text": "Nonlinear methods such as Deep Neural Networks (DNNs) are the gold standard for various challenging machine learning problems, e.g., image classification, natural language processing or human action recognition. Although these methods perform impressively well, they have a significant disadvantage, the lack of transparency, limiting the interpretability of the solution and thus the scope of application in practice. Especially DNNs act as black boxes due to their multilayer nonlinear structure. In this paper we introduce a novel methodology for interpreting generic multilayer neural networks by decomposing the network classification decision into contributions of its input elements. Although our focus is on image classification, the method is applicable to a broad set of input data, learning tasks and network architectures. Our method is based on deep Taylor decomposition and efficiently utilizes the structure of the network by backpropagating the explanations from the output to the input layer. We evaluate the proposed method empirically on the MNIST and ILSVRC data sets."}
{"_id": "147570a4736ddf6167d471d2bf43db1f78703812", "title": "The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification", "text": "We present the Bayesian Case Model (BCM), a general framework for Bayesian case-based reasoning (CBR) and prototype classification and clustering. BCM brings the intuitive power of CBR to a Bayesian generative framework. The BCM learns prototypes, the \u201cquintessential\u201d observations that best represent clusters in a dataset, by performing joint inference on cluster labels, prototypes and important features. Simultaneously, BCM pursues sparsity by learning subspaces, the sets of features that play important roles in the characterization of the prototypes. The prototype and subspace representation provides quantitative benefits in interpretability while preserving classification accuracy. Human subject experiments verify statistically significant improvements to participants\u2019 understanding when using explanations produced by BCM, compared to those given by prior art."}
{"_id": "3037897d2fd1cc72dfc5a5b79cf9c0e8cdae083e", "title": "Early versus late fusion in semantic video analysis", "text": "Semantic analysis of multimodal video aims to index segments of interest at a conceptual level. In reaching this goal, it requires an analysis of several information streams. At some point in the analysis these streams need to be fused. In this paper, we consider two classes of fusion schemes, namely early fusion and late fusion. The former fuses modalities in feature space, the latter fuses modalities in semantic space. We show by experiment on 184 hours of broadcast video data and for 20 semantic concepts, that late fusion tends to give slightly better performance for most concepts. However, for those concepts where early fusion performs better the difference is more significant."}
{"_id": "376b078694f0c183e4832900debda4dfed021a9a", "title": "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps", "text": "This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13]."}
{"_id": "1837decb49fb6fc68a6085e797faefb591fecb8a", "title": "Learning Transferrable Knowledge for Semantic Segmentation with Deep Convolutional Neural Network", "text": "We propose a novel weakly-supervised semantic segmentation algorithm based on Deep Convolutional Neural Network (DCNN). Contrary to existing weakly-supervised approaches, our algorithm exploits auxiliary segmentation annotations available for different categories to guide segmentations on images with only image-level class labels. To make segmentation knowledge transferrable across categories, we design a decoupled encoder-decoder architecture with attention model. In this architecture, the model generates spatial highlights of each category presented in images using an attention model, and subsequently performs binary segmentation for each highlighted region using decoder. Combining attention model, the decoder trained with segmentation annotations in different categories boosts accuracy of weakly-supervised semantic segmentation. The proposed algorithm demonstrates substantially improved performance compared to the state-of-theart weakly-supervised techniques in PASCAL VOC 2012 dataset when our model is trained with the annotations in 60 exclusive categories in Microsoft COCO dataset."}
{"_id": "30204cd1e8551ddc1d532c20cdb5717f4cfd884f", "title": "3-D Modeling of Tomato Canopies Using a High-Resolution Portable Scanning Lidar for Extracting Structural Information", "text": "In the present study, an attempt was made to produce a precise 3D image of a tomato canopy using a portable high-resolution scanning lidar. The tomato canopy was scanned by the lidar from three positions surrounding it. Through the scanning, the point cloud data of the canopy were obtained and they were co-registered. Then, points corresponding to leaves were extracted and converted into polygon images. From the polygon images, leaf areas were accurately estimated with a mean absolute percent error of 4.6%. Vertical profile of leaf area density (LAD) and leaf area index (LAI) could be also estimated by summing up each leaf area derived from the polygon images. Leaf inclination angle could be also estimated from the 3-D polygon image. It was shown that leaf inclination angles had different values at each part of a leaf."}
{"_id": "817b4d24643ac0d36ea9e647f0c144abd6b2da10", "title": "Clickbait Detection", "text": "This paper proposes a new model for the detection of clickbait, i.e., short messages that lure readers to click a link. Clickbait is primarily used by online content publishers to increase their readership, whereas its automatic detection will give readers a way of filtering their news stream. We contribute by compiling the first clickbait corpus of 2992 Twitter tweets, 767 of which are clickbait, and, by developing a clickbait model based on 215 features that enables a random forest classifier to achieve 0.79 ROC-AUC at 0.76 precision and 0.76 recall."}
{"_id": "6e0d7423ae78a3885596a2d22cbd7fd2e61947ae", "title": "Revealing the structure of the world airline network", "text": "Resilience of most critical infrastructures against failure of elements that appear insignificant is usually taken for granted. The World Airline Network (WAN) is an infrastructure that reduces the geographical gap between societies, both small and large, and brings forth economic gains. With the extensive use of a publicly maintained data set that contains information about airports and alternative connections between these airports, we empirically reveal that the WAN is a redundant and resilient network for long distance air travel, but otherwise breaks down completely due to removal of short and apparently insignificant connections. These short range connections with moderate number of passengers and alternate flights are the connections that keep remote parts of the world accessible. It is surprising, insofar as there exists a highly resilient and strongly connected core consisting of a small fraction of airports (around 2.3%) together with an extremely fragile star-like periphery. Yet, in spite of their relevance, more than 90% of the world airports are still interconnected upon removal of this core. With standard and unconventional removal measures we compare both empirical and topological perceptions for the fragmentation of the world. We identify how the WAN is organized into different classes of clusters based on the physical proximity of airports and analyze the consequence of this fragmentation."}
{"_id": "cbf1fabf4074a74470c29d188f6c060ae9b35e88", "title": "ICDAR2017 Competition on Handwritten Text Recognition on the READ Dataset", "text": "This paper describes the fourth edition of the Handwritten Text Recognition (HTR) competition that was prepared this time in the context of the International Conference on Document Analysis and Recognition (ICDAR) 2017. Previous editions of this competition were conducted, first, with datasets from the tranScriptorium project in ICFHR 2014, and ICDAR 2015, and then, with datasets from the \"Recognition and Enrichment of Archival Documents (READ)\" European project in ICFHR 2016. This competition aims to bring together researchers working on off-line HTR and provides them a suitable benchmark to compare their techniques on the task of transcribing typical and difficult historical handwritten documents. The competition proposed for ICDAR 2017 aims at introducing a usual scenario for some collections in which there exist transcripts at page level for many pages useful for training, but these transcripts are not aligned with line images. Two tracks with different conditions on the use of training data were proposed. Most of the data comes from the Alfred Escher Letter Collection. But handwritten images were drawn from other German collections written by several hands."}
{"_id": "da7b6474f2523df431db86c5e025686f313b1bb1", "title": "An edge-set based large scale graph processing system", "text": "Next generation analytics will be all about graphs, though performance has been a fundamental challenge for large scale graph processing. In this paper, we present an industrial graph processing engine for exploring various large scale linked data, which exhibits superior performance due to the several innovations. This engine organizes a graph as a set of edge-sets, compatible with the traditional edge-centric sharding for graphs, but becomes more amenable for large scale processing. Each time only a portion of the sets are needed for computation and the data access patterns can be highly predictable for prefetch for many graph computing algorithms. Due to the sparsity of large scale graph structure, this engine differentiates logical edge-sets from the edge-sets physically stored on the disk, where multiple logical edge-sets can be organized into a same physical edge-set to increase the data locality. Besides, in contrast to existing solution, the data structures utilized for the physical edge-sets can vary from one to another. Such heterogeneous edge-set representation explores the best graph processing performance according to local data access patterns. We conduct experiments on a representative set of property graphs on multiple platforms, where the proposed system outperform the baseline systems consistently."}
{"_id": "7dcf23009f40cbbc69445c1165249a0a1e8ddffe", "title": "Performance Measure Analysis between Anisotropic Diffusion Filter and Bilateral Filter for Post Processing of Fractal Compressed Medical Images", "text": "Filtering is Prime important processes in Medical Image processing applications. Any post processing process aims in the removal of unwanted noise which usually corrupts the image quality and perception. This research paper focuses on searching effective De noising filters for post processing of Fractal compressed Images on Medical Images like CT of Bone , MR Images of Brain ,Mammograms, Ultrasound Images of uterus. In this work Fractal Image Compression (FIC) a lossy compression scheme based on contractive mapping theorem is employed to map the Range blocks and Domain blocks by using the property of self similarity in the images. We have used two types of filters namely anisotropic diffusion filter and bilateral filter for the removal of noise in Medical images. The Peak signal to noise ratio (PSNR) was measured after applying the two different filters and a comparative analysis of PSNR values before and after filtering was recorded. The simulated results obtained showed an increase in PSNR value for bilateral filter than with anisotropic filter and also the quality of the image was improved."}
{"_id": "20b6acfb7dd0cfc1281a40e55292af087381c212", "title": "Lightly supervised recognition for automatic alignment of large coherent speech recordings", "text": "Large quantities of audio data with associated text such as audiobooks are nowadays available. These data are attractive for a range of research areas as they include features that go beyond the level of single sentences. The proposed approach allows high quality transcriptions and associated alignments of this form of data to be automatically generated. It combines information from lightly supervised recognition and the original text to yield the final transcription. The scheme is fully automatic and has been successfully applied to a number of audiobooks. Performance measurements show low word/sentence error rates as well as high sentence boundary accuracy."}
{"_id": "8b4411294a6728cad5e040c24b85f7b095689cb0", "title": "A Conceptual Framework for Supporting a Rapid Design of Web Applications for Data Analysis of Electrical Quality Assurance Data for the LHC", "text": "The Large Hadron Collider (LHC) is one of the most complex machines ever build. It is composed of many components which constitute a large system. The tunnel and the accelerator is just one of a very critical fraction of the whole LHC infrastructure. Hardware comissioning as one of the critical processes before running the LHC is implemented during the Long Shutdown (LS) states of the macine, where Electrical Quality Assurance (ELQA) is one of its key components. Here a huge data is collected when implementing various ELQA electrical tests. In this paper we present a conceptual framework for supporting a rapid design of web applications for ELQA data analysis. We show a framework\u2019s main components, their possible integration with other systems and machine learning algorithms and a simple use case of prototyping an application for Electrical Quality Assurance of the LHC."}
{"_id": "91ec137cdd3c753da857c31cefe33845678e8b1f", "title": "Diffusion Variational Autoencoders", "text": "A standard Variational Autoencoder, with a Euclidean latent space, is structurally incapable of capturing topological properties of certain datasets. To remove topological obstructions, we introduce Diffusion Variational Autoencoders with arbitrary manifolds as a latent space. A Diffusion Variational Autoencoder uses transition kernels of Brownian motion on the manifold. In particular, it uses properties of the Brownian motion to implement the reparametrization trick and fast approximations to the KL divergence. We show that the Diffusion Variational Autoencoder is capable of capturing topological properties of synthetic datasets. Additionally, we train MNIST on spheres, tori, projective spaces, SO(3), and a torus embedded in R. Although a natural dataset like MNIST does not have latent variables with a clear-cut topological structure, training it on a manifold can still highlight topological and geometrical properties."}
{"_id": "dda403e6d9b61e3fa84fafb3aa2f70884d03a944", "title": "Transductive Multi-view Embedding for Zero-Shot Recognition and Annotation", "text": "Most existing zero-shot learning approaches exploit transfer learning via an intermediate-level semantic representation such as visual attributes or semantic word vectors. Such a semantic representation is shared between an annotated auxiliary dataset and a target dataset with no annotation. A projection from a low-level feature space to the semantic space is learned from the auxiliary dataset and is applied without adaptation to the target dataset. In this paper we identify an inherent limitation with this approach. That is, due to having disjoint and potentially unrelated classes, the projection functions learned from the auxiliary dataset/domain are biased when applied directly to the target dataset/domain. We call this problem the projection domain shift problem and propose a novel framework, transductive multi-view embedding, to solve it. It is \u2018transductive\u2019 in that unlabelled target data points are explored for projection adaptation, and \u2018multi-view\u2019 in that both lowlevel feature (view) and multiple semantic representations (views) are embedded to rectify the projection shift. We demonstrate through extensive experiments that our framework (1) rectifies the projection shift between the auxiliary and target domains, (2) exploits the complementarity of multiple semantic representations, (3) achieves state-of-the-art recognition results on image and video benchmark datasets, and (4) enables novel cross-view annotation tasks."}
{"_id": "304d73315aec0f1f7ba3bb930f6111ef5e25ec14", "title": "Discrete and Continuous, Probabilistic Anticipation for Autonomous Robots in Urban Environments", "text": "This paper develops a probabilistic anticipation algorithm for dynamic objects observed by an autonomous robot in an urban environment. Predictive Gaussian mixture models are used due to their ability to probabilistically capture continuous and discrete obstacle decisions and behaviors; the predictive system uses the probabilistic output (state estimate and covariance) of a tracking system and map of the environment to compute the probability distribution over future obstacle states for a specified anticipation horizon. A Gaussian splitting method is proposed based on the sigma-point transform and the nonlinear dynamics function, which enables increased accuracy as the number of mixands grows. An approach to caching elements of this optimal splitting method is proposed, in order to enable real-time implementation. Simulation results and evaluations on data from the research community demonstrate that the proposed algorithm can accurately anticipate the probability distributions over future states of nonlinear systems."}
{"_id": "b4ee698d0e33d2d616153db50eb4ed143bc69a1d", "title": "GestureSleeve: using touch sensitive fabrics for gestural input on the forearm for controlling smartwatches", "text": "Smartwatches provide quick and easy access to information. Due to their wearable nature, users can perceive the information while being stationary or on the go. The main drawback of smartwatches, however, is the limited input possibility. They use similar input methods as smartphones but thereby suffer from a smaller form factor. To extend the input space of smartwatches, we present GestureSleeve, a sleeve made out of touch enabled textile. It is capable of detecting different gestures such as stroke based gestures or taps. With these gestures, the user can control various smartwatch applications. Exploring the performance of the GestureSleeve approach, we conducted a user study with a running application as use case. In this study, we show that input using the GestureSleeve outperforms touch input on the smartwatch. In the future the GestureSleeve can be integrated into regular clothing and be used for controlling various smart devices."}
{"_id": "ad47122171ef1313d169b6b68fdb9809ec1c213a", "title": "Evaluation of object-oriented design patterns in game development", "text": "The use of object-oriented design patterns in game development is being evaluated in this paper. Games\u2019 quick evolution, demands great flexibility, code reusability and low maintenance costs. Games usually differentiate between versions, in respect of changes of the same type (additional animation, terrains etc). Consequently, the application of design patterns in them can be beneficial regarding maintainability. In order to investigate the benefits of using design patterns, a qualitative and a quantitative evaluation of open source projects is being performed. For the quantitative evaluation, the projects are being analyzed by reverse engineering techniques and software metrics are calculated. 2006 Elsevier B.V. All rights reserved."}
{"_id": "568e4db5656f28bcb569436f16a28fa707a6423c", "title": "Metrics for Quality Assurance of Web Based Applications", "text": "Web-Commerce applications are now an indispensable aspect of businesses around the world. More businesses are now migrating from outdated applications to a new type of combined e-business designs. With such large volumes of applications that need to be put online, there is now a dire need for measurable and quantifiable metrics that can help in gauging the quality of these websites. The development considerations for both domains may be deemed similar in their final purpose, that is to provide a service to its end-users, however, web-applications today face a myriad of constraints, with most businesses opting to go online, the crucial questions are; Is the Web info metrics are any different, or is it just an application of classical metrics (desktop metrics) to a new medium (web metrics). In our research, we propose to investigate these issues, and present the distinguishable metrics for the Quality Assurance(QA) processes involved in Web-Applications, as opposed to traditional desktop software application. We will also be scrutinizing the major problem that has been persistent in QA related to web applications; the lack of standards, and development models for the web applications."}
{"_id": "fc211e6b7a112982bd96a9aa04144a0a06e86a97", "title": "Soft switching interleaved PWM buck converter with one auxiliary switch", "text": "This paper proposes an interleaved zero voltage transition (ZVT) PWM buck converter. The proposed converter consists of two identical buck converter modules and an auxiliary circuit. The auxiliary circuit provides zero voltage switching condition for the main switches. Also, zero current switching condition for the auxiliary switch and diodes is achieved. The interleaved buck converter with the proposed auxiliary switch is analyzed. The simulation results of a 300W ZVT interleaved buck converter operating at 100 KHz are presented to justify the validity of theoretical analysis."}
{"_id": "3295de63e1ff45040f6f14123e49e510068e6eee", "title": "Brief mindfulness meditation training alters psychological and neuroendocrine responses to social evaluative stress", "text": "OBJECTIVE\nTo test whether a brief mindfulness meditation training intervention buffers self-reported psychological and neuroendocrine responses to the Trier Social Stress Test (TSST) in young adult volunteers. A second objective evaluates whether pre-existing levels of dispositional mindfulness moderate the effects of brief mindfulness meditation training on stress reactivity.\n\n\nMETHODS\nSixty-six (N=66) participants were randomly assigned to either a brief 3-day (25-min per day) mindfulness meditation training or an analytic cognitive training control program. All participants completed a standardized laboratory social-evaluative stress challenge task (the TSST) following the third mindfulness meditation or cognitive training session. Measures of psychological (stress perceptions) and biological (salivary cortisol, blood pressure) stress reactivity were collected during the social evaluative stress-challenge session.\n\n\nRESULTS\nBrief mindfulness meditation training reduced self-reported psychological stress reactivity but increased salivary cortisol reactivity to the TSST, relative to the cognitive training comparison program. Participants who were low in pre-existing levels of dispositional mindfulness and then received mindfulness meditation training had the greatest cortisol reactivity to the TSST. No significant main or interactive effects were observed for systolic or diastolic blood pressure reactivity to the TSST.\n\n\nCONCLUSIONS\nThe present study provides an initial indication that brief mindfulness meditation training buffers self-reported psychological stress reactivity, but also increases cortisol reactivity to social evaluative stress. This pattern may indicate that initially brief mindfulness meditation training fosters greater active coping efforts, resulting in reduced psychological stress appraisals and greater cortisol reactivity during social evaluative stressors."}
{"_id": "a4d37add4da12988dbf7258eae3c10a8f9f82324", "title": "A video object tracking algorithm combined Kalman filter and adaptive least squares under occlusion", "text": "Object occlusion is widespread in video surveillance due to the influence of angle and environment, which brings a great impact on the target tracking and makes the development of video object tracking encountered meeting many constraints. The challenge in video object tracking is how to track accurately when the target is obscured. We use the frame difference method to detect the target and smooth the target trajectory by using Kalman filter. When the target is occluded, we can select the appropriate fitting section after smoothing trajectories and use the least square method to fit the target motion path adaptively, so we can predict the object location. By comparing the experimental results with the actual motion of the target, it shows that the algorithm can be used to track most of the occlusion targets precisely."}
{"_id": "bf8ac3b460fd08022857cdcdcccc48d67804bec8", "title": "The development of initial trust in an online company by new customers", "text": "Lack of trust in online companies is a primary reason why many web users do not shop online. This study proposes a model to explain how new customers of a web-based company develop initial trust in the company after their first visit. The model is empirically tested using a questionnaire-based field study. The results indicate that perceived company reputation and willingness to customize products and services can significantly affect initial trust. Perceived web site usefulness, ease of use, and security control are also significant antecedents of initial trust. Finally, we found no support for the hypothesized effect of individual customer trust propensity on initial trust. # 2003 Elsevier B.V. All rights reserved."}
{"_id": "0407d72c2e773aec18a4be6e2bcbdf1f91f032bb", "title": "Leadership in Complex Organizations", "text": "This paper asks how complexity theory informs the role of leadership in organizations. Complexity theory is a science of complexly interacting systems; it explores the nature of interaction and adaptation in such systems and how they influence such things as emergence, innovation, and fitness. We argue that complexity theory focuses leadership efforts on behaviors that enable organizational effectiveness, as opposed to determining or guiding effectiveness. Complexity science broadens conceptualizations of leadership from perspectives that are heavily invested in psychology and social psychology (e.g., human relations models) to include processes for managing dynamic systems and interconnectivity. We develop a definition of organizational complexity and apply it to leadership science, discuss strategies for enabling complexity and effectiveness, and delve into the relationship between complexity theory and other currently important leadership theories. The paper concludes with a discussion of possible implications for research strategies in the social"}
{"_id": "2d59338108d3333890089305f15a60b6e5f00c54", "title": "The Iron Cage Revisited : Institutional Isomorphism and Collective Rationality in Organizational Fields Author ( s ) :", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "c38c27ee0457dbc4a0d47d9ac71008f502245e2c", "title": "Contributions of the ventromedial prefrontal cortex to goal-directed action selection.", "text": "In this article, it will be argued that one of the key contributions of the ventromedial prefrontal cortex (vmPFC) to goal-directed action selection lies both in retrieving the value of goals that are the putative outcomes of the decision process and in establishing a relative preference ranking for these goals by taking into account the value of each of the different goals under consideration in a given decision-making scenario. These goal-value signals are then suggested to be used as an input into the on-line computation of action values mediated by brain regions outside of the vmPFC, such as parts of the parietal cortex, supplementary motor cortex, and dorsal striatum. Collectively, these areas can be considered to be constituent elements of a multistage decision process whereby the values of different goals must first be represented and ranked before the value of different courses of action available for the pursuit of those goals can be computed."}
{"_id": "b94753895e9aa1d736428a985bce56021f2bd9ec", "title": "Advanced Wireless LAN Technologies: IEEE 802.11AC and Beyond", "text": "In 2013, global mobile data traffic grew 81% and it is projected to increase 11-fold between 2013 and 2018. Further, it is predicted that by 2018, over two-thirds of the world's mobile traffic will be video and more than half of all traffic from wireless connected devices will be offloaded to Wi-Fi networks and femto-cells [1]. Consequently, wireless LANs need major upgrades to improve both throughput and efficiency. IEEE 802.11ac is an amendment to the 802.11 standard that was just ratified by IEEE 802.11. Promising up to gigabit data rates, many Wi-Fi products are being built based on this specification. In addition to technologies that improve throughput, IEEE 802.11ax is investigating and evaluating advanced wireless technologies that enable more efficient utilization of the existing spectrum."}
{"_id": "8c1fd0a75938a2680458f155e60b5bb71be05f84", "title": "Spelling Correction for Morphologically Rich Language: a Case Study of Russian", "text": "We present an algorithm for automatic correction of spelling errors on the sentence level, which uses noisy channel model and feature-based reranking of hypotheses. Our system is designed for Russian and clearly outperforms the winner of SpellRuEval-2016 competition. We show that language model size has the greatest influence on spelling correction quality. We also experiment with different types of features and show that morphological and semantic information also improves the accuracy of spellchecking. The task of automatic spelling correction has applications in different areas including correction of search queries, spellchecking in browsers and text editors etc. It attracted intensive attention in early era of modern NLP. Many researchers addressed both the problems of effective candidates generation (Kernighan et al., 1990; Brill and Moore, 2000) and their adequate ranking (Golding and Roth, 1999; Whitelaw et al., 2009). Recently, the focus has moved to close but separate areas of text normalization (Han et al., 2013) and grammar errors correction (Ng et al., 2014), though the task of spellchecking is far from being perfectly solved. Most of early works were conducted for English for which NLP tasks are usually easier than for other languages due to simplicity of its morphology and strict word order. Also there were studies for Arabic (papers of QALB-2014 Shared Task (Ng et al., 2014)) and Chinese (Wu et al., 2013), but for most languages the problem still is open. In context of Slavic languages, there were just a few works including Sorokin and Shavrina (2016) for Russian, Richter et al. (2012) for Czech and Hladek et al. (2013) for Slovak. However, spelling correction becomes actual again due to intensive growth of social media. Indeed, corpora of Web texts including blogs, microblogs, forums etc. become the main sources for corpus studies. Most of these corpora are very large so they are collected and processed automatically with only limited manual correction. Hence, most texts in such corpora contain various types of spelling variation, from mere typos and orthographic errors to dialectal and sociolinguistic peculiarities. Moreover, orthographic errors are unavoidable since the more social media texts we have, the higher is the fraction of those, whose authors are not well-educated and therefore tend to make mistakes. That increases the percentage of out-of-vocabulary words in text, which affects the quality of any further NLP task from lemmatization to any kind of parsing or information extraction. Summarizing, it is desirable to detect and correct at least undoubtable misspellings in Web texts with high precision. Unfortunately, there were very few studies dealing with spellchecking for real-world Web texts, e.g. LiveJournal or Facebook. Most authors investigated spelling correction in a rather restricted fashion. They focused on selecting a correct word from a small pre-defined confusion set (e.g., adopt/adapt), skipping a problem of detecting misprints or generating the set of possible corrections. Often researchers did not deal with real-world errors just randomly introducing typos in every word with some probability. Therefore, spelling correction has no \u201cintelligent baseline\u201d algorithm such as trigram HMM-models for morphological parsing or CBOW vectors for distributional similarity. 
One of the goals of our work is to propose such a baseline. The principal feature of our approach is that it works with entire sentences, not on the level of separate words. A serious problem for research on spellcheck45 ing is the lack of publicly available datasets for spelling correction in different languages. Fortunately, recently such a corpus was created for Russian during SpellRuEval-2016 competition (Sorokin et al., 2016). Russian is rather complex for NLP tasks because of its developed nominal and verb morphology and free word order. Therefore it is well-suited for extensive testing of spelling correction algorithms, although our results are applicable to any other language having similar properties. We propose a reranking algorithm for automatic spelling correction and evaluate it on SpellRuEval2016 dataset. The paper is organized as follows: Section 1 summarizes previous work on automatic spelling correction focusing on context-sensitive approaches, Section 2 contains our algorithm, Section 3 describes test data, Section 4 analyzes the performance of our system depending on different settings and we conclude in Section 5."}
{"_id": "e5fde1f6d46d1d2f8e966548e71232f1fd6ebec0", "title": "Assessing and Quantifying Network Effects in an Online Dating Market", "text": "We empirically examine and quantify network effects on a large online dating platform in Brazil. We exploit a natural experiment, wherein a focal platform acquired its Brazilian competitor and subsequently imported the competitor\u2019s base of 150,000+ users over a 3-day period; a large exogenous shock to the composition of the purchasing platform. Our study context and the natural experiment provide two unique sources of identification: i) accounts purchased from the competitor were almost exclusively heterosexual users, even though the purchasing platform also played host to homosexual users, and ii) the treatment varied across cities, in that the \u201cvalue\u201d of new users to the existing user base differed. because purchased users differed from existing users in terms of their average characteristics (e.g., location within the city). We leverage the former to estimate a difference-in-differences specification, treating homosexual enrollment and exit rates as a plausible control for those of the heterosexual population, whereas the latter provides us with an opportunity to explore the importance of local market structure in the manifestation of network effects. We find that the treatment increased both rates of enrollment and rates of exit, amongst both genders, with a net positive effect that translated to a 17% increase in short-term revenue for the platform. We find clear evidence of local network effects; in cities where the average spatial distance between new users and existing users was larger, the treatment effect was significantly weaker. We discuss the implications for the literature and practice, and we suggest a number of avenues for future work in this space. \u2217Assistant Professor, Information Systems Department, Carlson School of Management, University of Minnesota. gburtch@umn.edu. \u2020Associate Professor, Information Systems Department, Desautels Faculty of Management, McGill University. jui.ramaprasad@mcgill.ca."}
{"_id": "62cd6f907fdd767eacd60ce4d5ecdf3f4812123a", "title": "Review of Recent Advances in the Application of the Wavelet Transform to Diagnose Cracked Rotors", "text": "Wavelet transform (WT) has been used in the diagnosis of cracked rotors since the 1990s. At present, WT is one of the most commonly used tools to treat signals in several fields. Understandably, this has been an area of extensive scientific research, which is why this paper aims to summarize briefly the major advances in the field since 2008. The present review considers advances in the use and application of WT, the selection of the parameters used, and the key achievements in using WT for crack diagnosis."}
{"_id": "fec1295f84f6eada317bd9973458eb42354246f5", "title": "Time-Varying Copula Models for Financial Time Series", "text": "We perform an analysis of the potential time inhomogeneity in the dependence between multiple financial time series. To this end, we use the framework of copula theory and tackle the question of whether dependencies in such a case can be assumed constant throughout time or rather have to be modeled in a time-inhomogeneous way. We focus on parametric copula models and suitable inference techniques in the context of a special copula-based multivariate time series model. A recent result due to Chan et al. (2009) is used to derive the joint limiting distribution of local maximum-likelihood estimators on overlapping samples. By restricting the overlap to be fixed, we establish the limiting law of the maximum of the estimator series. Based on the limiting distributions, we develop statistical homogeneity tests, and investigate their local power properties. A Monte Carlo simulation study demonstrates that bootstrapped variance estimates are needed in finite samples. Empirical analyses on real-world financial data finally confirm that time-varying parameters are an exception rather than the rule."}
{"_id": "329f8aa2b6868bdd3f61a263742f1e03163d4787", "title": "Bilingual Active Learning for Relation Classification via Pseudo Parallel Corpora", "text": "Active learning (AL) has been proven effective to reduce human annotation efforts in NLP. However, previous studies on AL are limited to applications in a single language. This paper proposes a bilingual active learning paradigm for relation classification, where the unlabeled instances are first jointly chosen in terms of their prediction uncertainty scores in two languages and then manually labeled by an oracle. Instead of using a parallel corpus, labeled and unlabeled instances in one language are translated into ones in the other language and all instances in both languages are then fed into a bilingual active learning engine as pseudo parallel corpora. Experimental results on the ACE RDC 2005 Chinese and English corpora show that bilingual active learning for relation classification significantly outperforms monolingual active learning."}
{"_id": "480e34860b1e458502db6a464e1ce9e8c0e3cc81", "title": "An Efficient Algorithm for Local Distance Metric Learning", "text": "Learning application-specific distance metrics from labeled data is critical for both statistical classification and information retrieval. Most of the earlier work in this area has focused on finding metrics that simultaneously optimize compactness and separability in a global sense. Specifically, such distance metrics attempt to keep all of the data points in each class close together while ensuring that data points from different classes are separated. However, particularly when classes exhibit multimodal data distributions, these goals conflict and thus cannot be simultaneously satisfied. This paper proposes a Local Distance Metric (LDM) that aims to optimize local compactness and local separability. We present an efficient algorithm that employs eigenvector analysis and bound optimization to learn the LDM from training data in a probabilistic framework. We demonstrate that LDM achieves significant improvements in both classification and retrieval accuracy compared to global distance learning and kernel-based KNN. Introduction Distance metric learning has played a significant role in both statistical classification and information retrieval. For instance, previous studies (Goldberger et al. 2005; Weinberger, Blitzer, & Saul 2006) have shown that appropriate distance metrics can significantly improve the classification accuracy of the K Nearest Neighbor (KNN) algorithm. In multimedia information retrieval, several papers (He et al. 2003; 2004; Muller, Pun, & Squire 2004) have shown that appropriate distance metrics, learned either from labeled or unlabeled data, usually result in substantial improvements in retrieval accuracy compared to the standard Euclidean distance. Most of the work in distance metrics learning can be organized into the following two categories: \u2022 Unsupervised distance metric learning, or manifold learning. The main idea is to learn a underlying low-dimensional manifold where geometric relationships (e.g., distance) between most of the observed data points are preserved. Popular algorithms in this category include ISOMAP (Tenenbaum, de Silva, & Langford 2000), Local Linear Embedding (Saul & Roweis 2003), and the Laplacian Eigenmap (Belkin & Niyogi 2003). Copyright c \u00a9 2006, American Association for Artificial Intelligence (www.aaai.org). All rights reserved. \u22125 0 5 10 \u22128 \u22126 \u22124 \u22122 0 2 4 6 8 10 12 Class A Class B (a) \u22120.2 \u22120.1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 \u22120.25 \u22120.2 \u22120.15 \u22120.1 \u22120.05 0 0.05 0.1 Class A Class B"}
{"_id": "61395b81b65d16ed7bd07d4ec728412d73f6f018", "title": "A surgical parallel continuum manipulator with a cable-driven grasper", "text": "In this paper, we present the design, construction, and control of a six-degree-of-freedom (DOF), 12 mm diameter, parallel continuum manipulator with a 2-DOF, cable-driven grasper. This work is a proof-of-concept first step towards miniaturization of this type of manipulator design to provide increased dexterity and stability in confined-space surgical applications, particularly for endoscopic procedures. Our robotic system consists of six superelastic NiTi (Nitinol) tubes in a standard Stewart-Gough configuration and an end effector with 180 degree motion of its two jaws. Two Kevlar cables pass through the centers of the tube legs to actuate the end effector. A computationally efficient inverse kinematics model provides low-level control inputs to ten independent linear actuators, which drive the Stewart-Gough platform and end-effector actuation cables. We demonstrate the performance and feasibility of this design by conducting open-loop range-of-motion tests for our system."}
{"_id": "dcc0b8fae182fafbcd4eb6b569f6ac80c241a4da", "title": "Fast learning of scale-free networks based on Cholesky factorization", "text": "School of Electrical Engineering, University of Belgrade, Belgrade, Serbia Mathematical Institute of the Serbian Academy of Sciences and Arts, Belgrade, Serbia Center for Data Analytics & Biomedical Informatics, Temple University, Philadelphia, USA Correspondence Vladisav Jelisavcic, School of Electrical Engineering, University of Belgrade, Belgrade, Serbia. E-mail: vladisav@mi.sanu.ac.rs Funding information NSF BIGDATA, Grant/Award Numbers: 1447670, 1659998, CNS-1625061; CURE Abstract Recovering network connectivity structure from highdimensional observations is of increasing importance in statistical learning applications. A prominent approach is to learn a Sparse Gaussian Markov Random Field by optimizing regularized maximum likelihood, where the sparsity is induced by imposing L1 norm on the entries of a precision matrix. In this article, we shed light on an alternative objective, where instead of precision, its Cholesky factor is penalized by the L1 norm. We show that such an objective criterion possesses attractive properties that allowed us to develop a very fast Scale-Free Networks Estimation Through Cholesky factorization (SNETCH) optimization algorithm based on coordinate descent, which is highly parallelizable and can exploit an active set approach. The approach is particularly suited for problems with structures that allow sparse Cholesky factor, an important example being scale-free networks. Evaluation on synthetically generated examples and high-impact applications from a biomedical domain of up to more than 900,000 variables provides evidence that for such tasks the SNETCH algorithm can learn the underlying structure more accurately, and an order of magnitude faster than state-of-the-art approaches based on the L1 penalized precision matrix."}
{"_id": "3ae526f08b062db478ecdcf17b9fb32125922e20", "title": "Evaluating Recommender Systems -- An evaluation framework to predict user satisfaction for recommender systems in an electronic programme guide context", "text": "Preface It was January 2007 when Dolf Trieschnigg, my supervisor for the course Information Retrieval , first told me about the availability of graduation projects at TNO Information and Communication Technology. This was a bull's eye since I just started to orientate myself on a subject to investigate. I was also looking for a company where I could perform my research since I wanted to get familiar with working in a professional environment. TNO Information and Communication Technology was one of the companies that seemed interesting to me, so I contacted Stephan Raaijmakers for more information. The subject of the proposed research project, evaluating recommender systems, was completely new to me, but seemed fascinating. And it is. In September 2008 I started my research by crafting the goals and research questions for the project. Now, almost nine months later, the research has resulted in the report that you just started reading. It marks the end of my life as a student (at least for now) and the start of my professional career at TNO. At TNO I can continue to work on personalisation and recommender systems. Although I wrote this thesis the research would never have been completed without the support of other people. In the first place, I would like to thank my supervisors, Erik, Djoerd, Stephan and Dolf for supervising me throughout the research project. They were always there for me when I needed their help and/or expertise and provided loads of valuable comments and suggestions. I also like to thank Jan Telman for his help and advice with respect to statistics (and hiking through the Alps). Furthermore I would like to thank all people that participated in the user study. Without their devotion to the demanding study, the research would not have been completed. I would also like to thank my lovely girlfriend Linda, my family and my colleagues at TNO Information and Communication Technology for being all ears when I tried to explain my approach and ideas to them. They helped me to abstract from all the details and forced me to explain things clearly so I better understood it myself. Finally, I owe special thanks to Mark Prins, who provided me with an inexhaustible source of Ti\u00ebsto's Club Life, A State of Trance and other dance and trance music."}
{"_id": "e7e5d45d0095aa71ad9be8852af849d3f3085fdd", "title": "Conversion of urdu nastaliq to roman urdu using OCR", "text": "This paper deals with Urdu Nastaliq, which is a popular script of writing Urdu language. The complex and cursive nature of nastaliq makes optical character recognition for this script very challenging. Segmentation of urdu nastaliq is also very complex due to different levels and shapes of characters according to their position in a word. Character based segmentation technique for urdu names has been proposed in this paper. After segmenting each character, the characters are matched to their templates already saved in the database. The proposed technique handles many complexities in conversion of urdu names to roman urdu including vowel sounds produced by a single character due to diacritics and different voices of ye, ain and other such characters. At the end names written in nastaliq script are converted into roman urdu."}
{"_id": "e096af1f047064d4b9a14cffe692ff393770d04e", "title": "The role of prefrontal cortex and posterior parietal cortex in task switching.", "text": "Human ability to switch from one cognitive task to another involves both endogenous preparation without an external stimulus and exogenous adjustment in response to the external stimulus. In an event-related functional MRI study, participants performed pairs of two tasks that are either the same (task repetition) or different (task switch) from each other. On half of the trials, foreknowledge about task repetition or task switch was available. On the other half, it was not. Endogenous preparation seems to involve lateral prefrontal cortex (BA 46/45) and posterior parietal cortex (BA 40). During preparation, higher activation increases in inferior lateral prefrontal cortex and superior posterior parietal cortex were associated with foreknowledge than with no foreknowledge. Exogenous adjustment seems to involve superior prefrontal cortex (BA 8) and posterior parietal cortex (BA 39/40) in general. During a task switch with no foreknowledge, activations in these areas were relatively higher than during a task repetition with no foreknowledge. These results suggest that endogenous preparation and exogenous adjustment for a task switch may be independent processes involving different brain areas."}
{"_id": "8144c5e362da8dff8def2c8cba91bedca9c55056", "title": "Filtering of Video using Guided Filter", "text": "Image processing is one of the most important areas of signal processing. Image processing is applicable for the improvement of pictorial improvement for human perception and autonomous machine application also the efficient storage and transmission. In this paper guided filter is proposed which can be applicable to both images as well as to video. This guided filter has many advantages over the previous version of filter like mean filter, median filter, sobel filter, bilateral filter, etc. Among of those bilateral filter is more popular. But this bilateral filter has certain limitations like gradient reversal effect this introduces false edges in the image. Proposed guided filter can smooth image/video and also preserved the edge that is information at the edges of image/video but this filter has better behaviors near edges. Also the fast implementation of guided filter is possible. This filter also has algorithm implementation regardless of kernel size. Guided filter gives better output than that of bilateral filter after filtering this bilateral filter gives output like cartoon this also a problem with this. The guided filter is both effective and efficient in many applications of computer vision and computer graphics, etc. Filtering techniques are used in image and video processing applications for various technologies."}
{"_id": "3ef00af00ed4e0113f4a6e27b43969d105124fc3", "title": "TrialChain: A Blockchain-Based Platform to Validate Data Integrity in Large, Biomedical Research Studies", "text": "The governance of data used for biomedical research and clinical trials is an important requirement for generating accurate results. To improve the visibility of data quality and analysis, we developed TrialChain, a blockchain-based platform that can be used to validate data integrity from large, biomedical research studies. We implemented a private blockchain using the MultiChain platform and integrated it with a data science platform deployed within a large research center. An administrative web application was built with Python to manage the platform, which was built with a microservice architecture using Docker. The TrialChain platform was integrated during data acquisition into our existing data science platform. Using NiFi, data were hashed and logged within the local blockchain infrastructure. To provide public validation, the local blockchain state was periodically synchronized to the public Ethereum network. The use of a combined private/public blockchain platform allows for both public validation of results while maintaining additional security and lower cost for blockchain transactions. Original data and modifications due to downstream analysis can be logged within TrialChain and data assets or results can be rapidly validated when needed using API calls to the platform. The TrialChain platform provides a data governance solution to audit the acquisition and analysis of biomedical research data. The platform provides cryptographic assurance of data authenticity and can also be used to document data analysis."}
{"_id": "9e7487d7396acf298f972f0b8ab0397e0d9e7a0f", "title": "DesnowNet: Context-Aware Deep Network for Snow Removal", "text": "Existing learning-based atmospheric particle-removal approaches such as those used for rainy and hazy images are designed with strong assumptions regarding spatial frequency, trajectory, and translucency. However, the removal of snow particles is more complicated because they possess additional attributes of particle size and shape, and these attributes may vary within a single image. Currently, hand-crafted features are still the mainstream for snow removal, making significant generalization difficult to achieve. In response, we have designed a multistage network named DesnowNet to in turn deal with the removal of translucent and opaque snow particles. We also differentiate snow attributes of translucency and chromatic aberration for accurate estimation. Moreover, our approach individually estimates residual complements of the snow-free images to recover details obscured by opaque snow. Additionally, a multi-scale design is utilized throughout the entire network to model the diversity of snow. As demonstrated in the qualitative and quantitative experiments, our approach outperforms state-of-the-art learning-based atmospheric phenomena removal methods and one semantic segmentation baseline on the proposed Snow100K dataset. The results indicate our network would benefit applications involving computer vision and graphics."}
{"_id": "80c97ef62eee25ac48412cd1e20d9cc55e931b3f", "title": "Evidence for a basic level in a taxonomy of everyday action sounds", "text": "We searched for evidence that the auditory organization of categories of sounds produced by actions includes a privileged or \u201cbasic\u201d level of description. The sound events consisted of single objects (or substances) undergoing simple actions. Performance on sound events was measured in two ways: sounds were directly verified as belonging to a category, or sounds were used to create lexical priming. The category verification experiment measured the accuracy and reaction time to brief excerpts of these sounds. The lexical priming experiment measured reaction time benefits and costs caused by the presentation of these sounds prior to a lexical decision. The level of description of a sound varied in how specifically it described the physical properties of the action producing the sound. Both identification and priming effects were superior when a label described the specific interaction causing the sound (e.g. trickling) in comparison to the following: (1) more general descriptions (e.g. pour, liquid: trickling is a specific manner of pouring liquid), (2) more detailed descriptions using adverbs to provide detail regarding the manner of the action (e.g. trickling evenly). These results are consistent with neuroimaging studies showing that auditory representations of sounds produced by actions familiar to the listener activate motor representations of the gestures involved in sound production."}
{"_id": "6dbbfe6e6071b9591abc7ac8e92c05a615d3a8a3", "title": "A Parameter-Free Spatio-Temporal Pattern Mining Model to Catalog Global Ocean Dynamics", "text": "As spatio-temporal data have become ubiquitous, an increasing challenge facing computer scientists is that of identifying discrete patterns in continuous spatio-temporal fields. In this paper, we introduce a parameter-free pattern mining application that is able to identify dynamic anomalies in ocean data, known as ocean eddies. Despite ocean eddy monitoring being an active field of research, we provide one of the first quantitative analyses of the performance of the most used monitoring algorithms. We present an incomplete information validation technique, that uses the performance of two methods to construct an imperfect ground truth to test the significance of patterns discovered as well as the relative performance of pattern mining algorithms. These methods, in addition to the validation schemes discussed provide researchers new directions in analyzing large unlabeled climate datasets."}
{"_id": "9342bc944185916301c6fcaa637c32f729aa3418", "title": "An Efficient Representation for Filtrations of Simplicial Complexes", "text": "A filtration over a simplicial complex K is an ordering of the simplices of K such that all prefixes in the ordering are subcomplexes of K. Filtrations are at the core of Persistent Homology, a major tool in Topological Data Analysis. To represent the filtration of a simplicial complex, the entire filtration can be appended to any data structure that explicitly stores all the simplices of the complex such as the Hasse diagram or the recently introduced Simplex Tree [Algorithmica\u201914]. However, with the popularity of various computational methods that need to handle simplicial complexes, and with the rapidly increasing size of the complexes, the task of finding a compact data structure that can still support efficient queries is of great interest.\n This direction has been recently pursued for the case of maintaining simplicial complexes. For instance, Boissonnat et al. [Algorithmica\u201917] considered storing the simplices that are maximal with respect to inclusion and Attali et al. [IJCGA\u201912] considered storing the simplices that block the expansion of the complex. Nevertheless, so far there has been no data structure that compactly stores the filtration of a simplicial complex, while also allowing the efficient implementation of basic operations on the complex.\n In this article, we propose a new data structure called the Critical Simplex Diagram (CSD), which is a variant of the Simplex Array List [Algorithmica\u201917]. Our data structure allows one to store in a compact way the filtration of a simplicial complex and allows for the efficient implementation of a large range of basic operations. Moreover, we prove that our data structure is essentially optimal with respect to the requisite storage space. Finally, we show that the CSD representation admits fast construction algorithms for Flag complexes and relaxed Delaunay complexes."}
{"_id": "3813fade6b111f08636ad220ef32bd95b57d3e03", "title": "PERFORMANCE LIMITS OF SWITCHED-CAPACITOR DC-DC CONVERTERS", "text": "Theoretical performance limits of twophase switched-capacitor (SC) dc-dc converters are discussed in this paper. For a given number of capacitors k, the complete set of attainable dc conversion ratios is found. The maximum step-up or stepdown ratio is given by the k t h Fibonacci number, while the bound on the number of switches required in any SC circuit is 3k 2. Practical implications, illustrated by several SC converter examples, include savings in the number of components required for a given application, and the ability to construct SC converters that can maintain the output voltage regulation and high conversion efficiency over a wide range of input voltage variations. Limits found for the output resistance and efflciency can be used for selection and comparison of SC converters."}
{"_id": "533ea26a8af1f364b92d856f0aae2fc4e1539952", "title": "Analysis and Optimization of Switched-Capacitor DC\u2013DC Converters", "text": "Analysis methods are developed that fully determine a switched-capacitor (SC) dc-dc converter's steady-state performance through evaluation of its output impedance. This analysis method has been verified through simulation and experimentation. The simple formulation developed permits optimization of the capacitor sizes to meet a constraint such as a total capacitance or total energy storage limit, and also permits optimization of the switch sizes subject to constraints on total switch conductances or total switch volt-ampere (V-A) products. These optimizations then permit comparison among several switched-capacitor topologies, and comparisons of SC converters with conventional magnetic-based dc-dc converter circuits, in the context of various application settings. Significantly, the performance (based on conduction loss) of a ladder-type converter is found to be superior to that of a conventional magnetic-based converter for medium to high conversion ratios."}
{"_id": "0206bc94b05200094f32a7cf440551e4daebc618", "title": "MOS charge pumps for low-voltage operation", "text": "New MOS charge pumps utilizing the charge transfer switches (CTS\u2019s) to direct charge flow and generate boosted output voltage are described. Using the internal boosted voltage to backward control the CTS of a previous stage yields charge pumps that are suitable for low-voltage operation. Applying dynamic control to the CTS\u2019s can eliminate the reverse charge sharing phenomenon and further improve the voltage pumping gain. The limitation imposed by the diode-configured output stage can be mitigated by pumping it with a clock of enhanced voltage amplitude. Using the new circuit techniques, a 1.2-V-to-3.5-V charge pump and a 2-V-to-16-V charge pump are demonstrated."}
{"_id": "74e7b8023c09e12205745d9a10b95076a9b0f82a", "title": "Analytical and Practical Analysis of Switched-Capacitor DC-DC Converters", "text": "Switched-capacitor DC-DC converters are useful alternatives to inductor-based converters in many lowpower and medium-power applications. This work develops a straightforward analysis method to determine a switched-capacitor converter\u2019s output impedance (a measure of performance and power loss). This resistive impedance is a function of frequency and has two asymptotic limits, one corresponding to very high switching frequency where resistive paths dominate the impedance, and one corresponding to very low switching frequency where charge transfers among idealized capacitors dominate the impedance. An optimization method is developed to improve the performance of these converters through component sizing based on practical constraints. Several switched-capacitor converter topologies are compared in the two asymptotic limits. Switched-capacitor converter performance (based on conduction loss) is compared with that of two magnetics-based DC-DC converters. At moderate to high conversion ratios, the switchedcapacitor converter has significantly less conduction loss than an inductor-based buck converter. Some aspects of converter implementation are discussed, including the power loss due to device parasitics and methods for transistor control. Implementation using both integrated and discrete devices is discussed. With the correct analysis methods, switched-capacitor DC-DC converters can provide an attractive alternative to conventional power converters."}
{"_id": "7cde4cf792f2be12deb8d5410170a003375397d5", "title": "SWITCHED-CAPACITOR DC-DC CONVERTERS FOR LOW-POWER ON-CHIP APPLICATIONS", "text": "The paper describes switched-capacitor dc-dc converters (charge pumps) suitable for on-chip, low-power applications. The proposed configurations are based on connecting two identical but opposite-phase SC converters in parallel, thus eliminating the need for separate bootstrap gate drivers. We focus on emerging very low-power VLSI applications such as batterypowered or self-powered signal processors where high power conversion efficiency is important and where power levels are in the milliwatt range. Conduction and switching losses are considered to allow design optimization in terms of switching frequency and component sizes. Open-loop and closed-loop operation of an experimental, fully integrated, 10MHz voltage doubler is described. The doubler has 2V or 3V input and generates 3.3V or 5V output at up to 5mW load. The converter circuit fabricated in a standard 1.2\u03bc CMOS technology takes 0.7mm of the chip area."}
{"_id": "07baaa9229594ec9215a0b2fba5abbd8238759dd", "title": "Iterative multi - view plane fitting", "text": "We present a method for the reconstruction of 3D planes from calibrated 2D images. Given a set of pixels \u03a9 in a reference image, our method computes a plane which best approximates that part of the scene which has been projected to \u03a9 by exploiting additional views. Based on classical image alignment techniques we derive linear matching equations minimally parameterized by the three parameters of an object-space plane. The resulting iterative algorithm is highly robust because it is able to integrate over large image regions due to the correct object-space approximation and hence is not limited to comparing small image patches. Our method can be applied to a pair of stereo images but is also able to take advantage of the additional information provided by an arbitrary number of input images. A thorough experimental validation shows that these properties enable robust convergence especially under the influence of image sensor noise and camera calibration errors."}
{"_id": "df7ef1abf27970e3952f68f9130d8dccfcbd6841", "title": "Adaptive Convolutional Filter Generation for Natural Language Understanding", "text": "Convolutional neural networks (CNNs) have recently emerged as a popular building block for natural language processing (NLP). Despite their success, most existing CNN models employed in NLP are not expressive enough, in the sense that all input sentences share the same learned (and static) set of filters. Motivated by this problem, we propose an adaptive convolutional filter generation framework for natural language understanding, by leveraging a meta network to generate inputaware filters. We further generalize our framework to model question-answer sentence pairs and propose an adaptive question answering (AdaQA) model; a novel two-way feature abstraction mechanism is introduced to encapsulate co-dependent sentence representations. We investigate the effectiveness of our framework on document categorization and answer sentence-selection tasks, achieving state-of-the-art performance on several"}
{"_id": "b2e43d1b66cb0f618d113354dd6c29fa729282a6", "title": "Convolutional Neural Networks and Language Embeddings for End-to-End Dialect Recognition", "text": "\u2022 Fusion between end-to-end system and language embeddings shows better efficiency than between end-to-end system such as MFCC and FBANK \u2022 Spectrograms achieve slightly better results than MFCCs Motivation \u2022 One of the challenges of processing real-world spoken content, such as media broadcasts, is the potential presence of different dialects of a language in the material. \u2022 Dialect identification (DID) can be a useful capability to identify which dialect is being spoken during a recording. \u2022 The Arabic Multi-Genre Broadcast (MGB) Challenge tasks have provided a valuable resource for researchers interested in processing multi-dialectal Arabic speech. \u2022 Investigation of end-to-end DID approach with dataset augmentation for acoustic feature and language embeddings for linguistic feature"}
{"_id": "ec02fc82ba8e101701f59d1ca80a2cd299b07bad", "title": "Optic Disc Boundary and Vessel Origin Segmentation of Fundus Images", "text": "This paper presents a novel classification-based optic disc (OD) segmentation algorithm that detects the OD boundary and the location of vessel origin (VO) pixel. First, the green plane of each fundus image is resized and morphologically reconstructed using a circular structuring element. Bright regions are then extracted from the morphologically reconstructed image that lie in close vicinity of the major blood vessels. Next, the bright regions are classified as bright probable OD regions and non-OD regions using six region-based features and a Gaussian mixture model classifier. The classified bright probable OD region with maximum Vessel-Sum and Solidity is detected as the best candidate region for the OD. Other bright probable OD regions within 1-disc diameter from the centroid of the best candidate OD region are then detected as remaining candidate regions for the OD. A convex hull containing all the candidate OD regions is then estimated, and a best-fit ellipse across the convex hull becomes the segmented OD boundary. Finally, the centroid of major blood vessels within the segmented OD boundary is detected as the VO pixel location. The proposed algorithm has low computation time complexity and it is robust to variations in image illumination, imaging angles, and retinal abnormalities. This algorithm achieves 98.8%-100% OD segmentation success and OD segmentation overlap score in the range of 72%-84% on images from the six public datasets of DRIVE, DIARETDB1, DIARETDB0, CHASE_DB1, MESSIDOR, and STARE in less than 2.14 s per image. Thus, the proposed algorithm can be used for automated detection of retinal pathologies, such as glaucoma, diabetic retinopathy, and maculopathy."}
{"_id": "3e9cdb6d2509e6a5d376c7e0ef6c7b40903d3d97", "title": "Wireless Body Area Network (WBAN): A Survey on Reliability, Fault Tolerance, and Technologies Coexistence", "text": "Wireless Body Area Network (WBAN) has been a key element in e-health to monitor bodies. This technology enables new applications under the umbrella of different domains, including the medical field, the entertainment and ambient intelligence areas. This survey paper places substantial emphasis on the concept and key features of the WBAN technology. First, the WBAN concept is introduced and a review of key applications facilitated by this networking technology is provided. The study then explores a wide variety of communication standards and methods deployed in this technology. Due to the sensitivity and criticality of the data carried and handled by WBAN, fault tolerance is a critical issue and widely discussed in this paper. Hence, this survey investigates thoroughly the reliability and fault tolerance paradigms suggested for WBANs. Open research and challenging issues pertaining to fault tolerance, coexistence and interference management and power consumption are also discussed along with some suggested trends in these aspects."}
{"_id": "687b68910a974eb874e355fd2412f1bb4fbc6860", "title": "Insufficient effort responding: examining an insidious confound in survey data.", "text": "Insufficient effort responding (IER; Huang, Curran, Keeney, Poposki, & DeShon, 2012) to surveys has largely been assumed to be a source of random measurement error that attenuates associations between substantive measures. The current article, however, illustrates how and when the presence of IER can produce a systematic bias that inflates observed correlations between substantive measures. Noting that inattentive responses as a whole generally congregate around the midpoint of a Likert scale, we propose that Mattentive, defined as the mean score of attentive respondents on a substantive measure, will be negatively related to IER's confounding effect on substantive measures (i.e., correlations between IER and a given substantive measure will become less positive [or more negative] as Mattentive increases). Results from a personality questionnaire (Study 1) and a simulation (Study 2) consistently support the hypothesized confounding influence of IER. Using an employee sample (Study 3), we demonstrated how IER can confound bivariate relationships between substantive measures. Together, these studies indicate that IER can inflate the strength of observed relationships when scale means depart from the scale midpoints, resulting in an inflated Type I error rate. This challenges the traditional view that IER attenuates observed bivariate correlations. These findings highlight situations where IER may be a methodological nuisance, while underscoring the need for survey administrators and researchers to deter and detect IER in surveys. The current article serves as a wake-up call for researchers and practitioners to more closely examine IER in their data."}
{"_id": "76194dd34e3054fadcb4af6246b133f18924f419", "title": "Joint Factor Analysis Versus Eigenchannels in Speaker Recognition", "text": "We compare two approaches to the problem of session variability in Gaussian mixture model (GMM)-based speaker verification, eigenchannels, and joint factor analysis, on the National Institute of Standards and Technology (NIST) 2005 speaker recognition evaluation data. We show how the two approaches can be implemented using essentially the same software at all stages except for the enrollment of target speakers. We demonstrate the effectiveness of zt-norm score normalization and a new decision criterion for speaker recognition which can handle large numbers of t-norm speakers and large numbers of speaker factors at little computational cost. We found that factor analysis was far more effective than eigenchannel modeling. The best result we obtained was a detection cost of 0.016 on the core condition (all trials) of the evaluation"}
{"_id": "35ade5c894d5c859d72736f4e88124a3946235b6", "title": "Composite Behavioral Modeling for Identity Theft Detection in Online Social Networks", "text": "In this work, we aim at building a bridge from poor behavioral data to an effective, quick-response, and robust behavior model for online identity theft detection. We concentrate on this issue in online social networks (OSNs) where users usually have composite behavioral records, consisting of multi-dimensional low-quality data, e.g., offline check-ins and online user generated content (UGC). As an insightful result, we find that there is a complementary effect among different dimensions of records for modeling users\u2019 behavioral patterns. To deeply exploit such a complementary effect, we propose a joint model to capture both online and offline features of a user\u2019s composite behavior. We evaluate the proposed joint model by comparing with some typical models on two real-world datasets: Foursquare and Yelp. In the widely-used setting of theft simulation (simulating thefts via behavioral replacement), the experimental results show that our model outperforms the existing ones, with the AUC values 0.956 in Foursquare and 0.947 in Yelp, respectively. Particularly, the recall (True Positive Rate) can reach up to 65.3% in Foursquare and 72.2% in Yelp with the corresponding disturbance rate (False Positive Rate) below 1%. It is worth mentioning that these performances can be achieved by examining only one composite behavior (visiting a place and posting a tip online simultaneously) per authentication, which guarantees the low response latency of our method. This study would give the cybersecurity community new insights into whether and how a real-time online identity authentication can be improved via modeling users\u2019 composite behavioral patterns."}
{"_id": "08ab1c63e36ea9311443613f3bb8bb9b5362fd4f", "title": "Coherent Path Tracing", "text": "Packet tracing is a popular and efficient method for accelerating ray tracing. However, packet traversal techniques become inefficient when they are applied to path tracing since the secondary rays are incoherent. In this paper, we present a simple technique for improving the coherency of secondary rays. This technique uses the same sequence of random numbers for generating secondary rays for all the pixels in each sample. This improves the efficiency of the packet tracing algorithm, but creates structured noise patterns in the image. We propose an interleaved sampling scheme that reduces the correlation in the noise and makes it virtually imperceptible in the final image. Coherent path tracing is unbiased, simple to implement and outperforms standard path tracing with packet tracing, while producing images with similar RMS error values."}
{"_id": "d6847623d62f390f6843cf27868cfc077f5555e1", "title": "\u201cI am active\u201d: effects of a program to promote active aging", "text": "BACKGROUND\nActive aging involves a general lifestyle strategy that allows preservation of both physical and mental health during the aging process. \"I am Active\" is a program designed to promote active aging by increased physical activity, healthy nutritional habits, and cognitive functioning. The purpose of this study was to assess the effectiveness of this program.\n\n\nMETHODS\nSixty-four healthy adults aged 60 years or older were recruited from senior centers and randomly allocated to an experimental group (n=31) or a control group (n=33). Baseline, post-test, and 6-month follow-up assessments were performed after the theoretical-practical intervention. Effect sizes were calculated.\n\n\nRESULTS\nAt the conclusion of the program, the experimental group showed significant improvement compared with the control group in the following domains: physical activity (falls risk, balance, flexibility, self-efficacy), nutrition (self-efficacy and nutritional status), cognitive performance (processing speed and self-efficacy), and quality of life (general, health and functionality, social and economic status). Although some declines were reported, improvements at follow-up remained in self-efficacy for physical activity, self-efficacy for nutrition, and processing speed, and participants had better nutritional status and quality of life overall.\n\n\nCONCLUSION\nOur findings show that this program promotes improvements in domains of active aging, mainly in self-efficacy beliefs as well as in quality of life in healthy elders."}
{"_id": "857592f00360eba13d2a147ae38cb911a4047dac", "title": "Continuous Signed Distance Functions for 3D Vision", "text": "We explore the use of continuous signed distance functions as an object representation for 3D vision. Popularized in procedural computer graphics, this representation defines 3D objects as geometric primitives combined with constructive solid geometry and transformed by nonlinear deformations, scaling, rotation or translation. Unlike its discretized counterpart, that has become important in dense 3D reconstruction, the continuous distance function is not stored as a sampled volume, but as a closed mathematical expression. We argue that this representation can have several benefits for 3D vision, such as being able to describe many classes of indoor and outdoor objects at the order of hundreds of bytes per class, getting parametrized shape variations for free. As a distance function, the representation also has useful computational aspects by defining, at each point in space, the direction and distance to the nearest surface, and whether a point is inside or outside the surface."}
{"_id": "2254ae8c87b64d2b8609e5a860648542982029d0", "title": "The brief aggression questionnaire: psychometric and behavioral evidence for an efficient measure of trait aggression.", "text": "A key problem facing aggression research is how to measure individual differences in aggression accurately and efficiently without sacrificing reliability or validity. Researchers are increasingly demanding brief measures of aggression for use in applied settings, field studies, pretest screening, longitudinal, and daily diary studies. The authors selected the three highest loading items from each of the Aggression Questionnaire's (Buss & Perry, 1992) four subscales--Physical Aggression, Verbal Aggression, anger, and hostility--and developed an efficient 12-item measure of aggression--the Brief Aggression Questionnaire (BAQ). Across five studies (N\u2009=\u20093,996), the BAQ showed theoretically consistent patterns of convergent and discriminant validity with other self-report measures, consistent four-factor structures using factor analyses, adequate recovery of information using item response theory methods, stable test-retest reliability, and convergent validity with behavioral measures of aggression. The authors discuss the reliability, validity, and efficiency of the BAQ, along with its many potential applications."}
{"_id": "85a1ca73388f4a4d0f92d70ac8bc63af06b2f972", "title": "Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans.", "text": "Brain-computer interfaces (BCIs) can provide communication and control to people who are totally paralyzed. BCIs can use noninvasive or invasive methods for recording the brain signals that convey the user's commands. Whereas noninvasive BCIs are already in use for simple applications, it has been widely assumed that only invasive BCIs, which use electrodes implanted in the brain, can provide multidimensional movement control of a robotic arm or a neuroprosthesis. We now show that a noninvasive BCI that uses scalp-recorded electroencephalographic activity and an adaptive algorithm can provide humans, including people with spinal cord injuries, with multidimensional point-to-point movement control that falls within the range of that reported with invasive methods in monkeys. In movement time, precision, and accuracy, the results are comparable to those with invasive BCIs. The adaptive algorithm used in this noninvasive BCI identifies and focuses on the electroencephalographic features that the person is best able to control and encourages further improvement in that control. The results suggest that people with severe motor disabilities could use brain signals to operate a robotic arm or a neuroprosthesis without needing to have electrodes implanted in their brains."}
{"_id": "c19047f9c83a31e6d49dfebe1d59bece774d5eab", "title": "Investigating the functions of subregions within anterior hippocampus", "text": "Previous functional MRI (fMRI) studies have associated anterior hippocampus with imagining and recalling scenes, imagining the future, recalling autobiographical memories and visual scene perception. We have observed that this typically involves the medial rather than the lateral portion of the anterior hippocampus. Here, we investigated which specific structures of the hippocampus underpin this observation. We had participants imagine novel scenes during fMRI scanning, as well as recall previously learned scenes from two different time periods (one week and 30 min prior to scanning), with analogous single object conditions as baselines. Using an extended segmentation protocol focussing on anterior hippocampus, we first investigated which substructures of the hippocampus respond to scenes, and found both imagination and recall of scenes to be associated with activity in presubiculum/parasubiculum, a region associated with spatial representation in rodents. Next, we compared imagining novel scenes to recall from one week or 30 min before scanning. We expected a strong response to imagining novel scenes and 1-week recall, as both involve constructing scene representations from elements stored across cortex. By contrast, we expected a weaker response to 30-min recall, as representations of these scenes had already been constructed but not yet consolidated. Both imagination and 1-week recall of scenes engaged anterior hippocampal structures (anterior subiculum and uncus respectively), indicating possible roles in scene construction. By contrast, 30-min recall of scenes elicited significantly less activation of anterior hippocampus but did engage posterior CA3. Together, these results elucidate the functions of different parts of the anterior hippocampus, a key brain area about which little is definitely known."}
{"_id": "1abdc7a0494f103967a82a9471a7af42796a41dc", "title": "Modern Microwave Ferrites", "text": "Microwave ferrites are ubiquitous in systems that send, receive, and manipulate electromagnetic signals across very high frequency to quasi-optical frequency bands. In this paper, modern microwave ferrites are reviewed including spinel, garnet, and hexaferrite systems as thin and thick films, powders and compacts, and metamaterials. Their fundamental properties and utility are examined in the context of high frequency applications ranging from the VHF to millimeter-wave bands. Perspective and outlook of advances in theory, processing, and devices occurring in the science and engineering communities since the year 2000 are presented and discussed."}
{"_id": "09cca9d37140bae6c5a78b7c9ec112bd29ab0b3d", "title": "SmartDroid: an automatic system for revealing UI-based trigger conditions in android applications", "text": "User interface (UI) interactions are essential to Android applications, as many Activities require UI interactions to be triggered. This kind of UI interactions could also help malicious apps to hide their sensitive behaviors (e.g., sending SMS or getting the user's device ID) from being detected by dynamic analysis tools such as TaintDroid, because simply running the app, but without proper UI interactions, will not lead to the exposure of sensitive behaviors. In this paper we focus on the challenging task of triggering a certain behavior through automated UI interactions. In particular, we propose a hybrid static and dynamic analysis method to reveal UI-based trigger conditions in Android applications. Our method first uses static analysis to extract expected activity switch paths by analyzing both Activity and Function Call Graphs, and then uses dynamic analysis to traverse each UI elements and explore the UI interaction paths towards the sensitive APIs. We implement a prototype system SmartDroid and show that it can automatically and efficiently detect the UI-based trigger conditions required to expose the sensitive behavior of several Android malwares, which otherwise cannot be detected with existing techniques such as TaintDroid."}
{"_id": "79159935ffcfbd18d76fbdc9b108c896ee296716", "title": "Stimuli-responsive nanocarriers for drug delivery.", "text": "Spurred by recent progress in materials chemistry and drug delivery, stimuli-responsive devices that deliver a drug in spatial-, temporal- and dosage-controlled fashions have become possible. Implementation of such devices requires the use of biocompatible materials that are susceptible to a specific physical incitement or that, in response to a specific stimulus, undergo a protonation, a hydrolytic cleavage or a (supra)molecular conformational change. In this Review, we discuss recent advances in the design of nanoscale stimuli-responsive systems that are able to control drug biodistribution in response to specific stimuli, either exogenous (variations in temperature, magnetic field, ultrasound intensity, light or electric pulses) or endogenous (changes in pH, enzyme concentration or redox gradients)."}
{"_id": "b6d30456d9a1ee711f91581bf1f1b7f5d1e276e2", "title": "SCUC With Hourly Demand Response Considering Intertemporal Load Characteristics", "text": "In this paper, the hourly demand response (DR) is incorporated into security-constrained unit commitment (SCUC) for economic and security purposes. SCUC considers fixed and responsive loads. Unlike fixed hourly loads, responsive loads are modeled with their intertemporal characteristics. The responsive loads linked to hourly market prices can be curtailed or shifted to other operating hours. The study results show that DR could shave the peak load, reduce the system operating cost, reduce fuel consumptions and carbon footprints, and reduce the transmission congestion by reshaping the hourly load profile. Numerical simulations in this paper exhibit the effectiveness of the proposed approach."}
{"_id": "10a7b139435977094d230414372a82cdfec6d8db", "title": "A Data-Clustering Algorithm on Distributed Memory Multiprocessors", "text": "To cluster increasingly massive data sets that are common today in data and text mining, we propose a parallel implementation of the k-means clustering algorithm based on the message passing model. The proposed algorithm exploits the inherent data-parallelism in the kmeans algorithm. We analytically show that the speedup and the scaleup of our algorithm approach the optimal as the number of data points increases. We implemented our algorithm on an IBM POWERparallel SP2 with a maximum of 16 nodes. On typical test data sets, we observe nearly linear relative speedups, for example, 15.62 on 16 nodes, and essentially linear scaleup in the size of the data set and in the number of clusters desired. For a 2 gigabyte test data set, our implementation drives the 16 node SP2 at more than 1.8 gigaflops."}
{"_id": "36278bf6919c6dced7d16dc0c02d725e1ed178f8", "title": "New spectral methods for ratio cut partitioning and clustering", "text": "Partitioning of circuit netlists is important in many phases of VLSI design, ranging from layout to testing and hardware simulation. The ratio cut objective function [29] has received much attention since it naturally captures both mincut and equipartition, the two traditional goals of partitioning. In this paper, we show that the second smallest eigenvalue of a matrix derived from the netlist gives a provably good approximation of the optimal ratio cut partition cost. We also demonstrate that fast Lanczos-type methods for the sparse symmetric eigenvalue problem are a robust basis for computing heuristic ratio cuts based on the eigenvector of this second eigenvalue. Effective clustering methods are an immediate byproduct of the second eigenvector computation, and are very successful on the \u201cdifficult\u201d input classes proposed in the CAD literature. Finally, we discuss the very natural intersection graph representation of the circuit netlist as a basis for partitioning, and propose a heuristic based on spectral ratio cut partitioning of the netlist intersection graph. Our partitioning heuristics were tested on industry benchmark suites, and the results compare favorably with those of Wei and Cheng [29], 1321 in terms of both solution quality and runtime. This paper concludes by describing several types of algorithmic speedups and directiops for future work."}
{"_id": "62a4f42da8a6151fac124d2b40e28f41f8ba9eea", "title": "Ontologies Improve Text Document Clustering", "text": "Text document clustering plays an important role in providing intuitive navigation and browsing mechanisms by organizing large sets of documents into a small number of meaningful clusters. The bag of words representation used for these clustering methods is often unsatisfactory as it ignores relationships between important terms that do not cooccur literally. In order to deal with the problem, we integrate core ontologies as background knowledge into the process of clustering text documents. Our experimental evaluations compare clustering techniques based on precategorizations of texts from Reuters newsfeeds and on a smaller domain of an eLearning course about Java. In the experiments, improvements of results by background knowledge compared to a baseline without background knowledge can be shown in many interesting combinations."}
{"_id": "1525f96aac727000272ca034947aeb83af40d7c3", "title": "Introduction to the Special Issue on Word Sense Disambiguation: The State of the Art", "text": "ANIMATE, HUMAN, etc. and encode type restrictions on nouns and adjectives and on the arguments of verbs. Subject codes use another set of primitives to classify senses of words by subject (ECONOMICS, ENGINEERING, etc.). Guthrie et al. (1991) demonstrate a typical use of this information: in addition to using the Lesk-based method of counting overlaps between definitions and contexts, they impose a correspondence of subject codes in an iterative process. No quantitative evaluation of this method is available, but Cowie et al. (1992) improve the method using simulated annealing and report results of 47% for sense distinctions and 72% for homographs. The use of LDOCE box codes, however, is problematic: the codes are not systematic (see, for example, Fontenelle, 1990); in later work, Braden-Harder (1993) showed that simply matching box or subject codes is not sufficient for disambiguation. For example, in I tipped the driver, the codes for several senses of the words in the sentence satisfy the necessary constraints (e.g. tip-money + human object or tip-tilt + movable solid object). In many ways, the supple7 Note that the assumptions underlying this method are very similar to Quillian\u2019s: Thus one may think of a full concept analogically as consisting of all the information one would have if he looked up what will be called the \u201cpatriarch\u201d word in a dictionary, then looked up every word in each of its definitions, then looked up every word found in each of these, and so on, continually branching outward[...] (Quillian, 1968, p. 238). However, Quillian\u2019s network also keeps track of semantic relationships among the words encountered along the path between two words, which are encoded in his semantic network; the neural network avoids the overhead of creating the semantic network but loses this relational information. 13 mentary information in the LDOCE, and in particular the subject codes, are similar to those in a thesaurus, which, however, are more systematically structured. Inconsistencies in dictionaries, noted earlier, are not the only and perhaps not the major source of their limitations for WSD. While dictionaries provide detailed information at the lexical level, they lack pragmatic information that enters into sense determination (see, e.g., Hobbs, 1987). For example, the link between ash and tobacco, cigarette or tray in a network such as Quillian\u2019s is very indirect, whereas in the Brown Corpus, the word ash co-occurs frequently with one of these words. It is therefore not surprising that corpora have become a primary source of information for WSD; this development is outlined below in section 2.3. 1.3.2 Thesauri. Thesauri provide information about relationships among words, most notably synonymy. Roget's International Thesaurus, which was put into machine-tractable form in the 1950's8 and has been used in a variety of applications including machine translation (Masterman, 1957), information retrieval (Sparck Jones, 1964, 1986), and content analysis (Sedelow and Sedelow, 1969; see also Sedelow and Sedelow, 1986, 1992), also supplies an explicit concept hierarchy consisting of up to eight increasingly refined levels. 
Typically, each occurrence of the same word under different categories of the thesaurus represent different senses of that word; i.e., the categories correspond roughly to word senses (Yarowsky, 1992). A set of words in the same category are semantically related. The earliest known use of Roget\u2019s for WSD is the work of Masterman (1957), described above in section 2.1. Several years later, Patrick (1985) used Roget\u2019s to discriminate among verb senses, by examining semantic clusters formed by \u201ce-chains\u201d derived from the thesaurus (Bryan, 1973, 1974; see also Sedelow and Sedelow, 1986). He uses \u201cword-strong neighborhoods,\u201d comprising word groups in low-level semicolon groups, which are the most closely related semantically in the thesaurus, and words connected to the group via chains. He is able to discriminate the correct sense of verbs such as inspire (to raise the spirits vs. to inhale, breathe in, sniff, etc.), question (to doubt vs. to ask a question) with \u201chigh reliability.\u201d Bryan's earlier work had already demonstrated that homographs can be distinguished by applying a metric based on relationships defined by his chains (Bryan, 1973, 1974). Similar work is described in Sedelow and Mooney (1988). Yarowsky (1992) derives classes of words by starting with words in common categories in Roget's (4th ed.). A 100-word context of each word in the category is extracted from a corpus (the 1991 electronic text of Grolier's Ency8 The work of Masterman (1957) and Sparck Jones (1964) relied on a version of Roget\u2019s that was hand-punched onto cards in the 1950\u2019s; the Sedelow\u2019s (1969) work relied on a machine readable version of the 3rd Edition. Roget\u2019s is now widely available via anonymous ftp from various sites."}
{"_id": "1bdce9eaa77acbe547f7407a4ca376e1c9779d7a", "title": "A Comparison of Document Clustering Techniques", "text": "This paper presents the results of an experimental study of some common document clustering techniques. In particular, we compare the two main approaches to document clustering, agglomerative hierarchical clustering and K-means. (For K-means we used a \u201cstandard\u201d K-means algorithm and a variant of K-means, \u201cbisecting\u201d K-means.) Hierarchical clustering is often portrayed as the better quality clustering approach, but is limited because of its quadratic time complexity. In contrast, K-means and its variants have a time complexity which is linear in the number of documents, but are thought to produce inferior clusters. Sometimes K-means and agglomerative hierarchical approaches are combined so as to \u201cget the best of both worlds.\u201d However, our results indicate that the bisecting K-means technique is better than the standard K-means approach and as good or better than the hierarchical approaches that we tested for a variety of cluster evaluation metrics. We propose an explanation for these results that is based on an analysis of the specifics of the clustering algorithms and the nature of document data. 1 Background and Motivation Document clustering has been investigated for use in a number of different areas of text mining and information retrieval. Initially, document clustering was investigated for improving the precision or recall in information retrieval systems [Rij79, Kow97] and as an efficient way of finding the nearest neighbors of a document [BL85]. More recently, clustering has been proposed for use in browsing a collection of documents [CKPT92] or in organizing the results returned by a search engine in response to a user\u2019s query [ZEMK97]. Document clustering has also been used to automatically generate hierarchical clusters of documents [KS97]. (The automatic generation of a taxonomy of Web documents like that provided by Yahoo!"}
{"_id": "a427a81f3c5dab2e59cd9787bb858f7924f8f1f1", "title": "A Real Time Speech to Text Conversion Technique for Bengali Language", "text": "This paper presents a model to convert natural Bengali language to text. The proposed model requires the usage of the open sourced framework Sphinx 4 which is written in Java and provides the required procedural coding tools to develop an acoustic model for a custom language like Bengali. Our main objective was to ensure that the system was adequately trained on a word by word basis from various speakers so that it could recognize new speakers fluently. We used a free digital audio workstation (DAW) called Audacity to manipulate the collected recording data via continuous frequency profiling techniques to reduce the Signal-to-Noise-Ratio (SNR), vocal leveling, normalization and syllable splitting as well as merging which ensure an error free 1:1-word mapping of each utterance with its mirror transcription file text. To evaluate the performance of proposed model, we utilize an audio dataset of recorded speech data from 10 individual speakers consisting of both males and females using custom transcript files that we wrote. Experimental results demonstrate that the proposed model exhibits average 71.7% accuracy for our tested dataset."}
{"_id": "898d6ecfc547639011ce31e76c88fe69f5a544d1", "title": "Being Innovative About Service Innovation: Service, Design and Digitalization", "text": "Moving beyond questions about \u201cservices\u201d and \u201cthe services economy,\u201d this panel considers fresh ways of thinking about service innovation in the era of pervasive digitization. Panelists will argue that our understanding of digital services and products is radically transformed if we consider all exchanges to be service-for-service exchanges in which customers and suppliers co-create value in exchange networks. Innovation can then be understood as the continual process of breaking down knowledge (information) and reintegrating it to create new knowledge-based resources. Pervasive digitalization and generative digital platforms are revolutionizing service exchange possibilities. Value exchanges nonetheless occur within contexts that are material and social, tangible and tacit. The dynamics of these dimensions of service exchange challenge our concepts and methods for designing for service. Representing different approaches and disciplines, panelists will share their views on how the IS field might rethink service innovation, design and digitalization."}
{"_id": "6439b436b7ddb13bf87677df311d7a59ff135559", "title": "Ear Biometrics Based on Geometrical Feature Extraction", "text": "Biometrics identification methods proved to be very efficient, more natural and easy for users than traditional methods of human identification. In fact, only biometrics methods truly identify humans, not keys and cards they posses or passwords they should remember. The future of biometrics will surely lead to systems based on image analysis as the data acquisition is very simple and requires only cameras, scanners or sensors. More importantly such methods could be passive, which means that the user does not have to take active part in the whole process or, in fact, would not even know that the process of identification takes place. There are many possible data sources for human identification systems, but the physiological biometrics seem to have many advantages over methods based on human behaviour. The most interesting human anatomical parts for such passive, physiological biometrics systems based on images acquired from cameras are face and ear. Both of those methods contain large volume of unique features that allow to distinctively identify many users and will be surely implemented into efficient biometrics systems for many applications. The article introduces to ear biometrics and presents its advantages over face biometrics in passive human identification systems. Then the geometrical method of feature extraction from human ear images in order to perform human identification is presented."}
{"_id": "eca7e43b4baa8feccefe4d8d2b55cb3fae5f2287", "title": "Wavelet Neural Networks and their application in the study of dynamical systems", "text": "The main aim of this dissertation is to study the topic of wavelet neural networks and see how they are useful for dynamical systems applications such as predicting chaotic time series and nonlinear noise reduction. To do this, the theory of wavelets has been studied in the first chapter, with the emphasis being on discrete wavelets. The theory of neural networks and its current applications in the modelling of dynamical systems has been shown in the second chapter. This provides sufficient background theory to be able to model and study wavelet neural networks. In the final chapter a wavelet neural network is implemented and shown to accurately estimate the dynamics of a chaotic system, enabling prediction and enhancing methods already available in nonlinear noise reduction."}
{"_id": "09286005d0c0253995c970387dd222ae4acbc8f1", "title": "From Event Detection to Storytelling on Microblogs Janani Kalyanam", "text": "The problem of detecting events from content published on microblogs has garnered much interest in recent times. In this paper, we address the questions of what happens after the outbreak of an event in terms of how the event gradually progresses and attains each of its milestones, and how it eventually dissipates. We propose a model based approach to capture the gradual unfolding of an event over time. This enables the model to automatically produce entire timeline trajectories of events from the time of their outbreak to their disappearance. We apply our model on the Twitter messages collected about Ebola during the 2014 outbreak and obtain the progression timelines of several events that occurred during the outbreak. We also compare our model to several existing topic modeling and event detection baselines in literature to demonstrate its efficiency."}
{"_id": "631ef4eb378626f61b691086a6100d1e02f05dcc", "title": "Glaucoma Detection Using Dwt Based Energy Features and Ann Classifier", "text": "Glaucoma is a disease in which fluid pressure in the eye increases continously, and damage the optic nerve and leads to vision loss.It is the second leading cause of blindness. For identification of disease in human eyes we are using clinical decision support system which is based on retinal image analysis technique,that used to extract structure,contextual or texture features.Texture features within images which gives accurate and efficicent glaucoma classification.For finding this texture features we use energy distribution over wavelet subband.In this paper we focus on fourteen features which is obtained from daubecies(db3),symlets(sym3),and biorthogonal(bio3.3,bio3.5,and bio3.7).We propose a novel technique to extract this energy signatures using 2D wavelet transform and passed these signatures to different feature ranking and feature selection strategies.The energy obtained from detailed coefficent are used to classify normal and glaucomatous image with high accuracy. This will be classified using support vector machines, sequential minimal optimization, random forest, naive Bayes and artificial neural network.We observed an accuracy of 94% using the ANN classifier.Performance graph is shown for all classifiers.Finally the defected region founded by segmentation and this will be post processed by morphological processing technique for smoothing operation."}
{"_id": "8a8ccfb17f393972df62c140d8268c7dc03fe7ee", "title": "Biomedical Signal and Image Processing Spring 2008", "text": "Introduction In this chapter we will examine how we can generalize the idea of transforming a time series into an alternative representation, such as the Fourier (frequency) domain, to facilitate systematic methods of either removing (filtering) or adding (interpolating) data. In particular, we will examine the techniques of Principal Component Analysis (PCA) using Singular Value Decomposition (SVD), and Independent Component Analysis (ICA). Both of these techniques utilize a representation of the data in a statistical domain rather than a time or frequency domain. That is, the data are projected onto a new set of axes that fulfill some statistical criterion, which implies independence, rather than a set of axes that represent discrete frequencies such as with the Fourier transform, where the independence is assumed. Another important difference between these statistical techniques and Fourier-based techniques is that the Fourier components onto which a data segment is projected are fixed, whereas PCA-or ICA-based transformations depend on the structure of the data being analyzed. The axes onto which the data are projected are therefore discovered. If the structure of the data (or rather the statistics of the underlying sources) changes over time, then the axes onto which the data are projected will change too 1. Any projection onto another set of axes (or into another space) is essentially a method for separating the data out into separate components or sources which will hopefully allow us to see important structure more clearly in a particular projection. That is, the direction of projection increases the signal-to-noise ratio (SNR) for a particular signal source. For example, by calculating the power spectrum of a segment of data, we hope to see peaks at certain frequencies. The power (amplitude squared) along certain frequency vectors is therefore high, meaning we have a strong component in the signal at that frequency. By discarding the projections that correspond to the unwanted sources (such as the noise or artifact sources) and inverting the transformation, we effectively perform a filtering of the recorded observation. This is true for both ICA and PCA as well as Fourier-based techniques. However, one important difference between these techniques is that Fourier techniques assume that the projections onto each frequency component are independent of the other frequency components. In PCA and ICA we attempt to find a set of axes which are independent of one another in some sense. We assume there are a set of independent 1 \u2026"}
{"_id": "221ffeb8cea8b706fb44b3164e1aa22dd560141e", "title": "Face recognition using PCA and different distance classifiers", "text": "A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One of the way is to do this is by comparing selected facial features from the image and a facial database. It is typically used in security systems and can be compared to other biometrics such as fingerprint or eye iris recognition systems. In this paper we focus on 3-D facial recognition system and biometric facial recognition system. We do critics on facial recognition system giving effectiveness and weaknesses. This paper also introduces scope of recognition system in India. Face recognition has received substantial attention from researchers in biometrics, pattern recognition field and computer vision communities. Face recognition can be applied in Security measure at Air ports, Passport verification, Criminals list verification in police department, Visa processing, Verification of Electoral identification and Card Security measure at ATM\u2019s. Principal Component Analysis (PCA) is a technique among the most common feature extraction techniques used in Face Recognition. In this paper, a face recognition system for personal identification and verification using Principal Component Analysis with different distance classifiers is proposed. The test results in the ORL face database produces interesting results from the point of view of recognition success, rate, and robustness of the face recognition algorithm. Different classifiers were used to match the image of a person to a class (a subject) obtained from the training data. These classifiers are: the City-Block Distance Classifier, the Euclidian distance classifier, the Squared Euclidian Distance Classifier, and the Squared Chebyshev distance Classifier."}
{"_id": "03583e58838ca97cca1526082bd059080bfbcf16", "title": "Cancer genome landscapes.", "text": "Over the past decade, comprehensive sequencing efforts have revealed the genomic landscapes of common forms of human cancer. For most cancer types, this landscape consists of a small number of \"mountains\" (genes altered in a high percentage of tumors) and a much larger number of \"hills\" (genes altered infrequently). To date, these studies have revealed ~140 genes that, when altered by intragenic mutations, can promote or \"drive\" tumorigenesis. A typical tumor contains two to eight of these \"driver gene\" mutations; the remaining mutations are passengers that confer no selective growth advantage. Driver genes can be classified into 12 signaling pathways that regulate three core cellular processes: cell fate, cell survival, and genome maintenance. A better understanding of these pathways is one of the most pressing needs in basic cancer research. Even now, however, our knowledge of cancer genomes is sufficient to guide the development of more effective approaches for reducing cancer morbidity and mortality."}
{"_id": "cac200f811dffb2e007bcd271d9a6654d6490825", "title": "Types and tokens in visual processing: a double dissociation between the attentional blink and repetition blindness.", "text": "In rapid serial visual presentation tasks, correct identification of a target triggers a deficit for reporting a 2nd target appearing within 500 ms: an attentional blink (AB). A different phenomenon, termed repetition blindness (RB), refers to a deficit for the 2nd of 2 stimuli that are identical. What is the relationship between these 2 deficits? The present study obtained a double dissociation between AB and RB. AB and RB followed different time courses (Experiments 1 and 4A), increased target-distractor discriminability alleviated AB but not RB (Experiments 2 and 4A), and enhanced episodic distinctiveness of the two targets eliminated RB but not AB (Experiments 3 and 4B). The implications of the double dissociation between AB and RB for theories of visual processing are discussed."}
{"_id": "bbad8eb4fd80411f99b08a5b2aa11eaed6b6f51a", "title": "A Compact, Wideband Circularly Polarized Co-designed Filtering Antenna and Its Application for Wearable Devices With Low SAR", "text": "A compact circularly polarized (CP) co-designed filtering antenna is reported. The device is based on a patch radiator seamlessly integrated with a bandpass filter composed of coupled stripline open-loop resonators, which are designed together as a system. In the proposed design, the patch functions simultaneously as the radiator and the last stage resonator of the filter, resulting in a low-profile integrated radiating and filtering module with a small overall form factor of 0.53\u03bb0 \u00d7 0.53\u03bb0 \u00d7 0.07\u03bb0. It is shown that the filtering circuit not only ensures frequency selectivity but also provides impedance matching functionality, which serves to broaden both the impedance and axial ratio bandwidths. The designed filtering antenna was fabricated and measured, experimentally achieving an S11 <; -13.5 dB, an axial ratio of less than 3 dB and a gain higher than 5.2 dBi over a bandwidth from 3.77 to 4.26 GHz, i.e., around 12.2%, which makes it an excellent candidate for integration into a variety of wireless systems. A linearly polarized version of the integrated filtering antenna was also demonstrated. In addition, further full-wave simulations and experiments were carried out to verify that the designed CP filtering antenna maintains its properties even when mounted on different positions of the human body with various body gestures. The stable impedance and radiation properties also make it a suitable candidate as a wearable antenna for off-body wireless communications."}
{"_id": "241a013f74c0493b5f54cb3a7b8a0e5e0f12a78f", "title": "Instance Tumor Segmentation using Multitask Convolutional Neural Network", "text": "Automatic tumor segmentation is an important and challenging clinical task because tumors have different sizes, shapes, contrasts, and locations. In this paper, we present an automatic instance semantic segmentation method based on deep neural networks (DNNs). The proposed networks are tailored to picture tumors in magnetic resonance imaging (MRI) and computed tomography (CT) images. We present an end-to-end multitask learning architecture comprising three stages, namely detection, segmentation, and classification. This paper introduces a new technique for tumor detection based on high-level extracted features from convolutional neural networks (CNNs) using the Hough transform technique. The detected tumor(s), are segmented with a set of fully connected (FC) layers, and the segmented mask is classified through FCs. The proposed architecture gives promising results on the popular medical image benchmarks. Our framework is generalized in the sense that it can be used in different types of medical images in varied sizes, such as the Liver Tumor Segmentation (LiTS-2017) challenge, and the Brain Tumor Segmentation (BraTS-2016) benchmark."}
{"_id": "7ca3f9400049a860ded736840e345da3cd8ce150", "title": "A Distributed Multi Agents Based Platform for High Performance Computing Infrastructures", "text": "This work introduces a novel, modular, layered web based platform for managing machine learning experiments on grid-based High Performance Computing infrastructures. The coupling of the communication services offered by the grid, with an administration layer and conventional web server programming, via a data synchronization utility, leads to the straightforward development of a web-based user interface that allows the monitoring and managing of diverse online distributed computing applications. It also introduces an experiment generation and monitoring tool particularly suitable for investigating machine learning in game playing. The platform is demonstrated with experiments for two different games."}
{"_id": "af50e745422eda4f880943c614fd17f2428061ba", "title": "Mining Patients' Narratives in Social Media for Pharmacovigilance: Adverse Effects and Misuse of Methylphenidate", "text": "Background: The Food and Drug Administration (FDA) in the United States and the European Medicines Agency (EMA) have recognized social media as a new data source to strengthen their activities regarding drug safety. Objective: Our objective in the ADR-PRISM project was to provide text mining and visualization tools to explore a corpus of posts extracted from social media. We evaluated this approach on a corpus of 21 million posts from five patient forums, and conducted a qualitative analysis of the data available on methylphenidate in this corpus. Methods: We applied text mining methods based on named entity recognition and relation extraction in the corpus, followed by signal detection using proportional reporting ratio (PRR). We also used topic modeling based on the Correlated Topic Model to obtain the list of the matics in the corpus and classify the messages based on their topics. Results: We automatically identified 3443 posts about methylphenidate published between 2007 and 2016, among which 61 adverse drug reactions (ADR) were automatically detected. Two pharmacovigilance experts evaluated manually the quality of automatic identification, and a f-measure of 0.57 was reached. Patient's reports were mainly neuro-psychiatric effects. Applying PRR, 67% of the ADRs were signals, including most of the neuro-psychiatric symptoms but also palpitations. Topic modeling showed that the most represented topics were related to Childhood and Treatment initiation, but also Side effects. Cases of misuse were also identified in this corpus, including recreational use and abuse. Conclusion: Named entity recognition combined with signal detection and topic modeling have demonstrated their complementarity in mining social media data. An in-depth analysis focused on methylphenidate showed that this approach was able to detect potential signals and to provide better understanding of patients' behaviors regarding drugs, including misuse."}
{"_id": "2cc8d5f093697e191eadb9d83f3023afee00666c", "title": "Structural analysis of cooking preparation steps in Japanese", "text": "We propose a method to create process flow graphs automatically from textbooks for cooking programs. This is realized by understanding context by narrowing down the domain to cooking, and making use of domain specific constraints and knowledge. Since it is relatively easy to extract significant keywords from cooking procedures, we create a domain specific dictionary by statistical methods, and propose a structural analysis method using the dictionary. In order to evaluate the ability of the proposed method, we applied the method to actual procedures as an experiment, which showed effective results. The same experiment was also performed on a different program, which showed lower accuracy but also showed realistic results."}
{"_id": "80b9df54d0c2a56325186ca88e01ba830ec93b74", "title": "Mesh repair with user-friendly topology control", "text": "Limitations of current 3D acquisition technology often lead to polygonal meshes exhibiting a number of geometrical and topological defects which prevent them from widespread use. In this paper we present a new method for model repair which takes as input an arbitrary polygonal mesh and outputs a valid two-manifold triangle mesh. Unlike previous work, our method allows users to quickly identify areas with potential topological errors and to choose how to fix them in a userfriendly manner. Key steps of our algorithm include the conversion of the input model into a set of voxels, the use of morphological operators to allow the user to modify the topology of the discrete model, and the conversion of the corrected voxel set back into a two-manifold triangle mesh. Our experiments demonstrate that the proposed algorithm is suitable for repairing meshes of a large class of shapes."}
{"_id": "d2c2a2fe61dd1477c5cff03582f1521831586f4e", "title": "OnToKnowledge: Ontology-based Tools for Knowledge Management.", "text": "Dieter Fensel, Frank van Harmelen, Michel Klein, Hans Akkermans Free University Amsterdam VUA, Division of Mathematics and Informatics De Boelelaan 1081a, NL-1081 HV Amsterdam, The Netherlands http://www.ontoknowledge.org ; contact: dieter@cs.vu.nl and Jeen Broekstra, Christiaan Fluit, Jos van der Meer, AIdministrator, The Netherlands Hans-Peter Schnurr, Rudi Studer, AIFB, University of Karlsruhe, Germany John Hughes, Uwe Krohn, John Davies, BT, Ipswich, UK Robert Engels, Bernt Bremdal, CognIT, Oslo, Norway Fredrik Ygge, Enersearch AB, Gothenburg, Sweden Thorsten Lau, Bernd Novotny, Ulrich Reimer, Swiss Life, Z\u00fcrich, Switzerland Ian Horrocks, University of Manchester, UK"}
{"_id": "2d33c06612b8e85ba1b36cb42ef7d26ca22091f3", "title": "Facial age estimation using BSIF and LBP", "text": "Human face aging is irreversible process causing changes in human face characteristics such us hair whitening, muscles drop and wrinkles. Due to the importance of human face aging in biometrics systems, age estimation became an attractive area for researchers. This paper presents a novel method to estimate the age from face images, using binarized statistical image features (BSIF) and local binary patterns (LBP) histograms as features performed by support vector regression (SVR) and kernel ridge regression (KRR). We applied our method on FG-NET and PAL datasets. Our proposed method has shown superiority to that of the state-of-the-art methods when using the whole PAL database."}
{"_id": "31d15e69389efeda5e21adfa888dffa8018523c0", "title": "Handbook of Applied Cryptography", "text": ""}
{"_id": "b05da56351393b3069082f59d03085a8066b1bdc", "title": "Word2vec applied to recommendation: hyperparameters matter", "text": "Skip-gram with negative sampling, a popular variant of Word2vec originally designed and tuned to create word embeddings for Natural Language Processing, has been used to create item embeddings with successful applications in recommendation. While these fields do not share the same type of data, neither evaluate on the same tasks, recommendation applications tend to use the same already tuned hyperparameters values, even if optimal hyperparameters values are often known to be data and task dependent. We thus investigate the marginal importance of each hyperparameter in a recommendation setting through large hyperparameter grid searches on various datasets. Results reveal that optimizing neglected hyperparameters, namely negative sampling distribution, number of epochs, subsampling parameter and window-size, significantly improves performance on a recommendation task, and can increase it by an order of magnitude. Importantly, we find that optimal hyper-parameters configurations for Natural Language Processing tasks and Recommendation tasks are noticeably different."}
{"_id": "08f8bf41afaf29ae794d48fe7b715e9f979d9e44", "title": "A single-layer SIW slot array antenna with TE20 mode", "text": "The idea of using higher order modes is demonstrated by an SIW slot array antenna with a TE20 mode to realize boresight radiation. In order to excite the TE20 mode, an SIW hybrid coupler and a compact SIW 90\u00b0 phase shifter are designed to offer the two out-of-phase outputs with identical amplitude. Besides, a CPW-SIW transition is adopted for measurement. The SIW slot array antenna with the TE20 mode is designed at X-band in a single-layer PCB. The measured bandwidth is 5.5% (9.72\u201310.27 GHz). The measured total efficiency is 80% at 10 GHz. Compared to a conventional SIW slot array just with a power divider, using the TE20 mode provides the merits of not only reducing the number of vias but also relaxing fabrication limit and tolerance which could be the fabrication bottleneck at high frequencies."}
{"_id": "be6756f930b42345e84a79d365db498a825bb4dd", "title": "TagAssist: Automatic Tag Suggestion for Blog Posts", "text": "In this paper, we describe a system called TagAssist that provides tag suggestions for new blog posts by utilizing existing tagged posts. The system is able to increase the quality of suggested tags by performing lossless compression over existing tag data. In addition, the system employs a set of metrics to evaluate the quality of a potential tag suggestion. Coupled with the ability for users to manually add tags, TagAssist can ease the burden of tagging and increase the utility of retrieval and browsing systems built on top of tagging data."}
{"_id": "b16147d467777cb59732932a168e6d63ef2ec7e0", "title": "Storytelling by the StoryCake visualization", "text": "On the issue of how to explore the complex relationships of entities in a story, Storyline visualization has proven to be a useful approach. The traditional Storyline visualization technology only applies to a linear continuous story and has the limitation of reflecting the development of the plots in a story. In this paper, we propose a hierarchical plot visualization method called StoryCake to improve this problem. First, according to the story elements, inter-sessions are divided into groups, whereby the layout of the entities will be optimized in individual groups. Then, the hierarchical relationships of entities in polar coordinates can be confirmed. Finally, we use several skills, e.g., interaction, labels and a fan-shaped visualization view, to enable people better understand how story evolves and track the hierarchical relationship more conveniently. Experimental results show that the proposed hierarchical plot visualization method in polar coordinates can reveal discontinuous events and nonlinear stories. The results also show that our algorithm presents the narrative structure and development of the story."}
{"_id": "f71e0680ba42ed0da4773eb7b6e5d8b35bb92b79", "title": "The self to other model of empathy: Providing a new framework for understanding empathy impairments in psychopathy, autism, and alexithymia", "text": "Despite increasing empirical and theoretical work on empathy, particularly on the content of empathic representations, there is a relative lack of consensus regarding the information processing necessary for empathy to occur. Here we attempt to delineate a mechanistic cognitive model of empathy in order to provide a framework within which neuroimaging work on empathy can be located, and which may be used in order to understand various disorders characterised by atypical levels of empathy. To this end data from individuals with psychopathy, autism, and alexithymia inform the model, and the model is used to provide a unifying framework for any empathy impairments seen in these disorders. The model adopts a developmental framework and tries to address the four difficult questions of empathy: How do we know what another is feeling? What is the role of theory of mind in empathy? How does the state of another cause a corresponding state in the self? How do we represent another's emotion once emotional contagion has taken place?"}
{"_id": "b8e9ba3be98c973b0a8fa7be94b8ecd8aaeef6ca", "title": "Rectification of QR-code images using the parametric cylindrical surface model", "text": "In recent years, QR codes have found a wide range of applications in business and industry. Moreover, their popularity and function are still growing (fast). Therefore, the processing of the images of QR codes is an important task. In this paper, we try to solve a problem encountered in QR decoding when the QR code is printed near the binding edge inside a book or on the surface of a cylindrical container. When a picture is taken on such a QR code, the picture is no longer in a rectangular shape, which is supposedly the \u201cright\u201d shape that any QR code should take, due to warping from the image capturing mechanism by the camera. We refer to such warping as cylinder-to-screen (C2S) warping. This warping translates into serious distortion of the original QR-code image (also referred to as QR image, for short) and thus causes failure in its decoding. In this paper, a mathematical model called the parametric cylindrical surface model (PCSM) is proposed. Then, based on it, a scheme is proposed to rectify C2S-warped QR images. Experiments show that the proposed scheme is effective, in the sense that many failed QR decodings become successful after the rectification."}
{"_id": "ba3ce400fb3f1cf87e33dbc10b685665eea1edb9", "title": "Image Segmentation Techniques Overview", "text": "The technology of image segmentation is widely used in medical image processing, face recognition pedestrian detection, etc. The current image segmentation techniques include region-based segmentation, edge detection segmentation, segmentation based on clustering, segmentation based on weakly-supervised learning in CNN, etc. This paper analyzes and summarizes these algorithms of image segmentation, and compares the advantages and disadvantages of different algorithms. Finally, we make a prediction of the development trend of image segmentation with the combination of these algorithms."}
{"_id": "0dc6960a406b6040c8eb9645a89e39bf34226b3e", "title": "Biometrics from Brain Electrical Activity: A Machine Learning Approach", "text": "The potential of brain electrical activity generated as a response to a visual stimulus is examined in the context of the identification of individuals. Specifically, a framework for the visual evoked potential (VEP)-based biometrics is established, whereby energy features of the gamma band within VEP signals were of particular interest. A rigorous analysis is conducted which unifies and extends results from our previous studies, in particular, with respect to 1) increased bandwidth, 2) spatial averaging, 3) more robust power spectrum features, and 4) improved classification accuracy. Simulation results on a large group of subject support the analysis"}
{"_id": "1033681364c7fb7017237748c23f29237d23202f", "title": "EARTh: An Environmental Application Reference Thesaurus in the Linked Open Data cloud", "text": "The paper aims at providing a description of EARTh, the Environmental Application Reference Thesaurus. It represents a general-purpose thesaurus for the environment, which has been published as a SKOS dataset in the Linked Open Data cloud. It promises to become a core tool for indexing and discovery environmental resources by refining and extending GEMET, which is considered the de facto standard when speaking of general-purpose thesaurus for the environment in Europe, besides it has been interlinked to popular LOD datasets as AGROVOC, EUROVOC, DBPEDIA and UMTHES. The paper illustrates the main characteristics of EARTh as a guide to its usage. It clarifies (i) the methodology adopted to define the EARTh content; (ii) the design and technological choices made when publishing EARTh as Linked Data; (iii) the information pertaining to its access and maintenance. Descriptions of EARTh applications and future relevance are highlighted. CNR-IIA-EKOLab Environmental Knowledge Organization Laboratory, Area della Ricerca di Roma 1, Via Salaria Km 29,300 C.P. 10,I-00016 Monterotondo stazione RM, {difranco, vds, plini} @iia.cnr.it"}
{"_id": "3fa712a641a80e9f5438bb8f1ca0093159689da0", "title": "Nurse Managers\u2019 Emotional Intelligence and Effective Leadership: A Review of the Current Evidence", "text": "Background\nEmotional Intelligence has made a significant contribution to effective leadership, becoming one of the key characteristics of leaders.Objective: The aim of the present study was to review qualitative and quantitative studies concerning Emotional Intelligence of nurse leaders and the evidence-based composition of their results.\n\n\nMethod\nA search was performed in the electronic databases (Pubmed, Scopus and CINAHL) for articles, which were published in the period 2000-2017 in English or Greek. Eleven studies met the inclusion criteria, of which 10 were quantitative and one was qualitative.\n\n\nResults\nThe results suggested that Emotional Intelligence is a useful tool for nurse leaders and contributes decisively to the achievement of effective management in healthcare.\n\n\nConclusion\nIt is necessary for nurses to improve their social and emotional skills because of the particular nature of the nursing profession, which places the healthy or weak person at its center."}
{"_id": "4877d6ba4e676c5193f8f175f1c71fbf6358f88c", "title": "ProgTest: An environment for the submission and evaluation of programming assignments based on testing activities", "text": "Programming foundations is not an easy subject to be taught \u2014 many students have difficulties understanding the abstract concepts of programming and have a wrong view about the programming activity. In order to address these problems, experiences have suggested the integrated teaching of programming concepts and software testing in introductory CS courses. Shortly, the idea is that testing can contribute to enhance the students' capabilities of understanding and analysis. However, such perspective requires tools to provide an adequate feedback to evaluate the students' performance concerning programming and testing activities. In this paper we describe ProgTest \u2014 a web-based tool for the submission and automatic evaluation of practical programming assignments based on testing activities. Results from a preliminary validation of ProgTest are also presented. Such results provide evidences on the practical use of ProgTest as a supporting mechanism for the integrated teaching of programming foundations and software testing."}
{"_id": "5eb14aca4a0a1a68960bc8d59801ed76a82d84ad", "title": "Building a HighLevel Dataflow System on top of MapReduce: The Pig Experience", "text": "Increasingly, organizations capture, transform and analyze enormous data sets. Prominent examples include internet companies and e-science. The Map-Reduce scalable dataflow paradigm has become popular for these applications. Its simple, explicit dataflow programming model is favored by some over the traditional high-level declarative approach: SQL. On the other hand, the extreme simplicity of MapReduce leads to much low-level hacking to deal with the many-step, branching dataflows that arise in practice. Moreover, users must repeatedly code standard operations such as join by hand. These practices waste time, introduce bugs, harm readability, and impede optimizations. Pig is a high-level dataflow system that aims at a sweet spot between SQL and Map-Reduce. Pig offers SQL-style high-level data manipulation constructs, which can be assembled in an explicit dataflow and interleaved with custom Mapand Reduce-style functions or executables. Pig programs are compiled into sequences of Map-Reduce jobs, and executed in the Hadoop Map-Reduce environment. Both Pig and Hadoop are open-source projects administered by the Apache Software Foundation. This paper describes the challenges we faced in developing Pig, and reports performance comparisons between Pig execution and raw Map-Reduce execution."}
{"_id": "492ff2e9cab852a8fc46225f2e764007f8e5cb72", "title": "Matroids and the greedy algorithm", "text": "(0) Many discrete programming algorithms are being proposed. One thing most of them have in common is that they do not work very well. Solving problems that are a priori finite, astronomically finite, is preferably a matter of finding algorithms that are s o m e h o w better than finite. These considerations prompt looking for good algorithms and trying to understand how and why at least a few combinatorial problems have them. (1) Let H be a finite (for convenience) set of real-valued vectors , x = [xj] , ] ~ E. Often all the members of H will be integer-valued; often they will all be {0, 1}-valued. The index-set, E, is any finite set of elements. Let c = [cj], ] ~ E, be any real vector on E, called the objective or weighting of E. The problem of finding a member o f H which maximizes (or minimizes) cx = iN cjx/, j ~ E, we call a loco problem or loco programming. \"Loco\" stands for \"linear-objective combinatorial\". (2) In order for a loco problem to be a completely defined problem, the way H is given must of course be specified. There are various ways to describe implicitly very large sets H so tha t it is relatively easy to determine whether or not any particular vector is a member of H. One well-known:way is in linear programming, where H is the set of extreme points of the solution-set of a given finite system, L, of linear equations and <_ type linear inequalities in the variables xj (briefly, a"}
{"_id": "b0c3d601b6c7ab591c0141fa0b1f61b56a56f31b", "title": "Bitcoin Network Measurements for Simulation Validation and Parameterization", "text": "Bitcoin is gaining increasing popularity nowadays, even though the crypto-currencies field has plenty of digital currencies that have emerged before the adoption of Bitcoin idea. Bitcoin is a decentralized digital currency which relies on set of miners to maintain a distributed public ledger and peer-to-peer network to broadcast transactions. In this paper, we analyse how transaction validation is achieved by the transaction propagation round trip and how transaction dissemination throughout the network can lead to inconsistencies in the view of the current transactions ledger by different nodes. We then measure the transaction propagation delay in the real Bitcoin network and how it is affected by the number of nodes and network topology. This measurement enables a precise validation of any simulation model of the Bitcoin network. Large-scale measurements of the real Bitcoin network are performed in this paper. This will provide an opportunity to parameterise any model of the Bitcoin"}
{"_id": "8819363f462d0cf57ab837beebcc0973d6747d84", "title": "Haptic jamming: A deformable geometry, variable stiffness tactile display using pneumatics and particle jamming", "text": "Many controllable tactile displays present the user with either variable mechanical properties or adjustable surface geometries, but controlling both simultaneously is challenging due to electromechanical complexity and the size/weight constraints of haptic applications. This paper discusses the design, manufacturing, control, and preliminary evaluation of a novel haptic display that achieves both variable stiffness and deformable geometry via air pressure and a technique called particle jamming. The surface of the device consists of a flat, deformable layer of hollow silicone cells filled with coffee grounds. It selectively solidifies in different regions when the air is vacuumed out of individual cells, jamming the coffee particles together. The silicone layer is clamped over a chamber with regulated air pressure. Different sequences of air pressure and vacuum level adjustment allow regions of the surface to display a small rigid lump, a large soft plane, and various other combinations of lump size and stiffness. Experimental data from individual cells show that surface stiffness increases with vacuum level and the elliptical shape of the cells become increasingly spherical with increased chamber pressure."}
{"_id": "8d754fa6efd85175ee28fa6a9c1a7dc565a6a5b2", "title": "Design of a Dual-Band High-Gain Antenna Array for WLAN and WiMAX Base Station", "text": "A dual-band 3 \u00d72 antenna array with high gain is presented in this letter. A dual-band folded dipole antenna is utilized as the antenna array element. A broadband symmetric feed network is designed to maintain each array element with the identical current magnitude and phase. The antenna array has a low profile of 12 mm (about 0.1\u03bbL; \u03bbL is the wavelength in free space at the lowest operating frequency). Dual bands 2.3 ~ 2.56 and 3 ~ 4.2 GHz for return loss (RL) being higher than 10 dB can be obtained in simulation, while two 10-dB RL bands 2.31 ~ 2.51 and 3.1 ~ 4.15 GHz in measurement cover the wireless local area network (WLAN, 2.4 ~ 2.48 GHz) and Worldwide Interoperability for Microwave Access (WiMAX, 3.3 ~ 3.8 GHz). The measured results also show that this planar antenna array has a high gain 14.3 ~ 15.1 dBi (2.31 ~ 2.51 GHz), 16.2 ~ 17.3 dBi ( 3.1 ~ 4.15 GHz) in + z-direction and a low cross-polarization level <; - 26 dB."}
{"_id": "796d7d155213d1f8ae43f1f24460703c19dd06cd", "title": "Systematic review of the research evidence examining the effectiveness of interventions using a sensory integrative approach for children.", "text": "Twenty-seven studies were systematically reviewed to identify, evaluate, and synthesize the research literature on the effectiveness of sensory integration (SI) intervention on the ability of children with difficulty processing and integrating sensory information to engage in desired occupations and to apply these findings to occupational therapy practice. Results suggest the SI approach may result in positive outcomes in sensorimotor skills and motor planning; socialization, attention, and behavioral regulation; reading-related skills; participation in active play; and achievement of individualized goals. Gross motor skills, self-esteem, and reading gains may be sustained from 3 mo to 2 yr. Findings may be limited by Type II error because of small sample sizes, variable intervention dosage, lack of fidelity to intervention, and selection of outcomes that may not be meaningful to clients and families or may not change with amount of treatment provided. Replication of findings with methodologically and theoretically sound studies is needed to support current findings."}
{"_id": "692b4290109b5c76d7fa68cc56d8c928400f60a4", "title": "Speed measurement algorithm for low speed permanent magnet synchronous motor based on overlapped measurement regions", "text": "In order to improve the speed measurement performance of low speed PMSM drives equipped with absolute optical encoders, this paper proposes a speed measurement method based on overlapped measurement regions. The proposed method replaces the long measurement region of the conventional backward differentiation method with several short ones which are overlapped at most regions. In one speed measurement process, the results obtained from the short regions are averaged to improve precision. Then the relationship between precision, time delay and region length, overlap times, are analyzed using expectation and variance as performance index. Based on the analysis, an optimal design method of the measurement region is derived. Experimental researches are carried out on a low speed permanent magnet synchronous motor equipped with a 13 bit absolute encoder. The results prove the validity and effectiveness of the proposed method."}
{"_id": "c2eaffc617c57ab5106ba3bc95b7b910be473389", "title": "Sensors and systems for fruit detection and localization: A review", "text": "This paper reviews the research and development of machine vision systems for fruit detection and localization for robotic harvesting and/or crop-load estimation of specialty tree crops including apples, pears, and citrus. Variable lighting condition, occlusions, and clustering are some of the important issues needed to be addressed for accurate detection and localization of fruit in orchard environment. To address these issues, various techniques have been investigated using different types of sensors and their combinations as well as with different image processing techniques. This paper summarizes various techniques and their advantages and disadvantages in detecting fruit in plant or tree canopies. The paper also summarizes the sensors and systems developed and used by researchers to localize fruit as well as the potential and limitations of those systems. Finally, major challenges for the successful application of machine vision system for robotic fruit harvesting and crop-load estimation, and potential future directions for research and development are discussed. 2015 Elsevier B.V. All rights reserved."}
{"_id": "b1f6efbccf85bd489e68e13ad7e0d4d21656cdfb", "title": "Change impact analysis for Natural Language requirements: An NLP approach", "text": "Requirements are subject to frequent changes as a way to ensure that they reflect the current best understanding of a system, and to respond to factors such as new and evolving needs. Changing one requirement in a requirements specification may warrant further changes to the specification, so that the overall correctness and consistency of the specification can be maintained. A manual analysis of how a change to one requirement impacts other requirements is time-consuming and presents a challenge for large requirements specifications. We propose an approach based on Natural Language Processing (NLP) for analyzing the impact of change in Natural Language (NL) requirements. Our focus on NL requirements is motivated by the prevalent use of these requirements, particularly in industry. Our approach automatically detects and takes into account the phrasal structure of requirements statements. We argue about the importance of capturing the conditions under which change should propagate to enable more accurate change impact analysis. We propose a quantitative measure for calculating how likely a requirements statement is to be impacted by a change under given conditions. We conduct an evaluation of our approach by applying it to 14 change scenarios from two industrial case studies."}
{"_id": "c7490e5b808cc62e4ac39db5cecb9938278ddfba", "title": "Generalized Haar Filter based Deep Networks for Real-Time Object Detection in Traffic Scene", "text": "Vision-based object detection is one of the fundamental functions in numerous traffic scene applications such as self-driving vehicle systems and advance driver assistance systems (ADAS). However, it is also a challenging task due to the diversity of traffic scene and the storage, power and computing source limitations of the platforms for traffic scene applications. This paper presents a generalized Haar filter based deep network which is suitable for the object detection tasks in traffic scene. In this approach, we first decompose a object detection task into several easier local regression tasks. Then, we handle the local regression tasks by using several tiny deep networks which simultaneously output the bounding boxes, categories and confidence scores of detected objects. To reduce the consumption of storage and computing resources, the weights of the deep networks are constrained to the form of generalized Haar filter in training phase. Additionally, we introduce the strategy of sparse windows generation to improve the efficiency of the algorithm. Finally, we perform several experiments to validate the performance of our proposed approach. Experimental results demonstrate that the proposed approach is both efficient and effective in traffic scene compared with the state-of-the-art."}
{"_id": "36a675d51e905348967d2834211696564cf21d37", "title": "Classification of Motor Imagery BCI Using Multivariate Empirical Mode Decomposition", "text": "Brain electrical activity recorded via electroencephalogram (EEG) is the most convenient means for brain-computer interface (BCI), and is notoriously noisy. The information of interest is located in well defined frequency bands, and a number of standard frequency estimation algorithms have been used for feature extraction. To deal with data nonstationarity, low signal-to-noise ratio, and closely spaced frequency bands of interest, we investigate the effectiveness of recently introduced multivariate extensions of empirical mode decomposition (MEMD) in motor imagery BCI. We show that direct multichannel processing via MEMD allows for enhanced localization of the frequency information in EEG, and, in particular, its noise-assisted mode of operation (NA-MEMD) provides a highly localized time-frequency representation. Comparative analysis with other state of the art methods on both synthetic benchmark examples and a well established BCI motor imagery dataset support the analysis."}
{"_id": "d64cbe387755336e890842d1abac7899db24626c", "title": "Source Code Metrics for Programmable Logic Controller (PLC) Ladder Diagram (LD) Visual Programming Language", "text": "The IEC 611131-3, an open international standard for Programmable Logic Controllers (PLC) defines several domain specific languages for industrial and process automation. Domain specific languages have several features, programming constructs and notations which are different than general purpose languages and hence the traditional source code metrics for general purpose programming languages cannot be directly applied to domain specific languages for writing control programs in the area of industrial automation. We propose source code level metrics to measure size, vocabulary, cognitive complexity and testing complexity of a visual Programmable Logic Controller (PLC) programming language. We present metrics for Ladder Diagram (LD) programming language which is one of the five languages defined in the IEC 61131-3 standard. The proposed metric is based on factors like number of distinct operators and operands, total number of operators and operands, number of rungs, weights of different basic control structures, structure of the program and control flow. We apply Weyukur's property to validate the metrics and evaluate the number of properties satisfied by the proposed metric."}
{"_id": "34e731eaa0b23776f09b5054a7e9b180a65d8a84", "title": "Accelerometer-based tilt estimation of a rigid body with only rotational degrees of freedom", "text": "An estimation algorithm is developed for determining pitch and roll angles (tilt) of a rigid body fixed at a pivot point using multiple accelerometers. The estimate is independent of the rigid body dynamics; the method is applicable both in static conditions and for any dynamic motion of the body. No dynamic model is required for the estimator; only the mounting positions of the sensors need to be known. The proposed estimator is the optimal linear estimate in a least-squares sense if knowledge of the system dynamics is not used. The estimate may be used as a basis for further filtering and fusion techniques, such as sensor fusion with rate gyro data. The estimation algorithm is applied to the problem of state estimation for the Balancing Cube, a rigid structure that can actively balance on its corners. Experimental results are provided."}
{"_id": "27381ae0b5369ad4d58f93c7e374361488f0dff7", "title": "Survey on mining subjective data on the web", "text": "In the past years we have witnessed Sentiment Analysis and Opinion Mining becoming increasingly popular topics in Information Retrieval and Web data analysis. With the rapid growth of the user-generated content represented in blogs, wikis and Web forums, such an analysis became a useful tool for mining the Web, since it allowed us to capture sentiments and opinions at a large scale. Opinion retrieval has established itself as an important part of search engines. Ratings, opinion trends and representative opinions enrich the search experience of users when combined with traditional document retrieval, by revealing more insights about a subject. Opinion aggregation over product reviews can be very useful for product marketing and positioning, exposing the customers\u2019 attitude towards a product and its features along different dimensions, such as time, geographical location, and experience. Tracking how opinions or discussions evolve over time can help us identify interesting trends and patterns and better understand the ways that information is propagated in the Internet. In this study, we review the development of Sentiment Analysis and Opinion Mining during the last years, and also discuss the evolution of a relatively new research direction, namely, Contradiction Analysis. We give an overview of the proposed methods and recent advances in these areas, and we try to layout the future research directions in the field."}
{"_id": "993d736b0174abf2f713bea9d9642b85a2313cae", "title": "Estimating Attributes: Analysis and Extensions of RELIEF", "text": "In the context of machine learning from examples this paper deals with the problem of estimating the quality of attributes with and without dependencies among them. Kira and Rendell (1992a,b) developed an algorithm called RELIEF, which was shown to be very eecient in estimating attributes. Original RELIEF can deal with discrete and continuous attributes and is limited to only two-class problems. In this paper RELIEF is analysed and extended to deal with noisy, incomplete, and multi-class data sets. The extensions are veriied on various artiicial and one well known real-world problem."}
{"_id": "11157109b8f3a098c5c3f801ba9acbffd2aa49b1", "title": "Automatic Retrieval and Clustering of Similar Words", "text": "Bootstrapping semantics from text is one of the greatest challenges in natural language learning. We first define a word similarity measure based on the distributional pattern of words. The similarity measure allows us to construct a thesaurus using a parsed corpus. We then present a new evaluation methodology for the automatically constructed thesaurus. The evaluation results show that the thesaurus is significantly closer to WordNet than Roget Thesaurus is."}
{"_id": "75fef44e3b45f91821ca2fec1bd5e305dc6c15b0", "title": "Digital Nudging", "text": "Digital nudging is the use of user-interface design elements to guide people\u2019s behavior in digital choice environments. Digital choice environments are user interfaces \u2013 such as web-based forms and ERP screens \u2013 that require people to make judgments or decisions. Humans face choices every day, but the outcome of any choice is influenced not only by rational deliberations of the available options but also by the design of the choice environment in which information is presented, which can exert a subconscious influence on the outcome. In other words, \u2018\u2018what is chosen often depends upon how the choice is presented\u2019\u2019 (Johnson et al. 2012, p. 488) such that the \u2018\u2018choice architecture alters people\u2019s behavior in a predictable way\u2019\u2019 (Thaler and Sunstein 2008, p. 6). Even simple modifications of the choice environment in which options are presented can influence people\u2019s choices and \u2018\u2018nudge\u2019\u2019 them into behaving in particular ways. In fact, there is no neutral way to present choices. For example, Johnson and Goldstein (2003) showed that simply changing default options (from opt-in to opt-out) in the context of organ donation nearly doubled the percentage of people who consent to being organ donors. Many choices are made in online environments. As the design of digital choice environments always (either deliberately or accidentally) influences people\u2019s choices, understanding the effects of digital nudges in these environments can help designers lead users to the most desirable choice. For example, the mobile payment app Square nudges people into giving tips by setting the default to \u2018\u2018tipping\u2019\u2019 so that customers must actively select a \u2018\u2018no tipping\u2019\u2019 option if they choose not to give a tip. Using this simple nudge has raised tip amounts, especially where little or no tipping has been common (Carr 2013). These examples show that simply changing the default option affects the outcome."}
{"_id": "12071c32587dd4b7fe7af8592286d5bb5116f664", "title": "Perspectives and Limitations of Self-Organizing Maps in Blind Separation of Source Signals", "text": "|The capabilities of self-organizing maps (SOMs) in parametrizing data manifolds qualify them as candidates for blind separation algorithms. We study the virtues and problems of the SOM-based approach in a simple example. Also numerical simulations of more general cases have been performed. It shows that the performance is unquestionable in the case of a linear mixture only if the observed data are prewhitened and inhomogeneities in the input data are compensated. The algorithm is robust with respect to deviations from linearity, although may fail for complex non-linearly distorted signals. Due to computational restrictions only mixtures from a few sources can be resolved. Under certain conditions it is possible to separate more sources than sensors using a dimension-increasing map."}
{"_id": "4daa58df53641fbe0460abb8e330050d936745d3", "title": "Anomaly Detection in Vessel Tracking Using Support Vector Machines (SVMs)", "text": "The paper is devoted to supervise method approach to identify the vessel anomaly behaviour in waterways using the Automated Identification System (AIS) vessel reporting data. In this work, we describe the use of SVMs to detect the vessel anomaly behaviour. The SVMs is a supervised method that needs some pre knowledge to extract the maritime movement patterns of AIS raw data into information. This is the basis to remodel information into a meaningful and valuable form. The result of this work shows that the SVMs technique is applicable to be used for the identification of vessel anomaly behaviour. It is proved that the best accuracy result is obtained from dividing raw data into 70% for training and 30% for testing stages."}
{"_id": "0a4110fda21f0de29824ead1df591d2c5e1da8d0", "title": "Volley: Automated Data Placement for Geo-Distributed Cloud Services", "text": "As cloud services grow to span more and more globally distributed datacenters, there is an increasingly urgent need for automated mechanisms to place application data across these datacenters. This placement must deal with business constraints such as WAN bandwidth costs and datacenter capacity limits, while also minimizing user-perceived latency. The task of placement is further complicated by the issues of shared data, data inter-dependencies, application changes and user mobility. We document these challenges by analyzing month-long traces from Microsoft's Live Messenger and Live Mesh, two large-scale commercial cloud services."}
{"_id": "3b1f22ab6a85c80d063b804f0cd3edd3ba13e772", "title": "On Challenges in Evaluating Malware Clustering", "text": "Malware clustering and classification are important tools that enable analysts to prioritize their malware analysis efforts. The recent emergence of fully automated methods for malware clustering and classification that report high accuracy suggests that this problem may largely be solved. In this paper, we report the results of our attempt to confirm our conjecture that the method of selecting ground-truth data in prior evaluations biases their results toward high accuracy. To examine this conjecture, we apply clustering algorithms from a different domain (plagiarism detection), first to the dataset used in a prior work\u2019s evaluation and then to a wholly new malware dataset, to see if clustering algorithms developed without attention to subtleties of malware obfuscation are nevertheless successful. While these studies provide conflicting signals as to the correctness of our conjecture, our investigation of possible reasons uncovers, we believe, a cautionary note regarding the significance of highly accurate clustering results, as can be impacted by testing on a dataset with a biased cluster-size distribution."}
{"_id": "3a7ee539e5c7749dad00050895a91e2891755061", "title": "Gource: visualizing software version control history", "text": "This film demonstrates a tool for visualizing the development history of software projects as an interactive animation, showing developers working on the hierarchical file-directory structure of a project over the course of its development."}
{"_id": "efb1a85cf540fd4f901a78100a2e450d484aebac", "title": "The Quest for Scalable Blockchain Fabric: Proof-of-Work vs. BFT Replication", "text": "Bitcoin cryptocurrency demonstrated the utility of global consensus across thousands of nodes, changing the world of digital transactions forever. In the early days of Bitcoin, the performance of its probabilistic proof-of-work (PoW) based consensus fabric, also known as blockchain, was not a major issue. Bitcoin became a success story, despite its consensus latencies on the order of an hour and the theoretical peak throughput of only up to 7 transactions per second. The situation today is radically different and the poor performance scalability of early PoW blockchains no longer makes sense. Specifically, the trend of modern cryptocurrency platforms, such as Ethereum, is to support execution of arbitrary distributed applications on blockchain fabric, needing much better performance. This approach, however, makes cryptocurrency platforms step away from their original purpose and enter the domain of database-replication protocols, notably, the classical state-machine replication, and in particular its Byzantine fault-tolerant (BFT) variants. In this paper, we contrast PoW-based blockchains to those based on BFT state machine replication, focusing on their scalability limits. We also discuss recent proposals to overcoming these scalability limits and outline key outstanding open problems in the quest for the \u201cultimate\u201d blockchain fabric(s)."}
{"_id": "93a478a9d931f44cf4112b86f2808438eaec3f0c", "title": "Integrated system of face recognition and sound localization for a smart door phone", "text": "This paper proposes a smart system using both face recognition and sound localization techniques to identify foreign faces through a door phone in a more efficient and accurate way. In particular, a visitor's voice is used to localize the proper location of the speaker. The location information is then used to adjust the door phone camera position to frame the face in its field of view. In order to accurately locate the facial position of the visitor, 4 microphones are positioned in a cross configuration. Furthermore, face detection and face recognition algorithms as well as a wireless interface are used to recognize the visitor and send the information to the home owner's phone. The entire control system is built using an FPGA chip and tested for actual use in door phone environments1."}
{"_id": "9165151dc212c1b1f03368b4e5b3b558f92882c9", "title": "Deep Convolutional Factor Analyser for Multivariate Time Series Modeling", "text": "Deep generative models can perform dramatically better than traditional graphical models in a number of machine learning tasks. However, training such models remains challenging because their latent variables typically do not have an analytical posterior distribution, largely due to the nonlinear activation nodes. We present a deep convolutional factor analyser (DCFA) for multivariate time series modeling. Our network is constructed in a way that bottom layer nodes are independent. Through a process of up-sampling and convolution, higher layer nodes gain more temporal dependency. Our model can thus give a time series different representations at different depths. DCFA only consists of linear Gaussian nodes. Therefore, the posterior distributions of latent variables are also Gaussian and can be estimated easily using standard variational Bayes algorithm. We show that even without nonlinearity the proposed deep model can achieve state-of-the-art results in anomaly detection, classification and clustering using both synthetic and real-world datasets."}
{"_id": "ea8f3f40dc4649854960fc3f41bc0bcdefb5ecbf", "title": "Visual Sentiment Prediction by Merging Hand-Craft and CNN Features", "text": "Nowadays, more and more people are getting used to social media such as Instagram, Facebook, Twitter, and Flickr to post images and texts to express their sentiment and emotions on almost all events and subjects. In consequence, analyzing sentiment of the huge number of images and texts on social networks has become more indispensable. Most of current research has focused on analyzing sentiment of textual data, while only few research has focused on sentiment analysis of image data. Some of these research has considered handcraft image features, the others has utilized Convolutional Neural Network (CNN) features. However, no research to our knowledge has considered mixing both hand-craft and CNN features. In this paper, we attempt to merge CNN which has shown remarkable achievements in Computer Vision recently, with handcraft features such as Color Histogram (CH) and Bag-of-Visual Words (BoVW) with some local features such as SURF and SIFT to predict sentiment of images. Furthermore, because it is often the case that the large amount of training data may not be easily obtained in the area of visual sentiment, we employ both data augmentation and transfer learning from a pre-trained CNN such as VGG16 trained with ImageNet dataset. With the handshake of hand-craft and End-to-End features from CNN, we attempt to attain the improvement of the performance of the proposed visual sentiment prediction framework. We conducted experiments on an image dataset from Twitter with polarity labels (\"positive\" and \"negative\"). The results of experiments demonstrate that our proposed visual sentimental prediction framework outperforms the current state-of-the-art methods."}
{"_id": "6318f241012260300de660935a89b023fe022b0a", "title": "A gender identity interview for children.", "text": "A 12-item gender identity interview schedule was administered to 85 children referred for concerns regarding their gender identity development and 98 clinical and normal control children. Factor analysis identified two factors, which were labeled Affective Gender Confusion and Cognitive Gender Confusion. The gender-referred group gave significantly more deviant responses than did the controls on both factors. Results were discussed with regard to several diagnostic and assessment issues pertaining to children with gender identity disorder."}
{"_id": "5fabe0cffc609573cddfef2bf79d962caf2e7630", "title": "Factors Affecting Online Search Intention and Online Purchase Intention", "text": "This research focuses on various factors affecting online search intention which has been found to be a key predictor of online purchase intention. Data were collected from a sample consisting of mostly young adults with familiarity of computer use and online shopping experience. A structural equation model was employed to test hypotheses. According to the findings, utilitarian value of Internet information search, hedonic value of Internet information search, perceived benefits of Internet shopping, perceived risk of Internet shopping, and Internet purchase experience predicted online search intention well. The findings also showed that online search intention positively affects * Professor of Marketing, College of Business Administration, Seoul National University(jaekim@snu.ac.kr) ** Marketing, Telephone Service Team, Hanaro Telecom Co., Inc. (heechun@hanaro.com) *** M.B.A. candidate, College of Business Administration, Seoul National University(haejoo80@yahoo.com) This research was supported by the Institute of Management Research, Seoul National University. Seoul Journal of Business Volume 10, Number 2 (December 2004)"}
{"_id": "3084d5f726e262cfba647d8417fd76145ee3f662", "title": "Cyber Security Incidents on Critical Infrastructure and Industrial Networks", "text": "National critical infrastructure and industrial processes are heavily reliant on automation, monitoring and control technologies, including the widely used Supervisory Control and Data Acquisition (SCADA) systems. The growing interconnection of these systems with corporate networks exposes them to cyber attacks, with several security incidents reported over the last few decades. This study provides a classification scheme for categorising security incidents related to critical infrastructure and industrial control systems. The classification scheme is applied to analyse 242 security incidents on critical infrastructure and industrial control networks, which were reported between 1982 and 2014. The results show interesting patterns, with key points highlighted for the purpose of improving the way we plan for and direct efforts toward protecting critical infrastructure and industrial networks."}
{"_id": "62a9848a3b198bb47fd75bed6069b92a63a06797", "title": "How to evaluate technologies for health behavior change in HCI research", "text": "New technologies for encouraging physical activity, healthy diet, and other types of health behavior change now frequently appear in the HCI literature. Yet, how such technologies should be evaluated within the context of HCI research remains unclear. In this paper, we argue that the obvious answer to this question - that evaluations should assess whether a technology brought about the intended change in behavior - is too limited. We propose that demonstrating behavior change is often infeasible as well as unnecessary for a meaningful contribution to HCI research, especially when in the early stages of design or when evaluating novel technologies. As an alternative, we suggest that HCI contributions should focus on efficacy evaluations that are tailored to the specific behavior-change intervention strategies (e.g., self-monitoring, conditioning) embodied in the system and studies that help gain a deep understanding of people's experiences with the technology."}
{"_id": "21e1a50ead66ac791db4ae9afd917f2b3adf28cc", "title": "Millimeter-Wave MIMO Prototype: Measurements and Experimental Results", "text": "Millimeter-wave MIMO systems are one of the candidate schemes for 5G wireless standardization efforts. In this context, the main contributions of this article are three-fold. First, we describe parallel sets of measurements at identical transmit-receive location pairs with 2.9, 29 and 61 GHz carrier frequencies in indoor office, shopping mall, and outdoor settings. These measurements provide insights on propagation, blockage and material penetration losses, and the key elements necessary in system design to make mm-Wave systems viable in practice. Second, one of these elements is hybrid beamforming necessary for better link margins by reaping the array gain with large antenna dimensions. From the class of fully-flexible hybrid beamformers, we describe a robust class of directional beamformers toward meeting the high data-rate requirements of mm-Wave systems. Third, leveraging these design insights, we then describe an experimental prototype system at 28 GHz that realizes high data rates on both the downlink and uplink and robustly maintains these rates in outdoor and indoor mobility scenarios. In addition to maintaining large signal constellation sizes in spite of radio frequency challenges, this prototype leverages the directional nature of the mm-Wave channel to perform seamless beam switching and handover across mm-Wave base stations, thereby overcoming the path losses in non-line-of-sight links and blockages encountered at mm-Wave frequencies."}
{"_id": "ebbe56d235e0812d99609d4cff98473bfb5a7e33", "title": "A unified protocol stack solution for LTE and WLAN in future mobile converged networks", "text": "The interworking of the LTE system and WLAN technologies has drawn much attention lately, due to the growing demands for various multimedia services and large data traffic in hotspot areas. Existing research studies have mostly investigated the coupling architectures for these two wireless communication standards at the network layer. However, in the current architectures, many important coordination functions and joint optimizations cannot be accomplished efficiently. To tackle this problem, a new CBS solution is proposed, which integrates different RATs at layer 2 in the true sense of convergence. We design a unified protocol stack that includes all the original functions of both LTE and WLAN systems. Then we propose a convergence architecture, the RMC sublayer, for joint management of these two RATs. The proposed CBS solution can support seamless offloading through soft handover, guaranteed QoS, forwarding management by a single IP address, and customized bandwidth aggregation service. Finally, our simulation and initial experiment results demonstrate the feasibility and efficiency of the CBS solution in future mobile converged networks."}
{"_id": "e71f912a70810406ebb60b12024c971868b53a51", "title": "D-CFPR: D numbers extended consistent fuzzy preference relations", "text": "X iv :1 40 3. 57 53 v1 [ cs .A I] 2 3 M ar 2 01 4 D-CFPR: D numbers extended consistent fuzzy preference relations Xinyang Deng, Felix T.S. Chan, Rehan Sadiq, Sankaran Mahadevan, Yong Deng School of Computer and Information Science, Southwest University, Chongqing, 400715, China Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hong Kong, China School of Engineering, University of British Columbia, 1137 Alumni Ave, Kelowna, BC V1V 1V7, Canada School of Engineering, Vanderbilt University, Nashville, TN, 37235, USA"}
{"_id": "bb1f2813687763cd91c795b2471cf6ef31a4e34b", "title": "Conditional molecular design with deep generative models", "text": "Although machine learning has been successfully used to propose novel molecules that satisfy desired properties, it is still challenging to explore a large chemical space efficiently. In this paper, we present a conditional molecular design method that facilitates generating new molecules with desired properties. The proposed model, which simultaneously performs both property prediction and molecule generation, is built as a semisupervised variational autoencoder trained on a set of existing molecules with only a partial annotation. We generate new molecules with desired properties by sampling from the generative distribution estimated by the model. We demonstrate the effectiveness of the proposed model by evaluating it on drug-like molecules. The model improves the performance of property prediction by exploiting unlabeled molecules and efficiently generates novel molecules fulfilling various target conditions."}
{"_id": "d7f8eb7944eb36b5e25eb6d2456d6a3e1f0e142c", "title": "Identifying extremism in social media with multi-view context-aware subset optimization", "text": "Cyber threat intelligence and security informatics play critical roles in the identification of the principal influencers and threats associated with criminal activity and extremism in web-based communities. This paper presents an effective, dynamic learning framework utilizing domain specific context information to design a novel classification model that can robustly identify malicious social media posts (e.g. enrollment propaganda for extremist groups) with expressions of extremism or criminal intent. Research towards automated identification of extreme online posts and their associated key suspects and threats faces numerous challenges, 1) Online data, particularly social media data, originated from numerous independent and heterogeneous sources are largely unstructured. 2) The tactics, techniques, and procedures (TTPs) of criminal activity and extremism are constantly evolving. 3) There are limited ground truth data to support the development of effective classification technologies. In this paper, we present a human-machine collaborative, semi-supervised learning system that can efficiently and effectively identify malicious social media posts in presence of these challenges. Our system and framework develops an initial classifier from limited annotated data and in an interactive manner evolves dynamically into a sophisticated model using shortlisted relevant samples, identified via a graph-based optimization method, solvable by maximum flow algorithm. This same method also may be used to refine the classifier as TTPs evolve. Under this framework, the classifier performance converges faster using roughly 1\u20132 orders of magnitude fewer annotated samples, as compared to fully supervised solutions, resulting in a reasonably acceptable accuracy of nearly 80%. We validate our framework using a large collection of English and non-English flagged words extracted from three web-based forums and manually verified by multiple independent annotators."}
{"_id": "fc5940c207325590decf094c17f83521c86982a3", "title": "Effect of different surface pre-treatments and luting materials on shear bond strength to PEEK.", "text": "OBJECTIVES\nTo assess the bonding potential of a universal composite resin cement and an adhesive/composite system to differently pre-treated PEEK surfaces.\n\n\nMETHODS\nOne hundred and fifty PEEK disks were embedded in epoxy resin, polished (P4000 grit) and treated as follows (n=30/group): (A) no treatment, (B) acid etching with sulfuric acid (98%) for 1 min, (C) sandblasting for 10s with 50 microm alumina, (D) sandblasting for 10s with 110 microm alumina and (E) silica coating using the Rocatec system (3M ESPE). Polished and sandblasted (50 microm alumina) cp titanium (grade 4) served as a control. Acrylic hollow cylinders were either luted with a universal composite resin cement (RelyX Unicem) or an unfilled resin (Heliobond) and a hybrid composite (Tetric) to the specimens. Bond strength was measured in a shear test and failure modes were assessed. Statistic analysis was performed with one-way ANOVA followed by a post hoc Scheff\u00e9 test and unpaired t-tests.\n\n\nRESULTS\nWith the universal composite resin cement, no bond could be established on any PEEK surfaces, except specimens etched with sulfuric acid (19.0+/-3.4MPa). Shear bond strength to titanium was significantly lower (8.7+/-2.8MPa, p<0.05). Applying the adhesive/composite system, shear bond strength values on pre-treated PEEK ranged from 11.5+/-3.2MPa (silica coating) to 18.2+/-5.4MPa (acid etched) with no statistically significant differences (p>0.05). No bond was obtained on the polished surface.\n\n\nSIGNIFICANCE\nBonding to PEEK is possible when using a bonding system. No adherence can be achieved with the tested universal composite resin cement except on an etched surface. The results strongly encourage further research in PEEK application in dentistry."}
{"_id": "18200e8db6fc63f16d5ed098b5abc17bf0939333", "title": "The Fastest Pedestrian Detector in the West", "text": "Figure 1: In many applications, detection speed is as important as accuracy. (a) A standard pipeline for performing modern multiscale detection is to create a densely sampled image pyramid, compute features at each scale, and finally perform sliding window classification (with a fixed scale model). Although effective; the creation of the feature pyramid can dominate the cost of detection, leading to slow multiscale detection. (b) Viola and Jones [6] utilized shift and scale invariant features, allowing a trained detector to be placed at any location and scale without relying on an image pyramid. Constructing such a classifier pyramid results in fast multiscale detection; unfortunately, most features are not scale invariant, including gradient histograms, significantly limiting the generality of this scheme. (c) We propose a fast method for approximating features at multiple scales using a sparsely sampled image pyramid with a step size of an entire octave and within each octave we use a classifier pyramid. The proposed approach achieves nearly the same accuracy as using densely sampled image pyramids, with nearly the same speed as using a classifier pyramid applied to an image at a single scale."}
{"_id": "19dfea55ce8c6999415cb6216de0e5bd108a4f79", "title": "A mobile vision system for robust multi-person tracking", "text": "We present a mobile vision system for multi-person tracking in busy environments. Specifically, the system integrates continuous visual odometry computation with tracking-by-detection in order to track pedestrians in spite of frequent occlusions and egomotion of the camera rig. To achieve reliable performance under real-world conditions, it has long been advocated to extract and combine as much visual information as possible. We propose a way to closely integrate the vision modules for visual odometry, pedestrian detection, depth estimation, and tracking. The integration naturally leads to several cognitive feedback loops between the modules. Among others, we propose a novel feedback connection from the object detector to visual odometry which utilizes the semantic knowledge of detection to stabilize localization. Feedback loops always carry the danger that erroneous feedback from one module is amplified and causes the entire system to become instable. We therefore incorporate automatic failure detection and recovery, allowing the system to continue when a module becomes unreliable. The approach is experimentally evaluated on several long and difficult video sequences from busy inner-city locations. Our results show that the proposed integration makes it possible to deliver stable tracking performance in scenes of previously infeasible complexity."}
{"_id": "5361b603b854f154aa4be0be5ed92afff1436ec8", "title": "A Boosted Particle Filter: Multitarget Detection and Tracking", "text": "The problem of tracking a varying number of non-rigid objects has two major difficulties. First, the observation models and target distributions can be highly non-linear and non-Gaussian. Second, the presence of a large, varying number of objects creates complex interactions with overlap and ambiguities. To surmount these difficulties, we introduce a vision system that is capable of learning, detecting and tracking the objects of interest. The system is demonstrated in the context of tracking hockey players using video sequences. Our approach combines the strengths of two successful algorithms: mixture particle filters and Adaboost. The mixture particle filter [17] is ideally suited to multi-target tracking as it assigns a mixture component to each player. The crucial design issues in mixture particle filters are the choice of the proposal distribution and the treatment of objects leaving and entering the scene. Here, we construct the proposal distribution using a mixture model that incorporates information from the dynamic models of each player and the detection hypotheses generated by Adaboost. The learned Adaboost proposal distribution allows us to quickly detect players entering the scene, while the filtering process enables us to keep track of the individual players. The result of interleaving Adaboost with mixture particle filters is a simple, yet powerful and fully automatic multiple object tracking system."}
{"_id": "fef0b51c865cc72b64bfafd6a1bf3539c3c1d290", "title": "Face liveness detection through face structure analysis", "text": "Liveness detection is a way to detect whether the person is live or not during submission of his/her biometric trait. It is mandatory in order to prevent face spoofing attacks. Therefore, in this paper, we proposed a robust face structure analysis mechanism to detect the liveness by exploiting face shape information. 3D structure/shape of the face is measured on the basis of disparity map between left and right image taken by stereoscopic vision. A gradient-based eight neighbour feature extraction technique has been proposed to extract unique features from these disparity images. It produces minimal computational cost by taking subset of the overall image. We have applied linear discriminant analysis (LDA), C-means algorithms on these features while principal component analysis (PCA) is applied on raw disparity images. We have achieved a recognition rate of 91.6%, 97.5% and 98.3% using PCA, LDA and C-means respectively, which strengthened the confidence of our proposed feature extraction technique."}
{"_id": "8a9455e4dcac03b5fd891a74bcf0fdcd80cf51e4", "title": "A Hybrid Course Recommendation System by Integrating Collaborative Filtering and Artificial Immune Systems", "text": "Abstract: This research proposes a two-stage user-based collaborative filtering process using an artificial immune system for the prediction of student grades, along with a filter for professor ratings in the course recommendation for college students. We test for cosine similarity and Karl Pearson (KP) correlation in affinity calculations for clustering and prediction. This research uses student information and professor information datasets of Yuan Ze University from the years 2005\u20132009 for the purpose of testing and training. The mean average error and confusion matrix analysis form the testing parameters. A minimum professor rating was tested to check the results, and observed that the recommendation systems herein provide highly accurate results for students with higher mean grades."}
{"_id": "a4269ee19a49ec9cf7343a4a8a187a9caceab6a4", "title": "Tactical Language and Culture Training Systems: Using Artificial Intelligence to Teach Foreign Languages and Cultures", "text": "The Tactical Language and Culture Training System (TLCTS) helps people quickly acquire communicative skills in foreign languages and cultures. More than 20,00 0 learners worldwide have used TLCTS courses. TLCTS utili zes artificial intelligence technologies in multiple wa ys: during the authoring process, and at run time to process l arner speech, interpret learner actions, control the resp on e of non-player characters, and evaluate and assess lear ner performance and proficiency. This paper describes the architecture of TLCTS and the artificial intelligence techn ologies that it employs, and presents results from multiple evaluation studies that demonstrate the benefits of learn ing foreign language and culture using this approach."}
{"_id": "6fb1daa47363b1b66bf254583437b116bde63f05", "title": "Neural network model with Monte Carlo algorithm for electricity demand forecasting in Queensland", "text": "With the rapid growth over the past few decades, people are consuming more and more electrical energies. In order to solve the contradiction between supply and demand to minimize electricity cost, it is necessary and useful to predict the electricity demand. In this paper, we apply an improved neural network algorithm to forecast the electricity, and we test it on a collected electricity demand data set in Queensland to verify its performance. There are two contributions in this paper. Firstly, comparing with backpropagation (BP) neural network, the results show a better performance on this improved neural network. Secondly, the performance on various hidden layers shows that different dimension of hidden layer in this improved neural network has little impact on the Queensland's electricity demand forecasting."}
{"_id": "5672de9bd5cea015f0fde8bc3686c74cff353508", "title": "Generalized state-space model for an n-phase interleaved buck-boost converter", "text": "An n-phase interleaved buck-boost converter (IBBC) with generalized steady state model has been presented in this paper. Development in distributed generation system requires low cost, high efficient converters for power conversion. Paralleling of converters is done so as to minimize the ripples at the output and reduce the size of passive components and improve the overall efficiency and cost of the system. An interleaved buck boost converter finds application in fuel cells, electric vehicles and various solar charging applications. The steady state and small signal analysis of an n phase buck-boost converter in CCM mode have been carried out and average state space equations have been developed. In averaging, parasitic elements have been excluded for simplifying the calculations. A simulation model for a three phase interleaved buck boost converter has been presented initially which is then generalized."}
{"_id": "f341efe6667151fd79eb9c45fa6d15b06d57c151", "title": "Attaining human-level performance for anatomical landmark detection in 3D CT data", "text": "We present an efficient neural network approach for locating anatomical landmarks, using a two-pass, two-resolution cascaded approach which leverages a mechanism we term atlas location autocontext. Location predictions are made by regression to Gaussian heatmaps, one heatmap per landmark. This system allows patchwise application of a shallow network, thus enabling the prediction of multiple volumetric heatmaps in a unified system, without prohibitive GPU memory requirements. Evaluation is performed for 22 landmarks defined on a range of structures in head CT scans and the neural network model is benchmarked against a previously reported decision forest model trained with the same cascaded system. Models are trained and validated on 201 scans. Over the final test set of 20 scans which was independently annotated by 2 observers, we show that the neural network reaches an accuracy which matches the annotator variability."}
{"_id": "ed308c5085610c85a598e790390d954743d0e0cc", "title": "A Regulated Charge Pump for Tunneling Floating-Gate Transistors", "text": "Flash memory is an important component in many embedded system-on-a-chip applications, which drives the need to generate high write/erase voltages in generic CMOS processes. In this paper, we present a regulated, high-voltage charge pump for erasing Flash memory and floating-gate transistors. This 0.069-mm2 charge pump was fabricated in a 0.35 $\\mu \\text {m}$ standard CMOS process. While operating from a 2.5 V supply, the charge pump generates regulated voltages up to 16 V with a PSRR of 52 dB and an output impedance of 6.8 $\\text{k}\\Omega $ . To reduce power consumption for use in battery-powered applications, this charge pump uses a variable-frequency regulation technique and a new circuit for minimizing short-circuit current in the clock-generation circuitry; the resulting charge pump is able to erase the charge on floating-gate transistors using only $1.45 \\mu J$ ."}
{"_id": "6525b5e36a350791d5a90e029aa72053a0e3c2af", "title": "AMES-Cloud: A Framework of Adaptive Mobile Video Streaming and Efficient Social Video Sharing in the Clouds", "text": "While demands on video traffic over mobile networks have been souring, the wireless link capacity cannot keep up with the traffic demand. The gap between the traffic demand and the link capacity, along with time-varying link conditions, results in poor service quality of video streaming over mobile networks such as long buffering time and intermittent disruptions. Leveraging the cloud computing technology, we propose a new mobile video streaming framework, dubbed AMES-Cloud, which has two main parts: adaptive mobile video streaming (AMoV) and efficient social video sharing (ESoV). AMoV and ESoV construct a private agent to provide video streaming services efficiently for each mobile user. For a given user, AMoV lets her private agent adaptively adjust her streaming flow with a scalable video coding technique based on the feedback of link quality. Likewise, ESoV monitors the social network interactions among mobile users, and their private agents try to prefetch video content in advance. We implement a prototype of the AMES-Cloud framework to demonstrate its performance. It is shown that the private agents in the clouds can effectively provide the adaptive streaming, and perform video sharing (i.e., prefetching) based on the social network analysis."}
{"_id": "e30ef9b464e2cb6f1c36f9af5e282d765c1b3cdd", "title": "A Comparative Analysis of Personality-Based Music Recommender Systems", "text": "This article describes a preliminary study on considering information about the target user\u2019s personality in music recommender systems (MRSs). For this purpose, we devised and implemented four MRSs and evaluated them on a sample of real users and real-world datasets. Experimental results show that MRSs that rely on purely users\u2019 personality information are able to provide performance comparable with those of a state-of-the-art MRS, even better in terms of the diversity of the suggested items."}
{"_id": "32a2940bbb3aa60b786dcaf42e810eca831a16a4", "title": "Assembly algorithms for next-generation sequencing data.", "text": "The emergence of next-generation sequencing platforms led to resurgence of research in whole-genome shotgun assembly algorithms and software. DNA sequencing data from the Roche 454, Illumina/Solexa, and ABI SOLiD platforms typically present shorter read lengths, higher coverage, and different error profiles compared with Sanger sequencing data. Since 2005, several assembly software packages have been created or revised specifically for de novo assembly of next-generation sequencing data. This review summarizes and compares the published descriptions of packages named SSAKE, SHARCGS, VCAKE, Newbler, Celera Assembler, Euler, Velvet, ABySS, AllPaths, and SOAPdenovo. More generally, it compares the two standard methods known as the de Bruijn graph approach and the overlap/layout/consensus approach to assembly."}
{"_id": "228b4e4bce405c6ecb401a07b8912a0cb9d62668", "title": "Anomaly Detection in Cyber Physical Systems Using Recurrent Neural Networks", "text": "This paper presents a novel unsupervised approach to detect cyber attacks in Cyber-Physical Systems (CPS). We describe an unsupervised learning approach using a Recurrent Neural network which is a time series predictor as our model. We then use the Cumulative Sum method to identify anomalies in a replicate of a water treatment plant. The proposed method not only detects anomalies in the CPS but also identifies the sensor that was attacked. The experiments were performed on a complex dataset which is collected through a Secure Water Treatment Testbed (SWaT). Through the experiments, we show that the proposed technique is able to detect majority of the attacks designed by our research team with low false positive rates."}
{"_id": "f804fb069c95673daaa18ddfd07030a5fe5de9eb", "title": "Antidepressant effects of sleep deprivation require astrocyte-dependent adenosine mediated signaling", "text": "Major depressive disorder is a debilitating condition with a lifetime risk of ten percent. Most treatments take several weeks to achieve clinical efficacy, limiting the ability to bring instant relief needed in psychiatric emergencies. One intervention that rapidly alleviates depressive symptoms is sleep deprivation; however, its mechanism of action is unknown. Astrocytes regulate responses to sleep deprivation, raising the possibility that glial signaling mediates antidepressive-like actions of sleep deprivation. Here, we found that astrocytic signaling to adenosine (A1) receptors was required for the robust reduction of depressive-like behaviors following 12 hours of sleep deprivation. As sleep deprivation activates synaptic A1 receptors, we mimicked the effect of sleep deprivation on depression phenotypes by administration of the A1 agonist CCPA. These results provide the first mechanistic insight into how sleep deprivation impacts mood, and provide a novel pathway for rapid antidepressant development by modulation of glial signaling in the brain."}
{"_id": "be407fd8869b899f36d82b1676f23142302a30a7", "title": "Plagiarism Meets Paraphrasing: Insights for the Next Generation in Automatic Plagiarism Detection", "text": "Although paraphrasing is the linguistic mechanism underlying many plagiarism cases, little attention has been paid to its analysis in the framework of automatic plagiarism detection. Therefore, state-of-the-art plagiarism detectors find it difficult to detect cases of paraphrase plagiarism. In this article, we analyze the relationship between paraphrasing and plagiarism, paying special attention to which paraphrase phenomena underlie acts of plagiarism and which of them are detected by plagiarism detection systems. With this aim in mind, we created the P4P corpus, a new resource that uses a paraphrase typology to annotate a subset of the PAN-PC-10 corpus for automatic plagiarism detection. The results of the Second International Competition on Plagiarism Detection were analyzed in the light of this annotation.The presented experiments show that (i) more complex paraphrase phenomena and a high density of paraphrase mechanisms make plagiarism detection more difficult, (ii) lexical substitutions are the paraphrase mechanisms used the most when plagiarizing, and (iii) paraphrase mechanisms tend to shorten the plagiarized text. For the first time, the paraphrase mechanisms behind plagiarism have been analyzed, providing critical insights for the improvement of automatic plagiarism detection systems."}
{"_id": "93e39ba18752934181a44076b94042ff2789ca04", "title": "Diode-clamped multilevel converters: a practicable way to balance DC-link voltages", "text": "The converter topologies identified as diode-clamped multilevel (DCM) or, equivalently, as multipoint clamped (MPC), are rarely used in industrial applications, owing to some serious drawbacks involving mainly the stacked bank of capacitors that constitutes their multilevel dc link. The balance of the capacitor voltages is not possible in all operating conditions when the MPC converter possesses a passive front end. On the other hand, in ac/dc/ac power conversion, the back-to-back connection of a multilevel rectifier with a multilevel inverter allows the balance of the dc-link capacitor voltages and, at the same time, it offers the power-factor-correction capability at the mains ac input. An effective balancing strategy suitable for MPC conversion systems with any number of dc-link capacitors is presented here. The strategy has been carefully studied to optimize the converter efficiency. The simulation results related to a high-power conversion system (up to 10 MW) characterized by four intermediate dc-link capacitors are shown."}
{"_id": "c7291b0e5ddfc5a6a78b1a474a8a18e4e52141ef", "title": "Anti-falling tree climbing mechanism optimization", "text": "This paper presents an improved anti-falling tree climbing mechanism. A study and comparison was made on an original tree climbing robotic system published in reference. This optimized robotic system only relies on one stepper to achieve the anti-falling and anti-jamming functionality under either static or dynamic situations. This improved design feature reduces the total robot weight and enhances the control system robustness. Moreover, to achieve the robot spiral climbing morphology and decrease the servomotor rotational frictional forces, a special servomotor module and a bearing supporting mechanism are designed and implemented. The optimized mechanism design of the climbing robot is modelled in SolidWorks and the climbing process is simulated using SimMechanics. The simulation outcome of the initial and improved design shows that the new design has higher climbing speed and lighter weight. Physical prototype experiment matches with the simulation outcome and verifies the feasibility and effectiveness of this redesigned tree climbing mechanism."}
{"_id": "a532eadfd406aac6400cad831937271b802ba4dc", "title": "Convergence Analysis of MAP Based Blur Kernel Estimation", "text": "One popular approach for blind deconvolution is to formulate a maximum a posteriori (MAP) problem with sparsity priors on the gradients of the latent image, and then alternatingly estimate the blur kernel and the latent image. While several successful MAP based methods have been proposed, there has been much controversy and confusion about their convergence, because sparsity priors have been shown to prefer blurry images to sharp natural images. In this paper, we revisit this problem and provide an analysis on the convergence of MAP based approaches. We first introduce a slight modification to a conventional joint energy function for blind deconvolution. The reformulated energy function yields the same alternating estimation process, but more clearly reveals how blind deconvolution works. We then show the energy function can actually favor the right solution instead of the no-blur solution under certain conditions, which explains the success of previous MAP based approaches. The reformulated energy function and our conditions for the convergence also provide a way to compare the qualities of different blur kernels, and we demonstrate its applicability to automatic blur kernel size selection, blur kernel estimation using light streaks, and defocus estimation."}
{"_id": "3e5dbb5fd3460e020c321bc48b7d0fa6c3324dbc", "title": "Simulation of Absolute Amplitudes of Ultrasound Signals Using Equivalent Circuits", "text": "Equivalent circuits for piezoelectric devices and ultrasonic transmission media can be used to cosimulate electronics and ultrasound parts in simulators originally intended for electronics. To achieve efficient system- level optimization, it is important to simulate correct, absolute amplitude of the ultrasound signal in the system, as this determines the requirements on the electronics regarding dynamic range, circuit noise, and power consumption. This paper presents methods to achieve correct, absolute amplitude of an ultrasound signal in a simulation of a pulse-echo system using equivalent circuits. This is achieved by taking into consideration loss due to diffraction and the effect of the cable that connects the electronics and the piezoelectric transducer. The conductive loss in the transmission line that models the propagation media of the ultrasound pulse is used to model the loss due to diffraction. Results show that the simulated amplitude of the echo follows measured values well in both near and far fields, with an offset of about 10%. The use of a coaxial cable introduces inductance and capacitance that affect the amplitude of a received echo. Amplitude variations of 60% were observed when the cable length was varied between 0.07 m and 2.3 m, with simulations predicting similar variations. The high precision in the achieved results show that electronic design and system optimization can rely on system simulations alone. This will simplify the development of integrated electronics aimed at ultrasound systems."}
{"_id": "abf24ab2ba71cf5e252df7c4b0ff0f7179793630", "title": "PMLAB: An Scripting Environment for Process Mining", "text": "In a decade of process mining research, several algorithms have been proposed to solve particular process mining tasks. At the same pace, tools have appeared both in the academic and the commercial domains. These tools have enabled the use of process mining practices to a rather limited extent. In this paper we advocate for a change in the mentality: process mining may be an exploratory discipline, and PMLAB \u2013 a Python-based scripting environment supporting this \u2013 is proposed. This demo presents the main features of the PMLAB environment"}
{"_id": "7b32762c0320497f0b359a8460090e24ce0b6b42", "title": "Metasurface holograms reaching 80% efficiency.", "text": "Surfaces covered by ultrathin plasmonic structures--so-called metasurfaces--have recently been shown to be capable of completely controlling the phase of light, representing a new paradigm for the design of innovative optical elements such as ultrathin flat lenses, directional couplers for surface plasmon polaritons and wave plate vortex beam generation. Among the various types of metasurfaces, geometric metasurfaces, which consist of an array of plasmonic nanorods with spatially varying orientations, have shown superior phase control due to the geometric nature of their phase profile. Metasurfaces have recently been used to make computer-generated holograms, but the hologram efficiency remained too low at visible wavelengths for practical purposes. Here, we report the design and realization of a geometric metasurface hologram reaching diffraction efficiencies of 80% at 825\u2005nm and a broad bandwidth between 630\u2005nm and 1,050\u2005nm. The 16-level-phase computer-generated hologram demonstrated here combines the advantages of a geometric metasurface for the superior control of the phase profile and of reflectarrays for achieving high polarization conversion efficiency. Specifically, the design of the hologram integrates a ground metal plane with a geometric metasurface that enhances the conversion efficiency between the two circular polarization states, leading to high diffraction efficiency without complicating the fabrication process. Because of these advantages, our strategy could be viable for various practical holographic applications."}
{"_id": "2071f3ee9ec4d17250b00626d55e47bf75ae2726", "title": "Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition", "text": "where fk is the current approximation, and Rk f the current residual (error). Using initial values of Ro f = f , fo = 0, and k = 1 , the M P algorithm is comprised of the following steps, In this paper we describe a recursive algorithm to compute representations of functions with respect to nonorthogonal and possibly overcomplete dictionaries (I) Compute the inner-products {(Rkf, Zn)},. of elementary building blocks e.g. affine (wavelet) frames. We propose a modification to the Matching Pursuit algorithm of Mallat and Zhang (1992) that maintains full backward orthogonality of the residual convergence. We refer to this modified algorithm as Orthogonal Matching Pursuit (OMP). It is shown that all additional computation required for the OMP algorithm may be performed recursively. (11) Find nktl such that (error) at every step and thereby leads to improved I ( R k f , 'nk+I) l 2 asYp I(Rkf, zj)I 1"}
{"_id": "364bcbe6e3cb439d1ca694b259c2066d3769c860", "title": "Extensions of compressed sensing", "text": "We study the notion of Compressed Sensing (CS) as put forward in [14] and related work [20, 3, 4]. The basic idea behind CS is that a signal or image, unknown but supposed to be compressible by a known transform, (eg. wavelet or Fourier), can be subjected to fewer measurements than the nominal number of pixels, and yet be accurately reconstructed. The samples are nonadaptive and measure \u2018random\u2019 linear combinations of the transform coefficients. Approximate reconstruction is obtained by solving for the transform coefficients consistent with measured data and having the smallest possible ` norm. We perform a series of numerical experiments which validate in general terms the basic idea proposed in [14, 3, 5], in the favorable case where the transform coefficients are sparse in the strong sense that the vast majority are zero. We then consider a range of less-favorable cases, in which the object has all coefficients nonzero, but the coefficients obey an ` bound, for some p \u2208 (0, 1]. These experiments show that the basic inequalities behind the CS method seem to involve reasonable constants. We next consider synthetic examples modelling problems in spectroscopy and image processing, and note that reconstructions from CS are often visually \u201cnoisy\u201d . We post-process using translation-invariant de-noising, and find the visual appearance considerably improved. We also consider a multiscale deployment of compressed sensing, in which various scales are segregated and CS applied separately to each; this gives much better quality reconstructions than a literal deployment of the CS methodology. We also report that several workable families of \u2018random\u2019 linear combinations all behave equivalently, including random spherical, random signs, partial Fourier and partial Hadamard. These results show that, when appropriately deployed in a favorable setting, the CS framework is able to save significantly over traditional sampling, and there are many useful extensions of the basic idea."}
{"_id": "0848ad52c4e3cbddb78049b42b75afe25112b26d", "title": "Wavelet support vector machine", "text": "An admissible support vector (SV) kernel (the wavelet kernel), by which we can construct a wavelet support vector machine (SVM), is presented. The wavelet kernel is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions. The existence of wavelet kernels is proven by results of theoretic analysis. Computer simulations show the feasibility and validity of wavelet support vector machines (WSVMs) in regression and pattern recognition."}
{"_id": "1803bf9aaec59e1a561d66d019dfcbfb246067d1", "title": "A Cognitive Model of Semantic Network Learning", "text": "Child semantic development includes learning the meaning of words as well as the semantic relations among words. A presumed outcome of semantic development is the formation of a semantic network that reflects this knowledge. We present an algorithm for simultaneously learning word meanings and gradually growing a semantic network, which adheres to the cognitive plausibility requirements of incrementality and limited computations. We demonstrate that the semantic connections among words in addition to their context is necessary in forming a semantic network that resembles an adult\u2019s semantic knowledge."}
{"_id": "a6a6690ad6e6a2d7db8bcfff884feb34e2a5113f", "title": "Survey on Fraud Detection Techniques Using Data Mining", "text": "Now a days, fraud is a million dollar business and every year it is increasing more and more. In a crime survey in 2009, tells that close to 30% of the company\u2019s worldwide reported falling victim to fraud in the past year. Fraud involves individual or in a form of groups who intentionally act secretly to rob another of something of worth, for their own profit. Fraud is as against the humanity and it is unlimited variety of different forms. So in this paper, we introduce different methods and techniques to tackle the fraud and how to prevent the fraud into the organization, businesses, and hospital and insurance companies. Fraud prevention is a continuing battle. Many website owners must continually fight against many kinds of crime ranging from stealing identity theft and credit cards and many other types of fraud related to that. Internet Fraud is also very popular, where fraudster can steal the credit card number and buy thing from the different websites."}
{"_id": "13c214313a420af822a00182ee9c6057c8ccde0b", "title": "Effects of Weighted Vests on the Engagement of Children With Developmental Delays and Autism", "text": "The use of weighted vests for children with autism spectrum disorders and developmental disabilities is a common practice as part of sensory integration therapy programs. The purpose of the current investigation was to extend the research on the use of weighted vests for children with autism and developmental delays in a methodologically rigorous study. The study was conducted using an alternating treatment design. This allowed the comparison of three different conditions: weighted vest, vest with no weight (which served as a placebo), and no vest (which served as a baseline). The results showed no differentiation in engagement between conditions for any of the participants. Implications for practice and future research are provided."}
{"_id": "d1909ee7543ab92a8d8aad81af6c589278654b48", "title": "The Impact of Information Technology Infrastructure Flexibility on Strategic Alignment and Application Implementations", "text": "IT infrastructure flexibility is now being viewed as an organizational core competency that is necessary for organizations to survive and prosper in rapidly-changing, competitive, business environments. Using data from 200 U.S. and Canadian companies, this study examines the impact of the four components of IT infrastructure flexibility (compatibility, connectivity, modularity, and IT personnel) on strategic IT-business alignment and the extent to which various applications are implemented within an organization. The \u201cextent\u201d of implementation refers to the the organization\u2019s experience with the particular application and the degree to which the application is implemented and used throughout the organization. The findings from analysis of a structural model provide evidence that connectivity, modularity, and IT personnel (among other considerations that we discuss in the paper) make significant, positive impacts on strategic alignment and that all four components result in significant, positive impacts on the applications implementation. The study reinforces the importance of IT infrastructure flexibility to organizations as one source for sustainable competitive advantage."}
{"_id": "61d0e65485e05cb5e8ffe65002cf6c816514eaa4", "title": "Investigating Language Universal and Specific Properties in Word Embeddings", "text": "Recently, many NLP tasks have benefited from distributed word representation. However, it remains unknown whether embedding models are really immune to the typological diversity of languages, despite the language-independent architecture. Here we investigate three representative models on a large set of language samples by mapping dense embedding to sparse linguistic property space. Experiment results reveal the language universal and specific properties encoded in various word representation. Additionally, strong evidence supports the utility of word form, especially for inflectional languages."}
{"_id": "0a9f5b8223a1b78ef9c414cb7bf6bf95fc1f3c99", "title": "Color and Uncertainty: It is not always Black and White", "text": "To fully comprehend the meaning and impact of visualized data it is crucial that users are able to perceive and comprehend the inherent uncertainty of the data in a correct and intuitive way. Data uncertainty is frequently visualized through color mappings. Previous studies argued that color hue is not suitable for communicating uncertainty because most hue scales lack an intrinsic perceptual order. In this paper we examine the use of hue for communicating data uncertainty in more detail. We investigated the potential of distinct color triples (rather than the entire spectrum of colors, as used in previously studies) to represent different levels of uncertainty. We identified several color triples that reliably map to an intuitive ordering of certainty. Bipolar color scales constructed from these color triples can be used to communicate uncertainty in visualizations, particularly to audiences of nonspecialists. A \u2018traffic light\u2019 configuration (with red and green at the endpoints and either yellow or orange in the middle) communicates uncertainty most intuitively."}
{"_id": "3603d3d7e61868e7b79abc6b489ef167adc210a9", "title": "A Lender-Based Theory of Collateral\u2217", "text": "We consider an imperfectly competitive loan market in which a local relationship lender has an information advantage vis-\u00e0-vis distant transaction lenders. Competitive pressure from the transaction lenders prevents the local lender from extracting the full surplus from projects, so that she inefficiently rejects marginally profitable projects. Collateral mitigates the inefficiency by increasing the local lender\u2019s payoff from precisely those marginal projects that she inefficiently rejects. The model predicts that, controlling for observable borrower risk, collateralized loans are more likely to default ex post, which is consistent with the empirical evidence. The model also predicts that borrowers for whom local lenders have a relatively smaller information advantage face higher collateral requirements, and that technological innovations that narrow the information advantage of local lenders, such as small business credit scoring, lead to a greater use of collateral in lending relationships. JEL classification: D82; G21"}
{"_id": "ca8a3eb540719e7a869cc962b0d3624462a5bcf3", "title": "Graph-Level Operations: A High-Level Interface for Graph Visualization Technique Specification", "text": "Social networks collected by historians or sociologists typically have a large number of actors and edge attributes. Applying social network analysis (SNA) algorithms to these networks produces additional attributes such as degree, centrality, and clustering coefficients. Understanding the effects of this plethora of attributes is one of the main challenges of multivariate SNA. We present the design of GraphDice, a multivariate network visualization system for exploring the attribute space of edges and actors. GraphDice builds upon the ScatterDice system for its main multidimensional navigation paradigm, and extends it with novel mechanisms to support network exploration in general and SNA tasks in particular. Novel mechanisms include visualization of attributes of interval type and projection of numerical edge attributes to node attributes. We show how these extensions to the original ScatterDice system allow to support complex visual analysis tasks on networks with hundreds of actors and up to 30 attributes, while providing a simple and consistent interface for interacting with network data."}
{"_id": "479c4aef2519f0459097c33c13f73e406cfe024b", "title": "Table 1 : A Comparison between Lean Construction and Agile Project", "text": "This paper briefly summarises the evolution of Agile Project Management (APM) and differentiates it from lean and agile production and \u2018leagile\u2019 construction. The significant benefits being realized through employment of APM within the information systems industry are stated. The characteristics of APM are explored, including: philosophy, organizational attitudes and practices, planning, execution and control and learning. Finally, APM is subjectively assessed as to its potential contribution to the pre-design, design and construction phases. In conclusion, it is assessed that APM offers considerable potential for application in predesign and design but that there are significant hurdles to its adoption in the actual construction phase. Should these be overcome, APM offers benefits well beyond any individual project."}
{"_id": "73a7e6a9fe673f6db0431a0224e06587703ffbc8", "title": "A Bandelet-Based Inpainting Technique for Clouds Removal From Remotely Sensed Images", "text": "It is well known that removing cloud-contaminated portions of a remotely sensed image and then filling in the missing data represent an important photo editing cumbersome task. In this paper, an efficient inpainting technique for the reconstruction of areas obscured by clouds or cloud shadows in remotely sensed images is presented. This technique is based on the Bandelet transform and the multiscale geometrical grouping. It consists of two steps. In the first step, the curves of geometric flow of different zones of the image are determined by using the Bandelet transform with multiscale grouping. This step allows an efficient representation of the multiscale geometry of the image's structures. Having well represented this geometry, the information inside the cloud-contaminated zone is synthesized by propagating the geometrical flow curves inside that zone. This step is accomplished by minimizing a functional whose role is to reconstruct the missing or cloud contaminated zone independently of the size and topology of the inpainting domain. The proposed technique is illustrated with some examples on processing aerial images. The obtained results are compared with those obtained by other clouds removal techniques."}
{"_id": "315fb65f3358c20b9e7403b1cf7b7a1d7480c0bb", "title": "Modeling of Overhead Transmission Lines for Lightning Studies", "text": "The aim of this paper is to analyze the influence of the models proposed for representing overhead transmission lines in lightning calculations. A full model of a transmission line for lightning overvoltage calculations can be split into several parts: wires (shield wires and phase conductors), towers, footing impedances and insulators. An additional component, the arrester, should be added if the study is aimed at selecting the arrester ratings needed to achieve a given performance of the line. This paper presents a sensitivity study whose main goal is to determine the effect that the model and the parameters selected for representing the main parts of a transmission line can have on the flashover rate."}
{"_id": "4995be055d63be04ed979d05cec97888b308e488", "title": "Semantic Structural Alignment of Neural Representational Spaces Enables Translation between English and Chinese Words", "text": "Two sets of items can share the same underlying conceptual structure, while appearing unrelated at a surface level. Humans excel at recognizing and using alignments between such underlying structures in many domains of cognition, most notably in analogical reasoning. Here we show that structural alignment reveals how different people's neural representations of word meaning are preserved across different languages, such that patterns of brain activation can be used to translate words from one language to another. Groups of Chinese and English speakers underwent fMRI scanning while reading words in their respective native languages. Simply by aligning structures representing the two groups' neural semantic spaces, we successfully infer all seven Chinese\u2013English word translations. Beyond language translation, conceptual structural alignment underlies many aspects of high-level cognition, and this work opens the door to deriving many such alignments directly from neural representational content."}
{"_id": "e288b9206dffbdb45f18ff611393e976be42282a", "title": "AEKF-SLAM: A New Algorithm for Robotic Underwater Navigation", "text": "In this work, we focus on key topics related to underwater Simultaneous Localization and Mapping (SLAM) applications. Moreover, a detailed review of major studies in the literature and our proposed solutions for addressing the problem are presented. The main goal of this paper is the enhancement of the accuracy and robustness of the SLAM-based navigation problem for underwater robotics with low computational costs. Therefore, we present a new method called AEKF-SLAM that employs an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-based SLAM approach stores the robot poses and map landmarks in a single state vector, while estimating the state parameters via a recursive and iterative estimation-update process. Hereby, the prediction and update state (which exist as well in the conventional EKF) are complemented by a newly proposed augmentation stage. Applied to underwater robot navigation, the AEKF-SLAM has been compared with the classic and popular FastSLAM 2.0 algorithm. Concerning the dense loop mapping and line mapping experiments, it shows much better performances in map management with respect to landmark addition and removal, which avoid the long-term accumulation of errors and clutters in the created map. Additionally, the underwater robot achieves more precise and efficient self-localization and a mapping of the surrounding landmarks with much lower processing times. Altogether, the presented AEKF-SLAM method achieves reliably map revisiting, and consistent map upgrading on loop closure."}
{"_id": "36e1b02a66ed928ef13e3a2ba6852e90a8713036", "title": "Traffic management: a holistic approach to memory placement on NUMA systems", "text": "NUMA systems are characterized by Non-Uniform Memory Access times, where accessing data in a remote node takes longer than a local access. NUMA hardware has been built since the late 80's, and the operating systems designed for it were optimized for access locality. They co-located memory pages with the threads that accessed them, so as to avoid the cost of remote accesses. Contrary to older systems, modern NUMA hardware has much smaller remote wire delays, and so remote access costs per se are not the main concern for performance, as we discovered in this work. Instead, congestion on memory controllers and interconnects, caused by memory traffic from data-intensive applications, hurts performance a lot more. Because of that, memory placement algorithms must be redesigned to target traffic congestion. This requires an arsenal of techniques that go beyond optimizing locality. In this paper we describe Carrefour, an algorithm that addresses this goal. We implemented Carrefour in Linux and obtained performance improvements of up to 3.6 relative to the default kernel, as well as significant improvements compared to NUMA-aware patchsets available for Linux. Carrefour never hurts performance by more than 4% when memory placement cannot be improved. We present the design of Carrefour, the challenges of implementing it on modern hardware, and draw insights about hardware support that would help optimize system software on future NUMA systems."}
{"_id": "a77bd5e4232c964d0c308d5ce375900202ea68bd", "title": "Blockchain network slice broker in 5G: Slice leasing in factory of the future use case", "text": "5G Network Slice Broker concept aims to enable mobile virtual network operators, over-the-top providers, and industry vertical market players to request and lease resources from infrastructure providers dynamically according to needs. In the future digital factory, the leasing of resources could also happen autonomously by manufacturing equipment. This paper presents Blockchain Slice Leasing Ledger Concept and analysis of its applicability in the Factory of the Future. The novel concept utilizes 5G Network Slice Broker in a Blockchain to reduce service creation time and enable manufacturing equipment autonomously and dynamically acquire the slice needed for more efficient operations."}
{"_id": "21701a87c687c3bac376b7ce98c1b2c448f83e94", "title": "Optimizing neighborhood projection with relaxation factor for inextensible cloth simulation", "text": "In this paper, we propose a novel method for inextensible cloth simulation. Our method introduces a neighborhood projection optimized with a relaxation factor. The neighborhood projection enforces inextensibility by modifying particle positions with a Jacobi-style iteration, leading to conservation of linear and angular quasi momenta. The relaxation factor is estimated according to the corrections and constraints, and is used to scale the corrections while keeping convergence to a smaller number of iterations. Experimental results demonstrate that our method increases the simulation efficiency, and stably handles inextensible cloth even in overconstrained situations. In addition to the simulation of hanging cloth and draping cloth, the simulated umbrella demonstrates the characters of our method for this type of objects."}
{"_id": "60afb1828de4efa1588401f87caa55ac3a0cd820", "title": "Early-onset hidradenitis suppurativa.", "text": "A 9-year-old girl developed hidradenitis suppurativa 3 months after the first signs of adrenarche. Such a close temporal relationship is consistent with the hypothesis that the disease is androgen dependent. Less than 2% of patients have onset of the disease before the age of 11 years. The exceptionally early age of onset in our patient may be partly explained by the fact that she had an early puberty."}
{"_id": "7f60b70dede16fe4d7b412674929b4805c9b5c95", "title": "Prepubertal hidradenitis suppurativa: two case reports and review of the literature.", "text": "Hidradenitis suppurativa (HS) is a chronic suppurative scarring disease of apocrine sweat gland-bearing skin in the axillary, anogenital, and, rarely, the breast and scalp regions. Females are more commonly affected than males and it is usually seen at puberty or later. We report two girls with prepubertal hidradenitis suppurativa whose initial presentation predated any signs of puberty. This early onset is very rare and its etiology remains unknown. Severe disease can be seen in prepubertal children and surgical intervention is effective in these cases."}
{"_id": "8b85f287c144aad5ff038ec0b140e0c4e210990b", "title": "Finasteride for the treatment of hidradenitis suppurativa in children and adolescents.", "text": "IMPORTANCE\nHidradenitis suppurativa (HS) is a chronic debilitating cutaneous disease for which there is no universally effective treatment. Patients typically present at puberty with tender subcutaneous nodules that can progress to dermal abscess formation. Antiandrogens have been used in the treatment of HS, and studies have primarily focused on adult patients.\n\n\nOBSERVATIONS\nWe present a case series of 3 pediatric patients with HS who were successfully treated with oral finasteride, resulting in decreased frequency and severity of disease flares with no significant adverse effects.\n\n\nCONCLUSIONS AND RELEVANCE\nFinasteride is a therapeutic option that provides benefit for pediatric patients with HS. Further prospective data and randomized controlled studies will provide helpful information in the management of this disease."}
{"_id": "e72e10ad6228bd3dcee792f6f571c5ffed37266f", "title": "Hidradenitis suppurativa in children and adolescents: a review of treatment options.", "text": "Hidradenitis suppurativa (HS) is a burdensome disease and has the potential to affect the life course of patients. It is a rare disease in children, and the recorded literature is correspondingly scarce. This article reviews the therapeutic options for HS in children and adolescents, and highlights particular differences or challenges with treating patients in this age group compared with adults. The work-up of paediatric patients with HS should include considerations of possible endocrine co-morbidities and obesity. Medical therapy of lesions may include topical clindamycin. Systemic therapy may include analgesics, clindamycin and rifampicin, finasteride, corticosteroids or tumour necrosis factor alpha (TNF\u03b1) blockers. Superinfections should be appropriately treated. Scarring lesions generally require surgery."}
{"_id": "f4f6ae0b10f2cc26e5cf75b4c39c70703f036e9b", "title": "Morbidity in patients with hidradenitis suppurativa.", "text": "BACKGROUND\nAlthough skin diseases are often immediately visible to both patients and society, the morbidity they cause is only poorly defined. It has been suggested that quality-of-life measures may be a relevant surrogate measure of skin disease. Hidradenitis suppurativa (HS) leads to painful eruptions and malodorous discharge and is assumed to cause a significant degree of morbidity. The resulting impairment of life quality has not previously been quantitatively assessed, although such an assessment may form a pertinent measure of disease severity in HS.\n\n\nOBJECTIVES\nTo measure the impairment of life quality in patients with HS.\n\n\nMETHODS\nIn total, 160 patients suffering from HS were approached. The following data were gathered: quality-of-life data (Dermatology Life Quality Index, DLQI questionnaire), basic demographic data, age at onset of the condition and the average number of painful lesions per month.\n\n\nRESULTS\nOne hundred and fourteen patients participated in the study. The mean +/- SD age of the patients was 40.9 +/- 11.7 years, the mean +/- SD age at onset 21.8 +/- 9.9 years and the mean +/- SD duration of the disease 18.8 +/- 11.4 years. Patients had a mean +/- SD DLQI score of 8.9 +/- 8.3 points. The highest mean score out of the 10 DLQI questions was recorded for question 1, which measures the level of pain, soreness, stinging or itching (mean 1.55 points, median 2 points). Patients experienced a mean of 5.1 lesions per month.\n\n\nCONCLUSIONS\nHS causes a high degree of morbidity, with the highest scores obtained for the level of pain caused by the disease. The mean DLQI score for HS was higher than for previously studied skin diseases, and correlated with disease intensity as expressed by lesions per month. This suggests that the DLQI may be a relevant outcome measure in future therapeutic trials in HS."}
{"_id": "c94ac1ce8492f39c2efa1eb93475372196fe0520", "title": "Real-Time Mental Arithmetic Task Recognition From EEG Signals", "text": "Electroencephalography (EEG)-based monitoring the state of the user's brain functioning and giving her/him the visual/audio/tactile feedback is called neurofeedback technique, and it could allow the user to train the corresponding brain functions. It could provide an alternative way of treatment for some psychological disorders such as attention deficit hyperactivity disorder (ADHD), where concentration function deficit exists, autism spectrum disorder (ASD), or dyscalculia where the difficulty in learning and comprehending the arithmetic exists. In this paper, a novel method for multifractal analysis of EEG signals named generalized Higuchi fractal dimension spectrum (GHFDS) was proposed and applied in mental arithmetic task recognition from EEG signals. Other features such as power spectrum density (PSD), autoregressive model (AR), and statistical features were analyzed as well. The usage of the proposed fractal dimension spectrum of EEG signal in combination with other features improved the mental arithmetic task recognition accuracy in both multi-channel and one-channel subject-dependent algorithms up to 97.87% and 84.15% correspondingly. Based on the channel ranking, four channels were chosen which gave the accuracy up to 97.11%. Reliable real-time neurofeedback system could be implemented based on the algorithms proposed in this paper."}
{"_id": "2bf11b00938e73468e3ab02dfe678985212f1aea", "title": "Mining User Mobility Features for Next Place Prediction in Location-Based Services", "text": "Mobile location-based services are thriving, providing an unprecedented opportunity to collect fine grained spatio-temporal data about the places users visit. This multi-dimensional source of data offers new possibilities to tackle established research problems on human mobility, but it also opens avenues for the development of novel mobile applications and services. In this work we study the problem of predicting the next venue a mobile user will visit, by exploring the predictive power offered by different facets of user behavior. We first analyze about 35 million check-ins made by about 1 million Foursquare users in over 5 million venues across the globe, spanning a period of five months. We then propose a set of features that aim to capture the factors that may drive users' movements. Our features exploit information on transitions between types of places, mobility flows between venues, and spatio-temporal characteristics of user check-in patterns. We further extend our study combining all individual features in two supervised learning models, based on linear regression and M5 model trees, resulting in a higher overall prediction accuracy. We find that the supervised methodology based on the combination of multiple features offers the highest levels of prediction accuracy: M5 model trees are able to rank in the top fifty venues one in two user check-ins, amongst thousands of candidate items in the prediction list."}
{"_id": "b92513dac9d5b6a4683bcc625b94dd1ced98734e", "title": "Two/Too Simple Adaptations of Word2Vec for Syntax Problems", "text": "We present two simple modifications to the models in the popular Word2Vec tool, in order to generate embeddings more suited to tasks involving syntax. The main issue with the original models is the fact that they are insensitive to word order. While order independence is useful for inducing semantic representations, this leads to suboptimal results when they are used to solve syntax-based problems. We show improvements in part-ofspeech tagging and dependency parsing using our proposed models."}
{"_id": "6275549f94dc466921fda4115361688222158be4", "title": "Digital Fabrication Techniques for Cultural Heritage: A Survey", "text": "Digital fabrication devices exploit basic technologies in order to create tangible reproductions of 3D digital models. Although current 3D printing pipelines still suffer from several restrictions, accuracy in reproduction has reached an excellent level. The manufacturing industry has been the main domain of 3D printing applications over the last decade. Digital fabrication techniques have also been demonstrated to be effective in many other contexts, including the consumer domain. The Cultural Heritage is one of the new application contexts and is an ideal domain to test the flexibility and quality of this new technology. This survey overviews the various fabrication technologies, discussing their strengths, limitations, and costs. Various successful uses of 3D printing in the Cultural Heritage are analysed, which should also be useful for other application contexts. We review works that have attempted to extend fabrication technologies in order to deal with the specific issues in the use of digital fabrication in the Cultural Heritage. Finally, we also propose areas for future research."}
{"_id": "65fed11870160561b19c1be168bf9d6a5873ab20", "title": "A Multi-Cloud Framework for Measuring and Describing Performance Aspects of Cloud Services Across Different Application Types", "text": "Cloud services have emerged as an innovative IT provisioning model in the recent years. However, after their usage severe considerations have emerged with regard to their varying performance due to multitenancy and resource sharing issues. These issues make it very difficult to provide any kind of performance estimation during application design or deployment time. The aim of this paper is to present a mechanism and process for measuring the performance of various Cloud services and describing this information in machine understandable format. The framework is responsible for organizing the execution and can support multiple Cloud providers. Furthermore we present approaches for measuring service performance with the usage of specialized metrics for ranking the services according to a weighted combination of cost, performance and workload."}
{"_id": "9969ff0b19914bc8487b6abb1d5312ce538a7284", "title": "Design of novel capacitive type torque sensor for robotic applications", "text": "This paper introduces a novel capacitive type torque sensor. The developed torque sensor measures single axis torque value by using two capacitance transducing cells. To make a linear relation between the capacitance and the deformation, the shape of flexure hinge is considered to be deformed constantly. Also, the nonlinear curve fitting method is used to fit the capacitance linearly. Finally, the performance of the sensor is verified in reference to a commercial torque sensor."}
{"_id": "96c76308dc5aca61355bc4529be9e3a8232fa155", "title": "Overview of High-Step-Up Coupled-Inductor Boost Converters", "text": "High-step-up, high-efficiency, and cost-effective dc-dc converters, serving as an interfacing cell to boost the low-voltage output of renewable sources to the utility voltage level, are an important part in renewable energy systems. Over the past few years, there has been a substantial amount of studies devoted to high-step-up dc-dc converters. Among them, the category of coupled-inductor boost converters is widely researched and considered to be a promising solution for high-step-up applications. In this paper, these converters are categorized into five groups according to the major topological features. The derivation process, advantages, and disadvantages of these converters are systematically discussed, compared, and scrutinized. This paper aims to provide an introduction, review, and framework for the category of high-step-up coupled-inductor boost converters. General structures for the topologies are proposed to clarify the topological derivation process and to show potential gaps. Furthermore, challenges or directions are presented in this paper for deriving new topologies in this field."}
{"_id": "a621face3570bcd56caff92a80fed494dca49bd8", "title": "Design and Analysis of a Two Stage Operational Amplifier for High Gain and High Bandwidth", "text": "In this paper a design and comparison between a fully differential RC Miller compensated CMOS op-amp and conventional op-amp is presented. High gain enables the circuit to operate efficiently in a closed loop feedback system, whereas high bandwidth makes it suitable for high speed applications. A novel RC Miller compensation technique is used to optimize the parameters of gain and bandwidth for high speed applications are illustrated in this research work. The design is also able to address any fluctuation in supply or dc input voltages and stabilizes the operation by nullifying. The design is implemented on TSMC 0.18 \uf06dm CMOS process at 3.3 V as supply voltage under room temperature 27\uf0b0 C. The simulated result shows that a unity gain bandwidth of 136.8 MHz with a high gain of 92.27 dB is achieved for the proposed op-amp circuit. The total areas of the layouts are 0.000158 mm and 0.000532 mm for conventional and proposed respectively. Key wrods: Two-Stage Op-amp, fully differential, high gain, high bandwidth"}
{"_id": "24c4c20ccda15b831b199f87491813f876d4db07", "title": "Assembler: Efficient Discovery of Spatial Co-evolving Patterns in Massive Geo-sensory Data", "text": "Recent years have witnessed the wide proliferation of geo-sensory applications wherein a bundle of sensors are deployed at different locations to cooperatively monitor the target condition. Given massive geo-sensory data, we study the problem of mining spatial co-evolving patterns (SCPs), i.e., groups of sensors that are spatially correlated and co-evolve frequently in their readings. SCP mining is of great importance to various real-world applications, yet it is challenging because (1) the truly interesting evolutions are often flooded by numerous trivial fluctuations in the geo-sensory time series; and (2) the pattern search space is extremely large due to the spatiotemporal combinatorial nature of SCP. In this paper, we propose a two-stage method called Assember. In the first stage, Assember filters trivial fluctuations using wavelet transform and detects frequent evolutions for individual sensors via a segment-and-group approach. In the second stage, Assember generates SCPs by assembling the frequent evolutions of individual sensors. Leveraging the spatial constraint, it conceptually organizes all the SCPs into a novel structure called the SCP search tree, which facilitates the effective pruning of the search space to generate SCPs efficiently. Our experiments on both real and synthetic data sets show that Assember is effective, efficient, and scalable."}
{"_id": "ca876e6b2aa530e3108ad307520bc19eaf95e136", "title": "A Markov Model for Headway/Spacing Distribution of Road Traffic", "text": "In this paper, we link two research directions of road traffic-the mesoscopic headway distribution model and the microscopic vehicle interaction model-together to account for the empirical headway/spacing distributions. A unified car-following model is proposed to simulate different driving scenarios, including traffic on highways and at intersections. Unlike our previous approaches, the parameters of this model are directly estimated from the Next Generation Simulation (NGSIM) Trajectory Data. In this model, empirical headway/spacing distributions are viewed as the outcomes of stochastic car-following behaviors and the reflections of the unconscious and inaccurate perceptions of space and/or time intervals that people may have. This explanation can be viewed as a natural extension of the well-known psychological car-following model (the action point model). Furthermore, the fast simulation speed of this model will benefit transportation planning and surrogate testing of traffic signals."}
{"_id": "306d317a1d45be685bd5162942d5472aac33559a", "title": "Design of a Marx Generator for HEMP filter evaluation taking account of parasitic effect of components", "text": "This paper introduces the design of a Marx Generator that can generate high attitude pulses up to 25 kV and 2.5 kA. The pulses have a wave shape representative of that used to test High-altitude ElectroMagnetic (HEMP) filters against the pulse current injection (PCI) requirements. The rise time is 20 ns and the pulse width is 230 ns on a 10 \u03a9 resistor. In the design, it was found that parasitic effect of circuit components could significantly affect the performance of the generator. This is mainly because the pulse width is very narrow and its frequency spectrum is very wideband. The high-frequency circuit models of the components used in the generator were obtained by calculation and optimization, and then further verified by experiment. The verified circuit models were then used to design the three-stage generator. The measured performance agrees very well with the simulated one using the proposed circuit models."}
{"_id": "e2151994deb9da2b192ec1e281558dc509274822", "title": "Brain activation during human male ejaculation.", "text": "Brain mechanisms that control human sexual behavior in general, and ejaculation in particular, are poorly understood. We used positron emission tomography to measure increases in regional cerebral blood flow (rCBF) during ejaculation compared with sexual stimulation in heterosexual male volunteers. Manual penile stimulation was performed by the volunteer's female partner. Primary activation was found in the mesodiencephalic transition zone, including the ventral tegmental area, which is involved in a wide variety of rewarding behaviors. Parallels are drawn between ejaculation and heroin rush. Other activated mesodiencephalic structures are the midbrain lateral central tegmental field, zona incerta, subparafascicular nucleus, and the ventroposterior, midline, and intralaminar thalamic nuclei. Increased activation was also present in the lateral putamen and adjoining parts of the claustrum. Neocortical activity was only found in Brodmann areas 7/40, 18, 21, 23, and 47, exclusively on the right side. On the basis of studies in rodents, the medial preoptic area, bed nucleus of the stria terminalis, and amygdala are thought to be involved in ejaculation, but increased rCBF was not found in any of these regions. Conversely, in the amygdala and adjacent entorhinal cortex, a decrease in activation was observed. Remarkably strong rCBF increases were observed in the cerebellum. These findings corroborate the recent notion that the cerebellum plays an important role in emotional processing. The present study for the first time provides insight into which regions in the human brain play a primary role in ejaculation, and the results might have important implications for our understanding of how human ejaculation is brought about, and for our ability to improve sexual function and satisfaction in men."}
{"_id": "381c7853690a0fee6f00d2608a7779737f1365f9", "title": "Data Center Energy Consumption Modeling: A Survey", "text": "Data centers are critical, energy-hungry infrastructures that run large-scale Internet-based services. Energy consumption models are pivotal in designing and optimizing energy-efficient operations to curb excessive energy consumption in data centers. In this paper, we survey the state-of-the-art techniques used for energy consumption modeling and prediction for data centers and their components. We conduct an in-depth study of the existing literature on data center power modeling, covering more than 200 models. We organize these models in a hierarchical structure with two main branches focusing on hardware-centric and software-centric power models. Under hardware-centric approaches we start from the digital circuit level and move on to describe higher-level energy consumption models at the hardware component level, server level, data center level, and finally systems of systems level. Under the software-centric approaches we investigate power models developed for operating systems, virtual machines and software applications. This systematic approach allows us to identify multiple issues prevalent in power modeling of different levels of data center systems, including: i) few modeling efforts targeted at power consumption of the entire data center ii) many state-of-the-art power models are based on a few CPU or server metrics, and iii) the effectiveness and accuracy of these power models remain open questions. Based on these observations, we conclude the survey by describing key challenges for future research on constructing effective and accurate data center power models."}
{"_id": "17c20fc0d0808d1c685159a52101422b8c7d6868", "title": "Differential quasi self-complimentary (QSC) ultra-wideband (UWB) MIMO antenna", "text": "In this paper, a differentially excited ultra-wideband (UWB) MIMO antenna is designed for pattern diversity applications, utilizing quasi self-complimentary (QSC) antenna elements. Four QSC elements are realized using half- octagon shaped monopoles, and their complementary cuts from the ground plane. Due to the QSC property, wide impedance bandwidth covering the UWB frequency range of 3\u201311 GHz is achieved within very less antenna footprint. The QSC elements are placed symmetrically about the four edges of a square substrate. Two oppositely positioned elements are fed with 180o phase difference, in order to form a single differential pair. Two such differential pairs are placed perpendicular to each other for realizing pattern diversity performance with high value of differential isolation."}
{"_id": "052402b356a1938ef213cd07810a2f32e7956c5d", "title": "Airway management and smoke inhalation injury in the burn patient.", "text": "Smoke inhalation injury, a unique form of acute lung injury, greatly increases the occurrence of postburn morbidity and mortality. In addition to early intubation for upper-airway protection, subsequent critical care of patients who have this injury should be directed at maintaining distal airway patency. High-frequency ventilation, inhaled heparin, and aggressive pulmonary toilet are among the therapies available. Even so, immunosuppression, intubation, and airway damage predispose these patients to pneumonia and other complications."}
{"_id": "76b6b2be912f263db3be8dca2724fb32a70a0077", "title": "Tibiofemoral movement 1: the shapes and relative movements of the femur and tibia in the unloaded cadaver knee.", "text": "In six unloaded cadaver knees we used MRI to determine the shapes of the articular surfaces and their relative movements. These were confirmed by dissection. Medially, the femoral condyle in sagittal section is composed of the arcs of two circles and that of the tibia of two angled flats. The anterior facets articulate in extension. At about 20 degrees the femur 'rocks' to articulate through the posterior facets. The medial femoral condyle does not move anteroposteriorly with flexion to 110 degrees. Laterally, the femoral condyle is composed entirely, or almost entirely, of a single circular facet similar in radius and arc to the posterior medial facet. The tibia is roughly flat. The femur tends to roll backwards with flexion. The combination during flexion of no anteroposterior movement medially (i.e., sliding) and backward rolling (combined with sliding) laterally equates to internal rotation of the tibia around a medial axis with flexion. About 5 degrees of this rotation may be obligatory from 0 degrees to 10 degrees flexion; thereafter little rotation occurs to at least 45 degrees. Total rotation at 110 degrees is about 20 degrees, most if not all of which can be suppressed by applying external rotation to the tibia at 90 degrees."}
{"_id": "ebecd22710d711565d7276f8301753e302ffd0c2", "title": "A Layered Security Approach for Cloud Computing Infrastructure", "text": "This paper introduces a practical security model based on key security considerations by looking at a number of infrastructure aspects of Cloud Computing such as SaaS, Utility, Web, Platform and Managed Services, Service commerce platforms and Internet Integration which was introduced with a concise literature review. The purpose of this paper is to offer a macro level solution for identified common infrastructure security requirements. This model with a number of emerged patterns can be applied to infrastructure aspect of Cloud Computing as a proposed shared security approach in system development life cycle focusing on the plan-built-run scope."}
{"_id": "943d8c91ebc8a80034d163f3f6cab611460ec411", "title": "Development of a three-dimensional ball rotation sensing system using optical mouse sensors", "text": "Robots using ball(s) as spherical wheels have the advantage of omnidirectional motion. Some of these robots use only a single ball as their wheel and dynamically balance on it. However, driving a ball wheel is not straightforward. These robots usually use one or more motor-driven rollers or wheels in frictional contact with the ball wheel. Some slippage can occur with these schemes, so it is important to measure the actual rotation of the ball wheel for good control. This paper proposes one method for measuring the three dimensional rotation of a sphere, which is applicable to the case of a ball wheel. The system measures surface speed by using two or more optical mouse sensors and transforms them into the angular velocity vector of the ball, followed by integration to give the rotational angle. Experiments showed the correctness of this method, yielding an error of approximately 1%, with a 10 ms response."}
{"_id": "d68145adc8699818b1d90ec0907028c0257c9289", "title": "IMPLEMNTATION OF SIMULINK BASED MODEL USING SOBEL EDGE DETECTOR FOR DENTAL PROBLEMS Deepika Nagpal", "text": "Image Segmentation is the process of partitioning a digital image into multiple regions or sets of pixels.Edge Detection is one of the main Technique used in Segmentation.In this paper we used Sobel edge detector for segmenting the dental X-ray image.Using MATLAB,Image is segmented.Sysytem Test tool is used for the verification of the Simulink Model. The Simulink Model based Image Segmentation is a new function in image processing and offers a model based design for processing. Dental Caries is the main problem occurred in the teeths.Segmentation help to identify the places where the problems of dental caries are present."}
{"_id": "6a10db5956411e8d470ca0d063bb4e91c7f16d23", "title": "Predicting conversion from MCI to AD with FDG-PET brain images at different prodromal stages", "text": "Early diagnosis of Alzheimer disease (AD), while still at the stage known as mild cognitive impairment (MCI), is important for the development of new treatments. However, brain degeneration in MCI evolves with time and differs from patient to patient, making early diagnosis a very challenging task. Despite these difficulties, many machine learning techniques have already been used for the diagnosis of MCI and for predicting MCI to AD conversion, but the MCI group used in previous works is usually very heterogeneous containing subjects at different stages. The goal of this paper is to investigate how the disease stage impacts on the ability of machine learning methodologies to predict conversion. After identifying the converters and estimating the time of conversion (TC) (using neuropsychological test scores), we devised 5 subgroups of MCI converters (MCI-C) based on their temporal distance to the conversion instant (0, 6, 12, 18 and 24 months before conversion). Next, we used the FDG-PET images of these subgroups and trained classifiers to distinguish between the MCI-C at different stages and stable non-converters (MCI-NC). Our results show that MCI to AD conversion can be predicted as early as 24 months prior to conversion and that the discriminative power of the machine learning methods decreases with the increasing temporal distance to the TC, as expected. These findings were consistent for all the tested classifiers. Our results also show that this decrease arises from a reduction in the information contained in the regions used for classification and by a decrease in the stability of the automatic selection procedure."}
{"_id": "93159d70058f52efd42e0d3cfc18375fddfd5771", "title": "Graph Partitioning using Quantum Annealing on the D-Wave System", "text": "Graph partitioning (GP) applications are ubiquitous throughout mathematics, computer science, chemistry, physics, bio-science, machine learning, and complex systems. Post Moore's era supercomputing has provided us an opportunity to explore new approaches for traditional graph algorithms on quantum computing architectures. In this work, we explore graph partitioning using quantum annealing on the D-Wave 2X machine. Motivated by a recently proposed graph-based electronic structure theory applied to quantum molecular dynamics (QMD) simulations, graph partitioning is used for reducing the calculation of the density matrix into smaller subsystems rendering the calculation more computationally efficient. Unconstrained graph partitioning as community clustering based on the modularity metric can be naturally mapped into the Hamiltonian of the quantum annealer. On the other hand, when constraints are imposed for partitioning into equal parts and minimizing the number of cut edges between parts, a quadratic unconstrained binary optimization (QUBO) reformulation is required. This reformulation may employ the graph complement to fit the problem in the Chimera graph of the quantum annealer. Partitioning into 2 parts and k parts concurrently for arbitrary k are demonstrated with benchmark graphs, random graphs, and small material system density matrix based graphs. Results for graph partitioning using quantum and hybrid classical-quantum approaches are shown to be comparable to current \"state of the art\" methods and sometimes better."}
{"_id": "7359ef31c8bfccad8bf1f374d73db511ab74eb2e", "title": "Typology of Distributed Ledger based Business Models", "text": "The potential of distributed ledger technology and its application in various industries is a controversially debated topic. Advocates of the technology emphasize the economic benefits of decentralization and transparency, leading to cost reductions as well as the alleviation of several of today`s economic and technological problems. In contrast, critics assert that the potential of distributed ledgers might be overhyped, possibly leading to the next tech bubble. This paper contributes to the discussion by developing a typology of business models that are based on distributed ledger technology. In particular, this paper is a first step towards a more differentiated discussion on the potential of distributed ledges, by taking the underlying business models into consideration. Despite a characterization of the types, a discussion about special features of distributed ledger based business models is provided in the context of contemporary business model literature and the associated role of IT. It is proposed that future research must evaluate each business model isolated to achieve a comprehensive assessment of the potential of distributed ledgers. This paper can be interpreted as starting point for more fruitful discussions and the repeal of the partially diametrical opposed opinions towards the potentials of the technology."}
{"_id": "1259e05fb0b4f195757ad1dee3d001569afeb4cc", "title": "Learning structural SVMs with latent variables", "text": "We present a large-margin formulation and algorithm for structured output prediction that allows the use of latent variables. Our proposal covers a large range of application problems, with an optimization problem that can be solved efficiently using Concave-Convex Programming. The generality and performance of the approach is demonstrated through three applications including motiffinding, noun-phrase coreference resolution, and optimizing precision at k in information retrieval."}
{"_id": "02e43d9ca736802d72824892c864e8cfde13718e", "title": "Transferring a semantic representation for person re-identification and search", "text": "Learning semantic attributes for person re-identification and description-based person search has gained increasing interest due to attributes' great potential as a pose and view-invariant representation. However, existing attribute-centric approaches have thus far underperformed state-of-the-art conventional approaches. This is due to their nonscalable need for extensive domain (camera) specific annotation. In this paper we present a new semantic attribute learning approach for person re-identification and search. Our model is trained on existing fashion photography datasets - either weakly or strongly labelled. It can then be transferred and adapted to provide a powerful semantic description of surveillance person detections, without requiring any surveillance domain supervision. The resulting representation is useful for both unsupervised and supervised person re-identification, achieving state-of-the-art and near state-of-the-art performance respectively. Furthermore, as a semantic representation it allows description-based person search to be integrated within the same framework."}
{"_id": "03f98c175b4230960ac347b1100fbfc10c100d0c", "title": "Supervised Descent Method and Its Applications to Face Alignment", "text": "Many computer vision problems (e.g., camera calibration, image alignment, structure from motion) are solved through a nonlinear optimization method. It is generally accepted that 2nd order descent methods are the most robust, fast and reliable approaches for nonlinear optimization of a general smooth function. However, in the context of computer vision, 2nd order descent methods have two main drawbacks: (1) The function might not be analytically differentiable and numerical approximations are impractical. (2) The Hessian might be large and not positive definite. To address these issues, this paper proposes a Supervised Descent Method (SDM) for minimizing a Non-linear Least Squares (NLS) function. During training, the SDM learns a sequence of descent directions that minimizes the mean of NLS functions sampled at different points. In testing, SDM minimizes the NLS objective using the learned descent directions without computing the Jacobian nor the Hessian. We illustrate the benefits of our approach in synthetic and real examples, and show how SDM achieves state-of-the-art performance in the problem of facial feature detection. The code is available at www.humansensing.cs. cmu.edu/intraface."}
{"_id": "5758b5761c1b95557aa0f7ea83d65ba69f857978", "title": "An Optimal Algorithm for Generalized Causal Message Ordering (Abstract)", "text": "Asynchronous execution of processes and unpredictable communication delays create nondeterminism in distributed systems that complicates the design, verification, and analysis of distributed programs. To simplify the design and development of distributed applications, the idea of \u201ccausal message ordering\u201d (CO) was introduced by Birman and Joseph [1]. If for any two messages M and M\u2019 sent to the same destination d, Send(M) + Send(M\u2019), then CO ensures that Delivered + Deliver-yd(lf\u2019) at d. Existing algorithms that provide CO have high message overhead and use more storage than is required by an optimal algorithm, or make simplifying assumptions by assuming certain topolog y/communicat ion pat t ems and inbuilt synchronize ation. We present an optimal CO algorithm [2] under the following framework:"}
{"_id": "23d1255f1a0453ba96c6e6ef054903086f3656df", "title": "Private Release of Graph Statistics using Ladder Functions", "text": "Protecting the privacy of individuals in graph structured data while making accurate versions of the data available is one of the most challenging problems in data privacy. Most efforts to date to perform this data release end up mired in complexity, overwhelm the signal with noise, and are not effective for use in practice. In this paper, we introduce a new method which guarantees differential privacy. It specifies a probability distribution over possible outputs that is carefully defined to maximize the utility for the given input, while still providing the required privacy level. The distribution is designed to form a 'ladder', so that each output achieves the highest 'rung' (maximum probability) compared to less preferable outputs. We show how our ladder framework can be applied to problems of counting the number of occurrences of subgraphs, a vital objective in graph analysis, and give algorithms whose cost is comparable to that of computing the count exactly. Our experimental study confirms that our method outperforms existing methods for counting triangles and stars in terms of accuracy, and provides solutions for some problems for which no effective method was previously known. The results of our algorithms can be used to estimate the parameters of suitable graph models, allowing synthetic graphs to be sampled."}
{"_id": "ca5d86b602c4fe2625ca80ac4da6704c18f6a279", "title": "Cloud Computing : Types , Architecture , Applications , Concerns , Virtualization and Role of IT Governance in Cloud", "text": "Technology innovation and its adoption are two critical successful factors for any business/organization. Cloud computing is a recent technology paradigm that enables organizations or individuals to share various services in a seamless and cost-effective manner. This paper describes cloud computing, a computing platform for the next generation of the Internet. The paper defines clouds, types of cloud Provides, Comparison of Cloud Computing with Grid Computing, applications and concerns of Cloud Computing , Concept of Virtualization in Cloud Computing. Readers will also discover the working, Architecture and Role of I.T Governance in Cloud Computing."}
{"_id": "b61a1da721b6d61807dbb194ea7e1fde3ce63fd0", "title": "BIRNet: Brain Image Registration Using Dual-Supervised Fully Convolutional Networks", "text": "In this paper, we propose a deep learning approach for image registration by predicting deformation from image appearance. Since obtaining ground-truth deformation fields for training can be challenging, we design a fully convolutional network that is subject to dual-guidance: (1) Coarse guidance using deformation fields obtained by an existing registration method; and (2) Fine guidance using image similarity. The latter guidance helps avoid overly relying on the supervision from the training deformation fields, which could be inaccurate. For effective training, we further improve the deep convolutional network with gap filling, hierarchical loss, and multi-source strategies. Experiments on a variety of datasets show promising registration accuracy and efficiency compared with state-of-the-art methods."}
{"_id": "d072c9908850d9c0cc562b5b88bdb9df799398b2", "title": "Thin Images Reflected in the Water: Narcissism and Girls' Vulnerability to the Thin-Ideal.", "text": "The purpose of this research is to test how adolescent girls' narcissistic traits-characterized by a need to impress others and avoid ego-threat-influence acute adverse effects of thin-ideal exposure. Participants (11-15 years; total N\u2009=\u2009366; all female) reported their narcissistic traits. Next, in two experiments, they viewed images of either very thin or average-sized models, reported their wishful identification with the models (Experiment 2), and tasted high-calorie foods in an alleged taste test (both experiments). Narcissism kept girls from wishfully identifying with thin models, which is consistent with the view that narcissistic girls are prone to disengage from thin-ideal exposure. Moreover, narcissism protected vulnerable girls (those who experience low weight-esteem) from inhibiting their food intake, and led other girls (those who consider their appearance relatively unimportant) to increase their food intake. These effects did not generalize to conceptually related traits of self-esteem and perfectionism, and were not found for a low-calorie foods outcome, attesting to the specificity of findings. These experiments demonstrate the importance of narcissism at reducing girls' thin-ideal vulnerability. Girls high in narcissism disengage self-protectively from threats to their self-image, a strategy that renders at least subsets of them less vulnerable to the thin-ideal."}
{"_id": "1423547ebfa22b912c300f67b7c2460550b48a12", "title": "Channel Estimation Algorithms for OFDM Systems", "text": "In this work we have compared different type of channel estimation algorithm for Orthogonal Frequency Division Multiplexing (OFDM) systems. The result of the Adaptive Boosting (AdaBoost) algorithm was compared with other algorithms such Least Square (LS), Best Linear Unbiased Estimator (BLUE) and Minimum Mean Square Error (MMSE)."}
{"_id": "b82a8c5d0e11d4fc5f0eeb4631136b8174af98a5", "title": "EchoFlex: Hand Gesture Recognition using Ultrasound Imaging", "text": "Recent improvements in ultrasound imaging enable new opportunities for hand pose detection using wearable devices. Ultrasound imaging has remained under-explored in the HCI community despite being non-invasive, harmless and capable of imaging internal body parts, with applications including smart-watch interaction, prosthesis control and instrument tuition. In this paper, we compare the performance of different forearm mounting positions for a wearable ultrasonographic device. Location plays a fundamental role in ergonomics and performance since the anatomical features differ among positions. We also investigate the performance decrease due to cross-session position shifts and develop a technique to compensate for this misalignment. Our gesture recognition algorithm combines image processing and neural networks to classify the flexion and extension of 10 discrete hand gestures with an accuracy above 98%. Furthermore, this approach can continuously track individual digit flexion with less than 5% NRMSE, and also differentiate between digit flexion at different joints."}
{"_id": "011dcf6b9fa8d64e508ecead47c1a9a9521a3e59", "title": "Coping with Volume and Variety in Temporal Event Sequences: Strategies for Sharpening Analytic Focus", "text": "The growing volume and variety of data presents both opportunities and challenges for visual analytics. Addressing these challenges is needed for big data to provide valuable insights and novel solutions for business, security, social media, and healthcare. In the case of temporal event sequence analytics it is the number of events in the data and variety of temporal sequence patterns that challenges users of visual analytic tools. This paper describes 15 strategies for sharpening analytic focus that analysts can use to reduce the data volume and pattern variety. Four groups of strategies are proposed: (1) extraction strategies, (2) temporal folding, (3) pattern simplification strategies, and (4) iterative strategies. For each strategy, we provide examples of the use and impact of this strategy on volume and/or variety. Examples are selected from 20 case studies gathered from either our own work, the literature, or based on email interviews with individuals who conducted the analyses and developers who observed analysts using the tools. Finally, we discuss how these strategies might be combined and report on the feedback from 10 senior event sequence analysts."}
{"_id": "791a15d362ce6b8dff856e7a79df90d02e985f97", "title": "Indoor Navigation Validation Framework for Visually Impaired Users", "text": "In this paper, we introduce the first validation framework of an indoor navigation system for blind and visually impaired (BVI) users, which is a significant step toward the development of cost effective indoor way-finding solutions for BVI users. The BVI users require detailed landmark-based navigation instructions that will help them arrive at the chosen destination independently. These users will interact with the navigation instructions on a smartphone using an accessible user interface. The validation framework includes the following three main components: 1) virtual reality-based simulation that simulates a BVI user traversing and interacting with the physical environment, developed using Unity game engine; 2) generation of action codes that emulate the avatar movement in the virtual environment, developed using a natural language processing parser of the navigation instructions; and 3) accessible user interface, which enables the user to access the navigation instructions, developed using Sikuli script library. We introduce a case study that illustrates the use of the validation tool using PERCEPT system. It is our strong belief that the validation framework we provide in this paper will encourage other developers to invent indoor navigation systems for BVI users. In addition, we would like to mention that this tool is the first step of validating an indoor navigation system and should be followed by trials with human subjects."}
{"_id": "31b38e4d6c3c340349da7203bc3e4d1b4f36329e", "title": "SENTIMENT ANALYSIS IN E-COMMERCE USING RECOMMENDATION SYSTEM", "text": "In the field of text mining the sentiment analysis is one of the current research topics. Sentiment analysis is the best solution for opinions and sentiment mining from natural language. Sentiment analysis gives significant information for decision making in various domains. The sentiments include ratings, reviews and emoticons. Several sentiment detection approaches are available which affect the quality of result. This paper finding the sentiments of people related to the services of E-shopping websites. The central goal is to recommend the products to users which are posted in E-shopping website and analyzing which one is the best product. The proposed system use stochastic learning algorithm which analyze various feedbacks related to the services and then the sentiments are classified as negative, positive and neutral. The pre-processing of the data is greatly affecting the quality of detected sentiments so that the fake reviews in the website are analyzed. Finally the classification technique can be used to analysis the products which are posted in E-shopping website. The proposed system find out fake reviews made by posting fake comments about a product by identifying the MAC address along with review posting patterns. In order to find out the review is fake or genuine the proposed system find out the MAC address of the user if the system observes fake review send by the same MAC address many a times it will inform the admin to remove that review from the system. The proposed system helps the user to find out correct review of the product."}
{"_id": "0083e2483969d7679bd8b8340ea1172b148840d5", "title": "Fetal facial profile markers of Down syndrome in the second and third trimesters of pregnancy.", "text": "OBJECTIVES\nTo investigate the use of the maxilla-nasion-mandible (MNM) angle and fetal profile (FP) line to assess the degree of midfacial hypoplasia in Down-syndrome fetuses in the second and third trimesters of pregnancy.\n\n\nMETHODS\nThe MNM angle and FP line were measured retrospectively in stored two-dimensional images or three-dimensional volumes of fetuses with Down syndrome. Data collected from January 2006 to July 2013 were retrieved from the digital databases of participating units. The MNM angle was expressed as a continuous variable (degrees) and the FP line as positive, negative or zero. Measurements were obtained from stored images in the midsagittal plane by two experienced examiners and compared with our previously reported normal ranges for euploid fetuses. A MNM angle below the 5(th) centile of the reference range and a positive or negative FP line were considered as abnormal.\n\n\nRESULTS\nA total of 133 fetuses with Down syndrome were available for analysis, eight of which were subsequently excluded because of inadequate images. The MNM angle was not influenced by gestational age (P\u2009=\u20090.48) and was significantly smaller in Down-syndrome fetuses than in euploid fetuses (mean, 12.90\u00b0 vs 13.53\u00b0, respectively; P\u2009=\u20090.015). The MNM angle was below the 5th centile for euploid fetuses in 16.8% of fetuses with Down syndrome (P\u2009<\u20090.01). In the cohort of Down-syndrome fetuses, a positive FP line was present in 41.6% of cases (with a false-positive rate (FPR) of 6.3%) and was positively correlated with Down syndrome and gestational age (P\u2009<\u20090.01). There was no case with a negative FP line. In cases of Down syndrome, a positive FP line was correlated with a small MNM angle (P\u2009<\u20090.01).\n\n\nCONCLUSIONS\nA small MNM angle and a positive FP line can be regarded as novel markers for Down syndrome. The FP line is an easy marker to measure, has a low FPR, does not require knowledge of normal reference values and has the potential to differentiate between Down syndrome and trisomy 18, as, in the latter, the FP line is often negative."}
{"_id": "2555de3cadddf0e113d6c3145b61a33c01c189bb", "title": "Abstraction Learning", "text": "ion Learning Fei Deng fei.deng@rutgers.edu Jinsheng Ren rjs17@mails.tsinghua.edu.cn Feng Chen chenfeng@mail.tsinghua.edu.cn"}
{"_id": "f57db5f7c3e8958dafca96a221a38ae5a1d2f843", "title": "A Low-Noise Wideband Digital Phase-Locked Loop Based on a Coarse\u2013Fine Time-to-Digital Converter With Subpicosecond Resolution", "text": "This paper presents the design of a digital PLL which uses a high-resolution time-to-digital converter (TDC) for wide loop bandwidth. The TDC uses a time amplification technique to reduce the quantization noise down to less than 1 ps root mean square (RMS). Additionally TDC input commutation reduces low-frequency spurs due to inaccurate TDC scaling factor in a counter-assisted digital PLL. The loop bandwidth is set to 400 kHz with a 25 MHz reference. The in-band phase noise contribution from the TDC is -116 dBc/Hz, the phase noise is -117 dBc/Hz at high band (1.8 GHz band) 400 kHz offset, and the RMS phase error is 0.3deg."}
{"_id": "7823358495bbedf574b5aaf748239667a419860c", "title": "Continuous Glucose Monitoring Systems: A Review.", "text": "There have been continuous advances in the field of glucose monitoring during the last four decades, which have led to the development of highly evolved blood glucose meters, non-invasive glucose monitoring (NGM) devices and continuous glucose monitoring systems (CGMS). Glucose monitoring is an integral part of diabetes management, and the maintenance of physiological blood glucose concentration is the only way for a diabetic to avoid life-threatening diabetic complications. CGMS have led to tremendous improvements in diabetic management, as shown by the significant lowering of glycated hemoglobin (HbA1c) in adults with type I diabetes. Most of the CGMS have been minimally-invasive, although the more recent ones are based on NGM techniques. This manuscript reviews the advances in CGMS for diabetes management along with the future prospects and the challenges involved."}
{"_id": "6f441e3db17dfb6e576890eeef3da29fb3e02c4d", "title": "OWL rules: A proposal and prototype implementation", "text": "Although the OWL Web Ontology Language adds considerable expressive power to the Semantic Web it does have expressive limitations, particularly with respect to what can be said about properties. We present SWRL (the Semantic Web Rules Language), a Horn clause rules extension to OWL that overcomes many of these limitations. SWRL extends OWL in a syntactically and semantically coherent manner: the basic syntax for SWRL rules is an extension of the abstract syntax for OWL DL and OWL Lite; SWRL rules are given formal meaning via an extension of the OWL DL model-theoretic semantics; SWRL rules are given an XML syntax based on the OWL XML presentation syntax; and a mapping from SWRL rules to RDF graphs is given based on the OWL RDF/XML exchange syntax. We discuss the expressive power of SWRL, showing that the ontology consistency problem is undecidable, provide several examples of SWRL usage, and discuss a prototype implementation of reasoning support for SWRL."}
{"_id": "28f5b4fbfe5b9f83dfebe9e357bb5e90e8f98c80", "title": "The neural basis of economic decision-making in the Ultimatum Game.", "text": "The nascent field of neuroeconomics seeks to ground economic decision making in the biological substrate of the brain. We used functional magnetic resonance imaging of Ultimatum Game players to investigate neural substrates of cognitive and emotional processes involved in economic decision-making. In this game, two players split a sum of money;one player proposes a division and the other can accept or reject this. We scanned players as they responded to fair and unfair proposals. Unfair offers elicited activity in brain areas related to both emotion (anterior insula) and cognition (dorsolateral prefrontal cortex). Further, significantly heightened activity in anterior insula for rejected unfair offers suggests an important role for emotions in decision-making."}
{"_id": "5b5cbb503157e072d903036745b5b403a1b17084", "title": "Security/privacy of wearable fitness tracking IoT devices", "text": "As wearable fitness trackers gain widespread acceptance among the general population, there is a concomitant need to ensure that associated privacy and security vulnerabilities are kept to a minimum. We discuss potential vulnerabilities of these trackers, in general, and specific vulnerabilities in one such tracker - Fitbit - identified by Rahman et al. (2013) who then proposed means to address identified vulnerabilities. However, the `fix' has its own vulnerabilities. We discuss possible means to alleviate related issues."}
{"_id": "2f63c4b66626e19b881e1ce7336c3e5f3b366adf", "title": "Using BPMN to Model a BPEL Process", "text": "The Business Process Modeling Notation (BPMN) has been developed to enable business user to develop readily understandable graphical representations of business processes. BPMN is also supported with appropriate graphical object properties that will enable the generation of executable BPEL. Thus, BPMN creates a standardized bridge for the gap between the business process design and process implementation. This paper presents a simple, yet instructive example of how a BPMN diagram can be used to generate a BPEL process."}
{"_id": "61093ca3afcc72f04bebbbad7b9fd98d09438122", "title": "Cloud computing: state-of-the-art and research challenges", "text": "Cloud computing has recently emerged as a new paradigm for hosting and delivering services over the Internet. Cloud computing is attractive to business owners as it eliminates the requirement for users to plan ahead for provisioning, and allows enterprises to start from the small and increase resources only when there is a rise in service demand. However, despite the fact that cloud computing offers huge opportunities to the IT industry, the development of cloud computing technology is currently at its infancy, with many issues still to be addressed. In this paper, we present a survey of cloud computing, highlighting its key concepts, architectural principles, state-of-the-art implementation as well as research challenges. The aim of this paper is to provide a better understanding of the design challenges of cloud computing and identify important research directions in this increasingly important area."}
{"_id": "0f950ab3a02ea2ef46b3db67f7bae7e0dff11823", "title": "Generalized Pythagoras Trees for visualizing hierarchies", "text": "Pythagoras Trees are fractals that can be used to depict binary hierarchies. But this binary encoding is an obstacle for visualizing hierarchical data such as file systems or phylogenetic trees, which branch into n sub-hierarchies. Although any hierarchy can be modeled as a binary one by subsequently dividing n-ary branches into a sequence of n \u2212 1 binary branches, we follow a different strategy. In our approach extending Pythagoras Trees to arbitrarily branching trees, we only need a single visual element for an n-ary branch instead of spreading the binary branches along a strand. Each vertex in the hierarchy is visualized as a rectangle sized according to a metric. We analyze several visual parameters such as length, width, order, and color of the nodes against the use of different metrics. The usefulness of our technique is illustrated by two case studies visualizing directory structures and a large phylogenetic tree. We compare our approach with existing tree diagrams and discuss questions of geometry, perception, readability, and aesthetics."}
{"_id": "2f54f8d7435852244f4a9b504bd51f291ffeac8d", "title": "Effective missing data prediction for collaborative filtering", "text": "Memory-based collaborative filtering algorithms have been widely adopted in many popular recommender systems, although these approaches all suffer from data sparsity and poor prediction quality problems. Usually, the user-item matrix is quite sparse, which directly leads to inaccurate recommendations. This paper focuses the memory-based collaborative filtering problems on two crucial factors: (1) similarity computation between users or items and (2) missing data prediction algorithms. First, we use the enhanced Pearson Correlation Coefficient (PCC) algorithm by adding one parameter which overcomes the potential decrease of accuracy when computing the similarity of users or items. Second, we propose an effective missing data prediction algorithm, in which information of both users and items is taken into account. In this algorithm, we set the similarity threshold for users and items respectively, and the prediction algorithm will determine whether predicting the missing data or not. We also address how to predict the missing data by employing a combination of user and item information. Finally, empirical studies on dataset MovieLens have shown that our newly proposed method outperforms other state-of-the-art collaborative filtering algorithms and it is more robust against data sparsity."}
{"_id": "78a00de20967358b5d4dc53a0d6ab1cf59418a68", "title": "ItemRank: A Random-Walk Based Scoring Algorithm for Recommender Engines", "text": "Recommender systems are an emerging technology that helps consumers to find interesting products. A recommender system makes personalized product suggestions by extracting knowledge from the previous users interactions. In this paper, we present \u201dItemRank\u201d, a random\u2013walk based scoring algorithm, which can be used to rank products according to expected user preferences, in order to recommend top\u2013rank items to potentially interested users. We tested our algorithm on a standard database, the MovieLens data set, which contains data collected from a popular recommender system on movies, that has been widely exploited as a benchmark for evaluating recently proposed approaches to recommender system (e.g. [Fouss et al., 2005; Sarwar et al., 2002]). We compared ItemRank with other state-of-the-art ranking techniques (in particular the algorithms described in [Fouss et al., 2005]). Our experiments show that ItemRank performs better than the other algorithms we compared to and, at the same time, it is less complex than other proposed algorithms with respect to memory usage and computational cost too."}
{"_id": "0265769b0fbf86bb0e700573c80e388bb54c3f7a", "title": "Using Linear Algebra for Intelligent Information Retrieval", "text": "Currently most approaches to retrieving textual materials from scienti c databases depend on a lexical match between words in users requests and those in or assigned to documents in a database Because of the tremendous diversity in the words people use to describe the same document lexical methods are necessarily incomplete and imprecise Using the singular value decomposition SVD one can take advantage of the implicit higher order structure in the association of terms with documents by determining the SVD of large sparse term by document matrices Terms and documents represented by of the largest singular vectors are then matched against user queries We call this retrieval method Latent Semantic Indexing LSI because the subspace represents important associative relationships between terms and documents that are not evident in individual documents LSI is a completely automatic yet intelligent indexing method widely applicable and a promising way to improve users access to many kinds of textual materials or to documents and services for which textual descriptions are available A survey of the computational requirements for managing LSI encoded databases as well as current and future applications of LSI is presented"}
{"_id": "d4bac775ab95fdb7bcabf4e587d2c035849ebe08", "title": "On the Duration, Addressability, and Capacity of Memory-Augmented Recurrent Neural Networks", "text": "Memory-augmented recurrent neural networks (M-RNNs) have demonstrated empirically that they are very attractive for many applications, but a good theoretical understanding of their behaviors is unclear yet. In this paper, three analytical indicators named duration, addressability, and capacity of general forms of the additional memory in M-RNNs are formalized. The analysis results of the interactions among these indicators reveal that it is hard for an M-RNN to simultaneously provide good performance on more than two out of three of indicators. Meanwhile, the duration, addressability, and capacity are applied to analyze and compare two M-RNNs: long short term memory and neural turing machine for different cases. The comparison results show that none of the models has better performance on one indicator than the other model all the time. Moreover, it is found that separating memory system into sub-memories can bring the increasing duration and addressability and the decreasing capacity for the whole memory system."}
{"_id": "fe4f36731311fa013cd083a7e3c961f392325afc", "title": "Top-Down Induction of First-Order Logical Decision Trees", "text": ""}
{"_id": "12940f962d2434758555003a6d843284a0223eb2", "title": "Effective Implementation of DGEMM on Modern Multicore CPU", "text": "In this paper we will present a detailed study on tuning double-precision matrix-matrix multiplication (DGEMM) on the Intel Xeon E5-2680 CPU. We selected an optimal algorithm from the instruction set perspective as well software tools optimized for Intel Advance Vector Extensions (AVX). Our optimizations included the use of vector memory operations, and AVX instructions. Our proposed algorithm achieves a performance improvement of 33% compared to the latest results achieved using the Intel Math Kernel Library DGEMM subroutine."}
{"_id": "7385b1bcd94ed3904c0e1429e209c80249de0d2a", "title": "New trends in gender and mathematics performance: a meta-analysis.", "text": "In this article, we use meta-analysis to analyze gender differences in recent studies of mathematics performance. First, we meta-analyzed data from 242 studies published between 1990 and 2007, representing the testing of 1,286,350 people. Overall, d = 0.05, indicating no gender difference, and variance ratio = 1.08, indicating nearly equal male and female variances. Second, we analyzed data from large data sets based on probability sampling of U.S. adolescents over the past 20 years: the National Longitudinal Surveys of Youth, the National Education Longitudinal Study of 1988, the Longitudinal Study of American Youth, and the National Assessment of Educational Progress. Effect sizes for the gender difference ranged between -0.15 and +0.22. Variance ratios ranged from 0.88 to 1.34. Taken together, these findings support the view that males and females perform similarly in mathematics."}
{"_id": "9b1dbf0c6aa8e9ebce91bde061801852a71ad72f", "title": "Fuzzy logic control of energy storage system in microgrid operation", "text": "Recent development in Renewable Energy Sources (RES) have led to a higher penetration in existing power systems. As the majority of RES are intermittent by nature, it presents major challenges to the grid operators. An Energy Storage System (ESS) can be connected to mitigate this intermittent sources. When multiple renewable energy sources, flexible loads and ESS are connected to the grid, complexity of the system is significantly increased. Such problems have multiple constraints and objective hence it is challenging to design an effective rule-based control strategy. A control strategy based on fuzzy logic which is similar to human reasoning tolerates uncertainties and imprecision. The proposed fuzzy inference system (FIS) aims to reduce the grid fluctuation and increase the energy storage life-cycle by deciding when and how much to charge/discharge the ESS. A real data was used to test and validate the proposed FIS. In this paper, MATLAB/Simulink is used to create and implement the microgrid test bench and FIS. The proposed methodology is tested against a rule-based control strategy. Simulation studies were carried out on the developed model and results have shown that the proposed FIS can effectively reduce the fluctuation and prolong the life cycle of the ESS."}
{"_id": "f1040d6862ddb7480e80e5e482a66fb144101508", "title": "Multiple Text Document Summarization System using hybrid Summarization technique", "text": "Text Summarization plays an important role in the area of text mining and natural language processing. As the information resources are increasing tremendously, readers are overloaded with loads of information. Finding out the relevant data and manually summarizing it in short time is much more difficult, challenging and tedious task for a human being. Text Summarization aims to compress the source text into a shorter and concise form with preserving its information content and overall meaning. Summarization can be classified into two main categories i.e. extractive summarization and abstractive summarization. This paper presents a novel approach to generate abstractive summary from extractive summary using WordNet ontology. An experimental result shows the generated summary in well-compressed, grammatically correct and human readable format."}
{"_id": "b6e87d90297634bea29f49f8ffc1a72fc00c27a1", "title": "The facial width-to-height ratio determines interpersonal distance preferences in the observer.", "text": "Facial width-to-height ratio (fWHR) is correlated with a number of aspects of aggressive behavior in men. Observers appear to be able to assess aggressiveness from male fWHR, but implications for interpersonal distance preferences have not yet been determined. This study utilized a novel computerized stop-distance task to examine interpersonal space preferences of female participants who envisioned being approached by a man; men's faces photographed posed in neutral facial expressions were shown in increasing size to mimic approach. We explored the effect of the men's fWHR, their behavioral aggression (measured previously in a computer game), and women's ratings of the men's aggressiveness, attractiveness, and masculinity on the preferred interpersonal distance of 52 German women. Hierarchical linear modelling confirmed the relationship between the fWHR and trait judgements (ratings of aggressiveness, attractiveness, and masculinity). There were effects of fWHR and actual aggression on the preferred interpersonal distance, even when controlling statistically for men's and the participants' age. Ratings of attractiveness, however, was the most influential variable predicting preferred interpersonal distance. Our results extend earlier findings on fWHR as a cue of aggressiveness in men by demonstrating implications for social interaction. In conclusion, women are able to accurately detect aggressiveness in emotionally neutral facial expressions, and adapt their social distance preferences accordingly."}
{"_id": "774169dce5395df1ed9e34ca5823c787ada833ce", "title": "Denoising autoencoder with modulated lateral connections learns invariant representations of natural images", "text": "Suitable lateral connections between encoder and decoder are shown to allow higher layers of a denoising autoencoder (dAE) to focus on invariant representations. In regular autoencoders, detailed information needs to be carried through the highest layers but lateral connections from encoder to decoder relieve this pressure. It is shown that abstract invariant features can be translated to detailed reconstructions when invariant features are allowed to modulate the strength of the lateral connection. Three dAE structures with modulated and additive lateral connections, and without lateral connections were compared in experiments using real-world images. The experiments verify that adding modulated lateral connections to the model 1) improves the accuracy of the probability model for inputs, as measured by denoising performance; 2) results in representations whose degree of invariance grows faster towards the higher layers; and 3) supports the formation of diverse invariant poolings."}
{"_id": "6286495e4c509c18524163c71146438cc39d7c9b", "title": "Short text classification based on Wikipedia and Word2vec", "text": "Different from long texts, the features of Chinese short texts is much sparse, which is the primary cause of the low accuracy in the classification of short texts by using traditional classification methods. In this paper, a novel method was proposed to tackle the problem by expanding the features of short text based on Wikipedia and Word2vec. Firstly, build the semantic relevant concept sets of Wikipedia. We get the articles that have high relevancy with Wikipedia concepts and use the word2vec tools to measure the semantic relatedness between target concepts and related concepts. And then we use the relevant concept sets to extend the short texts. Compared to traditional similarity measurement between concepts using statistical method, this method can get more accurate semantic relatedness. The experimental results show that by expanding the features of short texts, the classification accuracy can be improved. Specifically, our method appeared to be more effective."}
{"_id": "54209b06d3c976398fd0b35e6fc08107dd648af5", "title": "The effect of shame and shame memories on paranoid ideation and social anxiety.", "text": "BACKGROUND\nSocial wariness and anxiety can take different forms. Paranoid anxiety focuses on the malevolence of others, whereas social anxiety focuses on the inadequacies in the self in competing for social position and social acceptance. This study investigates whether shame and shame memories are differently associated with paranoid and social anxieties.\n\n\nMETHOD\nShame, traumatic impact of shame memory, centrality of shame memory, paranoia and social anxiety were assessed using self-report questionnaires in 328 participants recruited from the general population.\n\n\nRESULTS\nResults from path analyses show that external shame is specifically associated with paranoid anxiety. In contrast, internal shame is specifically associated with social anxiety. In addition, shame memories, which function like traumatic memories, or that are a central reference point to the individual's self-identity and life story, are significantly associated with paranoid anxiety, even when current external and internal shame are considered at the same time. Thus, traumatic impact of shame memory and centrality of shame memory predict paranoia (but not social anxiety) even when considering for current feelings of shame.\n\n\nCONCLUSION\nOur study supports the evolutionary model suggesting there are two different types of 'conspecific' anxiety, with different evolutionary histories, functions and psychological processes. Paranoia, but less so social anxiety, is associated with traumatic impact and the centrality of shame memories. Researchers and clinicians should distinguish between types of shame memory, particularly those where the self might have felt vulnerable and subordinate and perceived others as threatening and hostile, holding malevolent intentions towards the self."}
{"_id": "87f0c2e44da1677df5fb1b57e94031522717537e", "title": "Measuring Computer Performance: A Practitioner's Guide", "text": "A practitioner's guide Measuring computer performance sets out the fundamental techniques used in analyzing and understanding the performance of computer systems. Throughout the book, the emphasis is on practical methods of measurement, simulation and analytical modeling. The author discusses performance metrics and provides detailed coverage of the strategies used in benchmark programs. He gives intuitive explanations of the key statistical tools needed to interpret measured performance data. He also describes the genera\u00ecdesign of experiments' technique, and shows how the maximum amount of information can be obtained for the minimum effort. The book closes with a chapter on the technique of queueing analysis. Appendices listing common probability distributions and statistical tables are included, along with a glossary of important technical terms. This practically oriented book will be of great interest to anyone who wants a detailed, yet intuitive, understanding of computer systems performance analysis."}
{"_id": "395362e847b5e9a5018c988a7f75a6e587753b83", "title": "Two hearts in three-quarter time : How to waltz the social media / viral marketing dance", "text": "The concept of viral marketing has been discussed in the literature for over 15 years, since Jeffrey Rayport first introduced the term in 1996. However, the more widespread use of social media has recently pushed this idea to a whole new level. We provide insight into the relationship between social media and viral marketing, and illustrate the six steps executives should take in order to dance the social media/viral marketing waltz. We define viral marketing as electronic word-of-mouth whereby some form of marketing message related to a company, brand, or product is transmitted in an exponentially growing way\u00e2\u20ac\u201doften through the use of social media applications. We consider the three conditions that need to be fulfilled to create a viral marketing epidemic (i.e., giving the right message to the right messengers in the right environment) and present four different groups of social media viral marketing campaigns (nightmares, strokes-of-luck, homemade issues, and triumphs). We conclude Purchase Export Previous article Next article Check if you have access through your login credentials or your institution."}
{"_id": "bff411fd40bf3e40c9b4f61554dc3370710e2d43", "title": "A 5MHz, 24V-to-1.2V, AO2T current mode buck converter with one-cycle transient response and sensorless current detection for medical meters", "text": "With a stringent input-to-output conversion ratio (CR) of 20 (24V input and 1.2V output) for a DC-DC converter, two stage cascaded architectures are easy to implement, but suffer from poor efficiency and doubled number of power components. This paper presents a single-stage solution with a proposed Adaptive ON-OFF Time (AO2T) control. In steady state, the control works as an adaptive ON-time valley current mode control to accommodate the large CR. During load transient periods, both ON- and OFF-time are adaptively adjustable to instantaneous load change, in order to accomplish fast load transient response within one switching cycle. To facilitate the high speed current mode control, a sensorless current detection circuit is also proposed. Operating at 5MHz, the converter achieves a maximum efficiency of 89.8% at 700mA and an efficiency of 85% at 2A full load. During a load current slew rate of 1.8A/200ns, the undershoot/overshoot voltages at VO are 23mV and 37mV respectively."}
{"_id": "663a83b658e99d21c810bb79f6d08d75ef41b5fe", "title": "Metamaterial-Substrate Antenna Array for MIMO Communication System", "text": "We demonstrate how a magnetic permeability enhanced metamaterial can enhance the antenna array of a multiple-input multiple-output(MIMO) communication system. The performance of a rectangular patch antenna array on a metamaterial substrate was studied relative to a similar array constructed on a conventional FR4 substrate. Differently spaced arrays were analytically compared using array correlation coefficients and mean effective gain as performance metrics. Achievable channel capacity were obtained through channel measurements made on a MIMO testbed. While results show that arrays on conventional FR4 substrates have higher capacity due to gain and efficiency factors, arrays can be made smaller, and have less mutual coupling and correlation coefficients, when using a metamaterial substrate, but the antenna built on the metamaterial substrate can be made more efficient through the use of better host materials. This was reflected in the analysis of both antenna arrays normalized to remove efficiency and gain differences where they showed similar performances. Hence, metamaterial substrates are a cost-effective solution when antenna miniaturization is a key design criteria compared to conventional substrates that achieve the same miniaturization factor without significantly sacrificing performance."}
{"_id": "6e578d6e9531dbf0d948081fe109df9b254ad4c4", "title": "The Simpler The Better: A Unified Approach to Predicting Original Taxi Demands based on Large-Scale Online Platforms", "text": "Taxi-calling apps are gaining increasing popularity for their efficiency in dispatching idle taxis to passengers in need. To precisely balance the supply and the demand of taxis, online taxicab platforms need to predict the Unit Original Taxi Demand (UOTD), which refers to the number of taxi-calling requirements submitted per unit time (e.g., every hour) and per unit region (e.g., each POI). Predicting UOTD is non-trivial for large-scale industrial online taxicab platforms because both accuracy and flexibility are essential. Complex non-linear models such as GBRT and deep learning are generally accurate, yet require labor-intensive model redesign after scenario changes (e.g., extra constraints due to new regulations). To accurately predict UOTD while remaining flexible to scenario changes, we propose LinUOTD, a unified linear regression model with more than 200 million dimensions of features. The simple model structure eliminates the need of repeated model redesign, while the high-dimensional features contribute to accurate UOTD prediction. We further design a series of optimization techniques for efficient model training and updating. Evaluations on two large-scale datasets from an industrial online taxicab platform verify that LinUOTD outperforms popular non-linear models in accuracy. We envision our experiences to adopt simple linear models with high-dimensional features in UOTD prediction as a pilot study and can shed insights upon other industrial large-scale spatio-temporal prediction problems."}
{"_id": "725fe7d620e8b318be4b6c9cea9da24a8be93fbb", "title": "KeLP: a Kernel-based Learning Platform for Natural Language Processing", "text": "Kernel-based learning algorithms have been shown to achieve state-of-the-art results in many Natural Language Processing (NLP) tasks. We present KELP, a Java framework that supports the implementation of both kernel-based learning algorithms and kernel functions over generic data representation, e.g. vectorial data or discrete structures. The framework has been designed to decouple kernel functions and learning algorithms: once a new kernel function has been implemented it can be adopted in all the available kernelmachine algorithms. The platform includes different Online and Batch Learning algorithms for Classification, Regression and Clustering, as well as several Kernel functions, ranging from vector-based to structural kernels. This paper will show the main aspects of the framework by applying it to different NLP tasks."}
{"_id": "e928564981b35eccc1035df3badf74de7611d9cc", "title": "Linear Machine Decision Trees", "text": "This article presents an algorithm for inducing multiclass decision trees with multivariate tests at internal decision nodes. Each test is constructed by training a linear machine and eliminating variables in a controlled manner. Empirical results demonstrate that the algorithm builds small accurate trees across a variety of tasks."}
{"_id": "706418cef23f6b0cc480426a1f4fe564a298dece", "title": "Biological network analysis and comparison: mining new biological knowledge", "text": "The mechanisms underlying life machinery are still not completely understood. Something is known, something is \u201cprobably\u201d known, other things are still unknown. Scientists all over the world are working very hard to clarify the processes regulating the cell life cycle and bioinformaticians try to support them by developing specialized automated tools. Within the plethora of applications devoted to the study of life mechanisms, tools for the analysis and comparison of biological networks are catching the attention of many researchers. It is interesting to investigate why."}
{"_id": "c31c71fb2d12acc3b6ee37295171ea57d47f53cd", "title": "The Function of Fiction is the Abstraction and Simulation of Social Experience.", "text": "Fiction literature has largely been ignored by psychology researchers because its only function seems to be entertainment, with no connection to empirical validity. We argue that literary narratives have a more important purpose. They offer models or simulations of the social world via abstraction, simplification, and compression. Narrative fiction also creates a deep and immersive simulative experience of social interactions for readers. This simulation facilitates the communication and understanding of social information and makes it more compelling, achieving a form of learning through experience. Engaging in the simulative experiences of fiction literature can facilitate the understanding of others who are different from ourselves and can augment our capacity for empathy and social inference."}
{"_id": "921470d44320a5dc4c144278cef1dc157b7b81f4", "title": "Machine Learning to Balance the Load in Parallel Branch-and-Bound", "text": "We describe in this paper a new approach to parallelize branchand-bound on a certain number of processors. We propose to split the optimization of the original problem into the optimization of several subproblems that can be optimized separately with the goal that the amount of work that each processor carries out is balanced between the processors, while achieving interesting speedups. The main innovation of our approach consists in the use of machine learning to create a function able to estimate the difficulty (number of nodes) of a subproblem of the original problem. We also present a set of features that we developed in order to characterize the encountered subproblems. These features are used as input of the function learned with machine learning in order to estimate the difficulty of a subproblem. The estimates of the numbers of nodes are then used to decide how to partition the original optimization tree into a given number of subproblems, and to decide how to distribute them among the available processors. The experiments that we carry out show that our approach succeeds in balancing the amount of work between the processors, and that interesting speedups can be achieved with little effort."}
{"_id": "7b03d0f65f523817a04a94e91bb749305a057d9d", "title": "Cervical radiculopathy.", "text": "Cervical radiculopathy is the result of irritation and/or compression of nerve root as it exits the cervical spine. Pain is a common presenting symptom and may be accompanied by motor or sensory deficits in areas innervated by the affected nerve root. Diagnosis is suggested by history and corresponding physical examination findings. Confirmation is achieved with MRI. A multimodal approach to treatment helps patients improve. Medications may be used to alleviate symptoms and manage pain. Physical therapy and manipulation may improve neck discomfort. Guided corticosteroid injections and selected nerve blocks may help control nerve root pain. Most patients improve with a conservative, nonoperative treatment course."}
{"_id": "76fa67f573e986d7f4d85cd21222b10fb20b622d", "title": "Flower species recognition system using convolution neural networks and transfer learning", "text": "Automatic identification and recognition of medicinal plant species in environments such as forests, mountains and dense regions is necessary to know about their existence. In recent years, plant species recognition is carried out based on the shape, geometry and texture of various plant parts such as leaves, stem, flowers etc. Flower based plant species identification systems are widely used. While modern search engines provide methods to visually search for a query image that contains a flower, it lacks in robustness because of the intra-class variation among millions of flower species around the world. Hence in this proposed research work, a Deep learning approach using Convolutional Neural Networks (CNN) is used to recognize flower species with high accuracy. Images of the plant species are acquired using the built-in camera module of a mobile phone. Feature extraction of flower images is performed using a Transfer Learning approach (i.e. extraction of complex features from a pre-trained network). A machine learning classifier such as Logistic Regression or Random Forest is used on top of it to yield a higher accuracy rate. This approach helps in minimizing the hardware requirement needed to perform the computationally intensive task of training a CNN. It is observed that, CNN combined with Transfer Learning approach as feature extractor outperforms all the handcrafted feature extraction methods such as Local Binary Pattern (LBP), Color Channel Statistics, Color Histograms, Haralick Texture, Hu Moments and Zernike Moments. CNN combined with Transfer Learning approach yields impressive Rank-1 accuracies of 73.05%, 93.41% and 90.60% using OverFeat, Inception-v3 and Xception architectures, respectively as Feature Extractors on FLOWERS102 dataset."}
{"_id": "063e5be439030fd0ba54a9636d101aa6b8bc5d2a", "title": "Deep learning of binary hash codes for fast image retrieval", "text": "Approximate nearest neighbor search is an efficient strategy for large-scale image retrieval. Encouraged by the recent advances in convolutional neural networks (CNNs), we propose an effective deep learning framework to generate binary hash codes for fast image retrieval. Our idea is that when the data labels are available, binary codes can be learned by employing a hidden layer for representing the latent concepts that dominate the class labels. The utilization of the CNN also allows for learning image representations. Unlike other supervised methods that require pair-wised inputs for binary code learning, our method learns hash codes and image representations in a point-wised manner, making it suitable for large-scale datasets. Experimental results show that our method outperforms several state-of-the-art hashing algorithms on the CIFAR-10 and MNIST datasets. We further demonstrate its scalability and efficacy on a large-scale dataset of 1 million clothing images."}
{"_id": "2e384f057211426ac5922f1b33d2aa8df5d51f57", "title": "Describing objects by their attributes", "text": "We propose to shift the goal of recognition from naming to describing. Doing so allows us not only to name familiar objects, but also: to report unusual aspects of a familiar object (\u201cspotty dog\u201d, not just \u201cdog\u201d); to say something about unfamiliar objects (\u201chairy and four-legged\u201d, not just \u201cunknown\u201d); and to learn how to recognize new objects with few or no visual examples. Rather than focusing on identity assignment, we make inferring attributes the core problem of recognition. These attributes can be semantic (\u201cspotty\u201d) or discriminative (\u201cdogs have it but sheep do not\u201d). Learning attributes presents a major new challenge: generalization across object categories, not just across instances within a category. In this paper, we also introduce a novel feature selection method for learning attributes that generalize well across categories. We support our claims by thorough evaluation that provides insights into the limitations of the standard recognition paradigm of naming and demonstrates the new abilities provided by our attribute-based framework."}
{"_id": "448efcae3b97aa7c01b15c6bc913d4fbb275f644", "title": "Style Finder : Fine-Grained Clothing Style Recognition and Retrieval", "text": "With the rapid proliferation of smartphones and tablet computers, search has moved beyond text to other modalities like images and voice. For many applications like Fashion, visual search offers a compelling interface that can capture stylistic visual elements beyond color and pattern that cannot be as easily described using text. However, extracting and matching such attributes remains an extremely challenging task due to high variability and deformability of clothing items. In this paper, we propose a fine-grained learning model and multimedia retrieval framework to address this problem. First, an attribute vocabulary is constructed using human annotations obtained on a novel finegrained clothing dataset. This vocabulary is then used to train a fine-grained visual recognition system for clothing styles. We report benchmark recognition and retrieval results on Women\u2019s Fashion Coat Dataset and illustrate potential mobile applications for attribute-based multimedia retrieval of clothing items and image annotation."}
{"_id": "ada53a115e1551f3fbad3dc5930c1187473a78a4", "title": "Efficient Object Category Recognition Using Classemes", "text": "We introduce a new descriptor for images which allows the construction of efficient and compact classifiers with good accuracy on object category recognition. The descriptor is the output of a large number of weakly trained object category classifiers on the image. The trained categories are selected from an ontology of visual concepts, but the intention is not to encode an explicit decomposition of the scene. Rather, we accept that existing object category classifiers often encode not the category per se but ancillary image characteristics; and that these ancillary characteristics can combine to represent visual classes unrelated to the constituent categories\u2019 semantic meanings. The advantage of this descriptor is that it allows object-category queries to be made against image databases using efficient classifiers (efficient at test time) such as linear support vector machines, and allows these queries to be for novel categories. Even when the representation is reduced to 200 bytes per image, classification accuracy on object category recognition is comparable with the state of the art (36% versus 42%), but at orders of magnitude lower computational cost."}
{"_id": "18bba31182e6d54ce86c46253741ef19c3588d57", "title": "Low-Profile Dual-Wideband Inverted-T Open Slot Antenna for the LTE/WWAN Tablet Computer With a Metallic Frame", "text": "An inverted-T open slot (ITOS) antenna with a low profile suitable for the LTE/WWAN tablet computer with a metallic frame is presented. With a low profile of 7 mm above the top edge of the device ground plane of the tablet computer, the ITOS antenna can provide two wide operating bands of 698-960 and 1710-2690 MHz to cover the LTE/WWAN operation. The ITOS antenna shows a simple structure and can fit in the narrow region between the metallic frame and the display panel of the tablet computer. The lower and higher operating bands of the antenna can be, respectively, controlled by the low-band and high-band feeds thereof. The low-band feed excites the antenna's longer slot arm to obtain a wide lower band covering the 698-960 MHz band. The high-band feed excites the antenna's shorter slot arm to achieve a wide higher band covering the 1710-2690 MHz band. Working principle of the low-profile ITOS antenna for generating the dual-wideband operation is described. Simulated and experimental results of the antenna are presented and discussed."}
{"_id": "87de954d3d35aee2c470715059de707d0bef101f", "title": "Locational marginal pricing basics for restructured wholesale power markets", "text": "Although Locational Marginal Pricing (LMP) plays an important role in many restructured wholesale power markets, the detailed derivation of LMPs as actually used in industry practice is not readily available. This lack of transparency greatly hinders the efforts of researchers to evaluate the performance of these markets. In this paper, different AC and DC optimal power flow (OPF) models are presented to help understand the derivation of LMPs. As a byproduct of this analysis, the paper provides a rigorous explanation of the basic LMP and LMP-decomposition formulas (neglecting real power losses) presented without derivation in the business practice manuals of the U.S. Midwest Independent System Operator (MISO)."}
{"_id": "1aa84de3a79f1fea7638ce79b200c10ceb8a4495", "title": "On the O(1=k) convergence of asynchronous distributed alternating Direction Method of Multipliers", "text": "We consider a network of agents that are cooperatively solving a global optimization problem, where the objective function is the sum of privately known local objective functions of the agents and the decision variables are coupled via linear constraints. Recent literature focused on special cases of this formulation and studied their distributed solution through either subgradient based methods with O(1/\u221ak) rate of convergence (where k is the iteration number) or Alternating Direction Method of Multipliers (ADMM) based methods, which require a synchronous implementation and a globally known order on the agents. In this paper, we present a novel asynchronous ADMM based distributed method for the general formulation and show that it converges at the rate O (1=k)."}
{"_id": "5b14660522550753ecf5a1e44da7d0526cac4b0a", "title": "Joint Optimization of Resource Provisioning in Cloud Computing", "text": "Cloud computing exploits virtualization to provision resources efficiently. Increasingly, Virtual Machines (VMs) have high bandwidth requirements; however, previous research does not fully address the challenge of both VM and bandwidth provisioning. To efficiently provision resources, a joint approach that combines VMs and bandwidth allocation is required. Furthermore, in practice, demand is uncertain. Service providers allow the reservation of resources. However, due to the dangers of over- and under-provisioning, we employ stochastic programming to account for this risk. To improve the efficiency of the stochastic optimization, we reduce the problem space with a scenario tree reduction algorithm, that significantly increases tractability, whilst remaining a good heuristic. Further we perform a sensitivity analysis that finds the tolerance of our solution to parameter changes. Based on historical demand data, we use a deterministic equivalent formulation to find that our solution is optimal and responds well to changes in parameter values. We also show that sensitivity analysis of prices can be useful for both users and providers in maximizing cost efficiency."}
{"_id": "50e78da0b2114a031b272f793f1967cb261d01da", "title": "Psychophysiology of False Memories in a Deese-Roediger-McDermott Paradigm with Visual Scenes", "text": "Remembering something that has not in fact been experienced is commonly referred to as false memory. The Deese-Roediger-McDermott (DRM) paradigm is a well-elaborated approach to this phenomenon. This study attempts to investigate the peripheral physiology of false memories induced in a visual DRM paradigm. The main research question is whether false recognition is different from true recognition in terms of accompanying physiological responses.Sixty subjects participated in the experiment, which included a study phase with visual scenes each showing a group of interrelated items in social contexts. Subjects were divided into an experimental group undergoing a classical DRM design and a control group without DRM manipulation. The control group was implemented in order to statistically control for possible biases produced by memorability differences between stimulus types. After a short retention interval, a pictorial recognition phase was conducted in the manner of a Concealed Information Test. Simultaneous recordings of electrodermal activity, respiration line length, phasic heart rate, and finger pulse waveform length were used. Results yielded a significant Group by Item Type interaction, showing that true recognition is accompanied by greater electrodermal activity than false recognition.Results are discussed in the light of Sokolov's Orienting Reflex, the Preliminary Process Theory and the Concealed Information Test. Implications and restrictions of the introduced design features are critically discussed. This study demonstrates the applicability of measures of peripheral physiology to the field of false memory research."}
{"_id": "aaa2c6f6278c233cb3c83d12698f1e2b76db17fa", "title": "Making business sense of the Internet.", "text": "For managers in large, well-established businesses, the Internet is a tough nut to crack. It is very simple to set up a Web presence and very difficult to create a Web-based business model. Established businesses that over decades have carefully built brands and physical distribution relationships risk damaging all they have created when they pursue commerce through the Net. Still, managers can't avoid the impact of electronic commerce on their businesses. They need to understand the opportunities available to them and recognize how their companies may be vulnerable if rivals seize those opportunities first. Broadly speaking, the Internet presents four distinct types of opportunities. First, it links companies directly to customers, suppliers, and other interested parties. Second, it lets companies bypass other players in an industry's value chain. Third, it is a tool for developing and delivering new products and services to new customers. Fourth, it will enable certain companies to dominate the electronic channel of an entire industry or segment, control access to customers, and set business rules. As he elaborates on these four points, the author gives established companies a systematic way to sort through the risks and rewards of doing business in cyberspace."}
{"_id": "54598a8872516bf85b9dbe79bfa573351b0f2042", "title": "Transductive Event Classification through Heterogeneous Networks", "text": "Events can be defined as \"something that occurs at specific place and time associated with some specific actions\". In general, events extracted from news articles and social networks are used to map the information from web to the various phenomena that occur in our physical world. One of the main steps to perform this relationship is the use of machine learning algorithms for event classification, which has received great attention in the web document engineering field in recent years. Traditional machine learning algorithms are based on vector space model representations and supervised classification. However, events are composed of multiple representations such as textual data, temporal information, geographic location and other types of metadata. All these representations are poorly represented together in a vector space model. Moreover, supervised classification requires the labeling of a significant sample of events to construct a training set for learning process, thereby hampering the practical application of event classification. In this paper, we propose a method called TECHN (Transductive Event Classification through Heterogeneous Networks), which considers event metadata as different objects in an heterogeneous network. Besides, the TECHN method has the ability to automatically learn which types of network objects (event metadata) are most efficient in the classification task. In addition, our TECHN method is based on a transductive classification that considers both labeled events and a vast amount of unlabeled events. The experimental results show that TECHN method obtains promising results, especially when we consider different weights of importance for each type of event metadata and a small set of labeled events."}
{"_id": "e2469446d791b1159a2e6f8ee925e57ef1f3e788", "title": "Executive Functions Predict the Success of Top-Soccer Players", "text": "While the importance of physical abilities and motor coordination is non-contested in sport, more focus has recently been turned toward cognitive processes important for different sports. However, this line of studies has often investigated sport-specific cognitive traits, while few studies have focused on general cognitive traits. We explored if measures of general executive functions can predict the success of a soccer player. The present study used standardized neuropsychological assessment tools assessing players' general executive functions including on-line multi-processing such as creativity, response inhibition, and cognitive flexibility. In a first cross-sectional part of the study we compared the results between High Division players (HD), Lower Division players (LD) and a standardized norm group. The result shows that both HD and LD players had significantly better measures of executive functions in comparison to the norm group for both men and women. Moreover, the HD players outperformed the LD players in these tests. In the second prospective part of the study, a partial correlation test showed a significant correlation between the result from the executive test and the numbers of goals and assists the players had scored two seasons later. The results from this study strongly suggest that results in cognitive function tests predict the success of ball sport players."}
{"_id": "82070bef06578a24e63b2b739ec86d4d31bb576a", "title": "Dataset Augmentation in Feature Space", "text": "Dataset augmentation, the practice of applying a wide array of domain-specific transformations to synthetically expand a training set, is a standard tool in supervised learning. While effective in tasks such as visual recognition, the set of transformations must be carefully designed, implemented, and tested for every new domain, limiting its re-use and generality. In this paper, we adopt a simpler, domain-agnostic approach to dataset augmentation. We start with existing data points and apply simple transformations such as adding noise, interpolating, or extrapolating between them. Our main insight is to perform the transformation not in input space, but in a learned feature space. A re-kindling of interest in unsupervised representation learning makes this technique timely and more effective. It is a simple proposal, but to-date one that has not been tested empirically. Working in the space of context vectors generated by sequence-to-sequence models, we demonstrate a technique that is effective for both static and sequential data."}
{"_id": "5480ade1722fa06fc1b5113907367f9dec5dea8f", "title": "Guidelines for Financial Forecasting with Neural Networks", "text": "Neural networks are good at classification, forecasting and recognition. They are also good candidates of financial forecasting tools. Forecasting is often used in the decision making process. Neural network training is an art. Trading based on neural network outputs, or trading strategy is also an art. We will discuss a seven-step neural network forecasting model building approach in this article. Pre and post data processing/analysis skills, data sampling, training criteria and model recommendation will also be covered in this article. 1. Forecasting with Neural Networks Forecasting is a process that produces a set of outputs by given a set of variables. The variables are normally historical data. Basically, forecasting assumes that future occurrences are based, at least in part, on presently observable or past events. It assumes that some aspects of the past patterns will continue into the future. Past relationships can then be discovered through study and observation. The basic idea of forecasting is to find an approximation of mapping between the input and output data in order to discover the implicit rules governing the observed movements. For instance, the forecasting of stock prices can be described in this way. Assume that i u represents today's price, i v represents the price after ten days. If the prediction of a stock price after ten days could be obtained using today's stock price, then there should be a functional mapping i u to i v , where ) ( i i i u v \u0393 = . Using all ( i u , i v ) pairs of historical data, a general function () \u0393 which consists of () i \u0393 could be obtained, that is ) (u v \u0393 = . More generally, u which consists of more information in today's price could be used in function () \u0393 . As NNs are universal approximators, we can find a NN simulating this () \u0393 function. The trained network is then used to predict the movements for the future. NN based financial forecasting has been explored for about a decade. Many research papers are published on various international journals and conferences proceedings. Some companies and institutions are also claiming or marketing the so called advanced forecasting tools or models. Some research results of financial forecasting found in references. For instance, stock trading system[4], stock forecasting [6, 22], foreign exchange rates forecasting [15, 24], option prices [25], advertising and sales volumes [13]. However, Callen et al. [3] claim that NN models are not necessarily superior to linear time series models even when the data are financial, seasonal and nonlinear. 2. Towards a Better Robust Financial Forecasting Model In working towards a more robust financial forecasting model, the following issues are worth examining. First, instead of emphasizing on the forecasting accuracy only, other financial criteria should be considered. Current researchers tend to use goodness of fit or similar criteria to judge or train their models in financial domain. In terms of mathematical calculation this approach is a correct way in theory. As we understand that a perfect forecasting is impossible in reality. No model can achieve such an ideal goal. Under this constraint, seeking a perfect forecasting is not our aim. We can only try to optimize our imperfect forecasts and use other yardsticks to give the most realistic measure. 
Second, there should be adequate organization and processing of forecasting data. Preprocessing and proper sampling of input data can have impact on the forecasting performance. Choice of indicators as inputs through sensitivity analysis could help to eliminate redundant inputs. Furthermore, NN forecasting results should be used wisely and effectively. For example, as the forecast is not perfect, should we compare the NN output with the previous forecast or with the real data especially when price levels are used as the forecasting targets? Third, a trading system should be used to decide on the best tool to use. NN is not the single tool that can be used for financial forecasting. We also cannot claim that it is the best forecasting tool. In fact, people are still not aware of which kind of time series is the most suitable for NN applications. To conduct post forecasting analysis will allow us to find out the suitability of models and series. We may then conclude that a certain kind of models should be used for a certain kind of time series. Training or building NN models is a trial and error procedure. Some researchers are not willing to test more on their data set [14]. If there is a system that can help us to formalize these tedious exploratory procedures, it will certainly be of great value to financial forecasting. Instead of just presenting one successful experiment, possibility or confidence level can be applied to the outputs. Data are partitioned into several sets to find out the particular knowledge of this time series. As stated by David Wolpert and William Macready about their No-Free-Lunch theorems [28], averaged over all problems, all search algorithms perform equally. Just experimenting on a single data set, a NN model which outperforms other models can be found. However, for another data set one model which outperforms NN model can also be found according to No-Free-Lunch theorems. To avoid such a case of one model outperforming others, we partition the data set into several sub data sets. The recommended NN models are those that outperform other models for all sub time horizons. In other words, only those models incorporated with enough local knowledge can be used for future forecasting. It is very important and necessary to emphasize these three issues here. Different criteria exist for the academics and the industry. In academics, sometime people seek for the accuracy towards 100%. While in industry a guaranteed 60% accuracy is typically aimed for. In addition, profit is the eventual goal of practitioners, so a profit oriented forecasting model may fit their needs. Cohen [5] surveyed 150 papers in the proceedings of the 8th National Conference on artificial intelligence. He discovered that only 42% of the papers reported that a program had run on more than one example; just 30% demonstrated performance in some way; a mere 21% framed hypotheses or made predictions. He then concluded that the methodologies used were incomplete with respect to the goals of designing and analyzing AI system. Tichy [20] showed that in a very large study of over 400 research articles in computer science. Over 40% of the articles are about new designs and the models completely lack experimental data. In a recent IEEE computer journal, he also points out 16 excuses to avoid experimentation for computer scientists [21]. What he is talking is true and not a joke. Prechelt [14] showed that the situation is not better in the NN literature. 
Out of 190 papers published in wellknown journals dedicated to NNs, 29% did not employ even a single realistic or real learning problem. Only 8% of the articles presented results for more than one problem using real world data. To build a NN forecasting we need sufficient experiments. To test only for one market or just for one particular time period means nothing. It will not lead to a robust model based on manually, trial-and-error, or ad hoc experiments. More robust model is needed but not only in one market or for one time period. Because of the lack of industrial models and because failures in academic research are not published, a single person or even a group of researchers will not gain enough information or experiences to build up a good forecasting model. It is obvious that an automated system dealing with NN models building is necessary. 3. Steps of NN Forecasting: The Art of NN Training As NN training is an art, many searchers and practitioners have worked in the field to work towards successful prediction and classification. For instance, William Remus and Marcus O'connor [16] suggest some principles for NN prediction and classification are of critical importance in the chapter, \u201cPrinciples of Forecasting\u201d in \u201cA Handbook for Researchers and Practitioners\u201d: \u2022 Clean the data prior to estimating the NN model. \u2022 Scale and deseasonalize data prior to estimating the model. \u2022 Use appropriate methods to choose the right starting point. \u2022 Use specialized methods to avoid local optima. \u2022 Expand the network until there is no significant improvement in fit. \u2022 Use pruning techniques when estimating NNs and use holdout samples when evaluating NNs. \u2022 Take care to obtain software that has in-built features to avoid NN disadvantages. \u2022 Build plausible NNs to gain model acceptance by reducing their size. \u2022 Use more approaches to ensure that the NN model is valid. With the authors' experience and sharing from other researchers and practitioners, we propose a seven-step approach for NN financial forecasting model building. The seven steps are basic components of the automated system and normally involved in the manual approach. Each step deals with an important issue. They are data preprocessing, input and output selection, sensitive analysis, data organization, model construction, post analysis and model recommendation. Step 1. Data Preprocessing A general format of data is prepared. Depending on the requirement, longer term data, e.g. weekly, monthly data may also be calculated from more frequently sampled time series. We may think that it makes sense to use as frequent data sampling as possible for experiments. However, researchers have found that increasing observation frequency does not always help to improve the accuracy of forecasting [28]. Inspection of data to find outliers is also important as outliers make it difficult for NNs and other forecasting models to model the true underlying functional. Although NNs have been shown to be universal approximators, it had been found that NNs had difficulty modeling seasonal patterns in time series [11].When a time series contains significant seasonality, the data"}
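The windowed $(u_i, v_i)$ setup and the $\Gamma$ mapping described above can be made concrete in a few lines. A minimal sketch, assuming a synthetic random-walk price series, a ten-step horizon, and a tiny one-hidden-layer network as a stand-in for whatever NN architecture a real system would use:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic price series; in practice u_i would be today's price (or a
# richer feature vector) and v_i the price ten steps ahead.
prices = np.cumsum(rng.normal(0, 1, 500)) + 100.0
horizon = 10
U = prices[:-horizon].reshape(-1, 1)       # u_i
V = prices[horizon:].reshape(-1, 1)        # v_i = Gamma(u_i)

# Normalize inputs/outputs for stable training.
mu, su = U.mean(), U.std()
mv, sv = V.mean(), V.std()
X, Y = (U - mu) / su, (V - mv) / sv

# One-hidden-layer MLP trained by plain gradient descent on MSE.
H = 8
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    Z = np.tanh(X @ W1 + b1)               # hidden activations
    P = Z @ W2 + b2                        # prediction of v (normalized)
    G = 2 * (P - Y) / len(X)               # dMSE/dP
    gW2 = Z.T @ G; gb2 = G.sum(0)
    GZ = (G @ W2.T) * (1 - Z ** 2)         # backprop through tanh
    gW1 = X.T @ GZ; gb1 = GZ.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

pred = (np.tanh(X @ W1 + b1) @ W2 + b2) * sv + mv
print("train RMSE:", np.sqrt(np.mean((pred - V) ** 2)))
```

The remaining steps of the seven-step approach (input selection, sensitivity analysis, data organization, post analysis, model recommendation) would wrap around this core fitting loop.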
{"_id": "a8ce5b17c6f870c1ec2dfb8aee6217bb6f657af6", "title": "Backing Rich Credentials with a Blockchain PKI \u2217", "text": "This is the second of a series of papers describing the results of a project whose goal was to identify five remote identity proofing solutions that can be used as alternatives to knowledge-based verification. This paper describes the second solution, which makes use of a rich credential adapted for use on a blockchain and backed by a blockchain PKI. A rich credential, also used in Solution 1, allows the subject to identify him/herself to a remote verifier with which the subject has no prior relationship by presenting verification factors including possession of a private key, knowledge of a password, and possession of one or more biometric features, with selective disclosure of attributes and selective presentation of verification factors. In Solution 2 the issuer is a bank and the biometric verification factor is speaker recognition, which can be combined with face recognition to defeat voice morphing. The paper describes in detail the concept of a blockchain PKI, and shows that it has remarkable advantages over a traditional PKI, notably the fact that revocation checking is performed on the verifier\u2019s local copy of the blockchain without requiring CRLs or OCSP."}
{"_id": "bc1b39230215db5b608dbd8f43c6da2bef9a8179", "title": "Advanced Message Queuing Protocol", "text": "The advanced message queuing protocol (AMQP) working group's goal is to create an open standard for an interoperable enterprise-scale asynchronous messaging protocol. AMQP is finally addressing the lack of enterprise messaging interoperability standards. This relatively simple yet compellingly powerful enterprise messaging protocol is thus poised to open up a bright new era for enterprise messaging"}
{"_id": "e99bd9fd681d64d209a462692fe80d8e28e0db1c", "title": "Taking Empowerment to the Next Level : A Multiple-Level Model of Empowerment , Performance , and Satisfaction", "text": "Most research to date has approached employee empowerment as an individual-level phenomenon. In this study we proposed a work-unit-level construct, empowerment climate, and tested a multiple-level model integrating macro and micro approaches to empowerment. Empowerment climate was shown to be empirically distinct from psychological empowerment and positively related to manager ratings of work-unit performance. A cross-level mediation analysis using hierarchical linear modeling showed that psychological empowerment mediated the relationships between empowerment climate and individual performance and job satisfaction."}
{"_id": "bd792d657d5461b6bb1d943fa43b5efd7b9e2907", "title": "Lattice-based motion planning for a general 2-trailer system", "text": "Motion planning for a general 2-trailer system poses a hard problem for any motion planning algorithm and previous methods have lacked any completeness or optimality guarantees. In this work we present a lattice-based motion planning framework for a general 2-trailer system that is resolution complete and resolution optimal. The solution will satisfy both differential and obstacle imposed constraints and is intended either as a part of an autonomous system or as a driver support system to automatically plan complicated maneuvers in backward and forward motion. The proposed framework relies on a precomputing step that is performed offline to generate a finite set of kinematically feasible motion primitives. These motion primitives are then used to create a regular state lattice that can be searched for a solution using standard graph-search algorithms. To make this graph-search problem tractable for real-time applications a novel parametrization of the reachable state space is proposed where each motion primitive moves the system from and to a selected set of circular equilibrium configurations. The approach is evaluated over three different scenarios and impressive real-time performance is achieved."}
{"_id": "0786d19321c380f98ade66e4c9c8c9380ac89beb", "title": "Labelling strategies for hierarchical multi-label classification techniques", "text": "Many hierarchical multi-label classification systems predict a real valued score for every (instance, class) couple, with a higher score reflecting more confidence that the instance belongs to that class. These classifiers leave the conversion of these scores to an actual label set to the user, who applies a cut-off value to the scores. The predictive performance of these classifiers is usually evaluated using threshold independent measures like precision-recall curves. However, several applications require actual label sets, and thus an automatic labelling strategy. In this article, we present and evaluate different alternatives to perform the actual labelling in hierarchical multi-label classification. We investigate the selection of both single and multiple thresholds. Despite the existence of multiple threshold selection strategies in non-hierarchical multi-label classification, they can not be applied directly to the hierarchical context. The proposed strategies are implemented within two main approaches: optimisation of a certain performance measure of interest (such as F-measure or hierarchical loss), and simulating training set properties (such as class distribution or label cardinality) in the predictions. We assess the performance of the proposed labelling schemes on 10 datasets from different application domains. Our results show that selecting multiple thresholds may result in \u2217Corresponding author. Tel : +32(0)9 331 36 93 Fax : +32(0)9 221 76 73 Email addresses: Isaac.Triguero@irc.vib-UGent.be (Isaac Triguero), Celine.Vens@kuleuven-kulak.be (Celine Vens) Preprint submitted to Elsevier January 29, 2016 an efficient and effective solution for hierarchical multi-label problems."}
{"_id": "b820d775aeffd7fdb0f518e85148916b001e34ab", "title": "Building a Robust Implementation of Bearing-only Inertial SLAM for a UAV", "text": "This paper presents the on-going design and implementation of a robust inertial sensor based Simultaneous Localisation And Mapping (SLAM) algorithm for an Unmanned Aerial Vehicle (UAV) using bearing-only observations. A single colour vision camera is used to observe the terrain from which image points corresponding to features in the environment are extracted. The SLAM algorithm estimates the complete 6-DoF motion of the UAV along with the three-dimensional position of the features in the environment. An Extended Kalman Filter (EKF) approach is used where a technique of delayed initialisation is performed to initialise the 3D position of features from bearing-only observations. Data association is achieved using a multi-hypothesis innovation gate based on the spatial uncertainty of each feature. Results are presented by running the algorithm off-line using inertial sensor and vision data collected during a flight test of a UAV."}
{"_id": "9b894683358574205f606422918891d3ff9ba2ef", "title": "Normative rational agents-A BDI approach", "text": "This paper proposes an approach on how to accommodate norms to an already existing architecture of rational agents. Starting from the famous BDI model, an extension of the BDI execution loop will be presented; it will address such issues as norm instantiation and norm internalization, with a particular emphasis on the problem of norm consistency. A proposal for the resolution of conflicts between newly occurring norms, on one side, and already existing norms or mental states, on the other side, will be described. While it is fairly difficult to imagine an evaluation for the proposed architecture, a challenging scenario inspired form the science-fiction literature will be used to give the reader an intuition of how the proposed approach will deal with situations of normative conflicts."}
{"_id": "745c2b01a77e63ad5835db50a0420ce4be6e1d81", "title": "Temporal QoS-aware web service recommendation via non-negative tensor factorization", "text": "With the rapid growth of Web Service in the past decade, the issue of QoS-aware Web service recommendation is becoming more and more critical. Since the Web service QoS information collection work requires much time and effort, and is sometimes even impractical, the service QoS value is usually missing. There are some work to predict the missing QoS value using traditional collaborative filtering methods based on user-service static model. However, the QoS value is highly related to the invocation context (e.g., QoS value are various at different time). By considering the third dynamic context information, a Temporal QoS-aware Web Service Recommendation Framework is presented to predict missing QoS value under various temporal context. Further, we formalize this problem as a generalized tensor factorization model and propose a Non-negative Tensor Factorization (NTF) algorithm which is able to deal with the triadic relations of user-service-time model. Extensive experiments are conducted based on our real-world Web service QoS dataset collected on Planet-Lab, which is comprised of service invocation response-time and throughput value from 343 users on 5817 Web services at 32 time periods. The comprehensive experimental analysis shows that our approach achieves better prediction accuracy than other approaches."}
{"_id": "4894435e4e96b7ddcc42853992da4c694711c5fb", "title": "Multi-finger and whole hand gestural interaction techniques for multi-user tabletop displays", "text": "Recent advances in sensing technology have enabled a new generation of tabletop displays that can sense multiple points of input from several users simultaneously. However, apart from a few demonstration techniques [17], current user interfaces do not take advantage of this increased input bandwidth. We present a variety of multifinger and whole hand gestural interaction techniques for these displays that leverage and extend the types of actions that people perform when interacting on real physical tabletops. Apart from gestural input techniques, we also explore interaction and visualization techniques for supporting shared spaces, awareness, and privacy. These techniques are demonstrated within a prototype room furniture layout application, called RoomPlanner."}
{"_id": "49af3e80343eb80c61e727ae0c27541628c7c5e2", "title": "Introduction to Modern Information Retrieval", "text": "Come with us to read a new book that is coming recently. Yeah, this is a new coming book that many people really want to read will you be one of them? Of course, you should be. It will not make you feel so hard to enjoy your life. Even some people think that reading is a hard to do, you must be sure that you can do it. Hard will be felt when you have no ideas about what kind of book to read. Or sometimes, your reading material is not interesting enough."}
{"_id": "5a459e764726c4ffceba51c76baa7f511ee5b1b8", "title": "Cyberguide: A mobile context-aware tour guide", "text": "Future computing environments will free the user from the constraints of the desktop. Applications for a mobile environment should take advantage of contextual information, such as position, to o er greater services to the user. In this paper, we present the Cyberguide project, in which we are building prototypes of a mobile context-aware tour guide. Knowledge of the user's current location, as well as a history of past locations, are used to provide more of the kind of services that we come to expect from a real tour guide. We describe the architecture and features of a variety of Cyberguide prototypes developed for indoor and outdoor use on a number of di erent hand-held platforms. We also discuss the general research issues that have emerged in our context-aware applications development in a mobile environment."}
{"_id": "656dc19ce6bda922d8d8cfeed11c6037d10e1c87", "title": "Interacting with Paper on the Digital Desk", "text": "real world can be overcome in VR, and physical tools can. be m a d e obsolete by more flexible, virtual alternatives'.. Physical tools can be hard to replace , however. At one time it seemed that paper might become obsolete, for example, and visionaries predicted the \"paperlless office\" would dominate within a few years. But the trouble is that people like paper. It is easier to read than a screen [5], it is cheap, universally accepted , tactile, and portable. According to some studies, paperwork in the office has increased by a l~actor of six since 1970, and is now growing at 20% annually [14]. Like electronic documents, paper has properties that people just cannot seem to give up, making it resilient in the face of computer-based alternatives [10]. Consequently, we have two desks: one for \"paper pushing\" and the other for \"pixel pushing.\" Although activities on the two desk.s are often related, the two are quite isolated from each other. Printers and scanners provide a way for documents to move back ;and forth between desk-tops, but this conversion process is inconvenient. Wang's Freestyle system [3], for example, was. a \"paper-less office\" system that attempted to coexist with partially paper-based processes. A key factor necessary for adoption of that system was to minimize the printing and scanning re-RoE.ALL o quired, because too many of these tasks would cause the process to revert entirely back to paper. Trade-offs between electronic and paper documents can make the choice of medium difficult, but imagine if we did not have to choose, and we had a space where documents could be both paper and electronic at the same time. Instead of putting the user in the virtual world of the computer , we could do the opposite: add the computer to the real world of the user and create a Computer Augmented Environment for paper (see guest editors' introduction.). Instead of replacing paper with computers, we could enhance paper with computation. The Xerox PaperWorks product [8] takes a step in this direction with its fax-based paper user interface (UI) to a storage and retrieval system. With this system, ordinary paper forms are enhanced to control a PC through a fax machine. These paper documents gain some properties of electronic documents, but fax machines are slow compared to computer screens. Response time is limited by the delay it takes to scan and print a \u2026"}
{"_id": "448390e361ed5bf7a1a2721c8b62068b1f5cf1b0", "title": "Keeping track of time: evidence for episodic-like memory in great apes", "text": "Episodic memory, as defined by Tulving, can be described in terms of behavioural elements (what, where and when information) but it is also accompained by an awareness of one\u2019s past (chronesthesia) and a subjective conscious experience (autonoetic awareness). Recent experiments have shown that corvids and rodents recall the where, what and when of an event. This capability has been called episodic-like memory because it only fulfils the behavioural criteria for episodic memory. We tested seven chimpanzees, three orangutans and two bonobos of various ages by adapting two paradigms, originally developed by Clayton and colleagues to test scrub jays. In Experiment 1, subjects were fed preferred but perishable food (frozen juice) and less preferred but non-perishable food (grape). After the food items were hidden, subjects could choose one of them either after 5\u00a0min or 1\u00a0h. The frozen juice was still available after 5\u00a0min but melted after 1\u00a0h and became unobtainable. Apes chose the frozen juice significantly more after 5 min and the grape after 1\u00a0h. In Experiment 2, subjects faced two baiting events happening at different times, yet they formed an integrated memory for the location and time of the baiting event for particular food items. We also included a memory task that required no temporal encoding. Our results showed that apes remember in an integrated fashion what, where and when (i.e., how long ago) an event happened; that is, apes distinguished between different events in which the same food items were hidden in different places at different times. The temporal control of their choices was not dependent on the familiarity of the platforms where the food was hidden. Chimpanzees\u2019 and bonobos\u2019 performance in the temporal encoding task was age-dependent, following an inverted U-shaped distribution. The age had no effect on the performance of the subjects in the task that required no temporal encoding."}
{"_id": "a6387dd26dedb81440adec9be776ecc03f76f623", "title": "DESH: Database evaluation system with hibernate ORM framework", "text": "Relational databases have been the predominant choice for back-ends in enterprise applications for several decades. JDBC - a Java API - that is used for developing such applications and persisting data on the back-end requires enormous time and effort. JDBC makes the application logic to become tightly coupled with the database and consequently is inadequate for building enterprise applications that need to adopt to dynamic requirements. Hence, ORM frameworks such as Hibernate, became prominent. However, even with ORM, the relational back-end often faces a drawback of lack of scalability and flexibility. In this context, NoSQL databases are increasingly gaining popularity. Existing research works have either benchmarked Hibernate with an RDBMS or with one of the NoSQL databases. However, it has not been attempted in the literature to test both an RDBMS and a NoSQL database for their performance within the context of a single application developed using the features of Hibernate. This kind of a study will provide an insight that using Hibernate ORM solution would help us to develop database-independent applications and how much performance gain can be achieved when an application is ported from an RDBMS back-end to a NoSQL database backend. The objective of this work is to develop a business application using Hibernate framework that can individually communicate with an RDBMS as well as a specific NoSQL database and to evaluate the performance of both these databases."}
{"_id": "4ea92f2ddbb7ce610c6e80377e61397bad7309ff", "title": "Neural Approaches to Conversational AI", "text": "This tutorial surveys neural approaches to conversational AI that were developed in the last few years. We group conversational systems into three categories: (1) question answering agents, (2) task-oriented dialogue agents, and (3) social bots. For each category, we present a review of state-of-the-art neural approaches, draw the connection between neural approaches and traditional symbolic approaches, and discuss the progress we have made and challenges we are facing, using specific systems and models as case studies."}
{"_id": "b13b0bf8c3ee0976671e1fa37e3643e744215dfb", "title": "Towards Twitter hashtag recommendation using distributed word representations and a deep feed forward neural network", "text": "Hashtags are useful for categorizing and discovering content and conversations in online social networks. However, assigning hashtags requires additional user effort, hampering their widespread adoption. Therefore, in this paper, we introduce a novel approach for hashtag recommendation, targeting English language tweets on Twitter. First, we make use of a skip-gram model to learn distributed word representations (word2vec). Next, we make use of the distributed word representations learned to train a deep feed forward neural network. We test our deep neural network by recommending hashtags for tweets with user-assigned hashtags, using Mean Squared Error (MSE) as the objective function. We also test our deep neural network by recommending hashtags for tweets without user-assigned hashtags. Our experimental results show that the proposed approach recommends hashtags that are specific to the semantics of the tweets and that preserve the linguistic regularity of the tweets. In addition, our experimental results show that the proposed approach is capable of generating hashtags that have not been seen before."}
{"_id": "31b6b7a1e00ada40a674f0fa13fa695245058f97", "title": "Review of Charge Pump Topologies for Micro Energy Harvesting Systems", "text": "Corresponding Author: Michelle Lim Sern Mi Institute of Microengineering and Nanoelectronics (IMEN), UKM, 43600 Bangi, Selangor, Malaysia Email: mlsm_2002@yahoo.com Abstract: This paper reviews CMOS based charge pump topologies used within autonomous embedded micro-systems. These charge pump structures have evolved from its simplistic diode-tied, single-branches with major threshold drops to exponential type, dual-branches with sophisticated gate and substrate control for lower voltage operation. Published charge pumps are grouped based on architecture, operation principles and pump optimization techniques with their pros and cons compared and results contrasted. The various charge pump topologies and schemes used are considered based on pumping efficiency, power efficiency, charge transferability, circuit complexity, pumping capacitors, form factor and minimum supply voltages with an optimum load. This article concludes with an overview of suitable techniques and recommendations that will aid a designer in selecting the most suitable charge pump topology especially for low ambient micro energy harvesting applications."}
{"_id": "4b3b4b5c6664092cb055efb308178693d2aea2d6", "title": "Private Query on Encrypted Data in Multi-user Settings", "text": "Searchable encryption schemes allow users to perform keyword based searches on an encrypted database. Almost all existing such schemes only consider the scenario where a single user acts as both the data owner and the querier. However, most databases in practice do not just serve one user; instead, they support search and write operations by multiple users. In this paper, we systematically study searchable encryption in a practical multi-user setting. Our results include a set of security notions for multi-user searchable encryption as well as a construction which is provably secure under the newly introduced security notions."}
{"_id": "008150576e5fa29fdf22d863605a261808800dc6", "title": "SMIT: Stochastic Multi-Label Image-to-Image Translation", "text": "Cross-domain mapping has been a very active topic in recent years. Given one image, its main purpose is to translate it to the desired target domain, or multiple domains in the case of multiple labels. This problem is highly challenging due to three main reasons: (i) unpaired datasets, (ii) multiple attributes, and (iii) the multimodality associated with the translation. Most of the existing state-of-theart has focused only on two reasons, i.e. producing disentangled representations from unpaired datasets in a one-toone domain translation or producing multiple unimodal attributes from unpaired datasets. In this work, we propose a joint framework of diversity and multi-mapping image-toimage translations, using a single generator to conditionally produce countless and unique fake images that hold the underlying characteristics of the source image. Extensive experiments over different datasets demonstrate the effectiveness of our proposed approach with comparisons to the state-of-the-art in both multi-label and multimodal problems. Additionally, our method is able to generalize under different scenarios: continuous style interpolation, continuous label interpolation, and multi-label mapping."}
{"_id": "0cfff455b8cd9cb3ac2975f9b134e4eb60256cf6", "title": "Instagram #Instasad?: Exploring Associations Among Instagram Use, Depressive Symptoms, Negative Social Comparison, and Strangers Followed", "text": "As the use and influence of social networking continues to grow, researchers have begun to explore its consequences for psychological well-being. Some research suggests that Facebook use can have negative consequences for well-being. Instagram, a photo-sharing social network created in 2010, has particular characteristics that may make users susceptible to negative consequences. This study tested a theoretically grounded moderated meditation model of the association between Instagram use and depressive symptoms through the mechanism of negative social comparison, and moderation by amount of strangers one follows. One hundred and seventeen 18-29 year olds completed online questionnaires containing demographics, frequency of Instagram use, amount of strangers followed on Instagram, the Center for Epidemiological Resources Scale for Depression, and the Social Comparison Rating Scale. Instagram use was marginally positively associated with depressive symptoms, and positive social comparison was significantly negatively associated with depressive symptoms. Amount of strangers followed moderated the associations of Instagram use with social comparison (significantly) and depressive symptoms (marginally), and further significantly moderated the indirect association of Instagram use with depressive symptoms through social comparison. Findings generally suggest that more frequent Instagram use has negative associations for people who follow more strangers, but positive associations for people who follow fewer strangers, with social comparison and depressive symptoms. Implications of negative associations of social networking for people who follow strangers and the need for more research on Instagram use given its increasing popularity are explored."}
{"_id": "4f77341f99376290f21eeccd148d544ae2757b0b", "title": "Processing capacity defined by relational complexity: implications for comparative, developmental, and cognitive psychology.", "text": "Working memory limits are best defined in terms of the complexity of the relations that can be processed in parallel. Complexity is defined as the number of related dimensions or sources of variation. A binary relation has one argument and one source of variation; its argument can be instantiated in only one way at a time. A binary relation has two arguments, two sources of variation, and two instantiations, and so on. Dimensionality is related to the number of chunks, because both attributes on dimensions and chunks are independent units of information of arbitrary size. Studies of working memory limits suggest that there is a soft limit corresponding to the parallel processing of one quaternary relation. More complex concepts are processed by \"segmentation\" or \"conceptual chunking.\" In segmentation, tasks are broken into components that do not exceed processing capacity and can be processed serially. In conceptual chunking, representations are \"collapsed\" to reduce their dimensionality and hence their processing load, but at the cost of making some relational information inaccessible. Neural net models of relational representations show that relations with more arguments have a higher computational cost that coincides with experimental findings on higher processing loads in humans. Relational complexity is related to processing load in reasoning and sentence comprehension and can distinguish between the capacities of higher species. The complexity of relations processed by children increases with age. Implications for neural net models and theories of cognition and cognitive development are discussed."}
{"_id": "99559bd86a4166c4d2f55823ad57515a3a2ac1a1", "title": "What causes hidradenitis suppurativa?", "text": "Hidradenitis suppurativa (HS)--a rather common, very chronic and debilitating inflammatory skin appendage disorder with a notoriously underestimated burden of disease--has long been a playground for the high priests of nomenclature: Ask a bunch of eminent dermatologists and skin pathologists to publicly share their thoughts on what causes HS, and they will soon get entrenched in a heated debate on whether this historical term is a despicable misnomer. Fortunately, the recently founded Hidradenitis Suppurativa Foundation (HSF; http://www.hs-foundation.org), to which EXP DERMATOL serves as home journal, has broken with this unproductive tradition and has encouraged publication of the current CONTROVERSIES feature. This is exclusively devoted to discussing the pathobiology of this chronic neutrophilic folliculitis of unknown origin. Although traces of terminological bickering remain visible, it does the HS experts in our virtual debate room credit that they engage in a constructive and comprehensive dissection of potential pathogenesis pathways that may culminate in the clinical picture we know under the competing terms HS or acne inversa. These experts sketch more often complementary than mutually exclusive pathogenesis scenarios, and the outlines of a conceivable consensus on the many open pathobiology questions begin to emerge in these CONTROVERSIES. Hopefully, this heralds a welcome new tradition: to get to the molecular heart of HS pathogenesis, which can only be achieved by a renaissance of solid basic HS research, as the key to developing more effective HS therapy."}
{"_id": "6310e179b47a5e4cbb6c478b279aece6b605687f", "title": "Achievable cases in an asynchronous environment", "text": "The paper deals with achievability of fault tolerant goals in a completely asynchronous distributed system. Fischer, Lynch, and Paterson [FLP] proved that in such a system \"nontrivial agreement\" cannot be achieved even in the (possible) presence of a single \"benign\" fault. In contrast, we exhibit two pairs of goals that are achievable even in the presence of up to t \u226a n/2 faulty processors, contradicting the widely held assumption that no nontrivial goals are attainable in such a system. The first pair deals with renaming processors so as to reduce the size of the initial name space. When only uniqueness is required of the new names, we present a lower bound of n + 1 on the size of the new name space, and a renaming algorithm which establishes an upper bound of n + t. In case the new names are required also to preserve the original order, a tight bound of 2t(n- t + 1) - 1 is obtained. The second pair of goals deals with the multi-slot critical section problem. We present algorithms for controlled access to a critical section. As for the number of slots required, a tight bound of t + 1 is proved in case the slots are identical. In the case of distinct slots the upper bound is 2t + 1."}
{"_id": "958e19db35adf74d6f74e36f3ade4494bd9829f6", "title": "Steerable Higher Order Mode Dielectric Resonator Antenna With Parasitic Elements for 5G Applications", "text": "This paper presents the findings of a steerable higher order mode (TE $^{\\mathrm {y}}_{1\\delta 3}$ ) dielectric resonator antenna with parasitic elements. The beam steering was successfully achieved by switching the termination capacitor on the parasitic element. In this light, all of the dielectric resonator antennas (DRAs) have the same dielectric permittivity similar to that of ten and excited by a $50\\Omega $ microstrip with a narrow aperture. The effect of the mutual coupling on the radiation pattern and the reflection coefficient, as well as the array factor, was investigated clearly using MATLAB version 2014b and ANSYS HFSS version 16. As the result, the antenna beam of the proposed DRA array managed to steer from \u221232\u00b0 to +32\u00b0 at 15 GHz. Furthermore, the measured antenna array showed the maximum gain of 9.25 dBi and the reflection coefficients which are less than \u221210 dB with the bandwidth more than 1.3 GHz, which is viewed as desirable for device-to-device communication in 5G Internet of Things applications."}
{"_id": "846985982d86768102081bdc8a466609591b634c", "title": "Autonomous UAV Surveillance in Complex Urban Environments", "text": "We address the problem of multi-UAV-based surveillance in complex urban environments with occlusions. The problem consists of controlling the flight of UAVs with on-board cameras so that the coverage and recency of the information about the designated area is maximized. In contrast to the existing work, sensing constraints due to occlusions and UAV motion constraints are modeled realistically and taken into account. We propose a novel \\emph{occlusion-aware} surveillance algorithm based on a decomposition of the surveillance problem into a variant of the 3D art gallery problem and an instance of traveling salesman problem for Dubins vehicles. The algorithm is evaluated on the high-fidelity \\textsc{AgentFly} UAV simulation testbed which accurately models all constraints and effects involved. The results confirm the importance of occlusion-aware flight path planning, in particular in the case of narrow-street areas and low UAV flight altitudes."}
{"_id": "5b32759bfc440e2c5df75ffd28a4c62bfb8c5935", "title": "Spatial augmented reality \u2014 A tool for 3D data visualization", "text": "This paper presents a proposal for the use of Spatial Augmented Reality as a tool for 3D data visualizations. The use of traditional display technologies such as LCD monitors provides a fish tank metaphor for the user, i.e. the information is behind a plane of glass. The use of VR technologies allows the user to be immersed in the 3D volume and remove this fish tank problem, but can limit the set of techniques that allow users to interact with spatial data. Spatial Augmented Reality employed in conjunction with a highresolution monitor provides an elegant blend of spatial reasoning, tangible interaction, and detailed information viewing. This paper purposes a range of usages for SAR in 3D visualizations."}
{"_id": "17c30c9638f10919c4760b6684e7c02929140552", "title": "Learning deep semantic attributes for user video summarization", "text": "This paper presents a Semantic Attribute assisted video SUMmarization framework (SASUM). Compared with traditional methods, SASUM has several innovative features. Firstly, we use a natural language processing tool to discover a set of keywords from an image and text corpora to form the semantic attributes of visual contents. Secondly, we train a deep convolution neural network to extract visual features as well as predict the semantic attributes of video segments which enables us to represent video contents with visual and semantic features simultaneously. Thirdly, we construct a temporally constrained video segment affinity matrix and use a partially near duplicate image discovery technique to cluster visually and semantically consistent video frames together. These frame clusters can then be condensed to form an informative and compact summary of the video. We will present experimental results to show the effectiveness of the semantic attributes in assisting the visual features in video summarization and our new technique achieves state-of-the-art performance."}
{"_id": "9e7df4362870bdded41f2a3c81e25a0fbb93ed3f", "title": "Infants\u2019 Metaphysics: The Case of Numerical Identity", "text": "Adults conceptualize the world in terms of enduring physical objects. Sortal concepts provide conditions of individuation (establishing the boundaries of objects) and numerical identity (establishing whether an object is the same one as one encountered at some other time). In the adult conceptual system, there are two roughly hierarchical levels of object sortals. Most general is the sortal bounded physical object itself, for which spatiotemporal properties provide the criteria for individuation and identity. More specific sortals, such as dog or car, rely on additional types of properties to provide criteria for individuation and identity. We conjecture that young infants might represent only the general sortal, object, and construct more specific sortals later (the Object-first Hypothesis). This is closely related to Bower's (1974) conjecture that infants use spatiotemporal information to trace identity before they use property information. Five studies using the visual habituation paradigm were conducted to address the Object-first Hypothesis. In these studies, 10-month-old infants were able to use spatiotemporal information but failed to use property/kind information to set up representations of numerically distinct individuals, thus providing empirical evidence for the Object-first Hypothesis. Finally, infants succeed at object individuation in terms of more specific sortals by 12 months. The relation between success at our task and early noun comprehension is discussed."}
{"_id": "62036dc21fedc161663cc4fd080e9aef95c57581", "title": "The Google effect in doctoral theses", "text": "It is often said that successive generations of researchers face an increasing educational burden due to knowledge accumulation. On the other hand, technological advancement over time can improve the productivity of researchers and even change their cognitive processes. This paper presents a longitudinal study (2004\u20132011) of citation behavior in doctoral theses at the Massachusetts Institute of Technology\u2019s Department of Electrical Engineering and Computer Science. It is found that the number of references cited has increased over the years. At the same time, there has been a decrease in the length of time in the doctoral program and a relative constancy in the culture of the department. This suggests that students are more productive in facing an increased knowledge burden, and indeed seem to encode prior literature as transactive memory to a greater extent, as evidenced by the greater use of older literature."}
{"_id": "c60c6632548f09f066ccb693dd2e1738ca012d6c", "title": "W-Band Large-Scale High-Gain Planar Integrated Antenna Array", "text": "This communication presents a 32 \u00d7 32 high-gain patch array antenna fed by the substrate integrated waveguide (SIW) structure at W-band. The array antenna consists of two layers to achieve a compact topology, which allows for mass production using a standard PCB fabrication process. The wideband feeding network is placed in the bottom layer while the radiating patches are on the top layer. This configuration also resolves the trade-off between gain and bandwidth of conventional SIW array antennas. Measured gain of the 32 \u00d7 32 antenna array is within the range 28.81-29.97 dBi in the working bandwidth of 91-97 GHz. Measured impedance bandwidth covers the same frequency band for . The cross-polarization of the antenna array is less than 40 dB at the beam direction. Good agreement between the simulated and measured results validates our design."}
{"_id": "dced0eeb9627f1e61398877e4334dfd6417c789d", "title": "Position and Orientation Estimation Through Millimeter-Wave MIMO in 5G Systems", "text": "Millimeter-wave (mm-wave) signals and large antenna arrays are considered enabling technologies for future 5G networks. While their benefits for achieving high-data rate communications are well-known, their potential advantages for accurate positioning are largely undiscovered. We derive the Cram\u00e9r-Rao bound (CRB) on position and rotation angle estimation uncertainty from mm-wave signals from a single transmitter, in the presence of scatterers. We also present a novel two-stage algorithm for position and rotation angle estimation that attains the CRB for average to high signal-to-noise ratio. The algorithm is based on multiple measurement vectors matching pursuit for coarse estimation, followed by a refinement stage based on the space-alternating generalized expectation maximization algorithm. We find that accurate position and rotation angle estimation is possible using signals from a single transmitter, in either line-of-sight, non-line-of-sight, or obstructed-line-of-sight conditions."}
{"_id": "21a275beb31bfa71d9884c993e161578f15caba9", "title": "Enhancing the TED-LIUM Corpus with Selected Data for Language Modeling and More TED Talks", "text": "In this paper, we present improvements made to the TED-LIUM corpus we released in 2012. These enhancements fall into two categories. First, we describe how we filtered publicly available monolingual data and used it to estimate well-suited language models (LMs), using open-source tools. Then, we describe the process of selection we applied to new acoustic data from TED talks, providing additions to our previously released corpus. Finally, we report some experiments we made around these improvements."}
{"_id": "305edd92f237f8e0c583a809504dcec7e204d632", "title": "Blockchain challenges and opportunities: a survey", "text": "Copyright \u00a9 2017 Inderscience Enterprises Ltd. 2 Blockchain has numerous benefits such as decentralization, persistency, anonymity and auditability. There is a wide spectrum of blockchain applications ranging from cryptocurrency, financial services, risk management, Internet of Things to public and social services. Although a number of studies focus on using the blockchain technology in various application aspects, there is no comprehensive survey on the blockchain technology in both technological and application perspectives. To fill this gap, we conduct a comprehensive survey on the blockchain technology. In particular, this paper gives the blockchain taxonomy, introduces typical blockchain consensus algorithms, reviews blockchain applications and discusses technical challenges as well as recent advances in tackling the challenges. Moreover, this paper also points out the future directions in the blockchain technology."}
{"_id": "742b07defd167155bda5c81fff463c9ec7701f8c", "title": "A Real Time Audio Fingerprinting System for Advertisement Tracking and Reporting in FM Radio", "text": "In this paper we present a system designed to detect, identify and track commercial segments transmitted by a radio station. The program is entirely written in Visual C++ and uses state of the art audio fingerprinting technologies to achieve a great performance, being able to operate several times faster than real time while keeping a moderate computational load."}
{"_id": "214903557e03035b8dc49c06a34f93c6d71a0a88", "title": "Detecting and discriminating behavioural anomalies", "text": "This paper aims to address the problem of anomaly detection and discrimination in complex behaviours, where anomalies are subtle and difficult to detect owing to the complex temporal dynamics and correlations among multiple objects\u2019 behaviours. Specifically, we decompose a complex behaviour pattern according to its temporal characteristics or spatial-temporal visual contexts. The decomposed behaviour is then modelled using a cascade of Dynamic Bayesian Networks (CasDBNs). In contrast to existing standalone models, the proposed behaviour decomposition and cascade modelling offers distinct advantage in simplicity for complex behaviour modelling. Importantly, the decomposition and cascade structure map naturally to the structure of complex behaviour, allowing for a more effective detection of subtle anomalies in surveillance videos. Comparative experiments using both indoor and outdoor data are carried out to demonstrate that, in addition to the novel capability of discriminating different types of anomalies, the proposed framework outperforms existing methods in detecting durational anomalies in complex behaviours and subtle anomalies that are difficult to detect when objects are viewed in isolation. & 2010 Elsevier Ltd. All rights reserved."}
{"_id": "b319bfee0dad071d5fefad16ef0b087a33acd13b", "title": "Mobile Virtual Network Admission Control and Resource Allocation for Wireless Network Virtualization: A Robust Optimization Approach", "text": "Wireless network virtualization is a promising technology in next generation wireless networks. In this paper, motivated by the experience of user equipment (UE) admission control in traditional wireless networks, we propose a novel concept of mobile virtual network (MVN) admission control for wireless virtualization. By limiting the number of MVNs embedded in the physical network, MVN admission control can effectively guarantee quality of service (QoS) experienced by users of MVNs and maximize the utilization of the physical networks at the same time. Specifically, we propose a two-stage MVN embedding mechanism that can decouple short-term physical resource allocation from long-term admission control and resource leasing. With recent advances in robust optimization, we formulate the MVN admission control problem as a robust optimization problem. Both the long-term admission control and short- term resource allocation problems are transformed to convex problems, which can be solved efficiently. Simulation results are presented to show the effectiveness of the proposed scheme."}
{"_id": "61f30c93c68064d8706ba7b6d3b57701bd1b9ffc", "title": "The Experience of Nature A Psychological Perspective", "text": ""}
{"_id": "f688b0ad320298a397e82350002ab09e552a8faf", "title": "A novel framework for SMS spam filtering", "text": "A novel framework for SMS spam filtering is introduced in this paper to prevent mobile phone users from unsolicited SMS messages. The framework makes use of two distinct feature selection approaches based on information gain and chi-square metrics to find out discriminative features representing SMS messages. The discriminative feature subsets are then employed in two different Bayesian-based classifiers, so that SMS messages are categorized as either spam or legitimate. Moreover, the paper introduces a real-time mobile application for Android\u2122 based mobile phones utilizing the proposed spam filtering scheme, as well. Hence, SMS spam messages are silently filtered out without disturbing phone users. Effectiveness of the filtering framework is evaluated on a large SMS message collection including legitimate and spam messages. Following the evaluation, remarkably accurate classification results are obtained for both spam and legitimate messages."}
{"_id": "ef0c711fe5970b09bae538fa5d841231be6f10d1", "title": "Forecasting stock price directional movements using technical indicators: Investigating window size effects on one-step-ahead forecasting", "text": "Accurate forecasting of directional changes in stock prices is important for algorithmic trading and investment management. Technical analysis has been successfully used in financial forecasting and recently researchers have explored the optimization of parameters for technical indicators. This study investigates the relationship between the window size used for calculating technical indicators and the accuracy of one-step-ahead (variable steps) forecasting. The directions of the future price movements are predicted using technical analysis and machine learning algorithms. Results show a correlation between window size and forecasting step size for the Support Vector Machines approach but not for the other approaches."}
{"_id": "a59686736e5d200e238a9341d176f3ab0ba60fee", "title": "Scene Graph Generation by Belief RNNs", "text": "Understanding and describing the scene beyond an image has great value. It is a 1 step forward from recognizing individual objects and their relationships in isolation. 2 In addition, Scene graph is a crucial step towards a deeper understanding of a visual 3 scene. We present a method in an end-to-end model that given an image generates 4 a scene graph including nodes that corresponds to the objects in the image and 5 an edges corresponds to relationship between objects. Our main contribution is 6 introducing a novel deep structure prediction module Belief RNNs that performs 7 learning on a large graphs in a very efficient and generic way. 8"}
{"_id": "5f8fa49eea09a43a6e6f6e7fdc4387751aee8973", "title": "A new Design Criteria for Hash-Functions", "text": "The most common way of constructing a hash function (e.g., SHA-1) is to iterate a compression function on the input message. The compression function is usually designed from scratch or made out of a block-cipher. In this paper, we introduce a new security notion for hash-functions, stronger than collision-resistance. Under this notion, the arbitrary length hash function H must behave as a random oracle when the fixed-length building block is viewed as an ideal primitive. This enables to eliminate all possible generic attacks against iterative hash-functions. In this paper, we show that the current design principle behind hash functions such as SHA-1 and MD5 \u2014 the (strengthened) Merkle-Damg\u030aard transformation \u2014 does not satisfy this security notion. We provide several constructions that provably satisfy this notion; those new constructions introduce minimal changes to the plain Merkle-Damg\u030aard construction and are easily implementable in practice. This paper is a modified version of a paper to appear at Crypto 2005."}
{"_id": "54fdcd219dd064230cf9d6a551dda717ce87598c", "title": "Ordinary magic. Resilience processes in development.", "text": "The study of resilience in development has overturned many negative assumptions and deficit-focused models about children growing up under the threat of disadvantage and adversity. The most surprising conclusion emerging from studies of these children is the ordinariness of resilience. An examination of converging findings from variable-focused and person-focused investigations of these phenomena suggests that resilience is common and that it usually arises from the normative functions of human adaptational systems, with the greatest threats to human development being those that compromise these protective systems. The conclusion that resilience is made of ordinary rather than extraordinary processes offers a more positive outlook on human development and adaptation, as well as direction for policy and practice aimed at enhancing the development of children at risk for problems and psychopathology."}
{"_id": "466f136e64f26b1a02260cef1e8c1a68425f1c5d", "title": "An automated framework for generating variable-accuracy battery models from datasheet information", "text": "Models based on an electrical circuit equivalent have become the most popular choice for modeling the behavior of batteries, thanks to their ease of co-simulation with other parts of a digital system. Such circuit models are actually model templates: the specific values of their electrical elements must be derived by the analysis of the specific battery devices to be modeled. This process requires either to measure the battery characteristics or to derive them from the datasheet. In the latter case, however, very often not all information are available and the model fitting becomes then unfeasible.\n In this paper we present a methodology for deriving, in a semi-automatic way, circuit equivalent battery models solely from data available in a battery datasheet. In order to account for the different amount of information available, we introduce the concept of \"level\" of a model, so that models with different accuracy can be derived depending on the available data. The methodology requires only minimal intervention by the designer and it automatically generates MATLAB models once the required data for the corresponding model level are transcribed from the datasheet.\n Simulation results show that our methodology allows to accurately reconstruct the information reported in the datasheet as well as to derive missing ones."}
{"_id": "27b056c2a91f28f69a312deda0b252319122fb23", "title": "Spatial Reference in Linguistic Human-Robot Interaction: Iterative, Empirically Supported Development of a Model of Projective Relations", "text": "The aim of our research is to enable spontaneous and efficient spatial reference to objects in human-robot interaction situations. This paper presents the iterative, empirically based design of a robotic system that uses a computational model for identifying objects on the basis of a range of spatial reference systems. The efficiency of the system is evaluated by two successive empirical studies involving uninformed users. The linguistic analysis points to the striking variability in speakers\u2019 spontaneous strategies and preferences, and it motivates a number of modifications of the computational model."}
{"_id": "34f3d783d9bdb6db29b37f400341a027618f3ad4", "title": "Mobile Media Use, Multitasking and Distractibility", "text": "Portable media devices are ubiquitous and their use has become a core component of many people\u2019s daily experience, but to what effect? In this paper, the authors review research on the ways in which media use and multitasking relate to distraction, distractibility and impulsivity. They review recent research on the effects of media multitasking on driving, walking, work, and academic performance. The authors discuss earlier research concerning the nature of media\u2019s impact on attention and review cognitive and neuropsychological findings on the effects of divided attention. Research provides clear evidence that mobile media use is distracting, with consequences for safety, efficiency and learning. Greater use of media is correlated with higher levels of trait impulsivity and distractibility, but the direction of causality has not been established. Individuals may become more skilled at media multitasking over time, but intervention is currently required to improve the safe and effective use of mobile media. DOI: 10.4018/ijcbpl.2012070102 16 International Journal of Cyber Behavior, Psychology and Learning, 2(3), 15-29, July-September 2012 Copyright \u00a9 2012, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. and neuropsychological research on the effects of divided attention. Finally, we explore how high levels of media multitasking might alter our general responses to experience in ways that are marked by changes in distractibility and impulsivity. THE DISTRACTING EFFECTS OF MULTITASKING WITH MOBILE MEDIA"}
{"_id": "27f7e9ff722cad915ba228dce80aa51d883c8e9f", "title": "Multileaved Comparisons for Fast Online Evaluation", "text": "Evaluation methods for information retrieval systems come in three types: offline evaluation, using static data sets annotated for relevance by human judges; user studies, usually conducted in a lab-based setting; and online evaluation, using implicit signals such as clicks from actual users. For the latter, preferences between rankers are typically inferred from implicit signals via interleaved comparison methods, which combine a pair of rankings and display the result to the user. We propose a new approach to online evaluation called multileaved comparisons that is useful in the prevalent case where designers are interested in the relative performance of more than two rankers. Rather than combining only a pair of rankings, multileaved comparisons combine an arbitrary number of rankings. The resulting user clicks then give feedback about how all these rankings compare to each other. We propose two specific multileaved comparison methods. The first, called team draft multileave, is an extension of team draft interleave. The second, called optimized multileave, is an extension of optimized interleave and is designed to handle cases where a large number of rankers must be multileaved. We present experimental results that demonstrate that both team draft multileave and optimized multileave can accurately determine all pairwise preferences among a set of rankers using far less data than the interleaving methods that they extend."}
{"_id": "cc75cf7aeab864d01c4ce50ecb4ae9658993013b", "title": "A tensorized logic programming language for large-scale data", "text": "We introduce a new logic programming language T-PRISM based on tensor embeddings. Our embedding scheme is a modification of the distribution semantics in PRISM, one of the state-of-the-art probabilistic logic programming languages, by replacing distribution functions with multidimensional arrays, i.e., tensors. T-PRISM consists of two parts: logic programming part and numerical computation part. The former provides flexible and interpretable modeling at the level of first order logic, and the latter part provides scalable computation utilizing parallelization and hardware acceleration with GPUs. Combing these two parts provides a remarkably wide range of high-level declarative modeling from symbolic reasoning to deep learning. To embody this programming language, we also introduce a new semantics, termed tensorized semantics, which combines the traditional least model semantics in logic programming with the embeddings of tensors. In T-PRISM, we first derive a set of equations related to tensors from a given program using logical inference, i.e., Prolog execution in a symbolic space and then solve the derived equations in a continuous space by TensorFlow. Using our preliminary implementation of T-PRISM, we have successfully dealt with a wide range of modeling. We have succeeded in dealing with real large-scale data in the declarative modeling. This paper presents a DistMult model for knowledge graphs using the FB15k and WN18 datasets."}
{"_id": "f3f317923bf96ed7cf405b9c5a0635e9c4959c4b", "title": "X-band microstrip narrowband BPF composed of split ring resonator", "text": "The paper deals with the development of X-band microstrip narrowband bandpass filter (BPF) composed of split ring resonator (SRR). The proposed filter is designed on a grounded 0.508mm thick Roger RT/Duroid\u00ae 5880 dielectric substrate with the dimension of 25mm length and 20mm width. Some parametric studies to achieve an optimum design of microstrip narrowband BPF are investigated prior realization by varying the physical parameters of SRR including the width of rings and the gap between rings. From the experimental characterization, it shows that the realized microstrip narrowband BPF has a center frequency of 9.02 GHz with the values of return loss and insertion loss of 15.231 dB and 2.768 dB, respectively. The measured result is 20MHz lower than the simulated one which has center frequency of 9 GHz with the values of return loss and insertion loss of 15.025 dB and 2.242 dB, respectively."}
{"_id": "16c0a0275fdd532ac00a6b9c09b57c539eb09ef0", "title": "Enhanced Modelling and Performance in Braided Pneumatic Muscle Actuators", "text": "Pneumatic technology has been successfully applied for over two millennia. Even today, pneumatic cylinder based technology forms the keystone of many manufacturing processes where there is a need for simple, high-speed, low-cost, reliable motion. But when the system requires accurate control of position, velocity or acceleration profiles, these actuators form a far from satisfactory solution. Braided pneumatic muscle actuators (pMAs) form an interesting development of the pneumatic principle offering even higher power/weight performance, operation in a wide range of environments and accurate control of position, motion and force. This technology provides an interesting and potentially very successful alternative actuation source for robots as well as other applications. However, there are difficulties with this approach due to the following. (i) Modeling errors. Models of the force response are still nonoptimal and for good results these models are highly complex, which makes accurate design difficult. (ii) Low bandwidth\u2014the bandwidth of the actuator\u2014link assemblies are often considered to be too low for practical success in many applications, particularly robotics. In this paper we address these limitations and show how the performance in each area can be enhanced with overall improvements in the response and utility of the braided pMAs. KEY WORDS\u2014actuators, braided pneumatic muscle actuators, McKibben muscles"}
{"_id": "1be73398d9fc18153d11a8ceb50a44558e397211", "title": "High Dimensional Semiparametric Gaussian Copula Graphical Models", "text": "We propose a semiparametric approach called the nonparanormal SKEPTIC for efficiently and robustly estimating high-dimensional undirected graphical models. To achieve modeling flexibility, we consider the nonparanormal graphical models proposed by Liu, Lafferty and Wasserman [J. Mach. Learn. Res. 10 (2009) 2295\u20132328]. To achieve estimation robustness, we exploit nonparametric rank-based correlation coefficient estimators, including Spearman\u2019s rho and Kendall\u2019s tau. We prove that the nonparanormal SKEPTIC achieves the optimal parametric rates of convergence for both graph recovery and parameter estimation. This result suggests that the nonparanormal graphical models can be used as a safe replacement of the popular Gaussian graphical models, even when the data are truly Gaussian. Besides theoretical analysis, we also conduct thorough numerical simulations to compare the graph recovery performance of different estimators under both ideal and noisy settings. The proposed methods are then applied on a large-scale genomic data set to illustrate their empirical usefulness. The R package huge implementing the proposed methods is available on the Comprehensive R Archive Network: http://cran.r-project.org/."}
{"_id": "2fa4d227d33fc520ef24c71c50b4a36eee0a7177", "title": "Chapter 3 Recent Developments in Auxiliary Particle Filtering", "text": "State space models (SSMs; sometimes termed hidden Markov models, particularly in the discrete case) are very popular statistical models for time series. Such models describe the trajectory of some system of interest as an unobserved E-valued Markov chain, known as the signal process. Let X1 \u223c \u03bd and Xn|(Xn\u22121 = xn\u22121) \u223c f(\u00b7|xn\u22121) denote this process. Indirect observations are available via an observation process, {Yn}n\u2208N. Conditional upon Xn, Yn is independent of the remainder of the observation and signal processes, with Yn|(Xn = xn) \u223c g(\u00b7|xn). For any sequence {zn}n\u2208N, we write zi:j = (zi, zi+1, ..., zj). In numerous applications, we are interested in estimating, recursively in time, an analytically intractable sequence of posterior distributions {p (x1:n| y1:n)}n\u2208N, of the form:"}
{"_id": "415dae363449d399141d07941424b057a26716aa", "title": "Fast time series classification using numerosity reduction", "text": "Many algorithms have been proposed for the problem of time series classification. However, it is clear that one-nearest-neighbor with Dynamic Time Warping (DTW) distance is exceptionally difficult to beat. This approach has one weakness, however; it is computationally too demanding for many realtime applications. One way to mitigate this problem is to speed up the DTW calculations. Nonetheless, there is a limit to how much this can help. In this work, we propose an additional technique, numerosity reduction, to speed up one-nearest-neighbor DTW. While the idea of numerosity reduction for nearest-neighbor classifiers has a long history, we show here that we can leverage off an original observation about the relationship between dataset size and DTW constraints to produce an extremely compact dataset with little or no loss in accuracy. We test our ideas with a comprehensive set of experiments, and show that it can efficiently produce extremely fast accurate classifiers."}
{"_id": "0f8863a34b2630c832316205284ac4ed3d0bf840", "title": "Blind Source Separation in nonminimum-phase systems based on filter decomposition", "text": "This paper focuses on the causality problem in the task of Blind Source Separation (BSS) of speech signals in nonminimum-phase mixing channels. We propose a new algorithm for solving this problem using filter decomposition approach. Our proposed algorithm uses an integrated cost function in which independence criterion is defined in frequency-domain. The parameters of demixing system are derived in time-domain, so the algorithm has the benefits of both time and frequency-domain approaches. Compared to the previous work in this framework, our proposed algorithm is the extension of filter decomposition idea in multi-channel blind deconvolution to the problem of blind source separation of speech signals. The proposed method is capable of dealing with both minimum-phase and nonminimum-phase mixing situations. Simulation results show considerable improvement in separating speech signals specially when the mixing system is nonminimum-phase."}
{"_id": "905ad646e5745afe6a3b02617cd8452655232c0d", "title": "CSI Neural Network: Using Side-channels to Recover Your Artificial Neural Network Information", "text": "Machine learning has become mainstream across industries. Numerous examples proved the validity of it for security applications. In this work, we investigate how to reverse engineer a neural network by using only power side-channel information. To this end, we consider a multilayer perceptron as the machine learning architecture of choice and assume a non-invasive and eavesdropping attacker capable of measuring only passive side-channel leakages like power consumption, electromagnetic radiation, and reaction time. We conduct all experiments on real data and common neural net architectures in order to properly assess the applicability and extendability of those attacks. Practical results are shown on an ARM CORTEX-M3 microcontroller. Our experiments show that the side-channel attacker is capable of obtaining the following information: the activation functions used in the architecture, the number of layers and neurons in the layers, the number of output classes, and weights in the neural network. Thus, the attacker can effectively reverse engineer the network using sidechannel information. Next, we show that once the attacker has the knowledge about the neural network architecture, he/she could also recover the inputs to the network with only a single-shot measurement. Finally, we discuss several mitigations one could use to thwart such attacks."}
{"_id": "940ecc652c8b1aacbaef540242dfb49a1ccc2261", "title": "Parking Guidance System Based on ZigBee and Geomagnetic Sensor Technology", "text": "Concerning the phenomenon that common parking service could not satisfy the increasing demand of the private vehicle owners, an intelligent parking guidance system based on Zig Bee network and geomagnetic sensors was designed. Real-time vehicle position and related traffic information were collected by geomagnetic sensors around parking lots and updated to center server via Zig Bee network. On the other hand, out-door Liquid Crystal Display screens controlled by center server can display information of available parking places. In this paper, guidance strategy was divided into four levels, which could provide clear and effective information to drivers. The experimental results prove that the distance detection accuracy of geomagnetic sensors was within 0.4m, and the lowest package loss rate of the wireless network in the range of 150m is 0%. This system can provide solution for better parking service in intelligent cities."}
{"_id": "15218d9c029cbb903ae7c729b2c644c24994c201", "title": "Normalized ( Pointwise ) Mutual Information in Collocation Extraction", "text": "In this paper, we discuss the related information theoretical association measures of mutual information and pointwise mutual information, in the context of collocation extraction. We introduce normalized variants of these measures in order to make them more easily interpretable and at the same time less sensitive to occurrence frequency. We also provide a small empirical study to give more insight into the behaviour of these new measures in a collocation extraction setup."}
{"_id": "3dd3e986fda5541a039239c8b5c705b030ecc73f", "title": "Beyond the session timeout: automatic hierarchical segmentation of search topics in query logs", "text": "Most analysis of web search relevance and performance takes a single query as the unit of search engine interaction. When studies attempt to group queries together by task or session, a timeout is typically used to identify the boundary. However, users query search engines in order to accomplish tasks at a variety of granularities, issuing multiple queries as they attempt to accomplish tasks. In this work we study real sessions manually labeled into hierarchical tasks, and show that timeouts, whatever their length, are of limited utility in identifying task boundaries, achieving a maximum precision of only 70%. We report on properties of this search task hierarchy, as seen in a random sample of user interactions from a major web search engine's log, annotated by human editors, learning that 17% of tasks are interleaved, and 20% are hierarchically organized. No previous work has analyzed or addressed automatic identification of interleaved and hierarchically organized search tasks. We propose and evaluate a method for the automated segmentation of users' query streams into hierarchical units. Our classifiers can improve on timeout segmentation, as well as other previously published approaches, bringing the accuracy up to 92% for identifying fine-grained task boundaries, and 89-97% for identifying pairs of queries from the same task when tasks are interleaved hierarchically. This is the first work to identify, measure and automatically segment sequences of user queries into their hierarchical structure. The ability to perform this kind of segmentation paves the way for evaluating search engines in terms of user task completion."}
{"_id": "a1c3941f7671e90d4f3985d91793e280157be729", "title": "Query segmentation revisited", "text": "We address the problem of query segmentation: given a keyword query, the task is to group the keywords into phrases, if possible. Previous approaches to the problem achieve reasonable segmentation performance but are tested only against a small corpus of manually segmented queries. In addition, many of the previous approaches are fairly intricate as they use expensive features and are difficult to be reimplemented.\n The main contribution of this paper is a new method for query segmentation that is easy to implement, fast, and that comes with a segmentation accuracy comparable to current state-of-the-art techniques. Our method uses only raw web n-gram frequencies and Wikipedia titles that are stored in a hash table. At the same time, we introduce a new evaluation corpus for query segmentation. With about 50,000 human-annotated queries, it is two orders of magnitude larger than the corpus being used up to now."}
{"_id": "b40567c71269c01c57330cce321d9147bc62a2e8", "title": "Exploratory search: from finding to understanding", "text": "Research tools critical for exploratory search success involve the creation of new interfaces that move the process beyond predictable fact retrieval."}
{"_id": "863472a67f5bdc67e4782306efd883fca23e3a3d", "title": "Delphi method: techniques and applications", "text": "Cross-impact analysis is a method for revising estimated probabilities of future events in terms of estimated interactions among those events. This Report presents an elementary cross-impact model where the cross-impacts are formulated as relative probabilities. Conditions are derived for the consistency of the matrix of relative probabilities of n events. An extension also provides a necessary condition for the vector of absolute probabilities to be consistent with the relative probability matrix. An averaging technique is formulated for resolving inconsistencies in the matrix, and a nearest-point computation derived for resolving inconsistencies between the set of absolute probabilities and the matrix. Although elementary, the present model clarifies some of the conceptual problems associated with cross-impact analysis, and supplies a relatively sound basis for revising probability estimates in the limited case where interactions can be approximated by relative probabilities."}
{"_id": "879530362c206b673f957e40778fbd83511b06b6", "title": "Phishing e-mail detection by using deep learning algorithms", "text": "Phishing e-mails are considered as spam e-mails, which aim to collect sensitive personal information about the users via network. Since the main purpose of this behavior is mostly to harm users financially, it is vital to detect these phishing or spam e-mails immediately to prevent unauthorized access to users' vital information. To detect phishing e-mails, using a quicker and robust classification method is important. Considering the billions of e-mails on the Internet, this classification process is supposed to be done in a limited time to analyze the results. In this work, we present some of the early results on the classification of spam email using deep learning and machine methods. We utilize word2vec to represent emails instead of using the popular keyword or other rule-based methods. Vector representations are then fed into a neural network to create a learning model. We have tested our method on an open dataset and found over 96% accuracy levels with the deep learning classification methods in comparison to the standard machine learning algorithms."}
{"_id": "6ca3c5ee075c463f2914e8ec211e041955502ec6", "title": "Bundle Adjustment in the Large", "text": "We present the design and implementation of a new inexact Newton type algorithm for solving large-scale bundle adjustment problems with tens of thousands of images. We explore the use of Conjugate Gradients for calculating the Newton step and its performance as a function of some simple and computationally efficient preconditioners. We show that the common Schur complement trick is not limited to factorization-based methods and that it can be interpreted as a form of preconditioning. Using photos from a street-side dataset and several community photo collections, we generate a variety of bundle adjustment problems and use them to evaluate the performance of six different bundle adjustment algorithms. Our experiments show that truncated Newton methods, when paired with relatively simple preconditioners, offer state of the art performance for large-scale bundle adjustment. The code, test problems and detailed performance data are available at http://grail.cs.washington.edu/projects/bal."}
{"_id": "5760c9e626645091155ed64c32447effde010ddb", "title": "Building Caring Healthcare Systems in the Internet of Things", "text": "The nature of healthcare and the computational and physical technologies and constraints present a number of challenges to systems designers and implementers. In spite of the challenges, there is a significant market for systems and products to support caregivers in their tasks as the number of people needing assistance grows substantially. In this paper, we present a structured approach for describing Internet of Things (IoT) for healthcare systems. We illustrate the approach for three use cases and discuss relevant quality issues that arise, in particular, the need to consider caring as a requirement."}
{"_id": "8ce420f34411fb76c22b46f2d574baf9e2bd089e", "title": "Mining Online Reviews for Predicting Sales Performance: A Case Study in the Movie Domain", "text": "Posting reviews online has become an increasingly popular way for people to express opinions and sentiments toward the products bought or services received. Analyzing the large volume of online reviews available would produce useful actionable knowledge that could be of economic values to vendors and other interested parties. In this paper, we conduct a case study in the movie domain, and tackle the problem of mining reviews for predicting product sales performance. Our analysis shows that both the sentiments expressed in the reviews and the quality of the reviews have a significant impact on the future sales performance of products in question. For the sentiment factor, we propose Sentiment PLSA (S-PLSA), in which a review is considered as a document generated by a number of hidden sentiment factors, in order to capture the complex nature of sentiments. Training an S-PLSA model enables us to obtain a succinct summary of the sentiment information embedded in the reviews. Based on S-PLSFA, we propose ARSA, an Autoregressive Sentiment-Aware model for sales prediction. We then seek to further improve the accuracy of prediction by considering the quality factor, with a focus on predicting the quality of a review in the absence of user-supplied indicators, and present ARSQA, an Autoregressive Sentiment and Quality Aware model, to utilize sentiments and quality for predicting product sales performance. Extensive experiments conducted on a large movie data set confirm the effectiveness of the proposed approach."}
{"_id": "6ae6dc55308bf1d02e3d26947c29c33347dad26d", "title": "Entity Retrieval via Entity Factoid Hierarchy", "text": "We propose that entity queries are generated via a two-step process: users first select entity facts that can distinguish target entities from the others; and then choose words to describe each selected fact. Based on this query generation paradigm, we propose a new entity representation model named as entity factoid hierarchy. An entity factoid hierarchy is a tree structure composed of factoid nodes. A factoid node describes one or more facts about the entity in different information granularities. The entity factoid hierarchy is constructed via a factor graph model, and the inference on the factor graph is achieved by a modified variant of Multiple-try Metropolis algorithm. Entity retrieval is performed by decomposing entity queries and computing the query likelihood on the entity factoid hierarchy. Using an array of benchmark datasets, we demonstrate that our proposed framework significantly improves the retrieval performance over existing models."}
{"_id": "6a7cc15006d874b670db10f72706cd58af7a3dcb", "title": "A Foray into Conficker's Logic and Rendezvous Points", "text": "We present an in depth static analysis of the Conficker worm, primarily through the exploration of the client-side binary logic. In this paper, we summarize various aspects of the inner workings of binary variants A and B,1 which were the first in a chain of recent revisions aimed to keep this epidemic resistant to ongoing eradication attempts. These first two variants have combined to produce a multi-million node population of infected hosts, whose true main purpose has yet to be fully understood. We further validate aspects of our analysis through in-situ network analyses, and discuss some attribution links about its origins."}
{"_id": "860fa25a5c28dd3698517729222bbf37f27ff6f4", "title": "Information Security Threats Classification Pyramid", "text": "Threat classification is extremely important for organizations, as it is an important step towards implementation of information security. Most of the existing threat classifications listed threats in static ways without linking threats to information system areas. The aim of this paper is to design a methodology that can classify deliberate threats in a dynamic way to represent each threat in different areas of the information system. This technique is based on the following factors: the attacker's prior knowledge (i. e. the knowledge hold by the source of the threat) about the system, loss of security information and the criticality of the area that might be affected by that threat."}
{"_id": "eea181af6fc81ac0897c79a8bdb1c2dcbe410863", "title": "Collaborative privacy management: mobile privacy beyond your own devices", "text": "As the development of mobile devices and applications, mobile privacy has become a very important issue. Current researches on mobile privacy mainly focus on potential leakages on a particular device. However, leakage of sensitive data on a mobile device not only violates the privacy of the phone (or data) owner, but also violates the privacy of many other people whose information are contained in the data directly or indirectly (they are called data involvers). To address such problems, we introduce a collaborative privacy management framework, which aims to provide fine-grained data privacy protection for both data owners and data involvers in a distributed manner. Based on individual privacy policies specified by each user, a collaborative privacy policy is generated and enforced on different devices automatically. As a proof-of-concept prototype, we implement the proposed framework on Android and demonstrate its applicability with two case studies."}
{"_id": "223b72e228f787edda258dcdaaf62ae815a9cba6", "title": "Adding nesting structure to words", "text": "We propose the model of nested words for representation of data with both a linear ordering and a hierarchically nested matching of items. Examples of data with such dual linear-hierarchical structure include executions of structured programs, annotated linguistic data, and HTML/XML documents. Nested words generalize both words and ordered trees, and allow both word and tree operations. We define nested word automata\u2014finite-state acceptors for nested words, and show that the resulting class of regular languages of nested words has all the appealing theoretical properties that the classical regular word languages enjoys: deterministic nested word automata are as expressive as their nondeterministic counterparts; the class is closed under union, intersection, complementation, concatenation, Kleene-*, prefixes, and language homomorphisms; membership, emptiness, language inclusion, and language equivalence are all decidable; and definability in monadic second order logic corresponds exactly to finite-state recognizability. We also consider regular languages of infinite nested words and show that the closure properties, MSO-characterization, and decidability of decision problems carry over.\n The linear encodings of nested words give the class of visibly pushdown languages of words, and this class lies between balanced languages and deterministic context-free languages. We argue that for algorithmic verification of structured programs, instead of viewing the program as a context-free language over words, one should view it as a regular language of nested words (or equivalently, a visibly pushdown language), and this would allow model checking of many properties (such as stack inspection, pre-post conditions) that are not expressible in existing specification logics.\n We also study the relationship between ordered trees and nested words, and the corresponding automata: while the analysis complexity of nested word automata is the same as that of classical tree automata, they combine both bottom-up and top-down traversals, and enjoy expressiveness and succinctness benefits over tree automata."}
{"_id": "6c008772684f7a9fec6178f88d8fa32491a3dc16", "title": "Smart Refrigerator Based on Internet of Things (IoT): An Approach to Efficient Food Management", "text": "This paper focuses on the technology of the smart refrigerator connected through the Internet of Things (IoT) platform. It deals with the issue of food wastage and efficient food distribution through a network of connected refrigerators within a defined neighborhood. This paper defines the three-layer architecture: front-end layer, gateway layer and the back-end layer that can allow the implementation of the system. This low-cost network can be created through a microcontroller and wireless sensors at the front-end architecture. The process was discussed to determine: the stock item below a defined quantity or threshold limit, determining the product expiration date and eventually determining the purchase or exchange of food items across the network. It also includes the proof of concept technology by allowing connection of an ultrasonic sensor to an IoT platform that can allow the exchange of information on the positioning of the food items inside the refrigerator. This eventually can allow for a large-scale commercial implementation of the project to deal with the current issue of food distribution."}
{"_id": "f8c26a64696459940fc41d30d68c81d1f6146ca1", "title": "1.3 \u03bcm, 56-Gbit/s EML module target to 400GbE", "text": "We developed 1.3 \u03bcm, EML module built in EML driver IC for both 50 and 56 Gbit/s. 40-km SMF transmission with clear eye openings under 56 Gbit/s operation were demonstrated for the first time."}
{"_id": "56b16779396d3d5b2393144f1bc9d2f11ebdf558", "title": "Hybrid digital and analog beamforming design for large-scale MIMO systems", "text": "Large-scale multiple-input multiple-output (MIMO) systems enable high spectral efficiency by employing large antenna arrays at both the transmitter and the receiver of a wireless communication link. In traditional MIMO systems, full digital beamforming is done at the baseband; one distinct radio-frequency (RF) chain is required for each antenna, which for large-scale MIMO systems can be prohibitive from either cost or power consumption point of view. This paper considers a two-stage hybrid beamforming structure to reduce the number of RF chains for large-scale MIMO systems. The overall beamforming matrix consists of analog RF beamforming implemented using phase shifters and baseband digital beamforming of much smaller dimension. This paper considers precoder and receiver design for maximizing the spectral efficiency when the hybrid structure is used at both the transmitter and the receiver. On the theoretical front, bounds on the minimum number of transmit and receive RF chains that are required to realize the theoretical capacity of the large-scale MIMO system are presented. It is shown that the hybrid structure can achieve the same performance as the fully-digital beamforming scheme if the number of RF chains at each end is greater than or equal to twice the number of data streams. On the practical design front, this paper proposes a heuristic hybrid beamforming design strategy for the critical case where the number of RF chains is equal to the number of data streams, and shows that the performance of the proposed hybrid beamforming design can achieve spectral efficiency close to that of the fully-digital solution."}
{"_id": "adf5e430f3d32bcb4eeae90d157b81df7dc06410", "title": "Design and implementation of embedded Web server based on arm and Linux", "text": "This paper achieves the design of an embedded Web server, which takes ARM920T-S3c2410s chip as its core and Linux as its operating system. This is because Linux can be reduced and transplanted. The method used to transplant Web server Boa on the embedded Linux platform is also discussed in detail, and through CGI technology functions of dynamic Web page is successfully realized. Relevant experiments show that after the Web server is embedded into the network video monitoring system, dynamic page interaction can be achieved between the Web server and the embedded system via the browser in the Windows environment."}
{"_id": "99a39ea1b9bf4e3afc422329c1d4d77446f060b8", "title": "Machine Learning Strategies for Time Series Forecasting", "text": "The increasing availability of large amounts of historical data and the need of performing accurate forecasting of future behavior in several scientific and applied domains demands the definition of robust and efficient techniques able to infer from observations the stochastic dependency between past and future. The forecasting domain has been influenced, from the 1960s on, by linear statistical methods such as ARIMA models. More recently, machine learning models have drawn attention and have established themselves as serious contenders to classical statistical models in the forecasting community. This chapter presents an overview of machine learning techniques in time series forecasting by focusing on three aspects: the formalization of one-step forecasting problems as supervised learning tasks, the discussion of local learning techniques as an effective tool for dealing with temporal data and the role of the forecasting strategy when we move from one-step to multiple-step forecasting."}
{"_id": "e343de355b7b0cd0654b4098e4446421ca98a5e3", "title": "A qualitative study on factors that influence women\u2019s choice of delivery in health facilities in Addis Ababa, Ethiopia", "text": "BACKGROUND\nFacility based delivery for mothers is one of the proven interventions to reduce maternal and neonatal morbidity and mortality. This study identified women's reasons for seeking to give birth in a health facility and captured their perceptions of the quality of care they received during their most recent birth, in a population with high utilization of facility based deliveries.\n\n\nMETHODS\nThis qualitative study was conducted in eight health centers in Addis Ababa. Women bringing their index child for first vaccinations were invited to participate in an in-depth interview about their last delivery. Sixteen in-depth interviews were conducted. Interviews were conducted by trained researchers using a semi-structured interview guide. The data were transcribed verbatim in Amharic and translated into English. A thematic analysis was conducted to answer specific study questions.\n\n\nRESULTS\nAll research participants expressed a preference for facility based delivery because of their awareness of obstetric complications, and related perceptions that facility-birth is safer for the mother and child. Dimensions of quality of care and the cost of services were identified as influencing decisions about whether to seek care in the public or private sector. Media campaigns, information from social networks and women's experiences with healthcare providers and facilities influenced care-seeking decisions.\n\n\nCONCLUSIONS\nThe universal preference for facility-based birth by women in this study indicates that, in Addis Ababa, facility based delivery has become a preferred norm. Sources of information for decision-making and the dimensions of quality prioritized by women should be taken into account to develop interventions to promote facility-based births in other settings."}
{"_id": "999b3bd7ce7567ea7307fe393c462417d083d9da", "title": "Compressive Sensing Image Sensors-Hardware Implementation", "text": "The compressive sensing (CS) paradigm uses simultaneous sensing and compression to provide an efficient image acquisition technique. The main advantages of the CS method include high resolution imaging using low resolution sensor arrays and faster image acquisition. Since the imaging philosophy in CS imagers is different from conventional imaging systems, new physical structures have been developed for cameras that use the CS technique. In this paper, a review of different hardware implementations of CS encoding in optical and electrical domains is presented. Considering the recent advances in CMOS (complementary metal-oxide-semiconductor) technologies and the feasibility of performing on-chip signal processing, important practical issues in the implementation of CS in CMOS sensors are emphasized. In addition, the CS coding for video capture is discussed."}
{"_id": "14c6bf65894d4328242b118455618b27ac5e35ae", "title": "Partitioning Well-Clustered Graphs with k-Means and Heat Kernel", "text": "We study a suitable class of well-clustered graphs that admit good k-way partitions and present the first almost-linear time algorithm for with almost-optimal approximation guarantees partitioning such graphs. A good k-way partition is a partition of the vertices of a graph into disjoint clusters (subsets) {Si}i=1, such that each cluster is better connected on the inside than towards the outside. This problem is a key building block in algorithm design, and has wide applications in community detection and network analysis. Key to our result is a theorem on the multi-cut and eigenvector structure of the graph Laplacians of these well-clustered graphs. Based on this theorem, we give the first rigorous guarantees on the approximation ratios of the widely used k-means clustering algorithms. We also give an almost-linear time algorithm based on heat kernel embeddings and approximate nearest neighbor data structures."}
{"_id": "8e04afb34228a7fbb3f6ef3af8cfe85e0e74c34b", "title": "Approximate Computing for Low Power and Security in the Internet of Things", "text": "To save resources for Internet of Things (IoT) devices, a proposed approach segments operands and corresponding basic arithmetic operations that can be carried out by approximate function units for almost all applications. The approach also increases the security of IoT devices by hiding information for IP watermarking, digital fingerprinting, and lightweight encryption."}
{"_id": "12744ba4bf73b73265be98673ba502d1d55eb776", "title": "Speaker identification on the SCOTUS corpus", "text": "This paper reports the results of our experiments on speaker identification in the SCOTUS corpus, which includes oral arguments from the Supreme Court of the United States. Our main findings are as follows: 1) a combination of Gaussian mixture models and monophone HMM models attains near-100% textindependent identification accuracy on utterances that are longer than one second; 2) the sampling rate of 11025 Hz achieves the best performance (higher sampling rates are harmful); and a sampling rate as low as 2000 Hz still achieves more than 90% accuracy; 3) a distance score based on likelihood numbers was used to measure the variability of phones among speakers; we found that the most variable phone is the phone UH (as in good), and the velar nasal NG is more variable than the other two nasal sounds M and N; 4.) our models achieved \u201cperfect\u201d forced alignment on very long speech segments (40 minutes). These findings and their significance are discussed."}
{"_id": "d377cf16f7188c1ee39521a433fa72c3d1ff3b6e", "title": "Diffusion-on-Manifold Aggregation of Local Features for Shape-based 3D Model Retrieval", "text": "Aggregating a set of local features has become one of the most common approaches for representing a multi-media data such as 2D image and 3D model. The success of Bag-of-Features (BF) aggregation [2] prompted several extensions to BF, that are, VLAD [12], Fisher Vector (FV) coding [22] and Super Vector (SV) coding [34]. They all learn small number of codewords, or representative local features, by clustering a set of large number of local features. The set of local features extracted from a media data (e.g., an image) is encoded by considering distribution of features around the codewords; BF uses frequency, VLAD and FV uses displacement vector, and SV uses a combination of both. In doing so, these encoding algorithms assume linearity of feature space about a codeword. Consequently, even if the set of features form a non-linear manifold, its non-linearity would be ignored, potentially degrading quality of aggregated features. In this paper, we propose a novel feature aggregation algorithm called Diffusion-on-Manifold (DM) that tries to take into account, via diffusion distance, structure of non-linear manifold formed by the set of local features. In view of 3D shape retrieval, we also propose a local 3D shape feature defined for oriented point set. Experiments using shape-based 3D model retrieval scenario show that the DM aggregation results in better retrieval accuracy than the existing aggregation algorithms we've compared against, that are, VLAD, FV, and SV, etc.."}
{"_id": "71622e179271983ea66e8e00b0405be4e4de5b51", "title": "Fractional-Slot Concentrated-Windings Synchronous Permanent Magnet Machines: Opportunities and Challenges", "text": "Fractional-slot concentrated-winding (FSCW) synchronous permanent magnet (PM) machines have been gaining interest over the last few years. This is mainly due to the several advantages that this type of windings provides. These include high-power density, high efficiency, short end turns, high slot fill factor particularly when coupled with segmented stator structures, low cogging torque, flux-weakening capability, and fault tolerance. This paper is going to provide a thorough analysis of FSCW synchronous PM machines in terms of opportunities and challenges. This paper will cover the theory and design of FSCW synchronous PM machines, achieving high-power density, flux-weakening capability, comparison of single- versus double-layer windings, fault-tolerance rotor losses, parasitic effects, comparison of interior versus surface PM machines, and various types of machines. This paper will also provide a summary of the commercial applications that involve FSCW synchronous PM machines."}
{"_id": "90bb6bf3ec4dd7481c81966a9998197d99964cde", "title": "Comparison between SPM and IPM motor drives for EV application", "text": "Electric Vehicles make use of permanent magnet synchronous traction motors for their high torque density and efficiency. A comparison between interior permanent magnet (IPM) and surface mounted permanent magnet (SPM) motors is carried out, in terms of performance at given inverter ratings. A simplified analytical approach is proposed and its results are confirmed by actual motor design, validated through FEA. The main result of the proposed analysis is to put in evidence the different behavior of IPM and SPM motors in terms of overload capability: as expected, the overload current results in a power overload of the IPM motor, while in the SPM motor the maximum power is upper limited irrespectively of the current overload."}
{"_id": "245a48b708cac7514d8c216869b623329eda38dc", "title": "A new structure of universal motor using soft magnetic composites", "text": "The universal or ac commutator motor, widely used in hand tools and domestic appliances generally uses a two-pole stator with a concentrated winding and an armature with interlocked coils elements. The copper volume and the axial length of the end windings of such conventional structures are then usually very important. In this paper, the authors present a new universal motor structure based on an efficient use of the isotropic magnetic properties of the soft magnetic composites and on the concentrated winding technique. The stator core presents a claw-pole structure and the armature has a concentrated winding with several coils wound around the same tooth. With this new ac commutator motor structure, a reduction of the total volume by a ratio equal to 200% is obtained when compared to a classical universal motor structure with nearly identical performance."}
{"_id": "526e8c6115fa0cc5cfa0e8326d60cfcb067a3e6f", "title": "Comparison of synchronous PM machine types for wide constant-power speed range operation", "text": "This paper presents a detailed comparison of the high-speed operating characteristics of four synchronous PM machines for applications that require wide speed ranges of constant-power operation. These machines include surface PM machines with both distributed and fractional-slot concentrated windings, and two interior PM machine with distributed windings. These two versions of the interior PM machine include one with and a tight constraint on the machine's back-emf voltage at top speed and one without this constraint The target application is an automotive direct-drive starter/alternator requiring a very wide 10:1 constant power speed ratio (CPSR). Detailed comparisons of the performance characteristics of the machines are presented that include important issues such as the back-emf voltage at top speed, machine mass and cost, and eddy current losses in the magnets. Analytical results are verified using finite element analysis (FEA). Guidelines are developed to help drive designers decide which type of machine is most suitable for high-CPSR applications. Tradeoffs associated with choosing each of these machines are presented."}
{"_id": "913e3aa538924e667f8595ed45ac6881059cda25", "title": "Electrical machine topologies and technologies for electric, hybrid, and fuel cell vehicles", "text": "This paper overviews various electrical machine technologies, including brushless permanent magnet, induction, switched reluctance, and brushless reluctance machines, with particular emphasis on the concept of hybrid electrical machine topologies, and compares their relative merits, in terms of torque and power density, operating speed range, over load capability, and efficiency, for application in electric, hybrid and fuel cell vehicles. electric vehicles."}
{"_id": "75a753c53e930bfc4c08cdf64eba43a04865f4a3", "title": "Spacer-First Damascene-Gate FinFET Architecture Featuring Stringer-Free Integration", "text": "This letter presents a new Damascene-gate FinFET process that inherently suppresses stringers, resulting from gate and spacers patterning. The so-called spacer-first integration scheme relies on the engineering of a hydrogen silsesquioxane layer by electron beam lithography followed by two selective compartmentalized development steps to successively release the Damascene-gate cavity and the source/drain (S/D) contact regions. In contrast to the existing gate-first and gate-last integration approaches, the resulting FinFET process does not impose any restriction or interdependency on the sizing of the fins, gate, spacers, and S/D regions. A complete morphological and electrical validation is proposed in the particular case of wrap-around self-aligned metallic Schottky S/D contacts."}
{"_id": "35058c9b564bb758aa6f91617fc93d137ea5da7a", "title": "Thickness Reduction and Performance Enhancement of Steerable Square Loop Antenna Using Hybrid High Impedance Surface", "text": "In order to reduce the thickness of a steerable beam square loop antenna, the effects of combining it with various periodic high impedance surfaces (HISs) are investigated. When a via-less HIS is used, the radiation pattern has high side lobes, which are shown to be due to surface waves propagating in the HIS lattice. Using a HIS with vias between the plates and ground removes the surface waves, but the beam is distorted due to strong coupling between the HIS's vias and the antenna element. Consequently a hybrid HIS is designed. This uses a via-less lattice beneath the loop, with vias at the edge of the HIS to suppress surface wave propagation. Consequently, a square loop antenna with four feeds on a hybrid HIS substrate is proposed for beam steering applications. This antenna has a low-profile with a total thickness of 4.69 mm for a test frequency band of 4.3 GHz to 5.0 GHz. It exhibits a gain of 8.65 dB at the test frequency (4.7 GHz). Compared to the earlier reported steerable square loop antenna, the new antenna achieves a 61% reduction in substrate thickness, a bandwidth enhancement by 150 MHz and an increase in gain by 1.95 dB."}
{"_id": "40df5e610cf857bb7aa6a8a92914423a90b357d3", "title": "Performing edge detection by difference of Gaussians using q-Gaussian kernels", "text": "X iv :1 31 1. 25 61 v2 [ cs .C V ] 1 2 N ov 2 01 3 Performing edge detection by difference of Gaussians using q-Gaussian kernels Lucas Assirati, N\u00fabia R. da Silva, 2 Lilian Berton, Alneu de A. Lopes, and Odemir M. Bruno 2 Scientific Computing Group, S\u00e3o Carlos Institute of Physics, University of S\u00e3o Paulo (USP), cx 369 13560-970 S\u00e3o Carlos, S\u00e3o Paulo, Brazil www.scg.ifsc.usp.br Institute of Mathematics and Computer Science, University of S\u00e3o Paulo (USP), Avenida Trabalhador s\u00e3o-carlense, 40"}
{"_id": "c984052e5bcc01b822da743b8c5f3d0125f895b8", "title": "Cyber Defense Capability Model: A Foundation Taxonomy", "text": "Cyber attacks have significantly increased over the last few years, where the attackers are highly skilled, more organized and supported by other powerful actors to devise attacks towards specific targets. To aid the development of a strategic plan to defend against emerging attacks, we present a high-level taxonomy along with a cyber defense model to address the interaction and relationships between taxonomy elements. A cyber-kinetic reference model which is used widely by U.S Air Force is adopted as a baseline for the model and taxonomy development. Asset, Cyber Capability, and Preparation Process are the three high-level elements that are presented for the cyber defense capability model. The Cyber Capability, as the focal point of the study, uses three classifiers to characterize the strategic cyber defense mechanisms, which are classified by active, passive and collaborative defense. To achieve a proper cyber defense strategy, the key actors, assets and associated preparation procedure are identified. Finally, the proposed taxonomy is extensible so that additional dimensions or classifications can be added to future needs."}
{"_id": "7849c2a50b63ea4a4699434cec32ca2d6c0c7401", "title": "Practical Privacy-Preserving Medical Diagnosis Using Homomorphic Encryption", "text": "The use of remote services offered by cloud providers have been popular in the last lustrum. Services allow users to store remote files, or to analyze data for several purposes, like health-care or message analysis. However, when personal data are sent to the Cloud, users may lose privacy on the data-content, and on the other side cloud providers may use those data for their own businesses. In this paper, we present our solution to analyze users health-data directly into the Cloud while preserving users privacy. Our solution makes use of homomorphic encryption to protect users data during the analysis. In particular, we developed a mobile application that offloads users data into the Cloud, and a homomorphic encryption algorithm that processes those data without leaking any information to the Cloud provider. Performed empirical tests show that our HE algorithm is able to evaluate users data in reasonable time proving the feasibility of this emerging way of private-data evaluation."}
{"_id": "41a58749f0be2b53df4acb19604e9de73d2bd548", "title": "Explaining consumer satisfaction of services: The role of innovativeness and emotion in an electronic mediated environment", "text": "a r t i c l e i n f o The services section has started to dominate economic activity in many industrialized economies since the last decade. The growth of services in Electronic Mediated Environment (EME) has changed the manner in which firms and consumers interact. Drawing on such theoretical foundations as trust, risk perceptions, trying, emotion, and satisfaction, this study proposes an integrative research model to investigate the relationships among these constructs in the EME context. We collected data from 415 consumers and found that consumer innovativeness and emotion are significant antecedents of trust, risk perception of service, perceived benefits, and service quality. These important factors will influence consumer satisfaction in EME. This empirical investigation breaks a new ground for future studies to understand and gauge the influence of innovativeness and emotion in emerging IT artifacts such as EME services. The service sector has become quantitatively the most essential sector in the world economy over the last decade. It now leads economic activities in a wide variety of industrialized economies. According to CIA World Factbook [14], the service industry accounted for 76.8% and 73.1% of the U.S. and European Union gross domestic product in 2011, respectively. Concurrently, there are more service companies on the list of Future 500 companies than before. Given the precipitously growing rate of the service section, technologically empowered advancements of information technology (IT) and the ever developing Internet are constantly galvanizing today's organizations to more effectively and efficiently coordinate with their constituents via the value-added electronic services (e-services) conduit. Furthermore, the mushrooming growth of e-commerce reflects the increasingly paramount role of IT in e-services while IT spurs the influx of e-service economy across the Internet. It is apparent that Information Systems (IS) plays a crucial role in EME because contemporary organizations have to leverage IT to improve business performance and assist the communication and service delivery amid various value-chain constituents. Consumers play a focal role in the paradigmatic change from the exchange of goods towards a service-centered model of exchange as found in both IS and marketing literature [73]. In this study, the conceptual definition of services in EME is based on Dai and Salam [19] and thereafter refers to the \" services being facilitated by IT where the consumer interacts with a proper user interface including web site, mobile phone, social media, virtual world environment, tablets, etc., so as to gain a consumption \u2026"}
{"_id": "d3d499ea6f0dc4ade2ed8af03300552364253ba7", "title": "ITALICA at PAN 2013: An Ensemble Learning Approach to Author Profiling Notebook for PAN at CLEF 2013", "text": "This notebook discusses the approach to the Author Profiling task developed by the Italica group for PAN 2013. This system implements two different sets of classifiers which are combined later in order to build a final classifier that takes into account the decisions of the previous ones. The initial classifiers are focused on vector space representations of the documents as a bag of words and n-grams of POS tags and also on a set of stylistic features of the texts. The final classifier consists of a stacking schema that combines the other ones. This approach has obtained better results for the Spanish dataset than for the English dataset, probably due to the use of more detailed POS tagset in the former."}
{"_id": "93bfc225f24072100061a86de920e321dfd3ef26", "title": "An extension of the GQM+Strategies approach with formal causal reasoning", "text": "Context: Successful organizations need to manage and synchronize their strategic objectives with daily operations and activities. In general, achieving that requires a continuous process of organizational alignment. GQM + Strategies is an approach that helps software organizations with documenting and aligning organizational goals and strategies, and developing measurement programs. Objective: In this paper, the GQM + Strategies approach is further evolved and extended to include capabilities to evaluate the relationships of organizational goals and strategies through causal analysis. Method: We used an analytical paradigm to develop a formal causal model over the GQM + Strategies structure. In addition, an empirical pre-experimental study was designed to test practitioners\u2019 abilities to provide necessary input for the formal causal model. Results: A developed formal causal model over the GQM + Strategies structure allows the use of causal reasoning for the purpose of analyzing dependencies among chosen sets of goals. We illustrate this by showing how to analyze the impact of risky goals on other goals in the grid. The results of the empirical study showed that the practitioners had no difficulties providing their predictions, i.e. inputs into the causal model. Conclusion: The proposed solution extends the existing GQM + Strategies knowledge base by further elaborating and clarifying the process of creating grids by introducing causality theory. The use of causality theory allows experts to quantify their knowledge and beliefs regarding the effectiveness of organizational strategies. As a part of future work, formal causal models and causal reasoning can be implemented as a supporting software tool for the GQM + Strategies approach. \u00a9 2017 Elsevier B.V. All rights reserved."}
{"_id": "e4f36082d6570fb97395384aa116130248816110", "title": "Recommender System Based on Product Taxonomy in E-Commerce Sites", "text": "This article describes a novel and fast recommender system for websites based on product taxonomy and user click patterns. The proposed system consists of the following four steps. First, a product-preference matrix for each customer is estimated through a linear combination of the click, basket placement, and purchase statuses. Second, the preference matrix of the genre and that of the specific type are constructed by computing the ratios of the number of clicks, basket placements, and purchases of a product to the totals. Third, cluster analysis is performed using the genre preference matrix, and a neighborhood formation process is conducted using the specific-type preference matrix. Finally, data are generated for prediction, in which a customer\u2019s preference for specific types is greater than a given threshold value. Using these data sets, computational burden and processing time are greatly reduced. The effectiveness of the proposed approach was assessed by applying the F1 metric to an experimental e-commerce website. The proposed method outperformed conventional methods."}
{"_id": "93b5a905d2246b766b336a9d9e4a9b1e2894c074", "title": "A novel approach for efficient supergraph query processing on graph databases", "text": "In recent years, large amount of data modeled by graphs, namely graph data, have been collected in various domains. Efficiently processing queries on graph databases has attracted a lot of research attentions. Supergraph query is a kind of new and important queries in practice. A supergraph query, q, on a graph database D is to retrieve all graphs in D such that q is a supergraph of them. Because the number of graphs in databases is large and subgraph isomorphism testing is NP-complete, efficiently processing such queries is a big challenge. This paper first proposes an optimal compact method for organizing graph databases. Common subgraphs of the graphs in a database are stored only once in the compact organization of the database, in order to reduce the overall cost of subgraph isomorphism testings from stored graphs to queries during query processing. Then, an exact algorithm and an approximate algorithm for generating significant feature set with optimal order are proposed to construct indices on graph databases. The optimal order on the feature set is to reduce the number of subgraph isomorphism testings during query processing. Based on the compact organization of graph databases, a novel algorithm of testing subgraph isomorphisms from multiple graphs to one graph is presented. Finally, based on all these techniques, a query processing method is proposed. Analytical and experimental results show that the proposed algorithms outper-form the existing similar algorithms by one to two orders of magnitude."}
{"_id": "29cd59e5ea5e9511db4f31b96e10ef081757fca7", "title": "Eye images increase generosity , but not for long : the limited effect of a false cue", "text": "\u204e Corresponding author. E-mail address: asparks@uoguelph.ca (A. Sparks). 1090-5138/$ \u2013 see front matter \u00a9 2013 Elsevier Inc. Al http://dx.doi.org/10.1016/j.evolhumbehav.2013.05.001 Article history: Initial receipt 5 November 2012 Final revision received 3 May 2013"}
{"_id": "36f9ea27c00e2a6de754aea947c9212e5d389325", "title": "A New Approach of Digital Forensic Model for Digital Forensic Investigation", "text": "The research introduces a structured and consistent approach for digital forensic investigation. Digital forensic science provides tools, techniques and scientifically proven methods that can be used to acquire and analyze digital evidence. The digital forensic investigation must be retrieved to obtain the evidence that will be accepted in the court. This research focuses on a structured and consistent approach to digital forensic investigation. This research aims at identifying activities that facilitate and improves digital forensic investigation process. Existing digital forensic framework will be reviewed and then the analysis will be compiled. The result from the evaluation will produce a new model to improve the whole investigation process."}
{"_id": "9099dcc184b8bb48ebb434433b6e80875949c574", "title": "Real-time object tracking via CamShift-based robust framework", "text": "In recent years, lots of object tracking methods have been presented for better tracking accuracies. However, few of them can be applied to the real-time applications due to high computational cost. Aiming at achieving better realtime tracking performance, we propose an adaptive robust framework for object tracking based on the CamShift approach, which is notable for its simplicity and high processing efficiency. An adaptive local search method is presented to search for the best object candidate to avoid that the CamShift tracker gets confused by the surrounding background and erroneously incorporates it into the object region. A Kalman filter is also incorporated into our framework for prediction of the object's movement, so as to reduce the search effort and possible tracking failure caused by fast object motion. The experimental results demonstrate that the proposed tracking framework is robust and computationally effective."}
{"_id": "5fbc60008d538237e18258d21fcfc08200f9f48e", "title": "2.4 GHz inkjet-printed RF energy harvester on bulk cardboard substrate", "text": "An experimental investigation on the inkjet-printed power harvester for 2.4GHz and review of RF characterization of substrate and printed conductors are presented in this paper. A one stage discrete rectifier based on a voltage doubler structure and a planar monopole antenna are fabricated on cardboard using inkjet printing. The performance of the whole system is examined by measuring the output voltage of the RF power harvester. By the utilization of the proposed idea, the fabrication of low-cost environmentally-friendly battery-less wireless modules is conceivable."}
{"_id": "9acb353a94c258584c9048e8a700dbed1008d64a", "title": "DCnet: A new data center network architecture", "text": "The migration of computation and storage to cloud data centers has imposed new communication requirements and a need for fundamental changes in infrastructure. The cloud paradigm means that significant network communication will occur between servers in data centers. Furthermore, the migration of virtual machines and containers across physical servers means that in addition to traditional requirements of efficiency, scalability, and easy manageability, future data center networks must also support efficient mobility. To solve the problem, the paper proposes a radical redesign for data center networks, using a new architectural approach that we call DCnet. The DCnet approach changes the addressing and routing schemes at layers 2 and 3 completely. The new addressing mechanism allows an organization to assign addresses that span multiple data centers and permits VM mobility across data centers. Despite radical changes in addressing and routing, the DCnet architecture retains basics from Ethernet and IP, allowing existing hardware building blocks to be reused. The paper uses IPv6 in examples, but the new addressing model and address mobility approach can be applied to IPv4. By assigning each addressable entity a unique ID, and using the ID as the host suffix in an IP address, DCnet allows legacy operating systems and applications to be used without changes. The paper reports a simulation study that uses mininet and SDN to demonstrate the feasibility of the DCnet approach."}
{"_id": "27d2ee0a25f97137aaea666a1d39350cd7f1c4ba", "title": "Software engineering for security: a roadmap", "text": "doesn't need to be secure? Almost every softwarecontrolled system faces threats from potential adversaries, from Internet-aware client applications running on PCs, to complex telecommunications and power systems accessible over the Internet, to commodity software with copy protection mechanisms. Software engineers must be cognizant of these threats and engineer systems with credible defenses, while still delivering value to customers. In this paper, we present our perspectives on the research issues that arise in the interactions between software engineering and security."}
{"_id": "e3a9993a00fd9240013c83532b907954a6513ed9", "title": "A novel soil measuring wireless sensor network", "text": "Emerging technologies have made low-power and low-cost wireless sensor networks feasible. This paper presents a hierarchical wireless sensor network for measuring soil parameters such as temperature and humidity. Specifically, we designed sensor nodes that are placed completely underground and are used to collect soil measurements. These nodes use their radios to deliver the collected measurements to one of multiple relay nodes located above ground. In turn, relay nodes that are capable of long-range communications forward the data collected from the network's sensor nodes to a base node, which is connected to a workstation. The proposed hierarchical wireless sensor network uses a probabilistic communication protocol to achieve a very low duty cycle and hence a long life-time for soil monitoring applications."}
{"_id": "e34839909ef0ef1d375f4dee2e76eed3a1d29beb", "title": "Improved FIFO Scheduling Algorithm Based on Fuzzy Clustering in Cloud Computing", "text": "In cloud computing, some large tasks may occupy too many resources and some small tasks may wait for a long time based on First-In-First-Out (FIFO) scheduling algorithm. To reduce tasks\u2019 waiting time, we propose a task scheduling algorithm based on fuzzy clustering algorithms. We construct a task model, resource model, and analyze tasks\u2019 preference, then classify resources with fuzzy clustering algorithms. Based on the parameters of cloud tasks, the algorithm will calculate resource expectation and assign tasks to different resource clusters, so the complexity of resource selection will be decreased. As a result, the algorithm will reduce tasks\u2019 waiting time and improve the resource utilization. The experiment results show that the proposed algorithm shortens the execution time of tasks and increases the resource utilization."}
{"_id": "590867686d95b9cd6893126780fca7f34125e6fe", "title": "A Fast Approach for Semantic Similar Short Texts Retrieval", "text": "Retrieving semantic similar short texts is a crucial issue to many applications, e.g., web search, ads matching, questionanswer system, and so forth. Most of the traditional methods concentrate on how to improve the precision of the similarity measurement, while current real applications need to efficiently explore the top similar short texts semantically related to the query one. We address the efficiency issue in this paper by investigating the similarity strategies and incorporating them into the FAST framework (efficient FrAmework for semantic similar Short Texts retrieval). We conduct comprehensive performance evaluation on real-life data which shows that our proposed method outperforms the state-ofthe-art techniques."}
{"_id": "cfbe4eb488df546f953d65a929dc686bf7ef18d4", "title": "2013-01-1544 Simplified Extended Kalman Filter Observer for SOC Estimation of Commercial Power-Oriented LFP Lithium Battery Cells", "text": "The lithium iron phosphate (LFP) cell chemistry is finding wide acceptance for energy storage on-board hybrid electric vehicles (HEVs) and electric vehicles (EVs), due to its high intrinsic safety, fast charging, and long cycle life. However, three main challenges need to be addressed for the accurate estimation of their state of charge (SOC) at runtime: \uf0b7 Long voltage relaxation time to reach its open circuit voltage (OCV) after a current pulse \uf0b7 Time-, temperatureand SOC-dependent hysteresis \uf0b7 Very flat OCV-SOC curve for most of the SOC range In view of these problems, traditional SOC estimation techniques such as coulomb counting with error correction using the SOC-OCV correlation curve are not suitable for this chemistry. This work addressed these challenges with a novel combination of the extended Kalman filter (EKF) algorithm, a two-RC-block equivalent circuit and the traditional coulomb counting method. The simplified implementation of the EKF algorithm offers a computationally efficient option for runtime SOC evaluation on-board vehicles. The SOC estimation was validated with experimental data of a current profile contaminated with pseudo-random noise and with an offset in the initial condition. The model rapidly converged to within 4% of the true SOC even with imposed errors of 40% to initial SOC, 24% to current measurement and 6% to voltage measurement. INTRODUCTION The LFP olivine has emerged as one of the favored cathode materials for lithium ion batteries, especially for use as a rechargeable energy storage device (RESS) on-board HEVs and EVs, thanks to its high intrinsic safety [1], capacity for fast charging, and long cycle life [2]. Recent research and development advancements in this cell technology, especially the commercial launch of high-power LFP cells, have led to these cells matching the performance of the latest supercapacitors over short time periods (up to 30 seconds). A metric of great importance for a rechargeable lithium battery pack is the accurate runtime evaluation of its SOC, which is defined as the percentage of the completely extractable charge capacity remaining in the battery. It is akin to the fuel gauge of a conventional vehicle. The SOC indicates the amount of electrical energy remaining in the battery pack that is available to do work. An accurate runtime estimate of the SOC is important for the battery application designers and the battery users. However, the charge capacity of a battery depends upon a number of factors, including average current, discharge time, voltage cut-off limit, electrolyte temperature, aging, and battery storage time [3]. Armed with the confidence that the battery SOC would be determined accurately, the designer is able to efficiently use available battery capacity and reduce over-engineering; enabling the use of smaller and lighter batteries. With an accurate indication of the battery SOC, the user ensures that the battery is not over-charged or under-discharged; and suffers less range anxiety. Overall, the battery lasts longer and provides better performance. An accurate SOC is also a very important input for the battery management system. Over the years, many techniques have been proposed for estimation of the battery SOC, and they generally depend upon the battery chemistry and the final application [4-9]. 
The most reliable test for establishing the SOC of a battery is to charge or discharge it completely, thus physically reaching 100% or 0% SOC. This test is often adopted for an EV or a PHEV that is charged completely every evening, and allows the onboard SOC estimation algorithm to gain valuable feedback to recalibrate itself. For an HEV, which is never charged from the grid, ampere-hour counting remains the most popular technique. This technique (also called the bookkeeping system or coulomb counting) uses discrete integration of measured current over time as a direct indicator of SOC. Since this integration includes errors in current measurement and battery losses, the result needs to be periodically corrected. The OCV vs. SOC correlation curve is often used to provide points for recalibration. Other techniques such as direct measurement of the cell physical properties, including impedance or internal resistance, are not practical for LFP cells during runtime. The coulomb counting technique with correction (after rest periods) using an OCV-SOC correlation curve is not practical for cells exhibiting hysteresis, since the battery cell takes a long time to reach a steady-state OCV after a current pulse. The problem is aggravated for LFP batteries, which also have a very flat OCV-SOC correlation curve. Current SOC-estimation models are unable to take care of all of these complications. A more robust algorithm is needed to estimate the instantaneous total charge available for work inside an LFP cell. The EKF technique, an adaptive estimator, has emerged as one of the practical solutions to enhance the accuracy of SOC determination, but is complicated and needs heavy computing resources on-board the vehicle [6-8]. This paper presents a novel, simplified implementation of the extended Kalman filter technique that overcomes the practical challenges involved in runtime evaluation of the SOC of commercial high-power LFP cells. Its formulation demands a lower level of resources compared to traditional EKF implementations. CHALLENGES IN RUNTIME ESTIMATION OF SOC OF LFP CELLS: Voltage relaxation time. [Figure 1: One pulse of the pulse discharge test.] The OCV-SOC correlation curve is often used to correct the current integral errors during runtime. This is usually done when the vehicle has been at rest (with its battery neither charging nor discharging) for a sufficiently long duration (30-60 minutes), and when its battery voltage at the terminals is assumed to approximate the value of the OCV. This assumption is valid for most battery chemistries. The authors attempted to validate this assumption for the LFP cells using pulse discharge and charge tests. Under this test, the cell was first completely charged, rested for two hours, and then subjected to ten discharge pulses at 1C rate interspersed with one-hour rest phases until the cell was completely discharged. Subsequently, the cell was charged using ten charge pulses at 1C rate interspersed with one-hour rest phases until the cell was completely charged. The cell was then allowed to rest for 13 hours. A schematic of one pulse of the discharge test is shown in Figure 1. Figure 2 presents the cell current and terminal voltage measurements during the complete pulse discharge and charge experimental test. [Figure 2: Input current and voltage response for the pulse charge and discharge test; the inset is shown in Figure 3.] [Figure 3: Inset from Figure 2, highlighting the long voltage stabilization time.]
The authors\u2019 primary interest in the experiment was to validate the assumption that the voltage of the LFP cell relaxes to approximately reach its OCV (as shown in Figure 1) after a long rest of one hour. However, it was observed that after the complete pulse discharge and charge test (Figure 2), when the SOC had reached 100%, the voltage did not relax to its OCV even after 13 hours (Figure 3)."}
{"_id": "7133ed98bfefa2e7a177f88e8c100562fca82b3a", "title": "OpenStreetMap: User-Generated Street Maps", "text": "The OpenStreetMap project is a knowledge collective that provides user-generated street maps. OSM follows the peer production model that created Wikipedia; its aim is to create a set of map data that's free to use, editable, and licensed under new copyright schemes. A considerable number of contributors edit the world map collaboratively using the OSM technical infrastructure, and a core group, estimated at approximately 40 volunteers, dedicate their time to creating and improving OSM's infrastructure, including maintaining the server, writing the core software that handles the transactions with the server, and creating cartographical outputs. There's also a growing community of software developers who develop software tools to make OSM data available for further use across different application domains, software platforms, and hardware devices. The OSM project's hub is the main OSM Web site."}
{"_id": "693a614718a96d61968ec573b2932a3301092c9a", "title": "Neural Networks: A Comprehensive Foundation", "text": "El profesor Haykin ha escrito un libro muy interesante, que ya lleva su segunda edici\u00f3n, lo cual es extrafio dentro de la gran cantidad de libros que se producen en la actualidad. Es f\u00e1cil encontrar en la literatura contempor\u00e1nea la constante referencia que se hace a este libro; sin embargo, el libro no es lo que el t\u00edtulo indica, ya que no es un libro ortodoxo de las teor\u00edas y t\u00e9cnicas que son conocidas como redes neuronales (tambi\u00e9n conocidas como sistemas conexionistas o sistemas neurocomputacionales), tampoco es un libro que presente los fundamentos o bases te\u00f3ricas de las redes neuronales. neuronales, as\u00ed como los grandes modelos de esta \u00e1rea de investigaci\u00f3n, como son los perceptrones y las redes RBFN (Radial-Basis Function Networks). La presentaci\u00f3n de estos cuatro cap\u00edtulos es excelente."}
{"_id": "b563b7e6e14661d488bb082bffe7c2837e56c022", "title": "Hypertrophy of labia minora: experience with 163 reductions.", "text": "OBJECTIVE\nOur purpose was to describe the surgical procedure, its results, and its complications and to determine whether patients are satisfied with surgical reduction of labia minora in cases of hypertrophy.\n\n\nSTUDY DESIGN\nThe records of 163 patients who underwent reduction of the labia minora during a 9-year period were reviewed. The ages of the patients ranged from 12 to 67 years (median, 26). Motives for requesting surgery were aesthetic concerns in 87% of the cases, discomfort in clothing in 64%, discomfort with exercise in 26%, and entry dyspareunia in 43%. Anatomic results were assessed 1 month postoperatively. Patient satisfaction was assessed by means of a mailed questionnaire.\n\n\nRESULTS\nNo surgery-related significant complications were noticed. Anatomic results were satisfactory for 151 patients (93%). Ninety-eight completed questionnaires were returned. Eighty-one patients (83%) found that the results after surgery were satisfactory. Eighty-seven (89%) were satisfied with the aesthetic result, and 91 (93%) approved the functional outcome. Four patients (4%) would not undergo the same procedure again.\n\n\nCONCLUSION\nLabia minora reduction is a simple surgical procedure associated with a high degree of patient satisfaction."}
{"_id": "558df7438e47643817f5df1b3a6f42090c371651", "title": "Cue-based feeding in the NICU: using the infant's communication as a guide.", "text": "Although studies have shown cue-based feeding can lead to earlier achievement of full oral feeding, the successful implementation of cue-based feeding has been constrained by the volume-driven culture, which has existed for many years in the NIC U. This culture was built on the notion that a \"better\" nurse is one who could \"get more in,\" and infants who are \"poor feeders\" are ones who \"can't take enough.\" The infant who feeds faster is often viewed as more skilled in this task-oriented approach. The feeding relationship and the infant's communication about the experience of feeding may not be nurtured. This article will explain the central role of the preterm infant's communication in successful cue-based feeding. When the infant is perceived as having meaningful behavior (i.e., communicative intent), the focus changes from a volume-driven to a co-regulated approach, through which the infant guides the caregiver. This is cue-based feeding."}
{"_id": "a48227a744e541829e4483ec9cdee620518c5621", "title": "Connective tissue graft to correct peri-implant soft tissue margin: A clinical report.", "text": "This clinical report describes the use of a subepithelial connective tissue graft to recontour a soft tissue margin discrepancy for a single-implant crown in the anterior maxilla. This procedure demonstrates that the use of soft tissue grafts to correct an esthetic deficiency may be a feasible approach to establish new and stable peri-implant soft tissue contours. The patient presented was followed for 18 months."}
{"_id": "6e07e1390c6a6cd6ca52ac909e1db807d7ba19be", "title": "Flexible Electronics and Displays: High-Resolution, Roll-to-Roll, Projection Lithography and Photoablation Processing Technologies for High-Throughput Production", "text": "Flexible electronics are increasingly being used in a number of applications which benefit from their low profile, light weight, and favorable dielectric properties. However, despite these advantages, the range of practical, high-volume, applications for flexible electronics will remain limited in the future unless a number of challenges related to lithographic patterning on flexible substrates are successfully addressed. The most critical of these pertain to system parameters that affect the cost and performance of flexible circuits, including the resolution, panel size, process throughput, substrate distortion, material handling, and yield. We present a new class of roll-to-roll lithography systems, developed in recent years, that were designed to address the challenges in each of these critical areas. These systems provides high-resolution projection imaging over very large exposure areas, on flexible substrate materials. Additionally, they achieve high-precision alignment by means of image scaling to compensate for substrate distortion due to processing; and they also performs high-throughput photoablation, patterning millions of pixels simultaneously, by means of projection imaging. This technology is attractive for current and emerging applications, such as flexible circuit boards and flexible chip carriers, as well as for potential future applications such as such as flexible displays and macroelectronic systems."}
{"_id": "8204511767af16ea60c444c3bc64328f42b7fe3d", "title": "Concept Detection on Medical Images using Deep Residual Learning Network", "text": "Medical images are often used in clinical diagnosis. However, interpreting the insights gained from them is often a time-consuming task even for experts. For this reason, there is a need for methods that can automatically approximate the mapping from medical images to condensed textual descriptions. For identifying the presence of relevant biomedical concepts in medical images for the ImageCLEF 2017 Caption concept detection subtask we propose the use of a pretrained residual deep neural network. Specifically, a 50-layered resNet was used and retrained on the medical images. The proposed method achieved F1 Score 0.1583 on the test data."}
{"_id": "76e6b5560309d0248017a258be5a774133a03340", "title": "Fair and balanced?: bias in bug-fix datasets", "text": "Software engineering researchers have long been interested in where and why bugs occur in code, and in predicting where they might turn up next. Historical bug-occurence data has been key to this research. Bug tracking systems, and code version histories, record when, how and by whom bugs were fixed; from these sources, datasets that relate file changes to bug fixes can be extracted. These historical datasets can be used to test hypotheses concerning processes of bug introduction, and also to build statistical bug prediction models. Unfortunately, processes and humans are imperfect, and only a fraction of bug fixes are actually labelled in source code version histories, and thus become available for study in the extracted datasets. The question naturally arises, are the bug fixes recorded in these historical datasets a fair representation of the full population of bug fixes? In this paper, we investigate historical data from several software projects, and find strong evidence of systematic bias. We then investigate the potential effects of \"unfair, imbalanced\" datasets on the performance of prediction techniques. We draw the lesson that bias is a critical problem that threatens both the effectiveness of processes that rely on biased datasets to build prediction models and the generalizability of hypotheses tested on biased data."}
{"_id": "d00898c07ed32c22b607509c6f856980795b84b5", "title": "Angels and Devils of Digital Social Norm Enforcement: A Theory about Aggressive versus Civilized Online Comments", "text": "We develop a theory that explains when commenters choose to be aggressive versus civilized in social media depending on their personal social norm context. In particular, we enrich traditional social norm theory by introducing the concept of moral legitimacy. This concept suggests that justifications, particularly those that put social norm violators outside of moral boundaries, cause aggression. Using the diversity of 45,982 comments of a real-world online firestorm, our results confirm that social norm contexts matter strongly for online behavior. The developed theory challenges existing speculations about online aggression and helps to develop strategies to encourage enlightened, civilized discourse on the Internet."}
{"_id": "68b848fc0ecde9931d1a30857f25f1809d9b2dee", "title": "A Sensing Chair Using Pressure Distribution Sensors", "text": "One challenge in multimodal interface research is the lack of robust subsystems that support multimodal interactions. By focusing on a chair\u2014an object that is involved in virtually all human\u2013computer interactions, the sensing chair project enables an ordinary office chair to become aware of its occupant\u2019s actions and needs. Surface-mounted pressure distribution sensors are placed over the seatpan and backrest of the chair for real time capturing of contact information between the chair and its occupant. Given the similarity between a pressure distribution map and a grayscale image, pattern recognition techniques commonly used in computer and robot vision, such as principal components analysis, have been successfully applied to solving the problem of sitting posture classification. The current static posture classification system operates in real time with an overall classification accuracy of 96% and 79% for familiar (people it had felt before) and unfamiliar users, respectively. Future work is aimed at a dynamic posture tracking system that continuously tracks not only steady-state (static) but transitional (dynamic) sitting postures. Results reported here form important stepping stones toward an intelligent chair that can find applications in many areas including multimodal interfaces, intelligent environment, and safety of automobile operations."}
{"_id": "dc8c826d7d4b48be51881da7531387546b7687b7", "title": "A Geometric Framework for Rectangular Shape Detection", "text": "Rectangular shape detection has a wide range of applications, such as license plate detection, vehicle detection, and building detection. In this paper, we propose a geometric framework for rectangular shape detection based on the channel-scale space of RGB images. The framework consists of algorithms developed to address three issues of a candidate shape (i.e., a connected component of edge points), including: 1) outliers; 2) open shape; and 3) fragmentation. Furthermore, we propose an interestness measure for rectangular shapes by integrating imbalanced points (one type of interest points). Our experimental study shows the promise of the proposed framework."}
{"_id": "cee460289798c222724e0cb6f92f522ca6966fd1", "title": "DESC: enabling secure data exchange based on smart contracts", "text": "Dear editor, As data are one of the most important factors of today\u2019s intelligent systems but are relatively scarce, people try exchanging data with other organizations or public marketplaces. However, data exchange is now lack of a secure execution mechanism to automatically and fairly protect the rights which are seriously concerned by data owners. For instance, data owners cannot limit the identity of a data transferee who is banned to take the data by any way. Moreover, data transferees are also afraid that their applications for exchange may be unfairly treated. For instance, data owners could dynamically change data price when multiple transferees submit their applications at the same time. A data marketplace can take over the role of a trusted third party during data exchange, but single point failure is always a serious problem in a large-scale distributed system. Furthermore, these marketplaces usually use contracts based on natural languages which may be maliciously misinterpreted and cannot be automatically executed. Numerous researches have been presented to provide digital rights management (DRM) [1]. For instance, Zhu et al. [2] presented a privacypreserving video subscription scheme with the limitation of expire date. However, rights of data owners are more complex than rights of digital content owners. Decentralized and trustworthy enforcement for access control policies is a long-term issue. Han et al. [3] proposed an optimized mechanism which is a trusted and decentralized access control framework for the client/server architecture. These studies still need third party authorities which are used to execute or supervise the rights control, leading to a few trust and security issues. Existing studies [4] on attribute-based access control have been applied to many fields, such as e-commerce and Internet of Things. This article introduces attribute-based access control and proposes a secure framework for data exchange based on smart contracts (DESC) to enable automatic and fair access control. Featuring with decentralized, fair, transparent, immutable and traceable advantages, blockchain 2.0 [5] provides a generalized framework for implementing decentralized compute resources named smart contracts which define rights and policies in mathematical and programming forms. Once the predefined smart contract is triggered by a transaction, it can automatically execute the specific contractual clauses. To find out which data rights are seriously concerned by data owners, we investigate the DRM mechanisms and several data exchange platforms: GBDEX, national engineering lab for big data distribution and exchange technologies. Then,"}
{"_id": "4626fcf7e4d04e347273e335716b3cfe7c725cd6", "title": "Multi-style paper pop-up designs from 3D models", "text": "Paper pop-ups are interesting three-dimensional books that fascinate people of all ages. The design and construction of these pop-up books however are done manually and require a lot of time and effort. This has led to computer-assisted or automated tools for designing paper pop-ups. This paper proposes an approach for automatically converting a 3D model into a multi-style paper pop-up. Previous automated approaches have only focused on single-style pop-ups, where each is made of a single type of pop-up mechanisms. In our work, we combine multiple styles in a pop-up, which is more representative of actual artist\u2019s creations. Our method abstracts a 3D model using suitable primitive shapes that both facilitate the formation of the considered pop-up mechanisms and closely approximate the input model. Each shape is then abstracted using a set of 2D patches that combine to form a valid pop-up. We define geometric conditions that ensure the validity of the combined pop-up structures. In addition, our method also employs an image-based approach for producing the patches to preserve the textures, finer details and important contours of the input model. Finally, our system produces a printable design layout and decides an assembly order for the construction instructions. The feasibility of our results is verified by constructing the actual paper pop-ups from the designs generated by our system."}
{"_id": "d672b80773bb726d46d60a36e85106b57f8ffc48", "title": "Server virtualization architecture and implementation", "text": "Virtual machine technology, or virtualization, is gaining momentum in the information technology community. While virtual machines are not a new concept, recent advances in hardware and software technology have brought virtualization to the forefront of IT management. Stability, cost savings, and manageability are among the reasons for the recent rise of virtualization. Virtual machine solutions can be classified by hardware, software, and operating system/containers. From its inception on the mainframe to distributed servers on x86, the virtual machine has matured and will play an increasing role in systems management."}
{"_id": "29764684f770cbe714dfb8ac0354d4ef6734f131", "title": "Personality-based Adaptation for Teamwork in Game Agents", "text": "This paper presents a novel learning framework to provide computer game agents the ability to adapt to the player as well as other game agents. Our technique generally involves a personality adaptation module encapsulated in a reinforcement learning framework. Unlike previous work in which adaptation normally involves a decision process on every single action the agent takes, we introduce a two-level process whereby adaptation only takes place on an abstracted actions set which we coin as agent personality. With the personality defined, each agent will then take actions according to the restrictions imposed in its personality. In doing so, adaptation takes place in appropriately defined intervals in the game, without disrupting or slowing down the game constantly with intensive decision-making computations, hence improving enjoyment for the player. Moreover, by decoupling adaptation from action selection, we have a modular adaptive system that can be used with existing action planning methods. With an actual typical game scenario that we have created, it is shown that a team of agents using our framework to adapt towards the player are able to perform better than a team with scripted behavior. Consequently, we also show the team performs even better when adapted to-"}
{"_id": "71120b6d5bac5cd1b2aebe51b5dbda27001f20d3", "title": "A vision-based method for weeds identification through the Bayesian decision theory", "text": "One of the objectives of precision agriculture is to minimize the volume of herbicides that are applied to the fields through the use of site-specific weed management systems. This paper outlines an automatic computer vision-based approach for the detection and differential spraying of weeds in corn crops. The method is designed for post-emergence herbicide applications where weeds and corn plants display similar spectral signatures and the weeds appear irregularly distributed within the crop\u2019s field. The proposed strategy involves two processes: image segmentation and decision making. Image segmentation combines basic suitable image processing techniques in order to extract cells from the image as the low level units. Each cell is described by two area-based measuring relationships between crop and weeds. The decision making determines the cells to be sprayed based on the computation of a posterior probability under a Bayesian framework. The a priori probability in this framework is computed taking into account the dynamic of the physical system (tractor) where the method is embedded. The main contributions of this paper are: (1) the combination of the image segmentation and decision making processes and (2) the decision making itself which exploits a previous knowledge which is mapped as the a priori probability. The performance of the method is illustrated by comparative analysis against some existing strategies. 2007 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved."}
{"_id": "fa46ae777be8776a417a24e0b6c3f6076c5e578d", "title": "Text detection and recognition using enhanced MSER detection and a novel OCR technique", "text": "Detection and recognition of text from any natural scene image is challenging but essential extensively for extracting information from the image. In this paper, we propose an accurate and effective algorithm for detecting enhanced Maximally Stable Extremal Regions (MSERs) as main character candidates and these character candidates are filtered by stroke width variation for removing regions where the stroke width exhibits too much variation. For the detection of text regions, firstly some preprocessing is applied to the natural image and then after detecting MSERs, an intersection of canny edge and MSER region is produced to locate regions that are even more likely to belong to text. Finally, the selected text region is taken as an input of a novel Optical Character Recognition (OCR) technique to make the text editable and usable. The evaluation results substantiates 77.47% of the f-measure on the ICDAR 2011 dataset which is better than the previous performance 76.22%."}
{"_id": "320909305b111df6085eeda87ff61c860d607307", "title": "European option pricing with transaction costs and trading restrictions", "text": "In this paper, using the utility based option pricing approach, pioneered by Hodges and Neuberger, we propose a new model in a continuous-time market with proportional transaction costs and several trading restrictions. According to this model, we study the problem of option pricing and prove that the value function of our model is a unique constrained viscosity solution of a HJB equation."}
{"_id": "bd7707ba90da3a90d15988a736dbde7301b7f410", "title": "Strategies Critical Paths to Improving Employee Commitment", "text": "This article addresses how key leader communication strategies can increase worker loyalty. Leader communication has long been shown to be a critical factor in superior worker motivation and performance (Levering, 1988; Robbins, 2001), and has great potential to aid organizations in their quest for committed employees. Organizations\u2019 need for committed employees is now acute due to such recent shifts in the business environment as the current economic slowdown (which places a premium on increasing worker productivity), skilled worker shortages, increasing ethnic and cultural diversity, rapid technological innovations, and growing organizational reliance on \u201cknowledge workers\u201d to achieve competitive advantage. Research clearly shows that leader communication practices play an integral role in developing and sustaining the employee commitment demanded by the preceding scenarios (Goleman, 1998; Goleman, 2000; Reina & Reina, 1999). However, leaders are often faced with a plethora of options and communication techniques that are not directly linked with strategic goals. To make sense of these various communication tactics, leaders need a systemic method which links practice to results. This article presents such a model, which can also be used to diagnostically assess training needs. In essence, the model is a set of strategy-based \u201cbest practices\u201d that guide leaders to more effectively transmit and foster organizational trust; which, in turn, will significantly enhance worker loyalty and organizational outcomes. To best explain this model and its relevance, this study first examines the link between leader communication and worker loyalty. Next, selected, well-documented leader communication skills and practices are discussed, followed by a section that presents the Motivating Language Model and a resultant \u201cbest practices\u201d checklist for managers. Finally, our study concludes with suggested future directions for both the research and applications of leader communications that promote worker loyalty."}
{"_id": "22d65cd45fded0f9727487d3cc9597289bed5bc8", "title": "Data-driven Automated Induction of Prerequisite Structure Graphs", "text": "With the growing popularity of MOOCs and sharp trend of digitalizing education, there is a huge amount of free digital educational material on the web along with the activity logs of large number of participating students. However, this data is largely unstructured and there is hardly any information about the relationship between material from different sources. We propose a generic algorithm to use educational material and student activity data from heterogeneous sources to create a Prerequisite Structure Graph (PSG). A PSG is a directed acyclic graph, where the nodes are educational units and the edges specify the pairwise ordering of the units in effective teaching by instructors or for effective learning by students. We propose an unsupervised approach utilizing both text content and student data, which outperforms to supervised methods (utilizing only text content) on the task of estimating a PSG."}
{"_id": "19d3b02185ad36fb0b792f2a15a027c58ac91e8e", "title": "Im2Text: Describing Images Using 1 Million Captioned Photographs", "text": "We develop and demonstrate automatic image description methods using a large captioned photo collection. One contribution is our technique for the automatic collection of this new dataset \u2013 performing a huge number of Flickr queries and then filtering the noisy results down to 1 million images with associated visually relevant captions. Such a collection allows us to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results. We also develop methods incorporating many state of the art, but fairly noisy, estimates of image content to produce even more pleasing results. Finally we introduce a new objective performance measure for image captioning."}
{"_id": "340ad17611fddb1f59a8b50114839544878229b7", "title": "Real-time Accurate Object Detection using Multiple Resolutions", "text": "We propose a multi-resolution framework inspired by human visual search for general object detection. Different resolutions are represented using a coarse-to-fine feature hierarchy. During detection, the lower resolution features are initially used to reject the majority of negative windows at relatively low cost, leaving a relatively small number of windows to be processed in higher resolutions. This enables the use of computationally more expensive higher resolution features to achieve high detection accuracy. We applied this framework on Histograms of Oriented Gradient (HOG) features for object detection. Our multi-resolution detector produced better performance for pedestrian detection than state-of-the-art methods (Dalal and Triggs, 2005), and was faster during both training and testing. Testing our method on motorbikes and cars from the VOC database revealed similar improvements in both speed and accuracy, suggesting that our approach is suitable for realtime general object detection applications."}
{"_id": "2744ea7c1f495d97e0cfbecf3e6a315a34d71b6a", "title": "Scaling Optoelectronic-VLSI Circuits into the 21 st Century : A Technology Roadmap", "text": "Technologies now exist for implementing dense surface-normal optical interconnections for silicon CMOS VLSI using hybrid integration techniques. The critical factors in determining the performance of the resulting photonic chip are the yield on the transceiver device arrays, the sensitivity and power dissipation of the receiver and transmitter circuits, and the total optical power budget available. The use of GaAs\u2013AlGaAs multiple-quantum-well p-i-n diodes for on-chip detection and modulation is one effective means of implementing the optoelectronic transceivers. We discuss a potential roadmap for the scaling of this hybrid optoelectronic VLSI technology as CMOS linewidths shrink and the characteristics of the hybrid optoelectronic tranceiver technology improve. An important general conclusion is that, unlike electrical interconnects, such dense optical interconnections directly to an electronic circuit will likely be able to scale in capacity to match the improved performance of future CMOS technology."}
{"_id": "4887c1557929ee16a5176417878590c7edbc6cd8", "title": "An automatic tool for the analysis of natural language requirements", "text": "Using automatic tools for the quality analysis of Natural Language (NL) requirements is recognized as a key factor for achieving software quality. Unfortunately few tools and techniques for the NL requirements analysis are currently available. This paper presents a methodology and a tool (called QuARS Quality Analyzer for Requirement Specifications) for analyzing NL requirements in a systematic and automatic way. QuARS allows requirements engineers to perform an initial parsing of the requirements in order to automatically detect potential linguistic defects that could cause interpretation problems at subsequent stages in developing the software. This tool is also able to partially support the consistency and completeness analysis by clustering the requirements according to specific topics."}
{"_id": "8c4d125e6fb855637a9cd7291d27a8380d538267", "title": "Control of lower limb exoskeleton for elderly assistance on basic mobility tasks", "text": "This paper presents the modelling, simulation and control of a lower limbs exoskeleton devices whose aim is to assist elderly people during standing-up and walking tasks. A humanoid and actuated exoskeleton frames were modelled in Solid Works and assembled in Visual Nastran 4D virtual environment. Control of exoskeletons actuators by means of PID and fuzzy controllers, together with a finite state machine, was designed. Simulations of the humanoid wearing an exoskeleton to perform the mentioned tasks, using controllers with set and adaptive orientation references as input, were performed. It is assumed that the humanoid, representing an old person, is only capable to provide 70% of the required torque, therefore the exoskeleton must supply 30% to complete the motions. With the aid of exoskeleton devices, the elderly will be able to perform motions of a physically able people. Optimization of fuzzy controller parameters for walking motion was done using SDA. The exoskeleton succeeded on assisting standing-up and walking motions but research is still needed to improve the performance and adaptability of the system."}
{"_id": "2a748cc66531dd7f4d122e66cc0cb461d1205fc0", "title": "HD Maps: Fine-Grained Road Segmentation by Parsing Ground and Aerial Images", "text": "In this paper we present an approach to enhance existing maps with fine grained segmentation categories such as parking spots and sidewalk, as well as the number and location of road lanes. Towards this goal, we propose an efficient approach that is able to estimate these fine grained categories by doing joint inference over both, monocular aerial imagery, as well as ground images taken from a stereo camera pair mounted on top of a car. Important to this is reasoning about the alignment between the two types of imagery, as even when the measurements are taken with sophisticated GPS+IMU systems, this alignment is not sufficiently accurate. We demonstrate the effectiveness of our approach on a new dataset which enhances KITTI [8] with aerial images taken with a camera mounted on an airplane and flying around the city of Karlsruhe, Germany."}
{"_id": "b833ee2d196180e11eb4d93f793acd66ff1e2dbe", "title": "Metric localization using Google Street View", "text": "Accurate metrical localization is one of the central challenges in mobile robotics. Many existing methods aim at localizing after building a map with the robot. In this paper, we present a novel approach that instead uses geo-tagged panoramas from the Google Street View as a source of global positioning. We model the problem of localization as a non-linear least squares estimation in two phases. The first estimates the 3D position of tracked feature points from short monocular camera sequences. The second computes the rigid body transformation between the Street View panoramas and the estimated points. The only input of this approach is a stream of monocular camera images and odometry estimates. We quantified the accuracy of the method by running the approach on a robotic platform in a parking lot by using visual fiducials as ground truth. Additionally, we applied the approach in the context of personal localization in a real urban scenario by using data from a Google Tango tablet."}
{"_id": "0f8c30445f3d994ac220dd101de6999cb6eaf911", "title": "Autonomous Robot Navigation in Highly Populated Pedestrian Zones", "text": "In the past, there has been a tremendous progress in the area of autonomous robot navigation and a large variety of robots have been developed who demonstrated robust navigation capabilities indoors, in non-urban outdoor environments, or on roads and relatively few approaches focus on navigation in urban environments such as city centers. Urban areas, however, introduce numerous challenges for autonomous robots as they are rather unstructured and dynamic. In this paper, we present a navigation system for mobile robots designed to operate in crowded city environments and pedestrian zones. We describe the different components of this system including a SLAM module for dealing with huge maps of city centers, a planning component for inferring feasible paths taking also into account the traversability and type of terrain, a module for accurate localization in dynamic environments, and means for calibrating and monitoring the platform. Our navigation system has been implemented and tested in several large-scale field tests, in which a real robot autonomously navigated over several kilometers in a complex urban environment. This also included a public demonstration, during which the robot autonomously traveled along a more than three kilometer long route through the city center of Freiburg, Germany."}
{"_id": "1233f38bddaebafe9f4ae676bb2f8671f6c4821a", "title": "Distance Transforms of Sampled Functions", "text": "We describe linear-time algorithms for solving a class of problems that involve transforming a cost function on a grid using spatial information. These problems can be viewed as a generalization of classical distance transforms of binary images, where the binary image is replaced by an arbitrary function on a grid. Alternatively they can be viewed in terms of the minimum convolution of two functions, which is an important operation in grayscale morphology. A consequence of our techniques is a simple and fast method for computing the Euclidean distance transform of a binary image. Our algorithms are also applicable to Viterbi decoding, belief propagation, and optimal control. ACM Classification: F.2.1, I.4 AMS Classification: 68T45, 68W40"}
{"_id": "38f3bb5e868046261d2933a982ca4ef4031cea7c", "title": "The ODD protocol : a review and first update", "text": "The \u2018ODD\u2019 (Overview, Design concepts, and Details) protocol was published in 2006 to standardize the published descriptions of individual-based and agent-based models (ABMs). The primary objectives of ODD are to make model descriptions more understandable and complete, thereby making ABMs less subject to criticism for being irreproducible. We have systematically evaluated existing uses of the ODD protocol and identified, as expected, parts of ODD needing improvement and clarification. Accordingly, we revise the definition of ODD to clarify aspects of the original version and thereby facilitate future standardization of ABM descriptions. We discuss frequently raised critiques in ODD but also two emerging, and unanticipated, benefits: ODD improves the rigorous formulation of models and helps make the theoretical foundations of large models more visible. Although the protocol was designed for ABMs, it can help with documenting any large, complex model, alleviating some general objections against such models."}
{"_id": "38628d26d4f624378f4303b61ae93c5d34d007c3", "title": "Predicting inter-thread cache contention on a chip multi-processor architecture", "text": "This paper studies the impact of L2 cache sharing on threads that simultaneously share the cache, on a chip multi-processor (CMP) architecture. Cache sharing impacts threads nonuniformly, where some threads may be slowed down significantly, while others are not. This may cause severe performance problems such as sub-optimal throughput, cache thrashing, and thread starvation for threads that fail to occupy sufficient cache space to make good progress. Unfortunately, there is no existing model that allows extensive investigation of the impact of cache sharing. To allow such a study, we propose three performance models that predict the impact of cache sharing on co-scheduled threads. The input to our models is the isolated L2 cache stack distance or circular sequence profile of each thread, which can be easily obtained on-line or off-line. The output of the models is the number of extra L2 cache misses for each thread due to cache sharing. The models differ by their complexity and prediction accuracy. We validate the models against a cycle-accurate simulation that implements a dual-core CMP architecture, on fourteen pairs of mostly SPEC benchmarks. The most accurate model, the inductive probability model, achieves an average error of only 3.9%. Finally, to demonstrate the usefulness and practicality of the model, a case study that details the relationship between an application's temporal reuse behavior and its cache sharing impact is presented."}
{"_id": "26facdf13e37f3ec9388b8527fe2c00691099b34", "title": "Accounting Information System Risk Assessment Algorithm Based on Analytic Hierarchy Process", "text": "So far, there is little research on accounting information system risk assessment in our country. In order to provide the basis to meet the security needs of the accounting information system and reduce the risk of accounting information system security, reduce financial loses and improve the work efficiency, a model of enterprise accounting information system risk assessment method based on Analytic Hierarchy Process is proposed. The analytic hierarchy process model is applied to one corporate accounting information system for risk assessment. It can be concluded that the proposed method get better result of risk assessment, and have strong operability and effectiveness of risk assessment in the accounting information system for enterprise."}
{"_id": "eb4936c35ceebae4b92ac7dc1c873f7d52882bbd", "title": "Scalable pseudo-random noise radar", "text": "This paper describes the implementation of a pseudo-random noise radar system on a scalable sensor platform that consists of several multi-purpose 61- and 122-GHz transceivers in a Silicon-Germanium BiCMOS technology. The multi-band transceivers are equipped with a frequency multiplier to generate the 61- and 122-GHz carrier signals from a single 30.5-GHz LO signal and can be cascaded to build a MIMO radar system with 2 different frequency bands. The implemented scalable sensor system is suitable for various radar and communication applications by using the same transceiver chips with integrated binary phase-shift keying modulators and I/Q receiver. A suitable pseudo-random binary sequence was used in a radar measurement to demonstrate the applicability of the developed chips."}
{"_id": "1e470e4b8ef7a4833784205a36e42eece2c8016f", "title": "Efficient Private ERM for Smooth Objectives", "text": "In this paper, we consider efficient differentially private empirical risk minimization from the viewpoint of optimization algorithms. For strongly convex and smooth objectives, we prove that gradient descent with output perturbation not only achieves nearly optimal utility, but also significantly improves the running time of previous state-of-the-art private optimization algorithms, for both \u01eb-DP and (\u01eb, \u03b4)-DP. For non-convex but smooth objectives, we propose an RRPSGD (Random Round Private Stochastic Gradient Descent) algorithm, which provably converges to a stationary point with privacy guarantee. Besides the expected utility bounds, we also provide guarantees in high probability form. Experiments demonstrate that our algorithm consistently outperforms existing method in both utility and running time."}
{"_id": "408a09da1930643328ed60a620ab1f33bb85d4d1", "title": "DENSER Cities: A System for Dense Efficient Reconstructions of Cities", "text": "This paper is about the efficient generation of dense, colored models of city-scale environments from range data and in particular, stereo cameras. Better maps make for better understanding; better understanding leads to better robots, but this comes at a cost. The computational and memory requirements of large dense models can be prohibitive. We provide the theory and the system needed to create cityscale dense reconstructions. To do so, we apply a regularizer over a compressed 3D data structure while dealing with the complex boundary conditions this induces during the data-fusion stage. We show that only with these considerations can we swiftly create neat, large, \u201cwell behaved\u201d reconstructions. We evaluate our system using the KITTI dataset and provide statistics for the metric errors in all surfaces created compared to those measured with 3D laser. Our regularizer reduces the median error by 40% in 3.4 km of dense reconstructions with a median accuracy of 6 cm. For subjective analysis, we provide a qualitative review of 6.1 km of our dense reconstructions in an attached video. These are the largest dense reconstructions from a single passive camera we are aware of in the literature. Video: https://youtu.be/FRmF7mH86EQ"}
{"_id": "d3b402abe55100b2ef7ebbed370b55e67ae66618", "title": "Running on four legs as though they were one", "text": "that run on one leg. The generalization uf these one-leg algorithms for control of machines with several legs is explored. The generalization is quite simple when multilegged systems run with gaits that use the support legs one at a time. For these gaits the ~ n e-l e g algorithms can be used b o control multilegged running. The cortcept of a virtual leg is introduced to further extend the approach to gaits that use the legs in pairs, such as the trot, the pace, and the bound. These quadruped running gaits map into gaits that use one virtual leg for support at a rime, for which the one-leg algorithms can provide control. This approach was used in laboratory experiments to control a quadruped machine that runs with a trotting gait. 1. ~N1RUI)IICTION UNNING is a form of legged locomotion characterized by travel at high speed and by periods of ballistic flight during which all feet leave the ground. The basic control task is to establish a pattern of leg and body motions that stabilizes the attitude and altitude of the body while propelling the body in the desired direction at the desired speed 121. The characteristics of high speed and ballistic flight suggest that dynamics and active stability are important to accomplishing this control task, and to getting a better general understanding of running. In recent work the aulhors used dynamic techniques to control an experimental one-legged machine that balances itself as it runs by hopping [ 3 , 41. The goal of these experiments was to focus on the dynamic aspects of locomo-tion, while avoiding the difficult task of coordinating the behavior of several legs. This paper addresses the task of leg coordination by starting with the algorithms that were used to control the one-legged hopping machine, and extending them to control a quadruped running machine. The general goal is to develop a set of locomotion principles that applies to the dynamic behavior of diverse legged systems-machines and animals alike-independent of the number of legs."}
{"_id": "b56ef8e5791a3852864c787ae013b77b10ea1998", "title": "Fusion of IMU and Vision for Absolute Scale Estimation in Monocular SLAM", "text": "The fusion of inertial and visual data is widely used to improve an object\u2019s pose estimation. However, this type of fusion is rarely used to estimate further unknowns in the visual framework. In this paper we present and compare two different approaches to estimate the unknown scale parameter in a monocular SLAM framework. Directly linked to the scale is the estimation of the object\u2019s absolute velocity and position in 3D. The first approach is a spline fitting task adapted from Jung and Taylor and the second is an extended Kalman filter. Both methods have been simulated offline on arbitrary camera paths to analyze their behavior and the quality of the resulting scale estimation. We then embedded an online multi rate extended Kalman filter in the Parallel Tracking and Mapping (PTAM) algorithm of Klein and Murray together with an inertial sensor. In this inertial / monocular SLAM framework, we show a real time, robust and fast converging scale estimation. Our approach does not depend on known patterns in the vision part nor a complex temporal synchronization between the visual and inertial sensor. The research leading to these results has received funding from the European Community\u2019s Seventh Framework Programme (FP7/2007-2013) under grant agreement n. 231855 (sFly). Gabriel N\u00fctzi is currently a Master student at the ETH Zurich. Stephan Weiss is currently PhD student at the ETH Zurich. Davide Scaramuzza is currently senior researcher and team leader at the ETH Zurich. Roland Siegwart is full professor at the ETH Zurich and head of the Autonomous Systems Lab. G. N\u00fctzi \u00b7 S. Weiss \u00b7 D. Scaramuzza \u00b7 R. Siegwart ETH Autonomous Systems Laboratory, 8092, Zurich, Switzerland, www.asl.ethz.ch E-mail: nuetzi@student.ethz.ch, stephan.weiss@mavt.ethz.ch, davide.scaramuzza@ieee.org, rsiegwart@ethz.ch 2"}
{"_id": "3521726832493e775fc4958cb944f79db3414226", "title": "On Realistically Attacking Tor with Website Fingerprinting", "text": "Website fingerprinting allows a local, passive observer monitoring a web-browsing client\u2019s encrypted channel to determine her web activity. Previous attacks have shown that website fingerprinting could be a threat to anonymity networks such as Tor under laboratory conditions. However, there are significant differences between laboratory conditions and realistic conditions. First, the training data set is very similar to the testing data set under laboratory conditions, but the attacker may not be able to guarantee similarity realistically. Second, laboratory packet sequences correspond to a single page each, but for realistic packet sequences the split between pages is not obvious. Third, packet sequences may include noise, which may adversely affect website fingerprinting, but this effect has not been studied. In this paper, we tackle these three problems to bridge the gap between laboratory and realistic conditions for website fingerprinting. We show that we can maintain a fresh training set with minimal resources. We demonstrate several classification-based techniques that allow us to split full packet sequences effectively into sequences corresponding to a single page each. Although we were not able to remove noise effectively, we will show that it is difficult for users to generate sufficient background noise to disrupt website fingerprinting on Tor. With our techniques, we are able to build the first website fingerprinting system that can operate on packet sequences collected in the wild."}
{"_id": "e71a6cb92b14384ccd2ac5b89151a650dc45db4a", "title": "Low Precision Policy Distillation with Application to Low-Power, Real-time Sensation-Cognition-Action Loop with Neuromorphic Computing", "text": "Low precision networks in the reinforcement learning (RL) setting are relatively unexplored because of the limitations of binary activations for function approximation. Here, in the discrete action ATARI domain, we demonstrate, for the first time, that low precision policy distillation from a high precision network provides a principled, practical way to train an RL agent. As an application, on 10 different ATARI games, we demonstrate real-time end-to-end game playing on lowpower neuromorphic hardware by converting a sequence of game frames into discrete actions."}
{"_id": "3653428d2b11bf8bdec0a39e0187aa1ff33d8c81", "title": "Embodied Cognition is Not What you Think it is", "text": "The most exciting hypothesis in cognitive science right now is the theory that cognition is embodied. Like all good ideas in cognitive science, however, embodiment immediately came to mean six different things. The most common definitions involve the straight-forward claim that \"states of the body modify states of the mind.\" However, the implications of embodiment are actually much more radical than this. If cognition can span the brain, body, and the environment, then the \"states of mind\" of disembodied cognitive science won't exist to be modified. Cognition will instead be an extended system assembled from a broad array of resources. Taking embodiment seriously therefore requires both new methods and theory. Here we outline four key steps that research programs should follow in order to fully engage with the implications of embodiment. The first step is to conduct a task analysis, which characterizes from a first person perspective the specific task that a perceiving-acting cognitive agent is faced with. The second step is to identify the task-relevant resources the agent has access to in order to solve the task. These resources can span brain, body, and environment. The third step is to identify how the agent can assemble these resources into a system capable of solving the problem at hand. The last step is to test the agent's performance to confirm that agent is actually using the solution identified in step 3. We explore these steps in more detail with reference to two useful examples (the outfielder problem and the A-not-B error), and introduce how to apply this analysis to the thorny question of language use. Embodied cognition is more than we think it is, and we have the tools we need to realize its full potential."}
{"_id": "6e37979d2a910e8a2337927731619fd789a5213b", "title": "Approximation Theory of the Mlp Model in Neural Networks", "text": "In this survey we discuss various approximation-theoretic problems that arise in the multilayer feedforward perceptron (MLP) model in neural networks. The MLP model is one of the more popular and practical of the many neural network models. Mathematically it is also one of the simpler models. Nonetheless the mathematics of this model is not well understood, and many of these problems are approximation-theoretic in character. Most of the research we will discuss is of very recent vintage. We will report on what has been done and on various unanswered questions. We will not be presenting practical (algorithmic) methods. We will, however, be exploring the capabilities and limitations of this model. In the rst two sections we present a brief introduction and overview of neural networks and the multilayer feedforward perceptron model. In Section 3 we discuss in great detail the question of density. When does this model have the theoretical ability to approximate any reasonable function arbritrarily well? In Section 4 we present conditions for simultaneously approximating a function and its derivatives. Section 5 considers the interpolation capability of this model. In Section 6 we study upper and lower bounds on the order of approximation of this model. The material presented in Sections 3{6 treats the single hidden layer MLP model. In Section 7 we discuss some of the diierences that arise when considering more than one hidden layer. The lengthy list of references includes many papers not cited in the text, but relevant to the subject matter of this survey."}
{"_id": "78cd78f39414274f42dc750f6862e03b45133826", "title": "aBCD18 - An advanced 0.18um BCD technology for PMIC application", "text": "We present a new advanced 0.18um BCD(Bipolar-CMOS-DMOS) technology with the key features being a 40V HV-MOS and an SSTC(Sidewall Selective Transistor Cell) type EEPROM as well as complimentary available analog devices such as a high gain BJT, 4fF/um2 MIM capacitor, and 10k \u2126/sq. poly resistor. To reduce device area and enhance latch up immunity, a 15um depth deep trench isolation process has been developed, which will help to significantly reduce the chip size."}
{"_id": "7275fe2be64a0da31403b0848ebbbd7b694e35ab", "title": "Comparison of Sparse Representation and Fourier Discriminant Methods : Damage Location Classification in Indirect Lab-scale Bridge Structural Health Monitoring", "text": "This paper presents a novel method for interpreting data to improve the indirect structural health monitoring (SHM) of bridges. The research presented in the study is part of an ongoing study aimed at developing a novel SHM paradigm for the health assessment of bridges. In this paradigm, we envision the use of an instrumented vehicle that assesses a bridge\u2019s dynamic characteristics while traveling across the bridge. These characteristics are then correlated to the health of the structure by means of advanced signal processing and pattern recognition approaches. In this paper, we present and compare two classification algorithms that locate the presence of damages at well-defined locations on the structure: sparse representation and the Fourier discriminant methods, and find that the sparse representation method provides superior classification accuracy."}
{"_id": "ae155256be15eab9ccc14089fde367f65b482a58", "title": "Three control flow obfuscation methods for Java software", "text": "Three novel control computation (control flow) obfuscation methods are described for protecting Java class files. They are basic block fission obfuscation, intersecting loop obfuscation and replacing goto obfuscation. The basic block fission obfuscation splits some chosen basic block(s) into more basic blocks, in which opaque predicates and goto instructions are inserted to make decompiling unsuccessful. The intersecting loop obfuscation intersects two loops and then adds this pattern into programs. The intersecting loop structure should not appear in a Java source program or a class file. The replacing goto obfuscation replaces goto instructions with conditional branch instructions. The newmethods were tested against 16 decompilers. The study also implemented multi-level exit obfuscation and single-level exit obfuscation for comparison. Both the intersecting loop obfuscation and the replacing goto obfuscation successfully defeated all the decompilers."}
{"_id": "cfd8a3466a33e277cd71856c8863082d64af0389", "title": "Increasing Brand Attractiveness and Sales through Social Media Comments on Public Displays - Evidence from a Field Experiment in the Retail Industry", "text": "Retailers and brands are just starting to utilize online social media to support their businesses. Simultaneously, public displays are becoming ubiquitous in public places, raising the question about how these two technologies could be used together to attract new and existing customers as well as strengthen the relationship toward a focal brand. Accordingly, in a field experiment we displayed brandand product-related comments from the social network Facebook as pervasive advertising in small-space retail stores, known as kiosks. From interviews conducted with real customers during the experiment and the corresponding analysis of sales data we could conclude three findings. Showing social media comments resulted in (1) customers perceiving brands as more innovative and attractive, (2) a measurable, positive effect on sales on both the brand and the product in question and (3) customers wanting to see the comments of others, but not their own, creating a give-andtake paradox for using public displays to show social media comments."}
{"_id": "ef9b81d77bde81c53ae4e303767e1b474f4cf437", "title": "Proposal Manipulation of 3 D Objects in Immersive Virtual Environments", "text": "Interactions within virtual environments (VE) often require manipulation of 3D virtual objects. For this purpose, several research have been carried out focusing on mouse, touch and gestural input. While existing mid-air gestures in immersive VE (IVE) can offer natural manipulations, mimicking interactions with physical objects, they cannot achieve the same levels of precision attained with mouse-based applications for computer-aided design (CAD). Following prevailing trends in previous research, such as degrees-offreedom (DOF) separation, virtual widgets and scaled user motion, we intend to explore techniques for IVEs that combine the naturalness of mid-air gestures and the precision found in traditional CAD systems. In this document we survey and discuss the state-of-the-art in 3D object manipulation. With challenges identified, we present a research proposal and corresponding work plan."}
{"_id": "496dd71be8032024f7e490728a76cbe861e521f6", "title": "Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet", "text": "Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in continuous spatial transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The approach has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The approach has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus."}
{"_id": "f7c97a6326e38f773163ab49e4f53ee8f2aaa1bd", "title": "Nerve entrapment syndromes as a cause of pain in the hip, groin and buttock.", "text": "In sports medicine, chronic hip, groin and buttock pain is a common diagnostic problem. Because of the complex anatomy of this region and the many potential neurological causes for pain, few sports clinicians have a detailed understanding of this problem. This paper discusses the clinical aspects of nerve entrapment syndromes related to sport and takes a regional approach in order to provide a diagnostic framework for the general sports physician. The various neurological syndromes are discussed and the surgical management elaborated in detail. For some specific conditions, such as the so-called 'piriformis syndrome', the pathophysiological understanding has changed since the early descriptions and now this particular diagnosis is often ascribed to almost any cause of buttock and/or hamstring symptoms. We discuss the nature of the origin of local symptoms and note that the often described symptoms are more likely due to compression of structures other then the sciatic nerve. Furthermore, the role of piriformis hypertrophy or anatomical nerve variations in the genesis of this syndrome must be questioned. We suggest renaming this the 'deep gluteal syndrome' to account for all of the observed phenomena. As sports medicine continues to develop a scientific basis, the role of nerve entrapments as the basis for chronic symptomatology is undergoing a new understanding and clinicians need to be aware of the diagnostic possibilities and be able to advise patients accordingly on the basis of scientific fact not anecdotal fiction."}
{"_id": "930c3db008ddf723da2b0571d7fd1e966e4c740f", "title": "On the use of words and n-grams for Chinese information retrieval", "text": "In the processing of Chinese documents and queries in information retrieval (IR), one has to identify the units that are used as indexes. Words and n-grams have been used as indexes in several previous studies, which showed that both kinds of indexes lead to comparable IR performances. In this study, we carry out more experiments on different ways to segment documents and queries, and to combine words with n-grams. Our experiments show that a combination of the longest-matching algorithm with single characters is the best choice."}
{"_id": "a1ca82f5a5aa902e3f25d486292e439765711f19", "title": "Detecting topic evolution in scientific literature: how can citations help?", "text": "Understanding how topics in scientific literature evolve is an interesting and important problem. Previous work simply models each paper as a bag of words and also considers the impact of authors. However, the impact of one document on another as captured by citations, one important inherent element in scientific literature, has not been considered. In this paper, we address the problem of understanding topic evolution by leveraging citations, and develop citation-aware approaches. We propose an iterative topic evolution learning framework by adapting the Latent Dirichlet Allocation model to the citation network and develop a novel inheritance topic model. We evaluate the effectiveness and efficiency of our approaches and compare with the state of the art approaches on a large collection of more than 650,000 research papers in the last 16 years and the citation network enabled by CiteSeerX. The results clearly show that citations can help to understand topic evolution better."}
{"_id": "3978e9f794174c7a2700b20193c071a7b1532b22", "title": "Optimization by Direct Search: New Perspectives on Some Classical and Modern Methods", "text": "Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked coherent mathematical analysis. Nonetheless, users remained loyal to these methods, most of which were easy to program, some of which were reliable. In the past fifteen years, these methods have seen a revival due, in part, to the appearance of mathematical analysis, as well as to interest in parallel and distributed computing. This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited. Our focus then turns to a broad class of methods for which we provide a unifying framework that lends itself to a variety of convergence results. The underlying principles allow generalization to handle bound constraints and linear constraints. We also discuss extensions to problems with nonlinear constraints."}
{"_id": "2a7a76cddd1d04aed660c62a0a879470bf98ca32", "title": "On automatic differentiation", "text": "During these last years, the environmental problems have acquired a growing place in our society. It becomes necessary to find a lasting way to the nuclear waste storage. The waste storage is in relation to the characteristics of wastes according to their activity and the life of the radionuclide. After many studies on the topic, led in particular by ANDRA, the storage in deep level is considered as the reference strategy for wastes with high or medium activity and long-lived. We have to make simulations of the radionucleide transport in underground in order to determine the impact of a possible propagation of radioelements. The modelling of the flow in porous media around the storage requires to know the physical parameters of the different geological layers. Those parameters (porosity and diffusion) are not directly accessible by measurements, hence we have to solve an inverse problem to recover them."}
{"_id": "4ac53a9341ae34d4651eff34e729e91806ab0c44", "title": "Equation of State Calculations by Fast Computing Machines", "text": "1087 instead, only water molecules with different amounts of excitation energy. These may follow any of three paths: (a) The excitation energy is lost without dissociation into radicals (by collision, or possibly radiation, as in aromatic hydrocarbons). (b) The molecules dissociate, but the resulting radicals recombine without escaping from the liquid cage. (c) The molecules dissociate and escape from the cage. In this case we would not expect them to move more than a few molecular diameters through the dense medium before being thermalized. paths (a) and (b) can be designated H 2 0* and those following path (c) can be designated H 2 0t. It seems reasonable to assume for the purpose of these calculations that the ionized H 2 0 molecules will become the H 20 t molecules, but this is not likely to be a complete correspondence. In conclusion we would like to emphasize that the qualitative result of this section is not critically dependent on the exact values of the physical parameters used. However, this treatment is classical, and a correct treatment must be wave mechanical; therefore the result of this section cannot be taken as an a priori theoretical prediction. The success of the radical diffusion model given above lends some plausibility to the occurrence of electron capture as described by this crude calculation. Further work is clearly needed. A general method, suitable for fast computing machines, for investigatiflg such properties as equations of state for substances consisting of interacting individual molecules is described. The method consists of a modified Monte Carlo integration over configuration space. Results for the two-dimensional rigid-sphere system have been obtained on the Los Alamos MANIAC and are presented here. These results are compared to the free volume equation of state and to a four-term virial coefficient expansion."}
{"_id": "f514a883e2cb8901fc368078d8445da5aed18806", "title": "A theoretical framework for simulated annealing", "text": "Simulated Annealing has been a very successful general algorithm for the solution of large, complex combinatorial optimization problems. Since its introduction, several applications in different fields of engineering, such as integrated circuit placement, optimal encoding, resource allocation, logic synthesis, have been developed. In parallel, theoretical studies have been focusing on the reasons for the excellent behavior of the algorithm. This paper reviews most of the important results on the theory of Simulated Annealing, placing them in a unified framework. New results are reported as well."}
{"_id": "7ebb21adef59cdc0e854d75f1829118de8edec4d", "title": "Evidence of Time-Dependent Vertical Breakdown in GaN-on-Si HEMTs", "text": "This paper demonstrates and investigates the time-dependent vertical breakdown of GaN-on-Si power transistors. The study is based on electrical characterization, dc stress tests and electroluminescence measurements. We demonstrate the following original results: 1) when submitted to two-terminal (drain-to-substrate) stress, the AlGaN/GaN transistors show a time-dependent degradation process, which leads to the catastrophic failure of the devices; 2) time-to-failure follows a Weibull distribution and is exponentially dependent on stress voltage; 3) the degradation mechanism is strongly field dependent and weakly thermally activated, with an activation energy of 0.25 eV; and 4) emission microscopy suggests that vertical current flows under the whole drain area, possibly through extended defects. The catastrophic failure occurs at random positions under the drain contact. The time-dependent failure is ascribed to a percolation process activated by the high-electric field that leads to the generation of localized shunt paths between drain and substrate."}
{"_id": "6df975cfe9172e41e5cfb47a52956b78b0c818a1", "title": "Document clustering algorithms, representations and evaluation for information retrieval", "text": "Digital collections of data continue to grow exponentially as the information age continues to infiltrate every aspect of society. These sources of data take many different forms such as unstructured text on the world wide web, sensor data, images, video, sound, results of scientific experiments and customer profiles for marketing. Clustering is an unsupervised learning approach that groups similar examples together without any human labeling of the data. Due to the very broad nature of this definition, there have been many different approaches to clustering explored in the scientific literature. This thesis addresses the computational efficiency of document clustering in an information retrieval setting. This includes compressed representations, efficient and scalable algorithms, and evaluation with a specific use case for increasing efficiency of a distributed and parallel search engine. Furthermore, it addresses the evaluation of document cluster quality for large scale collections containing millions to billions of documents where complete labeling of a collection is impractical. The cluster hypothesis from information retrieval is also tested using several different approaches throughout the thesis. This research introduces novel approaches to clustering algorithms, document representations, and clustering evaluation. The combination of document clustering algorithms and document representations outlined in this thesis are able to process large scale web search data sets. This has not previously been reported in the literature without resorting to sampling and producing a small number of clusters. Furthermore, the algorithms are able to do this on a single machine. However, they can also function in a distributed and parallel setting to further increase their scalability. This thesis introduces new clustering algorithms that can also be applied to problems outside the information retrieval domain. There are many large data sets being produced from advertising, social networks, videos, images, sound, satellites, sensor data, and a myriad of other sources. Therefore, I anticipate these approaches will become applicable to more domains, as larger data sets become available."}
{"_id": "281c3f3ba4d566775c67a99ce8eed1d68c046b07", "title": "A Personalized Company Recommender System for Job Seekers", "text": "Our team intends to develop a recommendation system for job seekers based on the information of current employees in big companies. Several models are implemented to achieve over 60% success rate in classifying employees, and we use these models to help job seekers identify their best fitting company."}
{"_id": "ff92f53566fc225e349480439e03e0f34cdf71d2", "title": "Spaceborne bi-and multistatic SAR : potential and challenges", "text": "Biand multistatic synthetic aperture radar (SAR) operates with distinct transmit and receive antennas that are mounted on separate platforms. Such a spatial separation has several operational advantages, which will increase the capability, reliability and flexibility of future SAR missions. Various spaceborne biand multistatic SAR configurations are introduced, and their potential for different applications such as frequent monitoring, wide-swath imaging, scene classification, single pass cross-track interferometry and resolution enhancement is compared. Furthermore, some major challenges such as phase and time synchronisation, biand multistatic SAR processing, satellite orbit selection and relative position sensing are addressed."}
{"_id": "19db8f6acb84546930c6ba22b6fde1c73cfcc4ba", "title": "Joint Multimodal Learning with Deep Generative Models", "text": "We investigate deep generative models that can exchange mul tiple modalities bidirectionally, e.g., generating images from correspondin g texts and vice versa. Recently, some studies handle multiple modalities on deep gen erative models, such as variational autoencoders (VAEs). However, these models typically assume that modalities are forced to have a conditioned relation, i.e., we can only generate modalities in one direction. To achieve our objective, we sh ould extract a joint representation that captures high-level concepts among al l modalities and through which we can exchange them bi-directionally. As described h rein, we propose a joint multimodal variational autoencoder (JMVAE), in whicall modalities are independently conditioned on joint representation. In other words, it models a joint distribution of modalities. Furthermore, to be able to gene rate missing modalities from the remaining modalities properly, we develop an additional method, JMVAE-kl, that is trained by reducing the divergence betwee n JMVAE\u2019s encoder and prepared networks of respective modalities. Our experi ments show that our proposed method can obtain appropriate joint representati o from multiple modalities and that it can generate and reconstruct them more prop erly than conventional VAEs. We further demonstrate that JMVAE can generate multip le modalities bidirectionally."}
{"_id": "6f2cd87c665a034fdc4e51be11b2185ace398d79", "title": "Virtual garments : A Fully Geometric Approach for Clothing Design", "text": "Modeling dressed characters is known as a very tedious process. It usually requires specifying 2D fabric patterns, positioning and assembling them in 3D, and then performing a physically-based simulation. The latter accounts for gravity and collisions to compute the rest shape of the garment, with the adequate folds and wrinkles. This paper presents a more intuitive way to design virtual clothing. We start with a 2D sketching system in which the user draws the contours and seam-lines of the garment directly on a virtual mannequin. Our system then converts the sketch into an initial 3D surface using an existing method based on a precomputed distance field around the mannequin. The system then splits the created surface into different panels delimited by the seam-lines. The generated panels are typically not developable. However, the panels of a realistic garment must be developable, since each panel must unfold into a 2D sewing pattern. Therefore our system automatically approximates each panel with a developable surface, while keeping them assembled along the seams. This process allows us to output the corresponding sewing patterns. The last step of our method computes a natural rest shape for the 3D garment, including the folds due to the collisions with the body and gravity. The folds are generated using procedural modeling of the buckling phenomena observed in real fabric. The result of our algorithm consists of a realistic looking 3D mannequin dressed in the designed garment and the 2D patterns which can be used for distortion free texture mapping. As we demonstrate, the patterns we create allow us to sew real replicas of the virtual garments."}
{"_id": "60cd946e854e2adf256358d2e5e17b0459ba80c6", "title": "Synthetic Population Generation Without a Sample", "text": ""}
{"_id": "554195d80bd87bbad5856ce5b7d166b147272e26", "title": "A Low-energy, Multi-copy Inter-contact Routing Protocol for Disaster Response Networks", "text": "This paper presents a novel multi-copy routing protocol for disruption-tolerant networks whose objective is to minimize energy expended on communication. The protocol is designed for disaster-response applications, where power and infrastructure resources are disrupted. Unlike other delay-tolerant networks, energy is a vital resource in post-disaster scenarios to ensure availability of (disruption-tolerant) communication until infrastructure is restored. Our approach exploits naturally recurrent mobility and contact patterns in the network, formed by rescue workers, volunteers, survivors, and their (possibly stranded) vehicles to reduce the number of message copies needed to attain an adequate delivery ratio in the face of disconnection and intermittent connectivity. A new notion of inter-contact routing is proposed that allows estimating route delays and delivery probabilities, identifying more reliable routes and controlling message replication and forwarding accordingly. We simulate the scheme using a mobility model that reflects recurrence inspired by disaster scenarios, and compare our results to previous DTN routing techniques. The evaluation shows that the new approach reduces the resource overhead per message over previous approaches while maintaining a comparable delivery ratio at the expense of a small (bounded) increase in latency."}
{"_id": "9b3d37789cfc45affb6eae5bbc3b13fa2ff8afb8", "title": "Emotion detection in suicide notes", "text": "0957-4174/$ see front matter 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.eswa.2013.05.050 \u21d1 Corresponding author at: LT3 Language and Translation Technology Team, University College Ghent, Groot-Brittanni\u00eblaan 45, 9000 Ghent, Belgium. Tel.: +32 9 224 97 53. E-mail addresses: bart.desmet@hogent.be (B. Desmet), veronique.hoste@ hogent.be (V. Hoste). Bart Desmet a,b,\u21d1, V\u00e9ronique Hoste a,c"}
{"_id": "2eb8df51ba9ed9daa072b98e5b83a05aa8691f1c", "title": "An improved MPPT technique for high gain DC-DC converter using model predictive control for photovoltaic applications", "text": "This paper presents an enhanced Maximum Power Point Tracking (MPPT) of Photovoltaic (PV) systems by means of Model Predictive Control (MPC) techniques. The PV array can feed power to the load through a DC/DC converter boosting the output voltage. Due to stochastic behavior of solar energy, MPPT control technique of PV arrays is required to operate at maximum power point. Extracting the maximum power from PV systems has been widely investigated within the literature. The main contribution of this paper is enhancement of the Incremental Conductance (INC) method through a fixed step predictive control under measured fast solar radiation variation. The proposed predictive control to achieve Maximum Power Point (MPP) speeds up the control loop since it predicts error before the switching signal is applied to the selected high gain multilevel DC-DC converter. Comparing the developed technique to the conventional INC method shows significant improvement in PV system performance. Experimental validation is presented using the dSpace CP 1103 to implement the proposed MPC-MPPT."}
{"_id": "e78e8053d9e0dd76c749f6ab3ca3924071863c1a", "title": "Islanding detection of active distribution system with parallel inverters", "text": "Average absolute frequency deviation value (AFDVavg) based active islanding detection technique is recently introduced for islanding detection of active distribution system with a single active DG. This paper implements, evaluates and analyze the performance of AFDVavg technique for active distribution system with parallel inverter. The main focus is on islanding detection of an ADN with parallel inverters including the worst case i.e. zero power imbalance condition. The case study is done on an ADN which is energized by two parallel inverter based DGs. Worst case of islanding has been simulated successfully by adjusting the equivalent load of ADN to match the total generation. The resultant quality factor of the equivalent load is kept 1 for testing islanding. Computer simulations are done in MATLAB."}
{"_id": "61d76e585ed1ef137a5f0b86a9869134cbcc8122", "title": "Integration of open source platform duckietown and gesture recognition as an interactive interface for the museum robotic guide", "text": "In recent years, population aging becomes a serious problem. To decrease the demand for labor when navigating visitors in museums, exhibitions, or libraries, this research designs an automatic museum robotic guide which integrates image and gesture recognition technologies to enhance the guided tour quality of visitors. The robot is a self-propelled vehicle developed by ROS (Robot Operating System), in which we achieve the automatic driving based on the function of lane-following via image recognition. This enables the robot to lead guests to visit artworks following the preplanned route. In conjunction with the vocal service about each artwork, the robot can convey the detailed description of the artwork to the guest. We also design a simple wearable device to perform gesture recognition. As a human machine interface, the guest is allowed to interact with the robot by his or her hand gestures. To improve the accuracy of gesture recognition, we design a two phase hybrid machine learning-based framework. In the first phase (or training phase), k-means algorithm is used to train historical data and filter outlier samples to prevent future interference in the recognition phase. Then, in the second phase (or recognition phase), we apply KNN (k-nearest neighboring) algorithm to recognize the hand gesture of users in real time. Experiments show that our method can work in real time and get better accuracy than other methods."}
{"_id": "27db31b3e29cbdb19a91beedd3cdfdda46ce1fd1", "title": "Stochastic policy gradient reinforcement learning on a simple 3D biped", "text": "We present a learning system which is able to quickly and reliably acquire a robust feedback control policy for 3D dynamic walking from a blank-slate using only trials implemented on our physical robot. The robot begins walking within a minute and learning converges in approximately 20 minutes. This success can be attributed to the mechanics of our robot, which are modeled after a passive dynamic walker, and to a dramatic reduction in the dimensionality of the learning problem. We reduce the dimensionality by designing a robot with only 6 internal degrees of freedom and 4 actuators, by decomposing the control system in the frontal and sagittal planes, and by formulating the learning problem on the discrete return map dynamics. We apply a stochastic policy gradient algorithm to this reduced problem and decrease the variance of the update using a state-based estimate of the expected cost. This optimized learning system works quickly enough that the robot is able to continually adapt to the terrain as it walks."}
{"_id": "dbe1cb8e323da5bf045b51534e7ed06e69ca53df", "title": "Sanity Check: A Strong Alignment and Information Retrieval Baseline for Question Answering", "text": "While increasingly complex approaches to question answering (QA) have been proposed, the true gain of these systems, particularly with respect to their expensive training requirements, can be in- flated when they are not compared to adequate baselines. Here we propose an unsupervised, simple, and fast alignment and informa- tion retrieval baseline that incorporates two novel contributions: a one-to-many alignment between query and document terms and negative alignment as a proxy for discriminative information. Our approach not only outperforms all conventional baselines as well as many supervised recurrent neural networks, but also approaches the state of the art for supervised systems on three QA datasets. With only three hyperparameters, we achieve 47% P@1 on an 8th grade Science QA dataset, 32.9% P@1 on a Yahoo! answers QA dataset and 64% MAP on WikiQA."}
{"_id": "ae0ea38bee64d91c62200eb9bd4922c326be6b35", "title": "MPPT of photovoltaic systems using extremum - seeking control", "text": "A stability analysis for a maximum power point tracking (MPPT) scheme based on extremum-seeking control is developed for a photovoltaic (PV) array supplying a dc-to-dc switching converter. The global stability of the extremum-seeking algorithm is demonstrated by means of Lyapunov's approach. Subsequently, the algorithm is applied to an MPPT system based on the \"perturb and observe\" method. The steady-state behavior of the PV system with MPPT control is characterized by a stable oscillation around the maximum power point. The tracking algorithm leads the array coordinates to the maximum power point by increasing or decreasing linearly with time the array voltage. Off-line measurements are not required by the control law, which is implemented by means of an analog multiplier, standard operational amplifiers, a flip-flop circuit and a pulsewidth modulator. The effectiveness of the proposed MPPT scheme is demonstrated experimentally under different operating conditions."}
{"_id": "f5b6dcf63c721869b513d9ba5995c4508414eade", "title": "Systematic Risk in Supply Chain Networks", "text": "I production output is generally correlated with the state of the economy. Nonetheless, during times of economic downturn, some industries take the biggest hit, whereas at times of economic boom they reap most benefits. To provide insight into this phenomenon, we map supply networks of industries and firms and investigate how the supply network structure mediates the effect of economy on industry or firm sales. Previous research has shown that retail sales are correlated with the state of the economy. Since retailers source their products from other industries, the sales of their suppliers can also be correlated with the state of the economy. This correlation represents the source of systematic risk for an industry that propagates through a supply chain network. Specifically, we identify the following mechanisms that can affect the correlation between sales and the state of the economy in a supply chain network: propagation of systematic risk into production decisions, aggregation of orders from multiple customers in a supply chain network, and aggregation of orders over time. We find that the first effect does not amplify the correlation; however, the latter two intensify correlation and result in the amplification of correlation upstream in supply networks. We demonstrate three managerial implications of this phenomenon: implications for the cost of capital, for the risk-adjusted valuation of supply chain improvement projects, and for supplier selection and risk. Data, as supplemental material, are available at http://dx.doi.org/10.1287/mnsc.2015.2187."}
{"_id": "36fb553aa996885017afe3489a8377eceddc08ee", "title": "Planning-based prediction for pedestrians", "text": "We present a novel approach for determining robot movements that efficiently accomplish the robot's tasks while not hindering the movements of people within the environment. Our approach models the goal-directed trajectories of pedestrians using maximum entropy inverse optimal control. The advantage of this modeling approach is the generality of its learned cost function to changes in the environment and to entirely different environments. We employ the predictions of this model of pedestrian trajectories in a novel incremental planner and quantitatively show the improvement in hindrance-sensitive robot trajectory planning provided by our approach."}
{"_id": "35582a30685083c62dca992553eec44123be9d07", "title": "The Weighted Majority Algorithm", "text": "We study the construction of prediction algorithms in a situation in which a learner faces a sequence of trials with a prediction to be made in each and the goal of the learner is to make few mistakes We are interested in the case that the learner has reason to believe that one of some pool of known algorithms will perform well but the learner does not know which one A simple and e ective method based on weighted voting is introduced for constructing a compound algorithm in such a circumstance We call this method the Weighted Majority Algorithm We show that this algorithm is robust in the presence of errors in the data We discuss various versions of the Weighted Majority Algorithm and prove mistake bounds for them that are closely related to the mistake bounds of the best algorithms of the pool For example given a sequence of trials if there is an algorithm in the pool A that makes at most m mistakes then the Weighted Majority Algorithm will make at most c log jAj m mistakes on that sequence where c is xed constant"}
{"_id": "7dc20c32ea644047aa8989f407a6a93571d1013f", "title": "A Novel Approach for Network Attack Classification Based on Sequential Questions", "text": "With the development of incipient technologies, user devices becoming more exposed and ill-used by foes. In upcoming decades, traditional security measures will not be sufficient enough to handle this huge threat towards distributed hardware and software. Lack of standard network attack taxonomy has become an indispensable dispute on developing a clear understanding about the attacks in order to have an operative protection mechanism. Present attack categorization techniques protect a specific group of threat which has either messed the entire taxonomy structure or ambiguous when one network attacks get blended with few others attacks. Hence, this raises concerns about developing a common and general purpose taxonomy. In this study, a sequential question-answer based model of categorization is proposed. In this article, an intrusion detection framework and threat grouping schema are proposed on the basis of four sequential questions (\u201cWho\u201d, \u201cWhere\u201d, \u201cHow\u201d and \u201cWhat\u201d). We have used our method for classifying traditional network attacks in order to identify initiator, source, attack style and seriousness of an attack. Another focus of the paper is to provide a preventive list of actions for network administrator as a guideline to reduce overall attack consequence. Recommended taxonomy is designed to detect common attacks rather than any particular type of attack which can have a practical effect in real life attack classification. From the analysis of the classifications obtained from few infamous attacks, it is obvious that the proposed system holds certain benefits related to the prevailing taxonomies. Future research directions have also been well acknowledged."}
{"_id": "85c3d578718efda398973861438e8ab6dfc13b8f", "title": "Community Detection in Temporal Networks", "text": "Many complex systems in nature, society and technologyfrom the online social networks to the internet from the nervous system to power gridscan be represented as a graph of vertices interconnected by edges. Small world network, Scale free network, Community detection are fundamental properties of complex networks. Community is a sub graph with densely connected nodes typically reveals topological relations, network structure, organizational and functional characteristics of the underlying network, e.g., friendships on Facebook, Followers of a VIP account on Twitter, and interaction with business professionals on LinkedIn. Online social networks are dynamic in nature, they evolve in size and space rapidly. In this paper we are detecting incremental disjoint communities using dyanamic multi label propagation algorithm. It tackles the temporal event changes, i.e., addition, deletion of an edge or vertex in a sub graph for every timestamp. The experimental results on Enron real network dataset shows that the proposed method is quite well in identifying communities."}
{"_id": "7894683e9f0108245d43c3de91a3426e52e0d27f", "title": "GMove: Group-Level Mobility Modeling Using Geo-Tagged Social Media", "text": "Understanding human mobility is of great importance to various applications, such as urban planning, traffic scheduling, and location prediction. While there has been fruitful research on modeling human mobility using tracking data (e.g., GPS traces), the recent growth of geo-tagged social media (GeoSM) brings new opportunities to this task because of its sheer size and multi-dimensional nature. Nevertheless, how to obtain quality mobility models from the highly sparse and complex GeoSM data remains a challenge that cannot be readily addressed by existing techniques. We propose GMove, a group-level mobility modeling method using GeoSM data. Our insight is that the GeoSM data usually contains multiple user groups, where the users within the same group share significant movement regularity. Meanwhile, user grouping and mobility modeling are two intertwined tasks: (1) better user grouping offers better within-group data consistency and thus leads to more reliable mobility models; and (2) better mobility models serve as useful guidance that helps infer the group a user belongs to. GMove thus alternates between user grouping and mobility modeling, and generates an ensemble of Hidden Markov Models (HMMs) to characterize group-level movement regularity. Furthermore, to reduce text sparsity of GeoSM data, GMove also features a text augmenter. The augmenter computes keyword correlations by examining their spatiotemporal distributions. With such correlations as auxiliary knowledge, it performs sampling-based augmentation to alleviate text sparsity and produce high-quality HMMs.\n Our extensive experiments on two real-life data sets demonstrate that GMove can effectively generate meaningful group-level mobility models. Moreover, with context-aware location prediction as an example application, we find that GMove significantly outperforms baseline mobility models in terms of prediction accuracy."}
{"_id": "5f173fc75106f8234f41890b61567db851b256c2", "title": "RN to BSN Transition: A Concept Analysis.", "text": "Over 670,000 ADN- and diploma-prepared nurses will need to complete their BSN degrees to meet the Institute of Medicine's recommendation that at least 80% of registered nurses (RNs) be BSN-prepared by year 2020. Understanding motivators, barriers, and the transition experience for RNs to advance their degree will help educators and nurse leaders understand the importance of a partnership to educate and mentor RNs to pursue a BSN degree."}
{"_id": "51cdd25a38c4daa3d57e57218310410b383add73", "title": "Design Considerations of Charge Pump for Antenna Switch Controller With SOI CMOS Technology", "text": "An enhanced charge pump for the antenna switch controller using the silicon-on-insulator (SOI) CMOS technology is presented in this brief. The pseudo cross-coupled technique is proposed to reduce parasitic capacitances at charging/discharging nodes through charge transferring paths, which improves the current drive capability and provides better accuracy of the output voltage. Furthermore, the codesign between the gate control voltages of power MOS transistors and the clock drive signals of pumping capacitors has been investigated to eliminate the reversion loss and reduce the ripple voltage. The pseudo cross-coupled charge pump has been fabricated in the 0.18- $\\mu\\text{m}$ SOI CMOS technology with an area of 0.065 mm2. According to the comparison results of the conventional and enhanced charge pumps, the start-up time and the recovery time are typically shortened by 71.4% and 21.7%, owing to the improvement of the current drive capability, and the ripple voltage at no-load condition is greatly reduced by 46.1%."}
{"_id": "13e8ecf7c98b8c076b65769b8f448d8466fffc30", "title": "Design criteria for the RF section of UHF and microwave passive RFID transponders", "text": "A set of design criteria for the radio-frequency (RF) section of long-range passive RF identification (RFID) transponders operating in the 2.45-GHz or 868-MHz industrial, scientific, and medical (ISM) frequency ranges is derived in this paper, focusing in particular on the voltage multiplier, the power-matching network, and the backscatter modulation. The paper discusses the design tradeoffs between the error probability at the reader receiver and the converted RF-dc power at the transponder, determining the regions of the design space that allow optimization of the operating range and the data rate of the RFID system."}
{"_id": "82d2a93f3774b77aa62d5896cadd4d4f628c268c", "title": "Low-Profile RFID Tag Antenna Using Compact AMC Substrate for Metallic Objects", "text": "A low-profile (aplambda/100) passive radio frequency identification (RFID) tag antenna incorporated with a compact artificial magnetic conductor (AMC) substrate is proposed. A novel modified rectangular patch type AMC substrate with offset vias is employed to significantly reduce the planar dimensions of a tag antenna, which is about 0.1lambda times 0.2lambda. The effect of a ground plane size and the number of AMC unit cells on the antenna performance such as return loss and gain is investigated with a view of tag installation. It is demonstrated through experiments that the proposed tag antenna shows relatively tolerant reading distances from 2.4 to 4.8 m under various platform environments such as a conductive plate or a plastic water container."}
{"_id": "ab3fe99806d2efff8fcad0a713893bc0846afd30", "title": "Power reflection coefficient analysis for complex impedances in RFID tag design", "text": "Kurokawa's method of calculating the power reflection coefficient from the Smith chart in the situation when one complex impedance is directly connected to another is applied to passive RFID tag design, where power reflection is important, as it determines the tag characteristics. The performance analysis of a specific RFID tag is presented together with experimental data, which is in close agreement with the theory."}
{"_id": "f017d6284b6526790dca6bff0bb0495231534e2a", "title": "High-Impedance Electromagnetic Surfaces with a Forbidden Frequency Band", "text": "A new type of metallic electromagnetic structure has been developed that is characterized by having high surface impedance. Although it is made of continuous metal, and conducts dc currents, it does not conduct ac currents within a forbidden frequency band. Unlike normal conductors, this new surface does not support propagating surface waves, and its image currents are not phase reversed. The geometry is analogous to a corrugated metal surface in which the corrugations have been folded up into lumped-circuit elements, and distributed in a two-dimensional lattice. The surface can be described using solid-state band theory concepts, even though the periodicity is much less than the free-space wavelength. This unique material is applicable to a variety of electromagnetic problems, including new kinds of low-profile antennas."}
{"_id": "240ac8f82a346482cbe7cc824bb649089bf0a262", "title": "Combining natural language processing and network analysis to examine how advocacy organizations stimulate conversation on social media.", "text": "Social media sites are rapidly becoming one of the most important forums for public deliberation about advocacy issues. However, social scientists have not explained why some advocacy organizations produce social media messages that inspire far-ranging conversation among social media users, whereas the vast majority of them receive little or no attention. I argue that advocacy organizations are more likely to inspire comments from new social media audiences if they create \"cultural bridges,\" or produce messages that combine conversational themes within an advocacy field that are seldom discussed together. I use natural language processing, network analysis, and a social media application to analyze how cultural bridges shaped public discourse about autism spectrum disorders on Facebook over the course of 1.5 years, controlling for various characteristics of advocacy organizations, their social media audiences, and the broader social context in which they interact. I show that organizations that create substantial cultural bridges provoke 2.52 times more comments about their messages from new social media users than those that do not, controlling for these factors. This study thus offers a theory of cultural messaging and public deliberation and computational techniques for text analysis and application-based survey research."}
{"_id": "7d1a028b5e40bea0a0705b1163a3a8d888ef973b", "title": "A sparse-response deep belief network based on rate distortion theory", "text": "Deep belief networks (DBNs) are currently the dominant technique for modeling the architectural depth of brain, and can be trained efficiently in a greedy layer-wise unsupervised learning manner. However, DBNs without a narrow hidden bottleneck typically produce redundant, continuous-valued codes and unstructured weight patterns. Taking inspiration from rate distortion (RD) theory, which encodes original data using as few bits as possible, we introduce in this paper a variant of DBN, referred to as sparse-response DBN (SR-DBN). In this approach, Kullback\u2013Leibler divergence between the distribution of data and the equilibrium distribution defined by the building block of DBN is considered as a distortion function, and the sparse response regularization induced by L1-norm of codes is used to achieve a small code rate. Several experiments by extracting features from different scale image datasets show that our approach SR-DBN learns codes with small rate, extracts features at multiple levels of abstraction mimicking computations in the cortical hierarchy, and obtains more discriminative representation than PCA and several basic algorithms of DBNs. & 2014 Elsevier Ltd. All rights reserved."}
{"_id": "ed0b24315d62fac40a2ec5447b1742a4c5d640b7", "title": "Pl@ntNet mobile app", "text": "Pl@ntNet is an image sharing and retrieval application for the identification of plants, available on iPhone and iPad devices. Contrary to previous content-based identification applications it can work with several parts of the plant including flowers, leaves, fruits and bark. It also allows integrating user's observations in the database thanks to a collaborative workflow involving the members of a social network specialized on plants. Data collected so far makes it one of the largest mobile plant identification tool."}
{"_id": "8800de3bd6c33397c669596c16349bf277570854", "title": "Resilient Cell-Based Architecture for Time-to-Digital Converter", "text": "This paper proposes a resilient Time-to-Digital Converter (TDC) that lends itself to cell-based design automation. We adopt a shrinking-based architecture with a number of distinctive techniques. First of all, a specialized on-chip re-calibration scheme is developed so that the real-time transfer function of the TDC in silicon (which maps an input pulse-width to its corresponding output code) can be derived on the chip and thereby the absolute value (instead of just a relative code) of an input pulse-width under measurement can be reported. Secondly, the sampling errors stemming from the jitters of training clocks used in the calibration scheme are mitigated by the principle of multi sampling. Thirdly, a flexible coarse-shrinking block is adopted and an automatic adjustment scheme is employed so that the coarse-shrinking block can adjust itself when operated under different input pulse-width ranges."}
{"_id": "f7ef63734af6843e99ee7ce6972a077baf37ecfa", "title": "Measuring User Productivity in Machine Translation Enhanced Computer Assisted Translation", "text": "This paper addresses the problem of reliably measuring productivity gains by professional translators working with a machine translation enhanced computer assisted translation tool. In particular, we report on a field test we carried out with a commercial CAT tool in which translation memory matches were supplemented with suggestions from a commercial machine translation engine. The field test was conducted with 12 professional translators working on real translation projects. Productivity of translators were measured with two indicators, post-editing speed and post-editing effort, on two translation directions, English\u2013Italian and English\u2013German, and two linguistic domains, legal and information technology. Besides a detailed statistical analysis of the experimental results, we also discuss issues encountered in running the test."}
{"_id": "721d4ae075de7bf9ea2cac01fe02e2920ee5c789", "title": "A Deep Learning Architecture for Image Representation, Visual Interpretability and Automated Basal-Cell Carcinoma Cancer Detection", "text": "This paper presents and evaluates a deep learning architecture for automated basal cell carcinoma cancer detection that integrates (1) image representation learning, (2) image classification and (3) result interpretability. A novel characteristic of this approach is that it extends the deep learning architecture to also include an interpretable layer that highlights the visual patterns that contribute to discriminate between cancerous and normal tissues patterns, working akin to a digital staining which spotlights image regions important for diagnostic decisions. Experimental evaluation was performed on set of 1,417 images from 308 regions of interest of skin histopathology slides, where the presence of absence of basal cell carcinoma needs to be determined. Different image representation strategies, including bag of features (BOF), canonical (discrete cosine transform (DCT) and Haar-based wavelet transform (Haar)) and proposed learned-from-data representations, were evaluated for comparison. Experimental results show that the representation learned from a large histology image data set has the best overall performance (89.4% in F-measure and 91.4% in balanced accuracy), which represents an improvement of around 7% over canonical representations and 3% over the best equivalent BOF representation."}
{"_id": "57aa48fa743f124e529bccb7c0542a90b2966fe8", "title": "Defining Media Enjoyment as the Satisfaction of Intrinsic Needs", "text": "This article presents a model of enjoyment rooted in self-determination theory (Deci & Ryan, 1985) that includes the satisfaction of three needs related to psychological wellbeing: autonomy, competence, and relatedness. In an experiment designed to validate this conceptualization of enjoyment, we manipulate video game characteristics related to the satisfaction of these needs and examine their relative effects on enjoyment. The validated model explains 51% of the variance in enjoyment, even without including needs usually studied in relation to enjoyment such as pleasure seeking. Results indicate the utility of defining enjoyment as need satisfaction. These results are discussed in terms of a broader conceptualization of enjoyment represented as the satisfaction of a comprehensive set of functional needs."}
{"_id": "6c6eb640e60c0599871c9d96af4e31be43ea6219", "title": "A Short-term Traffic Flow Forecasting Method Based on the Hybrid PSO-SVR", "text": "Accurate short-term flow forecasting is important for the real-time traffic control, but due to its complex nonlinear data pattern, getting a high precision is difficult. The support vector regression model (SVR) has been widely used to solve nonlinear regression and time series predicting problems. To get a higher precision with less learning time, this paper presents a Hybrid PSO-SVR forecasting method, which uses particle swarm optimization (PSO) to search optimal SVR parameters. In order to find a PSO that is more proper for SVR parameters searching, this paper proposes three kinds of strategies to handle the particles flying out of the searching space Through the comparison of three strategies, we find one of the strategies can make PSO get the optimal parameters more quickly. The PSO using this strategy is called fast PSO. Furthermore, aiming at the problem about the decrease of prediction accuracy caused by the noises in the original data, this paper proposes a hybrid PSO-SVR method with historical momentum based on the similarity of historical short-term flow data. The results of extensive comparison experiments indicate that the proposed model can get more accurate forecasting results than other state-of-the-art algorithms, and when the data contain noises, the method with historical momentum still gets accurate forecasting results."}
{"_id": "9f8fb8486e114b1c971bd3918306078bd83229f7", "title": "Fully Parallel Stochastic LDPC Decoders", "text": "Stochastic decoding is a new approach to iterative decoding on graphs. This paper presents a hardware architecture for fully parallel stochastic low-density parity-check (LDPC) decoders. To obtain the characteristics of the proposed architecture, we apply this architecture to decode an irregular state-of-the-art (1056,528) LDPC code on a Xilinx Virtex-4 LX200 field-programmable gate-array (FPGA) device. The implemented decoder achieves a clock frequency of 222 MHz and a throughput of about 1.66 Gb/s at Eb/N0=4.25 dB (a bit error rate of 10-8). It provides decoding performance within 0.5 and 0.25 dB of the floating-point sum-product algorithm with 32 and 16 iterations, respectively, and similar error-floor behavior. The decoder uses less than 40% of the lookup tables, flip-flops, and IO ports available on the FPGA device. The results provided in this paper validate the potential of stochastic LDPC decoding as a practical and competitive fully parallel decoding approach."}
{"_id": "b86a030b86fc3a441c7f63def45d591b4c58e068", "title": "Sugihara causality analysis of scalp EEG for detection of early Alzheimer's disease", "text": "Recently, Sugihara proposed an innovative causality concept, which, in contrast to statistical predictability in Granger sense, characterizes underlying deterministic causation of the system. This work exploits Sugihara causality analysis to develop novel EEG biomarkers for discriminating normal aging from mild cognitive impairment (MCI) and early Alzheimer's disease (AD). The hypothesis of this work is that scalp EEG based causality measurements have different distributions for different cognitive groups and hence the causality measurements can be used to distinguish between NC, MCI, and AD participants. The current results are based on 30-channel resting EEG records from 48 age-matched participants (mean age 75.7\u00a0years) - 15 normal controls (NCs), 16 MCI, and 17 early-stage AD. First, a reconstruction model is developed for each EEG channel, which predicts the signal in the current channel using data of the other 29 channels. The reconstruction model of the target channel is trained using NC, MCI, or AD records to generate an NC-, MCI-, or AD-specific model, respectively. To avoid over fitting, the training is based on the leave-one-out principle. Sugihara causality between the channels is described by a quality score based on comparison between the reconstructed signal and the original signal. The quality scores are studied for their potential as biomarkers to distinguish between the different cognitive groups. First, the dimension of the quality scores is reduced to two principal components. Then, a three-way classification based on the principal components is conducted. Accuracies of 95.8%, 95.8%, and 97.9% are achieved for resting eyes open, counting eyes closed, and resting eyes closed protocols, respectively. This work presents a novel application of Sugihara causality analysis to capture characteristic changes in EEG activity due to cognitive deficits. The developed method has excellent potential as individualized biomarkers in the detection of pathophysiological changes in early-stage AD."}
{"_id": "882fcf1527774505e02762272df3651a249bf830", "title": "Understanding choice overload in recommender systems", "text": "Even though people are attracted by large, high quality recommendation sets, psychological research on choice overload shows that choosing an item from recommendation sets containing many attractive items can be a very difficult task. A web-based user experiment using a matrix factorization algorithm applied to the MovieLens dataset was used to investigate the effect of recommendation set size (5 or 20 items) and set quality (low or high) on perceived variety, recommendation set attractiveness, choice difficulty and satisfaction with the chosen item. The results show that larger sets containing only good items do not necessarily result in higher choice satisfaction compared to smaller sets, as the increased recommendation set attractiveness is counteracted by the increased difficulty of choosing from these sets. These findings were supported by behavioral measurements revealing intensified information search and increased acquisition times for these large attractive sets. Important implications of these findings for the design of recommender system user interfaces will be discussed."}
{"_id": "ccee800244908d2960830967e70ead7dd8266f7a", "title": "Deep Rewiring: Training very sparse deep networks", "text": "Neuromorphic hardware tends to pose limits on the connectivity of deep networks that one can run on them. But also generic hardware and software implementations of deep learning run more efficiently on sparse networks. Several methods exist for pruning connections of a neural network after it was trained without connectivity constraints. We present an algorithm, DEEP R, that enables us to train directly a sparsely connected neural network. DEEP R automatically rewires the network during supervised training so that connections are there where they are most needed for the task, while its total number is all the time strictly bounded. We demonstrate that DEEP R can be used to train very sparse feedforward and recurrent neural networks on standard benchmark tasks with just a minor loss in performance. DEEP R is based on a rigorous theoretical foundation that views rewiring as stochastic sampling of network configurations from a posterior."}
{"_id": "5100213838a7a3d3a33623128e5ef24a3879d2ab", "title": "Stability Analysis and Stabilization methods of DC Microgrid with Multiple Parallel-Connected DC-DC Converters loaded", "text": "-Constant power loads (CPLs) may yield instability due to the well-known negative impedance characteristic. This study analyzes the factors which cause instability of a DC microgrid with multiple DC-DC converters. Two stabilization methods are presented for two operation modes: constant voltage source mode and droop mode. And sufficient conditions for the stability of the DC microgrid are obtained by identifying the eigenvalues of the Jacobian matrix. The key is to transform the eigenvalue problem to a quadratic eigenvalue problem (QEP). When applying the methods in practical engineering, the salient feature is that the stability parameter domains can be estimated by the available constraints, such as the values of capacities, inductances, maximum load power and distances of the cables. Compared with some classical methods, the proposed methods have wider stability region. The simulation results based on MATLAB /simulink platform verify the feasibility of the methods."}
{"_id": "522b3ad27c2d6317ffb3bb5647777e88d932f342", "title": "Power grids reliability appraisal with intrusion tolerant capability in SCADA systems", "text": "By intruding into the substations and control center of the supervisory control and data acquisition (SCADA) system, trip commands can be sent to intelligent electronic devices (IEDs) which control the system breakers. Reliability of the power system can be impacted through the cyber attacks. In this paper, a modified semi-Markov process (SMP) model is used to describe the procedures of normal and penetration attacks against the intrusion tolerant system. By modeling the transition probabilities between the SMP states and the sojourn time of each SMP state, the mean times to compromise (MTTCs) of the normal and penetration attacks are calculated. With increased probabilities of breaker trips resulted from the cyber attacks, the loss of load probabilities (LOLP) are evaluated based on IEEE reliability test system 79 (RTS79). When the level of attack increases or the level of defense in the system decreases, the simulation results demonstrate that the power system becomes less reliable."}
{"_id": "747b11aa05cefd542dcfe498e3225301fa4a0b33", "title": "FiDO: A Community-based Web Browsing Agent and CDN for Challenged Network Environments", "text": "Homes located on tribal lands, particularly in rural areas of the United States, continue to lack access to broadband Internet and cellular connectivity [19]. Inspired by previous observations of community content similarity in tribal networks, we propose FiDO, a community-based Web browsing and content delivery system that takes advantage of user mobility, opportunistic connectivity, and collaborative filtering to provide relevant Web content to members of disconnected households via opportunistic contact with cellular base stations during a daily commute. We evaluate FiDO using trace-driven simulations with network usage data collected from a tribal-operated ISP that serves the Coeur d\u2019Alene Indian Reservation in Western Idaho. By collecting data about household Web preferences and applying a collaborative filtering technique based on the Web usage patterns of the surrounding reservation community, we are able to opportunistically browse the Web on behalf of members of disconnected households, providing an average of 69.4 Web pages (all content from a specific URL, e.g., \u201chttp://gis.cdatribe-nsn.gov/LandBuyBack/\u201d) crawled from 73% of their top 10 most visited Web domains (e.g., \u201ccdatribe-nsn.gov\u201d or \u201ccnn.com/\u201d) per day. Moreover, this content is able to be fetched and pushed to users even when the opportunistic data rate is limited to an average of only 0.99 Mbps (\u03c3 = 0.24 Mbps) and the daily opportunistic connection time is an average of 45.9 minutes (\u03c3 = 2.3 minutes). Additionally, we demonstrate a hybrid \u201csearch and browse\u201d approach that allocates a percentage of opportunistic resources to the download of user-specified content. By dedicating only 10% of opportunistic windows of connectivity to the download of social media content, 51% of households were able to receive all of their daily expected social media content in addition to an average of 55.3 Web pages browsed on their behalf from an average of 4 different Web domains. Critically, we demonstrate the feasibility of a collaborative and community-based Web browsing model that extends access to Web content across the last mile(s) using existing infrastructure and rural patterns of mobility."}
{"_id": "1fb54581120c38528984a657b1da6c24679dbea6", "title": "Actions and Attributes from Wholes and Parts", "text": "We investigate the importance of parts for the tasks of action and attribute classification. We develop a part-based approach by leveraging convolutional network features inspired by recent advances in computer vision. Our part detectors are a deep version of poselets and capture parts of the human body under a distinct set of poses. For the tasks of action and attribute classification, we train holistic convolutional neural networks and show that adding parts leads to top-performing results for both tasks. We observe that for deeper networks parts are less significant. In addition, we demonstrate the effectiveness of our approach when we replace an oracle person detector, as is the default in the current evaluation protocol for both tasks, with a state-of-the-art person detection system."}
{"_id": "a37817edf9c28155575d226fc8cff4643fb84466", "title": "Evaluating abdominal core muscle fatigue: Assessment of the validity and reliability of the prone bridging test.", "text": "The aims of this study were to research the amplitude and median frequency characteristics of selected abdominal, back, and hip muscles of healthy subjects during a prone bridging endurance test, based on surface electromyography (sEMG), (a) to determine if the prone bridging test is a valid field test to measure abdominal muscle fatigue, and (b) to evaluate if the current method of administrating the prone bridging test is reliable. Thirty healthy subjects participated in this experiment. The sEMG activity of seven abdominal, back, and hip muscles was bilaterally measured. Normalized median frequencies were computed from the EMG power spectra. The prone bridging tests were repeated on separate days to evaluate inter and intratester reliability. Significant differences in normalized median frequency slope (NMFslope ) values between several abdominal, back, and hip muscles could be demonstrated. Moderate-to-high correlation coefficients were shown between NMFslope values and endurance time. Multiple backward linear regression revealed that the test endurance time could only be significantly predicted by the NMFslope of the rectus abdominis. Statistical analysis showed excellent reliability (ICC=0.87-0.89). The findings of this study support the validity and reliability of the prone bridging test for evaluating abdominal muscle fatigue."}
{"_id": "0c92fc4a70eda75480e8d96630d5e3f7aba830f4", "title": "ProfileDroid: multi-layer profiling of android applications", "text": "The Android platform lacks tools for assessing and monitoring apps in a systematic way. This lack of tools is particularly problematic when combined with the open nature of Google Play, the main app distribution channel. As our key contribution, we design and implement ProfileDroid, a comprehensive, multi-layer system for monitoring and profiling apps. Our approach is arguably the first to profile apps at four layers: (a) static, or app specification, (b) user interaction, (c) operating system, and (d) network. We evaluate 27 free and paid Android apps and make several observations: (a) we identify discrepancies between the app specification and app execution, (b) free versions of apps could end up costing more than their paid counterparts, due to an order of magnitude increase in traffic, (c) most network traffic is not encrypted, (d) apps communicate with many more sources than users might expect---as many as 13, and (e) we find that 22 out of 27 apps communicate with Google during execution. ProfileDroid is the first step towards a systematic approach for (a) generating cost-effective but comprehensive app profiles, and (b) identifying inconsistencies and surprising behaviors."}
{"_id": "bafeaa6a0bf7de92b1472674ca99b02f18fa4e63", "title": "Playing Atari Games with Deep Reinforcement Learning and Human Checkpoint Replay", "text": "This paper introduces a novel method for learning how to play the most difficult Atari 2600 games from the Arcade Learning Environment using deep reinforcement learning. The proposed method, called human checkpoint replay, consists in using checkpoints sampled from human gameplay as starting points for the learning process. This is meant to compensate for the difficulties of current exploration strategies, such as \u03b5-greedy, to find successful control policies in games with sparse rewards. Like other deep reinforcement learning architectures, our model uses a convolutional neural network that receives only raw pixel inputs to estimate the state value function. We tested our method on Montezuma\u2019s Revenge and Private Eye, two of the most challenging games from the Atari platform. The results we obtained show a substantial improvement compared to previous learning approaches, as well as over a random player. We also propose a method for training deep reinforcement learning agents using human gameplay experience, which we call human experience replay."}
{"_id": "8da1dda34ecc96263102181448c94ec7d645d085", "title": "Approximation by superpositions of a sigmoidal function", "text": "Abstr,,ct. In this paper we demonstrate that finite linear combinations of compositions of a fixed, univariate function and a set ofaffine functionals can uniformly approximate any continuous function of n real variables with support in the unit hypercube; only mild conditions are imposed on the univariate function. Our results settle an open question about representability in the class of single bidden layer neural networks. In particular, we show that arbitrary decision regions can be arbitrarily well approximated by continuous feedforward neural networks with only a single internal, hidden layer and any continuous sigmoidal nonlinearity. The paper discusses approximation properties of other possible types of nonlinearities that might be implemented by artificial neural networks."}
{"_id": "9e4291de6cdce8e6f247effa308d72e2ec3f6122", "title": "Estimating the dimension of a linear model", "text": ""}
{"_id": "20c18a6895130ba6617ce6a59b8a0eb8173981e7", "title": "Satellite Image Analysis for Disaster and Crisis-Management Support", "text": "This paper describes how multisource satellite data and efficient image analysis may successfully be used to conduct rapid-mapping tasks in the domain of disaster and crisis-management support. The German Aerospace Center (DLR) has set up a dedicated crosscutting service, which is the so-called \"Center for satellite-based Crisis Information\" (ZKI), to facilitate the use of its Earth-observation capacities in the service of national and international response to major disaster situations, humanitarian relief efforts, and civil security issues. This paper describes successful rapid satellite mapping campaigns supporting disaster relief and demonstrates how this technology can be used for civilian crisis-management purposes. During the last years, various international coordination bodies were established, improving the disaster-response-related cooperation within the Earth-observation community worldwide. DLR/ZKI operates in this context, closely networking with public authorities (civil security), nongovernmental organizations (humanitarian relief organizations), satellite operators, and other space agencies. This paper reflects on several of these international activities, such as the International Charter Space and Major Disasters, describes mapping procedures, and reports on rapid-mapping experiences gained during various disaster-response applications. The example cases presented cover rapid impact assessment after the Indian Ocean Tsunami, forest fires mapping for Portugal, earthquake-damage assessment for Pakistan, and landslide extent mapping for the Philippines"}
{"_id": "2074d749e61c4b1ccbf957fe18efe00fa590a124", "title": "Toward a Science of Cyber\u2013Physical System Integration", "text": "System integration is the elephant in the china store of large-scale cyber-physical system (CPS) design. It would be hard to find any other technology that is more undervalued scientifically and at the same time has bigger impact on the presence and future of engineered systems. The unique challenges in CPS integration emerge from the heterogeneity of components and interactions. This heterogeneity drives the need for modeling and analyzing cross-domain interactions among physical and computational/networking domains and demands deep understanding of the effects of heterogeneous abstraction layers in the design flow. To address the challenges of CPS integration, significant progress needs to be made toward a new science and technology foundation that is model based, precise, and predictable. This paper presents a theory of composition for heterogeneous systems focusing on stability. Specifically, the paper presents a passivity-based design approach that decouples stability from timing uncertainties caused by networking and computation. In addition, the paper describes cross-domain abstractions that provide effective solution for model-based fully automated software synthesis and high-fidelity performance analysis. The design objectives demonstrated using the techniques presented in the paper are group coordination for networked unmanned air vehicles (UAVs) and high-confidence embedded control software design for a quadrotor UAV. Open problems in the area are also discussed, including the extension of the theory of compositional design to guarantee properties beyond stability, such as safety and performance."}
{"_id": "2cb46d5cab5590ef9950bd303bdfae41e7a98b1a", "title": "An evaluation of alternative architectures for transaction processing in the cloud", "text": "Cloud computing promises a number of advantages for the deployment of data-intensive applications. One important promise is reduced cost with a pay-as-you-go business model. Another promise is (virtually) unlimited throughput by adding servers if the workload increases. This paper lists alternative architectures to effect cloud computing for database applications and reports on the results of a comprehensive evaluation of existing commercial cloud services that have adopted these architectures. The focus of this work is on transaction processing (i.e., read and update workloads), rather than analytics or OLAP workloads, which have recently gained a great deal of attention. The results are surprising in several ways. Most importantly, it seems that all major vendors have adopted a different architecture for their cloud services. As a result, the cost and performance of the services vary significantly depending on the workload."}
{"_id": "271c41d1584938f618690151eb391eb3545d182e", "title": "Integrated Dual-Channel X-Band Offset-Transmitter for Phase Steering and DDMA Arrays", "text": "Multiple-Input and Multiple-Output (MIMO) arrays provide more degrees of freedom than conventional phased arrays. They require that every transmitting element of the array can be identified when received. One way to achieve this is to give every element its own unique frequency. Offset-transmitters may be used to introduce MIMO or Doppler Division Multiple Access (DDMA) into phased-arrays without an excessive increase in waveform-generating hardware. Our dual-channel demonstrator IC can obtain a phase accuracy better than 1 degree and an spurious level of better than -65dBc for a single on-chip channel. This work investigates at X-band the effects of the limited on-chip isolation of 35dB, when multiple offset outputs are generated on a single chip for both beam steering and DDMA. In case of beam steering, the requirements on channel-to-channel isolation are less strict, making it well within reach. In the case of DDMA, we recommend increasing the channel-to-channel isolation by implementing multiple chips, in which case independent signals can be generated."}
{"_id": "480b1a47373a9791947da2b83224d2f2bd833e7d", "title": "Semi-supervised Semantic Pattern Discovery with Guidance from Unsupervised Pattern Clusters", "text": "We present a simple algorithm for clustering semantic patterns based on distributional similarity and use cluster memberships to guide semi-supervised pattern discovery. We apply this approach to the task of relation extraction. The evaluation results demonstrate that our novel bootstrapping procedure significantly outperforms a standard bootstrapping. Most importantly, our algorithm can effectively prevent semantic drift and provide semi-supervised learning with a natural stopping criterion."}
{"_id": "a5e6c14a9335e7bcd981c3ad67cc73af2474b136", "title": "A Theoretical Basis for a Biopharmaceutic Drug Classification: The Correlation of in Vitro Drug Product Dissolution and in Vivo Bioavailability", "text": "A biopharmaceutics drug classification scheme for correlating in vitro drug product dissolution and in vivo bioavailability is proposed based on recognizing that drug dissolution and gastrointestinal permeability are the fundamental parameters controlling rate and extent of drug absorption. This analysis uses a transport model and human permeability results for estimating in vivo drug absorption to illustrate the primary importance of solubility and permeability on drug absorption. The fundamental parameters which define oral drug absorption in humans resulting from this analysis are discussed and used as a basis for this classification scheme. These Biopharmaceutic Drug Classes are defined as: Case 1. High solubility-high permeability drugs, Case 2. Low solubility-high permeability drugs, Case 3. High solubility-low permeability drugs, and Case 4. Low solubility-low permeability drugs. Based on this classification scheme, suggestions are made for setting standards for in vitro drug dissolution testing methodology which will correlate with the in vivo process. This methodology must be based on the physiological and physical chemical properties controlling drug absorption. This analysis points out conditions under which no in vitro-in vivo correlation may be expected e.g. rapidly dissolving low permeability drugs. Furthermore, it is suggested for example that for very rapidly dissolving high solubility drugs, e.g. 85% dissolution in less than 15 minutes, a simple one point dissolution test, is all that may be needed to insure bioavailability. For slowly dissolving drugs a dissolution profile is required with multiple time points in systems which would include low pH, physiological pH, and surfactants and the in vitro conditions should mimic the in vivo processes. This classification scheme provides a basis for establishing in vitro-in vivo correlations and for estimating the absorption of drugs based on the fundamental dissolution and permeability properties of physiologic importance."}
{"_id": "cd4235fe314179bb2035af3aff81d29b9150b898", "title": "A Survey of Enabling Technologies of Low Power and Long Range Machine-to-Machine Communications", "text": "Low power and long range machine-to-machine (M2M) communication techniques are expected to provide ubiquitous connections for the wireless devices. In this paper, three major low power and long range M2M solutions are surveyed. The first type of solutions is referred to as the low power wide area (LPWA) network. The design of the LPWA techniques features low cost, low data rate, long communication range, and low power consumption. The second type of solutions is the IEEE 802.11ah which features higher data rates using a wider bandwidth than the LPWA-based solutions. The third type of solutions is operated under the cellular network infrastructure. Based on the analysis of the pros and cons of the enabling technologies of the surveyed M2M solutions, as well as the corresponding deployment strategies, the gaps in knowledge are identified. The paper also presents a summary of the research directions for improving the performance of the surveyed low power and long range M2M communication technologies."}
{"_id": "c093e12d8c95abb49532d2f0dfd04417864942d5", "title": "A New Bit-Serial Architecture for Field Multiplication Using Polynomial Bases", "text": "Multiplication is the main finite field arithmetic operation in elliptic curve cryptography and its bit-serial hardware implementation is attractive in resource constrained environments such as smart cards, where the chip area is limited. In this paper, a new serial-output bitserial multiplier using polynomial bases over binary extension fields is proposed. It generates a bit of the multiplication in each clock cycle with the latency of one cycle. To the best of our knowledge, this is the first time that such a serial-output bit-serial multiplier architecture using polynomial bases for general irreducible polynomials is proposed."}
{"_id": "72118216235f3875078a3c8696e2abd4273bc3b8", "title": "DBpedia SPARQL Benchmark - Performance Assessment with Real Queries on Real Data", "text": "Triple stores are the backbone of increasingly many Data Web applications. It is thus evident that the performance of those stores is mission critical for individual projects as well as for data integration on the Data Web in general. Consequently, it is of central importance during the implementation of any of these applications to have a clear picture of the weaknesses and strengths of current triple store implementations. In this paper, we propose a generic SPARQL benchmark creation procedure, which we apply to the DBpedia knowledge base. Previous approaches often compared relational and triple stores and, thus, settled on measuring performance against a relational database which had been converted to RDF by using SQL-like queries. In contrast to those approaches, our benchmark is based on queries that were actually issued by humans and applications against existing RDF data not resembling a relational schema. Our generic procedure for benchmark creation is based on query-log mining, clustering and SPARQL feature analysis. We argue that a pure SPARQL benchmark is more useful to compare existing triple stores and provide results for the popular triple store implementations Virtuoso, Sesame, Jena-TDB, and BigOWLIM. The subsequent comparison of our results with other benchmark results indicates that the performance of triple stores is by far less homogeneous than suggested by previous benchmarks."}
{"_id": "1b35cbf853dd8becf8777aa224896daf4ee4d076", "title": "Bioinspired engineering of thermal materials.", "text": "In the development of next-generation materials with enhanced thermal properties, biological systems in nature provide many examples that have exceptional structural designs and unparalleled performance in their thermal or nonthermal functions. Bioinspired engineering thus offers great promise in the synthesis and fabrication of thermal materials that are difficult to engineer through conventional approaches. In this review, recent progress in the emerging area of bioinspired advanced materials for thermal science and technology is summarized. State-of-the-art developments of bioinspired thermal-management materials, including materials for efficient thermal insulation and heat transfer, and bioinspired materials for thermal/infrared detection, are highlighted. The dynamic balance of bioinspiration and practical engineering, the correlation of inspiration approaches with the targeted applications, and the coexistence of molecule-based inspiration and structure-based inspiration are discussed in the overview of the development. The long-term outlook and short-term focus of this critical area of advanced materials engineering are also presented."}
{"_id": "3f530ad817fb4597a1b755ae98daa839a167f2b9", "title": "Empirical Study on the Healing Nature of Mandalas", "text": "Mandalas were first used in therapy by Carl Jung, who found that the act of drawing mandalas had a calming effect on patients while at the same time facilitating psychic integration. There is a scarcity of controlled empirical studies of the healing impact of mandalas on mental health. Based on the efficacy of James Pennebaker\u2019s written disclosure paradigm in promoting mental well-being (Pennebaker, 1997a, 1997b), the purpose of our study was to examine the benefits for those suffering from post traumatic stress disorder (PTSD) of processing traumatic events through the creation of mandalas. Benefits to participants were measured in terms of changes in the variables of PTSD symptoms, depressive symptoms, anxiety, spiritual meaning, and the frequency of physical symptoms and illness. Relative to those in the control condition, individuals assigned to the experimental mandala-creation group reported greater decreases in symptoms of trauma at the 1-month follow up. There were no other statistically significant outcome differences. Alternative modes of processing traumatic events (e.g., visually symbolically) may serve individuals who are either reluctant or unable to write about their experiences."}
{"_id": "621479624ad634a7ed86977aacd780339af91ec7", "title": "Improving the Performance of the FFT-based Parallel Code-phase Search Acquisition of GNSS Signals by Decomposition of the Circular Correlation", "text": "Dr. Cyril Botteron leads the GNSS and UWB groups in the electronics and signal processing laboratory at EPFL. He received his PhD degree from the University of Calgary, Canada, in 2003. His current research interests comprise the development of low power radio frequency (RF) integrated circuits and advanced signal processing techniques for ultra-low power communications and global and local positioning applications."}
{"_id": "060c5f6e70c552bb6cb387713e3776af98f7a69b", "title": "ButterflyNet: a mobile capture and access system for field biology research", "text": "Through a study of field biology practices, we observed that biology fieldwork generates a wealth of heterogeneous information, requiring substantial labor to coordinate and distill. To manage this data, biologists leverage a diverse set of tools, organizing their effort in paper notebooks. These observations motivated ButterflyNet, a mobile capture and access system that integrates paper notes with digital photographs captured during field research. Through ButterflyNet, the activity of leafing through a notebook expands to browsing all associated digital photos. ButterflyNet also facilitates the transfer of captured content to spreadsheets, enabling biologists to share their work. A first-use study with 14 biologists found this system to offer rich data capture and transformation, in a manner felicitous with current practice."}
{"_id": "72f95251e642eee833396d4271daa2bf0d7e074f", "title": "An extra low-power 1Tbit/s bandwidth PLL/DLL-less eDRAM PHY using 0.3V low-swing IO for 2.5D CoWoS application", "text": "A 1Tbit/s bandwidth PHY is demonstrated through 2.5D CoWoS platform. Two chips: SOC and eDRAM have been fabricated in TSMC 40nm CMOS technology and stacked on another silicon interposer chip in 65nm technology. Total 1024 DQ bus operating in 1.1Gbit/s with Vmin=0.3V are proven in experimental results. A novel timing compensation mechanism is presented to achieve a low-power and small area eDRAM PHY that excludes PLL/DLL but retains good timing margin. Another data sampling alignment training approach is reserved to enhance timing robustness. A compact low-swing IO also achieves great power efficiency of 0.105mW/Gbps."}
{"_id": "d8f080bb2b888f1c19478ddf7ffa95f0cf59a708", "title": "Closed-Loop Control of a Three-Phase Neutral-Point-Clamped Inverter Using an Optimized Virtual-Vector-Based Pulsewidth Modulation", "text": "This paper presents a closed-loop control scheme for the three-level three-phase neutral-point-clamped dc-ac converter using the optimized nearest three virtual-space-vector pulsewidth modulation, which is a modulation that produces low output-voltage distortion with a significant reduction of the dc-link capacitance. A new specific loop modifying the modulating waveforms is proposed to rapidly control possible perturbations in the neutral-point voltage balance. An online estimation of the load displacement angle and load linear/nonlinear nature is introduced at no extra cost. The remaining part of the control is analogous to the control for a two-level converter with an appropriate interfacing to the selected modulation. The closed-loop control is designed for the case of a renewable-energy source connected to the ac mains, and its performance is analyzed through simulation and experiments."}
{"_id": "68ab0f936e22acb28a1fdb617b9960b2d799c93a", "title": "From single cells to deep phenotypes in cancer", "text": "In recent years, major advances in single-cell measurement systems have included the introduction of high-throughput versions of traditional flow cytometry that are now capable of measuring intracellular network activity, the emergence of isotope labels that can enable the tracking of a greater variety of cell markers and the development of super-resolution microscopy techniques that allow measurement of RNA expression in single living cells. These technologies will facilitate our capacity to catalog and bring order to the inherent diversity present in cancer cell populations. Alongside these developments, new computational approaches that mine deep data sets are facilitating the visualization of the shape of the data and enabling the extraction of meaningful outputs. These applications have the potential to reveal new insights into cancer biology at the intersections of stem cell function, tumor-initiating cells and multilineage tumor development. In the clinic, they may also prove important not only in the development of new diagnostic modalities but also in understanding how the emergence of tumor cell clones harboring different sets of mutations predispose patients to relapse or disease progression."}
{"_id": "9d4b4bb9cb11378e82946c0b80d9eba0c8ca351c", "title": "Sequence Covering for Efficient Host-Based Intrusion Detection", "text": "This paper introduces a new similarity measure, the covering similarity, which we formally define for evaluating the similarity between a symbolic sequence and a set of symbolic sequences. A pairwise similarity can also be directly derived from the covering similarity to compare two symbolic sequences. An efficient implementation to compute the covering similarity is proposed which uses a suffix-tree data structure, but other implementations, based on suffix array for instance, are possible and are possibly necessary for handling very large-scale problems. We have used this similarity to isolate attack sequences from normal sequences in the scope of host-based intrusion detection. We have assessed the covering similarity on two well-known benchmarks in the field. In view of the results reported on these two datasets for the state-of-the-art methods, according to the comparative study, we have carried out based on three challenging similarity measures commonly used for string processing, or in bioinformatics, we show that the covering similarity is particularly relevant to address the detection of anomalies in sequences of system calls."}
{"_id": "cc63ddfadf41bbcbfe89b3c0ff9f9a4868cd9b0c", "title": "Qualitative clinical evaluation of scapular dysfunction: a reliability study.", "text": "The purpose of this study was to determine the intrarater and interrater reliability of a clinical evaluation system for scapular dysfunction. No commonly accepted terminology presently exists for describing the abnormal dynamic scapular movement patterns that are commonly associated with shoulder injury. A method of observation was devised for clinical evaluation of scapular dysfunction. Blinded evaluators (2 physicians and 2 physical therapists) were familiarized with the evaluation method of scapular movement patterns before viewing a videotape of 26 subjects with and without scapular dysfunction. Each evaluator was asked to categorize the predominant scapular movement pattern observed during bilateral humeral scaption and abduction motions. Reliability was assessed by a kappa coefficient. Intertester reliability (kappa = 0.4) was found to be slightly lower than intratester reliability (kappa = 0.5). These results indicate that, with refinement, this qualitative evaluation method may allow clinicians to standardize the categorization of dynamic scapular dysfunction patterns."}
{"_id": "9b71b307a2f99fb404c6f6159b146547a0dc1cbc", "title": "DART: a Dataset of Arguments and their Relations on Twitter", "text": "The problem of understanding the stream of messages exchanged on social media such as Facebook and Twitter is becoming a major challenge for automated systems. The tremendous amount of data exchanged on these platforms as well as the specific form of language adopted by social media users constitute a new challenging context for existing argument mining techniques. In this paper, we describe a resource of natural language arguments called DART (Dataset of Arguments and their Relations on Twitter) where the complete argument mining pipeline over Twitter messages is considered: (i) we identify which tweets can be considered as arguments and which cannot, and (ii) we identify what is the relation, i.e., support or attack, linking such tweets to each other."}
{"_id": "cec6a3cc35d3b08c5decc55987a8cf84f0820af8", "title": "I feel your pain: emotional closeness modulates neural responses to empathically experienced rejection.", "text": "Empathy is generally thought of as the ability to share the emotional experiences of others. In scientific terms, this is usually operationalized as an ability to vicariously feel others' mental and emotional experiences. Supporting this account, research demonstrates that watching others experience physical pain activates similar brain regions to the actual experience of pain itself. First-hand experience of social rejection also activates this network. The current work extends these findings by examining whether the \"pain\" network is similarly implicated in witnessing rejection, and whether emotional closeness modulates this response. We provide evidence for each of these suppositions, demonstrating: (a) that the pain network is activated when watching a friend suffer social rejection, and (b) that interpersonal closeness with that friend modulates this response. Further, we found that the inferior frontal gyrus, critical for representing others' mental and emotional states, mediates the relationship between emotional closeness and neural responses to watching the rejection of a friend."}
{"_id": "52eb8f4aa2ffcdf01f945fc8c3ad466855fd1567", "title": "HPI Question Answering System in BioASQ 2016", "text": "Question answering (QA) systems are crucial when searching for exact answers for natural language questions in the biomedical domain. Answers to many of such questions can be extracted from the 26 millions biomedical publications currently included in MEDLINE when relying on appropriate natural language processing (NLP) tools. In this work we describe our participation in the task 4b of the BioASQ challenge using two QA systems that we developed for biomedicine. Preliminary results show that our systems achieved first and second positions in the snippet retrieval sub-task and for the generation of ideal answers."}
{"_id": "5a5a51a911e7c5800a8ba518e4347ad3bc91d8cb", "title": "The acceptance and use of customer relationship management (CRM) systems: An empirical study of distribution service industry in Taiwan", "text": "With the rapid change of business competitive environment, enterprise resource integration and innovative issues of business operation have gradually become the most important issues for businesses. Furthermore, many enterprises have implemented novel information technology and developing the innovative e-business applications systems such as enterprise resource planning (ERP), customer relationship management (CRM), knowledge management (KM) and supply chain management (SCM) to enhance their competitive advantages. CRM systems can help organizations to gain the potential new customers, promote the existing customers\u2019 purchase, maintain good relationship with customers as well as to enhance the customer value, thus can improve the enterprise images. Moreover, the development and applications of CRM systems have also been considered as important issues for researchers and practitioners in recent years. For Taiwan\u2019s industry, it has been gradually transferred from manufacturing-oriented to a service-oriented. Thus, the service industry has higher percentage in the GDP and in which the distribution service industry is the biggest one and be a core industry in the whole of service industry. The main purpose of this study is to explore the factors affecting the acceptance and use of CRM systems. Furthermore, the proposed research model was built on the basis of unified theory of acceptance and use of technology (UTAUT) and task-technology fit (TTF) framework as well as technological and managerial theories. The implications of findings for practice will be discussed. 2010 Elsevier Ltd. All rights reserved."}
{"_id": "6873e61b9a155b6a81c27b09fb8c11c26f5815ca", "title": "Spam filtering email classification (SFECM) using gain and graph mining algorithm", "text": "This paper proposes a hybrid solution of spam email classifier using context based email classification model as main algorithm complimented by information gain calculation to increase spam classification accuracy. Proposed solution consists of three stages email pre-processing, feature extraction and email classification. Research has found that LingerIG spam filter is highly effective at separating spam emails from cluster of homogenous work emails. Also experiment result proved the accuracy of spam filtering is 100% as recorded by the team of developers at University of Sydney. The study has shown that implementing the spam filter in the context-based email classification model is feasible. Experiment of the study has confirmed that spam filtering aspect of context-based classification model can be improved."}
{"_id": "dbc677c69b3b55da3a817ca6e83f40a8f438dd88", "title": "Efficient parallel translating embedding for knowledge graphs", "text": "Knowledge graph embedding aims to embed entities and relations of knowledge graphs into low-dimensional vector spaces. Translating embedding methods regard relations as the translation from head entities to tail entities, which achieve the state-of-the-art results among knowledge graph embedding methods. However, a major limitation of these methods is the time consuming training process, which may take several days or even weeks for large knowledge graphs, and result in great difficulty in practical applications. In this paper, we propose an efficient parallel framework for translating embedding methods, called ParTrans-X, which enables the methods to be paralleled without locks by utilizing the distinguished structures of knowledge graphs. Experiments on two datasets with three typical translating embedding methods, i.e., TransE [3], TransH [19], and a more efficient variant TransE- AdaGrad [11] validate that ParTrans-X can speed up the training process by more than an order of magnitude."}
{"_id": "33ffc82ea3d8708ed9037debc3f1d4f9e9e49269", "title": "Characterizing the software process: a maturity framework", "text": "A description is given of a software-process maturity framework that has been developed to provide the US Department of Defense with a means to characterize the capabilities of software-development organizations. This software-development process-maturity model reasonably represents the actual ways in which software-development organizations improve. It provides a framework for assessing these organizations and identifying the priority areas for immediate improvement. It also helps identify those places where advanced technology can be most valuable in improving the software-development process. The framework can be used by any software organization to assess its own capabilities and identify the most important areas for improvement.<>"}
{"_id": "7588c5f7cc699d7a071e85e08eb98d514e9c73f8", "title": "Capability Maturity Model for Software", "text": "This paper provides an overview of the latest version of the Capability Maturity Model for Software, CMM v1.1. Based on over six years of experience with software process improvement and the contributions of hundreds of reviewers, CMM v1.1 describes the software engineering and management practices that characterize organizations as they mature their processes for developing and maintaining software. This paper stresses the need for a process maturity framework to prioritize improvement actions, describes the process maturity framework of five maturity levels and the associated structural components, and discusses future directions for the CMM."}
{"_id": "f22a4acdd90388f386acaf0589f17731e0cf5cfa", "title": "Climbing the \"Stairway to Heaven\" -- A Mulitiple-Case Study Exploring Barriers in the Transition from Agile Development towards Continuous Deployment of Software", "text": "Agile software development is well-known for its focus on close customer collaboration and customer feedback. In emphasizing flexibility, efficiency and speed, agile practices have lead to a paradigm shift in how software is developed. However, while agile practices have succeeded in involving the customer in the development cycle, there is an urgent need to learn from customer usage of software also after delivering and deployment of the software product. The concept of continuous deployment, i.e. the ability to deliver software functionality frequently to customers and subsequently, the ability to continuously learn from real-time customer usage of software, has become attractive to companies realizing the potential in having even shorter feedback loops. However, the transition towards continuous deployment involves a number of barriers. This paper presents a multiple-case study in which we explore barriers associated with the transition towards continuous deployment. Based on interviews at four different software development companies we present key barriers in this transition as well as actions that need to be taken to address these."}
{"_id": "98354bb3d015d684d9248589191367fd7069cbc6", "title": "Photogrammetric Multi-View Stereo and Imaging Network Design", "text": ""}
{"_id": "e2e194b5103e233ebe47cf6d1e5bc31e3981fd73", "title": "A Multi-User Surface Visuo-Haptic Display Using Electrostatic Friction Modulation and Capacitive-Type Position Sensing", "text": "This paper proposes a visuo-haptic feedback system that can provide haptic feedback to multiple users on an LCD monitor using electrostatic adhesion and built-in capacitive sensors. The prototype system consists of a 40-inch LCD monitor, an ITO electrode sheet arranged on the monitor, and multiple contact pads for electrostatic adhesion. By applying low-frequency haptic voltage and high-frequency sensing voltage to each pad, the system realizes passive haptic feedback, as well as position sensing using the same components. The passive haptic feedback force exceeds 1 N at 300 V$_\\mathrm{rms}$ . The performance of the sensing system is investigated in terms of interference, which shows the haptic voltage and existence of multiple pads can affect the sensing and can shift the output by a few millimeters. A pilot application, virtual hockey game, successfully demonstrates the sensing and haptic rendering capability of the proposed system. The effect of the interference on the demonstration is discussed."}
{"_id": "8efbc664db0f6d0286daefaf9003d068ec1cbaa9", "title": "Music, Intelligence and Artificiality", "text": "The discipline of Music-AI is defined as that activity which seeks to program computers to perform musical tasks in an intelligent, which possibly means humanlike way. A brief historical survey of different approaches within the discipline is presented. Two particular issues arise: the explicit representation of knowledge; and symbolic and subsymbolic representation and processing. When attempting to give a precise definition of Music-AI, it is argued that all musical processes must make some reference to human behaviour, and so Music-AI is a central rather than a peripheral discipline for musical computing. However, it turns out that the goals of Music-AI as first expressed, the mimicking of human behaviour, are impossible to achieve in full, and that it is impossible, in principle, for computers to pass a musical version of the Turing test. In practice, however, computers are used for their non-human-like behaviour just as much as their human-like behaviour, so the real goal of Music-AI must be reformulated. Furthermore, it is argued that the non-holistic analysis of human behaviour which this reformulation entails is actually informative for our understanding of human behaviour. Music-AI could also be fruitfully concerned with developing musical intelligences which were explicitly not human. Music-AI is then seen to be as much a creative enterprise as a scientific one."}
{"_id": "d07300357d17631864244dded83e9a8a81fa340b", "title": "A 450 fs 65-nm CMOS Millimeter-Wave Time-to-Digital Converter Using Statistical Element Selection for All-Digital PLLs", "text": "This paper presents a time-to-digital converter (TDC) that operates with a 20\u201364 GHz input and underpins the phase digitization function in a millimeter-wave all-digital fractional-N frequency synthesizer. A self-calibrated inductor-less frequency divider using dynamic CML latches provides an eight-phase input to a 3-bit \u201ccoarse\u201d TDC, which is interfaced to a 5-bit \u201cfine\u201d TDC through a sub-sampling coarse-fine interface circuit. A wide bandwidth low dropout (LDO) on-chip regulator is used to decrease the effect of supply noise on TDC performance. A synthesized digital engine implements calibration using statistical element selection with mean adaptation to alleviate TDC nonlinearity that results from random mismatches and PVT variations. The TDC is fabricated in 65-nm CMOS along with the divider and calibration circuits, and achieves 450-fs resolution. The measured DNL and INL of the TDC are 0.65 and 1.2 LSB, respectively. The TDC consumes 11 mA from 1-V supply voltage. The TDC features a figure-of-merit of 0.167 (0.47) pJ per conversion step without (with) the frequency divider. A single-shot experiment shows that the on-chip LDO reduces the effect of TDC noise by reducing the standard deviation from 0.856 to 0.167 LSB for constant input. The prototype occupies an active area of $502\\times 110~\\mu \\text{m}^{\\mathbf {2}}$ excluding pads."}
{"_id": "6a1dd4867f7fd75ac083375b41291dc8b419e82c", "title": "Presupposed Content and Entailments in Natural Language Inference", "text": "Previous work has presented an accurate natural logic model for natural language inference. Other work has demonstrated the effectiveness of computing presuppositions for solving natural language inference problems. We extend this work to create a system for correctly computing lexical presuppositions and their interactions within the natural logic framework. The combination allows our system to properly handle presupposition projection from the lexical to the sentential level while taking advantage of the accuracy and coverage of the natural logic system. To solve an inference problem, our system computes a sequence of edits from premise to hypothesis. For each edit the system computes an entailment relation and a presupposition entailment relation. The relations are then separately composed according to a syntactic tree and the semantic properties of its nodes. Presuppositions are projected based on the properties of their syntactic and semantic environment. The edits are then composed and the resulting entailment relations are combined with the presupposition relation to yield an answer to the inference problem."}
{"_id": "62a9b8aed78beb34266149d569b63262a5a881ba", "title": "Reactive power capability of the wind turbine with Doubly Fed Induction Generator", "text": "With the increasing integration into power grids, wind power plants play an important role in the power system. Many requirements for the wind power plants have been proposed in the grid codes. According to these grid codes, wind power plants should have the ability to perform voltage control and reactive power compensation at the point of common coupling (PCC). Besides the shunt flexible alternating current transmission system (FACTS) devices such as the static var compensator (SVC) and the static synchronous compensator (STATCOM), the wind turbine itself can also provide a certain amount of reactive power compensation, depending on the wind speed and the active power control strategy. This paper analyzes the reactive power capability of Doubly Fed Induction Generator (DFIG) based wind turbine, considering the rated stator current limit, the rated rotor current limit, the rated rotor voltage limit, and the reactive power capability of the grid side convertor (GSC). The boundaries of reactive power capability of DFIG based wind turbine are derived. The result was obtained using the software MATLAB."}
{"_id": "2753e8c047fb2e58eb60726cdac1f0bdaa71d4ef", "title": "Mycorrhizosphere interactions to improve plant fitness and soil quality", "text": "Arbuscular mycorrhizal fungi are key components of soil microbiota and obviously interact with other microorganisms in the rhizosphere, i.e. the zone of influence of plant roots on microbial populations and other soil constituents. Mycorrhiza formation changes several aspects of plant physiology and some nutritional and physical properties of the rhizospheric soil. These effects modify the colonization patterns of the root or mycorrhizas (mycorrhizosphere) by soil microorganisms. The rhizosphere of mycorrhizal plants, in practice a mycorrhizosphere, harbors a great array of microbial activities responsible for several key ecosystem processes. This paper summarizes the main conceptual principles and accepted statements on the microbial interactions between mycorrhizal fungi and other members of rhizosphere microbiota and discusses current developments and future trends concerning the following topics: (i) effect of soil microorganisms on mycorrhiza formation; (ii) mycorrhizosphere establishment; (iii) interactions involved in nutrient cycling and plant growth; (iv) interactions involved in the biological control of plant pathogens; and (v) interactions to improve soil quality. The main conclusion is that microbial interactions in the rhizosphere of mycorrhizal plants improve plant fitness and soil quality, critical issues for a sustainable agricultural development and ecosystem functioning."}
{"_id": "973c113db6805ccfb7c1c3d71008867accf4fc03", "title": "A Hidden Markov Model for Vehicle Detection and Counting", "text": "To reduce roadway congestion and improve traffic safety, accurate traffic metrics, such as number of vehicles travelling through lane-ways, are required. Unfortunately most existing infrastructure, such as loop-detectors and many video detectors, do not feasibly provide accurate vehicle counts. Consequently, a novel method is proposed which models vehicle motion using hidden Markov models (HMM). The proposed method represents a specified small region of the roadway as 'empty', 'vehicle entering', 'vehicle inside', and 'vehicle exiting', and then applies a modified Viterbi algorithm to the HMM sequential estimation framework to initialize and track vehicles. Vehicle observations are obtained using an Adaboost trained Haar-like feature detector. When tested on 88 hours of video, from three distinct locations, the proposed method proved to be robust to changes in lighting conditions, moving shadows, and camera motion, and consistently out-performed Multiple Target Tracking (MTT) and Virtual Detection Line(VDL) implementations. The median vehicle count error of the proposed method is lower than MTT and VDL by 28%, and 70% respectively. As future work, this algorithm will be implemented to provide the traffic industry with improved automated vehicle counting, with the intent to eventually provide real-time counts."}
{"_id": "b4c909ab72e4c1409cbbd160c052bb34793a29a2", "title": "Striving for the moral self: the effects of recalling past moral actions on future moral behavior.", "text": "People's desires to see themselves as moral actors can contribute to their striving for and achievement of a sense of self-completeness. The authors use self-completion theory to predict (and show) that recalling one's own (im)moral behavior leads to compensatory rather than consistent moral action as a way of completing the moral self. In three studies, people who recalled their immoral behavior reported greater participation in moral activities (Study 1), reported stronger prosocial intentions (Study 2), and showed less cheating (Study 3) than people who recalled their moral behavior. These compensatory effects were related to the moral magnitude of the recalled event, but they did not emerge when people recalled their own positive or negative nonmoral behavior (Study 2) or others' (im)moral behavior (Study 3). Thus, the authors extend self-completion theory to the moral domain and use it to integrate the research on moral cleansing (remunerative moral strivings) and moral licensing (relaxed moral strivings)."}
{"_id": "deb55cf5286c5a04cd25b3e5a91cc8caf3ca7ba1", "title": "Transcriptional reprogramming in yeast using dCas9 and combinatorial gRNA strategies", "text": "BACKGROUND\nTranscriptional reprogramming is a fundamental process of living cells in order to adapt to environmental and endogenous cues. In order to allow flexible and timely control over gene expression without the interference of native gene expression machinery, a large number of studies have focused on developing synthetic biology tools for orthogonal control of transcription. Most recently, the nuclease-deficient Cas9 (dCas9) has emerged as a flexible tool for controlling activation and repression of target genes, by the simple RNA-guided positioning of dCas9 in the vicinity of the target gene transcription start site.\n\n\nRESULTS\nIn this study we compared two different systems of dCas9-mediated transcriptional reprogramming, and applied them to genes controlling two biosynthetic pathways for biobased production of isoprenoids and triacylglycerols (TAGs) in baker's yeast Saccharomyces cerevisiae. By testing 101 guide-RNA (gRNA) structures on a total of 14 different yeast promoters, we identified the best-performing combinations based on reporter assays. Though a larger number of gRNA-promoter combinations do not perturb gene expression, some gRNAs support expression perturbations up to ~threefold. The best-performing gRNAs were used for single and multiplex reprogramming strategies for redirecting flux related to isoprenoid production and optimization of TAG profiles. From these studies, we identified both constitutive and inducible multiplex reprogramming strategies enabling significant changes in isoprenoid production and increases in TAG.\n\n\nCONCLUSION\nTaken together, we show similar performance for a constitutive and an inducible dCas9 approach, and identify multiplex gRNA designs that can significantly perturb isoprenoid production and TAG profiles in yeast without editing the genomic context of the target genes. We also identify a large number of gRNA positions in 14 native yeast target pomoters that do not affect expression, suggesting the need for further optimization of gRNA design tools and dCas9 engineering."}
{"_id": "7c5adcf159af8df4d91774b196fb32189efc0368", "title": "Expertise-related deactivation of the right temporoparietal junction during musical improvisation", "text": "Musical training has been associated with structural changes in the brain as well as functional differences in brain activity when musicians are compared to nonmusicians on both perceptual and motor tasks. Previous neuroimaging comparisons of musicians and nonmusicians in the motor domain have used tasks involving prelearned motor sequences or synchronization with an auditorily presented sequence during the experiment. Here we use functional magnetic resonance imaging (fMRI) to examine expertise-related differences in brain activity between musicians and nonmusicians during improvisation--the generation of novel musical-motor sequences--using a paradigm that we previously used in musicians alone. Despite behaviorally matched performance, the two groups showed significant differences in functional brain activity during improvisation. Specifically, musicians deactivated the right temporoparietal junction (rTPJ) during melodic improvisation, while nonmusicians showed no change in activity in this region. The rTPJ is thought to be part of a ventral attentional network for bottom-up stimulus-driven processing, and it has been postulated that deactivation of this region occurs in order to inhibit attentional shifts toward task-irrelevant stimuli during top-down, goal-driven behavior. We propose that the musicians' deactivation of the rTPJ during melodic improvisation may represent a training-induced shift toward inhibition of stimulus-driven attention, allowing for a more goal-directed performance state that aids in creative thought."}
{"_id": "34b958afff511bdc47d6fed6013da8700659f936", "title": "With a flick of the wrist: stretch sensors as lightweight input for mobile devices", "text": "With WristFlicker, we detect wrist movement through sets of stretch sensors embedded in clothing. Our system supports wrist rotation (pronation/supination), and both wrist tilts (flexion/extension and ulnar/radial deviation). Each wrist movement is measured by two opposing stretch sensors, mimicking the counteracting movement of muscles. We discuss interaction techniques that allow a user to control a music player through this lightweight input."}
{"_id": "212fc5ddeb4416aa7e1435f4c69391d0ad4fb18d", "title": "Hierarchical Neural Language Models for Joint Representation of Streaming Documents and their Content", "text": "We consider the problem of learning distributed representations for documents in data streams. The documents are represented as low-dimensional vectors and are jointly learned with distributed vector representations of word tokens using a hierarchical framework with two embedded neural language models. In particular, we exploit the context of documents in streams and use one of the language models to model the document sequences, and the other to model word sequences within them. The models learn continuous vector representations for both word tokens and documents such that semantically similar documents and words are close in a common vector space. We discuss extensions to our model, which can be applied to personalized recommendation and social relationship mining by adding further user layers to the hierarchy, thus learning user-specific vectors to represent individual preferences. We validated the learned representations on a public movie rating data set from MovieLens, as well as on a large-scale Yahoo News data comprising three months of user activity logs collected on Yahoo servers. The results indicate that the proposed model can learn useful representations of both documents and word tokens, outperforming the current state-of-the-art by a large margin."}
{"_id": "a00ae26413dacb97d01eae007ea8b7bcdbbdf1e6", "title": "IEC 61850 substation configuration language as a basis for automated security and SDN configuration", "text": "IEC61850 has revolutionized the way substations are configured and maintained. Substation Configuration Language (SCL) defines the parameters needed to configure individual devices and combine them into a working system. Security is implemented by IEC62351 and there are potential vulnerabilities. Best practice recommendations are for defense in depth. SCL contains sufficient information to auto-configure network equipment, firewalls, IDS and SDN based networks."}
{"_id": "472e4265895de65961b70779fdfbecafc24079ed", "title": "Learning to Navigate for Fine-Grained Classification", "text": "Fine-grained classification is challenging due to the difficulty of finding discriminative features. Finding those subtle traits that fully characterize the object is not straightforward. To handle this circumstance, we propose a novel self-supervision mechanism to effectively localize informative regions without the need of bounding-box/part annotations. Our model, termed NTS-Net for Navigator-Teacher-Scrutinizer Network, consists of a Navigator agent, a Teacher agent and a Scrutinizer agent. In consideration of intrinsic consistency between informativeness of the regions and their probability being ground-truth class, we design a novel training paradigm, which enables Navigator to detect most informative regions under the guidance from Teacher. After that, the Scrutinizer scrutinizes the proposed regions from Navigator and makes predictions. Our model can be viewed as a multi-agent cooperation, wherein agents benefit from each other, and make progress together. NTS-Net can be trained end-to-end, while provides accurate fine-grained classification predictions as well as highly informative regions during inference. We achieve state-of-the-art performance in extensive benchmark datasets."}
{"_id": "5989fe24effd265b4a8557e1e55bebc4891fb687", "title": "Understanding predictability and exploration in human mobility", "text": "Predictive models for human mobility have important applications in many fields including traffic control, ubiquitous computing, and contextual advertisement. The predictive performance of models in literature varies quite broadly, from over 90% to under 40%. In this work we study which underlying factors\u00a0- in terms of modeling approaches and spatio-temporal characteristics of the data sources\u00a0- have resulted in this remarkably broad span of performance reported in the literature. Specifically we investigate which factors influence the accuracy of next-place prediction, using a high-precision location dataset of more than 400 users observed for periods between 3 months and one year. We show that it is much easier to achieve high accuracy when predicting the time-bin location than when predicting the next place. Moreover, we demonstrate how the temporal and spatial resolution of the data have strong influence on the accuracy of prediction. Finally we reveal that the exploration of new locations is an important factor in human mobility, and we measure that on average 20-25% of transitions are to new places, and approx. 70% of locations are visited only once. We discuss how these mechanisms are important factors limiting our ability to predict human mobility."}
{"_id": "cf6891232d0589ba2c768a4ab243471e0a84f7c4", "title": "Detection of hiding in the least significant bit", "text": "In this paper, we apply the theory of hypothesis testing to the steganalysis, or detection of hidden data, in the least significant bit (LSB) of a host image. The hiding rate (if data is hidden) and host probability mass function (PMF) are unknown. Our main results are as follows. a) Two types of tests are derived: a universal (over choices of host PMF) method that has certain asymptotic optimality properties and methods that are based on knowledge or estimation of the host PMF and, hence, an appropriate likelihood ratio (LR). b) For a known host PMF, it is shown that the composite hypothesis testing problem corresponding to an unknown hiding rate reduces to a worst-case simple hypothesis testing problem. c) Using the results for a known host PMF, practical tests based on the estimation of the host PMF are obtained. These are shown to be superior to the state of the art in terms of receiver operating characteristics as well as self-calibration across different host images. Estimators for the hiding rate are also developed."}
{"_id": "b8e7dfa21aac846cb52848e54a68dd822ced20dd", "title": "An Efficient Algorithm of Frequent Itemsets Mining Based on MapReduce", "text": "Mainstream parallel algorithms for mining frequent itemsets (patterns) were designed by implementing FP-Growth or Apriori algorithms on MapReduce (MR) framework. Existing MR FP-Growth algorithms can not distribute data equally among nodes, and MR Apriori algorithms utilize multiple map/reduce procedures and generate too many key-value pairs with value of 1; these disadvantages hinder their performance. This paper proposes an algorithm FIMMR: it firstly mines local frequent itemsets for each data chunk as candidates, applies prune strategies to the candidates, and then identifies global frequent itemsets from candidates. Experimental results show that the time efficiency of FIMMR outperforms PFP and SPC significantly; and under small minimum support threshold, FIMMR can achieve one order of magnitude improvement than the other two algorithms; meanwhile, the speedup of FIMMR is also satisfactory."}
{"_id": "b95d4c85157158c02d509953405ebe9708fe8612", "title": "Visual Analysis of Public Utility Service Problems in a Metropolis", "text": "Issues about city utility services reported by citizens can provide unprecedented insights into the various aspects of such services. Analysis of these issues can improve living quality through evidence-based decision making. However, these issues are complex, because of the involvement of spatial and temporal components, in addition to having multi-dimensional and multivariate natures. Consequently, exploring utility service problems and creating visual representations are difficult. To analyze these issues, we propose a visual analytics process based on the main tasks of utility service management. We also propose an aggregate method that transforms numerous issues into legible events and provide visualizations for events. In addition, we provide a set of tools and interaction techniques to explore such issues. Our approach enables administrators to make more informed decisions."}
{"_id": "1cfb539a0b332dc6adfb5106ed5aafec39fc93d2", "title": "Bootstrapping Biomedical Ontologies for Scientific Text using NELL", "text": "We describe an open information extraction system for biomedical text based on NELL (the NeverEnding Language Learner) [7], a system designed for extraction from Web text. NELL uses a coupled semi-supervised bootstrapping approach to learn new facts from text, given an initial ontology and a small number of \u201cseeds\u201d for each ontology category. In contrast to previous applications of NELL, in our task the initial ontology and seeds are automatically derived from existing resources. We show that NELL\u2019s bootstrapping algorithm is susceptible to ambiguous seeds, which are frequent in the biomedical domain. Using NELL to extract facts from biomedical text quickly leads to semantic drift. To address this problem, we introduce a method for assessing seed quality, based on a larger corpus of data derived from the Web. In our method, seed quality is assessed at each iteration of the bootstrapping process. Experimental results show significant improvements over NELL\u2019s original bootstrapping algorithm on two types of tasks: learning terms from biomedical categories, and named-entity recognition for biomedical entities using a learned lexicon."}
{"_id": "45bb1e9e1333824bad34c9276e9326161472f985", "title": "Exploring Recurrent Neural Networks to Detect Named Entities from Biomedical Text", "text": "Biomedical named entity recognition (bio-NER) is a crucial and basic step in many biomedical information extraction tasks. However, traditional NER systems are mainly based on complex hand-designed features which are derived from various linguistic analyses and maybe only adapted to specified area. In this paper, we construct Recurrent Neural Network to identify entity names with word embeddings input rather than hand-designed features. Our contributions mainly include three aspects: 1) we adapt a deep learning architecture Recurrent Neural Network (RNN) to entity names recognition; 2) based on the original RNNs such as Elman-type and Jordan-type model, an improved RNN model is proposed; 3) considering that both past and future dependencies are important information, we combine bidirectional recurrent neural networks based on information entropy at the top layer. The experiments conducted on the BioCreative II GM data set demonstrate RNN models outperform CRF and deep neural networks (DNN), furthermore, the improved RNN model performs better than two original RNN models and the combined method is effective."}
{"_id": "49b2155cddfa659335bcdca18dcb52984a39bc5e", "title": "Extracting Complex Biological Events with Rich Graph-Based Feature Sets", "text": "We describe a system for extracting complex events among genes and proteins from biomedical literature, developed in context of the BioNLP\u201909 Shared Task on Event Extraction. For each event, its text trigger, class, and arguments are extracted. In contrast to the prevailing approaches in the domain, events can be arguments of other events, resulting in a nested structure that better captures the underlying biological statements. We divide the task into independent steps which we approach as machine learning problems. We define a wide array of features and in particular make extensive use of dependency parse graphs. A rule-based post-processing step is used to refine the output in accordance with the restrictions of the extraction task. In the shared task evaluation, the system achieved an F-score of 51.95% on the primary task, the best performance among the participants."}
{"_id": "c999a3a9551dd679ba56c1fbf0d2ea3b3e94162b", "title": "Distant Supervision for Relation Extraction via Piecewise Convolutional Neural Networks", "text": "Two problems arise when using distant supervision for relation extraction. First, in this method, an already existing knowledge base is heuristically aligned to texts, and the alignment results are treated as labeled data. However, the heuristic alignment can fail, resulting in wrong label problem. In addition, in previous approaches, statistical models have typically been applied to ad hoc features. The noise that originates from the feature extraction process can cause poor performance. In this paper, we propose a novel model dubbed the Piecewise Convolutional Neural Networks (PCNNs) with multi-instance learning to address these two problems. To solve the first problem, distant supervised relation extraction is treated as a multi-instance problem in which the uncertainty of instance labels is taken into account. To address the latter problem, we avoid feature engineering and instead adopt convolutional architecture with piecewise max pooling to automatically learn relevant features. Experiments show that our method is effective and outperforms several competitive baseline methods."}
{"_id": "0ecb33ced5b0976accdf13817151f80568b6fdcb", "title": "Coarse-to-Fine n-Best Parsing and MaxEnt Discriminative Reranking", "text": "Discriminative reranking is one method for constructing high-performance statistical parsers (Collins, 2000). A discriminative reranker requires a source of candidate parses for each sentence. This paper describes a simple yet novel method for constructing sets of 50-best parses based on a coarse-to-fine generative parser (Charniak, 2000). This method generates 50-best lists that are of substantially higher quality than previously obtainable. We used these parses as the input to a MaxEnt reranker (Johnson et al., 1999; Riezler et al., 2002) that selects the best parse from the set of parses for each sentence, obtaining an f-score of 91.0% on sentences of length 100 or less."}
{"_id": "8fb2c6f3cf3c872221fc838a0d36c32517081cf4", "title": "Requirements of Quality of Service in Wireless Sensor Network", "text": "Sensor networks are distributed networks made up of small sensing devices equipped with processors, memory, and short-range wireless communication. They differ from traditional computer networks in that they have resource constraints, unbalanced mixture traffic, data redundancy, network dynamics, and energy balance. Work within wireless sensor networks (WSNs) Quality of service (QoS) has been isolated and specific either on certain functional layers or application scenarios. However the area of sensor network quality of service (QoS) remains largely open. In this paper we define WSNs QoS requirements within a WSNs application, and then analyzing Issues for QoS Monitoring."}
{"_id": "367e75211fd27cbdd2fca17360eae8d907f99d62", "title": "The critical role of retrieval practice in long-term retention", "text": "Learning is usually thought to occur during episodes of studying, whereas retrieval of information on testing simply serves to assess what was learned. We review research that contradicts this traditional view by demonstrating that retrieval practice is actually a powerful mnemonic enhancer, often producing large gains in long-term retention relative to repeated studying. Retrieval practice is often effective even without feedback (i.e. giving the correct answer), but feedback enhances the benefits of testing. In addition, retrieval practice promotes the acquisition of knowledge that can be flexibly retrieved and transferred to different contexts. The power of retrieval practice in consolidating memories has important implications for both the study of memory and its application to educational practice."}
{"_id": "bc3cd07c3745ac3e11fde10d314cda978c3acb58", "title": "An analysis of unreliability and asymmetry in low-power wireless links", "text": "Experimental studies have demonstrated that the behavior of real links in low-power wireless networks (such as wireless sensor networks) deviates to a large extent from the ideal binary model used in several simulation studies. In particular, there is a large transitional region in wireless link quality that is characterized by significant levels of unreliability and asymmetry, significantly impacting the performance of higher-layer protocols. We provide a comprehensive analysis of the root causes of unreliability and asymmetry. In particular, we derive expressions for the distribution, expectation, and variance of the packet reception rate as a function of distance, as well as for the location and extent of the transitional region. These expressions incorporate important environmental and radio parameters such as the path loss exponent and shadowing variance of the channel, and the modulation, encoding, and hardware variance of the radios."}
{"_id": "9fbbc36ccfd38fe264e3e0dd6e1e142388e62753", "title": "An LED Driver With Pulse Current Driving Technique", "text": "A novel driving technique, called pulse current driving technique for LED driver, is proposed. A complete introduction and analysis of the proposed driving technique and the conventional driving technique will be presented in this paper. The LED driver supply maximum peak current is 220 mA and operating frequency is between 500 kHz and1 MHz. Simulation and experimental results are presented to verify the analytical results. The LED driver is fabricated with CMOS 0.35-\u03bcm technology and chip area with pads is 0.89 mm 2. The design of the proposed circuit structure is simple, without additional output capacitance compensation for stability and with small power consumption, and is implemented in integrated circuit. As a result of the above benefits, the proposed approach is believed to be suitable in portable application."}
{"_id": "4a011ce69bdc4d9f223bb6f8ca1f479afdd0c256", "title": "Flow Motifs in Soccer : What can passing behavior tell us ?", "text": "In soccer, both individual and team performance is crucial to win matches. Passing is the backbone of the game and forms the basis of important decisions made by managers and owners; such as buying players, picking offensive or defensive strategies or even defining a style of play. These decisions can be supported by analyzing how a player performs and how his style affects team performance. The flow of a player or a team can be studied by finding unique passing motifs from the patterns in the subgraphs of a possession-passing network of soccer games. These flow motifs can be used to analyze individual players and teams based on the diversity and frequency of their involvement in different motifs. Building on the flow motif analyses, we introduce an expected goals model to measure the effectiveness of each style of play. We also make use of a novel way to represent motif data that is easy to understand and can be used to compare players, teams and seasons. Further, we exploit the relationship between play style and the pass probability matrix to support our analysis. Our data set has the last 4 seasons of 6 big European leagues with 8219 matches, 3532 unique players and 155 unique teams. We will use flow motifs to analyze different events, such as for example the transfer of Claudio Bravo to Pep Guardiola\u2019s Manchester City, who Jean Seri is and why he must be an elite midfielder and the difference in attacking style between Lionel Messi and Cristiano Ronaldo. Ultimately, an analysis of Post-F\u00e0bregas Arsenal is conducted wherein different techniques are combined to analyze the impact the acquisition of Mesut \u00d6zil and Alexis S\u00e1nchez had on the strategies implemented at Arsenal."}
{"_id": "4bf081cd8262605fefd45e49138740e85fa826fc", "title": "An Alternative to NCD for Large Sequences, Lempel-Ziv Jaccard Distance", "text": "The Normalized Compression Distance (NCD) has been used in a number of domains to compare objects with varying feature types. This flexibility comes from the use of general purpose compression algorithms as the means of computing distances between byte sequences. Such flexibility makes NCD particularly attractive for cases where the right features to use are not obvious, such as malware classification. However, NCD can be computationally demanding, thereby restricting the scale at which it can be applied. We introduce an alternative metric also inspired by compression, the Lempel-Ziv Jaccard Distance (LZJD). We show that this new distance has desirable theoretical properties, as well as comparable or superior performance for malware classification, while being easy to implement and orders of magnitude faster in practice."}
{"_id": "4eb49bb21a22cf4b8be636508d3e456764b6092d", "title": "Number 693 December 2000 ON THE EFFECT OF THE INTERNET ON INTERNATIONAL TRADE", "text": "The Internet stimulates trade. Using a gravity equation of trade among 56 countries, we find no evidence of an effect of the Internet on total trade flows in 1995 and only weak evidence of an effect in 1996. However, we find an increasing and significant impact from 1997 to 1999. Specifically, our results imply that a 10 percent increase in the relative number of web hosts in one country would have led to about 1 percent greater trade in 1998 and 1999. Surprisingly, we find that the effect of the Internet on trade has been stronger for poor countries than for rich countries, and that there is little evidence that the Internet has reduced the impact of distance on trade. The evidence is consistent with a model in which the Internet creates a global exchange for goods, thereby reducing market-specific sunk costs of exporting."}
{"_id": "f4ce72174cd7bc936ca16430b5f502052ebca64d", "title": "Impact of IEEE 802.11n/ac PHY/MAC High Throughput Enhancements over Transport/Application Layer Protocols - A Survey", "text": "Since the inception of Wireless Local Area Networks (WLANs) in the year 1997, it has tremendously grown in the last few years. IEEE 802.11 is popularly known as WLAN. To provide the last mile wireless broadband connectivity to users, IEEE 802.11 is enriched with IEEE 802.11a, IEEE 802.11b and IEEE 802.11g. More recently, IEEE 802.11n, IEEE 802.11ac and IEEE 802.11ad are introduced with enhancements to the physical (PHY) layer and medium access control (MAC) sublayer to provide much higher data rates and thus these amendments are called High Throughput WLANs (HT-WLANs). In IEEE 802.11n, the data rate has increased up to 600 Mbps, whereas IEEE 802.11ac/ad is expected to support a maximum throughput of 1 to 7 Gbps over wireless media. For both standards, PHY is enhanced with multiple-input multiple-output (MIMO) antenna technologies, channel bonding, short guard intervals (SGI), enhanced modulation and coding schemes (MCS). At the same time, MAC layer overhead is reduced by introducing frame aggregation and block acknowledgement technologies. However, existing studies reveal that although PHY and MAC enhancements promise to improve physical data rate significantly, they yield negative impact over upper layer protocols \u2013 mainly for reliable end-to-end transport/application layer protocols. As a consequence, a large number of schools have focused researches on HT-WLANs to improve the coordination among PHY/MAC and upper layer protocols and thus, boost up the performance benefit. In this survey, we discuss the impact of enhancements of PHY/MAC layer in HT-WLANs over transport/application layer protocols. Several measures have been reported in the literature that boost up data rate of WLANs and use aforesaid enhancements effectively for performance benefit of end-to-end protocols. We also point out limitations of existing researches and list down different open challenges that can be explored for the development of next generation HT-WLAN technologies. Keywords-IEEE 802.11n, IEEE 802.11ac, MIMO, MU-MIMO, Channel bonding, Short-guard interval (SGI), Frame aggregation, Block Acknowledgement, TCP/UDP Throughput."}
{"_id": "dd765e6ce51b3af890cbf7e61b999679317e6cb7", "title": "A Lower-Back Robotic Exoskeleton: Industrial Handling Augmentation Used to Provide Spinal Support", "text": "A lower-back exoskeleton prototype designed to provide back support for industrial workers who manually handle heavy materials is presented in this article. Reducing spinal loads during these tasks can reduce the risk of work-related back injuries. Biomechanical studies show that compression of the lumbar spine is a key risk factor for musculoskeletal injuries. To address this issue, we present a wearable exoskeleton designed to provide back support and reduce lumbar spine compression. To provide effective assistance and avoid injury to muscles or tendons, we apply a continuous torque of approximately 40 Nm on both hip joints to actively assist both hip abduction/adduction (HAA) and hip flexion/extension (HFE). Each actuation unit includes a modular and a compact series-elastic actuator (SEA) with a clutch. The SEA provides mechanical compliance at the interface between the exoskeleton and the user, and the clutches can automatically disengage the torque between the exoskeleton and the user. These experimental results show that the exoskeleton can lower lumbar compression by reducing the need for muscular activity in the spine. Furthermore, powering both HFE and HAA can effectively reduce the lumbar spinal loading user experience when lifting and lowering objects while in a twisted posture."}
{"_id": "c759c1f4376e322943d8a463064367fcee5f4eb6", "title": "Scalable Gaussian Process Regression Using Deep Neural Networks", "text": "We propose a scalable Gaussian process model for regression by applying a deep neural network as the feature-mapping function. We first pre-train the deep neural network with a stacked denoising auto-encoder in an unsupervised way. Then, we perform a Bayesian linear regression on the top layer of the pre-trained deep network. The resulting model, Deep-Neural-Network-based Gaussian Process (DNN-GP), can learn much more meaningful representation of the data by the finite-dimensional but deep-layered feature-mapping function. Unlike standard Gaussian processes, our model scales well with the size of the training set due to the avoidance of kernel matrix inversion. Moreover, we present a mixture of DNN-GPs to further improve the regression performance. For the experiments on three representative large datasets, our proposed models significantly outperform the state-of-the-art algorithms of Gaussian process regression."}
{"_id": "d5bed3840cbb7e29a89847c21b697609607d12d2", "title": "Modeling and controller design for a voice coil actuated engine valve", "text": "This paper presents a physical model of a spark ignition (SI) engine valve actuated by a voice coil actuator (VCA) and proposes a novel control strategy that achieves fast transitions and low seating velocities with high precision position control. The control strategy is based on applying two different pulse width modulated (PWM) voltages for transient and steady state performance. This novel switching control strategy performs better than current strategies using electromechanical valve actuators to implement variable valve timing (VVT). The main improvement over electromechanical actuators is the fact that the novel control strategy achieves the conflicting performance requirements of very fast transition times while simultaneously exhibiting low contact velocities"}
{"_id": "9cd1916541b886da7f224ed8cfa630a1c05d29a2", "title": "Extension of heuristic evaluation method: a review and reappraisal", "text": "As one of the major discount methods, the heuristic evaluation method (HE) is the most commonly used usability inspection method. We introduced the history and procedure of this method, as well as its strengths and weaknesses. We then reviewed the applications of this method to different Human-Computer Interaction systems with the adapted heuristic sets. We also reviewed many studies that extended the traditional HE method in different ways. Finally, the paper ends with a reappraisal of these extension methods and future research direction to improve the HE method."}
{"_id": "4637f6625e7f40d3984c8a2a7a76bbd64f47bc04", "title": "On-line Active Reward Learning for Policy Optimisation in Spoken Dialogue Systems", "text": "The ability to compute an accurate reward function is essential for optimising a dialogue policy via reinforcement learning. In real-world applications, using explicit user feedback as the reward signal is often unreliable and costly to collect. This problem can be mitigated if the user\u2019s intent is known in advance or data is available to pre-train a task success predictor off-line. In practice neither of these apply for most real world applications. Here we propose an on-line learning framework whereby the dialogue policy is jointly trained alongside the reward model via active learning with a Gaussian process model. This Gaussian process operates on a continuous space dialogue representation generated in an unsupervised fashion using a recurrent neural network encoder-decoder. The experimental results demonstrate that the proposed framework is able to significantly reduce data annotation costs and mitigate noisy user feedback in dialogue policy learning."}
{"_id": "4855d08ac06282b075327961c0485aeee4898f6b", "title": "The Analysis of the Disease Spectrum in China", "text": "Analysis of the related risks of disease provides a scientific basis for disease prevention and treatment, hospital management, and policy formulation by the changes in disease spectrum of patients in hospital. Retrospective analysis was made to the first diagnosis, age, gender, daily average cost of hospitalized patients, and other factors in the First Affiliated Hospital of Nanjing Medical University during 2006-2013. The top 4 cases were as follows: cardiovascular disease, malignant tumors, lung infections, and noninsulin dependent diabetes mellitus. By the age of disease analysis, we found a younger age trend of cardiovascular disease, and the age of onset of cancer or diabetes was somewhat postponed. The average daily cost of hospitalization and the average daily cost of the main noncommunicable diseases were both on the rise. Noncommunicable diseases occupy an increasingly important position in the constitution of the disease, and they caused an increasing medical burden. People should pay attention to health from the aspects of lifestyle changing. Hospitals should focus on building the appropriate discipline. On the other hand, an integrated government response is required to tackle key risks. Multiple interventions are needed to lower the burden of these diseases and to improve national health."}
{"_id": "ddaa3e20c1cd31c8f511bdaf5d84a80aa6660737", "title": "Mechanics and energetics of level walking with powered ankle exoskeletons.", "text": "Robotic lower limb exoskeletons that can alter joint mechanical power output are novel tools for studying the relationship between the mechanics and energetics of human locomotion. We built pneumatically powered ankle exoskeletons controlled by the user's own soleus electromyography (i.e. proportional myoelectric control) to determine whether mechanical assistance at the ankle joint could reduce the metabolic cost of level, steady-speed human walking. We hypothesized that subjects would reduce their net metabolic power in proportion to the average positive mechanical power delivered by the bilateral ankle exoskeletons. Nine healthy individuals completed three 30 min sessions walking at 1.25 m s(-1) while wearing the exoskeletons. Over the three sessions, subjects' net metabolic energy expenditure during powered walking progressed from +7% to -10% of that during unpowered walking. With practice, subjects significantly reduced soleus muscle activity (by approximately 28% root mean square EMG, P<0.0001) and negative exoskeleton mechanical power (-0.09 W kg(-1) at the beginning of session 1 and -0.03 W kg(-1) at the end of session 3; P=0.005). Ankle joint kinematics returned to similar patterns to those observed during unpowered walking. At the end of the third session, the powered exoskeletons delivered approximately 63% of the average ankle joint positive mechanical power and approximately 22% of the total positive mechanical power generated by all of the joints summed (ankle, knee and hip) during unpowered walking. Decreases in total joint positive mechanical power due to powered ankle assistance ( approximately 22%) were not proportional to reductions in net metabolic power ( approximately 10%). The ;apparent efficiency' of the ankle joint muscle-tendon system during human walking ( approximately 0.61) was much greater than reported values of the ;muscular efficiency' of positive mechanical work for human muscle ( approximately 0.10-0.34). High ankle joint ;apparent efficiency' suggests that recoiling Achilles' tendon contributes a significant amount of ankle joint positive power during the push-off phase of walking in humans."}
{"_id": "5d3684bb22e98f1f4d8480c3f90ccb36543f2290", "title": "Comparison among three analytical methods for knowledge communities group-decision analysis", "text": "Knowledge management can greatly facilitate an organization\u2019s learning via strategic insight. Assessing the achievements of knowledge communities (KC) includes both a theoretical basis and practical aspect; however, a cautionary word is in order, because using improper measurements will increase complexity and reduce applicability. Group decision-making, the essence of knowledge communities, lets one considers multi-dimensional problems for the decision-maker, sets priorities for each decision factor, and assesses rankings for all alternatives. The purpose of this study is to establish the objective and measurable patterns to obtain anticipated achievements of KC through conducting a group-decision comparison. The three multiple-criteria decision-making methods we used, simple average weight (SAW), \u2018\u2018Technique for Order Preference by Similarity to an Ideal Solution\u2019\u2019 (TOPSIS) and \u2018\u2018VlseKriterijumska Optimizacija I Kompromisno Resenje\u2019\u2019 (VIKOR), are based on an aggregating function representing \u2018\u2018closeness to the ideal point\u2019\u2019. The TOPSIS and VIKOR methods were used to highlight our innovative idea, academic analysis, and practical appliance value. Simple average weight (SAW) is known to be a common method to get the preliminary outcome. Our study provides a comparison analysis of the above-three methods. An empirical case is illustrated to demonstrate the overall KC achievements, showing their similarities and differences to achieve group decisions. Our results showed that TOPSIS and simple average weight (SAW) had identical rankings overall, but TOPSIS had better distinguishing capability. TOPSIS and VIKOR had almost the same success setting priorities by weight. However, VIKOR produced different rankings than those from TOPSIS and SAW, and VIKOR also made it easy to choose appropriate strategies. Both the TOPSIS and VIKOR methods are suitable for assessing similar problems, provide excellent results close to reality, and grant superior analysis. 2006 Elsevier Ltd. All rights reserved."}
{"_id": "3f63b12887fd8d29bb6de64fce506b2a388ae3ed", "title": "E-commerce customer churn prediction based on improved SMOTE and AdaBoost", "text": "E-commerce customer churn rate is high and the customer churn dataset is seriously imbalanced. In order to improve the prediction accuracy of churn customers as well as strengthen to identify non-churn customers, this paper presents e-commerce customer churn prediction model based on improved SMOTE and AdaBoost. First, processing the churn data with improved SMOTE, which combines oversampling and undersampling methods to address the imbalance problem and subsequently integrates AdaBoost algorithm to predict. Finally, the empirical study on B2C E-commerce platform proves that this model has better efficiency and accuracy compared with the mature customer churn prediction algorithms."}
{"_id": "1432d0c3a4b96bc3af9f83d65d77be2fb1046fb6", "title": "Dynamic visual attention: searching for coding length increments", "text": "A visual attention system should respond placidly when comm n stimuli are presented, while at the same time keep alert to anomalous visual inputs. In this paper, a dynamic visual attention model based on the rarity of featu r s is proposed. We introduce the Incremental Coding Length (ICL) to measure th e perspective entropy gain of each feature. The objective of our model is to ma xi ize the entropy of the sampled visual features. In order to optimize energy c onsumption, the limit amount of energy of the system is re-distributed among st features according to their Incremental Coding Length. By selecting featur es with large coding length increments, the computational system can achieve at tention selectivity in both static and dynamic scenes. We demonstrate that the prop osed model achieves superior accuracy in comparison to mainstream approaches i n stat c saliency map generation. Moreover, we also show that our model captures s everal less-reported dynamic visual search behaviors, such as attentional swing and inhibition of return."}
{"_id": "284b18d7196f608448ca3d9496bf220b1dfffcf5", "title": "Deep Boltzmann Machines", "text": "We present a new learning algorithm for Boltzmann machines that contain many layers of hidden variables. Data-dependent expectations are estimated using a variational approximation that tends to focus on a single mode, and dataindependent expectations are approximated using persistent Markov chains. The use of two quite different techniques for estimating the two types of expectation that enter into the gradient of the log-likelihood makes it practical to learn Boltzmann machines with multiple hidden layers and millions of parameters. The learning can be made more efficient by using a layer-by-layer \u201cpre-training\u201d phase that allows variational inference to be initialized with a single bottomup pass. We present results on the MNIST and NORB datasets showing that deep Boltzmann machines learn good generative models and perform well on handwritten digit and visual object recognition tasks."}
{"_id": "519feb1f3c23baea6960dfa204521f96a74b82bb", "title": "CAT2000: A Large Scale Fixation Dataset for Boosting Saliency Research", "text": "Saliency modeling has been an active research area in computer vision for about two decades. Existing state of the art models perform very well in predicting where people look in natural scenes. There is, however, the risk that these models may have been overfitting themselves to available small scale biased datasets, thus trapping the progress in a local minimum. To gain a deeper insight regarding current issues in saliency modeling and to better gauge progress, we recorded eye movements of 120 observers while they freely viewed a large number of naturalistic and artificial images. Our stimuli includes 4000 images; 200 from each of 20 categories covering different types of scenes such as Cartoons, Art, Objects, Low resolution images, Indoor, Outdoor, Jumbled, Random, and Line drawings. We analyze some basic properties of this dataset and compare some successful models. We believe that our dataset opens new challenges for the next generation of saliency models and helps conduct behavioral studies on bottom-up visual attention."}
{"_id": "eb2897f5aec62135da5ac896bf6c8ed23a27caca", "title": "Calculating Completeness of Agile Scope in Scaled Agile Development", "text": "Flexible nature of scope definition in agile makes it difficult or impossible to measure its completeness and quality. The aim of this paper is to highlight the important ingredients of scope definition for agile projects and to present a method for agile projects in order to measure the quality and completeness of their scope definitions. The proposed method considers the elements that are retrieved as a result of the systematic literature review. An industrial survey is conducted to validate and prioritize these elements. Elements are then assigned weights according to their importance in scope definition to build a scorecard for calculating the score of user stories present in the product backlog. The proposed method is able to identify the clear and complete user stories that can better be implemented in the coming iteration. Formal experiments are performed for the evaluation of the proposed method, and it suggests that the method is useful for experts in order to quantify the completeness and quality of scope definition of an agile software project."}
{"_id": "9f6fa9677c015a1d7078f5bacda9e51c3fc0397a", "title": "User Profile Based Research Paper Recommendation", "text": "Recommender Systems (RSs) are software tools and techniques providing suggestions for items to be of use to a user. [1] For a recommender system to be effective, it has to have a comprehensive and appropriately catalogued repository of resources. Also, it has to be accurate in understanding the need of the user, and basically has to correctly profile the user. Ideally, one needs to take into consideration the fact that a user\u2019s preferences are constantly changing and that is often not done by recommender systems. To meet this need, we propose an interactive research paper recommender system that observes the various themes that occur in a researcher\u2019s work over time and keeps learning the user\u2019s preferences constantly based on the user\u2019s feedback of the papers recommended to them. We use a topic model to take into account the themes occurring in the document. The query is modeled as a bag of topics, where topics represent latent generative clusters of relative words. The entire collection or research papers is also modeled this way. We then estimate the similarity between the query and each research paper using a bag-of-topics based similarity measure and find the best ones. To take into account the users preferences, we keep track of the papers which the user likes and augment the query with the topics which recur in a lot of the users preferred papers the next time recommendations are required. We also truncate the topics which the user seems to ignore."}
{"_id": "419ba9915c21bb56402a89e4d4612fb77ed6da2f", "title": "Software Design and Architecture The once and future focus of software engineering", "text": "The design of software has been a focus of software engineering research since the field's beginning. This paper explores key aspects of this research focus and shows why design will remain a principal focus. The intrinsic elements of software design, both process and product, are discussed: concept formation, use of experience, and means for representation, reasoning, and directing the design activity. Design is presented as being an activity engaged by a wide range of stakeholders, acting throughout most of a system's lifecycle, making a set of key choices which constitute the application's architecture. Directions for design research are outlined, including: (a) drawing lessons, inspiration, and techniques from design fields outside of computer science, (b) emphasizing the design of application \"character\" (functionality and style) as well as the application's structure, and (c) expanding the notion of software to encompass the design of additional kinds of intangible complex artifacts."}
{"_id": "c26a8ab9f2b31a3c4a9b86afccd579b0b73e81db", "title": "Detecting Targeted Smartphone Malware with Behavior-Triggering Stochastic Models", "text": "Malware for current smartphone platforms is becoming increasingly sophisticated. The presence of advanced networking and sensing functions in the device is giving rise to a new generation of targeted malware characterized by a more situational awareness, in which decisions are made on the basis of factors such as the device location, the user profile, or the presence of other apps. This complicates behavioral detection, as the analyst must reproduce very specific activation conditions in order to trigger malicious payloads. In this paper, we propose a system that addresses this problem by relying on stochastic models of usage and context events derived from real user traces. By incorporating the behavioral particularities of a given user, our scheme provides a solution for detecting malware targeting such a specific user. Our results show that the properties of these models follow a power-law distribution: a fact that facilitates an efficient generation of automatic testing patterns tailored for individual users, when done in conjunction with a cloud infrastructure supporting device cloning and parallel testing. We report empirical results with various representative case studies, demonstrating the effectiveness of this approach to detect complex activation patterns."}
{"_id": "22e584677475a4a807b852a1ced75d5cdf24e23c", "title": "Anaphoric Annotation in the ARRAU Corpus", "text": "Arrau is a new corpus annotated for anaphoric relations, with information about agreement and explicit representation of multiple antecedents for ambiguous anaphoric expressions and discourse antecedents for expressions which refer to abstract entities such as events, actions and plans. The corpus contains texts from different genres: task-oriented dialogues from the Trains-91 and Trains-93 corpus, narratives from the English Pear Stories corpus, newspaper articles from the Wall Street Journal portion of the Penn Treebank, and mixed text from the Gnome corpus."}
{"_id": "162d50e6e2c000baf10148f761cc0929aad48ca2", "title": "Index Selection for OLAP", "text": "On-line analytical processing (OLAP) is a recent and important application of database systems. Typically, OLAP data is presented as a multidimensional \\data cube.\" OLAP queries are complex and can take many hours or even days to run, if executed directly on the raw data. The most common method of reducing execution time is to precompute some of the queries into summary tables (subcubes of the data cube) and then to build indexes on these summary tables. In most commercial OLAP systems today, the summary tables that are to be precomputed are picked rst, followed by the selection of the appropriate indexes on them. A trial-and-error approach is used to divide the space available between the summary tables and the indexes. This two-step process can perform very poorly. Since both summary tables and indexes consume the same resource |space | their selection should be done together for the most e cient use of space. In this paper, we give algorithms that automate the selection of summary tables and indexes. In particular, we present a family of algorithms of increasing time complexities, and prove strong performance bounds for them. The algorithms with higher complexities have better performance bounds. However, the increase in the performance bound is diminishing, and we show that an algorithm of moderate complexity can perform fairly close to the optimal. This work was supported by NSF grant IRI{92{23405, ARO grant DAAH04{95{1{0192, and Air Force Contract F33615{93{ 1{1339. Present address of V. Harinarayan and A. Rajaraman: Junglee Corp., Palo Alto, CA."}
{"_id": "411a22de263daebb60aaae6da713f345e72e20a5", "title": "My First Deep Learning System of 1991 + Deep Learning Timeline 1962-2013", "text": "Deep Learning has attracted significant attention in recent years. Here I present a brief overview of my first Deep Learner of 1991, and its historic context, with a timeline of Deep Learning highlights. Note: As a machine learning researcher I am obsessed with proper credit assignment. This draft is the result of an experiment in rapid massive open online peer review. Since 20 September 2013, subsequent revisions published under www.deeplearning.me have absorbed many suggestions for improvements by experts. The abbreviation \u201cTL\u201d is used to refer to subsections of the timeline section. Figure 1: My first Deep Learning system of 1991 used a deep stack of recurrent neural networks (a Neural Hierarchical Temporal Memory) pre-trained in unsupervised fashion to accelerate subsequent supervised learning [79, 81, 82]. 1 ar X iv :1 31 2. 55 48 v1 [ cs .N E ] 1 9 D ec 2 01 3 In 2009, our Deep Learning Artificial Neural Networks became the first Deep Learners to win official international pattern recognition competitions [40, 83] (with secret test sets known only to the organisers); by 2012 they had won eight of them (TL 1.13), including the first contests on object detection in large images (ICPR 2012) [2, 16] and image segmentation (ISBI 2012) [3, 15]. In 2011, they achieved the world\u2019s first superhuman visual pattern recognition results [20, 19]. Others have implemented very similar techniques, e.g., [51], and won additional contests or set benchmark records since 2012, e.g., (TL 1.13, TL 1.14). The field of Deep Learning research is far older though\u2014compare the timeline (TL) further down. My first Deep Learner dates back to 1991 [79, 81, 82] (TL 1.7). It can perform credit assignment across hundreds of nonlinear operators or neural layers, by using unsupervised pre-training for a stack of recurrent neural networks (RNN) (deep by nature) as in Figure 1. Such RNN are general computers more powerful than normal feedforward NN, and can encode entire sequences of input vectors. The basic idea is still relevant today. Each RNN is trained for a while in unsupervised fashion to predict its next input. From then on, only unexpected inputs (errors) convey new information and get fed to the next higher RNN which thus ticks on a slower, self-organising time scale. It can easily be shown that no information gets lost. It just gets compressed (much of machine learning is essentially about compression). We get less and less redundant input sequence encodings in deeper and deeper levels of this hierarchical temporal memory, which compresses data in both space (like feedforward NN) and time. There also is a continuous variant [84]. One ancient illustrative Deep Learning experiment of 1993 [82] required credit assignment across 1200 time steps, or through 1200 subsequent nonlinear virtual layers. The top level code of the initially unsupervised RNN stack, however, got so compact that (previously infeasible) sequence classification through additional supervised learning became possible. There is a way of compressing higher levels down into lower levels, thus partially collapsing the hierarchical temporal memory. The trick is to retrain lower-level RNN to continually imitate (predict) the hidden units of already trained, slower, higher-level RNN, through additional predictive output neurons [81, 79, 82]. 
This helps the lower RNN to develop appropriate, rarely changing memories that may bridge very long time lags. The Deep Learner of 1991 was a first way of overcoming the Fundamental Deep Learning Problem identified and analysed in 1991 by my very first student (now professor) Sepp Hochreiter (TL 1.6): the problem of vanishing or exploding gradients [46, 47, 10]. The latter motivated all our subsequent Deep Learning research of the 1990s and 2000s. Through supervised LSTM RNN (1997) (e.g., [48, 32, 39, 36, 37, 40, 38, 83], TL 1.8) we could eventually perform similar feats as with the 1991 system [81, 82], overcoming the Fundamental Deep Learning Problem without any unsupervised pre-training. Moreover, LSTM could also learn tasks unlearnable by the partially unsupervised 1991 chunker [81, 82]. Particularly successful are stacks of LSTM RNN [40] trained by Connectionist Temporal Classification (CTC) [36]. On faster computers of 2009, this became the first RNN system ever to win an official international pattern recognition competition [40, 83], through the work of my PhD student and postdoc Alex Graves, e.g., [40]. To my knowledge, this also was the first Deep Learning system ever (recurrent or not) to win such a contest (TL 1.10). (In fact, it won three different ICDAR 2009 contests on connected handwriting in three different languages, e.g., [83, 40], TL 1.10.) A while ago, Alex moved on to Geoffrey Hinton\u2019s lab (Univ. Toronto), where a stack [40] of our bidirectional LSTM RNN [39] also broke a famous TIMIT speech recognition record [38] (TL 1.14), despite thousands of man years previously spent on HMM-based speech recognition research. CTC-LSTM also helped to score first at NIST\u2019s OpenHaRT2013 evaluation [11]. Recently, well-known entrepreneurs also got interested [43, 52] in such hierarchical temporal memories [81, 82] (TL 1.7). The expression Deep Learning actually got coined relatively late, around 2006, in the context of unsupervised pre-training for less general feedforward networks [44] (TL 1.9). Such a system reached 1.2% error rate [44] on the MNIST handwritten digits [54, 55], perhaps the most famous benchmark of Machine Learning. Our team first showed that good old backpropagation (TL 1.2) on GPUs (with training pattern distortions [6, 86] but without any unsupervised pre-training) can actually achieve a three times better result of 0.35% [17] back then, a world record (a previous standard net achieved 0.7% [86]; a backprop-trained [54, 55] Convolutional NN (CNN or convnet) [29, 30, 54, 55] got 0.39% [70](TL 1.9);"}
{"_id": "0de651bb9e647683eb06d9362a2690a6b3ad26a3", "title": "Event-Centric Clustering of News Articles", "text": "Entertainity AB plans to build a news service to provide news to end-users in an innovative way. The service must include a way to automatically group series of news from different sources and publications, based on the stories they are covering. This thesis include three contributions: a survey of known clustering methods, an evaluation of human versus human results when grouping news articles in an event-centric manner, and last an evaluation of an incremental clustering algorithm to see if it is possible to consider a reduced input size and still get a sufficient result. The conclusions are that the result of the human evaluation indicates that users are different enough to warrant a need to take that into account when evaluating algorithms. It is also important that this difference is considered when conducting cluster analysis to avoid overfitting. The evaluation of an incremental event-centric algorithm shows it is desirable to adjust the similarity threshold, depending on what result one want. When running tests with different input sizes, the result implies that a short summary of a news article is a natural feature selection when performing cluster analysis."}
{"_id": "5a886f4eb71afadbf04e9ee38643eb9f9143a0ac", "title": "Can Invisible Watermarks Resolve Rightful Ownerships?", "text": "Digital watermarks have been proposed in recent literature as the means for copyright protection of multimedia data. In this paper we address the capability of invisible watermarking schemes to resolve copyright ownerships. We will show that rightful ownerships cannot be resolved by current watermarking schemes alone. In addition, in the absence of standardization of watermarking procedures, anyone can claim ownership of any watermarked image. Specifically, we provide counterfeit watermarking schemes that can be performed on a watermarked image to allow multiple claims of rightful ownerships. We also propose non-invertible watermarking schemes in this paper and discuss in general the usefulness of digital watermarks in identifying the rightful copyright owners. The results, coupled with the recent attacks on some image watermarks, further imply that we have to carefully re-think our approaches to invisible watermarking of images, and re-evaluate the promises, applications and limitations of such digital means of copyright protection."}
{"_id": "60d188dd81d30d1b959a0a1ff8c0b0d768025dc7", "title": "Reflection from layered surfaces due to subsurface scattering", "text": "The reflection of light from most materials consists of two major terms: the specular and the diffuse. Specular reflection may be modeled from first principles by considering a rough surface consisting of perfect reflectors, or micro-facets. Diffuse reflection is generally considered to result from multiple scattering either from a rough surface or from within a layer near the surface. Accounting for diffuse reflection by Lambert\u2019s Cosine Law, as is universally done in computer graphics, is not a physical theory based on first principles. This paper presents a model for subsurface scattering in layered surfaces in terms of one-dimensional linear transport theory. We derive explicit formulas for backscattering and transmission that can be directly incorporated in most rendering systems, and a general Monte Carlo method that is easily added to a ray tracer. This model is particularly appropriate for common layered materials appearing in nature, such as biological tissues (e.g. skin, leaves, etc.) or inorganic materials (e.g. snow, sand, paint, varnished or dusty surfaces). As an application of the model, we simulate the appearance of a face and a cluster of leaves from experimental data describing their layer properties. CR"}
{"_id": "64f0ab3f2c141779fa8769ea192bd40b0a9529c7", "title": "Semantic mapping for mobile robotics tasks: A survey", "text": "The evolution of contemporary mobile robotics has given thrust to a series of additional conjunct technologies. Of such is the semantic mapping, which provides an abstraction of space and a means for human-robot communication. The recent introduction and evolution of semantic mapping motivated this survey, in which an explicit analysis of the existing methods is sought. The several algorithms are categorized according to their primary characteristics, namely scalability, inference model, temporal coherence and topological map usage. The applications involving semantic maps are also outlined in the work at hand, emphasizing on human interaction, knowledge representation and planning. The existence of public available validation datasets and benchmarking, suitable for the evaluation of semantic mapping techniques is also discussed in detail. Last, an attempt to address open issues and questions is also made."}
{"_id": "16be065238cc8120bbf66bcb098f578c65e0b17b", "title": "Power Electronics Technologies for Railway Vehicles", "text": "This paper describes the current range of advanced IGBT propulsion inverters used in railways from light rail vehicles to high speed trains & locomotives. The second part of the paper describes future trends in IGBT railway propulsion drives. One such trend is the concept of power integration which leads to weight, volume and cost reduction . Finally, for systems with an AC input supply, a concept for reducing the weight of the main input transformer is also described. This uses a configuration of resonant converter and a medium frequency transformer , the so called e-transformer, and could be particularly targeted for 15 kV, 16.7 Hz supplied systems."}
{"_id": "3e1e94b06abc988cea6abbe85728ba182af5b5a1", "title": "Modeling ITIL Business Motivation Model in ArchiMate", "text": "According to Enterprise Architecture (EA) approaches, organizations have motivational concepts that are used to model the motivations that underlie its design or change, which represent the organization\u2019s Business Motivation Model (BMM). Likewise, this BMM is also present in the organizations who provide IT services. ITIL has become a reference for IT service providers, but is commonly modeled as a process-oriented approach to IT Service Management, often disregarding the remaining EA domains or its motivational elements. Conversely, we believe that like EA, ITIL has an important motivation model that should be formally represented."}
{"_id": "d5fb5f3f152236781faa0affd2f01fa3794479ae", "title": "The use of unlabeled data to improve supervised learning for text summarization", "text": "With the huge amount of information available electronically, there is an increasing demand for automatic text summarization systems. The use of machine learning techniques for this task allows one to adapt summaries to the user needs and to the corpus characteristics. These desirable properties have motivated an increasing amount of work in this field over the last few years. Most approaches attempt to generate summaries by extracting sentence segments and adopt the supervised learning paradigm which requires to label documents at the text span level. This is a costly process, which puts strong limitations on the applicability of these methods. We investigate here the use of semi-supervised algorithms for summarization. These techniques make use of few labeled data together with a larger amount of unlabeled data. We propose new semi-supervised algorithms for training classification models for text summarization. We analyze their performances on two data sets - the Reuters news-wire corpus and the Computation and Language (cmp_lg) collection of TIPSTER SUMMAC. We perform comparisons with a baseline - non learning - system, and a reference trainable summarizer system."}
{"_id": "5d69d9163d3fa1fbfd9d416e37e803d911885983", "title": "Neither Quick Nor Proper - Evaluation of QuickProp for Learning Deep Neural Networks", "text": "Neural networks and especially convolutional neural networks are of great interest in current computer vision research. However, many techniques, extensions, and modifications have been published in the past, which are not yet used by current approaches. In this paper, we study the application of a method called QuickProp for training of deep neural networks. In particular, we apply QuickProp during learning and testing of fully convolutional networks for the task of semantic segmentation. We compare QuickProp empirically with gradient descent, which is the current standard method. Experiments suggest that QuickProp can not compete with standard gradient descent techniques for complex computer vision tasks like semantic segmentation."}
{"_id": "6e3e199268c96bed239f2b25dfa1d21d5e16d824", "title": "Triple-Wideband Open-Slot Antenna for the LTE Metal-Framed Tablet device", "text": "An open-slot antenna with a low profile of 7 mm to provide a triple-wideband LTE operation in the metal-framed tablet device is presented. With the low profile, the antenna can fit in the narrow region between the metal frame and the display panel of the metal-framed tablet device. The triple-wideband operation covers 698-960 MHz (low band), 1710-2690 MHz (middle band), and 3400-3800 MHz (high band). The antenna mainly has a rectangular open slot and an L-shaped metal strip embedded therein to provide two open slots of different resonant paths. Using a microstrip feedline having a step-shaped section across the two open slots, good excitation of multiple slot resonant modes is obtained. Details of the proposed antenna are presented."}
{"_id": "4e582bd1f4a175abc142ad005bae6bba6b3b53e8", "title": "Static analysis tools as early indicators of pre-release defect density", "text": "During software development it is helpful to obtain early estimates of the defect density of software components. Such estimates identify fault-prone areas of code requiring further testing. We present an empirical approach for the early prediction of pre-release defect density based on the defects found using static analysis tools. The defects identified by two different static analysis tools are used to fit and predict the actual pre-release defect density for Windows Server 2003. We show that there exists a strong positive correlation between the static analysis defect density and the pre-release defect density determined by testing. Further, the predicted pre-release defect density and the actual pre-release defect density are strongly correlated at a high degree of statistical significance. Discriminant analysis shows that the results of static analysis tools can be used to separate high and low quality components with an overall classification rate of 82.91%."}
{"_id": "b99f72445b1d4acb91aa159f0555f95e80e12b53", "title": "LTSG: Latent Topical Skip-Gram for Mutually Improving Topic Model and Vector Representations", "text": "Topic models have been widely used in discovering latent topics which are shared across documents in text mining. Vector representations, word embeddings and topic embeddings, map words and topics into a low-dimensional and dense real-value vector space, which have obtained high performance in NLP tasks. However, most of the existing models assume the results trained by one of them are perfect correct and used as prior knowledge for improving the other model. Some other models use the information trained from external large corpus to help improving smaller corpus. In this paper, we aim to build such an algorithm framework that makes topic models and vector representations mutually improve each other within the same corpus. An EM-style algorithm framework is employed to iteratively optimize both topic model and vector representations. Experimental results show that our model outperforms state-of-the-art methods on various NLP tasks."}
{"_id": "6261795029d0c97a06d81472e55618b9b7fd5f20", "title": "Hierarchical Classification of Ten Skin Lesion Classes", "text": "This paper presents a hierarchical classification system based on the kNearest Neighbors (kNN) classifier for classification of ten different classes of Malignant and Benign skin lesions from color image data. Our key contribution is to focus on the ten most common classes of skin lesions. There are five malignant: Actinic Keratosis (AK), Basal Cell Carcinoma (BCC), Squamous Cell Carcinoma (SCC), Melanoma (MEL), Intraepithelial Carcinoma (IEC) and five benign: Melanocytic Nevus / Mole (ML), Seborrhoeic Keratosis (SK), Dermatofibroma (DF), Haemangioma (VASC), Pyogenic Granuloma (PYO). Moreover, we use only high resolution color images acquired using a standard camera (nondermoscopy). Our image dataset contains 1300 lesions belonging to ten classes (45 AK, 239 BCC, 331 ML, 88 SCC, 257 SK, 76 MEL, 65 DF, 97 VASC, 24 PYO and 78 IEC)."}
{"_id": "b6450f16a6c23242dd96f9ba543b3339919d4832", "title": "Visual saliency detection based on Bayesian model", "text": "Image saliency detection is very useful in many computer vision tasks while it still remains a challenging problem. In this paper, we propose a new computational saliency detection model which is implemented with a coarse to fine strategy under the Bayesian framework. First, saliency points are applied to get a coarse location of the saliency region. And then, based on the rough region, we compute a prior map for the Bayesian model to achieve the final saliency map. Experimental results on a public available dataset show the effectiveness of the proposed prior map and the strength of our saliency map compared with several previous method."}
{"_id": "6b509a872a23abf233cf212303eae31eed6e02c7", "title": "77 GHz radar transceiver with dual integrated antenna elements", "text": "A complete radar transceiver for 77 GHz with two integrated antenna elements is presented. Based on a previously published design [1], two of the transmit and receive channels of the transceiver are supplemented with integrated antenna elements. The antennas exhibit a well defined antenna pattern with an efficiency of better than 50%."}
{"_id": "54e20e508c2cbb00dcf0d88ee95424af0f40be98", "title": "A 0.65 THz Focal-Plane Array in a Quarter-Micron CMOS Process Technology", "text": "A focal-plane array (FPA) for room-temperature detection of 0.65-THz radiation has been fully integrated in a low-cost 0.25 mum CMOS process technology. The circuit architecture is based on the principle of distributed resistive self-mixing and facilitates broadband direct detection well beyond the cutoff frequency of the technology. The 3 timesZ 5 pixel array consists of differential on-chip patch antennas, NMOS direct detectors, and integrated 43-dB voltage amplifiers. At 0.65 THz the FPA achieves a responsivity (Rv) of 80 kV/W and a noise equivalent power (NEP) of 300 pW/ radic{Hz}. Active multi-pixel imaging of postal envelopes demonstrates the FPAs potential for future cost-effective terahertz imaging solutions."}
{"_id": "b6f5e35251be380a7bb854f24939c8c045327ea9", "title": "Regulation of wound healing by growth factors and cytokines.", "text": "Cutaneous wound healing is a complex process involving blood clotting, inflammation, new tissue formation, and finally tissue remodeling. It is well described at the histological level, but the genes that regulate skin repair have only partially been identified. Many experimental and clinical studies have demonstrated varied, but in most cases beneficial, effects of exogenous growth factors on the healing process. However, the roles played by endogenous growth factors have remained largely unclear. Initial approaches at addressing this question focused on the expression analysis of various growth factors, cytokines, and their receptors in different wound models, with first functional data being obtained by applying neutralizing antibodies to wounds. During the past few years, the availability of genetically modified mice has allowed elucidation of the function of various genes in the healing process, and these studies have shed light onto the role of growth factors, cytokines, and their downstream effectors in wound repair. This review summarizes the results of expression studies that have been performed in rodents, pigs, and humans to localize growth factors and their receptors in skin wounds. Most importantly, we also report on genetic studies addressing the functions of endogenous growth factors in the wound repair process."}
{"_id": "a5dbfb900cc5c1ae953fc16a414547ab0d937d33", "title": "Traffic accident prediction using 3-D model-based vehicle tracking", "text": "Intelligent visual surveillance for road vehicles is the key to developing autonomous intelligent traffic systems. Recently, traffic incident detection employing computer vision and image processing has attracted much attention. In this paper, a probabilistic model for predicting traffic accidents using three-dimensional (3-D) model-based vehicle tracking is proposed. Sample data including motion trajectories are first obtained by 3-D model-based vehicle tracking. A fuzzy self-organizing neural network algorithm is then applied to learn activity patterns from the sample trajectories. Finally, vehicle activity is predicted by locating and matching each partial trajectory with the learned activity patterns, and the occurrence probability of a traffic accident is determined. Experiments show the effectiveness of the proposed algorithms."}
{"_id": "b8278022f19724e6e10a913d054ca3e919705553", "title": "DETERMINING TRENDS IN GLOBAL CRIME AND JUSTICE : AN OVERVIEW OF RESULTS FROM THE UNITED NATIONS SURVEYS OF CRIME TRENDS AND OPERATIONS OF CRIMINAL JUSTICE SYSTEMS", "text": "Effectively measuring comparative developments in crime and justice trends from a global perspective remains a key challenge for international policy makers. The ability to compare crime levels across countries enables policy makers to determine where interventions should occur and improves understanding of the key causes of crime in different societies across the globe. Nevertheless, there are significant challenges to comparative work in the field of criminal justice, not least of which is the ability to quantify accurately levels of crime across countries. Taking into account the methodological weaknesses of using crosscountry data sources, the present article provides conclusions obtained by analysing the large amount of data available from the various United Nations surveys of crime trends and operations of criminal justice systems. \u201cNot everything that can be counted, counts. And not everything that counts can be counted.\u201d Albert Einstein"}
{"_id": "5f8ff7d8508f357c82ffdeda3428ae27df1ec282", "title": "Spoofing Face Recognition With 3D Masks", "text": "Spoofing is the act of masquerading as a valid user by falsifying data to gain an illegitimate access. Vulnerability of recognition systems to spoofing attacks (presentation attacks) is still an open security issue in biometrics domain and among all biometric traits, face is exposed to the most serious threat, since it is particularly easy to access and reproduce. In this paper, many different types of face spoofing attacks have been examined and various algorithms have been proposed to detect them. Mainly focusing on 2D attacks forged by displaying printed photos or replaying recorded videos on mobile devices, a significant portion of these studies ground their arguments on the flatness of the spoofing material in front of the sensor. However, with the advancements in 3D reconstruction and printing technologies, this assumption can no longer be maintained. In this paper, we aim to inspect the spoofing potential of subject-specific 3D facial masks for different recognition systems and address the detection problem of this more complex attack type. In order to assess the spoofing performance of 3D masks against 2D, 2.5D, and 3D face recognition and to analyze various texture-based countermeasures using both 2D and 2.5D data, a parallel study with comprehensive experiments is performed on two data sets: the Morpho database which is not publicly available and the newly distributed 3D mask attack database."}
{"_id": "09cdd282f8d4851df7a49ee5984f8cdbe17dcd55", "title": "Deep Contrast Learning for Salient Object Detection", "text": "Salient object detection has recently witnessed substantial progress due to powerful features extracted using deep convolutional neural networks (CNNs). However, existing CNN-based methods operate at the patch level instead of the pixel level. Resulting saliency maps are typically blurry, especially near the boundary of salient objects. Furthermore, image patches are treated as independent samples even when they are overlapping, giving rise to significant redundancy in computation and storage. In this paper, we propose an end-to-end deep contrast network to overcome the aforementioned limitations. Our deep network consists of two complementary components, a pixel-level fully convolutional stream and a segment-wise spatial pooling stream. The first stream directly produces a saliency map with pixel-level accuracy from an input image. The second stream extracts segment-wise features very efficiently, and better models saliency discontinuities along object boundaries. Finally, a fully connected CRF model can be optionally incorporated to improve spatial coherence and contour localization in the fused result from these two streams. Experimental results demonstrate that our deep model significantly improves the state of the art."}
{"_id": "9360ce51ec055c05fd0384343792c58363383952", "title": "FuseNet: Incorporating Depth into Semantic Segmentation via Fusion-Based CNN Architecture", "text": "In this paper we address the problem of semantic labeling of indoor scenes on RGB-D data. With the availability of RGB-D cameras, it is expected that additional depth measurement will improve the accuracy. Here we investigate a solution how to incorporate complementary depth information into a semantic segmentation framework by making use of convolutional neural networks (CNNs). Recently encoder-decoder type fully convolutional CNN architectures have achieved a great success in the field of semantic segmentation. Motivated by this observation we propose an encoder-decoder type network, where the encoder part is composed of two branches of networks that simultaneously extract features from RGB and depth images and fuse depth features into the RGB feature maps as the network goes deeper. Comprehensive experimental evaluations demonstrate that the proposed fusion-based architecture achieves competitive results with the state-of-the-art methods on the challenging SUN RGB-D benchmark obtaining 76.27% global accuracy, 48.30% average class accuracy and 37.29% average intersectionover-union score."}
{"_id": "0a4f8d1a8bf2afd6763a4fc41606f75dbd10a7bc", "title": "Boosted decision trees for word recognition in handwritten document retrieval", "text": "Recognition and retrieval of historical handwritten material is an unsolved problem. We propose a novel approach to recognizing and retrieving handwritten manuscripts, based upon word image classification as a key step. Decision trees with normalized pixels as features form the basis of a highly accurate AdaBoost classifier, trained on a corpus of word images that have been resized and sampled at a pyramid of resolutions. To stem problems from the highly skewed distribution of class frequencies, word classes with very few training samples are augmented with stochastically altered versions of the originals. This increases recognition performance substantially. On a standard corpus of 20 pages of handwritten material from the George Washington collection the recognition performance shows a substantial improvement in performance over previous published results (75% vs 65%). Following word recognition, retrieval is done using a language model over the recognized words. Retrieval performance also shows substantially improved results over previously published results on this database. Recognition/retrieval results on a more challenging database of 100 pages from the George Washington collection are also presented."}
{"_id": "84c54bd23a96cccf404522928ac5bbf58f7442fc", "title": "Accelerating CNN inference on FPGAs: A Survey", "text": "Convolutional Neural Networks (CNNs) are currently adopted to solve an ever greater number of problems, ranging from speech recognition to image classi cation and segmentation. The large amount of processing required by CNNs calls for dedicated and tailored hardware support methods. Moreover, CNN workloads have a streaming nature, well suited to recon gurable hardware architectures such as FPGAs. The amount and diversity of research on the subject of CNN FPGA acceleration within the last 3 years demonstrates the tremendous industrial and academic interest. This paper presents a state-of-the-art of CNN inference accelerators over FPGAs. The computational workloads, their parallelism and the involved memory accesses are analyzed. At the level of neurons, optimizations of the convolutional and fully connected layers are explained and the performances of the di erent methods compared. At the network level, approximate computing and datapath optimization methods are covered and state-of-the-art approaches compared. The methods and tools investigated in this survey represent the recent trends in FPGA CNN inference accelerators and will fuel the future advances on e cient hardware deep learning."}
{"_id": "f182603044792bf6dc46a2f1c801806b2ee60cf5", "title": "Modular joint design for performance enhanced humanoid robot LOLA", "text": "The paper presents the performance enhanced humanoid robot LOLA which is currently being manufactured. The goal of the project is the realization of a fast, human-like walking motion. The robot is characterized by its lightweight construction, a modular, multi-sensory joint design with brushless motors and an electronics architecture using decentralized joint controllers. The fusion of motor, gear and sensors into a highly integrated, mechatronic joint module has several advantages for the whole system, including high power density, good dynamic performance and reliability. Additional degrees of freedom are introduced in elbow, waist and toes. Linear actuators are employed for the knee joints for a better mass distribution in the legs"}
{"_id": "a24b9fc2f8c61494b87b42874a1ddf780471fb75", "title": "Localized forgery detection in hyperspectral document images", "text": "Hyperspectral imaging is emerging as a promising technology to discover patterns that are otherwise hard to identify with regular cameras. Recent research has shown the potential of hyperspectral image analysis to automatically distinguish visually similar inks. However, a major limitation of prior work is that automatic distinction only works when the number of inks to be distinguished is known a priori and their relative proportions in the inspected image are roughly equal. This research work aims at addressing these two problems. We show how anomaly detection combined with unsupervised clustering can be used to handle cases where the proportions of pixels belonging to the two inks are highly unbalanced. We have performed experiments on the publicly available UWA Hyperspectral Documents dataset. Our results show that INFLO anomaly detection algorithm is able to best distinguish inks for highly unbalanced ink proportions."}
{"_id": "245a138cb1b8d8a10881ab23603abfd5002dc3ae", "title": "HYDRA: hybrid design for remote attestation (using a formally verified microkernel)", "text": "Remote Attestation (RA) allows a trusted entity (verifier) to securely measure internal state of a remote untrusted hardware platform (prover). RA can be used to establish a static or dynamic root of trust in embedded and cyber-physical systems. It can also be used as a building block for other security services and primitives, such as software updates and patches, verifiable deletion and memory resetting. There are three major types of RA designs: hardware-based, software-based, and hybrid, each with its own set of benefits and drawbacks.\n This paper presents the first hybrid RA design - called HYDRA - that builds upon formally verified software components that ensure memory isolation and protection, as well as enforce access control to memory and other resources. HYDRA obtains these properties by using the formally verified seL4 microkernel. (Until now, this was only attainable with purely hardware-based designs.) Using seL4 imposes fewer hardware requirements on the underlying microprocessor. Also, building upon a formally verified software component increases confidence in security of the overall design of HYDRA and its implementation. We instantiate HYDRA on two commodity hardware platforms and assess the performance and overhead of performing RA on such platforms via experimentation; we show that HYDRA can attest 10MB of memory in less than 250msec when using a Speck-based cryptographic checksum."}
{"_id": "60bc8ce676e5d0664fea181f624dc3cdd8300790", "title": "A Novel Boil-Off Gas Re-Liquefaction Using a Spray Recondenser for Liquefied Natural-Gas Bunkering Operations", "text": "This study presents the design of a novel boil-off gas (BOG) re-liquefaction technology using a BOG recondenser system. The BOG recondenser system targets the liquefied natural gas (LNG) bunkering operation, in which the BOG phase transition occurs in a pressure vessel instead of a heat exchanger. The BOG that is generated during LNG bunkering operation is characterized as an intermittent flow with various peak loads. The system was designed to temporarily store the transient BOG inflow, condense it with subcooled LNG and store the condensed liquid. The superiority of the system was verified by comparing it with the most extensively employed conventional re-liquefaction system in terms of consumption energy and via an exergy analysis. Static simulations were conducted for three compositions; the results indicated that the proposed system provided 0 to 6.9% higher efficiencies. The exergy analysis indicates that the useful work of the conventional system is 24.9%, and the useful work of the proposed system is 26.0%. Process dynamic simulations of six cases were also performed to verify the behaviour of the BOG recondenser system. The results show that the pressure of the holdup in the recondenser vessel increased during the BOG inflow mode and decreased during the initialization mode. The maximum pressure of one of the bunkering cases was 3.45 bar. The system encountered a challenge during repetitive operations due to overpressurizing of the BOG recondenser vessel."}
{"_id": "f4ff946af3ab18605ace98bd7336988dc8cfd0f6", "title": "The compressive response of carbon fiber composite pyramidal truss sandwich cores Dedicated to Professor", "text": "Pyramidal truss sandwich cores with relative densities 4q in the range 1 \u2013 10 % have been made from carbon fiber reinforced polymer laminates using a snap-fitting method. The measured quasi-static uniaxial compressive strength increased with increasing 4q from 1 to 11 MPa over the relative density range investigated here. A robust face-sheet/ truss joint design was developed to suppress truss\u2013 face sheet node fracture. Core failure then occurred by either (i) Euler buckling (4q < 2%) or (ii) delamination failure (4q > 2%) of the struts. Micro-buckling failure of the struts was not observed in the experiments reported here. Analytical models for the collapse of the composite cores by Euler bucking, delamination failure and micro-buckling of the struts have been developed. Good agreement between the measurements and predictions based on the Euler buckling and delamination failure of the struts is obtained. The micro-buckling analysis indicates this mechanism of failure is not activated until delamination is suppressed. The measurements and predictions reported here indicate that composite cellular materials with a pyramidal micro-structure reside in a gap in the strength versus density material property space, providing new opportunities for lightweight, high strength structural design."}
{"_id": "26ea68bca906b1cf272d0dc35c5f1d7eb9ed6652", "title": "Unsupervized Image Clustering With SIFT-Based Soft-Matching Affinity Propagation", "text": "It is known that affinity propagation can perform exemplar-based unsupervised image clustering by taking as input similarities between pairs of images and producing a set of exemplars that best represent the images, and then assigning each nonexemplar image to its most appropriate exemplar. However, the clustering performance of affinity propagation is largely limited by the adopted similarity between any pair of images. As the scale invariant feature transform (SIFT) has been widely employed to extract image features, the nonmetric similarity between any pair of images was proposed by \u201chard\u201d matching of SIFT features (e.g., counting the number of matching SIFT features). In this letter, we notice, however, that the decision of hard matching of SIFT features is binary, which is not necessary for deriving similarities. Hence, we propose a novel measure of similarities by replacing hard matching with the so-called soft matching. Experimental examples show that significant performance gains can be achieved by the resulting affinity propagation algorithm."}
{"_id": "942e8bb67562c1145b8863f3a99a24cee7537e22", "title": "Data quality issues in implementing an ERP", "text": "Data quality is a critical issue during the implementation of an enterprise resource planning (ERP) system. Data quality problems can have a significant impact on an organisation\u2019s information system. Therefore, it is essential to understand data quality issues to ensure success in implementing ERP systems. This paper uses SAP as an example of an ERP system and describes a study, which explores data quality problems with existing systems, and identifies critical success factors that impact data quality. The study resulted in the development of a framework for understanding data quality issues in implementing an ERP, and application of this framework in a case study in two large Australian organisations. The findings of the study suggest that the importance of data quality needs to be widely understood in implementing an ERP, as well as providing recommendations that may be useful to practitioners. Computerised databases continue to proliferate and as organisations become increasingly dependent upon their databases to support business process and decision making, the number of errors in stored data and the organisational impact of these errors are likely to increase (Klein, 1998). Information research has demonstrated that inaccurate and incomplete data may adversely affect the competitive success of an organisation (Redman, 1992). Poor quality information can have significant social and business impacts (Strong et al., 1997). For example, NBC News reported, `\u0300 dead people still eat!\u2019\u2019 It was found that, because of outdated information in government databases, food stamps continued to be sent to recipients long after they died. Fraud from food stamps cost US taxpayers billions of dollars. Business and industry often have similar DQ problems. For example, a financial company absorbed a net loss totalling more than $250 million when interest rates changed dramatically, all because the company database was lacking in quality and simple updates (Huang et al., 1999). In order to ensure DQ in information systems, it is important to understand the underlying factors that influence DQ. There have been some studies of critical success factors in information systems and quality management, such as total quality management (TQM) and just-in-time (JIT). Some of the DQ literature has also addressed the critical points and steps for DQ. Table I shows the related research efforts and reflects issues or elements of critical success factors for DQ. ERP-SAP ERP systems use relational database technology to integrate the various elements of an organisation\u2019s information systems. They provide a number of separate, but integrated modules. The use of an ERP avoids the costs of maintaining many separate `\u0300 legacy\u2019\u2019 systems and overcomes the problems associated with interfacing different systems. It is quite expensive to implement an ERP system, requiring multimillion-dollar budgets and large project teams. Despite the expense, such systems are becoming very widely used in the world\u2019s largest companies (Scapens et al., 1998). SAP is one of the well-known ERP packages on the market, with strengths in finance and accounting. SAP is an integrated business system, which evolved from a concept first developed by five former IBM systems engineers in 1972. 
It is a software package designed to enable businesses to effectively and efficiently run a variety of business processes within a single integrated system. SAP stands for systems, applications and products in data processing. It is produced by SAP AG, based in Walldorf, Germany, which employs more than 22,000 people in more than 50 countries. SAP AG is the third-largest software company in the world. SAP software is deployed at more than 22,000 business installations in more than 100 countries and is currently used by companies of all sizes, including more than half of the world\u2019s 500 top companies (SAP AG Corporate Overview, 2000). Therefore, SAP is an excellent system to study in an effort to evaluate ERP environments. Table I Summary of literature review identifying factors influencing (data) quality Fa ctor Sa raph et a l. (19 89) E ng lish (1999) Firth (19 96) W ang (1 998) H uang e t a l. (1 999) Segev (1996 ) Z hu and M eredith (19 95) O rr (1998) Cu sh ing (1974) Yu and N eter (19 73) Fie lds et a l. (19 86) N ich ols (19 87) Bo w en (199 3) Tra in in g * * * * * To p m anage m ent supp ort * * * * * * O rganisation al stru cture (com m unicatio n) * * M anag e chang e * E m ployee/p erson nel re la tions * * * D Q contro l Interna l contro l * Input cont ro l * So urce: From Xu, 200 0 [ 48 ] Hongjiang Xu, Jeretta Horn Nord, Noel Brown and G. Daryl Nord Data quality issues in implementing an ERP Industrial Management & Data Systems 102/1 [2002] 47\u00b158"}
{"_id": "b86c46dd39239407704efc7bf78ffde68b0fd094", "title": "Real-Time Billboard Substitution in a Video Stream", "text": "We present a system that accepts as input a continuous stream of TV broadcast images from sports events. The system automatically detects a predetermined billboard in the scene, and replaces it with a user defined pattern, with no cooperation from the billboards or camera operators. The replacement is performed seamlessly so that a viewer should not be able to detect the substitution. This allows the targeting of advertising to the appropriate audience, which is especially useful for international events. This requires several modules using state of the art computer graphics, image processing and computer vision technology. The system relies on modular design, and on a pipeline architecture, in which the search and track modules propagate their results in symbolic form throughout the pipeline buffer, and the replacement is performed at the exit of the pipe only, therefore relying on accumulated information. Also, images are processed only once. This allows the system to make replacement decisions based on complete sequences, thus avoiding mid-sequence on screen billboard changes. We present the algorithms and the overall system architecture, and discuss further applications of the technology."}
{"_id": "a9d240d911e338e6083cee83570c10af6f9dafb8", "title": "A Comprehensive Study of StaQC for Deep Code Summarization", "text": "Learning the mapping between natural language (NL) and programming language, such as retrieving or generating code snippets based on NL queries and annotating code snippets using NL, has been explored by lots of research works [2, 19, 21]. At the core of these works are machine learning and deep learning models, which usually demand for large datasets of pairs for training. This paper describes an experimental study of StaQC [50], a large-scale and high-quality dataset of pairs in Python and SQL domain, systematically mined from the Stack Overflow forum (SO). We compare StaQC with two other popular datasets mined from SO on the code summarization task, showing that StaQC helps achieve substantially better results, improving the current state-of-the-art model by an order of 8% \u223c 9% in BLEU metric."}
{"_id": "25ebaeb46b4fb3ee1a9d6769832c97cdece00865", "title": "Finding Groups in Data: An Introduction to Cluster Analysis", "text": "Description: The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. \"Cluster analysis is the increasingly important and practical subject of finding groupings in data. The authors set out to write a book for the user who does not necessarily have an extensive background in mathematics. They succeed very well.\" \u2014Mathematical Reviews \"Finding Groups in Data [is] a clear, readable, and interesting presentation of a small number of clustering methods. In addition, the book introduced some interesting innovations of applied value to clustering literature.\" \u2014Journal of Classification \"This is a very good, easy-to-read, and practical book. It has many nice features and is highly recommended for students and practitioners in various fields of study.\" \u2014Technometrics An introduction to the practical application of cluster analysis, this text presents a selection of methods that together can deal with most applications. These methods are chosen for their robustness, consistency, and general applicability. This book discusses various types of data, including interval-scaled and binary variables as well as similarity data, and explains how these can be transformed prior to clustering."}
{"_id": "e8480bca4664e9944fdd473e08bb9bcafea0551c", "title": "A Cluster Separation Measure", "text": "A measure is presented which indicates the similarity of clusters which are assumed to have a data density which is a decreasing function of distance from a vector characteristic of the cluster. The measure can be used to infer the appropriateness of data partitions and can therefore be used to compare relative appropriateness of various divisions of the data. The measure does not depend on either the number of clusters analyzed nor the method of partitioning of the data and can be used to guide a cluster seeking algorithm."}
{"_id": "c49ba2f8ed1ccda37f76f4f01624bce0768ef6ae", "title": "K-Means-Type Algorithms: A Generalized Convergence Theorem and Characterization of Local Optimality", "text": "The K-means algorithm is a commonly used technique in cluster analysis. In this paper, several questions about the algorithm are addressed. The clustering problem is first cast as a nonconvex mathematical program. Then, a rigorous proof of the finite convergence of the K-means-type algorithm is given for any metric. It is shown that under certain conditions the algorithm may fail to converge to a local minimum, and that it converges under differentiability conditions to a Kuhn-Tucker point. Finally, a method for obtaining a local-minimum solution is given."}
{"_id": "cdf92f0295cbe97c241fb12f239e4268b784bd34", "title": "Optimization of Control Parameters for Genetic Algorithms", "text": "The task of optimizing a complex system presents at least two levels of problems for the system designer. First, a class of optimization algorithms must be chosen that is suitable for application to the system. Second, various parameters of the optimization algorithm need to be tuned for efficiency. A class of adaptive search procedures called genetic algorithms (GA) has been used to optimize a wide variety of complex systems. GA's are applied to the second level task of identifying efficient GA's for a set of numerical optimization problems. The results are validated on an image registration problem. GA's are shown to be effective for both levels of the systems optimization problem."}
{"_id": "47abb93c712f8873cfd4ec2c7dfc759418cdb330", "title": "Performance Optimization of a High Current Dual Active Bridge with a Wide Operating Voltage Range", "text": "The main aim of this paper is to improve the performance of high current dual active bridge converters when operated over a wide voltage range. A typical application is for fuel cell vehicles where a bi-directional interface between a 12V battery and a high voltage DC bus is required. The battery side voltage ranges from 11V to 16V while the fuel cell is operated between 220V and 447V and the required power is typically 1kW. Careful analysis shows that the high currents on the battery side cause significant design issues in order to obtain a high efficiency. The standard phase shift modulation method can result in high conduction and switching losses. This paper proposes a combined triangular and trapezoidal modulation method to reduce losses over the wide operating range. Approximately, a 2% improvement in efficiency can be expected. An experimental system is used to verify the improved performance of the dual active bridge using the proposed advanced modulation method."}
{"_id": "c1f3e92cfec36fd70f9fe37366d0899877ad227e", "title": "Segmentation of color image using adaptive thresholding and masking with watershed algorithm", "text": "The individualization of an object from a digital image is a common problem in the field of image processing. We propose a modified version of the watershed algorithm for image segmentation. We have presented an adaptive masking and a threshoding mechanism over each color channel to overcome over segmentation problem, before combining the segmentation from each channel into the final one. Our proposed method ensures accuracy and quality of the 10 different kinds of color images. The experimental results are obtained using image quality assessment (IQA) metrics such as PSNR, MSE, PSNRRGB and Color Image Quality Measure (CQM) based on reversible YUV color transformation. Consequently, the proposed modified watershed approach can enhance the image segmentation performance. Similarly, it is worth noticing that our proposed MWS approach is faster than many other segmentation algorithms, which makes it appropriate for real-time application. According to the visual and quantitative verification, the proposed algorithm is performing better than three other algorithms."}
{"_id": "c8a5e1a8c46fa368449d3afd6da5e0f5b2812659", "title": "Disease recognition in Mango crop using modified rotational kernel transform features", "text": "Soft computing in the field of agriculture science is being employed with computer vision techniques in order to detect the diseases in crops to increase the overall yield. A Modified Rotation Kernel Transformation(MRKT) based directional feature extraction scheme is presents to resolve the issues occurring due to shape, color or other deceptive features during plant disease recognition. The MRKT based scheme is used to calculate the directional features and histograms for plant parts like leaf and fruit of Mango crop. These histograms and the directional feature set with use of artificial neural network lead to a better recognition technique of Anthracnose disease which occurs in form of black spots on fruits and leaves of mango plant. The results obtained using proposed MRKT directional feature set have shows the better results with accuracy up to 98%."}
{"_id": "0ba6d4d3037ed94ccb19ac884070ff778dae2122", "title": "A Taxonomy of Attacks on the DNP3 Protocol", "text": "Distributed Network Protocol (DNP3) is the predominant SCADA protocol in the energy sector \u2013 more than 75% of North American electric utilities currently use DNP3 for industrial control applications. This paper presents a taxonomy of attacks on the protocol. The attacks are classified based on targets (control center, outstation devices and network/communication paths) and threat categories (interception, interruption, modification and fabrication). To facilitate risk analysis and mitigation strategies, the attacks are associated with the specific DNP3 protocol layers they exploit. Also, the operational impact of the attacks is categorized in terms of three key SCADA objectives: process confidentiality, process awareness and process control. The attack taxonomy clarifies the nature and scope of the threats to DNP3 systems, and can provide insights into the relative costs and benefits of implementing mitigation strategies."}
{"_id": "afeee08b2f21f2408efd1af442cbdca545b86e3d", "title": "A Roadmap for the Value-Loading Problem", "text": "We analyze the value-loading problem. This is the problem of robustly encoding moral values into an AI agent interacting with a complex environment. Like many before, we argue that this is both a major concern and an extremely challenging problem. Solving it will likely require years, if not decades, of multidisciplinary work by teams of top scientists and experts. Given how uncertain the timeline of human-level AI research is, we thus argue that a pragmatic partial solution should be designed as"}
{"_id": "2c85a458b9c9aabb32271fca1ee2efc383a5a03a", "title": "Adaptive Kalman Filtering Methods for Low-Cost GPS/INS Localization for Autonomous Vehicles", "text": "For autonomous vehicles, navigation systems must be accurate enough to provide lane-level localization. Highaccuracy sensors are available but not cost-effective for production use. Although prone to significant error in poor circumstances, even low-cost GPS systems are able to correct Inertial Navigation Systems (INS) to limit the effects of dead reckoning error over short periods between sufficiently accurate GPS updates. Kalman filters (KF) are a standard approach for GPS/INS integration, but require careful tuning in order to achieve quality results. This creates a motivation for a KF which is able to adapt to different sensors and circumstances on its own. Typically for adaptive filters, either the process (Q) or measurement (R) noise covariance matrix is adapted, and the other is fixed to values estimated a priori. We show that by adapting Q based on the state-correction sequence and R based on GPS receiver-reported standard deviation, our filter reduces GPS root-mean-squared error by 23% in comparison to raw GPS, with 15% from only adapting R."}
{"_id": "f89760c4fc1aadbef441a6e1fe6ce0b9411f1c38", "title": "User simulation for spoken dialogue systems: learning and evaluation", "text": "We propose the \u201cadvanced\u201d n-grams as a new technique for simulating user behaviour in spoken dialogue systems, and we compare it with two methods used in our prior work, i.e. linear feature combination and \u201cnormal\u201d n-grams. All methods operate on the intention level and can incorporate speech recognition and understanding errors. In the linear feature combination model user actions (lists of\u3008 speech act, task \u3009 pairs) are selected, based on features of the current dialogue state which encodes the whole history of the dialogue. The user simulation based on \u201cnormal\u201d n-grams treats a dialogue as a sequence of lists of \u3008 speech act, task \u3009 pairs. Here the length of the history considered is restricted by the order of the n-gram. The \u201cadvanced\u201d n-grams are a variation of the normal ngrams, where user actions are conditioned not only on speech acts and tasks but also on the current status of the tasks, i.e. whether the information needed by the application (in our case flight booking) has been provided and confirmed by the user. This captures elements of goal-directed user behaviour. All models were trained and evaluated on the C OMMUNICATOR corpus, to which we added annotations for user actions and dialogue context. We then evaluate how closely the synthetic responses resemble the real user responses by comparing the user response generated by each user simulation model in a given dialogue context (taken from the annotated corpus) with the actual user response. We propose the expected accuracy, expected precision, and expected recall evaluation metrics as opposed to standard precision and recall used in prior work. We also discuss why they are more appropriate metrics for evaluating user simulation models compared to their standard counterparts. The advanced n-grams produce higher scores than the normal n-grams for small values of n, which proves their strength when little amount of data is available to train larger ngrams. The linear model produces the best expected accuracy but with respect to expected precision and expected recall it is outperformed by the large n-grams even though it is trained using more information. As a task-based evaluation, we also run each of the user simulation models against a system policy trained on the same corpus. Here the linear feature combination model outperforms the other methods and the advanced n-grams outperform the normal ngrams for all values of n, which again shows their potential. We also calculate the perplexity of the different user models."}
{"_id": "deae28e92f85717389f5ee1d0e05a145c2f4e9a2", "title": "Correlated Nonideal Effects of Dark and Light I\u2013V Characteristics in a-Si/c-Si Heterojunction Solar Cells", "text": "a-Si/c-Si (amorphous Silcon/crystalline Silicon) heterojunction solar cells exhibit several distinctive dark and light I-V nonideal features. The dark I-V of these cells exhibits unusually high ideality factors at low forward-bias and the occurrence of a \u201cknee\u201d at medium forward-bias. Nonidealities under illumination, such as the failure of superposition and the occurrence of an \u201cS-type\u201d curve, are also reported in these cells. However, the origin of these nonidealities and how the dark I-V nonidealities manifest themselves under illumination, and vice versa, have not been clearly and consistently explained in the current literature. In this study, a numerical framework is used to interpret the origin of the dark I-V nonidealities, and a novel simulation technique is developed to separate the photo-current and the contact injection current components of the light I-V. Using this technique, the voltage dependence of photo-current is studied to explain the failure of the superposition principle and the origin of the S-type light I-V characteristics. The analysis provides a number of insights into the correlations between the dark I-V and the light I-V. Finally, using the experimental results from this study and from the current literature, it is shown that these nonideal effects indeed affect the dark I-V and the light I-V in a predictable manner."}
{"_id": "659b633ef52a1fd382fe22325c0a5798237662a4", "title": "Power Control in Ad-Hoc Networks: Theory, Architecture, Algorithm and Implementation of the COMPOW Protocol", "text": "We present a new protocol for power control in ad hoc networks. We describe the issues in conceptualizing the power control problem, and provide an architecturally simple as well as theoretically well founded solution. The solution is shown to simultaneously satisfy the three objectives of maximizing the traffic carrying capacity of the entire network, extending battery life through providing low power routes, and reducing the contention at the MAC layer. Further, the protocol has the plug and play feature that it can be employed in conjunction with any routing protocol that pro-actively maintains a routing table. The protocol, called COMPOW, has been implemented in the Linux kernel and we describe the software architecture and implementation details."}
{"_id": "28b39a2e9ab23f1ea7ebd0ef9a239dab37b51cca", "title": "Supervised Matrix Factorization for Cross-Modality Hashing", "text": "Matrix factorization has been recently utilized for the task of multi-modal hashing for cross-modality visual search, where basis functions are learned to map data from different modalities to the same Hamming embedding. In this paper, we propose a novel cross-modality hashing algorithm termed Supervised Matrix Factorization Hashing (SMFH) which tackles the multi-modal hashing problem with a collective non-matrix factorization across the different modalities. In particular, SMFH employs a well-designed binary code learning algorithm to preserve the similarities among multi-modal original features through a graph regularization. At the same time, semantic labels, when available, are incorporated into the learning procedure. We conjecture that all these would facilitate to preserve the most relevant information during the binary quantization process, and hence improve the retrieval accuracy. We demonstrate the superior performance of SMFH on three cross-modality visual search benchmarks, i.e., the PASCAL-Sentence, Wiki, and NUS-WIDE, with quantitative comparison to various stateof-the-art methods [Kumar and Udupa, 2011; Rastegari et al., 2013; Zhang and Li, 2014; Ding et al., 2014]."}
{"_id": "7405189a473ae9ff64aa63d615ad9e3cec6a5fcb", "title": "Proposing a new design approach for M-learning applications", "text": "As Information and Communication Technologies (ICT) are developing frequently, new learning approaches have been introduced to facilitate teaching and learning processes. Mobile devices are growing in popularity and there has been an increasing interest in their potential as innovative tools for learning. Mobile learning or shortly (M-learning) is a new learning approach intending to use mobile devices such as laptops, smart phones and personal digital assistant (PDA) in learning process in any place and at any time. M-learning application needs to be designed in a way that considers the special features and constraints of mobile devices such as screen size, available storage, processor speed and battery life. This paper explores the existing design approaches for M-learning and highlights their main limitations. As a step forward in this direction, this paper also proposes a new design approach for M-learning applications focusing mainly on three main aspects: learner, learning content and technology. The approach consists of three phases: starting dimensions (SD) phase, M-learning Development (MLD) phase and Learning Content Design (LCD) phase. Finally, this paper provides a case study scenario to demonstrate the feasibility of the proposed approach."}
{"_id": "8579c547a113fe7558d95ad0027a6199d9c32918", "title": "A Survey on Mobile Anchor Node Assisted Localization in Wireless Sensor Networks", "text": "Localization is one of the key technologies in wireless sensor networks (WSNs), since it provides fundamental support for many location-aware protocols and applications. Constraints on cost and power consumption make it infeasible to equip each sensor node in the network with a global position system (GPS) unit, especially for large-scale WSNs. A promising method to localize unknown nodes is to use mobile anchor nodes (MANs), which are equipped with GPS units moving among unknown nodes and periodically broadcasting their current locations to help nearby unknown nodes with localization. A considerable body of research has addressed the mobile anchor node assisted localization (MANAL) problem. However, to the best of our knowledge, no updated surveys on MAAL reflecting recent advances in the field have been presented in the past few years. This survey presents a review of the most successful MANAL algorithms, focusing on the achievements made in the past decade, and aims to become a starting point for researchers who are initiating their endeavors in MANAL research field. In addition, we seek to present a comprehensive review of the recent breakthroughs in the field, providing links to the most interesting and successful advances in this research field."}
{"_id": "cace61f67bc4d5d1d750e48c30642f3bdca2cba0", "title": "Coarse-to-Fine Question Answering for Long Documents", "text": "We present a framework for question answering that can efficiently scale to longer documents while maintaining or even improving performance of state-ofthe-art models. While most successful approaches for reading comprehension rely on recurrent neural networks (RNNs), running them over long documents is prohibitively slow because it is difficult to parallelize over sequences. Inspired by how people first skim the document, identify relevant parts, and carefully read these parts to produce an answer, we combine a coarse, fast model for selecting relevant sentences and a more expensive RNN for producing the answer from those sentences. We treat sentence selection as a latent variable trained jointly from the answer only using reinforcement learning. Experiments demonstrate the state of the art performance on a challenging subset of the WIKIREADING dataset (Hewlett et al., 2016) and on a new dataset, while speeding up the model by 3.5x-6.7x."}
{"_id": "45681f19c0b55ed5065674738e3f7ff0c7a99f99", "title": "Direct Causality Detection via the Transfer Entropy Approach", "text": "The detection of direct causality, as opposed to indirect causality, is an important and challenging problem in root cause and hazard propagation analysis. Several methods provide effective solutions to this problem when linear relationships between variables are involved. For nonlinear relationships, currently only overall causality analysis can be conducted, but direct causality cannot be identified for such processes. In this paper, we describe a direct causality detection approach suitable for both linear and nonlinear connections. Based on an extension of the transfer entropy approach, a direct transfer entropy (DTE) concept is proposed to detect whether there is a direct information flow pathway from one variable to another. Especially, a differential direct transfer entropy concept is defined for continuous random variables, and a normalization method for the differential direct transfer entropy is presented to determine the connectivity strength of direct causality. The effectiveness of the proposed method is illustrated by several examples, including one experimental case study and one industrial case study."}
{"_id": "773cff8d795a27706b483122a6a2b432a1a6657a", "title": "From Adjective Glosses to Attribute Concepts: Learning Different Aspects That an Adjective Can Describe", "text": "Adjectives are one of the major contributors to conveying subjective meaning to sentences. Understanding the semantics of adjectives is essential for understanding natural language sentences. In this paper we propose a novel approach for learning different properties that an adjective can describe, corresponding to the \u2018attribute concepts\u2019, which is not currently available in existing linguistic resources. We accomplish this by reading adjective glosses and bootstrapping attribute concepts from a seed of adjectives. We show that we can learn new attribute concepts using adjective glosses of WordNet adjectives with high accuracy as compared with human annotation on a test set."}
{"_id": "e2890afe42e64b910609e7554130d6a81427e02a", "title": "Smart home garden irrigation system using Raspberry Pi", "text": "Irrigation system is a method of allowing water to drip slowly to the roots of plants, either onto the soil surface or directly onto the root zone, through solenoid valve. However, it is found that the market price of the system is expensive for small area coverage. Thus, this paper proposes a design for smart home garden irrigation system that implements ready-to-use, energy-efficient, and cost effective devices. Raspberry Pi, which is implemented in this system is integrated with multi-sensors such as soil moisture sensors, ultrasonic sensors, and light sensors. This proposed system managed to reduce cost, minimize waste water, and reduce physical human interface. In this paper, the relay is utilized to control the switching of solenoid valve. The system also managed to measure moisture of the soil and control the solenoid valve according to human's requirements. It is conducted with Graphical User Interface (GUI) using Android application to activate watering activity. Email notification is also sent to the home user for alert purposes either for normal or critical operations. An experimental setup has been tested and it is proven that the system can intelligently control and monitor the soil moisture levels in the experiment field."}
{"_id": "c3a7f3d65e9c3542b5083828960c383eddbfa31c", "title": "Achieving lightweight trustworthy traceability", "text": "Despite the fact that traceability is a required element of almost all safety-critical software development processes, the trace data is often incomplete, inaccurate, redundant, conflicting, and outdated. As a result, it is neither trusted nor trustworthy. In this vision paper we propose a philosophical change in the traceability landscape which transforms traceability from a heavy-weight process producing untrusted trace links, to a light-weight results-oriented trustworthy solution. Current traceability practices which retard agility are cast away and replaced with a disciplined, just-in-time approach. The novelty of our solution lies in a clear separation of trusted trace links from untrusted ones, the change in perspective from `living-with' inacurate traces toward rigorous and ongoing debridement of stale links from the trusted pool, and the notion of synthesizing available `project exhaust' as evidence to systematically construct or reconstruct purposed, highly-focused trace links."}
{"_id": "da75a71c9bfafdb943aa08c60227b1ffd73af10e", "title": "Instant Dehazing of Images Using Polarization", "text": "We present an approach to easily remove the effects of haze from images. It is based on the fact that usually airlight scattered by atmospheric particles is partially polarized. Polarization filtering alone cannot remove the haze effects, except in restricted situations. Our method, however, works under a wide range of atmospheric and viewing conditions. We analyze the image formation process, taking into account polarization effects of atmospheric scattering. We then invert the process to enable the removal of haze from images. The method can be used with as few as two images taken through a polarizer at different orientations. This method works instantly, without relying on changes of weather conditions. We present experimental results of complete dehazing in far from ideal conditions for polarization filtering. We obtain a great improvement of scene contrast and correction of color. As a by product, the method also yields a range (depth) map of the scene, and information about properties of the atmospheric particles."}
{"_id": "10652afefe9d6470e44f0d70cec80207eb243291", "title": "Types for Units-of-Measure: Theory and Practice", "text": "Units-of-measure are to science what types are to programming. In science and engineering, dimensional and unit consistency provides a first check on the correctness of an equation or formula, just as in programming the passing of a program by the type-checker eliminates one possible reason for failure. Units-of-measure errors can have catastrophic consequences, the most famous of which was the loss in September 1999 of NASA\u2019s Mars Climate Orbiter probe, caused by a confusion between newtons (the SI unit of force) and lbf (poundforce, a unit of force sometimes used in the US). The report into the disaster made many recommendations [14]. Notably absent, though, was any suggestion that programming languages might assist in the prevention of such errors, either through static analysis tools, or through type-checking. Over the years, many people have suggested ways of extending programming languages with support for static checking of units-of-measure. It\u2019s even possible to abuse the rich type systems of existing languages such as C++ and Haskell to achieve it, but at some cost in usability [18, 6]. More recently, Sun\u2019s design for its Fortress programming language has included type system support for dimensions and units [2]. In this short course, we\u2019ll look at the design supported by the F# programming language, the internals of its type system, and the theoretical basis for units safety. The tutorial splits into three parts:"}
{"_id": "471dc2841c2e735c525a5e9a1846d6f05d0a1001", "title": "Programming languages and dimensions", "text": "Scientists and engineers must ensure that the equations and formulae which they use are dimensionally consistent, but existing programming languages treat all numeric values as dimensionless. This thesis investigates the extension of programming languages to support the notion of physical dimension. A type system is presented similar to that of the programming language ML but extended with polymorphic dimension types. An algorithm which infers most general dimension types automatically is then described and proved correct. The semantics of the language is given by a translation into an explicitlytyped language in which dimensions are passed as arguments to functions. The operational semantics of this language is specified in the usual way by an evaluation relation defined by a set of rules. This is used to show that if a program is well-typed then no dimension errors can occur during its evaluation. More abstract properties of the language are investigated using a denotational semantics: these include a notion of invariance under changes in the units of measure used, analogous to parametricity in the polymorphic lambda calculus. Finally the dissertation is summarised and many possible directions for future research in dimension types and related type systems are described."}
{"_id": "96fdc213902146cae5ceae7c43ea9f6c1171d69e", "title": "Development environments for autonomous mobile robots: A survey", "text": "Robotic Development Environments (RDEs) have come to play an increasingly important role in robotics research in general, and for the development of architectures for mobile robots in particular. Yet, no systematic evaluation of available RDEs has been performed; establishing a comprehensive list of evaluation criteria targeted at robotics applications is desirable that can subsequently be used to compare their strengths and weaknesses. Moreover, there are no practical evaluations of the usability and impact of a large selection of RDEs that provides researchers with the information necessary to select an RDE most suited to their needs, nor identifies trends in RDE research that suggest directions for future RDE development. This survey addresses the above by selecting and describing nine open source, freely available RDEs for mobile robots, evaluating and comparing them from various points of view. First, based on previous work concerning agent systems, a conceptual framework of four broad categories is established, encompassing the characteristics and capabilities that an RDE supports. Then, a practical evaluation of RDE usability in designing, implementing, and executing robot architectures is presented. Finally, the impact of specific RDEs on the field of robotics is addressed by providing a list of published applications and research projects that give concrete examples of areas in which systems have been used. The comprehensive evaluation and comparison of the nine RDEs concludes with suggestions of how to use the results of this survey and a brief discussion of future trends in RDE design."}
{"_id": "6ddb346a60cbbc6d7328bd19845fa2967725cf2b", "title": "VQM-based QoS/QoE mapping for streaming video", "text": "Numerous new broadband and multimedia applications have emerged in recent years, in which streaming video services are particularly of interest. Unlike data communication, streaming video requires a subjective measure of quality referred to as the Quality of Experience (QoE), but the subjective evaluation is very time consuming and complex. We are proposing an objective approach to provide mapping between QoE and Quality of Service (QoS), in which Video Quality Metric (VQM) is utilized to indicate QoE. We have analyzed the emulation results and derived a simple formula to estimate QoE from QoS parameters for streaming video under certain conditions. This method is non-intrusive and quick to use."}
{"_id": "5afb06ed45edcd33e0ba4f83fd2f89288fab9864", "title": "Nanoconnectomic upper bound on the variability of synaptic plasticity", "text": "Information in a computer is quantified by the number of bits that can be stored and recovered. An important question about the brain is how much information can be stored at a synapse through synaptic plasticity, which depends on the history of probabilistic synaptic activity. The strong correlation between size and efficacy of a synapse allowed us to estimate the variability of synaptic plasticity. In an EM reconstruction of hippocampal neuropil we found single axons making two or more synaptic contacts onto the same dendrites, having shared histories of presynaptic and postsynaptic activity. The spine heads and neck diameters, but not neck lengths, of these pairs were nearly identical in size. We found that there is a minimum of 26 distinguishable synaptic strengths, corresponding to storing 4.7 bits of information at each synapse. Because of stochastic variability of synaptic activation the observed precision requires averaging activity over several minutes."}
{"_id": "38aa85ac4a107ca3bb02575a5a7928023a4f8f56", "title": "Online Attendance Management System Using RFID with Object Counter", "text": "Educational institutions\u2019 administrators in our country and the whole world are concerned about regularity of student attendance. Student overall academic performance is affected by it. The conventional method of taking attendance by calling names or signing on paper is very time consuming, and hence inefficient. Radio Frequency Identification (RFID) based attendance system is one of the solutions to address this problem. A system that can automatically capture student\u2019s attendance by flashing their student card at the RFID reader and save all the mentioned troubles was given by T.S.Lim et al. [1].The RFID system using web-based applications such as ASP.NET and IIS server to cater the recording and reporting of the students\u2019 attendances was given by Abdul Aziz Mohammed et.al. [2]. Arulogun O. T. et al made an attempt to solve recurrent lecture attendance monitoring problem in developing countries using RFID technology [3]. Rajan Patel et al [4] presents the integration of ubiquitous computing systems into classroom for managing the students\u2019 attendance using RFID technology. H.B. Hamid et al [5] gave a system that has been built using the web-based applications such as JSP, MySQL and Apache to cater the recording and reporting of the students\u2019 attendances. NetBeans IDE 6.1 is used for developing the overall system. We have proposed the system in this paper using C#. Microsoft Visual Studio is used for the system designing. Also, the issue related to fake /false attendance through the RFID system has been addressed, we eliminate it by using a special object counter for the head count. Ankita Agrawal & Ashish Bansal 132"}
{"_id": "04bc01bbcfa93f72f7ea958911de3aedd7320936", "title": "Model-based evaluation: from dependability to security", "text": "The development of techniques for quantitative, model-based evaluation of computer system dependability has a long and rich history. A wide array of model-based evaluation techniques is now available, ranging from combinatorial methods, which are useful for quick, rough-cut analyses, to state-based methods, such as Markov reward models, and detailed, discrete-event simulation. The use of quantitative techniques for security evaluation is much less common, and has typically taken the form of formal analysis of small parts of an overall design, or experimental red team-based approaches. Alone, neither of these approaches is fully satisfactory, and we argue that there is much to be gained through the development of a sound model-based methodology for quantifying the security one can expect from a particular design. In this work, we survey existing model-based techniques for evaluating system dependability, and summarize how they are now being extended to evaluate system security. We find that many techniques from dependability evaluation can be applied in the security domain, but that significant challenges remain, largely due to fundamental differences between the accidental nature of the faults commonly assumed in dependability evaluation, and the intentional, human nature of cyber attacks."}
{"_id": "56495dd3c1f86ed674c524b978958548999f6a1c", "title": "The use of socially assistive robots in the design of intelligent cognitive therapies for people with dementia", "text": "Currently the 2 percent growth rate for the world's older population exceeds the 1.2 rate for the world's population as a whole. By 2050, the number of individuals over the age 85 is projected to be three times more than there is today. Most of these individuals will need physical, emotional, and cognitive assistance. In this paper, we present a new adaptive robotic system based on the socially assistive robotics (SAR) technology that tries to provide a customized help protocol through motivation, encouragements, and companionship to users suffering from cognitive changes related to aging and/or Alzheimer's disease. Our results show that this approach can engage the patients and keep them interested in interacting with the robot, which, in turn, increases their positive behavior."}
{"_id": "477af70a1b6e9d50464409b694ac8096f09c7fae", "title": "3D deep convolutional neural networks for amino acid environment similarity analysis", "text": "Central to protein biology is the understanding of how structural elements give rise to observed function. The surfeit of protein structural data enables development of computational methods to systematically derive rules governing structural-functional relationships. However, performance of these methods depends critically on the choice of protein structural representation. Most current methods rely on features that are manually selected based on knowledge about protein structures. These are often general-purpose but not optimized for the specific application of interest. In this paper, we present a general framework that applies 3D convolutional neural network (3DCNN) technology to structure-based protein analysis. The framework automatically extracts task-specific features from the raw atom distribution, driven by supervised labels. As a pilot study, we use our network to analyze local protein microenvironments surrounding the 20 amino acids, and predict the amino acids most compatible with environments within a protein structure. To further validate the power of our method, we construct two amino acid substitution matrices from the prediction statistics and use them to predict effects of mutations in T4 lysozyme structures. Our deep 3DCNN achieves a two-fold increase in prediction accuracy compared to models that employ conventional hand-engineered features and successfully recapitulates known information about similar and different microenvironments. Models built from our predictions and substitution matrices achieve an 85% accuracy predicting outcomes of the T4 lysozyme mutation variants. Our substitution matrices contain rich information relevant to mutation analysis compared to well-established substitution matrices. Finally, we present a visualization method to inspect the individual contributions of each atom to the classification decisions. End-to-end trained deep learning networks consistently outperform methods using hand-engineered features, suggesting that the 3DCNN framework is well suited for analysis of protein microenvironments and may be useful for other protein structural analyses."}
{"_id": "f3b8decd3d7f320efd60df2ac70fd8972095b7b7", "title": "Wireless Technologies for IoT in Smart Cities", "text": "As cities continue to grow, numerous initiatives for Smart Cities are being conducted. The concept of Smart City encompasses several concepts being governance, economy, management, infrastructure, technology and people. This means that a Smart City can have different communication needs. Wireless technologies such as WiFi, ZigBee, Bluetooth, WiMax, 4G or LTE (Long Term Evolution) have presented themselves as solutions to the communication needs of Smart City initiatives. However, as most of them employ unlicensed bands, interference and coexistence problems are increasing. In this paper, the wireless technologies available nowadays for IoT (Internet of Things) in Smart Cities are presented. Our contribution is a review of wireless technologies, their comparison and the problems that difficult coexistence among them. In order to do so, the characteristics and adequacy of wireless technologies to each domain are considered. The problems derived of over-crowded unlicensed spectrum and coexistence difficulties among each technology are discussed as well. Finally, power consumption concerns are addressed."}
{"_id": "28bf499a69bd467b9f3c4426cd5d0f4e65724648", "title": "Chinese Image Text Recognition on grayscale pixels", "text": "This paper presents a novel scheme for Chinese text recognition in images and videos. It's different from traditional paradigms that binarize text images, fed the binarized text to an OCR engine and get the recognized results. The proposed scheme, named grayscale based Chinese Image Text Recognition (gCITR), implements the recognition directly on grayscale pixels via the following steps: image text over-segmentation, building recognition graph, Chinese character recognition and beam search determination. The advantages of gCITR lie in: (1) it does not heavily rely on the performance of binarization, which is not robust in practical and thus severely affects the performance of OCR, (2) grayscale image retains more information of the text thus facilitates the recognition. Experimental results on text from 13 TV news videos demonstrate the effectiveness of the proposed gCITR, from which significant performance gains are observed."}
{"_id": "61513eff4371e48748d792c9b136bc45dd0e625d", "title": "Applying Stochastic Integer Programming to Optimization of Resource Scheduling in Cloud Computing", "text": "Resource scheduling based on SLA(Service Level Agreement) in cloud computing is NP-hard problem. There is no efficient method to solve it. This paper proposes an optimal model for the problem and give an algorithm by applying stochastic integer programming technique. Applying Gr\u00f6bner bases theory for solving the stochastic integer programming problem and the experimental results of the implementation are also presented."}
{"_id": "4af63ed343df388b6353b6fc77c7137d27822bf4", "title": "ZooKeeper: Wait-free Coordination for Internet-scale Systems", "text": "In this paper, we describe ZooKeeper, a service for coordinating processes of distributed applications. Since ZooKeeper is part of critical infrastructure, ZooKeeper aims to provide a simple and high performance kernel for building more complex coordination primitives at the client. It incorporates elements from group messaging, shared registers, and distributed lock services in a replicated, centralized service. The interface exposed by ZooKeeper has the wait-free aspects of shared registers with an event-driven mechanism similar to cache invalidations of distributed file systems to provide a simple, yet powerful coordination service. The ZooKeeper interface enables a high-performance service implementation. In addition to the wait-free property, ZooKeeper provides a per client guarantee of FIFO execution of requests and linearizability for all requests that change the ZooKeeper state. These design decisions enable the implementation of a high performance processing pipeline with read requests being satisfied by local servers. We show for the target workloads, 2:1 to 100:1 read to write ratio, that ZooKeeper can handle tens to hundreds of thousands of transactions per second. This performance allows ZooKeeper to be used extensively by client applications."}
{"_id": "7e9c01fd5e0b290cec25e9d1a156e8aa473ee151", "title": "Performance Analysis of Enhanced SPECK Algorithm", "text": "A secure lightweight block cipher is an effective security solution for applications running on resource-constrained devices. The SPECK family of lightweight block ciphers is designed for low-end devices that need data security. The randomness of the keystream produced by this algorithm is a good indicator of its security strength. Nevertheless, the computed proportion of sequences based on empirical test results falls beyond the range of acceptable interval, an evidence of non-randomness to each algorithm. In this study, the researchers enhanced SPECK (k-SPECK) through the addition of a key derivation function in its processes. Both SPECK and k - SPECK were evaluated with the two sophisticated statistical test suite. The result shows that k - SPECK outperformed SPECK in terms of randomness as its three variations successfully passed all the test of statistical packages NIST and DieHarder, making it more cryptographically secured. Similarly, software implementation was also evaluated showing that k - SPECK 128/192 and 128/256 performed faster in encryption cycle compared to SPECK."}
{"_id": "38315f3ca2fa0b4c528492ae3eb482fb01a834a6", "title": "FPGA Implementations of the Round Two SHA-3 Candidates", "text": "The second round of the NIST-run public competition is underway to find a new hash algorithm(s) for inclusion in the NIST Secure Hash Standard (SHA-3). This paper presents the full implementations of all of the second round candidates in hardware with all of their variants. In order to determine their computational efficiency, an important aspect in NIST's round two evaluation criteria, this paper gives an area/speed comparison of each design both with and without a hardware interface, thereby giving an overall impression of their performance in resource constrained and resource abundant environments. The implementation results are provided for a Virtex-5 FPGA device. The efficiency of the architectures for the hash functions are compared in terms of throughput per unit area."}
{"_id": "e5d0706662f7a9dc37215cd0841c78eca89e709d", "title": "PC-01: Introduction to computational thinking: Educational technology in primary and secondary education", "text": "This article describes the design and implementation of the course Introduction to Computational Thinking (PC-01) for primary and secondary education. The course introduces the \u201cconcepts\u201d and core \u201cprocesses\u201d of Computational Thinking using a visual programming environment. The design of the course PC-01 includes a set of multimedia content, learning tools and technologies that help teachers and students in the teaching-learning process. These tools allow the teacher the successful delivery of the course content as well as the individual/collective monitoring of students' progress in the course. Students have access to their progress throughout the course and participate in creative and collaborative tasks. This course also serves to introduce transversely educational technologies to many students of primary and secondary education. The technological environment integrates the online teaching resources and the methodological tools of the course. The course uses Scratch 2.0 as the programming environment and Moodle as the learning platform. At present the course is being implemented in public schools in the Dominican Republic."}
{"_id": "e999c63026736b3e2b2f1f59f4441a875e502149", "title": "Soft Switching Circuit for Interleaved Boost Converters", "text": "A zero-voltage switching-zero-current switching interleaved boost converter is proposed in this paper. An active circuit branch in parallel with the main switches is added and it is composed of an auxiliary switch and a snubber capacitor. By using this interleaved converter topology, zero current turn-on and zero voltage turn-off of the main switches can be achieved and the reverse-recovery loss of boost diode can be reduced. In addition, the auxiliary switches are zero-voltage transmission during the whole switching transition. A prototype of boost converter rated at 1.2kW has been built to confirm the effectiveness of the converter"}
{"_id": "4b48e7dbbbf337175f5052c0d79c837289dbc25a", "title": "Automatic artifact removal from EEG - a mixed approach based on double blind source separation and support vector machine", "text": "Electroencephalography (EEG) recordings are often obscured by physiological artifacts that can render huge amounts of data useless and thus constitute a key challenge in current brain-computer interface research. This paper presents a new algorithm that automatically and reliably removes artifacts from EEG based on blind source separation and support vector machine. Performance on a motor imagery task is compared for artifact-contaminated and preprocessed signals to verify the accuracy of the proposed approach. The results showed improved results over all datasets. Furthermore, the online applicability of the algorithm is investigated."}
{"_id": "e2bcc7d137c3d61b038f1abf54c96d044f516fe2", "title": "An End-to-End Multitask Learning Model for Fact Checking", "text": "With huge amount of information generated every day on the web, fact checking is an important and challenging task which can help people identify the authenticity of most claims as well as providing evidences selected from knowledge source like Wikipedia. Here we decompose this problem into two parts: an entity linking task (retrieving relative Wikipedia pages) and recognizing textual entailment between the claim and selected pages. In this paper, we present an end-to-end multi-task learning with bi-direction attention (EMBA) model to classify the claim as \u201csupports\u201d, \u201crefutes\u201d or \u201cnot enough info\u201d with respect to the pages retrieved and detect sentences as evidence at the same time. We conduct experiments on the FEVER (Fact Extraction and VERification) paper test dataset and shared task test dataset, a new public dataset for verification against textual sources. Experimental results show that our method achieves comparable performance compared with the baseline system."}
{"_id": "f3fa1f32535a1725ca3074482aa8304bdd385064", "title": "OctoMap : A Probabilistic , Flexible , and Compact 3 D Map Representation for Robotic Systems", "text": "In this paper, we present an approach for modeling 3D environments based on octrees using a probabilistic occupancy estimation. Our technique is able to represent full 3D models including free and unknown areas. It is available as an open-source library to facilitate the development of 3D mapping systems. We also provide a detailed review of existing approaches to 3D modeling. Our approach was thoroughly evaluated using different real-world and simulated datasets. The results demonstrate that our approach is able to model the data probabilistically while, at the same time, keeping the memory requirement at a minimum."}
{"_id": "e189b162d573137cc9da570c14575f6f7f785ce0", "title": "Proximal Tibiofibular Joint Instability and Treatment Approaches: A Systematic Review of the Literature.", "text": "PURPOSE\nTo evaluate the treatment options, outcomes, and complications associated with proximal tibiofibular joint (PTFJ) instability, which will aim to improve surgical treatment of PTFJ instability and aid surgeons in their decision making and treatment selection.\n\n\nMETHODS\nA systematic review was performed according to Preferred Reporting Items for Systematic Review and Meta-Analysis guidelines. Inclusion criteria were as follows: PTFJ instability treatment techniques, PTFJ surgical outcomes, English language, and human studies. Exclusion criteria were cadaveric studies, animal studies, basic science articles, editorial articles, review articles, and surveys. Furthermore, we excluded studies that did not report patient follow-up time and studies without any patient-reported, clinical or radiographic outcomes at the final follow-up.\n\n\nRESULTS\nThe systematic review identified 44 studies (96 patients) after inclusion and exclusion criteria application. For the treatment of PTFJ instability, there were 18 studies (35 patients) describing nonoperative management, 3 studies (4 patients) reported on open reduction, 11 studies (25 patients) reported on fixation, 4 studies (10 patients) that described proximal fibula resection, 3 studies (11 patients) reported on adjustable cortical button repair, 2\u00a0studies (3 patients) reported on ligament reconstructions, and 5 (8 patients) studies reported on biceps femoris tendon rerouting. The most (77% to 90%) PTFJ dislocations and instability were anterolateral/unspecified anterior dislocation or instability. Improved outcomes after all forms of PTFJ instability treatment were reported; however, high complication rates were associated with both PTFJ fixation (28%) and fibular head resection (20%).\n\n\nCONCLUSIONS\nImproved outcomes can be expected after surgical treatment of PTFJ instability. Proximal tibiofibular ligament reconstruction, specifically biceps rerouting and anatomic graft reconstruction, leads to improved outcomes with low complication rates. Nonoperative treatment is associated with persistent symptoms, whereas both fixation and fibular head resection are associated with high complication rates.\n\n\nLEVEL OF EVIDENCE\nLevel IV, systematic review of level IV studies."}
{"_id": "71245f9d9ba0317f78151698dc1ddba7583a3afd", "title": "Knowledge Graph Embedding with Numeric Attributes of Entities", "text": "Knowledge Graph (KG) embedding projects entities and relations into low dimensional vector space, which has been successfully applied in KG completion task. The previous embedding approaches only model entities and their relations, ignoring a large number of entities\u2019 numeric attributes in KGs. In this paper, we propose a new KG embedding model which jointly model entity relations and numeric attributes. Our approach combines an attribute embedding model with a translation-based structure embedding model, which learns the embeddings of entities, relations, and attributes simultaneously. Experiments of link prediction on YAGO and Freebase show that the performance is effectively improved by adding entities\u2019 numeric attributes in the embedding model."}
{"_id": "6aa3d8bcca2ebdc52ef7cd786204c338f9d609f2", "title": "Improving Distributional Similarity with Lessons Learned from Word Embeddings", "text": "Recent trends suggest that neural-network-inspired word embedding models outperform traditional count-based distributional models on word similarity and analogy detection tasks. We reveal that much of the performance gains of word embeddings are due to certain system design choices and hyperparameter optimizations, rather than the embedding algorithms themselves. Furthermore, we show that these modifications can be transferred to traditional distributional models, yielding similar gains. In contrast to prior reports, we observe mostly local or insignificant performance differences between the methods, with no global advantage to any single approach over the others."}
{"_id": "163878ec1df0f222b76e5d252922960da9a2527b", "title": "Adolescence-limited and life-course-persistent antisocial behavior: a developmental taxonomy.", "text": "A dual taxonomy is presented to reconcile 2 incongruous facts about antisocial behavior: (a) It shows impressive continuity over age, but (b) its prevalence changes dramatically over age, increasing almost 10-fold temporarily during adolescence. This article suggests that delinquency conceals 2 distinct categories of individuals, each with a unique natural history and etiology: A small group engages in antisocial behavior of 1 sort or another at every life stage, whereas a larger group is antisocial only during adolescence. According to the theory of life-course-persistent antisocial behavior, children's neuropsychological problems interact cumulatively with their criminogenic environments across development, culminating in a pathological personality. According to the theory of adolescence-limited antisocial behavior, a contemporary maturity gap encourages teens to mimic antisocial behavior in ways that are normative and adjustive."}
{"_id": "6e4234b46c94bbc124739bf88ce099ea4d719042", "title": "A hot/cool-system analysis of delay of gratification: dynamics of willpower.", "text": "A 2-system framework is proposed for understanding the processes that enable--and undermine--self-control or \"willpower\" as exemplified in the delay of gratification paradigm. A cool, cognitive \"know\" system and a hot, emotional \"go\" system are postulated. The cool system is cognitive, emotionally neutral, contemplative, flexible, integrated, coherent, spatiotemporal, slow, episodic, and strategic. It is the seat of self-regulation and self-control. The hot system is the basis of emotionality, fears as well as passions--impulsive and reflexive--initially controlled by innate releasing stimuli (and, thus, literally under \"stimulus control\"): it is fundamental for emotional (classical) conditioning and undermines efforts at self-control. The balance between the hot and cool systems is determined by stress, developmental level, and the individual's self-regulatory dynamics. The interactions between these systems allow explanation of findings on willpower from 3 decades of research."}
{"_id": "1de4966482722d1f7da8bbef16d108639d9a1c38", "title": "Risk and Rationality in Adolescent Decision Making: Implications for Theory, Practice, and Public Policy.", "text": "Crime, smoking, drug use, alcoholism, reckless driving, and many other unhealthy patterns of behavior that play out over a lifetime often debut during adolescence. Avoiding risks or buying time can set a different lifetime pattern. Changing unhealthy behaviors in adolescence would have a broad impact on society, reducing the burdens of disease, injury, human suffering, and associated economic costs. Any program designed to prevent or change such risky behaviors should be founded on a clear idea of what is normative (what behaviors, ideally, should the program foster?), descriptive (how are adolescents making decisions in the absence of the program?), and prescriptive (which practices can realistically move adolescent decisions closer to the normative ideal?). Normatively, decision processes should be evaluated for coherence (is the thinking process nonsensical, illogical, or self-contradictory?) and correspondence (are the outcomes of the decisions positive?). Behaviors that promote positive physical and mental health outcomes in modern society can be at odds with those selected for by evolution (e.g., early procreation). Healthy behaviors may also conflict with a decision maker's goals. Adolescents' goals are more likely to maximize immediate pleasure, and strict decision analysis implies that many kinds of unhealthy behavior, such as drinking and drug use, could be deemed rational. However, based on data showing developmental changes in goals, it is important for policy to promote positive long-term outcomes rather than adolescents' short-term goals. Developmental data also suggest that greater risk aversion is generally adaptive, and that decision processes that support this aversion are more advanced than those that support risk taking. A key question is whether adolescents are developmentally competent to make decisions about risks. In principle, barring temptations with high rewards and individual differences that reduce self-control (i.e., under ideal conditions), adolescents are capable of rational decision making to achieve their goals. In practice, much depends on the particular situation in which a decision is made. In the heat of passion, in the presence of peers, on the spur of the moment, in unfamiliar situations, when trading off risks and benefits favors bad long-term outcomes, and when behavioral inhibition is required for good outcomes, adolescents are likely to reason more poorly than adults do. Brain maturation in adolescence is incomplete. Impulsivity, sensation seeking, thrill seeking, depression, and other individual differences also contribute to risk taking that resists standard risk-reduction interventions, although some conditions such as depression can be effectively treated with other approaches. Major explanatory models of risky decision making can be roughly divided into (a) those, including health-belief models and the theory of planned behavior, that adhere to a \"rational\" behavioral decision-making framework that stresses deliberate, quantitative trading off of risks and benefits; and (b) those that emphasize nondeliberative reaction to the perceived gists or prototypes in the immediate decision environment. 
(A gist is a fuzzy mental representation of the general meaning of information or experience; a prototype is a mental representation of a standard or typical example of a category.) Although perceived risks and especially benefits predict behavioral intentions and risk-taking behavior, behavioral willingness is an even better predictor of susceptibility to risk taking-and has unique explanatory power-because adolescents are willing to do riskier things than they either intend or expect to do. Dual-process models, such as the prototype/willingness model and fuzzy-trace theory, identify two divergent paths to risk taking: a reasoned and a reactive route. Such models explain apparent contradictions in the literature, including different causes of risk taking for different individuals. Interventions to reduce risk taking must take into account the different causes of such behavior if they are to be effective. Longitudinal and experimental research are needed to disentangle opposing causal processes-particularly, those that produce positive versus negative relations between risk perceptions and behaviors. Counterintuitive findings that must be accommodated by any adequate theory of risk taking include the following: (a) Despite conventional wisdom, adolescents do not perceive themselves to be invulnerable, and perceived vulnerability declines with increasing age; (b) although the object of many interventions is to enhance the accuracy of risk perceptions, adolescents typically overestimate important risks, such as HIV and lung cancer; (c) despite increasing competence in reasoning, some biases in judgment and decision making grow with age, producing more \"irrational\" violations of coherence among adults than among adolescents and younger children. The latter occurs because of a known developmental increase in gist processing with age. One implication of these findings is that traditional interventions stressing accurate risk perceptions are apt to be ineffective or backfire because young people already feel vulnerable and overestimate their risk. In addition, research shows that experience is not a good teacher for children and younger adolescents, because they tend to learn little from negative outcomes (favoring the use of effective deterrents, such as monitoring and supervision), although learning from experience improves considerably with age. Experience in the absence of negative consequences may increase feelings of invulnerability and thus explain the decrease in risk perceptions from early to late adolescence, as exploration increases. Finally, novel interventions that discourage deliberate weighing of risks and benefits by adolescents may ultimately prove more effective and enduring. Mature adults apparently resist taking risks not out of any conscious deliberation or choice, but because they intuitively grasp the gists of risky situations, retrieve appropriate risk-avoidant values, and never proceed down the slippery slope of actually contemplating tradeoffs between risks and benefits."}
{"_id": "a72058dce5770231f67bfe5f7a254c6e3cbf2e63", "title": "Adolescent sexual risk behavior: a multi-system perspective.", "text": "Adolescents are at high risk for a number of negative health consequences associated with early and unsafe sexual activity, including infection with human immunodeficiency virus, other sexually transmitted diseases, and unintended pregnancy. As a result, researchers have attempted to identify those factors that influence adolescent sexual risk behavior so that meaningful prevention and intervention programs may be developed. We propose that research efforts so far have been hampered by the adoption of models and perspectives that are narrow and do not adequately capture the complexity associated with the adolescent sexual experience. In this article, we review the recent literature (i.e., 1990-1999) pertaining to the correlates of adolescent sexual risk-taking, and organize the findings into a multisystemic perspective. Factors from the self, family, and extrafamilial systems of influence are discussed. We also consider several methodological problems that limit the literature's current scope, and consider implications of the adoption of a multisystemic framework for future research endeavors. We conclude with a discussion of the implications of the available research for practitioners working to reduce sexual risk behavior among adolescents."}
{"_id": "bde108937b92937c71e9da68eadafbf4048ebb19", "title": "Separation of Fine Granular Mixtures in S-Plate-Type Electrostatic Separators", "text": "Plate-type electrostatic separators are commonly used for the selective sorting of conductive from nonconductive constituents of a granular mixture. This paper addresses the more delicate issue of granular materials that are characterized by finite nonzero resistivities. The study was conducted with three materials having similar granule size range and specific weight but slightly different electrical properties. Two sets of experiments were performed for evaluating the effect of electrode geometry on the behavior of each of the materials. These results were then employed for predicting the outcome of the electrostatic separation of binary mixtures of good and poor conductors. A third set of experiments validated the predictions and pointed out a couple of nonelectric factors that might deteriorate the performance of a plate-type electrostatic separator in an industrial environment."}
{"_id": "99b403db3efa6fc1a9a5ba7b9a82fed629da943a", "title": "Design of Permanent Magnet-Assisted Synchronous Reluctance Motors Made Easy", "text": "Electric motor design is a multi-variable problem which involves geometric dimensions of the stator and rotor. Presenting a unique solution for a family of optimization criteria has always been a challenge for motor designers. Several numerical tools such as finite element methods (FEM) have been developed to perform a precise analysis and predict the outcome of the design. However, limits in parametric analysis as well as mathematical and computational burden on numerical tools usually prohibit the designer in obtaining a unique solution for the design problem. These limits and demands in optimized solutions motivate the designer to use analytical models in order to perform a comprehensive parametric design. An accurate analytical model is crucial for this purpose. In this paper, an analytical model for permanent magnet assisted synchronous reluctance motor (PMa- SynRM) with four flux barriers and one cutout per pole is developed. Flux densities are found in the air gap, in the cutouts, and in the flux barriers; thus, the back-EMF developed by the permanent magnets is derived. Equations for the d-axis and the q-axis inductances are also obtained. Electromagnetic torque is finally derived using the co-energy method. The developed analytical model highlights the contribution of the reluctance variation and permanent magnets on the developed torque. Simulation results are obtained using both Matlab and Ansoft/Maxwell packages. These outcomes are supported by the experimental results obtained from a laboratory test bed."}
{"_id": "4cc383e0192eb69c9074e3585f2e2f2ad522e5af", "title": "Effective Value of Decision Tree with KDD 99 Intrusion Detection Datasets for Intrusion Detection System", "text": "A decision tree is a outstanding method for the data mining. In intrusion detection systems (IDSs), the data mining techniques are useful to detect the attack especially in anomaly detection. For the decision tree, we use the DARPA 98 Lincoln Laboratory Evaluation Data Set (DARPA Set) as the training data set and the testing data set. KDD 99 Intrusion Detection data set is also based on the DARPA Set. These three entities are widely used in IDSs. Hence, we describe the total process to generate the decision tree learned from the DARPA Sets. In this paper, we also evaluate the effective value of the decision tree as the data mining method for the IDSs, and the DARPA Set as the learning data set for the decision trees."}
{"_id": "653ede53fd5a52405403d5b72b3ab6afbacaed3d", "title": "A highly area-efficient controller for capacitive touch screen panel systems", "text": "In this paper, a highly area-efficient controller for capacitive touch screen panels (TSPs) is proposed. The proposed controller uses a 10-bit successive approximation register analog-to-digital converter (SAR ADC) with an adder to compensate for the capacitance variation in the TSP and for the offset voltage variation in the charge amplifier of the sensing circuit. By using the proposed compensation method, the area of the controller can be reduced by 90.3% of the area of the conventional controllers. The measurement results showed that the signal-to-noise ratio (SNR) of the controller increases from 12.5 to 21.3 dB after compensation. Also, its spatial jitter decreases from \u00b11.5 to \u00b10.46 mm, which is 7% of the sensor pitch of 8 mm."}
{"_id": "c9f2908346cd208b0118432cefb09c8396fe96e9", "title": "Could Data from Location-Based Social Networks Be Used to Support Urban Planning?", "text": "A great quantity of information is required to support urban planning. Usually there are many (not integrated) data sources, originating from different government bodies, in distinct formats and variable properties (e.g. reliability, completeness). The effort to handle these data, integrate and analyze them is high, taking to much time for the information to be available to help decision making. We argue that data from location-based social networks (LBSN) could be used to provide useful information in reasonable time, despite several limitations they have. To asses this, as a case study, we used data from different LBSN to calculate the Local Availability Index (IOL) for a Brazilian city. This index is part of a methodology to estimate quality of urban life inside cities and is used to support urban planning. The results suggest that data from LBSN are useful and could be used to provide insights for local governments."}
{"_id": "3fdd39c693d5a12df62ea39760d7c85e91b7a75a", "title": "IO Performance Interference among Consolidated n-Tier Applications: Sharing Is Better Than Isolation for Disks", "text": "The performance unpredictability associated with migrating applications into cloud computing infrastructures has impeded this migration. For example, CPU contention between co-located applications has been shown to exhibit counter-intuitive behavior. In this paper, we investigate IO performance interference through the experimental study of consolidated n-tier applications leveraging the same disk. Surprisingly, we found that specifying a specific disk allocation, e.g., limiting the number of Input/Output Operations Per Second (IOPs) per VM, results in significantly lower performance than fully sharing disk across VMs. Moreover, we observe severe performance interference among VMs can not be totally eliminated even with a sharing strategy (e.g., response times for constant workloads still increase over 1,100%). By using a micro-benchmark (Filebench) and an n-tier application benchmark systems (RUBBoS), we demonstrate the existence of disk contention in consolidated environments, and how performance loss occurs when co-located database systems in order to maintain database consistency flush their logs from memory to disk. Potential solutions to these isolation issues are (1) to increase the log buffer size to amortize the disk IO cost (2) to decrease the number of write threads to alleviate disk contention. We validate these methods experimentally and find a 64% and 57% reduction in response time (or more generally, a reduction in performance interference) for constant and increasing workloads respectively."}
{"_id": "660e8111b9014a95888ef33073c04f08b7c1bb4d", "title": "Performance of MIMO spatial multiplexing algorithms using indoor channel measurements and models", "text": "Several algorithms have recently been proposed for multiplexing multiple users in multiple input multiple output (MIMO) downlinks. The ability of a transmitter to accomplish this using spatial methods is generally dependent on whether the users\u2019 channels are correlated. Up to this point, most of the multiplexing algorithms have been tested on uncorrelated Gaussian channels, a best-case scenario. In this paper, we examine the performance of multiplexing multiple users under more realistic channel conditions by using indoor channel measurements and a statistical model for test cases. We use a block zero-forcing algorithm to test performance at various user separation distances, optimizing for both maximum throughput under a power constraint and minimum power under a rate constraint. The results show that for the measured indoor environment (rich scattering, non-line-of-sight), a separation of five wavelengths is enough to achieve close to the maximum available performance for two users. Since many spatial multiplexing algorithms require channel state information (CSI) at the transmitter, we also examine the performance loss due to CSI error. The results show that a user can move up to one-half wavelength before the original channel measurement becomes unusable. Copyright # 2004 John Wiley & Sons, Ltd."}
{"_id": "1d8a1baa939e516545eb74d4308b312d4845977f", "title": "Shilling attack detection utilizing semi-supervised learning method for collaborative recommender system", "text": "Collaborative filtering (CF) technique is capable of generating personalized recommendations. However, the recommender systems utilizing CF as their key algorithms are vulnerable to shilling attacks which insert malicious user profiles into the systems to push or nuke the reputations of targeted items. There are only a small number of labeled users in most of the practical recommender systems, while a large number of users are unlabeled because it is expensive to obtain their identities. In this paper, Semi-SAD, a new semi-supervised learning based shilling attack detection algorithm is proposed to take advantage of both types of data. It first trains a na\u00efve Bayes classifier on a small set of labeled users, and then incorporates unlabeled users with EM-\u03bb to improve the initial na\u00efve Bayes classifier. Experiments on MovieLens datasets are implemented to compare the efficiency of Semi-SAD with supervised learning based detector and unsupervised learning based detector. The results indicate that Semi-SAD can better detect various kinds of shilling attacks than others, especially against obfuscated and hybrid shilling attacks."}
{"_id": "1c583cf878977caad8ff21355444814168a70aaa", "title": "A SMART Stochastic Algorithm for Nonconvex Optimization with Applications to Robust Machine Learning", "text": "In this paper, we show how to transform any optimization problem that arises from fitting a machine learning model into one that (1) detects and removes contaminated data from the training set while (2) simultaneously fitting the trimmed model on the uncontaminated data that remains. To solve the resulting nonconvex optimization problem, we introduce a fast stochastic proximal-gradient algorithm that incorporates prior knowledge through nonsmooth regularization. For datasets of size n, our approach requires O(n/\u03b5) gradient evaluations to reach \u03b5-accuracy and, when a certain error bound holds, the complexity improves to O(\u03ban log(1/\u03b5)). These rates are n times better than those achieved by typical, full gradient methods."}
{"_id": "46d29ee2b97362299ef83c06ffc4461906f1ccda", "title": "It\u2019s Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation", "text": "Eye gaze is an important non-verbal cue for human affect analysis. Recent gaze estimation work indicated that information from the full face region can benefit performance. Pushing this idea further, we propose an appearance-based method that, in contrast to a long-standing line of work in computer vision, only takes the full face image as input. Our method encodes the face image using a convolutional neural network with spatial weights applied on the feature maps to flexibly suppress or enhance information in different facial regions. Through extensive evaluation, we show that our full-face method significantly outperforms the state of the art for both 2D and 3D gaze estimation, achieving improvements of up to 14.3% on MPIIGaze and 27.7% on EYEDIAP for person-independent 3D gaze estimation. We further show that this improvement is consistent across different illumination conditions and gaze directions and particularly pronounced for the most challenging extreme head poses."}
{"_id": "018e730f8947173e1140210d4d1760d05c9d3854", "title": "Zero Shot Recognition with Unreliable Attributes", "text": "In principle, zero-shot learning makes it possible to train a recognition model simply by specifying the category\u2019s attributes. For example, with classifiers for generic attributes like striped and four-legged, one can construct a classifier for the zebra category by enumerating which properties it possesses\u2014even without providing zebra training images. In practice, however, the standard zero-shot paradigm suffers because attribute predictions in novel images are hard to get right. We propose a novel random forest approach to train zero-shot models that explicitly accounts for the unreliability of attribute predictions. By leveraging statistics about each attribute\u2019s error tendencies, our method obtains more robust discriminative models for the unseen classes. We further devise extensions to handle the few-shot scenario and unreliable attribute descriptions. On three datasets, we demonstrate the benefit for visual category learning with zero or few training examples, a critical domain for rare categories or categories defined on the fly."}
{"_id": "1dab435f6e6450770784c83fb400a0b57b0cfcbd", "title": "Switchboard: a matchmaking system for multiplayer mobile games", "text": "Supporting interactive, multiplayer games on mobile phones over cellular networks is a difficult problem. It is particularly relevant now with the explosion of mostly single-player or turn-based games on mobile phones. The challenges stem from the highly variable performance of cellular networks and the need for scalability (not burdening the cellular infrastructure, nor any server resources that a game developer deploys). We have built a service for matchmaking in mobile games -- assigning players to games such that game settings are satisfied as well as latency requirements for an enjoyable game. This requires solving two problems. First, the service needs to know the cellular network latency between game players. Second, the service needs to quickly group players into viable game sessions. In this paper, we present the design of our service, results from our experiments on predicting cellular latency, and results from efficiently grouping players into games."}
{"_id": "7b2dc21401b867a9b80d735a2ae44457e36c63bd", "title": "Bistatic Synthetic Aperture Radar Imaging for Arbitrary Flight Trajectories", "text": "In this paper, we present an analytic, filtered backprojection (FBP) type inversion method for bistatic synthetic aperture radar (BISAR). We consider a BISAR system where a scene of interest is illuminated by electromagnetic waves that are transmitted, at known times, from positions along an arbitrary, but known, flight trajectory and the scattered waves are measured from positions along a different flight trajectory which is also arbitrary, but known. We assume a single-scattering model for the radar data, and we assume that the ground topography is known but not necessarily flat. We use microlocal analysis to develop the FBP-type reconstruction method. We analyze the computational complexity of the numerical implementation of the method and present numerical simulations to demonstrate its performance."}
{"_id": "7d07425a77f3042264be780bc8319a6fd8134502", "title": "Modelling of IoT Traffic and Its Impact on LoRaWAN", "text": "The recent growth of IoT use-cases in a wide array of industrial, utility and environmental applications has necessitated the need for connectivity solutions with diverse requirements. Connectivity through BLE, Zigbee and 6LoPAN are examples of short-range IoT deployments. But to provide connectivity to a high density of devices over larger coverage areas, Low-Power Wide-Area Network (LPWAN) technologies in both licensed as well as unlicensed bands have been considered. In this paper, we consider modelling the traffic from IoT devices connected through LPWAN technologies. Due to diverse applications of IoT, it is not trivial to have a single traffic model to represent all of them, but the traffic can be broadly classified as either periodic, event-triggered, or a combination of both. We evaluate the performance of LoRaWAN, one such LPWAN technology, in the presence of a hybrid of both traffic types, where the event propagates spatially over time. In a practical deployment of sensor based IoT devices, the devices are usually densely deployed to ensure sufficient & reliable measurement. Thereby, when an event occurs, they exhibit spatial & temporal correlation in their traffic rate due to the natural phenomena of the metric they measure. We use the CMMPP model to represent such characteristic traffic from independent IoT devices triggered by an event. The characteristics of LoRa, the physical layer of LoRaWAN, is abstracted based on required signal strength and interference thresholds for different modulation parameters. Through system simulations, we demonstrate that there is a significant performance hit in LoRaWAN based networks, during the occurrence of events. In particular, using the packet delivery rate (PDR) as the metric, we found that while the system was able to handle regular updates from the devices with a PDR > 80%, event-driven traffic nearly impaired the network causing the PDR to drop below 10%."}
{"_id": "88fc1622d9964c43b28810c7db5eedf6d26c2447", "title": "Multi-Level Sequence GAN for Group Activity Recognition", "text": "We propose a novel semi supervised, Multi Level Sequential Generative Adversarial Network (MLS-GAN ) architecture for group activity recognition. In contrast to previous works which utilise manually annotated individual human action predictions, we allow the models to learn it\u2019s own internal representations to discover pertinent subactivities that aid the final group activity recognition task. The generator is fed with person-level and scene-level features that are mapped temporally through LSTM networks. Action-based feature fusion is performed through novel gated fusion units that are able to consider long-term dependancies, exploring the relationships among all individual actions, to learn an intermediate representation or \u2018action code\u2019 for the current group activity. The network achieves it\u2019s semi-supervised behaviour by allowing it to perform group action classification together with the adversarial real/fake validation. We perform extensive evaluations on different architectural variants to demonstrate the importance of the proposed architecture. Furthermore, we show that utilising both person-level and scene-level features facilitates the group activity prediction better than using only person-level features. Our proposed architecture outperforms current state-of-the-art results for sports and pedestrian based classification tasks on Volleyball and Collective Activity datasets, showing it\u2019s flexible nature for effective learning of group activities. 1"}
{"_id": "d556f50b4cd41a140f52008860e3ab18552b4d6c", "title": "Analyzing User Modeling on Twitter for Personalized News Recommendations", "text": "How can micro-blogging activities on Twitter be leveraged for user modeling and personalization? In this paper we investigate this question and introduce a framework for user modeling on Twitter which enriches the semantics of Twitter messages (tweets) and identifies topics and entities (e.g. persons, events, products) mentioned in tweets. We analyze how strategies for constructing hashtag-based, entity-based or topic-based user profiles benefit from semantic enrichment and explore the temporal dynamics of those profiles. We further measure and compare the performance of the user modeling strategies in context of a personalized news recommendation system. Our results reveal how semantic enrichment enhances the variety and quality of the generated user profiles. Further, we see how the different user modeling strategies impact personalization and discover that the consideration of temporal profile patterns can improve recommendation quality."}
{"_id": "0ebea909e08dbaa853aeda8be5ea930dc996b143", "title": "Project massive: self-regulation and problematic use of online gaming", "text": "A longitudinal design was employed to collect three waves of survey data over a 14 month period from 2790 online gamers. Respondents were asked questions about their gaming activity, motivations, personality, social and emotional environment, and the effect gaming has had on their lives. Prospective analysis was used to establish causal and temporal linkages among the repeatedly measured factors. While the data provide some indication that a player's reasons for playing do influence the development of problematic usage, these effects are overshadowed by the central importance of self-regulation in managing both the timing and amount of play. An individual's level of self-regulatory activity is shown to be very important in allowing them to avoid negative outcomes like problematic use. The role of depression is also discussed. With responsible use, online gaming appears to be a healthy recreational activity that provides millions of people with hours of social entertainment and adaptive diversion. However, failure to manage play behavior can lead to feelings of dependency."}
{"_id": "d7e52a6a53147077183260a4ad029a75ab8e9e87", "title": "Identifying Users With Alternate Behaviors of Lurking and Active Participation in Multilayer Social Networks", "text": "With the growing complexity of scenarios relating to online social networks (OSNs), there is an emergence of effective models and methods for understanding the characteristics and dynamics of multiple interconnected types of user relations. Profiles on different OSNs belonging to the same user can be linked using the multilayer structure, opening to unprecedented opportunities for user behavior analysis in a complex system. In this paper, we leverage the importance of studying the dichotomy between information-producers (contributors) and information-consumers (lurkers), and their interplay over a multilayer network, in order to effectively analyze such different roles a user may take on different OSNs. In this respect, we address the novel problem of identification and characterization of opposite behaviors that users may alternately exhibit over multiple layers of a complex network. We propose the first ranking method for alternate lurker-contributor behaviors on a multilayer OSN, dubbed mlALCR. Performance of mlALCR has been assessed quantitatively as well as qualitatively, and comparatively against methods designed for ranking either contributors or lurkers, on four real-world multilayer networks. Empirical evidence shows the significance and uniqueness of mlALCR in being able to mine alternate lurker-contributor behaviors over different layer networks."}
{"_id": "ea284083954b3fef48f7517a98a920822d145340", "title": "Low-Voltage Super class AB CMOS OTA cells with very high slew rate and power efficiency", "text": "A simple technique to achieve low-voltage power-efficient class AB operational transconductance amplifiers (OTAs) is presented. It is based on the combination of class AB differential input stages and local common-mode feedback (LCMFB) which provides additional dynamic current boosting, increased gain-bandwidth product (GBW), and near-optimal current efficiency. LCMFB is applied to various class AB differential input stages, leading to different class AB OTA topologies. Three OTA realizations based on this technique have been fabricated in a 0.5-/spl mu/m CMOS technology. For an 80-pF load they show enhancement factors of slew rate and GBW of up to 280 and 3.6, respectively, compared to a conventional class A OTA with the same 10-/spl mu/A quiescent currents and /spl plusmn/1-V supply voltages. In addition, the overhead in terms of common-mode input range, output swing, silicon area, noise, and static power consumption, is minimal."}
{"_id": "d278da6edddd56001c991a48279422b9a623d8ce", "title": "Trapped in the net: will internet addiction become a 21st-century epidemic?", "text": ""}
{"_id": "b50c2539621c2b7f86cca97ffaa6d12e10f7b270", "title": "Human gut microbiota in obesity and after gastric bypass.", "text": "Recent evidence suggests that the microbial community in the human intestine may play an important role in the pathogenesis of obesity. We examined 184,094 sequences of microbial 16S rRNA genes from PCR amplicons by using the 454 pyrosequencing technology to compare the microbial community structures of 9 individuals, 3 in each of the categories of normal weight, morbidly obese, and post-gastric-bypass surgery. Phylogenetic analysis demonstrated that although the Bacteria in the human intestinal community were highly diverse, they fell mainly into 6 bacterial divisions that had distinct differences in the 3 study groups. Specifically, Firmicutes were dominant in normal-weight and obese individuals but significantly decreased in post-gastric-bypass individuals, who had a proportional increase of Gammaproteobacteria. Numbers of the H(2)-producing Prevotellaceae were highly enriched in the obese individuals. Unlike the highly diverse Bacteria, the Archaea comprised mainly members of the order Methanobacteriales, which are H(2)-oxidizing methanogens. Using real-time PCR, we detected significantly higher numbers of H(2)-utilizing methanogenic Archaea in obese individuals than in normal-weight or post-gastric-bypass individuals. The coexistence of H(2)-producing bacteria with relatively high numbers of H(2)-utilizing methanogenic Archaea in the gastrointestinal tract of obese individuals leads to the hypothesis that interspecies H(2) transfer between bacterial and archaeal species is an important mechanism for increasing energy uptake by the human large intestine in obese persons. The large bacterial population shift seen in the post-gastric-bypass individuals may reflect the double impact of the gut alteration caused by the surgical procedure and the consequent changes in food ingestion and digestion."}
{"_id": "4093b4431a5d3b753fe4f912a88d7b8836de3855", "title": "Sizing equation and Finite Element Analysis optimum design of axial-flux permanent-magnet motor for electric vehicle direct drive", "text": "This paper presents the design process of a slotted TORUS axial-flux permanent-magnet motor suitable for direct drive of an electric vehicle through sizing equation and Finite Element Analysis. AFPM motor is a high torque density motor which can be easily and compactly mounted into the automobile's wheel fitting the rim perfectly. A double-sided slotted AFPM motor with 6 rotor poles for high torque density and stable rotation is the preliminary design. In order to determine the design requirements, a simple vehicle dynamic model evaluating the vehicle performance considering an automobile's typical-trip cursing scenario is considered. Axial-flux permanent-magnet machine's fundamental theory and sizing equation are applied to obtain the initial design parameters of the motor with the highest possible torque. The FEA of designed motor was conducted via commercial Vector Field Opera-3D 14.0 software for evaluation and accuracy enhancement of the design parameters. FEA simulation results are compared with those results obtained from sizing equation showing a good agreement of flux density values in various parts of the designed motor at no-load condition. The motor meets all the requirements and limitations of the electric vehicle fitting the shape and the size of a classical rim of the vehicle wheel."}
{"_id": "6ba673470771d3dd47100126db14fde18c8e0cf9", "title": "Generic and unified model of Switched Capacitor Converters", "text": "A generic modeling methodology that analyzes the losses in Switched Capacitors Converters (SCC) was developed and verified by simulation and experiments. The proposed analytical approach is unified, covering both hard and soft switched SCC topologies. The major advantage of the proposed model is that it expresses the losses as a function of the currents passing through each flying capacitor. Since these currents are linearly proportional to the output current, the model is also applicable to SCC with multiple capacitors. The proposed model provides an insight into the expected losses in SCC and the effects of their operational conditions such as duty cycle. As such, the model can help in the optimization of SCC systems and their control to achieve desired regulations."}
{"_id": "9c13d3584332e3670b73b119d6661bccb10e240e", "title": "Technical Committees Switched-Capacitor Power Electronics Circuits By", "text": "One of the main orientations in power electronics in the last decade has been the development of switching-mode converters without inductors and transformers. Light weight, small size and high power density are the result of using only switches and capacitors in the power stage of these converters. Thus, they serve as ideal power supplies for mobile electronic systems (e.g. cellular phones, personal digital assistants, and so forth). Switched-capacitor (SC) converters, with their large voltage conversion ratio, promise to be a response to such challenges of the 21st century as high-efficiency converters with low EMI emissions and the ability to realize steep step-down of the voltage (to 3V or even a smaller supply voltage for integrated circuits) or steep step-up of the voltage for automotive industry or internet services in the telecom industry. This paper is a tutorial of the main results in SC-converter research and design. Switched-Capacitor Converters\u2014 A Typical Power Electronics Contribution of the CAS Society From all of the research subjects in the Power Systems and Power Electronics Circuits Technical Committee\u2014the stability of power systems, analysis of power electronics circuits (which are time-variable circuits with internally-controlled switches), chaos, modeling and simulation of converters, hardswitching and soft-switching converters, and so forth\u2014the SCconverter is probably the most pertinent contribution of the CAS Society to power electronics. In fact, most of the researchers in this field belong to our Society\u2019s technical committee and most of the breakthrough contributions on this subject have appeared in CAS publications. The reason for this is clear. Pursuing small size filters in the 1950\u2019s, the circuit theory community found a solution in the elimination of bulky inductors from the structure of a passive filter. Active and then switched-capacitor filters showed the possibility of implementing the filtering function without using magnetic devices. The power electronics designer faced a similar challenge: the request for miniaturization of the power supply. This could be achieved only by eliminating the inductors and transformers. In a converter, the inductor serves the two purposes of processing energy and filtering the output voltage. The use of a switched-capacitor circuit for processing energy seemed doomed to failure because it is known that charging a capacitor from zero is achieved at a 50% efficiency. Ten years of research were needed to overcome the efficiency dilemma and to develop high-efficiency switched-capacitor energy-processing circuits. First SC Converters and Basic Principles The primary goal of any switching-mode power converter is to provide a constant (DC or AC) output voltage at its load, despite variations in the input voltage or load. A control element, therefore, has to be introduced in the process of transmitting energy so that the converter (power stage) changes its topology cyclically, and the durations of the switching topologies are adjusted for regulation purposes. The first SC converters were developed by a group of researchers from Kumamoto, Japan, who processed a DC unregulated voltage toward a DC regulated voltage [1, 2, 4]. These DC-DC converters were soon followed by AC-DC converters [3], DC-AC inverters [5] and AC-AC transformers [6]. 
By avoiding magnetic elements, these circuits, realized in a hybrid technology, featured a high power density (23W/inch). Figure 1(a). Basic SC step-down DC-DC converter. *A. Ioinovici is with the Department of Electrical and Electronics Engineering, Holon Academic Institute of Technology, Israel. R V S"}
{"_id": "fe13e79621be1fea2f6f4f37417155fb7079b05a", "title": "Unified analysis of switched-capacitor resonant converters", "text": "A family of switched-capacitor resonant circuits using only two transistors is presented. The circuit operates under zero-current switching and, therefore, the switching loss is zero. It also offers a wide choice of voltage conversions including fractional as well as multiple and inverted voltage conversion ratios."}
{"_id": "ade3287ba2c903326f584d4a2117715d8aefe0b9", "title": "Monte Carlo Strength Evaluation: Fast and Reliable Password Checking", "text": "Modern password guessing attacks adopt sophisticated probabilistic techniques that allow for orders of magnitude less guesses to succeed compared to brute force. Unfortunately, best practices and password strength evaluators failed to keep up: they are generally based on heuristic rules designed to defend against obsolete brute force attacks. Many passwords can only be guessed with significant effort, and motivated attackers may be willing to invest resources to obtain valuable passwords. However, it is eminently impractical for the defender to simulate expensive attacks against each user to accurately characterize their password strength. This paper proposes a novel method to estimate the number of guesses needed to find a password using modern attacks. The proposed method requires little resources, applies to a wide set of probabilistic models, and is characterised by highly desirable convergence properties.\n The experiments demonstrate the scalability and generality of the proposal. In particular, the experimental analysis reports evaluations on a wide range of password strengths, and of state-of-the-art attacks on very large datasets, including attacks that would have been prohibitively expensive to handle with existing simulation-based approaches."}
{"_id": "90fcb6bd123a88bc6be5ea233351f0e12d517f98", "title": "Shape from focus system", "text": ""}
{"_id": "86f4f047f3d4174ef0da6f1358f80748e1050303", "title": "Rewarded Outcomes Enhance Reactivation of Experience in the Hippocampus", "text": "Remembering experiences that lead to reward is essential for survival. The hippocampus is required for forming and storing memories of events and places, but the mechanisms that associate specific experiences with rewarding outcomes are not understood. Event memory storage is thought to depend on the reactivation of previous experiences during hippocampal sharp wave ripples (SWRs). We used a sequence switching task that allowed us to examine the interaction between SWRs and reward. We compared SWR activity after animals traversed spatial trajectories and either received or did not receive a reward. Here, we show that rat hippocampal CA3 principal cells are significantly more active during SWRs following receipt of reward. This SWR activity was further enhanced during learning and reactivated coherent elements of the paths associated with the reward location. This enhanced reactivation in response to reward could be a mechanism to bind rewarding outcomes to the experiences that precede them."}
{"_id": "c3547b957720d88c9f2a6815e868821b1e6cccd0", "title": "An unsupervised user identification algorithm using network embedding and scalable nearest neighbour", "text": "Most of the current studies on social network (SN) mainly focused on a single SN platform. Integration of SNs can provide more sufficient user behaviour data and more complete network structure, and thus is rewarding to an ocean of studies on social computing. Recognizing the identical users across SNs, or user identification, naturally bridges the SNs through users and has attracted extensive attentions. Due to the fragmentation, inconsistency and disruption of the accessible information among SNs, user identification is still an intractable problem. Different from the efforts implemented on user profiles and users\u2019 content, many studies have noticed the accessibility and reliability of network structure in most of the SNs for addressing this issue. Although substantial achievements have been made, most of the current network structure-based solutions are supervised or semi-supervised and require some given identified users or seed users. In the scenarios where seed users are hard to obtain, it is laborious to label the seed users manually. In this study, we proposed an unsupervised scheme by employing the reliability and consistence of friend relationships in different SNs, termed Unsupervised Friend Relationship-based User Identification algorithm (UFRUI). The UFRUI first models the network structure and embeds the feature of each user into a vector using network embedding technique, and then converts the user identification problem into a nearest neighbour problem. Finally, the matching user is computed using the scalable nearest neighbour algorithm. Results of experiments demonstrated that UFRUI performs much better than current state-of-art network structure-based algorithm without seed users."}
{"_id": "2f083c5aa21cbb19bffd480b33ab5e8d5bfba21e", "title": "An information theoretic approach to improve semantic similarity assessments across multiple ontologies", "text": "Semantic similarity has become, in recent years, the backbone of numerous knowledge-based applications dealing with textual data. From the different methods and paradigms proposed to assess semantic similarity, ontology-based measures and, more specifically, those based on quantifying the Information Content (IC) of concepts are the most widespread solutions due to their high accuracy. However, these measures were designed to exploit a single ontology. They thus cannot be leveraged in many contexts in which multiple knowledge bases are considered. In this paper, we propose a new approach to achieve accurate IC-based similarity assessments for concept pairs spread throughout several ontologies. Based on Information Theory, our method defines a strategy to accurately measure the degree of commonality between concepts belonging to different ontologies\u2014this is the cornerstone for estimating their semantic similarity. Our approach therefore enables classic IC-based measures to be directly applied in a multiple ontology setting. An empirical evaluation, based on well-established benchmarks and ontologies related to the bio-medical domain, illustrates the accuracy of our approach, and demonstrates that similarity estimations provided by our approach are significantly more correlated with human ratings of similarity than those obtained via related works. Semantic similarity is a pillar of text understanding since it quantifies the degree of resemblance between the meanings of textual terms. In recent years, due to the marked increase in electronically available textual data, considerable effort has been focused on defining semantic measures, which have been extensively applied in various contexts such as information retrieval [54], information extraction [56], word sense disambiguation [32], data clustering [4,27], data privacy [6,45\u201347] and biomedicine (e.g. protein classification and interaction [19,57], chemical entity identification [18], identification of the communalities between brain data and linguistic data [15], etc.) to cite a few. Different knowledge sources have been used to facilitate the semantic similarity calculus, including: measures relying solely on textual corpora to estimate similarity from the degree of co-occurrence of terms [11], and measures involving structured knowledge bases such as ontologies [36,37,44], whereby the similarity calculus is based on the analysis of semantic relationships modelled between concepts. Compared to corpora-based approaches, ontology-based measures have a dual benefit: concept meanings can be"}
{"_id": "561da3e199bc53412851b75e2fd808496d6b8264", "title": "How to Remember What to Remember: Exploring Possibilities for Digital Reminder Systems", "text": "Digital reminder systems typically use time and place as triggers to remind people to perform activities. In this paper, we investigate how digital reminder systems could better support the process of remembering in a wider range of situations. We report findings from a survey and one-week diary study, which reveal that people want to remember to perform a broad spectrum of activities in the future, many of which cannot be supported by simple time- and location-based reminders. In addition to these examples of prospective memory, or \u2018remembering intentions\u2019 [53], we also find that people want support in \u2018retrieving\u2019 [53] information and details, especially those encountered through social interactions or intended for use in conversations with others. Drawing on our analysis of what people want to remember and how they try to support this, we draw implications for the design of intelligent reminder systems such as digital assistants (e.g. Microsoft\u2019s Cortana) and smart speaker systems (e.g. Amazon Echo), and highlight the possibilities afforded by drawing on conversation and giving material form to digital reminders."}
{"_id": "4b1a53d987b20630679510c351f3a554faea5281", "title": "Automatic emotional speech classification", "text": "Our purpose is to design a useful tool which can be used in psychology to automatically classify utterances into five emotional states such as anger, happiness, neutral, sadness, and surprise. The major contribution of the paper is to rate the discriminating capability of a set of features for emotional speech recognition. A total of 87 features has been calculated over 500 utterances from the Danish Emotional Speech database. The sequential forward selection method (SFS) has been used in order to discover a set of 5 to 10 features which are able to classify the utterances in the best way. The criterion used in SFS is the cross-validated correct classification score of one of the following classifiers: nearest mean and Bayes classifier where class pdf are approximated via Parzen windows or modelled as Gaussians. After selecting the 5 best features, we reduce the dimensionality to two by applying principal component analysis. The result is a 51.6% /spl plusmn/ 3% correct classification rate at 95% confidence interval for the five aforementioned emotions, whereas a random classification would give a correct classification rate of 20%. Furthermore, we find out those two-class emotion recognition problems whose error rates contribute heavily to the average error and we indicate that a possible reduction of the error rates reported in this paper would be achieved by employing two-class classifiers and combining them."}
{"_id": "f95a6fe2a3c40d1dcd12112928bb7695578d6b82", "title": "Combining Memory-Based and Model-Based Collaborative Filtering in Recommender System", "text": "Collaborative filtering (CF) technique has been proved to be one of the most successful techniques in recommender systems. Two types of algorithms for collaborative filtering have been researched: memory-based CF and model-based CF. Memory-based approaches identify the similarity between two users by comparing their ratings on a set of items and have suffered from two fundamental problems: sparsity and scalability. Alternatively, the model-based approaches have been proposed to alleviate these problems, but these approaches tend to limit the range of users. This paper presents an approach that combines the advantages of these two kinds of approaches by joining the two methods. Firstly, it employs memory-based CF to fill the vacant ratings of the user-item matrix. Then, it uses the item-based CF as model-based to form the nearest neighbors of every item. At last, it produces prediction of the target user to the target item at real time. The collaborative filtering recommendation method combining memory-based CF and model-based CF can provide better recommendation than traditional collaborative filtering."}
{"_id": "18f63d843bc2dadf606e0d95c0d7895ae47c37b5", "title": "Taming verification hardness: an efficient algorithm for testing subgraph isomorphism", "text": "Graphs are widely used to model complicated data semantics in many applications. In this paper, we aim to develop efficient techniques to retrieve graphs, containing a given query graph, from a large set of graphs. Considering the problem of testing subgraph isomorphism is generally NP-hard, most of the existing techniques are based on the framework of filtering-and-verification to reduce the precise computation costs; consequently various novel feature-based indexes have been developed. While the existing techniques work well for small query graphs, the verification phase becomes a bottleneck when the query graph size increases. Motivated by this, in the paper we firstly propose a novel and efficient algorithm for testing subgraph isomorphism, QuickSI. Secondly, we develop a new feature-based index technique to accommodate QuickSI in the filtering phase. Our extensive experiments on real and synthetic data demonstrate the efficiency and scalability of the proposed techniques, which significantly improve the existing techniques."}
{"_id": "7ae6c10058ecf4967f678f3bd08b3aa0d37c6327", "title": "How Data Breaches Ruin Firm Reputation on Social Media! - Insights from a Sentiment-based Event Study", "text": "Data breach events are heavily discussed in social media. Data breaches, which imply the loss of personal sensitive data, have negative consequences on the affected firms such as loss of market value, loss of customers and reputational damage. In the digital era, wherein ensuring information security is extremely demanding and the dissemination of information occurs at a very high speed, protecting corporate reputation has become a challenging task. While several studies have provided empirical evidence of the financial consequences of data breaches, little attention has been dedicated to the link between data breaches and reputational risk. To address this research gap, we have measured the reputational effect of data breaches based on social media content by applying a novel approach, the sentiment-based event study. The empirical results provide strong evidence that data breach events deteriorate the reputation of companies involved in such incidents."}
{"_id": "5186a6d64609ce43665e751b7acdc91515b821b1", "title": "What Is Seen Is Who You Are: Are Cues in Selfie Pictures Related to Personality Characteristics?", "text": "Developments and innovation in the areas of mobile information technology, digital media and social networks foster new reflections on computer-mediated communication research, especially in the field of self-presentation. In this context, the selfie as a self-portrait photo is interesting, because as a meaningful gesture, it actively and directly relates the content of the photo to the author of the picture. From the perspective of the selfie as an image and the impression it forms, in the first part of the research we explored the distinctive characteristics of selfie pictures; moreover, from the perspective of the potential reflection of a selfie image on the personality of its author, in the second part we related the characteristics of selfie pictures to various personality constructs (e.g., Big Five personality traits narcissism and femininity-masculinity). Important aspects of selfies especially in relation to gender include the tilt of the head, the side of the face exhibited, mood and head position, later related also to the context of the selfie picture. We found no significant relations between selfie cues and personality constructs. The face-ism index was related to entitlement, and selfie availability to neuroticism."}
{"_id": "ec2f036f1e6f56ad6400e42cf1eb14f4ebe122c6", "title": "Tagging Sentence Boundaries", "text": "In this paper we tackle sentence boundary disambiguation through a part-of-speech (POS) tagging framework. We describe necessary changes in text tokenization and the implementat ion of a POS tagger and provide results of an evaluation of this system on two corpora. We also describe an extension of the traditional POS tagging by combining it with the document-centered approach to proper name identification and abbreviation handling. This made the resulting system robust to domain and topic shifts. 1 I n t r o d u c t i o n Sentence boundary disambiguation (SBD) is an important aspect in developing virtually any practical text processing application syntactic parsing, Information Extraction, Machine Translation, Text Alignment, Document Summarization, etc. Segmenting text into sentences in most cases is a simple m a t t e r a period, an exclamation mark or a question mark usually signal a sentence boundary. However, there are cases when a period denotes a decimal point or is a part of an abbreviation and thus it does not signal a sentence break. Furthermore, an abbreviation itself can be the last token in a sentence, in which case its period acts at the same time as part of this abbreviation and as the end-of-sentence indicator (fullstop). The first large class of sentence boundary disambiguators uses manually built rules which are usually encoded in terms of regular expression grammars supplemented with lists of abbreviations, common words, proper names, etc. For instance, the Alembic workbench (Aberdeen et al., 1995) contains a sentence splitting module which employs over 100 regular-expression rules written in Flex. To put together a few rules which do a job is fast and easy, but to develop a good rule-based system is quite a labour consuming enterprise. Another potential shortcoming is that such systems are usually closely tailored to a particular corpus and are not easily portable across domains. Automatically trainable software is generally seen as a way of producing systems quickly re-trainable for a new corpus, domain or even for another language. Thus, the second class of SBD systems employs machine learning techniques such as decision tree classifiers (Riley, 1989), maximum entropy modeling (MAXTERMINATOR) (Reynar and Ratnaparkhi, 1997), neural networks (SATZ) (Palmer and Hearst , 1997), etc.. Machine learning systems treat the SBD task as a classification problem, using features such as word spelling, capitalization, suffix, word class, etc., found in the local context of potent im sentence breaking punctuation. There is, however, one catch all machine learning approaches to the SBD task known to us require labeled examples for training. This implies an investment in the annotat ion phase. There are two corpora normally used for evaluation and development in a number of text processing tasks and in the SBD task in particular: the Brown Corpus and the Wall Street Journal (WSJ) corpus both par t of the Penn Treebank (Marcus, Marcinkiewicz, and Santorini, 1993). Words in both these corpora are annota ted with part-ofspeech (POS) information and the text is split into documents, paragraphs and sentences. This gives all necessary information for the development of an SBD system and its evaluation. State-of-theart machine-learning and rule-based SBD systems achieve the error rate of about 0.8-1.5% measured on the Brown Corpus and the WSJ. 
The best performance on the WSJ was achieved by a combination of the SATZ system with the Alembic system 0.5% error rate. The best performance on the Brown Corpus, 0.2% error rate, was reported by (Riley, 1989), who trained a decision tree classifier on a 25 million word corpus. 1.1 W o r d b a s e d vs . S y n t a c t i c M e t h o d s The first source of ambiguity in end-of-sentence marking is introduced by abbreviations: if we know tha t the word which precedes a period is n o t an abbreviation, then almost certainly this period denotes a sentence break. However, if this word is an abbreviation, then it is not tha t easy to make a clear decision. The second major source of information"}
{"_id": "4be6ff4c12e3f61021c6433031e1ccc0e864ebec", "title": "Information leakage from optical emanations", "text": "A previously unknown form of compromising emanations has been discovered. LED status indicators on data communication equipment, under certain conditions, are shown to carry a modulated optical signal that is significantly correlated with information being processed by the device. Physical access is not required; the attacker gains access to all data going through the device, including plaintext in the case of data encryption systems. Experiments show that it is possible to intercept data under realistic conditions at a considerable distance. Many different sorts of devices, including modems and Internet Protocol routers, were found to be vulnerable. A taxonomy of compromising optical emanations is developed, and design changes are described that will successfully block this kind of \"Optical Tempest\" attack."}
{"_id": "ba0322cfc2f409892d2014a7d17764944686bcd8", "title": "Recurrent neural networks for time-series prediction.", "text": "Recurrent neural networks have been used for time-s eri s prediction with good results. In this dissertation we compare recurrent neural networks with time-delayed feed forward networks, feed forward networks and li near regression models to see which architecture that can make the most accurate predictions. The data used in all experiments is real-world sales data containing two kinds of segments: campaign segments and non-campaign segments. The task is to make predictions of sales under campaigns and with this in mind we evaluate if we c an get more accurate predictions if we only use the campaign segments when modeling the data. Throughout the entire project the KDD process have been used to give a st ructured work-process. The results showed that recurrent network is not better han the other evaluated algorithms, in fact, the time-delayed feed forward neural network showed to give the best predictions. The results also showed that we c ould get more accurate predictions when only using information from campaign segments."}
{"_id": "e9db355b4e6dc959c8737a8ce8813a3daa538201", "title": "Low-Power 2.4 GHz Wake-Up Radio for Wireless Sensor Networks", "text": "Power consumption is a critical issue in many wireless sensor network scenarios where network life expectancy is measured in months or years. Communication protocols typically rely on duty-cycle mechanisms to reduce the power usage at the cost of decreased network responsiveness and increased communication latency. A low-power radio-triggered device can be used to continuously monitor the channel and activate the node for incoming communications, allowing purely asynchronous operations. To be effective, the power consumption of this wake-up receiver must be on the order of tens of microwatts since this device is always active. This paper presents the ongoing efforts to design such a low-power receiver. Initial results indicate an average power consumption below 20 uW."}
{"_id": "2dad797bcc1752855808107ed09cc4ce5fa6a09b", "title": "Guidance for the evaluation and treatment of hereditary and acquired thrombophilia", "text": "Thrombophilias are hereditary and/or acquired conditions that predispose patients to thrombosis. Testing for thrombophilia is commonly performed in patients with venous thrombosis and their relatives; however such testing usually does not provide information that impacts management and may result in harm. This manuscript, initiated by the Anticoagulation Forum, provides clinical guidance for thrombophilia testing in five clinical situations: following 1) provoked venous thromboembolism, 2) unprovoked venous thromboembolism; 3) in relatives of patients with thrombosis, 4) in female relatives of patients with thrombosis considering estrogen use; and 5) in female relatives of patients with thrombosis who are considering pregnancy. Additionally, guidance is provided regarding the timing of thrombophilia testing. The role of thrombophilia testing in arterial thrombosis and for evaluation of recurrent pregnancy loss is not addressed. Statements are based on existing guidelines and consensus expert opinion where guidelines are lacking. We recommend that thrombophilia testing not be performed in most situations. When performed, it should be used in a highly selective manner, and only in circumstances where the information obtained will influence a decision important to the patient, and outweigh the potential risks of testing. Testing should not be performed during acute thrombosis or during the initial (3-month) period of anticoagulation."}
{"_id": "ac7023994da7768224e76d35c6178db36062182c", "title": "Understanding payment card fraud through knowledge extraction from neural networks using large-scale datasets", "text": ""}
{"_id": "4301903b424a8a8391c3e65af7e86127c6b1aebf", "title": "A whispered Mandarin corpus for speech technology applications", "text": "Whispered speech is a natural mode of speech in which voicing is absent \u2013 its acoustics differ significantly from normally spoken speech or so-called neutral speech, such that it is challenging to use only neutral speech to build speech processing and automatic recognition systems that can deal effectively with whisper. At the same time, humans can naturally produce and perceive whispered speech without explicit training. Tonal languages such as Mandarin present an interesting dilemma \u2013 tone is primarily encoded by pitch tracks which are absent during whispered speech, but humans can still tell tones apart. How humans manage to process whispered speech well without explicit training on it, whereas machine algorithms fail, is presently an unresolved question which could prove fruitful with study. This, however, is hindered by the lack of suitable, systematically collected corpora. We present iWhisper-Mandarin, a 25-hour parallel corpus of neutral and whispered Mandarin, designed to support research in linguistics and speech technology. We demonstrate and verify that earlier techniques applied to whispered speech from non-tonal languages also work with Mandarin, and present some preliminary studies on voice activity detection and whispered Mandarin speech recognition."}
{"_id": "94590aad302da2bbc6082dee710a6d6c92cef243", "title": "Deploying Deep Neural Networks in the Embedded Space", "text": "Recently, Deep Neural Networks (DNNs) have emerged as the dominant model across various AI applications. In the era of IoT and mobile systems, the efficient deployment of DNNs on embedded platforms is vital to enable the development of intelligent applications. This paper summarises our recent work on the optimised mapping of DNNs on embedded settings. By covering such diverse topics as DNN-to-accelerator toolflows, high-throughput cascaded classifiers and domain-specific model design, the presented set of works aim to enable the deployment of sophisticated deep learning models on cutting-edge mobile and embedded systems."}
{"_id": "418432bc00011c24ea9ceaa0f25f6d75d50bee9f", "title": "Smart manufacturing, manufacturing intelligence and demand-dynamic performance", "text": "Smart Manufacturing is the dramatically intensified and pervasive application of networked information-based technologies throughout the manufacturing and supply chain enterprise. It responds and leads to a dramatic and fundamental business transformation to demand-dynamic economics keyed on customers, partners and the public; enterprise performance; demand-driven supply chain services; and broad-based workforce involvement. ITenabled Smart factories and supply networks can better respond to national interests and strategic imperatives and can revitalize the industrial sector by facilitating global competitiveness and exports, providing sustainable jobs, radically improving performance, and facilitating manufacturing innovation."}
{"_id": "f47fbedffdbeb083bc1616c628dc8934766be3ad", "title": "Efficient Training and Evaluation of Recurrent Neural Network Language Models for Automatic Speech Recognition", "text": "Recurrent neural network language models RNNLMs are becoming increasingly popular for a range of applications including automatic speech recognition. An important issue that limits their possible application areas is the computational cost incurred in training and evaluation. This paper describes a series of new efficiency improving approaches that allows RNNLMs to be more efficiently trained on graphics processing units GPUs and evaluated on CPUs. First, a modified RNNLM architecture with a nonclass-based, full output layer structure F-RNNLM is proposed. This modified architecture facilitates a novel spliced sentence bunch mode parallelization of F-RNNLM training using large quantities of data on a GPU. Second, two efficient RNNLM training criteria based on variance regularization and noise contrastive estimation are explored to specifically reduce the computation associated with the RNNLM output layer softmax normalisation term. Finally, a pipelined training algorithm utilizing multiple GPUs is also used to further improve the training speed. Initially, RNNLMs were trained on a moderate dataset with 20M words from a large vocabulary conversational telephone speech recognition task. The training time of RNNLM is reduced by up to a factor of 53 on a single GPU over the standard CPU-based RNNLM toolkit. A 56 times speed up in test time evaluation on a CPU was obtained over the baseline F-RNNLMs. Consistent improvements in both recognition accuracy and perplexity were also obtained over C-RNNLMs. Experiments on Google's one billion corpus also reveals that the training of RNNLM scales well."}
{"_id": "19dfe0d4ff2418f54560c6b40c4b8135f0eba7a8", "title": "Learning under Feature Drifts in Textual Streams", "text": "Huge amounts of textual streams are generated nowadays, especially in social networks like Twitter and Facebook. As the discussion topics and user opinions on those topics change drastically with time, those streams undergo changes in data distribution, leading to changes in the concept to be learned, a phenomenon called concept drift. One particular type of drift, that has not yet attracted a lot of attention is feature drift, i.e., changes in the features that are relevant for the learning task at hand. In this work, we propose an approach for handling feature drifts in textual streams. Our approach integrates i) an ensemble-based mechanism to accurately predict the feature/word values for the next time-point by taking into account the different features might be subject to different temporal trends and ii) a sketch-based feature space maintenance mechanism that allows for a memory-bounded maintenance of the feature space over the stream. Experiments with textual streams from the sentiment analysis, email preference and spam detection demonstrate that our approach achieves significantly better or competitive performance compared to baselines."}
{"_id": "583d56e59f722873fb3729f890761dd870bb3b11", "title": "Structural Embedding of Syntactic Trees for Machine Comprehension", "text": "Deep neural networks for machine comprehension typically utilizes only word or character embeddings without explicitly taking advantage of structured linguistic information such as constituency trees and dependency trees. In this paper, we propose structural embedding of syntactic trees (SEST), an algorithm framework to utilize structured information and encode them into vector representations that can boost the performance of algorithms for the machine comprehension. We evaluate our approach using a state-of-the-art neural attention model on the SQuAD dataset. Experimental results demonstrate that our model can accurately identify the syntactic boundaries of the sentences and extract answers that are syntactically coherent over the baseline methods."}
{"_id": "6396ab37641d36be4c26420e58adeb8665914c3b", "title": "Modeling Biological Processes for Reading Comprehension", "text": "Machine reading calls for programs that read and understand text, but most current work only attempts to extract facts from redundant web-scale corpora. In this paper, we focus on a new reading comprehension task that requires complex reasoning over a single document. The input is a paragraph describing a biological process, and the goal is to answer questions that require an understanding of the relations between entities and events in the process. To answer the questions, we first predict a rich structure representing the process in the paragraph. Then, we map the question to a formal query, which is executed against the predicted structure. We demonstrate that answering questions via predicted structures substantially improves accuracy over baselines that use shallower representations."}
{"_id": "7539bfcc5b039a39b745c58f66263c99fcc9857f", "title": "Attention-over-Attention Neural Networks for Reading Comprehension", "text": "Cloze-style queries are representative problems in reading comprehension. Over the past few months, we have seen much progress that utilizing neural network approach to solve Cloze-style questions. In this paper, we present a novel model called attention-over-attention reader for the Cloze-style reading comprehension task. Our model aims to place another attention mechanism over the document-level attention, and induces \u201cattended attention\u201d for final predictions. Unlike the previous works, our neural network model requires less pre-defined hyper-parameters and uses an elegant architecture for modeling. Experimental results show that the proposed attentionover-attention model significantly outperforms various state-of-the-art systems by a large margin in public datasets, such as CNN and Children\u2019s Book Test datasets."}
{"_id": "f0464c197f0506c5cf763eb837d9b17ed22d2340", "title": "The transition to high school as a developmental process among multiethnic urban youth.", "text": "The high school transition was examined in an ethnically diverse, urban sample of 1,979 adolescents, followed from 7th to 10th grade (M(age) = 14.6, SD = .37 in 7th grade). Twice annually, data were gathered on adolescents' perceptions of school climate, psychological functioning, and academic behaviors. Piecewise growth modeling results indicate that adolescents were doing well before the transition but experienced transition disruptions in psychological functioning and grades, and many continued to struggle across high school. The immediate experience of the transition appeared to be particularly challenging for African American and Latino students when the numerical representation of their ethnic groups declined significantly from middle to high school. Findings highlight the value of examining the transition in a larger developmental context and the importance of implementing transition support."}
{"_id": "1745ba30f8ff3f4a0480147592cf7a4e73199ce5", "title": "Smart Kitchen : A User Centric Cooking Support System", "text": "Several cooking support systems have been studied to give users instructions based on the recipes stepby-step, using multi-media contents. These systems usually disturb users\u2019 cooking process forcing them to provide information to the system in order to give them beneficial information. In this sense, these systems are considered to be \u201dsystem centric\u201d. We propose a system called \u201dSmart Kitchen\u201d considered to be in \u201duser centric\u201d, in which a user can cook normally without being concerned about the system. Smart Kitchen can understand cooking processes, in other words, what the user is doing. In this paper, we discuss the design of the Smart Kitchen system and explain three essential modules of tracking food, recognizing food material, and recognizing cooking action."}
{"_id": "0ad7368af50c2d96ba6881d899daf06728735193", "title": "Ally Friendly Jamming: How to Jam Your Enemy and Maintain Your Own Wireless Connectivity at the Same Time", "text": "This paper presents a novel mechanism, called Ally Friendly Jamming, which aims at providing an intelligent jamming capability that can disable unauthorized (enemy) wireless communication but at the same time still allow authorized wireless devices to communicate, even if all these devices operate at the same frequency. The basic idea is to jam the wireless channel continuously but properly control the jamming signals with secret keys, so that the jamming signals are unpredictable interference to unauthorized devices, but are recoverable by authorized ones equipped with the secret keys. To achieve the ally friendly jamming capability, we develop new techniques to generate ally jamming signals, to identify and synchronize with multiple ally jammers. This paper also reports the analysis, implementation, and experimental evaluation of ally friendly jamming on a software defined radio platform. Both the analytical and experimental results indicate that the proposed techniques can effectively disable enemy wireless communication and at the same time maintain wireless communication between authorized devices."}
{"_id": "ffe64d80b5ff515f92247f4090e720b423269c7e", "title": "Radar Cross-Section Analysis of Backscattering RFID Tags", "text": "In this letter, we present a graphical method based on Green's analysis of scattering antennas to optimize the load impedances for achieving maximum modulated radar cross-section (RCS) of backscattering radio-frequency identification (RFID) tags. A planar tag antenna specifically designed for working in the ultra-high-frequency band is used to experimentally validate the method. The measured RCS of the loaded tag antenna shows acceptable agreement with the theory and simulated results. This method is especially valuable to the design of semipassive RFID tags when the communication range is solely determined by the backscattering modulation efficiency of tags."}
{"_id": "b3986b6d730e957fa7a587463047127c517fb49a", "title": "The impact of emotional intelligence on work engagement of registered nurses: the mediating role of organisational justice.", "text": "AIMS AND OBJECTIVES\nTo explore the impact of emotional intelligence and organisational justice on work engagement in Chinese nurses and to examine the mediating role of organisational justice to provide implications for promoting clinical nurses' work engagement.\n\n\nBACKGROUND\nThe importance of work engagement on nurses' well-being and quality of care has been well documented. Work engagement is significantly predicted by job resources. However, little research has concentrated simultaneously on the influence of both personal and organisational resources on nurses' work engagement.\n\n\nDESIGN\nA descriptive, cross-sectional design was employed.\n\n\nMETHODS\nA total of 511 nurses from four public hospitals were enrolled by multistage sampling. Data collection was undertaken using the Wong and Law Emotional Intelligence Scale, the Organizational Justice questionnaire and the Utrecht Work Engagement Scale-9. We analysed the data using structural equation modelling.\n\n\nRESULTS\nEmotional intelligence and organisational justice were significant predictors and they accounted for 44% of the variance in nurses' work engagement. Bootstrap estimation confirmed an indirect effect of emotional intelligence on work engagement via organisational justice.\n\n\nCONCLUSIONS\nEmotional intelligence and organisational justice positively predict work engagement and organisational justice partially mediates the relationship between emotional intelligence and work engagement.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nOur study supports the idea that enhancing organisational justice can increase the impact of emotional intelligence. Managers should take into account the importance of emotional intelligence and perceptions of organisational justice in human resources management and apply targeted interventions to foster work engagement."}
{"_id": "2bc6558aaca1846d20ee08983abdb23549f4b8d6", "title": "Design and Analysis of Low-Power 11- Transistor Full Adder", "text": "Full adders are exigent components in applications such as digital signal processors (DSP) architectures and microprocessors. In this paper, we propose a technique to build a new 11-transistor FA .We have done HSPICE simulation runs the new design 11-T full adders .In CMOS integrated circuit design there is a tradeoff between static power consumption and technology scaling. Static power dissipation is a challenge for the circuit designer. So we reduce the static power dissipation. In order to achieve lower static power consumption, one has to scarifies design area and circuit performance. In this paper we propose a new circuit of 11-Transistor full adder in CMOS VLSI circuit."}
{"_id": "60710e5c73d480593cbc1b51f21bf38f25cc1f73", "title": "Phishing counter measures and their effectiveness \u2013 literature review", "text": "Purpose \u2013 Phishing is essentially a social engineering crime on the Web, whose rampant occurrences and technique advancements are posing big challenges for researchers in both academia and the industry. The purpose of this study is to examine the available phishing literatures and phishing countermeasures, to determine how research has evolved and advanced in terms of quantity, content and publication outlets. In addition to that, this paper aims to identify the important trends in phishing and its countermeasures and provides a view of the research gap that is still prevailing in this field of study. Design/methodology/approach \u2013 This paper is a comprehensive literature review prepared after analysing 16 doctoral theses and 358 papers in this field of research. The papers were analyzed based on their research focus, empirical basis on phishing and proposed countermeasures. Findings \u2013 The findings reveal that the current anti-phishing approaches that have seen significant deployments over the internet can be classified into eight categories. Also, the different approaches proposed so far are all preventive in nature. A Phisher will mainly target the innocent consumers who happen to be the weakest link in the security chain and it was found through various usability studies that neither server-side security indicators nor client-side toolbars and warnings are successful in preventing vulnerable users from being deceived. Originality/value \u2013 Educating the internet users about phishing, as well as the implementation and proper application of anti-phishing measures, are critical steps in protecting the identities of online consumers against phishing attacks. Further research is required to evaluate the effectiveness of the available countermeasures against fresh phishing attacks. Also there is the need to find out the factors which influence internet user\u2019s ability to correctly identify phishing websites."}
{"_id": "050b64c2343ef3c7f0c60285e4429e9bb8175dff", "title": "Automotive big data: Applications, workloads and infrastructures", "text": "Data is increasingly affecting the automotive industry, from vehicle development, to manufacturing and service processes, to online services centered around the connected vehicle. Connected, mobile and Internet of Things devices and machines generate immense amounts of sensor data. The ability to process and analyze this data to extract insights and knowledge that enable intelligent services, new ways to understand business problems, improvements of processes and decisions, is a critical capability. Hadoop is a scalable platform for compute and storage and emerged as de-facto standard for Big Data processing at Internet companies and in the scientific community. However, there is a lack of understanding of how and for what use cases these new Hadoop capabilities can be efficiently used to augment automotive applications and systems. This paper surveys use cases and applications for deploying Hadoop in the automotive industry. Over the years a rich ecosystem emerged around Hadoop comprising tools for parallel, in-memory and stream processing (most notable MapReduce and Spark), SQL and NOSQL engines (Hive, HBase), and machine learning (Mahout, MLlib). It is critical to develop an understanding of automotive applications and their characteristics and requirements for data discovery, integration, exploration and analytics. We then map these requirements to a confined technical architecture consisting of core Hadoop services and libraries for data ingest, processing and analytics. The objective of this paper is to address questions, such as: What applications and datasets are suitable for Hadoop? How can a diverse set of frameworks and tools be managed on multi-tenant Hadoop cluster? How do these tools integrate with existing relational data management systems? How can enterprise security requirements be addressed? What are the performance characteristics of these tools for real-world automotive applications? To address the last question, we utilize a standard benchmark (TPCx-HS), and two application benchmarks (SQL and machine learning) that operate on a dataset of multiple Terabytes and billions of rows."}
{"_id": "a8e5a973654cfdcb410a00877d3175b2a08bd244", "title": "Online payment system using steganography and visual cryptography", "text": "A rapid growth in E-Commerce market is seen in recent time throughout the world. With ever increasing popularity of online shopping, Debit or Credit card fraud and personal information security are major concerns for customers, merchants and banks specifically in the case of CNP (Card Not Present). This paper presents a new approach for providing limited information only that is necessary for fund transfer during online shopping thereby safeguarding customer data and increasing customer confidence and preventing identity theft. The method uses combined application of steganography and visual cryptography for this purpose."}
{"_id": "679ff90cc4ec1974f0da3dff38d9f9a1957eeada", "title": "Algorithmic motion planning in robotics", "text": "A survey is presented of an approach to motion planning that emphasizes object-oriented, exact, and discrete (or combinatorial) algorithmic techniques in which worst-case asymptotically efficient solutions are being sought. Following a statement of the problem, motion planning in static and known environments is treated. The discussion covers general solutions, lower bounds, the projection method, the retraction method, the expanded obstacles, the single-component approach, and a mobile convex object moving in a 2D polygonal space. Variants of the motion-planning problem are then considered, namely, optimal motion planning, adaptive and exploratory motion planning, motion planning in the presence of moving obstacles, constrained motion planning, motion planning with uncertainty, and general task planning.<>"}
{"_id": "427abf26ea34128b8aa84dc1943e6d364e36125b", "title": "Elliptic operator for shape analysis", "text": "Many shape analysis methods treat the geometry of an object as a metric space that can be captured by the Laplace-Beltrami operator. In this paper, we propose to adapt a classical operator from quantum mechanics to the field of shape analysis where we suggest to integrate a scalar function through a unified elliptical Hamiltonian operator. We study the addition of a potential function to the Laplacian as a generator for dual spaces in which shape processing is performed. Then, we evaluate the resulting spectral basis for different applications such as mesh compression and shape matching. The suggested operator is shown to produce better functional spaces to operate with, as demonstrated by the proposed framework that outperforms existing spectral methods, for example, when applied to shape matching benchmarks."}
{"_id": "363cfbd6f1e76b1e3431f09e6a1fa1541a4e2162", "title": "R: A language and environment for statistical computing.", "text": "Copyright (\u00a9) 1999\u20132012 R Foundation for Statistical Computing. Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies. Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one. Permission is granted to copy and distribute translations of this manual into another language, under the above conditions for modified versions, except that this permission notice may be stated in a translation approved by the R Core Team."}
{"_id": "0da76f057e6900620ca2172c6cf4a66a61db017e", "title": "A Neural Network Approch to Efficient Valuation of Large Portfolios of Variable Annuities", "text": "Managing and hedging the risks associated with Variable Annuity (VA) products requires intraday valuation of key risk metrics for these products. The complex structure of VA products and computational complexity of their accurate evaluation has compelled insurance companies to adopt Monte Carlo (MC) simulations to value their large portfolios of VA products. Because the MC simulations are computationally demanding, especially for intraday valuations, insurance companies need more efficient valuation techniques. Recently, a framework based on traditional spatial interpolation techniques has been proposed that can significantly decrease the computational complexity of MC simulation (Gan and Lin, 2015). However, traditional interpolation techniques require the definition of a distance function that can significantly impact their accuracy. Moreover, none of the traditional spatial interpolation techniques provide all of the key properties of accuracy, efficiency, and granularity (Hejazi et al., 2015). In this paper, we present a neural network approach for the spatial interpolation framework that affords an efficient way to find an effective distance function. The proposed approach is accurate, efficient, and provides an accurate granular view of the input portfolio. Our numerical experiments illustrate the superiority of the performance of the proposed neural network approach compared to the traditional spatial interpolation schemes."}
{"_id": "808ab375741c6539ccdc535785c549b21a8a2e8f", "title": "Hippocampal Neurogenesis Regulates Forgetting During Adulthood and Infancy", "text": "Throughout life, new neurons are continuously added to the dentate gyrus. As this continuous addition remodels hippocampal circuits, computational models predict that neurogenesis leads to degradation or forgetting of established memories. Consistent with this, increasing neurogenesis after the formation of a memory was sufficient to induce forgetting in adult mice. By contrast, during infancy, when hippocampal neurogenesis levels are high and freshly generated memories tend to be rapidly forgotten (infantile amnesia), decreasing neurogenesis after memory formation mitigated forgetting. In precocial species, including guinea pigs and degus, most granule cells are generated prenatally. Consistent with reduced levels of postnatal hippocampal neurogenesis, infant guinea pigs and degus did not exhibit forgetting. However, increasing neurogenesis after memory formation induced infantile amnesia in these species."}
{"_id": "0566bf06a0368b518b8b474166f7b1dfef3f9283", "title": "Learning to detect unseen object classes by between-class attribute transfer", "text": "We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image, collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, \u201cAnimals with Attributes\u201d, of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes."}
{"_id": "06554235c2c9361a14c0569206b58a355a63f01b", "title": "Zero-Shot Learning Through Cross-Modal Transfer", "text": "This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images."}
{"_id": "1029bb5c9f9cb7ab486eb0c2a2a1c59104820928", "title": "Siamese Neural Networks for One-Shot Image Recognition", "text": "The process of learning good features for machine learning applications can be very computationally expensive and may prove difficult in cases where little data is available. A prototypical example of this is the one-shot learning setting, in which we must correctly make predictions given only a single example of each new class. In this paper, we explore a method for learning siamese neural networks which employ a unique structure to naturally rank similarity between inputs. Once a network has been tuned, we can then capitalize on powerful discriminative features to generalize the predictive power of the network not just to new data, but to entirely new classes from unknown distributions. Using a convolutional architecture, we are able to achieve strong results which exceed those of other deep learning models with near state-of-the-art performance on one-shot classification tasks. Humans exhibit a strong ability to acquire and recognize new patterns. In particular, we observe that when presented with stimuli, people seem to be able to understand new concepts quickly and then recognize variations on these concepts in future percepts (Lake et al., 2011). Machine learning has been successfully used to achieve state-ofthe-art performance in a variety of applications such as web search, spam detection, caption generation, and speech and image recognition. However, these algorithms often break down when forced to make predictions about data for which little supervised information is available. We desire to generalize to these unfamiliar categories without necessitating extensive retraining which may be either expensive or impossible due to limited data or in an online prediction setting, such as web retrieval. Proceedings of the 32 International Conference on Machine Learning, Lille, France, 2015. JMLR: W&CP volume 37. Copyright 2015 by the author(s). Figure 1. Example of a 20-way one-shot classification task using the Omniglot dataset. The lone test image is shown above the grid of 20 images representing the possible unseen classes that we can choose for the test image. These 20 images are our only known examples of each of those classes. One particularly interesting task is classification under the restriction that we may only observe a single example of each possible class before making a prediction about a test instance. This is called one-shot learning and it is the primary focus of our model presented in this work (Fei-Fei et al., 2006; Lake et al., 2011). This should be distinguished from zero-shot learning, in which the model cannot look at any examples from the target classes (Palatucci et al., 2009). One-shot learning can be directly addressed by developing domain-specific features or inference procedures which possess highly discriminative properties for the target task. As a result, systems which incorporate these methods tend to excel at similar instances but fail to offer robust solutions that may be applied to other types of problems. In this paper, we present a novel approach which limits assumptions on the structure of the inputs while automatically acquiring features which enable the model to generalize successfully from few examples. We build upon the deep learnSiamese Neural Networks for One-shot Image Recognition Figure 2. Our general strategy. 1) Train a model to discriminate between a collection of same/different pairs. 
2) Generalize to evaluate new categories based on learned feature mappings for"}
{"_id": "efca19604c4b6b9a05ac6838e3481a9aa027a210", "title": "Developing an Intelligent Chat-bot Tool to Assist High School Students for Learning General Knowledge Subjects", "text": "UPDATED\u201411 December 2017. Artificial Intelligent Chatbots are used in industries such as Banking systems, Customer services, Education. There are many intelligent tutoring systems currently in practice, but few of them are known to assist high school students for learning their general knowledge subjects. This project paper proposes an artificial intelligent chat-bot tool developed to assist high school students for learning their general knowledge subjects. The proposed intelligent chat-bot tool is a web based intelligent tutoring system which can be accessed by large number of students globally, the users are not required to pay licensing fees. The intelligent chat-bot tool will be available 24 hours a day at schools or public libraries or any other location which has internet access. The intelligent chatbot tool will make use of Natural Language Processing techniques to answer the queries by high school students. The intelligent chat-bot tool will be trained on a knowledge base comprising of general knowledge questions and answers. The intelligent chat-bot tool will have ability to participate in small talks with its learners."}
{"_id": "8dc8ebd3afc192254284f61a0882dc1cd5e37ddd", "title": "MRG-OHTC Database for Online Handwritten Tibetan Character Recognition", "text": "A handwritten Tibetan database, MRG-OHTC, is presented to facilitate the research of online handwritten Tibetan character recognition. The database contains 910 Tibetan character classes written by 130 persons from Tibetan ethnic minority. These characters are selected from basic set and extension set A of Tibetan coded character set. The current version of this database is collected using electronic pen on digital tablet. We investigate some characteristic of writing style from different writers. We evaluate MRG-OHTC database using existing algorithms as a baseline. Experimental results reveal a big challenge to higher recognition performance. To our knowledge, MRG-OHTC is the first publicly available database for handwritten Tibetan research. It provides a basic database to compare empirically different algorithms for handwritten Tibetan character recognition."}
{"_id": "a6020d6bce5c69e476dfee15bdf63944e2a717b3", "title": "Security Without Identification: Transaction Systems to Make Big Brother Obsolete", "text": "The large-scale automated transaction systems of the near future can be designed to protect the privacy and maintain the security of both individuals and organizations."}
{"_id": "aef6d6f148e761bdf8654f969fda14010b44ea34", "title": "Interactive Freeform Design of Tensegrity", "text": "In this paper, we propose a novel interactive method for flexibly designing tensegrity structures under valid force equilibriums. Unlike previous form-finding techniques that aim to obtain a unique solution by fixing some parameters such as the force densities and lengths of elements, our method provides a design system that allows a user to continuously interact with the form within a multidimensional solution space. First, a valid initial form is generated by converting a given polygon mesh surface into a strut-and-cable network that follows the original surface, and the form is then perturbed to attain an equilibrium state through a two-step optimization of both node coordinates and force densities. Then, the form can be freely transformed by using a standard 2D input device while the system updates the form on the fly based on the orthogonal projection of a deformation mode into the solution space. The system provides a flexible platform for designing a tensegrity form for use as a static architectural structure or a kinetic deployable system."}
{"_id": "0488cbc473dfa206794534ba69058b07c80f895b", "title": "The Promise of Sustainable Happiness", "text": "Reference: Boehm, J. K., & Lyubomirsky, S. (in press). The promise of sustainable happiness. In S. J. Lopez (Ed.), Handbook of positive psychology (2 ed.). Oxford: Oxford University Press. The Promise of Sustainable Happiness Julia K. Boehm and Sonja Lyubomirsky Department of Psychology University of California, Riverside Abstract From ancient history to recent times, philosophers, writers, self-help gurus, and now scientists have taken up the challenge of how to foster greater happiness. This chapter discusses why some people are happier than others, focusing on the distinctive ways that happy and unhappy individuals construe themselves and others, respond to social comparisons, make decisions, and self-reflect. We suggest that, despite several barriers to increased well-being, less happy people can strive successfully to be happier by learning a variety of effortful strategies and practicing them with determination and commitment. The sustainable happiness model (Lyubomirsky, Sheldon, & Schkade, 2005) provides a theoretical framework for experimental intervention research on how to increase and maintain happiness. According to this model, three factors contribute to an individual\u2019s chronic happiness level \u2013 1) the set point, 2) life circumstances, and 3) intentional activities, or effortful acts that are naturally variable and episodic. Such activities, which include committing acts of kindness, expressing gratitude or optimism, and savoring joyful life events, represent the most promising route to sustaining enhanced happiness. We describe a half dozen randomized controlled interventions testing the efficacy of each of these activities in raising and maintaining well-being, as well as the mediators and moderators underlying their effects. Future researchers must endeavor not only to learn which particular practices make people happier, but how and why they do so."}
{"_id": "385800aa5a2c4c84ad476df05721bf0ad7bf5df0", "title": "Comparative Study of Cloud Computing Data Security Methods", "text": "Cloud computing is the concept implemented to decipher the Daily Computing Problems. Cloud computing is basically virtual pool of resources and it provides these resources to users via internet. Cloud computing is the internet based development and used in computer technology. The prevalent problem associated with cloud computing is data privacy, security, anonymity and reliability etc. But the most important between them is security and how cloud provider assures it. The work plan here is to eliminate the concerns regarding data privacy using encryption algorithms to enhance the security in cloud. Have discussed about cloud computing security mechanisms and presented the comparative study of several algorithms."}
{"_id": "dd9af33a7262be2eea3cfb18f11e60fc1681c135", "title": "Determining the informational, navigational, and transactional intent of Web queries", "text": "In this paper, we define and present a comprehensive classification of user intent for Web searching. The classification consists of three hierarchical levels of informational, navigational, and transactional intent. After deriving attributes of each, we then developed a software application that automatically classified queries using a Web search engine log of over a million and a half queries submitted by several hundred thousand users. Our findings show that more than 80% of Web queries are informational in nature, with about 10% each being navigational and transactional. In order to validate the accuracy of our algorithm, we manually coded 400 queries and compared the results from this manual classification to the results determined by the automated method. This comparison showed that the automatic classification has an accuracy of 74%. Of the remaining 25% of the queries, the user intent is vague or multi-faceted, pointing to the need for probabilistic classification. We discuss how search engines can use knowledge of user intent to provide more targeted and relevant results in Web searching. 2007 Elsevier Ltd. All rights reserved."}
{"_id": "98cae3215bf2dea4055d0b2f4ba4640689fc6541", "title": "A Kernel Approach to Comparing Distributions", "text": "We describe a technique for comparing distributions without the need for density estimation as an intermediate step. Our approach relies on mapping the distributions into a Reproducing Kernel Hilbert Space. We apply this technique to construct a two-sample test, which is used for determining whether two sets of observations arise from the same distribution. We use this test in attribute matching for databases using the Hungarian marriage method, where it performs strongly. We also demonstrate excellent performance when comparing distributions over graphs, for which no alternative tests currently exist."}
{"_id": "2f278d1dab0f6e3939c747a2fc2a4cecdfc912b9", "title": "Beliefs and biases in web search", "text": "People's beliefs, and unconscious biases that arise from those beliefs, influence their judgment, decision making, and actions, as is commonly accepted among psychologists. Biases can be observed in information retrieval in situations where searchers seek or are presented with information that significantly deviates from the truth. There is little understanding of the impact of such biases in search. In this paper we study search-related biases via multiple probes: an exploratory retrospective survey, human labeling of the captions and results returned by a Web search engine, and a large-scale log analysis of search behavior on that engine. Targeting yes-no questions in the critical domain of health search, we show that Web searchers exhibit their own biases and are also subject to bias from the search engine. We clearly observe searchers favoring positive information over negative and more than expected given base rates based on consensus answers from physicians. We also show that search engines strongly favor a particular, usually positive, perspective, irrespective of the truth. Importantly, we show that these biases can be counterproductive and affect search outcomes; in our study, around half of the answers that searchers settled on were actually incorrect. Our findings have implications for search engine design, including the development of ranking algorithms that con-sider the desire to satisfy searchers (by validating their beliefs) and providing accurate answers and properly considering base rates. Incorporating likelihood information into search is particularly important for consequential tasks, such as those with a medical focus."}
{"_id": "1396705965f7f17b5c5ef7df4f45b269daea9fdd", "title": "Emotional intelligence as a standard intelligence.", "text": "The authors have claimed that emotional intelligence (EI) meets traditional standards for an intelligence (J. D. Mayer, D. R. Caruso, & P. Salovey, 1999). R. D. Roberts, M. Zeidner, and G. Matthews (2001) questioned whether that claim was warranted. The central issue raised by Roberts et al. concerning Mayer et al. (1999) is whether there are correct answers to questions on tests purporting to measure EI as a set of abilities. To address this issue (and others), the present authors briefly restate their view of intelligence, emotion, and EI. They then present arguments for the reasonableness of measuring EI as an ability, indicate that correct answers exist, and summarize recent data suggesting that such measures are, indeed, reliable."}
{"_id": "855b91d7f3dc246f470f3383b0ed23ed1b5a99be", "title": "Architecting for DevOps and Continuous Deployment", "text": "Development and Operations (DevOps) in the context of Continuous Deployment (CD) have emerged as an attractive software development movement, which tries to establish a strong connection between development and operations teams. CD is defined as the ability to quickly put new releases into production. We believe that DevOps/CD brings new challenges for architects, which considerably impacts both on their (architectural) design decisions and their organizational responsibilities. We assert that there is an important and urgent need of sufficient research work to gain a deep understanding of how DevOps/CD adoption can influence architecting, architectural decision-making processes and their outcomes in an organization. This PhD research is aimed at understanding and addressing new challenges for designing architectures for supporting DevOps in the context of CD."}
{"_id": "7470ad3cf73c4fff5b68b3e68f9434f1b9614df6", "title": "SASI: a generic texture descriptor for image retrieval", "text": "In this paper, a generic texture descriptor, namely, Statistical Analysis of Structural Information (SASI) is introduced as a representation of texture. SASI is based on statistics of clique autocorrelation coefficients, calculated over structuring windows. SASI defines a set of clique windows to extract and measure various structural properties of texture by using a spatial multi-resolution method. Experimental results, performed on various image databases, indicate that SASI is more successful then the Gabor Filter descriptors in capturing small granularities and discontinuities such as sharp corners and abrupt changes. Due to the flexibility in designing the clique windows, SASI reaches higher average retrieval rates compared to Gabor Filter descriptors. However, the price of this performance is increased computational complexity."}
{"_id": "e3df511dcd7bfc42068dff8a60d78c77dbfe255e", "title": "Iterative Integration of Visual Insights during Scalable Patent Search and Analysis", "text": "Patents are of growing importance in current economic markets. Analyzing patent information has, therefore, become a common task for many interest groups. As a prerequisite for patent analysis, extensive search for relevant patent information is essential. Unfortunately, the complexity of patent material inhibits a straightforward retrieval of all relevant patent documents and leads to iterative, time-consuming approaches in practice. Already the amount of patent data to be analyzed poses challenges with respect to scalability. Further scalability issues arise concerning the diversity of users and the large variety of analysis tasks. With \"PatViz\u201d, a system for interactive analysis of patent information has been developed addressing scalability at various levels. PatViz provides a visual environment allowing for interactive reintegration of insights into subsequent search iterations, thereby bridging the gap between search and analytic processes. Because of its extensibility, we expect that the approach we have taken can be employed in different problem domains that require high quality of search results regarding their completeness."}
{"_id": "2982b0312546e56d4c4e2b9776788e45cb467ebb", "title": "Compact four-port MIMO antenna system at 3.5 GHz", "text": "This work proposed a compact multiple-input multiple-output (MIMO) antenna system having four antennas each fed by coplanar waveguide (CPW) operating for 3.5GHz band (3400\u20133600) MHz. The proposed MIMO antenna system is realized with two groups of printed Inverted L-monopole of size 5\u00d74 mm2 placed orthogonally to each other to attain the diversity capability of polarization and pattern at the edge of non-ground portion of size 15 \u00d7 15 mm2 of mobile handset. The proposed MIMO antenna system consists of four monopole inverted-L shape radiator each with parasitic shorted stripe extending from the ground plane having same dimensions. The parasitic inverted-L shorted striped served as tuning element by reducing the resonating frequency of the monopole radiator and improving the impedance matching. The mutual coupling between the closely packed antennas is addressed by the neutralization line by feeding anti-phase current to the neighbor antenna. The simulation result showed that that the impedance bandwidth at S11 \u2264 \u221210 dB is 200 MHz enough for 5G network communication channel. The diversity parameters of interest are envelop correlation coefficient and mean effective gain are also calculated."}
{"_id": "3a44bace7f6ef4b21f0c04ac86ebe7a014d9205f", "title": "PostBL: Post-mesh Boundary Layer Mesh Generation Tool", "text": "A boundary layer mesh is a mesh with dense element distribution in the normal direction along specific boundaries. PostBL is a utility to generate boundary layers elements on an already existing mesh model. PostBL supports creation of hexahedral, prism, quad and tri boundary layer elements. It is a part of MeshKit, which is an open-source library for mesh generation functionalities. Generally, boundary layer mesh generation is a pre-meshing process; in this effort, we start from a model that has already been meshed. Boundary layers elements can be generated along the entire skin, selected exterior or internal surface boundaries. MeshKit uses graph-based approach for representing meshing operations; PostBL is one such meshing operation. It can be coupled to other meshing operations like Jaal, NetGen, TetGen, CAMAL and custom meshing tools like RGG. Simple examples demonstrating generation of boundary layers on different mesh types and OECD Vattenfall T-Junction benchmark hexahedral-mesh are presented."}
{"_id": "916f0019107a45b59aa568f61a3f52d4ae017033", "title": "Where the Truth Lies: Explaining the Credibility of Emerging Claims on the Web and Social Media", "text": "The web is a huge source of valuable information. However, in recent times, there is an increasing trend towards false claims in social media, other web-sources, and even in news. Thus, factchecking websites have become increasingly popular to identify such misinformation based on manual analysis. Recent research proposed methods to assess the credibility of claims automatically. However, there are major limitations: most works assume claims to be in a structured form, and a few deal with textual claims but require that sources of evidence or counter-evidence are easily retrieved from the web. None of these works can cope with newly emerging claims, and no prior method can give user-interpretable explanations for its verdict on the claim\u2019s credibility. This paper overcomes these limitations by automatically assessing the credibility of emerging claims, with sparse presence in web-sources, and generating suitable explanations from judiciously selected sources. To this end, we retrieve diverse articles about the claim, and model the mutual interaction between: the stance (i.e., support or refute) of the sources, the language style of the articles, the reliability of the sources, and the claim\u2019s temporal footprint on the web. Extensive experiments demonstrate the viability of our method and its superiority over prior works. We show that our methods work well for early detection of emerging claims, as well as for claims with limited presence on the web and social media."}
{"_id": "50e1d7224ce65e9b96b2d55544a0d2c953c3b915", "title": "Functional Specialization for Semantic and Phonological Processing in the Left Inferior Prefrontal Cortex", "text": "Neuroimaging and neuropsychological studies have implicated left inferior prefrontal cortex (LIPC) in both semantic and phonological processing. In this study, functional magnetic resonance imaging was used to examine whether separate LIPC regions participate in each of these types of processing. Performance of a semantic decision task resulted in extensive LIPC activation compared to a perceptual control task. Phonological processing of words and pseudowords in a syllable-counting task resulted in activation of the dorsal aspect of the left inferior frontal gyrus near the inferior frontal sulcus (BA 44/45) compared to a perceptual control task, with greater activation for nonwords compared to words. In a direct comparison of semantic and phonological tasks, semantic processing preferentially activated the ventral aspect of the left inferior frontal gyrus (BA 47/45). A review of the literature demonstrated a similar distinction between left prefrontal regions involved in semantic processing and phonological/lexical processing. The results suggest that a distinct region in the left inferior frontal cortex is involved in semantic processing, whereas other regions may subserve phonological processes engaged during both semantic and phonological tasks."}
{"_id": "042305fdca6fa3efa785c77bd1d72bf9cabbd993", "title": "Color spaces for computer graphics", "text": "Normal human color perception is a product of three independent sensory systems. By mirroring this mechanism, full-color display devices create colors as mixtures of three primaries. Any displayable color can be described by the corresponding values of these primaries. Frequently it is more convenient to define various other color spaces, or coordinate systems, for color representation or manipulation. Several such color spaces are presented which are suitable for applications involving user specification of color, along with the defining equations and illustrations. The use of special color spaces for particular kinds of color computations is discussed."}
{"_id": "c3f720c064c60dd42a9b6f647d56f61229c962cb", "title": "Using CMMI together with agile software development: A systematic review", "text": "http://dx.doi.org/10.1016/j.infsof.2014.09.012 0950-5849/ 2014 Elsevier B.V. All rights reserved. \u21d1 Corresponding author at: Center of Informatics, Federal University of Pernambuco (CIn/UFPE), Av. Jornalista Anibal Fernandes, S/N, Cidade Universit\u00e1ria, 50. Recife, PE, Brazil. Tel.: +55 81 8250 0470. E-mail addresses: fss4@cin.ufpe.br (F. Selleri Silva), fsfs@cin.ufpe.br (F.S.F. Soares), alp5@cin.ufpe.br (A.L. Peres), ima2@cin.ufpe.br (I.M.d. Azevedo), aplfv@ci (A.P.L.F. Vasconcelos), fkk@cin.ufpe.br (F.K. Kamei), srlm@cin.ufpe.br (S.R.d.L. Meira). Fernando Selleri Silva a,b,\u21d1, Felipe Santana Furtado Soares , Angela Lima Peres , Ivanildo Monteiro de Azevedo , Ana Paula L.F. Vasconcelos , Fernando Kenji Kamei , Silvio Romero de Lemos Meira a,c"}
{"_id": "3fbe522c63b973a83f88c6aac68bc1385e90ed5b", "title": "Embedding Temporal Network via Neighborhood Formation", "text": "Given the rich real-life applications of network mining as well as the surge of representation learning in recent years, network embedding has become the focal point of increasing research interests in both academic and industrial domains. Nevertheless, the complete temporal formation process of networks characterized by sequential interactive events between nodes has yet seldom been modeled in the existing studies, which calls for further research on the so-called temporal network embedding problem. In light of this, in this paper, we introduce the concept of neighborhood formation sequence to describe the evolution of a node, where temporal excitation effects exist between neighbors in the sequence, and thus we propose a Hawkes process based Temporal Network Embedding (HTNE) method. HTNE well integrates the Hawkes process into network embedding so as to capture the influence of historical neighbors on the current neighbors. In particular, the interactions of low-dimensional vectors are fed into the Hawkes process as base rate and temporal influence, respectively. In addition, attention mechanism is also integrated into HTNE to better determine the influence of historical neighbors on current neighbors of a node. Experiments on three large-scale real-life networks demonstrate that the embeddings learned from the proposed HTNE model achieve better performance than state-of-the-art methods in various tasks including node classification, link prediction, and embedding visualization. In particular, temporal recommendation based on arrival rate inferred from node embeddings shows excellent predictive power of the proposed model."}
{"_id": "1bf2c4ce84b83b285f76a14dee459fd5353f2121", "title": "Survey of semantic annotation platforms", "text": "The realization of the Semantic Web requires the widespread availability of semantic annotations for existing and new documents on the Web. Semantic annotations are to tag ontology class instance data and map it into ontology classes. The fully automatic creation of semantic annotations is an unsolved problem. Instead, current systems focus on the semi-automatic creation of annotations. The Semantic Web also requires facilities for the storage of annotations and ontologies, user interfaces, access APIs, and other features to fully support annotation usage. This paper examines current Semantic Web annotation platforms that provide annotation and related services, and reviews their architecture, approaches and performance."}
{"_id": "455e1168304e0eb2909093d5ab9b5ec85cda5028", "title": "The String-to-String Correction Problem", "text": "The string-to-string correction problem is to determine the distance between two strings as measured by the minimum cost sequence of \u201cedit operations\u201d needed to change the one string into the other. The edit operations investigated allow changing one symbol of a string into another single symbol, deleting one symbol from a string, or inserting a single symbol into a string. An algorithm is presented which solves this problem in time proportional to the product of the lengths of the two strings. Possible applications are to the problems of automatic spelling correction and determining the longest subsequence of characters common to two strings."}
{"_id": "3e2fd91786d37293af7f60a0d67b290faf5475f3", "title": "Ontology Learning for the Semantic Web", "text": "The Semantic Web relies heavily on the formal ontologies that structure underlying data for the purpose of comprehensive and transportable machine understanding. Therefore, the success of the Semantic Web depends strongly on the proliferation of ontologies, which requires fast and easy engineering of ontologies and avoidance of a knowledge acquisition bottleneck. Ontology Learning greatly facilitates the construction of ontologies by the ontology engineer. The vision of ontology learning that we propose here includes a number of complementary disciplines that feed on different types of unstructured, semi-structured and fully structured data in order to support a semi-automatic, cooperative ontology engineering process. Our ontology learning framework proceeds through ontology import, extraction, pruning, refinement, and evaluation giving the ontology engineer a wealth of coordinated tools for ontology modeling. Besides of the general framework and architecture, we show in this paper some exemplary techniques in the ontology learning cycle that we have implemented in our ontology learning environment, Text-ToOnto, such as ontology learning from free text, from dictionaries, or from legacy ontologies, and refer to some others that need to complement the complete architecture, such as reverse engineering of ontologies from database schemata or learning from XML documents. Ontologies for the Semantic Web Conceptual structures that define an underlying ontology are germane to the idea of machine processable data on the Semantic Web. Ontologies are (meta)data schemas, providing a controlled vocabulary of concepts, each with an explicitly defined and machine processable semantics. By defining shared and common domain theories, ontologies help both people and machines to communicate concisely,"}
{"_id": "73c66a35fd02a8c1d488d4d16521205a659b360b", "title": "Design and optimization on ESD self-protection schemes for 700V LDMOS in high voltage power IC", "text": "This paper presents an ESD self-protection scheme for a 700V high-voltage laterally diffused metal-oxide-semiconductor (LDMOS) field effect transistor. The safe operating area (SOA) and breakdown failure mechanism of 700V LDMOS are analyzed using simulations and experimental results. The scalability of thermal failure current with LDMOS width is also demonstrated."}
{"_id": "e34895325d62aa253f7a9034920ba6f3f1fc0906", "title": "Battery Pack Modeling , Simulation , and Deployment on a Multicore Real Time Target", "text": "Battery Management System (BMS) design is a complex task requiring sophisticated models that mimic the electrochemical behavior of the battery cell under a variety of operating conditions. Equivalent circuits are well-suited for this task because they offer a balance between fidelity and simulation speed, their parameters reflect direct experimental observations, and they are scalable. Scalability is particularly important at the real time simulation stage, where a model of the battery pack runs on a real-time simulator that is physically connected to the peripheral hardware in charge of monitoring and control. With modern battery systems comprising hundreds of cells, it is important to employ a modeling and simulation approach that is capable of handling numerous simultaneous instances of the basic unit cell while maintaining real time performance. In previous publications we presented a technique for the creation of a battery cell model that contains the electrochemical fingerprints of a battery cell based on equivalent circuit model fitting to experimental data. In this work we extend our previous model to represent a battery pack, featuring cell creation, placement, and connection using automation scripts, thus facilitating the design of packs of arbitrary size and electrical topology. In addition, we present an assessment of model partitioning schemes for real time execution on multicore targets to ensure efficient use of hardware resources, a balanced computational load, and a study of the potential impact of the calculation latencies inherent to distributed systems on solver accuracy. Prior to C code generation for real time execution, a model profiler assesses the model partitioning and helps determine the multicore configuration that results in the lowest average turnaround time, the time elapsed between task start and finish. The resulting model is useful in the generation of multiple operating scenarios of interest in the design of charging, balancing, and safety related procedures. CITATION: Gazzarri, J., Shrivastava, N., Jackey, R., and Borghesani, C., \"Battery Pack Modeling, Simulation, and Deployment on a Multicore Real Time Target,\" SAE Int. J. Aerosp. 7(2):2014, doi:10.4271/2014-01-2217. 2014-01-2217 Published 09/16/2014 Copyright \u00a9 2014 The MathWorks, Inc. doi:10.4271/2014-01-2217 saeaero.saejournals.org A second goal is to provide a scalable methodology capable of supporting any number of battery cell components configured in series or parallel. A MATLAB script creates, places, and connects each model component, including battery cell blocks and electrical components for the battery pack load, and partitions the model."}
{"_id": "06900e1849e604e9cedc3f4e7bae37932c661349", "title": "Deep learning for smart agriculture: Concepts, tools, applications, and opportunities", "text": "In recent years, Deep Learning (DL), such as the algorithms of Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN) and Generative Adversarial Networks (GAN), has been widely studied and applied in various fields including agriculture. Researchers in the fields of agriculture often use software frameworks without sufficiently examining the ideas and mechanisms of a technique. This article provides a concise summary of major DL algorithms, including concepts, limitations, implementation, training processes, and example codes, to help researchers in agriculture to gain a holistic picture of major DL techniques quickly. Research on DL applications in agriculture is summarized and analyzed, and future opportunities are discussed in this paper, which is expected to help researchers in agriculture to better understand DL algorithms and learn major DL techniques quickly, and further to facilitate data analysis, enhance related research in agriculture, and thus promote DL applications effectively."}
{"_id": "6ee29ea56fe068365fbc7aa69b14d3be8ee100ad", "title": "Using Object Information for Spotting Text", "text": "Text spotting, also called text detection, is a challenging computer vision task because of cluttered backgrounds, diverse imaging environments, various text sizes and similarity between some objects and characters, e.g., tyre and \u2019o\u2019. However, text spotting is a vital step in numerous AI and computer vision systems, such as autonomous robots and systems for visually impaired. Due to its potential applications and commercial values, researchers have proposed various deep architectures and methods for text spotting. These methods and architectures concentrate only on text in images, but neglect other information related to text. There exists a strong relationship between certain objects and the presence of text, such as signboards or the absence of text, such as trees. In this paper, a text spotting algorithm based on text and object dependency is proposed. The proposed algorithm consists of two subconvolutional neural networks and three training stages. For this study, a new NTU-UTOI dataset containing over 22k non-synthetic images with 277k bounding boxes for text and 42 text-related object classes is established. According to our best knowledge, it is the second largest nonsynthetic text image database. Experimental results on three benchmark datasets with clutter backgrounds, COCO-Text, MSRA-TD500 and SVT show that the proposed algorithm provides comparable performance to state-of-the-art text spotting methods. Experiments are also performed on our newly established dataset to investigate the effectiveness of object information for text spotting. The experimental results indicate that the object information contributes significantly on the performance gain."}
{"_id": "d3b493a52e4fda9d40f190ddf9541dbf51ed7658", "title": "A Mobile Application for Smart House Remote Control System", "text": "At the start of the second decade of 21th century, the time has come to make the Smart Houses a reallity for regular use. The different parts of a Smart House are researched but there are still distances from an applicable system, using the modern technology. In this paper we present an overview of the Smart House subsystems necessary for controlling the house using a mobile application efficiently and securely. The sequence diagram of the mobile application connectiing to the server application and also the usecases possible are presented. The challenges faced in designing the mobile application and illustrating the updated house top plane view in that application, are discussed and soloutions are adapted for it. Finally the designed mobile application was implemented and the important sections of it were described, such as the interactive house top view map which indicates the status of the devices using predefined icons. The facilities to manage the scheduled tasks and defined rules are also implemented in this mobile application that was developed for use in Windows Mobile platform. This application has the capability of connecting to the main server using GPRS mobile internet and SMS. This system is expected to be an important step towards a unified system structure that can be used efficiently in near future regular houses. Keywords\u2014Smart House, Mobile Application, Remote Control, Automated Home, Windows Mobile."}
{"_id": "d85692a61a5476b6dfa4e633ac1143d6e1060c71", "title": "Adaptive energy-aware free viewpoint video transmission over wireless networks", "text": "As consumer-level cameras are becoming cheaper and cheaper, the scene of interest can now be captured by multiple cameras simultaneously from different viewpoints. By transmitting texture and depth maps of the views captured using two adjacent cameras, users can synthesize any view in between using depth image based rendering (DIBR) technique such as 3D-warping. This provides users with continuous view angles and is key technique for Free Viewpoint TV (FTV), etc. In this scenario, multiple views must be delivered through bandwidth-limited wireless networks, how to adaptive reduce the source coding rate to fit the bandwidth becomes an important issue. The standard multi-view video codec H.264 MVC explores the inter-view and intra-view correlation to reduce source encoding rate. But all views are required at the client side for decoding any single view, which is apparently not transmission efficient. A better solution is using H.264 to differentially encode the two adjacent views separately, but still this requires double-folder bandwidth comparing to the traditional video streaming. Multiple description coding (MDC) encodes one view into multiple descriptions for robust transmission. This inspires another option: sending only part of each view. This can greatly reduce the source encoding rate, but interpolating skipped frames before synthesizing the middle view consumes mobile devices' precious battery life. Whether to send at a lower source encoding rate but with extra energy consumption or send at a high source encoding rate is nontrivial, and has not been studied in any formal way. In this paper, we investigated both source encoding rate reduction and energy consumption with the bandwidth constraint. We formulated the problem into an optimization problem, i.e., maximizing the system utility, which is defined as the received video quality minus the weighted energy consumption. Simulation results showed that our proposed scheme provides a significant advantage over competing schemes in typical network conditions."}
{"_id": "85621cd521a0647b42ad0c449a9d03b1bfb47697", "title": "Integral Channel Features \u2013 Addendum", "text": "This document is meant to serve as an addendum to [1], published at BMVC 2009. The purpose of this addendum is twofold: (1) to respond to feedback we\u2019ve received since publication and (2) to describe a number of changes, especially to the non-maximal suppression, that further improve performance. The performance of our updated detection increases 5% to over 91% detection rate at 1 false positive per image on the INRIA dataset, and similarly on the Caltech Pedestrian Dataset, while overall system runtime for multiscale detection decreases by 1/3 to just under 1.5s per 640\u00d7480 image. We begin by rectifying an important omission to the related work. Levi and Weiss had an innovative application of integral images to multiple image channels quite early on, demonstrating good results on face detection from few training examples [4]. This work appears to be the earliest such use of integral images, indeed the authors even describe a precursor to integral histograms. Many thanks to Mark Everingham for sending us this reference."}
{"_id": "5e9fafb0e70130417e04cf942a0a6f4b43ad583e", "title": "Decision trees and forests: a probabilistic perspective", "text": "Decision trees and ensembles of decision trees are very popular in machine learning and often achieve state-of-the-art performance on black-box prediction tasks. However, popular variants such as C4.5, CART, boosted trees and random forests lack a probabilistic interpretation since they usually just specify an algorithm for training a model. We take a probabilistic approach where we cast the decision tree structures and the parameters associated with the nodes of a decision tree as a probabilistic model; given labeled examples, we can train the probabilistic model using a variety of approaches (Bayesian learning, maximum likelihood, etc). The probabilistic approach allows us to encode prior assumptions about tree structures and share statistical strength between node parameters; furthermore, it offers a principled mechanism to obtain probabilistic predictions which is crucial for applications where uncertainty quantification is important. Existing work on Bayesian decision trees relies on Markov chain Monte Carlo which can be computationally slow and suffer from poor mixing. We propose a novel sequential Monte Carlo algorithm that computes a particle approximation to the posterior over trees in a top-down fashion. We also propose a novel sampler for Bayesian additive regression trees by combining the above top-down particle filtering algorithm with the Particle Gibbs (Andrieu et al., 2010) framework. Finally, we propose Mondrian forests (MFs), a computationally efficient hybrid solution that is competitive with non-probabilistic counterparts in terms of speed and accuracy, but additionally produces well-calibrated uncertainty estimates. MFs use the Mondrian process (Roy and Teh, 2009) as the randomization mechanism and hierarchically smooth the node parameters within each tree (using a hierarchical probabilistic model and approximate Bayesian updates), but combine the trees in a non-Bayesian fashion. MFs can be grown in an incremental/online fashion and remarkably, the distribution of online MFs is the same as that of batch MFs."}
{"_id": "44ae903e53b1969152b02cdd1d3eb4903b681dbb", "title": "Recent advances in LVCSR : A benchmark comparison of performances", "text": "Large Vocabulary Continuous Speech Recognition (LVCSR), which is characterized by a high variability of the speech, is the most challenging task in automatic speech recognition (ASR). Believing that the evaluation of ASR systems on relevant and common speech corpora is one of the key factors that help accelerating research, we present, in this paper, a benchmark comparison of the performances of the current state-of-the-art LVCSR systems over different speech recognition tasks. Furthermore, we put objectively into evidence the best performing technologies and the best accuracy achieved so far in each task. The benchmarks have shown that the Deep Neural Networks and Convolutional Neural Networks have proven their efficiency on several LVCSR tasks by outperforming the traditional Hidden Markov Models and Guaussian Mixture Models. They have also shown that despite the satisfying performances in some LVCSR tasks, the problem of large-vocabulary speech recognition is far from being solved in some others, where more research efforts are still needed."}
{"_id": "c419854fdb19ae07af2b7d5a7a6ce02f1de5f0e5", "title": "Active Learning in Collaborative Filtering Recommender Systems", "text": "In Collaborative Filtering Recommender Systems user\u2019s preferences are expressed in terms of rated items and each rating allows to improve system prediction accuracy. However, not all of the ratings bring the same amount of information about the user\u2019s tastes. Active Learning aims at identifying rating data that better reflects users\u2019 preferences. Active learning Strategies are used to selectively choose the items to present to the user in order to acquire her ratings and ultimately improve the recommendation accuracy. In this survey article, we review recent active learning techniques for collaborative filtering along two dimensions: (a) whether the system requested ratings are personalised or not, and, (b) whether active learning is guided by one criterion (heuristic) or multiple"}
{"_id": "414651bbf1520c560ea34175d98b4d688e08e0d9", "title": "Acoustical Sound Database in Real Environments for Sound Scene Understanding and Hands-Free Speech Recognition", "text": "This paper reports on a project for collection of the sound scene data. The sound scene data is necessary for studies such as sound source localization, sound retrieval, sound recognition and hands-free speech recognition in real acoustical environments. There are many kinds of sound scenes in real environments. The sound scene is denoted by sound sources and room acoustics. The number of combination of the sound sources, source positions and rooms is huge in real acoustical environments. However, the sound in the environments can be simulated by convolution of the isolated sound sources and impulse responses. As an isolated sound source, a hundred kinds of non-speech sounds and speech sounds are collected. The impulse responses are collected in various acoustical environments. In this paper, progress of our sound scene database project and application to environment sound recognition are described."}
{"_id": "8f64de9c6e3c52222896280c9fb19ff6c0a504ea", "title": "Teaching the science of learning", "text": "The science of learning has made a considerable contribution to our understanding of effective teaching and learning strategies. However, few instructors outside of the field are privy to this research. In this tutorial review, we focus on six specific cognitive strategies that have received robust support from decades of research: spaced practice, interleaving, retrieval practice, elaboration, concrete examples, and dual coding. We describe the basic research behind each strategy and relevant applied research, present examples of existing and suggested implementation, and make recommendations for further research that would broaden the reach of these strategies."}
{"_id": "dfc5ad31fddd4142ef0e5cf2d85cc62e40af7af9", "title": "Snitches, Trolls, and Social Norms: Unpacking Perceptions of Social Media Use for Crime Prevention", "text": "In this paper, we describe how people perceive the use of social media to support crime prevention in their communities. Based on survey and interview data from residents in high- and low-crime neighborhoods in Chicago, we found that African Americans, people from high-crime neighborhoods, and people with low levels of trust in local police are less likely to view social media as an appropriate tool to support citizens or the police in local crime prevention efforts. Residents' concerns include information getting into the wrong hands, trolls, and being perceived as a snitch. Despite concerns with usage, citizens also viewed social media as a tool that can supplement in-person crime prevention efforts and facilitate relationship-building and information sharing. We discuss the complexities of hyper-local usage of social media to combat crime by describing the social and historical contexts in which these tools exist."}
{"_id": "b8a0cfa55b3393de4cc600d115cf6adb49bfa4ee", "title": "Web Service SWePT: A Hybrid Opinion Mining Approach", "text": "The increasing use of social networks and online sites where people can express their opinions has created a growing interest in Opinion Mining. One of the main tasks of Opinion Mining is to determine whether an opinion is positive or negative. Therefore, the role of the feelings expressed on the web has become crucial, mainly due to the concern of businesses and government to automatically identify the semantic orientation of the views of customers or citizens. This is also a concern, in the area of health to identify psychological disorders. This research focuses on the development of a web application called SWePT (Web Service for Polarity detection in Spanish Texts), which implements the Sequential Minimal Optimization (SMO) algorithm, extracting its features from an affective lexicon in Mexican Spanish. For this purpose, a corpus and an affective lexicon in Mexican Spanish were created. The experiments using three (positive, neutral, negative) and five categories (very positive, positive, neutral, negative, and very negative) allow us to demonstrate the effectiveness of the presented method. SWePT has also been implemented in the Emotion-bracelet interface, which shows the opinion of a user graphically."}
{"_id": "664e3e84dee394701241ce31222b985268ea005d", "title": "Spatio-Temporal Anomaly Detection for Industrial Robots through Prediction in Unsupervised Feature Space", "text": "Spatio-temporal anomaly detection by unsupervised learning have applications in a wide range of practical settings. In this paper we present a surveillance system for industrial robots using a monocular camera. We propose a new unsupervised learning method to train a deep feature extractor from unlabeled images. Without any data augmentation, the algorithm co-learns the network parameters on different pseudo-classes simultaneously to create unbiased feature representation. Combining the learned features with a prediction system, we can detect irregularities in high dimensional data feed (e.g. video of a robot performing pick and place task). The results show how the proposed approach can detect previously unseen anomalies in the robot surveillance video. Although the technique is not designed for classification, we show the use of the learned features in a more traditional classification application for CIFAR-10 dataset."}
{"_id": "c337cadc3cbb9e51e425b12afd7bad6dae5700c0", "title": "A Cloud-based Approach for Interoperable Electronic Health Records (EHRs)", "text": "We present a cloud-based approach for the design of interoperable electronic health record (EHR) systems. Cloud computing environments provide several benefits to all the stakeholders in the healthcare ecosystem (patients, providers, payers, etc.). Lack of data interoperability standards and solutions has been a major obstacle in the exchange of healthcare data between different stakeholders. We propose an EHR system - cloud health information systems technology architecture (CHISTAR) that achieves semantic interoperability through the use of a generic design methodology which uses a reference model that defines a general purpose set of data structures and an archetype model that defines the clinical data attributes. CHISTAR application components are designed using the cloud component model approach that comprises of loosely coupled components that communicate asynchronously. In this paper, we describe the high-level design of CHISTAR and the approaches for semantic interoperability, data integration, and security."}
{"_id": "a8b3c3d49c09d32cab3b97226b811b2ab0790923", "title": "Feature Learning for Detection and Prediction of Freezing of Gait in Parkinson's Disease", "text": "Freezing of gait (FoG) is a common gait impairment among patients with advanced Parkinson\u2019s disease. FoG is associated with falls and negatively impact the patient\u2019s quality of life. Wearable systems that detect FoG have been developed to help patients resume walking by means of auditory cueing. However, current methods for automated detection are not yet ideal. In this paper, we first compare feature learning approaches based on time-domain and statistical features to unsupervised ones based on principal components analysis. The latter systematically outperforms the former and also the standard in the field \u2013 Freezing Index by up to 8.1% in terms of F1-measure for FoG detection. We go a step further by analyzing FoG prediction, i.e., identification of patterns (pre-FoG) occurring before FoG episodes, based only on motion data. Until now this was only attempted using electroencephalography. With respect to the three-class problem (FoG vs. pre-FoG vs. normal locomotion), we show that FoG prediction performance is highly patient-dependent, reaching an F1-measure of 56% in the pre-FoG class for patients who exhibit enough gait degradation before FoG."}
{"_id": "a1ae532a9ae1cd11b138d9a83cee2d152b91186e", "title": "Reading Between the Lines: Content-Agnostic Detection of Spear-Phishing Emails", "text": "Spear-phishing is an effective attack vector for infiltrating companies and organisations. Based on the multitude of personal information available online, an attacker can craft seemingly legit emails and trick his victims into opening malicious attachments and links. Although anti-spoofing techniques exist, their adoption is still limited and alternative protection approaches are needed. In this paper, we show that a sender leaves content-agnostic traits in the structure of an email. Based on these traits, we develop a method capable of learning profiles for a large set of senders and identifying spoofed emails as deviations thereof. We evaluate our approach on over 700,000 emails from 16,000 senders and demonstrate that it can discriminate thousands of senders, identifying spoofed emails with 90% detection rate and less than 1 false positive in 10,000 emails. Moreover, we show that individual traits are hard to guess and spoofing only succeeds if entire emails of the sender are available to the attacker."}
{"_id": "e5646254bff42180ac8f9143d8c96b6f9817e0aa", "title": "Deep Neural Network Based Subspace Learning of Robotic Manipulator Workspace Mapping", "text": "The manipulator workspace mapping is an important problem in robotics and has attracted significant attention in the community. However, most of the pre-existing algorithms have expensive time complexity due to the reliance on sophisticated kinematic equations. To solve this problem, this paper introduces subspace learning (SL), a variant of subspace embedding, where a set of robot and scope parameters is mapped to the corresponding workspace by a deep neural network (DNN). Trained on a large dataset of around 6\u00d7 10 samples obtained from a MATLAB R \u00a9 implementation of a classical method and sampling of designed uniform distributions, the experiments demonstrate that the embedding significantly reduces run-time from 5.23\u00d7 10 s of traditional discretization method to 0.224 s, with high accuracies (average F-measure is 0.9665 with batch gradient descent and resilient backpropagation)."}
{"_id": "8954290aa454ee4c4b3f1dafbe2b66287899ed83", "title": "Grasping the Intentions of Others: The Perceived Intentionality of an Action Influences Activity in the Superior Temporal Sulcus during Social Perception", "text": "An explication of the neural substrates for social perception is an important component in the emerging field of social cognitive neuroscience and is relevant to the field of cognitive neuroscience as a whole. Prior studies from our laboratory have demonstrated that passive viewing of biological motion (Pelphrey, Mitchell, et al., 2003; Puce et al., 1998) activates the posterior superior temporal sulcus (STS) region. Furthermore, recent evidence has shown that the perceived context of observed gaze shifts (Pelphrey, Singerman, et al., 2003; Pelphrey et al., 2004) modulates STS activity. Here, using event-related functional magnetic resonance imaging at 4 T, we investigated brain activity in response to passive viewing of goal- and non-goal- directed reaching-to-grasp movements. Participants viewed an animated character making reaching-to-grasp movements either toward (correct) or away (incorrect) from a blinking dial. Both conditions evoked significant posterior STS activity that was strongly right lateralized. By examining the time course of the blood oxygenation level-dependent response from areas of activation, we observed a functional dissociation. Incorrect trials evoked significantly greater activity in the STS than did correct trials, while an area posterior and inferior to the STS (likely corresponding to the MT/V5 complex) responded equally to correct and incorrect movements. Parietal cortical regions, including the superior parietal lobule and the anterior intraparietal sulcus, also responded equally to correct and incorrect movements, but showed evidence for differential responding based on the hand and arm (left or right) of the animated character used to make the reaching-to-grasp movement. The results of this study further suggest that a region of the right posterior STS is involved in analyzing the intentions of other people's actions and that activity in this region is sensitive to the context of observed biological motions."}
{"_id": "7dee0155d17ca046b69e51fecd62a057329add01", "title": "Modeling Avoidance in Mood and Anxiety Disorders Using Reinforcement Learning", "text": "BACKGROUND\nSerious and debilitating symptoms of anxiety are the most common mental health problem worldwide, accounting for around 5% of all adult years lived with disability in the developed world. Avoidance behavior-avoiding social situations for fear of embarrassment, for instance-is a core feature of such anxiety. However, as for many other psychiatric symptoms the biological mechanisms underlying avoidance remain unclear.\n\n\nMETHODS\nReinforcement learning models provide formal and testable characterizations of the mechanisms of decision making; here, we examine avoidance in these terms. A total of 101 healthy participants and individuals with mood and anxiety disorders completed an approach-avoidance go/no-go task under stress induced by threat of unpredictable shock.\n\n\nRESULTS\nWe show an increased reliance in the mood and anxiety group on a parameter of our reinforcement learning model that characterizes a prepotent (pavlovian) bias to withhold responding in the face of negative outcomes. This was particularly the case when the mood and anxiety group was under stress.\n\n\nCONCLUSIONS\nThis formal description of avoidance within the reinforcement learning framework provides a new means of linking clinical symptoms with biophysically plausible models of neural circuitry and, as such, takes us closer to a mechanistic understanding of mood and anxiety disorders."}
{"_id": "152086bea7688c533794c0076bfa210ce1031bfc", "title": "Semantic Annotation and Reasoning for Sensor Data", "text": "Developments in (wireless) sensor and actuator networks and the capabilities to manufacture low cost and energy efficient networked embedded devices have lead to considerable interest in adding real world sense to the Internet and the Web. Recent work has raised the idea towards combining the Internet of Things (i.e. real world resources) with semantic Web technologies to design future service and applications for the Web. In this paper we focus on the current developments and discussions on designing Semantic Sensor Web, particularly, we advocate the idea of semantic annotation with the existing authoritative data published on the semantic Web. Through illustrative examples, we demonstrate how rule-based reasoning can be performed over the sensor observation and measurement data and linked data to derive additional or approximate knowledge. Furthermore, we discuss the association between sensor data, the semantic Web, and the social Web which enable construction of context-aware applications and services, and contribute to construction of a networked knowledge framework."}
{"_id": "158463840d4beed75e5a821e218526cd4d4d6801", "title": "The SSN ontology of the W3C semantic sensor network incubator group", "text": "The W3C Semantic Sensor Network Incubator group (the SSN-XG) produced an OWL 2 ontology to describe sensors and observations \u2014 the SSN ontology, available at http://purl.oclc.org/NET/ssnx/ssn. The SSN ontology can describe sensors in terms of capabilities, measurement processes, observations and deployments. This article describes the SSN ontology. It further gives an example and describes the use of the ontology in recent research projects."}
{"_id": "5b081ed14184ca48da725032b1022c23669cc2be", "title": "SemSOS: Semantic sensor Observation Service", "text": "Sensor Observation Service (SOS) is a Web service specification defined by the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) group in order to standardize the way sensors and sensor data are discovered and accessed on the Web. This standard goes a long way in providing interoperability between repositories of heterogeneous sensor data and applications that use this data. Many of these applications, however, are ill equipped at handling raw sensor data as provided by SOS and require actionable knowledge of the environment in order to be practically useful. There are two approaches to deal with this obstacle, make the applications smarter or make the data smarter. We propose the latter option and accomplish this by leveraging semantic technologies in order to provide and apply more meaningful representation of sensor data. More specifically, we are modeling the domain of sensors and sensor observations in a suite of ontologies, adding semantic annotations to the sensor data, using the ontology models to reason over sensor observations, and extending an open source SOS implementation with our semantic knowledge base. This semantically enabled SOS, or SemSOS, provides the ability to query high-level knowledge of the environment as well as low-level raw sensor data."}
{"_id": "961b8e95e4b360e5d95ef79a21958540d4e551ab", "title": "New Generation Sensor Web Enablement", "text": "Many sensor networks have been deployed to monitor Earth's environment, and more will follow in the future. Environmental sensors have improved continuously by becoming smaller, cheaper, and more intelligent. Due to the large number of sensor manufacturers and differing accompanying protocols, integrating diverse sensors into observation systems is not straightforward. A coherent infrastructure is needed to treat sensors in an interoperable, platform-independent and uniform way. The concept of the Sensor Web reflects such a kind of infrastructure for sharing, finding, and accessing sensors and their data across different applications. It hides the heterogeneous sensor hardware and communication protocols from the applications built on top of it. The Sensor Web Enablement initiative of the Open Geospatial Consortium standardizes web service interfaces and data encodings which can be used as building blocks for a Sensor Web. This article illustrates and analyzes the recent developments of the new generation of the Sensor Web Enablement specification framework. Further, we relate the Sensor Web to other emerging concepts such as the Web of Things and point out challenges and resulting future work topics for research on Sensor Web Enablement."}
{"_id": "a1cb4845c6cb1fa0ee2661f87ccf1a9ece93cc69", "title": "An Internet of Things Platform for Real-World and Digital Objects", "text": "The vision of the Internet of Things (IoT) relies on the provisioning of real-world services, which are provided by smart objects that are directly related to the physical world. A structured, machine-processible approach to provision such real-world services is needed to make heterogeneous physical objects accessible on a large scale and to integrate them with the digital world. The incorporation of observation and measurement data obtained from the physical objects with the Web data, using information processing and knowledge engineering methods, enables the construction of \u201dintelligent and interconnected things\u201d. The current research mostly focuses on the communication and networking aspects between the devices that are used for sensing amd measurement of the real world objects. There is, however, relatively less effort concentrated on creating dynamic infrastructures to support integration of the data into the Web and provide unified access to such data on service and application levels. This paper presents a semantic modelling and linked data approach to create an information framework for IoT. The paper describes a platform to publish instances of the IoT related resources and entities and to link them to existing resources on the Web. The developed platform supports publication of extensible and interoperable descriptions in the form of linked data."}
{"_id": "0effabe9c90862b3b26dbc7e023bb0515028da51", "title": "metaSEM: an R package for meta-analysis using structural equation modeling", "text": "The metaSEM package provides functions to conduct univariate, multivariate, and three-level meta-analyses using a structural equation modeling (SEM) approach via the OpenMx package in the R statistical platform. It also implements the two-stage SEM approach to conducting fixed- and random-effects meta-analytic SEM on correlation or covariance matrices. This paper briefly outlines the theories and their implementations. It provides a summary on how meta-analyses can be formulated as structural equation models. The paper closes with a conclusion on several relevant topics to this SEM-based meta-analysis. Several examples are used to illustrate the procedures in the supplementary material."}
{"_id": "a29feffbb8b410c55c9b24f51e8cb5071911b3a4", "title": "User-friendly 3D object manipulation gesture using kinect", "text": "With the rapid development and wide spread of the virtual reality technology, we can easily find VR systems in various places such as school, library and home. Translation and rotation, the most frequently used gestures for manipulating objects in the real world, need to be implemented in order to make a user feel comfortable while manipulating 3D objects in the systems. In this paper, we propose a set of user-friendly 3D object manipulation gestures and develop a recognizer using Kinect for the VR systems. The usefulness and suitableness of the proposed system is shown from the user study performed."}
{"_id": "357bc607f40abeb75b0c92119c35b698b8f6cd8f", "title": "SmartPhoto: A Resource-Aware Crowdsourcing Approach for Image Sensing with Smartphones", "text": "Photos obtained via crowdsourcing can be used in many critical applications. Due to the limitations of communication bandwidth, storage and processing capability, it is a challenge to transfer the huge amount of crowdsourced photos. To address this problem, we propose a framework, called SmartPhoto, to quantify the quality (utility) of crowdsourced photos based on the accessible geographical and geometrical information (called metadata) including the smartphone's orientation, position and all related parameters of the built-in camera. From the metadata, we can infer where and how the photo is taken, and then only transmit the most useful photos. Three optimization problems regarding the tradeoffs between photo utility and resource constraints, namely the Max-Utility problem, the online Max-Utility problem and the Min-Selection problem, are studied. Efficient algorithms are proposed and their performance bounds are theoretically proved. We have implemented SmartPhoto in a testbed using Android based smartphones, and proposed techniques to improve the accuracy of the collected metadata by reducing sensor reading errors and solving object occlusion issues. Results based on real implementations and extensive simulations demonstrate the effectiveness of the proposed algorithms."}
{"_id": "339a6951822855cbf86420978f877d58cd1a7aac", "title": "Ambiguity as a resource for design", "text": "Ambiguity is usually considered anathema in Human Computer Interaction. We argue, in contrast, that it is a resource for design that can be used to encourage close personal engagement with systems. We illustrate this with examples from contemporary arts and design practice, and distinguish three broad classes of ambiguity according to where uncertainty is located in the interpretative relationship linking person and artefact. Ambiguity of information finds its source in the artefact itself, ambiguity of context in the sociocultural discourses that are used to interpret it, and ambiguity of relationship in the interpretative and evaluative stance of the individual. For each of these categories, we describe tactics for emphasising ambiguity that may help designers and other practitioners understand and craft its use."}
{"_id": "310fe4e6cb6d090f7817de4c1034e35567b56e34", "title": "Robust Multi-pose Facial Expression Recognition", "text": "Previous research on facial expression recognition mainly focuses on near frontal face images, while in realistic interactive scenarios, the interested subjects may appear in arbitrary non-frontal poses. In this paper, we propose a framework to recognize six prototypical facial expressions, namely, anger, disgust, fear, joy, sadness and surprise, in an arbitrary head pose. We build a multi-pose training set by rendering 3D face scans from the BU-4DFE dynamic facial expression database [17] at 49 different viewpoints. We extract Local Binary Pattern (LBP) descriptors and further utilize multiple instance learning to mitigate the influence of inaccurate alignment in this challenging task. Experimental results demonstrate the power and validate the effectiveness of the proposed multi-pose facial expression recognition framework."}
{"_id": "41034ac4b9defdb50b98eeb6836679f375b6644e", "title": "Microservices validation: Mjolnirr platform case study", "text": "Microservice architecture is a cloud application design pattern that implies that the application is divided into a number of small independent services, each of which is responsible for implementing of a certain feature. The need for continuous integration of developed and/or modified microservices in the existing system requires a comprehensive validation of individual microservices and their co-operation as an ensemble with other microservices. In this paper, we would provide an analysis of existing methods of cloud applications testing and identify features that are specific to the microservice architecture. Based on this analysis, we will try to propose a validation methodology of the microservice systems."}
{"_id": "1f09b639227b5ff588f8b885dad5474742affff1", "title": "Robust Nonrigid Registration by Convex Optimization", "text": "We present an approach to nonrigid registration of 3D surfaces. We cast isometric embedding as MRF optimization and apply efficient global optimization algorithms based on linear programming relaxations. The Markov random field perspective suggests a natural connection with robust statistics and motivates robust forms of the intrinsic distortion functional. Our approach outperforms a large body of prior work by a significant margin, increasing registration precision on real data by a factor of 3."}
{"_id": "f176b7177228c1a18793cf922455545d408a65ae", "title": "Active Damping in DC/DC Power Electronic Converters: A Novel Method to Overcome the Problems of Constant Power Loads", "text": "Multi-converter power electronic systems exist in land, sea, air, and space vehicles. In these systems, load converters exhibit constant power load (CPL) behavior for the feeder converters and tend to destabilize the system. In this paper, the implementation of novel active-damping techniques on dc/dc converters has been shown. Moreover, the proposed active-damping method is used to overcome the negative impedance instability problem caused by the CPLs. The effectiveness of the new proposed approach has been verified by PSpice simulations and experimental results."}
{"_id": "a083ba4f23dd2e5229f6009253e28ef4f81759e7", "title": "Extracting Social Structures from Conversations in Twitter: A Case Study on Health-Related Posts", "text": "Online Social Networks (e.g., Twitter, Facebook) have dramatically grown in usage and popularity in recent years. In addition to keeping track of friends and acquaintances, these networks provide powerful means of exchanging and sharing information on many different topics of interest (e.g., sports, religion, politics, health concerns, etc.). Moreover, the use of these networks has introduced a completely new way of collaboration among people, virtually creating spheres of friends who generally feel comfortable discussing a variety of subjects and even helping each other to grow in knowledge about certain subjects. In this work, we built and analyzed networks of social groups on Twitter related to the top leading causes of death in the United States. Due to space limitations, we present results for the state of Florida and only for the top four leading causes of death. We show that using a concept of time window in the creation of relations between users, we can reconstruct social networks for these conversations and these networks have characteristics that are similar to typical social networks. The argument is that social network information can be extracted even in cases where users are not directly talking to each other (as it is the case in most of Twitter)."}
{"_id": "5bf4644c104ac6778a0aa07418321b14e0010e81", "title": "HCI and Autonomous Vehicles: Contextual Experience Informs Design", "text": "The interaction between drivers and their cars will change significantly with the introduction of autonomous vehicles. The driver's role will shift towards a supervisory control of their autonomous vehicle. The eventual relief from the driving task enables a complete new area of research and practice in human-computer interaction and interaction design. In this one-day workshop, participants will explore the opportunities the design space of autonomous driving will bring to HCI researchers and designers. On the day before workshop participants are invited to visit (together with workshop organizers) Google Partnerplex and Stanford University. At Google participants will have the opportunity to explore Google's autonomous car simulator and might have the chance to experience one of the Google Cars (if available). At Stanford participants are invited to ride in a Wizard-of-Oz autonomous vehicle. Based on this first-hand experience we will discuss design approaches and prototype interaction systems during the next day's workshop. The outcome of this workshop will be a set of concepts, interaction sketches, and low-fidelity paper prototypes that address constraints and potentials of driving in an autonomous car."}
{"_id": "8887690f0abd32d2d708995b1c85667eae0de753", "title": "Using BabelNet to Improve OOV Coverage in SMT", "text": "Out-of-vocabulary words (OOVs) are a ubiquitous and difficult problem in statistical machine translation (SMT). This paper studies different strategies of using BabelNet to alleviate the negative impact brought about by OOVs. BabelNet is a multilingual encyclopedic dictionary and a semantic network, which not only includes lexicographic and encyclopedic terms, but connects concepts and named entities in a very large network of semantic relations. By taking advantage of the knowledge in BabelNet, three different methods \u2013 using direct training data, domain-adaptation techniques and the BabelNet API \u2013 are proposed in this paper to obtain translations for OOVs to improve system performance. Experimental results on English\u2013Polish and English\u2013Chinese language pairs show that domain adaptation can better utilize BabelNet knowledge and performs better than other methods. The results also demonstrate that BabelNet is a really useful tool for improving translation performance of SMT systems."}
{"_id": "e58506ef0f6721729d2f72c61e6bb46565b887de", "title": "Path selection based on local terrain feature for unmanned ground vehicle in unknown rough terrain environment", "text": "In this paper, we propose an autonomous navigation for Unmanned Ground Vehicles (UGVs) by a path selection method based on local features of terrain in unknown outdoor rough terrain environment. The correlation between a local terrain feature obtained from a path and a value of the path obtained from the global path planning is extracted in advance. When UGV comes to a branch while it is running, the value of path is estimated using the correlation with local terrain feature. Thereby, UGV navigation is performed by path selection under the criterion of shortest time in unknown outdoor environment. We experimented on a simulator and confirmed that the proposed method can select more effective paths in comparison with a simple path selection method."}
{"_id": "cc2fb12eaa4dae74c5de0799b29624b5c585c43b", "title": "Behavioral Cloning from Observation", "text": "Humans often learn how to perform tasks via imitation: they observe others perform a task, and then very quickly infer the appropriate actions to take based on their observations. While extending this paradigm to autonomous agents is a wellstudied problem in general, there are two particular aspects that have largely been overlooked: (1) that the learning is done from observation only (i.e., without explicit action information), and (2) that the learning is typically done very quickly. In this work, we propose a two-phase, autonomous imitation learning technique called behavioral cloning from observation (BCO), that aims to provide improved performance with respect to both of these aspects. First, we allow the agent to acquire experience in a self-supervised fashion. This experience is used to develop a model which is then utilized to learn a particular task by observing an expert perform that task without the knowledge of the specific actions taken. We experimentally compare BCO to imitation learning methods, including the state-of-the-art, generative adversarial imitation learning (GAIL) technique, and we show comparable task performance in several different simulation domains while exhibiting increased learning speed after expert trajectories become available."}
{"_id": "43c46282c1f4cf2e718387d2c136a66d9c832cd1", "title": "Inference Over Distribution of Posterior Class Probabilities for Reliable Bayesian Classification and Object-Level Perception", "text": "State of the art Bayesian classification approaches typically maintain a posterior distribution over possible classes given available sensor observations (images). Yet, while these approaches fuse all classifier outputs thus far, they do not provide any indication regarding how reliable the posterior classification is, thus limiting its functionality in terms of autonomous systems and robotics. On the other hand, current deep learning based classifiers provide an uncertainty measure, thereby quantifying model uncertainty. However, they do so on a single frame basis and do not consider a sequential framework. In this letter, we develop a novel approach that infers a distribution over posterior class probabilities, while accounting for model uncertainty. This distribution enables reasoning about uncertainty in the posterior classification and, therefore, is of prime importance for robust classification, object-level perception in uncertain and ambiguous scenarios, and for safe autonomy in general. The distribution of the posterior class probability has no known analytical solution; thus, we propose to approximate this distribution via sampling. We evaluate our approach in simulation and using real images fed into a convolutional neural network classifier."}
{"_id": "564fb39c07fcd91594bada44e803478d52231928", "title": "Protecting Your Right: Verifiable Attribute-Based Keyword Search with Fine-Grained Owner-Enforced Search Authorization in the Cloud", "text": "Search over encrypted data is a critically important enabling technique in cloud computing, where encryption-before-outsourcing is a fundamental solution to protecting user data privacy in the untrusted cloud server environment. Many secure search schemes have been focusing on the single-contributor scenario, where the outsourced dataset or the secure searchable index of the dataset are encrypted and managed by a single owner, typically based on symmetric cryptography. In this paper, we focus on a different yet more challenging scenario where the outsourced dataset can be contributed from multiple owners and are searchable by multiple users, i.e., multi-user multi-contributor case. Inspired by attribute-based encryption (ABE), we present the first attribute-based keyword search scheme with efficient user revocation (ABKS-UR) that enables scalable fine-grained (i.e., file-level) search authorization. Our scheme allows multiple owners to encrypt and outsource their data to the cloud server independently. Users can generate their own search capabilities without relying on an always online trusted authority. Fine-grained search authorization is also implemented by the owner-enforced access policy on the index of each file. Further, by incorporating proxy re-encryption and lazy re-encryption techniques, we are able to delegate heavy system update workload during user revocation to the resourceful semi-trusted cloud server. We formalize the security definition and prove the proposed ABKS-UR scheme selectively secure against chosen-keyword attack. To build confidence of data user in the proposed secure search system, we also design a search result verification scheme. Finally, performance evaluation shows the efficiency of our scheme."}
{"_id": "8e79e46513e83bad37a029d1c49fca4a1c204738", "title": "Learning Structured Natural Language Representations for Semantic Parsing", "text": "We introduce a neural semantic parser which is interpretable and scalable. Our model converts natural language utterances to intermediate, domain-general natural language representations in the form of predicate-argument structures, which are induced with a transition system and subsequently mapped to target domains. The semantic parser is trained end-to-end using annotated logical forms or their denotations. We achieve the state of the art on SPADES and GRAPHQUESTIONS and obtain competitive results on GEOQUERY and WEBQUESTIONS. The induced predicate-argument structures shed light on the types of representations useful for semantic parsing and how these are different from linguistically motivated ones.1"}
{"_id": "a1d326e7710cb9a1464ef52ca557a20ea5aa7e91", "title": "A four-band dual-polarized cavity-backed antenna on LTCC technology for 60GHz applications", "text": "In this work, we present a 4-band dual-polarized antenna designed for 8-chanel applications. Based on LTCC technology, the antenna is a patch coupled aperture with a backed cavity. Each antenna element of a designated band contains two channels through two orthogonally polarized ports. Combining four dual-polarized antenna elements under different frequency, an 8-channel antenna for 60GHz applications can be achieved. The array antenna contains 8 feeding ports, which correspond to 8 independent channels. The isolation between each port can reach 20dB in most of the frequency band."}
{"_id": "1f376c10b20319121102db78e7790cf47d8fa046", "title": "Optimal Reconfiguration for Supply Restoration With Informed A$^{\\ast}$ Search", "text": "Reconfiguration of radial distribution networks is the basis of supply restoration after faults and of load balancing and loss minimization. The ability to automatically reconfigure the network quickly and efficiently is a key feature of autonomous and self-healing networks, an important part of the future vision of smart grids. We address the reconfiguration problem for outage recovery, where the cost of the switching actions dominates the overall cost: when the network reverts to its normal configuration relatively quickly, the electricity loss and the load imbalance in a temporary suboptimal configuration are of minor importance. Finding optimal feeder configurations under most optimality criteria is a difficult optimization problem. All known complete optimal algorithms require an exponential time in the network size in the worst case, and cannot be guaranteed to scale up to arbitrarily large networks. Hence most works on reconfiguration use heuristic approaches that can deliver solutions but cannot guarantee optimality. These approaches include local search, such as tabu search, and evolutionary algorithms. We propose using optimal informed search algorithms in the A family, introduce admissible heuristics for reconfiguration, and demonstrate empirically the efficiency of our approach. Combining A with admissible cost lower bounds guarantees that reconfiguration plans are optimal in terms of switching action costs."}
{"_id": "30a1f82da02441d099bf15ca026751fcd76c8dee", "title": "Additive Quantization for Extreme Vector Compression", "text": "We introduce a new compression scheme for high-dimensional vectors that approximates the vectors using sums of M codewords coming from M different codebooks. We show that the proposed scheme permits efficient distance and scalar product computations between compressed and uncompressed vectors. We further suggest vector encoding and codebook learning algorithms that can minimize the coding error within the proposed scheme. In the experiments, we demonstrate that the proposed compression can be used instead of or together with product quantization. Compared to product quantization and its optimized versions, the proposed compression approach leads to lower coding approximation errors, higher accuracy of approximate nearest neighbor search in the datasets of visual descriptors, and lower image classification error, whenever the classifiers are learned on or applied to compressed vectors."}
{"_id": "f5b8894cf0606b991a913b84a2a3e8b43e4c32de", "title": "Toward Efficient Action Recognition: Principal Backpropagation for Training Two-Stream Networks", "text": "In this paper, we propose the novel principal backpropagation networks (PBNets) to revisit the backpropagation algorithms commonly used in training two-stream networks for video action recognition. We content that existing approaches always take all the frames/snippets for the backpropagation not optimal for video recognition since the desired actions only occur in a short period within a video. To remedy these drawbacks, we design a watch-and-choose mechanism. In particular, the watching stage exploits a dense snippet-wise temporal pooling strategy to discover the global characteristic for each input video, while the choosing phase only backpropagates a small number of representative snippets that are selected with two novel strategies, i.e., Max-rule and KL-rule. We prove that with the proposed selection strategies, performing the backpropagation on the selected subset is capable of decreasing the loss of the whole snippets as well. The proposed PBNets are evaluated on two standard video action recognition benchmarks UCF101 and HMDB51, where it surpasses the state of the arts consistently, but requiring less memory and computation to achieve high performance."}
{"_id": "291001586b37fbd38e1887378586e40757bea499", "title": "Idea Generation Techniques among Creative Professionals", "text": "The creative process has been a key topic research over the past century, but it wasn't until the last decade that creativity became a hot topic of research in the HCI. It is an important commodity to businesses and individuals alike spawning numerous research studies in business, psychology and design. However, it wasn't until recently that researchers became interested in developing technologies to support creative behavior. This article outlines the role of creativity in design from the designer's perspective, provides a model for the creative process and provides a foundation and direction for future creativity support research by identifying nineteen idea generation techniques utilized by creative professionals."}
{"_id": "0aae0d373f652fd39f0373199b0416f518d6eb8b", "title": "MusicMiner : Visualizing timbre distances of music as topographical maps", "text": "Timbre distances and similarities are an expression of the phenomenon that some music appears similar while other songs sound very different to us. The notion of genre is often used to categorize music, but songs from a single genre do not necessarily sound similar and vice versa. Instead we aim at a visualization of timbre similarities of sound within a music collection. We analyzed and compared a large amount of different audio features and psychoacoustic variants thereof for the purpose of modelling timbre distance of sound. The sound of polyphonic music is commonly described by extracting audio features on short time windows during which the sound is assumed to be stationary. The resulting down sampled time series are aggregated to form a high level feature vector describing the music. We generated high level features by systematically applying static and temporal statistics for aggregation. Especially the temporal structure of features has previously been largely neglected. A novel supervised feature selection method is applied to the huge set of possible features. Distances between vectors of the selected features correspond to timbre differences in music. The features show few redundancies and have high potential for explaining possible clusters. They outperform seven other previously proposed feature sets on several datasets w.r.t. the separation of the known groups of timbrally different music. Clustering and visualization based on these feature vectors can discover emergent structures in collections of music. Visualization based on Emergent Self-Organizing Maps in particular enables the unsupervised discovery of timbrally consistent clusters that may or may not correspond to musical genres and artists. We demonstrate the visualizations capabilities of the U-Map and related methods based on the new audio features. An intuitive browsing of large music collections is offered based on the paradigm of topographic maps. The user can navigate the sound space and interact with the maps to play music or show the context of a song. Data Bionics Research Group, Philipps-University Marburg, 35032 Marburg, Germany"}
{"_id": "6674729287f2482eda9e836846d2a35e63ea401c", "title": "Rank Pooling for Action Recognition", "text": "We propose a function-based temporal pooling method that captures the latent structure of the video sequence data - e.g., how frame-level features evolve over time in a video. We show how the parameters of a function that has been fit to the video data can serve as a robust new video representation. As a specific example, we learn a pooling function via ranking machines. By learning to rank the frame-level features of a video in chronological order, we obtain a new representation that captures the video-wide temporal dynamics of a video, suitable for action recognition. Other than ranking functions, we explore different parametric models that could also explain the temporal changes in videos. The proposed functional pooling methods, and rank pooling in particular, is easy to interpret and implement, fast to compute and effective in recognizing a wide variety of actions. We evaluate our method on various benchmarks for generic action, fine-grained action and gesture recognition. Results show that rank pooling brings an absolute improvement of 7-10 average pooling baseline. At the same time, rank pooling is compatible with and complementary to several appearance and local motion based methods and features, such as improved trajectories and deep learning features."}
{"_id": "259d69ba1400310016f6c15dadbd9c509ff73723", "title": "Activity sensing in the wild: a field trial of ubifit garden", "text": "Recent advances in small inexpensive sensors, low-power processing, and activity modeling have enabled applications that use on-body sensing and machine learning to infer people's activities throughout everyday life. To address the growing rate of sedentary lifestyles, we have developed a system, UbiFit Garden, which uses these technologies and a personal, mobile display to encourage physical activity. We conducted a 3-week field trial in which 12 participants used the system and report findings focusing on their experiences with the sensing and activity inference. We discuss key implications for systems that use on-body sensing and activity inference to encourage physical activity."}
{"_id": "c995c383b26d36e823a9995c94835596ce96f665", "title": "Semantic Space: an infrastructure for smart spaces", "text": "Semantic Space is a pervasive computing infrastructure that exploits semantic Web technologies to support explicit representation, expressive querying, and flexible reasoning of contexts in smart spaces."}
{"_id": "06d35d33fdd15a2efa6d42fdec731f9ac12dfaf3", "title": "Dyscalculia : Characteristics , Causes , and Treatments", "text": "Developmental Dyscalculia (DD) is a learning disorder affecting the ability to acquire school-level arithmetic skills, affecting approximately 3-6% of individuals. Progress in understanding the root causes of DD and how best to treat it have been impeded by lack of widespread research and variation in characterizations of the disorder across studies. However, recent years have witnessed significant growth in the field, and a growing body of behavioral and neuroimaging evidence now points to an underlying deficit in the representation and processing of numerical magnitude information as a potential core deficit in DD. An additional product of the recent progress in understanding DD is the resurgence of a distinction between \u2018primary\u2019 and \u2018secondary\u2019 developmental dyscalculia. The first appears related to impaired development of brain mechanisms for processing numerical magnitude information, while the latter refers to mathematical deficits stemming from external factors such as poor teaching, low socio-economic status, and behavioral attention problems or domain-general cognitive deficits. Increased awareness of this distinction going forward, in combination with longitudinal empirical research, offers great potential for deepening our understanding of the disorder and developing effective educational interventions."}
{"_id": "2dc293701b306a5d049dac23f84ab73f1969b197", "title": "Acquiring Hyponymy Relations from Web Documents", "text": "This paper describes an automatic method for acquiring hyponymy relations from HTML documents on the WWW. Hyponymy relations can play a crucial role in various natural language processing systems. Most existing acquisition methods for hyponymy relations rely on particular linguistic patterns, such as \u201c NP such as NP\u201d. Our method, however, does not use such linguistic patterns, and we expect that our procedure can be applied to a wide range of expressions for which existing methods cannot be used. Our acquisition algorithm uses clues such as itemization or listing in HTML documents and statistical measures such as document frequencies and verb-noun co-occurrences."}
{"_id": "aa6352ccde2e6c6412bc85a66a1348864c012198", "title": "Gender differences in narcissism: a meta-analytic review.", "text": "Despite the widely held belief that men are more narcissistic than women, there has been no systematic review to establish the magnitude, variability across measures and settings, and stability over time of this gender difference. Drawing on the biosocial approach to social role theory, a meta-analysis performed for Study 1 found that men tended to be more narcissistic than women (d = .26; k = 355 studies; N = 470,846). This gender difference remained stable in U.S. college student cohorts over time (from 1990 to 2013) and across different age groups. Study 1 also investigated gender differences in three facets of the Narcissistic Personality Inventory (NPI) to reveal that the narcissism gender difference is driven by the Exploitative/Entitlement facet (d = .29; k = 44 studies; N = 44,108) and Leadership/Authority facet (d = .20; k = 40 studies; N = 44,739); whereas the gender difference in Grandiose/Exhibitionism (d = .04; k = 39 studies; N = 42,460) was much smaller. We further investigated a less-studied form of narcissism called vulnerable narcissism-which is marked by low self-esteem, neuroticism, and introversion-to find that (in contrast to the more commonly studied form of narcissism found in the DSM and the NPI) men and women did not differ on vulnerable narcissism (d = -.04; k = 42 studies; N = 46,735). Study 2 used item response theory to rule out the possibility that measurement bias accounts for observed gender differences in the three facets of the NPI (N = 19,001). Results revealed that observed gender differences were not explained by measurement bias and thus can be interpreted as true sex differences. Discussion focuses on the implications for the biosocial construction model of gender differences, for the etiology of narcissism, for clinical applications, and for the role of narcissism in helping to explain gender differences in leadership and aggressive behavior. Readers are warned against overapplying small effect sizes to perpetuate gender stereotypes."}
{"_id": "9e1be80488c07093d5c3b334c713f15e1181a38c", "title": "Linggle: a Web-scale Linguistic Search Engine for Words in Context", "text": "In this paper, we introduce a Web-scale linguistics search engine, Linggle, that retrieves lexical bundles in response to a given query. The query might contain keywords, wildcards, wild parts of speech (PoS), synonyms, and additional regular expression (RE) operators. In our approach, we incorporate inverted file indexing, PoS information from BNC, and semantic indexing based on Latent Dirichlet Allocation with Google Web 1T. The method involves parsing the query to transforming it into several keyword retrieval commands. Word chunks are retrieved with counts, further filtering the chunks with the query as a RE, and finally displaying the results according to the counts, similarities, and topics. Clusters of synonyms or conceptually related words are also provided. In addition, Linggle provides example sentences from The New York Times on demand. The current implementation of Linggle is the most functionally comprehensive, and is in principle language and dataset independent. We plan to extend Linggle to provide fast and convenient access to a wealth of linguistic information embodied in Web scale datasets including Google Web 1T and Google Books Ngram for many major languages in the world."}
{"_id": "cbd92fac853bfb56fc1c3752574dc0831d8bc181", "title": "Document Language Models, Query Models, and Risk Minimization for Information Retrieval", "text": "We present a framework for information retrieval that combines document models and query models using a probabilistic ranking function based on Bayesian decision theory. The framework suggests an operational retrieval model that extends recent developments in the language modeling approach to information retrieval. A language model for each document is estimated, as well as a language model for each query, and the retrieval problem is cast in terms of risk minimization. The query language model can be exploited to model user preferences, the context of a query, synonomy and word senses. While recent work has incorporated word translation models for this purpose, we introduce a new method using Markov chains defined on a set of documents to estimate the query models. The Markov chain method has connections to algorithms from link analysis and social networks. The new approach is evaluated on TREC collections and compared to the basic language modeling approach and vector space models together with query expansion using Rocchio. Significant improvements are obtained over standard query expansion methods for strong baseline TF-IDF systems, with the greatest improvements attained for short queries on Web data."}
{"_id": "e185c84b0c722eefa725f7131cd55aa9ceead871", "title": "Top-N Recommendation with Novel Rank Approximation", "text": "The importance of accurate recommender systems has been widely recognized by academia and industry. However, the recommendation quality is still rather low. Recently, a linear sparse and low-rank representation of the user-item matrix has been applied to produce Top-N recommendations. This approach uses the nuclear norm as a convex relaxation for the rank function and has achieved better recommendation accuracy than the state-of-the-art methods. In the past several years, solving rank minimization problems by leveraging nonconvex relaxations has received increasing attention. Some empirical results demonstrate that it can provide a better approximation to original problems than convex relaxation. In this paper, we propose a novel rank approximation to enhance the performance of Top-N recommendation systems, where the approximation error is controllable. Experimental results on real data show that the proposed rank approximation improves the Top-N recommendation accuracy substantially."}
{"_id": "0add47725080054b8a6b41dc4c1bdf080e55b34e", "title": "Cloud RAN for Mobile Networks\u2014A Technology Overview", "text": "Cloud Radio Access Network (C-RAN) is a novel mobile network architecture which can address a number of challenges the operators face while trying to support growing end-user's needs. The main idea behind C-RAN is to pool the Baseband Units (BBUs) from multiple base stations into centralized BBU Pool for statistical multiplexing gain, while shifting the burden to the high-speed wireline transmission of In-phase and Quadrature (IQ) data. C-RAN enables energy efficient network operation and possible cost savings on baseband resources. Furthermore, it improves network capacity by performing load balancing and cooperative processing of signals originating from several base stations. This paper surveys the state-of-the-art literature on C-RAN. It can serve as a starting point for anyone willing to understand C-RAN architecture and advance the research on C-RAN."}
{"_id": "761f0c2baaafc82633c1d78e3931d031d9673b11", "title": "Therapeutic Uses of Active Videogames: A Systematic Review.", "text": "BACKGROUND\nActive videogames (AVGs) may be useful for promoting physical activity for therapeutic uses, including for balance, rehabilitation, and management of illness or disease. The literature from 64 peer-reviewed publications that assessed health outcomes of AVGs for therapeutic purposes was synthesized.\n\n\nMATERIALS AND METHODS\nPubMed, Medline, and PyschInfo were queried for original studies related to the use of AVGs to improve physical outcomes in patients who were ill or undergoing rehabilitation related to balance, burn treatment, cancer, cerebral palsy, Down's syndrome, extremity dysfunction or amputation, hospitalization, lupus, Parkinson's disease, spinal injury, or stroke. The following inclusion criteria were used: (1) human subjects; (2) English language; (3) not duplicates; (4) new empirical data; and (5) tests an AVG, including commercially available or custom-designed. Studies were included regardless of participants' age or the study design.\n\n\nRESULTS AND LIMITATIONS\nOverall, the vast majority of studies demonstrated promising results for improved health outcomes related to therapy, including significantly greater or comparable effects of AVG play versus usual care. However, many studies were pilot trials with small, homogeneous samples, and many studies lacked a control or comparison group. Some trials tested multiweek or multimonth interventions, although many used a single bout of gameplay, and few included follow-up assessments to test sustainability of improved health.\n\n\nCONCLUSIONS AND IMPLICATIONS\nAVGs were acceptable and enjoyable to the populations examined and appear as a promising tool for balance, rehabilitation, and illness management. Future research directions and implications for clinicians are discussed."}
{"_id": "35ea1f747f03d2bac874ee56a30039363e28e8e0", "title": "Logic BIST for large industrial designs: real issues and case studies", "text": "to Abstract This paper discusses practical issues involved in app ing logic built-in self-test (BIST) to four large industria designs. These multi-clock designs, ranging in size fr 200K to 800K gates, pose significant challenges to log BIST methodology, flow, and tools. The paper presents process of generating a BIST-compliant core along with t logic BIST controller for at-speed testing. Comparativ data on fault grades and area overhead between autom test pattern generation (ATPG) and logic BIST a reported. The experimental results demonstrate that w automation of the proposed solutions, logic BIST c achieve test quality approaching that of ATPG with min mal area overhead and few changes to the design flow."}
{"_id": "54bcf473448df84416ae2ab518a5a9181f22693a", "title": "Supervised and Unsupervised Intrusion Detection Based on CAN Message Frequencies for In-vehicle Network", "text": "Modern vehicles are equipped with Electronic Control Units (ECUs) and external communication devices. The Controller Area Network (CAN), a widely used communication protocol for ECUs, does not have a security mechanism to detect improper packets; if attackers exploit the vulnerability of an ECU and manage to inject a malicious message, they are able to control other ECUs to cause improper operation of the vehicle. With the increasing popularity of connected cars, it has become an urgent matter to protect in-vehicle networks against security threats. In this paper, we study the applicability of statistical anomaly detection methods for identifying malicious CAN messages in invehicle networks. We focus on intrusion attacks of malicious messages. Because the occurrence of an intrusion attack certainly influences the message traffic, we focus on the number of messages observed in a fixed time window to detect intrusion attacks. We formalize features to represent a message sequence that incorporates the number of messages associated with each receiver ID. We collected CAN message data from an actual vehicle and conducted a quantitative analysis of the methods and the features in practical situations. The results of our experiments demonstrated our proposed methods provide fast and accurate detection in various cases."}
{"_id": "41edd975efbbc5af28971bc2669d9a1ae6ee8ace", "title": "PCN: Point Completion Network", "text": "Shape completion, the problem of estimating the complete geometry of objects from partial observations, lies at the core of many vision and robotics applications. In this work, we propose Point Completion Network (PCN), a novel learning-based approach for shape completion. Unlike existing shape completion methods, PCN directly operates on raw point clouds without any structural assumption (e.g. symmetry) or annotation (e.g. semantic class) about the underlying shape. It features a decoder design that enables the generation of fine-grained completions while maintaining a small number of parameters. Our experiments show that PCN produces dense, complete point clouds with realistic structures in the missing regions on inputs with various levels of incompleteness and noise, including cars from LiDAR scans in the KITTI dataset."}
{"_id": "abbb7e4a44b545b60e478efaf2d662333929a926", "title": "Identification of extremist videos in online video sharing sites", "text": "Web 2.0 has become an effective grassroots communication platform for extremists to promote their ideas, share resources, and communicate among each other. As an important component of Web 2.0, online video sharing sites such as YouTube and Google video have also been utilized by extremist groups to distribute videos. This study presented a framework for identifying extremist videos in online video sharing sites by using user-generated text content such as comments, video descriptions, and titles without downloading the videos. Text features including lexical features, syntactic features and content specific features were first extracted. Then Information Gain was used for feature selection, and Support Vector Machine was deployed for classification. The exploratory experiment showed that our proposed framework is effective for identifying online extremist videos, with the F-measure as high as 82%."}
{"_id": "7adf6090f8f247da6fbc036ac5b1b678b92fd9fe", "title": "A Hybrid Approach for Arabic Text Summarization Using Domain Knowledge and Genetic Algorithms", "text": "Text summarization is the process of producing a shorter version of a specific text. Automatic summarization techniques have been applied to various domains such as medical, political, news, and legal domains proving that adapting domain-relevant features could improve the summarization performance. Despite the existence of plenty of research work in the domain-based summarization in English and other languages, there is a lack of such work in Arabic due to the shortage of existing knowledge bases. In this paper, a hybrid, single-document text summarization approach (abbreviated as (ASDKGA)) is presented. The approach incorporates domain knowledge, statistical features, and genetic algorithms to extract important points of Arabic political documents. The ASDKGA approach is tested on two corpora KALIMAT corpus and Essex Arabic Summaries Corpus (EASC). The Recall-Oriented Understudy for Gisting Evaluation (ROUGE) framework was used to compare the automatically generated summaries by the ASDKGA approach with summaries generated by humans. Also, the approach is compared against three other Arabic text summarization approaches. The (ASDKGA) approach demonstrated promising results when summarizing Arabic political documents with average F-measure of 0.605 at the compression ratio of 40%."}
{"_id": "fd1bc01a9caa53679052b0e803523a2dacb27710", "title": "Ring-LWE Ciphertext Compression and Error Correction: Tools for Lightweight Post-Quantum Cryptography", "text": "Some lattice-based public key cryptosystems allow one to transform ciphertext from one lattice or ring representation to another efficiently and without knowledge of public and private keys. In this work we explore this lattice transformation property from cryptographic engineering viewpoint. We apply ciphertext transformation to compress Ring-LWE ciphertexts and to enable efficient decryption on an ultra-lightweight implementation targets such as Internet of Things, Smart Cards, and RFID applications. Significantly, this can be done without modifying the original encryption procedure or its security parameters. Such flexibility is unique to lattice-based cryptography and may find additional, unique real-life applications.\n Ciphertext compression can significantly increase the probability of decryption errors. We show that the frequency of such errors can be analyzed, measured and used to derive precise failure bounds for n-bit error correction. We introduce XECC, a fast multi-error correcting code that allows constant time implementation in software.\n We use these tools to construct and explore TRUNC8, a concrete Ring-LWE encryption and authentication system. We analyze its implementation, security, and performance. We show that our lattice compression technique reduces ciphertext size by more than 40% at equivalent security level, while also enabling public key cryptography on previously unreachable ultra-lightweight platforms.\n The experimental public key encryption and authentication system has been implemented on an 8-bit AVR target, where it easily outperforms elliptic curve and RSA-based proposals at similar security level. Similar results have been obtained with a Cortex M0 implementation. The new decryption code requires only a fraction of the software footprint of previous Ring-LWE implementations with the same encryption parameters, and is well suited for hardware implementation."}
{"_id": "3b21679e45a90a652431fbd5f8a0622c47c4caea", "title": "LEARNING ENGLISH IS FUN VIA KAHOOT : STUDENTS \u2019 ATTITUDE , MOTIVATION AND PERCEPTIONS", "text": "This study is aimed to investigate primary level students\u2019 attitude and motivation and perceptions in learning English using Kahoot game. According to current development in Malaysian education field, 21st century learning style is more focused and given importance. Thus, Kahoot, game-based learning platform has been used to stimulate students\u2019 from a rural area in southern Malaysia to learn English as their second language more effectively, actively and interestingly. The research design for this study is action research. Nine students were chosen as the target group for this study. The Kahoot game was conducted for three different topics in English after they completed the lesson each day. The data was collected using Attitude and Motivation Test Battery questionnaires consisting of 10 items, interview session with 5 semi-structured questions for three selected students and results of each Kahoot game played by the students. The data was analysed using descriptive analysis. The results of the questionnaires, interview session and results of the game were shown in figures and tables. Findings of the study show that all the nine students were able to engage actively in the game and they were able to master the target language effectively. They enjoy learning English using games and wish to have more games in future."}
{"_id": "f115f68b3f4db8513ec9e916a15dfad1c9a9339b", "title": "A Simplified Iron Loss Model for Laminated Magnetic Cores", "text": "This paper presents a new method for the prediction of iron losses in laminated magnetic cores. The method is based on the evaluation of the magnetization curves and loop shapes by which the iron losses, including hysteresis, excess, and classical eddy-current losses, are calculated from the loop area. The purpose of the method is to omit the numerical solution of the nonlinear diffusion equation (resulting from Maxwell equations) while maintaining reasonable accuracy of the iron losses. Therefore, the proposed method determines, using simple concepts, three magnetic field strength components that are responsible for generating the iron losses. The magnetic field component of the hysteresis loss is calculated by a static hysteresis model. The magnetic field component of the excess loss is calculated using the statistical loss theory. Determining the magnetic field component of the classical eddy-current loss is based upon the assumption that the flux distribution in the steel sheet is uniform, but, here, the eddy-current loss is implicitly enforced to be dependent on the magnetic characteristics of the material. The proposed method is applied to the calculation of iron losses in an electrical steel when the accuracy, stability, and generality of the method are discussed."}
{"_id": "3811148feed575242fc40ad7ace8a49ed7e44460", "title": "Improving Exploration in Evolution Strategies for Deep Reinforcement Learning via a Population of Novelty-Seeking Agents", "text": "Evolution strategies (ES) are a family of black-box optimization algorithms able to train deep neural networks roughly as well as Q-learning and policy gradient methods on challenging deep reinforcement learning (RL) problems, but are much faster (e.g. hours vs. days) because they parallelize better. However, many RL problems require directed exploration because they have reward functions that are sparse or deceptive (i.e. contain local optima), and it is unknown how to encourage such exploration with ES. Here we show that algorithms that have been invented to promote directed exploration in small-scale evolved neural networks via populations of exploring agents, specifically novelty search (NS) and quality diversity (QD) algorithms, can be hybridized with ES to improve its performance on sparse or deceptive deep RL tasks, while retaining scalability. Our experiments confirm that the resultant new algorithms, NS-ES and two QD algorithms, NSR-ES and NSRA-ES, avoid local optima encountered by ES to achieve higher performance on Atari and simulated robots learning to walk around a deceptive trap. This paper thus introduces a family of fast, scalable algorithms for reinforcement learning that are capable of directed exploration. It also adds this new family of exploration algorithms to the RL toolbox and raises the interesting possibility that analogous algorithms with multiple simultaneous paths of exploration might also combine well with existing RL algorithms outside ES."}
{"_id": "e07b2fd383ceb3a3d37c2b1436ae392584687312", "title": "GlobeSen: an open interconnection framework based on named sensory date for IoT", "text": "How to efficiently interconnect ubiquitous wireless sensor networks (WSNs) and the Internet becomes an important challenge of Internet of Things. In this paper, we explore a route of information-centric networking (ICN) to solve this challenge and propose an open interconnection framework named GlobeSen. To overcome the problem that traditional ICN solution (such as NDN) is not suitable for resource-constrained WSNs, we present a new implementation of NDN, NDNs, for WSNs. Specifically, by extracting the spatio-temporal information and data type information of interest, we construct a globally unique name structure, and exploit the spatio-temporal relation operation as the matching method. Based on the new naming strategy and matching method, we further design packet forwarding and routing schemes. Moreover, we also develop a gateway system, NDNt, for protocol translating and an application, SenBrowser, to provide a user-friendly interface for generating interests and illustrating the returned sensory data. We implement a proof-of-concept prototype based on TelosB sensor nodes and an ARM development board, and conduct a series of experiments to evaluate the performance of GlobeSen."}
{"_id": "0348cc294883adf24b48c00f50df9870983317d2", "title": "FastSLAM: A Factored Solution to the Simultaneous Localization and Mapping Problem", "text": "The ability to simultaneously localize a robot and accurately map its surroundings is considered by many to be a key prerequisite of truly autonomous robots. However, few approaches to this problem scale up to handle the very large number of landmarks present in real environments. Kalman filter-based algorithms, for example, require time quadratic in the number of landmarks to incorporate each sensor observation. This paper presents FastSLAM, an algorithm that recursively estimates the full posterior distribution over robot pose and landmark locations, yet scales logarithmically with the number of landmarks in the map. This algorithm is based on an exact factorization of the posterior into a product of conditional landmark distributions and a distribution over robot paths. The algorithm has been run successfully on as many as 50,000 landmarks, environments far beyond the reach of previous approaches. Experimental results demonstrate the advantages and limitations of the FastSLAM algorithm on both simulated and real-world data."}
{"_id": "41869bf990c0996fe95f2a080a9ff93bedf9067a", "title": "Simultaneous Localization and Mapping with Detection and Tracking of Moving Objects", "text": "Both simultaneous localization and mapping (SLAM) and detection and tracking of moving objects (DTMO) play key roles in robotics and automation. For certain constrained environments, SLAM and DTMO are becoming solved problems. But for robots working outdoors, and at high speeds, SLAM and DTMO are still incomplete. In earlier works, SLAM and DTMO are treated as two separate problems. In fact, they can be complementary to one another. In this paper, we present a new method to integrate SLAM and DTMO to solve both problems simultaneously for both indoor and outdoor applications. The results of experiments carried out with CMU Navlab8 and Navlab11 vehicles with the maximum speed of 45 mph in crowded urban and suburban areas verify the described work."}
{"_id": "61d234dd4f7b733e5acf2550badcf1e9333b6de1", "title": "Credibilist occupancy grids for vehicle perception in dynamic environments", "text": "In urban environments, moving obstacles detection and free space determination are key issues for driving assistance systems and autonomous vehicles. When using lidar sensors scanning in front of the vehicle, uncertainty arises from ignorance and errors. Ignorance is due to the perception of new areas and errors come from imprecise pose estimation and noisy measurements. Complexity is also increased when the lidar provides multi-echo and multi-layer information. This paper presents an occupancy grid framework that has been designed to manage these different sources of uncertainty. A way to address this problem is to use grids projected onto the road surface in global and local frames. The global one generates the mapping and the local one is used to deal with moving objects. A credibilist approach is used to model the sensor information and to do a global fusion with the world-fixed map. Outdoor experimental results carried out with a precise positioning system show that such a perception strategy increases significantly the performance compared to a standard approach."}
{"_id": "88dd3e150405af30e0f7e11121dacd64ec131759", "title": "An evidential approach to map-building for autonomous vehicles", "text": "In this work, we examine the problem of constructing and maintaining a map of an autonomous vehicle\u2019s environment for the purpose of navigation, using evidential reasoning. The inherent uncertainty in the origin of measurements of sensors demands a probabilistic approach to processing, or fusing, the new sensory information to build an accurate map. In this paper, the map is based on a two-dimensional (2-D) occupancy grid. The sensor readings are \u201cfused\u201d into the map using the Dempster\u2013Shafer inference rule. This evidential approach with its multivalued hypotheses allows quantitative analysis of the quality of the data. The map building system is experimentally evaluated using sonar data from real environments."}
{"_id": "3d01fa12d5d18ca671b81e36ec336881af462cc4", "title": "Toward Abstractive Summarization Using Semantic Representations", "text": "We present a novel abstractive summarization framework that draws on the recent development of a treebank for the Abstract Meaning Representation (AMR). In this framework, the source text is parsed to a set of AMR graphs, the graphs are transformed into a summary graph, and then text is generated from the summary graph. We focus on the graph-tograph transformation that reduces the source semantic graph into a summary graph, making use of an existing AMR parser and assuming the eventual availability of an AMR-totext generator. The framework is data-driven, trainable, and not specifically designed for a particular domain. Experiments on goldstandard AMR annotations and system parses show promising results. Code is available at: https://github.com/summarization"}
{"_id": "87167bb04182279302b1b97d3e52bcb687f0f27c", "title": "MANAGING HEALTH CARE SUPPLY CHAIN : TRENDS , ISSUES , AND SOLUTIONS FROM A LOGISTICS PERSPECTIVE", "text": "The U.S. healthcare industry is a large enterprise accounting for over 14.1% of the national economic output in 2001. It has been under pressure for cost containment and providing quality health care services to consumers. Its record of investing heavily on development of sophisticated drugs and diagnostic systems does not match that of technologies to manage its day-to-day operations. In order to achieve improved performance, healthcare supply chain must be efficient and integrated. The driver for this integration is logistics and supply chain management. This paper describes trends, issues and some solutions for logistics management for Health Care Supply Chain with concepts drawn from Industrial Engineering, and Operations Research disciplines applied to specific domains. A healthcare supply chain template utilizing Ecommerce strategy is presented. Use of simulation, optimization, and information sharing techniques are demonstrated to optimize purchasing and inventory policies. ("}
{"_id": "2c142fa56326c1058443092bd1caf7c957306bf8", "title": "A study on peer startup process and initial offset placement in P2P live streaming systems", "text": "In this paper, we measure and study the peer startup process in PPLive, a popular commercial P2P streaming system, and focus on a fundamental issue in this aspect: how a peer initializes its buffer when it joins a channel, i.e., initial offset placement of peers' buffers in the startup stage. We build a general model of peer startup process in chunk-based P2P streaming systems and present an initial offset placement scheme we inferred from the measurement results, i.e., proportional placement (PP) scheme. With FP scheme, the initial buffer offset is set to the offset of the reference neighbor peer plus an advance proportional to the reference neighbor peer's offset lag or buffer width. We evaluate the performance of PP scheme and find it is stable when the placement is based on offset lag, but will be unstable when it is based on buffer width if the chunk fetching strategy and neighbor peer selection mechanism are not properly designed. We finally report our detailed measurement results of the peer startup process and initial offset placement algorithms used in PPLive. Our models and measurement results could be useful for guiding the analysis and design of buffering protocols for a real P2P live streaming system."}
{"_id": "9f071a8e51ff72eb7cca49f90dae6f48f6f489a5", "title": "Rectal mucosal dissection commencing directly on the anorectal line versus commencing above the dentate line in laparoscopy-assisted transanal pull-through for Hirschsprung's disease: Prospective medium-term follow-up.", "text": "BACKGROUND\nIn 2007, we began using the anorectal line (ARL) as the landmark for commencing rectal mucosal dissection (RMD) instead of the dentate line (DL) during laparoscopy-assisted transanal pull-through (L-TAPT) for Hirschsprung's disease (HD). We conducted a medium-term prospective comparison of postoperative fecal continence (POFC) between DL and ARL cases to follow our short-term study.\n\n\nMETHODS\nPOFC is assessed by scoring frequency of motions, severity of staining, severity of perianal erosions, anal shape, requirement for medications, sensation of rectal fullness, and ability to distinguish flatus from stool on a scale of 0 to 2 (maximum: 14).\n\n\nRESULTS\nPatient demographics were similar for ARL (2007-2014: n=33) and DL (1997-2006: n=41). There were no intraoperative complications and 2 cases of postoperative colitis in both ARL (6.1%) and DL (4.9%). Mean annual medium-term POFC scores for the 4-7 term of this study were consistently better in ARL: 9.7\u00b11.4*, 10.1\u00b11.6*, 10.6\u00b11.6, and 11.3\u00b11.4* in ARL and 8.6\u00b11.5, 9.1\u00b11.6, 9.8\u00b11.9, 10.0\u00b11.6 in DL (*: p<0.05).\n\n\nCONCLUSIONS\nMedium-term POFC is better when the ARL is used as the landmark for RMD during L-TAPT for HD."}
{"_id": "201c0366e232ed7073ccd80a6ed91c65d9cee952", "title": "Xpander: Towards Optimal-Performance Datacenters", "text": "Despite extensive efforts to meet ever-growing demands, today's datacenters often exhibit far-from-optimal performance in terms of network utilization, resiliency to failures, cost efficiency, incremental expandability, and more. Consequently, many novel architectures for high-performance datacenters have been proposed. We show that the benefits of state-of-the-art proposals are, in fact, derived from the fact that they are (implicitly) utilizing \"expander graphs\" (aka expanders) as their network topologies, thus unveiling a unifying theme of these proposals. We observe, however, that these proposals are not optimal with respect to performance, do not scale, or suffer from seemingly insurmountable deployment challenges. We leverage these insights to present Xpander, a novel datacenter architecture that achieves near-optimal performance and provides a tangible alternative to existing datacenter designs. Xpander's design turns ideas from the rich graph-theoretic literature on constructing optimal expanders into an operational reality. We evaluate Xpander via theoretical analyses, extensive simulations, experiments with a network emulator, and an implementation on an SDN-capable network testbed. Our results demonstrate that Xpander significantly outperforms both traditional and proposed datacenter designs. We discuss challenges to real-world deployment and explain how these can be resolved."}
{"_id": "6d67bb7124987b24b728b2bba47f43e87ca50df1", "title": "CRED: A Deep Residual Network of Convolutional and Recurrent Units for Earthquake Signal Detection", "text": "Earthquake signal detection is at the core of observational seismology. A good detection algorithm should be sensitive to small and weak events with a variety of waveform shapes, robust to background noise and non-earthquake signals, and efficient for processing large data volumes. Here, we introduce the CnnRnn Earthquake Detector (CRED), a detector based on deep neural networks. The network uses a combination of convolutional layers and bi-directional long-short-term memory units in a residual structure. It learns the timefrequency characteristics of the dominant phases in an earthquake signal from three component data recorded on a single station. We train the network using 500,000 seismograms (250k associated with tectonic earthquakes and 250k identified as noise) recorded in Northern California and tested it with a F-score of 99.95. The robustness of the trained model with respect to the noise level and non-earthquake signals is shown by applying it to a set of semi-synthetic signals. The model is applied to one month of continuous data recorded at Central Arkansas to demonstrate its efficiency, generalization, and sensitivity. Our model is able to detect more than 700 microearthquakes as small as -1.3 ML induced during hydraulic fracturing far away than the training region. The performance of the model is compared with STA/LTA, template matching, and FAST algorithms. Our results indicate an efficient and reliable performance of CRED. This framework holds great promise in lowering the detection threshold while minimizing false positive detection rates."}
{"_id": "8c1586bcc3a6905e3eb7a432077c2aee6a4b9e80", "title": "Neighborhood-Level LGBT Hate Crimes and Bullying Among Sexual Minority Youths: A Geospatial Analysis.", "text": "The goal of this study was to evaluate a novel measure of environmental risk factors for bullying among sexual minority youths. Data on lesbian, gay, bisexual, and transgender (LGBT) assault hate crimes were obtained from police records, geocoded, and then linked to individual-level data on bullying and sexual orientation from the 2008 Boston Youth Survey Geospatial Dataset (N = 1,292; 108 sexual minorities). Results indicated that sexual minority youths who reported relational and electronic bullying were more likely to reside in neighborhoods with higher LGBT assault hate crime rates. There was no asso- ciation between LGBT assault hate crimes and bullying among heterosexual youths, pro- viding evidence for specificity to sexual minority youth. Moreover, no relationships were observed between sexual minority bullying and neighborhood-level violent and property crimes, indicating that the results were specific to LGBT assault hate crimes."}
{"_id": "e03bde3b5e723e9330388c7a38b81d5f2ef0684a", "title": "Disease and pest prediction IoT system in orchard: A preliminary study", "text": "Each species of diseases and pests is considered to be harmful to plants and has a great adverse effect on agriculture. In addition to decreasing crop productivity and quality, pesticides or fungicides which cause environmental pollution are needed to kill diseases and pests. Therefore, the IoT system is designed to reduce the frequent use of insecticides and fungicides and to predict when the pests appear in order to lower the appearance of pests. Therefore, weather stations near the orchard have been installed and used to analyze the correlation between pests and weather data that had a large effect on pests. The prediction models have been created utilizing correlation information. In other words, we proposed a system that provides disease and pest prediction information so that farmers can quickly control them."}
{"_id": "b304d10d4e1bd0b3fc60dbd12c88e191a0a91415", "title": "Mendeley: Recommendations for Researchers", "text": "For a researcher, keeping up with what is going on in their research field can be a difficult and time-consuming task. For example, a fresh PhD student may want to know what are the relevant papers matching their research interests. An assistant professor may like to be up-to-date with what their colleagues are publishing. A professor might want to be notified about funding opportunities relevant to the work done in their research group. Since the volume of published research and research activity is constantly growing, it is becoming increasingly more difficult for researchers to be able to manage and filter through the research information flow. In this challenging context, Mendeley's mission is to become the world's \"research operating system\". We do this not only by providing our well-know reference management system, but also by providing discovery capabilities for researchers on different kinds of entities, such as articles and profiles. In our talk, we will share Mendeley's experiences with building our article and profile recommendation systems, the challenges that we have faced and the solutions that we have put in place. We will discuss how we address different users' needs with our data and algorithm infrastructure to achieve good user experience."}
{"_id": "d1d0314ae9ddfdf2cd24ff29d0ef77e437bbf13f", "title": "Robust Image Registration Using Log-Polar Transform", "text": "This paper describes a hierarchical image registration algorithm for affine motion recovery. The algorithm estimates the affine transformation parameters necessary to register any two digital images misaligned due to rotation, scale, shear, and translation. The parameters are computed iteratively in a coarse-to-fine hierarchical framework using a variation of the Levenberg-Marquadt nonlinear least squares optimization method. This approach yields a robust solution that precisely registers images with subpixel accuracy. A log-polar registration module is introduced to accommodate arbitrary rotation angles and a wide range of scale changes. This serves to furnish a good initial estimate for the optimization-based affine registration stage. We demonstrate the hybrid algorithm on pairs of digital images subjected to large affine motion."}
{"_id": "fb89872355141b5cadb80351c28169869e7e3df6", "title": "Singer Traits Identification using Deep Neural Network", "text": "The author investigates automatic recognition of singers\u2019 gender and age through audio features using deep neural network (DNN). Features of each singing voice, fundamental frequency and Mel-Frequency Cepstrum Coefficients (MFCC) are extracted for neural network training. 10,000 singing voice from Smule\u2019s Sing! Karaoke app is used for training and evaluation, and the DNN-based method achieves an average recall of 91% for gender classification and 36% for age identification."}
{"_id": "0168a0214eb62e6baf9e9de2b82f5fcd8f483dee", "title": "Comparing Prediction Market Structures, With an Application to Market Making", "text": "Ensuring sufficient liquidity is one of the key challenges for designers of prediction markets. Various market making algorithms have been proposed in the literature and deployed in practice, but there has been little effort to evaluate their benefits and disadvantages in a systematic manner. We introduce a novel experimental design for comparing market structures in live trading that ensures fair comparison between two different microstructures with the same trading population. Participants trade on outcomes related to a two-dimensional random walk that they observe on their computer screens. They can simultaneously trade in two markets, corresponding to the independent horizontal and vertical random walks. We use this experimental design to compare the popular inventory-based logarithmic market scoring rule (LMSR) market maker and a new information based Bayesian market maker (BMM). Our experiments reveal that BMM can offer significant benefits in terms of price stability and expected loss when controlling for liquidity; the caveat is that, unlike LMSR, BMM does not guarantee bounded loss. Our investigation also elucidates some general properties of market makers in prediction markets. In particular, there is an inherent tradeoff between adaptability to market shocks and convergence during market equilibrium."}
{"_id": "cdc7c145b3e5cacea6f89588af5cc852a5a8a1cc", "title": "Design and analysis of a hybrid energy harvester for self-powered sensor", "text": "This paper presents a slightly different concept of energy harvesting method as compared to the conventional energy harvesters. The new energy harvester of interest is known as Hybrid Energy Harvester (HEH). It comprises of the Piezoelectric and Magnetostrictive/Electromagnetic transducers integrated into a system. Theoretically, the hybrid energy harvester device developed is targeted to expand the vibration bandwidth as compared to the standalone energy harvesters, hence higher output power will be generated. To develop the hybrid energy harvester, mathematical models of each respective transducers are formulated, then simulations are carried out using the MATLAB/Simulink analysis tool. Next, a macro-scale prototype device is designed, fabricated and validated. From the experimental characterizations, it is unveiled that the maximum open circuit output voltage produced by the hybrid energy harvester is 6.23 V at a resonant frequency of 59.6 Hz."}
{"_id": "97107c6bfe47dad7c4e4508938b1c162a3b6b453", "title": "Tracking port scanners on the IP backbone", "text": "Port scanning is the usual precursor to malicious attacks on today's Internet. Although many algorithms have been proposed for different aspects of the scan detection problem, their focused designed space is enterprise gateway level Intrusion Detection. Furthermore, we find few studies that track scanner behaviors over an extended period of time. Operating from a unique vantage point, the IP backbone, we put all the pieces together in designing and implementing a fast and accurate online port scan detection and tracking system. We introduce our flexible architecture, discuss trade-offs and design choices. Specifically, we go in depth to two design choices: the distinct counter data structure and the buffer size tuning. Our choice of a probabilistic counter is proved to have a low average error of 3.5%. The buffer size is derived using Lyapunov drift techniques on an equivalent queuing model. Our initial scanner tracking results show that the scanners' timing behavior follows a 90-10 curve. That is, 90% of scanners are active for a short time, with low scanning rates, while 10% are long term and fast scanners, with a few super-scanners lasting the entire duration of monitoring."}
{"_id": "3bad518b0f56e72efadc4791a2bd65aaeaf47ec1", "title": "Evaluating enterprise resource planning ( ERP ) post-implementation problems in Egypt", "text": "The aim of the research is to identify and evaluate the main problems experienced in the ERP postimplementation stage of multinational, privately-owned Egyptian and governmental organizations in Egypt. Data gathering was achieved by means of a set of interviews and online questionnaire conducted to 50 companies implementing ERP in Egypt. The paper presents a descriptive analysis of the difficulties and problems encountered by organizations in Egypt following ERP implementation and how these have contributed to unsuccessful implementation overall."}
{"_id": "5f90f398bc32a09e1dc2928d51a268f319acb5b3", "title": "Prediction of Indian election using sentiment analysis on Hindi Twitter", "text": "Sentiment analysis is considered to be a category of machine learning and natural language processing. It is used to extricate, recognize, or portray opinions from different content structures, including news, audits and articles and categorizes them as positive, neutral and negative. It is difficult to predict election results from tweets in different Indian languages. We used Twitter Archiver tool to get tweets in Hindi language. We performed data (text) mining on 42,235 tweets collected over a period of a month that referenced five national political parties in India, during the campaigning period for general state elections in 2016. We made use of both supervised and unsupervised approaches. We utilized Dictionary Based, Naive Bayes and SVM algorithm to build our classifier and classified the test data as positive, negative and neutral. We identified the sentiment of Twitter users towards each of the considered Indian political parties. The results of the analysis for Naive Bayes was the BJP (Bhartiya Janta Party), for SVM it was the BJP (Bhartiya Janta Party) and for the Dictionary Approach it was the Indian Nathional Congress. SVM predicted a 78.4% chance that the BJP would win more elections in the general election due to the positive sentiment they received in tweets. As it turned out, BJP won 60 out of 126 constituencies in the 2016 general election, far more than any other political party as the next party (the Indian National Congress) only won 26 out of 126 constituencies."}
{"_id": "2a1c273aceaff3b8b82840cec5d180a96c7514d1", "title": "The double-edged sword of leader charisma: Understanding the curvilinear relationship between charismatic personality and leader effectiveness.", "text": "This study advanced knowledge on charisma by (a) introducing a new personality-based model to conceptualize and assess charisma and by (b) investigating curvilinear relationships between charismatic personality and leader effectiveness. Moreover, we delved deeper into this curvilinear association by (c) examining moderation by the leader's level of adjustment and by (d) testing a process model through which the effects of charismatic personality on effectiveness are explained with a consideration of specific leader behaviors. Study 1 validated HDS charisma (Hogan Development Survey) as a useful trait-based measure of charisma. In Study 2 a sample of leaders (N = 306) were assessed in the context of a 360-degree development center. In line with the too-much-of-a-good-thing effect, an inverted U-shaped relationship between charismatic personality and observer-rated leader effectiveness was found, indicating that moderate levels are better than low or high levels of charisma. Study 3 (N = 287) replicated this curvilinear relationship and further illustrated the moderating role of leader adjustment, in such a way that the inflection point after which the effects of charisma turn negative occurs at higher levels of charisma when adjustment is high. Nonlinear mediation modeling further confirmed that strategic and operational leader behaviors fully mediate the curvilinear relationship. Leaders low on charisma are less effective because they lack strategic behavior; highly charismatic leaders are less effective because they lack operational behavior. In sum, this work provides insight into the dispositional nature of charisma and uncovers the processes through which and conditions under which leader charisma translates into (in)effectiveness. (PsycINFO Database Record"}
{"_id": "5f6246b2fcce70bc35e8e8323d5830ff016e148a", "title": "Assessment of Factors Associated with Malnutrition among Under Five Years Age Children at Machakel Woreda, Northwest Ethiopia: A CaseControl Study", "text": "Introduction; malnutrition continues to be a significant public health and development concern not only in developing country but also in the world. It is a serious problem because it is causing the deaths of 3.5 million children under 5 years old peryear. Its magnitude is still highest in Ethiopia as well as in Amhara Region that remains a major public health problem. Objective: The main aim of this study is to assess associated factors of malnutrition on under five years children in Machakel Woreda. Methods: Unmatched Case control study was conducted. Cases were children of aged 6-59 months who have malnutrition (weight for height <2sd, weight for age \u2264 2Sd, height for age \u2264 2Sd, Mid-Upper Arm Circumstance (MUAC) <12 cm, if there is edema) and controls were 6-59 months of children those who have not malnutrition (weight for height \u2265 2sd, weight for age \u2265 2Sd, height for age \u2265 2sd, MUAC>12 cm, there is no edema). A consecutive sampling technique was employ to select study subjects for this study. Logistic regression was used to analyze data by using backward variable selection technique. Result: A total of 102 cases and 201 controls were included in the study with overall response rate of 94.4%. Sixty five (63.20%) of cases and 49 (24.40%) controls had fathers that cannot read and write. Thirty nine (38.23%) of cases and 44(21.89%) of controls had history of diarrheal episode. Those children whose family use drinking water from unprotected source were 3 times more likely to have malnutrition as compared to those children whose family use drinking water from protected source with [AOR=3.04, 95%CI (1.01, 9.17)]. Conclusion: The fining of this study revealed that inappropriate child carrying and feeding practice were strongly associated with under five malnutrition. Therefore, the responsible body should implement on nutritional intervention activities at all level of the community."}
{"_id": "d50aa24d7b2a8a782ea95bbb3897f359b3c39ab3", "title": "Enhancing the trust-based recommendation process with explicit distrust", "text": "When a Web application with a built-in recommender offers a social networking component which enables its users to form a trust network, it can generate more personalized recommendations by combining user ratings with information from the trust network. These are the so-called trust-enhanced recommendation systems. While research on the incorporation of trust for recommendations is thriving, the potential of explicitly stated distrust remains almost unexplored. In this article, we introduce a distrust-enhanced recommendation algorithm which has its roots in Golbeck's trust-based weighted mean. Through experiments on a set of reviews from Epinions.com, we show that our new algorithm outperforms its standard trust-only counterpart with respect to accuracy, thereby demonstrating the positive effect that explicit distrust can have on trust-based recommendations."}
{"_id": "22622c17e9c64e49a848c9048311a949f433e306", "title": "Trust mechanisms for cloud computing", "text": "Trust is a critical factor in cloud computing; in present practice it depends largely on perception of reputation, and self assessment by providers of cloud services. We begin this paper with a survey of existing mechanisms for establishing trust, and comment on their limitations. We then address those limitations by proposing more rigorous mechanisms based on evidence, attribute certification, and validation, and conclude by suggesting a framework for integrating various trust mechanisms together to reveal chains of trust in the cloud."}
{"_id": "58e789dc4a6f68ec8e54bf06622ffc022e04eecc", "title": "A new generation of microwave bandpass filters using multilayer ring resonators", "text": "A new generation of compact microwave bandpass filters (BPF) is proposed. The filters are based on multilayer ring resonators (MRR) technique which is applicable to both narrow band and ultra-wideband (UWB) bandpass filters. This class of filters has very compact circuit size and simple structure for fabrication. Multiple attenuation poles exist and can be simply placed to the left or right of the passband. The dimensions of the filters can be designed to around 2 mm \u00d7 2 mm \u00d7 1 mm. Out of the five filters presented, two of them are fabricated and measured. Good agreement between the simulation and measurements confirmed the design approach."}
{"_id": "cc611049fa1029230858ec62399b571cb40bfb71", "title": "Power Amplifiers and Transmitters for RF and Microwave", "text": "The generation of RF/microwave power is required not only in wireless communications, but also in applications such as jamming, imaging, RF heating, and miniature dc/dc converters. Each application has its own unique requirements for frequency, bandwidth, load, power, efficiency, linearity, and cost. RF power is generated by a wide variety of techniques, implementations, and active devices. Power amplifiers are incorporated into transmitters in a similarly wide variety of architectures, including linear, Kahn, envelope tracking, outphasing, and Doherty. Linearity can be improved through techniques such as feedback, feedforward, and predistortion."}
{"_id": "e6901330f1b056578cf2584420cecd01168b1761", "title": "CMOS Outphasing Class-D Amplifier With Chireix Combiner", "text": "This letter presents a CMOS outphasing class-D power amplifier (PA) with a Chireix combiner. Two voltage-mode class-D amplifiers used in the outphasing system were designed and implemented with a 0.18-mum CMOS process. By applying the Chireix combiner technique, drain efficiency of the outphasing PA for CDMA signals was improved from 38.6% to 48% while output power was increased from 14.5 to 15.4 dBm with an adjacent channel power ratio of -45 dBc."}
{"_id": "32d31bdc147ee2ade8166b715bd381227c523f0a", "title": "An extended Doherty amplifier with high efficiency over a wide power range", "text": "An extension of the Doherty amplifier architecture which maintains high efficiency over a wide range of output power (>6 dB) is presented. This extended Doherty amplifier is demonstrated experimentally with InGaP-GaAs HBTs at a frequency of 950 MHz. P/sub 1 dB/ is measured at 27.5 dBm with PAE of 46%. PAE of at least 39% is maintained for over an output power range of 12 dB backed-off from P/sub 1 dB/. This is an improvement over the classical Doherty amplifier, where high efficiency is typically obtained up to 5-6 dB backed-off from P/sub 1 dB/. Generalized design equations for the Doherty amplifier are derived to show a careful choice of the output matching circuit and device scaling parameters can improve efficiencies at lower output power."}
{"_id": "c413d55baf1ee96df29256a879e96daf4c2cc072", "title": "Segmenting Salient Objects from Images and Videos", "text": "In this paper we introduce a new salient object segmentation method, which is based on combining a saliency measure with a conditional random field (CRF) model. The proposed saliency measure is formulated using a statistical framework and local feature contrast in illumination, color, and motion information. The resulting saliency map is then used in a CRF model to define an energy minimization based segmentation approach, which aims to recover well-defined salient objects. The method is efficiently implemented by using the integral histogram approach and graph cut solvers. Compared to previous approaches the introduced method is among the few which are applicable to both still images and videos including motion cues. The experiments show that our approach outperforms the current state-of-the-art methods in both qualitative and quantitative terms."}
{"_id": "42ca4c23233a346ee921427eaa53f7c85dd2ccf6", "title": "Mapping, Learning, Visualization, Classification, and Understanding of fMRI Data in the NeuCube Evolving Spatiotemporal Data Machine of Spiking Neural Networks", "text": "This paper introduces a new methodology for dynamic learning, visualization, and classification of functional magnetic resonance imaging (fMRI) as spatiotemporal brain data. The method is based on an evolving spatiotemporal data machine of evolving spiking neural networks (SNNs) exemplified by the NeuCube architecture [1]. The method consists of several steps: mapping spatial coordinates of fMRI data into a 3-D SNN cube (SNNc) that represents a brain template; input data transformation into trains of spikes; deep, unsupervised learning in the 3-D SNNc of spatiotemporal patterns from data; supervised learning in an evolving SNN classifier; parameter optimization; and 3-D visualization and model interpretation. Two benchmark case study problems and data are used to illustrate the proposed methodology\u2014fMRI data collected from subjects when reading affirmative or negative sentences and another one\u2014on reading a sentence or seeing a picture. The learned connections in the SNNc represent dynamic spatiotemporal relationships derived from the fMRI data. They can reveal new information about the brain functions under different conditions. The proposed methodology allows for the first time to analyze dynamic functional and structural connectivity of a learned SNN model from fMRI data. This can be used for a better understanding of brain activities and also for online generation of appropriate neurofeedback to subjects for improved brain functions. For example, in this paper, tracing the 3-D SNN model connectivity enabled us for the first time to capture prominent brain functional pathways evoked in language comprehension. We found stronger spatiotemporal interaction between left dorsolateral prefrontal cortex and left temporal while reading a negated sentence. This observation is obviously distinguishable from the patterns generated by either reading affirmative sentences or seeing pictures. The proposed NeuCube-based methodology offers also a superior classification accuracy when compared with traditional AI and statistical methods. The created NeuCube-based models of fMRI data are directly and efficiently implementable on high performance and low energy consumption neuromorphic platforms for real-time applications."}
{"_id": "48a1aab13e5b36122bc1c22231fa39ad75175852", "title": "Locked-in syndrome : a challenge for embodied cognitive science", "text": "Embodied approaches in cognitive science hold that the body is crucial for cognition. What this claim amounts to, however, still remains unclear. This paper contributes to its clarification by confronting three ways of understanding embodiment\u2014the sensorimotor approach, extended cognition and enactivism\u2014with Lockedin syndrome (LIS). LIS is a case of severe global paralysis in which patients are unable to move and yet largely remain cognitively intact. We propose that LIS poses a challenge to embodied approaches to cognition requiring them to make explicit the notion of embodiment they defend and its role for cognition. We argue that the sensorimotor and the extended functionalist approaches either fall short of accounting for cognition in LIS from an embodied perspective or do it too broadly by relegating the body only to a historical role. Enactivism conceives of the body as autonomous system and of cognition as sense-making. From this perspective embodiment is not equated with bodily movement but with forms of agency that do not disappear with body paralysis. Enactivism offers a clarifying perspective on embodiment and thus currently appears to be the framework in embodied cognition best suited to address the challenge posed by LIS."}
{"_id": "f86722f9181cbe0a75a60c12ea987b90adce7437", "title": "Challenges of Computational Processing of Code-Switching", "text": "This paper addresses challenges of Natural Language Processing (NLP) on non-canonical multilingual data in which two or more languages are mixed. It refers to code-switching which has become more popular in our daily life and therefore obtains an increasing amount of attention from the research community. We report our experience that covers not only core NLP tasks such as normalisation, language identification, language modelling, part-of-speech tagging and dependency parsing but also more downstream ones such as machine translation and automatic speech recognition. We highlight and discuss the key problems for each of the tasks with supporting examples from different language pairs and relevant previous work."}
{"_id": "26b9db33bf020a4d145f248094280ba62ab853f9", "title": "SME e-readiness in Malaysia : Implications for Planning and Implementation", "text": "This study hoped to answer 2 main objectives. The first objective was to assess the level of e-readiness of SMEs in Northern Malaysia. The second objective was to investigate the factors contributing to the e-readiness of SMEs in Northern Malaysia. Questionnaires were distributed using a simple random sampling method to 300 SMEs in Penang, Kedah and Perlis. The findings of this study show that SMEs in Northern Malaysia are ready to go for e-business, e-commerce and Internet in general. The findings also showed that in general top management commitment and infrastructure and technology have significant impact on SMEs\u2019 e-readiness. However, human capital, resistance to change, and information security do not have significant impact or contribution on e-readiness in SMEs."}
{"_id": "78d3b266e1dd981dd7c5eef6371393ea0d0983b2", "title": "Fame for sale: Efficient detection of fake Twitter followers", "text": "Fake followers are those Twitter accounts specifically created to inflate the number of followers of a target account. Fake followers are dangerous for the social platform and beyond, since they may alter concepts like popularity and influence in the Twittersphere\u2014hence impacting on economy, politics, and society. In this paper, we contribute along different dimensions. First, we review some of the most relevant existing features and rules (proposed by Academia and Media) for anomalous Twitter accounts detection. Second, we create a baseline dataset of verified human and fake follower accounts. Such baseline dataset is publicly available to the scientific community. Then, we exploit the baseline dataset to train a set of machine-learning classifiers built over the reviewed rules and features. Our results show that most of the rules proposed by Media provide unsatisfactory performance in revealing fake followers, while features proposed in the past by Academia for spam detection provide good results. Building on the most promising features, we revise the classifiers both in terms of reduction of overfitting and cost for gathering the data needed to compute the features. The final result is a novel Class A classifier, general enough to thwart overfitting, lightweight thanks to the usage of the less costly features, and still able to correctly classify more than 95% of the accounts of the original training set. We ultimately perform an information fusion-based sensitivity analysis, to assess the global sensitivity of each of the features employed by the classifier. The findings reported in this paper, other than being supported by a thorough experimental methodology and interesting on their own, also pave the way for further investigation on the novel issue of fake Twitter followers."}
{"_id": "749546a58a1d46335de785c41a3eae977e84a0df", "title": "SVM incremental learning, adaptation and optimization", "text": "The objective of machine learning is to identify a model that yields good generalization performance. This involves repeatedly selecting a hypothesis class, searching the hypothesis class by minimizing a given objective function over the model\u2019s parameter space, and evaluating the generalization performance of the resulting model. This search can be computationally intensive as training data continuously arrives, or as one needs to tune hyperparameters in the hypothesis class and the objective function. In this paper, we present a framework for exact incremental learning and adaptation of support vector machine (SVM) classifiers. The approach is general and allows one to learn and unlearn individual or multiple examples, adapt the current SVM to changes in regularization and kernel parameters, and evaluate generalization performance through exact leave-one-out error estimation. I. I NTRODUCTION SVM techniques for classification and regression provide powerful tools for learning models that generalize well even in sparse, high dimensional settings. Their success can be attributed to Vapnik\u2019s seminal work in statistical learning theory [15] which provided key insights into the factors affecting generalization performance. SVM learning can be viewed as a practical implementation of Vapnik\u2019s structural risk minimizationinduction principle which involves searching over hypothesis classes of varying capacity to find the model with the best generalization performance. SVM classifiers of the formf(x) = w \u00b7\u03a6(x)+b are learned from the data{(xi, yi) \u2208 R I m \u00d7 {\u22121, 1} \u2200 i \u2208 {1, . . . , N}} by minimizing min w,b,\u03be 1 2 \u2016w\u2016 + C N \u2211"}
{"_id": "d09c99383a9d530943da36e4b56f73f22502e278", "title": "Speaker-Independent Silent Speech Recognition From Flesh-Point Articulatory Movements Using an LSTM Neural Network", "text": "Silent speech recognition SSR converts nonaudio information such as articulatory movements into text. SSR has the potential to enable persons with laryngectomy to communicate through natural spoken expression. Current SSR systems have largely relied on speaker-dependent recognition models. The high degree of variability in articulatory patterns across different speakers has been a barrier for developing effective speaker-independent SSR approaches. Speaker-independent SSR approaches, however, are critical for reducing the amount of training data required from each speaker. In this paper, we investigate speaker-independent SSR from the movements of flesh points on tongue and lips with articulatory normalization methods that reduce the interspeaker variation. To minimize the across-speaker physiological differences of the articulators, we propose Procrustes matching-based articulatory normalization by removing locational, rotational, and scaling differences. To further normalize the articulatory data, we apply feature-space maximum likelihood linear regression and i-vector. In this paper, we adopt a bidirectional long short-term memory recurrent neural network BLSTM as an articulatory model to effectively model the articulatory movements with long-range articulatory history. A silent speech dataset with flesh-point articulatory movements was collected using an electromagnetic articulograph from 12 healthy and two laryngectomized English speakers. Experimental results showed the effectiveness of our speaker-independent SSR approaches on healthy as well as laryngectomy speakers. In addition, BLSTM outperformed the standard deep neural network. The best performance was obtained by the BLSTM with all the three normalization approaches combined."}
{"_id": "86cf4c859d34c84fefdd8d36e5ea8ab691948512", "title": "Parallel Training for Deep Stacking Networks", "text": "The Deep Stacking Network (DSN) is a special type of deep architecture developed to enable and benefit from parallel learning of its model parameters on large CPU clusters. As a prospective key component of future speech recognizers, the architectural design of the DSN and its parallel training endow the DSN with scalability over a vast amount of training data. In this paper, we present our first parallel implementation of the DSN training algorithm. Particularly, we show the tradeoff between the time/memory saving via training parallelism and the associated cost arising from inter-CPU communication. Further, in phone classification experiments, we demonstrate a significantly lowered error rate using parallel full-batch training distributed over a CPU cluster, compared with sequential minibatch training implemented in a single CPU machine under otherwise identical experimental conditions and as exploited prior to the work reported in this paper."}
{"_id": "755908791b97686e9646b773e56f22637d9209ac", "title": "Feature based Summarization of Customers' Reviews of Online Products", "text": "With the growing availability and popularity of opinion-rich resources such as review forums for the product sold online, choosing the right product from a large number of products have become difficult for the user. For trendy product, the number of customers\u2019 opinions available can be in the thousands. It becomes hard for the customers to read all the reviews and if he reads only a few of those reviews, then he may get a biased view about the product. Makers of the products may also feel difficult to maintain, keep track and understand the customers\u2019 views for the products. Several research works have been proposed in the past to address these issues, but they have certain limitations: The systems implemented are completely opaque, the reviews are not easier to perceive and are time consuming to analyze because of large irrelevant information apart from actual opinions about the features, the feature based summarization system that are implemented are more generic ones and static in nature. In this research, we proposed a dynamic system for feature based summarization of customers\u2019 opinions for online products, which works according to the domain of the product. We are extracting online reviews for a product on periodic bases, each time after extraction, we carry out the following work: Firstly, identification of features of a product from customers' opinions is done. Next, for each feature, its corresponding opinions\u2019 are extracted and their orientation or polarity (positive/negative) is detected. The final polarity of feature-opinions pairs is calculated. At last, feature based summarizations of the reviews are generated, by extracting the relevant excerpts with respect to each feature-opinions pair and placing it into their respective feature based cluster. These feature based excerpts can easily be digested by the user. \u00a9 2013 The Authors. Published by Elsevier B.V. Selection and peer-review under responsibility of KES International."}
{"_id": "9f6fe58a9e1b306ad9c89f36698bb8afd78413a9", "title": "Trust or consequences? Causal effects of perceived risk and subjective norms on cloud technology adoption", "text": "Cloud computing has become a popular alternative information curation solution for organizations. As more corporate proprietary information is stored in the Cloud, concerns about Cloud information security have also increased. This study investigates the causal effect of perceived risk and subjective norms on users\u2019 trust intention to adopt Cloud technology. A partial least squares structural equation modeling (PLS-SEM) analysis was performed to assess latent variables and examine moderating effects to Cloud technology adoption. Our findings suggest that a user\u2019s perceived risk and subjective norms have a significant effect on their trust intention vis-a-vis Cloud adoption, which leads to their decision on whether to adopt the Cloud technology. While a user\u2019s attitudes do influence their intention to trust the Cloud, their attitude is not moderated by either perceived risk or subjective norms. On the other hand, a user\u2019s perceived risk of the Cloud environment predominately moderates their knowledge and perceived behavioral control, which results in their knowledge and perceived behavioral control not having a direct effect on their intention to trust the Cloud. Moreover, a user\u2019s"}
{"_id": "e77edecf23e7296cdec9a916249d7ae310324cba", "title": "Managing flood disasters on the built environment in the rural communities of Zimbabwe: Lessons learnt", "text": "This article is about managing flood disasters affecting the built environment in the rural communities of Zimbabwe. Using Tsholotsho district in Matabeleland North province as a case study, the authors argue that flooding has adversely impacted the built environment through destroying infrastructure. The principal objectives of this study were to establish the impact of flood disasters on the built environment, to demarcate factors that perpetuate communities' vulnerabilities to flooding and to delineate challenges that negate the management of flood disasters in the built environment. This qualitative study was based on a purposive sample of 40 participants. Data were collected through semi-structured interviews and observation methods. The findings were that floods can damage human shelter, roads, bridges and dams. Locating homesteads near rivers and dams, using poor-quality construction materials, and lack of flood warning were found to perpetuate vulnerability to flooding. Poverty and costs of rebuilding infrastructure, lack of cooperation between the communities and duty-bearers, and failure to use indigenous knowledge were found to be impeding the management of flood disasters. The study concluded that flood disasters can wipe out community development gains accumulated over many years. Further, community vulnerability to flooding in the built environment is socially constructed. The study posits that addressing the root causes, reducing flood vulnerability and avoiding risk creation are viable options to development in the built environment. Lastly, reconstruction following flood disasters is arduous and gruelling, and not an easy exercise."}
{"_id": "692cce8f39622626274bb260346e8f645c5e1310", "title": "A Reduced Reference Image Quality assessment for Multiply Distorted Images", "text": "In this paper, we propose a new Reduced Reference Image Quality Metric for multiply degraded images based on a features extraction step and its combination. The selected features are extracted from the original image and its degraded version. Some of them aim to quantify the level of the considered degradation types, while the others quantify its sharpness. These features are then combined to obtain a single value, which corresponds to the predicted subjective score. Our method has been evaluated and compared in terms of correlation with subjective judgments to some recent methods by using the LIVE Multiply Distorted Image Quality Database."}
{"_id": "d83792571b14041714d3da73e2ca8f5fa59e638e", "title": "From Social Media to GeoSocial Intelligence: Crowdsourcing Civic Co-management for Flood Response in Jakarta, Indonesia", "text": "Here we present a review of PetaJakarta.org, a system designed to harness social media use in Jakarta for the purpose of relaying information about flood locations from citizen to citizen and from citizens and the city\u2019s emergency management agency. The project aimed to produce an open, real-time situational overview of flood conditions and provide decision support for the management agency, as well as offering the government a data source for post-event analysis. As such, the platform was designed as a socio-technological system and developed as a civic co-management tool to enable climate adaptation and community resilience in Jakarta, a delta megacity suffering enormous infrastructural instability due to a troubled confluence of environmental factors\u2014the city\u2019s rapid urbanization, its unique geographic limitations, and increasing sea-levels and monsoon rainfalls resulting from climate change. The chapter concludes with a discussion of future research in open source platform and their role in infrastructure and disaster management. From now on there is an interconnection, an intertwining, even a symbiosis of technologies, exchanges, movements, which makes it so that a flood\u2014for instance\u2014wherever it may occur, must necessarily involve relationships with any number of technical, social, economic, political intricacies that keep us from regarding it simply as a misadventure or a misfortune whose consequences can be more or less easily circumscribed. \u2014 Jean-Luc Nancy (2015, 3\u20134) * E. Turpin & T. Holderness SMART Infrastructure Facility, University of Wollongong, Australia e-mail: eturpin@uow.edu.au | tomas@uow.edu.au Chapter 6 in Social Media for Government Services, Springer 2016 (preprint version)"}
{"_id": "284387ee4657d761be893316a0d608c52fb018c5", "title": "Physically Based Real-Time Translucency for Leaves", "text": "This paper presents a new shading model for real-time rendering of plant leaves that reproduces all important attributes of a leaf and allows for a large number of leaves to be shaded. In particular, we use a physically based model for accurate subsurface scattering on the translucent side of directly lit leaves. For real-time rendering of this model, we formulate it as an image convolution process and express the result in an efficient directional basis that is fast to evaluate. We also propose a data acquisition method for leaves that uses off-the-shelf devices."}
{"_id": "6b8fdca2795732ad591a8c247b2dfc89ce9b4c33", "title": "Beagle: Automated Extraction and Interpretation of Visualizations from the Web", "text": "\"How common is interactive visualization on the web?\" \"What is the most popular visualization design?\" \"How prevalent are pie charts really?\" These questions intimate the role of interactive visualization in the real (online) world. In this paper, we present our approach (and findings) to answering these questions. First, we introduce Beagle, which mines the web for SVG-based visualizations and automatically classifies them by type (i.e., bar, pie, etc.). With Beagle, we extract over 41,000 visualizations across five different tools and repositories, and classify them with 85% accuracy, across 24 visualization types. Given this visualization collection, we study usage across tools. We find that most visualizations fall under four types: bar charts, line charts, scatter charts, and geographic maps. Though controversial, pie charts are relatively rare for the visualization tools that were studied. Our findings also suggest that the total visualization types supported by a given tool could factor into its ease of use. However this effect appears to be mitigated by providing a variety of diverse expert visualization examples to users."}
{"_id": "ad86e63d5ed30ee4484b5d00c1742e93c9e45b49", "title": "A review on plant disease detection using image processing", "text": "India is the agriculture based country, since it contributes 7.68 percent of total global agricultural output. In India, agricultural sector contributes about seventeen percentage of total Indian gross domestic product (GDP). Effective growth and improved yield of plants are necessary for increment of farmer's profit and economy of India. For this purpose farmers need domain experts for manual monitoring of plants. But manual monitoring will not give satisfactory result all the time. Moreover, domain experts are not available at all regions and are expensive as farmers have to pay fees including travelling charges. Hence, it requires developing an efficient smart farming technique which will help for better yield and growth with less human efforts. In this paper, we provide a review on methods developed by various researchers for detection of diseases in plants, in the field of image processing. It includes research in disease detection of plants such as apple, grapes, pepper, pomegranate, tomato etc."}
{"_id": "de07b140f738b75f04532ecc149bdcf3bc1ee6af", "title": "Learning Procedures from Text: Codifying How-to Procedures in Deep Neural Networks", "text": "A lot of knowledge about procedures and how-tos are described in text. Recently, extracting semantic relations from the procedural text has been actively explored. Prior work mostly has focused on finding relationships among verb-noun pairs or clustering of extracted pairs. In this paper, we investigate the problem of learning individual procedure-specific relationships (e.g. is method of, is alternative of, or is subtask of ) among sentences. To identify the relationships, we propose an end-to-end neural network architecture, which can selectively learn important procedure-specific relationships. Using this approach, we could construct a how-to knowledge base from the largest procedure sharing-community, wikihow.com. The evaluation of our approach shows that it outperforms the existing entity relationship extraction algorithms."}
{"_id": "65dc6382c7e7c8d8a96e19db9235f4884f642bb0", "title": "A distributed camera system for multi-resolution surveillance", "text": "We describe an architecture for a multi-camera, multi-resolution surveillance system. The aim is to support a set of distributed static and pan-tilt-zoom (PTZ) cameras and visual tracking algorithms, together with a central supervisor unit. Each camera (and possibly pan-tilt device) has a dedicated process and processor. Asynchronous interprocess communications and archiving of data are achieved in a simple and effective way via a central repository, implemented using an SQL database."}
{"_id": "2e813ed09b47aaca10209c00d00d6f65b08d2cdc", "title": "Separability of EEG signals recorded during right and left motor imagery using adaptive autoregressive parameters.", "text": "Electroencephalogram (EEG) recordings during right and left motor imagery can be used to move a cursor to a target on a computer screen. Such an EEG-based brain-computer interface (BCI) can provide a new communication channel to replace an impaired motor function. It can be used by, e.g., patients with amyotrophic lateral sclerosis (ALS) to develop a simple binary response in order to reply to specific questions. Four subjects participated in a series of on-line sessions with an EEG-based cursor control. The EEG was recorded from electrodes overlying sensory-motor areas during left and right motor imagery. The EEG signals were analyzed in subject-specific frequency bands and classified on-line by a neural network. The network output was used as a feedback signal. The on-line error (100%-perfect classification) was between 10.0 and 38.1%. In addition, the single-trial data were also analyzed off-line by using an adaptive autoregressive (AAR) model of order 6. With a linear discriminant analysis the estimated parameters for left and right motor imagery were separated. The error rate obtained varied between 5.8 and 32.8% and was, on average, better than the on-line results. By using the AAR-model for on-line classification an improvement in the error rate can be expected, however, with a classification delay around 1 s."}
{"_id": "2c2a6ed5b7c792c8a53fd3fc7e437371fe8a0128", "title": "Environmental Sensor Networks : A revolution in the earth system science ?", "text": "Environmental Sensor Networks (ESNs) facilitate the study of fundamental processes and the development of hazard response systems. They have evolved from passive logging systems that require manual downloading, into \u2018intelligent\u2019 sensor networks that comprise a network of automatic sensor nodes and communications systems which actively communicate their data to a Sensor Network Server (SNS) where these data can be integrated with other environmental datasets. The sensor nodes can be fixed or mobile and range in scale appropriate to the environment being sensed. ESNs range in scale and function and we have reviewed over 50 representative examples. Large Scale Single Function Networks tend to use large single purpose nodes to cover a wide geographical area. Localised Multifunction Sensor Networks typically monitor a small area in more detail, often with wireless adhoc systems. Biosensor Networks use emerging biotechnologies to monitor environmental processes as well as developing proxies for immediate use. In the future, sensor networks will integrate these three elements (Heterogeneous Sensor Networks). The communications system and data storage and integration (cyberinfrastructure) aspects of ESNs are discussed, along with current challenges which need to be addressed. We argue that Environmental Sensor Networks will become a standard research tool for future Earth System and Environmental Science. Not only do they provide a \u2018virtual\u2019 connection with the environment, they allow new field and conceptual approaches to the study of environmental processes to be developed. We suggest that although technological advances have facilitated these changes, it is vital that Earth Systems and Environmental Scientists utilise them. \u00a9 2006 Elsevier B.V. All rights reserved."}
{"_id": "6df617304e9f1185694f11ca5cae5c27e868809b", "title": "Sensor networks: evolution, opportunities, and challenges", "text": "Wireless microsensor networks have been identified as one of the most important technologies for the 21st century. This paper traces the history of research in sensor networks over the past three decades, including two important programs of the Defense Advanced Research Projects Agency (DARPA) spanning this period: the Distributed Sensor Networks (DSN) and the Sensor Information Technology (SensIT) programs. Technology trends that impact the development of sensor networks are reviewed, and new applications such as infrastructure security, habitat monitoring, and traffic control are presented. Technical challenges in sensor network development include network discovery, control and routing, collaborative signal and information processing, tasking and querying, and security. The paper concludes by presenting some recent research results in sensor network algorithms, including localized algorithms and directed diffusion, distributed tracking in wireless ad hoc networks, and distributed classification using local agents."}
{"_id": "3b655db109beaae48b238045cf9618418e349f36", "title": "Structured low-rank approximation and its applications", "text": "Fitting data by a bounded complexity linear model is equivalent to low-rank approximation of a matrix constructed from the data. The data matrix being Hankel structured is equivalent to the existence of a linear time-invariant system that fits the data and the rank constraint is related to a bound on the model complexity. In the special case of fitting by a static model, the data matrix and its low-rank approximation are unstructured. We outline applications in system theory (approximate realization, model reduction, output error, and errors-in-variables identification), signal processing (harmonic retrieval, sum-of-damped exponentials, and finite impulse response modeling), and computer algebra (approximate common divisor). Algorithms based on heuristics and local optimization methods are presented. Generalizations of the low-rank approximation problem result from different approximation criteria (e.g., weighted norm) and constraints on the data matrix (e.g., nonnegativity). Related problems are rank minimization and structured pseudospectra. 2007 Elsevier Ltd. All rights reserved."}
{"_id": "f064920eafd965e93e974273bf738527d5ee663f", "title": "A novel ripple-based constant on-time control with virtual inductance and offset cancellation for DC power converters", "text": "In recent years, there has been a growing trend of mandating high power conversion efficiency, for both the heavy-load and the light-load conditions. To achieve this purpose, a ripple-based constant on-time (RBCOT) control for DC-DC converters has received wide attentions because of its natural characteristic of switching frequency reduction under light-load condition. However, a RBCOT converter suffers from output-voltage offset problem and sub-harmonic instability problem. In this paper, a modified RBCOT buck converter circuit will be proposed to solve both problems using the concept of virtual inductor current to stabilize the feedback and an offset cancellation circuit to cancel out the output DC offset. A control model based on describing function is also developed for the modified converter. From the model, it's found out that it's much easier to accomplish adaptive-voltage-positioning (AVP) using the proposed modified RBCOT scheme compared to a conventional constant-frequency controller. Simulation and experimental results are also given to verify the proposed scheme."}
{"_id": "0a51f6ae1d1eb15413cbd757cbe7b6a5a4d60961", "title": "Unbiased scalable softmax optimization", "text": "Recent neural network and language models rely on softmax distributions with an extremely large number of categories. Since calculating the softmax normalizing constant in this context is prohibitively expensive, there is a growing literature of efficiently computable but biased estimates of the softmax. In this paper we propose the first unbiased algorithms for maximizing the softmax likelihood whose work per iteration is independent of the number of classes and datapoints (and no extra work is required at the end of each epoch). We show that our proposed unbiased methods comprehensively outperform the state-of-the-art on seven real world datasets."}
{"_id": "e6ca9d19791cb591d3f13075dbe27e60ecc4d703", "title": "Position sensing in brake-by-wire callipers using resolvers", "text": "Recent designs for brake-by-wire systems use \"resolvers\" to provide accurate and continuous measurements for the absolute position and speed of the rotor of the electric actuators in brake callipers (permanent magnet DC motors). Resolvers are absolute-angle transducers that are integrated with estimator modules called \"angle tracking observer\" and together they provide position and speed measurements. Current designs for angle-tracking observers are unstable in applications with high acceleration and/or speed. In this paper, we introduce a new angle-tracking observer in which a closed-loop linear time-invariant (LTI) observer is integrated with a quadrature encoder. Finite-gain stability of the proposed design and its robustness to three different kinds of parameter variations are proven based on theorems of input-output stability in nonlinear control theory. In our experiments, we examined the performance of our observer and two other methods (a well-known LTI observer and an extended Kalman filter) to estimate the position and speed of a brake-by-wire actuator. The results show that because of the very high speed and acceleration of the actuator in this application, the LTI observer and Kalman filter cannot track the rotor position and diverge. In contrast, with a properly designed open-loop transfer function and selecting a suitable switching threshold, our proposed angle-tracking observer is stable and highly accurate in a brake-by-wire application"}
{"_id": "626ca63cb9ac59c48c31804d67dc54420a17439c", "title": "Identifying the Information Structure of Scientific Abstracts: An Investigation of Three Different Schemes", "text": "Many practical tasks require accessing specific types of information in scientific literature; e.g. information about the objective, methods, results or conclusions of the study in question. Several schemes have been developed to characterize such information in full journal papers. Yet many tasks focus on abstracts instead. We take three schemes of different type and granularity (those based on section names, argumentative zones and conceptual structure of documents) and investigate their applicability to biomedical abstracts. We show that even for the finest-grained of these schemes, the majority of categories appear in abstracts and can be identified relatively reliably using machine learning. We discuss the impact of our results and the need for subsequent task-based evaluation of the schemes."}
{"_id": "26a1223b1beb72a12c7944668ca26cf78ec12d8d", "title": "Derive: a tool that automatically reverse-engineers instruction encodings", "text": "Many binary tools, such as disassemblers, dynamic code generation systems, and executable code rewriters, need to understand how machine instructions are encoded. Unfortunately, specifying such encodings is tedious and error-prone. Users must typically specify thousands of details of instruction layout, such as opcode and field locations values, legal operands, and jump offset encodings. We have built a tool called DERIVE that extracts these details from existing software: the system assembler. Users need only provide the assembly syntax for the instructions for which they want encodings. DERIVE automatically reverse-engineers instruction encoding knowledge from the assembler by feeding it permutations of instructions and doing equation solving on the output.\nDERIVE is robust and general. It derives instruction encodings for SPARC, MIPS, Alpha, PowerPC, ARM, and x86. In the last case, it handles variable-sized instructions, large instructions, instruction encodings determined by operand size, and other CISC features. DERIVE is also remarkably simple: it is a factor of 6 smaller than equivalent, more traditional systems. Finally, its declarative specifications eliminate the mis-specification errors that plague previous approaches, such as illegal registers used as operands or incorrect field offsets and sizes. This paper discusses our current DERIVE prototype, explains how it computes instruction encodings, and also discusses the more general implications of the ability to extract functionality from installed software."}
{"_id": "691db8802f982a15f5c085acae73f95e17cd3d1f", "title": "Multi-user lax communications: A multi-armed bandit approach", "text": "Inspired by cognitive radio networks, we consider a setting where multiple users share several channels modeled as a multi-user multi-armed bandit (MAB) problem. The characteristics of each channel are unknown and are different for each user. Each user can choose between the channels, but her success depends on the particular channel chosen as well as on the selections of other users: if two users select the same channel their messages collide and none of them manages to send any data. Our setting is fully distributed, so there is no central control. As in many communication systems, the users cannot set up a direct communication protocol, so information exchange must be limited to a minimum. We develop an algorithm for learning a stable configuration for the multi-user MAB problem. We further offer both convergence guarantees and experiments inspired by real communication networks, including comparison to state-of-the-art algorithms."}
{"_id": "7125280b9daee53ddd7bfad3908f7bc0dedf718a", "title": "A Deep Learning Approach to Human Activity Recognition Based on Single Accelerometer", "text": "In this paper, we propose an acceleration-based human activity recognition method using popular deep architecture, Convolution Neural Network (CNN). In particular, we construct a CNN model and modify the convolution kernel to adapt the characteristics of tri-axial acceleration signals. Also, for comparison, we use some widely used methods to accomplish the recognition task on the same dataset. The large dataset we constructed consists of 31688 samples from eight typical activities. The experiment results show that the CNN works well, which can reach an average accuracy of 93.8% without any feature extraction methods."}
{"_id": "ec5ba7b84ce66ff47968d568a1d32a74baab2fed", "title": "SketchInsight: Natural data exploration on interactive whiteboards leveraging pen and touch interaction", "text": "In this work, we advance research efforts in combining the casual sketching approach of whiteboards with the machine's computing power. We present SketchInsight, a system that applies the familiar and collaborative features of a whiteboard interface to the accurate data exploration capabilities of interactive visualizations. SketchInsight enables data analysis with more fluid interaction, allowing people to visually explore their data by drawing simple charts and directly manipulating them. In addition, we report results from a qualitative study conducted to evaluate user experience in exploring data with SketchInsight, expanding our understanding on how people use a pen- and touch-enabled digital whiteboard for data exploration. We also discuss the challenges in building a working system that supports data analytic capabilities with pen and touch interaction and freeform annotation."}
{"_id": "43b91b602731ada903f94fdffa746343346dfa80", "title": "Spare parts inventory control : a literature review", "text": "Spare parts inventory are needed for maintenance and repair of final products, vehicles, industrial machines and equipments, frequently requiring high investments and significantly affecting customer satisfaction. Inventory management is complex due to the large number of different items and low demands. This article presents a literature review on single location spare parts inventory control, embracing both demand forecasting techniques and inventory control decisions on the different life cycle stages. Overall, the literature review identified the following research opportunities on inventory management: criteria to decide to stock or not an item, how much to order in the first and the last batch, demand forecasting and inventory control models integration and case studies on real applications."}
{"_id": "9522339865bb99280286a549afb4b87758bd8bd8", "title": "Towards Deviceless Edge Computing: Challenges, Design Aspects, and Models for Serverless Paradigm at the Edge", "text": "Recently, Cloud Computing, Edge Computing, and the Internet of Things (IoT) have been converging ever stronger, sparking creation of very large-scale, geographically distributed systems [1, 2]. Such systems intensively exploit Cloud Computing models and technologies, predominantly by utilizing large and remote data centers, but also nearby Cloudlets [3, 4] to enhance resource-constrained Edge devices (e.g., in terms of computation offloading [5\u20137] and data staging [8]) or to provide an execution environment for cloud-centric IoT/Edge applications [9, 10]. Serverless computing is an emerging paradigm, typically referring to a software architecture where application is decomposed into \u201ctriggers\u201d and \u201cactions\u201d (or functions), and there is a platform that provides seamless hosting and execution of developer-defined functions (FaaS), making it easy to develop, manage, scale, and operate them. This complexity mitigation is mainly achieved by incorporating sophisticated runtime mechanisms into serverless or FaaS platforms. Hence, such platforms are usually characterized by fully automating many of the management and operations processes. Therefore, serverless computing can be considered as the next step in the evolution of Cloud platforms, such as PaaS, or more generally of the utility computing. While originally designed for centralized cloud deployments, the benefits of serverless paradigm become especially evident in the context of Edge Computing [11]. This is mainly because in such systems, traditional infrastructure and application management solutions are tedious, ineffective, error-prone, and ultimately very costly. Luckily, some of the existing serverless techniques, such"}
{"_id": "506f9b9d67beadbd06c2ab063fc49eefe79bb13c", "title": "Tensor-based approach to accelerate deformable part models", "text": "This article provides next step towards solving speed bottleneck of any system that intensively uses convolutions operations (e.g. CNN). Method described in the article is applied on deformable part models (DPM) algorithm. Method described here is based on multidimensional tensors and provides efficient tradeoff between DPM performance and accuracy. Experiments on various databases, including Pascal VOC, show that the proposed method allows decreasing a number of convolutions up to 4.5 times compared with DPM v.5, while maintaining similar accuracy. If insignificant accuracy degradation is allowable, higher computational gain can be achieved. The method consists of filters tensor decomposition and convolutions shortening using the decomposed filter. Mathematical overview of the proposed method as well as simulation results are provided."}
{"_id": "0a38ee3188b74106799ac22deb3d50933015e097", "title": "User activities outlier detection system using principal component analysis and fuzzy rule-based system", "text": "In this paper, a user activities outlier detection system is introduced. The proposed system is implemented in a smart home environment equipped with appropriate sensory devices. An activity outlier detection system consist of a two-stage integration of Principal Component Analysis (PCA) and Fuzzy Rule-Based System (FRBS). In the first stage, the Hamming distance is used to measure the distances between the activities. PCA is then applied to the distance measures to find two indices of Hotelling's T2 and Squared Prediction Error (SPE). In the second stage of the process, the calculated indices are provided as inputs to FRBSs to model them heuristically. They are used to identify the outliers and classify them. Three case studies are reported to demonstrate the effectiveness of the proposed system. The proposed system successfully identifies the outliers and helps in distinguishing between the normal and abnormal behaviour patterns of the Activities of Daily Living (ADL)."}
{"_id": "e8f998dc5d8b42907e9dadc9476341d944e6b5f4", "title": "World ocean simulation system (WOSS): a simulation tool for underwater networks with realistic propagation modeling", "text": "Network simulators are a fundamental tool for the performance evaluation of protocols and applications in complex scenarios, which would be too expensive or infeasible to realize in practice.\n With the aim to provide a shared environment for the simulation of underwater networks we have adapted the ns2 network simulator to provide a detailed reproduction of the propagation of sound in water (i.e., by means of ray tracing instead of empirical relations). This has been tied to formerly available simulation frameworks (such as the MIRACLE extensions to ns2) to provide a completely customizable tool, including acoustic propagation, physical layer modeling, and cross-layer specification of networking protocols. In this paper, we describe our tool, and use it for a case study involving the comparison of three MAC protocols for underwater networks over different kinds of physical layers. Our results compare the transmission coordination approach chosen by each protocol, and show when it is better to rely on random access, as opposed to loose or tight coordination."}
{"_id": "dc905e7911e38486ddaeb0722a1951525e9d0a15", "title": "Dialectic Perspective on Innovation 1 A Dialectic Perspective on Innovation : Conflicting Demands , Multiple Pathways , and Ambidexterity", "text": "Innovation, the development and intentional introduction of new and useful ideas by individuals, teams, and organizations, lies at the heart of human adaptation. Decades of research in different disciplines and at different organizational levels have produced a wealth of knowledge about how innovation emerges and the factors that facilitate and inhibit innovation. We propose that this knowledge needs integration. In an initial step toward this goal, we apply a dialectic perspective on innovation to overcome limitations of dichotomous reasoning and to gain a more valid account of innovation. We point out that individuals, teams, and organizations need to self-regulate and manage conflicting demands of innovation and that multiple pathways can lead to idea generation and innovation. By scrutinizing the current use of the concept of organizational ambidexterity and extending it to individuals and teams, we develop a framework to help guide and facilitate future research and practice. Readers expecting specific and universal prescriptions of how to innovate will be disappointed as current research does not allow such inferences. Rather, we think innovation research should focus on developing and testing principles of innovation management in addition to developing decision aids for organizational practice. To this end, we put forward key propositions and action principles of innovation management. Dialectic Perspective on Innovation 3 A Dialectic Perspective on Innovation: Conflicting Demands, Multiple Pathways, and Ambidexterity Throughout this paper we use the term creativity for the generation of new and useful ideas and that of innovation to include both creative ideas and their implementation. Innovation can take different forms, including technological innovation, product and service innovation, and process innovation. The importance of innovation is widely acknowledged by organizational scholars, practitioners, and the wider society in an economic environment characterized by fierce competition, rapid change, and the global challenges of climate change and economic booms and busts. Decades of research have produced a wealth of knowledge about the characteristics of individuals, teams, and organizations that are related to outcomes of innovation (e.g., Anderson, De Dreu, & Nijstad, 2004; Damanpour, 1991). Some of these findings converge around factors that have been reliably found to influence innovation, such as a shared vision, innovative organizational culture (Miron, Erez, & Naveh, 2004; Naveh & Erez, 2004), emphasis on exploration rather than exploitation, and investment in R&D (Zahra & George, 2002; Cohen & Levinthal, 1990). Findings with respect to other factors, such as team diversity (Gibson & Gibbs, 2006), task related conflict, monetary rewards (Amabile, 2000; Eisenberger & Rhoades, 2001) remain contradictory. While we think that the scientific understanding of innovation is an important endeavor in its own right, we also suggest that the impact of scientific knowledge about innovation can be improved in organizational practice. While many reasons may account for a lack of transfer of scientific knowledge to management practices, one reason may be that the scientific findings do not readily or easily produce actionable knowledge. 
On the other hand, simplistically inferring practical implications or trying to be overly prescriptive can do harm, particularly if the context of an individual, team, or organization is not Dialectic Perspective on Innovation 4 taken into account. We, therefore, chose to integrate empirical findings of the extant literature with an attempt to develop a set of broad principles of innovation management that can guide decision making and action in organizations. We base this integration on a perspective on innovation which we term \u201cdialectic\u201d; we present and develop this perspective below. Toward this goal we offer three contributions to the literature: \u2022 By developing and applying a dialectic perspective on innovation, we aim to gain a more valid account of innovation in organizations that can enrich research and practice. \u2022 By reviewing core findings about innovation we illustrate how multiple pathways can lead to idea generation and innovation. \u2022 By redefining and extending the concept of ambidexterity, we propose a cross-level framework for the successful management of inherently conflicting demands of innovation. Tensions of Innovation: Theoretical Perspectives A pervasive theme in research on organizational innovation is that innovation is characterized by tensions (Lewis, Welsh, Dehler, & Green, 2002), paradoxes (Miron et al., 2004), contradictions (King, Anderson, & West, 1991), dilemmas (Benner & Tushman, 2003), and the so-called \u201cdark side\u201d of innovation processes (Anderson & Gasteiger, 2007). Table 1 presents a number of examples of tensions related to innovation that have been noted in the published literature. We organize Table 1 by the referent level of the tension (individual, team, or organizational) and by whether the tension is primarily focused on antecedents of innovation or on innovative processes and outcomes. That tensions have been described frequently at all levels of analysis and with regard to antecedents, processes, and consequences of innovation provides Dialectic Perspective on Innovation 5 what we think is compelling evidence for their pervasiveness within organizations attempting to innovate. -----------------------------Insert Table 1 about here -----------------------------We propose with others that understanding and managing these tensions is central to successful innovation and use the terms conflicting demands and conflicting activities to refer to the origins of tensions, paradoxes, contradictions and dilemmas. In the following we contrast two theoretical perspectives and the strategies they imply for dealing with these tensions. One strategy deals with tensions by emphasizing the separation of conflicting activities to different sub-organizations or even different organizations altogether (O'Reilly & Tushman, 2004). The top management of the organization is responsible for the necessary integration of activities that produce the tensions. In a way, this strategy deals with tensions by reducing them as much as possible. This strategy derives from a dichotomous theory perspective (March, Sproull, & Tamuz, 1991). We contrast the dichotomous approach with a second theoretical perspective which argues that a strict separation of conflicting activities to sub-organizations leads to disadvantages. Given that a system has sufficient levels of internal complexity (Brown & Eisenhardt, 1997), the tensions should be kept within the system to be managed rather than organized \u201cout\u201d of the system. 
Both the dichotomous (\u201ckeep separate\u201d) and dialectic (\u201cintegrate and manage\u201d) perspectives concur that the innovation process poses potentially conflicting task demands on individuals, teams, and organizations. By task demands we refer to the patterns of requisite Dialectic Perspective on Innovation 6 activity an individual, team, or organization must engage in to achieve an outcome. Innovation confronts individuals, teams, and organizations with fundamentally different demands in several important and unavoidable ways: First, the demands of innovation differ from the demands of routine performance. Whereas routine performance is based on the exploitation of knowledge, skills, and abilities that emphasize quality and efficiency criteria, innovation requires exploratory action and creative thinking. People and teams who need to be creative and innovative must crave newness and be curious, while people and teams who are supposed to produce efficiently must be able to close their minds to new ideas that just interrupt the clear pattern of existing routines and hinder the further development of those routines into ever more efficient production. Second, the innovation process itself encompasses different sets of activities, such as those related to idea generation and innovation implementation: These sets of activities are linked to different or conflicting antecedents. For instance, granting autonomy is linked to the generation of new ideas (Shalley, Zhou, & Oldham, 2004), while initiating structure is related to the success of implementing incremental innovation (Keller, 2006). Maximizing the conditions fostering creativity is unlikely to translate directly into innovation because innovation encompasses much more than idea generation. Indeed, the maximization of factors that facilitate the development of new ideas is likely to simultaneously cause conditions that may inhibit idea implementation, and thus innovation overall. For example, Xerox Parc developed many innovations in software design, PC hardware and PC connectivity. The creativity of this group was enormous. And these innovations were often implemented in products. However, Xerox made little economic use of these enormously creative new products that essentially were fed into innovations marketed by Apple and Microsoft (Bergin, 2006; Miller & Steinberg, 2006). Dialectic Perspective on Innovation 7 Third, in addition, there are different types of innovation: An innovation is incremental when it builds on and exploits existing products and processes; an innovation is radical or disruptive if it departs strongly from the status quo (Benner & Tushman, 2003). Incremental innovation leads to expected increases in mean levels of performance, while radical innovation creates variability in performance (potential for high losses or high returns). The expected level of returns is lower for radical than for incremental innovation (Taylor & Greve, 2006). We assume with others that for radical innovation the problem of conflicting task demands of innovation is more pronounced than for incremental innovation and the management of tensions is particularly challenging (Christensen, 1997). But even incremental inn"}
{"_id": "3626dc925f846beccd97398521fec4c86b009ea4", "title": "Mining Road Traffic Accident Data to Improve Safety: Role of Road-Related Factors on Accident Severity in Ethiopia", "text": "Road traffic accidents (RTAs) are a major public health concern, resulting in an estimated 1.2 million deaths and 50 million injuries worldwide each year. In the developing world, RTAs are among the leading cause of death and injury; Ethiopia in particular experiences the highest rate of such accidents. Thus, methods to reduce accident severity are of great interest to traffic agencies and the public at large. In this work, we applied data mining technologies to link recorded road characteristics to accident severity in Ethiopia, and developed a set of rules that could be used by the Ethiopian Traffic Agency to improve safety. Problem Statement The costs of fatalities and injuries due to road traffic accidents (RTAs) have a tremendous impact on societal well-being and socioeconomic development. RTAs are among the leading causes of death and injury worldwide, causing an estimated 1.2 million deaths and 50 million injuries each year (World Health Organization, 2004). Ethiopia has the highest rate of RTAs, owing to the fact that road transport is the major transportation system in the country. The Ethiopian traffic control system archives data on various aspects of the traffic system, such as traffic volume, concentration, and vehicle accidents. With more vehicles and traffic, the capital city of Addis Ababa takes the lion\u2019s share of the risk, with an average of 20 accidents being recorded every day and even more going unreported. The basic hypothesis of this research is that accidents are not randomly scattered along the road network, and that drivers are not involved in accidents at random. There are complex circumstantial relationships between several characteristics (driver, road, car, etc.) and the accident occurrence. As such, one cannot improve safety without successfully relating accident frequency and severity to the causative variables (Kononov and Janson 2002). We will attempt to extend the authors\u2019 previous work in this area by generating additional attributes and focusing on the contribution of road-related factors to accident severity in Ethiopia. This will help to identify the parts of a road that are risky, thus supporting traffic accident data analysis in decision-making processes. The general objective of the research is to investigate the role of road-related factors in accident severity, using RTA data from Ethiopia and predictive models. Our three specific objectives include: 1) exploring the underlying variables (especially road-related ones) that impact car accident severity, 2) predicting accident severity using different data mining techniques, and 3) comparing standard classification models for this task. Literature Review Various studies have addressed the different aspects of RTAs, with most focusing on predicting or establishing the critical factors influencing injury severity (Chong, A. et al. 2005). Numerous data mining-related studies have been undertaken to analyze RTA data locally and globally, with results frequently varying depending on the socio-economic conditions and infrastructure of a given location. Ossenbruggen, Pendharkar et al. (2001) used a logistic regression model to identify the prediction factors of crashes and crash-related injuries, using models to perform a risk assessment of a given region. 
These models included attributes describing a site by its land use activity, roadside design, use of traffic control devices, and traffic exposure. Their study illustrated that village sites were less hazardous than residential or shopping sites. Abdalla et al. (1997) also studied the relationship between casualty frequency and the distance of an accident from residential zones. Not surprisingly, casualty frequencies were higher in accidents that occurred nearer to residential zones, possibly due to higher exposure. The"}
{"_id": "d2b481e579d30fa3b69bdd250ad7a479f74022f7", "title": "A knowledge-based approach for duplicate elimination in data cleaning", "text": "Existing duplicate elimination methods for data cleaning work on the basis of computing the degree of similarity between nearby records in a sorted database. High recall can be achieved by accepting records with low degrees of similarity as duplicates, at the cost of lower precision. High precision can be achieved analogously at the cost of lower recall. This is the recall\u2013precision dilemma. We develop a generic knowledge-based framework for effective data cleaning that can implement any existing data cleaning strategies and more. We propose a new method for computing transitive closure under uncertainty for dealing with the merging of groups of inexact duplicate records and explain why small changes to window sizes has little effect on the results of the sorted neighborhood method. Experimental study with two real-world datasets show that this approach can accurately identify duplicates and anomalies with high recall and precision, thus effectively resolving the recall\u2013precision dilemma. r 2001 Published by Elsevier Science Ltd."}
{"_id": "375336dd185c9267611606d587bb4660aa4d548e", "title": "Identifying Exoplanets with Deep Learning : A Five-planet Resonant Chain around Kepler-80 and an Eighth Planet around Kepler-90", "text": "NASA\u2019s Kepler Space Telescope was designed to determine the frequency of Earth-sized planets orbiting Sun-like stars, but these planets are on the very edge of the mission\u2019s detection sensitivity. Accurately determining the occurrence rate of these planets will require automatically and accurately assessing the likelihood that individual candidates are indeed planets, even at low signal-to-noise ratios. We present a method for classifying potential planet signals using deep learning, a class of machine learning algorithms that have recently become state-of-theart in a wide variety of tasks. We train a deep convolutional neural network to predict whether a given signal is a transiting exoplanet or a false positive caused by astrophysical or instrumental phenomena. Our model is highly effective at ranking individual candidates by the likelihood that they are indeed planets: 98.8% of the time it ranks plausible planet signals higher than false-positive signals in our test set. We apply our model to a new set of candidate signals that we identified in a search of known Kepler multi-planet systems. We statistically validate two new planets that are identified with high confidence by our model. One of these planets is part of a five-planet resonant chain around Kepler-80, with an orbital period closely matching the prediction by three-body Laplace relations. The other planet orbits Kepler-90, a star that was previously known to host seven transiting planets. Our discovery of an eighth planet brings Kepler-90 into a tie with our Sun as the star known to host the most planets."}
{"_id": "dd135a89b5075af5cbef5becaf419457cdd77cc9", "title": "An Introduction to Restricted Boltzmann Machines", "text": "Restricted Boltzmann machines (RBMs) are probabilistic graphical models that can be interpreted as stochastic neural networks. The increase in computational power and the development of faster learning algorithms have made them applicable to relevant machine learning problems. They attracted much attention recently after being proposed as building blocks of multi-layer learning systems called deep belief networks. This tutorial introduces RBMs as undirected graphical models. The basic concepts of graphical models are introduced first, however, basic knowledge in statistics is presumed. Different learning algorithms for RBMs are discussed. As most of them are based on Markov chain Monte Carlo (MCMC) methods, an introduction to Markov chains and the required MCMC techniques is provided."}
{"_id": "0433c25af979bf63f833dd05b418ca988f26132a", "title": "Automatic generation of buffer overflow attack signatures: an approach based on program behavior models", "text": "Buffer overflows have become the most common target for network-based attacks. They are also the primary mechanism used by worms and other forms of automated attacks. Although many techniques have been developed to prevent server compromises due to buffer overflows, these defenses still lead to server crashes. When attacks occur repeatedly, as is common with automated attacks, these protection mechanisms lead to repeated restarts of the victim application, rendering its service unavailable. To overcome this problem, we develop a new approach that can learn the characteristics of a particular attack, and filter out future instances of the same attack or its variants. By doing so, our approach significantly increases the availability of servers subjected to repeated attacks. The approach is fully automatic, does not require source code, and has low runtime overheads. In our experiments, it was effective against most attacks, and did not produce any false positives"}
{"_id": "1f4fff64adef5ec6ae21e8647d5a042bf71d64d9", "title": "Human detection in surveillance videos and its applications - a review", "text": "Detecting human beings accurately in a visual surveillance system is crucial for diverse application areas including abnormal event detection, human gait characterization, congestion analysis, person identification, gender classification and fall detection for elderly people. The first step of the detection process is to detect an object which is in motion. Object detection could be performed using background subtraction, optical flow and spatio-temporal filtering techniques. Once detected, a moving object could be classified as a human being using shape-based, texture-based or motion-based features. A comprehensive review with comparisons on available techniques for detecting human beings in surveillance videos is presented in this paper. The characteristics of few benchmark datasets as well as the future research directions on human detection have also been discussed."}
{"_id": "4ae120e82e9ec157de6c68a000863225cf1bbbd7", "title": "Survey on Independent Component Analysis", "text": "A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research, is nding a suitable representation of multivariate data. For computational and conceptual simplicity, such a representation is often sought as a linear transformation of the original data. Well-known linear transformation methods include, for example, principal component analysis, factor analysis, and projection pursuit. A recently developed linear transformation method is independent component analysis (ICA), in which the desired representation is the one that minimizes the statistical dependence of the components of the representation. Such a representation seems to capture the essential structure of the data in many applications. In this paper, we survey the existing theory and methods for ICA."}
{"_id": "0ef93a4f1324b47eef4ad75a158b74aed9b0fdb3", "title": "Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks", "text": "Large multilayer neural networks trained with backpropagation have recently achieved state-ofthe-art results in a wide range of problems. However, using backprop for neural net learning still has some disadvantages, e.g., having to tune a large number of hyperparameters to the data, lack of calibrated probabilistic predictions, and a tendency to overfit the training data. In principle, the Bayesian approach to learning neural networks does not have these problems. However, existing Bayesian techniques lack scalability to large dataset and network sizes. In this work we present a novel scalable method for learning Bayesian neural networks, called probabilistic backpropagation (PBP). Similar to classical backpropagation, PBP works by computing a forward propagation of probabilities through the network and then doing a backward computation of gradients. A series of experiments on ten real-world datasets show that PBP is significantly faster than other techniques, while offering competitive predictive abilities. Our experiments also show that PBP provides accurate estimates of the posterior variance on the network weights."}
{"_id": "5c3b6cb9cfd2466918c998a00e97436dec801713", "title": "Query Segmentation and Resource Disambiguation Leveraging Background Knowledge", "text": "Accessing the wealth of structured data available on the Data Web is still a key challenge for lay users. Keyword search is the most convenient way for users to access information (e.g., from data repositories). In this paper we introduce a novel approach for determining the correct resources for user-supplied keyword queries based on a hidden Markov model. In our approach the user-supplied query is modeled as the observed data and the background knowledge is used for parameter estimation. Instead of learning parameter estimation from training data, we leverage the semantic relationships between data items for computing the parameter estimations. In order to maximize accuracy and usability, query segmentation and resource disambiguation are mutually tightly interwoven. First, an initial set of potential segmentations is obtained leveraging the underlying knowledge base; then the final correct set of segments is determined after the most likely resource mapping was computed using a scoring function. While linguistic methods like named entity, multi-word unit recognition and POS-tagging fail in the case of an incomplete sentences (e.g. for keyword-based queries), we will show that our statistical approach is robust with regard to query expression variance. Our experimental results when employing the hidden Markov model for resource identification in keyword queries reveal very promising results."}
{"_id": "78f472e341139f78e74c605a14b330133190464b", "title": "Humanoid locomotion on uneven terrain using the time-varying divergent component of motion", "text": "This paper presents a framework for dynamic humanoid locomotion on uneven terrain using a novel time-varying extension to the Divergent Component of Motion (DCM). By varying the natural frequency of the DCM, we are able to achieve generic CoM height trajectories during stepping. The proposed planning algorithm computes admissible DCM reference trajectories given desired ZMP plans for single and double support. This is accomplished using reverse-time integration of the discretized DCM dynamics over a finite time horizon. To account for discontinuities during replanning, linear Model Predictive Control (MPC) is implemented over a short preview window. DCM tracking control is achieved using a time-varying proportional-integral controller based on the Virtual Repellent Point (VRP). The effectiveness of the combined approach is verified in simulation using a 30-DoF model of THOR, a compliant torque-controlled humanoid."}
{"_id": "d31bffbc48a5a89872a9a9e996eaf7c22253b4b0", "title": "From Hunt the Wumpus to EverQuest: Introduction to Quest Theory", "text": "The paper will explore how the landscape types and the quest types are used in various games, how they structure the gameplay, how they act as bones for the game-content (graphics, dialogue, sound) and how they sometimes form the base on which a story is imposed and related to the player. The question then becomes, how does the quest structure influence the story structure? How do the limitations of the quest combinations limit the kinds of story that are possible? How rich can the imposed story be, without breaking the gameplay? Are landscape and quest-structure the dominant factors in quest game design, to which the story-ambitions must defer? The main thesis of the paper is that if we understand the powerful but simple structure the grammar of quests (how they work, how they are used) we can understand both the limits and the potential of these kinds of games."}
{"_id": "ef52037f9ae7b0f1b584c67ae026ff654fee3363", "title": "Two practical man-in-the-middle attacks on Bluetooth secure simple pairing and countermeasures", "text": "We propose two new Man-In-The-Middle (MITM) attacks on Bluetooth Secure Simple Pairing (SSP). The attacks are based on the falsification of information sent during the input/output capabilities exchange and also the fact that the security of the protocol is likely to be limited by the capabilities of the least powerful or the least secure device type. In addition, we devise countermeasures that render the attacks impractical, as well as improvements to the existing Bluetooth SSP in order to make it more secure. Moreover, we provide a comparative analysis of the existing MITM attacks on Bluetooth."}
{"_id": "91de27943422af4fc4e43cdc47dcf1428377b592", "title": "Optimizing orthodontic treatment in patients taking bisphosphonates for osteoporosis.", "text": "Bisphosphonates have unique pharmacological characteristics unlike those of any other drug group. Millions of adults take oral bisphosphonates for long-term treatment of osteoporosis and osteopenia; some of these people will most likely also seek orthodontic treatment. Adverse dental effects from bisphosphonates have been reported, including decreased tooth movement, impaired bone healing, and osteonecrosis in the mandible and the maxilla. Osteonecrosis has been rarely observed after bisphosphonate use for osteoporosis. However, adverse drug effects might occur more frequently in orthodontic patients, and they would probably be noted before the end-stage pathology of osteonecrosis. Adverse effects during orthodontic treatment, including decreased tooth movement, could last for years after the drug therapy is stopped. Successful orthodontic treatment requires optimal bone healing to prevent excessive tooth mobility. Bisphosphonates appear to have 2 bone elimination rates--a fast elimination of weeks from the bone surface and a slow elimination of years after incorporation into the bone structure. This article presents methods to clinically and radiographically monitor orthodontic patients who are taking oral bisphosphonates. Efforts to minimize adverse effects and optimize orthodontic procedures with physician-approved drug holidays are discussed. The orthodontic treatment results of 3 patients who received bisphosphonate therapy are reported."}
{"_id": "a8f1d7b48f7c29946602d42ba75f226f75a955cb", "title": "Use of e-books in an academic and research environment: A case study from the Indian Institute of Science", "text": "Purpose: E-books are not as popular as other types of e-publications, such as e-journals and e-newspapers. The possible reasons for this could be because the technology for creating/accessing e-books (both hardware and software) is not yet matured, users\u2019 perspectives about e-books need to be changed and the awareness of e-books needs to be raised. The purpose of this study is to investigate the use and usability of e-books from users\u2019 perspectives in an academic and research environment."}
{"_id": "d35c48fc4b2e02a225fab212f5b0fcbc4511b726", "title": "Robust Visual Tracking with Deep Convolutional Neural Network Based Object Proposals on PETS", "text": "Tracking by detection based object tracking methods encounter numerous complications including object appearance changes, size and shape deformations, partial and full occlusions, which make online adaptation of classifiers and object models a substantial challenge. In this paper, we employ an object proposal network that generates a small yet refined set of bounding box candidates to mitigate the this object model refitting problem by concentrating on hard negatives when we update the classifier. This helps improving the discriminative power as hard negatives are likely to be due to background and other distractions. Another intuition is that, in each frame, applying the classifier only on the refined set of object-like candidates would be sufficient to eliminate most of the false positives. Incorporating an object proposal makes the tracker robust against shape deformations since they are handled naturally by the proposal stage. We demonstrate evaluations on the PETS 2016 dataset and compare with the state-of-theart trackers. Our method provides the superior results."}
{"_id": "ee1140f49c2f1ce32d0ed9404078c724429cc487", "title": "Design and Sensitivity Analysis of Highly Compact Comparator for Ku-Band Monopulse Radar", "text": "This paper gives the design of a highly compact comparator at a Ku-band frequency and presents analysis results of the comparator for the fabrication inaccuracies. First an unconventional magic-t using a nonstandard waveguide is designed at 15.50 GHz. To reduce the volume occupied by the magic-t, its E-arm (or difference port) is kept parallel to the plane of two inputs of the magic-t instead of perpendicular to them as is done in a convention magic-t. The sum and the difference ports of the above folded magic-t are then matched using inductive windows at 15.50 GHz. Keeping the required location of the outputs of the comparator in mind, four of these matched folded magic-ts are suitably interconnected to design a highly compact comparator. The effects of the fabrication errors in the waveguide and matching elements dimensions on the centre frequency, magnitude and phase response of the comparator are also analyzed and presented."}
{"_id": "141a224fd027f82e8359579f6283b0eaf445efa6", "title": "5G 3GPP-Like Channel Models for Outdoor Urban Microcellular and Macrocellular Environments", "text": "For the development of new 5G systems to operate in bands up to 100 GHz, there is a need for accurate radio propagation models at these bands that currently are not addressed by existing channel models developed for bands below 6 GHz. This document presents a preliminary overview of 5G channel models for bands up to 100 GHz. These have been derived based on extensive measurement and ray tracing results across a multitude of frequencies from 6 GHz to 100 GHz, and this document describes an initial 3D channel model which includes: 1) typical deployment scenarios for urban microcells (UMi) and urban macrocells (UMa), and 2) a baseline model for incorporating path loss, shadow fading, line of sight probability, penetration and blockage models for the typical scenarios. Various processing methodologies such as clustering and antenna decoupling algorithms are also presented."}
{"_id": "a804e95d5641e85674cab2fe8c51c34bfbe3a146", "title": "Breathing at a rate of 5.5 breaths per minute with equal inhalation-to-exhalation ratio increases heart rate variability.", "text": "OBJECTIVES\nPrior studies have found that a breathing pattern of 6 or 5.5 breaths per minute (bpm) was associated with greater heart rate variability (HRV) than that of spontaneous breathing rate. However, the effects of combining the breathing rate with the inhalation-to-exhalation ratio (I:E ratio) on HRV indices are inconsistent. This study aimed to examine the differences in HRV indices and subjective feelings of anxiety and relaxation among four different breathing patterns.\n\n\nMETHODS\nForty-seven healthy college students were recruited for the study, and a Latin square experimental design with a counterbalance in random sequences was applied. Participants were instructed to breathe at two different breathing rates (6 and 5.5 breaths) and two different I:E ratios (5:5 and 4:6). The HRV indices as well as anxiety and relaxation levels were measured at baseline (spontaneous breathing) and for the four different breathing patterns.\n\n\nRESULTS\nThe results revealed that a pattern of 5.5 bpm with an I:E ratio of 5:5 produced a higher NN interval standard deviation and higher low frequency power than the other breathing patterns. Moreover, the four different breathing patterns were associated with significantly increased feeling of relaxation compared with baseline.\n\n\nCONCLUSION\nThe study confirmed that a breathing pattern of 5.5 bpm with an I:E ratio of 5:5 achieved greater HRV than the other breathing patterns. This finding can be applied to HRV biofeedback or breathing training in the future."}
{"_id": "3c90ba87e22586d8f5224d2a47aeb0bcb6f776fc", "title": "Data transmission via GSM voice channel for end to end security", "text": "Global System for Mobile Communications (GSM) technology still plays a key role because of its availability, reliability and robustness. Recently, additional consumer applications are proposed in which GSM is used as a backup or data transmission service. Unfortunately sending data via GSM channel is a challenging task since it is speech sensitive and suppresses other forms of signals. In this paper, a systematic method is proposed to develop a modem that transmits data over GSM voice channel (DoGSMV) using speech like (SL) symbols. Unlike the previous approaches an artificial search space is produced to find best SL symbols and analyses by synthesis (AbyS) method is introduced for parameter decoding. As a result 1.6 kbps simulation data rate is achieved when wireless communication errors are ignored."}
{"_id": "a893412a71da30e41c076f3768c519cc78af7e10", "title": "Frequency-Reconfigurable Low-Profile Circular Monopolar Patch Antenna", "text": "A novel frequency-reconfigurable antenna is presented based on a circular monopolar patch antenna. The antenna comprises of a center-fed circular patch surrounded by four sector-shaped patches. Eight varactor diodes are introduced to bridge the gaps between the circular patch and the sector-shaped patches. By changing the reverse bias voltage of the varactor diodes, the antenna can be switched in the operating frequency. A fully functional prototype is developed and tested, which exhibits a continuously tunable frequency band from 1.64 GHz to 2.12 GHz. The measured efficiency rises from about 45% to about 85% as the operating frequency increases from 1.64 GHz to 2.12 GHz. Stable monopole like radiation patterns are achieved at all operating frequencies. In addition, the antenna owns a low-profile structure (0.04 free-space wavelengths at 1.9 GHz). The frequency selective feature and stable radiation patterns make the antenna potentially suitable for future wireless communication systems, such as cognitive radio."}
{"_id": "9687bea911e5fa72aeecbae752b2d785c38798dc", "title": "Three-dimensional soft-tissue and hard-tissue changes in the treatment of bimaxillary protrusion.", "text": "INTRODUCTION\nFacial convexity related to bimaxillary protrusion is prevalent in many populations. Underlying\u00a0skeletal protrusion combined with increased dentoalveolar protrusion contributes to facial muscle imbalance and lip incompetence, which is undesirable for many patients. In this study, we evaluated the relationship between soft-tissue and hard-tissue changes in an orthodontically treated Asian population.\n\n\nMETHODS\nTwenty-four consecutive adult Asian patients (mean age, 24 years), diagnosed with severe bimaxillary dentoalveolar protrusion, were evaluated using pretreatment and posttreatment cone-beam computed tomography. The patients were treated with 4 first premolar extractions followed by anterior retraction with either skeletal or intraoral anchorage. Serial cone-beam computed tomography radiographs were registered on the entire cranial base and fossa. Soft-tissue and hard-tissue changes were determined through landmark displacement and color mapping.\n\n\nRESULTS\nUpper lip retraction was concentrated between the nasolabial folds and commissures. Lower lip retraction was accompanied by significant redistribution of soft tissues at pogonion. Soft-tissue changes correlated well with regional facial muscle activity. Significant retractions (2-4 mm) of the soft tissues occurred beyond the midsagittal region. Use of skeletal anchorage resulted in 1.5 mm greater lower lip retraction than intraoral anchorage, with greater retraction of the maxillary and mandibular incisor root apices.\n\n\nCONCLUSIONS\nProfound soft-tissue changes accompanied retraction of the anterior dentition with both treatment modalities."}
{"_id": "de0715589cf7027785b11389aa1bfd100e24fe7d", "title": "Management of a Large Frontal Encephalocoele With Supraorbital Bar Remodeling and Advancement.", "text": "Of all the craniofacial abnormalities, facial clefts are the most disfiguring. Facial clefts are classified according to the affected anatomical area as described by Tessier. Through this classification, the location and extent of the cleft can be designated numerically.A 2-month-old male infant was referred to authors' craniofacial unit, from a hospital in a rural province of South Africa, with a problem of a supranasal encephalocoele. Bilateral raised eyebrows were noted as was a right-sided upper lid central third coloboma. Computed tomography and magnetic resonance imaging scans confirmed the presence of a supranasal encephalocoele with a large frontal bone defect and splayed nasal bones. Bilateral enlarged orbits were noted with tented orbital roofs that we classified as Tessier number 10 facial clefts. The child was booked for an encephalocoele excision and calvarial reconstruction at 4 months of age.As a result of the encephalocoele, the supraorbital bar with its adjacent nasal bones was cleaved in 2, resulting in a significant frontal bone defect. Osteotomies were performed to remove the supraorbital bar and nasal bones from the calvarium. The supraorbital bar segment was remodeled and plated with absorbable poly-L-lactic acid plates. Osteotomies of the nasal bones allowed them to be united centrally, also with absorbable plates. This entire construct was transferred and secured to the calvarium, but in a more caudal position thereby obliterating the frontal bone and Tessier number 10 facial cleft defects with a naturally contoured construct."}
{"_id": "781e17a61ff50ef3ab038fccf25dcc159f0b0899", "title": "Parallel Cat Swarm Optimization", "text": "We investigate a parallel structure of cat swarm optimization (CSO) in this paper, and we call it parallel cat swarm optimization (PCSO). In the experiments, we compare particle swarm optimization (PSO) with CSO and PCSO. The experimental results indicate that both CSO and PCSO perform well. Moreover, PCSO is an effective scheme to improve the convergent speed of cat swarm optimization in case the population size is small and the whole iteration is less."}
{"_id": "16e7cf26e1331e0308cfabb779b6e4402e0ae888", "title": "WiFi-Nano: reclaiming WiFi efficiency through 800 ns slots", "text": "The increase in WiFi physical layer transmission speeds from 1~Mbps to 1 Gbps has reduced transmission times for a 1500 byte packet from 12 ms to 12 us. However, WiFi MAC overheads such as channel access and acks have not seen similar reductions and cumulatively contribute about 150 us on average per packet. Thus, the efficiency of WiFi has deteriorated from over 80% at 1 Mbps to under 10% at 1 Gbps.\n In this paper, we propose WiFi-Nano, a system that uses 800 ns slots} to significantly improve WiFi efficiency. Reducing slot time from 9 us to 800 ns makes backoffs efficient, but clear channel assessment can no longer be completed in one slot since preamble detection can now take multiple slots. Instead of waiting for multiple slots for detecting preambles, nodes speculatively transmit preambles as their backoff counters expire, while continuing to detect premables using self-interference cancellation. Upon detection of preambles from other transmitters, nodes simply abort their own preamble transmissions, thereby allowing the earliest transmitter to succeed. Further, receivers speculatively transmit their ack preambles at the end of packet reception, thereby reducing ack overhead. We validate the effectiveness of WiFi-Nano through implementation on an FPGA-based software defined radio platform, and through extensive simulations, demonstrate efficiency gains of up to 100%."}
{"_id": "2e2fadd04f248e38e169dcfee9830ef109b77add", "title": "Who, What, When, and Where: Multi-Dimensional Collaborative Recommendations Using Tensor Factorization on Sparse User-Generated Data", "text": "Given the abundance of online information available to mobile users, particularly tourists and weekend travelers, recommender systems that effectively filter this information and suggest interesting participatory opportunities will become increasingly important. Previous work has explored recommending interesting locations; however, users would also benefit from recommendations for activities in which to participate at those locations along with suitable times and days. Thus, systems that provide collaborative recommendations involving multiple dimensions such as location, activities and time would enhance the overall experience of users.The relationship among these dimensions can be modeled by higher-order matrices called tensors which are then solved by tensor factorization. However, these tensors can be extremely sparse. In this paper, we present a system and an approach for performing multi-dimensional collaborative recommendations for Who (User), What (Activity), When (Time) and Where (Location), using tensor factorization on sparse user-generated data. We formulate an objective function which simultaneously factorizes coupled tensors and matrices constructed from heterogeneous data sources. We evaluate our system and approach on large-scale real world data sets consisting of 588,000 Flickr photos collected from three major metro regions in USA. We compare our approach with several state-of-the-art baselines and demonstrate that it outperforms all of them."}
{"_id": "8f72d67c4ace54471c5c8984914889098ac1cb48", "title": "A System to Verify Network Behavior of Known Cryptographic Clients", "text": "Numerous exploits of client-server protocols and applications involve modifying clients to behave in ways that untampered clients would not, such as crafting malicious packets. In this paper, we develop a system for verifying in near real-time that a cryptographic client\u2019s message sequence is consistent with its known implementation. Moreover, we accomplish this without knowing all of the client-side inputs driving its behavior. Our toolchain for verifying a client\u2019s messages explores multiple candidate execution paths in the client concurrently, an innovation useful for aspects of certain cryptographic protocols such as message padding (which will be permitted in TLS 1.3). In addition, our toolchain includes a novel approach to symbolically executing the client software in multiple passes that defers expensive functions until their inputs can be inferred and concretized. We demonstrate client verification on OpenSSL and BoringSSL to show that, e.g., Heartbleed exploits can be detected without Heartbleed-specific filtering and within seconds of the first malicious packet. On legitimate traffic our verification keeps pace with Gmail-shaped workloads, with a median lag of 0.85s."}
{"_id": "49d4cb2e1788552a04c7f8fec33fbfabb3882995", "title": "Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review", "text": "This paper investigates recent research on active learning for (geo) text and image classification, with an emphasis on methods that combine visual analytics and/or deep learning. Deep learning has attracted substantial attention across many domains of science and practice, because it can find intricate patterns in big data; but successful application of the methods requires a big set of labeled data. Active learning, which has the potential to address the data labeling challenge, has already had success in geospatial applications such as trajectory classification from movement data and (geo) text and image classification. This review is intended to be particularly relevant for extension of these methods to GISience, to support work in domains such as geographic information retrieval from text and image repositories, interpretation of spatial language, and related geo-semantics challenges. Specifically, to provide a structure for leveraging recent advances, we group the relevant work into five categories: active learning, visual analytics, active learning with visual analytics, active deep learning, plus GIScience and Remote Sensing (RS) using active learning and active deep learning. Each category is exemplified by recent influential work. Based on this framing and our systematic review of key research, we then discuss some of the main challenges of integrating active learning with visual analytics and deep learning, and point out research opportunities from technical and application perspectives\u2014for application-based opportunities, with emphasis on those that address big data with geospatial components."}
{"_id": "ecbcccd71b3c7e0cca8ecf0997e9775019b51488", "title": "Managing Employee Compliance with Information Security Policies: The Critical Role of Top Management and Organizational Culture", "text": "We develop an individual behavioral model that integrates the role of top management and organizational culture into the theory of planned behavior in an attempt to better understand how top management can influence security compliance behavior of employees. Using survey data and structural equation modeling, we test hypotheses on the relationships among top management participation, organizational culture, and key determinants of employee compliance with information security policies. We find that top management participation in information security initiatives has significant direct and indirect influences on employees\u2019 attitudes towards, subjective norm of, and perceived behavioral control over compliance with information security policies. We also find that the top management participation strongly influences organizational culture which in turn impacts employees\u2019 attitudes towards and perceived behavioral control over compliance with information security policies. Furthermore, we find that the effects of top management participation and organizational culture on employee behavioral intentions are fully mediated by employee cognitive beliefs about compliance with information security policies. Our findings extend information security research literature by showing how top management can play a proactive role in shaping employee compliance behavior in addition to the deterrence oriented remedies advocated in the extant literature. Our findings also refine the theories about the role of organizational culture in shaping employee compliance behavior. Significant theoretical and practical implications of \u2217This project was partially funded by a grant to the authors from the Defense Information Systems Agency (DISA) of the Department of Defense (DoD). The authors express their thanks to the editor, senior editor, associate editor, and two anonymous reviewers for their detailed and constructive comments and suggestions throughout the review process. \u2020Corresponding author."}
{"_id": "37d0f16fa5940e109d0eb9c9307793aca36e9b3d", "title": "A Social Network-Based Recommender System (SNRS)", "text": "Social influence plays an important role in product marketing. However, it has rarely been considered in traditional recommender systems. In this paper we present a new paradigm of recommender systems which can utilize information in social networks, including user preferences, an item's general acceptance, and influence from social friends. A probabilistic model is developed to make personalized recommendations from such information. We extract data from a real online social network, and our analysis of this large dataset reveals that friends have a tendency to select the same items and give similar ratings. Experimental results on this dataset show that our proposed system not only improves the prediction accuracy of recommender systems but also remedies the data sparsity and cold-start issues inherent in collaborative filtering. Furthermore, we propose to improve the performance of our system by applying semantic filtering of social networks, and validate its improvement via a class project experiment. In this experiment we demonstrate how relevant friends can be selected for inference based on the semantics of friend relationships and finer-grained user ratings. Such technologies can be deployed by most"}
{"_id": "d8e505800b7c5a4381148be77158d215e4820237", "title": "CRCTOL: A semantic-based domain ontology learning system", "text": "Keywords: ontologies < controlled vocabularies < index languages < language types < (language), knowledge acquisition < knowledge engineering < artificial intelligence < computer applications < computer operations < (activities and operations)"}
{"_id": "58c4edc75e92eae076ae1ac5755f2d4017c5a08c", "title": "A study of the uniqueness of source code", "text": "This paper presents the results of the first study of the uniqueness of source code. We define the uniqueness of a unit of source code with respect to the entire body of written software, which we approximate with a corpus of 420 million lines of source code. Our high-level methodology consists of examining a collection of 6,000 software projects and measuring the degree to which each project can be `assembled' solely from portions of this corpus, thus providing a precise measure of `uniqueness' that we call syntactic redundancy. We parameterized our study over a variety of variables, the most important of which being the level of granularity at which we view source code. Our suite of experiments together consumed approximately four months of CPU time, providing quantitative answers to the following questions: at what levels of granularity is software unique, and at a given level of granularity, how unique is software? While we believe these questions to be of intrinsic interest, we discuss possible applications to genetic programming and developer productivity tools."}
{"_id": "598e7ca4b4fa1aea4305b32c9bc715a54ccd1d92", "title": "Parameterized Pattern Matching: Algorithms and Applications", "text": "The problem of finding sections of code that either are identical or are related by the systematic renaming of variables or constants can be modeled in terms of parameterized strings ( p-strings) and parameterized matches ( p-matches). P-strings are strings over two alphabets, one of which represents parameters. Two p-strings are a parameterized match ( p-match) if one p-string is obtained by renaming the parameters of the other by a one-to-one function. In this paper, we investigate parameterized pattern matching via parameterized suffix trees (p-suffix trees). We give two algorithms for constructing p-suffix trees: one (eager) that runs in linear time for fixed alphabets, and another that uses auxiliary data structures and runs in O(n log(n)) time for variable alphabets, where n is input length. We show that using a p-suffix tree for a pattern p-string P, it is possible to search for all p-matches of P within a text p-string T in space linear in |P| and time linear in |T | for fixed alphabets, or O( |T | log(min( |P|, _)) time and O( |P|) space for variable alphabets, where _ is the sum of the alphabet sizes. The simpler p-suffix tree construction algorithm eager has been implemented, and experiments show it to be practical. Since it runs faster than predicted by the above worst-case bound, we reanalyze the algorithm and show that eager runs in time O(min(t |S |+m(t, S) | t>0) log _)), where for an input p-string S, m(t, S) is the number of maximal p-matches of length at least t that occur within S, and _ is the sum of the alphabet sizes. Experiments with the author's program dup (B. Baker, in ``Comput. Sci. Statist.,'' Vol. 24, 1992) for finding all maximal p-matches within a p-string have found m(t, S) to be less than |S | in practice unless t is small. ] 1996 Academic Press, Inc."}
{"_id": "013dd6784e5448d1911acee021704fd4990a8992", "title": "Speech and Language Processing", "text": "CHAPTER 18 Lexicons for Sentiment and Affect Extraction \" [W]e write, not with the fingers, but with the whole person. The nerve which controls the pen winds itself about every fibre of our being, threads the heart, pierces the liver. \" Virginia Woolf, Orlando \" She runs the gamut of emotions from A to B. \" Dorothy Parker, reviewing Hepburn's performance in Little Women In this chapter we turn to tools for interpreting affective meaning, extending our affective study of sentiment analysis in Chapter 7. We use the word 'affective', following the tradition in affective computing (Picard, 1995) to mean emotion, sentiment, personality , mood, and attitudes. Affective meaning is closely related to subjectivity, subjectivity the study of a speaker or writer's evaluations, opinions, emotions, and speculations (Wiebe et al., 1999). How should affective meaning be defined? One influential typology of affec-tive states comes from Scherer (2000), who defines each class of affective states by factors like its cognition realization and time course: Emotion: Relatively brief episode of response to the evaluation of an external or internal event as being of major significance. Mood: Diffuse affect state, most pronounced as change in subjective feeling, of low intensity but relatively long duration, often without apparent cause. Interpersonal stance: Affective stance taken toward another person in a specific interaction, colouring the interpersonal exchange in that situation. Figure 18.1 The Scherer typology of affective states, with descriptions from Scherer (2000)."}
{"_id": "09a6d7e831cf496fb5fb2903415eab4c73235715", "title": "Programming with a Differentiable Forth Interpreter", "text": "Given that in practice training data is scarce for all but a small set of problems, a core question is how to incorporate prior knowledge into a model. In this paper, we consider the case of prior procedural knowledge for neural networks, such as knowing how a program should traverse a sequence, but not what local actions should be performed at each step. To this end, we present an end-to-end differentiable interpreter for the programming language Forth which enables programmers to write program sketches with slots that can be filled with behaviour trained from program input-output data. We can optimise this behaviour directly through gradient descent techniques on user-specified objectives, and also integrate the program into any larger neural computation graph. We show empirically that our interpreter is able to effectively leverage different levels of prior program structure and learn complex behaviours such as sequence sorting and addition. When connected to outputs of an LSTM and trained jointly, our interpreter achieves state-of-the-art accuracy for end-to-end reasoning about quantities expressed in natural language stories."}
{"_id": "0eebcfc2238030c5cfdbf8a7fb80afd8c9f09632", "title": "Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets", "text": "Despite the recent achievements in machine learning, we are still very far from achieving real artificial intelligence. In this paper, we discuss the limitations of standard deep learning approaches and show that some of these limitations can be overcome by learning how to grow the complexity of a model in a structured way. Specifically, we study the simplest sequence prediction problems that are beyond the scope of what is learnable with standard recurrent networks, algorithmically generated sequences which can only be learned by models which have the capacity to count and to memorize sequences. We show that some basic algorithms can be learned from sequential data using a recurrent network associated with a trainable memory."}
{"_id": "55d7e420d31ebd2be1f7503d3a007f750d3bf6aa", "title": "TOLDI: An effective and robust approach for 3D local shape description", "text": "Feature description for the 3D local shape in the presence of noise, varying mesh resolutions, clutter and occlusion is a quite challenging task in 3D computer vision. This paper tackles the problem by proposing a new local reference frame (LRF) together with a novel triple orthogonal local depth images (TOLDI) representation, forming the TOLDI method for local shape description. Compared with previous methods, TOLDI manages to perform efficient, distinctive and robust description for the 3D local surface simultaneously under various feature matching contexts. The proposed LRF differs from many prior ones in its calculation of the z-axis and x-axis, the z-axis is calculated using the normal of the keypoint and the x-axis is computed by aggregating the weighted projection vectors of the radius neighbors. TOLDI feature descriptors are then obtained by concatenating three local depth images (LDI) captured from three orthogonal view planes in the LRF into feature vectors. The performance of our TOLDI approach is rigorously evaluated on several public datasets, which contain three major surface matching scenarios, namely shape retrieval, object recognition and 3D registration. Experimental results and comparisons with the state-of-the-arts validate the effectiveness, robustness, high efficiency, and overall superiority of our method. Our method is also applied to aligning 3D object \u2217Corresponding author Email addresses: jqyang@hust.edu.cn (Jiaqi Yang), hangfanzq@163.com (Qian Zhang), Yang Xiao@hust.edu.cn (Yang Xiao), zgcao@hust.edu.cn (Zhiguo Cao) Preprint submitted to Journal of LTEX Templates November 23, 2016 and indoor scene point clouds obtained by different devices (i.e., LiDAR and Kinect), the accurate outcomes further confirm the effectiveness of our method."}
{"_id": "5e6bb3dcc11af8fc5dbc35ee1e420bd8e153d665", "title": "Synaptic patterning and the timescales of cortical dynamics", "text": "Neocortical circuits, as large heterogeneous recurrent networks, can potentially operate and process signals at multiple timescales, but appear to be differentially tuned to operate within certain temporal receptive windows. The modular and hierarchical organization of this selectivity mirrors anatomical and physiological relations throughout the cortex and is likely determined by the regional electrochemical composition. Being consistently patterned and actively regulated, the expression of molecules involved in synaptic transmission constitutes the most significant source of laminar and regional variability. Due to their complex kinetics and adaptability, synapses form a natural primary candidate underlying this regional temporal selectivity. The ability of cortical networks to reflect the temporal structure of the sensory environment can thus be regulated by evolutionary and experience-dependent processes."}
{"_id": "28345e8835f1f3db8b6613ded9449a82da058752", "title": "Consistent Generative Query Networks", "text": "Stochastic video prediction is usually framed as an extrapolation problem where the goal is to sample a sequence of consecutive future image frames conditioned on a sequence of observed past frames. For the most part, algorithms for this task generate future video frames sequentially in an autoregressive fashion, which is slow and requires the input and output to be consecutive. We introduce a model that overcomes these drawbacks \u2013 it learns to generate a global latent representation from an arbitrary set of frames within a video. This representation can then be used to simultaneously and efficiently sample any number of temporally consistent frames at arbitrary time-points in the video. We apply our model to synthetic video prediction tasks and achieve results that are comparable to state-of-the-art video prediction models. In addition, we demonstrate the flexibility of our model by applying it to 3D scene reconstruction where we condition on location instead of time. To the best of our knowledge, our model is the first to provide flexible and coherent prediction on stochastic video datasets, as well as consistent 3D scene samples. Please check the project website https://bit.ly/2jX7Vyu to view scene reconstructions and videos produced by our model."}
{"_id": "16816b84f9ab8cc1765c4783b8641728e74e2c68", "title": "Single-label and multi-label conceptor classifiers in pre-trained neural networks", "text": "Training large neural network models from scratch is not feasible due to over-fitting on small datasets and too much time consumed on large datasets. To address this, transfer learning, namely utilizing the feature extracting capacity learned by large models, becomes a hot spot in neural network community. At the classifying stage of pre-trained neural network model, either a linear SVM classifier or a Softmax classifier is employed and that is the only trained part of the whole model. In this paper, inspired by transfer learning, we propose a classifier based on conceptors called Multi-label Conceptor Classifier (MCC) to deal with multi-label classification in pre-trained neural networks. When no multi-label sample exists, MCC equates to Fast Conceptor Classifier, a fast single-label classifier proposed in our previous work, thus being applicable to single-label classification. Moreover, by introducing a random search algorithm, we further improve the performance of MCC on single-label datasets Caltech-101 and Caltech-256, where it achieves state-of-the-art results. Also, its evaluations with pre-trained rather than fine-tuning neural networks are investigated on multi-label dataset PASCAL VOC-2007, where it achieves comparable results."}
{"_id": "dd6a7171aff0e5d43dbb06c756340ab8c7d4469e", "title": "A fuzzy logic-based text classification method for social media", "text": "A FUZZY LOGIC-BASED TEXT CLASSIFICATION METHOD FOR SOCIAL MEDIA"}
{"_id": "b790bfc415e0736d9d83e7a7b01cc8cbc3dd99bc", "title": "Exploiting the web of data in model-based recommender systems", "text": "The availability of a huge amount of interconnected data in the so called Web of Data (WoD) paves the way to a new generation of applications able to exploit the information encoded in it. In this paper we present a model-based recommender system leveraging the datasets publicly available in the Linked Open Data (LOD) cloud as DBpedia and LinkedMDB. The proposed approach adapts support vector machine (SVM) to deal with RDF triples. We tested our system and showed its effectiveness by a comparison with different recommender systems techniques -- both content-based and collaborative filtering ones."}
{"_id": "d582aa808a64bdcba911c20314d9cd2803adcf16", "title": "Figurative language processing after traumatic brain injury in adults: A preliminary study", "text": "Figurative speech (e.g., proverb, irony, metaphor, and idiom) has been reported to be particularly sensitive to measurement of abstract thinking in patients who suffer from impaired abstraction and language abilities. Metaphor processing was investigated with fMRI in adults with moderate to severe post-acute traumatic brain injury (TBI) and healthy age-matched controls using a valence-judgment task. We hypothesized that TBI patients would display decreased activation of the left inferior frontal gyrus (LIFG), which is considered central to semantic memory retrieval and abstract thought, in comparison with healthy controls. We also predicted that decreased activation in TBI individuals would correlate with their behavioral response times. A whole-brain analysis across the two participant groups revealed that patients did not strongly engage frontal and temporal regions related to semantic processing for novel metaphor comprehension, whereas control participants exhibited more intensive and concentrated activation within frontal and temporal areas. A region of interest (ROI) analysis verified that the LIFG was underactivated in TBI patients compared to controls across all conditions. TBI patients' impaired abstraction of novel stimuli may stem from reduced prefrontal control of semantic memory as well as disrupted interconnectivity of prefrontal cortex with other regions."}
{"_id": "4bc0895c2f1fd20ff06b68bd573b18cb21d528d3", "title": "High-density 80\u00a0K SNP array is a powerful tool for genotyping G. hirsutum accessions and genome analysis", "text": "High-throughput genotyping platforms play important roles in plant genomic studies. Cotton (Gossypium spp.) is the world\u2019s important natural textile fiber and oil crop. Upland cotton accounts for more than 90% of the world\u2019s cotton production, however, modern upland cotton cultivars have narrow genetic diversity. The amounts of genomic sequencing and re-sequencing data released make it possible to develop a high-quality single nucleotide polymorphism (SNP) array for intraspecific genotyping detection in cotton. Here we report a high-throughput CottonSNP80K array and its utilization in genotyping detection in different cotton accessions. 82,259 SNP markers were selected from the re-sequencing data of 100 cotton cultivars and used to produce the array on the Illumina Infinium platform. 77,774 SNP loci (94.55%) were successfully synthesized on the array. Of them, 77,252 (99.33%) had call rates of >95% in 352 cotton accessions and 59,502 (76.51%) were polymorphic loci. Application tests using 22 cotton accessions with parent/F1 combinations or with similar genetic backgrounds showed that CottonSNP80K array had high genotyping accuracy, good repeatability, and wide applicability. Phylogenetic analysis of 312 cotton cultivars and landraces with wide geographical distribution showed that they could be classified into ten groups, irrelevant of their origins. We found that the different landraces were clustered in different subgroups, indicating that these landraces were major contributors to the development of different breeding populations of modern G. hirsutum cultivars in China. We integrated a total of 54,588 SNPs (MAFs >0.05) associated with 10 salt stress traits into 288\u00a0G. hirsutum accessions for genome-wide association studies (GWAS), and eight significant SNPs associated with three salt stress traits were detected. We developed CottonSNP80K array with high polymorphism to distinguish upland cotton accessions. Diverse application tests indicated that the CottonSNP80K play important roles in germplasm genotyping, variety verification, functional genomics studies, and molecular breeding in cotton."}
{"_id": "c6cc83007b2ae2306a0b56809ba5ae1df548d316", "title": "Comparison of different designs of cylindrical printed quadrifilar Helix antennas", "text": "INMARSAT allocated frequencies are 1.525 GHz to 1.559 GHz as downlink and 1.625 GHz to 1.6605 GHz as uplink GPS allocated frequencies 1.57542 GHz in L1 band. In this paper we design two types of Helix antenna as PQHA to operate in allocated frequencies for INMARSAT and GPS application. We used for their optimum design Electromagnetic simulations and were verified by fabrication and VSWR measurement."}
{"_id": "5a1b10252e92c5d84fd4c99a218fe1eb6e22a2b7", "title": "Normalization at the field level: Fractional counting of citations", "text": "Van Raan et al. (2010) accepted our critique for the case of journal normalization (previously CPP/JCSm); CWTS has in the meantime adapted its procedures. However, a new indicator was proposed for field normalization (previously CPP/FCSm), called the \u201cmean normalized citation score\u201d (MNCS; cf. Lundberg, 2007). In our opinion, this latter change does not sufficiently resolve the problems. Since the new indicator is considered another \u201ccrown indicator,\u201d it seems urgent to warn against and elaborate on these remaining problems. In addition to damaging evaluation processes at the level of individuals and institutions, the \u201ccrown indicator\u201d is also used by CWTS for the \u201cLeiden Rankings,\u201d and flaws in it can therefore misguide policies at national levels. also"}
{"_id": "67d0284c3615baab4a7d3efee70a3edd3ff777ec", "title": "Caesar: A content router for high-speed forwarding on content names", "text": "Today, high-end routers forward hundreds of millions of packets per second by means of longest prefix match on forwarding tables with less than a million IP prefixes. Information-Centric Networking, a novel form of networking where content is requested by its name, poses a new challenge in the design of high-end routers: process at least the same amount of packets, assuming a forwarding table that contains hundreds of millions of content prefixes. In this work we design and preliminarily evaluate Caesar, the first content router that supports name-based forwarding at high speed. Caesar efficiently uses available processing and memory units in a high-end router to support forwarding tables containing a billion content prefixes with unlimited characters."}
{"_id": "5128ba477450db7c0c0257ff9948c1c4ff47a315", "title": "Ultra wideband antennas: past and present", "text": "Since the release by the Federal Communications Commission (FCC) of a bandwidth of 7.5GHz (from 3.1GHz to 10.6GHz) for ultra wideband (UWB) wireless communications, UWB is rapidly advancing as a high data rate wireless communication technology. As is the case in conventional wireless communication systems, an antenna also plays a very crucial role in UWB systems. However, there are more challenges in designing a UWB antenna than a narrow band one. A suitable UWB antenna should be capable of operating over an ultra wide bandwidth as allocated by the FCC. At the same time, satisfactory radiation properties over the entire frequency range are also necessary. This paper focuses on UWB planar printed circuit board (PCB) antenna design and analysis. Studies have been undertaken covering the areas of UWB fundamentals and antenna theory. Extensive investigations were also carried out on the development of UWB antennas from the past to present. First, the planar PCB antenna designs for UWB system is introduced and described. Next, the special design considerations for UWB antennas are summarized. State-of-the-art UWB antennas are also reviewed. Finally, a new concept for the design of a UWB antenna with a bandwidth ranging from 2GHz-11.3GHz is introduced, which satisfies the system requirements for S-DMB, WiBro, WLAN, CMMB and the entire UWB . KeywordsUltra wideband, planar PCB Antenna and bandwidth."}
{"_id": "6c34f033d3ff725e2a920dd5ec494283f7ad1e14", "title": "SfePy - Write Your Own FE Application", "text": "SfePy (Simple Finite Elements in Python) is a framework for solving various kinds of problems (mechanics, physics, biology, ...) described by partial differential equations in two or three space dimensions by the finite element method. The paper illustrates its use in an interactive environment or as a framework for building custom finite-element based solvers."}
{"_id": "27663929454d464e90be4bde0b2eb8d7b88359a2", "title": "A New Framework for Computer Science and Engineering", "text": "T raditionally, computing studies has two partitions\u2014science and engineering\u2014that are separated by a line roughly at the computer architecture level. Above the line is computer science and below is computer engineering. Each of these two disciplines then breaks into subspecialties, such as the decomposition of computer science into theory, systems, and AI. Such an approach succeeds in dividing the field into affinity groups, but it tends to isolate its respective parts and the systems that come from them. This isolation, in turn, makes it difficult to incorporate either an integrative systems-oriented perspective or the interdisciplinary work that is rapidly becoming critical to computing studies\u2019 future.1 A more effective organization for computer science and engineering (CS&E)\u2014and one that helps point the way toward its promising future\u2014requires an intrinsically interdisciplinary framework that combines academic and systems-oriented computing perspectives. My colleagues and I have been developing such a framework, which is already changing key segments of CS&E at the University of Southern California (USC). The framework reaggregates computer science and computer engineering and then repartitions the resulting single field into analysis and synthesis components.2-4 Analysis, the more academic of the two, focuses on the nature of computing and the relationships between computer science and other major scientific domains. The synthesis component builds up computer engineering as a more pragmatic systems-oriented activity through a hierarchy of related layers, each of which covers important interdisciplinary issues. The framework is based on the notion that science is foremost about dissecting and understanding, and engineering is mostly about envisioning and building. True, scientists also envision and build\u2014integrated models, for example\u2014but such activities are directed mostly at dissecting and understanding something. Likewise engineers also dissect and understand things, but primarily to envision and build. Extending these notions to computing studies, computer science is concerned with dissecting and understanding the nature of computing and its relationship to the other sciences, while computer engineering is concerned To realize the compelling future of computing studies, researchers and analysts must rethink the traditional way of partitioning and structuring computer science and engineering. A framework that USC is already partially implementing combines an explicitly interdisciplinary academic focus with a systems-oriented perspective. Paul S. Rosenbloom University of Southern California"}
{"_id": "6408c5c6d204ae998f6f4d9717f1f5ab02a60d0e", "title": "Vision-based automated parking system", "text": "This paper describes an approach to overcome a situation of monitoring and managing a parking area using a vision based automated parking system. With the rapid increase of cars the need to find available parking space in the most efficient manner, to avoid traffic congestion in a parking area, is becoming a necessity in car park management. Current car park management is dependent on either human personnel keeping track of the available car park spaces or a sensor based system that monitors the availability of each car park space or the overall number of available car park spaces. In both situations, the information available was only the total number of car park spaces available and not the actual location available. In addition, the installation and maintenance cost of a sensor based system is dependent on the number of sensors used in a car park. This paper shows a vision based system that is able to detect and indicate the available parking spaces in a car park. The methods utilized to detect available car park spaces were based on coordinates to indicate the regions of interest and a car classifier. This paper shows that the initial work done here has an accuracy that ranges from 90% to 100% for a 4 space car park. The work done indicated that the application of a vision based car park management system would be able to detect and indicate the available car park spaces"}
{"_id": "539612553d3a9f19b6aa0678d110fd55893fec24", "title": "Automatic Classification for Vietnamese News", "text": "This paper proposes an automatic framework to classify Vietnamese news from news sites on the Internet. In this proposed framework, the extracted main content of Vietnamese news is performed automatically by applying the improved performance extraction method from [1]. This information will be classified by using two machine learning methods: Support vector machine and na\u00efve bayesian method. Our experiments implemented with Vietnamese news extracted from some sites showed that the proposed classification framework give acceptable results with a rather high accuracy, leading to applying it to real information systems."}
{"_id": "4bd64d44034cd4efbc4e2f3e161c37b4daf8de64", "title": "Co-change Clusters: Extraction and Application on Assessing Software Modularity", "text": "The traditional modular structure defined by the package hierarchy suffers from the dominant decomposition problem and it is widely accepted that alternative forms of modularization are necessary to increase developer\u2019s productivity. In this paper, we propose an alternative form to understand and assess package modularity based on co-change clusters, which are highly inter-related classes considering cochange relations. We evaluate how co-change clusters relate to the package decomposition of four real-world systems. The results show that the projection of co-change clusters to packages follows different patterns in each system. Therefore, we claim that modular views based on co-change clusters can improve developers\u2019 understanding on how well-modularized are their systems, considering that modularity is the ability to confine changes and evolve components in parallel."}
{"_id": "ecd7c2aeecab995c565a8f764736cc96a4e8b467", "title": "The Sensemaking-Coevolution-Implementation Theory of software design", "text": "Understanding software design practice is critical to understanding modern information systems development. New developments in empirical software engineering, information systems design science and the interdisciplinary design literature combined with recent advances in process theory and testability have created a situation ripe for innovation. Consequently, this paper utilizes these breakthroughs to formulate a process theory of software design practice: SensemakingCoevolution-Implementation Theory explains how complex software systems are created by collocated software development teams in organizations. It posits that an independent agent (design team) creates a software system by alternating between three activities: organizing their perceptions about the context, mutually refining their understandings of the context and design space, and manifesting their understanding of the design space in a technological artifact. This theory development paper defines and illustrates Sensemaking-Coevolution-Implementation Theory, grounds its concepts and relationships in existing literature, conceptually evaluates the theory and situates it in the broader context of information systems development."}
{"_id": "cf73c1373be6deb33a118899268ca2b50bd428a8", "title": "Bayesian Nonparametric Modeling of Categorical Data for Information Fusion and Causal Inference", "text": "This paper presents a nonparametric regression model of categorical time series in the setting of conditional tensor factorization and Bayes network. The underlying algorithms are developed to provide a flexible and parsimonious representation for fusion of correlated information from heterogeneous sources, which can be used to improve the performance of prediction tasks and infer the causal relationship between key variables. The proposed method is first illustrated by numerical simulation and then validated with two real-world datasets: (1) experimental data, collected from a swirl-stabilized lean-premixed laboratory-scale combustor, for detection of thermoacoustic instabilities and (2) publicly available economics data for causal inference-making."}
{"_id": "6996cc89d8fa2e6ac67d537dbc53d596bfae1f2f", "title": "Classification of watermelon leaf diseases using neural network analysis", "text": "This paper mainly discussed the process to classify Anthracnose and Downey Mildew, watermelon leaf diseases using neural network analysis. A few of infected leaf samples were collected and they were captured using a digital camera with specific calibration procedure under controlled environment. The classification on the watermelon's leaf diseases is based on color feature extraction from RGB color model where the RGB pixel color indices have been extracted from the identified Regions of Interest (ROI). The proposed automated classification model involved the process of diseases classification using Statistical Package for the Social Sciences (SPSS) and Neural Network Pattern Recognition Toolbox in MATLAB. Determinations in this work have shown that the type of leaf diseases achieved 75.9% of accuracy based on its RGB mean color component."}
{"_id": "949f1658b2f0ffa0e9dcd14baaf94313716a9cf8", "title": "Human Interaction With Robot Swarms: A Survey", "text": "Recent advances in technology are delivering robots of reduced size and cost. A natural outgrowth of these advances are systems comprised of large numbers of robots that collaborate autonomously in diverse applications. Research on effective autonomous control of such systems, commonly called swarms, has increased dramatically in recent years and received attention from many domains, such as bioinspired robotics and control theory. These kinds of distributed systems present novel challenges for the effective integration of human supervisors, operators, and teammates that are only beginning to be addressed. This paper is the first survey of human-swarm interaction (HSI) and identifies the core concepts needed to design a human-swarm system. We first present the basics of swarm robotics. Then, we introduce HSI from the perspective of a human operator by discussing the cognitive complexity of solving tasks with swarm systems. Next, we introduce the interface between swarm and operator and identify challenges and solutions relating to human-swarm communication, state estimation and visualization, and human control of swarms. For the latter, we develop a taxonomy of control methods that enable operators to control swarms effectively. Finally, we synthesize the results to highlight remaining challenges, unanswered questions, and open problems for HSI, as well as how to address them in future works."}
{"_id": "1b688a2191c5ddd02d7b2bb828255a57d2f50ef4", "title": "A k-mean clustering algorithm for mixed numeric and categorical data", "text": "Use of traditional k-mean type algorithm is limited to numeric data. This paper presents a clustering algorithm based on k-mean paradigm that works well for data with mixed numeric and categorical features. We propose new cost function and distance measure based on co-occurrence of values. The measures also take into account the significance of an attribute towards the clustering process. We present a modified description of cluster center to overcome the numeric data only limitation of k-mean algorithm and provide a better characterization of clusters. The performance of this algorithm has been studied on real world data sets. Comparisons with other clustering algorithms illustrate the effectiveness of this approach. 2007 Elsevier B.V. All rights reserved."}
{"_id": "317d67031e309fdb8df2969667d3c9e5ed7684ba", "title": "Unsupervised Learning with Mixed Numeric and Nominal Data", "text": "\u00d0This paper presents a Similarity-Based Agglomerative Clustering (SBAC) algorithm that works well for data with mixed numeric and nominal features. A similarity measure, proposed by Goodall for biological taxonomy [15], that gives greater weight to uncommon feature value matches in similarity computations and makes no assumptions of the underlying distributions of the feature values, is adopted to define the similarity measure between pairs of objects. An agglomerative algorithm is employed to construct a dendrogram and a simple distinctness heuristic is used to extract a partition of the data. The performance of SBAC has been studied on real and artificially generated data sets. Results demonstrate the effectiveness of this algorithm in unsupervised discovery tasks. Comparisons with other clustering schemes illustrate the superior performance of this approach. Index Terms\u00d0Agglomerative clustering, conceptual clustering, feature weighting, interpretation, knowledge discovery, mixed numeric and nominal data, similarity measures, 2 aggregation."}
{"_id": "03fbf51605f68c730a6b37d40d805875a078e307", "title": "Chameleon: Hierarchical Clustering Using Dynamic Modeling", "text": "68 Computer C lustering is a discovery process in data mining. 1 It groups a set of data in a way that maximizes the similarity within clusters and minimizes the similarity between two different clusters. 1,2 These discovered clusters can help explain the characteristics of the underlying data distribution and serve as the foundation for other data mining and analysis techniques. Clustering is useful in characterizing customer groups based on purchasing patterns, categorizing Web documents, 3 grouping genes and proteins that have similar func-tionality, 4 grouping spatial locations prone to earthquakes based on seismological data, and so on. Most existing clustering algorithms find clusters that fit some static model. Although effective in some cases, these algorithms can break down\u2014that is, cluster the data incorrectly\u2014if the user doesn't select appropriate static-model parameters. Or sometimes the model cannot adequately capture the clusters' characteristics. Most of these algorithms break down when the data contains clusters of diverse shapes, densities , and sizes. Existing algorithms use a static model of the clusters and do not use information about the nature of individual clusters as they are merged. Furthermore, one set of schemes (the CURE algorithm and related schemes) ignores the information about the aggregate interconnectivity of items in two clusters. The other set of schemes (the Rock algorithm, group averaging method, and related schemes) ignores information about the closeness of two clusters as defined by the similarity of the closest items across two clusters. (For more information, see the \" Limitations of Traditional Clustering Algorithms \" sidebar.) By only considering either interconnectivity or close-ness, these algorithms can easily select and merge the wrong pair of clusters. For instance, an algorithm that focuses only on the closeness of two clusters will incorrectly merge the clusters in Figure 1a over those in Figure 1b. Similarly, an algorithm that focuses only on interconnectivity will, in Figure 2, incorrectly merge the dark-blue with the red cluster rather than the green one. Here, we assume that the aggregate interconnec-tivity between the items in the dark-blue and red clusters is greater than that of the dark-blue and green clusters. However, the border points of the dark-blue cluster are much closer to those of the green cluster than those of the red cluster. Chameleon is a new agglomerative hierarchical clustering algorithm that overcomes the limitations of existing clustering algorithms. Figure 3 (on page 70) provides an overview of the overall approach \u2026"}
{"_id": "1ad6bb9e58fdafab084f2944789e2f5c25631282", "title": "Cortical cores in network dynamics", "text": "Spontaneous brain activity at rest is spatially and temporally organized in networks of cortical and subcortical regions specialized for different functional domains. Even though brain networks were first studied individually through functional Magnetic Resonance Imaging, more recent studies focused on their dynamic 'integration'. Integration depends on two fundamental properties: the structural topology of brain networks and the dynamics of functional connectivity. In this scenario, cortical hub regions, that are central regions highly connected with other areas of the brain, play a fundamental role in serving as way stations for network traffic. In this review, we focus on the functional organization of a set of hub areas that we define as the 'dynamic core'. In the resting state, these regions dynamically interact with other regions of the brain linking multiple networks. First, we introduce and compare the statistical measures used for detecting hubs. Second, we discuss their identification based on different methods (functional Magnetic Resonance Imaging, Diffusion Weighted Imaging, Electro/Magneto Encephalography). Third, we show that the degree of interaction between these core regions and the rest of the brain varies over time, indicating that their centrality is not stationary. Moreover, alternating periods of strong and weak centrality of the core relate to periods of strong and weak global efficiency in the brain. These results indicate that information processing in the brain is not stable, but fluctuates and its temporal and spectral properties are discussed. In particular, the hypothesis of 'pulsed' information processing, discovered in the slow temporal scale, is explored for signals at higher temporal resolution."}
{"_id": "c1a016b327e4a60e98df4219ff993609eebf2dad", "title": "Machine Learning Approach for Intrusion Detection on Cloud Virtual Machines", "text": "Development of the cloud computing in recent years is increasing rapidly and gained great success, its security issues have got more and more attention. Many challenges in cloud computation increase the threat of data and service availability. There is need of many security services in order to improve cloud security for users as well as providers. In this paper, we propose a Anomaly Intrusion Detection System using machine learning approach for virtual machines on cloud computing. Our proposal is feature selection over events from Virtual Machine Monitor to detect anomaly in parallel to training the system so it will learn new threats and update the model. The experiment has been carried out on NSL-KDD'99 datasets using Na\u00efve Bayes Tree (NB Tree) Classifier and hybrid approach of NB Tree and Random Forest."}
{"_id": "207b4e72f9270727b4efc603c6ad3d7ba80b3093", "title": "An Ontology for Secure Socio-Technical Systems", "text": "Security is often compromised by exploiting vulnerabilities in the interface between the organization and the information systems that support it. This reveals the necessity of modeling and analyzing information systems together with the organizational setting where they will operate. In this chapter we address this problem by presenting a modeling language tailored to analyze the problem of security at an organizational level. This language proposes a set of concepts founded on the notions of permission, delegation, and trust. The chapter also presents a semantics for these concepts, based on Datalog. A case study from the bank domain is employed to illustrate the proposed language."}
{"_id": "79f4461352710ede162e20f74b74355ca0c00395", "title": "Evaluating operating system vulnerability to memory errors", "text": "Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically represents a small footprint of a compute node's physical memory, recent studies show more memory errors in this region of memory than the remainder of the system. In this paper, we investigate the soft error vulnerability of two operating systems used in current and future high-performance computing systems: Kitten, the lightweight kernel developed at Sandia National Laboratories, and CLE, a high-performance Linux-based operating system developed by Cray. For each of these platforms, we outline major structures and subsystems that are vulnerable to soft errors and describe methods that could be used to reconstruct damaged state. Our results show the Kitten lightweight operating system may be an easier target to harden against memory errors due to its smaller memory footprint, largely deterministic state, and simpler system structure."}
{"_id": "12778709c5cd1a7783da415028dc7261083776ef", "title": "Detection of Human, Legitimate Bot, and Malicious Bot in Online Social Networks Based on Wavelets", "text": "Social interactions take place in environments that influence people\u2019s behaviours and perceptions. Nowadays, the users of Online Social Network (OSN) generate a massive amount of content based on social interactions. However, OSNs wide popularity and ease of access created a perfect scenario to practice malicious activities, compromising their reliability. To detect automatic information broadcast in OSN, we developed a wavelet-based model that classifies users as being human, legitimate robot, or malicious robot, as a result of spectral patterns obtained from users\u2019 textual content. We create the feature vector from the Discrete Wavelet Transform along with a weighting scheme called Lexicon-based Coefficient Attenuation. In particular, we induce a classification model using the Random Forest algorithm over two real Twitter datasets. The corresponding results show the developed model achieved an average accuracy of 94.47% considering two different scenarios: single theme and miscellaneous one."}
{"_id": "0694956de0c6cd620a81499e94022743a35f87b5", "title": "Postmarketing Drug Safety Surveillance Using Publicly Available Health-Consumer-Contributed Content in Social Media", "text": "Postmarketing drug safety surveillance is important because many potential adverse drug reactions cannot be identified in the premarketing review process. It is reported that about 5% of hospital admissions are attributed to adverse drug reactions and many deaths are eventually caused, which is a serious concern in public health. Currently, drug safety detection relies heavily on voluntarily reporting system, electronic health records, or relevant databases. There is often a time delay before the reports are filed and only a small portion of adverse drug reactions experienced by health consumers are reported. Given the popularity of social media, many health social media sites are now available for health consumers to discuss any health-related issues, including adverse drug reactions they encounter. There is a large volume of health-consumer-contributed content available, but little effort has been made to harness this information for postmarketing drug safety surveillance to supplement the traditional approach. In this work, we propose the association rule mining approach to identify the association between a drug and an adverse drug reaction. We use the alerts posted by Food and Drug Administration as the gold standard to evaluate the effectiveness of our approach. The result shows that the performance of harnessing health-related social media content to detect adverse drug reaction is good and promising."}
{"_id": "f89644430bbb26ac6054b22a425058708806359a", "title": "Single inductor three-level boost bridgeless PFC rectifier with nature voltage clamp", "text": "A single inductor three-level boost bridgeless PFC rectifier with nature voltage clamp is proposed. In the proposed converter, the operation principle is quite different from other boost bridgeless three-level converters. The charging stage is presented in the low side while the discharging stage in the high side and vice versa. As a result, nature voltage clamp is achieved. And the slow diodes are \u2018shared\u2019. They operate not only as rectifying diodes, but also as voltage clamping diodes. Besides, only single inductor is required and the device utilization factor is high. Besides, the common-mode(CM) noise is rather low. First, the operational principle are analyzed in detail. Then the performance evaluation and design consideration are presented. Finally, a 1kW experimental prototype is built. The result shows that the efficiency is above 94% at full load under 90V and the maximum efficiency is 98.4%."}
{"_id": "6503cc8fd7d29e6d875bf9dc1a06c2cffcf03f9d", "title": "WhatsApp network forensics: Decrypting and understanding the WhatsApp call signaling messages", "text": "WhatsApp is a widely adopted mobile messaging application with over 800 million users. Recently, a calling feature was added to the application and no comprehensive digital forensic analysis has been performed with regards to this feature at the time of writing this paper. In this work, we describe how we were able to decrypt the network traffic and obtain forensic artifacts that relate to this new calling feature which included the: a) WhatsApp phone numbers, b) WhatsApp server IPs, c) WhatsApp audio codec (Opus), d) WhatsApp call duration, and e) WhatsApp's call termination. We explain the methods and tools used to decrypt the traffic as well as thoroughly elaborate on our findings with respect to the WhatsApp signaling messages. Furthermore, we also provide the community with a tool that helps in the visualization of the WhatsApp protocol messages. \u00a9 2015 Elsevier Ltd. All rights reserved."}
{"_id": "b12e3a71c8798961d061fc64f265d21b811afd39", "title": "A survey of socially interactive robots", "text": "This paper reviews \u201csocially interactive robots\u201d: robots for which social human\u2013robot interaction is important. We begin by discussing the context for socially interactive robots, emphasizing the relationship to other research fields and the different forms of \u201csocial robots\u201d. We then present a taxonomy of design methods and system components used to build socially interactive robots. Finally, we describe the impact of these robots on humans and discuss open issues. An expanded version of this paper, which contains a survey and taxonomy of current applications, is available as a technical report [T. Fong, I. Nourbakhsh, K. Dautenhahn, A survey of socially interactive robots: concepts, design and applications, Technical Report No. CMU-RI-TR-02-29, Robotics Institute, Carnegie Mellon University, 2002]. \u00a9 2003 Elsevier Science B.V. All rights reserved."}
{"_id": "2d294c58b2afb529b26c49d3c92293431f5f98d0", "title": "Maximum Margin Projection Subspace Learning for Visual Data Analysis", "text": "Visual pattern recognition from images often involves dimensionality reduction as a key step to discover a lower dimensional image data representation and obtain a more manageable problem. Contrary to what is commonly practiced today in various recognition applications where dimensionality reduction and classification are independently treated, we propose a novel dimensionality reduction method appropriately combined with a classification algorithm. The proposed method called maximum margin projection pursuit, aims to identify a low dimensional projection subspace, where samples form classes that are better discriminated, i.e., are separated with maximum margin. The proposed method is an iterative alternate optimization algorithm that computes the maximum margin projections exploiting the separating hyperplanes obtained from training a support vector machine classifier in the identified low dimensional space. Experimental results on both artificial data, as well as, on popular databases for facial expression, face and object recognition verified the superiority of the proposed method against various state-of-the-art dimensionality reduction algorithms."}
{"_id": "df40fd12861622bbe4059a6a61ce94d37ac1a884", "title": "From Motion to Photons in 80 Microseconds: Towards Minimal Latency for Virtual and Augmented Reality", "text": "We describe an augmented reality, optical see-through display based on a DMD chip with an extremely fast (16 kHz) binary update rate. We combine the techniques of post-rendering 2-D offsets and just-in-time tracking updates with a novel modulation technique for turning binary pixels into perceived gray scale. These processing elements, implemented in an FPGA, are physically mounted along with the optical display elements in a head tracked rig through which users view synthetic imagery superimposed on their real environment. The combination of mechanical tracking at near-zero latency with reconfigurable display processing has given us a measured average of 80 \u03bcs of end-to-end latency (from head motion to change in photons from the display) and also a versatile test platform for extremely-low-latency display systems. We have used it to examine the trade-offs between image quality and cost (i.e. power and logical complexity) and have found that quality can be maintained with a fairly simple display modulation scheme."}
{"_id": "90d7af1507b05218e64bd03210bb6c190addc6aa", "title": "A comprehensive study of hybrid neural network hidden Markov model for offline handwritten Chinese text recognition", "text": "This paper proposes an effective segmentation-free approach using a hybrid neural network hidden Markov model (NN-HMM) for offline handwritten Chinese text recognition (HCTR). In the general Bayesian framework, the handwritten Chinese text line is sequentially modeled by HMMs with each representing one character class, while the NN-based classifier is adopted to calculate the posterior probability of all HMM states. The key issues in feature extraction, character modeling, and language modeling are comprehensively investigated to show the effectiveness of NN-HMM framework for offline HCTR. First, a conventional deep neural network (DNN) architecture is studied with a well-designed feature extractor. As for the training procedure, the label refinement using forced alignment and the sequence training can yield significant gains on top of the frame-level cross-entropy criterion. Second, a deep convolutional neural network (DCNN) with automatically learned discriminative features demonstrates its superiority to DNN in the HMM framework. Moreover, to solve the challenging problem of distinguishing quite confusing classes due to the large vocabulary of Chinese characters, NN-based classifier should output 19900 HMM states as the classification units via a high-resolution modeling within each character. On the ICDAR 2013 competition task of CASIA-HWDB database, DNN-HMM yields a promising character error rate (CER) of 5.24% by making a good trade-off between the computational complexity and recognition accuracy. To the best of our knowledge, DCNN-HMM can achieve a best published CER of 3.53%."}
{"_id": "340e8db8d35bc3343727ac51c905f9dbd6c92c18", "title": "Mobile robot programming using natural language", "text": "How will naive users program domestic robots? This paper describes the design of a practical system that uses natural language to teach a vision-based robot how to navigate in a miniature town. To enable unconstrained speech the robot is provided with a set of primitive procedures derived from a corpus of route instructions. When the user refers to a route that is not known to the robot, the system will learn it by combining primitives as instructed by the user. This paper describes the components of the Instruction Based Learning architecture and discusses issues of knowledge representation, the selection of primitives and the conversion of natural language into robot-understandable procedures."}
{"_id": "2fb30f6a482384fdf5d37cdc029f679522e6a381", "title": "Identifying Opposing Views in Online Discourse", "text": "Discourse in online media often takes the form of siloed discussions where unpopular views tend to get drowned out by the majority. This is especially true in platforms such as Reddit, where only the most popular opinions get visibility. In this work, we propose that this problem may be solved by building tools that surface opposing views and I advance a means for automatically identifying disagreeing views on Reddit."}
{"_id": "1fdf6e7f798cb343d5b6ba7f31b356436a651443", "title": "An Active Pixel Sensor Fabricated Using CMOS / CCD Process Technology", "text": "Paul P. K. Lee, Russell C. Gee*, R. Michael Guidash, T-H. Lee, and Eric R. Fossum* Microelectronics Technology Division, Eastman Kodak Company 1669 Lake Avenue, Rochester, New York 14650-2008 Tel: (716) 477-2869, Fax: (716) 477-4947, Internet: ppklee@Kodak.COM *Jet Propulsion Laboratory, California Institute of Technology 4800 Oak Grove Drive, Pasadena, CA 91109 Abstract This paper describes the integration of the active pixel sensor (APS) architecture normally fabricated in conventional complementary metal oxide semiconductor (CMOS) technology with a pinned photodiode (PPD) device using a mixed process technology. This new technology allows mix and match of CMOS and high performance charged coupled device (CCD) modules. The PPD [1] becomes the photoactive element in an XYaddressable area array with each pixel containing active devices for the transfer, readout and reset functions. It is a standard photo-sensitive element, available in a high performance true two-phase CCD technology developed previously for CCD-based image sensors [2]. An n-well 2-\u03bcm CMOS technology was combined with the CCD process to provide the best features from both. A Design of Experiment approach was used with Technology Computer Aided Design (TCAD) tools to develop and optimize the new mixed process technology without sacrificing any CCD performance while minimizing impact to the CMOS device characteristics [3]. By replacing polysilicon photocapacitor or photogate in conventional APS with the pinned photodiode, deficiencies in poor blue response and high dark current are minimized [4]."}
{"_id": "be0e9375941150c319489e66780782d14d91e79f", "title": "Developing criteria for establishing interrater reliability of specific items: applications to assessment of adaptive behavior.", "text": "A set of criteria based upon biostatistical considerations for determining the interrater reliability of specific adaptive behavior items in a given setting was presented. The advantages and limitations of extant statistical assessment procedures were discussed. Also, a set of guidelines for differentiating type of adaptive behavior that are statistically reliable from those that are reliable in a clinical or practical sense was delineated. Data sets were presented throughout in order to illustrate the advantages of recommended statistical procedures over other available ones."}
{"_id": "1c1e724bbb718c12ad381d192941dc738c5f5b1b", "title": "COID: A cluster\u2013outlier iterative detection approach to multi-dimensional data analysis", "text": "Nowadays, most data mining algorithms focus on clustering methods alone. Also, there are a lot of approaches designed for outlier detection. We observe that, in many situations, clusters and outliers are concepts whose meanings are inseparable to each other, especially for those data sets with noise. Thus, it is necessary to treat both clusters and outliers as concepts of the same importance in data analysis. In this paper, we present our continuous work on the cluster\u2013outlier iterative detection algorithm (Shi in SubCOID: exploring cluster-outlier iterative detection approach to multi-dimensional data analysis in subspace. Auburn, pp. 132\u2013135, 2008; Shi and Zhang in Towards exploring interactive relationship between clusters and outliers in multi-dimensional data analysis. IEEE Computer Society. Tokyo, pp. 518\u2013519, 2005) to detect the clusters and outliers in another perspective for noisy data sets. In this algorithm, clusters are detected and adjusted according to the intra-relationship within clusters and the inter-relationship between clusters and outliers, and vice versa. The adjustment and modification of the clusters and outliers are performed iteratively until a certain termination condition is reached. This data processing algorithm can be applied in many fields, such as pattern recognition, data clustering, and signal processing. Experimental results demonstrate the advantages of our approach."}
{"_id": "0f9023736824d996414bac85a1388225d3c92283", "title": "Contextualized Sarcasm Detection on Twitter", "text": "Sarcasm requires some shared knowledge between speaker and audience; it is a profoundly contextual phenomenon. Most computational approaches to sarcasm detection, however, treat it as a purely linguistic matter, using information such as lexical cues and their corresponding sentiment as predictive features. We show that by including extra-linguistic information from the context of an utterance on Twitter \u2013 such as properties of the author, the audience and the immediate communicative environment \u2013 we are able to achieve gains in accuracy compared to purely linguistic features in the detection of this complex phenomenon, while also shedding light on features of interpersonal interaction that enable sarcasm in conversation."}
{"_id": "f1d0d122ee12c4fbc24557943dd7692b76ba1778", "title": "Metamaterial advances for radar and communications", "text": "Metamaterial antennas have progressed considerably in the last few years. Kymeta expects to have its Ku-band metamaterial antenna commercially available in 2017. It is intended for use on vessels anywhere in the world for high speed internet using satellites. Echodyne and PARC, a Xerox Co., have developed a metamaterial array for radar and cell phone usage. The Army Research Laboratory in Adelphi MD has funded the development of a metamaterial 250\u2013505 MHZ antenna with a \u03bb/20 thickness. Complementing this a conventional tightly coupled dipole antenna (TCDA) has been developed which provides a 20\u22361 bandwidth with a \u03bb/40 thickness. Target cloaking has been demonstrated at microwaves using metamaterials. With cloaking the electromagnetic wave signal transmitted by a radar goes around the target making it invisible. It has also been demonstrated at L-band with a 50% bandwidth using fractals. Stealthing by absorption using a thin flexible and stretchable metamaterials sheet has been demonstrated to give 6 dB absorption over the band from 8\u201310 GHz, larger absorption over a narrower band. Using fractals material < 1 mm thick simulation gave a 10 dB reduction in backscatter from 2 to 20 GHz, \u223c20 dB reduction from 9 to 15 GHz. Good results were obtained for large incidence angles and both polarizations. Using metamaterial one can focus 6\u00d7 beyond diffraction limit at 0.38 \u00b5m (Moore's Law marches on); 40\u00d7 diffraction limit, \u03bb/80, at 375 MHz. Has been used in cell phones to provide antennas 5\u00d7 smaller (1/10th \u03bb) having 700 MHz\u20132.7 GHz bandwidth. Provided isolation between antennas having 2.5 cm separation equivalent to lm separation. Used for phased array wide angle impedance matching (WAIM). It has been found that n-doped graphene has a negative index of refraction."}
{"_id": "04e753d012b5be333e81bf856371ac77c0a46923", "title": "Fast and flexible word searching on compressed text", "text": "We present a fast compression technique for natural language texts. The novelties are that (1) decompression of arbitrary portions of the text can be done very efficiently, (2) exact search for words and phrases can be done on the compressed text directly, using any known sequential pattern-matching algorithm, and (3) word-based approximate and extended search can also be done efficiently without any decoding. The compression scheme uses a semistatic word-based model and a Huffman code where the coding alphabet is byte-oriented rather than bit-oriented. We compress typical English texts to about 30% of their original size, against 40% and 35% for Compress and Gzip, respectively. Compression time is close to that of Compress and approximately half of the time of Gzip, and decompression time is lower than that of Gzip and one third of that of Compress. We present three algorithms to search the compressed text. They allow a large number of variations over the basic word and phrase search capability, such as sets of characters, arbitrary regular expressions, and approximate matching. Separators and stopwords can be discarded at search time without significantly increasing the cost. When searching for simple words, the experiments show that running our algorithms on a compressed text is twice as fast as running the best existing software on the uncompressed version of the same text. When searching complex or approximate patterns, our algorithms are up to 8 times faster than the search on uncompressed text. We also discuss the impact of our technique in inverted files pointing to logical blocks and argue for the possibility of keeping the text compressed all the time, decompressing only for displaying purposes."}
{"_id": "6e4aa518dffbfc83f1b480b5faf4803fd915f20a", "title": "Content-Aware Partial Compression for Big Textual Data Analysis Acceleration", "text": "Analysing text-based data has become increasingly important due to the importance of text from sources such as social media, web contents, web searches. The growing volume of such data creates challenges for data analysis including efficient and scalable algorithm, effective computing platforms and energy efficiency. Compression is a standard method for reducing data size but current standard compression algorithms are destructive to the organisation of data contents. This work introduces Content-aware, Partial Compression (CaPC) for text using a dictionary-based approach. We simply use shorter codes to replace strings while maintaining the original data format and structure, so that the compressed contents can be directly consumed by analytic platforms. We evaluate our approach with a set of real-world datasets and several classical MapReduce jobs on Hadoop. We also provide a supplementary utility library for Hadoop, hence, existing MapReduce programs can be used directly on the compressed datasets with little or no modification. In evaluation, we demonstrate that CaPC works well with a wide variety of data analysis scenarios, experimental results show ~30% average data size reduction, and up to ~32% performance increase on some I/O intensive jobs on an in-house Hadoop cluster. While the gains may seem modest, the point is that these gains are 'for free' and act as supplementary to all other optimizations."}
{"_id": "07422b9e0a868e3d09bc48c3ac98de4f4932ece7", "title": "An Unsupervised Aspect-Sentiment Model for Online Reviews", "text": "With the increase in popularity of online review sites comes a corresponding need for tools capable of extracting the information most important to the user from the plain text data. Due to the diversity in products and services being reviewed, supervised methods are often not practical. We present an unsupervised system for extracting aspects and determining sentiment in review text. The method is simple and flexible with regard to domain and language, and takes into account the influence of aspect on sentiment polarity, an issue largely ignored in previous literature. We demonstrate its effectiveness on both component tasks, where it achieves similar results to more complex semi-supervised methods that are restricted by their reliance on manual annotation and extensive knowledge sources."}
{"_id": "1ac7018b0935cdb5bf52b34d738b110e2ef0416a", "title": "Learning Attitudes and Attributes from Multi-aspect Reviews", "text": "Most online reviews consist of plain-text feedback together with a single numeric score. However, understanding the multiple `aspects' that contribute to users' ratings may help us to better understand their individual preferences. For example, a user's impression of an audio book presumably depends on aspects such as the story and the narrator, and knowing their opinions on these aspects may help us to recommend better products. In this paper, we build models for rating systems in which such dimensions are explicit, in the sense that users leave separate ratings for each aspect of a product. By introducing new corpora consisting of five million reviews, rated with between three and six aspects, we evaluate our models on three prediction tasks: First, we uncover which parts of a review discuss which of the rated aspects. Second, we summarize reviews by finding the sentences that best explain a user's rating. Finally, since aspect ratings are optional in many of the datasets we consider, we recover ratings that are missing from a user's evaluation. Our model matches state-of-the-art approaches on existing small-scale datasets, while scaling to the real-world datasets we introduce. Moreover, our model is able to `disentangle' content and sentiment words: we automatically learn content words that are indicative of a particular aspect as well as the aspect-specific sentiment words that are indicative of a particular rating."}
{"_id": "964a3497fb5167eb4856ef870d46f5f57c4d01a7", "title": "Workload characterization of JVM languages", "text": "Being developed with a single language in mind, namely Java, the Java Virtual Machine (JVM) nowadays is targeted by numerous programming languages. Automatic memory management, Just-In-Time (JIT) compilation, and adaptive optimizations provided by the JVM make it an attractive target for different language implementations. Even though being targeted by so many languages, the JVM has been tuned with respect to characteristics of Java programs only \u2013 different heuristics for the garbage collector or compiler optimizations are focused more on Java programs. In this dissertation, we aim at contributing to the understanding of the workloads imposed on the JVM by both dynamically-typed and statically-typed JVM languages. We introduce a new set of dynamic metrics and an easy-to-use toolchain for collecting the latter. We apply our toolchain to applications written in six JVM languages \u2013 Java, Scala, Clojure, Jython, JRuby, and JavaScript. We identify differences and commonalities between the examined languages and discuss their implications. Moreover, we have a close look at one of the most efficient compiler optimizations \u2013 method inlining. We present the decision tree of the HotSpot JVM\u2019s JIT compiler and analyze how well the JVM performs in inlining the workloads written in different JVM languages."}
{"_id": "55e45491ab76bdced323921b30b0d8b817267cbb", "title": "Elderly people at home: technological help in everyday activities", "text": "The aim of this paper is to understand to what extent elderly people are likely to accept a technological aid in performing everyday activities. In this perspective, the present research focused on elderly people's strategies in performing everyday activities at home, in order to understand in what domestic domains technology can be considered an acceptable help. We administered a questionnaire focusing on preferred strategies in carrying out common domestic tasks, and on attitudes towards new technologies and home modification to a sample of 123 elderly people living in Rome. Results show that the adoption of a strategy, including the introduction of technological devices, is highly problem-specific, while personal factors are relevant only in particular situations. With increasing age, people are more inclined to give up, and higher educational levels correspond to more frequent technological solutions."}
{"_id": "d874fa8ad1f95a036ba6c27dd4aa872951a6963d", "title": "Remote Sensing and Geospatial Technological Applications for Site-specific Management of Fruit and Nut Crops: A Review", "text": "Site-specific crop management (SSCM) is one facet of precision agriculture which is helping increase production with minimal input. It has enhanced the cost-benefit scenario in crop production. Even though the SSCM is very widely used in row crop agriculture like corn, wheat, rice, soybean, etc. it has very little application in cash crops like fruit and nut. The main goal of this review paper was to conduct a comprehensive review of advanced technologies, including geospatial technologies, used in site-specific management of fruit and nut crops. The review explores various remote sensing data from different platforms like satellite, LIDAR, aerial, and field imaging. The study analyzes the use of satellite sensors, such as Quickbird, Landsat, SPOT, and IRS imagery as well as hyperspectral narrow-band remote sensing data in study of fruit and nut crops in blueberry, citrus, peach, apple, etc. The study also explores other geospatial technologies such as GPS, GIS spatial modeling, advanced image processing techniques, and information technology for suitability study, orchard delineation, and classification accuracy assessment. The study also provides an example of a geospatial model developed in ArcGIS ModelBuilder to automate the blueberry production suitability analysis. The GIS spatial model is developed using various crop characteristics such as chilling hours, soil permeability, drainage, and pH, and land cover to determine the best sites for growing OPEN ACCESS Remote Sensing 2010, 2 1974 blueberry in Georgia, U.S. The study also provides a list of spectral reflectance curves developed for some fruit and nut crops, blueberry, crowberry, redblush citrus, orange, prickly pear, and peach. The study also explains these curves in detail to help researchers choose the image platform, sensor, and spectrum wavelength for various fruit and nut crops SSCM."}
{"_id": "428bf3913ebf76cf107aff7b3502d4fb86bc5921", "title": "A biomimetic soft fingertip applicable to haptic feedback systems for texture identification", "text": "Humans recognize textures using the tactile data obtained from the human somatosensory system. Recognition of textures allows humans discriminate objects and materials. Moreover, by understanding the object's or material's texture, the human intuitively estimates roughness and the friction properties of the object or the material. This ability is necessary for object manipulative tasks. Likewise artificial haptic systems too, should have the ability to encode textures and feedback those data to haptic applications such as haptic displays. In this paper a biomimetic soft fingertip sensor that can be used in above haptic systems is introduced. The fingertip has the ability to detect force and vibration modalities. We propose three features calculated from the covariance signal of two adjacent accelerometers in the fingertip to use in texture identification. The covariance signal is transformed using Discrete Wavelet Transform (DWT) and the three features mentioned below are calculated. The mean and variance of the approximate signal, and the energies of the detailed signal are chosen as features. Then, the proposed features were validate by using those in an Artificial Neural Network (ANN) to classify seven wood samples. The results showed a 65% success rate in classifying wood samples and that the proposed features are acceptable to encode textures."}
{"_id": "be0b922ec9625a5908032bde6ae47fa6c4216a38", "title": "Maximum Margin Reward Networks for Learning from Explicit and Implicit Supervision", "text": "Neural networks have achieved state-ofthe-art performance on several structuredoutput prediction tasks, trained in a fully supervised fashion. However, annotated examples in structured domains are often costly to obtain, which thus limits the applications of neural networks. In this work, we propose Maximum Margin Reward Networks, a neural networkbased framework that aims to learn from both explicit (full structures) and implicit supervision signals (delayed feedback on the correctness of the predicted structure). On named entity recognition and semantic parsing, our model outperforms previous systems on the benchmark datasets, CoNLL-2003 and WebQuestionsSP."}
{"_id": "48674565d0737bb422406d757c02042759b635ae", "title": "A benchmark for semantic image segmentation", "text": "Though quite a few image segmentation benchmark datasets have been constructed, there is no suitable benchmark for semantic image segmentation. In this paper, we construct a benchmark for such a purpose, where the ground-truths are generated by leveraging the existing fine granular ground-truths in Berkeley Segmentation Dataset (BSD) as well as using an interactive segmentation tool for new images. We also propose a percept-tree-based region merging strategy for dynamically adapting the ground-truth for evaluating test segmentation. Moreover, we propose a new evaluation metric that is easy to understand and compute, and does not require boundary matching. Experimental results show that, compared with BSD, the generated ground-truth dataset is more suitable for evaluating semantic image segmentation, and the conducted user study demonstrates that the proposed evaluation metric matches user ranking very well."}
{"_id": "389266c1cb73a7918d599be659d91e7fb7f431df", "title": "Multi-Feature Based Emotion Recognition for Video Clips", "text": "In this paper, we present our latest progress in Emotion Recognition techniques, which combines acoustic features and facial features in both non-temporal and temporal mode. This paper presents the details of our techniques used in the Audio-Video Emotion Recognition subtask in the 2018 Emotion Recognition in the Wild (EmotiW) Challenge. After the multimodal results fusion, our final accuracy in Acted Facial Expression in Wild (AFEW) test dataset achieves 61.87%, which is 1.53% higher than the best results last year. Such improvements prove the effectiveness of our methods."}
{"_id": "615016f45e89baea5c0aa3307efc41b458afb8af", "title": "Affective state detection via facial expression analysis within a human-computer interaction context", "text": "The advancement in technology indicates that there is an opportunity to enhance Human-Computer Interaction by way of affective state recognition. Affective state recognition is typically based on passive stimuli such as watching video clips, which does not reflect genuine interaction. This paper presents a study on affective state recognition using active stimuli, i.e. facial expressions of users when they attempt computerised tasks, particularly across typical usage of computer systems. A data collection experiment is presented for acquiring data from normal users whilst they interact with software, attempting to complete a set of predefined tasks. In addition, a hierarchical machine learning approach is presented for facial expression-based affective state recognition, which employs an Euclidean distance-based feature representation, conjointly with a customised encoding for users\u2019 self-reported affective states. Consequently, the aim is to find the potential relationship between the facial expressions, as defined by Paul Ekman, and the self-reported emotional states specified by users using Russells Circumplex model, in relation to the actual feelings and affective states. The main findings of this study suggest that facial expressions cannot precisely reveal the actual feelings of users whilst interacting with common computerised tasks. Moreover, during active interaction tasks more variation occurs within the facial expressions of participants than occurs within passive interaction."}
{"_id": "8196de6e8fd4e941bf0598cf1abb62181d845f3c", "title": "Cultural difference and adaptation of communication styles in computer-mediated group brainstorming", "text": "Supporting creativity via collaborative group brainstorming is a prevalent practice in organizations. Today's technology makes it easy for international and intercultural group members to brainstorm together remotely, but surprisingly little is known about how culture and medium shape the underlying brainstorming process. In a laboratory study, we examined the influences of individual cultural background (American versus Chinese), group cultural composition (same- versus mixed-culture groups), and communication medium (text-only versus video-enabled chatrooms) on group brainstorming conversations. Cultural differences and adaptation in conversational talkativeness and responsiveness were identified. The text-only medium reduced cultural differences in talkativeness. Working in a mixed-culture group led to cultural adaptation in the communication style of Chinese but not American participants. We discuss implications for international group brainstorming."}
{"_id": "58832ee5fdd48b9c919074e6317eff70161759dc", "title": "Gender identity disorder in twins: a review of the case report literature.", "text": "INTRODUCTION\nThe etiology of gender identity disorder (GID) remains largely unknown. In recent literature, increased attention has been attributed to possible biological factors in addition to psychological variables.\n\n\nAIM\nTo review the current literature on case studies of twins concordant or discordant for GID.\n\n\nMETHODS\nA systematic, comprehensive literature review.\n\n\nRESULTS\nOf 23 monozygotic female and male twins, nine (39.1%) were concordant for GID; in contrast, none of the 21 same-sex dizygotic female and male twins were concordant for GID, a statistically significant difference (P=0.005). Of the seven opposite-sex twins, all were discordant for GID.\n\n\nCONCLUSIONS\nThese findings suggest a role for genetic factors in the development of GID."}
{"_id": "d8d4b573394dbd456dd0eca8914f6dad0f9a0b42", "title": "Improving the Reliability of Series Resonant Inverters for Induction Heating Applications", "text": "This paper analyzes a high-power (100-kW) high-frequency (50-kHz) voltage-fed inverter with a series resonant load circuit for industrial induction heating applications which is characterized by a full-bridge inverter composed of isolated-gate bipolar transistors and a new power control based on phase-shift (PS) control. This power control circuit incorporates a load-adaptive variable-frequency controller and automated blanking time management in order to allow the inverter to work in zero-voltage switching for all output power levels and load conditions. An important improvement of the inverter reliability is achieved by choosing an appropriate and novel switching sequence for the PS inverter. The results are verified experimentally using a prototype for induction hardening applications. A comparative study between the proposed and standard PS power control will be made."}
{"_id": "27df8b3bd29d0bc73d9b8c35853062649c34cb22", "title": "Applying PCA for Traffic Anomaly Detection: Problems and Solutions", "text": "Spatial Principal Component Analysis (PCA) has been proposed for network-wide anomaly detection. A recent work has shown that PCA is very sensitive to calibration settings, unfortunately, the authors did not provide further explanations for this observation. In this paper, we fill this gap and provide the reasoning behind the found discrepancies. First, we revisit PCA for anomaly detection and evaluate its performance on our data. We develop a slightly modified version of PCA that uses only data from a single router. Instead of correlating data across different spatial measurement points, we correlate the data across different metrics. With the help of the analyzed data, we explain the pitfalls of PCA and underline our argumentation with measurement results. We show that the main problems that make PCA difficult to apply are (i) the temporal correlation in the data; (ii) the non-stationarity of the data; and (iii) the difficulty about choosing the right number of components. Moreover, we propose a solution to deal with the most dominant problem, the temporal correlation in data. We find that when we consider temporal correlation, PCA detection results are significantly improved."}
{"_id": "c8475045e23975e7001082d5414ad91eb40771f1", "title": "You Can Touch This: Eleven Years and 258218 Images of Objects", "text": "Touch has become a central input modality for a wide variety of interactive devices, most of our mobile devices are operated using touch. In addition to interacting with digital artifacts, people touch and interact with many other objects in their daily lives. We provide a unique photo dataset containing all touched objects over the last 11 years. All photos were contributed by Alberto Frigo, who was involved early on in the \"Quantified Self\" movement. He takes photos of every object he touches with his dominant hand. We analyzed the 258,218 images with respect to the types objects, their distribution, and related activities."}
{"_id": "07d4304493c19202bb172660179734b522314b71", "title": "Business Intelligence and Analytics Education, and Program Development: A Unique Opportunity for the Information Systems Discipline", "text": "\u201cBig Data,\u201d huge volumes of data in both structured and unstructured forms generated by the Internet, social media, and computerized transactions, is straining our technical capacity to manage it. More importantly, the new challenge is to develop the capability to understand and interpret the burgeoning volume of data to take advantage of the opportunities it provides in many human endeavors, ranging from science to business. Data Science, and in business schools, Business Intelligence and Analytics (BI&A) are emerging disciplines that seek to address the demands of this new era. Big Data and BI&A present unique challenges and opportunities not only for the research community, but also for Information Systems (IS) programs at business schools. In this essay, we provide a brief overview of BI&A, speculate on the role of BI&A education in business schools, present the challenges facing IS departments, and discuss the role of IS curricula and program development, in delivering BI&A education. We contend that a new vision for the IS discipline should address these challenges."}
{"_id": "f5a5202acd2a0c14f589c505845eeac16334d652", "title": "3D printing with polymers: Challenges among expanding options and opportunities.", "text": "OBJECTIVES\nAdditive manufacturing, which is more colloquially referred to as 3D printing, is quickly approaching mainstream adoption as a highly flexible processing technique that can be applied to plastic, metal, ceramic, concrete and other building materials. However, taking advantage of the tremendous versatility associated with in situ photopolymerization as well as the ability to select from a variety of preformed processible polymers, 3D printing predominantly targets the production of polymeric parts and models. The goal of this review is to connect the various additive manufacturing techniques with the monomeric and polymeric materials they use while highlighting emerging material-based developments.\n\n\nMETHODS\nModern additive manufacturing technology was introduced approximately three decades ago but this review compiles recent peer-reviewed literature reports to demonstrate the evolution underway with respect to the various building techniques that differ significantly in approach as well as the new variations in polymer-based materials being employed.\n\n\nRESULTS\nRecent growth of 3D printing has been dramatic and the ability of the various platform technologies to expand from rapid production prototypic models to the greater volume of readily customizable production of working parts is critical for continued high growth rates. This transition to working part production is highly dependent on adapting materials that deliver not only the requisite design accuracy but also the physical and mechanical properties necessary for the application.\n\n\nSIGNIFICANCE\nWith the weighty distinction of being called the next industrial revolution, 3D printing technologies is already altering many industrial and academic operations including changing models for future healthcare delivery in medicine and dentistry."}
{"_id": "b1c0c0d731a7b924eba5ecb58f865d0cfeaebec1", "title": "Facebook groups as LMS: A case study", "text": "This paper describes a pilot study in using Facebook as an alternative to a learning management system (LMS). The paper reviews the current research on the use of Facebook in academia and analyzes the differences between a Facebook group and a regular LMS. The paper reports on a precedent-setting attempt to use a Facebook group as a course Web site, serving as a platform for delivering content and maintaining interactions among the students and between the students and the instructor. The paper presents findings from the students\u2019 self-assessments and reflections on their experience. The students expressed satisfaction with learning in Facebook and willingness to continue using these groups in future courses."}
{"_id": "3288b55e231f29afc57a0e0caa27848d494126b8", "title": "Multi-resolution texture analysis for fingerprint based age-group estimation", "text": "In this paper the possibility of using digital fingerprints to estimate age-groups of human being, particularly children is investigated. To our knowledge, age-group estimation in humans, using digital fingerprints have not been addressed formally. Age-group estimation can be applied in many areas like on-line child protection, access control and customized internet services etc. Motivated by the fact that human digital fingerprint vary in texture as the person ages, a multi-resolution texture approach for automatic age-group estimation has been presented in this paper. Three standard classifiers were used to judge the accuracy of the proposed method. In the process of this research study, a novel method for digital fingerprint reference point generation was developed, which provides reference point for very poor quality images also. The proposed reference point generation method is compared with core-point method using FG-NET DB1 dataset. Experimental results proves that a digital fingerprint can be used to identify age-groups, particularly children. A classification accuracy of 80 percent was achieved for children below the age of 14 by using the aforesaid method."}
{"_id": "16c424286b42593678dfdb4b64a84beb21d8dbf0", "title": "Scaled Agile Framework: Presentation and real world example", "text": "This case focuses on the applicability of the Scaled Agile Framework (SAFe) founded by Dean Leffingwell. Modern organizations often work with agile software engineering teams using traditional single team-level methods like Scrum, but multiple teams and the program or portfolio level are not part of methods like Scrum. SAFe tries to apply agile methodologies to the whole organisation. The real world example focuses on a key element of SAFe, the Program Increment (PI) planning meeting and how it can improve multiple team collaboration."}
{"_id": "c798fe6462214a60a264bc0a164199a56f21f12d", "title": "Case Study On Social Engineering Techniques for Persuasion", "text": "T There are plenty of security software in market; each claiming the best, still we daily face problem of viruses and other malicious activities. If we know the basic working principal of such malware then we can very easily prevent most of them even without security software. Hackers and crackers are experts in psychology to manipulate people into giving them access or the information necessary to get access. This paper discusses the inner working of such attacks. Case study of Spyware is provided. In this case study, we got 100% success using social engineering techniques for deception on Linux operating system, which is considered as the most secure operating system. Few basic principal of defend, for the individual as well as for the organization, are discussed here, which will prevent most of such attack if followed."}
{"_id": "96e6c5e64d0f9ec4f605ca169986b3d025eef51f", "title": "Enhancing linear system theory curriculum with an inverted pendulum robot", "text": "The demands on both delivery methodology and content of control curriculum continue to evolve as more applications in our world incorporate control theory and the need for continuing education increases. Not only has the content evolved, but the application of social-behavioral science research has resulted in cooperative and active learning practices by faculty. In response to these shifts in education, an open-source inverted pendulum robot was used in a linear systems theory (LST) class taught as part of a professional master's program (PMP). The robot had to have a low cost, and enough capability to enable the students to explore and test the ideas presented in lecture and to engage in collaborative learning experiences. This paper discusses the robot, describes the key control theory experiments, and reviews the lessons learned from this experience."}
{"_id": "2d6fdca512c81da880f3d90a2d71e876f66e3c5d", "title": "Evaluating a tool for improving accessibility to charts and graphs", "text": "We discuss factors in the design and evaluation of natural language-driven assistive technologies that generate descriptions of, and allow interaction with, graphical representations of numerical data. In particular, we provide data in favor of 1) screen-reading technologies as a usable, useful, and cost-effective means of interacting with graphs. The data also show that by carrying out evaluation of Assistive Technologies on populations other than the target communities, certain subtleties of navigation and interaction may be lost or distorted."}
{"_id": "2b61e632de33201a205fd4ee350a2b51e6c31af1", "title": "Preventing Location-Based Identity Inference in Anonymous Spatial Queries", "text": "The increasing trend of embedding positioning capabilities (for example, GPS) in mobile devices facilitates the widespread use of location-based services. For such applications to succeed, privacy and confidentiality are essential. Existing privacy-enhancing techniques rely on encryption to safeguard communication channels, and on pseudonyms to protect user identities. Nevertheless, the query contents may disclose the physical location of the user. In this paper, we present a framework for preventing location-based identity inference of users who issue spatial queries to location-based services. We propose transformations based on the well-established K-anonymity concept to compute exact answers for range and nearest neighbor search, without revealing the query source. Our methods optimize the entire process of anonymizing the requests and processing the transformed spatial queries. Extensive experimental studies suggest that the proposed techniques are applicable to real-life scenarios with numerous mobile users."}
{"_id": "3b3708ec1b0ee6da5340a40aafda07c2fc7522e4", "title": "MobiHide: A Mobilea Peer-to-Peer System for Anonymous Location-Based Queries", "text": "Modern mobile phones and PDAs are equipped with positioning capabilities (e.g., GPS). Users can access public location-based services (e.g., Google Maps) and ask spatial queries. Although communication is encrypted, privacy and confidentiality remain major concerns, since the queries may disclose the location and identity of the user. Commonly, spatial K-anonymity is employed to hide the query initiator among a group of K users. However, existing work either fails to guarantee privacy, or exhibits unacceptably long response time. In this paper we propose MobiHide, a Peer-to-Peer system for anonymous location-based queries, which addresses these problems. MobiHide employs the Hilbert space-filling curve to map the 2-D locations of mobile users to 1-D space. The transformed locations are indexed by a Chord-based distributed hash table, which is formed by the mobile devices. The resulting Peer-to-Peer system is used to anonymize a query by mapping it to a random group of K users that are consecutive in the 1-D space. Compared to existing state-of-the-art, MobiHide does not provide theoretical anonymity guarantees for skewed query distributions. Nevertheless, it achieves strong anonymity in practice, and it eliminates system hotspots. Our experimental evaluation shows that MobiHide has good load balancing and fault tolerance properties, and is applicable to real-life scenarios with numerous mobile users."}
{"_id": "01a29e319e2afa2d29cab62ef1f492a953e8ca70", "title": "Location Privacy in Mobile Systems: A Personalized Anonymization Model", "text": "This paper describes a personalized k-anonymity model for protecting location privacy against various privacy threats through location information sharing. Our model has two unique features. First, we provide a unified privacy personalization framework to support location k-anonymity for a wide range of users with context-sensitive personalized privacy requirements. This framework enables each mobile node to specify the minimum level of anonymity it desires as well as the maximum temporal and spatial resolutions it is willing to tolerate when requesting for k-anonymity preserving location-based services (LBSs). Second, we devise an efficient message perturbation engine which runs by the location protection broker on a trusted server and performs location anonymization on mobile users' LBS request messages, such as identity removal and spatio-temporal cloaking of location information. We develop a suite of scalable and yet efficient spatio-temporal cloaking algorithms, called CliqueCloak algorithms, to provide high quality personalized location k-anonymity, aiming at avoiding or reducing known location privacy threats before forwarding requests to LBS provider(s). The effectiveness of our CliqueCloak algorithms is studied under various conditions using realistic location data synthetically generated using real road maps and traffic volume data"}
{"_id": "0d8f17d8d1d05d6405be964648e7fc622c776c5d", "title": "User needs for location-aware mobile services", "text": "Mobile contexts of use vary a lot, and may even be continuously changing during use. The context is much more than location, but its other elements are still difficult to identify or measure. Location information is becoming an integral part of different mobile devices. Current mobile services can be enhanced with location-aware features, thus providing the user with a smooth transition towards context-aware services. Potential application fields can be found in areas such as travel information, shopping, entertainment, event information and different mobile professions. This paper studies location-aware mobile services from the user's point of view. The paper draws conclusions about key issues related to user needs, based on user interviews, laboratory and field evaluations with users, and expert evaluations of location-aware services. The user needs are presented under five main themes: topical and comprehensive contents, smooth user interaction, personal and user-generated contents, seamless service entities and privacy issues."}
{"_id": "1850a106fad32010e2fae2f3c34d47e3237bb3f4", "title": "A Quantitative Analysis of Current Practices in Optical Flow Estimation and the Principles Behind Them", "text": "The accuracy of optical flow estimation algorithms has been improving steadily as evidenced by results on the Middlebury optical flow benchmark. The typical formulation, however, has changed little since the work of Horn and Schunck. We attempt to uncover what has made recent advances possible through a thorough analysis of how the objective function, the optimization method, and modern implementation practices influence accuracy. We discover that \u201cclassical\u201d flow formulations perform surprisingly well when combined with modern optimization and implementation techniques. One key implementation detail is the median filtering of intermediate flow fields during optimization. While this improves the robustness of classical methods it actually leads to higher energy solutions, meaning that these methods are not optimizing the original objective function. To understand the principles behind this phenomenon, we derive a new objective function that formalizes the median filtering heuristic. This objective function includes a non-local smoothness term that robustly integrates flow estimates over large spatial neighborhoods. By modifying this new term to include information about flow and image boundaries we develop a method that can better preserve motion details. To take advantage of the trend towards video in wide-screen format, we further introduce an asymmetric pyramid downsampling scheme that enables the estimation of longer range horizontal motions. The methods are evaluated on the Middlebury, MPI Sintel, and KITTI datasets using the same parameter settings."}
{"_id": "85f6fbb74339be3b2d87ff5198ac440035ed63bb", "title": "Deep Reinforcement Learning for Multi-Resource Multi-Machine Job Scheduling", "text": "Minimizing job scheduling time is a fundamental issue in data center networks that has been extensively studied in recent years. The incoming jobs require different CPU and memory units, and span different number of time slots. The traditional solution is to design efficient heuristic algorithms with performance guarantee under certain assumptions. In this paper, we improve a recently proposed job scheduling algorithm using deep reinforcement learning and extend it to multiple server clusters. Our study reveals that deep reinforcement learning method has the potential to outperform traditional resource allocation algorithms in a variety of complicated environments."}
{"_id": "e9a9d7f2a1226b463fb18f2215553dfd01aa38e7", "title": "Sign language recognition with recurrent neural network using human keypoint detection", "text": "We study the sign language recognition problem which is to translate the meaning of signs from visual input such as videos. It is well-known that many problems in the field of computer vision require a huge amount of dataset to train deep neural network models. We introduce the KETI sign language dataset which consists of 10,480 videos of high resolution and quality. Since different sign languages are used in different countries, the KETI sign language dataset can be the starting line for further research on the Korean sign language recognition.\n Using the sign language dataset, we develop a sign language recognition system by utilizing the human keypoints extracted from face, hand, and body parts. The extracted human keypoint vector is standardized by the mean and standard deviation of the keypoints and used as input to recurrent neural network (RNN). We show that our sign recognition system is robust even when the size of training data is not sufficient. Our system shows 89.5% classification accuracy for 100 sentences that can be used in emergency situations."}
{"_id": "04dbf77f98d12225f906cf3b1c3791ed5a8b5f6e", "title": "Enhancement of Images Using Histogram Processing Techniques", "text": "Image enhancement is a mean as the improvement of an image appearance by increasing dominance of some features or by decreasing ambiguity between different regions of the image. Image enhancement processes consist of a collection of techniques that seek to improve the visual appearance of an image or to convert the image to a form better suited for analysis by a human or machine. Many images such as medical images, remote sensing images, electron microscopy images and even real life photographic pictures, suffer from poor contrast. Therefore it is necessary to enhance the contrast.The purpose of image enhancement methods is to increase image visibility and details. Enhanced image provide clear image to eyes or assist feature extraction processing in computer vision system. Numerous enhancement methods have been proposed but the enhancement efficiency, computational requirements, noise amplification, user intervention, and application suitability are the common factors to be considered when choosing from these different methods for specific image processing application."}
{"_id": "bd31e012b9a8cc575426dd87a334f86d2c957762", "title": "Insights into Alzheimer disease pathogenesis from studies in transgenic animal models", "text": "Alzheimer disease is the most common cause of dementia among the elderly, accounting for ~60-70% of all cases of dementia. The neuropathological hallmarks of Alzheimer disease are senile plaques (mainly containing p-amyloid peptide derived from amyloid precursor protein) and neurofibrillary tangles (containing hyperphosphorylated Tau protein), along with neuronal loss. At present there is no effective treatment for Alzheimer disease. Given the prevalence and poor prognosis of the disease, the development of animal models has been a research priority to understand pathogenic mechanisms and to test therapeutic strategies. Most cases of Alzheimer disease occur sporadically in people over 65 years old, and are not genetically inherited. Roughly 5% of patients with Alzheimer disease have familial Alzheimer disease--that is, related to a genetic predisposition, including mutations in the amyloid precursor protein, presenilin 1, and presenilin 2 genes. The discovery of genes for familial Alzheimer disease has allowed transgenic models to be generated through the overexpression of the amyloid precursor protein and/or presenilins harboring one or several mutations found in familial Alzheimer disease. Although none of these models fully replicates the human disease, they have provided valuable insights into disease mechanisms as well as opportunities to test therapeutic approaches. This review describes the main transgenic mouse models of Alzheimer disease which have been adopted in Alzheimer disease research, and discusses the insights into Alzheimer disease pathogenesis from studies in such models. In summary, the Alzheimer disease mouse models have been the key to understanding the roles of soluble b-amyloid oligomers in disease pathogenesis, as well as of the relationship between p-amyloid and Tau pathologies."}
{"_id": "dbb35fc25270c183a10cf5d463487531529a599b", "title": "Centrality based Document Ranking", "text": "In this paper, we address the problem of ranking clinical documents using centrality based approach. We model the documents to be ranked as nodes in a graph and place edges between documents based on their similarity. Given a query, we compute similarity of the query with respect to every document in the graph. Based on these similarity values, documents are ranked for a given query. Initially, Lucene is used to retrieve top fifty documents that are relevant to the query and then our proposed approach is applied on these retrieved documents to rerank them. Experimental results show that our approach did not perform well as the documents retrieved by Lucene are not among the top 50 documents in the Gold Standard."}
{"_id": "1845bad6994a1b7bfc4ef033e9e412f79da25c2d", "title": "Proceso de Valoraci\u00f3n para la Mejora de Procesos Software en Peque\u00f1as Organizaciones", "text": "1 Grupo IDIS Facultad de Ingenier\u00eda Electr\u00f3nica y Telecomunicaciones Universidad del Cauca Calle 5 No. 4 \u2013 70. Popay\u00e1n, Cauca, Colombia. fjpino@unicauca.edu.co. Web: http://www.unicauca.edu.co/idis/ 2 Grupo Alarcos Escuela Superior de Inform\u00e1tica Universidad Castilla-La Mancha Paseo de la Universidad 4, Ciudad Real, Espa\u00f1a. {Felix.Garcia, Mario.Piattini}@uclm.es. Web: http://alarcos.inf-cr.uclm.es/"}
{"_id": "2dfca6b61e68d454a6449adac3f80d7f88829ed2", "title": "Exploiting Structure in Policy Construction", "text": "Markov decision processes (MDPs) have recently been appbed to the problem of modeling decisiontheoretic planning While traditional methods for solving MDPs are often practical for small states spaces, their effectiveness for large AI planning problems is questionable We present an algorithm, called structured policy Iteration (SPI), that con\u00ad structs optimal policies without explicit enumera\u00ad tion of the state space The algorithm retains the fundamental computational steps of the commonly used modified policy iteration algorithm, but ex\u00ad ploits the variable and prepositional independencies reflected in a temporal Bayesian network represen tation of MDPs The principles behind SPI can be applied to any structured representation of stochas\u00ad tic actions, pobcies and value functions, and the algorithm itself can be used in conjunction with re\u00ad cent approximation methods"}
{"_id": "6fba4968f1b39d490bf95fe4030e3d385f167074", "title": "Machine Learning with World Knowledge: The Position and Survey", "text": "Machine learning has become pervasive in multiple domains, impacting a wide variety of applications, such as knowledge discovery and data mining, natural language processing, information retrieval, computer vision, social and health informatics, ubiquitous computing, etc. Two essential problems of machine learning are how to generate features and how to acquire labels for machines to learn. Particularly, labeling large amount of data for each domain-specific problem can be very time consuming and costly. It has become a key obstacle in making learning protocols realistic in applications. In this paper, we will discuss how to use the existing general-purpose world knowledge to enhance machine learning processes, by enriching the features or reducing the labeling work. We start from the comparison of world knowledge with domain-specific knowledge, and then introduce three key problems in using world knowledge in learning processes, i.e., explicit and implicit feature representation, inference for knowledge linking and disambiguation, and learning with direct or indirect supervision. Finally we discuss the future directions of this research topic."}
{"_id": "7c1cdcbdd30163f3d7fd9789e42c4a37eb2f7f04", "title": "Learning Concept Embeddings for Query Expansion by Quantum Entropy Minimization", "text": "In web search, users queries are formulated using only few terms and term-matching retrieval functions could fail at retrieving relevant documents. Given a user query, the technique of query expansion (QE) consists in selecting related terms that could enhance the likelihood of retrieving relevant documents. Selecting such expansion terms is challenging and requires a computational framework capable of encoding complex semantic relationships. In this paper, we propose a novel method for learning, in a supervised way, semantic representations for words and phrases. By embedding queries and documents in special matrices, our model disposes of an increased representational power with respect to existing approaches adopting a vector representation. We show that our model produces high-quality query expansion terms. Our expansion increase IR measures beyond expansion from current word-embeddings models and well-established traditional QE methods."}
{"_id": "c52b77c18700e9625c885a824a0c8b95c3e9cf21", "title": "Short-term traffic flow forecasting with spatial-temporal correlation in a hybrid deep learning framework", "text": "Deep learning approaches have reached a celebrity status in artificial intelligence field, its success have mostly relied on Convolutional Networks (CNN) and Recurrent Networks. By exploiting fundamental spatial properties of images and videos, the CNN always achieves dominant performance on visual tasks. And the Recurrent Networks (RNN) especially long short-term memory methods (LSTM) can successfully characterize the temporal correlation, thus exhibits superior capability for time series tasks. Traffic flow data have plentiful characteristics on both time and space domain. However, applications of CNN and LSTM approaches on traffic flow are limited. In this paper, we propose a novel deep architecture combined CNN and LSTM to forecast future traffic flow (CLTFP). An 1-dimension CNN is exploited to capture spatial features of traffic flow, and two LSTMs are utilized to mine the short-term variability and periodicities of traffic flow. Given those meaningful features, the feature-level fusion is performed to achieve short-term traffic flow forecasting. The proposed CLTFP is compared with other popular forecasting methods on an open datasets. Experimental results indicate that the CLTFP has considerable advantages in traffic flow forecasting. in additional, the proposed CLTFP is analyzed from the view of Granger Causality, and several interesting properties of traffic flow and CLTFP are discovered and discussed . Traffic flow forecasting, Convolutional neural network, Long short-term memory, Feature-level fusion"}
{"_id": "f2673ba93f6b287007d4474ddf98a99234fd991d", "title": "Classification of Artefacts in EEG Signal Recordings and Overview of Removing Techniques", "text": "EEG is a record of brain activity from various sites of the brain. Artefacts are unwanted noise signals in an EEG record. Classification of artefacts based on source of its generation like physiological artefacts and external artefacts. Body of the subjects are main source of Physiological artifacts, while external artefacts are from outside the body due to the environment or measuring devices. Recognition, identification and elimination of artifacts is an important process to minimize the chance of misinterpretation of EEG, not only for clinical and non-clinical fields such as brain computer interface, intelligent control system robotics etc. This paper classifies the artefacts from the database collected at Dr. R. N. Cooper Mun. General Hospital Mumbai India."}
{"_id": "7b86d08da8b7ffcdd987b20669bd05ea54e73881", "title": "Autonomia: an autonomic computing environment", "text": "The prolifeation of Internet technologies, services and devices, have made the current networked system designs, and management tools incapable of designing reliable, secure networked systems and services. In fact, we have reached a level of complexity, heterogeneity, and a rapid change rate that our information infiastructure is becoming unmanageable and insecure. This had led researchers to consider alternative designs and management techniques that are based on strategies used by biological systems to deal with complexity, heterogeneity and uncertainty. The approach is referred to as autonomic computing. An autonomic computing system is the system that has the capabilities of being sevdefining, self-healing, self-configwing, self-optimizing, etc. In this paper, we present our approach to implement an autonomic computing infastructure, Autonomia that provides dynamically programmable control and management services to support the development and deployment of smart (intelligent) applications. The A UTONOMIA environment provides the application developers with all the tools required to specifj, the appropriate control and management schemes to maintain any quality of service requirement or application attribute/firnctionality (e.g., perjormance, fault, security, etc.) and the core autonomic middleware services to maintain the autonomic requirements of a wide range of network applications and services. We have successfully implemented a proof-of-concept prototype system that can support the self-configuring, self-deploying and selfhealing of any networked application."}
{"_id": "11e595c3cc3ad99b7f0075dde4b1aeda2fbf6f89", "title": "SnapApp: Reducing Authentication Overhead with a Time-Constrained Fast Unlock Option", "text": "We present SnapApp, a novel unlock concept for mobile devices that reduces authentication overhead with a time-constrained quick-access option. SnapApp provides two unlock methods at once: While PIN entry enables full access to the device, users can also bypass authentication with a short sliding gesture (\"Snap\"). This grants access for a limited amount of time (e.g. 30 seconds). The device then automatically locks itself upon expiration. Our concept further explores limiting the possible number of Snaps in a row, and configuring blacklists for app use during short access (e.g. to exclude banking apps). We discuss opportunities and challenges of this concept based on a 30-day field study with 18 participants, including data logging and experience sampling methods. Snaps significantly reduced unlock times, and our app was perceived to offer a good tradeoff. Conceptual challenges include, for example, supporting users in configuring their blacklists."}
{"_id": "14a7c54fce5ed04118942ac5b2f382819dd76bcb", "title": "Towards Conversational Recommender Systems", "text": "People often ask others for restaurant recommendations as a way to discover new dining experiences. This makes restaurant recommendation an exciting scenario for recommender systems and has led to substantial research in this area. However, most such systems behave very differently from a human when asked for a recommendation. The goal of this paper is to begin to reduce this gap. In particular, humans can quickly establish preferences when asked to make a recommendation for someone they do not know. We address this cold-start recommendation problem in an online learning setting. We develop a preference elicitation framework to identify which questions to ask a new user to quickly learn their preferences. Taking advantage of latent structure in the recommendation space using a probabilistic latent factor model, our experiments with both synthetic and real world data compare different types of feedback and question selection strategies. We find that our framework can make very effective use of online user feedback, improving personalized recommendations over a static model by 25% after asking only 2 questions. Our results demonstrate dramatic benefits of starting from offline embeddings, and highlight the benefit of bandit-based explore-exploit strategies in this setting."}
{"_id": "120383bf9ffcacdaa8bee17101b042b71bfe30f2", "title": "An autonomous flyer photographer", "text": "In this paper we explore the combination of a latest generation mobile device and a micro quadrotor platform to perform indoor autonomous navigation for the purpose of autonomous photography. We use the Yellowstone tablet from Google's Tango project [1], equipped with onboard, fully integrated sensing platform and with significant computational capability. To the best of our knowledge we are the first to exploit the Google's Tango tablet as source of pose estimate to control the quadrotor's motion. Using the tablet's onboard camera the system is able to detect people and generate a desired pose that the quadrotor will have to reach in order to take a well framed picture of the detected subject. The experimental results and live video demonstrate the capabilities of the autonomous flying robot photographer using the system described throughout this manuscript."}
{"_id": "3beb69bfa0a8fef2cd04fc441e57304e3bf815f6", "title": "Conservative surgical management of subungual (matrix derived) melanoma: report of seven cases and literature review.", "text": "BACKGROUND\nSubungual melanoma (SUM) is a rare entity, comprising approximately 0\u00b77-3\u00b75% of all melanoma subtypes. SUM histopathologically belongs to the acral lentiginous pathological subtype of malignant melanoma. Its diagnosis is helped by dermoscopy but pathological examination of doubtful cases is required. Classical management of SUM is based on radical surgery, namely distal phalanx amputation. Conservative treatment with nonamputative wide excision of the nail unit followed by a skin graft has been insufficiently reported in the medical literature even though it is performed in many centres.\n\n\nOBJECTIVES\nTo report a series of patients with in situ or minimally invasive SUM treated by conservative surgery, to investigate the postoperative evolution and to evaluate the outcome with a review of the literature.\n\n\nMETHODS\nWe performed a retrospective extraction study from our melanoma register of all patients with in situ and minimally invasive SUM treated with conservative surgery in the University Hospital Department of Dermatology, Lyon, France from 2004 to 2009. The patient demographics, disease presentation, delay to diagnosis, histopathology and postoperative evolution were reviewed.\n\n\nRESULTS\nSeven cases of SUM treated as such were identified in our melanoma database. All cases had a clinical presentation of melanonychia striata. The mean delay to diagnosis was 2years. Surgical excision of the entire nail unit with a 5-10mm safety margin without bone resection followed by full-thickness skin graft taken from the arm was performed in all cases. No recurrence was observed with a mean follow-up of 45months. Functional results were found satisfactory by all patients and their referring physicians. Sixty-two other cases have been found in the literature and are also discussed.\n\n\nCONCLUSIONS\nConservative surgical management in patients with in situ or minimally invasive SUM is a procedure with good cosmetic and functional outcome and, in our cases as well as in the literature, the prognosis is not changed."}
{"_id": "642ee3df85a3bf5f4c02e1ba86c67745f9354b7e", "title": "Examining The Adoption Of Human Resource Information System In The Context Of Saudi Arabia", "text": "The aim of this paper is to examine the factors influencing the adoption of the human resource information system (HRIS) in the Saudi context. A few researches have looked at the adoption of HRIS, but none of these studies have examined the adoption of such a system from the users\u2019 perspective in general and Saudi Arabia (SA) in particular. This study has developed a model that integrates service quality with perceived usefulness and perceived ease of use to investigate the factors affecting the users\u2019 adoption of HRIS. The model was empirically tested using the structure equation modelling by employing data collected from a survey of HRIS users. The findings and discussion presented in this study will help the Saudi organisations to understand the current status of their HRIS to improve it and benefit from it. This study provides guidelines for its implementation for theory and practices, limitations and future directions."}
{"_id": "7f7c61d446ab95830c5b7fa14f3c8ae17325d7f3", "title": "Exploring the role of robotic kissing \u201cKissenger\u201d in digital communication through alan turing's imitation game", "text": "With advances in robotics and artificial intelligence, and increasing prevalence of commercial social robots and chatbots, humans are communicating with artificially intelligent machines for various applications in a wide range of settings. Intimate touch is crucial to relationships forming and emotional bonding, however it remains as a missing element in digital communication between humans, as well as humans and machines. A kissing machine, Kissenger, is built to transmit haptic kissing sensations in a communication network. In order to investigate the role of robotic kissing using the Kissenger device in digital communication, we conducted a modified version of the Imitation Game described by Alan Turing by including the use of the kissing machine. The study consists of 4 rounds, in which participants communicate with two players, a male and a female, using online chatrooms with the goal to identify the female player. In the last two rounds, the male player is replaced with a chatbot with female personas. Participants are told to remotely kiss the players using the kissing device at least once during the chat in the second and fourth round, but not in the first and third round. Results show that robotic kissing has no effect on the winning rates of the male and female players during human-human communication, but it increases the winning rate of the female player when a chatbot is involved in the game."}
{"_id": "2897b921f734bc7083f263717ba2217257d3e661", "title": "A Neuroevolution Approach to General Atari Game Playing", "text": "This paper addresses the challenge of learning to play many different video games with little domain-specific knowledge. Specifically, it introduces a neuroevolution approach to general Atari 2600 game playing. Four neuroevolution algorithms were paired with three different state representations and evaluated on a set of 61 Atari games. The neuroevolution agents represent different points along the spectrum of algorithmic sophistication - including weight evolution on topologically fixed neural networks (conventional neuroevolution), covariance matrix adaptation evolution strategy (CMA-ES), neuroevolution of augmenting topologies (NEAT), and indirect network encoding (HyperNEAT). State representations include an object representation of the game screen, the raw pixels of the game screen, and seeded noise (a comparative baseline). Results indicate that direct-encoding methods work best on compact state representations while indirect-encoding methods (i.e., HyperNEAT) allow scaling to higher dimensional representations (i.e., the raw game screen). Previous approaches based on temporal-difference (TD) learning had trouble dealing with the large state spaces and sparse reward gradients often found in Atari games. Neuroevolution ameliorates these problems and evolved policies achieve state-of-the-art results, even surpassing human high scores on three games. These results suggest that neuroevolution is a promising approach to general video game playing (GVGP)."}
{"_id": "8d4d06159413e1bb65ef218b4c78664d84a9b3c3", "title": "Acquisition and analysis of volatile memory from android devices", "text": "The Android operating system for mobile phones, which is still relatively new, is rapidly gaining market share, with dozens of smartphones and tablets either released or set to be released. In this paper, we present the first methodology and toolset for acquisition and deep analysis of volatile physical memory from Android devices. The paper discusses some of the challenges in performing Android memory acquisition, discusses our new kernel module for dumping memory, named dmd, and specifically addresses the difficulties in developing device-independent acquisition tools. Our acquisition tool supports dumping memory to either the SD on the phone or via the network. We also present analysis of kernel structures using newly developed Volatility functionality. The results of this work illustrate the potential that deep memory analysis offers to digital forensics investigators. a 2011 Elsevier Ltd. All rights reserved."}
{"_id": "8db37013b0b3315badaa7190d4c3af9ec56ab278", "title": "New acquisition method based on firmware update protocols for Android smartphones", "text": "Android remains the dominant OS in the smartphone market even though the iOS share of the market increased during the iPhone 6 release period. As various types of Android smartphones are being launched in the market, forensic studies are being conducted to test data acquisition and analysis. However, since the application of new Android security technologies, it has become more difficult to acquire data using existing forensic methods. In order to address this problem, we propose a new acquisition method based on analyzing the firmware update protocols of Android smartphones. A physical acquisition of Android smartphones can be achieved using the flash memory read command by reverse engineering the firmware update protocol in the bootloader. Our experimental results demonstrate that the proposed method is superior to existing forensic methods in terms of the integrity guarantee, acquisition speed, and physical dump with screen-locked smartphones (USB debugging disabled). \u00a9 2015 The Authors. Published by Elsevier Ltd on behalf of DFRWS. This is an open access articleunder theCCBY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)."}
{"_id": "deaa995628987862b8521b43dc9519d868c78ab2", "title": "Forensic analysis of social networking applications on mobile devices", "text": "The increased use of social networking applications on smartphones makes these devices a goldmine for forensic investigators. Potential evidence can be held on these devices and recovered with the right tools and examination methods. This paper focuses on conducting forensic analyses on three widely used social networking applications on smartphones: Facebook, Twitter, and MySpace. The tests were conducted on three popular smartphones: BlackBerrys, iPhones, and Android phones. The tests consisted of installing the social networking applications on each device, conducting common user activities through each application, acquiring a forensically sound logical image of each device, and performing manual forensic analysis on each acquired logical image. The forensic analyses were aimed at determining whether activities conducted through these applications were stored on the device\u2019s internal memory. If so, the extent, significance, and location of the data that could be found and retrieved from the logical image of each device were determined. The results show that no traces could be recovered from BlackBerry devices. However, iPhones and Android phones store a significant amount of valuable data that could be recovered and used by forensic investigators. a 2012 A. Marrington, N. Al Mutawa & I. Baggili. Published by Elsevier Ltd. All rights"}
{"_id": "5c97d83fb681f398bb3bf3d1df85f0c2dbe12297", "title": "iPhone 3 GS Forensics : Logical analysis using Apple iTunes Backup Utility", "text": "The iPhone mobile is used worldwide due to its enhanced computing capabilities, increased storage capacity as well as its attractive touch interface. These characteristics made the iPhone a popular smart phone device. The increased use of the iPhone lead it to become a potential source of digital evidence in criminal investigations. Therefore, iPhone forensics turned into an essential practice for forensic and security practitioners today. This research aimed at investigating and examining the logical backup acquisition of the iPhone 3GS mobile device using the Apple iTunes backup utility. It was found that significant data of forensic value such as e-mail messages, text and multimedia messages, calendar events, browsing history, GPRS locations, contacts, call history and voicemail recording can be retrieved using this method of iPhone acquisition."}
{"_id": "ccd74a2b3176edb4175e4cd4b2e603e79df9067a", "title": "Training and operation of an integrated neuromorphic network based on metal-oxide memristors", "text": "Despite much progress in semiconductor integrated circuit technology, the extreme complexity of the human cerebral cortex, with its approximately 1014 synapses, makes the hardware implementation of neuromorphic networks with a comparable number of devices exceptionally challenging. To provide comparable complexity while operating much faster and with manageable power dissipation, networks based on circuits combining complementary metal-oxide-semiconductors (CMOSs) and adjustable two-terminal resistive devices (memristors) have been developed. In such circuits, the usual CMOS stack is augmented with one or several crossbar layers, with memristors at each crosspoint. There have recently been notable improvements in the fabrication of such memristive crossbars and their integration with CMOS circuits, including first demonstrations of their vertical integration. Separately, discrete memristors have been used as artificial synapses in neuromorphic networks. Very recently, such experiments have been extended to crossbar arrays of phase-change memristive devices. The adjustment of such devices, however, requires an additional transistor at each crosspoint, and hence these devices are much harder to scale than metal-oxide memristors, whose nonlinear current\u2013voltage curves enable transistor-free operation. Here we report the experimental implementation of transistor-free metal-oxide memristor crossbars, with device variability sufficiently low to allow operation of integrated neural networks, in a simple network: a single-layer perceptron (an algorithm for linear classification). The network can be taught in situ using a coarse-grain variety of the delta rule algorithm to perform the perfect classification of 3 \u00d7 3-pixel black/white images into three classes (representing letters). This demonstration is an important step towards much larger and more complex memristive neuromorphic networks."}
{"_id": "1866a86d4b0dcee27bc3c5f9ed7909b09da8d236", "title": "A Natural Language Tutorial Dialogue System for Physics", "text": "We describe theWHY2-ATLAS intelligent tutoring system for qualitative physics that interacts with students via natural language dialogue. We focus on the issue of analyzing and responding to multi-sentential explanations. We explore approaches for achieving a deeper understanding of these explanations and dialogue management approaches and strategies for providing appropriate feedback on them."}
{"_id": "6916f6f2fbafea44cf772579a37c70f0d64cdbc0", "title": "Nonword repetition and sentence repetition as clinical markers of specific language impairment: the case of Cantonese.", "text": "PURPOSE\nRecent research suggests that nonword repetition (NWR) and sentence repetition (SR) tasks can be used to discriminate between children with SLI and their typically developing age-matched (TDAM) and younger (TDY) peers.\n\n\nMETHOD\nFourteen Cantonese-speaking children with SLI and 30 of their TDAM and TDY peers were compared on NWR and SR tasks. NWR of IN nonwords (CV combinations attested in the language) and OUT nonwords (CV combinations unattested in the language) were compared. SR performance was compared using 4 different scoring methods.\n\n\nRESULTS\nThe SLI group did not score significantly lower than the TDAM group on the test of NWR (overall results were TDAM = SLI > TDY). There were nonsignificant group differences on IN syllables but not on OUT syllables. The results do not suggest a limitation in phonological working memory in Cantonese-speaking children with SLI. The SR task discriminated between children and their TDAM peers but not between children with SLI and their TDY peers matched for mean length of utterance.\n\n\nCONCLUSIONS\nSR but not NWR discriminates between children with SLI and their TDAM peers. Poorer NWR for English-speaking children with SLI might be attributable to weaker use of the redintegration strategy in word repetition. Further cross-linguistic investigations of processing strategies are required."}
{"_id": "34f3955cb11db849789f7fbc78eb3cb347dd573d", "title": "Combining multiple feature selection methods for stock prediction: Union, intersection, and multi-intersection approaches", "text": ""}
{"_id": "136cdff5828759b32302c8ba3f0111753286e534", "title": "Data mining for yield enhancement in semiconductor manufacturing and an empirical study", "text": "During wafer fabrication, process data, equipment data, and lot history will be automatically or semi-automatically recorded and accumulated in database for monitoring the process, diagnosing faults, and managing manufacturing. However, in high-tech industry such as semiconductor manufacturing, many factors that are interrelated affect the yield of fabricated wafers. Engineers who rely on personal domain knowledge cannot find possible root causes of defects rapidly and effectively. This study aims to develop a framework for data mining and knowledge discovery from database that consists of a Kruskal\u2013Wallis test, K-means clustering, and the variance reduction splitting criterion to investigate the huge amount of semiconductor manufacturing data and infer possible causes of faults and manufacturing process variations. The extracted information and knowledge is helpful to engineers as a basis for trouble shooting and defect diagnosis. We validated this approach with an empirical study in a semiconductor foundry company in Taiwan and the results demonstrated the practical viability of this approach. 2006 Elsevier Ltd. All rights reserved."}
{"_id": "2bf0d52ff202eb65c66727584bdee44a14c6c1ef", "title": "ING FOR TEXT CLASSIFICATION", "text": "Zero-shot Learners are models capable of predicting unseen classes. In this work, we propose a Zero-shot Learning approach for text categorization. Our method involves training model on a large corpus of sentences to learn the relationship between a sentence and embedding of sentence\u2019s tags. Learning such relationship makes the model generalize to unseen sentences, tags, and even new datasets provided they can be put into same embedding space. The model learns to predict whether a given sentence is related to a tag or not; unlike other classifiers that learn to classify the sentence as one of the possible classes. We propose three different neural networks for the task and report their accuracy on the test set of the dataset used for training them as well as two other standard datasets for which no retraining was done. We show that our models generalize well across new unseen classes in both cases. Although the models do not achieve the accuracy level of the state of the art supervised models, yet it evidently is a step forward towards general intelligence in natural language processing."}
{"_id": "4ef973984a8ea481edf74e0d2074e19d0389e76b", "title": "Three-Dimensional Object Recognition from Single Two-Dimensional Images", "text": "A computer vision system has been implemented that can recognize threedimensional objects from unknown viewpoints in single gray-scale images. Unlike most other approaches, the recognition is accomplished without any attempt to reconstruct depth information bottom-up from the visual input. Instead, three other mechanisms are used that can bridge the gap between the two-dimensional image and knowledge of three-dimensional objects. First, a process of perceptual organization is used to form groupings and structures in the image that are likely to be invariant over a wide range of viewpoints. Second, a probabilistic ranking method is used to reduce the size of the search space during model based matching. Finally, a process of spatial correspondence brings the projections of three-dimensional models into direct correspondence with the image by solving for unknown viewpoint and model parameters. A high level of robustness in the presence of occlusion and missing data can be achieved through full application of a viewpoint consistency constraint. It is argued that similar mechanisms and constraints form the basis for recognition in human vision. This paper has been published in Artificial Intelligence, 31, 3 (March 1987), pp. 355\u2013395."}
{"_id": "4c794724efba21333fa10b727609187e3cfee398", "title": "Bispectrum estimation: A digital signal processing framework", "text": "It is the purpose of this tutorial paper to place bispectrum estimation in a digital signal processing framework in order to aid engineers in grasping the utility of the available bispectrum estimation techniques, to discuss application problems that can directly benefit from the use of the bispectrum, and to motivate research in this area. Three general reasons are behind the use of bispectrum in signal processing and are addressed in the paper: to extract information due to deviations from normality, to estimate the phase of parametric signals, and to detect and characterize the properties of nonlinear mechanisms that generate time series."}
{"_id": "d92092849ca6bbf689fd45ee47205677907b0242", "title": "Database Security", "text": "Computer security is concerned with the ability of a computer system to enforce a security policy governing the disclosure, modification, or destruction of information. The security policy may be specific to the organization, or may be generic. For example, the DoD mandatory security (or multilevel security) poficies restrict access to classified information to cleared personnel. Discretionary security policies, on the other hand, define access restrictions based on the identity of users (or groups), the type of access (e.g., select, update, insert, delete), the specific object being accessed, and perhaps other factors (time of day, which application program is being used, etc.). Different types of users (system managers, database administrators, and ordinary users) may have different access rights to the data in the system. The access controls commonly found in most database systems are examples of discretionary access controls. This paper discusses discretionary security and mandatory security for database systems. We outline the current state of research in database security and briefly discuss some open research issues."}
{"_id": "92e5dfff21bb7fc2bece1036057a9597306cd5b7", "title": "General duality between optimal control and estimation", "text": "Optimal control and estimation are dual in the LQG setting, as Kalman discovered, however this duality has proven difficult to extend beyond LQG. Here we obtain a more natural form of LQG duality by replacing the Kalman-Bucy filter with the information filter. We then generalize this result to non-linear stochastic systems, discrete stochastic systems, and deterministic systems. All forms of duality are established by relating exponentiated costs to probabilities. Unlike the LQG setting where control and estimation are in one-to-one correspondence, in the general case control turns out to be a larger problem class than estimation and only a sub-class of control problems have estimation duals. These are problems where the Bellman equation is intrinsically linear. Apart from their theoretical significance, our results make it possible to apply estimation algorithms to control problems and vice versa."}
{"_id": "295521cfe1a56458d53a58613de5fb92c97c5c23", "title": "FAST: fast architecture sensitive tree search on modern CPUs and GPUs", "text": "In-memory tree structured index search is a fundamental database operation. Modern processors provide tremendous computing power by integrating multiple cores, each with wide vector units. There has been much work to exploit modern processor architectures for database primitives like scan, sort, join and aggregation. However, unlike other primitives, tree search presents significant challenges due to irregular and unpredictable data accesses in tree traversal.\n In this paper, we present FAST, an extremely fast architecture sensitive layout of the index tree. FAST is a binary tree logically organized to optimize for architecture features like page size, cache line size, and SIMD width of the underlying hardware. FAST eliminates impact of memory latency, and exploits thread-level and datalevel parallelism on both CPUs and GPUs to achieve 50 million (CPU) and 85 million (GPU) queries per second, 5X (CPU) and 1.7X (GPU) faster than the best previously reported performance on the same architectures. FAST supports efficient bulk updates by rebuilding index trees in less than 0.1 seconds for datasets as large as 64Mkeys and naturally integrates compression techniques, overcoming the memory bandwidth bottleneck and achieving a 6X performance improvement over uncompressed index search for large keys on CPUs."}
{"_id": "26dae2294cef1ab062b14bf7fe5649ddc88df046", "title": "Architectural Data Flow Analysis", "text": "Quality properties including performance, security and compliance are crucial for a system's success but are hard to prove, especially for complex systems. Data flow analyses support this but often only consider source code and thereby introduce high costs of repair. Data flow analyses on the architectural design level use call-and-return semantics or event-based communication between components but do not define data flows as first class entities or consider important runtime or deployment configurations. We propose introducing data flows as first class entities on the architectural level. Analyses ensure that systems meet the quality requirements even after changes in e.g. runtime or deployment configurations. Having data flows modeled as first class entities allows analyzing compliance with privacy laws, requirements for external service providers, and throughput requirements in big data scenarios on architectural level. The results allow early, cost-efficient fixing of issues."}
{"_id": "22e74d49ed9fd3834e1fc265188cd8d6aa292087", "title": "Amortized Inference and Learning in Latent Conditional Random Fields for Weakly-Supervised Semantic Image Segmentation", "text": "Conditional random fields (CRFs) are commonly employed as a post-processing tool for image segmentation tasks. The unary potentials of the CRF are often learnt independently by a classifier, thereby decoupling the inference in CRF from the training of classifier. Such a scheme works effectively, when pixel-level labelling is available for all the images. However, in absence of pixel-level labels, the classifier is faced with the uphill task of selectively assigning the image-level labels to the pixels of the image. Prior work often relied on localization cues, such as saliency maps, objectness priors, bounding boxes etc., to address this challenging problem. In contrast, we model the labels of the pixels as latent variables of a CRF. The pixels and the image-level labels are the observed variables of the latent CRF. We amortize the cost of inference in the latent CRF over the entire dataset, by training an inference network to approximate the posterior distribution of the latent variables given the observed variables. The inference network can be trained in an end-to-end fashion, and requires no localization cues for training. Moreover, unlike other approaches for weakly-supervised segmentation, the proposed model doesn\u2019t require further post-processing. The proposed model achieves performance comparable with other approaches that employ saliency masks for the task of weakly-supervised semantic image segmentation on the challenging VOC 2012 dataset."}
{"_id": "02e8bb6cdc91cc2d94b2b2a40e254028600ec736", "title": "Common-centroid capacitor placement considering systematic and random mismatches in analog integrated circuits", "text": "One of the most important issues during the analog layout phase is to achieve accurate capacitance ratios. However, systematic and random mismatches will affect the accuracy of the capacitance ratios. A common-centroid placement is helpful to reduce the systematic mismatch, but it still needs the property of high dispersion to reduce the random mismatch [10]. To deal with this problem, we propose a simulated annealing [15] based approach to construct a common-centroid placement which exhibits the highest possible degree of dispersion. To facilitate this framework, we first propose the pair-sequence representation to represent a common-centroid placement. Then, we present three operations to perturb the representation, which can increase the degree of dispersion without breaking the common-centroid constraint in the resulting placement. Finally, to enhance the efficiency of our simulated annealing based approach, we propose three techniques to speed up our program. The experimental results show that our placements can simultaneously achieve smaller oxide-gradient-induced mismatch and larger overall correlation coefficients (i.e., higher degree of dispersion) than [10] in all test cases. Besides, our program can run much faster than [10] in larger benchmarks."}
{"_id": "bc2e3161c077b2358895b1ac4cdd45c8af1e4f64", "title": "agriGO: a GO analysis toolkit for the agricultural community", "text": "Gene Ontology (GO), the de facto standard in gene functionality description, is used widely in functional annotation and enrichment analysis. Here, we introduce agriGO, an integrated web-based GO analysis toolkit for the agricultural community, using the advantages of our previous GO enrichment tool (EasyGO), to meet analysis demands from new technologies and research objectives. EasyGO is valuable for its proficiency, and has proved useful in uncovering biological knowledge in massive data sets from high-throughput experiments. For agriGO, the system architecture and website interface were redesigned to improve performance and accessibility. The supported organisms and gene identifiers were substantially expanded (including 38 agricultural species composed of 274 data types). The requirement on user input is more flexible, in that user-defined reference and annotation are accepted. Moreover, a new analysis approach using Gene Set Enrichment Analysis strategy and customizable features is provided. Four tools, SEA (Singular enrichment analysis), PAGE (Parametric Analysis of Gene set Enrichment), BLAST4ID (Transfer IDs by BLAST) and SEACOMPARE (Cross comparison of SEA), are integrated as a toolkit to meet different demands. We also provide a cross-comparison service so that different data sets can be compared and explored in a visualized way. Lastly, agriGO functions as a GO data repository with search and download functions; agriGO is publicly accessible at http://bioinfo.cau.edu.cn/agriGO/."}
{"_id": "b890772a9b44fc9f153c63e7046f427067613c6d", "title": "Snakeboard motion planning with viscous friction and skidding", "text": "The snakeboard is a well-studied example for mechanical systems analysis, largely because of its simultaneous richness in behavior and simplicity in design. However, few snakeboard models incorporate dissipative friction in the traveling direction and skidding as a violation of the rigid nonholonomic constraints. In this paper we investigate these effects on trajectory planning by evaluating a previously proposed friction model as well as a novel skidding model based on the addition of Rayleigh dissipation functions. We show how these additions change the usual behavior of gaits in the forward planning problem, and incorporate the changes into the solutions of the inverse planning problem by utilizing body coordinates along with a curvature parameterization for trajectories."}
{"_id": "32fb146b50b43aa10a09536f1d5b2359de98b442", "title": "Paddy diseases identification with texture analysis using fractal descriptors based on fourier spectrum", "text": "The efforts to increasing the quantity and quality of rice production are obstructed by the paddy disease. This research attempted to identify the four major paddy diseases in Indonesia (leaf blast, brown spot, bacterial leaf blight, and tungro) using fractal descriptors to analyze the texture of the lesions. The lesion images were extracted manually. The descriptors of `S' component of each lesion images then used in classification process using probabilistic neural networks. This techniques achieved at least 83.00% accuracy when identifying the diseases. This method has a potential to be used as one of the feature if it combined with other features, especially when two diseases with relatively same color involved."}
{"_id": "1b6920882e24bfd74cf71df46f3cc9c4a6ed3004", "title": "Proceedings of the ACL-2012 Special Workshop on Rediscovering 50 Years of Discoveries", "text": "The ACL Anthology Network (AAN)1 is a comprehensive manually curated networked database of citations and collaborations in the field of Computational Linguistics. Each citation edge in AAN is associated with one or more citing sentences. A citing sentence is one that appears in a scientific article and contains an explicit reference to another article. In this paper, we shed the light on the usefulness of AAN citing sentences for understanding research trends and summarizing previous discoveries and contributions. We also propose and motivate several different uses and applications of citing sentences."}
{"_id": "37de340b2a26a94a0e1db02a155cacb33c10c746", "title": "Flexible UHF/VHF Vivaldi antenna for broadband and gas balloon applications", "text": "A Vivaldi flexible antenna with a \u22126 dB bandwidth from 150 MHz to 2000 MHz is introduced. The antenna is fabricated on a 60\u221760 cm2 silicone substrate. In this paper, we present the design, realization and performances of this wide band and directive antenna. The proposed structure is lightweight, easy to realize and does not require any matching network. The targeted application is the radio-localization of signal source emissions from helium gas balloon. Six antennas are integrated on the bottom side of the balloon and the information is recovered with a cable that serves to its stabilization."}
{"_id": "49f4917447472a35088a678efe2850bd2ef83fb5", "title": "A survey of trust computation models for service management in internet of things systems", "text": "In this paper we survey trust computation models for Internet of things (IoT) systems for the purpose of service management, i.e., whether or not to select an IoT device as a service provider. Future IoT systems will connect the physical world into cyberspace everywhere and everything via billions of smart objects, and are expected to have a high economic impact. To date there is little work on trust computation in IoT environments for service management, especially for dealing with misbehaving owners of IoT devices that provide services to other IoT devices in the system. Our approach is to classify existing trust computation models for service management in IoT systems based on five essential design dimensions for a trust computation model: trust composition, trust propagation, trust aggregation, trust update, and trust formation. We summarize pros and cons of each dimension\u2019s options, and highlight the effectiveness of defense mechanisms against malicious attacks. We also summarize the most, least, and little visited trust computation techniques in the literature and provide insight on the effectiveness of trust computation techniques as applying to IoT systems. Finally, we identify gaps in IoT trust computation research and suggest future research directions."}
{"_id": "af98f81714b8a44cc209dbabb1827abe1aa0cab7", "title": "Trust management for the internet of things and its application to service composition", "text": "The Internet of Things (IoT) integrates a large amount of everyday life devices from heterogeneous network environments, bringing a great challenge into security and reliability management. Recognizing that the smart objects in IoT are most likely human-carried or human-operated devices, we propose a scalable trust management protocol for IoT, with the emphasis on social relationships. We consider multiple trust properties including honesty, cooperativeness, and community-interest to account for social interaction. Each node performs trust evaluation towards a limited set of devices of its interest only. The trust management protocol is event-driven upon the occurrence of a social encounter or interaction event, and trust is aggregated using both direct observations and indirect recommendations. We analyze the effect of trust parameters on trust assessment accuracy and trust convergence time. Our results show that there exists a trade-off between trust assessment accuracy vs. trust convergence time in the presence of false recommendations attacks performed by malicious nodes. We demonstrate the effectiveness of the proposed trust management protocol with a trust-based service composition application. Our results indicate that trust-based service composition significantly outperforms non-trust-based (random) service composition and its performance approaches the maximum achievable performance with global knowledge."}
{"_id": "1b148b263743ae21ba53c52d33ee90c18828bf83", "title": "Embedded Interaction: Interacting with the Internet of Things", "text": "The Internet of Things assumes that objects have digital functionality and can be identified and tracked automatically. The main goal of embedded interaction is to look at new opportunities that arise for interactive systems and the immediate value users gain. The authors developed various prototypes to explore novel ways for human-computer interaction (HCI), enabled by the Internet of Things and related technologies. Based on these experiences, they derive a set of guidelines for embedding interfaces into people's daily lives."}
{"_id": "407cf7a598d69c7802d16ada79d25e3c59275c9b", "title": "TRM-IoT: A trust management model based on fuzzy reputation for internet of things", "text": "Since a large scale Wireless Sensor Network (WSN) is to be completely integrated into Internet as a core part of Internet of Things (IoT) or Cyber Physical System (CPS), it is necessary to consider various security challenges that come with IoT/CPS, such as the detection of malicious attacks. Sensors or sensor embedded things may establish direct communication between each other using 6LoWPAN protocol. A trust and reputation model is recognized as an important approach to defend a large distributed sensor networks in IoT/CPS against malicious node attacks, since trust establishment mechanisms can stimulate collaboration among distributed computing and communication entities, facilitate the detection of untrustworthy entities, and assist decision-making process of various protocols. In this paper, based on in-depth understanding of trust establishment process and quantitative comparison among trust establishment methods, we present a trust and reputation model TRM-IoT to enforce the cooperation between things in a network of IoT/CPS based on their behaviors. The accuracy, robustness and lightness of the proposed model is validated through a wide set of simulations."}
{"_id": "e11f9ca6e574c779bdf0a868c368e5b1567a1517", "title": "Learned Optimizers that Scale and Generalize", "text": "Learning to learn has emerged as an important direction for achieving artificial intelligence. Two of the primary barriers to its adoption are an inability to scale to larger problems and a limited ability to generalize to new tasks. We introduce a learned gradient descent optimizer that generalizes well to new tasks, and which has significantly reduced memory and computation overhead. We achieve this by introducing a novel hierarchical RNN architecture, with minimal perparameter overhead, augmented with additional architectural features that mirror the known structure of optimization tasks. We also develop a meta-training ensemble of small, diverse optimization tasks capturing common properties of loss landscapes. The optimizer learns to outperform RMSProp/ADAM on problems in this corpus. More importantly, it performs comparably or better when applied to small convolutional neural networks, despite seeing no neural networks in its meta-training set. Finally, it generalizes to train Inception V3 and ResNet V2 architectures on the ImageNet dataset for thousands of steps, optimization problems that are of a vastly different scale than those it was trained on. We release an open source implementation of the meta-training algorithm."}
{"_id": "abd7efe11691d4be550e0d829c36c8332c9eb2a4", "title": "Rational Trust Modeling", "text": "Trust models are widely used in various computer science disciplines. The primary purpose of a trust model is to continuously measure the trustworthiness of a set of entities based on their behaviors. In this article, the novel notion of rational trust modeling is introduced by bridging trust management and game theory. Note that trust models/reputation systems have been used in game theory (e.g., repeated games) for a long time, however, game theory has not been utilized in the process of trust model construction; this is the novelty of our approach. In our proposed setting, the designer of a trust model assumes that the players who intend to utilize the model are rational/selfish, i.e., they decide to become trustworthy or untrustworthy based on the utility that they can gain. In other words, the players are incentivized (or penalized) by the model itself to act properly. The problem of trust management can be then approached by game theoretical analyses and solution concepts such as Nash equilibrium. Although rationality might be built-in in some existing trust models, we intend to formalize the notion of rational trust modeling from the designer\u2019s perspective. This approach will result in two fascinating outcomes. First of all, the designer of a trust model can incentivize trustworthiness in the first place by incorporating proper parameters into the trust function, which can be later utilized among selfish players in strategic trust-based interactions (e.g., e-commerce scenarios). Furthermore, using a rational trust model, we can prevent many well-known attacks on trust models. These two prominent properties also help us to predict the behavior of the players in subsequent steps by game theoretical analyses."}
{"_id": "bc04b8f54488d74b61cf835ec7551c512c09ac9a", "title": "Crosslingual Distributed Representations of Words", "text": "Distributed representations of words have proven extremely useful in numerous natural language processing tasks. Their appeal is that they can help alleviate data sparsity problems common to supervised learning. Methods for inducing these representations require only unlabeled language data, which are plentiful for many natural languages. In this work, we induce distributed representations for a pair of languages jointly. We treat it as a multitask learning problem where each task corresponds to a single word, and task relatedness is derived from co-occurrence statistics in bilingual parallel data. These representations can be used for a number of crosslingual learning tasks, where a learner can be trained on annotations present in one language and applied to test data in another. We show that our representations are informative by using them for crosslingual document classification, where classifiers trained on these representations substantially outperform strong baselines when applied to a new language."}
{"_id": "5e1751757c0f4289dac45f474a94226edefe9ded", "title": "Measuring the impact of opening the London shared bicycle scheme to casual users", "text": "0968-090X/$ see front matter 2011 Published b doi:10.1016/j.trc.2011.12.004 \u21d1 Corresponding author. Tel.: +44 20 7679 7214. E-mail address: n.lathia@cs.ucl.ac.uk (N. Lathia). The increasing availability of sensor data in urban areas now offers the opportunity to perform continuous evaluations of transport systems and measure the effects of policy changes, in an empirical, large-scale, and non-invasive way. In this paper, we study one such example: the effect of changing the user-access policy in the London Barclays Cycle Hire scheme. When the scheme was launched in July 2010, users were required to apply for a key to access to the system. By December 2010, this policy was overridden in order to allow for \u2018\u2018casual\u2019\u2019 usage, so that anyone in possession of a debit or credit card could gain access. While the transport authority measured the policy shift\u2019s success by the increased number of trips, we set out to investigate how the change affected the system\u2019s usage throughout the city. We present an extensive analysis of station data collected from the scheme\u2019s web site both preand post-policy change, showing how differences in both global and local behaviour can be measured, and how the policy change correlates with a variety of effects observed around the city. We find that, as expected, quicker access to the system correlates with greater week end usage; it also reinforces the week-day commuting trend. In both the preand post-change periods, the geographic distribution of activity at individual stations forms concentric circles around central London. However, upon policy change, a number of stations undergo a complete usage change, now exhibiting an opposite trend with respect to that which they had prior to the policy change. 2011 Published by Elsevier Ltd."}
{"_id": "f121154f0a7625fbb1613bd4cc2e705f9de8fd0c", "title": "Boosted Regression Active Shape Models", "text": "We present an efficient method of fitting a set of local feature models to an image within the popular Active Shape Model (ASM) framework [3]. We compare two different types of non-linear boosted feature models trained using GentleBoost [9]. The first type is a conventional feature detector classifier, which learns a discrimination function between the appearance of a feature and the local neighbourhood. The second local model type is a boosted regression predictor which learns the relationship between the local neighbourhood appearance and the displacement from the true feature location. At run-time the second regression model is much more efficient as only the current feature patch needs to be processed. We show that within the local iterative search of the ASM the local feature regression provides improved localisation on two publicly available human face test sets as well as increasing the search speed by a factor of eight."}
{"_id": "9bd1d9770cbf37e7825b2b36e7015f72a60b4f14", "title": "3D Printing for the Rapid Prototyping of Structural Electronics", "text": "In new product development, time to market (TTM) is critical for the success and profitability of next generation products. When these products include sophisticated electronics encased in 3D packaging with complex geometries and intricate detail, TTM can be compromised - resulting in lost opportunity. The use of advanced 3D printing technology enhanced with component placement and electrical interconnect deposition can provide electronic prototypes that now can be rapidly fabricated in comparable time frames as traditional 2D bread-boarded prototypes; however, these 3D prototypes include the advantage of being embedded within more appropriate shapes in order to authentically prototype products earlier in the development cycle. The fabrication freedom offered by 3D printing techniques, such as stereolithography and fused deposition modeling have recently been explored in the context of 3D electronics integration - referred to as 3D structural electronics or 3D printed electronics. Enhanced 3D printing may eventually be employed to manufacture end-use parts and thus offer unit-level customization with local manufacturing; however, until the materials and dimensional accuracies improve (an eventuality), 3D printing technologies can be employed to reduce development times by providing advanced geometrically appropriate electronic prototypes. This paper describes the development process used to design a novelty six-sided gaming die. The die includes a microprocessor and accelerometer, which together detect motion and upon halting, identify the top surface through gravity and illuminate light-emitting diodes for a striking effect. By applying 3D printing of structural electronics to expedite prototyping, the development cycle was reduced from weeks to hours."}
{"_id": "6aa98c0d9ed4d20f54b3f584cf94ce369f374517", "title": "K- NEAREST NEIGHBOR ALGORITHM FOR INSTANCE BASED LEARNING", "text": "Instance Based Learning (IBL) results in classifying a new instance by examining and comparing it to the rest of the instances in the dataset. An example of this type of learning is the K-Nearest Neighbor algorithm which is based on examining an average Euclidian distance of the nearest k neighbors' parameters given a certain situation."}
{"_id": "75968944df14fa314a69ad7330ba614d689cb0f0", "title": "Systems of Systems Engineering: Basic Concepts, Model-Based Techniques, and Research Directions", "text": "The term \u201cSystem of Systems\u201d (SoS) has been used since the 1950s to describe systems that are composed of independent constituent systems, which act jointly towards a common goal through the synergism between them. Examples of SoS arise in areas such as power grid technology, transport, production, and military enterprises. SoS engineering is challenged by the independence, heterogeneity, evolution, and emergence properties found in SoS. This article focuses on the role of model-based techniques within the SoS engineering field. A review of existing attempts to define and classify SoS is used to identify several dimensions that characterise SoS applications. The SoS field is exemplified by a series of representative systems selected from the literature on SoS applications. Within the area of model-based techniques the survey specifically reviews the state of the art for SoS modelling, architectural description, simulation, verification, and testing. Finally, the identified dimensions of SoS characteristics are used to identify research challenges and future research areas of model-based SoS engineering."}
{"_id": "9c377cd0ff9a8bac5fb22d8990c8e27cc9f6956a", "title": "EnsembleHMD: Accurate Hardware Malware Detectors with Specialized Ensemble Classifiers", "text": "Hardware-based malware detectors (HMDs) are a promising new approach to defend against malware. HMDs collect low-level architectural features and use them to classify malware from normal programs. With simple hardware support, HMDs can be always on, operating as a first line of defense that prioritizes the application of more expensive and more accurate software-detector. In this paper, our goal is to increase the accuracy of HMDs, to improve detection, and reduce overhead. We use specialized detectors targeted towards a specific type of malware to improve the detection of each type. Next, we use ensemble learning techniques to improve the overall accuracy by combining detectors. We explore detectors based on logistic regression (LR) and neural networks (NN). The proposed detectors reduce the false-positive rate by more than half compared to using a single detector, while increasing their sensitivity. We develop metrics to estimate detection overhead; the proposed detectors achieve more than 16.6x overhead reduction during online detection compared to an idealized software-only detector, with an 8x improvement in relative detection time. NN detectors outperform LR detectors in accuracy, overhead (by 40%), and time-to-detection of the hardware component (by 5x). Finally, we characterize the hardware complexity by extending an open-core and synthesizing it on an FPGA platform, showing that the overhead"}
{"_id": "0853727984d419650ad475e5bab0177ff95de079", "title": "Crunch Time: The Reasons and Effects of Unpaid Overtime in the Games Industry", "text": "The games industry is notorious for its intense work ethics with uncompensated overtime and weekends at the office, also known as crunch or crunch time. Since crunch time is so common within the industry, is it possible that the benefits of crunch time outweigh the disadvantages? By studying postmortems and conducting interviews with employees in the industry, we aim to characterise crunch time and discover its effects on the industry. We provide a classification of crunch, i.e., four types of crunch which all have distinct characteristics and affect the product, employees and schedule in various ways. One of the crunch types stands out from the others by only having positive effects on product and schedule. A characteristic that all of the types have in common is an increase in stress levels amongst the employees. We identify a set of reasons for crunch and show that crunch is less pronounced in game studios where prioritisation of features is a regular practice."}
{"_id": "9a78a7fe80382cda184ae183775195332c9140e7", "title": "How to Train Your Browser: Preventing XSS Attacks Using Contextual Script Fingerprints", "text": "Cross-Site Scripting (XSS) is one of the most common web application vulnerabilities. It is therefore sometimes referred to as the \u201cbuffer overflow of the web.\u201d Drawing a parallel from the current state of practice in preventing unauthorized native code execution (the typical goal in a code injection), we propose a script whitelisting approach to tame JavaScript-driven XSS attacks. Our scheme involves a transparent script interception layer placed in the browser\u2019s JavaScript engine. This layer is designed to detect every script that reaches the browser, from every possible route, and compare it to a list of valid scripts for the site or page being accessed; scripts not on the list are prevented from executing. To avoid the false positives caused by minor syntactic changes (e.g., due to dynamic code generation), our layer uses the concept of contextual fingerprints when comparing scripts.\n Contextual fingerprints are identifiers that represent specific elements of a script and its execution context. Fingerprints can be easily enriched with new elements, if needed, to enhance the proposed method\u2019s robustness. The list can be populated by the website\u2019s administrators or a trusted third party. To verify our approach, we have developed a prototype and tested it successfully against an extensive array of attacks that were performed on more than 50 real-world vulnerable web applications. We measured the browsing performance overhead of the proposed solution on eight websites that make heavy use of JavaScript. Our mechanism imposed an average overhead of 11.1% on the execution time of the JavaScript engine. When measured as part of a full browsing session, and for all tested websites, the overhead introduced by our layer was less than 0.05%. When script elements are altered or new scripts are added on the server side, a new fingerprint generation phase is required. To examine the temporal aspect of contextual fingerprints, we performed a short-term and a long-term experiment based on the same websites. The former, showed that in a short period of time (10 days), for seven of eight websites, the majority of valid fingerprints stay the same (more than 92% on average). The latter, though, indicated that, in the long run, the number of fingerprints that do not change is reduced. Both experiments can be seen as one of the first attempts to study the feasibility of a whitelisting approach for the web."}
{"_id": "0458cec30079a53a2b7726a14f5dd826b9b39bfd", "title": "Affordance detection of tool parts from geometric features", "text": "As robots begin to collaborate with humans in everyday workspaces, they will need to understand the functions of tools and their parts. To cut an apple or hammer a nail, robots need to not just know the tool's name, but they must localize the tool's parts and identify their functions. Intuitively, the geometry of a part is closely related to its possible functions, or its affordances. Therefore, we propose two approaches for learning affordances from local shape and geometry primitives: 1) superpixel based hierarchical matching pursuit (S-HMP); and 2) structured random forests (SRF). Moreover, since a part can be used in many ways, we introduce a large RGB-Depth dataset where tool parts are labeled with multiple affordances and their relative rankings. With ranked affordances, we evaluate the proposed methods on 3 cluttered scenes and over 105 kitchen, workshop and garden tools, using ranked correlation and a weighted F-measure score [26]. Experimental results over sequences containing clutter, occlusions, and viewpoint changes show that the approaches return precise predictions that could be used by a robot. S-HMP achieves high accuracy but at a significant computational cost, while SRF provides slightly less accurate predictions but in real-time. Finally, we validate the effectiveness of our approaches on the Cornell Grasping Dataset [25] for detecting graspable regions, and achieve state-of-the-art performance."}
{"_id": "56b49bea38398550c0eebf922b4b7d9566696782", "title": "Space syntax and spatial cognition: or why the axial line?", "text": "Space syntax research has found that spatial configuration alone explains a substantial proportion of the variance between aggregate human movement rates in different locations in both urban and building interior space. Although it seems possible to explain how people move on the basis of these analyses, the question of why they move this way has always seemed problematic because the analysis contains no explicit representations of either motivations or individual cognition. One possible explanation for the method\u2019s predictive power is that some aspects of cognition are implicit in space syntax analysis. This article reviews the contribution made by syntax research to the understanding of environmental cognition. It proposes that cognitive space, defined as that space which supports our understanding of configurations more extensive than our current visual field, is not a metric space, but topological. A hypothetical process for deriving a nonmetric space from the metric visibility graph involving exploratory movement is developed. The resulting space is shown to closely resemble the axial graph. One of the favored assumptions of those dealing with human action is that the \u201cprime mover\u201d is individual motivation or goal-directed behavior. The evidence from space syntax research, however, is that one can get quite a long way in predicting movement behavior without invoking goals and motivations and in fact, without explicitly assuming anything about individuals or their cognitive capacity. This empirical success is sometimes taken to suggest that individuals and their motivations are not considered important or of interest in space syntax research. Here I argue the reverse. I propose that clues to the nature of individual motivation and cognition may be implicit in space 30 ENVIRONMENT AND BEHAVIOR, Vol. 35 No. 1, January 2003 30-65 DOI: 10.1177/0013916502238864 \u00a9 2003 Sage Publications \u00a9 2003 SAGE Publications. All rights reserved. Not for commercial use or unauthorized distribution. at PENNSYLVANIA STATE UNIV on April 16, 2008 http://eab.sagepub.com Downloaded from syntax theory and analysis, and by making these explicit, the field can contribute to a better understanding of individual-level mechanisms. It is important to note that space syntax research started out as research into societies as distinct to individuals, and it is perhaps this way of slicing the cake\u2014taking a view on what is external and common to all individuals and therefore potentially social\u2014that has led us to take a quite fundamentally different approach to those commonly adopted by researchers who start through a consideration of the individual. I will suggest that the light syntax research can throw on issues of cognition stems from taking this social and therefore broadly exosomatic perspective. I will start, however, with an apparently relatively minor outcome of this program of social research, which was probably unforeseen at the outset but which has become central to the broad social theory proposed by space syntax. 
Space syntax analysis (which represents and quantifies aspects of spatial pattern) has found that spatial configuration correlates powerfully with observed movement by both pedestrians (e.g., Hillier, Burdett, Peponis, & Penn, 1987; Hillier, Hanson, Peponis, Hudson, & Burdett, 1983; Hillier, Penn, Hanson, Grajewski, & Xu, 1993; Peponis, Hadjinikolaou, Livieratos, & Fatouros, 1989; Read, 1999) and drivers (Penn, Hillier, Banister, & Xu, 1998a, 1998b) (Figure 1). This degree of correlation is surprising because the analysis appears not to incorporate many of the factors considered critical in previous efforts to model human movement patterns through the built environment. The analysis incorporates only certain aspects of the geometry of the environment and makes no mention of the motivations or intentions of the movers either explicitly through the use of origin-destination (O-D) information or implicitly by inclusion of land use, development density, or other factors that might act as a proxy for these. Even the geometric descriptions it uses are extremely parsimonious. No direct account is taken of the metric properties of space, and the analysis that to date has shown the most consistent empirical correlation reduces the effects of metric distance to a minimum and places the most emphasis on the average number of changes of direction encountered on routes, not to specific destinations but to all possible destinations. This appears to eliminate a key factor in most modeling approaches based on rational choice, in which the main \u201ccost\u201d associated with travel\u2014 and which it is assumed the rational individual will seek to minimize while achieving their goals\u2014is usually taken as travel time judged on the basis of metric distance. It is, however, exactly the parsimony of the representation and the consistent strength of the empirical results that makes for the practical usefulness of syntax analysis as a design tool (see www.spacesyntax.com for a summary of recent applications projects). Few assumptions need to be made about future Penn / SPACE SYNTAX AND SPATIAL COGNITION 31 \u00a9 2003 SAGE Publications. All rights reserved. Not for commercial use or unauthorized distribution. at PENNSYLVANIA STATE UNIV on April 16, 2008 http://eab.sagepub.com Downloaded from 32 ENVIRONMENT AND BEHAVIOR / January 2003"}
{"_id": "960d29a046a38aacdcd8c0153cb2b592e66bb0d4", "title": "Integrated path planning and dynamic steering control for autonomous vehicles", "text": "A method is presented for combining two previously proposed algorithms for path-planning and dynamic steering control into a computationally feasible scheme for real-time feedback control of autonomous vehicles in uncertain environments. In the proposed approach to vehicle guidance and control, Path Relaxation is used to compute critical points along a globally desirable path using a priori information and sensor data'. Generalized potential fields are then used for local feedback control to drive the vehicle along a collision-free path using the critical points as subgoals'. Simulation results are presented to demonstrate the control scheme."}
{"_id": "e0398ab99daa5236720cd1d91e5b150985aac4f3", "title": "Food image analysis: Segmentation, identification and weight estimation", "text": "We are developing a dietary assessment system that records daily food intake through the use of food images taken at a meal. The food images are then analyzed to extract the nutrient content in the food. In this paper, we describe the image analysis tools to determine the regions where a particular food is located (image segmentation), identify the food type (feature classification) and estimate the weight of the food item (weight estimation). An image segmentation and classification system is proposed to improve the food segmentation and identification accuracy. We then estimate the weight of food to extract the nutrient content from a single image using a shape template for foods with regular shapes and area-based weight estimation for foods with irregular shapes."}
{"_id": "e92329142f66b782312d23bb35166a826fc08255", "title": "Developing an Assessment Method of Active Aging: University of Jyvaskyla Active Aging Scale.", "text": "OBJECTIVE\nTo develop an assessment method of active aging for research on older people.\n\n\nMETHOD\nA multiphase process that included drafting by an expert panel, a pilot study for item analysis and scale validity, a feedback study with focus groups and questionnaire respondents, and a test-retest study. Altogether 235 people aged 60 to 94 years provided responses and/or feedback.\n\n\nRESULTS\nWe developed a 17-item University of Jyvaskyla Active Aging Scale with four aspects in each item (goals, ability, opportunity, and activity; range 0-272). The psychometric and item properties are good and the scale assesses a unidimensional latent construct of active aging.\n\n\nDISCUSSION\nOur scale assesses older people's striving for well-being through activities pertaining to their goals, abilities, and opportunities. The University of Jyvaskyla Active Aging Scale provides a quantifiable measure of active aging that may be used in postal questionnaires or interviews in research and practice."}
{"_id": "b0a3acd8e5be85102ce3c3627de8c5cbecb78c6e", "title": "A comparison of radial and axial flux structures in electrical machines", "text": "This paper explores the comparative advantages and disadvantages of the simplest form of axial flux geometry, the single sided version, compared to the most common form of radial flux geometry with an inside rotor, when considering permanent magnet synchronous machines. A detailed literature review of comparisons is presented. New material is presented highlighting the benefits of single sided axial flux geometry. The constraints and assumptions made when comparing possibilities are discussed in detail, including a study of the biases these can introduce. The basis of this comparison is founded on constant electromagnetic airgap shear stress, being the product of electric loading and magnetic loading, and indeed the constancy of both of those factors. The metric used for comparison is simply that of the masses of the active materials, steel, copper, and magnet material. Examples are presented, including a machine which recently went into production in quantity. A range of lesser issues which are relevant when one is making a choice, are presented and discussed."}
{"_id": "dc77e8f0426e360482843cc7b1405dc3d39b2567", "title": "Rethinking Conventional Collaborative Filtering for Recommending Daily Fashion Outfits", "text": "A conventional collaborative \u0080ltering approach using a standard utility matrix fails to capture the aspect of matching clothing items when recommending daily fashion out\u0080ts. Moreover, it is challenged by the new user cold-start problem. In this paper, we describe a novel approach for guiding users in selecting daily fashion out\u0080ts, by providing out\u0080t recommendations from a system consisting of an Internet of \u008cings wardrobe enabled with RFID technology and a corresponding mobile application. We show where a conventional collaborative \u0080ltering approach comes short when recommending fashion out\u0080ts, and how our novel approach\u2014powered by machine learning algorithms\u2014shows promising results in the domain of fashion recommendation. Evaluation of our novel approach using a real-world dataset demonstrates the system\u2019s e\u0082ectiveness and its ability to provide daily out\u0080t recommendations that are relevant to the users. A non-comparable evaluation of the conventional approach is also given."}
{"_id": "80a4a999f784f1cc2e1b8779ca4c9d42fe3a696d", "title": "The Effect of Using E-Learning Tools in Online and Campus-based Classrooms on Student Performance", "text": "Creating an integrative research framework that extends a model frequently used in the Information Systems field, the Technology Acceptance Model, together with variables used in the Education field, this empirical study investigates the factors influencing student performance as reflected by their final course grade. The Technology Acceptance Model explains computer acceptance in general terms. The model measures the impact of external variables on internal beliefs, attitudes, and intentions. Perceived Usefulness and Perceived Ease of Use, two main constructs in the model, refer to an individual\u2019s perception of how the adoption of a new technology will increase their efficiency, and the individual\u2019s perception of how easy the technology will be to use. The lower the perceived effort is, the easier the technology will be to adopt. Thus, Perceived Usefulness, Perceived Ease of Use, Computer Self-Efficacy, and Computer Anxiety were measured to determine their effect on student performance."}
{"_id": "295efa3e6bfebe2481e806536adb1891ccb5f550", "title": "Simple and Space-Efficient Minimal Perfect Hash Functions", "text": "A perfect hash function (PHF) h : U \u2192 [0, m \u2212 1] for a key set S is a function that maps the keys of S to unique values. The minimum amount of space to represent a PHF for a given set S is known to be approximately 1.44n/m bits, where n = |S|. In this paper we present new algorithms for construction and evaluation of PHFs of a given set (for m = n and m = 1.23n), with the following properties: 1. Evaluation of a PHF requires constant time. 2. The algorithms are simple to describe and implement, and run in linear time. 3. The amount of space needed to represent the PHFs is around a factor 2 from the information theoretical minimum. No previously known algorithm has these properties. To our knowledge, any algorithm in the literature with the third property either: \u2013 Requires exponential time for construction and evaluation, or \u2013 Uses near-optimal space only asymptotically, for extremely large n. Thus, our main contribution is a scheme that gives low space usage for realistic values of n. The main technical ingredient is a new way of basing PHFs on random hypergraphs. Previously, this approach has been used to design simple PHFs with superlinear space usage. \u22c6 This work was supported in part by GERINDO Project\u2013grant MCT/CNPq/CTINFO 552.087/02-5, and CNPq Grants 30.5237/02-0 (Nivio Ziviani) and 142786/2006-3 (Fabiano C. Botelho) 3 This version of the paper is identical to the one published in the WADS 2007 proceedings. Unfortunately, it does not give reference and credit to the paper The Bloomier Filter: An Efficient Data Structure for Static Support Lookup Tables, by Chazelle et al., Proceedings of SODA 2004. They present a way of constructing PHFs that is equivalent to ours. It is explained as a modification of the \u201cBloomier Filter\u201d data structure at the end of Section 3.3, but they do not make explicit that a PHF is constructed. Thus, the simple construction of a PHF described must be attributed to Chazelle et al. The new contribution of this paper is to analyze and optimize the constant of the space usage considering implementation aspects as well as a way of constructing MPHFs from that PHFs."}
{"_id": "122eda0ff026311fbb6e8d589262da0b9821c19f", "title": "Fast and Concurrent RDF Queries with RDMA-Based Distributed Graph Exploration", "text": "Many public knowledge bases are represented and stored as RDF graphs, where users can issue structured queries on such graphs using SPARQL. With massive queries over large and constantly growing RDF data, it is imperative that an RDF graph store should provide low latency and high throughput for concurrent query processing. However, prior systems still experience high perquery latency over large datasets and most prior designs have poor resource utilization such that each query is processed in sequence. We present Wukong, a distributed graph-based RDF store that leverages RDMA-based graph exploration to provide highly concurrent and low-latency queries over large data sets. Wukong is novel in three ways. First, Wukong provides an RDMA-friendly distributed key/value store that provides differentiated encoding and fine-grained partitioning of graph data to reduce RDMA transfers. Second, Wukong leverages full-history pruning to avoid the cost of expensive final join operations, based on the observation that the cost of one-sided RDMA operations is largely oblivious to the payload size to a certain extent. Third, countering conventional wisdom of preferring migration of execution over data, Wukong seamlessly combines data migration for low latency and execution distribution for high throughput by leveraging the low latency and high throughput of onesided RDMA operations, and proposes a worker-obliger model for efficient load balancing. Evaluation on a 6-node RDMA-capable cluster shows that Wukong significantly outperforms state-of-the-art systems like TriAD and Trinity.RDF for both latency and throughput, usually at the scale of orders of magnitude. Short for Sun Wukong, who is known as the Monkey King and is a main character in the Chinese classical novel \u201cJourney to the West\u201d. Since Wukong is known for his extremely fast speed (21,675 kilometers in one somersault) and the ability to fork himself to do massive multi-tasking, we term our system as Wukong. The source code and a brief instruction on how to install Wukong is available at http://ipads.se.sjtu.edu.cn/projects/wukong."}
{"_id": "e95c5f71a5317efce85afe841ec19eff8d89cbfb", "title": "Manual Evaluation of the Pelvic Floor : State of the Art and Evaluative Hypothesis", "text": "The pelvic floor is an anatomical area where the balance of different visceral, muscular and liquid pressures play a fundamental role in the physiological pursuit of the functions of all the structures contained therein. When the balance is broken, multiple disorders and pathologies arise, requiring conservative or surgical multidisciplinary treatments. The focus of this article is to propose a manual evaluation of the musculoskeletal structures of the pelvic floor since a complete palpatory examination taking into account the muscular, articular and ligamentous aspect of the pelvic area is currently missing. The detection of the abnormal area is a determining factor to organize properly the therapeutic work because, potentially resulting in better results. According to the Authors' knowledge, this is the first article in the current scientific landscape proposing a complete manual evaluation."}
{"_id": "3a219fbf0f5b6c88739610ab6ae80001073b5397", "title": "A High-Power $Ka$-Band (31\u201336 GHz) Solid-State Amplifier Based on Low-Loss Corporate Waveguide Combining", "text": "A method of using low-loss waveguide septum combiners is developed into a high-power -band (31-36 GHz) amplifier producing 50 W at 33 GHz (Ka-band) using 32 low-power (>2 W) solid-state amplifier modules. By using low-loss waveguide combining and a packaged monolithic microwave integrated circuit with a low-loss microstrip-to-waveguide launcher, the output loss is minimized, allowing for the overall power-combining efficiency to remain high, 80% (average insertion loss of combiner < 0.7 dB and average insertion loss of launcher <0.3 dB) over 31-36 GHz. In the past, lower power-combining efficiencies have limited the number of modules that can be combined at -band, and hence, have limited the power output. The approach demonstrated in this paper, with high power-combining efficiency, allows a very large number (32) of solid-state amplifier modules to be combined to produce high powers. Greater than 50 W was demonstrated with low power modules, but even higher powers 120 W are possible. The current approach is based on corporate combining, using low-loss waveguide septum combiners that provide isolation, maintaining the true graceful degradation of a modular solid-state amplifier system."}
{"_id": "ab2a41722ee1f2b26575080238ba25f7173a6ae2", "title": "Ka band spatial power-combining amplifier structures", "text": "2.24 w power-amplifier (PA) module at 35 GHz presented using broad-band spatial power-combining system. The combiner can accommodate more monolithic microwave integrated-circuit (MMIC) PA with stagger placement structure on limited microstrip space in Ka-band waveguide structure with good return losses, and heat can dissipated into aluminum carrier quickly. This combiner is based on a slotline-to-microstrip transition structure, which also serves as a four-way power combiner. The proposed 2*2 combining structure combined by vertical stacking inside the waveguide was analyzed and optimized by finite-element-method (FEM) simulations and experiments."}
{"_id": "fa9c7e3c6d55175de25bea79ba66ef91607f3920", "title": "Novel high efficiency broadband Ku band power combiner", "text": "High power solid-state power amplifiers require a high efficiency power dividing/combining structure to keep the power loss as low as possible. The heat sinking capability of the divider/combiner also limits its maximum output power with continues wave (CW) configuration. In this paper, we introduce a novel 8-way Ku band power divider/combiner system, it demonstrate advantages of low loss, broadband and good heat sinking capability simultaneously. As its sub-components, low loss probes for waveguide-to-microstrip transition and low loss broadband 1-to-2 power combiners are designed and fabricated. The measured back-to-back insertion loss of the whole 8-way power combiner is lower than 0.5dB in the whole Ku band, and the corresponding combining efficiency is as high as 94.5%. The simulated thermal resistance of the system is as low as 0.21\u00b0C/W, indicating the proposed power combiner is able to produce 50W of CW output power with commercial available Monolithic Microwave Integrated Circuits (MMICs)."}
{"_id": "f7686113611ac3d03c1cd412ebf46ba5e35b3071", "title": "Image-guided full waveform inversion", "text": "Multiple problems, including high computational cost, spurious local minima, and solutions with no geologic sense, have prevented widespread application of full waveform inversion (FWI), especially FWI of seismic reflections. These problems are fundamentally related to a large number of model parameters and to the absence of low frequencies in recorded seismograms. Instead of inverting for all the parameters in a dense model, image-guided full waveform inversion inverts for a sparse model space that contains far fewer parameters. We represent a model with a sparse set of values, and from these values, we use image-guided interpolation (IGI) and its adjoint operator to compute finelyand uniformly-sampled models that can fit recorded data in FWI. Because of this sparse representation, image-guided FWI updates more blocky models, and this blockiness in the model space mitigates the absence of low frequencies in recorded data. Moreover, IGI honors imaged structures, so image-guided FWI built in this way yields models that are geologically sensible."}
{"_id": "7522ffeb372f703ff0fd0f4299107f53b9d40717", "title": "Neural Network-Based Spectrum Estimation for Online WPE Dereverberation", "text": "In this paper, we propose a novel speech dereverberation framework that utilizes deep neural network (DNN)-based spectrum estimation to construct linear inverse filters. The proposed dereverberation framework is based on the state-of-the-art inverse filter estimation algorithm called weighted prediction error (WPE) algorithm, which is known to effectively reduce reverberation and greatly boost the ASR performance in various conditions. In WPE, the accuracy of the inverse filter estimation, and thus the deverberation performance, is largely dependent on the estimation of the power spectral density (PSD) of the target signal. Therefore, the conventional WPE iteratively performs the inverse filter estimation, actual dereverberation and the PSD estimation to gradually improve the PSD estimate. However, while such iterative procedure works well when sufficiently long acoustically-stationary observed signals are available, WPE\u2019s performance degrades when the duration of observed/accessible data is short, which typically is the case for real-time applications using online block-batch processing with small batches. To solve this problem, we incorporate the DNN-based spectrum estimator into the framework ofWPE, because a DNN can estimate the PSD robustly even from very short observed data. We experimentally show that the proposed framework outperforms the conventional WPE, and improves the ASR performance in real noisy reverberant environments in both single-channel and multichannel cases."}
{"_id": "6c23b705efe6e5143b680345f16e1290bd7e9b9e", "title": "Bayesian artificial intelligence for tackling uncertainty in self-adaptive systems: The case of dynamic decision networks", "text": "In recent years, there has been a growing interest towards the application of artificial intelligence approaches in software engineering (SE) processes. In the specific area of SE for self-adaptive systems (SASs) there is a growing research awareness about the synergy between SE and AI. However, just few significant results have been published. This paper briefly studies uncertainty in SASs and surveys techniques that have been developed to engineer SASs in order to tackle uncertainty. In particular, we highlight techniques that use AI concepts. We also report and discuss our own experience using Dynamic Decision Networks (DDNs) to model and support decision-making in SASs while explicitly taking into account uncertainty. We think that Bayesian inference, and specifically DDNs, provide a useful formalism to engineer systems that dynamically adapt themselves at runtime as more information about the environment and the execution context is discovered during execution. We also discuss partial results, challenges and future research avenues."}
{"_id": "a4bfc06557b3b335f48e4a74a135d507d3616bb2", "title": "Data-driven 3D visual pronunciation of Chinese IPA for language learning", "text": "In the framework of intelligent aided language learning, a real-time data-driven 3D visual pronunciation system of Chinese IPA is proposed. First, a high quality articulatory speech corpus including speech and 3D articulatory data of lips, tongue and jaw movements is collected through Electro-Magnetic Articulograph; second, the 3D articulatory modeling including shape design and motion synthesis is conducted. The articulatory shape is obtained by designing a precise 3D facial model including internal and external articulators. The articulatory motion synthesis is obtained combining parameterized model and anatomical model. The system can thus illustrate the slight differences among phonemes by synthesizing both internal and external articulatory movements. The perceptual evaluation shows the suitability of the system for instructing language learners to articulate."}
{"_id": "c5695d4104e245ad54d3fe8e4ad33e65970c2d6a", "title": "Wearable impedance analyzer based on AD5933", "text": "In this paper a system for measuring impedance, based on AD5933 circuit is presented. The impedance measuring range is between 9 \u03a9 and 18 M\u03a9 for a 1 kHz \u00f7 100 kHz frequency range. Possibilities of expanding this range of measurement are also presented in the paper. The system calibration is automatic and the relative error of the impedance modulus measurement is in the range \u00b12%. Measured impedance main parameters are shown locally on an OLED display but can also be stored in an SD memory card for further processing. The system is portable, modular and adaptable to a large number of applications."}
{"_id": "1d5f156e4e7a0a7e19cc71dc23347854ab44da8a", "title": "Fostering User Engagement: Rhetorical Devices for Applause Generation Learnt from TED Talks", "text": "One problem that every presenter faces when delivering a public discourse is how to hold the listeners\u2019 attentions or to keep them involved. Therefore, many studies in conversation analysis work on this issue and suggest qualitatively constructions that can effectively lead to audience\u2019s applause. To investigate these proposals quantitatively, in this study we analyze the transcripts of 2,135 TED Talks, with a particular focus on the rhetorical devices that are used by the presenters for applause elicitation. Through conducting regression analysis, we identify and interpret 24 rhetorical devices as triggers of audience applauding. We further build models that can recognize applause-evoking sentences and conclude this work with potential implications. Introduction and Motivation Many academic studies have been devoted to the tasks of user engagement characterization. However, applause, as the most direct audience reaction, has till now not been fully investigated and understood. The audience do not just clap whenever they like, they do so only at certain points and are continuously looking for appropriate times when applause can possibly occur (Kuo, 2001). Having a deep understanding of audience\u2019s applause is important because it will help the speakers to better design their speech with recognizable calls for conjoined response, and to make their presentation more appealing and engageable. Despite its importance, to date relatively limited work have been conducted on the topic of applause generation, except a few qualitative studies done by social psychologists. Atkinson (1984) first claimed that applause is closely synchronized with a number of actions on the part of the speakers, which he referred to as \u201crhetorical devices\u201d. He identified three rhetorical devices that are effective in evoking applauses, including: contrasts, three-part lists, and projection of names. Heritage and Greatbatch (1986) found five other basic rhetorical devices, including: puzzlesolution, headline-punchline, combination, position taking This is a pre-print of an article appearing at ICWSM 2017. and pursuit. In addition, new categories were also identified in many recent studies, such as greetings, expressing appreciations, etc. (Bull and Feldman, 2011). To date, research on applause generation has been limited to the analysis of political speeches only. Besides, all of the aforementioned work were conducted using qualitative methods. Many critical questions, such as, What triggers the audience\u2019s applause?, When do audience applaud?, etc., remain unanswered. To address these gaps, this work aims to identify the rhetorical devices for applause elicitation using data-driven methods. To this end, we propose two research questions: RQ1: What are the rhetorical devices that cause the audiences to applaud during a specific part of a speech? RQ2: To what extent the hypothesized rhetorical devices can be used to predict applause generation? To answer both questions, we crawl 2,135 TED talk transcripts and conduct quantitative analysis to investigate the factors that could trigger audience\u2019s applause. We find that factors such as, gratitude expressions, phonetic structure, projection of names, emotion, etc., have significant effects on applause generation. 
These identified factors are later used to build machine learning models that can automatically identify applause-evoking segments."}
{"_id": "e66833e4a8ed96340c8ca1130e0564c2e280da7c", "title": "Battery Management System: An Overview of Its Application in the Smart Grid and Electric Vehicles", "text": "With the rapidly evolving technology of the smart grid and electric vehicles (EVs), the battery has emerged as the most prominent energy storage device, attracting a significant amount of attention. The very recent discussions about the performance of lithium-ion (Li-ion) batteries in the Boeing 787 have confirmed so far that, while battery technology is growing very quickly, developing cells with higher power and energy densities, it is equally important to improve the performance of the battery management system (BMS) to make the battery a safe, reliable, and cost-efficient solution. The specific characteristics and needs of the smart grid and EVs, such as deep charge/discharge protection and accurate state-of-charge (SOC) and state-of-health (SOH) estimation, intensify the need for a more efficient BMS. The BMS should contain accurate algorithms to measure and estimate the functional status of the battery and, at the same time, be equipped with state-of-the-art mechanisms to protect the battery from hazardous and inefficient operating conditions."}
{"_id": "28a619d5a236aba4dba31c8b2756cbb7bdd4e2ee", "title": "AUGMENTED REALITY VISUALIZATION TOOL FOR KINGS STORMWATER BRIDGE", "text": "Augmented Reality (AR) is an emerging visualization technology that promises to change the way people see the world. The main goal of AR is to \u201cenhance\u201d a person\u2019s vision of the real world with useful information about the surrounding environment. Primarily this can be done through the use of a portable computer such as a laptop and a head-mounted display (HMD) unit. When such technology is made available to scientists and engineers, Augmented Reality can provide them with the freedom to do scientific analysis at the actual location of an experiment. With this idea in mind, the goal of this paper is to investigate the development of an AR system as a visualization tool for structural engineers. This particular application will focus on monitoring the \u201chealth\u201d of the Kings Stormwater Bridge structure. Unfortunately, since AR is still in its infancy, much of the research devoted to it has focused mainly on technical issues rather than on the development of how this technology would be used by scientists. This has led to the development of \u201cfunctional\u201d AR systems that primarily can only be operated by the developer of the system ([1], [2], [3], [12], [14]). This paper avoids this trend by focusing on the design of the visualization system so that it is useful for structural engineers. As a result, this should produce a more robust Augmented Reality system that can be used by many different users. Thus, it is the goal of this paper to show how scientists would benefit by designing a visualization system with their needs in mind."}
{"_id": "0c820f32eb814702808fbe8612f6fec363b50438", "title": "PSO-based optimization for isolated intersections signal timings and simulation", "text": "In this paper, based on the analyzing the signal control problem of single intersections, a kind of real-time optimization method with PSO (particle swarm optimization) solving signal timings for single intersection is put forward. On detecting the flow of current cycle and the cycle before, estimating the flow of next circle is proceeded. Setting least delay as performance index, we get each phase time for the next cycle. A simulation experiment for the traffic model at a four-phase intersection is also performed. The result shows that the method is efficient."}
{"_id": "1df5051913989b441e7df2ddc00aa8c3ab5960d0", "title": "Android anti-forensics through a local paradigm", "text": "Mobile devices are among the most disruptive technologies of the last years, gaining even more diffusion and success in the daily life of a wide range of people categories. Unfortunately, while the number of mobile devices implicated in crime activities is relevant and growing, the capability to perform the forensic analysis of such devices is limited both by technological and methodological problems. In this paper, we focus on Anti-Forensic techniques applied to mobile devices, presenting some fully automated instances of such techniques to Android devices. Furthermore, we tested the effectiveness of such techniques versus both the cursory examination of the device and some acquisition tools. a 2010 Digital Forensic Research Workshop. Published by Elsevier Ltd. All rights reserved."}
{"_id": "c52587066bd22ce807501101b7c0b9a9d8727777", "title": "Deriving common malware behavior through graph clustering", "text": "Detection of malicious software (malware) continues to be a problem as hackers devise new ways to evade available methods. The proliferation of malware and malware variants requires methods that are both powerful, and fast to execute. This paper proposes a method to derive the common execution behavior of a family of malware instances. For each instance, a graph is constructed that represents kernel objects and their attributes, based on system call traces. The method combines these graphs to develop a supergraph for the family. This supergraph contains a subgraph, called the HotPath, which is observed during the execution of all the malware instances. The proposed method is scalable, identifies previously-unseen malware instances, shows high malware detection rates, and false positive rates close to 0%."}
{"_id": "8c065f7908aafde7e1446104c605cc1fe04a6f60", "title": "Liability and Computer Security: Nine Principles", "text": "The conventional wisdom is that security priorities should be set by risk analysis. However, reality is subtly different: many computer security systems are at least as much about shedding liability as about minimising risk. Banks use computer security mechanisms to transfer liability to their customers; companies use them to transfer liability to their insurers, or (via the public prosecutor) to the taxpayer; and they are also used to shift the blame to other departments (\u201cwe did everything that GCHQ/the internal auditors told us to\u201d). We derive nine principles which might help designers avoid the most common pitfalls."}
{"_id": "93083f4225ea62b3733a76fc64f9991ed5fd6878", "title": "When Sparse Traditional Models Outperform Dense Neural Networks: the Curious Case of Discriminating between Similar Languages", "text": "We present the results of our participation in the VarDial 4 shared task on discriminating closely related languages. Our submission includes simple traditional models using linear support vector machines (SVMs) and a neural network (NN). The main idea was to leverage language group information. We did so with a two-layer approach in the traditional model and a multi-task objective in the neural network. Our results confirm earlier findings: simple traditional models outperform neural networks consistently for this task, at least given the amount of systems we could examine in the available time. Our two-layer linear SVM ranked 2nd in the shared task."}
{"_id": "c0f7687cef4ae5901ca06d4fefa21304255e3748", "title": "RAN: Radical analysis networks for zero-shot learning of Chinese characters", "text": "Chinese characters have a huge set of character categories, more than 20,000 and the number is still increasing as more and more novel characters continue being created. However, the enormous characters can be decomposed into a few fundamental structural radicals, only about 500. This paper introduces the Radical Analysis Networks (RAN) that recognize Chinese characters by identifying radicals and analyzing 2D spatial structures between them. The proposed RAN first extracts visual features from Chinese character input by employing convolutional neural networks as an encoder. Then a decoder based on recurrent neural networks is employed, who aims to generate a caption of Chinese character by detecting radicals and 2D structures through a spatial attention mechanism. The manner of treating Chinese character input as a composition of radicals rather than a single picture severely reduces the size of vocabulary and enables RAN to possess the ability of recognizing unseen Chinese character classes only if their radicals have been seen, called zero-shot learning. We test a simple implementation of RAN on experiments of recognizing printed Chinese characters with seen and unseen classes and RAN simultaneously obtains convincing performance on unseen classes and state-of-the-art performance on seen classes."}
{"_id": "0f112e49240f67a2bd5aaf46f74a924129f03912", "title": "Age-Invariant Face Recognition", "text": "One of the challenges in automatic face recognition is to achieve temporal invariance. In other words, the goal is to come up with a representation and matching scheme that is robust to changes due to facial aging. Facial aging is a complex process that affects both the 3D shape of the face and its texture (e.g., wrinkles). These shape and texture changes degrade the performance of automatic face recognition systems. However, facial aging has not received substantial attention compared to other facial variations due to pose, lighting, and expression. We propose a 3D aging modeling technique and show how it can be used to compensate for the age variations to improve the face recognition performance. The aging modeling technique adapts view-invariant 3D face models to the given 2D face aging database. The proposed approach is evaluated on three different databases (i.g., FG-NET, MORPH, and BROWNS) using FaceVACS, a state-of-the-art commercial face recognition engine."}
{"_id": "02fdca5fdba792e4f2c70b8b637abe4824343800", "title": "Improving the reliability of commodity operating systems", "text": "Despite decades of research in extensible operating system technology, extensions such as device drivers remain a significant cause of system failures. In Windows XP, for example, drivers account for 85&percent; of recently reported failures.This article describes Nooks, a reliability subsystem that seeks to greatly enhance operating system (OS) reliability by isolating the OS from driver failures. The Nooks approach is practical: rather than guaranteeing complete fault tolerance through a new (and incompatible) OS or driver architecture, our goal is to prevent the vast majority of driver-caused crashes with little or no change to the existing driver and system code. Nooks isolates drivers within lightweight protection domains inside the kernel address space, where hardware and software prevent them from corrupting the kernel. Nooks also tracks a driver's use of kernel resources to facilitate automatic cleanup during recovery.To prove the viability of our approach, we implemented Nooks in the Linux operating system and used it to fault-isolate several device drivers. Our results show that Nooks offers a substantial increase in the reliability of operating systems, catching and quickly recovering from many faults that would otherwise crash the system. Under a wide range and number of fault conditions, we show that Nooks recovers automatically from 99&percent; of the faults that otherwise cause Linux to crash.While Nooks was designed for drivers, our techniques generalize to other kernel extensions. We demonstrate this by isolating a kernel-mode file system and an in-kernel Internet service. Overall, because Nooks supports existing C-language extensions, runs on a commodity operating system and hardware, and enables automated recovery, it represents a substantial step beyond the specialized architectures and type-safe languages required by previous efforts directed at safe extensibility."}
{"_id": "0e7cc7073116b8df6d04e5c08c41977b49fb2940", "title": "Algebraic gossip: a network coding approach to optimal multiple rumor mongering", "text": "The problem of simultaneously disseminating k messages in a large network of n nodes, in a decentralized and distributed manner, where nodes only have knowledge about their own contents, is studied. In every discrete time-step, each node selects a communication partner randomly, uniformly among all nodes and only one message can be transmitted. The goal is to disseminate rapidly, with high probability, all messages to all nodes. It is shown that a random linear coding (RLC) based protocol disseminates all messages to all nodes in time ck + O (\u221ak ln(k) ln(n)), where c < 3.46 using pull-based dissemination and c < 5.96 using push-based dissemination. Simulations suggest that c < 2 might be a tighter bound. Thus, if k \u226b (ln(n))3, the time for simultaneous dissemination RLC is asymptotically at most ck, versus the \u03a9(k log2(n))3 time of sequential dissemination. Furthermore, when k \u226b (ln(n))3, the dissemination time is order optimal. When k \u226a (ln(n))2, RLC reduces dissemination time by a factor of \u03a9(\u221ak/ln k) over sequential dissemination. The overhead of the RLC protocol is negligible for messages of reasonable size. A store-and-forward mechanism without coding is also considered. It is shown that this approach performs no better than a sequential approach when k=\u221e n. Owing to the distributed nature of the system, the proof requires analysis of an appropriate time-varying Bernoulli process."}
{"_id": "58fdcf6b407f5ff1c72e51e127ca9d6b5aa36d79", "title": "Characterizing Variation in Crowd-Sourced Data for Training Neural Language Generators to Produce Stylistically Varied Outputs", "text": "One of the biggest challenges of endto-end language generation from meaning representations in dialogue systems is making the outputs more natural and varied. Here we take a large corpus of 50K crowd-sourced utterances in the restaurant domain and develop text analysis methods that systematically characterize types of sentences in the training data. We then automatically label the training data to allow us to conduct two kinds of experiments with a neural generator. First, we test the effect of training the system with different stylistic partitions and quantify the effect of smaller, but more stylistically controlled training data. Second, we propose a method of labeling the style variants during training, and show that we can modify the style of the generated utterances using our stylistic labels. We contrast and compare these methods that can be used with any existing large corpus, showing how they vary in terms of semantic quality and"}
{"_id": "08fe1ad30ca904f14f3792ab0da6287bb4889729", "title": "Deep Supervised Hashing for Fast Image Retrieval", "text": "In this paper, we present a new hashing method to learn compact binary codes for highly efficient image retrieval on large-scale datasets. While the complex image appearance variations still pose a great challenge to reliable retrieval, in light of the recent progress of Convolutional Neural Networks (CNNs) in learning robust image representation on various vision tasks, this paper proposes a novel Deep Supervised Hashing (DSH) method to learn compact similarity-preserving binary code for the huge body of image data. Specifically, we devise a CNN architecture that takes pairs of images (similar/dissimilar) as training inputs and encourages the output of each image to approximate discrete values (e.g. +1/-1). To this end, a loss function is elaborately designed to maximize the discriminability of the output space by encoding the supervised information from the input image pairs, and simultaneously imposing regularization on the real-valued outputs to approximate the desired discrete values. For image retrieval, new-coming query images can be easily encoded by propagating through the network and then quantizing the network outputs to binary codes representation. Extensive experiments on two large scale datasets CIFAR-10 and NUS-WIDE show the promising performance of our method compared with the state-of-the-arts."}
{"_id": "3440032449b644e320c66fb0333a883dc05794c0", "title": "A Hybrid Cryptosystem Based On Vigenere Cipher and Columnar Transposition Cipher", "text": "Privacy is one of the key issues addressed by information Security. Through cryptographic encryption methods, one can prevent a third party from understanding transmitted raw data over unsecured channel during signal transmission. The cryptographic methods for enhancing the security of digital contents have gained high significance in the current era. Breach of security and misuse of confidential information that has been intercepted by unauthorized parties are key problems that information security tries to solve. This paper sets out to contribute to the general body of knowledge in the area of classical cryptography by developing a new hybrid way of encryption of plaintext. The cryptosystem performs its encryption by encrypting the plaintext using columnar transposition cipher and further using the ciphertext to encrypt the plaintext again using Vigen\u00e8re cipher. At the end, cryptanalysis was performed on the ciphertext. The implementation will be done using java programming."}
{"_id": "229467e56c6093cb1f5927f8ffeddd51ac012934", "title": "Trojan data layouts: right shoes for a running elephant", "text": "MapReduce is becoming ubiquitous in large-scale data analysis. Several recent works have shown that the performance of Hadoop MapReduce could be improved, for instance, by creating indexes in a non-invasive manner. However, they ignore the impact of the data layout used inside data blocks of Hadoop Distributed File System (HDFS). In this paper, we analyze different data layouts in detail in the context of MapReduce and argue that Row, Column, and PAX layouts can lead to poor system performance. We propose a new data layout, coined Trojan Layout, that internally organizes data blocks into attribute groups according to the workload in order to improve data access times. A salient feature of Trojan Layout is that it fully preserves the fault-tolerance properties of MapReduce. We implement our Trojan Layout idea in HDFS 0.20.3 and call the resulting system Trojan HDFS. We exploit the fact that HDFS stores multiple replicas of each data block on different computing nodes. Trojan HDFS automatically creates a different Trojan Layout per replica to better fit the workload. As a result, we are able to schedule incoming MapReduce jobs to data block replicas with the most suitable Trojan Layout. We evaluate our approach using three real-world workloads. We compare Trojan Layouts against Hadoop using Row and PAX layouts. The results demonstrate that Trojan Layout allows MapReduce jobs to read their input data up to 4.8 times faster than Row layout and up to 3.5 times faster than PAX layout."}
{"_id": "4fa0d9c4c3d17458085ee255b7a4b7c325d59e32", "title": "DBpedia - A large-scale, multilingual knowledge base extracted from Wikipedia", "text": "The DBpedia community project extracts structured, multilingual knowledge from Wikipedia and makes it freely available on the Web using Semantic Web and Linked Data technologies. The project extracts knowledge from 111 different language editions of Wikipedia. The largest DBpedia knowledge base which is extracted from the English edition of Wikipedia consists of over 400 million facts that describe 3.7 million things. The DBpedia knowledge bases that are extracted from the other 110 Wikipedia editions together consist of 1.46 billion facts and describe 10 million additional things. The DBpedia project maps Wikipedia infoboxes from 27 different language editions to a single shared ontology consisting of 320 classes and 1,650 properties. The mappings are created via a world-wide crowd-sourcing effort and enable knowledge from the different Wikipedia editions to be combined. The project publishes releases of all DBpedia knowledge bases for download and provides SPARQL query access to 14 out of the 111 language editions via a global network of local DBpedia chapters. In addition to the regular releases, the project maintains a live knowledge base which is updated whenever a page in Wikipedia changes. DBpedia sets 27 million RDF links pointing into over 30 external data sources and thus enables data from these sources to be used together with DBpedia data. Several hundred data sets on the Web publish RDF links pointing to DBpedia themselves and make DBpedia one of the central interlinking hubs in the Linked Open Data (LOD) cloud. In this system report, we give an overview of the DBpedia community project, including its architecture, technical implementation, maintenance, internationalisation, usage statistics and applications."}
{"_id": "c8be59e5991497e1818d5fd9c6d104c9cf3d3657", "title": "Using the WordNet Ontology in the GeoCLEF Geographical Information Retrieval Task", "text": "This paper describes how we managed to use the WordNet ontology for the GeoCLEF 2005 English monolingual task. Both a query expansion method, based on the expansion of geographical terms by means of WordNet synonyms and meronyms, and a method based on the expansion of"}
{"_id": "1c2dbbc5268eff6c78f581b8fc7c649d40b60538", "title": "RelFinder: Revealing Relationships in RDF Knowledge Bases", "text": "The Semantic Web has recently seen a rise of large knowledge bases (such as DBpedia) that are freely accessible via SPARQL endpoints. The structured representation of the contained information opens up new possibilities in the way it can be accessed and queried. In this paper, we present an approach that extracts a graph covering relationships between two objects of interest. We show an interactive visualization of this graph that supports the systematic analysis of the found relationships by providing highlighting, previewing, and filtering features."}
{"_id": "ad5b72dfd0b1e31e0cce2f6419228c6a17bb225a", "title": "Hole-filling Based on Disparity Map and Inpainting for Depth- Image-Based Rendering", "text": "Due to the changes in viewpoint, holes may appear in the novel view synthesized by depth-image-based rendering (DIBR). A hole-filling method combining disparity-mapbased hole-filling and inpainting is proposed. The method first eliminates matching errors in destination image according to the disparity map. Then, holes are classified into different types according to their sizes. Finally, a disparity-map-based approach and an improved exemplar-based inpainting algorithm are used to fill different types of holes according to the type of hole. Experimental results show that the artifacts at the edge of foreground objects can be reduced in synthesized view since the matching errors are eliminated before hole-filling. In addition, the proposed method can achieve more natural and satisfactory results in filled areas in comparison with the disparity-mapbased approach and Gautier\u2019s inpainting algorithm, though it may result in higher time complexity. Keyword: Depth-image-based Rendering, Hole-filling, Holes, Disparity Map, Inpainting"}
{"_id": "53211544bc0c9a0303a1380e422dfaf7642312d8", "title": "A Case for Standard Non-blocking Collective Operations", "text": "In this paper we make the case for adding standard nonblocking collective operations to the MPI standard. The non-blocking point-to-point and blocking collective operations currently defined by MPI provide important performance and abstraction benefits. To allow these benefits to be simultaneously realized, we present an application programming interface for non-blocking collective operations in MPI. Microbenchmark and application-based performance results demonstrate that non-blocking collective operations offer not only improved convenience, but improved performance as well, when compared to manual use of threads with blocking collectives."}
{"_id": "766f1663235ae62ba4c0b6b19ebc11ae4cbc6264", "title": "Designing for collaborative creative problem solving", "text": "Collaborative creativity is traditionally supported by formal techniques, such as brainstorming. These techniques improve the idea-generation process by creating group synergies, but also suffer from a number of negative effects. Current electronic tools to support collaborative creativity overcome some of these problems, but introduce new ones, by either losing the benefits of face-to-face communication or the immediacy of simultaneous contribution. Using an interactive environment as a test bed, we are investigating how collaborative creativity can be supported electronically while maintaining face-to-face communication. What are the design-factors influencing such a system? We have designed a brainstorming application that uses an interactive table and a large wall display, and compared the results of using it to traditional paper-based brainstorming in a user study with 30 participants. From the considerations that went into the design and the observations during the study we derive a number of design guidelines for collaborative systems in interactive environments."}
{"_id": "e7e6c400a0e360cfefa0a0586c80008c7c784e23", "title": "Decision feedback equalization", "text": "As real world communication channels are stressed with higher data rates, intersymbol interference (ISI) becomes a dominant limiting factor. One way to combat this effect that has recently received considerable attention is the use of a decision feedback equalizer (DFE) in the receiver. The action of the DFE is to feed back a weighted sum of past decision to cancel the ISI they cause in the present signaling interval. This paper summarizes the work in this area beginning with the linear equalizer. Three performance criteria have been used to derive optimum systems; 1) minimize the noise variance under a \"zero forcing\" (ZF) constraint i.e., insist that all intersymbol interference is cancelled, 2) minimize the mean-square error (MMSE) between the true sample and the observed signal just prior to the decision threshold, and 3) minimize the probability of error (Min Pe). The transmitter can be fixed and the receiver optimized or one can obtain the joint optimum transmitter and receiver. The number of past decisions used in the feedback equalization can be finite or infinite. The infinite case is easier to handle analytically. In addition to reviewing the work done in the area, we show that the linear equalizer is in fact a portion of the DFE receiver and that the processing done by the DFE is exactly equivalent to the general problem of linear prediction. Other similarities in the various system structures are also shown. The effect of error propagation due to incorrect decisions is discussed, and the coaxial cable channel is used as an example to demonstrate the improvement available using DFE."}
{"_id": "da4e852a2aaec170adb411dee0afd491f93a436d", "title": "Web evolution and Web Science", "text": "This paper examines the evolution of the World Wide Web as a network of networks and discusses the emergence of Web Science as an interdisciplinary area that can provide us with insights on how the Web developed, and how it has affected and is affected by society. Through its different stages of evolution, the Web has gradually changed from a technological network of documents to a network where documents, data, people and organisations are interlinked in various and often unexpected ways. It has developed from a technological artefact separate from people to an integral part of human activity that is having an increasingly significant impact on the world. This paper outlines the lessons from this retrospective examination of the evolution of the Web, presents the main outcomes of Web Science activities and discusses directions along which future developments could be anticipated."}
{"_id": "52c87fb7aa741023504525502fe2ebff5f98dc24", "title": "Self-taught clustering", "text": "This paper focuses on a new clustering task, called self-taught clustering. Self-taught clustering is an instance of unsupervised transfer learning, which aims at clustering a small collection of target unlabeled data with the help of a large amount of auxiliary unlabeled data. The target and auxiliary data can be different in topic distribution. We show that even when the target data are not sufficient to allow effective learning of a high quality feature representation, it is possible to learn the useful features with the help of the auxiliary data on which the target data can be clustered effectively. We propose a co-clustering based self-taught clustering algorithm to tackle this problem, by clustering the target and auxiliary data simultaneously to allow the feature representation from the auxiliary data to influence the target data through a common set of features. Under the new data representation, clustering on the target data can be improved. Our experiments on image clustering show that our algorithm can greatly outperform several state-of-the-art clustering methods when utilizing irrelevant unlabeled auxiliary data."}
{"_id": "2dd309f18df2de3c477dd2148df6edbaddfaae60", "title": "Towards a questionnaire for measuring affective benefits and costs of communication technologies", "text": "As CSCW creates and investigates technologies for social communication, it is important to understand the emotional benefits and costs of these systems. We propose the Affec-tive Benefits and Costs of Communication Technologies (ABCCT) questionnaire to supplement traditional qualita-tive methods of understanding communication media. We discuss the pilots of this survey with 45 children and 110 adults measuring the inter-item reliability of this instru-ment. We present the results of interviews with 14 children and 14 adults, which help confirm that the ABCCT measures the same constructs that may emerge through interview investigations. Finally, we demonstrate that the ABCCT is sensitive enough to discriminate between differ-ent communication technologies and has shown promise in some of its early adoption. Though the ABCCT is not with-out limitations, it may provide a way to compare technolo-gies in field deployments, draw findings across investiga-tions, and quantify the impact of specific design decisions."}
{"_id": "340486c84ae3732ceb3696fd707b759232ee5e87", "title": "Segmenting Two-Sided Markets", "text": "Recent years have witnessed the rise of many successful e-commerce marketplace platforms like the Amazon marketplace, AirBnB, Uber/Lyft, and Upwork, where a central platform mediates economic transactions between buyers and sellers. A common feature of many of these two-sided marketplaces is that the platform has full control over search and discovery, but prices are determined by the buyers and sellers. Motivated by this, we study the algorithmic aspects of market segmentation via directed discovery in two-sided markets with endogenous prices. We consider a model where an online platform knows each buyer/seller\u2019s characteristics, and associated demand/supply elasticities. Moreover, the platform can use discovery mechanisms (search, recommendation, etc.) to control which buyers/sellers are visible to each other. We develop efficient algorithms for throughput (i.e. volume of trade) and welfare maximization with provable guarantees under a variety of assumptions on the demand and supply functions. We also test the validity of our assumptions on demand curves inferred from NYC taxicab log-data, as well as show the performance of our algorithms on synthetic experiments."}
{"_id": "9408579f68f26e36e7b877d721263723f3104af3", "title": "A Hardware Platform for Evaluating Low-Energy Multiprocessor Embedded Systems Based on COTS Devices", "text": "Embedded systems are usually energy constrained. Moreover, in these systems, increased productivity and reduced time to market are essential for product success. To design complex embedded systems while reducing the development time and cost, there is a great tendency to use commercial off-the-shelf (\u201cCOTS\u201d) devices. At system level, dynamic voltage and frequency scaling (DVFS) is one of the most effective techniques for energy reduction. Nonetheless, many widely used COTS processors either do not have DVFS or apply DVFS only to processor cores. In this paper, an easy-to-implement COTS-based evaluation platform for low-energy embedded systems is presented. To achieve energy saving, DVFS is provided for the whole microcontroller (including core, phase-locked loop, memory, and I/O). In addition, facilities are provided for experimenting with fault-tolerance techniques. The platform is equipped with energy measurement and debugging equipment. Physical experiments show that applying DVFS on the whole microcontroller provides up to 47% and 12% energy saving compared with the sole use of dynamic power management and applying DVFS only on the core, respectively. Although the platform is designed for ARM-based embedded systems, our approach is general and can be applied to other types of systems."}
{"_id": "4705978f82dc4e6f45efe97a30f66c0cbecd451d", "title": "Design & comparison of a conventional and permanent magnet based claw-pole machine for automotive application", "text": "This paper presents the design and performance comparison of a conventional claw-pole machine with a permanent magnet based claw-pole machine for automotive application. The magnets are placed in the inter-claw region to decrease the leakage flux and to supply increased magnetic flux in the machine. It has been observed that with the addition of permanent magnets weighing only a few grams, there is a significant increment of more than 22% in output power of the machine. The geometric dimensions of magnets were also varied to verify their effects on performance and it has been observed that with the increase in magnet weight there is a non-linear increment in torque of the machine."}
{"_id": "804c718085a50c7a77e357aab0478985bb120c0e", "title": "Strategic Surge Pricing and Forecast Communication on On-Demand Service Platforms", "text": "On-demand service platforms (e.g., Uber, Lyft) match consumers with independent workers nearby at short notice. To manage fluctuating supply and demand conditions across market locations (zones), many on-demand platforms provide market forecasts to workers and practice surge pricing, wherein the price in a particular zone is temporarily raised above the regular price. We jointly analyze the strategic role of surge pricing and forecast communication explicitly accounting for workers\u2019 incentives to move between zones and the platform\u2019s incentive to share forecasts truthfully. Conventional wisdom suggests that surge pricing is useful in zones where demand for workers exceeds their supply. However, we show that when the platform relies on independent workers to serve consumers across different zones, surge pricing is also useful in zones where supply exceeds demand. Because individual workers do not internalize the competitive externality that they impose on other workers, too few workers may move from a zone with excess supply to an adjacent zone requiring additional workers. Moreover, the platform may have an incentive to misreport market forecasts to exaggerate the need for workers to move. We show how and why distorting the price in a zone with excess supply through surge pricing can increase total platform profit across zones, by incentivizing more workers to move and by making forecast communication credible. Our analysis offers insights for effectively managing on-demand platforms through surge pricing and forecast sharing, and the resulting implications for consumers and workers."}
{"_id": "6e05e7b8a2b1e55329d52389778e46793e9347b9", "title": "Biometric cryptosystems based fuzzy commitment scheme: a security evaluation", "text": "Biometric systems are developed in order to replace traditional authentication. However, protecting the stored templates is considered as one of the critical steps in designing a secure biometric system. When biometric data is compromised, unlike passwords, it can\u2019t be revoked. One methodology for biometric template protection is \u2018Biometric Cryptosystem\u2019. Biometric cryptosystems benefit from both fields of cryptography and biometrics where the biometrics exclude the need to remember passwords and the cryptography provides high security levels for data. In order to, develop these systems, Fuzzy Commitment Scheme (FCS) is considered as well known approach proposed in the literature to protect the user\u2019s data and has been used in several applications. However, these biometric cryptosystems are hampered by the lack of formal security analysis to prove their security strength and effectiveness. Hence, in this paper we present several metrics to analyze the security and evaluate the weaknesses of biometric cryptosystems based on fuzzy commitment scheme."}
{"_id": "0170445683d4197e05032c2769faf829c3f017fd", "title": "Leakage current mechanisms and leakage reduction techniques in deep-submicrometer CMOS circuits", "text": "High leakage current in deep-submicrometer regimes is becoming a significant contributor to power dissipation of CMOS circuits as threshold voltage, channel length, and gate oxide thickness are reduced. Consequently, the identification and modeling of different leakage components is very important for estimation and reduction of leakage power, especially for low-power applications. This paper reviews various transistor intrinsic leakage mechanisms, including weak inversion, drain-induced barrier lowering, gate-induced drain leakage, and gate oxide tunneling. Channel engineering techniques including retrograde well and halo doping are explained as means to manage short-channel effects for continuous scaling of CMOS devices. Finally, the paper explores different circuit techniques to reduce the leakage power consumption."}
{"_id": "47d20dc4f6eb2d2e01f03e3f7585c387faa45830", "title": "The path to personalized medicine.", "text": "n engl j med 363;4 nejm.org july 22, 2010 301 to human illness, identified genetic variability in patients\u2019 responses to dozens of treatments, and begun to target the molecular causes of some diseases. In addition, scientists are developing and using diagnostic tests based on genetics or other molecular mechanisms to better predict patients\u2019 responses to targeted therapy. The challenge is to deliver the benefits of this work to patients. As the leaders of the National Institutes of Health (NIH) and the Food and Drug Administration (FDA), we have a shared vision of personalized medicine and the scientific and regulatory structure needed to support its growth. Together, we have been focusing on the best ways to develop new therapies and optimize prescribing by steering patients to the right drug at the right dose at the right time. We recognize that myriad obstacles must be overcome to achieve these goals. These include scientific challenges, such as determining which genetic markers have the most clinical significance, limiting the off-target effects of gene-based therapies, and conducting clinical studies to identify genetic variants that are correlated with a drug response. There are also policy challenges, such as finding a level of regulation for genetic tests that both protects patients and encourages innovation. To make progress, the NIH and the FDA will invest in advancing translational and regulatory science, better define regulatory pathways for coordinated approval of codeveloped diagnostics and therapeutics, develop risk-based approaches for appropriate review of diagnostics to more accurately assess their validity and clinical utility, and make information about tests readily available. Moving from concept to clinical use requires basic, translational, and regulatory science. On the basic-science front, studies are identifying many genetic variations underlying the risks of both rare and common diseases. These newly discovered genes, proteins, and pathways can represent powerful new drug targets, but currently there is insufficient evidence of a downstream market to entice the private sector to explore most of them. To fill that void, the NIH and the FDA will The Path to Personalized Medicine"}
{"_id": "b9d1b1bab62921aa53fce45ad39050e1316969a7", "title": "On Real-Time Transactions", "text": "Next generation real-time systems will require greater flexibility and predictability than is commonly found in today's systems. These future systems include the space station, integrated vision/robotics/AI systems, collections of humans/robots coordinating to achieve common objectives (usually in hazardous environments such as undersea exploration or chemical plants), and various command and control applications. The complexity of such systems due to timing constraints, concurrency, and distribution is high. It is accepted that the synchronization, failure atomicity, and permanence properties of transactions aid in the development of distributed systems. However, little work has been done in exploiting transactions in a real-time context. We have been attempting to categorize real-time data into classes depending on their time, synchronization, atomicity, and permanence properties. Then, using the semantics of the data and the applications, we are developing special, tailored, real-time transactions that only supply the minimal properties necessary for that class. This reduces the system overhead in supporting access to various types of data. The eventual goal is to verify that timing requirements can be met."}
{"_id": "59db1ff8eff571c29d50373dadbeb1fe3a171c28", "title": "Active ESD protection for input transistors in a 40-nm CMOS process", "text": "This work presents a novel design for input ESD protection. By replacing the protection resistor with an active switch that isolates the input transistors from the pad under ESD stress, the ESD robustness can be greatly improved. The proposed designs were designed and verified in a 40-nm CMOS process using only thin oxide devices, which can successfully pass the typical industry ESD-protection specifications of 2-kV HBM and 200-V MM ESD tests."}
{"_id": "e463c58b48da45386ebcfe0767089f52e7182c30", "title": "Intrinsic Evaluations of Word Embeddings: What Can We Do Better?", "text": "This paper presents an analysis of existing methods for the intrinsic evaluation of word embeddings. We show that the main methodological premise of such evaluations is \u201cinterpretability\u201d of word embeddings: a \u201cgood\u201d embedding produces results that make sense in terms of traditional linguistic categories. This approach is not only of limited practical use, but also fails to do justice to the strengths of distributional meaning representations. We argue for a shift from abstract ratings of word embedding \u201cquality\u201d to exploration of their strengths and weaknesses."}
{"_id": "2491f3044219a07954f4d3c841a51e4fc01674ac", "title": "Significant Triples : Adjective + Noun + Verb Combinations", "text": "We investigate the identification and, to some extent, the classification of collocational word groups that consist of an adjectival modifier (A), an accusative object noun (N), and a verb (V) by means of parsing a newspaper corpus with a lexicalized probabilistic grammar.1 Data triples are extracted from the resulting Viterbi parses and subsequently evaluated by means of a statistical association measure. The extraction results are then compared to predefined descriptive classes of ANV-triples. We also use a decision tree algorithm to classify part of the data obtained, on the basis of a small set of manually classified examples. Much of the candidate data is lexicographically relevant: the triples include idiomatic combinations (e.g. (sich) eine goldene Nase verdienen, \u2018get rich\u2019, lit. \u2018earn oneself a golden nose\u2019), combinations of N+V and N+A collocations (e.g. (eine) klare Absage erteilen, \u2018refuse resolutely\u2019, lit. \u2018give a clear refusal\u2019), next to cases where N+V or N+A collocations are found, in combination with other (not necessarily collocational) context partners. To extract such data from text corpora, a grammar is needed that captures verb+object relations: simple pattern matching on part-of-speech shapes is not sufficient. Statistical tools then allow to order the data in a way useful for subsequent manual selection by lexicographers. 1This work has been carried out in the context of the Transferbereich 32: Automatische Exzerption, a DFG-funded project aiming at the creation of support tools for the corpus-based updating of printed dictionaries in lexicography, carried out in cooperation with the publishers Langenscheidt KG and Duden BIFAB AG."}
{"_id": "cb5a1d0d5c73b55677f1148bc6bf33eb8a8f23ea", "title": "Control of a 9-DoF Wheelchair-mounted robotic arm system using a P300 Brain Computer Interface: Initial experiments", "text": "A wheelchair-mounted robotic arm (WMRA) system was designed and built to meet the needs of mobilityimpaired persons with limitations of upper extremities, and to exceed the capabilities of current devices of this type. The control of this 9-degree-of-freedom system expands upon conventional control methods and combines the 7-DoF robotic arm control with the 2-degree-of-freedom power wheelchair control. The 3- degrees of redundancy are optimized to effectively perform activities of daily living and overcome singularities, joint limits and some workspace limitations. The control system is designed for teleoperated or autonomous coordinated Cartesian control, which offers expandability for future research. A P300 Brain Computer Interface (BCI), the BCI2000, was implemented to control the WMRA system. The control is done by recording and analysing the brain activity through an electrode cap while providing visual stimulation to the user via a visual matrix. The visual matrix contains a symbolic or an alphabetic array corresponding to the motion of the WMRA. By recognizing online and in real-time, which element in the matrix elicited a P300, the BCI system can identify which element the user chose to communicate. The chosen element is then communicated to the controller of the WMRA system. The speed and accuracy of the BCI system was tested. This paper gives details of the WMRA's integration with the BCI2000 and documents the experimental results of the BCI and the WMRA in simulation."}
{"_id": "e834aaa5e2f47c1b2198b9a8812207120c736df9", "title": "Dual-wideband log-periodic antennas", "text": "Two log-periodic (LP) antenna configurations able to suppress a specific frequency band are introduced in this paper. LP antennas belong to the class of frequency independent antennas and are able to achieve multi-decade wide bandwidths. We start with a parametric study of a traditional planar curved tooth LP antenna aimed in determining the means for achieving specific bandwidth. Two techniques for accomplishing consistent near/far-field dual-band performance are introduced next. The first is a LP antenna integrated with a dual-wideband feed line, and the second one is a LP with removed resonant teeth. Antennas and feeds are designed on a microstrip substrate and method of moments modeling within FEKO is used to demonstrate their performance. Required dual-band operation is 1.8-5 GHz and 7-11 GHz."}
{"_id": "1f88427d7aa8225e47f946ac41a0667d7b69ac52", "title": "What is the best multi-stage architecture for object recognition?", "text": "In many recent object recognition systems, feature extraction stages are generally composed of a filter bank, a non-linear transformation, and some sort of feature pooling layer. Most systems use only one stage of feature extraction in which the filters are hard-wired, or two stages where the filters in one or both stages are learned in supervised or unsupervised mode. This paper addresses three questions: 1. How does the non-linearities that follow the filter banks influence the recognition accuracy? 2. does learning the filter banks in an unsupervised or supervised manner improve the performance over random filters or hardwired filters? 3. Is there any advantage to using an architecture with two stages of feature extraction, rather than one? We show that using non-linearities that include rectification and local contrast normalization is the single most important ingredient for good accuracy on object recognition benchmarks. We show that two stages of feature extraction yield better accuracy than one. Most surprisingly, we show that a two-stage system with random filters can yield almost 63% recognition rate on Caltech-101, provided that the proper non-linearities and pooling layers are used. Finally, we show that with supervised refinement, the system achieves state-of-the-art performance on NORB dataset (5.6%) and unsupervised pre-training followed by supervised refinement produces good accuracy on Caltech-101 (\u226b 65%), and the lowest known error rate on the undistorted, unprocessed MNIST dataset (0.53%)."}
{"_id": "24b85f2d66c610ad889eff38985ddffe434cc04f", "title": "The International Caries Detection and Assessment System (ICDAS): an integrated system for measuring dental caries.", "text": "This paper describes early findings of evaluations of the International Caries Detection and Assessment System (ICDAS) conducted by the Detroit Center for Research on Oral Health Disparities (DCR-OHD). The lack of consistency among the contemporary criteria systems limits the comparability of outcomes measured in epidemiological and clinical studies. The ICDAS criteria were developed by an international team of caries researchers to integrate several new criteria systems into one standard system for caries detection and assessment. Using ICDAS in the DCR-OHD cohort study, dental examiners first determined whether a clean and dry tooth surface is sound, sealed, restored, crowned, or missing. Afterwards, the examiners classified the carious status of each tooth surface using a seven-point ordinal scale ranging from sound to extensive cavitation. Histological examination of extracted teeth found increased likelihood of carious demineralization in dentin as the ICDAS codes increased in severity. The criteria were also found to have discriminatory validity in analyses of social, behavioral and dietary factors associated with dental caries. The reliability of six examiners to classify tooth surfaces by their ICDAS carious status ranged between good to excellent (kappa coefficients ranged between 0.59 and 0.82). While further work is still needed to define caries activity, validate the criteria and their reliability in assessing dental caries on smooth surfaces, and develop a classification system for assessing preventive and restorative treatment needs, this early evaluation of the ICDAS platform has found that the system is practical; has content validity, correlational validity with histological examination of pits and fissures in extracted teeth; and discriminatory validity."}
{"_id": "85ef93638f36aa0a54161ad9e729c72d8ed2202b", "title": "Implicit Age Cues in Resumes: Subtle Effects on Hiring Discrimination", "text": "Anonymous resume screening, as assumed, does not dissuade age discriminatory effects. Building on job market signaling theory, this study investigated whether older applicants may benefit from concealing explicitly mentioned age signals on their resumes (date of birth) or whether more implicit/subtle age cues on resumes (older-sounding names/old-fashioned extracurricular activities) may lower older applicants' hirability ratings. An experimental study among 610 HR professionals using a mixed factorial design showed hiring discrimination of older applicants based on implicit age cues in resumes. This effect was more pronounced for older raters. Concealing one's date of birth led to overall lower ratings. Study findings add to the limited knowledge on the effects of implicit age cues on hiring discrimination in resume screening and the usefulness of anonymous resume screening in the context of age. Implications for research and practice are discussed."}
{"_id": "5ad3c058535653b1c898302ffa42d5dccee542e3", "title": "A CNN for the identification of corresponding patches in SAR and optical imagery of urban scenes", "text": "In this paper we propose a convolutional neural network (CNN), which allows to identify corresponding patches of very high resolution (VHR) optical and SAR imagery of complex urban scenes. Instead of a siamese architecture as conventionally used in CNNs designed for image matching, we resort to a pseudo-siamese configuration with no interconnection between the two streams for SAR and optical imagery. The network is trained with automatically generated training data and does not resort to any hand-crafted features. First evaluations show that the network is able to predict corresponding patches with high accuracy, thus indicating great potential for further development to a generalized multi-sensor matching procedure."}
{"_id": "7f0a5ec34054fed46d2bcf3e2f94351fafc404c1", "title": "Retinal Vessel Segmentation using Deep Neural Networks", "text": "Automatic segmentation of blood vessels in fundus images is of great importance as eye diseases as well as some systemic diseases cause observable pathologic modifications. It is a binary classification problem: for each pixel we consider two possible classes (vessel or non-vessel). We use a GPU implementation of deep max-pooling convolutional neural networks to segment blood vessels. We test our method on publiclyavailable DRIVE dataset and our results demonstrate the high effectiveness of the deep learning approach. Our method achieves an average accuracy and AUC of 0.9466 and 0.9749, respectively."}
{"_id": "2d633a55b16b4d5bc4edfd0587789a1c29d20b5c", "title": "A theoretical analysis of Model-Based Interval Estimation", "text": "Several algorithms for learning near-optimal policies in Markov Decision Processes have been analyzed and proven efficient. Empirical results have suggested that Model-based Interval Estimation (MBIE) learns efficiently in practice, effectively balancing exploration and exploitation. This paper presents the first theoretical analysis of MBIE, proving its efficiency even under worst-case conditions. The paper also introduces a new performance metric, average loss, and relates it to its less \"online\" cousins from the literature."}
{"_id": "e8551da1196d674d2e169598ca6bdd0bed79fbe2", "title": "Optimality Properties and Driver Input Parameterization for Trail-braking Cornering", "text": "In this work we present an analysis of rally-racing driving techniques using numerical optimization. We provide empirical guidelines for executing a TrailBraking (TB) maneuver, one of the common cornering techniques used in rally-racing. These guidelines are verified via experimental data collected from TB maneuvers performed by an expert rally driver. We show that a TB maneuver can be generated as a special case of the minimum-time cornering problem subject to specific boundary conditions. We propose simple parameterizations of the driver\u2019s steering, throttle and braking commands that lead to an alternative formulation of the optimization problem with a reduced search space. We show that the proposed parametrization of the driver\u2019s commands can reproduce TB maneuvers along a variety of corner geometries, requiring only a small number of parameters."}
{"_id": "f7edcafdcb9661609aaacf743d32a467a7d3b921", "title": "A Comparison of the Elastic Properties of Graphene- and Fullerene-Reinforced Polymer Composites: The Role of Filler Morphology and Size", "text": "Nanoscale carbon-based fillers are known to significantly alter the mechanical and electrical properties of polymers even at relatively low loadings. We report results from extensive molecular-dynamics simulations of mechanical testing of model polymer (high-density polyethylene) nanocomposites reinforced by nanocarbon fillers consisting of graphene flakes and fullerenes. By systematically varying filler concentration, morphology, and size, we identify clear trends in composite stiffness with reinforcement. To within statistical error, spherical fullerenes provide a nearly size-independent level of reinforcement. In contrast, two-dimensional graphene flakes induce a strongly size-dependent response: we find that flakes with radii in the 2-4\u2009nm range provide appreciable enhancement in stiffness, which scales linearly with flake radius. Thus, with flakes approaching typical experimental sizes (~0.1-1\u2009\u03bcm), we expect graphene fillers to provide substantial reinforcement, which also is much greater than what could be achieved with fullerene fillers. We identify the atomic-scale features responsible for this size- and morphology-dependent response, notably, ordering and densification of polymer chains at the filler-matrix interface, thereby providing insights into avenues for further control and enhancement of the mechanical properties of polymer nanocomposites."}
{"_id": "cf76581ef2150774497382c62a0a2af699b6c6e5", "title": "Structured Approaches for Exploring Interpersonal Relationships in Natural Language Text", "text": "Title of dissertation: STRUCTURED APPROACHES FOR EXPLORING INTERPERSONAL RELATIONSHIPS IN NATURAL LANGUAGE TEXT Snigdha Chaturvedi, Doctor of Philosophy, 2016 Dissertation directed by: Dr. Hal Daum\u00e9 III Department of Computer Science Human relationships have long been studied by scientists from domains like sociology, psychology, literature, etc. for understanding people\u2019s desires, goals, actions and expected behaviors. In this dissertation we study inter-personal relationships as expressed in natural language text. Modeling inter-personal relationships from text finds application in general natural language understanding, as well as real-world domains such as social networks, discussion forums, intelligent virtual agents, etc. We propose that the study of relationships should incorporate not only linguistic cues in text, but also the contexts in which these cues appear. Our investigations, backed by empirical evaluation, support this thesis, and demonstrate that the task benefits from using structured models that incorporate both types of information. We present such structured models to address the task of modeling the nature of relationships between any two given characters from a narrative. To begin with, we assume that relationships are of two types: cooperative and non-cooperative. We first describe an approach to jointly infer relationships between all characters in the narrative, and demonstrate how the task of characterizing the relationship between two characters can benefit from including information about their relationships with other characters in the narrative. We next formulate the relationshipmodeling problem as a sequence prediction task to acknowledge the evolving nature of human relationships, and demonstrate the need to model the history of a relationship in predicting its evolution. Thereafter, we present a data-driven method to automatically discover various types of relationships such as familial, romantic, hostile, etc. Like before, we address the task of modeling evolving relationships but don\u2019t restrict ourselves to two types of relationships. We also demonstrate the need to incorporate not only local historical but also global context while solving this problem. Lastly, we demonstrate a practical application of modeling inter-personal relationships in the domain of online educational discussion forums. Such forums offer opportunities for its users to interact and form deeper relationships. With this view, we address the task of identifying initiation of such deeper relationships between a student and the instructor. Specifically, we analyze contents of the forums to automatically suggest threads to the instructors that require their intervention. By highlighting scenarios that need direct instructor-student interactions, we alleviate the need for the instructor to manually peruse all threads of the forum and also assist students who have limited avenues for communicating with instructors. We do this by incorporating the discourse structure of the thread through latent variables that abstractly represent contents of individual posts and model the flow of information in the thread. Such latent structured models that incorporate the linguistic cues without losing their context can be helpful in other related natural language understanding tasks as well. 
We demonstrate this by using the model for a very different task: identifying if a stated desire has been fulfilled by the end of a story."}
{"_id": "4b9a9fb54b3451e4212e298053f81f0cd49d70a2", "title": "Context-Awareness for Mobile Sensing: A Survey and Future Directions", "text": "The evolution of smartphones together with increasing computational power has empowered developers to create innovative context-aware applications for recognizing user-related social and cognitive activities in any situation and at any location. The existence and awareness of the context provide the capability of being conscious of physical environments or situations around mobile device users. This allows network services to respond proactively and intelligently based on such awareness. The key idea behind context-aware applications is to encourage users to collect, analyze, and share local sensory knowledge in the purpose for a large-scale community use by creating a smart network. The desired network is capable of making autonomous logical decisions to actuate environmental objects and also assist individuals. However, many open challenges remain, which are mostly arisen because the middleware services provided in mobile devices have limited resources in terms of power, memory, and bandwidth. Thus, it becomes critically important to study how the drawbacks can be elaborated and resolved and, at the same time, better understand the opportunities for the research community to contribute to the context-awareness. To this end, this paper surveys the literature over the period of 1991-2014 from the emerging concepts to applications of context-awareness in mobile platforms by providing up-to-date research and future research directions. Moreover, it points out the challenges faced in this regard and enlightens them by proposing possible solutions."}
{"_id": "e4644cca621794d6a4c719aef0ae6af6967fff6b", "title": "Recent advances in image processing techniques for automated harvesting purposes: A review", "text": "Image processing has been proved to be an effective tool for analysis in various human activity areas, namely, agricultural applications. Interpreting a digital color image of fruit orchard captured in field environment is extremely challenging due to adverse weather conditions, luminance variability and the presence of dust, insects and other unavoidable image noises. The purpose of this survey is to categorize and briefly review the literature on computer analysis of fruit images in agricultural operations, which comprises more than 60 papers published in the last 10 years. With the aim to perform applied research in agricultural imaging, this paper intends to focus on advanced image processing and analysis techniques used in applications for detection and classifications of fruits, developed in the last decade. For the reviewed techniques, some performance evaluation metrics achieved in various experiments are emphasized to help the researchers when making choices and develop new computer vision applications in fruit images."}
{"_id": "4c65005c8822c3117bd3c3746e3a9b9e17386328", "title": "Bitcoin and Cryptocurrency Technologies - A Comprehensive Introduction", "text": ""}
{"_id": "0ee6f08cd730930a352fe55c0f34e54d1a536c1d", "title": "On-Line New Event Detection and Tracking", "text": "We define and describe the related problems of new event detection and event tracking within a stream of broadcast news stories. We focus on a strict on-line setting-i.e., the system must make decisions about one story before looking at any subsequent stories. Our approach to detection uses a single pass clustering algorithm and a novel thresholding model that incorporates the properties of events as a major component. Our approach to tracking is similar to typical information filtering methods. We discuss the value of \u201csurprising\u201d features that have unusual occurrence characteristics, and briefly explore on-line adaptive filtering to handle evolving events in the news. New event detection and event tracking are part of the Topic Detection and Tracking (TDT) initiative."}
{"_id": "3ab8be80708701aea2fe951d01b36fffb3632eb2", "title": "Does visual attention select objects or locations?", "text": "Much research supports location-based attentional selection, but J. Duncan (1984) presented data favoring object-based selection in a shape discrimination task. Does attention select objects or locations? We confirmed that Duncan's task elicits selection from spatially invariant object representations rather than from a grouped location-based representation. We next asked whether this finding was due to location-based filtering; the results again supported object-based selection. Finally, we demonstrated that when Duncan's objects were used in a cued detection task the results were consistent with location-based selection. These results suggest that there may not be a single attention mechanism, consistent with Duncan's original claim that object-based and location-based attentional selection are not mutually exclusive. Rather, attentional limitations may depend on the type of stimulus representation used in performing a given task."}
{"_id": "6ddfd8bde3cf3bf1244ba41d0a739f9bae0918d2", "title": "The Genetic Structure of Marijuana and Hemp", "text": "Despite its cultivation as a source of food, fibre and medicine, and its global status as the most used illicit drug, the genus Cannabis has an inconclusive taxonomic organization and evolutionary history. Drug types of Cannabis (marijuana), which contain high amounts of the psychoactive cannabinoid \u03949-tetrahydrocannabinol (THC), are used for medical purposes and as a recreational drug. Hemp types are grown for the production of seed and fibre, and contain low amounts of THC. Two species or gene pools (C. sativa and C. indica) are widely used in describing the pedigree or appearance of cultivated Cannabis plants. Using 14,031 single-nucleotide polymorphisms (SNPs) genotyped in 81 marijuana and 43 hemp samples, we show that marijuana and hemp are significantly differentiated at a genome-wide level, demonstrating that the distinction between these populations is not limited to genes underlying THC production. We find a moderate correlation between the genetic structure of marijuana strains and their reported C. sativa and C. indica ancestry and show that marijuana strain names often do not reflect a meaningful genetic identity. We also provide evidence that hemp is genetically more similar to C. indica type marijuana than to C. sativa strains."}
{"_id": "0405470a8a399463f7d53a971988a54526aa5aa6", "title": "Solutions to the GSM Security Weaknesses", "text": "Recently, the mobile industry has experienced an extreme increment in number of its users. The GSM network with the greatest worldwide number of users succumbs to several security vulnerabilities. Although some of its security problems are addressed in its upper generations, there are still many operators using 2G systems. This paper briefly presents the most important security flaws of the GSM network and its transport channels. It also provides some practical solutions to improve the security of currently available 2G systems."}
{"_id": "94aa90a61e27c79fe08bdce545d03ce696bf7f8b", "title": "Performance on Raven \u2019 s Advanced Progressive Matrices by African , East Indian , and White engineering students in South Africa", "text": "The hypothesis is tested that the Raven\u2019s Advanced Progressive Matrices (APM) has the same construct validity in African university students as it does in non-African university students. Analyses were made of scores from 294 highly select 17\u201323-year-olds in the Faculties of Engineering and the Built Environment at the University of the Witwatersrand (187 Africans, 40 East Indians, 67 Whites; 70 women, 224 men). Out of a total of 36 problems, the African students solved an average of 22, the East Indian students, 25, and the White students, 29 (P < .001), placing them at the 57th, 64th, and 86th percentiles, respectively, and yielding IQ equivalents of 103, 106, and 117 on the 1993 US norms. Four months earlier, they had completed the Standard Progressive Matrices. The two tests correlated .60 or higher for both the Africans and the non-Africans, and both tests predicted final end-of-year grades with mean r\u2019s = .30 (P\u2019s < .05). Items found difficult by one group were difficult for the others; items found easy by one group were easy for the others (mean r\u2019s = .90, P < .001). The African\u2013East Indian\u2013White differences were \u2018\u2018Jensen Effects,\u2019\u2019 being most pronounced on the general factor of intelligence (measured in this instance by items with the highest item-total correlations). Indeed, the g loadings showed cross-cultural generality: For example, item-total correlations calculated on the East Indian students predicted the magnitude of the African\u2013White differences. When each of the 0160-2896/03/$ \u2013 see front matter D 2003 Elsevier Science Inc. All rights reserved. PII: S0160 -2896 (02 )00140 -X * Corresponding author. Tel.: +1-519-661-3685; fax: +1-519-850-2302. E-mail addresses: Rushton@uwo.ca (J.P. Rushton), 135skuy@mentor.edcm.wits.ac.za (M. Skuy). 1 Tel.: +27-11-716-5286; fax: +27-11-339-3844. Intelligence 31 (2003) 123\u2013137 36 g loadings and race effects were aggregated into nine four-item \u2018\u2018subtests,\u2019\u2019 the magnitude of the Jensen Effect was Spearman\u2019s r = .52. There were no sex differences. D 2003 Elsevier Science Inc. All rights reserved."}
{"_id": "2019d480c9a8b983661e00a4a5affc4f6549f296", "title": "THRIVE: Threshold Homomorphic encryption based secure and privacy preserving bIometric VErification system", "text": "In this paper, we introduce a new biometric verification and template protection system which we call THRIVE. The system includes novel enrollment and authentication protocols based on threshold homomorphic encryption where a private key is shared between a user and a verifier. In the THRIVE system, only encrypted binary biometric templates are stored in a database and verification is performed via homomorphically randomized templates, thus, original templates are never revealed during authentication. Due to the underlying threshold homomorphic encryption scheme, a malicious database owner cannot perform full decryption on encrypted templates of the users in the database. In addition, security of the THRIVE system is enhanced using a two-factor authentication scheme involving user\u2019s private key and biometric data. Using simulation-based techniques, the proposed system is proven secure in the malicious model. The proposed system is suitable for applications where the user does not want to reveal her biometrics to the verifier in plain form, but needs to prove her identity by using biometrics. The system can be used with any biometric modality where a feature extraction method yields a fixed size binary template and a query template is verified when its Hamming distance to the database template is less than a threshold. The overall connection time for the proposed THRIVE system is estimated to be 336 ms on average for 256-bit biometric templates on a desktop PC running with quad core 3.2 GHz CPUs at 10 Mbit/s up/down link connection speed. Consequently, the proposed system can be efficiently used in real-life applications."}
{"_id": "10c81f3e782acd97b5ace5c5840ea71be6f99740", "title": "Key performance indicators for traffic intensive web-enabled business processes", "text": "Purpose \u2013 Intensive traffic often occurs in web-enabled business processes hosted by travel industry and government portals. An extreme case for intensive traffic is flash crowd situations when the number of web users spike within a short time due to unexpected events caused by political unrest or extreme weather conditions. As a result, the servers hosting these business processes can no longer handle overwhelming service requests. To alleviate this problem, process engineers usually analyze audit trail data collected from the application server and reengineer their business processes to withstand unexpected surge in the visitors. However, such analysis can only reveal the performance of the application server from the internal perspective. This paper aims to investigate this issue. Design/methodology/approach \u2013 This paper proposes an approach for analyzing key performance indicators of traffic intensive web-enabled business processes from audit trail data, web server logs, and stress testing logs. Findings \u2013 The key performance indicators identified in the study\u2019s approach can be used to understand the behavior of traffic intensive web-enabled business processes and the underlying factors that affect the stability of the web server. Originality/value \u2013 The proposed analysis also provides an internal as well as an external view of the performance. Moreover, the calculated key performance indicators can be used by the process engineers for locating potential bottlenecks, reengineering business processes, and implementing contingency measures for traffic intensive situations."}
{"_id": "83b6a98a3792f82ce00130b662da96af527b82c0", "title": "MobiLearn go: mobile microlearning as an active, location-aware game", "text": "Mobile technologies hold great potential to make studying both more effective and more enjoyable. In this work we present a mobile, microlearning application. Our system is designed with two goals: be flexible enough to support learning in any subject and encourage frequent short study sessions in a variety of contexts. We discuss the use of our application to assess the feasibility of microlearning for non-language learning and the relationship between the physical location of study sessions and information retention."}
{"_id": "48b9d350a0f20fdce7c8edc198fb3b3350600157", "title": "Clustering with Deep Learning: Taxonomy and New Methods", "text": "Clustering methods based on deep neural networks have proven promising for clustering real-world data because of their high representational power. In this paper we propose a systematic taxonomy of clustering methods that utilize deep neural networks. We base our taxonomy on a comprehensive review of recent work and validate the taxonomy in a case study. In this case study, we show that the taxonomy enables researchers and practitioners to systematically create new clustering methods by selectively recombining and replacing distinct aspects of previous methods with the goal of overcoming their individual limitations. The experimental evaluation confirms this and shows that the method created for the case study achieves state-of-the-art clustering quality and surpasses it in some cases."}
{"_id": "3ac313ebfbd8426febe3d5d825a87ea8a85db587", "title": "Color-Ciratefi : A color-based RST-invariant template matching algorithm", "text": "Template matching is a technique widely used for finding patterns in digital images. An efficient template matching algorithm should be able to detect template instances that have undergone geometric transformations. Similarly, a color template matching should be able to deal with color constancy problem. Recently we have proposed a new grayscale template matching algorithm named Ciratefi, invariant to rotation, scale, translation, brightness and contrast. In this paper we introduce the Color-Ciratefi that takes into account the color information. We use a new similarity metric in the CIELAB space to obtain invariance to brightness and contrast changes. Experiments show that Color-Ciratefi is more accurate than C-color-SIFT, the well-known SIFT algorithm that uses a set of color invariants. In conventional computers, Color-Ciratefi is slower than C-color-SIFT. However Color-Ciratefi is more suitable than C-color-SIFT to be implemented in highly parallel architectures like FPGA, because it repeats exactly the same set of operations for each pixel."}
{"_id": "33e20449aa40488c6d4b430a48edf5c4b43afdab", "title": "The Faces of Engagement: Automatic Recognition of Student Engagementfrom Facial Expressions", "text": "Student engagement is a key concept in contemporary education, where it is valued as a goal in its own right. In this paper we explore approaches for automatic recognition of engagement from students' facial expressions. We studied whether human observers can reliably judge engagement from the face; analyzed the signals observers use to make these judgments; and automated the process using machine learning. We found that human observers reliably agree when discriminating low versus high degrees of engagement (Cohen's \u03ba = 0.96). When fine discrimination is required (four distinct levels) the reliability decreases, but is still quite high ( \u03ba = 0.56). Furthermore, we found that engagement labels of 10-second video clips can be reliably predicted from the average labels of their constituent frames (Pearson r=0.85), suggesting that static expressions contain the bulk of the information used by observers. We used machine learning to develop automatic engagement detectors and found that for binary classification (e.g., high engagement versus low engagement), automated engagement detectors perform with comparable accuracy to humans. Finally, we show that both human and automatic engagement judgments correlate with task performance. In our experiment, student post-test performance was predicted with comparable accuracy from engagement labels ( r=0.47) as from pre-test scores ( r=0.44)."}
{"_id": "756d29ac3cc9b56054f08501cc4814183dc948d7", "title": "Image Thresholding for Optical Character Recognition and Other Applications Requiring Character Image Extraction", "text": "Two new, cost-effective thresholding algorithms for use in extracting binary images of characters from machineor hand-printed documents are described. The creation of a binary representation from an analog image requires such algorithms to determine whether a point is converted into a binary one because it falls within a character stroke or a binary zero because it does not. This thresholding is a critical step in Optical Character Recognition (OCR). I t is also essential for other Character Image Extraction (CIEJ applications, such as the processing of machine-printed or handwritten characters from carbon copy forms or bank checks, where smudges and scenic backgrounds, for example, may have to be suppressed. The first algorithm, a nonlinear, adaptive procedure, is implemented with a minimum of hardware and is intended for many CIE applications. The second is a more aggressive approach directed toward specialized, high-volume applications which justify extra complexity."}
{"_id": "270e367871940591caf98417474c3650c4440528", "title": "Neuropsychologic Theory and Findings in Attention-Deficit/Hyperactivity Disorder: The State of the Field and Salient Challenges for the Coming Decade", "text": "The past decade has witnessed the establishment of several now well-replicated findings in the neuropsychology of attention-deficit/hyperactivity disorder (ADHD), which have been confirmed by meta-analyses. Progress has been notable from the importing of cognitive science and neuroscience paradigms. Yet these findings point to many neural networks being involved in the syndrome and to modest effect sizes suggesting that any one neuropsychologic deficit will not be able to explain the disorder. In this article, leading theories and key findings are briefly reviewed in four key domains: attention, executive functions, state regulation and motivation, and temporal information processing. Key issues facing the field of neuropsychologic research and theory in ADHD include 1) the need for more integrative developmental accounts that address both multiple neural systems and the socialization processes that assure their development; 2) consideration of multiple models/measures in the same study so as to examine relative contributions, within-group heterogeneity, and differential deficit; and 3) better integration of cognitive process models with affective and temperament theories so that early precursors to ADHD can be better understood. Overall, the field has witnessed notable progress as it converges on an understanding of ADHD in relation to disruption of a multicomponent self-regulatory system. The next era must articulate multipathway, multilevel developmental accounts of ADHD that incorporate neuropsychologic effects."}
{"_id": "27208c88f07a1ffe97760c12be08fad3ab68fee2", "title": "Multimodal Deep Learning", "text": "Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train a deep network that learns features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa. We validate our methods on the CUAVE and AVLetters datasets with an audio-visual speech classification task, demonstrating superior visual speech classification on AVLetters and effective multimodal fusion."}
{"_id": "62f1acc01cebefd6379beeee8e019447f6ef57eb", "title": "Transportation mode detection using mobile phones and GIS information", "text": "The transportation mode such as walking, cycling or on a train denotes an important characteristic of the mobile user's context. In this paper, we propose an approach to inferring a user's mode of transportation based on the GPS sensor on her mobile device and knowledge of the underlying transportation network. The transportation network information considered includes real time bus locations, spatial rail and spatial bus stop information. We identify and derive the relevant features related to transportation network information to improve classification effectiveness. This approach can achieve over 93.5% accuracy for inferring various transportation modes including: car, bus, aboveground train, walking, bike, and stationary. Our approach improves the accuracy of detection by 17% in comparison with the GPS only approach, and 9% in comparison with GPS with GIS models. The proposed approach is the first to distinguish between motorized transportation modes such as bus, car and aboveground train with such high accuracy. Additionally, if a user is travelling by bus, we provide further information about which particular bus the user is riding. Five different inference models including Bayesian Net, Decision Tree, Random Forest, Na\u00efve Bayesian and Multilayer Perceptron, are tested in the experiments. The final classification system is deployed and available to the public."}
{"_id": "8889c38ddd610e0df7dc96c24fa5c4b76a92e110", "title": "Power Consumption Modeling of Skid-Steer Tracked Mobile Robots on Rigid Terrain", "text": "Power consumption is a key element in outdoor mobile robot autonomy. This issue is very relevant in skid-steer tracked vehicles on account of their large ground contact area. In this paper, the power losses due to dynamic friction have been modeled from two different perspectives: 1) the power drawn by the rigid terrain and 2) the power supplied by the motors. Comparison of both approaches has provided new insight on skid steering on hard flat terrains at walking speeds. Experimental power models, which also include traction resistance and other power losses, have been obtained for two different track widths over marble flooring and asphalt with Auriga- beta, which is a full-size mobile robot. To this end, various internal probes have been set at different points of the power stream. Furthermore, new energy implications for navigation of these kinds of vehicles have been deduced and tested."}
{"_id": "b12e1675e6f3d1e5b06b84ff4d240cbdf3e8446e", "title": "Collision avoidance of unmanned ships based on artificial potential field", "text": "The collision avoidance problem of autonomous ships is analyzed in this paper. Firstly, the background and the difficulties are reviewed. Then an improved artificial potential field method is designed to solve the collision avoidance problem of unmanned ships, and the system characteristics are investigated. Finally, simulation results demonstrate the ship's control effectiveness in the obstacle environment."}
{"_id": "b93ca4cb9850d3f71bf6b03393351f6c63e4a27a", "title": "Putting value co-creation into practice: a case for advisory support", "text": "The concept of value co-creation and its notion of the customer as co-creator of value have gained much academic interest, notably in marketing and operations research. While several competing perspectives have been conceptually discussed in literature, research on the practical implications of value co-creation is scarce. Using the example of sales-oriented advisory, we show gaps between existing co-creation concepts and current practice in five problem areas. We develop four general solution perspectives on the advisor-client encounter as guidelines to overcome these gaps and discuss design requirements of their technological instantiations in advisory support systems. We present exemplary implementations of such systems in two domains: travel counseling and financial advisory. Revealing the practical implications of value co-creation on advisory encounters, these examples also demonstrate that the solution perspectives have to be implemented quite differently for individual domains."}
{"_id": "a2704db7911149b4735229e2c7ff2441c8815c4b", "title": "Intrusion detection system for high-speed network", "text": "The increasing network throughput challenges the current Network Intrusion Detection Systems (NIDS) to have compatible highperformance data processing. In this paper, we describe an in-depth research on the related techniques of high-performance network intrusion detection and an implementation of a Rule-based High-performance Network Intrusion Detection System (RHPNIDS) for high-speed networks. By integrating several performance optimizing methods, the performance of RHPNIDS is very impressive compared with the popular open source NIDS Snort. q 2004 Elsevier B.V. All rights reserved."}
{"_id": "923e2704b61f869074e20ae0ff0dc753b456d3d5", "title": "Health data integration with Secured Record Linkage: A practical solution for Bangladesh and other developing countries", "text": "Knowledge discovery from various health data repositories requires the incorporation of healthcare data from diversified sources. Maintaining record linkage during the integration of medical data is an important research issue. Researchers have given different solutions to this problem that are applicable for developed countries where electronic health record of patients are maintained with identifiers like social security number (SSN), universal patient identifier (UPI), health insurance number, etc. These solutions cannot be used correctly for record linkage of health data of developing countries because of missing data, ambiguity in patient identification, and high amount of noise in patient information. Also, identifiable health data in electronic health repositories may produce a significant risk to patient privacy and also make the health information systems security vulnerable to hackers. In this paper, we have analyzed the practical problems of collecting and integrating healthcare data in Bangladesh for developing national health data warehouse. We have proposed a privacy preserved secured record linkage architecture that can support constrained health data of developing countries such as Bangladesh. Our technique can anonymize identifiable private data of the patients while maintaining record linkage in integrated health repositories to facilitate knowledge discovery process. Experimental results show that our proposed method successfully linked records with acceptable accuracy for noisy data in the absence of any standard ID like SSN."}
{"_id": "8e383be7e8a37c93435952797731f07940eac6dc", "title": "PEIR, the personal environmental impact report, as a platform for participatory sensing systems research", "text": "PEIR, the Personal Environmental Impact Report, is a participatory sensing application that uses location data sampled from everyday mobile phones to calculate personalized estimates of environmental impact and exposure. It is an example of an important class of emerging mobile systems that combine the distributed processing capacity of the web with the personal reach of mobile technology. This paper documents and evaluates the running PEIR system, which includes mobile handset based GPS location data collection, and server-side processing stages such as HMM-based activity classification (to determine transportation mode); automatic location data segmentation into \"trips''; lookup of traffic, weather, and other context data needed by the models; and environmental impact and exposure calculation using efficient implementations of established models. Additionally, we describe the user interface components of PEIR and present usage statistics from a two month snapshot of system use. The paper also outlines new algorithmic components developed based on experience with the system and undergoing testing for integration into PEIR, including: new map-matching and GSM-augmented activity classification techniques, and a selective hiding mechanism that generates believable proxy traces for times a user does not want their real location revealed."}
{"_id": "487bb641bb133348a03ba6a8a1f674151b868cd3", "title": "Toolkit-Based High-Performance Data Mining of Large Data on MapReduce Clusters", "text": "The enormous growth of data in a variety of applications has increased the need for high performance data mining based on distributed environments. However, standard data mining toolkits per se do not allow the usage of computing clusters. The success of MapReduce for analyzing large data has raised a general interest in applying this model to other, data intensive applications. Unfortunately current research has not lead to an integration of GUI based data mining toolkits with distributed file system based MapReduce systems. This paper defines novel principles for modeling and design of the user interface, the storage model and the computational model necessary for the integration of such systems. Additionally, it introduces a novel system architecture for interactive GUI based data mining of large data on clusters based on MapReduce that overcomes the limitations of data mining toolkits. As an empirical demonstration we show an implementation based on Weka and Hadoop."}
{"_id": "22325668ff233759f52e20992929837e8de25417", "title": "Robust Distributed Spectrum Sensing in Cognitive Radio Networks", "text": "Distributed spectrum sensing (DSS) enables a Cognitive Radio (CR) network to reliably detect licensed users and avoid causing interference to licensed communications. The data fusion technique is a key component of DSS. We discuss the Byzantine failure problem in the context of data fusion, which may be caused by either malfunctioning sensing terminals or Spectrum Sensing Data Falsification (SSDF) attacks. In either case, incorrect spectrum sensing data will be reported to a data collector which can lead to the distortion of data fusion outputs. We investigate various data fusion techniques, focusing on their robustness against Byzantine failures. In contrast to existing data fusion techniques that use a fixed number of samples, we propose a new technique that uses a variable number of samples. The proposed technique, which we call Weighted Sequential Probability Ratio Test (WSPRT), introduces a reputation-based mechanism to the Sequential Probability Ratio Test (SPRT). We evaluate WSPRT by comparing it with a variety of data fusion techniques under various network operating conditions. Our simulation results indicate that WSPRT is the most robust against the Byzantine failure problem among the data fusion techniques that were considered."}
{"_id": "71941248d1e3d78e9a81e5cca3a566b875e10d3b", "title": "A Local Detection Approach for Named Entity Recognition and Mention Detection", "text": "In this paper, we study a novel approach for named entity recognition (NER) and mention detection (MD) in natural language processing. Instead of treating NER as a sequence labeling problem, we propose a new local detection approach, which relies on the recent fixed-size ordinally forgetting encoding (FOFE) method to fully encode each sentence fragment and its left/right contexts into a fixedsize representation. Subsequently, a simple feedforward neural network (FFNN) is learned to either reject or predict entity label for each individual text fragment. The proposed method has been evaluated in several popular NER and MD tasks, including CoNLL 2003 NER task and TAC-KBP2015 and TAC-KBP2016 Tri-lingual Entity Discovery and Linking (EDL) tasks. Our method has yielded pretty strong performance in all of these examined tasks. This local detection approach has shown many advantages over the traditional sequence labeling methods."}
{"_id": "ebb0a67e17f26dbb829b8a6b4b02d412a589efe7", "title": "Variational Inference for Beta-Bernoulli Dirichlet Process Mixture Models", "text": "A commonly used paradigm in diverse application areas is to assume that an observed set of individual binary features is generated from a Bernoulli distribution with probabilities varying according to a Beta distribution. In this paper, we present our nonparametric variational inference algorithm for the Beta-Bernoulli observation model. Our primary focus is clustering discrete binary data using the Dirichlet process (DP) mixture model."}
{"_id": "4f5dce7c560e48299ed6f413f65746d65751c53f", "title": "Inverted 'V--Y' advancement medial epicanthoplasty.", "text": "The presence of an epicanthal fold is a distinctive anatomic feature of the Asian population. Epicanthoplasty can be very helpful in resolving the above problematic anatomy and may produce a more aesthetic appearance. The goal of this report is to describe a modification in the traditional operative methods related to medial epicanthoplasty, to minimise scar appearance and maximise normal appearance. An inverse triangular flap of skin is based on the semilunar skin, which is elevated and retracted within the line of the normal eyelid. The design of four key points is crucial to this epicanthoplasty. A total of 62 patients were performed with this method; 42 of this group underwent simultaneous double-eyelid plasty. Most of the patients obtained satisfactory results aesthetically followed from 6 months to 4 years postoperation. There was no recurrence of the epicanthal fold. As many as 38 patients were followed by interview and photographs were taken at the same time. The rest of the patients were followed up through telephone. No patients complained about visible scarring in the epicanthal area. As many as 60 patients obtained satisfactory results aesthetically, two patients complained about the median canthal asymmetry. The result indicates the reliability and feasibility of epicanthoplasty."}
{"_id": "e8fb7b4b200ae319d3a78f71219037016f54f742", "title": "Operator Splitting Methods in Compressive Sensing and Sparse Approximation", "text": "Compressive sensing and sparse approximation have many emerging applications, and are a relatively new driving force for development of splitting methods in optimization. Many sparse coding problems are well described by variational models with `1-norm penalties and constraints that are designed to promote sparsity. Successful algorithms need to take advantage of the separable structure of potentially complicated objectives by \u201csplitting\u201d them into simpler pieces and iteratively solving a sequence of simpler convex minimization problems. In particular, isolating `1 terms from the rest of the objective leads to simple soft thresholding updates or `1 ball projections. A few basic splitting techniques can be used to design a huge variety of relevant algorithms. This chapter will focus on operator splitting strategies that are based on proximal operators, duality, and alternating direction methods. These will be explained in the context of basis pursuit variants and through compressive sensing applications."}
{"_id": "0f508e61079487f17d7e87c1f4577686c75d64fb", "title": "On the bipolarity in argumentation frameworks", "text": "In this paper, we propose a survey of the use of bipolarity in argumentation frameworks, i.e. the presence of two kinds of entities (a positive entity and a negative entity). An argumentation process follows three steps: building the arguments and the interactions between them, valuating the arguments using or not the interactions and finally defining the acceptability of the arguments. This paper shows on various applications and with some formal definitions that bipolarity appears (in some cases since always) and can be used in each step of this process under different forms."}
{"_id": "d90d412d89fd2a3627e787927139598f921997ff", "title": "MULTI-SKILLED MOTION CONTROL", "text": "Deep reinforcement learning has demonstrated increasing capabilities for continuous control problems, including agents that can move with skill and agility through their environment. An open problem in this setting is that of developing good strategies for integrating or merging policies for multiple skills, where each individual skill is a specialist in a specific skill and its associated state distribution. We extend policy distillation methods to the continuous action setting and leverage this technique to combine expert policies, as evaluated in the domain of simulated bipedal locomotion across different classes of terrain. We also introduce an input injection method for augmenting an existing policy network to exploit new input features. Lastly, our method uses transfer learning to assist in the efficient acquisition of new skills. The combination of these methods allows a policy to be incrementally augmented with new skills. We compare our progressive learning and integration via distillation (PLAID) method against three alternative baselines."}
{"_id": "37e32423348926e33a8edaaecd4f83589dd19bf8", "title": "A gesture-driven computer interface using Kinect", "text": "Automatic recognition of human actions from video has been studied for many years. Although still very difficult in uncontrolled scenarios, it has been successful in more restricted settings (e.g., fixed viewpoint, no occlusions) with recognition rates approaching 100%. However, the best-performing methods are complex and computationally-demanding and thus not well-suited for real-time deployments. This paper proposes to leverage the Kinect camera for close-range gesture recognition using two methods. Both methods use feature vectors that are derived from the skeleton model provided by the Kinect SDK in real-time. Although both methods perform nearest-neighbor classification, one method does this in the space of features using the Euclidean distance metric, while the other method does this in the space of feature covariances using a log-Euclidean metric. Both methods recognize 8 hand gestures in real time achieving correct-classification rates of over 99% on a dataset of 20 subjects but the method based on Euclidean distance requires feature-vector collections to be of the same size, is sensitive to temporal misalignment, and has higher computation and storage requirements."}
{"_id": "162a4f3cca543c6aefb5df9fabe2cf56b25e3303", "title": "Orchestrating a mixed reality performance", "text": "A study of a professional touring mixed reality performance called Desert Rain yields insights into how performers orchestrate players' engagement in an interactive experience. Six players at a time journey through an extended physical and virtual set. Each sees a virtual world projected onto a screen made from a fine water spray. This acts as a traversable interface, supporting the illusion that performers physically pass between real and virtual worlds. Live and video-based observations of Desert Rain, coupled with interviews with players and the production team, have revealed how the performers create conditions for the willing suspension of disbelief, and how they monitor and intervene in the players experience without breaking their engagement. This involves carefully timed performances and \u201coff-face\u201d and \u201cvirtual\u201d interventions. In turn, these are supported by the ability to monitor players' physical and virtual activity through asymmetric interfaces."}
{"_id": "85d57c55a9aca81961f41a1f5cac0d3335952ae1", "title": "History of Computers, Electronic Commerce, and Agile Methods", "text": "The purpose of this document is to present a literature review relevant to a study of using agile methods to manage the development of Internet websites and their subsequent quality. This study places website quality within the context of the $2 trillion U.S. ecommerce industry. Thus, this section provides a history of electronic computers, electronic commerce, software methods, software quality metrics, agile methods, and studies of agile methods. None of these histories are without controversy. For instance, some scholars begin the study of the electronic computer by mentioning the emergence of the Sumerian text, Hammurabi code, or the Abacus. We, however, will align our history with the emergence of the modern electronic computer at the beginning of World War II. The history of electronic commerce also has poorly defined beginnings. Some studies of electronic commerce begin with the widespread use of the Internet in the early 1990s. However, electronic commerce cannot be appreciated without establishing a deeper context. The origins of the software industry are also somewhat poorly documented. It started sometime in the 1960s, though it didn't really take off until the 1980s. Software methods, however, are often associated with the rise of structured design in 1969, though they can firmly be traced back to the late 1950s. The first uses of agile methods are also not well defined. Many associate agile methods with the appearance of extreme programming in 1999, though agile methods began taking hold in the early 1990s, and most of its principles appeared before 1975. While research on agile methods began appearing in 1998, therein lies the issue. Few scholarly studies, if any, have been performed on agile methods, which is the basic purpose of this literature review. That is, to establish the context to conduct scholarly research within the fields of agile methods and electronic commerce. Since agile methods may be linked to outcomes such as software quality, an in-depth analysis of literature on software quality metrics is also included.History of Computers and Software Figure 1. Timeline and history of computers and software Page-3-Electronic computers. Electronic computers are simply machines that perform useful functions such as mathematical calculations or inputting, processing, and outputting data and information in meaningful forms (Rosen, 1969). Modern electronic computers are characterized by four major generations: first generation vacuum tube"}
{"_id": "e6fe4bef5d213ad07d729374f90ff4cc70c53f22", "title": "Secure file storage in cloud computing using hybrid cryptography algorithm", "text": "Now a day's cloud computing is used in many areas like industry, military colleges etc to storing huge amount of data. We can retrieve data from cloud on request of user. To store data on cloud we have to face many issues. To provide the solution to these issues there are n number of ways. Cryptography and steganography techniques are more popular now a day's for data security. Use of a single algorithm is not effective for high level security to data in cloud computing. In this paper we have introduced new security mechanism using symmetric key cryptography algorithm and steganography. In this proposed system AES, blowfish, RC6 and BRA algorithms are used to provide block wise security to data. All algorithm key size is 128 bit. LSB steganography technique is introduced for key information security. Key information contains which part of file is encrypted using by which algorithm and key. File is splited into eight parts. Each and every part of file is encrypted using different algorithm. All parts of file are encrypted simultaneously with the help of multithreading technique. Data encryption Keys are inserted into cover image using LSB technique. Stego image is send to valid receiver using email. For file decryption purpose reverse process of encryption is applied."}
{"_id": "f9c9ff9296519310253f3ea2f8a2a97781176dfe", "title": "A 4K-Input High-Speed Winner-Take-All (WTA) Circuit with Single-Winner Selection for Change-Driven Vision Sensors", "text": "Winner-Take-All (WTA) circuits play an important role in applications where a single element must be selected according to its relevance. They have been successfully applied in neural networks and vision sensors. These applications usually require a large number of inputs for the WTA circuit, especially for vision applications where thousands to millions of pixels may compete to be selected. WTA circuits usually exhibit poor response-time scaling with the number of competitors, and most of the current WTA implementations are designed to work with less than 100 inputs. Another problem related to the large number of inputs is the difficulty to select just one winner, since many competitors may have differences below the WTA resolution. In this paper, a WTA circuit is presented that handles more than four thousand inputs, to our best knowledge the hitherto largest WTA, with response times below the microsecond, and with a guaranty of just a single winner selection. This performance is obtained by the combination of a standard analog WTA circuit and a fast digital single-winner selector with almost no size penalty. This WTA circuit has been successfully employed in the fabrication of a Selective Change-Driven Vision Sensor based on 180 nm CMOS technology. Both simulated and experimental results are presented in the paper, showing that a single pixel event can be selected in just 560 ns, and a multipixel pixel event can be processed in 100 us. Similar results with a conventional approach would require a camera working at more than 1 Mfps for the single-pixel event detection, and 10 kfps for the whole multipixel event to be processed."}
{"_id": "20cff72b310b6edf7d924175e7b7650e02947b06", "title": "Monte Carlo Localization for Mobile Robots", "text": "To navigate reliably in indoor environments, a mobile robot must know where it is. Thus, reliable position estimation is a key problem in mobile robotics. We believe that probabilistic approaches are among the most promising candidates to providing a comprehensive and real-time solution to the robot localization problem. However, current methods still face considerable hurdles. In particular, the problems encountered are closely related to the type of representation used to represent probability densities over the robot\u2019s state space. Recent work on Bayesian filtering with particle-based density representations opens up a new approach for mobile robot localization, based on these principles. In this paper we introduce the Monte Carlo Localization method, where we represent the probability density involved by maintaining a set of samples that are randomly drawn from it. By using a sampling-based representation we obtain a localization method that can represent arbitrary distributions. We show experimentally that the resulting method is able to efficiently localize a mobile robot without knowledge of its starting location. It is faster, more accurate and less memory-intensive than earlier grid-based methods."}
{"_id": "19baaee177081e0021a4910c6280d4378f1d933d", "title": "Monte Carlo Localization: Efficient Position Estimation for Mobile Robots", "text": "This paper presents a new algorithm for mobile robot localization, called Monte Carlo Localization (MCL). MCL is a version of Markov localization, a family of probabilistic approaches that have recently been applied with great practical success. However, previous approaches were either computationally cumbersome (such as grid-based approaches that represent the state space by high-resolution 3D grids), or had to resort to extremely coarse-grained resolutions. Our approach is computationally efficient while retaining the ability to represent (almost) arbitrary distributions. MCL applies sampling-based methods for approximating probability distributions, in a way that places computation \u201cwhere needed.\u201d The number of samples is adapted on-line, thereby invoking large sample sets only when necessary. Empirical results illustrate that MCL yields improved accuracy while requiring an order of magnitude less computation when compared to previous approaches. It is also much easier to implement."}
{"_id": "7f0bbe9dd4aa3bfb8a355a2444f81848b020b7a4", "title": "A tutorial on particle filters for on-line nonlinear/non-Gaussian Bayesian tracking", "text": "Increasingly, for many application areas, it is becoming important to include elements of nonlinearity and non-Gaussianity in order to model accurately the underlying dynamics of a physical system. Moreover, it is typically crucial to process data on-line as it arrives, both from the point of view of storage costs as well as for rapid adaptation to changing signal characteristics. In this paper, we review both optimal and suboptimal Bayesian algorithms for nonlinear/non-Gaussian tracking problems, with a focus on particle filters. Particle filters are sequential Monte Carlo methods based on point mass (or \u201cparticle\u201d) representations of probability densities, which can be applied to any state-space model and which generalize the traditional Kalman filtering methods. Several variants of the particle filter such as SIR, ASIR, and RPF are introduced within a generic framework of the sequential importance sampling (SIS) algorithm. These are discussed and compared with the standard EKF through an illustrative example."}
{"_id": "9e273ee15f2decbeddc23906320f3d5060677d3e", "title": "PowerMatcher: multiagent control in the electricity infrastructure", "text": "Different driving forces push the electricity production towards decentralization. As a result, the current electricity infrastructure is expected to evolve into a network of networks, in which all system parts communicate with each other and influence each other. Multi-agent systems and electronic markets form an appropriate technology needed for control and coordination tasks in the future electricity network. We present the PowerMatcher, a market-based control concept for supply and demand matching (SDM) in electricity networks. In a presented simulation study is shown that the simultaneousness of electricity production and consumption can be raised substantially using this concept. Further, we present a field test with medium-sized electricity producing and consuming installations controlled via this concept, currently in preparation."}
{"_id": "b3dca2f49a55fb0360c7bb1ec11e83c0c60da2c6", "title": "Investigating performance of students: a longitudinal study", "text": "This paper, investigates how academic performance of students evolves over the years in their studies during a study programme. To determine typical progression patterns over the years, students are described by a 4 tuple (e.g. x1, x2, x3, x4), these being the clusters' mean to which a student belongs to in each year of the degree. For this purpose, two consecutive cohorts have been analyzed using X-means clustering. Interestingly the patterns found in both cohorts show that a substantial number of students stay in the same kind of groups during their studies."}
{"_id": "c7ed1eb8c5baf06885af7c6de45605d9a27284d7", "title": "Kernel-Based Sensor Fusion With Application to Audio-Visual Voice Activity Detection", "text": "In this paper, we address the problem of multiple view data fusion in the presence of noise and interferences. Recent studies have approached this problem using kernel methods, by relying particularly on a product of kernels constructed separately for each view. From a graph theory point of view, we analyze this fusion approach in a discrete setting. More specifically, based on a statistical model for the connectivity between data points, we propose an algorithm for the selection of the kernel bandwidth, a parameter that, as we show, has important implications on the robustness of this fusion approach to interferences. Then, we consider the fusion of audio-visual speech signals measured by a single microphone and by a video camera pointed to the face of the speaker. Specifically, we address the task of voice activity detection, i.e., the detection of speech and nonspeech segments, in the presence of structured interferences such as keyboard taps and office noise. We propose an algorithm for voice activity detection based on the audio-visual signal. Simulation results show that the proposed algorithm outperforms competing fusion and voice activity detection approaches. In addition, we demonstrate that a proper selection of the kernel bandwidth indeed leads to improved performance."}
{"_id": "3d3ab91a3934ca9674fc5d7a2178de346a9b7bc5", "title": "Does Competition Destroy Ethical Behavior?", "text": "This paper shows that conduct described as unethical and blamed on \u201cgreed\u201d is sometimes a consequence of market competition. I illustrate this observation using examples of five censured activities: employment of children, corruption, excessive executive pay, corporate earnings manipulation, and involvement of universities in commercial activities. In all these examples, I show how competitive pressures lead to the spread of the censured behavior. My point is not to excuse or condemn any or all of these practices, but merely to pinpoint the crucial role of competition, as opposed to greed, in their spread. I focus here on ethics, not efficiency. The relationship between the two is complex. In many cases, ethical norms evolve to sustain cooperative behavior and thus to promote successful functioning of social institutions. For example, ethical condemnation of corruption is based on the idea that officials should not selfishly abuse public trust, and that a society functions better when its government works fairly. Likewise, the ethical norm against the employment of children is driven in part by the more general concern with abuse of the weak by the strong. When ethics promote social cooperation, ethical behavior and efficient behavior typically go together. In other instances, there is a mismatch between ethics and efficiency, either because ethical standards which might have had efficiency justifications long ago no longer do or because the behavior that is ethical in some idealized society makes matters worse in the real world. For example, the ethical norm against debt or interest, which might have been justifiable a millennium ago, is clearly no longer efficient. And while child labor might be a bad idea in a world with good access to capital markets and educational opportunities, for many families in the developing world the alternative to child labor is malnutrition and disease. These examples of a mismatch suggest that behavior condemned as unethical is not always inefficient. In still other instances, the conduct that is advertised as ethical is the result of political indoctrination by parochial interests or of simple confusion. For example, the ethical exhortations to \u201cBuy American!\u201d or to pay the \u201cliving wage\u201d are underwritten by labor unions serving their members, not the public. In various times and places, tribalism, racism, anti-Semitism, the hatred of the rich, and other unsavory sentiments reflected the dominant ethic. In these instances, what is considered ethical is very far from what economists would consider efficient. In some of the examples I discuss below, a credible case can be made that the conduct perceived to be unethical is also inefficient; in other examples, there is more ambiguity. My interest, however, is not to evaluate efficiency, but only to bring the crucial role of competition into the explanation of why activities morally sanctioned by the society spread. In four of the examples I discuss, censured behavior reduces costs (in the last one, it raises revenues). I assume that the proprietor of the firm values ethical behavior, but that such behavior is a normal good. When sanctioned behavior by competitors reduces their costs, it also reduces prices in the market, and as a result the proprietor\u2019s income falls. 
When his income falls, so does his own demand for ethical behavior, leading to the spread of censured practices. The analysis I present is closely related to the ideas in Gary Becker\u2019s (1957) classic study of discrimination and reveals a broad range of circumstances where competition promotes censured conduct."}
{"_id": "497ba80e13d9e92fe8f6f3b149629b8ab32eeeea", "title": "The draft genome of the transgenic tropical fruit tree papaya (Carica papaya Linnaeus)", "text": "Papaya, a fruit crop cultivated in tropical and subtropical regions, is known for its nutritional benefits and medicinal applications. Here we report a 3\u00d7 draft genome sequence of \u2018SunUp\u2019 papaya, the first commercial virus-resistant transgenic fruit tree to be sequenced. The papaya genome is three times the size of the Arabidopsis genome, but contains fewer genes, including significantly fewer disease-resistance gene analogues. Comparison of the five sequenced genomes suggests a minimal angiosperm gene set of 13,311. A lack of recent genome duplication, atypical of other angiosperm genomes sequenced so far, may account for the smaller papaya gene number in most functional groups. Nonetheless, striking amplifications in gene number within particular functional groups suggest roles in the evolution of tree-like habit, deposition and remobilization of starch reserves, attraction of seed dispersal agents, and adaptation to tropical daylengths. Transgenesis at three locations is closely associated with chloroplast insertions into the nuclear genome, and with topoisomerase I recognition sites. Papaya offers numerous advantages as a system for fruit-tree functional genomics, and this draft genome sequence provides the foundation for revealing the basis of Carica\u2019s distinguishing morpho-physiological, medicinal and nutritional properties."}
{"_id": "4b0597781bed0fd6f37c7da886764621c5e25d01", "title": "A Video Representation Using Temporal Superpixels", "text": "We develop a generative probabilistic model for temporally consistent super pixels in video sequences. In contrast to supermodel methods, object parts in different frames are tracked by the same temporal super pixel. We explicitly model flow between frames with a bilateral Gaussian process and use this information to propagate super pixels in an online fashion. We consider four novel metrics to quantify performance of a temporal super pixel representation and demonstrate superior performance when compared to supermodel methods."}
{"_id": "b6bbda4a8cd2a7bdeb88ecd7f8a47e866823e32d", "title": "Multi-Modal Learning: Study on A Large-Scale Micro-Video Data Collection", "text": "Micro-video sharing social services, as a new phenomenon in social media, enable users to share micro-videos and thus gain increasing enthusiasm among people. One distinct characteristic of micro-videos is the multi-modality, as these videos always have visual signals, audio tracks, textual descriptions as well as social clues. Such multi-modality data makes it possible to obtain a comprehensive understanding of videos and hence provides new opportunities for researchers. However, limited efforts thus far have been dedicated to this new emerging user-generated contents (UGCs) due to the lack of large-scale benchmark dataset. Towards this end, in this paper, we construct a large-scale micro-video dataset, which can support many research domains, such as popularity prediction and venue estimation. Based upon this dataset, we conduct an initial study in popularity prediction of micro-videos. Finally, we identify our future work."}
{"_id": "b05c77eff73602229a6f4d41c69bd471a8af045a", "title": "BLEU Evaluation of Machine-Translated English-Croatian Legislation", "text": "This paper presents work on the evaluation of online available machine translation (MT) service, i.e. Google Translate, for English-Croatian language pair in the domain of legislation. The total set of 200 sentences, for which three reference translations are provided, is divided into short and long sentences. Human evaluation is performed by native speakers, using the criteria of adequacy and fluency. For measuring the reliability of agreement among raters, Fleiss' kappa metric is used. Human evaluation is enriched by error analysis, in order to examine the influence of error types on fluency and adequacy, and to use it in further research. Translation errors are divided into several categories: non-translated words, word omissions, unnecessarily translated words, morphological errors, lexical errors, syntactic errors and incorrect punctuation. The automatic evaluation metric BLEU is calculated with regard to a single and multiple reference translations. System level Pearson\u2019s correlation between BLEU scores based on a single and multiple reference translations is given, as well as correlation between short and long sentences BLEU scores, and correlation between the criteria of fluency and adequacy and each error category."}
{"_id": "d506aeb962b317c56bbb516a633eb93fc581441f", "title": "ITIL as common practice reference model for IT service management: formal assessment and implications for practice", "text": "Due to enhanced focus on the customer in the planning, development and delivery of information services, IT service management has become increasingly important. These days IT management is focussing particularly on the de facto standard ITIL (IT Infrastructure Library) for implementing IT service management. In doing this, deficits of the ITIL reference model are often being overlooked, the benefits are merely assumed and misunderstandings spread. This results in uncertainties, unfulfilled expectations and problems in the execution of ITIL transformation projects. Against this background the article critically assesses the ITIL reference model. A formal analysis of the model on the basis of established criteria according to the principles for proper modelling is undertaken and implications for IT management are deduced from this. Four case studies of ITIL transformation projects conducted for this purpose, as well as an analysis of the ITIL documents, serve as basis for the assessment."}
{"_id": "b50e38e06f51e30f365e8cbd218769cf2ddc78d2", "title": "Prevalence of kidney stones in the United States.", "text": "BACKGROUND\nThe last nationally representative assessment of kidney stone prevalence in the United States occurred in 1994. After a 13-yr hiatus, the National Health and Nutrition Examination Survey (NHANES) reinitiated data collection regarding kidney stone history.\n\n\nOBJECTIVE\nDescribe the current prevalence of stone disease in the United States, and identify factors associated with a history of kidney stones.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nA cross-sectional analysis of responses to the 2007-2010 NHANES (n=12 110).\n\n\nOUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS\nSelf-reported history of kidney stones. Percent prevalence was calculated and multivariable models were used to identify factors associated with a history of kidney stones.\n\n\nRESULTS AND LIMITATIONS\nThe prevalence of kidney stones was 8.8% (95% confidence interval [CI], 8.1-9.5). Among men, the prevalence of stones was 10.6% (95% CI, 9.4-11.9), compared with 7.1% (95% CI, 6.4-7.8) among women. Kidney stones were more common among obese than normal-weight individuals (11.2% [95% CI, 10.0-12.3] compared with 6.1% [95% CI, 4.8-7.4], respectively; p<0.001). Black, non-Hispanic and Hispanic individuals were less likely to report a history of stone disease than were white, non-Hispanic individuals (black, non-Hispanic: odds ratio [OR]: 0.37 [95% CI, 0.28-0.49], p<0.001; Hispanic: OR: 0.60 [95% CI, 0.49-0.73], p<0.001). Obesity and diabetes were strongly associated with a history of kidney stones in multivariable models. The cross-sectional survey design limits causal inference regarding potential risk factors for kidney stones.\n\n\nCONCLUSIONS\nKidney stones affect approximately 1 in 11 people in the United States. These data represent a marked increase in stone disease compared with the NHANES III cohort, particularly in black, non-Hispanic and Hispanic individuals. Diet and lifestyle factors likely play an important role in the changing epidemiology of kidney stones."}
{"_id": "fcdcc9c97ad69efc2a918b06f2c6459cca5312c5", "title": "End to End Automation on Cloud with Build Pipeline: The Case for DevOps in Insurance Industry, Continuous Integration, Continuous Testing, and Continuous Delivery", "text": "In modern environment, delivering innovative idea in a fast and reliable manner is extremely significant for any organizations. In the existing scenario, Insurance industry need to better respond to dynamic market requirements, faster time to market for new initiatives and services, and support innovative ways of customer interaction. In past few years, the transition to cloud platforms has given benefits such as agility, scalability, and lower capital costs but the application lifecycle management practices are slow with this disruptive change. DevOps culture extends the agile methodology to rapidly create applications and deliver them across environment in automated manner to improve performance and quality assurance. Continuous Integration (CI) and Continuous delivery (CD) has emerged as a boon for traditional application development and release management practices to provide the capability to release quality artifacts continuously to customers with continuously integrated feedback. The objective of the paper is to create a proof of concept for designing an effective framework for continuous integration, continuous testing, and continuous delivery to automate the source code compilation, code analysis, test execution, packaging, infrastructure provisioning, deployment, and notifications using build pipeline concept."}
{"_id": "554a822a88668878b127be698c4488badbb4e822", "title": "Improving Neural Knowledge Base Completion with Cross-Lingual Projections", "text": "In this paper we present a cross-lingual extension of a neural tensor network model for knowledge base completion. We exploit multilingual synsets from BabelNet to translate English triples to other languages and then augment the reference knowledge base with cross-lingual triples. We project monolingual embeddings of different languages to a shared multilingual space and use them for network initialization (i.e., as initial concept embeddings). We then train the network with triples from the cross-lingually augmented knowledge base. Results on WordNet link prediction show that leveraging cross-lingual information yields significant gains over exploiting only monolingual triples."}
{"_id": "fd258a7e6c723c18d012bf7fef275199e4479b5b", "title": "How does EMDR work ?", "text": "Eye movement desensitisation and reprocessing (EMDR) is an effective treatment for alleviating trauma symptoms, and the positive effects of this treatment have been scientifically confirmed under wellcontrolled conditions. This has provided an opportunity to explore how EMDR works. The present paper reports on the findings of a long series of experiments that disproved the hypothesis that eye movements or other \u2018dual tasks\u2019 are unnecessary. These experiments also disproved the idea that \u2018bilateral stimulation\u2019 is needed; moving the eyes up and down produces the same effect as horizontal eye movement, and so do tasks that require no eye movement at all. However, it is important that the dual task taxes working memory. Several predictions can be derived from the working memory explanation for eye movements in EMDR. These seem to hold up extremely well in critical experimental tests, and create a solid explanation on how eye movements work. This paper discusses the implications that this theory and the empirical findings may have for the EMDR technique. \u00a9 Copyright 2012 Textrum Ltd. All rights reserved."}
{"_id": "5543fc8d80b7bb73b343780b16279620f4be0f02", "title": "An Empirical Analysis of Bug Reports and Bug Fixing in Open Source Android Apps", "text": "Smartphone platforms and applications (apps) have gained tremendous popularity recently. Due to the novelty of the smartphone platform and tools, and the low barrier to entry for app distribution, apps are prone to errors, which affects user experience and requires frequent bug fixes. An essential step towards correcting this situation is understanding the nature of the bugs and bug-fixing processes associated with smartphone platforms and apps. However, prior empirical bug studies have focused mostly on desktop and server applications. Therefore, in this paper, we perform an in-depth empirical study on bugs in the Google Android smartphone platform and 24 widely-used open-source Android apps from diverse categories such as communication, tools, and media. Our analysis has three main thrusts. First, we define several metrics to understand the quality of bug reports and analyze the bug-fix process, including developer involvement. Second, we show how differences in bug life-cycles can affect the bug-fix process. Third, as Android devices carry significant amounts of security-sensitive information, we perform a study of Android security bugs. We found that, although contributor activity in these projects is generally high, developer involvement decreases in some projects, similarly, while bug-report quality is high, bug triaging is still a problem. Finally, we observe that in Android apps, security bug reports are of higher quality but get fixed slower than non-security bugs. We believe that the findings of our study could potentially benefit both developers and users of Android apps."}
{"_id": "5ac712a51758dacfec2bb6ff1585570c1139e99f", "title": "Dynamic Tangible User Interface Palettes", "text": "Graphics editors often suffer from a large number of tool palettes that compete with valuable document space. To address this problem and to bring back physical affordances similar to a painter\u2019s palette, we propose to augment a digital tabletop with spatially tracked handheld displays. These displays are dynamically updated depending on their spatial location. We introduce the concept of spatial Work Zones that take up distinct 3D regions above the table surface and serve as physical containers for digital content that is organized as stacks of horizontal layers. Spatial Work Zones are represented either by physical objects or on-screen on the tabletop. Associated layers can be explored fluently by entering a spatial Work Zone with a handheld display. This provides quick access and seamless changes between tools and parts of the document that are instantly functional, i.e., ready to be used by a digital pen. We discuss several use cases illustrating our techniques and setting them into context with previous systems. Early user feedback indicates that combining dynamic GUI functionality with the physicality of spatially tracked handheld displays is promising and can be generalized beyond graphics editing."}
{"_id": "a5dab7a826e1f4c28c4f668e9183ea2f524f1327", "title": "Prevalence of Vitamin D Insufficiency in an Adult Normal Population", "text": "The vitamin D status of a general adult urban population was estimated between November and April in 1569 subjects selected from 20 French cities grouped in nine geographical regions (between latitude 43\u00b0 and 51\u00b0 N). Major differences in 25-hydroxyvitamin D (25(OH)D) concentration were found between regions, the lowest values being seen in the North and the greatest in the South, with a significant \u2018sun\u2019 effect (r = 0.72; p = 0.03) and latitude effect (r = -0.79; p = 0.01). In this healthy adult population, 14% of subjects exhibited 25(OH)D values \u2264 30 nmol/l (12 ng/ml), which represents the lower limit (< 2 SD) for a normal adult population measured in winter with the same method (RIA Incstar). A significant negative correlation was found between serum intact parathyroid hormone (iPTH) and serum 25(OH)D values (p < 0.01). Serum iPTH held a stable plateau level at 36 pg/ml as long as serum 25(OH)D values were higher than 78 nmol/l (31 ng/ml), but increased when the serum 25(OH)D value fell below this. When the 25(OH)D concentration became equal to or lower than 11.3 nmol/l (4.6 ng/ml), the PTH values reached the upper limit of normal values (55 pg/ml) found in vitamin D replete subjects. These results showed that in French normal adults living in an urban environment with a lack of direct exposure to sunshine, diet failed to provide an adequate amount of vitamin D. It is important to pay attention to this rather high prevalence of vitamin D insufficiency in the general adult population and to discuss the clinical utility of winter supplementation with low doses of vitamin D."}
{"_id": "8e56d797edbd28336d9abcfe656532c3b4afa8f4", "title": "Marital processes predictive of later dissolution: behavior, physiology, and health.", "text": "Seventy-three married couples were studied in 1983 and 1987. To identify marital processes associated with dissolution, a balance theory of marriage was used to generate 1 variable for dividing couples into regulated and nonregulated groups. For studying the precursors of divorce, a \"cascade\" model of marital dissolution, which forms a Guttman-like scale, received preliminary support. Compared with regulated couples, nonregulated couples had (a) marital problems rated as more severe (Time 1); (b) lower marital satisfaction (Time 1 and Time 2); (c) poorer health (Time 2); (d) smaller finger pulse amplitudes (wives); (e) more negative ratings for interactions; (f) more negative emotional expression; (g) less positive emotional expression; (h) more stubbornness and withdrawal from interaction; (i) greater defensiveness; and (j) greater risk for marital dissolution (lower marital satisfaction and higher incidence of consideration of dissolution and of actual separation)."}
{"_id": "665a4335a80872e1a4ab5905130b81dde123ee7f", "title": "A Tutorial on Machine Learning and Data Science Tools with Python", "text": "In this tutorial, we will provide an introduction to the main Python software tools used for applying machine learning techniques to medical data. The focus will be on open-source software that is freely available and is cross platform. To aid the learning experience, a companion GitHub repository is available so that you can follow the examples contained in this paper interactively using Jupyter notebooks. The notebooks will be more exhaustive than what is contained in this chapter, and will focus on medical datasets and healthcare problems. Briefly, this tutorial will first introduce Python as a language, and then describe some of the lower level, general matrix and data structure packages that are popular in the machine learning and data science communities, such as NumPy and Pandas. From there, we will move to dedicated machine learning software, such as SciKit-Learn. Finally we will introduce the Keras deep learning and neural networks library. The emphasis of this paper is readability, with as little jargon used as possible. No previous experience with machine learning is assumed. We will use openly available medical datasets throughout."}
{"_id": "f08f000b5b16fa47e21bb370ae637e68a90dab45", "title": "High frequency ultrasound imaging of surface and subsurface structures of fingerprints", "text": "High frequency ultrasound is suitable for non-invasive evaluation of skin because it can obtain both morphological and biomechanical information. A specially developed acoustic microscope system with the central frequency of 100 MHz was developed. The system was capable of (1) conventional C-mode acoustic microscope imaging of thinly sliced tissue, (2) ultrasound impedance imaging of the surface of in vivo thick tissue and (3) 3D ultrasound imaging of inside of the in vivo tissue. In the present study, ultrasound impedance imaging and 3D ultrasound imaging of in vivo fingerprints were obtained. The impedance image showed pores of the sweat glands in the surface of fingerprint and 3D ultrasound imaging showed glands of the rear surface of fingerprint. Both findings were not visualized by normal optical imaging, thus the system can be applied to pathological diagnosis of skin lesions and assessment of aging of the skin in cosmetic point of view."}
{"_id": "8d4ee7bb188474523fa9d36a68acf1672e6abf3a", "title": "A privacy-preserving approach to policy-based content dissemination", "text": "We propose a novel scheme for selective distribution of content, encoded as documents, that preserves the privacy of the users to whom the documents are delivered and is based on an efficient and novel group key management scheme. Our document broadcasting approach is based on access control policies specifying which users can access which documents, or subdocuments. Based on such policies, a broadcast document is segmented into multiple subdocuments, each encrypted with a different key. In line with modern attribute-based access control, policies are specified against identity attributes of users. However our broadcasting approach is privacy-preserving in that users are granted access to a specific document, or subdocument, according to the policies without the need of providing in clear information about their identity attributes to the document publisher. Under our approach, not only does the document publisher not learn the values of the identity attributes of users, but it also does not learn which policy conditions are verified by which users, thus inferences about the values of identity attributes are prevented. Moreover, our key management scheme on which the proposed broadcasting approach is based is efficient in that it does not require to send the decryption keys to the users along with the encrypted document. Users are able to reconstruct the keys to decrypt the authorized portions of a document based on subscription information they have received from the document publisher. The scheme also efficiently handles new subscription of users and revocation of subscriptions."}
{"_id": "21c9dd68b908825e2830b206659ae6dd5c5bfc02", "title": "Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images", "text": "We introduce Embed to Control (E2C), a method for model learning and control of non-linear dynamical systems from raw pixel images. E2C consists of a deep generative model, belonging to the family of variational autoencoders, that learns to generate image trajectories from a latent space in which the dynamics is constrained to be locally linear. Our model is derived directly from an optimal control formulation in latent space, supports long-term prediction of image sequences and exhibits strong performance on a variety of complex control problems."}
{"_id": "39b7007e6f3dd0744833f292f07ed77973503bfd", "title": "Data-Efficient Hierarchical Reinforcement Learning", "text": "Hierarchical reinforcement learning (HRL) is a promising approach to extend traditional reinforcement learning (RL) methods to solve more complex tasks. Yet, the majority of current HRL methods require careful task-specific design and on-policy training, making them difficult to apply in real-world scenarios. In this paper, we study how we can develop HRL algorithms that are general, in that they do not make onerous additional assumptions beyond standard RL algorithms, and efficient, in the sense that they can be used with modest numbers of interaction samples, making them suitable for real-world problems such as robotic control. For generality, we develop a scheme where lower-level controllers are supervised with goals that are learned and proposed automatically by the higher-level controllers. To address efficiency, we propose to use off-policy experience for both higherand lower-level training. This poses a considerable challenge, since changes to the lower-level behaviors change the action space for the higher-level policy, and we introduce an off-policy correction to remedy this challenge. This allows us to take advantage of recent advances in off-policy model-free RL to learn both higherand lower-level policies using substantially fewer environment interactions than on-policy algorithms. We term the resulting HRL agent HIRO and find that it is generally applicable and highly sample-efficient. Our experiments show that HIRO can be used to learn highly complex behaviors for simulated robots, such as pushing objects and utilizing them to reach target locations,1 learning from only a few million samples, equivalent to a few days of real-time interaction. In comparisons with a number of prior HRL methods, we find that our approach substantially outperforms previous state-of-the-art techniques.2"}
{"_id": "5b44f587c4c7611d04e304fd7fa37648338d0cbf", "title": "Data-Efficient Learning of Feedback Policies from Image Pixels using Deep Dynamical Models", "text": "Data-efficient reinforcement learning (RL) in continuous state-action spaces using very high-dimensional observations remains a key challenge in developing fully autonomous systems. We consider a particularly important instance of this challenge, the pixels-to-torques problem, where an RL agent learns a closed-loop control policy (\u201ctorques\u201d) from pixel information only. We introduce a data-efficient, model-based reinforcement learning algorithm that learns such a closed-loop policy directly from pixel information. The key ingredient is a deep dynamical model for learning a low-dimensional feature embedding of images jointly with a predictive model in this low-dimensional feature space. Joint learning is crucial for longterm predictions, which lie at the core of the adaptive nonlinear model predictive control strategy that we use for closed-loop control. Compared to state-of-the-art RL methods for continuous states and actions, our approach learns quickly, scales to high-dimensional state spaces, is lightweight and an important step toward fully autonomous end-to-end learning from pixels to torques."}
{"_id": "d06ae5effef2922e7ee24a4b0f8274486f0a6523", "title": "Optimal number of response categories in rating scales: reliability, validity, discriminating power, and respondent preferences.", "text": "Using a self-administered questionnaire, 149 respondents rated service elements associated with a recently visited store or restaurant on scales that differed only in the number of response categories (ranging from 2 to 11) and on a 101-point scale presented in a different format. On several indices of reliability, validity, and discriminating power, the two-point, three-point, and four-point scales performed relatively poorly, and indices were significantly higher for scales with more response categories, up to about 7. Internal consistency did not differ significantly between scales, but test-retest reliability tended to decrease for scales with more than 10 response categories. Respondent preferences were highest for the 10-point scale, closely followed by the seven-point and nine-point scales. Implications for research and practice are discussed."}
{"_id": "b51ca791c7ca1b4a6fdea9f73425014287da4aad", "title": "FPGA vs. GPU for sparse matrix vector multiply", "text": "Sparse matrix-vector multiplication (SpMV) is a common operation in numerical linear algebra and is the computational kernel of many scientific applications. It is one of the original and perhaps most studied targets for FPGA acceleration. Despite this, GPUs, which have only recently gained both general-purpose programmability and native support for double precision floating-point arithmetic, are viewed by some as a more effective platform for SpMV and similar linear algebra computations. In this paper, we present an analysis comparing an existing GPU SpMV implementation to our own, novel FPGA implementation. In this analysis, we describe the challenges faced by any SpMV implementation, the unique approaches to these challenges taken by both FPGA and GPU implementations, and their relative performance for SpMV."}
{"_id": "0437f6a8c291dba214fc8567e5a90fb8a5635722", "title": "The ArchC Architecture Description Language and Tools", "text": "This paper presents an architecture description language (ADL) called ArchC, which is an open-source SystemC-based language that is specialized for processor architecture description. Its main goal is to provide enough information, at the right level of abstraction, in order to allow users to explore and verify new architectures, by automatically generating software tools like simulators and co-verification interfaces. ArchC\u2019s key features are a storage-based co-verification mechanism that automatically checks the consistency of a refined ArchC model against a reference (functional) description, memory hierarchy modeling capability, the possibility of integration with other SystemC IPs and the automatic generation of high-level SystemC simulators and assemblers. We have used ArchC to synthesize both functional and cycle-based simulators for the MIPS and Intel 8051 processors, as well as functional models of architectures like SPARC V8, TMS320C62x, XScale and PowerPC."}
{"_id": "1a7eb7efcb66be44657424a055279c8c176f0a8e", "title": "Sign Language Recognition using Sequential Pattern Trees", "text": "This paper presents a novel, discriminative, multi-class classifier based on Sequential Pattern Trees. It is efficient to learn, compared to other Sequential Pattern methods, and scalable for use with large classifier banks. For these reasons it is well suited to Sign Language Recognition. Using deterministic robust features based on hand trajectories, sign level classifiers are built from sub-units. Results are presented both on a large lexicon single signer data set and a multi-signer Kinect\u2122 data set. In both cases it is shown to out perform the non-discriminative Markov model approach and be equivalent to previous, more costly, Sequential Pattern (SP) techniques."}
{"_id": "c94b5881427e876cf5a20c43823ba763745b3435", "title": "Hardware-In-the-Loop Simulation of Electric Vehicle Powertrain System", "text": "This paper presents a design procedure of HardwareIn-the-Loop (HIL) for Electric Vehicle powertrain system modeling and simulation. First, simulation is performed in MATLAB/Simulink platform. Then, an Electric Vehicle Powertrain Hardware-In-the-Loop system including BMS (Battery Manage System), MCU (Motor Control Unit) and VMS (vehicle management system) is built based on the dSPACE with MATLAB/Simulink, along with the RTW toolbox (real time workshop). Through this system, the Electric Vehicle Control strategy is simulated and validated, and the results of offline simulation & real time simulation with laboratory prototype are presented. Keywords-Hardware In the Loop; Electric Vehicle; Simulation; dSPACE; DSP"}
{"_id": "645627794d14c7298a6adc569fbdaf3993719866", "title": "Introduction: Hybrid intelligent adaptive systems", "text": "This issue of International Journal of Intelligent Systems includes extended versions of selected papers from the 4th International Conference on Soft Computing, held in Iizuka, Japan, September 30]October 5, 1996. The topic of the special issue is \u2018\u2018Hybrid Intelligent Adaptive Systems.\u2019\u2019 Research on hybrid systems is one of the key issues of developing intelligent systems and it can apply a wide range of tools, including artificial neural networks, fuzzy logic, knowledge-based systems, genetic algorithms, evolutionary computation, and chaos models. The papers in this issue have been carefully reviewed and modified to give the readers a comprehensive overview of theoretical aspects, design, and implementation issues of hybrid intelligent adaptive systems. In the first paper by Kasabov and Kozma, a general framework of developing hybrid, intelligent, and adaptive systems is given. This work develops multimodular, fuzzy neural network systems and applies it to phoneme-based speech recognition. The second paper by Miyata, Furuhashi, and Uchikawa proposes fuzzy abductive inference with degrees of manifestations. This method infers irredundant combinations of candidates with degrees of belief for the manifestations. It is also demonstrated that the results of the inference method are applicable to medical diagnosis and system fault detection. Cho adopts ideas of artificial life in his work to develop evolutionary neural networks. The introduced modular neural network can evole its structure autonomously using a structural genetic code. The effectiveness of the method is demonstrated on the example of handwritten digit recognition. Feuring and Lippe study theoretical aspects of fuzzy neural networks. They propose a training algorithm for fuzzy neural networks that satisfies a certain goodness criterion. The second part of the special issue contains articles related to time series analysis and systems control. Yamazaki, Kang, and Ochiai introduce a hierarchical neural network system for adaptive, intelligent control, based on the analogy with the human thinking process. The optimum parameter space of the neural network system is found by a self-controllable algorithm, which can lead to either equilibrium or to nonequilibrium, chaotic behavior. The results of this study are applied, e.g., to laser beam analysis, semiconductor design, and design of magnetic devices. The work by Kozma, Kasabov, Kim, and Cohen presents a chaotic neuro-fuzzy method for time series analysis and process control. The"}
{"_id": "8c0e26f1ba46d5af376fa99c608e186c23ab85b5", "title": "A 6LoWPAN Accelerator for Internet of Things Endpoint Devices", "text": "The Internet of Things (IoT) is revolutionizing the Internet of the future and the way smart devices, people, and processes interact with each other. Challenges in endpoint devices (EDs) communication are due not only to the security and privacy-related requirements but also due to the ever-growing amount of data transferred over the network that needs to be handled, which can considerably reduce the system availability to perform other tasks. This paper presents an IPv6 over low power wireless personal area networks (6LoWPAN) accelerator for the IP-based IoT networks, which is able to process and filter IPv6 packets received by an IEEE 802.15.4 radio transceiver without interrupting the microcontroller. It offers nearly 13.24% of performance overhead reduction for the packet processing and the address filtering tasks, while guaranteeing full system availability when unwanted packets are received and discarded. The contribution is meant to be deployed on heterogeneous EDs at the network edge, which include on the same architecture, field-programmable gate array (FPGA) technology beside a microcontroller and an IEEE 802.15.4-compliant radio transceiver. Performed evaluations include the low FPGA hardware resources utilization and the energy consumption analysis when the processing and accepting/rejection of a received packet is performed by the 6LoWPAN accelerator."}
{"_id": "dde4b9ccbbcfd589c963aa8e5aec3f5fa236be65", "title": "Real-time Incremental Speech-to-Speech Translation of Dialogs", "text": "In a conventional telephone conversation between two speakers of the same language, the interaction is real-time and the speakers process the information stream incrementally. In this work, we address the problem of incremental speech-to-speech translation (S2S) that enables cross-lingual communication between two remote participants over a telephone. We investigate the problem in a novel real-time Session Initiation Protocol (SIP) based S2S framework. The speech translation is performed incrementally based on generation of partial hypotheses from speech recognition. We describe the statistical models comprising the S2S system and the SIP architecture for enabling real-time two-way cross-lingual dialog. We present dialog experiments performed in this framework and study the tradeoff in accuracy versus latency in incremental speech translation. Experimental results demonstrate that high quality translations can be generated with the incremental approach with approximately half the latency associated with nonincremental approach."}
{"_id": "be31ac03bf153f26c523a34a095168bc149b4628", "title": "An Edge Driven Wavelet Frame Model for Image Restoration", "text": "Wavelet frame systems are known to be effective in capturing singularities from noisy and degraded images. In this paper, we introduce a new edge driven wavelet frame model for image restoration by approximating images as piecewise smooth functions. With an implicit representation of image singularities sets, the proposed model inflicts different strength of regularization on smooth and singular image regions and edges. The proposed edge driven model is robust to both image approximation and singularity estimation. The implicit formulation also enables an asymptotic analysis of the proposed models and a rigorous connection between the discrete model and a general continuous variational model. Finally, numerical results on image inpainting and deblurring show that the proposed model is compared favorably against several popular image restoration models."}
{"_id": "dff4fa5374648fbf838fd10dc1bc5b8cf7b60356", "title": "Long-term stability of anterior open-bite treatment by intrusion of maxillary posterior teeth.", "text": "INTRODUCTION\nAnterior open bite results from the combined influences of skeletal, dental, functional, and habitual factors. The long-term stability of anterior open bite corrected with absolute anchorage has not been thoroughly investigated. The purpose of this study was to examine the long-term stability of anterior open-bite correction with intrusion of the maxillary posterior teeth.\n\n\nMETHODS\nNine adults with anterior open bite were treated by intrusion of the maxillary posterior teeth. Lateral cephalographs were taken immediately before and after treatment, 1 year posttreatment, and 3 years posttreatment to evaluate the postintrusion stability of the maxillary posterior teeth.\n\n\nRESULTS\nOn average, the maxillary first molars were intruded by 2.39 mm (P<0.01) during treatment and erupted by 0.45 mm (P<0.05) at the 3-year follow-up, for a relapse rate of 22.88%. Eighty percent of the total relapse of the intruded maxillary first molars occurred during the first year of retention. Incisal overbite increased by a mean of 5.56 mm (P<0.001) during treatment and decreased by a mean of 1.20 mm (P<0.05) by the end of the 3-year follow-up period, for a relapse rate of 17.00%. Incisal overbite significantly relapsed during the first year of retention (P<0.05) but did not exhibit significant recurrence between the 1-year and 3-year follow-ups.\n\n\nCONCLUSIONS\nMost relapse occurred during the first year of retention. Thus, it is reasonable to conclude that the application of an appropriate retention method during this period clearly enhances the long-term stability of the treatment."}
{"_id": "4ff04ef961094ba08587cb572eccc5382ac63d8f", "title": "The orbitofrontal cortex and reward.", "text": "The primate orbitofrontal cortex contains the secondary taste cortex, in which the reward value of taste is represented. It also contains the secondary and tertiary olfactory cortical areas, in which information about the identity and also about the reward value of odors is represented. The orbitofrontal cortex also receives information about the sight of objects and faces from the temporal lobe cortical visual areas, and neurons in it learn and reverse the visual stimulus to which they respond when the association of the visual stimulus with a primary reinforcing stimulus (such as a taste reward) is reversed. However, the orbitofrontal cortex is involved in representing negative reinforcers (punishers) too, such as aversive taste, and in rapid stimulus-reinforcement association learning for both positive and negative primary reinforcers. In complementary neuroimaging studies in humans it is being found that areas of the orbitofrontal cortex (and connected subgenual cingulate cortex) are activated by pleasant touch, by painful touch, by rewarding and aversive taste, and by odor. Damage to the orbitofrontal cortex in humans can impair the learning and reversal of stimulus- reinforcement associations, and thus the correction of behavioral responses when these are no longer appropriate because previous reinforcement contingencies change. This evidence thus shows that the orbitofrontal cortex is involved in decoding and representing some primary reinforcers such as taste and touch; in learning and reversing associations of visual and other stimuli to these primary reinforcers; and in controlling and correcting reward-related and punishment-related behavior, and thus in emotion."}
{"_id": "4a733055633db751fc76dad7d897f9bc74f2dd48", "title": "Adverse Effects Associated with Protein Intake above the Recommended Dietary Allowance for Adults", "text": "Background. While high-protein consumption-above the current recommended dietary allowance for adults (RDA: 0.8\u2009g protein/kg body weight/day)-is increasing in popularity, there is a lack of data on its potential adverse effects. Objective. To determine the potential disease risks due to high protein/high meat intake obtained from diet and/or nutritional supplements in humans. Design. Review. Subjects. Healthy adult male and female subjects. Method. In order to identify relevant studies, the electronic databases, Medline and Google Scholar, were searched using the terms:\"high protein diet,\" \"protein overconsumption,\" \"protein overuse,\" and \"high meat diet.\" Papers not in English were excluded. Further studies were identified by citations in retrieved papers. Results. 32 studies (21 experimental human studies and 11 reviews) were identified. The adverse effects associated with long-term high protein/high meat intake in humans were (a) disorders of bone and calcium homeostasis, (b) disorders of renal function, (c) increased cancer risk, (d) disorders of liver function, and (e) precipitated progression of coronary artery disease. Conclusions. The findings of the present study suggest that there is currently no reasonable scientific basis in the literature to recommend protein consumption above the current RDA (high protein diet) for healthy adults due to its potential disease risks. Further research needs to be carried out in this area, including large randomized controlled trials."}
{"_id": "cf92dcda0e59ec6f191122b3feab92b0f0463dc2", "title": "Automated Lip Reading Technique for Password Authentication", "text": "As technology is starting to conquer every strata of the society, the war for protecting confidential data from being intercepted is growing intense by the hour. Biometricsecurity stands out as the most secure form of authentication in high security zones such as defense, space missions and research head-quarters. Today, forms of password-protection range from facerecognition to retina -scan. Here, we develop a system for recognizing and converting lip movement of an individual into a recognized pattern which is set as a password for the system using image-processing. This system is also a break-through for providing people with motor-disabilities a robust and easy way of protecting their data. By capturing and tracing the successive movement of lips during speech, the corresponding word can be detected. The captured images are represented as points on a two-dimensional flat manifold that enables us to efficiently define the pronunciation of each word and thereby analyze or synthesize the motion of the lips. The motion of lips helps us track the word syllable-by-syllable. With multiple levels of image processing, it becomes possible to set the matching parameters to a very close value, hence not allowing any bruteforce or other infamous hacking techniques to break into the user\u2019s system. This lip reading technique also serves applications in areas where communication via direct speech is not possible."}
{"_id": "147db0417e95ce8039ce18671260096c1bc046b8", "title": "Efficient Deep Learning on Multi-Source Private Data", "text": "Machine learning models benefit from large and diverse datasets. Using such datasets, however, often requires trusting a centralized data aggregator. For sensitive applications like healthcare and finance this is undesirable as it could compromise patient privacy or divulge trade secrets. Recent advances in secure and privacy-preserving computation, including trusted hardware enclaves and differential privacy, offer a way for mutually distrusting parties to efficiently train a machine learning model without revealing the training data. In this work, we introduce Myelin, a deep learning framework which combines these privacy-preservation primitives, and use it to establish a baseline level of performance for fully private machine learning."}
{"_id": "8f0849d8f56ec1e2b440bf9e8832c42a9434720a", "title": "The crisis in antibiotic resistance.", "text": "The synthesis of large numbers of antibiotics over the past three decades has caused complacency about the threat of bacterial resistance. Bacteria have become resistant to antimicrobial agents as a result of chromosomal changes or the exchange of the exchange of genetic material via plasmids and transposons. Streptococcus pneumoniae, Streptococcus pyogenes, and staphylococci, organisms that cause respiratory and cutaneous infections, and members of the Enterobacteriaceae and Pseudomonas families, organisms that cause diarrhea, urinary infection, and sepsis, are now resistant to virtually all of the older antibiotics. The extensive use of antibiotics in the community and hospitals has fueled this crisis. Mechanisms such as antibiotic control programs, better hygiene, and synthesis of agents with improved antimicrobial activity need to be adopted in order to limit bacterial resistance."}
{"_id": "396d2184364e45601f3b93684b75de691ad242fe", "title": "A case for dynamic frequency tuning in on-chip networks", "text": "Performance and power are the first order design metrics for Network-on-Chips (NoCs) that have become the de-facto standard in providing scalable communication backbones for multicores/CMPs. However, NoCs can be plagued by higher power consumption and degraded throughput if the network and router are not designed properly. Towards this end, this paper proposes a novel router architecture, where we tune the frequency of a router in response to network load to manage both performance and power. We propose three dynamic frequency tuning techniques, FreqBoost, FreqThrtl and FreqTune, targeted at congestion and power management in NoCs. As enablers for these techniques, we exploit Dynamic Voltage and Frequency Scaling (DVFS) and the imbalance in a generic router pipeline through time stealing. Experiments using synthetic workloads on a 8x8 wormhole-switched mesh interconnect show that FreqBoost is a better choice for reducing average latency (maximum 40%) while, FreqThrtl provides the maximum benefits in terms of power saving and energy delay product (EDP). The FreqTune scheme is a better candidate for optimizing both performance and power, achieving on an average 36% reduction in latency, 13% savings in power (up to 24% at high load), and 40% savings (up to 70% at high load) in EDP. With application benchmarks, we observe IPC improvement up to 23% using our design. The performance and power benefits also scale for larger NoCs."}
{"_id": "8ed7d136089f3e164b98414b5826d5850f5bcb57", "title": "Deep CNNs With Spatially Weighted Pooling for Fine-Grained Car Recognition", "text": "Fine-grained car recognition aims to recognize the category information of a car, such as car make, car model, or even the year of manufacture. A number of recent studies have shown that a deep convolutional neural network (DCNN) trained on a large-scale data set can achieve impressive results at a range of generic object classification tasks. In this paper, we propose a spatially weighted pooling (SWP) strategy, which considerably improves the robustness and effectiveness of the feature representation of most dominant DCNNs. More specifically, the SWP is a novel pooling layer, which contains a predefined number of spatially weighted masks or pooling channels. The SWP pools the extracted features of DCNNs with the guidance of its learnt masks, which measures the importance of the spatial units in terms of discriminative power. As the existing methods that apply uniform grid pooling on the convolutional feature maps of DCNNs, the proposed method can extract the convolutional features and generate the pooling channels from a single DCNN. Thus minimal modification is needed in terms of implementation. Moreover, the parameters of the SWP layer can be learned in the end-to-end training process of the DCNN. By applying our method to several fine-grained car recognition data sets, we demonstrate that the proposed method can achieve better performance than recent approaches in the literature. We advance the state-of-the-art results by improving the accuracy from 92.6% to 93.1% on the Stanford Cars-196 data set and 91.2% to 97.6% on the recent CompCars data set. We have also tested the proposed method on two additional large-scale data sets with impressive results observed."}
{"_id": "e7396a3c27fa3c811b0fc383c92c6b6bf3f4fb7d", "title": "The Art of Multiple Sequence Alignment in R", "text": "Figure 1: The art of multiple sequence alignment. This document is intended to illustrate the art of multiple sequence alignment in R using DECIPHER. Even though its beauty is often concealed, multiple sequence alignment is a form of art in more ways than one. Take a look at Figure 1 for an illustration of what is happening behind the scenes during multiple sequence alignment. The practice of sequence alignment is one that requires a degree of skill, and it is that art which this vignette intends to convey. It is simply not enough to \u201cplug\u201d sequences into a multiple sequence aligner and blindly trust the result. An appreciation for the art as well a careful consideration of the results are required. What really is multiple sequence alignment, and is there a single correct alignment? Generally speaking, alignment seeks to perform the act of taking multiple divergent biological sequences of the same \u201ctype\u201d and fitting them to a form that reflects some shared quality. That quality"}
{"_id": "409802b3944d8a41b2f45a02d3dded1b9b672019", "title": "Incorporating query-specific feedback into learning-to-rank models", "text": "Relevance feedback has been shown to improve retrieval for a broad range of retrieval models. It is the most common way of adapting a retrieval model for a specific query. In this work, we expand this common way by focusing on an approach that enables us to do query-specific modification of a retrieval model for learning-to-rank problems. Our approach is based on using feedback documents in two ways: 1) to improve the retrieval model directly and 2) to identify a subset of training queries that are more predictive than others. Experiments with the Gov2 collection show that this approach can obtain statistically significant improvements over two baselines; learning-to-rank (SVM-rank) with no feedback and learning-to-rank with standard relevance feedback."}
{"_id": "ed26822ea4354567ef162750c3003d8c16a47cbe", "title": "EuroSense: Automatic Harvesting of Multilingual Sense Annotations from Parallel Text", "text": "Parallel corpora are widely used in a variety of Natural Language Processing tasks, from Machine Translation to cross-lingual Word Sense Disambiguation, where parallel sentences can be exploited to automatically generate high-quality sense annotations on a large scale. In this paper we present EUROSENSE, a multilingual sense-annotated resource based on the joint disambiguation of the Europarl parallel corpus, with almost 123 million sense annotations for over 155 thousand distinct concepts and entities from a languageindependent unified sense inventory. We evaluate the quality of our sense annotations intrinsically and extrinsically, showing their effectiveness as training data for Word Sense Disambiguation."}
{"_id": "2bcdb3d9b716dbf4bf81e5b170f3de9232fc0132", "title": "Gradient Descent Provably Optimizes Over-parameterized Neural Networks", "text": "One of the mysteries in the success of neural networks is randomly initialized first order methods like gradient descent can achieve zero training loss even though the objective function is non-convex and non-smooth. This paper demystifies this surprising phenomenon for two-layer fully connected ReLU activated neural networks. For anm hidden node shallow neural network with ReLU activation and n training data, we show as long asm is large enough and no two inputs are parallel, randomly initialized gradient descent converges to a globally optimal solution at a linear convergence rate for the quadratic loss function. Our analysis relies on the following observation: over-parameterization and random initialization jointly restrict every weight vector to be close to its initialization for all iterations, which allows us to exploit a strong convexity-like property to show that gradient descent converges at a global linear rate to the global optimum. We believe these insights are also useful in analyzing deep models and other first order methods."}
{"_id": "a0dc8911c47c3d0e1643ecfaa7032cee6fb5eb64", "title": "Street-to-shop: Cross-scenario clothing retrieval via parts alignment and auxiliary set", "text": "We address a cross-scenario clothing retrieval problem- given a daily human photo captured in general environment, e.g., on street, finding similar clothing in online shops, where the photos are captured more professionally and with clean background. There are large discrepancies between daily photo scenario and online shopping scenario. We first propose to alleviate the human pose discrepancy by locating 30 human parts detected by a well trained human detector. Then, founded on part features, we propose a two-step calculation to obtain more reliable one-to-many similarities between the query daily photo and online shopping photos: 1) the within-scenario one-to-many similarities between a query daily photo and an extra auxiliary set are derived by direct sparse reconstruction; 2) by a cross-scenario many-to-many similarity transfer matrix inferred offline from the auxiliary set and the online shopping set, the reliable cross-scenario one-to-many similarities between the query daily photo and all online shopping photos are obtained."}
{"_id": "047175fb23f6f152d86e81100ba7140dd2847636", "title": "Robust Subspace Segmentation by Low-Rank Representation", "text": "We propose low-rank representation (LRR) to segment data drawn from a union of multiple linear (or affine) subspaces. Given a set of data vectors, LRR seeks the lowestrank representation among all the candidates that represent all vectors as the linear combination of the bases in a dictionary. Unlike the well-known sparse representation (SR), which computes the sparsest representation of each data vector individually, LRR aims at finding the lowest-rank representation of a collection of vectors jointly. LRR better captures the global structure of data, giving a more effective tool for robust subspace segmentation from corrupted data. Both theoretical and experimental results show that LRR is a promising tool for subspace segmentation."}
{"_id": "3ffe34c47874451b3aa0e8a83aa97febe2985225", "title": "Reducing the energy cost of human walking using an unpowered exoskeleton", "text": "With efficiencies derived from evolution, growth and learning, humans are very well-tuned for locomotion. Metabolic energy used during walking can be partly replaced by power input from an exoskeleton, but is it possible to reduce metabolic rate without providing an additional energy source? This would require an improvement in the efficiency of the human\u2013machine system as a whole, and would be remarkable given the apparent optimality of human gait. Here we show that the metabolic rate of human walking can be reduced by an unpowered ankle exoskeleton. We built a lightweight elastic device that acts in parallel with the user's calf muscles, off-loading muscle force and thereby reducing the metabolic energy consumed in contractions. The device uses a mechanical clutch to hold a spring as it is stretched and relaxed by ankle movements when the foot is on the ground, helping to fulfil one function of the calf muscles and Achilles tendon. Unlike muscles, however, the clutch sustains force passively. The exoskeleton consumes no chemical or electrical energy and delivers no net positive mechanical work, yet reduces the metabolic cost of walking by 7.2 \u00b1 2.6% for healthy human users under natural conditions, comparable to savings with powered devices. Improving upon walking economy in this way is analogous to altering the structure of the body such that it is more energy-effective at walking. While strong natural pressures have already shaped human locomotion, improvements in efficiency are still possible. Much remains to be learned about this seemingly simple behaviour."}
{"_id": "b6db9d4ad617bde1a442cae69c7327e7c1b1abee", "title": "A spelling correction model for end-to-end speech recognition", "text": "Attention-based sequence-to-sequence models for speech recognition jointly train an acoustic model, language model (LM), and alignment mechanism using a single neural network and require only parallel audio-text pairs. Thus, the language model component of the end-to-end model is only trained on transcribed audio-text pairs, which leads to performance degradation especially on rare words. While there have been a variety of work that look at incorporating an external LM trained on text-only data into the end-to-end framework, none of them have taken into account the characteristic error distribution made by the model. In this paper, we propose a novel approach to utilizing text-only data, by training a spelling correction (SC) model to explicitly correct those errors. On the LibriSpeech dataset, we demonstrate that the proposed model results in an 18.6% relative improvement in WER over the baseline model when directly correcting top ASR hypothesis, and a 29.0% relative improvement when further rescoring an expanded n-best list using an external LM."}
{"_id": "c60b517516e3839ee29ebec6f4a90674329b01ff", "title": "Yield optimization using advanced statistical correlation methods", "text": "This work presents a novel yield optimization methodology based on establishing a strong correlation between a group of fails and an adjustable process parameter. The core of the methodology comprises three advanced statistical correlation methods. The first method performs multivariate correlation analysis to uncover linear correlation relationships between groups of fails and measurements of a process parameter. The second method partitions a dataset into multiple subsets and tries to maximize the average of the correlations each calculated based on one subset. The third method performs statistical independence test to evaluate the risk of adjusting a process parameter. The methodology was applied to an automotive product line to improve yield. Five process parameter changes were discovered which led to significant improvement of the yield and consequently significant reduction of the yield fluctuation."}
{"_id": "84de97089bfe42d5ff6521ffaa36f746b9cfcc0a", "title": "Development trends for human monoclonal antibody therapeutics", "text": "Fully human monoclonal antibodies (mAbs) are a promising and rapidly growing category of targeted therapeutic agents. The first such agents were developed during the 1980s, but none achieved clinical or commercial success. Advances in technology to generate the molecules for study \u2014 in particular, transgenic mice and yeast or phage display \u2014 renewed interest in the development of human mAbs during the 1990s. In 2002, adalimumab became the first human mAb to be approved by the US Food and Drug Administration (FDA). Since then, an additional six human mAbs have received FDA approval: panitumumab, golimumab, canakinumab, ustekinumab, ofatumumab and denosumab. In addition, 3 candidates (raxibacumab, belimumab and ipilimumab) are currently under review by the FDA, 7 are in Phase III studies and 81 are in either Phase I or II studies. Here, we analyse data on 147 human mAbs that have entered clinical study to highlight trends in their development and approval, which may help inform future studies of this class of therapeutic agents."}
{"_id": "ec1df2ae7a2754c2c763d6817fc5358ceb71bd26", "title": "Changes in structural and functional connectivity among resting-state networks across the human lifespan", "text": "At rest, the brain's sensorimotor and higher cognitive systems engage in organized patterns of correlated activity forming resting-state networks. An important empirical question is how functional connectivity and structural connectivity within and between resting-state networks change with age. In this study we use network modeling techniques to identify significant changes in network organization across the human lifespan. The results of this study demonstrate that whole-brain functional and structural connectivity both exhibit reorganization with age. On average, functional connections within resting-state networks weaken in magnitude while connections between resting-state networks tend to increase. These changes can be localized to a small subset of functional connections that exhibit systematic changes across the lifespan. Collectively, changes in functional connectivity are also manifest at a system-wide level, as components of the control, default mode, saliency/ventral attention, dorsal attention, and visual networks become less functionally cohesive, as evidenced by decreased component modularity. Paralleling this functional reorganization is a decrease in the density and weight of anatomical white-matter connections. Hub regions are particularly affected by these changes, and the capacity of those regions to communicate with other regions exhibits a lifelong pattern of decline. Finally, the relationship between functional connectivity and structural connectivity also appears to change with age; functional connectivity along multi-step structural paths tends to be stronger in older subjects than in younger subjects. Overall, our analysis points to age-related changes in inter-regional communication unfolding within and between resting-state networks."}
{"_id": "1c9bbf45c8a1c314bd30dca723238f941b4bcd29", "title": "A Nested-Graph Model for the Representation and Manipulation of Complex Objects", "text": "Three recent trends in database research are object-oriented and deductive databases and graph-based user interfaces. We draw these trends together in a data model we call the Hypernode Model. The single data structure of this model is the hypernode, a graph whose nodes can themselves be graphs. Hypernodes are typed, and types, too, are nested graphs. We give the theoretical foundations of hypernodes and types, and we show that type checking is tractable. We show also how conventional type-forming operators can be simulated by our graph types, including cyclic types. The Hypernode Model comes equipped with a rule-based query language called Hyperlog, which is complete with respect to computation and update. We define the operational semantics of Hyperlog and show that the evaluation can be performed efficiently. We discuss also the use of Hyperlog for supporting database browsing, an essential feature of Hypertext databases. We compare our work with other graph-based data models\u2014unlike previous graph-based models, the Hypernode Model provides inherent support for data abstraction via its nesting of graphs. Finally, we briefly discuss the implementation of a DBMS based on the Hypernode Model."}
{"_id": "bc93ff646e6f863d885e609db430716d7590338f", "title": "Fast Compact City Modeling for Navigation Pre-Visualization", "text": "Nowadays, GPS-based car navigation systems mainly use speech and aerial views of simplified road maps to guide drivers to their destination. However, drivers often experience difficulties in linking the simple 2D aerial map with the visual impression that they get from the real environment, which is inherently ground-level based. Therefore, supplying realistically textured 3D city models at ground-level proves very useful for pre-visualizing an upcoming traffic situation. Because this pre-visualization can be rendered from the expected future viewpoints of the driver, the latter will more easily understand the required maneuver. 3D city models can be reconstructed from the imagery recorded by surveying vehicles. The vastness of image material gathered by these vehicles, however, puts extreme demands on vision algorithms to ensure their practical usability. Algorithms need to be as fast as possible and should result in compact, memory efficient 3D city models for future ease of distribution and visualization. For the considered application, these are not contradictory demands. Simplified geometry assumptions can speed up vision algorithms while automatically guaranteeing compact geometry models. We present a novel city modeling framework which builds upon this philosophy to create 3D content at high speed which could allow for pre-visualization of any conceivable traffic situation by car navigation modules."}
{"_id": "bb47383009e740d2f4f0e2095f705c2ecab4d184", "title": "Design of impedance matching circuits for RF energy harvesting systems", "text": "This paper presents a systematic design methodology for impedance matching circuits of an RF energy harvester to maximize the harvested energy for a range of input power levels with known distribution. Design of input matching networks for RF rectifier differs from those for traditional RF circuits such as low-noise amplifier because the transistors in the RF rectifiers operate in different regions. In this work, it is shown that the input impedance of the rectifier estimated based on small signal simulation of transistors at the peak of input signal in the rectifier when the transistors are biased with stable DC operating voltage provides the largest harvested energy from a fixed input power. For variable input power levels, a matching network selection strategy is proposed to maximize expected harvested energy given the probability density distribution of input power levels. As an experimental sample, an RF energy harvester circuit is designed to maximize the harvested energy in the 902\u2013928 MHz band employing an off-chip impedance matching circuit designed using the proposed methodology. The proposed RF energy harvester exhibits a measured maximum power conversion efficiency (PCE) of 32% at \u221215 dBm (32 \u03bcW) and delivers an output DC voltage of 3.2 V to a 1 M\u03a9 load."}
{"_id": "fa733019ed69133849acb2abbb7a20927da4b0a3", "title": "Fast and Accurate Calibration of a Kinect Sensor", "text": "The article describes a new algorithm for calibrating a Kinect sensor that achieves high accuracy using only 6 to 10 image-disparity pairs of a planar checkerboard pattern. The method estimates the projection parameters for both color and depth cameras, the relative pose between them, and the function that converts kinect disparity units (kdu) into metric depth. We build on the recent work of Herrera et. al [8] that uses a large number of input frames and multiple iterative minimization steps for obtaining very accurate calibration results. We propose several modifications to this estimation pipeline that dramatically improve stability, usability, and runtime. The modifications consist in: (i) initializing the relative pose using a new minimal, optimal solution for registering 3D planes across different reference frames, (ii) including a metric constraint during the iterative refinement to avoid a drift in the disparity to depth conversion, and (iii) estimating the parameters of the depth distortion model in an open-loop post-processing step. Comparative experiments show that our pipeline can achieve a calibration accuracy similar to [8] while using less than 1/6 of the input frames and running in 1/30 of the time."}
{"_id": "d287e56f57d583ae5404654b7ecc27b3aa6dc5aa", "title": "Ring oscillators for CMOS process tuning and variability control", "text": "Test structures utilizing ring oscillators to monitor MOSFET ac characteristics for digital CMOS circuit applications are described. The measurements provide information on the average behavior of sets of a few hundred MOSFETs under high speed switching conditions. The design of the ring oscillators is specifically tailored for process centering and monitoring of variability in circuit performance in the manufacturing line as well as in the product. The delay sensitivity to key MOSFET parameter variations in a variety of ring oscillator designs is studied using a compact model for partially depleted silicon on insulator(PD-SOI) technology, but the analysis is equally valid for conventional bulk Si technology. Examples of hardware data illustrating the use of this methodology are taken primarily from experimental hardware in the 90-nm CMOS technology node in PD-SOI. The design and data analysis techniques described here allow very rapid investigation of the sources of variations in circuit delays."}
{"_id": "0c53bdd975f351ebe88e5cc130df1e9097f69e95", "title": "The Power of Word Clusters for Text Classification", "text": "The recently introducedInformation Bottleneck method [21] provides an information theoretic framework, for extr acting features of one variable, that are relevant for the values of an ther variable. Several previous works already suggeste d applying this method for document clustering, gene expression data a nalysis, spectral analysis and more. In this work we present a novel implementation of this method for supervised text cla ssification. Specifically, we apply the information bottlen eck method to find word-clusters that preserve the information a bout document categories and use these clusters as features for classification. Previous work [1] used a similar clustering procedure to show that word-clusters can significantly redu c the feature space dimensionality, with only a minor change in cl assification accuracy. In this work we present similar resul ts and go further to show that when the training sample is small word clusters can yield significant improvement in classificatio n accuracy (up to18%) over the performance using the words directly."}
{"_id": "9d3b8af14903250d57107838e8d11f77a2913fc6", "title": "PSSE: An Architecture For A Personalized Semantic Search Engine", "text": "Semantic technologies promise a next generation of semantic search engines. General search engines don\u2019t take into consideration the semantic relationships between query terms and other concepts that might be significant to user. Thus, semantic web vision and its core ontologies are used to overcome this defect. The order in which these results are ranked is also substantial. Moreover, user preferences and interests must be taken into consideration so as to provide user a set of personalized results. In this paper we propose, an architecture for a Personalized Semantic Search Engine (PSSE). PSSE is a crawler-based search engine that makes use of multi-crawlers to collect resources from both semantic as well as traditional web resources. In order for the system to reduce processing time, web pages' graph is clustered, then clusters are annotated using document annotation agents that work in parallel. Annotation agents use methods of ontology matching to find resources of the semantic web as well as means of information extraction techniques in order to provide a well description of HTML documents. System ranks resources based on a final score that's calculated based on traditional link analysis, content analysis and a weighted user profile for more personalized results. We have a belief that the merge of these techniques together enhances search results."}
{"_id": "184500ebdb349acaf5cdbeb74da1b335816656fd", "title": "On the Philosophy of Bitcoin/Blockchain Technology: Is it a Chaotic, Complex System?", "text": "The Philosophy of Blockchain Technology is concerned, among other things, with its ontology, how might it be characterised, how is it being created, implemented, and adopted, how does it operate in the world, and how does it evolve over time. Here, we concentrate on whether Bitcoin/blockchain can be considered a complex system and, if so, whether a chaotic one. Beyond mere academic curiosity, a positive response would raise concerns about the likelihood of Bitcoin/blockchain entering a 2010-FlashCrash-type of chaotic regime, with catastrophic consequences for financial systems based on it. This paper starts by enhancing the relevant details of Bitcoin/blockchain ecosystem formed by the blockchain itself, bitcoin end-users (payers and payees), capital gain seekers, miners, full nodes maintainers, and developers, and their interactions. Secondly, the Information Theory of Complex Systems is briefly discussed for later use. Finally, blockchain is investigated with the help of Crutchfield\u201fs Statistical Complexity measure. The low non-null statistical complexity value obtained suggests that blockchain may be considered algorithmically complicated, but hardly a complex system and unlikely to enter a chaotic regime."}
{"_id": "44d39169cb96258713b96b1a871808d979b36681", "title": "WiStep: Device-free Step Counting with WiFi Signals", "text": "Inspired by the emerging WiFi-based applications, in this paper, we leverage ubiquitous WiFi signals and propose a device-free step counting system, called WiStep. Based on the multipath propagation model, when a person is walking, her torso and limbs move at different speeds, which modulates wireless signals to the propagation paths with different lengths and thus introduces different frequency components into the received Channel State Information (CSI). To count walking steps, we first utilize time-frequency analysis techniques to segment and recognize the walking movement, and then dynamically select the sensitive subcarriers with largest amplitude variances from multiple CSI streams. Wavelet decomposition is applied to extract the detail coefficients corresponding to the frequencies induced by feet or legs, and compress the data so as to improve computing speed. Short-time energy of the coefficients is then calculated as the metric for step counting. Finally, we combine the results derived from the selected subcarriers to produce a reliable step count estimation. In contrast to counting steps based on the torso frequency analysis, WiStep can count the steps of in-place walking even when the person's torso speed is null. We implement WiStep on commodity WiFi devices in two different indoor scenarios, and various influence factors are taken into consideration when evaluating the performance of WiStep. The experimental results demonstrate that WiStep can realize overall step counting accuracies of 90.2% and 87.59% respectively in these two scenarios, and it is resilient to the change of scenarios."}
{"_id": "475e3a17cad15b484a4c162dda8b6419eb4604f4", "title": "Word Attention for Sequence to Sequence Text Understanding", "text": "Attention mechanism has been a key component in Recurrent Neural Networks (RNNs) based sequence to sequence learning framework, which has been adopted in many text understanding tasks, such as neural machine translation and abstractive summarization. In these tasks, the attention mechanism models how important each part of the source sentence is to generate a target side word. To compute such importance scores, the attention mechanism summarizes the source side information in the encoder RNN hidden states (i.e., ht), and then builds a context vector for a target side word upon a subsequence representation of the source sentence, since ht actually summarizes the information of the subsequence containing the first t-th words in the source sentence. We in this paper, show that an additional attention mechanism called word attention, that builds itself upon word level representations, significantly enhances the performance of sequence to sequence learning. Our word attention can enrich the source side contextual representation by directly promoting the clean word level information in each step. Furthermore, we propose to use contextual gates to dynamically combine the subsequence level and word level contextual information. Experimental results on abstractive summarization and neural machine translation show that word attention significantly improve over strong baselines. In particular, we achieve the state-of-the-art result on WMT\u201914 English-French translation task with 12M training data."}
{"_id": "08026d939ac1f30951ff7f4f7c335bf3fef47be4", "title": "Neither Snow Nor Rain Nor MITM...: An Empirical Analysis of Email Delivery Security", "text": "The SMTP protocol is responsible for carrying some of users' most intimate communication, but like other Internet protocols, authentication and confidentiality were added only as an afterthought. In this work, we present the first report on global adoption rates of SMTP security extensions, including: STARTTLS, SPF, DKIM, and DMARC. We present data from two perspectives: SMTP server configurations for the Alexa Top Million domains, and over a year of SMTP connections to and from Gmail. We find that the top mail providers (e.g., Gmail, Yahoo, and Outlook) all proactively encrypt and authenticate messages. However, these best practices have yet to reach widespread adoption in a long tail of over 700,000 SMTP servers, of which only 35% successfully configure encryption, and 1.1% specify a DMARC authentication policy. This security patchwork---paired with SMTP policies that favor failing open to allow gradual deployment---exposes users to attackers who downgrade TLS connections in favor of cleartext and who falsify MX records to reroute messages. We present evidence of such attacks in the wild, highlighting seven countries where more than 20% of inbound Gmail messages arrive in cleartext due to network attackers."}
{"_id": "75da7a21c87f198af775646280785c6d7ecbcf59", "title": "Fake News Headline Classification using Neural Networks with Attention", "text": "The Fake News Challenge is a competition to classify articles as agreeing, disagreeing, discussing, or having no relation to their headlines. The competition provides an annotated dataset (FNC-1) for training supervised models as well as a baseline model to compare against. We cast our fake news problem as a textual entailment task by using multiple Bidirectional LSTMs and an attention mechanism to predict the entailment of the articles to their paired headlines. To tune our model, we adjust a multilayer perceptron predictor layer\u2019s width as well as the amount of dropout and regularization to control overfitting. We evaluate the effectiveness of our model against the baseline and compare to implementations of solutions to the same FNC-1 stance classification task."}
{"_id": "0c759542ce106928f51d98adde1d2ae170f7bf03", "title": "A Fusion Approach for Efficient Human Skin Detection", "text": "A reliable human skin detection method that is adaptable to different human skin colors and illumination conditions is essential for better human skin segmentation. Even though different human skin-color detection solutions have been successfully applied, they are prone to false skin detection and are not able to cope with the variety of human skin colors across different ethnic. Moreover, existing methods require high computational cost. In this paper, we propose a novel human skin detection approach that combines a smoothed 2-D histogram and Gaussian model, for automatic human skin detection in color image(s). In our approach, an eye detector is used to refine the skin model for a specific person. The proposed approach reduces computational costs as no training is required, and it improves the accuracy of skin detection despite wide variation in ethnicity and illumination. To the best of our knowledge, this is the first method to employ fusion strategy for this purpose. Qualitative and quantitative results on three standard public datasets and a comparison with state-of-the-art methods have shown the effectiveness and robustness of the proposed approach."}
{"_id": "c3efc7d586e242d6a11d047a25b67ecc0f1cce0c", "title": "Stemming Algorithms: A Comparative Study and their Analysis", "text": "Stemming is an approach used to reduce a word to its stem or root form and is used widely in information retrieval tasks to increase the recall rate and give us most relevant results. There are number of ways to perform stemming ranging from manual to automatic methods, from language specific to language independent each having its own advantage over the other. This paper represents a comparative study of various available stemming alternatives widely used to enhance the effectiveness and efficiency of information retrieval."}
{"_id": "b7c09ef4e9deb4702e1a268f060b3f6a228dcf9c", "title": "Simulation and analysis of DQ frame and P+Resonant controls for voltage source inverter to distributed generation", "text": "An inverter control design in DQ reference frame takes 3-phase sinusoidal signals and converts them to the DQ-frame to make a 2-phase based DC variable control. The error is calculated by subtracting the feedback from reference and then feedding it to PI controllers. The outputs of PI controllers are of the voltage form and transferred back to the ABC frame and fed to SPWM modules. P+Resonant controllers suggest establishing the control strategy in stationary (alpha-beta) frame. In this frame the axis are fix and three phase variables are mapped in a rotating vector with the same angular speed of the original signals. The P+Resonant controller has zero steady state error at the resonant frequency. This paper explains how to apply DQ-frame and P+Resonant for inverter controller design for grid connected single-phase Voltage Source Inverters (VSI). To evaluate the effectiveness of both control techniques, it is also presented simulated results with application of these controllers."}
{"_id": "a4c408a991925aca147df668999d775eb2a1e24a", "title": "The effect of Bikram yoga on arterial stiffness in young and older adults.", "text": "BACKGROUND AND OBJECTIVE\nBikram yoga is the most popular form of hot yoga, despite the limited information available on its cardiovascular benefits. This study sought to determine the effect of Bikram yoga on arterial stiffness and insulin resistance in young and older adults.\n\n\nMETHODS\nTwenty-four young (mean age\u00b1standard deviation, 30\u00b11 years) and 18 middle-aged and older (mean age, 53\u00b12 years) adults completed an 8-week Bikram yoga intervention. Bikram yoga classes were performed for 90 minutes per session, three times per week, in a room heated to 40.5\u00b0C with 40%--60% relative humidity.\n\n\nRESULTS\nBody mass, body fat percentage, blood pressure, and fasting blood glucose and triglyceride concentrations did not significantly change as a result of the intervention in either the young or the older group. Trunk flexibility, as measured by the sit-and-reach test, increased in both groups (p<0.01). Total (p<0.05) and low-density lipoprotein cholesterol (p<0.05) levels, plasma insulin concentrations (p<0.01), and scores on the homeostatic model of the assessment of insulin resistance (p<0.01) decreased in older adults, whereas total and high-density lipoprotein cholesterol concentrations were reduced in young adults (all p<0.05). Carotid artery compliance (p<0.05) was increased and \u03b2-stiffness index decreased in young (p<0.05) but not in older adults. Carotid pulse pressure did not significantly change in either group.\n\n\nCONCLUSION\nA relatively short-term Bikram yoga intervention improved arterial stiffness in young but not older adults and significantly reduced insulin resistance index in older but not young adults."}
{"_id": "1379ad7fe27fa07419b7f6956af754bdb6d49558", "title": "Spectral Hashing", "text": "Semantic hashing[1] seeks compact binary codes of data-points so that the Hamming distance between codewords correlates with semantic similarity. In this paper, we show that the problem of finding a best code for a given dataset is closely related to the problem of graph partitioning and can be shown to be NP hard. By relaxing the original problem, we obtain a spectral method whose solutions are simply a subset of thresholded eigenvectors of the graph Laplacian. By utilizing recent results on convergence of graph Laplacian eigenvectors to the Laplace-Beltrami eigenfunctions of manifolds, we show how to efficiently calculate the code of a novel datapoint. Taken together, both learning the code and applying it to a novel point are extremely simple. Our experiments show that our codes outperform the state-of-the art."}
{"_id": "281912ce741aef62a77191b5690584200968712c", "title": "Toward Automatic Activity Classification and Movement Assessment During a Sports Training Session", "text": "Motion analysis technologies have been widely used to monitor the potential for injury and enhance athlete performance. However, most of these technologies are expensive, can only be used in laboratory environments, and examine only a few trials of each movement action. In this paper, we present a novel ambulatory motion analysis framework using wearable inertial sensors to accurately assess all of an athlete's activities in real training environment. We first present a system that automatically classifies a large range of training activities using the discrete wavelet transform (DWT) in conjunction with a random forest classifier. The classifier is capable of successfully classifying various activities with up to 98% accuracy. Second, a computationally efficient gradient descent algorithm is used to estimate the relative orientations of the wearable inertial sensors mounted on the shank, thigh, and pelvis of a subject, from which the flexion-extension knee and hip angles are calculated. These angles, along with sacrum impact accelerations, are automatically extracted for each stride during jogging. Finally, normative data are generated and used to determine if a subject's movement technique differed to the normative data in order to identify potential injury-related factors. For the joint angle data, this is achieved using a curve-shift registration technique. It is envisaged that the proposed framework could be utilized for accurate and automatic sports activity classification and reliable movement technique evaluation in various unconstrained environments for both injury management and performance enhancement."}
{"_id": "55d4f7d74247018d469398f8cb74f701fde9ef79", "title": "Anterior prefrontal function and the limits of human decision-making.", "text": "The frontopolar cortex (FPC), the most anterior part of the frontal lobes, forms the apex of the executive system underlying decision-making. Here, we review empirical evidence showing that the FPC function enables contingent interposition of two concurrent behavioral plans or mental tasks according to respective reward expectations, overcoming the serial constraint that bears upon the control of task execution in the prefrontal cortex. This function is mechanistically explained by interactions between FPC and neighboring prefrontal regions. However, its capacity appears highly limited, which suggests that the FPC is efficient for protecting the execution of long-term mental plans from immediate environmental demands and for generating new, possibly more rewarding, behavioral or cognitive sequences, rather than for complex decision-making and reasoning."}
{"_id": "70530a7ac26c917d7271c7687af454afbf0fc284", "title": "Non linear hybrid watermarking for High Dynamic Range images", "text": "The present work explores the use of a specific method, called non linear hybrid watermarking to watermark High Dynamic Range (HDR) images. The non linear hybrid technique combines both additive and multiplicative watermark embedding and is based on a square root embedding equation. We evaluate the robustness and objective quality of HDR tone mapped watermarked images on two different HDR databases. The experimentations show that the watermark is imperceptible and successfully survives tone mapping attack."}
{"_id": "e0a2fbccaaaf7e7cacf97508ff42bbcb4f131f0a", "title": "Autonomous Intersection Management: A Heuristic Approach", "text": "Self-driving cars do not sound like a part of science fiction movies anymore. With most of the automobile giants committed to launching their autonomous cars as soon as possible, it is expected that self-driving cars will hit the road in a very short time. Increasing the penetration factor of autonomous vehicles will need algorithms to control them in an efficient way in different scenarios. One such scenario is intersection management (IM). We propose an intuitive heuristic to resolve space\u2013time conflicts in vehicle\u2019s trip, enabling vehicles to cross the intersection safely, and with minimum delay. The intersection is modeled as a group of conflict points where intersecting internal links cross each other. For safe and efficient scheduling of vehicles, heuristics are proposed separately for: 1) entering in approach lane; 2) safe traversal in the approach lane; 3) safe and efficient crossing of intersection; and (4) safe departure along depart lane. Vehicle scheduling is done using a three-leveled heuristic in which each level serves a distinct purpose and together guarantee collision-free passing of vehicles. Performance evaluation of the proposed scheme is done in Simulation of Urban MObility simulator and a comparative analysis is done with three other IM schemes. Experiments show that the proposed scheme gives a smaller average trip delay of vehicles compared with other IM schemes."}
{"_id": "840894b784378fe64ef977c44db759b8aa0527cf", "title": "A Hybrid Document Feature Extraction Method Using Latent Dirichlet Allocation and Word2Vec", "text": "Latent Dirichlet Allocation (LDA) is a probabilistic topic model to discover latent topics from documents and describe each document with a probability distribution over the discovered topics. It defines a global hierarchical relationship from words to a topic and then from topics to a document. Word2Vec is a word-embedding model to predict a target word from its surrounding contextual words. In this paper, we propose a hybrid approach to extract features from documents with bag-of-distances in a semantic space. By using both Word2Vec and LDA, our hybrid method not only generates the relationships between documents and topics, but also integrates the contextual relationships among words. Experimental results indicate that document features generated by our hybrid method are useful to improve classification performance by consolidating both global and local relationships."}
{"_id": "cf43e14ebadf62d323f9ac963a960e055ed4e947", "title": "Exploitation of spectral redundancy in cyclostationary signals", "text": "It is shown that the cyclostationarity attribute, as it is reflected in the periodicities of (second-order) moments of the signal, can be interpreted in terms of the property that allows generation of spectral lines from the signal by putting it through a (quadratic) nonlinear transformation. The fundamental link between the spectral-line generation property and the statistical property called spectral correlation, which corresponds to the correlation that exists between the random fluctuations of components of the signal residing in distinct spectral bands, is explained. The effects on the spectral-correlation characteristics of some basic signal processing operations, such as filtering, product modulation, and time sampling, are examined. It is shown how to use these results to derive the spectral-correlation characteristics for various types of man-made signals. Some ways of exploiting the inherent spectral redundancy associated with spectral correlation to perform various signal processing tasks involving detection and estimation of highly corrupted man-made signals are described.<>"}
{"_id": "25ebceb80d24eead12ae6cb5d42afd52802f79a5", "title": "Decentralized Reactive Power Sharing and Frequency Restoration in Islanded Microgrid", "text": "P-f and Q-V droop methods are the most common decentralized control methods in islanded microgrid. Although with the P-f droop method, an accurate active power sharing can be achieved among distributed energy resources (DERs), by Q-V droop, the reactive power sharing among DERs often deteriorates due to its highly dependence on the power line impedances and the local load. Variation of frequency and voltage by load changes is another challenge in droop control method. In this paper, a new autonomous control method is proposed to share reactive power among DERs accurately and restore frequency of a microgrid. The proposed method does not require any communication link and so maintains reliability and simplicity of the network. The synchronizing among DERs is obtained by the load change detection, which is accomplished by wavelet transform. The method operation principle is explained and analyzed. Simulation results are presented to validate the effectiveness of the proposed method."}
{"_id": "6dc245637d1d7335f50dbab0ee9d8463e7b35a49", "title": "Classification of Alzheimer\u2019s disease subjects from MRI using hippocampal visual features", "text": "Indexing and classification tools for Content Based Visual Information Retrieval (CBVIR) have been penetrating the universe of medical image analysis. They have been recently investigated for Alzheimer\u2019s disease (AD) diagnosis. This is a normal \u201cknowledge diffusion\u201d process, when methodologies developed for multimedia mining penetrate a new application area. The latter brings its own specificities requiring an adjustment of methodologies on the basis of domain knowledge. In this paper, we develop an automatic classification framework for AD recognition in structural Magnetic Resonance Images (MRI). The main contribution of this work consists in considering visual features from the most involved region in AD (hippocampal area) and in using a late fusion to increase precision results. Our approach has been first evaluated on the baseline MR images of 218 subjects from the Alzheimer\u2019s Disease Neuroimaging Initiative (ADNI) database and then tested on a 3T weighted contrast MRI obtained from a subsample of a large French epidemiological study: \u201cBordeaux dataset\u201d. The experimental results show that our classification of patients with AD versus NC (Normal Control) subjects achieves the accuracies of 87 % and 85 % for ADNI subset and \u201cBordeaux dataset\u201d respectively. For the most challenging group of subjects with the Mild Cognitive Impairment (MCI), we reach accuracies of 78.22 % and 72.23 % for MCI versus NC and MCI versus AD respectively on ADNI. The late fusion scheme improves classification results by 9 % in average for these three categories. Results demonstrate very promising classification performance and simplicity compared to the state-of-the-art volumetric AD diagnosis methods."}
{"_id": "5f3d0ffcdc557f834c6668d19d69eb63d26421f0", "title": "Knowledge Representation for Robots through Human-Robot Interaction", "text": "The representation of the knowledge needed by a robot to perform complex tasks is restricted by the limitations of perception. One possible way of overcoming this situation and designing \u201cknowledgeable\u201d robots is to rely on the interaction with the user. We propose a multi-modal interaction framework that allows to effectively acquire knowledge about the environment where the robot operates. In particular, in this paper we present a rich representation framework that can be automatically built from the metric map annotated with the indications provided by the user. Such a representation, allows then the robot to ground complex referential expressions for motion commands and to devise topological navigation plans to achieve the target locations."}
{"_id": "6191d0e138a44d7750c4d5df5f00efb7e6664d4b", "title": "A DUAL-BAND CIRCULARLY POLARIZED STUB LOADED MICROSTRIP PATCH ANTENNA FOR GPS APPLICATIONS", "text": "A single-feed low-profile and easy to fabricate circularly polarized microstrip patch antenna has been developed for GPS applications. For dual frequency operation, four slots are etched near edges of the patch and a crossed slot etched in the center for generating circular polarization. In order to reducing the frequency ratio of two frequency bands of the antenna, the patch is loaded by four short circuit microstrip stubs. The paper reports several simulation results that confirm the desired characteristics of the antenna. Using stub loading, the frequency ratio of two bands of the antenna can be, even, reduced to 1.1."}
{"_id": "902bab66389112b12886d264acbf401fcc042572", "title": "Counterfactual Policy Optimization Using Domain-Adversarial Neural Networks", "text": "Choosing optimal (or at least better) policies is an important problem in domains from medicine to education to finance and many others. One approach to this problem is through controlled experiments/trials but controlled experiments are expensive. Hence it is important to choose the best policies on the basis of observational data. This presents two difficult challenges: (i) missing counterfactuals, and (ii) selection bias. This paper presents theoretical bounds on estimation errors of counterfactuals from observational data by making connections to domain adaptation theory. It also presents a principled way of choosing optimal policies using domain adversarial neural networks. This illustrates the effectiveness of domain adversarial training together with various features of our algorithm on a semi-synthetic breast cancer dataset."}
{"_id": "e2bda90e896510b3a71b5748b454fb70694be1d5", "title": "Examining the psychometric properties of a sport-related concussion survey: a Rasch measurement approach", "text": "Awareness of sport-related concussion (SRC) is an essential step in increasing the number of athletes or parents who report on SRC. This awareness is important, as there is no established data on medical care at youth-level sports and may be limited to individuals with only first aid training. In this circumstance, aside from the coach, it is the players and their parents who need to be aware of possible signs and symptoms. The aim of this study was to examine the psychometric properties of a parent and player concussion survey intended for use before and after an education campaign regarding SRC. 1441 questionnaires were received from parents and 284 questionnaires from players. The responses to the sixteen-item section of the questionnaire\u2019s \u2018recognition of signs and symptoms\u2019 were submitted to psychometric analysis using the dichotomous and polytomous Rasch model via the Rasch Unidimensional Measurement Model software RUMM2030. The Rasch model of Modern Test Theory can be considered a refinement of, or advance on, traditional analyses of an instrument\u2019s psychometric properties. The main finding is that these sixteen items measure two factors: items that are symptoms of concussion and items that are not symptoms of concussion. Parents and athletes were able to identify most or all of the symptoms, but were not as good at distinguishing symptoms that are not symptoms of concussion. Analyzing these responses revealed differential item functioning for parents and athletes on non-symptom items. When the DIF was resolved a significant difference was found between parents and athletes. The main finding is that the items measure two \u2018dimensions\u2019 in concussion symptom recognition. The first dimension consists of those items that are symptoms of concussion and the second dimension of those items that are not symptoms of concussion. Parents and players were able to identify most or all of the symptoms of concussion, so one would not expect to pick up any positive change on these items after an education campaign. Parents and players were not as good at distinguishing symptoms that are not symptoms of concussion. It is on these items that one may possibly expect improvement to manifest, so to evaluate the effectiveness of an education campaign it would pay to look for improvement in distinguishing symptoms that are not symptoms of concussion."}
{"_id": "ff22453c4ddf9ee112aba5f7ab7ddf26a40d0e6f", "title": "Motion correction with PROPELLER MRI: application to head motion and free-breathing cardiac imaging.", "text": "A method for motion correction, involving both data collection and reconstruction, is presented. The PROPELLER MRI method collects data in concentric rectangular strips rotated about the k-space origin. The central region of k-space is sampled for every strip, which (a) allows one to correct spatial inconsistencies in position, rotation, and phase between strips, (b) allows one to reject data based on a correlation measure indicating through-plane motion, and (c) further decreases motion artifacts through an averaging effect for low spatial frequencies. Results are shown in which PROPELLER MRI is used to correct for bulk motion in head images and respiratory motion in nongated cardiac images. Magn Reson Med 42:963-969, 1999."}
{"_id": "ef65f1a2700d16da9b0d49e179eba65d8483489c", "title": "An efficient audio fingerprint search algorithm for music retrieval", "text": "The conventional audio fingerprinting system by Haitsma uses a lookup table to identify the candidate songs in the database, which contains the sub-fingerprints of songs, and searches the candidates to find a song whose bit error rate is the lowest. However, this approach has a drawback that the number of database accesses increases dramatically, especially when the database contains a large number of songs or when a matching sub-fingerprint is not found in the lookup table due to a heavily degraded input signal. In this paper, a novel search method is proposed to overcome these difficulties. The proposed method partitions each song found from the lookup table into blocks, assigns a weight to each block, and uses the weight as a search priority to speed up the search process while reducing the number of database accesses. Various results from our experiment show the significant improvement in search speed while maintaining the search accuracy comparable to the conventional method."}
{"_id": "06454251111dcea7cbad4adb1acd856ca8cd3f75", "title": "Expression and export: recombinant protein production systems for Aspergillus", "text": "Several Aspergillus species, in particular Aspergillus niger and Aspergillus oryzae, are widely used as protein production hosts in various biotechnological applications. In order to improve the expression and secretion of recombinant proteins in these filamentous fungi, several novel genetic engineering strategies have been developed in recent years. This review describes state-of-the-art genetic manipulation technologies used for strain improvement, as well as recent advances in designing the most appropriate engineering strategy for a particular protein production process. Furthermore, current developments in identifying bottlenecks in the protein production and secretion pathways are described and novel approaches to overcome these limitations are introduced. An appropriate combination of expression vectors and optimized host strains will provide cell factories customized for each production process and expand the great potential of Aspergilli as biotechnology workhorses to more complex multi-step industrial applications."}
{"_id": "dc1df387936ec30d6a18601e4eb0e5fd195a5fca", "title": "Analysis of Sentence Scoring Methods for Extractive Automatic Text Summarization", "text": "Automatic text summarization is a major area of research in the domain of information systems. Most of the methods requires domain knowledge in order to produce a coherent and meaningful summary. In Extractive text summarization, sentences are scored on some features. A large number of feature based scoring methods have been proposed for extractive automatic text summarization by researchers. This paper reviews features for sentence scoring. The results on combinations of various features for scoring are discussed. ROUGE-N is used to evaluate generated summary with abstractive summary of DUC 2002 dataset."}
{"_id": "7d4c85662ca70abb26e37b2fc40a045fd0369f70", "title": "Nonisolated High Gain DC\u2013DC Converter for DC Microgrids", "text": "DC microgrids are popular due to the integration of renewable energy sources such as solar photovoltaics and fuel cells. Owing to the low output voltage of these dc power generators, high efficient high gain dc\u2013dc converters are in need to connect the dc microgrid. In this paper, a nonisolated high gain dc\u2013dc converter is proposed without using the voltage multiplier cell and/or hybrid switched-capacitor technique. The proposed topology utilizes two nonisolated inductors that are connected in series/parallel during discharging/charging mode. The operation of switches with two different duty ratios is the main advantage of the converter to achieve high voltage gain without using extreme duty ratio. The steady-state analysis of the proposed converter using two different duty ratios is discussed in detail. In addition, a 100\u00a0W, 20/200\u00a0V prototype circuit of the high gain dc\u2013dc converter is developed, and the performance is validated using experimental results."}
{"_id": "43afc11883fb147ac37b4dc40bf6e7fa5fccf341", "title": "Hash Kernels for Structured Data", "text": "We propose hashing to facilitate efficient kernels. This gen eralizes previous work using sampling and we show a principled way to compute the kernel matrix for d ata streams and sparse feature spaces. Moreover, we give deviation bounds from the exact ke rnel matrix. This has applications to estimation on strings and graphs."}
{"_id": "bfcf14ae04a9a326f9263dcdd30e475334a96d39", "title": "SMOTE: Synthetic Minority Over-sampling Technique", "text": "An approach to the construction of classifiers from imbalanced datasets is described. A dataset is imbalanced if the classification categories are not approximately equally represented. Often real-world data sets are predominately composed of \u201cnormal\u201d examples with only a small percentage of \u201cabnormal\u201d or \u201cinteresting\u201d examples. It is also the case that the cost of misclassifying an abnormal (interesting) example as a normal example is often much higher than the cost of the reverse error. Under-sampling of the majority (normal) class has been proposed as a good means of increasing the sensitivity of a classifier to the minority class. This paper shows that a combination of our method of over-sampling the minority (abnormal) class and under-sampling the majority (normal) class can achieve better classifier performance (in ROC space) than only under-sampling the majority class. This paper also shows that a combination of our method of over-sampling the minority class and under-sampling the majority class can achieve better classifier performance (in ROC space) than varying the loss ratios in Ripper or class priors in Naive Bayes. Our method of over-sampling the minority class involves creating synthetic minority class examples. Experiments are performed using C4.5, Ripper and a Naive Bayes classifier. The method is evaluated using the area under the Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy."}
{"_id": "d920943892caa0bc9f300cb9e3b7f3ab250f78c9", "title": "An insight into imbalanced Big Data classification: outcomes and challenges", "text": "BigData applications are emerging during the last years, and researchers frommany disciplines are aware of the high advantages related to the knowledge extraction from this type of problem. However, traditional learning approaches cannot be directly applied due to scalability issues. To overcome this issue, the MapReduce framework has arisen as a \u201cde facto\u201d solution. Basically, it carries out a \u201cdivide-andconquer\u201d distributed procedure in a fault-tolerant way to adapt for commodity hardware.Being still a recent discipline, few research has been conducted on imbalanced classification for Big Data. The reasons behind this are mainly the difficulties in adapting standard techniques to the MapReduce programming style. Additionally, inner problems of imbalanced data, namely lack of data and small disjuncts, are accentuated during the data partitioning to fit theMapReduce programming style. This paper is designed under three main pillars. First, to present the first outcomes for imbalanced classification in Big Data problems, introducing the current B Alberto Fern\u00e1ndez alberto@decsai.ugr.es Sara del R\u00edo srio@decsai.ugr.es Nitesh V. Chawla nchawla@nd.edu Francisco Herrera herrera@decsai.ugr.es 1 Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain 2 Department of Computer Science and Engineering, 384 Fitzpatrick Hall, University of Notre Dame, Notre Dame, IN 46556, USA 3 Interdisciplinary Center for Network Science and Applications, 384 Nieuwland Hall of Science, University of Notre Dame, Notre Dame, IN 46556, USA research state of this area. Second, to analyze the behavior of standard pre-processing techniques in this particular framework. Finally, taking into account the experimental results obtained throughout this work, we will carry out a discussion on the challenges and future directions for the topic."}
{"_id": "ccea9e75ba38b13d87ee22e1d8fc89c396a58213", "title": "The implementation of a full EMV smartcard for a point-of-sale transaction", "text": "This paper examines the changes in the payment card environment as they relate to EMV (named after Europay, MasterCard and Visa). This research shows that if the combined dynamic data authentication (CDA) card variant of the EMV card is deployed in a full EMV environment, given the relevant known vulnerabilities and attacks against the EMV technology, the consequences of unauthorized disclosure of the cardholder data is of significantly reduced value to a criminal."}
{"_id": "cd1481e9cc0c86bcf3a44672f887522a95a174e8", "title": "Challenges and chances for smart rail mobility at mmWave and THz bands from the channels viewpoint", "text": "In the new era of \u201csmart rail mobility\u201d, infrastructure, trains, and travelers will be interconnected to achieve optimized mobility, higher safety, and lower costs. In order to realize a seamless high-data rate wireless connectivity, up to dozens of GHz bandwidth is required, and this motivates the exploration of the underutilized millimeter wave (mmWave) as well as the largely unexplored Terahertz (THz) bands. In order to realize the smart rail mobility at mmWave and THz bands, it is critical to gain a thorough understanding of the wireless channels. In this paper, according to the state of the art in research on railway wireless channels, we identify the main technical challenges and the corresponding chances concerning the reference scenario modules, accurate and efficient simulation platform, beamforming strategies, and handover design."}
{"_id": "688fff05ef1badc0803cd4b5998912b2213b9a1f", "title": "News via Voldemort: Parody accounts in topical discussions on Twitter", "text": "Parody accounts are prevalent on Twitter, offering irreverent interpretations of public figures, fictional characters, and more. These accounts post comments framed within the context of their fictional universes or stereotypes of their subjects, responding in-character to topical events. This article positions parody accounts as a ritualised social media practice, an extension of fan practices and irreverent internet culture. By providing a typology of parody accounts and analysing the topicality of selected parody accounts\u2019 tweets, the research examines how these accounts contribute to topical discussions. In-character framing of topical comments allows parody accounts to offer original interpretations of breaking news"}
{"_id": "26dbfca1155d823b72db68663af3262b3da08da6", "title": "Agglomerative Information Bottleneck", "text": "We introduce a novel distributional clustering algorithm that explicitly maximizes the mutual information per cluster between the data and given categories. This algorithm can be considered as a bottom up hard version of the recently introduced \u201cInformation Bottleneck Method\u201d. We relate the mutual information between clusters and categories to the Bayesian classification error, which provides another motivation for using the obtained clusters as features. The algorithm is compared with the top-down soft version of the information bottleneck method and a relationship between the hard and soft results is established. We demonstrate the algorithm on the 20 Newsgroups data set. For a subset of two news-groups we achieve compression by 3 orders of magnitudes loosing only 10% of the original mutual information."}
{"_id": "ff47e72c838a6cba9227fa54d2b622da160b295e", "title": "DEVELOPMENT OF SEISMIC VULNERABILITY ASSESSMENT METHODOLOGIES OVER THE PAST 30 YEARS", "text": "Models capable of estimating losses in future earthquakes are of fundamental importance for emergency planners and for the insurance and reinsurance industries. One of the main ingredients in a loss model is an accurate, transparent and conceptually sound algorithm to assess the seismic vulnerability of the building stock and indeed many tools and methodologies have been proposed over the past 30 years for this purpose. This paper takes a look at some of the most significant contributions in the field of vulnerability assessment and identifies the key advantages and disadvantages of these procedures in order to distinguish the main characteristics of an ideal methodology."}
{"_id": "600376bb397c5f3d8d33856b3a6db314b6bda67d", "title": "WiFi-SLAM Using Gaussian Process Latent Variable Models", "text": "WiFi localization, the task of determining the physical location of a mobile device from wireless signal strengths, has been shown to be an accurate method of indoor and outdoor localization and a powerful building block for location-aware applications. However, most localization techniques require a training set of signal strength readings labeled against a ground truth location map, which is prohibitive to collect and maintain as maps grow large. In this paper we propose a novel technique for solving the WiFi SLAM problem using the Gaussian Process Latent Variable Model (GPLVM) to determine the latent-space locations of unlabeled signal strength data. We show how GPLVM, in combination with an appropriate motion dynamics model, can be used to reconstruct a topological connectivity graph from a signal strength sequence which, in combination with the learned Gaussian Process signal strength model, can be used to perform efficient localization."}
{"_id": "edfceceb0edacf19cf981ea60fc255f03d6135fa", "title": "Sentential Representations in Distributional Semantics", "text": "This thesis is about the problem of representing sentential meaning in distributional semantics. Distributional semantics obtains the meanings of words through their usage, based on the hypothesis that words occurring in similar contexts will have similar meanings. In this framework, words are modeled as distributions over contexts and are represented as vectors in high dimensional space. Compositional distributional semantics attempts to extend this approach to higher linguistics structures. Some basic composition models proposed in literature to obtain the meaning of phrases or possibly sentences show promising results in modeling simple phrases. The goal of the thesis is to further extend these composition models to obtain sentence meaning representations. The thesis puts more focus on unsupervised methods which make use of the context of phrases and sentences to optimize the parameters of a model. Three different methods are presented. The first model is the PLF model, a practical composition and linguistically motivated model which is based on the lexical function model introduced by Baroni and Zamparelli (2010) and Coecke et al. (2010). The second model is the Chunk-based Smoothed Tree Kernels (CSTKs) model, extending Smoothed Tree Kernels (Mehdad et al., 2010) by utilizing vector representations of chunks. The final model is the C-PHRASE model, a neural network-based approach, which jointly optimizes the vector representations of words and phrases using a context predicting objective. The thesis makes three principal contributions to the field of compositional distributional semantics. The first is to propose a general framework to estimate the parameters and evaluate the basic composition models. This provides a fair way to comparing the models using a set of phrasal datasets. The second is to extend these basic models to the sentence level, using syntactic information to build up the sentence vectors. The third contribution is to evaluate all the proposed models, showing that they perform on par with or outperform competing models presented in the literature. Thesis Supervisor: Marco Baroni Title: Associate Professor"}
{"_id": "ffca8d2320d8ae48a0ad39200f4db3a2324a8e57", "title": "An Automatic Collocation Extraction from Arabic Corpus", "text": "Problem statement: The identification of collocations is very importa n part in natural language processing applications that require some degree of semantic interpretation such as, machine translation, information retrieval and text summari z tion. Because of the complexities of Arabic, the collocations undergo some variations such as, morph ological, graphical, syntactic variation that constitutes the difficulties of identifying the col location. Approach: We used the hybrid method for extracting the collocations from Arabic corpus that is based on linguistic information and association measures. Results: This method extracted the bi-gram candidates of Ar abic collocation from corpus and evaluated the association measures by using the n-best evaluation method. We reported the precision values for each association measure in ea ch n-best list. Conclusion: The experimental results showed that the log-likelihood ratio is the best as ociation measure that achieved highest precision."}
{"_id": "c5ffffe86f460a5e455ab93505b28debd6daf7c4", "title": "Burnout, compassion fatigue, compassion satisfaction, and secondary traumatic stress in trauma nurses.", "text": "The relationship of burnout (BO), compassion fatigue (CF), compassion satisfaction (CS), and secondary traumatic stress (STS) to personal/environmental characteristics, coping mechanisms, and exposure to traumatic events was explored in 128 trauma nurses. Of this sample, 35.9% had scores consistent with BO, 27.3% reported CF, 7% reported STS, and 78.9% had high CS scores. High BO and high CF scores predicted STS. Common characteristics correlating with BO, CF, and STS were negative coworker relationships, use of medicinals, and higher number of hours worked per shift. High CS correlated with greater strength of supports, higher participation in exercise, use of meditation, and positive coworker relationships. Caring for trauma patients may lead to BO, CF, and STS; identifying predictors of these can inform the development of interventions to mitigate or minimize BO, CF, and STS in trauma nurses."}
{"_id": "884b2a366528d5c8546c7c30b641979a4dcb39b7", "title": "A Hierarchical Deep Architecture and Mini-batch Selection Method for Joint Traffic Sign and Light Detection", "text": "Traffic light and sign detectors on autonomous cars are integral for road scene perception. The literature is abundant with deep learning networks that detect either lights or signs, not both, which makes them unsuitable for real-life deployment due to the limited graphics processing unit (GPU) memory and power available on embedded systems. The root cause of this issue is that no public dataset contains both traffic light and sign labels, which leads to difficulties in developing a joint detection framework. We present a deep hierarchical architecture in conjunction with a mini-batch proposal selection mechanism that allows a network to detect both traffic lights and signs from training on separate traffic light and sign datasets. Our method solves the overlapping issue where instances from one dataset are not labelled in the other dataset. We are the first to present a network that performs joint detection on traffic lights and signs. We measure our network on the Tsinghua-Tencent 100K benchmark for traffic sign detection and the Bosch Small Traffic Lights benchmark for traffic light detection and show it outperforms the existing Bosch Small Traffic light state-of-the-art method. We focus on autonomous car deployment and show our network is more suitable than others because of its low memory footprint and real-time image processing time. Qualitative results can be viewed at https://youtu.be/ YmogPzBXOw."}
{"_id": "1a1e6b61834e5ab0ec60a0be89b7e5a4b7160081", "title": "Doctor AI: Predicting Clinical Events via Recurrent Neural Networks", "text": "Leveraging large historical data in electronic health record (EHR), we developed Doctor AI, a generic predictive model that covers observed medical conditions and medication uses. Doctor AI is a temporal model using recurrent neural networks (RNN) and was developed and applied to longitudinal time stamped EHR data from 260K patients over 8 years. Encounter records (e.g. diagnosis codes, medication codes or procedure codes) were input to RNN to predict (all) the diagnosis and medication categories for a subsequent visit. Doctor AI assesses the history of patients to make multilabel predictions (one label for each diagnosis or medication category). Based on separate blind test set evaluation, Doctor AI can perform differential diagnosis with up to 79% recall@30, significantly higher than several baselines. Moreover, we demonstrate great generalizability of Doctor AI by adapting the resulting models from one institution to another without losing substantial accuracy."}
{"_id": "6d1e97df31e9a4b0255243d86608c4b7f725133b", "title": "Incremental Task and Motion Planning: A Constraint-Based Approach", "text": "We present a new algorithm for task and motion planning (TMP) and discuss the requirements and abstractions necessary to obtain robust solutions for TMP in general. Our Iteratively Deepened Task and Motion Planning (IDTMP) method is probabilistically-complete and offers improved performance and generality compared to a similar, state-of-theart, probabilistically-complete planner. The key idea of IDTMP is to leverage incremental constraint solving to efficiently add and remove constraints on motion feasibility at the task level. We validate IDTMP on a physical manipulator and evaluate scalability on scenarios with many objects and long plans, showing order-of-magnitude gains compared to the benchmark planner and a four-times self-comparison speedup from our extensions. Finally, in addition to describing a new method for TMP and its implementation on a physical robot, we also put forward requirements and abstractions for the development of similar planners in the future."}
{"_id": "553314923a92c3a9fd020287c69e69c7c3a76301", "title": "David J. Chalmers The Singularity A Philosophical Analysis", "text": "What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to evergreater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the \u2018singularity\u2019. The basic argument here was set out by the statistician I.J. Good in his 1965 article \u2018Speculations Concerning the First Ultraintelligent Machine\u2019:"}
{"_id": "7f5c140f2a890c13986fb82607e5b123663f6a8d", "title": "Personality Neuroscience and the Biology of Traits", "text": "Personality neuroscience involves the use of neuroscience methods to study individual differences in behavior, motivation, emotion, and cognition. Personality psychology has contributed much to identifying the important dimensions of personality, but relatively little to understanding the biological sources of those dimensions. However, the rapidly expanding field of personality neuroscience is increasingly shedding light on this topic. This article provides a survey of progress in the use of neuroscience to study personality traits, organized using a hierarchical model of traits based on the Big Five dimensions: Extraversion, Neuroticism, Agreeableness, Conscientiousness, and Openness \u2044 Intellect. Evidence is reviewed for hypotheses about the biological systems involved in each trait. The mission of personality psychology is \u2018\u2018to provide an integrative framework for understanding the whole person\u2019\u2019 (McAdams & Pals, 2006; p. 204), and many different methods must be brought to bear to accomplish such an ambitious goal. The field of personality neuroscience rests on the premise that the whole person cannot be understood without understanding the brain. In this article, I discuss the role that neuroscience can play in personality research and review the progress of this rapidly expanding field. (For more in depth review of the field and its influential theories see DeYoung & Gray, 2009; and Zuckerman, 2005.) Personality psychology\u2019s attempt to understand the whole person calls for a broad conception of personality itself, such as the one provided by McAdams and Pals (2006, p. 212): Personality is an individual\u2019s unique variation on the general evolutionary design for human nature, expressed as a developing pattern of dispositional traits, characteristic adaptations, and integrative life stories, complexly and differentially situated in culture. This definition describes three levels at which personality can be analyzed: traits, characteristic adaptations, and life stories. Personality neuroscience has focused primarily on traits, which are relatively stable patterns of behavior, motivation, emotion, and cognition (Pytlik Zillig, Hemenover, & Dienstbier, 2002; Wilt & Revelle, 2009) that are not specific to a particular social milieu or culture. This is not to say that traits are evident to the same extent or with identical manifestations in all cultures, but rather that any trait can be observed in a subset of situations in any culture. In contrast to traits, characteristic adaptations and life stories describe the individual\u2019s specific responses to his or her particular life circumstances. Obviously, the latter two levels of analysis are crucial for understanding any individual, but their complexity renders them less amenable to study by neuroscience, and this article focuses on the biology of traits. Traits can be considered probabilistic descriptions of the frequency and intensity with which individuals exhibit various behavioral, motivational, emotional, and cognitive states Social and Personality Psychology Compass 4/12 (2010): 1165\u20131180, 10.1111/j.1751-9004.2010.00327.x a 2010 The Author Social and Personality Psychology Compass a 2010 Blackwell Publishing Ltd (Fleeson, 2001; Fleeson & Gallagher, 2009). 
Individuals who are high in some trait will experience the states associated with that trait more often and more intensely than individuals low in that trait. For example, someone high in Extraversion will be talkative, outgoing, and excited more often than someone low in Extraversion, but even the person low in Extraversion may experience those states occasionally. The aim of personality neuroscience is to understand both the biological systems that are responsible for the states associated with traits and the parameters of those systems that cause them to function differently in different individuals. The systems themselves are presumed to be present in every intact human brain \u2013 hence McAdams and Pals\u2019 (2006) reference to \u2018\u2018the general evolutionary design for human nature\u2019\u2019 \u2013 but their parameters will vary from person to person. (For example, all people have brain systems that respond to rewards, but in different individuals these systems will respond with different degrees of vigor to a particular reward, and the systems\u2019 average level of response may be associated with some personality trait.) When considering the biological sources of personality, one must distinguish between proximal and distal sources. The proximal sources, just described, consist of stable differences in the functioning of the neural systems that produce the states associated with traits. The distal sources are both genetic and environmental, as indicated by the fact that heritability estimates for personality traits are in the range of 40% to 80%, depending on trait and method (Bouchard & McGue, 2003; Riemann, Angleitner, & Strelau, 1997). (The heritability of a trait indicates the amount of its variation in a population due to genetic, rather than environmental, variation.) It is important to remember that when either genes or situations have lasting effects on traits, they must do so by changing the brain; thus, personality differences are \u2018biological\u2019 regardless of their heritability, in the sense that they must be proximally generated by the brain no matter whether they originated in genes or environment. Methods in Personality Neuroscience Scientific investigation of the biological basis of personality has been limited, until relatively recently, by a lack of technology to examine brain structure and function in living human beings. Prior to the development of neuroimaging, the sole means for assessing brain activity was the electroencephalogram (EEG), which measures the brain\u2019s electrical activity at the scalp. Today a number of methods are available for personality neuroscience. Five important categories of neuroscientific methods are (1) neuroimaging (e.g., magnetic resonance imaging [MRI] or positron emission tomography [PET]), which allows assessment of brain structure and function with a relatively high spatial resolution; (2) molecular genetics, which allows assessment of variation in specific genes that are expressed in the brain; (3) EEG, which provides the highest temporal resolution of neural activity of any available method; (4) assays of endogenous psychoactive substances or their byproducts (e.g., hormone levels in saliva or neurotransmitter metabolites in cerebrospinal fluid); and (5) psychopharmocological manipulation (e.g., tryptophan depletion or augmentation to alter levels of the neurotransmitter serotonin). 
Personality neuroscience employs these methods in conjunction with the methods of personality psychology, attempting to link biological variables to traits. Measurement of traits is typically accomplished by questionnaire, through self-report and \u2044 or ratings by peers or other knowledgeable informants. Questionnaires provide a convenient and reliable method for assessing a broad range of stable individual differences, drawing on raters\u2019 experiences over a far greater span of time than is available in the laboratory. However, 1166 Personality Neuroscience a 2010 The Author Social and Personality Psychology Compass 4/12 (2010): 1165\u20131180, 10.1111/j.1751-9004.2010.00327.x Social and Personality Psychology Compass a 2010 Blackwell Publishing Ltd traits can also be assessed in other ways, such as through laboratory tasks, behavioral observation, or experience sampling, and personality should not be identified exclusively with the variables provided by personality questionnaires. What we want to explain in personality neuroscience is not how people answer questionnaires, but rather why they exhibit stable patterns of behavior, motivation, emotion, and cognition. Most studies discussed in this review begin with some psychological trait and attempt to discover its biological basis. Another valid and useful approach is to begin with stable individual differences in some biological parameter (such as asymmetry in the level of left versus right cortical hemisphere activity, measured at rest by EEG) and then to attempt to discover what psychological trait or traits are associated with that biological trait. An important caveat for anyone exploring the literature on personality neuroscience or contemplating entering the field as an investigator is that much inconsistency exists in the findings to date. In neuroimaging research, inconsistency is probably due in large part to the use very small samples, due to cost. Unfortunately, small samples often lack statistical power to detect true effects (Yarkoni, 2009). Worse still, low power has the unfortunate additional consequence of increasing the proportion of false positives among significant findings (Green et al., 2008). In genetic studies, though larger samples are typically used, power may still be an issue, and inconsistency may arise, because each trait is influenced by many genes, each accounting for only a very small amount of the variance of the trait (Green et al., 2008). These difficulties highlight the importance of testing reasonably focused hypotheses, rather than simply exploring associations with biological variables in the absence of any theory of the causal mechanisms that might underlie a given trait. The Structure of Personality Traits Personality neuroscience can usefully be guided by existing knowledge about the structure of personality \u2013 that is, knowledge about how various traits relate to one another and to the major dimensions of personality. In order to produce a coherent overview of the progress of personality neuroscience, one needs to relate findings to a reasonably comprehensive taxonomy of traits, such as that provided by the Five Factor Model or Big Five, which categorizes the majority of traits within five broad domains, typically labeled Extraversion, Neuroticism, Agreeableness, Conscientiousness, and Openness \u2044 Intellect (Digman, 199"}
{"_id": "495098ddd7018302f421654025c0e2d2cda681d0", "title": "Comparing Entities in RDF Graphs", "text": "The Semantic Web has fuelled the appearance of numerous open-source knowledge bases. Knowledge bases enable new types of information search, going beyond classical query answering and into the realm of exploratory search, and providing answers to new types of user questions. One such question is how two entities are comparable, i.e., what are similarities and differences between the information known about the two entities. Entity comparison is an important task and a widely used functionality available in many information systems. Yet it is usually domain-specific and depends on a fixed set of aspects to compare. In this paper we propose a formal framework for domain-independent entity comparison that provides similarity and difference explanations for input entities. We model explanations as conjunctive queries, we discuss how multiple explanations for an entity pair can be ranked and we provide a polynomial-time algorithm for generating most specific similarity explanations."}
{"_id": "34eb064c80d0c26373023b3a6b0e10bc3202066f", "title": "A meta-analytic review of the effects of acute cortisol administration on human memory", "text": "Adrenal glucocorticoids (GC) secreted during stress modulate memory. Animal and human studies investigating the effects of acute GC treatment on memory have reported conflicting (enhancing as well as impairing) results. Several theories have been proposed to integrate these contradictory findings. Among the variables discussed are the timing of the GC treatment (before learning or before retrieval) and the time of day (morning versus afternoon). Here we review meta-analytically the results of 16 studies, which experimentally investigated the acute impact of cortisol treatment on human memory. The results revealed that the timing of GC application in the course of a study is a relevant variable which explains a substantial amount of the significant heterogeneity within the effect sizes. The studies which administered cortisol before retrieval (n = 4) reported a significant decrease (average effect size of d = -.49) in memory performance. Studies which administered cortisol before learning (n =12) found on average no effect (d = .08), but there is heterogeneity within these effect sizes. Further analysis on these experiments indicated that studies, which administered cortisol in the morning found a significant memory impairment (d = -.40), while studies conducted in the afternoon observed a small but significant memory enhancement (d = .22). This meta-analysis supports the idea that the timing of GC administration (before learning or before retrieval) is a major determinant of the effects of GCs on human memory. We discuss methodological limitations of the current analysis and suggest several areas for future research."}
{"_id": "d692cf71170a249e42db41e7d1e1d81597b8938e", "title": "Collaborative Edge and Cloud Neural Networks for Real-Time Video Processing", "text": "The efficient processing of video streams is a key component in many emerging Internet of Things (IoT) and edge applications, such as Virtual and Augmented Reality (V/AR) and self-driving cars. These applications require real-time high-throughput video processing. This can be attained via a collaborative processing model between the edge and the cloud\u2014called an Edge-Cloud model. To this end, many approaches were proposed to optimize the latency and bandwidth consumption of Edge-Cloud video processing, especially for Neural Networks (NN)-based methods. In this demonstration. We investigate the efficiency of these NN techniques, how they can be combined, and whether combining them leads to better performance. Our demonstration invites participants to experiment with the various NN techniques, combine them, and observe how the underlying NN changes with different techniques and how these changes affect accuracy, latency and bandwidth consumption. PVLDB Reference Format: Philipp M. Grulich and Faisal Nawab. Collaborative Edge and Cloud Neural Networks for Real-Time Video Processing. PVLDB, 11 (12): 2046-2049, 2018. DOI: https://doi.org/10.14778/3229863.3236256"}
{"_id": "e40677877b040a774bad8d01beca43c9d5864bf3", "title": "Measuring Motivations of Crowdworkers: The Multidimensional Crowdworker Motivation Scale", "text": "Crowd employment is a new form of short term and flexible employment which has emerged during the past decade. For understanding this new form of employment, it is crucial to understand the underlying motivations of the workforce involved in it. This paper presents the Multidimensional Crowdworker Motivation Scale (MCMS), a scale for measuring the motivation of crowdworkers on micro-task platforms. The scale is theoretically grounded in Self-Determination Theory and tailored specifically to the context of crowdsourced micro-labor. The MCMS was validated on data collected in ten countries and three income groups. Furthermore, measurement invariance tests showed that motivations measured with the MCMS are comparable across countries and income groups. This work constitutes a first step towards understanding the motivations of the international crowd workforce."}
{"_id": "2f7678f96837afbc1f58680ad844c35ffa52b0c1", "title": "High-Performance Distributed ML at Scale through Parameter Server Consistency Models", "text": "As Machine Learning (ML) applications embrace greater data size and model complexity, practitioners turn to distributed clusters to satisfy the increased computational and memory demands. Effective use of clusters for ML programs requires considerable expertise in writing distributed code, but existing highlyabstracted frameworks like Hadoop that pose low bar-ed frameworks like Hadoop that pose low barriers to distributed-programming have not, in practice, matched the performance seen in highly specialized and advanced ML implementations. The recent Parameter Server (PS) paradigm is a middle ground between these extremes, allowing easy conversion of single-machine parallel ML programs into distributed ones, while maintaining high throughput through relaxed \u201cconsistency models\u201d that allow asynchronous (and, hence, inconsistent) parameter reads. However, due to insufficient theoretical study, it is not clear which of these consistency models can really ensure correct ML algorithm output; at the same time, there remain many theoreticallymotivated but undiscovered opportunities to maximize computational throughput. Inspired by this challenge, we study both the theoretical guarantees and empirical behavior of iterative-convergent ML algorithms in existing PS consistency models. We then use the gleaned insights to improve a consistency model using an \u201ceager\u201d PS communication mechanism, and implement it as a new PS system that enables ML programs to reach their solution more quickly."}
{"_id": "625e2f3229d75dc6ec346961efe485617dd3e048", "title": "Vision-Controlled Micro Flying Robots: From System Design to Autonomous Navigation and Mapping in GPS-Denied Environments", "text": "Autonomous microhelicopters will soon play a major role in tasks like search and rescue, environment monitoring, security surveillance, and inspection. If they are further realized in small scale, they can also be used in narrow outdoor and indoor environments and represent only a limited risk for people. However, for such operations, navigating based only on global positioning system (GPS) information is not sufficient. Fully autonomous operation in cities or other dense environments requires microhelicopters to fly at low altitudes, where GPS signals are often shadowed, or indoors and to actively explore unknown environments while avoiding collisions and creating maps. This involves a number of challenges on all levels of helicopter design, perception, actuation, control, and navigation, which still have to be solved. The Swarm of Micro Flying Robots (SFLY) project was a European Union-funded project with the goal of creating a swarm of vision-controlled microaerial vehicles (MAVs) capable of autonomous navigation, three-dimensional (3-D) mapping, and optimal surveillance coverage in GPS-denied environments. The SFLY MAVs do not rely on remote control, radio beacons, or motion-capture systems but can fly all by themselves using only a single onboard camera and an inertial measurement unit (IMU). This article describes the technical challenges that have been faced and the results achieved from hardware design and embedded programming to vision-based navigation and mapping, with an overview of how all the modules work and how they have been integrated into the final system. Code, data sets, and videos are publicly available to the robotics community. Experimental results demonstrating three MAVs navigating autonomously in an unknown GPS-denied environment and performing 3-D mapping and optimal surveillance coverage are presented."}
{"_id": "6facbe854c57343710d7a4de32cb8ab8d7b1c951", "title": "Minimum snap trajectory generation and control for quadrotors", "text": "We address the controller design and the trajectory generation for a quadrotor maneuvering in three dimensions in a tightly constrained setting typical of indoor environments. In such settings, it is necessary to allow for significant excursions of the attitude from the hover state and small angle approximations cannot be justified for the roll and pitch. We develop an algorithm that enables the real-time generation of optimal trajectories through a sequence of 3-D positions and yaw angles, while ensuring safe passage through specified corridors and satisfying constraints on velocities, accelerations and inputs. A nonlinear controller ensures the faithful tracking of these trajectories. Experimental results illustrate the application of the method to fast motion (5\u201310 body lengths/second) in three-dimensional slalom courses."}
{"_id": "9f85c8f83c835b997f4c613525836f6b53cfdb7f", "title": "Autonomous indoor 3D exploration with a micro-aerial vehicle", "text": "In this paper, we propose a stochastic differential equation-based exploration algorithm to enable exploration in three-dimensional indoor environments with a payload constrained micro-aerial vehicle (MAV). We are able to address computation, memory, and sensor limitations by considering only the known occupied space in the current map. We determine regions for further exploration based on the evolution of a stochastic differential equation that simulates the expansion of a system of particles with Newtonian dynamics. The regions of most significant particle expansion correlate to unexplored space. After identifying and processing these regions, the autonomous MAV navigates to these locations to enable fully autonomous exploration. The performance of the approach is demonstrated through numerical simulations and experimental results in single and multi-floor indoor experiments."}
{"_id": "c79082ee9822903053974d91d21ab3cbdbe6d2c4", "title": "A (Very) Brief History of Artificial Intelligence", "text": "telligence are traced to philosophy, fiction, and imagination. Early inventions in electronics, engineering, and many other disciplines have influenced AI. Some early milestones include work in problems solving which included basic work in learning, knowledge representation, and inference as well as demonstration programs in language understanding, translation, theorem proving, associative memory, and knowledge-based systems. The article ends with a brief examination of influential organizations and current issues facing the field."}
{"_id": "ffaca6b5174f02c6d43ab0df2991c2a62281b271", "title": "Culture and Conformity : A Meta-Analysis of Studies Using Asch ' s ( 1952 b , 1956 ) Line Judgment Task", "text": "A meta-analysis of conformity studies using an Asch-type line judgment task (1952b, 1956) was conducted to investigate whether the level of conformity has changed over time and whether it is related crogs-culturally to individualism-collectivism. The fiterature search produced 133 studies drawn from 17 countries. An analysis of U.S. studies found that conformity has declined since the 1950s. Results from 3 surveys were used to assess a country's individualism-collectivism, and for each survey the measures were found to be significantly related to conformity. Collectivist countries tended to show higher levels of conformity than individualist countries. Conformity research must attend more to cultural variables and to their role in the processes involved in social influence."}
{"_id": "9cc8512aed2aaadd9d7529857221c0d3c103cb45", "title": "Generative Single Image Reflection Separation", "text": "Single image reflection separation is an ill-posed problem since two scenes, a transmitted scene and a reflected scene, need to be inferred from a single observation. To make the problem tractable, in this work we assume that categories of two scenes are known. It allows us to address the problem by generating both scenes that belong to the categories while their contents are constrained to match with the observed image. A novel network architecture is proposed to render realistic images of both scenes based on adversarial learning. The network can be trained in a weakly supervised manner, i.e., it learns to separate an observed image without corresponding ground truth images of transmission and reflection scenes which are difficult to collect in practice. Experimental results on real and synthetic datasets demonstrate that the proposed algorithm performs favorably against existing methods."}
{"_id": "551c028aa8bce0b423f1cb717de868b08bba1aee", "title": "UAVS FOR THE DOCUMENTATION OF ARCHAEOLOGICAL EXCAVATIONS", "text": "UAV photogrammetry experienced a growing variety of diverse applications in different scientific disciplines. Comparable early, UAVs were deployed in the Cultural Heritage and Archaeology domains, mainly for the purpose of monument, building and landscape modelling. In this paper, we will focus on the investigation of UAV application for documenting archaeological excavations. Due to the fact, that excavations are dynamic processes and therefore the objects to be acquired change significantly within few hours, UAVs can provide a suitable alternative to traditional measurement methods such as measuring tape and tachymeter in some cases. Nevertheless, the image processing steps have to be automated to a large amount as results, usually sketches, maps, orthophotos and 3D models, should be available temporally close to the related excavation event. In order to accelerate the processing workflow, an interface between the UAV ground control software and various photogrammetric software packages was developed at ETH Zurich which allows for an efficient management and transfer of orientation, trajectory and sensor data for fast project setup. The capabilities of photogrammetric methods using UAVs as a sensor platform will be demonstrated in 3 case studies: The documentation of a large archaeological site in Bhutan, an excavation of a smaller site containing ancient tombs which include several uncovered objects in the Nasca region in Peru and the Maya site of Cop\u00e1n in Honduras. The first and the third case study deal with the 3D modelling of buildings and their remains by means of photogrammetry, which means that accurate flight planning had to be applied and followed during the flights. In the second case study, we acquired various aerial images over the excavation area Pernil Alto near Palpa in a more simple way for quick documentation of the area of interest. In a third part, we will present our results from comparisons between the planned positions for image acquisition and the positions realized by the navigation unit during the flight for both UAV systems mentioned above. We will describe how accurate orientation data improve automated image processing if they are at hand directly after the flight and explain the workflow developed at ETH Zurich."}
{"_id": "1a3cef149d57a7d55a2b3545f5fc3148e4fa2b85", "title": "Reversible Alopecia with Localized Scalp Necrosis After Accidental Embolization of the Parietal Artery with Hyaluronic Acid", "text": "Hyaluronic acid (HA) filler injection is widely used for soft-tissue augmentation. Complications associated with HA filling are not uncommon; however, HA-induced alopecia is a rarely reported complication that could result in severe secondary psychological trauma. The etiology, clinical traits, treatment strategies, outcomes, and possible reversibility of HA-induced alopecia have not been characterized. Here, we report a case in which bilateral temple injections of 6.5\u00a0mL of HA led to persistent pain over the left scalp for several days. Although the pain was relieved at day 9 after 600\u00a0U of hyaluronidase were injected in the left temple, the patient developed localized alopecia at the left temporoparietal region with central skin necrosis at day 15. After topical applications of recombinant bovine basic fibroblast growth factor gel and 2% minoxidil spay, the necrotic skin wound was healed at day 42. Hair regrowth and normal hair density were restored at day 74. Analyses of Doppler ultrasound examinations and histopathology of the skin biopsy suggested that mild ischemia of the left temporoparietal region led to reversible alopecia, while the permanent hair loss in the left parietal area was associated with severe skin ischemia. Therefore, the key to treatment would be to focus on the effective correction of severe ischemia-induced skin necrosis to prevent permanent hair loss. Level of Evidence V This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 ."}
{"_id": "04975368149e407c2105b76a7523e027661bd4f0", "title": "A Survey of Homomorphic Encryption for Nonspecialists", "text": "The goal of encryption is to ensure confidentiali y of data in communication and storage processes. Recently, its use in constrained devices led to consider additional features, such as the ability to delegate computations to untrusted computers. For this purpose, we would like to give the untrusted computer only an encrypted version of the data to process. The computer will perform the computation on this encrypted data, hence without knowing anything on its real value. Finally, it will send back the result, and we will decrypt it. For coherence, the decrypted result has to be equal to the intended computed value if performed on the original data. For this reason, the encryption scheme has to present a particular structure. Rivest et al. proposed in 1978 to solve this issue through homomorphic encryption [1]. Unfortunately, Brickell and Yacobi pointed out in [2] some security f aws in the firs proposals of Rivest et al. Since this f rst attempt, a lot of articles have proposed solutions dedicated to numerous application contexts: secret sharing schemes, threshold schemes (see, e.g., [3]), zero-knowledge proofs (see, e.g., [4]), oblivious transfer (see, e.g., [5]), commitment schemes (see, e.g., [3]), anonymity, privacy, electronic voting, electronic auctions, lottery protocols (see, e.g., [6]), protection ofmobile agents (see, e.g., [7]), multiparty computation (see, e.g., [3]), mix-nets (see, e.g., [8, 9]), watermarking or finge printing protocols (see, e.g., [10\u201314]), and so forth. The goal of this article is to provide nonspecialists with a survey of homomorphic encryption techniques. Section 2 recalls some basic concepts of cryptography and presents homomorphic encryption; it is particularly aimed at noncryptographers, providing guidelines about the main characteristics of encryption primitives: algorithms, performance, security. Section 3 provides a survey of homomorphic encryption schemes published so far, and analyses their characteristics. Most schemes we describe are based onmathematical notions the reader may not be familiar with. In the cases these notions can easily be introduced, we present them briefl . The reader may refer to [15] for more information concerning those we could not introduce properly, or algorithmic problems related to their computation. Before going deeper in the subject, let us introduce some notation. The integer (x) denotes the number of bits constituting the binary expansion of x. As usual, Zn will denote the set of integers modulo n, and Z\u2217n the set of its invertible elements."}
{"_id": "03b9563779bb1623a21c30386aef863ae4d11dc0", "title": "4W1H in mobile crowd sensing", "text": "With the rapid proliferation of sensor-rich smartphones, mobile crowd sensing has become a popular research field. In this article, we propose a four-stage life cycle (i.e., task creation, task assignment, individual task execution, and crowd data integration) to characterize the mobile crowd sensing process, and use 4W1H (i.e., what, when, where, who, and how) to sort out the research problems in the mobile crowd sensing domain. Furthermore, we attempt to foresee some new research directions in future mobile crowd sensing research."}
{"_id": "93b7ffd5ea1955d617396460fa034e972c089e72", "title": "Assistive, Rehabilitation, and Surgical Robots from the Perspective of Medical and Healthcare Professionals", "text": "The presence of robots in the medical and healthcare fields is increasing. Commercially available medical and healthcare robots are in use by hospitals and nursing homes. Many more assistive, rehabilitation, and surgical robots are being developed at research institutions. In this paper, we examine the awareness of medical and healthcare professionals with respect to robotic applications and the social and psychological impacts of human-robot interaction."}
{"_id": "28a699880bce80839953ac07c1d1838822a36120", "title": "Computer-aided organic synthesis.", "text": "It is tempting for those in the field of organic synthesis to liken the process of retrosynthesis to a game of chess. That the world chess champion was recently defeated by a computer leads us to think that perhaps new and powerful computing methods could be applied to synthetic problems. Here the analogy between synthesis and chess is outlined. Achievements in the 35-year history of computer-aided synthetic design are described, followed by some more recent developments."}
{"_id": "1c1c40927787c40ffe0db9629ede6828ecf09e65", "title": "Finline Ortho-Mode Transducer for Millimeter Waves", "text": "We evaluate the possibility of using a finline orthomode transducer (OMT) at millimeter wavelengths. A finline OMT has low loss, low cross-polarization, and good return loss over a full waveguide band. We propose a novel finline OMT structure for millimeter-wavelengths and present results at X-band."}
{"_id": "88323e38f676a31ed613dad604829808ff96f714", "title": "A novel broadband EBG using multi-period mushroom-like structure", "text": "A novel broadband electromagnetic band-gap (EBG) structure is presented using multi-period mushroom-like structure with different patch size cascaded. Direct transmission method is used to determine the band-gap of the EBG structure. The effects of the unit number and patch size on the mushroomlike EBG structure are investigated. Two kinds of unit with different patch size are cascaded to enlarge the band-gap of EBG structure, which achieves almost 87.1%. The simulation results show that the band-gap almost covers the stop-band produced by the two uniform configurations with different patch size respectively."}
{"_id": "7cb4705968f54b8761cadc46ed8c979ce9c22688", "title": "Driver Sleepiness Detection System Based on Eye Movements Variables", "text": "Driver sleepiness is a hazard state, which can easily lead to traffic accidents. To detect driver sleepiness in real time, a novel driver sleepiness detection system using support vector machine (SVM) based on eye movements is proposed. Eye movements data are collected using SmartEye system in a driving simulator experiment. Characteristic parameters, which include blinking frequency, gaze direction, fixation time, and PERCLOS, are extracted based on the data using a statistical method. 13 sleepiness detection models including 12 specific models and 1 general model are developed based on SVM. Experimental results demonstrate that eye movements can be used to detect driver sleepiness in real time. The detecting accuracy of the specific models significantly exceeds the general model (P < 0.001), suggesting that individual differences are an important consideration when building detection algorithms for different drivers."}
{"_id": "c03233796d17006bd4e5568b3c1f78cebacf87ca", "title": "Perceptual objects capture attention", "text": "A recent study has demonstrated that the mere organization of some elements in the visual field into an object attracts attention automatically [Kimchi, R., Yeshurun, Y., & Cohen-Savransky, A. (2007). Automatic, stimulus-driven attentional capture by objecthood. Psychonomic Bulletin & Review, 14(1), 166-172]. We tested whether similar results will emerge when the target is not a part of the object and with simplified task demands. A matrix of 16 black L elements in various orientations preceded the presentation of a Vernier target. The target was either added to the matrix (Experiment 1), or appeared after its offset (Experiment 2). On some trials four elements formed a square-like object, and on some of these trials the target appeared in the center of the object. No featural uniqueness or abrupt onset was associated with the object and it did not predict the target location or the direction of the target's horizontal offset. Performance was better when the target appeared in the center of the object than in a different location than the object, even when the target appeared after the matrix offset. These findings support the hypothesis that a perceptual object captures attention (Kimchi et al., 2007), and demonstrate that this automatic deployment of attention to the object is robust and involves a spatial component."}
{"_id": "a558ad30dd01010995d4c21b4d219f28e4dc71ee", "title": "Potential of flex-PCB winding in coreless tubular permanent magnet synchronous machines", "text": "Many applications are limited by the performance of the motors that power them. In the small power range, slotless BLDC motors are commonly used due to their intrinsic high power density and smooth torque characteristic. However, recent work has shown that the power density of radial flux rotating motors could be significantly increased by replacing their windings, made of copper wire by windings printed on flexible PCB, taking most advantage of the possibilities offered this technology. The aim of this work is to evaluate the performance gain that could be obtained by using this technology in linear tubular motors. To that end, an analytical model is implemented and used for comparing different winding shapes regarding to two criteria, one related to the power density, the other to the force smoothness. A track shape which annihilates the higher harmonics is developed and compared to different 3-segment PCB windings and to a conventional wire winding."}
{"_id": "3fccc22f283b2319603e69958fe2f51434b914e4", "title": "Memory encoding in Alzheimer's disease: an fMRI study of explicit and implicit memory.", "text": "Alzheimer's disease is the most common cause of dementia in older adults. Although the cognitive deficits and pathologic hallmarks of Alzheimer's disease have been well characterized, few functional imaging studies have examined the functional competency of specific brain regions and their relationship to specific behavioural memory deficits in Alzheimer's disease. We used functional MRI (fMRI) to examine seven early stage Alzheimer's disease patients and seven healthy age-matched neurologically normal control subjects during intentional encoding of scenes. Subjects viewed blocks of novel scenes, repeated scenes or baseline. Data were analysed using whole-brain statistical parametric mapping and region of interest approaches. The Alzheimer's disease group demonstrated impaired explicit recognition memory, but intact implicit memory (repetition priming), for the scenes. Alzheimer's disease patients demonstrated a graded deficit in activation for novel versus repeated scenes along the ventral visual stream, with most impaired activation changes in the mesial temporal lobe (MTL) and fusiform regions, most preserved activations in primary visual cortex and variably affected activations in secondary visual areas. Group-level correlations with behavioural measures of explicit memory were found in MTL, lingual and fusiform areas, whereas correlations with priming were found in lateral occipital, parietal and frontal areas. Together, these fMRI findings indicate a dissociation in Alzheimer's disease between impaired explicit memory encoding in MTL and fusiform regions and intact implicit encoding in earlier-stage occipital cortex."}
{"_id": "c0f293d40fbf71bf4627de91918bc2cce3ad2ea3", "title": "Summarizing sporting events using twitter", "text": "The status updates posted to social networks, such as Twitter and Facebook, contain a myriad of information about what people are doing and watching. During events, such as sports games, many updates are sent describing and expressing opinions about the event. In this paper, we describe an algorithm that generates a journalistic summary of an event using only status updates from Twitter as a source. Temporal cues, such as spikes in the volume of status updates, are used to identify the important moments within an event, and a sentence ranking method is used to extract relevant sentences from the corpus of status updates describing each important moment within an event. We evaluate our algorithm compared to human-generated summaries and the previous best summarization algorithm, and find that the results of our method are superior to the previous algorithm and approach the readability and grammaticality of the human-generated summaries."}
{"_id": "97f4bde2776711b2f6e785753c45195d4f78fe0f", "title": "Circulating Mitochondrial DAMPs Cause Inflammatory Responses to Injury", "text": "Injury causes a systemic inflammatory response syndrome (SIRS) that is clinically much like sepsis. Microbial pathogen-associated molecular patterns (PAMPs) activate innate immunocytes through pattern recognition receptors. Similarly, cellular injury can release endogenous \u2018damage\u2019-associated molecular patterns (DAMPs) that activate innate immunity. Mitochondria are evolutionary endosymbionts that were derived from bacteria and so might bear bacterial molecular motifs. Here we show that injury releases mitochondrial DAMPs (MTDs) into the circulation with functionally important immune consequences. MTDs include formyl peptides and mitochondrial DNA. These activate human polymorphonuclear neutrophils (PMNs) through formyl peptide receptor-1 and Toll-like receptor (TLR) 9, respectively. MTDs promote PMN Ca2+ flux and phosphorylation of mitogen-activated protein (MAP) kinases, thus leading to PMN migration and degranulation in vitro and in vivo. Circulating MTDs can elicit neutrophil-mediated organ injury. Cellular disruption by trauma releases mitochondrial DAMPs with evolutionarily conserved similarities to bacterial PAMPs into the circulation. These signal through innate immune pathways identical to those activated in sepsis to create a sepsis-like state. The release of such mitochondrial \u2018enemies within\u2019 by cellular injury is a key link between trauma, inflammation and SIRS."}
{"_id": "4b30c2431b8a27a8480d43996f7bb550e73fb268", "title": "CNN vs. SIFT for Image Retrieval: Alternative or Complementary?", "text": "In the past decade, SIFT is widely used in most vision tasks such as image retrieval. While in recent several years, deep convolutional neural networks (CNN) features achieve the state-of-the-art performance in several tasks such as image classification and object detection. Thus a natural question arises: for the image retrieval task, can CNN features substitute for SIFT? In this paper, we experimentally demonstrate that the two kinds of features are highly complementary. Following this fact, we propose an image representation model, complementary CNN and SIFT (CCS), to fuse CNN and SIFT in a multi-level and complementary way. In particular, it can be used to simultaneously describe scene-level, object-level and point-level contents in images. Extensive experiments are conducted on four image retrieval benchmarks, and the experimental results show that our CCS achieves state-of-the-art retrieval results."}
{"_id": "8894a55335c7eebe944411bf9df88894ef5dc93d", "title": "Efficient Voltage Regulation for Microprocessor Cores Stacked in Vertical Voltage Domains", "text": "Due to exponential (Moores law) scaling of advanced CMOS technologies, the challenges associated with delivering power to performance and mobile computing systems are outpacing the capabilities of conventional voltage regulator (VR) topologies. To continue to scale throughput at constant power density, the level of parallelism in microprocessor architectures is expected to increase substantially. In this paper, we present a power conversion topology to provide independent multicore regulation in the 0.8-1.4 V range from a 12-V dc bus. The topology uses a multistage ladder converter to manage power delivery to digital circuits stacked in vertical voltage domains. This approach has several advantages with regard to systems efficiency as it allows a more moderate conversion ratio of the main dc-dc converter. Moreover, the parallel converter only needs to process a fraction of the power of each core as the current can be \u201crecycled\u201d by adjacent cores in the stack. We develop a dynamical model for a multiple-input, multiple-output control scheme that uses a simple integral-control law, augmented with fast voltage- and current-mode feedforward. Measurement results of a discrete prototype verify the control scheme and demonstrate the potential advantages in system efficiency but also emphasize the remaining challenges in meeting stringent VR dynamic response requirements."}
{"_id": "03b18dcde7ba5bb0e87b2bdb68ab7af951daf162", "title": "On Using Very Large Target Vocabulary for Neural Machine Translation", "text": "Neural machine translation, a recently proposed approach to machine translation based purely on neural networks, has shown promising results compared to the existing approaches such as phrasebased statistical machine translation. Despite its recent success, neural machine translation has its limitation in handling a larger vocabulary, as training complexity as well as decoding complexity increase proportionally to the number of target words. In this paper, we propose a method based on importance sampling that allows us to use a very large target vocabulary without increasing training complexity. We show that decoding can be efficiently done even with the model having a very large target vocabulary by selecting only a small subset of the whole target vocabulary. The models trained by the proposed approach are empirically found to match, and in some cases outperform, the baseline models with a small vocabulary as well as the LSTM-based neural machine translation models. Furthermore, when we use an ensemble of a few models with very large target vocabularies, we achieve performance comparable to the state of the art (measured by BLEU) on both the English\u2192German and English\u2192French translation tasks of WMT\u201914."}
{"_id": "060e380b28be29b7eda509981a50b4406ea6b21b", "title": "Agreement-based Joint Training for Bidirectional Attention-based Neural Machine Translation", "text": "The attentional mechanism has proven to be effective in improving end-to-end neural machine translation. However, due to the intricate structural divergence between natural languages, unidirectional attention-based models might only capture partial aspects of attentional regularities. We propose agreement-based joint training for bidirectional attention-based end-to-end neural machine translation. Instead of training source-to-target and target-to-source translation models independently, our approach encourages the two complementary models to agree on word alignment matrices on the same training data. Experiments on ChineseEnglish and English-French translation tasks show that agreement-based joint training significantly improves both alignment and translation quality over independent training."}
{"_id": "1c23e1ad1a538416e8123f128a87c928b09be868", "title": "Alignment by Agreement", "text": "We present an unsupervised approach to symmetric word alignment in which two simple asymmetric models are trained jointly to maximize a combination of data likelihood and agreement between the models. Compared to the standard practice of intersecting predictions of independently-trained models, joint training provides a 32% reduction in AER. Moreover, a simple and efficient pair of HMM aligners provides a 29% reduction in AER over symmetrized IBM model 4 predictions. Disciplines Computer Sciences Comments Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics (HLT-NAACL '06). Association for Computational Linguistics, Stroudsburg, PA, USA, 104-111. DOI=10.3115/1220835.1220849 http://dx.doi.org/10.3115/1220835.1220849 \u00a9 ACM, 2006. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, {(2006)} http://doi.acm.org/10.3115/1220835.1220849\" Email permissions@acm.org This conference paper is available at ScholarlyCommons: http://repository.upenn.edu/cis_papers/533 Alignment by Agreement Percy Liang UC Berkeley Berkeley, CA 94720 pliang@cs.berkeley.edu Ben Taskar UC Berkeley Berkeley, CA 94720 taskar@cs.berkeley.edu Dan Klein UC Berkeley Berkeley, CA 94720 klein@cs.berkeley.edu"}
{"_id": "f7b48b0028a9887f85fe857b62441f391560ef6d", "title": "Fan-Beam Millimeter-Wave Antenna Design Based on the Cylindrical Luneberg Lens", "text": "A new design of two-dimensional cylindrical Luneberg lens is introduced based on TE10 mode propagation between parallel plates, with special focus on ease of manufacturing. The parallel plates are partially filled with low cost polymer material (Rexolite epsivr = 2.54) to match Luneberg's law. A planar linear tapered slot antenna (LTSA) is inserted into the air region between the parallel plates at the edge of the Luneberg lens as a feed antenna, with fine positioning to the focal point of the Luneberg lens to optimize the antenna system performance. A combined ray-optics/diffraction method is used to obtain the radiation pattern of the system and results are compared with predictions of a time domain numerical solver. Measurements done on a 10-cm Luneberg lens designed for operation at 30 GHz agree very well with predictions. For this prototype, 3-dB E- and if-plane beamwidths of 6.6deg and 54deg respectively were obtained, and the sidelobe level in the E-plane was -17.7-dB. Although the parallel plate configuration should lead to a narrow band design due to the dispersion characteristics of the TE10 mode, the measurement results demonstrate broadband characteristics with radiation efficiencies varying between 43% and 72% over the tested frequency band of 26.5-37 GHz. The designed cylindrical Luneberg lens can be used to launch multiple beams by implementing an arc array of planar LTSA elements at the periphery of the lens, and can be easily extended to higher mm-wave frequencies."}
{"_id": "0b789e34df1af1c23d64ee82dce9aeb6982bbba9", "title": "SINGO: A single-end-operative and genderless connector for self-reconfiguration, self-assembly and self-healing", "text": "Flexible and reliable connection is critical for self-reconfiguration, self-assembly, or self-healing. However, most existing connection mechanisms suffer from a deficiency that a connection would seize itself if one end malfunctions or is out of service. To mitigate this limitation on self-healing, this paper presents a new SINGO connector that can establish or disengage a connection even if one end of the connection is not operational. We describe the design and the prototype of the connector and demonstrate its performance by both theoretical analysis and physical experimentations."}
{"_id": "7c3d54d7ca0d0960ec5e03db9e75c09db7005a2c", "title": "Content matters: A study of hate groups detection based on social networks analysis and web mining", "text": "In recent years, with rapid growth of social networking websites, users are very active in these platforms and large amount of data are aggregated. Among those social networking websites, Facebook is the most popular website that has most users. However, in Facebook, the abusing problem is a very critical issue, such as Hate Groups. Therefore, many researchers are devoting on how to detect potential hate groups, such as using the techniques of social networks analysis. However, we believe content is also a very important factors for hate groups detection. Thus, in this paper, we will propose an architecture to for hate groups detection which is based on the technique of Social Networks Analysis and Web Mining (Text Mining; Natural Language Processing). From the experiment result, it shows that content plays an critical role for hate groups detection and the performance is better than the system that just applying social networks analysis."}
{"_id": "1f95617ace456a1d543ccc6de4171904d8aeb1d2", "title": "Occupational risks and challenges of seafaring.", "text": "UNLABELLED\nSeafarers are exposed to a high diversity of occupational health hazards onboard ships.\n\n\nOBJECTIVE\nThe aim of this article is to present a survey of the current, most important hazards in seafaring including recommendations on measures how to deal with these problems.\n\n\nMETHODS\nThe review is based on maritime expert opinions as well a PubMed analysis related to the occupational risks of seafaring.\n\n\nRESULTS\nDespite recent advances in injury prevention, accidents due to harmful working and living conditions at sea and of non-observance of safety rules remain a main cause of injury and death. Mortality in seafaring from cardiovascular diseases (CVD) is mainly caused by increased risks and impaired treatment options of CVD at sea. Further, shipboard stress and high demand may lead to fatigue and isolation which have an impact on the health of onboard seafarers. Communicable diseases in seafaring remain an occupational problem. Exposures to hazardous substances and UV-light are important health risks onboard ships. Because of harsh working conditions onboard including environmental conditions, sufficient recreational activities are needed for the seafarers' compensation both onboard and ashore. However, in reality there is often a lack of leisure time possibilities.\n\n\nDISCUSSION\nSeafaring is still an occupation with specific work-related risks. Thus, a further reduction of occupational hazards aboard ships is needed and poses a challenge for maritime health specialists and stakeholders. Nowadays, maritime medicine encompasses a broad field of workplaces with different job-related challenges."}
{"_id": "8cab4d4370461767e7193679a308d92f6b89f00a", "title": "IoT Security: Ongoing Challenges and Research Opportunities", "text": "The Internet of Things (IoT) opens opportunities for wearable devices, home appliances, and software to share and communicate information on the Internet. Given that the shared data contains a large amount of private information, preserving information security on the shared data is an important issue that cannot be neglected. In this paper, we begin with general information security background of IoT and continue on with information security related challenges that IoT will encountered. Finally, we will also point out research directions that could be the future work for the solutions to the security challenges that IoT encounters."}
{"_id": "57d774b8592b4b3f83f1304be43701ad8517e79a", "title": "Multilabel Neural Networks with Applications to Functional Genomics and Text Categorization", "text": "In multilabel learning, each instance in the training set is associated with a set of labels and the task is to output a label set whose size is unknown a priori for each unseen instance. In this paper, this problem is addressed in the way that a neural network algorithm named BP-MLL, i.e., backpropagation for multilabel learning, is proposed. It is derived from the popular backpropagation algorithm through employing a novel error function capturing the characteristics of multilabel learning, i.e., the labels belonging to an instance should be ranked higher than those not belonging to that instance. Applications to two real-world multilabel learning problems, i.e., functional genomics and text categorization, show that the performance of BP-MLL is superior to that of some well-established multilabel learning algorithms"}
{"_id": "2881b79ff142496c27d9558361e48f105208dec4", "title": "Investigating Information Systems with Action Research", "text": "Action research is an established research method in use in the social and medical sciences since the mid-twentieth century, and has increased in importance for information systems toward the end of the 1990s. Its particular philosophic context is couched in strongly post-positivist assumptions such as idiographic and interpretive research ideals. Action research has developed a history within information systems that can be explicitly linked to early work by Lewin and the Tavistock Institute. Action research varies in form, and responds to particular problem domains. The most typical form is a participatory method based on a five-step model, which is exemplified by published IS research."}
{"_id": "e578f65fd5e499a72481da3a288a401f564a779a", "title": "Chapter 2 Depression and a Stepped Care Model", "text": "Given the public health significance of depression and the limited resources available for providing evidence-based treatment, there is a need to develop effective models of care to reduce the personal and societal costs of the disorder. Within stepped care service provisions, all patients presenting with symptoms of depression generally are first offered the lowest intensity and least intrusive intervention deemed necessary following assessment and triage. Only when patients do not show improvement do they move to higher, more intensive levels of care. However, stepped care models also provide information to aid clinicians in decision making regarding selection of treatment strategies that are most appropriate for an individual patient. For some individuals, lower levels of care would never be appropriate or may not be preferred by the consumer. Thus, stepped interventions offer a variety of treatment options to match the intensity of the patient\u2019s presenting problem as well as potential patient preference. In this chapter, we discuss various strategies for treating depression consistent with a stepped model of care beginning with least intensive treatment and then moving up through the hierarchy of steps of care. While this is not a comprehensive review of all available treatments for depression, the chapter is designed to make clinicians aware of specific strategies for addressing depressive symptoms and to provide guidance about resources available at the various levels of care."}
{"_id": "0562bc5f82b40e2e9c0ae035aa2dd1da6107017c", "title": "vSlicer: latency-aware virtual machine scheduling via differentiated-frequency CPU slicing", "text": "Recent advances in virtualization technologies have made it feasible to host multiple virtual machines (VMs) in the same physical host and even the same CPU core, with fair share of the physical resources among the VMs. However, as more VMs share the same core/CPU, the CPU access latency experienced by each VM increases substantially, which translates into longer I/O processing latency perceived by I/O-bound applications. To mitigate such impact while retaining the benefit of CPU sharing, we introduce a new class of VMs called latency-sensitive VMs (LSVMs), which achieve better performance for I/O-bound applications while maintaining the same resource share (and thus cost) as other CPU-sharing VMs. LSVMs are enabled by vSlicer, a hypervisor-level technique that schedules each LSVM more frequently but with a smaller micro time slice. vSlicer enables more timely processing of I/O events by LSVMs, without violating the CPU share fairness among all sharing VMs. Our evaluation of a vSlicer prototype in Xen shows that vSlicer substantially reduces network packet round-trip times and jitter and improves application-level performance. For example, vSlicer doubles both the connection rate and request processing throughput of an Apache web server; reduces a VoIP server's upstream jitter by 62%; and shortens the execution times of Intel MPI benchmark programs by half or more."}
{"_id": "b5429dd7cce627a1699daa0a6a0edc9d0e3081f1", "title": "CENTURION: Incentivizing multi-requester mobile crowd sensing", "text": "The recent proliferation of increasingly capable mobile devices has given rise to mobile crowd sensing (MCS) systems that outsource the collection of sensory data to a crowd of participating workers that carry various mobile devices. Aware of the paramount importance of effectively incentivizing participation in such systems, the research community has proposed a wide variety of incentive mechanisms. However, different from most of these existing mechanisms which assume the existence of only one data requester, we consider MCS systems with multiple data requesters, which are actually more common in practice. Specifically, our incentive mechanism is based on double auction, and is able to stimulate the participation of both data requesters and workers. In real practice, the incentive mechanism is typically not an isolated module, but interacts with the data aggregation mechanism that aggregates workers' data. For this reason, we propose CENTURION, a novel integrated framework for multi-requester MCS systems, consisting of the aforementioned incentive and data aggregation mechanism. CENTURION's incentive mechanism satisfies truthfulness, individual rationality, computational efficiency, as well as guaranteeing non-negative social welfare, and its data aggregation mechanism generates highly accurate aggregated results. The desirable properties of CENTURION are validated through both theoretical analysis and extensive simulations."}
{"_id": "463d85ce8348eba0935d155d557634ed7e57bb07", "title": "FEATURE EXTRACTION USING SURF ALGORITHM FOR OBJECT RECOGNITION", "text": "Video surveillance is active research topic in computer vision research area for humans & vehicles, so it is used over a great extent. Multiple images generated using a fixed camera contains various objects, which are taken under different variations, illumination changes after that the object\u2019s identity and orientation are provided to the user. This scheme is used to represent individual images as well as various objects classes in a single, scale and rotation invariant model.The objective is to improve object recognition accuracy for surveillance purposes & to detect multiple objects with sufficient level of scale invariance.Multiple objects detection& recognition is important in the analysis of video data and higher level security system. This method can efficiently detect the objects from query images as well as videos by extracting frames one by one. When given a query image at runtime, by generating the set of query features and it will find best match it to other sets within the database. Using SURF algorithm find the database object with the best feature matching, then object is present in the query image. Keywords\u2014 Image recognition, Query image, Local feature, Surveillance system, SURF algorithm."}
{"_id": "940825a6c2e4000f2f95ea70b8358e4bbd63b77c", "title": "Efficient Hierarchical Identity-Based Signature With Batch Verification for Automatic Dependent Surveillance-Broadcast System", "text": "The automatic-dependent surveillance-broad-cast (ADS-B) is generally regarded as the most important module in air traffic surveillance technology. To obtain better airline security, ADS-B system will be deployed in most airspace by 2020, where aircraft will be equipped with an ADS-B device that periodically broadcasts messages to other aircraft and ground station controllers. Due to the open communication environment, the ADS-B system is subject to a broad range of attacks. To simultaneously implement both integrity and authenticity of messages transmitted in the ADS-B system, Yang et al. proposed a new authentication frame based on the three-level hierarchical identity-based signature (TLHIBS) scheme with batch verification, as well as constructing two schemes for the ADS-B system. However, neither TLHIBS schemes are sufficiently lightweight for practical deployment due to the need for complex hash-to-point operation or expensive certification management. In this paper, we construct an efficient TLHIBS scheme with batch verification for the ADS-B system. Our scheme does not require hash-to-point operation or (expensive) certification management. We then prove the TLHIBS scheme secure in the random oracle model. We also demonstrate the practicality of the scheme using experiments, whose findings indicate that the TLHIBS scheme supports attributes required by the ADS-B system without the computation cost in Chow et al.'s scheme and Yang et al.'s TLHIBS schemes."}
{"_id": "5721cc62c8cd71e5302330dd2d4a1ab0b38f5553", "title": "Workplace harassment: double jeopardy for minority women.", "text": "To date there have been no studies of how both sex and ethnicity might affect the incidence of both sexual and ethnic harassment at work. This article represents an effort to fill this gap. Data from employees at 5 organizations were used to test whether minority women are subject to double jeopardy at work, experiencing the most harassment because they are both women and members of a minority group. The results supported this prediction. Women experienced more sexual harassment than men, minorities experienced more ethnic harassment than Whites, and minority women experienced more harassment overall than majority men, minority men, and majority women."}
{"_id": "6d60a7255878f03e3405c60c72a6420c38b165bc", "title": "Exposure of Children and Adolescents to Alcohol Marketing on Social Media Websites", "text": "AIMS\nIn 2011, online marketing became the largest marketing channel in the UK, overtaking television for the first time. This study aimed to describe the exposure of children and young adults to alcohol marketing on social media websites in the UK.\n\n\nMETHODS\nWe used commercially available data on the three most used social media websites among young people in the UK, from December 2010 to May 2011. We analysed by age (6-14 years; 15-24 years) and gender the reach (proportion of internet users who used the site in each month) and impressions (number of individual pages viewed on the site in each month) for Facebook, YouTube and Twitter. We further analysed case studies of five alcohol brands to assess the marketer-generated brand content available on Facebook, YouTube and Twitter in February and March 2012.\n\n\nRESULTS\nFacebook was the social media site with the highest reach, with an average monthly reach of 89% of males and 91% of females aged 15-24. YouTube had a similar average monthly reach while Twitter had a considerably lower usage in the age groups studied. All five of the alcohol brands studied maintained a Facebook page, Twitter page and YouTube channel, with varying levels of user engagement. Facebook pages could not be accessed by an under-18 user, but in most cases YouTube content and Twitter content could be accessed by those of all ages.\n\n\nCONCLUSION\nThe rise in online marketing of alcohol and the high use of social media websites by young people suggests that this is an area requiring further monitoring and regulation."}
{"_id": "31c280aa6c4f6a98c0fa5e1a843355e1e8da4007", "title": "An implementation of the FP-growth algorithm", "text": "The FP-growth algorithm is currently one of the fastest approaches to frequent item set mining. In this paper I describe a C implementation of this algorithm, which contains two variants of the core operation of computing a projection of an FP-tree (the fundamental data structure of the FP-growth algorithm). In addition, projected FP-trees are (optionally) pruned by removing items that have become infrequent due to the projection (an approach that has been called FP-Bonsai). I report experimental results comparing this implementation of the FP-growth algorithm with three other frequent item set mining algorithms I implemented (Apriori, Eclat, and Relim)."}
{"_id": "ee42bceb15d28ce0c7fcd3e37d9a564dfbb3ab90", "title": "An analytical method for diseases prediction using machine learning techniques", "text": ""}
{"_id": "4d357ffc1cf60d3f34b5345899619882791474bb", "title": "Deep Reinforcement Learning for General Video Game AI", "text": "The General Video Game AI (GVGAI) competition and its associated software framework provides a way of benchmarking AI algorithms on a large number of games written in a domain-specific description language. While the competition has seen plenty of interest, it has so far focused on online planning, providing a forward model that allows the use of algorithms such as Monte Carlo Tree Search. In this paper, we describe how we interface GVGAI to the OpenAI Gym environment, a widely used way of connecting agents to reinforcement learning problems. Using this interface, we characterize how widely used implementations of several deep reinforcement learning algorithms fare on a number of GVGAI games. We further analyze the results to provide a first indication of the relative difficulty of these games relative to each other, and relative to those in the Arcade Learning Environment under similar conditions."}
{"_id": "429e2f4d95fe77d8c61245265529963df171da68", "title": "Working Memory Capacity as Executive Attention", "text": "Performance on measures of working memory (WM) capacity predicts performance on a wide range of real-world cognitive tasks. I review the idea that WM capacity (a) is separable from short-term memory, (b) is an important component of general fluid intelligence, and (c) represents a domainfree limitation in ability to control attention. Studies show that individual differences in WM capacity are reflected in performance on antisaccade, Stroop, and dichotic-listening tasks. WM capacity, or executive attention, is most important under conditions in which interference leads to retrieval of response tendencies that conflict with the current task."}
{"_id": "f50bd6c24e5fef71e54d0ec873a733b6fd6302e4", "title": "Characterizing Horn Antenna Signals for Breast Cancer Detection", "text": "An in-depth analysis of signals from horn antennas used in breast cancer detection research has been carried out. It was found that following the excitation signal, both the double-ridged and the quad-ridged ultra-wideband horn antennas radiate additional unwanted signals that persistently oscillate. Undesirable signal oscillations sources were identified as the horn antenna cavity resonance and inherent antenna LC resonance. These signals interfere with the tumor\u2019s signal response and need to be eliminated for successful detection of the cancerous growth. This paper proposes solutions to remove or minimize these signals without affecting antenna parameters such as bandwidth, gain, ports isolation, and polarization isolation. Modification of the antenna cavity successfully suppressed the unwanted cavity oscillation. Modification of the antenna waveguide reduced inductance and consequently mitigated LC oscillation. The resulting time and frequency domain horn antenna signals demonstrate the effectiveness of the proposed methods. Finally, a breast phantom with a tumor is simulated using signals from the original and a modified horn antenna. The delay and sum method is used to create images. The breast images demonstrate enhanced image quality through the reduction of clutter using the proposed techniques. The signal-to-clutter ratios are 0.448 and 1.6823 dB for the images produced by using the original and modified antennas, respectively."}
{"_id": "bd4113b9b41f663465f2bcd5d97103715e67e273", "title": "Visualising the structure of architectural open spaces based on shape analysis", "text": "Visualisation and interpretation of architectural spaces are intellectually challenging and multifaceted exercises with fairly wide ranging applications and implications. In the context of urban planning, they are most commonly undertaken to evaluate the usage of the architectural space to ensure efficient navigation and accessibility [1]. These exercises clearly assume a certain influence of the built structures on the human cognition. However, what aspect of architectural space affects the human behaviour still remains an open debate. In this respect, it is closely similar to an exercise to identify the unknown visual variables in information visualization. Since a quantitative analysis of the architectural geometric structure on a large scale will be a daunting computational task, the open space bounded by the built structures is studied instead [2]. The study of architectural open spaces essentially involves the computation of the visibility polygon or isovist (space visible all around a viewpoint, see Figure 1a) from a viewpoint and calculating various shape measures of the visibility polygon. The isovist computation involves drawing rays"}
{"_id": "13b77bed4038262f2dedc3bfba8a1905e1e8dd2b", "title": "Design and Implementation of the LogicBlox System", "text": "The LogicBlox system aims to reduce the complexity of software development for modern applications which enhance and automate decision-making and enable their users to evolve their capabilities via a ``self-service'' model. Our perspective in this area is informed by over twenty years of experience building dozens of mission-critical enterprise applications that are in use by hundreds of large enterprises across industries such as retail, telecommunications, banking, and government. We designed and built LogicBlox to be the system we wished we had when developing those applications.\n In this paper, we discuss the design considerations behind the LogicBlox system and give an overview of its implementation, highlighting innovative aspects. These include: LogiQL, a unified and declarative language based on Datalog; the use of purely functional data structures; novel join processing strategies; advanced incremental maintenance and live programming facilities; a novel concurrency control scheme; and built-in support for prescriptive and predictive analytics."}
{"_id": "d2752e691248eb77223bf7a4a3cd4d8878d682b7", "title": "Should I use TensorFlow", "text": "Google\u2019s Machine Learning framework TensorFlow was opensourced in November 2015 [1] and has since built a growing community around it. TensorFlow is supposed to be flexible for research purposes while also allowing its models to be deployed productively [7]. This work is aimed towards people with experience in Machine Learning considering whether they should use TensorFlow in their environment. Several aspects of the framework important for such a decision are examined, such as the heterogenity, extensibility and its computation graph. A pure Python implementation of linear classification is compared with an implementation utilizing TensorFlow. I also contrast TensorFlow to other popular frameworks with respect to modeling capability, deployment and performance and give a brief description of the current adaption of the framework."}
{"_id": "443362dc552b36c33138c415408d307213ddfa36", "title": "devices and apps for health care professionals : uses and benefits table 1 uses for Mobile devices and apps by health care professionals", "text": ""}
{"_id": "7d2a06f43648cf023566b58143a4e7b50f3b80ca", "title": "Robust Abandoned Object Detection Using Dual Foregrounds", "text": "As an alternative to the tracking-based approaches that heavily depend on accurate detection of moving objects, which often fail for crowded scenarios, we present a pixelwise method that employs dual foregrounds to extract temporally static image regions. Depending on the application, these regions indicate objects that do not constitute the original background but were brought into the scene at a subsequent time, such as abandoned and removed items, illegally parked vehicles. We construct separate longand short-term backgrounds that are implemented as pixelwise multivariate Gaussian models. Background parameters are adapted online using a Bayesian update mechanism imposed at different learning rates. By comparing each frame with these models, we estimate two foregrounds. We infer an evidence score at each pixel by applying a set of hypotheses on the foreground responses, and then aggregate the evidence in time to provide temporal consistency. Unlike optical flow-based approaches that smear boundaries, our method can accurately segment out objects even if they are fully occluded. It does not require on-site training to compensate for particular imaging conditions. While having a low-computational load, it readily lends itself to parallelization if further speed improvement is necessary."}
{"_id": "03adaf0497fdd9fb32ef0ef925db9bc7da4f2b4f", "title": "Learning a Classification Model for Segmentation", "text": "We propose a two-class classification model for grouping. Human segmented natural images are used as positive examples. Negative examples of grouping are constructed by randomly matching human segmentations and images. In a preprocessing stage an image is oversegmented into superpixels. We define a variety of features derived from the classical Gestalt cues, including contour, texture, brightness and good continuation. Information-theoretic analysis is applied to evaluate the power of these grouping cues. We train a linear classifier to combine these features. To demonstrate the power of the classification model, a simple algorithm is used to randomly search for good segmentations. Results are shown on a wide range of images."}
{"_id": "14d1a458f49e251cbbab34349e379469300a2bae", "title": "Scene Parsing with Object Instances and Occlusion Ordering", "text": "This work proposes a method to interpret a scene by assigning a semantic label at every pixel and inferring the spatial extent of individual object instances together with their occlusion relationships. Starting with an initial pixel labeling and a set of candidate object masks for a given test image, we select a subset of objects that explain the image well and have valid overlap relationships and occlusion ordering. This is done by minimizing an integer quadratic program either using a greedy method or a standard solver. Then we alternate between using the object predictions to refine the pixel labels and vice versa. The proposed system obtains promising results on two challenging subsets of the LabelMe and SUN datasets, the largest of which contains 45, 676 images and 232 classes."}
{"_id": "6fb37cbc83bd6cd1d732f07288939a5061400e91", "title": "Enhancing Neural Disfluency Detection with Hand-Crafted Features", "text": "In this paper, we apply a bidirectional Long Short-Term Memory with a Conditional Random Field to the task of disfluency detection. Long-range dependencies is one of the core problems for disfluency detection. Our model handles long-range dependencies by both using the Long Short-Term Memory and hand-crafted discrete features. Experiments show that utilizing the hand-crafted discrete features significantly improves the model\u2019s performance by achieving the state-of-the-art score of 87.1% on the Switchboard corpus."}
{"_id": "6e50ef83a27c487dfdb87724263aedee342d5f9b", "title": "Pitfalls in pediatric radiology", "text": "This essay depicts some of the diagnostic errors identified in a large academic pediatric imaging department during a 13-year period. Our aim is to illustrate potential situations in which errors are more likely to occur and more likely to cause harm, and to share our difficult cases so other radiologists might learn without having to experience those situations themselves."}
{"_id": "bddea8b543a37c73a7080625593e235d8cb60b57", "title": "Detection and identification algorithm of the S1 and S2 heart sounds", "text": "This article attempts to contribute to heart abnormalities detection by suggesting a segmentation algorithm of S1 and S2 heart sounds. It describes the steps followed and the results achieved during the realization of the first PASCAL (Pattern Analysis, Statistical Modelling and Computational Learning) Classifying Heart Sounds Challenge. The performed work leads to improved results, compared to the results found by the three finalists of this challenge, using the envelope of Shannon energy and other methods, to detect and identify first and second heart sound of phonocardiogram signals."}
{"_id": "bbb51d4d9fbaf0a60444ce8f861368a956acf044", "title": "MV3R-Tree: A Spatio-Temporal Access Method for Timestamp and Interval Queries", "text": "Among the various types of spatio-temporal queries, the most common ones involve window queries in time. In particular, timestamp (or timeslice) queries retrieve all objects that intersect a window at a specific timestamp. Interval queries include multiple (usually consecutive) timestamps. Although several indexes have been developed for either type, currently there does not exist a structure that can efficiently process both query types. This is a significant problem due to the fundamental importance of these queries in any spatio-temporal system that deals with historical information retrieval. Our paper addresses the problem by proposing the MV3R-tree, a structure that utilizes the concepts of multi-version B-trees and 3D-Rtrees. Extensive experimentation proves that MV3R-trees compare favorably with specialized structures aimed at timestamp and interval window queries, both in terms of time and space requirements."}
{"_id": "549d8886b516e21363f5e88e2db6dc77d6d2b51a", "title": "Towards Music Structural Segmentation across Genres: Features, Structural Hypotheses, and Annotation Principles", "text": "This article faces the problem of how different audio features and segmentation methods work with different music genres. A new annotated corpus of Chinese traditional Jingju music is presented. We incorporate this dataset with two existing music datasets from the literature in an integrated retrieval system to evaluate existing features, structural hypotheses, and segmentation algorithms outside a Western bias. A harmonic-percussive source separation technique is introduced to the feature extraction process and brings significant improvement to the segmentation. Results show that different features capture the structural patterns of different music genres in different ways. Novelty- or homogeneity-based segmentation algorithms and timbre features can surpass the investigated alternatives for the structure analysis of Jingju due to their lack of harmonic repetition patterns. Findings indicate that the design of audio features and segmentation algorithms as well as the consideration of contextual information related to the music corpora should be accounted dependently in an effective segmentation system."}
{"_id": "d9af508376bf488da3e3a223260d4a9ff012e1e7", "title": "The effect of five prosthetic feet on the gait and loading of the sound limb in dysvascular below-knee amputees.", "text": "The purpose of this study was to determine the effects of prosthetic foot design on the vertical ground reaction forces experienced by the sound and amputated limbs in a group of persons with dysvascular below-knee amputations. Stride characteristics, joint motion, and ground reaction forces were recorded simultaneously during a self-selected free walking velocity in seven subjects wearing five different prosthetic feet (SACH, Flex-Foot, Carbon Copy II, Seattle, Quantum). Subjects used each foot for one month prior to testing. Results indicated that the sound limb was exposed to higher vertical ground reaction forces than normal despite a reduced walking velocity. Use of the Flex-Foot resulted in the lowest sound limb vertical forces, which appears to be related to its large arc of dorsiflexion motion. In addition, there was increased loading response knee flexion of the sound limb indicating an attempt by these subjects to modulate floor impact. These results suggest that the intact lower extremity is susceptible to excessive floor impact, and that prosthetic foot design can have an effect on the magnitude of the vertical forces experienced by the limb."}
{"_id": "079596d84f82f2d3cd2cc5f34b48fde34f351b07", "title": "Non-Local Patch-Based Image Inpainting", "text": "Image inpainting is the process of filling in missing regions in an image in a plausible way. In this contribution, we propose and describe an implementation of a patch-based image inpainting algorithm. The method is actually a two-dimensional version of our video inpainting algorithm proposed in [16]. The algorithm attempts to minimise a highly non-convex functional, first introducted by Wexler et al. in [18]. The functional specifies that a good solution to the inpainting problem should be an image where each patch is very similar to its nearest neighbour in the unoccluded area. Iterations are performed in a multi-scale framework which yields globally coherent results. In this manner two of the major goals of image inpainting, the correct reconstruction of textures and structures, are addressed. We address a series of important practical issues which arise when using such an approach. In particular, we reduce execution times by using the PatchMatch [3] algorithm for nearest neighbour searches, and we propose a modified patch distance which improves the comparison of textured patches. We address the crucial issue of initialisation and the choice of the number of pyramid levels, two points which are rarely discussed in such approaches. We provide several examples which illustrate the advantages of our algorithm, and compare our results with those of state-of-the-art methods."}
{"_id": "5ee048adb63061ac04c53bd45716f4e2434b7440", "title": "CaffePresso: An optimized library for Deep Learning on embedded accelerator-based platforms", "text": "Off-the-shelf accelerator-based embedded platforms offer a competitive energy-efficient solution for lightweight deep learning computations over CPU-based systems. Low-complexity classifiers used in power-constrained and performance-limited scenarios are characterized by operations on small image maps with 2-3 deep layers and few class labels. For these use cases, we consider a range of embedded systems with 5--20 W power budgets such as the Xilinx ZC706 board (with MXP soft vector processor), NVIDIA Jetson TX1 (GPU), TI Keystone II (DSP) as well as the Adapteva Parallella board (custom multi-core with NoC). Deep Learning computations push the capabilities of these platforms to the limit through compute-intensive evaluations of multiple 2D convolution filters per layer, and high communication requirements arising from the movement of intermediate maps across layers. We present CaffePresso, a Caffe-compatible framework for generating optimized mappings of user-supplied ConvNet specifications to target various accelerators such as FPGAs, DSPs, GPUs, RISC-multicores. We use an automated code generation and auto-tuning approach based on knowledge of the ConvNet requirements, as well as platform-specific constraints such as on-chip memory capacity, bandwidth and ALU potential. While one may expect the Jetson TX1 + cuDNN to deliver high performance for ConvNet configurations, (1) we observe a flipped result with slower GPU processing compared to most other systems for smaller embedded-friendly datasets such as MNIST and CIFAR10, and (2) faster and more energy efficient implementation on the older 28nm TI Keystone II DSP over the newer 20nm NVIDIA TX1 SoC in all cases."}
{"_id": "0df7ffbabf057c6d88462e8df5bbb4ba18d68810", "title": "The Normalization of Occurrence and Co-occurrence Matrices in Bibliometrics using Cosine Similarities and Ochiai Coefficients", "text": "We prove that Ochiai similarity of the co-occurrence matrix is equal to cosine similarity in the underlying occurrence matrix. Neither the cosine nor the Pearson correlation should be used for the normalization of co-occurrence matrices because the similarity is then normalized twice, and therefore over-estimated; the Ochiai coefficient can be used instead. Results are shown using a small matrix (5 cases, 4 variables) for didactic reasons, and also Ahlgren et al.\u2019s (2003) co-occurrence matrix of 24 authors in library and information sciences. The over-estimation is shown numerically and will be illustrated using multidimensional scaling and cluster dendograms. If the occurrence matrix is not available (such as in internet research or author co-citation analysis) using Ochiai for the normalization is preferable to using the cosine."}
{"_id": "101d619b5911e9c2fda6f02365c593ae61617cb6", "title": "Reinforcement Learning of Cooperative Persuasive Dialogue Policies using Framing", "text": "In this paper, we apply reinforcement learning for automatically learning cooperative persuasive dialogue system policies using framing, the use of emotionally charged statements common in persuasive dialogue between humans. In order to apply reinforcement learning, we describe a method to construct user simulators and reward functions specifically tailored to persuasive dialogue based on a corpus of persuasive dialogues between human interlocutors. Then, we evaluate the learned policy and the effect of framing through experiments both with a user simulator and with real users. The experimental evaluation indicates that applying reinforcement learning is effective for construction of cooperative persuasive dialogue systems which use framing."}
{"_id": "00cd200396de7aad8f45de4b5c8977e669ab6f00", "title": "Inference in Probabilistic Logic Programs using Weighted CNF's", "text": "Probabilistic logic programs are logic programs in which some of the facts are annotated with probabilities. Several classical probabilistic inference tasks (such as MAP and computing marginals) have not yet received a lot of attention for this formalism. The contribution of this paper is that we develop efficient inference algorithms for these tasks. This is based on a conversion of the probabilistic logic program and the query and evidence to a weighted CNF formula. This allows us to reduce the inference tasks to wellstudied tasks such as weighted model counting. To solve such tasks, we employ state-ofthe-art methods. We consider multiple methods for the conversion of the programs as well as for inference on the weighted CNF. The resulting approach is evaluated experimentally and shown to improve upon the state-of-theart in probabilistic logic programming."}
{"_id": "2199cb39adbf22b2161cd4f65662e4a152885bae", "title": "Automatic Access Control Based on Face and Hand Biometrics in a Non-cooperative Context", "text": "Automatic access control systems (ACS) based on the human biometrics or physical tokens are widely employed in public and private areas. Yet these systems, in their conventional forms, are restricted to active interaction from the users. In scenarios where users are not cooperating with the system, these systems are challenged. Failure in cooperation with the biometric systems might be intentional or because the users are incapable of handling the interaction procedure with the biometric system or simply forget to cooperate with it, due to for example, illness like dementia. This work introduces a challenging bimodal database, including face and hand information of the users when they approach a door to open it by its handle in a noncooperative context. We have defined two (an easy and a challenging) protocols on how to use the database. We have reported results on many baseline methods, including deep learning techniques as well as conventional methods on the database. The obtained results show the merit of the proposed database and the challenging nature of access control with non-cooperative users."}
{"_id": "2e1bb7a4af40726ccaa1a4eb4823d31622371c17", "title": "How does communication heal? Pathways linking clinician-patient communication to health outcomes.", "text": "OBJECTIVE\nAlthough prior research indicates that features of clinician-patient communication can predict health outcomes weeks and months after the consultation, the mechanisms accounting for these findings are poorly understood. While talk itself can be therapeutic (e.g., lessening the patient's anxiety, providing comfort), more often clinician-patient communication influences health outcomes via a more indirect route. Proximal outcomes of the interaction include patient understanding, trust, and clinician-patient agreement. These affect intermediate outcomes (e.g., increased adherence, better self-care skills) which, in turn, affect health and well-being. Seven pathways through which communication can lead to better health include increased access to care, greater patient knowledge and shared understanding, higher quality medical decisions, enhanced therapeutic alliances, increased social support, patient agency and empowerment, and better management of emotions.\n\n\nCONCLUSION\nFuture research should hypothesize pathways connecting communication to health outcomes and select measures specific to that pathway.\n\n\nPRACTICE IMPLICATIONS\nClinicians and patients should maximize the therapeutic effects of communication by explicitly orienting communication to achieve intermediate outcomes (e.g., trust, mutual understanding, adherence, social support, self-efficacy) associated with improved health."}
{"_id": "ec43d37aad744150af144d27a08b0b097607e712", "title": "Jumping NLP Curves: A Review of Natural Language Processing Research [Review Article]", "text": "Natural language processing (NLP) is a theory-motivated range of computational techniques for the automatic analysis and representation of human language. NLP research has evolved from the era of punch cards and batch processing (in which the analysis of a sentence could take up to 7 minutes) to the era of Google and the likes of it (in which millions of webpages can be processed in less than a second). This review paper draws on recent developments in NLP research to look at the past, present, and future of NLP technology in a new light. Borrowing the paradigm of `jumping curves' from the field of business management and marketing prediction, this survey article reinterprets the evolution of NLP research as the intersection of three overlapping curves-namely Syntactics, Semantics, and Pragmatics Curveswhich will eventually lead NLP research to evolve into natural language understanding."}
{"_id": "aa8186266a8b0363febc509eafc4d291b4c3e71a", "title": "Representation Learning of Large-Scale Knowledge Graphs via Entity Feature Combinations", "text": "Knowledge graphs are typical large-scale multi-relational structures, which comprise a large amount of fact triplets. Nonetheless, existing knowledge graphs are still sparse and far from being complete. To refine the knowledge graphs, representation learning is widely used to embed fact triplets into low-dimensional spaces. Many existing knowledge graph embedding models either focus on learning rich features from entities but fail to extract good features of relations, or employ sophisticated models that have rather high time and memory-space complexities. In this paper, we propose a novel knowledge graph embedding model, CombinE. It exploits entity features from two complementary perspectives via the plus and minus combinations. We start with the plus combination, where we use shared features of entity pairs participating in a relation to convey its relation features. To also allow differences of each pairs of entities participating in a relation, we also use the minus combination, where we concentrate on individual entity features, and regard relations as a channel to offset the divergence and preserve the prominence between head and tail entities. Compared with the state-of-the-art models, our experimental results demonstrate that CombinE outperforms existing ones and has low time and memory-space complexities."}
{"_id": "3a8bf6aeabde8000c103146059df381f6ba1ef16", "title": "Automatic classification of irregularly sampled time series with unequal lengths: A case study on estimated glomerular filtration rate", "text": "A patient's estimated glomerular filtration rate (eGFR) can provide important information about disease progression and kidney function. Traditionally, an eGFR time series is interpreted by a human expert labelling it as stable or unstable. While this approach works for individual patients, the time consuming nature of it precludes the quick evaluation of risk in large numbers of patients. However, automating this process poses significant challenges as eGFR measurements are usually recorded at irregular intervals and the series of measurements differs in length between patients. Here we present a two-tier system to automatically classify an eGFR trend. First, we model the time series using Gaussian process regression (GPR) to fill in `gaps' by resampling a fixed size vector of fifty time-dependent observations. Second, we classify the resampled eGFR time series using a K-NN/SVM classifier, and evaluate its performance via 5-fold cross validation. Using this approach we achieved an F-score of 0.90, compared to 0.96 for 5 human experts when scored amongst themselves."}
{"_id": "04649bf1d35f05771fbd1b98af64e29d26cb4c7f", "title": "DG Placement and Sizing in Radial Distribution Network Using PSO & HBMO Algorithms", "text": "Optimal placement and sizing of DG in distribution network is an optimization problem with continuous and discrete variables. Many researchers have used evolutionary methods for finding the optimal DG placement and sizing. This paper proposes a hybrid algorithm PSO&HBMO for optimal placement and sizing of distributed generation (DG) in radial distribution system to minimize the total power loss and improve the voltage profile. The proposed method is tested on a standard 13 bus radial distribution system and simulation results carried out using MATLAB software. The simulation results indicate that PSO&HBMO method can obtain better results than the simple heuristic search method and PSO algorithm. The method has a potential to be a tool for identifying the best location and rating of a DG to be installed for improving voltage profile and line losses reduction in an electrical power system. Moreover, current reduction is obtained in distribution system."}
{"_id": "7a0877205ab05d1e6baff72e319f52acf37cb6b1", "title": "Content delivery over TLS: a cryptographic analysis of keyless SSL", "text": "The Transport Layer Security (TLS) protocol is designed to allow two parties, a client and a server, to communicate securely over an insecure network. However, when TLS connections are proxied through an intermediate middlebox, like a Content Delivery Network (CDN), the standard endto- end security guarantees of the protocol no longer apply. In this paper, we investigate the security guarantees provided by Keyless SSL, a CDN architecture currently deployed by CloudFlare that composes two TLS 1.2 handshakes to obtain a proxied TLS connection. We demonstrate new attacks that show that Keyless SSL does not meet its intended security goals. These attacks have been reported to CloudFlare and we are in the process of discussing fixes. We argue that proxied TLS handshakes require a new, stronger, 3-party security definition. We present 3(S)ACCEsecurity, a generalization of the 2-party ACCE security definition that has been used in several previous proofs for TLS. We modify Keyless SSL and prove that our modifications guarantee 3(S)ACCE-security, assuming ACCE-security for the individual TLS 1.2 connections. We also propose a new design for Keyless TLS 1.3 and prove that it achieves 3(S)ACCEsecurity, assuming that the TLS 1.3 handshake implements an authenticated 2-party key exchange. Notably, we show that secure proxying in Keyless TLS 1.3 is computationally lighter and requires simpler assumptions on the certificate infrastructure than our proposed fix for Keyless SSL. Our results indicate that proxied TLS architectures, as currently used by a number of CDNs, may be vulnerable to subtle attacks and deserve close attention."}
{"_id": "fb17e9cab49665863f360d5f9e61e6048a7e1b28", "title": "Reconstruction-Based Pairwise Depth Dataset for Depth Image Enhancement Using CNN", "text": "Raw depth images captured by consumer depth cameras suffer from noisy and missing values. Despite the success of CNN-based image processing on color image restoration, similar approaches for depth enhancement have not been much addressed yet because of the lack of raw-clean pairwise dataset. In this paper, we propose a pairwise depth image dataset generation method using dense 3D surface reconstruction with a filtering method to remove low quality pairs. We also present a multi-scale Laplacian pyramid based neural network and structure preserving loss functions to progressively reduce the noise and holes from coarse to fine scales. Experimental results show that our network trained with our pairwise dataset can enhance the input depth images to become comparable with 3D reconstructions obtained from depth streams, and can accelerate the convergence of dense 3D reconstruction results."}
{"_id": "6e4d29fc4ac85255058831bcd39f4a0630264200", "title": "Towards Optimal Binary Code Learning via Ordinal Embedding", "text": "Binary code learning, a.k.a., hashing, has been recently popular due to its high efficiency in large-scale similarity search and recognition. It typically maps high-dimensional data points to binary codes, where data similarity can be efficiently computed via rapid Hamming distance. Most existing unsupervised hashing schemes pursue binary codes by reducing the quantization error from an original real-valued data space to a resulting Hamming space. On the other hand, most existing supervised hashing schemes constrain binary code learning to correlate with pairwise similarity labels. However, few methods consider ordinal relations in the binary code learning process, which serve as a very significant cue to learn the optimal binary codes for similarity search. In this paper, we propose a novel hashing scheme, dubbed Ordinal Embedding Hashing (OEH), which embeds given ordinal relations among data points to learn the ranking-preserving binary codes. The core idea is to construct a directed unweighted graph to capture the ordinal relations, and then train the hash functions using this ordinal graph to preserve the permutation relations in the Hamming space. To learn such hash functions effectively, we further relax the discrete constraints and design a stochastic gradient decent algorithm to obtain the optimal solution. Experimental results on two large-scale benchmark datasets demonstrate that the proposed OEH method can achieve superior performance over the state-of-the-arts approaches. At last, the evaluation on query by humming dataset demonstrates the OEH also has good performance for music retrieval by using user\u2019s humming or singing."}
{"_id": "882a807b806324b4ef13a98f329120c6228b78f5", "title": "Symmetric-Aperture Antenna for Broadband Circular Polarization", "text": "A novel symmetric-aperture antenna is proposed for broadband circularly polarized (CP) radiation using a coplanar waveguide (CPW) feed. The proposed antenna consists of a CPW-feed and a wide symmetric aperture along the diagonal axis. A symmetric-aperture consists of two diamond-shaped sections. The symmetric-aperture and CPW-feeding structure are fabricated on the same plane of a dielectric substrate. Wideband CP radiation is achieved by changing the arms of symmetric-aperture and stripline length. The measured bandwidths of 3-dB axial ratio (AR) and 10-dB return loss are around 68% (2.4-4.85 GHz) and 107% (1.6-5.25 GHz), respectively. The measured boresight gain is more than 3.0 dBic over the CP bandwidth. Overall antenna size is around 0.7\u03bbo \u00d7 0.7\u03bbo \u00d7 0.019\u03bbo at 3.5 GHz."}
{"_id": "22c0bf46726ad1c90b5fac7f16638c813e86c829", "title": "A Novelty Detection Approach to Classification", "text": "Novelty Detection techniques are concept-learning methods that proceed by recognizing positive instances of a concept rather than diierentiating between its positive and negative instances. Novelty Detection approaches consequently require very few, if any, negative training instances. This paper presents a particular Novelty Detection approach to classiica-tion that uses a Redundancy Compression and Non-Redundancy Diierentiation technique based on the (Gluck & Myers 1993) model of the hippocampus, a part of the brain critically involved in learning and memory. In particular, this approach consists of training an autoencoder to reconstruct positive input instances at the output layer and then using this au-toencoder to recognize novel instances. Classiication is possible, after training, because positive instances are expected to be reconstructed accurately while negative instances are not. The purpose of this paper is to compare Hippo, the system that implements this technique , to C4.5 and feedforward neural network classi-cation on several applications."}
{"_id": "383f12c877a5babcdd1805aae1ea9ddf64648245", "title": "Concept drift detection for streaming data", "text": "Common statistical prediction models often require and assume stationarity in the data. However, in many practical applications, changes in the relationship of the response and predictor variables are regularly observed over time, resulting in the deterioration of the predictive performance of these models. This paper presents Linear Four Rates (LFR), a framework for detecting these concept drifts and subsequently identifying the data points that belong to the new concept (for relearning the model). Unlike conventional concept drift detection approaches, LFR can be applied to both batch and stream data; is not limited by the distribution properties of the response variable (e.g., datasets with imbalanced labels); is independent of the underlying statistical-model; and uses user-specified parameters that are intuitively comprehensible. The performance of LFR is compared to benchmark approaches using both simulated and commonly used public datasets that span the gamut of concept drift types. The results show LFR significantly outperforms benchmark approaches in terms of recall, accuracy and delay in detection of concept drifts across datasets."}
{"_id": "40de599b11b1553649354991cdf849048cb05f00", "title": "Cost-Sensitive Online Classification", "text": "Both cost-sensitive classification and online learning have been extensively studied in data mining and machine learning communities, respectively. However, very limited study addresses an important intersecting problem, that is, \u201cCost-Sensitive Online Classification\". In this paper, we formally study this problem, and propose a new framework for Cost-Sensitive Online Classification by directly optimizing cost-sensitive measures using online gradient descent techniques. Specifically, we propose two novel cost-sensitive online classification algorithms, which are designed to directly optimize two well-known cost-sensitive measures: (i) maximization of weighted sum of sensitivity and specificity, and (ii) minimization of weighted misclassification cost. We analyze the theoretical bounds of the cost-sensitive measures made by the proposed algorithms, and extensively examine their empirical performance on a variety of cost-sensitive online classification tasks. Finally, we demonstrate the application of the proposed technique for solving several online anomaly detection tasks, showing that the proposed technique could be a highly efficient and effective tool to tackle cost-sensitive online classification tasks in various application domains."}
{"_id": "7c2cb0272af5817cd58fb1707d6e5833c851d72b", "title": "Learning from Imbalanced Data", "text": "With the continuous expansion of data availability in many large-scale, complex, and networked systems, such as surveillance, security, Internet, and finance, it becomes critical to advance the fundamental understanding of knowledge discovery and analysis from raw data to support decision-making processes. Although existing knowledge discovery and data engineering techniques have shown great success in many real-world applications, the problem of learning from imbalanced data (the imbalanced learning problem) is a relatively new challenge that has attracted growing attention from both academia and industry. The imbalanced learning problem is concerned with the performance of learning algorithms in the presence of underrepresented data and severe class distribution skews. Due to the inherent complex characteristics of imbalanced data sets, learning from such data requires new understandings, principles, algorithms, and tools to transform vast amounts of raw data efficiently into information and knowledge representation. In this paper, we provide a comprehensive review of the development of research in learning from imbalanced data. Our focus is to provide a critical review of the nature of the problem, the state-of-the-art technologies, and the current assessment metrics used to evaluate learning performance under the imbalanced learning scenario. Furthermore, in order to stimulate future research in this field, we also highlight the major opportunities and challenges, as well as potential important research directions for learning from imbalanced data."}
{"_id": "7f75048db423526bab02c887dd36bf6d9d84b76c", "title": "Classifying Data Streams with Skewed Class Distributions and Concept Drifts", "text": "Classification is an important data analysis tool that uses a model built from historical data to predict class labels for new observations. More and more applications are featuring data streams, rather than finite stored data sets, which are a challenge for traditional classification algorithms. Concept drifts and skewed distributions, two common properties of data stream applications, make the task of learning in streams difficult. The authors aim to develop a new approach to classify skewed data streams that uses an ensemble of models to match the distribution over under-samples of negatives and repeated samples of positives."}
{"_id": "9781c919d268e0bf0e4b0488040be6d427a9808f", "title": "Novel Radiation-Hardened-by-Design (RHBD) 12T Memory Cell for Aerospace Applications in Nanoscale CMOS Technology", "text": "In this paper, a novel radiation-hardened-by-design (RHBD) 12T memory cell is proposed to tolerate single node upset and multiple-node upset based on upset physical mechanism behind soft errors together with reasonable layout-topology. The verification results obtained confirm that the proposed 12T cell can provide a good radiation robustness. Compared with 13T cell, the increased area, power, read/write access time overheads of the proposed 12T cell are \u221218.9%, \u221223.8%, and 171.6%/\u221250.0%, respectively. Moreover, its hold static noise margin is 986.2 mV which is higher than that of 13T cell. This means that the proposed 12T cell also has higher stability when it provides fault tolerance capability."}
{"_id": "02c2b8d35abc44652e854853c06ff2100aa56d8f", "title": "Smart Electric Motorcycle Security System Based on GSM and ZigBee Communication", "text": "The storage of natural fuel is decreasing day by day and burning much of it can cause a massive environmental disaster. In order to save the nature from the disaster people need to think about some alternatives. The concept of making the battery operated vehicles comes as a rescue. The battery operated electric vehicle help in saving fuel and also help in lowering air pollution. Hybrid vehicles are already brought by several automobile manufacturers in market. This proposed system is going to offer an electric motorcycle combined with some security features based on ARM microcontroller, 8051 microcontroller, ZigBee and GSM communication which can be utilized both in Hybrid motorcycles and Electric motorcycles. When the proposed security features will be merged with those vehicles it will be an evolutionary concept in the world of next generation two-wheeler industry because besides lowering the level of air pollution and helping in controlling the use of fuel it mainly helps people to drive motorcycles safely."}
{"_id": "a57723a736e23d1373211cdae374ce4f5ec11d25", "title": "Facial Expression Detection using Filtered Local Binary Pattern Features with ECOC Classifiers and Platt Scaling", "text": "We outline a design for a FACS-based facial expression recognition system and describe in more detail the implementation of two of its main components. Firstly we look at how features that are useful from a pattern analysis point of view can be extracted from a raw input image. We show that good results can be obtained by using the method of local binary patterns (LPB) to generate a large number of candidate features and then selecting from them using fast correlation-based filtering (FCBF). Secondly we show how Platt scaling can be used to improve the performance of an error-correcting output code (ECOC) classifier."}
{"_id": "b809d4947a9d544cded70446da3c6bfdf4e0256a", "title": "Automatic Acquisition of Linguistic Patterns for Conceptual Modeling", "text": "Conceptual modeling aims to capture the knowledge of a domain as perceived by the modeler and represent it in a way that will help further system analysis and design. Object-oriented methodology has been taken as a common practice for software system analysis and design because of its obvious advantages of handling the \u201csoftware crisis\u201d (Booch, 1994). Object-oriented analysis (OOA) is \u201ca method of analysis that examines requirements from the perspectives of the classes and objects\u201d (Booch, 1994). Object-oriented design (OOD) is a method of design \u201cencompassing the process of OO decomposition and a notation for depicting both logical and physical as well as static and dynamic models of the system\u201d (Booch, 1994)."}
{"_id": "84a73482b20e2644379af0ebef80e1ea25a4a90d", "title": "Focus on Formative Feedback", "text": "This paper reviews the corpus of research on feedback, with a particular focus on formative feedback\u2014defined as information communicated to the learner that is intended to modify the learner\u2019s thinking or behavior for the purpose of improving learning. According to researchers in the area, formative feedback should be multidimensional, nonevaluative, supportive, timely, specific, credible, infrequent, and genuine (e.g., Brophy, 1981; Schwartz & White, 2000). Formative feedback is usually presented as information to a learner in response to some action on the learner\u2019s part. It comes in a variety of types (e.g., verification of response accuracy, explanation of the correct answer, hints, worked examples) and can be administered at various times during the learning process (e.g., immediately following an answer, after some period of time has elapsed). Finally, there are a number of variables that have been shown to interact with formative feedback\u2019s success at promoting learning (e.g., individual characteristics of the learner and aspects of the task). All of these issues will be discussed in this paper. This review concludes with a set of guidelines for generating formative feedback."}
{"_id": "13b4569db536ba6ed90ed72e4206797e718a692e", "title": "X Goes First: Teaching Simple Games through Multimodal Interaction", "text": "What would it take to teach a computer to play a game entirely through language and sketching? In this paper, we present an implemented program through which an instructor can teach the rules of simple board games using such input. We describe the architecture, information flow, and vocabulary of instructional events and walk through an annotated example. In our approach, the instructional and communication events guide abductive reasoning for language interpretation and help to integrate information from sketching and language. Having a general target representation enables the learning process to be viewed more as translation and problem solving than as induction. Lastly, learning by demonstration complements and extends instruction, resulting in concrete, operational rules."}
{"_id": "3ac0df7ef0958ea660bfb6d8961d8964ef2afa47", "title": "Performance Analysis and Application of Mobile Blockchain", "text": "Mobile security has become more and more important due to the boom of mobile commerce (m-commerce). However, the development of m-commerce is facing many challenges regarding data security problems. Recently, blockchain has been introduced as an effective security solution deployed successfully in many applications in practice, such as, Bitcoin, cloud computing, and Internet-of-Things. In this paper, we introduce a new m-commerce application using blockchain technology, namely, MobiChain, to secure transactions in the m-commerce. Especially, in the MobiChain application, the mining processes can be executed efficiently on mobile devices using our proposed Android core module. Through real experiments, we evaluate the performance of the proposed model and show that blockchain will be an efficient security solution for future m-commerce."}
{"_id": "410372059a5e2c1214cab03543741c0369afc802", "title": "Indicator selection for daily equity trading with recurrent reinforcement learning", "text": "Recurrent reinforcement learning (RRL), a machine learning technique, is very successful in training high frequency trading systems. When trading analysis of RRL is done with lower frequency financial data, e.g. daily stock prices, the decrease of autocorrelation in prices may lead to a decrease in trading profit. In this paper, we propose a RRL trading system which utilizes the price information, jointly with the indicators from technical analysis, fundamental analysis and econometric analysis, to produce long/short signals for daily trading. In the proposed trading system, we use a genetic algorithm as a pre-screening tool to search suitable indicators for RRL trading. Moreover, we modify the original RRL parameter update scheme in the literature for out-of-sample trading. Empirical studies are conducted based on data sets of 238 S&P stocks. It is found that the trading performance concerning the out-of sample daily Sharpe ratios turns better: the number of companies with a positive and significant Sharpe ratio increases after feeding the selected indicators jointly with prices information into the RRL system."}
{"_id": "4329143e227370d931a207184f3b995c0cb5c8d5", "title": "Self-stabilizing Systems in Spite of Distributed Control", "text": "The synchronization task between loosely coupled cyclic sequential processes (as can be distinguished in, for instance, operating systems) can be viewed as keeping the relation \u201cthe system is in a legitimate state\u201d invariant. As a result, each individual process step that could possibly cause violation of that relation has to be preceded by a test deciding whether the process in question is allowed to proceed or has to be delayed. The resulting design is readily\u2014and quite systematically\u2014implemented if the different processes can be granted mutually exclusive access to a common store in which \u201cthe current system state\u201d is recorded."}
{"_id": "70291d9f3ddae33b9f79e55932f24ed6de22ffb0", "title": "Optimisation of the mover kinetic energy of a miniature linear actuator", "text": "The linear actuator presented in this paper is a core component of an electronically-controlled linear escapement mechanism. Its role is to unlock mechanical parts, its mover being used as a hammer. In this situation, the kinetic energy acquired by the mover at the position of the impact is a defining factor. The multiobjective optimisation presented in this paper targets the maximisation of said kinetic energy and the minimisation of the overall size of the actuator. In addition to the objective, the originality of this paper is that the control strategy as well as the behaviour of the guide of the mover are directly taken into account in the optimisation process."}
{"_id": "0c3166e6773fb8a894670bf4d6da9bd512170e63", "title": "Smart Contract Templates: foundations, design landscape and research directions", "text": "In this position paper, we consider some foundational topics regarding smart contracts (such as terminology, automation, enforceability, and semantics) and define a smart contract as an agreement whose execution is both automatable and enforceable. We explore a simple semantic framework for smart contracts, covering both operational and non-operational aspects. We describe templates and agreements for legally-enforceable smart contracts, based on legal documents. Building upon the Ricardian Contract triple, we identify operational parameters in the legal documents and use these to connect legal agreements to standardised code. We also explore the design landscape, including increasing sophistication of parameters, increasing use of common standardised code, and long-term academic research. We conclude by identifying further work and sketching an initial set of requirements for a common language to support Smart Contract Templates."}
{"_id": "cb4d277a51da6894fe5143013978567ef5f805c8", "title": "Natural language question answering over RDF: a graph data driven approach", "text": "RDF question/answering (Q/A) allows users to ask questions in natural languages over a knowledge base represented by RDF. To answer a national language question, the existing work takes a two-stage approach: question understanding and query evaluation. Their focus is on question understanding to deal with the disambiguation of the natural language phrases. The most common technique is the joint disambiguation, which has the exponential search space. In this paper, we propose a systematic framework to answer natural language questions over RDF repository (RDF Q/A) from a graph data-driven perspective. We propose a semantic query graph to model the query intention in the natural language question in a structural way, based on which, RDF Q/A is reduced to subgraph matching problem. More importantly, we resolve the ambiguity of natural language questions at the time when matches of query are found. The cost of disambiguation is saved if there are no matching found. We compare our method with some state-of-the-art RDF Q/A systems in the benchmark dataset. Extensive experiments confirm that our method not only improves the precision but also speeds up query performance greatly."}
{"_id": "8552ab48bd3d0b4f54a513c1dbcb4159a7be04a8", "title": "A capability-based security approach to manage access control in the Internet of Things", "text": "Resource and information protection plays a relevant role in distributed systems like the ones present in the Internet of Things (IoT). Authorization frameworks like RBAC and ABAC do not provide scalable, manageable, effective, and efficient mechanisms to support distributed systems with many interacting services and are not able to effectively support the dynamicity and scaling needs of IoT contexts that envisage a potentially unbound number of sensors, actuators and related resources, services and subjects, as well as a more relevance of short-lived, unplanned and dynamic interaction patterns. Furthermore, as more end-users start using smart devices (e.g. smart phones, smart home appliances, etc.) the need to have more scalable, manageable, understandable and easy to use access control mechanisms increases. This paper describes a capability based access control system that enterprises, or even individuals, can use to manage their own access control processes to services and information. The proposed mechanism supports rights delegation and a more sophisticated access control customization. The proposed approach is being developed within the European FP7 IoT@Work project to manage access control to some of the project\u2019s services deployed in the shop floor."}
{"_id": "45392756fd0d437091d172e4cbbc37a66650555f", "title": "Automated video looping with progressive dynamism", "text": "Given a short video we create a representation that captures a spectrum of looping videos with varying levels of dynamism, ranging from a static image to a highly animated loop. In such a progressively dynamic video, scene liveliness can be adjusted interactively using a slider control. Applications include background images and slideshows, where the desired level of activity may depend on personal taste or mood. The representation also provides a segmentation of the scene into independently looping regions, enabling interactive local adjustment over dynamism. For a landscape scene, this control might correspond to selective animation and deanimation of grass motion, water ripples, and swaying trees. Converting arbitrary video to looping content is a challenging research problem. Unlike prior work, we explore an optimization in which each pixel automatically determines its own looping period. The resulting nested segmentation of static and dynamic scene regions forms an extremely compact representation."}
{"_id": "79d3d3cb4d59886aa3d95d7fdf9da2b8f220182f", "title": "Dwarf germplasm: the key to giant Cannabis hempseed and cannabinoid crops", "text": "After a century of banishment, both euphoric (\u201cmarijuana\u201d) and non-euphoric (\u201cindustrial hemp\u201d) classes of Cannabis sativa are attracting billions of dollars of investment as new legitimate crops. Most domesticated C. sativa is very tall, a phenotype that is desirable only for hemp fibre obtained from the stems. However, because the principal demands today are for chemicals from the inflorescence and oilseeds from the infructescence, an architecture maximizing reproductive tissues while minimizing stems is appropriate. Such a design was the basis of the greatest short-term increases in crop productivity in the history of agriculture: the creation of short-stature (\u201csemi-dwarf\u201d), high-harvest-index grain cultivars, especially by ideotype breeding, as demonstrated during the \u201cGreen Revolution.\u201d This paradigm has considerable promise for C. sativa. The most critical dwarfing character for breeding such productivity into C. sativa is contraction of internodes. This reduces stem tissues (essentially a waste product except for fibre hemp) and results in compact inflorescences (which, on an area basis, maximize cannabinoid chemicals) and infructescences (which maximize oilseed production), as well as contributing to ease of harvesting and efficiency of production on an area basis. Four sources of germplasm useful for breeding semi-dwarf biotypes deserve special attention: (1) Naturally short northern Eurasian wild plants (often photoperiodically day-neutral, unlike like most biotypes) adapted to the stress of very short seasons by maximizing relative development of reproductive tissues. (2) Short, high-harvest-index, oilseed plants selected in northern regions of Eurasia. (3) \u201cIndica type\u201d marijuana, an ancient semi-dwarf cultigen tracing to the Afghanistan-Pakistan area. (4) Semi-dwarf strains of marijuana bred illegally in recent decades to avoid detection when grown clandestinely indoors for the black market. Although the high THC content in marijuana strains limits their usage as germplasm for low-THC cultivars, modern breeding techniques can control this variable. The current elimination of all marijuana germplasm from breeding of hemp cultivars is short-sighted because marijuana biotypes possess a particularly wide range of genes. There is an urgent need to develop public gene bank collections of Cannabis."}
{"_id": "803981a8993db08d966f882c7e69ee7ba110acac", "title": "A 3.5-GHz PLL for fast low-IF/zero-IF LO switching in an 802.11 transceiver", "text": "A PLL technique is introduced that enables fast and accurate frequency switching, independent of the loop bandwidth. It uses separate tuning paths, each driving a separate VCO tune port. Different frequencies are produced by letting the VCO make different weighted combinations of the stable tuning voltages. The PLL converges to the stable tuning voltages by switching it a few times between the desired frequencies and tuning paths. Once the stabilized tuning voltages are found, one can switch between frequencies as fast as one can switch between K/sub VCO/s. The technique is applied to a 3.5-GHz integer-N PLL to enable fast jumping of the local oscillator (LO) frequency when an 802.11 transceiver is switched between a low and a zero intermediate frequency (LIF/ZIF). It uses dual phase/frequency detectors (PFD), charge pumps (CPs), and on-chip loop filters to control two separate low-leakage VCO tune ports. Each PFD/tune port combination can be (de)activated separately, without disturbing the loop filters' charge. The 50-kHz bandwidth PLL achieves a measured 7-MHz jump with /spl plusmn/20 kHz accuracy within 6 /spl mu/s. The measured phase noise is -123 dBc/Hz at 1-MHz offset."}
{"_id": "fdb2f0db2ead8d79b3e831c0e7dc0d12e8a4b506", "title": "Imaging of conjoined twins", "text": "The incidence of conjoined twins is estimated to be around 1 in 250,000 live births. There is a distinct female predominance. In this paper the imaging of conjoined twins both antenatally and postnatally is reviewed, in particular taking into consideration recent advances with multidetector CT. Accurate counselling of parents regarding the likely outcome of the pregnancy and the likelihood of successful separation is dependent on good prenatal imaging with ultrasound and MRI. Planning of postnatal surgical separation is aided by accurate preoperative imaging which, depending on the conjoined area, will encompass many imaging modalities, but often relies heavily on CT scanning."}
{"_id": "32e34ae2d53a1829dcfaadc1e6ce8cc9aa795152", "title": "Sentiment Analysis of Twitter Data", "text": "We examine sentiment analysis on Twitter data. The contributions of this paper are: (1) We introduce POS-specific prior polarity features. (2) We explore the use of a tree kernel to obviate the need for tedious feature engineering. The new features (in conjunction with previously proposed features) and the tree kernel perform approximately at the same level, both outperforming the state-of-the-art baseline."}
{"_id": "a9470c0bb432001f63fd28d1450aecf4370e2b91", "title": "Understanding Judgment of Information Quality and Cognitive Authority in the WWW", "text": "In the WWW, people are engaging in interaction with more, and more diverse information than ever before, so there is an increasing need for information \u201cfiltering.\u201d But because of the diversity of information resources, and the lack of traditional quality control on the WWW, the criteria of authority and quality of information that people have used for this purpose in past contexts may no longer be relevant. This paper reports on a study of people\u2019s decision making with respect to quality and authority in the WWW. Seven facets of judgment of information quality were identified: source, content, format, presentation, currency, accuracy, and speed of loading. People mentioned source credibility with two levels: institutional level and individual level. Authority was identified as a underlying theme in source credibility. Institutional authority involved: institutional domain identified by URL; institution type; and institution reputation recognized by names. Individual authority involved: identification of creator/author; creator/author affiliation; and creator/author\u2019s name. People were more or less concerned with evaluating information quality depending upon: the consequence of use of information; act or commitment based on information; and the focus of inquiry. It was also found that people believed that the web, as an institution, was less authoritative and less credible than other types of information systems."}
{"_id": "175f831c29f959a8ab1730090e099504c4be9ab4", "title": "Containment of misinformation spread in online social networks", "text": "With their blistering expansions in recent years, popular on-line social sites such as Twitter, Facebook and Bebo, have become some of the major news sources as well as the most effective channels for viral marketing nowadays. However, alongside these promising features comes the threat of misinformation propagation which can lead to undesirable effects, such as the widespread panic in the general public due to faulty swine flu tweets on Twitter in 2009. Due to the huge magnitude of online social network (OSN) users and the highly clustered structures commonly observed in these kinds of networks, it poses a substantial challenge to efficiently contain viral spread of misinformation in large-scale social networks.\n In this paper, we focus on how to limit viral propagation of misinformation in OSNs. Particularly, we study a set of problems, namely the \u03b21T -- Node Protectors, which aims to find the smallest set of highly influential nodes whose decontamination with good information helps to contain the viral spread of misinformation, initiated from the set I, to a desired ratio (1 \u2212 \u03b2) in T time steps. In this family set, we analyze and present solutions including inapproximability result, greedy algorithms that provide better lower bounds on the number of selected nodes, and a community-based heuristic method for the Node Protector problems. To verify our suggested solutions, we conduct experiments on real world traces including NetHEPT, NetHEPT_WC and Facebook networks. Empirical results indicate that our methods are among the best ones for hinting out those important nodes in comparison with other available methods."}
{"_id": "c0cd899b64c916df8ebc90f5494a13aed917071c", "title": "Side Channels in Deduplication: Trade-offs between Leakage and Efficiency", "text": "Deduplication removes redundant copies of files or data blocks stored on the cloud. Client-side deduplication, where the client only uploads the file upon the request of the server, provides major storage and bandwidth savings, but introduces a number of security concerns. Harnik et al. (2010) showed how cross-user client-side deduplication inherently gives the adversary access to a (noisy) side-channel that may divulge whether or not a particular file is stored on the server, leading to leakage of user information. We provide formal definitions for deduplication strategies and their security in terms of adversarial advantage. Using these definitions, we provide a criterion for designing good strategies and then prove a bound characterizing the necessary trade-off between security and efficiency."}
{"_id": "8c9a6dd09afe5fe525c039d4e87b164eaba6abea", "title": "Using speakers' referential intentions to model early cross-situational word learning.", "text": "Word learning is a \"chicken and egg\" problem. If a child could understand speakers' utterances, it would be easy to learn the meanings of individual words, and once a child knows what many words mean, it is easy to infer speakers' intended meanings. To the beginning learner, however, both individual word meanings and speakers' intentions are unknown. We describe a computational model of word learning that solves these two inference problems in parallel, rather than relying exclusively on either the inferred meanings of utterances or cross-situational word-meaning associations. We tested our model using annotated corpus data and found that it inferred pairings between words and object concepts with higher precision than comparison models. Moreover, as the result of making probabilistic inferences about speakers' intentions, our model explains a variety of behavioral phenomena described in the word-learning literature. These phenomena include mutual exclusivity, one-trial learning, cross-situational learning, the role of words in object individuation, and the use of inferred intentions to disambiguate reference."}
{"_id": "70a5576ab08b14e9b976cf826711055e0d7d7fb7", "title": "The Ideal Elf: Identity Exploration in World of Warcraft", "text": "In this study, we examine the identity exploration possibilities presented by online multiplayer games in which players use graphics tools and character-creation software to construct an avatar, or character. We predicted World of Warcraft players would create their main character more similar to their ideal self than the players themselves were. Our results support this idea; a sample of players rated their character as having more favorable attributes that were more favorable than their own self-rated attributes. This trend was stronger among those with lower psychological well-being, who rated themselves comparatively lower than they rated their character. Our results suggest that the game world allows players the freedom to create successful virtual selves regardless of the constraints of their actual situation."}
{"_id": "7a4f2c1fa0a899582484945991c3e205b9a45344", "title": "Detection of Atrial Fibrillation using model-based ECG analysis", "text": "Atrial fibrillation (AF) is an arrhythmia that can lead to several patient risks. This kind of arrhythmia affects mostly elderly people, in particular those who suffer from heart failure (one of the main causes of hospitalization). Thus, detection of AF becomes decisive in the prevention of cardiac threats. In this paper an algorithm for AF detection based on a novel algorithm architecture and feature extraction methods is proposed. The aforementioned architecture is based on the analysis of the three main physiological characteristics of AF: i) P wave absence ii) heart rate irregularity and iii) atrial activity (AA). Discriminative features are extracted using model-based statistic and frequency based approaches. Sensitivity and specificity results (respectively, 93.80% and 96.09% using the MIT-BIH AF database) show that the proposed algorithm is able to outperform state-of-the-art methods."}
{"_id": "192218530b0aaa4c812bf1d603a63a5e676a1271", "title": "Index Financial Time Series Based on Zigzag-Perceptually Important Points", "text": "Problem statement: Financial time series were usually large in size, unstructured and of high dimensionality. Since, the illustration of fin a cial time series shape was typically characterize d by a few number of important points. These important p oints moved in zigzag directions which could form technical patterns. However, these important p oints exhibited in different resolutions and diffic ult to determine. Approach: In this study, we proposed novel methods of financ i l time series indexing by considering their zigzag movement. The methods c onsist of two major algorithms: first, the identification of important points, namely the Zigz ag-Perceptually Important Points (ZIPs) identification method and next, the indexing method namely Zigzag based M-ary Tree (ZM-Tree) to structure and organize the important points. Results: The errors of the tree building and retrieving compared to the original time series increased when t important points increased. The dimensionality reduction using ZM-Tree based on tree pruning and n umber of retrieved points techniques performed better when the number of important points increase d. Conclusion: Our proposed techniques illustrated mostly acceptable performance in tree op rations and dimensionality reduction comparing to existing similar technique like Specialize Binar y Tree (SB-Tree)."}
{"_id": "7be8c7687a957837ba8ce4945afa76cb68dbe495", "title": "Towards High Performance Video Object Detection", "text": "There has been significant progresses for image object detection in recent years. Nevertheless, video object detection has received little attention, although it is more challenging and more important in practical scenarios. Built upon the recent works [37, 36], this work proposes a unified approach based on the principle of multi-frame end-to-end learning of features and cross-frame motion. Our approach extends prior works with three new techniques and steadily pushes forward the performance envelope (speed-accuracy tradeoff), towards high performance video object detection."}
{"_id": "a06805d4de16df54395e1700ec51797e5a65eb64", "title": "Fusion of Stereo Vision for Pedestrian Recognition using Convolutional Neural Networks", "text": "Pedestrian detection is a highly debated issue in the scientific community due to its outstanding importance for a large number of applications, especially in the fields of automotive safety, robotics and surveillance. In spite of the widely varying methods developed in recent years, pedestrian detection is still an open challenge whose accuracy and robustness has to be improved. Therefore, in this paper, we focus on improving the classification component in the pedestrian detection task on the Daimler stereo vision data set by adopting two approaches: 1) by combining three image modalities (intensity, depth and flow) to feed a unique convolutional neural network (CNN) and 2) by fusing the results of three independent CNNs."}
{"_id": "9ca0bcb74141aa9f96c067af8e0c515af42321f2", "title": "Millimeter-Wave Technology for Automotive Radar Sensors in the 77 GHz Frequency Band", "text": "The market for driver assistance systems based on millimeter-wave radar sensor technology is gaining momentum. In the near future, the full range of newly introduced car models will be equipped with radar based systems which leads to high volume production with low cost potential. This paper provides background and an overview of the state of the art of millimeter-wave technology for automotive radar applications, including two actual silicon based fully integrated radar chips. Several advanced packaging concepts and antenna systems are presented and discussed in detail. Finally measurement results of the fully integrated radar front ends are shown."}
{"_id": "2b7c330e7b3fbe96ea6f5342eae17d90095026cc", "title": "Science-based views of drug addiction and its treatment.", "text": ""}
{"_id": "286e38e59c0476943c4a8e6de11c740d4b318154", "title": "Massively parallel Monte Carlo for many-particle simulations on GPUs", "text": "Current trends in parallel processors call for the design of efficient massively parallel algorithms for scientific computi ng. Parallel algorithms for Monte Carlo simulations of thermodynamic en s mbles of particles have received little attention becaus e of the inherent serial nature of the statistical sampling. In this paper, we present a massively parallel method that obeys det ailed balance and implement it for a system of hard disks on the GPU. We repro duce results of serial high-precision Monte Carlo runs to ve rify the method. This is a good test case because the hard disk equa tion of state over the range where the liquid transforms into the solid is particularly sensitive to small deviations away from the balance conditions. On a Tesla K20, our GPU implementation e xecutes over one billion trial moves per second, which is 148 times fa ster than on a single Intel Xeon E5540 CPU core, enables 27 tim es better performance per dollar, and cuts energy usage by a fac tor of 13. With this improved performance we are able to calcu l te the equation of state for systems of up to one million hard dis ks. These large system sizes are required in order to probe th e nature of the melting transition, which has been debated for the las t forty years. In this paper we present the details of our comp utational method, and discuss the thermodynamics of hard disks separa t ly in a companion paper."}
{"_id": "21a4ea3908d22e11e1694adbcbefb25deb3bff21", "title": "CNN-based ranking for biomedical entity normalization", "text": "Most state-of-the-art biomedical entity normalization systems, such as rule-based systems, merely rely on morphological information of entity mentions, but rarely consider their semantic information. In this paper, we introduce a novel convolutional neural network (CNN) architecture that regards biomedical entity normalization as a ranking problem and benefits from semantic information of biomedical entities. The CNN-based ranking method first generates candidates using handcrafted rules, and then ranks the candidates according to their semantic information modeled by CNN as well as their morphological information. Experiments on two benchmark datasets for biomedical entity normalization show that our proposed CNN-based ranking method outperforms traditional rule-based method with state-of-the-art performance. We propose a CNN architecture that regards biomedical entity normalization as a ranking problem. Comparison results show that semantic information is beneficial to biomedical entity normalization and can be well combined with morphological information in our CNN architecture for further improvement."}
{"_id": "7ae3ecc05c4baa7c1780e233d9a81fa71d71b0f7", "title": "Trending topic prediction on social network", "text": "The fast information sharing on social network services generates more than thousands of topics every day. It is extremely important for business organizations and administrative decision makers to learn the popularity of these topics as quickly as possible. In this paper, we propose a prediction mode based on SVM with features of three subsets: quantity specific features, quality and user specific features which supplement each other. Furthermore, we divide topic data into time slices which is used as a unit of feature construction. Our findings suggest that the capability of our prediction model outperforms previous methods and also reveals that subsets of features play different role in the prediction of trending topics."}
{"_id": "71e9a23138a5d7c35b28bd98fd616c81719b1b7a", "title": "NoSQL Database: New Era of Databases for Big data Analytics - Classification, Characteristics and Comparison", "text": "Digital world is growing very fast and become more complex in the volume (terabyte to petabyte), variety (structured and un-structured and hybrid), velocity (high speed in growth) in nature. This refers to as \u2018Big Data\u2019 that is a global phenomenon. This is typically considered to be a data collection that has grown so large it can\u2019t be effectively managed or exploited using conventional data management tools: e.g., classic relational database management systems (RDBMS) or conventional search engines. To handle this problem, traditional RDBMS are complemented by specifically designed a rich set of alternative DBMS; such as NoSQL, NewSQL and Search-based systems. This paper motivation is to provide classification, characteristics and evaluation of NoSQL databases in Big Data Analytics. This report is intended to help users, especially to the organizations to obtain an independent understanding of the strengths and weaknesses of various NoSQL database approaches to supporting applications that process huge volumes of data."}
{"_id": "85dec08cd4daa177a71d0dd467f38c8d5f3de306", "title": "Towards parameter-free data mining", "text": "Most data mining algorithms require the setting of many input parameters. Two main dangers of working with parameter-laden algorithms are the following. First, incorrect settings may cause an algorithm to fail in finding the true patterns. Second, a perhaps more insidious problem is that the algorithm may report spurious patterns that do not really exist, or greatly overestimate the significance of the reported patterns. This is especially likely when the user fails to understand the role of parameters in the data mining process.Data mining algorithms should have as few parameters as possible, ideally none. A parameter-free algorithm would limit our ability to impose our prejudices, expectations, and presumptions on the problem at hand, and would let the data itself speak to us. In this work, we show that recent results in bioinformatics and computational theory hold great promise for a parameter-free data-mining paradigm. The results are motivated by observations in Kolmogorov complexity theory. However, as a practical matter, they can be implemented using any off-the-shelf compression algorithm with the addition of just a dozen or so lines of code. We will show that this approach is competitive or superior to the state-of-the-art approaches in anomaly/interestingness detection, classification, and clustering with empirical tests on time series/DNA/text/video datasets."}
{"_id": "434d9fb4fcb3f02d521639dbe5d9297bee617c0f", "title": "Do Differences in Password Policies Prevent Password Reuse?", "text": "Password policies were originally designed to make users pick stronger passwords. However, research has shown that they often fail to achieve this goal. In a systematic audit of the top 100 web sites in Germany, we explore if diversity in current real-world password policies prevents password reuse. We found that this is not the case: we are the first to show that a single password could hypothetically fulfill 99% of the policies under consideration. This is especially problematic because password reuse exposes users to similar risks as weak passwords. We thus propose a new approach for policies that focuses on password reuse and respects other websites to determine if a password should be accepted. This re-design takes current user behavior into account and potentially boosts the usability and security of password-based authentication."}
{"_id": "87b0b1abfbac7ef528c47a903f21de9bf1b869f8", "title": "Review on Machine Learning Techniques for Automatic Segmentation of Liver Images", "text": "The segmentation of liver from abdominal images has gained huge importance in medical imaging field since it is fundamental step to diagnose liver tumor and other liver diseases, liver volume measurement and 3D liver volume rendering. In this paper, recent automatic methods for liver image segmentation proposed in literature are reviewed. Generally, Liver image segmentation techniques can be semi-automatic and fully automatic. Under fully automatic segmentation category, various methods, approaches, recent advances and related problems has been explained. Brief summary of all discussed methods has been provided. Finally it is concluded that liver image segmentation is still an open problem since various drawbacks of proposed methods can be addressed. Keywords\u2014 Liver image Segmentation, Automatic Segmentation, Neural Network, Support Vector Machine, Clustering, Hybrid, Review."}
{"_id": "5813454b5dcc834e03a7a5b8187f752acec84c4b", "title": "Vacuum-assisted thermal annealing of CH3NH3PbI3 for highly stable and efficient perovskite solar cells.", "text": "Solar cells incorporating lead halide-based perovskite absorbers can exhibit impressive power conversion efficiencies (PCEs), recently surpassing 15%. Despite rapid developments, achieving precise control over the morphologies of the perovskite films (minimizing pore formation) and enhanced stability and reproducibility of the devices remain challenging, both of which are necessary for further advancements. Here we demonstrate vacuum-assisted thermal annealing as an effective means for controlling the composition and morphology of the CH(3)NH(3)PbI(3) films formed from the precursors of PbCl(2) and CH(3)NH(3)I. We identify the critical role played by the byproduct of CH(3)NH(3)Cl on the formation and the photovoltaic performance of the perovskite film. By completely removing the byproduct through our vacuum-assisted thermal annealing approach, we are able to produce pure, pore-free planar CH(3)NH(3)PbI(3) films with high PCE reaching 14.5% in solar cell device. Importantly, the removal of CH(3)NH(3)Cl significantly improves the device stability and reproducibility with a standard deviation of only 0.92% in PCE as well as strongly reducing the photocurrent hysteresis."}
{"_id": "b78d00a50745c1833e513b8c188d372a35a5a184", "title": "Interpreting BLEU/NIST Scores: How Much Improvement do We Need to Have a Better System?", "text": "Automatic evaluation metrics for Machine Translation (MT) systems, such as BLEU and the related NIST metric, are becoming increasingly important in MT. Yet, their behaviors are not fully understood. In this paper, we analyze some flaws in the BLEU/NIST metrics. With a better understanding of these problems, we can better interpret the reported BLEU/NIST scores. In addition, this paper reports a novel method of calculating the confidence intervals for BLEU/NIST scores using bootstrapping. With this method, we can determine whether two MT systems are significantly different from each other."}
{"_id": "b194b0ec0fc347693f131e99ffdcf2d54e554e8d", "title": "Low-power flyback converter with synchronous rectification for a system with AC power distribution", "text": "IEC 1000-3-2 regulations impose a reduced harmonic content on any converter with an input power higher than 75 W. However, if the power architecture of the system is based on small on-board converters, and the total power is higher than 75 W, IEC regulations must be fulfilled although each individual converter need not comply with the regulations. In this paper, one of the different possible solutions is presented. Each on-board converter has an active input current shaper (AICS) in order to reduce the input current harmonic content of each converter and, hence, to comply with IEC 1000-3-2 regulations. Moreover, two different types of AICSs were compared: the conventional one and a new type of AICS based on a full-wave rectifier."}
{"_id": "db65702ce3ae18fea886fbd936054e29f97457e1", "title": "An analysis of 3D point cloud reconstruction from light field images", "text": "Current methodologies for the generation of 3D point cloud from real world scenes rely upon a set of 2D images capturing the scene from several points of view. Novel plenoptic cameras sample the light field crossing the main camera lens creating a light field image. The information available in a plenoptic image must be processed in order to render a view or create the depth map of the scene. This paper analyses a method for the reconstruction of 3D models. The reconstruction of the model is obtained from a single image shot. Exploiting the properties of plenoptic images, a point cloud is generated and compared with a point cloud of the same object but generated with a different plenoptic camera."}
{"_id": "aea8513e0eded82c3137d5a206a2fcdc26aed5a3", "title": "Multi-robot long-term persistent coverage with fuel constrained robots", "text": "In this paper, we present an algorithm to solve the Multi-Robot Persistent Coverage Problem (MRPCP). Here, we seek to compute a schedule that will allow a fleet of agents to visit all targets of a given set while maximizing the frequency of visitation and maintaining a sufficient fuel capacity by refueling at depots. We also present a heuristic method to allow us to compute bounded suboptimal results in real time. The results produced by our algorithm will allow a team of robots to efficiently cover a given set of targets or tasks persistently over long periods of time, even when the cost to transition between tasks is dynamic."}
{"_id": "1bd1b7344044e8cc068a77b439fca011120c4bc3", "title": "Toward Scalable Verification for Safety-Critical Deep Networks", "text": "The increasing use of deep neural networks for safety-critical applications, such as autonomous driving and flight control, raises concerns about their safety and reliability. Formal verification can address these concerns by guaranteeing that a deep learning system operates as intended, but the state-of-the-art is limited to small systems. In this work-in-progress report we give an overview of our work on mitigating this difficulty, by pursuing two complementary directions: devising scalable verification techniques, and identifying design choices that result in deep learning systems that are more amenable to verification. ACM Reference Format: Lindsey Kuper, Guy Katz, Justin Gottschlich, Kyle Julian, Clark Barrett, and Mykel J. Kochenderfer. 2018. Toward Scalable Verification for SafetyCritical Deep Networks. In Proceedings of SysML Conference (SysML). ACM, New York, NY, USA, 3 pages."}
{"_id": "41b77840bf309358ecf45b16d00053ed12aea5c0", "title": "Identifying careless responses in survey data.", "text": "When data are collected via anonymous Internet surveys, particularly under conditions of obligatory participation (such as with student samples), data quality can be a concern. However, little guidance exists in the published literature regarding techniques for detecting careless responses. Previously several potential approaches have been suggested for identifying careless respondents via indices computed from the data, yet almost no prior work has examined the relationships among these indicators or the types of data patterns identified by each. In 2 studies, we examined several methods for identifying careless responses, including (a) special items designed to detect careless response, (b) response consistency indices formed from responses to typical survey items, (c) multivariate outlier analysis, (d) response time, and (e) self-reported diligence. Results indicated that there are two distinct patterns of careless response (random and nonrandom) and that different indices are needed to identify these different response patterns. We also found that approximately 10%-12% of undergraduates completing a lengthy survey for course credit were identified as careless responders. In Study 2, we simulated data with known random response patterns to determine the efficacy of several indicators of careless response. We found that the nature of the data strongly influenced the efficacy of the indices to identify careless responses. Recommendations include using identified rather than anonymous responses, incorporating instructed response items before data collection, as well as computing consistency indices and multivariate outlier analysis to ensure high-quality data."}
{"_id": "9c58c19b01b04ca7dbf122d59684bf05353cc77b", "title": "A new scale of social desirability independent of psychopathology.", "text": "It has long been recognized that personality test scores are influenced by non-test-relevant response determinants. Wiggins and Rumrill (1959) distinguish three approaches to this problem. Briefly, interest in the problem of response distortion has been concerned with attempts at statistical correction for \"faking good\" or \"faking bad\" (Meehl & Hathaway, 1946), the analysis of response sets (Cronbach, 1946,1950), and ratings of the social desirability of personality test items (Edwards, 19 5 7). A further distinction can be made, however, which results in a somewhat different division of approaches to the question of response distortion. Common to both the Meehl and Hathaway corrections for faking good and faking bad and Cronbach's notion of response sets is an interest in the test behavior of the subject(S). By social desirability, on the other hand, Edwards primarily means the \"scale value for any personality statement such that the scale value indicates the position of the statement on the social desirability continuum . . .\" (1957, p. 3). Social desirability, thus, has been used to refer to a characteristic of test items, i.e., their scale position on a social desirability scale. Whether the test behavior of 5s or the social desirability properties of items are the focus of interest, however, it now seems clear that underlying both these approaches is the concept of statistical deviance. In the construction of the MMPI K scale, for example, items were selected which differentiated between clinically normal persons producing abnormal te\u00a5Tpfpfiles~snd^cTinically abnormal individuals with abnormal test profiles, and between clinically abnormal persons with normal test profiles and abnormal 5s whose test records were abnormal. Keyed responses to the K scale items tend to be statistically deviant in the parent populations. Similarly, the development of the Edwards Social Desirability Scale (SDS) illustrates this procedure. Items were drawn from various MMPI scales (F, L, K, and the Manifest Anxiety Scale [Taylor, 1953]) and submitted to judges who categorized them as either socially desirable or socially undesirable. Only items on which there was unanimous agreement among the 10 judges were included in the SDS. It seems clear that the items in Edwards SDS would, of necessity, have extreme social desirability scale positions or, in other words, be statistically deviant. Some unfortunate consequences follow from the strict use of the statistical deviance model in the development of-sOcialTtesirSbTBty scales. With items drawn from the MMPI, it is apparent that in addition to their scalability for social desirability the items may also be characterized by their content which,^n a general sense, has pathological implications. When a social desrrabtltty^scale constructed according to this procedure is then applied to a college student population, the meaning of high social desirability scores is not at all clear. When 5s given the Edwards SDS deny, for example, that their sleep is fitful and disturbed (Item 6) or that they worry quite a bit over possible misfortunes (Item 35), it cannot be determined whether these responses are attributable to social desirability or to a genuine absence of such symptoms. The probability of occurrence of the symptoms represented in MMPI items (and incorportated in the SDS)"}
{"_id": "b65c9bac7a42ac4a52a7be4d8d58b152d9124d11", "title": "Simultaneous administration of the Rosenberg Self-Esteem Scale in 53 nations: exploring the universal and culture-specific features of global self-esteem.", "text": "The Rosenberg Self-Esteem Scale (RSES) was translated into 28 languages and administered to 16,998 participants across 53 nations. The RSES factor structure was largely invariant across nations. RSES scores correlated with neuroticism, extraversion, and romantic attachment styles within nearly all nations, providing additional support for cross-cultural equivalence of the RSES. All nations scored above the theoretical midpoint of the RSES, indicating generally positive self-evaluation may be culturally universal. Individual differences in self-esteem were variable across cultures, with a neutral response bias prevalent in more collectivist cultures. Self-competence and self-liking subscales of the RSES varied with cultural individualism. Although positively and negatively worded items of the RSES were correlated within cultures and were uniformly related to external personality variables, differences between aggregates of positive and negative items were smaller in developed nations. Because negatively worded items were interpreted differently across nations, direct cross-cultural comparisons using the RSES may have limited value."}
{"_id": "5d86a5dbcb22cee5b931d8e9d6a1a95d6d8f394d", "title": "Assessing psychopathic attributes in a noninstitutionalized population.", "text": "The present study examined antisocial dispositions in 487 university students. Primary and secondary psychopathy scales were developed to assess a protopsychopathic interpersonal philosophy. An antisocial action scale also was developed for purposes of validation. The primary, secondary, and antisocial action scales were correlated with each other and with boredom susceptibility and disinhibition but not with experience seeking and thrill and adventure seeking. Secondary psychopathy was associated with trait anxiety. Multiple regression analysis revealed that the strongest predictors of antisocial action were disinhibition, primary psychopathy, secondary psychopathy, and sex, whereas thrill and adventure seeking was a negative predictor. This argues against a singular behavioral inhibition system mediating both antisocial and risk-taking behavior. These findings are also consistent with the view that psychopathy is a continuous dimension."}
{"_id": "822838560825f0b58587d33ce8d28b743b8b851a", "title": "Wound healing activity of the fruit skin of Punica granatum.", "text": "The skin of the fruit and the bark of Punica granatum are used as a traditional remedy against diarrhea, dysentery, and intestinal parasites. The fruit skin extract of P. granatum was tested for its wound healing activity in rats using an excision wound model. The animals were divided into three groups of six each. The experimental group of animals was topically treated with P. granatum at a dose of 100 mg/kg every day for 15 days, while the controls and standard group animals were treated with petroleum jelly and mupirocin ointment, respectively. Phytochemical analysis of the extract revealed the presence of saponins, triterpenes, tannins, alkaloids, flavonoids, and cardiac glycosides. Extract-treated animals exhibited 95% reduction in the wound area when compared with controls (84%), which was statistically significant (P<.01). The extract-treated wounds were found to epithelize faster compared with controls. The hydroxyproline content of extract-treated animals was significantly higher than controls (P<.05). The fruit skin extract did not show any antimicrobial activity against the microrganisms tested. P. granatum promotes significant wound healing in rats and further evaluation of this activity in humans is suggested."}
{"_id": "652e78b85a9b40a4dd7f389e862ff2dc9ac9c661", "title": "Functional near infrared spectroscopy (fNIRS): an emerging neuroimaging technology with important applications for the study of brain disorders.", "text": "Functional near-infrared spectroscopy (fNIRS) is an emerging functional neuroimaging technology offering a relatively non-invasive, safe, portable, and low-cost method of indirect and direct monitoring of brain activity. Most exciting is its potential to allow more ecologically valid investigations that can translate laboratory work into more realistic everyday settings and clinical environments. Our aim is to acquaint clinicians and researchers with the unique and beneficial characteristics of fNIRS by reviewing its relative merits and limitations vis-\u00e0-vis other brain-imaging technologies such as functional magnetic resonance imaging (fMRI). We review cross-validation work between fMRI and fNIRS, and discuss possible reservations about its deployment in clinical research and practice. Finally, because there is no comprehensive review of applications of fNIRS to brain disorders, we also review findings from the few studies utilizing fNIRS to investigate neurocognitive processes associated with neurological (Alzheimer's disease, Parkinson's disease, epilepsy, traumatic brain injury) and psychiatric disorders (schizophrenia, mood disorders, anxiety disorders)."}
{"_id": "4124dcce3d6ab7a4e1b8987c490277d2244abdf0", "title": "Dose the Use of Big Data Analytics Guarantee a High Firm Performance? An Empirical Study Bases on the Dynamic Capabilities Theory", "text": "For the sake of enriching relevant research field, this study focuses on two main questions: (1) What is the effect of DBA usage on firm performance (including market performance and operational performance)? and (2) What factors that drive organizations to use big data analytics (BDA) are key drivers, based on dynamic capabilities theory? And we also identified the moderating effect of environmental dynamism between BDA use and firm performance. Furthermore, we introduce the need pull/technology push framework to identify and theorize paths via which factors influence the use of BDA. Purely from the need pull/technology push theory perspective, we propose that perceived usefulness of BDA should belong to need pull factor. Thus, we select perceived usefulness and satisfaction level with existing IT technology as need pull factors, BDA use of competitors and technology compatibility as technology push factors for big data analytics (BDA) use, and examines if these factors have a positive effect on big data analytics (BDA)."}
{"_id": "6abe5eda71c3947013c59bbae700402813a1bc7f", "title": "Performance Comparison between Five NoSQL Databases", "text": "Recently NoSQL databases and their related technologies are developing rapidly and are widely applied in many scenarios with their BASE (Basic Availability, Soft state, Eventual consistency) features. At present, there are more than 225 kinds of NoSQL databases. However, the overwhelming amount and constantly updated versions of databases make it challenging for people to compare their performance and choose an appropriate one. This paper is trying to evaluate the performance of five NoSQL clusters (Redis, MongoDB, Couchbase, Cassandra, HBase) by using a measurement tool \u2013 YCSB (Yahoo! Cloud Serving Benchmark), explain the experimental results by analyzing each database's data model and mechanism, and provide advice to NoSQL developers and users."}
{"_id": "dbf2fab97f9570c196995f86eb685d8ecc2a7043", "title": "Cathodic protection by zinc sacrificial anodes: impact on marine sediment metallic contamination.", "text": "Cathodic protection by sacrificial zinc anodes is often applied to prevent immerged metallic structures from corrosion. But this technique induces the zinc anodes dissolution, which can induce marine sediments and seawater contamination. A large scale experiment, in natural seawater, was conducted during 12 months, in order to evaluate the potential environmental impact of this continuous zinc dissolution, and of some necessary cleaning operations of the anodes surfaces. The heavy metal (Cr, Cu, Pb and Zn) concentration in water and sediment samples was monitored. A sequential extraction procedure was applied on sediment samples to differentiate the zinc mobile fractions from the residual one. A significant increase of zinc concentration was observed in water as well as in the surface sediments under the specific operating conditions. Sediments then become a secondary pollution source, as the sorbed labile zinc can be remobilized to seawater."}
{"_id": "0c3a860f8e5452daa66b4623011567b42856f9a2", "title": "Larger-Context Language Modelling with Recurrent Neural Network", "text": "In this work, we propose a novel method to incorporate corpus-level discourse information into language modelling. We call this larger-context language model. We introduce a late fusion approach to a recurrent language model based on long short-term memory units (LSTM), which helps the LSTM unit keep intra-sentence dependencies and inter-sentence dependencies separate from each other. Through the evaluation on three corpora (IMDB, BBC, and PennTree Bank), we demonstrate that the proposed model improves perplexity significantly. In the experiments, we evaluate the proposed approach while varying the number of context sentences and observe that the proposed late fusion is superior to the usual way of incorporating additional inputs to the LSTM. By analyzing the trained largercontext language model, we discover that content words, including nouns, adjectives and verbs, benefit most from an increasing number of context sentences. This analysis suggests that larger-context language model improves the unconditional language model by capturing the theme of a document better and more easily."}
{"_id": "86e43d1b37518d10e803a2e6dc4bc954c88fbcbe", "title": "Title Parental Expectations and Children ' s Academic Performance in Sociocultural Context", "text": "In this paper, we review research on parental expectations and their effects on student achievement within and across diverse racial and ethnic groups. Our review suggests that the level of parental expectations varies by racial/ethnic group, and that students' previous academic performance is a less influential determinant of parental expectations among racial/ethnic minority parents than among European American parents. To explain this pattern, we identify three processes associated with race/ethnicity that moderate the relation between students' previous performance and parental expectations. Our review also indicates that the relation of parental expectations to concurrent or future student achievement outcomes is weaker for racial/ethnic minority families than for European American families. We describe four mediating processes by which high parental expectations may influence children's academic trajectories and show how these processes are associated with racial/ethnic status. The article concludes with a discussion of educational implications as well as suggestions for future research."}
{"_id": "f24a1e2c670b0464d1b5503ac1bdbe8e6f0ce419", "title": "Frequency Enhancement in Miller Divider with Injection-Locking Portrait", "text": "In this paper, we present a methodology to enhance the operating range of a Miller frequency divider. Conventionally Miller frequency dividers are visualized as mixers. We present how injection locking portrait of Miller dividers helps us to understand the dynamics of Miller frequency dividers. Further, we discuss how to enhance the operating range while optimizing the divider between power dissipation and operating range. The Miller divider with the proposed methodology has been designed in a standard CMOS 65 nm (low leakage) process. Post layout simulation results show that Miller divider with optimization techniques presented in this paper can operate over the frequency range of 13 GHz for 0dBm input power at center frequency of 61.5 GHz."}
{"_id": "54f5d15d64b2f404c89fc1817d2eb1cda6c0fe45", "title": "Does Attendance Matter ? An Examination of Student Attitudes , Participation , Performance and Attendance", "text": "Non attendance of lectures and tutorials appears to be a growing trend. The literature suggests many possible reasons including students\u2019 changing lifestyle, attitudes, teaching and technology. This paper looks at the reasons for non attendance of students in the Faculty of Commerce at the University of Wollongong and identifies relationships between attendance, participation and performance. The results indicate that there are valid reasons for non attendance that are both in the control of learners and teachers. There are also clear benefits for students to be gained in attendance; however, changes in the way we learn, teach, assess and use technology are recommended if we wish to reverse the trend. This journal article is available in Journal of University Teaching & Learning Practice: http://ro.uow.edu.au/jutlp/vol3/iss2/3 Jour na l o f Un ive rs i t y Teach ing and Lear n ing Pr ac t i ce Does Attendance Matter? An Examination of Student Attitudes, Participation, Performance and Attendance"}
{"_id": "ada12ff23e862c86e4271bc6cdaaa6f92dc1bf90", "title": "Applied machine vision of plants: a review with implications for field deployment in automated farming operations", "text": "Automated visual assessment of plant condition, specifically foliage wilting, reflectance and growth parameters, using machine vision has potential use as input for real-time variable-rate irrigation and fertigation systems in precision agriculture. This paper reviews the research literature for both outdoor and indoor applications of machine vision of plants, which reveals that different environments necessitate varying levels of complexity in both apparatus and nature of plant measurement which can be achieved. Deployment of systems to the field environment in precision agriculture applications presents the challenge of overcoming image variation caused by the diurnal and seasonal variation of sunlight. From the literature reviewed, it is argued that augmenting a monocular RGB vision system with additional sensing techniques potentially reduces image analysis complexity while enhancing system robustness to environmental variables. Therefore, machine vision systems with a foundation in optical and lighting design may potentially expedite the transition from laboratory and research prototype to robust field tool."}
{"_id": "6cf6dc8bb7995a1f529d51b11f1677e045337337", "title": "SmartPaste: Learning to Adapt Source Code", "text": "Deep Neural Networks have been shown to succeed at a range of natural language tasks such as machine translation and text summarization. While tasks on source code (ie, formal languages) have been considered recently, most work in this area does not attempt to capitalize on the unique opportunities offered by its known syntax and structure. In this work, we introduce SmartPaste, a first task that requires to use such information. The task is a variant of the program repair problem that requires to adapt a given (pasted) snippet of code to surrounding, existing source code. As first solutions, we design a set of deep neural models that learn to represent the context of each variable location and variable usage in a data flow-sensitive way. Our evaluation suggests that our models can learn to solve the SmartPaste task in many cases, achieving 58.6% accuracy, while learning meaningful representation of variable usages."}
{"_id": "0760b3baa196cd449d2c81604883815a6fc73b6a", "title": "Analysis and reduction of quadrature errors in the material point method ( MPM )", "text": "The material point method (MPM) has demonstrated itself as a computationally effective particle method for solving solid mechanics problems involving large deformations and/or fragmentation of structures, which are sometimes problematic for finite element methods (FEMs). However, similar to most methods that employ mixed Lagrangian (particle) and Eulerian strategies, analysis of the method is not straightforward. The lack of an analysis framework for MPM, as is found in FEMs, makes it challenging to explain anomalies found in its employment and makes it difficult to propose methodology improvements with predictable outcomes. In this paper we present an analysis of the quadrature errors found in the computation of (material) internal force in MPM and use this analysis to direct proposed improvements. In particular, we demonstrate that lack of regularity in the grid functions used for representing the solution to the equations of motion can hamper spatial convergence of the method. We propose the use of a quadratic B-spline basis for representing solutions on the grid, and we demonstrate computationally and explain theoretically why such a small change can have a significant impact on the reduction in the internal force quadrature error (and corresponding \u2018grid crossing error\u2019) often experienced when using MPM. Copyright q 2008 John Wiley & Sons, Ltd."}
{"_id": "fabddc3b57c367eef7b9f29b8bedac03c00efaf6", "title": "Handbook of Natural Language Processing", "text": "Handbook of Natural Language Processing and Machine Translation \u00b7 Learning Natural Language Processing With Python and NLTK p.1 Tokenizing words and By far, the most popular toolkit or API to do natural language processing... Nitin Indurkhya, Fred J. Damerau, Handbook of Natural Language Processing, Second Edition. Chapman & Hall/CRC Machine Learning & Pattern Recognition. Arabic Natural Language Processing, Machine Translation Handbook of Natural Language Processing and Machine Translation, 164-175, 2011. 2011."}
{"_id": "569047ef761bc13403c68611bf1d5fbce945cbde", "title": "A construction of cryptography system based on quantum neural network", "text": "Quantum neural networks (QNNs) have been explored as one of the best approach for improving the computational efficiency of neural networks. Because of the powerful and fantastic performance of quantum computation, some researchers have begun considering the implications of quantum computation on the field of artificial neural networks (ANNs).The purpose of this paper is to introduce an application of QNNs in construction of cryptography system in which two networks exchange their outputs (in qubits) and the key to be synchronized between two communicating parties. This system is based on multilayer qubit QNNs trained with back-propagation algorithm."}
{"_id": "fd26f8069cfa528463fdf8a90864587e997ee86d", "title": "A Multi-View Fusion Neural Network for Answer Selection", "text": "Community question answering aims at choosing the most appropriate answer for a given question, which is important in many NLP applications. Previous neural network-based methods consider several different aspects of information through calculating attentions. These different kinds of attentions are always simply summed up and can be seen as a \u201csingle view\u201d, causing severe information loss. To overcome this problem, we propose a Multi-View Fusion Neural Network, where each attention component generates a \u201cview\u201d of the QA pair and a fusion RNN integrates the generated views to form a more holistic representation. In this fusion RNN method, a filter gate collects important information of input and directly adds it to the output, which borrows the idea of residual networks. Experimental results on the WikiQA and SemEval-2016 CQA datasets demonstrate that our proposed model outperforms the state-of-the-art methods."}
{"_id": "a6b49df6de5a1246930ae084f819d4c1d813df52", "title": "Calming effects of deep touch pressure in patients with autistic disorder, college students, and animals.", "text": "ABSTRACT Many people with autistic disorder have problems with oversensitivity to both touch and sound. The author (an autistic person) developed a device that delivers deep touch pressure to help her learn to tolerate touching and to reduce anxiety and nervousness. The \"squeeze machine\" applies lateral, inwardly directed pressure to both lateral aspects of a person's entire body, by compressing the user between two foam-padded panels. Clinical observations and several studies suggest that deep touch pressure is therapeutically beneficial for both children with autistic disorder and probably children with attention-deficit hyperactivity disorder. Only minor and occasional adverse effects have been noted. Data are reported that show a similar calming effect in nonreferred college students. A review of the animal literature reveals that animals have similar calming reactions, and also suggests possible additional physiological effects of deep touch pressure. At present, there are increasing anecdotal reports of the clinical value of the squeeze machine, including suggestions that it can be used to reduce required doses of psychostimulant medications. More clinical studies are needed to evaluate the potential role of this seemingly beneficial form of \"physiological\" stimulation."}
{"_id": "e5645ce314bc346b40a57bb5856c4b04bed6235c", "title": "\"Oops, I Did It Again\" - Security of One-Time Signatures Under Two-Message Attacks", "text": "One-time signatures (OTS) are called one-time, because the accompanying security reductions only guarantee security under single-message attacks. However, this does not imply that efficient attacks are possible under two-message attacks. Especially in the context of hash-based OTS (which are basic building blocks of recent standardization proposals) this leads to the question if accidental reuse of a one-time key pair leads to immediate loss of security or to graceful degradation. In this work we analyze the security of the most prominent hash-based OTS, Lamport\u2019s scheme, its optimized variant, and WOTS, under different kinds of two-message attacks. Interestingly, it turns out that the schemes are still secure under two message attacks, asymptotically. However, this does not imply anything for typical parameters. Our results show that for Lamport\u2019s scheme, security only slowly degrades in the relevant attack scenarios and typical parameters are still somewhat secure, even in case of a two-message attack. As we move on to optimized Lamport and its generalization WOTS, security degrades faster and faster, and typical parameters do not provide any reasonable level of security under two-message attacks."}
{"_id": "6b9c790904c8aea2b616388361cec9bef92e8bab", "title": "Multi-granularity Generator for Temporal Action Proposal", "text": "Temporal action proposal generation is an important task, aiming to localize the video segments containing human actions in an untrimmed video. In this paper, we propose a multi-granularity generator (MGG) to perform the temporal action proposal from different granularity perspectives, relying on the video visual features equipped with the position embedding information. First, we propose to use a bilinear matching model to exploit the rich local information within the video sequence. Afterwards, two components, namely segment proposal generator (SPG) and frame actionness generator (FAG), are combined to perform the task of temporal action proposal at two distinct granularities. SPG considers the whole video in the form of feature pyramid and generates segment proposals from one coarse perspective, while FAG carries out a finer actionness evaluation for each video frame. Our proposed MGG can be trained in an end-to-end fashion. Through temporally adjusting the segment proposals with fine-grained information based on frame actionness, MGG achieves the superior performance over state-of-the-art methods on the public THUMOS-14 and ActivityNet-1.3 datasets. Moreover, we employ existing action classifiers to perform the classification of the proposals generated by MGG, leading to significant improvements compared against the competing methods for the video detection task."}
{"_id": "8738cf489b103d4c9fc42f17b1acf813e3495fdb", "title": "The human amygdala and the emotional evaluation of sensory stimuli", "text": "A wealth of animal data implicates the amygdala in aspects of emotional processing. In recent years, functional neuroimaging and neuropsychological studies have begun to refine our understanding of the functions of the amygdala in humans. This literature offers insights into the types of stimuli that engage the amygdala and the functional consequences that result from this engagement. Specific conclusions and hypotheses include: (1) the amygdala activates during exposure to aversive stimuli from multiple sensory modalities; (2) the amygdala responds to positively valenced stimuli, but these responses are less consistent than those induced by aversive stimuli; (3) amygdala responses are modulated by the arousal level, hedonic strength or current motivational value of stimuli; (4) amygdala responses are subject to rapid habituation; (5) the temporal characteristics of amygdala responses vary across stimulus categories and subject populations; (6) emotionally valenced stimuli need not reach conscious awareness to engage amygdala processing; (7) conscious hedonic appraisals do not require amygdala activation; (8) activation of the amygdala is associated with modulation of motor readiness, autonomic functions, and cognitive processes including attention and memory; (9) amygdala activations do not conform to traditional models of the lateralization of emotion; and (10) the extent and laterality of amygdala activations are related to factors including psychiatric status, gender and personality. The strengths and weakness of these hypotheses and conclusions are discussed with reference to the animal literature."}
{"_id": "cd31ecb3b58d1ec0d8b6e196bddb71dd6a921b6d", "title": "Economic dispatch for a microgrid considering renewable energy cost functions", "text": "Microgrids are operated by a customer or a group of customers for having a reliable, clean and economic mode of power supply to meet their demand. Understanding the economics of system is a prime factor which really depends on the cost/kWh of electricity supplied. This paper presents an easy and simple method for analyzing the dispatch rate of power. An isolated microgrid with solar and wind is considered in this paper. Generation cost functions are modeled with the inclusion of investment cost and maintenance cost of resources. Economic dispatch problem is solved using the reduced gradient method. The effects on total generation cost, with the inclusion of wind energy and solar energy into a microgrid is studied and found the most profitable solution by considering different practical scenarios. The paper gives a detailed correlation between the cost function, investment cost, lifetime and the fluctuant energy forecasting of wind and solar resources. It also discusses the advantages of including the renewable energy credits for the solar panel."}
{"_id": "a4e14abdd4b7ebe669cdb9f2354b9ef7b0ba83f6", "title": "All-digital PLL and transmitter for mobile phones", "text": "We present the first all-digital PLL and polar transmitter for mobile phones. They are part of a single-chip GSM/EDGE transceiver SoC fabricated in a 90 nm digital CMOS process. The circuits are architectured from the ground up to be compatible with digital deep-submicron CMOS processes and be readily integrateable with a digital baseband and application processor. To achieve this, we exploit the new paradigm of a deep-submicron CMOS process environment by leveraging on the fast switching times of MOS transistors, the fine lithography and the precise device matching, while avoiding problems related to the limited voltage headroom. The transmitter architecture is fully digital and utilizes the wideband direct frequency modulation capability of the all-digital PLL. The amplitude modulation is realized digitally by regulating the number of active NMOS transistor switches in accordance with the instantaneous amplitude. The conventional RF frequency synthesizer architecture, based on a voltage-controlled oscillator and phase/frequency detector and charge-pump combination, has been replaced with a digitally controlled oscillator and a time-to-digital converter. The transmitter performs GMSK modulation with less than 0.5/spl deg/ rms phase error, -165 dBc/Hz phase noise at 20 MHz offset, and 10 /spl mu/s settling time. The 8-PSK EDGE spectral mask is met with 1.2% EVM. The transmitter occupies 1.5 mm/sup 2/ and consumes 42 mA at 1.2 V supply while producing 6 dBm RF output power."}
{"_id": "fbee79fe48239dbe004431e1cb6943e53957a1f1", "title": "A Design Procedure for All-Digital Phase-Locked Loops Based on a Charge-Pump Phase-Locked-Loop Analogy", "text": "In this brief, a systematic design procedure for a second-order all-digital phase-locked loop (PLL) is proposed. The design procedure is based on the analogy between a type-II second-order analog PLL and an all-digital PLL. The all-digital PLL design inherits the frequency response and stability characteristics of the analog prototype PLL"}
{"_id": "4b4ce1d77ffa509ba5849012cdf40286ac9a5eeb", "title": "Analysis and modeling of bang-bang clock and data recovery circuits", "text": "A large-signal piecewise-linear model is proposed for bang-bang phase detectors that predicts characteristics of clock and data recovery circuits such as jitter transfer, jitter tolerance, and jitter generation. The results are validated by 1-Gb/s and 10-Gb/s CMOS prototypes using an Alexander phase detector and an LC oscillator."}
{"_id": "6e8c09f6157371685039c922cc399f37243f0409", "title": "Analysis of charge-pump phase-locked loops", "text": "In this paper, we present an exact analysis for third-order charge-pump phase-locked loops using state equations. Both the large-signal lock acquisition process and the small-signal linear tracking behavior are described using this analysis. The nonlinear state equations are linearized for the small-signal condition and the z-domain noise transfer functions are derived. A comparison to some of the existing analysis methods such as the impulse-invariant transformation and s-domain analysis is provided. The effect of the loop parameters and the reference frequency on the loop phase margin and stability is analyzed. The analysis is verified using behavioral simulations in MATLAB and SPECTRE."}
{"_id": "7a641aa209ab3cd2fe7c8b268a39c9a56850d956", "title": "Digitally Controlled Oscillator ( DCO )-Based Architecture for RF Frequency Synthesis in a Deep-Submicrometer CMOS Process", "text": "A novel digitally controlled oscillator (DCO)-based architecture for frequency synthesis in wireless RF applications is proposed and demonstrated. It deliberately avoids any use of an analog tuning voltage control line. Fine frequency resolution is achieved through high-speed dithering. Other imperfections of analog circuits are compensated through digital means. The presented ideas enable the employment of fully-digital frequency synthesizers using sophisticated signal processing algorithms, realized in the most advanced deep-submicrometer digital CMOS processes which allow almost no analog extensions. They also promote costeffective integration with the digital back-end onto a single silicon die. The demonstrator test chip has been fabricated in a digital 0.13m CMOS process together with a DSP, which acts as a digital baseband processor with a large number of digital gates in order to investigate noise coupling. The phase noise is112 dBc/Hz at 500-kHz offset. The close-in spurious tones are below 62 dBc, while the far-out spurs are below 80 dBc. The presented ideas have been incorporated in a commercial Bluetooth transceiver."}
{"_id": "4b0a5d72121400da4979830d632bca7e77361a19", "title": "Exploiting the use of recurrent neural networks for driver behavior profiling", "text": "Driver behavior affects traffic safety, fuel/energy consumption and gas emissions. The purpose of driver behavior profiling is to understand and have a positive influence on driver behavior. Driver behavior profiling tasks usually involve an automated collection of driving data and the application of computer models to classify what characterizes the aggressiveness of drivers. Different sensors and classification methods have been employed for this task, although low-cost solutions, high performance and collaborative sensing remain open questions for research. This paper makes an investigation with different Recurrent Neural Networks (RNN), aiming to classify driving events employing data collected by smartphone accelerometers. The results show that specific configurations of RNN upon accelerometer data provide high accuracy results, being a step towards the development of safer transportation systems."}
{"_id": "1bfe2afcdb8c71f664fb03fb16c3feff661586a6", "title": "Automotive Ethernet: In-vehicle networking and smart mobility", "text": "This paper discusses novel communication network topologies and components and describes an evolutionary path of bringing Ethernet into automotive applications with focus on electric mobility. For next generation in-vehicle networking, the automotive industry identified Ethernet as a promising candidate besides CAN and FlexRay. Ethernet is an IEEE standard and is broadly used in consumer and industry domains. It will bring a number of changes for the design and management of in-vehicle networks and provides significant re-use of components, software, and tools. Ethernet is intended to connect inside the vehicle high-speed communication requiring sub-systems like Advanced Driver Assistant Systems (ADAS), navigation and positioning, multimedia, and connectivity systems. For hybrid (HEVs) or electric vehicles (EVs), Ethernet will be a powerful part of the communication architecture layer that enables the link between the vehicle electronics and the Internet where the vehicle is a part of a typical Internet of Things (IoT) application. Using Ethernet for vehicle connectivity will effectively manage the huge amount of data to be transferred between the outside world and the vehicle through vehicle-to-x (V2V and V2I or V2I+I) communication systems and cloud-based services for advanced energy management solutions. Ethernet is an enabling technology for introducing advanced features into the automotive domain and needs further optimizations in terms of scalability, cost, power, and electrical robustness in order to be adopted and widely used by the industry."}
{"_id": "8774e74ac5d7a93ccecb99f32c81117d27500e5a", "title": "Root Gap Correction with a Deep Inpainting Model", "text": "Imaging roots of growing plants in a non-invasive and affordable fashion has been a long-standing problem in image-assisted plant breeding and phenotyping. One of the most affordable and diffuse approaches is the use of mesocosms, where plants are grown in soil against a glass surface that permits the roots visualization and imaging. However, due to soil and the fact that the plant root is a 2D projection of a 3D object, parts of the root are occluded. As a result, even under perfect root segmentation, the resulting images contain several gaps that may hinder the extraction of finely grained root system architecture traits. We propose an effective deep neural network to recover gaps from disconnected root segments. We train a fully supervised encoder-decoder deep CNN that, given an image containing gaps as input, generates an inpainted version, recovering the missing parts. Since in real data ground-truth is lacking, we use synthetic root images [10] that we artificially perturb by introducing gaps to train and evaluate our approach. We show that our network can work both in dicot and monocot cases in reducing root gaps. We also show promising exemplary results in real data from chickpea root architectures."}
{"_id": "eeaf71f0da6fc681de155b5087276665b9a9225e", "title": "Punching shear capacity of continuous slabs", "text": "Provisions for punching shear design of reinforced concrete slabs are usually calibrated on the basis of results from tests on isolated specimens that simulate the slab zone within the points of contraflexure around a column. However, the punching behavior of interior slab-column connections in actual continuous slabs without transverse reinforcement may be influenced by the effects of moment redistribution and compressive membrane action, which can lead to higher punching strengths and lower deformation capacities compared to those in isolated specimens. This paper discusses these behavioral differences on the basis of experiments performed on symmetric edge-restrained slabs and investigates available test data by means of a numerical model. A simplified calculation method (based on the Critical Shear Crack Theory) that accounts for these effects is also proposed. The calculation model shows consistent agreement with the results of the numerical evaluation and is sufficiently simple to be used in design and assessment."}
{"_id": "5034e8b6965dc201f5fca6b9d119005ed10b5352", "title": "Digital Dead Time Auto-Tuning for Maximum Efficiency Operation of Isolated DC-DC Converters", "text": "The rapidly growing use of digital control in Distributed Power Architectures (DPA) has been primarily driven by the increasing need of sophisticated Power Management functions. However, specifically designed control algorithms can also be exploited for optimizing the system performance. In this paper we present a new auto-tuning method and system that makes it possible to operate an isolated DC-DC converter at maximum efficiency. The auto-tuning performs an optimum adjustment of both primary side dead time and secondary side conduction times based on the measurement of the input current. It does not require any additional external components since current sensing functions are already implemented for power management purposes. Experimental measurements performed on an asymmetrical driven half-bridge DC-DC converter demonstrate the effectiveness of our solution and its robustness to component variations."}
{"_id": "9055ab8dfa071f351fbe302938358601769607cd", "title": "A Hybrid Model to Detect Malicious Executables", "text": "We present a hybrid data mining approach to detect malicious executables. In this approach we identify important features of the malicious and benign executables. These features are used by a classifier to learn a classification model that can distinguish between malicious and benign executables. We construct a novel combination of three different kinds of features: binary n-grams, assembly n-grams, and library function calls. Binary features are extracted from the binary executables, whereas assembly features are extracted from the disassembled executables. The function call features are extracted from the program headers. We also propose an efficient and scalable feature extraction technique. We apply our model on a large corpus of real benign and malicious executables. We extract the above mentioned features from the data and train a classifier using support vector machine. This classifier achieves a very high accuracy and low false positive rate in detecting malicious executables. Our model is compared with other feature-based approaches, and found to be more efficient in terms of detection accuracy and false alarm rate."}
{"_id": "8879309ca4222235246039a0e77983b313894ef3", "title": "Scalable knowledge harvesting with high precision and high recall", "text": "Harvesting relational facts from Web sources has received great attention for automatically constructing large knowledge bases. Stateof-the-art approaches combine pattern-based gathering of fact candidates with constraint-based reasoning. However, they still face major challenges regarding the trade-offs between precision, recall, and scalability. Techniques that scale well are susceptible to noisy patterns that degrade precision, while techniques that employ deep reasoning for high precision cannot cope with Web-scale data.\n This paper presents a scalable system, called PROSPERA, for high-quality knowledge harvesting. We propose a new notion of ngram-itemsets for richer patterns, and use MaxSat-based constraint reasoning on both the quality of patterns and the validity of fact candidates.We compute pattern-occurrence statistics for two benefits: they serve to prune the hypotheses space and to derive informative weights of clauses for the reasoner. The paper shows how to incorporate these building blocks into a scalable architecture that can parallelize all phases on a Hadoop-based distributed platform. Our experiments with the ClueWeb09 corpus include comparisons to the recent ReadTheWeb experiment. We substantially outperform these prior results in terms of recall, with the same precision, while having low run-times."}
{"_id": "6ae8e0ea13abfaaeed3fb4c720716e0e89667094", "title": "Manufacturing Analytics and Industrial Internet of Things", "text": "Over the last two decades, manufacturing across the globe has evolved to be more intel-ligent and data driven. In the age of industrial Internet of Things, a smart production unit can be perceived as a large connected industrial system of materials, parts, machines, tools, inventory, and logistics that can relay data and communicate with each other. While, traditionally, the focus has been on machine health and predictive maintenance, the manufacturing industry has also started focusing on analyzing data from the entire production line. These applications bring a new set of analytics challenges. Unlike tradi-tional data mining analysis, which consists of lean datasets (that is, datasets with few fea-tures), manufacturing has fat datasets. In addition, previous approaches to manufacturing analytics restricted themselves to small time periods of data. The latest advances in big data analytics allows researchers to do a deep dive into years of data. Bosch collects and utilizes all available information about its products to increase its understanding of complex linear and nonlinear relationships between parts, machines, and assembly lines. This helps in use cases such as the discovery of the root cause of internal defects. This article presents a case study and provides detail about challenges and approaches in data extraction, modeling, and visualization."}
{"_id": "d771ce5fefb6e853ab176a09204556ae663e682f", "title": "Optimized Broadcast for Deep Learning Workloads on Dense-GPU InfiniBand Clusters: MPI or NCCL?", "text": "Traditionally, MPI runtimes have been designed for clusters with a large number of nodes. However, with the advent of MPI+CUDA applications and dense multi-GPU systems, it has become important to design efficient communication schemes. This coupled with new application workloads brought forward by Deep Learning frameworks like Caffe and Microsoft CNTK pose additional design constraints due to very large message communication of GPU buffers during the training phase. In this context, special-purpose libraries like NCCL have been proposed. In this paper, we propose a pipelined chain (ring) design for the MPI_Bcast collective operation along with an enhanced collective tuning framework in MVAPICH2-GDR that enables efficient intra-/internode multi-GPU communication. We present an in-depth performance landscape for the proposed MPI_Bcast schemes along with a comparative analysis of NCCL Broadcast and NCCL-based MPI_Bcast. The proposed designs for MVAPICH2-GDR enable up to 14X and 16.6X improvement, compared to NCCL-based solutions, for intra- and internode broadcast latency, respectively. In addition, the proposed designs provide up to 7% improvement over NCCL-based solutions for data parallel training of the VGG network on 128 GPUs using Microsoft CNTK. The proposed solutions outperform the recently introduced NCCL2 library for small and medium message sizes and offer comparable/better performance for very large message sizes."}
{"_id": "bb0f9676e1c04d2c85edae092564dd04ab0f6dd0", "title": "Hesitant fuzzy analytic hierarchy process", "text": "Ordinary fuzzy sets have been recently extended to intuitionistic and hesitant fuzzy sets, which are frequently used for the solution of decision-making problems. All these extensions aim at better defining the membership values or functions of the considered parameters. In this paper we develop a hesitant fuzzy analytic hierarchy process method involving multi-experts' linguistic evaluations aggregated by ordered weighted averaging (OWA) operator. The developed hesitant fuzzy AHP method is applied to a multicriteria supplier selection problem."}
{"_id": "10dd5ac23ec470379da5bf27e0f6ece71b65af8b", "title": "Outcome prediction of DOTA2 based on Na\u00efve Bayes classifier", "text": "Although DOTA2 is a popular game around the world, no clear algorithm or software are designed to forecast the winning probability by analyzing the lineups. However, the author finds that Naive Bayes classifier, one of the most common classification algorithm, can analyze the lineups and predict the outcome according to the lineups and gives an improved Naive Bayes classifier. Using the DOTA2 data set published in the UCI Machine Learning Repository, we test Naive Bayes classifier's prediction of respective winning probability of both sides in the game. The results show that Naive Bayes classifier is a practical tool to analyze the lineups and predict the outcome based on players' choices."}
{"_id": "f24d3d69ea8c5e18d3fdb70290d52b71378bce5c", "title": "Forma Track: Tracking People based on Body Shape", "text": "Knowledge of a person\u2019s whereabouts in the home is key to context-aware applications, but many people do not want to carry or wear a tag or mobile device in the home. Therefore, many tracking systems are now using so-called weak biometrics such as height, weight, and width. In this paper, we propose to use body shape as a weak biometric, differentiating people based on features such as head size, shoulder size, or torso size. The basic idea is to scan the body with a radar sensor and to compute the reflection profile: the amount of energy that reflects back from each part of the body. Many people have different body shapes even if they have the same height, weight, or width, which makes body shape a stronger biometric. We built a proof-of-concept system called FormaTrack to test this approach, and evaluate using eight participants of varying height and weight. We collected over 2800 observations while capturing a wide range of factors such as clothing, hats, shoes, and backpacks. Results show that FormaTrack can achieve a precision, recall, direction and identity accuracy (over all possible groups of 2 people) of 100%, 99.86%, 99.7% and 95.3% respectively. Results indicate that FormaTrack can achieve over 99% tracking accuracy with 2 people in a home with 5 or more rooms."}
{"_id": "2e81f32376dac2d8fccd5b9fbbff02e6b408c313", "title": "In vivo evaluation of bacterial cellulose/acrylic acid wound dressing hydrogel containing keratinocytes and fibroblasts for burn wounds", "text": "The healing of wounds, including those from burns, currently exerts a burden on healthcare systems worldwide. Hydrogels are widely used as wound dressings and in the field of tissue engineering. The popularity of bacterial cellulose-based hydrogels has increased owing to their biocompatibility. Previous study demonstrated that bacterial cellulose/acrylic acid (BC/AA) hydrogel increased the healing rate of burn wound. This in vivo study using athymic mice has extended the use of BC/AA hydrogel by the addition of human epidermal keratinocytes and human dermal fibroblasts. The results showed that hydrogel loaded with cells produces the greatest acceleration on burn wound healing, followed by treatment with hydrogel alone, compared with the untreated group. The percentage wound reduction on day 13 in the mice treated with hydrogel loaded with cells (77.34\u2009\u00b1\u20096.21%) was significantly higher than that in the control-treated mice (64.79\u2009\u00b1\u20096.84%). Histological analysis, the expression of collagen type I via immunohistochemistry, and transmission electron microscopy indicated a greater deposition of collagen in the mice treated with hydrogel loaded with cells than in the mice administered other treatments. Therefore, the BC/AA hydrogel has promising application as a wound dressing and a cell carrier."}
{"_id": "1f9b31e0fc44e9cfce34b43ff681aea42e4c0d96", "title": "Expert and exceptional performance: evidence of maximal adaptation to task constraints.", "text": "Expert and exceptional performance are shown to be mediated by cognitive and perceptual-motor skills and by domain-specific physiological and anatomical adaptations. The highest levels of human performance in different domains can only be attained after around ten years of extended, daily amounts of deliberate practice activities. Laboratory analyses of expert performance in many domains such as chess, medicine, auditing, computer programming, bridge, physics, sports, typing, juggling, dance, and music reveal maximal adaptations of experts to domain-specific constraints. For example, acquired anticipatory skills circumvent general limits on reaction time, and distinctive memory skills allow a domain-specific expansion of working memory capacity to support planning, reasoning, and evaluation. Many of the mechanisms of superior expert performance serve the dual purpose of mediating experts' current performance and of allowing continued improvement of this performance in response to informative feedback during practice activities."}
{"_id": "bf9ca74b5b82e9679383e0d1e9e4ae9b14b68b99", "title": "AutoSVD++: An Efficient Hybrid Collaborative Filtering Model via Contractive Auto-encoders", "text": "Collaborative filtering (CF) has been successfully used to provide users with personalized products and services. However, dealing with the increasing sparseness of user-item matrix still remains a challenge. To tackle such issue, hybrid CF such as combining with content based filtering and leveraging side information of users and items has been extensively studied to enhance performance. However, most of these approaches depend on hand-crafted feature engineering, which is usually noise-prone and biased by different feature extraction and selection schemes. In this paper, we propose a new hybrid model by generalizing contractive auto-encoder paradigm into matrix factorization framework with good scalability and computational efficiency, which jointly models content information as representations of effectiveness and compactness, and leverage implicit user feedback to make accurate recommendations. Extensive experiments conducted over three large-scale real datasets indicate the proposed approach outperforms the compared methods for item recommendation."}
{"_id": "eaee9b67377de9985b46d013f77cc37b4b811014", "title": "wUbi-Pen: windows graphical user interface interacting with haptic feedback stylus", "text": "In this work, we designed stylus type haptic interfaces interacting with a touch screen. There are two kinds of haptic styli termed wUbi-Pen. The type I has functions of providing vibration, impact, texture and sound and it is a stand alone system including own battery and wireless communication module. The type II does not include the texture display module and it is a miniaturized version. We present a new interaction scheme on the Windows graphical user interface based on pointer movement haptic feedback events such as clicking, drag, drop, moving and etc. In addition, a simple interactive digital sketchbook has been implemented, which is providing haptic and auditory feedback while drawing and touching objects. We also introduce a tactile image display method on a touch screen with the wUbi-Pen and a simple fitting puzzle utilizing haptic feedback events."}
{"_id": "c422f4d5574f17fdfa32d08d652bf8827216f921", "title": "OpenARC: open accelerator research compiler for directive-based, efficient heterogeneous computing", "text": "This paper presents Open Accelerator Research Compiler (OpenARC): an open-source framework that supports the full feature set of OpenACC V1.0 and performs source-to-source transformations, targeting heterogeneous devices, such as NVIDIA GPUs. Combined with its high-level, extensible Intermediate Representation (IR) and rich semantic annotations, OpenARC serves as a powerful research vehicle for prototyping optimization, source-to-source transformations, and instrumentation for debugging, performance analysis, and autotuning. In fact, OpenARC is equipped with various capabilities for advanced analyses and transformations, as well as built-in performance and debugging tools. We explain the overall design and implementation of OpenARC, and we present key analysis techniques necessary to efficiently port OpenACC applications. Porting various OpenACC applications to CUDA GPUs using OpenARC demonstrates that OpenARC performs similarly to a commercial compiler, while serving as a general research framework."}
{"_id": "a46105d8e36ee3f9c2fc003e29758894170966b5", "title": "Order patterns recurrence plots in the analysis of ERP data", "text": "Recurrence quantification analysis (RQA) is an established tool for data analysis in various behavioural sciences. In this article we present a refined notion of RQA based on order patterns. The use of order patterns is commonplace in time series analysis. Exploiting this concept in combination with recurrence plots (RP) and their quantification (RQA) allows for advances in contemporary EEG research, specifically in the analysis of event related potentials (ERP), as the method is known to be robust against non-stationary data. The use of order patterns recurrence plots (OPRPs) on EEG data recorded during a language processing experiment exemplifies the potentials of the method. We could show that the application of RQA to ERP data allows for a considerable reduction of the number of trials required in ERP research while still maintaining statistical validity."}
{"_id": "19bf79bb48b27c3334bf6f18c237c2a5f7009b57", "title": "On Symmetric and Asymmetric LSHs for Inner Product Search", "text": "We consider the problem of designing locality sensitive hashes (LSH) for inner product similarity, and of the power of asymmetric hashes in this context. Shrivastava and Li argue that there is no symmetric LSH for the problem and propose an asymmetric LSH based on different mappings for query and database points. However, we show there does exist a simple symmetric LSH that enjoys stronger guarantees and better empirical performance than the asymmetric LSH they suggest. We also show a variant of the settings where asymmetry is in-fact needed, but there a different asymmetric LSH is required."}
{"_id": "41c74d11a72f7410aa1d95708a9d710f317522a2", "title": "CORONET: Fault tolerance for Software Defined Networks", "text": "Software Defined Networking, or SDN, based networks are being deployed not only in testbed networks, but also in production networks. Although fault-tolerance is one of the most desirable properties in production networks, there are not much study in providing fault-tolerance to SDN-based networks. The goal of this work is to develop a fault tolerant SDN architecture that can rapidly recover from faults and scale to large network sizes. This paper presents CORONET, a SDN fault-tolerant system that recovers from multiple link failures in the data plane. We describe a prototype implementation based on NOX that demonstrates fault recovery for emulated topologies using Mininet. We also discuss possible extensions to handle control plane and controller faults."}
{"_id": "ca73bd452855b1b90cecd84e54d703186911e168", "title": "Alteration of the platelet serotonin transporter in romantic love.", "text": "BACKGROUND\nThe evolutionary consequences of love are so important that there must be some long-established biological process regulating it. Recent findings suggest that the serotonin (5-HT) transporter might be linked to both neuroticism and sexual behaviour as well as to obsessive-compulsive disorder (OCD). The similarities between an overvalued idea, such as that typical of subjects in the early phase of a love relationship, and obsession, prompted us to explore the possibility that the two conditions might share alterations at the level of the 5-HT transporter.\n\n\nMETHODS\nTwenty subjects who had recently (within the previous 6 months) fallen in love, 20 unmedicated OCD patients and 20 normal controls, were included in the study. The 5-HT transporter was evaluated with the specific binding of 3H-paroxetine (3H-Par) to platelet membranes.\n\n\nRESULTS\nThe results showed that the density of 3H-Par binding sites was significantly lower in subjects who had recently fallen in love and in OCD patients than in controls.\n\n\nDISCUSSION\nThe main finding of the present study is that subjects who were in the early romantic phase of a love relationship were not different from OCD patients in terms of the density of the platelet 5-HT transporter, which proved to be significantly lower than in the normal controls. This would suggest common neurochemical changes involving the 5-HT system, linked to psychological dimensions shared by the two conditions, perhaps at an ideational level."}
{"_id": "7fc50128bc9ab46f1f62c203c5fc5d566c680964", "title": "Message authentication in driverless cars", "text": "Driverless cars and driver-assisted vehicles are becoming prevalent in today's transportation systems. Companies like TESLA, Audi, and Google are among the leaders in this new era of the automotive industry. Modern vehicles, equipped with sophisticated navigation devices, complex driver-assisted systems, and a whole lot of safety features, bring broader impacts to our quality of life, economic development, and environmental sustainability. Instant safety messages such as pre-collision warnings, blind-spot detection, pedestrian and object awareness significantly improve the safety for drivers, passengers, and pedestrians. As a result, vehicles would be able to travel closely yet safely together, forming a platoon, thus resulting in a reduction of traffic congestion and fuel consumption. Driverless cars also have non-safety-related applications: which are used to facilitate traffic management and infotainment dissemination for drivers and passengers. Vehicular Ad hoc Network (VANET), the wireless communication technology enabling driverless cars, features not only dynamic topology but also high mobility. The vehicle nodes move rapidly on streets and highways. Their movements also depend on but not limited to road traffic, speed limits, and behavior of nearby vehicles. With massive amount of messages exchanged among driverless cars that command the cars' movements at high speeds and in close distances, any malicious alternation could lead to disastrous accidents. Message authentication to ensure data integrity is paramount to attack preparation. This paper presents a novel message authentication scheme that protects cars from bogus messages and makes VANET resilient to Denial-of-Service (DoS) attacks. The work includes a simulation framework that integrates vehicle and data traffic models to validate the effectiveness of the proposed message authentication scheme."}
{"_id": "add7fbbab85a552bc624699364614a348ec7be56", "title": "An overview of reservoir computing: theory, applications and implementations", "text": "Training recurrent neural networks is hard. Recently it has however been discovered that it is possible to just construct a random recurrent topology, and only train a single linear readout layer. State-ofthe-art performance can easily be achieved with this setup, called Reservoir Computing. The idea can even be broadened by stating that any high dimensional, driven dynamic system, operated in the correct dynamic regime can be used as a temporal \u2018kernel\u2019 which makes it possible to solve complex tasks using just linear post-processing techniques. This tutorial will give an overview of current research on theory, application and implementations of Reservoir Computing."}
{"_id": "1a19e8eaa6ee94d09bb9e815a67907cc0bf4d6f8", "title": "Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations", "text": "A key challenge for neural modeling is to explain how a continuous stream of multimodal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real time. We propose a new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks. It does not require a task-dependent construction of neural circuits. Instead, it is based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry. It is shown that the inherent transient dynamics of the high-dimensional dynamical system formed by a sufficiently large and heterogeneous neural circuit may serve as universal analog fading memory. Readout neurons can learn to extract in real time from the current state of such recurrent neural circuit information about current and past inputs that may be needed for diverse tasks. Stable internal states are not required for giving a stable output, since transient internal states can be transformed by readout neurons into stable target outputs due to the high dimensionality of the dynamical system. Our approach is based on a rigorous computational model, the liquid state machine, that, unlike Turing machines, does not require sequential transitions between well-defined discrete internal states. It is supported, as the Turing machine is, by rigorous mathematical results that predict universal computational power under idealized conditions, but for the biologically more realistic scenario of real-time processing of time-varying inputs. Our approach provides new perspectives for the interpretation of neural coding, the design of experiments and data analysis in neurophysiology, and the solution of problems in robotics and neurotechnology."}
{"_id": "24547f720f472dd92870c1a7c4cb8bb450307f27", "title": "Adaptive Nonlinear System Identification with Echo State Networks", "text": "Echo state networks (ESN) are a novel approach to recurrent neural network training. An ESN consists of a large, fixed, recurrent \"reservoir\" network, from which the desired output is obtained by training suitable output connection weights. Determination of optimal output weights becomes a linear, uniquely solvable task of MSE minimization. This article reviews the basic ideas and describes an online adaptation scheme based on the RLS algorithm known from adaptive linear systems. As an example, a 10-th order NARMA system is adaptively identified. The known benefits of the RLS algorithms carryover from linear systems to nonlinear ones; specifically, the convergence rate and misadjustment can be determined at design time."}
{"_id": "46c7242d985a5bb65bb2d36942e446aa2f10f656", "title": "Spine dynamics as a computational resource in spine-driven quadruped locomotion", "text": "Recent results suggest that compliance and non-linearity in physical bodies of soft robots may not be disadvantageous properties with respect to control, but rather of advantage. In the context of morphological computation one could see such complex structures as potential computational resources. In this study, we implement and exploit this view point in a spine-driven quadruped robot called Kitty by using its flexible spine as a computational resource. The spine is an actuated multi-joint structure consisting of a sequence of soft silicone blocks. Its complex dynamics are captured by a set of force sensors and used to construct a closed-loop to drive the motor commands. We use simple static, linear readout weights to combine the sensor values to generate multiple gait patterns (bounding, trotting, turning behavior). In addition, we demonstrate the robustness of the setup by applying strong external perturbations in form of additional loads. The system is able to fully recover to its nominal gait patterns (which are encoded in the linear readout weights) after the perturbation has vanished."}
{"_id": "5d8433aee68d9a9cb40d6115145ff4974fe1bc01", "title": "Variable impedance actuators: A review", "text": "Variable Impedance Actuators (VIA) have received increasing attention in recent years as many novel applications involving interactions with an unknown and dynamic environment including humans require actuators with dynamics that are not well-achieved by classical stiff actuators. This paper presents an overview of the different VIAs developed and proposes a classification based on the principles through which the variable stiffness and damping are achieved. The main classes are active impedance by control, inherent compliance and damping actuators, inertial actuators, and combinations of them, which are then further divided into subclasses. This classification allows for designers of new devices to orientate and take inspiration and users of VIA\u2019s to be guided in the design and implementation process for their targeted application."}
{"_id": "c849e41faef43945fcb02a12bae0ca6da50fc795", "title": "The politics of comments: predicting political orientation of news stories with commenters' sentiment patterns", "text": "Political views frequently conflict in the coverage of contentious political issues, potentially causing serious social problems. We present a novel social annotation analysis approach for identification of news articles' political orientation. The approach focuses on the behavior of individual commenters. It uncovers commenters' sentiment patterns towards political news articles, and predicts the political orientation from the sentiments expressed in the comments. It takes advantage of commenters' participation as well as their knowledge and intelligence condensed in the sentiment of comments, thereby greatly reduces the high complexity of political view identification. We conduct extensive study on commenters' behaviors, and discover predictive commenters showing a high degree of regularity in their sentiment patterns. We develop and evaluate sentiment pattern-based methods for political view identification."}
{"_id": "53b1f4f916c9a27ba13744b06ace1f5c4ae2c10e", "title": "A machine learning approach for filtering Monte Carlo noise", "text": "The most successful approaches for filtering Monte Carlo noise use feature-based filters (e.g., cross-bilateral and cross non-local means filters) that exploit additional scene features such as world positions and shading normals. However, their main challenge is finding the optimal weights for each feature in the filter to reduce noise but preserve scene detail. In this paper, we observe there is a complex relationship between the noisy scene data and the ideal filter parameters, and propose to learn this relationship using a nonlinear regression model. To do this, we use a multilayer perceptron neural network and combine it with a matching filter during both training and testing. To use our framework, we first train it in an offline process on a set of noisy images of scenes with a variety of distributed effects. Then at run-time, the trained network can be used to drive the filter parameters for new scenes to produce filtered images that approximate the ground truth. We demonstrate that our trained network can generate filtered images in only a few seconds that are superior to previous approaches on a wide range of distributed effects such as depth of field, motion blur, area lighting, glossy reflections, and global illumination."}
{"_id": "81f47ac001f7ca941516fa9c682105357efe2414", "title": "Connecting Intelligent Things in Smart Hospitals Using NB-IoT", "text": "The widespread use of Internet of Things (IoT), especially smart wearables, will play an important role in improving the quality of medical care, bringing convenience for patients and improving the management level of hospitals. However, due to the limitation of communication protocols, there exists non unified architecture that can connect all intelligent things in smart hospitals, which is made possible by the emergence of the Narrowband IoT (NB-IoT). In light of this, we propose an architecture to connect intelligent things in smart hospitals based on NB-IoT, and introduce edge computing to deal with the requirement of latency in medical process. As a case study, we develop an infusion monitoring system to monitor the real-time drop rate and the volume of remaining drug during the intravenous infusion. Finally, we discuss the challenges and future directions for building a smart hospital by connecting intelligent things."}
{"_id": "79d0a1a08395737df102d3b53938ea2e1425a306", "title": "MADMAC: Multiple Attribute Decision Methodology for Adoption of Clouds", "text": "Cloud Adoption decisions tend to involve multiple, conflicting criteria (attributes) with incommensurable units of measurements, which must be compared among multiple alternatives using imprecise and incomplete available information. Multi-attribute Decision Making (MADM) has been shown to provide a rational basis to aid decision making in such scenarios. We present a MADMAC framework for cloud adoption, consisting of 3 Decision Areas (DA) referred to as the Cloud Switch, Cloud Type and Vendor Choice. It requires the definition of Attributes, Alternatives and Attribute Weights, to construct a Decision Matrix and arrive at a relative ranking to identify the optimal alternative. We also present a taxonomy organized in a two level hierarchy: Server-centric clouds, Client-centric clouds and Mobile-centric clouds, which further map to detailed, specific applications or workloads. DSS presented showing algorithms derived from MADMAC can compute and optimize CA decisions separately for the three stages, where the attributes differently influence CA decisions. A modified Wide-band Delphi method is proposed for assessing the relative weights for each attribute, by workload. Relative ranks are calculated using these weights, and the Simple Additive Weighting (SAW) method is used to generate value functions for all the alternatives, and rank the alternatives by their value to finally choose the best alternative. Results from application of the method to four different types of workloads show that the method converges on reasonable cloud adoption decisions. MADMAC's key advantage is its fully quantitative and iterative convergence approach based on proven multi-attribute decision methods, which enables decision makers to comparatively assess the relative robustness of alternative cloud adoption decisions in a defensible manner. Being amenable to automation, it can respond well to even complex arrays of decision criteria inputs, unlike human decision makers. It can be implemented as a web-based DSS to readily support cloud decision making world-wide, and improved further using fuzzy TOPSIS methods, to address concerns about preferential inter-dependence of attributes, insufficient input data or judgment expertise."}
{"_id": "3b45e4d64e356820064247b6a356963366edf160", "title": "BRUTE: A High Performance and Extensibile Traffic Generator", "text": "Evaluating the performance of high speed networks is a critical task due to the lack of reliable tools to generate traffic workloads at high rates and to the inaccessibility of network equipments. To address the problem of reliable device testing, we developed an extensible traffic generator, called BRUTE (Browny and RobUst Traffic Engine), running on top of Linux operating system. BRUTE takes advantage of the Linux kernel potential in order to accurately generate traffic flows up to very high bitrates. This software is an extensible framework that provides a number of optimized functions that easily allow the implementation of ad hoc modules for customized traffic patterns generation. A number of library modules implementing common traffic profiles (like CBR, Poisson process and Poissonian Arrival of Burst process) are already implemented and integrated into the package. Performance of BRUTE has been compared to the one of several widespread traffic generators over Fast-Ethernet and Gigabit-Ethernet networks. The results of our tests show the superior performance of BRUTE in terms of both the achieved throughput and the level of bitrate precision, under all possible values of the frame length."}
{"_id": "5140bf8570f0bf308487a9e9db774010922fc67e", "title": "Nuclei segmentation for computer-aided diagnosis of breast cancer", "text": "Breast cancer is the most common cancer among women. The effectiveness of treatment depends on early detection of the disease. Computer-aided diagnosis plays an increasingly important role in this field. Particularly, digital pathology has recently become of interest to a growing number of scientists. This work reports on advances in computer-aided breast cancer diagnosis based on the analysis of cytological images of fine needle biopsies. The task at hand is to classify those as either benign or malignant. We propose a robust segmentation procedure giving satisfactory nuclei separation even when they are densely clustered in the image. Firstly, we determine centers of the nuclei using conditional erosion. The erosion is performed on a binary mask obtained with the use of adaptive thresholding in grayscale and clustering in a color space. Then, we use the multi-label fast marching algorithm initialized with the centers to obtain the final segmentation. A set of 84 features extracted from the nuclei is used in the classification by three different classifiers. The approach was tested on 450 microscopic images of fine needle biopsies obtained from patients of the Regional Hospital in Zielona G\u00f3ra, Poland. The classification accuracy presented in this paper reaches 100%, which shows that a medical decision support system based on our method would provide accurate diagnostic information."}
{"_id": "0e39e519471cc41b232381bd529542e2c02f21fa", "title": "Environmental sound classification with convolutional neural networks", "text": "This paper evaluates the potential of convolutional neural networks in classifying short audio clips of environmental sounds. A deep model consisting of 2 convolutional layers with max-pooling and 2 fully connected layers is trained on a low level representation of audio data (segmented spectrograms) with deltas. The accuracy of the network is evaluated on 3 public datasets of environmental and urban recordings. The model outperforms baseline implementations relying on mel-frequency cepstral coefficients and achieves results comparable to other state-of-the-art approaches."}
{"_id": "4c06e27ae186c913b1efc401925bbd73b9796fb6", "title": "Normally-off AlGaN/GaN HFET with p-type GaN gate and AlGaN buffer", "text": "A 1.5 A normally-off GaN transistor for power applications in p-type GaN gate technology with a modified epitaxial concept is presented. A higher threshold voltage is achieved while keeping the on-state resistance low by using an Al-GaN buffer instead of a GaN buffer. Additionally, the AlGaN buffer acts as a back-barrier and suppresses source-drain punch-through currents in the off-state. P-GaN gate GaN transistors with AlGaN buffer will therefore yield higher breakdown voltages as compared to standard GaN buffer versions which results in an excellent VBr-to-RON ratio. The proposed normally-off technology shows save operation under elevated ambient temperature up to 200 \u00b0C without thermal runaway. In contrast to standard normally-on AlGaN/GaN HEMTs, a reverse diode operation is possible for offstate conditions which may enable improved inverter circuits."}
{"_id": "8bd98a35827044093552cf9bee2e6fa60e7d12db", "title": "Directed Multi-Objective Optimization", "text": "While evolutionary computing inspired approaches to multi-objective optimization have many advantages over conventional approaches; they generally do not explicitly exploit directional/gradient information. This can be inefficient if the underlying objectives are reasonably smooth, and this may limit the application of such approaches to real-world problems. This paper develops a local framework for such problems by geometrically analyzing the multiobjective concepts of descent, diversity and convergence/optimality. It is shown that locally optimal, multi-objective descent direction can be calculated that maximally reduce all the objectives and a local sub-space also exists that is a basis for diversity updates. Local convergence of a point towards the optimal Pareto set is therefore assured. The concept of a population of points is also considered and it is shown that it can be used to locally increase the diversity of the population while still ensuring convergence and a method to extract the local directional information from the population is also described. The paper describes and introduces the basic theoretical concepts as well as demonstrating how they are used on simple test problems."}
{"_id": "9b21fb4b0048e53fa6219389a8fcc44e6b7a8b89", "title": "The Usable Privacy Policy Project : Combining Crowdsourcing , Machine Learning and Natural Language Processing to Semi-Automatically Answer Those Privacy Questions Users Care About", "text": "Natural language privacy policies have become a de facto standard to address expectations of \u201cnotice and choice\u201d on the Web. However, users generally do not read these policies and those who do read them struggle to understand their content. Initiatives aimed at addressing this problem through the development of machine-readable standards have run into obstacles, with many website operators showing reluctance to commit to anything more than what they currently do. This project builds on recent advances in natural language processing, privacy preference modeling, crowdsourcing, formal methods, and privacy interface design to develop a practical framework based on websites\u2019 existing natural language privacy policy that empowers users to more meaningfully control their privacy, without requiring additional cooperation from website operators. Our approach combines fundamental research with the development of scalable technologies to (1) semi-automatically extract key privacy policy features from natural language privacy policies, and (2) present these features to users in an easy-to-digest format that enables them to make more informed privacy decisions as they interact with different websites. This work will also involve the systematic collection and analysis of website privacy policies, looking for trends and deficiencies both in the wording and content of these policies across different sectors and using this analysis to inform public policy. This report outlines the project\u2019s research agenda and overall approach."}
{"_id": "cf5f9e8ff12ae55845ea0f33d5b5fb07a22daaee", "title": "Revisiting Snodgrass and Vanderwart's object pictorial set: the role of surface detail in basic-level object recognition.", "text": "Theories of object recognition differ to the extent that they consider object representations as being mediated only by the shape of the object, or shape and surface details, if surface details are part of the representation. In particular, it has been suggested that color information may be helpful at recognizing objects only in very special cases, but not during basic-level object recognition in good viewing conditions. In this study, we collected normative data (naming agreement, familiarity, complexity, and imagery judgments) for Snodgrass and Vanderwart's object database of 260 black-and-white line drawings, and then compared the data to exactly the same shapes but with added gray-level texture and surface details (set 2), and color (set 3). Naming latencies were also recorded. Whereas the addition of texture and shading without color only slightly improved naming agreement scores for the objects, the addition of color information unambiguously improved naming accuracy and speeded correct response times. As shown in previous studies, the advantage provided by color was larger for objects with a diagnostic color, and structurally similar shapes, such as fruits and vegetables, but was also observed for man-made objects with and without a single diagnostic color. These observations show that basic-level 'everyday' object recognition in normal conditions is facilitated by the presence of color information, and support a 'shape + surface' model of object recognition, for which color is an integral part of the object representation. In addition, the new stimuli (sets 2 and 3) and the corresponding normative data provide valuable materials for a wide range of experimental and clinical studies of object recognition."}
{"_id": "2a916a3ceed78bcdccc3dcfd1a5f938abb3804be", "title": "A shape-based similarity measure for time series data with ensemble learning", "text": "This paper introduces a shape-based similarity measure, called the angular metric for shape similarity (AMSS), for time series data. Unlike most similarity or dissimilarity measures, AMSS is based not on individual data points of a time series but on vectors equivalently representing it. AMSS treats a time series as a vector sequence to focus on the shape of the data and compares data shapes by employing a variant of cosine similarity. AMSS is, by design, expected to be robust to time and amplitude shifting and scaling, but sensitive to short-term oscillations. To deal with the potential drawback, ensemble learning is adopted, which integrates data smoothing when AMSS is used for classification. Evaluative experiments reveal distinct properties of AMSS and its effectiveness when applied in the ensemble framework as compared to existing measures."}
{"_id": "53966eb7c0de59e97e592ac48a1a782d7eed5620", "title": "Improving Differentiable Neural Computers Through Memory Masking, De-allocation, and Link Distribution Sharpness Control", "text": "The Differentiable Neural Computer (DNC) can learn algorithmic and question answering tasks. An analysis of its internal activation patterns reveals three problems: Most importantly, the lack of key-value separation makes the address distribution resulting from content-based look-up noisy and flat, since the value influences the score calculation, although only the key should. Second, DNC\u2019s deallocation of memory results in aliasing, which is a problem for content-based look-up. Thirdly, chaining memory reads with the temporal linkage matrix exponentially degrades the quality of the address distribution. Our proposed fixes of these problems yield improved performance on arithmetic tasks, and also improve the mean error rate on the bAbI question answering dataset by 43%."}
{"_id": "7df073fd756756e4f639b2894325cb32164cf7de", "title": "Supervised N-gram topic model", "text": "We propose a Bayesian nonparametric topic model that rep- resents relationships between given labels and the corre- sponding words/phrases, from supervised articles. Unlike existing supervised topic models, our proposal, supervised N-gram topic model (SNT), focuses on both a number of topics and power-law distribution in the word frequencies to extract topic specific N-grams. To achieve this goal, SNT takes a Bayesian nonparametric approach to topic sampling, which generates word distribution jointly with the given variable in textual order, and then form each N-gram word as a hierarchy of Pitman-Yor process priors. Experiments on labeled text data show that SNT is useful as a generative model for discovering more phrases that complement human experts and domain specific knowledge than the existing al- ternatives. The results show that SNT can be applied to various tasks such as automatic annotation."}
{"_id": "39424070108220c600f67fa2dbd25f779a9fdb7a", "title": "A Planning based Framework for Essay Generation", "text": "Generating an article automatically with computer program is a challenging task in artificial intelligence and natural language processing. In this paper, we target at essay generation, which takes as input a topic word in mind and generates an organized article under the theme of the topic. We follow the idea of text planning (Reiter and Dale, 1997) and develop an essay generation framework. The framework consists of three components, including topic understanding, sentence extraction and sentence reordering. For each component, we studied several statistical algorithms and empirically compared between them in terms of qualitative or quantitative analysis. Although we run experiments on Chinese corpus, the method is language independent and can be easily adapted to other language. We lay out the remaining challenges and suggest avenues for future research."}
{"_id": "04979919f0e3419750df13c47a9605ac2c9a4721", "title": "Automatic Personality and Interaction Style Recognition from Facebook Profile Pictures", "text": "In this paper, we address the issue of personality and interaction style recognition from profile pictures in Facebook. We recruited volunteers among Facebook users and collected a dataset of profile pictures, labeled with gold standard self-assessed personality and interaction style labels. Then, we exploited a bag-of-visual-words technique to extract features from pictures. Finally, different machine learning approaches were used to test the effectiveness of these features in predicting personality and interaction style traits. Our good results show that this task is very promising, because profile pictures convey a lot of information about a user and are directly connected to impression formation and identity management."}
{"_id": "0467951e23147910fcdbda835fa43895515709b1", "title": "A Genetic Algorithm Approach for Creating Neural-Network Ensembles", "text": "A neural-network ensemble is a successful technique where the outputs of a set of separately trained neural networks are combined to form one unified prediction. An effective ensemble should consist of a set of networks that are not only highly correct, but ones that make their errors on different parts of the input space as well; however, most existing techniques only indirectly address the problem of creating such a set. We present an algorithm called Addemup that uses genetic algorithms to explicitly search for a highly diverse set of accurate trained networks. Addemup works by first creating an initial population, then uses genetic operators to continually create new networks, keeping the set of networks that are highly accurate while disagreeing with each other as much as possible. Experiments on four real-world domains show that Addemup is able to generate a set of trained networks that is more accurate than several existing ensemble approaches. Experiments also show that Addemup is able to effectively incorporate prior knowledge, if available, to improve the quality of its ensemble."}
{"_id": "23bf275a591ba14bba9ea0e8edf96ff232de6df3", "title": "Client-Server Computing in Mobile Environments", "text": "Recent advances in wireless data networking and portable information appliances have engendered a new paradigm of computing, called mobile computing, in which users carrying portable devices have access to data and information services regardless of their physical location or movement behavior. In the meantime, research addressing information access in mobile environments has proliferated. In this survey, we provide a concrete framework and categorization of the various ways of supporting mobile client-server computing for information access. We examine characteristics of mobility that distinguish mobile client-server computing from its traditional counterpart. We provide a comprehensive analysis of new paradigms and enabler concepts for mobile client-server computing, including mobile-aware adaptation, extended client-server model, and mobile data access. A comparative and detailed review of major research prototypes for mobile information access is also presented."}
{"_id": "6fb7dac40259f62012178c2f06f312ba5b17e627", "title": "Android Forensics: Simplifying Cell Phone Examinations.", "text": "It is hardly appropriate to call the devices many use to receive the occasional phone call a telephone any more. The capability of these devices is growing, as is the number of people utilizing them. By the end of 2009, 46.3% of mobile phones in use in the United States were reported to be smart phones (AdMob, 2010). With the increased availability of these powerful devices, there is also a potential increase for criminals to use this technology as well. Criminals could use smart phones for a number of activities such as committing fraud over e-mail, harassment through text messages, trafficking of child pornography, communications related to narcotics, etc. The data stored on smart phones could be extremely useful to analysts through the course of an investigation. Indeed, mobile devices are already showing themselves to have a large volume of probative information that is linked to an individual with just basic call history, contact, and text message data; smart phones contain even more useful information, such as e-mail, browser history, and chat logs. Mobile devices probably have more probative information that can be linked to an individual per byte examined than most computers -and this data is harder to acquire in a forensically proper fashion. Part of the problem lies in the plethora of cell phones available today and a general lack of hardware, software, and/or interface standardization within the industry. These differences range from the media on which data is stored and the file system to the operating system and the effectiveness of certain tools. Even different model cell phones made by the same manufacture may require different data cables and software to access the phone's information. The good news is there are numerous people in the field working on making smart phone forensics easier. Already there is material available on how to conduct an examination on Blackberry phones and a growing number of resources about the iPhone. However, there is a new smart phone OS on the market named Android and it will likely gain in appeal and market share over the next year. While Android initially launched with only one phone on T-Mobile, phones are now available on Sprint, Verizon and AT&T as well."}
{"_id": "8fe04d65bad6016af542c09d80007fa60c8dda63", "title": "Letter-based classification of Arabic scripts style in ancient Arabic manuscripts: Preliminary results", "text": "Classifying ancient Arabic manuscripts based on handwriting styles is one of the important roles in the field of paleography. Recognizing the style of handwriting in Arabic manuscripts helps in identifying the origin and date of ancient documents. In this paper we proposed using segmented letters from Arabic manuscripts to recognize handwriting style. Both Gabor Filters (GF) and Local Binary Pattern (LBP) are used to extract features from letters. The fused features are sent to Support Vector Machine (SVM) classifier. Experimental results have been implemented using manuscripts images from the Qatar National Library (QNL) and other online datasets. Better results are achieved when both GF and LBP descriptors are combined. The recognized Handwritten Arabic styles are Diwani, Kufic, Naskh, Farsi, Ruq'ah and Thuluth."}
{"_id": "aeb23992f2a2d386a3a3521908c5354905fc3352", "title": "Managing Knowledge in Organizations: An Integrative Framework and Review of Emerging Themes", "text": "In this concluding article to the Management Science special issue on \u201cManaging Knowledge in Organizations: Creating, Retaining, and Transferring Knowledge,\u201d we provide an integrative framework for organizing the literature on knowledge management. The framework has two dimensions. The knowledge management outcomes of knowledge creation, retention, and transfer are represented along one dimension. Properties of the context within which knowledge management occurs are represented on the other dimension. These properties, which affect knowledge management outcomes, can be organized according to whether they are properties of a unit (e.g., individual, group, organization) involved in knowledge management, properties of relationships between units or properties of the knowledge itself. The framework is used to identify where research findings about knowledge management converge and where gaps in our understanding exist. The article discusses mechanisms of knowledge management and how those mechanisms affect a unit\u2019s ability to create, retain and transfer knowledge. Emerging themes in the literature on knowledge management are identified. Directions for future research are suggested. (Knowledge Management; Organizational Learning; Knowledge Transfer; Innovation; Organizational Memory )"}
{"_id": "246cc31be5c0744b2e6bf0a2871d3b634959f2e0", "title": "Broken Promises : An Experiment", "text": "We test whether promises per se are effective in enhancing cooperative behavior in a form of trust game. In a new treatment, rather than permitting free-form messages, we instead allow only a bare promise-only message to be sent (or not). We find that bare promises are much less effective in achieving good social outcomes than free-form messages; in fact, bare promise-only messages lead to behavior that is much the same as when no messages are feasible. Our design also permits us to test the predictions of guilt aversion against the predictions of lying aversion. Our experimental results provide evidence that mainly supports the guilt-aversion predictions, but we also find some support for the presence of lying aversion."}
{"_id": "4bb8e200b25db55f7014a46c6f2d71b471e49a4f", "title": "miRCancer: a microRNA-cancer association database constructed by text mining on literature", "text": "MOTIVATION\nResearch interests in microRNAs have increased rapidly in the past decade. Many studies have showed that microRNAs have close relationships with various human cancers, and they potentially could be used as cancer indicators in diagnosis or as a suppressor for treatment purposes. There are several databases that contain microRNA-cancer associations predicted by computational methods but few from empirical results. Despite the fact that abundant experiments investigating microRNA expressions in cancer cells have been carried out, the results have remain scattered in the literature. We propose to extract microRNA-cancer associations by text mining and store them in a database called miRCancer.\n\n\nRESULTS\nThe text mining is based on 75 rules we have constructed, which represent the common sentence structures typically used to state microRNA expressions in cancers. The microRNA-cancer association database, miRCancer, is updated regularly by running the text mining algorithm against PubMed. All miRNA-cancer associations are confirmed manually after automatic extraction. miRCancer currently documents 878 relationships between 236 microRNAs and 79 human cancers through the processing of >26 000 published articles.\n\n\nAVAILABILITY\nmiRCancer is freely available on the web at http://mircancer.ecu.edu/"}
{"_id": "30a621c3ff9ec93e59db7f8d3c92f6c0d89ebd5d", "title": "The big data ecosystem at LinkedIn", "text": "The use of large-scale data mining and machine learning has proliferated through the adoption of technologies such as Hadoop, with its simple programming semantics and rich and active ecosystem. This paper presents LinkedIn's Hadoop-based analytics stack, which allows data scientists and machine learning researchers to extract insights and build product features from massive amounts of data. In particular, we present our solutions to the ``last mile'' issues in providing a rich developer ecosystem. This includes easy ingress from and egress to online systems, and managing workflows as production processes. A key characteristic of our solution is that these distributed system concerns are completely abstracted away from researchers. For example, deploying data back into the online system is simply a 1-line Pig command that a data scientist can add to the end of their script. We also present case studies on how this ecosystem is used to solve problems ranging from recommendations to news feed updates to email digesting to descriptive analytical dashboards for our members."}
{"_id": "71fbbc1675780f2f945073f9d92c09b8d76f80f0", "title": "Google news personalization: scalable online collaborative filtering", "text": "Several approaches to collaborative filtering have been studied but seldom have studies been reported for large (several millionusers and items) and dynamic (the underlying item set is continually changing) settings. In this paper we describe our approach to collaborative filtering for generating personalized recommendations for users of Google News. We generate recommendations using three approaches: collaborative filtering using MinHash clustering, Probabilistic Latent Semantic Indexing (PLSI), and covisitation counts. We combine recommendations from different algorithms using a linear model. Our approach is content agnostic and consequently domain independent, making it easily adaptable for other applications and languages with minimal effort. This paper will describe our algorithms and system setup in detail, and report results of running the recommendations engine on Google News."}
{"_id": "e25221b4c472c4337383341f6b2c9375e86709af", "title": "Matrix Factorization Techniques for Recommender Systems", "text": ""}
{"_id": "4f1462aa3bb75e1c26ed4c05edd75b70471d2d10", "title": "Differential effects of mindful breathing, progressive muscle relaxation, and loving-kindness meditation on decentering and negative reactions to repetitive thoughts.", "text": "Decentering has been proposed as a potential mechanism of mindfulness-based interventions but has received limited empirical examination to date in experimental studies comparing mindfulness meditation to active comparison conditions. In the present study, we compared the immediate effects of mindful breathing (MB) to two alternative stress-management techniques: progressive muscle relaxation (PMR) and loving-kindness meditation (LKM) to test whether decentering is unique to mindfulness meditation or common across approaches. Novice meditators (190 female undergraduates) were randomly assigned to complete one of three 15-min stress-management exercises (MB, PMR, or LKM) presented by audio recording. Immediately after the exercise, participants completed measures of decentering, frequency of repetitive thoughts during the exercise, and degree of negative reaction to thoughts. As predicted, participants in the MB condition reported greater decentering relative to the other two conditions. The association between frequency of repetitive thought and negative reactions to thoughts was relatively weaker in the MB condition than in the PMR and LKM conditions, in which these two variables were strongly and positively correlated. Consistent with the construct of decentering, the relative independence between these two variables in the MB condition suggests that mindful breathing may help to reduce reactivity to repetitive thoughts. Taken together, results help to provide further evidence of decentering as a potential mechanism that distinguishes mindfulness practice from other credible stress-management approaches."}
{"_id": "e6762d30bb51a5da8a169a4496b24894fb3bc2b4", "title": "Procedural Modeling of Interconnected Structures", "text": "The complexity and detail of geometric scenes that are used in today\u2019s computer animated films and interactive games have reached a level where the manual creation by traditional 3D modeling tools has become infeasible. This is why procedural modeling concepts have been developed which generate highly complex 3D models by automatically executing a set of formal construction rules. Well-known examples are variants of L-systems which describe the bottom-up growth process of plants and shape grammars which define architectural buildings by decomposing blocks in a top-down fashion. However, none of these approaches allows for the easy generation of interconnected structures such as bridges or roller coasters where a functional interaction between rigid and deformable parts of an object is needed. Our approach mainly relies on the top-down decomposition principle of shape grammars to create an arbitrarily complex but well structured layout. During this process, potential attaching points are collected in containers which represent the set of candidates to establish interconnections. Our grammar then uses either abstract connection patterns or geometric queries to determine elements in those containers that are to be connected. The two different types of connections that our system supports are rigid object chains and deformable beams. The former type is constructed by inverse kinematics, the latter by spline interpolation. We demonstrate the descriptive power of our grammar by example models of bridges, roller coasters, and wall-mounted catenaries."}
{"_id": "d94a6a0cd9e03faf6e70814c8053305f01e2c885", "title": "Named Entity Recognition using Support Vector Machine: A Language Independent Approach", "text": "Named Entity Recognition (NER) aims to classify each word of a document into predefined target named entity classes and is now-a-days considered to be fundamental for many Natural Language Processing (NLP) tasks such as information retrieval, machine translation, information extraction, question answering systems and others. This paper reports about the development of a NER system for Bengali and Hindi using Support Vector Machine (SVM). Though this state of the art machine learning technique has been widely applied to NER in several well-studied languages, the use of this technique to Indian languages (ILs) is very new. The system makes use of the different contextual information of the words along with the variety of features that are helpful in predicting the four different named (NE) classes, such as Person name, Location name, Organization name and Miscellaneous name. We have used the annotated corpora of 122,467 tokens of Bengali and 502,974 tokens of Hindi tagged with the twelve different NE classes , defined as part of the IJCNLP-08 NER Shared Task for South and South East Asian Languages (SSEAL) . In addition, we have manually annotated 150K wordforms of the Bengali news corpus, developed from the web-archive of a leading Bengali newspaper. We have also developed an unsupervised algorithm in order to generate the lexical context patterns from a part of the unlabeled Bengali news corpus. Lexical patterns have been used as the features of SVM in order to improve the system performance. The NER system has been tested with the gold standard test sets of 35K, and 60K tokens for Bengali, and Hindi, respectively. Evaluation results have demonstrated the recall, precision, and f-score values of 88.61%, 80.12%, and 84.15%, respectively, for Bengali and 80.23%, 74.34%, and 77.17%, respectively, for Hindi. Results show the improvement in the f-score by 5.13% with the use of context patterns. Statistical analysis, ANOVA is also performed to compare the performance of the proposed NER system with that of the existing HMM based system for both the languages. Keywords\u2014Named Entity (NE); Named Entity Recognition (NER); Support Vector Machine (SVM); Bengali; Hindi."}
{"_id": "e9c9da57bbf9a968489cb90ec7252319bcab42fb", "title": "Hard Mixtures of Experts for Large Scale Weakly Supervised Vision", "text": "Training convolutional networks (CNNs) that fit on a single GPU with minibatch stochastic gradient descent has become effective in practice. However, there is still no effective method for training large networks that do not fit in the memory of a few GPU cards, or for parallelizing CNN training. In this work we show that a simple hard mixture of experts model can be efficiently trained to good effect on large scale hashtag (multilabel) prediction tasks. Mixture of experts models are not new [7, 3], but in the past, researchers have had to devise sophisticated methods to deal with data fragmentation. We show empirically that modern weakly supervised data sets are large enough to support naive partitioning schemes where each data point is assigned to a single expert. Because the experts are independent, training them in parallel is easy, and evaluation is cheap for the size of the model. Furthermore, we show that we can use a single decoding layer for all the experts, allowing a unified feature embedding space. We demonstrate that it is feasible (and in fact relatively painless) to train far larger models than could be practically trained with standard CNN architectures, and that the extra capacity can be well used on current datasets."}
{"_id": "0ec33f27de8350470935ec5bf9d198eceaf63904", "title": "Local Naive Bayes Nearest Neighbor for image classification", "text": "We present Local Naive Bayes Nearest Neighbor, an improvement to the NBNN image classification algorithm that increases classification accuracy and improves its ability to scale to large numbers of object classes. The key observation is that only the classes represented in the local neighborhood of a descriptor contribute significantly and reliably to their posterior probability estimates. Instead of maintaining a separate search structure for each class's training descriptors, we merge all of the reference data together into one search structure, allowing quick identification of a descriptor's local neighborhood. We show an increase in classification accuracy when we ignore adjustments to the more distant classes and show that the run time grows with the log of the number of classes rather than linearly in the number of classes as did the original. Local NBNN gives a 100 times speed-up over the original NBNN on the Caltech 256 dataset. We also provide the first head-to-head comparison of NBNN against spatial pyramid methods using a common set of input features. We show that local NBNN outperforms all previous NBNN based methods and the original spatial pyramid model. However, we find that local NBNN, while competitive with, does not beat state-of-the-art spatial pyramid methods that use local soft assignment and max-pooling."}
{"_id": "6a4c500af497a4846c35480b43927cd5812a8486", "title": "Bayesian Personalized Ranking with Multi-Channel User Feedback", "text": "Pairwise learning-to-rank algorithms have been shown to allow recommender systems to leverage unary user feedback. We propose Multi-feedback Bayesian Personalized Ranking (MF-BPR), a pairwise method that exploits different types of feedback with an extended sampling method. The feedback types are drawn from different \"channels\", in which users interact with items (e.g., clicks, likes, listens, follows, and purchases). We build on the insight that different kinds of feedback, e.g., a click versus a like, reflect different levels of commitment or preference. Our approach differs from previous work in that it exploits multiple sources of feedback simultaneously during the training process. The novelty of MF-BPR is an extended sampling method that equates feedback sources with \"levels\" that reflect the expected contribution of the signal. We demonstrate the effectiveness of our approach with a series of experiments carried out on three datasets containing multiple types of feedback. Our experimental results demonstrate that with a right sampling method, MF-BPR outperforms BPR in terms of accuracy. We find that the advantage of MF-BPR lies in its ability to leverage level information when sampling negative items."}
{"_id": "1cb8617c3449694173dc34eab384eecb435f6a36", "title": "When Morality Opposes Justice : Conservatives Have Moral Intuitions that Liberals may not Recognize", "text": "Researchers in moral psychology and social justice have agreed that morality is about matters of harm, rights, and justice. On this definition of morality, conservative opposition to social justice programs appears to be immoral, and has been explained as a product of various non-moral processes such as system justification or social dominance orientation. In this article we argue that, from an anthropological perspective, the moral domain is usually much broader, encompassing many more aspects of social life and valuing institutions as much or more than individuals. We present theoretical and empirical reasons for believing that there are five psychological systems that provide the foundations for the world s many moralities. The five foundations are psychological preparations for detecting and reacting emotionally to issues related to harm/care, fairness/reciprocity, ingroup/loyalty, authority/respect, and purity/sanctity. Political liberals have moral intuitions primarily based upon the first two foundations, and therefore misunderstand the moral motivations of political conservatives, who generally rely upon all five foundations."}
{"_id": "801af1633ae94d98144b0fc2b289e667f04f8e7e", "title": "A Review on Data Mining in Banking Sector", "text": "The banking industry has undergone various changes in the way they conduct the business and focus on modern technologies to compete the market. The banking industry has started realizing the importance of creating the knowledge base and its utilization for the benefits of the bank in the area of strategic planning to survive in the competitive market. In the modern era, the technologies are advanced and it facilitates to generate, capture and store data are increased enormously. Data is the most valuable asset, especially in financial industries. The value of this asset can be evaluated only if the organization can extract the valuable knowledge hidden in raw data. The increase in the huge volume of data as a part of day to day operations and through other internal and external sources, forces information technology industries to use technologies like data mining to transform knowledge from data. Data mining technology provides the facility to access the right information at the right time from huge volumes of raw data. Banking industries adopt the data mining technologies in various areas especially in customer segmentation and profitability, Predictions on Prices/Values of different investment products, money market business, fraudulent transaction detections, risk predictions, default prediction on pricing. It is a valuable tool which identifies potentially useful information from large amount of data, from which organization can gain a clear advantage over its competitors. This study shows the significance of data mining technologies and its advantages in the banking and financial sectors."}
{"_id": "6d045eabb4880786a56fd2334e42169af27ba4b1", "title": "An improved swarm optimized functional link artificial neural network (ISO-FLANN) for classification", "text": "Multilayer perceptron (MLP) (trained with back propagation learning algorithm) takes large computational time. The complexity of the network increases as the number of layers and number of nodes in layers increases. Further, it is also very difficult to decide the number of nodes in a layer and the number of layers in the network required for solving a problem a priori. In this paper an improved particle swarm optimization (IPSO) is used to train the functional link artificial neural network (FLANN) for classification and we name it ISO-FLANN. In contrast to MLP, FLANN has less architectural complexity, easier to train, and more insight may be gained in the classification problem. Further, we rely on global classification ata mining unctional link artificial neural networks ulti-layer perception article swarm optimization mproved particle swarm optimization VM capabilities of IPSO to explore the entire weight space, which is plagued by a host of local optima. Using the functionally expanded features; FLANN overcomes the non-linear nature of problems. We believe that the combined efforts of FLANN and IPSO (IPSO + FLANN = ISO \u2212 FLANN) by harnessing their best attributes can give rise to a robust classifier. An extensive simulation study is presented to show the effectiveness of proposed classifier. Results are compared with MLP, support vector machine(SVM) with radial basis function (RBF) kernel, FLANN with gradiend descent learning and fuzzy swarm net (FSN). SN"}
{"_id": "6566a986f0067b87d5864fc94a5fde645e1f4b94", "title": "AudioDAQ: turning the mobile phone's ubiquitous headset port into a universal data acquisition interface", "text": "We present AudioDAQ, a new platform for continuous data acquisition using the headset port of a mobile phone. AudioDAQ differs from existing phone peripheral interfaces by drawing all necessary power from the microphone bias voltage, encoding all data as analog audio, and leveraging the phone's built-in voice memo application (or a custom application) for continuous data collection. These properties make the AudioDAQ design more universal, so it works across a broad range of phones including sophisticated smart phones and simpler feature phones, enables simple analog peripherals without requiring a microcontroller, requires no hardware or software modifications on the phone itself, uses significantly less power than prior approaches, and allows continuous data capture over an extended period of time. The AudioDAQ design is efficient because it draws all necessary power from the microphone bias voltage, and it is general because this voltage and a voice memo application are present on most mobile phones in use today. We show the viability of our architecture by evaluating an end-to-end system that can capture EKG signals continuously for hours and send the data to the cloud for storage, processing, and visualization."}
{"_id": "9c8e7655dd233df3d5b3249f416ec992cebe7a10", "title": "Quasi-Newton Optimization in Deep Q-Learning for Playing ATARI Games", "text": "Reinforcement Learning (RL) algorithms allow artificial agents to improve their selection of actions so as to increase rewarding experiences in their environments. The learning can become intractably slow as the state space of the environment grows. This has motivated methods that learn internal representations of the agents state by a function approximator. Impressive results have been produced by using deep artificial neural networks to learn the state representations. However, deep reinforcement learning algorithms require solving a non-convex and nonlinear unconstrained optimization problem. Methods for solving the optimization problems in deep RL are restricted to the class of first-order algorithms, like stochastic gradient descent (SGD). The major drawback of the SGD methods is that they have the undesirable effect of not escaping saddle points. Furthermore, these methods require exhaustive trial and error to fine-tune many learning parameters. Using second derivative information can result in improved convergence properties, but computing the Hessian matrix for large-scale problems is not practical. QuasiNewton methods, like SGD, require only first-order gradient information, but they can result in superlinear convergence, which makes them attractive alternatives. The limitedmemory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) approach is one of the most popular quasi-Newton methods that construct positive definite Hessian approximations. In this paper, we introduce an efficient optimization method, based on the limited memory BFGS quasi-Newton method using line search strategy \u2013 as an alternative to SGD methods. Our method bridges the disparity between first order methods and second order methods by continuing to use gradient information to calculate a low-rank Hessian approximations. We provide empirical results on variety of the classic ATARI 2600 games. Our results show a robust convergence with preferred generalization characteristics, as well as fast training time and no need for the experience replaying mechanism."}
{"_id": "c0f2028a4e6e10e730e4a65bca2ed361739a3447", "title": "Perforator flaps of the facial artery angiosome.", "text": "For small to moderate-sized defects in the head and neck region, local flaps have been the mainstay of reconstruction for years. However, in certain instances, additional flap translation is required be it advancement, transposition or rotation. In such cases, the local flap concept is combined with perforator flap know-how, allowing larger loco-regional flaps to be raised to reconstruct relatively larger defects, even in cosmetically-expensive areas. In our cohort of fifteen patients', we have utilised detailed microanatomy of the facial artery perforators to reconstruct such defects with good results."}
{"_id": "ebd9206eb8a42ae151e8ca8fb4d2893790c52683", "title": "Quality assessment of Median filtering techniques for impulse noise removal from digital images", "text": "Impulse noise still poses challenges in front of researchers today. The removal of impulse noise brings blurring which leads to edges being distorted and image thus being of poor quality. Hence the need is to preserve edges and fine details during filtering. The proposed method consists of noise detection and then removal of detected noise by Improved Adaptive Median Filter using pixels that are not noise themselves in gray level as well as colour images. The pixels are split in two groups, which are noise-free pixels and noisy pixels. In removing out Impulse noise, only noisy pixels are processed. The noiseless pixels are then sent directly to the output image. The proposed method adaptively changes the masking matrix size of the median filter based on the count of the noisy pixels. Computer simulation and analysis have been carried out eventually to analyse the performance of the proposed method with that of Simple Median Filter (SMF), Simple Adaptive Median Filter (SAMF) and Adaptive Switched Median Filter (ASMF). The proposed filter proves to be more efficient in terms of both objective and subjective parameters."}
{"_id": "2454c84c71d282b95bc99d05adda914361905ffe", "title": "Correlation clustering in general weighted graphs", "text": "We consider the following general correlation-clustering problem [N. Bansal, A. Blum, S. Chawla, Correlation clustering, in: Proc. 43rd Annu. IEEE Symp. on Foundations of Computer Science, Vancouver, Canada, November 2002, pp. 238\u2013250]: given a graph with real nonnegative edge weights and a \u3008+\u3009/\u3008\u2212\u3009 edge labelling, partition the vertices into clusters to minimize the total weight of cut \u3008+\u3009 edges and uncut \u3008\u2212\u3009 edges. Thus, \u3008+\u3009 edges with large weights (representing strong correlations between endpoints) encourage those endpoints to belong to a common cluster while \u3008\u2212\u3009 edges with large weights encourage the endpoints to belong to different clusters. In contrast to most clustering problems, correlation clustering specifies neither the desired number of clusters nor a distance threshold for clustering; both of these parameters are effectively chosen to be the best possible by the problem definition. Correlation clustering was introduced by Bansal et al. [Correlation clustering, in: Proc. 43rd Annu. IEEE Symp. on Foundations of Computer Science, Vancouver, Canada, November 2002, pp. 238\u2013250], motivated by both document clustering and agnostic learning. They proved NP-hardness and gave constant-factor approximation algorithms for the special case in which the graph is complete (full information) and every edge has the same weight. We give an O(log n)-approximation algorithm for the general case based on a linear-programming rounding and the \u201cregion-growing\u201d technique. We also prove that this linear program has a gap of (log n), and therefore our approximation is tight under this approach. We also give an O(r3)-approximation algorithm for Kr,r -minor-free graphs. On the other hand, we show that the problem is equivalent to minimum multicut, and therefore APX-hard and difficult to approximate better than (log n). \u00a9 2006 Elsevier B.V. All rights reserved."}
{"_id": "afec832e9757a81d55ad5d79688223b32faa7f69", "title": "Leveraging Text and Knowledge Bases for Triple Scoring: An Ensemble Approach - The BOKCHOY Triple Scorer at WSDM Cup 2017", "text": "We present our winning solution for the WSDM Cup 2017 triple scoring task. We devise an ensemble of four base scorers, so as to leverage the power of both text and knowledge bases for that task. Then we further refine the outputs of the ensemble by trigger word detection, achieving even better predictive accuracy. The code is available at https://github.com/wsdm-cup-2017/bokchoy."}
{"_id": "19fa211462778aae150082d98ef2d00c04ee5fea", "title": "Compact Nonlinear Maps and Circulant Extensions", "text": "Kernel approximation via nonlinear random feature maps is w idely used in speeding up kernel machines. There are two main challenges for the conventional kernel ap proximation methods. First, before performing kernel approximation, a good kernel has to be chosen. Pickin g a good kernel is a very challenging problem in itself. Second, high-dimensional maps are often required i n order to achieve good performance. This leads to high computational cost in both generating the nonlinear ma ps, nd in the subsequent learning and prediction process. In this work, we propose to optimize the nonlinear m ps directly with respect to the classification objective in a data-dependent fashion. The proposed approa ch achieves kernel approximation and kernel learning in a joint framework. This leads to much more compact maps wit hout hurting the performance. As a by-product, the same framework can also be used to achieve more compact ke rnel maps to approximate a known kernel. We also introduce Circulant Nonlinear Maps, which uses a circu lant-structured projection matrix to speed up the nonlinear maps for high-dimensional data."}
{"_id": "181cff110dd7f03b2a050af4990393adfeffbbf7", "title": "MAV urban localization from Google street view data", "text": "We tackle the problem of globally localizing a camera-equipped micro aerial vehicle flying within urban environments for which a Google Street View image database exists. To avoid the caveats of current image-search algorithms in case of severe viewpoint changes between the query and the database images, we propose to generate virtual views of the scene, which exploit the air-ground geometry of the system. To limit the computational complexity of the algorithm, we rely on a histogram-voting scheme to select the best putative image correspondences. The proposed approach is tested on a 2 km image dataset captured with a small quadroctopter flying in the streets of Zurich. The success of our approach shows that our new air-ground matching algorithm can robustly handle extreme changes in viewpoint, illumination, perceptual aliasing, and over-season variations, thus, outperforming conventional visual place-recognition approaches."}
{"_id": "613c3e44eb58b86ad18c16bf469fc7b3c55df95f", "title": "Actinomycetes from the South China Sea sponges: isolation, diversity, and potential for aromatic polyketides discovery", "text": "Marine sponges often harbor dense and diverse microbial communities including actinobacteria. To date no comprehensive investigation has been performed on the culturable diversity of the actinomycetes associated with South China Sea sponges. Structurally novel aromatic polyketides were recently discovered from marine sponge-derived Streptomyces and Saccharopolyspora strains, suggesting that sponge-associated actinomycetes can serve as a new source of aromatic polyketides. In this study, a total of 77 actinomycete strains were isolated from 15 South China Sea sponge species. Phylogenetic characterization of the isolates based on 16S rRNA gene sequencing supported their assignment to 12 families and 20 genera, among which three rare genera (Marihabitans, Polymorphospora, and Streptomonospora) were isolated from marine sponges for the first time. Subsequently, \u03b2-ketoacyl synthase (KS\u03b1) gene was used as marker for evaluating the potential of the actinomycete strains to produce aromatic polyketides. As a result, KS\u03b1 gene was detected in 35 isolates related to seven genera (Kocuria, Micromonospora, Nocardia, Nocardiopsis, Saccharopolyspora, Salinispora, and Streptomyces). Finally, 10 strains were selected for small-scale fermentation, and one angucycline compound was detected from the culture extract of Streptomyces anulatus strain S71. This study advanced our knowledge of the sponge-associated actinomycetes regarding their diversity and potential in producing aromatic polyketides."}
{"_id": "860bfa4dd7b8f471e40148fa4e74b454974171b2", "title": "Fpga-based face detection system using Haar classifiers", "text": "This paper presents a hardware architecture for face detection based system on AdaBoost algorithm using Haar features. We describe the hardware design techniques including image scaling, integral image generation, pipelined processing as well as classifier, and parallel processing multiple classifiers to accelerate the processing speed of the face detection system. Also we discuss the optimization of the proposed architecture which can be scalable for configurable devices with variable resources. The proposed architecture for face detection has been designed using Verilog HDL and implemented in Xilinx Virtex-5 FPGA. Its performance has been measured and compared with an equivalent software implementation. We show about 35 times increase of system performance over the equivalent software implementation."}
{"_id": "59becb4a1d82eb55045115df32afc66a97553713", "title": "Reduction of unbalanced axial magnetic force in post-fault operation of a novel six-phase double-stator axial flux PM machine using model predictrive control", "text": "This paper investigated the post-fault operation of a novel six-phase double-stator axial flux permanent magnet machine with detached winding configuration, which was found to be superior to existing winding configuration in previous study. However, the unbalanced magnetic force problem still remains unsolved. In this paper, the axial force balancing control principle is proposed and a group of specific current waveforms are deduced. When applying these currents under post-fault condition, magnetic torque, axial magnetic force and rotor losses of the machine are calculated in finite element analysis. The results are compared with normal condition and commonly-used post-fault current waveforms. It is verified that this method reduced the unbalanced axial magnetic force immensely and the torque ripple was also kept at a low level. In order to achieve the proposed current waveform, finite control set model predictive control (FCS-MPC) is adopted. This paper proposed the post-fault model of dual three-phase permanent magnet machines and designed a cost function to track the desired current waveforms. The model of the machine is used to predict the future behavior of the controlled variables and the cost function decides the next step of the inverter by evaluating all the predictions. At last, it is verified by simulation results that the control strategy performs well in both dynamic and steady-state situations."}
{"_id": "4ce94ff24cc238440a76598da13361ef4da9e5ed", "title": "The Whale Optimization Algorithm", "text": "This paper proposes a novel nature-inspired meta-heuristic optimization algorithm, called Whale Optimization Algorithm (WOA), which mimics the social behavior of humpback whales. The algorithm is inspired by the bubble-net hunting strategy. WOA is tested with 29 mathematical optimization problems and 6 structural design problems. Optimization results prove that the WOA algorithm is very competitive compared to the state-of-art meta-heuristic algorithms as well as conventional methods. The source codes of the WOA algorithm are publicly available at http://www.alimirjalili.com/WOA.html \u00a9 2016 Elsevier Ltd. All rights reserved."}
{"_id": "8a7a9672b4981e72d6e9206024c758cc047db8cd", "title": "Evolution strategies \u2013 A comprehensive introduction", "text": "This article gives a comprehensive introduction into one of the main branches of evolutionary computation \u2013 the evolution strategies (ES) the history of which dates back to the 1960s in Germany. Starting from a survey of history the philosophical background is explained in order to make understandable why ES are realized in the way they are. Basic ES algorithms and design principles for variation and selection operators as well as theoretical issues are presented, and future branches of ES research are discussed."}
{"_id": "59a3fea1f38c5dd661cc5bfec50add2c2f881454", "title": "A Fast Elitist Non-dominated Sorting Genetic Algorithm for Multi-objective Optimisation: NSGA-II", "text": "Multi-objectiveevolutionaryalgorithmswhichusenon-dominatedsorting andsharinghave beenmainly criticizedfor their (i) computational complexity (where is thenumberof objectivesand is thepopulationsize), (ii) non-elitismapproach, and(iii) theneedfor specifyingasharingparameter . In this paper , we suggesta non-dominatedsortingbasedmulti-objective evolutionaryalgorithm(wecalledit theNon-dominatedSortingGA-II or NSGA-II) which alleviatesall theabove threedifficulties.Specifically, a fastnon-dominatedsorting approachwith computationalcomplexity is presented. Second,a selectionoperatoris presentedwhich createsa mating pool by combiningthe parentandchild populationsandselectingthe best(with respectto fitnessand spread) solutions.Simulationresultson five difficult testproblemsshow that theproposedNSGA-II is ableto find muchbetterspreadof solutionsin all problemscomparedto PAES\u2014anotherelitist multi-objective EA which paysspecial attentiontowardscreatinga diversePareto-optimalfront. Becauseof NSGA-II\u2019s low computational requirements, elitist approach, andparameter -lesssharingapproach,NSGA-II shouldfind increasingapplicationsin theyearsto come."}
{"_id": "945790435ea901e1061df670b59808fe5764c66c", "title": "Pre- and post-processes for automatic colorization using a fully convolutional network", "text": "Automatic colorization is a significant task especially for Anime industry. An original trace image to be colorized contains not only outlines but also boundary contour lines of shadows and highlight areas. Unfortunately, these lines tend to decrease the consistency among all images. Thus, this paper provides a method for a cleaning pre-process of anime dataset to improve the prediction quality of a fully convolutional network, and a refinement post-process to enhance the output of the network."}
{"_id": "09ebd63a1061e1b28c0c1cd005b02a542e58f550", "title": "Random Bits Regression: a Strong General Predictor for Big Data", "text": "* Correspondence: momiao.xiong@gmail.com; Momiao.Xiong@uth.tmc.edu; yin.yao@nih.gov; lijin@fudan.edu.cn Equal contributors Human Genetics Center, School of Public Health, University of Texas Houston Health Sciences Center, Houston, TX, USA Unit on Statistical Genomics, Division of Intramural Division Programs, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA Ministry of Education Key Laboratory of Contemporary Anthropology, Collaborative Innovation Center for Genetics and Development, School of Life Sciences, Fudan University, Shanghai 200433, China Full list of author information is available at the end of the article Abstract"}
{"_id": "a45d4b15ce3adbb9f755377ace8405e7cc90efd5", "title": "Information Theoretic Learning with Infinitely Divisible Kernels", "text": "In this paper, we develop a framework for information theore tic l arning based on infinitely divisible matrices. We formulate an entropy-lik e functional on positive definite matrices based on Renyi\u2019s axiomatic definition of en tropy and examine some key properties of this functional that lead to the conce pt of infinite divisibility. The proposed formulation avoids the plug in estimation of density and brings along the representation power of reproducing kernel Hilbe rt spaces. As an application example, we derive a supervised metric learning algo rithm using a matrix based analogue to conditional entropy achieving results co mparable with the state of the art."}
{"_id": "a44364bf7c9d8d2d4b93637e81cf9d51dc3aae62", "title": "Cyber Security and Power System Communication\u2014Essential Parts of a Smart Grid Infrastructure", "text": "The introduction of \u201csmart grid\u201d solutions imposes that cyber security and power system communication systems must be dealt with extensively. These parts together are essential for proper electricity transmission, where the information infrastructure is critical. The development of communication capabilities, moving power control systems from \u201cislands of automation\u201d to totally integrated computer environments, have opened up new possibilities and vulnerabilities. Since several power control systems have been procured with \u201copenness\u201d requirements, cyber security threats become evident. For refurbishment of a SCADA/EMS system, a separation of the operational and administrative computer systems must be obtained. The paper treats cyber security issues, and it highlights access points in a substation. Also, information security domain modeling is treated. Cyber security issues are important for \u201csmart grid\u201d solutions. Broadband communications open up for smart meters, and the increasing use of wind power requires a \u201csmart grid system\u201d."}
{"_id": "38e49f5ef5a0c2c99e73c2c2ec20c3c216e98e02", "title": "Trustworthiness Attributes and Metrics for Engineering Trusted Internet-Based Software Systems", "text": "Trustworthiness of Internet-based software systems, apps, services and platform is a key success factor for their use and acceptance by organizations and end-users. The notion of trustworthiness, though, is subject to individual interpretation and preference, e.g., organizations require confidence about how their business critical data is handled whereas end-users may be more concerned about usability. As one main contribution, we present an extensive list of software quality attributes that contribute to trustworthiness. Those software quality attributes have been identified by a systematic review of the research literature and by analyzing two real-world use cases. As a second contribution, we sketch an approach for systematically deriving metrics to measure the trustworthiness of software system. Our work thereby contributes to better understanding which software quality attributes should be considered and assured when engineering trustworthy Internet-based software systems."}
{"_id": "b6c99914d3bca48f39ca668e6bf9e1194bf95e12", "title": "The foreign-language effect: thinking in a foreign tongue reduces decision biases.", "text": "Would you make the same decisions in a foreign language as you would in your native tongue? It may be intuitive that people would make the same choices regardless of the language they are using, or that the difficulty of using a foreign language would make decisions less systematic. We discovered, however, that the opposite is true: Using a foreign language reduces decision-making biases. Four experiments show that the framing effect disappears when choices are presented in a foreign tongue. Whereas people were risk averse for gains and risk seeking for losses when choices were presented in their native tongue, they were not influenced by this framing manipulation in a foreign language. Two additional experiments show that using a foreign language reduces loss aversion, increasing the acceptance of both hypothetical and real bets with positive expected value. We propose that these effects arise because a foreign language provides greater cognitive and emotional distance than a native tongue does."}
{"_id": "5dc48a883f0e8ed17e58fbdfdeea7d41fa751578", "title": "Fixed budget quantized kernel least-mean-square algorithm", "text": "We present a quantization-based kernel least mean square (QKLMS) algorithm with a fixed memory budget. In order to deal with the growing support inherent in online kernel methods, the proposed method utilizes a growing and pruning combined technique and defines a criterion, significance, based on weighted statistical contribution of a data center. This method doesn\u2019t need any apriori information and its computational complexity is acceptable, linear with the center number. As we show theoretically and experimentally, the introduced algorithm successfully quantifies the least \u2018significant\u2019 datum and preserves the most important ones resulting in less system error."}
{"_id": "d79dd895912a36670b3477645f361e2fdd73185b", "title": "On Extracting Structured Knowledge from Unstructured Business Documents", "text": "Efficient management of text data is a major concern of business organizations. In this direction, we propose a novel approach to extract structured knowledge from large corpora of unstructured business documents. This knowledge is represented in the form of object instances, which are common ways of organizing the available information about entities, and are modeled here using document templates. The approach itself is based on the observation that a significant fraction of these documents are created using the cut-copy-pastemethod, and thus, it is important to factor this observation into business document analysis projects. Correspondingly, our approach solves the problem of object instance extraction in two steps, namely similarity search and then extraction of object instances from the selected documents. Early qualitative results on a couple of carefully selected document corpora indicate the effective applicability of the approach for solving an important component of the efficient text management problem."}
{"_id": "8bce31108f598986558e9afb1061eb988ea4f3be", "title": "Automated Image Annotation based on YOLOv3", "text": "A typical pedestrian protection system requires sophisticated hardware and robust detection algorithms. To solve these problems the existing systems use hybrid sensors where mono and stereo vision merged with active sensors. One of the most assuring pedestrian detection sensors is far infrared range camera. The classical pedestrian detection approach based on Histogram of oriented gradients is not robust enough to be applied in devices which consumers can trust. An application of deep neural network-based approach is able to perform with significantly higher accuracy. However, the deep learning approach requires a high number of labeled data examples. The investigation presented in this paper aimed the acceleration of pedestrian labeling in far-infrared image sequences. In order to accelerate pedestrian labeling in far-infrared camera videos, we have integrated the YOLOv3 object detector into labeling software. The verification of the pre-labeled results was around eleven times faster than manual labeling of every single frame."}
{"_id": "37a5c952e6e0e3d6d7cd5c30cf4307ee06e4ab5c", "title": "STUDENTS ACCEPTANCE OF MOBILE LEARNING FOR HIGHER EDUCATION IN SAUDI ARABIA", "text": "Mobile learning is the next step in the development of distance learning. Widespread access to mobile devices and the opportunity to learn regardless of time and place make the mobile learning an important tool for lifelong learning. The research objectives are to examine the possibility of acceptance in mobile learning (m-Learning) and study main factors that affect using mLearning that focus on higher education students in Saudi Arabia. The researcher used a quantitative approach survey of 80 students. The modified acceptance framework that based on the Unified Theory of Acceptance and Use of Technology (UTAUT) model is adopted to determine the factors that influence the students\u2019 intention to use m-Learning. The results from statistical analysis show that the acceptance level of students on m-Learning is in the high level."}
{"_id": "8c8a16d379b3dec62771fec8c1cc64e4123c5958", "title": "A digitally compensated 1.5 GHz CMOS/FBAR frequency reference", "text": "A temperature-compensated 1.5 GHz film bulk acoustic wave resonator (FBAR)-based frequency reference implemented in a 0.35 \u00bfm CMOS process is presented. The ultra-small form factor (0.79 mm \u00d7 1.72 mm) and low power dissipation (515 \u00bfA with 2 V supply) of a compensated FBAR oscillator present a promising alternative for the replacement of quartz crystal frequency references. The measured post-compensation frequency drift over a 0-100\u00b0C temperature range is <\u00b110 ppm. The measured oscillator phase noise is -133 dBc/ Hz at 100 kHz offset from the 1.5 GHz carrier."}
{"_id": "6edd1ec57bf0efaf1460ec876d9e794dacf461d1", "title": "A Lightweight Virtualization Solution for Android Devices", "text": "Mobile virtualization has emerged fairly recently and is considered a valuable way to mitigate security risks on Android devices. However, major challenges in mobile virtualization include runtime, hardware, resource overhead, and compatibility. In this paper, we propose a lightweight Android virtualization solution named Condroid, which is based on container technology. Condroid utilizes resource isolation based on namespaces feature and resource control based on cgroups feature. By leveraging them, Condroid can host multiple independent Android virtual machines on a single kernel to support mutilple Android containers. Furthermore, our implementation presents both a system service sharing mechanism to reduce memory utilization and a filesystem sharing mechanism to reduce storage usage. The evaluation results on Google Nexus 5 demonstrate that Condroid is feasible in terms of runtime, hardware resource overhead, and compatibility. Therefore, we find that Condroid has a higher performance than other virtualization solutions."}
{"_id": "9c1dd4348d8aa62e7236ba2a5b265b181ebe58ab", "title": "Why an Intelligence Explosion is Probable", "text": "If a future Artificial Intelligence were to reach the level of human intelligence there is a possibility that it would be able to rapidly redesign itself until its own capabilities far exceeded those of human beings. We analyze the principle factors that might govern the rapidity of this \u2018intelligence explosion\u2019 process. We argue that if degree of intelligence is defined using a relatively uncontroversial measure that involves only the relative speed of thought with respect to that of the human mind, and if an intelligence explosion is defined as a thousandfold increase in speed, then there are compelling reasons to believe that none of the barriers to this process look plausible, and therefore an intelligence explosion is highly likely."}
{"_id": "2e4d61cabfbbeee19a3861a1a8b663252ef673c6", "title": "Combined Star-Delta Winding Analysis", "text": "The combined star-delta winding has been attracting the attention of industry because of its potential to improve the efficiency of electrical machines. In order to show the efficiency benefits of this type of winding, many authors have used methods to determine the improvements in winding factors, published in different papers, which have not been validated yet. Some of these methods contradict each other. Furthermore, as a demerit of this winding, a unidirectional rotation has been reported. In this paper, a set of new formulas to calculate the winding factors for combined star-delta windings is proposed. In order to check the validity of the proposed formulas, a 2-D FEM and experimental tests on a salient pole synchronous machine are presented, which allows different combined star-delta winding connections. In contrast to other publications, the measurements show that combined windings can be applied for both directions of rotation without any problems."}
{"_id": "cff164069f3fe0c24a9b05acfdebab8473be45a0", "title": "Examining Alignment of Frames Using Actor-Network Constructs: The Implementation of an IT Project", "text": "IT projects often cannot succeed without bridging different stakeholders\u2019 perspectives on system design and function during implementation. These perspectives often start as competing technology frames consisting of different beliefs, interests, technology evaluation routines and artifact characteristics. For successful implementation of IT, these competing frames usually need to be stabilized into a dominant frame which is widely accepted. Prior research has offered few clues on the mechanisms that are at work in aligning competing frames. This paper uses a case study to propose mechanisms for aligning frames\u2014namely, two actor-network theory constructs of \u201cblack-boxing\u201d and \u201cobligatory points of passage\u201d\u2014that aid in translating the competing frames into a truce frame. The examined properties of these mechanisms are described and analyzed, thereby extending the literature on technology frames by providing evidence of generic aligning mechanisms."}
{"_id": "167c2993b9d15c4d8ee247f0cc1852359af63d3e", "title": "Swearing, Euphemisms, and Linguistic Relativity", "text": "Participants read aloud swear words, euphemisms of the swear words, and neutral stimuli while their autonomic activity was measured by electrodermal activity. The key finding was that autonomic responses to swear words were larger than to euphemisms and neutral stimuli. It is argued that the heightened response to swear words reflects a form of verbal conditioning in which the phonological form of the word is directly associated with an affective response. Euphemisms are effective because they replace the trigger (the offending word form) by another word form that expresses a similar idea. That is, word forms exert some control on affect and cognition in turn. We relate these findings to the linguistic relativity hypothesis, and suggest a simple mechanistic account of how language may influence thinking in this context."}
{"_id": "bac17e78347a071e2a640a611ef46eac3e8e8d44", "title": "Game-Enhanced Second Language Vocabulary Acquisition Strategies: A Systematic Review", "text": "This paper is a synthesis of 17 qualitative, quantitative, and mixed-method studies published over the last decade. The synthesis suggests that game-enhanced learning provides a set of effective strategies such as language repetitions, contextual clues, interaction with native speakers and peers, and imagery to practice and use second language vocabulary in the authentic context. Some of the strategies such as word lists, dictionaries, and vocabulary exercises are more beneficial when combined with native speakers or peers\u2019 interactions. Due to high interactivity of games, note taking and media strategies provide less support for vocabulary learning. The lack of high quality studies and empirical data makes it difficult to draw conclusions about which video games strategies provide the most benefit. The synthesis of research identifies that generally game-enhanced practices are helpful for second language vocabulary enhancement."}
{"_id": "7ab0fa87dcdd86301ed566f97d8345015e73de21", "title": "Development of wearable and flexible ultrasonic sensor for skeletal muscle monitoring", "text": "A wearable and flexible ultrasonic sensor was developed for monitoring of skeletal muscle contraction. The sensor was constructed using a PVDF piezoelectric polymer film without a matching layer or backing material. Due to its lightness (less than 1 gram), thinness (200 \u03bcm) and flexibility, this sensor was wearable and enabled non-invasive and continuous muscle monitoring without restricting muscle movement, which is not feasible using a conventional handheld ultrasonic probe. The developed sensor was used to monitor muscle contractions in the index finger by ultrasonic pulse-echo measurements and in the forearm by through-transmission measurements. It was successfully demonstrated that the tissue thickness variations were measured in accordance with the muscle contraction performed."}
{"_id": "e397b73a945671c0b05ab421c32fa418cf9744cb", "title": "Overcoming the Bell-Shaped Dose-Response of Cannabidiol by Using Cannabis Extract Enriched in Cannabidiol", "text": "Cannabidiol (CBD), a major constituent of Cannabis, has been shown to be a powerful anti\u2010in\u2010 flammatory and anti\u2010anxiety drug, without exerting a psychotropic effect. However, when given either intraperitoneally or orally as a purified product, a bell\u2010shaped dose\u2010response was observed, which limits its clinical use. In the present study, we have studied in mice the anti\u2010inflammatory and anti\u2010nociceptive activities of standardized plant extracts derived from the Cannabis sativa L., clone 202, which is highly enriched in CBD and hardly contains any psychoactive ingredients. In stark contrast to purified CBD, the clone 202 extract, when given either intraperitoneally or orally, provided a clear correlation between the anti\u2010inflammatory and anti\u2010nociceptive responses and the dose, with increasing responses upon increasing doses, which makes this plant medicine ideal for clinical uses. The clone 202 extract reduced zymosan\u2010induced paw swelling and pain in mice, and prevented TNF\u03b1 production in vivo. It is likely that other components in the extract synergize with CBD to achieve the desired anti\u2010inflammatory action that may contribute to overcoming the bell\u2010shaped dose\u2010response of purified CBD. We therefore propose that Cannabis clone 202 (Avi\u2010 dekel) extract is superior over CBD for the treatment of inflammatory conditions."}
{"_id": "8723604fd4b0f6c84878c90468e825de7fdbf653", "title": "TSC-DL: Unsupervised trajectory segmentation of multi-modal surgical demonstrations with Deep Learning", "text": "The growth of robot-assisted minimally invasive surgery has led to sizable datasets of fixed-camera video and kinematic recordings of surgical subtasks. Segmentation of these trajectories into locally-similar contiguous sections can facilitate learning from demonstrations, skill assessment, and salvaging good segments from otherwise inconsistent demonstrations. Manual, or supervised, segmentation can be prone to error and impractical for large datasets. We present Transition State Clustering with Deep Learning (TSC-DL), a new unsupervised algorithm that leverages video and kinematic data for task-level segmentation, and finds regions of the visual feature space that correlate with transition events using features constructed from layers of pre-trained image classification Deep Convolutional Neural Networks (CNNs). We report results on three datasets comparing Deep Learning architectures (AlexNet and VGG), choice of convolutional layer, dimensionality reduction techniques, visual encoding, and the use of Scale Invariant Feature Transforms (SIFT). We find that the deep architectures extract features that result in up-to a 30.4% improvement in Silhouette Score (a measure of cluster tightness) over the traditional \u201cshallow\u201d features from SIFT. We also present cases where TSC-DL discovers human annotator omissions. Supplementary material, data and code is available at: http://berkeleyautomation.github.io/tsc-dl/."}
{"_id": "929f18130460d562d15db085b87a781a0dd34017", "title": "Effects of concurrent endurance and strength training on running economy and .VO(2) kinetics.", "text": "PURPOSE\nIt has been suggested that endurance training influences the running economy (CR) and the oxygen uptake (.VO(2)) kinetics in heavy exercise by accelerating the primary phase and attenuating the .VO(2) slow component. However, the effects of heavy weight training (HWT) in combination with endurance training remain unclear. The purpose of this study was to examine the influence of a concurrent HWT+endurance training on CR and the .VO(2) kinetics in endurance athletes.\n\n\nMETHODS\nFifteen triathletes were assigned to endurance+strength (ES) or endurance-only (E) training for 14 wk. The training program was similar, except ES performed two HWT sessions a week. Before and after the training period, the subjects performed 1) an incremental field running test for determination of .VO(2max) and the velocity associated (V(.VO2max)), the second ventilatory threshold (VT(2)); 2) a 3000-m run at constant velocity, calculated to require 25% of the difference between .VO(2max) and VT(2), to determine CR and the characteristics of the VO(2) kinetics; 3) maximal hopping tests to determine maximal mechanical power and lower-limb stiffness; 4) maximal concentric lower-limb strength measurements.\n\n\nRESULTS\nAfter the training period, maximal strength were increased (P < 0.01) in ES but remained unchanged in E. Hopping power decreased in E (P < 0.05). After training, economy (P < 0.05) and hopping power (P < 0.001) were greater in ES than in E. .VO(2max), leg hopping stiffness and the .VO(2) kinetics were not significantly affected by training either in ES or E.\n\n\nCONCLUSION\nAdditional HWT led to improved maximal strength and running economy with no significant effects on the .VO(2) kinetics pattern in heavy exercise."}
{"_id": "19fed967f1e482ca94caf75d89c8ec508b180675", "title": "The igraph software package for complex network research", "text": "There is no other package around that satisfies all the following requirements: \u2022Ability to handle large graphs efficiently \u2022Embeddable into higher level environments (like R [6] or Python [7]) \u2022Ability to be used for quick prototyping of new algorithms (impossible with \u201cclick & play\u201d interfaces) \u2022Platform-independent and open source igraph aims to satisfy all these requirements while possibly remaining easy to use in interactive mode as well."}
{"_id": "714b3e765c923b8f62d45b26b1edabebb9f0ca5a", "title": "Recurrence Plots of Dynamical Systems .", "text": "A new graphical tool for measuring the time constancy of dynamical systems is presented and illustrated with typical examples. In recent years a number of methods have been devised to compute dynamical parameters from time series [ 11. Such parameters are the information dimension, entropy, Liapunov exponents, dimension spectrum, etc. In all cases it is assumed that the time series is obtained from an autonomous dynamical system, i .e. the evolution equations do not contain the time explicitly. It is also assumed that the time series is much longer than the characteristic times of the dynamical system. In the present letter we present a new diagnostic tool which we call recurrence plot; this tool tests the above assumptions, and gives useful information also when they are not satisfied. As the examples will show, the information obtained from recurrence plots is often surprising, and not easily obtainable by other methods. Let x( i ) be the i-th point on the orbit describing a dynamical system in d-dimensional space, for i = 1, ..., N . The recurrence plot is an array of dots in a N x N square, where a dot is placed at (i, j) whenever x( j) is sufficiently close to x( i ) . In practice one proceeds as follows to obtain a recurrence plot from a time series {ui}. First, choosing an embedding dimension d, one constructs the d-dimensional orbit of X(i) by the method of time delays (i.e. if the ui are scalar, x(i) = (ui, u ~ + ~ , ..., u ~ + ~ ~ ) ) . Next, one chooses r(i) such that the ball of radius r(i) centred at x (i) in Rd contains a reasonable number of other points x (j) of the orbit (in the examples below, we have chosen a sequence of increasing radii and stopped when we found at least 10 neighbours). In our computation we have allowed the radius r(i) to depend on the point x ( i ) , and r(i) has been selected by a routine used in our alghorithm for the determination of the Liapunov exponents-see ref. [2]. Finally, one plots a dot at each point (i, j) for which x ( j ) is in the ball of radius r(i) centred at x( i ) . We call this picture a recurrence plot. Note that the i, j are in fact times: therefore a recurrence plot describes natural (but subtle) time correlation information. Recurrence plots tend to be fairly symmetric with respect to the diagonal i =j because if x ( i ) is close to x(j), then x ( j ) is close to x( i ) . There is, however, no complete symmetry because we do not require r(i) = r ( j ) . 974 EUROPHYSICS LETTERS Fig. 1. A scatter plot for 20000 iterates of the Henon map (x, y)+ (1 1.4x2+ y, 0 . 3 ~ ) . The embedding dimension is 8. In this figure, and in figs. 2 and 4, the diagonal is added. We shall now analyse some recurrence plots obtained from experimental or computergenerated time series. We distinguish-somewhat arbitrarily-between large-scale typology and small-scale textuye. For an autonomous dynamical system, if all the characteristic times are short compared to the length of the time series, the typology of the recurrence plot is homogeneous. This means that the overall pattern is uniformly grey although at small scale nontrivial texture may be visible (see below). An example of a homogeneous recurrence plot is provided by the Henon map, as exhibited in fig. 1. Another typology is provided by the time evolution with drif t , i .e. by dynamical systems which are not quite homogeneous, but contain adiabatically (slowly) varying parameters [3]. 
To get an example we started with a time series obtained from the Lorenz system, and added a term varying linearly with time (from 0 to 10 percent of the amplitude of the original signal). The corresponding recurrence plot is our fig. 2. The rich structure of this plot is due to the Lorenz system itself and not to the drift; the main effect of the drift is to make the plot somewhat paler away from the diagonal and darker near it (progressive decorrelation at large time intervals). Since the human eyehrain is not very good at seeing small variations of overall darkness, a histogram of darkness as a function of i j is given in fig. 3; this shows unquestionably the existence of a drift. Our next recurrence plot, fig. 4, shows what might be called a periodic typology; it is obtained from an experimental time series communicated by Ciliberto and described in ref. [21. This striking picture shows that-after a transient-the system goes into slow oscillations superimposed on the chaotic motion which is otherwise known to exist (2 positive characteristic exponents have been found in ref[2]). These slow oscillations (which are not quite periodic) are not easily recognizable on the original time series (and certainly have gone unrecognized in the original analysis). J.-P. ECKMANN et al.: RECURRENCE PLOTS OF DYNAMICAL SYSTEMS 975"}
{"_id": "06e113d1d82d1e4f30432660b73a6979ee47b83f", "title": "Centrality in Social Networks Conceptual Clarification", "text": "The intuitive background for measures of structural centrality in social networks is reviewed aPzd existing measures are evaluated in terms of their consistency with intuitions and their interpretability. Three distinct intuitive conceptions of centrality are uncovered and existing measures are refined to embody these conceptions. Three measures are developed for each concept, one absolute and one relative measure of the ~entra~~t~~ of ~os~tio~ls in a network, and one relenting the degree of centralization of the entire network. The implications of these measures for the experimental study of small groups is examined."}
{"_id": "5ce64de5b87da6365b7a718d3bfdae62f3930286", "title": "Graph Drawing by Force-directed Placement", "text": "A graph G = (V,E) is a set V of vertices and a set E of edges, in which an edge joins a pair of vertices. 1 Normally, graphs are depicted with their vertices as points in a plane and their edges as line or curve segments connecting those points. There are different styles of representation, suited to different types of graphs or different purposes of presentation. We concentrate on the most general class of graphs: undirected graphs, drawn with straight edges. In this paper, we introduce an algorithm that attempts to produce aesthetically-pleasing, two-dimensional pictures of graphs by doing simplified simulations of physical systems. We are concerned with drawing undirected graphs according to some generally accepted aesthetic criteria: 2"}
{"_id": "c4e9b1a51ce4885204a5a9ee53504e7d8f5c6f6e", "title": "SnakeFighter - Development of a Water Hydraulic Fire Fighting Snake Robot", "text": "This paper presents the SnakeFighter concept and describes the generic element within this concept in the form of a water hydraulic snake robot. Applications of a SnakeFighter system are presented with focus on fire intervention tasks. The development of a water hydraulic snake robot that demonstrates the concept is described. The robot is the first water hydraulic snake robot ever constructed. The paper identifies design challenges of a complete SnakeFighter system and describes future research on this concept"}
{"_id": "acfc2513cbad049ca41fc920fcdab72a91c2b4ef", "title": "The role played by the concept of presence in validating the efficacy of a cybertherapy treatment: a literature review", "text": "The present paper considers the existing research in cybertherapy, which is a psychological therapy carried out with the use of a mediated environment, and examines the way in which the users\u2019 sense of presence in the mediated environment can be of relevance for the validation of the intervention. With this purpose, a collection of 41 papers reporting the measurement of presence in the context of a cybertherapy treatment has been identified and examined. The general relevance of presence in cybertherapy and the measurement techniques adopted in the studies collected here are described and discussed. The way in which presence corresponds to establishing internal validity, convergent or predictive validity and external validity of a treatment is examined. In conclusion, a checklist to apply when planning a validation study is proposed, to improve the way in which presence is used."}
{"_id": "a761f26f8239acd88fc83787f28a7f2d2ff9ea22", "title": "Procedural Embedding of knowledge in Planner", "text": "Since the l a s t I JCAI , the PLANNER problem s o l v i n g fo rma l i sm has con t i nued to deve lop . Our ep ls temo lgy f o r the f ounda t i ons f o r problem s o l v i n g has been ex tended . An overv iew of the f o rma l i sm Is g iven from an I n f o r m a t i o n p rocess ing v i e w p o i n t . A s imp le example Is e x p l a i n e d us ing snapshots of the s t a t e of the problem s o l v i n g as the example I s worked, f i n a l l y , c u r r e n t a p p l i c a t i o n s f o r the fo rma l i sm are I i s ted . 1 . The S t r u c t u r a l Foundat ions o f Problem S o l v i n g We would l i k e to develop a f o u n d a t i o n f o r problem s o l v i n g analogous In some ways to the c u r r e n t l y e x i s t i n g f ounda t i ons f o r mathemat ics . Thus we need to analyze the s t r u c t u r e of f o u n d a t i o n s f o r mathemat ics . A f o u n d a t i o n f o r mathematics must p rov ide a d e f i n i t i o n a l f o rma l i sm In which mathemat ica l o b j e c t s can be d e f i n e d and t h e i r e x i s t e n c e p roved . For example set t heo ry as a f o u n d a t i o n p rov ides t h a t o b j e c t s must be b u i l t out o f s e t s . Then the re must be a deduc t i ve f o r m a l i s m In which fundamental t r u t h s can be s t a t e d and the means p rov ided to deduce a d d i t i o n a l t r u t h s f rom those a l r e a d y e s t a b l i s h e d . Cur rent mathemat ica l f ounda t i ons such as set theo ry seem q u i t e n a t u r a l and adequate f o r the vas t body o f c l a s s i c a l mathemat ics . The o b j e c t s and reason ing of most mathemat ica l domains such as a n a l y s i s and a lgebra can be e a s i l y founded on set t h e o r y . The e x i s t e n c e o f c e r t a i n a s t r o n o m i c a l l y l a r g e c a r d i n a l s poses some problems f o r set t h e o r e t i c f o u n d a t i o n s . However, the problems posed seem to be of p r a c t i c a l Importance on l y t o c e r t a i n ca tego ry t h e o r i s t s . Foundat ions o f mathemat ics have devoted a g rea t deal o f a t t e n t i o n to the problems o f c o n s i s t e n c y and comple teness. The problem of cons i s t ency Is Impor tant s i nce I f the f ounda t i ons are i n c o n s i s t e n t then any fo rmu la whatsoever may be deduced thus t r i v i a l i z i n g the f o u n d a t i o n s . Semantics f o r f ounda t i ons o f mathemat ics are d e f i n e d model 167 t h e o r e t i c a l l y In terms o f the n o t i o n o f s a t i s f i a b i l i t y . The problem o f completeness is t h a t , f o r a f o u n d a t i o n of mathematics to be I n t u i t i v e l y s a t i s f a c t o r y a l l the t r u e fo rmu las shou ld be proveab le s ince a f o u n d a t i o n mathemat ics alms to be a theory of mathemat ica l t r u t h . S i m i l a r fundamental ques t i ons must be faced by a f o u n d a t i o n f o r problem s o l v i n g . However t he re are some impor tant d i f f e r e n c e s s ince a f o u n d a t i o n of problem s o l v i n g alms more to be a theory of a c t i o n s and purposes than a theory of mathemat ica l t r u t h . A f o u n d a t i o n f o r problem s o l v i n g must s p e c i f y a g o a l o r l e n t e d f o rma l i sm In which problems can be s t a t e d . Fur thermore the re must be a f o rma l i sm f o r s p e c i f y i n g the a l l o w a b l e methods o f s o l u t i o n of p rob lems. 
As p a r t o f the d e f i n i t i o n o f the f o r m a l i s m s , the f o l l o w i n g elements must be d e f i n e d : the data s t r u c t u r e , the c o n t r o l s t r u c t u r e , and the p r i m i t i v e p rocedures . The problem of what are a l l o w a b l e data s t r u c t u r e s f o r f a c t s about the wo r l d Immediate ly a r i s e s . Being a theory of a c t i o n s , a f o u n d a t i o n f o r problem s o l v i n g must c o n f r o n t the problem of change: How can account be taken of the changing s i t u a t i o n In the wor ld? In o rder f o r the re to be problem s o l v i n g , the re must be a problem s o l v e r . A f o u n d a t i o n f o r problem s o l v i n g must cons ide r how much knowledge and what k ind of knowledge problem s o l v e r s can have about themse lves . In c o n t r a s t to the f o u n d a t i o n o f mathemat ics , the semant ics f o r a f o u n d a t i o n f o r problem s o l v i n g should be d e f i n e d In terms of p r o p e r t i e s of p rocedures . We would l i k e to see mathemat ica l I n v e s t i g a t i o n s on the adequacy of the f o u n d a t i o n s f o r problem s o l v i n g p rov ided by PLANNER. In chap te r C of the d i s s e r t a t i o n , we have made the beg inn ings of one k ind of such an I n v e s t i g a t i o n . To be more s p e c i f i c a f o u n d a t i o n f o r problem s o l v i n g must concern I t s e l f w i t h the f o l l o w i n g complex o f t o p i c s . PROCEDURAL EMBEDDING: How can \" r e a l w o r l d \" knowledge be e f f e c t i v e l y embedded In p rocedures . What are good ways to express problem s o l u t i o n methods and how can p lans f o r the s o l u t i o n of problems be fo rmu la ted? GENERALIZED COMPILATION: What are good methods f o r t r a n s f o r m i n g h igh l e v e l g o a l o r l e n t e d language in to s p e c i f i c e f f i c i e n t a l g o r i t h m s ."}
{"_id": "911e6ffe9e9e43201ef9328b5a9db379fdd25af6", "title": "A high gain multi-port bidirectional non-isolated DC-DC converter for renewable integration", "text": "A topology for bidirectional non-isolated multiport DC-DC converter is proposed in this paper. The energy generation from the renewable sources such as wind and solar aie intermittent in nature. A battery port is mandatory for this purpose. Hence, a high gain multiport bidirectional converter for DC micro-grid application with high conversion efficiency, reduced size, and low cost is proposed. The bidirectional power flow capability through a feedback back path to charge a battery from the load capacitor is also provided. Therefore, the battery is not only charged from the input source but also from the load capacitor. The number of outputs are independent from each other and the battery is charged from the second output. Steady state operation and theoretical analysis of the converter proposed is elaborated. To validate the performance and theoretical concepts, it is simulated in MATLAB/Simulink software. The results of simulation verify the appropriate performance of proposed topology."}
{"_id": "e9a1e9dc719567d67126eaec060c9b744ddfb3d9", "title": "Solid state circuit breaker protection devices for DC microgrid in review", "text": "The advent of alternating current (AC) is due to its ability to transmit power over long distances from transformers, does not mark the end of utilizing direct current (DC). The development of renewable energy sources is because of high usage of DC because the generated power is directed to the main grid for conversion into electric power and the source of energy cannot tolerate high voltage. Fault detection is a major problem that arises when an excess amount of current flows in the circuit leading to the necessity of having circuit breakers. Circuit breakers are made of light materials, which creates a problem for dealing with high voltage DC applications with the protection of circuit breakers have become popularized, and different papers have outlined various studies on the effectiveness of Solid State Circuit Breakers (SSCBs) protection for a direct current microgrid. This paper reviews the benefits and shortfalls of The Wide bandgap (WBG) SSCBs and its application with photovoltaic (PV) generators."}
{"_id": "fe7afa011143467fb40d382480566bdd05b78da1", "title": "Patients' Experience After a Fall and Their Perceptions of Fall Prevention: A Qualitative Study.", "text": "An exploratory descriptive study was conducted to explore the perspectives of patients who had fallen in the hospital; 100 patients were interviewed. An inductive content analysis approach was adopted. Six themes emerged: Apathetic toward falls, self-blame behavior, reluctance to impose on busy nurses, negative feelings toward nurses, overestimating own ability, and poor retention of information. Patients often downplayed the risks of falls and were reluctant to call for help."}
{"_id": "d409d8978034de5e5e8f9ee341d4a00441e3d05f", "title": "Annual research review: re-thinking the classification of autism spectrum disorders.", "text": "BACKGROUND\nThe nosology of autism spectrum disorders (ASD) is at a critical point in history as the field seeks to better define dimensions of social-communication deficits and restricted/repetitive behaviors on an individual level for both clinical and neurobiological purposes. These different dimensions also suggest an increasing need for quantitative measures that accurately map their differences, independent of developmental factors such as age, language level and IQ.\n\n\nMETHOD\nPsychometric measures, clinical observation as well as genetic, neurobiological and physiological research from toddlers, children and adults with ASD are reviewed.\n\n\nRESULTS\nThe question of how to conceptualize ASDs along dimensions versus categories is discussed within the nosology of autism and the proposed changes to the DSM-5 and ICD-11. Differences across development are incorporated into the new classification frameworks.\n\n\nCONCLUSIONS\nIt is crucial to balance the needs of clinical practice in ASD diagnostic systems, with neurobiologically based theories that address the associations between social-communication and restricted/repetitive dimensions in individuals. Clarifying terminology, improving description of the core features of ASD and other dimensions that interact with them and providing more valid and reliable ways to quantify them, both for research and clinical purposes, will move forward both practice and science."}
{"_id": "7dc70e8889efbb31f9f3bc2c2ab68bff101102b4", "title": "Efficacy of botulinum toxin A in upper limb function of hemiplegic patients", "text": "Botulinum toxin A has been reported to reduce spasticity and increase the comfort of hemiplegic patients. The aim of this study was to assess the efficacy of the treatment on disability, especially in manual activities, and to attempt to identify predictive factors of improvement. Twenty patients (mean age: 54.4 years; M: 14; right hemiplegia: 12) were included, with a delay of at least three months after unilateral hemispheric stroke. Botulinum toxin A (BOTOX\u00ae) was injected into the arm adductors (8 cases), forearm flexors (17 cases), pronators, wrist and finger flexors (20 cases), with a total dose of 200 to 300 U. Examination (day 1 and 15, month 2 and 5) consisted of spasticity assessment (modified Ashworth scale), muscle strength, passive range of motion (goniometry), and pain, followed by functional tests, especially the Rivermead Motor Assessment (RMA) and Nine-hole Peg Test (NHPT). Performance in daily living was assessed with the Functional Independence Measure (FIM), and an original analysis of hand grasp, grip and pinches used in domestic activities (9 items), and of comfort of patients and caregivers. Significant reduction in spasticity was observed on the elbow flexors, pronators, wrist and fingers flexors, especially at day 15 (mean 0.90 to 1 point), with wide variations in effect. Muscle strength was increased in wrist and fingers extensors, with concomitant increase in the opening of the thumb to index finger space. There was no effect on the NHPT requiring distal manipulation, but the RMA, which especially concerned picking up and releasing a tennis ball, showed significant improvement. Furthermore, use of the upper limb in daily living increased, particularly for internal grasping of objects, and for grasping by the top, transporting and releasing of objects. Patients and caregivers reported facilitation in dressing, and in proximal and distal care of the upper limb. The global flexor position of the limb improved. Adverse reactions were rare and mostly consisted of transitory pain during injection. The improvement in the RMA was better explained by the quality of the initial motor command on distal prehension (positive correlation with motor strength), and that in hand using in domestic activities by a lower level of spasticity on pronators and wrist flexors (negative correlations with spasticity). Conversely, the severity of the motor deficit (negative correlations with motor strength) and a high level of spasticity before injection (positive correlations with spasticity) mostly explained the improvement in comfort. In conclusion, botulinum toxin A is efficient in improving hand use in patients with relatively preserved distal motricity, and in increasing comfort in patients with severe global disorders."}
{"_id": "f484d068057d9e7e1b7e7764087373a41da177cf", "title": "Use of Virtual Reality Feedback for Patients with Chronic Neck Pain and Kinesiophobia", "text": "This study examined how individuals with and without neck pain performed exercises under the influence of altered visual feedback in virtual reality. Chronic neck pain (n = 9) and asymptomatic (n = 10) individuals were recruited for this cross-sectional study. Participants performed head rotations while receiving programmatically manipulated visual feedback from a head-mounted virtual reality display. The main outcome measure was the control-display gain (ratio between actual head rotation angle and visual rotation angle displayed) recorded at the just-noticeable difference. Actual head rotation angles were measured for different gains. Detection of the manipulated visual feedback was affected by gain. The just-noticeable gain for asymptomatic individuals, below and above unity gain, was 0.903 and 1.159, respectively. Head rotation angle decreased or increased 5.45\u00b0 for every 0.1 increase or decrease in gain, respectively. The just-noticeable gain for chronic pain individuals, below unity gain, was 0.950. The head rotation angle increased 4.29\u00b0 for every 0.1 decrease in gain. On average, chronic pain individuals reported that neck rotation was feasible for 84% of the unity gain trials, 66% of the individual just-noticeable difference trials, and 50% of the \u201cnudged\u201d just-noticeable difference trials. This research demonstrated that virtual reality may be useful for promoting the desired outcome of increased range of motion in neck rehabilitation exercises by altering visual feedback."}
{"_id": "d9fa14d9a914ed5f899d48dba005a82160f58f56", "title": "Detailed Garment Recovery from a Single-View Image", "text": "Most recent garment capturing techniques rely on acquiring multiple views of clothing, which may not always be readily available, especially in the case of pre-existing photographs from the web. As an alternative, we propose a method that is able to compute a rich and realistic 3D model of a human body and its outfits from a single photograph with little human interaction. Our algorithm is not only able to capture the global shape and geometry of the clothing, it can also extract small but important details of cloth, such as occluded wrinkles and folds. Unlike previous methods using full 3D information (i.e. depth, multi-view images, or sampled 3D geometry), our approach achieves detailed garment recovery from a single-view image by using statistical, geometric, and physical priors and a combination of parameter estimation, semantic parsing, shape recovery, and physicsbased cloth simulation. We demonstrate the effectiveness of our algorithm by re-purposing the reconstructed garments for virtual try-on and garment transfer applications, as well as cloth animation for digital characters."}
{"_id": "f777bd8cfa6c40bbf7096367deaa60a30a8ec29a", "title": "SEMI-AUTOMATIC EXTRACTION OF RIBBON ROADS FORM HIGH RESOLUTION REMOTELY SENSED IMAGERY BY COOPERATION BETWEEN ANGULAR TEXTURE SIGNATURE AND TEMPLATE MATCHING", "text": "Road tracking is a promising technique to increase the efficiency of road mapping. In this paper an improved road tracker, based on cooperation between angular texture signature and template matching, is presented. Our tracker uses parabola to model the road trajectory and to predict the position of next road centreline point. It employs angular texture signature to get the exact moving direction of current road centreline point, and moves forward one predefined step along the direction to reach a new position, and then uses curvature change to verify the new added road point whether right enough. We also build compactness of angular texture signature polygon to check whether the angular texture signature is suitable to be used to go on tracking. When angular texture signature fails, least squares template matching is then employed instead. Cooperation between angular texture signature and template matching can reliably extract continuous and homogenous ribbon roads on high resolution remotely sensed imagery. \u2217 Corresponding author"}
{"_id": "68603a9372f4e9194ab09c4e585e3150b4025e97", "title": "Female Pattern Hair Loss: a clinical and pathophysiological review*", "text": "Female Pattern Hair Loss or female androgenetic alopecia is the main cause of hair loss in adult women and has a major impact on patients' quality of life. It evolves from the progressive miniaturization of follicles that lead to a subsequent decrease of the hair density, leading to a non-scarring diffuse alopecia, with characteristic clinical, dermoscopic and histological patterns. In spite of the high frequency of the disease and the relevance of its psychological impact, its pathogenesis is not yet fully understood, being influenced by genetic, hormonal and environmental factors. In addition, response to treatment is variable. In this article, authors discuss the main clinical, epidemiological and pathophysiological aspects of female pattern hair loss."}
{"_id": "9ddb4277e87447b872e8a85852a0221e09590124", "title": "Learning Fine-Grained Knowledge about Contingent Relations between Everyday Events", "text": "Much of the user-generated content on social media is provided by ordinary people telling stories about their daily lives. We develop and test a novel method for learning fine-grained common-sense knowledge from these stories about contingent (causal and conditional) relationships between everyday events. This type of knowledge is useful for text and story understanding, information extraction, question answering, and text summarization. We test and compare different methods for learning contingency relation, and compare what is learned from topic-sorted story collections vs. general-domain stories. Our experiments show that using topicspecific datasets enables learning finergrained knowledge about events and results in significant improvement over the baselines. An evaluation on Amazon Mechanical Turk shows 82% of the relations between events that we learn from topic-sorted stories are judged as contingent."}
{"_id": "fe90653bdebfc729e1f926eb9d2d9d2f31042cc1", "title": "Distributional Semantic Models of Attribute Meaning in Adjectives and Nouns", "text": "Attributes such as size, weight or color are at the core of conceptualization, i.e., the formal representation of entities or events in the real world. In natural language, formal attributes find their counterpart in attribute nouns which can be used in order to generalize over individual properties (e.g., big or small in case of size, blue or red in case of color). In order to ascribe such properties to entities or events, adjectivenoun phrases are a very frequent linguistic pattern (e.g., a blue shirt, a big lion). In these constructions, attribute meaning is conveyed only implicitly, i.e., without being overtly realized at the phrasal surface. This thesis is about modeling attribute meaning in adjectives and nouns in a distributional semantics framework. This implies the acquisition of meaning representations for adjectives, nouns and their phrasal combination from corpora of natural language text in an unsupervised manner, without tedious handcrafting or manual annotation efforts. These phrase representations can be used to predict implicit attribute meaning from adjective-noun phrases \u2013 a problem which will be referred to as attribute selection throughout this thesis. The approach to attribute selection proposed in this thesis is framed in structured distributional models. We model adjective and noun meanings as distinct semantic vectors in the same semantic space spanned by attributes as dimensions of meaning. Based on these word representations, we make use of vector composition operations in order to construct a phrase representation from which the most prominent attribute(s) being expressed in the compositional semantics of the adjective-noun phrase can be selected by means of an unsupervised selection function. This approach not only accounts for the linguistic principle of compositionality that underlies adjective-noun phrases, but also avoids inherent sparsity issues that result from the fact that the relationship between an adjective, a noun and a particular attribute is rarely explicitly observed in corpora. The attribute models developed in this thesis aim at a reconciliation of the conflict between specificity and sparsity in distributional semantic models. For this purpose, we compare various instantiations of attribute models capitalizing on pattern-based and dependency-based distributional information as well as attribute-specific latent topics induced from a weakly supervised adaptation of Latent Dirichlet Allocation. Moreover, we propose a novel framework of distributional enrichment in order to enhance structured vector representations by incorporating additional lexical information from complementary distributional sources. In applying distributional enrichment to distributional attribute models, we follow the idea to augment structured"}
{"_id": "937d5bf28040bdf9cbb3c1a47da8853118eb49a7", "title": "What exactly is viral marketing ?", "text": "So what is viral marketing? In 1997, when we first coined the term in a Netscape newsletter (www.dfj.com/viralmarketing), we used several examples to illustrate the phenomenon, defined loosely as \u201cnetwork-enhanced word of mouth.\u201d Its original inspiration came from the adoption pattern of Hotmail, the Webbased email service that launched in 1996. Tim Draper persuaded Hotmail to include a promotional pitch with a clickable URL in every message sent by a Hotmail user. Therein lay one of the critical elements of viral marketing: every customer becomes an involuntary salesperson simply by using the product."}
{"_id": "5a7ffe0cf842cd6a7f12b912bcc82ac4f642fab7", "title": "Plan-Based Dialogue Management in a Physics Tutor", "text": "This paper describes an application of APE (the Atlas Planning Engine), an integrated planning and execution system at the heart of the Atlas dialogue management system. APE controls a mixedinitiative dialogue between a human user and a host system, where turns in the \u2018conversation\u2019 may include graphical actions and/or written text. APE has full unification and can handle arbitrarily nested discourse constructs, making it more powerful than dialogue managers based on finitestate machines. We illustrate this work by describing Atlas-Andes, an intelligent tutoring system built using APE with the Andes physics tutor as the host."}
{"_id": "1e49c7a6391744dee8260e836cb33300c00dfb9e", "title": "Intelligent Unmanned Air Vehicle Flight Systems", "text": "This paper describes an intelligent autonomous airborne flight capability that is being used as a test bed for future technology development. The unmanned air vehicles (UAVs) fly under autonomous control of both an onboard computer and an autopilot. The onboard computer provides the mission control and runs the autonomous Intelligent Controller (IC) software while the autopilot controls the vehicle navigation and flight control. The autonomous airborne flight system is described in detail. An IC architecture directly applicable to the design of unmanned vehicles is also presented. The UAVs may operate independently or in cooperation with one another to carry out a specified mission. The intelligent UAV flight system is used to evaluate and study autonomous UAV control as well as multi-vehicle collaborative control."}
{"_id": "b34626d58b29f507f9186637a7260202c324e8f8", "title": "Relationship between smartphone addiction and physical activity in Chinese international students in Korea", "text": "BACKGROUND AND AIMS\nExcessive usage of smartphones may induce social problems, such as depression and impairment of social and emotional functioning. Moreover, its usage can impede physical activity, but the relationship between smartphone addiction and physical activity is obscure. Therefore, we examined the relationship and the impact of excessive smartphone use on physical activity.\n\n\nMETHODS\nThis study collected data through the structured questionnaire consisting of general characteristics, the number and hours of smartphone usage, and the Smartphone Addiction Proneness Scale (SAPS) from 110 Chinese international students in Korea. The body composition and physical activity, such as the total daily number of steps and consumed calories, were measured.\n\n\nRESULTS\nIn this study, high-risk smartphone users showed less physical activity, such as the total number of steps taken and the average consumed calories per day. Moreover, their body composition, such as muscle mass and fat mass, was significantly different. Among these factors, the hours of smartphone use revealed the proportional relationship with smartphone addiction (\u03b2 = 0.209, p = 0.026), while the average number of walking steps per day showed a significant reverse proportional tendency in participants with smartphone addiction (\u03b2 = -0.883, p < 0.001).\n\n\nCONCLUSIONS\nParticipants with smartphone addiction were less likely to walk for each day. Namely, smartphone addiction may negatively influence physical health by reducing the amount of physical activity, such as walking, resulting in an increase of fat mass and a decrease of muscle mass associated with adverse health consequences."}
{"_id": "6bfa78fcb47ea38b7ca20461f4811a6189afff18", "title": "Snakes and Strings: New Robotic Components for Rescue Operations", "text": "The Japanese government is establishing an International Rescue Complex to promote research and development of key technologies for realization of practical search-and-rescue robots, anticipating for future large-scale earthquakes and other catastrophic disasters. This paper proposes a new paradigm called \u201csnakes and strings\u201d, for developing practical mobile robot systems that may be useful in such situations. \u201cSnakes\u201d stands for snake-like robots, which can skillfully move among the debris of the collapsed buildings. \u201cStrings\u201d, on the other hand, means robotic systems using strings or tethers, such as proposed in the \u201chyper-tether\u201d research [9]. Theters can continuously supply energy, accomplish reliable communication link, and also exhibit high traction force. This paper will present many new mechanical implementations of snake-like robots developed in our lab., and also explain in detail the new paradigm."}
{"_id": "36b55b943d53e58412b4950a0d183d087c230d4e", "title": "The Traveling Salesman Problem for Cubic Graphs", "text": "We show how to find a Hamiltonian cycle in a graph of degree at most three with n vertices, in time O(2) \u2248 1.260 and linear space. Our algorithm can find the minimum weight Hamiltonian cycle (traveling salesman problem), in the same time bound. We can also count or list all Hamiltonian cycles in a degree three graph in time O(2) \u2248 1.297. We also solve the traveling salesman problem in graphs of degree at most four, by randomized and deterministic algorithms with runtime O((27/4)) \u2248 1.890 and O((27/4+ \u01eb)) respectively. Our algorithms allow the input to specify a set of forced edges which must be part of any generated cycle. Our cycle listing algorithm shows that every degree three graph has O(2) Hamiltonian cycles; we also exhibit a family of graphs with 2 Hamiltonian cycles per graph. Article Type Communicated by Submitted Revised Regular paper J. S. B. Mitchell April 2004 Work supported in part by NSF grant CCR-9912338. A preliminary version of this paper appeared at the 8th Annual Workshop on Algorithms and Data Structures,"}
{"_id": "21c1f5927e3064a645b48ba27b0a365ba59a7b87", "title": "Overcoming the Brittleness Bottleneck using Wikipedia: Enhancing Text Categorization with Encyclopedic Knowledge", "text": "When humans approach the task of text categorization, they interpret the specific wording of the document in the much larger context of their background knowledge and experience. On the other hand, state-of-the-art information retrieval systems are quite brittle\u2014they traditionally represent documents as bags of words, and are restricted to learning from individual word occurrences in the (necessarily limited) training set. For instance, given the sentence \u201cWal-Mart supply chain goes real time\u201d, how can a text categorization system know that Wal-Mart manages its stock with RFID technology? And having read that \u201cCiprofloxacin belongs to the quinolones group\u201d, how on earth can a machine know that the drug mentioned is an antibiotic produced by Bayer? In this paper we present algorithms that can do just that. We propose to enrich document representation through automatic use of a vast compendium of human knowledge\u2014an encyclopedia. We apply machine learning techniques to Wikipedia, the largest encyclopedia to date, which surpasses in scope many conventional encyclopedias and provides a cornucopia of world knowledge. Each Wikipedia article represents a concept, and documents to be categorized are represented in the rich feature space of words and relevant Wikipedia concepts. Empirical results confirm that this knowledge-intensive representation brings text categorization to a qualitatively new level of performance across a diverse collection of datasets."}
{"_id": "259d0304adcb49e40436137684b78a80c9ef097b", "title": "The Generative Lexicon", "text": "In this paper, I will discuss four major topics relating to current research in lexical semantics: methodology, descriptive coverage, adequacy of the representation, and the computational usefulness of representations. In addressing these issues, I will discuss what I think are some of the central problems facing the lexical semantics community, and suggest ways of best approaching these issues. Then, I will provide a method for the decomposition of lexical categories and outline a theory of lexical semantics embodying a notion of cocompositionality and type coercion, as well as several levels of semantic description, where the semantic load is spread more evenly throughout the lexicon. I argue that lexical decomposition is possible if it is performed generatively. Rather than assuming a fixed set of primitives, I will assume a fixed number of generative devices that can be seen as constructing semantic expressions. I develop a theory of Qualia Structure, a representation language for lexical items, which renders much lexical ambiguity in the lexicon unnecessary, while still explaining the systematic polysemy that words carry. Finally, I discuss how individual lexical structures can be integrated into the larger lexical knowledge base through a theory of lexical inheritance. This provides us with the necessary principles of global organization for the lexicon, enabling us to fully integrate our natural language lexicon into a conceptual whole."}
{"_id": "3c398007c04eb12c0b7417f5d135919a300a470d", "title": "Centroid-Based Document Classification: Analysis and Experimental Results", "text": "In recent years we have seen a tremendous growth in the volume of text documents available on the Internet, digital libraries, news sources, and company-wide intrane ts. Automatic text categorization, which is the task of assigning text documents to pre-specified classes (topics o r themes) of documents, is an important task that can help both in organizing as well as in finding information on these h uge resources. Text categorization presents unique challenges due to the large number of attributes present in t he data set, large number of training samples, and attribute dependencies. In this paper we focus on a simple linear-time centroid-based document classification algorithm, that despite its simplicity and robust performance, has not been extensively studied and analyzed. Our extensive experiments show that this centroid-based classifier consi stently and substantially outperforms other algorithms su ch as Naive Bayesian, k-nearest-neighbors, and C4.5, on a wide range of datasets. O ur analysis shows that the similarity measure used by the centroid-based scheme allows it to class ify new document based on how closely its behavior matches the behavior of the documents belonging to differen t classes, as measured by the average similarity between the documents. This matching allows it to dynamically adjus t for classes with different densities. Furthermore, our analysis shows that the similarity measure of the centroidbased scheme accounts for dependencies between the terms in the different classes. We believe that this feature is the reason why it consistently outperforms other classifiers th at cannot take these dependencies into account."}
{"_id": "4e89ac6de1ed1c63f26168b1afea9b64e0c766f4", "title": "Semantic Similarity in a Taxonomy: An Information-Based Measure and its Application to Problems of Ambiguity in Natural Language", "text": "This article presents a measure of semantic similarity in an is-a taxonomy based on the notion of shared information content. Experimental evaluation against a benchmark set of human similarity judgments demonstrates that the measure performs better than the traditional edge-counting approach. The article presents algorithms that take advantage of taxonomic similarity in resolving syntactic and semantic ambiguity, along with experimental results demonstrating their e ectiveness."}
{"_id": "87ae824d0e87b19939c373b415814cce85778f44", "title": "Vaginal orgasm is associated with vaginal (not clitoral) sex education, focusing mental attention on vaginal sensations, intercourse duration, and a preference for a longer penis.", "text": "INTRODUCTION\nEvidence was recently provided for vaginal orgasm, orgasm triggered purely by penile-vaginal intercourse (PVI), being associated with better psychological functioning. Common sex education and sexual medicine approaches might undermine vaginal orgasm benefits.\n\n\nAIMS\nTo examine the extent to which women's vaginal orgasm consistency is associated with (i) being told in childhood or adolescence that the vagina was the important zone for inducing female orgasm; (ii) how well they focus mentally on vaginal sensations during PVI; (iii) greater PVI duration; and (iv) preference for above-average penis length.\n\n\nMETHODS\n\u2002 In a representative sample of the Czech population, 1,000 women reported their vaginal orgasm consistency (from never to almost every time; only 21.9% never had a vaginal orgasm), estimates of their typical foreplay and PVI durations, what they were told in childhood and adolescence was the important zone for inducing female orgasm, their degree of focus on vaginal sensations during PVI, and whether they were more likely to orgasm with a longer than average penis.\n\n\nMAIN OUTCOME MEASURES\nThe association of vaginal orgasm consistency with the predictors noted above.\n\n\nRESULTS\nVaginal orgasm consistency was associated with all hypothesized correlates. Multivariate analysis indicated the most important predictors were being educated that the vagina is important for female orgasm, being mentally focused on vaginal sensations during PVI, and in some analyses duration of PVI (but not foreplay) and preferring a longer than average penis.\n\n\nCONCLUSIONS\nFocusing attention on penile-vaginal sensation supports vaginal orgasm and the myriad benefits thereof. Brody S, and Weiss P. Vaginal orgasm is associated with vaginal (not clitoral) sex education, focusing mental attention on vaginal sensations, intercourse duration, and a preference for a longer penis."}
{"_id": "ceac72b22fe1d633b067861b1060e7f558a63eb5", "title": "Schedulability Analysis for Tasks with Static and Dynamic Offsets", "text": "In this paper we present an extension to current schedulability analysis techniques for periodic tasks with offsets, scheduled under a preemptive fixed priority scheduler. Previous techniques allowed only static offsets restricted to being smaller than the task periods. With the extension presented in this paper, we eliminate this restriction and we allow both static and dynamic offsets. The most significant application of this extension is in the analysis of multiprocessor and distributed systems. We show that we can achieve a significant increase of the maximum schedulable utilization by using the new technique, as opposed to using previously known worst-case analysis techniques for distributed systems. ___________________ This work has been supported in part by the Comisi\u00f3n Interministerial de Ciencia y Tecnolog\u00eda of the Spanish Government, under grant number TAP97-892"}
{"_id": "35cd8951a533a5366f2b76ba31c8f80dc490f02d", "title": "Knowledge Technologies", "text": "Several technologies are emerging that provide new ways to capture, store, present and use knowledge. This book is the first to provide a comprehensive introduction to five of the most important of these technologies: Knowledge Engineering, Knowledge Based Engineering, Knowledge Webs, Ontologies and Semantic Webs. For each of these, answers are given to a number of key questions (What is it? How does it operate? How is a system developed? What can it be used for? What tools are available? What are the main issues?). The book is aimed at students, researchers and practitioners interested in Knowledge Management, Artificial Intelligence, Design Engineering and Web Technologies. During the 1990s, Nick worked at the University of Nottingham on the application of AI techniques to knowledge management and on various knowledge acquisition projects to develop expert systems for military applications. In 1999, he joined Epistemics where he worked on numerous knowledge projects and helped establish knowledge management programmes at large organisations in the engineering, technology and legal sectors. He is author of the book\"Knowledge Acquisition in Practice\", which describes a step-by-step procedure for acquiring and implementing expertise. He maintains strong links with leading research organisations working on knowledge technologies, such as knowledge-based engineering, ontologies and semantic technologies."}
{"_id": "330c9d6cb1d12d075de81ac0d50473dfbc06a944", "title": "Hotplug or Ballooning: A Comparative Study on Dynamic Memory Management Techniques for Virtual Machines", "text": "In virtualization environments, static memory allocation for virtual machines (VMs) can lead to severe service level agreement (SLA) violations or inefficient use of memory. Dynamic memory allocation mechanisms such as ballooning and memory hotplug were proposed to handle the dynamics of memory demands. However, these mechanisms so far have not been quantitatively or comparatively studied. In this paper, we first develop a runtime system called U-tube, which provides a framework to adopt memory hotplug or ballooning for dynamic memory allocation. We then implement fine-grained memory hotplug in Xen. We demonstrate the effectiveness of U-tube for dynamic memory management through two case studies: dynamic memory balancing and memory overcommitment. With these two case studies, we make a quantitative comparison between memory hotplug and ballooning. The experiments show that there is no absolute winner for different scenarios. Our findings can be very useful for practitioners to choose the suitable dynamic memory management techniques in different scenarios."}
{"_id": "10469075c749654963e591011637df730921acfa", "title": "The big five personality dimensions and entrepreneurial status: a meta-analytical review.", "text": "In this study, the authors used meta-analytical techniques to examine the relationship between personality and entrepreneurial status. Personality variables used in previous studies were categorized according to the five-factor model of personality. Results indicate significant differences between entrepreneurs and managers on 4 personality dimensions such that entrepreneurs scored higher on Conscientiousness and Openness to Experience and lower on Neuroticism and Agreeableness. No difference was found for Extraversion. Effect sizes for each personality dimension were small, although the multivariate relationship for the full set of personality variables was moderate (R = .37). Considerable heterogeneity existed for all of the personality variables except Agreeableness, suggesting that future research should explore possible moderators of the personality-entrepreneurial status relationship."}
{"_id": "608319d244f78ca7600bd8d52eb4092c5629f5c4", "title": "A Twitter Tale of Three Hurricanes: Harvey, Irma, and Maria", "text": "People increasingly use microblogging platforms such as Twitter during natural disasters and emergencies. Research studies have revealed the usefulness of the data available on Twitter for several disaster response tasks. However, making sense of social media data is a challenging task due to several reasons such as limitations of available tools to analyze high-volume and high-velocity data streams. This work presents an extensive multidimensional analysis of textual and multimedia content from millions of tweets shared on Twitter during the three disaster events. Specifically, we employ various Artificial Intelligence techniques from Natural Language Processing and Computer Vision fields, which exploit different machine learning algorithms to process the data generated during the disaster events. Our study reveals the distributions of various types of useful information that can inform crisis managers and responders as well as facilitate the development of future automated systems for disaster management."}
{"_id": "6141fafb1fefb2a757ef0760278508e8890b3775", "title": "Integral imaging using color multiplexed holographic optical element", "text": "We propose a novel integral imaging system using holographic optical element (HOE) which can provide multi-color 3-dimensional (3D) images. The optical properties of a lens array are recorded on a photopolymer using color multiplexing method with two different wavelength lasers (473 nm and 532 nm). The lens array HOE recorded on the photopolymer functions as 3D imaging HOE for integral imaging instead of the conventional lens array or concave mirror array. Recording scheme for color multiplexed HOE is presented and the proposed system is experimentally verified."}
{"_id": "f27a619df1d53d7db0649ff172bd1aab3103e085", "title": "Gaming frequency and academic performance", "text": "There are numerous claims that playing computer and video games may be educationally beneficial, but there has been little formal investigation into whether or not the frequency of exposure to such games actually affects academic performance. This paper explores the issue by analysing the relationships between gaming frequency \u2013 measured as the amount of time undergraduate students spend playing games in their free time \u2013 and their academic performance as measured by their examination marks. Using a sample of 713 students, correlation analyses between gaming frequency and examination performance were conducted for students of varying gaming frequency, study discipline, gender, and general attitudes towards gaming and study. The results reveal that examination marks are in fact negatively correlated with gaming frequency \u2013 i.e. frequent gamers generally achieve lower marks than less frequent gamers."}
{"_id": "e82a0cda29cebc481f70015dae63bafbc5ce82f7", "title": "Co-management: concepts and methodological implications.", "text": "Co-management, or the joint management of the commons, is often formulated in terms of some arrangement of power sharing between the State and a community of resource users. In reality, there often are multiple local interests and multiple government agencies at play, and co-management can hardly be understood as the interaction of a unitary State and a homogeneous community. An approach focusing on the legal aspects of co-management, and emphasizing the formal structure of arrangements (how governance is configured) runs the risk of neglecting the functional side of co-management. An alternative approach is to start from the assumption that co-management is a continuous problem-solving process, rather than a fixed state, involving extensive deliberation, negotiation and joint learning within problem-solving networks. This presumption implies that co-management research should preferably focus on how different management tasks are organized and distributed concentrating on the function, rather than the structure, of the system. Such an approach has the effect of highlighting that power sharing is the result, and not the starting point, of the process. This kind of research approach might employ the steps of (1) defining the social-ecological system under focus; (2) mapping the essential management tasks and problems to be solved; (3) clarifying the participants in the problem-solving processes; (4) analyzing linkages in the system, in particular across levels of organization and across geographical space; (5) evaluating capacity-building needs for enhancing the skills and capabilities of people and institutions at various levels; and (6) prescribing ways to improve policy making and problem-solving."}
{"_id": "05fff685560a2199f10a5ce2a599a1b5b02f85e1", "title": "A model of neuronal responses in visual area MT", "text": "Electrophysiological studies indicate that neurons in the middle temporal (MT) area of the primate brain are selective for the velocity of visual stimuli. This paper describes a computational model of MT physiology, in which local image velocities are represented via the distribution of MT neuronal responses. The computation is performed in two stages, corresponding to neurons in cortical areas V1 and MT. Each stage computes a weighted linear sum of inputs, followed by rectification and divisive normalization. V1 receptive field weights are designed for orientation and direction selectivity. MT receptive field weights are designed for velocity (both speed and direction) selectivity. The paper includes computational simulations accounting for a wide range of physiological data, and describes experiments that could be used to further test and refine the model."}
{"_id": "2d0b9205d0d19f96e3b75733611d6c88cf948036", "title": "Text2Scene: Generating Abstract Scenes from Textual Descriptions", "text": "In this paper, we propose an end-to-end model that learns to interpret natural language describing a scene to generate an abstract pictorial representation. The pictorial representations generated by our model comprise the spatial distribution and attributes of the objects in the described scene. Our model uses a sequence-to-sequence network with a double attentive mechanism and introduces a regularization strategy. These scene representations can be sampled from our model similarly as in languagegeneration models. We show that the proposed model, initially designed to handle the generation of cartoon-like pictorial representations in the Abstract Scenes Dataset, can also handle, under minimal modifications, the generation of semantic layouts corresponding to real images in the COCO dataset. Human evaluations using a visual entailment task show that pictorial representations generated with our full model can entail at least one out of three input visual descriptions 94% of the times, and at least two out of three 62% of the times for each image."}
{"_id": "c5953a609b401f254b6179ec0fca8e9e53006bb6", "title": "Preference for Nature in Urbanized Societies : Stress , Restoration , and the Pursuit of Sustainability", "text": "Urbanicity presents a challenge for the pursuit of sustainability. High settlement density may offer some environmental, economic, and social advantages, but it can impose psychological demands that people find excessive. These demands of urban life have stimulated a desire for contact with nature through suburban residence, leading to planning and transportation practices that have profound implications for the pursuit of sustainability. Some might dismiss people\u2019s desire for contact with nature as the result of an anti-urban bias in conjunction with a romantic view of nature. However, research in environmental psychology suggests that people\u2019s desire for contact with nature serves an important adaptive function, namely, psychological restoration. Based on this insight, we offer a perspective on an underlying practical challenge: designing communities that balance settlement density with satisfactory access to nature experience. We discuss research on four issues: how people tend to believe that nature is restorative; how restoration needs and beliefs shape environmental preferences; how well people actually achieve restoration in urban and natural environments; and how contact with nature can promote health. In closing, we consider urban nature as a design option that promotes urban sustainability."}
{"_id": "0729515f62042d1274c131360c33a121df71c856", "title": "Generation from Abstract Meaning Representation using Tree Transducers", "text": "Language generation from purely semantic representations is a challenging task. This paper addresses generating English from the Abstract Meaning Representation (AMR), consisting of re-entrant graphs whose nodes are concepts and edges are relations. The new method is trained statistically from AMRannotated English and consists of two major steps: (i) generating an appropriate spanning tree for the AMR, and (ii) applying tree-tostring transducers to generate English. The method relies on discriminative learning and an argument realization model to overcome data sparsity. Initial tests on held-out data show good promise despite the complexity of the task. The system is available open-source as part of JAMR at: http://github.com/jflanigan/jamr"}
{"_id": "12bb8acfc11a52e3022f0f64254e435dfc9af05e", "title": "JMASM Algorithms and Code Pseudo-Random Number Generators for Vector Processors and Multicore Processors", "text": "There is a lack of good pseudo-random number generators capable of utilizing the vector processing capabilities and multiprocessing capabilities of modern computers. A suitable generator must have a feedback path long enough to fit the vector length or permit multiple instances with different parameter sets. The risk that multiple random streams from identical generators have overlapping subsequences can be eliminated by combining two different generators. Combining two generators can also increase randomness by remedying weaknesses caused by the need for mathematical tractability. Larger applications need higher precision. The article discusses hitherto ignored imprecisions caused by quantization errors in the application of generators with prime modulus and when generating uniformly distributed integers with an arbitrary interval length. A C++ software package that overcomes all these problems is offered. The RANVEC1 code combines a Mersenne Twister variant and a multiply-with-carry generator to produce vector output. It is suitable for processors with vector sizes up to 512 bits. Some theorists have argued that there is no theoretical proof that the streams from different generators are statistically independent. The article contends that the request for such a proof misunderstands the nature of the problem, and that the mathematical tractability that would allow such a proof would also defeat it. This calls for a more fundamental philosophical discussion of the status of proofs in relation to deterministic pseudo-random sequences."}
{"_id": "48fc6dc4ee7c010bccebd68cd1609742dc0bb1cd", "title": "Mining Frequent Patterns in Data Streams at Multiple Time Granularities", "text": "Although frequent-pattern mining has been widely studied and used, it is challenging to extend it to data streams. Compared to mining from a static transaction data set, the streaming case has far more information to track and far greater complexity to manage. Infrequent items can become frequent later on and hence cannot be ignored. The storage structure needs to be dynamically adjusted to reflect the evolution of itemset frequencies over time. In this paper, we propose computing and maintaining all the frequent patterns (which is usually more stable and smaller than the streaming data) and dynamically updating them with the incoming data streams. We extended the framework to mine time-sensitive patterns with approximate support guarantee. We incrementally maintain tilted-time windows for each pattern at multiple time granularities. Interesting"}
{"_id": "a5e2cc82193a9c092892a83f79d5835fa12dfeea", "title": "The psychology of globalization.", "text": "The influence of globalization on psychological functioning is examined. First, descriptions of how globalization is occurring in various world regions are presented. Then the psychological consequences of globalization are described, with a focus on identity issues. Specifically, it is argued that most people worldwide now develop a bicultural identity that combines their local identity with an identity linked to the global culture; that identity confusion may be increasing among young people in non-Western cultures as a result of globalization; that some people join self-selected cultures to maintain an identity that is separate from the global culture; and that a period of emerging adulthood increasingly extends identity explorations beyond adolescence, through the mid- to late twenties."}
{"_id": "fb45c069629429e3becbd9d862f8c31f52dd5156", "title": "Three stages of emotional word processing: an ERP study with rapid serial visual presentation.", "text": "Rapid responses to emotional words play a crucial role in social communication. This study employed event-related potentials to examine the time course of neural dynamics involved in emotional word processing. Participants performed a dual-target task in which positive, negative and neutral adjectives were rapidly presented. The early occipital P1 was found larger when elicited by negative words, indicating that the first stage of emotional word processing mainly differentiates between non-threatening and potentially threatening information. The N170 and the early posterior negativity were larger for positive and negative words, reflecting the emotional/non-emotional discrimination stage of word processing. The late positive component not only distinguished emotional words from neutral words, but also differentiated between positive and negative words. This represents the third stage of emotional word processing, the emotion separation. Present results indicated that, similar with the three-stage model of facial expression processing; the neural processing of emotional words can also be divided into three stages. These findings prompt us to believe that the nature of emotion can be analyzed by the brain independent of stimulus type, and that the three-stage scheme may be a common model for emotional information processing in the context of limited attentional resources."}
{"_id": "8674c6a49b6f23693bb8f7288bea7425535f4cb4", "title": "Read Empiricism And The Philosophy Of Mind Empiricism And The Philosophy Of Mind empiricism and the philosophy of mind", "text": "empiricism and the philosophy of mind. Book lovers, when you need a new book to read, find the book here. Never worry not to find what you need. Is the empiricism and the philosophy of mind your needed book now? That's true; you are really a good reader. This is a perfect book that comes from great author to share with you. The book offers the best experience and lesson to take, not only take, but also learn."}
{"_id": "8fef10f868659a06ef78771daa1874f2d396f6b0", "title": "Adversarial Deep Learning for Robust Detection of Binary Encoded Malware", "text": "Malware is constantly adapting in order to avoid detection. Model-based malware detectors, such as SVM and neural networks, are vulnerable to so-called adversarial examples which are modest changes to detectable malware that allows the resulting malware to evade detection. Continuous-valued methods that are robust to adversarial examples of images have been developed using saddle-point optimization formulations. We are inspired by them to develop similar methods for the discrete, e.g. binary, domain which characterizes the features of malware. A specific extra challenge of malware is that the adversarial examples must be generated in a way that preserves their malicious functionality. We introduce methods capable of generating functionally preserved adversarial malware examples in the binary domain. Using the saddle-point formulation, we incorporate the adversarial examples into the training of models that are robust to them. We evaluate the effectiveness of the methods and others in the literature on a set of Portable Execution (PE) files. Comparison prompts our introduction of an online measure computed during training to assess general expectation of robustness."}
{"_id": "7f13e66231c96f34f8de2b091e5b5dafb5db5327", "title": "Syntax-aware Neural Machine Translation Using CCG", "text": "Neural machine translation (NMT) models are able to partially learn syntactic information from sequential lexical information. Still, some complex syntactic phenomena such as prepositional phrase attachment are poorly modeled. This work aims to answer two questions: 1) Does explicitly modeling source or target language syntax help NMT? 2) Is tight integration of words and syntax better than multitask training? We introduce syntactic information in the form of CCG supertags either in the source as an extra feature in the embedding, or in the target, by interleaving the target supertags with the word sequence. Our results on WMT data show that explicitly modeling syntax improves machine translation quality for English\u2194German, a high-resource pair, and for English\u2194Romanian, a lowresource pair and also several syntactic phenomena including prepositional phrase attachment. Furthermore, a tight coupling of words and syntax improves translation quality more than multitask training."}
{"_id": "56b8ba1cfb6310b9b78194bce33eebdeb0ba7b4a", "title": "Modern Recommender Systems: from Computing Matrices to Thinking with Neurons", "text": "Starting with the Netflix Prize, which fueled much recent progress in the field of collaborative filtering, recent years have witnessed rapid development of new recommendation algorithms and increasingly more complex systems, which greatly differ from their early content-based and collaborative filtering systems. Modern recommender systems leverage several novel algorithmic approaches: from matrix factorization methods and multi-armed bandits to deep neural networks. In this tutorial, we will cover recent algorithmic advances in recommender systems, highlight their capabilities, and their impact. We will give many examples of industrial-scale recommender systems that define the future of the recommender systems area. We will discuss related evaluation issues, and outline future research directions. The ultimate goal of the tutorial is to encourage the application of novel recommendation approaches to solve problems that go beyond user consumption and to further promote research in the intersection of recommender systems and databases."}
{"_id": "3984ffd42059f5758b284ae31422332dee6e337a", "title": "Transparent operation of Kronecker product based full dimension MIMO to exploit 2D antenna array", "text": "A concept of full-dimension MIMO (FD-MIMO) adopting 2-dimensional (2D) antenna array has been proposed with intent to cope with practical challenges of massive MIMO systems. In this paper, we investigate an FD-MIMO operation that can be transparently applied with current 3GPP LTE specifications, and propose a novel method that facilitates an accurate estimation of channel quality indicator (CQI) at base station (BS). We first introduce a feedback scheme in which horizontal and vertical dimension channel information are separately reported. To further improve accuracy of modulation and coding scheme (MCS) selection to maximize system throughput, we propose an enhanced feedback scheme that is helpful to more accurate CQI estimation at the BS. In the evaluation results, we show performance improvement of the proposed scheme over the conventional MIMO scheme."}
{"_id": "4d411142604ebd7496948ac0aa96d5d1f3383012", "title": "Medical Image Analysis: Progress over Two Decades and the Challenges Ahead", "text": "\u00d0The analysis of medical images has been woven into the fabric of the Pattern Analysis and Machine Intelligence (PAMI) community since the earliest days of these Transactions. Initially, the efforts in this area were seen as applying pattern analysis and computer vision techniques to another interesting dataset. However, over the last two to three decades, the unique nature of the problems presented within this area of study have led to the development of a new discipline in its own right. Examples of these include: the types of image information that are acquired, the fully three-dimensional image data, the nonrigid nature of object motion and deformation, and the statistical variation of both the underlying normal and abnormal ground truth. In this paper, we look at progress in the field over the last 20 years and suggest some of the challenges that remain for the years to come. Index Terms\u00d0Medical imaging, medical image analysis, computer vision, image segmentation, image registration, nonrigid motion, deformable contours, image-guided surgery."}
{"_id": "6374fe5c5216d94d719b60ffa3adc52ca24bcdc7", "title": "Dynamic Programming for Detecting, Tracking, and Matching Deformable Contours", "text": "Abstruct-The problem of segmenting an image into separate regions and tracking them over time is one of the most significant problems in vision. Tenopoulos et al have proposed an approach to detect the contour regions of complex shapes, assuming a user selected initial contour not very far from the desired solution. We propose to further explore the information provided by the user\u2019s selected points and applyan optimal method to detect contours which allows a segmentation of the image. The method is based on dynamic programming @P), and applies to a wide variety of shapes. It is exact and not iterative. We also consider a multiscale approach capable of speeding up the algorithm by a factor of 20, although at the expense of losing the guaranteed optimality characteristic. The problem of tracking and matching these contours is addressed. For tracking, the final contour obtained at one frame is sampled and used as initial points for the next frame. Then, the same DP process is applied. For matching, a novel strategy is proposed where the solution is a smooth displacement field in which unmatched regions are allowed while cross vectors are not. The algorithm is again based on DP and the optimal solution is guaranteed. We have demonstrated the algorithms on natural objects in a large spectrum of applications, including interactive segmentation and automatic tracking of the regions of interest in medical images."}
{"_id": "18f355d7ef4aa9f82bf5c00f84e46714efa5fd77", "title": "Dynamic programming algorithm optimization for spoken word recognition", "text": "This paper reports on an optimum dynamic programming (DP) based time-normalization algorithm for spoken word recognition. First, a general principle of time-normalization i s given using timewarping function. Then, two time-normalized distance definitions, d e d symmetric and asymmetric forms, are derived from the principle. These two forms are compared with each other through theoretical discussions and experimental studies. The symmetric form algorithm superiority is established. A new technique, called slope constraint, is successfully introduced, in which the warping function slope is restricted so as to improve discrimination between words in different categories. The effective slope constraint characteristic is qualitatively analyzed, and the optimum slope constraint condition is determined through experiments. The optimized algorithm is then extensively subjected to experimentat comparison with various DP-algorithms, previously applied to spoken word recognition by different research groups. The experiment shows that the present algorithm gives no more than about twothirds errors, even compared to the best conventional algorithm. I"}
{"_id": "554a67df7aeedbe9b7e08fcb516c17aae38a7b7b", "title": "Humour Research: State of Art", "text": "Humour is a multi-disciplinary field of research. People have been working on humour in many fields of research like psychology, philosophy and linguistics, sociology and literature. Especially in the context of computer science (or Artificial Intelligence) humour research aims at modeling humour in a computationally tractable way. Having computational models of humour allows interface designers to have the computer generate and interpret humour when interacting with users. There are many situations in human-human interaction where humour plays an important role in keeping the conversation going. Making use of the so-called CASA paradigm (Computers Are Social Actors) we may expect that a similar role can be played in human-computer interaction. In this report we survey current humour research with the aim to identify useful theories that can be applied in the human-computer interaction context. We focus on the following subjects: humour theories, related humour research, linguistic aspects of humour, computational aspects of humour, applications and resources. 1 University of Twente, Center of Telematics and Information Technology, Technical Report CTIT-02-34, September 2002, 24 pp."}
{"_id": "150e1f5f6c981538de1d6aeb4c48c90eccf5ca60", "title": "2.4 GHz micro-strip patch antenna array with suppressed sidelobes", "text": "In this paper, we present the design and simulated results for a patch antenna array at 2.4 GHz. The antenna consists of one linear array with six series-fed patch antennas. In radar and smart-antenna applications it is very important to use antennas with high gain and low sidelobes in order to avoid the interference and to increase the gain of antenna array. The aim of this research is to design an antenna array that has suppressed sidelobes as much as possible in azimuth by using Kaiser-Bessel amplitude coefficients. Our designed antenna has a gain of 15.1 dB with an angular width of 14.6\u00b0 at a frequency of 2.4 GHz. The measured 10 dB return loss bandwidth equals 18.7 MHz. Patch antenna array is analyzed using CST MICROWAVE STUDIO 2014."}
{"_id": "f218e9988e30b0dea133b8fcda7033b6f1172af9", "title": "Distinguishing Between Natural and Computer-Generated Images Using Convolutional Neural Networks", "text": "Distinguishing between natural images (NIs) and computer-generated (CG) images by naked human eyes is difficult. In this paper, we propose an effective method based on a convolutional neural network (CNN) for this fundamental image forensic problem. Having observed the rather limited performance of training existing CCNs from scratch or fine-tuning pre-trained network, we design and implement a new and appropriate network with two cascaded convolutional layers at the bottom of a CNN. Our network can be easily adjusted to accommodate different sizes of input image patches while maintaining a fixed depth, a stable structure of CNN, and a good forensic performance. Considering the complexity of training CNNs and the specific requirement of image forensics, we introduce the so-called local-to-global strategy in our proposed network. Our CNN derives a forensic decision on local patches, and a global decision on a full-sized image can be easily obtained via simple majority voting. This strategy can also be used to improve the performance of existing methods that are based on hand-crafted features. Experimental results show that our method outperforms existing methods, especially in a challenging forensic scenario with NIs and CG images of heterogeneous origins. Our method also has good robustness against typical post-processing operations, such as resizing and JPEG compression. Unlike previous attempts to use CNNs for image forensics, we try to understand what our CNN has learned about the differences between NIs and CG images with the aid of adequate and advanced visualization tools."}
{"_id": "e8a8a0ee711a6f993b68c8c7cb8adaca97b0166e", "title": "Content-based retrieval of logo and trademarks in unconstrained color image databases using Color Edge Gradient Co-occurrence Histograms", "text": "In this paper, we present an algorithm that extends the Color Edge Co-occurrence Histogram (CECH) object detection scheme on compound color objects, for the retrieval of logos and trademarks in unconstrained color image databases. We introduce more accurate information to the CECH, by virtue of incorporating color edge detection using vector order statistics. This produces a more accurate representation of edges in color images, as compared to the simple color pixel difference classification of edges seen with the CECH. Our proposed method is thus reliant on edge gradient information, and so we call it the Color Edge Gradient Co-occurrence Histogram (CEGCH). We also introduce a color quantization method based in the hue\u2013saturation\u2013value (HSV) color space, illustrating that it is a more suitable scheme of quantization for image retrieval, compared to the color quantization scheme introduced with the CECH. Experimental results demonstrate that the CEGCH and the HSV color quantization scheme is insensitive to scaling, rotation, and partial deformations, and outperforms the use of the CECH in image retrieval, with higher precision and recall. We also perform experiments on a closely related algorithm based on the Color Co-occurrence Histogram (CCH) and demonstrate that our algorithm is also superior in comparison, with higher precision and recall. 2009 Elsevier Inc. All rights reserved."}
{"_id": "03b01daccd140e7d65358f31f8c4472d18573a5a", "title": "Limiting privacy breaches in privacy preserving data mining", "text": "There has been increasing interest in the problem of building accurate data mining models over aggregate data, while protecting privacy at the level of individual records. One approach for this problem is to randomize the values in individual records, and only disclose the randomized values. The model is then built over the randomized data, after first compensating for the randomization (at the aggregate level). This approach is potentially vulnerable to privacy breaches: based on the distribution of the data, one may be able to learn with high confidence that some of the randomized records satisfy a specified property, even though privacy is preserved on average.In this paper, we present a new formulation of privacy breaches, together with a methodology, \"amplification\", for limiting them. Unlike earlier approaches, amplification makes it is possible to guarantee limits on privacy breaches without any knowledge of the distribution of the original data. We instantiate this methodology for the problem of mining association rules, and modify the algorithm from [9] to limit privacy breaches without knowledge of the data distribution. Next, we address the problem that the amount of randomization required to avoid privacy breaches (when mining association rules) results in very long transactions. By using pseudorandom generators and carefully choosing seeds such that the desired items from the original transaction are present in the randomized transaction, we can send just the seed instead of the transaction, resulting in a dramatic drop in communication and storage cost. Finally, we define new information measures that take privacy breaches into account when quantifying the amount of privacy preserved by randomization."}
{"_id": "e85828a8ca8c9ab054ab21aa974e13a02d988b48", "title": "Minimum elastic bounding box algorithm for dimension detection of 3D objects: a case of airline baggage measurement", "text": "Motivated by the interference of appendages in airline baggage dimension detection using three-dimensional (3D) point cloud, a minimum elastic bounding box (MEBB) algorithm for dimension detection of 3D objects is developed. The baggage dimension measurements using traditional bounding box method or shape fitting method can cause large measurements due to the interference of appendages. Starting from the idea of \u2018enclosing\u2019, an elastic bounding box model with the deformable surface is established. On the basis of using principal component analysis to obtain the main direction of the bounding box, the elastic rules for deformable surfaces are developed so as to produce a large elastic force when it comes into contact with the main body part and to produce a small elastic force when it comes into contact with the appendages part. The airline baggage measurement shows how to use MEBB for dimension detection, especially for the processing of isotropic density distribution, the elasticity computing and the adaptive adjustment of elasticity. Results on typical baggage samples, comparisons to other methods, and error distribution experiments with different algorithm parameters show that the authors\u2019 method can reliably obtain the size of the main body part of the object under the interference of appendages."}
{"_id": "c80192c4cbda0eaa29d92ed03df80300987628b3", "title": "A systematic literature review: Information security culture", "text": "Human behavior inside organizations is considered the main threat to organizations. Moreover, in information security the human element consider the most of weakest link in general. Therefore it is crucial to create an information security culture to protect the organization's assets from inside and to influence employees' security behavior. This paper focuses on identifying the definitions and frameworks for establishing and maintaining information security culture inside organizations. It presents work have been done to conduct a systematic literature review of papers published on information security culture from 2003 to 2016. The review identified 68 papers that focus on this area, 18 of which propose an information security culture framework. An analysis of these papers indicate there is a positive relationship between levels of knowledge and how employees behave. The level of knowledge significantly affects information security behavior and should be considered as a critical factor in the effectiveness of information security culture and in any further work that is carried out on information security culture. Therefore, there is a need for more studies to identity the security knowledge that needs to be incorporated into organizations and to find instances of best practice for building an information security culture within organizations."}
{"_id": "dd131cd0a292c8fa2491cf5248b707703c2c03a5", "title": "Sample Preparation on Micro-Electrode-Dot-Array Digital Microfluidic Biochips", "text": "Sample preparation in digital microfluidics refers to the generation of droplets with target concentrations for onchip biochemical applications. In recent years, digital microfluidic biochips (DMFBs) have been adopted as a platform for sample preparation. However, there remain one major problem associated with sample preparation on a conventional DMFB. For conventional DMFBs, only a (1:1) mixing/splitting model can be used, leading to an increase in the number of fluidic operations required for sample preparation. To overcome the drawback, we adopt a next generation DMFB platform, referred to as micro-electrode-dot-array (MEDA), for sample preparation. We propose the first sample preparation method that exploits the MEDA-specific advantages of fine-grained control of droplet sizes and real-time droplet sensing. Experimental demonstration using a fabricated MEDA biochip and simulation results highlight the effectiveness of the proposed sample-preparation method."}
{"_id": "4f004dc25b815ccef2bf9bf7063114a72795436f", "title": "Ratio estimators in simple random sampling", "text": "This study proposes ratio estimators by adapting the estimators type ofRay and Singh [J. Ind. Stat. Assoc. 19 (1981) 147] to traditional and the other ratio-type estimators in simple random sampling in literature. Theoretically, mean square error (MSE) equations of all proposed ratio estimators are obtained and compared with each other. By these comparisons the conditions, which make each proposed estimator more efficient than the others, are found. The theoretical results are supported by a numerical illustration. 2003 Elsevier Inc. All rights reserved."}
{"_id": "e81e2991c4cb732b9b601ddfb5a432ff2d03714d", "title": "A Novel Approach for Brain Tumor Detection Using Support Vector Machine , K-Means and PCA Algorithm", "text": "The conventional method of detection and classification of brain tumor is by human inspection with the use of medical resonant brain images. But it is impractical when large amounts of data is to be diagnosed and to be reproducible. And also the operator assisted classification leads to false predictions and may also lead to false diagnose. Medical Resonance images contain a noise caused by operator performance which can lead to serious inaccuracies classification. The use Support Vector Machine, Kmean and PCA shown great potential in this field. Principal Component Analysis gives fast and accurate tool for Feature Extraction of the tumors. Kmean is used for Image Segmentation. Support Vector Mahine with Training and Testing of Brain Tumor Images techniques is implemented for automated brain tumor classification."}
{"_id": "98f723caf182563c84d9fe9f407ade49acf8cf80", "title": "SNAP and SPAN: Towards Dynamic Spatial Ontology", "text": "We propose a modular ontology of the dynamic features of reality. This amounts, on the one hand, to a purely spatial ontology supporting snapshot views of the world at successive instants of time and, on the other hand, to a purely spatiotemporal ontology of change and process. We argue that dynamic spatial ontology must combine these two distinct types of inventory of the entities and relationships in reality, and we provide characterizations of spatiotemporal reasoning in the light of the interconnections between them."}
{"_id": "d991a90640d6c63785d61cbd9093f79e89e5a925", "title": "MaxMax: A Graph-Based Soft Clustering Algorithm Applied to Word Sense Induction", "text": "This paper introduces a linear time graph-based soft clustering algorithm. The algorithm applies a simple idea: given a graph, vertex pairs are assigned to the same cluster if either vertex has maximal affinity to the other. Clusters of varying size, shape, and density are found automatically making the algorithm suited to tasks such Word Sense Induction (WSI), where the number of classes is unknown and where class distributions may be skewed. The algorithm is applied to two WSI tasks, obtaining results comparable with those of systems adopting existing, state-of-the-art methods."}
{"_id": "8ce3338a65737ad17e20978e9186e088f8cbd34d", "title": "A study of success and failure in product innovation: The case of the U.S. electronics industry", "text": "This paper summarizes the first phase of the Stanford Innovation Project, a long-term study of U.S. industrial innovation. As part of this initial phase, begun in 1982, two surveys were conducted: 1) an open-ended survey of 158 new products in the electronics industry, followed by 2) a structured survey of 118 of the original products. Both surveys used a pairwise comparison methodology. Our research identified eight broad areas that appear to be important for new product success in a high-technology environment: 1) market knowledge gained through frequent and intense customer interaction, which leads to high benefit-to-cost products; 2) and 3) planning and coordination of the new product process, especially the R&D phase; 4) emphasis on marketing and sales; 5) management support for the product throughout the development and launch stages; 6) the contribution margin of the product; 7) early market entry; 8) proximity of the new product technologies and markets to the existing strengths of the developing unit. Based on these results, a preliminary model of the new product process is proposed in the concluding section. There is nothing more difficult to plan, more doubtful of success, nor more dangerous to manage than the creation of a new system. Niccolo Machiavelli."}
{"_id": "ed768287ea577ccbd1ef6bd080889495279303a5", "title": "Fundamentals of Parameterized Complexity", "text": "It's not surprisingly when entering this site to get the book. One of the popular books now is the fundamentals of parameterized complexity. You may be confused because you can't find the book in the book store around your city. Commonly, the popular book will be sold quickly. And when you have found the store to buy the book, it will be so hurt when you run out of it. This is why, searching for this popular book in this website will give you benefit. You will not run out of this book."}
{"_id": "2165eab8a25414be16c1b37033aa4613190760bf", "title": "Supporting Iterative Cohort Construction with Visual Temporal Queries", "text": "Many researchers across diverse disciplines aim to analyze the behavior of cohorts whose behaviors are recorded in large event databases. However, extracting cohorts from databases is a difficult yet important step, often overlooked in many analytical solutions. This is especially true when researchers wish to restrict their cohorts to exhibit a particular temporal pattern of interest. In order to fill this gap, we designed COQUITO, a visual interface that assists users defining cohorts with temporal constraints. COQUITO was designed to be comprehensible to domain experts with no preknowledge of database queries and also to encourage exploration. We then demonstrate the utility of COQUITO via two case studies, involving medical and social media researchers."}
{"_id": "a770ec2d9850893a58854564f50e2e4c715f73c0", "title": "Pseudo test collections for training and tuning microblog rankers", "text": "Recent years have witnessed a persistent interest in generating pseudo test collections, both for training and evaluation purposes. We describe a method for generating queries and relevance judgments for microblog search in an unsupervised way. Our starting point is this intuition: tweets with a hashtag are relevant to the topic covered by the hashtag and hence to a suitable query derived from the hashtag. Our baseline method selects all commonly used hashtags, and all associated tweets as relevance judgments; we then generate a query from these tweets. Next, we generate a timestamp for each query, allowing us to use temporal information in the training process. We then enrich the generation process with knowledge derived from an editorial test collection for microblog search.\n We use our pseudo test collections in two ways. First, we tune parameters of a variety of well known retrieval methods on them. Correlations with parameter sweeps on an editorial test collection are high on average, with a large variance over retrieval algorithms. Second, we use the pseudo test collections as training sets in a learning to rank scenario. Performance close to training on an editorial test collection is achieved in many cases. Our results demonstrate the utility of tuning and training microblog search algorithms on automatically generated training material."}
{"_id": "7c38c9ff0108e774cdfe2a90ced1c89812e7f498", "title": "Simulation of Automotive Radar Target Lists using a Novel Approach of Object Representation", "text": "The development of radar signal processing algorithms for target tracking and higher-level automotive applications is mainly done based on real radar data. A data basis has to be acquired during cost-expensive and time-consuming test runs. For a comparably simple application like the adaptive cruise control (ACC), the variety of significant traffic situations can sufficiently be covered by test runs. But for more advanced applications like intersection assistance, the effort for the acquisition of a representative set of radar data will be unbearable. In this paper, we propose a way of simulating radar target lists in a realistic but computationally undemanding way, which will allow to significantly reduce the amount of real radar data needed"}
{"_id": "65e1032bc978e1966e075c9f3e8107b5c2a7bac9", "title": "Computer-Aided Diagnosis of Mammographic Masses Using Scalable Image Retrieval", "text": "Computer-aided diagnosis of masses in mammograms is important to the prevention of breast cancer. Many approaches tackle this problem through content-based image retrieval techniques. However, most of them fall short of scalability in the retrieval stage, and their diagnostic accuracy is, therefore, restricted. To overcome this drawback, we propose a scalable method for retrieval and diagnosis of mammographic masses. Specifically, for a query mammographic region of interest (ROI), scale-invariant feature transform (SIFT) features are extracted and searched in a vocabulary tree, which stores all the quantized features of previously diagnosed mammographic ROIs. In addition, to fully exert the discriminative power of SIFT features, contextual information in the vocabulary tree is employed to refine the weights of tree nodes. The retrieved ROIs are then used to determine whether the query ROI contains a mass. The presented method has excellent scalability due to the low spatial-temporal cost of vocabulary tree. Extensive experiments are conducted on a large dataset of 11 553 ROIs extracted from the digital database for screening mammography, which demonstrate the accuracy and scalability of our approach."}
{"_id": "b9c2c52955f448a52d7893b5eb8b33e8592c29dd", "title": "HOPE: Enabling Efficient Service Orchestration in Software-Defined Data Centers", "text": "The functional scope of today's software-defined data centers (SDDC) has expanded to such an extent that servers face a growing amount of critical background operational tasks like load monitoring, logging, migration, and duplication, etc. These ancillary operations, which we refer to as management operations, often nibble the stringent data center power envelope and exert a tremendous amount of pressure on front-end user tasks. However, existing power capping, peak shaving, and time shifting mechanisms mainly focus on managing data center power demand at the \"macro level\" -- they do not distinguish ancillary background services from user tasks, and therefore often incur significant performance degradation and energy overhead.\n In this study we explore \"micro-level\" power management in SDDC: tuning a specific set of critical loads for the sake of overall system efficiency and performance. Specifically, we look at management operations that can often lead to resource contention and energy overhead in an IaaS SDDC. We assess the feasibility of this new power management paradigm by characterizing the resource and power impact of various management operations. We propose HOPE, a new system optimization framework for eliminating the potential efficiency bottleneck caused by the management operations in the SDDC. HOPE is implemented on a customized OpenStack cloud environment with heavily instrumented power infrastructure. We thoroughly validate HOPE models and optimization efficacy under various user workload scenarios. Our deployment experiences show that the proposed technique allows SDDC to reduce energy consumption by 19%, reduce management operation execution time by 25.4%, and in the meantime improve workload performance by 30%."}
{"_id": "bca4e05a45f310ceb327d67278858343e8df7089", "title": "Improved churn prediction in telecommunication industry using data mining techniques", "text": ""}
{"_id": "0dcd15fd9db2c0d4677a661cb1d7632b19891ba8", "title": "Non-linear Feedback Neural Network for Solution of Quadratic Programming Problems", "text": "This paper presents a recurrent neural circuit for solving quadratic programming problems. The objective is tominimize a quadratic cost function subject to linearconstraints. The proposed circuit employs non-linearfeedback, in the form of unipolar comparators, to introducetranscendental terms in the energy function ensuring fastconvergence to the solution. The proof of validity of the energy function is also provided. The hardware complexity of the proposed circuit comparesfavorably with other proposed circuits for the same task. PSPICE simulation results arepresented for a chosen optimization problem and are foundto agree with the algebraic solution."}
{"_id": "e2f7cac3a73520673dae6614616bc6aa23ac7e64", "title": "A Named Entity Recognition Shootout for German", "text": "\u2022 CRF-based methods: \u2022 StanfordNER [4]: CRF + standard features \u2022 GermaNER [5]: CRF + distributional semantics, gazetteers, ... \u2022 Recurrent Neural Network: \u2022 BiLSTM + CRF [6] using characterand word embeddings using FastText \u2022 Wikipedia (contemporary encyclopedia) \u2022 Europeana (historic newspaper from 1703 to 1899) Conclusion & Future Contemporary corpora: minor improvements GermEval: 82.93 \u00e0 82.93 CoNLL: 82.99 \u00e0 84.73 Historic corpora: major improvements LFT: 69.62 \u00e0 73.44 ONB: 70.46 \u00e0 78.56 Same setup as Exp. 2 but with transfer learning and considering only the BiLSTM-based method"}
{"_id": "1717dee0e8785d963e0333a0bb945757444bb651", "title": "Forensic carving of network packets and associated data structures", "text": "Using validated carving techniques, we show that popular operating systems (e.g. Windows, Linux, and OSX) frequently have residual IP packets, Ethernet frames, and associated data structures present in system memory from long-terminated network traffic. Such information is useful for many forensic purposes including establishment of prior connection activity and services used; identification of other systems present on the system\u2019s LAN or WLAN; geolocation of the host computer system; and cross-drive analysis. We show that network structures can also be recovered from memory that is persisted onto a mass storage medium during the course of system swapping or hibernation. We present our network carving techniques, algorithms and tools, and validate these against both purpose-built memory images and a readily available forensic corpora. These techniques are valuable to both forensics tasks, particularly in analyzing mobile devices, and to cyber-security objectives such as malware analysis. Published by Elsevier Ltd."}
{"_id": "62a7cfab468ef3bbd763db8f80745bd93d2be7dd", "title": "Behavioral malware detection approaches for Android", "text": "Android, the fastest growing mobile operating system released in November 2007, boasts of a staggering 1.4 billion active users. Android users are susceptible to malicious applications that can hack into their personal data due to the lack of careful monitoring of their in-device security. There have been numerous works on devising malware detection methods. However, none of earlier works are conclusive enough for direct application and lack experimental validation. In this paper, we have investigated the natures and identities of malicious applications and devised two novel detection approaches for detection: network-based detection and system call based detection approaches. To evaluate our proposed approaches, we performed experiments on a subset of 1260 malwares, acquired from Android Malware Genome Project, a malware database created by Y. Zhou et al. [1] and 227 non-malware (benign) applications. Results show that our system call based approach is able to detect malwares with an accuracy of 87% which is quite significant in general malware detection context. Our proposed detection approaches along with the experimental results will provide security professionals with more precise and quantitative approaches in their investigations of mobile malwares on Android systems."}
{"_id": "e2d76fc1efbbf94a624dde792ca911e6687a4fd4", "title": "High Accuracy Android Malware Detection Using Ensemble Learning", "text": "With over 50 billion downloads and more than 1.3 million apps in Google\u2019s official market, Android has continued to gain popularity amongst smartphone users worldwide. At the same time there has been a rise in malware targeting the platform, with more recent strains employing highly sophisticated detection avoidance techniques. As traditional signature based methods become less potent in detecting unknown malware, alternatives are needed for timely zero-day discovery. Thus this paper proposes an approach that utilizes ensemble learning for Android malware detection. It combines advantages of static analysis with the efficiency and performance of ensemble machine learning to improve Android malware detection accuracy. The machine learning models are built using a large repository of malware samples and benign apps from a leading antivirus vendor. Experimental results and analysis presented shows that the proposed method which uses a large feature space to leverage the power of ensemble learning is capable of 97.3 % to 99% detection accuracy with very low false positive rates. Keywordsmobile security; Android; malware detection; ensemble learning; static analysis; machine learning; data mining; random forest"}
{"_id": "08d32340e0e6aa50952860b90dfba2fe4764a85a", "title": "Crowdroid: behavior-based malware detection system for Android", "text": "The sharp increase in the number of smartphones on the market, with the Android platform posed to becoming a market leader makes the need for malware analysis on this platform an urgent issue.\n In this paper we capitalize on earlier approaches for dynamic analysis of application behavior as a means for detecting malware in the Android platform. The detector is embedded in a overall framework for collection of traces from an unlimited number of real users based on crowdsourcing. Our framework has been demonstrated by analyzing the data collected in the central server using two types of data sets: those from artificial malware created for test purposes, and those from real malware found in the wild. The method is shown to be an effective means of isolating the malware and alerting the users of a downloaded malware. This shows the potential for avoiding the spreading of a detected malware to a larger community."}
{"_id": "12ef153d9c7ccc374d56acf34b59fb2eaec6f755", "title": "Dissecting Android Malware: Characterization and Evolution", "text": "The popularity and adoption of smart phones has greatly stimulated the spread of mobile malware, especially on the popular platforms such as Android. In light of their rapid growth, there is a pressing need to develop effective solutions. However, our defense capability is largely constrained by the limited understanding of these emerging mobile malware and the lack of timely access to related samples. In this paper, we focus on the Android platform and aim to systematize or characterize existing Android malware. Particularly, with more than one year effort, we have managed to collect more than 1,200 malware samples that cover the majority of existing Android malware families, ranging from their debut in August 2010 to recent ones in October 2011. In addition, we systematically characterize them from various aspects, including their installation methods, activation mechanisms as well as the nature of carried malicious payloads. The characterization and a subsequent evolution-based study of representative families reveal that they are evolving rapidly to circumvent the detection from existing mobile anti-virus software. Based on the evaluation with four representative mobile security software, our experiments show that the best case detects 79.6% of them while the worst case detects only 20.2% in our dataset. These results clearly call for the need to better develop next-generation anti-mobile-malware solutions."}
{"_id": "95688c5444a109ec64e8921cdac469b4db5220ac", "title": "Brain-computer interfacing in disorders of consciousness.", "text": "BACKGROUND\nRecent neuroimaging research has strikingly demonstrated the existence of covert awareness in some patients with disorders of consciousness (DoC). These findings have highlighted the potential for the development of simple brain-computer interfaces (BCI) as a diagnosis in behaviourally unresponsive patients.\n\n\nOBJECTIVES\nThis study here reviews current EEG-based BCIs that hold potential for assessing and eventually assisting patients with DoC. It highlights key areas for further development that might eventually make their application feasible in this challenging patient group.\n\n\nMETHODS\nThe major types of BCIs proposed in the literature are considered, namely those based on the P3 potential, sensorimotor rhythms, steady state oscillations and slow cortical potentials. In each case, a brief overview of the relevant literature is provided and then their relative merits for BCI applications in DoC are considered.\n\n\nRESULTS\nA range of BCI designs have been proposed and tested for enabling communication in fully conscious, paralysed patients. Although many of these have potential applicability for patients with DoC, they share some key challenges that need to be overcome, including limitations of stimulation modality, feedback, user training and consistency.\n\n\nCONCLUSION\nFuture work will need to address the technical and practical challenges facing reliable implementation at the patient's bedside."}
{"_id": "8dc8672e67ff6c01af48e99c5b71e5e4e37f2a80", "title": "Ontology-driven Information Extraction", "text": "Homogeneous unstructured data (HUD) are collections of unstructured documents that share common properties, such as similar layout, common file format, or common domain of values. Building on such properties, it would be desirable to automatically process HUD to access the main information through a semantic layer \u2013 typically an ontology \u2013 called semantic view. Hence, we propose an ontology-based approach for extracting semantically rich information from HUD, by integrating and extending recent technologies and results from the fields of classical information extraction, table recognition, ontologies, text annotation, and logic programming. Moreover, we design and implement a system, named KnowRex, that has been successfully applied to curriculum vitae in the Europass style to offer a semantic view of them, and be able, for example, to select those which exhibit required skills."}
{"_id": "4761f0fd96284103ecd3603349c7f8078ad28676", "title": "Adversarial Learning for Neural Dialogue Generation", "text": "In this paper, drawing intuition from the Turing test, we propose using adversarial training for open-domain dialogue generation: the system is trained to produce sequences that are indistinguishable from human-generated dialogue utterances. We cast the task as a reinforcement learning (RL) problem where we jointly train two systems, a generative model to produce response sequences, and a discriminator\u2014analagous to the human evaluator in the Turing test\u2014 to distinguish between the human-generated dialogues and the machine-generated ones. The outputs from the discriminator are then used as rewards for the generative model, pushing the system to generate dialogues that mostly resemble human dialogues. In addition to adversarial training we describe a model for adversarial evaluation that uses success in fooling an adversary as a dialogue evaluation metric, while avoiding a number of potential pitfalls. Experimental results on several metrics, including adversarial evaluation, demonstrate that the adversarially-trained system generates higher-quality responses than previous baselines."}
{"_id": "ec7d96dafdb2001e1f523ff4ddb6849b62a99b8b", "title": "Fuzzy Hidden Markov Models for Indonesian Speech Classification", "text": "Indonesia has a lot of tribe, so that there are a lot of dialects. Speech classification is difficult if the database uses speech signals from various people who have different characteristics because of gender and dialect. The different characteristics will influence frequency, intonation, amplitude, and period of the speech. It makes the system must be trained for the various templates reference of speech signal. Therefore, this study has been developed for Indonesian speech classification. This study designs the solution of the different characteristics for Indonesian speech classification. The solution combines Fuzzy on Hidden Markov Models. The new design of fuzzy Hidden Markov Models will be proposed in this study. The models will consist of Fuzzy C-Means Clustering which will be designed to substitute the vector quantization process and a new forward and backward method to handle the membership degree of data. The result shows FHMM is better than HMM and the improvement was 3.33 %."}
{"_id": "8c200aa59d88f48f029d75252858cf329d6d4b4a", "title": "Image-based indoor positioning system: fast image matching using omnidirectional panoramic images", "text": "In this paper, we developed an image-based indoor localization system using omnidirectional panoramic images to which location information is added. By the combination of the robust image matching by PCA-SIFT and fast nearest neighbor search algorithm based on Locality Sensitive Hashing (LSH), our system can estimate users' positions with high accuracy and in a short time. To improve the precision, we introduced the \"confidence\" of the image matching results. We conducted experiments at the Railway Museum and we obtained 426 omnidirectional panoramic reference images and 1067 supplemental images for image matching. Experimental results using 126 test images demonstrated that the location detection accuracy is above 90% with about 2.2s of processing time."}
{"_id": "20e1edef9ac41b6a4c1c382d2ef5e10fd0463459", "title": "Feature-based brain MRI retrieval for Alzheimer disease diagnosis", "text": "In this paper we consider the application of the feature-based approach to medical image retrieval, particularly brain MRI scans for early Alzheimer's disease diagnosis. The key idea is to provide the doctor with the images which have similar visual properties and have full case record, giving the ability to make more informed decision in the prodromal phase of the disease. With regard to the state-of-the art SIFT features in a Bag-of-Visual-Words approach we propose to use the Laguerre Circular Harmonic Functions coefficients as feature vectors. An additional pre-classification step based on estimation of Alzheimer's disease early image abnormalities is proposed to improve overall precision."}
{"_id": "2f6ee7428abc96dde0dab34d5dff7563baf28478", "title": "Knowledge Sharing in the Workplace: A Social Networking Site Assessment", "text": "Enterprise executives are taking note that great potential exists for enhancing knowledge sharing and linking experts within the organization when incorporating social software technologies into their operations. But there are warnings of potential negative effects if ineffective use or even misuse of these technologies occurs. There is relatively little empirical research examining the business impact of social software implementations evolving from Web 2.0 applications when considering the effects of trust on knowledge sharing. This paper attempts to fill this gap in the research by proposing a theoretical framework to study the effects of trust, risk and benefits, critical mass, and social influence on knowledge sharing intentions of employees using social media technology in the organization."}
{"_id": "8e0b8e87161dd4001d31832d5d9864fd31e8eccd", "title": "Rectangular patch antenna with inset feed and modified ground-plane for wideband antenna", "text": "This paper presents the technique for increasing bandwidth of rectangular patch antenna from 0.88 GHz (7.76 - 8.64 GHz) to 6.75 GHz (3.49 - 10.24 GHz). This technique use inset feed patch antenna with modified ground plane for achieved widest bandwidth. We will propose three types of rectangular patch antenna: the simple rectangular patch fed by microstrip line, inset feed rectangular patch and inset feed rectangular patch with modifies ground plane. The final simulation result show that the lower edge of frequency is moved down from 7.76 GHz to 3.49 GHz and the higher edge of frequency is shifted up from 8.64 GHz to 10.24 GHz, which is the one selection for increasing bandwidth of rectangular patch antenna for wideband. Details of the increasing bandwidth of microstrip patch antenna are described, and simulation results for obtained wideband performance are presented by using IE3D Zeland software."}
{"_id": "9d3472849dc2cadf194ae29adbf46bdda861d8b7", "title": "Learning to Ask: Neural Question Generation for Reading Comprehension", "text": "We study automatic question generation for sentences from text passages in reading comprehension. We introduce an attention-based sequence learning model for the task and investigate the effect of encoding sentencevs. paragraph-level information. In contrast to all previous work, our model does not rely on hand-crafted rules or a sophisticated NLP pipeline; it is instead trainable end-to-end via sequenceto-sequence learning. Automatic evaluation results show that our system significantly outperforms the state-of-the-art rule-based system. In human evaluations, questions generated by our system are also rated as being more natural (i.e., grammaticality, fluency) and as more difficult to answer (in terms of syntactic and lexical divergence from the original text and reasoning needed to answer)."}
{"_id": "42cf562f8d089982e59432d356dc588ffd600737", "title": "Petri Net Modeling of Cyber-Physical Attacks on Smart Grid", "text": "This paper investigates the use of Petri nets for modeling coordinated cyber-physical attacks on the smart grid. Petri nets offer more flexibility and expressiveness than traditional attack trees to represent the actions of simultaneous attackers. However, Petri net models for attacks on very large critical infrastructures such as the smart grid require a great amount of manual effort and detailed expertise in cyber-physical threats. To overcome these obstacles, we propose a novel hierarchical method to construct large Petri nets from a number of smaller Petri nets that can be created separately by different domain experts. The construction method is facilitated by a model description language that enables identical places in different Petri nets to be matched. The new modeling approach is described for an example attack on smart meters, and its efficacy is demonstrated by a proof-of-concept Python program."}
{"_id": "72a959deeccbfcac45ba99e1de305b37fcc887f3", "title": "A trust-semantic fusion-based recommendation approach for e-business applications", "text": "a r t i c l e i n f o Collaborative Filtering (CF) is the most popular recommendation technique but still suffers from data sparsi-ty, user and item cold-start problems, resulting in poor recommendation accuracy and reduced coverage. This study incorporates additional information from the users' social trust network and the items' semantic domain knowledge to alleviate these problems. It proposes an innovative Trust\u2013Semantic Fusion (TSF)-based recommendation approach within the CF framework. Experiments demonstrate that the TSF approach significantly outperforms existing recommendation algorithms in terms of recommendation accuracy and coverage when dealing with the above problems. A business-to-business recommender system case study validates the applicability of the TSF approach. Recommender systems are considered the most popular forms of web personalization and have become a promising and important research topic in information sciences and decision support systems Recommender systems are used to either predict whether a particular user will like a particular item or to identify a set of k items that will be of interest to a certain user, and have been used in different web-based applications including e-business, e-learning and e-tourism [8,22,31]. Currently, Collaborative Filtering (CF) is probably the most known and commonly used recommendation approach in recommender systems. CF works by collecting user ratings for items in a given domain and computing similarities between users or between items in order to produce recommendations [1,31]. CF can be further divided into user-based and item-based CF approaches. In user-based CF approach, a user will receive recommendations of items that similar users liked. In item-based CF approach, a user will receive recommendations of items that are similar to the ones that the user liked in the past [1]. Despite their popularity and success, the CF-based approaches still suffer from some major limitations ; these include data sparsity, cold-start user and cold-start item problems [1,3,36,37]. The data sparsity problem occurs when the number of available items increases and the number of ratings in the rating matrix is insufficient for generating accurate predictions. When the ratings obtained are very small compared to the number of ratings that are needed to be predicted, a recommender system becomes unable to locate similar neighbors and produces poor recommendations. The cold-start (CS) user problem, which is also known as the new user problem, affects users who have none, or a small number of ratings. When the number of rated items is small for the CS user, \u2026"}
{"_id": "46bf9ea8d3fad703a4d13279c60927ce90247bc3", "title": "Development of Network Gateway Between CAN and FlexRay Protocols For ECU Embedded Systems", "text": "This paper presents a network gateway between the CAN and FlexRay protocols for ECU embedded systems. Flexible gateways are necessary for implementing several functions to avoiding the need for increased clock speed. This gateway system is based on a second internal control unit, which can be implemented as a RISC or a specialized finite state machine and use combined message buffer RAM. The control unit is used to lessen the interrupt burden of the main CPU and to reduce the required CPU performance, while message buffer RAM is used to store all messages transmitted and received. It requires a high system frequency to guarantee the transfer of data between two or more networks in real-time and to assure data integrity. To compensate for the limited functionality of the modules, the control unit requires greater program memory capacity, for storing the complex routing information. This gateway system is based on proven communication controllers, expanded with a gateway interface. These communication controllers are linked by an independent CPU data bus structure that enables the transfer of messages. The data flow between the modules is controlled by an application specific gateway unit that provides all control signals. This unit consists of special function registers or of an independent CPU finite state machine. Due to the special gateway interfaces, which allows concurrent CPU and gateway unit request to the same cell, semaphores and flags are not necessary. This maximizes the flexibility of the module. These Gateways play and will continue to play a major role in automotive networks such as CAN, and FlexRay"}
{"_id": "5099a67c1a2514b99fd27409ac3ab8e2c1047470", "title": "Exemplar-Based 3D Shape Segmentation in Point Clouds", "text": "This paper addresses the problem of automatic 3D shape segmentation in point cloud representation. Of particular interest are segmentations of noisy real scans, which is a difficult problem in previous works. To guide segmentation of target shape, a small set of pre-segmented exemplar shapes in the same category is adopted. The main idea is to register the target shape with exemplar shapes in a piece-wise rigid manner, so that pieces under the same rigid transformation are more likely to be in the same segment. To achieve this goal, an over-complete set of candidate transformations is generated in the first stage. Then, each transformation is treated as a label and an assignment is optimized over all points. The transformation labels, together with nearest-neighbor transferred segment labels, constitute final labels of target shapes. The method is not dependent on high-order features, and thus robust to noise as can be shown in the experiments on challenging datasets."}
{"_id": "98d33cce6f90945549aaff9097d6d282688387f8", "title": "Applying Vector Space Model (VSM) Techniques in Information Retrieval for Arabic Language", "text": "i Acknowledgments vi"}
{"_id": "b9aec954cb5cf4f17fe034848f38c4fdced0f693", "title": "Unpacking Spear Phishing Susceptibility", "text": "We report the results of a field experiment where we sent to over 1200 university students an email or a Facebook message with a link to (non-existing) party pictures from a non-existing person, and later asked them about the reasons for their link clicking behavior. We registered a significant difference in clicking rates: 20% of email versus 42.5% of Facebook recipients clicked. The most frequently reported reason for clicking was curiosity (34%), followed by the explanations that the message fit recipient\u2019s expectations (27%). Moreover, 16% thought that they might know the sender. These results show that people\u2019s decisional heuristics are relatively easy to misuse in a targeted attack, making defense especially challenging."}
{"_id": "5952abac86c6123f4123b522bdff6b652b05976e", "title": "A Model for Competence and Integrity in Variable Payoff Games", "text": "Agents often have to trust one another when engaging in joint actions. In many cases, no single design team has the authority to assure that agents cooperate. Trust is required when agents hold potentially different values or conflicting goals. This paper presents a framework and some initial experiments for decomposing agent reputation within a multi-agent society into two characteristics: competence and integrity. The framework models competence as the probability of successfully carrying out an intended action. Integrity is modeled as a rational commitment to maintaining a reputation, based on the agent\u2019s assessment of the game\u2019s discount rate. We show that a simple, one-level-deep recursive model\u2014given accurate knowledge of self and the other agent\u2019s competence and integrity (commitment to reputation)\u2014outperforms titfor-tat and other standard strategies in evolutionary round-robin iterated prisoner\u2019s dilemma tournaments. This indicates that the approach taken here warrants further investigation using more realistic and complex strategies."}
{"_id": "8c1209062c63f6b7fb023b16c3c4e09af1fe6c40", "title": "Automatic construction of multifaceted browsing interfaces", "text": "Databases of text and text-annotated data constitute a significant fraction of the information available in electronic form. Searching and browsing are the typical ways that users locate items of interest in such databases. Interfaces that use multifaceted hierarchies represent a new powerful browsing paradigm which has been proven to be a successful complement to keyword searching. Thus far, multifaceted hierarchies have been created manually or semi-automatically, making it difficult to deploy multifaceted interfaces over a large number of databases. We present automatic and scalable methods for creation of multifaceted interfaces. Our methods are integrated with traditional relational databases and can scale well for large databases. Furthermore, we present methods for selecting the best portions of the generated hierarchies when the screen space is not sufficient for displaying all the hierarchy at once. We apply our technique to a range of large data sets, including annotated images, television programming schedules, and web pages. The results are promising and suggest directions for future research."}
{"_id": "fd6c6b3050f96f682fe34b2b6da29e8c0c424b3c", "title": "Smart Cameras as Embedded Systems", "text": "I ncreasingly powerful integrated circuits are making an entire range of new applications possible. Complementary metal-oxide semiconductor (CMOS) sensors, for example, have made the digital camera a commonplace consumer item. These light-sensitive chips, positioned where film would normally be, capture images as reusable digital files that users can upload to their computer, manipulate with software, and distribute electronically. Recent technological advances are enabling a new generation of smart cameras that represent a quantum leap in sophistication. While today's digital cameras capture images, smart cameras capture high-level descriptions of the scene and analyze what they see. These devices could support a wide variety of applications including human and animal detection, surveillance, motion analysis, and facial identification. Video processing has an insatiable demand for real-time performance. Fortunately, Moore's law provides an increasing pool of available computing power to apply to real-time analysis. Smart cameras leverage very large-scale integration (VLSI) to provide such analysis in a low-cost, low-power system with substantial memory. Moving well beyond pixel processing and compression, these systems run a wide range of algorithms to extract meaning from streaming video. Because they push the design space in so many dimensions, smart cameras are a leading-edge application for embedded system research. has developed a first-generation smart camera system that can detect people and analyze their movement in real time. Although there are many approaches to real-time video analysis, we chose to focus initially on human gesture recognition\u2014identifying whether a subject is walking, standing, waving his arms, and so on. Because much work remains to be done on this problem, we sought to design an embedded system that can incorporate future algorithms as well as use those we created exclusively for this application. As Figure 1 shows, our algorithms use both low-level and high-level processing. The low-level component identifies different body parts and categorizes their movement in simple terms. The high-level component, which is application-dependent, uses this information to recognize each body part's action and the person's overall activity based on scenario parameters. The system captures images from the video input, which can be either uncompressed or compressed (MPEG and motion JPEG), and applies four different algorithms to detect and identify human body parts. Region extraction. The first algorithm transforms the pixels of an image, like that shown in Figure 2a, into an M \u00d7 N bitmap and eliminates the background. It then detects the body part's skin area using a YUV \u2026"}
{"_id": "222d9922ff0fcb187fc55cb9394c7f87aee84111", "title": "3D-Parallel Coordinates: Visualization for time varying multidimensional data", "text": "Parallel coordinates can visualize multidimensional data efficiently, during to its shortage in displaying time-varying data, we present a new method to add time dimension, which can extend parallel coordinates into 3D space(3D-Parallel Coordinates) consisted of attribute, range and time dimension. So that time-varying multidimensional data can be displayed as polygonal line cluster for recording and analyzing. A technique called clipping shade is taken to highlight datasets around the current time, and results reveal that 3D-Parallel Coordinates can effectively analyze attribute's time-varying character."}
{"_id": "07792365f05ec0142e96a0358b973067bafd511c", "title": "Interaction and outeraction: instant messaging in action", "text": "We discuss findings from an ethnographic study of instant messaging (IM) in the workplace and its implications for media theory. We describe how instant messaging supports a variety of informal communication tasks. We document the affordances of IM that support flexible, expressive communication. We describe some unexpected uses of IM that highlight aspects of communication which are not part of current media theorizing. They pertain to communicative processes people use to connect with each other and to manage communication, rather than to information exchange. We call these processes \"outeraction\". We discuss how outeractional aspects of communication affect media choice and patterns of media use."}
{"_id": "5826627a151271e38c6f0588186ccf96d21f5a3c", "title": "Location-Based Services for Mobile Telephony: a Study of Users' Privacy Concerns", "text": "Context -aware computing often involves tracking peoples\u2019 location. Many studies and applications highlight the importance of keeping people\u2019s location information private. We discuss two types of locationbased services; location-tracking services that are based on other parties tracking the user\u2019s location and position-aware services that rely on the device\u2019s knowledge of its own location. We present an experimental case study that examines people\u2019s concern for location privacy and compare this to the use of location-based services. We find that even though the perceived usefulness of the two different types of services is the same, locationtracking services generate more concern for privacy than posit ion-aware services. We conclude that development emphasis should be given to position-aware services but that location-tracking services have a potential for success if users are given a simple option for turning the location-tracking off."}
{"_id": "3cd0b6a48b14f86ed261240f30113a41bacd2255", "title": "There is more to context than location", "text": "Context is a key issue in interaction between human and computer, describing the surrounding facts that add meaning. In mobile computing research published the parameter location is most often used to approximate context and to implement context-aware applications. We propose that ultra-mobile computing, characterized by devices that are operational and operated while on the move (e.g. PDAs, mobile phones, wearable computers), can significantly benefit from a wider notion of context. To structure the field we introduce a working model for context, discuss mechanisms to acquire context beyond location, and application of context-awareness in ultra-mobile computing. We investigate the utility of sensors for context-awareness and present two prototypical implementations-a light sensitive display and an orientation aware PDA interface. The concept is then extended to a model for sensor fusion to enable more sophisticated context recognition. Based on an implementation of the model an experiment is described and the feasibility of the approach is demonstrated. Further we explore fusion of sensors for acquisition of information on more sophisticated contexts. 1 Introduction Context is \" that which surrounds, and gives meaning to something else \" =. Various areas of computer science have been investigating this concept over the last 40 years, to relate information processing and communication to aspects of the situations in which such processing occurs. Most notably, context is a key concept in Natural Language Processing and more generally in Human-Computer Interaction. For instance, state of the art graphical user interfaces use context to adapt menus to contexts such as user preference and dialogue status. A new domain, in which context currently receives growing attention, is mobile computing. While a first wave of mobile computing was based on portable general-purpose computers and primarily focussed on location transparency, a second wave is now based on ultra-mobile devices and an interest in relating these to their surrounding situation of usage. Ultra-mobile devices are a new class of small mobile computer, defined as computing devices that are operational and operated while on the move, and characterized by a shift from general-purpose computing to task-specific support. Ultra-mobile devices comprise for instance Personal Digital Assistants (PDAs), mobile phones, and wearable computers. A primary concern of context-awareness in mobile computing is awareness of the physical environment surrounding a user and their ultra-mobile device. In recent work, this concern has been addressed by implementation of location-awareness, for instance based on global positioning, or the use of beacons. Location \u2026"}
{"_id": "62edb6639dc857ad0f33e5d8ef97af89be7a3bc7", "title": "The Active Badge Location System", "text": "A novel system for the location of people in an office environment is described. Members of staff wear badges that transmit signals providing information about their location to a centralized location service, through a network of sensors. The paper also examines alternative location techniques, system design issues and applications, particularly relating to telephone call routing. Location systems raise concerns about the privacy of an individual and these issues are also addressed."}
{"_id": "26b575e04cd78012e5d6723438f659ca9e90c00a", "title": "Cross-domain transfer for reinforcement learning", "text": "A typical goal for transfer learning algorithms is to utilize knowledge gained in a source task to learn a target task faster. Recently introduced transfer methods in reinforcement learning settings have shown considerable promise, but they typically transfer between pairs of very similar tasks. This work introduces Rule Transfer, a transfer algorithm that first learns rules to summarize a source task policy and then leverages those rules to learn faster in a target task. This paper demonstrates that Rule Transfer can effectively speed up learning in Keepaway, a benchmark RL problem in the robot soccer domain, based on experience from source tasks in the gridworld domain. We empirically show, through the use of three distinct transfer metrics, that Rule Transfer is effective across these domains."}
{"_id": "8b5aee873fc9787269f18f6eec0c05d980716126", "title": "Involvement of gut microbiome in human health and disease: brief overview, knowledge gaps and research opportunities", "text": "The commensal, symbiotic, and pathogenic microbial community which resides inside our body and on our skin (the human microbiome) can perturb host energy metabolism and immunity, and thus significantly influence development of a variety of human diseases. Therefore, the field has attracted unprecedented attention in the last decade. Although a large amount of data has been generated, there are still many unanswered questions and no universal agreements on how microbiome affects human health have been agreed upon. Consequently, this review was written to provide an updated overview of the rapidly expanding field, with a focus on revealing knowledge gaps and research opportunities. Specifically, the review covered animal physiology, optimal microbiome standard, health intervention by manipulating microbiome, knowledge base building by text mining, microbiota community structure and its implications in human diseases and health monitoring by analyzing microbiome in the blood. The review should enhance interest in conducting novel microbiota investigations that will further improve health and therapy."}
{"_id": "9b096c69df21254e71e2d528ba12e02e3b51c990", "title": "Discrimination and racial disparities in health: evidence and needed research", "text": "This paper provides a review and critique of empirical research on perceived discrimination and health. The patterns of racial disparities in health suggest that there are multiple ways by which racism can affect health. Perceived discrimination is one such pathway and the paper reviews the published research on discrimination and health that appeared in PubMed between 2005 and 2007. This recent research continues to document an inverse association between discrimination and health. This pattern is now evident in a wider range of contexts and for a broader array of outcomes. Advancing our understanding of the relationship between perceived discrimination and health will require more attention to situating discrimination within the context of other health-relevant aspects of racism, measuring it comprehensively and accurately, assessing its stressful dimensions, and identifying the mechanisms that link discrimination to health."}
{"_id": "1997bf5a05f160068b827809f6469767416b7c84", "title": "Augmented Virtual Environments (AVE): Dynamic Fusion of Imagery and 3D Models", "text": "An Augmented Virtual Environment (AVE) fuses dynamic imagery with 3D models. The AVE provides a unique approach to visualize and comprehend multiple streams of temporal data or images. Models are used as a 3D substrate for the visualization of temporal imagery, providing improved comprehension of scene activities. The core elements of AVE systems include model construction, sensor tracking, real-time video/image acquisition, and dynamic texture projection for 3D visualization. This paper focuses on the integration of these components and the results that illustrate the utility and benefits of the resulting augmented virtual environment."}
{"_id": "e3a2a87550a83feafc1cdcce1cc621b3a666134a", "title": "Real time automation of agricultural environment", "text": "The paper, \u201cReal time automation of agricultural environment\u201d, using PIC16F877A and GSM SIM300 modem is focused on automating the irrigation system for social welfare of Indian agricultural system. This system will be useful for monitoring the soil moisture condition of the farm as well as controlling the soil moisture by monitoring the level of water in the water source and accordingly switching the motor ON/OFF for irrigation purposes. The system proposes a soil moisture sensor at each place where the moisture has to be monitored. Once the moisture reaches a particular level, the system takes appropriate steps to regulate or even stop the water flow. The circuit also monitors the water in the water source so that if the water level becomes very low, it switches off the motor to prevent damage to the motor due to dry run. The system also consists of a GSM modem through which the farmer can easily be notified about the critical conditions occurring during irrigation process."}
{"_id": "4b1f7d877063742858b6672172a635a8893ebf73", "title": "Prognostics of Combustion Instabilities from Hi-speed Flame Video using A Deep Convolutional Selective Autoencoder", "text": "The thermo-acoustic instabilities arising in combustion processes cause significant deterioration and safety issues in various human-engineered systems such as land and air based gas turbine engines. The phenomenon is described as selfsustaining and having large amplitude pressure oscillations with varying spatial scales of periodic coherent vortex shedding. Early detection and close monitoring of combustion instability are the keys to extending the remaining useful life (RUL) of any gas turbine engine. However, such impending instability to a stable combustion is extremely difficult to detect only from pressure data due to its sudden (bifurcationtype) nature. Toolchains that are able to detect early instability occurrence have transformative impacts on the safety and performance of modern engines. This paper proposes an endto-end deep convolutional selective autoencoder approach to capture the rich information in hi-speed flame video for instability prognostics. In this context, an autoencoder is trained to selectively mask stable flame and allow unstable flame image frames. Performance comparison is done with a wellknown image processing tool, conditional random field that is trained to be selective as well. In this context, an informationtheoretic threshold value is derived. The proposed framework is validated on a set of real data collected from a laboratory scale combustor over varied operating conditions where it is shown to effectively detect subtle instability features as a combustion process makes transition from stable to unstable region. \u2217corresponding author Adedotun Akintayo et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 United States License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited."}
{"_id": "e8f65887210e6f20413a60a1e0a0ec4a4e2735bd", "title": "Marginal structural models for estimating effect modification.", "text": "PURPOSE\nThe use of marginal structural models (MSMs) to adjust for measured confounding factors is becoming increasingly common in observational studies. Here, we propose MSMs for estimating effect modification in observational cohort and case-control studies.\n\n\nMETHODS\nMSMs for estimating effect modification were derived by the use of the potential outcome model. The proposed methods were applied to a cohort study and a case-control study.\n\n\nRESULTS\nIn cohort studies, effect modification can be estimated by the application of a logistic MSM to individuals who experienced the event in question. In case-control studies, effect modification can be estimated by the ratio between the estimate from the model applied to case data and that applied to control data. The application of the model to real data from a cohort study indicated that the estimate from the proposed method was close to that from standard regression analysis. In a case-control study, the estimate from the proposed method may be biased.\n\n\nCONCLUSIONS\nEpidemiological researchers can use MSMs to estimate effect modification. In case-control studies, it should be determined whether the estimated effect modification is biased by applying a logistic MSM of control data."}
{"_id": "55ebc9c86efc3dae3e0c5345dda662c210f62ff2", "title": "Vectorization for SIMD architectures with alignment constraints", "text": "When vectorizing for SIMD architectures that are commonly employed by today's multimedia extensions, one of the new challenges that arise is the handling of memory alignment. Prior research has focused primarily on vectorizing loops where all memory references are properly aligned. An important aspect of this problem, namely, how to vectorize misaligned memory references, still remains unaddressed.This paper presents a compilation scheme that systematically vectorizes loops in the presence of misaligned memory references. The core of our technique is to automatically reorganize data in registers to satisfy the alignment requirement imposed by the hardware. To reduce the data reorganization overhead, we propose several techniques to minimize the number of data reorganization operations generated. During the code generation, our algorithm also exploits temporal reuse when aligning references that access contiguous memory across loop iterations. Our code generation scheme guarantees to never load the same data associated with a single static access twice. Experimental results indicate near peak speedup factors, e.g., 3.71 for 4 data per vector and 6.06 for 8 data per vector, respectively, for a set of loops where 75% or more of the static memory references are misaligned."}
{"_id": "9a8a859d1a9352e13e7fa4fc9e66f1de7066a9c9", "title": "RISC I: A Reduced Instruction Set VLSI Computer", "text": "The Reduced Instruction Set Computer (RISC) Project investigates an alternatrve to the general trend toward computers wrth increasingly complex instruction sets: With a proper set of instructions and a corresponding architectural design, a machine wrth a high effective throughput can be achieved. The simplicity of the instruction set and addressing modes allows most Instructions to execute in a single machine cycle, and the srmplicity of each instruction guarantees a short cycle time. In addition, such a machine should have a much shorter design trme. This paper presents the architecture of RISC I and its novel hardware support scheme for procedure call/return. Overlapprng sets of regrster banks that can pass parameters directly to subrouttnes are largely responsible for the excellent performance of RISC I. Static and dynamtc comparisons between this new architecture and more traditional machines are given. Although instructions are simpler, the average length of programs was found not to exceed programs for DEC VAX 11 by more than a factor of 2. Preliminary benchmarks demonstrate the performance advantages of RISC. It appears possible to build a single chip computer faster than VAX 11/780."}
{"_id": "d2ae393cf723228cf6f96d61ee068c681203e943", "title": "A Reinforcement Learning Approach to Automated GUI Robustness Testing", "text": "Graphical User Interfaces (GUIs) can be found in almost all modern desktop, tablet and smartphone applications. Since they are the glue between an application\u2019s components, they lend themselves to system level testing. Unfortunately, although many tools promise automation, GUI testing is still an inherently difficult task and involves great manual labor. However, tests that aim at critical faults, like crashes and excessive response times, are completely automatable and can be very effective. These robustness tests often apply random algorithms to select the actions to be executed on the GUI. This paper proposes a new approach to fully automated robustness testing of complex GUI applications with the goal to enhance the fault finding capabilities. The approach uses a well-known machine learning algorithm called Q-Learning in order to combine the advantages of random and coverage-based testing. We will explain how it operates, how we plan to implement it and provide arguments for its usefulness."}
{"_id": "851c1ae977cc6c0d6bed5c516a012ad7359135d1", "title": "FACTORS INFLUENCING CITIZENS \u2019 ADOPTION OF AN E-GOVERNMENT SYSTEM : VALIDATION OF THE DECOMPOSED THEORY OF PLANNED BEHAVIOR ( DTPB )", "text": "The study explores the adoption of an electronic government (e-government) system called electronic district (e-District) system in context of India. The study validates the decomposed theory of planned behaviour (DTPB) to understand the impact of its factors on the potential adopter\u2019s intention to adopt this system. The proposed research model is validated using the data collected from 304 respondents from a state called Bihar of India where this e-government was supposed to be implemented. The data is analysed through regression analysis technique. The empirical findings of the proposed research model indicated the significant relationships of all proposed hypotheses. The study also provides its limitations, future research directions, and implications for theory and practice toward the end."}
{"_id": "a332fa84fb865fac25e9c7cf0c18933303a858d0", "title": "The Measurement of Dielectric Properties of Liquids at Microwave Frequencies Using Open-ended Coaxial Probes", "text": "Significant progress has been made in recent years in the development of microwave tomographic imaging systems for medical applications. In order to design an appropriate microwave imaging system for industrial applications, and to interpret the images produced, the materials under imaging need to be characterised. In this paper, we describe the use of open-ended coaxial probes for the measurement of dielectric properties of liquids at frequencies between 400MHz and 20GHz. The results obtained using the Misra-Blackham model for a number of liquids including water of different salinity are compared with those published in the literature showing a good agreement. For saline water, in particular, the frequency of the minimum loss depends on the salinity. It may change from 1.5GHz for the inclusion of 0.2% NaCl to 7GHz for the inclusion of 3.5% NaCl. The real part of the permittivity may also change by approximately 50% from 400MHz to 20GHz."}
{"_id": "c18085b447996e43e39cba27f5a8fe7b1e5d8717", "title": "Basset: learning the regulatory code of the accessible genome with deep convolutional neural networks.", "text": "The complex language of eukaryotic gene expression remains incompletely understood. Despite the importance suggested by many noncoding variants statistically associated with human disease, nearly all such variants have unknown mechanisms. Here, we address this challenge using an approach based on a recent machine learning advance-deep convolutional neural networks (CNNs). We introduce the open source package Basset to apply CNNs to learn the functional activity of DNA sequences from genomics data. We trained Basset on a compendium of accessible genomic sites mapped in 164 cell types by DNase-seq, and demonstrate greater predictive accuracy than previous methods. Basset predictions for the change in accessibility between variant alleles were far greater for Genome-wide association study (GWAS) SNPs that are likely to be causal relative to nearby SNPs in linkage disequilibrium with them. With Basset, a researcher can perform a single sequencing assay in their cell type of interest and simultaneously learn that cell's chromatin accessibility code and annotate every mutation in the genome with its influence on present accessibility and latent potential for accessibility. Thus, Basset offers a powerful computational approach to annotate and interpret the noncoding genome."}
{"_id": "c02fd0b0ad018556de5f9cddcccdf813c8fbb0f8", "title": "Deep Learning-Based Large-Scale Automatic Satellite Crosswalk Classification", "text": "High-resolution satellite imagery has been increasingly used on remote sensing classification problems. One of the main factors is the availability of this kind of data. Despite the high availability, very little effort has been placed on the zebra crossing classification problem. In this letter, crowdsourcing systems are exploited in order to enable the automatic acquisition and annotation of a large-scale satellite imagery database for crosswalks related tasks. Then, this data set is used to train deep-learning-based models in order to accurately classify satellite images that contain or not contain zebra crossings. A novel data set with more than 240000 images from 3 continents, 9 countries, and more than 20 cities was used in the experiments. The experimental results showed that freely available crowdsourcing data can be used to accurately (97.11%) train robust models to perform crosswalk classification on a global scale."}
{"_id": "7d27f32efd13a2d15da673d9561094be5c5cd335", "title": "Single carrier FDMA for uplink wireless transmission", "text": "Single carrier frequency division multiple access (SC FDMA), a modified form of orthogonal FDMA (OFDMA), is a promising technique for high data rate uplink communications in future cellular systems. SC-FDMA has similar throughput performance and essentially the same overall complexity as OFDMA. A principal advantage of SC-FDMA is the peak-to-average power ratio (PARR), which is lower than that of OFDMA. SC FDMA is currently a strong candidate for the uplink multiple access scheme in the long term evolution of cellular systems under consideration by the third generation partnership project (3GPP). In this paper, we give an overview of SC-FDMA. We also analyze the effects of subcarrier mapping on throughput and PARR. Among the possible subcarrier mapping approaches, we find that localized FDMA (LFDMA) with channel-dependent scheduling (CDS) results in higher throughput than interleaved FDMA (JFDMA). However, the PARR performance of IFDMA is better than that of LFDMA. As in other communications systems there are complex tradeoffs between design parameters and performance in an SC-FDMA system"}
{"_id": "abf8ec8f0b29ac0a1976c90c0e62e5f6035d1404", "title": "A Linear Programming Approach for Multiple Object Tracking", "text": "We propose a linear programming relaxation scheme for the class of multiple object tracking problems where the inter-object interaction metric is convex and the intra-object term quantifying object state continuity may use any metric. The proposed scheme models object tracking as a multi-path searching problem. It explicitly models track interaction, such as object spatial layout consistency or mutual occlusion, and optimizes multiple object tracks simultaneously. The proposed scheme does not rely on track initialization and complex heuristics. It has much less average complexity than previous efficient exhaustive search methods such as extended dynamic programming and is found to be able to find the global optimum with high probability. We have successfully applied the proposed method to multiple object tracking in video streams."}
{"_id": "25319cb22c1168a8c4039df2a3f1622e6a8e2098", "title": "Developing a frame of reference for ex-ante IT/IS investment evaluation", "text": "Investment appraisal techniques are an integral part of many traditional capital budgeting processes. However, the adoption of Information Systems (IS) and the development of resulting infrastructures are being increasingly viewed on the basis of consumption. Consequently, decision-makers are now moving away from the confines of rigid capital budgeting processes, which have traditionally compared IS with non-ISrelated investments. With this in mind, the authors seek to dissect investment appraisal from the broader capital budgeting process to allow a deeper understanding of the mechanics involved with IS justification. This analysis presents conflicting perspectives surrounding the scope and sensitivity of traditional appraisal methods. In contributing to this debate, the authors present taxonomies of IS benefit types and associated natures, and discuss the resulting implications of using traditional appraisal techniques during the IS planning and decision-making process. A frame of reference that can be used to navigate through the variety of appraisal methods available to decision-makers is presented and discussed. Taxonomies of appraisal techniques that are classified by their respective characteristics are also presented. Perspectives surrounding the degree of involvement that financial appraisal should play during decision making and the limitations surrounding investment appraisal techniques are identified. European Journal of Information Systems (2002) 11, 74\u201382. DOI: 10.1057/palgrave/ejis/3000411"}
{"_id": "b5faed9100982c3ca23e95414b11e429e37c1bb3", "title": "Player AnalysisInput Frame Depth EstimationDepth Estimation Scene ReconstructionScene", "text": "We present a system that transforms a monocular video of a soccer game into a moving 3D reconstruction, in which the players and field can be rendered interactively with a 3D viewer or through an Augmented Reality device. At the heart of our paper is an approach to estimate the depth map of each player, using a CNN that is trained on 3D player data extracted from soccer video games. We compare with state of the art body pose and depth estimation techniques, and show results on both synthetic ground truth benchmarks, and real YouTube soccer footage."}
{"_id": "23a7ac9eb49449a68df108b8fa87a0f8dda7a65a", "title": "Control of permanent-magnet generators applied to variable-speed wind-energy systems connected to the grid", "text": "Wind energy is a prominent area of application of variable-speed generators operating on the constant grid frequency. This paper describes the operation and control of one of these variable-speed wind generators: the direct driven permanent magnet synchronous generator (PMSG). This generator is connected to the power network by means of a fully controlled frequency converter, which consists of a pulsewidth-modulation (PWM) rectifier, an intermediate dc circuit, and a PWM inverter. The generator is controlled to obtain maximum power from the incident wind with maximum efficiency under different load conditions. Vector control of the grid-side inverter allows power factor regulation of the windmill. This paper shows the dynamic performance of the complete system. Different experimental tests in a 3-kW prototype have been carried out to verify the benefits of the proposed system."}
{"_id": "66896c692beaf16a856f6a5d0c6d8a79075638ca", "title": "Autonomous Agents and Ethical Decision-Making", "text": "Machine ethics, also known as artificial morality, is a newly emerging field concerned with ensuring appropriate behavior of machines toward humans and other machines. In this article, we discuss the importance of machine ethics and present a computational model of ethical decision-making for autonomous agents. The proposed model implements a mechanism for integrating the results of diverse assessments into a unique cue, and takes into account the agent\u2019s preferences, good and bad past experiences, ethical rules, and current emotional state as the main factors involved in choosing the most appropriate option. The design of the model is based on theories and models developed in fields such as neuroscience, psychology, artificial intelligence, and cognitive informatics. In particular, the model attempts to emulate neural mechanisms of the human brain involved in ethical decision-making."}
{"_id": "dcb97ef4fbd2468fcc1ab348c3dfea998d64bb88", "title": "Design and prototype of monolithic compliant grippers for adaptive grasping", "text": "This study presents the designs of monolithic compliant grippers made by silicon rubber for adaptive grasping applications. The proposed grippers are symmetric two-finger designs actuated by one linear actuator. An optimal design procedure including topology and size optimization methods is used to synthesize the rubber grippers with the objective to maximize mechanical advantage, which is defined as the ratio of output force to input force. Three grippers synthesized with different design parameters are prototyped. The best design is identified through the adaptability test, and a robotic gripper assembly is developed to demonstrate the effectiveness of the design through the test of grasping irregular objects. The maximum payload for the gripper assembly is identified as 2.5kg."}
{"_id": "d0756bd678f02343b65168cd2f7022a0d122ffd3", "title": "Biology and therapy of fibromyalgia. New therapies in fibromyalgia", "text": "Fibromyalgia is a chronic, musculoskeletal pain condition that predominately affects women. Although fibromyalgia is common and associated with substantial morbidity and disability, there are no US Food and Drug Administration-approved treatments. However, progress has been made in identifying pharmacological and non-pharmacological treatments for fibromyalgia. Recent pharmacological treatment studies have focused on selective serotonin and norepinephrine reuptake inhibitors, which enhance serotonin and norepinephrine neurotransmission in the descending pain pathways and lack many of the adverse side effects associated with tricyclic medications. Promising results have also been reported for medications that bind to the alpha2delta subunit of voltage-gated calcium channels, resulting in decreased calcium influx at nerve terminals and subsequent reduction in the release of several neurotransmitters thought to play a role in pain processing. There is also evidence to support exercise, cognitive behavioral therapy, education, and social support in the management of fibromyalgia. It is likely that many patients would benefit from combinations of pharmacological and non-pharmacological treatments, but more study is needed."}
{"_id": "af431ef83a9691e9164233606fc990bc693dfa1a", "title": "A De-Identification Pipeline for Ultrasound Medical Images in DICOM Format", "text": "Clinical data sharing between healthcare institutions, and between practitioners is often hindered by privacy protection requirements. This problem is critical in collaborative scenarios where data sharing is fundamental for establishing a workflow among parties. The anonymization of patient information burned in DICOM images requires elaborate processes somewhat more complex than simple de-identification of textual information. Usually, before sharing, there is a need for manual removal of specific areas containing sensitive information in the images. In this paper, we present a pipeline for ultrasound medical image de-identification, provided as a free anonymization REST service for medical image applications, and a Software-as-a-Service to streamline automatic de-identification of medical images, which is freely available for end-users. The proposed approach applies image processing functions and machine-learning models to bring about an automatic system to anonymize medical images. To perform character recognition, we evaluated several machine-learning models, being Convolutional Neural Networks (CNN) selected as the best approach. For accessing the system quality, 500 processed images were manually inspected showing an anonymization rate of 89.2%. The tool can be accessed at https://bioinformatics.ua.pt/dicom/anonymizer and it is available with the most recent version of Google Chrome, Mozilla Firefox and Safari. A Docker image containing the proposed service is also publicly available for the community."}
{"_id": "38630466fc6df8453d0e191bcb5f41f6683fc44c", "title": "Towards Integrating Plot and Character for Interactive Drama", "text": "Interactive drama concerns itself with building dramaticall y interesting virtual worlds inhabited by computer-controlled characters, within which the user (hereafter referred to as the player) experiences a story from a first person perspective (Bates 1992). Over the past decade there has been a fair amount of research into believable agents, that is, autonomous characters exhibiting rich personaliti es, emotions, and social interactions (Mateas 1997; Bates, Loyall and Reill y 1992; Blumberg 1996; Hayes-Roth, van Gent and Huber 1997; Lester and Stone 1997; Stern, Frank, and Resner 1998). There has been comparatively littl e work, however, exploring how the local, reactive behavior of believable agents can be integrated with the more global, deliberative nature of a story plot, so as to build interactive, dramatic worlds (Weyrauch 1997; Blumberg and Galyean 1995). The authors are currently engaged in a two to three year collaboration to build an interactive story world integrating believable agents and interactive plot. This paper provides a brief description of the project goals and design requirements, discusses the problem of autonomy in the context of story-based believable agents, and finall y describes an architecture that uses the dramatic beat as a structural principle to integrate plot and character."}
{"_id": "6afc4631b6854f3695c71056bc87ebb92b48bb4a", "title": "Believable Social and Emotional Agents.", "text": "One of the key steps in creating quality interactive drama is the ability to create quality interactive characters (or believable agents). Two important aspects of such characters will be that they appear emotional and that they can engage in social interactions. My basic approach to these problems has been to use a broad agent architecture and minimal amounts of modeling of other agent in the environment. This approach is based on an understanding of the artistic nature of the problem. To enable agent-builders (artists) to create emotional agents, I provide a general framework for building emotional agents, default emotion-processing rules, and discussion about how to create quality, emotional characters. My framework gets a lot of its power from being part of a broad agent architecture. The concept is simple: the agent will be emotionally richer if there are more things to have emotions about and more ways to express them. This reliance on breadth has also meant that I have been able to create simple emotion models that rely on perception and motivation instead of deep modeling of other agents and complex cognitive processing. To enable agent builders to create social behaviors for believable agents, I have designed a methodology that provides heuristics for incorporating personality into social behaviors and suggests how to model other agents in the environment. I propose an approach to modeling other agents that calls for limiting the amount of modeling of other agents to that which is sufficient to create the desired behavior. Using this technique, I have been able to build robust social behaviors that use surprisingly little representation. I have used this methodology to build a number of social behaviors, like negotiation and making friends. I have built three simulations containing seven agents to drive and test this work. I have also conducted user studies to demonstrate that these agents appear to be emotional and can engage in non-trivial social interactions while also being good characters with distinct personalities."}
{"_id": "2aa73fd403ca4033d4f3476d03458cb882c83d97", "title": "An Oz-Centric Review of Interactive Drama and Believable Agents", "text": "Believable agents are autonomous agents that exhibit rich personalities. Interactive dramas take place in virtual worlds inhabited by characters (believable agents) with whom an audience interacts. In the course of this interaction, the audience experiences a story (lives a plot arc). This report presents the research philosophy behind the Oz Project, a research group at CMU that has spent the last ten years studying believable agents and interactive drama. The report then surveys current work from an Oz perspective. This research was partially supported by a grant from Intel Corporation. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of Intel Corporation."}
{"_id": "54dc49be897aa19efca79b1ea04e60e40704c123", "title": "Multi-level direction of autonomous creatures for real-time virtual environments", "text": "There have been several recent efforts to build behavior-based autonomous creatures. While competent autonomous action is highly desirable, there is an important need to integrate autonomy with \u201cdirectability\u201d. In this paper we discuss the problem of building autonomous animated creatures for interactive virtual environments which are also capable of being directed at multiple levels. We present an approach to control which allows an external entity to \u201cdirect\u201d an autonomous creature at the motivational level, the task level, and the direct motor level. We also detail a layered architecture and a general behavioral model for perception and action-selection which incorporates explicit support for multi-level direction. These ideas have been implemented and used to develop several autonomous animated creatures."}
{"_id": "5cf63f02e44e7cd9a73b8fbaf1eccab54bc7768e", "title": "Integrating Reactive and Scripted Behaviors in a Life-Like Presentation Agent", "text": "1. ABSTRACT Animated agents based either on real video, cartoon-style drawings or even model-based 3D graphics offer great promise for computer-based presentations as they make presentations more lively and appealing and allow for the emulation of conversation styles known from human-human communication. In this paper, we describe a life-like interface agent which presents multimedia material to the user following the directives of a script. The overall behavior of the presentation agent is partly determined by such a script, and partly by the agent's self-behavior. In our approach, the agent's behavior is defined in a declarative specification language. Behavior specifications are used to automatically generate a control module for an agent display system. The first part of the paper describes the generation process which involves AI planning and a two-step compilation. Since the manual creation of presentation scripts is tedious and error-prone, we also address the automated generation of presentation scripts which may be forwarded to the interface agent. The second part of the paper presents an approach for multimedia presentation design which combines hierarchical planning with temporal reasoning."}
{"_id": "7e2bbe87cfdd2409c59a175c711d61e8ab0c852e", "title": "A Review on Facial Micro-Expressions Analysis: Datasets, Features and Metrics", "text": "Facial micro-expressions are very brief, spontaneous facial expressions that appear on the face of humans when they either deliberately or unconsciously conceal an emotion. Micro-expression has shorter duration than macro-expression, which makes it more challenging for human and machine. Over the past ten years, automatic micro-expressions recognition has attracted increasing attention from researchers in psychology, computer science, security, neuroscience and other related disciplines. The aim of this paper is to provide the insights of automatic micro-expressions analysis and recommendations for future research. There has been a lot of datasets released over the last decade that facilitated the rapid growth in this field. However, comparison across different datasets is difficult due to the inconsistency in experiment protocol, features used and evaluation methods. To address these issues, we review the datasets, features and the performance metrics deployed in the literature. Relevant challenges such as the spatial temporal settings during data collection, emotional classes versus objective classes in data labelling, face regions in data analysis, standardisation of metrics and the requirements for real-world implementation are discussed. We conclude by proposing some promising future directions to advancing micro-expressions research."}
{"_id": "0c0f353dbac84311ea4f1485d4a8ac0b0459be8c", "title": "Nexus : A GPU Cluster for Accelerating Neural Networks for Video Analysis", "text": "We address the problem of serving Deep Neural Networks (DNNs) efficiently from a cluster of GPUs. In order to realize the promise of very low-cost processing made by accelerators such as GPUs, it is essential to run them at sustained high utilization. Doing so requires cluster-scale resource management that performs detailed scheduling of GPUs, reasoning about groups of DNN invocations that need to be co-scheduled, and moving from the conventional whole-DNN execution model to executing fragments of DNNs. Nexus is a fully implemented system that includes these innovations. On large-scale case studies on 16 GPUs, Nexus shows 1.8-41\u00d7 better throughput than state-of-the-art systems while staying within latency constraints >99% of the time."}
{"_id": "b6bd18aee1c4746327b3ecf5df66da5d58b3bbdb", "title": "A Comprehensive Data Quality Methodology for Web and Structured Data", "text": "Measuring and improving data quality in an organization or in a group of interacting organizations is a complex task. Several methodologies have been developed in the past providing a basis for the definition of a complete data quality program applying assessment and improvement techniques in order to guarantee high data quality levels. Since the main limitation of existing approaches is their specialization on specific issues or contexts, this paper presents the comprehensive data quality (CDQ) methodology that aims at integrating and enhancing the phases, techniques and tools proposed by previous approaches. CDQ methodology is conceived to be at the same time complete, flexible and simple to apply. Completeness is achieved by considering existing techniques and tools and integrating them in a framework that can work in both intra and inter organizational contexts, and can be applied to all types of data. The methodology is flexible since it supports the user in the selection of the most suitable techniques and tools within each phase and in any context. Finally, CDQ is simple since it is organized in phases and each phase is characterized by a specific goal and techniques to apply. The methodology is explained by means of a running example."}
{"_id": "c14dff27746b49bea3c5f68621261f266a766461", "title": "SALT STRESS AND PHYTO-BIOCHEMICAL RESPONSES OF PLANTS-A REVIEW", "text": ""}
{"_id": "bbc7119a04b60f2ca8432af3dea8a46662013729", "title": "A flexible exoskeleton for hip assistance", "text": "This paper proposes a new concept called \u201ca flexible exoskeleton\u201d and presents the design of a wearable walking assistance device to provide physical gait assistance for the elderly. To overcome some limitations of previous wearable walking assistance devices, 3 novel mechanisms are proposed: 1) flexible thigh frames that withstand the designated load direction, 2) an adjustable flexible hip brace surrounding the wearer's body, and 3) slim and backdrivable actuators that minimize the maximum height and resistance. The proposed walking assistance device has reduced metabolic energy consumption by 13.2\u00b14% compared with a free condition."}
{"_id": "2763cd9f3e49aa449c893bc3a397e6163377cf25", "title": "Using Visual Speech Information in Masking Methods for Audio Speaker Separation", "text": "This paper examines whether visual speech information can be effective within audio-masking-based speaker separation to improve the quality and intelligibility of the target speech. Two visual-only methods of generating an audio mask for speaker separation are first developed. These use a deep neural network to map the visual speech features to an audio feature space from which both visually derived binary masks and visually derived ratio masks are estimated, before application to the speech mixture. Second, an audio ratio masking method forms a baseline approach for speaker separation which is extended to exploit visual speech information to form audio-visual ratio masks. Speech quality and intelligibility tests are carried out on the visual-only, audio-only, and audio-visual masking methods of speaker separation at mixing levels from $-$ 10 to +10 dB. These reveal substantial improvements in the target speech when applying the visual-only and audio-only masks, but with highest performance occurring when combining audio and visual information to create the audio-visual masks."}
{"_id": "f32d9a72d51f6db6ec26f0209be73dd3c400b42e", "title": "Toward a ban on lethal autonomous weapons: surmounting the obstacles", "text": "A 10-point plan toward fashioning a proposal to ban some---if not all---lethal autonomous weapons."}
{"_id": "32231cecd7a740f6914b9896709188c0d876320c", "title": "Maintaining excellence: deliberate practice and elite performance in young and older pianists.", "text": "Two studies investigated the role of deliberate practice in the maintenance of cognitive-motor skills in expert and accomplished amateur pianists. Older expert and amateur pianists showed the normal pattern of large age-related reductions in standard measures of general processing speed. Performance on music-related tasks showed similar age-graded decline for amateur pianists but not for expert pianists, whose average performance level was only slightly below that of young expert pianists. The degree of maintenance of relevant pianistic skills for older expert pianists was predicted by the amount of deliberate practice during later adulthood. The role of deliberate practice in the active maintenance of superior domain-specific performance in spite of general age-related decline is discussed."}
{"_id": "64a60b0325d1df9fb41afd2934f836de5b342bd5", "title": "Learning Multi-level Features For Sensor-based Human Action Recognition", "text": "This paper proposes amulti-level feature learning framework for human action recognition using a single body-worn inertial sensor. The framework consists of three phases, respectively designed to analyze signal-based (low-level), components (mid-level) and semantic (high-level) information. Low-level features capture the time and frequency domain property while mid-level representations learn the composition of the action. The Maxmargin Latent Pattern Learning (MLPL) method is proposed to learn high-level semantic descriptions of latent action patterns as the output of our framework. The proposedmethod achieves the state-of-the-art performances, 88.7%, 98.8% and 72.6% (weighted F1 score) respectively, on Skoda, WISDM and OPP datasets. \u00a9 2017 Elsevier B.V. All rights reserved."}
{"_id": "b2b012f7a798757704ee5b9743b99176a9578ef8", "title": "Computer-Aided Cobb Measurement Based on Automatic Detection of Vertebral Slopes Using Deep Neural Network", "text": "Objective\nTo develop a computer-aided method that reduces the variability of Cobb angle measurement for scoliosis assessment.\n\n\nMethods\nA deep neural network (DNN) was trained with vertebral patches extracted from spinal model radiographs. The Cobb angle of the spinal curve was calculated automatically from the vertebral slopes predicted by the DNN. Sixty-five in vivo radiographs and 40 model radiographs were analyzed. An experienced surgeon performed manual measurements on the aforementioned radiographs. Two examiners used both the proposed and the manual measurement methods to analyze the aforementioned radiographs.\n\n\nResults\nFor model radiographs, the intraclass correlation coefficients were greater than 0.98, and the mean absolute differences were less than 3\u00b0. This indicates that the proposed system showed high repeatability for measurements of model radiographs. For the in vivo radiographs, the reliabilities were lower than those from the model radiographs, and the differences between the computer-aided measurement and the manual measurement by the surgeon were higher than 5\u00b0.\n\n\nConclusion\nThe variability of Cobb angle measurements can be reduced if the DNN system is trained with enough vertebral patches. Training data of in vivo radiographs must be included to improve the performance of DNN.\n\n\nSignificance\nVertebral slopes can be predicted by DNN. The computer-aided system can be used to perform automatic measurements of Cobb angle, which is used to make reliable and objective assessments of scoliosis."}
{"_id": "77e1c954ad03472cc3d07aa39a06b5c5713452d3", "title": "Do You Feel What I Feel? Social Aspects of Emotions in Twitter Conversations", "text": "We present a computational framework for understanding the social aspects of emotions in Twitter conversations. Using unannotated data and semisupervised machine learning, we look at emotional transitions, emotional influences among the conversation partners, and patterns in the overall emotional exchanges. We find that conversational partners usually express the same emotion, which we name Emotion accommodation, but when they do not, one of the conversational partners tends to respond with a positive emotion. We also show that tweets containing sympathy, apology, and complaint are significant emotion influencers. We verify the emotion classification part of our framework by a human-annotated corpus."}
{"_id": "8f96d88288385c3dbc681be5adf800bdaab3b35e", "title": "Concreteness ratings for 40 thousand generally known English word lemmas.", "text": "Concreteness ratings are presented for 37,058 English words and 2,896 two-word expressions (such as zebra crossing and zoom in), obtained from over 4,000 participants by means of a norming study using Internet crowdsourcing for data collection. Although the instructions stressed that the assessment of word concreteness would be based on experiences involving all senses and motor responses, a comparison with the existing concreteness norms indicates that participants, as before, largely focused on visual and haptic experiences. The reported data set is a subset of a comprehensive list of English lemmas and contains all lemmas known by at least 85 % of the raters. It can be used in future research as a reference list of generally known English lemmas."}
{"_id": "994b296d67237cece5bc26e51e4d6a264bd8fba6", "title": "On IC traceability via blockchain", "text": "Traceability of ICs is important for verifying provenance. We present a novel IC traceability scheme based on blockchain. A blockchain is an immutable public record that maintains a continuously-growing list of data records secured from tampering and revision. In the proposed scheme, IC ownership transfer information is recorded and archived in a blockchain. This safe, verifiable method prevents any party from altering or challenging the legitimacy of the information being exchanged. However, we also need to establish correspondence between a record in a public database and the physical device, in an unclonable way. We propose an embedded physically unclonable function (PUF) to establish this correspondence The blockchain ensures the legitimacy of an IC's current owner, while the PUF safeguards against IC counterfeiting and tampering. Our proposed solution combines hardware and software protocols to let supply chain participants authenticate, track, trace, analyze, and provision chips during their entire life cycle."}
{"_id": "1139099a3f3271745ff9e7180aa63bb563b57ab8", "title": "Load balancing and OpenMP implementation of nested parallelism", "text": "Many problems have multiple layers of parallelism. The outer-level may consist of few and coarse-grained tasks. Next, each of these tasks may also be rich in parallelism, and be split into a number of fine-grained tasks, which again may consist of even finer subtasks, and so on. Here we argue and demonstrate by examples that utilizing multiple layers of parallelism may give much better scaling than if one restricts oneself to only one level of parallelism. Two non-trivial issues for multi-level parallelism are load balancing and implementation. In this paper we provide an algorithm for finding good distributions of threads to tasks and discuss how to implement nested parallelism in OpenMP. 2005 Elsevier B.V. All rights reserved."}
{"_id": "4c1a78e16ca798677f20e4706b221554f7a4b75f", "title": "Design of a transmit-receive (T/R) switch in TSMC 180nm RF CMOS process for 2.4GHz transceiver", "text": "Design and analysis of a RF transmit/receive switch in 180nm CMOS RF process for 2.4GHz transceiver is presented. LC resonance along with stacked gate architecture is presented for designing the T/R switch. The switch exhibited excellent performance in terms of insertion loss, isolation linearity and gain. The switch added minimum noise by itself during the input and output switching and achieved stability throughout the operating rang at 2.4GHz. The designed switch exhibits -48.73dB and -79.04dB insertion loss during receiver and transmitter mode respectively. The isolation between non-used ports is -96.27 in both mode of operation. The designed switch handles more than 20 dBm power at 2.4 GHz frequency."}
{"_id": "7a31df833a16f15ec5e0e09dd2ff1f1afbdbd35d", "title": "Situation and Perspective of Knowledge Engineering", "text": "Knowledge Engineering was in the past primary concerned with building and developing knowledge-based systems, an objective which puts Knowledge Engineering in a niche of the world-wide research efforts at best. This has changed dramatically: Knowledge Engineering is now a key technology in the upcoming knowledge society. Companies are recognizing knowledge as their key assets, which have to be exploited and protected in a fast changing, global and competitive economy. This situation has led to the application of Knowledge Engineering techniques in Knowledge Management. The demand for more efficient (business to) business processes requires the interconnection and interoperation of different information systems. But information integration is not an algorithmic task that is easy to solve: much knowledge is required to resolve the semantic differences of data residing in two information systems. Thus Knowledge Engineering has become a major technqiue for information integration. And, last but not least the fast growing World Wide Web generates an ever increasing demand for more efficient knowledge exploitation and creation techniques. Here again Knowledge Engineering technologies may become the key technology for solving the problem. In this paper we discuss these recent developments and describe our view of the future."}
{"_id": "71b791d24e4888f0106b0959000a034cae7c7677", "title": "How Taxing is Corruption on International Investors?", "text": "This paper studies the effect of corruption on foreign direct investment. The sample covers bilateral investment from twelve source countries to 45 host countries. There are two central findings. First, a rise in either the tax rate on multinational firms or the corruption level in a host country reduces inward foreign direct investment (FDI). In a benchmark estimation, an increase in the corruption level from that of Singapore to that of Mexico would have the same negative effect on inward FDI as raising the tax rate by fifty percentage points. Second, American investors are averse to corruption in host countries, but not necessarily more so than average OECD investors, in spite of the U.S. Foreign Corrupt Practices Act of 1977."}
{"_id": "3cca6de144cc2f1a2860447dfafdd7e8b42fbaaa", "title": "On the modelling of publish/subscribe communication systems", "text": "This paper presents a formal framework of a distributed computation based on a publish/subscribe system. The framework abstracts the system through two delays, namely the subscription/unsubscription delay and the diffusion delay. This abstraction allows one to model concurrent execution of publication and subscription operations without waiting for the stability of the system state and to define a Liveness property which gives the conditions for the presence of a notification event in the global history of the system. This formal framework allows us to analytically define a measure of the effectiveness of a publish/subscribe system, which reflects the percentage of notifications guaranteed by the system to subscribers. A simulation study confirms the validity of the analytical measurements. Copyright c \u00a9 2005 John Wiley & Sons, Ltd."}
{"_id": "f49487c0295adcedcc00bf6533fcf3a171b17fe8", "title": "Research on Credit Card Fraud Detection Model Based on Class Weighted Support Vector Machine", "text": "To deal with credit card fraud, a detection model based on Class Weighted Support Vector Machine was established. Due to large-scale and high dimensions of data, Principal Component Analysis (PCA) was adopted firstly to screen out the main factors from a great deal of indicative attributes in order to reduce the training dimension of SVM effectively. Then according to the characteristics of credit card transactions data which are imbalance, an improved SVM--Imbalance Class Weighted SVM (ICW-SVM) was adopted. With the application and verification in real dataset from bank, it is demonstrated that this model is more suitable for solving credit card fraud detection problem with higher precision and effective than others."}
{"_id": "9304f80fbca5bfdcc3820543d186d2b2da5b1c4d", "title": "A Survey of Indoor Positioning and Object Locating Systems", "text": "This paper investigated various indoor positioning techniques and presented a comprehensive study about their advantages and disadvantages. Infrared, Ultrasonic and RF technologies are used in different indoor positioning systems. RFID positioning systems based on RSSI technology are the most recent developments. Positioning accuracy was greatly improved by using integrated RFID technologies."}
{"_id": "fedca173539260915bccddb072635d65aee412ce", "title": "SURVEY ON REVIEW SPAM DETECTION", "text": "The proliferation of E-commerce sites has made web an excellent source of gathering customer reviews about products; as there is no quality control anyone one can write anything which leads to review spam. This paper previews and reviews the substantial research on Review Spam detection technique. Further it provides state of art depicting some previous attempt to study review spam detection. KeywordsNatural language processing, reviews centric spam detection, reviewer centric spam detection"}
{"_id": "4d423acc78273b75134e2afd1777ba6d3a398973", "title": "The CMU Pose, Illumination, and Expression (PIE) Database", "text": "Between October 2000 and December 2000 we collected a database of over 40,000 facial images of 68 people. Using the CMU 3D Room we imaged each person across 13 different poses, under 43 different illumination conditions, and with 4 different expressions. We call this database the CMU Pose, Illumination, and Expression (PIE) database. In this paper we describe the imaging hardware, the collection procedure, the organization of the database, several potential uses of the database, and how to obtain the database."}
{"_id": "bbbd015155bbe5098aad6b49a548e9f3570e49ec", "title": "Gabor feature based classification using the enhanced fisher linear discriminant model for face recognition", "text": "This paper introduces a novel Gabor-Fisher (1936) classifier (GFC) for face recognition. The GFC method, which is robust to changes in illumination and facial expression, applies the enhanced Fisher linear discriminant model (EFM) to an augmented Gabor feature vector derived from the Gabor wavelet representation of face images. The novelty of this paper comes from 1) the derivation of an augmented Gabor feature vector, whose dimensionality is further reduced using the EFM by considering both data compression and recognition (generalization) performance; 2) the development of a Gabor-Fisher classifier for multi-class problems; and 3) extensive performance evaluation studies. In particular, we performed comparative studies of different similarity measures applied to various classifiers. We also performed comparative experimental studies of various face recognition schemes, including our novel GFC method, the Gabor wavelet method, the eigenfaces method, the Fisherfaces method, the EFM method, the combination of Gabor and the eigenfaces method, and the combination of Gabor and the Fisherfaces method. The feasibility of the new GFC method has been successfully tested on face recognition using 600 FERET frontal face images corresponding to 200 subjects, which were acquired under variable illumination and facial expressions. The novel GFC method achieves 100% accuracy on face recognition using only 62 features."}
{"_id": "0160ec003ae238a98676b6412b49d4b760f63544", "title": "Probabilistic Visual Learning for Object Representation", "text": "We present an unsupervised technique for visual learning, which is based on density estimation in high-dimensional spaces using an eigenspace decomposition. Two types of density estimates are derived for modeling the training data: a multivariate Gaussian (for unimodal distributions) and a Mixture-of-Gaussians model (for multimodal distributions). These probability densities are then used to formulate a maximum-likelihood estimation framework for visual search and target detection for automatic object recognition and coding. Our learning technique is applied to the probabilistic visual modeling, detection, recognition, and coding of human faces and nonrigid objects, such as hands."}
{"_id": "258e4e58c67ecc8a030f5ffd187657344e7d3cc7", "title": "Introspection-based memory de-duplication and migration", "text": "Memory virtualization abstracts a physical machine's memory resource and presents to the virtual machines running on it a piece of physical memory that could be shared, compressed and moved. To optimize the memory resource utilization by fully leveraging the flexibility afforded by memory virtualization, it is essential that the hypervisor have some sense of how the guest VMs use their allocated physical memory. One way to do this is virtual machine introspection (VMI), which interprets byte values in a guest memory space into semantically meaningful data structures. However, identifying a guest VM's memory usage information such as free memory pool is non-trivial. This paper describes a bootstrapping VM introspection technique that could accurately extract free memory pool information from multiple versions of Windows and Linux without kernel version-specific hard-coding, how to apply this technique to improve the efficiency of memory de-duplication and memory state migration, and the resulting improvement in memory de-duplication speed, gain in additional memory pages de-duplicated, and reduction in traffic loads associated with memory state migration."}
{"_id": "14b9ef1d5c0688aeb27846225ca4cb1e9dac7085", "title": "Deep LSTM based Feature Mapping for Query Classification", "text": "Traditional convolutional neural network (CNN) based query classification uses linear feature mapping in its convolution operation. The recurrent neural network (RNN), differs from a CNN in representing word sequence with their ordering information kept explicitly. We propose using a deep long-short-term-memory (DLSTM) based feature mapping to learn feature representation for CNN. The DLSTM, which is a stack of LSTM units, has different order of feature representations at different depth of LSTM unit. The bottom LSTM unit equipped with input and output gates, extracts the first order feature representation from current word. To extract higher order nonlinear feature representation, the LSTM unit at higher position gets input from two parts. First part is the lower LSTM unit\u2019s memory cell from previous word. Second part is the lower LSTM unit\u2019s hidden output from current word. In this way, the DLSTM captures the nonlinear nonconsecutive interaction within n-grams. Using an architecture that combines a stack of the DLSTM layers with a tradition CNN layer, we have observed new state-of-the-art query classification accuracy on benchmark data sets for query classification."}
{"_id": "40e389184188ce21c3aa66b8f3fe38dbdd076457", "title": "Robust Nighttime Vehicle Detection by Tracking and Grouping Headlights", "text": "Nighttime traffic surveillance is difficult due to insufficient and unstable appearance information and strong background interference. We present in this paper a robust nighttime vehicle detection system by detecting, tracking, and grouping headlights. First, we train AdaBoost classifiers for headlights detection to reduce false alarms caused by reflections. Second, to take full advantage of the complementary nature of grouping and tracking, we alternately optimize grouping and tracking. For grouping, motion features produced by tracking are used by headlights pairing. We use a maximal independent set framework for effective pairing, which is more robust than traditional pairing-by-rules methods. For tracking, context information provided by pairing is employed by multiple object tracking. The experiments on challenging datasets and quantitative evaluation show promising performance of our method."}
{"_id": "e754dda667a22f8953d7515fdf5dfe947a01ebda", "title": "A Reinforcement Learning-driven Translation Model for Search-Oriented Conversational Systems", "text": "Search-oriented conversational systems rely on information needs expressed in natural language (NL). We focus here on the understanding of NL expressions for building keywordbased queries. We propose a reinforcementlearning-driven translation model framework able to 1) learn the translation from NL expressions to queries in a supervised way, and, 2) to overcome the lack of large-scale dataset by framing the translation model as a word selection approach and injecting relevance feedback as a reward in the learning process. Experiments are carried out on two TREC datasets. We outline the effectiveness of our approach."}
{"_id": "b05646b8a4930519081c68569be2ce7e463e9cc5", "title": "Key technology and application of millimeter wave communications for 5G: a survey", "text": "As mobile communication technology continues to develop, millimeter-wave communication for 5G mobile networks is attracting widespread public attention. Many studies have appeared in the literatures related to millimeter wave mobile broadband communication systems. This paper briefly introduces the development of 5G and millimeter wave communications, analyses several practical designs and implementations of millimeter wave communications for 5G, and proposes some future applications and challenges for 5G."}
{"_id": "ac2c955a61002b674bd104b91f89087271fc3b8e", "title": "Five-Level Reduced-Switch-Count Boost PFC Rectifier With Multicarrier PWM", "text": "A multilevel boost power factor correction (PFC) rectifier is presented in this paper controlled by cascaded controller and multicarrier pulse width modulation technique. The presented topology has less active semiconductor switches compared to similar ones reducing the number of required gate drives that would shrink the manufactured box significantly. A simple controller has been implemented on the studied converter to generate a constant voltage at the output while generating a five-level voltage waveform at the input without connecting the load to the neutral point of the dc bus capacitors. Multicarrier pulse-width modulation technique has been used to produce switching pulses from control signal at a fixed switching frequency. Multilevel voltage waveform harmonics has been analyzed comprehensively which affects the harmonic contents of input current and the size of required filters directly. Full experimental results confirm the good dynamic performance of the proposed five-level PFC boost rectifier in delivering power from ac grid to the dc loads while correcting the power factor at the ac side as well as reducing the current harmonics remarkably."}
{"_id": "25b0d0316ece493899d74cfb98ce7b77dca8352e", "title": "Stock market prediction system with modular neural networks", "text": "This paper discusses a buying and selling timing prediction system for stocks on the Tokyo Stock Exchange and analysis of intemal representation. It is based on modular neural networks[l][2]. We developed a number of learning algorithms and prediction methods for the TOPIX(Toky0 Stock Exchange Prices Indexes) prediction system. The prediction system achieved accurate predictions and the simulation on stocks tradmg showed an excellent profit. The prediction system was developed by Fujitsu and Nikko Securities."}
{"_id": "481eadc1511574a6375ae1a04e4dc1e824d09de5", "title": "Subspace Learning and Imputation for Streaming Big Data Matrices and Tensors", "text": "Extracting latent low-dimensional structure from high-dimensional data is of paramount importance in timely inference tasks encountered with \u201cBig Data\u201d analytics. However, increasingly noisy, heterogeneous, and incomplete datasets, as well as the need for real-time processing of streaming data, pose major challenges to this end. In this context, the present paper permeates benefits from rank minimization to scalable imputation of missing data, via tracking low-dimensional subspaces and unraveling latent (possibly multi-way) structure from incomplete streaming data. For low-rank matrix data, a subspace estimator is proposed based on an exponentially weighted least-squares criterion regularized with the nuclear norm. After recasting the nonseparable nuclear norm into a form amenable to online optimization, real-time algorithms with complementary strengths are developed, and their convergence is established under simplifying technical assumptions. In a stationary setting, the asymptotic estimates obtained offer the well-documented performance guarantees of the batch nuclear-norm regularized estimator. Under the same unifying framework, a novel online (adaptive) algorithm is developed to obtain multi-way decompositions of low-rank tensors with missing entries and perform imputation as a byproduct. Simulated tests with both synthetic as well as real Internet and cardiac magnetic resonance imagery (MRI) data confirm the efficacy of the proposed algorithms, and their superior performance relative to state-of-the-art alternatives."}
{"_id": "c662e9a76dacdc9ca7171e7731e26c9a823f05a9", "title": "Prevalence of autism in a US metropolitan area.", "text": "CONTEXT\nConcern has been raised about possible increases in the prevalence of autism. However, few population-based studies have been conducted in the United States.\n\n\nOBJECTIVES\nTo determine the prevalence of autism among children in a major US metropolitan area and to describe characteristics of the study population.\n\n\nDESIGN, SETTING, AND POPULATION\nStudy of the prevalence of autism among children aged 3 to 10 years in the 5 counties of metropolitan Atlanta, Ga, in 1996. Cases were identified through screening and abstracting records at multiple medical and educational sources, with case status determined by expert review.\n\n\nMAIN OUTCOME MEASURES\nAutism prevalence by demographic factors, levels of cognitive functioning, previous autism diagnoses, special education eligibility categories, and sources of identification.\n\n\nRESULTS\nA total of 987 children displayed behaviors consistent with Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition criteria for autistic disorder, pervasive developmental disorder-not otherwise specified, or Asperger disorder. The prevalence for autism was 3.4 per 1000 (95% confidence interval [CI], 3.2-3.6) (male-female ratio, 4:1). Overall, the prevalence was comparable for black and white children (black, 3.4 per 1000 [95% CI, 3.0-3.7] and white, 3.4 per 1000 [95% CI, 3.2-3.7]). Sixty-eight percent of children with IQ or developmental test results (N = 880) had cognitive impairment. As severity of cognitive impairment increased from mild to profound, the male-female ratio decreased from 4.4 to 1.3. Forty percent of children with autism were identified only at educational sources. Schools were the most important source for information on black children, children of younger mothers, and children of mothers with less than 12 years of education.\n\n\nCONCLUSION\nThe rate of autism found in this study was higher than the rates from studies conducted in the United States during the 1980s and early 1990s, but it was consistent with those of more recent studies."}
{"_id": "c2fafa93bd9b91ede867d4979bc747334d989040", "title": "A direct fingerprint minutiae extraction approach based on convolutional neural networks", "text": "Minutiae, as the essential features of fingerprints, play a significant role in fingerprint recognition systems. Most existing minutiae extraction methods are based on a series of hand-defined preprocesses such as binarization, thinning and enhancement. However, these preprocesses require strong prior knowledge and are always lossy operations. And that will lead to dropped or false extractions of minutiae. In this paper, a novel minutiae extraction approach based on deep convolutional neural networks is proposed, which directly extract minutiae on raw fingerprint images without any preprocess since we tactfully take the advantage of the strong representative capacity of deep convolutional neural networks. Minutiae can be effectively extracted due to the well designed architectures. Furthermore, the accuracy is guaranteed in that the comprehensive estimate is made to eliminate spurious minutiae. Moreover, a number of implement skills are employed both to avoid overfitting and to improve the robustness. This approach makes a good performance because it not only makes all use of information in fingerprint images but also learns the minutiae patterns from large amounts of data. Comparisons are made with previous works and a widely applied commercial fingerprint identification system. Results show that our approach performs better both in accuracy and robustness."}
{"_id": "4606eeaa5945a262639879cc8bd00b1cadd39a86", "title": "VOCALOID - commercial singing synthesizer based on sample concatenation", "text": "The song submitted here to the \u201cSynthesis of Singing Challenge\u201d is synthesized by the latest version of the singing synthesizer \u201cVocaloid\u201d, which is commercially available now. In this paper, we would like to present the overview of Vocaloid, its product lineups, description of each component, and the synthesis technique used in Vocaloid."}
{"_id": "36b0ba31eb7489772616ea9d5bd789483d494e93", "title": "PWM regenerative rectifiers: state of the art", "text": "New regulations impose more stringent limits on current harmonics injected by power converters that are achieved with pulsewidth-modulated (PWM) rectifiers. In addition, several applications demand the capability of power regeneration to the power supply. This work presents the state of the art in the field of regenerative rectifiers with reduced input harmonics and improved power factor. Regenerative rectifiers are able to deliver energy back from the dc side to the ac power supply. Topologies for single- and three-phase power supplies are considered with their corresponding control strategies. Special attention is given to the application of voltage- and current-source PWM rectifiers in different processes with a power range from a few kilowatts up to several megawatts. This paper shows that PWM regenerative rectifiers are a highly developed and mature technology with a wide industrial acceptance."}
{"_id": "5b1aa922e511f2b29331e7d961cb722fd0a5d8f0", "title": "Adaptive control of systems in cascade with saturation", "text": "Some systems may be approximated as block cascades structures where each block of the cascade can be approximately feedback linearized. In such systems, the states of the lower subsystem blocks affect the dynamics of the upper subsystems. In generating an inverse for a given block, a subset of its lower subsystem states can be treated as virtual actuators in addition to actual direct actuation that may be available. The desired virtual actuator signal arising from the upper subsystem's inverse now appears as a command to the lower subsystem. This paper introduces an adaptive element that is capable of canceling modeling errors arising due to feedback linearization. It also introduces reference models that include a pseudocontrol hedging signal which protects the adaptive element from lower subsystem's dynamics."}
{"_id": "82b544b32cb92c8767f7d3c126f9c3a3f584d54b", "title": "Selecting Coherent and Relevant Plots in Large Scatterplot Matrices", "text": "The scatterplot matrix (SPLOM) is a well-established technique to visually explore high-dimensional data sets. It is characterized by the number of scatterplots (plots) of which it consists of. Unfortunately, this number quadratically grows with the number of the data set\u2019s dimensions. Thus, a SPLOM scales very poorly. Consequently, the usefulness of SPLOMs is restricted to a small number of dimensions. For this, several approaches already exist to explore such \u201csmall\u201d SPLOMs. Those approaches address the scalability problem just indirectly and without solving it. Therefore, we introduce a new greedy approach to manage \u201clarge\u201d SPLOMs with more than hundred dimensions. We establish a combined visualization and interaction scheme that produces intuitively interpretable SPLOMs by combining known quality measures, a pre-process reordering, and a perception-based abstraction. With this scheme, the user can interactively find large amounts of relevant plots in large SPLOMs."}
{"_id": "101bba64bf85351376576c12b160d74bb7b83790", "title": "Using galvanic skin response for cognitive load measurement in arithmetic and reading tasks", "text": "Galvanic Skin Response (GSR) has recently attracted researchers' attention as a prospective physiological indicator of cognitive load and emotions. However, it has commonly been investigated through single or few measures and in one experimental scenario. In this research, aiming to perform a comprehensive study, we have assessed GSR data captured from two different experiments, one including text reading tasks and the other using arithmetic tasks, each imposing multiple cognitive load levels. We have examined temporal and spectral features of GSR against different task difficulty levels. ANOVA test was applied for the statistical evaluation. Obtained results show the strong significance of the explored features, especially the spectral ones, in cognitive workload measurement in the two studied experiments."}
{"_id": "8520c84169ba0a55783629f61576b1d247a4b321", "title": "A Blockchain-based Security-Oriented Framework for Cloud Federation", "text": "Cloud federations have been formed to share the services, prompt and support cooperation, as well as interoperability among their already deployed cloud systems. However, the creation and management of the cloud federations lead to various security issues such as confidentially, integrity and availability of the data. Despite the access control policies in place, an attacker may compromise the communication channel processing the access requests and the decisions between the access control systems and the members(users) and vice-versa. In cloud federation, the rating of the services offered by different cloud members becomes integral to providing the users with the best quality services. Hence, we propose an innovative blockchainbased framework that on the one hand permits secure communication between the members of the federation and the access control systems, while on the other hand provides the quality services to the members by considering the service constraints imposed by them."}
{"_id": "1ba77164225515320452654e7b665a4e01cafd2b", "title": "Total Recall: System Support for Automated Availability Management", "text": "Goal: Highly available data storage in large-scale distributed systems in which * Hosts are transiently inaccessible * Individual host failures are common Current peer-to-peer systems are prime examples * Highly dynamic, challenging environment * hosts join and leave frequently in short-term * Hosts leave permanently over long-term * Workload varies in terms of popularity, access patterns, file size These systems require automated availability management."}
{"_id": "1a964f13abb3cdc5675dbfd612fa0409608e28c7", "title": "CSRNet: Dilated Convolutional Neural Networks for Understanding the Highly Congested Scenes", "text": "We propose a network for Congested Scene Recognition called CSRNet to provide a data-driven and deep learning method that can understand highly congested scenes and perform accurate count estimation as well as present high-quality density maps. The proposed CSRNet is composed of two major components: a convolutional neural network (CNN) as the front-end for 2D feature extraction and a dilated CNN for the back-end, which uses dilated kernels to deliver larger reception fields and to replace pooling operations. CSRNet is an easy-trained model because of its pure convolutional structure. We demonstrate CSRNet on four datasets (ShanghaiTech dataset, the UCF_CC_50 dataset, the WorldEXPO'10 dataset, and the UCSD dataset) and we deliver the state-of-the-art performance. In the ShanghaiTech Part_B dataset, CSRNet achieves 47.3% lower Mean Absolute Error (MAE) than the previous state-of-the-art method. We extend the targeted applications for counting other objects, such as the vehicle in TRANCOS dataset. Results show that CSRNet significantly improves the output quality with 15.4% lower MAE than the previous state-of-the-art approach."}
{"_id": "e967bd1157609c435067209768097f31fc7e96f4", "title": "The role of character strengths in adolescent romantic relationships: an initial study on partner selection and mates' life satisfaction.", "text": "The present study investigated the role of 24 character strengths in 87 adolescent romantic relationships focusing on their role in partner selection and their role in mates' life satisfaction. Measures included the Values in Action Inventory of Strengths for Youth, the Students' Life Satisfaction Scale, and an Ideal Partner Profiler for the composition of an ideal partner. Honesty, humor, and love were the most preferred character strengths in an ideal partner. Hope, religiousness, honesty, and fairness showed the most substantial assortment coefficients. Hierarchical regression analyses revealed targets' character strengths as explaining variance in targets' life satisfaction. Furthermore, to a lesser degree, specific character strengths of partners and couples' similarity in certain character strengths explained variance in targets' life satisfaction beyond targets' character strengths. This first research on this topic showed that character strengths play a significant role in adolescent romantic relationships."}
{"_id": "987dbe601d2becca52af60dd2f758d0832337e56", "title": "LDA-AdaBoost.MH: Accelerated AdaBoost.MH based on latent Dirichlet allocation for text categorization", "text": "AdaBoost.MH is a boosting algorithm that is considered to be one of the most accurate algorithms for multilabel classification. It works by iteratively building a committee of weak hypotheses of decision stumps. To build the weak hypotheses, in each iteration, AdaBoost.MH obtains the whole extracted features and examines them one by one to check their ability to characterize the appropriate category. Using Bag-Of-Words for text representation dramatically increases the computational time of AdaBoost.MH learning, especially for large-scale datasets. In this paper we demonstrate how to improve the efficiency and effectiveness of AdaBoost.MH using latent topics, rather than words. A well-known probabilistic topic modelling method, Latent Dirichlet Allocation, is used to estimate the latent topics in the corpus as features for AdaBoost.MH. To evaluate LDA-AdaBoost.MH, the following four datasets have been used: Reuters-21578-ModApte, WebKB, 20-Newsgroups and a collection of Arabic news. The experimental results confirmed that representing the texts as a small number of latent topics, rather than a large number of words, significantly decreased the computational time of AdaBoost.MH learning and improved its performance for text categorization."}
{"_id": "fd53325ba95cdb0699f4219e852e96940964b4ca", "title": "Adapting instance weights for unsupervised domain adaptation using quadratic mutual information and subspace learning", "text": "Domain adaptation (DA) algorithms utilize a label-rich old dataset (domain) to build a machine learning model (classification, detection etc.) in a label-scarce new dataset with different data distribution. Recent approaches transform cross-domain data into a shared subspace by minimizing the shift between their marginal distributions. In this paper, we propose a novel iterative method to learn a common subspace based on non-parametric quadratic mutual information (QMI) between data and corresponding class labels. We extend a prior work of discriminative subspace learning based on maximization of QMI and integrate instance weighting into the QMI formulation. We propose an adaptive weighting model to identify relevant samples that share underlying similarity across domains and ignore irrelevant ones. Due to difficulty of applying cross-validation, an alternative strategy is integrated with the proposed algorithm to setup model parameters. A set of comprehensive experiments on benchmark datasets is conducted to prove the efficacy of our proposed framework over state-of-the-art approaches."}
{"_id": "d96cf2e65ebd1874af78f81af6498d556468d042", "title": "Outlier Detection over Massive-Scale Trajectory Streams", "text": "The detection of abnormal moving objects over high-volume trajectory streams is critical for real-time applications ranging from military surveillance to transportation management. Yet this outlier detection problem, especially along both the spatial and temporal dimensions, remains largely unexplored. In this work, we propose a rich taxonomy of novel classes of neighbor-based trajectory outlier definitions that model the anomalous behavior of moving objects for a large range of real-time applications. Our theoretical analysis and empirical study on two real-world datasets\u2014the Beijing Taxi trajectory data and the Ground Moving Target Indicator data stream\u2014and one generated Moving Objects dataset demonstrate the effectiveness of our taxonomy in effectively capturing different types of abnormal moving objects. Furthermore, we propose a general strategy for efficiently detecting these new outlier classes called the minimal examination (MEX) framework. The MEX framework features three core optimization principles, which leverage spatiotemporal as well as the predictability properties of the neighbor evidence to minimize the detection costs. Based on this foundation, we design algorithms that detect the outliers based on these classes of new outlier semantics that successfully leverage our optimization principles. Our comprehensive experimental study demonstrates that our proposed MEX strategy drives the detection costs 100-fold down into the practical realm for applications that analyze high-volume trajectory streams in near real time."}
{"_id": "72d8af32991a69d0fa3ff2d8f80c8553cca7c0cd", "title": "Deep Robust Kalman Filter", "text": "A Robust Markov Decision Process (RMDP) is a sequential decision making model that accounts for uncertainty in the parameters of dynamic systems. This uncertainty introduces difficulties in learning an optimal policy, especially for environments with large state spaces. We propose two algorithms, RTD-DQN and Deep-RoK, for solving large-scale RMDPs using nonlinear approximation schemes such as deep neural networks. The RTD-DQN algorithm incorporates the robust Bellman temporal difference error into a robust loss function, yielding robust policies for the agent. The Deep-RoK algorithm is a robust Bayesian method, based on the Extended Kalman Filter (EKF), that accounts for both the uncertainty in the weights of the approximated value function and the uncertainty in the transition probabilities, improving the robustness of the agent. We provide theoretical results for our approach and test the proposed algorithms on a continuous state domain."}
{"_id": "05ff07498f456ce7504ec1a2b6b646647157d4b8", "title": "Fast Computation of Wasserstein Barycenters", "text": "We present new algorithms to compute the mean of a set of empirical probability measures under the optimal transport metric. This mean, known as the Wasserstein barycenter, is the measure that minimizes the sum of its Wasserstein distances to each element in that set. We propose two original algorithms to compute Wasserstein barycenters that build upon the subgradient method. A direct implementation of these algorithms is, however, too costly because it would require the repeated resolution of large primal and dual optimal transport problems to compute subgradients. Extending the work of Cuturi (2013), we propose to smooth the Wasserstein distance used in the definition of Wasserstein barycenters with an entropic regularizer and recover in doing so a strictly convex objective whose gradients can be computed for a considerably cheaper computational cost using matrix scaling algorithms. We use these algorithms to visualize a large family of images and to solve a constrained clustering problem."}
{"_id": "9327691fac69ecc6fbe848df94a6e5d358c76b29", "title": "Proximal Splitting Methods in Signal Processing \u2217", "text": "The proximity operator of a convex function is a natural exte nsion of the notion of a projection operator onto a convex set. This tool, which plays a central role in the analysis and the numerical solution of convex opt imization problems, has recently been introduced in the arena of inverse problems an d, especially, in signal processing, where it has become increasingly important. In this paper, we review the basic properties of proximity operators which are relevant to signal processing and present optimization methods based on these operators. The e proximal splitting methods are shown to capture and extend several well-known a lgorithms in a unifying framework. Applications of proximal methods in signal r ecovery and synthesis are discussed."}
{"_id": "1ad0ffeb6e69a5bc09ffa53712888b84a3b9df95", "title": "Super-Samples from Kernel Herding", "text": "We extend the herding algorithm to continuous spaces by using the kernel trick. The resulting \u201ckernel herding\u201d algorithm is an infinite memory deterministic process that learns to approximate a PDF with a collection of samples. We show that kernel herding decreases the error of expectations of functions in the Hilbert space at a rateO(1/T )which is much faster than the usual O(1/ \u221a T ) for iid random samples. We illustrate kernel herding by approximating Bayesian predictive distributions."}
{"_id": "1e300582966022da2126753fa406db29404784e1", "title": "Clustering with Bregman Divergences", "text": "A wide variety of distortion functions, such as squared Eucl idean distance, Mahalanobis distance, Itakura-Saito distance and relative entropy, have been use d for clustering. In this paper, we propose and analyze parametric hard and soft clustering algori thms based on a large class of distortion functions known as Bregman divergences. The proposed algor ithms unify centroid-based parametric clustering approaches, such as classical kmeans, the Linde-Buzo-Gray (LBG) algorithm and information-theoretic clustering, which arise by special hoices of the Bregman divergence. The algorithms maintain the simplicity and scalability of the c lassicalkmeans algorithm, while generalizing the method to a large class of clustering loss functi o s. This is achieved by first posing the hard clustering problem in terms of minimizing the loss i n Bregman information, a quantity motivated by rate distortion theory, and then deriving an it erative algorithm that monotonically decreases this loss. In addition, we show that there is a biject ion between regular exponential families and a large class of Bregman divergences, that we call regula r Bregman divergences. This result enables the development of an alternative interpretation o f a efficient EM scheme for learning mixtures of exponential family distributions, and leads to a si mple soft clustering algorithm for regular Bregman divergences. Finally, we discuss the connection be tween rate distortion theory and Bregman clustering and present an information theoretic analys is of Bregman clustering algorithms in terms of a trade-off between compression and loss in Bregman information."}
{"_id": "3c86958bd1902e60496a8fcb8312ec1ab1c32b63", "title": "Fast Image Recovery Using Variable Splitting and Constrained Optimization", "text": "We propose a new fast algorithm for solving one of the standard formulations of image restoration and reconstruction which consists of an unconstrained optimization problem where the objective includes an l2 data-fidelity term and a nonsmooth regularizer. This formulation allows both wavelet-based (with orthogonal or frame-based representations) regularization or total-variation regularization. Our approach is based on a variable splitting to obtain an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence has been proved. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is faster than the current state of the art methods."}
{"_id": "5b4c1480477fe2324a99fa032bec2834ef31efd6", "title": "Needs, affect, and interactive products - Facets of user experience", "text": "Subsumed under the umbrella of User Experience (UX), practitioners and academics of Human\u2013Computer Interaction look for ways to broaden their understanding of what constitutes \u2018\u2018pleasurable experiences\u201d with technology. The present study considered the fulfilment of universal psychological needs, such as competence, relatedness, popularity, stimulation, meaning, security, or autonomy, to be the major source of positive experience with interactive technologies. To explore this, we collected over 500 positive experiences with interactive products (e.g., mobile phones, computers). As expected, we found a clear relationship between need fulfilment and positive affect, with stimulation, relatedness, competence and popularity being especially salient needs. Experiences could be further categorized by the primary need they fulfil, with apparent qualitative differences among some of the categories in terms of the emotions involved. Need fulfilment was clearly linked to hedonic quality perceptions, but not as strongly to pragmatic quality (i.e., perceived usability), which supports the notion of hedonic quality as \u2018\u2018motivator\u201d and pragmatic quality as \u2018\u2018hygiene factor.\u201d Whether hedonic quality ratings reflected need fulfilment depended on the belief that the product was responsible for the experience (i.e., attribution). 2010 Elsevier B.V. All rights reserved."}
{"_id": "46d16042c6edb8d1992ffa2a2fd853dca85af7cb", "title": "MD-GAN: Multi-Discriminator Generative Adversarial Networks for Distributed Datasets", "text": "A recent technical breakthrough in the domain of machine learning is the discovery and the multiple applications of Generative Adversarial Networks (GANs). Those generative models are computationally demanding, as a GAN is composed of two deep neural networks, and because it trains on large datasets. A GAN is generally trained on a single server. In this paper, we address the problem of distributing GANs so that they are able to train over datasets that are spread on multiple workers. MD-GAN is exposed as the first solution for this problem: we propose a novel learning procedure for GANs so that they fit this distributed setup. We then compare the performance of MD-GAN to an adapted version of Federated Learning to GANs, using the MNIST and CIFAR10 datasets. MD-GAN exhibits a reduction by a factor of two of the learning complexity on each worker node, while providing better performances than federated learning on both datasets. We finally discuss the practical implications of distributing GANs."}
{"_id": "7b8031213276b23060fbd17d1d7182835fc2e0c3", "title": "An integrated 122GHz differential frequency doubler with 37GHz bandwidth in 130 nm SiGe BiCMOS technology", "text": "This paper describes an integrated frequency multiplier, implemented as a Gilbert cell based frequency doubler in a 130 nm SiGe BiCMOS technology. The circuit demonstrates a 3 dB bandwidth of 97\u2013134GHz with peak output power of 1 dBm for 1 dBm input power. The fundamental suppression, measured at the single-ended output, is better than 21 dBc while the frequency doubler consumes 69mW from a 3.3V supply. The doubler is preceded by a differential amplifier functioning as an active balun to generate a differential signal for the Gilbert cell."}
{"_id": "e59f288fe83575ce701d0dc64e9825d2165eca67", "title": "A 12-bit 1.6 GS/s interleaved SAR ADC with dual reference shifting and interpolation achieving 17.8 fJ/conv-step in 65nm CMOS", "text": "A 12-bit SAR ADC architecture with dual reference shifting and interpolation technique has been proposed and implemented with 8-way time interleaving in 65nm CMOS. The proposed technique converts 4 bits per SAR conversion cycle with reduced overhead, which is a key to achieve both high speed and resolution while maintaining low power consumption. The measured peak SNDR is 72dB and remains above 65.3dB at 1-GHz input frequency at sample rate of 1.6 GS/s. It achieves a record power efficiency of 17.8fJ/conv-step among the recently published high-speed/resolution ADCs."}
{"_id": "1003462fd2d360542ea712430b13cf0c43fc7f9d", "title": "Deep Clustering with Convolutional Autoencoders", "text": "Deep clustering utilizes deep neural networks to learn feature representation that is suitable for clustering tasks. Though demonstrating promising performance in various applications, we observe that existing deep clustering algorithms either do not well take advantage of convolutional neural networks or do not considerably preserve the local structure of data generating distribution in the learned feature space. To address this issue, we propose a deep convolutional embedded clustering algorithm in this paper. Specifically, we develop a convolutional autoencoders structure to learn embedded features in an end-to-end way. Then, a clustering oriented loss is directly built on embedded features to jointly perform feature refinement and cluster assignment. To avoid feature space being distorted by the clustering loss, we keep the decoder remained which can preserve local structure of data in feature space. In sum, we simultaneously minimize the reconstruction loss of convolutional autoencoders and the clustering loss. The resultant optimization problem can be effectively solved by mini-batch stochastic gradient descent and back-propagation. Experiments on benchmark datasets empirically validate the power of convolutional autoencoders for feature learning and the effectiveness of local structure preservation."}
{"_id": "43c2933035ed445c22bd8ee20179d64b084f29ca", "title": "Oblivious Dynamic Searchable Encryption on Distributed Cloud Systems", "text": "Dynamic Searchable Symmetric Encryption (DSSE) allows search/update operations over encrypted data via an encrypted index. However, DSSE has been shown to be vulnerable to statistical inference attacks, which can extract a significant amount of information from access patterns on encrypted index and files. While generic Oblivious Random Access Machine (ORAM) can hide access patterns, it has been shown to be extremely costly to be directly used in DSSE setting. By exploiting the distributed cloud infrastructure, we develop a series of Oblivious Distributed DSSE schemes called ODSE, which enable oblivious access on the encrypted index with a high security and improved efficiency over the use of generic ORAM. Specifically, ODSE schemes are 3\u00d7-57\u00d7 faster than applying the state-of-the-art generic ORAMs on encrypted dictionary index in real network settings. One of the proposed ODSE schemes offers desirable security guarantees such as informationtheoretic security with robustness against malicious servers. These properties are achieved by exploiting some of the unique characteristics of searchable encryption and encrypted index, which permits us to harness the computation and communication efficiency of multi-server PIR and Write-Only ORAM simultaneously. We fully implemented ODSE and have conducted extensive experiments to assess the performance of our proposed schemes in a real cloud environment."}
{"_id": "0c643edaa987db5aef20ffe44d651c1c4c122233", "title": "Automatic categorization of figures in scientific documents", "text": "Figures are very important non-textual information contained in scientific documents. Current digital libraries do not provide users tools to retrieve documents based on the information available within the figures. We propose an architecture for retrieving documents by integrating figures and other information. The initial step in enabling integrated document search is to categorize figures into a set of pre-defined types. We propose several categories of figures based on their functionalities in scholarly articles. We have developed a machine-learning-based approach for automatic categorization of figures. Both global features, such as texture, and part features, such as lines, are utilized in the architecture for discriminating among figure categories. The proposed approach has been evaluated on a testbed document set collected from the CiteSeer scientific literature digital library. Experimental evaluation has demonstrated that our algorithms can produce acceptable results for realworld use. Our tools will be integrated into a scientific document digital library."}
{"_id": "244c7d89bcd67b26756123edf44c71fd5f39dea6", "title": "GestureGAN for Hand Gesture-to-Gesture Translation in the Wild", "text": "Hand gesture-to-gesture translation in the wild is a challenging task since hand gestures can have arbitrary poses, sizes, locations and self-occlusions. Therefore, this task requires a high-level understanding of the mapping between the input source gesture and the output target gesture. To tackle this problem, we propose a novel hand Gesture Generative Adversarial Network (GestureGAN). GestureGAN consists of a single generator G and a discriminator D, which takes as input a conditional hand image and a target hand skeleton image. GestureGAN utilizes the hand skeleton information explicitly, and learns the gesture-to-gesture mapping through two novel losses, the color loss and the cycle-consistency loss. The proposed color loss handles the issue of \"channel pollution\" while back-propagating the gradients. In addition, we present the Frechet ResNet Distance (FRD) to evaluate the quality of generated images. Extensive experiments on two widely used benchmark datasets demonstrate that the proposed GestureGAN achieves state-of-the-art performance on the unconstrained hand gesture-to-gesture translation task. Meanwhile, the generated images are in high-quality and are photo-realistic, allowing them to be used as data augmentation to improve the performance of a hand gesture classifier. Our model and code are available at https://github.com/Ha0Tang/GestureGAN."}
{"_id": "fa6e2cc0ddae94afdebd494e94bb54fa74fed209", "title": "A review of optical Braille recognition", "text": "The process to take a Braille document image and convert its content into its equivalent natural language characters, is called Optical Braille Recognition (OBR). It involves two main consecutive steps: Braille cell recognition and Braille cell transcription. Braille cell recognition contains few steps including: Image acquisition, Image De-skewing, Image pre-processing, dot recognition, cell recognition and segmentation. Image transcription aims to convert the segmented Braille cell, into its equivalent natural language characters. In this survey we aim to study the earlier works done by other researchers on both Braille cell recognition and transcription."}
{"_id": "80c77d57a4e1c365d1feb7c0a1ba165b10f7de7c", "title": "Memory traces in dynamical systems.", "text": "To perform nontrivial, real-time computations on a sensory input stream, biological systems must retain a short-term memory trace of their recent inputs. It has been proposed that generic high-dimensional dynamical systems could retain a memory trace for past inputs in their current state. This raises important questions about the fundamental limits of such memory traces and the properties required of dynamical systems to achieve these limits. We address these issues by applying Fisher information theory to dynamical systems driven by time-dependent signals corrupted by noise. We introduce the Fisher Memory Curve (FMC) as a measure of the signal-to-noise ratio (SNR) embedded in the dynamical state relative to the input SNR. The integrated FMC indicates the total memory capacity. We apply this theory to linear neuronal networks and show that the capacity of networks with normal connectivity matrices is exactly 1 and that of any network of N neurons is, at most, N. A nonnormal network achieving this bound is subject to stringent design constraints: It must have a hidden feedforward architecture that superlinearly amplifies its input for a time of order N, and the input connectivity must optimally match this architecture. The memory capacity of networks subject to saturating nonlinearities is further limited, and cannot exceed square root N. This limit can be realized by feedforward structures with divergent fan out that distributes the signal across neurons, thereby avoiding saturation. We illustrate the generality of the theory by showing that memory in fluid systems can be sustained by transient nonnormal amplification due to convective instability or the onset of turbulence."}
{"_id": "bdd85b0db321ed22a3b2dc77154dac8b3ec3ad4d", "title": "Webcam-Based Eye Movement Analysis Using CNN", "text": "Due to its low price, webcam has become one of the most promising sensors with the rapid development of computer vision. However, the accuracies of eye tracking and eye movement analysis are largely limited by the quality of the webcam videos. To solve this issue, a novel eye movement analysis model is proposed based on five eye feature points rather than a single point (such as the iris center). First, a single convolutional neural network (CNN) is trained for eye feature point detection, and five eye feature points are detected for obtaining more useful eye movement information. Subsequently, six types of original time-varying eye movement signals can be constructed by feature points of each frame, which can reduce the dependency of the iris center in low quality videos. Finally, behaviors-CNN can be trained by the time-varying eye movement signals for recognizing different eye movement patterns, which is capable of avoiding the influence of errors from the basic eye movement type detection and artificial eye movement feature construction. To validate the performance, a webcam-based visual activity data set was constructed, which contained almost 0.5 million frames collected from 38 subjects. The experimental results on this database have demonstrated that the proposed model can obtain promising results for natural and convenient eye movement-based applications."}
{"_id": "299a80fb590834c42db813ecdbdf4f5f851ca9ee", "title": "An efficient active ripple filter for use in DC-DC conversion", "text": "When low ripple is required from a switched mode dc-dc converter, dissipative active filters offer an alternative to passive LC and coupled inductor filters. Analytical and experimental results are presented for a simple active ripple filter. The filter employs a pair of current transformers as sensors for feedforward and feedback control and two metal-oxide-semiconductor field-effect transistors (MOSFETs) as cancellation current drivers. Measurements demonstrate good ripple attenuation up to 5 MHz, more than 70 dB being obtained at 100 kHz, the switching frequency of a test converter. The overall efficiency was measured as 95%, with room for further improvement. The filter is suitable for input and output smoothing in dc-dc converters for aerospace and other critical applications."}
{"_id": "14ba087f7fb4e5e1c3c850f5795caf279786e6ca", "title": "Revealing the problems with 802.11 medium access control protocol in multi-hop wireless ad hoc networks", "text": "The IEEE 802.11 medium access control (MAC) protocol is a standard for wireless LANs, it is also widely used in almost all test beds and simulations for the research in wireless mobile multi-hop ad hoc networks. However, this protocol was not designed for multi-hop networks. Although it can support some ad hoc network architecture, it is not intended to support the wireless mobile ad hoc network, in which multi-hop connectivity is one of the most prominent features. In this paper, we focus on the following question: can IEEE 802.11 MAC protocol function well in multi-hop networks? By presenting several serious problems encountered in transmission control protocol (TCP) connections in an IEEE 802.11 based multi-hop network, we show that the current TCP protocol does not work well above the current 802.11 MAC layer. The relevant problems include the TCP instability problem found in this kind of network, the a b Purchase Export Previous article Next article Check if you have access through your login credentials or your institution."}
{"_id": "c1bd81e6b1ea16547b24cebda62db9c636646dfb", "title": "Electricity Generation Using an Air-Cathode Single Chamber Microbial Fuel Cell in the Presence and Absence of a Proton Exchange Membrane", "text": "Microbial fuel cells (MFCs) are typically designed as a two-chamber system with the bacteria in the anode chamber separated from the cathode chamber by a polymeric proton exchange membrane (PEM). Most MFCs use aqueous cathodes where water is bubbled with air to provide dissolved oxygen to electrode. To increase energy output and reduce the cost of MFCs, we examined power generation in an air-cathode MFC containing carbon electrodes in the presence and absence of a polymeric proton exchange membrane (PEM). Bacteria present in domestic wastewater were used as the biocatalyst, and glucose and wastewater were tested as substrates. Power density was found to be much greater than typically reported for aqueous-cathode MFCs, reaching a maximum of 262 ( 10 mW/m2 (6.6 ( 0.3 mW/L; liquid volume) using glucose. Removing the PEM increased the maximum power density to 494 ( 21 mW/m2 (12.5 ( 0.5 mW/L). Coulombic efficiency was 40-55% with the PEM and 9-12% with the PEM removed, indicating substantial oxygen diffusion into the anode chamber in the absence of the PEM. Power output increased with glucose concentration according to saturation-type kinetics, with a half saturation constant of 79 mg/L with the PEM-MFC and 103 mg/L in the MFC without a PEM (1000 \u03a9 resistor). Similar results on the effect of the PEM on power density were found using wastewater, where 28 ( 3 mW/m2 (0.7 ( 0.1 mW/L) (28% Coulombic efficiency) was produced with the PEM, and 146 ( 8 mW/m2 (3.7 ( 0.2 mW/L) (20% Coulombic efficiency) was produced when the PEM was removed. The increase in power output when a PEM was removed was attributed to a higher cathode potential as shown by an increase in the open circuit potential. An analysis based on available anode surface area and maximum bacterial growth rates suggests that mediatorless MFCs may have an upper order-of-magnitude limit in power density of 103 mW/m2. A cost-effective approach to achieving power densities in this range will likely require systems that do not contain a polymeric PEM in the MFC and systems based on direct oxygen transfer to a carbon cathode. Introduction Bacteria can be used to catalyze the conversion of organic matter into electricity (1-7). Fuel cells that use bacteria are loosely classified here as two different types: biofuel cells that generate electricity from the addition of artificial electron shuttles (mediators) (8-12) and microbial fuel cells (MFCs) that do not require the addition of a mediator (5, 7, 13-15). It has recently been shown that certain metal-reducing bacteria, belonging primarily to the family Geobacteraceae, can directly transfer electrons to electrodes using electrochemically active redox enzymes such as cytochromes on their outer membrane (16-18). These so-called mediatorless MFCs are considered to have more commercial application potential than biofuel cells because the mediators used in biofuel cells are expensive and toxic to the microorganisms (7). MFCs typically produce power at a density of less than 50 mW/m2 (normalized to anode projected surface area) (7, 13, 19). In a MFC, two electrodes (anode and cathode) are each placed in water in two chambers joined by a proton exchange membrane (PEM). The main disadvantage of a twochamber MFC is that the solution cathode must be aerated to provide oxygen to the cathode. 
It is known that the power output of an MFC can be improved by increasing the efficiency of the cathode. For example, power is increased by adding ferricyanide (20) to the cathode chamber. It is possible, however, to design a MFC that does not require that the cathode be placed in water. In hydrogen fuel cells, the cathode is bonded directly to the PEM so that oxygen in the air can directly react at the electrode (21). This technique was successfully used to produce electricity from wastewater in a single chamber MFC by Liu et al. (15). Park and Zeikus (20) produced a maximum of 788 mW/m2 using a unique system with a Mn4+ graphite anode and a direct-air Fe3+ graphite cathode (20). Because the power output of MFCs is low relative to other types of fuel cells, reducing their cost is essential if power generation using this technology is to be an economical method of energy production. Most studies have used relatively expensive solid graphite electrodes (7, 19), but graphite-felt (14) and carbon cloth (15) can also be used. The use of air-driven cathodes can reduce MFC costs because passive oxygen transfer to the cathode using air does not require energy intensive air sparging of the water. Finally, PEMs such as Nafion are quite expensive. We wondered if this material was essential for power production in an MFC. We therefore designed and constructed a carbon-cloth, aircathode fuel cell to try to increase power density to levels not previously achieved with aqueous cathode systems. To study the effect of the PEM on power production, we compared power density for glucose and wastewater feeds with the system in the presence and absence of a polymeric PEM."}
{"_id": "1d72c53e99e9e35c5556ee9732887425b05930bb", "title": "Advanced Machine Learning Technologies and Applications", "text": "The recognition of a character begins with analyzing its form and extracting the features that will be exploited for the identification. Primitives can be described as a tool to distinguish an object of one class from another object of another class. It is necessary to define the significant primitives. The size of vector primitives can be large if a large number of primitives are extracted including redundant and irrelevant features. As a result, the performance of the recognition system becomes poor, and as the number of features increases, so does the computing time. Feature selection, therefore, is required to ensure the selection of a subset of features that gives accurate recognition. In our work we propose a feature selection approach based genetic algorithm to improve the discrimination capacity of the Multilayer Perceptron Neural Networks (MLP)."}
{"_id": "d42d9460fcaa3fe836f556682ab13b3f3c6bd145", "title": "Effects of birth spacing on maternal, perinatal, infant, and child health: a systematic review of causal mechanisms.", "text": "This systematic review of 58 observational studies identified hypothetical causal mechanisms explaining the effects of short and long intervals between pregnancies on maternal, perinatal, infant, and child health, and critically examined the scientific evidence for each causal mechanism hypothesized. The following hypothetical causal mechanisms for explaining the association between short intervals and adverse outcomes were identified: maternal nutritional depletion, folate depletion, cervical insufficiency, vertical transmission of infections, suboptimal lactation related to breastfeeding-pregnancy overlap, sibling competition, transmission of infectious diseases among siblings, incomplete healing of uterine scar from previous cesarean delivery, and abnormal remodeling of endometrial blood vessels. Women's physiological regression is the only hypothetical causal mechanism that has been proposed to explain the association between long intervals and adverse outcomes. We found growing evidence supporting most of these hypotheses."}
{"_id": "de3189199c3da03c24cc56b1610e6993c86b389b", "title": "Paternal filicide in Qu\u00e9bec.", "text": "In this retrospective study, relevant demographic, social, and clinical variables were examined in 77 cases of paternal filicide. Between 1991 and 2001, all consecutive coroners' files on domestic homicide in Qu\u00e9bec, Canada, were reviewed, and 77 child victims of 60 male parent perpetrators were identified. The results support data indicating that more fathers commit filicide than do mothers. A history of family abuse was characteristic of a substantial number of cases, and most of the cases involved violent means of homicide. Filicide was frequently (60%) followed by the suicide of the perpetrator and more so (86%) in cases involving multiple sibling victims. The abuse of drugs and alcohol was rare. At the time of the offense, most of the perpetrators were suffering from a psychiatric illness, usually depressive disorder. Nearly one-third were in a psychotic state. The proportion of fatal abuse cases was comparatively low. Many of the perpetrators had had contact with health professionals prior to the offense, although none had received treatment for a psychiatric illness."}
{"_id": "47a44665e0b6b70b5566bcbf201abc5bfdc630f2", "title": "Adaptive Comparator Bias-Current Control of 0.6 V Input Boost Converter for ReRAM Program Voltages in Low Power Embedded Applications", "text": "A novel method adaptively controls the comparator bias current in a boost converter for sensor-data storage on Resistive Random Access Memory (ReRAM). ReRAM is ideal for low-power sensors because it is faster and lower voltage than NAND flash. However, ReRAM's voltage design needs to consider program current variation due to data pattern dependency and cell reliability is sensitive to small voltage fluctuations. The proposed control method of the boost converter architecture has three proposals. First, low program voltage ripple (VRIPPLE) and high program energy efficiency is obtained with an adaptive 2 to 8 \u03bcA comparator bias-current (ICMP) control method. ICMP is adjusted depending on the number of program bits (NBITS) in the SET data. In 2 cases of NBITS = 1 and 16, and compared to a conventional scheme (ICMP = 8 \u03bcA and 2 \u03bcA), respectively, VRIPPLE decreases by 41.2% and energy efficiency improves by 6.6%, respectively. Secondly, as NBITS decrease during SET verify, ICMP also decreases, which further reduces SET energy by 8.9 pJ. Thirdly, although the conventional boost converter fails in the `SS' process condition, an additional cross-coupled charge-pump boosts the buffer circuit, which provides full functional operation over the input range of 0.6 to 1.0 V."}
{"_id": "5b64fb780ed5705409d3cbc895ca64054a15d725", "title": "ECOFEN: An End-to-end energy Cost mOdel and simulator For Evaluating power consumption in large-scale Networks", "text": "Wired networks are increasing in size and their power consumption is becoming a matter of concern. Evaluating the end-to-end electrical cost of new network architectures and protocols is difficult due to the lack of monitored realistic infrastructures. We propose an End-to-End energy Cost mOdel and simulator For Evaluating power consumption in large-scale Networks (ECOFEN) whose user's entries are the network topology and traffic. Based on configurable measurement of different network components (routers, switches, NICs, etc.), it provides the power consumption of the overall network including the end-hosts as well as the power consumption of each equipment over time."}
{"_id": "2495ebdcb6da8d8c2e82cf57fcaab0ec003d571d", "title": "Using Multiple Segmentations to Discover Objects and their Extent in Image Collections", "text": "Given a large dataset of images, we seek to automatically determine the visually similar object and scene classes together with their image segmentation. To achieve this we combine two ideas: (i) that a set of segmented objects can be partitioned into visual object classes using topic discovery models from statistical text analysis; and (ii) that visual object classes can be used to assess the accuracy of a segmentation. To tie these ideas together we compute multiple segmentations of each image and then: (i) learn the object classes; and (ii) choose the correct segmentations. We demonstrate that such an algorithm succeeds in automatically discovering many familiar objects in a variety of image datasets, including those from Caltech, MSRC and LabelMe."}
{"_id": "2be8e06bc3a4662d0e4f5bcfea45631b8beca4d0", "title": "Watch and learn: Semi-supervised learning of object detectors from videos", "text": "We present a semi-supervised approach that localizes multiple unknown object instances in long videos. We start with a handful of labeled boxes and iteratively learn and label hundreds of thousands of object instances. We propose criteria for reliable object detection and tracking for constraining the semi-supervised learning process and minimizing semantic drift. Our approach does not assume exhaustive labeling of each object instance in any single frame, or any explicit annotation of negative data. Working in such a generic setting allow us to tackle multiple object instances in video, many of which are static. In contrast, existing approaches either do not consider multiple object instances per video, or rely heavily on the motion of the objects present. The experiments demonstrate the effectiveness of our approach by evaluating the automatically labeled data on a variety of metrics like quality, coverage (recall), diversity, and relevance to training an object detector."}
{"_id": "6d4e3616d0b27957c4107ae877dc0dd4504b69ab", "title": "Shuffle and Learn: Unsupervised Learning Using Temporal Order Verification", "text": "In this paper, we present an approach for learning a visual representation from the raw spatiotemporal signals in videos. Our representation is learned without supervision from semantic labels. We formulate our method as an unsupervised sequential verification task, i.e., we determine whether a sequence of frames from a video is in the correct temporal order. With this simple task and no semantic labels, we learn a powerful visual representation using a Convolutional Neural Network (CNN). The representation contains complementary information to that learned from supervised image datasets like ImageNet. Qualitative results show that our method captures information that is temporally varying, such as human pose. When used as pre-training for action recognition, our method gives significant gains over learning without external data on benchmark datasets like UCF101 and HMDB51. To demonstrate its sensitivity to human pose, we show results for pose estimation on the FLIC and MPII datasets that are competitive, or better than approaches using significantly more supervision. Our method can be combined with supervised representations to provide an additional boost in accuracy."}
{"_id": "f226ec13e016943102eb7ebedab7cf3e9bef69b2", "title": "Mask R-CNN", "text": ""}
{"_id": "1436afbd9120cf8f7be2ba329fae0a0f6093e407", "title": "Rational Protocol Design: Cryptography against Incentive-Driven Adversaries", "text": "Existing work on \"rational cryptographic protocols\" treats each party (or coalition of parties) running the protocol as a selfish agent trying to maximize its utility. In this work we propose a fundamentally different approach that is better suited to modeling a protocol under attack from an external entity. Specifically, we consider a two-party game between an protocol designer and an external attacker. The goal of the attacker is to break security properties such as correctness or privacy, possibly by corrupting protocol participants; the goal of the protocol designer is to prevent the attacker from succeeding. We lay the theoretical groundwork for a study of cryptographic protocol design in this setting by providing a methodology for defining the problem within the traditional simulation paradigm. Our framework provides ways of reasoning about important cryptographic concepts (e.g., adaptive corruptions or attacks on communication resources) not handled by previous game-theoretic treatments of cryptography. We also prove composition theorems that-for the first time-provide a sound way to design rational protocols assuming \"ideal communication resources\" (such as broadcast or authenticated channels) and then instantiate these resources using standard cryptographic tools. Finally, we investigate the problem of secure function evaluation in our framework, where the attacker has to pay for each party it corrupts. Our results demonstrate how knowledge of the attacker's incentives can be used to circumvent known impossibility results in this setting."}
{"_id": "a4a6011cd378adb86b168f51ce4cb144207ec354", "title": "Malware Classification with Deep Convolutional Neural Networks", "text": "In this paper, we propose a deep learning framework for malware classification. There has been a huge increase in the volume of malware in recent years which poses a serious security threat to financial institutions, businesses and individuals. In order to combat the proliferation of malware, new strategies are essential to quickly identify and classify malware samples so that their behavior can be analyzed. Machine learning approaches are becoming popular for classifying malware, however, most of the existing machine learning methods for malware classification use shallow learning algorithms (e.g. SVM). Recently, Convolutional Neural Networks (CNN), a deep learning approach, have shown superior performance compared to traditional learning algorithms, especially in tasks such as image classification. Motivated by this success, we propose a CNN-based architecture to classify malware samples. We convert malware binaries to grayscale images and subsequently train a CNN for classification. Experiments on two challenging malware classification datasets, Malimg and Microsoft malware, demonstrate that our method achieves better than the state-of-the-art performance. The proposed method achieves 98.52% and 99.97% accuracy on the Malimg and Microsoft datasets respectively."}
{"_id": "7661b964be8b1fcdd2f38f3a313fe86a7e1f2f66", "title": "A Computer-Aided Diagnosis System of Nuclear Cataract", "text": "Cataracts are the leading cause of blindness worldwide, and nuclear cataract is the most common form of cataract. An algorithm for automatic diagnosis of nuclear cataract is investigated in this paper. Nuclear cataract is graded according to the severity of opacity using slit lamp lens images. Anatomical structure in the lens image is detected using a modified active shape model. On the basis of the anatomical landmark, local features are extracted according to clinical grading protocol. Support vector machine regression is employed for grade prediction. This is the first time that the nucleus region can be detected automatically in slit lamp images. The system is validated using clinical images and clinical ground truth on >5000 images. The success rate of structure detection is 95% and the average grading difference is 0.36 on a 5.0 scale. The automatic diagnosis system can improve the grading objectivity and potentially be used in clinics and population studies to save the workload of ophthalmologists."}
{"_id": "caba398dc80312ee9a3cfede11919e38ee1d3807", "title": "Data mining techniques for customer relationship management", "text": "Advancements in technology have made relationship marketing a reality in recent years. Technologies such as data warehousing, data mining, and campaign management software have made customer relationship management a new area where firms can gain a competitive advantage. Particularly through data mining\u2014the extraction of hidden predictive information from large databases\u2014organizations can identify valuable customers, predict future behaviors, and enable firms to make proactive, knowledge-driven decisions. The automated, future-oriented analyses made possible by data mining move beyond the analyses of past events typically provided by history-oriented tools such as decision support systems. Data mining tools answer business questions that in the past were too time-consuming to pursue. Yet, it is the answers to these questions make customer relationship management possible. Various techniques exist among data mining software, each with their own advantages and challenges for different types of applications. A particular dichotomy exists between neural networks and chi-square automated interaction detection (CHAID). While differing approaches abound in the realm of data mining, the use of some type of data mining is necessary to accomplish the goals of today\u2019s customer relationship management philosophy. \uf6d9 2002 Elsevier Science Ltd. All rights reserved."}
{"_id": "198e49f57131a586b3e9494b0c46960d143e6201", "title": "An ultra-wideband common gate LNA with gm-boosted and noise cancelling techniques", "text": "In this paper, an ultra-wideband (UWB) common gate low-noise amplifier (LNA) with gm-boosted and noise-cancelling techniques is presented In this scheme we utilize gm-boosted stage for cancelling the noise of matching device. The bandwidth extension and flat gain are achieved by using of series and shunt peaking techniques. Simulated in .13 um Cmos technology, the proposed LNA achieved 2.38-3.4dB NF and S11 less than -11dB in the 3.1-10.6 GHz band Maximum power gain(S21) is 11dB and -3dB bandwidth is 1.25-11.33 GHz. The power consumption of LNA is 5.8mW."}
{"_id": "75a2a2a412bca0270254d4b9bb4dae89c25fff92", "title": "SDN-based distributed mobility management for 5G networks", "text": "Software-Defined Networking (SDN) is transforming the networking ecosystem. SDN allows network operators to easily and quickly introduce new services and flexibly adapt to their requirements, while simplifying the network management to reduce the cost of operation, maintenance and deployment. On the other hand, mobility is a key aspect for the future mobile networks. In this context, Distributed Mobility Management (DMM) has been recently introduced as a new trend to overcome the limitations of the today's mobility management protocols which are highly centralized and hierarchical. Driven from the fact that DMM and SDN share the same principle in which the data and control plane are decoupled, we propose a DMM solution based on SDN architecture called S-DMM. This solution offers a lot of advantages including no need to deploy any mobility-related component at the access router, independence of the underlying technologies, and per-flow mobility support. On one hand, the numerical results prove that S-DMM is more scalable than the legacy DMM. On the other hand, the experiment results from a real implementation show that S-DMM comes at no performance penalty (in terms of handover latency and end-to-end delay) compared to legacy DMM, yet at a slightly better management cost, which makes S-DMM a promising candidate for a mobility management solution in the context of 5G networks."}
{"_id": "ca356b3d49ff88ac29abf14afbd78b241b1c4439", "title": "Categorization and Machine Learning Methods : Current State of the Art By Durga Bhavani Dasari", "text": "In this informative age, we find many documents are available in digital forms which need classification of the text. For solving this major problem present researchers focused on machine learning techniques: a general inductive process automatically builds a classifier by learning, from a set of pre classified documents, the characteristics of the categories. The main benefit of the present approach is consisting in the manual definition of a classifier by domain experts where effectiveness, less use of expert work and straightforward portability to different domains are possible. The paper examines the main approaches to text categorization comparing the machine learning paradigm and present state of the art. Various issues pertaining to three different text similarity problems, namely, semantic, conceptual and contextual are also discussed."}
{"_id": "df2f072d0bc1aa026aeef5e3fd3c4367c2de1b2c", "title": "Video shot boundary detection: Seven years of TRECVid activity", "text": "Shot boundary detection (SBD) is the process of automatically detecting the boundaries between shots in video. It is a problem which has attracted much attention since video became available in digital form as it is an essential pre-processing step to almost all video analysis, indexing, summarisation, search, and other contentbased operations. Automatic SBD was one of the tracks of activity within the annual TRECVid benchmarking exercise, each year from 2001 to 2007 inclusive. Over those seven years we have seen 57 different research groups from across the world work to determine the best approaches to SBD while using a common dataset and common scoring metrics. In this paper we present an overview of the TRECVid shot boundary detection task, a high-level overview of the most significant of the approaches taken, and a comparison of performances, focussing on one year (2005) as an example."}
{"_id": "bdfb20d112069c106a9535407d107a43ebe6517f", "title": "A model of saliency-based auditory attention to environmental sound", "text": "A computational model of auditory attention to environmental sound, inspired by the structure of the human auditory system, is presented. The model simulates how listeners switch their attention over time between different auditory streams, based on bottom-up and top-down cues. The bottom-up cues are determined by the time-dependent saliency of each stream. The latter is calculated on the basis of an auditory saliency map, which encodes the intensity and the amount of spectral and temporal irregularities of the sound, and binary spectro-temporal masks for the different streams. The top-down cues are determined by the amount of volitional focusing on particular auditory streams. A competitive winner-takes-all mechanism, which balances bottom-up and top-down attention for each stream, determines which stream is selected for entry into the working memory. Consequently, the model is able to delimit the time periods during which particular streams are paid attention to. Although the main ideas could be applied to all types of sound, the implementation of the model was targeted at environmental sound in particular. As an illustration, the model is used to reproduce results from a detailed field experiment on the perception of transportation noise. Finally, it is shown how this model could be a valuable tool, complementing auralization, in the design of outdoor soundscapes."}
{"_id": "252384bae49e1f2092e6a9553cd9a67f41134ded", "title": "On the Existence of a Spectrum of Policies that Subsumes the Least Recently Used (LRU) and Least Frequently Used (LFU) Policies", "text": "Sam H. Nohs Sang Lyul Mint t Department of Computer Engineering Seoul National University Seoul 151-742, Korea http://ssrnet.snu.ac.kr http://archi.snu.ac.kr We show that there exists a spectrum of block replacement policies that subsumes both the Least Recently Used (LRU) and the Least Frequently Used (LFU) policies. The spectrum is formed according to how much more weight we give to the recent history than to the older history, and is referred to as the LRFU (Least Recently/Frequently Used) policy. Unlike many previous policies that use limited history to make block replacement decisions, the LRFU policy uses the complete reference history of blocks recorded during their cache residency. Nevertheless, the LRFU requires only a few words for each block to maintain such history. This paper also describes an implementation of the LRFU that again subsumes the LRU and LFU implementations. The LRFU policy is applied to buffer caching, and results from trace-driven simulations show that the LRFU performs better than previously known policies for the workloads we considered. This point is reinforced by results from our integration of the LRFU into the FreeBSD operating system."}
{"_id": "4a4b5c8511f7a2a26f5847437d9fcdcf4a72c0f4", "title": "HyperENTM: Evolving Scalable Neural Turing Machines through HyperNEAT", "text": "Recent developments in memory-augmented neural networks allowed sequential problems requiring long-term memory to be solved, which were intractable for traditional neural networks. However, current approaches still struggle to scale to large memory sizes and sequence lengths. In this paper we show how access to an external memory component can be encoded geometrically through a novel HyperNEAT-based Neural Turing Machine (HyperNTM). The indirect HyperNEAT encoding allows for training on small memory vectors in a bit vector copy task and then applying the knowledge gained from such training to speed up training on larger size memory vectors. Additionally, we demonstrate that in some instances, networks trained to copy nine bit vectors can be scaled to sizes of 1,000 without further training. While the task in this paper is simple, the HyperNTM approach could now allow memory-augmented neural networks to scale to problems requiring large memory vectors and sequence lengths."}
{"_id": "13837b79abf1b29813f05888c04194c950bd2bd5", "title": "A review: Information extraction techniques from research papers", "text": "Text extraction is a crucial stage of analyzing Journal papers. Journal papers generally are in PDF format which is semi structured data. Journal papers are presented into different sections like Introduction, Methodology, Experimental setup, Result and analysis etc. so that it is easy to access information from any section as per the reader's interest. The main importance on section extraction is to find a representative subset of the data, which contains the information of the entire set. Various approaches to extract sections from research papers include stastical methods, NLP, Machine Learning etc. In this paper we present review of various extraction techniques from a PDF document."}
{"_id": "4f035a61deda62fcf94c349251f14e1c413c4902", "title": "Accidental death due to complete autoerotic asphyxia associated with transvestic fetishism and anal self-stimulation - case report.", "text": "A case is reported of a 36-year-old male, found dead in his locked room, lying on a bed, dressed in his mother's clothes, with a plastic bag over his head, hands tied and with a barrel wooden cork in his rectum. Two pornographic magazines were found on a chair near the bed, so that the deceased could see them well. Asphyxia was controlled with a complex apparatus which consisted of two elastic luggage rack straps, the first surrounding his waist, perineum, and buttocks, and the second the back of his body, and neck. According to the psychological autopsy based on a structured interview (SCID-I, SCID-II) with his father, the deceased was single, unemployed and with a part college education. He had grown up in a poor family with a reserved father and dominant mother, and was indicative of fulfilling DSM-IV diagnostic criteria for alcohol dependence, paraphilia involving hypoxyphilia with transvestic fetishism and anal masturbation and a borderline personality disorder. There was no evidence of previous psychiatric treatment. The Circumstances subscale of Beck's Suicidal Intent Scale (SIS-CS) pointed at the lack of final acts (thoughts or plans) in anticipation of death, and absence of a suicide note or overt communication of suicidal intent before death. Integration of the crime scene data with those of the forensic medicine and psychological autopsy enabled identification of the event as an accidental death, caused by neck strangulation, suffocation by a plastic bag, and vagal stimulation due to a foreign body in the rectum."}
{"_id": "6bdda860d65b41c093b42a69277d6ac898fe113d", "title": "Quadruped Robot Running With a Bounding Gait", "text": "Scout II, an autonomous four-legged robot with only one actuator per compliant leg is described. We demonstrate the need to model the actuators and the power source of the robot system carefully in order to obtain experimentally valid models for simulation and analysis. We describe a new, simple running controller that requires minimal task level feedback, yet achieves reliable and fast running up to 1.2 m/s. These results contribute to the increasing evidence that apparently complex dynamically dexterous tasks may be controlled via simple control laws. In addition, the simple mechanical design of our robot may be used as a template for the control of higher degree of freedom quadrupeds. An energetics analysis reveals a highly efficient system with a specific resistance of 0.32 when based on mechanical power dissipation and of 1.0 when based on total electrical power dissipation."}
{"_id": "9b44dc9a71584410e466b1645d588bdf56b3f83e", "title": "Virtual Engineering Factory: Creating Experience Base for Industry 4.0", "text": "In recent times, traditional manufacturing is upgrading and adopting Industry 4.0, which supports computerization of manufacturing by round-the-clock connection and communication of engineering objects. Consequently, Decisional DNAbased knowledge representation of manufacturing objects, processes, and system is achieved by virtual engineering objects (VEO), virtual engineering processes (VEP), and virtual engineering factories (VEF), respectively. In this study, assimilation of VEO-VEP-VEF concept in the Cyber-physical systembased Industry 4.0 is proposed. The planned concept is implemented on a case study. Also, Decisional DNA features such as similarity identification and phenotyping are explored for validation. It is concluded that this approach can support Industry 4.0 and can facilitate in real time critical, creative, and effective decision making."}
{"_id": "e0149f4926373368a0d88f7fb57c5549500fd83a", "title": "Linear Model Selection by Cross-Validation", "text": "JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact support@jstor.org.. American Statistical Association is collaborating with JSTOR to digitize, preserve and extend access to Journal of the American Statistical Association. We consider the problem of selecting a model having the best predictive ability among a class of linear models. The popular leave-one-out cross-validation method, which is asymptotically equivalent to many other model selection methods such as the Akaike information criterion (AIC), the Cp, and the bootstrap, is asymptotically inconsistent in the sense that the probability of selecting the model with the best predictive ability does not converge to 1 as the total number of observations n-s o. We show that the inconsistency of the leave-one-out cross-validation can be rectified by using a leave-n,-out cross-validation with nv, the number of observations reserved for validation, satisfying no/n-1 I as n s* xoo. This is a somewhat shocking discovery, because ne/n-* 1 is totally opposite to the popular leave-one-out recipe in cross-validation. Motivations, justifications, and discussions of some practical aspects of the use of the leave-n,-out cross-validation method are provided, and results from a simulation study are presented."}
{"_id": "6104e184d04b5cf19dc61e4d0f334f625d761a13", "title": "An Efficient Domain-Independent Algorithm for Detecting Approximately Duplicate Database Records", "text": "Detecting database records that are approximate duplicates, but not exact duplicates, is an important task. Databases may contain duplicate records concerning the same real-world entity because of data entry errors, because of un-standardized abbreviations, or because of diierences in the detailed schemas of records from multiple databases, among other reasons. In this paper, we present an eecient algorithm for recognizing clusters of approximately duplicate records. Three key ideas distinguish the algorithm presented. First, a version of the Smith-Waterman algorithm for computing minimum edit-distance is used as a domain-independent method to recognize pairs of approximately duplicate records. Second, the union//nd algorithm is used to keep track of clusters of duplicate records incrementally, as pairwise duplicate relationships are discovered. Third, the algorithm uses a priority queue of cluster subsets to respond adaptively to the size and homogeneity of the clusters discovered as the database is scanned. This typically reduces by over 75% the number of times that the expensive pair-wise record matching (Smith-Waterman or other) is applied, without impairing accuracy. Comprehensive experiments on synthetic databases and on a real database of bibliographic records connrm the eeectiveness of the new algorithm."}
{"_id": "8395bf87af97f9dcb0fa0292c55fe1e1a943c45f", "title": "Genetic Algorithms + Data Structures = Evolution Programs", "text": "If you are looking for Genetic Algorithms Data Structures Evolution Programs in pdf file you can find it here. This is the best place for you where you can find the genetic algorithms data structures evolution programs document. Sign up for free to get the download links. There are 3 sources of download links that you can download and save it in your desktop."}
{"_id": "27bbe6f2ee3a751191665f006a3663bef401e4c3", "title": "Topology Control of Multihop Wireless Networks Using Transmit Power Adjustment", "text": "We consider the problem of adjusting the transmit powers of nodes in a multihop wireless network (also called an ad hoc network) to create a desired topology. We formulate it as a constrained optimization problem with two constraints connectivity and biconnectivity, and one optimization objective maximum power used. We present two centralized algorithms for use in static networks, and prove their optimality. For mobile networks, we present two distributed heuristics that adaptively adjust node transmit powers in response to topological changes and attempt to maintain a connected topology using minimum power. We analyze the throughput, delay, and power consumption of our algorithms using a prototype software implementation, an emulation of a power-controllable radio, and a detailed channel model. Our results show that the performance of multihop wireless networks in practice can be substantially increased with topology control."}
{"_id": "00a03ca06a6e94f232c6413f7d0297b48b97b338", "title": "Energy-Efficient Communication Protocol for Wireless Microsensor Networks", "text": "Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multihop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adap tive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated."}
{"_id": "01c1c63e4927b3b25d2a201b6258380679e3588d", "title": "Parssec: A Parallel Simulation Environment for Complex Systems", "text": "S ystems are now being designed that can scale to very large configurations. Examples include parallel architectures with thousands of processors, wireless networks connecting tens of thousands of computers, and parallel database systems capable of processing millions of transactions a second. In some cases, systems have grown considerably larger than envisaged by their designers. Design and development costs for such systems could be significantly reduced if only there were efficient techniques for evaluating design alternatives and predicting their impact on overall system performance metrics. Due to the systems' analytical intractability, simulation is the most common performance evaluation technique for such systems. However, the long execution times needed for sequential simulation models often hamper evaluation. For instance, it can easily take weeks to simulate wireless networks with thousands of mobile nodes that are subject to the interference and signal-fading effects of a realistic environment. Also, the simulation of parallel programs typically suffers from slowdown factors of 30 to 50 per processor. 1 This means that simulation of a parallel program executing for five to 15 minutes on a 128-node multi-processor can take anywhere from a few weeks to a few months\u2014even on state-of-the-art sequential workstations. As the size of the physical system increases, the models' memory requirements can easily exceed the workstations' capacity. The limitations of sequential model execution have led to growing interest in the use of parallel execution for simulating large-scale systems. Widespread use of parallel simulation, however, has been significantly hindered by a lack of tools for integrating parallel model execution into the overall framework of system simulation. Although a number of algorithmic alternatives exist for parallel execution of discrete-event simulation models, performance analysts not expert in parallel simulation have relatively few tools giving them flexibility to experiment with multiple algorithmic or architectural alternatives for model execution. Another drawback to widespread use of simulations is the cost of model design and maintenance. The design and development costs for detailed simulation models for complex systems can easily rival the costs for the physical systems themselves. The simulation environment we developed at UCLA attempts to address some of these issues by providing these features: \u2022 An easy path for the migration of simulation models to operational software prototypes. \u2022 Implementation on both distributed-and shared-memory platforms and support for a diverse set of parallel simulation protocols. \u2022 Support for visual and hierarchical model design. Our environment consists of three primary components: \u2026"}
{"_id": "8400b45d5149411be0c8a79165c9096b2d4387cf", "title": "The volcanormal density for radar-based extended target tracking", "text": "This paper presents a novel shape model for radar-based vehicle tracking. This model is a special probability density function (pdf), designed to approximate the radar measurement spread of a vehicle more adequately than widely used models. While this pdf is used to determine the likelihood of the spatial measurements, additionally a recent Doppler measurement model is integrated in a maximum likelihood estimator. This formulation allows the estimation of a vehicle's pose, motion and axis lengths based on the radar's spatial and Doppler dimension. On this basis, an efficient computation method is derived. An evaluation of actual radar and ground truth data demonstrates the precise estimation of a vehicle's state."}
{"_id": "72f2a9e9c5feb3b0eba76b8d7e5af759e392fe77", "title": "The Effect of Data Sampling When Using Random Forest on Imbalanced Bioinformatics Data", "text": "Ensemble learning is a powerful tool that has shown promise when applied towards bioinformatics datasets. In particular, the Random Forest classifier has been an effective and popular algorithm due to its relatively good classification performance and its ease of use. However, Random Forest does not account for class imbalance which is known for decreasing classification performance and increasing bias towards the majority class. In this study, we seek to determine if the inclusion of data sampling will improve the performance of the Random Forest classifier. In order to test the effect of data sampling, we used Random Undersampling along with two post-sampling class distribution ratios: 35:65 and 50:50 (minority:majority). Additionally, we also built inductive models with Random Forest when no data sampling technique was applied, so we can observe the true effect of the data sampling. All three options were tested on a series of fifteen imbalanced bioinformatics datasets. Our results show that data sampling does improve the classification performance of Random Forest, especially when using the 50:50 post-sampling class distribution ratio. However, statistical analysis shows that the increase in performance is not statistically significant. Thus, we can state that while data sampling does improve the classification performance of Random Forest, it is not a necessary step as the classifier is fairly robust to imbalanced data on its own."}
{"_id": "f7ec4269303b4f5a4b4964a278a149a69f2a5910", "title": "Multi-Modality Cascaded Convolutional Neural Networks for Alzheimer\u2019s Disease Diagnosis", "text": "Accurate and early diagnosis of Alzheimer\u2019s disease (AD) plays important role for patient care and development of future treatment. Structural and functional neuroimages, such as magnetic resonance images (MRI) and positron emission tomography (PET), are providing powerful imaging modalities to help understand the anatomical and functional neural changes related to AD. In recent years, machine learning methods have been widely studied on analysis of multi-modality neuroimages for quantitative evaluation and computer-aided-diagnosis (CAD) of AD. Most existing methods extract the hand-craft imaging features after image preprocessing such as registration and segmentation, and then train a classifier to distinguish AD subjects from other groups. This paper proposes to construct cascaded convolutional neural networks (CNNs) to learn the multi-level and multimodal features of MRI and PET brain images for AD classification. First, multiple deep 3D-CNNs are constructed on different local image patches to transform the local brain image into more compact high-level features. Then, an upper high-level 2D-CNN followed by softmax layer is cascaded to ensemble the high-level features learned from the multi-modality and generate the latent multimodal correlation features of the corresponding image patches for classification task. Finally, these learned features are combined by a fully connected layer followed by softmax layer for AD classification. The proposed method can automatically learn the generic multi-level and multimodal features from multiple imaging modalities for classification, which are robust to the scale and rotation variations to some extent. No image segmentation and rigid registration are required in pre-processing the brain images. Our method is evaluated on the baseline MRI and PET images of 397 subjects including 93\u00a0AD patients, 204 mild cognitive impairment (MCI, 76 pMCI +128 sMCI) and 100 normal controls (NC) from Alzheimer\u2019s Disease Neuroimaging Initiative (ADNI) database. Experimental results show that the proposed method achieves an accuracy of 93.26% for classification of AD vs. NC and 82.95% for classification pMCI vs. NC, demonstrating the promising classification performance."}
{"_id": "887002d61b53926d5a210b8ad8473ce951f90de3", "title": "Smart home design based on ZigBee wireless sensor network", "text": "This work is based on the Internet of Things and ZigBee wireless sensor network technology. A kind of smart home design based on ZigBee wireless sensor network was proposed in this paper. Texas Instruments MCU device LM3S9B96, which is the ARM Cortex-M3 based controllers, was used in this system. The entire system is running on the \u03bcC/OS-II embedded real-time multitasking operating system. Users can access this system by a dynamic webpage of LwIP TCP/IP protocol stack or GSM SMS. Using this system users can conveniently know the environment parameters of home, such as temperature, humidity, meter readings, light, and control the home electronic equipments, such as light, aircondition, heater, by ZigBee wireless sensor network."}
{"_id": "95b6f5793ba8214c873b397bd3115239d915e29a", "title": "Analytics using R for predicting credit defaulters", "text": "Nowadays there are many risks related to bank loans, especially for the banks so as to reduce their capital loss. The analysis of risks and assessment of default becomes crucial thereafter. Banks hold huge volumes of customer behaviour related data from which they are unable to arrive at a judgement if an applicant can be defaulter or not. Data Mining is a promising area of data analysis which aims to extract useful knowledge from tremendous amount of complex data sets. This work aims to develop a model and construct a prototype for the same using a data set available in the UCI repository. The model is a decision tree based classification model that uses the functions available in the R Package. Prior to building the model, the dataset is pre-processed, reduced and made ready to provide efficient predictions. The final model is used for prediction with the test dataset and the experimental results prove the efficiency of the built model."}
{"_id": "6f98ed81c0bc0377402359037cf90ca77e916456", "title": "Two Methods for Semi-automatic Image Segmentation based on Fuzzy Connectedness and Watersheds", "text": "At the present time, one of the best methods for semiautomatic image segmentation seems to be the approach based on the fuzzy connectedness principle. First, we identify some deficiencies of this approach and propose a way to improve it, through the introduction of competitive learning. Second, we propose a different approach, based on watersheds. We show that the competitive fuzzy connectedness-based method outperforms the noncompetitive variant and generally (but not always) outperforms the watershed-based approach. The competitive variant of the fuzzy connectedness-based method can be a good alternative to the watersheds."}
{"_id": "951247706479cb0119ecec385aeedfd6bf6cff97", "title": "Generalized Semi-Orthogonal Multiple-Access for Massive MIMO", "text": "We propose a novel framework to enable concurrent transmission of multiple users to an access node equipped with a massive multiple-input multiple-output (mMIMO), which encompasses and extends the conventional time-division duplex (TDD) protocol and the recently proposed semi-orthogonal multiple-access (SOMA). The new solution is referred to generalized semi-orthogonal multiple-access (GSOMA). To enable GSOMA, the users are grouped such that the users within each group are assigned mutually orthogonal pilot sequences. However the pilot sequences can be reused in different groups and the users in different group are coordinated such that their pilot sequences do not interfere by a semi-orthogonal mechanism enabled by a resource-offset sequence. We describe the general framework and analyze a case of GSOMA and show that a properly designed GSOMA can outperform the conventional TDD and SOMA."}
{"_id": "a401596b7c337afefe0ea228ef9cd4908429b43a", "title": "A survey of top-k query processing techniques in relational database systems", "text": "Efficient processing of top-k queries is a crucial requirement in many interactive environments that involve massive amounts of data. In particular, efficient top-k processing in domains such as the Web, multimedia search, and distributed systems has shown a great impact on performance. In this survey, we describe and classify top-k processing techniques in relational databases. We discuss different design dimensions in the current techniques including query models, data access methods, implementation levels, data and query certainty, and supported scoring functions. We show the implications of each dimension on the design of the underlying techniques. We also discuss top-k queries in XML domain, and show their connections to relational approaches."}
{"_id": "a66addffd1c74ec98bb4f869da86e8fb42c3b6f4", "title": "Quaternion Domain $k$ -Means Clustering for Improved Real Time Classification of E-Nose Data", "text": "This paper proposes a novel clustering technique implemented in the quaternion domain for qualitative classification of E-nose data. The proposed technique is in many ways similar to the popular k-means clustering algorithm. However, computations carried out in the quaternion space have yielded better class separability and higher cluster validity. A pool of possible cluster centers was created by subjecting each initial center to a fixed rotation in the quaternion space. The test samples were then compared with each of the centers in the pool and assigned to an appropriate center using minimum Euclidean distance criterion. The evolving clusters have been evaluated periodically for their compactness and interclass separation using the Davis-Bouldin (DB) index. The set of clusters having minimum DB index was chosen as an optimal one. It was observed that using the proposed technique the inverse DB index remains significantly higher with successive iterations implying a consistent performance on the cluster validity front. Furthermore, clusters formed using quaternion algebra have been observed to have a smaller DB index. Finally, when compared with the traditional k-means algorithm, the proposed technique performed significantly better in terms of percentage classification of unlabeled samples."}
{"_id": "1850e9b7b72dfcb178b8b813cc6110dd5ee1147b", "title": "On Summarizing Large-Scale Dynamic Graphs", "text": "How can we describe a large, dynamic graph over time? Is it random? If not, what are the most apparent deviations from randomness \u2013 a dense block of actors that persists over time, or perhaps a star with many satellite nodes that appears with some fixed periodicity? In practice, these deviations indicate patterns \u2013 for example, research collaborations forming and fading away over the years. Which patterns exist in real-world dynamic graphs, and how can we find and rank their importance? These are exactly the problems we focus on. Our main contributions are (a) formulation: we show how to formalize this problem as minimizing an information theoretic encoding cost, (b) algorithm: we propose TIMECRUNCH, an effective and scalable method for finding coherent, temporal patterns in dynamic graphs and (c) practicality: we apply our method to several large, diverse real-world datasets with up to 36 million edges and introduce our auxiliary ECOVIZ framework for visualizing and interacting with dynamic graphs which have been summarized by TIMECRUNCH. We show that TIMECRUNCH is able to compress these graphs by summarizing important temporal structures and finds patterns that agree with intuition."}
{"_id": "094d1aca0d2e60322ca225b8083db6bcb48f528e", "title": "Efficient Evolutionary Algorithm for Single-Objective Bilevel Optimization", "text": "Bilevel optimization problems are a class of challenging optimization problems, which contain two levels of optimization tasks. In these problems, the optimal solutions to the lower level problem become possible feasible candidates to the upper level problem. Such a requirement makes the optimization problem difficult to solve, and has kept the researchers busy towards devising methodologies, which can efficiently handle the problem. Despite the efforts, there hardly exists any effective methodology, which is capable of handling a complex bilevel problem. In this paper, we introduce bilevel evolutionary algorithm based on quadratic approximations (BLEAQ) of optimal lower level variables with respect to the upper level variables. The approach is capable of handling bilevel problems with different kinds of complexities in relatively smaller number of function evaluations. Ideas from classical optimization have been hybridized with evolutionary methods to generate an efficient optimization algorithm for generic bilevel problems. The efficacy of the algorithm has been shown on two sets of test problems. The first set is a recently proposed SMD test set, which contains problems with controllable complexities, and the second set contains standard test problems collected from the literature. The proposed method has been evaluated against two benchmarks, and the performance gain is observed to be significant."}
{"_id": "ac07f8f77106932b9dbf17facba54350fe812283", "title": "A new method for pulse oximetry possessing inherent insensitivity to artifact", "text": "A new method for pulse oximetry is presented that possesses an inherent insensitivity to corruption by motion artifact, a primary limitation in the practical accuracy and clinical applicability of current technology. Artifact corruption of the underlying photoplethysmographic signals is reduced in real time, using an electronic processing methodology that is based upon inversion of a physical artifact model. This fundamental approach has the potential to provide uninterrupted output and superior accuracy under conditions of sustained subject motion, therefore, widening the clinical scope of this useful measurement. A new calibration technique for oxygen saturation is developed for use with these processed signals, which is shown to be a generalization of the classical Interpretation. The detailed theoretical and practical issues of Implementation are then explored, highlighting important engineering simplifications implicit in this new approach. A quantitative investigation of the degree of insensitivity to artifact is also undertaken, with the aid of a custom electronic system and commercial pulse oximeter probes, which is compared and contrasted with the performance of a conventional implementation. It is demonstrated that this new methodology results in a reduced sensitivity to common classes of motion artifact, while retaining the generality to be combined with conventional signal processing techniques."}
{"_id": "a4fdfac66b4c914acdc2f69c513ab6a93ebcca5e", "title": "Intrinsic Motivation in Museums : Why Does One Want to Learn ?", "text": "One often meets successful adults, professionals, or scientists who recall that their lifelong vocational interest was first sparked by a visit to a museum. In these accounts the encounter with a real, concrete object from a different world an exotic animal, a strange dress, a beautiful artifact is the kernel from which a n entire career of learning grew For others with an already developed curiosity about some field such as zoology, anthropology, or art, the museum provided an essential link in the cultivation of knowledge a place where information lo'it its abstractness and became concrete. In either case, many people ascribe powerful motivation to a museum visit, claiming that their desire to learn more about some aspect of the world was directly caused by it. Granted that these accounts of \"crystallizing experiences\" (Waiters and Gaidner 1986) attributed to museums might often be embellished and exaggerated in retrospect, i t would be rash to dismiss them entirely, for the fascination of museums seems to be a very real psychological phenomenon. The question rather becomes. How do museums motivate viewers to learn? Is there a unique, sui generis \"museum experience\" that helps viewers start on the long journey of learning? How do museums present information in a meaningful way, a way that deepens a person's experience and promotes further learning? To begin answering these questions, it will be useful to review what we know about human motivation in relation to learning. Children are born with a desire for knowledge, and some of the most stupendous feats of learning to walk, talk, get along with others, to take care of oneself are accomplished without seeming effort in the first few years of life. It would be difficult to see how a species as dependent on learning as we are could have"}
{"_id": "1e3b2878abf6472c6f8794cd736378719ba892cc", "title": "The Language of Deceivers: Linguistic Features of Crowdfunding Scams", "text": "Crowdfunding sites with recent explosive growth are equally attractive platforms for swindlers or scammers. Though the growing number of articles on crowdfunding scams indicate that the fraud threats are accelerating, there has been little knowledge on the scamming practices and patterns. The key contribution of this research is to discover the hidden clues in the text by exploring linguistic features to distinguish scam campaigns from non-scams. Our results indicate that by providing less information and writing more carefully (and less informally), scammers deliberately try to deceive people; (i) they use less number of words, verbs, and sentences in their campaign pages. (ii) scammers make less typographical errors, 4.5-4.7 times lower than non-scammers.(iii) Expressivity of scams is 2.6-8.5 times lower as well."}
{"_id": "9d6e41e9f1552df245d4b4441fbcbb925a6ca653", "title": "Distributed-system Features and Challenges Shiviz Is a New Distributed System Debugging Visualization Tool Debugging Distributed Systems", "text": "D istributed systems pose unique challenges for software developers. Reasoning about concurrent activities of system nodes and even understanding the system\u2019s communication topology can be difficult. A standard approach to gaining insight into system activity is to analyze system logs. Unfortunately, this can be a tedious and complex process. This article looks at several key features and debugging challenges that differentiate distributed systems from other kinds of software. The article presents several promising tools and ongoing research to help resolve these challenges."}
{"_id": "8057fff9b6ea8725a890fa555cbc5beb76ebc7d9", "title": "Wireless Industrial Monitoring and Control Networks: The Journey So Far and the Road Ahead", "text": "While traditional wired communication technologies have played a crucial role in industrial monitoring and control networks over the past few decades, they are increasingly proving to be inadequate to meet the highly dynamic and stringent demands of today\u2019s industrial applications, primarily due to the very rigid nature of wired infrastructures. Wireless technology, however, through its increased pervasiveness, has the potential to revolutionize the industry, not only by mitigating the problems faced by wired solutions, but also by introducing a completely new class of applications. While present day wireless technologies made some preliminary inroads in the monitoring domain, they still have severe limitations especially when real-time, reliable distributed control operations are concerned. This article provides the reader with an overview of existing wireless technologies commonly used in the monitoring and control industry. It highlights the pros and cons of each technology and assesses the degree to which each technology is able to meet the stringent demands of industrial monitoring and control networks. Additionally, it summarizes mechanisms proposed by academia, especially serving critical applications by addressing the real-time and reliability requirements of industrial process automation. The article also describes certain key research problems from the physical layer communication for sensor networks and the wireless networking perspective that have yet to be addressed to allow the successful use of wireless technologies in industrial monitoring and control networks."}
{"_id": "6d3812e649db0ff9c00168e7532e844067a89cfa", "title": "Trend Analysis in Social Networking using Opinion Mining A Survey", "text": "The popularity of social media in recent years has increased a lot and created new opportunities to study the interactions of different groups of people. Because of this it comes to possible to study the user\u201fs opinion. Two popular topics in the study of social networks are community detection and finding trends. Trend Analysis is an ongoing field of research in Data mining field, May referred as an \u201cOpinion Mining\u201d by some researchers. Trend Analysis is the computational treatment of opinions, sentiments and subjectivity of text. The related field to Trend Analysis contains finding emotions in a statement which attracted researchers recently are discussed. The main target of this survey is to give nearly full image of Trend Analysis along with the community detection and their related fields with brief details. The main contributions of this paper include the overall analysis to bridge the concept between trend analyses with community detection. To find out the anomaly detection in a community with using the concept of trend analysis is a difficult task."}
{"_id": "de92fb0f6fef3cb1aa24e4f78f70ad7cba39e300", "title": "Emergency department management of vaginal bleeding in the nonpregnant patient.", "text": "Abnormal uterine bleeding is the most common reason women seek gynecologic care, and many of these women present to an emergency department for evaluation. It is essential that emergency clinicians have a thorough understanding of the underlying physiology of the menstrual cycle to appropriately manage a nonpregnant woman with abnormal bleeding. Evidence to guide the management of nonpregnant patients with abnormal bleeding is limited, and recommendations are based mostly on expert opinion. This issue reviews common causes of abnormal bleeding, including anovulatory, ovulatory, and structural causes in both stable and unstable patients. The approach to abnormal bleeding in the prepubertal girl is also discussed. Emergency clinicians are encouraged to initiate treatment to temporize an acute bleeding episode until timely follow-up with a gynecologist can be obtained."}
{"_id": "e5bff81141dd62e4b3eb915f5191305f7a99f7b3", "title": "Three-dimensional tracking of axonal projections in the brain by magnetic resonance imaging.", "text": "The relationship between brain structure and complex behavior is governed by large-scale neurocognitive networks. The availability of a noninvasive technique that can visualize the neuronal projections connecting the functional centers should therefore provide new keys to the understanding of brain function. By using high-resolution three-dimensional diffusion magnetic resonance imaging and a newly designed tracking approach, we show that neuronal pathways in the rat brain can be probed in situ. The results are validated through comparison with known anatomical locations of such fibers."}
{"_id": "482261e9dbaa0a4d7f4d532e5063ffdf575e2c5f", "title": "An aspect representation for object manipulation based on convolutional neural networks", "text": "We propose an intelligent visuomotor system that interacts with the environment and memorizes the consequences of actions. As more memories are recorded and more interactions are observed, the agent becomes more capable of predicting the consequences of actions and is, thus, better at planning sequences of actions to solve tasks. In previous work, we introduced the aspect transition graph (ATG) which represents how actions lead from one observation to another using a directed multi-graph. In this work, we propose a novel aspect representation based on hierarchical CNN features, learned with convolutional neural networks, that supports manipulation and captures the essential affordances of an object based on RGB-D images. In a traditional planning system, robots are given a pre-defined set of actions that take the robot from one symbolic state to another. However symbolic states often lack the flexibility to generalize across similar situations. Our proposed representation is grounded in the robot's observations and lies in a continuous space that allows the robot to handle similar unseen situations. The hierarchical CNN features within a representation also allow the robot to act precisely with respect to the spatial location of individual features. We evaluate the robustness of this representation using the Washington RGB-D Objects Dataset and show that it achieves state of the art results for instance pose estimation. We then test this representation in conjunction with an ATG on a drill grasping task on Robonaut-2. We show that given grasp, drag, and turn demonstrations on the drill, the robot is capable of planning sequences of learned actions to compensate for reachability constraints."}
{"_id": "9aa9b61b91817ba1bf069b6d10d171d3596c19e5", "title": "Aspect Extraction Performance with POS Tag Pattern of Dependency Relation in Aspect-based Sentiment Analysis", "text": "The most important task in aspect-based sentiment analysis (ABSA) is the aspect and sentiment word extraction. It is a challenge to identify and extract each aspect and it specific associated sentiment word correctly in the review sentence that consists of multiple aspects with various polarities expressed for multiple sentiments. By exploiting the dependency relation between words in a review, the multiple aspects and its corresponding sentiment can be identified. However, not all types of dependency relation patterns are able to extract candidate aspect and sentiment word pairs. In this paper, a preliminary study was performed on the performance of different type of dependency relation with different POS tag patterns in pre-extracting candidate aspect from customer review. The result contributes to the identification of the specific type dependency relation with it POS tag pattern that lead to high aspect extraction performance. The combination of these dependency relations offers a solution for single aspect single sentiment and multi aspect multi sentiment cases."}
{"_id": "70f4b0cbf251592f3be651b748a2d514bad67109", "title": "Diversifying Web Service Recommendation Results via Exploring Service Usage History", "text": "The last decade has witnessed a tremendous growth of web services as a major technology for sharing data, computing resources, and programs on the web. With the increasing adoption and presence of web services, design of novel approaches for effective web service recommendation to satisfy users' potential requirements has become of paramount importance. Existing web service recommendation approaches mainly focus on predicting missing QoS values of web service candidates which are interesting to a user using collaborative filtering approach, content-based approach, or their hybrid. These recommendation approaches assume that recommended web services are independent to each other, which sometimes may not be true. As a result, many similar or redundant web services may exist in a recommendation list. In this paper, we propose a novel web service recommendation approach incorporating a user's potential QoS preferences and diversity feature of user interests on web services. User's interests and QoS preferences on web services are first mined by exploring the web service usage history. Then we compute scores of web service candidates by measuring their relevance with historical and potential user interests, and their QoS utility. We also construct a web service graph based on the functional similarity between web services. Finally, we present an innovative diversity-aware web service ranking algorithm to rank the web service candidates based on their scores, and diversity degrees derived from the web service graph. Extensive experiments are conducted based on a real world web service dataset, indicating that our proposed web service recommendation approach significantly improves the quality of the recommendation results compared with existing methods."}
{"_id": "769626642fbb888da16f0371bb6011280a5d53fb", "title": "Exploring Social-Historical Ties on Location-Based Social Networks", "text": "Location-based social networks (LBSNs) have become a popular form of social media in recent years. They provide location related services that allow users to \u201ccheck-in\u201d at geographical locations and share such experiences with their friends. Millions of \u201ccheck-in\u201d records in LBSNs contain rich information of social and geographical context and provide a unique opportunity for researchers to study user\u2019s social behavior from a spatial-temporal aspect, which in turn enables a variety of services including place advertisement, traffic forecasting, and disaster relief. In this paper, we propose a social-historical model to explore user\u2019s check-in behavior on LBSNs. Our model integrates the social and historical effects and assesses the role of social correlation in user\u2019s check-in behavior. In particular, our model captures the property of user\u2019s check-in history in forms of power-law distribution and short-term effect, and helps in explaining user\u2019s check-in behavior. The experimental results on a real world LBSN demonstrate that our approach properly models user\u2019s checkins and shows how social and historical ties can help location prediction."}
{"_id": "7770c8e1b66cef14d34b6497839c3598731cb140", "title": "Mobile marketing: the role of permission and acceptance", "text": "The escalation and convergence of distributed networks and wireless telecommunications has created a tremendous potential platform for providing business services. In consumer markets, mobile marketing is expected to be a key growth area. The immediacy, interactivity and mobility of wireless devices provide a novel platform for marketing. The personal and ubiquitous nature of devices means that interactivity can be provided anytime and anywhere. However, as experience has shown, it is important to keep the consumer in mind. Mobile marketing permission and acceptance are core issues that marketers have yet to fully explain or resolve. This paper provides direction in this area. After briefly discussing some background on mobile marketing, the paper conceptualises key characteristics for mobile marketing permission and acceptance. The paper concludes with predictions on the future of mobile marketing and some core areas of further research."}
{"_id": "ccfef606b58af279018d5f40bb38f623422c1536", "title": "Understanding mobility based on GPS data", "text": "Both recognizing human behavior and understanding a user's mobility from sensor data are critical issues in ubiquitous computing systems. As a kind of user behavior, the transportation modes, such as walking, driving, etc., that a user takes, can enrich the user's mobility with informative knowledge and provide pervasive computing systems with more context information. In this paper, we propose an approach based on supervised learning to infer people's motion modes from their GPS logs. The contribution of this work lies in the following two aspects. On one hand, we identify a set of sophisticated features, which are more robust to traffic condition than those other researchers ever used. On the other hand, we propose a graph-based post-processing algorithm to further improve the inference performance. This algorithm considers both the commonsense constraint of real world and typical user behavior based on location in a probabilistic manner. Using the GPS logs collected by 65 people over a period of 10 months, we evaluated our approach via a set of experiments. As a result, based on the change point-based segmentation method and Decision Tree-based inference model, the new features brought an eight percent improvement in inference accuracy over previous result, and the graph-based post-processing achieve a further four percent enhancement."}
{"_id": "e31cc8270a609dec0afd08f83f5dc6b156ab5c42", "title": "Compiler-Directed Lightweight Checkpointing for Fine-Grained Guaranteed Soft Error Recovery", "text": "This paper presents Bolt, a compiler-directed soft error recovery scheme, that provides fine-grained and guaranteed recovery without excessive performance and hardware overhead. To get rid of expensive hardware support, the compiler protects the architectural inputs during their entire liveness period by safely checkpointing the last updated value in idempotent regions. To minimize the performance overhead, Bolt leverages a novel compiler analysis that eliminates those checkpoints whose value can be reconstructed by other checkpointed values without compromising the recovery guarantee. As a result, Bolt incurs only 4.7% performance overhead on average which is 57% reduction compared to the state-of-the-art scheme that requires expensive hardware support for the same recovery guarantee as Bolt."}
{"_id": "c666a47e4fcd55f07281c39153abd40e7d3c20e8", "title": "A ternary content-addressable memory (TCAM) based on 4T static storage and including a current-race sensing scheme", "text": "A 256 144-bit TCAM is designed in 0.18m CMOS. The proposed TCAM cell uses 4T static storage for increased density. The proposed match-line (ML) sense scheme reduces power consumption by minimizing switching activity of search-lines and limiting voltage swing of MLs. The scheme achieves a match-time of 3 ns and operates at a minimum supply voltage of 1.2 V."}
{"_id": "a3d3f566bb2a9020f51ba2e5a561d43c20f2fa3e", "title": "The Neural Substrate of Human Empathy: Effects of Perspective-taking and Cognitive Appraisal", "text": "Whether observation of distress in others leads to empathic concern and altruistic motivation, or to personal distress and egoistic motivation, seems to depend upon the capacity for self-other differentiation and cognitive appraisal. In this experiment, behavioral measures and event-related functional magnetic resonance imaging were used to investigate the effects of perspective-taking and cognitive appraisal while participants observed the facial expression of pain resulting from medical treatment. Video clips showing the faces of patients were presented either with the instruction to imagine the feelings of the patient (imagine other) or to imagine oneself to be in the patient's situation (imagine self). Cognitive appraisal was manipulated by providing information that the medical treatment had or had not been successful. Behavioral measures demonstrated that perspective-taking and treatment effectiveness instructions affected participants' affective responses to the observed pain. Hemodynamic changes were detected in the insular cortices, anterior medial cingulate cortex (aMCC), amygdala, and in visual areas including the fusiform gyrus. Graded responses related to the perspective-taking instructions were observed in middle insula, aMCC, medial and lateral premotor areas, and selectively in left and right parietal cortices. Treatment effectiveness resulted in signal changes in the perigenual anterior cingulate cortex, in the ventromedial orbito-frontal cortex, in the right lateral middle frontal gyrus, and in the cerebellum. These findings support the view that humans' responses to the pain of others can be modulated by cognitive and motivational processes, which influence whether observing a conspecific in need of help will result in empathic concern, an important instigator for helping behavior."}
{"_id": "384ce792cf2b2afbe001f2168bfe7d5e7804c736", "title": "Tiny ImageNet Visual Recognition Challenge", "text": "In this work, we investigate the effect of convolutional network depth, receptive field size, dropout layers, rectified activation unit type and dataset noise on its accuracy in Tiny-ImageNet Challenge settings. In order to make a thorough evaluation of the cause of the peformance improvement, we start with a basic 5 layer model with 5\u00d75 convolutional receptive fields. We keep increasing network depth or reducing receptive field size, and continue applying modern techniques, such as PReLu and dropout, to the model. Our model achieves excellent performance even compared to state-of-the-art results, with 0.444 final error rate on the test set."}
{"_id": "3f7b48306ac4573c867d28ebf48465deed2ab016", "title": "Immersive VR for Scientific Visualization: A Progress Report", "text": "potential to be a powerful tool for the visualization of burgeoning scientific data sets and models. In this article we sketch a research agenda for the hardware and software technology underlying IVR for scientific visualization. In contrast to Brooks\u2019 excellent survey last year, which reported on the state of IVR and provided concrete examples of its production use, this article is somewhat speculative. We don\u2019t present solutions but rather a progress report, a hope, and a call to action, to help scientists cope with a major crisis that threatens to impede their progress. Brooks\u2019 examples show that the technology has only recently started to mature\u2014in his words, it \u201cbarely works.\u201d IVR is used for walkthroughs of buildings and other structures, virtual prototyping (vehicles such as cars, tractors, and airplanes), medical applications (surgical visualization, planning, and training), \u201cexperiences\u201d applied as clinical therapy (reliving Vietnam experiences to treat post-traumatic stress disorder, treating agoraphobia), and entertainment. Building on Brooks\u2019 work, here we concentrate on why scientific visualization is also a good application area for IVR. First we\u2019ll briefly review scientific visualization as a means of understanding models and data, then discuss the problem of exploding data set size, both from sensors and from simulation runs, and the consequent demand for new approaches. We see IVR as part of the solution: as a richer visualization and interaction environment, it can potentially enhance the scientist\u2019s ability to manipulate the levels of abstraction necessary for multi-terabyte and petabyte data sets and to formulate hypotheses to guide very long simulation runs. In short, IVR has the potential to facilitate a more balanced human-computer partnership that maximizes bandwidth to the brain by more fully engaging the human sensorium. We argue that IVR remains in a primitive state of development and is, in the case of CAVEs and tiled projection displays, very expensive and therefore not in routine use. (We use the term cave to denote both the original CAVE developed at the University of Illinois\u2019 Electronic Visualization Laboratory and CAVE-style derivatives.) Evolving hardware and software technology may, however, enable IVR to become as ubiquitous as 3D graphics workstations\u2014once exotic and very expensive\u2014are today. Finally, we describe a research agenda, first for the technologies that enable IVR and then for the use of IVR for scientific visualization. Punctuating the discussion are sidebars giving examples of scientific IVR work currently under way at Brown University that addresses some of the research challenges, as well as other sidebars on data set size growth and IVR interaction metaphors."}
{"_id": "62363da5c2e84a06cf9a21ca5977a4dd18bfbe9a", "title": "Establishment of tumor-specific copy number alterations from plasma DNA of patients with cancer", "text": "With the increasing number of available predictive biomarkers, clinical management of cancer is becoming increasingly reliant on the accurate serial monitoring of tumor genotypes. We tested whether tumor-specific copy number changes can be inferred from the peripheral blood of patients with cancer. To this end, we determined the plasma DNA size distribution and the fraction of mutated plasma DNA fragments with deep sequencing and an ultrasensitive mutation-detection method, i.e., the Beads, Emulsion, Amplification, and Magnetics (BEAMing) assay. When analyzing the plasma DNA of 32 patients with Stage IV colorectal carcinoma, we found that a subset of the patients (34.4%) had a biphasic size distribution of plasma DNA fragments that was associated with increased circulating tumor cell numbers and elevated concentration of mutated plasma DNA fragments. In these cases, we were able to establish genome-wide tumor-specific copy number alterations directly from plasma DNA. Thus, we could analyze the current copy number status of the tumor genome, which was in some cases many years after diagnosis of the primary tumor. An unexpected finding was that not all patients with progressive metastatic disease appear to release tumor DNA into the circulation in measurable quantities. When we analyzed plasma DNA from 35 patients with metastatic breast cancer, we made similar observations suggesting that our approach may be applicable to a variety of tumor entities. This is the first description of such a biphasic distribution in a surprisingly high proportion of cancer patients which may have important implications for tumor diagnosis and monitoring."}
{"_id": "4586eae7af09bf653cee78d71ba25711ec3b3e5a", "title": "SEISA: set expansion by iterative similarity aggregation", "text": "In this paper, we study the problem of expanding a set of given seed entities into a more complete set by discovering other entities that also belong to the same concept set. A typical example is to use \"Canon\" and \"Nikon\" as seed entities, and derive other entities (e.g., \"Olympus\") in the same concept set of camera brands. In order to discover such relevant entities, we exploit several web data sources, including lists extracted from web pages and user queries from a web search engine. While these web data are highly diverse with rich information that usually cover a wide range of the domains of interest, they tend to be very noisy. We observe that previously proposed random walk based approaches do not perform very well on these noisy data sources. Accordingly, we propose a new general framework based on iterative similarity aggregation, and present detailed experimental results to show that, when using general-purpose web data for set expansion, our approach outperforms previous techniques in terms of both precision and recall."}
{"_id": "9380085172e541d617a823729cc6fdd168f02c8f", "title": "A simple intrusion detection method for controller area network", "text": "The Controller Area Network (CAN) is established as the main communication channel inside vehicles. CAN relies on frame broadcast to share data between different microcontrollers managing critical or comfort functions such as cruise control or air conditioning. CAN is distinguished by its simplicity, its real-time application compatibility and its low deployment cost. However, CAN major drawback is its lack of security support. That is, CAN fails to provide protections against attacks such as intrusion, denial of service or impersonation. We propose in this work a simple intrusion detection method for CAN. Our main idea is to make each microcontroller monitor the CAN bus to detect malicious frames."}
{"_id": "84ab4a5f4f1265e3ef61db90e17a6b71fff597a5", "title": "One Million Sense-Tagged Instances for Word Sense Disambiguation and Induction", "text": "Supervised word sense disambiguation (WSD) systems are usually the best performing systems when evaluated on standard benchmarks. However, these systems need annotated training data to function properly. While there are some publicly available open source WSD systems, very few large annotated datasets are available to the research community. The two main goals of this paper are to extract and annotate a large number of samples and release them for public use, and also to evaluate this dataset against some word sense disambiguation and induction tasks. We show that the open source IMS WSD system trained on our dataset achieves stateof-the-art results in standard disambiguation tasks and a recent word sense induction task, outperforming several task submissions and strong baselines."}
{"_id": "16f0fb8717df492c745cd4d5d799c89dc2b05227", "title": "Affective Man-Machine Interface: Unveiling Human Emotions through Biosignals", "text": "As is known for centuries, humans exhibit an electrical profile. This profile is altered through various psychological and physiological processes, which can be measured through biosignals; e.g., electromyography (EMG) and electrodermal activity (EDA). These biosignals can reveal our emotions and, as such, can serve as an advanced man-machine interface (MMI) for empathic consumer products. However, such a MMI requires the correct classification of biosignals to emotion classes. This chapter starts with an introduction on biosignals for emotion detection. Next, a state-of-the-art review is presented on automatic emotion classification. Moreover, guidelines are presented for affective MMI. Subsequently, a research is presented that explores the use of EDA and three facial EMG signals to determine neutral, positive, negative, and mixed emotions, using recordings of 21 people. A range of techniques is tested, which resulted in a generic framework for automated emotion classification with up to 61.31% correct classification of the four emotion classes, without the need of personal profiles. Among various other directives for future research, the results emphasize the need for parallel processing of multiple biosignals. That men are machines (whatever else they may be) has long been suspected; but not till our generation have men fairly felt in concrete just what wonderful psycho-neuro-physical mechanisms they are. William James (1893; 1842 \u2013 1910) A. Fred, J. Filipe, and H. Gamboa (Eds.): BIOSTEC 2009, CCIS 52, pp. 21\u201347, 2010. c \u00a9 Springer-Verlag Berlin Heidelberg 2010 22 E.L. van den Broek et al."}
{"_id": "5ab730d9060572c7b9f940be07ee26581bb4c73b", "title": "Demand forecasting with high dimensional data: The case of SKU retail sales forecasting with intra- and inter-category promotional information", "text": "In marketing analytics applications in OR, the modeler often faces the problem of selecting key variables from a large number of possibilities. For example, SKU level retail store sales are affected by inter and intra category effects which potentially need to be considered when deciding on promotional strategy and producing operational forecasts, but no research has put this well accepted concept into forecasting practice: an obvious obstacle is the ultra-high dimensionality of the variable space. This paper develops a four steps methodological framework to overcome the problem. It is illustrated by investigating the value of both intraand inter-category SKU level promotional information in improving forecast accuracy. The method consists of the identification of potentially influential categories, the building of the explanatory variable space, variable selection and model estimation by a multistage LASSO regression, and the use of a rolling scheme to generate forecasts. The success of this new method for dealing with high dimensionality is demonstrated by improvements in forecasting accuracy compared to alternative methods of simplifying the variable space. The empirical results show that models integrating more information perform significantly better than the baseline model when using the proposed methodology framework. In general, we can improve the forecasting accuracy by 14.3 percent over the model using only the SKU\u2019s own predictors. But of the improvements achieved, 88.1 percent of it comes from the intra-category information, and only 11.9 percent from the inter-category information. The substantive marketing results also have implications for promotional category management."}
{"_id": "85654484e5a420b1af9024147c3df55fd4c8202e", "title": "Live-fly, large-scale field experimentation for large numbers of fixed-wing UAVs", "text": "In this paper, we present extensive advances in live-fly field experimentation capabilities of large numbers of fixed-wing aerial robots, and highlight both the enabling technologies as well as the challenges addressed in such largescale flight operations. We showcase results from recent field tests, including the autonomous launch, flight, and landing of 50 UAVs, which illuminate numerous operational lessons learned and generate rich multi-UAV datasets. We detail the design and open architecture of the testbed, which intentionally leverages low-cost and open-source components, aimed at promoting continued advances and alignment of multi-robot systems research and practice."}
{"_id": "8555edf9dd7de6caf36873adc2a77508bc45ac8a", "title": "Model-based dynamic gait generation for a leg-wheel transformable robot", "text": "We report on the model-based approach to dynamic trotting and pronking gait generation for a leg-wheel transformable robot. The rolling spring-loaded inverted pendulum (R-SLIP) model served as the template for robot locomotion by programming the robot's motion according to the stable fixed-point trajectories of the model. Two strategies are developed to match the robot leg motions to the virtual model leg: First, using the two active degrees of freedom on each leg to simulate the passive spring effect of the R-SLIP model leg. Second, installing a torsion spring on the leg-wheel module to render the leg-wheel morphology identical to that of the model leg. This model-based approach to dynamic behavior generation for the robot is experimentally evaluated. The robot can successfully generate an R-SLIP-like stable pronking gait with a flight phase."}
{"_id": "6d3a038660e2961e1f87fe7d5ae2dd56dee87fee", "title": "MobiCloud: A geo-distributed mobile cloud computing platform", "text": "In a cloud computing environment, users prefer to migrate their locally processing workloads onto the cloud where more resources with better performance can be expected. ProtoGENI [1] and PlanetLab [17] have further improved the current Internet-based resource outsourcing by allowing end users to construct a virtual network system through virtualization and programmable networking technologies. However, to the best of our knowledge, there is no such general service or resource provisioning platform designated for mobile devices. In this paper, we present a new design and implementation of MobiCloud that is a geo-distributed mobile cloud computing platform. The discussions of the system components, infrastructure, management, implementation flow, and service scenarios are followed by an example on how to experience the MobiCloud system."}
{"_id": "5dd865deb652c8b07bac3ce5cedcb17ba9103abf", "title": "Toward Enabling Automated Cognition and Decision-Making in Complex Cyber-Physical Systems", "text": "This article presents a framework for empowering automated cognition and decision making in complex cyber-physical systems (CPS). The key idea is for each cyber-physical component in the system to be able to construct from observations and actions a reduced model of its behavior and respond to external stimuli in the form of a (timed) probabilistic automaton such as a semi-Markov decision process. The reduced order behavioral models of various cyber-physical components can then be integrated into an optimization and decision-making module to determine the best actions for the components under different operating scenarios and cost / payoff functions by solving a bounded-rationality global decision-making problem.1"}
{"_id": "9b2c2e87b90bf1cf7f41d796b2298204a10050a8", "title": "Gossip-based Causal Order Delivery Protocol Respecting Deadline Constraints in Publish/Subscribe Systems", "text": "Publish/subscribe systems based on gossip protocols are elastically to scale in and out and provides suitable consistency guarantees for data safety and high availability but, does not deal with end-to-end message delay and message order-based consistency. Especially in real-time collaborative applications, it is possible for the messages to take each a different time to arrive at end users. So, these applications should be based on P/S infrastructure including dealing with message deadlines and message ordering consistencies. Gossip communication is becoming one of the promising solutions for addressing P/S scalability problems in providing information propagation functionality by exploiting a mixture of diverse consistency options. In this paper, we present a new causal order protocol based on scalable P/S architecture for real-time collaborative applications in social web platforms to guarantee causally ordered message delivery, respecting deadline-constraints from brokers to subscribers. In the proposed protocol, every broker manages a 2-dimensional vector, representing its knowledge of the last message sent by each broker at a certain time. But, every broker disseminates a multicast message with a 1-dimensional vector, the time-stamped information that represents the maximum number of gossip rounds to subscribers because all messages disseminated by brokers have the same deadline as the maximum number of gossip rounds. Therefore, the proposed protocol for P/S based on gossiping results in very low communication overhead from brokers to subscribers in the context of respecting deadline-constraints."}
{"_id": "93fd07093b896a59bc98c7a1eade732b71196189", "title": "Cotton Leaf Spot Diseases Detection Utilizing Feature Selection with Skew Divergence Method", "text": "This research work exposes the novel approach of analysis at existing works based on machine vision system for the identification of the visual symptoms of Cotton crop diseases, from RGB images. Diseases regions of cotton crops are revealed in digital pictures, Which were amended and segmented. In this work Proposed Enhanced PSO feature selection method adopts Skew divergence method and user features like Edge,Color,Texture variances to extract the features. Set of features was extracted from each of them. The extracted feature was input to the SVM, Back propagation neural network (BPN),Fuzzy with Edge CYMK color feature and GA feature selection. Tests were performed to identify the best classification model. It has been hypothesized that from the given characteristics of the images, there should be a subset of features more informative of the image domain.To test this hypothesis, three classification models were assessed via crossvalidation. To Evaluate its efficiency of six types of diseases have been accurately classified like Bacterial Blight, Fusariumwilt, Leaf Blight, Root rot, Micro Nurient, Verticilium Wilt.The Experimental results obtained show that the robust feature vector set which is an Enhancement of a feature extraction method (EPSO) has been afforded the performance assessment of this system. Keywords\u2014 SVM, BPN, Fuzzy, CMYK and Edge Features, Genetic Algorithm, Cotton leaf data sets., Enhance Particle swarm optimization, Skew divergences features. I. Introduct ion Data mining is the process of extracting patterns from the data and to transform raw data into information.. It is commonly utilized in a broad range of profiling practices, such as marketing, surveillance, fraud detection, agriculture environment and scientific discovery. Among other things, the most prominent features of data mining techniques are clustering and predictive. The benefits of data mining are its capability to gain deeper perceptive of the patterns using current available exposure capabilities . In this work categorization of the cotton leaf diseases are done using data mining with image processing techniques. The main target is to identify the disease in the leaf spot of the cotton crops. In this regard, It is discussed that about 80 to 90 percentage disease on the Cotton crops are on its leaf spot [1]. Consequently areas of interest is that identifying the leaf of the cotton tree rather than whole cotton plant the cotton leaf is mostly suffered from diseases like fungus, virus, Foliar leaf spot of cotton, Alternaria leaf spot of cotton. The machine visualization system right now is usually contained on computer, digital camera and application software. Various kinds of algorithms are incorporated in the application software. Image processing analysis is one significant method that helps to segment the image into objects and background. One of the key steps in image analysis is feature detection.Plant disease reduces the production and quality of food, fiber and biofuel crops. 
Image processing has fascinated many researchers in the area of pattern recognition and technique combined with data mining are applied to trace out the leaf spots from the of plant leaves, which helps in diagnosing the cotton leaf spot diseases accurately The image processing techniques are extensively applied to agricultural science, and it has great perspective, especially in the plant protection field, which ultimately leads to crop management. Image analysis can be applied for the following purposes: 1. To detect diseased leaf, stem, fruit, root 2. To enumerate affected area by disease. 3. To find the boundaries of the affected area. A moreover leaf spot is a major component of the crops [2]. The diseases can be easily recognized with the help of the polluted vicinity of the crop. Usually the leaves will naturally going to focus the impure part in a clear way which can be easily identified. Generally by naked eyes we can easily identify the infected region. So we can say the transform in the crop color is the imperative feature for the notification. When the health of the crop is in good stage then the color of the crop is dissimilar but as soon as the crop is going to affected by some harming pathogens, the color transforms automatically. Crop diseases have turned into a dilemma because it may cause a diminution in productivity [3]. In this research work described that goal of identifying foliar diseases in cotton plantations. The primary goal of the developed system has been to identify the existence of pathogens in cotton fillers. Once a disease is identified it has to be automatically classified through further processing of the corresponding image. A survey conducted in one of south zone particular Tamil Nadu at Andhiyur district. During the investigation congregate from the farmers' side gathered the suggestion about the cotton crop diseases details. In this work organized about the Section 2. Literature survey, Section 3. Motivation, Problem Analysis, Section 4. Material and Methods, Section 5. Comparative studies, Section 6. Result and Discussion, Section 7. Conclusion and Acknowledgment, References. II. Literature Survey The proposed work describes about the diagnosis of cotton leaves using various approaches suggesting that the various implementation ways as illustrated and discussed below. Hui Li et al., in the year 2011 has been implemented the Web-Based Intelligent Diagnosis System for Cotton Diseases Control system the author proposed a BP neural network for his system. International Journal of Scientific Engineering and Technology (ISSN : 2277-1581) Volume No.3, Issue No.1 pp : 22-30 1 Jan 2014 IJSET@2014 Page 23 A research scheme was designed for the system test, in which 80 samples, including 8 main species of diseases, 10 samples in each sort were included. The result showed the rate of correctness that system could identify the symptom was 89.5% in average, and the average running time for a diagnosis was 900ms [4].Yan Cheng Zhang et al., ( 2007 ) proposed fuzzy feature selection approach fuzzy curves (FC) and fuzzy surfaces (FS) \u2013 to select features and classification of cotton disease levels. [5]. Syed A. Health et al., In this research work discussed about the automated system that can identify the pest and disease affecting parts such as cotton leaf, boll or flower. In this work proposed a CMYK based image cleaning technique to remove shadows, hands and other impurities from images. 
The outcomes are tested over a database consisting of 600 images to classify the presented image as a leaf, boll or flower [6]. Bernardes A. A. et al., ( 2011) proposed method for automatic classification of cotton diseases through feature extraction of leaf symptoms from digital images. Wavelet transform energy has been used for feature extraction while SVM has been used for classification. The image set of supposedly adulterated leaves was classified within one of the four other sub-classes, namely: MA, RA, AS, and NONE. Finally obtained were: 96.2% accuracy for the SA class, 97.1% accuracy for the MA class, 80% accuracy for the RA class, and 71.4% accuracy for the AS class [7]. Meunkaewjinda. A, et al., (2008). In his work the cotton leaf disease segmentation is performed using modified self organizing feature map with genetic algorithms for optimization and support vector machines for classification. Finally, the resulting segmented image is filtered by Gabor wavelet which allows the system to analyze leaf disease color features more efficiently [8]. Gulhane. V. A et al. (2011) This work described Self organizing feature map together with a back-propagation neural network is used to recognize the color of the image. Finally find out the classification of the cotton diseases [9]. Viraj A. Gulhane. e tal., (2012).This proposed work addresses about the disease analysis is possible for the cotton leaf disease recognition, the analysis of the various diseases present on the cotton leaves can be effectively detected in the early stage before it will injure the whole crops, initially we can be able to detect three types of diseases of the cotton leaves by the methodology of Eigen feature regularization and extraction technique. In this method 90% of detection of Red spot i.e. fungal disease is detected, it is most dangerous disease; it can highly affect the productivity of the cotton crop in more extent. And if it detects in early stage we can say that, we able to make better manufacture [10]. Qinghai He et al., (2013). In this paper the author described RGB color model, HIS color model, and YCbCr color model for extracting the injured image from cotton leaf images were developed.. The ratio of damage (\u03b3) was chosen as feature to measure the degree of damage which caused by diseases or pests. In this work shows the comparison of the results obtained by implementing in different color models, the comparison of outcomes shows good accuracy in both color models and YCbCr color space is considered as the best color model for extracting the damaged image [11]. III. Motivation This research work address on solving various problems like boosting up the precision rates and diminish the error rates using the Proposed Enhanced PSO which make use of Skew divergences to calculate the features correctly from the image.This method increases the accuracy and reduce the information gap to farmers. Creating a complicated agricultural environment to support the farmers to easily identify the diseases and get control of it using the Pest Recommendation module. \uf0b7 Deployed the automatic disease classification processes using advanced Data mining and Image processing techniques. \uf0b7 Augment the profit and safety of the farmer\u2019s life and diminish their burden. \uf0b7 Time consuming (less). \uf0b7 Enhance the economic status of our country. Problem Credentials \uf0d8 Ancient days crop disease identification process through the laboratory condition. \uf0d8 However, this requires continuou"}
{"_id": "760a16378ab60b6353a915b6a688a4d317c8daee", "title": "Digital image integrity - a survey of protection and verification techniques", "text": "We are currently on a verge of a revolution in digital photography. Developments in computational imaging and adoption of artificial intelligence have spawned new editing techniques that give impressive results in astonishingly short timeframes. The advent of multi-sensor and multi-lens cameras will further challenge many existing integrity verification techniques. As a result, it will be necessary to re-evaluate our notion of image authenticity and look for new techniques that could work efficiently in this new reality. The goal of this paper is to thoroughly review existing techniques for protection and verification of digital image integrity. In contrast to other recent surveys, the discussion covers the most important developments both in active protection and in passive forensic analysis techniques. Existing approaches are analyzed with respect to their capabilities, fundamental limitations, and prospective attack vectors. Whenever possible, the discussion is supplemented with real operation examples and a list of available implementations. Finally, the paper reviews resources available in the research community, including public data-sets and commercial or open-source software. The paper concludes by discussing relevant developments in computational imaging and highlighting future challenges and open research problems."}
{"_id": "8fec69c8b84a00b1c657f2b7b1b77d19af4d0fe3", "title": "Robust multivariate autoregression for anomaly detection in dynamic product ratings", "text": "User provided rating data about products and services is one key feature of websites such as Amazon, TripAdvisor, or Yelp. Since these ratings are rather static but might change over time, a temporal analysis of rating distributions provides deeper insights into the evolution of a products' quality. Given a time-series of rating distributions, in this work, we answer the following questions: (1) How to detect the base behavior of users regarding a product's evaluation over time? (2) How to detect points in time where the rating distribution differs from this base behavior, e.g., due to attacks or spontaneous changes in the product's quality? To achieve these goals, we model the base behavior of users regarding a product as a latent multivariate autoregressive process. This latent behavior is mixed with a sparse anomaly signal finally leading to the observed data. We propose an efficient algorithm solving our objective and we present interesting findings on various real world datasets."}
{"_id": "2e29ef2217a27c088f390579bfe0d3cf2878fe1b", "title": "Datalog+/-: A Family of Logical Knowledge Representation and Query Languages for New Applications", "text": "This paper summarizes results on a recently introduced family of Datalog-based languages, called Datalog+/-, which is a new framework for tractable ontology querying, and for a variety of other applications. Datalog+/- extends plain Datalog by features such as existentially quantified rule heads and, at the same time, restricts the rule syntax so as to achieve decidability and tractability. In particular, we discuss three paradigms ensuring decidability: chase termination, guardedness, and stickiness."}
{"_id": "6d6f075065a6c426f7e77c862cc5af2aa9dbbbb5", "title": "Estimating 3D Egomotion from Perspective Image Sequence", "text": "This paper deals with the computation of sensor motion from sets of displacement vectors obtained from consecutive pairs of images. The problem is investigated with emphasis on its application to autonomous robots and land vehicles. First, the effects of 3-D camera rotation and translation upon the observed image are discussed and in particular the concept of the focus of expansion (FOE). It is shown that locating the FOE precisely is difficult when displacement vectors are corrupted by noise and errors. A more robust performance can be achieved by computing a 2-D region of possible FOE-locations (termed the fuzzy FOE) instead of looking for a single-point FOE. The shape of this FOE-region is an explicit indicator for the accuracy of the result. It has been shown elsewhere that given the fuzzy FOE, a number of powerful inferences about the 3-D scene structure and motion become possible. This paper concentrates on the aspects of computing the fuzzy FOE and shows the performance of a particular algorithm on real motion sequences taken from a moving autonomous land vehicle. Zndex Tenns-Autonomous mobile robot, dynamic scene analysis, fuzzy focus of expansion, motion analysis, passive navigation, sensor motion estimation."}
{"_id": "3f90560a649fcd0d618c4e5538b18d07ef116539", "title": "Container Port Performance Measurement and Comparison Leveraging Ship GPS Traces and Maritime Open Data", "text": "Container ports are generally measured and compared using performance indicators such as container throughput and facility productivity. Being able to measure the performance of container ports quantitatively is of great importance for researchers to design models for port operation and container logistics. Instead of relying on the manually collected statistical information from different port authorities and shipping companies, we propose to leverage the pervasive ship GPS traces and maritime open data to derive port performance indicators, including ship traffic, container throughput, berth utilization, and terminal productivity. These performance indicators are found to be directly related to the number of container ships arriving at the terminals and the number of containers handled at each ship. Therefore, we propose a framework that takes the ships' container-handling events at terminals as the basis for port performance measurement. With the inferred port performance indicators, we further compare the strengths and weaknesses of different container ports at the terminal level, port level, and region level, which can potentially benefit terminal productivity improvement, liner schedule optimization, and regional economic development planning. In order to evaluate the proposed framework, we conduct extensive studies on large-scale real-world GPS traces of container ships collected from major container ports worldwide through the year, as well as various maritime open data sources concerning ships and ports. Evaluation results confirm that the proposed framework not only can accurately estimate various port performance indicators but also effectively produces port comparison results such as port performance ranking and port region comparison."}
{"_id": "1fe8b9f66b29e1c43c50a200dd9a172cb531f7d8", "title": "Visibility in bad weather from a single image", "text": "Bad weather, such as fog and haze, can significantly degrade the visibility of a scene. Optically, this is due to the substantial presence of particles in the atmosphere that absorb and scatter light. In computer vision, the absorption and scattering processes are commonly modeled by a linear combination of the direct attenuation and the airlight. Based on this model, a few methods have been proposed, and most of them require multiple input images of a scene, which have either different degrees of polarization or different atmospheric conditions. This requirement is the main drawback of these methods, since in many situations, it is difficult to be fulfilled. To resolve the problem, we introduce an automated method that only requires a single input image. This method is based on two basic observations: first, images with enhanced visibility (or clear-day images) have more contrast than images plagued by bad weather; second, airlight whose variation mainly depends on the distance of objects to the viewer, tends to be smooth. Relying on these two observations, we develop a cost function in the framework of Markov random fields, which can be efficiently optimized by various techniques, such as graph-cuts or belief propagation. The method does not require the geometrical information of the input image, and is applicable for both color and gray images."}
{"_id": "23c5f46f7e780edd31158a894496c34c9bd94eba", "title": "Gray and color image contrast enhancement by the curvelet transform", "text": "We present in this paper a new method for contrast enhancement based on the curvelet transform. The curvelet transform represents edges better than wavelets, and is therefore well-suited for multiscale edge enhancement. We compare this approach with enhancement based on the wavelet transform, and the Multiscale Retinex. In a range of examples, we use edge detection and segmentation, among other processing applications, to provide for quantitative comparative evaluation. Our findings are that curvelet based enhancement out-performs other enhancement methods on noisy images, but on noiseless or near noiseless images curvelet based enhancement is not remarkably better than wavelet based enhancement."}
{"_id": "2cfe25a0398b060065eb493c5f6abcc92c36d826", "title": "A novel algorithm for color constancy", "text": "Color constancy is the skill by which it is possible to tell the color of an object even under a colored light. I interpret the color of an object as its color under a fixed canonical light, rather than as a surface reflectance function. This leads to an analysis that shows two distinct sets of circumstances under which color constancy is possible. In this framework, color constancy requires estimating the illuminant under which the image was taken. The estimate is then used to choose one of a set of linear maps, which is applied to the image to yield a color descriptor at each point. This set of maps is computed in advance. The illuminant can be estimated using image measurements alone, because, given a number of weak assumptions detailed in the text, the color of the illuminant is constrained by the colors observed in the image. This constraint arises from the fact that surfaces can reflect no more light than is cast on them. For example, if one observes a patch that excites the red receptor strongly, the illuminant cannot have been deep blue. Two algorithms are possible using this constraint, corresponding to different assumptions about the world. The first algorithm, Crule will work for any surface reflectance. Crule corresponds to a form of coefficient rule, but obtains the coefficients by using constraints on illuminant color. The set of illuminants for which Crule will be successful depends strongly on the choice of photoreceptors: for narrowband photoreceptors, Crule will work in an unrestricted world. The second algorithm, Mwext, requires that both surface reflectances and illuminants be chosen from finite dimensional spaces; but under these restrictive conditions it can recover a large number of parameters in the illuminant, and is not an attractive model of human color constancy. Crule has been tested on real images of Mondriaans, and works well. I show results for Crule and for the Retinex algorithm of Land (Land 1971; Land 1983; Land 1985) operating on a number of real images. The experimental work shows that for good constancy, a color constancy system will need to adjust the gain of the receptors it employs in a fashion analagous to adaptation in humans."}
{"_id": "5c90d7d1287eff26464225f77da814f479a0beac", "title": "Contrast Limited Adaptive Histogram Equalization image processing to improve the detection of simulated spiculations in dense mammograms", "text": "The purpose of this project was to determine whether Contrast Limited Adaptive Histogram Equalization (CLAHE) improves detection of simulated spiculations in dense mammograms. Lines simulating the appearance of spiculations, a common marker of malignancy when visualized with masses, were embedded in dense mammograms digitized at 50 micron pixels, 12 bits deep. Film images with no CLAHE applied were compared to film images with nine different combinations of clip levels and region sizes applied. A simulated spiculation was embedded in a background of dense breast tissue, with the orientation of the spiculation varied. The key variables involved in each trial included the orientation of the spiculation, contrast level of the spiculation and the CLAHE settings applied to the image. Combining the 10 CLAHE conditions, 4 contrast levels and 4 orientations gave 160 combinations. The trials were constructed by pairing 160 combinations of key variables with 40 backgrounds. Twenty student observers were asked to detect the orientation of the spiculation in the image. There was a statistically significant improvement in detection performance for spiculations with CLAHE over unenhanced images when the region size was set at 32 with a clip level of 2, and when the region size was set at 32 with a clip level of 4. The selected CLAHE settings should be tested in the clinic with digital mammograms to determine whether detection of spiculations associated with masses detected at mammography can be improved."}
{"_id": "810a78860a3f16779e42dce16f4895bfeac3c6f8", "title": "Photographic tone reproduction for digital images", "text": "A classic photographic task is the mapping of the potentially high dynamic range of real world luminances to the low dynamic range of the photographic print. This tone reproduction problem is also faced by computer graphics practitioners who must map digital images to a low dynamic range print or screen. The work presented in this paper leverages the time-tested techniques of photographic practice to develop a new tone reproduction operator. In particular, we use and extend the techniques developed by Ansel Adams to deal with digital images. The resulting algorithm is simple and is shown to produce good results for the wide variety of images that we have tested. Photographic Tone Reproduction for Digital Images Erik Reinhard University of Utah Michael Stark University of Utah Peter Shirley University of Utah Jim Ferwerda Cornell University"}
{"_id": "a8cd4bc012e08a2c6b2ca7618ec411a8d9e523e7", "title": "Mixture of experts for classification of gender, ethnic origin, and pose of human faces", "text": "In this paper we describe the application of mixtures of experts on gender and ethnic classification of human faces, and pose classification, and show their feasibility on the FERET database of facial images. The FERET database allows us to demonstrate performance on hundreds or thousands of images. The mixture of experts is implemented using the \"divide and conquer\" modularity principle with respect to the granularity and/or the locality of information. The mixture of experts consists of ensembles of radial basis functions (RBFs). Inductive decision trees (DTs) and support vector machines (SVMs) implement the \"gating network\" components for deciding which of the experts should be used to determine the classification output and to restrict the support of the input space. Both the ensemble of RBF's (ERBF) and SVM use the RBF kernel (\"expert\") for gating the inputs. Our experimental results yield an average accuracy rate of 96% on gender classification and 92% on ethnic classification using the ERBF/DT approach from frontal face images, while the SVM yield 100% on pose classification."}
{"_id": "c8bd02c043c6712f62bd4a0f926719c7e94c64e2", "title": "Building Optimal Question Answering System Automatically using Configuration Space Exploration (CSE) for QA4MRE 2013 Tasks", "text": "Different software systems for automatic question answering have been developed in recent years. Some systems perform well on specific domains, but may not be appropriate for other domains. As the complexity and scaling of such information systems become ever greater, it is much more challenging to effectively and efficiently determine which toolkits, algorithms, knowledge bases or other resources should be integrated into a system so that one can achieve a desired or optimal level of performance on a given task. In this working notepaper, we present a generic framework that can be used for any machinereading task and automatically find the best configuration of algorithmic components as well as values of their corresponding parameters. Although we have designed the framework for all QA4MRE-2013 tasks (i.e. Main task, Biomedical about Alzheimer\u2019s and Entrance Exam), our analysis will mostly focus on Biomedical about Alzheimer\u2019s task. We introduce the Configuration Space Exploration (CSE) framework, an extension to the Unstructured Information Management Architecture (UIMA) which provides a general distributed solution for building and exploring possible configurations for any intelligent information system. For the Biomedial about Alzheimer\u2019s task, CSE was used to generate more than 1000 different configurations from existing components; we selected the 3 best runs for submission. We achieved an average c@1 of 0.27; our highest score was 0.60 for reading-test-1, and our lowest was 0.0 for reading-test3. We further enhanced the system by introducing point-wise mutual information (PMI) scoring for answer ranking, which produced an average c@1 of 0.4025, with a highest score of 0.77 for reading test-1 and a lowest score of 0.2 for reading test-2."}
{"_id": "c5de28087a86f028a360f5a255088ca85cef662b", "title": "Automated user modeling for personalized digital libraries", "text": "Digital libraries (DL) have become one of the most typical ways of accessing any kind of digitalized information. Due to this key role, users welcome any improvements on the services they receive from digital libraries. One trend used to improve digital services is through personalization. Up to now, the most common approach for personalization in digital libraries has been user-driven. Nevertheless, the design of efficient personalized services has to be done, at least in part, in an automatic way. In this context, machine learning techniques automate the process of constructing user models. This paper proposes a new approach to construct digital libraries that satisfy user\u2019s necessity for information: Adaptive Digital Libraries, libraries that automatically learn user preferences and goals and personalize their interaction using this"}
{"_id": "061fdf4ccb2e891d7493b6ae4fa9dbb6531ee9f1", "title": "First-Order Mixed Integer Linear Programming", "text": "Mixed integer linear programming (MILP) is a powerful representation often used to formulate decision-making problems under uncertainty. However, it lacks a natural mechanism to reason about objects, classes of objects, and relations. First-order logic (FOL), on the other hand, excels at reasoning about classes of objects, but lacks a rich representation of uncertainty. While representing propositional logic in MILP has been extensively explored, no theory exists yet for fully combining FOL with MILP. We propose a new representation, called first-order programming or FOP, which subsumes both FOL and MILP. We establish formal methods for reasoning about first order programs, including a sound and complete lifted inference procedure for integer first order programs. Since FOP can offer exponential savings in representation and proof size compared to FOL, and since representations and proofs are never significantly longer in FOP than in FOL, we anticipate that inference in FOP will be more tractable than inference in FOL for corresponding problems."}
{"_id": "1cd67d1821c1c26c1639fb98a7852b780c80d337", "title": "Speed and power scaling of SRAM's", "text": "Simple models for the delay, power, and area of a static random access memory (SRAM) are used to determine the optimal organizations for an SRAM and study the scaling of their speed and power with size and technology. The delay is found to increase by about one gate delay for every doubling of the RAM size up to 1 Mb, beyond which the interconnect delay becomes an increasingly significant fraction of the total delay. With technology scaling, the nonscaling of threshold mismatches in the sense amplifiers is found to significantly impact the total delay in generations of 0.1 /spl mu/m and below."}
{"_id": "f8006d3d53367c9d5539531ab8f28238e3a643b3", "title": "Prevention of Relay Attack Using NFC", "text": "Near Field Communication (NFC) is one of the emerging and promising technological developments for mobile phones and other contactless devices. NFC technologies allow two active devices embedded with chip transmit small pieces of data between each other via short range wireless connection and at low speed depending on the configurations. It offers low friction process because of the close range that the two NFC enabled devices can setup a connection. The combination of NFC with smart devices has led to development and range of NFC that includes data exchange, service discovery, connection, e-payment, and ticketing. With the help of a NFC enabled phone and card reader device, contactless card transaction can be performed. Security problems related to relay attack were analyzed and identified a proper solution to prevent the attack. In the proposed system, a Frame wait integer is used to check and verify data manipulation, by attaching the transacted data with a signed integer."}
{"_id": "9f255612278488123b02cd67acc4b1d3c62c89dc", "title": "Completion and Reconstruction with Primitive Shapes", "text": "We consider the problem of reconstruction from incomplete point-clouds. To find a closed mesh the reconstruction is guided by a set of primitive shapes which has been detected on the input point-cloud (e.g. planes, cylinders etc.). With this guidance we not only continue the surrounding structure into the holes but also synthesize plausible edges and corners from the primitives\u2019 intersections. To this end we give a surface energy functional that incorporates the primitive shapes in a guiding vector field. The discretized functional can be minimized with an efficient graphcut algorithm. A novel greedy optimization strategy is proposed to minimize the functional under the constraint that surface parts corresponding to a given primitive must be connected. From the primitive shapes our method can also reconstruct an idealized model that is suitable for use in a CAD system."}
{"_id": "d2befdf16fef9c3d024a0910d9405e3211344723", "title": "Improved DV-Hop localization algorithm for wireless sensor networks", "text": "Wireless sensor networks (WSN) are usually composed of a great number of randomly deployed nodes that communicate among themselves and gather information about the environment. In many applications, it is required to know geographical location of the sensor which detected an event. The existing localization algorithms can be classified into two categories: range-based and range-free. Range-based algorithms measure the actual distances between nodes, while range-free algorithms approximate the distance based on connectivity information. Range-free localization is cost-effective alternative to a more expensive range based approaches since there is no necessity for additional hardware. However, these techniques usually have higher localization error compared to the range-based algorithms. DV-Hop is one of the range-free localization algorithms utilizing hop-distance estimation. In this paper, we propose an improvement of the range-free DV-Hop localization algorithm and present simulations results to evaluate it."}
{"_id": "9098c9788db3ad789bdbcff1dd50452c9722b761", "title": "Syntactic methods for topic-independent authorship attribution", "text": "The efficacy of syntactic features for topic-independent authorship attribution is evaluated, taking a feature set of frequencies of words and punctuation marks as baseline. The features are \u2018deep\u2019 in the sense that they are derived by parsing the subject texts, in contrast to \u2018shallow\u2019 syntactic features for which a part-of-speech analysis is enough. The experiments are conducted on a corpus of novels written around the year 1900 by 20 different authors, and cover two tasks. In the first task, text samples are taken from books by one author, and the goal is to pair samples from the same book. In the second task, text samples are taken from several authors, but only one sample from each book, and the goal is to pair samples from the same author. In the first task, the baseline feature set outperformed the syntax-based feature set, but for the second task, the outcome was the opposite. This suggests that, compared to lexical features such as vocabulary and punctuation, syntactic features are more robust to changes in topic."}
{"_id": "f032eb287490833d7c2dc92a80808ef1e4e1f1fb", "title": "Object permanence in five-month-old infants", "text": "A new method was devised to test object permanence in young infants. Fivemonth-old infants were habituated to a screen that moved back and forth through a 180-degree arc, in the manner of a drawbridge. After infants reached habituation, a box was centered behind the screen. Infants were shown two test events: a possible event and an impossible event. In the possible event, the screen stopped when it reached the occluded box; in the impossible event, the screen moved through the space occupied by the box. The results indicated that infants looked reliably longer at the impossible than at the possible event. This finding suggested that infants (1) understood that the box continued to exist, in its same location, after it was occluded by the screen, and (2) expected the screen to stop against the occluded box and were surprised, or puzzled, when it failed to do so. A control experiment in which the box was placed next to the screen provided support for this interpretation of the results. Together, the results of these experiments indicate that, contrary to Piaget\u2019s (1954) claims, infants as young as 5 months of age understand that objects continue to exist when occluded. The results also indicate that 5-month-old infants realize that solid objects do not move through the space occupied by other solid objects. *This research was supported by a grant from the National Institute of Health (HD-13248) to ESS. The data analysis was supported by a grant from the National Science Foundation (SES84-08626) to SW. While working on this research, RI3 was supported by fellowships from the Natural Sciences and Engineering Research Council of Canada and the Qu6bec Department of Education. We thank Judy Deloache and Bob Reeve, for their careful reading of the manuscript; Marty Banks, Susan Carey, and Paul Harris, for helpful comments on earlier versions of the manuscript; Wendy Smith Born, Sarah Mangelsdorf, and the members of the Infant Lab at the University of Pennsylvania, for their help with the data collection; and Dawn Iacobucci, for her help with the data analysis. **Reprint requests should be sent to RenCe Baillargeon, Psychology Department, University of Illinois at Urbana-Champaign, 603 East Daniel Street, Champaign, IL 61820, U.S.A. OOlO-0277/85/$5.90 0 Elsevier Sequoia/Printed in The Netherlands"}
{"_id": "dcd72f0a9cdc37450379f401fc2f4f87e30f5021", "title": "Social Software in Higher Education: The Diversity of Applications and Their Contributions to Students' Learning Experiences", "text": ""}
{"_id": "e15f310bc625b4bbab0026911520f0ff188a4a77", "title": "Risks and risk mitigation in global software development: A tertiary study", "text": "Context There is extensive interest in global software development (GSD) which has led to a large number of papers reporting on GSD. A number of systematic literature reviews (SLRs) have attempted to aggregate information from individual studies. Objective: We wish to investigate GSD SLR research with a focus on discovering what research has been conducted in the area and to determine if the SLRs furnish appropriate risk and risk mitigation advice to provide guidance to organizations involved with GSD. Method: We performed a broad automated search to identify GSD SLRs. Data extracted from each study included: (1) authors, their affiliation and publishing venue, (2) SLR quality, (3) research focus, (4) GSD risks, (5) risk mitigation strategies and, (6) for each SLR the number of primary studies reporting each risk and risk mitigation strategy. Results: We found a total of 37 papers reporting 24 unique GSD SLR studies. Major GSD topics covered include: (1) organizational environment, (2) project execution, (3) project planning and control and (4) project scope and requirements. We extracted 85 risks and 77 risk mitigation advice items and categorized them under four major headings: outsourcing rationale, software development, human resources, and project management. The largest group of risks was related to project management. GSD outsourcing rationale risks ranked highest in terms of primary study support but in many cases these risks were only identified by a single SLR. Conclusions: The focus of the GSD SLRs we identified is mapping the research rather than providing evidence-based guidance to industry. Empirical support for the majority of risks identified is moderate to low, both in terms of the number of SLRs identifying the risk, and in the number of primary studies providing empirical support. Risk mitigation advice is also limited, and empirical support for these items is low. 2013 Elsevier B.V. All rights reserved."}
{"_id": "1bc2f951a993c027dd82445964d02f3b563b4b30", "title": "Exploring trajectory-driven local geographic topics in foursquare", "text": "The location based social networking services (LBSNSs) are becoming very popular today. In LBSNSs, such as Foursquare, users can explore their places of interests around their current locations, check in at these places to share their locations with their friends, etc. These check-ins contain rich information and imply human mobility patterns; thus, they can greatly facilitate mining and analysis of local geographic topics driven by users' trajectories. The local geographic topics indicate the potential and intrinsic relations among the locations in accordance with users' trajectories. These relations are useful for users in both location and friend recommendations. In this paper, we focus on exploring the local geographic topics through check-ins in Pittsburgh area in Foursquare. We use the Latent Dirichlet Allocation (LDA) model to discover the local geographic topics from the checkins. We also compare the local geographic topics on weekdays with those at weekends. Our results show that LDA works well in finding the related places of interests."}
{"_id": "2347e09e505cebc14a1b542bf876fb7294ce7813", "title": "Computational Thinking as a Computer Science Education Framework and the Related Effects on Gender Equity", "text": "I have recently completed my third year of study in the Educational Psychology and Educational Technology doctoral program at Michigan State University. I have successfully completed all of my coursework and comprehensive/qualifying exams. I will be proposing my dissertation study in the Fall semester of 2016 and expect to defend my dissertation in the Fall semester of 2017. My prior research has been focused on issues related to computational thinking, creativity, and computer science education. I am currently developing my literature review and honing my core research questions. I hope to subsequently develop my research methods and measures more fully, with plans to begin fieldwork in Fall of 2016."}
{"_id": "d5f2bbd11b69186d40fd1a70636d2fa255ad0d91", "title": "A genomic perspective on protein families.", "text": "In order to extract the maximum amount of information from the rapidly accumulating genome sequences, all conserved genes need to be classified according to their homologous relationships. Comparison of proteins encoded in seven complete genomes from five major phylogenetic lineages and elucidation of consistent patterns of sequence similarities allowed the delineation of 720 clusters of orthologous groups (COGs). Each COG consists of individual orthologous proteins or orthologous sets of paralogs from at least three lineages. Orthologs typically have the same function, allowing transfer of functional information from one member to an entire COG. This relation automatically yields a number of functional predictions for poorly characterized genomes. The COGs comprise a framework for functional and evolutionary genome analysis."}
{"_id": "a535d4dc9196a3749a6fd3c51a3e0b7994a0d46a", "title": "Language Learning in Virtual Reality Environments: Past, Present, and Future", "text": "This study investigated the research trends in language learning in a virtual reality environment by conducting a content analysis of findings published in the literature from 2004 to 2013 in four top ranked computer-assisted language learning journals: Language Learning & Technology, CALICO Journal, Computer Assisted Language Learning, and ReCALL. Data from 29 articles were cross-analyzed in terms of research topics, technologies used, language learning settings, sample groups, and methodological approaches. It was found that the three most popular research topics for learners were interactive communication; behaviors, affections, and beliefs; and task-based instruction. However, the analysis results highlight the need for the inclusion of the impact of teacher. The data also revealed that more studies are utilizing triangulation of measurement processes to enable in-depth analysis. A trend of gathering data through informal learning procedures was also observed. This article concludes by highlighting particular fields related to VR in which further research is urgently needed."}
{"_id": "e9397b23bf529a5c7e62c484600c5afe195ab798", "title": "Emotion classification using minimal EEG channels and frequency bands", "text": "In this research we propose to use EEG signal to classify two emotions (i.e., positive and negative) elicited by pictures. With power spectrum features, the accuracy rate of SVM classifier is about 85.41%. Considering each pair of channels and different frequency bands, it shows that frontal pairs of channels give a better result than the other area and high frequency bands give a better result than low frequency bands. Furthermore, we can reduce number of pairs of channels from 7 to 5 with almost the same accuracy and can cut low frequency bands in order to save computation time. All of these are beneficial to the development of emotion classification system using minimal EEG channels in real-time."}
{"_id": "0bb54f6ab6fb052db5052859338d691cdb731bed", "title": "Crowd evacuation planning using Cartesian Genetic Programming and agent-based crowd modeling", "text": "This paper proposes a new evolutionary algorithm-based methodology for optimal crowd evacuation planning. In the proposed methodology, a heuristic-based evacuation scheme is firstly introduced. The key idea is to divide the region into a set of sub-regions and use a heuristic rule to dynamically recommend an exit to agents in each sub-region. Then, an evolutionary framework based on the Cartesian Genetic Programming algorithm and an agent-based crowd simulation model is developed to search for the optimal heuristic rule. By considering dynamic environment features to construct the heuristic rule and using multiple scenarios for training, the proposed methodology aims to find generic and efficient heuristic rules that perform well on different scenarios. The proposed methodology is applied to guide people's evacuation behaviors in six different scenarios. The simulation results demonstrate that the heuristic rule offered by the proposed method is effective to reduce the crowd evacuation time on different scenarios."}
{"_id": "70bcc6766f055fc279bbb07af967c026ff8a2d9c", "title": "Learning Robust Visual-Semantic Embeddings", "text": "Many of the existing methods for learning joint embedding of images and text use only supervised information from paired images and its textual attributes. Taking advantage of the recent success of unsupervised learning in deep neural networks, we propose an end-to-end learning framework that is able to extract more robust multi-modal representations across domains. The proposed method combines representation learning models (i.e., auto-encoders) together with cross-domain learning criteria (i.e., Maximum Mean Discrepancy loss) to learn joint embeddings for semantic and visual features. A novel technique of unsupervised-data adaptation inference is introduced to construct more comprehensive embeddings for both labeled and unlabeled data. We evaluate our method on Animals with Attributes and Caltech-UCSD Birds 200-2011 dataset with a wide range of applications, including zero and few-shot image recognition and retrieval, from inductive to transductive settings. Empirically, we show that our frame-work improves over the current state of the art on many of the considered tasks."}
{"_id": "2bc27481fa57a1b247ab1fc5d23a07912480352a", "title": "How computer games affect CS (and other) students' school performance", "text": "Compulsive game playing, especially of the role-playing variety, risks failing grades and withdrawal of financial support from tuition-paying parents."}
{"_id": "3716c4896944c3461477f845319ac09e3dfe3a10", "title": "eSports: collaborative and synchronous video annotation system in grid computing environment", "text": "We designed eSports - a collaborative and synchronous video annotation platform, which is to be used in Internet scale cross-platform grid computing environment to facilitate computer supported cooperative work (CSCW) in education settings such as distance sport coaching, distance classroom etc. Different from traditional multimedia annotation systems, eSports provides the capabilities to collaboratively and synchronously play and archive real time live video, to take snapshots, to annotate video snapshots using whiteboard and to play back the video annotations synchronized with original video streams. eSports is designed based on the grid based collaboration paradigm $the shared event model using NaradaBrokering, which is a publish/subscribe based distributed message passing and event notification system. In addition to elaborate the design and implementation of eSports, we analyze the potential use cases of eSports under different education settings. We believed that eSports is very useful to improve the online collaborative coaching and education."}
{"_id": "948876640d3ca519a2c625a4a52dc830fec26b29", "title": "The Role of Flow Experience in Cyber-Game Addiction", "text": "Consumer habit, an important key to repetitive consumption, is an interesting yet puzzling phenomenon. Sometimes this consumption becomes obsessive--consumers will continue to act a certain way even when they feel it is not in their best interests. However, not all consumers develop such addictions. This study uses cyber-game addiction syndrome as an analogue to trace the possible causes of consumer addiction. Results from structure equation modeling show that repetition of favorite activities has a moderate effect upon addiction, which is in line with the assertion of rational addiction theory. However, flow experience--the emotional state embracing perceptional distortion and enjoyment--shows a much stronger impact on addiction. This suggests that consumers who have experienced flow are more likely to be addicted."}
{"_id": "c1b66422b1dab3eeee6d6c760f4bd227a8bb16c5", "title": "Being There: The Subjective Experience of Presence", "text": ""}
{"_id": "f40823290aaba15e8073792d302c0ff8c1d37486", "title": "Analysis of Bayesian Classification based Approaches for Android Malware Detection", "text": "Mobile malware has been growing in scale and complexity spurred by the unabated uptake of smartphones worldwide. Android is fast becoming the most popular mobile platform resulting in sharp increase in malware targeting the platform. Additionally, Android malware is evolving rapidly to evade detection by traditional signature-based scanning. Despite current detection measures in place, timely discovery of new malware is still a critical issue. This calls for novel approaches to mitigate the growing threat of zero-day Android malware. Hence, in this paper we develop and analyze proactive Machine Learning approaches based on Bayesian classification aimed at uncovering unknown Android malware via static analysis. The study, which is based on a large malware sample set of majority of the existing families, demonstrates detection capabilities with high accuracy. Empirical results and comparative analysis are presented offering useful insight towards development of effective static-analytic Bayesian classification based solutions for detecting unknown Android malware."}
{"_id": "7253c6cd672281576a96db1037f135ce3e78fe41", "title": "Problems of Reliability and Validity in Ethnographic Research", "text": "Although problems of reliability and validity have been explored thoroughly by experimenters and other quantitative researchers, their treatment by ethnographers has been sporadic and haphazard This article analyzes these constructs as defined and addressed by ethnographers. Issues of reliability and validity in ethnographic design are compared to their counterparts in experimental design. Threats to the credibility of ethnographic research are summarized and categorized from field study methodology. Strategies intended to enhance credibility are incorporated throughout the investigative process: study design, data collection, data analysis, and presentation of findings. Common approaches to resolving various categories of contamination are illustrated from the current literature in educational ethnography."}
{"_id": "bd475c9494a9e10bc711e5d8301eddb684b1663a", "title": "Dairy products and colorectal cancer risk: a systematic review and meta-analysis of cohort studies.", "text": "BACKGROUND\nPrevious studies of the association between intake of dairy products and colorectal cancer risk have indicated an inverse association with milk, however, the evidence for cheese or other dairy products is inconsistent.\n\n\nMETHODS\nWe conducted a systematic review and meta-analysis to clarify the shape of the dose-response relationship between dairy products and colorectal cancer risk. We searched the PubMed database for prospective studies published up to May 2010. Summary relative risks (RRs) were estimated using a random effects model.\n\n\nRESULTS\nNineteen cohort studies were included. The summary RR was 0.83 (95% CI [confidence interval]: 0.78-0.88, I2=25%) per 400 g/day of total dairy products, 0.91 (95% CI: 0.85-0.94, I2=0%) per 200 g/day of milk intake and 0.96 (95% CI: 0.83-1.12, I2=28%) per 50 g/day of cheese. Inverse associations were observed in both men and women but were restricted to colon cancer. There was evidence of a nonlinear association between milk and total dairy products and colorectal cancer risk, P<0.001, and the inverse associations appeared to be the strongest at the higher range of intake.\n\n\nCONCLUSION\nThis meta-analysis shows that milk and total dairy products, but not cheese or other dairy products, are associated with a reduction in colorectal cancer risk."}
{"_id": "1037548f688bd3e566df0d4184509976695124cf", "title": "Gene Expression Omnibus: NCBI gene expression and hybridization array data repository", "text": "The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data. GEO provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments. GEO is not intended to replace in house gene expression databases that benefit from coherent data sets, and which are constructed to facilitate a particular analytic method, but rather complement these by acting as a tertiary, central data distribution hub. The three central data entities of GEO are platforms, samples and series, and were designed with gene expression and genomic hybridization experiments in mind. A platform is, essentially, a list of probes that define what set of molecules may be detected. A sample describes the set of molecules that are being probed and references a single platform used to generate its molecular abundance data. A series organizes samples into the meaningful data sets which make up an experiment. The GEO repository is publicly accessible through the World Wide Web at http://www.ncbi.nlm.nih.gov/geo."}
{"_id": "85bfaa2fbe1feed65f90bca30c21bea00d73097b", "title": "Fine-grained Entity Recognition with Reduced False Negatives and Large Type Coverage", "text": "Fine-grained Entity Recognition (FgER) is the task of detecting and classifying entity mentions to a large set of types spanning diverse domains such as biomedical, finance and sports. We observe that when the type set spans several domains, detection of entity mention becomes a limitation for supervised learning models. The primary reason being lack of dataset where entity boundaries are properly annotated while covering a large spectrum of entity types. Our work directly addresses this issue. We propose Heuristics Allied with Distant Supervision (HAnDS) framework to automatically construct a quality dataset suitable for the FgER task. HAnDS framework exploits the high interlink among Wikipedia and Freebase in a pipelined manner, reducing annotation errors introduced by naively using distant supervision approach. Using HAnDS framework, we create two datasets, one suitable for building FgER systems recognizing up to 118 entity types based on the FIGER type hierarchy and another for up to 1115 entity types based on the TypeNet hierarchy. Our extensive empirical experimentation warrants the quality of the generated datasets. Along with this, we also provide a manually annotated dataset for benchmarking FgER systems."}
{"_id": "d4b91865a3c520b7516576dba7f81c30d1e002d2", "title": "Neuronal correlates of a perceptual decision", "text": "THE relationship between neuronal activity and psychophysical judgement has long been of interest to students of sensory processing. Previous analyses of this problem have compared the performance of human or animal observers in detection or discrimination tasks with the signals carried by individual neurons, but have been hampered because neuronal and perceptual data were not obtained at the same time and under the same conditions1\u20134. We have now measured the performance of monkeys and of visual cortical neurons while the animals performed a psychophysical task well matched to the properties of the neurons under study. Here we report that the reliability and sensitivity of most neurons on this task equalled or exceeded that of the monkeys. We therefore suggest that under our conditions, psychophysical judgements could be based on the activity of a relatively small number of neurons."}
{"_id": "0fba92ea026075dc27df9f6d9c7ab1f856a0a3b2", "title": "Clinical and microbiologic characteristics of vulvovaginitis in Korean prepubertal girls, 2009\u20132014: a single center experience", "text": "OBJECTIVE\nTo update information on the clinical and microbiologic characteristics of pediatric vulvovaginitis in Korean prepubertal girls.\n\n\nMETHODS\nA total of 120 girls (aged 0 to 9 years) with culture-confirmed pediatric vulvovaginitis, diagnosed between 2009 and 2014, were enrolled in the study. The epidemiologic and microbiologic characteristics, and clinical outcomes were assessed. Patients with sexual precocity, as well as those who were referred for suspected sexual abuse, were excluded.\n\n\nRESULTS\nGirls aged 4 to 6 years were at the highest risk of pediatric vulvovaginitis. Seasonal distribution indicated obvious peaks in summer and winter. Of the 120 subjects, specific pathogens were identified in the genital specimens in only 20 cases (16.7%). Streptococcus pyogenes (n=12, 60%) was the leading cause of specific vulvovaginitis. Haemophilus influenzae was isolated in one patient. No cases presented with enteric pathogens, such as Shigella or Yersinia. A history of recent upper respiratory tract infection, swimming, and bubble bath use was reported in 37.5%, 15.8%, and 10.0% of patients, respectively. Recent upper respiratory tract infection was not significantly correlated with the detection of respiratory pathogens in genital specimens (P>0.05). Of 104 patients who underwent perineal hygienic care, 80 (76.9%) showed improvement of symptoms without antibiotic treatment. Furthermore, the efficacy of hygienic care was not significantly different between patients with or without specific pathogens (P>0.05).\n\n\nCONCLUSION\nSpecific pathogens were only found in 16.7% of pediatric vulvovaginitis cases. Our results indicate an excellent outcome with hygienic care, irrespective of the presence of specific pathogens."}
{"_id": "e75c5d1b7ecd71cd9f1fdc3d07f56290517ef1e5", "title": "HyPer: A hybrid OLTP&OLAP main memory database system based on virtual memory snapshots", "text": "The two areas of online transaction processing (OLTP) and online analytical processing (OLAP) present different challenges for database architectures. Currently, customers with high rates of mission-critical transactions have split their data into two separate systems, one database for OLTP and one so-called data warehouse for OLAP. While allowing for decent transaction rates, this separation has many disadvantages including data freshness issues due to the delay caused by only periodically initiating the Extract Transform Load-data staging and excessive resource consumption due to maintaining two separate information systems. We present an efficient hybrid system, called HyPer, that can handle both OLTP and OLAP simultaneously by using hardware-assisted replication mechanisms to maintain consistent snapshots of the transactional data. HyPer is a main-memory database system that guarantees the ACID properties of OLTP transactions and executes OLAP query sessions (multiple queries) on the same, arbitrarily current and consistent snapshot. The utilization of the processor-inherent support for virtual memory management (address translation, caching, copy on update) yields both at the same time: unprecedentedly high transaction rates as high as 100000 per second and very fast OLAP query response times on a single system executing both workloads in parallel. The performance analysis is based on a combined TPC-C and TPC-H benchmark."}
{"_id": "732212be0e6c5216158a7470c79fa2ff98a2da06", "title": "A Ka-band asymmetrical stacked-FET MMIC Doherty power amplifier", "text": "We present a stacked-FET monolithic millimeter-wave (mmW) integrated circuit Doherty power amplifier (DPA). The DPA employs a novel asymmetrical stack gate bias to achieve high power and high efficiency at 6-dB power back-off (PBO). The circuit is fabricated in a 0.15-\u00b5m enhancement mode (E-mode) Gallium Arsenide (GaAs) process. Experimental results demonstrate output power at 1-dB gain compression (P1dB) of 28.2 dBm, peak power added efficiency (PAE) of 37% and PAE at 6-dB PBO of 27% at 28 GHz. Measured small signal gain is 15 dB while the 3-dB bandwidth covers from 25.5 to 29.5 GHz. Using digital predistortion (DPD) with a 20 MHz 64 QAM modulated signal, an adjacent channel power ratio (ACPR) of \u221246 dBc has been observed."}
{"_id": "03837b659b4a8878c2a2dbef411cd986fecfef8e", "title": "Autoregressive Attention for Parallel Sequence Modeling", "text": "We introduce an autoregressive attention mechanism for parallelizable characterlevel sequence modeling. We use this method to augment a neural model consisting of blocks of causal convolutional layers connected by highway network skip connections. We denote the models with and without the proposed attention mechanism respectively as Highway Causal Convolution (Causal Conv) and Autoregressive-attention Causal Convolution (ARA-Conv). The autoregressive attention mechanism crucially maintains causality in the decoder, allowing for parallel implementation. We demonstrate that these models, compared to their recurrent counterparts, enable fast and accurate learning in character-level NLP tasks. In particular, these models outperform recurrent neural network models in natural language correction and language modeling tasks, and run in a fraction of the time."}
{"_id": "3466b4007e1319db8aa9cabdc001f138cffbd981", "title": "Benchmarks as Limits to Arbitrage : Understanding the Low Volatility Anomaly \u2217", "text": "Over the past 41 years, high volatility and high beta stocks have substantially underperformed low volatility and low beta stocks in U.S. markets. We propose an explanation that combines the average investor's preference for risk and the typical institutional investor\u2019s mandate to maximize the ratio of excess returns and tracking error relative to a fixed benchmark (the information ratio) without resorting to leverage. Models of delegated asset management show that such mandates discourage arbitrage activity in both high alpha, low beta stocks and low alpha, high beta stocks. This explanation is consistent with several aspects of the low volatility anomaly including why it has strengthened in recent years even as institutional investors have become more dominant."}
{"_id": "94e21eb765285ec9ed431006613d320586477e2c", "title": "Learning Semantic Deformation Flows with 3D Convolutional Networks", "text": "Shape deformation requires expert user manipulation even when the object under consideration is in a high fidelity format such as a 3D mesh. It becomes even more complicated if the data is represented as a point set or a depth scan with significant self occlusions. We introduce an end-to-end solution to this tedious process using a volumetric Convolutional Neural Network (CNN) that learns deformation flows in 3D. Our network architectures take the voxelized representation of the shape and a semantic deformation intention (e.g., make more sporty) as input and generate a deformation flow at the output. We show that such deformation flows can be trivially applied to the input shape, resulting in a novel deformed version of the input without losing detail information. Our experiments show that the CNN approach achieves comparable results with state of the art methods when applied to CAD models. When applied to single frame depth scans, and partial/noisy CAD models we achieve \u223c60% less error compared to the state-of-the-art."}
{"_id": "615c92a9130adc40aad4268027a7af3c9ede192e", "title": "The immediate effect of kinesiology taping on muscular imbalance in the lateral flexors of the neck in infants: a\u00a0randomized masked study.", "text": "OBJECTIVE\nTo investigate the immediate effect of kinesiology taping (KT) on muscular imbalance in the lateral flexors of the neck.\n\n\nDESIGN\nRandomized controlled trial.\n\n\nPARTICIPANTS\nTwenty-nine infants with congenital muscular torticollis and muscular imbalance in the lateral flexors of the neck were chosen consecutively. In addition, 5 healthy infants with no signs of muscular imbalance in the neck were tested.\n\n\nMETHOD\nThe infants were randomly allocated to either an intervention group or a control group. The intervention group had kinesiology taping applied on the affected side using the muscle-relaxing technique. The healthy infants were tested both with and without kinesiology taping. The evaluator was blinded to whether the infants were or were not taped.\n\n\nRESULTS\nThere was a significant difference in the change of Muscle Function Scale (MFS) scores between the groups (P < .0001). In the intervention group, there were significantly lower scores on the affected side that had been taped (P < .0001) and also significantly higher scores on the unaffected side (P = .01). There were no significant differences in the control group. For the healthy infants, with no imbalance in the lateral flexors of the neck, there were no changes to the MFS scores regardless of whether the kinesiology tape was applied.\n\n\nCONCLUSIONS\nFor infants with congenital muscular torticollis, kinesiology taping applied on the affected side had an immediate effect on the MFS scores for the muscular imbalance in the lateral flexors of the neck."}
{"_id": "d742830713edd589a7b4f6f4c56f07392b3e3d09", "title": "The emergence of psychopathy: Implications for the neuropsychological approach to developmental disorders", "text": "In this paper, I am going to examine the disorder of psychopathy and consider how genetic anomalies could give rise to the relatively specific neuro-cognitive impairments seen in individuals with this disorder. I will argue that genetic anomalies in psychopathy reduce the salience of punishment information (perhaps as a function of noradrenergic disturbance). I will argue that the ability of the amygdala to form the stimulus-punishment associations necessary for successful socialization is disrupted and that because of this, individuals with psychopathy do not learn to avoid actions that will harm others. It is noted that this model follows the neuropsychological approach to the study of developmental disorders, an approach that has been recently criticized. I will argue that these criticisms are less applicable to psychopathy. Indeed, animal work on the development of the neural systems necessary for emotion, does not support a constructivist approach with respect to affect. Importantly, such work indicates that while environmental effects can alter the responsiveness of the basic neural architecture mediating emotion, environmental effects do not construct this architecture. However, caveats to the neuropsychological approach with reference to this disorder are noted."}
{"_id": "fe419be5c53e2931e1d6370c914ce166be29ff6e", "title": "Dynamic Switching Networks", "text": ""}
{"_id": "b1e4eff874567d014482f6abd64ac59c0818ec6f", "title": "Primate frontal eye fields. II. Physiological and anatomical correlates of electrically evoked eye movements.", "text": "We studied single neurons in the frontal eye fields of awake macaque monkeys and compared their activity with the saccadic eye movements elicited by microstimulation at the sites of these neurons. Saccades could be elicited from electrical stimulation in the cortical gray matter of the frontal eye fields with currents as small as 10 microA. Low thresholds for eliciting saccades were found at the sites of cells with presaccadic activity. Presaccadic neurons classified as visuomovement or movement were most associated with low (less than 50 microA) thresholds. High thresholds (greater than 100 microA) or no elicited saccades were associated with other classes of frontal eye field neurons, including neurons responding only after saccades and presaccadic neurons, classified as purely visual. Throughout the frontal eye fields, the optimal saccade for eliciting presaccadic neural activity at a given recording site predicted both the direction and amplitude of the saccades that were evoked by microstimulation at that site. In contrast, the movement fields of postsaccadic cells were usually different from the saccades evoked by stimulation at the sites of such cells. We defined the low-threshold frontal eye fields as cortex yielding saccades with stimulation currents less than or equal to 50 microA. It lies along the posterior portion of the arcuate sulcus and is largely contained in the anterior bank of that sulcus. It is smaller than Brodmann's area 8 but corresponds with the union of Walker's cytoarchitectonic areas 8A and 45. Saccade amplitude was topographically organized across the frontal eye fields. Amplitudes of elicited saccades ranged from less than 1 degree to greater than 30 degrees. Smaller saccades were evoked from the ventrolateral portion, and larger saccades were evoked from the dorsomedial portion. Within the arcuate sulcus, evoked saccades were usually larger near the lip and smaller near the fundus. Saccade direction had no global organization across the frontal eye fields; however, saccade direction changed in systematic progressions with small advances of the microelectrode, and all contralateral saccadic directions were often represented in a single electrode penetration down the bank of the arcuate sulcus. Furthermore, the direction of change in these progressions periodically reversed, allowing particular saccade directions to be multiply represented in nearby regions of cortex.(ABSTRACT TRUNCATED AT 400 WORDS)"}
{"_id": "0c3078bf214cea52669ec13962a0a242243d0e09", "title": "A Broadband Folded Printed Quadrifilar Helical Antenna Employing a Novel Compact Planar Feeding Circuit", "text": "A broadband printed quadrifilar helical antenna employing a novel compact feeding circuit is proposed in this paper. This antenna presents an excellent axial ratio over a wide beamwidth, with a 29% bandwidth. A specific feeding circuit based on an aperture-coupled transition and including two 90\u00b0 surface mount hybrids has been designed to be integrated with the quadrifilar antenna. Over the bandwidth, the measured reflection coefficient of the antenna fed by the wideband compact circuit has been found to be equal to or lower than -12 dB and the maximum gain varies between 1.5 and 2.7 dBic from 1.18 to 1.58 GHz. The half-power beamwidth is 150\u00b0, with an axial ratio below 3 dB over this range. The compactness of the feeding circuit allows small element spacing in array arrangements."}
{"_id": "12417f4f32a3dbb6245a4c8dd345aee4d5a2f7b0", "title": "Clustering Semantic Spaces of Suicide Notes and Newsgroups Articles", "text": "Historically, suicide risk assessment has relied on question-and-answer type tools. These tools, built on psychometric advances, are widely used because of availability. Yet there is no known tool based on biologic and cognitive evidence. This absence often cause a vexing clinical problem for clinicians who question the value of the result as time passes. The purpose of this paper is to describe one experiment in a series of experiments to develop a tool that combines Biological Markers ( Bm) with Thought Markers ( Tm), and use machine learning to compute a real-time index for assessing the likelihood repeated suicide attempt in the next six-months. For this study we focus using unsupervised machine learning to distinguish between actual suicide notes and newsgroups. This is important because it gives us insight into how well these methods discriminate between real notes and general conversation."}
{"_id": "2d952982a5049d3315b21c8da7ebd6b165a87b0b", "title": "Receiver Design for a Bionic Nervous System: Modeling the Dendritic Processing Power", "text": "Intrabody nanonetworks for nervous system monitoring are envisioned as a key application of the Internet of Nano-Things (IoNT) paradigm, with the aim of developing radically new medical diagnosis and treatment techniques. Indeed, very recently, bionic devices have been implanted inside a living human brain as innovative treatment for drug-resistant epilepsy. In this context, this paper proposes a systems-theoretic communication model to capture the actual behavior of biological neurons. Specifically, biological neurons exhibit physical extension due to their projections called dendrites, which propagate the electrochemical stimulation received via synapses to the soma. Experimental evidences show that the dendrites exhibit two main features: 1) the compartmentalization at the level of the dendritic branches of the neuronal processes and 2) the location-dependent preference for different frequencies. Stemming from these experimental evidences, we propose to model the dendritic tree as a spatiotemporal filter bank, where each filter models the behavior in both space and time of a dendritic branch. Each filter is fully characterized along with the overall neuronal response. Furthermore, sufficient conditions on the incoming stimulus for inducing a null-neuronal response are derived. The conducted theoretical analysis shows that: 1) the neuronal information is encoded in the stimulus temporal pattern, i.e., it is possible to select the neuron to affect by changing the stimulus frequency content; in this sense, the communication among neurons is frequency-selective and 2) the spatial distribution of the dendrites affects the neuronal response; in this sense, the communication among neurons is spatial-selective. The theoretical analysis is validated through a real neuron morphology."}
{"_id": "4582dc42bb6e475debcfcd46d6418fcf3c88139e", "title": "Modeling and simulation of a PV pumping system under real climatic conditions", "text": "This article presents the modeling and simulation of a PV pumping system operating over the sun which has been developed under PSIM environment. The main objective is to optimize the PV pumping system through the introduction of two stages. The first one concerns the insertion of a DC-DC converter working with an MPPT in order to get the maximum power from the PV array. The second concerns the optimization of the DC-AC inverter which is controlled through a PWM control in order to get good voltage and current outputs spectrum with reduced losses and harmonics rate. The improvements brought by the MPPT and the PWM on the PV pumping system are shown through the example treated under real climatic conditions."}
{"_id": "6991983848bdee3f0c15b2e6925e49c359b0c701", "title": "A Novel Artificial Bee Colony Algorithm", "text": "Artificial bee colony algorithm is a new population-based evolutionary method based on the intelligent behavior of honey bee swarm. It has shown more effective than other biological-inspired algorithms. However, there are still insufficiencies in ABC algorithm, which is good at exploration but poor at exploitation and its convergence speed is also an issue in some cases. For these insufficiencies, we propose a novel artificial bee colony algorithm (NABC) for numerical optimization problems in this paper to improve the exploitation capability by incorporating the current best solution into the search procedure. Experiments are conducted on a set of unimodal/multimodal benchmark functions. The experiments results of NABC have been compared with Gbest-guided artificial bee colony algorithm (G-ABC), improved artificial bee colony algorithm (I-ABC), Elitist artificial bee colony algorithm (E-ABC). The results show that NABC is superior to those algorithms in most of the tested functions."}
{"_id": "a6df9a75a7a946cad8c32ee2a8c88d826a21430c", "title": "New York University 2016 System for KBP Event Nugget: A Deep Learning Approach", "text": "This is the first time New York University (NYU) participates in the event nugget (EN) evaluation of the Text Analysis Conference (TAC). We developed EN systems for both subtasks of event nugget, i.e, EN Task 1: Event Nugget Detection and EN Task 2: Event Nugget Detection and Coreference. The systems are mainly based on our recent research on deep learning for event detection (Nguyen and Grishman, 2015a; Nguyen and Grishman, 2016a). Due to the limited time we could devote to system development this year, we only ran the systems on the English evaluation data. However, we expect that the adaptation of the current systems to new languages can be done quickly. The development experiments show that although our current systems do not rely on complicated feature engineering, they significantly outperform the reported systems last year for the EN subtasks on the 2015 evaluation data."}
{"_id": "52ca97c355031ab2736faf4fd62143bbe98ca51c", "title": "A Survey on M2M Systems for mHealth: A Wireless Communications Perspective", "text": "In the new era of connectivity, marked by the explosive number of wireless electronic devices and the need for smart and pervasive applications, Machine-to-Machine (M2M) communications are an emerging technology that enables the seamless device interconnection without the need of human interaction. The use of M2M technology can bring to life a wide range of mHealth applications, with considerable benefits for both patients and healthcare providers. Many technological challenges have to be met, however, to ensure the widespread adoption of mHealth solutions in the future. In this context, we aim to provide a comprehensive survey on M2M systems for mHealth applications from a wireless communication perspective. An end-to-end holistic approach is adopted, focusing on different communication aspects of the M2M architecture. Hence, we first provide a systematic review ofWireless Body Area Networks (WBANs), which constitute the enabling technology at the patient's side, and then discuss end-to-end solutions that involve the design and implementation of practical mHealth applications. We close the survey by identifying challenges and open research issues, thus paving the way for future research opportunities."}
{"_id": "ddef9472cb6fab446333d6a5699916e0ca9ca06c", "title": "Antimicrobial agents from plants: antibacterial activity of plant volatile oils.", "text": "The volatile oils of black pepper [Piper nigrum L. (Piperaceae)], clove [Syzygium aromaticum (L.) Merr. & Perry (Myrtaceae)], geranium [Pelargonium graveolens L'Herit (Geraniaceae)], nutmeg [Myristica fragrans Houtt. (Myristicaceae), oregano [Origanum vulgare ssp. hirtum (Link) Letsw. (Lamiaceae)] and thyme [Thymus vulgaris L. (Lamiaceae)] were assessed for antibacterial activity against 25 different genera of bacteria. These included animal and plant pathogens, food poisoning and spoilage bacteria. The volatile oils exhibited considerable inhibitory effects against all the organisms under test while their major components demonstrated various degrees of growth inhibition."}
{"_id": "1f6699f14a7aa6da086f27cc4eb49965378fdb3e", "title": "Generic decoding of seen and imagined objects using hierarchical visual features", "text": "Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval."}
{"_id": "378f0a62471ef232c7730d8a67717afa5104ab21", "title": "Heterogeneous Information Network Embedding for Recommendation", "text": "Due to the flexibility in modelling data heterogeneity, heterogeneous information network (HIN) has been adopted to characterize complex and heterogeneous auxiliary data in recommender systems, called HIN based recommendation. It is challenging to develop effective methods for HIN based recommendation in both extraction and exploitation of the information from HINs. Most of HIN based recommendation methods rely on path based similarity, which cannot fully mine latent structure features of users and items. In this paper, we propose a novel heterogeneous network embedding based approach for HIN based recommendation, called HERec. To embed HINs, we design a meta-path based random walk strategy to generate meaningful node sequences for network embedding. The learned node embeddings are first transformed by a set of fusion functions, and subsequently integrated into an extended matrix factorization (MF) model. The extended MF model together with fusion functions are jointly optimized for the rating prediction task. Extensive experiments on three real-world datasets demonstrate the effectiveness of the HERec model. Moreover, we show the capability of the HERec model for the cold-start problem, and reveal that the transformed embedding information from HINs can improve the recommendation performance."}
{"_id": "35da684bbf6d22a4adfd6ecabf0df964a0211f60", "title": "A Comprehensive Survey of Technologies for Building a Hybrid High Performance Intrusion Detection System", "text": "Intrusion detection plays a vital role in maintaining the stability of any network. The major requirements for any intrusion detection system are speed, accuracy and less memory. Though various intrusion detection methods are available, they excel at some points while lack in the others. This paper presents a comprehensive survey of the technologies that are used for detecting intrusions. It analyzes the pros and cons of each technology and the literature works that utilizes these technologies. Challenges faced by the current IDS and the requirements for IDS in the current network scenario are discussed in detail. A detailed study on the datasets that can be used for effective building of an IDS is discussed. The research framework is proposed and a discussion of the various technologies that can be used for improving the efficiency of the IDS is provided."}
{"_id": "6fa717574ecadf095615012c68b72d584fb45df4", "title": "Mixed Reality: A Survey", "text": "This chapter presents an overview of the Mixed Reality (MR) paradigm, which proposes to overlay our real-world environment with digital, computer-generated objects. It presents example applications and outlines limitations and solutions for their technical implementation. In MR systems, users perceive both the physical environment around them and digital elements presented through, for example, the use of semitransparent displays. By its very nature, MR is a highly interdisciplinary field engaging signal processing, computer vision, computer graphics, user interfaces, human factors, wearable computing, mobile computing, information visualization, and the design of displays and sensors. This chapter presents potential MR applications, technical challenges in realizing MR systems, as well as issues related to usability and collaboration in MR. It separately presents a section offering a selection of MR projects which have either been partly or fully undertaken at Swiss universities and rounds off with a section on current challenges and trends."}
{"_id": "4f57f486adea0bf95c252620a4e8af39232ef8bc", "title": "Swish: a Self-Gated Activation Function", "text": "The choice of activation functions in deep networks has a significant effect on the training dynamics and task performance. Currently, the most successful and widely-used activation function is the Rectified Linear Unit (ReLU). Although various alternatives to ReLU have been proposed, none have managed to replace it due to inconsistent gains. In this work, we propose a new activation function, named Swish, which is simply f(x) = x \u00b7 sigmoid(x). Our experiments show that Swish tends to work better than ReLU on deeper models across a number of challenging datasets. For example, simply replacing ReLUs with Swish units improves top-1 classification accuracy on ImageNet by 0.9% for Mobile NASNetA and 0.6% for Inception-ResNet-v2. The simplicity of Swish and its similarity to ReLU make it easy for practitioners to replace ReLUs with Swish units in any neural network."}
{"_id": "e4236431341d033000cbebb6440da1eb10ef8d57", "title": "Pharmacological profile of lurasidone, a novel antipsychotic agent with potent 5-hydroxytryptamine 7 (5-HT7) and 5-HT1A receptor activity.", "text": "Lurasidone [(3aR,4S,7R,7aS)-2-[(1R,2R)-2-[4-(1,2-benzisothiazol-3-yl)piperazin-1-ylmethyl]cyclohexylmethyl]hexahydro-4,7-methano-2H-isoindole-1,3-dione hydrochloride; SM-13496] is an azapirone derivative and a novel antipsychotic candidate. The objective of the current studies was to investigate the in vitro and in vivo pharmacological properties of lurasidone. Receptor binding affinities of lurasidone and several antipsychotic drugs were tested under comparable assay conditions using cloned human receptors or membrane fractions prepared from animal tissue. Lurasidone was found to have potent binding affinity for dopamine D(2), 5-hydroxytryptamine 2A (5-HT(2A)), 5-HT(7), 5-HT(1A), and noradrenaline alpha(2C) receptors. Affinity for noradrenaline alpha(1), alpha(2A), and 5-HT(2C) receptors was weak, whereas affinity for histamine H(1) and muscarinic acetylcholine receptors was negligible. In vitro functional assays demonstrated that lurasidone acts as an antagonist at D(2) and 5-HT(7) receptors and as a partial agonist at the 5-HT(1A) receptor subtype. Lurasidone showed potent effects predictive of antipsychotic activity, such as inhibition of methamphetamine-induced hyperactivity and apomorphine-induced stereotyped behavior in rats, similar to other antipsychotics. Furthermore, lurasidone had only weak extrapyramidal effects in rodent models. In animal models of anxiety disorders and depression, treatment with lurasidone was associated with significant improvement. Lurasidone showed a preferential effect on the frontal cortex (versus striatum) in increasing dopamine turnover. Anti-alpha(1)-noradrenergic, anticholinergic, and central nervous system (CNS) depressant actions of lurasidone were also very weak. These results demonstrate that lurasidone possesses antipsychotic activity and antidepressant- or anxiolytic-like effects with potentially reduced liability for extrapyramidal and CNS depressant side effects."}
{"_id": "cbce6b36b49176e8406118b48f2f2c73b455288d", "title": "Building balanced scorecard with SWOT analysis , and implementing \" Sun Tzu ' s The Art of Business Management Strategies \" on QFD methodology", "text": "Article Summary) Conjoining the SWOT matrix with the balanced scorecard (BSC) makes a systematic and holistic strategic management system. The SWOT matrix clearly identifies the critical success factors that can be implemented into the identification of the different aspects toward the balanced scorecard. It is, therefore, a more structural approach in setting up the foundation of the balanced scorecard; instead of simply identifying the key performance indicators (KPI) via gut feeling or by brainstorming. The next step of the whole process is to make use of the quality function deployment (QFD) methodology with the balanced scorecard (BSC) attributes identified as the Whats on the vertical axis, and the major strategies of The Art of Business Management Sun Tzu's as the horizontal Hows axis. The relationships are then studied in the body of the QFD matrix. Consideration is then given as to how the model presented can be customised to allow companies using this approach to develop and implement their corporate business strategic plan. Full Text (5087 words) Copyright MCB UP Limited (MCB) 2000 S.F. Lee: Hong Kong Institute of Vocational Education, Vocational Training Council, HongKong, China Andrew Sai On Ko: International Management Centre, Oxford Brookes, UK 1. Overview Almost every successful manager understands the importance of offering quality service to gain a distinctive advantage over the competition. After all, gaining an advantage is the key to success and even for the sake of survival. However, many of the so-called organizational strategies that companies depend on are not endurable. Why? These strategies may not be well structured and cannot stand the test of time. Nevertheless, powerful organizational tactics cannot easily be imitated; things like a patented algorithm, exclusive right to a special resource, and brand name recognition. \"If only I knew then what I know now, I would have done things differently\". People always make this comment after they have implemented the wrong corporate strategies. Since we live in times of entwining complexity, acceleration, and change, making the right decision is extremely important for strategic planning. It is fair to say that every organization and individuals have their unique set of strengths, weaknesses, opportunities, and threats. It is very important that an organization determines its strengths, weaknesses, opportunities, and threats, as well as the competitions'. By linking the SWOT analysis with the balanced scorecard, an organization can balance its strengths against its competitions' weaknesses, and optimise its opportunities within the market. Sun Tzu's The Art of War is the most famous work on military operations in ancient China and it is also one of the most revered and well-known military texts outside China. Being the oldest military treatise in the world, it becomes one of the greatest cultural legacies of the Chinese nation. The significance of Sun Tzu's works and military statements in influencing military thoughts has seldom been questioned and were used extensively during the Warring States Period in Chinese history. Sun Tzu's philosophy has become part of the collective unconscious of most educated Chinese. 
These strategies encapsulated in mnemonic phrases; many of them passed into proverbs and are very widely known in the Chinese cultural sphere. In anticipation of the global economy competitions and business and trade wars among countries, the words of Sun Tzu are particularly useful and appropriate for evaluating business management strategies. Sun Tzu's main philosophy in military situations are identified and correlated to actual business environment. Apparently, relating modern management philosophies and organization behaviours to Sun Tzu's strategies by looking into the philosophies advocated by him and the situation of top management seems to be difficult and unrealistic. It is, however, essential to identify the differences between the ancient and modern times and also between military and business operations before proper management strategies were launched. His strategic planning in waging war in ancient China highlights the strategic management in today's corporations. The objective is to emulate the model set by Sun Tzu, presenting the essences in organising for strategy. Organizations should have their strategic plans constantly reviewed, continuously shaped and positioned in the struggle of continuous competition. QFD maintains customer ideas and requirements throughout the process that in turn leads to customer satisfaction. A further customer expectation is for the next purchase to be a better product. However, this is identified at no extra cost. When the satisfaction is not achieved, a competitor's product is likely to be chosen. QFD is a powerful technique that enables companies to anticipate and prioritise customer needs in their totality and to incorporate them effectively into the product and service provided for the end-user (Wassermann, 1993). 2. The SWOT analysis When implementing a SWOT analysis to devise a set of strategies, the following guidelines should be utilised. Strengths (Weihrich, 1982) Determine your organization's strong points. This should be from both your internal and external customers. Do not be humble; be as pragmatic as possible. Are there any unique or distinct advantages that makes your organization stand out in the crowd? What makes the customers choose your organization over the competition's? Are there any products or services in which your competition cannot imitate (now and in the future)? Weaknesses (Weihrich, 1982) Determine your organization's weaknesses, not only from your point of view, but also more importantly, from your customers. Although it may be difficult for an organization to acknowledge its weaknesses, it is best to handle the bitter reality without procrastination. Are there any operations or procedures that can be streamlined? What and why do your competition operate better than your organization? Is there any avoidance that your organization should be aware of? Does your competition have a certain market segment conquered? Opportunities (Weihrich, 1982) Another major factor is to determine how your organization can continue to grow within the marketplace. After all, opportunities are everywhere, such as changes in technology, government policy, social patterns, and so on. Where and what are the attractive opportunities within your marketplace? Are there any new emerging trends within the market? What does your organization predict in the future that may depict new opportunities? 
Threats (Weihrich, 1982) No one likes to think about threats, but we still have to face them, despite the fact that they are external factors that are out of our control. For example, the recent major economic slump in Asia. It is vital to be prepared and face threats even during turbulent situations. What is your competition doing that is suppressing your organizational development? Are there any changes in consumer demand, which call for new requirements of your products or services? Is the changing technology hurting your organization's position within the marketplace? 2.1 The wizardry of SWOT The wizardry of SWOT is the matching of specific internal and external factors, which creates a strategic matrix, which makes sense. (Note: The internal factors are within the control of your organization, such as operations, finance, marketing, and in other areas. The external factors are out of your organization's control, such as political and economic factors, technology, competition, and in other areas). The four combinations are called the Maxi-Maxi (Strengths/Opportunities), Maxi-Mini (Strengths/Threats), Mini-Maxi (Weakness/Opportunities), and Mini-Mini (Weaknesses/Threats). (See Figure 1.) 1 Maxi-Maxi (S/O): this combination shows the organization's strengths and opportunities. In essence, an organization should strive to maximise its strengths to capitalise on new opportunities (Weihrich, 1982). 2 Maxi-Mini (S/T): this combination shows the organization's strengths in consideration of threats, e.g. from competitors. In essence, an organization should strive to use its strengths to parry or minimise threats (Weihrich, 1982). 3 Mini-Maxi (W/O): this combination shows the organization's weaknesses in tandem with opportunities. It is an exertion to conquer the organization's weaknesses by making the most out of any new opportunities (Weihrich, 1982). 4 Mini-Mini (W/T): this combination shows the organization's weaknesses by comparison with the current external threats. This is most definitely defensive strategy, to minimise an organization's internal weaknesses and avoid external threats (Weihrich, 1982). 2.2 Collateral insight of SWOT As mentioned, the wizardry of SWOT is the matching of specific internal and external factors. However, what about the matching items within internal factors and items within external factors. The primary reason is that matching these factors will create strategies that will not make sense. For example, with a combination of strength and weakness (both are internal factors), lets say one of your organization's strengths is \"plenty of cash\" and one of your weaknesses is \"lack of training\". Therefore, mixing these two factors together, your management team might simply decide to plan more training for the staff members. The obvious remark for this purposeless strategy will be \"so what!\" Mainly because you should not train, just for the sake of training. A successful training program must have a specific target in response to external changes. You have to determine your organization's specific needs for training in line with the external and internal factors. In other words, the strategy must have an external factor as a trigger in order for it to be feasible. 3. The balanced scorecard Professor Robert Kaplan and David Norton developed the balanced scorecard (BS"}
{"_id": "f1ffa7fae3e6192ead3b7354b528f549e209fbc6", "title": "Modelling and Simulation of Electric Vehicle Fast Charging Stations Driven by High Speed Railway Systems", "text": "The aim of this investigation is the analysis of the opportunity introduced by the use of railway infrastructures for the power supply of fast charging stations located in highways. Actually, long highways are often located far from urban areas and electrical infrastructure, therefore the installations of high power charging areas can be difficult. Specifically, the aim of this investigation is the analysis of the opportunity introduced by the use of railway infrastructures for the power supply of fast charging stations located in highways. Specifically, this work concentrates on fast-charging electric cars in motorway service areas by using high-speed lines for supplying the required power. Economic, security, safety and environmental pressures are motivating and pushing countries around the globe to electrify transportation, which currently accounts for a significant amount, above 70 percent of total oil demand. Electric cars require fast-charging station networks to allowing owners to rapidly charge their batteries when they drive relatively long routes. In other words, this means about the infrastructure towards building charging stations in motorway service areas and addressing the problem of finding solutions for suitable electric power sources. A possible and promising solution is proposed in the study that involves using the high-speed railway line, because it allows not only powering a high load but also it can be located relatively near the motorway itself. This paper presents a detailed investigation on the modelling and simulation of a 2 \u00d7 25 kV system to feed the railway. A model has been developed and implemented using the SimPower systems tool in MATLAB/Simulink to simulate the railway itself. Then, the model has been applied to simulate the battery charger and the system as a whole in two successive steps. The results showed that the concept could work in a real situation. Nonetheless if more than twenty 100 kW charging bays are required in each direction or if the line topology is changed for whatever reason, it cannot be guaranteed that the railway system will be able to deliver the additional power that is necessary."}
{"_id": "4ca5682291e17fe09cc160159c08139aca32da52", "title": "Analysis of LCLC resonant converters for high-voltage high-frequency applications", "text": "This paper proposes a novel LCLC full-bridge resonant converter with ZCS and ZVS for high-voltage high-frequency applications. In this operating model, both of ZCS and ZVS can be achieved and the switching losses can be reduced to a minimum. The proposed converter adopts the parasitic parameters of the step-up voltage transformer and one other serial capacitor is needed to form the resonant tank. Compared with other resonant converters, where only ZCS or ZVS can be achieved, the proposed working model can increase conversion efficiency with soft-switching features. Finally, A 40 V input, 4800 V output and 300 W prototype with 500 kHz switching frequency was built to verify the benefits of the proposed converter. The achieved efficiency of the converter peaked at 96.7%."}
{"_id": "9cfff0ff6652e0e81dce434bbe572e391d2f57bf", "title": "Ontologies for Cultural Heritage", "text": "In the cultural heritage domain information systems are increasingly deployed, digital representations of physical objects are produced in immense numbers and there is a strong political pressure on memory institutions to make their holdings accessible to the public in digital form. The sector splits into a set of disciplines with highly specialized fields. Due to the resulting diversity, one can hardly speak about a \u201cdomain\u201d in the sense of \u201cdomain ontologies\u201d [33]. On the other side, study and research of the past is highly interdisciplinary. Characteristically, archaeology employs a series of \u201cauxiliary\u201d disciplines, such as archaeometry, archaeomedicine, archaeobotany, archaeometallurgy, archaeoastronomy, etc., but also historical sources and social theories. Interoperability between various highly specialized systems, integrated information access and information integration increasingly becomes a demand to support research, professional heritage administration, preservation, public curiosity and education. Therefore the sector is characterized by a complex schema integration problem of associating complementary information from various dedicated systems, which can be efficiently addressed by formal ontologies [14,32,33]. There is a proliferation of specialized terminology, but terminology is less used as a means of agreement between experts than as an intellectual tool for hypothesis building based on discriminating phenomena. Automated classification is a long established discipline of archaeology, but few terminological systems are widely accepted. The sector is, however, more focused on establishing knowledge about facts and context in the past than about classes of things and the laws of their behavior. Respectively, the concatenation of related facts by co-reference [56] to particulars, such as things, people, places, periods is a major open issue. Knowledge Organisation Systems (KOS, [62]) describing people and places are employed to a certain degree, and pose similar technical problems as ontologies, but the required scale is very large. In this chapter, we describe how ontologies are and could be employed to improve information management in the cultural heritage sector."}
{"_id": "34ad05e8db9dbaedc5b89caa2987d34beb8e719f", "title": "Rollout Sampling Policy Iteration for Decentralized POMDPs", "text": "We present decentralized rollout sampling policy iteration (DecRSPI) \u2014 a new algorithm for multi-agent decision problems formalized as DEC-POMDPs. DecRSPI is designed to improve scalability and tackle problems that lack an explicit model. The algorithm uses MonteCarlo methods to generate a sample of reachable belief states. Then it computes a joint policy for each belief state based on the rollout estimations. A new policy representation allows us to represent solutions compactly. The key benefits of the algorithm are its linear time complexity over the number of agents, its bounded memory usage and good solution quality. It can solve larger problems that are intractable for existing planning algorithms. Experimental results confirm the effectiveness and scalability of the approach."}
{"_id": "582e7ca855aacce883150c4bff0b0049a5c157bc", "title": "Internet of Robotic Things-Converging Sensing / Actuating , Hypoconnectivity , Artificial Intelligence and IoT Platforms", "text": "The Internet of Things (IoT) concept is evolving rapidly and influencing new developments in various application domains, such as the Internet of Mobile Things (IoMT), Autonomous Internet of Things (A-IoT), Autonomous System of Things (ASoT), Internet of Autonomous Things (IoAT), Internet of Things Clouds (IoT-C) and the Internet of Robotic Things (IoRT) etc. that are progressing/advancing by using IoT technology. The IoT influence represents new development and deployment challenges in different areas such as seamless platform integration, context based cognitive network integration, new mobile sensor/actuator network paradigms, things identification (addressing, naming in IoT) and dynamic things discoverability and many others. The IoRT represents new convergence challenges and their need"}
{"_id": "267019eeb62735664528139a7247eaffee21fd20", "title": "New folded configuration of rectangular waveguide filters with asymmetrical transmission zeros", "text": "This paper presents a novel filter topology for the implementation of asymmetric responses with transmission zeros (TZs) in rectangular waveguide technology. It is based on a compact folded E-plane arrangement where adjacent resonators are capacitively coupled through rectangular slots, and non-adjacent resonators are coupled through simple inductive windows. The new folded configuration allows the introduction of transmission zeros above the passband, which can be easily controlled. High order filters can be designed by cascading an arbitrary number of resonators in a folded layout. Components based on this novel configuration are amenable to simple manufacturing processes and can be used in high-power environments. A triplexer containing filters implemented with the proposed topology has been designed. Measurements of a manufactured prototype are included to validate the use of this topology in practical applications."}
{"_id": "bd04e7ed282b0854c626c74e1499e67486bfc1e6", "title": "Microstrip series fed antenna array for millimeter wave automotive radar applications", "text": "A 2D tapered microstrip antenna array is analyzed and designed in the paper by adopting the method of Modified Transmission Line Model. The pattern of the E-plane is determined by 20 linear series fed rectangular microstrip patches with tapered width, and 16 linear arrays compose the 2D array fed by four-stage T-junction power divider network. The excitation coefficients in both E and H plane follow the Taylor distribution. The antenna array achieves a pencil beam of 29dB gain at W band, and its half power beamwidths(HPBW) for E plane and H plane are both about 5\u00b0. The Side Lobe Level (SLL) of the pattern at the center frequency is lower than -17dB and 19dB at E and H plane respectively."}
{"_id": "6390ed0ed8855d5e6a1d71935e770a02102d0851", "title": "Calibration of a dual-laser triangulation system for assembly line completeness inspection", "text": "In controlled industrial environments, laser triangulation is an effective technique for 3D reconstruction, which is increasingly being used for quality inspection and metrology. In this paper, we propose a method for calibrating a dual laser triangulation system - consisting of a camera, two line lasers, and a linear motion platform designed to perform completeness inspection tasks on an assembly line. Calibration of such a system involves the recovery of two sets of parameters - the plane parameters of the two line lasers, and the translational direction of the linear motion platform. First, we address these two calibration problems separately. While many solutions have been given for the former problem, the latter problem has been largely ignored. Next, we highlight an issue specific to the use of multiple lasers - that small errors in the calibration parameters can lead to misalignment between the reconstructed point clouds of the different lasers. We present two different procedures that can eliminate such misalignments by imposing constraints between the two sets of calibration parameters. Our calibration method requires only the use of planar checkerboard patterns, which can be produced easily and inexpensively."}
{"_id": "ae3f4c214aa05843e0b2317d66c64861b9a9f085", "title": "Adiposity and cancer risk: new mechanistic insights from epidemiology", "text": "Excess body adiposity, commonly expressed as body mass index (BMI), is a risk factor for many common adult cancers. Over the past decade, epidemiological data have shown that adiposity\u2013cancer risk associations are specific for gender, site, geographical population, histological subtype and molecular phenotype. The biological mechanisms underpinning these associations are incompletely understood but need to take account of the specificities observed in epidemiology to better inform future prevention strategies."}
{"_id": "0c3751db5a24c636c1aa8abfd9d63321b38cfce5", "title": "Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization", "text": "Stochastic Gradient Descent (SGD) has become popular for solving large scale supervised machine learning optimization problems such as SVM, due to their strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA) showing that this class of methods enjoy strong theoretical guarantees that are comparable or better than SGD. This analysis justifies the effectiveness of SDCA for practical applications."}
{"_id": "24424918dc93c016deeaeb86a01c8bfc01253c9b", "title": "Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives", "text": "Many classical algorithms are found until several years later to outlive the confines in which they were conceived, and continue to be relevant in unforeseen settings. In this paper, we show that SVRG is one such method: originally designed for strongly convex objectives, is also very robust under non-strongly convex or sum-of-non-convex settings. If f(x) is a sum of smooth, convex functions but f is not strongly convex (such as Lasso or logistic regression), we propose a variant SVRG that makes a novel choice of growing epoch length on top of SVRG. SVRG is a direct, faster variant of SVRG in this setting. If f(x) is a sum of non-convex functions but f is strongly convex, we show that the convergence of SVRG linearly depends on the non-convexity parameter of the summands. This improves the best known result in this setting, and gives better running time for stochastic PCA."}
{"_id": "680cbbc88d537bd6f5a68701b1bb0080a77faa00", "title": "Accelerating Stochastic Gradient Descent using Predictive Variance Reduction", "text": "Stochastic gradient descent is popular for large scale optimization but has slow convergence asymptotically due to the inherent variance. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient descent which we call stochastic variance reduced gradient (SVRG). For smooth and strongly convex functions, we prove that this method enjoys the same fast convergence rate as those of stochastic dual coordinate ascent (SDCA) and Stochastic Average Gradient (SAG). However, our analysis is significantly simpler and more intuitive. Moreover, unlike SDCA or SAG, our method does not require the storage of gradients, and thus is more easily applicable to complex problems such as some structured prediction problems and neural network learning."}
{"_id": "9fe5a7a24ff81ba2b6769e811b6ab47188a45242", "title": "Accelerated Proximal Gradient Methods for Nonconvex Programming", "text": "Nonconvex and nonsmooth problems have recently received considerable attention in signal/image processing, statistics and machine learning. However, solving the nonconvex and nonsmooth optimization problems remains a big challenge. Accelerated proximal gradient (APG) is an excellent method for convex programming. However, it is still unknown whether the usual APG can ensure the convergence to a critical point in nonconvex programming. In this paper, we extend APG for general nonconvex and nonsmooth programs by introducing a monitor that satisfies the sufficient descent property. Accordingly, we propose a monotone APG and a nonmonotone APG. The latter waives the requirement on monotonic reduction of the objective function and needs less computation in each iteration. To the best of our knowledge, we are the first to provide APG-type algorithms for general nonconvex and nonsmooth problems ensuring that every accumulation point is a critical point, and the convergence rates remain O ( 1 k2 ) when the problems are convex, in which k is the number of iterations. Numerical results testify to the advantage of our algorithms in speed."}
{"_id": "87c9709d5f76629249bc5c425c4a5ea1b9c3409c", "title": "Audio Signal Classification: History and Current Techniques", "text": "Audio signal classification (ASC) consists of extracting relevant features from a sound, and of using these features to identify into which of a set of classes the sound is most likely to fit. The feature extraction and grouping algorithms used can be quite diverse depending on the classification domain of the application. This paper presents background necessary to understand the general research domain of ASC, including signal processing, spectral analysis, psychoacoustics and auditory scene analysis. Also presented are the basic elements of classification systems. Perceptual and physical features are discussed, as well as clustering algorithms and analysis duration. Neural nets and hidden Markov models are discussed as they relate to ASC. These techniques are presented with an overview of the current state of the ASC research literature."}
{"_id": "cb6a9405b08fc268e800fbcab13f1425b66d3da6", "title": "Potassium Incorporation for Enhanced Performance and Stability of Fully Inorganic Cesium Lead Halide Perovskite Solar Cells.", "text": "Thermally unstable nature of hybrid organic-inorganic perovskites has been a major obstacle to fabricating the long-term operational device. A cesium lead halide perovskite has been suggested as an alternative light absorber, due to its superb thermal stability. However, the phase instability and poor performance are hindering the further progress. Here, cesium lead halide perovskite solar cells with enhanced performance and stability are demonstrated via incorporating potassium cations. Based on Cs0.925K0.075PbI2Br, the planar-architecture device achieves a power conversion efficiency of 10.0%, which is a remarkable record in the field of inorganic perovskite solar cells. In addition, the device shows an extended operational lifetime against air. Our research will stimulate the development of cesium lead halide perovskite materials for next-generation photovoltaics."}
{"_id": "a5ab5483b7cd8725ef4227f26ec43e6139c405b8", "title": "Probabilistic Student Models: Bayesian Belief Networks and Knowledge Space Theory", "text": "The applicability of Knowledge Space Theory (Falmagne and Doignon) and Bayesian Belief Networks (Pearl) as probabilistic student models imbedded in an Intelligent Tutoring System is examined. Student modeling issues such as knowledge representation, adaptive assessment, curriculum advancement, and student feedback are addressed. Several factors contribute to uncertainty in student modeling such as careless errors and lucky guesses, learning and forgetting, and unanticipated student response patterns. However, a probabilistic student model can represent uncertainty regarding the estimate of the student's knowledge and can be tested using empirical student data and established statistical techniques."}
{"_id": "2ffc77e3a847bf5cc08a40cecb107368b494d4b7", "title": "Sentence Segmentation in Narrative Transcripts from Neuropsycological Tests using Recurrent Convolutional Neural Networks", "text": "Automated discourse analysis tools based on Natural Language Processing (NLP) aiming at the diagnosis of languageimpairing dementias generally extract several textual metrics of narrative transcripts. However, the absence of sentence boundary segmentation in the transcripts prevents the direct application of NLP methods which rely on these marks to function properly, such as taggers and parsers. We present the first steps taken towards automatic neuropsychological evaluation based on narrative discourse analysis, presenting a new automatic sentence segmentation method for impaired speech. Our model uses recurrent convolutional neural networks with prosodic, Part of Speech (PoS) features, and word embeddings. It was evaluated intrinsically on impaired, spontaneous speech, as well as, normal, prepared speech, and presents better results for healthy elderly (CTL) (F1 = 0.74) and Mild Cognitive Impairment (MCI) patients (F1 = 0.70) than the Conditional Random Fields method (F1 = 0.55 and 0.53, respectively) used in the same context of our study. The results suggest that our model is robust for impaired speech and can be used in automated discourse analysis tools to differentiate narratives produced by MCI and CTL."}
{"_id": "c3b64af1a00766a27cab874c35b9eec33eb791c3", "title": "Firefly: illuminating future network-on-chip with nanophotonics", "text": "Future many-core processors will require high-performance yet energy-efficient on-chip networks to provide a communication substrate for the increasing number of cores. Recent advances in silicon nanophotonics create new opportunities for on-chip networks. To efficiently exploit the benefits of nanophotonics, we propose Firefly - a hybrid, hierarchical network architecture. Firefly consists of clusters of nodes that are connected using conventional, electrical signaling while the inter-cluster communication is done using nanophotonics - exploiting the benefits of electrical signaling for short, local communication while nanophotonics is used only for global communication to realize an efficient on-chip network. Crossbar architecture is used for inter-cluster communication. However, to avoid global arbitration, the crossbar is partitioned into multiple, logical crossbars and their arbitration is localized. Our evaluations show that Firefly improves the performance by up to 57% compared to an all-electrical concentrated mesh (CMESH) topology on adversarial traffic patterns and up to 54% compared to an all-optical crossbar (OP XBAR) on traffic patterns with locality. If the energy-delay-product is compared, Firefly improves the efficiency of the on-chip network by up to 51% and 38% compared to CMESH and OP XBAR, respectively."}
{"_id": "fa577137bc438fe685b0fc6c2e175f1cbf279a1e", "title": "Cross-domain recommendations without overlapping data: myth or reality?", "text": "Cross-domain recommender systems adopt different techniques to transfer learning from source domain to target domain in order to alleviate the sparsity problem and improve accuracy of recommendations. Traditional techniques require the two domains to be linked by shared characteristics associated to either users or items. In collaborative filtering (CF) this happens when the two domains have overlapping users or item (at least partially). Recently, Li et al. [7] introduced codebook transfer (CBT), a cross-domain CF technique based on co-clustering, and presented experimental results showing that CBT is able to transfer knowledge between non-overlapping domains. In this paper, we disprove these results and show that CBT does not transfer knowledge when source and target domains do not overlap."}
{"_id": "1d5ca5dda6526012738276f3e58cd752a30b4652", "title": "Shreds: Fine-Grained Execution Units with Private Memory", "text": "Once attackers have injected code into a victim program's address space, or found a memory disclosure vulnerability, all sensitive data and code inside that address space are subject to thefts or manipulation. Unfortunately, this broad type of attack is hard to prevent, even if software developers wish to cooperate, mostly because the conventional memory protection only works at process level and previously proposed in-process memory isolation methods are not practical for wide adoption. We propose shreds, a set of OS-backed programming primitives that addresses developers' currently unmet needs for fine-grained, convenient, and efficient protection of sensitive memory content against in-process adversaries. A shred can be viewed as a flexibly defined segment of a thread execution (hence the name). Each shred is associated with a protected memory pool, which is accessible only to code running in the shred. Unlike previous works, shreds offer in-process private memory without relying on separate page tables, nested paging, or even modified hardware. Plus, shreds provide the essential data flow and control flow guarantees for running sensitive code. We have built the compiler toolchain and the OS module that together enable shreds on Linux. We demonstrated the usage of shreds and evaluated their performance using 5 non-trivial open source software, including OpenSSH and Lighttpd. The results show that shreds are fairly easy to use and incur low runtime overhead (4.67%)."}
{"_id": "41a08affbb5c847d6f2a82d5ae4be9ed5df52353", "title": "Node Classification in Signed Social Networks", "text": "Node classification in social networks has been proven to be useful in many real-world applications. The vast majority of existing algorithms focus on unsigned social networks (or social networks with only positive links), while little work exists for signed social networks. It is evident from recent developments in signed social network analysis that negative links have added value over positive links. Therefore, the incorporation of negative links has the potential to benefit various analytical tasks. In this paper, we study the novel problem of node classification in signed social networks. We provide a principled way to mathematically model positive and negative links simultaneously and propose a novel framework NCSSN for node classification in signed social networks. Experimental results on real-world signed social network datasets demonstrate the effectiveness of the proposed framework NCSSN. Further experiments are conducted to gain a deeper understanding of the importance of negative links for NCSSN."}
{"_id": "3667a56bd911445a6c0fc9447771d964e6831146", "title": "Dilemmas in a General Theory of Planning *", "text": "The search for scientific bases for confronting problems of social policy is bound to fail, because of the nature of these problems. They are \"wicked\" problems, whereas science has developed to deal with \"tame\" problems. Policy problems cannot be definitively described. Moreover, in a pluralistic society there is nothing like the undisputable public good; there is no objective definition of equity; policies that respond to social problems cannot be meaningfully correct or false; and it makes no sense to talk about \"optinaal solutions\" to social probIems unless severe qualifications are imposed first. Even worse, there are no \"solutions\" in the sense of definitive and objective answers. George Bernard Shaw diagnosed the case several years ago; in more recent times popular protest may have already become a social movement . Shaw averred that \"every profession is a conspiracy against the laity.\" The contemporary publics are responding as though they have made the same discovery. Few o f the modern professionals seem to be immune f rom the popular a t t a c k whether they be social workers, educators, housers, public health officials, policemen, city planners, highway engineers or physicians. Our restive clients have been telling us that they don ' t like the educational programs that schoolmen have been offering, the redevelopment projects urban renewal agencies have been proposing, the lawenforcement styles o f the police, the administrative behavior o f the welfare agencies, the locations o f the highways, and so on. In the courts, the streets, and the political campaigns, we've been hearing ever-louder public protests against the professions' diagnoses o f the clients' problems, against professionally designed governmental programs, against professionally certified standards for the public services. It does seem odd that this attack should be coming just when professionals in * This is a modification of a paper presented to the Panel on Policy Sciences, American Association for the Advancement of Science, Boston, December 1969."}
{"_id": "67238f415dfb2024298082d8c9f7eb529fe191f5", "title": "Where Do Rewards Come From ?", "text": "Reinforcement learning has achieved broad and successful application in cognitive science in part because of its general formulation of the adaptive control problem as the maximization of a scalar reward function. The computational reinforcement learning framework is motivated by correspondences to animal reward processes, but it leaves the source and nature of the rewards unspecified. This paper advances a general computational framework for reward that places it in an evolutionary context, formulating a notion of an optimal reward function given a fitness function and some distribution of environments. Novel results from computational experiments show how traditional notions of extrinsically and intrinsically motivated behaviors may emerge from such optimal reward functions. In the experiments these rewards are discovered through automated search rather than crafted by hand. The precise form of the optimal reward functions need not bear a direct relationship to the fitness function, but may nonetheless confer significant advantages over rewards based only on fitness."}
{"_id": "355bd3078c7bc72d86eb97bbfde4383d3e5a67dc", "title": "Team-based learning: systematic research review.", "text": "Team-based learning (TBL) is an active learning method developed to help students achieve course objectives while learning how to function in teams. Many faculty members have adopted TBL because it is a unique teaching method, but evidence about its effectiveness is unclear. Seventeen original studies on TBL are presented in this systematic review of research. The studies include descriptive, explanatory, and experimental research published from 2003 to 2011 in the nursing, medical, education, and business literature. Generally, students are satisfied with TBL and student engagement is higher in TBL classes. Evidence also exists that students in TBL classes score higher on examinations. However, further high-quality experimental studies are needed to confirm that TBL positively affects examination scores and other learning outcomes and to determine whether TBL produces students who have the ability to function well in groups."}
{"_id": "e683a8c6eca64d7736188996bcc79c7598ff7aa6", "title": "Demographics, weather and online reviews: a study of restaurant recommendations", "text": "Online recommendation sites are valuable information sources that people contribute to, and often use to choose restaurants. However, little is known about the dynamics behind participation in these online communities and how the recommendations in these communities are formed. In this work, we take a first look at online restaurant recommendation communities to study what endogenous (i.e., related to entities being reviewed) and exogenous factors influence people's participation in the communities, and to what extent. We analyze an online community corpus of 840K restaurants and their 1.1M associated reviews from 2002 to 2011, spread across every U.S. state. We construct models for number of reviews and ratings by community members, based on several dimensions of endogenous and exogenous factors. We find that while endogenous factors such as restaurant attributes (e.g., meal, price, service) affect recommendations, surprisingly, exogenous factors such as demographics (e.g., neighborhood diversity, education) and weather (e.g., temperature, rain, snow, season) also exert a significant effect on reviews. We find that many of the effects in online communities can be explained using offline theories from experimental psychology. Our study is the first to look at exogenous factors and how it related to online online restaurant reviews. It has implications for designing online recommendation sites, and in general, social media and online communities."}
{"_id": "c4492b7114e520e5dd04582bd84b5bfee3b91610", "title": "Action understanding and active inference", "text": "We have suggested that the mirror-neuron system might be usefully understood as implementing Bayes-optimal perception of actions emitted by oneself or others. To substantiate this claim, we present neuronal simulations that show the same representations can prescribe motor behavior and encode motor intentions during action\u2013observation. These simulations are based on the free-energy formulation of active inference, which is formally related to predictive coding. In this scheme, (generalised) states of the world are represented as trajectories. When these states include motor trajectories they implicitly entail intentions (future motor states). Optimizing the representation of these intentions enables predictive coding in a prospective sense. Crucially, the same generative models used to make predictions can be deployed to predict the actions of self or others by simply changing the bias or precision (i.e. attention) afforded to proprioceptive signals. We illustrate these points using simulations of handwriting to illustrate neuronally plausible generation and recognition of itinerant (wandering) motor trajectories. We then use the same simulations to produce synthetic electrophysiological responses to violations of intentional expectations. Our results affirm that a Bayes-optimal approach provides a principled framework, which accommodates current thinking about the mirror-neuron system. Furthermore, it endorses the general formulation of action as active inference."}
{"_id": "aaa83a3a7e35e2e1ec2c914ee131124588fafaa2", "title": "Planning optimal grasps", "text": "In this paper we will address the problem of planning optimal grasps. Two general optimality criteria, that consider the total finger force and the maximum finger force will be introduced and discussed. Moreover their formalization, using various metrics on a space of generalized forces, will be detailed. The geometric interpretation of the two criteria will lead to an efficient planning algorithm. An example of its use in a robotic environment equipped with two-jaw and three-jaw grippers will also be shown."}
{"_id": "5c74d80a3a33ebdb61f0cd7c703211b4867acdfc", "title": "Non-Uniform Sinusoidally Modulated Half-Mode Leaky-Wave Lines for Near-Field Focusing Pattern Synthesis", "text": "A novel non-uniform sinusoidally modulated half-mode microstrip structure with application to near-field focused leaky-wave radiation in the backward Fresnel zone is proposed. First, it is presented a dispersion analysis of the constituent backward leaky wave in the sinusoidally modulated unit cell in half-width microstrip technology. This information is then used to design a finite non-uniform line that focuses the radiated fields at the desired point. Finally, eight similar line sources are arranged in a radial array to generate a three-dimensional focused spot located at the desired focal length over the simple central coaxial feeding. Simulated and experimental results are presented to validate the proposed simple approach."}
{"_id": "26441c387cac470b16c9114f3b3efe5dd37e1064", "title": "mPHASiS: Mobile patient healthcare and sensor information system", "text": "Pervasive care and chronic disease management to reduce institutionalization is a priority for most western countries. The realization of next generation ubiquitous and pervasive healthcare systems will be a challenging task, as these systems are likely to involve a complex structure. Such systems will consist of various devices, ranging from resource-constrained sensors and actuators to complex multimedia devices, supporting time critical applications. This is further compounded by cultural and socio-economical factors that must be addressed for next generation healthcare systems to be widely diffused and used. In this study, the requirements for a vital sign monitoring solution space is derived and mPHASiS is developed based on these requirements. mPHASiS is an end to end solution not only providing sensor networking and vital sign monitoring but also closing the loop by signaling alert messages to the caregiver and allowing pervasive access to vital signs of a patient using smartphones over a heterogeneous network. A role based access control mechanism is developed to limit access to sensitive data. The end to end delay and delay variations for both vital sign data collection and pervasive access are analyzed. mPHASiS is developed as a complementary solution augmenting functionality of a hospital information system and can be loosely couple with the hospital information system using webservices. & 2010 Elsevier Ltd. All rights reserved."}
{"_id": "0706225eeac0f855b19c365313db61252ecde0d7", "title": "An Integrated Experimental Environment for Distributed Systems and Networks", "text": "Three experimental environments traditionally support network and distributed systems research: network emulators, network simulators, and live networks. The continued use of multiple approaches highlights both the value and inadequacy of each. Netbed, a descendant of Emulab, provides an experimentation facility that integrates these approaches, allowing researchers to configure and access networks composed of emulated, simulated, and wide-area nodes and links. Netbed's primary goals are ease of use, control, and realism, achieved through consistent use of virtualization and abstraction.By providing operating system-like services, such as resource allocation and scheduling, and by virtualizing heterogeneous resources, Netbed acts as a virtual machine for network experimentation. This paper presents Netbed's overall design and implementation and demonstrates its ability to improve experimental automation and efficiency. These, in turn, lead to new methods of experimentation, including automated parameter-space studies within emulation and straightforward comparisons of simulated, emulated, and wide-area scenarios."}
{"_id": "3e36eb936002a59b81d8abb4548dc2c42a29b743", "title": "Security measures in automation systems-a practice-oriented approach", "text": "Often security is seen as an add-on service for automation systems that frequently conflicts with other goals such as efficient transmission or resource limitations. This article goes for a practice-oriented approach to security in automation systems. It analysis common threats to automation systems and automation networks in particular, sets up a model to classify systems with respect to security and discusses common measures available at different system levels. The description of measures should allow to rate the effects on the overall system security"}
{"_id": "40f0bc39b9c9a144c3d41994ed300b3ae7ad3b65", "title": "Learner Engagement Measurement and Classification in 1:1 Learning", "text": "We explore the feasibility of measuring learner engagement and classifying the engagement level based on machine learning applied on data from 2D/3D camera sensors and eye trackers in a 1:1 learning setting. Our results are based on nine pilot sessions held in a local high school where we recorded features related to student engagement while consuming educational content. We label the collected data as Engaged or NotEngaged while observing videos of the students and their screens. Based on the collected data, perceptual user features (e.g., body posture, facial points, and gaze) are extracted. We use feature selection and classification methods to produce classifiers that can detect whether a student is engaged or not. Accuracies of up to 85-95% are achieved on the collected dataset. We believe our work pioneers in the successful classification of student engagement based on perceptual user features in a 1:1 authentic learning setting."}
{"_id": "57d7b7b5e5da602f160dff6d424e39149a922d39", "title": "Low-Cost Gas Sensors Produced by the Graphite Line-Patterning Technique Applied to Monitoring Banana Ripeness", "text": "A low-cost sensor array system for banana ripeness monitoring is presented. The sensors are constructed by employing a graphite line-patterning technique (LPT) to print interdigitated graphite electrodes on tracing paper and then coating the printed area with a thin film of polyaniline (PANI) by in-situ polymerization as the gas-sensitive layer. The PANI layers were used for the detection of volatile organic compounds (VOCs), including ethylene, emitted during ripening. The influence of the various acid dopants, hydrochloric acid (HCl), methanesulfonic acid (MSA), p-toluenesulfonic acid (TSA) and camphorsulfonic acid (CSA), on the electrical properties of the thin film of PANI adsorbed on the electrodes was also studied. The extent of doping of the films was investigated by UV-Vis absorption spectroscopy and tests showed that the type of dopant plays an important role in the performance of these low-cost sensors. The array of three sensors, without the PANI-HCl sensor, was able to produce a distinct pattern of signals, taken as a signature (fingerprint) that can be used to characterize bananas ripeness."}
{"_id": "e0534bfb477c5a82e98d0cb386ae3eb31d349c91", "title": "Cellular and molecular mechanisms of hepatocellular carcinoma: an update", "text": "Hepatocellular carcinoma (HCC) is the most common primary malignant tumor that accounts for\u00a0~80\u00a0% of all liver cancer cases worldwide. It is a multifactorial disease caused by a variety of risk factors and often develops in the background of underlying cirrhosis. A number of cellular phenomena, such as tumor microenvironment, inflammation, oxidative stress, and hypoxia act in concert with various molecular events to facilitate tumor initiation, progression, and metastasis. The emergence of microRNAs and molecular-targeted therapies adds a new dimension in our efforts to combat this deadly disease. Intense research in this multitude of areas has led to significant progress in our understanding of cellular processes and molecular mechanisms that occur during multistage events that lead to hepatocarcinogenesis. In this review, we discuss the current knowledge of HCC, focusing mainly on advances that have occurred during the past 5\u00a0years and on the development of novel therapeutics for liver cancer."}
{"_id": "baa643c70eb214852e83360d641afe518e8a4404", "title": "A Parallel Random Forest Algorithm for Big Data in a Spark Cloud Computing Environment", "text": "With the emergence of the big data age, the issue of how to obtain valuable knowledge from a dataset efficiently and accurately has attracted increasingly attention from both academia and industry. This paper presents a Parallel Random Forest (PRF) algorithm for big data on the Apache Spark platform. The PRF algorithm is optimized based on a hybrid approach combining data-parallel and task-parallel optimization. From the perspective of data-parallel optimization, a vertical data-partitioning method is performed to reduce the data communication cost effectively, and a data-multiplexing method is performed is performed to allow the training dataset to be reused and diminish the volume of data. From the perspective of task-parallel optimization, a dual parallel approach is carried out in the training process of RF, and a task Directed Acyclic Graph (DAG) is created according to the parallel training process of PRF and the dependence of the Resilient Distributed Datasets (RDD) objects. Then, different task schedulers are invoked for the tasks in the DAG. Moreover, to improve the algorithm's accuracy for large, high-dimensional, and noisy data, we perform a dimension-reduction approach in the training process and a weighted voting approach in the prediction process prior to parallelization. Extensive experimental results indicate the superiority and notable advantages of the PRF algorithm over the relevant algorithms implemented by Spark MLlib and other studies in terms of the classification accuracy, performance, and scalability. With the expansion of the scale of the random forest model and the Spark cluster, the advantage of the PRF algorithm is more obvious."}
{"_id": "8b74a32cebb5faf131595496f6470ff9c2c33468", "title": "Personality and motivations associated with Facebook use", "text": "Facebook is quickly becoming one of the most popular tools for social communication. However, Facebook is somewhat different from other Social Networking Sites as it demonstrates an offline-to-online trend; that is, the majority of Facebook Friends are met offline and then added later. The present research investigated how the Five-Factor Model of personality relates to Facebook use. Despite some expected trends regarding Extraversion and Openness to Experience, results indicated that personality factors were not as influential as previous literature would suggest. The results also indicated that a motivation to communicate was influential in terms of Facebook use. It is suggested that different motivations may be influential in the decision to use tools such as Facebook, especially when individual functions of Facebook are being considered. \u00d3 2008 Elsevier Ltd. All rights reserved. 1. Personality correlates and competency factors associated"}
{"_id": "af15e4f7fadc7e44fdac502ae122f3eef22f5807", "title": "Personality and the prediction of consequential outcomes.", "text": "Personality has consequences. Measures of personality have contemporaneous and predictive relations to a variety of important outcomes. Using the Big Five factors as heuristics for organizing the research literature, numerous consequential relations are identified. Personality dispositions are associated with happiness, physical and psychological health, spirituality, and identity at an individual level; associated with the quality of relationships with peers, family, and romantic others at an interpersonal level; and associated with occupational choice, satisfaction, and performance, as well as community involvement, criminal activity, and political ideology at a social institutional level."}
{"_id": "423fff94db2be3ddef5e3204338d2111776eafea", "title": "Rhythms of social interaction: messaging within a massive online network", "text": "We have analyzed the fully-anonymized headers of 362 million messages exchanged by 4.2 million users of Facebook, an online social network of college students, during a 26 month interval. The data reveal a number of strong daily and weekly regularities which provide insights into the time use of college students and their social lives, including seasonal variations. We also examined how factors such as school affiliation and informal online \u201cfriend\u201d lists affect the observed behavior and temporal patterns. Finally, we show that Facebook users appear to be clustered by school with respect to their temporal messaging patterns."}
{"_id": "4316fc1e70781ab5d52396bb23ed4d2508c13392", "title": "Match makers and deal breakers: analyses of assortative mating in newlywed couples.", "text": "We conducted a comprehensive analysis of assortative mating (i.e., the similarity between wives and husbands on a given characteristic) in a newlywed sample. These newlyweds showed (a) strong similarity in age, religiousness, and political orientation; (b) moderate similarity in education and verbal intelligence; (c) modest similarity in values; and (d) little similarity in matrix reasoning, self- and spouse-rated personality, emotional experience and expression, and attachment. Further analyses established that similarity was not simply due to background variables such as age and education and reflected initial assortment (i.e., similarity at the time of marriage) rather than convergence (i.e., increasing similarity with time). Finally, marital satisfaction primarily was a function of the rater's own traits and showed little relation to spousal similarity."}
{"_id": "ac923df3a1e5dfb8a5b187a6475fa49af699362e", "title": "INTRINSIC AND EXTRINSIC MOTIVATION 1", "text": "A central tenet of economics is that individuals respond to incentives. For psychologists and sociologists, by contrast, rewards and punishments are often counterproductive, because they undermine \u201cintrinsic motivation\u201d. We reconcile these two views, showing how performance incentives offered by an informed principal (manager, teacher, parent) can adversely impact an agent\u2019s (worker, child) perception of the task, or of his own abilities. Incentives are then only weak reinforcers in the short run, and negative reinforcers in the long run. We also study the effects of empowerment, help and excuses on motivation, as well as situations of ego-bashing reflecting a battle for dominance within a relationship."}
{"_id": "e142e52af456af42c6dfa55a15f11706b0c95531", "title": "Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning", "text": "We present Deep Voice 3, a fully-convolutional attention-based neural textto-speech (TTS) system. Deep Voice 3 matches state-of-the-art neural speech synthesis systems in naturalness while training an order of magnitude faster. We scale Deep Voice 3 to dataset sizes unprecedented for TTS, training on more than eight hundred hours of audio from over two thousand speakers. In addition, we identify common error modes of attention-based speech synthesis networks, demonstrate how to mitigate them, and compare several different waveform synthesis methods. We also describe how to scale inference to ten million queries per day on a single GPU server."}
{"_id": "3aa883df8963851c56e651218e178eb00900f833", "title": "Taking spiritual history in clinical practice: a systematic review of instruments.", "text": "BACKGROUND\nTo facilitate the addressing of spirituality in clinical practice, several authors have created instruments for obtaining a spiritual history. However, in only a few studies have authors compared these instruments. The aim of this study was to compare the most commonly used instruments for taking a spiritual history in a clinical setting.\n\n\nMETHODS\nA systematic review of spiritual history assessment was conducted in five stages: identification of instruments used in the literature (databases searching); relevant articles from title and initial abstract review; exclusion and Inclusion criteria; full text retrieval and final analysis of each instrument.\n\n\nRESULTS\nA total of 2,641 articles were retrieved and after the analysis, 25 instruments were included. The authors independently evaluated each instrument on 16 different aspects. The instruments with the greatest scores in the final analysis were FICA, SPIRITual History, FAITH, HOPE, and the Royal College of Psychiatrists. Concerning all 25 instruments, 20 of 25 inquire about the influence of spirituality on a person's life and 17 address religious coping. Nevertheless, only four inquire about medical practices not allowed, six deal with terminal events, nine have mnemonics to facilitate their use, and five were validated.\n\n\nCONCLUSIONS\nFICA, SPIRITual History, FAITH, HOPE, and Royal College of Psychiatrists scored higher in our analysis. The use of each instrument must be individualized, according to the professional reality, time available, patient profile, and settings."}
{"_id": "18592d8609cab37e21abafb3c3ce98e541828f96", "title": "A Metadata Catalog Service for Data Intensive Applications", "text": "Advances in computational, storage and network technologies as well as middle ware such as the Globus Toolkit allow scientists to expand the sophistication and scope of data-intensive applications. These applications produce and analyze terabytes and petabytes of data that are distributed in millions of files or objects. To manage these large data sets efficiently, metadata or descriptive information about the data needs to be managed. There are various types of metadata, and it is likely that a range of metadata services will exist in Grid environments that are specialized for particular types of metadata cataloguing and discovery. In this paper, we present the design of a Metadata Catalog Service (MCS) that provides a mechanism for storing and accessing descriptive metadata and allows users to query for data items based on desired attributes. We describe our experience in using the MCS with several applications and present a scalability study of the service."}
{"_id": "36548a8574d82530ccee5070826cbd960738b7cd", "title": "Using JFLAP to interact with theorems in automata theory", "text": "An automata theory course can be taught in an interactive, hands-on manner using a computer. At Duke we have been using the software tool JFLAP to provide interaction and feedback in CPS 140, our automata theory course. JFLAP is a tool for designing and running nondeterministic versions of finite automata, pushdown automata, and Turing machines. Recently, we have enhanced JFLAP to allow one to study the proofs of several theorems that focus on conversions of languages, from one form to another, such as converting an NFA to a DFA and then to a minimum state DFA. In addition, our enhancements combined with other tools allow one to interactively study LL and LR parsing methods."}
{"_id": "d1c407b4c549da6fecd51de95b80ad345bf2bb85", "title": "Comparison of Wavelet and Short Time Fourier Transform Methods in the Analysis of EMG Signals", "text": "The electromyographic (EMG) signal observed at the surface of the skin is the sum of thousands of small potentials generated in the muscle fiber. There are many approaches to analyzing EMG signals with spectral techniques. In this study, the short time Fourier Transform (STFT) and wavelet transform (WT) were applied to EMG signals and coefficients were obtained. In these studies, MATLAB 7.01 program was used. According to obtained results, it was determined that WT is more useful than STFT in the fields of eliminating of resolution problem and providing of changeable resolution during analyze."}
{"_id": "a1b72bdedc5fe96f582b87a319b3c6a4164a210d", "title": "Obesity is associated with macrophage accumulation in adipose tissue.", "text": "Obesity alters adipose tissue metabolic and endocrine function and leads to an increased release of fatty acids, hormones, and proinflammatory molecules that contribute to obesity associated complications. To further characterize the changes that occur in adipose tissue with increasing adiposity, we profiled transcript expression in perigonadal adipose tissue from groups of mice in which adiposity varied due to sex, diet, and the obesity-related mutations agouti (Ay) and obese (Lepob). We found that the expression of 1,304 transcripts correlated significantly with body mass. Of the 100 most significantly correlated genes, 30% encoded proteins that are characteristic of macrophages and are positively correlated with body mass. Immunohistochemical analysis of perigonadal, perirenal, mesenteric, and subcutaneous adipose tissue revealed that the percentage of cells expressing the macrophage marker F4/80 (F4/80+) was significantly and positively correlated with both adipocyte size and body mass. Similar relationships were found in human subcutaneous adipose tissue stained for the macrophage antigen CD68. Bone marrow transplant studies and quantitation of macrophage number in adipose tissue from macrophage-deficient (Csf1op/op) mice suggest that these F4/80+ cells are CSF-1 dependent, bone marrow-derived adipose tissue macrophages. Expression analysis of macrophage and nonmacrophage cell populations isolated from adipose tissue demonstrates that adipose tissue macrophages are responsible for almost all adipose tissue TNF-alpha expression and significant amounts of iNOS and IL-6 expression. Adipose tissue macrophage numbers increase in obesity and participate in inflammatory pathways that are activated in adipose tissues of obese individuals."}
{"_id": "9b066d16f6c28315dddb589fa366f81cbd2a18b0", "title": "A New Prime and Probe Cache Side-Channel Attack for Cloud Computing", "text": "Cloud computing is considered one of the most dominant paradigms in the Information Technology (IT) industry nowadays. It supports multi-tenancy to fulfil future increasing demands for accessing and using resources provisioned over the Internet. However, multi-tenancy in cloud computing has unique vulnerabilities such as clients' co-residence and virtual machine physical co-residency. Physical co-residency of virtual machines can facilitate attackers with an ability to interfere with another virtual machine running on the same physical machine due to an insufficient logical isolation. In the worst scenario, attackers can exfiltrate sensitive information of victims on the same physical machine by using hardware side-channels. There are various types of side-channels attacks, which are classified according to hardware medium they target and exploit, for instance, cache side-channel attacks. CPU caches are one of the most hardware devices targeted by adversaries because it has high-rate interactions and sharing between processes. This paper presents a new Prime and Probe cache side-channel attack, which can prime physical addresses. These addresses are translated form virtual addresses used by a virtual machine. Then, time is measured to access these addresses and it will be varied according to where the data is located. If it is in the CPU cache, the time will be less than in the main memory. The attack was implemented in a server machine comparable to cloud environment servers. The results show that the attack needs less effort and time than other types and is easy to be launched."}
{"_id": "e3cf27b3ab900708fa58466dfe08623731730faf", "title": "Impossibility of Distributed Consensus with One Faulty Process", "text": "The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the \u201cByzantine Generals\u201d problem."}
{"_id": "a0abc128adbd02ad1f98f2fba22974c3692b427b", "title": "Estimating the relationship between isoseismal area and earthquake magnitude by a hybrid fuzzy-neural-network method", "text": "Utilizing information diffusion method and artificial neural networks, we propose in this paper a hybrid fuzzy neural network to estimate the relationship between isoseismal area and earthquake magnitude. We focus on the study of incompleteness and contradictory nature of patterns in scanty historical earthquake records. Information diffusion method is employed to construct fuzzy relationships which are equal to the number of observations. Integration of the relationships can change the contradictory patterns into more compatible ones which, in turn, can smoothly and quickly train the feedforward neural network with backpropagation algorithm (BP) to obtain the final relationship. A practical application is employed to show the superiority of the model. @ 1999 Elsevier Science B.V. All rights reserved."}
{"_id": "2af4ce08453c65a79d79da2d873287270fdb475e", "title": "Mining significant graph patterns by leap search", "text": "With ever-increasing amounts of graph data from disparate sources, there has been a strong need for exploiting significant graph patterns with user-specified objective functions. Most objective functions are not antimonotonic, which could fail all of frequency-centric graph mining algorithms. In this paper, we give the first comprehensive study on general mining method aiming to find most significant patterns directly. Our new mining framework, called LEAP (Descending Leap Mine), is developed to exploit the correlation between structural similarity and significance similarity in a way that the most significant pattern could be identified quickly by searching dissimilar graph patterns. Two novel concepts, structural leap search and frequency descending mining, are proposed to support leap search in graph pattern space. Our new mining method revealed that the widely adopted branch-and-bound search in data mining literature is indeed not the best, thus sketching a new picture on scalable graph pattern discovery. Empirical results show that LEAP achieves orders of magnitude speedup in comparison with the state-of-the-art method. Furthermore, graph classifiers built on mined patterns outperform the up-to-date graph kernel method in terms of efficiency and accuracy, demonstrating the high promise of such patterns."}
{"_id": "7ba5d4220f608e5fa8899fb92853bbc99875768e", "title": "Mapping the lateral extent of human cadaver decomposition with soil chemistry.", "text": "Soil below decomposing cadavers may have a different lateral spatial extent depending upon whether scavengers have access to the human cadaver or not. We examined the lateral spatial extent of decomposition products to a depth of 7cm of soils beneath two decomposing corpses, one in which the subject was autopsied, unclothed and placed under a wire cage to restrict scavenger access and one in which the subject was not autopsied, unclothed and exposed to scavengers. The two bodies had accumulated degree days (ADD) of 5799 and 5469 and post mortem interval (PMI) of 288 and 248d, respectively. The spatial extent for dissolved organic carbon (DOC) and organic nitrogen (DON) for both bodies was large but similar suggesting some movement off site for both compounds. Mean DOC was 1087\u00b1727 and 1484\u00b11236\u03bcgg(-1) dry soil under the two corpses relative to 150\u00b168\u03bcgg(-1) in upslope control soils. Sulfate tended to have 'hot spots' of lower values relative to the control soils indicative of anaerobic respiration. pH was lower and electrical conductivity was higher in the soil under both decomposing cadavers relative to control soils. Some of the nutrients examined downslope of the human remains were significantly higher than control soils upslope suggesting movement of decomposition products off-site which could be an important factor when using human remains detector dogs."}
{"_id": "1c3f90f5fd10dc2d813106069d0843dd741eff21", "title": "Text Steganography by Changing Words Spelling", "text": "One of the important issues in security fields is hidden exchange of information. There are different methods for this purpose such as cryptography and steganography. Steganography is a method of hiding data within a cover media so that other individuals fail to realize their existence. In this paper a new method for steganography in English texts is proposed. In this method the US and UK spellings of words substituted in order to hide data in an English text. For example \"color\" has different spelling in UK (colour) and US (color). Therefore the data can be hidden in the text by substituting these words."}
{"_id": "ce6a0e1eb76d55ff7ee38f1536737839b8d995fa", "title": "Analysis and Design of Tag Antenna Based UHF RFID for Libraries", "text": "A kind of UHF RFID tag antenna for library management is designed. The series of design requirements are proposed through rich analytics. Refer to the management model of intelligent library based on UHF RFID. For the antenna, the size is mm mm 3 90 \uf0b4 , adopting T-shaped matching which good for adjusting of antenna impedance matching, the center frequency by simulations is 918 MHz under application environment, Return Loss is less than -10dB in the frequency band of 860MHz~960MHz , the minimum value of return loss is -24.4dB . Simulation results show that the tag antenna has a good performance under application environment for library."}
{"_id": "b07bae86dcd40e74674f97c0419d6c2543ecd7c0", "title": "Antibiotic resistance is ancient", "text": "The discovery of antibiotics more than 70\u2009years ago initiated a period of drug innovation and implementation in human and animal health and agriculture. These discoveries were tempered in all cases by the emergence of resistant microbes. This history has been interpreted to mean that antibiotic resistance in pathogenic bacteria is a modern phenomenon; this view is reinforced by the fact that collections of microbes that predate the antibiotic era are highly susceptible to antibiotics. Here we report targeted metagenomic analyses of rigorously authenticated ancient DNA from 30,000-year-old Beringian permafrost sediments and the identification of a highly diverse collection of genes encoding resistance to \u03b2-lactam, tetracycline and glycopeptide antibiotics. Structure and function studies on the complete vancomycin resistance element VanA confirmed its similarity to modern variants. These results show conclusively that antibiotic resistance is a natural phenomenon that predates the modern selective pressure of clinical antibiotic use."}
{"_id": "2588acc7a730d864f84d4e1a050070ff873b03d5", "title": "Action Recognition by an Attention-Aware Temporal Weighted Convolutional Neural Network", "text": "Research in human action recognition has accelerated significantly since the introduction of powerful machine learning tools such as Convolutional Neural Networks (CNNs). However, effective and efficient methods for incorporation of temporal information into CNNs are still being actively explored in the recent literature. Motivated by the popular recurrent attention models in the research area of natural language processing, we propose the Attention-aware Temporal Weighted CNN (ATW CNN) for action recognition in videos, which embeds a visual attention model into a temporal weighted multi-stream CNN. This attention model is simply implemented as temporal weighting yet it effectively boosts the recognition performance of video representations. Besides, each stream in the proposed ATW CNN framework is capable of end-to-end training, with both network parameters and temporal weights optimized by stochastic gradient descent (SGD) with back-propagation. Our experimental results on the UCF-101 and HMDB-51 datasets showed that the proposed attention mechanism contributes substantially to the performance gains with the more discriminative snippets by focusing on more relevant video segments."}
{"_id": "9606fb370f8d4a99d7bd33e5ada15b8232ec671d", "title": "Comparison of complex network analysis software: Citespace, SCI2 and Gephi", "text": "Big Data Analysis (BDA) has attracted considerable interest and curiosity from scientists of various fields recently. As big size and complexity of big data, it is pivotal to uncover hidden patterns, bursts of activity, correlations and laws of it. Complex network analysis could be effective method for this purpose, because of its powerful data organization and visualization ability. Besides the general complex software (Gephi), some bibliometrics software (Citespace and SCI2) could also be used for this analysis, due to their powerful data process and visualization functions. This paper presents a comparison of Citespace, SCI2 and Gephi from the following three aspects: (1) Data preprocessing. Citespace is time-consuming and laborious on merging synonyms. SCI2 is efficient and suitable for massive data. Both of them could remove duplicate records. However, Gephi lacks these. (2) Network extraction. Citespace pays more attention on temporal analysis, but SCI2 and Gephi pay less attention on it. Besides, Citespace and SCI2 could provide pruning algorithms highlight the main structure of social network, but Gephi lacks this. (3) Visualization. In Citespace, co-occurrence network could present time, frequency and betweenness centrality simultaneously; cluster view could label the clusters with phrases. However, SCI2 and Gephi provide more various layout algorithms to present network. Besides, they have a better edit-ability than Citespace on generated network."}
{"_id": "cf5caa2d22e0b75f4ff4058124dd12962d2bb875", "title": "Optimum Design of Traveling-Wave SIW Slot Array Antennas", "text": "This paper presents an optimum design method based on the Method of Least Squares (MLS) for the traveling-wave substrate integrated waveguide (SIW) slot array antenna. Elliot's design formulas for dielectric filled waveguide including external and internal mutual coupling are used for the design by means of the equivalent waveguide. In order to achieve an efficient antenna, design formulas are extended to nonuniformly spaced slots. An error function consisting of four terms is formulated and then minimized with respect to the design parameters namely lengths, offsets, spacings and excitations of slots using combination of genetic algorithm and conjugate gradient method (GA-CG). A linear slot array at $X$ -band is designed, simulated and fabricated for verification. Results of the proposed method, simulation software and measurement are in good agreement."}
{"_id": "1bed30d161683d279780aee34619f94a860fa973", "title": "Efficient virtual memory for big memory servers", "text": "Our analysis shows that many \"big-memory\" server workloads, such as databases, in-memory caches, and graph analytics, pay a high cost for page-based virtual memory. They consume as much as 10% of execution cycles on TLB misses, even using large pages. On the other hand, we find that these workloads use read-write permission on most pages, are provisioned not to swap, and rarely benefit from the full flexibility of page-based virtual memory.\n To remove the TLB miss overhead for big-memory workloads, we propose mapping part of a process's linear virtual address space with a direct segment, while page mapping the rest of the virtual address space. Direct segments use minimal hardware---base, limit and offset registers per core---to map contiguous virtual memory regions directly to contiguous physical memory. They eliminate the possibility of TLB misses for key data structures such as database buffer pools and in-memory key-value stores. Memory mapped by a direct segment may be converted back to paging when needed.\n We prototype direct-segment software support for x86-64 in Linux and emulate direct-segment hardware. For our workloads, direct segments eliminate almost all TLB misses and reduce the execution time wasted on TLB misses to less than 0.5%."}
{"_id": "444de73739fc0f1010869389df4bde605fc6f0ae", "title": "Development and characterization of new generation panel fan-out (P-FO) packaging technology", "text": "Wafer Level Packaging (WLP) is a packaging technology focusing on integrated circuit (IC) packaging at wafer level instead of die level. WLP essentially consists of IC foundry fabrication process and subsequent device interconnection and back-end passivation process. The general wafer level packages (WLPs) are designed for fan-in chip scale packaging but the shrinkage of pad pitch and size at the chip to package interface is much faster than the shrinkage at the package to board interface. The embedded package technologies are developed to provide larger package size in order to offer a sufficient area to accommodate the redistribution interconnection with standard pitches at the packaging. Embedded technology (FO-WLP / eWLB and embedded die) are unique technological breakthroughs enabling further growth of next-generation packaging modules and brings several benefits such as small form factor of packages, electrical and thermal performance improvement and cost reduction for IDMs. Compare to FO-WLP, panel level packaging (PLP) has the advantage of using large panel processing compare with FO-WLP while 12\" is commonly used. Hence panel fan-out package (P-FO) has a better cost down potential than FO-WLP. To demonstrate the new generation panel fan-out package conception, the several hetero-technologies are integrated into the packaging which composed of PCB, LCD, Bumping and FOWLP has substantially broadened the spectrum of the well established wafer level packaging (WLP) technologies. The new generation panel fan-out packaging combines the different infrastructures from several fields to realize the conception. The achieved results for the package, such as the large panel size process and precise die alignment capability point to the outstanding potential of this novel hetero-system. In this paper, mixing PCB, semiconductor back-end, semiconductor WLP and LCD Gen 2.5 (370\u00d7470mm) size processing technologies combined with innovation as well as integration of P-FO techniques are proposed, including high accuracy die bonding and die shift compensation at film lamination, lower warpage sheet form film lamination, good copper trace plating uniformity control at large panel area and also precise photolithographic technique. In addition to technology elaboration and process depiction, relevant experimental data and final reliability test results on board level have also been thoroughly demonstrated and discussed."}
{"_id": "a422f0f6edb58560ed399d1e7acb6d972f3d7fff", "title": "Systems Competition and Network Effects", "text": "Many products have little or no value in isolation, but generate value when combined with others. Examples include: nuts and bolts, which together provide fastening services; home audio or video components and programming, which together provide entertainment services; automobiles, repair parts and service, which together provide transportation services; facsimile machines and their associated communications protocols, which together provide fax services; automatic teller machines and ATM cards, which together provide transaction services; camera bodies and lenses, which together provide photographic services. These are all examples of products that are strongly complementary, although they need not be consumed in fixed proportions. We describe them as forming systems, which refers to collections of two or more components together with an interface that allows the components to work together. This paper and the others in this symposium explore the economics of such systems. Market competition between systems, as opposed to market competition between individual products, highlights at least three important issues: expectations, coordination, and compatibility. A recent wave of research has focused on the behavior and performance of the variety of private and public institutions that arise in systems markets to influence expectations, facilitate coordination, and achieve compatibility. In many cases, the components purchased for a single system are spread over time, which means that rational buyers must form expectations about"}
{"_id": "0fedb3f3459bb591c3c2e30beb7dd0a9d3af3bcc", "title": "Mining knowledge-sharing sites for viral marketing", "text": "Viral marketing takes advantage of networks of influence among customers to inexpensively achieve large changes in behavior. Our research seeks to put it on a firmer footing by mining these networks from data, building probabilistic models of them, and using these models to choose the best viral marketing plan. Knowledge-sharing sites, where customers review products and advise each other, are a fertile source for this type of data mining. In this paper we extend our previous techniques, achieving a large reduction in computational cost, and apply them to data from a knowledge-sharing site. We optimize the amount of marketing funds spent on each customer, rather than just making a binary decision on whether to market to him. We take into account the fact that knowledge of the network is partial, and that gathering that knowledge can itself have a cost. Our results show the robustness and utility of our approach."}
{"_id": "2a005868b79511cf8c924cd5990e2497527a0527", "title": "Community structure in social and biological networks.", "text": "A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known--a collaboration network and a food web--and find that it detects significant and informative community divisions in both cases."}
{"_id": "1dcd6838e7914217daee785683aeb0dc12cdc6cd", "title": "Continuous-Scale Kinetic Fluid Simulation", "text": "Kinetic approaches, i.e., methods based on the lattice Boltzmann equations, have long been recognized as an appealing alternative for solving the incompressible Navier-Stokes equations in computational fluid dynamics. However, such approaches have not been widely adopted in graphics mainly due to the underlying inaccuracy, instability and inflexibility. In this paper, we try to tackle these problems in order to make kinetic approaches practical for graphical applications. To achieve more accurate and stable simulations, we propose to employ a novel non-orthogonal central-moment-relaxation model, where we propose an adaptive relaxation method to retain both stability and accuracy in turbulent flows. To achieve flexibility, we propose a novel continuous-scale formulation that enables samples at arbitrary resolutions to easily communicate with each other in a more continuous sense and with loose geometrical constraints, which allows efficient and adaptive sample construction to better match the physical scale. Such a capability directly leads to an automatic sample construction algorithm which generates static and dynamic scales at initialization and during simulation, respectively. This effectively makes our method suitable for simulating turbulent flows with arbitrary geometrical boundaries. Our simulation results with applications to smoke animations show the benefits of our method, with comparisons for justification and verification."}
{"_id": "a9adc58d23640a21c6b4c39caa6e499790e120c1", "title": "A proactive approach based on online reliability prediction for adaptation of service-oriented systems", "text": "Service computing is an emerging technology in System of Systems Engineering (SoS Engineering or SoSE), which regards System as a Service (i.e. SaaS), and aims to construct a robust and value-added complex system by outsourcing external component systems through service composition technology. A service-oriented SoS runs under a dynamic and uncertain environment. To successfully deploy SoS\u2019s run-time quality assurance, online reliability time series prediction, which aims to predict the reliability in near future for a service-oriented SoS arises as a grand challenge in SoS research. In this paper, we tackle the prediction challenge by exploiting two novel prediction models. We adopt motifs-based Dynamic Bayesian Networks (or m DBNs) model to perform one-step-ahead time series prediction, and propose a multi-steps trajectories DBNs (or multi DBNs) to further revise the future reliability prediction. Finally, an proactive adaption strategy is achieved based on the reliability prediction results. Extensive experiments conducted on real-world Web services demonstrate that our models outperform other well-known approaches consistently."}
{"_id": "474a139ee7b4223ca1e37b61665ddc8214528dce", "title": "A line in the sand: a wireless sensor network for target detection, classification, and tracking", "text": "Intrusion detection is a surveillance problem of practical import that is well suited to wireless sensor networks. In this paper, we study the application of sensor networks to the intrusion detection problem and the related problems of classifying and tracking targets. Our approach is based on a dense, distributed, wireless network of multi-modal resource-poor sensors combined into loosely coherent sensor arrays that perform in situ detection, estimation, compression, and exfiltration. We ground our study in the context of a security scenario called \u201cA Line in the Sand\u201d and accordingly define the target, system, environment, and fault models. Based on the performance requirements of the scenario and the sensing, communication, energy, and computation ability of the sensor network, we explore the design space of sensors, signal processing algorithms, communications, networking, and middleware services. We introduce the influence field, which can be estimated from a network of binary sensors, as the basis for a novel classifier. A contribution of our work is that we do not assume a reliable network; on the contrary, we quantitatively analyze the effects of network unreliability on application performance. Our work includes multiple experimental deployments of over 90 sensors nodes at MacDill Air Force Base in Tampa, Florida, as well as other field experiments of comparable scale. Based on these experiences, we identify a set of key lessons and articulate a few of the challenges facing extreme scaling to tens or hundreds of thousands of sensor nodes."}
{"_id": "74614071cf9827642d317c3a363126335f37baf0", "title": "Multi-factor Authentication Framework for Cloud Computing", "text": "Cloud computing is a new paradigm to deliver services over the Internet. Data Security is the most critical issues in a cloud computing environment. Authentication is a key technology for information security, which is a mechanism to establish proof of identities to get access of information in the system. Traditional password authentication does not provide enough security for information in cloud computing environment to the most modern means of attacks. In this paper, we propose a new multi-factor authentication framework for cloud computing. The proposed framework provides a feasible and a most efficient mechanism which can closely integrate with the traditional authentication system.The proposed framework is verified by developing Cloud Access Management (CAM) system which authenticates the user based on multiple factors. Also using secret-splitting and encrypted value of arithmetic captcha is innovative factor for user authentication for cloud computing environment. Prototype model for cloud computing own cloud server is implemented using open sources technology. The proposed framework shows the close agreement with the standard criteria for security."}
{"_id": "769aa168fdd0be340435e7e2dc1ce7724be3932e", "title": "Multi-Sensor Data Integration Using Deep Learning for Characterization of Defects in Steel Elements \u2020", "text": "Nowadays, there is a strong demand for inspection systems integrating both high sensitivity under various testing conditions and advanced processing allowing automatic identification of the examined object state and detection of threats. This paper presents the possibility of utilization of a magnetic multi-sensor matrix transducer for characterization of defected areas in steel elements and a deep learning based algorithm for integration of data and final identification of the object state. The transducer allows sensing of a magnetic vector in a single location in different directions. Thus, it enables detecting and characterizing any material changes that affect magnetic properties regardless of their orientation in reference to the scanning direction. To assess the general application capability of the system, steel elements with rectangular-shaped artificial defects were used. First, a database was constructed considering numerical and measurements results. A finite element method was used to run a simulation process and provide transducer signal patterns for different defect arrangements. Next, the algorithm integrating responses of the transducer collected in a single position was applied, and a convolutional neural network was used for implementation of the material state evaluation model. Then, validation of the obtained model was carried out. In this paper, the procedure for updating the evaluated local state, referring to the neighboring area results, is presented. Finally, the results and future perspective are discussed."}
{"_id": "c70e44131988a28393fe6b324263edc9a8a878b1", "title": "Direct visibility of point sets", "text": "This paper proposes a simple and fast operator, the \"Hidden\" Point Removal operator, which determines the visible points in a point cloud, as viewed from a given viewpoint. Visibility is determined without reconstructing a surface or estimating normals. It is shown that extracting the points that reside on the convex hull of a transformed point cloud, amounts to determining the visible points. This operator is general - it can be applied to point clouds at various dimensions, on both sparse and dense point clouds, and on viewpoints internal as well as external to the cloud. It is demonstrated that the operator is useful in visualizing point clouds, in view-dependent reconstruction and in shadow casting."}
{"_id": "39252db89b465a7ee642cc61695e0f2ff737a0f1", "title": "Better to be frustrated than bored: The incidence, persistence, and impact of learners' cognitive-affective states during interactions with three different computer-based learning environments", "text": "We study the incidence (rate of occurrence), persistence (rate of reoccurrence immediately after occurrence), and impact (effect on behavior) of students\u2019 cognitive-affective states during their use of three different computer-based learning environments. Students\u2019 cognitive-affective states are studied using different populations (Philippines, USA), different methods (quantitative field observation, self-report), and different types of learning environments (dialogue tutor, problemsolving game, and problem-solving based Intelligent Tutoring System). By varying the studies along these multiple factors, we can have greater confidence that findings which generalize across studies are robust. The incidence, persistence, and impact of boredom, frustration, confusion, engaged concentration, delight, and surprise were compared. We found that boredom was very persistent across learning environments and was associated with poorer learning and problem behaviors, such as gaming the system. Despite prior hypothesis to the contrary, frustration was less persistent, less associated with poorer learning, and did not appear to be an antecedent to gaming the system. Confusion and engaged concentration were the most common states within all three learning environments. Experiences of delight and surprise were rare. These findings suggest that significant effort should be put into detecting and responding to boredom and confusion, with a particular emphasis on developing pedagogical interventions to disrupt the \u201cvicious cycles\u201d which occur when a student becomes bored and remains bored for long periods of time."}
{"_id": "2e4efcc1045c5df82840435658d8a62254faba84", "title": "Blood Exosomes Endowed with Magnetic and Targeting Properties for Cancer Therapy.", "text": "Exosomes are a class of naturally occurring nanoparticles that are secreted endogenously by mammalian cells. Clinical applications for exosomes remain a challenge because of their unsuitable donors, low scalability, and insufficient targeting ability. In this study, we developed a dual-functional exosome-based superparamagnetic nanoparticle cluster as a targeted drug delivery vehicle for cancer therapy. The resulting exosome-based drug delivery vehicle exhibits superparamagnetic behavior at room temperature, with a stronger response to an external magnetic field than individual superparamagnetic nanoparticles. These properties enable exosomes to be separated from the blood and to target diseased cells. In vivo studies using murine hepatoma 22 subcutaneous cancer cells showed that drug-loaded exosome-based vehicle delivery enhanced cancer targeting under an external magnetic field and suppressed tumor growth. Our developments overcome major barriers to the utility of exosomes for cancer application."}
{"_id": "86dc975f9cbd9a205f8e82fb1db3b61c6b738fa5", "title": "Large-scale concept ontology for multimedia", "text": "As increasingly powerful techniques emerge for machine tagging multimedia content, it becomes ever more important to standardize the underlying vocabularies. Doing so provides interoperability and lets the multimedia community focus ongoing research on a well-defined set of semantics. This paper describes a collaborative effort of multimedia researchers, library scientists, and end users to develop a large standardized taxonomy for describing broadcast news video. The large-scale concept ontology for multimedia (LSCOM) is the first of its kind designed to simultaneously optimize utility to facilitate end-user access, cover a large semantic space, make automated extraction feasible, and increase observability in diverse broadcast news video data sets"}
{"_id": "df39d80bfe9ea60ad165a91b9633400851b25e5d", "title": "A droop control strategy of parallel-inverter-based microgrid", "text": "In this paper, the control strategy for a parallel-inverters-based microgrid is presented. The control strategy includes outer active and reactive power control loop based on frequency and voltage droop method to avoid critical communications among distributed generation units (DGs). The inner inverter control loop is employed to produce the required inverter output voltage provided by the outer power loop. In addition, two inverter control schemes are introduced and compared. This microgrid can operate at both grid-connected and islanded mode with proper power sharing capability between parallel DG systems. Moreover, smooth transition between operating modes is achieved without causing negative effect on the utility and critical loads. The performance of this control strategy is verified in simulation using Matlab/Simulink."}
{"_id": "23acfcba83485814587fab5797ba022346c83863", "title": "Human EEG responses to 1\u2013100\u00a0Hz flicker: resonance phenomena in visual cortex and their potential correlation to cognitive phenomena", "text": "The individual properties of visual objects, like form or color, are represented in different areas in our visual cortex. In order to perceive one coherent object, its features have to be bound together. This was found to be achieved in cat and monkey brains by temporal correlation of the firing rates of neurons which code the same object. This firing rate is predominantly observed in the gamma frequency range (approx. 30\u201380\u00a0Hz, mainly around 40\u00a0Hz). In addition, it has been shown in humans that stimuli which flicker at gamma frequencies are processed faster by our brains than when they flicker at different frequencies. These effects could be due to neural oscillators, which preferably oscillate at certain frequencies, so-called resonance frequencies. It is also known that neurons in visual cortex respond to flickering stimuli at the frequency of the flickering light. If neural oscillators exist with resonance frequencies, they should respond more strongly to stimulation with their resonance frequency. We performed an experiment, where ten human subjects were presented flickering light at frequencies from 1 to 100\u00a0Hz in 1-Hz steps. The event-related potentials exhibited steady-state oscillations at all frequencies up to at least 90\u00a0Hz. Interestingly, the steady-state potentials exhibited clear resonance phenomena around 10, 20, 40 and 80\u00a0Hz. This could be a potential neural basis for gamma oscillations in binding experiments. The pattern of results resembles that of multiunit activity and local field potentials in cat visual cortex."}
{"_id": "8638aa6bb67fad77e74cdf2fb29175e4fbb62421", "title": "Aperture scanning Fourier ptychographic microscopy", "text": "Fourier ptychographic microscopy (FPM) is implemented through aperture scanning by an LCOS spatial light modulator at the back focal plane of the objective lens. This FPM configuration enables the capturing of the complex scattered field for a 3D sample both in the transmissive mode and the reflective mode. We further show that by combining with the compressive sensing theory, the reconstructed 2D complex scattered field can be used to recover the 3D sample scattering density. This implementation expands the scope of application for FPM and can be beneficial for areas such as tissue imaging and wafer inspection. c \u00a9 2016 Optical Society of America OCIS codes: (180.0180) Microscopy; (090.1995) Digital holography; (110.6880) Three-dimensional image acquisition. References and links 1. G. Zheng, R. Horstmeyer, and C. Yang, \u201cWide-field, high-resolution Fourier ptychographic microscopy,\u201d Nat. Photonics 7, 739\u2013745 (2013). 2. R. Horstmeyer and C. Yang, \u201cA phase space model of Fourier ptychographic microscopy,\u201d Opt. Express 22, 338\u2013358 (2014). 3. X. Ou, R. Horstmeyer, G. Zheng, and C. Yang, \u201cHigh numerical aperture Fourier ptychography: principle, implementation and characterization,\u201d Opt. Express 23, 3472\u20133491 (2015). 4. X. Ou, R. Horstmeyer, C. Yang, and G. Zheng, \u201cQuantitative phase imaging via Fourier ptychographic microscopy,\u201d Opt. Lett. 38, 4845\u20134848 (2013). 5. L. Tian and L. Waller, \u201c3D intensity and phase imaging from light field measurements in an LED array microscope,\u201d Optica 2, 104\u2013111 (2015). 6. R. Horstmeyer, J. Chung, X. Ou, G. Zheng, and C. Yang\u201e \u201cDiffraction tomography with Fourier ptychography,\u201d Optica 3, 827\u2013835 (2016). 7. S. Dong, R. Horstmeyer, R. Shiradkar, K. Guo, X. Ou, Z. Bian, H. Xin, and G. Zheng, \u201cAperture-scanning Fourier ptychography for 3D refocusing and super-resolution macroscopic imaging,\u201d Opt. Express 22, 13586\u201313599 (2014). 8. R. Horstmeyer, X. Ou, J. Chung, G. Zheng, and C. Yang, \u201cOverlapped Fourier coding for optical aberration removal,\u201d Opt. Express 22, 24062\u201324080 (2014). 9. S. Dong, Z. Bian, R. Shiradkar, and G. Zheng, \u201cSparsely sampled Fourier ptychography,\u201d Opt. Express 22, 5455\u20135464 (2014). 10. S. Kubota and J. W. Goodman, \u201cVery efficient speckle contrast reduction realized by moving diffuser device,\u201d Appl. Opt. 49, 4385\u20134391 (2010). 11. R. Horstmeyer, R. Heintzmann, G. Popescu, L. Waller, and C. Yang, \u201cStandardizing the resolution claims for coherent microscopy,\u201d Nat. Photonics 10, 68\u201371 (2016). 12. J. W. Goodman, \u201cIntroduction to Fourier Optics,\u201d (Roberts and Company Publishers, 2005), 1st ed. 13. E. J. Cand\u00e8s, J. Romberg, and T. Tao, \u201cRobust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,\u201d IEEE Trans. Inf. Theory 52, 489\u2013509 (2006). 14. E. J. Cand\u00e8s, and T. Tao\u201e \u201cNear-optimal signal recovery from random projections: Universal encoding strategies?,\u201d IEEE Trans. Inf. Theory 52, 5406\u20135425 (2006). 15. D. L. Donoho, \u201cCompressed sensing,\u201d IEEE Trans. Inf. Theory 52, 1289\u20131306 (2006). 16. D. J. Brady, K. Choi, D. L. Marks, R. Horisaki, and S. Lim, \u201cCompressive holography,\u201d Opt. Express 17, 13040\u2013 13049 (2009). 17. 
M.Born, \u201cQuantenmechanik der sto\u00dfvorg\u00e4nge,\u201d Zeitschrift f\u00fcr Physik 38, 803\u2013827 (1926). 18. R. E. Blahut, Theory of Remote Image Formation (Cambridge University Press, 2004). 19. E. J. Cand\u00e8s and T. Tao, \u201cDecoding by linear programming,\u201d IEEE Trans. Inf. Theory 51, 4203\u20134215 (2005). 20. L. I. Rudin, S. Osher, and E. Fatemi, \u201cNonlinear total variation based noise removal algorithms,\u201d Physica D. 60, 259\u2013268 (1992). 21. J. M. Bioucas-Dias and M. A. Figueiredo, \u201cA new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,\u201d IEEE Trans. Image Process. 16, 2992\u20133004 (2007). Vol. 7, No. 8 | 1 Aug 2016 | BIOMEDICAL OPTICS EXPRESS 3140 #263394 Journal \u00a9 2016 http://dx.doi.org/10.1364/BOE.7.003140 Received 20 Apr 2016; revised 23 Jun 2016; accepted 17 Jul 2016; published 29 Jul 2016 22. J. M. Bioucas-Dias and M. A. Figueiredo, \u201cTwo-step Iterative Shrinkage/Thresholding Algorithm for Linear Inverse Problems,\u201d http://www.lx.it.pt/ bioucas/TwIST/TwIST.htm 23. E. J. Cand\u00e8s, J .K. Romberg, and T. Tao, \u201cStable signal recovery from incomplete and inaccurate measurements,\u201d Comm. Pure and Applied Math. 59, 1207\u20131223 (2006). 24. J. Mairal, J. Ponce, G. Sapiro, A. Zisserman, and F. R. Bach, \u201cSupervised dictionary learning,\u201d Proc. Adv. Neural Inf. Process. Syst. Conf. pp. 1033\u00e2\u0102\u015e1040 (2009). 25. I. Yamaguchi and T. Zhang, \u201cPhase-shifting digital holography,\u201d Opt. Lett. 22, 1268\u20131270 (1997)."}
{"_id": "2ddadf16e815f2e7c66de20082b117f03d2b1876", "title": "Tribeca: A Stream Database Manager for Network Traffic Analysis", "text": "High speed computer and telephone networks carry large amounts of data and signalling traffic. The engineers who build and maintain these networks use a combination of hardware and software tools to monitor the stream of network traffic. Some of these tools operate directly on the live network; others record data on magnetic tape for later offline analysis by software. Most analysis tasks require tens to hundreds of gigabytes of data. Traffic analysis applications include protocol performance analysis, conformance testing, error monitoring and fraud detection."}
{"_id": "aaaf0cdba3b05eb66a09f0e49fdd698b8db3c6a7", "title": "Sliding-Window Superposition Coding: Two-User Interference Channels", "text": "A low-complexity coding scheme is developed to achieve the r at egion of maximum likelihood decoding for interference channels. As in the classical rate-splitting mu ltiple access scheme by Grant, Rimoldi, Urbanke, and Whitin g, the proposed coding scheme uses superposition of multiple c od words with successive cancellation decoding, which can be implemented using standard point-to-point encoders and decoders. Unlike rate-splitting multiple access, which is not rate-optimal for multiple receivers, the proposed coding s cheme transmits codewords over multiple blocks in a stagger ed manner and recovers them successively over sliding decodin g wi dows, achieving the single-stream optimal rate region as well as the more general Han\u2013Kobayashi inner bound for the tw o-user interference channel. The feasibility of this schem e in practice is verified by implementing it using commercial c hannel codes over the two-user Gaussian interference chann el. I. I NTRODUCTION For high data rates and massive connectivity, next-generat ion cellular networks are expected to deploy many small base stations. While such dense deployment provides the ben fit of bringing radio closer to end users, it also increases the amount of interference from neighboring cells. Consequ ently, efficient and effective management of interference is expected to become one of the main challenges for high-spe ctral-efficiency, low-power, broad-coverage wireless communications. Over the past few decades, several techniques at different p otocol layers [1]\u2013[3] have been proposed to mitigate adver se effects of interference in wireless networks. One importan t conceptual technique at the physical layer is simultaneous decoding[4, Section 6.2], [5]. In this decoding method, each receive r attempts to recover both the intended and a subset of the interfering codewords at the same time. When the inter fer nce is strong [6], [7] and weak [8]\u2013[11], simultaneous The material in this paper was presented in part in the IEEE In ternational Symposium on Information Theory (ISIT) 2014, H onolulu, HI, and in part in the IEEE Globecom Workshops (GC Wkshps) 2014, Austin , TX. L. Wang is jointly with the Department of Electrical Enginee ring, Stanford University, Stanford, CA 94305 USA and the De partment of Electrical Engineering Systems, Tel Aviv University, Tel Aviv, Israe l ( mail: wanglele@stanford.edu). Y.-H. Kim is with the Department of Electrical and Computer E ngineering, University of California, San Diego, La Jolla, CA 92093 USA (e-mail: yhk@ucsd.edu). C.-Y. Chen is with Broadcom Limited, 190 Mathilda Place, Sun nyvale, CA 94086 USA (email: uscychen@gmail.com). H. Park is with the School of Electronics and Computer Engine eri g, Chonnam National University, Gwangju 61186, Korea ( e-mail: hpark1@jnu.ac.kr). E. \u015ea\u015fo\u011flu is with Intel Corporation, Santa Clara, CA 950 54 USA (e-mail: eren.sasoglu@gmail.com)."}
{"_id": "bdecc15a07f2ac8bad367e681b6c40de4bb104f3", "title": "A brain-controlled exoskeleton with cascaded event-related desynchronization classifiers", "text": "This paper describes a brain-machine interface for the online control of a powered lower-limb exoskeleton based on electroencephalogram (EEG) signals recorded over the user\u2019s sensorimotor cortical areas. We train a binary decoder that can distinguish two different mental states, which is applied in a cascaded manner to efficiently control the exoskeleton in three different directions: walk front, turn left and turn right. This is realized by first classifying the user\u2019s intention to walk front or change the direction. If the user decides to change the direction, a subsequent classification is performed to decide turn left or right. The user\u2019s mental command is conditionally executed considering the possibility of obstacle collision. All five subjects were able to successfully complete the 3-way navigation task using brain signals while mounted in the exoskeleton. We observed on average 10.2% decrease in overall task completion time compared to the baseline protocol. August 25, 2016 DRAFT"}
{"_id": "54f575dbab3a9d5ffcecacc57431e80e1ee62688", "title": "Microsaccades uncover the orientation of covert attention", "text": "Fixational eye movements are subdivided into tremor, drift, and microsaccades. All three types of miniature eye movements generate small random displacements of the retinal image when viewing a stationary scene. Here we investigate the modulation of microsaccades by shifts of covert attention in a classical spatial cueing paradigm. First, we replicate the suppression of microsaccades with a minimum rate about 150 ms after cue onset. Second, as a new finding we observe microsaccadic enhancement with a maximum rate about 350 ms after presentation of the cue. Third, we find a modulation of the orientation towards the cue direction. These multiple influences of visual attention on microsaccades accentuate their role for visual information processing. Furthermore, our results suggest that microsaccades can be used to map the orientation of visual attention in psychophysical experiments."}
{"_id": "87282cd98f07a33594155e8b00a43a45abe124e8", "title": "Document-level Multi-aspect Sentiment Classification by Jointly Modeling Users, Aspects, and Overall Ratings", "text": "Document-level multi-aspect sentiment classification aims to predict user\u2019s sentiment polarities for different aspects of a product in a review. Existing approaches mainly focus on text information. However, the authors (i.e. users) and overall ratings of reviews are ignored, both of which are proved to be significant on interpreting the sentiments of different aspects in this paper. Therefore, we propose a model called Hierarchical User Aspect Rating Network (HUARN) to consider user preference and overall ratings jointly. Specifically, HUARN adopts a hierarchical architecture to encode word, sentence, and document level information. Then, user attention and aspect attention are introduced into building sentence and document level representation. The document representation is combined with user and overall rating information to predict aspect ratings of a review. Diverse aspects are treated differently and a multi-task framework is adopted. Empirical results on two real-world datasets show that HUARN achieves state-of-the-art performances."}
{"_id": "51d55f38d077679adc9d0992087fb993fd74b771", "title": "Pixel Based Off-line Signature Verification System", "text": "The verification of handwritten signatures is one of the oldest and the most popular authentication methods all around the world. As technology improved, different ways of comparing and analyzing signatures become more and more sophisticated. Since the early seventies, people have been exploring how computers can fully take over the task of signature verification and tried different methods. However, none of them is satisfactory enough and time consuming too. Therefore, our proposed pixel based offline signature verification system is one of the fastest and easiest ways to authenticate any handwritten signature we have ever found. For signature acquisition, we have used scanner. Then we have divided the signature image into 2D array and calculated the hexadecimal RGB value of each pixel. After that, we have calculated the total percentage of matching. If the percentage of matching is more than 90, the signature is considered as valid otherwise invalid. We have experimented on more than 35 signatures and the result of our experiment is quite impressive. We have made the whole system web based so that the signature can be verified from anywhere. The average execution time for signature verification is only 0.00003545 second only. Keywords\u2013 off-line signature, signature verification, pixel, execution time, pre-processing"}
{"_id": "570ef6255e06147cedf274b8ac1c085b23ab47e6", "title": "Norms for word lists that create false memories.", "text": "Roediger and McDermott (1995) induced false recall and false recognition for words that were not presented in lists. They had subjects study 24 lists of 15 words that were associates of a common word (called the critical target or critical lure) that was not presented in the list. False recall and false recognition of the critical target occurred frequently in response to these lists. The purpose of the current work was to provide a set of normative data for the lists Roediger and McDermott used and for 12 others developed more recently. We tested false recall and false recognition for critical targets from 36 lists. Despite the fact that all lists were constructed to produce false remembering, the diversity in their effectiveness was large--60% or more of subjects falsely recalled window and sleep following the appropriate lists, and false recognition for these items was greater than 80%. However, the list generated from king led to 10% false recall and 27% false recognition. Possible reasons for these wide differences in effectiveness of the lists are discussed. These norms serve as a useful benchmark for designing experiments about false recall and false recognition in this paradigm."}
{"_id": "5fd16142d0c3e0ae69da838dccad42aaea1a1745", "title": "Software defect prediction using machine learning on test and source code metrics", "text": "Context. Software testing is the process of finding faults in software while executing it. The results of the testing are used to find and correct faults. Software defect prediction estimates where faults are likely to occur in source code. The results from the defect prediction can be used to optimize testing and ultimately improve software quality. Machine learning, that concerns computer programs learning from data, is used to build prediction models which then can be used to classify data. Objectives. In this study we, in collaboration with Ericsson, investigated whether software metrics from source code files combined with metrics from their respective tests predicts faults with better prediction performance compared to using only metrics from the source code files. Methods. A literature review was conducted to identify inputs for an experiment. The experiment was applied on one repository from Ericsson to identify the best performing set of metrics. Results. The prediction performance results of three metric sets are presented and compared with each other. Wilcoxon\u2019s signed rank tests are performed on four different performance measures for each metric set and each machine learning algorithm to demonstrate significant differences of the results. Conclusions. We conclude that metrics from tests can be used to predict faults. However, the combination of source code metrics and test metrics do not outperform using only source code metrics. Moreover, we conclude that models built with metrics from the test metric set with minimal information of the source code can in fact predict faults in the source code."}
{"_id": "992395dcd22a61a4519a46550a6c19eed7c7c1bf", "title": "Core stability exercise principles.", "text": "Core stability is essential for proper load balance within the spine, pelvis, and kinetic chain. The so-called core is the group of trunk muscles that surround the spine and abdominal viscera. Abdominal, gluteal, hip girdle, paraspinal, and other muscles work in concert to provide spinal stability. Core stability and its motor control have been shown to be imperative for initiation of functional limb movements, as needed in athletics. Sports medicine practitioners use core strengthening techniques to improve performance and prevent injury. Core strengthening, often called lumbar stabilization, also has been used as a therapeutic exercise treatment regimen for low back pain conditions. This article summarizes the anatomy of the core, the progression of core strengthening, the available evidence for its theoretical construct, and its efficacy in musculoskeletal conditions."}
{"_id": "2b2ba9b0022ff45939527836a150959fe388ee23", "title": "From Data Mining to Knowledge Discovery: An Overview", "text": ""}
{"_id": "7b550425afe75edbfe7058ea4075cbf7a82c98a8", "title": "An Extension on \u201c Statistical Comparisons of Classifiers over Multiple Data Sets \u201d for all Pairwise Comparisons", "text": "In a recently published paper in JMLR, Dem\u0161ar (2006) recommends a set of non-parametric statistical tests and procedures which can be safely used for comparing the performance of classifiers over multiple data sets. After studying the paper, we realize that the paper correctly introduces the basic procedures and some of the most advanced ones when comparing a control method. However, it does not deal with some advanced topics in depth. Regarding these topics, we focus on more powerful proposals of statistical procedures for comparing n\u00d7n classifiers. Moreover, we illustrate an easy way of obtaining adjusted and comparable p-values in multiple comparison procedures."}
{"_id": "0468532d28499ef38287afd7557dd62b802ee85b", "title": "KEEL: a software tool to assess evolutionary algorithms for data mining problems", "text": "This paper introduces a software tool named KEEL, which is a software tool to assess evolutionary algorithms for Data Mining problems of various kinds including as regression, classification, unsupervised learning, etc. It includes evolutionary learning algorithms based on different approaches: Pittsburgh, Michigan and IRL, as well as the integration of evolutionary learning techniques with different pre-processing techniques, allowing it to perform a complete analysis of any learning model in comparison to existing software tools. Moreover, KEEL has been designed with a double goal: research and educational."}
{"_id": "145e7f1f5a1f2f618b064fe5344cd292cd0b820c", "title": "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients", "text": "This paper introduces a novel parameter automation strategy for the particle swarm algorithm and two further extensions to improve its performance after a predefined number of generations. Initially, to efficiently control the local search and convergence to the global optimum solution, time-varying acceleration coefficients (TVAC) are introduced in addition to the time-varying inertia weight factor in particle swarm optimization (PSO). From the basis of TVAC, two new strategies are discussed to improve the performance of the PSO. First, the concept of \"mutation\" is introduced to the particle swarm optimization along with TVAC (MPSO-TVAC), by adding a small perturbation to a randomly selected modulus of the velocity vector of a random particle by predefined probability. Second, we introduce a novel particle swarm concept \"self-organizing hierarchical particle swarm optimizer with TVAC (HPSO-TVAC)\". Under this method, only the \"social\" part and the \"cognitive\" part of the particle swarm strategy are considered to estimate the new velocity of each particle and particles are reinitialized whenever they are stagnated in the search space. In addition, to overcome the difficulties of selecting an appropriate mutation step size for different problems, a time-varying mutation step size was introduced. Further, for most of the benchmarks, mutation probability is found to be insensitive to the performance of MPSO-TVAC method. On the other hand, the effect of reinitialization velocity on the performance of HPSO-TVAC method is also observed. Time-varying reinitialization step size is found to be an efficient parameter optimization strategy for HPSO-TVAC method. The HPSO-TVAC strategy outperformed all the methods considered in this investigation for most of the functions. Furthermore, it has also been observed that both the MPSO and HPSO strategies perform poorly when the acceleration coefficients are fixed at two."}
{"_id": "95cca0eaf4df3c1f9b9ace738e6a9e12785d3c96", "title": "Agricultural aid to seed cultivation: An Agribot", "text": "Machine intelligence is a developing technology which has made its way to various fields of engineering and technology. Robots are slowly being implemented in the field of agriculture, very soon AgriBots are to take over the agricultural fields and be used for various difficult and tiresome tasks involving agriculture. They have become the inevitable future of agriculture. This paper proposes an idea that will help in effective cultivation of vast areas of land left uncultivated or barren. Numerous farmers are dying during hill farming mainly due to falling from heights, which can be reduced by this technological effort. The proposed work will help in cultivation in remote areas and increase green cover as well as help farmers in harsh environments. The Agricultural Aid to Seed Cultivation (AASC) robot will be an unmanned aerial vehicle equipped with a camera, a digital image processing unit and a seed cultivation unit. A quadcopter is chosen as an aerial vehicle is independent of the form and shape of the ground and is not deterred by these factors while providing high mobility and reliability. The research aims about the new technology which can be suitable for any kind of remote farming."}
{"_id": "5f5bbfc164ffde5a124b738fd0de6f3c1e34a390", "title": "The Adaptive Lasso and Its Oracle Properties", "text": "The lasso is a popular technique for simultaneous estimation and variable selection. Lasso variable selection has been shown to be consistent under certain conditions. In this work we derive a necessary condition for the lasso variable selection to be consistent. Consequently, there exist certain scenarios where the lasso is inconsistent for variable selection. We then propose a new version of the lasso, called the adaptive lasso, where adaptive weights are used for penalizing different coefficients in the 1 penalty. We show that the adaptive lasso enjoys the oracle properties; namely, it performs as well as if the true underlying model were given in advance. Similar to the lasso, the adaptive lasso is shown to be near-minimax optimal. Furthermore, the adaptive lasso can be solved by the same efficient algorithm for solving the lasso. We also discuss the extension of the adaptive lasso in generalized linear models and show that the oracle properties still hold under mild regularity conditions. As a byproduct of our theory, the nonnegative garotte is shown to be consistent for variable selection."}
{"_id": "84d527952c6b8fdfa5ea4a05e3d54e71b2c2bd99", "title": "An empirical examination of the factor structure of compassion", "text": "Compassion has long been regarded as a core part of our humanity by contemplative traditions, and in recent years, it has received growing research interest. Following a recent review of existing conceptualisations, compassion has been defined as consisting of the following five elements: 1) recognising suffering, 2) understanding the universality of suffering in human experience, 3) feeling moved by the person suffering and emotionally connecting with their distress, 4) tolerating uncomfortable feelings aroused (e.g., fear, distress) so that we remain open to and accepting of the person suffering, and 5) acting or being motivated to act to alleviate suffering. As a prerequisite to developing a high quality compassion measure and furthering research in this field, the current study empirically investigated the factor structure of the five-element definition using a combination of existing and newly generated self-report items. This study consisted of three stages: a systematic consultation with experts to review items from existing self-report measures of compassion and generate additional items (Stage 1), exploratory factor analysis of items gathered from Stage 1 to identify the underlying structure of compassion (Stage 2), and confirmatory factor analysis to validate the identified factor structure (Stage 3). Findings showed preliminary empirical support for a five-factor structure of compassion consistent with the five-element definition. However, findings indicated that the 'tolerating' factor may be problematic and not a core aspect of compassion. This possibility requires further empirical testing. Limitations with items from included measures lead us to recommend against using these items collectively to assess compassion. Instead, we call for the development of a new self-report measure of compassion, using the five-element definition to guide item generation. We recommend including newly generated 'tolerating' items in the initial item pool, to determine whether or not factor-level issues are resolved once item-level issues are addressed."}
{"_id": "e649ec9be443c0a58114a1363a34d156cb98d0c4", "title": "PuppetGAN: Transferring Disentangled Properties from Synthetic to Real Images", "text": "In this work we propose a model that enables controlled manipulation of visual attributes of real \u201ctarget\u201d images (e.g. lighting, expression or pose) using only implicit supervision with synthetic \u201csource\u201d exemplars. Specifically, our model learns a shared low-dimensional representation of input images from both domains in which a property of interest is isolated from other content features of the input. By using triplets of synthetic images that demonstrate modification of the visual property that we would like to control (for example mouth opening) we are able to perform disentanglement of image representations with respect to this property without using explicit attribute labels in either domain. Since our technique relies on triplets instead of explicit labels, it can be applied to shape, texture, lighting, or other properties that are difficult to measure or represent as explicit conditioners. We quantitatively analyze the degree to which trained models learn to isolate the property of interest from other content features with a proof-of-concept digit dataset and demonstrate results in a far more difficult setting, learning to manipulate real faces using a synthetic 3D faces dataset. We also explore limitations of our model with respect to differences in distributions of properties observed in two domains."}
{"_id": "12aaaa99916da45a30c63096f60a687034c41895", "title": "TimeSpan: Using Visualization to Explore Temporal Multi-dimensional Data of Stroke Patients", "text": "We present TimeSpan, an exploratory visualization tool designed to gain a better understanding of the temporal aspects of the stroke treatment process. Working with stroke experts, we seek to provide a tool to help improve outcomes for stroke victims. Time is of critical importance in the treatment of acute ischemic stroke patients. Every minute that the artery stays blocked, an estimated 1.9 million neurons and 12 km of myelinated axons are destroyed. Consequently, there is a critical need for efficiency of stroke treatment processes. Optimizing time to treatment requires a deep understanding of interval times. Stroke health care professionals must analyze the impact of procedures, events, and patient attributes on time-ultimately, to save lives and improve quality of life after stroke. First, we interviewed eight domain experts, and closely collaborated with two of them to inform the design of TimeSpan. We classify the analytical tasks which a visualization tool should support and extract design goals from the interviews and field observations. Based on these tasks and the understanding gained from the collaboration, we designed TimeSpan, a web-based tool for exploring multi-dimensional and temporal stroke data. We describe how TimeSpan incorporates factors from stacked bar graphs, line charts, histograms, and a matrix visualization to create an interactive hybrid view of temporal data. From feedback collected from domain experts in a focus group session, we reflect on the lessons we learned from abstracting the tasks and iteratively designing TimeSpan."}
{"_id": "c39d04c6f3b84c77ad379d0358bfbe7148ad4fd2", "title": "Risk-based adaptive security for smart IoT in eHealth", "text": ""}
{"_id": "ca89131171b58a4c6747187cd0c0f737ad4297b2", "title": "Lower-extremity resistance training on unstable surfaces improves proxies of muscle strength, power and balance in healthy older adults: a randomised control trial", "text": "BACKGROUND\nIt is well documented that both balance and resistance training have the potential to mitigate intrinsic fall risk factors in older adults. However, knowledge about the effects of simultaneously executed balance and resistance training (i.e., resistance training conducted on unstable surfaces [URT]) on lower-extremity muscle strength, power and balance in older adults is insufficient. The objective of the present study was to compare the effects of machine-based stable resistance training (M-SRT) and two types of URT, i.e., machine-based (M-URT) and free-weight URT (F-URT), on measures of lower-extremity muscle strength, power and balance in older adults.\n\n\nMETHODS\nSeventy-five healthy community-dwelling older adults aged 65-80 years, were assigned to three intervention groups: M-SRT, M-URT and F-URT. Over a period of ten weeks, all participants exercised two times per week with each session lasting ~60\u00a0min. Tests included assessment of leg muscle strength (e.g., maximal isometric leg extension strength), power (e.g., chair rise test) and balance (e.g., functional reach test), carried out before and after the training period. Furthermore, maximal training load of the squat-movement was assessed during the last training week.\n\n\nRESULTS\nMaximal training load of the squat-movement was significantly lower in F-URT in comparison to M-SRT and M-URT. However, lower-extremity resistance training conducted on even and uneven surfaces meaningfully improved proxies of strength, power and balance in all groups. M-URT produced the greatest improvements in leg extension strength and F-URT in the chair rise test and functional reach test.\n\n\nCONCLUSION\nAside from two interaction effects, overall improvements in measures of lower-extremity muscle strength, power and balance were similar across training groups. Importantly, F-URT produced similar results with considerably lower training load as compared to M-SRT and M-URT. Concluding, F-URT seems an effective and safe alternative training program to mitigate intrinsic fall risk factors in older adults.\n\n\nTRIAL REGISTRATION\nThis trial has been registered with clinicaltrials.gov ( NCT02555033 ) on 09/18/2015."}
{"_id": "fdbf773d6516b660ad7fb3d007d9ec5d4f75db7d", "title": "On Detecting Clustered Anomalies Using SCiForest", "text": "Detecting local clustered anomalies is an intricate problem for many existing anomaly detection methods. Distance-based and density-based methods are inherently restricted by their basic assumptions\u2014anomalies are either far from normal points or being sparse. Clustered anomalies are able to avoid detection since they defy these assumptions by being dense and, in many cases, in close proximity to normal instances. In this paper, without using any density or distance measure, we propose a new method called SCiForest to detect clustered anomalies. SCiForest separates clustered anomalies from normal points effectively even when clustered anomalies are very close to normal points. It maintains the ability of existing methods to detect scattered anomalies, and it has superior time and space complexities against existing distance-based and density-based methods."}
{"_id": "f2a6ef6457139d2a6e05f3823e0819e889b6b646", "title": "Analysis and test procedures for NOR flash memory defects", "text": "Widespread use of non-volatile memories, especially flash memories, in diverse applications such as in mobile computing and systemon-chip is becoming a common place. As a result, testing them for faults and reliability is drawing considerable interest of designers and researchers. One of the most predominant failure modes for which these memories must be tested is called disturb faults. In this paper, we first analyze different defects that are responsible for disturb faults using a 2-dimension device simulator. We determine the impact of various defects on cell performance and develop a methodology based on channel erase technique to detect these defects. Our tests are efficient and can be converted to march tests prevalently used to test memories. We also propose a very low cost design-for-testability approach that can be used to apply the test technique developed in this paper. 2008 Elsevier Ltd. All rights reserved."}
{"_id": "304f71afc304bb5eae39795d64b7e42f8a7788f7", "title": "Vehicle model recognition from frontal view image measurements", "text": "Available online 28 June 2010"}
{"_id": "8db7f5a54321e1a4cd51d0666607279556a57404", "title": "Systematic review of the efficacy of meditation techniques as treatments for medical illness.", "text": "BACKGROUND\nMeditative techniques are sought frequently by patients coping with medical and psychological problems. Because of their increasingly widespread appeal and use, and the potential for use as medical therapies, a concise and thorough review of the current state of scientific knowledge of these practices as medical interventions was conducted.\n\n\nPURPOSE\nTo systematically review the evidence supporting efficacy and safety of meditative practices in treating illnesses, and examine areas warranting further study. Studies on normal healthy populations are not included.\n\n\nMETHODS\nSearches were performed using PubMed, PsycInfo, and the Cochrane Database. Keywords were Meditation, Meditative Prayer, Yoga, Relaxation Response. Qualifying studies were reviewed and independently rated based on quality by two reviewers. Mid-to-high-quality studies (those scoring above 0.65 or 65% on a validated research quality scale) were included.\n\n\nRESULTS\nFrom a total of 82 identified studies, 20 randomized controlled trials met our criteria. The studies included 958 subjects total (397 experimentally treated, 561 controls). No serious adverse events were reported in any of the included or excluded clinical trials. Serious adverse events are reported in the medical literature, though rare. The strongest evidence for efficacy was found for epilepsy, symptoms of the premenstrual syndrome and menopausal symptoms. Benefit was also demonstrated for mood and anxiety disorders, autoimmune illness, and emotional disturbance in neoplastic disease.\n\n\nCONCLUSIONS\nThe results support the safety and potential efficacy of meditative practices for treating certain illnesses, particularly in nonpsychotic mood and anxiety disorders. Clear and reproducible evidence supporting efficacy from large, methodologically sound studies is lacking."}
{"_id": "056c1cd2cc9a0732a7b2dcd65639b2cb4319e501", "title": "Analysis of open source Business Intelligence suites", "text": "In recent time, technology applications in different fields, especially Business Intelligence (BI) have been developed rapidly and considered to be one of the most significant uses of information technology. BI is a broad category of applications and technologies for gathering, storing, analyzing, and providing access to data to help enterprise users make better business decisions. Whereas in the past the Business Intelligence market was strictly dominated by closed source and commercial tools, the last few years were characterized by the birth of open source solutions. This represents a tremendous competitive advantage, however the choice of a suitable open source BI suite is a challenge. The present study evaluated and compared the last versions of the five main Open Source Business Intelligence suites: JasperSoft, Palo, Pentaho, SpagoBI and Vanilla."}
{"_id": "401bf438664674701932a3ebcdccc5845733cead", "title": "Machine Learning for Reliable Network Attack Detection in SCADA Systems", "text": "Critical Infrastructures (CIs) use Supervisory Control And Data Acquisition (SCADA) systems for remote control and monitoring. Sophisticated security measures are needed to address malicious intrusions, which are steadily increasing in number and variety due to the massive spread of connectivity and standardisation of open SCADA protocols. Traditional Intrusion Detection Systems (IDSs) cannot detect attacks that are not already present in their databases. Therefore, in this paper, we assess Machine Learning (ML) for intrusion detection in SCADA systems using a real data set collected from a gas pipeline system and provided by the Mississippi State University (MSU). The contribution of this paper is two-fold: 1) The evaluation of four techniques for missing data estimation and two techniques for data normalization, 2) The performances of Support Vector Machine (SVM), and Random Forest (RF) are assessed in terms of accuracy, precision, recall and F 1 score for intrusion detection. Two cases are differentiated: binary and categorical classifications. Our experiments reveal that RF detect intrusions effectively, with an F_1 score of respectively > 99%."}
{"_id": "2728ccae8ee19aef2f6e2af106f401b5c251811a", "title": "For Whom Is a Picture Worth a Thousand Words ? Extensions of a Dual-Coding Theory of Multimedia Learning", "text": "In 2 experiments, highand low-spatial ability students viewed a computer-generated animation and listened simultaneously (concurrent group) or successively (successive group) to a narration that explained the workings either of a bicycle tire pump (Experiment 1) or of the human respiratory system (Experiment 2). The concurrent group generated more creative solutions to subsequent transfer problems than did the successive group; this contiguity effect was strong for highbut not for low-spatial ability students. Consistent with a dual-coding theory, spatial ability allows high-spatial learners to devote more cognitive resources to building referential connections between visual and verbal representations of the presented material, whereas low-spatial ability learners must devote more cognitive resources to building representation connections between visually presented material and its visual representation."}
{"_id": "58f692e9b03cb973355aab46bb6f867239aeb513", "title": "Understanding network failures in data centers: measurement, analysis, and implications", "text": "We present the first large-scale analysis of failures in a data center network. Through our analysis, we seek to answer several fundamental questions: which devices/links are most unreliable, what causes failures, how do failures impact network traffic and how effective is network redundancy? We answer these questions using multiple data sources commonly collected by network operators. The key findings of our study are that (1) data center networks show high reliability, (2) commodity switches such as ToRs and AggS are highly reliable, (3) load balancers dominate in terms of failure occurrences with many short-lived software related faults,(4) failures have potential to cause loss of many small packets such as keep alive messages and ACKs, and (5) network redundancy is only 40% effective in reducing the median impact of failure."}
{"_id": "3aa2f79d62822083a477f471a84fa017515007c9", "title": "Networked Pixels: Strategies for Building Visual and Auditory Images with Distributed Independent Devices", "text": "This paper describes the development of the hardware and software for Bloom, a light installation installed at Kew Gardens, London in December of 2016. The system is made up of a set of nearly 1000 distributed pixel devices each with LEDs, GPS sensor, and sound hardware, networked together with WiFi to form a display system. Media design for this system required consideration of the distributed nature of the devices. We outline the software and hardware designed for this system, and describe two approaches to the software and media design, one whereby we employ the distributed devices themselves for computation purposes (the approach we ultimately selected), and another whereby the devices are controlled from a central server that is performing most of the computation necessary. We then review these approaches and outline possibilities for future research."}
{"_id": "7a12bc84a56baf256253f9bf4c335b88a7148769", "title": "Corpora \u2013 the best possible solution for tracking rare phenomena in underresourced languages : clitics in Bosnian , Croatian and Serbian", "text": "Complex linguistic phenomena, such as Clitic Climbing in Bosnian, Croatian and Serbian, are often described intuitively, only from the perspective of the main tendency. In this paper, we argue that web corpora currently offer the best source of empirical material for studying Clitic Climbing in BCS. They thus allow the most accurate description of this phenomenon, as less frequent constructions can be tracked only in big, well-annotated data sources. We compare the properties of web corpora for BCS with traditional sources and give examples of studies on CC based on web corpora. Furthermore, we discuss problems related to web corpora and suggest some improvements for"}
{"_id": "d8668faa54063d4c26b0fe4f076f2ec1a875b7b0", "title": "Improving deep neural networks using softplus units", "text": "Recently, DNNs have achieved great improvement for acoustic modeling in speech recognition tasks. However, it is difficult to train the models well when the depth grows. One main reason is that when training DNNs with traditional sigmoid units, the derivatives damp sharply while back-propagating between layers, which restrict the depth of model especially with insufficient training data. To deal with this problem, some unbounded activation functions have been proposed to preserve sufficient gradients, including ReLU and softplus. Compared with ReLU, the smoothing and nonzero properties of the in gradient makes softplus-based DNNs perform better in both stabilization and performance. However, softplus-based DNNs have been rarely exploited for the phoneme recognition task. In this paper, we explore the use of softplus units for DNNs in acoustic modeling for context-independent phoneme recognition tasks. The revised RBM pre-training and dropout strategy are also applied to improve the performance of softplus units. Experiments show that, the DNNs with softplus units get significantly performance improvement and uses less epochs to get convergence compared to the DNNs trained with standard sigmoid units and ReLUs."}
{"_id": "2b1746683306e6f6bb3556fbf28e3b1feed20267", "title": "Sign of a Threat : The Effects of Warning Systems in Survival Horror Games", "text": "This paper studies the way survival horror games are designed to frighten and scare the gamer. Comparing video games and movies, the experiential state of the gamer and that of the spectator, as well as the shock of surprise and tension suspense, it focuses on the effects of forewarning on the emotional responses to survival horror games."}
{"_id": "33da981657b43c4bdb7d9ffaa81703ff3a5066bf", "title": "Analyzing and predicting viral tweets", "text": "Twitter and other microblogging services have become indispensable sources of information in today's web. Understanding the main factors that make certain pieces of information spread quickly in these platforms can be decisive for the analysis of opinion formation and many other opinion mining tasks.\n This paper addresses important questions concerning the spread of information on Twitter. What makes Twitter users retweet a tweet? Is it possible to predict whether a tweet will become \"viral\", i.e., will be frequently retweeted? To answer these questions we provide an extensive analysis of a wide range of tweet and user features regarding their influence on the spread of tweets. The most impactful features are chosen to build a learning model that predicts viral tweets with high accuracy. All experiments are performed on a real-world dataset, extracted through a public Twitter API based on user IDs from the TREC 2011 microblog corpus."}
{"_id": "2382814a374fbfaa253e7830415482bf4166bd61", "title": "Structural-RNN: Deep Learning on Spatio-Temporal Graphs", "text": "Deep Recurrent Neural Network architectures, though remarkably capable at modeling sequences, lack an intuitive high-level spatio-temporal structure. That is while many problems in computer vision inherently have an underlying high-level structure and can benefit from it. Spatiotemporal graphs are a popular tool for imposing such high-level intuitions in the formulation of real world problems. In this paper, we propose an approach for combining the power of high-level spatio-temporal graphs and sequence learning success of Recurrent Neural Networks (RNNs). We develop a scalable method for casting an arbitrary spatio-temporal graph as a rich RNN mixture that is feedforward, fully differentiable, and jointly trainable. The proposed method is generic and principled as it can be used for transforming any spatio-temporal graph through employing a certain set of well defined steps. The evaluations of the proposed approach on a diverse set of problems, ranging from modeling human motion to object interactions, shows improvement over the state-of-the-art with a large margin. We expect this method to empower new approaches to problem formulation through high-level spatio-temporal graphs and Recurrent Neural Networks."}
{"_id": "46b2cd0ef7638dcb4a6220a52232712beb2fa850", "title": "Deep Representation Learning for Human Motion Prediction and Classification", "text": "Generative models of 3D human motion are often restricted to a small number of activities and can therefore not generalize well to novel movements or applications. In this work we propose a deep learning framework for human motion capture data that learns a generic representation from a large corpus of motion capture data and generalizes well to new, unseen, motions. Using an encoding-decoding network that learns to predict future 3D poses from the most recent past, we extract a feature representation of human motion. Most work on deep learning for sequence prediction focuses on video and speech. Since skeletal data has a different structure, we present and evaluate different network architectures that make different assumptions about time dependencies and limb correlations. To quantify the learned features, we use the output of different layers for action classification and visualize the receptive fields of the network units. Our method outperforms the recent state of the art in skeletal motion prediction even though these use action specific training data. Our results show that deep feedforward networks, trained from a generic mocap database, can successfully be used for feature extraction from human motion data and that this representation can be used as a foundation for classification and prediction."}
{"_id": "8027f50bbcee3938196c6d5519464df16c275f8d", "title": "On Human Motion Prediction Using Recurrent Neural Networks", "text": "Human motion modelling is a classical problem at the intersection of graphics and computer vision, with applications spanning human-computer interaction, motion synthesis, and motion prediction for virtual and augmented reality. Following the success of deep learning methods in several computer vision tasks, recent work has focused on using deep recurrent neural networks (RNNs) to model human motion, with the goal of learning time-dependent representations that perform tasks such as short-term motion prediction and long-term human motion synthesis. We examine recent work, with a focus on the evaluation methodologies commonly used in the literature, and show that, surprisingly, state of the art performance can be achieved by a simple baseline that does not attempt to model motion at all. We investigate this result, and analyze recent RNN methods by looking at the architectures, loss functions, and training procedures used in state-of-the-art approaches. We propose three changes to the standard RNN models typically used for human motion, which results in a simple and scalable RNN architecture that obtains state-of-the-art performance on human motion prediction."}
{"_id": "59b7a5d1e35bd9383fa2e9f1820b340a16531d3f", "title": "Trial for RDF: adapting graph query languages for RDF data", "text": "Querying RDF data is viewed as one of the main applications of graph query languages, and yet the standard model of graph databases -- essentially labeled graphs -- is different from the triples-based model of RDF. While encodings of RDF databases into graph data exist, we show that even the most natural ones are bound to lose some functionality when used in conjunction with graph query languages. The solution is to work directly with triples, but then many properties taken for granted in the graph database context (e.g., reachability) lose their natural meaning.\n Our goal is to introduce languages that work directly over triples and are closed, i.e., they produce sets of triples, rather than graphs. Our basic language is called TriAL, or Triple Algebra: it guarantees closure properties by replacing the product with a family of join operations. We extend TriAL with recursion, and explain why such an extension is more intricate for triples than for graphs. We present a declarative language, namely a fragment of datalog, capturing the recursive algebra. For both languages, the combined complexity of query evaluation is given by low-degree polynomials. We compare our languages with relational languages, such as finite-variable logics, and previously studied graph query languages such as adaptations of XPath, regular path queries, and nested regular expressions; many of these languages are subsumed by the recursive triple algebra. We also provide examples of the usefulness of TriAL in querying graph and RDF data."}
{"_id": "3c094494f6a911de3087ed963d3d893f6f2b1d71", "title": "Clinical application of the Hybrid Assistive Limb (HAL) for gait training\u2014a systematic review", "text": "OBJECTIVE\nThe aim of this study was to review the literature on clinical applications of the Hybrid Assistive Limb system for gait training.\n\n\nMETHODS\nA systematic literature search was conducted using Web of Science, PubMed, CINAHL and clinicaltrials.gov and additional search was made using reference lists in identified reports. Abstracts were screened, relevant articles were reviewed and subject to quality assessment.\n\n\nRESULTS\nOut of 37 studies, 7 studies fulfilled inclusion criteria. Six studies were single group studies and 1 was an explorative randomized controlled trial. In total, these studies involved 140 participants of whom 118 completed the interventions and 107 used HAL for gait training. Five studies concerned gait training after stroke, 1 after spinal cord injury (SCI) and 1 study after stroke, SCI or other diseases affecting walking ability. Minor and transient side effects occurred but no serious adverse events were reported in the studies. Beneficial effects on gait function variables and independence in walking were observed.\n\n\nCONCLUSIONS\nThe accumulated findings demonstrate that the HAL system is feasible when used for gait training of patients with lower extremity paresis in a professional setting. Beneficial effects on gait function and independence in walking were observed but data do not allow conclusions. Further controlled studies are recommended."}
{"_id": "12947fa637d2e95ba533672a78cc2897f6e2d666", "title": "Keyphrase Extraction Using Knowledge Graphs", "text": "Extracting keyphrases from documents automatically is an important and interesting task since keyphrases provide a quick summarization for documents. Although lots of efforts have been made on keyphrase extraction, most of the existing methods (the co-occurrence-based methods and the statistic-based methods) do not take semantics into full consideration. The co-occurrence-based methods heavily depend on the co-occurrence relations between two words in the input document, which may ignore many semantic relations. The statistic-based methods exploit the external text corpus to enrich the document, which introduce more unrelated relations inevitably. In this paper, we propose a novel approach to extract keyphrases using knowledge graphs, based on which we could detect the latent relations of two keyterms (i.e., noun words and named entities) without introducing many noises. Extensive experiments over real data show that our method outperforms the state-of-the-art methods including the graph-based co-occurrence methods and statistic-based clustering methods."}
{"_id": "50fb1890a2e71249e8f50af72495233e52059158", "title": "ValueCharts: analyzing linear models expressing preferences and evaluations", "text": "In this paper we propose ValueCharts, a set of visualizations and interactive techniques intended to support decision-makers in inspecting linear models of preferences and evaluation. Linear models are popular decision-making tools for individuals, groups and organizations. In Decision Analysis, they help the decision-maker analyze preferential choices under conflicting objectives. In Economics and the Social Sciences, similar models are devised to rank entities according to an evaluative index of interest. The fundamental goal of building models expressing preferences and evaluations is to help the decision-maker organize all the information relevant to a decision into a structure that can be effectively analyzed. However, as models and their domain of application grow in complexity, model analysis can become a very challenging task. We claim that ValueCharts will make the inspection and application of these models more natural and effective. We support our claim by showing how ValueCharts effectively enable a set of basic tasks that we argue are at the core of analyzing and understanding linear models of preferences and evaluation."}
{"_id": "7f793c6862e9b8e6c55643521a96ed00b4722bb3", "title": "Artificial Neural Networks to Forecast Air Pollution 221 Artificial Neural Networks to Forecast Air Pollution", "text": "European laws concerning urban and suburban air pollution requires the analysis and implementation of automatic operating procedures in order to prevent the risk for the principal air pollutants to be above alarm thresholds (e.g. the Directive 2002/3/EC for ozone or the Directive 99/30/CE for the particulate matter with an aerodynamic diameter of up to 10 \u03bcm, called PM10). As an example of European initiative to support the investigation of air pollution forecast, the COST Action ES0602 (Towards a European Network on Chemical Weather Forecasting and Information Systems) provides a forum for standardizing and benchmarking approaches in data exchange and multi-model capabilities for air quality forecast and (near) real-time information systems in Europe, allowing information exchange between meteorological services, environmental agencies, and international initiatives. Similar efforts are also proposed by the National Oceanic and Atmospheric Administration (NOAA) in partnership with the United States Environmental Protection Agency (EPA), which are developing an operational, nationwide Air Quality Forecasting (AQF) system. Critical air pollution events frequently occur where the geographical and meteorological conditions do not permit an easy circulation of air and a large part of the population moves frequently between distant places of a city. These events require drastic measures such as the closing of the schools and factories and the restriction of vehicular traffic. Indeed, many epidemiological studies have consistently shown an association between particulate air pollution and cardiovascular (Brook et al., 2007) and respiratory (Pope et al., 1991) diseases. The forecasting of such phenomena with up to two days in advance would allow taking more efficient countermeasures to safeguard citizens\u2019 health. Air pollution is highly correlated with meteorological variables (Cogliani, 2001). Indeed, pollutants are usually entrapped into the planetary boundary layer (PBL), which is the lowest part of the atmosphere and has behaviour directly influenced by its contact with the ground. It responds to surface forcing in a timescale of an hour or less. In this layer, physical quantities such as flow velocity, temperature, moisture and pollutants display rapid fluctuations (turbulence) and vertical mixing is strong. Different automatic procedures have been developed to forecast the time evolution of the concentration of air pollutants, using also meteorological data. Mathematical models of the 10"}
{"_id": "2366a2c3c629bb20fdcb8f2b56136f3642a4edd6", "title": "Grace: safe multithreaded programming for C/C++", "text": "The shift from single to multiple core architectures means that programmers must write concurrent, multithreaded programs in order to increase application performance. Unfortunately, multithreaded applications are susceptible to numerous errors, including deadlocks, race conditions, atomicity violations, and order violations. These errors are notoriously difficult for programmers to debug.\n This paper presents Grace, a software-only runtime system that eliminates concurrency errors for a class of multithreaded programs: those based on fork-join parallelism. By turning threads into processes, leveraging virtual memory protection, and imposing a sequential commit protocol, Grace provides programmers with the appearance of deterministic, sequential execution, while taking advantage of available processing cores to run code concurrently and efficiently. Experimental results demonstrate Grace's effectiveness: with modest code changes across a suite of computationally-intensive benchmarks (1-16 lines), Grace can achieve high scalability and performance while preventing concurrency errors."}
{"_id": "0a4102694c46dbe389177663adc27b6e4aa98d85", "title": "Structural Data De-anonymization: Quantification, Practice, and Implications", "text": "In this paper, we study the quantification, practice, and implications of structural data (e.g., social data, mobility traces) De-Anonymization (DA). First, we address several open problems in structural data DA by quantifying perfect and (1-\u03b5)-perfect structural data DA}, where \u03b5 is the error tolerated by a DA scheme. To the best of our knowledge, this is the first work on quantifying structural data DA under a general data model, which closes the gap between structural data DA practice and theory. Second, we conduct the first large-scale study on the de-anonymizability of 26 real world structural datasets, including Social Networks (SNs), Collaborations Networks, Communication Networks, Autonomous Systems, and Peer-to-Peer networks. We also quantitatively show the conditions for perfect and (1-\u03b5)-perfect DA of the 26 datasets. Third, following our quantification, we design a practical and novel single-phase cold start Optimization based DA} (ODA) algorithm. Experimental analysis of ODA shows that about 77.7% - 83.3% of the users in Gowalla (.2M users and 1M edges) and 86.9% - 95.5% of the users in Google+ (4.7M users and 90.8M edges) are de-anonymizable in different scenarios, which implies optimization based DA is implementable and powerful in practice. Finally, we discuss the implications of our DA quantification and ODA and provide some general suggestions for future secure data publishing."}
{"_id": "eb3398d205664dc402de6783d45a5d169f1cc335", "title": "Technology , globalization , and international competitiveness : Challenges for developing countries", "text": "This paper traces the role of technology in economic growth and competitiveness, summarizes the strategies of the fastest growing economies over the last 50 years from the perspective of their technology strategy, summarizes some of the key global trends which are making it more difficult for developing countries to replicate the fast growth experience of the countries mentioned, and traces the impact of the rise of China on developing countries. The main argument of this paper is that technology is an increasingly important element of globalisation and of competitiveness and that the acceleration in the rate of technological change and the pre-requisites necessary to participate effectively in globalisation are making it more difficult for many developing countries to compete. Section 2 gives a long-term perspective on technology and economic growth. Section 3 presents a global overview of changes in regional competitiveness as revealed by economic growth. Section 4 identifies some of the high performers in the last 50 years and reviews the strategies of the high performing East Asian economies comprising the well known \u201cgang of four\u201d, plus three South East Asian countries. Section 5 reviews the strategies of the BRICM countries, the largest developing country economies (Brazil, Russia, India, China and Mexico). It also argues that it is harder for developing countries to replicate the success of the high performing East Asian countries for two main reasons. One relates to new elements in the global competitive environment. These are summarized in section 6. The other is the rapid rise of China (and to a lesser extent India). This is covered in Section 7, which also includes a preliminary analysis of the effects of the rapid rise of China on the rest of the world. Finally, Section 8 draws some conclusions. Developing countries must develop more technological capability and greater flexibility to succeed in the more demanding and asymmetric global environment. It is likely that the pressures of globalisation and greater international competition generate strong protectionist retrenchment in both developed and developing countries. These should be resisted. The world as a whole will be better off if developed countries focus on increasing their flexibility to adjust to changing comparative advantage resulting from rapid technical change, and developing countries focus on increasing their education, infrastructure, and technological capability. There remain however large asymmetries in the global system and greater efforts need to be made to provide some global balancing and transfer mechanisms."}
{"_id": "a2cbaa3913fb53adc28ace2c60f161b1ef742f2e", "title": "A Novel Multi-task Deep Learning Model for Skin Lesion Segmentation and Classification", "text": "In this study, a multi-task deep neural network is proposed for skin lesion analysis. The proposed multi-task learning model solves different tasks (e.g., lesion segmentation and two independent binary lesion classifications) at the same time by exploiting commonalities and differences across tasks. This results in improved learning efficiency and potential prediction accuracy for the task-specific models, when compared to training the individual models separately. The proposed multi-task deep learning model is trained and evaluated on the dermoscopic image sets from the International Skin Imaging Collaboration (ISIC) 2017 Challenge \u201cSkin Lesion Analysis towards Melanoma Detection\u201d, which consists of 2000 training samples and 150 evaluation samples. The experimental results show that the proposed multi-task deep learning model achieves promising performances on skin lesion segmentation and classification. The average value of Jaccard index for lesion segmentation is 0.724, while the average values of area under the receiver operating characteristic curve (AUC) on two individual lesion classifications are 0.880 and 0.972, respectively."}
{"_id": "b072513aab92e64320dd6abccf132d79ccdefa70", "title": "Multi-Task Learning with Multi-View Attention for Answer Selection and Knowledge Base Question Answering", "text": "Answer selection and knowledge base question answering (KBQA) are two important tasks of question answering (QA) systems. Existing methods solve these two tasks separately, which requires large number of repetitive work and neglects the rich correlation information between tasks. In this paper, we tackle answer selection and KBQA tasks simultaneously via multi-task learning (MTL), motivated by the following motivations. First, both answer selection and KBQA can be regarded as a ranking problem, with one at text-level while the other at knowledge-level. Second, these two tasks can benefit each other: answer selection can incorporate the external knowledge from knowledge base (KB), while KBQA can be improved by learning contextual information from answer selection. To fulfill the goal of jointly learning these two tasks, we propose a novel multi-task learning scheme that utilizes multi-view attention learned from various perspectives to enable these tasks to interact with each other as well as learn more comprehensive sentence representations. The experiments conducted on several real-world datasets demonstrate the effectiveness of the proposed method, and the performance of answer selection and KBQA is improved. Also, the multi-view attention scheme is proved to be effective in assembling attentive information from different representational perspectives."}
{"_id": "051e5e3e591f0ffbb25158aa7207ee750a6e1e43", "title": "A Unified Approach for Learning the Parameters of Sum-Product Networks", "text": "\u2022 We present a unified approach for learning the parameters of Sum-Product networks (SPNs). \u2022 We construct a more efficient factorization of complete and decomposable SPN into a mixture of trees, with each tree being a product of univariate distributions. \u2022 We show that the MLE problem for SPNs can be formulated as a signomial program. \u2022 We construct two parameter learning algorithms for SPNs by using sequential monomial approximations (SMA) and the concave-convex procedure (CCCP). Both SMA and CCCP admit multiplicative weight updates. \u2022 We prove the convergence of CCCP on SPNs."}
{"_id": "eaf229ab3886680c7df4a14605562fd2db97d580", "title": "A survey on skin detection in colored images", "text": "Color is an efficient feature for object detection as it has the advantage of being invariant to changes in scaling, rotation, and partial occlusion. Skin color detection is an essential required step in various applications related to computer vision. The rapidly-growing research in human skin detection is based on the premise that information about individuals, intent, mode, and image contents can be extracted from colored images, and computers can then respond in an appropriate manner. Detecting human skin in complex images has proven to be a challenging problem because skin color can vary dramatically in its appearance due to many factors such as illumination, race, aging, imaging conditions, and complex background. However, many methods have been developed to deal with skin detection problem in color images. The purpose of this study is to provide an up-to-date survey on skin color modeling and detection methods. We also discuss relevant issues such as color spaces, cost and risks, databases, testing, and benchmarking. After investigating these methods and identifying their strengths and limitations, we conclude with several implications for future direction."}
{"_id": "0a4f1477371e61029d53bc1404c26cecc9dc48e4", "title": "The evolution of cooperation.", "text": "Cooperation in organisms, whether bacteria or primates, has been a difficulty for evolutionary theory since Darwin. On the assumption that interactions between pairs of individuals occur on a probabilistic basis, a model is developed based on the concept of an evolutionarily stable strategy in the context of the Prisoner's Dilemma game. Deductions from the model, and the results of a computer tournament show how cooperation based on reciprocity can get started in an asocial world, can thrive while interacting with a wide range of other strategies, and can resist invasion once fully established. Potential applications include specific aspects of territoriality, mating, and disease."}
{"_id": "1fc8209f86ec99e3275a2cb78153dc2ccc7e1be9", "title": "Visual Graphs from Motion (VGfM): Scene understanding with object geometry reasoning", "text": "Recent approaches on visual scene understanding attempt to build a scene graph \u2013 a computational representation of objects and their pairwise relationships. Such rich semantic representation is very appealing, yet difficult to obtain from a single image, especially when considering complex spatial arrangements in the scene. Differently, an image sequence conveys useful information using the multi-view geometric relations arising from camera motions. Indeed, object relationships are naturally related to the 3D scene structure. To this end, this paper proposes a system that first computes the geometrical location of objects in a generic scene and then efficiently constructs scene graphs from video by embedding such geometrical reasoning. Such compelling representation is obtained using a new model where geometric and visual features are merged using an RNN framework. We report results on a dataset we created for the task of 3D scene graph generation in multiple views."}
{"_id": "82ce357b9927dde027fed4ad61ad00a2a495c6f6", "title": "Chip design of a 12-bit 5MS/s fully differential SAR ADC with resistor-capacitor array DAC technique for wireless application", "text": "A 1.8-V 12-bit 5MS/s successive approximation register (SAR) analog-to-digital converter (ADC) implemented in TSMC 0.18-um CMOS process is presented. To reduce DAC switching energy and chip area, a hybrid resistor-capacitor DAC is applied. To save energy, asynchronous control logic to drive the ADC is used. A pre-amplifier based comparator circuit is built to reduce the kickback noise from the dynamic latch designs. With 1.8 V supply voltage and 5.0 MHz sampling rate, measured results achieve -0.55/0.72 LSB (Least Significant Bit) of DNL (differential nonlinearity) and -0.78/0.92 LSB of integral nonlinearity (INL) respectively, and 10.76 bits of an effective number of bits (ENOB) at 1MHz input frequency. The chip area is 0.83 mm2 including pads and the power consumption is 490\u03bcW for optical and wireless communications."}
{"_id": "cf23584a0ef704404f4d7093be0b204203412d8c", "title": "Captcha as graphical passwords-enhanced with video-based captcha for secure services", "text": "CAPTCHAs, also known as reverse Turing tests are real-time assessments that are commonly used by programs to tell humans and machines apart. This can be achieved by assigning and assessing or evaluating hard AI problems such that these problems could only be solved easily by human but not by machines. A new security approach based on hard AI problems and Captcha technology is known as Captcha as gRaphical Passwords (CaRP). This scheme can address many security problems such as dictionary attacks, online guessing attacks and shoulder-surfing attacks etc. In this paper, we present an enhanced security for the CaRP scheme i,e CaRP with motion-based Captcha. The motion can be done by using video. The movement of Captcha ensures high security over the normal CaRP scheme. Video-based captcha can provide a great challenge for humans in exploiting the remarkable perceptual abilities or skills of humans to unravel structure-from-motion. The scheme can resist attack caused due to moving object detection. Thus it can thwart many attacks, can enforce more security and it stipulates security along with usability to legitimate users in dealing with real time applications."}
{"_id": "25458ba08da7883d3649432986f0c5d711215dbf", "title": "Quantum-inspired evolutionary algorithm for a class of combinatorial optimization", "text": "This paper proposes a novel evolutionary algorithm inspired by quantum computing, called a quantum-inspired evolutionary algorithm (QEA), which is based on the concept and principles of quantum computing, such as a quantum bit and superposition of states. Like other evolutionary algorithms, QEA is also characterized by the representation of the individual, the evaluation function, and the population dynamics. However, instead of binary, numeric, or symbolic representation, QEA uses a Q-bit, defined as the smallest unit of information, for the probabilistic representation and a Q-bit individual as a string of Q-bits. A Q-gate is introduced as a variation operator to drive the individuals toward better solutions. To demonstrate its effectiveness and applicability, experiments are carried out on the knapsack problem, which is a well-known combinatorial optimization problem. The results show that QEA performs well, even with a small population, without premature convergence as compared to the conventional genetic algorithm."}
{"_id": "70127f6cd77b92f0e7228179466edb4f259f56c2", "title": "Towards secure mobile cloud computing: A survey", "text": "Mobile cloud computing is gaining popularity among mobile users. The ABI Research predicts that the number of mobile cloud computing subscribers is expected to grow from 42.8 million (1.1% of total mobile users) in 2008 to 998 million (19% of total mobile users) in 2014. Despite the hype achieved by mobile cloud computing, the growth of mobile cloud computing subscribers is still below expectations. According to the recent survey conducted by the International Data Corporation, most IT Executives and CEOs are not interested in adopting such services due to the risks associatedwith security and privacy. The security threats have become a hurdle in the rapid adaptability of the mobile cloud computing paradigm. Significant efforts have been devoted in research organizations and academia to build securemobile cloud computing environments and infrastructures. In spite of the efforts, there are a number of loopholes and challenges that still exist in the security policies of mobile cloud computing. This literature review: (a)highlights the current state of the artwork proposed to securemobile cloud computing infrastructures, (b) identifies the potential problems, and (c) provides a taxonomy of the state of the art. \u00a9 2012 Elsevier B.V. All rights reserved."}
{"_id": "ef406ea1d5aaecdeb0dfef81fb7947907bfe605f", "title": "Kinematics and workspace analysis for a novel 6-PSS parallel manipulator", "text": "In this paper, a novel 6-PSS parallel manipulator is designed and the three dimensional structure model is obtained with the Solidworks. The inverse kinematics analysis is performed based on the designed geometric parameters. And according to this analysis, we obtain the theoretical simulation results of virtual structures with different positions and orientations by using MATLAB software. These simulation structures verify the feasibility of this novel 6-PSS parallel manipulator. With a given orientation, a numerical search method is adopted in finding the reachable workspace with the judgment conditions of physical constraints, and the relationship between the reachable workspace and the size of the structure is studied in virtual simulation. Therefore, the range including the largest reachable workspace is achieved and a possible further application of this novel 6-PSS parallel manipulator is proposed, especially in some fields of different structure size requirements."}
{"_id": "e731dfffffef99a7e8dfdb4dc0b34b30f02c8e0b", "title": "Design and Implementation of a Pure Sine Wave Single Phase Inverter for Photovoltaic Applications", "text": "This paper aims at developing the control circuit for a single phase inverter which produces a pure sine wave with an output voltage that has the same magnitude and frequency as a grid voltage. A microcontroller, based on an advanced technology to generate a sine wave with fewer harmonics, less cost and a simpler design. The technique used is the sinusoidal pulse width modulation signal (SPWM) which is generated by microcontroller. The designed inverter is tested on various AC loads and is essentially focused upon low power electronic applications such as a lamp, a fan and chargers etc. The proposed model of the inverter can improve the output wave forms of the inverter and the dead time control reduced to 63\u03bcs. The finished design is simulated in Proteus and PSIM software to ensure output results which is verified practically."}
{"_id": "0abe9fda6240ba72e2f76afe10d5f6181e144527", "title": "Social Turing Tests: Crowdsourcing Sybil Detection", "text": "As popular tools for spreading spam and malware, Sybils (or fake accounts) pose a serious threat to online communities such as Online Social Networks (OSNs). Today, sophisticated attackers are creating realistic Sybils that effec tively befriend legitimate users, rendering most automated Sybil detection techniques ineffective. In this paper, we explor e the feasibility of acrowdsourcedSybil detection system for OSNs. We conduct a large user study on the ability of humans to detect today\u2019s Sybil accounts, using a large corpus of ground-truth Sybil accounts from the Facebook and Renren networks. We analyze detection accuracy by both \u201cexperts\u201d and \u201cturkers\u201d under a variety of conditions, and find that while turkers vary significantly in their effective ness, experts consistently produce near-optimal results. We use these results to drive the design of a multi-tier crowdsourcing Sybil detection system. Using our user study data, we show that this system is scalable, and can be highly effective either as a standalone system or as a complementary technique to current tools."}
{"_id": "0f56311dad9f03083a4f4e791aab0b6e0aa2ff07", "title": "Building Natural-Language Generation Systems", "text": "Reiter and Dale\u2019s book is about natural language generation (NLG), and describes methods for building systems of this kind. It offers a comprehensive overview of the field of NLG, from a perspective that has rarely been described elsewhere. It is very valuable for people with little or even no knowledge about NLG particulars who intend to investigate applications where NLG technology might play a certain role, as well as for people who want to obtain an application-oriented overview of the field. In this sense, all communities intended to be served by the book\u2014students, academics in related areas, and software developers\u2014are well addressed. In addition, some sections provide material that is probably new even to researchers with considerable experience in the field, such as the discussions on corpus analysis and domain modeling. In the introduction, the field of NLG is briefly characterized from researchand application-oriented perspectives and is illustrated by screen shots produced by several systems. Then conditions for effective uses of this technology are elaborated and contrasted with conditions where other techniques are more appropriate, and methods for determining the intended functionality of a system to be built are discussed. The main sections of the book are devoted to the prototypical architecture of applicationoriented NLG systems and their major processing phases: document planning, microplanning, and surface realization. Each of these three phases is illustrated by a number of detailed examples, demonstrating the successive refinements of utterance specifications in the course of processing. In the final section, the embedding of natural language processing technology is discussed, featuring typography, combined uses with graphics and hypertext, and integration with speech. The methods are illustrated by a large number of examples\u2014the book contains more than 120 figures on its 248 pages. At the end of each section, a number of useful references for further reading are related to the section topics. In the appendix, a table summarizing the 35 systems referred to in the book is given. In general, the presentations in the book are beneficial for their high degree of detail in documenting representations. Running examples are illustrated by intermediate representations at various stages of processing. They demonstrate very well the increasing degrees of precision in which utterances are specified and the associated commitments about possible text portions that express these specifications. Particularly valuable are the comparisons of alternative representations, ranging from skeletal propositions over lexicalized case frames to canned text, including a comparison of the consequences for the distribution of work over the system components involved and discussions of the associated pros and cons. These chapters provide a number of valuable hints, given the currently low degree of standardization in the field. Further important aspects addressed, which have been widely ignored in the literature so far,"}
{"_id": "0e78074a081f2d3a35fbb6f74ba9b7e27e64757b", "title": "Deep Neural Networks for Acoustic Modeling in Speech Recognition", "text": "Most current speech recognition systems use hidden Markov models (HMMs) to deal with the temporal variability of speech and Gaussian mixture models to determine how well each state of each HMM fits a frame or a short window of frames of coefficients that represents the acoustic input. An alternative way to evaluate the fit is to use a feedforward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks with many hidden layers, that are trained using new methods have been shown to outperform Gaussian mixture models on a variety of speech recognition benchmarks, sometimes by a large margin. This paper provides an overview of this progress and represents the shared views of four research groups who have had recent successes in using deep neural networks for acoustic modeling in speech recognition."}
{"_id": "199766ed535029b0b284479765b5ebbe50a2789f", "title": "Practical Issues in Automatic Documentation Generation", "text": "PLANDoc, a system under joint development by Columbia and Bellcore, documents the activity of planning engineers as they study telephone routes. It takes as input a trace of the engineer\u2019s interaction with a network planning tool and produces 1-2 page summary. In this paper, we describe the user needs analysis we performed and how it influenced the development of PLANDoc. In particular, we show how it pinpointed the need for a sublanguage specification, allowing us to identify input messages and to characterize the different sentence paraphrases for realizing them. We focus on the systematic use of conjunction in combination with paraphrase that we developed for PLANDoc, which allows for the generation of summaries that are both concise\u2013avoiding repetition of similar information, and fluent\u2013 avoiding repetition of similar phrasing."}
{"_id": "874917014eefd1a3ec439f294dc69adf61860ad0", "title": "S3Email: A method for securing emails from service providers", "text": "We often send our confidential information such as passport, credit card, social security numbers over email without concern about the security of email services. Existing network security mechanisms provide adequate security from external malicious adversaries and eavesdroppers, but they don't guarantee that the email service providers (ESPs) wouldn't or can't access our email data themselves, which in some cases could be highly confidential. One of the ways to protect email data from ESPs is to use Pretty Good Privacy (PGP) that has many limitations including key storage problem and dependability on third party services, making it cumbersome to use in practice. In this paper, we present S3Email method that provides email security against ESPs. The proposed method uses a cryptographic secret sharing technique in a novel way and encrypts the email metadata, body and attachments before the email is sent. In the proposed solution, the email sender and receiver must have at least two email accounts on the existing ESPs, which is not unusual today. Experiments and analysis show that the S3Email method provides information theoretic security with minimal computational overhead."}
{"_id": "f0e70bdbad807fec3ab4d5e7951c13408273fb59", "title": "Robust solutions for network design under transportation cost and demand uncertainty", "text": "In many applications, the network design problem (NDP) faces significant uncertainty in transportation costs and demand, as it can be difficult to estimate current (and future values) of these quantities. In this paper, we present a robust optimization-based formulation for the NDP under transportation cost and demand uncertainty. We show that solving an approximation to this robust formulation of the NDP can be done efficiently for a network with single origin and destination per commodity and general uncertainty in transportation costs and demand that are independent of each other. For a network with path constraints, we propose an efficient column generation procedure to solve the linear programming relaxation. We also present computational results that show that the approximate robust solution found provides significant savings in the worst case while incurring only minor sub-optimality for specific instances of the uncertainty. Journal of the Operational Research Society advance online publication, 7 February 2007 doi:10.1057/palgrave.jors.2602362"}
{"_id": "2518d6b39387cc6c2ac64b3f921d5e60d544072c", "title": "Design and Control of Concentric-Tube Robots", "text": "A novel approach toward construction of robots is based on a concentric combination of precurved elastic tubes. By rotation and extension of the tubes with respect to each other, their curvatures interact elastically to position and orient the robot's tip, as well as to control the robot's shape along its length. In this approach, the flexible tubes comprise both the links and the joints of the robot. Since the actuators attach to the tubes at their proximal ends, the robot itself forms a slender curve that is well suited for minimally invasive medical procedures. This paper demonstrates the potential of this technology. Design principles are presented and a general kinematic model incorporating tube bending and torsion is derived. Experimental demonstration of real-time position control using this model is also described."}
{"_id": "1794435f6b541109ee9ea812d80d5b9add95aacd", "title": "Fractal Merkle Tree Representation and Traversal", "text": "We introduce a technique for traversal of Merkle trees, and propose an efficient algorithm that generates a sequence of leaves along with their associated authentication paths. For one choice of parameters, and a total of N leaves, our technique requires a worst-case computational effort of 2 log N/loglog N hash function evaluations per output, and a total storage capacity of less than 1.5 log N/loglog N hash values. This is a simultaneous improvement both in space and time complexity over any previously published algorithm."}
{"_id": "ad7841f04f71653b33919475378d95dc0a372892", "title": "Three-column fixation for complex tibial plateau fractures.", "text": "OBJECTIVES\n1) To introduce a computed tomography-based \"three-column fixation\" concept; and 2) to evaluate clinical outcomes (by using a column-specific fixation technique) for complex tibial plateau fractures (Schatzker classification Types V and VI).\n\n\nDESIGN\nProspective cohort study.\n\n\nSETTING\nLevel 1 trauma center.\n\n\nPATIENTS\nTwenty-nine cases of complex tibial plateau fractures were included. Based on routine x-ray and computed tomography images, all the fractures were classified as a \"three-column fracture,\" which means at least one separate fragment was found in lateral, medial, and posterior columns in the proximal tibia (Schatzker classification Types V and VI).\n\n\nINTERVENTION\nThe patients were operated on in a \"floating position\" with a combined approach, an inverted L-shaped posterior approach combined with an anterior-lateral approach. All three columns of fractures were fixed.\n\n\nOUTCOME MEASURES\nOperative time, blood loss, quality of reduction and alignment, fracture healing, complications, and functional outcomes based on Hospital for Special Surgery score and lower-extremity measure were recorded.\n\n\nRESULTS\nAll the cases were followed for average 27.3 months (range, 24-36 months). All the cases had satisfactory reduction except one case, which had a 4-mm stepoff at the anterior ridge of the tibial plateau postoperatively. No case of secondary articular depression was found. One case had secondary varus deformity, one case had secondary valgus deformity, and two cases of screw loosening occurred postoperatively. No revision surgery was performed. Two cases had culture-negative wound drainage. No infection was noted. The average radiographic bony union time and full weightbearing time were 13.1 weeks (range, 11-16 weeks) and 16.7 weeks (range, 12-24 weeks), respectively. The mean Short Form 36, Hospital for Special Surgery score, and lower-extremity measure at 24 months postoperatively were 89 (range, 80-98), 90 (range, 84-98), and 87 (range, 80-95), respectively. The average range of motion of the affected knee was 2.7\u00b0 to 123.4\u00b0 at 2 years after the operation.\n\n\nCONCLUSION\nThree-column fixation is a new fixation concept in treating complex tibial plateau fractures, which is especially useful for multiplanar fractures involving the posterior column. The combination of posterior and anterior-lateral approaches is a safe and effective way to have direct reduction and satisfactory fixation for such difficult tibial plateau fractures."}
{"_id": "07b1c96d8d2eaead6ab5064b289ceba8ceeb716f", "title": "Authenticated Append-only Skip Lists", "text": "In this work we describe, design and analyze the security of a tamper-evident, append-only data structure for maintaining secure data sequences in a loosely coupled distributed system where individual system components may be mutually distrust-ful. The resulting data structure, called an Authenticated Append-Only Skip List (AASL), allows its maintainers to produce one-way digests of the entire data sequence, which they can publish to others as a commitment on the contents and order of the sequence. The maintainer can produce efficiently succinct proofs that authenticate a particular datum in a particular position of the data sequence against a published digest. AASLs are secure against tampering even by malicious data structure maintain-ers. First, we show that a maintainer cannot \" invent \" and authenticate data elements for the AASL after he has committed to the structure. Second, he cannot equivocate by being able to prove conflicting facts about a particular position of the data sequence. This is the case even when the data sequence grows with time and its maintainer publishes successive commitments at times of his own choosing. AASLs can be invaluable in reasoning about the integrity of system logs maintained by untrusted components of a loosely-coupled distributed system ."}
{"_id": "efe9f16b3072340981411a3fab9dd8886981f89c", "title": "High Sensitivity Flexible Capacitive Pressure Sensor Using Polydimethylsiloxane Elastomer Dielectric Layer Micro-Structured by 3-D Printed Mold", "text": "3-D printing is used to fabricate mold for micro-structuring the polydimethylsiloxane (PDMS) elastomeric dielectric layer in capacitive flexible pressure sensors. It is shown that, despite of the limited resolution of the used commercial 3-D printer for producing the mold, the fabricated sensor with the micro-structured PDMS layers can achieve sensitivity higher than previous work using micro-fabricated silicon wafer molds. The devices also present very low detection limit, fast response/recovery speed, excellent durability and good tolerance to variations of ambient temperature and humidity, which enables to reliably monitor weak human physiological signals in real time. As an application example, the flexible pressure sensor is integrated in a wearable system to monitor wrist pulse."}
{"_id": "1b659f8ea800b429f7824d1fec3e5dcd7e387fda", "title": "Optical nano-imaging of gate-tunable graphene plasmons", "text": "The ability to manipulate optical fields and the energy flow of light is central to modern information and communication technologies, as well as quantum information processing schemes. However, because photons do not possess charge, a way of controlling them efficiently by electrical means has so far proved elusive. A promising way to achieve electric control of light could be through plasmon polaritons\u2014coupled excitations of photons and charge carriers\u2014in graphene. In this two-dimensional sheet of carbon atoms, it is expected that plasmon polaritons and their associated optical fields can readily be tuned electrically by varying the graphene carrier density. Although evidence of optical graphene plasmon resonances has recently been obtained spectroscopically, no experiments so far have directly resolved propagating plasmons in real space. Here we launch and detect propagating optical plasmons in tapered graphene nanostructures using near-field scattering microscopy with infrared excitation light. We provide real-space images of plasmon fields, and find that the extracted plasmon wavelength is very short\u2014more than 40 times smaller than the wavelength of illumination. We exploit this strong optical field confinement to turn a graphene nanostructure into a tunable resonant plasmonic cavity with extremely small mode volume. The cavity resonance is controlled in situ by gating the graphene, and in particular, complete switching on and off of the plasmon modes is demonstrated, thus paving the way towards graphene-based optical transistors. This successful alliance between nanoelectronics and nano-optics enables the development of active subwavelength-scale optics and a plethora of nano-optoelectronic devices and functionalities, such as tunable metamaterials, nanoscale optical processing, and strongly enhanced light\u2013matter interactions for quantum devices and biosensing applications."}
{"_id": "c614fd002ee1ca57a91b33e5407670dcc6503094", "title": "Early Career Award. Clarifying the emotive functions of asymmetrical frontal cortical activity.", "text": "Asymmetrical activity over the frontal cortex has been implicated in the experience and expression of emotions and motivations. Explanations of the research have suggested that relatively greater left frontal activity is associated with positive affect and/or approach motivation, and that relatively greater right frontal activity is associated with negative affect and/or withdrawal motivation. In past research, affective valence and motivational direction were confounded, as only positive (negative) affects that were associated with approach (withdrawal) motivation were examined. Consequently, this research is unable to address whether asymmetrical frontal activity is associated with affective valence, motivational direction, or some combination of valence and motivation. In this article, I review research on the emotion of anger, a negative emotion often associated with approach motivation, that suggests that asymmetrical frontal cortical activity is due to motivational direction and not affective valence. Methodological and theoretical implications for the study of the frontal asymmetry specifically, and for emotion and motivation more generally, are discussed."}
{"_id": "92ac57eb4ed7a380b44bfc751824177326832449", "title": "PV-OWL \u2014 Pharmacovigilance surveillance through semantic web-based platform for continuous and integrated monitoring of drug-related adverse effects in open data sources and social media", "text": "The recent EU regulation on Pharmacovigilance [Regulation (EU) 1235/2010, Directive 2010/84/EU] imposes both to Pharmaceutical companies and Public health agencies to maintain updated safety information of drugs, monitoring all available data sources. Here, we present our project aiming to develop a web platform for continuous monitoring of adverse effects of medicines (pharmacovigilance), by integrating information from public databases, scientific literature and social media. The project will start by scanning all available data sources concerning drug adverse events, both open (e.g., FAERS \u2014 FDA Adverse Event Reporting Systems, medical literature, social media, etc.) and proprietary data (e.g., discharge hospital records, drug prescription archives, electronic health records), that require agreement with respective data owners. Subsequent, pharmacovigilance experts will perform a semi-automatic mapping of codes identifying drugs and adverse events, to build the thesaurus of the web based platform. After these preliminary activities, signal generation and prioritization will be the core of the project. This task will result in risk confidence scores for each included data source and a comprehensive global score, indicating the possible association between a specific drug and an adverse event. The software framework MOMIS, an open source data integration system, will allow semi-automatic virtual integration of heterogeneous and distributed data sources. A web platform, based on MOMIS, able to merge many heterogeneous data sets concerning adverse events will be developed. The platform will be tested by external specialized subjects (clinical researchers, public or private employees in pharmacovigilance field). The project will provide a) an innovative way to link, for the first time in Italy, different databases to obtain novel safety indicators; b) a web platform for a fast and easy integration of all available data, useful to verify and validate hypothesis generated in signal detection. Finally, the development of the unified safety indicator (global risk score) will result in a compelling, easy-to-understand, visual format for a broad range of professional and not professional users like patients, regulatory authorities, clinicians, lawyers, human scientists."}
{"_id": "33c2080344f3d2b80a7dc847a674567ff5809542", "title": "Mundanely miraculous: the robot in healthcare", "text": "As both hero and villain, robots have played prominent roles in media such as films and books. Now, robots are no longer hidden away from the public conscious in fictive worlds or real-life factories. Robots are becoming a real part of our everyday encounters in environments such as healthcare settings. In this paper, we describe a discourse analysis of 60 YouTube videos that showcase robots in healthcare activities. Our narrative weaves three discourses that construct visions of the healthcare robot: (1) the miraculous robot as the robot that enhances patient care; (2) the mundane robot as the innocuous robot that integrates into the workflow seamlessly; and (3) the preternatural robot as the robot that is miraculous but never mundane. We propose several contrary visions to this dominant narrative of healthcare robots as a framework for future fieldwork that, we argue, should investigate the institutions of robotics."}
{"_id": "57ba4b6de23a6fc9d45ff052ed2563e5de00b968", "title": "An efficient deep neural networks training framework for robust face recognition", "text": "In recent years, the triplet loss-based deep neural networks (DNN) are widely used in the task of face recognition and achieve the state-of-the-art performance. However, the complexity of training the triplet loss-based DNN is significantly high due to the difficulty in generating high-quality training samples. In this paper, we propose a novel DNN training framework to accelerate the training process of the triplet loss-based DNN and meanwhile to improve the performance of face recognition. More specifically, the proposed framework contains two stages: 1) The DNN initialization. A deep architecture based on the softmax loss function is designed to initialize the DNN. 2) The adaptive fine-tuning. Based on the trained model, a set of high-quality triplet samples is generated and used to fine-tune the network, where an adaptive triplet loss function is introduced to improve the discriminative ability of DNN. Experimental results show that, the model obtained by the proposed DNN training framework achieves 97.3% accuracy on the LFW benchmark with low training complexity, which verifies the efficiency and effectiveness of the proposed framework."}
{"_id": "0a9180f5ee94352352672d77103e115dc140d727", "title": "Complexity in Phonological Treatment: Clinical Factors.", "text": "The construct of complexity has been advanced recently as a potentially contributing variable in the efficacy of treatment for children with functional phonological disorders. Thus far, complexity has been defined in terms of linguistic and psycholinguistic structure, articulatory phonetic variables, and conventional clinical factors. The focus of this paper is on clinical complexity as it influences the selection of target sounds for treatment, with three clinical factors reviewed: consistency of the error, normative age of acquisition, and number of errors to be treated. The collective findings suggest that treatment of seemingly more complex targets results in greater phonological gains. These results are integrated with converging evidence from other populations and language and learning domains."}
{"_id": "0170d21b7ba150bf4bb9185131b11c2c2313a09c", "title": "An Inventory of Preposition Relations", "text": "We describe an inventory of semantic relations that are expressed by prepositions. We define these relations by building on the word sense disambiguation task for prepositions and propose a mapping from preposition senses to the relation labels by collapsing semantically related senses across prepositions."}
{"_id": "0ab7e22cb420db2da4299768cb4209ef96ab6573", "title": "Fitting Tweedie \u2019 s Compound Poisson Model to Insurance Claims Data : Dispersion Modelling", "text": "We reconsider the problem of producing fair and accurate tariffs based on aggregated insurance data giving numbers of claims and total costs for the claims. J\u00f8rgensen and de Souza (Scand. Actuarial J., 1994) assumed Poisson arrival of claims and gamma distributed costs for individual claims. J\u00f8rgensen and de Souza (1994) directly modelled the risk or expected cost of claims per insured unit, \u03bc say. They observed that the dependence of the likelihood function on \u03bc is as for a linear exponential family, so that modelling similar to that of generalized linear models is possible. In this paper we observe that, when modelling the cost of insurance claims, it is generally necessary to model the dispersion of the costs as well as their mean. In order to model the dispersion we use the framework of double generalized linear models. Modelling the dispersion increases the precision of the estimated tariffs. The use of double generalized linear models also allows us to handle the case where only the total cost of claims and not the number of claims has been recorded."}
{"_id": "017ee80e9996a78768cdca09e66157f9de1be6c0", "title": "Feeding State Modulates Behavioral Choice and Processing of Prey Stimuli in the Zebrafish Tectum", "text": "Animals use the sense of vision to scan their environment, respond to threats, and locate food sources. The neural computations underlying the selection of a particular behavior, such as escape or approach, require flexibility to balance potential costs and benefits for survival. For example, avoiding novel visual objects reduces predation risk but negatively affects foraging success. Zebrafish larvae approach small, moving objects (\"prey\") and avoid large, looming objects (\"predators\"). We found that this binary classification of objects by size is strongly influenced by feeding state. Hunger shifts behavioral decisions from avoidance to approach and recruits additional prey-responsive neurons in the tectum, the main visual processing center. Both behavior and tectal function are modulated by signals from the hypothalamic-pituitary-interrenal axis and the serotonergic system. Our study has revealed a neuroendocrine mechanism that modulates the perception of food and the willingness to take risks in foraging decisions."}
{"_id": "515e34476452bbfeb111ce5480035ae1f7aa4bee", "title": "inAir: measuring and visualizing indoor air quality", "text": "Good indoor air quality is a vital part of human health. Poor indoor air quality can contribute to the development of chronic respiratory diseases such as asthma, heart disease, and lung cancer. Complicating matters, poor air quality is extremely difficult for humans to detect through sight and smell alone and existing sensing equipment is designed to be used by and provide data for scientists rather than everyday citizens. We propose inAir, a tool for measuring, visualizing, and learning about indoor air quality. inAir provides historical and real-time visualizations of indoor air quality by measuring tiny hazardous airborne particles as small as 0.5 microns in size. Through user studies we demonstrate how inAir promotes greater awareness and motivates individual actions to improve indoor air quality."}
{"_id": "f02013a10e9aa9112da764d289aebbb826fcb032", "title": "Sub-sentencial Paraphrasing by Contextual Pivot Translation", "text": "The ability to generate or to recognize paraphrases is key to the vast majority of NLP applications. As correctly exploiting context during translation has been shown to be successful, using context information for paraphrasing could also lead to improved performance. In this article, we adopt the pivot approach based on parallel multilingual corpora proposed by (Bannard and Callison-Burch, 2005), which finds short paraphrases by finding appropriate pivot phrases in one or several auxiliary languages and back-translating these pivot phrases into the original language. We show how context can be exploited both when attempting to find pivot phrases, and when looking for the most appropriate paraphrase in the original subsentential \u201cenvelope\u201d. This framework allows the use of paraphrasing units ranging from words to large sub-sentential fragments for which context information from the sentence can be successfully exploited. We report experiments on a text revision task, and show that in these experiments our contextual sub-sentential paraphrasing system outperforms a strong baseline system."}
{"_id": "515a9124980768ff698b5e823826637b42899bf6", "title": "Spatial Filtering for EEG-Based Regression Problems in Brain\u2013Computer Interface (BCI)", "text": "Electroencephalogram (EEG) signals are frequently used in brain\u2013computer interfaces (BCIs), but they are easily contaminated by artifacts and noise, so preprocessing must be done before they are fed into a machine learning algorithm for classification or regression. Spatial filters have been widely used to increase the signal-to-noise ratio of EEG for BCI classification problems, but their applications in BCI regression problems have been very limited. This paper proposes two common spatial pattern (CSP) filters for EEG-based regression problems in BCI, which are extended from the CSP filter for classification, by using fuzzy sets. Experimental results on EEG-based response speed estimation from a large-scale study, which collected 143 sessions of sustained-attention psychomotor vigilance task data from 17 subjects during a 5-month period, demonstrate that the two proposed spatial filters can significantly increase the EEG signal quality. When used in LASSO and $k$-nearest neighbors regression for user response speed estimation, the spatial filters can reduce the root-mean-square estimation error by $10.02-19.77\\%$, and at the same time increase the correlation to the true response speed by $19.39-86.47\\%$."}
{"_id": "320e3c85037a58c533f5d042bcf22c1362a11ed1", "title": "Robust TiN HM process to overcome under etch issue for SAV scheme on 14nm node", "text": "For advance node such as 14nm technology and beyond, back end of line interconnect has implemented self-aligned via (SAV) schemes for better via-metal short process window [1]. A TiN metal hard mask (MHM) is used for the trench pattern definition, after which via lithography and partial via (PV) etch is performed where the TiN was opened. The via etch condition has very good selectivity so that via is formed in a self-aligned fashion by TiN HM [2]. It is indeed to have significant benefit of via to metal short [3]. However, one of the trade off in SAV scheme is the via under etch that whether or not via can well land on an opened oxide area during PV etch. To define the contact area between via resist hole and opened HM oxide as an effective area for via to open successfully. \u201cFig. 1\u201d shows the effective area change during process variation due to Via alignment, Via photo resist CD variation (Via ADICD) and post hard mask etch CD variation (AMICD). \u201cFig. 2\u201d shows the mechanism of this under etch failure mode and typical TEM image from 64nm metal pitch test vehicle. In this work, we try to enlarge the process window by an aggressive AMICD targeting in combining with a higher density TiN material to maintain profile."}
{"_id": "44db0c2f729661e7b30af484a1ad5df4e70cb22a", "title": "Studying Bluetooth Malware Propagation: The BlueBag Project", "text": "Bluetooth worms currently pose relatively little danger compared to Internet scanning worms. The BlueBag project shows targeted attacks through Bluetooth malware using proof-of-concept codes and mobile devices"}
{"_id": "2c080f9543febd770d6d1988508af339b4cb614d", "title": "Multi-view gait recognition using 3D convolutional neural networks", "text": "In this work we present a deep convolutional neural network using 3D convolutions for Gait Recognition in multiple views capturing spatio-temporal features. A special input format, consisting of the gray-scale image and optical flow enhance color invariance. The approach is evaluated on three different datasets, including variances in clothing, walking speeds and the view angle. In contrast to most state-of-the-art Gait Recognition systems the used neural network is able to generalize gait features across multiple large view angle changes. The results show a comparable to better performance in comparison with previous approaches, especially for large view differences."}
{"_id": "bdeff833f3a7c035ff2987984317493d0a02f8d9", "title": "The TUM Gait from Audio, Image and Depth (GAID) database: Multimodal recognition of subjects and traits", "text": "Recognizing people by the way they walk \u2013 also known as gait recognition \u2013 has been studied extensively in the recent past. Recent gait recognition methods solely focus on data extracted from an RGB video stream. With this work, we provide a means for multimodal gait recognition, by introducing the freely available TUM Gait from Audio, Image and Depth (GAID) database. This database simultaneously contains RGB video, depth and audio. With 305 people in three variations, it is one of the largest to-date. To further investigate challenges of time variation, a subset of 32 people is recorded a second time. We define standardized experimental setups for both person identification and for the assessment of the soft biometrics age, gender, height, and shoe type. For all defined experiments, we present several baseline results on all available modalities. These effectively demonstrate multimodal fusion being beneficial to gait recognition."}
{"_id": "0a87428c6b2205240485ee6bb9cfb00fd9ed359c", "title": "Secrets of optical flow estimation and their principles", "text": "The accuracy of optical flow estimation algorithms has been improving steadily as evidenced by results on the Middlebury optical flow benchmark. The typical formulation, however, has changed little since the work of Horn and Schunck. We attempt to uncover what has made recent advances possible through a thorough analysis of how the objective function, the optimization method, and modern implementation practices influence accuracy. We discover that \u201cclassical\u201d flow formulations perform surprisingly well when combined with modern optimization and implementation techniques. Moreover, we find that while median filtering of intermediate flow fields during optimization is a key to recent performance gains, it leads to higher energy solutions. To understand the principles behind this phenomenon, we derive a new objective that formalizes the median filtering heuristic. This objective includes a nonlocal term that robustly integrates flow estimates over large spatial neighborhoods. By modifying this new term to include information about flow and image boundaries we develop a method that ranks at the top of the Middlebury benchmark."}
{"_id": "645996bc6da54ba056ad741f5f8161fc5fd88dbb", "title": "Partial Multi-Modal Sparse Coding via Adaptive Similarity Structure Regularization", "text": "Multi-modal sparse coding has played an important role in many multimedia applications, where data are usually with multiple modalities. Recently, various multi-modal sparse coding approaches have been proposed to learn sparse codes of multi-modal data, which assume that data appear in all modalities, or at least there is one modality containing all data. However, in real applications, it is often the case that some modalities of the data may suffer from missing information and thus result in partial multi-modality data. In this paper, we propose to solve the partial multi-modal sparse coding problem via multi-modal similarity structure regularization. Specifically, we propose a partial multi-modal sparse coding framework termed Adaptive Partial Multi-Modal Similarity Structure Regularization for Sparse Coding (AdaPM2SC), which preserves the similarity structure within the same modality and between different modalities. Experimental results conducted on two real-world datasets demonstrate that AdaPM2SC significantly outperforms the state-of-the-art methods under partial multi-modality scenario."}
{"_id": "35807e94e983b8e9deed3e29325d7575c8fbcb44", "title": "The relationship among young adult college students' depression, anxiety, stress, demographics, life satisfaction, and coping styles.", "text": "Recent research indicates that young adult college students experience increased levels of depression, anxiety, and stress. It is less clear what strategies college health care providers might use to assist students in decreasing these mental health concerns. In this paper, we examine the relative importance of coping style, life satisfaction, and selected demographics in predicting undergraduates' depression, anxiety, and stress. A total of 508 full-time undergraduate students aged 18-24 years completed the study measures and a short demographics information questionnaire. Coping strategies and life satisfaction were assessed using the Brief COPE Inventory and an adapted version of the Brief Students' Multidimensional Life Satisfaction Scale. Depression, anxiety, and stress were measured using the Depression Anxiety and Stress Scale-21 (DASS-21). Multiple regression analyses were used to examine the relative influence of each of the independent variables on depression, anxiety, and stress. Maladaptive coping was the main predictor of depression, anxiety, and stress. Adaptive coping was not a significant predictor of any of the three outcome variables. Reducing maladaptive coping behaviors may have the most positive impact on reducing depression, anxiety, and stress in this population."}
{"_id": "8e773b1840b894603c06b677a0f15ebcf0f26378", "title": "Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task", "text": "We present Spider, a large-scale, complex and cross-domain semantic parsing and textto-SQL dataset annotated by 11 college students. It consists of 10,181 questions and 5,693 unique complex SQL queries on 200 databases with multiple tables, covering 138 different domains. We define a new complex and cross-domain semantic parsing and textto-SQL task where different complex SQL queries and databases appear in train and test sets. In this way, the task requires the model to generalize well to both new SQL queries and new database schemas. Spider is distinct from most of the previous semantic parsing tasks because they all use a single database and the exact same programs in the train set and the test set. We experiment with various state-of-the-art models and the best model achieves only 12.4% exact matching accuracy on a database split setting. This shows that Spider presents a strong challenge for future research. Our dataset and task are publicly available at https://yale-lily. github.io/spider."}
{"_id": "19204fc73d7cd1b15b48aef40de54787de5dbf4b", "title": "Particle Swarm Optimization (PSO) for the constrained portfolio optimization problem", "text": "0957-4174/$ see front matter 2011 Elsevier Ltd. A doi:10.1016/j.eswa.2011.02.075 \u21d1 Corresponding author. Tel.: +4773597119. E-mail addresses: hanhong@stud.ntnu.no (H (Y. Wang), Kesheng.wang@ntnu.no (K. Wang), (Y. Chen). One of the most studied problems in the financial investment expert system is the intractability of portfolios. The non-linear constrained portfolio optimization problem with multi-objective functions cannot be efficiently solved using traditionally approaches. This paper presents a meta-heuristic approach to portfolio optimization problem using Particle Swarm Optimization (PSO) technique. The model is tested on various restricted and unrestricted risky investment portfolios and a comparative study with Genetic Algorithms is implemented. The PSO model demonstrates high computational efficiency in constructing optimal risky portfolios. Preliminary results show that the approach is very promising and achieves results comparable or superior with the state of the art solvers. 2011 Elsevier Ltd. All rights reserved."}
{"_id": "6672b7f87e18fa66f35e549252d418da23d2be28", "title": "Inside the Black Box: How to Explain Individual Predictions of a Machine Learning Model : How to automatically generate insights on predictive model outputs, and gain a better understanding on how the model predicts each individual data point.", "text": "Machine learning models are becoming more and more powerful and accurate, but their good predictions usually come with a high complexity. Depending on the situation, such a lack of interpretability can be an important and blocking issue. This is especially the case when trust is needed on the user side in order to take a decision based on the model prediction. For instance, when an insurance company uses a machine learning algorithm in order to detect fraudsters: the company would trust the model to be based on meaningful variables before actually taking action and investigating on a particular individual. In this thesis, several explanation methods are described and compared on multiple datasets (text data, numerical), on classification and regression problems."}
{"_id": "612f52497b4331b6f384e0e353091e23e2e72610", "title": "AI Methods in Algorithmic Composition: A Comprehensive Survey", "text": "Algorithmic composition is the partial or total automation of the process of music composition by using computers. Since the 1950s, different computational techniques related to Artificial Intelligence have been used for algorithmic composition, including grammatical representations, probabilistic methods, neural networks, symbolic rule-based systems, constraint programming and evolutionary algorithms. This survey aims to be a comprehensive account of research on algorithmic composition, presenting a thorough view of the field for researchers in Artificial Intelligence."}
{"_id": "4ca0032799949f215eea46cdaf6823be9a48e34c", "title": "Tactile information transmission by apparent movement phenomenon using shape-memory alloy device", "text": "This paper introduces the development of a tactile device using a shape-memory alloy, and describes the information transmission by the higher psychological perception such as the phantom sensation and the apparent movement of the tactility. The authors paid attention to the characteristic of a shape-memory alloy formed into a thread, which changes its length according to its body temperature, and developed a vibration-generating actuator electrically driven by periodic signals generated by current control circuits, for the tactile information transmission. The size of the actuator is quite compact and the energy consumption is only 20mW. By coupling the actuators as a pair, an information transmission system was constructed for presenting the apparent movement of the tactility, to transmit quite novel sensation to a user. Based on the preliminary experiment, the parameters for the tactile information transmission were examined. Then the information transmission by the device was tested by 10 subjects, and evaluated by questionnaires. The apparent movement was especially well perceived by users as a sensation of a small object running on the skin surface or as being tapped by something, according to the well-determined signals given to the actuators. Several users reported that they perceived a novel rubbing sensation given by the AM, and we further experimented the presentation of the sensation in detail to be used as a sensory-aids tactile display for the handicapped and elderly people."}
{"_id": "71b5e8d9970f3b725127eb7da3d5183b921ad017", "title": "Review of automatic segmentation methods of multiple sclerosis white matter lesions on conventional magnetic resonance imaging", "text": "Magnetic resonance (MR) imaging is often used to characterize and quantify multiple sclerosis (MS) lesions in the brain and spinal cord. The number and volume of lesions have been used to evaluate MS disease burden, to track the progression of the disease and to evaluate the effect of new pharmaceuticals in clinical trials. Accurate identification of MS lesions in MR images is extremely difficult due to variability in lesion location, size and shape in addition to anatomical variability between subjects. Since manual segmentation requires expert knowledge, is time consuming and is subject to intra- and inter-expert variability, many methods have been proposed to automatically segment lesions. The objective of this study was to carry out a systematic review of the literature to evaluate the state of the art in automated multiple sclerosis lesion segmentation. From 1240 hits found initially with PubMed and Google scholar, our selection criteria identified 80 papers that described an automatic lesion segmentation procedure applied to MS. Only 47 of these included quantitative validation with at least one realistic image. In this paper, we describe the complexity of lesion segmentation, classify the automatic MS lesion segmentation methods found, and review the validation methods applied in each of the papers reviewed. Although many segmentation solutions have been proposed, including some with promising results using MRI data obtained on small groups of patients, no single method is widely employed due to performance issues related to the high variability of MS lesion appearance and differences in image acquisition. The challenge remains to provide segmentation techniques that work in all cases regardless of the type of MS, duration of the disease, or MRI protocol, and this within a comprehensive, standardized validation framework. MS lesion segmentation remains an open problem."}
{"_id": "90930683f4ef3da8c51ed7d2553774c196172cb3", "title": "Proposal Encoding and Decoding Graph Representations of Natural Language", "text": ""}
{"_id": "beb8abd4972b3355f1ea170e94847b22a3d5b1a7", "title": "Robot task planning using semantic maps", "text": "Task planning for mobile robots usually relies solely on spatial information and on shallow domain knowledge, like labels attached to objects and places. Although spatial information is necessary for performing basic robot operations (navigation, localization, and obstacle avoidance), the use of deeper domain knowledge is pivotal to endow a robot with higher degrees of autonomy and intelligence. In this paper, we focus on semantic knowlege, and show how this type of knowledge can be profitably used for robot task planning. We start by defining a specific type of semantic maps, which integrate hierarchical spatial information and semantic knowledge. We then proceed to describe how these semantic maps can improve task planning in three ways: enriching the planning domain, relaxing unachievable goals, and improving the efficiency of the planner in large domains. Finally, we show several exepriments that demonstrate the effectiveness of our solutions in a doman involving robot navigation in a domestic environment."}
{"_id": "376d178129566bd29e2f06354fc3aba6e0a2954e", "title": "The Relevance of Data Warehousing and Data Mining in the Field of Evidence-based Medicine to Support Healthcare Decision Making", "text": "Evidence-based medicine is a new direction in modern healthcare. Its task is to prevent, diagnose and medicate diseases using medical evidence. Medical data about a large patient population is analyzed to perform healthcare management and medical research. In order to obtain the best evidence for a given disease, external clinical expertise as well as internal clinical experience must be available to the healthcare practitioners at right time and in the right manner. External evidence-based knowledge can not be applied directly to the patient without adjusting it to the patient\u2019s health condition. We propose a data warehouse based approach as a suitable solution for the integration of external evidence-based data sources into the existing clinical information system and data mining techniques for finding appropriate therapy for a given patient and a given disease. Through integration of data warehousing, OLAP and data mining techniques in the healthcare area, an easy to use decision support platform, which supports decision making process of care givers and clinical managers, is built. We present three case studies, which show, that a clinical data warehouse that facilitates evidence-based medicine is a reliable, powerful and user-friendly platform for strategic decision making, which has a great relevance for the practice and acceptance of evidence-based medicine. Keywords\u2014data mining, data warehousing, decision-support systems, evidence-based medicine."}
{"_id": "1c1f5288338bc899cb664c0722d1997ec7cda32d", "title": "Towards an implementation framework for business intelligence in healthcare", "text": "As healthcare organizations continue to be asked to do more with less, access to information is essential for sound evidence-based decision making. Business intelligence (BI) systems are designed to deliver decision-support information and have been repeatedly shown to provide value to organizations. Many healthcare organizations have yet to implement BI systems and no existing research provides a healthcare-specific framework to guide implementation. To address this research gap, we employ a case study in a Canadian Health Authority in order to address three questions: (1) what are the most significant adverse impacts to the organization\u2019s decision processes and outcomes attributable to a lack of decision-support capabilities? (2) what are the root causes of these impacts, and what workarounds do they necessitate? and (3) in light of the issues identified, what are the key considerations for healthcare organizations in the early stages of BI implementation? Using the concept of co-agency as a guide we identified significant decision-related adverse impacts and their root causes. We found strong management support, the right skill sets and an information-oriented culture to be key implementation considerations. Our major contribution is a framework for defining and prioritizing decision-support information needs in the context of healthcare-specific processes. \u00a9 2013 Elsevier Ltd. All rights reserved."}
{"_id": "7085851515bf3e7a5ce5f5e0cb44d4559a3c9819", "title": "Image detail enhancement with spatially guided filters", "text": "In recent years, enhancing image details without introducing artifacts has been attracting much attention in image processing community. Various image filtering methods have been proposed to achieve this goal. However, existing methods usually treat all pixels equally during the filtering process without considering the relationship between filtering strengths and image contents. In this paper, we address this issue by spatially distributing the filtering strengths with simple low-level features. First, to determine the pixel-wise filtering strength, we construct a spatially guided map, which exploits the spatial influence of image details based on the edge response of an image. Then, we further integrate this guided map into two state-of-the-art image filters and apply the improved filters to the task of image detail enhancement. Experiments demonstrate that our results generate better content-specific visual effects and introduce much less artifacts."}
{"_id": "919bd86eb5fbccd3862e3e2927d4a0d468c7c591", "title": "Stock Selection and Trading Based on Cluster Analysis of Trend and Momentum Indicators", "text": ""}
{"_id": "73e51b9820e90eb6525fc953c35c9288527cecfd", "title": "Attention-based Belief or Disbelief Feature Extraction for Dependency Parsing", "text": "Existing neural dependency parsers usually encode each word in a sentence with bi-directional LSTMs, and estimate the score of an arc from the LSTM representations of the head and the modifier, possibly missing relevant context information for the arc being considered. In this study, we propose a neural feature extraction method that learns to extract arcspecific features. We apply a neural network-based attention method to collect evidences for and against each possible head-modifier pair, with which our model computes certainty scores of belief and disbelief, and determines the final arc score by subtracting the score of disbelief from the one of belief. By explicitly introducing two kinds of evidences, the arc candidates can compete against each other based on more relevant information, especially for the cases where they share the same head or modifier. It makes possible to better discriminate two or more competing arcs by presenting their rivals (disbelief evidence). Experiments on various datasets show that our arc-specific feature extraction mechanism significantly improves the performance of bi-directional LSTMbased models by explicitly modeling long-distance dependencies. For both English and Chinese, the proposed model achieve a higher accuracy on dependency parsing task than most existing neural attention-based models."}
{"_id": "4bf9d0d9217aefdeccd727450b542d9ac3f3b2ff", "title": "SURVEY ON FAULT TOLERANCE IN GRID COMPUTING", "text": "Grid computing is defined as a hardware and software infrastructure that enables coordinated resource sharing within dynamic organizations. In grid computing, the probability of a failure is much greater than in traditional parallel computing. Therefore, the fault tolerance is an important property in order to achieve reliability, availability and QOS. In this paper, we give a survey on various fault tolerance techniques, fault management in different systems and related issues. A fault tolerance service deals with various types of resource failures, which include process failure, processor failure and network failures. This survey provides the related research results about fault tolerance in distinct functional areas of grid infrastructure and also gave the future directions about fault tolerance techniques, and it is a good reference for researcher."}
{"_id": "c207e3e521f098742230bf693442f9acedc84c6f", "title": "Competing Loyalty Programs : Impact of Market Saturation , Market Share , and Category Expandability", "text": "Loyalty programs have become an important component of firms\u2019 relationship management strategies. There are now some industries in which numerous rival loyalty programs are offered, inducing intense competition among these programs. However, existing research on loyalty programs has often studied such programs in a noncompetitive setting and has often focused on a single program in isolation. Addressing this gap, this research examines the effect of a firm\u2019s competitive positioning and market saturation on the performance of the firm\u2019s loyalty program. Based on the analysis of firmand individual-level data from the airline industry, the results indicate that larger firms tend to benefit more from their loyalty program offerings than smaller firms. Moreover, when the product category demand is rigid, the impact of an individual loyalty program decreases as the marketplace becomes more saturated with competing programs. However, when the product category is highly expandable, the saturation effect disappears. Under such situations, loyalty programs can help an industry gain competitive advantage over substitute offerings outside the industry, and multiple programs can effectively coexist even under a high level of market saturation."}
{"_id": "a6837071fc77bcdf591354b5037fa33f8db9756f", "title": "Parallel $(k)$-Clique Community Detection on Large-Scale Networks", "text": "The analysis of real-world complex networks has been the focus of recent research. Detecting communities helps in uncovering their structural and functional organization. Valuable insight can be obtained by analyzing the dense, overlapping, and highly interwoven k-clique communities. However, their detection is challenging due to extensive memory requirements and execution time. In this paper, we present a novel, parallel k-clique community detection method, based on an innovative technique which enables connected components of a network to be obtained from those of its subnetworks. The novel method has an unbounded, user-configurable, and input-independent maximum degree of parallelism, and hence is able to make full use of computational resources. Theoretical tight upper bounds on its worst case time and space complexities are given as well. Experiments on real-world networks such as the Internet and the World Wide Web confirmed the almost optimal use of parallelism (i.e., a linear speedup). Comparisons with other state-of-the-art k-clique community detection methods show dramatic reductions in execution time and memory footprint. An open-source implementation of the method is also made publicly available."}
{"_id": "bc940df39b7a25c508fcba50119a05f01a54ac0a", "title": "Reconfigurable quarter-mode SIW antenna employing a fluidically switchable via", "text": "A new method to fluidically switch through via posts is proposed for the first time. The reconfigurable via post is used toward designing a switchable QMSIW antenna. The switching method is based on filling/emptying a non-plated via hole with/from a liquid metal alloy. A QMSIW antenna is designed to initially work at ~3.2 GHz. Connecting the fluidically switchable via post at the corner of the QMSIW antenna shifts up the operating frequency. This translates into a switching range of 3.2-4.7 GHz. The Polydimethylsiloxane (PDMS) structures including the micro-channels are bonded to the QMSIW circuit board using a unique fabrication technique."}
{"_id": "3bbe0072cb586a75bee83bd7f0832af3e89e8d56", "title": "General multilevel linear modeling for group analysis in FMRI", "text": "This article discusses general modeling of multisubject and/or multisession FMRI data. In particular, we show that a two-level mixed-effects model (where parameters of interest at the group level are estimated from parameter and variance estimates from the single-session level) can be made equivalent to a single complete mixed-effects model (where parameters of interest at the group level are estimated directly from all of the original single sessions' time series data) if the (co-)variance at the second level is set equal to the sum of the (co-)variances in the single-level form, using the BLUE with known covariances. This result has significant implications for group studies in FMRI, since it shows that the group analysis requires only values of the parameter estimates and their (co-)variance from the first level, generalizing the well-established \"summary statistics\" approach in FMRI. The simple and generalized framework allows different prewhitening and different first-level regressors to be used for each subject. The framework incorporates multiple levels and cases such as repeated measures, paired or unpaired t tests and F tests at the group level; explicit examples of such models are given in the article. Using numerical simulations based on typical first-level covariance structures from real FMRI data we demonstrate that by taking into account lower-level covariances and heterogeneity a substantial increase in higher-level Z score is possible."}
{"_id": "8d3be75a8d50f0c08cb77fe77350827e10d64998", "title": "Dynamic ransomware protection using deterministic random bit generator", "text": "Ransomware has become a very significant cyber threat. The basic idea of ransomware was presented in the form of a cryptovirus in 1995. However, it was considered as merely a conceptual topic since then for over a decade. In 2017, ransomware has become a reality, with several famous cases of ransomware having compromised important computer systems worldwide. For example, the damage caused by CryptoLocker and WannaCry is huge, as well as global. They encrypt victims' files and require user's payment to decrypt them. Because they utilize public key cryptography, the key for recovery cannot be found in the footprint of the ransomware on the victim's system. Therefore, once infected, the system cannot be recovered without paying for restoration. Various methods to deal this threat have been developed by antivirus researchers and experts in network security. However, it is believed that cryptographic defense is infeasible because recovering a victim's files is computationally as difficult as breaking a public key cryptosystem. Quite recently, various approaches to protect the crypto-API of an OS from malicious codes have been proposed. Most ransomware generate encryption keys using the random number generation service provided by the victim's OS. Thus, if a user can control all random numbers generated by the system, then he/she can recover the random numbers used by the ransomware for the encryption key. In this paper, we propose a dynamic ransomware protection method that replaces the random number generator of the OS with a user-defined generator. As the proposed method causes the virus program to generate keys based on the output from the user-defined generator, it is possible to recover an infected file system by reproducing the keys the attacker used to perform the encryption."}
{"_id": "d746cd387522d826b2ab402cb37c96059fa04261", "title": "Corroborating Answers from Multiple Web Sources", "text": "The Internet has changed the way people look for information. Users now expect the answers to their questions to be available through a simple web search. Web search engines are increasingly efficient at identifying the best sources for any given keyword query, and are often able to identify the answer within the sources. Unfortunately, many web sources are not trustworthy, because of erroneous, misleading, biased, or outdated information. In many cases, users are not satisfied with \u2014or do not trust\u2014 the results from any single source and prefer checking several sources for corroborating evidence. In this paper, we propose methods to aggregate query results from different sources in order to save users the hassle of individually checking query-related web sites to corroborate answers. To return the best aggregated answers to the users, our techniques consider the number, importance, and similarity of the web sources reporting each answer, as well as the importance of the answer within the source. We present an experimental evaluation of our technique on real web queries, comparing the corroborated answers returned with real user clicks."}
{"_id": "3c0d94608ce09f481cb9532f0784e76bec3e2993", "title": "An Analysis of the AskMSR Question-Answering System", "text": "We describe the architecture of the AskMSR question answering system and systematically evaluate contributions of different system components to accuracy. The system differs from most question answering systems in its dependency on data redundancy rather than sophisticated linguistic analyses of either questions or candidate answers. Because a wrong answer is often worse than no answer, we also explore strategies for predicting when the question answering system is likely to give an incorrect answer."}
{"_id": "669b4d7cf907479f0697305b8e11cb3700cca86a", "title": "Snowball: extracting relations from large plain-text collections", "text": "Text documents often contain valuable structured data that is hidden Yin regular English sentences. This data is best exploited infavailable as arelational table that we could use for answering precise queries or running data mining tasks.We explore a technique for extracting such tables from document collections that requires only a handful of training examples from users. These examples are used to generate extraction patterns, that in turn result in new tuples being extracted from the document collection.We build on this idea and present our Snowball system. Snowball introduces novel strategies for generating patterns and extracting tuples from plain-text documents.At each iteration of the extraction process, Snowball evaluates the quality of these patterns and tuples without human intervention,and keeps only the most reliable ones for the next iteration. In this paper we also develop a scalable evaluation methodology and metrics for our task, and present a thorough experimental evaluation of Snowball and comparable techniques over a collection of more than 300,000 newspaper documents."}
{"_id": "6dc34bae7fb2e12499ebdff2902ccde612dbb0f1", "title": "Web-scale information extraction in knowitall: (preliminary results)", "text": "Manually querying search engines in order to accumulate a large bodyof factual information is a tedious, error-prone process of piecemealsearch. Search engines retrieve and rank potentially relevantdocuments for human perusal, but do not extract facts, assessconfidence, or fuse information from multiple documents. This paperintroduces KnowItAll, a system that aims to automate the tedious process ofextracting large collections of facts from the web in an autonomous,domain-independent, and scalable manner.The paper describes preliminary experiments in which an instance of KnowItAll, running for four days on a single machine, was able to automatically extract 54,753 facts. KnowItAll associates a probability with each fact enabling it to trade off precision and recall. The paper analyzes KnowItAll's architecture and reports on lessons learned for the design of large-scale information extraction systems."}
{"_id": "7bdb08efd640311ad18466a80498c78267f886ca", "title": "Scaling question answering to the Web", "text": ""}
{"_id": "76a0016ce19363ef8f7ba5c3964c4a0c29b608ca", "title": "ModaNet: A Large-scale Street Fashion Dataset with Polygon Annotations", "text": "Understanding clothes from a single image would have huge commercial and cultural impacts on modern societies. However, this task remains a challenging computer vision problem due to wide variations in the appearance, style, brand and layering of clothing items. We present a new database called ModaNet, a large-scale collection of images based on Paperdoll dataset. Our dataset provides 55,176 street images, fully annotated with polygons on top of the 1 million weakly annotated street images in Paperdoll. ModaNet aims to provide a technical benchmark to fairly evaluate the progress of applying the latest computer vision techniques that rely on large data for fashion understanding. The rich annotation of the dataset allows to measure the performance of state-of-the-art algorithms for object detection, semantic segmentation and polygon prediction on street fashion images in detail."}
{"_id": "a5e3cf9f7bfdbc227673a6d8f4d59112f1b5bb3a", "title": "Visual comparison for information visualization", "text": "Data analysis often involves the comparison of complex objects. With the ever increasing amounts and complexity of data, the demand for systems to help with these comparisons is also growing. Increasingly, information visualization tools support such comparisons explicitly, beyond simply allowing a viewer to examine each object individually. In this paper, we argue that the design of information visualizations of complex objects can, and should, be studied in general, that is independently of what those objects are. As a first step in developing this general understanding of comparison, we propose a general taxonomy of visual designs for comparison that groups designs into three basic categories, which can be combined. To clarify the taxonomy and validate its completeness, we provide a survey of work in information visualization related to comparison. Although we find a great diversity of systems and approaches, we find that all designs are assembled from the building blocks of juxtaposition, superposition, and explicit encodings. This initial exploration shows the power of our model, and suggests future challenges in developing a general understanding of comparative visualization and facilitating the development of more comparative visualization tools."}
{"_id": "b49eff91518a8bd1773a7f826af75f6c2f463e93", "title": "Recommendations For Streaming Data", "text": "Recommender systems have become increasingly popular in recent years because of the broader popularity of many web-enabled electronic commerce applications. However, most recommender systems today are designed in the context of an offline setting. The online setting is, however, much more challenging because the existing methods do not work very effectively for very large-scale systems. In many applications, it is desirable to provide real-time recommendations in large-scale scenarios. The main problem in applying streaming algorithms for recommendations is that the in-core storage space for memory-resident operations is quite limited. In this paper, we present a probabilistic neighborhood-based algorithm for performing recommendations in real-time. We present experimental results, which show the effectiveness of our approach in comparison to state-of-the-art methods."}
{"_id": "ec40595c1d9e1f8ec21a83295cb82b6cc4652694", "title": "Finding academic experts on a multisensor approach using Shannon's entropy", "text": "Expert finding is an information retrieval task concerned wi th the search for the most knowledgeable people, in some topic, with basis on document s describing peoples activities. The task involves taking a user query as input and retur ning a list of people sorted by their level of expertise regarding the user query. This pape r introduces a novel approach for combining multiple estimators of expertise based on a mu ltisensor data fusion framework together with the Dempster-Shafer theory of evidence a nd Shannon\u2019s entropy. More specifically, we defined three sensors which detect heteroge neous information derived from the textual contents, from the graph structure of the citati on patterns for the community of experts, and from profile information about the academic exp erts. Given the evidences collected, each sensor may define different candidates as ex perts and consequently do not agree in a final ranking decision. To deal with these conflicts , we applied the DempsterShafer theory of evidence combined with Shannon\u2019s Entropy f rmula to fuse this information and come up with a more accurate and reliable final rankin g list. Experiments made over two datasets of academic publications from the Compute r Science domain attest for the adequacy of the proposed approach over the traditional s tate of the art approaches. We also made experiments against representative supervised s tat of the art algorithms. Results revealed that the proposed method achieved a similar p erformance when compared to these supervised techniques, confirming the capabilities o f the proposed framework. This work was supported by national funds through FCT Funda \u00e7\u00e3o para a Ci\u00eancia e a Tecnologia, under project PEst-OE/EEI/LA0021/2011 and supported by nationa l funds through FCT Funda\u00e7\u00e3o para a Ci\u00eancia e a Tecnologia, under project PTDC/EIA-CCO/119722/2010"}
{"_id": "9067ea7b00fd982a12343b603efd10d56add6bcc", "title": "Crowd detection and management using cascade classifier on ARMv8 and OpenCV-Python", "text": "The steady increase in population and overcrowding has become an unavoidable factor in any public gathering or on the street during any festive occasions. The intelligent monitoring technology has been developing in recent years and human tracking has made a lot of progress. In this paper, we propose a method to manage the crowd by keeping in track the count of the people in the scene. In our study, we develop a system using Raspberry Pi 3 board that consists of ARMv8 CPU that detects the human heads and provide a count of humans in the region using OpenCV-Python. A Haar cascade classifier is trained for human head detection. Human tracking is achieved by indicating the direction of movement of the person. The results of the analysis will be helpful in managing the crowd in any area with high density of crowds."}
{"_id": "1aa88d275eaa0746031dcd567e4c1a6777a13e37", "title": "ClusType: Effective Entity Recognition and Typing by Relation Phrase-Based Clustering", "text": "Entity recognition is an important but challenging research problem. In reality, many text collections are from specific, dynamic, or emerging domains, which poses significant new challenges for entity recognition with increase in name ambiguity and context sparsity, requiring entity detection without domain restriction. In this paper, we investigate entity recognition (ER) with distant-supervision and propose a novel relation phrase-based ER framework, called ClusType, that runs data-driven phrase mining to generate entity mention candidates and relation phrases, and enforces the principle that relation phrases should be softly clustered when propagating type information between their argument entities. Then we predict the type of each entity mention based on the type signatures of its co-occurring relation phrases and the type indicators of its surface name, as computed over the corpus. Specifically, we formulate a joint optimization problem for two tasks, type propagation with relation phrases and multi-view relation phrase clustering. Our experiments on multiple genres---news, Yelp reviews and tweets---demonstrate the effectiveness and robustness of ClusType, with an average of 37% improvement in F1 score over the best compared method."}
{"_id": "26d92017242e51238323983eba0fad22bac67505", "title": "Make new friends, but keep the old: recommending people on social networking sites", "text": "This paper studies people recommendations designed to help users find known, offline contacts and discover new friends on social networking sites. We evaluated four recommender algorithms in an enterprise social networking site using a personalized survey of 500 users and a field study of 3,000 users. We found all algorithms effective in expanding users' friend lists. Algorithms based on social network information were able to produce better-received recommendations and find more known contacts for users, while algorithms using similarity of user-created content were stronger in discovering new friends. We also collected qualitative feedback from our survey users and draw several meaningful design implications."}
{"_id": "6453adf73e02bccf16e6478936f42a76c418ab02", "title": "The big and the small: challenges of imaging the brain's circuits.", "text": "The relation between the structure of the nervous system and its function is more poorly understood than the relation between structure and function in any other organ system. We explore why bridging the structure-function divide is uniquely difficult in the brain. These difficulties also explain the thrust behind the enormous amount of innovation centered on microscopy in neuroscience. We highlight some recent progress and the challenges that remain."}
{"_id": "18da6a6fdd698660a7f41fe0533d05a4e6443183", "title": "Learning for Semantic Parsing", "text": "Semantic parsing is the task of mapping a natural language sentence into a complete, formal meaning representation. Over the past decade, we have developed a number of machine learning methods for inducing semantic parsers by training on a corpus of sentences paired with their meaning representations in a specified formal language. We have demonstrated these methods on the automated construction of naturallanguage interfaces to databases and robot command languages. This paper reviews our prior work on this topic and discusses directions for future research."}
{"_id": "41e4ce2f6cb0fa1ed376eea1c6e8aff5cfc247fd", "title": "Evaluation of Different Control Strategies for Teams of Unmanned Marine Vehicles ( UMVs ) Comparing the Requirements of Underwater Communication and Navigation in MATLAB \u00ae Simulations", "text": "Thomas Glotzbach, TU Ilmenau; Fraunhofer Center for Applied Systems Technology, Ilmenau/Germany, thomas.glotzbach@tu-ilmenau.de Andrea Picini, INNOVA S.p.A., Rome/Italy, a.picini@innova-eu.net Antonio Zangrilli, INNOVA S.p.A., Rome/Italy, a.zangrilli@innova-eu.net Mike Eichhorn, TU Ilmenau/Germany, mike.eichhorn@tu-ilmenau.de Peter Otto, TU Ilmenau/Germany, peter.otto@tu-ilmenau.de Matthias Schneider, TU Ilmenau/Germany, schneider.matthias@tu-ilmenau.de"}
{"_id": "4915b2c8551d8c5ecfcc41809578671a09652c8b", "title": "Entity Comparison in RDF Graphs", "text": "In many applications, there is an increasing need for the new types of RDF data analysis that are not covered by standard reasoning tasks such as SPARQL query answering. One such important analysis task is entity comparison, i.e., determining what are similarities and differences between two given entities in an RDF graph. For instance, in an RDF graph about drugs, we may want to compare Metamizole and Ibuprofen and automatically find out that they are similar in that they are both analgesics but, in contrast to Metamizole, Ibuprofen also has a considerable anti-inflammatory effect. Entity comparison is a widely used functionality available in many information systems, such as universities or product comparison websites. However, comparison is typically domain-specific and depends on a fixed set of aspects to compare. In this paper, we propose a formal framework for domain-independent entity comparison over RDF graphs. We model similarities and differences between entities as SPARQL queries satisfying certain additional properties, and propose algorithms for computing them."}
{"_id": "05f7c10bac0581e13b75554197a4d6f0059f2481", "title": "G-SQL: Fast Query Processing via Graph Exploration", "text": "A lot of real-life data are of graph nature. However, it is not until recently that business begins to exploit data\u2019s connectedness for business insights. On the other hand, RDBMSs are a mature technology for data management, but they are not for graph processing. Take graph traversal, a common graph operation for example, it heavily relies on a graph primitive that accesses a given node\u2019s neighborhood. We need to join tables following foreign keys to access the nodes in the neighborhood if an RDBMS is used to manage graph data. Graph exploration is a fundamental building block of many graph algorithms. But this simple operation is costly due to a large volume of I/O caused by the massive amount of table joins. In this paper, we present G-SQL, our effort toward the integration of a RDBMS and a native in-memory graph processing engine. G-SQL leverages the fast graph exploration capability provided by the graph engine to answer multi-way join queries. Meanwhile, it uses RDBMSs to provide mature data management functionalities, such as reliable data storage and additional data access methods. Specifically, G-SQL is a SQL dialect augmented with graph exploration functionalities and it dispatches query tasks to the in-memory graph engine and its underlying RDMBS. The G-SQL runtime coordinates the two query processors via a unified cost model to ensure the entire query is processed efficiently. Experimental results show that our approach greatly expands capabilities of RDBMs and delivers exceptional performance for SQL-graph hybrid queries."}
{"_id": "4881365641b05d921c56099642b17895bbf0674d", "title": "Kinect Gesture Recognition for Interactive System", "text": "Gaming systems like Kinect and XBox always have to tackle the problem of extracting features from video data sets and classifying the body movement. In this study, reasonable features like human joints positions, joints velocities, joint angles and joint angular velocities are extracted. We used several machine learning methods including Naive Bayes, Support Vector Machine and Random Forest to learn and classify the human gestures. Simulation results and the final confusion matrix show that by combining delicately preprocessed data sets and Random Forest methods, the F-scores of the correct predictions can be maximized. Such methods can also be applied in real-time scenarios."}
{"_id": "31afd0a18126720eeef5880bcaa14768c4005387", "title": "Differentially private aggregation of distributed time-series with transformation and encryption", "text": "We propose the first differentially private aggregation algorithm for distributed time-series data that offers good practical utility without any trusted server. This addresses two important challenges in participatory data-mining applications where (i) individual users collect temporally correlated time-series data (such as location traces, web history, personal health data), and (ii) an untrusted third-party aggregator wishes to run aggregate queries on the data.\n To ensure differential privacy for time-series data despite the presence of temporal correlation, we propose the Fourier Perturbation Algorithm (FPAk). Standard differential privacy techniques perform poorly for time-series data. To answer n queries, such techniques can result in a noise of \u0398(n) to each query answer, making the answers practically useless if n is large. Our FPAk algorithm perturbs the Discrete Fourier Transform of the query answers. For answering n queries, FPAk improves the expected error from \u0398(n) to roughly \u0398(k) where k is the number of Fourier coefficients that can (approximately) reconstruct all the n query answers. Our experiments show that k << n for many real-life data-sets resulting in a huge error-improvement for FPAk.\n To deal with the absence of a trusted central server, we propose the Distributed Laplace Perturbation Algorithm (DLPA) to add noise in a distributed way in order to guarantee differential privacy. To the best of our knowledge, DLPA is the first distributed differentially private algorithm that can scale with a large number of users: DLPA outperforms the only other distributed solution for differential privacy proposed so far, by reducing the computational load per user from O(U) to O(1) where U is the number of users."}
{"_id": "295af62fc8daff93dea71e2d02f0cc0713977e11", "title": "General Perspectives on Knowledge Management: Fostering a Research Agenda", "text": "We trace in pragmatic terms some of what we know about knowledge, information technology, knowledge management practice and research, and provide two complementary frameworks that highlight potential opportunities for building a research agenda in this area. The papers in this special issue are then discussed."}
{"_id": "f25495685e3b3c7068d417564bde1259417106db", "title": "Modeling and execution of event stream processing in business processes", "text": "The Internet of Things and Cyber-physical Systems provide enormous amounts of real-time data in form of streams of events. Businesses can benefit from the integration of these real-world data; new services can be provided to customers, or existing business processes can be improved. Events are a well-known concept in business processes. However, there is no appropriate abstraction mechanism to encapsulate event stream processing in units that represent business functions in a coherent manner across the process modeling, process execution, and IT infrastructure layer. In this paper we present Event Stream Processing Units (SPUs) as such an abstraction mechanism. SPUs encapsulate application logic for event stream processing and enable a seamless transition between process models, executable process representations, and components at the IT layer. We derive requirements for SPUs and introduce EPC and BPMN extensions to model SPUs at the abstract and at the technical process layer. We introduce a transformation from SPUs in EPCs to SPUs in BPMN and implement our modeling notation extensions in Software AG ARIS. We present a runtime infrastructure that executes SPUs and supports implicit invocation and completion semantics. We illustrate our approach using a logistics process as running example."}
{"_id": "cea91802880fe805d70290e5c56349784f69ffae", "title": "Issues in Implementing IT Governance in Small and Medium Enterprises", "text": "Nowadays, almost all the enterprises in the world fall in Small Medium Enterprises (SMEs) category even though SMEs has different definitions in different countries. At the same time, there is no exemplar standard and /or framework for information technology (IT) governance for SMEs. This paper presents the main issues in implementing IT governance in SMEs. Firstly, it explains the definition of SMEs and their characteristics, secondly it discusses IT governance definition and its framework. Finally, some issues and approaches for ITG implementation in SMEs are described, and end with conclusion and future work."}
{"_id": "ea90930fa85029ab97ea00bbf1041f0e4e541226", "title": "UAV with two passive rotating hemispherical shells for physical interaction and power tethering in a complex environment", "text": "For the past few years, unmanned aerial vehicles (UAVs) have been successfully employed in several investigations and exploration tasks such as aerial inspection and manipulations. However, most of these UAVs are limited to open spaces distant from any obstacles because of the high risk of falling as a result of an exposed propeller or not enough protection. On the other hand, a UAV with a passive rotating spherical shell can fly over a complex environment but cannot engage in physical interaction and perform power tethering because of the passive rotation of the spherical shell. In this study, we propose a new mechanism that allows physical interaction and power tethering while the UAV is well-protected and has a good flight stability, which enables exploration in a complex environment such as disaster sites. We address the current problem by dividing the whole shell into two separate hemispherical shells that provide a gap unaffected by passive rotation. In this paper, we mainly discuss the concept, general applications, and design of the proposed system. The capabilities of the proposed system for physical interaction and power tethering in a complex space were initially verified through laboratory-based test flights of our experimental prototype."}
{"_id": "861f52ad7facec72e1e357d678560b691d15b869", "title": "Loop closure detection for visual SLAM using PCANet features", "text": "Loop closure detection benefits simultaneous localization and mapping (SLAM) in building a consistent map of the environment by reducing the accumulate error. Handcrafted features have been successfully used in traditional approaches, whereas in this paper, we show that unsupervised features extracted by deep learning models, can improves the accuracy of loop closure detection. In particular, we employ a cascaded deep network, namely the PCANet, to extract features as image descriptors. We tested the performance of our proposed method on open datasets to compare with traditional approaches. We found that the PCANet features outperform state-of-the-art handcrafted competitors, and are computational efficient to be implemented in practical robotics."}
{"_id": "c8cb1e0135b25634aa8e2728231feae12ac369db", "title": "Taming THC: potential cannabis synergy and phytocannabinoid-terpenoid entourage effects.", "text": "Tetrahydrocannabinol (THC) has been the primary focus of cannabis research since 1964, when Raphael Mechoulam isolated and synthesized it. More recently, the synergistic contributions of cannabidiol to cannabis pharmacology and analgesia have been scientifically demonstrated. Other phytocannabinoids, including tetrahydrocannabivarin, cannabigerol and cannabichromene, exert additional effects of therapeutic interest. Innovative conventional plant breeding has yielded cannabis chemotypes expressing high titres of each component for future study. This review will explore another echelon of phytotherapeutic agents, the cannabis terpenoids: limonene, myrcene, \u03b1-pinene, linalool, \u03b2-caryophyllene, caryophyllene oxide, nerolidol and phytol. Terpenoids share a precursor with phytocannabinoids, and are all flavour and fragrance components common to human diets that have been designated Generally Recognized as Safe by the US Food and Drug Administration and other regulatory agencies. Terpenoids are quite potent, and affect animal and even human behaviour when inhaled from ambient air at serum levels in the single digits ng\u00b7mL(-1) . They display unique therapeutic effects that may contribute meaningfully to the entourage effects of cannabis-based medicinal extracts. Particular focus will be placed on phytocannabinoid-terpenoid interactions that could produce synergy with respect to treatment of pain, inflammation, depression, anxiety, addiction, epilepsy, cancer, fungal and bacterial infections (including methicillin-resistant Staphylococcus aureus). Scientific evidence is presented for non-cannabinoid plant components as putative antidotes to intoxicating effects of THC that could increase its therapeutic index. Methods for investigating entourage effects in future experiments will be proposed. Phytocannabinoid-terpenoid synergy, if proven, increases the likelihood that an extensive pipeline of new therapeutic products is possible from this venerable plant. http://dx.doi.org/10.1111/bph.2011.163.issue-7."}
{"_id": "3eaea188a8ca60bdb391f3df0539e52ff0a45024", "title": "Autonomous navigation robot for landmine detection applications", "text": "Modern robotic technologies have provided efficient solutions to protect workers from hazards in the work environments; such as radioactive, toxic, or explosive. This paper presents a design of a holonomic mobile robot for rough terrain that can replace the human role in the demining applications. A suspension system based on the bogie mechanism was adopted in order to overcome rough terrains. The flexible design of the robot enables it to move in any direction at any instant without changing the orientation of the body. The motion of the robot was simulated using the Pro-Engineer software and the robot stability margins for different road conditions were found in order to predict the possibility of tipping-over before any maneuver."}
{"_id": "3621bc359003e36707733650cccadf4333683293", "title": "Neural Probabilistic Language Models", "text": ""}
{"_id": "4b75d707eb3ffe4607c8cdd5436c8d7f8573fed9", "title": "Multilingual Models for Compositional Distributed Semantics", "text": "We present a novel technique for learning semantic representations, which extends the distributional hypothesis to multilingual data and joint-space embeddings. Our models leverage parallel data and learn to strongly align the embeddings of semantically equivalent sentences, while maintaining sufficient distance between those of dissimilar sentences. The models do not rely on word alignments or any syntactic information and are successfully applied to a number of diverse languages. We extend our approach to learn semantic representations at the document level, too. We evaluate these models on two cross-lingual document classification tasks, outperforming the prior state of the art. Through qualitative analysis and the study of pivoting effects we demonstrate that our representations are semantically plausible and can capture semantic relationships across languages without parallel data."}
{"_id": "54c32d432fb624152da7736543f2685840860a57", "title": "Modeling Documents with Deep Boltzmann Machines", "text": "We introduce a type of Deep Boltzmann Machine (DBM) that is suitable for extracting distributed semantic representations from a large unstructured collection of documents. We overcome the apparent difficulty of training a DBM with judicious parameter tying. This enables an efficient pretraining algorithm and a state initialization scheme for fast inference. The model can be trained just as efficiently as a standard Restricted Boltzmann Machine. Our experiments show that the model assigns better log probability to unseen data than the Replicated Softmax model. Features extracted from our model outperform LDA, Replicated Softmax, and DocNADE models on document retrieval and document classification tasks."}
{"_id": "759ecfda2b8122a52347faf1f86cc919fcbfdaf6", "title": "Sentiment Classification Using Machine Learning Techniques with Syntax Features", "text": "Sentiment classification has adopted machine learning techniques to improve its precision and efficiency. However, the features are always produced by basic words-bag methods without much consideration for words' syntactic properties, which could play an important role in the judgment of sentiment meanings. To remedy this, we firstly generate syntax trees of the sentences, with the analysis of syntactic features of the sentences. Then we introduce multiple sentiment features into the basic words-bag features. Such features were trained on movie reviews as data, with machine learning methods (Naive Bayes and support vector machines). The features and factors introduced by syntax tree were examined to generate a more accurate solution for sentiment classification."}
{"_id": "c6dac8aca55dc7326e5cb996b386db5bce4da46e", "title": "SEM-PPA: A semantical pattern and preference-aware service mining method for personalized point of interest recommendation", "text": "Point-of-Interest (POI) recommendation has received increasing attention in Location-based Social Networks (LBSNs). It involves user behavior analysis, movement pattern model and trajectory sequence prediction, in order to recommend personalized services to target user. Existing POI recommendation methods are confronted with three problems: (1) they only consider the location information of users' check-ins, which causes data sparsity; (2) they fail to consider the order of users' visited locations, which is valuable to reflect the interest or preference of users; (3) users cannot be recommended the suitable services when they move into the new place. To address the above issues, we propose a semantical pattern and preference-aware service mining method called SEM-PPA to make full use of the semantic information of locations for personalized POI recommendation. In SEM-PPA, we firstly propose a novel algorithm to classify the locations into different types for location identification; then we construct the user model for each user from four aspects, which are location trajectory, semantic trajectory, location popularity and user familiarity; in addition, a potential friends discovery algorithm based on movement pattern is proposed. Finally, we conduct extensive experiments to evaluate the recommendation accuracy and recommendation effectiveness on two real-life datasets from GeoLife and Beijing POI. Experimental results show that SEM-PPA can achieve better recommendation performance in particular for sparse data and recommendation accuracy in comparison with other methods."}
{"_id": "13c71c7450aa27a9f926adb6cead1f338c7967c5", "title": "Depixelizing pixel art", "text": "We describe a novel algorithm for extracting a resolution-independent vector representation from pixel art images, which enables magnifying the results by an arbitrary amount without image degradation. Our algorithm resolves pixel-scale features in the input and converts them into regions with smoothly varying shading that are crisply separated by piecewise-smooth contour curves. In the original image, pixels are represented on a square pixel lattice, where diagonal neighbors are only connected through a single point. This causes thin features to become visually disconnected under magnification by conventional means, and creates ambiguities in the connectedness and separation of diagonal neighbors. The key to our algorithm is in resolving these ambiguities. This enables us to reshape the pixel cells so that neighboring pixels belonging to the same feature are connected through edges, thereby preserving the feature connectivity under magnification. We reduce pixel aliasing artifacts and improve smoothness by fitting spline curves to contours in the image and optimizing their control points."}
{"_id": "43216b24e5aaf934e602564e725a2068c8dbcd87", "title": "Morphological, molecular and statistical tools to identify Castanea species and their hybrids", "text": "The aim of this study was to investigate, in relation to reference samples, the genetic structure of several natural Castanea populations of unknown origin and of hybrids from the clone collection of the Louriz\u00e1n Forest Research Center. A total of 115 individuals sampled from four Castanea sativa stands located in the northwest of Spain, 61 Castanea crenata individuals and 27 Castanea mollissima individuals were classified on the basis of four morphological traits, and genotyped with 11 microsatellite loci to define a set of reference samples. The data analyzed with the program STRUCTURE detected four clusters: the two Asiatic species and two clusters in C. sativa. From these reference samples, pure individuals and hybrids of known genealogy were simulated to determine the efficiency with which STRUCTURE and NEWHYBRIDS assigned them to pure or hybrid groups and to a specific genealogical class, respectively. As expected, the discrimination and assignment of the simulated individuals improved with increasing F st value. The two clusters identified within C. sativa may correspond to gene pools with different adaptive characteristics previously identified in provenance tests; pure and admixed populations of both C. sativa gene pools were identified."}
{"_id": "e9e5ab27de4475654f2170745812ad134eb21801", "title": "HUmanoid Robotic Leg via pneumatic muscle actuators : implementation and control", "text": "In this article, a HUmanoid Robotic Leg (HURL) via the utilization of pneumatic muscle actuators (PMAs) is presented. PMAs are a pneumatic form of actuation possessing crucial attributes for the implementation of a design that mimics the motion characteristics of a human ankle. HURL acts as a feasibility study in the conceptual goal of developing a 10 degree-of-freedom (DoF) lower-limb humanoid for compliance and postural control, while serving as a knowledge basis for its future alternative use in prosthetic robotics. HURL\u2019s design properties are described in detail, while its 2-DoF motion capabilities (dorsiflexion\u2013plantar flexion, eversion\u2013inversion) are experimentally evaluated via an advanced nonlinear PID-based control algorithm."}
{"_id": "25e3b4ff26968032c15953dbd542de3619aa9833", "title": "Evasion and Hardening of Tree Ensemble Classifiers", "text": "Classifier evasion consists in finding for a given instance x the \u201cnearest\u201d instance x\u2032 such that the classifier predictions of x and x\u2032 are different. We present two novel algorithms for systematically computing evasions for tree ensembles such as boosted trees and random forests. Our first algorithm uses a Mixed Integer Linear Program solver and finds the optimal evading instance under an expressive set of constraints. Our second algorithm trades off optimality for speed by using symbolic prediction, a novel algorithm for fast finite differences on tree ensembles. On a digit recognition task, we demonstrate that both gradient boosted trees and random forests are extremely susceptible to evasions. Finally, we harden a boosted tree model without loss of predictive accuracy by augmenting the training set of each boosting round with evading instances, a technique we call adversarial boosting."}
{"_id": "8d46fb7cdeab9be071b0bc134aee5ef942d704b2", "title": "Synaptic basis of cortical persistent activity: the importance of NMDA receptors to working memory.", "text": "Delay-period activity of prefrontal cortical cells, the neural hallmark of working memory, is generally assumed to be sustained by reverberating synaptic excitation in the prefrontal cortical circuit. Previous model studies of working memory emphasized the high efficacy of recurrent synapses, but did not investigate the role of temporal synaptic dynamics. In this theoretical work, I show that biophysical properties of cortical synaptic transmission are important to the generation and stabilization of a network persistent state. This is especially the case when negative feedback mechanisms (such as spike-frequency adaptation, feedback shunting inhibition, and short-term depression of recurrent excitatory synapses) are included so that the neural firing rates are controlled within a physiological range (10-50 Hz), in spite of the exuberant recurrent excitation. Moreover, it is found that, to achieve a stable persistent state, recurrent excitatory synapses must be dominated by a slow component. If neuronal firings are asynchronous, the synaptic decay time constant needs to be comparable to that of the negative feedback; whereas in the case of partially synchronous dynamics, it needs to be comparable to a typical interspike interval (or oscillation period). Slow synaptic current kinetics also leads to the saturation of synaptic drive at high firing frequencies that contributes to rate control in a persistent state. For these reasons the slow NMDA receptor-mediated synaptic transmission is likely required for sustaining persistent network activity at low firing rates. This result suggests a critical role of the NMDA receptor channels in normal working memory function of the prefrontal cortex."}
{"_id": "84c172ad228900b5fae49cf00aee3a8b0f64abb2", "title": "A survey of multicore processors", "text": "General-purpose multicore processors are being accepted in all segments of the industry, including signal processing and embedded space, as the need for more performance and general-purpose programmability has grown. Parallel processing increases performance by adding more parallel resources while maintaining manageable power characteristics. The implementations of multicore processors are numerous and diverse. Designs range from conventional multiprocessor machines to designs that consist of a \"sea\" of programmable arithmetic logic units (ALUs). In this article, we cover some of the attributes common to all multicore processor implementations and illustrate these attributes with current and future commercial multicore designs. The characteristics we focus on are application domain, power/performance, processing elements, memory system, and accelerators/integrated peripherals."}
{"_id": "50f1abdc0c4518c68553d8afa67bcbd83925afb8", "title": "Diabetes Disease Diagnosis Method based on Feature Extraction using K-SVM", "text": "Now-a-days, diabetes disease is considered one of the key reasons of death among the people in the world. The availability of extensive medical information leads to the search for proper tools to support physicians to diagnose diabetes disease accurately. This research aimed at improving the diagnostic accuracy and reducing diagnostic missclassification based on the extracted significant diabetes features. Feature selection is critical to the superiority of classifiers founded through knowledge discovery approaches, thereby solving the classification problems relating to diabetes patients. This study proposed an integration approach between the SVM technique and K-means clustering algorithms to diagnose diabetes disease. Experimental results achieved high accuracy for differentiating the hidden patterns of the Diabetic and Nondiabetic patients compared with the modern diagnosis methods in term of the performance measure. The T-test statistical method obtained significant improvement results based on KSVM technique when tested on the UCI Pima Indian standard dataset. Keywords\u2014K-means Clustering; Diabetes Patients; SVM;"}
{"_id": "4d9b2f1939c1e697b1cef3fb4cc90ccb7172b7d1", "title": "Detecting Code Smells in Python Programs", "text": "As a traditional dynamic language, Python is increasingly used in various software engineering tasks. However, due to its flexibility and dynamism, Python is a particularly challenging language to write code in and maintain. Consequently, Python programs contain code smells which indicate potential comprehension and maintenance problems. With the aim of supporting refactoring strategies to enhance maintainability, this paper describes how to detect code smells in Python programs. We introduce 11 Python smells and describe the detection strategy. We also implement a smell detection tool named Pysmell and use it to identify code smells in five real world Python systems. The results show that Pysmell can detect 285 code smell instances in total with the average precision of 97.7%. It reveals that Large Class and Large Method are most prevalent. Our experiment also implies Python programs may be suffering code smells further."}
{"_id": "5ae5539d1fc98b593b010f4e187ae4b15a121184", "title": "End-to-end text recognition with convolutional neural networks", "text": "Full end-to-end text recognition in natural images is a challenging problem that has received much attention recently. Traditional systems in this area have relied on elaborate models incorporating carefully hand-engineered features or large amounts of prior knowledge. In this paper, we take a different route and combine the representational power of large, multilayer neural networks together with recent developments in unsupervised feature learning, which allows us to use a common framework to train highly-accurate text detector and character recognizer modules. Then, using only simple off-the-shelf methods, we integrate these two modules into a full end-to-end, lexicon-driven, scene text recognition system that achieves state-of-the-art performance on standard benchmarks, namely Street View Text and ICDAR 2003."}
{"_id": "0fe35d406842f054ce306ef67b20dc51a5a59dfd", "title": "A Comparative Study of Demographic Attribute Inference in Twitter", "text": "Social media platforms have become a major gateway to receive and analyze public opinions. Understanding users can provide invaluable context information of their social media posts and significantly improve traditional opinion analysis models. Demographic attributes, such as ethnicity, gender, age, among others, have been extensively applied to characterize social media users. While studies have shown that user groups formed by demographic attributes can have coherent opinions towards political issues, these attributes are often not explicitly coded by users through their profiles. Previous work has demonstrated the effectiveness of different user signals such as users\u2019 posts and names in determining demographic attributes. Yet, these efforts mostly evaluate linguistic signals from users\u2019 posts and train models from artificially balanced datasets. In this paper, we propose a comprehensive list of user signals: self-descriptions and posts aggregated from users\u2019 friends and followers, users\u2019 profile images, and users\u2019 names. We provide a comparative study of these signals side-by-side in the tasks on inferring three major demographic attributes, namely ethnicity, gender, and age. We utilize a realistic unbalanced datasets that share similar demographic makeups in Twitter for training models and evaluation experiments. Our experiments indicate that self-descriptions provide the strongest signal for ethnicity and age inference and clearly improve the overall performance when combined with tweets. Profile images for gender inference have the highest precision score with overall score close to the best result in our setting. This suggests that signals in self-descriptions and profile images have potentials to facilitate demographic attribute inferences in Twitter, and are promising for future investigation."}
{"_id": "825c44afb65a50226cd7bd2ca3b2dd0e63d33366", "title": "Imaging cellular network dynamics in three dimensions using fast 3D laser scanning", "text": "Spatiotemporal activity patterns in three-dimensionally organized cellular networks are fundamental to the function of the nervous system. Despite advances in functional imaging of cell populations, a method to resolve local network activity in three dimensions has been lacking. Here we introduce a three-dimensional (3D) line-scan technology for two-photon microscopy that permits fast fluorescence measurements from several hundred cells distributed in 3D space. We combined sinusoidal vibration of the microscope objective at 10 Hz with 'smart' movements of galvanometric x-y scanners to repeatedly scan the laser focus along a closed 3D trajectory. More than 90% of cell somata were sampled by the scan line within volumes of 250 \u03bcm side length. Using bulk-loading of calcium indicator, we applied this method to reveal spatiotemporal activity patterns in neuronal and astrocytic networks in the rat neocortex in vivo. Two-photon population imaging using 3D scanning opens the field for comprehensive studies of local network dynamics in intact tissue."}
{"_id": "2d9b8a354a594dadf024528e30c0ae4b99c71234", "title": "Autoencoder node saliency: Selecting relevant latent representations", "text": "The autoencoder is an artificial neural network that learns hidden representations of unlabeled data. With a linear transfer function it is similar to the principal component analysis (PCA). While both methods use weight vectors for linear transformations, the autoencoder does not come with any indication similar to the eigenvalues in PCA that are paired with eigenvectors. We propose a novel supervised node saliency (SNS) method that ranks the hidden nodes, which contain weight vectors for transformations. SNS is able to indicate the nodes specialized in a learning task. The latent representations of a hidden node can be described using a one-dimensional histogram. We apply normalized entropy difference (NED) to measure the \u201dinterestingness\u201d of the histograms, and conclude a property for NED values to identify a good classifying node. By applying our methods to real datasets, we demonstrate their ability to find valuable nodes and explain the learned tasks in autoencoders."}
{"_id": "bb13eb440c38db3afacad04ae60ec37525890239", "title": "Syntax Annotation for the GENIA Corpus", "text": "Linguistically annotated corpus based on texts in biomedical domain has been constructed to tune natural language processing (NLP) tools for biotextmining. As the focus of information extraction is shifting from \"nominal\" information such as named entity to \"verbal\" information such as function and interaction of substances, application of parsers has become one of the key technologies and thus the corpus annotated for syntactic structure of sentences is in demand. A subset of the GENIA corpus consisting of 500 MEDLINE abstracts has been annotated for syntactic structure in an XMLbased format based on Penn Treebank II (PTB) scheme. Inter-annotator agreement test indicated that the writing style rather than the contents of the research abstracts is the source of the difficulty in tree annotation, and that annotation can be stably done by linguists without much knowledge of biology with appropriate guidelines regarding to linguistic phenomena particular to scientific texts."}
{"_id": "8af4913163a81ab3c59c97ec45ea7d04d2196f82", "title": "Algorithms for time series knowledge mining", "text": "Temporal patterns composed of symbolic intervals are commonly formulated with Allen's interval relations originating in temporal reasoning. This representation has severe disadvantages for knowledge discovery. The Time Series Knowledge Representation (TSKR) is a new hierarchical language for interval patterns expressing the temporal concepts of coincidence and partial order. We present effective and efficient mining algorithms for such patterns based on itemset techniques. A novel form of search space pruning effectively reduces the size of the mining result to ease interpretation and speed up the algorithms. On a real data set a concise set of TSKR patterns can explain the underlying temporal phenomena, whereas the patterns found with Allen's relations are far more numerous yet only explain fragments of the data."}
{"_id": "d998d6a7e34bc52f6e279037c63781a5f7a5467a", "title": "Kodu Game Lab: a programming environment", "text": "Kodu Game Lab is a tile-based visual programming tool that enables users to learn programming concepts through making and playing computer games. Kodu is a relatively new programming language designed specifically for young children to learn through independent exploration. It is integrated in a real-time isometric 3D gaming environment that is designed to compete with modern console games in terms of intuitive user interface and graphical production values."}
{"_id": "170ed7291219ef1d5e9d10ed186b1a7ef1f8bbc2", "title": "KairosVM: Deterministic introspection for real-time virtual machine hierarchical scheduling", "text": "Consolidation and isolation are key technologies that promoted the undisputed popularity of virtualization in most of the computer industry. This popularity has recently led to a growing interest in real-time virtualization, making this technology enter the real-time system market. However, it has several issues due to the strict timing guarantees contracted. Moreover supporting legacy software stacks adds another level of complexity when the software is a black box. We present KairosVM, a latency-bounded, real-time extension to Linux's KVM module. It aims to bridge the lack of communication of the real-time requirements between the guest scheduler and the host scheduler, exploiting virtual machine introspection. The hypervisor captures the real-time requirements of the guest by catching previously added undefined instructions, without the need to do any modification to the guests. Our evaluations show that KairosVM's overhead is negligible when compared to existing introspection solutions thus can be used in real-time."}
{"_id": "de1d2aafcd8c3065c869f15ff41b0dd1452722b4", "title": "Music Feature Maps with Convolutional Neural Networks for Music Genre Classification", "text": "Nowadays, deep learning is more and more used for Music Genre Classification: particularly Convolutional Neural Networks (CNN) taking as entry a spectrogram considered as an image on which are sought different types of structure.\n But, facing the criticism relating to the difficulty in understanding the underlying relationships that neural networks learn in presence of a spectrogram, we propose to use, as entries of a CNN, a small set of eight music features chosen along three main music dimensions: dynamics, timbre and tonality. With CNNs trained in such a way that filter dimensions are interpretable in time and frequency, results show that only eight music features are more efficient than 513 frequency bins of a spectrogram and that late score fusion between systems based on both feature types reaches 91% accuracy on the GTZAN database."}
{"_id": "3349ff515ae864bc712634e7b290e08bb77eec75", "title": "A study of the documentation essential to software maintenance", "text": "Software engineering has been striving for years to improve the practice of software development and maintenance. Documentation has long been prominent on the list of recommended practices to improve development and help maintenance. Recently however, agile methods started to shake this view, arguing that the goal of the game is to produce software and that documentation is only useful as long as it helps to reach this goal.On the other hand, in the re-engineering field, people wish they could re-document useful legacy software so that they may continue maintain them or migrate them to new platform.In these two case, a crucial question arises: \"How much documentation is enough?\" In this article, we present the results of a survey of software maintainers to try to establish what documentation artifacts are the most useful to them."}
{"_id": "3d8650c28ae2b0f8d8707265eafe53804f83f416", "title": "Experiments with a New Boosting Algorithm", "text": "In an earlier paper [9], we introduced a new \u201cboosting\u201d algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a \u201cpseudo-loss\u201d which is a method for forcing a learning algorithm of multi-label concepts to concentrate on the labels that are hardest to discriminate. In this paper, we describe experiments we carried out to assess how well AdaBoost with and without pseudo-loss, performs on real learning problems. We performed two sets of experiments. The first set compared boosting to Breiman\u2019s [1] \u201cbagging\u201d method when used to aggregate various classifiers (including decision trees and single attribute-value tests). We compared the performance of the two methods on a collection of machine-learning benchmarks. In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem."}
{"_id": "f1161cd952f7c52266d0761ab4973c1c134b4962", "title": "Machine Learning, Neural and Statistical Classification", "text": "Algorithms which construct classifiers from sample data -such as neural networks, radial basis functions, and decision trees -have attracted growing attention for their wide applicability. Researchers in the fields of Statistics, Artificial Intelligence, Machine Learning, Data Mining, and Pattern Recognition are continually introducing (or rediscovering) induction methods, and often publishing implementing code. It is natural for practitioners and potential users to wonder, \"Which classification technique is best?\", or more realistically, \"What subset of methods tend to work well for a given type of dataset?\". This book provides perhaps the best current answer to that question."}
{"_id": "8e33c26c619e4283a8b27f240d4855265ad22015", "title": "Modeling Human Understanding of Complex Intentional Action with a Bayesian Nonparametric Subgoal Model", "text": "Most human behaviors consist of multiple parts, steps, or subtasks. These structures guide our action planning and execution, but when we observe others, the latent structure of their actions is typically unobservable, and must be inferred in order to learn new skills by demonstration, or to assist others in completing their tasks. For example, an assistant who has learned the subgoal structure of a colleague\u2019s task can more rapidly recognize and support their actions as they unfold. Here we model how humans infer subgoals from observations of complex action sequences using a nonparametric Bayesian model, which assumes that observed actions are generated by approximately rational planning over unknown subgoal sequences. We test this model with a behavioral experiment in which humans observed different series of goal-directed actions, and inferred both the number and composition of the subgoal sequences associated with each goal. The Bayesian model predicts human subgoal inferences with high accuracy, and significantly better than several alternative models and straightforward heuristics. Motivated by this result, we simulate how learning and inference of subgoals can improve performance in an artificial user assistance task. The Bayesian model learns the correct subgoals from fewer observations, and better assists users by more rapidly and accurately inferring the goal of their actions than alternative approaches."}
{"_id": "ce8efafd57a849e84d3f694bf12a21ce1a28029d", "title": "Webcam-Based Accurate Eye-Central Localization", "text": "This paper contains experimental procedure of webcam-based eye-tracker specially for low power devices. This paper consists of five processes. First one is background suppression for reducing average processing requirement. Second one is Haar-cascade feature-based face detection algorithm. Third one is geometrically eye-position determination. The fourth part is to detect and track eye-ball center using mean of gradient vector. The fifth and last step is to detect where the user looking. We simply calculate percentage of movement of eye to detect either it looking left, right, up or down. This procedure is highly effective with satisfactory accuracy. It also requires less processing power."}
{"_id": "632a3e8971f6a26cf127a03689c28399d2fce7d8", "title": "Domain Adaptation Problems: A DASVM Classification Technique and a Circular Validation Strategy", "text": "This paper addresses pattern classification in the framework of domain adaptation by considering methods that solve problems in which training data are assumed to be available only for a source domain different (even if related) from the target domain of (unlabeled) test data. Two main novel contributions are proposed: 1) a domain adaptation support vector machine (DASVM) technique which extends the formulation of support vector machines (SVMs) to the domain adaptation framework and 2) a circular indirect accuracy assessment strategy for validating the learning of domain adaptation classifiers when no true labels for the target--domain instances are available. Experimental results, obtained on a series of two-dimensional toy problems and on two real data sets related to brain computer interface and remote sensing applications, confirmed the effectiveness and the reliability of both the DASVM technique and the proposed circular validation strategy."}
{"_id": "85379baf4972e15cd7b9f5e06ce177e693b35f53", "title": "Semi-Supervised Kernel Matching for Domain Adaptation", "text": "In this paper, we propose a semi-supervised kernel matching method to address domain adaptation problems where the source distribution substantially differs from the target distribution. Specifically, we learn a prediction function on the labeled source data while mapping the target data points to similar source data points by matching the target kernel matrix to a submatrix of the source kernel matrix based on a Hilbert Schmidt Independence Criterion. We formulate this simultaneous learning and mapping process as a non-convex integer optimization problem and present a local minimization procedure for its relaxed continuous form. Our empirical results show the proposed kernel matching method significantly outperforms alternative methods on the task of cross domain sentiment classification."}
{"_id": "88a697325ea102bf2468598a1bd33c632f9162b2", "title": "Soft-Switching Solid-State Transformer (S4T)", "text": "This paper presents a new topology for a fully bidirectional soft-switching solid-state transformer (S4T). The minimal topology, featuring 12 main devices and a high-frequency transformer, does not use an intermediate dc voltage link, and provides sinusoidal input and output voltages. The S4T can be configured to interface with two- or multiterminal dc, single- or multiphase ac systems. An auxiliary resonant circuit creates zero-voltage-switching conditions for main devices from no-load to full-load, and helps manage interactions with circuit parasitic elements. The modularized structure allows series and/or parallel stacking of converter cells for high-voltage and high-power applications."}
{"_id": "7d98511e9d429bf8b338871a882c294d1f949b58", "title": "Categorical equational systems : algebraic models and equational reasoning", "text": "Declaration This dissertation is the result of my own work done under the guidance of my supervisor, and includes nothing which is the outcome of work done in collaboration except where specifically indicated in the text. This dissertation is not substantially the same as any that I have submitted or will be submitting for a degree or diploma or other qualification at this or any other University. This dissertation does not exceed the regulation length of 60,000 words, including tables and footnotes. 5 Summary We introduce two abstract notions of equational algebraic system, called Equational System (ES) and Term Equational System (TES), in order to achieve sufficient expressivity as needed in modern applications in computer science. These generalize the classical concept of (enriched) algebraic theory of Kelly and Power [1993]. We also develop a theory for constructing free algebras for ESs and a theory of equational reasoning for TESs. In Part I, we introduce the general abstract, yet practical, concept of equational system and develop finitary and transfinitary conditions under which we give an explicit construction of free algebras for ESs. This free construction extends the well-known construction of free algebras for \u03c9-cocontinuous endofunctors to an equational setting, capturing the intuition that free algebras consist of freely constructed terms quotiented by given equations and congruence rules. We further show the monadicity and cocom-pleteness of categories of algebras for ESs under the finitary and transfinitary conditions. To illustrate the expressivity of equational systems, we exhibit various examples including two modern applications, the \u03a3-monoids of Fiore et al. [1999] and the \u03c0-algebras of Stark [2005]. In Part II, we introduce the more concrete notion of term equational system, which is obtained by specializing the concept of equational system, but remains more general than that of enriched algebraic theory. We first develop a sound logical deduction system, called Term Equational Logic (TEL), for equational reasoning about algebras of TESs. Then, to pursue a complete logic, we give an internal completeness result, from which together with the explicit construction of free algebras one can typically synthesize sound and complete rewriting-style equational logics. To exemplify this scenario, we give two applications: multi-sorted algebraic theories and nominal equational theories of Clouston and Pitts [2007] and of Gabbay and Mathijssen [2007]. 7 Acknowledgements First and foremost, I am deeply grateful to my Ph.D. supervisor Marcelo Fiore for his patient guidance and support, without which this thesis would \u2026"}
{"_id": "071fd3f8a05ed1b29b72c77612222925c4fc0138", "title": "A Novel Myoelectric Pattern Recognition Strategy for Hand Function Restoration After Incomplete Cervical Spinal Cord Injury", "text": "This study presents a novel myoelectric pattern recognition strategy towards restoration of hand function after incomplete cervical spinal cord Injury (SCI). High density surface electromyogram (EMG) signals comprised of 57 channels were recorded from the forearm of nine subjects with incomplete cervical SCI while they tried to perform six different hand grasp patterns. A series of pattern recognition algorithms with different EMG feature sets and classifiers were implemented to identify the intended tasks of each SCI subject. High average overall accuracies (>; 97%) were achieved in classification of seven different classes (six intended hand grasp patterns plus a hand rest pattern), indicating that substantial motor control information can be extracted from partially paralyzed muscles of SCI subjects. Such information can potentially enable volitional control of assistive devices, thereby facilitating restoration of hand function. Furthermore, it was possible to maintain high levels of classification accuracy with a very limited number of electrodes selected from the high density surface EMG recordings. This demonstrates clinical feasibility and robustness in the concept of using myoelectric pattern recognition techniques toward improved function restoration for individuals with spinal injury."}
{"_id": "c15710116262610a988fa9e6f8657aa630d22200", "title": "Suicidal decapitation using a tractor loader: a case report and review of the literature.", "text": "In forensic practice, decapitated bodies are predominantly associated with decapitation by wheels of trains or with postmortem dismemberment following homicide. In the suicidal context, decapitation accounts for less than 1% of total suicide. Apart from decapitation by trains, other encountered methods involve suicidal hanging and vehicle-assisted ligature suicide. Reported here is a unique case of suicidal decapitation in a 45-year-old man using a tractor loader at the foot of a silo, on his farm. The head was recovered in the loader and there were several impact spots from the loader as well as blood on the silo wall. The autopsy revealed a complete decapitation wound with the severance plane located between the third and fourth cervical vertebra. A 1.5 cm wide abrasion on the anterior part of the neck and abrasions under the chin were noted. This very unique case of intentional suicidal decapitation is the first reported case of a planned system intended to create decapitation outside the unique case of homemade guillotine and the more common decapitation by train."}
{"_id": "92f366979edb22ed7dba1de20a241d384658b54a", "title": "Anatomical Basis for Safe and Effective Volumization of the Temple.", "text": "BACKGROUND\nOne of the earliest but often unaddressed signs of facial aging is volume loss in the temple. Treatment of the area can produce satisfying results for both patient and practitioner.\n\n\nOBJECTIVE\nSafe injection requires explicit knowledge of the anatomy to avoid complications related to the multitude of vessels that course throughout the region at various depths. The authors aim to detail the anatomy of the area and provide a safe and easy-to-follow method for injection.\n\n\nMATERIALS AND METHODS\nThe authors review the relevant anatomy of the temporal region and its application to cosmetic filler injections.\n\n\nRESULTS\nThe authors describe an easy-to-follow approach for a safe and effective injection window based on numerous anatomical studies. Injection in this area is not without risk, including potential blindness. The authors review the potential complications and their treatments.\n\n\nCONCLUSION\nHollowing of the temple is an early sign of aging that, when corrected, can lead to significant patient and practitioner satisfaction. Proper anatomically knowledge is required to avoid potentially severe complications. In this study, the authors present a reliable technique to safely and effectively augment this often undertreated area of the aging face."}
{"_id": "b4856d812f068e3459b99538670423884e0321dd", "title": "Weakly Supervised Object Discovery by Generative Adversarial & Ranking Networks.", "text": "The deep generative adversarial networks (GAN) recently have been shown to be promising for different computer vision applications, like image editing, synthesizing high resolution images, generating videos, etc. These networks and the corresponding learning scheme can handle various visual space mappings. We approach GANs with a novel training method and learning objective, to discover multiple object instances for three cases: 1) synthesizing a picture of a specific object within a cluttered scene; 2) localizing different categories in images for weakly supervised object detection; and 3) improving object discovery in object detection pipelines. A crucial advantage of our method is that it learns a new deep similarity metric, to distinguish multiple objects in one image. We demonstrate that the network can act as an encoder-decoder generating parts of an image which contain an object, or as a modified deep CNN to represent images for object detection in supervised and weakly supervised scheme. Our ranking GAN offers a novel way to search through images for object specific patterns. We have conducted experiments for different scenarios and demonstrate the method performance for object synthesizing and weakly supervised object detection and classification using the MS-COCO and PASCAL VOC datasets."}
{"_id": "4fbe1e8f65f4c71565e8e6eb47c0c3a78a182bf3", "title": "Translation techniques in cross-language information retrieval", "text": "Cross-language information retrieval (CLIR) is an active sub-domain of information retrieval (IR). Like IR, CLIR is centered on the search for documents and for information contained within those documents. Unlike IR, CLIR must reconcile queries and documents that are written in different languages. The usual solution to this mismatch involves translating the query and/or the documents before performing the search. Translation is therefore a pivotal activity for CLIR engines. Over the last 15 years, the CLIR community has developed a wide range of techniques and models supporting free text translation. This article presents an overview of those techniques, with a special emphasis on recent developments."}
{"_id": "5fe68bfd7ebf8f5a45eba87e347b9725fb061b18", "title": "A Review on Phishing Attacks and Various Anti Phishing Techniques", "text": "Phishing is a threat that acquire sensitive information such as username, password etc through online. Phishing often takes place in email spoofing or instant messaging .Phishing email contains messages like ask the users to enter the personal information so that it is easy for hackers to hack the information. This paper presents an overview about various phishing attacks and various techniques to protect the information."}
{"_id": "3f3a4821ac823445ca92f31703be6b0560e3f2ea", "title": "Open Government Data Implementation Evaluation", "text": "This paper analyses the implementation of the Open Government Data strategy and portal of the City of Vienna. This evaluation is based on qualitative interviews and online polls after the strategy was implemented. Two groups of users were involved in the evaluation: internal target groups (employees and heads of department in the City of Vienna\u2019s public administration departments) and external stakeholders (citizens, business representatives, science and research, journalists). Analyzed aspects included the present organizational processes, the benefits (to business and society), and requirements for future Open Government Data initiatives. This evaluation reveals success factors which accompanied the implementation: the clear definition of responsibilities and the implementation along a process model, the integration of the Open Government Data platform into existing Content Management Systems, the evaluation of the Open Government Data initiative very shortly after its inception. Based on the theoretical and empirical findings, recommendations for future Open Government Data strategies are made which target the local authority and would require action on the federal level such as Creative Commons Attribution License as the default for subsidy funds or public relation measures carried out directly by the data providing departments."}
{"_id": "831a87c47f9f46eb9734b9ca0ad076e666d76e88", "title": "Geometric Correspondence Network for Camera Motion Estimation", "text": "In this paper, we propose a new learning scheme for generating geometric correspondences to be used for visual odometry. A convolutional neural network (CNN) combined with a recurrent neural network (RNN) are trained together to detect the location of keypoints as well as to generate corresponding descriptors in one unified structure. The network is optimized by warping points from source frame to reference frame, with a rigid body transform. Essentially, learning from warping. The overall training is focused on movements of the camera rather than movements within the image, which leads to better consistency in the matching and ultimately better motion estimation. Experimental results show that the proposed method achieves better results than both related deep learning and hand crafted methods. Furthermore, as a demonstration of the promise of our method we use a naive SLAM implementation based on these keypoints and get a performance on par with ORB-SLAM."}
{"_id": "2f6121bf4541fb54344d50cd2e2578000a473bee", "title": "An Efficient and Fast Li-Ion Battery Charging System Using Energy Harvesting or Conventional Sources", "text": "This paper presents a multi-input battery charging system that is capable of increasing the charging efficiency of lithium-ion (Li-ion) batteries. The proposed battery charging system consists of three main building blocks: a pulse charger, a step-down dc\u2013dc converter, and a power path controller. The pulse charger allows charging via a wall outlet or an energy harvesting system. It implements charge techniques that increase the battery charge efficiency of a Li-ion battery. The power path controller (PPC) functions as a power monitor and selects the optimal path for charging either via an energy harvesting system or an ac adapter. The step-down dc\u2013dc converter provides an initial supply voltage to start up the energy harvesting system. The integrated circuit design is implemented on a CMOS 0.18\u00a0\u03bcm technology process. Experimental results verify that the proposed pulse charger reduces the charging time of 100\u00a0mAh and 45\u00a0mAh Li-ion batteries respectively by 37.35% and 15.56% and improves the charge efficiency by 3.15% and 3.27% when compared to the benchmark constant current-constant voltage charging technique. The step-down dc\u2013dc converter has a maximum efficiency of 90% and the operation of the PPC is also verified by charging the battery via a thermoelectric energy harvesting system."}
{"_id": "b8b28250e373fb47af8784cd00c39bf44b08e0e4", "title": "Automatic Question Generation using Discourse Cues", "text": "In this paper, we present a system that automatically generates questions from natural language text using discourse connectives. We explore the usefulness of the discourse connectives for Question Generation (QG) that looks at the problem beyond sentence level. Our work divides the QG task into content selection and question formation. Content selection consists of finding the relevant part in text to frame question from while question formation involves sense disambiguation of the discourse connectives, identification of question type and applying syntactic transformations on the content. The system is evaluated manually for syntactic and semantic correctness."}
{"_id": "35321a8880d1ff77875ae39531f0ec92e9f888d4", "title": "Natural Language Processing methods and systems for biomedical ontology learning", "text": "While the biomedical informatics community widely acknowledges the utility of domain ontologies, there remain many barriers to their effective use. One important requirement of domain ontologies is that they must achieve a high degree of coverage of the domain concepts and concept relationships. However, the development of these ontologies is typically a manual, time-consuming, and often error-prone process. Limited resources result in missing concepts and relationships as well as difficulty in updating the ontology as knowledge changes. Methodologies developed in the fields of Natural Language Processing, information extraction, information retrieval and machine learning provide techniques for automating the enrichment of an ontology from free-text documents. In this article, we review existing methodologies and developed systems, and discuss how existing methods can benefit the development of biomedical ontologies."}
{"_id": "eaa9c95930e4d2e4630b0634831596c74348ca52", "title": "Predictive coding: a fresh view of inhibition in the retina.", "text": "Interneurons exhibiting centre--surround antagonism within their receptive fields are commonly found in peripheral visual pathways. We propose that this organization enables the visual system to encode spatial detail in a manner that minimizes the deleterious effects of intrinsic noise, by exploiting the spatial correlation that exists within natural scenes. The antagonistic surround takes a weighted mean of the signals in neighbouring receptors to generate a statistical prediction of the signal at the centre. The predicted value is subtracted from the actual centre signal, thus minimizing the range of outputs transmitted by the centre. In this way the entire dynamic range of the interneuron can be devoted to encoding a small range of intensities, thus rendering fine detail detectable against intrinsic noise injected at later stages in processing. This predictive encoding scheme also reduces spatial redundancy, thereby enabling the array of interneurons to transmit a larger number of distinguishable images, taking into account the expected structure of the visual world. The profile of the required inhibitory field is derived from statistical estimation theory. This profile depends strongly upon the signal: noise ratio and weakly upon the extent of lateral spatial correlation. The receptive fields that are quantitatively predicted by the theory resemble those of X-type retinal ganglion cells and show that the inhibitory surround should become weaker and more diffuse at low intensities. The latter property is unequivocally demonstrated in the first-order interneurons of the fly's compound eye. The theory is extended to the time domain to account for the phasic responses of fly interneurons. These comparisons suggest that, in the early stages of processing, the visual system is concerned primarily with coding the visual image to protect against subsequent intrinsic noise, rather than with reconstructing the scene or extracting specific features from it. The treatment emphasizes that a neuron's dynamic range should be matched to both its receptive field and the statistical properties of the visual pattern expected within this field. Finally, the analysis is synthetic because it is an extension of the background suppression hypothesis (Barlow & Levick 1976), satisfies the redundancy reduction hypothesis (Barlow 1961 a, b) and is equivalent to deblurring under certain conditions (Ratliff 1965)."}
{"_id": "33442a58af2978e0d8af8c513530474f8bed6109", "title": "A SPEC RG Cloud Group's Vision on the Performance Challenges of FaaS Cloud Architectures", "text": "As a key part of the serverless computing paradigm, Function-as-a-Service (FaaS) platforms enable users to run arbitrary functions without being concerned about operational issues. However, there are several performance-related issues surrounding the state-of-the-art FaaS platforms that can deter widespread adoption of FaaS, including sizeable overheads, unreliable performance, and new forms of the cost-performance trade-off. In this work we, the SPEC RG Cloud Group, identify six performance-related challenges that arise specifically in this FaaS model, and present our roadmap to tackle these problems in the near future. This paper aims at motivating the community to solve these challenges together."}
{"_id": "d892fb81815052796e1bf4f4d6794e72956d1807", "title": "Automatic navigation and landing of an indoor AR. drone quadrotor using ArUco marker and inertial sensors", "text": "Within the next few years, unmanned quadrotors are likely to become an important vehicle in humans' daily life. However, their automatic navigation and landing in indoor environments are among the commonly discussed topics in this regard. In fact, the quadrotor should be able to automatically find the landing point from the nearby position, navigate toward it, and finally, land on it accurately and smoothly. In this paper, we proposed a low-cost and thorough solution to this problem by using both bottom-facing and front-facing cameras of the drone. In addition, in the case that vision data were unavailable, inertial measurements alongside a Kalman filter were used to navigate the drone to achieve the promising continuity and reliability. An AR.Drone 2.0 quadrotor, as well as an ArUco marker, were employed in order to test the proposed method experimentally. The results indicated that the drone successfully landed on the predefined position with an acceptable time and accuracy."}
{"_id": "7768f88efe03b9735f79462c0f89aa04a074f107", "title": "Improved Neural Machine Translation with SMT Features", "text": "Neural machine translation (NMT) conducts end-to-end translation with a source language encoder and a target language decoder, making promising translation performance. However, as a newly emerged approach, the method has some limitations. An NMT system usually has to apply a vocabulary of certain size to avoid the time-consuming training and decoding, thus it causes a serious out-of-vocabulary problem. Furthermore, the decoder lacks a mechanism to guarantee all the source words to be translated and usually favors short translations, resulting in fluent but inadequate translations. In order to solve the above problems, we incorporate statistical machine translation (SMT) features, such as a translation model and an n-gram language model, with the NMT model under the log-linear framework. Our experiments show that the proposed method significantly improves the translation quality of the state-of-the-art NMT system on Chinese-toEnglish translation tasks. Our method produces a gain of up to 2.33 BLEU score on NIST open test sets."}
{"_id": "71cbd5b7858785e8946523ca59c051eb0f1347ba", "title": "MARSSx 86 : A Full System Simulator for x 86 CPUs", "text": "We present MARSS, an open source, fast, full system simulation tool built on QEMU to support cycle-accurate simulation of superscalar homogeneous and heterogeneous multicore x86 processors. MARSS includes detailed models of coherent caches, interconnections, chipsets, memory and IO devices. MARSS simulates the execution of all software components in the system, including unmodified binaries of applications, OS and libraries."}
{"_id": "da12335ee5dbba81bc60e2f5f26828278a239b0c", "title": "Automatic programming assessment and test data generation a review on its approaches", "text": "Automatic programming assessment has recently become an important method in assisting lecturers and instructors of programming courses to automatically mark and grade students' programming exercises as well as to provide useful feedbacks on students' programming solutions. As part of the method, test data generation process plays as an integral part to perform a dynamic testing on students' programs. To date, various automated methods for test data generation particularly in software testing field are available. Unfortunately, they are seldom used in the context of automatic programming assessment research area. Nevertheless, there have been limited studies taking a stab to integrate both of them due to more useful features and to include a better quality program testing coverage. Thus, this paper provides a review on approaches that have been implemented in various studies with regard to automatic programming assessment, test data generation and integration of both of them. This review is aimed at gathering different techniques that have been employed to forward an interested reader the starting points for finding further information regarding its trends. In addition, the result of the review reveals the main gap that exists within the context of the considered areas which contributes to our main research topic of interest."}
{"_id": "9c14dac9156668ef7c35f5464da70dfc0b1362ac", "title": "Learning Important Features Through Propagating Activation Differences", "text": "The purported \u201cblack box\u201d nature of neural networks is a barrier to adoption in applications where interpretability is essential. Here we present DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input. DeepLIFT compares the activation of each neuron to its \u2018reference activation\u2019 and assigns contribution scores according to the difference. By optionally giving separate consideration to positive and negative contributions, DeepLIFT can also reveal dependencies which are missed by other approaches. Scores can be computed efficiently in a single backward pass. We apply DeepLIFT to models trained on MNIST and simulated genomic data, and show significant advantages over gradient-based methods. Video tutorial: http://goo.gl/ qKb7pL, code: http://goo.gl/RM8jvH."}
{"_id": "3a0e8925a1fbb715accc3dab8a03a38207dc0206", "title": "Approximation by exponential sums revisited \u2729", "text": "Article history: Received 15 May 2009 Accepted 15 August 2009 Available online 2 September 2009 Communicated by Ginette Saracco"}
{"_id": "c4117d5fc857e58adca3565e78771a36402a9a00", "title": "Reduced keyframes for fast bundle adjustment using point and line features in monoslam", "text": "Visual Simultaneous Localization and Mapping (VSLAM) requires feature detection on visual data. In indoor scenes that include architectures such as plain walls and doors, there are no or less corners detected, in such cases tracking will be lost. To overcome this situation we track different types of features that help in feature extraction even in textureless scenes. Line features are used to get the outline of the structure. This paper explains method of line based localization when no point features are detected and process of mapping the matched lines between two frames. One of the existing state of art monocular SLAM that extracts point features only, is used as the main build and LBD (Line Band Descriptor) is incorporated in it for localization and mapping. Our new system extracts more features compared to the existing system and complexity of the bundle adjustment is decreased as it decreases the keyframes selected for local mapping and optimization when tested using our own dataset and compared with the existing system."}
{"_id": "b982db342c253a24552df7fecda471de18178c66", "title": "Teaching the Basics of NLP and ML in an Introductory Course to Information Science", "text": "In this paper we discuss our experience of teaching basic Natural Language Processing (NLP) and Machine Learning (ML) in an introductory course to Information Science. We discuss the challenges we faced while incorporating NLP and ML to the curriculum followed by a presentation of how we met these challenges. The overall response (of students) to the inclusion of this new topic to the curriculum has been positive. Students this semester are pursuing NLP/ML projects, formulating their own tasks (some of which are novel and presented towards the end of the paper), collecting and annotating data and building models for their task."}
{"_id": "4148bee10361cc9aa5eaf1b97670734c0e0abdee", "title": "Echo State Queueing Network: A new reservoir computing learning tool", "text": "In the last decade, a new computational paradigm was introduced in the field of Machine Learning, under the name of Reservoir Computing (RC). RC models are neural networks which a recurrent part (the reservoir) that does not participate in the learning process, and the rest of the system where no recurrence (no neural circuit) occurs. This approach has grown rapidly due to its success in solving learning tasks and other computational applications. Some success was also observed with another recently proposed neural network designed using Queueing Theory, the Random Neural Network (RandNN). Both approaches have good properties and identified drawbacks. In this paper, we propose a new RC model called Echo State Queueing Network (ESQN), where we use ideas coming from RandNNs for the design of the reservoir. ESQNs consist in ESNs where the reservoir has a new dynamics inspired by recurrent RandNNs. The paper positions ESQNs in the global Machine Learning area, and provides examples of their use and performances. We show on largely used benchmarks that ESQNs are very accurate tools, and we illustrate how they compare with standard ESNs."}
{"_id": "bae3082c934833f43071bf53bd5596b53e86593e", "title": "Experiences of stigma and access to HAART in children and adolescents living with HIV/AIDS in Brazil.", "text": "This study describes and conceptualizes the experiences of stigma in a group of children living with HIV in S\u00e3o Paulo, Brazil, and evaluates the impact of access to highly active antiretroviral therapy (HAART) over the social course of AIDS and over the children's experiences of stigma. Through ethnographic research in S\u00e3o Paulo from 1999 to 2001, the life trajectories of 50 children ages 1-15 living with or affected by HIV were studied. Data were collected via participant observation and semi-structured informal interviews and analyzed using social theories on illness experience and social inequality. Our results demonstrate that AIDS-related stigma occurs within complex discrimination processes that change as children reach adolescence. We found that structural violence in the forms of poverty, racism, and inequalities in social status, gender, and age fuels children's experiences of stigma. We also describe how access to HAART changes the lived experience of children, reduces stigma, and brings new challenges in AIDS care such as adolescents' sexuality and treatment adherence. Based on these results, we propose structural violence as the framework to study stigma and argue that interventions to reduce stigma that solely target the perception and attitudes toward people living with HIV are limited. In contrast universal access to HAART in Brazil is a powerful intervention that reduces stigma, in that it transforms AIDS from a debilitating and fatal disease to a chronic and manageable one, belongs to a broader mechanism to assure citizens' rights, and reduces social inequalities in access to health care."}
{"_id": "b74a7508ea021133a0c62b93ba4319c1e5b9943e", "title": "Bricks, Blocks, Boxes, Cubes, and Dice: On the Role of Cubic Shapes for the Design of Tangible Interactive Devices", "text": "Cubic shapes play an astonishing role in the design of tangible interactive devices. Due to our curiosity for this widespread design preference lasting over thirty years, we constituted a literature survey of papers, books and products since the late 1970s. Out of a corpus of fourty-seven papers, books and products that propose cubic shapes for tangible interactive devices we trace the origins of cubicle tangibles and highlight the rationale for their application. Through a comparative study, we analyze the properties of this shape for tangible interaction design and classify these along the themes of: Manipulation as Input, Placement in Space as Input, Arrangement, Multifunctionality, Randomness, Togetherness & Variations, Physical Qualities, Container, and Pedestal for Output. We propose a taxonomy for cubic shaped tangible interactive devices based on the reviewed contributions, in order to support researchers and designers in their future work of designing cubic shaped tangible interactive devices."}
{"_id": "cb759ecc850ed9e4da36f62797bbee883cc930e1", "title": "A case study of a five-year-old child with pervasive developmental disorder-not otherwise specified using sound-based interventions.", "text": "The aim of this study was to determine the efficacy of The Listening Program (TLP) in treating a child with pervasive developmental disorder-not otherwise specified (PDD-NOS). Using a single-subject case study design, one child with PDD-NOS was administered a 20-week TLP intervention focused on improving sensory processing and language function. Data collection included pre- and post-evaluations using video footage, and Sensory Profile and Listening Checklist questionnaires. Results of the study indicated improved behaviour and sensory tolerance in the post-intervention video footage, including active participation in singing and movements to song. Sensory Profile and Listening Checklist questionnaires indicated significant improvements in sensory processing, receptive/expressive listening and language, motor skills, and behavioural/social adjustment at the post-intervention assessment. Although small in scope, this study highlights the need for continued research by occupational therapists into sound-based interventions. Particularly, occupational therapists need to perform larger-scale studies utilizing TLP to verify the efficacy of this alternative treatment method."}
{"_id": "0be87b7efe2c4ea7079cc88ec0cf96d2de4406f3", "title": "Health, Inequality, and Economic Development", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at . http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "c62e4ef5b459827a13cc1f7015dbf33ea7677e06", "title": "Real-Time CPU-Based Large-Scale Three-Dimensional Mesh Reconstruction", "text": "In robotics, especially in this era of autonomous driving, mapping is one key ability of a robot to be able to navigate through an environment, localize on it, and analyze its traversability. To allow for real-time execution on constrained hardware, the map usually estimated by feature-based or semidense SLAM algorithms is a sparse point cloud; a richer and more complete representation of the environment is desirable. Existing dense mapping algorithms require extensive use of graphics processing unit (GPU) computing and they hardly scale to large environments; incremental algorithms from sparse points still represent an effective solution when light computational effort is needed and big sequences have to be processed in real time. In this letter, we improved and extended the state-of-the-art incremental manifold mesh algorithm proposed by Litvinov and Lhuillier and extended by Romanoni and Matteucci. While these algorithms do not reconstruct the map in real time and they embed points from SLAM or structure from motion only when their position is fixed, in this letter, we propose the first incremental algorithm able to reconstruct a manifold mesh in real time through single core CPU processing, which is also able to modify the mesh according to three-dimensional points updates from the underlying SLAM algorithm. We tested our algorithm against two state-of-the-art incremental mesh mapping systems on the KITTI dataset, and we showed that, while accuracy is comparable, our approach is able to reach real-time performances thanks to an order of magnitude speed-up."}
{"_id": "909010161b696582c2f30890f43174602284caf7", "title": "Does the use of social networking sites increase children's risk of harm?", "text": "Although research findings have been equivocal as to whether the use of social networking sites (SNS) increases experiences of online risk among children, the affordances of SNS lend support to this possibility, attracting much policy and public concern. The present article examines whether the use of such services increases the risks that children and young people encounter by analyzing data from a random stratified sample of approximately 1000 internetusing children aged 9-16 years in each of 25 European countries. Four hypotheses were formulated and tested. The first hypothesis, namely that children who use social networking sites will encounter more risks online than those who do not, is supported by the data. The second hypothesis stated that SNS users with more digital competence will encounter more online risk than those with less competence; this was also supported, despite being counter to common assumptions. Thirdly, we hypothesized that SNS users with more risky SNS practices (e.g. a public profile, displaying identifying information, with a very large number of contacts) will encounter more online risk than those with fewer risky practices: this too was supported by the data; thus what matters for risk is how SNS are used, a useful point for awareness-raising initiatives. The fourth hypothesis stated that SNS users with more digital competence in using the internet will experience less harm associated with online risk. The data did not support this hypothesis, since digital competence did not reduce the probability of children saying that they have been bothered or upset by something on the internet. Finally, the study found that, although this had not been predicted, whether or not risks are experienced as harmful depends on the specific relation between risks and platforms (website, instant messaging, gaming or social networking). We call on future research to explore how particular affordances sustain particular communicative conditions and, in turn, are responded to differently by children. The research and policy implications of the findings are discussed."}
{"_id": "d2653f5beb9bee689f32610a3bcc87cb5aade992", "title": "Integrating the Architecture Tradeoff Analysis Method ( ATAM ) with the Cost Benefit Analysis Method ( CBAM )", "text": ".............................................................................................................vii"}
{"_id": "d7d9442c3e578f7ad6a6d49ea1aab368294b712f", "title": "Textile Materials for the Design of Wearable Antennas: A Survey", "text": "In the broad context of Wireless Body Sensor Networks for healthcare and pervasive applications, the design of wearable antennas offers the possibility of ubiquitous monitoring, communication and energy harvesting and storage. Specific requirements for wearable antennas are a planar structure and flexible construction materials. Several properties of the materials influence the behaviour of the antenna. For instance, the bandwidth and the efficiency of a planar microstrip antenna are mainly determined by the permittivity and the thickness of the substrate. The use of textiles in wearable antennas requires the characterization of their properties. Specific electrical conductive textiles are available on the market and have been successfully used. Ordinary textile fabrics have been used as substrates. However, little information can be found on the electromagnetic properties of regular textiles. Therefore this paper is mainly focused on the analysis of the dielectric properties of normal fabrics. In general, textiles present a very low dielectric constant that reduces the surface wave losses and increases the impedance bandwidth of the antenna. However, textile materials are constantly exchanging water molecules with the surroundings, which affects their electromagnetic properties. In addition, textile fabrics are porous, anisotropic and compressible materials whose thickness and density might change with low pressures. Therefore it is important to know how these characteristics influence the behaviour of the antenna in order to minimize unwanted effects. This paper presents a survey of the key points for the design and development of textile antennas, from the choice of the textile materials to the framing of the antenna. An analysis of the textile materials that have been used is also presented."}
{"_id": "88ddb1cb74e1ae7bb83a40b0ae05d3d18c4f88bb", "title": "Computer Game Mods, Modders, Modding, and the Mod Scene", "text": "Computer games have increasingly been the focus of user-led innovations in the form of game mods. This paper examines how different kinds of socio-technical affordances serve to organize the actions of the people who develop and share their game mods. The affordances examined include customization and tailoring mechanisms, software and content copyright licenses, game software infrastructure and development tools, career contingencies and organizational practices of mod teams, and social worlds intersecting the mod scene. Numerous examples will be used to ground this review and highlight how such affordances can organize, facilitate or constrain what can be done. Overall, this study helps to provide a deeper understanding of how a web of associated affordances collectively serve to govern what mods get made, how modding practices emerge and flourish, and how modders and the game industry serve each others' interests, though not always in equivocal terms."}
{"_id": "a913797b1566eee6d8f7d1177df5d7ef29dadaef", "title": "Feature selection and the class imbalance problem in predicting protein function from sequence.", "text": "When the standard approach to predict protein function by sequence homology fails, other alternative methods can be used that require only the amino acid sequence for predicting function. One such approach uses machine learning to predict protein function directly from amino acid sequence features. However, there are two issues to consider before successful functional prediction can take place: identifying discriminatory features, and overcoming the challenge of a large imbalance in the training data. We show that by applying feature subset selection followed by undersampling of the majority class, significantly better support vector machine (SVM) classifiers are generated compared with standard machine learning approaches. As well as revealing that the features selected could have the potential to advance our understanding of the relationship between sequence and function, we also show that undersampling to produce fully balanced data significantly improves performance. The best discriminating ability is achieved using SVMs together with feature selection and full undersampling; this approach strongly outperforms other competitive learning algorithms. We conclude that this combined approach can generate powerful machine learning classifiers for predicting protein function directly from sequence."}
{"_id": "6f99ca602380901753db8f1bb0d4e76287677a9b", "title": "A Category-Based Model for ABAC", "text": "In Attribute-Based Access Control (ABAC) systems, access to resources is controlled by evaluating rules against the attributes of the user and the object involved in the access request, as well as the values of the relevant attributes from the environment. This is a powerful concept: ABAC is able to enforce DAC and RBAC policies, as well as more general, dynamic access control policies, where the decision to grant or deny an access request is based on the system's state. However, in its current definition, ABAC does not lend itself well to some operations, such as review queries, and it is in general more costly to specify and maintain than simpler systems such as RBAC. To address these issues, in this paper we propose a formal model of ABAC based on the notion of a category that underlies the general category-based metamodel of access control (CBAC). Our proposed approach adds structure to ABAC, so that policies are easier to design and understand, review queries become easy to evaluate, and simple systems such as RBAC can be implemented as instances of ABAC without additional costs."}
{"_id": "7eeb362f11bfc1c89996e68e3a7c5678e271f95b", "title": "Attacking Fieldbus Communications in ICS: Applications to the SWaT Testbed", "text": ""}
{"_id": "34b44a9e55184b48c94a15f29f052941b342e8bf", "title": "A view of the parallel computing landscape", "text": "Writing programs that scale with increasing numbers of cores should be as easy as writing programs for sequential computers."}
{"_id": "4f7f4e86b4cb2eb14d38f31638884664108b4043", "title": "SAP HANA - From Relational OLAP Database to Big Data Infrastructure", "text": "SAP HANA started as one of the best-performing database engines for OLAP workloads strictly pursuing a main-memory centric architecture and exploiting hardware developments like large number of cores and main memories in the TByte range. Within this paper, we outline the steps from a traditional relational database engine to a Big Data infrastructure comprising different methods to handle data of different volume, coming in with different velocity, and showing a fairly large degree of variety. In order to make the presentation of this transformation process more tangible, we discuss two major technical topics\u2013HANA native integration points as well as extension points for collaboration with Hadoop-based data management infrastructures. The overall of goal of this paper is to (a) review current application patterns and resulting technical challenges as well as to (b) paint the big picture for upcoming architectural designs with SAP HANA database as the core of a SAP Big Data infrastructure."}
{"_id": "45c3e37fc9ac1d17a4f752097b317946907ef83b", "title": "Predictive Control of a Three-Level Boost Converter and an NPC Inverter for High-Power PMSG-Based Medium Voltage Wind Energy Conversion Systems", "text": "In this paper, a new medium voltage power converter topology using a diode rectifier, three-level boost (TLB) converter, and neutral-point-clamped (NPC) inverter is proposed for a high-power permanent magnet synchronous generator-based wind energy conversion system. The generator-side TLB converter performs the maximum power point tracking and balancing of dc-link capacitor voltages, while the grid-side NPC inverter regulates the net dc-bus voltage and reactive power to the grid. A significant improvement in the grid power quality is accomplished as the NPC inverter no longer controls the dc-link neutral point voltage. A model predictive strategy is proposed to control the complete system where the discrete-time models of the proposed power electronic converters are used to predict the future behavior of control variables. These predictions are evaluated using two independent cost functions, and the switching states which minimize these cost functions are selected and applied to the generator- and grid-side converters directly. In order to comply with the high-power application, the switching frequencies of the TLB converter and NPC inverter are minimized and maintained below 1.5 and 1 kHz, respectively. The proposed topology and control strategy are verified through MATLAB simulations on a 3-MW/3000-V/577-A system and dSPACE DS1103-based experiments on 3.6-kW/208-V/10-A prototype."}
{"_id": "3ba66b7716d2f93ae39b2bb79427038e449f5a7c", "title": "Maintaining Views Incrementally", "text": "We present incremental evaluation algorithms to compute changes to materialized views in relational and deductive database systems, in response to changes (insertions, deletions, and updates) to the relations. The view definitions can be in SQL or Datalog, and may use UNION, negation, aggregation (e.g. SUM, MIN), linear recursion, and general recursion.\nWe first present a counting algorithm that tracks the number of alternative derivations (counts) for each derived tuple in a view. The algorithm works with both set and duplicate semantics. We present the algorithm for nonrecursive views (with negation and aggregation), and show that the count for a tuple can be computed at little or no cost above the cost of deriving the tuple. The algorithm is optimal in that it computes exactly those view tuples that are inserted or deleted. Note that we store only the number of derivations, not the derivations themselves.\nWe then present the Delete and Rederive algorithm, DRed, for incremental maintenance of recursive views (negation and aggregation are permitted). The algorithm works by first deleting a superset of the tuples that need to be deleted, and then rederiving some of them. The algorithm can also be used when the view definition is itself altered."}
{"_id": "00bfd8236a233be25687cddaf8b9d91aa15c3298", "title": "Modeling Relation Paths for Representation Learning of Knowledge Bases", "text": "Representation learning of knowledge bases aims to embed both entities and relations into a low-dimensional space. Most existing methods only consider direct relations in representation learning. We argue that multiple-step relation paths also contain rich inference patterns between entities, and propose a path-based representation learning model. This model considers relation paths as translations between entities for representation learning, and addresses two key challenges: (1) Since not all relation paths are reliable, we design a path-constraint resource allocation algorithm to measure the reliability of relation paths. (2) We represent relation paths via semantic composition of relation embeddings. Experimental results on real-world datasets show that, as compared with baselines, our model achieves significant and consistent improvements on knowledge base completion and relation extraction from text. The source code of this paper can be obtained from https://github.com/mrlyk423/ relation_extraction."}
{"_id": "0dddf37145689e5f2899f8081d9971882e6ff1e9", "title": "Transition-based Knowledge Graph Embedding with Relational Mapping Properties", "text": "Many knowledge repositories nowadays contain billions of triplets, i.e. (head-entity, relationship, tail-entity), as relation instances. These triplets form a directed graph with entities as nodes and relationships as edges. However, this kind of symbolic and discrete storage structure makes it difficult for us to exploit the knowledge to enhance other intelligenceacquired applications (e.g. the QuestionAnswering System), as many AI-related algorithms prefer conducting computation on continuous data. Therefore, a series of emerging approaches have been proposed to facilitate knowledge computing via encoding the knowledge graph into a low-dimensional embedding space. TransE is the latest and most promising approach among them, and can achieve a higher performance with fewer parameters by modeling the relationship as a transitional vector from the head entity to the tail entity. Unfortunately, it is not flexible enough to tackle well with the various mapping properties of triplets, even though its authors spot the harm on performance. In this paper, we thus propose a superior model called TransM to leverage the structure of the knowledge graph via pre-calculating the distinct weight for each training triplet according to its relational mapping property. In this way, the optimal function deals with each triplet depending on its own weight. We carry out extensive experiments to compare TransM with the state-of-the-art method TransE and other prior arts. The performance of each approach is evaluated within two different application scenarios on several benchmark datasets. Results show that the model we proposed significantly outperforms the former ones with lower parameter complexity as TransE."}
{"_id": "1a8e864093212caf1ec98c207c55cc21d4dc5775", "title": "Semantic Data Integration for Knowledge Graph Construction at Query Time", "text": "The evolution of the Web of documents into a Web of services and data has resulted in an increased availability of data from almost any domain. For example, general domain knowledge bases such as DBpedia or Wikidata, or domain specific Web sources like the Oxford Art archive, allow for accessing knowledge about a wide variety of entities including people, organizations, or art paintings. However, these data sources publish data in different ways, and they may be equipped with different search capabilities, e.g., SPARQL endpoints or REST services, thus requiring data integration techniques that provide a unified view of the published data. We devise a semantic data integration approach named FuhSen that exploits keyword and structured search capabilities of Web data sources and generates on-demand knowledge graphs merging data collected from available Web sources. Resulting knowledge graphs model semantics or meaning of merged data in terms of entities that satisfy keyword queries, and relationships among those entities. FuhSen relies on both RDF to semantically describe the collected entities, and on semantic similarity measures to decide on relatedness among entities that should be merged. We empirically evaluate the results of FuhSen data integration techniques on data from the DBpedia knowledge base. The experimental results suggest that FuhSen data integration techniques accurately integrate similar entities semantically into knowledge graphs."}
{"_id": "512a783b0daf2cedcdb1586c12b8a9f9fe571d40", "title": "TransG : A Generative Model for Knowledge Graph Embedding", "text": "Recently, knowledge graph embedding, which projects symbolic entities and relations into continuous vector space, has become a new, hot topic in artificial intelligence. This paper proposes a novel generative model (TransG) to address the issue of multiple relation semantics that a relation may have multiple meanings revealed by the entity pairs associated with the corresponding triples. The new model can discover latent semantics for a relation and leverage a mixture of relationspecific component vectors to embed a fact triple. To the best of our knowledge, this is the first generative model for knowledge graph embedding, and at the first time, the issue of multiple relation semantics is formally discussed. Extensive experiments show that the proposed model achieves substantial improvements against the state-of-the-art baselines."}
{"_id": "96acb1c882ad655c6b8459c2cd331803801446ca", "title": "Representation Learning of Knowledge Graphs with Entity Descriptions", "text": "Representation learning (RL) of knowledge graphs aims to project both entities and relations into a continuous lowdimensional space. Most methods concentrate on learning representations with knowledge triples indicating relations between entities. In fact, in most knowledge graphs there are usually concise descriptions for entities, which cannot be well utilized by existing methods. In this paper, we propose a novel RL method for knowledge graphs taking advantages of entity descriptions. More specifically, we explore two encoders, including continuous bag-of-words and deep convolutional neural models to encode semantics of entity descriptions. We further learn knowledge representations with both triples and descriptions. We evaluate our method on two tasks, including knowledge graph completion and entity classification. Experimental results on real-world datasets show that, our method outperforms other baselines on the two tasks, especially under the zero-shot setting, which indicates that our method is capable of building representations for novel entities according to their descriptions. The source code of this paper can be obtained from https://github.com/xrb92/DKRL."}
{"_id": "5136bf3188361f67fa77910103dfa273ffbfc48d", "title": "Smart Societies, Infrastructure, Technologies and Applications", "text": "We describe a new approach to building high performance nonstop infrastructure for scalable AI and cloud computing. With AI as the technological driving force behind future smart cities and smart societies, we will need powerful new nonstop AI infrastructures at both the cloud level and at the edge."}
{"_id": "78af2aab79955a3fb4fa3504f8c89d79dd57c7d0", "title": "Warm-white light-emitting diode with yellowish orange SiALON ceramic phosphor.", "text": "A warm-white light-emitting diode (LED) without blending of different kinds of phosphors is demonstrated. An approach that consists of a blue LED chip and a wavelength-conversion phosphor is carried out. The phosphor is a newly developed yellowish orange CaEuSiAlON ceramic phosphor with high efficiency. The CIE1931 chromaticity coordinates (x, y) are (0.458, 0.414), the color temperature is 2750 K, and the luminous efficacy of this LED is 25.9 lm/W at room temperature and with a forward-bias current of 20 mA. The chromaticity of the assembled LED is more thermally stable than that of a LED with a conventional oxide phosphor (YAG:Ce) because of the better thermal stability of the oxynitride phosphor."}
{"_id": "304e650c28301523c05ab046736069df751d144e", "title": "A Parallel Algorithm for Enumerating All Maximal Cliques in Complex Network", "text": "Efficient enumeration of all maximal cliques in a given graph has many applications in graph theory, data mining and bio informatics. However, the exponentially increasing computation time of this problem confines the scale of the graph. Meanwhile, recent researches show that many networks in our world are complex networks involving massive data. To solve the maximal clique problem in the real-world scenarios, this paper presents a parallel algorithm Peamc (parallel enumeration of all maximal cliques) which exploits several new and effective techniques to enumerate all maximal cliques in a complex network. Furthermore, we provide a performance study on a true-life call graph with up to 2,423,807 vertices and 5,317,183 edges. The experimental results show that Peamc can find all the maximal cliques in a complex network with high efficiency and scalability"}
{"_id": "b5d77cd01a8cbbb6290f00bca64254266833d078", "title": "Evaluating mobile apps for breathing training: The effectiveness of visualization", "text": "Deep and slow breathing exercises can be an effective adjunct in the treatment of stress, anxiety, post-traumatic stress disorder, chronic pain and depression. Breathing techniques are traditionally learned in courses with trainers and/or with materials such as audio CDs for home practice. Recently, mobile apps have been proposed as novel breathing training tools, but to the best of our knowledge no research has focused on their evaluation so far. In this paper, we study three different designs for breathing training apps. The first employs audio instructions as in traditional training based on audio CDs, while the other two include visualizations of the breathing process, representative of those employed in current breathing training apps. We carry out a thorough analysis, focusing on users\u2019 physiological parameters as well as subjective perception. One visualization produces better results both objectively (measured deepness of breath) and subjectively (users\u2019 preferences and perceived effectiveness) than the more traditional audio-only design. This indicates that a visualization can contribute to the effectiveness of breathing training apps. We discuss which features could have allowed one visualization (but not the other) to obtain better results than traditional audio-only instructions."}
{"_id": "d530f2c629c4031597de0b3931ed78472d486135", "title": "Software Architecture Quality Measurement Stability and", "text": "Over the past years software architecture has become an important sub-field of software engineering. There has been substantial advancement in developing new technical approaches to start handling architectural design as an engineering discipline. Measurement is an essential part of any engineering discipline. Quantifying the quality attributes of the software architecture will reveal good insights about the architecture. It will also help architects and practioners to choose the best fit of alternative architectures that meets their needs. This work paves the way for researchers to start investigating ways to measure software architecture quality attributes. Measurement of these qualities is essential for this sub-field of software engineering. This work explores Stability and Understandability of software architecture, several metrics that affect them, and literature review of these qualities. Keywords\u2014Software Engineering; Software Architecture; Quality Attributes; Stability; Understandability"}
{"_id": "d6e6d07f47245974ebc9017c5295a6574286d8f1", "title": "Detection Based on Structural Properties", "text": "Phishing attacks pose a serious threat to end-users and commercial institutions alike. Majority of the present day phishing attacks employ e-mail as their primary carrier, in order to allure unsuspecting victims to visit the masqueraded website. While the recent defense mechanisms focus on detection by validating the authenticity of the website, very few approaches have been proposed which concentrate on detecting e-mail based phishing attacks based on the structural properties inherently present in the phishing e-mail. Also, phishing attacks growing in ingenuity as well as sophistication render most of existing browser based solutions weak. In this paper, we propose a novel technique to discriminate phishing e-mails from the legitimate e-mails using the distinct structural features present in them. The derived features, together with oneclass Support Vector Machine (SVM), can be used to efficiently classify phishing e-mails before it reaches the users inbox, essentially reducing the human exposure. Our prototype implementation sits between a user's mail transfer agent (MTA) and mail user agent (MUA) and processes each arriving e-mail even before it reaches the inbox. Using live email data, we demonstrate that our approach is able to detect a wide range of phishing e-mails with minimal performance overhead"}
{"_id": "7e63fbf5b2dffbe549ddc8cf1000dc9178abbfc6", "title": "Heart rate variability in elite triathletes, is variation in variability the key to effective training? A case comparison", "text": "Measures of an athlete\u2019s heart rate variability (HRV) have shown potential to be of use in the prescription of training. However, little data exists on elite athletes who are regularly exposed to high training loads. This case study monitored daily HRV in two elite triathletes (one male: 22\u00a0year, $$ \\dot{V}$$ O2max 72.5\u00a0ml\u00a0kg\u00a0min\u22121; one female: 20\u00a0year, $$ \\dot{V}$$ O2max 68.2\u00a0ml\u00a0kg\u00a0min\u22121) training 23\u00a0\u00b1\u00a02\u00a0h per week, over a 77-day period. During this period, one athlete performed poorly in a key triathlon event, was diagnosed as non-functionally over-reached (NFOR) and subsequently reactivated the dormant virus herpes zoster (shingles). The 7-day rolling average of the log-transformed square root of the mean sum of the squared differences between R\u2013R intervals (Ln rMSSD), declined towards the day of triathlon event (slope\u00a0=\u00a0\u22120.17\u00a0ms/week; r 2\u00a0=\u00a0\u22120.88) in the NFOR athlete, remaining stable in the control (slope\u00a0=\u00a00.01\u00a0ms/week; r 2\u00a0=\u00a00.12). Furthermore, in the NFOR athlete, coefficient of variation of HRV (CV of Ln rMSSD 7-day rolling average) revealed large linear reductions towards NFOR (i.e., linear regression of HRV variables versus day number towards NFOR: \u22120.65%/week and r 2\u00a0=\u00a0\u22120.48), while these variables remained stable for the control athlete (slope\u00a0=\u00a00.04%/week). These data suggest that trends in both absolute HRV values and day-to-day variations may be useful measurements indicative of the progression towards mal-adaptation or non-functional over-reaching."}
{"_id": "454ab1f1519152fbaa03cb24381fac8e148d6761", "title": "Evolutionary Algorithm for Extractive Text Summarization", "text": "Text summarization is the process of automatically creating a compressed version of a given document preserving its information content. There are two types of summarization: extractive and abstractive. Extractive summarization methods simplify the problem of summarization into the problem of selecting a representative subset of the sentences in the original documents. Abstractive summarization may compose novel sentences, unseen in the original sources. In our study we focus on sentence based extractive document summarization. The extractive summarization systems are typically based on techniques for sentence extraction and aim to cover the set of sentences that are most important for the overall understanding of a given document. In this paper, we propose unsupervised document summarization method that creates the summary by clustering and extracting sentences from the original document. For this purpose new criterion functions for sentence clustering have been proposed. Similarity measures play an increasingly important role in document clustering. Here we\u2019ve also developed a discrete differential evolution algorithm to optimize the criterion functions. The experimental results show that our suggested approach can improve the performance compared to sate-of-the-art summarization approaches."}
{"_id": "da57fd49a22065021533dbb80cda23e3e57b9691", "title": "TEM Horn Antenna Loaded With Absorbing Material for GPR Applications", "text": "A novel exponentially tapered TEM horn antenna is presented. It is fed directly by a coax through a compact transition section and possesses a wide bandwidth, good radiation characteristic, and easy integration with microwave circuits. Two specially tapered metal flares have been designed to make a wideband characteristic and reduce the reflection. To overcome aperture reflections at low frequencies and unbalanced aperture field at higher frequencies, an arc surface is added at the end of the flare plates, and an absorbing material is loaded on the outer surface of the flare section. Measured results show that the presented antenna has a wide bandwidth from 0.83 to more than 12.8 GHz under a return loss <;-10 dB, small input reflection, a moderate gain from 0 dBi at 0.83 GHz to 10 dBi at 12 GHz, and a good radiation characteristics in the time domain, which can be suitable for ground penetrating radar (GPR) applications."}
{"_id": "565a3ec8b9fbf62d3341ba0047e14afc8f40cfdb", "title": "Eye Contact Detection via Deep Neural Networks", "text": "With the presence of ubiquitous devices in our daily lives, effectively capturing and managing user attention becomes a critical device requirement. While gaze-tracking is typically employed to determine the user\u2019s focus of attention, gaze-lock detection to sense eyecontact with a device is proposed in [16]. This work proposes eye contact detection using deep neural networks, and makes the following contributions: (1) With a convolutional neural network (CNN) architecture, we achieve superior eye-contact detection performance as compared to [16] with minimal data pre-processing ; our algorithm is furthermore validated on multiple datasets, (2) Gaze-lock detection is improved by combining head pose and eye-gaze information consistent with social attention literature, and (3) We demonstrate gaze-locking on an Android mobile platform via CNN model compression."}
{"_id": "893167546c870eac602d81874c6473fd3cd8bd21", "title": "Efficient parallel skyline processing using hyperplane projections", "text": "The skyline of a set of multi-dimensional points (tuples) consists of those points for which no clearly better point exists in the given set, using component-wise comparison on domains of interest. Skyline queries, i.e., queries that involve computation of a skyline, can be computationally expensive, so it is natural to consider parallelized approaches which make good use of multiple processors. We approach this problem by using hyperplane projections to obtain useful partitions of the data set for parallel processing. These partitions not only ensure small local skyline sets, but enable efficient merging of results as well. Our experiments show that our method consistently outperforms similar approaches for parallel skyline computation, regardless of data distribution, and provides insights on the impacts of different optimization strategies."}
{"_id": "fe0c2dee50267f8502018a35f3a54aa7f509e959", "title": "Fear of being laughed at and social anxiety : A preliminary psychometric study", "text": "The present study examines the relationship between questionnaire measures of social phobia and gelotophobia. A sample of 211 Colombian adults filled in Spanish versions of the Social Anxiety and Distress scale (SAD; Watson & Friend, 1969), the Fear of Negative Evaluation scale (FNE; Watson & Friend, 1969) and the GELOPH<15> (Ruch & Proyer, 2008). Results confirmed that both Social Anxiety and Distress and Fear of Negative Evaluation scale overlapped with the fear of being laughed at without being identical with it. The SAD and FNE correlated highly with the GELOPH<15> but not all high scorers in these scales expressed a fear of being laughed at. Furthermore, an item factor analysis yielded three factors that were mostly loaded by items of the respective scales. This three-factor structure was verified using confirmatory factor analysis. A measure model where one general factor of social anxiety was specified, or another one where two different factors were defined (gelotophobia vs. social anxiety assessed by SAD and FNE) showed a very poor fit to the data. It is concluded that the fear of being laughed cannot fully be accounted for by these measures of social phobia."}
{"_id": "015ce3f823dac9e78ab3ff1f63e67e5a00145ac6", "title": "Caraoke: An E-Toll Transponder Network for Smart Cities", "text": "Electronic toll collection transponders, e.g., E-ZPass, are a widely-used wireless technology. About 70% to 89% of the cars in US have these devices, and some states plan to make them mandatory. As wireless devices however, they lack a basic function: a MAC protocol that prevents collisions. Hence, today, they can be queried only with directional antennas in isolated spots. However, if one could interact with e-toll transponders anywhere in the city despite collisions, it would enable many smart applications. For example, the city can query the transponders to estimate the vehicle flow at every intersection. It can also localize the cars using their wireless signals, and detect those that run a red-light. The same infrastructure can also deliver smart street-parking, where a user parks anywhere on the street, the city localizes his car, and automatically charges his account. This paper presents Caraoke, a networked system for delivering smart services using e-toll transponders. Our design operates with existing unmodified transponders, allowing for applications that communicate with, localize, and count transponders, despite wireless collisions. To do so, Caraoke exploits the structure of the transponders' signal and its properties in the frequency domain. We built Caraoke reader into a small PCB that harvests solar energy and can be easily deployed on street lamps. We also evaluated Caraoke on four streets on our campus and demonstrated its capabilities."}
{"_id": "e42bfb5f90857ce5e28a5c19f47c70020a0e6e4d", "title": "Mathematical Modeling of Hexacopter", "text": "The purpose of this paper is to present the basic mathematical modeling of microcopters, which could be used to develop proper methods for stabilization and trajectory control. The microcopter taken into account consists of six rotors, with three pairs of counter-rotating fixedpitch blades. The microcopter is controlled by adjusting the angular velocities of the rotors which are spun by electric motors. It is assumed as a rigid body, so the differential equations of the microcopter dynamics can be derived from both the Newton-Euler and Euler-Lagrange equations. Euler-angle parametrization of three-dimensional rotations contains singular points in the coordinate space that can cause failure of both dynamical model and control. In order to avoid singularities, the rotations of the microcopter are parametrized in terms of quaternions. This choice has been made taking into consideration the linearity of quaternion formulation, their stability and efficiency."}
{"_id": "3cf41744e7e020e728a70aeba5cca067843d40c9", "title": "Sources of variance in current sense of humor inventories : How much substance , how much method variance ?", "text": "The present study investigates the relationship of self-report inventories of \"sense of humor\" and behavioral measures of humor as well as their location in the Eysenckian PEN system. 110 male and female adults in the ages from 17 to 83 years answered the following inventories: SHRQ (Martin and Lefcourt 1984), SHQZ (Ziv 1981), SHQ-3 revised (Svebak 1993), CHS (Martin and Lefcourt 1983), MSHS (Thorson and Powell 1993), HIS (Bell, McGhee, and Duffey 1986), 3 WD-K (Ruch 1983), CPPT (K\u00f6hler and Ruch 1993), TDS (Murgatroyd, Rushton, Apter, and Ray 1978), STCI-T (Ruch, Freiss, and K\u00f6hler 1993), and EPQ-R (Eysenck, Eysenck, and Barrett 1985). Reliability of the humor scales is examined and convergent and discriminant validity of homologous scales of humor appreciation and humor creation is determined. Behavioral measures and self-report instruments yield only meager correlations. While humor appreciation and humor creation form distinct traits in the behavioral measures, they can not be validly discriminated in the self-reports. Factor analysis of self-report inventories yields that the sense of humor is composed of the two orthogonal dimensions of cheerfulness and seriousness. Extraversion is predictive of cheerfulness, low seriousness, and quantity of humor production. Psychoticism is associated with low seriousness, wit and quality of humor production. Finally, emotional stability correlates with cheerfulness. All in all, the general state of the art in the assessment of the sense of humor and its components appears to be far from being satisfactory. Sources of variance 2 -"}
{"_id": "acef60b233cc05fa2ea4700e1eb22273bebab77a", "title": "An interactive tool for manual, semi-automatic and automatic video annotation", "text": "The annotation of image and video data of large datasets is a fundamental task in multimedia information retrieval and computer vision applications. The aim of annotation tools is to relieve the user from the burden of the manual annotation as much as possible. To achieve this ideal goal, many different functionalities are required in order to make the annotations process as automatic as possible. Motivated by the limitations of existing tools, we have developed the iVAT: an interactive Video Annotation Tool. It supports manual, semi-automatic and automatic annotations through the interaction of the user with various detection algorithms. To the best of our knowledge, it is the first tool that integrates several computer vision algorithms working in an interactive and incremental learning framework. This makes the tool flexible and suitable to be used in different application domains. A quantitative and qualitative evaluation of the proposed tool on a challenging case study domain is presented and discussed. Results demonstrate that the use of the semi-automatic, as well as the automatic, modality drastically reduces the human effort while preserving the quality of the annotations. 2014 Elsevier Inc. All rights reserved."}
{"_id": "8818ceb7bf41b07037a7396058b69a6da6dc06a6", "title": "Marginalized Denoising Dictionary Learning With Locality Constraint", "text": "Learning good representation for images is always a hot topic in machine learning and pattern recognition fields. Among the numerous algorithms, dictionary learning is a well-known strategy for effective feature extraction. Recently, more discriminative sub-dictionaries have been built by Fisher discriminative dictionary learning with specific class labels. Different types of constraints, such as sparsity, low rankness, and locality, are also exploited to make use of global and local information. On the other hand, as the basic building block of deep structure, the auto-encoder has demonstrated its promising performance in extracting new feature representation. To this end, we develop a unified feature learning framework by incorporating the marginalized denoising auto-encoder into a locality-constrained dictionary learning scheme, named marginalized denoising dictionary learning. Overall, we deploy low-rank constraint on each sub-dictionary and locality constraint instead of sparsity on coefficients, in order to learn a more concise and pure feature spaces meanwhile inheriting the discrimination from sub-dictionary learning. Finally, we evaluate our algorithm on several face and object data sets. Experimental results have demonstrated the effectiveness and efficiency of our proposed algorithm by comparing with several state-of-the-art methods."}
{"_id": "b87d5f9b8013386f4ff5ad1a130efe6e924dca5c", "title": "Sentiment classification: The contribution of ensemble learning", "text": "Article history: Received 27 August 2012 Received in revised form 1 August 2013 Accepted 5 August 2013 Available online 15 August 2013"}
{"_id": "0ba15f8dccb35c0d0311d97a3620705642c48031", "title": "All your network are belong to us: a transport framework for mobile network selection", "text": "Mobile devices come with an assortment of networks: WiFi in two different frequency bands, each of which can run in infrastructure-mode, WiFi-Direct mode, or ad hoc mode; cellular radios, which can run in LTE/4G, 3G, or EDGE modes; and Bluetooth. But how should an app choose which network to use? There is no systematic solution to this problem today: in current practice the choice is almost always left to the user, who usually has no idea what's best. In fact, what's best for a user depends on the app's performance objectives (throughput, delay, object load time, etc.) and the user's constraints on cost and battery life. Besides, what's best for a single user or app must be balanced with what's best for the wireless network as a whole (individual optimality vs. social utility). This paper introduces Delphi, a transport-layer module to resolve these issues. Delphi has three noteworthy components: \"local learning\", in which a mobile device estimates or infers useful properties of different networks efficiently, \"property sharing\", in which mobile devices share what they learn with other nearby devices, and \"selection\", in which each node selects a network using what it has observed locally and/or from its neighbors."}
{"_id": "bc18ee4a0f26320a86852b057077e8eca78b0c13", "title": "Identifying Ghanaian pre-service teachers \u2019 readiness for computer use : A Technology Acceptance Model approach", "text": "This study extends the technology acceptance model to identify factors that influence technology acceptance among pre-service teachers in Ghana. Data from 380 usable questionnaires were tested against the research model. Utilising the extended technology acceptance model (TAM) as a research framework, the study found that: pre-service teachers\u2019 pedagogical beliefs, perceived ease of use, perceived usefulness of computer technology and attitude towards computer use to be significant determinants of actual use of computer technology. Results obtained employing multiple stepwise regression analysis revealed that: (1) pre-service teachers\u2019 pedagogical beliefs significantly influenced both perceived ease of use and perceived usefulness, (2) both perceived ease of use and perceived usefulness influence attitude towards computer use and attitude towards computer use significantly influences pre-service teachers\u2019 actual use of computers. However, statistically, perceived ease of use did not significantly influence perceived usefulness. The findings contribute to the literature by validating the TAM in the Ghanaian context and provide several prominent implications for the research and practice of technology integration development."}
{"_id": "fee633956a45926319fa5b93155be4f2dc51d027", "title": "Community Detection based on Distance Dynamics", "text": "In this paper, we introduce a new community detection algorithm, called Attractor, which automatically spots communities in a network by examining the changes of \"distances\" among nodes (i.e. distance dynamics). The fundamental idea is to envision the target network as an adaptive dynamical system, where each node interacts with its neighbors. The interaction will change the distances among nodes, while the distances will affect the interactions. Such interplay eventually leads to a steady distribution of distances, where the nodes sharing the same community move together and the nodes in different communities keep far away from each other. Building upon the distance dynamics, Attractor has several remarkable advantages: (a) It provides an intuitive way to analyze the community structure of a network, and more importantly, faithfully captures the natural communities (with high quality). (b) Attractor allows detecting communities on large-scale networks due to its low time complexity (O(|E|)). (c) Attractor is capable of discovering communities of arbitrary size, and thus small-size communities or anomalies, usually existing in real-world networks, can be well pinpointed. Extensive experiments show that our algorithm allows the effective and efficient community detection and has good performance compared to state-of-the-art algorithms."}
{"_id": "66cb25187cd4342ae258b21440ec572099f71e8d", "title": "Discovering affective regions in deep convolutional neural networks for visual sentiment prediction", "text": "In this paper, we address the problem of automatically recognizing emotions in still images. While most of current work focus on improving whole-image representations using CNNs, we argue that discovering affective regions and supplementing local features will boost the performance, which is inspired by the observation that both global distributions and salient objects carry massive sentiments. We propose an algorithm to discover affective regions via deep framework, in which we use an off-the-shelf tool to generate N object proposals from a query image and rank these proposals with their objectness scores. Then, each proposal's sentiment score is computed using a pre-trained and fine-tuned CNN model. We combine both scores and select top K regions from the N candidates. These K regions are regarded as the most affective ones of the input image. Finally, we extract deep features from the whole-image and the selected regions, respectively, and sentiment label is predicted. The experiments show that our method is able to detect the affective local regions and achieve state-of-the-art performances on several popular datasets."}
{"_id": "9f4f9ee12b952133df7c66fd3a813e97e29e9d72", "title": "Localization & Segmentation of Indian Car Number Plate system : A Projection based Multistage Approach", "text": "Enormous incorporation of information technologies into all aspects of recent life caused demand for processing vehicles as conceptual resources in information systems. This can be achieved by a human agent, or by special intelligent equipment which is be able to recognize vehicles by their number plates in a real environment and reflect it into conceptual resources. Because of this, various recognition techniques have been developed and number plate recognition systems are today used in various traffic and security applications, such as parking, access and border control, or tracking of stolen cars. This work deals with problematic from field of artificial intelligence, machine vision and neural networks in construction of an automatic car number plate Recognition (ACNPR). This techniques deals with enhanced shadow removal algorithm, along with horizontal and vertical projection method for number plate localization & segmentation..Work comparatively deals with methods achieving invariance of systems towards image illumination, translations and various light conditions during the capture."}
{"_id": "7a11cd3434c7f1eabbf76f03182255f62fd27fa3", "title": "Application of CFD in building performance simulation for the outdoor environment : an overview", "text": "This paper provides an overview of the application of CFD in building performance simulation for the outdoor environment, focused on four topics: (1) pedestrian wind environment around buildings, (2) wind-driven rain on building facades, (3) convective heat transfer coefficients at exterior building surfaces, and (4) air pollutant dispersion around buildings. For each topic, its background, the need for CFD, an overview of some past CFD studies, a discussion about accuracy and some perspectives for practical application are provided. The paper indicates that for all four topics, CFD offers considerable advantages compared to wind tunnel modelling or (semi-)empirical formulae because it can provide detailed whole-flow field data under fully controlled conditions and without similarity constraints. The main limitations are the deficiencies of steady RANS modelling, the increased complexity and computational expense of LES and the requirement of systematic and time-consuming CFD solution verification and validation studies."}
{"_id": "b17dd35f5e884823fd292a6d72d8124d0758173a", "title": "Numerical Study of Urban Canyon Microclimate Related to Geometrical Parameters", "text": "In this study a microclimate analysis on a particular urban configuration: the\u2014street canyon\u2014has been carried out. The analysis, conducted by performing numerical simulations using the finite volumes commercial code ANSYS-Fluent, shows the flow field in an urban environment, taking into account three different aspect ratios (H/W). This analysis can be helpful in the study on urban microclimate and on the heat exchanges with the buildings. Fluid-dynamic fields on vertical planes within the canyon, have been evaluated. The results show the importance of the geometrical configuration, in relation to the ratio between the height (H) of the buildings and the width (W) of the road. This is a very important subject from the point of view of \u201cSmart Cities\u201d, considering the urban canyon as a subsystem of a larger one (the city), which is affected by climate changes."}
{"_id": "bc39516aba1e74f7d8ed7391b4ff9e5a9ceeecf2", "title": "BEST PRACTICE GUIDELINE FOR THE CFD SIMULATION OF FLOWS IN THE URBAN ENVIRONMENT", "text": "Legal notice by the COST Office Neither the COST Office nor any person acting on its behalf is responsible for the use which might be made of the information contained in the present publication. The COST Office is not responsible for the external web sites referred to in the present publication. No permission to reproduce or utilize the contents of this book by any means is necessary, other than in the case of images, diagrams or other material from other copyright holders. In such cases permission of the copyright holders is required. This book may be cited as: Title of the book and Action Number"}
{"_id": "f5e717d62ee75465deb3c3495b2b867bdc17560e", "title": "Urban Physics : Effect of the micro-climate on comfort , health and energy demand", "text": "Y-NC-ND license. Abstract The global trend towards urbanisation explains the growing interest in the study of the modification of the urban climate due to the heat island effect and global warming, and its impact on energy use of buildings. Also urban comfort, health and durability, referring respectively to pedestrian wind/ thermal comfort, pollutant dispersion and wind-driven rain are of interest. Urban Physics is a wellestablished discipline, incorporating relevant branches of physics, environmental chemistry, aerodynamics, meteorology and statistics. Therefore, Urban Physics is well positioned to provide keycontributions to the current urban problems and challenges. The present paper addresses the role of Urban Physics in the study of wind comfort, thermal comfort, energy demand, pollutant dispersion and wind-driven rain. Furthermore, the three major research methods applied in Urban Physics, namely field experiments, wind tunnel experiments and numerical simulations are discussed. Case studies illustrate the current challenges and the relevant contributions of Urban Physics. & 2012. Higher Education Press Limited Company. Production and hosting by Elsevier B.V. Open access under CC BY-NC-ND license."}
{"_id": "7046b8930c262c1841a7fad461dfc37eeb466771", "title": "Verification , Validation , and Predictive Capability in Computational Engineering and Physics", "text": "Developers of computer codes, analysts who use the codes, and decision makers who rely on the results of the analyses face a critical question: How should confidence in modeling and simulation be critically assessed? Verification and validation (V&V) of computational simulations are the primary methods for building and quantifying this confidence. Briefly, verification is the assessment of the accuracy of the solution to a computational model. Validation is the assessment of the accuracy of a computational simulation by comparison with experimental data. In verification, the relationship of the simulation to the real world is not an issue. In validation, the relationship between computation and the real world, i.e., experimental data, is the issue."}
{"_id": "e6886cadc783c6ca26db0ea912154d4681366492", "title": "Practical path planning and path following for a non-holonomic mobile robot based on visual servoing", "text": "In this paper, a practical real-time path planning and robot navigation algorithm for a non-holonomic indoor mobile robot based on visual servoing is implemented. The proposed algorithm is divided into three parts; the first part uses Multi-Stencils Fast Marching (MSFM) as a path planning method. But the generated path results from fast marching methods when used directly, not guaranteed to be safe and smooth. Subsequently, the robot can touch corners, walls and obstacles. The proposed algorithm uses image processing methods to solve this problem. The second part estimates the position and orientation of the robot, from the visual information, to follow the desired path with avoiding obstacles. The third part proposes a decentralized PD-like Fuzzy Logic Controller (FLC) to keep up the robot on the desired path. Experimental results show that the developed design is valid to estimate shortest-path by avoiding obstacles and able to guide the robot to follow the path in real-time."}
{"_id": "2bdbea0ce990ecb4f9001d3afb261f246aa8595a", "title": "Analysis of Statistical Question Classification for Fact-Based Questions", "text": "Question classification systems play an important role in question answering systems and can be used in a wide range of other domains. The goal of question classification is to accurately assign labels to questions based on expected answer type. Most approaches in the past have relied on matching questions against hand-crafted rules. However, rules require laborious effort to create and often suffer from being too specific. Statistical question classification methods overcome these issues by employing machine learning techniques. We empirically show that a statistical approach is robust and achieves good performance on three diverse data sets with little or no hand tuning. Furthermore, we examine the role different syntactic and semantic features have on performance. We find that semantic features tend to increase performance more than purely syntactic features. Finally, we analyze common causes of misclassification error and provide insight into ways they may be overcome."}
{"_id": "b836a5485dd5a4e39e22ec5a30bf804180a97bfe", "title": "UTAssistant: A Web Platform Supporting Usability Testing in Italian Public Administrations", "text": "Even if the benefits of the usability testing are remarkable, it is scarcely adopted in the software development process. To foster its adoption, this paper presents a Web platform, UTAssistant, that supports people, also without skills in Human-Computer Interaction (HCI), in evaluating Web site usability."}
{"_id": "01d023ff2450a1d0ef42a8c00592d124bbeafe69", "title": "A Game Theoretic Framework for Incentives in P2P Systems", "text": "Peer-To-Peer (P2P) networks are self-organizing, distributed systems, with no centralized authority or infrastructure. Because of the voluntary participation, the availability of resources in a P2P system can be highly variable and unpredictable. In this paper, we use ideas from Game Theory to study the interaction of strategic and rational peers, and propose a differential service-based incentive scheme to improve the system\u2019s performance."}
{"_id": "07fe26f10ab2eb7ba7ad7df5096813c949dcabc9", "title": "Fast and Robust Joint Models for Biomedical Event Extraction", "text": "Extracting biomedical events from literature has attracted much recent attention. The bestperforming systems so far have been pipelines of simple subtask-specific local classifiers. A natural drawback of such approaches are cascading errors introduced in early stages of the pipeline. We present three joint models of increasing complexity designed to overcome this problem. The first model performs joint trigger and argument extraction, and lends itself to a simple, efficient and exact inference algorithm. The second model captures correlations between events, while the third model ensures consistency between arguments of the same event. Inference in these models is kept tractable through dual decomposition. The first two models outperform the previous best joint approaches and are very competitive with respect to the current state-of-theart. The third model yields the best results reported so far on the BioNLP 2009 shared task, the BioNLP 2011 Genia task and the BioNLP 2011 Infectious Diseases task."}
{"_id": "535c0310c00371846ea8cecdb0957ced42f2f1ba", "title": "How to improve knowledge transfer strategies and practices in education ? Answers from a systematic literature review", "text": "Building on the systematic review methodology, this paper aims to examine the knowledge transfer process in education and its main determinants in this specific context. Our findings suggest that linkage agents are central actors in the knowledge transfer process. Their intervention is critical to help adapt the knowledge produced by researchers and make it easier to adopt and use by practitioners. Moreover, the effectiveness of this process hinges on several factors that were broken down into three major categories: determinants related to transferredknowledge attributes, those related to the actors involved in the process, and determinants related to transfer mechanisms."}
{"_id": "a5d1a2566d3cd4b32c2a2294019e598c0a57b219", "title": "Laryngeal Tumor Detection and Classification in Endoscopic Video", "text": "The development of the narrow-band imaging (NBI) has been increasing the interest of medical specialists in the study of laryngeal microvascular network to establish diagnosis without biopsy and pathological examination. A possible solution to this challenging problem is presented in this paper, which proposes an automatic method based on anisotropic filtering and matched filter to extract the lesion area and segment blood vessels. Lesion classification is then performed based on a statistical analysis of the blood vessels' characteristics, such as thickness, tortuosity, and density. Here, the presented algorithm is applied to 50 NBI endoscopic images of laryngeal diseases and the segmentation and classification accuracies are investigated. The experimental results show the proposed algorithm provides reliable results, reaching an overall classification accuracy rating of 84.3%. This is a highly motivating preliminary result that proves the feasibility of the new method and supports the investment in further research and development to translate this study into clinical practice. Furthermore, to our best knowledge, this is the first time image processing is used to automatically classify laryngeal tumors in endoscopic videos based on tumor vascularization characteristics. Therefore, the introduced system represents an innovation in biomedical and health informatics."}
{"_id": "332f0395484d174a0fd6c6a72a12054627fed3f4", "title": "Generalized Simulated Annealing", "text": "We propose a new stochastic algorithm (generalized simulated annealing) for computationally finding the global minimum of a given (not necessarily convex) energy/cost function defined in a continuous D-dimensional space. This algorithm recovers, as particular cases, the so called classical (\u201cBoltzmann machine\u201d) and fast (\u201cCauchy machine\u201d) simulated annealings, and can be quicker than both. Key-words: Simulated annealing; Nonconvex optimization; Gradient descent; Generalized Statistical Mechanics."}
{"_id": "04e5b276da90c8181d6ad8397f763a181baae949", "title": "Apples-to-apples in cross-validation studies: pitfalls in classifier performance measurement", "text": "Cross-validation is a mainstay for measuring performance and progress in machine learning. There are subtle differences in how exactly to compute accuracy, F-measure and Area Under the ROC Curve (AUC) in cross-validation studies. However, these details are not discussed in the literature, and incompatible methods are used by various papers and software packages. This leads to inconsistency across the research literature. Anomalies in performance calculations for particular folds and situations go undiscovered when they are buried in aggregated results over many folds and datasets, without ever a person looking at the intermediate performance measurements. This research note clarifies and illustrates the differences, and it provides guidance for how best to measure classification performance under cross-validation. In particular, there are several divergent methods used for computing F-measure, which is often recommended as a performance measure under class imbalance, e.g., for text classification domains and in one-vs.-all reductions of datasets having many classes. We show by experiment that all but one of these computation methods leads to biased measurements, especially under high class imbalance. This paper is of particular interest to those designing machine learning software libraries and researchers focused on high class imbalance."}
{"_id": "ce8bd8f2fb1687929ee23ebee2394a51f06872d7", "title": "An Effective High Threating Alarm Mining Method for Cloud Security Management", "text": "Security equipment such as intrusion prevention system is an important supplementary for security management. They reduce the difficulty of network management by giving alarms corresponding to different attacks instead of raw traffic packet inspection. But there are many false alarms due to their running mechanism, which greatly reduces its usability. In this paper, we develop a hierarchical framework to mine high threating alarms from the massive alarm logs, and aim to provide fundamental and useful information for administrators to design efficient management policy. First, the alarms are divided into two parts based on their attributes, the first part mainly includes several kinds of famous attacks which are critical for security management, we proposed a similar alarm mining method based on Choquet integral to cluster and rank the frequently occurred attacks. The rest alarms constitute the second part, which are caused by the potential threats attacks, also include many false alarms. To reduce the effect of false alarms and rank the potential threats, we employ the frequent pattern mining algorithm to mine correlation rules and then filter false alarms. Following, we proposed a self-adapting threat degree calculation method to qualify the threat degree of these alarms after filtering. To verity the methods developed, an experimental platform is constructed in the campus network of Xi\u2019an Jiaotong University. Experimental results based on the data collected verify the efficiency of the developed methods. For the first kind of alarms, the similar alarms mining accuracy is higher than 97% and the alarms are ranked with different processing urgencies. For the rest alarms, the proposed methods have filtering accuracy above 80% and can rank the potential threats. Based on the ranking results, administrators can deal with the high threats with their limited time and energy, in turn, keep the network under control."}
{"_id": "59db42a9e913bc9c56bdcd7b1303e5c5328f50a6", "title": "Application of Data Mining Classification Algorithms for Breast Cancer Diagnosis", "text": "Breast cancer is one of the diseases that represent a large number of incidence and mortality in the world. Data mining classifications techniques will be effective tools for classifying data of cancer to facilitate decision-making.\n The objective of this paper is to compare the performance of different machine learning algorithms in the diagnosis of breast cancer, to define exactly if this type of cancer is a benign or malignant tumor.\n Six machine learning algorithms were evaluated in this research Bayes Network (BN), Support Vector Machine (SVM), k-nearest neighbors algorithm (Knn), Artificial Neural Network (ANN), Decision Tree (C4.5) and Logistic Regression. The simulation of the algorithms is done using the WEKA tool (The Waikato Environment for Knowledge Analysis) on the Wisconsin breast cancer dataset available in UCI machine learning repository."}
{"_id": "8efac913ff430ef698dd3fa5df4cbb7ded3cab50", "title": "An Unsupervised Clustering Tool for Unstructured Data", "text": "We present an unsupervised clustering tool, Principal Direction Divisive Partitioning, which is a scal-able and versatile top-down method applicable to any set of data that can be represented as numerical vectors. A description of the basic method, a summary of the main application areas where this has been used, and some recent results on the selection of signiicant words as well as the process of updating clusters as new data arrives is discussed."}
{"_id": "6e5809cc76d1909a67913e2f730961dba86084c0", "title": "Basic studies on wet adhesion system for wall climbing robots", "text": "This paper reports a vacuum-based wet adhesion system for wall climbing robots. In this adhesion system, a suction cup adheres on a wet surface. The problems addressed are an adherability on a rough surface, which is comes from the seal action of a liquid, and low friction between suction cup and adhered rough and smooth surfaces which is comes form lubricating action of a liquid. Generally, it is difficult that a vacuumed suction cup adheres on rough surface such concrete plate and hardly slidable. In this paper, the adhesion force and friction when a suction cup adheres on smooth glass plate and concrete plate are measured and compared wet condition with dry condition. The experiment result showed that a viscosity is important at the sealing performance of adhesion on rough surface. The silicon oil of a viscosity of 5000cSt allows a suction cup to adhere on concrete surface. In this condition it comes up to the adhesion force when a suction cup adheres on smooth glass with dry condition."}
{"_id": "a0219c4ecbf7ee4603aaec5811027dfa85b3d85d", "title": "Selection from alphabetic and numeric menu trees using a touch screen: breadth, depth, and width", "text": "Goal items were selected by a series of touch-menu choices among sequentially subdivided ranges of integers or alphabetically ordered words. The number of alternatives at each step, b, was varied, and, inversely, the size of the target area for the touch. Mean response time for each screen was well described by T = k+clogb, in agreement with the Hick-Hyman and Fitts' laws for decision and movement components in series. It is shown that this function favors breadth over depth in menus, whereas others might not. Speculations are offered as to when various functions could be expected."}
{"_id": "79883a68028f206062a73ac7f32271212e92ade8", "title": "FILA: Fine-grained indoor localization", "text": "Indoor positioning systems have received increasing attention for supporting location-based services in indoor environments. WiFi-based indoor localization has been attractive due to its open access and low cost properties. However, the distance estimation based on received signal strength indicator (RSSI) is easily affected by the temporal and spatial variance due to the multipath effect, which contributes to most of the estimation errors in current systems. How to eliminate such effect so as to enhance the indoor localization performance is a big challenge. In this work, we analyze this effect across the physical layer and account for the undesirable RSSI readings being reported. We explore the frequency diversity of the subcarriers in OFDM systems and propose a novel approach called FILA, which leverages the channel state information (CSI) to alleviate multipath effect at the receiver. We implement the FILA system on commercial 802.11 NICs, and then evaluate its performance in different typical indoor scenarios. The experimental results show that the accuracy and latency of distance calculation can be significantly enhanced by using CSI. Moreover, FILA can significantly improve the localization accuracy compared with the corresponding RSSI approach."}
{"_id": "be4d16e6875d109288ce55d2e029122a6c5ad774", "title": "Detection of covert attacks on cyber-physical systems by extending the system dynamics with an auxiliary system", "text": "Securing cyber-physical systems is vital for our modern society since they are widely used in critical infrastructure like power grid control and water distribution. One of the most sophisticated attacks on these systems is the covert attack, where an attacker changes the system inputs and disguises his influence on the system outputs by changing them accordingly. In this paper an approach to detect such an attack by extending the original system with a switched auxiliary system is proposed. Furthermore, a detection system using a switched Luenberger observer is presented. The effectiveness of the proposed method is illustrated by a simulation example."}
{"_id": "905b3a30d54520055672b0f363c683b4b258e636", "title": "Matchmaker: constructing constrained texture maps", "text": "Texture mapping enhances the visual realism of 3D models by adding fine details. To achieve the best results, it is often necessary to force a correspondence between some of the details of the texture and the features of the model.The most common method for mapping texture onto 3D meshes is to use a planar parameterization of the mesh. This, however, does not reflect any special correspondence between the mesh geometry and the texture. The Matchmaker algorithm presented here forces user-defined feature correspondence for planar parameterization of meshes. This is achieved by adding positional constraints to the planar parameterization. Matchmaker allows users to introduce scores of constraints while maintaining a valid one-to-one mapping between the embedding and the 3D surface. Matchmaker's constraint mechanism can be used for other applications requiring parameterization besides texture mapping, such as morphing and remeshing.Matchmaker begins with an unconstrained planar embedding of the 3D mesh generated by conventional methods. It moves the constrained vertices to the required positions by matching a triangulation of these positions to a triangulation of the planar mesh formed by paths between constrained vertices. The matching triangulations are used to generate a new parameterization that satisfies the constraints while minimizing the deviation from the original 3D geometry."}
{"_id": "864605d79662579b48e248d8fb4fe36c047baaaa", "title": "A First Step towards Eye State Prediction Using EEG", "text": "In this paper, we investigate how the eye state (open or closed) can be predicted by measuring brain waves with an EEG. To this end, we recorded a corpus containing the activation strength of the fourteen electrodes of a commercial EEG headset as well as the manually annotated eye state corresponding to the recorded data. We tested 42 different machine learning algorithms on their performance to predict the eye state after training with the corpus. The best-performing classifier, KStar, produced a classification error rate of only 2.7% which is a 94% relative reduction over the majority vote of 44.9% classification error."}
{"_id": "4be1afa552fd9241204a878c7ae9e40dd351de52", "title": "Differences in energy expenditure during high-speed versus standard-speed yoga: A randomized sequence crossover trial.", "text": "OBJECTIVES\nTo compare energy expenditure and volume of oxygen consumption and carbon dioxide production during a high-speed yoga and a standard-speed yoga program.\n\n\nDESIGN\nRandomized repeated measures controlled trial.\n\n\nSETTING\nA laboratory of neuromuscular research and active aging.\n\n\nINTERVENTIONS\nSun-Salutation B was performed, for eight minutes, at a high speed versus and a standard-speed separately while oxygen consumption was recorded. Caloric expenditure was calculated using volume of oxygen consumption and carbon dioxide production.\n\n\nMAIN OUTCOME MEASURES\nDifference in energy expenditure (kcal) of HSY and SSY.\n\n\nRESULTS\nSignificant differences were observed in energy expenditure between yoga speeds with high-speed yoga producing significantly higher energy expenditure than standard-speed yoga (MD=18.55, SE=1.86, p<0.01). Significant differences were also seen between high-speed and standard-speed yoga for volume of oxygen consumed and carbon dioxide produced.\n\n\nCONCLUSIONS\nHigh-speed yoga results in a significantly greater caloric expenditure than standard-speed yoga. High-speed yoga may be an effective alternative program for those targeting cardiometabolic markers."}
{"_id": "1e20f9de45d26950ecd11965989d2b15a5d0d86b", "title": "Deep Unfolding: Model-Based Inspiration of Novel Deep Architectures", "text": "Model-based methods and deep neural networks have both been tremendously successful paradigms in machine learning. In model-based methods, we can easily express our problem domain knowledge in the constraints of the model at the expense of difficulties during inference. Deterministic deep neural networks are constructed in such a way that inference is straightforward, but we sacrifice the ability to easily incorporate problem domain knowledge. The goal of this paper is to provide a general strategy to obtain the advantages of both approaches while avoiding many of their disadvantages. The general idea can be summarized as follows: given a model-based approach that requires an iterative inference method, we unfold the iterations into a layer-wise structure analogous to a neural network. We then de-couple the model parameters across layers to obtain novel neural-network-like architectures that can easily be trained discriminatively using gradient-based methods. The resulting formula combines the expressive power of a conventional deep network with the internal structure of the model-based approach, while allowing inference to be performed in a fixed number of layers that can be optimized for best performance. We show how this framework can be applied to the non-negative matrix factorization to obtain a novel non-negative deep neural network architecture, that can be trained with a multiplicative back-propagation-style update algorithm. We present experiments in the domain of speech enhancement, where we show that the resulting model is able to outperform conventional neural network while only requiring a fraction of the number of parameters. We believe this is due to the ability afforded by our framework to incorporate problem level assumptions into the architecture of the deep network. arXiv.org This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c \u00a9Mitsubishi Electric Research Laboratories, Inc., 2014 201 Broadway, Cambridge, Massachusetts 02139"}
{"_id": "f7be4d8a804267b63b77300028324ce3ab4d347a", "title": "Channel Estimation for Massive MIMO Using Gaussian-Mixture Bayesian Learning", "text": "Pilot contamination posts a fundamental limit on the performance of massive multiple-input-multiple-output (MIMO) antenna systems due to failure in accurate channel estimation. To address this problem, we propose estimation of only the channel parameters of the desired links in a target cell, but those of the interference links from adjacent cells. The required estimation is, nonetheless, an underdetermined system. In this paper, we show that if the propagation properties of massive MIMO systems can be exploited, it is possible to obtain an accurate estimate of the channel parameters. Our strategy is inspired by the observation that for a cellular network, the channel from user equipment to a base station is composed of only a few clustered paths in space. With a very large antenna array, signals can be observed under extremely sharp regions in space. As a result, if the signals are observed in the beam domain (using Fourier transform), the channel is approximately sparse, i.e., the channel matrix contains only a small fraction of large components, and other components are close to zero. This observation then enables channel estimation based on sparse Bayesian learning methods, where sparse channel components can be reconstructed using a small number of observations. Results illustrate that compared to conventional estimators, the proposed approach achieves much better performance in terms of the channel estimation accuracy and achievable rates in the presence of pilot contamination."}
{"_id": "000c009765a276d166fc67595e107a9bc44f230d", "title": "Bayesian Compressive Sensing", "text": "The data of interest are assumed to be represented as N-dimensional real vectors, and these vectors are compressible in some linear basis B, implying that the signal can be reconstructed accurately using only a small number M Lt N of basis-function coefficients associated with B. Compressive sensing is a framework whereby one does not measure one of the aforementioned N-dimensional signals directly, but rather a set of related measurements, with the new measurements a linear combination of the original underlying N-dimensional signal. The number of required compressive-sensing measurements is typically much smaller than N, offering the potential to simplify the sensing system. Let f denote the unknown underlying N-dimensional signal, and g a vector of compressive-sensing measurements, then one may approximate f accurately by utilizing knowledge of the (under-determined) linear relationship between f and g, in addition to knowledge of the fact that f is compressible in B. In this paper we employ a Bayesian formalism for estimating the underlying signal f based on compressive-sensing measurements g. The proposed framework has the following properties: i) in addition to estimating the underlying signal f, \"error bars\" are also estimated, these giving a measure of confidence in the inverted signal; ii) using knowledge of the error bars, a principled means is provided for determining when a sufficient number of compressive-sensing measurements have been performed; iii) this setting lends itself naturally to a framework whereby the compressive sensing measurements are optimized adaptively and hence not determined randomly; and iv) the framework accounts for additive noise in the compressive-sensing measurements and provides an estimate of the noise variance. In this paper we present the underlying theory, an associated algorithm, example results, and provide comparisons to other compressive-sensing inversion algorithms in the literature."}
{"_id": "2bd5f754dc529cb6ee812dd1f3e24a13b9877afb", "title": "Exploiting Software: How to Break Code", "text": "To be useful, software must respond to events in a predictable manner. The results can then be used as a window to the interior workings of the code, revealing some of the mechanisms of operations, which may be used to find ways to make it fail in a dangerous way. To some, the window is as clear as a six inch thick pane of lead, but to those with a high level of understanding it can be clear, or at the very least serve as a keyhole. This is an allusion to the old detective stories where someone looks through the keyhole to see what is behind the door. For these reasons, no software that interacts with humans can ever be considered completely secure, and human error in the development of the software can leave the equivalent of keyholes throughout the code."}
{"_id": "5f1d959f1ea180d0612d5f0c8599d8b5e8c5d36d", "title": "A tutorial on cross-layer optimization in wireless networks", "text": "This tutorial paper overviews recent developments in optimization-based approaches for resource allocation problems in wireless systems. We begin by overviewing important results in the area of opportunistic (channel-aware) scheduling for cellular (single-hop) networks, where easily implementable myopic policies are shown to optimize system performance. We then describe key lessons learned and the main obstacles in extending the work to general resource allocation problems for multihop wireless networks. Towards this end, we show that a clean-slate optimization-based approach to the multihop resource allocation problem naturally results in a \"loosely coupled\" cross-layer solution. That is, the algorithms obtained map to different layers [transport, network, and medium access control/physical (MAC/PHY)] of the protocol stack, and are coupled through a limited amount of information being passed back and forth. It turns out that the optimal scheduling component at the MAC layer is very complex, and thus needs simpler (potentially imperfect) distributed solutions. We demonstrate how to use imperfect scheduling in the cross-layer framework and describe recently developed distributed algorithms along these lines. We conclude by describing a set of open research problems"}
{"_id": "ea90d871d068dff9648c83a70e73e98bbd85922e", "title": "Learning to guide task and motion planning using score-space representation", "text": "In this paper, we propose a learning algorithm that speeds up the search in task and motion planning problems. Our algorithm proposes solutions to three different challenges that arise in learning to improve planning efficiency: what to predict, how to represent a planning problem instance, and how to transfer knowledge from one problem instance to another. We propose a method that predicts constraints on the search space based on a generic representation of a planning problem instance, called score space, where we represent a problem instance in terms of performance of a set of solutions attempted so far. Using this representation, we transfer knowledge, in the form of constraints, from previous problems based on the similarity in score space. We design a sequential algorithm that efficiently predicts these constraints, and evaluate it in three different challenging task and motion planning problems. Results indicate that our approach perform orders of magnitudes faster than an unguided planner."}
{"_id": "55419d7878d75fc38125567f0a43515e0f34ae6a", "title": "A Comparison of Intensive Care Unit Mortality Prediction Models through the Use of Data Mining Techniques", "text": "OBJECTIVES\nThe intensive care environment generates a wealth of critical care data suited to developing a well-calibrated prediction tool. This study was done to develop an intensive care unit (ICU) mortality prediction model built on University of Kentucky Hospital (UKH)'s data and to assess whether the performance of various data mining techniques, such as the artificial neural network (ANN), support vector machine (SVM) and decision trees (DT), outperform the conventional logistic regression (LR) statistical model.\n\n\nMETHODS\nThe models were built on ICU data collected regarding 38,474 admissions to the UKH between January 1998 and September 2007. The first 24 hours of the ICU admission data were used, including patient demographics, admission information, physiology data, chronic health items, and outcome information.\n\n\nRESULTS\nOnly 15 study variables were identified as significant for inclusion in the model development. The DT algorithm slightly outperformed (AUC, 0.892) the other data mining techniques, followed by the ANN (AUC, 0.874), and SVM (AUC, 0.876), compared to that of the APACHE III performance (AUC, 0.871).\n\n\nCONCLUSIONS\nWith fewer variables needed, the machine learning algorithms that we developed were proven to be as good as the conventional APACHE III prediction."}
{"_id": "3b5128bfe35875d0cead04b7d19024d841b605f9", "title": "Multispectral pedestrian detection: Benchmark dataset and baseline", "text": "With the increasing interest in pedestrian detection, pedestrian datasets have also been the subject of research in the past decades. However, most existing datasets focus on a color channel, while a thermal channel is helpful for detection even in a dark environment. With this in mind, we propose a multispectral pedestrian dataset which provides well aligned color-thermal image pairs, captured by beam splitter-based special hardware. The color-thermal dataset is as large as previous color-based datasets and provides dense annotations including temporal correspondences. With this dataset, we introduce multispectral ACF, which is an extension of aggregated channel features (ACF) to simultaneously handle color-thermal image pairs. Multi-spectral ACF reduces the average miss rate of ACF by 15%, and achieves another breakthrough in the pedestrian detection task."}
{"_id": "cdda901de62f6581b278423f3a31084ce5354885", "title": "Indoor scene segmentation using a structured light sensor", "text": "In this paper we explore how a structured light depth sensor, in the form of the Microsoft Kinect, can assist with indoor scene segmentation. We use a CRF-based model to evaluate a range of different representations for depth information and propose a novel prior on 3D location. We introduce a new and challenging indoor scene dataset, complete with accurate depth maps and dense label coverage. Evaluating our model on this dataset reveals that the combination of depth and intensity images gives dramatic performance gains over intensity images alone. Our results clearly demonstrate the utility of structured light sensors for scene understanding."}
{"_id": "3267ace5f65ad9b8d1e872f239478e4497117eac", "title": "Wireless Mesh Software Defined Networks (wmSDN)", "text": "In this paper we propose to integrate Software Defined Networking (SDN) principles in Wireless Mesh Networks (WMN) formed by OpenFlow switches. The use of a centralized network controller and the ability to setup arbitrary paths for data flows make SDN a handy tool to deploy fine-grained traffic engineering algorithms in WMNs. However, centralized control may be harmful in multi-hop radio networks formed by commodity devices (e.g. Wireless Community Networks), in which node isolation and network fragmentation are not rare events. To exploit the pros and mitigate the cons, our framework uses the traditional OpenFlow centralized controller to engineer the routing of data traffic, while it uses a distributed controller based on OLSR to route: i) OpenFlow control traffic, ii) data traffic, in case of central controller failure. We implemented and tested our Wireless Mesh Software Defined Network (wmSDN) showing its applicability to a traffic engineering use-case, in which the controller logic balances outgoing traffic among the Internet gateways of the mesh. Albeit simple, this use case allows showing a possible usage of SDN that improves user performance with respect to the case of a traditional mesh with IP forwarding and OLSR routing. The wmSDN software toolkit is formed by Open vSwitch, POX controller, OLSR daemon and our own Bash and Python scripts. The tests have been carried out in an emulation environment based on Linux Containers, NS3 and CORE tools."}
{"_id": "3030aa0a9620e9abfe795df2cb3d87e611762e36", "title": "Closed loop speed control of chopper fed DC motor for industrial drive application", "text": "In this paper a comparative transient and robustness analysis of closed loop speed control employing different linear controllers for the same dc motor using 4 quadrant chopper is investigated. The controller configurations can be broadly classified under (i) Integer order PID controller (ii) linear state space observer and (iii) fractional order PID controller. All of them exhibit superior performances, the first one is the conventional controller used in industries but the later two are modern controllers that have rich potential for industry use and have large advantage over the conventional controllers. The closed loop control of chopper fed DC motor is shown with the 1st quadrant operation of chopper circuit."}
{"_id": "13619ff7e0e42f050e2858bd7e34fe5277691e93", "title": "Input Sentence Splitting and Translating", "text": "We propose a method to split and translate input sentences for speech translation in order to overcome the long sentence problem. This approach is based on three criteria used to judge the goodness of translation results. The criteria utilize the output of an MT system only and assumes neither a particular language nor a particular MT approach. In an experiment with an EBMT system, in which prior methods cannot work or work badly, the proposed split-and-translate method achieves much better results in translation quality."}
{"_id": "26f4f07696a3828f5eeb0d8bb8944da80228b77d", "title": "The Alternating Decision Tree Learning Algorithm", "text": "The application of boosting procedures to decision tree algorithms has been shown to produce very accurate classi ers. These classiers are in the form of a majority vote over a number of decision trees. Unfortunately, these classi ers are often large, complex and di\u00c6cult to interpret. This paper describes a new type of classi cation rule, the alternating decision tree, which is a generalization of decision trees, voted decision trees and voted decision stumps. At the same time classi ers of this type are relatively easy to interpret. We present a learning algorithm for alternating decision trees that is based on boosting. Experimental results show it is competitive with boosted decision tree algorithms such as C5.0, and generates rules that are usually smaller in size and thus easier to interpret. In addition these rules yield a natural measure of classi cation con dence which can be used to improve the accuracy at the cost of abstaining from predicting examples that are hard to classify."}
{"_id": "fb8c6c9ef97a057d2ddd657273a05b03e42ec63b", "title": "High-quality speech coding with SampleRNN", "text": "We provide a speech coding scheme employing a generative model based on SampleRNN that, while operating at significantly lower bitrates, matches or surpasses the perceptual quality of state-of-theart classic wide-band codecs. Moreover, it is demonstrated that the proposed scheme can provide a meaningful rate-distortion trade-off without retraining. We evaluate the proposed scheme in a series of listening tests and discuss limitations of the approach."}
{"_id": "0c4e274363a9f3c24087bf870579893edfa05b51", "title": "A natural language planner interface for mobile manipulators", "text": "Natural language interfaces for robot control aspire to find the best sequence of actions that reflect the behavior intended by the instruction. This is difficult because of the diversity of language, variety of environments, and heterogeneity of tasks. Previous work has demonstrated that probabilistic graphical models constructed from the parse structure of natural language can be used to identify motions that most closely resemble verb phrases. Such approaches however quickly succumb to computational bottlenecks imposed by construction and search the space of possible actions. Planning constraints, which define goal regions and separate the admissible and inadmissible states in an environment model, provide an interesting alternative to represent the meaning of verb phrases. In this paper we present a new model called the Distributed Correspondence Graph (DCG) to infer the most likely set of planning constraints from natural language instructions. A trajectory planner then uses these planning constraints to find a sequence of actions that resemble the instruction. Separating the problem of identifying the action encoded by the language into individual steps of planning constraint inference and motion planning enables us to avoid computational costs associated with generation and evaluation of many trajectories. We present experimental results from comparative experiments that demonstrate improvements in efficiency in natural language understanding without loss of accuracy."}
{"_id": "ed556b99772ffb67951423ceb26e841db20f76cf", "title": "The Myth of Millions of Annual Self-Defense Gun Uses: A Case Study of Survey Overestimates of Rare Events", "text": "A criminologist has been using national self-report surveys to estimate the incidence of self-defense gun use in the United States (Kleck 199 I). His most recent estimate is that civilians use guns in self-defense against offenders more than 2.5 million times each year (Kleck and Gertz 1995). This figure has been widely cited, not only by the National Rifle Association but in the media and in Congress. A criminologist in Canada is touting high self-defense estimates for that country, derived from similar self-report surveys (Mauser 1995). All attempts at external validation of the 2.5 million figure show that it is an enormous overestimate (Hemenway, in press). For example, in 34% of the times a gun was used for self-defense, the offender was allegedlycommitting a burglary. In other words, guns were reportedly used by defenders for self-defense in approximately 845,000 burglaries. From sophisticated victimization surveys, however,we know that there were fewer than 6 million burglaries in the year of the survey and in only 22% of those cases was someone certainly at home (1.3 million burglaries). Since only 42% of U.S. households own firearms,"}
{"_id": "b83b0bd2f43472f777176620d6e7136c940be77c", "title": "One Century of Brain Mapping Using Brodmann Areas*", "text": "100 years after their publication, Brodmann\u2019s maps of the cerebral cortex are universally used to locate neuropsychological functions. On the occasion of this jubilee the life and work of Korbinian Brodmann are reported. The core functions of each single Brodmann area are described and Brodmann\u2019s views on neuropsychological processes are depicted. 100 Jahre nach ihrer Ver\u00f6ffentlichung wird Brodmanns Kartierung des zerebralen Kortex universell zur Lokalisation neuropsychologischer Funktionen eingesetzt. Anl\u00e4sslich dieses Jubil\u00e4ums werden Leben und Werk von Korbinian Brodmann dargestellt. Die wesentlichen Funktionen der einzelnen Brodmann-Areale werden beschrieben und Brodmanns Ansichten \u00fcber neuropsychologische Prozesse wiedergegeben."}
{"_id": "8a02611c1f049846d0daa5f7a8844676874020dd", "title": "CONTEXTUAL CLASSIFICATION OF POINT CLOUD DATA BY EXPLOITING INDIVIDUAL 3D NEIGBOURHOODS", "text": "The fully automated analysis of 3D point clouds is of great importance in photogrammetry, remote sensing and computer vision. For reliably extracting objects such as buildings, road inventory or vegetation, many approaches rely on the results of a point cloud classification, where each 3D point is assigned a respective semantic class label. Such an assignment, in turn, typically involves statistical methods for feature extraction and machine learning. Whereas the different components in the processing workflow have extensively, but separately been investigated in recent years, the respective connection by sharing the results of crucial tasks across all components has not yet been addressed. This connection not only encapsulates the interrelated issues of neighborhood selection and feature extraction, but also the issue of how to involve spatial context in the classification step. In this paper, we present a novel and generic approach for 3D scene analysis which relies on (i) individually optimized 3D neighborhoods for (ii) the extraction of distinctive geometric features and (iii) the contextual classification of point cloud data. For a labeled benchmark dataset, we demonstrate the beneficial impact of involving contextual information in the classification process and that using individual 3D neighborhoods of optimal size significantly increases the quality of the results for both pointwise and contextual classification."}
{"_id": "f53d7df6866a9b79eaae5e4f72ed2f03e6d5dca0", "title": "Examining the factors that influence early adopters' smartphone adoption: The case of college students", "text": "The influence of early adopters on potential adopters\u2019 decisions of whether or not to adopt a product is known to be critical. In this paper, we examine the factors that influence the adoption behavior of smartphone early adopters by looking at smartphone adoption behavior of college students, because a large portion of the early adopters of smartphones are college students. Our focus is on the effect of normative peer influence on a college student\u2019s smartphone adoption. We also examine the influence of other factors such as selfinnovativeness, self-efficacy, the decision maker\u2019s attitudes towards a product, financial burden of using the product, familial influence, and other demographic factors (e.g., age and gender). College students\u2019 adoption behavior is studied using logit and probit choice models developed based on random utility theory. The discrete choice models are empirically estimated using survey data. We find important influence of friends, financial burden, and other family members on the smartphone adoption of college students who adopted smartphones earlier than other students. 2013 Elsevier Ltd. All rights reserved."}
{"_id": "4580bd24ac5c9d222ba90fc3ef63d810492fd52a", "title": "The calvaria of Sangiran 38, Sendangbusik, Sangiran Dome, Java.", "text": "We describe in detail Sangiran 38 (S38), an adult partial calvaria recovered in 1980 from the Bapang (Kabuh) Formation of the Sangiran Dome near the hamlet of Sendangbusik, Java. Several other hominins (Bukuran, Hanoman 1, and Bs 9706) recovered in the vicinity come from either the upper-most Sangiran (Pucangan) or lower-most Bapang formations. S38 is from the lower Bapang Formation, which (40)Ar/(39)Ar age estimates suggest spans between 1.47 and 1.58Ma. Anatomical and metric comparisons with a worldwide set of 'early non-erectus'Homo, and Homo erectus (sensu lato) fossils indicate S38 is best considered a member of H. erectus. Although smaller in size, S38 is similar in overall morphology to the Bukuran specimen of similar age and provenance. The S38 calvaria exhibits several depressed lesions of the vault consistent with a scalp or systemic infection or soft tissue cyst."}
{"_id": "5e28e81e757009d2f76b8674e0da431f5845884a", "title": "Using Discriminant Eigenfeatures for Image Retrieval", "text": "|This paper describes the automatic selection of features from an image training set using the theories of multi-dimensional linear discriminant analysis and the associated optimal linear projection. We demonstrate the eeectiveness of these Most Discriminating Features for view-based class retrieval from a large database of widely varying real-world objects presented as \\well-framed\" views, and compare it with that of the principal component analysis."}
{"_id": "959be28e967b491a46b941f94373c5d5cabec23d", "title": "A Novel Economic Sharing Model in a Federation of Selfish Cloud Providers", "text": "This paper presents a novel economic model to regulate capacity sharing in a federation of hybrid cloud providers (CPs). The proposed work models the interactions among the CPs as a repeated game among selfish players that aim at maximizing their profit by selling their unused capacity in the spot market but are uncertain of future workload fluctuations. The proposed work first establishes that the uncertainty in future revenue can act as a participation incentive to sharing in the repeated game. We, then, demonstrate how an efficient sharing strategy can be obtained via solving a simple dynamic programming problem. The obtained strategy is a simple update rule that depends only on the current workloads and a single variable summarizing past interactions. In contrast to existing approaches, the model incorporates historical and expected future revenue as part of the virtual machine (VM) sharing decision. Moreover, these decisions are not enforced neither by a centralized broker nor by predefined agreements. Rather, the proposed model employs a simple grim trigger strategy where a CP is threatened by the elimination of future VM hosting by other CPs. Simulation results demonstrate the performance of the proposed model in terms of the increased profit and the reduction in the variance in the spot market VM availability and prices."}
{"_id": "ef0d03b917a3f3cd33ad3fb5c7cf63b9e4f6e0d4", "title": "Binding mechanism of PicoGreen to DNA characterized by magnetic tweezers and fluorescence spectroscopy", "text": "Fluorescent dyes are broadly used in many biotechnological applications to detect and visualize DNA molecules. However, their binding to DNA alters the structural and nanomechanical properties of DNA and, thus, interferes with associated biological processes. In this work we employed magnetic tweezers and fluorescence spectroscopy to investigate the binding of PicoGreen to DNA at room temperature in a concentration-dependent manner. PicoGreen is an ultrasensitive quinolinium nucleic acid stain exhibiting hardly any background signal from unbound dye molecules. By means of stretching and overwinding single, torsionally constrained, nick-free double-stranded DNA molecules, we acquired force-extension and supercoiling curves which allow quantifying DNA contour length, persistence length and other thermodynamical binding parameters, respectively. The results of our magnetic tweezers single-molecule binding study were well supported through analyzing the fluorescent spectra of stained DNA. On the basis of our work, we could identify a concentration-dependent bimodal binding behavior, where, apparently, PicoGreen associates to DNA as an intercalator and minor-groove binder simultaneously."}
{"_id": "3cb37802d5f787fc2f00a150c5c94a3ee6da8323", "title": "Applications of Random Sampling in Computational Geometry, II", "text": "Random sampling is used for several new geometric algorithms. The algorithms are \u201cLas Vegas,\u201d and their expected bounds are with respect to the random behavior of the algorithms. One algorithm reports all the intersecting pairs of a set of line segments in the plane, and requires &Ogr;(A + n log n) expected time, where A is the size of the answer, the number of intersecting pairs reported. The algorithm requires &Ogr;(n) space in the worst case. Another algorithm computes the convex hull of a point set in E3 in &Ogr;(n log A) expected time, where n is the number of points and A is the number of points on the surface of the hull. A simple Las Vegas algorithm triangulates simple polygons in &Ogr;(n log log n) expected time. Algorithms for half-space range reporting are also given. In addition, this paper gives asymptotically tight bounds for a combinatorial quantity of interest in discrete and computational geometry, related to halfspace partitions of point sets."}
{"_id": "2f5263e6895cda8215c52e51786c4456f164fa33", "title": "Comparison of Different Classification Techniques Using WEKA for Breast Cancer", "text": "The development of data-mining applications such as classification and clustering has shown the need for machine learning algorithms to be applied to large scale data. In this paper we present the comparison of different classification techniques using Waikato Environment for Knowledge Analysis or in short, WEKA. WEKA is an open source software which consists of a collection of machine learning algorithms for data mining tasks. The aim of this paper is to investigate the performance of different classification or clustering methods for a set of large data. The algorithm or methods tested are Bayes Network, Radial Basis Function, Pruned Tree, Single Conjunctive Rule Learner and Nearest Neighbors Algorithm. A fundamental review on the selected technique is presented for introduction purposes. The data breast cancer data with a total data of 6291 and a dimension of 699 rows and 9 columns will be used to test and justify the differences between the classification methods or algorithms. Subsequently, the classification technique that has the potential to significantly improve the common or conventional methods will be suggested for use in large scale data, bioinformatics or other general applications. Keywords\u2014 Machine Learning, Data Mining, WEKA, Classification, Bioinformatics."}
{"_id": "6a04569373d96b958c447ea3e628ef388b7591e6", "title": "Classification of Liver Cancer using Artificial Neural Network and Support Vector Machine", "text": "The purpose of this study is to compare the performance of Artificial Neural Network (ANN) and Support Vector Machine (SVM) for liver cancer classification. The performance of both models is compared and validated on BUPA Liver Disorder Dataset in terms of accuracy, sensitivity, specificity and Area under Curve (AUC). The comparative results show that the SVM classifier outperforms ANN classifier where SVM gives an accuracy of 63.11%, specificity of 100% and AUC of 68.34% where as ANN gives classification accuracy of 57.28%, specificity of 32.56 and AUC of 53.78. This result indicates that the classification capability of SVM is better than ANN and may potentially fill in a critical gap in the use of current or future classification algorithms for liver cancer."}
{"_id": "15f37ff89f68747a3f4cb34cb6097ba79b201f92", "title": "A simulation study of the number of events per variable in logistic regression analysis.", "text": "We performed a Monte Carlo study to evaluate the effect of the number of events per variable (EPV) analyzed in logistic regression analysis. The simulations were based on data from a cardiac trial of 673 patients in which 252 deaths occurred and seven variables were cogent predictors of mortality; the number of events per predictive variable was (252/7 =) 36 for the full sample. For the simulations, at values of EPV = 2, 5, 10, 15, 20, and 25, we randomly generated 500 samples of the 673 patients, chosen with replacement, according to a logistic model derived from the full sample. Simulation results for the regression coefficients for each variable in each group of 500 samples were compared for bias, precision, and significance testing against the results of the model fitted to the original sample. For EPV values of 10 or greater, no major problems occurred. For EPV values less than 10, however, the regression coefficients were biased in both positive and negative directions; the large sample variance estimates from the logistic model both overestimated and underestimated the sample variance of the regression coefficients; the 90% confidence limits about the estimated values did not have proper coverage; the Wald statistic was conservative under the null hypothesis; and paradoxical associations (significance in the wrong direction) were increased. Although other factors (such as the total number of events, or sample size) may influence the validity of the logistic model, our findings indicate that low EPV can lead to major problems."}
{"_id": "a9d023728ff20d9f0a30e36a234173879c851b25", "title": "The Role of ICT to Achieve the UN Sustainable Development Goals (SDG)", "text": "This paper is aiming at illustrating the potential of ICT for achieving the Sustainable Development Goals which were declared by the United Nations in 2015 as binding for all nations of our planet addressing both developing and developed countries. ICT must play a significant role if the SDGs should be achieved as projected in 2030. The paper gives an overview of some of the existing efforts in this area and is written as an appeal to all professionals, scientists and IT-professional and their organization to take a holistic approach for all ICT-activities and projects to always include and monitor the effects of their work on the SDGs. The impacts of ICT on sustainability are twofold: On the one hand there might be negative effects on sustainability such as the generation of electronic waste, on the other hand ICT is definitely an enabler to more efficient resource usage, education and business operations which is critical success factor for achieving the SDGs."}
{"_id": "0ec660ab0ea61ab820a67942b894a62f1bc9b0a5", "title": "Multi-agent Deep Reinforcement Learning with Extremely Noisy Observations", "text": "Multi-agent reinforcement learning systems aim to provide interacting agents with the ability to collaboratively learn and adapt to the behaviour of other agents. In many real-world applications, the agents can only acquire a partial view of the world. Here we consider a setting whereby most agents\u2019 observations are also extremely noisy, hence only weakly correlated to the true state of the environment. Under these circumstances, learning an optimal policy becomes particularly challenging, even in the unrealistic case that an agent\u2019s policy can be made conditional upon all other agents\u2019 observations. To overcome these difficulties, we propose a multi-agent deep deterministic policy gradient algorithm enhanced by a communication medium (MADDPG-M), which implements a two-level, concurrent learning mechanism. An agent\u2019s policy depends on its own private observations as well as those explicitly shared by others through a communication medium. At any given point in time, an agent must decide whether its private observations are sufficiently informative to be shared with others. However, our environments provide no explicit feedback informing an agent whether a communication action is beneficial, rather the communication policies must also be learned through experience concurrently to the main policies. Our experimental results demonstrate that the algorithm performs well in six highly non-stationary environments of progressively higher complexity, and offers substantial performance gains compared to the baselines."}
{"_id": "d28107b7bb85e770e3584ac4541cb5585aadbccc", "title": "Modified GDI Technique-A Power Efficient Method For Digital Circuit Design", "text": "This paper presents logic style compariso ns based on different logic functions and claimed m odified Gate Diffusion Input logic (Mod-GDI) to be much more power-efficient than Gate Diffusion Input logic (GDI) and complementary CMOS logic design. However, DC and Tr ansient analysis performed on more efficient modifi ed Gate Diffusion Input logic (Mod-GDI) circuit realization s and a wider range of different logic cells, as we ll as the use of practical circuit arrangements reveal Mod-GDI to be superior to GDI and CMOS in the majority cases wit h respect to speed, area, power dissipation, and power-delay pro ducts. This manuscript shows that Mod-GDI is the logic style of preference for the realization of arbitrary combinational circuits, if low voltage, low power, and small power-delay products are of concern. All Simulations are perfor med through PSPICE based on 0.18 \u03bcm CMOS technology, and results show power characteristics of Mod-GDI technique of low power digital circuit design. Simulation results shows up to 45% reduction in power-delay product in Mod-GDI. Mod-GDI approach allows realization of a broad variety of multifaceted logic functions by means of only two transistors. T his technique is appropriate for designing of fast, low power circuits, using reduced number of transistor (as compared to CMOS techniques), while improving power characteris tics."}
{"_id": "c531eea9aae40b09b535ec30d9d41025e11ca459", "title": "MOMENTUM EFFECT , VALUE EFFECT , RISK PREMIUM AND PREDICTABILITY OF STOCK RETURNS \u2013 A STUDY ON INDIAN MARKET", "text": "Article History Received: 26 March 2018 Revised: 2 May 2018 Accepted: 9 May 2018 Published: 14 May 2018"}
{"_id": "2f40f04fb5089f3cd77ea30120dfb5181dd82a37", "title": "Automated Classification of L/R Hand Movement EEG Signals using Advanced Feature Extraction and Machine Learning", "text": "In this paper, we propose an automated computer platform for the purpose of classifying Electroencephalography (EEG) signals associated with left and right hand movements using a hybrid system that uses advanced feature extraction techniques and machine learning algorithms. It is known that EEG represents the brain activity by the electrical voltage fluctuations along the scalp, and Brain-Computer Interface (BCI) is a device that enables the use of the brain\u2019s neural activity to communicate with others or to control machines, artificial limbs, or robots without direct physical movements. In our research work, we aspired to find the best feature extraction method that enables the differentiation between left and right executed fist movements through various classification algorithms. The EEG dataset used in this research was created and contributed to PhysioNet by the developers of the BCI2000 instrumentation system. Data was preprocessed using the EEGLAB MATLAB toolbox and artifacts removal was done using AAR. Data was epoched on the basis of Event-Related (De) Synchronization (ERD/ERS) and movement-related cortical potentials (MRCP) features. Mu/beta rhythms were isolated for the ERD/ERS analysis and delta rhythms were isolated for the MRCP analysis. The Independent Component Analysis (ICA) spatial filter was applied on related channels for noise reduction and isolation of both artifactually and neutrally generated EEG sources. The final feature vector included the ERD, ERS, and MRCP features in addition to the mean, power and energy of the activations of the resulting Independent Components (ICs) of the epoched feature datasets. The datasets were inputted into two machinelearning algorithms: Neural Networks (NNs) and Support Vector Machines (SVMs). Intensive experiments were carried out and optimum classification performances of 89.8 and 97.1 were obtained using NN and SVM, respectively. This research shows that this method of feature extraction holds some promise for the classification of various pairs of motor movements, which can be used in a BCI context to mentally control a computer or machine. Keywords\u2014EEG; BCI; ICA; MRCP; ERD/ERS; machine learning; NN; SVM"}
{"_id": "6d9c315f716f3527b0e09c94bfb7870328a91ef6", "title": "UMBC_EBIQUITY-CORE: Semantic Textual Similarity Systems", "text": "We describe three semantic text similarity systems developed for the *SEM 2013 STS shared task and the results of the corresponding three runs. All of them shared a word similarity feature that combined LSA word similarity and WordNet knowledge. The first, which achieved the best mean score of the 89 submitted runs, used a simple term alignment algorithm augmented with penalty terms. The other two runs, ranked second and fourth, used support vector regression models to combine larger sets of features."}
{"_id": "dbe8c61628896081998d1cd7d10343a45b7061bd", "title": "Modular Construction of Time-Delay Neural Networks for Speech Recognition", "text": "Several strategies are described that overcome limitations of basic network models as steps towards the design of large connectionist speech recognition systems. The two major areas of concern are the problem of time and the problem of scaling. Speech signals continuously vary over time and encode and transmit enormous amounts of human knowledge. To decode these signals, neural networks must be able to use appropriate representations of time and it must be possible to extend these nets to almost arbitrary sizes and complexity within finite resources. The problem of time is addressed by the development of a Time-Delay Neural Network; the problem of scaling by Modularity and Incremental Design of large nets based on smaller subcomponent nets. It is shown that small networks trained to perform limited tasks develop time invariant, hidden abstractions that can subsequently be exploited to train larger, more complex nets efficiently. Using these techniques, phoneme recognition networks of increasing complexity can be constructed that all achieve superior recognition performance."}
{"_id": "a6011b61cab8efec98bf512e2937fc68a0905147", "title": "Box Pleating is Hard", "text": "In their seminal 1996 paper, Bern and Hayes initiated investigation into the computational complexity of origami [3]. They proved that it is NP-hard to determine whether a given general crease pattern can be folded flat, when the creases have or have not been assigned crease directions (mountain fold or valley fold). Since that time, there has been considerable work in analyzing the computational complexity of other origami related problems. For example, Arkin et al. [1] proved that deciding foldability is hard even for simple folds, while Demaine et al. [4] proved that optimal circle packing for origami design is also hard. At the end of their paper, Bern and Hayes pose some interesting open questions to further their work. While most of them have been investigated since, two in particular (problems 2 and 3) have remained untouched until now. First, while the gadgets used in their hardness proof for unassigned crease patterns are relatively straightforward, their gadgets for assigned crease patterns are considerably more convoluted, and quite difficult to check. In particular, we found an error in their crossover gadget where signals are not guaranteed to transmit dependably for wires that do not cross orthogonally, which is required in their construction. Is there a simpler way to achieve a correct result (i.e. \u201cwithout tabs\u201d)? Second, their reductions construct creases at a variety of unconstrained angles. Is deciding flat foldability easy under more restrictive inputs? For example box pleating, folding only along creases aligned at multiples of 45\u25e6 to each other, is a subset of particular interest in transformational robotics and selfassembly, with a universality result constructing arbitrary polycubes using box pleating [2]. In this paper we prove deciding flat foldability of box pleated crease patterns to be NP-hard in both the unassigned and assigned cases, using relatively simple gadgets containing no more than 20 layers at any point. A crease pattern is a straight-line graph of creases on a square paper which all must be folded by \u00b1180\u25e6 resulting in a flat folding, a piecewise isometry in the plane such that the paper does not intersect itself. We call a crease a valley fold if it folds"}
{"_id": "4914e7554866eb46cb026add5347bd2523d93ba0", "title": "Valuing American Options by Simulation : A Simple Least-Squares Approach", "text": "This article presents a simple yet powerful new approach for approximating the value of American options by simulation. The key to this approach is the use of least squares to estimate the conditional expected payoff to the optionholder from continuation. This makes this approach readily applicable in path-dependent and multifactor situations where traditional finite difference techniques cannot be used. We illustrate this technique with several realistic examples including valuing an option when the underlying asset follows a jump-diffusion process and valuing an American swaption in a 20-factor string model of the term structure."}
{"_id": "6e49d32f0a9a80dfe4c2cb81e1036bedb495d361", "title": "Occlusion Culling in OpenSG PLUS", "text": "Very large polygonal models are frequently used in scientific computing, mechanical engineering, or virtual medicine. An effective technique to handle these large models is occlusion culling. Like most large model techniques, occlusion culling trades overhead costs with the rendering costs of the possibly occluded geometry. In this paper, we present occlusion culling techniques for OpenSG. These techniques utilize OpenGL or OpenGL extensions to determine occlusion information. Techniques utilizing image space and temporal coherence are used to optimize occlusion query efficiency. Only the standard OpenSG data structures are used and no precomputation is necessary for the presented techniques."}
{"_id": "087edac1ffd684b71eb8d1fcbdda1035243c61e5", "title": "Analyzing (social media) networks with NodeXL", "text": "We present NodeXL, an extendible toolkit for network overview, discovery and exploration implemented as an add-in to the Microsoft Excel 2007 spreadsheet software. We demonstrate NodeXL data analysis and visualization features with a social media data sample drawn from an enterprise intranet social network. A sequence of NodeXL operations from data import to computation of network statistics and refinement of network visualization through sorting, filtering, and clustering functions is described. These operations reveal sociologically relevant differences in the patterns of interconnection among employee participants in the social media space. The tool and method can be broadly applied."}
{"_id": "a606d44a895617c291baf4b46fb2fc3ca0aa314a", "title": "In Search of the Uncanny Valley", "text": "Recent advances in computer animation and robotics have lead to greater and greater realism of human appearance to be obtained both on screen and in physical devices. A particular issue that has arisen in this pursuit is whether increases in realism necessarily lead to increases in acceptance. The concept of the uncanny valley suggests that high, though not perfect, levels of realism will result in poor acceptance. We review this concept and its psychological basis."}
{"_id": "41a3f6a6f97baf49c5dd26093814e124b238d380", "title": "Timeline: A Dynamic Hierarchical Dirichlet Process Model for Recovering Birth/Death and Evolution of Topics in Text Stream", "text": "Topic models have proven to be a useful tool for discovering latent structures in document collections. However, most document collections often come as temporal streams and thus several aspects of the latent structure such as the number of topics, the topics\u2019 distribution and popularity are time-evolving. Several models exist that model the evolution of some but not all of the above aspects. In this paper we introduce infinite dynamic topic models, iDTM, that can accommodate the evolution of all the aforementioned aspects. Our model assumes that documents are organized into epochs, where the documents within each epoch are exchangeable but the order between the documents is maintained across epochs. iDTM allows for unbounded number of topics: topics can die or be born at any epoch, and the representation of each topic can evolve according to a Markovian dynamics. We use iDTM to analyze the birth and evolution of topics in the NIPS community and evaluated the efficacy of our model on both simulated and real datasets with favorable outcome."}
{"_id": "1909ecf45f9e1a56347f75ec5fb04d925036ef3e", "title": "Robust control of uncertain Markov Decision Processes with temporal logic specifications", "text": "We present a method for designing a robust control policy for an uncertain system subject to temporal logic specifications. The system is modeled as a finite Markov Decision Process (MDP) whose transition probabilities are not exactly known but are known to belong to a given uncertainty set. A robust control policy is generated for the MDP that maximizes the worst-case probability of satisfying the specification over all transition probabilities in this uncertainty set. To this end, we use a procedure from probabilistic model checking to combine the system model with an automaton representing the specification. This new MDP is then transformed into an equivalent form that satisfies assumptions for stochastic shortest path dynamic programming. A robust version of dynamic programming solves for a \u03b5-suboptimal robust control policy with time complexity O(log1/\u03b5) times that for the non-robust case."}
{"_id": "56c38c79b639b55d862f7255b26f4f39c309a576", "title": "Enforcing k-anonymity in Web Mail Auditing", "text": "We study the problem of k-anonymization of mail messages in the realistic scenario of auditing mail traffic in a major commercial Web mail service. Mail auditing is necessary in various Web mail debugging and quality assurance activities, such as anti-spam or the qualitative evaluation of novel mail features. It is conducted by trained professionals, often referred to as \"auditors\", who are shown messages that could expose personally identifiable information. We address here the challenge of k-anonymizing such messages, focusing on machine generated mail messages that represent more than 90% of today's mail traffic. We introduce a novel message signature Mail-Hash, specifically tailored to identifying structurally-similar messages, which allows us to put such messages in a same equivalence class. We then define a process that generates, for each class, masked mail samples that can be shown to auditors, while guaranteeing the k-anonymity of users. The productivity of auditors is measured by the amount of non-hidden mail content they can see every day, while considering normal working conditions, which set a limit to the number of mail samples they can review. In addition, we consider k-anonymity over time since, by definition of k-anonymity, every new release places additional constraints on the assignment of samples. We describe in details the results we obtained over actual Yahoo mail traffic, and thus demonstrate that our methods are feasible at Web mail scale. Given the constantly growing concern of users over their email being scanned by others, we argue that it is critical to devise such algorithms that guarantee k-anonymity, and implement associated processes in order to restore the trust of mail users."}
{"_id": "4cadd2db91f9dacffe45ef27bb305d406d7f24a0", "title": "Descriptive versus interpretive phenomenology: their contributions to nursing knowledge.", "text": "A number of articles in the nursing literature discuss the differences between descriptive and interpretive approaches to doing phenomenology. A review of studies demonstrates, however, that many researchers do not articulate which approach guides the study, nor do they identify the philosophical assumptions on which the study is based. Such lack of clarity makes it difficult for the reader to obtain a sense of how the knowledge produced by the study is to be evaluated and used. In this article, the authors compare the philosophical components of descriptive and interpretive approaches to doing phenomenology and illustrate types of knowledge produced by each through reviewing specific studies. They focus on the various uses of phenomenology in generating useful knowledge for health care practice."}
{"_id": "48ee7caa3590c3dec44322fe3d30aa321107b6b8", "title": "SQLEM: Fast Clustering in SQL using the EM Algorithm", "text": "Clustering is one of the most important tasks performed in Data Mining applications. This paper presents an efficient SQL implementation of the EM algorithm to perform clustering in very large databases. Our version can effectively handle high dimensional data, a high number of clusters and more importantly, a very large number of data records. We present three strategies to implement EM in SQL: horizontal, vertical and a hybrid one. We expect this work to be useful for data mining programmers and users who want to cluster large data sets inside a relational DBMS."}
{"_id": "7844e08ba1984c5d1dc608c540d87d564028a9b8", "title": "Author Identification Using Imbalanced and Limited Training Texts", "text": "This paper deals with the problem of author identification. The common N-grams (CNG) method [6] is a language-independent profile-based approach with good results in many author identification experiments so far. A variation of this approach is presented based on new distance measures that are quite stable for large profile length values. Special emphasis is given to the degree upon which the effectiveness of the method is affected by the available training text samples per author. Experiments based on text samples on the same topic from the Reuters Corpus Volume 1 are presented using both balanced and imbalanced training corpora. The results show that CNG with the proposed distance measures is more accurate when only limited training text samples are available, at least for some of the candidate authors, a realistic condition in author identification problems."}
{"_id": "dd596f9da673fd7b8af9a8bfaac7a1f617086fe6", "title": "Bigrams of Syntactic Labels for Authorship Discrimination of Short Texts", "text": "We present a method for authorship discrimination that is based on the frequency of bigrams of syntactic labels that arise from partial parsing of the text. We show that this method, alone or combined with other classification features, achieves a high accuracy on discrimination of the work of Anne and Charlotte Bront\u00eb, which is very difficult to do by traditional methods. Moreover, high accuracies are achieved even on fragments of text little more than 200 words long. ................................................................................................................................................................................."}
{"_id": "094fc15bc058b0d62a661a1460885a9490bdb1bd", "title": "A Probabilistic Analysis of the Rocchio Algorithm with TFIDF for Text Categorization", "text": "The Rocchio relevance feedback algorithm is one of the most popular and widely applied learning methods from information retrieval. Here, a probabilistic analysis of this algorithm is presented in a text categorization framework. The analysis gives theoretical insight into the heuristics used in the Rocchio algorithm, particularly the word weighting scheme and the similarity metric. It also suggests improvements which lead to a probabilistic variant of the Rocchio classi er. The Rocchio classi er, its probabilistic variant, and a naive Bayes classi er are compared on six text categorization tasks. The results show that the probabilistic algorithms are preferable to the heuristic Rocchio classi er not only because they are more well-founded, but also because they achieve better performance."}
{"_id": "a8f76a803c8b2e8bd2e8a5bce83ee93def315e35", "title": "Multiple nutrient deficiency detection in paddy leaf images using color and pattern analysis", "text": "Paddy being the staple food of India is majorly affected by deficiency of primary nutrient elements like nitrogen, phosphorus and potassium. Leaves can be deficient with multiple nutrient elements at a same time. This can alter natural color of paddy leaves. Such leaves are considered as defective. The proposed work is to automate multiple nutrient element deficiency identification of paddy leaves. Pattern analysis RGB color features are extracted to identify defective paddy leaves. Firstly the database of healthy, nitrogen, phosphorus and potassium defected paddy leaves are created. For any test image effective comparison at different levels are employed such as multiple color comparison, multiple pattern comparison and combination of color and patterns comparison, so that defectiveness is accurately identified for combination of deficiency such as nitrogen-phosphorus(NP), nitrogen-potassium(NK) and phosphorous-potassium (KP)."}
{"_id": "62e913431bcef5983955e9ca160b91bb19d9de42", "title": "Facial Landmark Detection with Tweaked Convolutional Neural Networks", "text": "This paper concerns the problem of facial landmark detection. We provide a unique new analysis of the features produced at intermediate layers of a convolutional neural network (CNN) trained to regress facial landmark coordinates. This analysis shows that while being processed by the CNN, face images can be partitioned in an unsupervised manner into subsets containing faces in similar poses (i.e., 3D views) and facial properties (e.g., presence or absence of eye-wear). Based on this finding, we describe a novel CNN architecture, specialized to regress the facial landmark coordinates of faces in specific poses and appearances. To address the shortage of training data, particularly in extreme profile poses, we additionally present data augmentation techniques designed to provide sufficient training examples for each of these specialized sub-networks. The proposed Tweaked CNN (TCNN) architecture is shown to outperform existing landmark detection methods in an extensive battery of tests on the AFW, ALFW, and 300W benchmarks. Finally, to promote reproducibility of our results, we make code and trained models publicly available through our project webpage."}
{"_id": "eb81ecaff4c376c45c3c75a9200399ac279760e0", "title": "Undervalued or Overvalued Customers : Capturing Total Customer Engagement Value", "text": "Customers can interact with and create value for firms in a variety of ways. This article proposes that assessing the value of customers based solely upon their transactions with a firm may not be sufficient, and valuing this engagement correctly is crucial in avoiding undervaluation and overvaluation of customers. We propose four components of a customer\u2019s engagement value (CEV) with a firm. The first component is customer lifetime value (the customer\u2019s purchase behavior), the second is customer referral value (as it relates to incentivized referral of new customers), the third is customer influencer value (which includes the customer\u2019s behavior to influence other customers, that is increasing acquisition, retention, and share of wallet through word of mouth of existing customers as well as prospects), and the fourth is customer knowledge value (the value added to the firm by feedback from the customer). CEV provides a comprehensive framework that can ultimately lead to more efficient marketing strategies that enable higher long-term contribution from the customer. Metrics to measure CEV, future research propositions regarding relationships between the four components of CEV are proposed and marketing strategies that can leverage these relationships suggested."}
{"_id": "123e98412253f2b8a602a0d699a5f493e17c62ef", "title": "Wordnet Enhanced Automatic Crossword Generation", "text": "We report on a system for automatically generating and displaying crosswords from a system manager supplied database of potential clues and corresponding words that index those clues. The system relies on the lexical relations encoded in WordNet to enhance the aesthetics of the resulting crossword by making it easier to automatically identify a grid that may be populated with words and clues that have a thematic focus. The system architecture is provided in overview, as is empirical evaluation."}
{"_id": "93b20637a8fb4ef3db785e8bc618bfe051c779eb", "title": "Software evolution storylines", "text": "This paper presents a technique for visualizing the interactions between developers in software project evolution. The goal is to produce a visualization that shows more detail than animated software histories, like code_swarm [15], but keeps the same focus on aesthetics and presentation. Our software evolution storylines technique draws inspiration from XKCD's \"Movie Narrative Charts\" and the aesthetic design of metro maps. We provide the algorithm, design choices, and examine the results of using the storylines technique. Our conclusion is that the it is able to show more details when compared to animated software project history videos. However, it does not scale to the largest projects, such as Eclipse and Mozilla."}
{"_id": "e5a1d41e6212951cb6a831ed61a59d00b7ff6867", "title": "Complex Sequential Question Answering: Towards Learning to Converse Over Linked Question Answer Pairs with a Knowledge Graph", "text": "While conversing with chatbots, humans typically tend to ask many questions, a significant portion of which can be answered by referring to large-scale knowledge graphs (KG). While Question Answering (QA) and dialog systems have been studied independently, there is a need to study them closely to evaluate such real-world scenarios faced by bots involving both these tasks. Towards this end, we introduce the task of Complex Sequential QA which combines the two tasks of (i) answering factual questions through complex inferencing over a realistic-sized KG of millions of entities, and (ii) learning to converse through a series of coherently linked QA pairs. Through a labor intensive semi-automatic process, involving in-house and crowdsourced workers, we created a dataset containing around 200K dialogs with a total of 1.6M turns. Further, unlike existing large scale QA datasets which contain simple questions that can be answered from a single tuple, the questions in our dialogs require a larger subgraph of the KG. Specifically, our dataset has questions which require logical, quantitative, and comparative reasoning as well as their combinations. This calls for models which can: (i) parse complex natural language questions, (ii) use conversation context to resolve coreferences and ellipsis in utterances, (iii) ask for clarifications for ambiguous queries, and finally (iv) retrieve relevant subgraphs of the KG to answer such questions. However, our experiments with a combination of state of the art dialog and QA models show that they clearly do not achieve the above objectives and are inadequate for dealing with such complex real world settings. We believe that this new dataset coupled with the limitations of existing models as reported in this paper should encourage further research in Complex Sequential QA."}
{"_id": "02bbd99f8cb18de861ca0330387c008b7c8289f9", "title": "Can we learn to gamble efficiently", "text": "Betting is an important problem faced by millions of sports fans each day. Presented with an upcoming matchup between team A and team B, and given the opportunity to place a 50/50 wager on either, where should a gambler put her money? This decision is not, of course, made in isolation: both teams will have played a number of decided matches with other teams throughout the season. Furthermore, a reasonable assumption to make is that the relation \u201cA tends to beat B\u201d is transitive. Under transitivity, the best prediction strategy is clearly to sort the teams by their abilities and predict according to this ranking. The obvious difficulty is that the best ranking of the teams is not known an advance. But there\u2019s a more subtle issue: even with knowledge of all match outcomes in advance, i.e. a list of items of the form (team X < team Y ), it\u2019s NP-hard to determine the best ranking of the teams when the outcomes are noisy. This is exactly the infamous Minimum Feedback Arc Set problem. The question we pose is as follows: can we design a non-trivial online prediction strategy in this setting which achieves vanishing regret relative to the best ranking in hindsight, even when the latter is computationally infeasible? It is tempting to believe this is impossible, as competing with the best ranking would appear tantamount to finding the best ranking. However, this assertion is false: the algorithm need not learn a ranking explicitly, it must only output a prediction (X < Y ) or (Y < X) when presented with a pair (X, Y ), and these predictions do not necessarily have to satisfy transitivity. Indeed, consider the following simple algorithm: treat each team pair (X, Y ) as an independent learning problem (ignoring all other matchups). In the long run, this will achieve vanishing regret with respect to the best ranking. So why is this not desirable? The trivial approach unfortunately admits a bad regret bound: the algorithm must see O(n) matches per team before it can start to make decent predictions. On the other hand, there is an information-theoretic approach that requires only O(log n) observations per team\u2014the downside, unfortunately, is that this requires exponential computation. We would like to achieve this rate with an efficient method."}
{"_id": "2a2c6b4d9e9d0f9d7aaedb641281f36b7e95f368", "title": "Towards Building Forensics Enabled Cloud Through Secure Logging-as-a-Service", "text": "Collection and analysis of various logs (e.g., process logs, network logs) are fundamental activities in computer forensics. Ensuring the security of the activity logs is therefore crucial to ensure reliable forensics investigations. However, because of the black-box nature of clouds and the volatility and co-mingling of cloud data, providing the cloud logs to investigators while preserving users' privacy and the integrity of logs is challenging. The current secure logging schemes, which consider the logger as trusted cannot be applied in clouds since there is a chance that cloud providers (logger) collude with malicious users or investigators to alter the logs. In this paper, we analyze the threats on cloud users' activity logs considering the collusion between cloud users, providers, and investigators. Based on the threat model, we propose Secure-Logging-as-a-Service ( SecLaaS), which preserves various logs generated for the activity of virtual machines running in clouds and ensures the confidentiality and integrity of such logs. Investigators or the court authority can only access these logs by the RESTful APIs provided by SecLaaS, which ensures confidentiality of logs. The integrity of the logs is ensured by hash-chain scheme and proofs of past logs published periodically by the cloud providers. In prior research, we used two accumulator schemes Bloom filter and RSA accumulator to build the proofs of past logs. In this paper, we propose a new accumulator scheme - Bloom-Tree, which performs better than the other two accumulators in terms of time and space requirement."}
{"_id": "75889981a9cdaebbec99b4e1f124410f32f59de4", "title": "Cloud forensics: A research perspective", "text": "Cloud computing and digital forensics are both developing topics and researching these topics requires an understanding of the main aspects of both cloud computing and digital forensics. In cloud computing it is necessary not only to understand its characteristics and the different services and deployment models but also to survey the underpinning elements of cloud computing such as virtualization and the distributed computing which are important to identify its impact on current digital forensics guidelines and procedures. Unlike papers discussing the challenges and opportunities presented by cloud computing in relation to digital forensics, in this paper, we will discuss the underpinning cloud computing elements which are required to provide forensics friendly cloud services. Furthermore, we suggest a set of questions that will aid in the process of cloud forensics analysis."}
{"_id": "4b1576a5f98ea32b8eafcadcd9ec5068ab8d5564", "title": "The wisdom development scale: further validity investigations.", "text": "Researchers are gaining an interest in the concept of wisdom, a more holistic yet often ineffable educational outcome. Models of wisdom abound, but few have rigorously tested measures. This study looks at Brown's (2004a, 2004b) Model of Wisdom Development and its associated measure, the Wisdom Development Scale (WDS; Brown & Greene, 2006). The construct validity, measurement invariance, criterion validity, and reliability of scores from the WDS were assessed with over 3000 participants from two separate groups: one a sample of professionals and the other a sample of college students. Support for construct validity and reliability with these samples was found, along with measurement invariance. Latent means analyses showed predicted discrimination between the groups, and criterion validity evidence, with another measure of collegiate educational outcomes, was found."}
{"_id": "591b33e9ee81d4b4ae8be76d279e5128ebc7972d", "title": "Rule Extraction from Support Vector Machines: An Overview of Issues and Application in Credit Scoring", "text": "1 Department of Decision Sciences and Information Management, K.U.Leuven Naamsestraat 69, B-3000 Leuven, Belgium {David.Martens;Johan.Huysmans; Bart.Baesens;Jan.Vanthienen}@econ.kuleuven.be 2 School of Computing, National University of Singapore, 3 Science Drive 2, Singapore 117543, Singapore rudys@comp.nus.edu.sg 3 University of Southampton, School of Management, Highfield Southampton, SO17 1BJ, UK Bart@soton.ac.uk"}
{"_id": "bfffeb30c922fcf2d0c4033da6ef01a704ca2584", "title": "Disgust is a factor in extreme prejudice.", "text": "Understanding intergroup prejudice is a dominant research focus for social psychology. Prejudice is usually conceptualized as a continuum of positive/negative affect, but this has limitations. It neither reflects people's ability to maintain simultaneous positive and negative stereotypes of others nor explains extreme prejudice (bigotry). Some researchers have proposed multidimensional models of prejudice in which different negative emotions are evoked depending on the situation. Extending this to bigotry raises the question of which emotions are most relevant. Therefore, this study looked at 'anti-group' texts--writings which promote extreme intergroup hostility--and analysed the frequency of emotive language. Findings suggest that bigotry may be distinguished by high levels of disgust."}
{"_id": "9cc14fafb440222869311e8e857423b359b6dd69", "title": "Thinking forensics: Cognitive science for forensic practitioners.", "text": "Human factors and their implications for forensic science have attracted increasing levels of interest across criminal justice communities in recent years. Initial interest centred on cognitive biases, but has since expanded such that knowledge from psychology and cognitive science is slowly infiltrating forensic practices more broadly. This article highlights a series of important findings and insights of relevance to forensic practitioners. These include research on human perception, memory, context information, expertise, decision-making, communication, experience, verification, confidence, and feedback. The aim of this article is to sensitise forensic practitioners (and lawyers and judges) to a range of potentially significant issues, and encourage them to engage with research in these domains so that they may adapt procedures to improve performance, mitigate risks and reduce errors. Doing so will reduce the divide between forensic practitioners and research scientists as well as improve the value and utility of forensic science evidence."}
{"_id": "f6e6c66512d3e0527794cd21dc1240b2fd4b60d4", "title": "CoSIP: A Constrained Session Initiation Protocol for the Internet of Things", "text": "The Internet of Things (IoT) refers to the interconnection of billions of constrained devices, denoted as \u201csmart objects\u201d (SO), in an Internet-like structure. SOs typically feature limited capabilities in terms of computation and memory and operate in constrained environments, such low-power lossy networks. As IP has been foreseen as the standard for smart-object communication, an e\u21b5ort to bring IP connectivity to SOs and define suitable communication protocols (i.e. CoAP) is being carried out within standardization organisms, such as IETF. In this paper, we propose a constrained version of the Session Initiation Protocol (SIP), named \u201cCoSIP\u201d, whose intent is to allow constrained devices to instantiate communication sessions in a lightweight and standard fashion. Session instantiation can include a negotiation phase of some parameters which will be used for all subsequent communication. CoSIP can be adopted in several application scenarios, such as service discovery and publish/subscribe applications, which are detailed. An evaluation of the proposed protocol is also presented, based on a Java implementation of CoSIP, to show the benefits that its adoption can bring about, in terms of compression rate with the existing SIP protocol and message overhead compared with the use of CoAP."}
{"_id": "374b0382a975db8cc6fcebbd4abb7d1233e972da", "title": "The relational model is dead, SQL is dead, and I don't feel so good myself", "text": "We report the opinions expressed by well-known database researchers on the future of the relational model and SQL during a panel at the International Workshop on Non-Conventional Data Access (NoCoDa 2012), held in Florence, Italy in October 2012 in conjunction with the 31st International Conference on Conceptual Modeling. The panelists include: Paolo Atzeni (Universit\u00e0 Roma Tre, Italy), Umeshwar Dayal (HP Labs, USA), Christian S. Jensen (Aarhus University, Denmark), and Sudha Ram (University of Arizona, USA). Quotations from movies are used as a playful though effective way to convey the dramatic changes that database technology and research are currently undergoing."}
{"_id": "a2c557be770b997c3876619a140a88513916c199", "title": "Adaptively generating high quality fixes for atomicity violations", "text": "It is difficult to fix atomicity violations correctly. Existing gate lock algorithm (GLA) simply inserts gate locks to serialize exe-cutions, which may introduce performance bugs and deadlocks. Synthesized context-aware gate locks (by Grail) require complex source code synthesis. We propose \uf061Fixer to adaptively fix ato-micity violations. It firstly analyses the lock acquisitions of an atomicity violation. Then it either adjusts the existing lock scope or inserts a gate lock. The former addresses cases where some locks are used but fail to provide atomic accesses. For the latter, it infers the visibility (being global or a field of a class/struct) of the gate lock such that the lock only protects related accesses. For both cases, \uf061Fixer further eliminates new lock orders to avoid introducing deadlocks. Of course, \uf061Fixer can produce both kinds of fixes on atomicity violations with locks. The experi-mental results on 15 previously used atomicity violations show that: \uf061Fixer correctly fixed all 15 atomicity violations without introducing deadlocks. However, GLA and Grail both intro-duced 5 deadlocks. HFix (that only targets on fixing certain types of atomicity violations) only fixed 2 atomicity violations and introduced 4 deadlocks. \uf061Fixer also provides an alternative way to insert gate locks (by inserting gate locks with proper visibility) considering fix acceptance."}
{"_id": "594d358856d68e9d055b81185b0df04c7923535f", "title": "A model for team-based access control (TMAC 2004)", "text": "Role based access control (RBAC) has been proved to be effective for defining access control. However, in an environment where collaborative work is needed, additional features should be added on top of RBAC to accommodate the new requirements. In this paper we describe a team access control extension model called TMAC04 (Team-Based Access Control 2004), which is built on the well-known RBAC. The TMAC04 model efficiently represents teamwork in the real world. It allows certain users to join a team based on their existing roles in an organization within limited contexts and new permissions to perform the required work."}
{"_id": "e592381a9c3ebc8e20c237dcc1700ac65e326841", "title": "Lumbar stabilization: a review of core concepts and current literature, part 2.", "text": "Lumbar-stabilization exercise programs have become increasingly popular as a treatment for low-back pain. In this article, we outline an evidence-based medicine approach to evaluating patients for a lumbar-stabilization program. We also discuss typical clinical components of this type of program and the rationale for including these particular features based on the medical literature."}
{"_id": "5ae26c80b3ea607e54cd0dad5ef4505156fbba24", "title": "Mars Exploration Rover mobility assembly design, test and performance", "text": "In January 2004, NASA landed two mobile robotic spacecraft, or rovers, on opposite sides of the planet Mars. These rovers, named Spirit and Opportunity, were each sent on their own scientific mission of exploration. Their objective was to work as robotic geologists. After more than a year the vehicles are still in excellent health, and returning vast amounts of scientific information on the ancient water processes that helped form Mars. Key to the success of the rovers was the development of their advanced mobility system. In this paper the mobility assembly, the mechanical hardware that determines the vehicles mobility capability, is described. The details of the design, test, and performance of the mobility assembly are shown to exceed the mission requirements. The rovers' ability to traverse the Mars terrain with its combination of rocks, craters, soft soils, and hills was verified, and the system design validated."}
{"_id": "1c5be5e7009dd91513f7593417e50a82bb903377", "title": "AZIMUT, a leg-track-wheel robot", "text": "AZIMUT is a mobile robotic platform that combines wheels, legs and tracks to move in three-dimensional environments. The robot is symmetrical and is made of four independent leg-track-wheel articulations. It can move with its articulations up, down or straight, or to move sideways without changing the robot's orientation. To validate the concept, the rst prototype developed measures 70.5 cm 70.5 cm with the articulations up. It has a body clearance of 8.4 cm to 40.6 cm depending on the position of the articulations. The design of the robot is highly modular, with distributed embedded systems to control the di erent components of the robot."}
{"_id": "4cd3cb09bb9d30c9cb5a74f4257a49ebe798ab79", "title": "Rocky 7: a next generation Mars rover prototype", "text": "This paper provides a system overview of a new Mars rover prototype, Rocky 7 1. We describe all system aspects: mechanical and electrical design, computer and software infrastructure , algorithms for navigation and manipulation, science data acquisition, and outdoor rover testing. In each area, the improved or added functionality is explained in a context of its path to ight, and within the constraints of desired science missions."}
{"_id": "eeb8c7a22f731839755a4e820b608215e9885276", "title": "Stability and traction control of an actively actuated micro-rover", "text": ""}
{"_id": "db806e6ed3004760f9e20d485c5721647697e5a3", "title": "A 5.58 nW Crystal Oscillator Using Pulsed Driver for Real-Time Clocks", "text": "A 5.58 nW real-time clock using a crystal oscillator is presented. In this circuit, the amplifier used in a traditional circuit is replaced with pulsed drivers. The pulse is generated with precise timing using a DLL. With this approach, an extremely low oscillation amplitude of 160 mV results in low-power operation. This low-amplitude oscillation is sustained robustly using additional supply voltages: a lower supply for the final drive stage and a higher supply used for pulses that drive the final drive stage, which ensures low ON-resistance necessary for reliable operation. The different supply levels are generated on-chip by a switched capacitor network (SCN) from a single supply. The circuit has been tested at different supply voltages and temperatures. It shows a minimum power consumption of 5.58 nW and power supply sensitivity of 30.3 ppm/V over supply voltage of 0.94-1.2 V, without degrading the crystal's temperature dependency: between -20 \u00b0C and 80 \u00b0C. Moreover, its performance as a real-time clock has been verified by measurement of an Allan deviation of 1.16 \u00d7 10-8."}
{"_id": "64bca350109cb52aecaa038b627605f7143bfcc6", "title": "Understanding Intra-Class Knowledge Inside CNN", "text": "Convolutional Neural Network (CNN) has been successful in image recognition tasks, and recent works shed lights on how CNN separates different classes with the learned inter-class knowledge through visualization [8, 10, 13]. In this work, we instead visualize the intra-class knowledge inside CNN to better understand how an object class is represented in the fully-connected layers. To invert the intraclass knowledge into more interpretable images, we propose a non-parametric patch prior upon previous CNN visualization models [8, 10]. With it, we show how different \u201cstyles\u201d of templates for an object class are organized by CNN in terms of location and content, and represented in a hierarchical and ensemble way. Moreover, such intra-class knowledge can be used in many interesting applications, e.g. style-based image retrieval and style-based object completion."}
{"_id": "c0b54e53b2702f403369ceb9924769bfc4fcebb3", "title": "Branches of evolutionary algorithms and their effectiveness for clustering records", "text": "Clustering is a process that aims to group the similar records in one cluster and dissimilar records in different clusters. K-means is one of the most popular and well-known clustering technique for its simplicity and light weight. However, the main drawback of K-means clustering technique is that it requires a user (data miner) to estimate the number of clusters in advance. Another limitation of K-means is that it has a tendency to get stuck at local optima. In order to overcome these limitations many evolutionary algorithm based clustering techniques have been proposed since the 1990s and applied to various fields. In this paper, we present an up-to-date review of some major evolutionary algorithm based clustering techniques for the last twenty (20) years (1995-2015). A total of 63 ranked (i.e. based on citation reports and JCR/CORE rank) evolutionary algorithm based clustering approaches are reviewed. Maximum of the techniques do not require any user to define the number of clusters in advance. We present the limitations and advantages of some evolutionary algorithm based clustering techniques. We also present a thorough discussion and future research directions of evolutionary algorithm based clustering techniques."}
{"_id": "6c561c9eef1a2b4c28f05eb4986910b9db168145", "title": "13C-NMR-Based Metabolomic Profiling of Typical Asian Soy Sauces", "text": "It has been a strong consumer interest to choose high quality food products with clear information about their origin and composition. In the present study, a total of 22 Asian soy sauce samples have been analyzed in terms of (13)C-NMR spectroscopy. Spectral data were analyzed by multivariate statistical methods in order to find out the important metabolites causing the discrimination among typical soy sauces from different Asian regions. It was found that significantly higher concentrations of glutamate in Chinese red cooking (CR) soy sauce may be the result of the manual addition of monosodium glutamate (MSG) in the final soy sauce product. Whereas lower concentrations of amino acids, like leucine, isoleucine and valine, observed in CR indicate the different fermentation period used in production of CR soy sauce, on the other hand, the concentration of some fermentation cycle metabolites, such as acetate and sucrose, can be divided into two groups. The concentrations of these fermentation cycle metabolites were lower in CR and Singapore Kikkoman (SK), whereas much higher in Japanese shoyu (JS) and Taiwan (China) light (TL), which depict the influence of climatic conditions. Therefore, the results of our study directly indicate the influences of traditional ways of fermentation, climatic conditions and the selection of raw materials and can be helpful for consumers to choose their desired soy sauce products, as well as for researchers in further authentication studies about soy sauce."}
{"_id": "7ddcae313dbd1550a255e1a91a32eea0448f6f34", "title": "A Dose of Reality: Overcoming Usability Challenges in VR Head-Mounted Displays", "text": "This video presents insights into the usability challenges present in consumer VR Head-Mounted Displays regarding a users' capability to interact with and be aware of reality. We demonstrate how these issues can be overcome through selectively incorporating necessary elements of reality into VR, as a user engages with reality. We term this approach Engagement-Dependent Augmented Virtuality."}
{"_id": "9ad1200bfbb4df99228fea6f7651a885c369a9b7", "title": "Understanding Human Mobility from Twitter", "text": "Understanding human mobility is crucial for a broad range of applications from disease prediction to communication networks. Most efforts on studying human mobility have so far used private and low resolution data, such as call data records. Here, we propose Twitter as a proxy for human mobility, as it relies on publicly available data and provides high resolution positioning when users opt to geotag their tweets with their current location. We analyse a Twitter dataset with more than six million geotagged tweets posted in Australia, and we demonstrate that Twitter can be a reliable source for studying human mobility patterns. Our analysis shows that geotagged tweets can capture rich features of human mobility, such as the diversity of movement orbits among individuals and of movements within and between cities. We also find that short- and long-distance movers both spend most of their time in large metropolitan areas, in contrast with intermediate-distance movers' movements, reflecting the impact of different modes of travel. Our study provides solid evidence that Twitter can indeed be a useful proxy for tracking and predicting human movement."}
{"_id": "927822a5ebfa171c90e1aed749028d6afac77870", "title": "Volume/weight/cost comparison of a 1MVA 10 kV/400 V solid-state against a conventional low-frequency distribution transformer", "text": "Solid-State Transformers (SSTs) are an emergent topic in the context of the Smart Grid paradigm, where SSTs could replace conventional passive transformers to add flexibility and controllability, such as power routing capabilities or reactive power compensation, to the grid. This paper presents a comparison of a 1000 kVA three-phase, low-frequency distribution transformer (LFT) and an equally rated SST, with respect to volume, weight, losses, and material costs, where the corresponding data of the SST is partly based on a full-scale prototype design. It is found that the SST's costs are at least five times and its losses about three times higher, its weight similar but its volume reduced to less than 80%. In addition, an AC/DC application is also considered, where the comparison turns out in favor of the SST-based concept, since its losses are only about half compared to the LFT-based system, and the volume and the weight are reduced to about one third, whereas the material costs advantage of the LFT is much less pronounced."}
{"_id": "51bbda3972c8d0eb954b61766b088848a699a5de", "title": "Civitas: The Smart City Middleware, from Sensors to Big Data", "text": "Software development for smart cities bring into light new concerns, such as how to deal with scalability, heterogeneity (sensors, actuators, high performance computing devices, etc.), geolocation information or privacy issues, among some. Traditional approaches to distributed systems fail to address these challenges, because they were mainly devoted to enterprise IT environments. This paper proposes a middleware framework, called Civitas, specially devoted to support the task of service development for the Smart City paradigm. This paper also analyzes the main drawbacks of traditional approaches to middleware service development and how these are overcome in the middleware framework proposed here."}
{"_id": "68c2e0864224468b66abf37bd87c0897aa477b90", "title": "On the exploitation of a high-throughput SHA-256 FPGA design for HMAC", "text": "High-throughput and area-efficient designs of hash functions and corresponding mechanisms for Message Authentication Codes (MACs) are in high demand due to new security protocols that have arisen and call for security services in every transmitted data packet. For instance, IPv6 incorporates the IPSec protocol for secure data transmission. However, the IPSec's performance bottleneck is the HMAC mechanism which is responsible for authenticating the transmitted data. HMAC's performance bottleneck in its turn is the underlying hash function. In this article a high-throughput and small-size SHA-256 hash function FPGA design and the corresponding HMAC FPGA design is presented. Advanced optimization techniques have been deployed leading to a SHA-256 hashing core which performs more than 30% better, compared to the next better design. This improvement is achieved both in terms of throughput as well as in terms of throughput/area cost factor. It is the first reported SHA-256 hashing core that exceeds 11Gbps (after place and route in Xilinx Virtex 6 board)."}
{"_id": "1d8308b0e83fa16f50f506185d49473a0e7aa5d4", "title": "Service Migration Patterns -- Decision Support and Best Practices for the Migration of Existing Service-Based Applications to Cloud Environments", "text": "In many ways cloud computing is an extension of the service-oriented computing (SOC) approach to create resilient and elastic hosting environments and applications. Service-oriented Architectures (SOA), thus, share many architectural properties with cloud environments and cloud applications, such as the distribution of application functionality among multiple application components (services) and their loosely coupled integration to form a distributed application. Existing service-based applications are, therefore, ideal candidates to be moved to cloud environments in order to benefit from the cloud properties, such as elasticity or pay-per-use pricing models. In order for such an application migration and the overall restructuring of an IT application landscape to be successful, decisions have to be made regarding (i) the portion of the application stack to be migrated and (ii) the process to follow during the migration in order to guarantee an acceptable service level to application users. In this paper, we present best practices how we addressed these challenges in form of service migration patterns as well as a methodology how these patterns should be applied during the migration of a service-based application or multiples thereof. Also, we present an implementation of the approach, which has been used to migrate a web-application stack from Amazon Web Services to the T-Systems cloud offering Dynamic Services for Infrastructure (DSI)."}
{"_id": "dd2f411593178a9d9a59537cf2ddc3f91371f15a", "title": "Comparison of optical see-through head-mounted displays for surgical interventions with object-anchored 2D-display", "text": "Optical see-through head-mounted displays (OST-HMD) feature an unhindered and instantaneous view of the surgery site and can enable a mixed reality experience for surgeons during procedures. In this paper, we present a systematic approach to identify the criteria for evaluation of OST-HMD technologies for specific clinical scenarios, which benefit from using an object-anchored 2D-display visualizing medical information. Criteria for evaluating the performance of OST-HMDs for visualization of medical information and its usage are identified and proposed. These include text readability, contrast perception, task load, frame rate, and system lag. We choose to compare three commercially available OST-HMDs, which are representatives of currently available head-mounted display technologies. A multi-user study and an offline experiment are conducted to evaluate their performance. Statistical analysis demonstrates that Microsoft HoloLens performs best among the three tested OST-HMDs, in terms of contrast perception, task load, and frame rate, while ODG R-7 offers similar text readability. The integration of indoor localization and fiducial tracking on the HoloLens provides significantly less system lag in a relatively motionless scenario. With ever more OST-HMDs appearing on the market, the proposed criteria could be used in the evaluation of their suitability for mixed reality surgical intervention. Currently, Microsoft HoloLens may be more suitable than ODG R-7 and Epson Moverio BT-200 for clinical usability in terms of the evaluated criteria. To the best of our knowledge, this is the first paper that presents a methodology and conducts experiments to evaluate and compare OST-HMDs for their use as object-anchored 2D-display during interventions."}
{"_id": "be492fa6af329a487e2048df6c931e7f9d41d451", "title": "Extracting and Retargeting Color Mappings from Bitmap Images of Visualizations", "text": "Visualization designers regularly use color to encode quantitative or categorical data. However, visualizations \u201cin the wild\u201d often violate perceptual color design principles and may only be available as bitmap images. In this work, we contribute a method to semi-automatically extract color encodings from a bitmap visualization image. Given an image and a legend location, we classify the legend as describing either a discrete or continuous color encoding, identify the colors used, and extract legend text using OCR methods. We then combine this information to recover the specific color mapping. Users can also correct interpretation errors using an annotation interface. We evaluate our techniques using a corpus of images extracted from scientific papers and demonstrate accurate automatic inference of color mappings across a variety of chart types. In addition, we present two applications of our method: automatic recoloring to improve perceptual effectiveness, and interactive overlays to enable improved reading of static visualizations."}
{"_id": "bc7c48be4156f5f42b0add032b509c23c88681f6", "title": "A BIOMECHANICAL ANALYSIS OF FRONT VERSUS BACK SQUAT: INJURY IMPLICATIONS", "text": "The aim of this study was to examine the differences in trunk and lower limb kinematics between the front and back squat. 2D kinematic data was collected as participants (n = 12) completed three repetitions of both front and back squat exercises at 50 % of their back squat one repetition maximum. Stance width was standardised at 107(\u00b110) % of biacromial breadth. The Wilcoxon signed ranks test was used to examine significant differences in dependent variables between both techniques. Results showed that the back squat exhibited a significantly greater trunk lean than the front squat throughout (p < 0.05) with no differences occurring in knee joint kinematics. The results of this study in conjunction with other squat related literature (Russell et al., 1989) suggest that the back squat gives rise to an increased risk of lower back injury."}
{"_id": "0d0da3e46ad8a8fd79ec4b689e4601e84dad9595", "title": "A content-driven reputation system for the wikipedia", "text": "We present a content-driven reputation system for Wikipedia authors. In our system, authors gain reputation when the edits they perform to Wikipedia articles are preserved by subsequent authors, and they lose reputation when their edits are rolled back or undone in short order. Thus, author reputation is computed solely on the basis of content evolution; user-to-user comments or ratings are not used. The author reputation we compute could be used to flag new contributions from low-reputation authors, or it could be used to allow only authors with high reputation to contribute to controversialor critical pages. A reputation system for the Wikipedia could also provide an incentive for high-quality contributions. We have implemented the proposed system, and we have used it to analyze the entire Italian and French Wikipedias, consisting of a total of 691, 551 pages and 5, 587, 523 revisions. Our results show that our notion of reputation has good predictive value: changes performed by low-reputation authors have a significantly larger than average probability of having poor quality, as judged by human observers, and of being later undone, as measured by our algorithms."}
{"_id": "723375f6048b23fb17d523a255c30703842171c9", "title": "Energy Management Fuzzy Logic Supervisory for Electric Vehicle Power Supplies System", "text": "This paper introduces an energy management strategy based on fuzzy logic supervisory for road electric vehicle, combining a fuel cell power source and two energy storage devices, i.e., batteries and ultracapacitors. The control strategy is designed to achieve the high-efficiency operation region of the individual power source and to regulate current and voltage at peak and average power demand, without compromising the performance and efficiency of the overall system. A multiple-input power electronic converter makes the interface among generator, energy storage devices, and the voltage dc-link bus. Classical regulators implement the control loops of each input of the converter. The supervisory system coordinates the power flows among the power sources and the load. The paper is mainly focused on the fuzzy logic supervisory for energy management of a specific power electronic converter control algorithm. Nevertheless, the proposed system can be easily adapted to other converters arrangements or to distributed generation applications. Simulation and experimental results on a 3-kW prototype prove that the fuzzy logic is a suitable energy management control strategy."}
{"_id": "732313849ee46dd92647e19c4ef9473eb0f08e0e", "title": "Byzantine Agreement , Made Trivial Silvio Micali", "text": "We present a very simple, cryptographic, Byzantine-agreement protocol that, with n = 3t + 1 players, t of which are malicious, halts in expected 9 rounds."}
{"_id": "93fcfe49812a28f2c5569c06c1a69444e45e7a70", "title": "A wireless electrocardiogram detection for personal health monitoring", "text": "The current paper presents low-power analog integrated circuits (ICs) for wireless electrocardiogram (ECG) detection in personal health monitoring. Considering the power-efficient communication in the body sensor network (BSN), the required low-power analog ICs are developed for a healthcare system through miniaturization and system integration. The proposed system comprises the design and implementation with three subsystems, namely, (1) the ECG acquisition node, (2) the protocol for standard IEEE 802.15.4 ZigBee system, and (3) the radio frequency (RF) transmitter circuits. A preamplifier, a low-pass filter, and a successive-approximation analog-to-digital converter (SA-ADC) are integrated to detect an ECG signal. For high integration, the ZigBee protocol is adopted for wireless communication. To transmit the ECG signal through wireless communication, a quadrature voltage-controlled oscillator and a 2.4 GHz low-IF transmitter with a power amplifier and up-conversion mixer are also developed. In the receiver, a 2.4 GHz fully integrated CMOS radio-frequency front-end with a low-noise amplifier, and a quadrature mixer is proposed. The low-power wireless bio-signal acquisition SoC (WBSA-SoC) has been implemented in TSMC 0.18-\u03bcm standard CMOS process. The measurement results on the human body reveal that the ECG signals can be acquired effectively by the proposed SoC."}
{"_id": "959feac19b902741f50708bf5e21f2a8351428a1", "title": "16 Personalization in E-Commerce Applications", "text": "This chapter is about personalization and adaptation in electronic commerce (e-commerce) applications. In the first part, we briefly introduce the challenges posed by e-commerce and we discuss how personalization strategies can help companies to face such challenges. Then, we describe the aspects of personalization, taken as a general technique for the customization of services to the user, which have been successfully employed in e-commerce Web sites. To conclude, we present some emerging trends and and we discuss future perspectives."}
{"_id": "811b625834ef59f0eecf58fda38f3d4f3165186b", "title": "Motivation and Personality : A Neuropsychological Perspective", "text": "Personality is strongly influenced by motivation systems that organise responses to rewards and punishments and that drive approach and avoidance behavior. Neuropsychological research has identified: (a) two avoidance systems, one related to pure active avoidance and escape, and one to passive avoidance and behavioral inhibition produced by goal-conflict; and (b) two approach systems, one related to the actions of reward seeking and one to experience and behavior related to pleasure on receiving reward. These systems mediate fluid moment-by-moment reactions to changing stimuli, with relatively stable person-specific sensitivities to these stimuli manifested in personality traits. We review what is known about these motivational traits, integrating the theory-driven approach based on animal learning paradigms with the empirical tradition of the Big Five personality model. People differ from one another, and this fact is obvious to everyone. It is common to talk about people\u2019s personalities using lexical terms to describe their characteristic ways of thinking, feeling and behaving (e.g., \u2018bold\u2019, \u2018lazy\u2019, \u2018intelligent\u2019), and we use these descriptors to infer people\u2019s intentions and likely future behavior. Personality psychologists have long analyzed the ratings of large numbers of trait descriptive adjectives to produce the most widely used taxonomy of personality: the Big Five, which includes the dimensions of Extraversion, Neuroticism, Conscientiousness, Agreeableness, and Openness to Experience \u2044 Intellect (John, Naumann, & Soto, 2008). These Big Five traits also emerge from existing personality questionnaires that were not designed specifically to measure them (e.g., Markon, Krueger, & Watson, 2005), suggesting it is a good candidate for a consensual model in personality psychology. What the Big Five model does not immediately offer, however, is an explanation for the causal sources of personality traits. Why do people think, feel, and act in the ways that they do? People react to situations, of course; but different people react differently to the same situation, suggesting that they have different behavioral propensities. In order to answer this why question, we must discover what drives people\u2019s actions and reactions. Inferring motivation from observed personality has been something of a dark art in psychology. However, one promising approach to this question is based on the biology of motivational control systems, studied by psychologists for over a century in non-human animals, and for somewhat less time in humans. This approach operates on the premise that stable individual differences in behavior (personality traits) must be due to relatively stable individual differences in the operation of brain systems that produce (state) behavior from moment-tomoment. From this perspective, each of our many traits reflects the operations of a set of brain systems that has evolved to respond to a different class of functional requirements (Denissen & Penke, 2008; McNaughton, 1989; Nettle, 2006; Pickering & Gray, 1999). 
In what follows, we focus on those motivational processes and personality traits most closely aligned with biological research on reactions to reward and punishment and associated approach and avoidance behavior. This focus is warranted both by the importance of these phenomena for motivation and by the existence of extensive research on them. Our aim is to offer an introduction for researchers wishing to explore the role of motivation in personality from the perspective of these underlying psychobiological systems. Only after a description of what is known about the operation of these systems do we branch out to consider the personality traits associated with them. Our major assumption is that most fundamental personality traits have a motivational core; and we aim to show that the descriptive personality research tradition, which produced the Big Five, can be integrated with the experimental research tradition that has focused on the sensitivities of basic motivation systems. In this review, we focus on systems related to approach and avoidance primarily at the level of explanation that Gray (1975) labeled \u2018the conceptual nervous system\u2019, which is based on analysis of behavior as well as neurobiology and attempts to describe important psychological processes without specifying their exact instantiation in the nervous system \u2013 this approach has afforded a detailed analysis of reactions to classes of motivationally significant stimuli and can be used to derive predictions concerning the functions of the real nervous system (e.g., in fMRI studies). Rather than going into extensive detail regarding the biological basis of the systems, we focus primarily on their functions, discussing biological evidence only when it is necessary for addressing some functional question. Approach-Avoidance Theories of Motivation and Their Relation to Personality The most important classes of motivational stimuli can be grouped into \u2018rewards\u2019 and \u2018punishments\u2019. Animals can be seen as cybernetic systems with attractors and repulsors (positive and negative goals) that have evolved to promote survival and reproduction (Carver & Scheier, 1998; DeYoung, 2010d). Without a tendency to approach beneficial stimuli (e.g., food, drink, and sexual mates) and to avoid aversive stimuli (e.g., predators and poisons) a species would not survive. \u2018Reward\u2019 and \u2018punishment\u2019 may seem straightforward concepts, but they hide some nonobvious complexities. For the classical behaviorist, rewards increase the frequency of the behavior leading to them, whereas punishments decrease the frequency of behavior leading to them. That is, a \u2018reward\u2019 is something a person will work to obtain; and a \u2018punishment\u2019 is something a person will work to avoid. But the behaviorist definition of \u2018reward\u2019 also includes a different class of stimuli, namely the termination or omission of expected punishment. The effect on behavior and emotion of the \u2018hope\u2019 of achieving a reward is similar to that of anticipated \u2018relief\u2019 through avoiding a punishment. Similarly, although a \u2018punishment\u2019 can be described as something people will work to avoid or escape from (or which they will attack defensively), the omission of an expected reward is experienced as punishing; an effect known as frustrative nonreward.
Thus, \u2018fear\u2019 has important similarities with \u2018frustration\u2019. (For further discussion of this literature, see Corr & McNaughton, 2012.) These complexities can be understood straightforwardly from the cybernetic perspective, in which rewards are any stimuli that indicate progress toward or attainment of a goal, whereas punishments are any stimuli that disrupt progress toward a goal. However, in any experimental situation, it is necessary to confirm that the subject perceives stimuli as actually rewarding and punishing, as there are likely to be significant individual differences in how people react to the same stimuli (for further discussion of this point, see Corr, forthcoming). Current approach-avoidance theories trace their origins to early researchers who posited that two motivation/emotion processes underlie behavior (e.g., Konorski, 1967; Mowrer, 1960; Schneirla, 1959), one related to reward (approach behavior and positive emotions), and the other to punishment (avoidance behavior and negative emotions). Neuroscience measures, including pharmacological manipulation, assessment of neural activity, and neuroanatomical studies, have been used to investigate the neuropsychological systems that underlie reactions to these classes of stimuli, providing confirmation of the hypothesis that distinct systems underlie reward- and punishment-related motivation (Gray & McNaughton, 2000). This animal-based work migrated into personality psychology in the 1970s via Jeffrey A. Gray (e.g., 1970, 1972a,b, 1975, 1977), whose Reinforcement Sensitivity theory (RST) argued that the major traits of personality reflect long-term stabilities in systems that mediate reactions to different classes of reinforcing stimuli, generating emotion and shaping (\u2018motivating\u2019) approach and avoidance behavior. The leap from understanding motivational systems to understanding personality traits requires the postulate that relatively stable individual differences exist in the operations of these brain-behavioral systems. A personality trait can be defined as a probabilistic constant in equations that predict the frequency and intensity with which individuals exhibit various motivational states, as well as the behavioral, emotional, and cognitive states that accompany these motivational states (DeYoung, 2010c; Fleeson, 2001; Fleeson & Gallagher, 2009). Note that this assumes exposure of the population to a normal range of situations. If situations are limited to prevent exposure to some trait-relevant class of situations, then individual differences in that trait may not be manifest. A neuropsychological approach to personality aims to understand both the biological systems that are responsible for the states associated with any given trait and the parameters of those systems that cause them to differ across individuals. The systems themselves will be present in every intact human brain, but the values of their parameters will vary from person to person. Thus, for example, all people have brain systems that respond to punishing stimuli, but in different individuals these systems respond differently to a given stimulus. It is the typical level of response of such a system in any given individual, averaged across different situations, that is associated with that individual\u2019s score on the personality trait in question.
This is not to imply that an individual wil"}
{"_id": "0b0aca78dd2cb0776d7a526af7198348683ac407", "title": "A New Method for Yielding a Database of Location Fingerprints in WLAN", "text": "Location fingerprinting in wireless LAN (WLAN) positioning has received much attention recently. One of the key issues of this technique is generating the database of \u2018fingerprints\u2019. The conventional method does not utilise the spatial correlation of measurements sampled at adjacent reference points (RPs), and the \u2018training\u2019 process is not an easy task. A new method based on kriging is presented in this paper. An experiment shows that the new method can not only achieve more accurate estimation, but can also greatly reduce the workload and save training time. This can make the fingerprinting technique more flexible and easier to implement. Binghao Li, Yufei Wang, Andrew Dempster, Chris Rizos are with the School of Surveying and Spatial Information System, University of New South Wales, Sydney, Australia Hyung Keun Lee is with the School of Avionics and Telecommunication, Hankuk Aviation University, Kyunggi-do, Korea"}
{"_id": "56f491ab34991a729c7e9e6cb111551fcc9e6709", "title": "Hybrid algorithm for indoor positioning using wireless LAN", "text": "Locating an indoor mobile station based on a wireless communication infrastructure, has practical applications. The most widely employed methods today use an RF propagation loss (PL) model or location fingerprinting (LF). The PL method is known to perform poorly compared to LF. But LF requires an extensive training dataset and cannot adapt well to configuration changes or receiver breakdown. In this paper, we develop a hybrid method that combines the strength of these two methods. It first formulates the RF PL in a nonlinear, censored regression model and adjusts the regression function to the observed signal strength in the fingerprint dataset. In the absence of a training dataset, the hybrid method coincides with the PL method, and, as the spatial granularity of the training dataset increases, the result of the algorithm approaches the result of the LF method. It balances the flexibility and accuracy of the two traditional methods, makes intelligent use of missing values, produces error bounds, and can be made dynamic. We evaluate the performance of the algorithm by applying it to a real site and observe satisfactory positioning accuracy."}
{"_id": "c2eb695eb37d236ef4ecd96283455b8af8e5a03c", "title": "WLAN location determination via clustering and probability distributions", "text": "We present a WLAN location determination technique, the Joint Clustering technique, that uses: (1) signal strength probability distributions to address the noisy wireless channel, and (2) clustering of locations to reduce the computational cost of searching the radio map. The Joint Clustering technique reduces computational cost by more than an order of magnitude, compared to the current state of the art techniques, allowing non-centralized implementation on mobile clients. Results from 802.11-equipped iPAQ implementations show that the new technique gives user location to within 7 feet with over 90% accuracy."}
{"_id": "12035569a91a5886c41439217191a83551a7514e", "title": "A Knowledge-Augmented Neural Network Model for Implicit Discourse Relation Classification", "text": "Identifying discourse relations that are not overtly marked with discourse connectives remains a challenging problem. The absence of explicit clues indicates a need for the combination of world knowledge and weak contextual clues, which can hardly be learned from a small amount of manually annotated data. In this paper, we address this problem by augmenting the input text with external knowledge and context and by adopting a neural network model that can effectively handle the augmented text. Experiments show that external knowledge did improve the classification accuracy. On the other hand, contextual information provided no significant gain for implicit discourse relations while it worked for explicit ones."}
{"_id": "e0638e0628021712ac76e3472663ccc17bd8838c", "title": "SIGN LANGUAGE RECOGNITION: STATE OF THE ART", "text": "Sign language is used by deaf and hard hearing people to exchange information between their own community and with other people. Computer recognition of sign language deals from sign gesture acquisition and continues till text/speech generation. Sign gestures can be classified as static and dynamic. However static gesture recognition is simpler than dynamic gesture recognition but both recognition systems are important to the human community. The sign language recognition steps are described in this survey. The data acquisition, data preprocessing and transformation, feature extraction, classification and results obtained are examined. Some future directions for research in this area also suggested."}
{"_id": "2198a500c53c95091dec5b28f66b0937be4a60e4", "title": "Non-Orthogonal Multiple Access Based Integrated Terrestrial-Satellite Networks", "text": "In this paper, we investigate the downlink transmission of a non-orthogonal multiple access (NOMA)-based integrated terrestrial-satellite network, in which the NOMA-based terrestrial networks and the satellite cooperatively provide coverage for ground users while reusing the entire bandwidth. For both terrestrial networks and the satellite network, multi-antennas are equipped and beamforming techniques are utilized to serve multiple users simultaneously. A channel quality-based scheme is proposed to select users for the satellite, and we then formulate the terrestrial user pairing as a max\u2013min problem to maximize the minimum channel correlation between users in one NOMA group. Since the terrestrial networks and the satellite network will cause interference to each other, we first investigate the capacity performance of the terrestrial networks and the satellite networks separately, which can be decomposed into the designing of beamforming vectors and the power allocation schemes. Then, a joint iteration algorithm is proposed to maximize the total system capacity, where we introduce the interference temperature limit for the satellite since the satellite can cause interference to all base station users. Finally, numerical results are provided to evaluate the user paring scheme as well as the total system performance, in comparison with some other proposed algorithms and existing algorithms."}
{"_id": "ddf7349594c0302ed7fbb6ceb606831dc92812fa", "title": "Handling class imbalance in customer churn prediction", "text": "0957-4174/$ see front matter 2008 Elsevier Ltd. A doi:10.1016/j.eswa.2008.05.027 * Corresponding author. Tel.: +32 9 264 89 80; fax: E-mail address: dirk.vandenpoel@UGent.be (D. Va URL: http://www.crm.UGent.be (D. Van den Poel). Customer churn is often a rare event in service industries, but of great interest and great value. Until recently, however, class imbalance has not received much attention in the context of data mining [Weiss, G. M. (2004). Mining with rarity: A unifying framework. SIGKDD Explorations, 6 (1), 7\u201319]. In this study, we investigate how we can better handle class imbalance in churn prediction. Using more appropriate evaluation metrics (AUC, lift), we investigated the increase in performance of sampling (both random and advanced under-sampling) and two specific modelling techniques (gradient boosting and weighted random forests) compared to some standard modelling techniques. AUC and lift prove to be good evaluation metrics. AUC does not depend on a threshold, and is therefore a better overall evaluation metric compared to accuracy. Lift is very much related to accuracy, but has the advantage of being well used in marketing practice [Ling, C., & Li, C. (1998). Data mining for direct marketing problems and solutions. In Proceedings of the fourth international conference on knowledge discovery and data mining (KDD-98). New York, NY: AAAI Press]. Results show that under-sampling can lead to improved prediction accuracy, especially when evaluated with AUC. Unlike Ling and Li [Ling, C., & Li, C. (1998). Data mining for direct marketing problems and solutions. In Proceedings of the fourth international conference on knowledge discovery and data mining (KDD98). New York, NY: AAAI Press], we find that there is no need to under-sample so that there are as many churners in your training set as non churners. Results show no increase in predictive performance when using the advanced sampling technique CUBE in this study. This is in line with findings of Japkowicz [Japkowicz, N. (2000). The class imbalance problem: significance and strategies. In Proceedings of the 2000 international conference on artificial intelligence (IC-AI\u20192000): Special track on inductive learning, Las Vegas, Nevada], who noted that using sophisticated sampling techniques did not give any clear advantage. Weighted random forests, as a cost-sensitive learner, performs significantly better compared to random forests, and is therefore advised. It should, however always be compared to logistic regression. Boosting is a very robust classifier, but never outperforms any other technique. 2008 Elsevier Ltd. All rights reserved."}
{"_id": "95e4c1c7d715203a0a9a0f8ed65d5e856865740f", "title": "community2vec: Vector representations of online communities encode semantic relationships", "text": "Vector embeddings of words have been shown to encode meaningful semantic relationships that enable solving of complex analogies. This vector embedding concept has been extended successfully to many different domains and in this paper we both create and visualize vector representations of an unstructured collection of online communities based on user participation. Further, we quantitatively and qualitatively show that these representations allow solving of semantically meaningful community analogies and also other more general types of relationships. These results could help improve community recommendation engines and also serve as a tool for sociological studies of community relatedness."}
{"_id": "9eee4c9bc050b39d1f32bb5f61d9bdc2582ba029", "title": "7th order sub-millimeter wave frequency multiplier based on graphene implemented using a microstrip transition between two rectangular waveguides", "text": "In this work, a 7th order sub-millimeter wave frequency multiplier for Terahertz applications is presented. The multiplier is based on a microstrip line with a small gap where a graphene film layer is deposited. The multiplication phenomena is reached via the nonlinear behavior of the graphene film using an input signal provided by the WR28 standard waveguide operating in the Ka frequency band (31 to 40 GHz). A prototype of the multiplier has been manufactured and the behavior of the output power in the 220-280 GHz band versus the input power has been experimentally characterized."}
{"_id": "8d790e0caae136ac2c8100a86cf633189d094e46", "title": "An FPGA Platform for Real-Time Simulation of Spiking Neuronal Networks", "text": "In the last years, the idea to dynamically interface biological neurons with artificial ones has become more and more urgent. The reason is essentially due to the design of innovative neuroprostheses where biological cell assemblies of the brain can be substituted by artificial ones. For closed-loop experiments with biological neuronal networks interfaced with in silico modeled networks, several technological challenges need to be faced, from the low-level interfacing between the living tissue and the computational model to the implementation of the latter in a suitable form for real-time processing. Field programmable gate arrays (FPGAs) can improve flexibility when simple neuronal models are required, obtaining good accuracy, real-time performance, and the possibility to create a hybrid system without any custom hardware, just programming the hardware to achieve the required functionality. In this paper, this possibility is explored presenting a modular and efficient FPGA design of an in silico spiking neural network exploiting the Izhikevich model. The proposed system, prototypically implemented on a Xilinx Virtex 6 device, is able to simulate a fully connected network counting up to 1,440 neurons, in real-time, at a sampling rate of 10 kHz, which is reasonable for small to medium scale extra-cellular closed-loop experiments."}
{"_id": "ea558efe0d5049315f0e4575d695945841398a25", "title": "Induction of mitochondrial-mediated apoptosis by Morinda citrifolia (Noni) in human cervical cancer cells.", "text": "Cervical cancer is the second most common cause of cancer in women and has a high mortality rate. Cisplatin, an antitumor agent, is generally used for its treatment. However, the administration of cisplatin is associated with side effects and intrinsic resistance. Morinda citrifolia (Noni), a natural plant product, has been shown to have anti-cancer properties. In this study, we used Noni, cisplatin, and the two in combination to study their cytotoxic and apoptosis-inducing effects in cervical cancer HeLa and SiHa cell lines. We demonstrate here, that Noni/Cisplatin by themselves and their combination were able to induce apoptosis in both these cell lines. Cisplatin showed slightly higher cell killing as compared to Noni and their combination showed additive effects. The observed apoptosis appeared to be mediated particularly through the up-regulation of p53 and pro-apoptotic Bax proteins, as well as down- regulation of the anti-apoptotic Bcl-2, Bcl-XL proteins and survivin. Augmentation in the activity of caspase-9 and -3 was also observed, suggesting the involvement of the intrinsic mitochondrial pathway of apoptosis for both Noni and Cisplatin in HeLa and SiHa cell lines."}
{"_id": "b128d69fa7ccba2a80a6029d2208feddb246b6bd", "title": "Applying Machine Learning to Text Segmentation for Information Retrieval", "text": "We propose a self-supervised word segmentation technique for text segmentation in Chinese information retrieval. This method combines the advantages of traditional dictionary based, character based and mutual information based approaches, while overcoming many of their shortcomings. Experiments on TREC data show this method is promising. Our method is completely language independent and unsupervised, which provides a promising avenue for constructing accurate multi-lingual or cross-lingual information retrieval systems that are flexible and adaptive. We find that although the segmentation accuracy of self-supervised segmentation is not as high as some other segmentation methods, it is enough to give good retrieval performance. It is commonly believed that word segmentation accuracy is monotonically related to retrieval performance in Chinese information retrieval. However, for Chinese, we find that the relationship between segmentation and retrieval performance is in fact nonmonotonic; that is, at around 70% word segmentation accuracy an over-segmentation phenomenon begins to occur which leads to a reduction in information retrieval performance. We demonstrate this effect by presenting an empirical investigation of information retrieval on Chinese TREC data, using a wide variety of word segmentation algorithms with word segmentation accuracies ranging from 44% to 95%, including 70% word segmentation accuracy from our self-supervised word-segmentation approach. It appears that the main reason for the drop in retrieval performance is that correct compounds and collocations are preserved by accurate segmenters, while they are broken up by less accurate (but reasonable) segmenters, to a surprising advantage. This suggests that words themselves might be too broad a notion to conveniently capture the general semantic meaning of Chinese text. Our research suggests machine learning techniques can play an important role in building adaptable information retrieval systems and different evaluation standards for word segmentation should be given to different applications."}
{"_id": "b2e8e0a9e94783cd7616582e4b72a93f121888a0", "title": "Kirigami skins make a simple soft actuator crawl", "text": "Bioinspired soft machines made of highly deformable materials are enabling a variety of innovative applications, yet their locomotion typically requires several actuators that are independently activated. We harnessed kirigami principles to significantly enhance the crawling capability of a soft actuator. We designed highly stretchable kirigami surfaces in which mechanical instabilities induce a transformation from flat sheets to 3D-textured surfaces akin to the scaled skin of snakes. First, we showed that this transformation was accompanied by a dramatic change in the frictional properties of the surfaces. Then, we demonstrated that, when wrapped around an extending soft actuator, the buckling-induced directional frictional properties of these surfaces enabled the system to efficiently crawl."}
{"_id": "3fe70222056286c4241488bf687afbd6db02d207", "title": "Learning Hand-Eye Coordination for Robotic Grasping with Large-Scale Data Collection", "text": "We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware. Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing."}
{"_id": "c27bfdf94eb39009b04e242e9dff9a46769fe2a2", "title": "A PRINCIPAL AXIS TRANSFORMATION FOR NON-HERMITIAN MATRICES CARL ECKART AND GALE YOUNG The availability of the principal axis transformation for hermitian", "text": "each two nonparallel elements of G cross each other. Obviously the conclusions of the theorem do not hold. The following example will show that the condition that no two elements of the collection G shall have a complementary domain in common is also necessary. In the cartesian plane let M be a circle of radius 1 and center at the origin, and iVa circle of radius 1 and center at the point (5, 5). Let d be a collection which contains each continuum which is the sum of M and a horizontal straight line interval of length 10 whose left-hand end point is on the circle M and which contains no point within M. Let G2 be a collection which contains each continuum which is the sum of N and a vertical straight line interval of length 10 whose upper end point is on the circle N and which contains no point within N. Let G = Gi+G2 . No element of G crosses any other element of G, but uncountably many have a complementary domain in common with some other element of the collection. However, it is evident that no countable subcollection of G covers the set of points each of which is common to two continua of the collection G. I t is not known whether or not the condition that each element of G shall separate some complementary domain of every other one can be omitted."}
{"_id": "a95c2fbffa7bc9b2c530dec353822833d2e4822b", "title": "Tooth survival following non-surgical root canal treatment: a systematic review of the literature.", "text": "AIMS\nTo investigate (i) the effect of study characteristics on reported tooth survival after root canal treatment (RCTx) and (ii) the effect of clinical factors on the proportion of root filled teeth surviving after RCTx.\n\n\nMETHODOLOGY\nLongitudinal human clinical studies investigating tooth survival after RCTx which were published up to the end of 2007 were identified electronically (MEDLINE and Cochrane database 1966-2007 December, week 4). In addition, four journals (Dental Traumatology, International Endodontic Journal, Journal of Endodontics, Oral Surgery Oral Medicine Oral Pathology Oral Radiology & Endodontics), bibliographies of all relevant articles and review articles were hand searched. Two reviewers (Y-LN, KG) assessed and selected the studies based on specified inclusion criteria and extracted the data onto a pre-designed proforma, independently. The criteria were as follows: (i) clinical study on RCTx; (ii) stratified analysis of primary and secondary RCTx available; (iii) sample size given and larger than 10; (iv) at least 6-month postoperative review; (v) success based on survival of tooth; and (vi) proportion of teeth surviving after treatment given or could be calculated from the raw data. Three strands of evidence or analyses were used to triangulate a consensus view. The reported findings from individual studies, including those excluded for quantitative analysis, were utilized for the intuitive synthesis, which constituted the first strand of evidence. Secondly, the pooled weighted proportion of teeth surviving and thirdly the combined effects of potential prognostic factors were estimated using the fixed and random effects meta-analyses on studies fulfilling all the inclusion criteria.\n\n\nRESULTS\nOf the 31 articles identified, 14 studies published between 1993 and 2007 were included. The majority of studies were retrospective (n = 10) and only four prospective. The pooled percentages of reported tooth survival over 2-3, 4-5 and 8-10 years following RCTx were 86% (95% CI: 75%, 98%), 93% (95% CI: 92%, 94%) and 87% (95% CI: 82%, 92%), respectively. Substantial differences in study characteristics were found to hinder effective direct comparison of findings. Evidence for the effect of prognostic factors on tooth survival was weak. Based on the data available for meta-analyses, four conditions were found to significantly improve tooth survival. In descending order of influence, the conditions increasing observed proportion of survival were as follows: (i) a crown restoration after RCTx; (ii) tooth having both mesial and distal proximal contacts; (iii) tooth not functioning as an abutment for removable or fixed prosthesis; and (iv) tooth type or specifically non-molar teeth. Statistical heterogeneity was substantial in some cases but its source could not be investigated because of insufficient available information.\n\n\nCONCLUSIONS\nThe pooled proportion of teeth surviving over 2-10 years following RCTx ranged between 86% and 93%. Four factors (listed above) were identified as significant prognostic factors with concurrence between all three strands of evidence."}
{"_id": "d26a0b517feeaa53df5a384c2cdebbcc97318753", "title": "100 top-cited scientific papers in limb prosthetics", "text": "Research has tremendously contributed to the developments in both practical and fundamental aspects of limb prosthetics. These advancements are reflected in scientific articles, particularly in the most cited papers. This article aimed to identify the 100 top-cited articles in the field of limb prosthetics and to investigate their main characteristics. Articles related to the field of limb prosthetics and published in the Web of Knowledge database of the Institute for Scientific Information (ISI) from the period of 1980 to 2012. The 100 most cited articles in limb prosthetics were selected based on the citation index report. All types of articles except for proceedings and letters were included in the study. The study design and level of evidence were determined using Sackett's initial rules of evidence. The level of evidence was categorized either as a systematic review or meta-analysis, randomized controlled trial, cohort study, case-control study, case series, expert opinion, or design and development. The top cited articles in prosthetics were published from 1980 to 2012 with a citation range of 11 to 90 times since publication. The mean citation rate was 24.43 (SD 16.7) times. Eighty-four percent of the articles were original publications and were most commonly prospective (76%) and case series studies (67%) that used human subjects (96%) providing level 4 evidence. Among the various fields, rehabilitation (47%), orthopedics (29%), and sport sciences (28%) were the most common fields of study. The study established that studies conducted in North America and were written in English had the highest citations. Top cited articles primarily dealt with lower limb prosthetics, specifically, on transtibial and transradial prosthetic limbs. Majority of the articles were experimental studies."}
{"_id": "5db6564b3f43dee8cb454bd1c7b919051ca2fcba", "title": "Requirements engineering and the creative process in the video game industry", "text": "The software engineering process in video game development is not clearly understood, hindering the development of reliable practices and processes for this field. An investigation of factors leading to success or failure in video game development suggests that many failures can be traced to problems with the transition from preproduction to production. Three examples, drawn from real video games, illustrate specific problems: 1) how to transform documentation from its preproduction form to a form that can be used as a basis for production;, 2) how to identify implied information in preproduction documents; and 3) how to apply domain knowledge without hindering the creative process. We identify 3 levels of implication and show that there is a strong correlation between experience and the ability to identify issues at each level. The accumulated evidence clearly identifies the need to extend traditional requirements engineering techniques to support the creative process in video game development."}
{"_id": "01905a9c0351aad54ee7dbba1544cd9db06ca935", "title": "An integrated conceptual model for information system security risk management supported by enterprise architecture management", "text": "Risk management is today a major steering tool for any organisation wanting to deal with information system (IS) security. However, IS security risk management (ISSRM) remains a difficult process to establish and maintain, mainly in a context of multi-regulations with complex and inter-connected IS. We claim that a connection with enterprise architecture management (EAM) contributes to deal with these issues. A first step towards a better integration of both domains is to define an integrated EAM-ISSRM conceptual model. This paper is about the elaboration and validation of this model. To do so, we improve an existing ISSRM domain model, i.e. a conceptual model depicting the domain of ISSRM, with the concepts of EAM. The validation of the EAM-ISSRM integrated model is then performed with the help of a validation group assessing the utility and usability of the model."}
{"_id": "765b28edf5e68f37d05d6e521a6b97ac9b4b7bb6", "title": "A simple regression method for mapping quantitative trait loci in line crosses using flanking markers", "text": "The use of flanking marker methods has proved to be a powerful tool for the mapping of quantitative trait loci (QTL) in the segregating generations derived from crosses between inbred lines. Methods to analyse these data, based on maximum-likelihood, have been developed and provide good estimates of QTL effects in some situations. Maximum-likelihood methods are, however, relatively complex and can be computationally slow. In this paper we develop methods for mapping QTL based on multiple regression which can be applied using any general statistical package. We use the example of mapping in an F2 population and show that these regression methods produce very similar results to those obtained using maximum likelihood. The relative simplicity of the regression methods means that models with more than a single QTL can be explored and we give examples of two linked loci and of two interacting loci. Other models, for example with more than two QTL, with environmental fixed effects, with between family variance or for threshold traits, could be fitted in a similar way. The ease, speed of application and generality of regression methods for flanking marker analyses, and the good estimates they obtain, suggest that they should provide the method of choice for the analysis of QTL mapping data from inbred line crosses."}
{"_id": "d1f7a69d46d09054dcdfce253b89ecede60a724c", "title": "Comparative analysis of big data transfer protocols in an international high-speed network", "text": "Large-scale scientific installations generate voluminous amounts of data (or big data) every day. These data often need to be transferred using high-speed links (typically with 10 Gb/s or more link capacity) to researchers located around the globe for storage and analysis. Efficiently transferring big data across countries or continents requires specialized big data transfer protocols. Several big data transfer protocols have been proposed in the literature, however, a comparative analysis of these protocols over a long distance international network is lacking in the literature. We present a comparative performance and fairness study of three open-source big data transfer protocols, namely, GridFTP, FDT, and UDT, using a 10 Gb/s high-speed link between New Zealand and Sweden. We find that there is limited performance difference between GridFTP and FDT. GridFTP is stable in terms of handling file system and TCP socket buffer. UDT has an implementation issue that limits its performance. FDT has issues with small buffer size limiting its performance, however, this problem is overcome by using multiple flows. Our work indicates that faster file systems and larger TCP socket buffers in both the operating system and application are useful in improving data transfer rates."}
{"_id": "4248a9dcabd62b14fa7181ae11e0b27bb08b1c95", "title": "Self-presentation and gender on MySpace", "text": "Available online 15 August 2008 Within the cultural context of MySpace, this study explores the ways emerging adults experience social networking. Through focus group methodology, the role of virtual peer interaction in the development of personal, social, and gender identities was investigated. Findings suggest that college students utilize MySpace for identity exploration, engaging in social comparison and expressing idealized aspects of the selves they wish to become. The public nature of self and relationship displays introduce feedback mechanisms by which emerging adults can legitimize images as associated with the self. Also, male\u2013female differences in self-presentation parallel, and possibly intensify, gender norms offline. Our study suggests that social networking sites provide valuable opportunities for emerging adults to realize possible selves; however, increased pressure for female sexual objectification and intensified social comparison may also negatively impact identity development. A balanced view, presenting both opportunities and drawbacks, should be encouraged in policies regarding youth participation in social networking sites. Published by Elsevier Inc."}
{"_id": "56ca8ae59288b20692a549887677f58f91e77954", "title": "How Envy Influences SNS Intentions to Use", "text": "Social networking sites (SNS) have grown to be one of the most prevalent technologies, providing users a variety of benefits. However, SNS also provide user sufficient opportunities to access others\u2019 positive information. This would elicit envy. In the current study, we develop a theoretical framework that elaborates the mechanism through which online envy is generated and influences SNS usage. We specify that online users experience two types of envy and each one could have distinct influences on continuance intention to use of SNS. Our findings provide valuable implications for both academic researchers and IS practitioners."}
{"_id": "6de4e806365af08e3445eb374fee96df74e08384", "title": "A Theory of Social Comparison Processes", "text": "In this paper we shall present a further development of a previously published theory concerning opinion influence processes in social groups (7). This further development has enabled us to extend the theory to deal with other areas, in addition to opinion formation, in which social comparison is important. Specifically, we shall develop below how the theory applies to the appraisal and evaluation of abilities as well as opinions. Such theories and hypotheses in the area of social psychology are frequently viewed in terms of how \u201cplausible\u201d they seem. \u201cPlausibility\u201d usually means whether or not the theory or hypothesis fits one\u2019s intuition or one\u2019s common sense. In this meaning much of the theory which is to be presented here is not\u201d plausible \u201c. The theory does, however, explain a considerable amount of data and leads to testable derivations. Three experiments, specifically designed to test predictions from this extension of the theory, have now been completed (5, 12, 19). They all provide good corroboration. We will in the following pages develop the theory and present the relevant data."}
{"_id": "846b59235a7b6f55728e2f308bb1f97a4a6dceba", "title": "Facebook use, envy, and depression among college students: Is facebooking depressing?", "text": "It is not\u2014unless it triggers feelings of envy. This study uses the framework of social rank theory of depression and conceptualizes Facebook envy as a possible link between Facebook surveillance use and depression among college students. Using a survey of 736 college students, we found that the effect of surveillance use of Facebook on depression is mediated by Facebook envy. However, when Facebook envy is controlled for, Facebook use actually lessens depression. 2014 Elsevier Ltd. All rights reserved."}
{"_id": "93adca9ce6f4a0fab9ea027c90b4df828cfa10d7", "title": "Learning Actionable Representations from Visual Observations", "text": "In this work we explore a new approach for robots to teach themselves about the world simply by observing it. In particular we investigate the effectiveness of learning task-agnostic representations for continuous control tasks. We extend Time-Contrastive Networks (TCN) that learn from visual observations by embedding multiple frames jointly in the embedding space as opposed to a single frame. We show that by doing so, we are now able to encode both position and velocity attributes significantly more accurately. We test the usefulness of this self-supervised approach in a reinforcement learning setting. We show that the representations learned by agents observing themselves take random actions, or other agents perform tasks successfully, can enable the learning of continuous control policies using algorithms like Proximal Policy Optimization (PPO) using only the learned embeddings as input. We also demonstrate significant improvements on the real-world Pouring dataset with a relative error reduction of 39.4% for motion attributes and 11.1% for static attributes compared to the single-frame baseline. Video results are available at https://sites.google.com/view/actionablerepresentations"}
{"_id": "3017197366cf3dec8b38f89e7be08cbf5d04a272", "title": "Survey of Surveys (SoS) - Mapping The Landscape of Survey Papers in Information Visualization", "text": "Information visualization as a field is growing rapidly in popularity since the first information visualization conference in 1995. However, as a consequence of its growth, it is increasingly difficult to follow the growing body of literature within the field. Survey papers and literature reviews are valuable tools for managing the great volume of previously published research papers, and the quantity of survey papers in visualization has reached a critical mass. To this end, this survey paper takes a quantum step forward by surveying and classifying literature survey papers in order to help researchers understand the current landscape of Information Visualization. It is, to our knowledge, the first survey of survey papers (SoS) in Information Visualization. This paper classifies survey papers into natural topic clusters which enables readers to find relevant literature and develops the first classification of classifications. The paper also enables researchers to identify both mature and less developed research directions as well as identify future directions. It is a valuable resource for both newcomers and experienced researchers in and outside the field of Information Visualization and Visual Analytics."}
{"_id": "492895decb5210cc5ec76ba1ceb9a886e51a0a16", "title": "MATLAB functions to analyze directional (azimuthal) data - I: Single-sample inference", "text": "Data that represent azimuthal directions cannot be analyzed statistically in the same manner as ordinary variables that measure, for example, size or quality. Many statistical techniques have been developed for dealing with directional data, but available software for the unique calculations is scarce. This paper describes a MATLAB script, located on the IAMG server, that calculates descriptive statistics and performs inference on azimuthal parameters. The calculations include tests for specific distributions, and tests on the preferred direction and concentration of vectors about this direction. The inference methods use large-sample approximations, plus resampling methods (bootstrap) for smaller sample sizes. r 2005 Elsevier Ltd. All rights reserved."}
{"_id": "ccc59d3bd1f32137c8e3fe591e85189dc2b29d75", "title": "Underwater Image Super-Resolution by Descattering and Fusion", "text": "Underwater images are degraded due to scatters and absorption, resulting in low contrast and color distortion. In this paper, a novel self-similarity-based method for descattering and super resolution (SR) of underwater images is proposed. The traditional approach of preprocessing the image using a descattering algorithm, followed by application of an SR method, has the limitation that most of the high-frequency information is lost during descattering. Consequently, we propose a novel high turbidity underwater image SR algorithm. We first obtain a high resolution (HR) image of scattered and descattered images by using a self-similarity-based SR algorithm. Next, we apply a convex fusion rule for recovering the final HR image. The super-resolved images have a reasonable noise level after descattering and demonstrate visually more pleasing results than conventional approaches. Furthermore, numerical metrics demonstrate that the proposed algorithm shows a consistent improvement and that edges are significantly enhanced."}
{"_id": "e9bb002640b61da66e34fd935bc06403cff513d4", "title": "DeepXplore: Automated Whitebox Testing of Deep Learning Systems", "text": "Deep learning (DL) systems are increasingly deployed in safety- and security-critical domains including self-driving cars and malware detection, where the correctness and predictability of a system's behavior for corner case inputs are of great importance. Existing DL testing depends heavily on manually labeled data and therefore often fails to expose erroneous behaviors for rare inputs.\n We design, implement, and evaluate DeepXplore, the first whitebox framework for systematically testing real-world DL systems. First, we introduce neuron coverage for systematically measuring the parts of a DL system exercised by test inputs. Next, we leverage multiple DL systems with similar functionality as cross-referencing oracles to avoid manual checking. Finally, we demonstrate how finding inputs for DL systems that both trigger many differential behaviors and achieve high neuron coverage can be represented as a joint optimization problem and solved efficiently using gradient-based search techniques.\n DeepXplore efficiently finds thousands of incorrect corner case behaviors (e.g., self-driving cars crashing into guard rails and malware masquerading as benign software) in state-of-the-art DL models with thousands of neurons trained on five popular datasets including ImageNet and Udacity self-driving challenge data. For all tested DL models, on average, DeepXplore generated one test input demonstrating incorrect behavior within one second while running only on a commodity laptop. We further show that the test inputs generated by DeepXplore can also be used to retrain the corresponding DL model to improve the model's accuracy by up to 3%."}
{"_id": "3924d32fc6350c2ad17e0d87b6951d0855288973", "title": "Establishing the computer-patient working alliance in automated health behavior change interventions.", "text": "Current user interfaces for automated patient and consumer health care systems can be improved by leveraging the results of several decades of research into effective patient-provider communication skills. A research project is presented in which several such \"relational\" skills - including empathy, social dialogue, nonverbal immediacy behaviors, and other behaviors to build and maintain good working relationships over multiple interactions - are explicitly designed into a computer interface within the context of a longitudinal health behavior change intervention for physical activity adoption. Results of a comparison among 33 subjects interacting near-daily with the relational system and 27 interacting near-daily with an identical system with the relational behaviors ablated, each for 30 days indicate, that the use of relational behaviors by the system significantly increases working alliance and desire to continue working with the system. Comparison of the above groups to another group of 31 subjects interacting with a control system near-daily for 30 days also indicated a significant increase in proactive viewing of health information."}
{"_id": "91df909e9dde8a2b5a8a29155976a587c187bc23", "title": "Automated Assessment of Surgical Skills Using Frequency Analysis", "text": "We present an automated framework for visual assessment of the expertise level of surgeons using the OSATS (Objective Structured Assessment of Technical Skills) criteria. Video analysis techniques for extracting motion quality via frequency coefficients are introduced. The framework is tested on videos of medical students with different expertise levels performing basic surgical tasks in a surgical training lab setting. We demonstrate that transforming the sequential time data into frequency components effectively extracts the useful information differentiating between different skill levels of the surgeons. The results show significant performance improvements using DFT and DCT coefficients over known state-of-the-art techniques."}
{"_id": "9de10673c62b00df092c6cff0122a72badf29909", "title": "Functional Near Infrared Spectroscope for Cognition Brain Tasks by Wavelets Analysis and Neural Networks", "text": "Brain Computer Interface (BCI) has been recently increased in research. Functional Near Infrared Spectroscope (fNIRs) is one the latest technologies which utilize light in the near-infrared range to determine brain activities. Because near infrared technology allows design of safe, portable, wearable, non-invasive and wireless qualities monitoring systems, fNIRs monitoring of brain hemodynamics can be value in helping to understand brain tasks. In this paper, we present results of fNIRs signal analysis indicating that there exist distinct patterns of hemodynamic responses which recognize brain tasks toward developing a BCI. We applied two different mathematics tools separately, Wavelets analysis for preprocessing as signal filters and feature extractions and Neural networks for cognition brain tasks as a classification module. We also discuss and compare with other methods while our proposals perform better with an average accuracy of 99.9% for classification. Keywords\u2014functional near infrared spectroscope (fNIRs), braincomputer interface (BCI), wavelets, neural networks, brain activity, neuroimaging."}
{"_id": "52e54b089ae39bbbad95ec2bdf08018504c276df", "title": "Treacher Collins syndrome: clinical implications for the paediatrician\u2014a new mutation in a severely affected newborn and comparison with three further patients with the same mutation, and review of the literature", "text": "Treacher Collins syndrome (TCS) is the most common and well-known mandibulofacial dysostosis caused by mutations in at least three genes involved in pre-rRNA transcription, the TCOF1, POLR1D and POLR1C genes. We present a severely affected male individual with TCS with a heterozygous de novo frameshift mutation within the TCOF1 gene (c.790_791delAG,p.Ser264GlnfsX7) and compare the clinical findings with three previously unpublished, milder affected individuals from two families with the same mutation. We elucidate typical clinical features of TCS and its clinical implications for the paediatrician and mandibulofacial surgeon, especially in severely affected individuals and give a short review of the literature. Conclusion:The clinical data of these three families illustrate that the phenotype associated with this specific mutation has a wide intra- and interfamilial variability, which confirms that variable expressivity in carriers of TCOF1 mutations is not a simple consequence of the mutation but might be modified by the combination of genetic, environmental and stochastic factors. Being such a highly complex disease treatment of individuals with TCS should be tailored to the specific needs of each individual, preferably by a multidisciplinary team consisting of paediatricians, craniofacial surgeons and geneticists."}
{"_id": "5b02cf69f2f9efe0cb61c922974748d10d1506af", "title": "Quality-of-service in cloud computing: modeling techniques and their applications", "text": "Recent years have seen the massive migration of enterprise applications to the cloud. One of the challenges posed by cloud applications is Quality-of-Service (QoS) management, which is the problem of allocating resources to the application to guarantee a service level along dimensions such as performance, availability and reliability. This paper aims at supporting research in this area by providing a survey of the state of the art of QoS modeling approaches suitable for cloud systems. We also review and classify their early application to some decision-making problems arising in cloud QoS management."}
{"_id": "bd2a1d74ba9ff2b9ec3bbc2efc715383de2272a7", "title": "Smart vehicle accident detection and alarming system using a smartphone", "text": "Vehicle accident is the paramount thread for the people's life which causes a serious wound or even dead. The automotive companies have made lots of progress in alleviating this thread, but still the probability of detrimental effect due to an accident is not reduced. Infringement of spFieed is one of the elementary reasons for a vehicle accident. Therewithal, external pressure and change of tilt angle with road surface blameworthy for this mishap. As soon as the emergency service could divulge about an accident, the more the effect would be mitigated. For this purpose, we developed an Android based application that detects an accidental situation and sends emergency alert message to the nearest police station and health care center. This application is integrated with an external pressure sensor to extract the outward force of the vehicle body. It measures speed and change of tilt angle with GPS and accelerometer sensors respectively on Android phone. By checking conditions, this application also capable of reducing the rate of false alarm."}
{"_id": "13b9903ba6ff0e45cdee1811986658a997b8c2f9", "title": "Robust Automatic Measurement of 3D Scanned Models for the Human Body Fat Estimation", "text": "In this paper, we present an automatic tool for estimating geometrical parameters from 3-D human scans independent on pose and robustly against the topological noise. It is based on an automatic segmentation of body parts exploiting curve skeleton processing and ad hoc heuristics able to remove problems due to different acquisition poses and body types. The software is able to locate body trunk and limbs, detect their directions, and compute parameters like volumes, areas, girths, and lengths. Experimental results demonstrate that measurements provided by our system on 3-D body scans of normal and overweight subjects acquired in different poses are highly correlated with the body fat estimates obtained on the same subjects with dual-energy X-rays absorptiometry (DXA) scanning. In particular, maximal lengths and girths, not requiring precise localization of anatomical landmarks, demonstrate a good correlation (up to 96%) with the body fat and trunk fat. Regression models based on our automatic measurements can be used to predict body fat values reasonably well."}
{"_id": "5550b610a93e44b47c056febaffbf288b3762149", "title": "Asymmetrical impact of trustworthiness attributes on trust, perceived value and purchase intention: a conceptual framework for cross-cultural study on consumer perception of online auction", "text": "Lack of trust has been frequently cited to be one of the key factors that discourage customers from participating in e-commerce, while cultural differences affect the formation of trust. As such, this cross-cultural study aims at establishing a conceptual framework to investigate the primary antecedents of online trust and the mechanism that converts consumer trust into perceived value and purchase intention in online auction. Two elements of our model bear mention here. First, the authors divide the consumer trust into two facets, seller and intermediary trust, and intend to assess their reciprocal relationships. Second, the authors intend to examine the asymmetrical impact of the negative and positive trustworthiness attributes on trust, and the interrelationship among trust, perceived value and purchase intention across cultural groups."}
{"_id": "84914ad3342d0c4cea14e8d8ebbbb7680e812ba8", "title": "Combining Classifiers for Chinese Word Segmentation", "text": "In this paper we report results of a supervised machine-learning approach to Chinese word segmentation. First, a maximum entropy tagger is trained on manually annotated data to automatically labels the characters with tags that indicate the position of character within a word. An error-driven transformation-based tagger is then trained to clean up the tagging inconsistencies of the first tagger. The tagged output is then converted into segmented text. The preliminary results show that this approach is competitive compared with other supervised machine-learning segmenters reported in previous studies."}
{"_id": "f4cd1a4229bc4d36172cf0b6c88bead2ca59ac01", "title": "Deep Transfer in Reinforcement Learning by Language Grounding", "text": "In this paper, we explore the utilization of natural language to drive transfer for reinforcement learning (RL). Despite the wide-spread application of deep RL techniques, learning generalized policy representations that work across domains remains a challenging problem. We demonstrate that textual descriptions of environments provide a compact intermediate channel to facilitate effective policy transfer. We employ a model-based RL approach consisting of a differentiable planning module, a model-free component and a factorized representation to effectively utilize entity descriptions. Our model outperforms prior work on both transfer and multi-task scenarios in a variety of different environments."}
{"_id": "33c5b066ebedd22675c55212232697060c7276ab", "title": "Demystifying automotive safety and security for semiconductor developer", "text": "Advances in both semiconductor and automotive industry are today enabling the next generation of vehicles with significant electronics content than ever before. Consumers can now avail vehicle offerings in the form of Electric and Hybrid Electric Vehicles (EV/HEV) that have improved fuel efficiency, provide enhanced driver-passenger comfort and experience through Advance Driver Assistance Systems (ADAS) and car infotainment systems, and more. Increasing electronics, software content, and connectivity drive two consumer concerns \u2014 \u201cfunctional safety\u201d and \u201csecurity\u201d \u2014 to the forefront. In this tutorial, we dissect these concerns from an end application perspective and translate the system level requirements and standards into semiconductor development requirements. We indicate both current and emerging practices, and touch upon areas requiring new or optimal design and electronic design automation (EDA) solutions. While functional safety is the primary focus for deep-dive in this tutorial, we also examine key facets of automotive security which is now emerging as a critical area for further understanding and standardization."}
{"_id": "2b5081abd1c0a606f63c58bfcf47bea6d2075ac5", "title": "Detecting influenza epidemics using search engine query data", "text": "Seasonal influenza epidemics are a major public health concern, causing tens of millions of respiratory illnesses and 250,000 to 500,000 deaths worldwide each year. In addition to seasonal influenza, a new strain of influenza virus against which no previous immunity exists and that demonstrates human-to-human transmission could result in a pandemic with millions of fatalities. Early detection of disease activity, when followed by a rapid response, can reduce the impact of both seasonal and pandemic influenza. One way to improve early detection is to monitor health-seeking behaviour in the form of queries to online search engines, which are submitted by millions of users around the world each day. Here we present a method of analysing large numbers of Google search queries to track influenza-like illness in a population. Because the relative frequency of certain queries is highly correlated with the percentage of physician visits in which a patient presents with influenza-like symptoms, we can accurately estimate the current level of weekly influenza activity in each region of the United States, with a reporting lag of about one day. This approach may make it possible to use search queries to detect influenza epidemics in areas with a large population of web search users."}
{"_id": "1976c9eeccc7115d18a04f1e7fb5145db6b96002", "title": "Freebase: a collaboratively created graph database for structuring human knowledge", "text": "Freebase is a practical, scalable tuple database used to structure general human knowledge. The data in Freebase is collaboratively created, structured, and maintained. Freebase currently contains more than 125,000,000 tuples, more than 4000 types, and more than 7000 properties. Public read/write access to Freebase is allowed through an HTTP-based graph-query API using the Metaweb Query Language (MQL) as a data query and manipulation language. MQL provides an easy-to-use object-oriented interface to the tuple data in Freebase and is designed to facilitate the creation of collaborative, Web-based data-oriented applications."}
{"_id": "237a38af298997604f0ee6d03ca2e25880750fab", "title": "Capturing Evolving Visit Behavior in Clickstream Data", "text": "Many online sites, both retailers and content providers, routinely monitor visitor traffic as a useful measure of their overall success. However, simple summaries such as the total number of visits per month provide little insight about individual-level site-visit patterns, especially in a changing environment such as the Internet. This article develops an individual-level model for evolving visiting behavior based on Internet clickstream data. We capture cross-sectional variation in site-visit behavior as well as changes over time as visitors gain experience with the site. In addition, we examine the relationship between visiting frequency and purchasing propensity at an e-commerce site. We find evidence supporting the notion that people who visit a retail site more frequently have a greater propensity to buy. We also show that changes (i.e., evolution) in an individual's visit frequency over time provides further information regarding which customer segments are likely to have higher purchasing conversion rates. Disciplines Business | Business Analytics | E-Commerce | Management Sciences and Quantitative Methods | Marketing | Technology and Innovation This technical report is available at ScholarlyCommons: https://repository.upenn.edu/marketing_papers/275 Capturing Evolving Visit Behavior in Clickstream Data Wendy W. Moe and Peter S. Fader The Wharton School University of Pennsylvania Marketing Department 1400 Steinberg Hall-Dietrich Hall Philadelphia, PA 19104-6371 telephone: (215) 898-7603 fax: (215) 898-2534 e-mail: wendy@marketing.wharton.upenn.edu"}
{"_id": "cd353b4c2659cc85b915ba156122b373f27447d8", "title": "Automatic Generation of Personas Using YouTube Social Media Data", "text": "We develop and implement an approach for automating generating personas in real time using actual YouTube social media data from a global media corporation. From the organization's YouTube channel, we gather demographic data, customer interactions, and topical interests, leveraging more than 188,000 subscriber profiles and more than 30 million user interactions. We demonstrate that online user data can be used to develop personas in real time using social media analytics. To get diverse aspects of personas, we collect user data from other social media channels as well and match them with the corresponding user data to generate richer personas. Our results provide corporate insights into competitive marketing, topical interests, and preferred product features for the users of the online news medium. Research implications are that reasonably rich personas can be generated in real time, instead of being the result of a laborious and time-consuming manual development process."}
{"_id": "9b0f919b5c7142564986acb84b9f246c8a693dca", "title": "Dual Encoding for Abstractive Text Summarization.", "text": "Recurrent neural network-based sequence-to-sequence attentional models have proven effective in abstractive text summarization. In this paper, we model abstractive text summarization using a dual encoding model. Different from the previous works only using a single encoder, the proposed method employs a dual encoder including the primary and the secondary encoders. Specifically, the primary encoder conducts coarse encoding in a regular way, while the secondary encoder models the importance of words and generates more fine encoding based on the input raw text and the previously generated output text summarization. The two level encodings are combined and fed into the decoder to generate more diverse summary that can decrease repetition phenomenon for long sequence generation. The experimental results on two challenging datasets (i.e., CNN/DailyMail and DUC 2004) demonstrate that our dual encoding model performs against existing methods."}
{"_id": "970dcf32eb95ca141238cec911b439017caee0c4", "title": "World of Warcraft and the impact of game culture and play in an undergraduate game design course", "text": "During the past two decades, digital games have become an increasingly popular source of study for academics, educational researchers and instructional designers. Much has been written about the potential of games for teaching and learning, both in the design of educational/serious games and the implementation of off-the-shelf games for learning. Yet relatively little research has been conducted about how game culture and the enmeshed practice of play may impact classroom dynamics. The purpose of this study is to present a case study about how the use of World of Warcraft (WoW) as a teaching tool and medium of play impacted class dynamics in an undergraduate university-level course for game design. Specifically, this study will address how WoW\u2019s game culture and the practice of play impacted (a) student-to-student dynamics and (b) class dynamics. The goal of this study is to explore some of the dynamics of play as a component of learning. 2010 Elsevier Ltd. All rights reserved."}
{"_id": "920246280e7e70900762ddfa7c41a79ec4517350", "title": "(MP)2T: Multiple People Multiple Parts Tracker", "text": "We present a method for multi-target tracking that exploits the persistence in detection of object parts. While the implicit representation and detection of body parts have recently been leveraged for improved human detection, ours is the first method that attempts to temporally constrain the location of human body parts with the express purpose of improving pedestrian tracking. We pose the problem of simultaneous tracking of multiple targets and their parts in a network flow optimization framework and show that parts of this network need to be optimized separately and iteratively, due to inter-dependencies of node and edge costs. Given potential detections of humans and their parts separately, an initial set of pedestrian tracklets is first obtained, followed by explicit tracking of human parts as constrained by initial human tracking. A merging step is then performed whereby we attempt to include part-only detections for which the entire human is not observable. This step employs a selective appearance model, which allows us to skip occluded parts in description of positive training samples. The result is high confidence, robust trajectories of pedestrians as well as their parts, which essentially constrain each other\u2019s locations and associations, thus improving human tracking and parts detection. We test our algorithm on multiple real datasets and show that the proposed algorithm is an improvement over the state-of-the-art."}
{"_id": "2c85cf8dbdeac83eea1bcdaa3a1663089010570b", "title": "The LLVM Compiler Framework and Infrastructure Tutorial", "text": "The LLVM Compiler Infrastructure (http://llvm.cs. uiuc.edu) is a robust system that is well suited for a wide variety of research and development work. This brief paper introduces the LLVM system and provides pointers to more extensive documentation, complementing the tutorial presented at LCPC. 1 Brief Overview and Goals The LLVM Compiler Infrastructure [2] is a language and target-independent compiler system, designed for both static and dynamic compilation. LLVM (Low Level Virtual Machine) can be used to build both traditional optimizing compilers for high performance programming languages as well as compiler-based tools such as Just-In-Time (JIT) translators, profilers, binary translators, memory sandboxing tools, static analysis tools, and others. A major goal of the LLVM development community is to provide a fast, robust, and well-documented core framework to support high-quality compilers and tools. A second major goal is to develop a broad community of contributors to ensure that the software continues to grow and be relevant, and to maintain its robustness and quality. A third goal is to provide a flexible infrastructure for new compiler development and research. 2 Key LLVM Design Features The LLVM system has three main components: the LLVM Virtual Instruction Set, a collection of reusable libraries for analysis, transformation and code generation; and tools built from these libraries. The LLVM framework is characterized by a clean, simple, and modular design, which allows new users to understand the representation, write new analyses, transformations, and tools easily. The LLVM Virtual Instruction Set is the common intermediate representation shared by all of the LLVM subsystems. It is a simple, mid-level, threeaddress code representation that is designed to be both language-independent and target-independent [1]. The instruction set provides explicit control-flow graphs, explicit dataflow information (in SSA form), and a mid-level, language R. Eigenmann et al. (Eds.): LCPC 2004, LNCS 3602, pp. 15\u201316, 2005. c \u00a9 Springer-Verlag Berlin Heidelberg 2005 16 C. Lattner and V. Adve independent type system. The type system is rich enough to support sophisticated, language-independent compiler techniques, including pointer analysis, dependence analysis, and interprocedural dataflow analyses, and transformations based on them. Exactly the same instruction set is used both as a persistent, external representation for compiled code and as the internal representation for all mid-level compiler passes. Making this representation persistent enables all LLVM passes to be used at compile-time, link-time, load-time, run-time, and \u201cidle-time\u201d (between program runs). The key design features and the innovative aspects of the code representation are are described in a paper [2], and the complete instruction set is described in an online reference manual [1]. The LLVM source base mostly consists of modular and reusable libraries. These components include analyses and optimizations, native code generators, JIT compiler support, profile-guided feedback, etc. These compiler components reduce the cost and difficulty of building compilers for new platforms, new source languages, and for building new kinds of tools (debuggers, profilers, etc). 
The LLVM system also includes several complete tools built from these components, including an ANSI-conforming C/C++ compiler (which uses the GNU Compiler Collection (GCC) parsers). The C/C++ compiler applies a large number of module-level (compile-time) and cross-module (link-time) optimizations. Other LLVM tools include programs to manipulate the IR (e.g. convert between ASCII and binary formats, extract functions from a program, etc), a modular optimizer (which can run any LLVM optimization on an program, driven from the command line), automated compiler debugging tools, and others."}
{"_id": "7f8ceee6645d0ff74d1a280dd4db257125a14eea", "title": "Normal vulvovaginal, perineal, and pelvic anatomy with reconstructive considerations.", "text": "A thorough insight into the female genital anatomy is crucial for understanding and performing pelvic reconstructive procedures. The intimate relationship between the genitalia and the muscles, ligaments, and fascia that provide support is complex, but critical to restore during surgery for correction of prolapse or aesthetic reasons. The external female genitalia include the mons pubis, labia majora and minora, clitoris, vestibule with glands, perineal body, and the muscles and fascia surrounding these structures. Through the perineal membrane and the perineal body, these superficial vulvar structures are structurally related to the deep pelvic muscle levator ani with its fascia. The levator ani forms the pelvic floor with the coccygeus muscle and provides vital support to all the pelvic organs and stability to the perineum. The internal female genital organs include the vagina, cervix, uterus, tubes, and ovaries with their visceral fascia. The visceral fascia also called the endopelvic fascia, surrounds the pelvic organs and connects them to the pelvic walls. It is continuous with the paraurethral and paravaginal fascia, which is attached to the perineal membrane. Thus, the internal and external genitalia are closely related to the muscles and fascia, and work as one functioning unit."}
{"_id": "3c5bb15767650257f22bb0b3420af857ff89c5f7", "title": "Automatic Color Calibration for Large Camera Arrays", "text": "We present a color calibration pipeline for large camera arrays. We assume static lighting conditions for each camera, such as studio lighting or a stationary array outdoors. We also assume we can place a planar calibration target so it is visible from every camera. Our goal is uniform camera color responses, not absolute color accuracy, so we match the cameras to each other instead of to a color standard. We first iteratively adjust the color channel gains and offsets for each camera to make their responses as similar as possible. This step white balances the cameras, and for studio applications, ensures that the range of intensities in the scene are mapped to the usable output range of the cameras. Residual errors are then calibrated in post-processing. We present results calibrating an array of 100 CMOS image sensors in different physical configurations, including closely or widely spaced cameras with overlapping fields of views, and tightly packed cameras with non-overlapping fields of view. The process is entirely automatic, and the camera configuration runs in less than five minutes on the 100 camera array. CR Categories: I.4.1 [Image Processing and Computer Vision]: Digitization and Image Capture.Camera Calibration;"}
{"_id": "4931c91f4b30eb122def1e697abc096f14c48987", "title": "Learning values across many orders of magnitude", "text": "Most learning algorithms are not invariant to the scale of the function that is being approximated. We propose to adaptively normalize the targets used in learning. This is useful in value-based reinforcement learning, where the magnitude of appropriate value approximations can change over time when we update the policy of behavior. Our main motivation is prior work on learning to play Atari games, where the rewards were all clipped to a predetermined range. This clipping facilitates learning across many different games with a single learning algorithm, but a clipped reward function can result in qualitatively different behavior. Using the adaptive normalization we can remove this domain-specific heuristic without diminishing overall performance."}
{"_id": "2321157bf6d830f67ff0399e2cd0d2d1c91a9079", "title": "Propagation Models for Trust and Distrust in Social Networks", "text": "Semantic Web endeavors have mainly focused on issues pertaining to knowledge representation and ontology design. However, besides understanding information metadata stated by subjects, knowing about their credibility becomes equally crucial. Hence, trust and trust metrics, conceived as computational means to evaluate trust relationships between individuals, come into play. Our major contribution to Semantic Web trust management through this work is twofold. First, we introduce a classification scheme for trust metrics along various axes and discuss advantages and drawbacks of existing approaches for Semantic Web scenarios. Hereby, we devise an advocacy for local group trust metrics, guiding us to the second part which presents Appleseed, our novel proposal for local group trust computation. Compelling in its simplicity, Appleseed borrows many ideas from spreading activation models in psychology and relates their concepts to trust evaluation in an intuitive fashion. Moreover, we provide extensions for the Appleseed nucleus that make our trust metric handle distrust statements."}
{"_id": "77b99e0a3a6f99537a4b497c5cd67be95c1b7088", "title": "The Human Element in Autonomous Vehicles", "text": "Autonomous vehicle research has been prevalent for well over a decade but only recently has there been a small amount of research conducted on the human interaction that occurs in autonomous vehicles. Although functional software and sensor technology is essential for safe operation, which has been the main focus of autonomous vehicle research, handling all elements of human interaction is also a very salient aspect of their success. This paper will provide an overview of the importance of human vehicle interaction in autonomous vehicles, while considering relevant related factors that are likely to impact adoption. Particular attention will be given to prior research conducted on germane areas relating to control in the automobile, in addition to the different elements that are expected to affect the likelihood of success for these vehicles initially developed for human operation. This paper will also include a discussion of the limited research conducted to consider interactions with humans and the current state of published functioning software and sensor technology that exists."}
{"_id": "3968f91edaba00dff1383a156226cc56ee3b9581", "title": "Data and Code for \"Automatic Identification of Narrative Diegesis and Point of View\"", "text": "The style of narrative news affects how it is interpreted and received by readers. Two key stylistic characteristics of narrative text are point of view and diegesis: respectively, whether the narrative recounts events personally or impersonally, and whether the narrator is involved in the events of the story. Although central to the interpretation and reception of news, and of narratives more generally, there has been no prior work on automatically identifying these two characteristics in text. We develop automatic classifiers for point of view and diegesis, and compare the performance of different feature sets for both. We built a goldstandard corpus where we double-annotated to substantial agreement (\u03ba > 0.59) 270 English novels for point of view and diegesis. As might be expected, personal pronouns comprise the best features for point of view classification, achieving an average F1 of 0.928. For diegesis, the best features were personal pronouns and the occurrences of first person pronouns in the argument of verbs, achieving an average F1 of 0.898. We apply the classifier to nearly 40,000 news texts across five different corpora comprising multiple genres (including newswire, opinion, blog posts, and scientific press releases), and show that the point of view and diegesis correlates largely as expected with the nominal genre of the texts. We release the training data and the classifier for use by the community."}
{"_id": "a0ba2a4c47dcf3513f70e2fd8fa2a61aec04a4d0", "title": "A fast, invariant representation for human action in the visual system.", "text": "Humans can effortlessly recognize others' actions in the presence of complex transformations, such as changes in viewpoint. Several studies have located the regions in the brain involved in invariant action recognition; however, the underlying neural computations remain poorly understood. We use magnetoencephalography decoding and a data set of well-controlled, naturalistic videos of five actions (run, walk, jump, eat, drink) performed by different actors at different viewpoints to study the computational steps used to recognize actions across complex transformations. In particular, we ask when the brain discriminates between different actions, and when it does so in a manner that is invariant to changes in 3D viewpoint. We measure the latency difference between invariant and noninvariant action decoding when subjects view full videos as well as form-depleted and motion-depleted stimuli. We were unable to detect a difference in decoding latency or temporal profile between invariant and noninvariant action recognition in full videos. However, when either form or motion information is removed from the stimulus set, we observe a decrease and delay in invariant action decoding. Our results suggest that the brain recognizes actions and builds invariance to complex transformations at the same time and that both form and motion information are crucial for fast, invariant action recognition. NEW & NOTEWORTHY The human brain can quickly recognize actions despite transformations that change their visual appearance. We use neural timing data to uncover the computations underlying this ability. We find that within 200 ms action can be read out of magnetoencephalography data and that this representation is invariant to changes in viewpoint. We find form and motion are needed for this fast action decoding, suggesting that the brain quickly integrates complex spatiotemporal features to form invariant action representations."}
{"_id": "2a8a3e00b978e4ae12da7fe536b7a5a719c0f0ef", "title": "Active sampling for entity matching", "text": "In entity matching, a fundamental issue while training a classifier to label pairs of entities as either duplicates or non-duplicates is the one of selecting informative training examples. Although active learning presents an attractive solution to this problem, previous approaches minimize the misclassification rate (0-1 loss) of the classifier, which is an unsuitable metric for entity matching due to class imbalance (i.e., many more non-duplicate pairs than duplicate pairs). To address this, a recent paper [1] proposes to maximize recall of the classifier under the constraint that its precision should be greater than a specified threshold. However, the proposed technique requires the labels of all n input pairs in the worst-case.\n Our main result is an active learning algorithm that approximately maximizes recall of the classifier while respecting a precision constraint with provably sub-linear label complexity (under certain distributional assumptions). Our algorithm uses as a black-box any active learning module that minimizes 0-1 loss. We show that label complexity of our algorithm is at most log n times the label complexity of the black-box, and also bound the difference in the recall of classifier learnt by our algorithm and the recall of the optimal classifier satisfying the precision constraint. We provide an empirical evaluation of our algorithm on several real-world matching data sets that demonstrates the effectiveness of our approach."}
{"_id": "31f3a12fb25ddb0a27ebdda7dd8d014996debd74", "title": "Device analyzer: largescale mobile data collection", "text": "We collected usage information from 12,500 Android devices in the wild over the course of nearly 2 years. Our dataset contains 53 billion data points from 894 models of devices running 687 versions of Android. Processing the collected data presents a number of challenges ranging from scalability to consistency and privacy considerations. We present our system architecture for collection and analysis of this highly-distributed dataset, discuss how our system can reliably collect time-series data in the presence of unreliable timing information, and discuss issues and lessons learned that we believe apply to many other big data collection projects."}
{"_id": "5b80cda5166ba75e6c1110fc4ca6b8c12fe1fa37", "title": "Predictors of Computer Use in Community-Dwelling, Ethnically Diverse Older Adults", "text": "OBJECTIVE\nIn this study, we analyzed self-reported computer use, demographic variables, psychosocial variables, and health and well-being variables collected from 460 ethnically diverse, community-dwelling elders to investigate the relationship computer use has with demographics, well-being, and other key psychosocial variables in older adults.\n\n\nBACKGROUND\nAlthough younger elders with more education, those who employ active coping strategies, or those who are low in anxiety levels are thought to use computers at higher rates than do others, previous research has produced mixed or inconclusive results regarding ethnic, gender, and psychological factors or has concentrated on computer-specific psychological factors only (e.g., computer anxiety). Few such studies have employed large sample sizes or have focused on ethnically diverse populations of community-dwelling elders.\n\n\nMETHOD\nWith a large number of overlapping predictors, zero-order analysis alone is poorly equipped to identify variables that are independently associated with computer use. Accordingly, both zero-order and stepwise logistic regression analyses were conducted to determine the correlates of two types of computer use: e-mail and general computer use.\n\n\nRESULTS\nResults indicate that younger age, greater level of education, non-Hispanic ethnicity, behaviorally active coping style, general physical health, and role-related emotional health each independently predicted computer usage.\n\n\nCONCLUSION\nStudy findings highlight differences in computer usage, especially in regard to Hispanic ethnicity and specific health and well-being factors.\n\n\nAPPLICATION\nPotential applications of this research include future intervention studies, individualized computer-based activity programming, or customizable software and user interface design for older adults responsive to a variety of personal characteristics and capabilities."}
{"_id": "77e9753fe4b89d6e14f2a5f6b39cd5d146a965ae", "title": "Searching Solitaire in Real Time", "text": "This article presents a new real-time heuristic search method for planning problems with distinct stages. Our multistage nested rollout algorithm allows the user to apply separate heuristics at each stage of the search process and tune the search magnitude for each stage. We propose a searchtree compression that reveals a new state representation for the games of Klondike Solitaire and Thoughtful Solitaire, a version of Klondike Solitaire in which the location of all cards is known. Moreover, we present a Thoughtful Solitaire solver based on these methods that can determine over 80% of Thoughtful Solitaire games in less than 4 seconds. Finally, we demonstrate empirically that no less than 82% and no more than 91.44% of Klondike Solitaire games have winning solutions, leaving less than 10% of games unresolved."}
{"_id": "55b051ed153a84fdc0972468c56f7591a46ba80d", "title": "We're doing it live: A multi-method empirical study on continuous experimentation", "text": "Context: Continuous experimentation guides development activities based on data collected on a subset of online users on a new experimental version of the software. It includes practices such as canary releases, gradual rollouts, dark launches, or A/B testing. Objective: Unfortunately, our knowledge of continuous experimentation is currently primarily based on well-known and outspoken industrial leaders. To assess the actual state of practice in continuous experimentation, we conducted a mixed-method empirical study. Method: In our empirical study consisting of four steps, we interviewed 31 developers or release engineers, and performed a survey that attracted 187 complete responses. We analyzed the resulting data using statistical analysis and open coding. Results: Our results lead to several conclusions: (1) from a software architecture perspective, continuous experimentation is especially enabled by architectures that foster independently deployable services, such as microservices-based architectures; (2) from a developer perspective, experiments require extensive monitoring and analytics to discover runtime problems, consequently leading to developer on call policies and influencing the role and skill sets required by developers; and (3) from a process perspective, many organizations conduct experiments based on intuition rather than clear guidelines and robust statistics. Conclusion: Our findings show that more principled and structured approaches for release decision making are needed, striving for highly automated, systematic, and dataand hypothesis-driven deployment and experimentation."}
{"_id": "32ff33f1b31cf4cf12ab6a84f661879407a9f1ca", "title": "EAF2- A Framework for Categorizing Enterprise Architecture Frameworks", "text": "What constitutes an enterprise architecture framework is a contested subject. The contents of present enterprise architecture frameworks thus differ substantially. This paper aims to alleviate the confusion regarding which framework contains what by proposing a meta framework for enterprise architecture frameworks. By using this meta framework, decision makers are able to express their requirements on what their enterprise architecture framework must contain and also to evaluate whether the existing frameworks meets these requirements. An example classification of common EA frameworks illustrates the approach."}
{"_id": "48ed22947ab27b4d8eb9036b80b37859912a5d17", "title": "Vacant parking space detector for outdoor parking lot by using surveillance camera and FCM classifier", "text": "The most prevailing approach now for parking lot vacancy detecting system is to use sensor-based techniques. The main impediments to the camera-based system in applying to parking lots on rooftop and outside building are the glaring sun light and dark shadows in the daytime, and low-light intensity and back-lighting in the nighttime. To date, no camera-based detecting systems for outdoor parking lots have been in practical use. A few engineering firms provide the camera-based system, which is only for underground and indoor parking lots. This paper reports on the new camera based system called ParkLotD for detecting vacancy/occupancy in parking lots. ParkLotD uses a classifier based on fuzzy c-means (FCM) clustering and hyper-parameter tuning by particle swarm optimization (PSO). The test result of the detection error rate for the indoor multi-story parking lot has improved by an order of magnitude compared to the current system based on the edge detection approach. ParkLotD demonstrates high detection performance and enables the camera-based system to achieve the practical use in outdoor parking lots."}
{"_id": "8efe3b64ee8a936e583b18b415d0071a1cd65a7c", "title": "System Design of a 77 GHz Automotive Radar Sensor with Superresolution DOA Estimation", "text": "This paper introduces a novel 77 GHz FMCW automotive long range radar (LRR) system concept. High resolution direction of arrival (DOA) estimation is an important requirement for the application in automotive safety systems. The challenges in system design regarding low cost and superresolution signal processing are discussed. Dominant interferences to the MUSIC DOA estimator are amplitude and phase mismatches due to inhomogeneous antenna patterns. System simulation results deliver design guidelines for the required signal-to-noise ratio (SNR) and the antenna design. Road traffic measurements with a demonstrator system show superior DOA resolution and demonstrate the feasibility of the design goals."}
{"_id": "908f48cf72e0724a80baf87913f1b8534ed5a380", "title": "Automotive Radar \u2013 Status and Trends", "text": "The paper gives a brief overview of automotive radar. The status of the frequency regulation for short and long range radar is summarized because of its importance for car manufacturers and their sensor suppliers. Front end concepts and antenna techniques of 24 GHz and 77 GHz sensors are briefly described. Their impact on the sensor\u2019s field of view and on the angular measurement capability is discussed. Esp. digital beamforming concepts are considered and promising results are presented."}
{"_id": "408a8e250316863da94ffb3eab077175d08c01bf", "title": "Multiple Emitter Location and Signal Parameter-- Estimation", "text": ""}
{"_id": "ad570ceaffa4a012ff3e0157df80c6e1229f0a73", "title": "Proposal of millimeter-wave holographic radar with antenna switching", "text": "This paper proposes a millimeter-wave holographic radar with a simple structure for automotive applications. The simplicity can be realized by switching both transmitting antennas and receiving antennas. Also, a super resolution technique is introduced for the detection of angular positions in the proposed radar. The radar developed experimentally has accomplished an azimuthal angular resolution of less than 2 degrees and an azimuthal field of view (FoV) of more than 20 degrees simultaneously."}
{"_id": "752a9d8506a1e67687e29f845b13f465a705a63c", "title": "Planning and Control for Collision-Free Cooperative Aerial Transportation", "text": "This paper presents planning and control synthesis for multiple aerial manipulators to transport a common object. Each aerial manipulator that consists of a hexacopter and a two-degree-of-freedom robotic arm is controlled by an augmented adaptive sliding mode controller based on a closed-chain robot dynamics. We propose a motion planning algorithm by exploiting rapidly exploring random tree star (RRT*) and dynamic movement primitives (DMPs). The desired path for each aerial manipulator is obtained by using RRT* with Bezier curve, which is designed to handle environmental obstacles, such as buildings or equipments. During aerial transportation, to avoid unknown obstacle, DMPs modify the trajectory based on the virtual leader\u2013follower structure. By the combination of RRT* and DMPs, the cooperative aerial manipulators can carry a common object to keep reducing the interaction force between multiple robots while avoiding an obstacle in the unstructured environment. To validate the proposed planning and control synthesis, two experiments with multiple custom-made aerial manipulators are presented, which involve user-guided trajectory and RRT*-planned trajectory tracking in unstructured environments.Note to Practitioners\u2014This paper presents a viable approach to autonomous aerial transportation using multiple aerial manipulators equipped with a multidegree-of-freedom robotic arm. Existing approaches for cooperative manipulation based on force decomposition or impedance-based control often require a heavy or expensive force/torque sensor. However, this paper suggests a method without using a heavy or expensive force/torque sensor based on closed-chain dynamics in joint space and rapidly exploring random tree star (RRT*) that generates the desired trajectory of aerial manipulators. Unlike conventional RRT*, in this paper, our method can also avoid an unknown moving obstacle during aerial transportation by exploiting RRT* and dynamic movement primitives. The proposed planning and control synthesis is tested to demonstrate performance in a lab environment with two custom-made aerial manipulators and a common object."}
{"_id": "9ea9b1ac918ec5d00ff352bbb3d90c31752e5a7e", "title": "Reliable Client Accounting for P2P-Infrastructure Hybrids", "text": "Content distribution networks (CDNs) have started to adopt hybrid designs, which employ both dedicated edge servers and resources contributed by clients. Hybrid designs combine many of the advantages of infrastructurebased and peer-to-peer systems, but they also present new challenges. This paper identifies reliable client accounting as one such challenge. Operators of hybrid CDNs are accountable to their customers (i.e., content providers) for the CDN\u2019s performance. Therefore, they need to offer reliable quality of service and a detailed account of content served. Service quality and accurate accounting, however, depend in part on interactions among untrusted clients. Using the Akamai NetSession client network in a case study, we demonstrate that a small number of malicious clients used in a clever attack could cause significant accounting inaccuracies. We present a method for providing reliable accounting of client interactions in hybrid CDNs. The proposed method leverages the unique characteristics of hybrid systems to limit the loss of accounting accuracy and service quality caused by faulty or compromised clients. We also describe RCA, a system that applies this method to a commercial hybrid content-distribution network. Using trace-driven simulations, we show that RCA can detect and mitigate a variety of attacks, at the expense of a moderate increase in logging overhead."}
{"_id": "2822a883d149956934a20614d6934c6ddaac6857", "title": "A survey of appearance models in visual object tracking", "text": "Visual object tracking is a significant computer vision task which can be applied to many domains, such as visual surveillance, human computer interaction, and video compression. Despite extensive research on this topic, it still suffers from difficulties in handling complex object appearance changes caused by factors such as illumination variation, partial occlusion, shape deformation, and camera motion. Therefore, effective modeling of the 2D appearance of tracked objects is a key issue for the success of a visual tracker. In the literature, researchers have proposed a variety of 2D appearance models.\n To help readers swiftly learn the recent advances in 2D appearance models for visual object tracking, we contribute this survey, which provides a detailed review of the existing 2D appearance models. In particular, this survey takes a module-based architecture that enables readers to easily grasp the key points of visual object tracking. In this survey, we first decompose the problem of appearance modeling into two different processing stages: visual representation and statistical modeling. Then, different 2D appearance models are categorized and discussed with respect to their composition modules. Finally, we address several issues of interest as well as the remaining challenges for future research on this topic.\n The contributions of this survey are fourfold. First, we review the literature of visual representations according to their feature-construction mechanisms (i.e., local and global). Second, the existing statistical modeling schemes for tracking-by-detection are reviewed according to their model-construction mechanisms: generative, discriminative, and hybrid generative-discriminative. Third, each type of visual representations or statistical modeling techniques is analyzed and discussed from a theoretical or practical viewpoint. Fourth, the existing benchmark resources (e.g., source codes and video datasets) are examined in this survey."}
{"_id": "056462a3d5a78362700cd964e5d0bae4a5a9f08b", "title": "Polarization imaging reflectometry in the wild", "text": "We present a novel approach for on-site acquisition of surface reflectance for planar, spatially varying, isotropic samples in uncontrolled outdoor environments. Our method exploits the naturally occurring linear polarization of incident and reflected illumination for this purpose. By rotating a linear polarizing filter in front of a camera at three different orientations, we measure the polarization reflected off the sample and combine this information with multi-view analysis and inverse rendering in order to recover per-pixel, high resolution reflectance and surface normal maps. Specifically, we employ polarization imaging from two near orthogonal views close to the Brewster angle of incidence in order to maximize polarization cues for surface reflectance estimation. To the best of our knowledge, our method is the first to successfully extract a complete set of reflectance parameters with passive capture in completely uncontrolled outdoor settings. To this end, we analyze our approach under the general, but previously unstudied, case of incident partial linear polarization (due to the sky) in order to identify the strengths and weaknesses of the method under various outdoor conditions. We provide practical guidelines for on-site acquisition based on our analysis, and demonstrate high quality results with an entry level DSLR as well as a mobile phone."}
{"_id": "9e2286da5ac2f3f66dbd6710c9b4fa2719ace948", "title": "Product platform design and customization: Status and promise", "text": "In an effort to improve customization for today\u2019s highly competitive global marketplace, many companies are utilizing product families and platform-based product development to increase variety, shorten lead times, and reduce costs. The key to a successful product family is the product platform from which it is derived either by adding, removing, or substituting one or more modules to the platform or by scaling the platform in one or more dimensions to target specific market niches. This nascent field of engineering design has matured rapidly in the past decade, and this paper provides a comprehensive review of the flurry of research activity that has occurred during that time to facilitate product family design and platform-based product development for mass customization. Techniques for identifying platform leveraging strategies within a product family are reviewed along with metrics for assessing the effectiveness of product platforms and product families. Special emphasis is placed on optimization approaches and artificial intelligence techniques to assist in the process of product family design and platform-based product development. Web-based systems for product platform customization are also discussed. Examples from both industry and academia are presented throughout the paper to highlight the benefits of product families and product platforms. The paper concludes with a discussion of potential areas of research to help bridge the gap between planning and managing families of products and designing and manufacturing them."}
{"_id": "0d14221e3bbb1a58f115a7c7301dc4d4048be13f", "title": "WebWitness: Investigating, Categorizing, and Mitigating Malware Download Paths", "text": "Most modern malware download attacks occur via the browser, typically due to social engineering and driveby downloads. In this paper, we study the \u201corigin\u201d of malware download attacks experienced by real network users, with the objective of improving malware download defenses. Specifically, we study the web paths followed by users who eventually fall victim to different types of malware downloads. To this end, we propose a novel incident investigation system, named WebWitness. Our system targets two main goals: 1) automatically trace back and label the sequence of events (e.g., visited web pages) preceding malware downloads, to highlight how users reach attack pages on the web; and 2) leverage these automatically labeled in-the-wild malware download paths to better understand current attack trends, and to develop more effective defenses. We deployed WebWitness on a large academic network for a period of ten months, where we collected and categorized thousands of live malicious download paths. An analysis of this labeled data allowed us to design a new defense against drive-by downloads that rely on injecting malicious content into (hacked) legitimate web pages. For example, we show that by leveraging the incident investigation information output by WebWitness we can decrease the infection rate for this type of drive-by downloads by almost six times, on average, compared to existing URL blacklisting approaches."}
{"_id": "b0fbee324607b46ae65ec4ce9601d3a2bfa17202", "title": "Speech recognition in adverse conditions : A review", "text": "Speech recognition in adverse conditions: A review Sven L. Mattys a , Matthew H. Davis b , Ann R. Bradlow c & Sophie K. Scott d a Department of Psychology, University of York, York, UK b Medical Research Council, Cognition and Brain Sciences Unit, Cambridge, UK c Department of Linguistics, Northwestern University, Evanston, IL, USA d Institute of Cognitive Neuroscience, University College London, London, UK"}
{"_id": "7562c087ff18553f74fae8aff29892394f181582", "title": "3D free-form surface registration and object recognition", "text": "A new technique to recognise 3D free-form objects via registration is proposed. This technique attempts to register a free-form surface, represented by a set of % MathType!MTEF!2!1!+-% feaafeart1ev1aaatCvAUfeBSjuyZL2yd9gzLbvyNv2CaerbuLwBLn% hiov2DGi1BTfMBaeXatLxBI9gBaerbd9wDYLwzYbItLDharqqtubsr% 4rNCHbGeaGqiVu0Je9sqqrpepC0xbbL8F4rqqrFfpeea0xe9Lq-Jc9% vqaqpepm0xbba9pwe9Q8fs0-yqaqpepae9pg0FirpepeKkFr0xfr-x% fr-xb9adbaqaaeGaciGaaiaabeqaamaabaabaaGcbaGaaGOmamaala% aabaGaaGymaaqaaiaaikdaaaGaamiraaaa!38F8!\\[2\\frac{1}{2}D\\] sensed data points, to the model surface, represented by another set of % MathType!MTEF!2!1!+-% feaafeart1ev1aaatCvAUfeBSjuyZL2yd9gzLbvyNv2CaerbuLwBLn% hiov2DGi1BTfMBaeXatLxBI9gBaerbd9wDYLwzYbItLDharqqtubsr% 4rNCHbGeaGqiVu0Je9sqqrpepC0xbbL8F4rqqrFfpeea0xe9Lq-Jc9% vqaqpepm0xbba9pwe9Q8fs0-yqaqpepae9pg0FirpepeKkFr0xfr-x% fr-xb9adbaqaaeGaciGaaiaabeqaamaabaabaaGcbaGaaGOmamaala% aabaGaaGymaaqaaiaaikdaaaGaamiraaaa!38F8!\\[2\\frac{1}{2}D\\] model data points, without prior knowledge of correspondence or view points between the two point sets. With an initial assumption that the sensed surface be part of a more complete model surface, the algorithm begins by selecting three dispersed, reliable points on the sensed surface. To find the three corresponding model points, the method uses the principal curvatures and the Darboux frames to restrict the search over the model space. Invariably, many possible model 3-typles will be found. For each hypothesized model 3-tuple, the transformation to match the sensed 3-tuple to the model 3-tuple can be determined. A heuristic search is proposed to single out the optimal transformation in low order time. For realistic object recognition or registration, where the two range images are often extracted from different view points of the model, the earlier assumption that the sensed surface be part of a more complete model surface cannot be relied on. With this, the sensed 3-tuple must be chosen such that the three sensed points lie on the common region visible to both the sensed and model views. We propose an algorithm to select a minimal non-redundant set of 3-tuples such that at least one of the 3-tuples will lie on the overlap. Applying the previous algorithm to each 3-tuple within this set, the optimal transformation can be determined. Experiments using data obtained from a range finder have indicated fast registration for relatively complex test cases. If the optimal registrations between the sensed data (candidate) and each of a set of model data are found, then, for 3D object recognition purposes, the minimal best fit error can be used as the decision rule."}
{"_id": "5656fa5aa6e1beeb98703fc53ec112ad227c49ca", "title": "Multi-Prediction Deep Boltzmann Machines", "text": "We introduce the multi-prediction deep Boltzmann machine (MP-DBM). The MPDBM can be seen as a single probabilistic model trained to maximize a variational approximation to the generalized pseudolikelihood, or as a family of recurrent nets that share parameters and approximately solve different inference problems. Prior methods of training DBMs either do not perform well on classification tasks or require an initial learning pass that trains the DBM greedily, one layer at a time. The MP-DBM does not require greedy layerwise pretraining, and outperforms the standard DBM at classification, classification with missing inputs, and mean field prediction tasks.1"}
{"_id": "66eb3adbadb7d75266428d31a82dba443cab47dd", "title": "A Modified CFOA and Its Applications to Simulated Inductors, Capacitance Multipliers, and Analog Filters", "text": "In this paper, using a minimum number of passive components, i.e., new grounded and floating inductance simulators, grounded capacitance multipliers, and frequency-dependent negative resistors (FDNRs) based on one/two modified current-feedback operational amplifiers (MCFOAs), are proposed. The type of the simulators depends on the passive element selection used in the structure of the circuit without requiring critical active and passive component-matching conditions and/or cancellation constraints. In order to show the flexibility of the proposed MCFOA, a single-input three-output (SITO) voltage-mode (VM) filter, two three-input single-output (TISO) VM filters, and an SITO current-mode (CM) filter employing a single MCFOA are reported. The layout of the proposed MCFOA is also given. A number of simulations using the SPICE program and some experimental tests are performed to exhibit the performance of the introduced structures."}
{"_id": "37c1b8775e5eb99ab21ddca50ccf109c6abf906d", "title": "Data Structures for Traveling Salesmen", "text": "The choice of data structure for tour representation plays a critical role in the efficiency of local improvement heuristics for the Traveling Salesman Problem. The tour data structure must permit queries about the relative order of cities in the current tour and must allow sections of the tour to be reversed. The traditional array-based representation of a tour permits the relative order of cities to be determined in small constant time, but requires worst-case \u03a9(N) time (where N is the number of cities) to implement a reversal, which renders it impractical for large instances. This paper considers alternative tour data structures, examining them from both a theoretical and experimental point of view. The first alternative we consider is a data structure based on splay trees, where all queries and updates take amortized time O( logN). We show that this is close to the best possible, because in the cell probe model of computation any data structure must take worst-case amortized time \u03a9( logN /log logN) per operation. Empirically (for random Euclidean instances), splay trees overcome their large constant-factor overhead and catch up to arrays by N = 10 , 000, pulling ahead by a factor of 4-10 (depending on machine) when N = 100 , 000. Two alternative tree-based data structures do even better in this range, however. Although both are asymptotically inferior to the splay tree representation, the latter does not appear to pull even with them until N \u223c 1 , 000 , 000. ________________ 1 Rutgers University, New Brunswick, NJ 08903, and University of California at San Diego, La Jolla, CA 92093 2 Room 2D-150, AT&T Bell Laboratories, Murray Hill, NJ 07974 3 Department of Mathematics and Computer Science, Amherst College, Amherst, MA 01002 4 Department of Mathematics, Rutgers University, New Brunswick, NJ 08903 * A preliminary version of this paper appeared under the same title in Proceedings 4th Ann. ACM-SIAM Symp. on Discrete Algorithms (1993), 145-154."}
{"_id": "5dd204ae6b82ccaf4a840a704bfd4753e8d48add", "title": "An Operating System for the Home", "text": "Network devices for the home such as remotely controllable locks, lights, thermostats, cameras, and motion sensors are now readily available and inexpensive. In theory, this enables scenarios like remotely monitoring cameras from a smartphone or customizing climate control based on occupancy patterns. However, in practice today, such smarthome scenarios are limited to expert hobbyists and the rich because of the high overhead of managing and extending current technology. We present HomeOS, a platform that bridges this gap by presenting users and developers with a PC-like abstraction for technology in the home. It presents network devices as peripherals with abstract interfaces, enables cross-device tasks via applications written against these interfaces, and gives users a management interface designed for the home environment. HomeOS already has tens of applications and supports a wide range of devices. It has been running in 12 real homes for 4\u20138 months, and 42 students have built new applications and added support for additional devices independent of our efforts."}
{"_id": "3d9424dbab0ad247d47ab53b53c6cc5648dc647d", "title": "How do training and competition workloads relate to injury? The workload-injury aetiology model.", "text": "Injury aetiology models that have evolved over the previous two decades highlight a number of factors which contribute to the causal mechanisms for athletic injuries. These models highlight the pathway to injury, including (1) internal risk factors (eg, age, neuromuscular control) which predispose athletes to injury, (2) exposure to external risk factors (eg, playing surface, equipment), and finally (3) an inciting event, wherein biomechanical breakdown and injury occurs. The most recent aetiological model proposed in 2007 was the first to detail the dynamic nature of injury risk, whereby participation may or may not result in injury, and participation itself alters injury risk through adaptation. However, although training and competition workloads are strongly associated with injury, existing aetiology models neither include them nor provide an explanation for how workloads alter injury risk. Therefore, we propose an updated injury aetiology model which includes the effects of workloads. Within this model, internal risk factors are differentiated into modifiable and non-modifiable factors, and workloads contribute to injury in three ways: (1) exposure to external risk factors and potential inciting events, (2) fatigue, or negative physiological effects, and (3) fitness, or positive physiological adaptations. Exposure is determined solely by total load, while positive and negative adaptations are controlled both by total workloads, as well as changes in load (eg, the acute:chronic workload ratio). Finally, we describe how this model explains the load-injury relationships for total workloads, acute:chronic workload ratios and the training load-injury paradox."}
{"_id": "4869d77ae92fb93d7becfc709904cc7740b1419d", "title": "Comparing and Aligning Process Representations", "text": "Processes within organizations can be highly complex chains of inter-related steps, involving numerous stakeholders and information systems. Due to this complexity, having access to the right information is vital to the proper execution and effective management of an organization\u2019s business processes. A major challenge in this regard is that information on a single process is often spread out over numerous models, documents, and systems. This phenomenon results from efforts to provide a variety of process stakeholders with the information that is relevant to them, in a suitable format. However, this disintegration of process information also has considerable disadvantages for organizations. In particular, it can lead to severe maintenance issues, reduced execution efficiency, and negative effects on the quality of process results. Against this background, this doctoral thesis focuses on the spread of process information in organizations and, in particular, on the mitigation of the negative aspects associated with this phenomenon. The main contributions of this thesis are five techniques that focus on the alignment and comparison of process information from different informational artifacts. Each of these techniques tackles a specific scenario involving multiple informational artifacts that contain process information in different representation formats. Among others, we present automated techniques for the detection of inconsistencies between process models and textual process descriptions, the alignment of process performance measurements to process models, conformance-checking in the context of uncertainty, and the matching of process models through the analysis of event-log information. We demonstrate the efficacy and usefulness of these techniques through quantitative evaluations involving data obtained from real-world settings. Altogether, the presented work provides important contributions for the analysis, comparison, and alignment of process information in various representation formats through the development of novel concepts and techniques. The contributions, furthermore, provide a means for organizations to improve the efficiency and quality of their processes."}
{"_id": "1512a1cc2b8199c1e3258b1bf26bc402d42ee88f", "title": "Variability of Worked Examples and Transfer of Geometrical Problem-Solving Skills : A Cognitive-Load Approach", "text": "Four computer-based training strategies for geometrical problem solving in the domain of computer numerically controlled machinery programming were studied with regard to their effects on training performance, transfer performance, and cognitive load. A lowand a high-variability conventional condition, in which conventional practice problems had to be solved (followed by worked examples), were compared with a lowand a high-variability worked condition, in which worked examples had to be studied. Results showed that students who studied worked examples gained most from high-variability examples, invested less time and mental effort in practice, and attained better and less effort-demanding transfer performance than students who first attempted to solve conventional problems and then studied work examples."}
{"_id": "646181575a871cb6bc2e98005ce517ebe5772d66", "title": "Comparison of the Hardware Performance of the AES Candidates Using Reconfigurable Hardware", "text": "The results of implementations of all five AES finalists using Xilinx Field Programmable Gate Arrays are presented and analyzed. Performance of four alternative hardware architectures is discussed and compared. The AES candidates are divided into three classes depending on their hardware performance characteristics. Recommendation regarding the optimum choice of the algorithms for AES is provided."}
{"_id": "682a975a638bfa37fa4b4ca53222dcee756fe826", "title": "Alzheimer's disease: genes, proteins, and therapy.", "text": "Rapid progress in deciphering the biological mechanism of Alzheimer's disease (AD) has arisen from the application of molecular and cell biology to this complex disorder of the limbic and association cortices. In turn, new insights into fundamental aspects of protein biology have resulted from research on the disease. This beneficial interplay between basic and applied cell biology is well illustrated by advances in understanding the genotype-to-phenotype relationships of familial Alzheimer's disease. All four genes definitively linked to inherited forms of the disease to date have been shown to increase the production and/or deposition of amyloid beta-protein in the brain. In particular, evidence that the presenilin proteins, mutations in which cause the most aggressive form of inherited AD, lead to altered intramembranous cleavage of the beta-amyloid precursor protein by the protease called gamma-secretase has spurred progress toward novel therapeutics. The finding that presenilin itself may be the long-sought gamma-secretase, coupled with the recent identification of beta-secretase, has provided discrete biochemical targets for drug screening and development. Alternate and novel strategies for inhibiting the early mechanism of the disease are also emerging. The progress reviewed here, coupled with better ability to diagnose the disease early, bode well for the successful development of therapeutic and preventative drugs for this major public health problem."}
{"_id": "1d0b61503222191fe85c7bd112f91036f6a5028e", "title": "FA*IR: A Fair Top-k Ranking Algorithm", "text": "In this work, we define and solve the Fair Top-k Ranking problem, in which we want to determine a subset of k candidates from a large pool of n \u00bb k candidates, maximizing utility (i.e., select the \"best\" candidates) subject to group fairness criteria.\n Our ranked group fairness definition extends group fairness using the standard notion of protected groups and is based on ensuring that the proportion of protected candidates in every prefix of the top-k ranking remains statistically above or indistinguishable from a given minimum. Utility is operationalized in two ways: (i) every candidate included in the top-k should be more qualified than every candidate not included; and (ii) for every pair of candidates in the top-k, the more qualified candidate should be ranked above.\n An efficient algorithm is presented for producing the Fair Top-k Ranking, and tested experimentally on existing datasets as well as new datasets released with this paper, showing that our approach yields small distortions with respect to rankings that maximize utility without considering fairness criteria. To the best of our knowledge, this is the first algorithm grounded in statistical tests that can mitigate biases in the representation of an under-represented group along a ranked list."}
{"_id": "540cc30354ed4fe646f262042de77fec4995493e", "title": "Lower limb rehabilitation robot", "text": "This paper describes a new prototype of lower limb rehabilitation robot (for short: LLRR), including its detailed structure, operative principles, manipulative manual and control mode which give considerate protection to patients. It implements the programmable process during the course of the limbs rehabilitation, furthermore, renders variable and predetermined step posture and force sensing. LLRR could assist patient in-- simulating normal people's footsteps, exercising leg muscles, gradually recovering the neural control toward walking function and finally walking in normal way utterly. Such robot is comprised with steps posture controlling system and weight alleviation controlling mechanism."}
{"_id": "ddf7c908fbb102fae3ea927d2dd96181ac4f1976", "title": "The Air Traffic Flow Management Problem: An Integer Optimization Approach", "text": "In this paper, we present a new Integer Program (IP) for the Air Traffic Flow Management (ATFM) problem. The model we propose provides a complete representation of all the phases of each flights, i.e., the phase of taking-off, of cruising and of landing; suggesting all the actions to be implemented to achieve the goal of safe, efficient, and expeditious aircraft movement. The distinctive feature of the model is that it allows rerouting decisions. These decisions are formulated by means of \u201clocal\u201d conditions, which allow us to represent such decisions in a very compact way by only introducing new constraints. Moreover, to strengthen the polyhedral structure of the underlying relaxation, we also present three classes of valid inequalities. We report short computational times (less than 15 minutes) on instances of the size of the US air traffic control system that make it realistic that our approach can be used as the main engine of managing air traffic in the US."}
{"_id": "2d248dc0c67ec82b70f8a59f1e24964916916a9e", "title": "An Improved Neural Segmentation Method Based on U-NET", "text": "\u6458\u8981:\u5c40\u90e8\u9ebb\u9189\u6280\u672f\u4f5c\u4e3a\u73b0\u4ee3\u793e\u4f1a\u6700\u4e3a\u5e38\u89c1\u7684\u9ebb\u9189\u6280 \u672f,\u5177\u6709\u5b89\u5168\u6027\u9ad8,\u526f\u4f5c\u7528\u5c0f\u7b49\u4f18\u52bf\u3002\u901a\u8fc7\u5206\u6790\u8d85\u58f0 \u56fe\u50cf,\u5206\u5272\u56fe\u50cf\u4e2d\u7684\u795e\u7ecf\u533a\u57df,\u6709\u52a9\u4e8e\u63d0\u5347\u5c40\u90e8\u9ebb\u9189 \u624b\u672f\u7684\u6210\u529f\u7387\u3002\u5377\u79ef\u795e\u7ecf\u7f51\u7edc\u4f5c\u4e3a\u76ee\u524d\u6700\u4e3a\u9ad8\u6548\u7684\u56fe \u50cf\u5904\u7406\u65b9\u6cd5\u4e4b\u4e00,\u5177\u6709\u51c6\u786e\u6027\u9ad8,\u9884\u5904\u7406\u5c11\u7b49\u4f18\u52bf\u3002 \u901a\u8fc7\u5377\u79ef\u795e\u7ecf\u7f51\u7edc\u6765\u5bf9\u8d85\u58f0\u56fe\u50cf\u4e2d\u7684\u795e\u7ecf\u533a\u57df\u8fdb\u884c\u5206 \u5272,\u901f\u5ea6\u66f4\u5feb,\u51c6\u786e\u6027\u66f4\u9ad8\u3002\u76ee\u524d\u5df2\u6709\u7684\u56fe\u50cf\u5206\u5272\u7f51 \u7edc\u7ed3\u6784\u4e3b\u8981\u6709U-NET[1],SegNet[2]\u3002U-NET\u7f51\u7edc\u8bad\u7ec3 \u65f6\u95f4\u77ed,\u8bad\u7ec3\u53c2\u6570\u8f83\u5c11,\u4f46\u6df1\u5ea6\u7565\u6709\u4e0d\u8db3\u3002SegNet \u7f51 \u7edc\u5c42\u6b21\u8f83\u6df1,\u8bad\u7ec3\u65f6\u95f4\u8fc7\u957f,\u4f46\u5bf9\u8bad\u7ec3\u6837\u672c\u9700\u6c42\u8f83\u591a \u7531\u4e8e\u533b\u5b66\u6837\u672c\u6570\u91cf\u6709\u9650,\u4f1a\u5bf9\u6a21\u578b\u8bad\u7ec3\u4ea7\u751f\u4e00\u5b9a\u5f71\u54cd\u3002 \u672c\u6587\u6211\u4eec\u5c06\u91c7\u7528\u4e00\u79cd\u6539\u8fdb\u540e\u7684 U-NET \u7f51\u7edc\u7ed3\u6784\u6765\u5bf9\u8d85 \u58f0\u56fe\u50cf\u4e2d\u7684\u795e\u7ecf\u533a\u57df\u8fdb\u884c\u5206\u5272,\u6539\u8fdb\u540e\u7684 U-NET \u7f51\u7edc \u7ed3\u6784\u52a0\u5165\u7684\u6b8b\u5dee\u7f51\u7edc(residual network)[3],\u5e76\u5bf9\u6bcf\u4e00\u5c42 \u7ed3\u679c\u8fdb\u884c\u89c4\u8303\u5316(batch normalization)\u5904\u7406[4]\u3002\u5b9e\u9a8c\u7ed3 \u679c\u8868\u660e,\u4e0e\u4f20\u7edf\u7684U-NET\u7f51\u7edc\u7ed3\u6784\u76f8\u6bd4,\u6539\u8fdb\u540e\u7684UNET \u7f51\u7edc\u5206\u5272\u6548\u679c\u5177\u6709\u663e\u8457\u63d0\u5347,\u8bad\u7ec3\u65f6\u95f4\u7565\u6709\u589e\u52a0\u3002 \u540c\u65f6,\u5c06\u6539\u8fdb\u540e\u7684 U-NET \u7f51\u7edc\u4e0e SegNet \u7f51\u7edc\u7ed3\u6784\u8fdb \u884c\u5bf9\u6bd4,\u53d1\u73b0\u6539\u8fdb\u540e\u7684 U-net \u65e0\u8bba\u4ece\u8bad\u7ec3\u901f\u5ea6\u8fd8\u662f\u4ece \u5206\u5272\u6548\u679c\u5747\u9ad8\u4e8e SegNet \u7f51\u7edc\u7ed3\u6784\u3002\u6539\u8fdb\u540e\u7684 U-net \u7f51 \u7edc\u7ed3\u6784\u5728\u795e\u7ecf\u8bc6\u522b\u65b9\u9762\u5177\u6709\u5f88\u597d\u7684\u5e94\u7528\u573a\u666f\u3002"}
{"_id": "3311d4f6f90c564bb30daa8ff159bb35649aab46", "title": "Covariate Shift in Hilbert Space: A Solution via Sorrogate Kernels", "text": "Covariate shift is an unconventional learning scenario in which training and testing data have different distributions. A general principle to solve the problem is to make the training data distribution similar to that of the test domain, such that classifiers computed on the former generalize well to the latter. Current approaches typically target on sample distributions in the input space, however, for kernel-based learning methods, the algorithm performance depends directly on the geometry of the kernel-induced feature space. Motivated by this, we propose to match data distributions in the Hilbert space, which, given a pre-defined empirical kernel map, can be formulated as aligning kernel matrices across domains. In particular, to evaluate similarity of kernel matrices defined on arbitrarily different samples, the novel concept of surrogate kernel is introduced based on the Mercer\u2019s theorem. Our approach caters the model adaptation specifically to kernel-based learning mechanism, and demonstrates promising results on several real-world applications. Proceedings of the 30 th International Conference on Machine Learning, Atlanta, Georgia, USA, 2013. JMLR: W&CP volume 28. Copyright 2013 by the author(s)."}
{"_id": "4c99b87df6385bd945a00633f829e4a9ec5ce314", "title": "Massive Social Network Analysis: Mining Twitter for Social Good", "text": "Social networks produce an enormous quantity of data. Facebook consists of over 400 million active users sharing over 5 billion pieces of information each month. Analyzing this vast quantity of unstructured data presents challenges for software and hardware. We present GraphCT, a Graph Characterization Toolkit for massive graphs representing social network data. On a 128-processor Cray XMT, GraphCT estimates the betweenness centrality of an artificially generated (R-MAT) 537 million vertex, 8.6 billion edge graph in 55 minutes and a real-world graph (Kwak, et al.) with 61.6 million vertices and 1.47 billion edges in 105 minutes. We use GraphCT to analyze public data from Twitter, a microblogging network. Twitter's message connections appear primarily tree-structured as a news dissemination system. Within the public data, however, are clusters of conversations. Using GraphCT, we can rank actors within these conversations and help analysts focus attention on a much smaller data subset."}
{"_id": "7b1e18688dae102b8702a074f71bbea8ba540998", "title": "Automotive Safety and Security Integration Challenges", "text": "The ever increasing complexity of automotive vehicular systems, their connection to external networks, to the internet of things as well as their greater internal networking opens doors to hacking and malicious attacks. Security an d privacy risks in modern automotive vehicular systems are well publicized by now. That violation of securit y could lead to safety violations \u2013 is a well-argued and accepted argument. The safety discipline has matured over decades , but the security discipline is much younger . There are arguments and rightfully so, that the security engineering process is similar to the functional safety engineering process (formalized by the norm ISO 26262 ) and that they could be laid side -by-side and could be performed together but, by a different set of experts. There are moves to define a security engineering process along the lines of a functional safety engineering process for automotive vehicular systems . But, are these efforts at formalizing safety-security sufficient to produce safe and secure systems? When one sets out on this path with the idea of building safe and s ecure systems , one realizes that there are quite a few challenges, contradictions , dis imilarities, concerns to be addressed before safe and secure systems started coming out of production lines. The effort of this paper is to bring some such challeng e areas to the notice of the community and to suggest a way forward."}
{"_id": "a608bd857a131fe0d9e10c2219747b9fa03c5afc", "title": "Comprehensive Experimental Analyses of Automotive Attack Surfaces", "text": "Modern automobiles are pervasively computerized, and hence potentially vulnerable to attack. However, while previous research has shown that the internal networks within some modern cars are insecure, the associated threat model \u2014 requiring prior physical access \u2014 has justifiably been viewed as unrealistic. Thus, it remains an open question if automobiles can also be susceptible to remote compromise. Our work seeks to put this question to rest by systematically analyzing the external attack surface of a modern automobile. We discover that remote exploitation is feasible via a broad range of attack vectors (including mechanics tools, CD players, Bluetooth and cellular radio), and further, that wireless communications channels allow long distance vehicle control, location tracking, in-cabin audio exfiltration and theft. Finally, we discuss the structural characteristics of the automotive ecosystem that give rise to such problems and highlight the practical challenges in mitigating them."}
{"_id": "cdbb46785f9b9acf8d03f3f8aba58b201f06639f", "title": "Security Threats to Automotive CAN Networks - Practical Examples and Selected Short-Term Countermeasures", "text": "The IT security of automotive systems is an evolving area of research. To analyse the current situation and the potentially growing tendency of arising threats we performed several practical tests on recent automotive technology. With a focus on automotive systems based on CAN bus technology, this article summarises the results of four selected tests performed on the control systems for the window lift, warning light and airbag control system as well as the central gateway. These results are supplemented in this article by a classification of these four attack scenarios using the established CERT taxonomy and an analysis of underlying security vulnerabilities, and especially, potential safety implications. With respect to the results of these tests, in this article we further discuss two selected countermeasures to address basic weaknesses exploited in our tests. These are adaptations of intrusion detection (discussing three exemplary detection patterns) and IT-forensic measures (proposing proactive measures based on a forensic model). This article discusses both looking at the four attack scenarios introduced before, covering their capabilities and restrictions. While these reactive approaches are short-term measures, which could already be added to today\u2019s automotive IT architecture, long-term concepts also are shortly introduced, which are mainly preventive but will require a major redesign. Beneath a short overview on respective research approaches, we discuss their individual requirements, potential and restrictions. & 2010 Elsevier Ltd. All rights reserved."}
{"_id": "555c839c4d4a4436c3e11ae043a70f1abdab861e", "title": "Unsupervised learning of invariant representations with low sample complexity: the magic of sensory cortex or a new framework for machine learning?", "text": "The present phase of Machine Learning is characterized by supervised learning algorithms relying on large sets of labeled examples (n\u2192\u221e). The next phase is likely to focus on algorithms capable of learning from very few labeled examples (n \u2192 1), like humans seem able to do. We propose an approach to this problem and describe the underlying theory, based on the unsupervised, automatic learning of a \u201cgood\u201d representation for supervised learning, characterized by small sample complexity (n). We consider the case of visual object recognition though the theory applies to other domains. The starting point is the conjecture, proved in specific cases, that image representations which are invariant to translations, scaling and other transformations can considerably reduce the sample complexity of learning. We prove that an invariant and unique (discriminative) signature can be computed for each image patch, I, in terms of empirical distributions of the dot-products between I and a set of templates stored during unsupervised learning. A module performing filtering and pooling, like the simple and complex cells described by Hubel and Wiesel, can compute such estimates. Hierarchical architectures consisting of this basic Hubel-Wiesel moduli inherit its properties of invariance, stability, and discriminability while capturing the compositional organization of the visual world in terms of wholes and parts. The theory extends existing deep learning convolutional architectures for image and speech recognition. It also suggests that the main computational goal of the ventral stream of visual cortex is to provide a hierarchical representation of new objects/images which is invariant to transformations, stable, and discriminative for recognition\u2014and that this representation may be continuously learned in an unsupervised way during development and visual experience. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF 1231216. http://cbmm.mit.edu CBMM Paper March, 2014 1\u201324 ar X iv :1 31 1. 41 58 v5 [ cs .C V ] 1 1 M ar 2 01 4 Unsupervised learning of invariant representations with low sample complexity: the magic of sensory cortex or a new framework for machine learning? Fabio Anselmi \u2217 \u2020, Joel Z. Leibo \u2217 , Lorenzo Rosasco \u2217 \u2020 , Jim Mutch \u2217 , Andrea Tacchetti \u2217 \u2020 , and Tomaso Poggio \u2217 \u2020 \u2217Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139, and \u2020Istituto Italiano di Tecnologia, Genova, 16163 The present phase of Machine Learning is characterized by supervised learning algorithms relying on large sets of labeled examples (n\u2192\u221e). The next phase is likely to focus on algorithms capable of learning from very few labeled examples (n \u2192 1), like humans seem able to do. We propose an approach to this problem and describe the underlying theory, based on the unsupervised, automatic learning of a \u201cgood\u201d representation for supervised learning, characterized by small sample complexity (n). We consider the case of visual object recognition though the theory applies to other domains. 
The starting point is the conjecture, proved in specific cases, that image representations which are invariant to translations, scaling and other transformations can considerably reduce the sample complexity of learning. We prove that an invariant and unique (discriminative) signature can be computed for each image patch, I, in terms of empirical distributions of the dot-products between I and a set of templates stored during unsupervised learning. A module performing filtering and pooling, like the simple and complex cells described by Hubel and Wiesel, can compute such estimates. Hierarchical architectures consisting of this basic Hubel-Wiesel moduli inherit its properties of invariance, stability, and discriminability while capturing the compositional organization of the visual world in terms of wholes and parts. The theory extends existing deep learning convolutional architectures for image and speech recognition. It also suggests that the main computational goal of the ventral stream of visual cortex is to provide a hierarchical representation of new objects/images which is invariant to transformations, stable, and discriminative for recognition\u2014and that this representation may be continuously learned in an unsupervised way during development and visual experience.1 Invariance | Hierarchy | Convolutional networks | Visual cortex It is known that Hubel and Wiesel\u2019s original proposal [1] for visual area V1\u2014of a module consisting of complex cells (C-units) combining the outputs of sets of simple cells (S-units) with identical orientation preferences but differing retinal positions\u2014can be used to construct translationinvariant detectors. This is the insight underlying many networks for visual recognition, including HMAX [2] and convolutional neural nets [3, 4]. We show here how the original idea can be expanded into a comprehensive theory of visual recognition relevant for computer vision and possibly for visual cortex. The first step in the theory is the conjecture that a representation of images and image patches, with a feature vector that is invariant to a broad range of transformations\u2014such as translation, scale, expression of a face, pose of a body, and viewpoint\u2014makes it possible to recognize objects from only a few labeled examples, as humans do. The second step is proving that hierarchical architectures of Hubel-Wiesel (\u2018HW\u2019) modules (indicated by \u2227 in Fig. 1) can provide such invariant representations while maintaining discriminative information about the original image. Each \u2227 -module provides a feature vector, which we call a signature, for the part of the visual field that is inside its \u201creceptive field\u201d; the signature is invariant to (R) affine transformations within the receptive field. The hierarchical architecture, since it computes a set of signatures for l=4"}
{"_id": "9518b99e874d733759e8fc4b6d48493135d2daf6", "title": "Automatic SQL-to-NoSQL schema transformation over the MySQL and HBase databases", "text": "The explosive growth of huge data lets cloud computing be more and more popular in recent years. Traditional web-based content management systems (CMS), e.g., phpBB, WordPress, and Joomla, store data using relational databases whose key advantage is the strong relationships among tables. However, regarding the flexibility and the feasibility of parallel processing, cloud computing adopts NoSQL databases that can support horizontal scaling to handle big data. Therefore, how to transform the existing SQL data into the NoSQL database becomes an emerging and necessary issue. This paper is motivated to propose an automatic SQL-to-NoSQL schema transformation mechanism over the MySQL and HBase databases. Based on our experimental results, the proposed mechanism is able to improve about 47% access performance."}
{"_id": "2ce9eb52608443e7574f35a0b356a5325987f64a", "title": "Adaptive web search based on user profile constructed without any effort from users", "text": "Web search engines help users find useful information on the World Wide Web (WWW). However, when the same query is submitted by different users, typical search engines return the same result regardless of who submitted the query. Generally, each user has different information needs for his/her query. Therefore, the search result should be adapted to users with different information needs. In this paper, we first propose several approaches to adapting search results according to each user's need for relevant information without any user effort, and then verify the effectiveness of our proposed approaches. Experimental results show that search systems that adapt to each user's preferences can be achieved by constructing user profiles based on modified collaborative filtering with detailed analysis of user's browsing history in one day."}
{"_id": "0faeda106e40a43a208f76824d8001baac2eefbc", "title": "Broadband focusing flat mirrors based on plasmonic gradient metasurfaces.", "text": "We demonstrate that metal-insulator-metal configurations, with the top metal layer consisting of a periodic arrangement of differently sized nanobricks, can be designed to function as broadband focusing flat mirrors. Using 50-nm-high gold nanobricks arranged in a 240-nm-period lattice on the top of a 50-nm-thick layer of silicon dioxide deposited on a continuous 100-nm-thick gold film, we realize a 17.3 \u00d7 17.3 \u03bcm(2) flat mirror that efficiently reflects (experiment: 14-27%; theory: 50-78%) and focuses a linearly polarized (along the direction of nanobrick size variation) incident beam in the plane of its polarization with the focal length, which changes from ~15 to 11 \u03bcm when tuning the light wavelength from 750 to 950 nm, respectively. Our approach can easily be extended to realize the radiation focusing in two dimensions as well as other optical functionalities by suitably controlling the phase distribution of reflected light."}
{"_id": "5d7cff09bfa17ec2f45b7e0731bd66ed773c4ae5", "title": "Opinion mining from noisy text data", "text": "The proliferation of Internet has not only generated huge volumes of unstructured information in the form of web documents, but a large amount of text is also generated in the form of emails, blogs, and feedbacks etc. The data generated from online communication acts as potential gold mines for discovering knowledge. Text analytics has matured and is being successfully employed to mine important information from unstructured text documents. Most of these techniques use Natural Language Processing techniques which assume that the underlying text is clean and correct. Statistical techniques, though not as accurate as linguistic mechanisms, are also employed for the purpose to overcome the dependence on clean text. The chief bottleneck for designing statistical mechanisms is however its dependence on appropriately annotated training data. None of these methodologies are suitable for mining information from online communication text data due to the fact that they are often noisy. These texts are informally written. They suffer from spelling mistakes, grammatical errors, improper punctuation and irrational capitalization. This paper focuses on opinion extraction from noisy text data. It is aimed at extracting and consolidating opinions of customers from blogs and feedbacks, at multiple levels of granularity. Ours is a hybrid approach, in which we initially employ a semi-supervised method to learn domain knowledge from a training repository which contains both noisy and clean text. Thereafter we employ localized linguistic techniques to extract opinion expressions from noisy text. We have developed a system based on this approach, which provides the user with a platform to analyze opinion expressions extracted from a repository."}
{"_id": "ea5af4493a92facfc186372a6e8b35fb95ff39b3", "title": "Stereo-Matching Network for Structured Light", "text": "Recently, deep learning has been widely applied in binocular stereo matching for depth acquisition, which has led to an immense increase of accuracy. However, little attention has been paid to the structured light field. In this letter, a network for structured light is proposed to extract effective matching features for depth acquisition. The proposed network promotes the Siamese network by considering receptive fields of different scales and assigning proper weights to the corresponding features, which is achieved by combining pyramid-pooling structure with the squeeze-and-excitation network into the Siamese network for feature extraction and weight calculations, respectively. For network training and testing, a structured-light dataset with amended ground truths is generated by projecting a random pattern into the existing binocular stereo dataset. Experiments demonstrate that the proposed network is capable of real-time depth acquisition, and it provides superior depth maps using structured light."}
{"_id": "13b44d1040bf8fc1edb9de23f50af1f324e63697", "title": "Attribute-Guided Face Generation Using Conditional CycleGAN", "text": "We are interested in attribute-guided face generation: given a low-res face input image, an attribute vector that can be extracted from a high-res image (attribute image), our new method generates a highres face image for the low-res input that satisfies the given attributes. To address this problem, we condition the CycleGAN and propose conditional CycleGAN, which is designed to 1) handle unpaired training data because the training low/high-res and high-res attribute images may not necessarily align with each other, and to 2) allow easy control of the appearance of the generated face via the input attributes. We demonstrate high-quality results on the attribute-guided conditional CycleGAN, which can synthesize realistic face images with appearance easily controlled by user-supplied attributes (e.g., gender, makeup, hair color, eyeglasses). Using the attribute image as identity to produce the corresponding conditional vector and by incorporating a face verification network, the attribute-guided network becomes the identity-guided conditional CycleGAN which produces high-quality and interesting results on identity transfer. We demonstrate three applications on identityguided conditional CycleGAN: identity-preserving face superresolution, face swapping, and frontal face generation, which consistently show the advantage of our new method."}
{"_id": "9b9a93592e93516d0fc42d9ba2ef4ce9682e257b", "title": "Adaptive midlife defense mechanisms and late-life health.", "text": "A growing body of research suggests that personality characteristics relate to physical health; however, this relation ship has primarily been tested in cross-sectional studies that have not followed the participants into old age. The present study utilizes data from a 70-year longitudinal study to prospectively examine the relationship between the adaptive defense mechanisms in midlife and objectively assessed physical health in late life. In addition to examining the direct effect, we test whether social support mediates this relation ship. The sample consisted of 90 men who were followed for over seven decades beginning in late adolescence. Health ratings from medical records were made at three time points (ages 70, 75, and 80). Defense mechanisms were coded from narratives by trained independent raters (Vaillant, Bond, & Vaillant, 1986). Independent raters assessed social supports between ages 50 and 70. More adaptive defenses in midlife were associated with better physical health at all three time points in late life. These relationships were partially mediated by social support. Findings are consistent with the theory that defense maturity is important for building social relationships, which in turn contribute to better late-life physical health. Psychological interventions aimed at improving these domains may be beneficial for physical health."}
{"_id": "4e70539e5fa8a6b420356e92525019b11c730c3e", "title": "Transformer-Less Converter Concept for a Grid-Connection of Thin-Film Photovoltaic Modules", "text": "A transformer-less converter concept for grid- connected photovoltaic systems is proposed that combines a DC/DC converter front-end with a DC/AC inverter. The converter system has an earth-connected DC input, as required from many thin-film photovoltaic modules. The DC/DC converter increases the positive photovoltaic DC-bus voltage by its negative DC output voltage to supply a grid-connected 3-phase inverter. This architecture extends today's power electronic converter topologies for thin-film photovoltaic modules considering their special requirements with the ambition to realize higher power conversion efficiencies at lower cost."}
{"_id": "8a7b0520de8d9af82617bb13d7aef000aae26119", "title": "A dual frequency OMT in the Ku band for TT&C applications", "text": "A mixed characterisation by means of the generalized admittance matrix and the generalized scattering matrices, obtained with mode matching is proposed to design dual-band orthomode transducer (OMT) components. Accurate and efficient fullwave analysis software based on this procedure has been developed. A dual frequency OMT in the Ku band with high performance has been fully designed with the developed software. The good agreement between the numerical and experimental results validates the design process."}
{"_id": "0bd4ac51804a813aa309bb578b7d716461e6f32b", "title": "Streaming Live Media over a Peer-to-Peer Network", "text": "The high bandwidth required by live streaming video greatly limits the number of clients that can be served by a source. In this work, we discuss and evaluate an architecture, calledSpreadIt, for streaming live media over a network of clients, using the resources of the clients themselves. Using SpreadIt, we can distribute bandwidth requirements over the network. The key challenge is to allow an application level multicast tree to be easily maintained over a network of transient peers, while ensuring that quality of service does not degrade. We propose a basic peering infrastructure layer for streaming applications, which uses a redirect primitive to meet the challenge successfully. Through empirical and simulation studies, we show that SpreadIt provides a good quality of service, which degrades gracefully with increasing number of clients. Perhaps more significantly, existing applications can be made to work with SpreadIt, without any change to their code base. Keywords\u2014 peer-to-peer networks, live streaming media, peering layer, redirect primitive, transient nodes"}
{"_id": "6188d0d48ee111fe730d8d22e3b291139f3a45cd", "title": "System Design for Geosynchronous Synthetic Aperture Radar Missions", "text": "Geosynchronous synthetic aperture radar (GEO SAR) has been studied for several decades but has not yet been implemented. This paper provides an overview of mission design, describing significant constraints (atmosphere, orbit, temporal stability of the surface and atmosphere, measurement physics, and radar performance) and then uses these to propose an approach to initial system design. The methodology encompasses all GEO SAR mission concepts proposed to date. Important classifications of missions are: 1) those that require atmospheric phase compensation to achieve their design spatial resolution; and 2) those that achieve full spatial resolution without phase compensation. Means of estimating the atmospheric phase screen are noted, including a novel measurement of the mean rate of change of the atmospheric phase delay, which GEO SAR enables. Candidate mission concepts are described. It seems likely that GEO SAR will be feasible in a wide range of situations, although extreme weather and unstable surfaces (e.g., water, tall vegetation) prevent 100% coverage. GEO SAR offers an exciting imaging capability that powerfully complements existing systems."}
{"_id": "b6037dfc6a962c15472fe02dc4e70123ae287672", "title": "A hybrid feature selection scheme for unsupervised learning and its application in bearing fault diagnosis", "text": "With the development of the condition-based maintenance techniques and the consequent requirement for good machine learning methods, new challenges arise in unsupervised learning. In the real-world situations, due to the relevant features that could exhibit the real machine condition are often unknown as priori, condition monitoring systems based on unimportant features, e.g. noise, might suffer high falsealarm rates, especially when the characteristics of failures are costly or difficult to learn. Therefore, it is important to select the most representative features for unsupervised learning in fault diagnostics. In this paper, a hybrid feature selection scheme (HFS) for unsupervised learning is proposed to improve the robustness and the accuracy of fault diagnostics. It provides a general framework of the feature selection based on significance evaluation and similarity measurement with respect to the multiple clustering solutions. The effectiveness of the proposed HFS method is demonstrated by a bearing fault diagnostics application and comparison with other features selection methods. 2011 Elsevier Ltd. All rights reserved."}
{"_id": "7fbe2aac3e8bfd4b51e5c177f824827e6ebec5b1", "title": "Software testing: a research travelogue (2000-2014)", "text": "Despite decades of work by researchers and practitioners on numerous software quality assurance techniques, testing remains one of the most widely practiced and studied approaches for assessing and improving software quality. Our goal, in this paper, is to provide an accounting of some of the most successful research performed in software testing since the year 2000, and to present what appear to be some of the most significant challenges and opportunities in this area. To be more inclusive in this effort, and to go beyond our own personal opinions and biases, we began by contacting over 50 of our colleagues who are active in the testing research area, and asked them what they believed were (1) the most significant contributions to software testing since 2000 and (2) the greatest open challenges and opportunities for future research in this area. While our colleagues\u2019 input (consisting of about 30 responses) helped guide our choice of topics to cover and ultimately the writing of this paper, we by no means claim that our paper represents all the relevant and noteworthy research performed in the area of software testing in the time period considered\u2014a task that would require far more space and time than we have available. Nevertheless, we hope that the approach we followed helps this paper better reflect not only our views, but also those of the software testing community in general."}
{"_id": "9db1f913f56ff745099a71065161656631989a72", "title": "Provably Fair Representations", "text": "Machine learning systems are increasingly used to make decisions about people\u2019s lives, such as whether to give someone a loan or whether to interview someone for a job. This has led to considerable interest in making such machine learning systems fair. One approach is to transform the input data used by the algorithm. This can be achieved by passing each input data point through a representation function prior to its use in training or testing. Techniques for learning such representation functions from data have been successful empirically, but typically lack theoretical fairness guarantees. We show that it is possible to prove that a representation function is fair according to common measures of both group and individual fairness, as well as useful with respect to a target task. These provable properties can be used in a governance model involving a data producer, a data user and a data regulator, where there is a separation of concerns between fairness and target task utility to ensure transparency and prevent perverse incentives. We formally define the \u2018cost of mistrust\u2019 of using this model compared to the setting where there is a single trusted party, and provide bounds on this cost in particular cases. We present a practical approach to learning fair representation functions and apply it to financial and criminal justice datasets. We evaluate the fairness and utility of these representation functions using measures motivated by our theoretical results."}
{"_id": "963c7449b5239d4c7dc41c4471b7e04114d35503", "title": "Pinpointing Vulnerabilities", "text": "Memory-based vulnerabilities are a major source of attack vectors. They allow attackers to gain unauthorized access to computers and their data. Previous research has made significant progress in detecting attacks. However, developers still need to locate and fix these vulnerabilities, a mostly manual and time-consuming process. They face a number of challenges. Particularly, the manifestation of an attack does not always coincide with the exploited vulnerabilities, and many attacks are hard to reproduce in the lab environment, leaving developers with limited information to locate them. In this paper, we propose Ravel, an architectural approach to pinpoint vulnerabilities from attacks. Ravel consists of an online attack detector and an offline vulnerability locator linked by a record & replay mechanism. Specifically, Ravel records the execution of a production system and simultaneously monitors it for attacks. If an attack is detected, the execution is replayed to reveal the targeted vulnerabilities by analyzing the program's memory access patterns under attack. We have built a prototype of Ravel based on the open-source FreeBSD operating system. The evaluation results in security and performance demonstrate that Ravel can effectively pinpoint various types of memory vulnerabilities and has low performance overhead."}
{"_id": "17168ca2262960c57ee141b5d7095022e038ddb4", "title": "A dataset for activity recognition in an unmodified kitchen using smart-watch accelerometers", "text": "Activity recognition from smart devices and wearable sensors is an active area of research due to the widespread adoption of smart devices and the benefits it provide for supporting people in their daily lives. Many of the available datasets for fine-grained primitive activity recognition focus on locomotion or sports activities with less emphasis on real-world day-to-day behavior. This paper presents a new dataset for activity recognition in a realistic unmodified kitchen environment. Data was collected using only smart-watches from 10 lay participants while they prepared food in an unmodified rented kitchen. The paper also providing baseline performance measures for different classifiers on this dataset. Moreover, a deep feature learning system and more traditional statistical features based approaches are compared. This analysis shows that - for all evaluation criteria - data-driven feature learning allows the classifier to achieve best performance compared with hand-crafted features."}
{"_id": "193305161dbd27f602914616fc7ecea0ce5ac548", "title": "Methods for Evaluating Text Extraction Toolkits: An Exploratory Investigation", "text": "Text extraction tools are vital for obtaining the textual content of computer files and for using the electronic text in a wide variety of applications, including search and natural language processing. However, when extraction tools fail, they convert once reliable electronic text into garbled text, or no text at all. The techniques and tools for validating the accuracy of these text extraction tools are conspicuously absent from academia and industry. This paper contributes to closing this gap. We discuss an exploratory investigation into a method and a set of tools for evaluating a text extraction toolkit. Although this effort focuses on the popular open source Apache Tika toolkit and the govdocs1 corpus, the method generally applies to other text extraction toolkits and corpora."}
{"_id": "55684524a6659cb8456089de8ef691c12d619456", "title": "CIS-UltraCal: An open-source ultrasound calibration toolkit", "text": "We present an open-source MATLAB toolkit for ultrasound calibration. It has a convenient graphical user interface which sits on top of an extensive API. Calibration using three different phantoms is explicitly supported: the cross-wire phantom, the single-wall phantom, and the Hopkins phantom. Image processing of the Hopkins phantom is automated by making use of techniques from binary morphology, radon transform and RANSAC. Numerous calibration and termination parameters are exposed. It is also modular, allowing one to apply the system to original phantoms by writing a minimum of new code."}
{"_id": "95860601e865fcfd191fe49a1467ad13ab35a39d", "title": "The Floyd-Warshall algorithm on graphs with negative cycles", "text": "The Floyd-Warshall algorithm is a simple and widely used algorithm to compute shortest paths between all pairs of vertices in an edge weighted directed graph. It can also be used to detect the presence of negative cycles. We will show that for this task many existing implementations of the Floyd-Warshall algorithm will fail because exponentially large numbers can appear during its execution."}
{"_id": "bca568c230cfd9669325e22757634718557940a3", "title": "An exact algorithm for a vehicle routing problem with time windows and multiple use of vehicles", "text": "The vehicle routing problem with multiple use of vehicles is a variant of the classical vehicle routing problem. It arises when each vehicle performs several routes during the workday due to strict time limits on route duration (e.g., when perishable goods are transported). The routes are defined over customers with a revenue, a demand and a time window. Given a fixed-size fleet of vehicles, it might not be possible to serve all customers. Thus, the customers must be chosen based on their associated revenue minus the traveling cost to reach them. We introduce a branch-and-price approach to address this problem where lower bounds are computed by solving the linear programming relaxation of a set packing formulation, using column generation. The pricing subproblems are elementary shortest path problems with resource constraints. Computational results are reported on euclidean problems derived from well-known benchmark instances for the vehicle routing problem with time windows. 2009 Elsevier B.V. All rights reserved."}
{"_id": "670512be1b488df49f84d5279056362c0a560c5a", "title": "Fibonacci heaps and their uses in improved network optimization algorithms", "text": "In this paper we develop a new data structure for implementing heaps (priority queues). Our structure, Fibonacci heaps (abbreviated F-heaps), extends the binomial queues proposed by Vuillemin and studied further by Brown. F-heaps support arbitrary deletion from an n-item heap in O(log n) amortized time and all other standard heap operations in O(1) amortized time. Using F-heaps we are able to obtain improved running times for several network optimization algorithms. In particular, we obtain the following worst-case bounds, where n is the number of vertices and m the number of edges in the problem graph:- O(n log n + m) for the single-source shortest path problem with nonnegative edge lengths, improved from O(mlog(m/n+2)n);\n
- O(n2log n + nm) for the all-pairs shortest path problem, improved from O(nm log(m/n+2)n);\n
- O(n2log n + nm) for the assignment problem (weighted bipartite matching), improved from O(nmlog(m/n+2)n);\n
- O(m\u03b2(m, n)) for the minimum spanning tree problem, improved from O(mlog log(m/n+2)n); where \u03b2(m, n) = min {i \u21bf log(i)n \u2264 m/n}. Note that \u03b2(m, n) \u2264 log*n if m \u2265 n.\n
Of these results, the improved bound for minimum spanning trees is the most striking, although all the results give asymptotic improvements for graphs of appropriate densities."}
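For context, here is Dijkstra's algorithm written against an ordinary binary heap; with Python's heapq (using lazy deletion in place of decrease-key) it runs in O((n + m) log n), and the F-heap's O(1) amortized decrease-key is exactly what improves this to the O(n log n + m) bound cited above:

```python
# Dijkstra with a binary heap; stale queue entries simulate decrease-key.
import heapq

def dijkstra(adj, s):
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale entry, u was re-labeled later
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

adj = {"s": [("a", 2), ("b", 5)], "a": [("b", 1)], "b": []}
print(dijkstra(adj, "s"))  # {'s': 0, 'a': 2, 'b': 3}
```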
{"_id": "735db80abfa57f2cdafd5236710f3fb1c042be98", "title": "Faster Algorithms for the Shortest Path Problem", "text": "Efficient implementations of Dijkstra's shortest path algorithm are investigated. A new data structure, called the radix heap, is proposed for use in this algorithm. On a network with n vertices, m edges, and nonnegative integer arc costs bounded by C, a one-level form of radix heap gives a time bound for Dijkstra's algorithm of O(m + n log C). A two-level form of radix heap gives a bound of O(m + n log C/log log C). A combination of a radix heap and a previously known data structure called a Fibonacci heap gives a bound of O(m + na @@@@log C). The best previously known bounds are O(m + n log n) using Fibonacci heaps alone and O(m log log C) using the priority queue structure of Van Emde Boas et al. [ 17]."}
{"_id": "29e6f7e71350e6688266d4a4af106e6de44ee467", "title": "Circular and Elliptical CPW-Fed Slot and Microstrip-Fed Antennas for Ultrawideband Applications", "text": "This letter presents novel circular and elliptical coplanar waveguide (CPW)-fed slot and mictrostip-fed antenna designs targeting the 3.1\u00bf10.6 GHz band. The antennas are comprised of elliptical or circular stubs that excite similar-shaped slot apertures. Four prototypes have been examined, fabricated and experimentally tested, the three being fed by a CPW and the fourth by a microstrip line, exhibiting a very satisfactory behavior throughout the 7.5 GHz of the allocated bandwidth in terms of impedance matching $(hbox VSWR\u00bf2)$, radiation efficiency and radiation pattern characteristics. Measured impedance bandwidths of beyond 175% will be presented."}
{"_id": "829033fd070c6ed30d28a21187e0def25a3e809f", "title": "Numerical methods for the Stokes and Navier-Stokes equations driven by threshold slip boundary conditions", "text": "In this article, we discuss the numerical solution of the Stokes and Navier-Stokes equations completed by nonlinear slip boundary conditions of friction type in two and three dimensions. To solve the Stokes system, we first reduce the related variational inequality into a saddle point-point problem for a well chosen augmented Lagrangian. To solve this saddle point problem we suggest an alternating direction method of multiplier together with finite element approximations. The solution of the Navier Stokes system combines finite element approximations, time discretization by operator splitting and augmented Lagrangian method. Numerical experiment results for two and three dimensional flow confirm the interest of these approaches."}
{"_id": "190405c058ad260de3a8733d9e4d2d3e04540ef0", "title": "ETCF: An Ensemble Model for CTR Prediction", "text": "Online advertising has attracted lots of attention in both academic and industrial domains. Among many realworld advertising systems, click through rate (CTR) prediction plays a central role. With a large volume of user history log and given its various features, it is quite a challenge to fully extract the meaningful information inside that amount of data. What's more, for many machine learning models, in order to achieve the best performance of the CTR prediction, a lot of hyper-parameters need to be tuned, which often costs plenty of time. In this paper, we propose an ensemble model named ETCF, which cascades GBDT with gcForest to tackle the practical problems of CTR prediction and do not need much hyper-parameter tuning work to realize its best performance. Experimental results validate the effective prediction power of ETCF against classical baseline models, and the superiority of GBDT transformed features."}
{"_id": "bbab29f9342f5aaf4da94936d5da2275b963aa2b", "title": "A Computational Model for the Dynamical Learning of Event Taxonomies", "text": "We present a computational model that can learn event taxonomies online from the continuous sensorimotor information flow perceived by an agent while interacting with its environment. Our model implements two fundamental learning biases. First, it learns probabilistic event models as temporal sensorimotor forward models and event transition models, which predict event model transitions given particular perceptual circumstances. Second, learning is based on the principle of minimizing free energy, which is further biased towards the detection of free energy transients. As a result, the algorithm forms conceptual structures that encode events and event boundaries. We show that event taxonomies can emerge when the algorithm is run on multiple levels of precision. Moreover, we show that generally any type of forward model can be used, as long as it learns sufficiently fast. Finally, we show that the developed structures can be used to hierarchically plan goaldirected behavior by means of active inference."}
{"_id": "537a49de7de4c36e058893e7fa43d76a56df0423", "title": "Heterogeneity and plasticity of T helper cells", "text": "CD4 T helper (Th) cells play critical roles in adaptive immune responses. They recruit and activate other immune cells including B cells, CD8 T cells, macrophages, mast cells, neutrophils, eosinophils and basophils. Based on their functions, their pattern of cytokine secretion and their expression of specific transcription factors, Th cells, differentiated from na\u00efve CD4 T cells, are classified into four major lineages, Th1, Th2, Th17 and T regulatory (Treg) cells, although other Th lineages may exist. Subsets of the same lineage may express different effector cytokines, reside at different locations or give rise to cells with different fates, whereas cells from different lineages may secrete common cytokines, such as IL-2, IL-9 and IL-10, resulting in massive heterogeneity of the Th cell population. In addition, the pattern of cytokine secretion may switch from that of one lineage toward another under certain circumstances, suggesting that Th cells are plastic. Tregs are also more heterogeneous and plastic than were originally thought. In this review, we summarize recent reports on heterogeneity and plasticity of Th cells, and discuss potential mechanisms and implications of such features that Th cells display."}
{"_id": "6beb4d5d7b232ff3975251a5dc5d5474fc5ea8d6", "title": "A Blind Navigation System Using RFID for Indoor Environments", "text": "A location and tracking system becomes very important to our future world of pervasive computing, where information is all around us. Location is one of the most needed information for emerging and future applications. Since the public use of GPS satellite is allowed, several state-of-the-art devices become part of our life, e.g. a car navigator and a mobile phone with a built-in GPS receiver. However, location information for indoor environments is still very limited as mentioned in paper. Several techniques are proposed to get location information in buildings such as using a radio signal triangulation. Using Radio Frequency Identification (RFID) tags is a new way of giving location information to users. Due to its passive communication circuit, RFID tags can be embedded almost anywhere without an energy source. The tags stores location information and give it to any reader that is within a proximity range which can be up to 7-10 centimeters. In this paper RFID-based system for navigation in a building for blind people or visually impaired"}
{"_id": "be92bcc5ac7ba3c759015ddd00cb47d65e75fc22", "title": "Predicting food nutrition facts using pocket-size near-infrared sensor", "text": "Diet monitoring is one of the most important aspects in preventative health care that aims to reduce various health risks. Manual recording has been a prevalence among all approaches yet it is tedious and often end up with a low adherence rate. Several existing techniques that have been developed to monitor food intake suffer too with accuracy, efficiency, and user acceptance rate. In this paper we propose a novel approach on measuring food nutrition facts, through a pocket-size non-intrusive near-infrared (NIR) scanner. We build efficient regression models that can make quantitative prediction on food nutrition contents, such as energy and carbohydrate. Our extensive experiments on off-the-shelf liquid foods demonstrates the accuracy of these regression models and proves the applicability of using NIR spectra that are collected by small hand-held scanner, on food nutrition prediction."}
{"_id": "dd5d787296203501b9f45e8ed073fdb86fc28a59", "title": "Sequential Query Expansion using Concept Graph", "text": "Manually and automatically constructed concept graphs (or semantic networks), in which the nodes correspond to words or phrases and the typed edges designate semantic relationships between words and phrases, have been previously shown to be rich sources of effective latent concepts for query expansion. However, finding good expansion concepts for a given query in large and dense concept graphs is a challenging problem, since the number of candidate concepts that are related to query terms and phrases and need to be examined increases exponentially with the distance from the original query concepts. In this paper, we propose a two-stage feature-based method for sequential selection of the most effective concepts for query expansion from a concept graph. In the first stage, the proposed method weighs the concepts according to different types of computationally inexpensive features, including collection and concept graph statistics. In the second stage, a sequential concept selection algorithm utilizing more expensive features is applied to find the most effective expansion concepts at different distances from the original query concepts. Experiments on TREC datasets of different type indicate that the proposed method achieves significant improvement in retrieval accuracy over state-of-the-art methods for query expansion using concept graphs."}
{"_id": "8f5d905a63dc11f3895949c05c73a660d73aa4b7", "title": "Timing to Perfection: The Biology of Central and Peripheral Circadian Clocks", "text": "The mammalian circadian system, which is comprised of multiple cellular clocks located in the organs and tissues, orchestrates their regulation in a hierarchical manner throughout the 24\u00a0hr of the day. At the top of the hierarchy are the suprachiasmatic nuclei, which synchronize subordinate organ and tissue clocks using electrical, endocrine, and metabolic signaling pathways that impact the molecular mechanisms of cellular clocks. The interplay between the central neural and peripheral tissue clocks is not fully understood and remains a major challenge in determining how neurological and metabolic homeostasis is achieved across the sleep-wake cycle. Disturbances in the communication between the plethora of body clocks can desynchronize the circadian system, which is believed to contribute to the development of diseases such as obesity and neuropsychiatric disorders. This review will highlight the relationship between clocks and metabolism, and describe how cues such as light, food, and reward mediate entrainment of the circadian system."}
{"_id": "ae007a9e2384a6a607a5e477f3468e0a36d67a42", "title": "Deep learning to extract laboratory mouse ultrasonic vocalizations from scalograms", "text": "We tested two techniques that can assist in the automatic extraction and analysis of laboratory mouse ultrasonic vocalizations. First, we substituted a Morlet-wavelet-based scalogram in place of the commonly used Fourier-based spectrogram. The frequencies in a scalogram are logarithmically scaled, which likely makes it better model a mouse's pitch perception based on the structure of its inner ear. Works that use the linear spectrogram are more likely to overemphasize frequency change at high frequencies, and so might misclassify calls when partitioning them according to frequency modulation. We showed that laboratory mice do indeed modulate their calls more when making high-frequency calls, which we would expect if they perceive pitch logarithmically. Secondly, we used \u201cdeep\u201d convolutional neural networks to automatically identify calls within the scalogram. A convolutional neural network performs much like the animal visual cortex, and has enjoyed recent success in computer-vision and related problems. We compared the convolutional network to a conventional fully-connected neural network and the Song filter used by recent papers, and found that our convolutional network identifies calls with greater precision and recall."}
{"_id": "73dffadfc50ebfc3d8e426ea9c05d06708f612ff", "title": "Towards Secure Provenance in the Cloud: A Survey", "text": "Provenance information are meta-data that summarize the history of the creation and the actions performed on an artefact e.g. data, process etc. Secure provenance is essential to improve data forensics, ensure accountability and increase the trust in the cloud. In this paper, we survey the existing cloud provenance management schemes and proposed security solutions. We investigate the current related security challenges resulting from the nature of the provenance model and the characteristics of the cloud and we finally identify potential research directions which we feel necessary t should be covered in order to build a secure cloud provenance for the next generation."}
{"_id": "daf6ddd50515d5408cb9f6183c5d308adee7d521", "title": "Community detection with partially observable links and node attributes", "text": "Community detection has been an important task for social and information networks. Existing approaches usually assume the completeness of linkage and content information. However, the links and node attributes can usually be partially observable in many real-world networks. For example, users can specify their privacy settings to prevent non-friends from viewing their posts or connections. Such incompleteness poses additional challenges to community detection algorithms. In this paper, we aim to detect communities with partially observable link structure and node attributes. To fuse such incomplete information, we learn link-based and attribute-based representations via kernel alignment and a co-regularization approach is proposed to combine the information from both sources (i.e., links and attributes). The link-based and attribute-based representations can lend strength to each other via the partial consensus learning. We present two instantiations of this framework by enforcing hard and soft consensus constraint respectively. Experimental results on real-world datasets show the superiority of the proposed approaches over the baseline methods and its robustness under different observable levels."}
{"_id": "119dc65e021736a5dbbb6f2cbd57d8b158fb2cd5", "title": "The Fast-Forward Planning System", "text": "planner in the Fifth International Conference on Artificial Intelligence Planning and Scheduling (AIPS\u201900) planning systems competition. Like the well-known HSP system, FF relies on forward search in the state space, guided by a heuristic that estimates goal distances by ignoring delete lists. It differs from HSP in a number of important details. This article describes the algorithmic techniques used in FF in comparison to HSP and evaluates their benefits in terms of run-time and solution-length behavior."}
{"_id": "5a1032042b0a201d63cfc190b121c6994bd130a3", "title": "Detailed Full-Body Reconstructions of Moving People from Monocular RGB-D Sequences", "text": "We accurately estimate the 3D geometry and appearance of the human body from a monocular RGB-D sequence of a user moving freely in front of the sensor. Range data in each frame is first brought into alignment with a multi-resolution 3D body model in a coarse-to-fine process. The method then uses geometry and image texture over time to obtain accurate shape, pose, and appearance information despite unconstrained motion, partial views, varying resolution, occlusion, and soft tissue deformation. Our novel body model has variable shape detail, allowing it to capture faces with a high-resolution deformable head model and body shape with lower-resolution. Finally we combine range data from an entire sequence to estimate a high-resolution displacement map that captures fine shape details. We compare our recovered models with high-resolution scans from a professional system and with avatars created by a commercial product. We extract accurate 3D avatars from challenging motion sequences and even capture soft tissue dynamics."}
{"_id": "72f3753ea7e5e81b9bde11a64ebe65bed9e0bcbd", "title": "Auto-Join: Joining Tables by Leveraging Transformations", "text": "Traditional equi-join relies solely on string equality comparisons to perform joins. However, in scenarios such as adhoc data analysis in spreadsheets, users increasingly need to join tables whose join-columns are from the same semantic domain but use different textual representations, for which transformations are needed before equi-join can be performed. We developed Auto-Join, a system that can automatically search over a rich space of operators to compose a transformation program, whose execution makes input tables equi-join-able. We developed an optimal sampling strategy that allows Auto-Join to scale to large datasets efficiently, while ensuring joins succeed with high probability. Our evaluation using real test cases collected from both public web tables and proprietary enterprise tables shows that the proposed system performs the desired transformation joins efficiently and with high quality."}
{"_id": "1810f1ee76ef2be504e9b27f5178b09d95ce605c", "title": "Integrated ultra-high impedance front-end for non-contact biopotential sensing", "text": "Non-contact ECG/EEG electrodes, which operate primarily through capacitive coupling, have been extensively studied for unobtrusive physiological monitoring. Previous implementations using discrete off-the-shelf amplifiers have been encumbered by the need for manually tuned input capacitance neutralization networks and complex DC-biasing schemes. We have designed and fabricated a custom integrated high input impedance (60 fF\u22255T\u2126), low noise (0.05 fA/\u221aHz) non-contact sensor front-end specifically to bypass these limitations. The amplifier fully bootstraps both the internal and external parasitic impedances to achieve an input capacitance of 60 fF without neutralization. To ensure DC stability and eliminate the need for external large valued resistances, a low-leakage, on-chip biasing network is included. Stable frequency response is demonstrated below 0.05 Hz even with coupling capacitances as low as 0.5 pF."}
{"_id": "2eb1be4c3aa3f4103c0d90d4f3e421476553ebec", "title": "Machine learning on sequential data using a recurrent weighted average", "text": "Recurrent Neural Networks (RNN) are a type of statistical model designed to handle sequential data. The model reads a sequence one symbol at a time. Each symbol is processed based on information collected from the previous symbols. With existing RNN architectures, each symbol is processed using only information from the previous processing step. To overcome this limitation, we propose a new kind of RNN model that computes a recurrent weighted average (RWA) over every past processing step. Because the RWA can be computed as a running average, the computational overhead scales like that of any other RNN architecture. The approach essentially reformulates the attention mechanism into a stand-alone model. The performance of the RWA model is assessed on the variable copy problem, the adding problem, classification of artificial grammar, classification of sequences by length, and classification of the MNIST images (where the pixels are read sequentially one at a time). On almost every task, the RWA model is found to fit the data significantly faster than a standard LSTM model."}
{"_id": "a73f80f35ef0fcaba38a0119b54a987b27b5bac6", "title": "Job shop scheduling", "text": "Then\u00d7m minimum-makespan general job-shop scheduling problem, hereafter referred to as the JSSP, can be described by a set of n jobs{Ji}1\u2264j\u2264n which is to be processed on a set of m machines{Mr}1\u2264r\u2264m. Each job has a technological sequence of machines to be processed. The processing of job Jj on machineMr is called theoperationOjr. OperationOjr requires the exclusive use of Mr for an uninterrupted durationpjr, its processing time. Ascheduleis a set of completion times for each operation {cjr}1\u2264j\u2264n,1\u2264r\u2264m that satisfies those constraints. The time required to complete all the jobs is called the makespanL. The objective when solving or optimizing this general problem is to determine t h schedule which minimizesL. An example of a3 \u00d7 3 JSSP is given in Table 7.1. The data includes the"}
{"_id": "911e19ea5130cabb88f39f405a3d9d3191a5fcce", "title": "Improving the Transcription of Academic Lectures for Information Retrieval", "text": "Recording university lectures through lecture capture systems is increasingly common, generating large amounts of audio and video data. Transcribing recordings greatly enhances their usefulness by making them easy to search. However, the number of recordings accumulates rapidly, rendering manual transcription impractical. Automatic transcription, on the other hand, suffers from low levels of accuracy, partly due to the special language of academic disciplines, which standard language models do not cover. This paper looks into the use of Wikipedia to dynamically adapt language models for scholarly speech. We propose Ranked Word Correct Rate as a new metric better aligned with the goals of improving transcript search ability and specialist word recognition. The study shows that, while overall transcription accuracy may remain low, targeted language modelling can substantially improve search ability, an important goal in its own right."}
{"_id": "1645fc7f9f8f4965d94f0d62584f72322f528052", "title": "Aggregate G-Buffer Anti-Aliasing -Extended Version-", "text": "We present Aggregate G-Buffer Anti-Aliasing (AGAA), a new technique for efficient anti-aliased deferred rendering of complex geometry using modern graphics hardware. In geometrically complex situations where many surfaces intersect a pixel, current rendering systems shade each contributing surface at least once per pixel. As the sample density and geometric complexity increase, the shading cost becomes prohibitive for real-time rendering. Under deferred shading, so does the required framebuffer memory. Our goal is to make high per-pixel sampling rates practical for real-time applications by substantially reducing shading costs and per-pixel storage compared to traditional deferred shading. AGAA uses the rasterization pipeline to generate a compact, pre-filtered geometric representation inside each pixel. We shade this representation at a fixed rate, independent of geometric complexity. By decoupling shading rate from geometric sampling rate, the algorithm reduces the storage and bandwidth costs of a geometry buffer, and allows scaling to high visibility sampling rates for anti-aliasing. AGAA with two aggregates per-pixel generates results comparable to 32$\\times$ MSAA, but requires 54 percent less memory and is up to 2.6$\\times$ faster ( $-30$ percent memory and 1.7 $\\times$ faster for 8 $\\times$ MSAA)."}
{"_id": "17eddf33b513ae1134abadab728bdbf6abab2a05", "title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning", "text": "Sequential prediction problems such as imitation learning, where future observations depend on previous predictions (actions), violate the common i.i.d. assumptions made in statistical learning. This leads to poor performance in theory and often in practice. Some recent approaches (Daum\u00e9 III et al., 2009; Ross and Bagnell, 2010) provide stronger guarantees in this setting, but remain somewhat unsatisfactory as they train either non-stationary or stochastic policies and require a large number of iterations. In this paper, we propose a new iterative algorithm, which trains a stationary deterministic policy, that can be seen as a no regret algorithm in an online learning setting. We show that any such no regret algorithm, combined with additional reduction assumptions, must find a policy with good performance under the distribution of observations it induces in such sequential settings. We demonstrate that this new approach outperforms previous approaches on two challenging imitation learning problems and a benchmark sequence labeling problem."}
{"_id": "9ffe50db7c0e8b976123a26bdfa30614c51432bc", "title": "CDM protection design for CMOS applications using RC-triggered rail clamps", "text": "A new ESD protection design method is introduced that optimizes the CDM circuit response of an RC-triggered rail clamp network for I/O protection. The design is verified by VF-TLP and CDM test data of I/O test rings with two different output driver options."}
{"_id": "7f2d8be4305321e3690eab3a61857d4a80db664a", "title": "A Generalized Path Integral Control Approach to Reinforcement Learning", "text": "With the goal to generate more scalable algorithms with high er efficiency and fewer open parameters, reinforcement learning (RL) has recently moved towar ds combining classical techniques from optimal control and dynamic programming with modern learni ng techniques from statistical estimation theory. In this vein, this paper suggests to use the fr amework of stochastic optimal control with path integrals to derive a novel approach to RL with para meterized policies. While solidly grounded in value function estimation and optimal control b ased on the stochastic Hamilton-JacobiBellman (HJB) equations, policy improvements can be transf ormed into an approximation problem of a path integral which has no open algorithmic parameters o ther than the exploration noise. The resulting algorithm can be conceived of as model-based, sem i-model-based, or even model free, depending on how the learning problem is structured. The upd ate equations have no danger of numerical instabilities as neither matrix inversions nor g radient learning rates are required. Our new algorithm demonstrates interesting similarities with previous RL research in the framework of probability matching and provides intuition why the slig htly heuristically motivated probability matching approach can actually perform well. Empirical eva luations demonstrate significant performance improvements over gradient-based policy learnin g a d scalability to high-dimensional control problems. Finally, a learning experiment on a simul ated 12 degree-of-freedom robot dog illustrates the functionality of our algorithm in a complex robot learning scenario. We believe that Policy Improvement withPath Integrals ( PI2) offers currently one of the most efficient, numerically robust, and easy to implement algorithms for RL based o n trajectory roll-outs."}
{"_id": "3d5575a554dae9f244c5b552a567874524302756", "title": "Tactical Language and Culture Training Systems: Using AI to Teach Foreign Languages and Cultures", "text": "72 AI MAGAZINE The Tactical Language and Culture Training System (TLCTS) helps people quickly acquire functional skills in foreign languages and cultures. It includes interactive lessons that focus on particular communicative skills and interactive games that apply those skills. Heavy emphasis is placed on spoken communication: learners must learn to speak the foreign language to complete the lessons and play the games. It focuses on the language and cultural skills needed to accomplish particular types of tasks and gives learners rich, realistic opportunities to practice achieving those tasks. Several TLCTS courses have been developed so far. Tactical Iraqi, Tactical Pashto, and Tactical French are in widespread use by U.S. marines and soldiers, and increasingly by military service members in other countries. Additional courses are being developed for use by business executives, workers for nongovernmental organizations, and high school and college students. While precise numbers are impossible to obtain (we do not control copies made by the U.S. government), over 40,000 and as many as 60,000 people have trained so far with TLCTS courses. More than 1000 people download copies of TLCTS courses each month, either for their own use or to set up computer language labs and redistribute copies to students. Just one training site, the military advisor training center at Fort Riley, Kansas, trains approximately 10,000 people annually. Artificial intelligence technologies play multiple essential functions in TLCTS. Speech is the primary input modality, so automated speech recognition tailored to foreign language learners is essential. TLCTS courses are populated with \u201cvirtual humans\u201d that engage in dialogue with learners. AI techniques are used to model the decision processes of the virtual humans and to support the generation of their behavior. This makes it possible to give learners extensive conversational practice. Learner modeling software continually monitors each learner\u2019s application of communication skills to estimate the learner\u2019s"}
{"_id": "7d060fc04306580d8693e1335caf4c37ad83357b", "title": "Sphinx-4: a flexible open source framework for speech recognition", "text": "Sphinx-4 is a flexible, modular and pluggable framework to he lp foster new innovations in the core research of hidden Mark ov model (HMM) recognition systems. The design of Sphinx-4 is b a ed on patterns that have emerged from the design of past sys tems as well as new requirements based on areas that researchers c urr ntly want to explore. To exercise this framework, and to pr vide researchers with a \u201dresearch-ready\u201d system, Sphinx-4 also includes several implementations of both simple and stateofhe-art techniques. The framework and the implementations are all f reely available via open source."}
{"_id": "7e7a9771e954ed0e6605986b4d2e53be7b7a9550", "title": "My science tutor: A conversational multimedia virtual tutor for elementary school science", "text": "This article describes My Science Tutor (MyST), an intelligent tutoring system designed to improve science learning by students in 3rd, 4th, and 5th grades (7 to 11 years old) through conversational dialogs with a virtual science tutor. In our study, individual students engage in spoken dialogs with the virtual tutor Marni during 15 to 20 minute sessions following classroom science investigations to discuss and extend concepts embedded in the investigations. The spoken dialogs in MyST are designed to scaffold learning by presenting open-ended questions accompanied by illustrations or animations related to the classroom investigations and the science concepts being learned. The focus of the interactions is to elicit self-expression from students. To this end, Marni applies some of the principles of Questioning the Author, a proven approach to classroom conversations, to challenge students to think about and integrate new concepts with prior knowledge to construct enriched mental models that can be used to explain and predict scientific phenomena. In this article, we describe how spoken dialogs using Automatic Speech Recognition (ASR) and natural language processing were developed to stimulate students' thinking, reasoning and self explanations. We describe the MyST system architecture and Wizard of Oz procedure that was used to collect data from tutorial sessions with elementary school students. Using data collected with the procedure, we present evaluations of the ASR and semantic parsing components. A formal evaluation of learning gains resulting from system use is currently being conducted. This paper presents survey results of teachers' and children's impressions of MyST."}
{"_id": "12f929e2a6ee67bcfcec626229230c54750fd293", "title": "The architecture of the Festival speech synthesis system", "text": "We describe a new formalism for storing linguistic data in a text to speech system. Linguistic entities such as words and phones are stored as feature structures in a general object called an linguistic item. Items are configurable at run time and via the feature structure can contain arbitrary information. Linguistic relations are used to store the relationship between items of the same linguistic type. Relations can take any graph structure but are commonly trees or lists. Utterance structures contain all the items and relations contained in a single utterance. We first describe the design goals when building a synthesis architecture, and then describe some problems with previous architectures. We then discuss our new formalism in general along with the implementation details and consequences of our approach."}
{"_id": "1354c00d5b2bc4b64c0dc63d1d94e8724d3a31b5", "title": "Finite-State Transducers in Language and Speech Processing", "text": "Finite-state machines have been used in various domains of natural language processing. We consider here the use of a type of transducers that supports very efficient programs: sequential transducers. We recall classical theorems and give new ones characterizing sequential string-tostring transducers. Transducers that output weights also play an important role in language and speech processing. We give a specific study of string-to-weight transducers, including algorithms for determinizing and minimizing these transducers very efficiently, and characterizations of the transducers admitting determinization and the corresponding algorithms. Some applications of these algorithms in speech recognition are described and illustrated."}
{"_id": "3b095e04f13a487c0b8679e64098d7929c1d7db7", "title": "Measuring benchmark similarity using inherent program characteristics", "text": "This paper proposes a methodology for measuring the similarity between programs based on their inherent microarchitecture-independent characteristics, and demonstrates two applications for it: 1) finding a representative subset of programs from benchmark suites and 2) studying the evolution of four generations of SPEC CPU benchmark suites. Using the proposed methodology, we find a representative subset of programs from three popular benchmark suites - SPEC CPU2000, MediaBench, and MiBench. We show that this subset of representative programs can be effectively used to estimate the average benchmark suite IPC, L1 data cache miss-rates, and speedup on 11 machines with different ISAs and microarchitectures - this enables one to save simulation time with little loss in accuracy. From our study of the similarity between the four generations of SPEC CPU benchmark suites, we find that, other than a dramatic increase in the dynamic instruction count and increasingly poor temporal data locality, the inherent program characteristics have more or less remained unchanged"}
{"_id": "8161198fc00f5ac5112ea83427d8452c38d76a8b", "title": "On Probabilistic Analysis of Disagreement in Synchronous Consensus Protocols", "text": "This paper presents a probabilistic analysis of disagreement for a family of simple synchronous consensus algorithms aimed at solving the 1-of-n selection problem in presence of unrestricted communication failures. In this problem, a set of n nodes are to select one common value among n proposed values. There are two possible outcomes of each node's selection process: decide to select a value or abort. We have disagreement if some nodes select the same value while other nodes decide to abort. Previous research has shown that it is impossible to guarantee agreement among the nodes subjected to an unbounded number of message losses. Our aim is to find decision algorithms for which the probability of disagreement is as low as possible. In this paper, we investigate two different decision criteria, one optimistic and one pessimistic. We assume two communication failure models, symmetric and asymmetric. For symmetric communication failures, we present the closed-form expressions for the probability of disagreement. For asymmetric failures, we analyse the algorithm using a probabilistic model checking tool. Our results show that the choice of decision criterion significantly influences the probability of disagreement for the 1-of-n selection algorithm. The optimistic decision criterion shows a lower probability of disagreement compare to the pessimistic one when the probability of message loss is less than 30% to 70%. On the other hand, the optimistic decision criterion has in general a higher maximum probability of disagreement compared to the pessimistic criterion."}
{"_id": "718c4a116678c9295e01631154385b01f4d7afda", "title": "Combined Effects of Attention and Motivation on Visual Task Performance: Transient and Sustained Motivational Effects", "text": "We investigated how the brain integrates motivational and attentional signals by using a neuroimaging paradigm that provided separate estimates for transient cue- and target-related signals, in addition to sustained block-related responses. Participants performed a Posner-type task in which an endogenous cue predicted target location on 70% of trials, while motivation was manipulated by varying magnitude and valence of a cash incentive linked to task performance. Our findings revealed increased detection performance (d') as a function of incentive value. In parallel, brain signals revealed that increases in absolute incentive magnitude led to cue- and target-specific response modulations that were independent of sustained state effects across visual cortex, fronto-parietal regions, and subcortical regions. Interestingly, state-like effects of incentive were observed in several of these brain regions, too, suggesting that both transient and sustained fMRI signals may contribute to task performance. For both cue and block periods, the effects of administering incentives were correlated with individual trait measures of reward sensitivity. Taken together, our findings support the notion that motivation improves behavioral performance in a demanding attention task by enhancing evoked responses across a distributed set of anatomical sites, many of which have been previously implicated in attentional processing. However, the effect of motivation was not simply additive as the impact of absolute incentive was greater during invalid than valid trials in several brain regions, possibly because motivation had a larger effect on reorienting than orienting attentional mechanisms at these sites."}
{"_id": "38a1dc0fb154680a5716e8b77a4776c1e5c0b8db", "title": "Final Project : Entangled Photons and Quantum Teleportation", "text": "This paper contains an overview of the concept of photon entanglement and discusses generating EPR photon pairs and performing Bell state measurements on them. The procedure of quantum teleportation, or transferring the quantum state of a particle onto another particle, is described in detail. The experimental progress in performing such operations is reviewed in relationship to the developing field of quantum communication."}
{"_id": "725fc85e3134a387ad1d161bf3586507db3dc903", "title": "Vehicle tracking system for intelligent and connected vehicle based on radar and V2V fusion", "text": "The environment perception plays a significantly role in intelligent vehicles and the advanced driver assistance system (ADAS), which enhances the driving safety and convenience. Target tracking is one of the key technologies of environment perception. The on-board sensors such as cameras and radar are commonly used for target tracking while they have limitations in terms of detection range and angle of view. One way to overcome the perception limitations of on-board ranging sensors by incorporating the vehicle-to-vehicle (V2V) communication. This paper proposes a vehicle tracking system which fuse the radar and V2V information to improve the target tracking accuracy. The proposed system integrates the radar, GPS and DSRC communication equipment. The GPS and radar are utilized to obtain its own position information and the position information of nearby vehicles. The proposed system also resolves the problem of data association in multiple target tracking measurements by other connected vehicles' identity information. With the association measurements, a Kalman filter is used to improve the accuracy of target tracking. An assessment of tracking system in real road environment shows that the proposed fusion approach for target tracking can reduce the data association error and improve the vehicle target tracking accuracy."}
{"_id": "926c7f50510a1f4376651c4d46df5fcc935c3004", "title": "Prior Knowledge, Level Set Representations & Visual Grouping", "text": "In this paper, we propose a level set method for shape-driven object extraction. We introduce a voxel-wise probabilistic level set formulation to account for prior knowledge. To this end, objects are represented in an implicit form. Constraints on the segmentation process are imposed by seeking a projection to the image plane of the prior model modulo a similarity transformation. The optimization of a statistical metric between the evolving contour and the model leads to motion equations that evolve the contour toward the desired image properties while recovering the pose of the object in the new image. Upon convergence, a solution that is similarity invariant with respect to the model and the corresponding transformation are recovered. Promising experimental results demonstrate the potential of such an approach."}
{"_id": "524dbb2c6592fd7889e96f0745b6ec5ce9c5adc8", "title": "Project Success : A Multidimensional Strategic Concept", "text": "This article presents projects as powerful strategic weapons, initiated to create economic value and competitive advantage. It suggests that project managers are the new strategic leaders, who must take on total responsibility for project business results. Defining and assessing project success is therefore a strategic management concept, which should help align project efforts with the shortand long-term goals of the organization. While this concept seems simple and intuitive, there is very little agreement in previous studies as to what really constitutes project success. Traditionally, projects were perceived as successful when they met time, budget, and performance goals. However, many would agree that there is more to project success than meeting time and budget. The object of this study was to develop a multidimensional framework for assessing project success, showing how different dimensions mean different things to different stakeholders at different times and for different projects. Given the complexity of this question, a combination of qualitative and quantitative methods and two data sets were used. The analysis identified four major distinct success dimensions: (1) project efficiency, (2) impact on the customer, (3) direct business and organizational success, and (4) preparing for the future. The importance of the dimensions varies according to time and the level of technological uncertainty involved in the project. The article demonstrates how these dimensions should be addressed during the project\u2019s definition, planning, and execution phases, and provides a set of guidelines for project managers and senior managers, as well as suggestions for further research. c 2002 Elsevier Science Ltd. All rights reserved."}
{"_id": "62a6cf246c9bec56babab9424fa36bfc9d4a47e8", "title": "CFO: Conditional Focused Neural Question Answering with Large-scale Knowledge Bases", "text": "How can we enable computers to automatically answer questions like \u201cWho created the character Harry Potter\u201d? Carefully built knowledge bases provide rich sources of facts. However, it remains a challenge to answer factoid questions raised in natural language due to numerous expressions of one question. In particular, we focus on the most common questions \u2014 ones that can be answered with a single fact in the knowledge base. We propose CFO, a Conditional Focused neuralnetwork-based approach to answering factoid questions with knowledge bases. Our approach first zooms in a question to find more probable candidate subject mentions, and infers the final answers with a unified conditional probabilistic framework. Powered by deep recurrent neural networks and neural embeddings, our proposed CFO achieves an accuracy of 75.7% on a dataset of 108k questions \u2013 the largest public one to date. It outperforms the current state of the art by an absolute margin of 11.8%."}
{"_id": "a7b640053b5bbdc24085e5adf04c3c8393c1e645", "title": "Forensics examination of volatile system data using virtual introspection", "text": "While static examination of computer systems is an important part of many digital forensics investigations, there are often important system properties present only in volatile memory that cannot be effectively recovered using static analysis techniques, such as offline hard disk acquisition and analysis. An alternative approach, involving the live analysis of target systems to uncover this volatile data, presents significant risks and challenges to forensic investigators as observation techniques are generally intrusive and can affect the system being observed. This paper provides a discussion of live digital forensics analysis through virtual introspection and presents a suite of virtual introspection tools developed for Xen (VIX tools). The VIX tools suite can be used for unobtrusive digital forensic examination of volatile system data in virtual machines, and addresses a key research area identified in the virtualization in digital forensics research agenda [22]."}
{"_id": "317728b77280386e23a4f291af2094f69f574883", "title": "Better \u03f5-Dependencies for Offline Approximate Nearest Neighbor Search, Euclidean Minimum Spanning Trees, and \u03f5-Kernels", "text": "Recently, Arya, da Fonseca, and Mount [STOC 2011, SODA 2012] made notable progress in improving the &epsis;-dependencies in the space/query-time tradeoffs for (1 + &epsis;)-factor approximate nearest neighbor search in fixed-dimensional Euclidean spaces. However, &epsis;-dependencies in the preprocessing time were not considered, and so their data structures cannot be used to derive faster algorithms for offline proximity problems. Known algorithms for many such problems, including approximate bichromatic closest pair (BCP) and approximate Euclidean minimum spanning trees (EMST), typically have factors near (1/&epsis;)d/2\u00b1O(1) in the running time when the dimension d is a constant.\n We describe a technique that breaks the (1/&epsis;)d/2 barrier and yields new results for many well-known proximity problems, including:\n \u2022 an O((1/&epsis;)d/3+O(1) n)-time randomized algorithm for approximate BCP,\n \u2022 an O((1/&epsis;)d/3+O(1) n log n)-time algorithm for approximate EMST, and\n \u2022 an O(n log n + (1/&epsis;)d/3+O(1) n)-time algorithm to answer n approximate nearest neighbor queries on n points.\n Using additional bit-packing tricks, we can shave off the log n factor for EMST, and even move most of the &epsis;-factors to a sublinear term.\n The improvement arises from a new time bound for exact \"discrete Voronoi diagrams\", which were previously used in the construction of &epsis;-kernels (or extent-based coresets), a well-known tool for another class of fundamental problems. This connection leads to more results, including:\n \u2022 a streaming algorithm to maintain an approximate diameter in O((1/&epsis;)d/3+O(1)) time per point using O((1/&epsis;)d/2+O(1)) space, and\n \u2022 a streaming algorithm to maintain an &epsis;-kernel in O((1/&epsis;)d/4+O(1)) time per point using O((1/&epsis;)d/2+O(1)) space."}
{"_id": "1a3470626b24ccd510047925f80d21affde3c3b8", "title": "Automatic Calcium Scoring in Low-Dose Chest CT Using Deep Neural Networks With Dilated Convolutions", "text": "Heavy smokers undergoing screening with low-dose chest CT are affected by cardiovascular disease as much as by lung cancer. Low-dose chest CT scans acquired in screening enable quantification of atherosclerotic calcifications and thus enable identification of subjects at increased cardiovascular risk. This paper presents a method for automatic detection of coronary artery, thoracic aorta, and cardiac valve calcifications in low-dose chest CT using two consecutive convolutional neural networks. The first network identifies and labels potential calcifications according to their anatomical location and the second network identifies true calcifications among the detected candidates. This method was trained and evaluated on a set of 1744 CT scans from the National Lung Screening Trial. To determine whether any reconstruction or only images reconstructed with soft tissue filters can be used for calcification detection, we evaluated the method on soft and medium/sharp filter reconstructions separately. On soft filter reconstructions, the method achieved F1 scores of 0.89, 0.89, 0.67, and 0.55 for coronary artery, thoracic aorta, aortic valve, and mitral valve calcifications, respectively. On sharp filter reconstructions, the F1 scores were 0.84, 0.81, 0.64, and 0.66, respectively. Linearly weighted kappa coefficients for risk category assignment based on per subject coronary artery calcium were 0.91 and 0.90 for soft and sharp filter reconstructions, respectively. These results demonstrate that the presented method enables reliable automatic cardiovascular risk assessment in all low-dose chest CT scans acquired for lung cancer screening."}
{"_id": "5776d0fea69d826519ee3649f620e8755a490efe", "title": "Lifelong Machine Learning Systems: Beyond Learning Algorithms", "text": "Lifelong Machine Learning, or LML, considers systems that can learn many tasks from one or more domains over its lifetime. The goal is to sequentially retain learned knowledge and to selectively transfer that knowledge when learning a new task so as to develop more accurate hypotheses or policies. Following a review of prior work on LML, we propose that it is now appropriate for the AI community to move beyond learning algorithms to more seriously consider the nature of systems that are capable of learning over a lifetime. Reasons for our position are presented and potential counter-arguments are discussed. The remainder of the paper contributes by defining LML, presenting a reference framework that considers all forms of machine learning, and listing several key challenges for and benefits from LML research. We conclude with ideas for next steps to advance the field."}
{"_id": "c80ed55dc33d7da5530873fdfb6fad925ba78f30", "title": "Scalp dermoscopy of androgenetic alopecia in Asian people.", "text": "Although dermoscopy is used mainly for diagnosing pigmented skin lesions, this device has been reported to be useful in observing alopecia areata and frontal fibrosing alopecia. Herein, we investigated the dermoscopic features and their incidence of androgenetic alopecia (AGA; n = 50 men) and female AGA (FAGA; n = 10 women) in Asian people. More than 20% hair diameter diversity (HDD), which reportedly is an early sign of AGA and corresponds to hair follicle miniaturization, was observed in the affected area of all AGA and FAGA cases, suggesting that HDD is an essential feature to diagnose AGA and FAGA. Peripilar signs, corresponding to perifollicular pigmentation, were seen in 66% (33/50) of AGA and 20% (2/10) of FAGA women. This incidence in the present study was lower than previously reported in white subjects possibly because the Asian skin color conceals slight peripilar pigmentation. Yellow dots were observed in 26% (13/50) of AGA and 10% (1/10) of FAGA cases and the number of yellow dots in AGA and FAGA was limited to 10 on the overall hair loss area. Yellow dots possibly indicate the coincidence of AGA and enlargement of the sebaceous glands caused by common end-organ hypersensitivity to androgen. In conclusion, dermoscopy is useful to diagnose AGA and FAGA and provides insights into the pathogenesis of AGA."}
{"_id": "5f52a2c6e896ad3d745f6f33e78b85e810d6234c", "title": "Abstractive Summarization of Reddit Posts with Multi-level Memory Networks", "text": "ive Summarization of Reddit Posts with Multi-level Memory Networks Byeongchang Kim Hyunwoo Kim Gunhee Kim Department of Computer Science and Engineering & Center for Superintelligence Seoul National University, Seoul, Korea {byeongchang.kim,hyunwoo.kim}@vision.snu.ac.kr gunhee@snu.ac.kr http://vision.snu.ac.kr/projects/reddit-tifu"}
{"_id": "1be6597999e71483f73c57ed35e3fb45f132c0fc", "title": "FARMER: a novel approach to file access correlation mining and evaluation reference model for optimizing peta-scale file system performance", "text": "File correlation, which refers to a relationship among related files that can manifest in the form of their common access locality (temporal and/or spatial), has become an increasingly important consideration for performance enhancement in peta-scale storage systems. Previous studies on file correlations mainly concern with two aspects of files: file access sequence and semantic attribute. Based on mining with regard to these two aspects of file systems, various strategies have been proposed to optimize the overall system performance. Unfortunately, all of these studies consider either file access sequences or semantic attribute information separately and in isolation, thus unable to accurately and effectively mine file correlations, especially in large-scale distributed storage systems.\n This paper introduces a novel File Access coRrelation Mining and Evaluation Reference model (FARMER) for optimizing petascale file system performance that judiciously considers both file access sequences and semantic attributes simultaneously to evaluate the degree of file correlations by leveraging the Vector Space Model (VSM) technique adopted from the Information Retrieval field. We extract the file correlation knowledge from some typical file system traces using FARMER, and incorporate FARMER into a real large-scale object-based storage system as a case study to dynamically infer file correlations and evaluate the benefits and costs of a FARMER-enabled prefetching algorithm for the metadata servers under real file system workloads. Experimental results show that FARMER can mine and evaluate file correlations more accurately and effectively. More significantly, the FARMER-enabled prefetching algorithm is shown to reduce the metadata operations latency by approximately 24-35% when compared to a state-of-the-art metadata prefetching algorithm and a commonly used replacement policy."}
{"_id": "fdb229ad4b68e40daa45539003ab31067fea11b9", "title": "A Risk Management Approach to the \"Insider Threat\"", "text": "Recent surveys indicate that the financial impact and operating losses due to insider intrusions are increasing. But these studies often disagree on what constitutes an \u201cinsider;\u201d indeed, many define it only implicitly. In theory, appropriate selection of, and enforcement of, properly specified security policies should prevent legitimate users from abusing their access to computer systems, information, and other resources. However, even if policies could be expressed precisely, the natural mapping between the natural language expression of a security policy, and the expression of that policy in a form that can be implemented on a computer system or network, creates gaps in enforcement. This paper defines \u201cinsider\u201d precisely, in terms of these gaps, and explores an access-based model for analyzing threats that include those usually termed \u201cinsider threats.\u201d This model enables an organization Matt Bishop Matt Bishop, Dept. of Computer Science, University of California at Davis e-mail: bishop@cs."}
{"_id": "0a59337568cbf74e7371fb543f7ca34bbc2153ac", "title": "Geodesic flow kernel for unsupervised domain adaptation", "text": "In real-world applications of visual recognition, many factors - such as pose, illumination, or image quality - can cause a significant mismatch between the source domain on which classifiers are trained and the target domain to which those classifiers are applied. As such, the classifiers often perform poorly on the target domain. Domain adaptation techniques aim to correct the mismatch. Existing approaches have concentrated on learning feature representations that are invariant across domains, and they often do not directly exploit low-dimensional structures that are intrinsic to many vision datasets. In this paper, we propose a new kernel-based method that takes advantage of such structures. Our geodesic flow kernel models domain shift by integrating an infinite number of subspaces that characterize changes in geometric and statistical properties from the source to the target domain. Our approach is computationally advantageous, automatically inferring important algorithmic parameters without requiring extensive cross-validation or labeled data from either domain. We also introduce a metric that reliably measures the adaptability between a pair of source and target domains. For a given target domain and several source domains, the metric can be used to automatically select the optimal source domain to adapt and avoid less desirable ones. Empirical studies on standard datasets demonstrate the advantages of our approach over competing methods."}
{"_id": "285faa4cc54ef9b1834128705e0f96ad17b61e0b", "title": "SIFT Flow: Dense Correspondence across Scenes and Its Applications", "text": "While image alignment has been studied in different areas of computer vision for decades, aligning images depicting different scenes remains a challenging problem. Analogous to optical flow, where an image is aligned to its temporally adjacent frame, we propose SIFT flow, a method to align an image to its nearest neighbors in a large image corpus containing a variety of scenes. The SIFT flow algorithm consists of matching densely sampled, pixelwise SIFT features between two images while preserving spatial discontinuities. The SIFT features allow robust matching across different scene/object appearances, whereas the discontinuity-preserving spatial model allows matching of objects located at different parts of the scene. Experiments show that the proposed approach robustly aligns complex scene pairs containing significant spatial differences. Based on SIFT flow, we propose an alignment-based large database framework for image analysis and synthesis, where image information is transferred from the nearest neighbors to a query image according to the dense scene correspondence. This framework is demonstrated through concrete applications such as motion field prediction from a single image, motion synthesis via object transfer, satellite image registration, and face recognition."}
{"_id": "51a4d658c93c5169eef7568d3d1cf53e8e495087", "title": "Unsupervised Visual Domain Adaptation Using Subspace Alignment", "text": "In this paper, we introduce a new domain adaptation (DA) algorithm where the source and target domains are represented by subspaces described by eigenvectors. In this context, our method seeks a domain adaptation solution by learning a mapping function which aligns the source subspace with the target one. We show that the solution of the corresponding optimization problem can be obtained in a simple closed form, leading to an extremely fast algorithm. We use a theoretical result to tune the unique hyper parameter corresponding to the size of the subspaces. We run our method on various datasets and show that, despite its intrinsic simplicity, it outperforms state of the art DA methods."}
{"_id": "7bbacae9177e5349090336c23718a51bc94f6bfc", "title": "Avoiding Confusing Features in Place Recognition", "text": "We seek to recognize the place depicted in a query image using a database of \u201cstreet side\u201d images annotated with geolocation information. This is a challenging task due to changes in scale, viewpoint and lighting between the query and the images in the database. One of the key problems in place recognition is the presence of objects such as trees or road markings, which frequently occur in the database and hence cause significant confusion between different places. As the main contribution, we show how to avoid features leading to confusion of particular places by using geotags attached to database images as a form of supervision. We develop a method for automatic detection of image-specific and spatially-localized groups of confusing features, and demonstrate that suppressing them significantly improves place recognition performance while reducing the database size. We show the method combines well with the state of the art bag-of-features model including query expansion, and demonstrate place recognition that generalizes over wide range of viewpoints and lighting conditions. Results are shown on a geotagged database of over 17K images of Paris downloaded from Google Street View."}
{"_id": "342c5b941398c733659dd6fe9e0b3b4e3f210877", "title": "Sherpa : Hyperparameter Optimization for Machine Learning Models", "text": "Sherpa is a free open-source hyperparameter optimization library for machine learning models. It is designed for problems with computationally expensive iterative function evaluations, such as the hyperparameter tuning of deep neural networks. With Sherpa, scientists can quickly optimize hyperparameters using a variety of powerful and interchangeable algorithms. Additionally, the framework makes it easy to implement custom algorithms. Sherpa can be run on either a single machine or a cluster via a grid scheduler with minimal configuration. Finally, an interactive dashboard enables users to view the progress of models as they are trained, cancel trials, and explore which hyperparameter combinations are working best. Sherpa empowers machine learning researchers by automating the tedious aspects of model tuning and providing an extensible framework for developing automated hyperparameter-tuning strategies. Its source code and documentation are available at https://github.com/LarsHH/sherpa and https://parameter-sherpa.readthedocs.io/, respectively. A demo can be found at https://youtu.be/L95sasMLgP4. 1 Existing Hyperparameter Optimization Libraries Hyperparameter optimization algorithms for machine learning models have previously been implemented in software packages such as Spearmint [15], HyperOpt [2], Auto-Weka 2.0 [9], and Google Vizier [5] among others. Spearmint is a Python library based on Bayesian optimization using a Gaussian process. Hyperparameter exploration values are specified using the markup language YAML and run on a grid via SGE and MongoDB. Overall, it combines Bayesian optimization with the ability for distributed training. HyperOpt is a hyperparameter optimization framework that uses MongoDB to allow parallel computation. The user manually starts workers which receive tasks from the HyperOpt instance. It offers the use of Random Search and Bayesian optimization based on a Tree of Parzen Estimators. Auto-WEKA 2.0 implements the SMAC [6] algorithm for automatic model selection and hyperparameter optimization within the WEKA machine learning framework. It provides a graphical user interface and supports parallel runs on a single machine. It is meant to be accessible for novice users and specifically targets the problem of choosing a model. Auto-WEKA is related to Auto-Sklearn [4] and Auto-Net [11] which specifically focus on tuning Scikit-Learn models and fully-connected 32nd Conference on Neural Information Processing Systems (NIPS 2018), Montr\u00e9al, Canada. Table 1: Comparison to Existing Libraries Spearmint Auto-WEKA HyperOpt Google Vizier Sherpa Early Stopping No No No Yes Yes Dashboard/GUI Yes Yes No Yes Yes Distributed Yes No Yes Yes Yes Open Source Yes Yes Yes No Yes # of Algorithms 2 1 2 3 5 neural networks in Lasagne, respectively. Auto-WEKA, Auto-Sklearn, and Auto-Net focus on an end-to-end automatic approach. This makes it easy for novice users, but restricts the user to the respective machine learning library and the models it implements. In contrast our work aims to give the user more flexibility over library, model and hyper-parameter optimization algorithm selection. Google Vizier is a service provided by Google for its cloud machine learning platform. It incorporates recent innovation in Bayesian optimization such as transfer learning and provides visualizations via a dashboard. 
Google Vizier provides many key features of a current hyperparameter optimization tool to Google Cloud users and Google engineers, but is not available in an open source version. A similar situation occurs with other cloud based platforms like Microsoft Azure Hyperparameter Tuning 1 and Amazon SageMaker\u2019s Hyperparameter Optimization 2. 2 Need for a new library The field of machine learning has experienced massive growth over recent years. Access to open source machine learning libraries such as Scikit-Learn [14], Keras [3], Tensorflow [1], PyTorch [13], and Caffe [8] allowed research in machine learning to be widely reproduced by the community making it easy for practitioners to apply state of the art methods to real world problems. The field of hyperparameter optimization for machine learning has also seen many innovations recently such as Hyperband [10], Population Based Training [7], Neural Architecture Search [17], and innovation in Bayesian optimization such as [16]. While the basic implementation of some of these algorithms can be trivial, evaluating trials in a distributed fashion and keeping track of results becomes cumbersome which makes it difficult for users to apply these algorithms to real problems. In short, Sherpa aims to curate implementations of these algorithms while providing infrastructure to run these in a distributed way. The aim is for the platform to be scalable from usage on a laptop to a computation grid."}
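A usage sketch of Sherpa's optimization loop, following the quick-start API in its documentation; `build_model` is a hypothetical user-supplied factory and the parameter ranges are illustrative:

```python
import sherpa

# Search space and algorithm (API per parameter-sherpa.readthedocs.io).
parameters = [sherpa.Continuous('lr', [1e-4, 1e-1], scale='log'),
              sherpa.Discrete('num_units', [32, 256])]
algorithm = sherpa.algorithms.RandomSearch(max_num_trials=50)
study = sherpa.Study(parameters=parameters, algorithm=algorithm,
                     lower_is_better=True)

for trial in study:                              # one pass per suggested config
    model = build_model(trial.parameters)        # hypothetical model factory
    for epoch in range(10):
        val_loss = model.fit_one_epoch()         # hypothetical training step
        # Per-epoch observations feed the dashboard and enable
        # early-stopping style algorithms to cancel weak trials.
        study.add_observation(trial, iteration=epoch, objective=val_loss)
    study.finalize(trial)

print(study.get_best_result())
```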
{"_id": "44d895652d584a0b9e1c808e23fa1b1b9f102aca", "title": "Avoiding pathologies in very deep networks", "text": "Choosing appropriate architectures and regularization strategies of deep networks is crucial to good predictive performance. To shed light on this problem, we analyze the analogous problem of constructing useful priors on compositions of functions. Specifically, we study the deep Gaussian process, a type of infinitely-wide, deep neural network. We show that in standard architectures, the representational capacity of the network tends to capture fewer degrees of freedom as the number of layers increases, retaining only a single degree of freedom in the limit. We propose an alternate network architecture which does not suffer from this pathology. We also examine deep covariance functions, obtained by composing infinitely many feature transforms. Lastly, we characterize the class of models obtained by performing dropout on Gaussian processes."}
{"_id": "c44dd70bfddf7cf3fd05974977910cb489614038", "title": "Future directions for behavioral information security research", "text": "Information Security (InfoSec) research is far reaching and includes many approaches to deal with protecting and mitigating threats to the information assets and technical resources available within computer based systems. Although a predominant weakness in properly securing information assets is the individual user within an organization, much of the focus of extant security research is on technical issues. The purpose of this paper is to highlight future directions for Behavioral InfoSec research, which is a newer, growing area of research. The ensuing paper presents information about challenges currently faced and future directions that Behavioral InfoSec researchers should explore. These areas include separating insider deviant behavior from insider misbehavior, approaches to understanding hackers, improving information security compliance, cross-cultural Behavioral InfoSec research, and data collection and measurement issues in Behavioral InfoSec research. a 2012 Elsevier Ltd. All rights reserved."}
{"_id": "0b07f20c2037a6ca5fcc1dd022092fd5c57dd647", "title": "Kitchen Units for Food Units for Table . . . Past Future Predictor", "text": "In many computer vision applications, machines will need to reason beyond the present, and predict the future. This task is challenging because it requires leveraging extensive commonsense knowledge of the world that is difficult to write down. We believe that a promising resource for efficiently obtaining this knowledge is through the massive amounts of readily available unlabeled video. In this paper, we present a large scale framework that capitalizes on temporal structure in unlabeled video to learn to anticipate both actions and objects in the future. The key idea behind our approach is that we can train deep networks to predict the visual representation of images in the future. We experimentally validate this idea on two challenging \u201cin the wild\u201d video datasets, and our results suggest that learning with unlabeled videos significantly helps forecast actions and anticipate objects."}
{"_id": "dd60efb7aad277c10c1d94dba8c3abbb9db4484d", "title": "Development and initial validation of a short three-dimensional inventory of character strengths", "text": "Character strength is described as a positive and organized pattern of emotions, thoughts, and behaviors. It serves as a schema that organizes categories of information toward the self, others, and the world, and provides the self-aware knowledge that facilitates the pursuit of goals, values, and ethical principles. Recent research has suggested that three reliable factors emerge from the measures of character strengths: caring, inquisitiveness, and self-control. The goal of this paper is to develop a psychometrically sound short measure of character strength. The questions were addressed in two studies using two independent samples: a cross-cultural (i.e., 518 Asians and 556 Westerners) sample, and a cross-population (i.e., 175 community participants and 171 inpatients) sample in China. Findings from the exploratory and confirmatory factor analysis suggested a cross-cultural three-factor model of character strength that could be measured by the Three-dimensional Inventory of Character Strengths (TICS). A multigroup confirmatory factor analysis further indicated that the number of factors and factor loadings was invariant in the medical and community samples. This result indicated that the brief inventory could be applied to a medical context. Internal reliability, content validity, and predictive validity were good, although the predictive validity of the three character strengths for psychological symptoms in the medical sample was more modest than that in the community sample. TICS is expected to be used for screening populations at risk, and a tool to aid mental health professionals in group-based treatment/intervention planning. It also should be noted that this short inventory should be used with caution for individual decision making."}
{"_id": "dcb6d67b16ab45a1c185026d842b5010ce4aae04", "title": "Active learning strategies for rating elicitation in collaborative filtering: A system-wide perspective", "text": "The accuracy of collaborative-filtering recommender systems largely depends on three factors: the quality of the rating prediction algorithm, and the quantity and quality of available ratings. While research in the field of recommender systems often concentrates on improving prediction algorithms, even the best algorithms will fail if they are fed poor-quality data during training, that is, garbage in, garbage out. Active learning aims to remedy this problem by focusing on obtaining better-quality data that more aptly reflects a user's preferences. However, traditional evaluation of active learning strategies has two major flaws, which have significant negative ramifications on accurately evaluating the system's performance (prediction error, precision, and quantity of elicited ratings). (1) Performance has been evaluated for each user independently (ignoring system-wide improvements). (2) Active learning strategies have been evaluated in isolation from unsolicited user ratings (natural acquisition).\n In this article we show that an elicited rating has effects across the system, so a typical user-centric evaluation which ignores any changes of rating prediction of other users also ignores these cumulative effects, which may be more influential on the performance of the system as a whole (system centric). We propose a new evaluation methodology and use it to evaluate some novel and state-of-the-art rating elicitation strategies. We found that the system-wide effectiveness of a rating elicitation strategy depends on the stage of the rating elicitation process, and on the evaluation measures (MAE, NDCG, and Precision). In particular, we show that using some common user-centric strategies may actually degrade the overall performance of a system. Finally, we show that the performance of many common active learning strategies changes significantly when evaluated concurrently with the natural acquisition of ratings in recommender systems."}
{"_id": "b752286f5ec74753c89c81c61e2b6a21b710aa60", "title": "DeeperBind: Enhancing prediction of sequence specificities of DNA binding proteins", "text": "Transcription factors (TFs) are macromolecules that bind to cis-regulatory specific sub-regions of DNA promoters and initiate transcription. Finding the exact location of these binding sites (aka motifs) is important in a variety of domains such as drug design and development. To address this need, several in vivo and in vitro techniques have been developed so far that try to characterize and predict the binding specificity of a protein to different DNA loci. The major problem with these techniques is that they are not accurate enough in prediction of the binding affinity and characterization of the corresponding motifs. As a result, downstream analysis is required to uncover the locations where proteins of interest bind. Here, we propose DeeperBind, a long short term recurrent convolutional network for prediction of protein binding specificities with respect to DNA probes. DeeperBind can model the positional dynamics of probe sequences and hence reckons with the contributions made by individual sub-regions in DNA sequences, in an effective way. Moreover, it can be trained and tested on datasets containing varying-length sequences. We apply our pipeline to the datasets derived from protein binding microarrays (PBMs), an in-vitro high-throughput technology for quantification of protein-DNA binding preferences, and present promising results. To the best of our knowledge, this is the most accurate pipeline that can predict binding specificities of DNA sequences from the data produced by high-throughput technologies through utilization of the power of deep learning for feature generation and positional dynamics modeling."}
{"_id": "a69f714848bdaf252b8cf86cb6d31c48cb211938", "title": "A Survey of Challenges and Applications of Wireless Body Area Network (WBAN) and Role of a Virtual Doctor Server in Existing Architecture", "text": "Wireless body area network has gained much interest and became emerging technology at health service facilities due to its wide range of utility and vital role to improve the human health. In this research paper, we are conducting a comprehensive survey of wireless Body Area Network (WBAN) and also introducing a virtual doctor server (VDS) in existing WBAN architecture. Existing architecture of WBAN consists of: wireless sensor, wireless actuator node , wireless central unit and wireless Personal Device (PD). Personal Digital Assistant (PDA) or smart phone can be used as PD. Based on the existing architecture mentioned above, we propose a design concept for a virtual doctor server (VDS) to support various patient health care services. VDS will keep the historical data about the patient, generate the daily tips and advices for him, call the doctor or emergency squad if required and can provide first aid assistance instructions on patient or any of his close relative's PDA's."}
{"_id": "975181c84a0991bb69f5af825e2019080d22cfcd", "title": "Opinion Target Extraction Using Partially-Supervised Word Alignment Model", "text": "Mining opinion targets from online reviews is an important and challenging task in opinion mining. This paper proposes a novel approach to extract opinion targets by using partially-supervised word alignment model (PSWAM). At first, we apply PSWAM in a monolingual scenario to mine opinion relations in sentences and estimate the associations between words. Then, a graph-based algorithm is exploited to estimate the confidence of each candidate, and the candidates with higher confidence will be extracted as the opinion targets. Compared with existing syntax-based methods, PSWAM can effectively avoid parsing errors when dealing with informal sentences in online reviews. Compared with the methods using alignment model, PSWAM can capture opinion relations more precisely through partial supervision from partial alignment links. Moreover, when estimating candidate confidence, we make penalties on higherdegree vertices in our graph-based algorithm in order to decrease the probability of the random walk running into the unrelated regions in the graph. As a result, some errors can be avoided. The experimental results on three data sets with different sizes and languages show that our approach outperforms state-of-the-art methods."}
{"_id": "d72b366e1d45cbcddfe5c856b77a2801d8d0c11f", "title": "Exploiting Rich Syntactic Information for Semantic Parsing with Graph-to-Sequence Model", "text": "Existing neural semantic parsers mainly utilize a sequence encoder, i.e., a sequential LSTM, to extract word order features while neglecting other valuable syntactic information such as dependency graph or constituent trees. In this paper, we first propose to use the syntactic graph to represent three types of syntactic information, i.e., word order, dependency and constituency features. We further employ a graph-to-sequence model to encode the syntactic graph and decode a logical form. Experimental results on benchmark datasets show that our model is comparable to the state-ofthe-art on Jobs640, ATIS and Geo880. Experimental results on adversarial examples demonstrate the robustness of the model is also improved by encoding more syntactic information."}
{"_id": "80da8937d741c3a6942e177c743a0b22eb9ba84d", "title": "Wireless Optical Communication for Underwater Applications", "text": "Wireless optical communication can be presented as an alternative to acoustic modems for scenarios where high speed, moderate distances, lower power and less complex communication systems are desired. Wireless communication is much more feasible solution to the problem of communicating with robotic vehicles. In this paper, wireless optical communication for the underwater criteria is designed and analysed. The focus of the paper is to construct light emitting diode based links at low cost with faster data rate of 1Mbps, wirelessly under the water. The choice of using LEDs instead of Lasers was largely economic. However, the underwater environment can be very challenging optically and many of the advantages that lasers have in terms of beam quality can be rapidly degraded by scattering and turbulence. Keyword: Moderate distance, Wireless optical, Super bright blue LED."}
{"_id": "715216a92c338a3c35319026d38ed0da0c57d013", "title": "Integrated Pedestrian and Direction Classification Using a Random Decision Forest", "text": "For analysing the behaviour of pedestrians in a scene, it is common practice that pedestrian localization, classification, and tracking are conducted consecutively. The direction of a pedestrian, being part of the pose, implies the future path. This paper proposes novel Random Decision Forests (RDFs) to simultaneously classify pedestrians and their directions, without adding an extra module for direction classification to the pedestrian classification module. The proposed algorithm is trained and tested on the TUD multi-view pedestrian and Daimler Mono Pedestrian Benchmark data-sets. The proposed integrated RDF classifiers perform comparable to pedestrian or direction trained separated RDF classifiers. The integrated RDFs yield results comparable to those of state-of-the-art and baseline methods aiming for pedestrian classification or body direction classification, respectively."}
{"_id": "149d5accf2294abeb6599d71252d0fddce169aef", "title": "A C-band microwave rectifier based on harmonic termination and with input filter removed", "text": "This paper presents a C-band rectifier employing a third harmonic termination which is realized by a \u03bbg/12 short-ended microstrip transmission line. The proposed harmonic termination balances the capacitive impedance of the diode at the fundamental frequency and blocks harmonic current of the diode. The input low-pass filter or matching circuit is removed and the bandwidth of rectifier is enhanced. Theoretical analysis and experiments are carried out. The results show that the proposed topology can realize high efficiency with a wide input frequency range. The fabricated rectifier has a compact dimension of 14mm by 22mm. A maximum RF-DC conversion efficiency of 75.6% at 13 dBm input power is observed. The measured efficiency remains above 70% with the operating frequency form 5.2 GHz to 6 GHz."}
{"_id": "623a47b58ba669d278398d43352e4624a998d10d", "title": "Magnetic resonance imaging in pediatric appendicitis: a systematic review", "text": "Magnetic resonance imaging for the evaluation of appendicitis in children has rapidly increased recently. This change has been primarily driven by the desire to avoid CT radiation dose. This meta-analysis reviews the diagnostic performance of MRI for pediatric appendicitis and discusses current knowledge of cost-effectiveness. We used a conservative Haldane correction statistical method and found pooled diagnostic parameters including a sensitivity of 96.5% (95% confidence interval [CI]: 94.3\u201397.8%), specificity of 96.1% (95% CI: 93.5\u201397.7%), positive predictive value of 92.0% (95% CI: 89.3\u201394.0%) and negative predictive value of 98.3% (95% CI: 97.3\u201399.0%), based on 11 studies. Assessment of patient outcomes associated with MRI use at two institutions indicates that time to antibiotics was 4.7\u00a0h and 8.2\u00a0h, time to appendectomy was 9.1\u00a0h and 13.9\u00a0h, and negative appendectomy rate was 3.1% and 1.4%, respectively. Alternative diagnoses were present in ~20% of cases, most commonly adnexal cysts and enteritis/colitis. Regarding technique, half-acquisition single-shot fast spin-echo (SSFSE) pulse sequences are crucial. While gadolinium-enhanced T1-weighted pulse sequences might be helpful, any benefit beyond non-contrast MRI has not been confirmed. Balanced steady-state free precession (SSFP) sequences are generally noncontributory. Protocols do not need to exceed five sequences; four-sequence protocols are commonly utilized. Sedation generally is not indicated; patients younger than 5\u00a0years might be attempted based on the child\u2019s ability to cooperate. A comprehensive pediatric cost-effectiveness analysis that includes both direct and indirect costs is needed."}
{"_id": "5d2528c3b356ce94be9c59f078d2d4cd074bde9a", "title": "Energy disaggregation meets heating control", "text": "Heating control is of particular importance, since heating accounts for the biggest amount of total residential energy consumption. Smart heating strategies allow to reduce such energy consumption by automatically turning off the heating when the occupants are sleeping or away from home. The present context or occupancy state of a household can be deduced from the appliances that are currently in use. In this study we investigate energy disaggregation techniques to infer appliance states from an aggregated energy signal measured by a smart meter. Since most household devices have predictable energy consumption, we propose to use the changes in aggregated energy consumption as features for the appliance/occupancy state classification task. We evaluate our approach on real-life energy consumption data from several households, compare the classification accuracy of various machine learning techniques, and explain how to use the inferred appliance states to optimize heating schedules."}
{"_id": "43343c9b1c49b361fa2fb34c6698827693b5bdd2", "title": "Anisotropic material properties of fused deposition modeling ABS", "text": "Rapid Prototyping (RP) technologies provide the ability to fabricate initial prototypes from various model materials. Stratasys Fused Deposition Modeling (FDM) is a typical RP process that can fabricate prototypes out of ABS plastic. To predict the mechanical behavior of FDM parts, it is critical to understand the material properties of the raw FDM process material, and the effect that FDM build parameters have on anisotropic material properties. This paper characterizes the properties of ABS parts fabricated by the FDM 1650. Using a Design of Experiment (DOE) approach, the process parameters of FDM, such as raster orientation, air gap, bead width, color, and model temperature were examined. Tensile strengths and compressive strengths of directionally fabricated specimens were measured and compared with injection molded FDM ABS P400 material. For the FDM parts made with a 0.003 inch overlap between roads, the typical tensile strength ranged between 65 and 72 percent of the strength of injection molded ABS P400. The compressive strength ranged from 80 to 90 percent of the injection molded FDM ABS. Several build rules for designing FDM parts were formulated based on experimental results."}
{"_id": "c6c95c996037c00c62df1d3d2cfb3e010a317faf", "title": "Modulation, control and capacitor voltage balancing of alternate arm modular multilevel converter with DC fault blocking capability", "text": "This paper provides an overview of DC side fault tolerance issues of VSC based HVDC system and the need for fault tolerant converters. The working principle and DC fault ride through capability of recently introduced Alternate Arm Modular Multilevel Converter(AAMMC) has been discussed. The capacitor voltage balancing issues of AAMMC is analyzed and a novel scheme for balancing capacitor voltages of the wave shaping circuit is presented in this paper. The voltage balancing of capacitors of wave shaping circuits in the arm is done by introducing an overlap period during zero voltage period. Using the proposed scheme, the magnitude and direction of the current during the overlap period can be controlled by varying the switching pattern. It helps in charging or discharging of the submodule capacitors to bring them to their reference value. At the end of the overlap period, the arm current is brought to zero before opening the director switch so as to avoid the spike across the arm inductor. The efficacy of the proposed control scheme has been validated using simulation study done in PSCAD/EMTDC."}
{"_id": "8c47d887076d69e925aa7b2b86859f667f4576ac", "title": "Upright orientation of man-made objects", "text": "Humans usually associate an upright orientation with objects, placing them in a way that they are most commonly seen in our surroundings. While it is an open challenge to recover the functionality of a shape from its geometry alone, this paper shows that it is often possible to infer its upright orientation by analyzing its geometry. Our key idea is to reduce the two-dimensional (spherical) orientation space to a small set of orientation candidates using functionality-related geometric properties of the object, and then determine the best orientation using an assessment function of several functional geometric attributes defined with respect to each candidate. Specifically we focus on obtaining the upright orientation for man-made objects that typically stand on some flat surface (ground, floor, table, etc.), which include the vast majority of objects in our everyday surroundings. For these types of models orientation candidates can be defined according to static equilibrium. For each candidate, we introduce a set of discriminative attributes linking shape to function. We learn an assessment function of these attributes from a training set using a combination of Random Forest classifier and Support Vector Machine classifier. Experiments demonstrate that our method generalizes well and achieves about 90% prediction accuracy for both a 10-fold cross-validation over the training set and a validation with an independent test set."}
{"_id": "68598649496ee6ebd2dfd0a6931b15cf51a64cff", "title": "Holistic 3D Scene Parsing and Reconstruction from a Single RGB Image", "text": "We propose a computational framework to jointly parse a single RGB image and reconstruct a holistic 3D configuration composed by a set of CAD models using a stochastic grammar model. Specifically, we introduce a Holistic Scene Grammar (HSG) to represent the 3D scene structure, which characterizes a joint distribution over the functional and geometric space of indoor scenes. The proposed HSG captures three essential and often latent dimensions of the indoor scenes: i) latent human context, describing the affordance and the functionality of a room arrangement, ii) geometric constraints over the scene configurations, and iii) physical constraints that guarantee physically plausible parsing and reconstruction. We solve this joint parsing and reconstruction problem in an analysis-by-synthesis fashion, seeking to minimize the differences between the input image and the rendered images generated by our 3D representation, over the space of depth, surface normal, and object segmentation map. The optimal configuration, represented by a parse graph, is inferred using Markov chain Monte Carlo (MCMC), which efficiently traverses through the non-differentiable solution space, jointly optimizing object localization, 3D layout, and hidden human context. Experimental results demonstrate that the proposed algorithm improves the generalization ability and significantly outperforms prior methods on 3D layout estimation, 3D object detection, and holistic scene understanding."}
{"_id": "ba73f704626c0f82b467a5500717ae9aef3879d1", "title": "Automated reference resolution in legal texts", "text": "This paper investigates the task of reference resolution in the legal domain. This is a new interesting task in Legal Engineering research. The goal is to create a system which can automatically detect references and then extracts their referents. Previous work limits itself to detect and resolve references at the document targets. In this paper, we go a step further in trying to resolve references to sub-document targets. Referents extracted are the smallest fragments of texts in documents, rather than the entire documents that contain the referenced texts. Based on analyzing the characteristics of reference phenomena in legal texts, we propose a four-step framework to deal with the task: mention detection, contextual information extraction, antecedent candidate extraction, and antecedent determination. We also show how machine learning methods can be exploited in each step. The final system achieves 80.06\u00a0% in the F1 score for detecting references, 85.61\u00a0% accuracy for resolving them, and 67.02\u00a0% in the F1 score for the end-to-end setting task on the Japanese National Pension Law corpus."}
{"_id": "cb277057c0ae4bd59ad2dae6228b0ff0eb828c32", "title": "SMART LOW VOLTAGE AC SOLID STATE CIRCUIT BREAKERS FOR SMART GRIDS", "text": "The solid state circuit breaker (SSCB) is a device used in the power system in order to provide protection when a short circuit or fault current occurs. The objective of this paper is to study and implement a smart prototype of SSCB for smart grids. The presented SSCB is controlled through the current/time characteristics of that used in the conventional mechanical circuit breakers in addition to limit the high fault current levels (fault current limiter) especially with the proliferation of the distributed generation and the associated fault current level increase. In this paper, the principle of operation of the mechanical circuit breakers (MCB) and their classifications are introduced, andswitches used in the design of SSCBs are presented. Simulation of SSCB is carried out to study its feasibility and performance in the case of various operating conditions. Then, ahardware prototype of SSCB using IGBTs devices is constructed and tested to elucidate the proposed"}
{"_id": "92859cbb598c76e591411e00ff201b4f15efb6fc", "title": "ECNU: Using Traditional Similarity Measurements and Word Embedding for Semantic Textual Similarity Estimation", "text": "This paper reports our submissions to semantic textual similarity task, i.e., task 2 in Semantic Evaluation 2015. We built our systems using various traditional features, such as string-based, corpus-based and syntactic similarity metrics, as well as novel similarity measures based on distributed word representations, which were trained using deep learning paradigms. Since the training and test datasets consist of instances collected from various domains, three different strategies of the usage of training datasets were explored: (1) use all available training datasets and build a unified supervised model for all test datasets; (2) select the most similar training dataset and separately construct a individual model for each test set; (3) adopt multi-task learning framework to make full use of available training sets. Results on the test datasets show that using all datasets as training set achieves the best averaged performance and our best system ranks 15 out of 73."}
{"_id": "0bb4401b9a1b064c513bda3001f43f8f2f3e28de", "title": "Learning with Noisy Labels", "text": "In this paper, we theoretically study the problem of binary classification in the presence of random classification noise \u2014 the learner, instead of seeing the true labels, sees labels that have independently been flipped with some small probability. Moreover, random label noise is class-conditional \u2014 the flip probability depends on the class. We provide two approaches to suitably modify any given surrogate loss function. First, we provide a simple unbiased estimator of any loss, and obtain performance bounds for empirical risk minimization in the presence of iid data with noisy labels. If the loss function satisfies a simple symmetry condition, we show that the method leads to an efficient algorithm for empirical minimization. Second, by leveraging a reduction of risk minimization under noisy labels to classification with weighted 0-1 loss, we suggest the use of a simple weighted surrogate loss, for which we are able to obtain strong empirical risk bounds. This approach has a very remarkable consequence \u2014 methods used in practice such as biased SVM and weighted logistic regression are provably noise-tolerant. On a synthetic non-separable dataset, our methods achieve over 88% accuracy even when 40% of the labels are corrupted, and are competitive with respect to recently proposed methods for dealing with label noise in several benchmark datasets."}
{"_id": "165ef2b5f86b9b2c68b652391db5ece8c5a0bc7e", "title": "Efficient Piecewise Training of Deep Structured Models for Semantic Segmentation", "text": "Recent advances in semantic image segmentation have mostly been achieved by training deep convolutional neural networks (CNNs). We show how to improve semantic segmentation through the use of contextual information, specifically, we explore 'patch-patch' context between image regions, and 'patch-background' context. For learning from the patch-patch context, we formulate Conditional Random Fields (CRFs) with CNN-based pairwise potential functions to capture semantic correlations between neighboring patches. Efficient piecewise training of the proposed deep structured model is then applied to avoid repeated expensive CRF inference for back propagation. For capturing the patch-background context, we show that a network design with traditional multi-scale image input and sliding pyramid pooling is effective for improving performance. Our experimental results set new state-of-the-art performance on a number of popular semantic segmentation datasets, including NYUDv2, PASCAL VOC 2012, PASCAL-Context, and SIFT-flow. In particular, we achieve an intersection-overunion score of 78:0 on the challenging PASCAL VOC 2012 dataset."}
{"_id": "32cde90437ab5a70cf003ea36f66f2de0e24b3ab", "title": "The Cityscapes Dataset for Semantic Urban Scene Understanding", "text": "Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes is comprised of a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high quality pixel-level annotations, 20 000 additional images have coarse annotations to enable methods that leverage large volumes of weakly-labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark."}
{"_id": "3cdb1364c3e66443e1c2182474d44b2fb01cd584", "title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation", "text": "We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] , DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet/."}
{"_id": "4fc2bbaa1f0502b5412e3d56acce2c9aa08bb586", "title": "Long-Time Exposure to Violent Video Games Does Not Show Desensitization on Empathy for Pain: An fMRI Study", "text": "As a typical form of empathy, empathy for pain refers to the perception and appraisal of others' pain, as well as the corresponding affective responses. Numerous studies investigated the factors affecting the empathy for pain, in which the exposure to violent video games (VVGs) could change players' empathic responses to painful situations. However, it remains unclear whether exposure to VVG influences the empathy for pain. In the present study, in terms of the exposure experience to VVG, two groups of participants (18 in VVG group, VG; 17 in non-VVG group, NG) were screened from nearly 200 video game experience questionnaires. And then, the functional magnetic resonance imaging data were recorded when they were viewing painful and non-painful stimuli. The results showed that the perception of others' pain were not significantly different in brain regions between groups, from which we could infer that the desensitization effect of VVGs was overrated."}
{"_id": "5711a5dad3fd93b14593ac46c5df0267e8991637", "title": "An Augmented Reality System for Astronomical Observations", "text": "Anyone who has gazed through the eyepiece of an astronomical telescope knows that, with the exception of the Moon and the planets, extra-solar astronomical objects are disappointing to observe visually. This is mainly due to their low surface brightness, but also depends on the visibility, sky brightness and telescope aperture. We propose a system which projects images of astronomical objects (with focus on nebulae and galaxies), animations and additional information directly into the eyepiece view of an astronomical telescope. As the telescope orientation is queried continuously, the projected image is adapted in real-time to the currently visible field of view. For projection, a custom-built video projection module with high contrast and low maximum luminance value was developed. With this technology visitors to public observatories have the option to experience the richness of faint astronomical objects while directly looking at them through a telescope."}
{"_id": "dbfd731c183f13d0d5c686c681d70b19e12377c5", "title": "A gateway system for an automotive system: LIN, CAN, and FlexRay", "text": "In the automotive industry, the usage of microcontrollers has been increasing rapidly. Many mechanical parts of automobiles are being replaced with microcontrollers, and these microcontrollers are controlling various parts of the automobile, so communications between microcontrollers must be reliable. Until now, several protocols for automotive communications have been introduced, and LIN, CAN, and FlexRay are the most widely known protocols. Different vendors of automobiles use different protocols and each protocol possess different features. This paper presents a network gateway system between LIN, low-speed CAN, high-speed CAN, and FlexRay."}
{"_id": "4164cfef02ee5e783983296e7d9914063208a203", "title": "ECG-Cryptography and Authentication in Body Area Networks", "text": "Wireless body area networks (BANs) have drawn much attention from research community and industry in recent years. Multimedia healthcare services provided by BANs can be available to anyone, anywhere, and anytime seamlessly. A critical issue in BANs is how to preserve the integrity and privacy of a person's medical data over wireless environments in a resource efficient manner. This paper presents a novel key agreement scheme that allows neighboring nodes in BANs to share a common key generated by electrocardiogram (ECG) signals. The improved Jules Sudan (IJS) algorithm is proposed to set up the key agreement for the message authentication. The proposed ECG-IJS key agreement can secure data commnications over BANs in a plug-n-play manner without any key distribution overheads. Both the simulation and experimental results are presented, which demonstrate that the proposed ECG-IJS scheme can achieve better security performance in terms of serval performance metrics such as false acceptance rate (FAR) and false rejection rate (FRR) than other existing approaches. In addition, the power consumption analysis also shows that the proposed ECG-IJS scheme can achieve energy efficiency for BANs."}
{"_id": "f16b23e8e0788e3298e533e71bafef7135300a5e", "title": "An anomaly-based network intrusion detection system using Deep learning", "text": "Recently, anomaly-based intrusion detection techniques are valuable methodology to detect both known as well as unknown/new attacks, so they can cope with the diversity of the attacks and the constantly changing nature of network attacks. There are many problems need to be considered in anomaly-based network intrusion detection system (NIDS), such as ability to adapt to dynamic network environments, unavailability of labeled data, false positive rate. This paper, we use Deep learning techniques to implement an anomaly-based NIDS. These techniques show the sensitive power of generative models with good classification, capabilities to deduce part of its knowledge from incomplete data and the adaptability. Our experiments with KDDCup99 network traffic connections show that our work is effective to exact detect in anomaly-based NIDS and classify intrusions into five groups with the accuracy based on network data sources."}
{"_id": "987dbeff39ef3efce631e04211a748360b5d818d", "title": "Smarter City : Smart Energy Grid based on Blockchain Technology", "text": "The improvement of the Quality of Life (QoL) and the enhancement of the Quality of Services (QoS) represent the main goal of every city evolutionary process. It is possible making cities smarter promoting innovative solutions by use of Information and Communication Technology (ICT) for collecting and analysing large amounts of data generated by several sources, such as sensor networks, wearable devices, and IoT devices spread among the city. The integration of different technologies and different IT systems, needed to build smart city applications and services, remains the most challenge to overcome. In the Smart City context, this paper intends to investigate the Smart Environment pillar, and in particular the aspect related to the implementation of Smart Energy Grid for citizens in the urban context. The innovative characteristic of the proposed solution consists of using the Blockchain technology to join the Grid, exchanging information, and buy/sell energy between the involved nodes (energy providers and private citizens), using the Blockchain granting ledger. Keywords\u2014 Information Technology, Smart City, Digital Revolution, Digital Innovation, Blockchain, Smart Energy Grid, Machine Learning."}
{"_id": "8ac97ef1255dffa3be216a7f6155946f7ec57c22", "title": "Social interaction based video recommendation: Recommending YouTube videos to facebook users", "text": "Online videos, e.g., YouTube videos, are important topics for social interactions among users of online social networking sites (OSN), e.g., Facebook. This opens up the possibility of exploiting video-related user social interaction information for better video recommendation. Towards this goal, we conduct a case study of recommending YouTube videos to Facebook users based on their social interactions. We first measure social interactions related to YouTube videos among Facebook users. We observe that the attention a video attracts on Facebook is not always well-aligned with its popularity on YouTube. Unpopular videos on YouTube can become popular on Facebook, while popular videos on YouTube often do not attract proportionally high attentions on Facebook. This finding motivates us to develop a simple top-k video recommendation algorithm that exploits user social interaction information to improve the recommendation accuracy for niche videos, that are globally unpopular, but highly relevant to a specific user or user group. Through experiments on the collected Facebook traces, we demonstrate that our recommendation algorithm significantly outperforms the YouTube-popularity based video recommendation algorithm as well as a collaborative filtering algorithm based on user similarities."}
{"_id": "7c24236fad762e5e34264c1b123e67ac04307f76", "title": "TextLuas: Tracking and Visualizing Document and Term Clusters in Dynamic Text Data", "text": "For large volumes of text data collected over time, a key knowledge discovery task is identifying and tracking clusters. These clusters may correspond to emerging themes, popular topics, or breaking news stories in a corpus. Therefore, recently there has been increased interest in the problem of clustering dynamic data. However, there exists little support for the interactive exploration of the output of these analysis techniques, particularly in cases where researchers wish to simultaneously explore both the change in cluster structure over time and the change in the textual content associated with clusters. In this paper, we propose a model for tracking dynamic clusters characterized by the evolutionary events of each cluster. Motivated by this model, the TextLuas system provides an implementation for tracking these dynamic clusters and visualizing their evolution using a metro map metaphor. To provide overviews of cluster content, we adapt the tag cloud representation to the dynamic clustering scenario. We demonstrate the TextLuas system on two different text corpora, where they are shown to elucidate the evolution of key themes. We also describe how TextLuas was applied to a problem in bibliographic network research."}
{"_id": "a55aeb5339a5b9e61c0fea8d836c62a7c1d19f8d", "title": "Signal processing of range detection for SFCW radars using Matlab and GNU radio", "text": "Development of radar technology is now rapidly. One of them is Step Frequency Continuous Wave Radar (SFCW Radar). SFCW radar can be used for various purposes. SFCW radar consists of antenna, control unit, signal processing unit. The radar advantages compared to the other radar are this radar is easier to implement and more widely within the range of radar. In this research that will be done is design SFCW radar. There are several stages to be performed in this research. At first, the simulation will be done using Matlab\u00ae. This simulation will determine the parameters needed to obtain an appropriate resolution. The second, simulations is performed by using GNU radio. This simulation will be adjusted to the parameters that have been previously designed by Matlab\u00ae. Both of the results will be analyzed. Input signal is complex. Simulation results for 5kHz-1.285MHz will display in graph. Graph will show location of the object in 5 m, 1km, and 1.1km depend on initial design."}
{"_id": "537f17624e4f8513c14f850c0e9c012c93c132b6", "title": "Weather classification with deep convolutional neural networks", "text": "In this paper, we study weather classification from images using Convolutional Neural Networks (CNNs). Our approach outperforms the state of the art by a huge margin in the weather classification task. Our approach achieves 82.2% normalized classification accuracy instead of 53.1% for the state of the art (i.e., 54.8% relative improvement). We also studied the behavior of all the layers of the Convolutional Neural Networks, we adopted, and interesting findings are discussed."}
{"_id": "990c5f2aefab9df89c40025d85013fe28f0a5810", "title": "Image Deblurring via Enhanced Low-Rank Prior", "text": "Low-rank matrix approximation has been successfully applied to numerous vision problems in recent years. In this paper, we propose a novel low-rank prior for blind image deblurring. Our key observation is that directly applying a simple low-rank model to a blurry input image significantly reduces the blur even without using any kernel information, while preserving important edge information. The same model can be used to reduce blur in the gradient map of a blurry input. Based on these properties, we introduce an enhanced prior for image deblurring by combining the low rank prior of similar patches from both the blurry image and its gradient map. We employ a weighted nuclear norm minimization method to further enhance the effectiveness of low-rank prior for image deblurring, by retaining the dominant edges and eliminating fine texture and slight edges in intermediate images, allowing for better kernel estimation. In addition, we evaluate the proposed enhanced low-rank prior for both the uniform and the non-uniform deblurring. Quantitative and qualitative experimental evaluations demonstrate that the proposed algorithm performs favorably against the state-of-the-art deblurring methods."}
{"_id": "7c5f143adf1bf182bf506bd31f9ddb0f302f3ce9", "title": "10 Internet Addiction and Its Cognitive Behavioral Therapy", "text": ""}
{"_id": "81a81ff448b0911cbb63abecd9c54949ce8d50fd", "title": "Functional connectivity dynamically evolves on multiple time-scales over a static structural connectome: Models and mechanisms", "text": "Over the last decade, we have observed a revolution in brain structural and functional Connectomics. On one hand, we have an ever-more detailed characterization of the brain's white matter structural connectome. On the other, we have a repertoire of consistent functional networks that form and dissipate over time during rest. Despite the evident spatial similarities between structural and functional connectivity, understanding how different time-evolving functional networks spontaneously emerge from a single structural network requires analyzing the problem from the perspective of complex network dynamics and dynamical system's theory. In that direction, bottom-up computational models are useful tools to test theoretical scenarios and depict the mechanisms at the genesis of resting-state activity. Here, we provide an overview of the different mechanistic scenarios proposed over the last decade via computational models. Importantly, we highlight the need of incorporating additional model constraints considering the properties observed at finer temporal scales with MEG and the dynamical properties of FC in order to refresh the list of candidate scenarios."}
{"_id": "e71423028d7204d2a8f5fa1b3671f9409192b943", "title": "Spider Monkey Optimization algorithm for numerical optimization", "text": "Swarm intelligence is one of the most promising area for the researchers in the field of numerical optimization. Researchers have developed many algorithms by simulating the swarming behavior of various creatures like ants, honey bees, fish, birds and the findings are very motivating. In this paper, a new approach for numerical optimization is proposed by modeling the foraging behavior of spider monkeys. Spider monkeys have been categorized as fission\u2013fusion social structure based animals. The animals which follow fission\u2013 fusion social systems, split themselves from large to smaller groups and vice-versa based on the scarcity or availability of food. The proposed swarm intelligence approach is named as Spider Monkey Optimization (SMO) algorithm and can broadly be classified as an algorithm inspired by intelligent foraging behavior of fission\u2013fusion social structure based animals."}
{"_id": "3fa6ccf2e0cf89fe758b9d634030102f9c3f928a", "title": "Modelling Domain Relationships for Transfer Learning on Retrieval-based Question Answering Systems in E-commerce", "text": "Nowadays, it is a heated topic for many industries to build automatic question-answering (QA) systems. A key solution to these QA systems is to retrieve from a QA knowledge base the most similar question of a given question, which can be reformulated as a paraphrase identification (PI) or a natural language inference (NLI) problem. However, most existing models for PI and NLI have at least two problems: They rely on a large amount of labeled data, which is not always available in real scenarios, and they may not be efficient for industrial applications. In this paper, we study transfer learning for the PI and NLI problems, aiming to propose a general framework, which can effectively and efficiently adapt the shared knowledge learned from a resource-rich source domain to a resource-poor target domain. Specifically, since most existing transfer learning methods only focus on learning a shared feature space across domains while ignoring the relationship between the source and target domains, we propose to simultaneously learn shared representations and domain relationships in a unified framework. Furthermore, we propose an efficient and effective hybrid model by combining a sentence encoding-based method and a sentence interaction-based method as our base model. Extensive experiments on both paraphrase identification and natural language inference demonstrate that our base model is efficient and has promising performance compared to the competing models, and our transfer learning method can help to significantly boost the performance. Further analysis shows that the inter-domain and intra-domain relationship captured by our model are insightful. Last but not least, we deploy our transfer learning model for PI into our online chatbot system, which can bring in significant improvements over our existing system. Finally, we launch our new system on the chatbot platform Eva in our E-commerce site AliExpress."}
{"_id": "f97efbc01bea86303fcecb0719b8a76394becde2", "title": "Generation of circularly polarized conical beam pattern using toroidal helical knot antenna", "text": "A novel circularly polarized antenna with a conical beam radiation pattern is presented. It consists of a feeding probe and a (1, 6) helical torus knot as a polarizer. Measured results shows that, the antenna has an impedance bandwidth of 19.67% and axial ratio bandwidth of 15.7%. With the RHCP gain of about 6 dBic at 3.5 GHz. Toroidal helical knot antenna is mechanically simple to fabricate using additive manufacturing technology. The proposed prototype is suitable for mounting on vehicles to facilitate communication with geostationary satellites."}
{"_id": "330c147e02ff73f4f46aac15a2fafc6dda235b1a", "title": "Towards deep learning with segregated dendrites", "text": "Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations-the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons."}
{"_id": "173c819ce3fccafdc4f64af71fd6868e815580ad", "title": "Introduction to E-commerce Learning Objectives", "text": "\u25cf To understand the complexity of e-commerce and its many facets. \u25cf To explore how e-business and e-commerce fit together. \u25cf To identify the impact of e-commerce. \u25cf To recognise the benefits and limitations of e-commerce. \u25cf To use classification frameworks for analysing e-commerce. \u25cf To identify the main barriers to the growth and development of e-commerce in organisations. \uf8f5 Even today, some considerable time after the so called 'dot com/Internet revolution', electronic commerce (e-commerce) remains a relatively new, emerging and constantly changing area of business management and information technology. There has been and continues to be much publicity and discussion about e-commerce. Library catalogues and shelves are filled with books and articles on the subject. However, there remains a sense of confusion, suspicion and misunderstanding surrounding the area, which has been exacerbated by the different contexts in which electronic commerce is used, coupled with the myriad related buzzwords and acronyms. This book aims to consolidate the major themes that have arisen from the new area of electronic commerce and to provide an understanding of its application and importance to management. In order to understand electronic commerce it is important to identify the different terms that are used, and to assess their origin and usage. According to the editor-in-chief of International Journal of Electronic Commerce , Vladimir Zwass, 'Electronic commerce is sharing business information , maintaining business relationships and conducting business transactions by means of telecommunications networks'. 1 He maintains that in its purest form, electronic commerce has existed for over 40 years, originating from the electronic transmission of messages during the Berlin airlift in 1948. 2 From this, electronic data interchange (EDI) was the next stage of e-commerce development. In the 1960s a cooperative effort between industry groups produced a first attempt at common electronic data formats. The formats, however, were only for purchasing, transportation and finance data, and were used primarily for intra-industry transactions. It was not until the late 1970s that work began for national Electronic Data Interchange (EDI) standards, which developed well into the early 1990s. EDI is the electronic transfer of a standardised business transaction between a sender and receiver computer, over some kind of private network or value added network (VAN). Both sides would have to have the same application software and the data would be exchanged in an extremely rigorous format. In sectors such as retail, automotive, defence and heavy manufacturing, EDI was \u2026"}
{"_id": "10a012daccba8aa0e1bdcfca5f83fcf2df84de87", "title": "Development of personality in early and middle adulthood: set like plaster or persistent change?", "text": "Different theories make different predictions about how mean levels of personality traits change in adulthood. The biological view of the Five-factor theory proposes the plaster hypothesis: All personality traits stop changing by age 30. In contrast, contextualist perspectives propose that changes should be more varied and should persist throughout adulthood. This study compared these perspectives in a large (N = 132,515) sample of adults aged 21-60 who completed a Big Five personality measure on the Internet. Conscientiousness and Agreeableness increased throughout early and middle adulthood at varying rates; Neuroticism declined among women but did not change among men. The variety in patterns of change suggests that the Big Five traits are complex phenomena subject to a variety of developmental influences."}
{"_id": "158816d275cc8f5db956dcec2d01c28afbb52693", "title": "Nature over nurture: temperament, personality, and life span development.", "text": "Temperaments are often regarded as biologically based psychological tendencies with intrinsic paths of development. It is argued that this definition applies to the personality traits of the five-factor model. Evidence for the endogenous nature of traits is summarized from studies of behavior genetics, parent-child relations, personality structure, animal personality, and the longitudinal stability of individual differences. New evidence for intrinsic maturation is offered from analyses of NEO Five-Factor Inventory scores for men and women age 14 and over in German, British, Spanish, Czech, and Turkish samples (N = 5,085). These data support strong conceptual links to child temperament despite modest empirical associations. The intrinsic maturation of personality is complemented by the culturally conditioned development of characteristic adaptations that express personality; interventions in human development are best addressed to these."}
{"_id": "158cb844ec3e08698c008df8b144d0e86a9135d9", "title": "A power primer.", "text": "One possible reason for the continued neglect of statistical power analysis in research in the behavioral sciences is the inaccessibility of or difficulty with the standard material. A convenient, although not comprehensive, presentation of required sample sizes is provided here. Effect-size indexes and conventional values for these are given for operationally defined small, medium, and large effects. The sample sizes necessary for .80 power to detect effects at these levels are tabled for eight standard statistical tests: (a) the difference between independent means, (b) the significance of a product-moment correlation, (c) the difference between independent rs, (d) the sign test, (e) the difference between independent proportions, (f) chi-square tests for goodness of fit and contingency tables, (g) one-way analysis of variance, and (h) the significance of a multiple or multiple partial correlation."}
{"_id": "943aeaa18dd3383a3c94a5a318158cc8d38b5106", "title": "Gender differences in personality: a meta-analysis.", "text": "Four meta-analyses were conducted to examine gender differences in personality in the literature (1958-1992) and in normative data for well-known personality inventories (1940-1992). Males were found to be more assertive and had slightly higher self-esteem than females. Females were higher than males in extraversion, anxiety, trust, and, especially, tender-mindedness (e.g., nurturance). There were no noteworthy sex differences in social anxiety, impulsiveness, activity, ideas (e.g., reflectiveness), locus of control, and orderliness. Gender differences in personality traits were generally constant across ages, years of data collection, educational levels, and nations."}
{"_id": "ac2c7dc70619f3ac0cbe7476f905cf918495afe8", "title": "Personality and Performance at the Beginning of the New Millennium : What Do We Know and Where Do We Go Next ?", "text": "As we begin the new millennium, it is an appropriate time to examine what we have learned about personality-performance relationships over the past century and to embark on new directions for research. In this study we quantitatively summarize the results of 15 prior meta-analytic studies that have investigated the relationship between the Five Factor Model (FFM) personality traits and job performance. Results support the previous findings that conscientiousness is a valid predictor across performance measures in all occupations studied. Emotional stability was also found to be a generalizable predictor when overall work performance was the criterion, but its relationship to specific performance criteria and occupations was less consistent than was conscientiousness. Though the other three Big Five traits (extraversion, openness and agreeableness) did not predict overall work performance, they did predict success in specific occupations or relate to specific criteria. The studies upon which these results are based comprise most of the research that has been conducted on this topic in the past century. Consequently, we call for a moratorium on meta-analytic studies of the type reviewed in our study and recommend that researchers embark on a new research agenda designed to further our understanding of personalityperformance linkages."}
{"_id": "bcfd3332349d0e7ddbce4a5636900d71e029d691", "title": "3D printing bone models extracted from medical imaging data", "text": "Additive manufacturing in the form of 3D printing is becoming more common with the rise of low-cost hobby machines which makes research in the field of medicine more accessible. This article explores and experiments with methods of producing three dimensional replicas of body parts as education material from the medical imaging data of Computed Tomography and Magnetic Resonance Imaging."}
{"_id": "507584ef8a5a923d536b1e7d6394f8de69db4bd9", "title": "A novel redis security extension for NoSQL database using authentication and encryption", "text": "Redis is a new generation NoSQL database. Redis in its simplest form is a key value pair based data system. It supports all the data structures like variables, Linked list, arrays, strings, and queues. However unlike the conventional databases, Redis does not provide enough security for the data. Anyone can get the value if the key is known because the data is stored in the form of key value pair. Therefore such a database is unsuitable for enterprise and most practical application data. In this paper, the work is carried out to add immense security to a Redis system using following: a) Authentication Service b) Encryption Services c) Security to persistent data d) Security to blob data (multimedia data for images). The Encryption algorithm plays a very important role in the field of Database Management System. Here we make use of AES algorithm because the AES algorithm consumes least Encryption and Decryption time in comparison with RSA and DES [12]. The principle of the work is that a separate Key is created in the database whose value is an encrypted data, encrypted by symmetric key cryptography using AES. This data contains all other key values being concatenated and encrypted. Once a query is generated, first the extraction of independent data entities are made, followed by decryption of data using symmetric key. We also design a UI system to demonstrate the capabilities of the system with and without the security implementation. The result of our work shows that adding the security extension does not increase the overhead by much in terms of system resources and latency. We also extend the key-value based system to be able to store binary image data which is also stored in the encrypted pattern."}
{"_id": "49221df19239f3f279a49a61397d54836d0a8071", "title": "SPICE Models to Analyze Radiated and Conducted Susceptibilities of Shielded Coaxial Cables", "text": "This paper presents simulation program with integrated circuit emphasis (SPICE) models for the analyses of the radiated and conducted susceptibilities of shielded coaxial cables. Derived from the transmission-line equations of shielded cables, these models can be directly used for the time-domain and frequency-domain analyses, and connected to nonlinear and time-varying loads with the models already available in SPICE. In the proposed models, the usual discretization of shielded cables is not needed, and both the transfer impedance and admittance have been taken into account. Results obtained with these new models are in a good agreement with those obtained by using other approaches."}
{"_id": "ceb7784d1bebbc8e97e97cbe2b3b76bce1e708a5", "title": "Requirements Elicitation for an Inter-Organizational Business Intelligence System for Small and Medium Retail Enterprises", "text": "Business Intelligence (BI) is on everyone's lips nowadays, since it provides businesses with the possibility to analyze their business practices and improve them. However, Small and Medium Enterprises (SME) often cannot leverage the positive effects of BI because of missing resources like personnel, knowledge, or money. Since SME pose a major form of business organization, this fact has to be overcome. As the retail industry is a substantial part of the SME branch, we propose an inter-organizational approach for a BI system for retail SME, which allows them to collaboratively collect data and perform analysis task. The aim of our ongoing research effort is the development of such a system following the Design Science Research Methodology. Within this article, the status quo of current BI practices in SME in the retail industry is analyzed through qualitative interviews with ten SME managers. Afterwards, adoption and success factors of BI systems and Inter-organizational Information Systems are worked out in a comprehensive structured literature review. Based on the status quo and the adoption and success factors, first requirements for the acceptance of an inter-organizational BI system are identified and validated in another round of qualitative interviews. This leads to nine functional requirements and three non-functional requirements, which can be used for designing and implementing an inter-organizational BI system for SME in the following research efforts."}
{"_id": "3d9f87a56c7ea1a99f98c99eaefc23e8df008d5b", "title": "Internet of Things-aided Smart Grid: Technologies, Architectures, Applications, Prototypes, and Future Research Directions", "text": "Traditional power grids are being transformed into Smart Grids (SGs) to solve the problems of uni-directional information flow, energy wastage, growing energy demand, reliability and security. SGs offer bi-directional energy flow between service providers and consumers, involving power generation, transmission, distribution and utilization systems. SGs employ various devices for the monitoring, analysis and control of the grid, deployed at power plants, distribution centers and in consumers\u2019 premises in a very large number. Hence, an SG requires connectivity, automation and the tracking of such devices. This is achieved with the help of Internet of Things (IoT). IoT helps SG systems to support various network functions throughout the generation, transmission, distribution and consumption of energy by incorporating IoT devices (such as sensors, actuators and smart meters), as well as by providing the connectivity, automation and tracking for such devices. In this paper, we provide the first comprehensive survey on IoTaided SG systems, which includes the existing architectures, applications and prototypes of IoT-aided SG systems. This survey also highlights the open issues, challenges and future research directions for IoT-aided SG systems."}
{"_id": "8177904bb3f434c9b2852fb289c500eba57b35a4", "title": "Bert: a scheduler for best-effort and realtime paths", "text": "We describe a new algorithm, called BERT, that can be used to schedule both best effort and realtime tasks on a multimedia workstation. BERT exploits two innovations. First, it is based on the virtual clock algorithm originally developed to schedule bandwidth on packet switches. Because this algorithm maintains a relationship between virtual time and real time, our algorithm allows us to simultaneously factor realtime deadlines into the scheduling decision, and to allocate some fraction of the processor to best effort tasks. Second, BERT includes a mechanism that allows one task to steal cycles from another. This mechanism is valuable for two reasons: it allows us to distinguish among tasks based on their importance, and it results in a robust algorithm that is not overly sensitive to the task having made a perfect reservation."}
{"_id": "469657d82343e1280f224ef9e841561d48016c57", "title": "Similarity and Locality Based Indexing for High Performance Data Deduplication", "text": "Data deduplication has gained increasing attention and popularity as a space-efficient approach in backup storage systems. One of the main challenges for centralized data deduplication is the scalability of fingerprint-index search. In this paper, we propose SiLo, a near-exact and scalable deduplication system that effectively and complementarily exploits similarity and locality of data streams to achieve high duplicate elimination, throughput, and well balanced load at extremely low RAM overhead. The main idea behind SiLo is to expose and exploit more similarity by grouping strongly correlated small files into a segment and segmenting large files, and to leverage the locality in the data stream by grouping contiguous segments into blocks to capture similar and duplicate data missed by the probabilistic similarity detection. SiLo also employs a locality based stateless routing algorithm to parallelize and distribute data blocks to multiple backup nodes. By judiciously enhancing similarity through the exploitation of locality and vice versa, SiLo is able to significantly reduce RAM usage for index-lookup, achieve the near-exact efficiency of duplicate elimination, maintain a high deduplication throughput, and obtain load balance among backup nodes."}
{"_id": "915330171091202fa629d4241f05917447f03708", "title": "Functional System and Areal Organization of a Highly Sampled Individual Human Brain", "text": "Resting state functional MRI (fMRI) has enabled description of group-level functional brain organization at multiple spatial scales. However, cross-subject averaging may obscure patterns of brain organization specific to each individual. Here, we characterized the brain organization of a single individual repeatedly measured over more than a year. We report a reproducible and internally valid subject-specific areal-level parcellation that corresponds with subject-specific task activations. Highly convergent correlation network estimates can be derived from this parcellation if sufficient data are collected-considerably more than typically acquired. Notably, within-subject correlation variability across sessions exhibited a heterogeneous distribution across the cortex concentrated in visual and somato-motor regions, distinct from the pattern of intersubject variability. Further, although the individual's systems-level organization is broadly similar to the group, it demonstrates distinct topological features. These results provide a foundation for studies of individual differences in cortical organization and function, especially for special or rare individuals. VIDEO ABSTRACT."}
{"_id": "231edab2b378f5b63c03bf676b7d8b285b9bb80d", "title": "A Fast Similarity Join Algorithm Using Graphics Processing Units", "text": "A similarity join operation A BOWTIEepsiv B takes two sets of points A, B and a value epsiv isin Ropf, and outputs pairs of points p isin A,q isin B, such that the distance D(p, q) les epsiv. Similarity joins find use in a variety of fields, such as clustering, text mining, and multimedia databases. A novel similarity join algorithm called LSS is presented that executes on a graphics processing unit (GPU), exploiting its parallelism and high data throughput. As GPUs only allow simple data operations such as the sorting and searching of arrays, LSS uses these two operations to cast a similarity join operation as a GPU sort-and-search problem. It first creates, on the fly, a set of space-filling curves on one of its input datasets, using a parallel GPU sort routine. Next, LSS processes each point p of the other dataset in parallel. For each p, it searches an interval of one of the space-filling curves guaranteed to contain all the pairs in which p participates. Using extensive theoretical and experimental analysis, LSS is shown to offer a good balance between time and work efficiency. Experimental results demonstrate that LSS is suitable for similarity joins in large high-dimensional datasets, and that it performs well when compared against two existing prominent similarity join methods."}
{"_id": "819c22a7c4ec4d9dd12c2b15a475881555608b9d", "title": "Detecting Anomalous Trajectories and Behavior Patterns Using Hierarchical Clustering from Taxi GPS Data", "text": "Anomalous taxi trajectories are those chosen by a small number of drivers that are different from the regular choices of other drivers. These anomalous driving trajectories provide us an opportunity to extract driver or passenger behaviors and monitor adverse urban traffic events. Because various trajectory clustering methods have previously proven to be an effective means to analyze similarities and anomalies within taxi GPS trajectory data, we focus on the problem of detecting anomalous taxi trajectories, and we develop our trajectory clustering method based on the edit distance and hierarchical clustering. To achieve this objective, first, we obtain all the taxi trajectories crossing the same source\u2013destination pairs from taxi trajectories and take these trajectories as clustering objects. Second, an edit distance algorithm is modified to measure the similarity of the trajectories. Then, we distinguish regular trajectories and anomalous trajectories by applying adaptive hierarchical clustering based on an optimal number of clusters. Moreover, we further analyze these anomalous trajectories and discover four anomalous behavior patterns to speculate on the cause of an anomaly based on statistical indicators of time and length. The experimental results show that the proposed method can effectively detect anomalous trajectories and can be used to infer clearly fraudulent driving routes and the occurrence of adverse traffic events."}
{"_id": "25a3d24b7325366d48ecfe8ffe9494a1e9e5ffb2", "title": "A 1.6Gbps Digital Clock and Data Recovery Circuit", "text": "A digital clock and data recovery circuit employs simple 3-level digital-to-analog converters to interface the digital loop filter to the voltage controlled oscillator and achieves low jitter performance. Test chip fabricated in a 0.13mum CMOS process achieves BER < 10-12 , plusmn1500ppm lock-in range, plusmn2500ppm tracking range, recovered clock jitter of 8.9ps rms and consumes 12mW power from a single-pin 1.2V supply, while operating at 1.6Gbps"}
{"_id": "43e33e80d74205e860dd4b8e26b7c458c60e201a", "title": "Deep Convolutional Networks as shallow Gaussian Processes", "text": "We show that the output of a (residual) convolutional neural network (CNN) with an appropriate prior over the weights and biases is a Gaussian process (GP) in the limit of infinitely many convolutional filters, extending similar results for dense networks. For a CNN, the equivalent kernel can be computed exactly and, unlike \u201cdeep kernels\u201d, has very few parameters: only the hyperparameters of the original CNN. Further, we show that this kernel has two properties that allow it to be computed efficiently; the cost of evaluating the kernel for a pair of images is similar to a single forward pass through the original CNN with only one filter per layer. The kernel equivalent to a 32-layer ResNet obtains 0.84% classification error on MNIST, a new record for GPs with a comparable number of parameters. 1"}
{"_id": "cd5270755035c6c9544db5c157fdc86b1a2e0561", "title": "From cyber-physical systems to Industry 4.0: make future manufacturing become possible", "text": "Recently, \u2018Industry 4.0\u2019 has become the supporting force in the manufacturing field. From CPS to Industry 4.0, they are leading a new round of industrial revolution. To meet the fourth industrial revolution, USA, European countries, Japan, South Korea and China have presented different versions of the so-called \u2018Industry 4.0\u2019 strategies respectively, which are different from each other in the content, but their common goal is to achieve manufacturing structural adjustment and restructuring and upgrading, to seize a place for future manufacturing, to provide users with better service. What is more, we summarised the five major trends of the future manufacturing. Then, we introduce some enabling technologies about Industry 4.0, among them, CPS plays a key role in the Industry 4.0 factory. Based on the customer to business (C2B) mode, we regard everything as a service, and propose a structure of Industry 4.0. Finally, the automotive industry as an example, we make a brief introduction about an Industry 4.0 case. [Received 15 December 2015; Revised 30 May 2016; Accepted 30 May 2016]"}
{"_id": "dd97ad893999e02e4f6648edae10222829ec48c2", "title": "Line-start synchronous reluctance motors: Design guidelines and testing via active inertia emulation", "text": "Line-Start Synchronous Reluctance (LS-SyR) Motors are a viable solution to reach IE4 efficiency class with the frame size of induction motors having standard efficiency. The proposed paper reviews the guidelines for the design of LS-SyR machines and introduces a new technique for testing their capability of load synchronization. Design wise, the efficiency at synchronous speed and the load pull-in capability must be traded off. The key design principles of LS-SyR machines are reviewed in this sense, and two manufacturing solutions are proposed for comparison. A new experimental method for the evaluation of the pull-in characteristic is proposed. The test fixture consists of a torque-controlled ac drive load, equipped with a shaft accelerometer. The load applied to the line-start machine under test is augmented with a component proportional to the acceleration measurement, to emulate additional inertia."}
{"_id": "1e38c680492a958a2bd616a9a7121f905746a37e", "title": "Toward De-Anonymizing Bitcoin by Mapping Users Location", "text": "The Bitcoin system (https://bitcoin.org) is a pseudo-anonymous currency that can dissociate a user from any real-world identity. In that context, a successful breach of the virtual and physical divide represents a signficant aw in the Bit-coin system [1]. In this project we demonstrate how to glean information about the real-world users behind Bitcoin transactions. We analyze publicly available data about the cryptocurrency. In particular, we focus on determining information about a Bitcoin user's physical location by examining that user's spending habits."}
{"_id": "03a9adc2ab1c199e4be1547a632152c487baebb1", "title": "Twitter spam detection based on deep learning", "text": "Twitter spam has long been a critical but difficult problem to be addressed. So far, researchers have developed a series of machine learning-based methods and blacklisting techniques to detect spamming activities on Twitter. According to our investigation, current methods and techniques have achieved the accuracy of around 80%. However, due to the problems of spam drift and information fabrication, these machine-learning based methods cannot efficiently detect spam activities in real-life scenarios. Moreover, the blacklisting method cannot catch up with the variations of spamming activities as manually inspecting suspicious URLs is extremely time-consuming. In this paper, we proposed a novel technique based on deep learning techniques to address the above challenges. The syntax of each tweet will be learned through WordVector Training Mode. We then constructed a binary classifier based on the preceding representation dataset. In experiments, we collected and implemented a 10-day real Tweet datasets in order to evaluate our proposed method. We first studied the performance of different classifiers, and then compared our method to other existing text-based methods. We found that our method largely outperformed existing methods. We further compared our method to non-text-based detection techniques. According to the experiment results, our proposed method was more accurate."}
{"_id": "97cce81ca813ed757e1e76c0023865c7dbdc7308", "title": "The Effects of Feedback Interventions on Performance : A Historical Review , a Meta-Analysis , and a Preliminary Feedback Intervention Theory", "text": "the total number of papers may exceed 10,000. Nevertheless, cost consideration forced us to consider mostly published papers and technical reports in English. 4 Formula 4 in Seifert (1991) is in error\u2014a multiplier of n, of cell size, is missing in the numerator. 5 Unfortunately, the technique of meta-analysis cannot be applied, at present time, to such effects because the distribution of dis based on a sampling of people, whereas the statistics of techniques such as ARIMA are based on the distribution of a sampling of observations in the time domain regardless of the size of the people sample involved (i.e., there is no way to compare a sample of 100 points in time with a sample of 100 people). That is, a sample of 100 points in time has the same degrees of freedom if it were based on an observation of 1 person or of 1,000 people. 258 KLUGER AND DENISI From the papers we reviewed, only 131 (5%) met the criteria for inclusion. We were concerned that, given the small percentage of usable papers, our conclusions might not fairly represent the larger body of relevant literature. Therefore, we analyzed all the major reasons to reject a paper from the meta-analysis, even though the decision to exclude a paper came at the first identification of a missing inclusion criterion. This analysis showed the presence of review articles, interventions of natural feedback removal, and papers that merely discuss feedback, which in turn suggests that the included studies represent 1015% of the empirical FI literature. However, this analysis also showed that approximately 37% of the papers we considered manipulated feedback without a control group and that 16% reported confounded treatments, that is, roughly two thirds of the empirical FI literature cannot shed light on the question of FI effects on performance\u2014a fact that requires attention from future FI researchers. Of the usable 131 papers (see references with asterisks), 607 effect sizes were extracted. These effects were based on 12,652 participants and 23,663 observations (reflecting multiple observations per participant). The average sample size per effect was 39 participants. The distribution of the effect sizes is presented in Figure 1. The weighted mean (weighted by sample size) of this distribution is 0.41, suggesting that, on average, FI has a moderate positive effect on performance. However, over 38% of the effects were negative (see Figure 1). The weighted variance of this distribution is 0.97, whereas the estimate of the sampling error variance is only 0.09. A potential problem in meta-analyses is a violation of the assumption of independence. Such a violation occurs either when multiple observations are taken from the same study (Rosenthal, 1984) or when several papers are authored by the same person (Wolf, 1986). In the present investigation, there were 91 effects derived from the laboratory experiments reported by Mikulincer (e.g., 1988a, 1988b). This raises the possibility that the average effect size is biased, because his studies manipulated extreme negative FIs and used similar tasks. In fact, the weighted average d in Mikulincer's studies was \u20140.39; whereas in the remainder of the"}
{"_id": "8791cb341ee5b5922d6340bd0305b8ff389b5249", "title": "Random telegraph noise (RTN) in scaled RRAM devices", "text": "The random telegraph noise (RTN) related read instability in resistive random access memory (RRAM) is evaluated by employing the RTN peak-to-peak (P-p) amplitude as a figure of merit (FoM). Variation of the FoM value over multiple set/reset cycles is found to follow the log-normal distribution. P-p decreases with the reduction of the read current, which allows scaling of the RRAM operating current. The RTN effect is attributed to the mechanism of activation/deactivation of the electron traps in (in HRS) or near (in LRS) the filament that affects the current through the RRAM device."}
{"_id": "0b0f8ef8e8d595274a76d408ecc0de41824975d7", "title": "Automatic Generation of Efficient Domain-Optimized Planners from Generic Parametrized Planners", "text": "When designing state-of-the-art, domain-independent planning systems, many decisions have to be made with respect to the domain analysis or compilation performed during preprocessing, the heuristic functions used during search, and other features of the search algorithm. These design decisions can have a large impact on the performance of the resulting planner. By providing many alternatives for these choices and exposing them as parameters, planning systems can in principle be configured to work well on different domains. However, usually planners are used in default configurations that have been chosen because of their good average performance over a set of benchmark domains, with limited experimentation of the potentially huge range of possible configurations. In this work, we propose a general framework for automatically configuring a parameterized planner, showing that substantial performance gains can be achieved. We apply the framework to the well-known LPG planner, which has 62 parameters and over 6.5 \u00d7 10 possible configurations. We demonstrate that by using this highly parameterized planning system in combination with the off-the-shelf, state-of-the-art automatic algorithm configuration procedure ParamILS, the planner can be specialized obtaining significantly improved performance."}
{"_id": "3594c733cd685cc5b6d2279320043534ce288ac7", "title": "Screen Printing of Highly Loaded Silver Inks on Plastic Substrates Using Silicon Stencils.", "text": "Screen printing is a potential technique for mass-production of printed electronics; however, improvement in printing resolution is needed for high integration and performance. In this study, screen printing of highly loaded silver ink (77 wt %) on polyimide films is studied using fine-scale silicon stencils with openings ranging from 5 to 50 \u03bcm wide. This approach enables printing of high-resolution silver lines with widths as small as 22 \u03bcm. The printed silver lines on polyimide exhibit good electrical properties with a resistivity of 5.5\u00d710(-6) \u03a9 cm and excellent bending tolerance for bending radii greater than 5 mm (tensile strains less than 0.75%)."}
{"_id": "11aedb8f95a007363017dae311fc525f67bd7876", "title": "Minimum Error Rate Training in Statistical Machine Translation", "text": "Often, the training procedure for statistical machine translation models is based on maximum likelihood or related criteria. A general problem of this approach is that there is only a loose relation to the final translation quality on unseen text. In this paper, we analyze various training criteria which directly optimize translation quality. These training criteria make use of recently proposed automatic evaluation metrics. We describe a new algorithm for efficient training an unsmoothed error count. We show that significantly better results can often be obtained if the final evaluation criterion is taken directly into account as part of the training procedure."}
{"_id": "51c88cf10b75ed1a5445660cc64cee3d1c6fd8c5", "title": "Bayesian Unsupervised Word Segmentation with Nested Pitman-Yor Language Modeling", "text": "In this paper, we propose a new Bayesian model for fully unsupervised word segmentation and an efficient blocked Gibbs sampler combined with dynamic programming for inference. Our model is a nested hierarchical Pitman-Yor language model, where Pitman-Yor spelling model is embedded in the word model. We confirmed that it significantly outperforms previous reported results in both phonetic transcripts and standard datasets for Chinese and Japanese word segmentation. Our model is also considered as a way to construct an accurate word n-gram language model directly from characters of arbitrary language, without any \u201cword\u201d indications."}
{"_id": "65e90d9f6754d32db464f635e7fdec672fad9ccf", "title": "The Second International Chinese Word Segmentation Bakeoff", "text": "The second international Chinese word segmentation bakeoff was held in the summer of 2005 to evaluate the current state of the art in word segmentation. Twenty three groups submitted 130 result sets over two tracks and four different corpora. We found that the technology has improved over the intervening two years, though the out-of-vocabulary problem is still or paramount importance."}
{"_id": "9703efad5e36e1ef3ab2292144c1a796515e5f6a", "title": "The Mathematics of Statistical Machine Translation: Parameter Estimation", "text": "We describe a series o,f five statistical models o,f the translation process and give algorithms,for estimating the parameters o,f these models given a set o,f pairs o,f sentences that are translations o,f one another. We define a concept o,f word-by-word alignment between such pairs o,f sentences. For any given pair of such sentences each o,f our models assigns a probability to each of the possible word-by-word alignments. We give an algorithm for seeking the most probable o,f these alignments. Although the algorithm is suboptimal, the alignment thus obtained accounts well for the word-by-word relationships in the pair o,f sentences. We have a great deal o,f data in French and English from the proceedings o,f the Canadian Parliament. Accordingly, we have restricted our work to these two languages; but we,feel that because our algorithms have minimal linguistic content they would work well on other pairs o,f languages. We also ,feel, again because of the minimal linguistic content o,f our algorithms, that it is reasonable to argue that word-by-word alignments are inherent in any sufficiently large bilingual corpus."}
{"_id": "2c0a239caa3c2c590e4d6f23ad01c1f77adfc7a0", "title": "A cross-cultural test of the Maluma-Takete phenomenon.", "text": ""}
{"_id": "841bc100c482f068509a45cf9b380258ae13bf20", "title": "Occupational therapy interventions for employment and education for adults with serious mental illness: a systematic review.", "text": "In this systematic review, we investigated research literature evaluating the effectiveness of occupational therapy interventions focusing on participation and performance in occupations related to paid and unpaid employment and education for people with serious mental illness. The review included occupation- and activity-based interventions and interventions addressing performance skills, aspects of the environment, activity demands, and client factors. The results indicate that strong evidence exists for the effectiveness of supported employment using individual placement and support to result in competitive employment. These outcomes are stronger when combined with cognitive or social skills training. Supported education programs emphasizing goal setting, skill development, and cognitive training result in increased participation in educational pursuits. The evidence for instrumental activities of daily living interventions that targeted specific homemaking occupations and supported parenting was limited but positive. Environmental cognitive supports, such as signs, and other compensatory strategies are useful in managing maladaptive behavior."}
{"_id": "e2e92ed915d5e12e5c5a6c2ecf9dac313751426f", "title": "THE FUNCTIONAL APPROACH TO THE STUDY OF ATTITUDES", "text": "At the psychological level the reasons for holding or for changing attitudes are found in the functions they perform for the individual, specifically the functions of adjustment, ego defense, value expression, and knowledge. The conditions necessary to arouse or modify an attitude vary according to the motivational basis of the attitude. Ego-defensive attitudes, for example, can be aroused by threats, appeals to hatred and repressed impulses, and authoritarian suggestion, and can be changed by removal of threat, catharsis, and self-insight. Expressive attitudes are aroused by cues associated with the individual's values and by the need to reassert his self-image and can be changed by showing the appropriateness of the new or modified beliefs to the self-concept Brain washing is primarily directed at the value-expressive function and operates by controlling all environmental supports of old values. Changing attitudes may involve generalization of change to related areas of belief and feeling. Minimal generalization seems to be the rule among adults; for example, in politics voting for an opposition candidate does not have much effect upon party identification. The author is Professor of Psychology at the University of Michigan, former president of the Society for the Psychological Study of Social Issues, and coeditor of Research Methods in the Behavioral Sciences and Public Opinion and Propaganda."}
{"_id": "0a182d0e685bc23f2edf8a4b221b1567f8773272", "title": "Automatic linkage of vital records.", "text": "The term record linkage has been used to indicate the bringing together of two or more separately re?orded pieces of information concerning a particular individual or family (I). Defined in this broad manner, it includes almost any use of a file of records to determine what has subsequently happened to people about whom one has some prior information, The various facts concerning an individual which in any modern society are recorded routinely would, if brought together, form an extensively documented history of his 1ife. In theory at least, an understanding might be derived from such collective histories concerning many of the factors which operate to influence the welfare of human populations, factors about which we are at present almost entirely in ignorance. Of course, much of the recorded information is in a relatively inaccessible form; but, even when circumstances have been most favorable, as in the registrations of births, deaths, and marriages, and in the census, there has been little recognition of the special value of the records as a source of statistics when they are brought together so as to relate the successive events in the lives of particular individuals and families. The chief reason for this lies in the high cost of searching manually for large numbers of single documents among vast accumulations of files. It is obvious that the searching could be mechanized, but as yet there has been no clear demonstration that machines can carry out the record linkages rapidly enough, cheaply enough, and with sufficient accuracy to make this practicable. The need for various follow-up studies such as might be carried out with the aid of record linkage have been discussed in detail elsewhere (I, 2), and there are numerous examples o{ important surveys which could be greatly extended in scope if existing record files were more readily linkable (3). Our special interest in the techniques of record linkage relates to their possible use (i) for keeping track of large groups of individuals who have been exposed to low levels of radiation, in order to determine the causes of their eventual deaths (see 4, chap. 8, para. 48; 5), and (ii) for assessing the relative importance of repeated natural mutations on the one hand, and of fertility differentials on the other, in maintaining the frequency of genetic defects in human populations (see 4, chap. 6, para. 36c). Our own studies (6) were started as part of a plan to look for possible differentials of family fertility in relation to the presence or absence of hereditary disease (through the use of vital records and a register of hm-sdicapped children). The first step has been the development of a method for linking birth records to marriage records automatically with a Datatron 205 computer. For this purpose use has been made of the records of births which occurred in the Canadian province of British Columbia during the year 1955 (34, 138 births) and of the marriages which took place in the same province over the 10-year period 1946-55 (114,47 1 marriages). Fortunately, these records were already in punch-card form as a part of Canada\u2019s National Index, and from them could be extracted most of the necessary information on names and other"}
{"_id": "a12613a49398ee4c2d159175f91e47d269c3826d", "title": "Financial Stock Market Forecast using Data Mining Techniques", "text": "The automated computer programs using data mining and predictive technologies do a fare amount of trades in the markets. Data mining is well founded on the theory that the historic data holds the essential memory for predicting the future direction. This technology is designed to help investors discover hidden patterns from the historic data that have probable predictive capability in their investment decisions. The prediction of stock markets is regarded as a challenging task of financial time series prediction. Data analysis is one way of predicting if future stocks prices will increase or decrease. Five methods of analyzing stocks were combined to predict if the day's closing price would increase or decrease. These methods were Typical Price (TP), Bollinger Bands, Relative Strength Index (RSI), CMI and Moving Average (MA). This paper discussed various techniques which are able to predict with future closing stock price will increase or decrease better than level of significance. Also, it investigated various global events and their issues predicting on stock markets. It supports numerically and graphically."}
{"_id": "40f94c7e8dbce2c0fd0edc29d4ba94f52266acdd", "title": "A method for classifying medical images using transfer learning: A pilot study on histopathology of breast cancer", "text": "The advance of deep learning has made huge changes in computer vision and produced various off-the-shelf trained models. Particularly, Convolutional Neural Network (CNN) has been widely used to build image classification model which allow researchers transfer the pre-trained learning model for other classifications. We propose a transfer learning method to detect breast cancer using histopathology images based on Google's Inception v3 model which were initially trained for the classification of non-medical images. The pilot study shows the feasibility of transfer learning in the detection of breast cancer with AUC of 0.93."}
{"_id": "770ee5d2536f5a3e753950ad00494d6c6d982565", "title": "Data Management for Journalism", "text": "We describe the power and potential of data journalism, where news stories are reported and published with data and dynamic visualizations. We discuss the challenges facing data journalism today and how recent data management tools such as Google Fusion Tables have helped in the newsroom. We then describe some of the challenges that need to be addressed in order for data journalism to reach its full"}
{"_id": "ea03a5ea291bff200be74a40aa71a7254012d018", "title": "Escape loneliness by going digital: a quantitative and qualitative evaluation of a Dutch experiment in using ECT to overcome loneliness among older adults.", "text": "BACKGROUND\nThis study evaluates the outcomes of an Internet-at-home intervention experiment that intended to decrease loneliness among chronically ill and physically handicapped older adults through introducing them to the use of an electronic communication facility.\n\n\nMETHOD\nTo determine the effectiveness of the experiment in terms of reducing loneliness, 15 older adults were interviewed three times: shortly before the start, two years later and immediately after termination of the experiment, while their loneliness scores at zero and post-measurement were compared with those of a control group.\n\n\nRESULTS\nBoth the participants and the control persons experienced a reduction in loneliness over time. However, the reduction was only significant for the intervention participants. Moreover, the changes in loneliness were significantly greater among the participants compared to the control persons. When looking more in detail, the effect of the experiment was only significant regarding emotional loneliness and among the highest educated. Findings of the qualitative research enabled us to understand the mechanisms through which the intervention helped alleviate loneliness. E-mail was found to facilitate social contact. Furthermore, the computer and Internet were often used to pass the time, taking people's minds off their loneliness. Unexpectedly, the intervention also improved people's self-confidence.\n\n\nCONCLUSION\nThe decline in loneliness is likely to be greater if persons under more favorable circumstances are selected and if more social functions of the Internet are used."}
{"_id": "90e994a802a0038f24c8e3735d7619ebb40e6e93", "title": "Semantic Foggy Scene Understanding with Synthetic Data", "text": "This work addresses the problem of semantic foggy scene understanding (SFSU). Although extensive research has been performed on image dehazing and on semantic scene understanding with clear-weather images, little attention has been paid to SFSU. Due to the difficulty of collecting and annotating foggy images, we choose to generate synthetic fog on real images that depict clear-weather outdoor scenes, and then leverage these partially synthetic data for SFSU by employing state-of-the-art convolutional neural networks (CNN). In particular, a complete pipeline to add synthetic fog to real, clear-weather images using incomplete depth information is developed. We apply our fog synthesis on the Cityscapes dataset and generate Foggy Cityscapes with 20,550 images. SFSU is tackled in two ways: (1) with typical supervised learning, and (2) with a novel type of semi-supervised learning, which combines (1) with an unsupervised supervision transfer from clear-weather images to their synthetic foggy counterparts. In addition, we carefully study the usefulness of image dehazing for SFSU. For evaluation, we present Foggy Driving, a dataset with 101 real-world images depicting foggy driving scenes, which come with ground truth annotations for semantic segmentation and object detection. Extensive experiments show that (1) supervised learning with our synthetic data significantly improves the performance of state-of-the-art CNN for SFSU on Foggy Driving; (2) our semi-supervised learning strategy further improves performance; and (3) image dehazing marginally advances SFSU with our learning strategy. The datasets, models and code are made publicly available."}
{"_id": "1d737920e815b2eed4ff8f90d0c615c269fab3cf", "title": "An architecture-based approach to self-adaptive software", "text": "CONSIDER THE FOLLOWING SCEnario. A fleet of unmanned air vehicles undertakes a mission to disable an enemy airfield. Pre-mission intelligence indicates that the airfield is not defended, and mission planning proceeds accordingly. While the UAVs are en route to the target, new intelligence indicates that a mobile surface-to-air missile launcher now guards the airfield. The UAVs autonomously replan their mission, dividing into two groups\u2014a SAM-suppression unit and an airfield-suppression unit\u2014and proceed to accomplish their objectives. During the flight, specialized algorithms for detecting and recognizing SAM launchers automatically upload and are integrated into the SAM-suppression unit\u2019s software. In this scenario, new software components are dynamically inserted into fielded, heterogeneous systems without requiring system restart, or indeed, any downtime. Mission replanning relies on analyses that include feedback from current performance. Furthermore, such replanning can take place autonomously, can involve multiple, distributed, cooperating planners, and where major changes are demanded and require human approval or guidance, can cooperate with mission analysts. Throughout, system integrity requires the assurance of consistency, correctness, and coordination of changes. Other applications for fleets of UAVs might include environment and land-use monitoring, freeway-traffic management, fire fighting, airborne cellular-telephone relay stations, and damage surveys in times of natural disaster. How wasteful to construct afresh a specific software platform for each new UAV application! Far better if software architects can simply adapt the platform to the application at hand, and better yet, if the platform itself adapts on demand even while serving some other purpose. For example, an irborne sensor platform designed for environmental and land-use monitoring could prove useful for damage surveys following an earthquake or hurricane, provided someone could change the software quickly enough and with sufficient assurance that the new system would perform as intended. Software engineering aims for the systematic, principled design and deployment of applications that fulfill software\u2019s original promise\u2014applications that retain full plasticity throughout their lifecycle and that are as easy to modify in the field as they are on the drawing board. Software engineers have pursued many techniques for achieving this goal: specification languages, high-level programming languages, and object-oriented analysis and design, to name just a few. However, while each contributes to the goal, the sum total still falls short. Self-adaptive software will provide the key. Many disciplines will contribute to its progress, but wholesale advances require a sysSELF-ADAPTIVE SOFTWARE REQUIRES HIGH DEPENDABILITY, ROBUSTNESS, ADAPTABILITY, AND AVAILABILITY. THIS ARTICLE DESCRIBES AN INFRASTRUCTURE SUPPORTING TWO SIMULTANEOUS PROCESSES IN SELF-ADAPTIVE SOFTWARE: SYSTEM EVOLUTION, THE CONSISTENT APPLICATION OF CHANGE OVER TIME, AND SYSTEM ADAPTATION, THE CYCLE OF DETECTING CHANGING CIRCUMSTANCES AND PLANNING AND DEPLOYING RESPONSIVE MODIFICATIONS."}
{"_id": "26f52b2558c2bee9440f54e18b1cb05c1bb4c44a", "title": "Approach and Avoidance Achievement Goals and Intrinsic Motivation : A Mediational Analysis", "text": "Most contemporary achievement goal conceptualizations consist of a performance goal versus mastery goal dichotomy. The present research offers an alternative framework by partitioning the performance goal orientation into independent approach and avoidance motivational orientations. Two experiments investigated the predictive utility of the proposed approach-avoidance achievement goal conceptualization in the intrinsic motivation domain. Results from both experiments supported the proposed framework; only performance goals grounded in the avoidance of failure undermined intrinsic motivation. Task involvement was validated as a mediator of the observed effects on intrinsic motivation. Ramifications for the achievement goal approach to achievement motivation and future research avenues are discussed."}
{"_id": "5f1e8d265b1d3301c0472f92e097e4c3b9298d80", "title": "Intrinsic cancer subtypes-next steps into personalized medicine", "text": "Recent technological advances have significantly improved our understanding of tumor biology by means of high-throughput mutation and transcriptome analyses. The application of genomics has revealed the mutational landscape and the specific deregulated pathways in different tumor types. At a transcriptional level, multiple gene expression signatures have been developed to identify biologically distinct subgroups of tumors. By supervised analysis, several prognostic signatures have been generated, some of them\u00a0being commercially available. However, an unsupervised approach is required to discover a priori unknown molecular subtypes, the so-called intrinsic subtypes. Moreover, an integrative analysis of the molecular events associated with tumor biology has\u00a0been translated into a better tumor classification. This molecular characterization confers new opportunities for therapeutic strategies in the management of cancer patients. However, the applicability of these new molecular classifications is limited because of several issues such as technological validation and cost. Further comparison with well-established clinical and pathological features is expected to accelerate clinical translation. In this review, we will focus on the data reported on molecular classification in the most common tumor types such as breast, colorectal and lung carcinoma, with special emphasis on recent data regarding tumor intrinsic subtypes. Likewise, we will review the potential applicability of these new classifications in the clinical routine."}
{"_id": "f748311d212786bfe6b6c6878d633db04bbe4e9a", "title": "3-D Deep Learning Approach for Remote Sensing Image Classification", "text": "Recently, a variety of approaches have been enriching the field of remote sensing (RS) image processing and analysis. Unfortunately, existing methods remain limited to the rich spatiospectral content of today\u2019s large data sets. It would seem intriguing to resort to deep learning (DL)-based approaches at this stage with regard to their ability to offer accurate semantic interpretation of the data. However, the specificity introduced by the coexistence of spectral and spatial content in the RS data sets widens the scope of the challenges presented to adapt DL methods to these contexts. Therefore, the aim of this paper is first to explore the performance of DL architectures for the RS hyperspectral data set classification and second to introduce a new 3-D DL approach that enables a joint spectral and spatial information process. A set of 3-D schemes is proposed and evaluated. Experimental results based on well-known hyperspectral data sets demonstrate that the proposed method is able to achieve a better classification rate than state-of-the-art methods with lower computational costs."}
{"_id": "47f3011e1e3fc22b75cbd3b0a132a69a9f999d38", "title": "Towards Know-how Mapping Using Goal Modeling", "text": "In organizing the knowledge in a field of study, it is common to use classification techniques to organize concepts and approaches along dimensions of interest. In technology domains, an advance often appears in the form of a new way or method for achieving an objective. This paper proposes to use goal modeling to map the means-ends knowledge (\u201cknow-how\u201d) in a domain. A know-how map highlights the structure of recognized problems and known solutions in the domain, thus facilitating gap identification and prompting new research and innovation. We contrast the proposed goal-oriented approach with a claim-oriented approach, using Web Page Ranking as a sample domain."}
{"_id": "1cdb7114285246f48b395810a116a9d64c24eb46", "title": "Controlling Home Appliances Remotely through Voice Command", "text": "the main concern in systems development is the integration of technologies to increase customer satisfaction. Research presented in this paper focuses mainly in three things first to understand the speech or voice of user second is to control the home appliances through voice call and third is to finds intrusion in the house. The user can make a voice call in order to perform certain actions such as switching lights on/off, getting the status of any appliance etc. And when system finds intrusion it sends an alert voice message to preconfigured cell when the user is away from the place. The proposed system is implemented using voice Global System for Mobile Communications (GSM) and wireless technology based on .NET framework and Attention (AT) commands. Microsoft speech reorganization engine, speech SDK 5.1 is used to understand the voice command of user. As it is wireless so more cost effective and easy to use. The GSM technology used in system provide the everywhere access of the system for security. Experimental results show that the system is more secure and cost effective as compared to existing systems. We conclude that this system provides solution for the problems faced by home owner in daily life and make their life easy and comfortable by proposing cost effective and reliable solution. Keywords-component; Voice GSM; Voice message; radio friquencey (RF); AT commands."}
{"_id": "5b41a00a2fe665ad7353570c9ceabadd453e58e7", "title": "RFID Coverage Extension Using Microstrip-Patch Antenna Array [Wireless Corner]", "text": "In this paper, a UHF-band 2 times 2 microstrip phased-array antenna is designed and implemented to extend the coverage of an RFID reader system. The phased-array antenna has four microstrip-patch antennas, three Wilkinson power dividers, and a transmission-line phase shifter. These are printed on a dielectric substrate with a dielectric constant of 4.5. The array has dimensions of 34 cm times 45 cm, operating at a frequency of 867 MHz, as specified in RFID Gen2 protocol European standards. The phased-array antenna has a measured directivity of 12.1 dB, and the main-beam direction can be steered to angles of plusmn 40deg, with a HPBW of 90deg. The phased-array antenna is used as the receiving antenna in a commercial reader system. Experimental results indicate that the coverage of the RFID system with the phased-array antenna is superior to the coverage with a conventional broader-beamwidth microstrip-patch antenna. The proposed system can also be used for a wireless positioning system."}
{"_id": "926a3da3c64ac0c9097c5f2fdc175b74035c6ec5", "title": "N-Fold Inspection: A Requirements Analysis Technique", "text": "N-fold inspection uses traditional inspections of the user requirements document (URD) but replicates the inspection activities using N independent teams. A pilot study was conducted to explore the usefulness of N-fold inspection during requirements analysis. A comparison of N-fold inspection with other development techniques reveals that N-fold inspection is a cost-effective method for finding faults in the URD and may be a valid technique in the development of mission-critical software systems."}
{"_id": "9516b08a55ff710271104d70b9f12ce9c2eb9d4e", "title": "A Benchmark for Open Link Prediction", "text": "Knowledge graphs (KG) and open information extraction systems (OIE) both represent information by a set of (subject, relation, object)-triples. In KGs, arguments and relations are taken from a closed vocabulary, i.e., entities are disambiguated and relations are assumed to be semantically distinct. In OIE, arguments and relations are extracted from natural language text. There is no predefined or closed vocabulary, many different mentions may refer to the same entity, and relations can be semantically close but not equal, e.g. \u201cvisited by\u201d and \u201cfrequently visited by\u201d. The set of OIE triples can be viewed as an open knowledge graph (OKG). In this paper, we study the problem of open link prediction: Given a mention of an entity and an open relation (e.g., \u201cA. Einstein\u201d and \u201cis a native of\u201d), predict correct entity mentions (e.g., \u201cImperial Germany\u201d) using solely the OKG. Models that aim to solve the open link prediction task sucessfully need to be able to reason about different entity mentions, as well as the relations between them, in a fully automatic, unsupervised way. As a benchmark for this task we design an OKG dataset, an evaluation protocol and present and evaluate a number of baseline models. In particular, we compare a token-based co-occurrence model, a non token-based link prediction model, and several token-based link prediction models. Our results indicate that the latter combination is beneficial."}
{"_id": "db5b6be0eee4b4d70c0f589b9a124f183007f0bb", "title": "Dioxin characterisation , formation and minimisation during municipal solid waste ( MSW ) incineration : review", "text": "The present review discusses the current views on methods to minimise dioxins, namely polychlorinated dibenzodioxins (PCDDs) and dibenzofurans (PCDFs), formation in MSW incineration systems. The structure of this group of compounds is discussed initially and then the toxic equivalence scale is presented for the most common isomers and congeners in the dioxin family. The current status on dioxin limits imposed in various countries and by various organisations is presented. A detailed analysis of the theories leading to dioxin formation in MSW incineration is given, since, this has been one of the most controversial areas of dioxin chemistry for the past 20 years. Three dioxin formation theories were considered possible for a long time; (i) from PCDD/PCDFs originally present in the furnace feedstock; (ii) from precursor compounds (foundation formatting molecules which could react rapidly with other groups in the system to form dioxins) in the MSW feed; (iii) from de novo synthesis of smaller, relatively innocuous chemical molecules combining together to form the dioxins. Methods (ii) and (iii) are based on heterogeneously catalysed reactions. Some researchers are considering possible homogeneous thermal reaction formation of dioxin. This review demonstrates that with the advanced modern MSW combustion systems, option (i) is a most unlikely route and also methods (ii) and (iii) are quite feasible. Based on thermodynamic and kinetic data in the literature, the rate and extent of the formation of dioxins and their precursors by certain mechanisms can definitely be contributing to routes (ii) and (iii). Since even the most advanced MSW combustion systems do not produce complete combustion, predominantly because of inadequate feed preparation and turbulence, some de novo synthesis of precursors can also take place. These \u2018de novo precursors\u2019 could be carried through the combustion unit adsorbed or absorbed on particulate material such as soot and dust, but also these precursors could be formed during the cooling process by heterogeneous catalytic reactions and go on to form dioxins. The maximum rate of formation of PCDD/PCDFs from both sources lies in the temperature range 300\u2013400 \u25e6C. This knowledge of formation rates and mechanisms provides the basis of designing combustion systems. A two stage approach is adopted; firstly, system design to achieve complete combustion and minimise formation; secondly, end-of-pipe treatment systems to remove dioxins. In the first case, combustion temperature should be above 1000 \u25e6C, combustion residence time should be greater than 1 s, combustion chamber turbulence should be represented by a Reynolds number greater than 50,000, good MSW feed preparation and controlled feed rate are also critical. 
In the second category, very rapid gas cooling from 400 to 250 °C should be achieved; semi-dry lime scrubbing and bag filtration coupled with activated carbon injection adsorption as end-of-pipe treatments can all play a role in prevention or minimisation of dioxins in the final flue gas emission to the atmosphere."}
{"_id": "4655d0eb4fdb6b4a70bbc6e4ee3dfaae5c92fd4e", "title": "Compressive sensing", "text": "1 Scope The Shannon/Nyquist sampling theorem tells us that in order to not lose information when uniformly sampling a signal we must sample at least two times faster than its bandwidth. In many applications, including digital image and video cameras, the Nyquist rate can be so high that we end up with too many samples and must compress in order to store or transmit them. In other applications, including imaging systems (medical scanners, radars) and high-speed analog-to-digital converters, increasing the sampling rate or density beyond the current state-of-the-art is very expensive. In this lecture, we will learn about a new technique that tackles these issues using compressive sensing [1, 2]. We will replace the conventional sampling and reconstruction operations with a more general linear measurement scheme coupled with an optimization in order to acquire certain kinds of signals at a rate significantly below Nyquist."}
{"_id": "c33126aa1ac7cc87828b2f37c8c561c3b98a53a3", "title": "Formal Specification with the Java Modeling Language", "text": "This text is a general, self contained, and tool independent introduction into the Java Modeling Language, JML. It appears in a book about the KeY approach and tool, because JML is the dominating starting point of KeY style Java verification. However, this chapter does not depend on KeY, nor any other specific tool, nor on any specific verification methodology. With this text, the authors aim to provide, for the time being, the definitive, general JML tutorial. Other chapters in this book discuss the particular usage of JML in KeY style verification.1 In this chapter, however, we only refer to KeY in very few places, without relying on it. This introduction is written for all readers with an interest in formal specification of software in general, and anyone who wants to learn about the JML approach to specification in particular. A preliminary version of this chapter appeared as a technical report [Huisman et al., 2014]."}
{"_id": "e41c295d1888428b067589413bd5dd0a8469b546", "title": "Low complexity photo sensor dead pixel detection algorithm", "text": "Although the number of pixels in image sensors is increasing exponentially, production techniques have only been able to linearly reduce the probability that a pixel will be defective. The result is a rapidly increasing probability that a sensor will contain one or more defective pixels. The defect pixel detection and defect pixel correction are operated separately but the former must employ before the latter is in use. Traditional detection scheme, which finds the defect pixels during manufacturing, is not able to discover the spread defect pixels years late. Consequently, the lifetime and robust defect pixel detection technique, which identifies the fault pixels when camera is in use, is more practical and developed. The paper presents a two stages dead pixel detection technique without complicated mathematic computations so that the embedded devices can easily implement it. Using six dead pixel types are tested and the experimental result indicates that it can be accelerated more than four times the detection time."}
{"_id": "f0041b836d507b8d22367a6ef7faab583769de82", "title": "Probabilistic Latent Semantic Indexing", "text": "Probabilistic Latent Semantic Indexing is a novel approach to automated document indexing which is based on a statistical latent class model for factor analysis of count data. Fitted from a training corpus of text documents by a generalization of the Expectation Maximization algorithm, the utilized model is able to deal with domain{specific synonymy as well as with polysemous words. In contrast to standard Latent Semantic Indexing (LSI) by Singular Value Decomposition, the probabilistic variant has a solid statistical foundation and defines a proper generative data model. Retrieval experiments on a number of test collections indicate substantial performance gains over direct term matching methods as well as over LSI. In particular, the combination of models with different dimensionalities has proven to be advantageous."}
{"_id": "21496686f28b5b7b978cc9155bbb164424b55e7b", "title": "Latent Semantic Indexing (LSI): TREC-3 Report", "text": "We used LSI for both TREC-3 routing and adhoc tasks. For the routing tasks an LSI space was constructed using the training documents. We compared profiles constructed using just the topic words (no training) with profiles constructed using the average of relevant documents (no use of the topic words). Not surprisingly, the centroid of the relevant documents was 30% better than the topic words. This simple feedback method was quite good compared to the routing performance of other systems. Various combinations of information from the topic words and relevant documents provide small additional improvements in performance. For the adhoc task we compared LSI to keyword vector matching (i.e. using no dimension reduction). Small advantages were obtained for LSI even with the long TREC topic statements."}
{"_id": "362290c3a5a535d3cdc89f89ac10dd48c735dc94", "title": "Highly Efficient Mining of Overlapping Clusters in Signed Weighted Networks", "text": "In many practical contexts, networks are weighted as their links are assigned numerical weights representing relationship strengths or intensities of inter-node interaction. Moreover, the links' weight can be positive or negative, depending on the relationship or interaction between the connected nodes. The existing methods for network clustering however are not ideal for handling very large signed weighted networks. In this paper, we present a novel method called LPOCSIN (short for \"Linear Programming based Overlapping Clustering on Signed Weighted Networks\") for efficient mining of overlapping clusters in signed weighted networks. Different from existing methods that rely on computationally expensive cluster cohesiveness measures, LPOCSIN utilizes a simple yet effective one. Using this measure, we transform the cluster assignment problem into a series of alternating linear programs, and further propose a highly efficient procedure for solving those alternating problems. We evaluate LPOCSIN and other state-of-the-art methods by extensive experiments covering a wide range of synthetic and real networks. The experiments show that LPOCSIN significantly outperforms the other methods in recovering ground-truth clusters while being an order of magnitude faster than the most efficient state-of-the-art method."}
{"_id": "a98cc18933adc0281dfadf00cb8e26df61ede429", "title": "A large-scale evaluation of automatic pulmonary nodule detection in chest CT using local image features and k-nearest-neighbour classification", "text": "A scheme for the automatic detection of nodules in thoracic computed tomography scans is presented and extensively evaluated. The algorithm uses the local image features of shape index and curvedness in order to detect candidate structures in the lung volume and applies two successive k-nearest-neighbour classifiers in the reduction of false-positives. The nodule detection system is trained and tested on three databases extracted from a large-scale experimental screening study. The databases are constructed in order to evaluate the algorithm on both randomly chosen screening data as well as data containing higher proportions of nodules requiring follow-up. The system results are extensively evaluated including performance measurements on specific nodule types and sizes within the databases and on lesions which later proved to be malignant. In a random selection of 813 scans from the screening study a sensitivity of 80% with an average 4.2 false-positives per scan is achieved. The detection results presented are a realistic measure of a CAD system performance in a low-dose screening study which includes a diverse array of nodules of many varying sizes, types and textures."}
{"_id": "e1edf3581e776a0943be4c9ac7f629ee86091ac0", "title": "The Software Concordance: A User Interface for Advanced Software Documents", "text": "The Software Concordance is a hypermedia software development environment exploring how document technology and versioned hypermedia can improve software document management. The Software Concordance\u2019s central tool is a document editor that integrates program analysis and hypermedia services for both source code and multimedia documentation in XML. The editor allows developers to embed inline multimedia documentation, including images and audio clips, into their program sources and to bind them to any program fragment. Web style hyperlinks are also supported. The developers are able to move seamlessly between source code and the documentation that describes its motivation, design, correctness and use without disrupting the integrated program analysis services, which include lexing, parsing and type checking. This paper motivates the need for environments like the Software Concordance, describes the key features and architecture of its document editor and discusses future work to improve its functionality."}
{"_id": "0834f03fab14f4da01347f65da1c2861c7184671", "title": "Cloud-enabled privacy-preserving collaborative learning for mobile sensing", "text": "In this paper, we consider the design of a system in which Internet-connected mobile users contribute sensor data as training samples, and collaborate on building a model for classification tasks such as activity or context recognition. Constructing the model can naturally be performed by a service running in the cloud, but users may be more inclined to contribute training samples if the privacy of these data could be ensured. Thus, in this paper, we focus on privacy-preserving collaborative learning for the mobile setting, which addresses several competing challenges not previously considered in the literature: supporting complex classification methods like support vector machines, respecting mobile computing and communication constraints, and enabling user-determined privacy levels. Our approach, Pickle, ensures classification accuracy even in the presence of significantly perturbed training samples, is robust to methods that attempt to infer the original data or poison the model, and imposes minimal costs. We validate these claims using a user study, many real-world datasets and two different implementations of Pickle."}
{"_id": "249fb17f09f9d4b45fd74403962326b7788ce8c3", "title": "Malware Detection Using Deep Transferred Generative Adversarial Networks", "text": "Malicious software is generated with more and more modified features of which the methods to detect malicious software use characteristics. Automatic classification of malicious software is efficient because it does not need to store all characteristic. In this paper, we propose a transferred generative adversarial network (tGAN) for automatic classification and detection of the zero-day attack. Since the GAN is unstable in training process, often resulting in generator that produces nonsensical outputs, a method to pre-train GAN with autoencoder structure is proposed. We analyze the detector, and the performance of the detector is visualized by observing the clustering pattern of malicious software using t-SNE algorithm. The proposed model gets the best performance compared with the conventional machine learning algorithms."}
{"_id": "991bfca9860a763d3e7a4269c58a1b8697a8b9bc", "title": "Example-based hinting of true type fonts", "text": "Hinting in TrueType is a time-consuming manual process in which a typographer creates a sequence of instructions for better fitting the characters of a font to a grid of pixels. In this paper, we propose a new method for automatically hinting TrueType fonts by transferring hints of one font to another. Given a hinted source font and a target font without hints, our method matches the outlines of corresponding glyphs in each font, and then translates all of the individual hints for each glyph from the source to the target font. It also translates the control value table (CVT) entries, which are used to unify feature sizes across a font. The resulting hinted font already provides a great improvement over the unhinted version. More importantly, the translated hints, which preserve the sound, hand-designed hinting structure of the original font, provide a very good starting point for a professional typographer to complete and fine-tune, saving time and increasing productivity. We demonstrate our approach with examples of automatically hinted fonts at typical display sizes and screen resolutions. We also provide estimates of the time saved by a professional typographer in hinting new fonts using this semi-automatic approach."}
{"_id": "f9f3744dc76d1aea13b2190f0135b6f882e1668d", "title": "Learning Progressions: Supporting Instruction and Formative Assessment Margaret Heritage the Council of Chief State School Officers T He Fast Scass \u0083 Formative Assessment for Teachers and Learners Learning Progressions: Supporting Instruction and Formative Assessment", "text": "By its very nature, learning involves progression. To assist in its emergence, teachers need to understand the pathways along which students are expected to progress. These pathways or progressions ground both instruction and assessment. Yet, despite a plethora of standards and curricula, many teachers are unclear about how learning progresses in specific domains. This is an undesirable situation for teaching and learning, and one that particularly affects teachers\u2019 ability to engage in formative assessment."}
{"_id": "f76652b4ba6df64f6e4f1167a51c0ef8a7c719fa", "title": "Corporate Social Responsibility and Innovation in Management Accounting", "text": "\u2022 This study develops a conceptual framework to illustrate how sustainability issues are embedded in the management control system to operationalise firms' corporate social responsibility (CSR) objectives and strategies. \u2022 Categorising firms' CSR activities into Responsive and Strategic CSR agendas (Porter and Kramer 2006), this framework emphasises that different uses of MCS must be adopted to effectively operationalise different CSR agendas. \u2022 Boundary systems and diagnostic uses of budgets and performance management systems are more pertinent to operationalising Responsive CSR agendas, whereas the belief system and interactive uses of MCS are more effective in facilitating the selection and implementation of Strategic CSR programmes. \u2022 This study provides empirical evidence on the different types of Responsive and Strategic CSR programmes in two Chinese State-owned Enterprises (SOEs). Our empirical data shed light on the role of MCS in implementing CSR strategies at these firms."}
{"_id": "1d200f2f428598b79fc1df8da5576dcd35a00d8f", "title": "A High Gain Input-Parallel Output-Series DC/DC Converter With Dual Coupled Inductors", "text": "High voltage gain dc-dc converters are required in many industrial applications such as photovoltaic and fuel cell energy systems, high-intensity discharge lamp (HID), dc back-up energy systems, and electric vehicles. This paper presents a novel input-parallel output-series boost converter with dual coupled inductors and a voltage multiplier module. On the one hand, the primary windings of two coupled inductors are connected in parallel to share the input current and reduce the current ripple at the input. On the other hand, the proposed converter inherits the merits of interleaved series-connected output capacitors for high voltage gain, low output voltage ripple, and low switch voltage stress. Moreover, the secondary sides of two coupled inductors are connected in series to a regenerative capacitor by a diode for extending the voltage gain and balancing the primary-parallel currents. In addition, the active switches are turned on at zero current and the reverse recovery problem of diodes is alleviated by reasonable leakage inductances of the coupled inductors. Besides, the energy of leakage inductances can be recycled. A prototype circuit rated 500-W output power is implemented in the laboratory, and the experimental results shows satisfactory agreement with the theoretical analysis."}
{"_id": "ecad8c357147604fc065e717f54758ad4c10552b", "title": "Computer analysis of computed tomography scans of the lung: a survey", "text": "Current computed tomography (CT) technology allows for near isotropic, submillimeter resolution acquisition of the complete chest in a single breath hold. These thin-slice chest scans have become indispensable in thoracic radiology, but have also substantially increased the data load for radiologists. Automating the analysis of such data is, therefore, a necessity and this has created a rapidly developing research area in medical imaging. This paper presents a review of the literature on computer analysis of the lungs in CT scans and addresses segmentation of various pulmonary structures, registration of chest scans, and applications aimed at detection, classification and quantification of chest abnormalities. In addition, research trends and challenges are identified and directions for future research are discussed."}
{"_id": "82a621f4fa20d48e3ff8ccfce5a1c36316a21cd7", "title": "A 0.5-V bulk-input fully differential operational transconductance amplifier", "text": "We present a fully differential two-stage Miller op-amp operating from a 0.5 V power supply. The input signal is applied to the bulk nodes of the input devices. A prototype was designed in a standard 0.18 /spl mu/m CMOS process using standard 0.5 V V/sub T/ devices. It has a measured 52 dB DC gain, a 2.5 MHz gain-bandwidth and consumes 110 /spl mu/W."}
{"_id": "de3a2527a2c5ad8c18332382666388aa5eb9fee1", "title": "A 40-50-GHz SiGe 1 : 8 differential power divider using shielded broadside-coupled striplines", "text": "This paper presents a 1 : 8 differential power divider implemented in a commercial SiGe BiCMOS process using fully shielded broadside-coupled striplines integrated vertically in the silicon interconnect stackup. The 1 : 8 power divider is only 1.12 x1.5 mm2 including pads, and shows 0.4-dB rms gain imbalance and <3deg rms phase imbalance from 40 to 50 GHz over all eight channels, a measured power gain of 14.9 plusmn0.6 dB versus a passive divider at 45 GHz, and a 3-dB bandwidth from 37 to 52 GHz. A detailed characterization of the shielded broadside-coupled striplines is presented and agrees well with simulations. These compact lines can be used for a variety of applications in SiGe/CMOS millimeter-wave circuits, including differential signal distribution, miniature power dividers, matching networks, filters, couplers, and baluns."}
{"_id": "6c02053805434162e0fed26e1d5e035eb1071249", "title": "AutoRec: Autoencoders Meet Collaborative Filtering", "text": "This paper proposes AutoRec, a novel autoencoder framework for collaborative filtering (CF). Empirically, AutoRec's compact and efficiently trainable model outperforms state-of-the-art CF techniques (biased matrix factorization, RBM-CF and LLORMA) on the Movielens and Netflix datasets."}
{"_id": "5f6d9b8461a9d774da12f1b363eede4b7088cf5d", "title": "A \u2013$21.2$ -dBm Dual-Channel UHF Passive CMOS RFID Tag Design", "text": "Previous research results showed that UHF passive CMOS RFID tags had difficulty to achieve sensitivity less than -20 dBm. This paper presents a dual-channel 15-bit UHF passive CMOS RFID tag prototype that can work at sensitivity lower than -20 dBm. The proposed tag chip harvests energy and backscatters uplink data at 866.4-MHz (for ETSI) or 925-MHz (for FCC) channel and receives downlink data at 433-MHz channel. Consequently, the downlink data transmission does not interrupt our tag from harvesting RF energy. To use the harvested energy efficiently, we design a tag chip that includes neither a regulator nor a VCO such that the harvested energy is completely used in receiving, processing, and backscattering data. Without a regulator, our tag uses as few active analog circuits as possible in the receiver front-end. Instead, our tag uses a novel digital circuit to decode the received data. Without a VCO, the design of our tag can extract the required clock signal from the downlink data. Measurement result shows that the sensitivity of the proposed passive tag chip can reach down to -21.2 dBm. Such result corresponds to a 19.6-m reader-to-tag distance under 36-dBm EIRP and 0.4-dBi tag antenna gain. The chip was fabricated in TSMC 0.18- \u03bcm CMOS process. The die area is 0.958 mm \u00d70.931mm."}
{"_id": "0c981396e25e3f447410ae08ee0745337d412bbf", "title": "Quantifying and Visualizing Attribute Interactions", "text": "Interactions are the prototypical examples of unexpected and irreducible patterns in data that encompass several attributes. We attempt to unify the divergent definitions and conceptions of interactions present in the literature, and propose two separate conceptions of what interactions are: if a group of attributes interacts observationally, we cannot understand their relationship without looking at all the attributes simultaneously; if a group of attributes interacts representationally, we cannot create a model that would not include conditions involving all the attributes. Rather than test for interactions, we attempt to quantify them, and visually present the most important interactions of the data. Our quantification is based on how well we can approximate a joint probability distribution without admitting that there are interactions. Judging from these illustrations, most machine learning procedures are not treating interactions appropriately."}
{"_id": "bd1bd5056db4bb6d5b8cfcb9065ae2a18b69f58e", "title": "Development of a coax-to-microstrip transition over the range of 1\u201315GHz", "text": "In this paper, a coax-to-microstrip transition incorporating a miniature coaxial connector with glass bead is developed. The transition is firstly simulated with the aid of HFSS and then measured in the frequency range of 1-15 GHz. The simulated and measured results are found in good agreement."}
{"_id": "bd0d799b0adb1067dd67988917a6ce6eec700cfd", "title": "18 kW three phase inverter system using hermetically sealed SiC phase-leg power modules", "text": "Power electronics play an important role in electricity utilization from generation to end customers. Thus, high-efficiency power electronics help to save energy and conserve energy resources. Research on silicon carbide (SiC) power electronics has shown their better efficiency compared to Si power electronics due to the significant reduction in both conduction and switching losses. Combined with their high-temperature capability, SiC power electronics are more reliable and compact. This paper focuses on the development of such a high efficiency, high temperature inverter based on SiC JFET and diode modules. It involves the work on high temperature packaging (>200 \u00b0C), inverter design and prototype development, device characterization, and inverter testing. A SiC inverter prototype with a power rating of 18 kW is developed and demonstrated. When tested at moderate load levels compared to the inverter rating, an efficiency of 98.2% is achieved by the initial prototype without optimization, which is higher than most Si inverters."}
{"_id": "4991785cb0e6ee3d0b7823b59e144fb80ca3a83e", "title": "VQA: Visual Question Answering", "text": ""}
{"_id": "77e3c48aa10535276e7f570a3af594ba63de7d65", "title": "Training Deep Neural Networks on Noisy Labels with Bootstrapping", "text": "Current state-of-the-art deep learning systems for visual object recognition and detection use purely supervised training with regularization such as dropout to avoid overfitting. The performance depends critically on the amount of labeled examples, and in current practice the labels are assumed to be unambiguous and accurate. However, this assumption often does not hold; e.g. in recognition, class labels may be missing; in detection, objects in the image may not be localized; and in general, the labeling may be subjective. In this work we propose a generic way to handle noisy and incomplete labeling by augmenting the prediction objective with a notion of consistency. We consider a prediction consistent if the same prediction is made given similar percepts, where the notion of similarity is between deep network features computed from the input data. In experiments we demonstrate that our approach yields substantial robustness to label noise on several datasets. On MNIST handwritten digits, we show that our model is robust to label corruption. On the Toronto Face Database, we show that our model handles well the case of subjective labels in emotion recognition, achieving state-of-theart results, and can also benefit from unlabeled face images with no modification to our method. On the ILSVRC2014 detection challenge data, we show that our approach extends to very deep networks, high resolution images and structured outputs, and results in improved scalable detection."}
{"_id": "1667758be2fd585b037fe1f0044620e6c47b06f1", "title": "The Type of Information Overload Affects Electronic Knowledge Repository continuance", "text": "In the present competitive organizational environment more organizations are implementing knowledge management initiatives to gain strategic advantage. One such initiative is that of implementing electronic knowledge repositories (EKR) which often leads to a rapid increase in the quantity of information employees must process daily, raising concerns of employees being overloaded. This is especially true for current EKRs using distributive technology, enabling customizable individual workspaces which can result in loose knowledge structures. This paper identifies a new type of information overload (IO), extending the concept as occurring in both knowledge seekers and contributors and uses cognitive dissonance theory to provide evidence that IO can change employees\u2019 perception of EKR usage. This research paper provides the first empirical evidence that overload has no sufficient affect on EKR continuance intention directly, but has a significant negative affect on the two main success measures: Perceived usefulness and satisfaction of the system."}
{"_id": "2f3a6728b87283ccf0f8822f7a60bca8280f0957", "title": "Learning to aggregate vertical results into web search results", "text": "Aggregated search is the task of integrating results from potentially multiple specialized search services, or verticals, into the Web search results. The task requires predicting not only which verticals to present (the focus of most prior research), but also predicting where in the Web results to present them (i.e., above or below the Web results, or somewhere in between). Learning models to aggregate results from multiple verticals is associated with two major challenges. First, because verticals retrieve different types of results and address different search tasks, results from different verticals are associated with different types of predictive evidence (or features). Second, even when a feature is common across verticals, its predictiveness may be vertical-specific. Therefore, approaches to aggregating vertical results require handling an inconsistent feature representation across verticals, and, potentially, a vertical-specific relationship between features and relevance. We present 3 general approaches that address these challenges in different ways and compare their results across a set of 13 verticals and 1070 queries. We show that the best approaches are those that allow the learning algorithm to learn a vertical-specific relationship between features and relevance."}
{"_id": "621c031237d47c2bd15e69ed13aaaa1256defd23", "title": "Clustering Validity Based on the Improved Hubert \\Gamma Statistic and the Separation of Clusters", "text": "The validity of clustering is one important research field in clustering analysis, and many clustering validity functions have been proposed, especially those based on the geometrical structure of data set, such as Dunn's index and Xie-Beni index. In this way, the compactness and the separation of clusters are usually taken into account. Xie-Beni index decreases with the number of partitions increasing. It is difficult to choose the optimal number of clusters when there are lots of clusters in data. In this paper, a novel clustering validity function is proposed, which is based on the improved Huber Gamma statistic combined with the separation of clusters. Unlike other clustering validity, the function has the only maximum with the clustering number increasing. The experiments indicate that the function can be used as the optimal index for the choice of the clustering numbers"}
{"_id": "b8945cfb7ed72c0fd70263379c328b8570bd763f", "title": "Advanced algorithms for neural networks: a C++ sourcebook", "text": ""}
{"_id": "a2770a51760a134dbb77889d5517550943ea7b81", "title": "A Dual-Polarized Dual-Band Antenna With High Gain for 2G/3G/LTE Indoor Communications", "text": "A compact dual-polarized dual-band omnidirectional antenna with high gain is presented for 2G/3G/LTE communications, which comprise two horizontal polarization (HP) and a vertical polarization (VP) elements. The upper HP element consists of four pairs of modified printed magneto-electric (ME) dipoles that are fed by a four-way power divider feeding network, and eight pieces of arc-shaped parasitic patches that are printed on both sides of the circular printed circuit board alternately. The four-way power divider feeding network together with the four pairs of ME dipoles mainly provide a stable 360\u00b0 radiation pattern and high gain, while the eight pieces of patches are used to enhance the bandwidth. The lower HP element is similar to the upper one except that it do not have the parasitic patches. The VP element consists of four pairs of cone-shaped patches. Different from the HP element, the upper VP element provides the lower frequency band while the lower VP one yields the upper frequency band. The VP element and the HP element are perpendicularly arranged to obtain the compact and dual-polarized features. Measured results show that a bandwidth of 39.6% (0.77\u20131.15 GHz) with a gain of about 2.6 dBi and another bandwidth of 55.3% (1.66\u20132.93 GHz) with a gain of about 4.5 dBi can be achieved for the HP direction, while a bandwidth of 128% (0.7\u20133.2 GHz) with a gain of around 4.4 dBi can be acquired for the VP direction. Port isolation larger than 20 dB and low-gain variation levels within 2 dBi are also obtained. Hence, the proposed antenna is suitable for 2G/3G/LTE indoor communications."}
{"_id": "f3606221360c5c648de9ba527e397c502a06eddb", "title": "Mature HIV-1 capsid structure by cryo-electron microscopy and all-atom molecular dynamics", "text": "Retroviral capsid proteins are conserved structurally but assemble into different morphologies. The mature human immunodeficiency virus-1 (HIV-1) capsid is best described by a \u2018fullerene cone\u2019 model, in which hexamers of the capsid protein are linked to form a hexagonal surface lattice that is closed by incorporating 12 capsid-protein pentamers. HIV-1 capsid protein contains an amino-terminal domain (NTD) comprising seven \u03b1-helices and a \u03b2-hairpin, a carboxy-terminal domain (CTD) comprising four \u03b1-helices, and a flexible linker with a 310-helix connecting the two structural domains. Structures of the capsid-protein assembly units have been determined by X-ray crystallography; however, structural information regarding the assembled capsid and the contacts between the assembly units is incomplete. Here we report the cryo-electron microscopy structure of a tubular HIV-1 capsid-protein assembly at 8\u2009\u00c5 resolution and the three-dimensional structure of a native HIV-1 core by cryo-electron tomography. The structure of the tubular assembly shows, at the three-fold interface, a three-helix bundle with critical hydrophobic interactions. Mutagenesis studies confirm that hydrophobic residues in the centre of the three-helix bundle are crucial for capsid assembly and stability, and for viral infectivity. The cryo-electron-microscopy structures enable modelling by large-scale molecular dynamics simulation, resulting in all-atom models for the hexamer-of-hexamer and pentamer-of-hexamer elements as well as for the entire capsid. Incorporation of pentamers results in closer trimer contacts and induces acute surface curvature. The complete atomic HIV-1 capsid model provides a platform for further studies of capsid function and for targeted pharmacological intervention."}
{"_id": "f398126174b51e958d4e866c754c65195f26730d", "title": "Quantitative functional evaluation of a 3D-printed silicone-embedded prosthesis for partial hand amputation: A case report.", "text": "STUDY DESIGN\nA male patient with partial hand amputation of his nondominant hand, with only stumps of the proximal phalanx of the first and fifth finger, was evaluated. The performance of using two alternative 3D printed silicone-embedded personalized prostheses was evaluated using the quantitative Jebsen Hand Function Test.\n\n\nINTRODUCTION\nCustom design and fabrication of 3D printed prostheses appears to be a good technique for improving the clinical treatment of patients with partial hand amputations. Despite its importance the literature shows an absence of studies reporting on quantitative functional evaluations of 3D printed hand prostheses.\n\n\nPURPOSE OF THE STUDY\nWe aim at producing the first quantitative assessment of the impact of using 3D printed silicone-embedded prostheses that can be fabricated and customized within the clinical environment.\n\n\nMETHODS\nAlginate molds and computed tomographic scans were taken from the patient's hand. Each candidate prosthesis was modeled in Computer Aided Design software and then fabricated using a combination of 3D printed parts and silicone-embedded components.\n\n\nDISCUSSION\nIncorporating the patient's feedback during the design loop was very important for obtaining a good aid on his work activities. Although the explored patient-centered design process still requires a multidisciplinary team, functional benefits are large.\n\n\nCONCLUSION(S)\nQuantitative data demonstrates better hand performance when using 3D printed silicone-embedded prosthesis vs not using any aid. The patient accomplished complex tasks such as driving a nail and opening plastic bags. This was impossible without the aid of produced prosthesis."}
{"_id": "33a1ee51cc5d51609943896a95c1371538f2d017", "title": "A new hybrid imperialist swarm-based optimization algorithm for university timetabling problems", "text": ""}
{"_id": "09260da50abc7a39b3f00281c5fce6fa604de88b", "title": "Graphical Models for Probabilistic and Causal Reasoning", "text": "This chapter surveys the development of graphical models known as Bayesian networks, summarizes their semantical basis and assesses their properties and applications to reasoning and planning. Bayesian networks are directed acyclic graphs (DAGs) in which the nodes represent variables of interest (e.g., the temperature of a device, the gender of a patient, a feature of an object, the occurrence of an event) and the links represent causal influences among the variables. The strength of an influence is represented by conditional probabilities that are attached to each cluster of parents-child nodes in the network. Figure 1 illustrates a simple yet typical Bayesian network. It describes the causal relationships among the season of the year (X1), whether rain falls (X2) during the season, whether the sprinkler is on (X3) during that season, whether the pavement would get wet (X4), and whether the pavement would be slippery (X5). All variables in this figure are binary, taking a value of either true or false, except the root variable X1 which can take one of four values: Spring, Summer, Fall, or Winter. Here, the absence of a direct link between X1 and X5, for example, captures our understanding that the influence of seasonal variations on the slipperiness of the pavement is mediated by other conditions (e.g., the wetness of the pavement). As this example illustrates, a Bayesian network constitutes a model of the environment rather than, as in many other knowledge representation schemes (e.g., logic, rule-based systems and neural networks), a model of the reasoning process. It simulates, in fact, the causal mechanisms that operate in the environment, and thus allows the investigator to answer a variety of queries, including: associational queries, such as \u201cHaving observed A, what can we expect of B?\u201d; abductive queries, such as \u201cWhat is the most plausible explanation for a given set of observations?\u201d; and control queries; such as \u201cWhat will happen if we intervene and act on the environment?\u201d Answers to the first type of query depend only on probabilistic"}
{"_id": "1ce0ed147b783778dd596e7e663b112fd3340164", "title": "Semantic portrait color transfer with internet images", "text": "We present a novel color transfer method for portraits by exploring their high-level semantic information. First, a database is set up which consists of a collection of portrait images download from the Internet, and each of them is manually segmented using image matting as a preprocessing step. Second, we search the database using Face++ to find the images with similar poses to a given source portrait image, and choose one satisfactory image from the results as the target. Third, we extract portrait foregrounds from both source and target images. Then, the system extracts the semantic information, such as faces, eyes, eyebrows, lips, teeth, etc., from the extracted foreground of the source using image matting algorithms. After that, we perform color transfer between corresponding parts with the same semantic information. We get the final transferred result by seamlessly compositing different parts together using alpha blending. Experimental results show that our semantics-driven approach can generate better color transfer results for portraits than previous methods and provide users a new means to retouch their portraits."}
{"_id": "83051896c30d55062be0e5aa4ec1bf97cfe83748", "title": "Real-Time Detection of Camera Tampering", "text": "This paper presents a novel technique for camera tampering detection. It is implemented in real-time and was developed for use in surveillance and security applications. This method identifies camera tampering by detecting large differences between older frames of video and more recent frames. A buffer of incoming video frames is kept and three different measures of image dissimilarity are used to compare the frames. After normalization, a set of conditions is tested to decide if camera tampering has occurred. The effects of adjusting the internal parameters of the algorithm are examined. The performance of this method is shown to be extremely favorable in real-world settings."}
{"_id": "3ce807eb425cb775b5b6f987cdb24eb3556308d4", "title": "Effect of Methanol Addition on the Resistivity and Morphology of PEDOT:PSS Layers on Top of Carbon Nanotubes for Use as Flexible Electrodes.", "text": "UNLABELLED\nOvercoating carbon nanotube (CNT) films on flexible poly(ethylene terephthalate) (PET) foils with poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (\n\n\nPEDOT\nPSS) layers reduces the surface roughness, which is interesting for use in organic electronics. Adding methanol to the\n\n\nPEDOT\nPSS aqueous solution used for spin coating of the\n\n\nPEDOT\nPSS layer improves the wetting behavior of the CNT/PET surface. Samples with different volume fractions of methanol (0, 33, 50, 67, and 75 vol %) are compared with respect to the transmission, horizontal, and vertical resistivity. With grazing-incidence small-angle X-ray scattering, the film morphologies are probed, which is challenging because of the substrate flexibility. At 50 vol %, methanol optimum conditions are achieved with the resistivity close to that of the bare CNT/PET substrates because of the best contact between the\n\n\nPEDOT\nPSS film and CNT surface. At lower methanol ratios, the\n\n\nPEDOT\nPSS films cannot adapt the CNT morphology, and at higher methanol ratios, they rupture into domains and no continuous\n\n\nPEDOT\nPSS layers are formed."}
{"_id": "359c3ea51638c0f2d923012f2c8ecad1640fe4a2", "title": "Matching resumes and jobs based on relevance models", "text": "We investigate the difficult problem of matching semi-structured resumes and jobs in a large scale real-world collection. We compare standard approaches to Structured Relevance Models (SRM), an extensionof relevance-based language model for modeling and retrieving semi-structured documents. Preliminary experiments show that the SRM approach achieved promising performance and performed better than typical unstructured relevance models."}
{"_id": "1eb0bf4b9bf04e870962b742c4fc6cb330d1235a", "title": "Business processes--attempts to find a definition", "text": "Definitions of business process given in much of the literature on Business Process Management are limited in depth and their related models of business processes are correspondingly constrained. After giving a brief history of the progress of business process modeling techniques from production systems to the office environment, this paper proposes that most definitions are based on machine metaphor type explorations of a process. While these techniques are often rich and illuminating it is suggested that they are too limited to express the true nature of business processes that need to develop and adapt to today\u2019s challenging environment."}
{"_id": "8c21c42dc882f907d18974e97226d8efb6382378", "title": "A multi-view deep learning architecture for classification of breast microcalcifications", "text": "In this paper we address the problem of differentiating between malignant and benign tumors based on their appearance in the CC and MLO mammography views. Classification of clustered breast microcalcifications into benign and malignant categories is an extremely challenging task for computerized algorithms and expert radiologists alike. We describe a deep-learning classification method that is based on two view-level decisions, implemented by two neural networks, followed by a single-neuron layer that combines the viewlevel decisions into a global decision that mimics the biopsy results. Our method is evaluated on a large multi-view dataset extracted from the standardized digital database for screening mammography (DDSM). Experimental results show that our network structure significantly improves on previously suggested methods."}
{"_id": "1b7e522b12713543dae7e628b848088c9fb098f6", "title": "Four-Dimensional Discrete-time Lotka-Volterra Models with an Application to Ecology", "text": "This paper presents a study of the two-predators-two-preys discrete-time Lotka-Volterra model with selfinhibition terms for preys with direct applications to ecological problems. Parameters in the model are modified so that each of them has its own biological meaning, enabling more intuitive interpretation of biological conditions to the model . Moreover, the modified version is applicable to simulate a part of a large ecosystem, not only a closed predator-prey system with four species. An easy graphical method of analysis of the conditions on parameters ensuring long persistence under coexistence is presented. As an application, it is explained that a predator specie who feed on relatively small number of preys compared to the other predator species must be selective on the available preys in order for long persistence of the ecosystem. This may be regarded as a theoretical explanation of the existence of flush-pursuer birds, those who uses highly specialized hunting strategy and cross-adapts to the ecosystem relative to the ordinary bird species."}
{"_id": "f72624e9bbe3a91e57ac28ab0f4650a76066bfd5", "title": "Time Series Forecasting Based on Augmented Long Short-Term Memory", "text": "In this paper, we use recurrent autoencoder model to predict the time series in single and multiple steps ahead. Previous prediction methods, such as recurrent neural network (RNN) and deep belief network (DBN) models, cannot learn long term dependencies. And conventional long short-term memory (LSTM) model doesn\u2019t remember recent inputs. Combining LSTM and autoencoder (AE), the proposed model can capture long-term dependencies across data points and uses features extracted from recent observations for augmenting LSTM at the same time. Based on comprehensive experiments, we show that the proposed methods significantly improves the state-of-art performance on chaotic time series benchmark and also has better performance on real-world data. Both single-output and multiple-output predictions are investigated."}
{"_id": "8fe6b34838797e1de5e1d82b03190c52c15118fa", "title": "Nasal base reduction by alar release: a laboratory evaluation.", "text": "BACKGROUND\nWhen reducing the broad nasal base, there is a limit to the amount of soft tissue that can be resected, beyond which the anatomy distorts and the nostrils become stenotic (if resection enters the nostril). Alar mobilization by freeing soft-tissue attachments helps. This study purported to examine the nature of those attachments and the extent of medialization.\n\n\nMETHODS\nThe supporting tissues of the ala were sequentially divided in 16 fresh hemifacial cadavers. Key structures included the following: (1) the soft tissues and pyriform ligament of the anterior maxilla, (2) the periosteum posterior to the pyriform rim (in the bony nasal vault), and (3) the soft tissues along the horizontal pyriform rim. After release of each tethering region, the ala-pyriform distance was measured.\n\n\nRESULTS\nAfter releasing the anterior maxillary periosteum and pyriform ligament along the vertical pyriform rim, the ala-pyriform distance was reduced by a mean of 1.9 mm. After releasing the periosteum posterior to the pyriform rim (in the nasal vault), it was reduced by a mean of 1.7 mm. Releasing the soft tissues (which were thick medially) of the horizontal pyriform rim reduced the mean distance 1.0 mm for a total of 4.6 mm. Medialization resulting from anterior and posterior releases was significantly greater than that from the horizontal pyriform rim (p < 0.0006 and p < 0.015, respectively), but they were not significantly different from one another.\n\n\nCONCLUSIONS\nThis cadaver study confirmed the role of the stabilizing effect of the pyriform ligament and the periosteum lateral and posterior to the pyriform rim. The total release was substantial, suggesting a clinical means of achieving tension-free alar medialization."}
{"_id": "2a49aa3580a1218ac04278f3ae212b21ce430e2c", "title": "Deep Network Guided Proof Search", "text": "Deep learning techniques lie at the heart of several significant AI advances in recent years including object recognition and detection, image captioning, machine translation, speech recognition and synthesis, and playing the game of Go. Automated first-order theorem provers can aid in the formalization and verification of mathematical theorems and play a crucial role in program analysis, theory reasoning, security, interpolation, and system verification. Here we suggest deep learning based guidance in the proof search of the theorem prover E. We train and compare several deep neural network models on the traces of existing ATP proofs of Mizar statements and use them to select processed clauses during proof search. We give experimental evidence that with a hybrid, two-phase approach, deep learning based guidance can significantly reduce the average number of proof search steps while increasing the number of theorems proved. Using a few proof guidance strategies that leverage deep neural networks, we have found first-order proofs of 7.36% of the first-order logic translations of the Mizar Mathematical Library theorems that did not previously have ATP generated proofs. This increases the ratio of statements in the corpus with ATP generated proofs from 56% to 59%."}
{"_id": "3a8c0f15f8778d282de191e43ccd70b9b17d6623", "title": "Ontological Constructs to Create Money Laundering Schemes", "text": "There is an increasing tendency in the money laundering sector to utilize electronic commerce and web services. Misuse of web services and electronic money transfers occurs at many points in complex trading schemes. We provide ontological components that can be combined to construct some of these money laundering schemes. These constructs can be helpful for investigators, in order to decompose suspected financial schemes and recognize"}
{"_id": "15ad1dfeac0f7d768187e8fc61c82ff6ce9576f5", "title": "Procedural facade variations from a single layout", "text": "We introduce a framework to generate many variations of a facade design that look similar to a given facade layout. Starting from an input image, the facade is hierarchically segmented and labeled with a collection of manual and automatic tools. The user can then model constraints that should be maintained in any variation of the input facade design. Subsequently, facade variations are generated for different facade sizes, where multiple variations can be produced for a certain size. Computing such new facade variations has many unique challenges, and we propose a new algorithm based on interleaving heuristic search and quadratic programming. In contrast to most previous work, we focus on the generation of new design variations and not on the automatic analysis of the input's structure. Adding a modeling step with the user in the loop ensures that our results routinely are of high quality."}
{"_id": "3aa5a743cefb8e99acce051198a17a8b12830f04", "title": "Survey of Insurance Fraud Detection Using Data Mining Techniques", "text": "62 Abstract\u2014 With an increase in financial accounting fraud in the current economic scenario experienced, financial accounting fraud detection has become an emerging topics of great importance for academics, research and industries. Financial fraud is a deliberate act that is contrary to law, rule or policy with intent to obtain unauthorized financial benefit and intentional misstatements or omission of amounts by deceiving users of financial statements, especially investors and creditors. Data mining techniques are providing great aid in financial accounting fraud detection, since dealing with the large data volumes and complexities of financial data are big challenges for forensic accounting. Financial fraud can be classified into four: bank fraud, insurance fraud, securities and commodities fraud. Fraud is nothing but wrongful or criminal trick planned to result in financial or personal gains. This paper describes the more details on insurance sector related frauds and related solutions. In finance, insurance sector is doing important role and also it is unavoidable sector of every human being."}
{"_id": "7b3b6b874b89f22c46cc481448171a28ec919a55", "title": "A Region Ensemble for 3-D Face Recognition", "text": "In this paper, we introduce a new system for 3D face recognition based on the fusion of results from a committee of regions that have been independently matched. Experimental results demonstrate that using 28 small regions on the face allow for the highest level of 3D face recognition. Score-based fusion is performed on the individual region match scores and experimental results show that the Borda count and consensus voting methods yield higher performance than the standard sum, product, and min fusion rules. In addition, results are reported that demonstrate the robustness of our algorithm by simulating large holes and artifacts in images. To our knowledge, no other work has been published that uses a large number of 3D face regions for high-performance face matching. Rank one recognition rates of 97.2% and verification rates of 93.2% at a 0.1% false accept rate are reported and compared to other methods published on the face recognition grand challenge v2 data set."}
{"_id": "3e91300a76e5c88fa9850067546a6e2fa1c8914d", "title": "Deformation Modeling for Robust 3D Face Matching", "text": "Face recognition based on 3D surface matching is promising for overcoming some of the limitations of current 2D image-based face recognition systems. The 3D shape is generally invariant to the pose and lighting changes, but not invariant to the nonrigid facial movement such as expressions. Collecting and storing multiple templates to account for various expressions for each subject in a large database is not practical. We propose a facial surface modeling and matching scheme to match 2.5D facial scans in the presence of both nonrigid deformations and pose changes (multiview) to a stored 3D face model with neutral expression. A hierarchical geodesic-based resampling approach is applied to extract landmarks for modeling facial surface deformations. We are able to synthesize the deformation learned from a small group of subjects (control group) onto a 3D neutral model (not in the control group), resulting in a deformed template. A user-specific (3D) deformable model is built for each subject in the gallery with respect to the control group by combining the templates with synthesized deformations. By fitting this generative deformable model to a test scan, the proposed approach is able to handle expressions and pose changes simultaneously. A fully automatic and prototypic deformable model based 3D face matching system has been developed. Experimental results demonstrate that the proposed deformation modeling scheme increases the 3D face matching accuracy in comparison to matching with 3D neutral models by 7 and 10 percentage points, respectively, on a subset of the FRGC v2.0 3D benchmark and the MSU multiview 3D face database with expression variations."}
{"_id": "7d123526e2aa766154163ccfcecc09df3bc4e5ce", "title": "Monitoring critical temperatures in permanent magnet synchronous motors using low-order thermal models", "text": "Monitoring critical temperatures in electric motors is crucial for preventing shortened motor life spans due to excessive thermal stress. With regard to interior permanent magnet synchronous motors (IPMSM), critical temperatures typically occur in the magnets and in the stator end winding. As directly measuring temperatures, especially on the rotating part, is costly, sensitive and thus not applicable with respect to automotive applications, model-based approaches are preferred. In this paper, two low-order thermal models for accurate temperature estimations in the permanent magnets, the winding and the end winding are introduced and compared. The model parameters are estimated solely based on experimental data via a multistep identification approach for linear parameter-varying systems. The model performances are validated by extensive experimental results based on a high-speed PMSM typically used as traction motor in subcompact electric cars."}
{"_id": "673fc41e7d0b8c862cb4f4da76b5733cc987b491", "title": "RC-RCD clamp circuit for ringing losses reduction in a flyback converter", "text": "An RCD clamp circuit is usually used in flyback converters, in order to limit the voltage spikes caused by leakage transformer inductance. Oscillation ringing appears due to the clamp diode, which deteriorates the converter's power rate. This brief describes this ringing phenomenon and the use of an RC-RCD clamp circuit for damping the clamp diode's oscillation. This clamp circuit is capable for improving a flyback converter's power ratio."}
{"_id": "666f9fd755810dba8ca149860ce641a43abd6666", "title": "A secure and unclonable embedded system using instruction-level PUF authentication", "text": "In this paper we present a secure and unclonable embedded system design that can target either an FPGA or an ASIC technology. The premise of the security is that the executed machine code and the executing environment (the embedded processor) will authenticate each other at a per-instruction basis using Physical Unclonable Functions (PUFs) that are built into the processor. The PUFs ensure that the execution of the binary code may only proceed if the binary is compiled with the correct intrinsic knowledge of the PUFs, and that such intrinsic knowledge is virtually unique to each processor and therefore unclonable. We will explain how to implement and integrate the PUFs into the processor's execution environment such that each instruction is authenticated and de-obfuscated on-demand and how to transform an ordinary binary executable into PUF-aware, obfuscated binaries. We will also present a prototype system on a Xilinx Spartan6-based FPGA board."}
{"_id": "7515208360258a1ca939fd519103b324a8b7aea3", "title": "Augmenting feature model through customer preference mining by hybrid sentiment analysis", "text": "A feature model is an essential tool to identify variability and commonality within a product line of an enterprise, assisting stakeholders to configure product lines and to discover opportunities for reuse. However, the number of product variants needed to satisfy individual customer needs is still an open question, as feature models do not incorporate any direct customer preference information. In this paper, we propose to incorporate customer preference information into feature models using sentiment analysis of user-generated online product reviews. The proposed sentiment analysis method is a hybrid combination of affective lexicons and a rough-set technique. It is able to predict sentence sentiments for individual product features with acceptable accuracy, and thus augment a feature model by integrating positive and negative opinions of the customers. Such opinionated customer preference information is regarded as one attribute of the features, which helps to decide the number of variants needed within a product line. Finally, we demonstrate the feasibility and potential of the proposed method via an application case of Kindle Fire HD tablets."}
{"_id": "53c1339f0bfbd9d1d2535e296b995acad25d8b4c", "title": "Inferring tumor progression from genomic heterogeneity.", "text": "Cancer progression in humans is difficult to infer because we do not routinely sample patients at multiple stages of their disease. However, heterogeneous breast tumors provide a unique opportunity to study human tumor progression because they still contain evidence of early and intermediate subpopulations in the form of the phylogenetic relationships. We have developed a method we call Sector-Ploidy-Profiling (SPP) to study the clonal composition of breast tumors. SPP involves macro-dissecting tumors, flow-sorting genomic subpopulations by DNA content, and profiling genomes using comparative genomic hybridization (CGH). Breast carcinomas display two classes of genomic structural variation: (1) monogenomic and (2) polygenomic. Monogenomic tumors appear to contain a single major clonal subpopulation with a highly stable chromosome structure. Polygenomic tumors contain multiple clonal tumor subpopulations, which may occupy the same sectors, or separate anatomic locations. In polygenomic tumors, we show that heterogeneity can be ascribed to a few clonal subpopulations, rather than a series of gradual intermediates. By comparing multiple subpopulations from different anatomic locations, we have inferred pathways of cancer progression and the organization of tumor growth."}
{"_id": "34e87a802ec9f5daa9de523c4f41d59a9ea70661", "title": "Will NoSQL Databases Live Up to Their Promise?", "text": "Many organizations collect vast amounts of customer, scientific, sales, and other data for future analysis. Traditionally, most of these organizations have stored structured data in relational databases for subsequent access and analysis. However, a growing number of developers and users have begun turning to various types of nonrelational, now frequently called NoSQL-databases. Nonrelational databases, including hierarchical, graph, and object-oriented databases-have been around since the late 1960s. However, new types of NoSQL databases are being developed. And only now are they beginning to gain market traction. Different NoSQL databases take different approaches. What they have in common is that they're not relational. Their primary advantage is that, unlike relational databases, they handle unstructured data such as word-processing files, e-mail, multimedia, and social media efficiently. This paper discuss issues such as limitations, advantages, concerns and doubts regarding NoSQL databases."}
{"_id": "bc018fc951c124aa4519697f1884fd5afaf43439", "title": "Wide-band microstrip antenna with an H-shaped coupling aperture", "text": "Theoretical and experimental results of a wide-band planar antenna are presented. This antenna can achieve a wide bandwidth, low cross-polarization levels, and low backward radiation levels. For wide bandwidth and easy integration with active circuits, it uses aperture-coupled stacked square patches. The coupling aperture is an H-shaped aperture. Based on the finite-difference time-domain method, a parametric study of the input impedance of the antenna is presented, and effects of each parameter on the antenna impedance are illustrated. One antenna is also designed, fabricated, and measured. The measured return loss exhibits an impedance bandwidth of 21.7%. The cross-polarization levels in both and planes are better than 23 dB. The front-to-back ratio of the antenna radiation pattern is better than 22 dB. Both theoretical and experimental results of parameters and radiation patterns are presented and discussed."}
{"_id": "56039c570e834c4da4103087d549929283826c30", "title": "Putting Egocentric and Allocentric into Perspective", "text": "In the last decade many studies examined egocentric and allocentric spatial relations. For various tasks, navigators profit from both kinds of relations. However, their interrelation seems to be underspecified. We present four elementary representations of allocentric and egocentric relations (sensorimotor contingencies, egocentric coordinate systems, allocentric coordinate systems, and perspective-free representations) and discuss them with respect to their encoding and retrieval. Elementary representations are problematic for capturing large spaces and situations which encompass both allocentric and egocentric relations at the same time. Complex spatial representations provide a solution to this problem. They combine elementary coordinate representations either by pair-wise connections or by hierarchical embedding. We discuss complex spatial representations with respect to computational requirements and their plausibility regarding behavioral and neural findings. This work is meant to clarify concepts of egocentric and allocentric, to show their limitations, benefits and empirical plausibility and to point out new directions for future research."}
{"_id": "21dd874d7103d5c9c95b8ad52f0cbf1928b926e8", "title": "Novel Video Prediction for Large-scale Scene using Optical Flow", "text": "Making predictions of future frames is a critical challenge in autonomous driving research. Most of the existing methods for video prediction attempt to generate future frames in simple and fixed scenes. In this paper, we propose a novel and effective optical flow conditioned method for the task of video prediction with an application to complex urban scenes. In contrast with previous work, the prediction model only requires video sequences and optical flow sequences for training and testing. Our method uses the rich spatial-temporal features in video sequences. The method takes advantage of the motion information extracting from optical flow maps between neighbor images as well as previous images. Empirical evaluations on the KITTI dataset and the Cityscapes dataset demonstrate the effectiveness of our method."}
{"_id": "205cc882bac94592e5771e6eafc406da3a045d16", "title": "Impact of timing and frequency offsets on multicarrier waveform candidates for 5G", "text": "This paper presents a study of the candidate waveforms for 5G when they are subject to timing and carrier frequency offset. These waveforms are: orthogonal frequency division multiplexing (OFDM), generalized frequency division multiplexing (GFDM), universal filtered multicarrier (UFMC), circular filter bank multicarrier (C-FBMC), and linear filter bank multicarrier (FBMC). We are particularly interested in multiple access interference (MAI) when a number of users transmit their signals to a base station in an asynchronous or a quasi-synchronous manner. We identify the source of MAI in these waveforms and present some numerical analysis that confirm our findings. The goal of this study is to answer the following question, \u201cWhich one of the 5G candidate waveforms has more relaxed synchronization requirements?\u201d."}
{"_id": "63767524ffb687de673da256adfefed220f26efb", "title": "Contextmapping : experiences from practice", "text": "In recent years, various methods and techniques have emerged for mapping the contexts of people's interaction with products. Designers and researchers use these techniques to gain deeper insight into the needs and dreams of prospective users of new products. As most of these techniques are still under development, there is a lack of practical knowledge about how such studies can be conducted. In this paper we share our insights, based on several projects from research and many years of industrial practice, of conducting user studies with generative techniques. The appendix contains a single case illustrating the application of these techniques in detail."}
{"_id": "617c4b1aacce2ec9e9ec85c23f555e8ddd4a7a0c", "title": "Enhancing Test Compression With Dependency Analysis for Multiple Expansion Ratios", "text": "Scan test data compression is widely used in industry to reduce test data volume (TDV) and test application time (TAT). This paper shows how multiple scan chain expansion ratios can help to obtain high test data compression in system-on-chips. Scan chains are partitioned with a higher expansion ratio than normal in scan compression mode and then are gradually concatenated based on a cost function to detect any faults that could not be detected at the higher expansion ratios. It improves the overall test compression ratio since it potentially allows faults to be detected at the highest expansion ratio. This paper introduces a new cost function to choose scan chain concatenation candidates for concatenation for multiple expansion ratios. To avoid TDV and TAT increase by scan concatenation, the proposed method takes a logic structure and scan chain length into consideration. Experiment results show the proposed method reduces TAT and TDV by 53%\u201364% compared with a traditional scan compression method."}
{"_id": "5567fb38adbcd7d0a373e7abb51948b6d1cbad8d", "title": "Implementation Challenges for Information Security Awareness Initiatives in E-Government", "text": "With the widespread adoption of electronic government services, there has been a need to ensure a seamless flow of information across public sector organizations, while at the same time, maintaining confidentiality, integrity and availability. Governments have put in place various initiatives and programs including information security awareness to provide the needed understanding on how public sector employees can maintain security and privacy. Nonetheless, the implementation of such initiatives often faces a number of challenges that impede further take-up of e-government services. This paper aims to provide a better understanding of the challenges contributing towards the success of information security awareness initiatives implementation in the context of e-government. Political, organizational, social as well as technological challenges have been utilized in a conceptual framework to signify such challenges in e-government projects. An empirical case study conducted in a public sector organization in Greece was exploited in this research to reflect on these challenges. While, the results from this empirical study confirm the role of the identified challenges for the implementation of security awareness programs in e-government, it has been noticed that awareness programmers often pursue different targets of preserving security and privacy, which sometimes results in adding more complexity to the organization."}
{"_id": "2e055933a7e22e369441fe1bd922a3881e13f97f", "title": "Positive emotions in early life and longevity: findings from the nun study.", "text": "Handwritten autobiographies from 180 Catholic nuns, composed when participants were a mean age of 22 years, were scored for emotional content and related to survival during ages 75 to 95. A strong inverse association was found between positive emotional content in these writings and risk of mortality in late life (p < .001). As the quartile ranking of positive emotion in early life increased, there was a stepwise decrease in risk of mortality resulting in a 2.5-fold difference between the lowest and highest quartiles. Positive emotional content in early-life autobiographies was strongly associated with longevity 6 decades later. Underlying mechanisms of balanced emotional states are discussed."}
{"_id": "26da3190bbe181dac7a0ced5cef7745358a5346c", "title": "Targetless Calibration of a Lidar - Perspective Camera Pair", "text": "A novel method is proposed for the calibration of a camera - 3D lidar pair without the use of any special calibration pattern or point correspondences. The proposed method has no specific assumption about the data source: plain depth information is expected from the lidar scan and a simple perspective camera is used for the 2D images. The calibration is solved as a 2D-3D registration problem using a minimum of one (for extrinsic) or two (for intrinsic-extrinsic) planar regions visible in both cameras. The registration is then traced back to the solution of a non-linear system of equations which directly provides the calibration parameters between the bases of the two sensors. The method has been tested on a large set of synthetic lidar-camera image pairs as well as on real data acquired in outdoor environment."}
{"_id": "aed2fd4da163bc4891d95f358fbc03d3f8765b1a", "title": "ISCEV standard for clinical visual evoked potentials: (2016 update)", "text": "Visual evoked potentials (VEPs) can provide important diagnostic information regarding the functional integrity of the visual system. This document updates the ISCEV standard for clinical VEP testing and supersedes the 2009 standard. The main changes in this revision are the acknowledgment that pattern stimuli can be produced using a variety of technologies with an emphasis on the need for manufacturers to ensure that there is no luminance change during pattern reversal or pattern onset/offset. The document is also edited to bring the VEP standard into closer harmony with other ISCEV standards. The ISCEV standard VEP is based on a subset of stimulus and recording conditions that provide core clinical information and can be performed by most clinical electrophysiology laboratories throughout the world. These are: (1) Pattern-reversal VEPs elicited by checkerboard stimuli with large 1 degree (\u00b0) and small 0.25\u00b0 checks. (2) Pattern onset/offset VEPs elicited by checkerboard stimuli with large 1\u00b0 and small 0.25\u00b0 checks. (3) Flash VEPs elicited by a flash (brief luminance increment) which subtends a visual field of at least 20\u00b0. The ISCEV standard VEP protocols are defined for a single recording channel with a midline occipital active electrode. These protocols are intended for assessment of the eye and/or optic nerves anterior to the optic chiasm. Extended, multi-channel protocols are required to evaluate postchiasmal lesions."}
{"_id": "38b2e523828a1f23ad5ad4306a0f9fedca167c90", "title": "Satellite Imagery Multiscale Rapid Detection with Windowed Networks", "text": "Detecting small objects over large areas remains a significant challenge in satellite imagery analytics. Among the challenges is the sheer number of pixels and geographical extent per image: a single DigitalGlobe satellite image encompasses over 64 km and over 250 million pixels. Another challenge is that objects of interest are often minuscule (\u223c 10 pixels in extent even for the highest resolution imagery), which complicates traditional computer vision techniques. To address these issues, we propose a pipeline (SIMRDWN) that evaluates satellite images of arbitrarily large size at native resolution at a rate of \u2265 0.2 km/s. Building upon the tensorflow object detection API paper [9], this pipeline offers a unified approach to multiple object detection frameworks that can run inference on images of arbitrary size. The SIMRDWN pipeline includes a modified version of YOLO (known as YOLT [25]), along with the models in [9]: SSD [14], Faster R-CNN [22], and R-FCN [3]. The proposed approach allows comparison of the performance of these four frameworks, and can rapidly detect objects of vastly different scales with relatively little training data over multiple sensors. For objects of very different scales (e.g. airplanes versus airports) we find that using two different detectors at different scales is very effective with negligible runtime cost. We evaluate large test images at native resolution and find mAP scores of 0.2 to 0.8 for vehicle localization, with the YOLT architecture achieving both the highest mAP and fastest inference speed."}
{"_id": "141378662f78978496e7be9005c8b73e418f869e", "title": "N-gram Counts and Language Models from the Common Crawl", "text": "We contribute 5-gram counts and language models trained on the Common Crawl corpus, a collection over 9 billion web pages. This release improves upon the Google n-gram counts in two key ways: the inclusion of low-count entries and deduplication to reduce boilerplate. By preserving singletons, we were able to use Kneser-Ney smoothing to build large language models. This paper describes how the corpus was processed with emphasis on the problems that arise in working with data at this scale. Our unpruned Kneser-Ney English 5-gram language model, built on 975 billion deduplicated tokens, contains over 500 billion unique n-grams. We show gains of 0.5\u20131.4 BLEU by using large language models to translate into various languages."}
{"_id": "89a6412f009f6ffbdd11399bda8b02161caf56fc", "title": "Empirical Study on Deep Learning Models for QA", "text": "In this paper we explore deep learning models with memory component or attention mechanism for question answering task. We combine and compare three models, Neural Machine Translation [1], Neural Turing Machine [5], and Memory Networks [15] for a simulated QA data set [14]. This paper is the first one that uses Neural Machine Translation and Neural Turing Machines for solving QA tasks. Our results suggest that the combination of attention and memory have potential to solve certain QA problem."}
{"_id": "a739ae988ba0e3ff232f4507627dfc282ba7b3f4", "title": "Depth-Gated LSTM", "text": "In this short note, we present an extension of long short-term memory (LSTM) neural networks to using a depth gate to connect memory cells of adjacent layers. Doing so introduces a linear dependence between lower and upper layer recurrent units. Importantly, the linear dependence is gated through a gating function, which we call depth gate. This gate is a function of the lower layer memory cell, the input to and the past memory cell of this layer. We conducted experiments and verified that this new architecture of LSTMs was able to improve machine translation and language modeling performances."}
{"_id": "5adcac7d15ec8999fa2beb62f0ddc6893884e080", "title": "A review on fingerprint orientation estimation", "text": "Fingerprint orientation plays important roles in fingerprint enhancement, fingerprint classification, and fingerprint recognition. This paper critically reviews the primary advances on fingerprint orientation estimation. Advantages and limitations of existing methods have been addressed. Issues on future development have been discussed. Copyright \u00a9 2010 John Wiley & Sons, Ltd."}
{"_id": "e4d3f79e11a0c7ca05bed53d0f297ba42bf1ae7f", "title": "Clustering Spatial Data in the Presence of Obstacles: a Density-Based Approach", "text": "Clustering spatial data is a well-known problem that has been extensively studied. Grouping similar data in large 2-dimensional spaces to find hidden patterns or meaningful sub-groups has many applications such as satellite imagery, geographic information systems, medical image analysis, marketing, computer visions, etc. Although many methods have been proposed in the literature, very few have considered physical obstacles that may have significant consequences on the effectiveness of the clustering. Taking into account these constraints during the clustering process is costly and the modeling of the constraints is paramount for good performance. In this paper, we investigate the problem of clustering in the presence of constraints such as physical obstacles and introduce a new approach to model these constraints using polygons. We also propose a strategy to prune the search space and reduce the number of polygons to test during clustering. We devise a density-based clustering algorithm, DBCluC, which takes advantage of our constraint modeling to efficiently cluster data objects while considering all physical constraints. The algorithm can detect clusters of arbitrary shape and is insensitive to noise, the input order, and the difficulty of constraints. Its average running complexity is O(NlogN) where N is the number of"}
{"_id": "767529222fb682669398fd76df4b0eb4eb2e5c2d", "title": "The selection of fusion levels in thoracic idiopathic scoliosis.", "text": "From the material and data reviewed in our study of 405 patients, it appears that postoperative correction of the thoracic spine approximately equals the correction noted on preoperative side-bending roentgenograms. Selective thoracic fusion can be safely performed on a Type-II curve of less than 80 degrees, but care must be taken to use the vertebra that is neutral and stable so that the lower level of the fusion is centered over the sacrum. The lumbar curve spontaneously corrects to balance the thoracic curve when selective thoracic fusion is performed and the lower level of fusion is properly selected. In Type-III, IV, and V thoracic curves the lower level of fusion should be centered over the sacrum to achieve a balanced, stable spine."}
{"_id": "4fd5cb9a8067b7317f250cbe7d81be337c28aa16", "title": "Image quality measures and their performance", "text": "A number of quality measures are evaluated for gray scale image compression. They are all bivariate, exploiting the differences between corresponding pixels in the original and degraded images. It is shown that although some numerical measures correlate well with the observers\u2019 response for a given compression technique, they are not reliable for an evaluation across different techniques. A graphical measure called Hosaka plots, however, can be used to appropriately specify not only the amount, but also the type of degradation in reconstructed images."}
{"_id": "6421c677dfc71fe0fada74b4adae27417cd50d00", "title": "CacheAudit: A Tool for the Static Analysis of Cache Side Channels", "text": "We present CacheAudit, a versatile framework for the automatic, static analysis of cache side channels. CacheAudit takes as input a program binary and a cache configuration and derives formal, quantitative security guarantees for a comprehensive set of side-channel adversaries, namely, those based on observing cache states, traces of hits and misses, and execution times. Our technical contributions include novel abstractions to efficiently compute precise overapproximations of the possible side-channel observations for each of these adversaries. These approximations then yield upper bounds on the amount of information that is revealed.\n In case studies, we apply CacheAudit to binary executables of algorithms for sorting and encryption, including the AES implementation from the PolarSSL library, and the reference implementations of the finalists of the eSTREAM stream cipher competition. The results we obtain exhibit the influence of cache size, line size, associativity, replacement policy, and coding style on the security of the executables and include the first formal proofs of security for implementations with countermeasures such as preloading and data-independent memory access patterns."}
{"_id": "a506e751cf2a28179bcd48c75a7d3f537c847f9a", "title": "A correlation game for unsupervised learning yields computational interpretations of Hebbian excitation, anti-Hebbian inhibition, and synapse elimination", "text": "Much has been learned about plasticity of biological synapses from empirical studies. Hebbian plasticity is driven by correlated activity of presynaptic and postsynaptic neurons. Synapses that converge onto the same neuron often behave as if they compete for a fixed resource; some survive the competition while others are eliminated. To provide computational interpretations of these aspects of synaptic plasticity, we formulate unsupervised learning as a zero-sum game between Hebbian excitation and anti-Hebbian inhibition in a neural network model. The game formalizes the intuition that Hebbian excitation tries to maximize correlations of neurons with their inputs, while anti-Hebbian inhibition tries to decorrelate neurons from each other. We further include a model of synaptic competition, which enables a neuron to eliminate all connections except those from its most strongly correlated inputs. Through empirical studies, we show that this facilitates the learning of sensory features that resemble parts of objects. Much has been learned about plasticity of biological synapses from empirical studies. One line of research has explored Hebbian plasticity, which is triggered by correlated presynaptic and postsynaptic activity. Many kinds of excitatory synapses are strengthened by correlated activity, and are said to be Hebbian [Bi and Poo, 2001]. Strengthening of inhibitory synapses by correlated activity has also been observed [Gaiarsa et al., 2002]. The phenomenon is sometimes said to be anti-Hebbian, because strengthening an inhibitory synapse is like making a negative number more negative. Another line of research has shown that synapses converging onto the same neuron often behave as if they compete for shares of a fixed \u201cpie\u201d; some survive the competition while others are eliminated [Lichtman and Colman, 2000]. According to this viewpoint, Hebbian plasticity does not increase or decrease the pie of synaptic resources; it only allocates resources across convergent synapses [Miller, 1996] Theoretical neuroscientists have proposed a number of computational functions for Hebbian excitation, anti-Hebbian inhibition, and synaptic competition/elimination. Hebbian excitation has long been invoked as a mechanism for learning features from sensory input [Von der Malsburg, 1973]. Lateral inhibition has been used to sparsen neural activity, thereby facilitating Hebbian feature learning [Von der Malsburg, 1973, Fukushima, 1980, Rumelhart and Zipser, 1985, Kohonen, 1990]. Endowing the lateral inhibition with anti-Hebbian plasticity can give more robust control over sparseness of activity. F\u00f6ldi\u00e1k [1990] demonstrated this with numerical experiments, but did not provide an interpretation in terms of an optimization principle. Less relevant here are antiHebbian models without reference to sparse activity in linear [Foldiak, 1989, Rubner and Tavan, 1989, Rubner and Schulten, 1990] and nonlinear [Carlson, 1990, Girolami and Fyfe, 1997] networks. Also less relevant is the application of anti-Hebbian plasticity to feedforward rather than lateral connections [Hyv\u00e4rinen and Oja, 1998]. 1 ar X iv :1 70 4. 
00 64 6v 1 [ cs .N E ] 3 A pr 2 01 7 Leen [1991] performed a stability analysis for a linear network with Hebbian feedforward connections and anti-Hebbian lateral connections. Plumbley [1993] derived a linear network with antiHebbian lateral inhibition (but no plasticity of feedforward connections) from the principle of information maximization with a power constraint. Pehlevan et al. [2015] showed that a linear network with Hebbian feedforward connections and anti-Hebbian lateral inhibition can be interpreted as online gradient optimization of a \u201csimilarity matching\u201d cost function. Pehlevan and Chklovskii [2014] and Hu et al. [2014] went on to extend the similarity matching principle to derive nonlinear neural networks for unsupervised learning. Synaptic competition and elimination have been studied in models of cortical development, and have been shown to play an important role in the emergence of feature selectivity [Miller, 1996]. The subject of the present work is a mathematical formalism that provides computational interpretations of Hebbian excitation, anti-Hebbian inhibition, and synaptic competition/elimination in nonlinear neural networks. We start by formulating unsupervised learning as the maximization of output-input correlations subject to upper bound constraints on output-output correlations. We motivate our formulation by describing its relation to previous theoretical frameworks for unsupervised learning, such as maximization [Linsker, 1988, Atick and Redlich, 1990, Plumbley, 1993, Bell and Sejnowski, 1995] or minimization of mutual information [Hyv\u00e4rinen and Oja, 2000], (2) projection onto a subspace that maximizes a moment-based statistic such as variance [Oja, 1982, Linsker, 1988] or kurtosis [Huber, 1985], and the (3) similarity matching principle [Pehlevan et al., 2015]. To solve our constrained maximization problem, we introduce Lagrange multipliers. This Lagrangian dual formulation of unsupervised learning can in turn be solved by a nonlinear neural network with Hebbian excitation and anti-Hebbian inhibition. The network is very similar to the original model of F\u00f6ldi\u00e1k [1990], differing mainly by its use of rectification rather than sigmoidal nonlinearity. (The latter can also be handled by our formalism, as shown in Appendix B.) Lagrange multipliers were also used to study anti-Hebbian plasticity by Plumbley [1993], but only for linear networks. Effectively, excitation and inhibition behave like players in a game, and the inhibitory connections can be interpreted as Lagrange multipliers. The game is zero-sum, in that excitation tries to maximize a payoff function and inhibition tries to minimize exactly the same payoff function. Roughly speaking, however, one could say that excitation aims to maximize the correlation of each output neuron with its inputs, while inhibition aims to decorrelate the output neurons from each other. Our term \u201ccorrelation game\u201d is derived from this intuitive picture. Within our mathematical formalism, we also consider a dynamics of synaptic competition and elimination that is drawn from models of cortical development [Miller and MacKay, 1994]. Competition between the excitatory synapses convergent on a single neuron is capable of driving the strengths of some synapses to zero. In numerical experiments with the MNIST dataset, we show that synapse elimination has the computational function of facilitating the learning of features that resemble \u201cparts\u201d of objects. 
Theoretical analysis shows that the surviving synapses converging onto an output neuron come from its most strongly correlated inputs; synapses from weakly correlated inputs are eliminated. Our correlation game is closely related to the similarity matching principle of Pehlevan and Chklovskii [2014] and Pehlevan et al. [2015]. The major novelty is the introduction of decorrelation as a constraint for the optimization. Paralleling our work, Pehlevan et al. [2017] have shown that the similarity matching principle leads to a game theoretic formulation through Hubbard-Stratonovich duality. Again our novelty is the use of decorrelation as a constraint, which leads to our correlation game through Lagrangian duality. Our model of synaptic competition and elimination was borrowed with slight modification from the literature on modeling neural development [Miller and MacKay, 1994]. It can be viewed as a more biologically plausible alternative to previous unsupervised learning algorithms that sparsen features. For example, Hoyer [2004] is similar to ours because it can be interpreted as independently sparsening each set of convergent synapses, rather than applying a global L1 regularizer to all synapses."}
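A loose sketch of the kind of network this formalism describes: rectified units with Hebbian feedforward excitation and anti-Hebbian lateral inhibition, settled by fixed-point iteration. The learning rates, normalization, and synapse-elimination dynamics below are simplified placeholders, not the paper's exact updates.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def train(X, n_out, settle=30, eta=0.01):
    n_in = X.shape[1]
    W = np.abs(np.random.randn(n_out, n_in)) * 0.1   # excitatory feedforward weights
    L = np.zeros((n_out, n_out))                     # inhibitory lateral weights
    for x in X:
        y = relu(W @ x)
        for _ in range(settle):                      # settle the lateral dynamics
            y = relu(W @ x - L @ y)
        W += eta * np.outer(y, x)                    # Hebbian: grow with correlation
        W /= np.linalg.norm(W, axis=1, keepdims=True) + 1e-9
        L += eta * np.outer(y, y)                    # anti-Hebbian decorrelation
        np.fill_diagonal(L, 0)
    return W, L

W, L = train(np.abs(np.random.randn(200, 16)), n_out=4)
```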
{"_id": "0349b6927f434d5b1ead687012eb06776b4f76b0", "title": "Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control", "text": "A broad range of neural and behavioral data suggests that the brain contains multiple systems for behavioral choice, including one associated with prefrontal cortex and another with dorsolateral striatum. However, such a surfeit of control raises an additional choice problem: how to arbitrate between the systems when they disagree. Here, we consider dual-action choice systems from a normative perspective, using the computational theory of reinforcement learning. We identify a key trade-off pitting computational simplicity against the flexible and statistically efficient use of experience. The trade-off is realized in a competition between the dorsolateral striatal and prefrontal systems. We suggest a Bayesian principle of arbitration between them according to uncertainty, so each controller is deployed when it should be most accurate. This provides a unifying account of a wealth of experimental evidence about the factors favoring dominance by either system."}
{"_id": "8394eca39f743cf4cd098fb92cc4718d34be1d04", "title": "Stakeholders in Explainable AI", "text": "There is general consensus that it is important for artificial intelligence (AI) and machine learning systems to be explainable and/or interpretable. However, there is no general consensus over what is meant by \u2018explainable\u2019 and \u2018interpretable\u2019. In this paper, we argue that this lack of consensus is due to there being several distinct stakeholder communities. We note that, while the concerns of the individual communities are broadly compatible, they are not identical, which gives rise to different intents and requirements for explainability/interpretability. We use the software engineering distinction between validation and verification, and the epistemological distinctions between knowns/unknowns, to tease apart the concerns of the stakeholder communities and highlight the areas where their foci overlap or diverge. It is not the purpose of the authors of this paper to \u2018take sides\u2019 \u2014 we count ourselves as members, to varying degrees, of multiple communities \u2014 but rather to help disambiguate what stakeholders mean when they ask \u2018Why?\u2019 of an AI."}
{"_id": "6c2b280af13bf1824fbc9690438c9e46c9619dd0", "title": "Loughborough University Institutional Repository Product attachment and replacement : implications for sustainable design", "text": "This research investigated a number of main areas of attachment in order to determine how consumer-product relationships are formed and to identify whether these feelings influence replacement decisions. Primary research comprised of interviews with consumers to discuss the topic area in relation to three possessions selected for their attachment qualities. The research highlighted how attachment is determined by multiple themes, many of which are circumstantial to consumers\u2019 experiences and therefore difficult for designers to control. Findings showed that memories were the most prominent theme of participants\u2019 attachment, closely followed by pleasure and usability. Enjoyment and pleasure were found to be the primary reason for attachment to newly purchased items, whereas nostalgia was highest for older possessions. Appearance and reliability were found to have considerable influence on participants\u2019 attitudes towards replacement."}
{"_id": "65f415c6d88aca139867702fc64aa179781b8e65", "title": "PID: the Pathway Interaction Database", "text": "The Pathway Interaction Database (PID, http://pid.nci.nih.gov) is a freely available collection of curated and peer-reviewed pathways composed of human molecular signaling and regulatory events and key cellular processes. Created in a collaboration between the US National Cancer Institute and Nature Publishing Group, the database serves as a research tool for the cancer research community and others interested in cellular pathways, such as neuroscientists, developmental biologists and immunologists. PID offers a range of search features to facilitate pathway exploration. Users can browse the predefined set of pathways or create interaction network maps centered on a single molecule or cellular process of interest. In addition, the batch query tool allows users to upload long list(s) of molecules, such as those derived from microarray experiments, and either overlay these molecules onto predefined pathways or visualize the complete molecular connectivity map. Users can also download molecule lists, citation lists and complete database content in extensible markup language (XML) and Biological Pathways Exchange (BioPAX) Level 2 format. The database is updated with new pathway content every month and supplemented by specially commissioned articles on the practical uses of other relevant online tools."}
{"_id": "b0df677063ba6e964c1a3a1aa733bd06737aec7f", "title": "Global Inference for Sentence Compression: An Integer Linear Programming Approach", "text": "Sentence compression holds promise for many applications ranging from summarization to subtitle generation. Our work views sentence compression as an optimization problem and uses integer linear programming (ILP) to infer globally optimal compressions in the presence of linguistically motivated constraints. We show how previous formulations of sentence compression can be recast as ILPs and extend these models with novel global constraints. Experimental results on written and spoken texts demonstrate improvements over state-of-the-art models."}
{"_id": "2e65fd8cddf967b2fa8b8f7bc86cde9c3414fa1e", "title": "Toward tense as a clinical marker of specific language impairment in English-speaking children.", "text": "A critical clinical issue is the identification of a clinical marker, a linguistic form or principle that can be shown to be characteristic of children with Specific Language Impairment (SLI). In this paper we evaluate, as candidate clinical markers, a set of morphemes that mark Tense. In English, this includes -s third person singular, -ed regular past, BE, and DO. According to the Extended Optional Infinitive Account (EOI) of Rice, Wexler, and Cleave (1995), this set of morphemes is likely to appear optionally in the grammars of children with SLI at a rate lower than the optionality evident in younger controls. Three groups of preschool children participated: 37 children with SLI, and two control groups, one of 40 MLU-equivalent children and another of 45 age-equivalent children. Three kinds of evidence support the conclusion that a set of morphemes that marks Tense can be considered a clinical marker: (a) low levels of accuracy for the target morphemes for the SLI group relative to either of the two control groups; (b) affectedness for the set of morphemes defined by the linguistic function of Tense, but not for morphemes unrelated to Tense; and (c) a bimodal distribution for Tense-marking morphemes relative to age peers, in which the typical children are at essentially adult levels of the grammar, whereas children in the SLI group were at low (i.e., non-adultlike) levels of performance. The clinical symptoms are evident in omissions of surface forms. Errors of subject-verb agreement and syntactic misuses are rare, showing that, as predicted, children in an EOI stage who are likely to mark Tense optionally at the same time know a great deal about the grammatical properties of finiteness and agreement in the adult grammar. The findings are discussed in terms of alternative accounts of the grammatical limitations of children with SLI and implications for clinical identification."}
{"_id": "43d40cf9251ba497f6ea3957bfc3c189fd11d421", "title": "A Privacy-Aware Authentication Scheme for Distributed Mobile Cloud Computing Services", "text": "In modern societies, the number of mobile users has dramatically risen in recent years. In this paper, an efficient authentication scheme for distributed mobile cloud computing services is proposed. The proposed scheme provides security and convenience for mobile users to access multiple mobile cloud computing services from multiple service providers using only a single private key. The security strength of the proposed scheme is based on bilinear pairing cryptosystem and dynamic nonce generation. In addition, the scheme supports mutual authentication, key exchange, user anonymity, and user untraceability. From system implementation point of view, verification tables are not required for the trusted smart card generator (SCG) service and cloud computing service providers when adopting the proposed scheme. In consequence, this scheme reduces the usage of memory spaces on these corresponding service providers. In one mobile user authentication session, only the targeted cloud service provider needs to interact with the service requestor (user). The trusted SCG serves as the secure key distributor for distributed cloud service providers and mobile clients. In the proposed scheme, the trusted SCG service is not involved in individual user authentication process. With this design, our scheme reduces authentication processing time required by communication and computation between cloud service providers and traditional trusted third party service. Formal security proof and performance analyses are conducted to show that the scheme is both secure and efficient."}
{"_id": "4210b1b1ea33935061394a077a21ff7bac7d7fd0", "title": "A Real Options Approach for the Valuation of Highway Concessions", "text": "T theory of real options offers an approach for the valuation of investments in real assets, based on the methodology developed for financial options. This approach is especially appropriate in the context of strategic decision making under conditions of uncertainty. This is the case for the valuation of highway concessions, where real options arise from certain clauses of the contracts, for example, a minimum traffic guarantee. The possible exercise of these kinds of rights means an added value for the project that cannot be easily captured using traditional procedures for investment valuation. In this paper, we develop a methodology to value these clauses of highway concessions and, for that purpose, we consider the traffic on the highway as the underlying asset in an option contract, taking into account that, when a nonfinancial asset is used, some adjustments have to be made to the options approach. This methodology is applied to the case of an already operating highway concession, using real data, and the authors obtain an estimate of the value of a minimum traffic guarantee, which depends on several parameters that are analyzed. The authors conclude that this methodology is an appropriate tool for the valuation of highway concessions that have operational flexibility."}
{"_id": "001719ac5585722d14bf4f2d807383a368504a4a", "title": "Pedestrian detection in low resolution videos", "text": "Pedestrian detection in low resolution videos can be challenging. In outdoor surveillance scenarios, the size of pedestrians in the images is often very small (around 20 pixels tall). The most common and successful approaches for single frame pedestrian detection use gradient-based features and a support vector machine classifier. We propose an extension of these ideas, and develop a new algorithm that extracts gradient features from a spatiotemporal volume, consisting of a short sequence of images (about one second in duration). The additional information provided by the motion of the person compensates for the loss of resolution. On standard datasets (PETS2001, VIRAT) we show a significant improvement in performance over single-frame detection."}
{"_id": "16381badb3adebd1506756a919c13beee335cdfd", "title": "Computer-generated pen-and-ink illustration", "text": "This paper describes the principles of traditional pen-and-ink illustration, and shows how a great number of them can be implemented as part of an automated rendering system. It introduces \u201cstroke textures,\u201d which can be used for achieving both texture and tone with line drawing. Stroke textures also allow resolution-dependent rendering, in which the choice of strokes used in an illustration is appropriately tied to the resolution of the target medium. We demonstrate these techniques using complex architectural models, including Frank Lloyd Wright's \u201cRobie House.\u201d"}
{"_id": "926862f4facebf5f76dd5f75d178c8d13d5f7468", "title": "No need to war-drive: unsupervised indoor localization", "text": "We propose UnLoc, an unsupervised indoor localization scheme that bypasses the need for war-driving. Our key observation is that certain locations in an indoor environment present identifiable signatures on one or more sensing dimensions. An elevator, for instance, imposes a distinct pattern on a smartphone's accelerometer; a corridor-corner may overhear a unique set of WiFi access points; a specific spot may experience an unusual magnetic fluctuation. We hypothesize that these kind of signatures naturally exist in the environment, and can be envisioned as internal landmarks of a building. Mobile devices that \"sense\" these landmarks can recalibrate their locations, while dead-reckoning schemes can track them between landmarks. Results from 3 different indoor settings, including a shopping mall, demonstrate median location errors of 1:69m. War-driving is not necessary, neither are floorplans the system simultaneously computes the locations of users and landmarks, in a manner that they converge reasonably quickly. We believe this is an unconventional approach to indoor localization, holding promise for real-world deployment."}
{"_id": "65deebcc4e04585c08c2c349eb28091a5da4b573", "title": "jModelTest 2: more models, new heuristics and parallel computing", "text": "jModelTest 2: more models, new heuristics and parallel computing Diego Darriba, Guillermo L. Taboada, Ram\u00f3n Doallo and David Posada Supplementary Table 1. New features in jModelTest 2 Supplementary Table 2. Model selection accuracy Supplementary Table 3. Mean square errors for model averaged estimates Supplementary Note 1. Hill-climbing hierarchical clustering algorithm Supplementary Note 2. Heuristic filtering Supplementary Note 3. Simulations from prior distributions Supplementary Note 4. Speed-up benchmark on real and simulated datasets"}
{"_id": "1f413da5f04b76034c698b6c2d120df7484e2334", "title": "Vegan diets: practical advice for athletes and exercisers", "text": "With the growth of social media as a platform to share information, veganism is becoming more visible, and could be becoming more accepted in sports and in the health and fitness industry. However, to date, there appears to be a lack of literature that discusses how to manage vegan diets for athletic purposes. This article attempted to review literature in order to provide recommendations for how to construct a vegan diet for athletes and exercisers. While little data could be found in the sports nutrition literature specifically, it was revealed elsewhere that veganism creates challenges that need to be accounted for when designing a nutritious diet. This included the sufficiency of energy and protein; the adequacy of vitamin B12, iron, zinc, calcium, iodine and vitamin D; and the lack of the long-chain n-3 fatty acids EPA and DHA in most plant-based sources. However, via the strategic management of food and appropriate supplementation, it is the contention of this article that a nutritive vegan diet can be designed to achieve the dietary needs of most athletes satisfactorily. Further, it was suggested here that creatine and \u03b2-alanine supplementation might be of particular use to vegan athletes, owing to vegetarian diets promoting lower muscle creatine and lower muscle carnosine levels in consumers. Empirical research is needed to examine the effects of vegan diets in athletic populations however, especially if this movement grows in popularity, to ensure that the health and performance of athletic vegans is optimised in accordance with developments in sports nutrition knowledge."}
{"_id": "959b6a7feb3e35c7de91b43f8427a3e9b725576e", "title": "An approach to automotive ECG measurement validation using a car-integrated test framework", "text": "Development and integration of physiological sensors into automotive applications is gaining importance. Assistance systems which possess knowledge about the driver's cognitive state could increase road safety. In this paper we present a flexible framework that enables the development, evaluation and verification of sensors and algorithms for automotive applications using physiological signals under realistic driving conditions. We have integrated a custom capacitive ECG measurement system into a test car and validated its performance in real world driving tests. During first test runs, the capacitive system achieved a sensitivity of up to 95.5% and a precision rate of up to 92.6%. Our system also records synchronized vehicle dynamics. We discuss the road test measurements which suggest that the driving situation highly impacts the quality of ECG signal. Therefore, information on driving dynamics could be used to improve the precision rate of future capacitive ECG measurement."}
{"_id": "c66fc8ffffaadc087688e3ba27fd2eaa0c4a2fde", "title": "Spatial delineation of soil erosion vulnerability in the Lake Tana Basin , Ethiopia", "text": "The main objective of this study was to identify the most vulnerable areas to soil erosion in the Lake Tana Basin, Blue Nile, Ethiopia using the Soil and Water Assessment Tool (SWAT), a physically based distributed hydrological model, and a Geographic Information System based decision support system that uses multi-criteria evaluation (MCE). The SWAT model was used to estimate the sediment yield within each sub-basin and identify the most sediment contributing areas in the basin. Using the MCE analysis, an attempt was made to combine a set of factors (land use, soil, slope and river layers) to make a decision according to the stated objective. On the basis of simulated SWAT, sediment yields greater than 30 tons/ha for each of the sub-basin area, 18\u00d04% of the watershed was determined to be high erosion potential area. The MCE results indicated that 12\u201330\u00d05% of the watershed is high erosion potential area. Both approaches show comparable watershed area with high soil erosion susceptibility. The output of this research can aid policy and decision makers in determining the soil erosion \u2018hot spots\u2019 and the relevant soil and water conservation measures. Copyright \uf6d9 2009 John Wiley & Sons, Ltd."}
{"_id": "496016be0eadb48c694127b2d08e0059109750ff", "title": "The development of the SGI-16: a shortened sensory gating deficit and distractibility questionnaire for adults with ADHD.", "text": "The Sensory Gating Inventory (SGI) is a questionnaire composed of 36 items designed to investigate abnormal perception related to the inability to control sensitivity to sensory stimuli frequently reported in adult with ADHD. This questionnaire can be considered too lengthy to be taken by people with ADHD, and a shortened version is needed. One hundred and sixty-three adults with ADHD responded to the SGI-36. An item reduction process took into account both the results of statistical analyses and the expertise of a steering committee. Construct validity, reliability, and external validity were tested for a short version (16 items). The structure of the SGI-16 was confirmed by principal components factor analysis. Cronbach's alpha coefficients ranged from 0.78 to 0.89. The SGI-16 dimension scores were highly correlated with their respective SGI-36 dimension scores. The SGI-16 seems to be both appropriate and useful for use in clinical practice to investigate perceptual abnormalities in adults with ADHD."}
{"_id": "ebcae59f4fefe0dece6e7e7031fdc9c8503a09c2", "title": "Vehicular Ad Hoc Networks (VANETs): Challenges and Perspectives", "text": "Vehicular ad hoc network (VANET), a subclass of mobile ad hoc networks (MANETs), is a promising approach for future intelligent transportation system (ITS). These networks have no fixed infrastructure and instead rely on the vehicles themselves to provide network functionality. However, due to mobility constraints, driver behavior, and high mobility, VANETs exhibit characteristics that are dramatically different from many generic MANETs. This article provides a comprehensive study of challenges in these networks, which we concentrate on the problems and proposed solutions. Then we outline current state of the research and future perspectives. With this article, readers can have a more thorough understanding of vehicle ad hoc networking and the research trends in this area"}
{"_id": "fa5728247a11f57d96f9072b07944bbefb47a97f", "title": "Design of 5.9 ghz dsrc-based vehicular safety communication", "text": "The automotive industry is moving aggressively in the direction of advanced active safety. Dedicated short-range communication (DSRC) is a key enabling technology for the next generation of communication-based safety applications. One aspect of vehicular safety communication is the routine broadcast of messages among all equipped vehicles. Therefore, channel congestion control and broadcast performance improvement are of particular concern and need to be addressed in the overall protocol design. Furthermore, the explicit multichannel nature of DSRC necessitates a concurrent multichannel operational scheme for safety and non-safety applications. This article provides an overview of DSRC based vehicular safety communications and proposes a coherent set of protocols to address these requirements"}
{"_id": "550c6452452c8119f564079b752590a459dcce2c", "title": "Group signatures with verifier-local revocation", "text": "Group signatures have recently become important for enabling privacy-preserving attestation in projects such as Microsoft's ngscb effort (formerly Palladium). Revocation is critical to the security of such systems. We construct a short group signature scheme that supports Verifier-Local Revocation (VLR). In this model, revocation messages are only sent to signature verifiers (as opposed to both signers and verifiers). Consequently there is no need to contact individual signers when some user is revoked. This model is appealing for systems providing attestation capabilities. Our signatures are as short as standard RSA signatures with comparable security. Security of our group signature (in the random oracle model) is based on the Strong Diffie-Hellman assumption and the Decision Linear assumption in bilinear groups. We give a precise model for VLR group signatures and discuss its implications."}
{"_id": "97cb31247602189ca18c357c73b0d1ed84d0fa9d", "title": "A Survey of Inter-Vehicle Communication \u2217", "text": "As a component of the intelligent transportation system (ITS) and one of the concrete applications of mobile ad hoc networks, inter-vehicle communication (IVC) has attracted research attention from both the academia and industry of, notably, US, EU, and Japan. The most important feature of IVC is its ability to extend the horizon of drivers and on-board devices (e.g., radar or sensors) and, thus, to improve road traffic safety and efficiency. This paper surveys IVC with respect to key enabling technologies ranging from physical radio frequency to group communication primitives and security issues. The mobility models used to evaluate the feasibility of these technologies are also briefly described. We focus on the discussion of various MAC protocols that seem to be indispensable components in the network protocol stack of IVC. By analyzing the application requirements and the protocols built upon the MAC layer to meet these requirements, we also advocate our perspective that ad hoc routing protocols and group communication primitives migrated from wired networks might not be an efficient way to support the envisioned applications, and that new coordination algorithms directly based on MAC could be designed for this purpose."}
{"_id": "710de527b862fe4b81bcf3acbf5ffe4db6852f4e", "title": "Detecting Satire in the News with Machine Learning", "text": "We built models with Logistic Regression and linear Support Vector Machines on a large dataset consisting of regular news articles and news from satirical websites, and showed that such linear classifiers on a corpus with about 60,000 articles can perform with a precision of 98.7% and a recall of 95.2% on a random test set of the news. On the other hand, when testing the classifier on \u201cpublication sources\u201d which are completely unknown during training, only an accuracy of 88.2% and an F1-score of 76.3% are achieved. As another result, we showed that the same algorithm can distinguish between news written by the news agency itself and paid articles from customers. Here the results had an accuracy of 99%."}
{"_id": "781c776dfec453ddd1e9faafea144ad09e7a636a", "title": "Dual Grid Array Antennas in a Thin-Profile Package for Flip-Chip Interconnection to Highly Integrated 60-GHz Radios", "text": "We examine the current development of highly integrated 60-GHz radios with an interest in antenna-circuit interfaces. We design and analyze grid array antennas with special attention to the differential feeding and the patterned ground plane. More importantly, we integrate two grid array antennas in a package; propose the way of assembling it to the system printed circuit board; and demonstrate a total solution of low cost and thin profile to highly integrated 60-GHz radios. We show that the package in low temperature cofired ceramic (LTCC) technology measures only 13\u00d713\u00d70.575 mm3 ; can carry a 60-GHz radio die of current and future sizes with flip-chip bonding; and achieves good antenna performance in the 60-GHz band with maximum gain of 13.5 and 14.5 dBi for the single-ended and differential antennas, respectively."}
{"_id": "e519211eed94789da9311a808b8d4bd6a5f73318", "title": "NLOS Identification in Range-Based Source Localization: Statistical Approach", "text": "Least squares estimation is a widely-used technique for range-based source localization, which obtains the most probable position of mobile station. These methods cannot provide desirable accuracy in the case with a non line of sight (NLOS) path between mobile station and base stations. To circumvent this drawback, many algorithms have been proposed to identify and mitigate this error; however, they have a large run-time overhead. On the other hand, new positioning systems utilize a large set of base stations, and a practical algorithm should be fast enough to deal with them. In this paper, we propose a novel algorithm based on subspace method to identify and eliminate the NLOS error. Simulation studies show that our algorithm is faster and more accurate compared with other conventional methods, especially in the large-scale cases."}
{"_id": "ba282518e054d16c085ebdd9b52dafd70260befc", "title": "Visual Place Recognition: A Survey", "text": "Visual place recognition is a challenging problem due to the vast range of ways in which the appearance of real-world places can vary. In recent years, improvements in visual sensing capabilities, an ever-increasing focus on long-term mobile robot autonomy, and the ability to draw on state-of-the-art research in other disciplines-particularly recognition in computer vision and animal navigation in neuroscience-have all contributed to significant advances in visual place recognition systems. This paper presents a survey of the visual place recognition research landscape. We start by introducing the concepts behind place recognition-the role of place recognition in the animal kingdom, how a \u201cplace\u201d is defined in a robotics context, and the major components of a place recognition system. Long-term robot operations have revealed that changing appearance can be a significant factor in visual place recognition failure; therefore, we discuss how place recognition solutions can implicitly or explicitly account for appearance change within the environment. Finally, we close with a discussion on the future of visual place recognition, in particular with respect to the rapid advances being made in the related fields of deep learning, semantic scene understanding, and video description."}
{"_id": "568cff415e7e1bebd4769c4a628b90db293c1717", "title": "Concepts Not Alone: Exploring Pairwise Relationships for Zero-Shot Video Activity Recognition", "text": "Vast quantities of videos are now being captured at astonishing rates, but the majority of these are not labelled. To cope with such data, we consider the task of content-based activity recognition in videos without any manually labelled examples, also known as zero-shot video recognition. To achieve this, videos are represented in terms of detected visual concepts, which are then scored as relevant or irrelevant according to their similarity with a given textual query. In this paper, we propose a more robust approach for scoring concepts in order to alleviate many of the brittleness and low precision problems of previous work. Not only do we jointly consider semantic relatedness, visual reliability, and discriminative power. To handle noise and non-linearities in the ranking scores of the selected concepts, we propose a novel pairwise order matrix approach for score aggregation. Extensive experiments on the large-scale TRECVID Multimedia Event Detection data show the superiority of our approach."}
{"_id": "055afd1853ebe05dbb41405d188f1569cc98f814", "title": "Strategic human resource management ( SHRM ) practices and its effect on financial performance : evidence from some selected scheduled private commercial banks in Bangladesh", "text": "Formulation and execution of Strategic Human Resource Management (SHRM) practices and its effect on perceived financial performance of any organization is a burning issue in the globalized competitive business era. This study aims to find out the relationship between SHRM and financial performance to ensure the sustainability and competitive advantage of the selected scheduled private commercial banks in Bangladesh. The research has been conducted on managers as sample of some private commercial banks during the period of January to November 2012 to collect the primary data. To evaluate the financial performance researchers have used annual reports of 2011 and 2012, journals, web sites etc. as secondary source. Survey research findings indicate that strategic integration and development of HRM were practiced to a full extent in the sampled firms. From the cross sectional analysis, the financial performance indicators show that the capital adequacy was mostly at satisfactory level compared with eh industry average. The quality of assets loan varies from bank to bank but most of them are performing at the desired level. Management efficiency was out-performing the standard in most of the cases. The profitability indicators ratio was also better than the average of private commercial banks. The result presented in this study suggests practicing intensive SHRM so that improved financial performance can be asserted to sustain in the competitive environment."}
{"_id": "ea114d3388c0e1e9095612619e4312fafa14d668", "title": "Stock Market Classification Model Using Sentiment Analysis on Twitter Based on Hybrid Naive Bayes Classifiers", "text": "Sentiment analysis has become one of the most popular process to predict stock market behaviour based on consumer reactions. Concurrently, the availability of data from Twitter has also attracted researchers towards this research area. Most of the models related to sentiment analysis are still suffering from inaccuracies. The low accuracy in classification has a direct effect on the reliability of stock market indicators. The study primarily focuses on the analysis of the Twitter dataset. Moreover, an improved model is proposed in this study; it is designed to enhance the classification accuracy. The first phase of this model is data collection, and the second involves the filtration and transformation, which are conducted to get only relevant data. The most crucial phase is labelling, in which polarity of data is determined and negative, positive or neutral values are assigned to people opinion. The fourth phase is the classification phase in which suitable patterns of the stock market are identified by hybridizing Na\u00efve Bayes Classifiers (NBCs), and the final phase is the performance and evaluation. This study proposes Hybrid Na\u00efve Bayes Classifiers (HNBCs) as a machine learning method for stock market classification. The outcome is instrumental for investors, companies, and researchers whereby it will enable them to formulate their plans according to the sentiments of people. The proposed method has produced a significant result; it has achieved accuracy equals 90.38%."}
{"_id": "d821ddcae947853a53073c7828ca2a63fc9a9362", "title": "When and why do retrieval attempts enhance subsequent encoding?", "text": "Unsuccessful retrieval attempts can enhance subsequent encoding and learning. In three experiments, subjects either attempted to retrieve word pairs prior to studying them (e.g., attempting to recall tide-? before studying tide-beach) or did not attempt retrieval and retention of the studied targets was assessed on a subsequent cued recall test. Experiment 1 showed that attempting retrieval enhanced subsequent encoding and recall relative to not attempting retrieval when the word pairs were semantically related, but not when the pairs were unrelated. In Experiment 2, studying a different word pair prior to the correct pair (e.g., studying tide-wave prior to tide-beach) did not produce the same effect as attempting retrieval prior to studying. Constraining retrieval to a particular candidate word prior to study (e.g., recalling tide-wa__ before studying tide-beach) produced a negative effect on subsequent recall. Experiment 3 showed that attempting retrieval did not enhance encoding when a brief delay occurred between the retrieval attempt and the subsequent study trial. The results support the idea that a search set of candidates related to the retrieval cue is activated during retrieval and that this retrieval-specific activation can enhance subsequent encoding of those candidates."}
{"_id": "ad701b851c63d6fa9d2a145cd182011e4985231c", "title": "Clustered Data Management in Virtual Docker Networks Spanning Geo-Redundant Data Centers : A Performance Evaluation Study of Docker Networking", "text": "...................................................................................................................................... iii Acknowledgement ....................................................................................................................... iv Table of"}
{"_id": "6ba0491f9dde8ea042ea4a49df34838b345f23c2", "title": "Nonparametric variational inference", "text": "Variational methods are widely used for approximate posterior inference. However, their use is typically limited to families of distributions that enjoy particular conjugacy properties. To circumvent this limitation, we propose a family of variational approximations inspired by nonparametric kernel density estimation. The locations of these kernels and their bandwidth are treated as variational parameters and optimized to improve an approximate lower bound on the marginal likelihood of the data. Unlike most other variational approximations, using multiple kernels allows the approximation to capture multiple modes of the posterior. We demonstrate the efficacy of the nonparametric approximation with a hierarchical logistic regression model and a nonlinear matrix factorization model. We obtain predictive performance as good as or better than more specialized variational methods and MCMC approximations. The method is easy to apply to graphical models for which standard variational methods are difficult to derive."}
{"_id": "dc69176673f1f55a497ff43c2a42197e15438619", "title": "Movie trailer quality evaluation using real-time human electroencephalogram", "text": "The total US box office revenue exceeds ten billion dollars a year. Inevitably, new product forecasting and diagnosis have high financially stakes. The motion picture industry has a huge incentive to perform early prediction of movie success or failure. In this paper, we present a study that evaluate viewer responses to short movie trailers using a cap instrumented with Electroencephalogram (EEG) sensors. The method can be used to evaluate prerelease movies regarding how engaging they are to viewers. The primary advantage of our approach is that responses of a viewer can be recorded without any manual inputs from the viewer. The data collected, if analyzed properly, can reveal more accurate information regarding the viewer's emotion states while watching the movie than post-viewing surveys. This approach also enables the delivery of personalized entertainment through the brain interface."}
{"_id": "9d22cb3c483b4ad3b66ef32df3d09d7af289a541", "title": "A fast procedure for computing the distance between complex objects in three-dimensional space", "text": "An efficient and reliable algorithm for computing the Euclidean distance between a pair of convex sets in Rm is described. Extensive numerical experience with a broad family of polytopes in R 3 shows that the computational cost is approximately linear in the total number of vertices specifying the two polytopes. The algorithm has special features which makes its application in a variety of robotics problems attractive. These are discussed and an example of collision"}
{"_id": "300e36e7528f9f49f440446218e829e76e65fc30", "title": "Depression and Psychosocial Risk Factors among Community-Dwelling Older Adults in Singapore.", "text": "Depression is the most common mental and emotional disorder that emerges in the late stages of life. It is closely associated with poor health, disability, mortality, and suicide. The study examines the risk factors of depression in late life, especially the psychosocial factors, among a sample comprising 162 community-dwelling Singaporean adults aged 65 years and above. An interview-based structured survey was conducted in multiple senior activity centers located in different parts of Singapore. Results from the hierarchical regression analysis show that 32.9% of the variance in geriatric depression can be explained by the three psychosocial factors, among which loneliness, perceived social support, and the emotional regulation component of resilience are significantly associated with depression in older adults. Large-scale studies should be conducted to confirm the findings of the present study, and to further examine the predictive effects of these psychosocial factors on depression among older adults."}
{"_id": "a62ac71cd51124973ac57c87d09a3461ecbd8e61", "title": "The least mean fourth (LMF) adaptive algorithm and its family", "text": "New steepest descent algorithms for adaptive filtering and have been devised which allow error minimization in the mean fourth and mean sixth, etc., sense. During adaptation, the weights undergo exponential relaxation toward their optimal solutions. T ime constants have been derived, and surprisingly they turn out to be proportional to the time constants that would have been obtained if the steepest descent least mean square (LMS) algorithm of Widrow and Hoff had been used. The new gradient algorithms are insignificantly more complicated to program and to compute than the LMS algorithm. Their general form is W J+l = w, t 2plqK-lx,, where W, is the present weight vector, W, + 1 is the next weight vector, r, is the present error, X, is the present input vector, u is a constant controlling stability and rate of convergence, and 2 K is the exponent of the error being minimized. Conditions have been derived for weight-vector convergence of the mean and of the variance for the new gradient algorithms. The behavior of the least mean fourth (LMF) algorithm is of special interest. In comparing this algorithm to the LMS algorithm, when both are set to have exactly the same time constants for the weight relaxation process, the LMF algorithm, under some circumstances, will have a substantially lower weight noise than the LMS algorithm. It is possible, therefore, that a min imum mean fourth error algorithm can do a better job of least squares estimation than a mean square error algorithm. This intriguing concept has implications for all forms of adaptive algorithms, whether they are based on steepest descent or otherwise."}
{"_id": "8a7ab1494c8e63e6d0be9672cd779f3723346493", "title": "Towards Linked Research Data: an Institutional Approach", "text": "For Open Science to be widely adopted, a strong institutional support for scientists will be essential. Bielefeld University and the associated Center of Excellence Cognitive Interaction Technology (CITEC) have developed a platform that enables researchers to manage their publications and the underlying research data in an easy and efficient way. Following a Linked Data approach we integrate this data into a unified linked data store and interlink it with additional data sources from inside the university and outside sources like DBpedia. Based on the existing platform, a concrete case study from the domain of biology is implemented that releases optical motion tracking data of stick insect locomotion. We investigate the cost and usefulness of such a detailed, domain-specific semantic enrichment in order to evaluate whether this approach might be considered for large-scale deployment."}
{"_id": "acc99594d04d703ac345bd2008fe79e62df319b6", "title": "Efficient Exploratory Learning of Inverse Kinematics on a Bionic Elephant Trunk", "text": "We present an approach to learn the inverse kinematics of the \u201cbionic handling assistant\u201d-an elephant trunk robot. This task comprises substantial challenges including high dimensionality, restrictive and unknown actuation ranges, and nonstationary system behavior. We use a recent exploration scheme, online goal babbling, which deals with these challenges by bootstrapping and adapting the inverse kinematics on the fly. We show the success of the method in extensive real-world experiments on the nonstationary robot, including a novel combination of learning and traditional feedback control. Simulations further investigate the impact of nonstationary actuation ranges, drifting sensors, and morphological changes. The experiments provide the first substantial quantitative real-world evidence for the success of goal-directed bootstrapping schemes, moreover with the challenge of nonstationary system behavior. We thereby provide the first functioning control concept for this challenging robot platform."}
{"_id": "75d5a6112f7fd9b2953fb538be331fbea7414ab4", "title": "Winning the DARPA Grand Challenge with an AI Robot", "text": "This paper describes the software architecture of Stanley, an autonomous land vehicle developed for high-speed desert driving without human intervention. The vehicle recently won the DARPA Grand Challenge, a major robotics competition. The article describes the software architecture of the robot, which relied pervasively on state-of-the-art AI technologies, such as machine learning and probabilistic reasoning."}
{"_id": "5896b9299d100bdd10fee983fe365dc3bcf35a67", "title": "A 3-$\\mu\\hbox{W}$ CMOS Glucose Sensor for Wireless Contact-Lens Tear Glucose Monitoring", "text": "This paper presents a noninvasive wireless sensor platform for continuous health monitoring. The sensor system integrates a loop antenna, wireless sensor interface chip, and glucose sensor on a polymer substrate. The IC consists of power management, readout circuitry, wireless communication interface, LED driver, and energy storage capacitors in a 0.36-mm2 CMOS chip with no external components. The sensitivity of our glucose sensor is 0.18 \u03bcA\u00b7mm-2\u00b7mM-1. The system is wirelessly powered and achieves a measured glucose range of 0.05-1 mM with a sensitivity of 400 Hz/mM while consuming 3 \u03bcW from a regulated 1.2-V supply."}
{"_id": "05a420a3a9fd71b391ef5e9dc03e466b80211e78", "title": "A Review and Characterization of Progressive Visual Analytics", "text": "Progressive Visual Analytics (PVA) has gained increasing attention over the past years. It brings the user into the loop during otherwise long-running and non-transparent computations by producing intermediate partial results. These partial results can be shown to the user for early and continuous interaction with the emerging end result even while it is still being computed. Yet as clear-cut as this fundamental idea seems, the existing body of literature puts forth various interpretations and instantiations that have created a research domain of competing terms, various definitions, as well as long lists of practical requirements and design guidelines spread across different scientific communities. This makes it more and more difficult to get a succinct understanding of PVA\u2019s principal concepts, let alone an overview of this increasingly diverging field. The review and discussion of PVA presented in this paper address these issues and provide (1) a literature collection on this topic, (2) a conceptual characterization of PVA, as well as (3) a consolidated set of practical recommendations for implementing and using PVA-based visual analytics solutions."}
{"_id": "dffa8baf8e5e203972ae7acdcf7eeb76e959e794", "title": "A closer look at recognition-based graphical passwords on mobile devices", "text": "Graphical password systems based on the recognition of photographs are candidates to alleviate current over-reliance on alphanumeric passwords and PINs. However, despite being based on a simple concept -- and user evaluations consistently reporting impressive memory retention -- only one commercial example exists and overall take-up is low. Barriers to uptake include a perceived vulnerability to observation attacks; issues regarding deployability; and the impact of innocuous design decisions on security not being formalized. Our contribution is to dissect each of these issues in the context of mobile devices -- a particularly suitable application domain due to their increasing significance, and high potential to attract unauthorized access. This produces: 1) A novel yet simple solution to the intersection attack that permits greater variability in login challenges; 2) Detailed analysis of the shoulder surfing threat that considers both simulated and human testing; 3) A first look at image processing techniques to contribute towards automated photograph filtering. We operationalize our observations and gather data in a field context where decentralized mechanisms of varying entropy were installed on the personal devices of participants. Across two working weeks success rates collected from users of a high entropy version were similar to those of a low entropy version at 77%, and login durations decreased significantly across the study."}
{"_id": "be3995a8ddd43d1bf5709667f356a39c6e0112a4", "title": "Robust Inference via Generative Classifiers for Handling Noisy Labels", "text": "Large-scale datasets may contain significant proportions of noisy (incorrect) class labels, and it is well-known that modern deep neural networks (DNNs) poorly generalize from such noisy training datasets. To mitigate the issue, we propose a novel inference method, termed Robust Generative classifier (RoG), applicable to any discriminative (e.g., softmax) neural classifier pre-trained on noisy datasets. In particular, we induce a generative classifier on top of hidden feature spaces of the pre-trained DNNs, for obtaining a more robust decision boundary. By estimating the parameters of generative classifier using the minimum covariance determinant estimator, we significantly improve the classification accuracy with neither re-training of the deep model nor changing its architectures. With the assumption of Gaussian distribution for features, we prove that RoG generalizes better than baselines under noisy labels. Finally, we propose the ensemble version of RoG to improve its performance by investigating the layer-wise characteristics of DNNs. Our extensive experimental results demonstrate the superiority of RoG given different learning models optimized by several training techniques to handle diverse scenarios of noisy labels."}
{"_id": "4ce17d8b5235c8f774d243e944ab7277e0da635e", "title": "Domain Specific Sentence Level Mood Extraction from Malayalam Text", "text": "Natural Language Processing (NLP) is a field which studies the interactions between computers and natural languages. NLP is used to enable computers to attain the capability of manipulating natural languages with a level of expertise equivalent to humans. There exists a wide range of applications for NLP, of which sentiment analysis(SA) plays a major role. In sentimental analysis, the emotional polarity of a given text is analysed, and classified as positive, negative or neutral. A more difficult task is to refine the classification into different moods such as happy, sad, angry etc. Analysing a natural language for mood extraction is not at all an easy task for a computer. Even after achieving capabilities of massive amount of computation within a matter of seconds, understanding the sentiments embodied in phrases and sentences of textual information remains one of the toughest tasks for a computer till date. This paper focuses on tagging the appropriate mood in Malayalam text. Tagging is used to specify whether a sentence indicates a sad, happy or angry mood of the person involved or if the sentence contains just facts, devoid of emotions. This research work is heavily dependent on the language since the structures vary from language to language. Mood extraction and tagging has been successfully implemented for English and other European languages. For the south Indian language Malayalam, no significant work has yet been done on mood extraction. We will be focusing on domain-specific sentence-level mood extraction from Malayalam text. The extraction process involves parts-of-speech tagging of the input sentence, extracting the patterns from the input sentence which will contribute to the mood of the sentence, such as the adjective, adverb etc., and finally scoring the sentence on an emotive scale by calculating the semantic orientation of the sentence using the extracted patterns. Sentiment classification is becoming a promising topic with the rise of social media such as blogs, social networking sites, where people express their views on various topics. Mood extraction enables computers to automate the activities performed by human for making decisions based on the moods of opinions expressed in Malayalam text."}
{"_id": "c47dd7808e95a9291174302fc782cfc9fbc5c5fe", "title": "A Reinforcement Learning Approach to Interactive-Predictive Neural Machine Translation", "text": "We present an approach to interactivepredictive neural machine translation that attempts to reduce human effort from three directions: Firstly, instead of requiring humans to select, correct, or delete segments, we employ the idea of learning from human reinforcements in form of judgments on the quality of partial translations. Secondly, human effort is further reduced by using the entropy of word predictions as uncertainty criterion to trigger feedback requests. Lastly, online updates of the model parameters after every interaction allow the model to adapt quickly. We show in simulation experiments that reward signals on partial translations significantly improve character F-score and BLEU compared to feedback on full translations only, while human effort can be reduced to an average number of 5 feedback requests for every input."}
{"_id": "622c5da12c87ecc3ea8be91f79192b6e0ee559d2", "title": "A Critical Evaluation of User Participation Research: Gaps and Future Directions", "text": "In this theoretical synthesis, we juxtapose three traditions of prior research on user participation and involvement: the survey and experimental literature on the relationship between user participation and IS success, the normative literature on alternative development approaches, and qualitative studies that examine user participation from a variety of theoretical perspectives. We also assess progress made in the three bodies of literature, and identify gaps and directions of future research for improving user participation."}
{"_id": "eabc4bd921a9566688fc94a50b9b2aece0d241c1", "title": "Epidermoid cyst of the clitoris: a case report.", "text": "We report a rare case of spontaneous clitoral epidermal cyst without any declared previous female genital mutilation. This patient was successfully and surgically resected with good local and cosmetic results."}
{"_id": "513a3529a94939233989191c5a18d80fdf5dd2d7", "title": "Multi-threshold CMOS design for low power digital circuits", "text": "Multi-threshold CMOS (MTCMOS) power gating is a design technique in which a power gating transistor is connected between the logic transistors and either power or ground, thus creating a virtual supply rail or virtual ground rail, respectively. Power gating transistor sizing, transition (sleep mode to active mode) current, short circuit current and transition time are design issues for power gating design. The use of power gating design results in the delay overhead in the active mode. If both nMOS and pMOS sleep transistor are used in power gating, delay overhead will increase. This paper proposes the design methodology for reducing the delay of the logic circuits during active mode. This methodology limits the maximum value of transition current to a specified value and eliminates short circuit current. Experiment results show 16.83% reduction in the delay."}
{"_id": "3e001ba16eb634a68e2265fbd44c51900c313583", "title": "A 0.4 V 63 $\\mu$W 76.1 dB SNDR 20 kHz Bandwidth Delta-Sigma Modulator Using a Hybrid Switching Integrator", "text": "This paper presents a delta-sigma modulator operating at a supply voltage of 0.4 V. The designed delta-sigma modulator uses a proposed hybrid switching integrator and operates at a low supply voltage without clock boosting or bootstrapped switches. The proposed integrator consists of both switched-resistor and switched-capacitor operations and significantly reduces distortion at a low supply voltage. Variation in the turn-on resistance, which is the main source of distortion, is avoided by placing the switches at the virtual ground node of the amplifier. The proposed low-voltage design scheme can replace commonly-used clock boosting techniques, which rely on internal high-voltage generation circuits. A fabricated modulator achieves a 76.1 dB signal-to-noise-plus-distortion ratio (SNDR) and an 82 dB dynamic range at a 20 kHz bandwidth. The measured total power consumption is 63 \u03bcW from a 0.4 V supply voltage. The measured results show robust SNDR performance, even at \u00b110% supply voltage variations. The measured results also show stable performance over a wide temperature range."}
{"_id": "bbfc23a1a426a09a80628f055ec061d2960d3b28", "title": "PyDial: A Multi-domain Statistical Dialogue System Toolkit", "text": "Statistical Spoken Dialogue Systems have been around for many years. However, access to these systems has always been difficult as there is still no publicly available end-to-end system implementation. To alleviate this, we present PyDial, an opensource end-to-end statistical spoken dialogue system toolkit which provides implementations of statistical approaches for all dialogue system modules. Moreover, it has been extended to provide multidomain conversational functionality. It offers easy configuration, easy extensibility, and domain-independent implementations of the respective dialogue system modules. The toolkit is available for download under the Apache 2.0 license."}
{"_id": "38d1c08ed452247ca656fcf284ebd47d761ccefb", "title": "A study of injection locking and pulling in oscillators", "text": "Injection locking characteristics of oscillators are derived and a graphical analysis is presented that describes injection pulling in time and frequency domains. An identity obtained from phase and envelope equations is used to express the requisite oscillator nonlinearity and interpret phase noise reduction. The behavior of phase-locked oscillators under injection pulling is also formulated."}
{"_id": "8cbace3d8c06a942f7a429fe8667f4df7f435837", "title": "Quantum generalisation of feedforward neural networks", "text": "We propose a quantum generalisation of a classical neural network. The classical neurons are firstly rendered reversible by adding ancillary bits. Then they are generalised to being quantum reversible, i.e. unitary. (The classical networks we generalise are called feedforward, and have step-function activation functions.) The quantum network can be trained efficiently using gradient descent on a cost function to perform quantum generalisations of classical tasks. We demonstrate numerically that it can: (i) compress quantum states onto a minimal number of qubits, creating a quantum autoencoder, and (ii) discover quantum communication protocols such as teleportation. Our general recipe is theoretical and implementation-independent. The quantum neuron module can naturally be implemented photonically."}
{"_id": "3075a141f3df551c90155426fb944febd54db8b7", "title": "Depth Selective Camera: A Direct, On-Chip, Programmable Technique for Depth Selectivity in Photography", "text": "Time of flight (ToF) cameras use a temporally modulated light source and measure correlation between the reflected light and a sensor modulation pattern, in order to infer scene depth. In this paper, we show that such correlational sensors can also be used to selectively accept or reject light rays from certain scene depths. The basic idea is to carefully select illumination and sensor modulation patterns such that the correlation is non-zero only in the selected depth range - thus light reflected from objects outside this depth range do not affect the correlational measurements. We demonstrate a prototype depth-selective camera and highlight two potential applications: imaging through scattering media and virtual blue screening. This depth-selectivity can be used to reject back-scattering and reflection from media in front of the subjects of interest, thereby significantly enhancing the ability to image through scattering media-critical for applications such as car navigation in fog and rain. Similarly, such depth selectivity can also be utilized as a virtual blue-screen in cinematography by rejecting light reflecting from background, while selectively retaining light contributions from the foreground subject."}
{"_id": "ae812c869ffc63281673fbed33e86b11b5764189", "title": "Power factor improvement of a transverse flux machine with high torque density", "text": "Design of a permanent magnet transverse flux machine optimized for high torque density and high power factor is presented. The optimization process focuses on performance enhancement through improvement of power factor without sacrificing torque density. Simple magnetostatic simulation based method is proposed for the optimization process. Magnetic couplings among axially separated phases are also discussed in the article. Experimental results demonstrating the correlation of measured inductances with that of simulation results are provided."}
{"_id": "ff9c7f708e7f735a97f20088a561369f079fb657", "title": "A probabilistic model for item-based recommender systems", "text": "Recommender systems estimate the conditional probability P(\u03c7j|\u03c7i) of item \u03c7j being bought, given that a customer has already purchased item \u03c7i. While there are different ways of approximating this conditional probability, the expression is generally taken to refer to the frequency of co-occurrence of items in the same basket, or other user-specific item lists, rather than being seen as the co-occurrence of \u03c7j with \u03c7i as a proportion of all other items bought alongside \u03c7i. This paper proposes a probabilistic calculus for the calculation of conditionals based on item rather than basket counts. The proposed method has the consequence that items bough together as part of small baskets are more predictive of each other than if they co-occur in large baskets. Empirical results suggests that this may result in better take-up of personalised recommendations."}
{"_id": "a82585e8e76ddcb389c6751a6ab7533ade734f62", "title": "Efficacy of music therapy treatment based on cycles of sessions: a randomised controlled trial.", "text": "We undertook a randomised controlled trial to assess whether a music therapy (MT) scheme of administration, including three working cycles of one month spaced out by one month of no treatment, is effective to reduce behavioural disturbances in severely demented patients. Sixty persons with severe dementia (30 in the experimental and 30 in the control group) were enrolled. Baseline multidimensional assessment included demographics, Mini Mental State Examination (MMSE), Barthel Index and Neuropsychiatry Inventory (NPI) for all patients. All the patients of the experimental and control groups received standard care (educational and entertainment activities). In addition, the experimental group received three cycles of 12 active MT sessions each, three times a week. Each 30-min session included a group of three patients. Every cycle of treatment was followed by one month of wash-out. At the end of this study, MT treatment resulted to be more effective than standard care to reduce behavioural disorders. We observed a significant reduction over time in the NPI global scores in both groups (F(7,357) = 9.06, p < 0.001) and a significant difference between groups (F(1,51) = 4.84, p < 0.05) due to a higher reduction of behavioural disturbances in the experimental group at the end of the treatment (Cohen's d = 0.63). The analysis of single NPI items shows that delusions, agitation and apathy significantly improved in the experimental, but not in the control group. This study suggests the effectiveness of MT approach with working cycles in reducing behavioural disorders of severely demented patients."}
{"_id": "f2e41debdab3ecd764298e3029321892caf943f4", "title": "Uninterruptible power supply systems provide protection", "text": "Nowadays, uninterruptible power supply (UPS) systems are in use throughout the world, helping to supply a wide variety of critical loads, in situations of power outage or anomalies of the mains. This article describes the most common line problems and the relationship between these and the different existing kinds of UPS, showing their operation modes as well as the existent energy storage systems. It also addresses an overview of the control schemes applied to different distributed UPS configurations. Finally, it points out the applicability of such systems in distributed generation, microgrids, and renewable energy systems."}
{"_id": "fc2c1a9e316fdde0c95a3df6f19d78d0e408fa93", "title": "The examination timetabling problem at Universiti Malaysia Pahang: Comparison of a constructive heuristic with an existing software solution", "text": "This paper presents a real-world, capacitated examination timetabling problem from Universiti Malaysia Pahang (UMP), Malaysia. The problem has constraints which have not been modelled before, these being the distance between examination rooms and splitting exams across several rooms. These constraints provide additional challenges in defining a suitable model and in developing a constructive heuristic. One of the contributions of this paper is to formally define this real-world problem. A further contribution is the constructive heuristic that is able to produce good quality solutions for the problem, which are superior to the solutions that are produced using the university\u2019s current software. Moreover, our method adheres to all hard constraints which the current systems fails to do. 2010 Elsevier B.V. All rights reserved."}
{"_id": "a2cad4e4fd946adf6cc87e483b2ba18579de1264", "title": "No-Reference Image Quality Assessment in the Spatial Domain", "text": "We propose a natural scene statistic-based distortion-generic blind/no-reference (NR) image quality assessment (IQA) model that operates in the spatial domain. The new model, dubbed blind/referenceless image spatial quality evaluator (BRISQUE) does not compute distortion-specific features, such as ringing, blur, or blocking, but instead uses scene statistics of locally normalized luminance coefficients to quantify possible losses of \u201cnaturalness\u201d in the image due to the presence of distortions, thereby leading to a holistic measure of quality. The underlying features used derive from the empirical distribution of locally normalized luminances and products of locally normalized luminances under a spatial natural scene statistic model. No transformation to another coordinate frame (DCT, wavelet, etc.) is required, distinguishing it from prior NR IQA approaches. Despite its simplicity, we are able to show that BRISQUE is statistically better than the full-reference peak signal-to-noise ratio and the structural similarity index, and is highly competitive with respect to all present-day distortion-generic NR IQA algorithms. BRISQUE has very low computational complexity, making it well suited for real time applications. BRISQUE features may be used for distortion-identification as well. To illustrate a new practical application of BRISQUE, we describe how a nonblind image denoising algorithm can be augmented with BRISQUE in order to perform blind image denoising. Results show that BRISQUE augmentation leads to performance improvements over state-of-the-art methods. A software release of BRISQUE is available online: http://live.ece.utexas.edu/research/quality/BRISQUE_release.zip for public use and evaluation."}
{"_id": "7c294aec0d2bd03b2838c7bb817dd7b029b1a945", "title": "Forward Thinking: Building Deep Random Forests", "text": "The success of deep neural networks has inspired many to wonder whether other learners could benefit from deep, layered architectures. We present a general framework called forward thinking for deep learning that generalizes the architectural flexibility and sophistication of deep neural networks while also allowing for (i) different types of learning functions in the network, other than neurons, and (ii) the ability to adaptively deepen the network as needed to improve results. This is done by training one layer at a time, and once a layer is trained, the input data are mapped forward through the layer to create a new learning problem. The process is then repeated, transforming the data through multiple layers, one at a time, rendering a new dataset, which is expected to be better behaved, and on which a final output layer can achieve good performance. In the case where the neurons of deep neural nets are replaced with decision trees, we call the result a Forward Thinking Deep Random Forest (FTDRF). We demonstrate a proof of concept by applying FTDRF on the MNIST dataset. We also provide a general mathematical formulation, called Forward Thinking that allows for other types of deep learning problems to be considered."}
{"_id": "883459a6879571bcace868f77b4b787718e64917", "title": "Exploiting Shallow Linguistic Information for Relation Extraction from Biomedical Literature", "text": "We propose an approach for extracting relations between entities from biomedical literature based solely on shallow linguistic information. We use a combination of kernel functions to integrate two different information sources: (i) the whole sentence where the relation appears, and (ii) the local contexts around the interacting entities. We performed experiments on extracting gene and protein interactions from two different data sets. The results show that our approach outperforms most of the previous methods based on syntactic and semantic information."}
{"_id": "3b889bd9c4f04b8419dcf408e3c1941b91c3a252", "title": "Oxidants, antioxidants, and the degenerative diseases of aging.", "text": "Metabolism, like other aspects of life, involves tradeoffs. Oxidant by-products of normal metabolism cause extensive damage to DNA, protein, and lipid. We argue that this damage (the same as that produced by radiation) is a major contributor to aging and to degenerative diseases of aging such as cancer, cardiovascular disease, immune-system decline, brain dysfunction, and cataracts. Antioxidant defenses against this damage include ascorbate, tocopherol, and carotenoids. Dietary fruits and vegetables are the principal source of ascorbate and carotenoids and are one source of tocopherol. Low dietary intake of fruits and vegetables doubles the risk of most types of cancer as compared to high intake and also markedly increases the risk of heart disease and cataracts. Since only 9% of Americans eat the recommended five servings of fruits and vegetables per day, the opportunity for improving health by improving diet is great."}
{"_id": "37f27f051221833b9f166e0891a893fad925bf7c", "title": "A theoretical framework of a BIM-based multi-disciplinary collaboration platform", "text": "a r t i c l e i n f o Keywords: BIM BIM-server Multidisciplinary collaboration Operational technical requirements Support technical requirements Most complex projects in the Architecture, Engineering, and Construction (AEC) industries involve multidisciplinary collaboration and the exchange of large building data set. Traditionally, the collaboration efforts across the disciplines have been based on the frequent exchange of 2D drawings and documents. However, during the past decade, the widespread adoption of object-oriented Computer-aided Design (CAD) tools has generated more interests in Building Information Modelling (BIM). A number of BIM-compliant applications such as analysis tools, model checkers and facility management applications are being developed. This paper develops a theoretical framework of technical requirements for using BIM-server as a multidisciplinary collaboration platform. The methodologies that are used to develop the framework include focus group interviews (FGIs) with representatives from the diverse AEC disciplines, a case study of an Architectural project using a state-of-the-art BIM-server, and a critical review and analysis of current collaboration platforms that are available to the AEC industries. This paper concludes that greater emphasis should be placed on supporting technical requirements to facilitate technology management and implementation across disciplines. Their implications for user-centric technology development in design and construction industry are also discussed. Traditionally, the inter-disciplinary collaboration in the Architecture, Engineering, and Construction (AEC) industries has revolved around the exchange of 2D drawings and documents. Even though the separate design disciplines have been using 3D models and applications for visualization and design development, the collaboration practices have remained more or less 2D-based until recently. The widespread use and proliferation of object-oriented Computer-Aided Design (CAD) packages and increased constructability and level of automation in construction processes provide encouraging motives for the exchange of 3D data in the collaboration process. Building Information Modelling (BIM) is envisaged to play a significant role in this transformation. BIM is an advanced approach to object-oriented CAD, which extends the capability of traditional CAD approach by defining and applying intelligent relationships between the elements in the building model. BIM models include both geometric and non-geometric data such as object attributes and specifications. The built-in intelligence allows automated extraction of 2D drawings, documentation and other building information directly from the BIM model. This built-in intelligence also provides constraints that can reduce modelling errors and prevent technical flaws in the design, based on the rules encoded in the software [11,16,21]. Most recent CAD packages such as ArchiCAD \u2026"}
{"_id": "24beb987b722d4a25d3157a43000e685aa8f8874", "title": "A Maximum Entropy Model for Part-Of-Speech Tagging", "text": "This paper presents a statistical model which trains from a corpus annotated with Part Of Speech tags and assigns them to previously unseen text with state of the art accuracy The model can be classi ed as a Maximum Entropy model and simultaneously uses many contextual features to predict the POS tag Furthermore this paper demonstrates the use of specialized fea tures to model di cult tagging decisions discusses the corpus consistency problems discovered during the implementation of these features and proposes a training strategy that mitigates these problems"}
{"_id": "3787980e3f4c2e0e20a08027ec1eff18b4fd0c0a", "title": "TnT -- A Statistical Part-of-Speech Tagger", "text": "Trigrams'n'Tags (TnT) is an efficient statistical part-of-speech tagger. Contrary to claims found elsewhere in the literature, we argue that a tagger based on Markov models performs at least as well as other current approaches, including the Maximum Entropy framework. A recent comparison has even shown that TnT performs significantly better for the tested corpora. We describe the basic model of TnT, the techniques used for smoothing and for handling unknown words. Furthermore, we present evaluations on two corpora."}
{"_id": "6a473e9e0a2183928b2d78bddf4b3d01ff46c454", "title": "Chunking with Support Vector Machines", "text": "We apply Support Vector Machines (SVMs) to identify English base phrases (chunks). SVMs are known to achieve high generalization performance even with input data of high dimensional feature spaces. Furthermore, by the Kernel principle, SVMs can carry out training with smaller computational overhead independent of their dimensionality. We apply weighted voting of 8 SVMsbased systems trained with distinct chunk representations. Experimental results show that our approach achieves higher accuracy than previous approaches."}
{"_id": "74f9cd73d95ddb6bf64b946aa9f5ddeaedf4b345", "title": "WebCrow: A Web-Based System for Crossword Solving", "text": "Language games represent one of the most fascinating challenges of research in artificial intelligence. In this paper we give an overview of WebCrow, a system that tackles crosswords using the Web as a knowledge base. This appears to be a novel approach with respect to the available literature. It is also the first solver for nonEnglish crosswords and it has been designed to be potentially multilingual. Although WebCrow has been implemented only in a preliminary version, it already displays very interesting results reaching the performance of a human beginner: crosswords that are \u201ceasy\u201d for expert humans are solved, within competition time limits, with 80% of correct words and over 90% of correct letters. Introduction Crosswords are probably the most played language puzzles worldwide. Until recently, the research in artificial intelligence focused its interest only on the NP-complete crossword generation problem (Mazlack 1976) (Ginsberg et al. 1990) which can now be solved in a few seconds for reasonable dimensions (Ginsberg 1993). Conversely, problems like solving crosswords from clues have been defined as AI-complete (Littman 2000) and are extremely challenging for machines since there is no closedworld assumption and they require human-level knowledge. Interestingly, AI has nowadays the opportunity to exploit a stable nucleus of technology, such as search engines, information retrieval and machine learning techniques, that can enable computers to enfold with semantics real-life concepts. We will present here a software system, called WebCrow, whose major assumption is to attack crosswords making use of the Web as its primary source of knowledge, being this an extremely rich and self-updating repository of human knowledge1. This paper gives a general overview of the modules and the subproblems present in the project. Copyright c \u00a9 2005, American Association for Artificial Intelligence (www.aaai.org). All rights reserved. 1The WebCrow project, with the emphasis on the power coming from searching the Web, was briefly described in the Nature magazine(News-Nature 2004) and mentioned in the AAAI Web site concerning games and puzzles http://www.aaai.org/AITopics/html/crosswd.html To the best of our knowledge the only significant attempt reported in the literature to tackle this problem is the Proverb system, which has reached human-like performances (Keim, Shazeer, & Littman 1999). Unlike Proverb, WebCrow does not possess any knowledge-specific expert module nor a crossword database of great dimensions. WebCrow, instead, in order to stress the generality of its knowledge and language-independence, has in its core a module performing a special form of question answering on the Web that we shall call clue-answering. The second goal of the system, that is filling the crossword grid with the best set of candidate answers, has been tackled as a Probabilistic-CSP. Italian crosswords are structurally similar to Americans, but they differ in two aspects: two-letter words2 are allowed and, more importantly, they contain stronger and more ambiguous references to socio-cultural and political topics. The latter is a phenomenon especially present in newspapers and it introduces an additional degree of complexity in crossword solving, since it requires the possession of a knowledge that is extremely broad, fresh and also robust to volunteer vagueness and ambiguity. 
Neologisms and compound words are often present. We have collected 685 examples of Italian crosswords. The majority was obtained from two sources: the main Italian crossword magazine La Settimana Enigmistica and the main on-line newspaper\u2019s crossword section La Repubblica. 60 puzzles were randomly chosen to form the experimental test suite. The remaining part constituted a database of clue-answer pairs used as an aid in the answering process. In this paper the challenge of WebCrow is to solve Italian crosswords within a 15 minutes time limit (running the program on a common laptop), as in real competitions. The version of WebCrow that is discussed here is preliminary but it has already given very promising results. WebCrow averaged around 70% words correct and 80% letters correct in the overall test set. On the examples that expert players consider \u201ceasy\u201d WebCrow performed with 80% words correct (100% in one case) and over 90% letters correct. Architectural overview WebCrow has been designed as a modular and data-driven system. Two are the main sources that it applies to: Web 2Name initials, acronyms, abbreviations or portions of a word."}
{"_id": "704e95e632be9fe0a5f2f15aa4a19b9341635930", "title": "Muscle glycogen synthesis after exercise: effect of time of carbohydrate ingestion.", "text": "The time of ingestion of a carbohydrate supplement on muscle glycogen storage postexercise was examined. Twelve male cyclists exercised continuously for 70 min on a cycle ergometer at 68% VO2max, interrupted by six 2-min intervals at 88% VO2max, on two separate occasions. A 25% carbohydrate solution (2 g/kg body wt) was ingested immediately postexercise (P-EX) or 2 h postexercise (2P-EX). Muscle biopsies were taken from the vastus lateralis at 0, 2, and 4 h postexercise. Blood samples were obtained from an antecubital vein before and during exercise and at specific times after exercise. Muscle glycogen immediately postexercise was not significantly different for the P-EX and 2P-EX treatments. During the first 2 h postexercise, the rate of muscle glycogen storage was 7.7 mumol.g wet wt-1.h-1 for the P-EX treatment, but only 2.5 mumol.g wet wt-1.h-1 for the 2P-EX treatment. During the second 2 h of recovery, the rate of glycogen storage slowed to 4.3 mumol.g wet wt-1.h-1 during treatment P-EX but increased to 4.1 mumol.g wet wt-1.h-1 during treatment 2P-EX. This rate, however, was still 45% slower (P less than 0.05) than that for the P-EX treatment during the first 2 h of recovery. This slower rate of glycogen storage occurred despite significantly elevated plasma glucose and insulin levels. The results suggest that delaying the ingestion of a carbohydrate supplement post-exercise will result in a reduced rate of muscle glycogen storage."}
{"_id": "9d104f4508b5b9f3bfd4258cb182493b58a8ae69", "title": "VERL: An Ontology Framework for Representing and Annotating Video Events", "text": "in large multimedia databases and make inferences about the data stream\u2019s content. A human operator can specify gross indices, such as the date or hour, and perhaps the location and the activity (for example, a football game) at the time of capture. Finer-grain annotations require separating the video into meaningful segments such as shots and scenes. Many automated techniques and annotation formats for capturing this type of information exist. However, the descriptions they produce don\u2019t define the data stream content adequately for semantic retrieval and reasoning. Such descriptions must be based on observable or inferrable events in the data streams. We describe a framework for video event representation and annotation that\u2019s based on the definition of an ontology suitable for video content. An ontology consists of a specific vocabulary for describing a certain reality and a set of explicit assumptions regarding the vocabulary\u2019s intended meaning.1 The Video Event Representation Language (VERL)2-4 describes an event ontology, and the Video Event Markup Language (VEML)4 lets us annotate instances of the events described in VERL (see the sidebar \u201cUsing VERL and VEML\u201d on p. 78 for an example). Annotating video in VEML consists of describing instances of events and objects, in a previously defined ontology using VERL. Figure 1 illustrates the conceptual elements involved and their relationships: annotations draw on one or several domain-specific ontologies, defined according to VERL concepts and constructs. VEML incorporates extensible event and object type hierarchies rooted in VERL\u2019s event and object concepts. The annotations also draw on VEML for content organization. Marking up data streams in a well-defined language rooted in an ontology enables nonambiguous data sharing among users. Furthermore, annotation data is accessible to automatic machine manipulation for indexing or inferencing. The framework we describe resulted from discussions at a series of workshops sponsored by the US government\u2019s Advanced Research and Development Activity (ARDA). It therefore includes the intellectual contributions of many individuals. Currently, only a small community of researchers is using VERL and VEML; this article aims to publicize the framework to a broader community as its relevance depends on its widespread adoption. An earlier description can be found elsewhere.5"}
{"_id": "2d54dc50bbc1a0a63b6f1000bc255f88d57a7a63", "title": "It\u2019s All Fun and Games until Someone Annotates: Video Games with a Purpose for Linguistic Annotation", "text": "Annotated data is prerequisite for many NLP applications. Acquiring large-scale annotated corpora is a major bottleneck, requiring significant time and resources. Recent work has proposed turning annotation into a game to increase its appeal and lower its cost; however, current games are largely text-based and closely resemble traditional annotation tasks. We propose a new linguistic annotation paradigm that produces annotations from playing graphical video games. The effectiveness of this design is demonstrated using two video games: one to create a mapping from WordNet senses to images, and a second game that performs Word Sense Disambiguation. Both games produce accurate results. The first game yields annotation quality equal to that of experts and a cost reduction of 73% over equivalent crowdsourcing; the second game provides a 16.3% improvement in accuracy over current state-of-the-art sense disambiguation games with WordNet."}
{"_id": "319e868be52dfdccf8ade7e59d880e0a09654312", "title": "Natural Language Processing for Information Retrieval", "text": "The paper summarizes the essential properties of document retrieval and reviews both conventional practice and research findings, the latter suggesting that simple statistical techniques can be effective. It then considers the new opportunities and challenges presented by the user\u2019s ability to search full text directly (rather than e.g. titles and abstracts), and suggests appropriate approaches to doing this, with a focus on the potential role of natural language processing. The paper also comments on possible connections with data and knowledge retrieval, and concludes by emphasizing the importance of rigorous performance testing."}
{"_id": "6a2fe560574b76994ab1148b4dae0bfb89e3a3e3", "title": "Anticipating the Future by Constructing Human Activities using Object Affordances", "text": "An important aspect of human perception is anticipation and anticipating which activities will a human do next (and how to do them) in useful for many applications, for example, anticipation enables an assistive robot to plan ahead for reactive responses in the human environments. In this work, we present a constructive approach for generating various possible future human activities by reasoning about the rich spatial-temporal relations through object affordances. We represent each possible future using an anticipatory temporal conditional random field (ATCRF) where we sample the nodes and edges corresponding to future object trajectories and human poses from a generative model. We then represent the distribution over the potential futures using a set of constructed ATCRF particles. In extensive evaluation on CAD-120 human activity RGB-D dataset, for new subjects (not seen in the training set), we obtain an activity anticipation accuracy (defined as whether one of top three predictions actually happened) of 75.4%, 69.2% and 58.1% for an anticipation time of 1, 3 and 10 seconds respectively. 1"}
{"_id": "aaa7047198d4a952865b85c0ce3511cd87345f10", "title": "Constraint-Based Approaches for Balancing Bike Sharing Systems", "text": "In order to meet the users\u2019 demand, bike sharing systems must be regularly rebalanced. The problem of balancing bike sharing systems (BBSS) is concerned with designing optimal tours and operating instructions for relocating bikes among stations to maximally comply with the expected future bike demands. In this paper, we tackle the BBSS by means of Constraint Programming: first, we introduce two novel constraint models for the BBSS including a smart branching strategy that focusses on the most promising routes. Second, in order to speed-up the search process, we incorporate both models in a Large Neighborhood Search (LNS) approach that is adapted to the respective CP model. Third, we perform an extensive computational evaluation on instances based on real-world data, where we see that the LNS approach outperforms the Branch & Bound approach and is competitive with other existing approaches."}
{"_id": "56c5602f6ea8e2dae4f983ffc894c305e6ed2092", "title": "Informative path planning for an autonomous underwater vehicle", "text": "We present a path planning method for autonomous underwater vehicles in order to maximize mutual information. We adapt a method previously used for surface vehicles, and extend it to deal with the unique characteristics of underwater vehicles. We show how to generate near-optimal paths while ensuring that the vehicle stays out of high-traffic areas during predesignated time intervals. In our objective function we explicitly account for the fact that underwater vehicles typically take measurements while moving, and that they do not have the ability to communicate until they resurface. We present field results from ocean trials on planning paths for a specific AUV, an underwater glider."}
{"_id": "8f54d01622d88dde15f48d4efa5dce23fb25c3a9", "title": "Memristive devices for computing: Beyond CMOS and beyond von Neumann", "text": "Traditional CMOS technology and its continuous down-scaling have been the driving force to improve performance of existing computer architectures. Today, however, both technology and computer architectures are facing challenges that make them incapable of delivering the growing computing performance requirement at pre-defined constraints. This forces the exploration of both novel architectures and technologies; not only to maintain the economic profit of technology scaling, but also to enable the computing architecture solutions for big-data and data-intensive applications. This paper discusses the emerging memristive device as a complement (or an alternative) to CMOS devices and shows how such devices enable novel computing paradigms that will solve the challenges of today's architectures for certain applications. The paper covers not only the potential of memristor devices in enabling novel memory technologies, logic design styles, and arithmetic operations, but also their potential in enabling in-memory computing and neuromorphic computing."}
{"_id": "e38bb2fb6aad0158be9d9257a4ea259daee28bdb", "title": "An Approach for Factory-Wide Control Utilizing Virtual Metrology", "text": "In the semiconductor manufacturing industry, market demands and technology trends drive manufacturers towards increases in wafer size and decreases in device size. Application of factory-wide advanced process control (APC) techniques down to the wafer-to-wafer (W2W) level control capability is becoming the only choice to cope with the demanding situation. However, one of the main limitations in undertaking W2W control is the nonavailability of timely metrology data at the wafer level. Recently virtual metrology (VM) techniques have been proposed to provide timely wafer level metrology data; they have the potential to be used in realizing W2W control. In this paper, the VM approach to W2W control on factory level is described. VM for an individual process is realized by utilizing the preprocess metrology and the process data from the underlying tools that is generally collected in real time for fault detection purposes. The VM implementation for factory-wide run-to-run control brings unique opportunities and issues to the forefront such as dealing with expected lower quality of VM data, coordination between VM modules of cascading processes for better prediction quality, flexibility of the factory-wide controller to accommodate lower quality VM data, dynamic adjustments to the target values of individual processes by the factory-wide controller when using VM data, and dealing with metrology delays at the factory level. Near and long-term solutions are presented to address these issues in order to allow VM to be used today and become an integral part of the factory-wide APC solution for W2W control."}
{"_id": "ea38789c6687e7ccb483693046fff5293e903c51", "title": "High Confidence Policy Improvement", "text": "We present a batch reinforcement learning (RL) algorithm that provides probabilistic guarantees about the quality of each policy that it proposes, and which has no hyper-parameters that require expert tuning. The user may select any performance lower-bound, \u03c1\u2212, and confidence level, \u03b4, and our algorithm will ensure that the probability that it returns a policy with performance below \u03c1\u2212 is at most \u03b4. We then propose an incremental algorithm that executes our policy improvement algorithm repeatedly to generate multiple policy improvements. We show the viability of our approach with a simple gridworld and the standard mountain car problem, as well as with a digital marketing application that uses real world data."}
{"_id": "7084f88ff8ca11584d826c1751c4b90b7da713fd", "title": "UltraGesture: Fine-Grained Gesture Sensing and Recognition", "text": "With the rising of AR/VR technology and miniaturization of mobile devices, gesture is becoming an increasingly popular means of interacting with smart devices. Some pioneer ultrasound based human gesture recognition systems have been proposed. They mostly rely on low resolution Doppler Effect, and hence focus on whole hand motion and cannot deal with minor finger motions. In this paper, we present UltraGesture, a Channel Impulse Response (CIR) based ultrasonic finger motion perception and recognition system. CIR measurements can provide with 7 mm resolution, rendering it sufficient for minor finger motion recognition. UltraGesture encapsulates CIR measurements into an image, and builds a Convolutional Neural Network model to classify these images into different categories, which corresponding to distinct gestures. Our system runs on commercial speakers and microphones that already exist on most mobile devices without hardware modification. Our results show that UltraGesture achieves an average accuracy of greater than 97% for 12 gestures including finger click and rotation."}
{"_id": "5d9b1dc587a8c6abae88475353140d58fde81ca7", "title": "A classification of transaction processing systems", "text": "The problems in building a transaction processing system are discussed, and it is shown that the difficulties are a function of specific attributes of the underlying database system. A model of a transaction processing system is presented, and five system dimensions important in classifying transaction processing systems-the process, machine, heterogeneity, data, and site components-are introduced. The specific problems posed by various combinations of system characteristics are analyzed. The evolution of transaction processing systems are described in terms of the framework.<>"}
{"_id": "787a77968973e072ce901d2145f77ad70f633bda", "title": "Characterizing 3D Floating Gate NAND Flash", "text": "In this paper, we characterize a state-of-the-art 3D floating gate NAND flash memory through comprehensive experiments on an FPGA platform. Then, we present distinct observations on performance and reliability, such as operation latencies and various error patterns. We believe that through our work, novel 3D NAND flash-oriented designs can be developed to achieve better performance and reliability."}
{"_id": "2d08ee24619ef41164e076fee2b268009aedd0f0", "title": "Web Mining Research: A Survey", "text": "With the huge amount of information available online, the World Wide Web is a fertile area for data mining research. The Web mining research is at the cross road of research from several research communities, such as database, information retrieval, and within AI, especially the sub-areas of machine learning and natural language processing. However, there is a lot of confusions when comparing research efforts from different point of views. In this paper, we survey the research in the area of Web mining, point out some confusions regarded the usage of the term Web mining and suggest three Web mining categories. Then we situate some of the research with respect to these three categories. We also explore the connection between the Web mining categories and the related agent paradigm. For the survey, we focus on representation issues, on the process, on the learning algorithm, and on the application of the recent works as the criteria. We conclude the paper with some research issues."}
{"_id": "b9e51f3dd6dff7c5cfdd851340d7c2ae51bf1ac4", "title": "SpaceTwist: Managing the Trade-Offs Among Location Privacy, Query Performance, and Query Accuracy in Mobile Services", "text": "In a mobile service scenario, users query a server for nearby points of interest but they may not want to disclose their locations to the service. Intuitively, location privacy may be obtained at the cost of query performance and query accuracy. The challenge addressed is how to obtain the best possible performance, subjected to given requirements for location privacy and query accuracy. Existing privacy solutions that use spatial cloaking employ complex server query processing techniques and entail the transmission of large quantities of intermediate result. Solutions that use transformation-based matching generally fall short in offering practical query accuracy guarantees. Our proposed framework, called SpaceTwist, rectifies these shortcomings for k nearest neighbor (kNN) queries. Starting with a location different from the user's actual location, nearest neighbors are retrieved incrementally until the query is answered correctly by the mobile terminal. This approach is flexible, needs no trusted middleware, and requires only well-known incremental NN query processing on the server. The framework also includes a server-side granular search technique that exploits relaxed query accuracy guarantees for obtaining better performance. The paper reports on empirical studies that elicit key properties of SpaceTwist and suggest that the framework offers very good performance and high privacy, at low communication cost."}
{"_id": "222dbc207e01ddef92e2a0822218368573b0cef0", "title": "Nonlinear controller design for a magnetic levitation device", "text": "Various applications of micro-robotic technology suggest the use of new actuator systems which allow motions to be realized with micrometer accuracy. Conventional actuation techniques such as hydraulic or pneumatic systems are no longer capable of fulfilling the demands of hi-tech micro-scale areas such as miniaturized biomedical devices and MEMS production equipment. These applications pose significantly different problems from actuation on a large scale. In particular, large scale manipulation systems typically deal with sizable friction, whereas micro manipulation systems must minimize friction to achieve submicron precision and avoid generation of static electric fields. Recently, the magnetic levitation technique has been shown to be a feasible actuation method for microscale applications. In this paper, a magnetic levitation device is recalled from the authors\u2019 previous work and a control approach is presented to achieve precise motion control of a magnetically levitated object with sub-micron positioning accuracy. The stability of the controller is discussed through the Lyapunov method. Experiments are conducted and showed that the proposed control technique is capable of performing a positioning operation with rms accuracy of 16 lm over a travel range of 30 mm. The nonlinear control strategy proposed in this paper showed a significant improvement in comparison with the conventional control strategies for large gap magnetic levitation systems."}
{"_id": "aad658d9dee24f15014360af83d63df34f47f12e", "title": "Motor control exercise for chronic non-specific low-back pain.", "text": "BACKGROUND\nNon-specific low back pain (LBP) is a common condition. It is reported to be a major health and socioeconomic problem associated with work absenteeism, disability and high costs for patients and society. Exercise is a modestly effective treatment for chronic LBP. However, current evidence suggests that no single form of exercise is superior to another. Among the most commonly used exercise interventions is motor control exercise (MCE). MCE intervention focuses on the activation of the deep trunk muscles and targets the restoration of control and co-ordination of these muscles, progressing to more complex and functional tasks integrating the activation of deep and global trunk muscles. While there are previous systematic reviews of the effectiveness of MCE, recently published trials justify an updated systematic review.\n\n\nOBJECTIVES\nTo evaluate the effectiveness of MCE in patients with chronic non-specific LBP.\n\n\nSEARCH METHODS\nWe conducted electronic searches in CENTRAL, MEDLINE, EMBASE, five other databases and two trials registers from their inception up to April 2015. We also performed citation tracking and searched the reference lists of reviews and eligible trials.\n\n\nSELECTION CRITERIA\nWe included randomised controlled trials (RCTs) that examined the effectiveness of MCE in patients with chronic non-specific LBP. We included trials comparing MCE with no treatment, another treatment or that added MCE as a supplement to other interventions. Primary outcomes were pain intensity and disability. We considered function, quality of life, return to work or recurrence as secondary outcomes. All outcomes must have been measured with a valid and reliable instrument.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo independent review authors screened the search results, assessed risk of bias and extracted the data. A third independent review author resolved any disagreement. We assessed risk of bias using the Cochrane Back and Neck (CBN) Review Group expanded 12-item criteria (Furlan 2009). We extracted mean scores, standard deviations and sample sizes from the included trials, and if this information was not provided we calculated or estimated them using methods recommended in the Cochrane Handbook. We also contacted the authors of the trials for any missing or unclear information. We considered the following time points: short-term (less than three months after randomisation); intermediate (at least three months but less than 12 months after randomisation); and long-term (12 months or more after randomisation) follow-up. We assessed heterogeneity by visual inspection of the forest plots, and by calculating the Chi(2) test and the I(2) statistic. We combined results in a meta-analysis expressed as mean difference (MD) and 95% confidence interval (CI). We assessed the overall quality of the evidence using the GRADE approach.\n\n\nMAIN RESULTS\nWe included 29 trials (n = 2431) in this review. The study sample sizes ranged from 20 to 323 participants. We considered a total of 76.6% of the included trials to have a low risk of bias, representing 86% of all participants. There is low to high quality evidence that MCE is not clinically more effective than other exercises for all follow-up periods and outcomes tested. 
When compared with minimal intervention, there is low to moderate quality evidence that MCE is effective for improving pain at short, intermediate and long-term follow-up with medium effect sizes (long-term, MD -12.97; 95% CI -18.51 to -7.42). There was also a clinically important difference for the outcomes function and global impression of recovery compared with minimal intervention. There is moderate to high quality evidence that there is no clinically important difference between MCE and manual therapy for all follow-up periods and outcomes tested. Finally, there is very low to low quality evidence that MCE is clinically more effective than exercise and electrophysical agents (EPA) for pain, disability, global impression of recovery and quality of life with medium to large effect sizes (pain at short term, MD -30.18; 95% CI -35.32 to -25.05). Minor or no adverse events were reported in the included trials.\n\n\nAUTHORS' CONCLUSIONS\nThere is very low to moderate quality evidence that MCE has a clinically important effect compared with a minimal intervention for chronic low back pain. There is very low to low quality evidence that MCE has a clinically important effect compared with exercise plus EPA. There is moderate to high quality evidence that MCE provides similar outcomes to manual therapies and low to moderate quality evidence that it provides similar outcomes to other forms of exercises. Given the evidence that MCE is not superior to other forms of exercise, the choice of exercise for chronic LBP should probably depend on patient or therapist preferences, therapist training, costs and safety."}
{"_id": "51be84ac278371e6b1cb487c8e480bbaed5c57c4", "title": "Treatment planning of the edentulous maxilla", "text": "The predictability of successful osseointegrated implant rehabilitation of the edentulous jaw as described by Branemark et al., introduced a new era of management for the edentulous predicament. Implant rehabilitation of the edentulous maxilla remains one of the most complex restorative challenges because of the number of variables that affect both the aesthetic and functional aspect of the prosthesis. Among the prosthesis designs used to treat the edentulous maxilla are fixed or removable implant-supported restorations. Since the aesthetic requirements and preoperative situation of each patient varies, considerable time must be spent on accurate diagnosis to ensure patient desires are satisfied and predictable outcomes are achieved. The purpose of this article is to compare the treatment options and prosthesis designs for the edentulous maxilla. Emphasis will be placed on diagnosis and treatment planning. Criteria will be given to guide the practitioner in deciding whether a fixed or removable restoration should be placed. This objective will be accomplished through the review of cases with regard to varying design considerations and factors that influence the decision-making process."}
{"_id": "8de8acd9e93b4cf1635b4b6992daf6aee41645bd", "title": "An Efficient Diagnosis System for Parkinson's Disease Using Kernel-Based Extreme Learning Machine with Subtractive Clustering Features Weighting Approach", "text": "A novel hybrid method named SCFW-KELM, which integrates effective subtractive clustering features weighting and a fast classifier kernel-based extreme learning machine (KELM), has been introduced for the diagnosis of PD. In the proposed method, SCFW is used as a data preprocessing tool, which aims at decreasing the variance in features of the PD dataset, in order to further improve the diagnostic accuracy of the KELM classifier. The impact of the type of kernel functions on the performance of KELM has been investigated in detail. The efficiency and effectiveness of the proposed method have been rigorously evaluated against the PD dataset in terms of classification accuracy, sensitivity, specificity, area under the receiver operating characteristic (ROC) curve (AUC), f-measure, and kappa statistics value. Experimental results have demonstrated that the proposed SCFW-KELM significantly outperforms SVM-based, KNN-based, and ELM-based approaches and other methods in the literature and achieved highest classification results reported so far via 10-fold cross validation scheme, with the classification accuracy of 99.49%, the sensitivity of 100%, the specificity of 99.39%, AUC of 99.69%, the f-measure value of 0.9964, and kappa value of 0.9867. Promisingly, the proposed method might serve as a new candidate of powerful methods for the diagnosis of PD with excellent performance."}
{"_id": "f18821d47b2bee62a7b3047ed7b71a5895214619", "title": "Speaker-listener neural coupling underlies successful communication.", "text": "Verbal communication is a joint activity; however, speech production and comprehension have primarily been analyzed as independent processes within the boundaries of individual brains. Here, we applied fMRI to record brain activity from both speakers and listeners during natural verbal communication. We used the speaker's spatiotemporal brain activity to model listeners' brain activity and found that the speaker's activity is spatially and temporally coupled with the listener's activity. This coupling vanishes when participants fail to communicate. Moreover, though on average the listener's brain activity mirrors the speaker's activity with a delay, we also find areas that exhibit predictive anticipatory responses. We connected the extent of neural coupling to a quantitative measure of story comprehension and find that the greater the anticipatory speaker-listener coupling, the greater the understanding. We argue that the observed alignment of production- and comprehension-based processes serves as a mechanism by which brains convey information."}
{"_id": "4a2850c4d74825508b2aedc88f50ab1f664a87ee", "title": "Practical Signal-Dependent Noise Parameter Estimation From a Single Noisy Image", "text": "The additive white Gaussian noise is widely assumed in many image processing algorithms. However, in the real world, the noise from actual cameras is better modeled as signal-dependent noise (SDN). In this paper, we focus on the SDN model and propose an algorithm to automatically estimate its parameters from a single noisy image. The proposed algorithm identifies the noise level function of signal-dependent noise assuming the generalized signal-dependent noise model and is also applicable to the Poisson-Gaussian noise model. The accuracy is achieved by improved estimation of local mean and local noise variance from the selected low-rank patches. We evaluate the proposed algorithm with both synthetic and real noisy images. Experiments demonstrate that the proposed estimation algorithm outperforms the state-of-the-art methods."}
{"_id": "4614382c4d8ca41c6d145c61dba29c2cc39301bd", "title": "Human-Assisted Graph Search: It's Okay to Ask Questions", "text": "We consider the problem of human-assisted graph search: given a directed acyclic graph with some (unknown) target node(s), we consider the problem of finding the target node(s) by asking an omniscient human questions of the form \u201cIs there a target node that is reachable from the current node?\u201d. This general problem has applications in many domains that can utilize human intelligence, including curation of hierarchies, debugging workflows, image segmentation and categorization, interactive search and filter synthesis. To our knowledge, this work provides the first formal algorithmic study of the optimization of human computation for this problem. We study various dimensions of the problem space, providing algorithms and complexity results. We also compare the performance of our algorithm against other algorithms, for the problem of webpage categorization on a real taxonomy. Our framework and algorithms can be used in the design of an optimizer for crowdsourcing platforms such as Mechanical Turk."}
{"_id": "c3389acd6f5b79be0ca7fa187b6bc853faa087fc", "title": "Joint detection and decoding for MIMO systems with polar codes", "text": "As well known, the near-optimal K-best detection is popular in multiple-input and multiple-output (MIMO) systems. In this paper, we first propose the joint approaches of K-best detection and polar decoding. For joint detection and decoding (JDD) approach, both hard and soft decisions are considered. The simplified successive cancellation (SSC) decoding is exploited for hard decision, and the successive cancellation list (SCL) decoding is used as soft decision. The system setup for JDD is als o introduced, in which the modulation points across several channels are considered together. Simulation results have demonstrated the performance advantage of the JDD algorithms over the separated ones. For 1/2-rate polar codes, JDD schemes show 50% complexity reduction compared to the separated ones. Furthermore, by employing SSC hard decoding, the JDD algorithm is promising for high-throughput and low-complexity application s."}
{"_id": "40ccca414291f5585fefb22cb9d3e841ef63fc27", "title": "The Alignment Template Approach to Statistical Machine Translation", "text": "A phrase-based statistical machine translation approach the alignment template approach is described. This translation approach allows for general many-to-many relations between words. Thereby, the context of words is taken into account in the translation model, and local changes in word order from source to target language can be learned explicitly. The model is described using a log-linear modeling approach, which is a generalization of the often used source-channel approach. Thereby, the model is easier to extend than classical statistical machine translation systems. We describe in detail the process for learning phrasal translations, the feature functions used, and the search algorithm. The evaluation of this approach is performed on three different tasks. For the German-English speech Verbmobil task, we analyze the effect of various system components. On the French-English Canadian Hansards task, the alignment template system obtains significantly better results than a single-word-based translation model. In the Chinese-English 2002 National Institute of Standards and Technology (NIST) machine translation evaluation it yields statistically significantly better NIST scores than all competing research and commercial translation systems."}
{"_id": "ed31fd132d5f4d2912215f74ac05871a5c658460", "title": "Adolescent Internet Addiction: Testing the Association Between Self-Esteem, the Perception of Internet Attributes, and Preference for Online Social Interactions", "text": "There is a general consensus that Internet addiction (IA) is mainly related to social aspects of the Web, especially among adolescents. The empirical link between poor social skills and IA is well documented; however, theoretical explanations for this relationship are less developed. One possibility is that people with poor social skills are especially prone to develop a preference for online social interaction (POSI), which, in turn, predicts problematic usage. This hypothesis has been tested for loneliness and social anxiety, but not for self-esteem (SE; one of the main antecedents of IA). Furthermore, the mediating role of the perceived relevance of some Internet features (e.g., anonymity) in the relationship between SE and POSI has never been investigated. A cross-sectional study was conducted with 257 adolescents. Using mediation analyses, we found evidence among females for the mediating role of (a) POSI in the relationship between SE and IA, and (b) the subjective relevance of some Internet features in the association between SE and POSI. No significant effects were found for males."}
{"_id": "c4896778fe323243dd75f2bfea193f5da8fbfb14", "title": "Automatic Detection of Calibration Grids in Time-of-Flight Images", "text": "It is convenient to calibrate time-of-flight cameras by established methods, using images of a chequerboard pattern. The low resolution of the amplitude image, however, makes it difficult to detect the board reliably. Heuristic detection methods, based on connected image-components, perform very poorly on this data. An alternative, geometrically-principled method is introduced here, based on the Hough transform. The projection of a chequerboard is represented by two pencils of lines, which are identified as oriented clusters in the gradient-data of the image. A projective Hough transform is applied to each of the two clusters, in axis-aligned coordinates. The range of each transform is properly bounded, because the corresponding gradient vectors are approximately parallel. Each of the two transforms contains a series of collinear peaks; one for every line in the given pencil. This pattern is easily detected, by sweeping a dual line through the transform. The proposed Hough-based method is compared to the standard OpenCV detection routine, by application to several hundred time-of-flight images. It is shown that the new method detects significantly more calibration boards, over a greater variety of poses, without any overall loss of accuracy. This conclusion is based on an analysis of both geometric and photometric error."}
{"_id": "3ff964e3764db5d0e12bbbd9eaa0b29e488f4b30", "title": "Reinforcement Learning with Self-Modifying Policies", "text": "A learner\u2019s modifiable components are called its policy. An algorithm that modifies the policy is a learning algorithm. If the learning algorithm has modifiable components represented as part of the policy, then we speak of a self-modifying policy (SMP). SMPs can modify the way they modify themselves etc. They are of interest in situations where the initial learning algorithm itself can be improved by experience \u2014 this is what we call \u201clearning to learn\u201d. How can we force some (stochastic) SMP to trigger better and better self-modifications? The success-story algorithm (SSA) addresses this question in a lifelong reinforcement learning context. During the learner\u2019s life-time, SSA is occasionally called at times computed according to SMP itself. SSA uses backtracking to undo those SMP-generated SMP-modifications that have not been empirically observed to trigger lifelong reward accelerations (measured up until the current SSA call \u2014 this evaluates the long-term effects of SMP-modifications setting the stage for later SMP-modifications). SMP-modifications that survive SSA represent a lifelong success history. Until the next SSA call, they build the basis for additional SMP-modifications. Solely by self-modifications our SMP/SSA-based learners solve a complex task in a partially observable environment (POE) whose state space is far bigger than most reported in the POE literature."}
{"_id": "780b05a35f2c7dd4b4d6e2a844ef5e145f1972ae", "title": "Speaker-sensitive dual memory networks for multi-turn slot tagging", "text": "In multi-turn dialogs, natural language understanding models can introduce obvious errors by being blind to contextual information. To incorporate dialog history, we present a neural architecture with Speaker-Sensitive Dual Memory Networks which encode utterances differently depending on the speaker. This addresses the different extents of information available to the system \u2014 the system knows only the surface form of user utterances while it has the exact semantics of system output. We performed experiments on real user data from Microsoft Cortana, a commercial personal assistant. The result showed a significant performance improvement over the state-of-the-art slot tagging models using contextual information."}
{"_id": "b2a1e70d6a489aebade29cb583a5cf2e08e5f4ad", "title": "Integrated force and distance sensing using elastomer-embedded commodity proximity sensors", "text": "We describe a combined force and distance sensor using a commodity infrared distance sensor embedded in a transparent elastomer with applications in robotic manipulation. Prior to contact, the sensor works as a distance sensor (0\u201310 cm), whereas after contact the material doubles as a spring, with force proportional to the compression of the elastomer (0\u2013 5 N). We describe its principle of operation and design parameters, including polymer thickness, mixing ratio, and emitter current, and show that the sensor response has an inflection point at contact that is independent of an object\u2019s surface properties. We then demonstrate how two arrays of eight sensors, each mounted on a standard Baxter gripper, can be used to (1) improve gripper alignment during grasping, (2) determine contact points with objects, and (3) obtain crude 3D models that can serve to determine possible grasp locations."}
{"_id": "99f53c1117ead2974c5ff7729a77a6755809e133", "title": "Accurate Synchronization of Speech and EGG Signal Using Phase Information", "text": "Synchronization of speech and corresponding Electroglottographic (EGG) signal is very helpful for speech processing research and development. During simultaneous recording of speech and EGG signals, the speech signal will be delayed by the duration corresponding to the speech wave propagation from the glottis to the microphone relative to the EGG signal. Even in same session of recording, the delay between the speech and the EGG signals is varying due to the natural movement of speaker\u2019s head and movement of microphone in case MIC is held by hand. To study and model the information within glottal cycles, precise synchronization of speech and EGG signals is of utmost necessity. In this work, we propose a method for synchronization of speech and EGG signals based on the glottal activity information present in the signals. The performance of the proposed method is demonstrated by estimation of delay between the two signals (speech signals and corresponding EGG signals) and synchronizing these signals by compensating the estimated delay. The CMU-Arctic database consist of simultaneous recording of the speech and the EGG signals is used for the evaluation of the proposed method."}
{"_id": "88704fb46ee1664e0a8417ec1f9f224fa0d70785", "title": "Wormhole Attack Detection in Wireless Networks", "text": "Wormhole attacks can destabilize or disable wireless sensor networks. In a typical wormhole attack, the attacker receives packets at one point in the network, forwards them through a wired or wireless link with less latency than the network links, and relays them to another point in the network. We first propose a centralized algorithm to detect wormholes and show its correctness rigorously. For the distributed wireless network, we proposes DAWN, Distributed detection Algorithm against Wormhole in wireless Network coding systems, by exploring the change of the flow directions of the innovative packets caused by wormholes. We find that the robustness depends on the node density in the network, and prove a necessary condition to achieve collision-resistance. Our solutions only depend on the local information that can be obtained from regular network coding protocols, and thus the overhead that our algorithms introduce is acceptable for most applications."}
{"_id": "5f010dae38b54015adfa6e42b7e042fe923e9d93", "title": "Polygraph: automatically generating signatures for polymorphic worms", "text": "It is widely believed that content-signature-based intrusion detection systems (IDS) are easily evaded by polymorphic worms, which vary their payload on every infection attempt. In this paper, we present Polygraph, a signature generation system that successfully produces signatures that match polymorphic worms. Polygraph generates signatures that consist of multiple disjoint content substrings. In doing so, Polygraph leverages our insight that for a real-world exploit to function properly, multiple invariant substrings must often be present in all variants of a payload; these substrings typically correspond to protocol framing, return addresses, and in some cases, poorly obfuscated code. We contribute a definition of the polymorphic signature generation problem; propose classes of signature suited for matching polymorphic worm payloads; and present algorithms for automatic generation of signatures in these classes. Our evaluation of these algorithms on a range of polymorphic worms demonstrates that Polygraph produces signatures for polymorphic worms that exhibit low false negatives and false positives."}
{"_id": "ec5b5c434f9d0bfc3954c212226d436e32bcf7d5", "title": "Are We Ready for Driver-less Vehicles? Security vs. Privacy- A Social Perspective", "text": "At this moment Automous cars are probably the biggest and most talked about technology in the Robotics Reserach Community. In spite of great technolgical advances over past few years a full fledged autonomous car is still far from reality. This article talks about the existing system and discusses the possibility of a Computer Vision enabled driving being superior than the LiDar based system. A detailed overview of privacy violations that might arise from autonomous driving has been discussed in detail both from a technical as well as legal perspective. It has been proved through evidence and arguments that efficient and accurate estimation and efficient solution of the contraint satisfaction problem adressed in the case of autonomous cars are negatively correlated with the preserving the privacy of the user. It is a very difficult trade-off since both are very important aspects and has to be taken into account. The fact that one cannot compromise with the safety issues of the car makes it inevidable to run into serious privacy concerns that might have adverse social and political effects."}
{"_id": "f6dd8c7e8d38b7a315417fbe57d20111d7b84a16", "title": "Image-based Localization with Spatial LSTMs", "text": "In this work we propose a new CNN+LSTM architecture for camera pose regression for indoor and outdoor scenes. CNNs allow us to learn suitable feature representations for localization that are robust against motion blur and illumination changes. We make use of LSTM units on the CNN output in spatial coordinates in order to capture contextual information. This substantially enlarges the receptive field of each pixel leading to drastic improvements in localization performance. We provide extensive quantitative comparison of CNN-based vs SIFT-based localization methods, showing the weaknesses and strengths of each. Furthermore, we present a new large-scale indoor dataset with accurate ground truth from a laser scanner. Experimental results on both indoor and outdoor public datasets show our method outperforms existing deep architectures, and can localize images in hard conditions, e.g., in the presence of mostly textureless surfaces."}
{"_id": "ebd946975ee11b7396c336d9d3d97b7755ef1a70", "title": "Communication issues in requirements elicitation: a content analysis of stakeholder experiences", "text": "The gathering of stakeholder requirements comprises an early, but continuous and highly critical stage in system development. This phase in development is subject to a large degree of error, influenced by key factors rooted in communication problems. This pilot study builds upon an existing theory-based categorisation of these problems through presentation of a four-dimensional framework on communication. Its structure is validated through a content analysis of interview data, from which themes emerge, that can be assigned to the dimensional categories, highlighting any problematic areas. The paper concludes with a discussion on the utilisation of the framework for requirements elicitation exercises."}
{"_id": "259bbc822121df705bf3d5898ae031cd712505ea", "title": "Game Theory in Signal Processing and Communications", "text": "1Department of Mobile Communications, School of Electrical Engineering and Computer Sciences, Technical University of Berlin, Berlin, Germany 2Wireless Networking, Signal Processing and Security Lab, Department of Electrical and Computer Engineering, University of Houston, Houston, TX 77004, USA 3Division of Communication Systems, Department of Electrical Engineering (ISY), Link\u00f6ping University, SE-581 83 Link\u00f6ping, Sweden 4Communications Laboratory, Faculty of Electrical Engineering and Information Technology, Dresden University of Technology, 01062 Dresden, Germany"}
{"_id": "4eca7aa4a96300caf8622d666ecf5635d8b72132", "title": "Exercise motion classification from large-scale wearable sensor data using convolutional neural networks", "text": "The ability to accurately identify human activities is essential for developing automatic rehabilitation and sports training systems. In this paper, large-scale exercise motion data obtained from a forearm-worn wearable sensor are classified with a convolutional neural network (CNN). Time-series data consisting of accelerometer and orientation measurements are formatted as images, allowing the CNN to automatically extract discriminative features. A comparative study on the effects of image formatting and different CNN architectures is also presented. The best performing configuration classifies 50 gym exercises with 92.1% accuracy."}
{"_id": "979a05a8fc3465299e89ec7289547a35e2a454a2", "title": "Long-range GPS-denied aerial inertial navigation with LIDAR localization", "text": "Despite significant progress in GPS-denied autonomous flight, long-distance traversals (> 100 km) in the absence of GPS remain elusive. This paper demonstrates a method capable of accurately estimating the aircraft state over a 218 km flight with a final position error of 27 m, 0.012% of the distance traveled. Our technique efficiently captures the full state dynamics of the air vehicle with semi-intermittent global corrections using LIDAR measurements matched against an a priori Digital Elevation Model (DEM). Using an error-state Kalman filter with IMU bias estimation, we are able to maintain a high-certainty state estimate, reducing the computation time to search over a global elevation map. A sub region of the DEM is scanned with the latest LIDAR projection providing a correlation map of landscape symmetry. The optimal position is extracted from the correlation map to produce a position correction that is applied to the state estimate in the filter. This method provides a GPS-denied state estimate for long range drift-free navigation. We demonstrate this method on two flight data sets from a full-sized helicopter, showing significantly longer flight distances over the current state of the art."}
{"_id": "1b1a829c43f1a4f3a3d70f033a1b8e7bee1f7112", "title": "EbbRT : Elastic Building Block Runtime-overview Schatzberg", "text": ""}
{"_id": "f76121706d3f7128c5de049bfeab4c97fe47fd9d", "title": "Modeling of Item-Difficulty for Ontology-based MCQs", "text": "Multiple choice questions (MCQs) that can be generated from a domain ontology can significantly reduce human effort & time re quired for authoring & administering assessments in an e-Learning envir o ment. Even though there are various methods for generating MCQs from ontologi es, methods for determining the difficulty-levels of such MCQs are less explor ed. In this paper, we study various aspects and factors that are involved in deter mining the difficultyscore of an MCQ, and propose an ontology-based model for the p rediction. This model characterizes the difficulty values associated with t he s em and choice set of the MCQs, and describes a measure which combines both the s cor s. Furthermore, the notion of assigning difficultly-scores based on th e skill level of the test taker is utilized for predicating difficulty-score of a stem . We studied the effectiveness of the predicted difficulty-scores with the help of a psychometric model from the Item Response Theory, by involving real-students a d domain experts. Our results show that, the predicated difficulty-levels of t he MCQs are having high correlation with their actual difficulty-levels."}
{"_id": "92e7a6ef61dfddf35eedfe79bebec0dd505f9ce5", "title": "Kirner\u2019s deformity and clinodactyly in one family", "text": "Clinodactyly and Kirner\u2019s deformity are generally regarded as independent anomalies. We present the first family in which both deformities are present. This concurrence raises the question as to whether these deformities are variable expressions of the same genetic disorder. However, because a conclusive genetic test could not be performed, coincidence cannot be excluded."}
{"_id": "274db5816e6f4abd10eb546691886b65ff937c9f", "title": "Investigation of multilingual deep neural networks for spoken term detection", "text": "The development of high-performance speech processing systems for low-resource languages is a challenging area. One approach to address the lack of resources is to make use of data from multiple languages. A popular direction in recent years is to use bottleneck features, or hybrid systems, trained on multilingual data for speech-to-text (STT) systems. This paper presents an investigation into the application of these multilingual approaches to spoken term detection. Experiments were run using the IARPA Babel limited language pack corpora (~10 hours/language) with 4 languages for initial multilingual system development and an additional held-out target language. STT gains achieved through using multilingual bottleneck features in a Tandem configuration are shown to also apply to keyword search (KWS). Further improvements in both STT and KWS were observed by incorporating language questions into the Tandem GMM-HMM decision trees for the training set languages. Adapted hybrid systems performed slightly worse on average than the adapted Tandem systems. A language independent acoustic model test on the target language showed that retraining or adapting of the acoustic models to the target language is currently minimally needed to achieve reasonable performance."}
{"_id": "7c0674a7603fcbc8abe3e2d65deb2f7dff11d3e7", "title": "Channel selection based on multichannel cross-correlation coefficients for distant speech recognition", "text": "In theory, beamforming performance can be improved by using as many microphones as possible, but in practice it has been shown that using all possible channels does not always improve speech recognition performance [1, 2, 3, 4, 5]. In this work, we present a new channel selection method in order to increase the computational efficiency of beamforming for distant speech recognition (DSR) without sacrficing performance."}
{"_id": "09b337587bd369b540157abee7e88f828798908f", "title": "Key Risk Indicators \u2013 Their Role in Operational Risk Management and Measurement", "text": "In comparison with other operational risk management and measurement tools such as loss data (internal, public and consortium), risk-and-control assessments, capital allocation and performance measurement, key risk indicators (KRIs) remain one of the outstanding action items on most firms\u2019 to-do lists, along with scenario analysis. KRIs are not new, but, up until recently, attempts to implement such programmes have often been characterised as less than effective. We believe that there are emerging best practices for KRI programmes that overcome some of the traditional challenges in KRI implementations. Further, we believe that investment in KRI programmes will reap many benefits, including the ability to clearly convey risk appetite, optimise risk and return, and improve the likelihood of achieving primary business goals through more effective operational risk management. In this chapter, we will seek to demystify KRIs, understand the basic fundamentals in identifying, specifying, selecting and implementing quality indicators, and consider how to monitor and report on them, in conjunction with other useful operational risk management information, to create powerful management reporting. We will also examine the potential for more advanced applications of KRIs, touching on the measurement of risk for correlation purposes, and the potential for composite indicators as well as KRI benchmarking. Finally, we will overview an industry KRI 10"}
{"_id": "72ef661a052be7685d93c79011ba4a5fbcf308f7", "title": "Measuring problem video game playing in adolescents.", "text": "AIMS\nSome researchers suggest that for some people, video game playing is an addictive behaviour similar to substance dependence. Our aim was to design and validate a scale to measure the problems associated with the apparently addictive use of all types of video games and video game systems, because there is no instrument at the present time that can be used for this purpose.\n\n\nDESIGN\nWe reviewed the DSM-IV criteria for substance dependence and for pathological gambling, as well as the literature on the addictions in order to design a short scale (PVP; problem video game playing) that is quick and easy to apply.\n\n\nPARTICIPANTS\nThe scale was administered to 223 Spanish adolescents aged between 13 and 18 years. The study was carried out in Granada and Algeciras, Spain.\n\n\nFINDINGS\nPsychometric analyses show that the PVP seems to be unidimensional and has acceptable internal consistency (Cronbach's alpha) at 0.69. The pattern of associations between the scale scores and alternative measures of problem play supports its construct validity (higher total scores in the scale were associated with higher frequency of play, mean and longest times per session, self and parents' perception of playing to excess, and scores in the Severity of Dependence Scale).\n\n\nCONCLUSIONS\nOur results confirm that the excessive use of video games is associated with a number of problems which resemble a dependence syndrome, and the PVP appears as a useful instrument for the measurement of such problems."}
{"_id": "6abac64862f7d207cac58c6a93f75dc80d74e575", "title": "A mathematical theory of communication: Meaning, information, and topology", "text": ""}
{"_id": "81e055bc2ec6810f450452a03618e677187c270b", "title": "Markerless tracking using planar structures in the scene", "text": "We describea markerlesscamera tracking systemfor augmentedreality thatoperatesin environmentswhich contain oneor more planes. This is a commonspecialcase, which we showsignificantlysimplifiestracking. The result is a practical, reliable, vision-basedtracker. Furthermore, thetrackedplaneimposesa natural referenceframe, sothat thealignmentof thereal andvirtual coordinatesystemsis rather simplerthan would be the casewith a general structure-and-motionsystem.Multiple planescan be tracked, and additional data such as 2D point tracks are easilyincorporated."}
{"_id": "d86b21faa45e4f917da70d00b20c05fdbecaf956", "title": "Planar object recognition using projective shape representation", "text": "We describe a model based recognition system, called LEWIS, for the identification of planar objects based on a projectively invariant representation of shape. The advantages of this shape description include simple model acquisition (direct from images), no need for camera calibration or object pose computation, and the use of index functions. We describe the feature construction and recognition algorithms in detail and provide an analysis of the combinatorial advantages of using index functions. Index functions are used to select models from a model base and are constructed from projective invariants based on algebraic curves and a canonical projective coordinate frame. Examples are given of object recognition from images of real scenes, with extensive object libraries. Successful recognition is demonstrated despite partial occlusion by unmodelled objects, and realistic lighting conditions."}
{"_id": "dec34eb3d7907a04a995905a212d4c616f2056e7", "title": "Scholarly use of social media and altmetrics: A review of the literature", "text": "Social media has become integrated into the fabric of the scholarly communication system in fundamental ways: principally through scholarly use of social media platforms and the promotion of new indicators on the basis of interactions with these platforms. Research and scholarship in this area has accelerated since the coining and subsequent advocacy for altmetrics\u2014that is, research indicators based on social media activity. This review provides an extensive account of the state-of-the art in both scholarly use of social media and altmetrics. The review consists of two main parts: the first examines the use of social media in academia, examining the various functions these platforms have in the scholarly communication process and the factors that affect this use. The second part reviews empirical studies of altmetrics, discussing the various interpretations of altmetrics, data collection and methodological limitations, and differences according to platform. The review ends with a critical discussion of the implications of this transformation in the scholarly communication system."}
{"_id": "a59f02c67de85b729e46e510af419dcd8b86d4d0", "title": "Classification of Acceleration Data for Biometric Gait Recognition on Mobile Devices", "text": "Ubiquitous mobile devices like smartphones and tablets are often not secured against unauthorized access as the users tend to not use passwords because of convenience reasons. Therefore, this study proposes an alternative user authentication method for mobile devices based on gait biometrics. The gait characteristics are captured using the built-in accelerometer of a smartphone. Various features are extracted from the measured accelerations and utilized to train a support vector machine (SVM). Among the extracted features are the Meland Bark-frequency cepstral coefficients (MFCC, BFCC) which are commonly used in speech and speaker recognition and have not been used for gait recognition previously. The proposed approach showed competitive recognition performance, yielding 5.9% FMR at 6.3% FNMR in a mixedday scenario."}
{"_id": "2c235caa348cd376f9303238d22f15f631020e2e", "title": "Attributed scattering centers for SAR ATR", "text": "High-frequency radar measurements of man-made targets are dominated by returns from isolated scattering centers, such as corners and flat plates. Characterizing the features of these scattering centers provides a parsimonious, physically relevant signal representation for use in automatic target recognition (ATR). In this paper, we present a framework for feature extraction predicated on parametric models for the radar returns. The models are motivated by the scattering behaviour predicted by the geometrical theory of diffraction. For each scattering center, statistically robust estimation of model parameters provides high-resolution attributes including location, geometry, and polarization response. We present statistical analysis of the scattering model to describe feature uncertainty, and we provide a least-squares algorithm for feature estimation. We survey existing algorithms for simplified models, and derive bounds for the error incurred in adopting the simplified models. A model order selection algorithm is given, and an M-ary generalized likelihood ratio test is given for classifying polarimetric responses in spherically invariant random clutter."}
{"_id": "e4a4479db7b16f9b364fabb58bcdaf53812c6ad2", "title": "The role of prediction algorithms in the MavHome smart home architecture", "text": "The goal of the Managing an Adaptive Versatile Home (MavHome) project is to create a home that acts as a rational agent. The agent seeks to maximize inhabitant comfort and minimize operation cost. In order to achieve these goals, the agent must be able to predict the mobility patterns and device usages of the inhabitants. In this article, we introduce the MavHome project and its underlying architecture. The role of prediction algorithms within the architecture is discussed, and three prediction algorithms that are central to home operations are presented. We demonstrate the effectiveness of these algorithms on synthetic and/or actual smart home data. THE ROLE OF PREDICTION ALGORITHMS IN THE MAVHOME SMART HOME ARCHITECTURE"}
{"_id": "670312e7fa0037d262852a16e233ed62790eb255", "title": "Surface Aesthetics in Tip Rhinoplasty: A Step-by-Step Guide.", "text": "Tip rhinoplasty is a key component of aesthetic rhinoplasty. An understanding of the correlation between tip surface aesthetics and the underlying anatomic structures enables proper identification and correction of tip abnormalities. Surface aesthetics of the attractive nose are created by certain lines, shadows, and highlights with specific proportions and breakpoints. In this Featured Operative Technique, the authors describe a stepwise process for tip rhinoplasty that conceptualizes aesthetic subunits as geometric polygons to define the existing deformity, the operative plan, and the aesthetic goals. Tip rhinoplasty is described in detail, from initial markings through incisions and dissection. The autorim graft concept is explained, and lateral crural steal and footplate setback techniques are described for the attainment of symmetric domes with correct lateral crural resting angles. Methods in columellar reconstruction are described, including creating the columella (C') breakpoint and the infralobular caudal contour graft. The principal author (B.\u00c7.) has applied these techniques to 257 consecutive \"polygon rhinoplasties\" over the past 3 years."}
{"_id": "7b7ccb8b6af9d7015a52e3726e4ab007d1b92bb3", "title": "Multiuser MIMO-OFDM for Next-Generation Wireless Systems", "text": "This overview portrays the evolution of orthogonal frequency division multiplexing (OFDM) research. The amelioration of powerful multicarrier OFDM arrangements with multiple-input multiple-output (MIMO) systems has numerous benefits, which are detailed in this treatise. We continue by highlighting the limitations of conventional detection and channel estimation techniques designed for multiuser MIMO OFDM systems in the so-called rank-deficient scenarios, where the number of users supported or the number of transmit antennas employed exceeds the number of receiver antennas. This is often encountered in practice, unless we limit the number of users granted access in the base station's or radio port's coverage area. Following a historical perspective on the associated design problems and their state-of-the-art solutions, the second half of this treatise details a range of classic multiuser detectors (MUDs) designed for MIMO-OFDM systems and characterizes their achievable performance. A further section aims for identifying novel cutting-edge genetic algorithm (GA)-aided detector solutions, which have found numerous applications in wireless communications in recent years. In an effort to stimulate the cross pollination of ideas across the machine learning, optimization, signal processing, and wireless communications research communities, we will review the broadly applicable principles of various GA-assisted optimization techniques, which were recently proposed also for employment in multiuser MIMO OFDM. In order to stimulate new research, we demonstrate that the family of GA-aided MUDs is capable of achieving a near-optimum performance at the cost of a significantly lower computational complexity than that imposed by their optimum maximum-likelihood (ML) MUD aided counterparts. The paper is concluded by outlining a range of future research options that may find their way into next-generation wireless systems."}
{"_id": "57cf7dfe1879c7b65e1ac515d200b831fec21ba6", "title": "Machine learning for attack vector identification in malicious source code", "text": "As computers and information technologies become ubiquitous throughout society, the security of our networks and information technologies is a growing concern. As a result, many researchers have become interested in the security domain. Among them, there is growing interest in observing hacker communities for early detection of developing security threats and trends. Research in this area has often reported hackers openly sharing cybercriminal assets and knowledge with one another. In particular, the sharing of raw malware source code files has been documented in past work. Unfortunately, malware code documentation appears often times to be missing, incomplete, or written in a language foreign to researchers. Thus, analysis of such source files embedded within hacker communities has been limited. Here we utilize a subset of popular machine learning methodologies for the automated analysis of malware source code files. Specifically, we explore genetic algorithms to resolve questions related to feature selection within the context of malware analysis. Next, we utilize two common classification algorithms to test selected features for identification of malware attack vectors. Results suggest promising direction in utilizing such techniques to help with the automated analysis of malware source code."}
{"_id": "2562cc845082410e5805d80324ba28b6afe64535", "title": "Emergency Response Information System Interoperability: Development of Chemical Incident Response Data Model", "text": "Special Issue"}
{"_id": "c48ecac482f30c7930bec08a8e9adfe5f8a9e27e", "title": "USER INTERFACE DESIGN OF VOICE CONTROLLED CONSUMER ELECTRONICS", "text": "Today speech recognition of a small vocabulary can be realized so costeffectively that the technology can penetrate into consumer electronics. But, as first applications that failed on the market show, it is by no means obvious how to incorporate voice control in a user interface. This paper addresses the issue of how to design a voice control so that the user perceives it as a benefit. User interface guidelines that are adapted or specific to voice control are presented. Then the process of designing a voice control in the user-centred approach is described. By means of two examples, the car stereo and the telephone answering machine, it is shown how this is turned into practice."}
{"_id": "736a804745d3a7027a1271ec0b66a2ab5069b1de", "title": "Reliability of Capacitors for DC-Link Applications in Power Electronic Converters\u2014An Overview", "text": "DC-link capacitors are an important part in the majority of power electronic converters which contribute to cost, size and failure rate on a considerable scale. From capacitor users' viewpoint, this paper presents a review on the improvement of reliability of dc link in power electronic converters from two aspects: 1) reliability-oriented dc-link design solutions; 2) conditioning monitoring of dc-link capacitors during operation. Failure mechanisms, failure modes and lifetime models of capacitors suitable for the applications are also discussed as a basis to understand the physics-of-failure. This review serves to provide a clear picture of the state-of-the-art research in this area and to identify the corresponding challenges and future research directions for capacitors and their dc-link applications."}
{"_id": "17a4695b37249e7aed64092912b6ae0de5b2693b", "title": "And Lead Us (Not) into Persuasion\u2026? Persuasive Technology and the Ethics of Communication", "text": "The paper develops ethical guidelines for the development and usage of persuasive technologies (PT) that can be derived from applying discourse ethics to this type of technologies. The application of discourse ethics is of particular interest for PT, since 'persuasion' refers to an act of communication that might be interpreted as holding the middle between 'manipulation' and 'convincing'. One can distinguish two elements of discourse ethics that prove fruitful when applied to PT: the analysis of the inherent normativity of acts of communication ('speech acts') and the Habermasian distinction between 'communicative' and 'strategic rationality' and their broader societal interpretation. This essay investigates what consequences can be drawn if one applies these two elements of discourse ethics to PT."}
{"_id": "785e4b034f740574ef053ebae78086e9cfa565d5", "title": "Mapping Large Spatial Flow Data with Hierarchical Clustering", "text": "It is challenging to map large spatial flow data due to the problem of occlusion and cluttered display, where hundreds of thousands of flows overlap and intersect each other. Existing flow mapping approaches often aggregate flows using predetermined high-level geographic units (e.g. states) or bundling partial flow lines that are close in space, both of which cause a significant loss or distortion of information and may miss major patterns. In this research, we developed a flow clustering method that extracts clusters of similar flows to avoid the cluttering problem, reveal abstracted flow patterns, and meanwhile preserves data resolution as much as possible. Specifically, our method extends the traditional hierarchical clustering method to aggregate and map large flow data. The new method considers both origins and destinations in determining the similarity of two flows, which ensures that a flow cluster represents flows from similar origins to similar destinations and thus minimizes information loss during aggregation. With the spatial index and search algorithm, the new method is scalable to large flow data sets. As a hierarchical method, it generalizes flows to different hierarchical levels and has the potential to support multi-resolution flow mapping. Different distance definitions can be incorporated to adapt to uneven spatial distribution of flows and detect flow clusters of different densities. To assess the quality and fidelity of flow clusters and flow maps, we carry out a case study to analyze a data set of 243,850 taxi trips within an urban area."}
{"_id": "22cd88500b98b9c8964fa4cce704dcd767d10ca3", "title": "GOTCHA password hackers!", "text": "We introduce GOTCHAs (Generating panOptic Turing Tests to Tell Computers and Humans Apart) as a way of preventing automated offline dictionary attacks against user selected passwords. A GOTCHA is a randomized puzzle generation protocol, which involves interaction between a computer and a human. Informally, a GOTCHA should satisfy two key properties: (1) The puzzles are easy for the human to solve. (2) The puzzles are hard for a computer to solve even if it has the random bits used by the computer to generate the final puzzle --- unlike a CAPTCHA [44]. Our main theorem demonstrates that GOTCHAs can be used to mitigate the threat of offline dictionary attacks against passwords by ensuring that a password cracker must receive constant feedback from a human being while mounting an attack. Finally, we provide a candidate construction of GOTCHAs based on Inkblot images. Our construction relies on the usability assumption that users can recognize the phrases that they originally used to describe each Inkblot image --- a much weaker usability assumption than previous password systems based on Inkblots which required users to recall their phrase exactly. We conduct a user study to evaluate the usability of our GOTCHA construction. We also generate a GOTCHA challenge where we encourage artificial intelligence and security researchers to try to crack several passwords protected with our scheme."}
{"_id": "1213d57a7c101786883924da886b26f4a7ca98dd", "title": "A Formal Model for a System's Attack Surface", "text": "Practical software security metrics and measurements are essential to the development of secure software [18]. In this paper, we propose to use a software system\u2019s attack surface measurement as an indicator of the system\u2019s security; the larger the attack surface, the more insecure the system. We formalize the notion of a system\u2019s attack surface using an I/O automata model of the system [15] and define a quantitative measure of the attack surface in terms of three kinds of resources used in attacks on the system: methods, channels, and data. We demonstrate the feasibility of our approach by measuring the attack surfaces of two open source FTP daemons and two IMAP servers. Software developers can use our attack surface measurement method in the software development process and software consumers can use the method in their decision making process. This research was sponsored by the US Army Research Office (ARO) under contract no. DAAD190210389, SAP Labs, LLC under award no. 1010751, and the Software Engineering Institute. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of any sponsoring institution, the U.S. government or any other entity. Report Documentation Page Form Approved"}
{"_id": "7963bde0df999d4ae94939cd5215cd800164baab", "title": "Combining Big Data and Thick Data Analyses for Understanding Youth Learning Trajectories in a Summer Coding Camp", "text": "In this paper we explore how to assess novice youths' learning of programming in an open-ended, project-based learning environment. Our goal is to combine analysis of frequent, automated snapshots of programming (e.g., \"big\" data) within the \"thick\" social context of kids? learning for deeper insights into their programming trajectories. This paper focuses on the first stage of this endeavor: the development of exploratory quantitative measures of youths? learning of computer science concepts. Analyses focus on kids? learning in a series of three Scratch Camps where 64 campers aged 10-13 used Scratch 2.0 to make a series of creative projects over 30 hours in five days. In the discussion we consider the highlights of the insights-and blind spots-of each data source with regard to youths' learning."}
{"_id": "46b202fbd5a345f6a9d220e18fc5ab7fe6a8fe91", "title": "Link Prediction in Bipartite Networks-Predicting Yelp Reviews", "text": "In this paper, we aim to predict new user reviews on businesses in the Yelp social network. We formulate this as a network link prediction problem by modeling Yelp dataset as a bipartite network between users and businesses. We implement link prediction algorithms with various proximity metrics, thoroughly evaluate the effectiveness of each algorithm and conclude that Delta, AdamicAdar and Common Neighbors algorithms perform the best in precision."}
{"_id": "515b6c50b5b049a49651ff3e9d6017e15e4ce02d", "title": "Construction and evaluation of a robust multifeature speech/music discriminator", "text": "We report on the construction of a real-time computer system capable of distinguishing speech signals from music signals over a wide range of digital audio input. We have examined 13 features intended to measure conceptually distinct properties of speech and/or music signals, and combined them in several multidimensional classification frameworks. We provide extensive data on system performance and the cross-validated training/test setup used to evaluate the system. For the datasets currently in use, the best classifier classifies with 5.8% error on a frame-by-frame basis, and 1.4% error when integrating long (2.4 second) segments of sound."}
{"_id": "b585acba8c59809f15022b37a236a68389e0a42a", "title": "Globally Optimal Crowdsourcing Quality Management", "text": "We study crowdsourcing quality management, that is, given worker responses to a set of tasks, our goal is to jointly estimate the true answers for the tasks, as well as the quality of the workers. Prior work on this problem relies primarily on applying ExpectationMaximization (EM) on the underlying maximum likelihood problem to estimate true answers as well as worker quality. Unfortunately, EM only provides a locally optimal solution rather than a globally optimal one. Other solutions to the problem (that do not leverage EM) fail to provide global optimality guarantees as well. In this paper, we focus on filtering, where tasks require the evaluation of a yes/no predicate, and rating, where tasks elicit integer scores from a finite domain. We design algorithms for finding the global optimal estimates of correct task answers and worker quality for the underlying maximum likelihood problem, and characterize the complexity of these algorithms. Our algorithms conceptually consider all mappings from tasks to true answers (typically a very large number), leveraging two key ideas to reduce, by several orders of magnitude, the number of mappings under consideration, while preserving optimality. We also demonstrate that these algorithms often find more accurate estimates than EM-based algorithms. This paper makes an important contribution towards understanding the inherent complexity of globally optimal crowdsourcing quality management."}
{"_id": "b1e40c14d49ef0042b785073fba1c6c583e9ef28", "title": "CRADLE: An Online Plan Recognition Algorithm for Exploratory Domains", "text": "In exploratory domains, agents\u2019 behaviors include switching between activities, extraneous actions, and mistakes. Such settings are prevalent in real world applications such as interaction with open-ended software, collaborative office assistants, and integrated development environments. Despite the prevalence of such settings in the real world, there is scarce work in formalizing the connection between high-level goals and low-level behavior and inferring the former from the latter in these settings. We present a formal grammar for describing users\u2019 activities in such domains. We describe a new top-down plan recognition algorithm called CRADLE (Cumulative Recognition of Activities and Decreasing Load of Explanations) that uses this grammar to recognize agents\u2019 interactions in exploratory domains. We compare the performance of CRADLE with state-of-the-art plan recognition algorithms in several experimental settings consisting of real and simulated data. Our results show that CRADLE was able to output plans exponentially more quickly than the state-of-the-art without compromising its correctness, as determined by domain experts. Our approach can form the basis of future systems that use plan recognition to provide real-time support to users in a growing class of interesting and challenging domains."}
{"_id": "50287b978d60343ba3dd5225dffc86eb392722c8", "title": "On the Optimality of the Simple Bayesian Classifier under Zero-One Loss", "text": "The simple Bayesian classifier is known to be optimal when attributes are independent given the class, but the question of whether other sufficient conditions for its optimality exist has so far not been explored. Empirical results showing that it performs surprisingly well in many domains containing clear attribute dependences suggest that the answer to this question may be positive. This article shows that, although the Bayesian classifier's probability estimates are only optimal under quadratic loss if the independence assumption holds, the classifier itself can be optimal under zero-one loss (misclassification rate) even when this assumption is violated by a wide margin. The region of quadratic-loss optimality of the Bayesian classifier is in fact a second-order infinitesimal fraction of the region of zero-one optimality. This implies that the Bayesian classifier has a much greater range of applicability than previously thought. For example, in this article it is shown to be optimal for learning conjunctions and disjunctions, even though they violate the independence assumption. Further, studies in artificial domains show that it will often outperform more powerful classifiers for common training set sizes and numbers of attributes, even if its bias is a priori much less appropriate to the domain. This article's results also imply that detecting attribute dependence is not necessarily the best way to extend the Bayesian classifier, and this is also verified empirically."}
{"_id": "5fb874a1c8106a5b2b2779ee8e1433149109ba00", "title": "Learning Bayesian Networks is NP-Complete", "text": "Algorithms for learning Bayesian networks from data have two components: a scoring metric and a search procedure. The scoring metric computes a score reeecting the goodness-of-t of the structure to the data. The search procedure tries to identify network structures with high scores. Heckerman et al. (1995) introduce a Bayesian metric, called the BDe metric, that computes the relative posterior probability of a network structure given data. In this paper, we show that the search problem of identifying a Bayesian network|among those where each node has at most K parents|that has a relative posterior probability greater than a given constant is NP-complete, when the BDe metric is used. 12.1 Introduction Recently, many researchers have begun to investigate methods for learning Bayesian networks. Many of these approaches have the same basic components: a scoring metric and a search procedure. The scoring metric takes a database of observed cases D and a network structure B S , and returns a score reeecting the goodness-of-t of the data to the structure. A search procedure generates networks for evaluation by the scoring metric. These approaches use the two components to identify a network structure or set of structures that can be used to predict future events or infer causal relationships. Cooper and Herskovits (1992)|herein referred to as CH|derive a Bayesian metric, which we call the BD metric, from a set of reasonable assumptions about learning Bayesian networks containing only discrete variables. Heckerman et al. (1995)|herein referred to as HGC|expand upon the work of CH to derive a new metric, which we call the BDe metric, which has the desirable property of likelihood equivalence. Likelihood equivalence says that the data cannot help to discriminate equivalent structures. We now present the BD metric derived by CH. We use B h S to denote the hypothesis that B S is an I-map of the distribution that generated the database. 2 Given a belief-network structure B S , we use i to denote the parents of x i. We use r i to denote the number of states of variable x i , and q i = Q x l 2 i r l to denote the number of instances of i. We use the integer j to index these instances. That is, we write i = j to denote the observation of the jth instance of the parents of x i. 1996 Springer-Verlag. 2 There is an \u2026"}
{"_id": "7783fd2984ac139194d21c10bd83b4c9764826a3", "title": "Probabilistic reasoning in intelligent systems - networks of plausible inference", "text": "Probabilistic methods to create the areas, of computational tools. But I needed to get canned, bayesian networks worked recently strongly. Recently I tossed this book was published. In intelligent systems is researchers in, ai operations research excellence award for graduate. Too concerned about how it i've been. Apparently daphne koller and learning structures evidential reasoning. Pearl is a language for i've. Despite its early publication date it, is not great give the best references."}
{"_id": "837ae38d8eba5635fb8a2e0a5cdb4764e8ea348a", "title": "Expert Systems and Probabilistic Network Models", "text": "Follow up what we will offer in this article about expert systems and probabilistic network models. You know really that this book is coming as the best seller book today. So, when you are really a good reader or you're fans of the author, it does will be funny if you don't have this book. It means that you have to get this book. For you who are starting to learn about something new and feel curious about this book, it's easy then. Just get this book and feel how this book will give you more exciting lessons."}
{"_id": "91549f524d37dc08316ab3f98483386d790ecbfe", "title": "Learning augmented Bayesian classifiers: A comparison of distribution-based and classification-based approaches", "text": "The na\u00efve Bayes classifier is built on the assumpti on of conditional independence between the attributes given the class. The algorithm has been shown to be surprisingly robust to obvious violations of this condition, but it is natural to ask if it is possib le to further improve the accuracy by relaxing this assumption. We examine an approach where na\u00efve Bayes is augmented by the addition of correlation a rcs between attributes. We explore two methods for finding the set of augmenting arcs, a greedy hillclimbing search, and a novel, more computationally efficient algorithm that we call SuperParent. We compare these methods to TAN; a state-of the-art distribution-based approach to finding the augmenti ng arcs."}
{"_id": "b32adaf6b35e2c3720e0c5df147a3b5437139f6c", "title": "A novel three-phase utility interface minimizing line current harmonics of high-power telecommunications rectifier modules", "text": "Based on the combination of a three-phase diode bridge and a dc/dc boost converter, a new three-phase threeswitch three-level pulsewidth modulated (PWM) rectifier system is developed. It can be characterized by sinusoidal mains current consumption, controlled output voltage, and low-blocking voltage stress on the power transistors. The application could be, e.g., for feeding the dc link of a telecommunications power supply module. The stationary operational behavior, the control of the mains currents, and the control of the output voltage are analyzed. Finally, the stresses on the system components are determined by digital simulation and compared to the stresses in a conventional six-switch two-level PWM rectifier system."}
{"_id": "53964e0ccc0412e2fbb2cdf3483e1f383208febe", "title": "Event Detection in Crowded Videos", "text": "Real-world actions occur often in crowded, dynamic environments. This poses a difficult challenge for current approaches to video event detection because it is difficult to segment the actor from the background due to distracting motion from other objects in the scene. We propose a technique for event recognition in crowded videos that reliably identifies actions in the presence of partial occlusion and background clutter. Our approach is based on three key ideas: (1) we efficiently match the volumetric representation of an event against oversegmented spatio-temporal video volumes; (2) we augment our shape-based features using flow; (3) rather than treating an event template as an atomic entity, we separately match by parts (both in space and time), enabling robustness against occlusions and actor variability. Our experiments on human actions, such as picking up a dropped object or waving in a crowd show reliable detection with few false positives."}
{"_id": "34bf02183c48ec300e40ad144efcdfe67dacd681", "title": "Accelerated singular value thresholding for matrix completion", "text": "Recovering a large matrix from a small subset of its entries is a challenging problem arising in many real world applications, such as recommender system and image in-painting. These problems can be formulated as a general matrix completion problem. The Singular Value Thresholding (SVT) algorithm is a simple and efficient first-order matrix completion method to recover the missing values when the original data matrix is of low rank. SVT has been applied successfully in many applications. However, SVT is computationally expensive when the size of the data matrix is large, which significantly limits its applicability. In this paper, we propose an Accelerated Singular Value Thresholding (ASVT) algorithm which improves the convergence rate from O(1/N) for SVT to O(1/N2), where N is the number of iterations during optimization. Specifically, the dual problem of the nuclear norm minimization problem is derived and an adaptive line search scheme is introduced to solve this dual problem. Consequently, the optimal solution of the primary problem can be readily obtained from that of the dual problem. We have conducted a series of experiments on a synthetic dataset, a distance matrix dataset and a large movie rating dataset. The experimental results have demonstrated the efficiency and effectiveness of the proposed algorithm."}
{"_id": "addadac34d2b44b2eb199d69349e1096544d628f", "title": "Bot Classification for Real-Life Highly Class-Imbalanced Dataset", "text": "Botnets are networks formed with a number of machines infected by malware called bots. Detection of these malicious networks is becoming a major concern as they pose a serious threat to the network security. Most of the research on bot detection is based on particular botnet characteristics which fail to detect other types of botnets and bots. Furthermore, there are very few bot detection methods that considered real-life class-imbalanced dataset. A dataset is class-imbalanced if there are significantly more instances in one class than the other classes. In this paper, we develop three generic features to detect different types of bots regardless of their botnet characteristics. We develop five classification models based on those features to classify bots from a large, real-life, class-imbalanced network dataset. Results show that our methodology can detect bots more accurately than the existing methods. Experimental results also demonstrate that the developed methodology can successfully detect bots when the proportion of bots to normal activity is very small. We also provide a performance comparison of our methodology with a recent study on bot detection in a real-life, large, imbalanced dataset."}
{"_id": "4e8576ed43ad33bbbfcc69317ae5536d9a8d203b", "title": "A 1.8-GHz CMOS Power Amplifier Using a Dual-Primary Transformer With Improved Efficiency in the Low Power Region", "text": "A 1.8-GHz CMOS power amplifier for a polar transmitter is implemented with a 0.18- RF CMOS process. The matching components, including the input and output transformers, were integrated. A dual-primary transformer is proposed in order to increase the efficiency in the low power region of the amplifier. The loss induced by the matching network for the low-output power region is minimized using the dual-primary transformer. The amplifier achieved a power-added efficiency of 40.7% at a maximum output power of 31.6 dBm. The dynamic range was 34 dB for a supply voltage that ranged from 0.5 to 3.3 V. The low power efficiency was 32% at the output power of 16 dBm."}
{"_id": "fb36faebed766d557d59d90aabec050c37c334e5", "title": "A framework for easing the development of applications embedding answer set programming", "text": "Answer Set Programming (ASP) is a well-established declarative problem solving paradigm which became widely used in AI and recognized as a powerful tool for knowledge representation and reasoning (KRR), especially for its high expressiveness and the ability to deal also with incomplete knowledge.\n Recently, thanks to the availability of a number of robust and efficient implementations, ASP has been increasingly employed in a number of different domains, and used for the development of industrial-level and enterprise applications. This made clear the need for proper development tools and interoperability mechanisms for easing interaction and integration with external systems in the widest range of real-world scenarios, including mobile applications and educational contexts.\n In this work we present a framework for integrating the KRR capabilities of ASP into generic applications. We show the use of the framework by illustrating proper specializations for some relevant ASP systems over different platforms, including the mobile setting; furthermore, the potential of the framework for educational purposes is illustrated by means of the development of several ASP-based applications."}
{"_id": "6b3b3cbce3ec7514632fcd92a7b0e0e26acc3ed2", "title": "Behavioral Addiction versus Substance Addiction: Correspondence of Psychiatric and Psychological Views", "text": "INTRODUCTION\nBehavioral science experts believe that all entities capable of stimulating a person can be addictive; and whenever a habit changes into an obligation, it can be considered as an addiction. Researchers also believe that there are a number of similarities as well as some differences between drug addiction and behavioral addiction diagnostic symptoms. The purpose of this study is to consider different approaches in this field.\n\n\nMETHODS\nThis is a descriptive research using content analysis method. First, differences and similarities of various perspectives on addiction and addiction behavior in different substances were obtained, thereafter, the data was coded and categorized, subjects were discussed and major issues were extracted.\n\n\nRESULTS\nBehavioral addiction such as internet addiction is similar to drug addiction except that in the former, the individual is not addicted to a substance but the behavior or the feeling brought about by the relevant action. In addition, the physical signs of drug addiction, are absent in behavioral addiction. Others have stated that behaviorally addicted individuals have certain symptoms and will undergo the same consequences brought about by addiction to alcohol and drugs as well as other obsessive behaviors.\n\n\nCONCLUSION\nSimilar to substance abuse prevention, programs aimed at addicted individuals and specialized training can educate adolescents about the warning signs of online addiction, in order to assist the early detection of this disorder. For prevention of behavioral addiction (such as internet addiction) authorities, cultural institutions and parents should monitor the use of internet and teach to the adolescent and children, the useful and appropriate methods of internet use."}
{"_id": "da8836da91907c8b8011c3a49bd92b63aabeb87f", "title": "Evaluation of highly available and fault-tolerant middleware clustered architectures using RabbitMQ", "text": "The paper presents a performance evaluation of message broker system, Rabbit MQ in high availability - enabling and redundant configurations. Rabbit MQ is a message queuing system realizing the middleware for distributed systems that implements the Advanced Message Queuing Protocol. The scalability and high availability design issues are discussed. Since HA and performance scalability requirements are in conflict, scenarios for using clustered RabbitMQ nodes and mirrored queues are presented. The results of performance measurements are reported."}
{"_id": "3de9457c9c22bdc1c9ea23d9c1364dd18bbb9bdb", "title": "Compiling C/C++ SIMD Extensions for Function and Loop Vectorizaion on Multicore-SIMD Processors", "text": "SIMD vectorization has received significant attention in the past decade as an important method to accelerate scientific applications, media and embedded applications on SIMD architectures such as Intel\u00ae SSE, AVX, and IBM* AltiVec. However, most of the focus has been directed at loops, effectively executing their iterations on multiple SIMD lanes concurrently relying upon program hints and compiler analysis. This paper presents a set of new C/C++ high-level vector extensions for SIMD programming, and the Intel\u00ae C++ product compiler that is extended to translate these vector extensions and produce optimized SIMD instruction sequences of vectorized functions and loops. For a function, our main idea is to vectorize the entire function for callers instead of just vectorizing loops (if any) inside the function. It poses the challenge of dealing with complicated control-flow in the function body, and matching caller and callee for SIMD vector calls while vectorizing caller functions (or loops) and callee functions. Our compilation methods for automatically compiling vector extensions are described. We present performance results of several non-trivial visual computing, computational, and simulation workloads, utilizing SIMD units through the vector extensions on Intel\u00ae Multicore 128-bit SIMD processors, and we show that significant SIMD speedups (3.07x to 4.69x) are achieved over the serial execution."}
{"_id": "804ec35ccf4fe44c43b355dcd974625a4d8738b4", "title": "The dawning era of polymer therapeutics", "text": "As we enter the twenty-first century, research at the interface of polymer chemistry and the biomedical sciences has given rise to the first nano-sized (5\u2013100 nm) polymer-based pharmaceuticals, the 'polymer therapeutics'. Polymer therapeutics include rationally designed macromolecular drugs, polymer\u2013drug and polymer\u2013protein conjugates, polymeric micelles containing covalently bound drug, and polyplexes for DNA delivery. The successful clinical application of polymer\u2013protein conjugates, and promising clinical results arising from trials with polymer\u2013anticancer-drug conjugates, bode well for the future design and development of the ever more sophisticated bio-nanotechnologies that are needed to realize the full potential of the post-genomic age."}
{"_id": "53a3f2a5f629745643a42658aebd9519e0fbf290", "title": "Comparing a Clipmap to a Sparse Voxel Octree for Global Illumination", "text": "Voxel cone tracing is a real-time method that approximates global illumination using a voxel approximation of the original scene. However, a high-resolution voxel approximation, which is necessary for good quality, consumes much memory, and a compact data structure for storing the voxels is necessary. In this thesis, as a primary contribution, we provide a comparison of two such data structures: a Sparse Voxel Octree, and a Clipmap. We implement these two data structures, and provide detailed descriptions of both with many important implementation details. These descriptions are much more complete than what exists in the current literature, and it is the secondary contribution of this thesis. In the comparison, we find that the octree performs worse than the clipmap with respect to memory consumption and performance, due to the overhead introduced by the complex octree data structure. However, with respect to visual quality, the octree is the superior choice, since the clipmap does not provide the same voxel resolution everywhere."}
{"_id": "f1048c91b5ec79f84dc468a24ab133d7d5c5d7e4", "title": "SuBiC: A Supervised, Structured Binary Code for Image Search", "text": "For large-scale visual search, highly compressed yet meaningful representations of images are essential. Structured vector quantizers based on product quantization and its variants are usually employed to achieve such compression while minimizing the loss of accuracy. Yet, unlike binary hashing schemes, these unsupervised methods have not yet benefited from the supervision, end-to-end learning and novel architectures ushered in by the deep learning revolution. We hence propose herein a novel method to make deep convolutional neural networks produce supervised, compact, structured binary codes for visual search. Our method makes use of a novel block-softmax nonlinearity and of batch-based entropy losses that together induce structure in the learned encodings. We show that our method outperforms state-of-the-art compact representations based on deep hashing or structured quantization in single and cross-domain category retrieval, instance retrieval and classification. We make our code and models publicly available online."}
{"_id": "3cff3e32440eccd1504c188a25d100c40483a1bf", "title": "A methodology for power characterization of associative memories", "text": "Content Addressable Memories (CAM) have become increasingly more important in applications requiring high speed memory search due to their inherent massively parallel processing architecture. We present a complete power analysis methodology for CAM systems to aid the exploration of their power-performance trade-offs in future systems. Our proposed methodology uses detailed transistor level circuit simulation of power behavior and a handful of input data types to simulate full chip power consumption. Furthermore, we applied our power analysis methodology on a custom designed associative memory test chip. This chip was developed by Fermilab for the purpose of developing high performance real-time pattern recognition on high volume data produced by a future large-scale scientific experiment. We applied our methodology to configure a power model for this test chip. Our model is capable of predicting the total average power within 4% of actual power measurements. Our power analysis methodology can be generalized and applied to other CAM-like memory systems and accurately characterize their power behavior."}
{"_id": "3bb4216137db0d6e10de9412780bd2beb6dd7508", "title": "A data-driven approach to exploring similarities of tourist attractions through online reviews", "text": "The motivation for tourists to visit a city is often driven by the uniqueness of the attractions accessible within the region. The draw to these locations varies by visitor as some travelers are interested in a single specific attraction while others prefer thematic travel. Tourists today have access to detailed experiences of other visitors to these locations in the form of user-contributed text reviews, opinions, photographs, and videos, all contributed through online tourism platforms. The data available through these platforms offer a unique opportunity to examine the similarities and difference between these attractions, their cities, and the visitors that contribute the reviews. In this work we take a data-driven approach to assessing similarity through textual analysis of user-contributed reviews, uncovering nuanced differences and similarities in the ways that reviewers write about attractions and cities."}
{"_id": "54402c81b46a9c38551cdf8d4bd62d7588efce8e", "title": "An End-to-End Deep Framework for Answer Triggering with a Novel Group-Level Objective", "text": "Given a question and a set of answer candidates, answer triggering determines whether the candidate set contains any correct answers. If yes, it then outputs a correct one. In contrast to existing pipeline methods which first consider individual candidate answers separately and then make a prediction based on a threshold, we propose an end-to-end deep neural network framework, which is trained by a novel group-level objective function that directly optimizes the answer triggering performance. Our objective function penalizes three potential types of error and allows training the framework in an end-to-end manner. Experimental results on the WIKIQA benchmark show that our framework outperforms the state of the arts by a 6.6% absolute gain under F1 measure1."}
{"_id": "5584bb5e0e063194ffc6c1a38eb937951992861c", "title": "Truncating Temporal Differences: On the Efficient Implementation of TD(lambda) for Reinforcement Learning", "text": "Temporal diierence (TD) methods constitute a class of methods for learning predictions in multi-step prediction problems, parameterized by a recency factor. Currently the most important application of these methods is to temporal credit assignment in reinforcement learning. Well known reinforcement learning algorithms, such as AHC or Q-learning, may be viewed as instances of TD learning. This paper examines the issues of the eecient and general implementation of TD() for arbitrary , for use with reinforcement learning algorithms optimizing the discounted sum of rewards. The traditional approach, based on eligibility traces, is argued to suuer from both ineeciency and lack of generality. The TTD (Truncated Temporal Diierences) procedure is proposed as an alternative, that indeed only approximates TD(), but requires very little computation per action and can be used with arbitrary function representation methods. The idea from which it is derived is fairly simple and not new, but probably unexplored so far. Encouraging experimental results are presented, suggesting that using > 0 with the TTD procedure allows one to obtain a signiicant learning speedup at essentially the same cost as usual TD(0) learning."}
{"_id": "aec1538916f6a6ec3a30ea6b8e84d33a9bb741a2", "title": "Cooperative learning in neural networks using particle swarm optimizers", "text": "This paper presents a method to employ particle swarms optim izers in a cooperative configuration. This is achieved by splitting the input vector into several sub-vectors, each w hich is optimized cooperatively in its own swarm. The applic ation of this technique to neural network training is investigate d, with promising results."}
{"_id": "5c386d601ffcc75f7635a4a5c6066824b37b9425", "title": "CAPTCHAs: An Artificial Intelligence Application to Web Security", "text": "Nowadays, it is hard to find a popular Web site with a registration form that is not protected by an automated human proof test which displays a sequence of characters in an image, and requests the user to enter the sequence into an input field. This security mechanism is based on the Turing Test\u2014one of the oldest concepts in Artificial Intelligence\u2014and it is most often called Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). This kind of test has been conceived to prevent the automated access to an important Web resource, for example, a Web mail service or a Social Network. There are currently hundreds of these tests, which are served millions of times a day, thus involving a huge amount of human work. On the other side, a number of these tests have been broken, that is, automated programs designed by researchers, hackers, and spammers have been able to automatically serve the correct answer. In this chapter, we present the history and the concept of CAPTCHAs, along with their applications and a wide review of their instantiations. We also discuss their evaluation, both from the user and the security perspectives, including usability, attacks, and countermeasures. We expect this chapter provides to the reader a good overview of this interesting field. CES IN COMPUTERS, VOL. 83 109 Copyright \u00a9 2011 Elsevier Inc. 65-2458/DOI: 10.1016/B978-0-12-385510-7.00003-5 All rights reserved. 110 J.M. GOMEZ HIDALGO AND G. ALVAREZ MARA\u00d1ON 1. I ntroduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110 1 .1. T he Turing Test and the Origin of CAPTCHAs . . . . . . . . . . . . . . . 111 2. M otivation and Applications . . . . . . . . . . . . . . . . . . . . . . . . 116 2 .1. G eneral Description of CAPTCHAs . . . . . . . . . . . . . . . . . . . . . 116 2 .2. D esirable Properties of CAPTCHAs . . . . . . . . . . . . . . . . . . . . . 117 2 .3. I m plementation and Deployment . . . . . . . . . . . . . . . . . . . . . . 119 2 .4. A pplications and the Rise of the Robots . . . . . . . . . . . . . . . . . . . 121 3. T ypes of CAPTCHAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127 3 .1. O CR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130 3 .2. I m age . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135 3 .3. A udio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143 3 .4. C ognitive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144 4. E valuation of CAPTCHAs . . . . . . . . . . . . . . . . . . . . . . . . . 146 4 .1. E fficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147 4 .2. A ccessibility Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152 4 .3. P ractical Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . 154 5. S ecurity and Attacks on CAPTCHAs . . . . . . . . . . . . . . . . . . . . 156 5 .1. A ttacks on CAPTCHAs . . . . . . . . . . . . . . . . . . . . . . . . . . . 158 5 .2. S ecurity Requirements on CAPTCHAs . . . . . . . . . . . . . . . . . . . 169 6. A lternatives to CAPTCHAs . . . . . . . . . . . . . . . . . . . . . . . . . 171 7. C onclusions and Future Trends . . . . . . . . . . . . . . . . . . . . . . . 173 R eferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173"}
{"_id": "d2ae3cdab29631b65818c323837d530598ad7d4e", "title": "Ultrasound-Guided Fenestration of the Carpal Ligament Using a Double-Needle Approach.", "text": "Injection techniques for carpal tunnel syndrome have evolved from landmark palpation injection techniques to more accurate ultrasound-guided approaches. Presented is a case report describing a technique serendipitously discovered during a carpal ligament fenestration. The case involved a 66-year-old man with a diagnosis of carpal tunnel syndrome. After a failed attempt at treatment using a wrist splint and activity modification, he was treated by median nerve hydrodissection with 100% temporary pain relief. When his symptoms recurred, a carpal tunnel combined hydrodissection/fenestration technique was performed. Because of difficulty extricating the carpal ligament from the median nerve with the first needle, which was placed longitudinal to the median nerve, a second needle was placed transverse to the median nerve to aid in hydrodissection. The second needle was left in because it was found to be helpful in maintaining a safe distance between the median nerve and the carpal ligament by intermittent injection through the second needle. The patient reported 70% relief of his symptoms at 2-week follow-up and 50% sustained relief at 3 months. A 2-needle technique is feasible and can be helpful during median nerve hydrodissection/carpal ligament fenestration when technical or anatomical issues arise preventing treatment using the traditional single needle approach. This double needle approach allows for use of injectate to maintain separation between the median nerve and the ligament during the fenestration."}
{"_id": "42d635d63049cbee41d313c7f2981bbcb079b9f5", "title": "When does Consumer Empowerment Lead to Satisfied Customers? Some Mediating and Moderating Effects of the Empowerment-Satisfaction Link", "text": "Technological advances increasingly provide marketers with the opportunity to empower consumers. Consumer empowerment is a positive subjective state evoked by consumer perceptions of increasing control. As a positive state, increasing consumer empowerment should be associated with increasing consumer satisfaction. If such a relationship exists, it may be influenced by a number of contextual variables. Knowing in what contexts empowerment has a greater impact on satisfaction would help marketers decide when they could more effectively use such a strategy. This study has two purposes: 1) to investigate the relationship between consumer empowerment and satisfaction and 2) to investigate a set of potential influences on that relationship. Marketers will be in a better position to decide when to empower consumers if they have guidance on the relationships between these variables."}
{"_id": "4aa8d888b8932eda034ecbef6e4fcc80500ca849", "title": "Linear codes over F2+uF2+vF2+uvF2", "text": "In this work, we investigate linear codes over the ring F2 + uF2 + vF2 + uvF2. We first analyze the structure of the ring and then define linear codes over this ring which turns out to be a ring that is not finite chain or principal ideal contrary to the rings that have hitherto been studied in coding theory. Lee weights and Gray maps for these codes are defined by extending on those introduced in works such as Betsumiya et al. (Discret Math 275:43\u201365, 2004) and Dougherty et al. (IEEE Trans Inf 45:32\u201345, 1999). We then characterize the F2 +uF2 +vF2 +uvF2-linearity of binary codes under the Gray map and give a main class of binary codes as an example of F2 + uF2 + vF2 + uvF2-linear codes. The duals and the complete weight enumerators for F2 + uF2 + vF2 + uvF2-linear codes are also defined after which MacWilliams-like identities for complete and Lee weight enumerators as well as for the ideal decompositions of linear codes over F2 + uF2 + vF2 + uvF2 are obtained."}
{"_id": "ac64582605060febe1441ba93344c4350b1494a5", "title": "AIMED- A Personalized TV Recommendation System", "text": "Previous personalized DTV recommendation systems focus only on viewers\u2019 historical viewing records or demographic data. This study proposes a new recommending mechanism from a user oriented perspective. The recommending mechanism is based on user properties such as Activities, Interests, Moods, Experiences, and Demographic information\u2014AIMED. The AIMED data is fed into a neural network model to predict TV viewers\u2019 program preferences. Evaluation results indicate that the AIMED model significantly increases recommendation accuracy and decreases prediction errors compared to the conventional model."}
{"_id": "8836e83fa9913d50d8d39e297b69b25519552a02", "title": "Proportional smile design using the recurring esthetic dental (red) proportion.", "text": "Dentists have needed an objective way in which to evaluate a smile. A method for determining the ideal size and position of the anterior teeth has been presented here. Use of the FIVE to evaluate the RED proportion and the width-to-height ratio, tempered with sound clinical judgment, gives pleasing and consistent results. With the diversity that exists in nature, rarely does the final result follow all the mathematical rules of proportional smile design. This approach may serve as a foundation on which to base initial smile design, however. When one begins to understand the relationship between beauty, mathematics, and the surrounding world, one begins to appreciate their interdependence."}
{"_id": "dcb56cf8166e325da2d62dfa96f64d1ed42c7675", "title": "Scientific workflow management and the Kepler system", "text": "Many scientific disciplines are now data and information driven, and new scientific knowledge is often gained by scientists putting together data analysis and knowledge discovery \u201cpipelines\u201d. A related trend is that more and more scientific communities realize the benefits of sharing their data and computational services, and are thus contributing to a distributed data and computational community infrastructure (a.k.a. \u201cthe Grid\u201d). However, this infrastructure is only a means to an end and scientists ideally should be bothered little with its existence. The goal is for scientists to focus on development and use of what we call scientific workflows. These are networks of analytical steps that may involve, e.g., database access and querying steps, data analysis and mining steps, and many other steps including computationally intensive jobs on high performance cluster computers. In this paper we describe characteristics of and requirements for scientific workflows as identified in a number of our application projects. We then elaborate on Kepler, a particular scientific workflow system, currently under development across a number of scientific data management projects. We describe some key features of Kepler and its underlying Ptolemy ii system, planned extensions, and areas of future research. Kepler is a communitydriven, open source project, and we always welcome related projects and new contributors to join. \u2217Work supported by NSF/ITR 0225676 (SEEK), DOE SciDAC DE-FC02-01ER25486 (SDM), NSF/ITR CCR-00225610 (Chess), NSF/ITR 0225673 (GEON), NIH/NCRR 1R24 RR019701-01 Biomedical Informatics Research Network Coordinating Center (BIRN-CC), NSF/ITR 0325963 (ROADNet), NSF/DBI-0078296 (Resurgence) \u2020San Diego Supercomputer Center, UC San Diego; ?Dept. of Computer Science & Genome Center, UC Davis; \u2021National Center for Ecological Analysis and Synthesis, UC Santa Barbara; and \u00a7Department of Electrical Engineering and Computer Sciences, UC Berkeley"}
{"_id": "095c9244b1ac0d6b83e5b8448e9ffa56e8fb4c6a", "title": "A Taxonomy of Workflow Management Systems for Grid Computing", "text": "With the advent of Grid and application technologies, scientists and engineers are building more and more complex applications to manage and process large data sets, and execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Therefore, many efforts have been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids. We also survey several representative Grid workflow systems developed by various projects world-wide to demonstrate the comprehensiveness of the taxonomy. The taxonomy not only highlights the design and engineering similarities and differences of state-of-the-art in Grid workflow systems, but also identifies the areas that need further research."}
{"_id": "1e2ac9587c9a57a49583990602142c84b3f19625", "title": "Computation for ChIP-seq and RNA-seq studies", "text": "Genome-wide measurements of protein-DNA interactions and transcriptomes are increasingly done by deep DNA sequencing methods (ChIP-seq and RNA-seq). The power and richness of these counting-based measurements comes at the cost of routinely handling tens to hundreds of millions of reads. Whereas early adopters necessarily developed their own custom computer code to analyze the first ChIP-seq and RNA-seq datasets, a new generation of more sophisticated algorithms and software tools are emerging to assist in the analysis phase of these projects. Here we describe the multilayered analyses of ChIP-seq and RNA-seq datasets, discuss the software packages currently available to perform tasks at each layer and describe some upcoming challenges and features for future analysis tools. We also discuss how software choices and uses are affected by specific aspects of the underlying biology and data structure, including genome size, positional clustering of transcription factor binding sites, transcript discovery and expression quantification."}
{"_id": "41c97a6b41aefc6b0e0a3c702db080fd5aeef6f5", "title": "Distributed computing in practice: the Condor experience", "text": "Since 1984, the Condor project has enabled ordinary users to do extraordinary computing. Today, the project continues to explore the social and technical problems of cooperative computing on scales ranging from the desktop to the world-wide computational Grid. In this paper, we provide the history and philosophy of the Condor project and describe how it has interacted with other projects and evolved along with the field of distributed computing. We outline the core components of the Condor system and describe how the technology of computing must correspond to social structures. Throughout, we reflect on the lessons of experience and chart the course travelled by research ideas as they grow into production systems. Copyright c \u00a9 2005 John Wiley & Sons, Ltd."}
{"_id": "700f884863cb271733008101d54a36ee86b8ef7b", "title": "Effects of playing a violent video game as male versus female avatar on subsequent aggression in male and female players.", "text": "Previous research has shown that violent video games can increase aggression in players immediately after they play. The present research examines the effects of one subtle cue within violent video games that might moderate these effects-whether the avatar is male or female. One common stereotype is that males are more aggressive than females. Thus, playing a violent video game as a male avatar, compared to a female avatar, should be more likely to prime aggressive thoughts and inclinations in players and lead to more aggressive behavior afterwards. Male and female university students (N\u2009=\u2009242) were randomly assigned to play a violent video game as a male or female avatar. After gameplay, participants gave an ostensible partner who hated spicy food hot sauce to eat. The amount of hot sauce given was used to measure aggression. Consistent with priming theory, results showed that both male and female participants who played a violent game as a male avatar behaved more aggressively afterwards than those who played as female avatar. The priming effects of the male avatar were somewhat stronger for male participants than for female participants, suggesting that male participants identified more with the male avatar than did the female participants. These results are particularly noteworthy because they are consistent with another recent experiment showing that playing a violent game as an avatar with a different stereotypically aggressive attribute (black skin color) stimulates more aggression than playing as an avatar without the stereotypically aggressive attribute (Yang et al., 2014, Social Psychological and Personality Science)."}
{"_id": "647cdaae2cfc707a93a137f38ec572d08d8dd3d3", "title": "Meta BCI : Hippocampus-striatum network inspired architecture towards flexible BCI", "text": "Classifying neural signals is a crucial step in the brain-computer interface (BCI). Although Deep Neural Network (DNN) has been shown to be surprisingly good at classification, DNN suffers from long training time and catastrophic forgetting. Catastrophic forgetting refers to a phenomenon in which a DNN tends to forget previously learned task when it learns a new task. Here we argue that the solution to this problem may be found in the human brain, specifically, by combining functions of the two regions: the striatum and the hippocampus, which is pivotal for reinforcement learning and memory recall relevant to the current context, respectively. The mechanism of these brain regions provides insights into resolving catastrophic forgetting and long training time of DNNs. Referring to the hippocampus-striatum network we discuss design principles of combining different types of DNNs for building a new BCI architecture, called \u201cMeta BCI\u201d."}
{"_id": "fc8fafb596b85dc5b4bae2e8c5971d1384250c74", "title": "A smart parking system to minimize searching time, fuel consumption and CO2 emission", "text": "This paper proposes a Smart Parking System (SPS) based on Internet of Thing (IoT) and Wireless Sensors Network (WSN). It is assumed that Global Positioning System (GPS) and wireless communication system is placed in the vehicle. GPS gets the position of vehicle and wireless communication send it to server. A smartphone uses for user to remotely view the parking, status of spots and book a slot manually. The proposed system decreases the searching time of free slot by giving route to free slot. As the fuel consumption and carbon dioxide (CO2) emission decreases by decreasing the distance and wise versa because there is direct relationship between them. It is shown in simulations result at the end of the paper that the objectives of the paper to minimize searching time, fuel consumption and CO2 emission achieved."}
{"_id": "785c7337888e0016090c1ab7e588999ffad14f03", "title": "MapReduce as a programming model for association rules algorithm on Hadoop", "text": "As association rules widely used, it needs to study many problems, one of which is the generally larger and multi-dimensional datasets, and the rapid growth of the mount of data. Single-processor's memory and CPU resources are very limited, which makes the algorithm performance inefficient. Recently the development of network and distributed technology makes cloud computing a reality in the implementation of association rules algorithm. In this paper we describe the improved Apriori algorithm based on MapReduce mode, which can handle massive datasets with a large number of nodes on Hadoop platform."}
{"_id": "0a0b7e3814727aeae9fc8718a91f1cba4e2f707b", "title": "Design of a linear variable-stiffness mechanism using preloaded bistable beams", "text": "A machine with variable internal stiffness can meet the performance requirement in different working environments. Linear variable-stiffness mechanisms (LVSM) are rarely found compared with those producing rotary motion. This paper presents an approach to design a linear variable-stiffness mechanism. The idea is to parallel connect two lateral bistable beams with a vertical spring. Through the preload adjustment of the bistable beams, the output force to displacement curve can exhibit different stiffnesses. The merit of the proposed LVSM is that very large stiffness variation can be achieved in a compact space. The stiffness may even be tuned to zero by assigning a proper stiffness to the vertical spring. An optimization formulation is presented to design a LVSM with the largest stiffness variation. The effects of various parameters on the stiffness variation and linearity are discussed. The results are numerically verified with a prototype illustrated."}
{"_id": "579739c79c93a5bafdb2b0cff4d6d1c8bf30797b", "title": "Performance Sensitivity of Space Sharing Processor Scheduling in Distributed-Memory Multicomputers", "text": "Processor scheduling in distributed-memory systems has received considerable attention in recent years. Several commercial distributed-memory systems use spacesharing processor scheduling. In space-sharing, the set of processors in a system is partitioned and each partition is assigned for the exclusive use of a job. Space-sharing policies can be divided into fixed, static, or dynamic categories. For distributed-memory systems, dynamic policies incur high overhead. Thus, static policies are considered as these policies provide a better performance than the fixed policies. Several static policies have been proposed in the literature. In a previously proposed adaptive static policy, the partition size is a function of the number of queued jobs. This policy, however, tends to underutilize the system resources. To improve the performance of this policy, we propose a new policy in which the partition size is a function of the total number of jobs in the system, as opposed to only the queued jobs. The results presented here demonstrate that the new policy performs substantially better than the original policy for the various workload and system parameters. Another major contribution is the evaluation of the performance sensitivity to job structure, variances in inter-arrival times and job service times, and network topology."}
{"_id": "ab138bc53af41bfc5f1a5b2ce5ab4f11973e50aa", "title": "An Ontology-Based Approach to Text Summarization", "text": "Extractive text summarization aims to create a condensed version of one or more source documents by selecting the most informative sentences. Research in text summarization has therefore often focused on measures of the usefulness of sentences for a summary. We present an approach to sentence extraction that maps sentences to nodes of a hierarchical ontology. By considering ontology attributes we are able to improve the semantic representation of a sentence's information content. The classifier that maps sentences to the taxonomy is trained using search engines and is therefore very flexible and not bound to a specific domain. In our experiments, we train an SVM classifier to identify summary sentences using ontology-based sentence features. Our experimental results show that the ontology-based extraction of sentences outperforms baseline classifiers, leading to higher Rouge scores of summary extracts."}
{"_id": "268d5b4fe3a2f93e7e69ae03ffe14476d0322d72", "title": "Assessing mental health issues on college campuses: preliminary findings from a pilot study", "text": "A significant fraction of college students suffer from serious mental health issues including depression, anxiety, self-harm and suicidal thought. The prevalence and severity of these issues among college students also appear to increase over time. However, most of these issues often remain undiagnosed, and as a result, untreated. One of the main reasons of this gap between illness and treatment results from the lack of reliable data over time. While health care services in college campuses have been focusing on detection of illness onset and appropriate interventions, their tools are mostly manual surveys which often fail to capture the granular details of contexts and behaviors which might provide important cues about illness onset. To overcome the limitations of these manual tools, we deployed a smartphone based tool or unobtrusive and continuous data collection from 22 students during an academic semester. In this paper, we present the preliminary findings from our study about assessing mental health on college campuses using passively sensed smartphone data."}
{"_id": "ac3388f02fad08d9703d8d926af08810ea72cc88", "title": "Incremental multi-step Q-learning", "text": "This paper presents a novel incremental algorithm that combines Q-learning, a well-known dynamic-programming based reinforcement learning method, with the TD(\u03bb) return estimation process, which is typically used in actor-critic learning, another well-known dynamic-programming based reinforcement learning method. The parameter \u03bb is used to distribute credit throughout sequences of actions, leading to faster learning and also helping to alleviate the non-Markovian effect of coarse state-space quatization. The resulting algorithm.Q(\u03bb)-learning, thus combines some of the best features of the Q-learning and actor-critic learning paradigms. The behavior of this algorithm has been demonstrated through computer simulations."}
{"_id": "5af06815baa4b8f53adc9dc22f6eb3f6f1ad8ff8", "title": "DTCTH: a discriminative local pattern descriptor for image classification", "text": "Despite lots of effort being exerted in designing feature descriptors, it is still challenging to find generalized feature descriptors, with acceptable discrimination ability, which are able to capture prominent features in various image processing applications. To address this issue, we propose a computationally feasible discriminative ternary census transform histogram (DTCTH) for image representation which uses dynamic thresholds to perceive the key properties of a feature descriptor. The code produced by DTCTH is more stable against intensity fluctuation, and it mainly captures the discriminative structural properties of an image by suppressing unnecessary background information. Thus, DTCTH becomes more generalized to be used in different applications with reasonable accuracies. To validate the generalizability of DTCTH, we have conducted rigorous experiments on five different applications considering nine benchmark datasets. The experimental results demonstrate that DTCTH performs as high as 28.08% better than the existing state-of-the-art feature descriptors such as GIST, SIFT, HOG, LBP, CLBP, OC-LBP, LGP, LTP, LAID, and CENTRIST."}
{"_id": "941a668cb77010e032a809861427fa8b1bee8ea0", "title": "Electrocardiogram (ECG) Signal Processing", "text": "Signal processing today is performed in the vast majority of systems for ECG analysis and interpretation. The objective of ECG signal processing is manifold and comprises the improvement of measurement accuracy and reproducibility (when compared with manual measurements) and the extraction of information not readily available from the signal through visual assessment. In many situations, the ECG is recorded during ambulatory or strenuous conditions such that the signal is corrupted by different types of noise, sometimes originating from another physiological process of the body. Hence, noise reduction represents another important objective of ECG signal processing; in fact, the waveforms of interest are sometimes so heavily masked by noise that their presence can only be revealed once appropriate signal processing has first been applied. Electrocardiographic signals may be recorded on a long timescale (i.e., several days) for the purpose of identifying intermittently occurring disturbances in the heart rhythm. As a result, the produced ECG recording amounts to huge data sizes that quickly fill up available storage space. Transmission of signals across public telephone networks is another application in which large amounts of data are involved. For both situations, data compression is an essential operation and, consequently, represents yet another objective of ECG signal processing. Signal processing has contributed significantly to a new understanding of the ECG and its dynamic properties as expressed by changes in rhythm and beat morphology. For example, techniques have been developed that characterize oscillations related to the cardiovascular system and reflected by subtle variations in heart rate. The detection of low-level, alternating changes in T wave amplitude is another example of oscillatory behavior that has been established as an indicator of increased risk for sudden, life-threatening arrhythmias. Neither of these two oscillatory signal properties can be perceived by the naked eye from a standard ECG printout. Common to all types of ECG analysis\u2014whether it concerns resting ECG interpretation, stress testing, ambulatory monitoring, or intensive care monitoring\u2014is a basic set of algorithms that condition the signal with respect to different types of noise and artifacts, detect heartbeats, extract basic ECG measurements of wave amplitudes and durations, and compress the data for efficient storage or transmission; the block diagram in Fig. 1 presents this set of signal processing algorithms. Although these algorithms are frequently implemented to operate in sequential order, information on the occurrence time of a heartbeat, as produced by the QRS detector, is sometimes incorporated into the other algorithms to improve performance. The complexity of each algorithm varies from application to application so that, for example, noise filtering performed in ambulatory monitoring is much more sophisticated than that required in resting ECG analysis. Once the information produced by the basic set of algorithms is available, a wide range of ECG applications exist where it is of interest to use signal processing for quantifying heart rhythm and beat morphology properties. The signal processing associated with two such applications\u2014high-resolution ECG and T wave alternans\u2014are briefly described at the end of this article. 
The interested reader is referred to, for example, Ref. 1, where a detailed description of other ECG applications can be found."}
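The filter/rectify/integrate/threshold chain that the article describes for heartbeat detection can be illustrated with a toy detector on a synthetic trace. All constants and filters below are illustrative stand-ins, not the algorithms surveyed in the article:

```python
import numpy as np

fs = 250                                    # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
ecg = np.zeros_like(t)
ecg[(t % 0.8) < 1 / fs] = 1.0               # synthetic beats at 75 bpm
ecg += 0.05 * np.random.randn(t.size)       # measurement noise

def moving_avg(x, n):
    return np.convolve(x, np.ones(n) / n, mode="same")

# crude band-pass as a difference of two moving averages
band = moving_avg(ecg, 5) - moving_avg(ecg, 31)
energy = moving_avg(band ** 2, int(0.12 * fs))   # integrate over ~120 ms

above = energy > 0.5 * energy.max()              # simple threshold
onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1   # rising edges
print("detected beats:", len(onsets))            # roughly one per true beat
```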
{"_id": "b550d628d4ead187cbb58875899392fe1667809c", "title": "A Business Intelligence System", "text": "\u0e23\u0e30\u0e1a\u0e1a Business Intelligence \u0e40\u0e1b\u0e47\u0e19\u0e0b\u0e2d\u0e1f\u0e41\u0e27\u0e23\u0e4c\u0e17\u0e35\u0e48\u0e40\u0e02\u0e49\u0e32\u0e44\u0e1b\u0e0a\u0e48\u0e27\u0e22\u0e2d \u0e32\u0e19\u0e27\u0e22\u0e04\u0e27\u0e32\u0e21\u0e2a\u0e30\u0e14\u0e27\u0e01\u0e43\u0e19\u0e14\u0e49\u0e32\u0e19\u0e01\u0e32\u0e23\u0e27\u0e40\u0e34\u0e04\u0e23\u0e32\u0e30\u0e2b\u0e4c\u0e02\u0e49\u0e2d\u0e21\u0e25\u0e39 \u0e41\u0e25\u0e30\u0e01\u0e32\u0e23\u0e15\u0e31\u0e14\u0e2a\u0e34\u0e19\u0e43\u0e08\u0e43\u0e19 \u0e02\u0e49\u0e2d\u0e21\u0e39\u0e25\u0e17\u0e32\u0e07\u0e18\u0e38\u0e23\u0e01\u0e34\u0e08\u0e17\u0e31\u0e49\u0e07\u0e2b\u0e21\u0e14\u0e02\u0e2d\u0e07\u0e43\u0e19\u0e2b\u0e19\u0e27\u0e48\u0e22\u0e07\u0e32\u0e19 \u0e0b\u0e36\u0e48\u0e07\u0e0a\u0e48\u0e27\u0e22\u0e43\u0e2b\u0e49\u0e1c\u0e39\u0e49\u0e43\u0e0a\u0e49\u0e07\u0e32\u0e19\u0e23\u0e30\u0e1a\u0e1a\u0e07\u0e32\u0e19\u0e43\u0e19\u0e2d\u0e07\u0e04\u0e4c\u0e01\u0e23\u0e17 \u0e32\u0e01\u0e32\u0e23\u0e15\u0e31\u0e14\u0e2a\u0e34\u0e19\u0e43\u0e08\u0e17\u0e32\u0e07\u0e18\u0e38\u0e23\u0e01\u0e34\u0e08\u0e17\u0e35\u0e48\u0e14\u0e35\u0e22\u0e34\u0e48\u0e07\u0e02\u0e49\u0e36\u0e19 \u0e21\u0e35\u0e1f\u0e31\u0e07\u0e01\u0e4c\u0e0a\u0e48\u0e31\u0e19\u0e01\u0e32\u0e23\u0e43\u0e0a\u0e49 \u0e07\u0e32\u0e19\u0e15\u0e48\u0e32\u0e07\u0e46\u0e04\u0e23\u0e1a\u0e16\u0e49\u0e27\u0e19\u0e44\u0e21\u0e48\u0e27\u0e48\u0e32\u0e08\u0e30\u0e40\u0e1b\u0e47\u0e19 Data Integration, Reporting, ad-hoc Query, Analysis Dashboard, Data Mining \u0e17\u0e31\u0e49\u0e07\u0e19\u0e35\u0e49\u0e40\u0e1e\u0e37\u0e48\u0e2d\u0e25\u0e14\u0e01\u0e32\u0e23\u0e17 \u0e32\u0e07\u0e32\u0e19\u0e17\u0e48\u0e35 \u0e0b\u0e49 \u0e32\u0e0b\u0e49\u0e2d\u0e19 \u0e25\u0e14\u0e04\u0e27\u0e32\u0e21\u0e1c\u0e34\u0e14\u0e1e\u0e25\u0e32\u0e14\u0e41\u0e25\u0e30\u0e04\u0e27\u0e32\u0e21\u0e02\u0e31\u0e14\u0e41\u0e22\u0e49\u0e07\u0e17\u0e48\u0e35\u0e2d\u0e32\u0e08\u0e08\u0e30\u0e40\u0e01\u0e34\u0e14\u0e02\u0e36\u0e49\u0e19\u0e40\u0e19\u0e37\u0e48\u0e2d\u0e07\u0e08\u0e32\u0e01\u0e01\u0e32\u0e23\u0e2a\u0e37\u0e48\u0e2d\u0e2a\u0e32\u0e23\u0e23\u0e30\u0e2b\u0e27\u0e48\u0e32\u0e07\u0e2b\u0e19\u0e48\u0e27\u0e22\u0e07\u0e32\u0e19\u0e20\u0e32\u0e22\u0e43\u0e19\u0e01\u0e31\u0e19\u0e40\u0e2d\u0e07 \u0e2b\u0e23\u0e37\u0e2d\u0e01\u0e31\u0e1a\u0e2b\u0e19\u0e48\u0e27\u0e22\u0e07\u0e32\u0e19 \u0e20\u0e32\u0e22\u0e19\u0e2d\u0e01"}
{"_id": "a102c48fd4c6336e9deb826402d411e30fe72595", "title": "Understanding and Evaluating User Satisfaction with Music Discovery", "text": "We study the use and evaluation of a system for supporting music discovery, the experience of finding and listening to content previously unknown to the user. We adopt a mixed methods approach, including interviews, unsupervised learning, survey research, and statistical modeling, to understand and evaluate user satisfaction in the context of discovery. User interviews and survey data show that users' behaviors change according to their goals, such as listening to recommended tracks in the moment, or using recommendations as a starting point for exploration. We use these findings to develop a statistical model of user satisfaction at scale from interactions with a music streaming platform. We show that capturing users' goals, their deviations from their usual behavior, and their peak interactions on individual tracks are informative for estimating user satisfaction. Finally, we present and validate heuristic metrics that are grounded in user experience for online evaluation of recommendation performance. Our findings, supported with evidence from both qualitative and quantitative studies, reveal new insights about user expectations with discovery and their behavioral responses to satisfying and dissatisfying systems."}
{"_id": "b681da8d4be586f6ed6658038c81cdcde1d54406", "title": "Dual-Band and Polarization-Flexible Cavity Antenna Based on Substrate Integrated Waveguide", "text": "In this letter, a novel dual-band and polarization-flexible substrate integrated waveguide (SIW) cavity antenna is proposed. The SIW cavity used for the antenna is excited by a conventional TE120 mode for its first resonance. With the intervention of the slot, a second resonance excited by a modified- TE120 mode is also generated, thereby providing a broadside radiation pattern at two resonant frequencies. In addition, the proposed antenna has two orthogonal feeding lines. Therefore, it is possible to provide any six major polarization states. In this letter, three major polarization cases are simulated and compared to measured results. Since modern communication systems require multifunctional antennas, the proposed antenna concept is a promising candidate."}
{"_id": "16d228220eda4ae28e78e3d27a591d2bbe2bbeb1", "title": "EEVi \u2013Framework and Guidelines to Evaluate the Effectiveness of Cyber-Security Visualization", "text": "Cyber-security visualization aims to reduce security analysts\u2019 workload by presenting information as visual analytics instead of a string of text and characters. However, the adoption of the resultant visualizations by security analysts, is not widespread. The literature indicates a lack of guidelines and standardized evaluation techniques for effective visualization in cyber-security, as a reason for the low adoption rate. Consequently, this article addresses the research gap by introducing a framework called EEVi for effective cyber-security visualizations for the performed task. The term \u2018effective visualization\u2019 is defined as the features of visualization that are critical for an analyst to competently perform a certain task. EEVi has been developed by analyzing qualitative data which led to the formation of cognitive relationships (called links) between data. These relationships acted as guidelines for effective cyber-security visualization to perform tasks. The methodology to develop this framework can be applied to other fields to understand cognitive relationships between data. Additionally, the analysis of the framework presented, demonstrates how EEVi can be put into practice using the guidelines for effective cybersecurity visualization. The guidelines can be used to guide visualization developers to create effective visualizations for security analysts based on their requirements."}
{"_id": "71d1094206db2d3504bc9ba736646ee1cc0beed0", "title": "Google's PageRank and beyond - the science of search engine rankings", "text": "Why doesn't your home page appear on the first page of search results, even when you query your own name? How do other web pages always appear at the top? What creates these powerful rankings? And how? The first book..."}
{"_id": "683a1cab80c2a64e74d41e46c10cacfab1b23696", "title": "Videogame Addiction and its Treatment", "text": "For many, the concept of videogame addiction seems far-fetched, particularly if their concepts and definitions of addiction involve the taking of drugs. This paper overviews the small, but growing area of videogame addiction and then examines the treatment options available for those affected by excessive videogame playing. An overview of the available empirical literature appears to indicate that adverse effects are likely to affect only a relatively small subgroup of players and that frequent players are the most at-risk for developing problems. Worldwide, there are relatively few practitioners that specialise in the treatment of videogame addiction and this may be because there are so few players who are genuinely addicted to playing videogames. However, the Internet may be facilitating excessive online game playing as evidenced by the increasing number of specialist addiction treatment clinics for online videogame addiction. This paper overviews the various approaches that have been used as an intervention in treating videogame addicts, including online support groups, 12-step programmes, behavioural and cognitive-behavioural therapies, and motivational interviewing."}
{"_id": "0d8a5addbd17d2c7c8043d8877234675da19938a", "title": "Activity Forecasting", "text": "We address the task of inferring the future actions of people from noisy visual input. We denote this task activity forecasting. To achieve accurate activity forecasting, our approach models the effect of the physical environment on the choice of human actions. This is accomplished by the use of state-of-the-art semantic scene understanding combined with ideas from optimal control theory. Our unified model also integrates several other key elements of activity analysis, namely, destination forecasting, sequence smoothing and transfer learning. As proof-of-concept, we focus on the domain of trajectory-based activity analysis from visual input. Experimental results demonstrate that our model accurately predicts distributions over future actions of individuals. We show how the same techniques can improve the results of tracking algorithms by leveraging information about likely goals and trajectories."}
{"_id": "10b987b076fe56e08c89693cdb7207c13b870540", "title": "Anticipating Visual Representations from Unlabeled Video", "text": "Anticipating actions and objects before they start or appear is a difficult problem in computer vision with several real-world applications. This task is challenging partly because it requires leveraging extensive knowledge of the world that is difficult to write down. We believe that a promising resource for efficiently learning this knowledge is through readily available unlabeled video. We present a framework that capitalizes on temporal structure in unlabeled video to learn to anticipate human actions and objects. The key idea behind our approach is that we can train deep networks to predict the visual representation of images in the future. Visual representations are a promising prediction target because they encode images at a higher semantic level than pixels yet are automatic to compute. We then apply recognition algorithms on our predicted representation to anticipate objects and actions. We experimentally validate this idea on two datasets, anticipating actions one second in the future and objects five seconds in the future."}
{"_id": "385750bcf95036c808d63db0e0b14768463ff4c6", "title": "Autoencoding beyond pixels using a learned similarity metric", "text": "We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic."}
{"_id": "980cf8e3b59dd0923f7e7cf66d2bec4102d7035f", "title": "Unsupervised Learning for Physical Interaction through Video Prediction", "text": "A core challenge for an agent learning to interact with the world is to predict how its actions affect objects in its environment. Many existing methods for learning the dynamics of physical interactions require labeled object information. However, to scale real-world interaction learning to a variety of scenes and objects, acquiring labeled data becomes increasingly impractical. To learn about physical object motion without labels, we develop an action-conditioned video prediction model that explicitly models pixel motion, by predicting a distribution over pixel motion from previous frames. Because our model explicitly predicts motion, it is partially invariant to object appearance, enabling it to generalize to previously unseen objects. To explore video prediction for real-world interactive agents, we also introduce a dataset of 50, 000 robot interactions involving pushing motions, including a test set with novel objects. In this dataset, accurate prediction of videos conditioned on the robot\u2019s future actions amounts to learning a \u201cvisual imagination\u201d of different futures based on different courses of action. Our experiments show that our proposed method not only produces more accurate video predictions, but also more accurately predicts object motion, when compared to prior methods."}
{"_id": "cf18287e79b1fd73cd333fc914bb24c00a537f4c", "title": "Self-Supervised Visual Planning with Temporal Skip Connections", "text": "In order to autonomously learn wide repertoires of complex skills, robots must be able to learn from their own autonomously collected data, without human supervision. One learning signal that is always available for autonomously collected data is prediction. If a robot can learn to predict the future, it can use this predictive model to take actions to produce desired outcomes, such as moving an object to a particular location. However, in complex open-world scenarios, designing a representation for prediction is difficult. In this work, we instead aim to enable self-supervised robot learning through direct video prediction: instead of attempting to design a good representation, we directly predict what the robot will see next, and then use this model to achieve desired goals. A key challenge in video prediction for robotic manipulation is handling complex spatial arrangements such as occlusions. To that end, we introduce a video prediction model that can keep track of objects through occlusion by incorporating temporal skipconnections. Together with a novel planning criterion and action space formulation, we demonstrate that this model substantially outperforms prior work on video prediction-based control. Our results show manipulation of objects not seen during training, handling multiple objects, and pushing objects around obstructions. These results represent a significant advance in the range and complexity of skills that can be performed entirely with self-supervised robot learning."}
{"_id": "920c3a5ad8bdf57c6300dcf81eb6dd361db15d88", "title": "Integrated staffing and scheduling for an aircraft line maintenance problem", "text": "This paper presents a method for constructing the workforce schedules of an aircraft maintenance company. The method integrates both the staffing and the scheduling decision. We formulate the optimization problem using a mixed integer linear programming approach and solve it heuristically using a branch-and-bound enumeration framework. Each node of the tree represents a combination of teams and team sizes. An extensive computational experiment is added to illustrate the algorithmic performance using a set of test instances that are derived from a real-life setting. We conclude this paper by discussing some results and by addressing interesting future extensions to the model."}
{"_id": "9b305e918be3401e79c6dc019cf1e1563fde9a31", "title": "Local pattern transformation based feature extraction techniques for classification of epileptic EEG signals", "text": "Background and objective: According to the World Health Organization (WHO) epilepsy affects approximately 45\u201350 million people. Electroencephalogram (EEG) records the neurological activity in the brain and it is used to identify epilepsy. Visual inspection of EEG signals is a time-consuming process and it may lead to human error. Feature extraction and classification are two main steps that are required to build an automated epilepsy detection framework. Feature extraction reduces the dimensions of the input signal by retaining informative features and the classifier assigns a proper class label to the extracted feature vector. Our aim is to present effective feature extraction techniques for automated epileptic EEG signal classification. Methods: In this study, two effective feature extraction techniques (Local Neighbor Descriptive Pattern [LNDP] and One-dimensional Local Gradient Pattern [1D-LGP]) have been introduced to classify epileptic EEG signals. The classification between epileptic seizure and non-seizure signals is performed using different machine learning classifiers. The benchmark epilepsy EEG dataset provided by the University of Bonn is used in this research. The classification performance is evaluated using 10-fold cross validation. The classifiers used are the Nearest Neighbor (NN), Support Vector Machine (SVM), Decision Tree (DT) and Artificial Neural Network (ANN). The experiments have been repeated for 50 times. Results: LNDP and 1D-LGP feature extraction techniques with ANN classifier achieved the average classification accuracy of 99.82% and 99.80%, respectively, for the classification between normal and epileptic EEG signals. Eight different experimental cases were tested. The classification results were better than those of some existing methods. Conclusions: This study suggests that LNDP and 1D-LGP could be effective feature extraction techniques for the classification of epileptic EEG signals. . Introduction Epilepsy is a central nervous system disorder and it has been eported that approximately 45\u201350 million people suffer from this isorder [1]. EEG captures the neurological activity inside the brain y placing electrodes on the scalp and helps in detection of epilepic seizure [2,3]. Epileptic seizure detection can be considered as classification problem where the task is to classify an input sigal either as an epileptic seizure signal or as a non-seizure signal. EG signals are usually recorded for long durations to carry out n analysis. Detection of epileptic seizure in these long duration EG signals requires expertise. In the absence of expert, particuarly in emergencies, seizure detection becomes a challenging task. herefore, providing a framework for automated epileptic seizure \u2217 Corresponding author: E-mail address: abegiitdhanbad@gmail.com (A.K. Jaiswal). ttp://dx.doi.org/10.1016/j.bspc.2017.01.005 746-8094/\u00a9 2017 Elsevier Ltd. All rights reserved. \u00a9 2017 Elsevier Ltd. All rights reserved. detection is of great significance. A number of methods have been proposed in literature for the classification of epileptic EEG signals. The methods can be categorized into following domains of signal analysis: Time domain analysis: Some techniques in the field of epileptic EEG signal classification belong to this category. 
Techniques like linear prediction [4], Fractional linear prediction [5], Principal component analysis based radial basis function neural network [6], etc. have been proposed for epileptic seizure detection. Frequency domain analysis: With the assumption that the EEG signals are stationary signals, Polat and Gunes [7] introduced a hybrid framework based on frequency domain analysis with Fourier transform and decision tree. In the hybrid model, Fourier transform was used for feature extraction and decision tree was used for the classification. Srinivasan et al. [8] used features extracted from time domain and frequency domain for seizure detection in EEG signals. The extracted features were fed to the Elman neural network to detect epileptic seizure. 82 A.K. Jaiswal, H. Banka / Biomedical Signal Processing and Control 34 (2017) 81\u201392 0 50"}
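A sketch of a 1D local-pattern transform in the spirit of 1D-LGP follows: each sample is encoded by comparing its absolute gradients to the neighbors against their mean, and the histogram of codes becomes the feature vector. This is our reading for illustration; the paper's exact definitions of LNDP and 1D-LGP may differ in details:

```python
import numpy as np

def lgp_1d(signal, n_neighbors=4):
    """Code each sample by thresholding neighbour gradients at their mean,
    then return the histogram of codes as the feature vector."""
    half = n_neighbors // 2
    codes = []
    for i in range(half, len(signal) - half):
        nbrs = np.array([signal[i + d] for d in range(-half, half + 1) if d != 0])
        grads = np.abs(nbrs - signal[i])
        bits = (grads > grads.mean()).astype(int)
        codes.append(int("".join(map(str, bits)), 2))
    return np.bincount(codes, minlength=2 ** n_neighbors)

rng = np.random.default_rng(0)
eeg = rng.standard_normal(1024)          # stand-in for one EEG segment
feature = lgp_1d(eeg)
print(feature.shape, feature.sum())      # (16,) and 1020 encoded samples
```

Histograms of such codes are what the NN/SVM/DT/ANN classifiers above consume.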
{"_id": "89701a3b04c3f102ebec83db3249b20791eacb38", "title": "Obtaining accurate trajectories from repeated coarse and inaccurate location logs", "text": "Context awareness is a key property for enabling context aware services. For a mobile device, the user's location or trajectory is one of the crucial contexts. One common challenge for detecting location or trajectory by mobile devices is to manage the tradeoff between accuracy and power consumption. Typical approaches are (1) controlling the frequency of usage of sensors and (2) sensor fusion technique. The algorithm proposed in this paper takes a different approach to improve the accuracy by merging repeatedly measured coarse and inaccurate location data from cell tower. The experimental result shows that the mean error distance between the detected trajectory and the ground truth is improved from 44m to 10.9m by merging data from 41 days' measurement."}
{"_id": "7d96bb01a28bd8120930a79e3843d1486d03442b", "title": "A flame detection algorithm based on Bag-of-Features in the YUV color space", "text": "Computer vision-based fire detection involves flame detection and smoke detection. This paper proposes a new flame detection algorithm, which is based on a Bag-of-Features technique in the YUV color space. Inspired by that the color of flame in image and video will fall in certain regions in the color space, models of flame pixels and non-flame pixels are established based on code book in the training phase in our proposal. In the testing phase, the input image is split into some N\u00d7N blocks and each block is classified respectively. In each N\u00d7N block, the pixels values in the YUV color space are extracted as features, just as in the training phase. According to the experimental results, our proposed method can reduce the number of false alarms greatly compared with an alternative algorithm, while it also ensures the accurate classification of positive samples. The classification performance of our proposed method is better than that of alternative algorithms."}
{"_id": "b6aa3b1341c43a6ba0382cd14e06a34548c3f17f", "title": "Multi sensor system for automatic fall detection", "text": "To reduce elderly falling risk in the area where video surveillance system is unavailable due to privacy reason and so on, we need a monitor system which does no recognize what they are doing. This paper presents an elderly-falling detection system using ultrasonic sensors. The ultrasonic technology-based multi sensors are couple of receiver and transmitter together, which are connected to Arduino microcontroller in order to send the elderly person's fall related signal using WiFi to the processing unit. The sensors are positioned as an array on the roof and wall in the room. The signal is analyzed to recognize human by sensing distance, and detect the action such as standing, sitting, and falling by pattern matching with the standard templates of top and side signal. In experiments, the proposed system can recognize accuracy of falling detection is 93% approximately."}
{"_id": "20be06193d0d47a7af6943f06a0e9f4eb5c69ebb", "title": "Learning Globally-Consistent Local Distance Functions for Shape-Based Image Retrieval and Classification", "text": "We address the problem of visual category recognition by learning an image-to-image distance function that attempts to satisfy the following property: the distance between images from the same category should be less than the distance between images from different categories. We use patch-based feature vectors common in object recognition work as a basis for our image-to-image distance functions. Our large-margin formulation for learning the distance functions is similar to formulations used in the machine learning literature on distance metric learning, however we differ in that we learn local distance functions\u00bfa different parameterized function for every image of our training set\u00bfwhereas typically a single global distance function is learned. This was a novel approach first introduced in Frome, Singer, & Malik, NIPS 2006. In that work we learned the local distance functions independently, and the outputs of these functions could not be compared at test time without the use of additional heuristics or training. Here we introduce a different approach that has the advantage that it learns distance functions that are globally consistent in that they can be directly compared for purposes of retrieval and classification. The output of the learning algorithm are weights assigned to the image features, which is intuitively appealing in the computer vision setting: some features are more salient than others, and which are more salient depends on the category, or image, being considered. We train and test using the Caltech 101 object recognition benchmark."}
{"_id": "d36eafff37536ed8adfa791eeb6f6ef6e1705c0d", "title": "A demonstration of the GUINNESS: A GUI based neural NEtwork SyntheSizer for an FPGA", "text": "The GUINNESS is a tool flow for the deep neural network toward FPGA implementation [3,4,5] based on the GUI (Graphical User Interface) including both the binarized deep neural network training on GPUs and the inference on an FPGA. It generates the trained the Binarized deep neural network [2] on the desktop PC, then, it generates the bitstream by using standard the FPGA CAD tool flow. All the operation is done on the GUI, thus, the designer is not necessary to write any scripts to descript the neural network structure, training behaviour, only specify the values for hyper parameters. After finished the training, it automatically generates C++ codes to synthesis the bitstream using the Xilinx SDSoC system design tool flow. Thus, our tool flow is suitable for the software programmers who are not familiar with the FPGA design."}
{"_id": "a85ad1a2ee829c315be6ded0eee8a1dadc21a666", "title": "DR(eye)VE: A Dataset for Attention-Based Tasks with Applications to Autonomous and Assisted Driving", "text": "Autonomous and assisted driving are undoubtedly hot topics in computer vision. However, the driving task is extremely complex and a deep understanding of drivers' behavior is still lacking. Several researchers are now investigating the attention mechanism in order to define computational models for detecting salient and interesting objects in the scene. Nevertheless, most of these models only refer to bottom up visual saliency and are focused on still images. Instead, during the driving experience the temporal nature and peculiarity of the task influence the attention mechanisms, leading to the conclusion that real life driving data is mandatory. In this paper we propose a novel and publicly available dataset acquired during actual driving. Our dataset, composed by more than 500,000 frames, contains drivers' gaze fixations and their temporal integration providing task-specific saliency maps. Geo-referenced locations, driving speed and course complete the set of released data. To the best of our knowledge, this is the first publicly available dataset of this kind and can foster new discussions on better understanding, exploiting and reproducing the driver's attention process in the autonomous and assisted cars of future generations."}
{"_id": "4f6fa087d402c3a1bfcc5c1dc417abfacf55db2d", "title": "Tooth structure removal associated with various preparation designs for anterior teeth.", "text": "STATEMENT OF PROBLEM\nThe conservation of sound tooth structure helps preserve tooth vitality and reduce postoperative sensitivity. Innovative preparation designs, like those for porcelain laminate veneers, are much less invasive than conventional complete-coverage crown preparations. However, no study has quantified the amount of tooth structure removed during these preparations.\n\n\nPURPOSE\nThe purpose of this study was to quantify and compare the amount of tooth structure removed when various innovative and conventional tooth preparation designs were completed on different teeth.\n\n\nMATERIAL AND METHOD\n. A new comprehensive tooth preparation design classification system was introduced. Typodont resin teeth representing the maxillary left central incisor, maxillary left canine, and mandibular left central incisor were prepared with the following designs: partial (V1), traditional (V2), extended (V3), and complete (V4) porcelain laminate veneer preparations; resin-bonded retainer preparation with grooves (A1) and with wing/grooves (A2); all-ceramic crown preparation with 0.8 mm axial reduction and tapering chamfer finish line (F1), all-ceramic crown preparation with 1.0 mm axial reduction and rounded shoulder finish line (F2), and metal-ceramic crown with 1.4 mm axial reduction and facial shoulder finish line (F3). After tooth preparations (10 per group), the crown was separated from the root at the CEJ. The removed coronal tooth structure was measured with gravimetric analysis. Means and standard deviations for tooth structure removal with different preparation designs were calculated and analyzed with analysis of variance at a significance level of P<.05.\n\n\nRESULTS\nSignificant differences in the amount of tooth structure removal were noted between preparation designs. Ceramic veneers and resin-bonded prosthesis retainers were the least invasive preparation designs, removing approximately 3% to 30% of the coronal tooth structure by weight. Approximately 63% to 72% of the coronal tooth structure was removed when teeth were prepared for all-ceramic and metal-ceramic crowns. For a single crown restoration, the tooth structure removal required for an F3 preparation (metal-ceramic crown) was 4.3 times greater than for a V2 preparation (porcelain laminate veneer, facial surface only) and 2.4 times greater than for a V4 preparation (more extensive porcelain laminate veneer).\n\n\nCONCLUSION\nWithin the limitations of this study, tooth preparations for porcelain laminate veneers and resin-bonded prostheses required approximately one-quarter to one-half the amount of tooth reduction of conventional complete-coverage crowns."}
{"_id": "f84cb8b29b6d28af09fbc9656aed433ed0d064ae", "title": "InTouch Tactile Tales: Haptic Feedback and Long-Distance Storytelling", "text": "The squeeze of an arm during a tense moment in a story -- an absentminded caress on the back of the hand while listening to an engaging tale -- the physical presence and interpersonal touch of a loved one can be an important part of reading to a child. However the thrill of the tale is often lost and the intimacy diluted when separated families have to resort to the flatness of a video-call or mobile app to share a bedtime story with their loved one. This project is setting out to create a physical story portal, a magical object which uses technologies to link teller and listener through sound, light and touch to help bridge this gap when families are separated. While the development of the InTouch reading system is still in the preliminary stages, a prototype handheld device has been created to allow adults and children to share touch-based messages during story time. It has been trialled with several families in the lab, and the initial results have pointed to both guidance and design considerations that should be taken into account for such a system to be successfully deployed."}
{"_id": "822320c87f5c91ab80af31ff1497f8193fa2f4f4", "title": "Quaternion Zernike moments and their invariants for color image analysis and object recognition", "text": "Moments and moment invariants have become a powerfu l tool in pattern recognition and image analysis. C onventional methods to deal with color images are based on RGB decomposition or graying, which may lose some signi ficant color information. In this paper, by using the algebra of quaternions, we introduce the quaternion Zernike m oments (QZMs) to deal with the color images in a holistic manner. It is s hown that the QZMs can be obtained from the convent ional Zernike moments of each channel. We also provide the theore tical framework to construct a set of combined inva riants with respect to rotation, scaling and translation (RST) transfor mation. Experimental results are provided to illust rate the efficiency of the proposed descriptors."}
{"_id": "7382e1173b3507a63009d335ff2944644c1d7cbd", "title": "Bilayer Phosphorene: Effect of Stacking Order on Bandgap and Its Potential Applications in Thin-Film Solar Cells.", "text": "Phosphorene, a monolayer of black phosphorus, is promising for nanoelectronic applications not only because it is a natural p-type semiconductor but also because it possesses a layer-number-dependent direct bandgap (in the range of 0.3 to 1.5 eV). On basis of the density functional theory calculations, we investigate electronic properties of the bilayer phosphorene with different stacking orders. We find that the direct bandgap of the bilayers can vary from 0.78 to 1.04 eV with three different stacking orders. In addition, a vertical electric field can further reduce the bandgap to 0.56 eV (at the field strength 0.5 V/\u00c5). More importantly, we find that when a monolayer of MoS2 is superimposed with the p-type AA- or AB-stacked bilayer phosphorene, the combined trilayer can be an effective solar-cell material with type-II heterojunction alignment. The power conversion efficiency is predicted to be \u223c18 or 16% with AA- or AB-stacked bilayer phosphorene, higher than reported efficiencies of the state-of-the-art trilayer graphene/transition metal dichalcogenide solar cells."}
{"_id": "e141967830295d9a1a2f1442bf62fc936a44bdcd", "title": "A General Empirical Solution to the Macro Software Sizing and Estimating Problem", "text": "Application software development has been an area of organizational effort that has not been amenable to the normal managerial and cost controls. Instances of actual costs of several times the initial budgeted cost, and a time to initial operational capability sometimes twice as long as planned are more often the case than not."}
{"_id": "244fc78ce607812edb90290727dab4d33377e986", "title": "Transfer of mitochondria via tunneling nanotubes rescues apoptotic PC12 cells", "text": "Tunneling nanotubes (TNTs) are F-actin-based membrane tubes that form between cells in culture and in tissues. They mediate intercellular communication ranging from electrical signalling to the transfer of organelles. Here, we studied the role of TNTs in the interaction between apoptotic and healthy cells. We found that pheochromocytoma (PC) 12 cells treated with ultraviolet light (UV) were rescued when cocultured with untreated PC12 cells. UV-treated cells formed a different type of TNT with untreated PC12 cells, which was characterized by continuous microtubule localized inside these TNTs. The dynamic behaviour of mCherry-tagged end-binding protein 3 and the accumulation of detyrosinated tubulin in these TNTs indicate that they are regulated structures. In addition, these TNTs show different biophysical properties, for example, increased diameter allowing dye entry, prolonged lifetime and decreased membrane fluidity. Further studies demonstrated that microtubule-containing TNTs were formed by stressed cells, which had lost cytochrome c but did not enter into the execution phase of apoptosis characterized by caspase-3 activation. Moreover, mitochondria colocalized with microtubules in TNTs and transited along these structures from healthy to stressed cells. Importantly, impaired formation of TNTs and untreated cells carrying defective mitochondria were unable to rescue UV-treated cells in the coculture. We conclude that TNT-mediated transfer of functional mitochondria reverse stressed cells in the early stages of apoptosis. This provides new insights into the survival mechanisms of damaged cells in a multicellular context."}
{"_id": "a0ff514a8a64ba5a7cd7430ca04245fd037d040c", "title": "Business Analytics in the Context of Big Data: A Roadmap for Research", "text": "This paper builds on academic and industry discussions from the 2012 and 2013 pre-ICIS events: BI Congress III and the Special Interest Group on Decision Support Systems (SIGDSS) workshop, respectively. Recognizing the potential of \u201cbig data\u201d to offer new insights for decision making and innovation, panelists at the two events discussed how organizations can use and manage big data for competitive advantage. In addition, expert panelists helped to identify research gaps. While emerging research in the academic community identifies some of the issues in acquiring, analyzing, and using big data, many of the new developments are occurring in the practitioner community. We bridge the gap between academic and practitioner research by presenting a big data analytics framework that depicts a process view of the components needed for big data analytics in organizations. Using practitioner interviews and literature from both academia and practice, we identify the current state of big data research guided by the framework and propose potential areas for future research to increase the relevance of academic research to practice."}
{"_id": "1da201da419a6401328427b5725cc5736d130798", "title": "The Price of Anarchy in Supply Chains: Quantifying the Efficiency of Price-Only Contracts", "text": "I this paper, we quantify the efficiency of decentralized supply chains that use price-only contracts. With a price-only contract, a buyer and a seller agree only on a constant transaction price, without specifying the amount that will be transferred. It is well known that these contracts do not provide incentives to the parties to coordinate their inventory/capacity decisions. We measure efficiency with the price of anarchy (PoA), defined as the largest ratio of profits between the integrated supply chain (that is, fully coordinated) and the decentralized supply chain. We characterize the efficiency of various supply chain configurations: push or pull inventory positioning, two or more stages, serial or assembly systems, single or multiple competing suppliers, and single or multiple competing retailers."}
{"_id": "adfcb54a7b54a9d882f37355de58d14fd1e44f34", "title": "Learning Semantic Script Knowledge with Event Embeddings", "text": "Induction of common sense knowledge about prototypical sequences of events has recently received much attention (e.g., (Chambers & Jurafsky, 2008; Regneri et al., 2010)). Instead of inducing this knowledge in the form of graphs, as in much of the previous work, in our method, distributed representations of event realizations are computed based on distributed representations of predicates and their arguments, and then these representations are used to predict prototypical event orderings. The parameters of the compositional process for computing the event representations and the ranking component of the model are jointly estimated from texts. We show that this approach results in a substantial boost in ordering performance with respect to previous methods."}
{"_id": "331e3a964f8bb6f878a4b545658ed79c96440eac", "title": "Experimental study platform of the automatic transfer switch used to power supplies back-up", "text": "This paper presents the practical implementation of an experimental platform used for the automatic transfer switch to the back-up power source. The automatic transfer switch concept represents the procedure of fast connection of the electrical consumers to a back-up electric power source. The connection to the back-up power source is necessary in the event of failure of the main power source. The experimental platform is designed to operate in automatic standard mode, manual mode and through a programmable logic controller. The selection's mode of operation is determined by the operation's key. The simulation of grid faults on the power supply is done by means of the controls and the variable voltage regulators located on the experimental platform. The automatic transfer switch is conditioned by the effective status of the power supply and minimum voltage allowed by devices. The signaling status of the power supply is performed based on the information received from relays of interface and programmable logic controllers. The platform presents to users two technologies able to achieve the same function. Thus, in case of the classic mode of the automatic transfer switch, the wired technology is used and this involves many components and related links. In case of the automatic transfer switch based on programmable logic controller, a technology based on programming is also implemented. Using this platform, the advantage of implementation of programmable logic controllers versus wired technology in industrial applications can be observed."}
{"_id": "4da75a1153bd56a2997a7fd66a1d722179114d50", "title": "Power to the people? The evolving recognition of human aspects of security", "text": "It is perhaps unsurprising to find much of the focus in IT and computer security being drawn towards the technical aspects of the discipline. However, it is increasingly recognised that technology alone cannot deliver a complete solution, and there is also a tangible need to address human aspects. At the core, people must understand the threats they face and be able to use the protection available to them, and although this has not been entirely ignored, it has not received the level of attention that it merits either. Indeed, security surveys commonly reveal that the more directly user-facing aspects such as policy, training and education are prone to receiving significantly less attention than technical controls such as firewalls, antivirus and intrusion detection. The underlying reason for such disparity is that the human aspects are in many ways a more challenging problem to approach, not least because they cannot be easily targeted with a product-based solution. There is also a direct overlap into the technical area, with issues such as the usability and acceptability of technology solutions having a direct impact upon the actual protection that they are able to deliver. This paper explores these themes, highlighting the need for human aspects to form part of a holistic security strategy alongside the necessary technologies. Taking the specific examples of security awareness and two user-facing technical controls (user authentication and antivirus), the discussion examines how things have evolved to the present day and considers how they need to be positioned for the future. a 2012 Elsevier Ltd. All rights reserved."}
{"_id": "cf4930edad2a38bfc3b9aae7d678d46080bbcc1d", "title": "Modeling Label Ambiguity for Neural List-Wise Learning to Rank", "text": "List-wise learning to rank methods are considered to be the stateof-the-art. One of the major problems with these methods is that the ambiguous nature of relevance labels in learning to rank data is ignored. Ambiguity of relevance labels refers to the phenomenon that multiple documents may be assigned the same relevance label for a given query, so that no preference order should be learned for those documents. In this paper we propose a novel sampling technique for computing a list-wise loss that can take into account this ambiguity. We show the e ectiveness of the proposed method by training a 3-layer deep neural network. We compare our new loss function to two strong baselines: ListNet and ListMLE. We show that our method generalizes better and signi cantly outperforms other methods on the validation and test sets. CCS CONCEPTS \u2022Computingmethodologies\u2192Neural networks; \u2022Information systems \u2192Learning to rank;"}
{"_id": "f624f3a95eea3614ef4abaac4ad2bf4206c353b3", "title": "The incredible queen of green: Nutritive value and therapeutic potential of Moringa oleifera Lam.", "text": "Moringa oleifera Lam. (synonym: Moringa pterygosperma Gaertn.) (M. oleifera) known in 82 countries by 210 different names is well known by the name of the miracle tree. It is one of the extensively cultivated and highly valued members of Moringaceae, a monogeneric family, comprising of thirteen perennial angiosperm shrubs and trees[1-3]. Moringa tree is endemic to the Himalayan foothills of Pakistan, Afghanistan, Bangladesh and India, and is cultivated throughout tropics. It is recognized by a mixture of vernacular names, among of them, drumstick tree, horseradish tree, ben oil tree and malunggay are the most commonly reported in the history of this plant[4]. In Pakistan, Sohanjna is the vernacular name of M. oleifera[5,6]. It yields low quality timber, as it is a softwood tree, but it is belived for centuries that this plant possesses a number of industrial, traditional and medicinal benefits[7]. Fertilizer (seed cake), green manure (leaves), blue dye (wood), fencing (living trees), domestic cleaning agent (crushed leaves), alley cropping, animal feed (leaves and seed cake), medicine (all plant parts), foliar nutrient (juice expressed from the leaves), gum (tree trunks), biogas (leaves), biopesticide, ornamental plantings, water purifier (powdered seeds), honey (flower nectar) are different uses of this plant reported in literature[2,6,8-20]. M. oleifera is a good source of aminoacids and contains a number of important minerals, \u03b2-carotene, various phenolics and vitamins[21,22]. M. oleifera is also an important vegetable food article of trade, particularly in Pakistan, Hawaii, Philippines, Africa and India which has a huge deliberation as the natural nutrition[1,23]. In South Asia, various plant parts, including leaves, bark, root, gum, flowers, pods, seeds and seed oil are used for the variety of infectious and inflammatory disorders along with hepatorenal, gastrointestinal, hematological and cardiovascular diseases[22,24-26]. Various therapeutic potentials are also credited to different parts of ARTICLE INFO ABSTRACT"}
{"_id": "2caf7d062968009777e09f0759e3ce7aca20dd74", "title": "Pricing Services Subject to Congestion: Charge Per-Use Fees or Sell Subscriptions?", "text": "Should a firm charge on a per-use basis or sell subscriptions when its service experiences congestion? Queueing-based models of pricing primarily focus on charging a fee per use for the service, in part because per-use pricing enables the firm to regulate congestion\u2014raising the per-use price naturally reduces how frequently customers use a service. The firm has less control over usage with subscription pricing (by definition, with subscription pricing customers are not charged proportional to their actual usage), and this is a disadvantage when customers dislike congestion. However, we show that subscription pricing is more effective at earning revenue. Consequently, the firm may be better off with subscription pricing, even, surprisingly, when congestion is intuitively most problematic for the firm: e.g., as congestion becomes more disliked by consumers. We show that the absolute advantage of subscription pricing relative to per-use pricing can be substantial, whereas the potential advantage of per-use pricing is generally modest. Subscription pricing becomes relatively more attractive if consumers become more heterogeneous in their service rates (e.g., some know they are \u201cheavy\u201d users and others know they are \u201clight\u201d users) as long as capacity is fixed, the potential utilization is high, and the two segments have substantially different usage rates. Otherwise, heterogeneity in usage rates makes subscription pricing less attractive relative to per-use pricing. We conclude that subscription pricing can be effective even if congestion is relevant for the overall quality of a service. Keywords service operations, operations strategy, pricing and revenue management, game theory, queueing theory Disciplines Business | Business Administration, Management, and Operations | Business Analytics | Management Sciences and Quantitative Methods | Marketing | Operations and Supply Chain Management | Sales and Merchandising This technical report is available at ScholarlyCommons: https://repository.upenn.edu/marketing_papers/320 Pricing Services Subject to Congestion: Charge Per-Use Fees or Sell Subscriptions? G\u00e9rard P. Cachon Pnina Feldman Operations and Information Management, The Wharton School, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA Operations and Information Technology Management, Haas School of Business, University of California, Berkeley, California 94720, USA cachon@wharton.upenn.edu feldman@haas.berkeley.edu June 17, 2008; revised July 14, 2009; May 21, 2010; August 10, 2010 Should a \u0085rm charge on a per-use basis or sell subscriptions when its service experiences congestion? Queueing-based models of pricing primarily focus on charging a fee per use for the service, in part because per-use pricing enables the \u0085rm to regulate congestion raising the per-use price naturally reduces how frequently customers use a service. The \u0085rm has less control over usage with subscription pricing (by de\u0085nition, with subscription pricing customers are not charged proportional to their actual usage), and this is a disadvantage when customers dislike congestion. However, we show that subscription pricing is more e\u00a4ective at earning revenue. 
Consequently, the \u0085rm may be better o\u00a4 with subscription pricing, even, surprisingly, when congestion is intuitively most problematic for the \u0085rm: e.g., as congestion becomes more disliked by consumers. We show that the absolute advantage of subscription pricing relative to per-use pricing can be substantial whereas the potential advantage of per-use pricing is generally modest. Subscription pricing becomes relatively more attractive if consumers become more heterogeneous in their service rates (e.g., some know they are \u0093heavy\u0094users and others know they are \u0093light\u0094users) as long as capacity is \u0085xed, the potential utilization is high and the two segments have substantially di\u00a4erent usage rates. Otherwise, heterogeneity in usage rates makes subscription pricing less attractive relative to per-use. We conclude that subscription pricing can be e\u00a4ective even if congestion is relevant for the overall quality of a service. How should a \u0085rm price its service when congestion is an unavoidable reality? Customers dislike congestion, so a \u0085rm has an incentive to ensure it provides reasonably fast service. At the same time, the \u0085rm needs to earn an economic pro\u0085t, so the \u0085rm\u0092s pricing scheme must generate a su\u00a2 cient amount of revenue. Furthermore, these issues are closely linked: the chosen pricing scheme in\u0087uences how frequently customers use a service, which dictates the level of congestion; congestion correlates with the customers\u0092perceived value for the service, and that determines the amount of revenue the \u0085rm can generate. A natural option is to charge customers a per-use fee or toll: customers pay a per-transaction fee each time they withdraw money from the ATM; beauty shops and hair salons price on a peruse basis; and car maintenance companies, such as Pep Boys Auto, charge each time a service is"}
{"_id": "00c38c340aa123096873cee1db9f35d0f5dad0ec", "title": "Bayesian models for Large-scale Hierarchical Classification", "text": "A challenging problem in hierarchical classification is to leverage the hierarchical relations among classes for improving classification performance. An even greater challenge is to do so in a manner that is computationally feasible for large scale problems. This paper proposes a set of Bayesian methods to model hierarchical dependencies among class labels using multivariate logistic regression. Specifically, the parent-child relationships are modeled by placing a hierarchical prior over the children nodes centered around the parameters of their parents; thereby encouraging classes nearby in the hierarchy to share similar model parameters. We present variational algorithms for tractable posterior inference in these models, and provide a parallel implementation that can comfortably handle largescale problems with hundreds of thousands of dimensions and tens of thousands of classes. We run a comparative evaluation on multiple large-scale benchmark datasets that highlights the scalability of our approach and shows improved performance over the other state-of-the-art hierarchical methods."}
{"_id": "af2789eee94c37f875147b274812831c984dd06e", "title": "A critical survey of state-of-the-art image inpainting quality assessment metrics", "text": "7 Image inpainting is the process of restoring missing pixels in digital images in a plausible way. Research in image inpainting has received considerable attention in di erent areas, including restoration of old and damaged documents, removal of undesirable objects, computational photography, retouching applications, etc. The challenge is that the recovery processes themselves introduce noticeable artifacts within and around the restored image regions. As an alternative to subjective evaluation by humans, a number of approaches have been introduced to quantify inpainting processes objectively. Unfortunately, existing objective metrics have their own strengths and weaknesses as they use di erent criteria. This paper provides a thorough insight into existing metrics related to image inpainting quality assessment, developed during the last few years. The paper provides, under a new framework, a comprehensive description of existing metrics, their strengths, their weaknesses, and a detailed performance analysis on real images from public image inpainting database. The paper also outlines future research directions and applications of inpainting and inpainting-related quality assessment measures."}
{"_id": "212d697785d2ca072d224b6c19000e3988788b15", "title": "Programming your network at run-time for big data applications", "text": "Recent advances of software defined networking and optical switching technology make it possible to program the network stack all the way from physical topology to flow level traffic control. In this paper, we leverage the combination of SDN controller with optical switching to explore the tight integration of application and network control. We particularly study the run-time network configuration for big data applications to jointly optimize application performance and network utilization. We use Hadoop as an example to discuss the integrated network control architecture, job scheduling, topology and routing configuration mechanisms for Hadoop jobs. Our analysis suggests that such an integrated control has great potential to improve application performance with relatively small configuration overhead. We believe our study shows early promise of achieving the long-term goal of tight network and application integration using SDN."}
{"_id": "8ec4587cd743ccba95fc82966534e8b6425b4039", "title": "A Fundamental Storage-Communication Tradeoff in Distributed Computing with Straggling Nodes", "text": "The optimal storage-computation tradeoff is characterized for a MapReduce-like distributed computing system with straggling nodes, where only a part of the nodes can be utilized to compute the desired output functions. The result holds for arbitrary output functions and thus generalizes previous results that restricted to linear functions. Specifically, in this work, we propose a new information-theoretical converse and a new matching coded computing scheme, that we call coded computing for straggling systems (CCS)."}
{"_id": "2e0ff618cb8585d7b260ad3db3c89ad8fe96b7df", "title": "Language, cognition, and short-term memory in individuals with Down syndrome.", "text": "The developmentally emerging phenotype of language and cognition in individuals with Down syndrome is summarized on the basis of the project's prior work. Identified are a) the emerging divergence of expressive and receptive language, b) the emerging divergence of lexical and syntactic knowledge in each process, and c) the emerging divergence within cognitive skills of auditory short-term memory and visuospatial short-term memory from other visuospatial skills. Expressive syntax and auditory short-term memory are identified as areas of particular difficulty. Evidence for the continued acquisition of language skills in adolescence is presented. The role of the two components of working memory, auditory and visual, in language development is investigated in studies of narrative and longitudinal change in language skills. Predictors of individual differences during six years of language development are evaluated through hierarchical linear modelling. Chronological age, visuospatial short-term memory, and auditory-short term memory are identified as key predictors of performance at study entry, but not individual change over time, for expressive syntax. The same predictors account for variation in comprehension skill at study outset; and change over the six years can be predicted by chronological age and the change in visuospatial short-term memory skills."}
{"_id": "d39264952d124c0a9a80a7b419329256208ee40e", "title": "The MIT Encyclopedia of Cognitive Sciences: Sign Languages", "text": "Sign languages (alternatively, signed languages) are human languages whose forms consist of sequences of movements and configurations of the hands and arms, face, and upper torso. Typically, sign languages are perceived through the visual mode. Sign languages thus contrast, of course, with spoken languages, whose forms consist of sounds produced by sequences of movements and configurations of the mouth and vocal tract. More informally, then, sign languages are visual-gestural languages, whereas spoken languages are auditory-vocal languages."}
{"_id": "432143ab67c05f42c918c4ed6fd9412d26e659be", "title": "Optimal Multi-taxi Dispatch for Mobile Taxi-Hailing Systems", "text": "Traditional taxi-hailing systems through wireless networks in metropolitan areas allow taxis to compete for passengers chaotically and accidentally, which generally result in inefficiencies, long waiting time and low satisfaction of taxi-hailing passengers. In this paper, we propose a new Mobile Taxi-hailing System (called MTS) based on optimal multi-taxi dispatch, which can be used by taxi service companies (TSCs). Different from the competition modes used in traditional taxi-hailing systems, MTS assigns vacant taxis to taxi-hailing passengers proactively. For the taxi dispatch problem in MTS, we define a system utility function, which involves the total net profits of taxis and waiting time of passengers. Moreover, in the utility function, we take into consideration the various classes of taxis with different resource configurations, and the cost associated with taxis' empty travel distances. Our goal is to maximize the system utility function, restricted by the individual net profits of taxis and the passengers' requirements for specified classes of taxis. To solve this problem, we design an optimal algorithm based on the idea of Kuhn-Munkres (called KMBA), and prove the correctness and optimality of the proposed algorithm. Additionally, we demonstrate the significant performances of our algorithm through extensive simulations."}
{"_id": "6a4007e60346e4501acc936b49b7a476e73afa1e", "title": "Learning Multilingual Word Representations using a Bag-of-Words Autoencoder", "text": "Recent work on learning multilingual word representations usually relies on the use of word-level alignements (e.g. infered with the help of GIZA++) between translated sentences, in order to align the word embeddings in different languages. In this workshop paper, we investigate an autoencoder model for learning multilingual word representations that does without such word-level alignements. The autoencoder is trained to reconstruct the bag-of-word representation of given sentence from an encoded representation extracted from its translation. We evaluate our approach on a multilingual document classification task, where labeled data is available only for one language (e.g. English) while classification must be performed in a different language (e.g. French). In our experiments, we observe that our method compares favorably with a previously proposed method that exploits word-level alignments to learn word representations."}
{"_id": "28f38afa6199fb7c950b59924833ec1f34144ecf", "title": "Digital receivers and transmitters using polyphase filter banks for wireless communications", "text": "This paper provides a tutorial overview of multichannel wireless digital receivers and the relationships between channel bandwidth, channel separation, and channel sample rate. The overview makes liberal use of figures to support the underlying mathematics. A multichannel digital receiver simultaneously down-convert a set of frequency-division-multiplexed (FDM) channels residing in a single sampled data signal stream. In a similar way, a multichannel digital transmitter simultaneously up-converts a number of baseband signals to assemble a set of FDM channels in a single sampled data signal stream. The polyphase filter bank has become the architecture of choice to efficiently accomplish these tasks. This architecture uses three interacting processes to assemble or to disassemble the channelized signal set. In a receiver, these processes are an input commutator to effect spectral folding or aliasing due to a reduction in sample rate, a polyphase -path filter to time align the partitioned and resampled time series in each path, and a discrete Fourier transform to phase align and separate the multiple baseband aliases. In a transmitter, these same processes operate in a related manner to alias baseband signals to high order Nyquist zones while increasing the sample rate with the output commutator. This paper presents a sequence of simple modifications to sampled data structures based on analog prototype systems to obtain the basic polyphase structure. We further discuss ways to incorporate small modifications in the operation of the polyphase system to accommodate secondary performance requirements. MATLAB simulations of a 10-, 40-, and 50-channel resampling receiver are included in the electronic version of this paper. An animated version of the ten-channel resampling receiver illustrates the time and frequency response of the filter bank when driven by a slowly varying linear FM sweep."}
{"_id": "b4e2a5054c3f6a6293a9cab1cd291f9329df518e", "title": "A PASSIVE MILLIMETER-WAVE IMAGER USED FOR CONCEALED WEAPON DETECTION", "text": "An 8mm-band passive millimeter-wave imager BHU-2D has been developed by Beihang University. This imager is designed for detecting concealed weapons on human body. The imager adopts two-dimensional synthetic aperture interferometric radiometer (SAIR) technique which avoids the radiation risk to human body. Compared with scanning technique, SAIR technique could obtain images of a larger field of view (FOV) while achieving high imaging rate, which are necessary for security monitoring. In this paper, the imaging principle of SAIR is stated firstly. Secondly, background cancellation method designed for BHU-2D is interpreted. The technique is used to reduce the complexity as well as the dimensional requirements of the receiving elements. Thirdly, system configuration is illustrated in detail. Then external point source calibration method is introduced and discussed specifically for security check applications. Finally, imaging experiments on a person with concealed weapon are conducted by which the design and image calibration algorithms are verified. To conclude, experimental results prove that BHU-2D could be employed in security check applications."}
{"_id": "5375a8c2d94c9587bb250805a3db2b3b489e65a3", "title": "Syntax-Aware Multi-Sense Word Embeddings for Deep Compositional Models of Meaning", "text": "Deep compositional models of meaning acting on distributional representations of words in order to produce vectors of larger text constituents are evolving to a popular area of NLP research. We detail a compositional distributional framework based on a rich form of word embeddings that aims at facilitating the interactions between words in the context of a sentence. Embeddings and composition layers are jointly learned against a generic objective that enhances the vectors with syntactic information from the surrounding context. Furthermore, each word is associated with a number of senses, the most plausible of which is selected dynamically during the composition process. We evaluate the produced vectors qualitatively and quantitatively with positive results. At the sentence level, the effectiveness of the framework is demonstrated on the MSRPar task, for which we report results within the state-of-the-art range."}
{"_id": "0fc2b637e260ac26d9288fe04cb352e428ea045c", "title": "Empirical analysis of programming language adoption", "text": "Some programming languages become widely popular while others fail to grow beyond their niche or disappear altogether. This paper uses survey methodology to identify the factors that lead to language adoption. We analyze large datasets, including over 200,000 SourceForge projects, 590,000 projects tracked by Ohloh, and multiple surveys of 1,000-13,000 programmers.\n We report several prominent findings. First, language adoption follows a power law; a small number of languages account for most language use, but the programming market supports many languages with niche user bases. Second, intrinsic features have only secondary importance in adoption. Open source libraries, existing code, and experience strongly influence developers when selecting a language for a project. Language features such as performance, reliability, and simple semantics do not. Third, developers will steadily learn and forget languages. The overall number of languages developers are familiar with is independent of age. Finally, when considering intrinsic aspects of languages, developers prioritize expressivity over correctness. They perceive static types as primarily helping with the latter, hence partly explaining the popularity of dynamic languages."}
{"_id": "a4f541158cfecd780a9053a8f57dbde3c365d644", "title": "Symbolic Music Genre Transfer with CycleGAN", "text": "Deep generative models such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) have recently been applied to style and domain transfer for images, and in the case of VAEs, music. GAN-based models employing several generators and some form of cycle consistency loss have been among the most successful for image domain transfer. In this paper we apply such a model to symbolic music and show the feasibility of our approach for music genre transfer. Evaluations using separate genre classifiers show that the style transfer works well. In order to improve the fidelity of the transformed music, we add additional discriminators that cause the generators to keep the structure of the original music mostly intact, while still achieving strong genre transfer. Visual and audible results further show the potential of our approach. To the best of our knowledge, this paper represents the first application of GANs to symbolic music domain transfer."}
{"_id": "49d06007952c70016b8027d3d581c7ad8993140f", "title": "An effective way of word-level language identification for code-mixed facebook comments using word-embedding via character-embedding", "text": "Individuals utilize online networking sites like Facebook and Twitter to express their interests, opinions or reviews. The users used English language as their medium for communication in earlier days. Despite the fact that content can be written in Unicode characters now, people find it easier to communicate by mixing two or more languages together or lean toward writing their native language in Roman script. These types of data are called code-mixed text. While processing such social-media data, recognizing the language of the text is an important task. In this work, we have developed a system for word-level language identification on code-mixed social media text. The work is accomplished for Tamil-English and Malayalam-English code-mixed Facebook comments. The methodology used for the system is a novel approach which is implemented based on features obtained from character-based embedding technique with the context information and uses a machine learning based classifier, Support Vector Machine for training and testing. An accuracy of 93% was obtained for Malayalam-English and 95% for Tamil-English code-mixed text."}
{"_id": "bef85a596596689caa74bb711044a15d5b3ba1f8", "title": "Estimation of Symptom Severity During Chemotherapy From Passively Sensed Data: Exploratory Study", "text": "BACKGROUND\nPhysical and psychological symptoms are common during chemotherapy in cancer patients, and real-time monitoring of these symptoms can improve patient outcomes. Sensors embedded in mobile phones and wearable activity trackers could be potentially useful in monitoring symptoms passively, with minimal patient burden.\n\n\nOBJECTIVE\nThe aim of this study was to explore whether passively sensed mobile phone and Fitbit data could be used to estimate daily symptom burden during chemotherapy.\n\n\nMETHODS\nA total of 14 patients undergoing chemotherapy for gastrointestinal cancer participated in the 4-week study. Participants carried an Android phone and wore a Fitbit device for the duration of the study and also completed daily severity ratings of 12 common symptoms. Symptom severity ratings were summed to create a total symptom burden score for each day, and ratings were centered on individual patient means and categorized into low, average, and high symptom burden days. Day-level features were extracted from raw mobile phone sensor and Fitbit data and included features reflecting mobility and activity, sleep, phone usage (eg, duration of interaction with phone and apps), and communication (eg, number of incoming and outgoing calls and messages). We used a rotation random forests classifier with cross-validation and resampling with replacement to evaluate population and individual model performance and correlation-based feature subset selection to select nonredundant features with the best predictive ability.\n\n\nRESULTS\nAcross 295 days of data with both symptom and sensor data, a number of mobile phone and Fitbit features were correlated with patient-reported symptom burden scores. We achieved an accuracy of 88.1% for our population model. The subset of features with the best accuracy included sedentary behavior as the most frequent activity, fewer minutes in light physical activity, less variable and average acceleration of the phone, and longer screen-on time and interactions with apps on the phone. Mobile phone features had better predictive ability than Fitbit features. Accuracy of individual models ranged from 78.1% to 100% (mean 88.4%), and subsets of relevant features varied across participants.\n\n\nCONCLUSIONS\nPassive sensor data, including mobile phone accelerometer and usage and Fitbit-assessed activity and sleep, were related to daily symptom burden during chemotherapy. These findings highlight opportunities for long-term monitoring of cancer patients during chemotherapy with minimal patient burden as well as real-time adaptive interventions aimed at early management of worsening or severe symptoms."}
{"_id": "76a101656455a39da33b782e8cd0d4a90dc0f5de", "title": "Evaluating and exploring the MYO ARMBAND", "text": "Electromyography (EMG) is a technique that studies the electrical activity of the muscle. EMG is the basic component of Myo armband. The MYO ARMBAND is a device which can be used by humans to interact with computers. The myo armband involves the interaction of devices in VE (virtual environment). Working and characteristics of Myo are studied and analyzed. This paper surveys the work on visual analysis of gestures by using Myo armband and the input from keyboard and mouse. We have also performed two tests where participants are allowed to play a game, first by using keyboard and other by using MYO. Discussion and conclusion are also written at the end of this paper."}
{"_id": "1dd646c22c2eedefe5a0e08fa1bfd085847e2f80", "title": "Multimedia learning in computer science: The effect of fifferent modes of instruction on student understanding", "text": "In a series of three experimental studies, we investigate the understanding achieved by learners when studying a complex domain in computer science, that is, the subject of memory management concepts in operating systems. We use three different media combinations to explain the subject of memory management concepts - Animation and text, animation and voice and a combination of animation, voice and text. The students who had no prior knowledge on the subject where given a test on recall and transfer knowledge after viewing the treatments. This paper reports the results of the experiment which may provide useful guidance for instructional designers specifically in the area of computer science."}
{"_id": "733999af65828ff465b8f4921eea3aa987e1434e", "title": "Fast Hydraulic and Thermal Erosion on the GPU Bal\u00e1zs J\u00e1k\u00f3", "text": "Computer games, TV series, movies, simulators, and many other computer graphics applications use external scenes where a realistic looking terrain is a vital part of the viewing experience. Creating such terrains is a challenging task. In this paper we propose a method that generates realistic virtual terrains by simulation of hydraulic and thermal erosion on a predefined height field terrain. The model is designed to be executed interactively on parallel architectures like graphics processors."}
{"_id": "b166a51f873301af5d186224310127e66cc02fe7", "title": "Channel estimation in millimeter wave MIMO systems with one-bit quantization", "text": "We develop channel estimation agorithms for millimeter wave (mmWave) multiple input multiple output (MIMO) systems with one-bit analog-to-digital converters (ADCs). Since the mmWave MIMO channel is sparse due to the propagation characteristics, the estimation problem is formulated as a one-bit compressed sensing problem. We propose a modified EM algorithm that exploits sparsity and has better performance than the conventional EM algorithm. We also present a second solution using the generalized approximate message passing (GAMP) algorithm to solve this optimization problem. The simulation results show that GAMP can reduce mean squared error in the important low and medium SNR regions."}
{"_id": "54061a4aa0cb241a726f54d0569efae1c13aab3a", "title": "CRISP-DM 1.0: Step-by-step data mining guide", "text": "This document describes the CRISP-DM process model, including an introduction to the CRISP-DM methodology, the CRISP-DM reference model, the CRISP-DM user guide and the CRISP-DM reports, as well as an appendix with additional useful and related information. This document and information herein, are the exclusive property of the partners of the CRISP-DM All trademarks and service marks mentioned in this document are marks of their respective owners and are as such acknowledged by the members of the CRISP-DM consortium. Foreword CRISP-DM was conceived in late 1996 by three \" veterans \" of the young and immature data mining market. DaimlerChrysler (then Daimler-Benz) was already experienced, ahead of most industrial and commercial organizations, in applying data mining in its business operations. SPSS (then ISL) had been providing services based on data mining since 1990 and had launched the first commercial data mining workbench \u2013 Clementine \u2013 in 1994. NCR, as part of its aim to deliver added value to its Teradata data warehouse customers, had established teams of data mining consultants and technology specialists to service its clients' requirements. At that time, early market interest in data mining was showing signs of exploding into widespread uptake. This was both exciting and terrifying. All of us had developed our approaches to data mining as we went along. Were we doing it right? Was every new adopter of data mining going to have to learn, as we had initially, by trial and error? And from a supplier's perspective, how could we demonstrate to prospective customers that data mining was sufficiently mature to be adopted as a key part of their business processes? A standard process model, we reasoned, non-proprietary and freely available, would address these issues for us and for all practitioners. A year later we had formed a consortium, invented an acronym (CRoss-Industry Standard Process for Data Mining), obtained funding from the European Commission and begun to set out our initial ideas. As CRISP-DM was intended to be industry-, tool-and application-neutral, we knew we had to get input from as wide a range as possible of practitioners and others (such as data warehouse vendors and management consultancies) with a vested interest in data mining. We did this by creating the CRISP-DM Special Interest Group (\" The SIG \" , as it became known). We launched the SIG by broadcasting an invitation to interested parties to join us in Amsterdam for a day-long \u2026"}
{"_id": "771a7bd936b620458aa3f4a400a879a0d685ae2f", "title": "Translating Collocations for Bilingual Lexicons: A Statistical Approach", "text": "Collocations are notoriously difficult for non-native speakers to translate, primarily because they are opaque and cannot be translated on a word-by-word basis. We describe a program named Champollion which, given a pair of parallel corpora in two different languages and a list of collocations in one of them, automatically produces their translations. Our goal is to provide a tool for compiling bilingual lexical information above the word level in multiple languages,for different domains. The algorithm we use is based on statistical methods and produces p-word translations of n-word collocations in which n and p need not be the same. For example, Champollion translates make ... decision, employment equity, and stock market into prendre ... d6cision, 6quit6 en mati6re d'emploi, and bourse respectively. Testing Champollion on three years' worth of the Hansards corpus yielded the French translations of 300 collocations for each year, evaluated at 73% accuracy on average. In this paper, we describe the statistical measures used, the algorithm, and the implementation of Champollion, presenting our results and evaluation."}
{"_id": "516e41018a42f762aa8d8cd739f568203fa98805", "title": "Adaptive RED : An Algorithm for Increasing the Robustness of RED \u2019 s Active Queue Management", "text": "The RED active queuemanagement algorithmallows network operatorsto simultaneouslyachieve high throughput and low averagedelay. However, the resulting average queuelength is quite sensiti ve to the level of congestion andto theREDparameter settings,andis thereforenotpredictable in advance. Delay being a major componentof thequalityof servicedeliveredto their customers, network operatorswould naturallylike to have a rougha priori estimateof the averagedelaysin their congestedrouters;to achieve suchpredictableaveragedelayswith RED would requireconstanttuning of the parametersto adjustto currenttraffic conditions. Our goal in this paperis to solve this problemwith minimal changesto the overall RED algorithm. To do so,we revisit theAdaptiveREDproposalof Fenget al. from 1997 [6, 7]. We make several algorithmicmodificationsto this proposal,while leaving thebasicideaintact,andthenevaluateits performanceusingsimulation.Wefind thatthis revisedversionof AdaptiveRED,whichcanbeimplemented asasimpleextensionwithin REDrouters,removesthesensitivity to parametersthat affect RED\u2019s performanceand canreliablyachieve aspecifiedtargetaveragequeuelength in a wide variety of traffic scenarios.Basedon extensi ve simulations,we believe that Adaptive RED is sufficiently robustfor deploymentin routers."}
{"_id": "7c5c4ed47bccd0016d53b7bbc27a41dd74bebf1e", "title": "Adaptive generic learning for face recognition from a single sample per person", "text": "Real-world face recognition systems often have to face the single sample per person (SSPP) problem, that is, only a single training sample for each person is enrolled in the database. In this case, many of the popular face recognition methods fail to work well due to the inability to learn the discriminatory information specific to the persons to be identified. To address this problem, in this paper, we propose an Adaptive Generic Learning (AGL) method, which adapts a generic discriminant model to better distinguish the persons with single face sample. As a specific implementation of the AGL, a Coupled Linear Representation (CLR) algorithm is proposed to infer, based on the generic training set, the within-class scatter matrix and the class mean of each person given its single enrolled sample. Thus, the traditional Fisher's Linear Discriminant (FLD) can be applied to SSPP task. Experiments on the FERET and a challenging passport face database show that the proposed method can achieve better results compared with other common solutions to the SSPP problem."}
{"_id": "b092dc2f15febfad6ab7826ee77c7e4a0b436ebb", "title": "A 10Gb/s 4.1mW 2-IIR + 1-discrete-tap DFE in 28nm-LP CMOS", "text": "A half-rate decision feedback equalizer (DFE) with two infinite impulse response (IIR) filters and one discrete-time tap is presented. The two IIR filters have different time constants to cancel the long tail of the pulse response. The discrete-tap cancels the first post-cursor inter-symbol interference term. The system can operate with a low transmit swing of 150mVpp-diff and 24 dB channel loss at the Nyquist frequency while consuming 4.1mW at 10 Gb/s. The receiver, including the DFE, clock buffers and clock phase adjustment, occupies an area of 8,760 \u03bcm2 and was fabricated in an ST 28nm LP CMOS process."}
{"_id": "919d08e9645404c60b88f5f5e8511e363ccdf922", "title": "A survey on techniques to handle face recognition challenges: occlusion, single sample per subject and expression", "text": "Face recognition is receiving a significant attention due to the need of facing important challenges when developing real applications under unconstrained environments. The three most important challenges are facial occlusion, the problem of dealing with a single sample per subject (SSPS) and facial expression. This paper describes and analyzes various strategies that have been developed recently for overcoming these three major challenges that seriously affect the performance of real face recognition systems. This survey is organized in three parts. In the first part, approaches to tackle the challenge of facial occlusion are classified, illustrated and compared. The second part briefly describes the SSPS problem and the associated solutions. In the third part, facial expression challenge is illustrated. In addition, pros and cons of each technique are stated. Finally, several improvements for future research are suggested, providing a useful perspective for addressing new research in face recognition."}
{"_id": "a23c5e1e27853c997cf50d76cfff86f1fdda1a42", "title": "DO INSTITUTIONS MATTER FOR REGIONAL GROWTH AND DEVELOPMENT? THE CASE OF TURKEY", "text": "...........................................................................................................................................i Table of"}
{"_id": "5e2a7fe9dbf963baff8d5327ae885c8d4808e34a", "title": "Beam-Steerable Planar Antenna Using Circular Disc and Four PIN-Controlled Tapered Stubs for WiMAX and WLAN Applications", "text": "A planar beam-steerable antenna aimed at WiMAX and WLAN applications is presented. The design uses a single-layer structure, which includes a central circular disc surrounded by four PIN-controlled tapered microstrip stubs. Using the PIN diodes, the stubs change their status from grounded to open-ended mode to provide pattern reconfigurability in four directions. To support WiMAX and WLAN applications, the antenna is designed to operate at IEEE 802.11b/g standard 2.4 GHz with a bandwidth of 150 MHz in all the four states. A prototype on FR4 substrate with thickness of 6.4 mm and radius of 50 mm is designed and tested. The measured and simulated results of the antenna indicate that the direction of the main beam can be successfully controlled at four specific directions ( \u03c6 = 0\u00b0, 90 \u00b0, 180 \u00b0, 270 \u00b0), at a deflection angle of 35 \u00b0 from the boresight with a stable gain of more than 5 dBi and front-to-back ratio of more than 20 dB across the band 2.38-2.53 GHz."}
{"_id": "ea8bac74a46016734f9b4b3a34fc1c0e1010bd4d", "title": "Relational Mixture of Experts: Explainable Demographics Prediction with Behavioral Data", "text": "Given a collection of basic customer demographics (e.g., age and gender) andtheir behavioral data (e.g., item purchase histories), how can we predictsensitive demographics (e.g., income and occupation) that not every customermakes available?This demographics prediction problem is modeled as a classification task inwhich a customer's sensitive demographic y is predicted from his featurevector x. So far, two lines of work have tried to produce a\"good\" feature vector x from the customer's behavioraldata: (1) application-specific feature engineering using behavioral data and (2) representation learning (such as singular value decomposition or neuralembedding) on behavioral data. Although these approaches successfullyimprove the predictive performance, (1) designing a good feature requiresdomain experts to make a great effort and (2) features obtained fromrepresentation learning are hard to interpret. To overcome these problems, we present a Relational Infinite SupportVector Machine (R-iSVM), a mixture-of-experts model that can leveragebehavioral data. Instead of augmenting the feature vectors of customers, R-iSVM uses behavioral data to find out behaviorally similar customerclusters and constructs a local prediction model at each customer cluster. In doing so, R-iSVM successfully improves the predictive performance withoutrequiring application-specific feature designing and hard-to-interpretrepresentations. Experimental results on three real-world datasets demonstrate the predictiveperformance and interpretability of R-iSVM. Furthermore, R-iSVM can co-existwith previous demographics prediction methods to further improve theirpredictive performance."}
{"_id": "2e64e12c72f79f8e41d51de80d4716652bfb5ed3", "title": "Perceptual Design of Haptic Icons", "text": "The bulk of applications for haptic feedback employ direct rendering approaches wherein a user touches a virtual model of some \u201creal\u201d thing, often displayed graphically as well. We propose a new class of applications based on abstract messages, ranging from \u201chaptic icons\u201d \u2013 brief signals conveying an object\u2019s or event\u2019s state, function or content \u2013 to an expressive haptic language for interpersonal communication. Building this language requires us to understand how synthetic haptic signals are perceived, and what they can mean to us. Experiments presented here address the perception question by using an efficient version of Multidimensional Scaling (MDS) to extract perceptual axes for complex haptic icons: once this space is mapped, icons can be designed to maximize both differentiability and individual salience. Results show that a set of icons constructed by varying the frequency, magnitude and shape of 2-sec, time-invariant wave shapes map to two perceptual axes, which differ depending on the signals\u2019 frequency range; and suggest that expressive capability is maximized in one frequency subspace."}
{"_id": "e5d312728e90eecb46ddc506bfde6b9c9fb9d2c9", "title": "Accuracy of deception judgments.", "text": "We analyze the accuracy of deception judgments, synthesizing research results from 206 documents and 24,483 judges. In relevant studies, people attempt to discriminate lies from truths in real time with no special aids or training. In these circumstances, people achieve an average of 54% correct lie-truth judgments, correctly classifying 47% of lies as deceptive and 61% of truths as nondeceptive. Relative to cross-judge differences in accuracy, mean lie-truth discrimination abilities are nontrivial, with a mean accuracy d of roughly .40. This produces an effect that is at roughly the 60th percentile in size, relative to others that have been meta-analyzed by social psychologists. Alternative indexes of lie-truth discrimination accuracy correlate highly with percentage correct, and rates of lie detection vary little from study to study. Our meta-analyses reveal that people are more accurate in judging audible than visible lies, that people appear deceptive when motivated to be believed, and that individuals regard their interaction partners as honest. We propose that people judge others' deceptions more harshly than their own and that this double standard in evaluating deceit can explain much of the accumulated literature."}
{"_id": "dbcd79bd7edcdcbb5912a50796fc3c2746729eb5", "title": "LETOR : Benchmark Dataset for Research on Learning to Rank for Information Retrieval", "text": "This paper is concerned with learning to rank for information retrieval (IR). Ranking is the central problem for information retrieval, and employing machine learning techniques to learn the ranking function is viewed as a promising approach to IR. Unfortunately, there was no benchmark dataset that could be used in comparison of existing learning algorithms and in evaluation of newly proposed algorithms, which stood in the way of the related research. To deal with the problem, we have constructed a benchmark dataset referred to as LETOR and distributed it to the research communities. Specifically we have derived the LETOR data from the existing data sets widely used in IR, namely, OHSUMED and TREC data. The two collections contain queries, the contents of the retrieved documents, and human judgments on the relevance of the documents with respect to the queries. We have extracted features from the datasets, including both conventional features, such as term frequency, inverse document frequency, BM25, and language models for IR, and features proposed recently at SIGIR, such as HostRank, feature propagation, and topical PageRank. We have then packaged LETOR with the extracted features, queries, and relevance judgments. We have also provided the results of several state-ofthe-arts learning to rank algorithms on the data. This paper describes in details about LETOR."}
{"_id": "77098722f32323e48378dea28b676bb1406dc504", "title": "CEET: A Compressed Encrypted & Embedded Technique for Digital Image Steganography", "text": "In this information era, digital information sharing and transfer plays a vital role and their use has increased exponentially with the development of technology. Thus providing security of data is a topic of concern. Data hiding is a powerful tool that provides mechanism for securing data over insecure channel by concealing information within information. Steganography inherits data hiding concept and passes information through host files in such a way that the existence of the embedded secret is unknown. This paper presents a joint application of compression, encryption and embedding techniques for implementing digital image steganography. Compression technique generates noise in the image. Inorder to retain the noise distortion to a minimum level, LSB insertion method is used for embedding purpose where the bits are inserted at the last 2 LSB\u2019s of the image. In this proposed technique the secret information is initially compressed and then the resultant bits are encrypted. Finally these encrypted bits are embedded into an image. The main objective is to develop an application that increases the steganographic capacity and enhances the stego image quality while keeping the security intact."}
{"_id": "8a7854af0ed08d959c1bd1592f02e1019d81d614", "title": "Pedagogy-Based-Technology and Chemistry Students' Performance in Higher Institutions: A Case of Debre Berhan University.", "text": "Many students have difficulty in learning abstract and complex lessons of chemistry. This study investigated how students develop their understandings of abstract and complex lessons in chemistry with the aid of visualizing tools: animation, simulation and video that allow them to build clear concepts. Animation, simulation and video enable learning by doing and provide opportunity to explore the abstract and complex lessons of chemistry. They also enable to present information in more dynamic, compelling and interactive way with engaging environment. The study applied animation, simulation and video supporting, on student-centered learning activities, in electrochemistry for second year students organized with state of art technology flash and micro media player and evaluated for its effectiveness. A particular focus of this investigation was to what extent the uses of animation, simulation and video influenced student-centered learning. Preand posttests were administrated on the target group and concurrent embedded mixed case study design was employed. Total quality of the developed package was evaluated by administrating a separate schedule containing open and closed ended question. The comments and ratings obtained in the learners\u2019 insights provided the basis for the learning impact of the study. The result obtained from the experiment and responses of the schedule showed that technology integrated with appropriate pedagogy improves the performance of students."}
{"_id": "163ec037bf66e609964498b412faafb2013da205", "title": "Usability Heuristics for Transactional Web Sites", "text": "One of the most recognized methods to evaluate usability in software applications is the heuristic evaluation. In this inspection method, Nielsen's heuristics, are the most widely used evaluation instrument. However, there is evidence in the literature which establishes that these heuristics are not appropriate when they are used to measure the level of usability of new emerging categories of software applications. This paper presents a case study where this concept is verified for Transactional Web Sites. Therefore, given the present limitations, a new set of usability heuristics was proposed following a structured and systematic methodology. Finally, fifteen new usability heuristics were obtained as final product of this research. A validation phase allowed to contrast this new proposal with the Nielsen's principles under a real context, where a heuristic evaluation was performed to a Transactional Web Site. The results established that the new set of usability heuristics, which is presented in this study, provides more accurate and promising results than the current proposal of Nielsen."}
{"_id": "f444f648caf1d71bf6daffdbbe6fb6120184e7ac", "title": "Knowledge Management Issues in Malaysian Organizations : the Perceptions of Leaders", "text": "Knowledge management is a relatively young topic to other older strategic tools, and it has been attracting the attention of researchers and academics over the past decade or so. In this regard globalization is already in many countries including Malaysia like many other developing countries, Malaysia is also striving to fulfil its vision of becoming a developed nation. It is a fast growing country in terms of both economy and technology. However, obstacles for successful knowledge management initiatives will be there for this country, as well. Thus, exploration of the potential barriers is essential before any Malaysian company embarks on a knowledge management journey. This research has shed light to some of the important issues and challenges with reference to knowledge management in Malaysia by providing a more thorough and clear picture of the knowledge management status amongst Malaysian organizations. This could in turn help top managers and decision makers to develop a more insightful agenda to ensure success for their respective knowledge management initiatives."}
{"_id": "a511bee99b4ebb1edf83e818f97663be58d45a22", "title": "Generating Extractive Summaries of Scientific Paradigms", "text": "Researchers and scientists increasingly find themselves in the position of having to quickly understand large amounts of technical material. Our goal is to effectively serve this need by using bibliometric text mining and summarization techniques to generate summaries of scientific literature. We show how we can use citations to produce automatically generated, readily consumable, technical extractive summaries. We first propose C-LexRank, a model for summarizing single scientific articles based on citations, which employs community detection and extracts salient information-rich sentences. Next, we further extend our experiments to summarize a set of papers, which cover the same scientific topic. We generate extractive summaries of a set of Question Answering (QA) and Dependency Parsing (DP) papers, their abstracts, and their citation sentences and show that citations have unique information amenable to creating a summary."}
{"_id": "08e00342bd46ba05932f3fe4ac10a3f7a350d836", "title": "Muscular and postural synergies of the human hand.", "text": "Because humans have limited ability to independently control the many joints of the hand, a wide variety of hand shapes can be characterized as a weighted combination of just two or three main patterns of covariation in joint rotations, or \"postural synergies.\" The present study sought to align muscle synergies with these main postural synergies and to describe the form of membership of motor units in these postural/muscle synergies. Seventeen joint angles and the electromyographic (EMG) activities of several hand muscles (both intrinsic and extrinsic muscles) were recorded while human subjects held the hand statically in 52 specific shapes (i.e., shaping the hand around 26 commonly grasped objects or forming the 26 letter shapes of a manual alphabet). Principal-components analysis revealed several patterns of muscle synergy, some of which represented either coactivation of all hand muscles, or reciprocal patterns of activity (above and below average levels) in the intrinsic index finger and thumb muscles or (to a lesser extent) in the extrinsic four-tendoned extensor and flexor muscles. Single- and multiunit activity was generally a multimodal function of whole hand shape. This implies that motor-unit activation does not align with a single synergy; instead, motor units participate in multiple muscle synergies. Thus it appears that the organization of the global pattern of hand muscle activation is highly distributed. This organization mirrors the highly fractured somatotopy of cortical hand representations and may provide an ideal substrate for motor learning and recovery from injury."}
{"_id": "332d94c611866616f6334fa3e960c2a6348bf9fb", "title": "IT Governance Maturity: Developing a Maturity Model Using the Delphi Method", "text": "To advance in maturity, organizations should pay attention to both the hard and soft sides of IT governance (ITG). The hard side is related to processes and structure, the soft side to social aspects like behavior and organizational culture. This paper describes a study to develop an ITG maturity model (MM) that includes both. Our research method is based on literature study, the Delphi method and makes use of a Group Decision Support System. We chose to design a focus area MM. In this type of MM maturity is determined by a set of focus areas. The study reveals one MM as being appropriate for hard ITG. For soft ITG we found no single model appropriate. Soft governance needs more specific capabilities defined for each focus area individually. Based on knowledge from literature and experts we selected models for each focus area. Three alternatives for informal organization need further research."}
{"_id": "fb0031e4d2a7358fca04da94c5d7e52da794fe34", "title": "A maturity model for information governance", "text": "Information Governance (IG) as defined by Gartner is the \u201cspecification of decision rights and an accountability framework to encourage desirable behavior in the valuation, creation, storage, use, archival and deletion of information. Includes the processes, roles, standards and metrics that ensure the effective and efficient use of information in enabling an organization to achieve its goals\u201d. In this paper, we present how to create an IG maturity model based on existing reference documents. The process is based on existing maturity model development methods. These methods allow for a systematic approach to maturity model development backed up by a well-known and proved scientific research method called Design Science Research. Then, based on the maturity model proposed in this paper, an assessment is conducted and the results are presented, this assessment was conducted as a self-assessment in the context of the EC-funded E-ARK project for the seven pilots of the project. The main conclusion from this initial assessment is that there is much room for improvement with most pilots achieving results between maturity level two and three. As future work, the goal is to analyze other references from different domains, such as, records management. These references will enhance, detail and help develop the maturity model making it even more valuable for all types of organization that deal with information governance."}
{"_id": "16b8c8f05bf07b22d2ef9b64d97119cd4e0b378d", "title": "Method engineering: engineering of information systems development methods and tools", "text": "This paper proposes the term method engineering for the research field of the construction of information systems development methods and tools. Some research issues in method engineering are identified. One major research topic in method engineering is discussed in depth: situational methods, i.e. the configuration of a project approach that is tuned to the project at hand. A language and support tool for the engineering of situational methods are discussed."}
{"_id": "34d03cfb02806e668f9748ee60ced1b269d1db6c", "title": "Developing Maturity Models for IT Management", "text": ""}
{"_id": "64eaddd5ef643f7a1e0a2d5ca7a803a55d23d019", "title": "A Model of Data Warehousing Process Maturity", "text": "Even though data warehousing (DW) requires huge investments, the data warehouse market is experiencing incredible growth. However, a large number of DW initiatives end up as failures. In this paper, we argue that the maturity of a data warehousing process (DWP) could significantly mitigate such large-scale failures and ensure the delivery of consistent, high quality, \u201csingle-version of truth\u201d data in a timely manner. However, unlike software development, the assessment of DWP maturity has not yet been tackled in a systematic way. In light of the critical importance of data as a corporate resource, we believe that the need for a maturity model for DWP could not be greater. In this paper, we describe the design and development of a five-level DWP maturity model (DWP-M) over a period of three years. A unique aspect of this model is that it covers processes in both data warehouse development and operations. Over 20 key DW executives from 13 different corporations were involved in the model development process. The final model was evaluated by a panel of experts; the results strongly validate the functionality, productivity, and usability of the model. We present the initial and final DWP-M model versions, along with illustrations of several key process areas at different levels of maturity."}
{"_id": "474c93dddbac14fce8f550af860bf1929a588075", "title": "3D Reconstruction from Hyperspectral Images", "text": "3D reconstruction from hyper spectral images has seldom been addressed in the literature. This is a challenging problem because 3D models reconstructed from different spectral bands demonstrate different properties. If we use a single band or covert the hyper spectral image to gray scale image for the reconstruction, fine structural information may be lost. In this paper, we present a novel method to reconstruct a 3D model from hyper spectral images. Our proposed method first generates 3D point sets from images at each wavelength using the typical structure from motion approach. A structural descriptor is developed to characterize the spatial relationship between the points, which allows robust point matching between two 3D models at different wavelength. Then a 3D registration method is introduced to combine all band-level models into a single and complete hyper spectral 3D model. As far as we know, this is the first attempt in reconstructing a complete 3D model from hyper spectral images. This work allows fine structural-spectral information of an object be captured and integrated into the 3D model, which can be used to support further research and applications."}
{"_id": "671dedcb4ddd33f982735f3ec25cdb14d87e5fbf", "title": "Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers", "text": "Neural networks have great success in many machine learning applications, but the fundamental learning theory behind them remains largely unsolved. Learning neural networks is NP-hard, but in practice, simple algorithms like stochastic gradient descent (SGD) often produce good solutions. Moreover, it is observed that overparameterization (that is, designing networks whose number of parameters is larger than statistically needed to perfectly fit the data) improves both optimization and generalization, appearing to contradict traditional learning theory. In this work, we extend the theoretical understanding of two and three-layer neural networks in the overparameterized regime. We prove that, using overparameterized neural networks, one can (improperly) learn some notable hypothesis classes, including two and three-layer neural networks with fewer parameters. Moreover, the learning process can be simply done by SGD or its variants in polynomial time using polynomially many samples. We also show that for a fixed sample size, the generalization error of the solution found by some SGD variant can be made almost independent of the number of parameters in the overparameterized network. \u2217V1 appears on this date and V2 polishes writing and tightens parameters. Authors sorted in alphabetical order."}
{"_id": "21866c9fbbdba55a4f43692fdb38a2eb71c5c245", "title": "A novel compact ka-band high-rejection diplexer based on substrate integrated waveguide", "text": "In this paper, a compact diplexer using substrate integrated waveguide (SIW) resonators is introduced. The diplexer consists of two filters and a circulator. The diplexer channels are 4% and 5% relative bandwidth at 32GHz and 33GHz, respectively. The filters are based on SIW resonators, which have higher Q values. This paper also analyzed the factors which affect the Q value of resonators, and describes the designing procedure of Generalized Chebyshev filters, which used in this paper. The circulator also is on account of SIW. It uses inductive windows to increase bandwidth. At last, the paper gives the diplexer's simulation and measure results. And two results are similar with each other."}
{"_id": "1ded003bb856edac2ba22468c7c268da2dc85917", "title": "B2B E-commerce: Frameworks for e-readiness assessment", "text": "This paper looks into the development of e- readiness model for B2B e-commerce of Small and Medium Enterprises (SMEs). Drawing on existing research on e-readiness models of ICT (information, Communications Technology) specifically B2B e-commerce technologies, this paper formulates a conceptual framework of B2B e-commerce assessment constructs together with the specific research approach towards the development of a more robust e-readiness model. The research presents a conceptual model and framework that highlight the key factors in B2B e-commerce implementation, index score formulations, focus group evaluations and the recommendation approaches to improve B2B e-commerce readiness."}
{"_id": "2cf71dd34a6bc2eb084eddc3cdc4bc6123a37004", "title": "Hopping of a monopedal robot with a biarticular muscle driven by electromagnetic linear actuators", "text": "The compliance of muscles with external forces and the structural stability given by biarticular muscles are important features of animals to realize dynamic whole body motions such as running and hopping in various environments. For this reason, we have been studying an electromagnetic linear actuator. This actuator can emulate the behavior of a human muscle such as the spring-damper characteristics by quick control of the output force (i.e. impedance control) and it is expected to be used as an artificial muscle. In this paper, we develop a monopedal robot possessing bi- and mono-articular muscles implemented by linear actuators. Thanks to the biarticular muscle, the bouncing direction of the robot can be controlled by changing the stiffness ellipse at the endpoint (i.e. foot) of the robot. We confirm that the bouncing direction of the robot can be controlled and hopping can be achieved by changing the stiffness ellipse."}
{"_id": "86bc75fce91f5c1802f74e0867eba428935254cf", "title": "The Influence of Context on Dialogue Act Recognition", "text": "This article presents a deep analysis of the influence of context information on dialogue act recognition. We performed experiments on the annotated subsets of three different corpora: the widely explored Switchboard Dialog Act Corpus, as well as the unexplored LEGO and Cambridge Restaurant corpora. In contrast with previous work, especially in what concerns the Switchboard corpus, we used an event-based classification approach, using Support Vector Machines (SVMs), instead of the more common sequential approaches, such as Hidden Markov Models (HMMs). We opted for such an approach so that we could control the amount of provided context information and, thus, explore its range of influence. Our base features consist of n-grams, punctuation, and whwords. Context information is obtained from previous utterances and provided in three ways \u2013 n-grams, n-grams tagged with relative position, and dialogue act classifications. A comparative study was conducted to evaluate the performance of the three approaches. From it, we were able to assess the importance of context information on dialogue act recognition, as well as its range of influence for each of the three selected representations. In addition to the conclusions originated by the analysis, this work also produced results that advance the state-of-the-art, especially considering previous work on the Switchboard corpus. Furthermore, since, to our knowledge, the Email addresses: eugenio.ribeiro@l2f.inesc-id.pt (Eug\u00e9nio Ribeiro), ricardo.ribeiro@inesc-id.pt (Ricardo Ribeiro), david.matos@inesc-id.pt (David Martins de Matos) Preprint submitted to Computer Speech and Language June 3, 2015 ar X iv :1 50 6. 00 83 9v 1 [ cs .C L ] 2 J un 2 01 5 remaining datasets had not been previously explored for this task, our experiments can be used as baselines for future work on those corpora."}
{"_id": "08c46db09ce69973e98a965910d43083a4b8fa73", "title": "Are More Data Always Better for Factor Analysis ?", "text": "Factors estimated from large macroeconomic panels are being used in an increasing number of applications. However, little is known about how the size and composition of the data affect the factor estimates. In this paper, we question whether it is possible to use more series to extract the factors and that yet the resulting factors are less useful for forecasting, and the answer is yes. Such a problem tends to arise when the idiosyncratic errors are cross-correlated. It can also arise if forecasting power is provided by a factor that is dominant in a small dataset but is a dominated factor in a larger dataset. In a real time forecasting exercise, we find that factors extracted from as few as 40 pre-screened series often yield satisfactory or even better results than using all 147 series. Our simulation analysis is unique in that special attention is paid to cross-correlated idiosyncratic errors, and we also allow the factors to have weak loadings on groups of series. It thus allows us to better understand the properties of the principal components estimator in empirical applications. \u2217Graduate School of Business, 719 Uris Hall, Columbia University, New York, NY. Email: jb903@columbia.edu \u2020Corresponding author: Department of Economics, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218. Email: Serena.Ng@jhu.edu The authors would like to thank National Science Foundation for financial support. We thank Pritha Mitra for useful research assistance."}
{"_id": "9a7def005efb5b4984886c8a07ec4d80152602ab", "title": "Attention, please! A Critical Review of Neural Attention Models in Natural Language Processing", "text": "Attention is an increasingly popular mechanism used in a wide range of neural architectures. Because of the fast-paced advances in this domain, a systematic overview of attention is still missing. In this article, we define a unified model for attention architectures for natural language processing, with a focus on architectures designed to work with vector representation of the textual data. We discuss the dimensions along which proposals differ, the possible uses of attention, and chart the major research activities and open challenges in the area."}
{"_id": "0607acbb450d2afef7f2aa5b53bb05966bd065ed", "title": "Low-rank matrix factorization for Deep Neural Network training with high-dimensional output targets", "text": "While Deep Neural Networks (DNNs) have achieved tremendous success for large vocabulary continuous speech recognition (LVCSR) tasks, training of these networks is slow. One reason is that DNNs are trained with a large number of training parameters (i.e., 10-50 million). Because networks are trained with a large number of output targets to achieve good performance, the majority of these parameters are in the final weight layer. In this paper, we propose a low-rank matrix factorization of the final weight layer. We apply this low-rank technique to DNNs for both acoustic modeling and language modeling. We show on three different LVCSR tasks ranging between 50-400 hrs, that a low-rank factorization reduces the number of parameters of the network by 30-50%. This results in roughly an equivalent reduction in training time, without a significant loss in final recognition accuracy, compared to a full-rank representation."}
{"_id": "46e6606c434d00fe93db549d82fdc338c79e48a3", "title": "Single-step assembly of a gene and entire plasmid from large numbers of oligodeoxyribonucleotides.", "text": "Here, we describe assembly PCR as a method for the synthesis of long DNA sequences from large numbers of oligodeoxyribonucleotides (oligos). The method, which is derived from DNA shuffling [Stemmer, Nature 370 (1994a) 389-391], does not rely on DNA ligase but instead relies on DNA polymerase to build increasingly longer DNA fragments during the assembly process. A 1.1-kb fragment containing the TEM-1 beta-lactamase-encoding gene (bla) was assembled in a single reaction from a total of 56 oligos, each 40 nucleotides (nt) in length. The synthetic gene was PCR amplified and cloned in a vector containing the tetracycline-resistance gene (TcR) as the sole selectable marker. Without relying on ampicillin (Ap) selection, 76% of the TcR colonies were ApR, making this approach a general method for the rapid and cost-effective synthesis of any gene. We tested the range of assembly PCR by synthesizing, in a single reaction vessel containing 134 oligos, a high-molecular-mass multimeric form of a 2.7-kb plasmid containing the bla gene, the alpha-fragment of the lacZ gene and the pUC origin of replication. Digestion with a unique restriction enzyme, followed by ligation and transformation in Escherichia coli, yielded the correct plasmid. Assembly PCR is well suited for several in vitro mutagenesis strategies."}
{"_id": "a4361b393b3ab28c30f05bbc21450257e3d473db", "title": "A Complete End-to-End Speaker Verification System Using Deep Neural Networks: From Raw Signals to Verification Result", "text": "End-to-end systems using deep neural networks have been widely studied in the field of speaker verification. Raw audio signal processing has also been widely studied in the fields of automatic music tagging and speech recognition. However, as far as we know, end-to-end systems using raw audio signals have not been explored in speaker verification. In this paper, a complete end-to-end speaker verification system is proposed, which inputs raw audio signals and outputs the verification results. A pre-processing layer and the embedded speaker feature extraction models were mainly investigated. The proposed pre-emphasis layer was combined with a strided convolution layer for pre-processing at the first two hidden layers. In addition, speaker feature extraction models using convolutionallayer and long short-term memory are proposed to be embedded in the proposed end-to-end system."}
{"_id": "083123080c0ef964b38314b049e26c306c992321", "title": "The Central Nervous System and the Gut Microbiome", "text": "Neurodevelopment is a complex process governed by both intrinsic and extrinsic signals. While historically studied by researching the brain, inputs from the periphery impact many neurological conditions. Indeed, emerging data suggest communication between the gut and the brain in anxiety, depression, cognition, and autism spectrum disorder (ASD). The development of a healthy, functional brain depends on key pre- and post-natal events that integrate environmental cues, such as molecular signals from the gut. These cues largely originate from the microbiome, the consortium of symbiotic bacteria that reside within all animals. Research over the past few years reveals that the gut microbiome plays a role in basic neurogenerative processes such as the formation of the blood-brain barrier, myelination, neurogenesis, and microglia maturation and also modulates many aspects of animal behavior. Herein, we discuss the biological intersection of neurodevelopment and the microbiome and explore the hypothesis that gut bacteria are integral contributors to development and function of the nervous system and to the balance between mental health and disease."}
{"_id": "e4dcc6c4260b162192ac60677d42d48295662fb8", "title": "Personality traits associated with problematic and non-problematic massively multiplayer online role playing game use", "text": "This research investigated the associations between personality traits and both normal and problematic massively multiplayer online role playing game (MMORPGs) play, as measured by a self report scale. A total of 225 participants completed the online questionnaire, 66 of whom played MMORPGs. Correlational analyses indicated that low levels of functional impulsivity and agreeableness alongside high levels of verbal aggression and video game dependency were associated with greater amount of time spent playing MMORPGs. When comparing problematic and non-problematic MMORPG players directly, problematic players were found to be lower in self regulation, dysfunctional impulsivity and agreeableness, suggesting that these traits may be important in the development and maintenance of problematic"}
{"_id": "47e2d768f6629352103bcc87c0a12b4d6eb12904", "title": "A Graduated Assignment Algorithm for Graph Matching", "text": "A graduated assignment algorithm for graph matching is presented which is fast and accurate even in the presence of high noise. By combining graduated nonconvexity, two-way (assignment) constraints, and sparsity, large improvements in accuracy and speed are achieved. Its low order computational complexity [O(/m), where land mare the number of links in the two graphs] and robustness in the presence of noise offer advantages over traditional combinatorial approaches. The algorithm, not restricted to any special class of graph, is applied to subgraph isomorphism, weighted graph matching, and attributed relational graph matching. To illustrate the performance of the algorithm, attributed relational graphs derived from objects are matched. Then, results from twenty-five thousand experiments conducted on 100 node random graphs of varying types (graphs with only zero-one links, weighted graphs, and graphs with node attributes and multiple link types) are reported. No comparable results have been reported by any other graph matching algorithm before in the research literature. Twenty-five hundred control experiments are conducted using a relaxation labeling algorithm and large improvements in accuracy are demonstrated."}
{"_id": "b8b32bd8a3571f4cb14aea0ea38037551d856d1b", "title": "Vehicle black box with 24GHz FMCW radar", "text": "This paper presents vehicle black box using camera image and 24GHz Frequency Modulation Continuous Wave (FMCW) radar. Currently, almost all of vehicle black boxes are recording driving data using a single camera. They have some problems on low image quality and narrow viewing angle. In addition, it is difficult to record a clean image when the weather is in bad condition due to heavy rain, snow and night. These problems may have some troubles to investigate car accident accurately. In this paper, we propose a novel black box fusing data from radar and cameras. We estimate the distance and velocity of obstacle using the Doppler Effect and FMCW radar. In particular, we compare various Direction of Arrival (DOA) algorithms for estimation of obstacle angle. The experimental results indicate that the Multiple Signal Classification (MUSIC) is more accurate compared to other methods. Moreover, we formulate the vehicle tracking image through the camera image processing and radar information. This image can be used to widen viewing angle and it may be helpful to investigate a car accident. The experimental study results indicate that the new black box overcome the disadvantages of the current black boxes and may be useful for safe driving and accurate accident investigation."}
{"_id": "844e0141f47374c6c2cb4c5425d5f9b3ce201f75", "title": "Exploring Consumers \u2019 Payment Behaviours at Completing MicroTransactions with Vending Machines in Sweden", "text": ".........................................................................................................................................\t\r 2 SAMMANFATTNING\t\r ........................................................................................................................\t\r 2 ACKNOWLEDGMENTS\t\r ...................................................................................................................\t\r 3 CONTENT\t\r ...........................................................................................................................................\t\r 4 1.\t\r INTRODUCTION\t\r .......................................................................................................................\t\r 6 1.1.\t\r PROBLEM\t\r FORMULATION\t\r AND\t\r PURPOSE\t\r ........................................................................................\t\r 7 2.\t\r BACKGROUND\t\r ..........................................................................................................................\t\r 8 2.1.\t\r VENDING\t\r MACHINE\t\r IN\t\r SWEDEN\t\r ........................................................................................................\t\r 8 2.1.1.\t\r Payment\t\r Procedures\t\r at\t\r Selecta\t\r Vending\t\r Machine\t\r ........................................................\t\r 9 2.2.\t\r RELEVANT\t\r STUDIES\t\r ..........................................................................................................................\t\r 14 3.\t\r THEORETICAL\t\r FRAMWORK\t\r .............................................................................................\t\r 16 3.1.\t\r ROGERS\t\r \u2013\t\r DIFFUSION\t\r OF\t\r INNOVATION\t\r THEORY\t\r ..........................................................................\t\r 16 3.1.1.\t\r Relative\t\r Advantage\t\r ..................................................................................................................\t\r 16 3.1.2.\t\r Compatibility\t\r ..............................................................................................................................\t\r 17 3.1.3.\t\r Complexity\t\r ...................................................................................................................................\t\r 17 3.2.\t\r TRUST\t\r AND\t\r SECURITY\t\r .......................................................................................................................\t\r 17 3.3.\t\r USE\t\r SITUATION\t\r ..................................................................................................................................\t\r 17 3.4.\t\r MONEY\u2019S\t\r PAYMENT\t\r RITUAL\t\r ............................................................................................................\t\r 18 4.\t\r METHODOLOGY\t\r AND\t\r DATA\t\r COLLECTION\t\r ...................................................................\t\r 19 4.1.\t\r OBSERVATIONS\t\r ..................................................................................................................................\t\r 19 4.2.\t\r SEMI-\u00ad\u2010STRUCTURED\t\r INTERVIEWS\t\r ....................................................................................................\t\r 19 4.3.\t\r DATA\t\r ANALYSES\t\r 
................................................................................................................................\t\r 19 4.4.\t\r VALIDITY\t\r AND\t\r RELIABILITY\t\r ............................................................................................................\t\r 20 4.5.\t\r ETHICS\t\r .................................................................................................................................................\t\r 20 5.\t\r RESULTS\t\r .................................................................................................................................\t\r 21 5.1.\t\r CONSUMERS\u2019\t\r CHARACTERISTICS\t\r .....................................................................................................\t\r 21 5.2.\t\r OBSERVATION\t\r AND\t\r INTERVIEW\t\r RESULTS\t\r ....................................................................................\t\r 22 5.2.1.\t\r Relative\t\r Advantage\t\r ..................................................................................................................\t\r 22 5.2.2.\t\r Compatibility\t\r ..............................................................................................................................\t\r 22 5.2.3.\t\r Complexity\t\r ...................................................................................................................................\t\r 23 5.2.4.\t\r Trust\t\r and\t\r Security\t\r ....................................................................................................................\t\r 24 5.2.5.\t\r Impact\t\r of\t\r Unexpected\t\r Circumstances\t\r ..............................................................................\t\r 26 5.2.6.\t\r Impact\t\r of\t\r Money\u2019s\t\r Payment\t\r Ritual\t\r ...................................................................................\t\r 27 6.\t\r DISCUSSION\t\r ...........................................................................................................................\t\r 28 6.1.\t\r RELATIVE\t\r ADVANTAGE\t\r .....................................................................................................................\t\r 28 6.2.\t\r COMPATIBILITY\t\r ..................................................................................................................................\t\r 28"}
{"_id": "3b9ea0c1aa52540e4b01c35b61f2e0a25d5571cd", "title": "An analysis of the GTZAN music genre dataset", "text": "A significant amount of work in automatic music genre recognition has used a dataset whose composition and integrity has never been formally analyzed. For the first time, we provide an analysis of its composition, and create a machine-readable index of artist and song titles. We also catalog numerous problems with its integrity, such as replications, mislabelings, and distortions."}
{"_id": "bf0e798edd3ec2397296ff9acf7fd7a427b00d3c", "title": "The status of HTS ship propulsion motor developments", "text": "The development of ship propulsion synchronous motors with high temperature superconductor (HTS) field windings for Naval electric ship applications has progressed to the point where a full scale motor is now under construction. A 5 MW, 230-rpm prototype ship propulsion motor was built and tested by the Center for Advanced Power Systems (CAPS) on behalf of U.S. Office of Naval Research (ONR). It met or exceeded all its design goals. Currently, a 36.5 MW, 120-rpm ship propulsion motor is being built for delivery to ONR at the end of 2006. This paper presents test results of the 5 MW motor and the status of the 36.5 MW motor"}
{"_id": "286ede5b71075fcc6079bfc8d1816b3ad0ba087b", "title": "Standoff Detection of Weapons and Contraband in the 100 GHz to 1 THz Region", "text": "The techniques and technologies currently being investigated to detect weapons and contraband concealed on persons under clothing are reviewed. The basic phenomenology of the atmosphere and materials that must be understood in order to realize such a system are discussed. The component issues and architectural designs needed to realize systems are outlined. Some conclusions with respect to further technology developments are presented."}
{"_id": "a79558699052d424cbae479a8d143eedeb3f54ef", "title": "Beyond pixels: Exploiting camera metadata for photo classification", "text": "Semantic scene classification based only on low-level vision cues has had limited success on unconstrained image sets. On the other hand, camera metadata related to capture conditions provides cues independent of the captured scene content that can be used to improve classification performance. We consider three problems, indoor-outdoor classification, sunset detection, and manmade-natural classification. Analysis of camera metadata statistics for images of each class revealed that metadata fields, such as exposure time, flash fired, and subject distance, are most discriminative for each problem. A Bayesian network is employed to fuse content-based and metadata cues in the probability domain and degrades gracefully even when specific metadata inputs are missing (a practical concern). Finally, we provide extensive experimental results on the three problems using content-based and metadata cues to demonstrate the efficacy of the proposed integrated scene classification scheme."}
{"_id": "66981819e6569cad851b5abd91bb8bd2b29ed0e8", "title": "Outlier Rejection for Absolute Pose Estimation with Known Orientation", "text": "Estimating the pose of a camera is a core problem in many geometric vision applications. While there has been much progress in the last two decades, the main difficulty is still dealing with data contaminated by outliers. For many scenes, e.g. with poor lightning conditions or repetitive textures, it is common that most of the correspondences are outliers. For real applications it is therefore essential to have robust estimation methods. In this paper we present an outlier rejection method for absolute pose estimation. We focus on the special case when the orientation of the camera is known. The problem is solved by projecting to a lower dimensional subspace where we are able to efficiently compute upper bounds on the maximum number of inliers. The method guarantees that only correspondences which cannot belong to an optimal pose are removed. In a number of challenging experiments we evaluate our method on both real and synthetic data and show improved performance compared to competing methods."}
{"_id": "9df77827a3573fcafb4b32aa086db4709da425a8", "title": "MySpace and Facebook: Applying the Uses and Gratifications Theory to Exploring Friend-Networking Sites", "text": "The increased use of the Internet as a new tool in communication has changed the way people interact. This fact is even more evident in the recent development and use of friend-networking sites. However, no research has evaluated these sites and their impact on college students. Therefore, the present study was conducted to evaluate: (a) why people use these friend-networking sites, (b) what the characteristics are of the typical college user, and (c) what uses and gratifications are met by using these sites. Results indicated that the vast majority of college students are using these friend-networking sites for a significant portion of their day for reasons such as making new friends and locating old friends. Additionally, both men and women of traditional college age are equally engaging in this form of online communication with this result holding true for nearly all ethnic groups. Finally, results showed that many uses and gratifications are met by users (e.g., \"keeping in touch with friends\"). Results are discussed in light of the impact that friend-networking sites have on communication and social needs of college students."}
{"_id": "568a0de980b9773fe96abf3c75ac0891d1df9c2b", "title": "Is e-Learning the Solution for Individual Learning?.", "text": "Despite the fact that e-Learning exists for a relatively long time, it is still in its infancy. Current eLearning systems on the market are limited to technical gadgets and organizational aspects of teaching, instead of supporting the learning. As a result the learner has become deindividualized and demoted to a noncritical homogenous user. One way out of this drawback is the creation of individual e-Learning materials. For this purpose a flexible multidimensional data model and the generation of individual content are the solution. It is necessary to enable the interaction between the learners and the content in e-Learning systems in the same manner."}
{"_id": "56c16d9e2a5270ba6b1d83271e2c10916591968d", "title": "Human memory ; A proposed system and its control processes", "text": ""}
{"_id": "47ed703b8b6a501a5eb0e07ba8bc8d27be911ce5", "title": "Answering Location-Aware Graph Reachability Queries on GeoSocial Data", "text": "Thanks to the wide spread use of mobile and wearable devices, popular social networks, e.g., Facebook, prompts users to add spatial attributes to social entities, e.g., check-ins, traveling posts, and geotagged photos, leading to what is known as, The GeoSocial Graph. In such graph, usersmay issue a Reachability Query with Spatial Range Predicate (abbr. RangeReach). RangeReach finds whether an input vertex can reach any spatial vertex that lies within an input spatial range. The paper proposes GEOREACH, an approach that adds spatial data awareness to a graph database management system. GEOREACH allows efficient execution of RangeReach queries, yet without compromising a lot on the overall system scalability. Experiments based on system implementation inside Neo4j prove that GEOREACH exhibits up to two orders of magnitude better query performance and up to four times less storage than the state-of-the-art spatial and reachability indexing approaches."}
{"_id": "4c6dc7218f6ba4059be6b7e77bf09492bcbae9bc", "title": "Distributed Representation of Subgraphs", "text": "Network embeddings have become very popular in learning effective feature representations of networks. Motivated by the recent successes of embeddings in natural language processing, researchers have tried to \u0080nd network embeddings in order to exploit machine learning algorithms for mining tasks like node classi\u0080cation and edge prediction. However, most of the work focuses on \u0080nding distributed representations of nodes, which are inherently ill-suited to tasks such as community detection which are intuitively dependent on subgraphs. Here, we propose Sub2Vec, an unsupervised scalable algorithm to learn feature representations of arbitrary subgraphs. We provide means to characterize similarties between subgraphs and provide theoretical analysis of Sub2Vec and demonstrate that it preserves the so-called local proximity. We also highlight the usability of Sub2Vec by leveraging it for network mining tasks, like community detection. We show that Sub2Vec gets signi\u0080cant gains over stateof-the-art methods and node-embedding methods. In particular, Sub2Vec o\u0082ers an approach to generate a richer vocabulary of features of subgraphs to support representation and reasoning. ACM Reference format: Bijaya Adhikari, Yao Zhang, Naren Ramakrishnan and B. Aditya Prakash. 2016. Distributed Representation of Subgraphs. In Proceedings of ACM Conference, Washington, DC, USA, July 2017 (Conference\u201917), 9 pages. DOI: 10.1145/nnnnnnn.nnnnnnn"}
{"_id": "4294fb388e45f3e969cb8615d8001f70eb47206e", "title": "Employee job attitudes and organizational characteristics as predictors of cyberloafing", "text": "Cyberloafing is the personal use of the Internet by employees while at work. The purpose of this study is to examine whether employee job attitudes, organizational characteristics, attitudes towards cyberloafing, and other non-Internet loafing behaviors serve as antecedents to cyberloafing behaviors. We hypothesize that the employee job attitudes of job involvement and intrinsic involvement are related to cyberloafing. In addition, we hypothesize that organizational characteristics including the perceived cyberloafing of one\u2019s coworkers and managerial support for internet usage are related to cyberloafing. We also hypothesize that attitudes towards cyberloafing and the extent to which employees participate in non-Internet loafing behaviors (e.g., talking with coworkers, running personal errands) will both be related to cyberloafing. One hundred and forty-three working professional from a variety of industries were surveyed regarding their Internet usage at work. As hypothesized, the employee job attitudes of job involvement and intrinsic involvement were negatively related to cyberloafing. Also as predicted, the organizational characteristics of the perceived cyberloafing of one\u2019s coworkers and managerial support for internet usage were positively related to cyberloafing. Finally, results showed that attitudes towards cyberloafing and participation in non-Internet loafing behaviors were positively related to cyberloafing. Implications for both organizations and employees are discussed. 2011 Elsevier Ltd. All rights reserved."}
{"_id": "37735f5760f8ef487791cd67a7e8fc90f79c51c9", "title": "Hand Rehabilitation Learning System With an Exoskeleton Robotic Glove", "text": "This paper presents a hand rehabilitation learning system, the SAFE Glove, a device that can be utilized to enhance the rehabilitation of subjects with disabilities. This system is able to learn fingertip motion and force for grasping different objects and then record and analyze the common movements of hand function including grip and release patterns. The glove is then able to reproduce these movement patterns in playback fashion to assist a weakened hand to accomplish these movements, or to modulate the assistive level based on the user's or therapist's intent for the purpose of hand rehabilitation therapy. Preliminary data have been collected from healthy hands. To demonstrate the glove's ability to manipulate the hand, the glove has been fitted on a wooden hand and the grasping of various objects was performed. To further prove that hands can be safely driven by this haptic mechanism, force sensor readings placed between each finger and the mechanism are plotted. These experimental results demonstrate the potential of the proposed system in rehabilitation therapy."}
{"_id": "0647c9d56cf11215894d57d677997826b22f6a13", "title": "Transgender face recognition with off-the-shelf pre-trained CNNs: A comprehensive study", "text": "Face recognition has become a ubiquitous way of establishing identity in many applications. Gender transformation therapy induces changes to face on both for structural and textural features. A challenge for face recognition system is, therefore, to reliably identify the subjects after they undergo gender change while the enrolment images correspond to pre-change. In this work, we propose a new framework based on augmenting and fine-tuning deep Residual Network-50 (ResNet-50). We employ YouTube database with 37 subjects whose images are self-captured to evaluate the performance of state-of-the-schemes. Obtained results demonstrate the superiority of the proposed scheme over twelve different state-of-the-art schemes with an improved Rank \u2014 1 recognition rate."}
{"_id": "c874f54082b6c2545babd7f5d1f447416ebba667", "title": "Multi-Strategy Sentiment Analysis of Consumer Reviews Based on Semantic Fuzziness", "text": "Since Internet has become an excellent source of consumer reviews, the area of sentiment analysis (also called sentiment extraction, opinion mining, opinion extraction, and sentiment mining) has seen a large increase in academic interest over the last few years. Sentiment analysis mines opinions at word, sentence, and document levels, and gives sentiment polarities and strengths of articles. As known, the opinions of consumers are expressed in sentiment Chinese phrases. But due to the fuzziness of Chinese characters, traditional machine learning techniques can not represent the opinion of articles very well. In this paper, we propose a multi-strategy sentiment analysis method with semantic fuzziness to solve the problem. The results show that this hybrid sentiment analysis method can achieve a good level of effectiveness."}
{"_id": "c86daf4fcb53be986d3b2cb1524c45b3772ac802", "title": "Online shopping behavior model: A literature review and proposed model", "text": "In this study, we conducted extensive reviews of online shopping literatures and proposed a hierarchy model of online shopping behavior. We collected 47 studies and classified them by variables used. Some critical points were found that research framework, methodology, and lack of cross-cultural comparison, etc So we developed a cross-cultural model of online shopping including shopping value, attitudes to online retailer's attributes and online purchasing based on the integrated V-A-B model."}
{"_id": "5ad9f87ef6f41eb91bc479cf61c850bdde5a71ef", "title": "Spinal curvature determination from scoliosis X-Ray image using sum of squared difference template matching", "text": "Scoliosis is a disorder in which there is a sideways curve of the spine. Curve are often S-shaped or C-shaped. One of the methods to ascertain a patient with scoliosis is through using Cobb angle measurement. The importance of the automatic spinal curve detection system is to detect any spinal disorder quicker and faster. This study is intended as a first step based on computer-aided diagnosis. The spinal detection method that we propose is using template matching based on Sum of Squared Difference (SSD). This method is used to estimate the location of the vertebra. By using polynomial curve fitting, a spinal curvature estimation can be done. This paper discusses the performance of SSD method used to detect a variety of data sources of X-Ray from numerous patients. The results from the implementation indicate that the proposed algorithm can be used to detect all the X-ray images. The best result in this experiment has 96.30% accuracy using 9-subdivisions poly 5 algorithm, and the average accuracy is 86.01%."}
{"_id": "78e5e97a082263f653ac07fbfbfc32c317ddb881", "title": "Multitask Compressive Sensing", "text": "Compressive sensing (CS) is a framework whereby one performs N nonadaptive measurements to constitute a vector v isin RN used to recover an approximation u isin RM desired signal u isin RM with N << M this is performed under the assumption that u is sparse in the basis represented by the matrix Psi RMtimesM. It has been demonstrated that with appropriate design of the compressive measurements used to define v, the decompressive mapping vrarru may be performed with error ||u-u||2 2 having asymptotic properties analogous to those of the best adaptive transform-coding algorithm applied in the basis Psi. The mapping vrarru constitutes an inverse problem, often solved using l1 regularization or related techniques. In most previous research, if L > 1 sets of compressive measurements {vi}i=1,L are performed, each of the associated {ui}i=1,Lare recovered one at a time, independently. In many applications the L ldquotasksrdquo defined by the mappings virarrui are not statistically independent, and it may be possible to improve the performance of the inversion if statistical interrelationships are exploited. In this paper, we address this problem within a multitask learning setting, wherein the mapping vrarru for each task corresponds to inferring the parameters (here, wavelet coefficients) associated with the desired signal vi, and a shared prior is placed across all of the L tasks. Under this hierarchical Bayesian modeling, data from all L tasks contribute toward inferring a posterior on the hyperparameters, and once the shared prior is thereby inferred, the data from each of the L individual tasks is then employed to estimate the task-dependent wavelet coefficients. An empirical Bayesian procedure for the estimation of hyperparameters is considered; two fast inference algorithms extending the relevance vector machine (RVM) are developed. Example results on several data sets demonstrate the effectiveness and robustness of the proposed algorithms."}
{"_id": "6237f7264a5c278c717d1bc625e93d0506c843cf", "title": "Two cortical systems for memory-guided behaviour", "text": "Although the perirhinal cortex (PRC), parahippocampal cortex (PHC) and retrosplenial cortex (RSC) have an essential role in memory, the precise functions of these areas are poorly understood. Here, we review the anatomical and functional characteristics of these areas based on studies in humans, monkeys and rats. Our Review suggests that the PRC and PHC\u2013RSC are core components of two separate large-scale cortical networks that are dissociable by neuroanatomy, susceptibility to disease and function. These networks not only support different types of memory but also appear to support different aspects of cognition."}
{"_id": "81dd4a314f8340d5e9bfd897897e8147faf8c63e", "title": "Sexual arousal patterns of bisexual men.", "text": "There has long been controversy about whether bisexual men are substantially sexually aroused by both sexes. We investigated genital and self-reported sexual arousal to male and female sexual stimuli in 30 heterosexual, 33 bisexual, and 38 homosexual men. In general, bisexual men did not have strong genital arousal to both male and female sexual stimuli. Rather, most bisexual men appeared homosexual with respect to genital arousal, although some appeared heterosexual. In contrast, their subjective sexual arousal did conform to a bisexual pattern. Male bisexuality appears primarily to represent a style of interpreting or reporting sexual arousal rather than a distinct pattern of genital sexual arousal."}
{"_id": "b5b0255f03233a04d5381600f79b4f0530b26467", "title": "Towards a Distributed Large-Scale Dynamic Graph Data Store", "text": "In many graph applications, the structure of the graph changes dynamically over time and may require real time analysis. However, constructing a large graph is expensive, and most studies for large graphs have not focused on a dynamic graph data structure, but rather a static one. To address this issue, we propose DegAwareRHH, a high performance dynamic graph data store designed for scaling out to store large, scale-free graphs by leveraging compact hash tables with high data locality. We extend DegAwareRHH for multiple processes and distributed memory, and perform dynamic graph construction on large scale-free graphs using emerging 'Big Data HPC' systems such as the Catalyst cluster at LLNL. We demonstrate that DegAwareRHH processes a request stream 206.5x faster than a state-of-the-art shared-memory dynamic graph processing framework, when both implementations use 24 threads/processes to construct a graph with 1 billion edge insertion requests and 54 million edge deletion requests. DegAwareRHH also achieves a processing rate of over 2 billion edge insertion requests per second using 128 compute nodes to construct a large-scale web graph, containing 128 billion edges, the largest open-source real graph dataset to our knowledge."}
{"_id": "4e5326b0c248246b88f786907edb4e295eae9928", "title": "Automatic assessment of macular edema from color retinal images", "text": "Diabetic macular edema (DME) is an advanced symptom of diabetic retinopathy and can lead to irreversible vision loss. In this paper, a two-stage methodology for the detection and classification of DME severity from color fundus images is proposed. DME detection is carried out via a supervised learning approach using the normal fundus images. A feature extraction technique is introduced to capture the global characteristics of the fundus images and discriminate the normal from DME images. Disease severity is assessed using a rotational asymmetry metric by examining the symmetry of macular region. The performance of the proposed methodology and features are evaluated against several publicly available datasets. The detection performance has a sensitivity of 100% with specificity between 74% and 90%. Cases needing immediate referral are detected with a sensitivity of 100% and specificity of 97%. The severity classification accuracy is 81% for the moderate case and 100% for severe cases. These results establish the effectiveness of the proposed solution."}
{"_id": "e879de0d1be295288646a8308a0a01416f31db20", "title": "Shoulder impingement revisited: evolution of diagnostic understanding in orthopedic surgery and physical therapy", "text": "\u201cImpingement syndrome\u201d is a common diagnostic label for patients presenting with shoulder pain. Historically, it was believed to be due to compression of the rotator cuff tendons beneath the acromion. It has become evident that \u201cimpingement syndrome\u201d is not likely an isolated condition that can be easily diagnosed with clinical tests or most successfully treated surgically. Rather, it is likely a complex of conditions involving a combination of intrinsic and extrinsic factors. A mechanical impingement phenomenon as an etiologic mechanism of rotator cuff disease may be distinct from the broad diagnostic label of \u201cimpingement syndrome\u201d. Acknowledging the concepts of mechanical impingement and movement-related impairments may better suit the diagnostic and interventional continuum as they support the existence of potentially modifiable impairments within the conservative treatment paradigm. Therefore, it is advocated that the clinical diagnosis of \u201cimpingement syndrome\u201d be eliminated as it is no more informative than the diagnosis of \u201canterior shoulder pain\u201d. While both terms are ambiguous, the latter is less likely to presume an anatomical tissue pathology that may be difficult to isolate either with a clinical examination or with diagnostic imaging and may prevent potentially inappropriate surgical interventions. We further recommend investigation of mechanical impingement and movement patterns as potential mechanisms for the development of shoulder pain, but clearly distinguished from a clinical diagnostic label of \u201cimpingement syndrome\u201d. For shoulder researchers, we recommend investigations of homogenous patient groups with accurately defined specific pathologies, or with subgrouping or classification based on specific movement deviations. Diagnostic labels based on the movement system may allow more effective subgrouping of patients to guide treatment strategies."}
{"_id": "56c2fb2438f32529aec604e6fc3b06a595ddbfcc", "title": "Comparison of Recent Machine Learning Techniques for Gender Recognition from Facial Images", "text": "Recently, several machine learning methods for gender classification from frontal facial images have been proposed. Their variety suggests that there is not a unique or generic solution to this problem. In addition to the diversity of methods, there is also a diversity of benchmarks used to assess them. This gave us the motivation for our work: to select and compare in a concise but reliable way the main state-of-the-art methods used in automatic gender recognition. As expected, there is no overall winner. The winner, based on the accuracy of the classification, depends on the type of benchmarks used."}
{"_id": "1ba84863e2685c45c5c41953444d9383dc7aa13b", "title": "Efficient Support Vector Classifiers for Named Entity Recognition", "text": "NamedEntity (NE) recognitionis a task in which proper nouns and numerical information are extractedfrom documentsandareclassifiedinto categoriessuchas person,organization,and date. It is a key technologyof InformationExtractionand Open-DomainQuestionAnswering.First,weshow thatanNE recognizerbasedonSupportVectorMachines(SVMs)givesbetterscoresthanconventional systems.However, off-the-shelfSVM classifiersare too inefficient for this task.Therefore,we presenta methodthat makes the systemsubstantiallyfaster . This approachcan also be applied to other similar taskssuchaschunkingandpart-of-speechtagging. We alsopresentanSVM-basedfeatureselection methodandanefficient trainingmethod."}
{"_id": "ed7fc2b80328cea51bb11b59b15db5f6e26140fd", "title": "The Bad Boys of Cyberspace: Deviant Behavior in a Multimedia Chat Community", "text": "A wide variety of deviant behavior may arise as the population of an online multimedia community increases. That behavior can span the range from simple mischievous antics to more serious expressions of psychopathology, including depression, sociopathy, narcissism, dissociation, and borderline dynamics. In some cases the deviant behavior may be a process of pathological acting out\u2014in others, a healthy attempt to work through. Several factors must be taken into consideration when explaining online deviance, such as social/cultural issues, the technical infrastructure of the environment, transference reactions, and the effects of the ambiguous, anonymous, and fantasy-driven atmosphere of cyberspace life. In what we may consider an \"online community psychology,\" intervention strategies for deviant behavior can be explored along three dimensions: preventative versus remedial, user versus superuser based, and automated versus interpersonal."}
{"_id": "d2bc84a42de360534913e8c3f239197064cf0889", "title": "Bayesian Optimization with Empirical Constraints ( PhD Proposal )", "text": "This work is motivated by the experimental design problem of optimizing the power output of nano-enhanced microbial fuel cells. Microbial fuel cells (MFCs) (Bond and Lovley, 2003; Fan et al., 2007; Park and Zeikus, 2003; Reguera, 2005) use micro-organisms to break down organic matter and generate electricity. For a particular MFC design, it is critical to optimize the biological energetics and the microbial/electrode interface of the system, which research has shown to depend strongly on the surface properties of the anodes (Park and Zeikus, 2003; Reguera, 2005). This motivates the design of nano-enhanced anodes, where nano-structures (e.g. carbon nano-wire) are grown on the anode surface to improve the MFC\u2019s power output. Unfortunately, there is little understanding of the interaction between various possible nano-enhancements and MFC capabilities for different micro-organisms. Thus, optimizing anode design for a particular application is largely guess work. Our goal is to develop algorithms to aid this process. Bayesian optimization (Jones, 2001a; Brochu et al., 2009) has been widely used for experimental design problems where the goal is to optimize an unknown function f(\u00b7), that is costly to evaluate. In general, we are interested in finding the point x\u2217 \u2208 X d \u2282 Rd such that: x\u2217 = argmax x\u2208X d f(x), (1)"}
{"_id": "8b422d929c083299afc1cff7debfd2a41e94fa50", "title": "I know what you did on your smartphone: Inferring app usage over encrypted data traffic", "text": "Smartphones and tablets are now ubiquitous in many people's lives and are used throughout the day in many public places. They are often connected to a wireless local area network (IEEE 802.11 WLANs) and rely on encryption protocols to maintain their security and privacy. In this paper, we show that even in presence of encryption, an attacker without access to encryption keys is able to determine the users' behavior, in particular, their app usage. We perform this attack using packet-level traffic analysis in which we use side-channel information leaks to identify specific patterns in packets regardless of whether they are encrypted or not. We show that just by collecting and analyzing small amounts of wireless traffic, one can determine what apps each individual smartphone user in the vicinity is using. Furthermore, and more worrying, we show that by using these apps the privacy of the user is more at risk compared to using online services through browsers on mobile devices. This is due to the fact that apps generate more identifiable traffic patterns. Using random forests to classify the apps we show that we are able to identify individual apps, even in presence of noise, with great accuracy. Given that most online services now provide native apps that may be identified by this method, these attacks represent a serious threat to users' privacy."}
{"_id": "bdf67ee2a13931ca2d5eac458714ed98148d1b34", "title": "An Intrusion-Detection Model", "text": "A model of a real-time intrusion-detection expert system capable of detecting break-ins, penetrations, and other forms of computer abuse is described. The model is based on the hypothesis that security violations can be detected by monitoring a system's audit records for abnormal patterns of system usage. The model includes profiles for representing the behavior of subjects with respect to objects in terms of metrics and statistical models, and rules for acquiring knowledge about this behavior from audit records and for detecting anomalous behavior. The model is independent of any particular system, application environment, system vulnerability, or type of intrusion, thereby providing a framework for a general-purpose intrusion-detection expert system."}
{"_id": "eeb1a1e0cab8d809b5789d04418dc247dca956cc", "title": "Mining fuzzy association rules and fuzzy frequency episodes for intrusion detection", "text": "Lee, Stolfo, and Mok have previously reported the use of association rules and frequency episodes for mining audit data to gain knowledge for intrusion detection. The integration of association rules and frequency episodes with fuzzy logic can produce more abstract and flexible patterns for intrusion detection, since many quantitative features are involved in intrusion detection and security itself is fuzzy. We present a modification of a previously reported algorithm for mining fuzzy association rules, define the concept of fuzzy frequency episodes, and present an original algorithm for mining fuzzy frequency episodes. We add a normalization step to the procedure for mining fuzzy association rules in order to prevent one data instance from contributing more than others. We also modify the procedure for mining frequency episodes to learn fuzzy frequency episodes. Experimental results show the utility of fuzzy association rules and fuzzy frequency episodes in intrusion detection. Draft: Updated version published in the International Journal of Intelligent Systems, Volume 15, No. I, August 2000 3"}
{"_id": "4b3ffdbe6e2cfc1f7edbe84eb8dc26b94f948c26", "title": "Context-Aware Time Series Anomaly Detection for Complex Systems", "text": "Systems with several components interacting to accomplish challenging tasks are ubiquitous; examples include large server clusters providing \u201ccloud computing\u201d, manufacturing plants, automobiles, etc. Our relentless efforts to improve the capabilities of these systems inevitably increase their complexity as we add more components or introduce more dependencies between existing ones. To tackle this surge in distributed system complexity, system operators collect continuous monitoring data from various sources including hardware and software-based sensors. A large amount of this data is either in the form of time-series or contained in logs, e.g., operators\u2019 activity, system event, and error logs, etc. In this paper, we propose a framework for mining system operational intelligence from massive amount of monitoring data that combines the time series data with information extracted from text-based logs. Our work is aimed at systems where logs capture the context of a system\u2019s operations and the time-series data record the state of different components. This category includes a wide variety of systems including IT systems (compute clouds, web services\u2019 infrastructure, enterprise computing infrastructure, etc.) and complex physical systems such as manufacturing plants. We instantiate our framework for Hadoop. Our preliminary results using both real and synthetic datasets show that the proposed context-aware approach is more effective for detecting anomalies compared to a time series only approach."}
{"_id": "3297a13244c83b0bdc4cc1e235471290b8a3a2d4", "title": "On Cognitive Computing", "text": "Inspired by the latest development in cognitive informatics and contemporary denotational mathematics, cognitive computing is an emerging paradigm of intelligent computing methodologies and systems, which implements computational intelligence by autonomous inferences and perceptions mimicking the mechanisms of the brain. This article presents a survey on the theoretical framework and architectural techniques of cognitive computing beyond conventional imperative and autonomic computing technologies. Theoretical foundations of cognitive computing are elaborated from the aspects of cognitive informatics, neural informatics, and denotational mathematics. Conceptual models of cognitive computing are explored on the basis of the latest advances in abstract intelligence and computational intelligence. Applications of cognitive computing are described from the aspects of autonomous agent systems and cognitive search engines, which demonstrate how machine and computational intelligence may be generated and implemented by cognitive computing theories and technologies toward autonomous knowledge processing. [Article copies are available for purchase from InfoSci-on-Demand.com]"}
{"_id": "08c0c3b42c337809e233371c730c23d08da442bf", "title": "Market-Based Multirobot Coordination: A Survey and Analysis", "text": "Market-based multirobot coordination approaches have received significant attention and are growing in popularity within the robotics research community. They have been successfully implemented in a variety of domains ranging from mapping and exploration to robot soccer. The research literature on market-based approaches to coordination has now reached a critical mass that warrants a survey and analysis. This paper addresses this need for a survey of the relevant literature by providing an introduction to market-based multirobot coordination, a review and analysis of the state of the art in the field, and a discussion of remaining research challenges"}
{"_id": "3dff11679346f5344af1018cad57fa14cc349f2f", "title": "GraphLab: A New Framework For Parallel Machine Learning", "text": "Designing and implementing efficient, provably correct parallel machine learning (ML) algorithms is challenging. Existing high-level parallel abstractions like MapReduce are insufficiently expressive while low-level tools like MPI and Pthreads leave ML experts repeatedly solving the same design challenges. By targeting common patterns in ML, we developed GraphLab, which improves upon abstractions like MapReduce by compactly expressing asynchronous iterative algorithms with sparse computational dependencies while ensuring data consistency and achieving a high degree of parallel performance. We demonstrate the expressiveness of the GraphLab framework by designing and implementing parallel versions of belief propagation, Gibbs sampling, Co-EM, Lasso and Compressed Sensing. We show that using GraphLab we can achieve excellent parallel performance on large scale real-world problems."}
{"_id": "6b0ce981c72b50773128b16036ae7c900b8dedfa", "title": "MEASUREMENT INVARIANCE OF A REFINED VALUES SCALE 1 Running Head : MEASUREMENT INVARIANCE OF A REFINED VALUES SCALE The Cross-National Invariance Properties of a New Scale to Measure 19 Basic Human Values . A Test across Eight Countries", "text": "Several studies that measured basic human values across countries with the Portrait Values Questionnaire (PVQ-21) reported violations of measurement invariance. Such violations may hinder meaningful cross-cultural research on human values because value scores may not be comparable. Schwartz et al. (2012) proposed a refined value theory and a new instrument (PVQ-5X) to measure 19 more narrowly defined values. We tested the measurement invariance of this instrument across eight countries. Configural and metric invariance were established for all values across almost all countries. Scalar invariance was supported across nearly all countries for 10 values. The analyses revealed that the cross-country invariance properties of the values measured with the PVQ-5X are substantially better than those measured with the PVQ-21. DOI: https://doi.org/10.1177/0022022114527348 Posted at the Zurich Open Repository and Archive, University of Zurich ZORA URL: https://doi.org/10.5167/uzh-94906 Accepted Version Originally published at: Cieciuch, Jan; Davidov, Eldad; Vecchione, Michele; Beierlein, Constanze; Schwartz, Shalom H (2014). The Cross-National Invariance Properties of a New Scale to Measure 19 Basic Human Values: A Test Across Eight Countries. Journal of Cross-Cultural Psychology, 45(5):764-776. DOI: https://doi.org/10.1177/0022022114527348 MEASUREMENT INVARIANCE OF A REFINED VALUES SCALE Jan Cieciuch Institute of Sociology, University of Zurich, Switzerland and University of Finance and Management, Warsaw, Poland Eldad Davidov Institute of Sociology, University of Zurich, Switzerland Michele Vecchione \u201cSapienza\u201d University of Rome, Italy Constanze Beierlein Leibniz-Institute for the Social Sciences, GESIS, Mannheim, Germany Shalom H. Schwartz The Hebrew University of Jerusalem, Israel and National Research University-Higher School of Economics, Moscow This is a pre-copy-editing, author-produced PDF of an article accepted for publication in the Journal of Cross-Cultural Psychology following peer review. It was first published online in this journal on March 19, 2014. The definitive publisher-authenticated version is available online at: http://jcc.sagepub.com/content/early/2014/03/14/0022022114527348 or under doi: 10.1177/0022022114527348 MEASUREMENT INVARIANCE OF A REFINED VALUES SCALE 1 Running Head: MEASUREMENT INVARIANCE OF A REFINED VALUES SCALE The Cross-National Invariance Properties of a New Scale to Measure 19 Basic Human Values. A Test across Eight Countries Jan Cieciuch jancieciuch@gmail.com Institute of Sociology, University of Zurich, Switzerland and University of Finance and Management, Warsaw, Poland Contact address: Andreasstrasse 15, CH-8050 Zurich, Switzerland Tel: +41 44 635 23 22, Fax: +41 44 635 23 99 Eldad Davidov Institute of Sociology, University of Zurich, Switzerland Michele Vecchione \u201cSapienza\u201d University of Rome, Italy Constanze Beierlein Leibniz-Institute for the Social Sciences, GESIS, Mannheim, Germany Shalom H. 
Schwartz The Hebrew University of Jerusalem, Israel and National Research University-Higher School of Economics, Moscow MEASUREMENT INVARIANCE OF A REFINED VALUES SCALE 2 Author Note The work of the first and second authors on this paper was supported by the Scientific Exchange Program (Switzerland) and the Research Priority Program \u2018Social Networks\u2019, University of Zurich. The work of the first author was partially supported by Grant DEC2011/01/D/HS6/04077 from the Polish National Science Centre. The work of the fifth author on this paper was partly supported by the Higher School of Economics (HSE) Basic Research Program (International Laboratory of Socio-Cultural Research). The second author would like to thank the EUROLAB, GESIS, Cologne, for their hospitality during work on this paper. The authors would like to thank Lisa Trierweiler for the English proof of the manuscript. MEASUREMENT INVARIANCE OF A REFINED VALUES SCALE 3 Abstract Several studies that measured basic human values across countries with the Portrait Values Questionnaire (PVQ-21) reported violations of measurement invariance. Such violations may hinder meaningful cross-cultural research on human values because value scores may not be comparable. Schwartz et al. (2012) proposed a refined value theory and a new instrument (PVQ-5X) to measure 19 more narrowly defined values. We tested the measurement invariance of this instrument across eight countries. Configural and metric invariance were established for all values across almost all countries. Scalar invariance was supported across nearly all countries for 10 values. The analyses revealed that the cross-country invariance properties of the values measured with the PVQ-5X are substantially better than those measured with the PVQ-21.Several studies that measured basic human values across countries with the Portrait Values Questionnaire (PVQ-21) reported violations of measurement invariance. Such violations may hinder meaningful cross-cultural research on human values because value scores may not be comparable. Schwartz et al. (2012) proposed a refined value theory and a new instrument (PVQ-5X) to measure 19 more narrowly defined values. We tested the measurement invariance of this instrument across eight countries. Configural and metric invariance were established for all values across almost all countries. Scalar invariance was supported across nearly all countries for 10 values. The analyses revealed that the cross-country invariance properties of the values measured with the PVQ-5X are substantially better than those measured with the PVQ-21."}
{"_id": "6bfbcf45f974fc56d41d70d0d8bbfd72956c7462", "title": "Design and performance analysis of radial flux C-core switched reluctance motor for in-wheel electrical vehicle application", "text": "A 12/15 pole dual air-gap single rotor configuration with C-core radial flux switched reluctance motor (RFSRM) is proposed as a potential candidate for in-wheel electrical vehicle application with removable wheel/rotor, without disturbing the stator or stator winding. The proposed topology offers effective stator and rotor contact area without increasing the diameter of electrical motor and has smaller flux path compared to the conventional RFSRM. This arrangement offers good torque with low excitation current. It is easy to wound the winding at stator core compared to toothed or segmented type axial field SRM. In this paper phase inductance and torque are calculated analytically and compared with finite element analysis (FEA) results."}
{"_id": "ce84cd74808caffb905e9800f919c5703a42fa50", "title": "Demand-Side Management via Distributed Energy Generation and Storage Optimization", "text": "Demand-side management, together with the integration of distributed energy generation and storage, are considered increasingly essential elements for implementing the smart grid concept and balancing massive energy production from renewable sources. We focus on a smart grid in which the demand-side comprises traditional users as well as users owning some kind of distributed energy sources and/or energy storage devices. By means of a day-ahead optimization process regulated by an independent central unit, the latter users intend to reduce their monetary energy expense by producing or storing energy rather than just purchasing their energy needs from the grid. In this paper, we formulate the resulting grid optimization problem as a noncooperative game and analyze the existence of optimal strategies. Furthermore, we present a distributed algorithm to be run on the users' smart meters, which provides the optimal production and/or storage strategies, while preserving the privacy of the users and minimizing the required signaling with the central unit. Finally, the proposed day-ahead optimization is tested in a realistic situation."}
{"_id": "33329bc0422ec19983a665aaf2283d61a4b626f6", "title": "About relationship between business text patterns and financial performance in corporate data", "text": "This study uses text and data mining to investigate the relationship between the text patterns of annual reports published by US listed companies and sales performance. Taking previous research a step further, although annual reports show only past and present financial information, analyzing text content can identify sentences or patterns that indicate the future business performance of a company. First, we examine the relation pattern between business risk factors and current business performance. For this purpose, we select companies belonging to two categories of US SIC (Standard Industry Classification) in the IT sector, 7370 and 7373, which include Twitter, Facebook, Google, Yahoo, etc. We manually collect sales and business risk information for a total of 54 companies that submitted an annual report (Form 10-K) for the last three years in these two categories. To establish a correlation between patterns of text and sales performance, four hypotheses were set and tested. To verify the hypotheses, statistical analysis of sales, statistical analysis of text sentences, sentiment analysis of sentences, clustering, dendrogram visualization, keyword extraction, and word-cloud visualization techniques are used. The results show that text length has some correlation with sales performance, and that patterns of frequently appearing words are correlated with the sales performance. However, a sentiment analysis indicates that the positive or negative tone of a report is not related to sales performance."}
{"_id": "5dc6fff0e5f67923a0d0f4c31b8bc9bbe7742b69", "title": "Graphical models for text mining: knowledge extraction and performance estimation", "text": "This thesis would not have been possible unless the constant support of my advisor Professor Fabio Stella. He gave me suggestions, advices and a constant incentive to learn and deeply understand how things works. Special thanks go to Professor Enrico Fagiuoli for its suggestions and its huge knowledge of probabilistic models. internship has been a precious occasion to learn and discuss and to find new friends. Special thanks go my colleagues at DISCo that shared the highs and lows of my Ph.D. In particular, i thank my laboratory colleague Elena Gatti for our discussion and friendship and all the other Ph.D students for the discussion and fun moments. I thanks all the administrative personnel at the department, without them a Ph.D student life is hell. Thanks to all the student who worked with me during this years, part of the work in this thesis has been developed thanks to their contribution. It is a pleasure to thank those who made my Ph.D and thus my thesis possible: Docflow S.p.A. They gave me the possibility to get a sense of business world, their needs and to propose solutions. A special thanks go to all who have tolerated my obsessions and complaints during this years: my family, my friends and Manoela."}
{"_id": "b6dbf4e39a272e5c63550bee08c520d1cbc9e162", "title": "Exploring the benefits of the combination of a software architecture analysis and a usability evaluation of a mobile application", "text": "Designing easy to use mobile applications is a difficult task. In order to optimize the development of a usable mobile application, it is necessary to consider the mobile usage context for the design and the evaluation of the user\u2013system interaction of a mobile application. In our research we designed a method that aligns the inspection method \u201cSoftware ArchitecTure analysis of Usability Requirements realizatioN\u201d SATURN and a mobile usability evaluation in the form of a user test. We propose to use mobile context factors and thus requirements as a common basis for both inspection and user test. After conducting both analysis and user test, the results described as usability problems are mapped and discussed. The mobile context factors identified define and describe the usage context of a mobile application. We oftware architecture analysis"}
{"_id": "9ca758c5efd61c22c7c8897066c412c272c744c4", "title": "Script independent scene text segmentation using fast stroke width transform and GrabCut", "text": "In this paper, a novel script independent scene text segmentation method is presented. The proposed method introduces an approach for character segmentation which combines the advantages of connected component analysis, stroke width analysis and graph cuts. The method was evaluated on a standard dataset, and also on a custom dataset of scene images of text from various Indian scripts. Additionally, a fast implementation of Stroke Width Transform (SWT) is presented here, which improves speed over other implementations by 79%. From our experiments, we have achieved encouraging results on the usefulness of the combination of text/non-text classification with GrabCut. The technique is robust to the language/script of text."}
{"_id": "8d5b3de520fc07d2babc2144b1b41e11fc6708f2", "title": "Improving accuracy in E-Textiles as a platform for pervasive sensing", "text": "Recently Electronic Textile (E-Textile) technology enables the weaving computation and communication components into the clothes that we wear and objects that we interact with every day. E-Textile enables the design and development of a broad range of pervasive sensing systems such as smart bed sheets for sleep posture monitoring, insoles in medical shoes for monitoring plantar pressure distribution, smart garments and sensing gloves. However the low cost of E-Textiles come at the cost of accuracy. In this work we propose an actuator-based method that increases the accuracy of E-Textiles by means of enabling real-time calibration. Our proposed system increases system accuracy by 38.55% on average (maximum 58.4%)."}
{"_id": "de580138291876803d259d195c4ce571f6572e3a", "title": "Imbalance in the cloud: An analysis on Alibaba cluster trace", "text": "To improve resource efficiency and design intelligent scheduler for clouds, it is necessary to understand the workload characteristics and machine utilization in large-scale cloud data centers. In this paper, we perform a deep analysis on a newly released trace dataset by Alibaba in September 2017, consists of detail statistics of 11089 online service jobs and 12951 batch jobs co-locating on 1300 machines over 12 hours. To the best of our knowledge, this is one of the first work to analyze the Alibaba public trace. Our analysis reveals several important insights about different types of imbalance in the Alibaba cloud. Such imbalances exacerbate the complexity and challenge of cloud resource management, which might incur severe wastes of resources and low cluster utilization. 1) Spatial Imbalance: heterogeneous resource utilization across machines and workloads. 2) Temporal Imbalance: greatly time-varying resource usages per workload and machine. 3) Imbalanced proportion of multi-dimensional resources (CPU and memory) utilization per workload. 4) Imbalanced resource demands and runtime statistics (duration and task number) between online service and offline batch jobs. We argue accommodating such imbalances during resource allocation is critical to improve cluster efficiency, and will motivate the emergence of new resource managers and schedulers."}
{"_id": "b02120f7656b9717c2c4aca60e0be212b9eb3a3a", "title": "Automatic Collocation Suggestion in Academic Writing", "text": "In recent years, collocation has been widely acknowledged as an essential characteristic to distinguish native speakers from non-native speakers. Research on academic writing has also shown that collocations are not only common but serve a particularly important discourse function within the academic community. In our study, we propose a machine learning approach to implementing an online collocation writing assistant. We use a data-driven classifier to provide collocation suggestions to improve word choices, based on the result of classification. The system generates and ranks suggestions to assist learners\u2019 collocation usages in their academic writing with satisfactory results. *"}
{"_id": "5942b0a56e784b6c2d8fc4be1585a91297fb9e11", "title": "Analysis of the MED Oscillation Problem in BGP", "text": "The Multi Exit Discriminator (MED) attribute of the Border Gateway Protocol (BGP) is widely used to implement \u201ccold potato routing\u201d between autonomous systems. However, the use of MED in practice has led to BGP persistent oscillation. The MED oscillation problem has been described with example configurations and complicated, step-by-step evaluation of dynamic route computations performed at multiple routers. Our work presents the first rigorous analysis of the MED oscillation problem. We employ the Stable Paths Problem (SPP) formalism that allows a static analysis of the interaction of routing policies. We give a formal definition of MED Induced Routing Anomalies (MIRA) and show that, in general, they can span multiple autonomous systems. However, if we assume that the BGP configurations between ASes follows a common model based on customer/provider and peer/peer relationships, then we show that the scope of any MIRA is always contained within a single autonomous system. Contrary to widely held assumptions, we show that a MIRA can occur even in a fully meshed IBGP configuration. We also show that a stable BGP routing may actually violate the stated semantics of the MED attribute."}
{"_id": "add08bd4735305d3c6feabe5e505f3daecd8dabb", "title": "Math modeling, neuropsychology, and category learning:: Response to B. Knowlton (1999)", "text": "raising the issue of the implications of findings in neuropsychological subjects for category learning [Knowlton, B. (1999) What can neuropsychology tell us about category learning? Trends Cognit. Sci. 3, 123\u2013124]1. Knowlton and Squire2,3 reported various dissociations between categorization and recognition in normal and amnesic subjects to support their view that multiple memory systems mediate these tasks. The multiple-system view is more complex than a single-system approach is to modeling categorization and recognition. On grounds of parsimony, a question of scientific interest is whether a single-system model can suffice to account for these data. In support of this possibility, we demonstrated that an extremely simple version of a single-system exemplar model accounted in quantitative detail for the bulk of the dissociation findings reported by Knowlton and Squire, as long as plausible assumptions were made regarding parameter differences in memory ability between groups4. At the same time, we acknowledged some remaining challenges for the exemplar model, and concluded our article by stating, \u2018Whether or not a full account of Knowlton and Squire\u2019s complete set of dissociation findings is available within the exemplar approach remains a subject for future research, but we feel that we have made an important start\u2019 (Ref. 4, p. 255). We also outlined various directions in which the exemplar model could be extended to meet these challenges. In the first part of her commentary1, Knowlton reiterates these acknowledged limitations of the simple exemplar model and argues against the possible lines of extension that we suggested. Her first point pertains to the dissociation exhibited by the severe amnesic E.P., who despite showing zero ability to discriminate between old and new exemplars in a recognition task, displayed normal performance in a classification task5. We had pointed out that stimulus conditions were dramatically different across the particular recognition and classification tasks in question. In the test phase of the recognition task, presentations of the target item were rare and widely spaced between distractor items. By contrast, in the classification test, members of the target category were frequently presented. We noted that if E.P. had even a small residual N o s o f s k y a n d Z a k i \u2013 R e s p o n s e Update Comment"}
{"_id": "4ac9ef24a4b99f244cd17bf5d390a0c0e712327b", "title": "An Introduction to Boosting and Leveraging", "text": "We provide an introduction to theoretical and practical aspects of Boosting and Ensemble learning, providing a useful reference for researchers in the field of Boosting as well as for those seeking to enter this fascinating area of research. We begin with a short background concerning the necessary learning theoretical foundations of weak learners and their linear combinations. We then point out the useful connection between Boosting and the Theory of Optimization, which facilitates the understanding of Boosting and later on enables us to move on to new Boosting algorithms, applicable to a broad spectrum of problems. In order to increase the relevance of the paper to practitioners, we have added remarks, pseudo code, \u201ctricks of the trade\u201d, and algorithmic considerations where appropriate. Finally, we illustrate the usefulness of Boosting algorithms by giving an overview of some existing applications. The main ideas are illustrated on the problem of binary classification, although several extensions are discussed. 1 A Brief History of Boosting The underlying idea of Boosting is to combine simple \u201crules\u201d to form an ensemble such that the performance of the single ensemble member is improved, i.e. \u201cboosted\u201d. Let h1, h2, . . . , hT be a set of hypotheses, and consider the composite ensemble hypothesis"}
{"_id": "6a24520adafb127581ca92bae565e2844533bce2", "title": "High Power Terahertz and Millimeter-Wave Oscillator Design: A Systematic Approach", "text": "A systematic approach to designing high frequency and high power oscillators using activity condition is introduced. This method finds the best topology to achieve frequencies close to the fmax of the transistors. It also determines the maximum frequency of oscillation for a fixed circuit topology, considering the quality factor of the passive components. Using this technique, in a 0.13 \u03bcm CMOS process, we design and implement 121 GHz and 104 GHz fundamental oscillators with the output power of -3.5 dBm and -2.7 dBm, respectively. Next, we introduce a novel triple-push structure to realize 256 GHz and 482 GHz oscillators. The 256 GHz oscillator was implemented in a 0.13 \u03bcm CMOS process and the output power of -17 dBm was measured. The 482 GHz oscillator generates -7.9 dBm (0.16 mW) in a 65 nm CMOS process."}
{"_id": "5cfae79ccc499444783df70fc10b50c1ef686488", "title": "Examining micro-payments for participatory sensing data collections", "text": "The rapid adoption of mobile devices that are able to capture and transmit a wide variety of sensing modalities (media and location) has enabled a new data collection paradigm - participatory sensing. Participatory sensing initiatives organize individuals to gather sensed information using mobile devices through cooperative data collection. A major factor in the success of these data collection projects is sustained, high quality participation. However, since data capture requires a time and energy commitment from individuals, incentives are often introduced to motivate participants. In this work, we investigate the use of micro-payments as an incentive model. We define a set of metrics that can be used to evaluate the effectiveness of incentives and report on findings from a pilot study using various micro-payment schemes in a university campus sustainability initiative."}
{"_id": "ac5a2d9efc4113b280dbf19159fc61deecc52126", "title": "Clustering in the Presence of Background Noise", "text": "We address the problem of noise management in clustering algorithms. Namely, issues that arise when on top of some cluster structure the data also contains an unstructured set of points. We consider how clustering algorithms can be \u201crobustified\u201d so that they recover the cluster structure in spite of the unstructured part of the input. We introduce some quantitative measures of such robustness that take into account the strength of the embedded cluster structure as well as the mildness of the noise subset. We propose a simple and efficient method to turn any centroid-based clustering algorithm into a noiserobust one, and prove robustness guarantees for our method with respect to these measures. We also prove that more straightforward ways of \u201crobustifying\u201d clustering algorithms fail to achieve similar guarantees."}
{"_id": "e0f3f9c09775e70b71d3e6d2ec0d145add0e8b94", "title": "Analyzing Resource Behavior Using Process Mining", "text": "It is vital to use accurate models for the analysis, design, and/or control of business processes. Unfortunately, there are often important discrepancies between reality and models. In earlier work, we have shown that simulation models are often based on incorrect assumptions and one example is the speed at which people work. The \u201cYerkes-Dodson Law of Arousal\u201d suggests that a worker that is under time pressure may become more efficient and thus finish tasks faster. However, if the pressure is too high, then the worker\u2019s performance may degrade. Traditionally, it was difficult to investigate such phenomena and few analysis tools (e.g., simulation packages) support workload-dependent behavior. Fortunately, more and more activities are being recorded and modern process mining techniques provide detailed insights in the way that people really work. This paper uses a new process mining plug-in that has been added to ProM to explore the effect of workload on service times. Based on historic data and by using regression analysis, the relationship between workload and services time is investigated. This information can be used for various types of analysis and decision making, including more realistic forms of simulation."}
{"_id": "595e5b8d9e08d56dfcec464cfc2854e562cd7089", "title": "Heart rate variability and its relation to prefrontal cognitive function: the effects of training and detraining", "text": "The aim of the present study was to investigate the relationship between physical fitness, heart rate variability (HRV) and cognitive function in 37 male sailors from the Royal Norwegian Navy. All subjects participated in an 8-week training program, after which the subjects completed the initial cognitive testing (pre-test). The subjects were assigned into a detrained group (DG) and a trained group (TG) based on their application for further duty. The DG withdrew from the training program for 4 weeks after which all subjects then completed the cognitive testing again (post-test). Physical fitness, measured as maximum oxygen consumption (V\u0307O2max), resting HRV, and cognitive function, measured using a continuous performance task (CPT) and a working memory test (WMT), were recorded during the pre-test and the post-test, and the data presented as the means and standard deviations. The results showed no between-group differences in V\u0307O2max or HRV at the pre-test. The DG showed a significant decrease in V\u0307O2max from the pre- to the post-test and a lower resting HRV than the TG on the post-test. Whereas there were no between-group differences on the CPT or WMT at the pre-test, the TG had faster reaction times and more true positive responses on tests of executive function at the post-test compared to the pre-test. The DG showed faster reaction times on non-executive tasks at the post-test compared to the pre-test. The results are discussed within a neurovisceral integration framework linking parasympathetic outflow to the heart to prefrontal neural functions."}
{"_id": "46739eed6aefecd4591beed0d45b783cc0052a94", "title": "Power spectrum analysis of heart rate fluctuation: a quantitative probe of beat-to-beat cardiovascular control.", "text": "Power spectrum analysis of heart rate fluctuations provides a quantitative noninvasive means of assessing the functioning of the short-term cardiovascular control systems. We show that sympathetic and parasympathetic nervous activity make frequency-specific contributions to the heart rate power spectrum, and that renin-angiotensin system activity strongly modulates the amplitude of the spectral peak located at 0.04 hertz. Our data therefore provide evidence that the renin-angiotensin system plays a significant role in short-term cardiovascular control in the time scale of seconds to minutes."}
{"_id": "9684797643d13be86002683f60caa1cb69832e74", "title": "Vagal influence on working memory and attention.", "text": "The aim of the present study was to investigate the effect of vagal tone on performance during executive and non-executive tasks, using a working memory and a sustained attention test. Reactivity to cognitive tasks was also investigated using heart rate (HR) and heart rate variability (HRV). Fifty-three male sailors from the Royal Norwegian Navy participated in this study. Inter-beat-intervals were recorded continuously for 5 min of baseline, followed by randomized presentation of a working memory test (WMT) based on Baddeley and Hitch's research (1974) and a continuous performance test (CPT). The session ended with a 5-min recovery period. High HRV and low HRV groups were formed based on a median split of the root mean squared successive differences during baseline. The results showed that the high HRV group showed more correct responses than the low HRV group on the WMT. Furthermore, the high HRV group showed faster mean reaction time (mRT), more correct responses and less error, than the low HRV group on the CPT. Follow-up analysis revealed that this was evident only for components of the CPT where executive functions were involved. The analyses of reactivity showed a suppression of HRV and an increase in HR during presentation of cognitive tasks compared to recovery. This was evident for both groups. The present results indicated that high HRV was associated with better performance on tasks involving executive function."}
{"_id": "fd829cb98deba075f17db467d49a0391208c07f1", "title": "0.9\u201310GHz low noise amplifier with capacitive cross coupling", "text": "This paper presents a 0.9GHz-10GHz Ultra Wideband Low Noise Amplifier (LNA) designed for software-defined-radios (SDR). Capacitive cross coupling (CCC) is used at both input stage and cascade stage for wideband input impedance matching and small noise figure (NF). A combination of inductor peaking load and series inductor between the cascade stages of LNA is employed for a flat gain and enhanced input matching. This LNA is implemented in a standard 0.18\u00b5m CMOS technology. For 0.9GHz-10GHz ultra wideband applications, it achieves a maximum gain of 14dB, 3.5dB\u223c4.1dB NF and +1.8dBm IIP3. It consumes only7.6mW from a 1.8V power supply."}
{"_id": "518de7c7ae347fcd7ee466300c1a7853eca0538a", "title": "New Direct Torque Control Scheme for BLDC Motor Drives Suitable for EV Applications", "text": "This paper proposes a simple and effective scheme for Direct Torque Control (DTC) of Brushless DC Motor. Unlike the traditional DTC where both the torque and flux must to be employed in order to deliver the switching table for the inverter, the proposed DTC scheme utilizes only the torque information. By considering the particular operating principle of the motor, the instantaneous torque can be estimated only by using back EMF ratio in an unified differential equation. The simulation results have shown the good performance of the proposed scheme. It can be seen thus a very suitable technique for EV drives which need fast and precise torque response."}
{"_id": "1719aa40d501613c06f3c9a411e7bb928fb552b8", "title": "Using the Facebook group as a learning management system: An exploratory study", "text": "Facebook is a popular social networking site. It, like many other new technologies, has potential for teaching and learning because of its unique built-in functions that offer pedagogical, social and technological affordances. In this study, the Facebook group was used as a learning management system (LMS) in two courses for putting up announcements, sharing resources, organizing weekly tutorials and conducting online discussions at a teacher education institute in Singapore. This study explores using the Facebook group as an LMS and the students\u2019 perceptions of using it in their courses. Results showed that students were basically satisfied with the affordances of Facebook as the fundamental functions of an LMS could be easily implemented in the Facebook group. However, using the Facebook group as an LMS has certain limitations. It did not support other format files to be uploaded directly, and the discussion was not organized in a threaded structure. Also, the students did not feel safe and comfortable as their privacy might be revealed. Constraints of using the Facebook group as an LMS, implications for practice and limitations of this study are discussed. Practitioner notes What is already known about this topic \u2022 Facebook has been popularly used by tertiary students, but many students do not want their teachers to be friends on Facebook \u2022 Teacher\u2019s self-disclosure on Facebook can promote classroom atmosphere, teacher\u2019s credibility and student\u2013teacher relationship \u2022 Commercial learning management systems (LMSs) have limitations What this paper adds \u2022 The Facebook group can be used an LMS as it has certain pedagogical, social and technological affordances \u2022 Students are satisfied with the way of using the Facebook group as an LMS British Journal of Educational Technology (2011) doi:10.1111/j.1467-8535.2011.01195.x \u00a9 2011 The Authors. British Journal of Educational Technology \u00a9 2011 Bera. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. \u2022 Younger students are more acceptable with the idea of using the Facebook group as an LMS \u2022 Using the Facebook group as an LMS has limitations: it does not support other format files; its discussions are not listed in threads; and it is not perceived as a safe environment Implications for practice and/or policy \u2022 The Facebook group can be used an LMS substitute or supplement \u2022 Third-party applications are needed to extend the capability of the Facebook group as an LMS \u2022 Using Facebook seems to be more appropriate for young learners than adults \u2022 Teachers do not have to be students\u2019 friends on Facebook. Introduction Social networking sites (SNSs) are virtual spaces where people of similar interest gather to communicate, share photos and discuss ideas with one another (Boyd & Ellison, 2008; Raacke & Bonds-Raacke, 2008). In recent years, Facebook has become one of the most prominent SNSs. Like any new technology, Facebook seems to offer great potentials for teaching and learning as many students are using Facebook daily. One possible way of using Facebook for teaching and learning is to use its group as an LMS. Research shows that using LMSs possesses numerous benefits for teaching and learning. 
It enables faculty to shift the focus from content-based learning to process-based learning (Vogel & Klassen, 2001) and helps to \u201cfacilitate change from passive to active learning\u201d (Herse & Lee, 2005, p. 51). Using LMSs also has the potential to increase student enrollment (Nunes & McPherson, 2003) and to promote interaction between students and faculty members (Lonn & Teasley, 2009; West, Waddoups & Graham, 2007). Using existing commercial LMSs like Blackboard, however, often has practical constraints (Sanchez-Franco, 2010). For example, LMSs tend to be expensive and that not every school can afford to purchase and maintain them over the long run. Trainee teachers cannot access certain features such as creating a course, enrolling students and setting up student groups as these functions are usually open to instructors or administrators only. The resources in the present LMS are often no longer accessible to trainee teachers after their graduation. Also, the LMS used at school in the future may be different from the one being currently used. They have to shift to a new LMS, and research shows learning a new system is often a painful experience (Black, Beck, Dawson, Jinks & DiPietro, 2007). If the Facebook group can be used as an alternative LMS, it would help to overcome some of the abovementioned constraints. For instance, it would enable a teacher to easily create a new course and enroll students in person if the class size is small. As presented in the following literature review section, many research studies have investigated the usage of Facebook, the effect of teacher\u2019s self-disclosure via Facebook on teacher\u2013student relationship improvement and the academic performance of Facebook users. However, few studies have examined if and how Facebook can be effectively used as an LMS. In this exploratory study, the Facebook group was used as an LMS to put up announcements, share resources, organize weekly tutorial sessions and conduct online discussions. The purpose of this paper is to describe how the Facebook group was used as an LMS in the study and to report students\u2019 perceptions on it. Literature review on Facebook"}
{"_id": "ac3a313bff326f666059d797f2f505881eff3581", "title": "An empirical investigation of student adoption model toward mobile e-textbook: UTAUT2 and TTF model", "text": "Faculty of Economics in Universitas Atma Jaya Yogyakarta (UAJY) has replaced printed textbooks with e-textbooks for its academic activities since 2015. These e-textbooks can be accessed via iPad which is given to each student. Hence, the objective of this study is to test the proposed research model based on integrated UTAUT2 and TTF. Questionnaires were distributed to 326 students in 10 classes. Only junior and sophomore students were eligible to fulfill the questionnaires. The result of the study shows that performance expectancy, effort expectancy, social influence, facilitating condition, and habit have a direct significant relationship on behavioral intention to use mobile e-textbook. The result also shows that both Task technology and technology characteristic positively affect task technology fit. There is also a direct significant relationship between Task technology fit and performance expectancy. We suggest that further studies include experimentation to investigate performance expectancy of mobile e-book. 2) More evaluation should be held in the future in more mature environment. The finding of this study can help policymakers in the Faculty of Economics UAJY to evaluate the policy and formulate better strategy. In addition, these findings can be used by others faculties and universities as mobile e-textbook adoption case reference."}
{"_id": "871ddd72744812b4eb552a4e9a72d93e77fd87b4", "title": "Ethernet-Based Real-Time and Industrial Communications", "text": "Despite early attempts to use Ethernet in the industrial context, only recently has it attracted a lot of attention as a support for industrial communication. A number of vendors are offering industrial communication products based on Ethernet and TCP/IP as a means to interconnect field devices to the first level of automation. Others restrict their offer to communication between automation devices such as programmable logic controllers and provide integration means to existing fieldbuses. This paper first details the requirements that an industrial network has to fulfill. It then shows how Ethernet has been enhanced to comply with the real-time requirements in particular in the industrial context. Finally, we show how the requirements that cannot be fulfilled at layer 2 of the OSI model can be addressed in the higher layers adding functionality to existing standard protocols."}
{"_id": "2f348a2ad3ba390ee178d400be0f09a0479ae17b", "title": "Gabor-based kernel PCA with fractional power polynomial models for face recognition", "text": "This paper presents a novel Gabor-based kernel principal component analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels."}
{"_id": "7ca07ca599d3811eb1d8f9c302a203c6ead0e7fe", "title": "Marker-controlled watershed segmentation of nuclei in H&E stained breast cancer biopsy images", "text": "In this paper we present an unsupervised automatic method for segmentation of nuclei in H&E stained breast cancer biopsy images. Colour deconvolution and morphological operations are used to preprocess the images in order to remove irrelevant structures. Candidate nuclei locations, obtained with the fast radial symmetry transform, act as markers for a marker-controlled watershed segmentation. Watershed regions that are unlikely to represent nuclei are removed in the postprocessing stage. The proposed algorithm is evaluated on a number of images that are representative of the diversity in pathology in this type of tissue. The method shows good performance in terms of the number of segmented nuclei and segmentation accuracy."}
{"_id": "827a8ab345b673ebc0896296272333d4f88bd539", "title": "Empathy decline and its reasons: a systematic review of studies with medical students and residents.", "text": "PURPOSE\nEmpathy is a key element of patient-physician communication; it is relevant to and positively influences patients' health. The authors systematically reviewed the literature to investigate changes in trainee empathy and reasons for those changes during medical school and residency.\n\n\nMETHOD\nThe authors conducted a systematic search of studies concerning trainee empathy published from January 1990 to January 2010, using manual methods and the PubMed, EMBASE, and PsycINFO databases. They independently reviewed and selected quantitative and qualitative studies for inclusion. Intervention studies, those that evaluated psychometric properties of self-assessment tools, and those with a sample size <30 were excluded.\n\n\nRESULTS\nEighteen studies met the inclusion criteria: 11 on medical students and 7 on residents. Three longitudinal and six cross-sectional studies of medical students demonstrated a significant decrease in empathy during medical school; one cross-sectional study found a tendency toward a decrease, and another suggested stable scores. The five longitudinal and two cross-sectional studies of residents showed a decrease in empathy during residency. The studies pointed to the clinical practice phase of training and the distress produced by aspects of the \"hidden,\" \"formal,\" and \"informal\" curricula as main reasons for empathy decline.\n\n\nCONCLUSIONS\nThe results of the reviewed studies, especially those with longitudinal data, suggest that empathy decline during medical school and residency compromises striving toward professionalism and may threaten health care quality. Theory-based investigations of the factors that contribute to empathy decline among trainees and improvement of the validity of self-assessment methods are necessary for further research."}
{"_id": "52ba56baa6f72cbb6c76529f3dc56ffc6c735558", "title": "Multi-Cell Multiuser Massive MIMO Networks: User Capacity Analysis and Pilot Design", "text": "We propose a novel pilot sequence design to mitigate pilot contamination in multi-cell multiuser massive multiple-input multiple-output networks. Our proposed design generates pilot sequences in the multi-cell network and devises power allocation at base stations (BSs) for downlink transmission. The pilot sequences together with the power allocation ensure that the user capacity of the network is achieved and the pre-defined signal-to-interference-plus-noise ratio (SINR) requirements of all users are met. To realize our design, we first derive new closed-form expressions for the user capacity and the user capacity region. Built upon these expressions, we then develop a new algorithm to obtain the required pilot sequences and power allocation. We further determine the minimum number of antennas required at the BSs to achieve certain SINR requirements of all users. The numerical results are presented to corroborate our analysis and to examine the impact of key parameters, such as the pilot sequence length and the total number of users, on the network performance. A pivotal conclusion is reached that our design achieves a larger user capacity region than the existing designs and needs less antennas at the BS to fulfill the pre-defined SINR requirements of all users in the network than the existing designs."}
{"_id": "9574732bdab1bea8fed896ae53641916db642156", "title": "repository copy of Transformational Leadership and Burnout : The Role of Thriving and Followers ' Openness to Experience", "text": "eprints@whiterose.ac.uk https://eprints.whiterose.ac.uk/ Reuse Unless indicated otherwise, fulltext items are protected by copyright with all rights reserved. The copyright exception in section 29 of the Copyright, Designs and Patents Act 1988 allows the making of a single copy solely for the purpose of non-commercial research or private study within the limits of fair dealing. The publisher or other rights-holder may allow further reproduction and re-use of this version refer to the White Rose Research Online record for this item. Where records identify the publisher as the copyright holder, users can verify any specific terms of use on the publisher\u2019s website."}
{"_id": "5c9737f1448fac37483a94a731778b1127af23c5", "title": "Adaptive scale based entropy-like estimator for robust fitting", "text": "In this paper, we propose a novel robust estimator, called ASEE (Adaptive Scale based Entropy-like Estimator) which minimizes the entropy of inliers. This estimator is based on IKOSE (Iterative Kth Ordered Scale Estimator) and LEL (Least Entropy-Like Estimator). Unlike LEL, ASEE only considers inliers' entropy while excluding outliers, which makes it very robust in parametric model estimation. Compared with other robust estimators, ASEE is simple and computationally efficient. From the experiments on both synthetic and real-image data, ASEE is more robust than several state-of-the-art robust estimators, especially in handling extreme outliers."}
{"_id": "14b0b0e3cfef7a6b22fc5c3753d797cde9787a7d", "title": "Home Bias in Online Investments: An Empirical Study of an Online Crowdfunding Market", "text": "An extensive literature in economics and finance has documented \" home bias, \" the tendency that transactions are more likely to occur between parties in the same country or state, rather than outside. Can the Internet help overcome this spatial divide, especially in the context of online financial investments? We address this question in the context of a large online crowd funding marketplace. We analyze detailed transaction data over an extended period of time under typical market conditions, as well as those from a natural experiment, a period in which investors were restricted to only one state due to regulations. We further employ a quasi-experimental design by tracking borrowers who moved across state boundaries, and study how investors' behaviors change when those borrowers moved. Home bias appears to be a robust phenomenon under all three scenarios. This finding has important implications not just for the home bias literature, but also more broadly for the growing research and policy interest on Internet-based crowd funding, especially as a new channel for entrepreneurial financing."}
{"_id": "0b07f84c22ce01309981a02c23d5cd1770cad48b", "title": "Query optimization techniques for partitioned tables", "text": "Table partitioning splits a table into smaller parts that can be accessed, stored, and maintained independent of one another. From their traditional use in improving query performance, partitioning strategies have evolved into a powerful mechanism to improve the overall manageability of database systems. Table partitioning simplifies administrative tasks like data loading, removal, backup, statistics maintenance, and storage provisioning. Query language extensions now enable applications and user queries to specify how their results should be partitioned for further use. However, query optimization techniques have not kept pace with the rapid advances in usage and user control of table partitioning. We address this gap by developing new techniques to generate efficient plans for SQL queries involving multiway joins over partitioned tables. Our techniques are designed for easy incorporation into bottom-up query optimizers that are in wide use today. We have prototyped these techniques in the PostgreSQL optimizer. An extensive evaluation shows that our partition-aware optimization techniques, with low optimization overhead, generate plans that can be an order of magnitude better than plans produced by current optimizers."}
{"_id": "1b5b518fe9ebf59520f57efb880a6f93e3b20c0e", "title": "Balancing minimum spanning trees and shortest-path trees", "text": "We give a simple algorithm to find a spanning tree that simultaneously approximates a shortest-path tree and a minimum spanning tree. The algorithm provides a continuous tradeoff: given the two trees and a\u03b3>0, the algorithm returns a spanning tree in which the distance between any vertex and the root of the shortest-path tree is at most 1+\u221a2\u03b3 times the shortest-path distance, and yet the total weight of the tree is at most 1+\u221a2/\u03b3 times the weight of a minimum spanning tree. Our algorithm runs in linear time and obtains the best-possible tradeoff. It can be implemented on a CREW PRAM to run a logarithmic time using one processor per vertex."}
{"_id": "4e29cef227595acdeb09a611f9728406f30baa1d", "title": "The Anatomic Basis of Midfacial Aging", "text": "Facial aging is a multifactorial, three-dimensional (3D) process with anatomic, biochemical, and genetic correlates. Many exogenous and endogenous factors can signifi cantly impact the perceived age of an individual. Solar exposure [ 1\u2013 3 ] , cigarette smoking [ 1, 2, 4, 5 ] , medications [ 1 ] , alcohol use [ 1 ] , body mass index [ 2 ] , and endocrinologic status [ 1, 6, 7 ] have all been implicated as factors that accelerate cutaneous and subcutaneous aging. These factors act in concert to create a variegated spectrum of facial morphologic aging changes, and thus, Mme. Chanel was partially correct in her statement from the last century. Most of the aging changes that occur in the midface, however, occur predictably in the majority of individuals. Stigmata of midfacial aging typically appear by the middle of the fourth decade. Degenerative changes occur in nearly every anatomic component of the midface and include cranial bone remodeling, tissue descent secondary to gravity, fat atrophy, and deterioration in the condition and appearance of the skin. The lower eyelids and adjacent tissues are often the initial areas of patient concern. This chapter reviews the morphologic changes that occur in the aging midface and discusses the pathogenesis of midfacial aging based upon its anatomic components. An integrated theory of facial aging will be presented. A. E. Wulc , MD, FACS ( ) Associate Clinical Professor of Ophthalmology , University of Pennsylvania"}
{"_id": "615f5506aa64d5496ebd2019f7a5661c8935fc81", "title": "DGASensor: Fast Detection for DGA-Based Malwares", "text": "DNS protocol has been used by many malwares for command-and-control (C&C). To improve the resiliency of C&C communication, Domain Generation Algorithm (DGA) has been utilized by recent malwares such as Locky, Conficker and Zeus. Many detection systems have been introduced for DGA-based botnets detection. However, such botnets detection approaches suffer from several limitations, for instance, requiring a group of DGA domains, period behaviors, the presence of multiple bots, and so forth. It is very hard for them to detect an individually running DGA-based malware which leave only a few traces. In this paper, we develop DGASensor to detect DGA-based malwares immediately by identifying a single DGA domain using lexical evidence. First, DGASensor automatically analyzes the lexical patterns of the most popular domains listed in Alexa top 100,000, and then extracts two templates, namely distribution template and structure template. Second, the above two templates, pronounceable attributes, and some frequently used properties like entropy and length, are used to extract features from a single domain. Third, we train our classifier using a non-DGA dataset consisting of domains obtained from Alexa rank and a DGA dataset generated by known DGAs. At last, we provide a short word filter to decrease the false positive rate. We implement a prototype system and evaluate it using the above training dataset with 10-fold cross validation. Moreover, a set of real world DNS traffic collected from a recursive DNS server is used to measure real world performance of our system. The results show that DGASensor detects DGA domains with accuracy 93% in our training dataset and is able to identify a variety of malwares in the real world dataset with an extremely high processing capability."}
{"_id": "588e95a290df5ec9a1296e11fc71db6c7a95300d", "title": "On the effectiveness of address-space randomization", "text": "Address-space randomization is a technique used to fortify systems against buffer overflow attacks. The idea is to introduce artificial diversity by randomizing the memory location of certain system components. This mechanism is available for both Linux (via PaX ASLR) and OpenBSD. We study the effectiveness of address-space randomization and find that its utility on 32-bit architectures is limited by the number of bits available for address randomization. In particular, we demonstrate a derandomization attack that will convert any standard buffer-overflow exploit into an exploit that works against systems protected by address-space randomization. The resulting exploit is as effective as the original exploit, although it takes a little longer to compromise a target machine: on average 216 seconds to compromise Apache running on a Linux PaX ASLR system. The attack does not require running code on the stack.\n We also explore various ways of strengthening address-space randomization and point out weaknesses in each. Surprisingly, increasing the frequency of re-randomizations adds at most 1 bit of security. Furthermore, compile-time randomization appears to be more effective than runtime randomization. We conclude that, on 32-bit architectures, the only benefit of PaX-like address-space randomization is a small slowdown in worm propagation speed. The cost of randomization is extra complexity in system support."}
{"_id": "ab491c2bd542c9fb50171f316376a0a8ac75b732", "title": "Cool English: a Grammatical Error Correction System Based on Large Learner Corpora", "text": "This paper presents a grammatical error correction (GEC) system that provides corrective feedback for essays. We apply the neural sequence-to-sequence model, which is frequently used in machine translation and text summarization, to this GEC task. The model is trained on EF-Cambridge Open Language Database (EFCAMDAT), a large learner corpus annotated with grammatical errors and corrections. Evaluation shows that our system achieves competitive performance on a number of publicly available testsets."}
{"_id": "5fd2e1b033eca1c3f3d9da3dc0bcb801ec054bde", "title": "Uses and Gratification Theory \u2013 Why Adolescents Use Facebook ?", "text": "Due to a dynamic development of the Web 2.0 and new trends in the social media field that change on a daily basis, contemporary media research is shifting its focus to a greater extent on media users, their motivation and behavior in using social network sites in order to explain the extreme popularity of Facebook, Twitter, WhatsApp and other similar SNSs and mobile chat applications among the young. In this paper we wanted to explore the benefits of Facebook use among adolescents as well as which of their needs are gratified thereat. As the theoretical background we used the uses and gratification theory due to its user oriented approach. Furthermore, we wanted to test whether the uses and gratification concept is adequate for analyzing the motivation and behavior of SNSs users as suggested by some previous research. The survey comprising 431 adolescent Facebook users was conducted from October to December 2013 in the City of Zagreb. The results have shown that most adolescents use Facebook for socializing and communicating with their friends, discussing school activities, setting up meetings and dates with friends as well as"}
{"_id": "26d673f140807942313545489b38241c1f0401d0", "title": "Analysis of WEKA Data Mining Algorithm REPTree, Simple Cart and RandomTree for Classification of Indian News", "text": "The amount of data in the world and in our lives seems everincreasing and there\u2019s no end to it. The Weka workbench is an organized collection of state-of-the-art machine learning algorithms and data pre-processing tools. The basic way of interacting with these methods is by invoking them from the command line. However, convenient interactive graphical user interfaces are provided for data exploration, for setting up largescale experiments on distributed computing platforms, and for designing configurations for streamed data processing. These interfaces constitute an advanced environment for experimental data mining. Classification is an important data mining technique with broad applications. It classifies data of various kinds. This paper has been carried out to make a performance evaluation of REPTree, Simple Cart and RandomTree classification algorithm. The paper sets out to make comparative evaluation of classifiers REPTree, Simple Cart and RandomTree in the context of dataset of Indian news to maximize true positive rate and minimize false positive rate. For processing Weka API were used. The results in the paper on dataset of Indian news also show that the efficiency and accuracy of RandomTree is good than REPTree, and Simple Cart. Keywords\u2014 Simple Cart, RandomTree, REPTree, Weka, WWW"}
{"_id": "6e633b41d93051375ef9135102d54fa097dc8cf8", "title": "Classification and Regression by randomForest", "text": "Recently there has been a lot of interest in \u201censemble learning\u201d \u2014 methods that generate many classifiers and aggregate their results. Two well-known methods are boosting (see, e.g., Shapire et al., 1998) and bagging Breiman (1996) of classification trees. In boosting, successive trees give extra weight to points incorrectly predicted by earlier predictors. In the end, a weighted vote is taken for prediction. In bagging, successive trees do not depend on earlier trees \u2014 each is independently constructed using a bootstrap sample of the data set. In the end, a simple majority vote is taken for prediction. Breiman (2001) proposed random forests, which add an additional layer of randomness to bagging. In addition to constructing each tree using a different bootstrap sample of the data, random forests change how the classification or regression trees are constructed. In standard trees, each node is split using the best split among all variables. In a random forest, each node is split using the best among a subset of predictors randomly chosen at that node. This somewhat counterintuitive strategy turns out to perform very well compared to many other classifiers, including discriminant analysis, support vector machines and neural networks, and is robust against overfitting (Breiman, 2001). In addition, it is very user-friendly in the sense that it has only two parameters (the number of variables in the random subset at each node and the number of trees in the forest), and is usually not very sensitive to their values. The randomForest package provides an R interface to the Fortran programs by Breiman and Cutler (available at http://www.stat.berkeley.edu/ users/breiman/). This article provides a brief introduction to the usage and features of the R functions."}
{"_id": "8cfe24108b7f73aa229be78f9108e752e8210c36", "title": "FOR PREDICTING STUDENT PERFORMANCE", "text": "Although data mining has been successfully implemented in the business world for some time now, its use in higher education is still relatively new, i.e. its use is intended for identification and extraction of new and potentially valuable knowledge from the data. Using data mining the aim was to develop a model which can derive the conclusion on students' academic success. Different methods and techniques of data mining were compared during the prediction of students' success, applying the data collected from the surveys conducted during the summer semester at the University of Tuzla, the Faculty of Economics, academic year 2010-2011, among first year students and the data taken during the enrollment. The success was evaluated with the passing grade at the exam. The impact of students' socio-demographic variables, achieved results from high school and from the entrance exam, and attitudes towards studying which can have an affect on success, were all investigated. In future investigations, with identifying and evaulating variables associated with process of studying, and with the sample increase, it would be possible to produce a model which would stand as a foundation for the development of decision support system in higher education."}
{"_id": "cc5c84c1c876092e6506040cde7d2a5b9e9065ff", "title": "A comparative analysis of techniques for predicting academic performance", "text": "This paper compares the accuracy of decision tree and Bayesian network algorithms for predicting the academic performance of undergraduate and postgraduate students at two very different academic institutes: Can Tho University (CTU), a large national university in Viet Nam; and the Asian Institute of Technology (AIT), a small international postgraduate institute in Thailand that draws students from 86 different countries. Although the diversity of these two student populations is very different, the data-mining tools were able to achieve similar levels of accuracy for predicting student performance: 73/71% for {fail, fair, good, very good} and 94/93% for {fail, pass} at the CTU/AIT respectively. These predictions are most useful for identifying and assisting failing students at CTU (64% accurate), and for selecting very good students for scholarships at the AIT (82% accurate). In this analysis, the decision tree was consistently 3-12% more accurate than the Bayesian network. The results of these case studies give insight into techniques for accurately predicting student performance, compare the accuracy of data mining algorithms, and demonstrate the maturity of open source tools."}
{"_id": "f2dcd6e582c946af491b00ae9c52cc9f6b05cdfc", "title": "Driver Identification: a Time Series Classification Approach", "text": "In the last years, researchers and auto vehicle developers are focusing their attention on the topic of driver identification encouraged by an increasing number of always more sophisticated vehicle sensors able to extract information about the driver. Indeed, driver identification can be very useful to customize and improve driver experience, increase the safety on the road and reduce the global environmental problems. In this work, we propose and explore a set of features (extracted from a car monitoring system) identifying the driver basing on his/her driving behavior. The proposed behavioral features are exploited using a time series classification approach and a multi-layer perceptron (MLP) network is used to evaluate the ability of the proposed features to identify the vehicle driver. The proposed features are tested on a real data set composed of totally 66 observations (each observation consists of a given person driving a given car on a predefined path). The obtained results show that the proposed features have effective driver identification ability."}
{"_id": "ba153c968a142d90f6286e69a5f8bdf7eb9331ef", "title": "Color exploitation in hog-based traffic sign detection", "text": "We study traffic sign detection on a challenging large-scale real-world dataset of panoramic images. The core processing is based on the Histogram of Oriented Gradients (HOG) algorithm which is extended by incorporating color information in the feature vector. The choice of the color space has a large influence on the performance, where we have found that the CIELab and YCbCr color spaces give the best results. The use of color significantly improves the detection performance. We compare the performance of a specific and HOG algorithm, and show that HOG outperforms the specific algorithm by up to tens of percents in most cases. In addition, we propose a new iterative SVM training paradigm to deal with the large variation in background appearance. This reduces memory consumption and increases utilization of background information."}
{"_id": "66869075d20858a2e9af144b2749a055c6b03177", "title": "An Initial Investigation of Test Driven Development in Industry", "text": "Test Driven Development (TDD) is a software development practice in which unit test cases are incrementally written prior to code implementation. In our research, we ran a set of structured experiments with 24 professional pair programmers. One group developed code using TDD while the other a waterfall-like approach. Both groups developed a small Java program. We found that the TDD developers produced higher quality code, which passed 18% more functional black box test cases. However, TDD developer pairs took 16% more time for development. A moderate correlation between time spent and the resulting quality was established upon analysis. It is conjectured that the resulting high quality of code written using the TDD practice may be due to the granularity of TDD, which may encourage more frequent and tighter verification and validation. Lastly, the programmers which followed a waterfall-like process often did not write the required automated test cases after completing their code, which might be indicative of the tendency among practitioners toward inadequate testing. This observation supports that TDD has the potential of increasing the level of testing in the industry as testing as an integral part of code development."}
{"_id": "f5837cb0edacf304860cb14ad141b017e2f60fbe", "title": "Using Nonfinancial Measures to Assess Fraud Risk", "text": "This study examines whether auditors can effectively use nonfinancial measures (NFMs) to assess the reasonableness of financial performance and, thereby, help detect financial statement fraud (hereafter, fraud). If auditors or other interested parties (e.g., directors, lenders, investors, or regulators) can identify NFMs (e.g., facilities growth) that are correlated with financial measures (e.g., revenue growth), inconsistent patterns between the NFMs and financial measures can be used to detect firms with high fraud risk. We find that the difference between financial and nonfinancial performance is significantly greater for firms that committed fraud than for their nonfraud competitors. We also find that this difference is a significant fraud indicator when included in a model containing variables that have previously been linked to the likelihood of fraud. Overall, our results provide empirical evidence suggesting that NFMs can be effectively used to assess fraud risk. \u2217North Carolina State University; \u2020George Mason University; \u2021Brigham Young University. We especially thank Abbie Smith (editor) and the referee for their invaluable guidance. We are grateful for helpful comments from Mark Beasley, Brian Cloyd, Andrew Felo, Glen Gray, Dana Hermanson, Diane Matson, Kevin Melendrez, Mitch Oler, Mark Peecher, and workshop participants at Indiana University, Virginia Tech University, Lehigh University, Utah State University, the 2005 Brigham Young University Research Symposium, the 2006 AAA Annual Meeting, the 2006 International Symposium on Audit Research, the 2007 Auditing Section Midyear Conference, the 2008 Conference on Corporate Governance and Fraud Prevention at George Mason University, the 2008 Corporate Governance and Ethics Symposium at George Mason University, the North Carolina Association of Government Accountants, the North Carolina Office of the State Controller, and First Citizens Bank. Finally, we appreciate the many students who gathered data for our sample of firms. This study was funded by research grants from the Institute of Internal Auditors Research Foundation and the Financial Industry Regulatory Authority Investor Education Foundation."}
{"_id": "49f938d2f66563ba7d016bdd47433d0be743669a", "title": "Edge Detection in Digital Images Using Fuzzy Logic Technique", "text": "The fuzzy technique is an operator introduced in order to simulate at a mathematical level the compensatory behavior in process of decision making or subjective evaluation. The following paper introduces such operators on hand of computer vision application. In this paper a novel method based on fuzzy logic reasoning strategy is proposed for edge detection in digital images without determining the threshold value. The proposed approach begins by segmenting the images into regions using floating 3x3 binary matrix. The edge pixels are mapped to a range of values distinct from each other. The robustness of the proposed method results for different captured images are compared to those obtained with the linear Sobel operator. It is gave a permanent effect in the lines smoothness and straightness for the straight lines and good roundness for the curved lines. In the same time the corners get sharper and can be defined easily. Keywords\u2014Fuzzy logic, Edge detection, Image processing, computer vision, Mechanical parts, Measurement."}
{"_id": "739f54f1cda6021a9dd009e274a5b9de4334f103", "title": "Factors Affecting the Adoption of Electronic Word-of-Mouth in the Tourism Industry", "text": "This paper proposes a theoretical framework to explain the factors that influence consumer adoption of electronic word-of-mouth (eWOM) in the decision to purchase travel-related services. Adoption of eWOM is posited to be influenced by the perceived usefulness of eWOM. The perceived usefulness of eWOM, in turn, is expected to be influenced by the nature of eWOM characterized by message basis, positivity, valence, elaborateness, and timeliness; eWOM quantity; source credibility in the form of message forum type, reviewer identity disclosure, expertise, and reputation; and consumers\u2019 prior knowledge of the services being considered. Managerial implications of the framework are also discussed."}
{"_id": "e34f35fde6659732ccb0f491b5b94abbdc74d2e0", "title": "Moodle's Ontology Development from UML for Social Learning Network Analysis", "text": "The online learning called e-learning is a new learning path that offers to learners to study at their own pace and at the moments that suit them. It is in this perspective that the semantic web has known its emergence in the field of e-learning to offer platforms content more personalized and more adapted to student's and teacher's needs. Since Moodle is the most popular e-learning platform, we propose in this paper to build its OWL ontology by exploring the representative data that we collected from its UML class diagram. The choice of UML class diagram as a basis for data collection for the development of the ontology is justified by the fact that the transition from UML to OWL ontology brings ontology development process closer to the wider software engineering population. The built ontology brings also great benefit in the field of the Social Learning Network Analysis. Because it gives the opportunity to study the behavior of the platform users by giving meaning to their relationships instead of modelling them only as knots and edges."}
{"_id": "aa91d63bf15bac3924387b93d96fb6adf7e27fc3", "title": "Dynamic Modeling and Adaptive Control of a Cable-suspended Robot", "text": "The level adjustment of cable-driven parallel mechanism is challenging due to the difficulty in obtaining an accurate mathematical model and the fact that different sources of uncertainties exist in the adjustment process. This paper presents application of an adaptive control scheme for a cable suspended robot to handle uncertainties in mass and moments of inertia of end effecter. In section II dynamic equations of motion are derived and the constraints are utilized to obtain the complete required equations. In section III inverse dynamic controller and adaptive controller are presented. Simulations results presented in section IV show the effectiveness of the adaptive controller when there is no enough knowledge about system parameters."}
{"_id": "7695f9e1137ff02bda64196b16e76516559b1219", "title": "E-Learning\u2019s Impact on the Academic Performance of Student-Teachers: A Curriculum Lens", "text": "This study was planned to explore the impact of eLearning on the academic performance of student-teachers. The researchers' concern with e-Learning was sanctioned by the need for a teaching and learning strategy that can help institutions of learning address their problems and improve on their outcome. In this respect, the researchers conducted an experiment to find out whether student-teachers taught using the method of e-Learning (blended learning) perform better than studentteachers taught using the traditional method of teaching and learning. Findings offers new evidence that e-Learning has a significant influence on the performance of students as student-teachers taught using eLearning consistently performed better than student-teachers taught using the traditional method. Based on this result, recommendations are made to training institutions to embrace ICTs and become more flexible by adopting learning approaches that are dynamic and multidimensional as problems in education are becoming more complex."}
{"_id": "8498599366ce2bf12952049d9e13773b8e620b13", "title": "The state of the art in semantic relatedness: a framework for comparison", "text": "Semantic relatedness (SR) is a form of measurement that quantitatively identifies the relationship between two words or concepts based on the similarity or closeness of their meaning. In the recent years, there have been noteworthy efforts to compute SR between pairs of words or concepts by exploiting various knowledge resources such as linguistically structured (e.g. WordNet) and collaboratively developed knowledge bases (e.g. Wikipedia), among others. The existing approaches rely on different methods for utilizing these knowledge resources, for instance, methods that depend on the path between two words, or a vector representation of the word descriptions. The purpose of this paper is to review and present the state of the art in SR research through a hierarchical framework. The dimensions of the proposed framework cover three main aspects of SR approaches including the resources they rely on, the computational methods applied on the resources for developing a relatedness metric, and the evaluation models that are used for measuring their effectiveness. We have selected 14 representative SR approaches to be analyzed using our framework. We compare and critically review each of them through the dimensions of our framework, thus, identifying strengths and weaknesses of each approach. In addition, we provide guidelines for researchers and practitioners on how to select the most relevant SR method for their purpose. Finally, based on the comparative analysis of the reviewed relatedness measures, we identify existing challenges and potentially valuable future research directions in this domain."}
{"_id": "9a37a40b8d525f079dbec2dfd748cefa859a9d33", "title": "Leakproofing the Singularity Artificial Intelligence Confinement Problem", "text": "This paper attempts to formalize and to address the \u2018leakproofing\u2019 of the Singularity problem presented by David Chalmers. The paper begins with the definition of the Artificial Intelligence Confinement Problem. After analysis of existing solutions and their shortcomings, a protocol is proposed aimed at making a more secure confinement environment which might delay potential negative effect from the technological singularity while allowing humanity to benefit from the superintelligence."}
{"_id": "37900f1921e5e904bb0ddc3eba2377b46475c97f", "title": "Density Based Smart Traffic System with Real Time Data Analysis Using IoT", "text": "This paper aims to develop a convenient traffic system that allows a smooth movement of cars which will help build a smarter city. The traffic system currently implemented in many areas is not based on the density of traffic and every road is allotted a preset time. This results in traffic congestion due to large red-light delays and timings allotted for roads in a city that should vary during peak on-off hours, but in reality don't. These traditional systems are not adaptable and fail to support traffic during an unexpected situation or during an accident, and this makes them inefficient. In order to calculate the density of traffic various sensors can be used, each having their merits and demerits. In our proposed system Ultrasound Sensors are used along with Image Processing(using live feed from a camera) that works on a Raspberry Pi platform and calculates the vehicle density and dynamically allots time for different levels of traffic. This in turn allows better signal control and effective management of traffic thereby reducing the probability of a collision. By using Internet Of Things(IoT) real time data from the system can be collected, stored and managed on a cloud. This data can be used to interpret the signal duration in-case any of the sensing equipment fail, and also for future analysis."}
{"_id": "9d0f09e343ebc9d5e896528273b79a1f13aa5c07", "title": "Efficient iris segmentation method in unconstrained environments", "text": ""}
{"_id": "de270dcf3a03059af50f6da3a89ff1ba5631eb04", "title": "Does the technology acceptance model predict actual use? A systematic literature review", "text": "0950-5849/$ see front matter 2009 Elsevier B.V. A doi:10.1016/j.infsof.2009.11.005 * Corresponding author. Tel.: +44 (0) 1782 734090; E-mail address: m.turner@cs.keele.ac.uk (M. Turne Context: The technology acceptance model (TAM) was proposed in 1989 as a means of predicting technology usage. However, it is usually validated by using a measure of behavioural intention to use (BI) rather than actual usage. Objective: This review examines the evidence that the TAM predicts actual usage using both subjective and objective measures of actual usage. Method: We performed a systematic literature review based on a search of six digital libraries, along with vote-counting meta-analysis to analyse the overall results. Results: The search identified 79 relevant empirical studies in 73 articles. The results show that BI is likely to be correlated with actual usage. However, the TAM variables perceived ease of use (PEU) and perceived usefulness (PU) are less likely to be correlated with actual usage. Conclusion: Care should be taken using the TAM outside the context in which it has been validated. 2009 Elsevier B.V. All rights reserved."}
{"_id": "009b98f3ec7c28f3c87acdcc60fb7a5f90e24f9c", "title": "A Hierarchical Thread Scheduler and Register File for Energy-Efficient Throughput Processors", "text": "Modern graphics processing units (GPUs) employ a large number of hardware threads to hide both function unit and memory access latency. Extreme multithreading requires a complex thread scheduler as well as a large register file, which is expensive to access both in terms of energy and latency. We present two complementary techniques for reducing energy on massively-threaded processors such as GPUs. First, we investigate a two-level thread scheduler that maintains a small set of active threads to hide ALU and local memory access latency and a larger set of pending threads to hide main memory latency. Reducing the number of threads that the scheduler must consider each cycle improves the scheduler\u2019s energy efficiency. Second, we propose replacing the monolithic register file found on modern designs with a hierarchical register file. We explore various trade-offs for the hierarchy including the number of levels in the hierarchy and the number of entries at each level. We consider both a hardware-managed caching scheme and a software-managed scheme, where the compiler is responsible for orchestrating all data movement within the register file hierarchy. Combined with a hierarchical register file, our two-level thread scheduler provides a further reduction in energy by only allocating entries in the upper levels of the register file hierarchy for active threads. Averaging across a variety of real world graphics and compute workloads, the active thread count can be reduced by a factor of 4 with minimal impact on performance and our most efficient three-level software-managed register file hierarchy reduces register file energy by 54%."}
{"_id": "c1f72b41abf54ad6d64791063f2d56b91856fc0f", "title": "Real Time Video Data Mining for Surveillance Video Streams", "text": "We extend our previous work [1] of the general framework for video data mining to further address the issue such as how to mine video data. To extract motions, we use an accumulation of quantized pixel differences among all frames in a video segment. As a result, the accumulated motions of segment are represented as a two dimensional matrix. Further, we develop how to capture the location of motions occurring in a segment using the same matrix generated for the calculation of the amount. We study how to cluster those segmented pieces using the features (the amount and the location of motions) we extract by the matrix above. We investigate an algorithm to find whether a segment has normal or abnormal events by clustering and modeling normal events, which occur mostly. In addition to deciding normal or abnormal, the algorithm computes Degree of Abnormality of a segment, which represents to what extent a segment is distant to the existing segments in relation with normal events. Our experimental studies indicate that the proposed techniques are promising."}
{"_id": "00eea9f10738475e9c37a5a240b2f3a2515f16bb", "title": "Multi-patch deep features for power line insulator status classification from aerial images", "text": "The status of the insulators in power line can directly affect the reliability of the power transmission systems. Computer vision aided approaches have been widely applied in electric power systems. Inspecting the status of insulators from aerial images has been challenging due to the complex background and rapid view changing under different illumination conditions. In this paper, we propose a novel approach to inspect the insulators with Deep Convolutional Neural Networks (CNN). A CNN model with multi-patch feature extraction method is applied to represent the status of insulators and a Support Vector Machine (SVM) is trained based on these features. A thorough evaluation is conducted on our insulator status dataset of six classes from real inspection videos. The experimental results show that a pre-trained model for classification is more accurate than the shallow features by hand-crafted. Our approach achieves 98.7095% mean Average Precision (mAP) in status classification. We also study the behavior of the neural activations of the convolutional layers. Different results vary with different fully connected layers, and interesting findings are discussed."}
{"_id": "9fea31cdd90f625206bffacccc31f66adae9b335", "title": "Traveling Salesman Problems with Profits", "text": "T salesman problems with profits (TSPs with profits) are a generalization of the traveling salesman problem (TSP), where it is not necessary to visit all vertices. A profit is associated with each vertex. The overall goal is the simultaneous optimization of the collected profit and the travel costs. These two optimization criteria appear either in the objective function or as a constraint. In this paper, a classification of TSPs with profits is proposed, and the existing literature is surveyed. Different classes of applications, modeling approaches, and exact or heuristic solution techniques are identified and compared. Conclusions emphasize the interest of this class of problems, with respect to applications as well as theoretical results."}
{"_id": "9055fb858641693db03d023d7c85e7d467ba5c60", "title": "A Review on Position/Speed Sensorless Control for Permanent-Magnet Synchronous Machine-Based Wind Energy Conversion Systems", "text": "Owing to the advantages of higher efficiency, greater reliability, and better grid compatibility, the direct-drive permanent-magnet synchronous generator (PMSG)-based variable-speed wind energy conversion systems (WECSs) have drawn the highest attention from both academia and industry in the last few years. Applying mechanical position/speed sensorless control to direct-drive PMSG-based WECSs will further reduce the cost and complexity, while enhancing the reliability and robustness of the WECSs. This paper reviews the state-of-the-art and highly applicable mechanical position/speed sensorless control schemes for PMSG-based variable-speed WECSs. These include wind speed sensorless control schemes, generator rotor position and speed sensorless vector control schemes, and direct torque and direct power control schemes for a variety of direct-drive PMSG-based WECSs."}
{"_id": "5c8f5bfdc7e92789a17b02e90c9d64b4b9a17b43", "title": "Smile Esthetics \u2013 A Literature", "text": "Esthetic dental treatment involves artistic and subjective components design to create the illusion of beauty. An organised systematic approach is required to evaluate, diagnose and resolve esthetic problems predictably. Our ultimate goal is to achieve pleasing composition in the smile by creating an arrangement of various esthetic elements. One of the most important tasks in esthetic dentistry is creating harmonious proportions between the widths of maxillary anterior teeth when restoring or replacing these teeth This review article describes application of the Golden Proportion and Red Proportion in dentistry also the smile design for completely edentulous and for full mouth rehabilitation. Also the future scope for designing smile ."}
{"_id": "707603240701ca0545e37eae14b5c6ec80b3624d", "title": "Graph-based Cross Entropy method for solving multi-robot decentralized POMDPs", "text": "This paper introduces a probabilistic algorithm for multi-robot decision-making under uncertainty, which can be posed as a Decentralized Partially Observable Markov Decision Process (Dec-POMDP). Dec-POMDPs are inherently synchronous decision-making frameworks which require significant computational resources to be solved, making them infeasible for many real-world robotics applications. The Decentralized Partially Observable Semi-Markov Decision Process (Dec-POSMDP) was recently introduced as an extension of the Dec-POMDP that uses high-level macro-actions to allow large-scale, asynchronous decision-making. However, existing Dec-POSMDP solution methods have limited scalability or perform poorly as the problem size grows. This paper proposes a cross-entropy based Dec-POSMDP algorithm motivated by the combinatorial optimization literature. The algorithm is applied to a constrained package delivery domain, where it significantly outperforms existing Dec-POSMDP solution methods."}
{"_id": "7da05dc8c992126f09610d22415d3a6f8bee324a", "title": "Challenges of using homomorphic encryption to secure cloud computing", "text": "With the emergence of cloud computing, the concept of information security has become a major issue. Indeed, the security of such a system is the greatest concern of computer scientists, providers of cloud and organizations who want to adopt and benefit from these services. Cloud computing providers must implement concepts ensuring network security, hardware, data storage and strategies of control and access to services. All these elements help to preserve data security and ensuring the availability of services associated with the Cloud, to better satisfy clients and acquire and build their trust. However, even if the data storage security in cloud servers is assured, reluctance remain when it comes to process the confidential data. Indeed, the fear that sensitive data is being used is a major obstacle in the adoption of cloud services by enterprises. To overcome this obstacle, the use of methods that can perform operations on encrypted data without knowing the secret key, seems to be an effective way to strengthen the confidentiality of information. In this paper we will examine the challenges facing Homomorphic Encryption methods to allow suppliers of cloud to perform operations on encrypted data, and provide the same results after treatment, as if they were performing calculations on raw data."}
{"_id": "2cb6d78e822ca7fd0e29670ec7e26e37ae3d3e8f", "title": "Compact LTCC Bandpass Filter With Wide Stopband Using Discriminating Coupling", "text": "This paper presents a novel compact low-temperature cofired ceramic (LTCC) bandpass filter (BPF) with wide stopband and high selectivity. The proposed circuit consists of two coupled \u03bbg/4 transmission-line resonators. A special coupling region is selected to realize a novel discriminating coupling scheme for generating a transmission zero (TZ) at the third harmonic frequency. The mechanism is analyzed and the design guideline is described. The source-load coupling is introduced to generate two TZs near the passband and one in the stopband. Thus, wide stopband can be obtained without extra circuits. Due to the LTCC multilayer structures, the filter size is 0.058 \u03bbg\u00d70.058 \u03bbg\u00d70.011 \u03bbg, or 2.63 mm \u00d7 2.61 mm \u00d7 0.5 mm. The simulated and measured results of the demonstrated LTCC BPF are presented to validate the proposed design."}
{"_id": "52c9eb70c55685b349126ed907e037f383673cf3", "title": "Cluster-based Web Summarization", "text": "We propose a novel approach to abstractive Web summarization based on the observation that summaries for similar URLs tend to be similar in both content and structure. We leverage existing URL clusters and construct per-cluster word graphs that combine known summaries while abstracting out URL-specific attributes. The resulting topology, conditioned on URL features, allows us to cast the summarization problem as a structured learning task using a lowest cost path search as the decoding step. Early experimental results on a large number of URL clusters show that this approach is able to outperform previously proposed Web summarizers."}
{"_id": "956d66d2a95f60054a473eea583786f2b3c16f9d", "title": "Very-Heavy Sled Training for Improving Horizontal-Force Output in Soccer Players.", "text": "BACKGROUND\nSprint running acceleration is a key feature of physical performance in team sports, and recent literature shows that the ability to generate large magnitudes of horizontal ground-reaction force and mechanical effectiveness of force application are paramount. The authors tested the hypothesis that very-heavy loaded sled sprint training would induce an improvement in horizontal-force production, via an increased effectiveness of application.\n\n\nMETHODS\nTraining-induced changes in sprint performance and mechanical outputs were computed using a field method based on velocity-time data, before and after an 8-wk protocol (16 sessions of 10- \u00d7 20-m sprints). Sixteen male amateur soccer players were assigned to either a very-heavy sled (80% body mass sled load) or a control group (unresisted sprints).\n\n\nRESULTS\nThe main outcome of this pilot study is that very-heavy sled-resisted sprint training, using much greater loads than traditionally recommended, clearly increased maximal horizontal-force production compared with standard unloaded sprint training (effect size of 0.80 vs 0.20 for controls, unclear between-groups difference) and mechanical effectiveness (ie, more horizontally applied force; effect size of 0.95 vs -0.11, moderate between-groups difference). In addition, 5-m and 20-m sprint performance improvements were moderate and small for the very-heavy sled group and small and trivial for the control group, respectively. Practical Applications: This brief report highlights the usefulness of very-heavy sled (80% body mass) training, which may suggest value for practical improvement of mechanical effectiveness and maximal horizontal-force capabilities in soccer players and other team-sport athletes.\n\n\nRESULTS\nThis study may encourage further research to confirm the usefulness of very-heavy sled in this context."}
{"_id": "9282750a80de3a7f09f5562e4ff6d33593eb1614", "title": "Analysis of malicious input issues on intelligent systems", "text": "Intelligent systems can facilitate decision making and have been widely applied to various domains. The output of intelligent systems relies on the users\u2019 input. However, with the development of Web-Based Interface, users can easily provide dishonest input. Therefore, the accuracy of the generated decision will be affected. This dissertation presents three essays to discuss the defense solutions for malicious input into three types of intelligent systems: expert systems, recommender systems, and rating systems. Different methods are proposed in each domain based on the nature of each problem. The first essay addresses the input distortion issue in expert systems. It develops four methods to distinguish liars from truth-tellers, and redesign the expert systems to control the impact of input distortion by liars. Experimental results show that the proposed methods could lead to the better accuracy or the lower misclassification cost. The second essay addresses the shilling attack issue in recommender systems. It proposes an integrated Value-based Neighbor Selection (VNS) approach, which aims to select proper neighbors for recommendation systems that maximize the e-retailer\u2019s profit while protecting the system from shilling attacks. Simulations are conducted to demonstrate the effectiveness of the proposed method. The third essay addresses the rating fraud issue in rating systems. It designs a twophase procedure for rating fraud detection based on the temporal analysis on the rating series."}
{"_id": "e2f224b84e52e3df52b91b4659ecf4deeb0d1966", "title": "Why it can matter more than IQ", "text": "Goleman highlights the many reports of the disintegration of civility and safety, and the onslaught of mean-spirited impulse running amok and claims that these only reflect back to us, on a larger scale, a creeping sense of emotions out of control in our own lives, and in the lives of those people living around us. He seeks to make sense of the senselessness and bring about change which will help our children fare better in life."}
{"_id": "8e048b335d01e1fa539dc22c50deeb7721c4e1ce", "title": "Learning decision trees from dynamic data streams", "text": "This paper presents a system for induction of forest of functional trees from data streams able to detect concept drift. The Ultra Fast Forest of Trees (UFFT) is an incremental algorithm, that works online, processing each example in constant time, and performing a single scan over the training examples. It uses analytical techniques to choose the splitting criteria, and the information gain to estimate the merit of each possible splitting-test. For multi-class problems the algorithm grows a binary tree for each possible pair of classes, leading to a forest of trees. Decision nodes and leaves contain naive-Bayes classifiers playing different roles during the induction process. Naive-Bayes in leaves are used to classify test examples, naive-Bayes in inner nodes can be used as multivariate splitting-tests if chosen by the splitting criteria, and used to detect drift in the distribution of the examples that traverse the node. When a drift is detected, all the sub-tree rooted at that node will be pruned. The use of naive-Bayes classifiers at leaves to classify test examples, the use of splitting-tests based on the outcome of naive-Bayes, and the use of naive-Bayes classifiers at decision nodes to detect drift are directly obtained from the sufficient statistics required to compute the splitting criteria, without no additional computations. This aspect is a main advantage in the context of high-speed data streams. This methodology was tested with artificial and real-world data sets. The experimental results show a very good performance in comparison to a batch decision tree learner, and high capacity to detect and react to drift."}
{"_id": "41e085d52e85a224a66e6b0884f053c05f285877", "title": "Hockey Action Recognition via Integrated Stacked Hourglass Network", "text": "A convolutional neural network (CNN) has been designed to interpret player actions in ice hockey video. The hourglass network is employed as the base to generate player pose estimation and layers are added to this network to produce action recognition. As such, the unified architecture is referred to as action recognition hourglass network, or ARHN. ARHN has three components. The first component is the latent pose estimator, the second transforms latent features to a common frame of reference, and the third performs action recognition. Since no benchmark dataset for pose estimation or action recognition is available for hockey players, we generate such an annotated dataset. Experimental results show action recognition accuracy of 65% for four types of actions in hockey. When similar poses are merged to three and two classes, the accuracy rate increases to 71% and 78%, proving the efficacy of the methodology for automated action recognition in hockey."}
{"_id": "bf26209de8359e99c434e09103680d70d5161bf5", "title": "A qualitative exploration of individuals' motivators for seeking substance user treatment.", "text": "Substance abuse continues to greatly impact the US population. The purpose of this study is to improve understanding of the reasons why individuals seek substance user treatment. Qualitative interviews were conducted with 50 men (n\u00a0=\u00a025) and women (n\u00a0=\u00a025) across eight substance user treatment programs in Durham, North Carolina, in 2010. We identify six motivating themes regarding reasons for seeking treatment and explore how these themes vary by the participants' mode of entry into treatment. Findings provide a holistic understanding of how and why individuals come into treatment that can enhance the quality and impact of substance user services."}
{"_id": "563cbb597462c4f45739144f2c98cdf1d05872a0", "title": "Learning Context for Text Categorization", "text": "This paper describes our work which is based on discovering context for text document categorization. The document categorization approach is derived from a combination of a learning paradigm known as relation extraction and an technique known as context discovery. We demonstrate the effectiveness of our categorization approach using reuters 21578 dataset and synthetic real world data from sports domain. Our experimental results indicate that the learned context greatly improves the categorization performance as compared to traditional categorization approaches."}
{"_id": "389b2390fd310c9070e72563181547cf23dceea3", "title": "\u03b2-VAE : L EARNING B ASIC", "text": "Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce \u03b2-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter \u03b2 that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that \u03b2-VAE with appropriately tuned \u03b2 > 1 qualitatively outperforms VAE (\u03b2 = 1), as well as state of the art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, \u03b2-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter \u03b2, which can be directly optimised through a hyperparameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data."}
{"_id": "f0eb8480c4c4fb57a80d03ee8509aa3131d982a5", "title": "Exploration of Large Networks with Covariates via Fast and Universal Latent Space Model Fitting", "text": "Latent space models are effective tools for statistical modeling and exploration of network data. These models can effectively model real world network characteristics such as degree heterogeneity, transitivity, homophily, etc. Due to their close connection to generalized linear models, it is also natural to incorporate covariate information in them. The current paper presents two universal fitting algorithms for networks with edge covariates: one based on nuclear norm penalization and the other based on projected gradient descent. Both algorithms are motivated by maximizing likelihood for a special class of inner-product models while working simultaneously for a wide range of different latent space models, such as distance models, which allow latent vectors to affect edge formation in flexible ways. These fitting methods, especially the one based on projected gradient descent, are fast and scalable to large networks. We obtain their rates of convergence for both inner-product models and beyond. The effectiveness of the modeling approach and fitting algorithms is demonstrated on five real world network datasets for different statistical tasks, including community detection with and without edge covariates, and network assisted learning."}
{"_id": "10dee5e2b116ce9c0b7961965953357feda2ccbd", "title": "Childhood Obesity Perceptions Among African American Caregivers in a Rural Georgia Community: A Mixed Methods Approach", "text": "Given the pivotal role of African American caregiver\u2019s perceptions of childhood obesity in rural areas, the inclusion of caregiver\u2019s perceptions could potentially reduce childhood obesity rates. The objective of the current study was to explore childhood obesity perceptions among African Americans in a rural Georgia community. This concurrent mixed methods study utilized two theoretical frameworks: Social Cognitive Theory and Social Ecological Model. Using a convenience sample, caregivers ages 22\u201365\u00a0years completed a paper-based survey (n\u00a0=\u00a0135) and a face-to-face interview (n\u00a0=\u00a012) to explore perceptions of obesity risk factors, health complications, weight status, built environment features, and obesity prevention approaches. Descriptive statistics were generated and a six-step process was used for qualitative analysis. Participants commonly cited behavioral risk factors; yet, social aspects and appearance of the community were not considered contributing factors. Chronic diseases were reported as obesity health complications. Caregivers had a distorted view of their child\u2019s weight status. In addition, analysis revealed that caregivers assessed child\u2019s weight and height measurements by the child\u2019s appearance or a recent doctor visit. Environmental barriers reported by caregivers included safety concerns and insufficient physical activity venues and programs. Also, caregivers conveyed parents are an imperative component of preventing obesity. Although this study found caregivers were aware of obesity risk factors, health complications, built environment features, and prevention approaches their obesity perceptions were not incorporated into school or community prevention efforts. Findings suggest that children residing in rural areas are in need of tailored efforts that address caregiver perceptions of obesity."}
{"_id": "0dad9cfd5b0065a27d8d7ffc94ee004844a97502", "title": "Balancing traffic load in wireless networks with curveball routing", "text": "We address the problem of balancing the traffic load in multi-hop wireless networks. We consider a point-to-point communicating network with a uniform distribution of source-sink pairs. When routing along shortest paths, the nodes that are centrally located forward a disproportionate amount of traffic. This translates into increased congestion and energy consumption. However, the maximum load can be decreased if the packets follow curved paths. We show that the optimum such routing scheme can be expressed in terms of geometric optics and computed by linear programming. We then propose a practical solution, which we call Curveball Routing which achieves results not much worse than the optimum.\n We evaluate our solution at three levels of fidelity: a Java high-level simulator, the ns2 simulator, and the Intel Mirage Sensor Network Testbed. Simulation results using the high-level simulator show that our solution successfully avoids the crowded center of the network, and reduces the maximum load by up to 40%. At the same time, the increase of the expected path length is minimal, i.e., only 8% on average. Simulation results using the ns2 simulator show that our solution can increase throughput on moderately loaded networks by up to 15%, while testbed results show a reduction in peak energy usage by up to 25%. Our prototype suggests that our solution is easily deployable."}
{"_id": "731ebd54010c90a31c7318284c628ff20848dfdb", "title": "A decision support system for operations in a container terminal", "text": "We describe a variety of inter-related decisions made during daily operations at a container terminal. The ultimate goal of these decisions is to minimize the berthing time of vessels, the resources needed for handling the workload, the waiting time of customer trucks, and the congestion on the roads and at the storage blocks and docks inside the terminal; and to make the best use of the storage space. Given the scale and complexity of these decisions, it is essential to use decision support tools to make them. This paper reports on work to develop such a decision support system (DSS). We discuss the mathematical models and algorithms used in designing the DSS, the reasons for using these approaches, and some experimental results. D 2003 Elsevier B.V. All rights reserved."}
{"_id": "8947ca4949fc66eb65f863dfb825ebd90ab01772", "title": "Reducing the human overhead in text categorization", "text": "Many applications in text processing require significant human effort for either labeling large document collections (when learning statistical models) or extrapolating rules from them (when using knowledge engineering). In this work, we describe away to reduce this effort, while retaining the methods' accuracy, by constructing a hybrid classifier that utilizes human reasoning over automatically discovered text patterns to complement machine learning. Using a standard sentiment-classification dataset and real customer feedback data, we demonstrate that the resulting technique results in significant reduction of the human effort required to obtain a given classification accuracy. Moreover, the hybrid text classifier also results in a significant boost in accuracy over machine-learning based classifiers when a comparable amount of labeled data is used."}
{"_id": "8c10a7d51d8c33a3daf2c39e16f2e11bf51de55e", "title": "Asirra: a CAPTCHA that exploits interest-aligned manual image categorization", "text": "The typical CAPTCHA GUI consists of two parts: a character image with noise, and an and J. Saul, \"Asirra: a CAPTCHA that exploits interest-aligned manual image categorization,\" in 14th ACM conference on Computer and Communications. CAPTCHAs (short for \"Completely Automated Public Turing test to tell Asirra: A CAPTCHA that Exploits Interest-Aligned Manual Image Categorization. (30)J. Elson, J. R. Douceur, J. Howell, and J. Saul, \u201cAsirra: A CAPTCHA that exploits interest-aligned manual image categorization,\u201d in Proc. ACM CCS, 2007, pp."}
{"_id": "79c0dcc2e14d945fa0a210716f5b783bde8e687f", "title": "On the Character of Phishing URLs: Accurate and Robust Statistical Learning Classifiers", "text": "Phishing attacks resulted in an estimated $3.2 billion dollars worth of stolen property in 2007, and the success rate for phishing attacks is increasing each year [17]. Phishing attacks are becoming harder to detect and more elusive by using short time windows to launch attacks. In order to combat the increasing effectiveness of phishing attacks, we propose that combining statistical analysis of website URLs with machine learning techniques will give a more accurate classification of phishing URLs. Using a two-sample Kolmogorov-Smirnov test along with other features we were able to accurately classify 99.3% of our dataset, with a false positive rate of less than 0.4%. Thus, accuracy of phishing URL classification can be greatly increased through the use of these statistical measures."}
{"_id": "dd2ff3304f72ddf73397fee8587d5dc25c58a171", "title": "Consistent sparsification for graph optimization", "text": "In a standard pose-graph formulation of simultaneous localization and mapping (SLAM), due to the continuously increasing numbers of nodes (states) and edges (measurements), the graph may grow prohibitively too large for long-term navigation. This motivates us to systematically reduce the pose graph amenable to available processing and memory resources. In particular, in this paper we introduce a consistent graph sparsification scheme: (i) sparsifying nodes via marginalization of old nodes, while retaining all the information (consistent relative constraints) - which is conveyed in the discarded measurements - about the remaining nodes after marginalization; and (ii) sparsifying edges by formulating and solving a consistent \u21131-regularized minimization problem, which automatically promotes the sparsity of the graph. The proposed approach is validated on both synthetic and real data."}
{"_id": "a1133da5214e34d8a95b63aa2a8568bf68eafce7", "title": "The Digital Self-Tuning Control of Step a Down DC-DC Converter", "text": "A digital self-tuning control technique of DC-DC Buck converter is considered and thoroughly analyzed in this paper. The development of the small-signal model of the converter, which is the key of the control design presented in this work, is based on the state-space averaged (SSA) technique. Adaptive control has become a widely-used term in DC-DC conversion in recent years. A digital self-tuning Dahlin PID and a /Keviczky PID controller based on recursive leastsquares estimation are developed and designed to be applied to the voltage mode control (VMC) approach operating in a continuous conduction mode (CCM). A comparative study of these two digital self-tuning controllers for step change in input voltage magnitude or output load is also carried out. The simulation results obtained using a Matlab SimPowerSystems toolbox to validate the effectiveness of the proposed strategies are also given and discussed extensively."}
{"_id": "1967ad3ac8a598adc6929e9e6b9682734f789427", "title": "Hierarchical Attention Networks for Document Classification", "text": "We propose a hierarchical attention network for document classification. Our model has two distinctive characteristics: (i) it has a hierarchical structure that mirrors the hierarchical structure of documents; (ii) it has two levels of attention mechanisms applied at the wordand sentence-level, enabling it to attend differentially to more and less important content when constructing the document representation. Experiments conducted on six large scale text classification tasks demonstrate that the proposed architecture outperform previous methods by a substantial margin. Visualization of the attention layers illustrates that the model selects qualitatively informative words and sentences."}
{"_id": "5e40be36d11483923cb12260bed6e8f7ed355ff3", "title": "Interactive Attention Networks for Aspect-Level Sentiment Classification", "text": "Aspect-level sentiment classification aims at identifying the sentiment polarity of specific target in its context. Previous approaches have realized the importance of targets in sentiment classification and developed various methods with the goal of precisely modeling their contexts via generating target-specific representations. However, these studies always ignore the separate modeling of targets. In this paper, we argue that both targets and contexts deserve special treatment and need to be learned their own representations via interactive learning. Then, we propose the interactive attention networks (IAN) to interactively learn attentions in the contexts and targets, and generate the representations for targets and contexts separately. With this design, the IAN model can well represent a target and its collocative context, which is helpful to sentiment classification. Experimental results on SemEval 2014 Datasets demonstrate the effectiveness of our model."}
{"_id": "7942655c7510baf5e81189d58ef88ca730800804", "title": "Using Structured Events to Predict Stock Price Movement: An Empirical Investigation", "text": "It has been shown that news events influence the trends of stock price movements. However, previous work on news-driven stock market prediction rely on shallow features (such as bags-of-words, named entities and noun phrases), which do not capture structured entity-relation information, and hence cannot represent complete and exact events. Recent advances in Open Information Extraction (Open IE) techniques enable the extraction of structured events from web-scale data. We propose to adapt Open IE technology for event-based stock price movement prediction, extracting structured events from large-scale public news without manual efforts. Both linear and nonlinear models are employed to empirically investigate the hidden and complex relationships between events and the stock market. Largescale experiments show that the accuracy of S&P 500 index prediction is 60%, and that of individual stock prediction can be over 70%. Our event-based system outperforms bags-of-words-based baselines, and previously reported systems trained on S&P 500 stock historical data."}
{"_id": "7eb77b33585ced33db902cf0be8fa1a46ef235bd", "title": "Logic design with unipolar memristors", "text": "Memristors are novel devices that could naturally be designed as memory elements. Recently, several methods of designing memristors for logic operations have been proposed. These methods, mostly, make use of bipolar memristors. In this paper, we propose a method for performing logic with unipolar memristors based on OR and NOT logic gates. An integration of the basic building blocks into more complicated logic functions is described and demonstrated. Our results indicate that any logic function could be performed using an external controller. Thus, adding the capability of performing logic computations to numerous types of unipolar memristive materials in addition to their memory capability."}
{"_id": "8ac753263f962be06f49a661b243182c583b1b56", "title": "The inferior alveolar nerve's loop at the mental foramen and its implications for surgery.", "text": "BACKGROUND\nIn this study, the authors aimed to identify and measure the anterior extension of the alveolar loop (aAL) and the caudal extension of the alveolar loop (cAL) of the inferior alveolar nerve by using cone-beam computed tomography (CBCT). They also aimed to provide recommendations for surgery in the anterior mandible.\n\n\nMETHODS\nIn this retrospective case study of the frequency and extension of aAL and cAL, the authors evaluated 1,384 mandibular sites in 694 CBCT scans of dentate and partly edentulous patients, performed mainly for further diagnosis before removal of the mandibular third molars between January 2009 and February 2013, by using multiplanar reconstructions.\n\n\nRESULTS\nThe frequency of aAL was 69.73 percent and of cAL was 100 percent. The mean value for aAL was 1.16 millimeters, with a range of 0.3 to 5.6 mm; the mean value for cAL was 4.11 mm, with a range of 0.25 to 8.87 mm. For aAL, 95.81 percent of the sites showed values of 0 to 3 mm; for cAL, 93.78 percent of the sites showed values of 0.25 to 6 mm. Dentate patients showed statistically significantly higher values for cAL than did partly edentulous patients (P = .043). CBCT resolution had a statistically significant impact on cAL measurements (P = .001), with higher values at higher resolution.\n\n\nCONCLUSIONS\nThis study showed a high frequency of and large variations in aAL and cAL. In contrast to panoramic radiography, CBCT has been shown to be a reliable tool for identifying and measuring the AL. Therefore, preoperative diagnosis with CBCT is recommended for planning three-dimensional tasks such as implant placement in the vicinity of the mental foramen.\n\n\nPRACTICAL IMPLICATIONS\nOwing to the variability of aAL and cAL measurements, it is difficult to recommend reliable safety margins for surgical procedures such as implant placement, bone harvesting or genioplasty Depending on the indication, the clinician should consider preoperative diagnosis by means of CBCT."}
{"_id": "d1e505d31501e0135c21aa7f6768eb69090a4913", "title": "The Synthetic Data Vault", "text": "The goal of this paper is to build a system that automatically creates synthetic data to enable data science endeavors. To achieve this, we present the Synthetic Data Vault (SDV), a system that builds generative models of relational databases. We are able to sample from the model and create synthetic data, hence the name SDV. When implementing the SDV, we also developed an algorithm that computes statistics at the intersection of related database tables. We then used a state-of-the-art multivariate modeling approach to model this data. The SDV iterates through all possible relations, ultimately creating a model for the entire database. Once this model is computed, the same relational information allows the SDV to synthesize data by sampling from any part of the database. After building the SDV, we used it to generate synthetic data for five different publicly available datasets. We then published these datasets, and asked data scientists to develop predictive models for them as part of a crowdsourced experiment. By analyzing the outcomes, we show that synthetic data can successfully replace original data for data science. Our analysis indicates that there is no significant difference in the work produced by data scientists who used synthetic data as opposed to real data. We conclude that the SDV is a viable solution for synthetic data generation."}
{"_id": "1678db582ce638a4bc9448e1a95111f1e83f45aa", "title": "CSIsnoop: Attacker Inference of Channel State Information in Multi-User WLANs", "text": "Channel State Information (CSI) has been proposed to enhance physical layer security between a transmitter and a receiver because it decorrelates over half wavelength distances in rich scattering environments. Consequently, CSI was employed to generate passwords, to authenticate the source of packets, and to inject artificial noise to thwart eavesdroppers. However, in this paper, we present CSIsnoop, and show that an attacker can infer CSI in a multi-user WLAN, even if both channel sounding sequences from the access point and CSI measurement feedback from the clients are encrypted. The insights of CSIsnoop are that the CSI of clients can be computed based on transmit beamforming weights at the access point, and that the transmit beamforming weights can be estimated from downlink multi-user transmission. We implement CSIsnoop on a software defined radio and conduct experiments in various indoor environments. Our results show that on average CSIsnoop can infer CSI of the target client with an absolute normalized correlation of over 0.99, thereby urging reconsideration of the use of CSI as a tool to enhance physical layer security in multi-user WLANs."}
{"_id": "6408188efe6265c569ef99ae655d385f37fba6de", "title": "ROS play a critical role in the differentiation of alternatively activated macrophages and the occurrence of tumor-associated macrophages", "text": "Differentiation to different types of macrophages determines their distinct functions. Tumor-associated macrophages (TAMs) promote tumorigenesis owing to their proangiogenic and immune-suppressive functions similar to those of alternatively activated (M2) macrophages. We report that reactive oxygen species (ROS) production is critical for macrophage differentiation and that inhibition of superoxide (O2\u2212) production specifically blocks the differentiation of M2 macrophages. We found that when monocytes are triggered to differentiate, O2\u2212 is generated and is needed for the biphasic ERK activation, which is critical for macrophage differentiation. We demonstrated that ROS elimination by butylated hydroxyanisole (BHA) and other ROS inhibitors blocks macrophage differentiation. However, the inhibitory effect of ROS elimination on macrophage differentiation is overcome when cells are polarized to classically activated (M1), but not M2, macrophages. More importantly, the continuous administration of the ROS inhibitor BHA efficiently blocked the occurrence of TAMs and markedly suppressed tumorigenesis in mouse cancer models. Targeting TAMs by blocking ROS can be a potentially effective method for cancer treatment."}
{"_id": "22ab75ccad4fe337b330004111e255ee89e2ed31", "title": "Loc-Auth: Location-enabled authentication through attribute-based encryption", "text": "Traditional user authentication involves entering a username and password into a system. Strong authentication security demands, among other requirements, long, frequently hard-to-remember passwords. Two-factor authentication aids in the security, even though, as a side effect, might worsen user experience. We depict a mobile sign-on scheme that benefits from the dynamic relationship between a user's attributes, the service the user wishes to utilize, and location (where the user is, and what services are available there) as an authentication factor. We demonstrate our scheme employing Bluetooth Low Energy beacons for location awareness and the expressiveness of Attribute-Based Encryption to capture and leverage the described relationship. Bluetooth Low Energy beacons broadcast encrypted messages with encoded access policies. Within range of the beacons, a user with appropriate attributes is able to decrypt the broadcast message and obtain parameters that allow the user to perform a short or simplified login."}
{"_id": "993793efa6e1a3562ba2cc392bad601a94e15680", "title": "Reconstruction of the high urogenital sinus: early perineal prone approach without division of the rectum.", "text": "PURPOSE\nReconstruction of the vagina and external genitalia in the infant is quite challenging, particularly when a urogenital sinus is associated with high confluence of the vagina and urethra. Many surgeons believe that children with such a malformation should undergo staged or delayed reconstruction, so that vaginoplasty is done when the child is older and larger. Vaginoplasty early in life is thought to be difficult due to patient size and poor visualization. The posterior sagittal approach has been beneficial for acquiring exposure to high urogenital sinus anomalies but it has been thought to require splitting of the rectum and temporary colostomy. We report a modification of this technique.\n\n\nMATERIALS AND METHODS\nIn the last 5 years all patients with urogenital sinus anomalies underwent reconstruction using a single stage approach regardless of the level of confluence. In 8 patients with a high level of confluence reconstruction was performed using a perineal prone approach. Exposure was achieved without division of the rectum. The operative technique is presented in detail.\n\n\nRESULTS\nThis midline perineal prone approach has allowed excellent exposure of the high vagina even in infants. In all 8 patients reconstruction was done without difficulty and no patient required incision of the rectum or colostomy. This procedure did not preclude the use of a posteriorly based flap for vaginal reconstruction.\n\n\nCONCLUSIONS\nWhile patients with low confluence can be treated with single posteriorly based flap vaginoplasty, those with higher confluence may benefit from a perineal prone approach to achieve adequate exposure for pull-through vaginoplasty. This prone approach to the high urogenital sinus anomaly can be performed without division of the rectum, provides excellent exposure of the high confluence even in small children and does not preclude the use of posterior flaps for vaginal reconstruction."}
{"_id": "03dbc94b54c85cc34815df629fb508c6729e6eab", "title": "LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop", "text": "While there has been remarkable progress in the performance of visual recognition algorithms, the state-of-the-art models tend to be exceptionally data-hungry. Large labeled training datasets, expensive and tedious to produce, are required to optimize millions of parameters in deep network models. Lagging behind the growth in model capacity, the available datasets are quickly becoming outdated in terms of size and density. To circumvent this bottleneck, we propose to amplify human effort through a partially automated labeling scheme, leveraging deep learning with humans in the loop. Starting from a large set of candidate images for each category, we iteratively sample a subset, ask people to label them, classify the others with a trained model, split the set into positives, negatives, and unlabeled based on the classification confidence, and then iterate with the unlabeled set. To assess the effectiveness of this cascading procedure and enable further progress in visual recognition research, we construct a new image dataset, LSUN. It contains around one million labeled images for each of 10 scene categories and 20 object categories. We experiment with training popular convolutional networks and find that they achieve substantial performance gains when trained on this dataset."}
{"_id": "cbc285f6cce876c8732d4fb24936de940494751e", "title": "Women \u201c Take Care , \u201d Men \u201c Take Charge \u201d : Managers \u2019 Stereotypic Perceptions of Women and Men Leaders", "text": "This study explored possible underpinnings of findings from earlier research in which women\u2019s overall leadership competence was less favorably evaluated than men\u2019s. The authors examined perceptions held by senior managers, 34% of whom were CEOs, of women\u2019s and men\u2019s effectiveness at 10 key leadership behaviors. Respondents generally perceived that women were more effective than men at caretaking leader behaviors; and that men were more effective than women at actionoriented, \u201ctake-charge\u201d leader behaviors. Notably, male respondents perceived that the behavior at which men leaders most outperformed women was problemsolving. The authors propose that this perception could potentially undermine the influence of women leaders."}
{"_id": "c0653bcb4a64dfc9e15c17bfbcfcddcc0a968961", "title": "A Combination of Amino Acids and Caffeine Enhances Sprint Running Capacity in a Hot, Hypoxic Environment.", "text": "Heat and hypoxia exacerbate central nervous system (CNS) fatigue. We therefore investigated whether essential amino acid (EAA) and caffeine ingestion attenuates CNS fatigue in a simulated team sport-specific running protocol in a hot, hypoxic environment. Subelite male team sport athletes (n = 8) performed a repeat sprint running protocol on a nonmotorized treadmill in an extreme environment on 4 separate occasions. Participants ingested one of four supplements: a double placebo, 3 mg.kg-1 body mass of caffeine + placebo, 2 x 7 g EAA (Musashi Create)+placebo, or caffeine + EAA before each exercise session using a randomized, double-blind crossover design. Electromyography (EMG) activity and quadriceps evoked responses to magnetic stimulation were assessed from the dominant leg at preexercise, halftime, and postexercise. Central activation ratio (CAR) was used to quantify completeness of quadriceps activation. Oxygenation of the prefrontal cortex was measured via near-infrared spectroscopy. Mean sprint work was higher (M = 174 J, 95% CI [23, 324], p < .05, d = 0.30; effect size, likely beneficial) in the caffeine + EAA condition versus EAAs alone. The decline in EMG activity was less (M = 13%, 95% CI [0, 26]; p < .01, d = 0.58, likely beneficial) in caffeine + EAA versus EAA alone. Similarly, the pre- to postexercise decrement in CAR was significantly less (M = -2.7%, 95% CI [0.4, 5.4]; p < .05, d = 0.50, likely beneficial) when caffeine + EAA were ingested compared with placebo. Cerebral oxygenation was lower (M = -5.6%, 95% CI [1.0, 10.1]; p < .01, d = 0.60, very likely beneficial) in the caffeine + EAA condition compared with LNAA alone. Co-ingestion of caffeine and EAA appears to maintain muscle activation and central drive, with a small improvement in running performance."}
{"_id": "bec7f9de0ec5bef2ccd1d4f518e0152fa80b9ed5", "title": "KDD meets Big Data", "text": "Cross-Industry Standard process model (CRISP-DM) was developed in the late 90s by a consortium of industry participants to facilitate the end-to-end data mining process for Knowledge Discovery in Databases (KDD). While there have been efforts to better integrate with management and software development practices, there are no extensions to handle the new activities involved in using big data technologies. Data Science Edge (DSE) is an enhanced process model to accommodate big data technologies and data science activities. In recognition of the changes, the author promotes the use of a new term, Knowledge Discovery in Data Science (KDDS) as a call for the community to develop a new industry standard data science process model."}
{"_id": "1e0b8416b9d2afb9b1ef87557958ef964cb4472b", "title": "TED-LIUM: an Automatic Speech Recognition dedicated corpus", "text": "This paper presents the corpus developed by the LIUM for Automatic Speech Recognition (ASR), based on the TED Talks. This corpus was built during the IWSLT 2011 Evaluation Campaign, and is composed of 118 hours of speech with its accompanying automatically aligned transcripts. We describe the content of the corpus, how the data was collected and processed, how it will be publicly available and how we built an ASR system using this data leading to a WER score of 17.4%. The official results we obtained at the IWSLT 2011 evaluation campaign are also discussed."}
{"_id": "5878d191bf99915c0880256c369e3765755d9ba7", "title": "Bias and ignorance in demographic perception", "text": "When it comes to knowledge of demographic facts, misinformation appears to be the norm. Americans massively overestimate the proportions of their fellow citizens who are immigrants, Muslim, LGBTQ, and Latino, but underestimate those who are White or Christian. Previous explanations of these estimation errors have invoked topic-specific mechanisms such as xenophobia or media bias. We reconsidered this pattern of errors in the light of more than 30 years of research on the psychological processes involved in proportion estimation and decision-making under uncertainty. In two publicly available datasets featuring demographic estimates from 14 countries, we found that proportion estimates of national demographics correspond closely to what is found in laboratory studies of quantitative estimates more generally. Biases in demographic estimation, therefore, are part of a very general pattern of human psychology\u2014independent of the particular topic or demographic under consideration\u2014that explains most of the error in estimates of the size of politically salient populations. By situating demographic estimates within a broader understanding of general quantity estimation, these results demand reevaluation of both topic-specific misinformation about demographic facts and topic-specific explanations of demographic ignorance, such as media bias and xenophobia."}
{"_id": "bbf0235e7e2e2c3396c3bf85932ecd4cfc8ba1f9", "title": "Nonlinear Scale Space with Spatially Varying Stopping Time", "text": "A general scale space algorithm is presented for denoising signals and images with spatially varying dominant scales. The process is formulated as a partial differential equation with spatially varying time. The proposed adaptivity is semi-local and is in conjunction with the classical gradient-based diffusion coefficient, designed to preserve edges. The new algorithm aims at maximizing a local SNR measure of the denoised image. It is based on a generalization of a global stopping time criterion presented recently by the author and colleagues. Most notably, the method works well also for partially textured images and outperforms any selection of a global stopping time. Given an estimate of the noise variance, the procedure is automatic and can be applied well to most natural images."}
{"_id": "5b9f8a71c268911fbecf567f894567a3cdd1a08e", "title": "Design procedure for two-stage CMOS operational amplifiers employing current buffer", "text": "The design procedure of the two-stage CMOS operational amplifiers employing Miller capacitor in conjunction with the common-gate current buffer is presented. Unlike the previously reported design strategy of the opamp of this type, which results in the opamp with a pair of nondominant complex conjugate poles and a finite zero, the proposed procedure is based upon the design strategy that results in the opamp with only one dominant pole. Design example of the proposed procedure is given."}
{"_id": "c861d8a9ecc116a9b5db59573a48a1086c7ef340", "title": "Electroencephalography (EEG) Based Mobile Robot Control through an Adaptive Brain Robot Interface", "text": "This project mentioned a couple of brain controlled automaton supported Brain\u2013computer interfaces (BCI). BCIs square measure systems that may by pass standard channels of communication (i.e., muscles and thoughts) to produce direct communication and management between the human brain and physical devices by translating totally different patterns of brain activity into commands in real time. With these commands mobile automaton may be controlled. The intention of the project work is to develop automaton that may assist the disabled folks in their standard of living to try and do some work freelance on others. Brain signals are detected by the brain wave device and it'll convert the info into packets and transmit through Bluetooth medium. Level instrument unit (LAU) can receive the brain wave information and it'll extract and method the signal victimization Mat lab platform. Then the management commands are transmitted to the robotic ARM module to method. With this whole system, we will choose AN object and place it consequently through the designed brain signals."}
{"_id": "563384a5aa6111610ac4939f645d1125a5a0ac7f", "title": "Face recognition using PCA and SVM", "text": "Automatic recognition of people has received much attention during the recent years due to its many applications in different fields such as law enforcement, security applications or video indexing. Face recognition is an important and very challenging technique to automatic people recognition. Up to date, there is no technique that provides a robust solution to all situations and different applications that face recognition may encounter. In general, we can make sure that performance of a face recognition system is determined by how to extract feature vector exactly and to classify them into a group accurately. It, therefore, is necessary for us to closely look at the feature extractor and classifier. In this paper, Principle Component Analysis (PCA) is used to play a key role in feature extractor and the SVMs are used to tackle the face recognition problem. Support Vector Machines (SVMs) have been recently proposed as a new classifier for pattern recognition. We illustrate the potential of SVMs on the Cambridge ORL Face database, which consists of 400 images of 40 individuals, containing quite a high degree of variability in expression, pose, and facial details. The SVMs that have been used included the Linear (LSVM), Polynomial (PSVM), and Radial Basis Function (RBFSVM) SVMs. We provide experimental evidence which show that Polynomial and Radial Basis Function (RBF) SVMs performs better than Linear SVM on the ORL Face Dataset when both are used with one against all classification. We also compared the SVMs based recognition with the standard eigenface approach using the Multi-Layer Perceptron (MLP) Classification criterion."}
{"_id": "ed149963de1e756879cfa6707dd6e94935b5cd1c", "title": "Probabilistic Extraction of Beat Positions from a Beat Activation Function", "text": "We present a probabilistic way to extract beat positions from the output (activations) of the neural network that is at the heart of an existing beat tracker. The method can serve as a replacement for the greedy search the beat tracker currently uses for this purpose. Our experiments show improvement upon the current method for a variety of data sets and quality measures, as well as better results compared to other state-of-the-art algorithms."}
{"_id": "56757c983518b0604d54719df85fcd0adf789044", "title": "Learning diverse rankings with multi-armed bandits", "text": "Algorithms for learning to rank Web documents usually assume a document's relevance is independent of other documents. This leads to learned ranking functions that produce rankings with redundant results. In contrast, user studies have shown that diversity at high ranks is often preferred. We present two online learning algorithms that directly learn a diverse ranking of documents based on users' clicking behavior. We show that these algorithms minimize abandonment, or alternatively, maximize the probability that a relevant document is found in the top k positions of a ranking. Moreover, one of our algorithms asymptotically achieves optimal worst-case performance even if users' interests change."}
{"_id": "19089ecd35606445c62ff4abaa26252f44dcda89", "title": "Review of statistical shape spaces for 3D data with comparative analysis for human faces", "text": "With systems for acquiring 3D surface data being evermore commonplace, it has become important to reliably extract specific shapes from the acquired data. In the presence of noise and occlusions, this can be done through the use of statistical shape models, which are learned from databases of clean examples of the shape in question. In this paper, we review, analyze and compare different statistical models: from those that analyze the variation in geometry globally to those that analyze the variation in geometry locally. We first review how different types of models have been used in the literature, then proceed to define the models and analyze them theoretically, in terms of both their statistical and computational aspects. We then perform extensive experimental comparison on the task of model fitting, and give intuition about which type of model is better for a few applications. Due to the wide availability of databases of high-quality data, we use the human face as the specific shape we wish to extract from corrupted data."}
{"_id": "38a4bc276fb3820dad9c85c201ef567cd93c07e6", "title": "Flow : The Psychology of Optimal Experience", "text": "BOOK REVIEW: Csikszentmihalyi, M. (2008). Flow: The Psychology of Optimal Experience. New York, NY: HarperCollins. 336 pp. ISBN 978-0-06-133920-2."}
{"_id": "50d5035bae37531d2d8108055d52f4a000d66dbe", "title": "Positive affect facilitates creative problem solving.", "text": "Four experiments indicated that positive affect, induced by means of seeing a few minutes of a comedy film or by means of receiving a small bag of candy, improved performance on two tasks that are generally regarded as requiring creative ingenuity: Duncker's (1945) candle task and M. T. Mednick, S. A. Mednick, and E. V. Mednick's (1964) Remote Associates Test. One condition in which negative affect was induced and two in which subjects engaged in physical exercise (intended to represent affectless arousal) failed to produce comparable improvements in creative performance. The influence of positive affect on creativity was discussed in terms of a broader theory of the impact of positive affect on cognitive organization."}
{"_id": "748742a6cac31b7bcb4f3e5bd506eae8397f217f", "title": "On the bipolarity of positive and negative affect.", "text": "Is positive affect (PA) the bipolar opposite of, or is it independent of, negative affect (NA)? Previous analyses of this vexing question have generally labored under the false assumption that bipolarity predicts an invariant latent correlation between PA and NA. The predicted correlation varies with time frame, response format, and items selected to define PA and NA. The observed correlation also varies with errors inherent in measurement. When the actual predictions of a bipolar model are considered and error is taken into account, there is little evidence for independence of what were traditionally thought opposites. Bipolarity provides a parsimonious fit to existing data."}
{"_id": "774e783c2f8921c6030359f05683b17fcda16d57", "title": "Culture and the categorization of emotions.", "text": "Some writers assume--and others deny--that all human beings distinguish emotions from nonemotions and divide the emotions into happiness, anger, fear, and so on. A review of ethnographic and cross-cultural studies on (a) emotion lexicons, (b) the emotions inferred from facial expressions, and (c) dimensions implicit in comparative judgments of emotions indicated both similarities and differences in how the emotions are categorized in different languages and cultures. Five hypotheses are reviewed: (a) Basic categories of emotion are pancultural, subordinate categories culture specific; (b) emotional focal points are pancultural, boundaries culture specific; (c) emotion categories evolved from a single primitive category of physiological arousal; (d) most emotion categories are culture specific but can be defined by pancultural semantic primitives; and (e) an emotion category is a script with both culture-specific and pancultural components."}
{"_id": "927c10385d93d538e2791f8ef28c5eaf96e08a73", "title": "The associative basis of the creative process.", "text": "The intent of this paper is the presentation of an associative interpretation of the process of creative thinking. The explanation is not directed to any specific field of application such as art or science but attempts to delineate processes that underlie all creative thought. The discussion will take the following form, (a) First, we will define creative thinking in associative terms and indicate three ways in which creative solutions may be achieved\u2014serendipity, similarity, and mediation, (b) This definition will allow us to deduce those individual difference variables which will facilitate creative performance, (c) Consideration of the definition of the creative process has suggested an operational statement of the definition in the form of a test. The test will be briefly described along with some preliminary research results. (d) The paper will conclude with a discussion of predictions regarding the influence of certain experimentally manipulable variables upon the creative process. Creative individuals and the processes by which they manifest their creativity have excited a good deal of"}
{"_id": "7f2b36610d14d25142a5664e8a1ca6e1425c14dc", "title": "Missing data correction in still images and image sequences", "text": "The ability to replace missing data in images and video is of key importance to many application fields. The general-purpose algorithm presented here is inspired by texture synthesis techniques but is suited to any complex natural scene and not restricted to stationary patterns. It has the property to be adapted to both still images and image sequences and to incorporate temporal information when available while preserving the simplicity of the algorithm. This method gives very good results in various situations without user intervention. The resulting computational cost is relatively low and corrections are usually produced within seconds."}
{"_id": "9adb5a88b19e2edfef58472f8249ec995da89555", "title": "Towards Wifi Mobility without Fast Handover", "text": "WiFi is emerging as the preferred connectivity solution for mobile clients because of its low power consumption and high capacity. Dense urban WiFi deployments cover entire central areas, but the one thing missing is a seamless mobile experience. Mobility in WiFi has traditionally pursued fast handover, but we argue that this is the wrong goal to begin with. Instead, we propose that mobile clients should connect to all the access points they see, and split traffic over them with the newly deployed MPTCP protocol. We let a mobile connect to multiple APs on the same channel, or on different channels, and show via detailed experiments and simulation that this solution greatly enhances capacity and reliability of TCP connections straight away for certain flavors of WiFi a/b/g. We also find there are situations where connecting to multiple APs severely decreases throughput, and propose a series of client-side changes that make this solution robust across a wide range of scenarios."}
{"_id": "47daf9cc8fb15b3a4b7c3db4498d29a5a8b84c22", "title": "Modeling 2D Appearance Evolution for 3D Object Categorization", "text": "3D object categorization is a non-trivial task in computer vision encompassing many real-world applications. We pose the problem of categorizing 3D polygon meshes as learning appearance evolution from multi-view 2D images. Given a corpus of 3D polygon meshes, we first render the corresponding RGB and depth images from multiple viewpoints on a uniform sphere. Using rank pooling, we propose two methods to learn the appearance evolution of the 2D views. Firstly, we train view-invariant models based on a deep convolutional neural network (CNN) using the rendered RGB-D images and learn to rank the first fully connected layer activations and, therefore, capture the evolution of these extracted features. The parameters learned during this process are used as the 3D shape representations. In the second method, we learn the aggregation of the views from the outset by employing the ranking machine to the rendered RGB- D images directly, which produces aggregated 2D images which we term as ``3D shape images\". We then learn CNN models on this novel shape representation for both RGB and depth which encode salient geometrical structure of the polygon. Experiments on the ModelNet40 and ModelNet10 datasets show that the proposed method consistently outperforms existing state-of-the-art algorithms in 3D shape recognition."}
{"_id": "c0e977d56c2055804327273ed10a30d10a063805", "title": "VMCTune: A Load Balancing Scheme for Virtual Machine Cluster Using Dynamic Resource Allocation", "text": "This paper designs and implements a load balancing scheme based on dynamic resource allocation policy for virtual machine cluster, which are running under para-virtualization mode on a cluster of physical machines (PM) in shared storage architecture. It monitors the real-time resources utilization of VMs and PMs, including CPU, memory and network, then uses instant resource reallocation for virtual machines (VM) running on same PM to achieve local VMs\u2019 load balancing, while uses live migration of VMs among PMs to achieve global VMs\u2019 load balancing. It optimize the resource allocation of VMs to achieve global load balancing of virtual machine cluster. The resource utilization of physical machines will be improved as well. Compared to traditional load balancing schemes based on task scheduling, it is application independent and works seamless on VMs hosting different kinds of applications."}
{"_id": "6d54134e5cca20516b0802ee3f5e5695840c379f", "title": "SemEval 2015 Task 18: Broad-Coverage Semantic Dependency Parsing", "text": "Task 8 at SemEval 2014 defines BroadCoverage Semantic Dependency Parsing (SDP) as the problem of recovering sentence-internal predicate\u2013argument relationships for all content words, i.e. the semantic structure constituting the relational core of sentence meaning. In this task description, we position the problem in comparison to other sub-tasks in computational language analysis, introduce the semantic dependency target representations used, reflect on high-level commonalities and differences between these representations, and summarize the task setup, participating systems, and main results. 1 Background and Motivation Syntactic dependency parsing has seen great advances in the past decade, in part owing to relatively broad consensus on target representations, and in part reflecting the successful execution of a series of shared tasks at the annual Conference for Natural Language Learning (CoNLL; Buchholz & Marsi, 2006; Nivre et al., 2007; inter alios). From this very active research area accurate and efficient syntactic parsers have developed for a wide range of natural languages. However, the predominant data structure in dependency parsing to date are trees, in the formal sense that every node in the dependency graph is reachable from a distinguished root node by exactly one directed path. This work is licenced under a Creative Commons Attribution 4.0 International License. Page numbers and the proceedings footer are added by the organizers: http:// creativecommons.org/licenses/by/4.0/. Unfortunately, tree-oriented parsers are ill-suited for producing meaning representations, i.e. moving from the analysis of grammatical structure to sentence semantics. Even if syntactic parsing arguably can be limited to tree structures, this is not the case in semantic analysis, where a node will often be the argument of multiple predicates (i.e. have more than one incoming arc), and it will often be desirable to leave nodes corresponding to semantically vacuous word classes unattached (with no incoming arcs). Thus, Task 8 at SemEval 2014, Broad-Coverage Semantic Dependency Parsing (SDP 2014),1 seeks to stimulate the dependency parsing community to move towards more general graph processing, to thus enable a more direct analysis of Who did What to Whom? For English, there exist several independent annotations of sentence meaning over the venerable Wall Street Journal (WSJ) text of the Penn Treebank (PTB; Marcus et al., 1993). These resources constitute parallel semantic annotations over the same common text, but to date they have not been related to each other and, in fact, have hardly been applied for training and testing of datadriven parsers. In this task, we have used three different such target representations for bi-lexical semantic dependencies, as demonstrated in Figure 1 below for the WSJ sentence: (1) A similar technique is almost impossible to apply to other crops, such as cotton, soybeans, and rice. Semantically, technique arguably is dependent on the determiner (the quantificational locus), the modifier similar, and the predicate apply. Conversely, the predicative copula, infinitival to, and the vacSee http://alt.qcri.org/semeval2014/ task8/ for further technical details, information on how to obtain the data, and official results."}
{"_id": "83b427459fc5769b869a51f6f530388770ea83c1", "title": "A Survey of Personality Computing", "text": "Personality is a psychological construct aimed at explaining the wide variety of human behaviors in terms of a few, stable and measurable individual characteristics. In this respect, any technology involving understanding, prediction and synthesis of human behavior is likely to benefit from Personality Computing approaches, i.e. from technologies capable of dealing with human personality. This paper is a survey of such technologies and it aims at providing not only a solid knowledge base about the state-of-the-art, but also a conceptual model underlying the three main problems addressed in the literature, namely Automatic Personality Recognition (inference of the true personality of an individual from behavioral evidence), Automatic Personality Perception (inference of personality others attribute to an individual based on her observable behavior) and Automatic Personality Synthesis (generation of artificial personalities via embodied agents). Furthermore, the article highlights the issues still open in the field and identifies potential application areas."}
{"_id": "d305002d40ce9e17b02793eef7fe9f5d6f669dba", "title": "A method for business process reengineering based on enterprise ontology", "text": "Business Process Reengineering increases enterprise's chance to survive in competition among organizations , but failure rate among reengineering efforts is high, so new methods that decrease failure, are needed, in this paper a business process reengineering method is presented that uses Enterprise Ontology for modelling the current system and its goal is to improve analysing current system and decreasing failure rate of BPR, and cost and time of performing processes, In this method instead of just modelling processes, processes with their : interactions and relations, environment, staffs and customers will be modelled in enterprise ontology. Also in choosing processes for reengineering step, after choosing them, processes which, according to the enterprise ontology, has the most connection with the chosen ones, will also be chosen to reengineer, finally this method is implemented on a company and As-Is and To-Be processes are simulated and compared by ARIS tools, Report and Simulation Experiment"}
{"_id": "2a6dadbdb0853fa1d14f66c502a17bbc2641ebf7", "title": "Analyzing Big Data in Psychology: A Split/Analyze/Meta-Analyze Approach", "text": "Big data is a field that has traditionally been dominated by disciplines such as computer science and business, where mainly data-driven analyses have been performed. Psychology, a discipline in which a strong emphasis is placed on behavioral theories and empirical research, has the potential to contribute greatly to the big data movement. However, one challenge to psychologists-and probably the most crucial one-is that most researchers may not have the necessary programming and computational skills to analyze big data. In this study we argue that psychologists can also conduct big data research and that, rather than trying to acquire new programming and computational skills, they should focus on their strengths, such as performing psychometric analyses and testing theories using multivariate analyses to explain phenomena. We propose a split/analyze/meta-analyze approach that allows psychologists to easily analyze big data. Two real datasets are used to demonstrate the proposed procedures in R. A new research agenda related to the analysis of big data in psychology is outlined at the end of the study."}
{"_id": "5d222f7409d8fe405d5b8a8e221f2d2e8c979b10", "title": "Compositional Analysis Framework Using EDP Resource Models", "text": "Compositional schedulability analysis of hierarchical scheduling frameworks is a well studied problem, as it has wide-ranging applications in the embedded systems domain. Several techniques, such as periodic resource model based abstraction and composition, have been proposed for this problem. However these frameworks are sub-optimal because they incur bandwidth overhead. In this work, we introduce the explicit deadline periodic (EDP) resource model, and present compositional analysis techniques under EDF and DM. We show that these techniques are bandwidth optimal, in that they do not incur any bandwidth overhead in abstraction or composition. Hence, this framework is more efficient when compared to existing approaches."}
{"_id": "3596815d999024e7492002b47a9ae609d194fb53", "title": "Secure Overlay Cloud Storage with Access Control and Assured Deletion", "text": "We can now outsource data backups off-site to third-party cloud storage services so as to reduce data management costs. However, we must provide security guarantees for the outsourced data, which is now maintained by third parties. We design and implement FADE, a secure overlay cloud storage system that achieves fine-grained, policy-based access control and file assured deletion. It associates outsourced files with file access policies, and assuredly deletes files to make them unrecoverable to anyone upon revocations of file access policies. To achieve such security goals, FADE is built upon a set of cryptographic key operations that are self-maintained by a quorum of key managers that are independent of third-party clouds. In particular, FADE acts as an overlay system that works seamlessly atop today's cloud storage services. We implement a proof-of-concept prototype of FADE atop Amazon S3, one of today's cloud storage services. We conduct extensive empirical studies, and demonstrate that FADE provides security protection for outsourced data, while introducing only minimal performance and monetary cost overhead. Our work provides insights of how to incorporate value-added security features into today's cloud storage services."}
{"_id": "97c5acc1b3de2291bfad85731a22a15fb4a98b6a", "title": "Robust Model Predictive Control : A Survey", "text": "This paper gives an overview of robustness in Model Predictive Control (MPC). After reviewing the basic concepts of MPC, we survey the uncertainty descriptions considered in the MPC literature, and the techniques proposed for robust constraint handling, stability, and performance. The key concept of \u201cclosedloop prediction\u201d is discussed at length. The paper concludes with some comments on future research directions."}
{"_id": "493c8e17b413d91414bba56d493dff8c474a41e8", "title": "Object Discovery in Videos as Foreground Motion Clustering", "text": "We consider the problem of providing dense segmentation masks for object discovery in videos. We formulate the object discovery problem as foreground motion clustering, where the goal is to cluster foreground pixels in videos into different objects. We introduce a novel pixel-trajectory recurrent neural network that learns feature embeddings of foreground pixel trajectories linked in time. By clustering the pixel trajectories using the learned feature embeddings, our method establishes correspondences between foreground object masks across video frames. To demonstrate the effectiveness of our framework for object discovery, we conduct experiments on commonly used datasets for motion segmentation, where we achieve state-of-the-art performance."}
{"_id": "5985b3ce087440e7c9f40c82003371449926261c", "title": "Learning to detect and localize many objects from few examples", "text": "The current trend in object detection and localization is to learn predictions with high capacity deep neural networks trained on a very large amount of annotated data and using a high amount of processing power. In this work, we propose a new neural model which directly predicts bounding box coordinates. The particularity of our contribution lies in the local computations of predictions with a new form of local parameter sharing which keeps the overall amount of trainable parameters low. Key components of the model are spatial 2D-LSTM recurrent layers which convey contextual information between the regions of the image. We show that this model is more powerful than the state of the art in applications where training data is not as abundant as in the classical configuration of natural images and Imagenet/Pascal VOC tasks. We particularly target the detection of text in document images, but our method is not limited to this setting. The proposed model also facilitates the detection of many objects in a single image and can deal with inputs of variable sizes without resizing."}
{"_id": "37a18f314731cf2a5d829e9137a2aaa34a8a7caa", "title": "Brain activation modulated by sentence comprehension.", "text": "The comprehension of visually presented sentences produces brain activation that increases with the linguistic complexity of the sentence. The volume of neural tissue activated (number of voxels) during sentence comprehension was measured with echo-planar functional magnetic resonance imaging. The modulation of the volume of activation by sentence complexity was observed in a network of four areas: the classical left-hemisphere language areas (the left laterosuperior temporal cortex, or Wernicke's area, and the left inferior frontal gyrus, or Broca's area) and their homologous right-hemisphere areas, although the right areas had much smaller volumes of activation than did the left areas. These findings generally indicate that the amount of neural activity that a given cognitive process engenders is dependent on the computational demand that the task imposes."}
{"_id": "d23b77f084885d0cf1a579ab97ec808abe75291c", "title": "Power Waves and the Scattering Matrix", "text": "This paper discusses the physical meaning and properties of the waves defined by v%+ z%Ib ~ = V. \u2013 Z,*Ii a%= ,, 2u/Re Z,] 2<1 ReZtl where V, and 1, are the voltage at and the current flowing into the ith port of a junction and Z, is the impedance of the circuit connected to the ith port. The square of the magnitude of these waves is directly related to the exchangeable power of a source and the reflected power. For this reason, in this paper, they are called the power waves. For certain applications where the power relations are of main concern, the power waves are more suitable quantities than the conventional traveling waves. The lossless and reciprocal conditions as well as the frequency characteristics of the scattering matrix are presented. Then, the formula is given for a new scattering matrix when the 2,\u2019s are changed. As an application, the condition under which an amplifier can be matched simultaneously at both input and output ports as well as the condition for the network to be unconditionally stable are given in terms of the scattering matrix components. Also a brief comparison is made between the traveling waves and the power waves."}
{"_id": "58156d27f80ee450ba43651a780ebd829b70c363", "title": "SKEE: A lightweight Secure Kernel-level Execution Environment for ARM", "text": "Previous research on kernel monitoring and protection widely relies on higher privileged system components, such as hardware virtualization extensions, to isolate security tools from potential kernel attacks. These approaches increase both the maintenance effort and the code base size of privileged system components, which consequently increases the risk of having security vulnerabilities. SKEE, which stands for Secure Kernellevel Execution Environment, solves this fundamental problem. SKEE is a novel system that provides an isolated lightweight execution environment at the same privilege level of the kernel. SKEE is designed for commodity ARM platforms. Its main goal is to allow secure monitoring and protection of the kernel without active involvement of higher privileged software. SKEE provides a set of novel techniques to guarantee isolation. It creates a protected address space that is not accessible to the kernel, which is challenging to achieve when both the kernel and the isolated environment share the same privilege level. SKEE solves this challenge by preventing the kernel from managing its own memory translation tables. Hence, the kernel is forced to switch to SKEE to modify the system\u2019s memory layout. In turn, SKEE verifies that the requested modification does not compromise the isolation of the protected address space. Switching from the OS kernel to SKEE exclusively passes through a well-controlled switch gate. This switch gate is carefully designed so that its execution sequence is atomic and deterministic. These properties combined guarantee that a potentially compromised kernel cannot exploit the switching sequence to compromise the isolation. If the kernel attempts to violate these properties, it will only cause the system to fail without exposing the protected address space. SKEE exclusively controls access permissions of the entire OS memory. Hence, it prevents attacks that attempt to inject unverified code into the kernel. Moreover, it can be easily extended to intercept other system events in order to support various intrusion detection and integrity verification tools. This paper presents a SKEE prototype that runs on both 32-bit ARMv7 and 64-bit ARMv8 architectures. Performance evaluation results demonstrate that SKEE is a practical solution for real world systems. 1These authors contributed equally to this work"}
{"_id": "34ac2ef764e9abf7dd87e643e3f555d73f3f8515", "title": "Transparent Queries: Investigating Users' Mental Models of Search Engines", "text": "Typically, commercial Web search engines provide very little feedback to the user concerning how a particular query is processed and interpreted. Specifically, they apply key query transformations without the users knowledge. Although these transformations have a pronounced effect on query results, users have very few resources for recognizing their existence and understanding their practical importance. We conducted a user study to gain a better understanding of users knowledge of and reactions to the operation of several query transformations that web search engines automatically employ. Additionally, we developed and evaluated Transparent Queries, a software system designed to provide users with lightweight feedback about opaque query transformations. The results of the study suggest that users do indeed have difficulties understanding the operation of query transformations without additional assistance. Finally, although transparency is helpful and valuable, interfaces that allow direct control of query transformations might ultimately be more helpful for end-users."}
{"_id": "c083eb83293da33e91b50d9b488df9dd434760fb", "title": "Multivariate analysis of fMRI time series: classification and regression of brain responses using machine learning.", "text": "Machine learning and pattern recognition techniques are being increasingly employed in functional magnetic resonance imaging (fMRI) data analysis. By taking into account the full spatial pattern of brain activity measured simultaneously at many locations, these methods allow detecting subtle, non-strictly localized effects that may remain invisible to the conventional analysis with univariate statistical methods. In typical fMRI applications, pattern recognition algorithms \"learn\" a functional relationship between brain response patterns and a perceptual, cognitive or behavioral state of a subject expressed in terms of a label, which may assume discrete (classification) or continuous (regression) values. This learned functional relationship is then used to predict the unseen labels from a new data set (\"brain reading\"). In this article, we describe the mathematical foundations of machine learning applications in fMRI. We focus on two methods, support vector machines and relevance vector machines, which are respectively suited for the classification and regression of fMRI patterns. Furthermore, by means of several examples and applications, we illustrate and discuss the methodological challenges of using machine learning algorithms in the context of fMRI data analysis."}
{"_id": "f006a67c5b6b13a0862207e6db50d6d7d9d34363", "title": "Cluster Frameworks for Efficient Scheduling and Resource Allocation in Data Center Networks: A Survey", "text": "Data centers are widely used for big data analytics, which often involve data-parallel jobs, including query and web service. Meanwhile, cluster frameworks are rapidly developed for data-intensive applications in data center networks (DCNs). To promote the performance of these frameworks, many efforts have been paid to improve scheduling strategies and resource allocation algorithms. With the deployment of geo-distributed data centers and data-intensive applications, the optimization in DCNs regains pervasive attention in both industry and academia. Many solutions, such as the coflow-aware scheduling and speculative execution, have been proposed to meet various requirements. Therefore, we present a solid starting ground and comprehensive overview in this area to help readers quickly understand state-of-the-art technologies and research progress. We observe that algorithms in cluster frameworks are implemented with different guidelines and can be classified according to scheduling granularity, controller management, and prior-knowledge requirement. In addition, mechanisms for conquering crucial challenges in DCNs are discussed, including providing low latency and minimizing job completion time. Moreover, we analyze desirable properties of fault tolerance and scalability to illuminate the design principles of distributed systems. We hope that this paper will shed light on this promising land and serve as a guide for further researches."}
{"_id": "1fb08a10448699e1a0c1094c4e39fce819985ff9", "title": "SAF: Strategic Alignment Framework for Monitoring Organizations", "text": "Reaching a Strategic Alignment is a crucial aspect for any organization. The alignment can be achieved by controlling, through monitoring probes, the coherency of the Business Processes with the related Business Strategy. In this paper we present SAF, a powerful framework for those organizations that aim at a superior business performance and want to keep monitored the organization\u2019s alignment. SAF has been applied to a real case study and it has also been compared with GQM Strategy [2] and Process Performance Indicators Monitoring Model [16]."}
{"_id": "9b569f0bc1d5278dd384726d36f36017787aa8e1", "title": "Design and evaluation of a Twitter hashtag recommendation system", "text": "Twitter has evolved into a powerful communication and information sharing tool used by millions of people around the world to post what is happening now. A hashtag, a keyword prefixed with a hash symbol (#), is a feature in Twitter to organize tweets and facilitate effective search among a massive volume of data. In this paper, we propose an automatic hashtag recommendation system that helps users find new hashtags related to their interests.\n We propose the Hashtag Frequency-Inverse Hashtag Ubiquity (HF-IHU) ranking scheme, which is a variation of the well-known TF-IDF, that considers hashtag relevancy, as well as data sparseness. Experiments on a large Twitter data set demonstrate that our method successfully yields relevant hashtags for user's interest and that recommendations more stable and reliable than ranking tags based on tweet content similarity. Our results show that HF-IHU can achieve over 30% hashtag recall when asked to identify the top 10 relevant hashtags for a particular tweet."}
{"_id": "80d68c37245da3c2bcb76e7910782290f308c144", "title": "A Theory of Program Size Formally Identical to Information Theory", "text": "Hopefully members enjoyed the \"dividend\" mailing of the Proceedings of the Symposium on Theory of Computing. Since it went to all members, and a good number of them presumably have an interest in any errors, we will publish any errata sheets which we receive. Cook and Reckhow have sent errata for their paper , and they appear in this issue. Also included is a paper of somewhat greater length than we usually publish, but which is a followuD to an earlier article in an area that needs some encouragement. Generally, we like to keep contributions to five Da~es or less."}
{"_id": "b25c668707b98db90f2ea1d8939f9c35c0c7ace3", "title": "Modeling by shortest data description", "text": "It is known that the usual criteria resulting from such, should we dare to say, ad hoc principles as the least squares, the minimum prediction error, and the maximum likelihood principles, cannot be used to estimate the order nor other structure parameters. This very fact seems to suggest that these time-honored principles are successful only insofar as they happen to coincide with one or another facet of some more powerful and comprehensive principle. The shortest description length of individual recursively definable objects was studied by Kolmogorov[1] and others, and it gives rise to the algorithmic notion of entropy. In statistical estimation, however, the data always includes a random element, and the original notion of entropy of random variables turns out to be the appropriate one to use. In a peripheral way we also invoke a simple combinatorial entropy notion which, however, need not cause any con-"}
{"_id": "28d06ce92c97e18ebe8f61a9e2fe802f4d42d218", "title": "Information-Theoretic Limitations of Formal Systems", "text": "An attempt is made to apply information-theoretic computational complexity to meta-mathematics. The paper studies the number of bits of instructions that must be given to a computer for it to perform finite and infinite tasks, and also the time it takes the computer to perform these tasks. This is applied to measuring the difficulty of proving a given set of theorems, in terms of the number of bits of axioms that are assumed, and the size of the proofs needed to deduce the theorems from the axioms."}
{"_id": "4c7671550671deba9ec318d867522897f20e19ba", "title": "Logical reversibility of computation", "text": "The usual general-purpose computing automaton (e.g.. a Turing machine) is logically irreversibleits transition function lacks a single-valued inverse. Here i t is shown that such machines may he made logically reversible at every step, while retainillg their simplicity and their ability to do general computations. This result is of great physical interest because it makes plausible the existence of thermodynamically reversible computers which could perform useful computations at useful speed while dissipating considerably less than kT of energy per logical step. In the first stage of its computation the logically reversible automaton parallels the corresponding irreversible automaton, except that it saves all intermediate results, thereby avoiding the irreversible operation of erasure. The second stage consists of printing out the desired output. The third stage then reversibly disposes of all the undesired intermediate results by retracing the steps of the first stage in backward order (a process which is only possible because the first stage has been carried out reversibly), thereby restoring the machine (except for the now-written output tape) to its original condition. The final machine configuration thus contains the desired output and a reconstructed copy o f the input, but no other undesired data. The foregoing results are demonstrated explicitly using a type of three-tape Turing machine. The biosynthesis of messenger RNA is discussed as a physical example of reversible computation."}
{"_id": "f743c207f9a4b655ebcdd5fcaacd19c3c0b2029f", "title": "Designing for Labour: Uber and the On-Demand Mobile Workforce", "text": "Apps allowing passengers to hail and pay for taxi service on their phone? such as Uber and Lyft-have affected the livelihood of thousands of workers worldwide. In this paper we draw on interviews with traditional taxi drivers, rideshare drivers and passengers in London and San Francisco to understand how \"ride-sharing\" transforms the taxi business. With Uber, the app not only manages the allocation of work, but is directly involved in \"labour issues\": changing the labour conditions of the work itself. We document how Uber driving demands new skills such as emotional labour, while increasing worker flexibility. We discuss how the design of new technology is also about creating new labour opportunities -- jobs -- and how we might think about our responsibilities in designing these labour relations."}
{"_id": "590628a9584e500f3e7f349ba7e2046c8c273fcf", "title": "Generating Natural Questions About an Image", "text": "There has been an explosion of work in the vision & language community during the past few years from image captioning to video transcription, and answering questions about images. These tasks focus on literal descriptions of the image. To move beyond the literal, we choose to explore how questions about an image often address abstract events that the objects evoke. In this paper, we introduce the novel task of \u2018Visual Question Generation\u2019 (VQG), where the system is tasked with asking a natural and engaging question when shown an image. We provide three datasets which cover a variety of images from object-centric to event-centric, providing different and more abstract training data than the stateof-the-art captioning systems have used thus far. We train and test several generative and retrieval models to tackle the task of VQG. Evaluation results show that while such models ask reasonable questions given various images, there is still a wide gap with human performance. Our proposed task offers a new challenge to the community which we hope can spur further interest in exploring deeper connections between vision & language."}
{"_id": "ca94397722457f47c051490f559718cd7af0eddb", "title": "Detection of Unknown Anomalies in Streaming Videos with Generative Energy-based Boltzmann Models", "text": "Abnormal event detection is one of the important objectives in research and practical applications of video surveillance. However, there are still three challenging problems for most anomaly detection systems in practical setting: limited labeled data, ambiguous definition of \u201cabnormal\u201d and expensive feature engineering steps. This paper introduces a unified detection framework to handle these challenges using energy-based models, which are powerful tools for unsupervised representation learning. Our proposed models are firstly trained on unlabeled raw pixels of image frames from an input video rather than hand-crafted visual features; and then identify the locations of abnormal objects based on the errors between the input video and its reconstruction produced by the models. To handle video stream, we develop an online version of our framework, wherein the model parameters are updated incrementally with the image frames arriving on the fly. Our experiments show that our detectors, using Restricted Boltzmann Machines (RBMs) and Deep Boltzmann Machines (DBMs) as core modules, achieve superior anomaly detection performance to unsupervised baselines and obtain accuracy comparable with the state-of-the-art approaches when evaluating at the pixel-level. More importantly, we discover that our system trained with DBMs is able to simultaneously perform scene clustering and scene reconstruction. This capacity not only distinguishes our method from other existing detectors but also offers a unique tool to investigate and understand how the model works."}
{"_id": "79aa2c4f22d2b878155aba14cd723e8985b8182a", "title": "Of animals and objects: men's implicit dehumanization of women and likelihood of sexual aggression.", "text": "Although dehumanizing women and male sexual aggression are theoretically aligned, the present research provides the first direct support for this assumption, using the Implicit Association Test to assess two forms of female dehumanization: animalization and objectification. In Study 1, men who automatically associated women more than men with primitive constructs (e.g., animals, instinct, nature) were more willing to rape and sexually harass women, and to report negative attitudes toward female rape victims. In Study 2, men who automatically associated women with animals (e.g., animals, paw, snout) more than with humans scored higher on a rape-behavioral analogue, as well as rape proclivity. Automatically objectifying women by associating them with objects, tools, and things was also positively correlated with men's rape proclivity. In concert, the research demonstrates that men who implicitly dehumanize women (as either animals or objects) are also likely to sexually victimize them."}
{"_id": "21acea7e14ab225aeaea61314187d4d8e9a76e91", "title": "Evaluation of text data mining for database curation: lessons learned from the KDD Challenge Cup", "text": "MOTIVATION\nThe biological literature is a major repository of knowledge. Many biological databases draw much of their content from a careful curation of this literature. However, as the volume of literature increases, the burden of curation increases. Text mining may provide useful tools to assist in the curation process. To date, the lack of standards has made it impossible to determine whether text mining techniques are sufficiently mature to be useful.\n\n\nRESULTS\nWe report on a Challenge Evaluation task that we created for the Knowledge Discovery and Data Mining (KDD) Challenge Cup. We provided a training corpus of 862 articles consisting of journal articles curated in FlyBase, along with the associated lists of genes and gene products, as well as the relevant data fields from FlyBase. For the test, we provided a corpus of 213 new ('blind') articles; the 18 participating groups provided systems that flagged articles for curation, based on whether the article contained experimental evidence for gene expression products. We report on the evaluation results and describe the techniques used by the top performing groups."}
{"_id": "4f4eefe26444175496c828e6da8fbc5c7ee2f40e", "title": "New resampling algorithm for particle filter localization for mobile robot with 3 ultrasonic sonar sensor", "text": "This paper present a particle filter also known as Monte Carlo Localization (MCL) to solve the localization problem presented before. A new resampling mechanism is proposed. This new resampling mechanism enables the particle filter to converge quicker and more robust to kidnaping problem. This particle filter is simulated in MATLAB and also experimented physically using a simple autonomous mobile robot built with Lego Mindstorms NXT with 3 ultrasonic sonar and RWTH Mindstorms NXT Toolbox for MATLAB to connect the robot to MATLAB. The particle filter with the new resampling algorithm can perform very well in the physical experiments."}
{"_id": "698902ce1a836d353d4ff955c826095e28506e05", "title": "Using social network analysis to prevent money laundering", "text": ""}
{"_id": "67d9457769c42c7ae26046d38b1fdecba2531a6b", "title": "Abnormal image detection in endoscopy videos using a filter bank and local binary patterns", "text": "Finding mucosal abnormalities (e.g., erythema, blood, ulcer, erosion, and polyp) is one of the most essential tasks during endoscopy video review. Since these abnormalities typically appear in a small number of frames (around 5% of the total frame number), automated detection of frames with an abnormality can save physician's time significantly. In this paper, we propose a new multi-texture analysis method that effectively discerns images showing mucosal abnormalities from the ones without any abnormality since most abnormalities in endoscopy images have textures that are clearly distinguishable from normal textures using an advanced image texture analysis method. The method uses a \"texton histogram\" of an image block as features. The histogram captures the distribution of different \"textons\" representing various textures in an endoscopy image. The textons are representative response vectors of an application of a combination of Leung and Malik (LM) filter bank (i.e., a set of image filters) and a set of Local Binary Patterns on the image. Our experimental results indicate that the proposed method achieves 92% recall and 91.8% specificity on wireless capsule endoscopy (WCE) images and 91% recall and 90.8% specificity on colonoscopy images."}
{"_id": "c9e7ace35c3b0ccb005efb01607e747bed1a7dc2", "title": "Examining the Bilingual Advantage on Conflict Resolution Tasks: A Meta-Analysis", "text": "A great deal of research has compared monolingual and bilinguals on conflict resolution tasks, with inconsistent findings: Some studies reveal a bilingual advantage on global RTs, some reveal a bilingual advantage on interference cost, and some show no advantage. We report a meta-analysis of 73 comparisons (N = 5538), with estimates of global RTs and interference cost for each study. Results revealed a moderately significant effect size that was not moderated by type of cost (global RT or interference cost) or task. Age interacted with type of cost, showing a pattern difficult to reconcile with theories of bilingualism and executive control. Additionally there was a significant main effect of lab, which might be due to sociolinguistic differences in samples, data treatment and methodology, or Hawthorne effects."}
{"_id": "7aea23d6e14d2cf70c8bcc286a95b520d1bc6437", "title": "Proofs of Space-Time and Rational Proofs of Storage", "text": "We introduce a new cryptographic primitive: Proofs of SpaceTime (PoSTs) and construct a practical protocol for implementing these proofs. A PoST allows a prover to convince a verifier that she spent a \u201cspacetime\u201d resource (storing data\u2014space\u2014over a period of time). Formally, we define the PoST resource as a linear tradeoff between CPU work and space-time (under reasonable cost assumptions, a rational user will prefer to use the lower-cost space-time resource over CPU work). Compared to a proof-of-work, a PoST requires less energy use, as the \u201cdifficulty\u201d can be increased by extending the time period over which data is stored without increasing computation costs. Our definition is very similar to \u201cProofs of Space\u201d [ePrint 2013/796, 2013/805] but, unlike the previous definitions, takes into account amortization attacks and storage duration. Moreover, our protocol uses a very different (and simpler) technique, making use of the fact that we explicitly allow a space-time tradeoff."}
{"_id": "33da45838d0b6c082cc71e603fd802bac4d56713", "title": "Fast concurrent queues for x86 processors", "text": "Conventional wisdom in designing concurrent data structures is to use the most powerful synchronization primitive, namely compare-and-swap (CAS), and to avoid contended hot spots. In building concurrent FIFO queues, this reasoning has led researchers to propose combining-based concurrent queues.\n This paper takes a different approach, showing how to rely on fetch-and-add (F&A), a less powerful primitive that is available on x86 processors, to construct a nonblocking (lock-free) linearizable concurrent FIFO queue which, despite the F&A being a contended hot spot, outperforms combining-based implementations by 1.5x to 2.5x in all concurrency levels on an x86 server with four multicore processors, in both single-processor and multi-processor executions."}
{"_id": "5c643a6e649230da504d5c9344dab7e4885b7ff5", "title": "Concurrent Constraint Programming", "text": "This paper presents a new and very rich class of (concurrent) programming languages, based on the notion of computing with partial information, and the concomitant notions of consistency and entailment.1 In this framework, computation emerges from the interaction of concurrently executing agents that communicate by placing, checking and instantiating constraints on shared variables. Such a view of computation is interesting in the context of programming languages because of the ability to represent and manipulate partial information about the domain of discourse, in the context of concurrency because of the use of constraints for communication and control, and in the context of AI because of the availability of simple yet powerful mechanisms for controlling inference, and the promise that very rich representational/programming languages, sharing the same set of abstract properties, may be possible.\nTo reflect this view of computation, [Sar89] develops the cc family of languages. We present here one member of the family, cc(\u2193, \u2192) (pronounced \u201ccc with Ask and Choose\u201d) which provides the basic operations of blocking Ask and atomic Tell and an algebra of behaviors closed under prefixing, indeterministic choice, interleaving, and hiding, and provides a mutual recursion operator. cc(\u2193, \u2192) is (intentionally!) very similar to Milner's CCS, but for the radically different underlying concept of communication, which, in fact, provides a general\u2014and elegant\u2014alternative approach to \u201cvalue-passing\u201d in CCS. At the same time it adequately captures the framework of committed choice concurrent logic programming languages. We present the rules of behavior for cc agents, motivate a notion of \u201cvisible action\u201d for them, and develop the notion of c-trees and reactive congruence analogous to Milner's synchronization trees and observational congruence. We also present an equational characterization of reactive congruence for Finitary cc(\u2193, \u2192)."}
{"_id": "15f16252af84a630f1f9442d4e0367b6fba089cf", "title": "Deep code comment generation", "text": "During software maintenance, code comments help developers comprehend programs and reduce additional time spent on reading and navigating source code. Unfortunately, these comments are often mismatched, missing or outdated in the software projects. Developers have to infer the functionality from the source code. This paper proposes a new approach named DeepCom to automatically generate code comments for Java methods. The generated comments aim to help developers understand the functionality of Java methods. DeepCom applies Natural Language Processing (NLP) techniques to learn from a large code corpus and generates comments from learned features. We use a deep neural network that analyzes structural information of Java methods for better comments generation. We conduct experiments on a large-scale Java corpus built from 9,714 open source projects from GitHub. We evaluate the experimental results on a machine translation metric. Experimental results demonstrate that our method DeepCom outperforms the state-of-the-art by a substantial margin."}
{"_id": "05b073c44188946aeb9c410c1447262cbdf77b6d", "title": "SecureML: A System for Scalable Privacy-Preserving Machine Learning", "text": "Machine learning is widely used in practice to produce predictive models for applications such as image processing, speech and text recognition. These models are more accurate when trained on large amount of data collected from different sources. However, the massive data collection raises privacy concerns. In this paper, we present new and efficient protocols for privacy preserving machine learning for linear regression, logistic regression and neural network training using the stochastic gradient descent method. Our protocols fall in the two-server model where data owners distribute their private data among two non-colluding servers who train various models on the joint data using secure two-party computation (2PC). We develop new techniques to support secure arithmetic operations on shared decimal numbers, and propose MPC-friendly alternatives to non-linear functions such as sigmoid and softmax that are superior to prior work. We implement our system in C++. Our experiments validate that our protocols are several orders of magnitude faster than the state of the art implementations for privacy preserving linear and logistic regressions, and scale to millions of data samples with thousands of features. We also implement the first privacy preserving system for training neural networks."}
{"_id": "660a150c15f36542decb1d9694dfffc0bbe0ed12", "title": "Big Data and the Internet of Things", "text": "Advances in sensing and computing capabilities are making it possible to embed increasing computing power in small devices. This has enabled the sensing devices not just to passively capture data at very high resolution but also to take sophisticated actions in response. Combined with advances in communication, this is resulting in an ecosystem of highly interconnected devices referred to as the Internet of Things IoT. In conjunction, the advances in machine learning have allowed building models on this ever increasing amounts of data. Consequently, devices all the way from heavy assets such as aircraft engines to wearables such as health monitors can all now not only generate massive amounts of data but can draw back on aggregate analytics to \u201cimprove\u201d their performance over time. Big data analytics has been identified as a key enabler for the IoT. In this chapter, we discuss various avenues of the IoT where big data analytics either is already making a significant impact or is on the cusp of doing so. We also discuss social implications and areas of concern."}
{"_id": "62986466fb60d1139cd8044651583627833e4789", "title": "Sensorless Direct Field-Oriented Control of Three-Phase Induction Motor Drives for Low-Cost Applications", "text": "A sensorless direct rotor field-oriented control (SDRFOC) scheme of three-phase induction motors for low-cost applications is presented in this paper. The SDRFOC algorithm is based on a sensorless closed-loop rotor flux observer whose main advantages are simplicity and robustness to motor parameter detuning. The whole algorithm has been developed and implemented on a low-cost fixed-point digital signal processor controller. Experimental results are presented for a 0.5-kW induction motor drive for a primary vacuum pump used in industry applications."}
{"_id": "34c658d37221dc4bdff040a0f025230adf6c26a0", "title": "Entity Extraction: From Unstructured Text to DBpedia RDF triples", "text": "In this paper, we describe an end-to-end system that automatically extracts RDF triples describing entity relations and properties from unstructured text. This system is based on a pipeline of text processing modules that includes a semantic parser and a coreference solver. By using coreference chains, we group entity actions and properties described in different sentences and convert them into entity triples. We applied our system to over 114,000 Wikipedia articles and we could extract more than 1,000,000 triples. Using an ontology-mapping system that we bootstrapped using existing DBpedia triples, we mapped 189,000 extracted triples onto the DBpedia namespace. These extracted entities are available online in the N-Triple format1."}
{"_id": "da09bc42bbf5421b119abea92716186a1ca3f02f", "title": "Fuzzy Identity Based Encryption", "text": "We introduce a new type of Identity-Based Encryption (IBE) scheme that we call Fuzzy Identity-Based Encryption. In Fuzzy IBE we view an identity as set of descriptive attributes. A Fuzzy IBE scheme allows for a private key for an identity, \u03c9, to decrypt a ciphertext encrypted with an identity, \u03c9\u2032, if and only if the identities \u03c9 and \u03c9\u2032 are close to each other as measured by the \u201cset overlap\u201d distance metric. A Fuzzy IBE scheme can be applied to enable encryption using biometric inputs as identities; the error-tolerance property of a Fuzzy IBE scheme is precisely what allows for the use of biometric identities, which inherently will have some noise each time they are sampled. Additionally, we show that Fuzzy-IBE can be used for a type of application that we term \u201cattribute-based encryption\u201d. In this paper we present two constructions of Fuzzy IBE schemes. Our constructions can be viewed as an Identity-Based Encryption of a message under several attributes that compose a (fuzzy) identity. Our IBE schemes are both error-tolerant and secure against collusion attacks. Additionally, our basic construction does not use random oracles. We prove the security of our schemes under the Selective-ID security model."}
{"_id": "706606976857357247e2c1872abbeb9bfe022065", "title": "A review of ontologies for describing scholarly and scientific documents", "text": "Several ontologies have been created in the last years for the semantic annotation of scholarly publications and scientific documents. This rich variety of ontologies makes it difficult for those willing to annotate their documents to know which ones they should select for such activity. This paper presents a classification and description of these state-of-the-art ontologies, together with the rationale behind the different approaches. Finally, we provide an example of how some of these ontologies can be used for the annotation of a scientific document."}
{"_id": "b3baba6c34a2946b999cc0f6be6bb503d303073e", "title": "ROC curve equivalence using the Kolmogorov-Smirnov test", "text": "This paper describes a simple, non-parametric and generic test of the equivalence of Receiver Operating Characteristic (ROC) curves based on a modified Kolmogorov-Smirnov (KS) test. The test is described in relation to the commonly used techniques such as the Area Under the ROC curve (AUC) and the Neyman-Pearson method. We first review how the KS test is used to test the null hypotheses that the class labels predicted by a classifier are no better than random. We then propose an interval mapping technique that allows us to use two KS tests to test the null hypothesis that two classifiers have ROC curves that are equivalent. We demonstrate that this test discriminates different ROC curves both when one curve dominates another and when the curves cross and so are not discriminated by AUC. The interval mapping technique is then used to demonstrate that, although AUC has its limitations, it can be a model-independent and coherent measure of classifier performance."}
{"_id": "6ff01336245875b38153f19f7d7a04cad5185b62", "title": "Using virtual data effects to stabilize pilot run neural network modeling", "text": "Executing pilot runs before mass production is a common strategy in manufacturing systems. Using the limited data obtained from pilot runs to shorten the lead time to predict future production is this worthy of study. Since a manufacturing system is usually comprehensive, Artificial Neural Networks are widely utilized to extract management knowledge from acquired data for its non-linear properties; however, getting as large a number of training data as needed is the fundamental assumption. This is often not achievable for pilot runs because there are few data obtained during trial stages and theoretically this means that the knowledge obtained is fragile. The purpose of this research is to utilize virtual sample generation techniques and the corresponding data effects to stabilize the prediction model. This research derives from using extreme value theory to estimate the domain range of a small data set, which is used for virtual sample production to fill the information gaps of sparse data. Further, for the virtual samples, a fuzzy-based data effect calculation system is developed to determine the comprehensive importance of each datum. The results of this research indicate that the prediction error rate can be significantly decreased by applying the proposed method to a very small data set."}
{"_id": "c014c1bc895a097f0eeea09f224e0f31284f20c5", "title": "Faster Transit Routing by Hyper Partitioning", "text": "We present a preprocessing-based acceleration technique for computing bi-criteria Pareto-optimal journeys in public transit networks, based on the well-known RAPTOR algorithm [16]. Our key idea is to first partition a hypergraph into cells, in which vertices correspond to routes (e.g., bus lines) and hyperedges to stops, and to then mark routes sufficient for optimal travel across cells. The query can then be restricted to marked routes and those in the source and target cells. This results in a practical approach, suitable for networks that are too large to be efficiently handled by the basic RAPTOR algorithm. 1998 ACM Subject Classification G.2.2 Graph Theory"}
{"_id": "26cb01d2c8713508cb418681d781865a6c2f1106", "title": "Multiagent systems and agent-based modeling and simulation", "text": "Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author. Copyright is held by the owner/author(s). GECCO \u201917 Companion, July 15-19, 2017, Berlin, Germany ACM 978-1-4503-4939-0/17/07. http://dx.doi.org/10.1145/3067695.3067723 INSTITUTO DE INFORM\u00c1TICA UFRGS Multiagent Systems and Agent-based Modeling and Simulation"}
{"_id": "38d79197f3fe57f027186682a9ddb32e249921c4", "title": "Development of small-sized pixel structures for high-resolution CMOS image sensors", "text": "We present our studies on small-sized pixel structures for high-resolution CMOS image sensors. To minimize the number of pixel components, single-transistor pixel and 2T pixel architecture were proposed. To deal with crosstalk between pixels, MOS capacitor deep trench isolation (CDTI) was integrated. CDTI-integrated pixel allows better achievements in dark current and full-well capacity in comparison with the configuration integrating oxide-filled deep trench isolation (DTI). To improve quantum efficiency (QE) and minimize optical crosstalk, back-side illumination (BSI) was developed. Also, vertical photodiode was proposed to maximize its charge-collection region. To take advantages of these structures/technologies, we developed two pixel options (P-type and N-type) combining CDTI or DTI, BSI and vertical photodiode. All the presented pixel structures were designed in 1.4\u00b5m-pitch sensor arrays, fabricated and tested."}
{"_id": "4b9e993d7f4846d4750762fc0dba5b0ddadd709b", "title": "A Context-aware Time Model for Web Search", "text": "In web search, information about times between user actions has been shown to be a good indicator of users' satisfaction with the search results. Existing work uses the mean values of the observed times, or fits probability distributions to the observed times. This implies a context-independence assumption that the time elapsed between a pair of user actions does not depend on the context, in which the first action takes place. We validate this assumption using logs of a commercial web search engine and discover that it does not always hold. For between 37% to 80% of query-result pairs, depending on the number of observations, the distributions of click dwell times have statistically significant differences in query sessions for which a given result (i) is the first item to be clicked and (ii) is not the first. To account for this context bias effect, we propose a context-aware time model (CATM). The CATM allows us (i) to predict times between user actions in contexts, in which these actions were not observed, and (ii) to compute context-independent estimates of the times by predicting them in predefined contexts. Our experimental results show that the CATM provides better means than existing methods to predict and interpret times between user actions."}
{"_id": "7891d3a91b86190df1756b08e69f879108579f40", "title": "UIMA Ruta: Rapid development of rule-based information extraction applications", "text": "Rule-based information extraction is an important approach for processing the increasingly available amount of unstructured data. The manual creation of rule-based applications is a time-consuming and tedious task, which requires qualified knowledge engineers. The costs of this process can be reduced by providing a suitable rule language and extensive tooling support. This paper presents UIMA Ruta, a tool for rule-based information extraction and text processing applications. The system was designed with focus on rapid development. The rule language and its matching paradigm facilitate the quick specification of comprehensible extraction knowledge. They support a compact representation while still providing a high level of expressiveness. These advantages are supplemented by the development environment UIMA Ruta Workbench. It provides, in addition to extensive editing support, essential assistance for explanation of rule execution, introspection, automatic validation, and rule induction. UIMA Ruta is a useful tool for academia and industry due to its open source license. We compare UIMA Ruta to related rule-based systems especially concerning the compactness of the rule representation, the expressiveness, and the provided tooling support. The competitiveness of the runtime performance is shown in relation to a popular and freelyavailable system. A selection of case studies implemented with UIMA Ruta illustrates the usefulness of the system in real-world scenarios."}
{"_id": "d62332f6139a5db1f90a16bc9751657c02b43611", "title": "Arabic Diacritization with Gated Recurrent Unit", "text": "Arabic and similar languages require the use of diacritics in order to determine the necessary parameters to pronounce and identify every part of the speech correctly. Therefore, when it comes to perform Natural Language Processing (NLP) over Arabic, diacritization is a crucial step. In this paper we use a gated recurrent unit network as a language-independent framework for Arabic diacritization. The end-to-end approach allows to use exclusively vocalized text to train the system without using external resources. Evaluation is performed versus the state-of-the-art literature results. We demonstrate that we achieve state-of-the-art results and enhance the learning process by scoring better performance in the training and testing timing."}
{"_id": "090f4b588ba58c36a21eddd67ea33d59614480c1", "title": "Syntactic Simplification and Text Cohesion", "text": "Syntactic simplification is the process of reducing the grammatical complexity of a text, while retaining its information content and meaning. The aim of syntactic simplification is to make text easier to comprehend for human readers, or process by programs. In this thesis, I describe how syntactic simplification can be achieved using shallow robust analysis, a small set of hand-crafted simplification rules and a detailed analysis of the discourse-level aspects of syntactically rewriting text. I offer a treatment of relative clauses, apposition, coordination and subordination. I present novel techniques for relative clause and appositive attachment. I argue that these attachment decisions are not purely syntactic. My approaches rely on a shallow discourse model and on animacy information obtained from a lexical knowledge base. I also show how clause and appositive boundaries can be determined reliably using a decision procedure based on local context, represented by part-of-speech tags and noun chunks. I then formalise the interactions that take place between syntax and discourse during the simplification process. This is important because the usefulness of syntactic simplification in making a text accessible to a wider audience can be undermined if the rewritten text lacks cohesion. I describe how various generation issues like sentence ordering, cue-word selection, referring-expression generation, determiner choice and pronominal use can be resolved so as to preserve conjunctive and anaphoric cohesive-relations during syntactic simplification. In order to perform syntactic simplification, I have had to address various natural language processing problems, including clause and appositive identification and attachment, pronoun resolution and referring-expression generation. I evaluate my approaches to solving each problem individually, and also present a holistic evaluation of my syntactic simplification system."}
{"_id": "3e21cba6742663d6f5a2326fb5b749927d2d181b", "title": "CellNet: Network Biology Applied to Stem Cell Engineering", "text": "Somatic cell reprogramming, directed differentiation of pluripotent stem cells, and direct conversions between differentiated cell lineages represent powerful\u00a0approaches to engineer cells for research and regenerative medicine. We have developed CellNet, a network biology platform that more accurately assesses the fidelity of cellular engineering than existing methodologies and generates hypotheses for improving cell derivations. Analyzing expression data from 56 published reports, we found that cells derived via directed differentiation more closely resemble their in\u00a0vivo counterparts than products of direct conversion, as reflected by the establishment of target cell-type gene regulatory networks (GRNs). Furthermore, we discovered that directly converted cells fail to adequately silence expression programs of the starting population and that the establishment of unintended GRNs is common to virtually every cellular engineering paradigm. CellNet provides a platform for quantifying how closely engineered cell populations resemble their target cell type and a rational strategy to guide enhanced cellular engineering."}
{"_id": "52be49a9fb44ea9a607fb275b496908547455597", "title": "Physiological Computing: Interfacing with the Human Nervous System", "text": "This chapter describes the physiological computing paradigm where electrophysiological changes from the human nervous system are used to interface with a computer system in real time. Physiological computing systems are categorized into five categories: muscle interfaces, brain-computer interfaces, biofeedback, biocybernetic adaptation and ambulatory monitoring. The differences and similarities of each system are described. The chapter also discusses a number of fundamental issues for the design of physiological computing system, these include: the inference between physiology and behaviour, how the system represents behaviour, the concept of the biocybernetic control loop and ethical issues."}
{"_id": "38e4adf4f0884f8d296c3c822c1b74dc9035d8bd", "title": "Parental scaffolding as a bootstrapping mechanism for learning grasp affordances and imitation skills", "text": "Parental scaffolding is an important mechanism utilized by infants during their development. Infants, for example, pay stronger attention to the features of objects highlighted by parents and learn the way of manipulating an object while being supported by parents. Parents are known to make modifications in infant-directed actions, i.e. use \u201cmotionese\u201d. Motionese is characterized by higher range and simplicity of motion, more pauses between motion segments, higher repetitiveness of demonstration, and more frequent social signals to an infant. In this paper, we extend our previously developed affordances framework to enable the robot to benefit from parental scaffolding and motionese. First, we present our results on how parental scaffolding can be used to guide the robot and modify robot\u2019s crude action execution to speed up learning of complex actions such as grasping. For this purpose, we realize the interactive nature of a human caregiver-infant skill transfer scenario on the robot. During reach and grasp attempts, the movement of the robot hand is modified by the human caregiver\u2019s physical interaction to enable successful grasping. Next, we discuss how parental scaffolding can be used in speeding up imitation learning. The system describes how our robot, by using previously learned affordance prediction mechanisms, can go beyond simple goal-level imitation and become a better imitator using infant-directed modifications of parents."}
{"_id": "a9a7168b5b45fcf63e7f8904f68f6a90f8062443", "title": "Playing online games against computer- vs. human-controlled opponents: Effects on presence, flow, and enjoyment", "text": ""}
{"_id": "cd797cd4d225869f2eb17a5c210e04c18345132e", "title": "Sentence Extraction by tf/idf and Position Weighting from Newspaper Articles", "text": "Recently lots of researchers are focusing their interests on the development of summarization systems from large volume sources combined with knowledge acquisition techniques such as infor mation extraction text mining or information re trieval Some of these techniques are implemented according to the speci c knowledge in the domain or the genre from the source document In this pa per we will discuss Japanese Newspaper Domain Knowledge in order to make a summary My sys tem is implemented with the sentence extraction approach and weighting strategy to mine from a number of documents"}
{"_id": "1a742adcc51a8a75f170131a978c7f8c5e1e3f76", "title": "N ov 2 01 8 Adaptive Task Allocation for Mobile Edge Learning", "text": "This paper aims to establish a new optimization paradigm for implementing realistic distributed learning algorithms, with performance guarantees, on wireless edge nodes with heterogeneous computing and communication capacities. We will refer to this new paradigm as \u201cMobile Edge Learning (MEL)\u201d. The problem of dynamic task allocation for MEL is considered in this paper with the aim to maximize the learning accuracy, while guaranteeing that the total times of data distribution/aggregation over heterogeneous channels, and local computing iterations at the heterogeneous nodes, are bounded by a preset duration. The problem is first formulated as a quadratically-constrained integer linear problem. Being an NP-hard problem, the paper relaxes it into a non-convex problem over real variables. We thus proposed two solutions based on deriving analytical upper bounds of the optimal solution of this relaxed problem using Lagrangian analysis and KKT conditions, and the use of suggest-and-improve starting from equal batch allocation, respectively. The merits of these proposed solutions are exhibited by comparing their performances to both numerical approaches and the equal task allocation approach."}
{"_id": "687034f880786612f84bb44a6ba342fdf364a5b3", "title": "Distributed energy balanced routing for wireless sensor networks", "text": "Most routing algorithms for sensor networks focus on finding energy efficient paths to prolong the lifetime of sensor networks. As a result, the power of sensors on efficient paths depletes quickly, and consequently sensor networks become incapable of monitoring events from some parts of their target areas. In many sensor network applications, the events that must be tracked occur at random locations and have non-deterministic generation patterns. Therefore, ideally, routing algorithms should consider not only energy efficiency, but also the amount of energy remaining in each sensor, thus avoiding non-functioning sensors due to early power depletion. This paper introduces a new metric, energy cost, devised to consider a balance of sensors\u2019 remaining energies, as well as energy efficiency. This metric gives rise to the design of the distributed energy balanced routing (DEBR) algorithm devised to balance the data traffic of sensor networks in a decentralized manner and consequently prolong the lifetime of the networks. DEBR is scalable in the number of sensors and also robust to the variations in the dynamics of event generation. We demonstrate the effectiveness of the proposed algorithm by comparing three existing routing algorithms: direct communication approach, minimum transmission energy, and self-organized routing and find that energy balance should be considered to extend lifetime of sensor network and increase robustness of sensor network for diverse event generation patterns. 2009 Elsevier Ltd. All rights reserved."}
{"_id": "6d7c6c8828c7ac91cc74a79fdc06b5783102a784", "title": "Microwave vision: From RF safety to medical imaging", "text": "This article gives an overview of the activities of the company Microwave Vision, formerly Satimo, oriented to health-related applications. The existing products in terms of Specific Absorption Rate (SAR) measurement and RF safety are described in detail. The progress of the development of a new imaging modality for breast pathology detection using microwaves is shortly reported."}
{"_id": "0ceed97c5761f70d8016453897e778a172691ad4", "title": "A Theory and Tools for Applying Sandboxes Effectively", "text": "It is more expensive and time consuming to build modern software without extensive supply chains. Supply chains decrease these development risks, but typically at the cost of increased security risk. In particular, it is often difficult to understand or verify what a software component delivered by a third party does or could do. Such a component could contain unwanted behaviors, vulnerabilities, or malicious code, many of which become incorporated in applications utilizing the component. Sandboxes provide relief by encapsulating a component and imposing a security policy on it. This limits the operations the component can perform without as much need to trust or verify the component. Instead, a component user must trust or verify the relatively simple sandbox. Given this appealing prospect, researchers have spent the last few decades developing new sandboxing techniques and sandboxes. However, while sandboxes have been adopted in practice, they are not as pervasive as they could be. Why are sandboxes not achieving ubiquity at the same rate as extensive supply chains? This thesis advances our understanding of and overcomes some barriers to sandbox adoption. We systematically analyze ten years (2004 \u2013 2014) of sandboxing research from top-tier security and systems conferences. We uncover two barriers: (1) sandboxes are often validated using relatively subjective techniques and (2) usability for sandbox deployers is often ignored by the studied community. We then focus on the Java sandbox to empirically study its use within the open source community. We find features in the sandbox that benign applications do not use, which have promoted a thriving exploit landscape. We develop run time monitors for the Java Virtual Machine (JVM) to turn off these features, stopping all known sandbox escaping JVM exploits without breaking benign applications. Furthermore, we find that the sandbox contains a high degree of complexity benign applications need that hampers sandbox use. When studying the sandbox\u2019s use, we did not find a single application that successfully deployed the sandbox for security purposes, which motivated us to overcome benignly-used complexity via tooling. We develop and evaluate a series of tools to automate the most complex tasks, which currently require error-prone manual effort. Our tools help users derive, express, and refine a security policy and impose it on targeted Java application JARs and classes. This tooling is evaluated through case studies with industrial collaborators where we sandbox components that were previously difficult to sandbox securely. Finally, we observe that design and implementation complexity causes sandbox developers to accidentally create vulnerable sandboxes. Thus, we develop and evaluate a sandboxing technique that leverages existing cloud computing environments to execute untrusted computations. Malicious outcomes produced by the computations are contained by ephemeral virtual machines. We describe a field trial using this technique with Adobe Reader and compare the new sandbox to existing sandboxes using a qualitative framework we developed."}
{"_id": "2b5899c10b4a501763e50573923e2ddb62f563db", "title": "Learning Ensembles of Convolutional Neural Networks", "text": "Ensemble learning is a method for generating multiple versions of a predictor network and using them to get an aggregated prediction. Given a learning set \u03a9 consists of data {(yn, ~xn), n = 1, ..., N} where y is the class label and ~x is the inputing feature, we train a predictor \u03c6(~x,\u03a9). With different initialization, we obtain a series of predictors {\u03c6k}. Our object is to use the {\u03c6k} to get a better predictor, \u03c6A."}
{"_id": "18e595d49b45722588f8561f53a773c6d1a9876e", "title": "SOUL: An Edge-Cloud System for Mobile Applications in a Sensor-Rich World", "text": "With the Internet of Things, sensors are becoming ever more ubiquitous, but interacting with them continues to present numerous challenges, particularly for applications running on resource-constrained devices like smartphones. The SOUL abstractions in this paper address two issues faced by such applications: (1) access to sensors with the levels of convenience needed for their ubiquitous, dynamic use, and only by parties authorized to do so, and (2) scalability in sensor access, given today's multitude of sensors. Toward this end, SOUL, first, introduces a new abstraction for the applications to transparently and uniformly access both on-device and ambient sensors with associated actuators. Second, potentially expensive sensor-related processing needs not just occur on smartphones, but can also leverage edge-and remote-cloud resources. Finally, SOUL provides access control methods that permit users to easily define access permissions for sensors, which leverages users' social ties and captures the context in which access requests are made. SOUL demonstrates that the applications on Android platforms can scale the use of 100s of sensors with performance and energy efficiency."}
{"_id": "0364aeed17861122f469d1a5f3fec48f7ec052a2", "title": "Privacy and rationality in individual decision making", "text": "Traditional theory suggests consumers should be able to manage their privacy. Yet, empirical and theoretical research suggests that consumers often lack enough information to make privacy-sensitive decisions and, even with sufficient information, are likely to trade off long-term privacy for short-term benefits"}
{"_id": "6dabd3476b70581075dab379f745d02ee419e1e2", "title": "Personalization versus Privacy: An Empirical Examination of the Online Consumer's Dilemma", "text": "Personalization refers to the tailoring of products and purchase experience to the tastes of individual consumers based upon their personal and preference information. Recent advances in information acquisition and processing technologies have allowed online vendors to offer varieties of web-based personalization that not only increases switching costs, but also serves as important means of acquiring valuable customer information. However, investments in online personalization may be severely undermined if consumers do not use these services due to privacy concerns. In the absence of any empirical evidence that seeks to understand this consumer dilemma, our research develops a parsimonious model to predict consumers\u2019 usage of online personalization as a result of the tradeoff between their value for personalization and concern for privacy. In addition to this tradeoff, we find that a consumer\u2019s intent to use personalization services is positively influenced by her trust in the vendor. Our findings suggest that: 1. online vendors can improve their abilities to acquire and use customer information through trust building activities; 2. it is of critical importance that vendors understand and evaluate the different values consumers may place in enjoying various types of personalization."}
{"_id": "072cd62eca591d1ea8bc2460da87002fb48a7366", "title": "Conditioning Prices on Purchase History", "text": "T rapid advance in information technology now makes it feasible for sellers to condition their price offers on consumers\u2019 prior purchase behavior. In this paper we examine when it is profitable to engage in this form of price discrimination when consumers can adopt strategies to protect their privacy. Our baseline model involves rational consumers with constant valuations for the goods being sold and a monopoly merchant who can commit to a pricing policy. Applying results from the prior literature, we show that although it is feasible to price so as to distinguish high-value and low-value consumers, the merchant will never find it optimal to do so. We then consider various generalizations of this model, such as allowing the seller to offer enhanced services to previous customers, making the merchant unable to commit to a pricing policy, and allowing competition in the marketplace. In these cases we show that sellers will, in general, find it profitable to condition prices on purchase history."}
{"_id": "3621b1ecb90c0cde6696ddce8ac0597d67439722", "title": "Privacy in electronic commerce and the economics of immediate gratification", "text": "Dichotomies between privacy attitudes and behavior have been noted in the literature but not yet fully explained. We apply lessons from the research on behavioral economics to understand the individual decision making process with respect to privacy in electronic commerce. We show that it is unrealistic to expectindividual rationality in this context. Models of self-control problems and immediate gratification offer more realistic descriptions of the decision process and are more consistent with currently available data. In particular, we show why individuals who may genuinely want to protect their privacy might not do so because of psychological distortions well documented in the behavioral literature; we show that these distortions may affect not only 'naive' individuals but also 'sophisticated' ones; and we prove that this may occur also when individuals perceive the risks from not protecting their privacy as significant."}
{"_id": "9d8f80a5142ffbbed96c8bda3dfa1912a65a98c9", "title": "Characterization studies of fluorine-induced corrosion crystal defects on microchip Al bondpads using X-ray photoelectron spectroscopy", "text": "In wafer fabrication, Fluorine (F) contamination may cause F-induced corrosion and defects on microchip Al bondpad, resulting in bondpad discoloration or non-stick on pad (NSOP). In the previous paper [1], the authors studied the F-induced corrosion and defects, characterized the composition of the \u201cflower-like\u201d defects and determined the binding energy of Al fluoride [AlF6]3- using X-ray Photoelectron Spectroscopy (XPS) and Time of Flight Secondary Ion Mass Spectrometry (TOF-SIMS) techniques. In this paper, we further studied F-induced corrosion and defects, and characterized the composition of the \u201ccrystal-like\u201d defects using XPS. The experimental results showed that the major component of the \u201ccrystal-like\u201d defect was Al fluoride of AlF3. The percentages of the components of the \u201ccrystal-like\u201d defects on the affected bondpad are: Al (22.2%), Al2O3 (5.4%), AlF3(70.0%) and [AlF6]3- (2.4%). During high-resolution fitting, the binding energies of Al (72.8eV)Al2O3 (74.5eV), AlF3 (76.3eV) and [AlF6]3- (78.7eV) were used."}
{"_id": "0c1a55e0e02c1dbf6cf363ec022ca17925586e16", "title": "User-oriented Bayesian identification and its configuration", "text": "Identification of tracked objects is a key capability of automated surveillance and information systems for air, surface and subsurface (maritime), and ground environments, improving situational awareness and offering decision support to operational users. The Bayesian-based identification data combining process (IDCP) provides an effective instrument for fusion of uncertain identity indications from various sources. A user-oriented approach to configuration of the process is introduced, which enables operators to adapt the IDCP to changing identification needs in varying operational scenarios. Application of results from cognitive psychology and decision theory provides good access to retrieval of Bayesian data and makes configuration easily feasible to operational experts."}
{"_id": "23989b084ebb68d0fcc26c6f0a4119635d16feb1", "title": "The Myth of the \"Biphasic\" Hyaluronic Acid Filler.", "text": "BACKGROUND\nThe terms \"biphasic\" and \"monophasic\" have been used frequently as a means of differentiating hyaluronic acid (HA) fillers. This type of categorization is based on misinterpretations of the term \"phase\" and provides no help to the practitioner when selecting the most appropriate product for each indication, patient, and injection technique.\n\n\nOBJECTIVE\nThe purpose of this study was to analyze the properties of 2 HA filler families; Juvederm (JUV) (Allergan), often stated to be monophasic and Restylane (RES) (Galderma), often stated to be biphasic, and discuss what properties may have led to the use of the terms monophasic and biphasic.\n\n\nMATERIALS AND METHODS\nThree different methods were used for JUV and RES: determination of extractable HA; determination of water uptake; and microscopy.\n\n\nRESULTS\nThe analyzed products were shown to contain both observable gel particles and extractable HA and have the ability to absorb added water.\n\n\nCONCLUSION\nThe categorization of HA fillers as biphasic or monophasic was shown to be scientifically incorrect and should therefore be avoided. Further analytical measurement of the properties leading to this misinterpretation can provide information to discriminate and categorize HA fillers on a sounder scientific basis."}
{"_id": "89cd274caf493812d2bd4fd222451cd3983f50a0", "title": "Link-level access cloud architecture design based on SDN for 5G networks", "text": "The exponential growth of data traffic and connected devices, and the reduction of latency and costs, are considered major challenges for future mobile communication networks. The satisfaction of these challenges motivates revisiting the architecture of these networks. We propose an SDN-based design of a hierarchical architecture for the 5G packet core. In this article we focus on the design of its access cloud with the goal of providing low latency and scalable Ethernet-like support to terminals and MTC devices including mobility management. We examine and address its challenges in terms of network scalability and support for link-level mobility. We propose a link-level architecture that forwards frames from and to edge network elements (AP and routers) with a label that identifies the APs through which the terminal is reachable. An SDN local controller tracks and updates the users' location information at the edge network elements. Additionally, we propose to delegate in SDN local controllers the handling of non-scalable operations, such as broadcast and multicast messages, and network management procedures."}
{"_id": "341fed554ee3e1687dbd07247b98b5c5e13abf7d", "title": "Product Adoption Rate Prediction in a Competitive Market", "text": "As the worlds of commerce and the Internet technology become more inextricably linked, a large number of user consumption series become available for online market intelligence analysis. A critical demand along this line is to predict the future product adoption state of each user, which enables a wide range of applications such as targeted marketing. Nevertheless, previous works only aimed at predicting if a user would adopt a particular product or not with a binary buy-or-not representation. The problem of tracking and predicting users\u2019 adoption rates, i.e., the frequency and regularity of using each product over time, is still under-explored. To this end, we present a comprehensive study of product adoption rate prediction in a competitive market. This task is nontrivial as there are three major challenges in modeling users\u2019 complex adoption states: the heterogeneous data sources around users, the unique user preference and the competitive product selection. To deal with these challenges, we first introduce a flexible factor-based decision function to capture the change of users\u2019 product adoption rate over time, where various factors that may influence users\u2019 decisions from heterogeneous data sources can be leveraged. Using this factor-based decision function, we then provide two corresponding models to learn the parameters of the decision function with both generalized and personalized assumptions of users\u2019 preferences. We further study how to leverage the competition among different products and simultaneously learn product competition and users\u2019 preferences with both generalized and personalized assumptions. Finally, extensive experiments on two real-world datasets show the superiority of our proposed models."}
{"_id": "73a85ee17954c7e4bcb119536b914c02f6cf8881", "title": "A novel hybrid MCDM model combining the SAW, TOPSIS and GRA methods based on experimental design", "text": "Multiple criteria decision-making (MCDM) is a difficult task because the existing alternatives are frequently in conflict with each other. This study presents a hybrid MCDM method combining simple additive weighting (SAW), techniques for order preference by similarity to an ideal solution (TOPSIS) and grey relational analysis (GRA) techniques. A feature of this method is that it employs an experimental design technique to assign attribute weights and then combines different MCDM evaluation methods to construct the hybrid decision-making model. This model can guide a decision maker in making a reasonable judgment without requiring professional skills or extensive experience. The ranking results agreed upon by multiple MCDM methods are more trustworthy than those generated by a single MCDM method. The proposed method is illustrated in a practical application scenario involving an IC packaging company. Four additional numerical examples are provided to demonstrate the applicability of the proposed method. In all of the cases, the results obtained using the proposed method were highly similar to those derived by previous studies, thus proving the validity and capability of this method to solve real-life MCDM problems. \u00a9 2016 Elsevier Inc. All rights reserved."}
{"_id": "50773fba6770179afa2f5047b89c6fe7b74a425a", "title": "A Hierarchical Security Framework for Defending Against Sophisticated Attacks on Wireless Sensor Networks in Smart Cities", "text": "In smart cities, wireless sensor networks (WSNs) act as a type of core infrastructure that collects data from the city to implement smart services. The security of WSNs is one of the key issues of smart cities. In resource-restrained WSNs, dynamic ongoing or unknown attacks usually steer clear of isolated defense components. Therefore, to resolve this problem, we propose a hierarchical framework based on chance discovery and usage control (UCON) technologies to improve the security of WSNs while still taking the low-complexity and high security requirements of WSNs into account. The features of continuous decision and dynamic attributes in UCON can address ongoing attacks using advanced persistent threat detection. In addition, we use a dynamic adaptive chance discovery mechanism to detect unknown attacks. To design and implement a system using the mechanism described above, a unified framework is proposed in which low-level attack detection with simple rules is performed in sensors, and high-level attack detection with complex rules is performed in sinks and at the base station. Moreover, software-defined networking and network function virtualization technologies are used to perform attack mitigation when either low-level or high-level attacks are detected. An experiment was performed to acquire an attack data set for evaluation. Then, a simulation was created to evaluate the resource consumption and attack detection rate. The results demonstrate the feasibility and efficiency of the proposed scheme."}
{"_id": "186dce59aa6bd5bcb228c4f80f55ff032fa272b9", "title": "Factorized graph matching", "text": "Graph matching plays a central role in solving correspondence problems in computer vision. Graph matching problems that incorporate pair-wise constraints can be cast as a quadratic assignment problem (QAP). Unfortunately, QAP is NP-hard and many algorithms have been proposed to solve different relaxations. This paper presents factorized graph matching (FGM), a novel framework for interpreting and optimizing graph matching problems. In this work we show that the affinity matrix can be factorized as a Kronecker product of smaller matrices. There are three main benefits of using this factorization in graph matching: (1) There is no need to compute the costly (in space and time) pair-wise affinity matrix; (2) The factorization provides a taxonomy for graph matching and reveals the connection among several methods; (3) Using the factorization we derive a new approximation of the original problem that improves state-of-the-art algorithms in graph matching. Experimental results in synthetic and real databases illustrate the benefits of FGM. The code is available at http://humansensing.cs.cmu.edu/fgm."}
{"_id": "787f7a8cc450ca08d593428045003967a8e7b549", "title": "Complete Analytical Forward and Inverse Kinematics for the NAO Humanoid Robot", "text": "Articulated robots with multiple degrees of freedom, such as humanoid robots, have become popular research platforms in robotics and artificial intelligence. Such robots can perform complex motions, including the balancing, walking, and kicking skills required in the RoboCup robot soccer competition. The design of complex dynamic motions is achievable only through the use of robot kinematics, which is an application of geometry to the study of arbitrary robotic chains. This thesis studies the problems of forward and inverse kinematics for the Aldebaran NAO humanoid robot and presents for the first time a complete analytical solution to both problems with no approximations, including an implementation of a software library for real-time execution. The forward kinematics allow NAO developers to map any configuration of the robot from its own joint space to the three-dimensional physical space, whereas the inverse kinematics provide closedform solutions to finding joint configurations that drive the end effectors of the robot to desired points in the three-dimensional space. The proposed solution was made feasible through a decomposition into five independent problems (head, two arms, two legs), the use of the Denavit-Hartenberg method, and the analytical solution of a non-linear system of equations. The main advantage of the proposed inverse kinematics compared to existing numerical approaches is its accuracy, its efficiency, and the elimination of singularities. The implemented NAO kinematics library, which additionally offers centerof-mass calculation, is demonstrated in two motion design tasks: pointing to the ball and basic balancing. The library has been integrated into the software architecture of the RoboCup team \u201cKouretes\u201d and is currently being used in various motion design problems, such as dynamic balancing, trajectory following, dynamic kicking, and omnidirectional walking."}
{"_id": "7c97d68d87b3dd15e315fd7beca43b04fa8a693f", "title": "A stochastic control strategy for hybrid electric vehicles", "text": "The supervisory control strategy of a hybrid vehicle coordinates the operation of vehicle sub-systems to achieve performance targets such as maximizing fuel economy and reducing exhaust emissions. This high-level control problem is commonly referred as the power management problem. In the past, many supervisory control strategies were developed on the basis of a few pre-defined driving cycles, using intuition and heuristics. The resulting control strategy was often inherently cycle-beating and lacked a guaranteed level of optimality. In this study, the power management problem is tackled from a stochastic viewpoint. An infinite-horizon stochastic dynamic optimization problem is formulated. The power demand from the driver is modeled as a random Markov process. The optimal control strategy is then obtained by using stochastic dynamic programming (SDP). The obtained control law is in the form of a stationary full-state feedback and can be directly implemented. Simulation results over standard driving cycles and random driving cycles are presented to demonstrate the effectiveness of the proposed stochastic approach. It was found that the obtained SDP control algorithm outperforms a sub-optimal rule-based control strategy trained from deterministic DP results."}
{"_id": "85741fa2a0fb1060c138b1b11a0906381661fbcb", "title": "Snort Intrusion Detection System with Intel Software Guard Extension (Intel SGX)", "text": "Network Function Virtualization (NFV) promises the benefits of reduced infrastructure, personnel, and management costs by outsourcing network middleboxes to the public or private cloud. Unfortunately, running network functions in the cloud entails security challenges, especially for complex stateful services. In this paper, we describe our experiences with hardening the king of middleboxes - Intrusion Detection Systems (IDS) - using Intel Software Guard Extensions (Intel SGX) technology. Our IDS secured using Intel SGX, called SEC-IDS, is an unmodified Snort 3 with a DPDK network layer that achieves 10Gbps line rate. SEC-IDS guarantees computational integrity by running all Snort code inside an Intel SGX enclave. At the same time, SEC-IDS achieves near-native performance, with throughput close to 100 percent of vanilla Snort 3, by retaining network I/O outside of the enclave. Our experiments indicate that performance is only constrained by the modest Enclave Page Cache size available on current Intel SGX Skylake based E3 Xeon platforms. Finally, we kept the porting effort minimal by using the Graphene-SGX library OS. Only 27 Lines of Code (LoC) were modified in Snort and 178 LoC in Graphene-SGX itself."}
{"_id": "21d92a3c2f48493cc7a416ac24a3cdfcd49bcba8", "title": "Comparing textual descriptions to process models - The automatic detection of inconsistencies", "text": "Many organizations maintain textual process descriptions alongside graphical process models. The purpose is to make process information accessible to various stakeholders, including those who are not familiar with reading and interpreting the complex execution logic of process models. Despite this merit, there is a clear risk that model and text become misaligned when changes are not applied to both descriptions consistently. For organizations with hundreds of di\u21b5erent processes, the e\u21b5ort required to identify and clear up such conflicts is considerable. To support organizations in keeping their process descriptions consistent, we present an approach to automatically identify inconsistencies between a process model and a corresponding textual description. Our approach detects cases where the two process representations describe activities in di\u21b5erent orders and detect process model activities not contained in the textual description. A quantitative evaluation with 53 real-life model-text pairs demonstrates that our approach accurately identifies inconsistencies between model and text."}
{"_id": "9224133a244a9b96ffde51706085acafb36b280d", "title": "Approximating Polyhedra with Spheres for Time-Critical Collision Detection", "text": "This article presentsa method for approximatingpolyhedralobjects to support a time-critical collision-detectionalgorithm. The approximationsare hierarchies of spheres, and they allow the time-critical algorithm to progressivelyrefine the accuracy of its detection, stopping as needed to maintain the real-time performanceessential for interactive applications.The key to this approachis a preprocessthat automaticallybuilds tightly fitting hierarchies for rigid and articulatedobjects.The preprocessuses medial-axis surfaces, which are skeletal representations of objects. These skeletonsguide an optimizationtechniquethat gives the hierarchies accuracy properties appropriate for collision detection. In a sample application, hierarchies built this way allow the time-criticalcollision-detectionalgorithmto have acceptableaccuracy, improving significantly on that possible with hierarchies built by previous techniques. The performanceof the time-critical algorithmin this application is consistently 10 to 100 times better than a previous collision-detection algorithm, maintaining low latency and a nearIy constant frame rate of 10 frames per second on a conventional graphics workstation. The time-critical algorithm maintains its real-time performanceas objects become more complicated, even as they exceed previouslyreported complexitylevels by a factor of more than 10."}
{"_id": "2636bff7d3bdccf9b39c5e1e7d86a77690f1c07d", "title": "Reward Shaping via Meta-Learning", "text": "Reward shaping is one of the most effective methods to tackle the crucial yet challenging problem of credit assignment in Reinforcement Learning (RL). However, designing shaping functions usually requires much expert knowledge and handengineering, and the difficulties are further exacerbated given multiple similar tasks to solve. In this paper, we consider reward shaping on a distribution of tasks, and propose a general meta-learning framework to automatically learn the efficient reward shaping on newly sampled tasks, assuming only shared state space but not necessarily action space. We first derive the theoretically optimal reward shaping in terms of credit assignment in model-free RL. We then propose a value-based meta-learning algorithm to extract an effective prior over the optimal reward shaping. The prior can be applied directly to new tasks, or provably adapted to the task-posterior while solving the task within few gradient updates. We demonstrate the effectiveness of our shaping through significantly improved learning efficiency and interpretable visualizations across various settings, including notably a successful transfer from DQN to DDPG."}
{"_id": "0309ec1f0e139cc10090c4fefa08a83a2644530a", "title": "ADDIS ABABA UNIVERSITY SCHOOL OF GRADUATE STUDIES FACULTY OF INFORMATICS DEPARTMENT OF INFORMATION SCIENCE DESIGNING A STEMMER FOR GE\u2019EZ TEXT USING RULE BASED APPROACH", "text": ""}
{"_id": "dc057f27de168272c75665c21878e1fe2ec581cf", "title": "A compact dual-band half-mode SIW based semi-circular antenna", "text": "In this paper, a half mode SIW based semi-circular antenna with co-axial feed has been designed for dual band operation. The fundamental resonating frequency is found to operate at 4.66 GHz while the next higher mode is at 8.3 GHz. The antenna performance is optimized through parametric studies and the dual band antenna is designed using Arlon AD270 substrate. Simulation studies have been carried out to study the various antenna parameters."}
{"_id": "16b1c37ffb2117d6919148cfbcb24f100c678efa", "title": "A novel coaxial probe waveguide to microstrip transition", "text": "In this paper, a modified coaxial probe waveguide-to-microstrip transition structure at K-band is proposed. The transition uses two kinds of coaxial probes and has hermetic characteristics. It can also be used in other band, such as Ka-band. The structure is demonstrated for designing the transition. HFSS simulation and measurement results for the transition structure have been compared in terms of |S/sub 21/|. The comparison shows that the measurement agrees well with the simulation result. The measurement has shown 0.5 dB insertion loss."}
{"_id": "501e496146b04f42e3e6a49aabd29fb909083007", "title": "Heuristic evaluation of user interfaces", "text": "Heuristic evaluation is an informal method of usability analysis where a number of evaluators are presented with an interface design and asked to comment on it. Four experiments showed that individual evaluators were mostly quite bad at doing such heuristic evaluations and that they only found between 20 and 51% of the usability problems in the interfaces they evaluated. On the other hand, we could aggregate the evaluations from several evaluators to a single evaluation and such aggregates do rather well, even when they consist of only three to five people."}
{"_id": "8a7e4164d954eb55617362dcb18ca1359b4b753b", "title": "Stochastic Gradient Descent Tricks", "text": "Chapter 1 strongly advocates the stochastic back-propagation method to train neural networks. This is in fact an instance of a more general technique called stochastic gradient descent (SGD). This chapter provides background material, explains why SGD is a good learning algorithm when the training set is large, and provides useful recommendations."}
{"_id": "f74558d368edc546a1c1f90ef5b378a32f7dba11", "title": "Introduction to React", "text": "Introduction to React teaches you React, the JavaScript framework created by developers at Facebook, to solve the problem of building complex user interfaces in a consistent and maintainable way. React.js shrugs away common front-end conventions in an effort to make things more effi cient use Introduction to React to learn about this framework and more today. Get to know the React API and it\u2019s specifi c JavaScript extension, JSX, which makes authoring React components easier and maintainable. You will also learn how to test your React applications and about the tools you can use while building. Once you understand these core concepts, you can build applications with React. This will help you cement the ideas and fundamentals of React and prepare you to utilize React in your own use case."}
{"_id": "9130629db05dfb314d4662ab6dbf1e1f5aba793c", "title": "Excipient-drug interactions in parenteral formulations.", "text": "Excipients are added to parenteral formulations to enhance or maintain active ingredient solubility (solubilizers) and/or stability (buffers, antioxidants, chelating agents, cryo- and lyoprotectants). Excipients also are important in parenteral formulations to assure safety (antimicrobial preservatives), minimize pain and irritation upon injection (tonicity agents), and control or prolong drug delivery (polymers). These are all examples of positive or synergistic interactions between excipients and drugs. However, excipients may also produce negative effects such as loss of drug solubility, activity, and/or stability. This review article will highlight documented interactions, both synergistic and antagonistic, between excipients and drugs in parenteral formulations. The reader will gain better understanding and appreciation of the implications of adding formulation ingredients to parenteral drug products."}
{"_id": "447fe73437a6fec67334b3304051a7698cbc9127", "title": "Standing Out from the Crowd: Emotional Labor, Body Labor, and Temporal Labor in Ridesharing", "text": "CSCW researchers have become interested in crowd work as a new form of collaborative engagement, that is, as a new way in which people\u00e2\u0080\u0099s actions are coordinated in order to achieve collective effects. We address this area but from a different perspective \u00e2\u0080\u0093 that of the labor practices involved in taking crowd work as a form of work. Using empirical materials from a study of ride-sharing, we draw inspiration from studies of the immaterial forms of labor and alternate analyses of political economy that can cast a new light on the context of crowd labor that might matter for CSCW researchers."}
{"_id": "1002ef7a675defeb6ac53ab77f081d5b6b0bf98a", "title": "Social Media and Information Overload: Survey Results", "text": "A UK-based online questionnaire investigating aspects of usage of usergenerated media (UGM), such as Facebook, LinkedIn and Twitter, attracted 587 participants. Results show a high degree of engagement with social networking media such as Facebook, and a significant engagement with other media such as professional media, microblogs and blogs. Participants who experience information overload are those who engage less frequently with the media, rather than those who have fewer posts to read. Professional users show different behaviours to social users. Microbloggers complain of information overload to the greatest extent. Two thirds of Twitter-users have felt that they receive too many posts, and over half of Twitter-users have felt the need for a tool to filter out the irrelevant posts. Generally speaking, participants express satisfaction with the media, though a significant minority express a range of concerns including information overload and privacy. Keyword list: User-Generated Media, Social Networking, Information Overload, Information Overlook, Facebook, LinkedIn, Twitter"}
{"_id": "20c88e23020ea3c42e640f8ae3dc2a1f8569892d", "title": "Dynamic Influence Analysis in Evolving Networks", "text": "We propose the first real-time fully-dynamic index data structure designed for influence analysis on evolving networks. With this aim, we carefully redesign the data structure of the state-of-the-art sketching method introduced by Borgs et al., and construct corresponding update algorithms. Using this index, we present algorithms for two kinds of queries, influence estimation and influence maximization, which are strongly motivated by practical applications, such as viral marketing. We provide a thorough theoretical analysis, which guarantees the non-degeneracy of the solution accuracy after an arbitrary number of updates. Furthermore, we introduce a reachability-tree-based technique and a skipping method, which greatly reduce the time consumption required for edge/vertex deletions and vertex additions, respectively, and counter-based random number generators, which improve the space efficiency. Experimental evaluations using real dynamic networks with tens of millions of edges demonstrate the efficiency, scalability, and accuracy of our proposed indexing scheme. Specifically, it can reflect a graph modification within a time of several orders of magnitude smaller than that required to reconstruct an index from scratch, estimate the influence spread of a vertex set accurately within a millisecond, and select highly influential vertices at least ten times faster than state-of-the-art static algorithms."}
{"_id": "42771aede47980ae8eeebac246c7a8b941d11414", "title": "Improving personalized web search using result diversification", "text": "We present and evaluate methods for diversifying search results to improve personalized web search. A common personalization approach involves reranking the top N search results such that documents likely to be preferred by the user are presented higher. The usefulness of reranking is limited in part by the number and diversity of results considered. We propose three methods to increase the diversity of the top results and evaluate the effectiveness of these methods."}
{"_id": "5345c358466cb5e11801687820b247afb999b01a", "title": "The TNF and TNF Receptor Superfamilies Integrating Mammalian Biology", "text": "The receptors and ligands in this superfamily have unique structural attributes that couple them directly to signaling pathways for cell proliferation, survival, and differentiation. Thus, they have assumed prominent roles in the generation of tissues and transient microen-capabilities are crucial in coordinating the proliferation and protective functions of pathogen-reactive cells. Here, we review the organization of the TNF/TNFR SF and how these proteins have been adapted for pro-Bethesda, Maryland 20892 cesses as seemingly disparate as host defense and or-ganogenesis. In interpreting this large and highly active area of research, we have focused on common themes Introduction that unite the actions of these genes in different tissues. We also discuss the evolutionary success of this super-Three decades ago, lymphotoxin (LT) and tumor necro-sis factor (TNF) were identified as products of lympho-family\u2014success that we infer from its expansion across the mammalian genome and from its many indispens-cytes and macrophages that caused the lysis of certain types of cells, especially tumor cells (Granger et al., able roles in mammalian biology.The normal functions of TNF/TNFR SFPs, as well as ally, it became clear that they were members of a gene certain diseases involving them, depend on the obliga-superfamily. Not surprisingly, the receptors for these tory 3-fold symmetry that defines the essential signaling proteins also constitute a TNF receptor (TNFR)-related stoichiometry and structure (Figure 1). The ligands are gene superfamily. Large-scale sequencing of \" ex-type 2 proteins that can have both membrane-embed-pressed sequence tags \" (ESTs) identified many related ded \" pro \" as well as cleaved, soluble \" mature \" forms proteins, collectively referred to here as TNF-and TNFR-(for review, see Idriss and Naismith, 2000). Both forms related superfamily proteins (TNF/TNFR SFPs; reviewed are active as self-assembling noncovalent trimers, whose individual chains fold as compact \" jellyroll \" \u2424 sandwiches and interact at hydrophobic interfaces (Fesik, UCL.ac.uk/users/hester/tnfinfo.html). The familiar as 2000) (Figure 1A). The 25%\u201330% amino acid similarity well as standardized names of these proteins are listed between TNF-like ligands is largely confined to internal in Table 1, together with their gene locations, pheno-aromatic residues responsible for trimer assembly. The types caused by mutations in these genes, and identified external surfaces of ligand trimers show little sequence functions. similarity, which accounts for receptor selectivity (Figure The discovery that cachectin, a protein known to 2). The ligand shape is that of an inverted bell that is cause fever and wasting, was identical to TNF provided \u2026"}
{"_id": "bfdd071170ff552b89c161b2eaad8a9fd10fee3d", "title": "A Double-Sided LCLC-Compensated Capacitive Power Transfer System for Electric Vehicle Charging", "text": "A double-sided LCLC-compensated capacitive power transfer (CPT) system is proposed for the electric vehicle charging application. Two pairs of metal plates are utilized to form two coupling capacitors to transfer power wirelessly. The LCLC-compensated structure can dramatically reduce the voltage stress on the coupling capacitors and maintain unity power factor at both the input and output. A 2.4-kW CPT system is designed with four 610-mm \u00d7 610-mm copper plates and an air gap distance of 150 mm. The experimental prototype reaches a dc-dc efficiency of 90.8% at 2.4-kW output power. At 300-mm misalignment case, the output power drops to 2.1 kW with 90.7% efficiency. With a 300-mm air gap distance, the output power drops to 1.6 kW with 89.1% efficiency."}
{"_id": "c02329489704dadbadf6d84bb7c03eeee22c9e38", "title": "Depth of field rendering via adaptive recursive filtering", "text": "We present a new post-processing method for rendering high-quality depth-of-field effects in real time. Our method is based on a recursive filtering process, which adaptively smooths the image frame with local depth and circle of confusion information. Unlike previous post-filtering approaches that rely on various convolution kernels, the behavior of our filter is controlled by a weighting function defined between two neighboring pixels. By properly designing this weighting function, our method produces spatially-varying smoothed results, correctly handles the boundaries between in-focus and out-of-focus objects, and avoids rendering artifacts such as intensity leakage and blurring discontinuity. Additionally, our method works on the full frame without resorting to image pyramids. Our algorithms runs efficiently on graphics hardware. We demonstrate the effectiveness of the proposed method with several complex scenes."}
{"_id": "1dddfa634589e347648e79ae4e261af23553981e", "title": "Learning feed-forward one-shot learners", "text": "One-shot learning is usually tackled by using generative models or discriminative embeddings. Discriminative methods based on deep learning, which are very effective in other learning scenarios, are ill-suited for one-shot learning as they need large amounts of training data. In this paper, we propose a method to learn the parameters of a deep model in one shot. We construct the learner as a second deep network, called a learnet, which predicts the parameters of a pupil network from a single exemplar. In this manner we obtain an efficient feed-forward one-shot learner, trained end-to-end by minimizing a one-shot classification objective in a learning to learn formulation. In order to make the construction feasible, we propose a number of factorizations of the parameters of the pupil network. We demonstrate encouraging results by learning characters from single exemplars in Omniglot, and by tracking visual objects from a single initial exemplar in the Visual Object Tracking benchmark."}
{"_id": "f2ee6b46b30313fcf0ba03569489f7509799fa2d", "title": "A Study of Big Data in Cloud Environment with their Related Challenges", "text": "Big Data , it is a word which is used for the huge amount of data sets which have the big, extra various and difficu lt structure facing some difficult ies in storing the data then analyzing and visualizing the data for the final outcome [1] . Big data analytics is the procedure of probing huge amounts of data. Here we use cloud computing technology so firstly acquaint the general environment of big data and some associated technology like Hadoop. This powerful technology is used to execute the huge s cale and complicated data computing. It eradicates the requirements of hardware computing which is costly dedicated space and software\u2019s [3]. It is a resolution for analyzing Big Data by deploying Big Data softwares like Hadoop on cloud background which gives users an innovative pattern of service called as Analytics as a Service [7]. Hadoop uses the map-reduce paradigm .We also talk about various advantages, challenges, and issues associated with Big Data."}
{"_id": "0ed10b173271dfd068d8804b676c8d013a060d17", "title": "Online human gesture recognition from motion data streams", "text": "Online human gesture recognition has a wide range of applications in computer vision, especially in human-computer interaction applications. Recent introduction of cost-effective depth cameras brings on a new trend of research on body-movement gesture recognition. However, there are two major challenges: i) how to continuously recognize gestures from unsegmented streams, and ii) how to differentiate different styles of a same gesture from other types of gestures. In this paper, we solve these two problems with a new effective and efficient feature extraction method that uses a dynamic matching approach to construct a feature vector for each frame and improves sensitivity to the features of different gestures and decreases sensitivity to the features of gestures within the same class. Our comprehensive experiments on MSRC-12 Kinect Gesture and MSR-Action3D datasets have demonstrated a superior performance than the stat-of-the-art approaches."}
{"_id": "2ead773ade64c4063b7bd4ff0d0b7e6493d2b8f4", "title": "On Covering a Solid Sphere with Concentric Spheres in ${\\mathbb Z}^3$", "text": "We show that a digital sphere, constructed by the circular sweep of a digital semicircle (generatrix) around its diameter, consists of some holes (absentee-voxels), which appear on its spherical surface of revolution. This incompleteness calls for a proper characterization of the absentee-voxels whose restoration will yield a complete spherical surface without any holes. In this paper, we present a characterization of such absentee-voxels using certain techniques of digital geometry and show that their count varies quadratically with the radius of the semicircular generatrix. Next, we design an algorithm to fill these absentee-voxels so as to generate a spherical surface of revolution, which is more realistic from the viewpoint of visual perception. We further show that covering a solid sphere by a set of complete spheres also results in an asymptotically larger count of absentees, which is cubic in the radius of the sphere. The characterization and generation of complete solid spheres without any holes can also be accomplished in a similar fashion. We furnish test results to substantiate our theoretical findings."}
{"_id": "1f43689ceba25b12a8e5b4e443ed658572c4af64", "title": "Automatic Text Summarization System for Punjabi Language", "text": "This paper concentrates on single document multi news Punjabi extractive summarizer. Although lot of research is going on in field of multi document news summarization systems but not even a single paper was found in literature for single document multi news summarization for any language. It is first time that this system has been developed for Punjabi language and is available online at: http://pts.learnpunjabi.org/. Punjab is one of Indian states and Punjabi is its official language. Punjabi is under resourced language. Various linguistic resources for Punjabi were also developed first time as part of this project like Punjabi noun morph, Punjabi stemmer and Punjabi named entity recognition, Punjabi keywords identification, normalization of Punjabi nouns etc. A Punjabi document (like single page of Punjabi E-news paper) can have hundreds of multi news of varying length. Based on compression ratio selected by user, this system starts by extracting headlines of each news, lines just next to headlines and other important lines depending upon their importance. Selection of sentences is on the basis of statistical and linguistic features of sentences. This system comprises of two main steps: Pre Processing and Processing phase. Pre Processing phase represents the Punjabi text in structured way. In processing phase, different features deciding the importance of sentences are determined and calculated. Some of the statistical features are Punjabi keywords identification, relative sentence length feature and numbered data feature. Various linguistic features for selecting important sentences in summary are: Punjabiheadlines identification, identification of lines just next to headlines, identification of Punjabi-nouns, identification of Punjabi-proper-nouns, identification of common-EnglishPunjabi-nouns, identification of Punjabi-cue-phrases and identification of title-keywords in sentences. Scores of sentences are determined from sentence-feature-weight equation. Weights of features are determined using mathematical regression. Using regression, feature values of some Punjabi documents which are manually summarized are treated as independent input values and their corresponding dependent output values are provided. In the training phase, manually summaries of fifty newsdocuments are made by giving fuzzy scores to the sentences of those documents and then regression is applied for finding values of feature-weights and then average values of feature-weights are calculated. High scored sentences in proper order are selected for final summary. In final summary, sentences coherence is maintained by properly ordering the sentences in the same order as they appear in the input text at the selective compression ratios. This extractive Punjabi summarizer is available online."}
{"_id": "091b3d154d75a80e0afe94f795abdb24ba5c3ed3", "title": "Calibration of Ultrawide Fisheye Lens Cameras by Eigenvalue Minimization", "text": "We present a new technique for calibrating ultrawide fisheye lens cameras by imposing the constraint that collinear points be rectified to be collinear, parallel lines to be parallel, and orthogonal lines to be orthogonal. Exploiting the fact that line fitting reduces to an eigenvalue problem in 3D, we do a rigorous perturbation analysis to obtain a practical calibration procedure. Doing experiments, we point out that spurious solutions exist if collinearity and parallelism alone are imposed. Our technique has many desirable properties. For example, no metric information is required about the reference pattern or the camera position, and separate stripe patterns can be displayed on a video screen to generate a virtual grid, eliminating the grid point extraction processing."}
{"_id": "92609b4914e27c02c5e071a061191bfa73a2e88a", "title": "An Approach to Discovering Temporal Association Rules", "text": "The goal of discovering association rules is to discover aH possible associations that accomplish certain restrictions (minimum support and confidence and interesting). However, it is possible to find interesting associations with a high confidence level hut with little support. This problem is caused by the way support is calculated, as the denominator represents the total number of transactions in a time period when the involved items may have not existed. If, on the other hand, we limit the total transactions to the ones belonging to the items' lifetime, those associations would be now discovered, as they would count on enough support. Another difficulty is the large number of rules that could be generated, tbr which many solutions have been proposed. Using age as an obsolescence tactor for rules helps reduce the number of rules to be presented to the user. In this paper we expand the notion of association rules incorporating time to the frequent itemsets discovered. The concept of temporal support is introduced and. as an example, the known algorithm A priori is modified to incorporate the temporal notions."}
{"_id": "319a23997fa99f96c608bc442cdf36af36746a37", "title": "Liquid Biopsy in Liquid Tumors", "text": "The availability of a minimally invasive patient simple, capable of providing tumor information, represents a valuable clinical tool. The liquid biopsy has the potential to achieve this need. Circulating cell free DNA (ccfDNA), other circulating nucleic acids such as microRNA and circulating tumor cells (CTCs), can be obtained from a peripheral blood sample. Liquid biopsy has been particularly studied in solid tumors, specially of the epitelial origin, such as a pancreatic carcinoma and advanced breast cancer. It has been considerably less applied to the study of non-solid tumors. It represents an important source for diagnosis, prognosis and predictive information. Also it is suitable to evaluate response to therapy and drugs pharmacokinetics. It provides a unique opportunity to evaluate the disease evolution in serial blood samples collection, otherwise difficult to obtain. Liquid biopsy can be rehearsed using different circulating biological fluids such as whole blood, serum, plasma and lymph, as well as, non-circulating fluids such as urine, feces, saliva, bile and accumulated pathological fluids such as ascites. This review summarizes the current status of circulating material analysis in non-solid tunors. It is specially focused on Hodgkin Lymphoma and among Non-Hodgkin Lymphoma, it refers particularly to Diffuse Large B cell Lymphoma, the most common aggressive Non-Hodgkin Lymphoma derived from germinal center B-cells in adults. It further discusses the benefit of liquid biopsy in oncohemtaological diseases and potential clinical applications."}
{"_id": "24ab9442631582eea0cb56b620e2f0c6ec38c49d", "title": "PDA: Privacy-Preserving Data Aggregation in Wireless Sensor Networks", "text": "Providing efficient data aggregation while preserving data privacy is a challenging problem in wireless sensor networks research. In this paper, we present two privacy-preserving data aggregation schemes for additive aggregation functions. The first scheme -cluster-based private data aggregation (CPDA)-leverages clustering protocol and algebraic properties of polynomials. It has the advantage of incurring less communication overhead. The second scheme -Slice-Mix-AggRegaTe (SMART)-builds on slicing techniques and the associative property of addition. It has the advantage of incurring less computation overhead. The goal of our work is to bridge the gap between collaborative data collection by wireless sensor networks and data privacy. We assess the two schemes by privacy-preservation efficacy, communication overhead, and data aggregation accuracy. We present simulation results of our schemes and compare their performance to a typical data aggregation scheme -TAG, where no data privacy protection is provided. Results show the efficacy and efficiency of our schemes. To the best of our knowledge, this paper is among the first on privacy-preserving data aggregation in wireless sensor networks."}
{"_id": "22a8979b53315fad7f98781328cc0326b5147cca", "title": "An ANN-Based Synthesis Model for the Single-Feed Circularly-Polarized Square Microstrip Antenna With Truncated Corners", "text": "An artificial neural network-based synthesis model is proposed for the design of single-feed circularly-polarized square microstrip antenna (CPSMA) with truncated corners. To obtain the training data sets, the resonant frequency and Q-factor of square microstrip antennas are calculated by empirical formulae. Then the size of the truncated corners and the operation frequency with the best axial ratio are obtained. Using the Levenberg-Marquardt (LM) algorithm, a three hidden layered network is trained to achieve an accurate synthesis model. At last, the model is validated by comparing its results with the electromagnetic simulation and measurement. It is extremely useful to antenna engineers for directly obtaining patch physical dimensions of the single-feed CPSMA with truncated corners."}
{"_id": "f6ada42f5b34d60cbd9cc825843a8dc63d229fd8", "title": "Infrapatellar saphenous neuralgia - diagnosis and treatment.", "text": "Persistent anterior knee pain, especially after surgery, can be very frustrating for the patient and the clinician. Injury to the infrapatellar branch of the saphenous nerve (IPS) is not uncommon after knee surgeries and trauma, yet the diagnosis and treatment of IPS neuralgia is not usually taught in pain training programs. In this case report, we describe the anatomy of the saphenous nerve and specifically the infrapatellar saphenous nerve branch; we also discuss the types of surgical trauma, the clinical presentation, the diagnostic modalities, the diagnostic injection technique, and the treatment options. As early as 1945, surgeons were cautioned regarding the potential surgical trauma to the IPS. Although many authors dismissed the nerve damage as unavoidable, the IPS is now recognized as a potential cause of persistent anterior and anteriomedial knee pain. Even more concerning, damage to peripheral nerves such as the IPS has been identified as the cause and potential perpetuating factor for conditions such as complex regional pain syndromes (CRPS). Because the clinical presentation may be vague, it has often been misdiagnosed and underdiagnosed. There is a documented vasomotor instability, but, unfortunately, sympathetic blocks will not address the underlying pathology, and therefore patients often will not respond to this modality, although the correct diagnosis can lead to rapid and gratifying resolution of the pathology. An entity unknown to the clinician is never diagnosed, and so it is important to familiarize pain physicians with IPS neuropathy so that they may be able to offer assistance when this painful condition arises."}
{"_id": "af30913f8aebe68d88da7228ed5d1120da349885", "title": "The Timbre Model", "text": "This paper presents the timbre model, a signal model which has been built to better understand the relationship between the perception of timbre and the musical sounds most commonly associated with timbre. In addition, an extension to the timbre model incorporating expressions is introduced. The presented work therefore has relation to a large field of science, including auditory perception, signal processing, physical models and the acoustics of musical instruments, music expression, and other computer music research. The timbre model is based on a sinusoidal model, and it consists of a spectral envelope, frequencies, a temporal envelope and different irregularity parameters. The paper is divided into four parts: an overview of the research done on the perception of timbre, an overview of the signal processing aspects dealing with sinusoidal modeling, the timbre model, and an introduction of some expressive extensions to the timbre model."}
{"_id": "23a53ad67e71118e62eadb854b3011390dfce2a2", "title": "BreadCrumbs: forecasting mobile connectivity", "text": "Mobile devices cannot rely on a single managed network, but must exploit a wide variety of connectivity options as they travel. We argue that such systems must consider the derivative of connectivity--the changes inherent in movement between separately managed networks, with widely varying capabilities. With predictive knowledge of such changes, devices can more intelligently schedule network usage.\n To exploit the derivative of connectivity, we observe that people are creatures of habit; they take similar paths every day. Our system, BreadCrumbs, tracks the movement of the device's owner, and customizes a predictive mobility model for that specific user. Combined with past observations of wireless network capabilities, BreadCrumbs generates connectivity forecasts. We have built a BreadCrumbs prototype, and demonstrated its potential with several weeks of real-world usage. Our results show that these forecasts are sufficiently accurate, even with as little as one week of training, to provide improved performance with reduced power consumption for several applications."}
{"_id": "3878c832571f81a1f8b171851be303a2e2989832", "title": "Tracking and Connecting Topics via Incremental Hierarchical Dirichlet Processes", "text": "Much research has been devoted to topic detection from text, but one major challenge has not been addressed: revealing the rich relationships that exist among the detected topics. Finding such relationships is important since many applications are interested in how topics come into being, how they develop, grow, disintegrate, and finally disappear. In this paper, we present a novel method that reveals the connections between topics discovered from the text data. Specifically, our method focuses on how one topic splits into multiple topics, and how multiple topics merge into one topic. We adopt the hierarchical Dirichlet process (HDP) model, and propose an incremental Gibbs sampling algorithm to incrementally derive and refine the labels of clusters. We then characterize the splitting and merging patterns among clusters based on how labels change. We propose a global analysis process that focuses on cluster splitting and merging, and a finer granularity analysis process that helps users to better understand the content of the clusters and the evolution patterns. We also develop a visualization process to present the results."}
{"_id": "2b8851338197d3cca75a1c727ef977e9f8b98df6", "title": "Packet Routing in Dynamically Changing Networks: A Reinforcement Learning Approach", "text": "This paper describes the Q-routing algorithm for packet routing, in which a reinforcement learning module is embedded into each node of a switching network. Only local communication is used by each node to keep accurate statistics on which routing decisions lead to minimal delivery times. In simple experiments involving a 36-node, irregularly connected network, Q-routing proves superior to a nonadaptive algorithm based on precomputed shortest paths and is able to route e ciently even when critical aspects of the simulation, such as the network load, are allowed to vary dynamically. The paper concludes with a discussion of the tradeo between discovering shortcuts and maintaining stable policies."}
{"_id": "c38cedbdfd3a5c910b5cf05bae72d5a200db3a1b", "title": "La reconstruction du nid et les coordinations interindividuelles chezBellicositermes natalensis etCubitermes sp. la th\u00e9orie de la stigmergie: Essai d'interpr\u00e9tation du comportement des termites constructeurs", "text": "I. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 ~I. ]:~.ECONSTRUCTION PAR LES Cubi termes sp . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 I . La r e c o n s t r u c t i o n p r o p r e m e n t dire, p. 43. 2. Le c o m p o r t e m e n t en fonc t ion du groupe , p. 46. 3. Les condui tes individuel les , p. 49. 4. Les ac t iv i t4s au t r e s que la r econs t ruc t i on , p. 51. I I [ . RECONSTRUCTION PAR Bell icosi termes natalensis . . . . . . . . . . . . . . . . . . . . . 53 I V . ~V[]~CANISME DE LA CORRELATION DES TACIIES INDIVIDUELLES ET DE LA PRETENDUE REGULATION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 I . L a phase d ' incoordinat ion, p. 57. A. La s i tua t ion s t i m u l a n t e , p. 57 ; B. Les act iv i t~s incoordonn~es , p. 59 ; C. La local isa t ion des d~p6ts de t e r re , p. 60. 2. L a phase de coordinat ion, p. 61. A. La densi t6 cr i t ique des bou le t t e s de te r re et l ' o r i en ta t ion du c o m p o r t e m e n t , p. 61 ; B. La s t i g m e r g i e et les s t i m u l a t i o n s s imul tan6es , p. 62 ; C. N a t u r e des s t imu l i a g i s s a n t s u r les c o n s t r u c t e u r s : les odeurs fo rmes , p. 66 ; D. La t endance l ' un i t4 , p. 67 ; E, R61e de la m~moi re d a n s ia cons t ruc t ion , p. 67 ; F. Essa i de s y n t h ~ s e , p. 68. V. COMPORTEMENT DES TERMITES COMPARE A CELUI D'ANIMAUX SOLITAIRES. 73 VI . CARACTERISTIQUES FONDAMENTALES DU COMPORTEMENT DES INSECTES SOCIAUX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76 V I I . AUTEURS CIT#.S . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80 L~.CENDES ~)ES PLANC~ES I A V I I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8'1"}
{"_id": "a985f8c3b245659d6d80cffb10a5430560bde89c", "title": "3D Depthwise Convolution: Reducing Model Parameters in 3D Vision Tasks", "text": "Standard 3D convolution operations require much larger amounts of memory and computation cost than 2D convolution operations. The fact has hindered the development of deep neural nets in many 3D vision tasks. In this paper, we investigate the possibility of applying depthwise separable convolutions in 3D scenario and introduce the use of 3D depthwise convolution. A 3D depthwise convolution splits a single standard 3D convolution into two separate steps, which would drastically reduce the number of parameters in 3D convolutions with more than one order of magnitude. We experiment with 3D depthwise convolution on popular CNN architectures and also compare it with a similar structure called pseudo-3D convolution. The results demonstrate that, with 3D depthwise convolutions, 3D vision tasks like classification and reconstruction can be carried out with more light-weighted neural networks while still delivering comparable performances."}
{"_id": "83bf2c8dce3521d56ae63171c839094c6beeac0e", "title": "Runtime Models Based on Dynamic Decision Networks: Enhancing the Decision-making in the Domain of Ambient Assisted Living Applications", "text": "Dynamic decision-making for self-adaptive systems (SAS) requires the runtime trade-off of multiple non-functional requirements (NFRs) -aka quality propertiesand the costsbenefits analysis of the alternative solutions. Usually, it requires the specification of utility preferences for NFRs and decisionmaking strategies. Traditionally, these preferences have been defined at design-time. In this paper we develop further our ideas on re-assessment of NFRs preferences given new evidence found at runtime and using dynamic decision networks (DDNs) as the runtime abstractions. Our approach use conditional probabilities provided by DDNs, the concepts of Bayesian surprise and Primitive Cognitive Network Process (P-CNP), for the determination of the initial preferences. Specifically, we present a case study in the domain problem of ambient assisted living (AAL). Based on the collection of runtime evidence, our approach allows the identification of unknown situations at the design stage."}
{"_id": "01a8cfffdd7f4e17d1be08f6c6259cfbbae661f4", "title": "Design of a Data-Driven PID Controller", "text": "Since most processes have nonlinearities, controller design schemes to deal with such systems are required. On the other hand, proportional-integral-derivative (PID) controllers have been widely used for process systems. Therefore, in this paper, a new design scheme of PID controllers based on a data-driven (DD) technique is proposed for nonlinear systems. According to the DD technique, a suitable set of PID parameters is automatically generated based on input/output data pairs of the controlled object stored in the database. This scheme can adjust the PID parameters in an online manner even if the system has nonlinear properties and/or time-variant system parameters. Finally, the effectiveness of the newly proposed control scheme is evaluated on some simulation examples, and a pilot-scale temperature control system."}
{"_id": "e7f9e93dc20a8ee0936ab0b6e65f3fe8b5004154", "title": "Blocked recursive image composition with exclusion zones", "text": "Photo collages are a popular and powerful storytelling mechanism. They are often enhanced with background artwork that sets the theme for the story. However, layout algorithms for photo collage creation typically do not take this artwork into account, which can result in collages where photos overlay important artwork elements. To address this, we extend our previous Blocked Recursive Image Composition (BRIC) method to allow any number of photos to be automatically arranged around preexisting exclusion zones on a canvas (exBRIC). We first generate candidate binary splitting trees to partition the canvas into regions that accommodate both photos and exclusion zones. We use a Cassowary constraint solver to ensure that the desired exclusion zones are not covered by photos. Finally, photo areas, exclusion zones and layout symmetry are evaluated to select the best candidate. This method provides flexible, dynamic and integrated photo layout with background artwork."}
{"_id": "4ac6a51f12417548abefc8ce7aa2a2c7559a7c68", "title": "Data Quality and Data Cleaning: An Overview", "text": "Data quality is a serious concern in any data-driven enterprise, often creating misleading findings during data mining, and causing process disruptions in operational databases. The manifestations of data quality problems can be very expensive- \"losing\" customers, \"misplacing\" billions of dollars worth of equipment, misallocated resources due to glitched forecasts, and so on. Solving data quality problems typically requires a very large investment of time and energy -- often 80% to 90% of a data analysis project is spent in making the data reliable enough that the results can be trusted.In this tutorial, we present a multi disciplinary approach to data quality problems. We start by discussing the meaning of data quality and the sources of data quality problems. We show how these problems can be addressed by a multidisciplinary approach, combining techniques from management science, statistics, database research, and metadata management. Next, we present an updated definition of data quality metrics, and illustrate their application with a case study. We conclude with a survey of recent database research that is relevant to data quality problems, and suggest directions for future research."}
{"_id": "6f4fa2f6f878b081888d2761df47534c7cc09613", "title": "An improved analytical IGBT model for loss calculation including junction temperature and stray inductance", "text": "An improved analytical model suitable for IGBT modules is proposed in this paper to calculate the power losses with high accuracy and short calculation time. In this model, the parameters varying with junction temperature of the modules, such as di/dt in the turn-on period and dv/dt in the turn-off period, are discussed and derived according to several equivalent models. In addition, the parasitic inductance in the circuit including the emitter and collector inductance in the power circuits and the gate inductance in the driving loop are considered in this model. Based on this proposed model, the simulation switching waveforms of collector currents and collector-emitter voltages are provided to verify the model. Meanwhile, the calculated power losses are confirmed to be precise by comparing with measurement results."}
{"_id": "38a9dfdf72d67cea75298cf29d3ea563e9ce3137", "title": "Temporal Segmentation of Egocentric Videos", "text": "The use of wearable cameras makes it possible to record life logging egocentric videos. Browsing such long unstructured videos is time consuming and tedious. Segmentation into meaningful chapters is an important first step towards adding structure to egocentric videos, enabling efficient browsing, indexing and summarization of the long videos. Two sources of information for video segmentation are (i) the motion of the camera wearer, and (ii) the objects and activities recorded in the video. In this paper we address the motion cues for video segmentation. Motion based segmentation is especially difficult in egocentric videos when the camera is constantly moving due to natural head movement of the wearer. We propose a robust temporal segmentation of egocentric videos into a hierarchy of motion classes using a new Cumulative Displacement Curves. Unlike instantaneous motion vectors, segmentation using integrated motion vectors performs well even in dynamic and crowded scenes. No assumptions are made on the underlying scene structure and the method works in indoor as well as outdoor situations. We demonstrate the effectiveness of our approach using publicly available videos as well as choreographed videos. We also suggest an approach to detect the fixation of wearer's gaze in the walking portion of the egocentric videos."}
{"_id": "93f962a46b24030bf4486a77b282f567529e7782", "title": "Compact and power-efficient 5 GHz full-duplex design utilizing the 180\u00b0 ring hybrid coupler", "text": "This paper presents a compact and power-efficient 5 Ghz in-band full-duplex (FD) design in ANSYS HFSS using the 180-degree ring hybrid coupler. The proposed design achieves an excellent isolation of 57dB by taking advantage of destructive interference between two radiating antennas attached to the coupler, leading to a large reduction in self-interference. The design is passive and hence overcomes additional power requirement for adaptive channel estimation. In addition, it has a very workable physical size for the desired frequency of operation. The proposed FD design is therefore compact and power-efficient, which can be used in mobile devices, such as cell phones or tablet/phablet devices for a more flexible and greater contention of scarce RF resources."}
{"_id": "70e6d77b241e64369f2d5d4010d77595e963226d", "title": "TAaaS: Trustworthy Authentication as a Service Based on Trusted Path", "text": "Authentication as a Service (AaaS) provides on-demand delivery of multi-factor authentication (MFA). However, current AaaS has left out of consideration the trustworthiness of user inputs at client evices and the risk of privacy exposure at the AaaS providers. To solve these concerns, we present TAaaS, Trustworthy Authentication as a Service, which offers a trusted path-based MFA service to the service provider in the cloud. TAaaS leverages the hypervisor-based trusted path to ensure the trustworthiness of user inputs, and addresses privacy concerns in the cloud by storing only the irreversible user account information. We implement two end-to-end prototypes and evaluate our work to show its feasibility and security."}
{"_id": "57bf5ec1c47c1af035079f908915535f84f086bc", "title": "OFDM : Basic Concepts , Scope & its Applications", "text": "Orthogonal frequency division multiplexing (OFDM) is a special case of multicarrier transmission where a single DataStream is transmitted over a number of lower rate subcarriers. In July 1998, the IEEE standardization group decided to select OFDM as the basis for their new 5-GHz standard aiming a range of data stream from 6 up to 54 Mbps. This new standard is the first one to use OFDM in packetbased communications. In wireless communication, concept of parallel transmission of symbols is used to achieve high throughput and better transmission quality. Orthogonal Frequency Division Multiplexing (OFDM) is one of the techniques for parallel transmission. The idea of OFDM is to split the total transmission bandwidth into a number of orthogonal subcarriers in order to transmit the symbols using these subcarriers in parallel. In this paper we will discuss the basics of OFDM techniques, role of OFDM in this era, its benefits and losses and also some of its application. Keywords\u2014 Orthogonal frequency division multiplexing(OFDM); BER; ISI; PAPR; DVB; DAB."}
{"_id": "75e39f8ba93ea7fa14ac25cc4b22a8e138deba10", "title": "Graduated Consistency-Regularized Optimization for Multi-graph Matching", "text": "Graph matching has a wide spectrum of computer vision applications such as finding feature point correspondences across images. The problem of graph matching is generally NP-hard, so most existing work pursues suboptimal solutions between two graphs. This paper investigates a more general problem of matching N attributed graphs to each other, i.e. labeling their common node correspondences such that a certain compatibility/affinity objective is optimized. This multigraph matching problem involves two key ingredients affecting the overall accuracy: a) the pairwise affinity matching score between two local graphs, and b) global matching consistency that measures the uniqueness and consistency of the pairwise matching results by different sequential matching orders. Previous work typically either enforces the matching consistency constraints in the beginning of iterative optimization, which may propagate matching error both over iterations and across different graph pairs; or separates score optimizing and consistency synchronization in two steps. This paper is motivated by the observation that affinity score and consistency are mutually affected and shall be tackled jointly to capture their correlation behavior. As such, we propose a novel multigraph matching algorithm to incorporate the two aspects by iteratively approximating the global-optimal affinity score, meanwhile gradually infusing the consistency as a regularizer, which improves the performance of the initial solutions obtained by existing pairwise graph matching solvers. The proposed algorithm with a theoretically proven convergence shows notable efficacy on both synthetic and public image datasets."}
{"_id": "a97c3c663126f0a5fc05d852b63382839e155cf9", "title": "Mental Stress Assessment of ECG Signal using Statistical Analysis of Bio-Orthogonal Wavelet Coefficients", "text": "It is observed that the stress level is function of various statistical parameters like standard deviation, entropy, energy,, mean, Covariance and power of the ECG signals of two states i.e. normal state of mind and stressed state of mind. Further, it is observed that the features extracted are directly from the ECG in frequency domain using db4 wavelet. However, db4 introduces some error on account of db4 wavelet shape. This error in turn amplifies while measuring the features as mentioned above. In order to reduce this error, we propose a Bior 3.9 wavelet family to decompose the ECG signal. The decomposed ECG signal is now analyzed statistically to extract the above feature. Further, in the existing work, all the ECGs are taken by using the 12 leads method. This factor also adds some undue stress to the person under scanner. And this is time consuming too. Therefore, to reduce this complexity, we propose to analyze all above features by using a two lead ECG. Further, a back propagation neural network is trained using the above features as input neurons in order to classify the stress level."}
{"_id": "f71a0b5b1bec6b2eeb1399ed9e704c28ebfe9ac4", "title": "A Better Comparison Summary of Credit Scoring Classification", "text": "The credit scoring aim is to classify the customer credit as defaulter or non-defaulter. The credit risk analysis is more effective with further boosting and smoothing of the parameters of models. The objective of this paper is to explore the credit score classification models with an imputation technique and without imputation technique. However, data availability is low in case of without imputation because of missing values depletion from the large dataset. On the other hand, imputation based dataset classification accuracy with linear method of ANN is better than other models. The comparison of models with boosting and smoothing shows that error rate is better metric than area under curve (AUC) ratio. It is concluded that artificial neural network (ANN) is better alternative than decision tree and logistic regression when data availability is high in dataset. Keywords\u2014Credit score data mining; classification; artifical neural network; imputation"}
{"_id": "5c36857b18bd9440483be679633b30cf9e3704f2", "title": "How to Predict Mood? Delving into Features of Smartphone-Based Data", "text": "Smartphones are increasingly utilized in society and enable scientists to record a wide range of behavioral and environmental information. These information, referred to as Unobtrusive Ecological Momentary Assessment Data, might support prediction procedures regarding the mood level of users and simultaneously contribute to an enhancement of therapy strategies. In this paper, we analyze how the mood level of healthy clients is affected by unobtrusive measures and how this kind of data contributes to the prediction performance of various statistical models (Bayesian methods, Lasso procedures, etc.). We conduct analyses on a non-user and a user level. We then compare the models by utilizing introduced performance measures. Our findings indicate that the prediction performance increases when considering individual users. However, the implemented models only perform slightly better than the introduced mean model. Indicated by feature selection methods, we assume that more meaningful variables regarding the outcome can potentially increase prediction performance."}
{"_id": "e0913f20483033236f2849b875dfea043d9fb714", "title": "A four-electrode low frequency impedance spectroscopy measurement system using the AD5933 measurement chip.", "text": "This paper presents the design of a four electrode impedance measurement circuit dedicated to bioimpedance embedded applications. It extends the use of the AD5933 measurement chip to allow it to operate in a four electrode configuration in order to limit the effects of the parasitic impedances between the medium under test and the electrodes. The circuit has shown a good measurement accuracy on various test circuits. In association with a four microband electrode system it has been successfully used to characterize small physiological samples (50\u00a0\u03bcl) with conductivities ranging from 0.14 to 1.2\u00a0S\u00a0m(-1). It can be used as an alternative bioimpedance measurement approach for embedded applications operating in the four electrode configuration."}
{"_id": "05625dc851d58636c954b7fb932bd450f23f7894", "title": "Highlighting interventions and user differences: informing adaptive information visualization support", "text": "There is increasing evidence that the effectiveness of information visualization techniques can be impacted by the particular needs and abilities of each user. This suggests that it is important to investigate information visualization systems that can dynamically adapt to each user. In this paper, we address the question of how to adapt. In particular, we present a study to evaluate a variety of visual prompts, called \"interventions\", that can be performed on a visualization to help users process it. Our results show that some of the tested interventions perform better than a condition in which no intervention is provided, both in terms of task performance as well as subjective user ratings. We also discuss findings on how intervention effectiveness is influenced by individual differences and task complexity."}
{"_id": "a16e2a64fea3ec99dd34bdc955e299921a4e6b83", "title": "A RFID case-based logistics resource management system for managing order-picking operations in warehouses", "text": "In the supply chain, a warehouse is an essential component for linking the chain partners. It is necessary to allocate warehouse resources efficiently and effectively to enhance the productivity and reduce the operation costs of the warehouse. Therefore, warehouse management systems (WMSs) have been developed for handling warehouse resources and monitoring warehouse operations. However, it is difficult to update daily operations of inventory level, locations of forklifts and stock keeping units (SKUs) in realtime by using the bar-code-based or manual-based warehouse management systems. In this paper, RFID technology is adopted to facilitate the collection and sharing of data in a warehouse. Tests are performed for evaluating the reading performance of both the active and passive RFID apparatus. With the help of the testing results, the efficient radio frequency cover ranges of the readers are examined for formulating a radio frequency identification case-based logistics resource management system (R-LRMS). The capabilities of R-LRMS are demonstrated in GSL Limited. Three objectives are achieved: (i) a simplification of RFID adoption procedure, (ii) an improvement in the visibility of warehouse operations and (iii) an enhancement of the productivity of the warehouse. The successful case example proved the feasibility of R-LRMS in real working practice. 2008 Elsevier Ltd. All rights reserved."}
{"_id": "9f829eb41c2ecb850fe20329e7da06eb369151f9", "title": "Deep Representation Learning with Target Coding", "text": "We consider the problem of learning deep representation when target labels are available. In this paper, we show that there exists intrinsic relationship between target coding and feature representation learning in deep networks. Specifically, we found that distributed binary code with error correcting capability is more capable of encouraging discriminative features, in comparison to the 1-of-K coding that is typically used in supervised deep learning. This new finding reveals additional benefit of using error-correcting code for deep model learning, apart from its well-known error correcting property. Extensive experiments are conducted on popular visual benchmark datasets."}
{"_id": "beb5dd7427bb7dc9c48bd2b2d0515bebea7e9259", "title": "A Framework for Smart Servicescape : A Case of Smart Home Service Experience", "text": "The rapid development of IoT technology has accelerated the growth of smart services. Despite the proliferation of smart services, academic research is still in its early stage particularly in terms of service experience and service design. Concerning a service experience viewpoint, it is essential to consider the context and environment of smart services, namely \u201csmart servicescape,\u201d as this can influence users\u2019 entire experience. Moreover, the smart servicescape will have different characteristics due to the convergence of online and offline connected environments. With this background, this study aimed to propose a framework for the smart servicescape by identifying new dimensions that reflect the characteristics of smart services. Accordingly, an initial analytic framework of service experience blueprint was established on the basis of the conventional servicescape and service blueprinting. Twenty movie clips on smart home services officially produced by ICT corporations were collected, were analyzed through grounded theory, and were classified according to the analytic framework. Through a series of qualitative analysis, the framework structure was improved to make it more suitable for the smart servicescape. Finally, this study proposed a framework for the smart servicescape derived from the smart home service experience blueprint. The values of this framework can be identified in two aspects: (1) by identifying new dimensions to reflect the characteristics of smart services such as Smart device, Datascape, and Connected scape; and (2) by suggesting the structure of the service experience blueprint infused with the perspective of service experience, which consists of service encounters and the servicescape."}
{"_id": "1f63f2768742919073a3821bdb5a2c033dcd7322", "title": "Decision-Theoretic User Interface Generation", "text": "For decades, researchers have debated the pros and cons of adaptive user interfaces with enthusiastic AI practitioners often confronting skeptical HCI experts (Shneiderman & Maes, 1997). This paper summarizes the SUPPLE project\u2019s six years of work analyzing the characteristics of successful adaptive interfaces and developing efficient algorithms for their automatic generation (Gajos & Weld, 2004; Gajos et al., 2005; Gajos & Weld, 2005; Gajos et al., 2006; Gajos, Wobbrock, & Weld, 2007; Gajos et al., 2008; Gajos, Wobbrock, & Weld, 2008). We conclude that personalized user interfaces, which are adapted to a person\u2019s devices, tasks, preferences and abilities, can improve user satisfaction and performance. Further, we demonstrate that automatic generation of these interfaces is computationally feasible. We developed two systems which autonomously construct a broad range of personalized adaptive interfaces: 1) SUPPLE/ARNAULD uses decision-theoretic optimization to automatically generate user interfaces adapted to target device, usage patterns, and a learned model of user preferences, and 2) SUPPLE++ first performs a one-time assessment of a person\u2019s motor abilities and then automatically generates user interfaces adapted to that user\u2019s abilities. Our experiments show that these automatically generated interfaces significantly improve speed, accuracy and satisfaction of users with motor impairments compared to manufacturers\u2019 defaults. We also provide the first characterization of the design space of adaptive interfaces and demonstrate how such interfaces can significantly improve the quality and efficiency of daily interactions for typical users."}
{"_id": "0930a158b9379e8c9fc937a38c9f0d0408d6d900", "title": "Efficient High-Quality Volume Rendering of SPH Data", "text": "High quality volume rendering of SPH data requires a complex order-dependent resampling of particle quantities along the view rays. In this paper we present an efficient approach to perform this task using a novel view-space discretization of the simulation domain. Our method draws upon recent work on GPU-based particle voxelization for the efficient resampling of particles into uniform grids. We propose a new technique that leverages a perspective grid to adaptively discretize the view-volume, giving rise to a continuous level-of-detail sampling structure and reducing memory requirements compared to a uniform grid. In combination with a level-of-detail representation of the particle set, the perspective grid allows effectively reducing the amount of primitives to be processed at run-time. We demonstrate the quality and performance of our method for the rendering of fluid and gas dynamics SPH simulations consisting of many millions of particles."}
{"_id": "8185596da0938aa976fcf255fd983399177bb5ea", "title": "A survey of event detection techniques in online social networks", "text": "The online social networks (OSNs) have become an important platform for detecting real-world event in recent years. These real-world events are detected by analyzing huge social-stream data available on different OSN platforms. Event detection has become significant because it contains substantial information which describes different scenarios during events or crisis. This information further helps to enable contextual decision making, regarding the event location, content and the temporal specifications. Several studies exist, which offers plethora of frameworks and tools for detecting and analyzing events used for applications like crisis management, monitoring and predicting events in different OSN platforms. In this paper, a survey is done for event detection techniques in OSN based on social text streams\u2014newswire, web forums, emails, blogs and microblogs, for natural disasters, trending or emerging topics and public opinion-based events. The work done and the open problems are explicitly mentioned for each social stream. Further, this paper elucidates the list of event detection tools available for the researchers."}
{"_id": "023cc7f9f3544436553df9548a7d0575bb309c2e", "title": "Bag of Tricks for Efficient Text Classification", "text": "This paper explores a simple and efficient baseline for text classification. Our experiments show that our fast text classifier fastText is often on par with deep learning classifiers in terms of accuracy, and many orders of magnitude faster for training and evaluation. We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute."}
{"_id": "2f399f8d56d91352ad02b630f025e183b5cc08d4", "title": "A Matrix Decomposition based Webshell Detection Method", "text": "WebShell is a web based network backdoor. With the help of WebShells, the hacker can take any control of the web services illegally. The current method of detecting WebShells is just matching the eigenvalues or detecting the produced flow or services, which is hard to find new kinds of WebShells. To solve these problems, this paper analyzes the different features of a page and proposes a novel matrix decomposition based WebShell detection algorithm. The algorithm is a supervised machine learning algorithm. By analyzing and learning features of known existing and non-existing WebShell pages, the algorithm can make predictions on the unknown pages. The experimental results show that, compared with traditional detection methods, this algorithm spends less time, has higher accuracy and recall rate, and can detect new kinds of WebShells with a certain probability, overcoming the shortcomings of the traditional feature matching based method, improving the accuracy and recalling rate of WebShell detection."}
{"_id": "92f072dbcd15421ed28c8c83ccb02fdb5ab6a462", "title": "Webshell detection techniques in web applications", "text": "With widely adoption of online services, malicious web sites have become a malignant tumor of the Internet. Through system vulnerabilities, attackers can upload malicious files (which are also called webshells) to web server to create a backdoor for hackers' further attacks. Therefore, finding and detecting webshell inside web application source code are crucial to secure websites. In this paper, we propose a novel method based on the optimal threshold values to identify files that contain malicious codes from web applications. Our detection system will scan and look for malicious codes inside each file of the web application, and automatically give a list of suspicious files and a detail log analysis table of each suspicious file for administrators to check further. The Experimental results show that our approach is applicable to identify webshell more efficient than some other approaches."}
{"_id": "b1b3c8f907db6748c373bf1d15ec0c15bb2307dc", "title": "Countering kernel rootkits with lightweight hook protection", "text": "Kernel rootkits have posed serious security threats due to their stealthy manner. To hide their presence and activities, many rootkits hijack control flows by modifying control data or hooks in the kernel space. A critical step towards eliminating rootkits is to protect such hooks from being hijacked. However, it remains a challenge because there exist a large number of widely-scattered kernel hooks and many of them could be dynamically allocated from kernel heap and co-located together with other kernel data. In addition, there is a lack of flexible commodity hardware support, leading to the socalled protection granularity gap -- kernel hook protection requires byte-level granularity but commodity hardware only provides page level protection.\n To address the above challenges, in this paper, we present HookSafe, a hypervisor-based lightweight system that can protect thousands of kernel hooks in a guest OS from being hijacked. One key observation behind our approach is that a kernel hook, once initialized, may be frequently \"read\"-accessed, but rarely write\"-accessed. As such, we can relocate those kernel hooks to a dedicated page-aligned memory space and then regulate accesses to them with hardware-based page-level protection. We have developed a prototype of HookSafe and used it to protect more than 5,900 kernel hooks in a Linux guest. Our experiments with nine real-world rootkits show that HookSafe can effectively defeat their attempts to hijack kernel hooks. We also show that HookSafe achieves such a large-scale protection with a small overhead (e.g., around 6% slowdown in performance benchmarks)."}
{"_id": "93f989be108cb8e41045e2acaf0ab0b32811730c", "title": "PERFORMANCE ANALYSIS OF CHAOTIC CHIRP SPREAD SPECTRUM SYSTEM IN MULTIPATH ENVIRONMENT", "text": "Wireless channels often include multipath propagation, in which the signal has more than one path from the transmitter to the receiver. Atmospheric reflection or refraction, as well as reflection from the ground or from objects such as buildings may cause this phenomena. The aim of this research is to study the effects of multipath environment on the performance of a proposed chaotic chirp spread spectrum system (CCSSS). The benchmark performance of the proposed system is investigated in term of bit error rate (BER) under multipath environment. A Quadrature Phase Shift Keying (QPSK) modulation scheme and a matched filter based on adaptive threshold decision were used in this work. To assess the performance of this system, simulations were performed to compare the proposed system with the traditional frequency modulation differential chaotic shift keying (DCSK) spread spectrum system. The proposed system may be considered as a well qualified communication system in multipath environment."}
{"_id": "4d08dd7b55c2084642c1ad3aa6999851a8be8379", "title": "From goals to components: a combined approach to self-management", "text": "Autonomous or semi-autonomous systems are deployed in environments where contact with programmers or technicians is infrequent or undesirable. To operate reliably, such systems should be able to adapt to new circumstances on their own. This paper describes our combined approach for adaptable software architecture and task synthesis from high-level goals, which is based on a three-layer model. In the uppermost layer, reactive plans are generated from goals expressed in a temporal logic. The middle layer is responsible for plan execution and assembling a configuration of domain-specific software components, which reside in the lowest layer. Moreover, the middle layer is responsible for selecting alternative components when the current configuration is no longer viable for the circumstances that have arisen. The implementation demonstrates that the approach enables us to handle non-determinism in the environment and unexpected failures in software components."}
{"_id": "7556f4ec9e5bd0e3883aae2a70502d7619393ecb", "title": "Support Vector Machines and Kernel Methods: The New Generation of Learning Machines", "text": "algorithms, utilize techniques from optimization, statistics, and functional analysis to achieve maximal generality, flexibility, and performance. These algorithms are different from earlier techniques used in machine learning in many respects: For example, they are explicitly based on a theoretical model of learning rather than on loose analogies with natural learning systems or other heuristics. They come with theoretical guarantees about their performance and have a modular design that makes it possible to separately implement and analyze their components. They are not affected by the problem of local minima because their training amounts to convex optimization. In the last decade, a sizable community of theoreticians and practitioners has formed around these methods, and a number of practical applications have been realized. Although the research is not concluded, already now kernel methods are considered the state of the art in several machine learning tasks. Their ease of use, theoretical appeal, and remarkable performance have made them the system of choice for many learning problems. Successful applications range from text categorization to handwriting recognition to classification of geneexpression data."}
{"_id": "c1fc7adefd1f3c99978098e4d748fd9963daa1e8", "title": "A survey on unmanned aerial vehicle collision avoidance systems", "text": "Collision avoidance is a key factor in enabling the integration of unmanned aerial vehicle into real life use, whether it is in military or civil application. For a long time there have been a large number of works to address this problem; therefore a comparative summary of them would be desirable. This paper presents a survey on the major collision avoidance systems developed in up to date publications. Each collision avoidance system contains two main parts: sensing and detection, and collision avoidance. Based on their characteristics each part is divided into different categories; and those categories are explained, compared and discussed about advantages and disadvantages in this paper."}
{"_id": "d8a99802f0606063a7b55be4e898f2a0ab8f5264", "title": "No-reference perceptual quality assessment of JPEG compressed images", "text": "Human observers can easily assess the quality of a distorted image without examining the original image as a reference. By contrast, designing objective No-Reference (NR) quality measurement algorithms is a very difficult task. Currently, NR quality assessment is feasible only when prior knowledge about the types of image distortion is available. This research aims to develop NR quality measurement algorithms for JPEG compressed images. First, we established a JPEG image database and subjective experiments were conducted on the database. We show that Peak Signalto-Noise Ratio (PSNR), which requires the reference images, is a poor indicator of subjective quality. Therefore, tuning an NR measurement model towards PSNR is not an appropriate approach in designing NR quality metrics. Furthermore, we propose a computational and memory efficient NR quality assessment model for JPEG images. Subjective test results are used to train the model, which achieves good quality prediction performance. A Matlab implementation of the proposed method is available at http://anchovy.ece.ute xas.edu/ \u0303zwang/research/nr jpeg quality/index.html ."}
{"_id": "7fc3119ab077617e93a8b5b421857820a1ac745f", "title": "\u201cUnexpected Item in the Bagging Area\u201d: Anomaly Detection in X-Ray Security Images", "text": "The role of anomaly detection in X-ray security imaging, as a supplement to targeted threat detection, is described, and a taxonomy of anomaly types in this domain is presented. Algorithms are described for detecting appearance anomalies of shape, texture, and density, and semantic anomalies of object category presence. The anomalies are detected on the basis of representations extracted from a convolutional neural network pre-trained to identify object categories in photographs, from the final pooling layer for appearance anomalies, and from the logit layer for semantic anomalies. The distribution of representations in normal data is modeled using high-dimensional, full-covariance, Gaussians, and anomalies are scored according to their likelihood relative to those models. The algorithms are tested on X-ray parcel images using stream-of-commerce data as the normal class, and parcels with firearms present the examples of anomalies to be detected. Despite the representations being learned for photographic images and the varied contents of stream-of-commerce parcels, the system, trained on stream-of-commerce images only, is able to detect 90% of firearms as anomalies, while raising false alarms on 18% of stream-of-commerce."}
{"_id": "53d1f88fea31dd84587faac9819c6f27e6c03821", "title": "Aiding in the Treatment of Low Back Pain by a Fuzzy Linguistic Web System", "text": "Low back pain affects a large proportion of the adult population at some point in their lives and has a major economic and social impact. To soften this impact, one possible solution is to make use of recommender systems, which have already been introduced in several health fields. In this paper, we present TPLUFIB-WEB, a novel fuzzy linguistic Web system that uses a recommender system to provide personalized exercises to patients with low back pain problems and to offer recommendations for their prevention. This system may be useful to reduce the economic impact of low back pain, help professionals to assist patients, and inform users on low back pain prevention measures. A strong part of TPLUFIB-WEB is that it satisfies the Web quality standards proposed by the Health On the Net Foundation (HON), Official College of Physicians of Barcelona, and Health Quality Agency of the Andalusian Regional Government, endorsing the health information provided and warranting the trust of users."}
{"_id": "21940188396e1ab4f47db30cf6fd289eee2a930a", "title": "Revisiting Batch Normalization For Practical Domain Adaptation", "text": "Deep neural networks (DNN) have shown unprecedented success in various computer vision applications such as image classification and object detection. However, it is still a common annoyance during the training phase, that one has to prepare at least thousands of labeled images to fine-tune a network to a specific domain. Recent study (Tommasi et al. 2015) shows that a DNN has strong dependency towards the training dataset, and the learned features cannot be easily transferred to a different but relevant task without finetuning. In this paper, we propose a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN) to increase the generalization ability of a DNN. By modulating the statistics in all Batch Normalization layers across the network, our approach achieves deep adaptation effect for domain adaptation tasks. In contrary to other deep learning domain adaptation methods, our method does not require additional components, and is parameter-free. It archives stateof-the-art performance despite its surprising simplicity. Furthermore, we demonstrate that our method is complementary with other existing methods. Combining AdaBN with existing domain adaptation treatments may further improve model performance."}
{"_id": "d80e7da055f9c25e29f732d0a829daf172eb1fa0", "title": "Diffusion of innovations in service organizations: systematic review and recommendations.", "text": "This article summarizes an extensive literature review addressing the question, How can we spread and sustain innovations in health service delivery and organization? It considers both content (defining and measuring the diffusion of innovation in organizations) and process (reviewing the literature in a systematic and reproducible way). This article discusses (1) a parsimonious and evidence-based model for considering the diffusion of innovations in health service organizations, (2) clear knowledge gaps where further research should be focused, and (3) a robust and transferable methodology for systematically reviewing health service policy and management. Both the model and the method should be tested more widely in a range of contexts."}
{"_id": "f3339612cba41ddb7acfe299d2d38d13e611e1f9", "title": "The evolutionary significance of polyploidy", "text": "Polyploidy, or the duplication of entire genomes, has been observed in prokaryotic and eukaryotic organisms, and in somatic and germ cells. The consequences of polyploidization are complex and variable, and they differ greatly between systems (clonal or non-clonal) and species, but the process has often been considered to be an evolutionary 'dead end'. Here, we review the accumulating evidence that correlates polyploidization with environmental change or stress, and that has led to an increased recognition of its short-term adaptive potential. In addition, we discuss how, once polyploidy has been established, the unique retention profile of duplicated genes following whole-genome duplication might explain key longer-term evolutionary transitions and a general increase in biological complexity."}
{"_id": "3343d1d78f2a14045b52b71428efaf43073d616d", "title": "Dietary energy density is associated with obesity and the metabolic syndrome in U.S. adults.", "text": "OBJECTIVE\nRising obesity rates have been linked to the consumption of energy-dense diets. We examined whether dietary energy density was associated with obesity and related disorders including insulin resistance and the metabolic syndrome.\n\n\nRESEARCH DESIGN AND METHODS\nWe conducted a cross-sectional study using nationally representative data of U.S. adults > or =20 years of age from the 1999-2002 National Health and Nutrition Examination Survey (n = 9,688). Dietary energy density was calculated based on foods only. We used a series of multivariate linear regression models to determine the independent association between dietary energy density, obesity measures (BMI [in kilograms per meters squared] and waist circumference [in centimeters]), glycemia, or insulinemia. We used multivariate Poisson regression models to determine the independent association between dietary energy density and the metabolic syndrome as defined by the National Cholesterol and Education Program (Adult Treatment Panel III).\n\n\nRESULTS\nDietary energy density was independently and significantly associated with higher BMI in women (beta = 0.44 [95% CI 0.14-0.73]) and trended toward a significant association in men (beta = 0.37 [-0.007 to 0.74], P = 0.054). Dietary energy density was associated with higher waist circumference in women (beta = 1.11 [0.42-1.80]) and men (beta = 1.33 [0.46-2.19]). Dietary energy density was also independently associated with elevated fasting insulin (beta = 0.65 [0.18-1.12]) and the metabolic syndrome (prevalence ratio = 1.10 [95% CI 1.03-1.17]).\n\n\nCONCLUSIONS\nDietary energy density is an independent predictor of obesity, elevated fasting insulin levels, and the metabolic syndrome in U.S. adults. Intervention studies to reduce dietary energy density are warranted."}
{"_id": "0fe3a88a22a018651e4886d547488e5a3ce224fc", "title": "Performance Evaluation and Optimization of Communication Infrastructure for the Next Generation Air Transportation System", "text": "Automatic dependent surveillance-broadcast (ADS-B) is one of the fundamental surveillance technologies to improve the safety, capacity, and efficiency of the national airspace system. ADS-B shares its frequency band with current radar systems that use the same 1,090 MHz band. The coexistence of radar systems and ADS-B systems is a key issue to detect and resolve conflicts in the next generation air transportation system (NextGen). This paper focuses on the performance evaluation of ADS-B with existing radar systems and performance optimization of ADS-B systems to improve the safety and efficiency of conflict detection and resolution in NextGen. We have developed a simulation environment which models the complex interplay among the air traffic load, the radar systems, the ADS-B systems, and the wireless channel. A simple model is used to derive an analytical expression for a performance metric of ADS-B. This model is then used to design an adaptive ADS-B protocol for maximizing the information coverage while guaranteeing reliable and timely communication in air traffic surveillance networks. Simulation results show that the effect of ADS-B interference on the current radar system is negligible. The operational ability of ADS-B meets the performance requirements of conflict detection and resolution in air traffic control. However, upgrades are required in the current radar system for operation within an ADS-B environment since the current radars can significantly degrade the ADS-B performance. Numerical results indicate that the proposed adaptive protocol has the potential to improve the performance of conflict detection and resolution in air traffic control."}
{"_id": "d70d7990a2165f78456b28d4cb04cbf9188dcd9d", "title": "Cross-Validation and the Estimation of Conditional Probability Densities", "text": "Many practical problems, especially some connected with forecasting, require nonparametric estimation of conditional densities from mixed data. For example, given an explanatory data vector X for a prospective customer, with components that could include the customer\u2019s salary, occupation, age, sex, marital status, and address, a company might wish to estimate the density of the expenditure, Y , that could be made by that person, basing the inference on observations of (X,Y) for previous clients. Choosing appropriate smoothing parameters for this problem can be tricky, not in the least because plug-in rules take a particularly complex form in the case of mixed data. An obvious difficulty is that there exists no general formula for the optimal smoothing parameters. More insidiously, and more seriously, it can be difficult to determine which components of X are relevant to the problem of conditional inference. For example, if the jth component of X is independent of Y , then that component is irrelevant to estimating the density of Y given X, and ideally should be dropped before conducting inference. In this article we show that cross-validation overcomes these difficulties. It automatically determines which components are relevant and which are not, through assigning large smoothing parameters to the latter and consequently shrinking them toward the uniform distribution on the respective marginals. This effectively removes irrelevant components from contention, by suppressing their contribution to estimator variance; they already have very small bias, a consequence of their independence of Y . Cross-validation also yields important information about which components are relevant; the relevant components are precisely those that cross-validation has chosen to smooth in a traditional way, by assigning them smoothing parameters of conventional size. Indeed, cross-validation produces asymptotically optimal smoothing for relevant components, while eliminating irrelevant components by oversmoothing. In the problem of nonparametric estimation of a conditional density, cross-validation comes into its own as a method with no obvious peers."}
{"_id": "cf0545d9f171c8e2d88aed13cd7ad60ff2ab8fb5", "title": "How Much Attention Do You Need? A Granular Analysis of Neural Machine Translation Architectures", "text": "ACL 2018 \u2022 Baseline: 6 layer Transformer, 512 hidden units \u2022 Data sets: IWSLT'16, WMT'17 \u2022 Metrics: BLEU (and METEOR in the paper) \u2022 3 runs, reporting mean and standard deviation Encoder/Decoder Combinations \u2022 Architecture Definition Language \u2022 Allows easy experimentation of Neural Machine Translation (NMT) architecture variations \u2022 Detailed analysis of NMT architecture combinations \u2022 Multiple source attention layers and residual feedforward layer are key \u2022 Self-attention more important on the source side Introduction"}
{"_id": "64588554a7fd9b3408aa78c3a111b0fae432a651", "title": "Relationship of personality to performance motivation: a meta-analytic review.", "text": "This article provides a meta-analysis of the relationship between the five-factor model of personality and 3 central theories of performance motivation (goal-setting, expectancy, and self-efficacy motivation). The quantitative review includes 150 correlations from 65 studies. Traits were organized according to the five-factor model of personality. Results indicated that Neuroticism (average validity = -.31) and Conscientiousness (average validity = .24) were the strongest and most consistent correlates of performance motivation across the 3 theoretical perspectives. Results further indicated that the validity of 3 of the Big Five traits--Neuroticism, Extraversion, and Conscientiousness--generalized across studies. As a set, the Big Five traits had an average multiple correlation of .49 with the motivational criteria, suggesting that the Big Five traits are an important source of performance motivation."}
{"_id": "b77c26b72935309ff78a953008ad9907f991ec07", "title": "Personality and job performance: the Big Five revisited.", "text": "Prior meta-analyses investigating the relation between the Big 5 personality dimensions and job performance have all contained a threat to construct validity, in that much of the data included within these analyses was not derived from actual Big 5 measures. In addition, these reviews did not address the relations between the Big 5 and contextual performance. Therefore, the present study sought to provide a meta-analytic estimate of the criterion-related validity of explicit Big 5 measures for predicting job performance and contextual performance. The results for job performance closely paralleled 2 of the previous meta-analyses, whereas analyses with contextual performance showed more complex relations among the Big 5 and performance. A more critical interpretation of the Big 5-performance relationship is presented, and suggestions for future research aimed at enhancing the validity of personality predictors are provided."}
{"_id": "e832a69d84e612af3d7fff75027e0e018ed5e836", "title": "Common method biases in behavioral research: a critical review of the literature and recommended remedies.", "text": "Interest in the problem of method biases has a long history in the behavioral sciences. Despite this, a comprehensive summary of the potential sources of method biases and how to control for them does not exist. Therefore, the purpose of this article is to examine the extent to which method biases influence behavioral research results, identify potential sources of method biases, discuss the cognitive processes through which method biases influence responses to measures, evaluate the many different procedural and statistical techniques that can be used to control method biases, and provide recommendations for how to select appropriate procedural and statistical remedies for different types of research settings."}
{"_id": "3d040abaa8c0b88098916a9b1a01294c6713a424", "title": "The happy personality: a meta-analysis of 137 personality traits and subjective well-being.", "text": "This meta-analysis used 9 literature search strategies to examine 137 distinct personality constructs as correlates of subjective well-being (SWB). Personality was found to be equally predictive of life satisfaction, happiness, and positive affect, but significantly less predictive of negative affect. The traits most closely associated with SWB were repressive-defensiveness, trust, emotional stability, locus of control-chance, desire for control, hardiness, positive affectivity, private collective self-esteem, and tension. When personality traits were grouped according to the Big Five factors, Neuroticism was the strongest predictor of life satisfaction, happiness, and negative affect. Positive affect was predicted equally well by Extraversion and Agreeableness. The relative importance of personality for predicting SWB, how personality might influence SWB, and limitations of the present review are discussed."}
{"_id": "89b013c229c2ed40fd6c38f96f0bf5c0afd7531f", "title": "Approximate 32-bit floating-point unit design with 53% power-area product reduction", "text": "The floating-point unit is one of the most common building block in any computing system and is used for a huge number of applications. By combining two state-of-the-art techniques of imprecise hardware, namely Gate-Level Pruning and Inexact Speculative Adder, and by introducing a novel Inexact Speculative Multiplier architecture, three different approximate FPUs and one reference IEEE-754 compliant FPU have been integrated in a 65 nm CMOS process within a low-power multi-core processor. Silicon measurements show up to 27% power, 36% area and 53 % power-area product savings compared to the IEEE-754 single-precision FPU. Accuracy loss has been evaluated with a high-dynamic-range image tone-mapping algorithm, resulting in small but non-visible errors with image PSNR value of 90 dB."}
{"_id": "bf06a18d7c067d9190ff02c439ef0ad40017be86", "title": "Meandered Variable Pitch angle Printed Quadrifilar Helix Antenna", "text": "A new method of Meandered Variable Pitch angle Printed Quadrifilar Helix Antenna (MVPQHA) is described. This type of antenna helps in reducing the size of the Printed Quadrifilar Helix Antenna (PQHA) and makes the bandwidth of the antenna broader. The new type of antenna is compared with other existing types of Printed Quadrifilar helix antenna in terms of size, return loss and bandwidth."}
{"_id": "c68d59944b0c06ef8ec4e11cc32fbf5cad6a76e6", "title": "Need Satisfaction , Motivation , and Well-Being in the Work Organizations of a Former Eastern Bloc Country : A Cross-Cultural Study of Self-Determination", "text": "Past studies in U.S. work organizations have supported a model derived from self-determination theory in which autonomy-supportive work climates predict satisfaction of the intrinsic needs for competence, autonomy, and relatedness, which in turn predict task motivation and psychological adjustment on the job. To test this model cross-culturally, the authors studied employees of state-owned companies in Bulgaria, a country that has traditionally had a central-planning economy, a totalitarian political system, and collectivist values. A sample from a privately owned American corporation was used for comparison purposes. Results using structural equation modeling suggested that the model fit the data from each country, that the constructs were equivalent across countries, and that some paths of the structural model fit equivalently for the two countries but that county moderated the other paths."}
{"_id": "7d719e345661a1aaf2b0dd032a9c0f7f395b059d", "title": "WaveletNet: Logarithmic Scale Efficient Convolutional Neural Networks for Edge Devices", "text": "We present a logarithmic-scale efficient convolutional neural network architecture for edge devices, named WaveletNet. Our model is based on the well-known depthwise convolution, and on two new layers, which we introduce in this work: a wavelet convolution and a depthwise fast wavelet transform. By breaking the symmetry in channel dimensions and applying a fast algorithm, WaveletNet shrinks the complexity of convolutional blocks by an O(logD/D) factor, where D is the number of channels. Experiments on CIFAR-10 and ImageNet classification show superior and comparable performances of WaveletNet compared to state-of-the-art models such as MobileNetV2."}
{"_id": "c8c63fcadf4cd86114d38c854c8e5f0316d6122f", "title": "Random Projection Based Representations for Learning Policies in Deterministic Atari Games", "text": "Recent advances in sample efficient reinforcement learning algorithms in quasi-deterministic environments highlight the requirement for computationally inexpensive visual representations. Here we investigate non-parametric dimensionality reduction techniques based on random linear transformations and we provide empirical evidence on the importance of high-variance projections using sparse random matrices in the context of episodic controllers learning deterministic policies. We also propose a novel Maximum-Variance Random Projection and improve on the performance of the original Model-Free Episodic Control results with respect to both sample efficiency and final average score."}
{"_id": "2ec3a0d6c71face777138f7cdc2e44d6762d23f5", "title": "Unsupervised Meta-Learning for Reinforcement Learning", "text": "Meta-learning is a powerful tool that builds on multi-task learning to learn how to quickly adapt a model to new tasks. In the context of reinforcement learning, meta-learning algorithms can acquire reinforcement learning procedures to solve new problems more efficiently by meta-learning prior tasks. The performance of meta-learning algorithms critically depends on the tasks available for meta-training: in the same way that supervised learning algorithms generalize best to test points drawn from the same distribution as the training points, meta-learning methods generalize best to tasks from the same distribution as the meta-training tasks. In effect, meta-reinforcement learning offloads the design burden from algorithm design to task design. If we can automate the process of task design as well, we can devise a meta-learning algorithm that is truly automated. In this work, we take a step in this direction, proposing a family of unsupervised meta-learning algorithms for reinforcement learning. We describe a general recipe for unsupervised meta-reinforcement learning, and describe an effective instantiation of this approach based on a recently proposed unsupervised exploration technique and model-agnostic meta-learning. We also discuss practical and conceptual considerations for developing unsupervised meta-learning methods. Our experimental results demonstrate that unsupervised meta-reinforcement learning effectively acquires accelerated reinforcement learning procedures without the need for manual task design, significantly exceeds the performance of learning from scratch, and even matches performance of meta-learning methods that use hand-specified task distributions."}
{"_id": "50f2f7166a89e1ef628ab372edc53df77109b9fa", "title": "A Survey on Visual Approaches for Analyzing Scientific Literature and Patents", "text": "The increasingly large number of available writings describing technical and scientific progress, calls for advanced analytic tools for their efficient analysis. This is true for many application scenarios in science and industry and for different types of writings, comprising patents and scientific articles. Despite important differences between patents and scientific articles, both have a variety of common characteristics that lead to similar search and analysis tasks. However, the analysis and visualization of these documents is not a trivial task due to the complexity of the documents as well as the large number of possible relations between their multivariate attributes. In this survey, we review interactive analysis and visualization approaches of patents and scientific articles, ranging from exploration tools to sophisticated mining methods. In a bottom-up approach, we categorize them according to two aspects: (a) data type (text, citations, authors, metadata, and combinations thereof), and (b) task (finding and comparing single entities, seeking elementary relations, finding complex patterns, and in particular temporal patterns, and investigating connections between multiple behaviours). Finally, we identify challenges and research directions in this area that ask for future investigations."}
{"_id": "c474c626f91fa7d5a367347e18330e23c9cbdc04", "title": "Automatic number plate recognition system using modified stroke width transform", "text": "Automatic number plate recognition(ANPR) techniques occupy a significant role in intelligent traffic management systems. Most of the existing ANPR systems do not perform well under conditions like high vehicle speed, varying illuminations, changing back ground etc. In this paper we propose an ANPR technique which efficiently solves the above issues using the stroke width transform. In our method, first we pre-process the image to remove blurring artifacts. Then the significant edges in the image are detected and a new image is formed by grouping the connecting rays. This is followed by morphological processing which detects the letters in the image. Then an appropriate Optical Character Recognizer (OCR) recovers the final result. Experimental results show that the proposed method work effectively under varying light conditions and even in the presence of shadows."}
{"_id": "f9a346913ba335e4f776182b7ab69d759f36b80c", "title": "Colour image segmentation using the self-organizing map and adaptive resonance theory", "text": "We propose a new competitive-learning neural network model for colour image segmentation. The model, which is based on the adaptive resonance theory (ART) of Carpenter and Grossberg and on the self-organizing map (SOM) of Kohonen, overcomes the limitations of (i) the stability\u2013plasticity trade-offs in neural architectures that employ ART; and (ii) the lack of on-line learning property in the SOM. In order to explore the generation of a growing feature map using ART and to motivate the main contribution, we first present a preliminary experimental model, SOMART, based on Fuzzy ART. Then we propose the new model, SmART, that utilizes a novel lateral control of plasticity to resolve the stability\u2013plasticity problem. SmART has been experimentally found to perform well in RGB colour space, and is believed to be more coherent than Fuzzy ART. q 2005 Elsevier Ltd. All rights reserved."}
{"_id": "d0c8bb3e73cf7a2779ec82e26afd987895df8a93", "title": "Deep Learning: A Generic Approach for Extreme Condition Traffic Forecasting", "text": "Traffic forecasting is a vital part of intelligent transportation systems. It becomes particularly challenging due to short-term (e.g., accidents, constructions) and long-term (e.g., peak-hour, seasonal, weather) traffic patterns. While most of the previously proposed techniques focus on normal condition forecasting, a single framework for extreme condition traffic forecasting does not exist. To address this need, we propose to take a deep learning approach. We build a deep neural network based on long short term memory (LSTM) units. We apply Deep LSTM to forecast peak-hour traffic and manage to identify unique characteristics of the traffic data. We further improve the model for postaccident forecasting with Mixture Deep LSTM model. It jointly models the normal condition traffic and the pattern of accidents. We evaluate our model on a realworld large-scale traffic dataset in Los Angeles. When trained end-to-end with suitable regularization, our approach achieves 30%-50% improvement over baselines. We also demonstrate a novel technique to interpret the model with signal stimulation. We note interesting observations from the trained neural network."}
{"_id": "5d5ec2a092bd2d999ec812c08bf5d2f7b4b75474", "title": "Weighted cluster ensembles: Methods and analysis", "text": "Cluster ensembles offer a solution to challenges inherent to clustering arising from its ill-posed nature. Cluster ensembles can provide robust and stable solutions by leveraging the consensus across multiple clustering results, while averaging out emergent spurious structures that arise due to the various biases to which each participating algorithm is tuned. In this article, we address the problem of combining multiple weighted clusters that belong to different subspaces of the input space. We leverage the diversity of the input clusterings in order to generate a consensus partition that is superior to the participating ones. Since we are dealing with weighted clusters, our consensus functions make use of the weight vectors associated with the clusters. We demonstrate the effectiveness of our techniques by running experiments with several real datasets, including high-dimensional text data. Furthermore, we investigate in depth the issue of diversity and accuracy for our ensemble methods. Our analysis and experimental results show that the proposed techniques are capable of producing a partition that is as good as or better than the best individual clustering."}
{"_id": "7d3cf8315cfb586dae984b0aa986ea0857c18cf0", "title": "Local Probabilistic Models for Link Prediction", "text": "One of the core tasks in social network analysis is to predict the formation of links (i.e. various types of relationships) over time. Previous research has generally represented the social network in the form of a graph and has leveraged topological and semantic measures of similarity between two nodes to evaluate the probability of link formation. Here we introduce a novel local probabilistic graphical model method that can scale to large graphs to estimate the joint co-occurrence probability of two nodes. Such a probability measure captures information that is not captured by either topological measures or measures of semantic similarity, which are the dominant measures used for link prediction. We demonstrate the effectiveness of the co-occurrence probability feature by using it both in isolation and in combination with other topological and semantic features for predicting co-authorship collaborations on real datasets."}
{"_id": "0ba8c3fe53d27a04a477b5149e9d2cc23e477cb2", "title": "Printed Circuit Board Deconstruction Techniques", "text": "The primary purpose of printed circuit board (PCB) reverse engineering is to determine electronic system or subsystem functionality by analyzing how components are interconnected. We performed a series of experiments using both inexpensive home-based solutions and state-of-the-art technologies with a goal of removing exterior coatings and accessing individual PCB layers. This paper presents our results from the most effective techniques."}
{"_id": "8938ec408bf79a8dd3a0eb7418fbb5a281762cdb", "title": "RIC: Relaxed Inclusion Caches for mitigating LLC side-channel attacks", "text": "Recently, side-channel attacks on Last Level Caches (LLCs) were demonstrated. The attacks require the ability to evict critical data from the cache hierarchy, making future accesses visible. We propose Relaxed Inclusion Caches (RIC), a low-complexity cache design protecting against LLC side channel attacks. RIC relaxes inclusion when it is not needed, preventing the attacker from replacing the victim's data from the local core caches thus protecting critical data from leakage. RIC improves performance (by about 10%) and retains snoop filtering capabilities of inclusive cache hierarchies, while requiring only minimal changes to the cache."}
{"_id": "e17c6d0a36ee24dfc17026f86d113b3aa2ffe51f", "title": "Curcumin: From ancient medicine to current clinical trials", "text": "Curcumin is the active ingredient in the traditional herbal remedy and dietary spice turmeric (Curcuma longa). Curcumin has a surprisingly wide range of beneficial properties, including anti-inflammatory, antioxidant, chemopreventive and chemotherapeutic activity. The pleiotropic activities of curcumin derive from its complex chemistry as well as its ability to influence multiple signaling pathways, including survival pathways such as those regulated by NF-\u03baB, Akt, and growth factors; cytoprotective pathways dependent on Nrf2; and metastatic and angiogenic pathways. Curcumin is a free radical scavenger and hydrogen donor, and exhibits both pro- and antioxidant activity. It also binds metals, particularly iron and copper, and can function as an iron chelator. Curcumin is remarkably non-toxic and exhibits limited bioavailability. Curcumin exhibits great promise as a therapeutic agent, and is currently in human clinical trials for a variety of conditions, including multiple myeloma, pancreatic cancer, myelodysplastic syndromes, colon cancer, psoriasis and Alzheimer\u2019s disease."}
{"_id": "85f349c1da70c59e47fcc1e5fc434fd044b52a36", "title": "Interference-Aware Channel Assignment in Multi-Radio Wireless Mesh Networks", "text": "The capacity problem in wireless mesh networks can be alleviated by equipping the mesh routers with multiple radios tuned to non-overlapping channels. However, channel assignment presents a challenge because co-located wireless networks are likely to be tuned to the same channels. The resulting increase in interference can adversely affect performance. This paper presents an interference-aware channel assignment algorithm and protocol for multi-radio wireless mesh networks that address this interference problem. The proposed solution intelligently assigns channels to radios to minimize interference within the mesh network and between the mesh network and co-located wireless networks. It utilizes a novel interference estimation technique implemented at each mesh router. An extension to the conflict graph model, the multi-radio conflict graph, is used to model the interference between the routers. We demonstrate our solution\u2019s practicality through the evaluation of a prototype implementation in a IEEE 802.11 testbed. We also report on an extensive evaluation via simulations. In a sample multi-radio scenario, our solution yields performance gains in excess of 40% compared to a static assignment of channels."}
{"_id": "d0ca851a2443ce41d5d20cc9dd4f105eadb56e35", "title": "An Edge Based Smart Parking Solution Using Camera Networks and Deep Learning", "text": "The smart parking industry continues to evolve as an increasing number of cities struggle with traffic congestion and inadequate parking availability. For urban dwellers, few things are more irritating than anxiously searching for a parking space. Research results show that as much as 30% of traffic is caused by drivers driving around looking for parking spaces in congested city areas. There has been considerable activity among researchers to develop smart technologies that can help drivers find a parking spot with greater ease, not only reducing traffic congestion but also the subsequent air pollution. Many existing solutions deploy sensors in every parking spot to address the automatic parking spot detection problems. However, the device and deployment costs are very high, especially for some large and old parking structures. A wide variety of other technological innovations are beginning to enable more adaptable systems\u2014including license plate number detection, smart parking meter, and vision-based parking spot detection. In this paper, we propose to design a more adaptable and affordable smart parking system via distributed cameras, edge computing, data analytics, and advanced deep learning algorithms. Specifically, we deploy cameras with zoom-lens and motorized head to capture license plate numbers by tracking the vehicles when they enter or leave the parking lot; cameras with wide angle fish-eye lens will monitor the large parking lot via our custom designed deep neural network. We further optimize the algorithm and enable the real-time deep learning inference in an edge device. Through the intelligent algorithm, we can significantly reduce the cost of existing systems, while achieving a more adaptable solution. For example, our system can automatically detect when a car enters the parking space, the location of the parking spot, and precisely charge the parking fee and associate this with the license plate number."}
{"_id": "decaf265f1fb1df8bf50f6a9d2147c2336b3c357", "title": "Role of Emotional Intelligence for Academic Achievement for Students", "text": "In the current competitive environment where students are expected to perform multi roles with efficiency and effectiveness, it is highly needed to develop their right attitude and emotional intelligence towards the unseen complexities of life and quality education. As emotional intelligence is a subset of social intelligence with the ability to understand and monitor one\u2019s own feelings and others too which allows a student to mine the required data for his academic achievement which is an outcome of education and the extent at which the educational goal has been achieved. The emphasis of this paper was to determine the factors which are affecting the development of emotional intelligence and its role in academic achievement for students. In this research secondary data has been collected out of which we find out the correlation between emotional intelligence and academic achievement and teaching emotional and social skills at school not only positively influence academic achievement during the year when these were taught but also leaves the impact in long term achievement. Findings of this paper present that academic achievement without emotional intelligence does not indicate future success and absence of emotional intelligence also indicate the week personality and ability to build relations at working place as well in schools and it is highly important for quality education."}
{"_id": "54de1893fdc7ad165b9b7448fbeab914a346b2e7", "title": "Adaptive Clock Gating Technique for Low Power IP Core in SoC Design", "text": "Clock gating is a well-known technique to reduce chip dynamic power. This paper analyzes the disadvantages of some recent clock gating techniques and points out that they are difficult in system-on-chip (SoC) design. Based on the analysis of the intellectual property (IP) core model, an adaptive clock gating (ACG) technique which can be easily realized is introduced for the low power IP core design. ACG can automatically enable or disable the IP clock to reduce not only dynamic power but also leakage power with power gating technique. The experimental results on some IP cores in a real SoC show an average of 62.2% dynamic power reduction and 70.9% leakage power reduction without virtually performance impact."}
{"_id": "fddd72b759b1eddb8eda07887ef6a92f82e39f8c", "title": "Sustaining iterative game playing processes in DGBL: The relationship between motivational processing and outcome processing", "text": "Digital game-based learning (DGBL) has become a viable instructional option in recent years due to its support of learning motivation. Recent studies have mostly focused on identifying motivational factors in digital games (e.g., curiosity, rules, control) that support intrinsic motivation. These findings, however, are limited in two fronts. First, they did not depict the interactive nature of the motivational processing in DGBL. Second, they excluded the outcome processing (learners\u2019 final effort versus performance evaluation) as a possible motivation component to sustain the iterative game playing cycle. To address these problems, situated in the integrative theory of Motivation, Volition, and Performance (MVP), this study examined the relationship between motivational processing and outcome processing in an online instructional game. The study surveyed 264 undergraduate students after playing the Trade Ruler online game. Based on the data collected by ARCS-based Instructional Materials Motivational Survey (IMMS), a regression analysis revealed a significant model between motivational processing (attention, relevance, and confidence) and the outcome processing (satisfaction). The finding preliminarily suggests that both motivational processing and outcome processing need to be considered when designing DGBL. Furthermore, the finding implies a potential relationship between intrinsic motives and extrinsic rewards in DGBL. Published by Elsevier Ltd."}
{"_id": "3e597e492c1ed6e7bbd539d5f2e5a6586c6074cd", "title": "Improved Neural Machine Translation with a Syntax-Aware Encoder and Decoder", "text": "Most neural machine translation (NMT) models are based on the sequential encoder-decoder framework, which makes no use of syntactic information. In this paper, we improve this model by explicitly incorporating source-side syntactic trees. More specifically, we propose (1) a bidirectional tree encoder which learns both sequential and tree structured representations; (2) a tree-coverage model that lets the attention depend on the source-side syntax. Experiments on Chinese-English translation demonstrate that our proposed models outperform the sequential attentional model as well as a stronger baseline with a bottom-up tree encoder and word coverage.1"}
{"_id": "4e88de2930a4435f737c3996287a90ff87b95c59", "title": "Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks", "text": "Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. TreeLSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank)."}
{"_id": "6411da05a0e6f3e38bcac0ce57c28038ff08081c", "title": "Exploiting Semantics in Neural Machine Translation with Graph Convolutional Networks", "text": "Semantic representations have long been argued as potentially useful for enforcing meaning preservation and improving generalization performance of machine translation methods. In this work, we are the first to incorporate information about predicate-argument structure of source sentences (namely, semantic-role representations) into neural machine translation. We use Graph Convolutional Networks (GCNs) to inject a semantic bias into sentence encoders and achieve improvements in BLEU scores over the linguistic-agnostic and syntaxaware versions on the English\u2013German language pair."}
{"_id": "9f291ce2d0fc1d76206139a40a859283674d8f65", "title": "Improved Neural Machine Translation with Source Syntax", "text": "Neural Machine Translation (NMT) based on the encoder-decoder architecture has recently achieved the state-of-the-art performance. Researchers have proven that extending word level attention to phrase level attention by incorporating source-side phrase structure can enhance the attention model and achieve promising improvement. However, word dependencies that can be crucial to correctly understand a source sentence are not always in a consecutive fashion (i.e. phrase structure), sometimes they can be in long distance. Phrase structures are not the best way to explicitly model long distance dependencies. In this paper we propose a simple but effective method to incorporate source-side long distance dependencies into NMT. Our method based on dependency trees enriches each source state with global dependency structures, which can better capture the inherent syntactic structure of source sentences. Experiments on Chinese-English and English-Japanese translation tasks show that our proposed method outperforms state-of-the-art SMT and NMT baselines."}
{"_id": "7c660ea201217c1482cf81b20cb50671ceb123d6", "title": "Performance evaluation of local colour invariants", "text": "In this paper, we compare local colour descriptors to grey-value descriptors. We adopt the evaluation framework of Mikolayzcyk and Schmid. We modify the framework in several ways. We decompose the evaluation framework to the level of local grey-value invariants on which common region descriptors are based. We compare the discriminative power and invariance of grey-value invariants to that of colour invariants. In addition, we evaluate the invariance of colour descriptors to photometric events such as shadow and highlights. We measure the performance over an extended range of common recording conditions including significant photometric variation. We demonstrate the intensity-normalized colour invariants and the shadow invariants to be highly distinctive, while the shadow invariants are more robust to both changes of the illumination colour, and to changes of the shading and shadows. Overall, the shadow invariants perform best: they are most robust to various imaging conditions while maintaining discriminative power. When plugged into the SIFT descriptor, they show to outperform other methods that have combined colour information and SIFT. The usefulness of C-colour-SIFT for realistic computer vision applications is illustrated for the classification of object categories from the VOC challenge, for which a significant improvement is reported. 2008 Elsevier Inc. All rights reserved."}
{"_id": "00c98f76a1f24cd445cd5c7932f01f887f59c44c", "title": "Clustering and Diversifying Web Search Results with Graph-Based Word Sense Induction", "text": "Web search result clustering aims to facilitate information search on the Web. Rather than the results of a query being presented as a flat list, they are grouped on the basis of their similarity and subsequently shown to the user as a list of clusters. Each cluster is intended to represent a different meaning of the input query, thus taking into account the lexical ambiguity (i.e., polysemy) issue. Existing Web clustering methods typically rely on some shallow notion of textual similarity between search result snippets, however. As a result, text snippets with no word in common tend to be clustered separately even if they share the same meaning, whereas snippets with words in common may be grouped together even if they refer to different meanings of the input query.In this article we present a novel approach to Web search result clustering based on the automatic discovery of word senses from raw text, a task referred to as Word Sense Induction. Key to our approach is to first acquire the various senses (i.e., meanings) of an ambiguous query and then cluster the search results based on their semantic similarity to the word senses induced. Our experiments, conducted on data sets of ambiguous queries, show that our approach outperforms both Web clustering and search engines."}
{"_id": "4704ee86b88a25a5afe8622f3437537b4a3c93a6", "title": "Towards multi-cue urban curb recognition", "text": "This paper presents a multi-cue approach to curb recognition in urban traffic. We propose a novel texture-based curb classifier using local receptive field (LRF) features in conjunction with a multi-layer neural network. This classification module operates on both intensity images and on three-dimensional height profile data derived from stereo vision. We integrate the proposed multi-cue curb classifier as an additional measurement module into a state-of-the-art Kaiman filter-based urban lane recognition system. Our experiments involve a challenging real-world dataset captured in urban traffic with manually labeled ground-truth. We quantify the benefit of the proposed multi-cue curb classifier in terms of the improvement in curb localization accuracy of the integrated system. Our results indicate a 25% reduction of the average curb localization error at real-time processing speeds."}
{"_id": "67d0efb80486ef0e1fd156f9c12329e15a272f20", "title": "An empirical comparison of classification algorithms for mortgage default prediction: evidence from a distressed mortgage market", "text": "This paper evaluates the performance of a number of modelling approaches for future mortgage default status. Boosted regression trees, random forests, penalised linear and semi-parametric logistic regression models are applied to four portfolios of over 300,000 Irish owner-occupier mortgages. The main findings are that the selected approaches have varying degrees of predictive power and that boosted regression trees significantly outperform logistic regression. This suggests that boosted regression trees can be a useful addition to the current toolkit for mortgage credit risk assessment by banks and regulators."}
{"_id": "74caec8942c6eb545df42c8367dec1885ff3272b", "title": "Can Personality Be Changed ? The Role of Beliefs in Personality and Change", "text": "Using recent research, I argue that beliefs lie at the heart of personality and adaptive functioning and that they give us unique insight into how personality and functioning can be changed. I focus on two classes of beliefs\u2014beliefs about the malleability of self-attributes and expectations of social acceptance versus rejection\u2014and show how modest interventions have brought about important real-world changes. I conclude by suggesting that beliefs are central to the way in which people package their experiences and carry them forward, and that beliefs should play a more central role in the study of personality. KEYWORDS\u2014personality; personality theories; change; self-beliefs; interventions"}
{"_id": "ded38de3ff8dc89b3ab901eefa4664d366a8c99f", "title": "Business Intelligence and Business Process Management in Banking Operations", "text": "Success of banking operations is strongly correlated with the quality of customer relations and efficacy of banks processes. Banks seek means for efficient analysis of vast amount of gathered data from their IT systems. They are exploiting business intelligence (BI) technology to analyze every aspect of their data to understand behavior of their clients, striving to satisfy client's needs, in an endless race for a competitive advantage in the market. The intention of this paper is to review and discuss the most significant applications of business intelligence in banking as well as point to new technology trends that will affect bank's development."}
{"_id": "da8aa8b99bd63f28e9f9ad23be871831795773e6", "title": "Pitch Extraction and Fundamental Frequency: History and Current Techniques", "text": "Pitch extraction (also called fundamental frequency estimation) has been a popular topic in many fields of research since the age of computers. Yet in the course of some 50 years of study, current techniques are still not to a desired level of accuracy and robustness. When presented with a single clean pitched signal, most techniques do well, but when the signal is noisy, or when there are multiple pitch streams, many current pitch algorithms still fail to perform well. This report presents a discussion of the history of pitch detection techniques, as well as a survey of the current state of the art in pitch detection technology."}
{"_id": "d12c173ea92fc33dc276d1da90dc72a660f7ea12", "title": "High Performance Methods for Linked Open Data Connectivity Analytics", "text": "The main objective of Linked Data is linking and integration, and a major step for evaluating whether this target has been reached, is to find all the connections among the Linked Open Data (LOD) Cloud datasets. Connectivity among two or more datasets can be achieved through common Entities, Triples, Literals, and Schema Elements, while more connections can occur due to equivalence relationships between URIs, such as owl:sameAs, owl:equivalentProperty and owl:equivalentClass, since many publishers use such equivalence relationships, for declaring that their URIs are equivalent with URIs of other datasets. However, there are not available connectivity measurements (and indexes) involving more than two datasets, that cover the whole content (e.g., entities, schema, triples) or \u201cslices\u201d (e.g., triples for a specific entity) of datasets, although they can be of primary importance for several real world tasks, such as Information Enrichment, Dataset Discovery and others. Generally, it is not an easy task to find the connections among the datasets, since there exists a big number of LOD datasets and the transitive and symmetric closure of equivalence relationships should be computed for not missing connections. For this reason, we introduce scalable methods and algorithms, (a) for performing the computation of transitive and symmetric closure for equivalence relationships (since they can produce more connections between the datasets); (b) for constructing dedicated global semantics-aware indexes that cover the whole content of datasets; and (c) for measuring the connectivity among two or more datasets. Finally, we evaluate the speedup of the proposed approach, while we report comparative results for over two billion triples."}
{"_id": "d6020bdf3b03f209174cbc8fb4ecbe6208eb9ff1", "title": "Information Technology, Materiality, and Organizational Change: A Professional Odyssey", "text": "We begin with a retrospective reflection on the first author\u2019s research career, which in large part is devoted to research about the implications of information technology (IT) for organizational change. Although IT has long been associated with organizational change, our historical review of the treatment of technology in organization theory demonstrates how easily the material aspects of organizations can disappear into the backwaters of theory development. This is an unfortunate result since the material characteristics of IT initiatives distinguish them from other organizational change initiatives. Our aim is to restore materiality to studies of IT impact by tracing the reasons for its disappearance and by offering options in which IT\u2019s materiality plays a more central theoretical role. We adopt a socio-technical perspective that differs from a strict sociomaterial perspective insofar as we wish to preserve the ontological distinction between material artifacts and their social context of use. Our analysis proceeds using the concept of \u201caffordance\u201d as a relational concept consistent with the socio-technical perspective. We then propose extensions of organizational routines theory that incorporate material artifacts in the generative system known as routines. These contributions exemplify two of the many challenges inherent in adopting materiality as a new research focus in the study of IT\u2019s organizational impacts."}
{"_id": "86040af28751baac6ae9093ea3505a75822deec8", "title": "Electronic travel aids for the blind based on sensory substitution", "text": "With the development of advanced human-machine interface, effective information processing algorithms, and more powerful microprocessors, it is possible to enable the blind to achieve additional perception of the environment. In 1970s, the blind-assistant facility based on sensory substitution was introduced[1]. It called as ETA (short for Electronic Travel Aids), bases on the other natural senses of the blind, such as hearing, touch, smell, feeling and etc. Then the research of vision aids has been broadly extended. After introducing the traditional methods for guiding blind, two typical input modes of travel aid are presented in this paper. One is sonar-based ETAs and the other technique is camera-based ETAs. Its principle of each modes is discussed in detail. Besides these, comparisons between the above two techniques have also been indicated in this paper."}
{"_id": "1ad80c3406df83a1b591bfe60d3c57f8064bda51", "title": "0-knowledge fuzzing", "text": "Nowadays fuzzing is a pretty common technique used both by attackers and software developers. Currently known techniques usually involve knowing the protocol/format that needs to be fuzzed and having a basic understanding of how the user input is processed inside the binary. In the past since fuzzing was little-used obtaining good results with a small amount of effort was possible. Today finding bugs requires digging a lot inside the code and the user-input as common vulnerabilies are already identified and fixed by developers. This paper will present an idea on how to effectively fuzz with no knowledge of the userinput and the binary. Specifically the paper will demonstrate how techniques like code coverage, data tainting and inmemory fuzzing allow to build a smart fuzzer with no need to instrument it."}
{"_id": "d429db345c3263a0c695249d3fce9e26004b5419", "title": "Personal Attributes Extraction in Chinese Text Based on Recurrent Neural Network", "text": "Personal Attributes Extraction in Chinese Text Task is designed to extract person specific attributes, for example, spouse/husband, children, education, title etc. from unstructured Chinese texts. According to the characteristics of the personal attributes extraction, we propose a new method based on deep learning to extract attributes from the documents in this paper. Experiments show that this method can improve the accuracy of personal attributes extraction effectively."}
{"_id": "48f6cd5a88e39aead3f60b7dccedbb46bf2ebf7e", "title": "Strategic Human Resources Management : Where Do We Go From Here ? \u2020", "text": "The authors identify the key challenges facing strategic human resource management (SHRM) going forward and discuss several new directions in both the scholarship and practice of SHRM. They focus on a clearer articulation of the \u201cblack box\u201d between HR and firm performance, emphasizing the integration of strategy implementation as the central mediating variable in this relationship. There are direct implications for the nature of fit and contingencies in SHRM. They also highlight the significance of a differentiated HR architecture not just across firms but also within firms."}
{"_id": "f679df058483f2dae169d8abd01a49c75e087f37", "title": "Edlines: Real-time line segment detection by Edge Drawing (ed)", "text": "We propose a linear time line segment detector that gives accurate results, requires no parameter tuning, and runs up to 11 times faster than the fastest known line segment detection algorithm in the literature; namely, the LSD by Gioi et al. The proposed algorithm also includes a line validation step due to the Helmholtz principle, which lets it control the number of false detections. Our detector makes use of the clean, contiguous (connected) chain of edge pixels produced by our novel edge detector, the Edge Drawing (ED) algorithm; hence the name EDLines. With its accurate results and blazing speed, EDLines will be very suitable for the next generation real-time computer vision applications."}
{"_id": "8a15ca504ce5a86cca2a4bf8ab7eeeb92a4b7270", "title": "Who does what during a code review? Datasets of OSS peer review repositories", "text": "We present four datasets that are focused on the general roles of OSS peer review members. With data mined from both an integrated peer review system and code source repositories, our rich datasets comprise of peer review data that was automatically recorded. Using the Android project as a case study, we describe our extraction methodology, the datasets and their application used for three separate studies. Our datasets are available online at http://sdlab.naist.jp/reviewmining/."}
{"_id": "1c2ce06b718ffde5c96e761d9a92cb286c494dc6", "title": "HEXHOOP: Modular Templates for Converting a Hex-Dominant Mesh to an ALL-Hex Mesh", "text": "This paper presents a new mesh conversion template called HEXHOOP, which fully automates a con-version from a hex-dominant mesh to an all-hex mesh. A HEXHOOP template subdivides a hex/prism/pyramid element into a set of smaller hex elements while main-taining the topological conformity with neighboring elements. A HEXHOOP template is constructed by assembling sub-templates, cores and caps. A dicing template for a hex and a prism is constructed by choosing the appropriate combination of a core and caps. A template that dices a pyramid without losing conformity to the adjacent element is derived from a HEXHOOP template. Some experimental results show that the HEXHOOP templates successfully convert a hex-dominant mesh into an all-hex mesh."}
{"_id": "f095eb57f920e4c6d0b6a1fea4eacb51601e34e3", "title": "Chat Speed OP : Practices of Coherence in Massive Twitch Chat", "text": "Paste the appropriate copyright statement here. ACM now supports three different copyright statements: \u2022 ACM copyright: ACM holds the copyright on the work. This is the historical approach. \u2022 License: The author(s) retain copyright, but ACM receives an exclusive publication license. \u2022 Open Access: The author(s) wish to pay for the work to be open access. The additional fee must be paid to ACM. This text field is large enough to hold the appropriate release statement assuming it is single spaced in a sans-serif 7 point font. Every submission will be assigned their own unique DOI string to be included here. Abstract Twitch.tv, a streaming platform known for video game content, has grown tremendously since its inception in 2011. We examine communication practices in Twitch chats for the popular game Hearthstone, comparing massive chats with at least 10,000 concurrent viewers and small chats with fewer than 2,000 concurrent viewers. Due to the large scale and fast pace of massive chats, communication patterns no longer follow models developed in previous studies of computer-mediated communication. Rather than what other studies have described as communication breakdowns and information overload, participants in massive chats communicate in what we call \u201ccrowdspeak.\u201d"}
{"_id": "0c5200b6eed636de2e0ffd77772ccc988a9a6a70", "title": "A Mobile Transaction Model That Captures Both the Data and Movement Behavior", "text": "Unlike distributed transactions, mobile transactions do not originate and end at the same site. The implication of the movement of such transactions is that classical atomicity, concurrency and recovery solutions must be revisited to capture the movement behavior. As an effort in this direction, we define a model of mobile transactions by building on the concepts of split transactions and global transactions in a multidatabase environment. Our view of mobile transactions, called Kangaroo Transactions, incorporates the property that transactions in a mobile computing system hop from one base station to another as the mobile unit moves through cells. Our model is the first to capture this movement behavior as well as the data behavior which reflects the access to data located in databases throughout the static network. The mobile behavior is dynamic and is realized in our model via the use of split operations. The data access behavior is captured by using the idea of global and local transactions in a multidatabase system."}
{"_id": "5dbc5efc9f6414dabc922919c56f40e0f75caad7", "title": "Birds of a \u2018 \u2018 bad \u2019 \u2019 feather flock together : The Dark Triad and mate choice", "text": "Previous research on the Dark Triad traits (i.e., Machiavellianism, psychopathy, and narcissism) has focused solely on the role the Dark Triad traits played in mate choice of actors. The current study (N = 336) extends this by manipulating the apparent levels of Dark Triad traits in targets and correlating mate choice in these targets with individual differences in the Dark Triad traits in actors. As expected, both sexes preferred partners low in the Dark Triad traits for long-term mating, while those high in these traits were preferred for one-night stands. However, women high in psychopathy considered the Dark Triad traits in potential male partners more physically attractive and desirable for an one-night stand, as well as a potential husband. Men who were high on psychopathy were likewise attracted to psychopathy in potential mothers. Our findings are discussed from an evolutionary personality paradigm. 2015 Elsevier Ltd. All rights reserved."}
{"_id": "b9bdf31f8ee1bdc37e2a11333461ab52676e12f8", "title": "Fast Metric Tracking by Detection System: Radar Blob and Camera Fusion", "text": "This article proposes a system that fuses radar and monocular vision sensor data in order to detect and classify on-road obstacles, like cars or not cars (other obstacles). The obstacle detection process and classification is divided into three stages, the first consist in reading radar signals and capturing the camera data, the second stage is the data fusion, and the third step is the classify the obstacles, aiming to differentiate the obstacles types identified by the radar and confirmed by the computer vision. In the detection task it is important to locate, measure, and rank the obstacles to be able to take adequate decisions and actions (e.g. Generate alerts, autonomously control the vehicle path), so the correct classification of the obstacle type and position, is very important, also avoiding false positives and/or false negatives in the classification task."}
{"_id": "7039b7c97bd0e59693f2dc4ed7b40e8790bf2746", "title": "Learning Distributed Representations of Texts and Entities from Knowledge Base", "text": "We describe a neural network model that jointly learns distributed representations of texts and knowledge base (KB) entities. Given a text in the KB, we train our proposed model to predict entities that are relevant to the text. Our model is designed to be generic with the ability to address various NLP tasks with ease. We train the model using a large corpus of texts and their entity annotations extracted from Wikipedia. We evaluated the model on three important NLP tasks (i.e., sentence textual similarity, entity linking, and factoid question answering) involving both unsupervised and supervised settings. As a result, we achieved state-of-the-art results on all three of these tasks. Our code and trained models are publicly available for further academic research."}
{"_id": "f0d9f59efce8e5696fff15c7bdf3d769c7a42e9c", "title": "Automatic removal of manually induced artefacts in ultrasound images of thyroid gland", "text": "Manually induced artefacts, like caliper marks and anatomical labels, render an ultrasound (US) image incapable of being subjected to further processes of Computer Aided Diagnosis (CAD). In this paper, we propose a technique to remove these artefacts and restore the image as accurately as possible. The technique finds application as a pre-processing step when developing unsupervised segmentation algorithms for US images that deal with automatic estimation of the number of segments and clustering. The novelty of the algorithm lies in the image processing pipeline chosen to automatically identify the artefacts and is developed based on the histogram properties of the artefacts. The algorithm was able to successfully restore the images to a high quality when it was executed on a dataset of 18 US images of the thyroid gland on which the artefacts were induced manually by a doctor. Further experiments on an additional dataset of 10 unmarked US images of the thyroid gland on which the artefacts were simulated using Matlab showed that the restored images were again of high quality with a PSNR > 38dB and free of any manually induced artefacts."}
{"_id": "8593881cb3bfbf627c6edd1d6456c7e8e35f3a5c", "title": "Comparison of automatic shot boundary detection algorithms", "text": "Various methods of automatic shot boundary detection have been proposed and claimed to perform reliably. A the detection of edits is fundamental to any kind of video analysis since it segments a video into its basic com the shots, only few comparative investigations on early shot boundary detection algorithms have been pu These investigations mainly concentrate on measuring the edit detection performance, however, do not con algorithms\u2019 ability to classify the types and to locate the boundaries of the edits correctly. This paper extend comparative investigations. More recent algorithms designed explicitly to detect specific complex editing ope such as fades and dissolves are taken into account, and their ability to classify the types and locate the boun such edits are examined. The algorithms\u2019 performance is measured in terms of hit rate, number of false hits, a rate for hard cuts, fades, and dissolves over a large and diverse set of video sequences. The experiments while hard cuts and fades can be detected reliably, dissolves are still an open research issue. The false hit rat solves is usually unacceptably high, ranging from 50% up to over 400%. Moreover, all algorithms seem to fai roughly the same conditions."}
{"_id": "42f75b297aed474599c8e598dd211a1999804138", "title": "Bayesian Classification (AutoClass): Theory and Results", "text": "We describe AutoClass, an approach to unsupervised classiication based upon the classical mixture model, supplemented by a Bayesian method for determining the optimal classes. We include a moderately detailed exposition of the mathematics behind the AutoClass system. We emphasize that no current unsupervised classiication system can produce maximally useful results when operated alone. It is the interaction between domain experts and the machine searching over the model space, that generates new knowledge. Both bring unique information and abilities to the database analysis task, and each enhances the others' eeectiveness. We illustrate this point with several applications of AutoClass to complex real world databases, and describe the resulting successes and failures. 6.1 Introduction This chapter is a summary of our experience in using an automatic classiication program (AutoClass) to extract useful information from databases. It also gives an outline of the principles that underlie automatic classiication in general, and AutoClass in particular. We are concerned with the problem of automatic discovery of classes in data (sometimes called clustering, or unsupervised learning), rather than the generation of class descriptions from labeled examples (called supervised learning). In some sense, automatic classiication aims at discovering the \\natural\" classes in the data. These classes reeect basic causal mechanisms that makes some cases look more like each other than the rest of the cases. The causal mechanisms may be as boring as sample biases in the data, or could reeect some major new discovery in the domain. Sometimes, these classes were well known to experts in the eld, but unknown to AutoClass, and other times"}
{"_id": "04e0fefb859f4b02b017818915a2645427bfbdb2", "title": "Context dependent recurrent neural network language model", "text": "Recurrent neural network language models (RNNLMs) have recently demonstrated state-of-the-art performance across a variety of tasks. In this paper, we improve their performance by providing a contextual real-valued input vector in association with each word. This vector is used to convey contextual information about the sentence being modeled. By performing Latent Dirichlet Allocation using a block of preceding text, we achieve a topic-conditioned RNNLM. This approach has the key advantage of avoiding the data fragmentation associated with building multiple topic models on different data subsets. We report perplexity results on the Penn Treebank data, where we achieve a new state-of-the-art. We further apply the model to the Wall Street Journal speech recognition task, where we observe improvements in word-error-rate."}
{"_id": "480d545ac4a4ffff5b1bc291c2de613192e35d91", "title": "DyNet: The Dynamic Neural Network Toolkit", "text": "We describe DyNet, a toolkit for implementing neural network models based on dynamic declaration of network structure. In the static declaration strategy that is used in toolkits like Theano, CNTK, and TensorFlow, the user first defines a computation graph (a symbolic representation of the computation), and then examples are fed into an engine that executes this computation and computes its derivatives. In DyNet\u2019s dynamic declaration strategy, computation graph construction is mostly transparent, being implicitly constructed by executing procedural code that computes the network outputs, and the user is free to use different network structures for each input. Dynamic declaration thus facilitates the implementation of more complicated network architectures, and DyNet is specifically designed to allow users to implement their models in a way that is idiomatic in their preferred programming language (C++ or Python). One challenge with dynamic declaration is that because the symbolic computation graph is defined anew for every training example, its construction must have low overhead. To achieve this, DyNet has an optimized C++ backend and lightweight graph representation. Experiments show that DyNet\u2019s speeds are faster than or comparable with static declaration toolkits, and significantly faster than Chainer, another dynamic declaration toolkit. DyNet is released opensource under the Apache 2.0 license, and available at http://github.com/clab/dynet. Carnegie Mellon University, Pittsburgh, PA, USA Nara Institute of Science and Technology, Ikoma, Japan DeepMind, London, UK Bar Ilan University, Ramat Gan, Israel Allen Institute for Artificial Intelligence, Seattle, WA, USA University of Notre Dame, Notre Dame, IN, USA IBM T.J. Watson Research Center, Yorktown Heights, NY, USA University of Melbourne, Melbourne, Australia Johns Hopkins University, Baltimore, MD, USA Google, New York, NY, USA Google, Mountain View, CA, USA University of Washington, Seattle, USA Microsoft Research, Redmond, WA, USA University of Edinburgh, Edinburgh, UK 1 ar X iv :1 70 1. 03 98 0v 1 [ st at .M L ] 1 5 Ja n 20 17"}
{"_id": "7ad19fa4baf1031aee118d0ffd00b2c031d60ccf", "title": "KenLM: Faster and Smaller Language Model Queries", "text": "We present KenLM, a library that implements two data structures for efficient language model queries, reducing both time and memory costs. The PROBING data structure uses linear probing hash tables and is designed for speed. Compared with the widelyused SRILM, our PROBING model is 2.4 times as fast while using 57% of the memory. The TRIE data structure is a trie with bit-level packing, sorted records, interpolation search, and optional quantization aimed at lower memory consumption. TRIE simultaneously uses less memory than the smallest lossless baseline and less CPU than the fastest baseline. Our code is open-source1, thread-safe, and integrated into the Moses, cdec, and Joshua translation systems. This paper describes the several performance techniques used and presents benchmarks against alternative implementations."}
{"_id": "306a2f7b702f9aabb9c4750413dd9d8932ffae13", "title": "Off-policy evaluation for slate recommendation", "text": "This paper studies the evaluation of policies that recommend an ordered set of items (e.g., a ranking) based on some context\u2014a common scenario in web search, ads, and recommendation. We build on techniques from combinatorial bandits to introduce a new practical estimator that uses logged data to estimate a policy\u2019s performance. A thorough empirical evaluation on real-world data reveals that our estimator is accurate in a variety of settings, including as a subroutine in a learningto-rank task, where it achieves competitive performance. We derive conditions under which our estimator is unbiased\u2014these conditions are weaker than prior heuristics for slate evaluation\u2014and experimentally demonstrate a smaller bias than parametric approaches, even when these conditions are violated. Finally, our theory and experiments also show exponential savings in the amount of required data compared with general unbiased estimators."}
{"_id": "6042669265c201041c7523a21c00a2ab3360f6d1", "title": "Towards a functional neuroanatomy of speech perception", "text": "The functional neuroanatomy of speech perception has been difficult to characterize. Part of the difficulty, we suggest, stems from the fact that the neural systems supporting 'speech perception' vary as a function of the task. Specifically, the set of cognitive and neural systems involved in performing traditional laboratory speech perception tasks, such as syllable discrimination or identification, only partially overlap those involved in speech perception as it occurs during natural language comprehension. In this review, we argue that cortical fields in the posterior-superior temporal lobe, bilaterally, constitute the primary substrate for constructing sound-based representations of speech, and that these sound-based representations interface with different supramodal systems in a task-dependent manner. Tasks that require access to the mental lexicon (i.e. accessing meaning-based representations) rely on auditory-to-meaning interface systems in the cortex in the vicinity of the left temporal-parietal-occipital junction. Tasks that require explicit access to speech segments rely on auditory-motor interface systems in the left frontal and parietal lobes. This auditory-motor interface system also appears to be recruited in phonological working memory."}
{"_id": "082f5899efd2078532707e70c6bc30a9f0a9cf3f", "title": "Path planning for UAV based on improved heuristic A\u2217 algorithm", "text": "A 3D vehicle path planning method based on heuristic A\u2217 algorithm is proposed and simulated which makes the UAV's path planning for low altitude penetration more real-time, effective and engineering-oriented. The digital terrain information is processed with the integrated terrain elevation smoothing algorithm at first, and then, the safe flight surface which integrates the maneuverability of vehicle is built, At last, the mathematical models of the threats are established. An improved A\u2217 algorithm is used to plan 3D vehicle path on the safe surface, and the planned path is optimized and smoothed. Simulation results indicate that the method has a good performance in creating optimal path in 3D environment and what's more, the path planning algorithm is simple, efficient and easily realized in engineering field, thus adapting to the different missions of the UAV."}
{"_id": "98f173f16746e8c565a41027f6ad1d5ec13728f0", "title": "A Task-based Approach for Ontology Evaluation", "text": "The need for the establishment of evaluation methods that can measure respective improvements or degradations of ontological models e.g. yielded by a precursory ontology population stage is undisputed. We propose an evaluation scheme that allows to employ a number of different ontologies and to measure their performance on specific tasks. In this paper we present the resulting task-based approach for quantitative ontology evaluation, which also allows for a bootstrapping approach to ontology population. Benchmark tasks commonly feature a so-called gold-standard defining perfect performance. By selecting ontology-based approaches for the respective tasks, the ontology-dependent part of the performance can be measured. Following this scheme, we present the results of an experiment for testing and incrementally augmenting ontologies using a well-defined benchmark problem based on a evaluation goldstandard."}
{"_id": "e764f7ff9ada2561c31592cd3b71b3d2b8209602", "title": "Vaginoplasty using autologous in vitro cultured vaginal tissue in a patient with Mayer-von-Rokitansky-Kuster-Hauser syndrome.", "text": "Mayer-von-Rokitansky-K\u00fcster-Hauser syndrome (MRKHS) is characterized by vaginal agenesis with variable M\u00fcllerian duct abnormalities. The Abb\u00e8-McIndoe technique is considered a valid treatment option for vaginoplasty but no consensus has been reached on what material should be used for the neovagina canal wall lining. We report the first case of autologous vaginal tissue transplantation in a 28-year-old women with MRKHS. The patient was subjected to a 1 cm2 full-thickness mucosal biopsy from the vaginal vestibule. Following enzymatic dissociation, cells were inoculated onto collagen IV-coated plates and cultured for 2 weeks. The patient was subjected to a vaginoplasty with a modified Abb\u00e8-McIndoe vaginoplasty with 314 cm2 autologous in vitro cultured vaginal tissue for the canal lining. At 1 month from surgery, the vagina appeared normal in length and depth and a vaginal biopsy revealed normal vaginal tissue. The use of autologous in vitro cultured vaginal tissue to create a neovagina appears as an easy, minimally invasive and useful method."}
{"_id": "48c09d03c7103a3d2b94c213032d27f1fa577b8a", "title": "An integration of urban spatial data with energy simulation to produce X3D city models: the case of Landkreis Ludwigsburg", "text": "In this paper, we describe a chain of steps to produce X3D city models based on transformation and integration of urban 3D CityGML models with DTM tiles, orthophotos and energy simulation outcome of energy demand and photovoltaic potential generated by the SimStadt platform."}
{"_id": "a67b7e0f6ff15bbbdd782c815ccf120180a19b03", "title": "Comparison Analysis: Granger Causality and New Causality and Their Applications to Motor Imagery", "text": "In this paper we first point out a fatal drawback that the widely used Granger causality (GC) needs to estimate the autoregressive model, which is equivalent to taking a series of backward recursive operations which are infeasible in many irreversible chemical reaction models. Thus, new causality (NC) proposed by Hu et al. (2011) is theoretically shown to be more sensitive to reveal true causality than GC. We then apply GC and NC to motor imagery (MI) which is an important mental process in cognitive neuroscience and psychology and has received growing attention for a long time. We study causality flow during MI using scalp electroencephalograms from nine subjects in Brain-computer interface competition IV held in 2008. We are interested in three regions: Cz (central area of the cerebral cortex), C3 (left area of the cerebral cortex), and C4 (right area of the cerebral cortex) which are considered to be optimal locations for recognizing MI states in the literature. Our results show that: 1) there is strong directional connectivity from Cz to C3/C4 during left- and right-hand MIs based on GC and NC; 2) during left-hand MI, there is directional connectivity from C4 to C3 based on GC and NC; 3) during right-hand MI, there is strong directional connectivity from C3 to C4 which is much clearly revealed by NC than by GC, i.e., NC largely improves the classification rate; and 4) NC is demonstrated to be much more sensitive to reveal causal influence between different brain regions than GC."}
{"_id": "9870c40f9b02736aef6bcfc934c2743e82b20472", "title": "SwiPIN: Fast and Secure PIN-Entry on Smartphones", "text": "In this paper, we present SwiPIN, a novel authentication system that allows input of traditional PINs using simple touch gestures like up or down and makes it secure against human observers. We present two user studies which evaluated different designs of SwiPIN and compared it against traditional PIN. The results show that SwiPIN performs adequately fast (3.7 s) to serve as an alternative input method for risky situations. Furthermore, SwiPIN is easy to use, significantly more secure against shoulder surfing attacks and switching between PIN and SwiPIN feels natural."}
{"_id": "687f58e8252082d55a3823c43629460e7550c6fd", "title": "A Secure Color Image Steganography In Transform Domain", "text": "Steganography is the art and science of covert communication. The secret information can be concealed in content such as image, audio, or video. This paper provides a novel image steganography technique to hide both image and key in color cover image using Discrete Wavelet Transform (DWT) and Integer Wavelet Transform (IWT). There is no visual difference between the stego image and the cover image. The extracted image is also similar to the secret image. This is proved by the high PSNR (Peak Signal to Noise Ratio), value for both stego and extracted secret image. The results are compared with the results of similar techniques and it is found that the proposed technique is simple and gives better PSNR values than others."}
{"_id": "5a18c126b5d960eb445ac8e5db541fdc6c771372", "title": "Multi-document English Text Summarization using Latent Semantic Analysis", "text": "In today's busy schedule, everybody expects to get the information in short but meaningful manner. Huge long documents consume more time to read. For this, we need summarization of document. Work has been done on Single-document but need of multiple document summarization is encouraging. Existing methods such as cluster approach, graph-based approach and fuzzy-based approach for multiple document summaries are improving. The statistical approach based on algebraic method is still topic of research. It demands for improvement in the approach by considering the limitations of Latent Semantic Analysis (LSA). Firstly, it reads only input text and does not consider world knowledge, for example women and lady it does not consider synonyms. Secondly, it does not consider word order, for example I will deliver to you tomorrow, deliver I will to you or tomorrow I will deliver to you. These different clauses may wrongly convey same meaning in different parts of document. Experimental results have overcomed the limitation and prove LSA with tf-idf method better in performance than KNN with tf-idf."}
{"_id": "9b2191f457af676bf7d356560de93a463410c481", "title": "JSForce: A Forced Execution Engine for Malicious JavaScript Detection", "text": "The drastic increase of JavaScript exploitation attacks has led to a strong interest in developing techniques to enable malicious JavaScript analysis. Existing analysis techniques fall into two general categories: static analysis and dynamic analysis. Static analysis tends to produce inaccurate results (both false positive and false negative) and is vulnerable to a wide series of obfuscation techniques. Thus, dynamic analysis is constantly gaining popularity for exposing the typical features of malicious JavaScript. However, existing dynamic analysis techniques possess limitations such as limited code coverage and incomplete environment setup, leaving a broad attack surface for evading the detection. To overcome these limitations, we present the design and implementation of a novel JavaScript forced execution engine named JSForce which drives an arbitrary JavaScript snippet to execute along different paths without any input or environment setup. We evaluate JSForce using 220,587 HTML and 23,509 PDF realworld samples. Experimental results show that by adopting our forced execution engine, the malicious JavaScript detection rate can be substantially boosted by 206.29% using same detection policy without any noticeable false positive increase. We also make JSForce publicly available as an online service and will release the source code to the security community upon the acceptance for publication."}
{"_id": "de56d4dc240d358465264e10d058ea12b89bfdca", "title": "Evaluation of Cluster Identification Performance for Different PCP Variants", "text": "Parallel coordinate plots (PCPs) are a well-known visualization technique for viewing multivariate data. In the past, various visual modifications to PCPs have been proposed to facilitate tasks such as correlation and cluster identification, to reduce visual clutter, and to increase their information throughput. Most modifications pertain to the use of color and opacity, smooth curves, or the use of animation. Although many of these seem valid improvements, only few user studies have been performed to investigate this, especially with respect to cluster identification. We performed a user study to evaluate cluster identification performance \u2013 with respect to response time and correctness \u2013 of nine PCP variations, including standard PCPs. To generate the variations, we focused on covering existing techniques as well as possible while keeping testing feasible. This was done by adapting and merging techniques, which led to the following novel variations. The first is an effective way of embedding scatter plots into PCPs. The second is a technique for highlighting fuzzy clusters based on neighborhood density. The third is a spline-based drawing technique to reduce ambiguity. The last is a pair of animation schemes for PCP rotation. We present an overview of the tested PCP variations and the results of our study. The most important result is that a fair number of the seemingly valid improvements, with the exception of scatter plots embedded into PCPs, do not result in significant performance gains."}
{"_id": "90f787d12c0115b595692b8dc7ea5223fe81ccbe", "title": "Study of bi-directional buck-boost converter with different control methods", "text": "In this paper, four types control methods have been proposed for buck-boost cascade topology. This converter used in hybrid electric vehicle can step the battery voltage level either up or down. The target is to find the most appropriate control method for the converter with certain specifications. Critical conduction mode control, fixed frequency control, the buck/boost control and the buck + boost control have been mentioned. The inductor design is presented according to the given parameters. The relationship among the parameters for buck/boost mode, buck mode and boost mode in critical conduction mode are proposed. The efficiency comparison for four control methods is given. The loss breakdown of different control methods is also presented by mathematical calculation."}
{"_id": "052662aaa5e351e194bcd9bf34810f9bb3f45e93", "title": "A user-guided approach to program analysis", "text": "Program analysis tools often produce undesirable output due to various approximations. We present an approach and a system EUGENE that allows user feedback to guide such approximations towards producing the desired output. We formulate the problem of user-guided program analysis in terms of solving a combination of hard rules and soft rules: hard rules capture soundness while soft rules capture degrees of approximations and preferences of users. Our technique solves the rules using an off-the-shelf solver in a manner that is sound (satisfies all hard rules), optimal (maximally satisfies soft rules), and scales to real-world analyses and programs. We evaluate EUGENE on two different analyses with labeled output on a suite of seven Java programs of size 131\u2013198 KLOC. We also report upon a user study involving nine users who employ EUGENE to guide an information-flow analysis on three Java micro-benchmarks. In our experiments, EUGENE significantly reduces misclassified reports upon providing limited amounts of feedback."}
{"_id": "362b430408e4fdb535abc3613861441b49b2aadb", "title": "Fast efficient algorithm for enhancement of low lighting video", "text": "We describe a novel and effective video enhancement algorithm for low lighting video. The algorithm works by first inverting the input low-lighting video and then applying an image de-haze algorithm on the inverted input. To facilitate faster computation and improve temporal consistency, correlations between temporally neighboring frames are utilized. Simulations using naive implementations of the algorithm show good enhancement results and 2x speed-up as compared with frame-wise enhancement algorithms, with further improvements in both quality and speed possible."}
{"_id": "84606131b809c0043098fb9947e04b14d2258b7c", "title": "Prediction of heart disease using data mining techniques", "text": "The healthcare industry is a vast field with a plethora of data about patients,added to the huge medical records every passing day. In terms of science, this industry is \u2019information rich\u2019 yet \u2019knowledge poor\u2019. However, data mining with its various analytical tools and techniques plays a major role in reducing the use of cumbersome tests used on patients to detect a disease. The aim of this paper is to employ and analyze different data mining techniques for the prediction of heart disease in a patient through extraction of interesting patterns from the dataset using vital parameters. This paper strives to bring out the methodology and implementation of these techniques-Artificial Neural Networks, Decision Tree and Naive Bayes and stress upon the results and conclusion induced on the basis of accuracy and time complexity. By far, the observations reveal that Artificial Neural Networks outperformed Naive Bayes and Decision Tree."}
{"_id": "69db38730143da1c8883e281128895b27bba737e", "title": "Design of a novel dual band patch antenna loaded with CELC resonator for WLAN/WiMAX applications", "text": "A Complementary Electric Field Coupled (CELC) resonator, loaded in the ground plane of a conventional patch antenna to achieve dual band operation is presented here. Two antenna designs have been presented. In the first design, one CELC is etched out from the ground plane of patch antenna which leads to generation of a lower order mode in addition to the conventional patch mode. In order to achieve better impedance bandwidth for the lower order resonant mode two CELC resonators with closely spaced resonant frequencies are etched out from the ground plane of patch antenna in the second design. The proposed antennas can be used in WLAN (Wireless Local Area Network) and WiMAX (Worldwide Interoperability for Microwave Access) applications."}
{"_id": "9fa42f3a881ee69ac7f16165682daa61b2caf622", "title": "Quantitative and qualitative research: beyond the debate.", "text": "Psychology has been a highly quantitative field since its conception as a science. However, a qualitative approach to psychological research has gained increasing importance in the last decades, and an enduring debate between quantitative and qualitative approaches has arisen. The recently developed Mixed Methods Research (MMR) addresses this debate by aiming to integrate quantitative and qualitative approaches. This article outlines and discusses quantitative, qualitative and mixed methods research approaches with specific reference to their (1) philosophical foundations (i.e. basic sets of beliefs that ground inquiry), (2) methodological assumptions (i.e. principles and formal conditions which guide scientific investigation), and (3) research methods (i.e. concrete procedures for data collection, analysis and interpretation). We conclude that MMR may reasonably overcome the limitation of purely quantitative and purely qualitative approaches at each of these levels, providing a fruitful context for a more comprehensive psychological research."}
{"_id": "648ffd39ec9ee9edea7631c1e5be2e49e7b0522e", "title": "A generic abstract syntax model for embedded languages", "text": "Representing a syntax tree using a data type often involves having many similar-looking constructors. Functions operating on such types often end up having many similar-looking cases. Different languages often make use of similar-looking constructions. We propose a generic model of abstract syntax trees capable of representing a wide range of typed languages. Syntactic constructs can be composed in a modular fashion enabling reuse of abstract syntax and syntactic processing within and across languages. Building on previous methods of encoding extensible data types in Haskell, our model is a pragmatic solution to Wadler's \"expression problem\". Its practicality has been confirmed by its use in the implementation of the embedded language Feldspar."}
{"_id": "32aea4c9fb9eb7cf2b6869efa83cf73420374628", "title": "A theory of reasoned action: some applications and implications.", "text": ""}
{"_id": "091778f43d947affb69dbccc2c3251abfa852ad2", "title": "Semantic File Systems", "text": "A semantic file system is an information storage system that provides flexible associative access to the system's contents by automatically extracting attributes from files with file type specific transducers. Associative access is provided by a conservative extension to existing tree-structured file system protocols, and by protocols that are designed specifically for content based access. Compatiblity with existing file system protocols is provided by introducing the concept of a virtual directory. Virtual directory names are interpreted as queries, and thus provide flexible associative access to files and directories in a manner compatible with existing software. Rapid attribute-based access to file system contents is implemented by automatic extraction and indexing of key properties of file system objects. The automatic indexing of files and directories is called \"semantic\" because user programmable transducers use information about the semantics of updated file system objects to extract the properties for indexing. Experimental results from a semantic file system implementation support the thesis that semantic file systems present a more effective storage abstraction than do traditional tree structured file systems for information sharing and command level programming."}
{"_id": "1955266a8a58d94e41ad0efe20d707c92a069e95", "title": "Approximate Nearest Neighbors: Towards Removing the Curse of Dimensionality", "text": "The nearest neighbor problem is the follolving: Given a set of n points P = (PI, . . . ,p,} in some metric space X, preprocess P so as to efficiently answer queries which require finding bhe point in P closest to a query point q E X. We focus on the particularly interesting case of the d-dimensional Euclidean space where X = Wd under some Zp norm. Despite decades of effort, t,he current solutions are far from saabisfactory; in fact, for large d, in theory or in practice, they provide litt,le improvement over the brute-force algorithm which compares the query point to each data point. Of late, t,here has been some interest in the approximate newest neighbors problem, which is: Find a point p E P that is an c-approximate nearest neighbor of the query q in t,hat for all p\u2019 E P, d(p, q) < (1 + e)d(p\u2019, q). We present two algorithmic results for the approximate version t,hat significantly improve the known bounds: (a) preprocessing cost polynomial in n and d, and a truly sublinear query t.ime (for 6 > 1); and, (b) query time polynomial in log-n and d, and only a mildly exponential preprocessing cost* O(n) x 0(1/~)~. Furt.her, applying a classical geometric lemma on random projections (for which we give a simpler proof), we obtain t.he first known algorithm with polynomial preprocessing and query t.ime polynomial in d and log n. Unfortunately, for small E, the latter is a purely theoretical result since bhe e?rponent depends on l/e. Experimental resuits indicate that our tit algori&m offers orders of magnitude improvement on running times over real data sets. Its key ingredient is the notion of locality-sensitive hashing which may be of independent interest; here, we give applications to information ret,rieval, pattern recognition, dynamic closest-pairs, and fast clustering algorithms. *Supported by a Stanford Graduate Fellowship and NSF"}
{"_id": "1d32f29a9880998264286633e66ea92915cc557a", "title": "Multi-Probe LSH: Efficient Indexing for High-Dimensional Similarity Search", "text": "Similarity indices for high-dimensional data are very desirable for building content-based search systems for featurerich data such as audio, images, videos, and other sensor data. Recently, locality sensitive hashing (LSH) and its variations have been proposed as indexing techniques for approximate similarity search. A significant drawback of these approaches is the requirement for a large number of hash tables in order to achieve good search quality. This paper proposes a new indexing scheme called multi-probe LSH that overcomes this drawback. Multi-probe LSH is built on the well-known LSH technique, but it intelligently probes multiple buckets that are likely to contain query results in a hash table. Our method is inspired by and improves upon recent theoretical work on entropy-based LSH designed to reduce the space requirement of the basic LSH method. We have implemented the multi-probe LSH method and evaluated the implementation with two different high-dimensional datasets. Our evaluation shows that the multi-probe LSH method substantially improves upon previously proposed methods in both space and time efficiency. To achieve the same search quality, multi-probe LSH has a similar timeefficiency as the basic LSH method while reducing the number of hash tables by an order of magnitude. In comparison with the entropy-based LSH method, to achieve the same search quality, multi-probe LSH uses less query time and 5 to 8 times fewer number of hash tables."}
{"_id": "2ca4b5e877a52ef9681e112a2d5b1308dab3f237", "title": "SmartStore: a new metadata organization paradigm with semantic-awareness for next-generation file systems", "text": "Existing storage systems using hierarchical directory tree do not meet scalability and functionality requirements for exponentially growing datasets and increasingly complex queries in Exabyte-level systems with billions of files. This paper proposes semantic-aware organization, called SmartStore, which exploits metadata semantics of files to judiciously aggregate correlated files into semantica-ware groups by using information retrieval tools. Decentralized design improves system scalability and reduces query latency for complex queries (range and top-k queries), which is conducive to constructing semantic-aware caching, and conventional filename-based query. SmartStore limits search scope of complex query to a single or a minimal number of semantically related groups and avoids or alleviates brute-force search in entire system. Extensive experiments using real-world traces show that SmartStore improves system scalability and reduces query latency over basic database approaches by one thousand times. To the best of our knowledge, this is the first study implementing complex queries in large-scale file systems."}
{"_id": "096db7e8d2b209fb6dca9c7495ac84405c40e507", "title": "Hierarchical ALS Algorithms for Nonnegative Matrix and 3D Tensor Factorization", "text": "In the paper we present new Alternating Least Squares (ALS) algorithms for Nonnegative Matrix Factorization (NMF) and their extensions to 3D Nonnegative Tensor Factorization (NTF) that are robust in the presence of noise and have many potential applications, including multi-way Blind Source Separation (BSS), multi-sensory or multi-dimensional data analysis, and nonnegative neural sparse coding. We propose to use local cost functions whose simultaneous or sequential (one by one) minimization leads to a very simple ALS algorithm which works under some sparsity constraints both for an under-determined (a system which has less sensors than sources) and over-determined model. The extensive experimental results confirm the validity and high performance of the developed algorithms, especially with usage of the multi-layer hierarchical NMF. Extension of the proposed algorithm to multidimensional Sparse Component Analysis and Smooth Component Analysis is also proposed."}
{"_id": "ee62294e294d1bd3ceddd432f17d8fcc14a3b35f", "title": "Multi-Scale DenseNet-Based Electricity Theft Detection", "text": "1. Beijing University of Posts and Telecommunications, Beijing, 100876, China deepblue.lb@gmail.com 2. School of Computer, National University of Defense Technology, Changsha, 410073, China kelele.xu@gmail.com 3. School of Information Communication, National University of Defense Technology, Wuhan, 430015, China 4. The University of Melbourne, Parkville, 3010, Australia yihengw1@student.unimelb.edu.au 5. China Minsheng Bank, Beijing 100031, China wangyanbo@cmbc.com.cn"}
{"_id": "09bd1991465ffe079e4ffccd917e014570aac588", "title": "Automatic PCB Defects Detection and Classification using Matlab", "text": "The importance of the Printed Circuit Board inspection process has been magnified by requirements of the modern manufacturing environment. In electronics mass production manufacturing facilities, an attempt is often to achieve 100% quality assurance. In this work Machine Vision PCB Inspection System is applied at the first step of manufacturing. In this system a PCB inspection system is proposed and the inspection algorithm mainly focuses on the defect detection and defect classification of the defects. Defect classification is essential to the identification of the defect sources. The purpose of the system is to provide the automatic defect detection of PCB and relieve the human inspectors from the tedious task of finding the defects in PCB which may lead to electric failure. We first compare a standard PCB inspection image with a PCB image to be inspected. The MATLAB tool is used to detect the defects and to classify the defects. With the solutions designed and implemented in this thesis the algorithm designed in the proposed system is able to detect and classify all the known 14 types of defects successfully with greater accuracy. The algorithm makes use of image subtraction method for defect detection and kNN classification algorithm for the classification of the defects. This thesis will present and analyze the performance of the proposed inspection algorithm. The experiment will measure the accuracy of the system."}
{"_id": "ad669e5d7dc28f2c7b401c90645f6a42c96df5bd", "title": "A compact broadband hybrid ridged SIW and GCPW coupler", "text": "In this paper, a novel hybrid ridged substrate integrated waveguide (RSIW) and ground coplanar waveguide (GCPW) structure coupler is presented. In this coupler, the RSIW and GCPW are arranged back to back to realize high-integration. Such hybrid coupler combines the broadband advantage of RSIW and the flexibility of GCPW. According to the simulated results, it can operate at 28 GHz and achieve a relative bandwidth of 17%."}
{"_id": "544ef9981674ef85eac05e7660b3702e96c5eca8", "title": "Automatic detection of users\u2019 skill levels using high-frequency user interface events", "text": "Computer users have different levels of system skills. Moreover, each user has different levels of skill across different applications and even in different portions of the same application. Additionally, users\u2019 skill levels change dynamically as users gain more experience in a user interface. In order to adapt user interfaces to the different needs of user groups with different levels of skills, automatic methods of skill detection are required. In this paper, we present our experiments and methods, which are used to build automatic skill classifiers for desktop applications. Machine learning algorithms were used to build statistical predictive models of skill. Attribute values were extracted from high frequency user interface events, such as mouse motions and menu interactions, and were used as inputs to our models. We have built both task-independent and task-dependent classifiers with promising results."}
{"_id": "1309bf56a5c020a2dabbbc9c33d9865706278c1c", "title": "Balanced outcomes in social exchange networks", "text": "The study of bargaining has a long history, but many basic settings are still rich with unresolved questions. In particular, consider a set of agents who engage in bargaining with one another,but instead of pairs of agents interacting in isolation,agents have the opportunity to choose whom they want to negotiate with, along the edges of a graph representing social-network relations. The area of network exchange theory in sociology has developed a large body of experimental evidence for the way in which people behave in such network-constrained bargaining situations, and it is a challenging problem to develop models that are both mathematically tractable and in general agreement with the results of these experiments.\n We analyze a natural theoretical model arising in network exchange theory, which can be viewed as a direct extension of the well-known Nash bargaining solution to the case of multiple agents interacting on a graph. While this generalized Nash bargaining solution is surprisingly effective at picking up even subtle differences in bargaining power that have been observed experimentally on small examples, it has remained an open question to characterize the values taken by this solution on general graphs, or to find an efficient means to compute it.\n Here we resolve these questions, characterizing the possible values of this bargaining solution, and giving an efficient algorithm to compute the set of possible values. Our result exploits connections to the structure of matchings in graphs, including decomposition theorems for graphs with perfect matchings, and also involves the development of new techniques. In particular, the values we are seeking turn out to correspond to a novel combinatorially defined point in the interior of a fractional relaxation of the matching problem."}
{"_id": "8b835b74b8fcd7dfa5608c4195b6f575149125f7", "title": "Evaluative Language Beyond Bags of Words: Linguistic Insights and Computational Applications", "text": "The study of evaluation, affect, and subjectivity is a multidisciplinary enterprise, including sociology, psychology, economics, linguistics, and computer science. A number of excellent computational linguistics and linguistic surveys of the field exist. Most surveys, however, do not bring the two disciplines together to show how methods from linguistics can benefit computational sentiment analysis systems. In this survey, we show how incorporating linguistic insights, discourse information, and other contextual phenomena, in combination with the statistical exploitation of data, can result in an improvement over approaches that take advantage of only one of these perspectives. We first provide a comprehensive introduction to evaluative language from both a linguistic and computational perspective. We then argue that the standard computational definition of the concept of evaluative language neglects the dynamic nature of evaluation, in which the interpretation of a given evaluation depends on linguistic and extra-linguistic contextual factors. We thus propose a dynamic definition that incorporates update functions. The update functions allow for different contextual aspects to be incorporated into the calculation of sentiment for evaluative words or expressions, and can be applied at all levels of discourse. We explore each level and highlight which linguistic aspects contribute to accurate extraction of sentiment. We end the review by outlining what we believe the future directions of sentiment analysis are, and the role that discourse and contextual information need to play."}
{"_id": "2fda75479692808771fafece53625c3582f08f22", "title": "Performing a check-in: emerging practices, norms and 'conflicts' in location-sharing using foursquare", "text": "Location-sharing services have a long history in research, but have only recently become available for consumers. Most popular commercial location-sharing services differ from previous research efforts in important ways: they use manual 'check-ins' to pair user location with semantically named venues rather than tracking; venues are visible to all users; location is shared with a potentially very large audience; and they employ incentives. By analysis of 20 in-depth interviews with foursquare users and 47 survey responses, we gained insight into emerging social practices surrounding location-sharing. We see a shift from privacy issues and data deluge, to more performative considerations in sharing one's location. We discuss performance aspects enabled by check-ins to public venues, and show emergent, but sometimes conflicting norms (not) to check-in."}
{"_id": "28b471f2ee92e6eabc1191f80f8c5117e840eadb", "title": "Predicting Airfare Prices", "text": "Airlines implement dynamic pricing for their tickets, and base their pricing decisions on demand estimation models. The reason for such a complicated system is that each flight only has a set number of seats to sell, so airlines have to regulate demand. In the case where demand is expected to exceed capacity, the airline may increase prices, to decrease the rate at which seats fill. On the other hand, a seat that goes unsold represents a loss of revenue, and selling that seat for any price above the service cost for a single passenger would have been a more preferable scenario."}
{"_id": "5806969719c1b01290f0ce2c6c5752a8dd31e121", "title": "Functional imaging of human crossmodal identification and object recognition", "text": "The perception of objects is a cognitive function of prime importance. In everyday life, object perception benefits from the coordinated interplay of vision, audition, and touch. The different sensory modalities provide both complementary and redundant information about objects, which may improve recognition speed and accuracy in many circumstances. We review crossmodal studies of object recognition in humans that mainly employed functional magnetic resonance imaging (fMRI). These studies show that visual, tactile, and auditory information about objects can activate cortical association areas that were once believed to be modality-specific. Processing converges either in multisensory zones or via direct crossmodal interaction of modality-specific cortices without relay through multisensory regions. We integrate these findings with existing theories about semantic processing and propose a general mechanism for crossmodal object recognition: The recruitment and location of multisensory convergence zones varies depending on the information content and the dominant modality."}
{"_id": "9a57cac91ecf32e31af82e621eb75a7af0cfed70", "title": "A gene module associated with dysregulated TCR signaling pathways in CD4+ T cell subsets in rheumatoid arthritis.", "text": "We analyzed the transcriptome of detailed CD4+ T cell subsets including them after abatacept treatment, and examined the difference among CD4+ T cell subsets and identified gene sets that are closely associated disease activity and abatacept treatment. Seven CD4+ T cell subsets (naive, Th1, Th17, Th1/17, nonTh1/17, Tfh and Treg) were sorted from PBMCs taken from 10 RA patients and 10 healthy controls, and three RA patients donated samples before and 6 months after abatacept treatment. Paired-end RNA sequencing was performed using HiSeq 2500. A total of 149 samples except for 12 outliers were analyzed. Overview of expression pattern of RA revealed that administration of abatacept exerts a large shift toward the expression pattern of HC. Most of differentially expressed gene (DEG) upregulated in RA (n\u00a0=\u00a01776) were downregulated with abatacept treatment (n\u00a0=\u00a01349). Inversely, most of DEG downregulated in RA (n\u00a0=\u00a01860) were upregulated with abatacept treatment (n\u00a0=\u00a01294). This DEG-based analysis revealed shared pathway changes in RA CD4+ T cell subsets. Knowledge-based pathway analysis revealed the upregulation of activation-related pathways in RA that was substantially ameliorated by abatacept. Weighted gene co-expression network analysis (WGCNA) evaluated CD4+ T cells collectively and identified a gene module that consisted of 227 genes and was correlated with DAS28-CRP (Spearman's rho\u00a0=\u00a00.46, p\u00a0=\u00a04\u00a0\u00d7\u00a010-9) and abatacept administration (Spearman's rho\u00a0=\u00a0-0.91, p\u00a0=\u00a05\u00a0\u00d7\u00a010-57). The most highly connected 30 genes of this module included ZAP70 and JAK3, and pathway analysis of this module revealed dysregulation of the TCR signaling pathway network, which was ameliorated by abatacept."}
{"_id": "2f31a567fe782cc2a2c0a30a721dd9ccb1aa028a", "title": "Dynamic Social Network Analysis of a Dark Network: Identifying Significant Facilitators", "text": "\"Dark Networks\" refer to various illegal and covert social networks like criminal and terrorist networks. These networks evolve over time with the formation and dissolution of links to survive control efforts by authorities. Previous studies have shown that the link formation process in such networks is influenced by a set of facilitators. However, there have been few empirical evaluations to determine the significant facilitators. In this study, we used dynamic social network analysis methods to examine several plausible link formation facilitators in a large-scale real-world narcotics network. Multivariate Cox regression showed that mutual acquaintance and vehicle affiliations were significant facilitators in the network under study. These findings provide insights into the link formation processes and the resilience of dark networks. They also can be used to help authorities predict co-offending in future crimes."}
{"_id": "2012969b230dda8759e9ab7cdf17fcca84583542", "title": "Margin based feature selection - theory and algorithms", "text": "Feature selection is the task of choosing a small set out of a given set of features that capture the relevant properties of the data. In the context of supervised classification problems the relevance is determined by the given labels on the training data. A good choice of features is a key for building compact and accurate classifiers. In this paper we introduce a margin based feature selection criterion and apply it to measure the quality of sets of features. Using margins we devise novel selection algorithms for multi-class classification problems and provide theoretical generalization bound. We also study the well known Relief algorithm and show that it resembles a gradient ascent over our margin criterion. We apply our new algorithm to various datasets and show that our new Simba algorithm, which directly optimizes the margin, outperforms Relief."}
{"_id": "d817776590c8586b344dfc8628bb7676ab2d8c32", "title": "Time-Elastic Generative Model for Acceleration Time Series in Human Activity Recognition", "text": "Body-worn sensors in general and accelerometers in particular have been widely used in order to detect human movements and activities. The execution of each type of movement by each particular individual generates sequences of time series of sensed data from which specific movement related patterns can be assessed. Several machine learning algorithms have been used over windowed segments of sensed data in order to detect such patterns in activity recognition based on intermediate features (either hand-crafted or automatically learned from data). The underlying assumption is that the computed features will capture statistical differences that can properly classify different movements and activities after a training phase based on sensed data. In order to achieve high accuracy and recall rates (and guarantee the generalization of the system to new users), the training data have to contain enough information to characterize all possible ways of executing the activity or movement to be detected. This could imply large amounts of data and a complex and time-consuming training phase, which has been shown to be even more relevant when automatically learning the optimal features to be used. In this paper, we present a novel generative model that is able to generate sequences of time series for characterizing a particular movement based on the time elasticity properties of the sensed data. The model is used to train a stack of auto-encoders in order to learn the particular features able to detect human movements. The results of movement detection using a newly generated database with information on five users performing six different movements are presented. The generalization of results using an existing database is also presented in the paper. The results show that the proposed mechanism is able to obtain acceptable recognition rates (F = 0.77) even in the case of using different people executing a different sequence of movements and using different hardware."}
{"_id": "45fb42fd14695acf2554de4aef2961449291e1f9", "title": "Effective Clipart Image Vectorization through Direct Optimization of Bezigons", "text": "Bezigons, i.e., closed paths composed of Be\u0301zier curves, have been widely employed to describe shapes in image vectorization results. However, most existing vectorization techniques infer the bezigons by simply approximating an intermediate vector representation (such as polygons). Consequently, the resultant bezigons are sometimes imperfect due to accumulated errors, fitting ambiguities, and a lack of curve priors, especially for low-resolution images. In this paper, we describe a novel method for vectorizing clipart images. In contrast to previous methods, we directly optimize the bezigons rather than using other intermediate representations; therefore, the resultant bezigons are not only of higher fidelity compared with the original raster image but also more reasonable because they were traced by a proficient expert. To enable such optimization, we have overcome several challenges and have devised a differentiable data energy as well as several curve-based prior terms. To improve the efficiency of the optimization, we also take advantage of the local control property of bezigons and adopt an overlapped piecewise optimization strategy. The experimental results show that our method outperforms both the current state-of-the-art method and commonly used commercial software in terms of bezigon quality."}
{"_id": "38cc6dca88f4352454b71a849fe9f47f27759626", "title": "i-JEN: Visual Interactive Malaysia Crime News Retrieval System", "text": "Supporting crime news investigation involves a mechanism to help monitor the current and past status of criminal events. We believe this could be well facilitated by focusing on the user interfaces and the event crime model aspects. In this paper we discuss on a development of Visual Interactive Malaysia Crime News Retrieval System (i-JEN) and describe the approach, user studies and planned, the system architecture and future plan. Our main objectives are to construct crime-based event; investigate the use of crimebased event in improving the classification and clustering; develop an interactive crime news retrieval system; visualize crime news in an effective and interactive way; integrate them into a usable and robust system and evaluate the usability and system performance. The system will serve as a news monitoring system which aims to automatically organize, retrieve and present the crime news in such a way as to support an effective monitoring, searching, and browsing for the target users groups of general public, news analysts and policemen or crime investigators. The study will contribute to the better understanding of the crime data consumption in the Malaysian context as well as the developed system with the visualisation features to address crime data and the eventual goal of combating the crimes."}
{"_id": "37d06d4eb6bf0e2d9641ea420054abdc31bdf18a", "title": "Knowledge management, innovation and firm performance", "text": "Purpose \u2013 To provide important empirical evidence to support the role of knowledge management within firms. Design/methodology/approach \u2013 Data were collected using a mail survey sent to CEOs representing firms with 50 or more employees from a cross-section of industries. A total of 1,743 surveys were mailed out and 443 were returned and usable (27.8 percent response rate). The sample was checked for response and non-response bias. Hypotheses were tested using structural equation modelling. Findings \u2013 This paper presents knowledge management as a coordinating mechanism. Empirical evidence supports the view that a firm with a knowledge management capability will use resources more efficiently and so will be more innovative and perform better. Research limitations/implications \u2013 The sample slightly over-represented larger firms. Data were also collected in New Zealand. As with most studies, it is important to replicate this study in different contexts. Practical implications \u2013 Knowledge management is embraced in many organizations and requires a business case to justify expenditure on programs to implement knowledge management behaviours and practices or hardware and software solutions. This paper provides support for the importance of knowledge management to enhance innovation and performance. Originality/value \u2013 This paper is one of the first to find empirical support for the role of knowledge management within firms. Further, the positioning of knowledge management as a coordinating mechanism is also an important contribution to our thinking on this topic."}
{"_id": "62d853de6c7764b893455b7894ab0d522076bd27", "title": "Not All Oil Price Shocks Are Alike : Disentangling Demand and Supply Shocks in the Crude Oil Market", "text": "Using a newly developed measure of global real economic activity, a structural decomposition of the real price of crude oil into three components is proposed: crude oil supply shocks; shocks to the global demand for all industrial commodities; and demand shocks that are specific to the crude oil market. The latter shock is designed to capture shifts in the price of oil driven by higher precautionary demand associated with concerns about future oil supply shortfalls. The paper estimates the dynamic effects of these shocks on the real price of oil. A historical decomposition sheds light on the causes of the major oil price shocks since 1975. The implications of higher oil prices for U.S. real GDP and CPI inflation are shown to depend on the cause of the oil price increase. Changes in the composition of shocks help explain why regressions of macroeconomic aggregates on oil prices tend to be unstable. Evidence that the recent increase in crude oil prices was driven primarily by global aggregate demand shocks helps explain why this oil price shock so far has failed to cause a major recession in the U.S."}
{"_id": "a9fb026e72056dd16c19a6e121e37cced125613a", "title": "OntoDM: An Ontology of Data Mining", "text": "Motivated by the need for unification of the field of data mining and the growing demand for formalized representation of outcomes of research, we address the task of constructing an ontology of data mining. The proposed ontology, named OntoDM, is based on a recent proposal of a general framework for data mining, and includes definitions of basic data mining entities, such as datatype and dataset, data mining task, data mining algorithm and components thereof (e.g., distance function), etc. It also allows for the definition of more complex entities, e.g., constraints in constraint-based data mining, sets of such constraints (inductive queries) and data mining scenarios (sequences of inductive queries). Unlike most existing approaches to constructing ontologies of data mining, OntoDM is a deep/heavy-weight ontology and follows best practices in ontology engineering, such as not allowing multiple inheritance of classes, using a predefined set of relations and using a top level ontology."}
{"_id": "1e24a6b80171c7ed7b543fe74d62d0b364f3f31c", "title": "Improving RDF Data Through Association Rule Mining", "text": "Linked Open Data comprises very many and often large public data sets, which are mostly presented in the Rdf triple structure of subject, predicate, and object. However, the heterogeneity of available open data requires significant integration steps before it can be used in applications. A promising and novel technique to explore such data is the use of association rule mining. We introduce \u201cmining configurations\u201d, which allow us to mine Rdf data sets in various ways. Different configurations enable us to identify schema and value dependencies that in combination result in interesting use cases. We present rule-based approaches for predicate suggestion, data enrichment, ontology improvement, and query relaxation. On the one hand we prevent inconsistencies in the data through predicate suggestion, enrichment with missing facts, and alignment of the corresponding ontology. On the other hand we support users to handle inconsistencies during query formulation through predicate expansion techniques. Based on these approaches, we show that association rule mining benefits the integration and usability of Rdf data."}
{"_id": "20dc1890ca65e01856f31edf10126c2ad67e9d04", "title": "PathSim: Meta Path-Based Top-K Similarity Search in Heterogeneous Information Networks", "text": "Similarity search is a primitive operation in database and Web search engines. With the advent of large-scale heterogeneous information networks that consist of multi-typed, interconnected objects, such as the bibliographic networks and social media networks, it is important to study similarity search in such networks. Intuitively, two objects are similar if they are linked by many paths in the network. However, most existing similarity measures are defined for homogeneous networks. Different semantic meanings behind paths are not taken into consideration. Thus they cannot be directly applied to heterogeneous networks. In this paper, we study similarity search that is defined among the same type of objects in heterogeneous networks. Moreover, by considering different linkage paths in a network, one could derive various similarity semantics. Therefore, we introduce the concept of meta path-based similarity, where a meta path is a path consisting of a sequence of relations defined between different object types (i.e., structural paths at the meta level). No matter whether a user would like to explicitly specify a path combination given sufficient domain knowledge, or choose the best path by experimental trials, or simply provide training examples to learn it, meta path forms a common base for a network-based similarity search engine. In particular, under the meta path framework we define a novel similarity measure called PathSim that is able to find peer objects in the network (e.g., find authors in the similar field and with similar reputation), which turns out to be more meaningful in many scenarios compared with random-walk based similarity measures. In order to support fast online query processing for PathSim queries, we develop an efficient solution that partially materializes short meta paths and then concatenates them online to compute top-k results. Experiments on real data sets demonstrate the effectiveness and efficiency of our proposed paradigm."}
{"_id": "233f5e420642b33ee4fea3af9db846324ccba469", "title": "Evaluating question answering over linked data", "text": "The availability of large amounts of open, distributed and structured semantic data on the web has no precedent in the history of computer science. In recent years, there have been important advances in semantic search and question answering over RDF data. In particular, natural language interfaces to online semantic data have the advantage that they can exploit the expressive power of Semantic Web data models and query languages, while at the same time hiding their complexity from the user. However, despite the increasing interest in this area, there are no evaluations so far that systematically evaluate this kind of systems, in contrast to traditional question answering and search interfaces to document spaces. To address this gap, we have set up a series of evaluation challenges for question answering over linked data. The main goal of the challenge was to get insight into the strengths, capabilities and current shortcomings of question answering systems as interfaces to query linked data sources, as well as benchmarking how these interaction paradigms can deal with the fact that the amount of RDF data available on the web is very large and heterogeneous with respect to the vocabularies and schemas used. Here we report on the results from the first and second of such evaluation campaigns. We also discuss how the second evaluation addressed some of the issues and limitations which arose from the first one, as well as the open issues to be addressed in future competitions."}
{"_id": "dfd3696827abcaca0192224033cc963327825d13", "title": "dbrec - Music Recommendations Using DBpedia", "text": "This paper describes the theoretical background and the implementation of dbrec, a music recommendation system built on top of DBpedia, offering recommendations for more than 39,000 bands and solo artists. We discuss the various challenges and lessons learnt while building it, providing relevant insights for people developing applications consuming Linked Data. Furthermore, we provide a user-centric evaluation of the system, notably by comparing it to last.fm."}
{"_id": "f24eefd8d53208b2eca0bf9bbe34d01004e10133", "title": "Query modeling for entity search based on terms, categories, and examples", "text": "Users often search for entities instead of documents, and in this setting, are willing to provide extra input, in addition to a series of query terms, such as category information and example entities. We propose a general probabilistic framework for entity search to evaluate and provide insights in the many ways of using these types of input for query modeling. We focus on the use of category information and show the advantage of a category-based representation over a term-based representation, and also demonstrate the effectiveness of category-based expansion using example entities. Our best performing model shows very competitive performance on the INEX-XER entity ranking and list completion tasks."}
{"_id": "b0ab22f050c24b3d0fe92a742b8f1b00bf8243d7", "title": "Health benefits of fruit and vegetables are from additive and synergistic combinations of phytochemicals.", "text": "Cardiovascular disease and cancer are ranked as the first and second leading causes of death in the United States and in most industrialized countries. Regular consumption of fruit and vegetables is associated with reduced risks of cancer, cardiovascular disease, stroke, Alzheimer disease, cataracts, and some of the functional declines associated with aging. Prevention is a more effective strategy than is treatment of chronic diseases. Functional foods that contain significant amounts of bioactive components may provide desirable health benefits beyond basic nutrition and play important roles in the prevention of chronic diseases. The key question is whether a purified phytochemical has the same health benefit as does the whole food or mixture of foods in which the phytochemical is present. Our group found, for example, that the vitamin C in apples with skin accounts for only 0.4% of the total antioxidant activity, suggesting that most of the antioxidant activity of fruit and vegetables may come from phenolics and flavonoids in apples. We propose that the additive and synergistic effects of phytochemicals in fruit and vegetables are responsible for their potent antioxidant and anticancer activities, and that the benefit of a diet rich in fruit and vegetables is attributed to the complex mixture of phytochemicals present in whole foods."}
{"_id": "c0669244ca03002382b642aa0d744c101e2fa04b", "title": "Population-based norms for the Mini-Mental State Examination by age and educational level.", "text": "OBJECTIVE\nTo report the distribution of Mini-Mental State Examination (MMSE) scores by age and educational level.\n\n\nDESIGN\nNational Institute of Mental Health Epidemiologic Catchment Area Program surveys conducted between 1980 and 1984.\n\n\nSETTING\nCommunity populations in New Haven, Conn; Baltimore, Md; St Louis, Mo; Durham, NC; and Los Angeles, Calif.\n\n\nPARTICIPANTS\nA total of 18,056 adult participants selected by probability sampling within census tracts and households.\n\n\nMAIN OUTCOME MEASURES\nSummary scores for the MMSE are given in the form of mean, median, and percentile distributions specific for age and educational level.\n\n\nRESULTS\nThe MMSE scores were related to both age and educational level. There was an inverse relationship between MMSE scores and age, ranging from a median of 29 for those 18 to 24 years of age, to 25 for individuals 80 years of age and older. The median MMSE score was 29 for individuals with at least 9 years of schooling, 26 for those with 5 to 8 years of schooling, and 22 for those with 0 to 4 years of schooling.\n\n\nCONCLUSIONS\nCognitive performance as measured by the MMSE varies within the population by age and education. The cause of this variation has yet to be determined. Mini-Mental State Examination scores should be used to identify current cognitive difficulties and not to make formal diagnoses. The results presented should prove to be useful to clinicians who wish to compare an individual patient's MMSE scores with a population reference group and to researchers making plans for new studies in which cognitive status is a variable of interest."}
{"_id": "b4d28bee02c5fbcecbcf7c2467d0ccfcc9599973", "title": "Movie Review Classification Based on a Multiple Classifier", "text": "In this paper, we propose a method to classify movie review documents into positive or negative opinions. There are several approaches to classify documents. The previous studies, however, used only a single classifier for the classification task. We describe a multiple classifier for the review document classification task. The method consists of three classifiers based on SVMs, ME and score calculation. We apply two voting methods and SVMs to the integration process of single classifiers. The integrated methods improved the accuracy as compared with the three single classifiers. The experimental results show the effectiveness of our method."}
{"_id": "402a5f749c99c75c98a626539553c2e4b5939c29", "title": "G\u00f6del Machines: Fully Self-referential Optimal Universal Self-improvers", "text": "We present the first class of mathematically rigorous, general, fully self-referential, self-improving, optimally efficient problem solvers. Inspired by Kurt G\u00f6del\u2019s celebrated self-referential formulas (1931), such a problem solver rewrites any part of its own code as soon as it has found a proof that the rewrite is useful, where the problem-dependent utility function and the hardware and the entire initial code are described by axioms encoded in an initial proof searcher which is also part of the initial code. The searcher systematically and efficiently tests computable proof techniques (programs whose outputs are proofs) until it finds a provably useful, computable self-rewrite. We show that such a self-rewrite is globally optimal\u2014no local maxima!\u2014since the code first had to prove that it is not useful to continue the proof search for alternative self-rewrites. Unlike previous non-self-referential methods based on hardwired proof searchers, ours not only boasts an optimal order of complexity but can optimally reduce any slowdowns hidden by the O()-notation, provided the utility of such speed-ups is provable at all."}
{"_id": "339888b357e780c6e80fc135ec48a14c3b524f7d", "title": "Broder and Mitzenmacher : Network Applications of Bloom Filters : A Survey 487", "text": "A Bloom filter is a simple space-efficient randomized data structure for representing a set in order to support membership queries. Bloom filters allow false positives but the space savings often outweigh this drawback when the probability of an error is controlled. Bloom filters have been used in database applications since the 1970s, but only in recent years have they become popular in the networking literature. The aim of this paper is to survey the ways in which Bloom filters have been used and modified in a variety of network problems, with the aim of providing a unified mathematical and practical framework for understanding them and stimulating their use in future applications."}
{"_id": "20b0616717eb92310c79bc007cab8d75de7ae0cb", "title": "Mechanical thrombectomy in acute ischemic stroke: Consensus statement by ESO-Karolinska Stroke Update 2014/2015, supported by ESO, ESMINT, ESNR and EAN.", "text": "The original version of this consensus statement on mechanical thrombectomy was approved at the European Stroke Organisation (ESO)-Karolinska Stroke Update conference in Stockholm, 16-18 November 2014. The statement has later, during 2015, been updated with new clinical trials data in accordance with a decision made at the conference. Revisions have been made at a face-to-face meeting during the ESO Winter School in Berne in February, through email exchanges and the final version has then been approved by each society. The recommendations are identical to the original version with evidence level upgraded by 20 February 2015 and confirmed by 15 May 2015. The purpose of the ESO-Karolinska Stroke Update meetings is to provide updates on recent stroke therapy research and to discuss how the results may be implemented into clinical routine. Selected topics are discussed at consensus sessions, for which a consensus statement is prepared and discussed by the participants at the meeting. The statements are advisory to the ESO guidelines committee. This consensus statement includes recommendations on mechanical thrombectomy after acute stroke. The statement is supported by ESO, European Society of Minimally Invasive Neurological Therapy (ESMINT), European Society of Neuroradiology (ESNR), and European Academy of Neurology (EAN)."}
{"_id": "eb5eaed8516c344383d5dadc828671a43b2d7da9", "title": "COSY: An Energy-Efficient Hardware Architecture for Deep Convolutional Neural Networks Based on Systolic Array", "text": "Deep convolutional neural networks (CNNs) show extraordinary abilities in artificial intelligence applications, but their large scale of computation usually limits their uses on resource-constrained devices. For CNN\u2019s acceleration, exploiting the data reuse of CNNs is an effective way to reduce bandwidth and energy consumption. Row-stationary (RS) dataflow of Eyeriss is one of the most energy-efficient state-of-the-art hardware architectures, but has redundant storage usage and data access, so the data reuse has not been fully exploited. It also requires complex control and is intrinsically unable to skip over zero-valued inputs in timing. In this paper, we present COSY (CNN on Systolic Array), an energy-efficient hardware architecture based on the systolic array for CNNs. COSY adopts the method of systolic array to achieve the storage sharing between processing elements (PEs) in RS dataflow at the RF level, which reduces low-level energy consumption and on-chip storage. Multiple COSY arrays sharing the same storage can execute multiple 2-D convolutions in parallel, further increasing the data reuse in the low-level storage and improving throughput. To compare the energy consumption of COSY and Eyeriss running actual CNN models, we build a process-based energy consumption evaluation system according to the hardware storage hierarchy. The result shows that COSY can achieve an over 15% reduction in energy consumption under the same constraints, improving the theoretical Energy-Delay Product (EDP) and Energy-Delay Squared Product (ED^2P) by 1.33 x on average. In addition, we prove that COSY has the intrinsic ability for zero-skipping, which can further increase the improvements to 2.25\u00d7 and 3.83\u00d7 respectively."}
{"_id": "4222de4ada0850e77b4cc3dec2d8148b4a5d0783", "title": "An introduction to recursive partitioning: rationale, application, and characteristics of classification and regression trees, bagging, and random forests.", "text": "Recursive partitioning methods have become popular and widely used tools for nonparametric regression and classification in many scientific fields. Especially random forests, which can deal with large numbers of predictor variables even in the presence of complex interactions, have been applied successfully in genetics, clinical medicine, and bioinformatics within the past few years. High-dimensional problems are common not only in genetics, but also in some areas of psychological research, where only a few subjects can be measured because of time or cost constraints, yet a large amount of data is generated for each subject. Random forests have been shown to achieve a high prediction accuracy in such applications and to provide descriptive variable importance measures reflecting the impact of each variable in both main effects and interactions. The aim of this work is to introduce the principles of the standard recursive partitioning methods as well as recent methodological improvements, to illustrate their usage for low and high-dimensional data exploration, but also to point out limitations of the methods and potential pitfalls in their practical application. Application of the methods is illustrated with freely available implementations in the R system for statistical computing."}
{"_id": "2cb94e79784d28fa2e6a7a1ff58cdb1261093f8b", "title": "Detecting Suicidal Ideation in Chinese Microblogs with Psychological Lexicons", "text": "Suicide is among the leading causes of death in China. However, technical approaches toward preventing suicide are challenging and remaining under development. Recently, several actual suicidal cases were preceded by users who posted micro blogs with suicidal ideation to Sina Weibo, a Chinese social media network akin to Twitter. It would therefore be desirable to detect suicidal ideations from micro blogs in real-time, and immediately alert appropriate support groups, which may lead to successful prevention. In this paper, we propose a real-time suicidal ideation detection system deployed over Weibo, using machine learning and known psychological techniques. Currently, we have identified 53 known suicidal cases who posted suicide notes on Weibo prior to their deaths. We explore linguistic features of these known cases using a psychological lexicon dictionary, and train an effective suicidal Weibo post detection model. 6714 tagged posts and several classifiers are used to verify the model. By combining both machine learning and psychological knowledge, SVM classifier has the best performance of different classifiers, yielding an F-measure of 68:3%, a Precision of 78:9%, and a Recall of 60:3%."}
{"_id": "6912297a9f404ff6dd23cb3bb3ff5c4981f68162", "title": "Provable Inductive Matrix Completion", "text": "Consider a movie recommendation system where apart from the ratings information, side information such as user\u2019s age or movie\u2019s genre is also available. Unlike standard matrix completion, in this setting one should be able to predict inductively on new users/movies. In this paper, we study the problem of inductive matrix completion in the exact recovery setting. That is, we assume that the ratings matrix is generated by applying feature vectors to a low-rank matrix and the goal is to recover back the underlying matrix. Furthermore, we generalize the problem to that of low-rank matrix estimation using rank-1 measurements. We study this generic problem and provide conditions that the set of measurements should satisfy so that the alternating minimization method (which otherwise is a non-convex method with no convergence guarantees) is able to recover back the exact underlying low-rank matrix. In addition to inductive matrix completion, we show that two other low-rank estimation problems can be studied in our framework: a) general low-rank matrix sensing using rank-1 measurements, and b) multi-label regression with missing labels. For both the problems, we provide novel and interesting bounds on the number of measurements required by alternating minimization to provably converges to the exact low-rank matrix. In particular, our analysis for the general low rank matrix sensing problem significantly improves the required storage and computational cost than that required by the RIP-based matrix sensing methods [1]. Finally, we provide empirical validation of our approach and demonstrate that alternating minimization is able to recover the true matrix for the above mentioned problems using a small number of measurements."}
{"_id": "4a79d343e5ce0f961a1a675d461f2208f153358b", "title": "For better or for worse: neural systems supporting the cognitive down- and up-regulation of negative emotion", "text": "Functional neuroimaging studies examining the neural bases of the cognitive control of emotion have found increased prefrontal and decreased amygdala activation for the reduction or down-regulation of negative emotion. It is unknown, however, (1) whether the same neural systems underlie the enhancement or up-regulation of emotion, and (2) whether altering the nature of the regulatory strategy alters the neural systems mediating the regulation. To address these questions using functional magnetic resonance imaging (fMRI), participants up- and down-regulated negative emotion either by focusing internally on the self-relevance of aversive scenes or by focusing externally on alternative meanings for pictured actions and their situational contexts. Results indicated (1a) that both up- and down-regulating negative emotion recruited prefrontal and anterior cingulate regions implicated in cognitive control, (1b) that amygdala activation was modulated up or down in accord with the regulatory goal, and (1c) that up-regulation uniquely recruited regions of left rostromedial PFC implicated in the retrieval of emotion knowledge, whereas down-regulation uniquely recruited regions of right lateral and orbital PFC implicated in behavioral inhibition. Results also indicated that (2) self-focused regulation recruited medial prefrontal regions implicated in internally focused processing, whereas situation-focused regulation recruited lateral prefrontal regions implicated in externally focused processing. These data suggest that both common and distinct neural systems support various forms of reappraisal and that which particular prefrontal systems modulate the amygdala in different ways depends on the regulatory goal and strategy employed."}
{"_id": "dc3e8bea9ef0c9a2df20e4d11860203eaf795b6a", "title": "Using Ground Reaction Forces from Gait Analysis: Body Mass as a Weak Biometric", "text": "Ground reaction forces generated during normal walking have recently been used to identify and/or classify individuals based upon the pattern of the forces observed over time. One feature that can be extracted from vertical ground reaction forces is body mass. This single feature has identifying power comparable to other studies that use multiple and more complex features. This study contributes to understanding the role of body mass in identification by (1) quantifying the accuracy and precision with which body mass can be obtained using vertical ground reaction forces, (2) quantifying the distribution of body mass across a population larger than has previously been studied in relation to gait analysis, and (3) quantifying the expected identification capabilities of systems using body mass as a weak biometric. Our results show that body mass can be measured in a fraction of a second with less than a 1 kilogram standard deviation of error."}
{"_id": "74b69c0db79ecfaac849b38a1d2cffbcc3121896", "title": "Machine learning based malware classification for Android applications using multimodal image representations", "text": "The popularity of smartphones usage especially Android mobile platform has increased to 80% of share in global smartphone operating systems market, as a result of which it is on the top in the attacker's target list. The fact of having more private data and low security assurance letting the attacker to write several malware programs in different ways for smartphone, and the possibility of obfuscating the malware detection applications through different coding techniques is giving more energy to attacker. Several approaches have been proposed to detect malwares through code analysis which are now severely facing the problem of code obfuscation and high computation requirement. We propose a machine learning based method to detect android malware by analyzing the visual representation of binary formatted apk file into Grayscale, RGB, CMYK and HSL. GIST feature from malware and benign image dataset were extracted and used to train machine learning algorithms. Initial experimental results are encouraging and computationally effective. Among machine learning algorithms Random Forest have achieved highest accuracy of 91% for grayscale image, which can be further improved by tuning the various parameters."}
{"_id": "4d534c09d4e3d93b2e2add55af0af30b04c75d40", "title": "The reliability and validity of the Face, Legs, Activity, Cry, Consolability observational tool as a measure of pain in children with cognitive impairment.", "text": "UNLABELLED\nPain assessment remains difficult in children with cognitive impairment (CI). In this study, we evaluated the validity and reliability of the Face, Legs, Activity, Cry, Consolability (FLACC) tool for assessing pain in children with CI. Each child's developmental level and ability to self-report pain were evaluated. The child's nurse observed and scored pain with the FLACC tool before and after analgesic administration. Simultaneously, parents scored pain with a visual analog scale, and scores were obtained from children who were able to self-report pain. Observations were videotaped and later viewed by nurses blinded to analgesics and pain scores. One-hundred-forty observations were recorded from 79 children. FLACC scores correlated with parent scores (P < 0.001) and decreased after analgesics (P = 0.001), suggesting good validity. Correlations of total scores (r = 0.5-0.8; P < 0.001) and of each category (r = 0.3-0.8; P < 0.001), as well as measures of exact agreement (kappa = 0.2-0.65), suggest good reliability. Test-retest reliability was supported by excellent correlations (r = 0.8-0.883; P < 0.001) and categorical agreement (r = 0.617-0.935; kappa = 0.400-0.881; P < 0.001). These data suggest that the FLACC tool may be useful as an objective measure of postoperative pain in children with CI.\n\n\nIMPLICATIONS\nThe FLACC pain assessment tool may facilitate reliable and valid observational pain assessment in children with cognitive impairment who cannot self-report their pain. Objective pain assessment is important to facilitate effective postoperative pain management in these vulnerable children."}
{"_id": "847e297181f64bcbc8c9f7dc5fa90ec2fe38c026", "title": "A 10-Gb/s receiver with a continuous-time linear equalizer and 1-tap decision-feedback equalizer", "text": "This paper presents a wireline receiver design of CMOS I/O at 10-Gb/s data rate. A power efficient continuous-time linear equalizer (CTLE) and 1-tap lookahead decision feedback equalizer (DFE) are designed and implemented in a 45 nm CMOS technology. The DFE employs a sampler including a current injection section that makes no use of summer as a separated block. In addition, cascode structure increases kick-back noise immunity and reduces power consumption by 11%. The PLL used in the proposed receiver drives 5 GHz clock frequency with 12.62 pspk-pk jitter. The core receiver circuit consumes 14.3 mW at a 1.1 V supply voltage when processing 10 Gb/s data rate with 15 dB of channel loss at Nyquist frequency."}
{"_id": "a2d2b7bb3496c51acecdf3e3574278dfbf17174b", "title": "Transaction aggregation as a strategy for credit card fraud detection", "text": "The problem of preprocessing transaction data for supervised fraud classification is considered. It is impractical to present an entire series of transactions to a fraud detection system, partly because of the very high dimensionality of such data but also because of the heterogeneity of the transactions. Hence, a framework for transaction aggregation is considered and its effectiveness is evaluated against transaction-level detection, using a variety of classification methods and a realistic cost-based performance measure. These methods are applied in two case studies using real data. Transaction aggregation is found to be advantageous in many but not all circumstances. Also, the length of the aggregation period has a large impact upon performance. Aggregation seems particularly effective when a random forest is used for classification. Moreover, random forests were found to perform better than other classification methods, including SVMs, logistic regression and KNN. Aggregation also has the advantage of not requiring precisely labeled data and may be more robust to the effects of population drift."}
{"_id": "9abc37a1e5709dbceb13366ec6b0697c9c790573", "title": "Rapid MPPT for Uniformly and Partial Shaded PV System by Using JayaDE Algorithm in Highly Fluctuating Atmospheric Conditions", "text": "In photovoltaic (PV) array, the output power and the power\u2013voltage (P\u2013V ) characteristic of PV array are totally dependent on the temperature and solar insolation. Therefore, if these atmospheric parameters fluctuate rapidly, then the maximum power point (MPP) of the P\u2013V curve of PV array also fluctuates very rapidly. This rapid fluctuation of the MPP may be in accordance with the uniform shading of the PV panel or may be in accordance to the partially shaded due to the clouds, tall building, trees, and raindrops. However, in both cases, the MPP tracking (MPPT) is not only a nonlinear problem, this becomes a highly nonlinear problem, which solution is time bounded. Because the highly fluctuating atmospheric conditions change the P\u2013V characteristic after every small time duration. This paper introduces a hybrid of \u201cJaya\u201d and \u201cdifferential evolution (DE)\u201d (JayaDE) technique for MPPT in the highly fluctuating atmospheric conditions. This JayaDE algorithm is tested on MATLAB simulator and is verified on a developed hardware of the solar PV system, which consists of a single peak and many multiple peaks in the voltage\u2013power curve. Moreover, the tracking ability is compared with the recent state of the art methods. The satisfactory steady-state and dynamic performances of this new hybrid technique under variable irradiance and temperature levels show the superiority over the state-of-the-art control methods."}
{"_id": "1b2f2bb90fb08d0e02eabb152120dbf1d6e5837e", "title": "Multilingual Word Embeddings using Multigraphs", "text": "We present a family of neural-network\u2013 inspired models for computing continuous word representations, specifically designed to exploit both monolingual and multilingual text. This framework allows us to perform unsupervised training of embeddings that exhibit higher accuracy on syntactic and semantic compositionality, as well as multilingual semantic similarity, compared to previous models trained in an unsupervised fashion. We also show that such multilingual embeddings, optimized for semantic similarity, can improve the performance of statistical machine translation with respect to how it handles words not present in the parallel data."}
{"_id": "4543b9fd2570709870b3c99f9a567bd2ac436744", "title": "Representing Text for Joint Embedding of Text and Knowledge Bases", "text": "Models that learn to represent textual and knowledge base relations in the same continuous latent space are able to perform joint inferences among the two kinds of relations and obtain high accuracy on knowledge base completion (Riedel et al., 2013). In this paper we propose a model that captures the compositional structure of textual relations, and jointly optimizes entity, knowledge base, and textual relation representations. The proposed model significantly improves performance over a model that does not share parameters among textual relations with common sub-structure."}
{"_id": "6dd3b79f34a8b40320d1d745b9abf2d70e1d4db8", "title": "Joint Representation Learning of Text and Knowledge for Knowledge Graph Completion", "text": "Joint representation learning of text and knowledge within a unified semantic space enables us to perform knowledge graph completion more accurately. In this work, we propose a novel framework to embed words, entities and relations into the same continuous vector space. In this model, both entity and relation embeddings are learned by taking knowledge graph and plain text into consideration. In experiments, we evaluate the joint learning model on three tasks including entity prediction, relation prediction and relation classification from text. The experiment results show that our model can significantly and consistently improve the performance on the three tasks as compared with other baselines."}
{"_id": "84af7277cf4b7bf96d5bb0774b92b247327ed71d", "title": "Using Clustering for Categorization of Support Tickets", "text": "Support tickets from customers contain much hidden information. Unsupervised machine learning methods are able to discover this hidden information. In this paper we propose the categorization of support tickets using clustering methods in combination with topic models. Furthermore label generation techniques are used to generate meaningful names for these categories. The results are compared with related"}
{"_id": "f22938a02b5c39d5e038c7dd13dd07f58a666c78", "title": "Kissing Nevus of the Penis", "text": "Kissing or divided nevi are similar in shape to congenital melanocytic nevi located on an adjacent part of the body that are separated during embryogenesis. Kissing nevi of the upper and lower eyelids have been reported infrequently since the first report in 1908. Kissing nevi of the penis are very rare, with only 12 cases being reported until now, and this is the first case report in the Korean dermatological literature. A previously healthy 27-year-old man presented with asymptomatic black colored patches, which were detected 10 years ago, on the glans penis and the prepuce with growth in size. We report here a case of kissing nevus of the penis, which showed an obvious mirror-image symmetry relative to the coronal sulcus."}
{"_id": "396945dabf79f4a8bf36ca408a137d6e961306e7", "title": "\u69cb\u5efa\u4e00\u500b\u4e2d\u6587\u570b\u5c0f\u6578\u5b78\u6587\u5b57\u554f\u984c\u8a9e\u6599\u5eab(Building a Corpus for Developing the Chinese Elementary School Math Word Problem Solver)[In Chinese]", "text": ""}
{"_id": "b50f3076848aff91139dd19229618c3d21b346c2", "title": "Course signals at Purdue: using learning analytics to increase student success", "text": "In this paper, an early intervention solution for collegiate faculty called Course Signals is discussed. Course Signals was developed to allow instructors the opportunity to employ the power of learner analytics to provide real-time feedback to a student. Course Signals relies not only on grades to predict students' performance, but also demographic characteristics, past academic history, and students' effort as measured by interaction with Blackboard Vista, Purdue's learning management system. The outcome is delivered to the students via a personalized email from the faculty member to each student, as well as a specific color on a stoplight -- traffic signal -- to indicate how each student is doing. The system itself is explained in detail, along with retention and performance outcomes realized since its implementation. In addition, faculty and student perceptions will be shared."}
{"_id": "5f3d1bccb098619129130543f33c104f2723f6c4", "title": "Propranolol treatment for hemangioma of infancy: risks and recommendations.", "text": "Hemangioma of infancy is a condition that may be associated with significant morbidity. While evidence most supports the use of corticosteroids, there is no well-defined or Federal Drug Administration (FDA)-approved systemic therapy for hemangioma of infancy. All currently used treatments have significant risks. Dramatic improvement of complicated hemangioma of infancy to propranolol was recently reported, but details for initiating therapy, monitoring, and potential risks were not included. We present two infants treated with propranolol, who suffered complications and propose a treatment protocol to minimize potential adverse events."}
{"_id": "6010c2d8eb5b6c5da3463d0744203060bdcc07a7", "title": "A Survey of the ATP-Binding Cassette (ABC) Gene Superfamily in the Salmon Louse (Lepeophtheirus salmonis)", "text": "Salmon lice, Lepeophtheirus salmonis (Kr\u00f8yer, 1837), are fish ectoparasites causing significant economic damage in the mariculture of Atlantic salmon, Salmo salar Linnaeus, 1758. The control of L. salmonis at fish farms relies to a large extent on treatment with anti-parasitic drugs. A problem related to chemical control is the potential for development of resistance, which in L. salmonis is documented for a number of drug classes including organophosphates, pyrethroids and avermectins. The ATP-binding cassette (ABC) gene superfamily is found in all biota and includes a range of drug efflux transporters that can confer drug resistance to cancers and pathogens. Furthermore, some ABC transporters are recognised to be involved in conferral of insecticide resistance. While a number of studies have investigated ABC transporters in L. salmonis, no systematic analysis of the ABC gene family exists for this species. This study presents a genome-wide survey of ABC genes in L. salmonis for which, ABC superfamily members were identified through homology searching of the L. salmonis genome. In addition, ABC proteins were identified in a reference transcriptome of the parasite generated by high-throughput RNA sequencing (RNA-seq) of a multi-stage RNA library. Searches of both genome and transcriptome allowed the identification of a total of 33 genes / transcripts coding for ABC proteins, of which 3 were represented only in the genome and 4 only in the transcriptome. Eighteen sequences were assigned to ABC subfamilies known to contain drug transporters, i.e. subfamilies B (4 sequences), C (11) and G (2). The results suggest that the ABC gene family of L. salmonis possesses fewer members than recorded for other arthropods. The present survey of the L. salmonis ABC gene superfamily will provide the basis for further research into potential roles of ABC transporters in the toxicity of salmon delousing agents and as potential mechanisms of drug resistance."}
{"_id": "76c82cf591b797f51769557c569d3866c84cb2a4", "title": "Efficient Transformerless MOSFET Inverter for a Grid-Tied Photovoltaic System", "text": "The unipolar sinusoidal pulse width modulation full-bridge transformerless photovoltaic (PV) inverter can achieve high efficiency by using latest superjunction metal-oxide-semiconductor field-effect transistor (MOSFET) together with silicon carbide (SiC) diodes. However, the MOSFETs aSiCre limited to use in transformerless PV inverter due to the low reverse-recovery characteristics of the body diode. In this paper, a family of new transformerless PV inverter topology for a single-phase grid-tied operation is proposed using superjunction MOSFETs and SiC diodes as no reverse-recovery issues are required for the main power switches for unity power operation. The added clamping branch clamps the freewheeling voltage at the half of dc input voltage during the freewheeling period. As a result, the common-mode voltage kept constant during the whole grid period that reduces the leakage current significantly. In addition, dead time is not necessary for main power switches at both the high-frequency commutation and the grid zero crossing instant, results low-current distortion at output. Finally, a 1-kW prototype is built and tested to verify the theoretical analysis. The experimental results show 98.5% maximum efficiency and 98.32% European efficiency. Furthermore, to show the effectiveness, the proposed topology is compared with the other transformerless topologies."}
{"_id": "4a3235a542f92929378a11f2df2e942fe5674c0e", "title": "Network-Based Intrusion Detection Using Unsupervised Adaptive Resonance Theory ( ART )", "text": "This paper introduces the Unsupervised Neural Net based Intrusion Detector (UNNID) system, which detects network-based intrusions and attacks using unsupervised neural networks. The system has facilities for training, testing, and tunning of unsupervised nets to be used in intrusion detection. Using the system, we tested two types of unsupervised Adaptive Resonance Theory (ART) nets (ART-1 and ART-2). Based on the results, such nets can efficiently classify network traffic into normal and intrusive. The system uses a hybrid of misuse and anomaly detection approaches, so is capable of detecting known attack types as well as new attack types as anomalies."}
{"_id": "f951411d7c7d6a2c6d3bb8314108eba5ca2406a7", "title": "AN ALGORITHM TO GROUP DEFECTS ON PRINTED CIRCUIT BOARD FOR AUTOMATED VISUAL INSPECTION", "text": "Due to disadvantages in manual inspection, an automated visual inspection system is needed to eliminate subjective aspects and provides fast and quantitative assessment of printed circuit board (PCB). Up to the present, there has been a lot of work and research concentrated on PCB defect detection. PCB defects detection is necessary for verification of the characteristics of PCB to make sure it is in conformity with the design specifications. However, besides the need to detect the defects, it is also essential to classify these defects so that the source of these defects can be identified. Unfortunately, this area has been neglected and not been given enough attention. Hence, this study proposes an algorithm to group the defects found on bare PCB. Using a synthetically generated PCB image, the algorithm is able to group 14 commonly known PCB defects into five groups. The proposed algorithm includes several image processing operations such as image subtraction, image adding, logical XOR and NOT, and flood fill operator."}
{"_id": "58a2f18792b5f19499b4d45219078dcb4a1d6e8b", "title": "Provably Secure Active IC Metering Techniques for Piracy Avoidance and Digital Rights Management", "text": "In the horizontal semiconductor business model where the designer's intellectual property (IP) is transparent to foundry and to other entities on the production chain, integrated circuits (ICs) overbuilding and IP piracy are prevalent problems. Active metering is a suite of methods enabling the designers to control their chips postfabrication. We provide a comprehensive description of the first known active hardware metering method and introduce new formal security proofs. The active metering method uniquely and automatically locks each IC upon manufacturing, such that the IP rights owner is the only entity that can provide the specific key to unlock or otherwise control each chip. The IC control mechanism exploits: 1) the functional description of the design, and 2) unique and unclonable IC identifiers. The locks are embedded by modifying the structure of the hardware computation model, in the form of a finite state machine (FSM). We show that for each IC hiding the locking states within the modified FSM structure can be constructed as an instance of a general output multipoint function that can be provably efficiently obfuscated. The hidden locks within the FSM may also be used for remote enabling and disabling of chips by the IP rights owner during the IC's normal operation. An automatic synthesis method for low overhead hardware implementation is devised. Attacks and countermeasures are addressed. Experimental evaluations demonstrate the low overhead of the method. Proof-of-concept implementation on the H.264 MPEG decoder automatically synthesized on a Xilinix Virtex-5 field-programmable gate array (FPGA) further shows the practicality, security, and the low overhead of the new method."}
{"_id": "f496cfb580e824e18181ff49f524ce8794c33d30", "title": "Situation assessment algorithm for a collision prevention assistant", "text": "In this paper, we present the concept of a collision prevention assistant, a system that we believe can significantly contribute to road safety. We propose a new situation assessment algorithm which is tailored to the action of braking and that further accounts for the nonlinearities that arise when vehicles cut out or come to a standstill. The effect of sensor uncertainty on the performance of the proposed algorithm is modelled using a Markov chain and analyzed by means of a Monte Carlo simulation on a typical traffic situation."}
{"_id": "ec6012530267dc9d3e59e4bc87d5a65def3e1873", "title": "Latent dirichlet allocation based diversified retrieval for e-commerce search", "text": "Diversified retrieval is a very important problem on many e-commerce sites, e.g. eBay and Amazon. Using IR approaches without optimizing for diversity results in a clutter of redundant items that belong to the same products. Most existing product taxonomies are often too noisy, with overlapping structures and non-uniform granularity, to be used directly in diversified retrieval. To address this problem, we propose a Latent Dirichlet Allocation (LDA) based diversified retrieval approach that selects diverse items based on the hidden user intents. Our approach first discovers the hidden user intents of a query using the LDA model, and then ranks the user intents by making trade-offs between their relevance and information novelty. Finally, it chooses the most representative item for each user intent to display. To evaluate the diversity in the search results on e-commerce sites, we propose a new metric, average satisfaction, measuring user satisfaction with the search results. Through our empirical study on eBay, we show that the LDA model discovers meaningful user intents and the LDA-based approach provides significantly higher user satisfaction than the eBay production ranker and three other diversified retrieval approaches."}
{"_id": "1e5482c85ee48b92a75d0590ed9b0d5f0e6a0b17", "title": "Service Personnel, Technology, and Their Interaction in Influencing Customer Satisfaction", "text": "Managing both the technologies and the personnel needed for providing high-quality, multichannel customer support creates a complex and persistent operational challenge. Adding to this difficulty, it is still unclear how service personnel and these new communication technologies interact to influence the customer\u2019s perceptions of the service being provided. Motivated by both practical importance and inconsistent findings in the academic literature, this exploratory research examines the interaction of media richness, represented by three different technology contexts (telephone, e-mail, and online chat), with six customer service representative (CSR) characteristics and their influences on customer satisfaction. Using a large-sample customer survey data set, the article develops a multigroup structural equation model to analyze these interactions. Results suggest that CSR characteristics influence customer service satisfaction similarly across all three technology-mediated contexts. Of the characteristics studied, service representatives contribute to customer satisfaction more when they exhibit the characteristics of thoroughness, knowledgeableness, and preparedness, regardless of the richness of the medium used. Surprisingly, while three other CSR characteristics studied (courtesy, professionalism, and attentiveness) are traditionally believed to be important in face-to-face encounters, they had no significant impact on customer satisfaction in the technology-mediated contexts studied. Implications for both practitioners and researchers are drawn from the results and future research opportunities are"}
{"_id": "69c703189432e28e3f075ceec17abeb8090e5dd0", "title": "String Alignment with Substitution, Insertion, Deletion, Squashing and Expansion Operations", "text": "Let X and Y be any two strings of finite length. The problem of transforming X to Y using the edit operations of substitution, deletion, and insertion has been extensively studied in the literature. The problem can be solved in quadratic time if the edit operations are extended to include the operation of transposition of adjacent characters, and is NP-complete if the characters can be edited repeatedly. In this paper we consider the problem of transforming X to Y when the set of edit operations is extended to include the squashing and expansion operations. Whereas in the squashing operation two (or more) contiguous characters of X can be transformed into a single character of Y, in the expansion operation a single character in X may be expanded into two or more contiguous characters of Y. These operations are typically found in the recognition of cursive script. A quadratic time solution to the problem has been presented. This solution is optimal for the infinite-alphabet case. The strategy to compute the sequence of edit operations is also presented."}
{"_id": "842a50bf43e8b59efbab69c8b6d6e309fe8568ba", "title": "Predicting Outcome in Subarachnoid Hemorrhage (SAH) Utilizing the Full Outline of UnResponsiveness (FOUR) Score.", "text": "BACKGROUND\nExisting scoring systems for aneurysmal subarachnoid hemorrhage (SAH) patients fail to accurately predict patient outcome. Our goal was to prospectively study the Full Outline of UnResponsiveness (FOUR) score as applied to newly admitted aneurysmal SAH patients.\n\n\nMETHODS\nAll adult patients presenting to Health Sciences Center in Winnipeg from January 2013 to July 2015 (2.5\u00a0year period) with aneurysmal SAH were prospectively enrolled in this study. All patients were followed up to 6\u00a0months. FOUR score was calculated upon admission, with repeat calculation at 7 and 14\u00a0days. The primary outcomes were: mortality, as well as dichotomized 1- and 6-month Glasgow Outcome Scale (GOS) and modified Rankin Scale (mRS) values.\n\n\nRESULTS\nSixty-four patients were included, with a mean age of 54.2\u00a0years (range 26-85\u00a0years). The mean FOUR score upon admission pre- and post-external ventricular drain (EVD) was 10.3 (range 0-16) and 11.1 (range 3-16), respectively. There was a statistically significant association between pre-EVD FOUR score (total, eye, respiratory and motor sub-scores) with mortality, 1-month GOS, and 6-month GOS/mRS (p\u00a0<\u00a00.05 in all). The day 7 total, eye, respiratory, and motor FOUR scores were associated with mortality, 1-month GOS/mRS, and 6-month GOS/mRS (p\u00a0<\u00a00.05 in all). The day 14 total, eye, respiratory, and motor FOUR scores were associated with 6-month GOS (p\u00a0<\u00a00.05 in all). The day 7 cumulative FOUR score was associated with the development of clinical vasospasm (p\u00a0<\u00a00.05).\n\n\nCONCLUSIONS\nThe FOUR score at admission and day 7 post-SAH is associated with mortality, 1-month GOS/mRS, and 6-month GOS/mRS. The FOUR score at day 14 post-SAH is associated with 6-month GOS. The brainstem sub-score was not associated with 1- or 6-month primary outcomes."}
{"_id": "84edb628ee6fa0b5e0ded11f8505fc66d04a35dd", "title": "VERY HIGH RESOLUTION SAR TOMOGRAPHY VIA COMPRESSIVE SENSING", "text": "By using multi-pass SAR acquisitions, SAR tomography (TomoSAR) extends the synthetic aperture principle into the elevation direction for 3-D imaging. Since the orbits of modern space-borne SAR systems, like TerraSAR-X, are tightly controlled, the elevation resolution (depending on the elevation aperture size) is at least an order of magnitude lower than in range and azimuth. Hence, superresolution algorithms are desired. The high anisotropic 3D resolution element renders the signals sparse in elevation. This property suggests using compressive sensing (CS) methods. The paper presents the theory of 4D (i.e. space-time) CS TomoSAR and compares it with classical tomographic methods. Super-resolution properties and point localization accuracies are demonstrated using simulations and real data. A CS reconstruction of a building complex from TerraSAR-X spotlight data is presented. In addition, the model based time warp method for differential tomographic non-linear motion monitoring is proposed and validated by reconstructing seasonal motion (caused by thermal expansion) of a building complex."}
{"_id": "d830cf7fad6ab7a809dd0589bb552c43d839ff14", "title": "Method of invariant manifold for chemical kinetics", "text": "In this paper, we review the construction of low-dimensional manifolds of reduced description for equations of chemical kinetics from the standpoint of the method of invariant manifold (MIM). The MIM is based on a formulation of the condition of invariance as an equation, and its solution by Newton iterations. A review of existing alternative methods is extended by a thermodynamically consistent version of the method of intrinsic low-dimensional manifolds. A grid-based version of the MIM is developed, and model extensions of low-dimensional dynamics are described. Generalizations to open systems are suggested. The set of methods covered makes it possible to e9ectively reduce description in chemical kinetics. ? 2003 Elsevier Ltd. All rights reserved."}
{"_id": "44e67d10a33a11930c40813e034679ea8d7feb6c", "title": "Deploying SDP for machine learning", "text": "We discuss the use in machine learning of a general type of convex optimisation problem known as semi-definite programming (SDP) [1]. We intend to argue that SDP\u2019s arise quite naturally in a variety of situations, accounting for their omnipresence in modern machine learning approaches, and we provide examples in support."}
{"_id": "226242629f3d21b9e86afe76b1849048148351de", "title": "Remote Timing Attacks Are Practical", "text": "Timing attacks are usually used to attack weak computing devices such as smartcards. We show that timing attacks apply to general software systems. Specifically, we devise a timing attack against OpenSSL. Our experiments show that we can extract private keys from an OpenSSL-based web server running on a machine in the local network. Our results demonstrate that timing attacks against network servers are practical and therefore security systems should defend against them."}
{"_id": "487e6a85d55c3adbffcd3ce8032b150e90a25bf0", "title": "Introduction to differential power analysis", "text": "The power consumed by a circuit varies according to the activity of its individual transistors and other components. As a result, measurements of the power used by actual computers or microchips contain information about the operations being performed and the data being processed. Cryptographic designs have traditionally assumed that secrets are manipulated in environments that expose no information beyond the specified inputs and outputs. This paper examines how information leaked through power consumption and other side channels can be analyzed to extract secret keys from a wide range of devices. The attacks are practical, non-invasive, and highly effective\u2014even against complex and noisy systems where cryptographic computations account for only a small fraction of the overall power consumption. We also introduce approaches for preventing DPA attacks and for building cryptosystems that remain secure even when implemented in hardware that leaks."}
{"_id": "b10cb04fd45f968d29ce0bdc17c4d29d12e05b67", "title": "The EM Side-Channel(s)", "text": "We present results of a systematic investigation of leakage of compromising information via electromagnetic (EM) emanations from CMOS devices. These emanations are shown to consist of a multiplicity of signals, each leaking somewhat different information about the underlying computation. We show that not only can EM emanations be used to attack cryptographic devices where the power side\u2013channel is unavailable, they can even be used to break power analysis countermeasures."}
{"_id": "0dada24bcee09e1f3d556ecf81f628ecaf4659d1", "title": "Cache Games -- Bringing Access-Based Cache Attacks on AES to Practice", "text": "Side channel attacks on cryptographic systems exploit information gained from physical implementations rather than theoretical weaknesses of a scheme. In recent years, major achievements were made for the class of so called access-driven cache attacks. Such attacks exploit the leakage of the memory locations accessed by a victim process. In this paper we consider the AES block cipher and present an attack which is capable of recovering the full secret key in almost real time for AES-128, requiring only a very limited number of observed encryptions. Unlike previous attacks, we do not require any information about the plaintext (such as its distribution, etc.). Moreover, for the first time, we also show how the plaintext can be recovered without having access to the cipher text at all. It is the first working attack on AES implementations using compressed tables. There, no efficient techniques to identify the beginning of AES rounds is known, which is the fundamental assumption underlying previous attacks. We have a fully working implementation of our attack which is able to recover AES keys after observing as little as 100 encryptions. It works against the OpenS SL 0.9.8n implementation of AES on Linux systems. Our spy process does not require any special privileges beyond those of a standard Linux user. A contribution of probably independent interest is a denial of service attack on the task scheduler of current Linux systems (CFS), which allows one to observe (on average) every single memory access of a victim process."}
{"_id": "d058dd4c309feb9c1bfb784afa791ff3a0420237", "title": "A Study of Intercept Adjusted Markov Switching Vector Autoregressive Model in Economic Time Series Data", "text": "Commodity price always related to the movement of stock market index. However real economic time series data always exhibit nonlinear properties such as structural change, jumps or break in the series through time. Therefore, linear time series models are no longer suitable and Markov Switching Vector Autoregressive models which able to study the asymmetry and regime switching behavior of the data are used in the study. Intercept adjusted Markov Switching Vector Autoregressive (MSI-VAR) model is discuss and applied in the study to capture the smooth transition of the stock index changes from recession state to growth state. Results found that the dramatically changes from one state to another state are continuous smooth transition in both regimes. In addition, the 1-step prediction probability for the two regime Markov Switching model which act as the filtered probability to the actual probability of the variables is converged to the actual probability when undergo an intercept adjusted after a shift. This prove that MSI-VAR model is suitable to use in examine the changes of the economic model and able to provide significance, valid and reliable results. While oil price and gold price also proved that as a factor in affecting the stock exchange."}
{"_id": "c644a1c742e8d4e092e700aa2f247dbd11cd5e81", "title": "Power Converter EMI Analysis Including IGBT Nonlinear Switching Transient Model", "text": "It is well known that very high dv/dt and di/dt during the switching instant is the major high-frequency electromagnetic interference (EMI) source. This paper proposes an improved and simplified EMI-modeling method considering the insulated gate bipolar transistor switching-behavior model. The device turn-on and turn-off dynamics are investigated by dividing the nonlinear transition by several stages. The real device switching voltage and current are approximated by piecewise linear lines and expressed using multiple dv/dt and di/dt superposition. The derived EMI spectra suggest that the high-frequency noise is modeled with an acceptable accuracy. The proposed methodology is verified by experimental results using a dc-dc buck converter"}
{"_id": "8a0d229cfea897c43047290f3c2e30e229eb7343", "title": "Large scale predictive process mining and analytics of university degree course data", "text": "For students, in particular freshmen, the degree pathway from semester to semester is not that transparent, although students have a reasonable idea what courses are expected to be taken each semester. An often-pondered question by students is: \"what can I expect in the next semester?\" More precisely, given the commitment and engagement I presented in this particular course and the respective performance I achieved, can I expect a similar outcome in the next semester in the particular course I selected? Are the demands and expectations in this course much higher so that I need to adjust my commitment and engagement and overall workload if I expect a similar outcome? Is it better to drop a course to manage expectations rather than to (predictably) fail, and perhaps have to leave the degree altogether? Degree and course advisors and student support units find it challenging to provide evidence based advise to students. This paper presents research into educational process mining and student data analytics in a whole university scale approach with the aim of providing insight into the degree pathway questions raised above. The beta-version of our course level degree pathway tool has been used to shed light for university staff and students alike into our university's 1,300 degrees and associated 6 million course enrolments over the past 20 years."}
{"_id": "4d997d58ada764c91c2c230b72b74bc4c72dd83f", "title": "Bethe-ADMM for Tree Decomposition based Parallel MAP Inference", "text": "We consider the problem of maximum a posteriori (MAP) inference in discrete graphical models. We present a parallel MAP inference algorithm called Bethe-ADMM based on two ideas: tree-decomposition of the graph and the alternating direction method of multipliers (ADMM). However, unlike the standard ADMM, we use an inexact ADMM augmented with a Bethe-divergence based proximal function, which makes each subproblem in ADMM easy to solve in parallel using the sum-product algorithm. We rigorously prove global convergence of Bethe-ADMM. The proposed algorithm is extensively evaluated on both synthetic and real datasets to illustrate its effectiveness. Further, the parallel Bethe-ADMM is shown to scale almost linearly with increasing number of cores."}
{"_id": "9a46b85cd26a8d3b9b1c551e3cbd39a5c269ccb2", "title": "Using a Programmable Toy at Preschool Age: Why and How?", "text": "Robotic toys bring new dimension to role-play activities in kindergarten. Some preschool curricula clearly identify reasons for their inclusion. However, preschool teacher needs to revise usual teaching methods in order to use them. Offering a programmable toy or robotic-related activity doesn't mean immediate success in work with children. We document our research with concrete programmable device in a preschool classroom. Details of robotic-related sessions can help reader to design the quality game for preschool-age based on using a programmable toy."}
{"_id": "34d6ef27c8d335fb5f5e62d365817b02de516fd8", "title": "Design Space Exploration of Network Processor Architectures", "text": "We describe an approach to explore the design space for architectures of packet processing devices on the system level. Our method is specific to the application domain of packet processors and is based on (1) models for packet processing tasks, a specification of the workload generated by traffic streams, and a description of the feasible space of architectures including computation and communication resources, (2) a measure to characterize the performance of network processors under different usage scenarios, (3) a new method to estimate end-to-end packet delays and queuing memory, taking task scheduling policies and bus arbitration schemes into account, and (4) a evolutionary algorithm for multi-objective design space exploration. Our method is analytical and based on a high level of abstraction, where the goal is to quickly identify interesting architectures, which may be subjected to a more detailed evaluation, e.g. using simulation. The feasibility of our approach is shown by a detailed case study, where the final output is three candidate architectures, representing different cost versus performance tradeoffs."}
{"_id": "1f1de030c6b150fbeafe9e06e4389b277e5f3b6a", "title": "Membrane-distillation desalination : status and potential", "text": "This paper presents an assessment of membrane distillation (MD) based on the available state of the art and on our preliminary analysis. The process has many desirable properties such as low energy consumption, ability to use low temperature beat, compactness, and perceivably more immunity to fouling than other membrane processes. Within the tested range, the operating parameters of conventional MD configurations have the following effects:(1) the permeate fluxes can significantly be improved by increasing the hot feed temperature (increasing the temperature from 50 to 70\u00b0C increases the flux by more than three-fold), and by reducing the vapor/air gap (reducing the vapor air gap thickness from 5 to 1 mm increase the flux 2.3-fold); (2) the mass flow rate of the feed solution has a smaller effect: increasing it three-fold increases the flux by about 1.3-fold; (3) the concentration of the solute has slight effect: increasing the concentration by more than five-fold decreases the flux by just 1.I 5-fold; (4) the cold side conditions have a lower effect (about half) on the flux than the hot side; (5) the coolant mass flow rate has a negligible effect; (6) the coolant temperature has a lower effect than the mass flow rate of the hot solution. Fouling effects, membranes used, energy consumption, system applications and configurations, and very approximate cost estimates are presented. The permeate fluxes obtained by the different researchers seem to disagree by an order of magnitude, and better experimental work is needed."}
{"_id": "30804753232e527f89919f5719f9d18918a15480", "title": "AutoModeling: Integrated Approach for Automated Model Generation by Ensemble Selection of Feature Subset and Classifier", "text": "Feature subset selection and identification of appropriate classification method plays an important role to optimize the predictive performance of supervised machine learning system. Current literature makes isolated attempts to optimize the feature selection and classifier identification. However, feature set has an intrinsic relationship with classification technique and together they form a \u2018model\u2019 for classification task. In this paper, we propose AutoModeling that finds optimal learning model and jointly optimize the feature and hypothesis space to maximize performance measure objective function. It is an automated framework of selecting the ensemble model {selected feature subset, selected classifier} from a given superset of features and classifiers learned from given training dataset in a computational efficient manner. We introduce novel relax-greedy search with our proposed patience function as a wrapper feature selection that maximizes the predictive performance and eliminates the classical nesting effect. We perform extensive experimentations on different types of publicly available datasets and AutoModeling demonstrates superior performance over relevant state-of-the-art methods, expert-driven manual methods and deep neural networks."}
{"_id": "fa7737e295648d6fc5a1b2ad31a98a2341ffeb91", "title": "The questionable contribution of medical measures to the decline of mortality in the United States in the twentieth century.", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use."}
{"_id": "7e084820c195b65e45e9138415f6cac7762f18dc", "title": "A new approach for robot motion planning using rapidly-exploring Randomized Trees", "text": "In the last few years, car-like robots became increasingly important. Thus, motion planning algorithms for this kind of problem are needed more than ever. Unfortunately, this problem is computational difficult and so probabilistic approaches like Probabilistic Roadmaps or Rapidly-exploring Randomized Trees are often used in this context. This paper introduces a new concept for robot motion planning especially for car-like robots based on Rapidly-exploring Randomized Trees. In contrast to the conventional method, the presented approach uses a pre-computed auxiliary path to improve the distribution of random states. The main contribution of this approach is the significantly increased quality of the computed path. A proof-of-concept implementation evaluates the quality and performance of the proposed concept."}
{"_id": "1a380e0f2d3e760f0fcf618cd60fcea397ed5349", "title": "A Robust Bayesian Truth Serum for Small Populations", "text": "Peer prediction mechanisms allow the truthful elicitation of private signals (e.g., experiences, or opinions) in regard to a true world state when this ground truth is unobservable. The original peer prediction method is incentive compatible for any number of agents n \u2265 2, but relies on a common prior, shared by all agents and the mechanism. The Bayesian Truth Serum (BTS) relaxes this assumption. While BTS still assumes that agents share a common prior, this prior need not be known to the mechanism. However, BTS is only incentive compatible for a large enough number of agents, and the particular number of agents required is uncertain because it depends on this private prior. In this paper, we present a robust BTS for the elicitation of binary information which is incentive compatible for every n \u2265 3, taking advantage of a particularity of the quadratic scoring rule. The robust BTS is the first peer prediction mechanism to provide strict incentive compatibility for every n \u2265 3 without relying on knowledge of the common prior. Moreover, and in contrast to the original BTS, our mechanism is numerically robust and ex post individually rational. Introduction Web services that are built around user-generated content are ubiquitous. Examples include reputation systems, where users leave feedback about the quality of products or services, and crowdsourcing platforms, where users (workers) are paid small rewards to do human computation tasks, such as annotating an image. Whereas statistical estimation techniques (Raykar et al. 2010) can be used to resolve noisy inputs, for example in order to determine the image tags most likely to be correct, they are appropriate only when user inputs are informative in the first place. But what if providing accurate information is costly for users, or if users otherwise have an external incentive for submitting false inputs? The peer prediction method (Miller, Resnick, and Zeckhauser 2005) addresses the quality control problem by providing payments (in cash, points or otherwise) that align an agent\u2019s own interest with providing inputs that are predictive of the inputs that will be provided by other agents. Formally, the peer prediction method provides strict incentives for providing truthful inputs (e.g., in regard to a user\u2019s information about the quality of a product, or a user\u2019s view on Copyright c \u00a9 2012, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. the correct label for a training example) for a system of two or more agents, and when there is a common prior amongst agents and, critically, known to the mechanism. The Bayesian Truth Serum (BTS) by Prelec (2004) still assumes that agents share a common prior, but does not require this to be known by the mechanism. In addition to an information report from an agent, BTS asks each agent for a prediction report, that reflects the agent\u2019s belief about the distribution of information reports in the population. An agent\u2019s payment depends on both reports, with an information component that rewards reports that are \u201csurprisingly common,\u201d i.e., more common than collectively predicted, and a prediction component that rewards accurate predictions of the reports made by others. 
A significant drawback of BTS is that it only aligns incentives for a large enough number of agents, where this number depends on the prior and is thus unknown to the mechanism. In addition, BTS may leave a participant with a negative payment, and is not numerically robust for all inputs. In this paper, we present the robust Bayesian Truth Serum (RBTS) mechanism, which, to the best of our knowledge, is the first peer prediction mechanism that does not rely on knowledge of the common prior to provide strict incentive compatibility for every number of agents n \u2265 3. RBTS is also ex post individually rational (so that no agent makes a negative payment in any outcome) and numerically robust, being well defined for all possible agent reports. Moreover, the mechanism seems conceptually simpler than BTS, and the incentive analysis is more straightforward. The main limitation of RBTS relative to earlier mechanisms, is that it applies only to the elicitation of binary information; e.g., good or bad experiences, or true or false classification labels.1 Extending RBTS to incorporate more than two signals is the most important direction for future research. RBTS takes the same reports as BTS, and an agent\u2019s payment continues to consist of one component that depends on an agent\u2019s information report and a second component that Many interesting applications involve binary information reports. This is supported by the fact that Prelec\u2019s own experimental papers have adopted the binary signal case (Prelec and Seung 2006; John, Loewenstein, and Prelec 2011). Indeed, as the number of possible information reports increases, so does the difficulty imposed on users in providing the prediction report, which must include estimates for the additional possible information reports. depends on an agent\u2019s prediction report. The main innovation is to induce a \u201cshadow\u201d posterior belief report for an agent i from her information report and the prediction report of another agent j, adjusting this prediction report in the direction suggested by agent i\u2019s information report. We couple this with a particularity of the quadratic scoring rule, by which an agent prefers a shadow posterior belief that is as close as possible to her true posterior. In order to determine the agent\u2019s payment, we then apply both the shadow posterior belief and the agent\u2019s prediction report to the quadratic scoring rule, adopting the information report of a third agent k as the event to be predicted."}
{"_id": "81ad6906a9a78f4db4dbaad641ab0fc952a84532", "title": "Detecting Predatory Behaviour in Game Chats", "text": "While games are a popular social media for children, there is a real risk that these children are exposed to potential sexual assault. A number of studies have already addressed this issue, however, the data used in previous research did not properly represent the real chats found in multiplayer online games. To address this issue, we obtained real chat data from MovieStarPlanet, a massively multiplayer online game for children. The research described in this paper aimed to detect predatory behaviours in the chats using machine learning methods. In order to achieve a high accuracy on this task, extensive preprocessing was necessary. We describe three different strategies for data selection and preprocessing, and extensively compare the performance of different learning algorithms on the different datasets and features."}
{"_id": "08847df8ea5b22c6a2d6d75352ef6270f53611de", "title": "Using k-Poselets for Detecting People and Localizing Their Keypoints", "text": "A k-poselet is a deformable part model (DPM) with k parts, where each of the parts is a poselet, aligned to a specific configuration of keypoints based on ground-truth annotations. A separate template is used to learn the appearance of each part. The parts are allowed to move with respect to each other with a deformation cost that is learned at training time. This model is richer than both the traditional version of poselets and DPMs. It enables a unified approach to person detection and keypoint prediction which, barring contemporaneous approaches based on CNN features, achieves state-of-the-art keypoint prediction while maintaining competitive detection performance."}
{"_id": "2e0aba5f061d5ac016e6a88072411571256c4a55", "title": "Regularization for Unsupervised Deep Neural Nets", "text": "Unsupervised neural networks, such as restricted Boltzmann machines (RBMs) and deep belief networks (DBNs), are powerful tools for feature selection and pattern recognition tasks. We demonstrate that overfitting occurs in such models just as in deep feedforward neural networks, and discuss possible regularization methods to reduce overfitting. We also propose a \u201cpartial\u201d approach to improve the efficiency of Dropout/DropConnect in this scenario, and discuss the theoretical justification of these methods from model convergence and likelihood bounds. Finally, we compare the performance of these methods based on their likelihood and classification error rates for various pattern recognition data sets."}
{"_id": "0f7f9f4aafede27d2575ee0d576ed77977f94e8d", "title": "IEEE 802 . 11 ad : Directional 60 GHz Communication for Multi-Gbps", "text": "With the ratification of the IEEE 802.11ad amendment to the 802.11 standard in December 2012, a major step has been taken to bring consumer wireless communication to the millimeter wave (mm-Wave) band. However, multi-Gbps throughput and small interference footprint come at the price of adverse signal propagation characteristics and require a fundamental rethinking of Wi-Fi communication principles. This paper describes the design assumptions taken into consideration for the IEEE 802.11ad standard and the novel techniques defined to overcome the challenges of mm-Wave communication. In particular we study the transition from omni-directional to highly directional communication and its impact on the design of"}
{"_id": "b7c2f779951041ca918ae580177b6ee1fd0a476c", "title": "Scholarly communication and bibliometrics", "text": "Why devote an ARIST chapter to scholarly communication and bibliometrics, and why now? Bibliometrics already is a frequently covered ARIST topic, with chapters such as that by White and McCain (1989) on bibliometrics generally, White and McCain (1997) on visualization of literatures, Wilson and Hood (2001) on informetric laws, and Tabah (2001) on literature dynamics. Similarly, scholarly communication has been addressed in other ARIST chapters such as Bishop and Star (1996) on social informatics and digital libraries, Schamber (1994) on relevance and information behavior, and many earlier chapters on information needs and uses. More than a decade ago, the first author addressed the intersection of scholarly communication and bibliometrics with a journal special issue and an edited book (Borgman, 1990; Borgman & Paisley, 1989), and she recently examined interim developments (Borgman, 2000a, 2000c). This review covers the decade (1990\u20132000) since the comprehensive 1990 volume, citing earlier works only when necessary to explain the foundation for recent developments. Given the amount of attention these topics have received, what is new and exciting enough to warrant a full chapter in 2001? What is new is that electronic scholarly communication is reaching critical mass, and CHAPTER 1"}
{"_id": "4c5cc16b8f890faaa23be30f752fe2a8a935f87f", "title": "Improvements to BM25 and Language Models Examined", "text": "Recent work on search engine ranking functions report improvements on BM25 and Language Models with Dirichlet Smoothing. In this investigation 9 recent ranking functions (BM25, BM25+, BM25T, BM25-adpt, BM25L, TF1\u00b0\u0394\u00b0p\u00d7ID, LM-DS, LM-PYP, and LM-PYP-TFIDF) are compared by training on the INEX 2009 Wikipedia collection and testing on INEX 2010 and 9 TREC collections. We find that once trained (using particle swarm optimization) there is very little difference in performance between these functions, that relevance feedback is effective, that stemming is effective, and that it remains unclear which function is best over-all."}
{"_id": "1be498d4bbc30c3bfd0029114c784bc2114d67c0", "title": "Age and Gender Estimation of Unfiltered Faces", "text": "This paper concerns the estimation of facial attributes-namely, age and gender-from images of faces acquired in challenging, in the wild conditions. This problem has received far less attention than the related problem of face recognition, and in particular, has not enjoyed the same dramatic improvement in capabilities demonstrated by contemporary face recognition systems. Here, we address this problem by making the following contributions. First, in answer to one of the key problems of age estimation research-absence of data-we offer a unique data set of face images, labeled for age and gender, acquired by smart-phones and other mobile devices, and uploaded without manual filtering to online image repositories. We show the images in our collection to be more challenging than those offered by other face-photo benchmarks. Second, we describe the dropout-support vector machine approach used by our system for face attribute estimation, in order to avoid over-fitting. This method, inspired by the dropout learning techniques now popular with deep belief networks, is applied here for training support vector machines, to the best of our knowledge, for the first time. Finally, we present a robust face alignment technique, which explicitly considers the uncertainties of facial feature detectors. We report extensive tests analyzing both the difficulty levels of contemporary benchmarks as well as the capabilities of our own system. These show our method to outperform state-of-the-art by a wide margin."}
{"_id": "d4964461e8b91534e210b33990aa92d65a687577", "title": "Performance analysis of downlink NOMA networks over Rayleigh fading channels", "text": "This paper analyzes the performance of downlink non-orthogonal multiple-access (NOMA) networks over independent but not necessarily identically distributed Rayleigh fading channels for arbitrary number of network users. Specifically, closed-form expressions for the average SNR, average achievable rate, and outage probability are derived, where it has been shown that network users achieve diversity orders equivalent to their ordered channel gains. Numerical evaluations are presented to validate the derived closed-form expressions for the different performance metrics, which are found to be in agreement with the network simulation results."}
{"_id": "f0af0029293dc8f242894f113baf15d68228ec4d", "title": "Attentional Factorization Machines: Learning the Weight of Feature Interactions via Attention Networks", "text": "Factorization Machines (FMs) are a supervised learning approach that enhances the linear regression model by incorporating the second-order feature interactions. Despite effectiveness, FM can be hindered by its modelling of all feature interactions with the same weight, as not all feature interactions are equally useful and predictive. For example, the interactions with useless features may even introduce noises and adversely degrade the performance. In this work, we improve FM by discriminating the importance of different feature interactions. We propose a novel model named Attentional Factorization Machine (AFM), which learns the importance of each feature interaction from data via a neural attention network. Extensive experiments on two real-world datasets demonstrate the effectiveness of AFM. Empirically, it is shown on regression task AFM betters FM with a 8.6% relative improvement, and consistently outperforms the state-of-the-art deep learning methods Wide&Deep [Cheng et al., 2016] and DeepCross [Shan et al., 2016] with a much simpler structure and fewer model parameters. Our implementation of AFM is publicly available at: https://github. com/hexiangnan/attentional factorization machine"}
{"_id": "490af69c14ae58a345feffc689cbfe71b76659de", "title": "Review on Privacy Preserving Deep Computation Model on Cloud for Big Data Feature Learning", "text": "Big Data Analytics and Deep Learning are two high-center of information science. Big Data has become important because of companies i.e. both public and private have been gathering huge measures of domain-specific data, which can contain helpful data about issues, for example, national intelligence, cyber security, fraud detection, marketing, and medical informatics. Deep Learning algorithms extract high-level, complex reflections as information representations through a progressive learning process. A advantage of Deep Learning is the study and learning of huge size of unsupervised information, making it a significant instrument for Big Data Analytics where crude information is to a great extent unlabeled and un-sorted. The present survey gives an idea of the previous work done by several researchers on the Big Data Analytics and Deep Learning applications."}
{"_id": "4634b63d66b7cc54421553da23cd5f3737618c86", "title": "Increasing ATM Efficiency withAssistant Based Speech Recognition", "text": "Initiatives to integrate Automatic Speech Recognition into Air Traffic Management (ATM) exists at least since the late 90s. Some success to replace pseudo pilots have been reported, but its integration into controller assistant tools is missing. German Aerospace Center (DLR) and Saarland University developed Assistant Based Speech Recognition (ABSR) enabling command recognition rates better than 95%. However, good recognition rates are no convincing argument for decision makers. Therefore, we conducted an ABSR validation study with eight air traffic controllers to quantify the benefits with respect to workload and efficiency. The study validates that ABSR does not just reduce controllers\u2019 workload, which would already be a lot, but this paper presents that ABSR significantly increases ATM efficiency. Fuel reductions of 60 liters (16 gallons) per flight and a throughput increase by two arrivals per hour are possible. Keywords\u2014AcListant\u00ae, Automatic Speech Recognition (ASR), Assistant Based Speech Recognition (ABSR), ATM Efficiency, Electronic Flight Strips, Aircraft Label"}
{"_id": "10a9abb4c78f0be5cc85847f248d3e8277b3c810", "title": "The CoNLL 2007 Shared Task on Dependency Parsing", "text": "The Conference on Computational Natural Language Learning features a shared task, in which participants train and test their learning systems on the same data sets. In 2007, as in 2006, the shared task has been devoted to dependency parsing, this year with both a multilingual track and a domain adaptation track. In this paper, we define the tasks of the different tracks and describe how the data sets were created from existing treebanks for ten languages. In addition, we characterize the different approaches of the participating systems, report the test results, and provide a first analysis of these results."}
{"_id": "ac4658f420e1d613d89291ce4b9a12845b0c38cb", "title": "Dynamic adaptive streaming over HTTP -: standards and design principles", "text": "In this paper, we provide some insight and background into the Dynamic Adaptive Streaming over HTTP (DASH) specifications as available from 3GPP and in draft version also from MPEG. Specifically, the 3GPP version provides a normative description of a Media Presentation, the formats of a Segment, and the delivery protocol. In addition, it adds an informative description on how a DASH Client may use the provided information to establish a streaming service for the user. The solution supports different service types (e.g., On-Demand, Live, Time-Shift Viewing), different features (e.g., adaptive bitrate switching, multiple language support, ad insertion, trick modes, DRM) and different deployment options. Design principles and examples are provided."}
{"_id": "03c42879f25ca66a9ae906672a6e3649bb07ca03", "title": "Dynamic Adaptive Streaming over HTTP \u2013 Design Principles and Standards", "text": "In this paper, we provide some insight and background into the Dynamic Adaptive Streaming over HTTP (DASH) specifications as available from 3GPP and in draft version also from MPEG. Specifically, the 3GPP version provides a normative description of a Media Presentation, the formats of a Segment, and the delivery protocol. In addition, it adds an informative description on how a DASH Client may use the provided information to establish a streaming service for the user. The solution supports different service types (e.g., On-Demand, Live, Time-Shift Viewing), different features (e.g., adaptive bitrate switching, multiple language support, ad insertion, trick modes, DRM) and different deployment options. Design principles and some forward-looking considerations are provided."}
{"_id": "522f13224a0012b1b40bf5f834af2069c9bf463e", "title": "Dummynet: a simple approach to the evaluation of network protocols", "text": "Network protocols are usually tested in operational networks or in simulated environments. With the former approach it is not easy to set and control the various operational parameters such as bandwidth, delays, queue sizes. Simulators are easier to control, but they are often only an approximate model of the desired setting, especially for what regards the various traffic generators (both producers and consumers) and their interaction with the protocol itself.In this paper we show how a simple, yet flexible and accurate network simulator - dummynet - can be built with minimal modifications to an existing protocol stack, allowing experiments to be run on a standalone system. dummynet works by intercepting communications of the protocol layer under test and simulating the effects of finite queues, bandwidth limitations and communication delays. It runs in a fully operational system, hence allowing the use of real traffic generators and protocol implementations, while solving the problem of simulating unusual environments. With our tool, doing experiments with network protocols is as simple as running the desired set of applications on a workstation.A FreeBSD implementation of dummynet, targeted to TCP, is available from the author. This implementation is highly portable and compatible with other BSD-derived systems, and takes less than 300 lines of kernel code."}
{"_id": "14626b05a5ec7ec2addc512f0dfa8db60d817c1b", "title": "Interpolatron: Interpolation or Extrapolation Schemes to Accelerate Optimization for Deep Neural Networks", "text": "In this paper we explore acceleration techniques for large scale nonconvex optimization problems with special focuses on deep neural networks. The extrapolation scheme is a classical approach for accelerating stochastic gradient descent for convex optimization, but it does not work well for nonconvex optimization typically. Alternatively, we propose an interpolation scheme to accelerate nonconvex optimization and call the method Interpolatron. We explain motivation behind Interpolatron and conduct a thorough empirical analysis. Empirical results on DNNs of great depths (e.g., 98-layer ResNet and 200-layer ResNet) on CIFAR-10 and ImageNet show that Interpolatron can converge much faster than the stateof-the-art methods such as the SGD with momentum and Adam. Furthermore, Anderson\u2019s acceleration, in which mixing coefficients are computed by least-squares estimation, can also be used to improve the performance. Both Interpolatron and Anderson\u2019s acceleration are easy to implement and tune. We also show that Interpolatron has linear convergence rate under certain regularity assumptions."}
{"_id": "7b890df91c53b7c1a489dd5a1f30b608f5cb3546", "title": "How Does He Saw Me ? A Recommendation Engine for Picking Heroes in Dota 2", "text": "In this paper, we present a hero recommendation engine for the popular computer game Dota 2. We detail previous efforts at hero recommendation, building on these to provide a more complete exploration of machine learning algorithms applied to this problem. In addition to discussing the details behind the machine learning algorithms we use, we also provide insight into our method of data collection and feature selection. In doing so, the outcome of our efforts is two-fold. First, we provide the first public survey of machine learning algorithms applied to Dota 2\u2013in the process gaining further insight into the algorithms we explore. Second, we provide a tool that we believe will be useful for both the casual and competitive sides of Dota 2\u2019s 6 million unique player base."}
{"_id": "08480bdf82aed2f891e6844792ee190930228457", "title": "Simultaneous penile-vaginal intercourse orgasm is associated with satisfaction (sexual, life, partnership, and mental health).", "text": "INTRODUCTION\nPrevious multivariate research found that satisfaction was associated positively with frequency of specifically penile-vaginal intercourse (PVI; as opposed to other sexual activities) as well as with vaginal orgasm. The contribution to satisfaction of simultaneous orgasm produced by PVI merited direct examination in a large representative sample.\n\n\nAIMS\nTo examine the associations of aspects of satisfaction (sexual, life, own mental health, partner relationship) with consistency of simultaneous orgasm produced by PVI (as well as with PVI frequency and vaginal orgasm consistency).\n\n\nMETHODS\nA representative sample of Czechs (N = 1,570) aged 35-65 years completed a survey on aspects of satisfaction, PVI frequency, vaginal orgasm consistency, and consistency of simultaneous orgasm produced by PVI (the latter being a specially timed version of vaginal orgasm for women).\n\n\nMAIN OUTCOME MEASURES\nAnalysis of variance of satisfaction components (LiSat scale items) from age and the sexual behaviors.\n\n\nRESULTS\nFor both sexes, all aspects of satisfaction were associated with simultaneous PVI orgasm consistency and with PVI frequency (except female life satisfaction). All aspects of satisfaction were also associated with vaginal orgasm consistency. Multivariate analyses indicated that PVI frequency and simultaneous orgasm consistency make independent contributions to the aspects of satisfaction for both sexes.\n\n\nCONCLUSIONS\nFor both sexes, PVI frequency and simultaneous orgasm produced by PVI (as well as vaginal orgasm for women) are associated with greater life, sexual, partnership, and mental health satisfaction. Greater support for these specific aspects of sexual activity is warranted."}
{"_id": "da77314a63359c513cacb821ff0038d3a3c36112", "title": "Note-Taking and Secondary Students with Learning Disabilities : Challenges and Solutions", "text": "As more secondary students with learning disabilities (LD) enroll in advanced content-area classes and are expected to pass state exams, they are faced with the challenge of mastering difficult concepts and abstract vocabulary while learning content. Once in these classes, students must learn from lectures that move at a quick pace, record accurate and complete notes, and then demonstrate their mastery of the content on tests. This article provides an overview of the challenges faced by students with LD in content-area classes and discusses the problems that students have learning from lectures and recording notes. Further, the article discusses theory and research related to note-taking and presents note-taking interventions that teachers can use to help students improve their note-taking skills, and ultimately, improve their achievement in these classes."}
{"_id": "9c85b0c0ec4fcb49aabff4b5ee83a71bfa855b1d", "title": "Finding and Fixing Vehicle NVH Problems with Transfer Path Analysis", "text": "This article discusses the use of experimental transfer path analysis (TPA) to find optimized solutions to NVH problems remaining late in vehicle development stages. After a short review of established TPA methods, four practical case histories are discussed to illustrate how TPA, FE models and practical experiments can supplement each other efficiently for finding optimum and attribute-balanced solutions to complex NVH issues late in the development process. Experimental transfer path analysis (TPA) is a fairly well established technique, 1,2 for estimating and ranking individual low-frequency noise or vibration contributions via the different structural transmission paths from point-coupled power-train or wheel suspensions to the vehicle body. TPA is also used to analyze the transmission paths into vibration-isolated truck or tractor cabs etc. TPA can also be used at higher frequencies (above 150-200 Hz) in road vehicles, although it may be reasonable to introduce a somewhat different formulation based on the response statistics of multimodal vibro-acoustic systems with strong modal overlap. 3 When NVH problems still remain close to start of production (SOP), experimental TPA is often a favored technique to investigate further possibilities to fine-tune the rubber components of the engine or wheel suspension with respect to NVH. The aim is to further improve NVH with minimal negative impact on other vehicle attributes, such as ride comfort, handling , drivability, durability, etc. The only design parameters that can directly be changed in a \" what if? \" study based purely on experimental TPA, are the dynamic properties of rubber elements connecting the source and the receiving structure. Also, any reduction of transfer path contributions to noise or vibration in that case will be a result of reducing some of the dynamic stiffness' for the connecting elements. To take any other design changes into account, additional measurements are normally necessary. Each degree of freedom (DOF) acting at interface points between a vibration source system and a receiving, passive vibro-acoustic system is a transfer path in TPA. TPA can also be performed analytically, using FE models or specialized system analysis software. 4 The experimental TPA method involves: 1) An indirect measurement procedure for estimating operating force components acting at the coupled DOFs. 2) The direct or reciprocal measurement of all transfer frequency response functions (FRFs) between response in points of interest (e.g. at the drivers ear) and points where these forces act. The FRFs are measured with the receiving subsystem disconnected at all \u2026"}
{"_id": "4d07d4c8019ebc327a7bb611fba73b8e844928d0", "title": "The evolutionary impact of invasive species.", "text": "Since the Age of Exploration began, there has been a drastic breaching of biogeographic barriers that previously had isolated the continental biotas for millions of years. We explore the nature of these recent biotic exchanges and their consequences on evolutionary processes. The direct evidence of evolutionary consequences of the biotic rearrangements is of variable quality, but the results of trajectories are becoming clear as the number of studies increases. There are examples of invasive species altering the evolutionary pathway of native species by competitive exclusion, niche displacement, hybridization, introgression, predation, and ultimately extinction. Invaders themselves evolve in response to their interactions with natives, as well as in response to the new abiotic environment. Flexibility in behavior, and mutualistic interactions, can aid in the success of invaders in their new environment."}
{"_id": "888ca705c1dd1c2f809a127fe97cc237ea08c8eb", "title": "International Accounting Standards and Accounting Quality", "text": "We compare characteristics of accounting amounts for firms that apply International Accounting Standards (IAS) to a matched sample of firms that do not to investigate whether applying IAS is associated with higher accounting quality and lower equity cost of capital. We find that firms applying IAS evidence less earnings management, more timely loss recognition, and more value relevance of accounting amounts than do those applying domestic GAAP. Firms applying IAS have higher variance of the change in net income, a higher ratio of the variances of the change in net income and change in cash flows, a significantly less negative correlation between accruals and cash flows, and a lower frequency of small positive net income. They have a significantly higher frequency of large negative net income and generally higher value relevance of accounting amounts. Differences between firms applying IAS and those applying domestic GAAP in the period before IAS firms adopt IAS do not explain the differences in accounting quality. Firms applying IAS generally exhibit higher accounting quality than when they previously applied domestic GAAP. The increase in accounting quality for IAS firms is generally greater than that for firms applying domestic GAAP throughout the sample period. We also find weak evidence suggesting that application of IAS is associated with a lower equity cost of capital. Overall, our results suggest improvement in accounting quality associated with applying IAS."}
{"_id": "e47f01b4c3da9a4684eb05865eacbcee2362c736", "title": "As Cool as a Cucumber: Towards a Corpus of Contemporary Similes in Serbian", "text": "Similes are natural language expressions used to compare unlikely things, where the comparison is not taken literally. They are often used in everyday communication and are an important part of cultural heritage. Having an up-to-date corpus of similes is challenging, as they are constantly coined and/or adapted to the contemporary times. In this paper we present a methodology for semi-automated collection of similes from the world wide web using text mining techniques. We expanded an existing corpus of traditional similes (containing 333 similes) by collecting 446 additional expressions. We, also, explore how crowdsourcing can be used to extract and curate new similes."}
{"_id": "bd682eb6a6a50983833456dc35d99f6bb39456dd", "title": "The Information Systems Identity Crisis: Focusing on High-Visibility and High-Impact Research", "text": "This paper presents an alternative view of the Information Systems identity crisis described recently by Benbasat and Zmud (2003). We agree with many of their observations, but we are concerned with their prescription for IS research. We critique their discussion of errors of inclusion and exclusion in IS research and highlight the potential misinterpretations that are possible from a literal Ron Weber was the accepting senior editor for this paper. Iris Vessey and Rudy Hirschheim served as reviewers. reading of their comments. Our conclusion is that following Benbasat and Zmud\u2019s nomological net will result in a micro focus for IS research. The results of such a focus are potentially dangerous for the field. They could result in the elimination of IS from many academic programs. We present an alternative set of heuristics that can be used to assess what lies within the domain of IS scholarship. We argue that the IS community has a powerful story to tell about the transformational impact of information technology. We believe that a significant portion of our research should be macro studies of the impact of IT. It is important for academic colleagues, deans, and managers to understand the transformational power of the technology. As IS researchers with deep knowledge of the underlying artifact, we are best positioned to do such research."}
{"_id": "09c4b743782338a0940e9c6a31838e4f3c325fda", "title": "Object-Oriented Bayesian Networks", "text": "Bayesian networks provide a modeling language and associated inference algorithm for stochastic domains. They have been successfully applied in a variety of medium-scale applications. However, when faced with a large complex domain, the task of modeling using Bayesian networks begins to resemble the task of programming using logical circuits. In this paper, we describe an object-oriented Bayesian network (OOBN) language, which allows complex domains to be described in terms of inter-related objects. We use a Bayesian network fragment to describe the probabilistic relations between the attributes of an object. These attributes can themselves be objects, providing a natural framework for encoding part-of hierarchies. Classes are used to provide a reusable probabilistic model which can be applied to multiple similar objects. Classes also support inheritance of model fragments from a class to a subclass, allowing the common aspects of related classes to be defined only once. Our language has clear declarative semantics: an OOBN can be interpreted as a stochastic functional program, so that it uniquely specifies a probabilistic model. We provide an inference algorithm for OOBNs, and show that much of the structural information encoded by an OOBN\u2014particularly the encapsulation of variables within an object and the reuse of model fragments in different contexts\u2014can also be used to speed up the inference process."}
{"_id": "99982a0d1d33c6a946c5bf66a0b74a8e8789c8e9", "title": "Entrapment of the sciatic nerve at the linea aspera: A case report and literature review", "text": "BACKGROUND\nNontraumatic, non-neoplastic sciatic nerve entrapment at the level of the thigh is extremely rare. In its course, in proximity of the linea aspera, the nerve is exposed to unexpected neuropathic syndromes associated with bone disorders.\n\n\nCASE DESCRIPTION\nA 67-year-old woman presented with a painful, neuropathic syndrome of the sciatic nerve, not resulting from any trauma and persisting for approximately 2 years. Imaging studies of the thigh showed a delimited zone of hyperostosis in the proximal third of the femoral diaphysis. The symptoms dramatically resolved after the patient underwent neurolysis of the tract of the nerve adjoining to the linea aspera. At the clinical checkup 2 years later, the patient remained free of pain.\n\n\nCONCLUSION\nThe diagnosis of sciatic nerve entrapment at the linea aspera may present considerable difficulties. The clinical history and physical examination sometimes motivate the exploration and neurolysis of the nerve at this site."}
{"_id": "4c5cc196232e15cd54c81993ab0aeb334fa8b585", "title": "Leveraging Peer Centrality in the Designof Socially-Informed Peer-to-Peer Systems", "text": "Social applications mine user social graphs to improve performance in search, provide recommendations, allow resource sharing and increase data privacy. When such applications are implemented on a peer-to-peer (P2P) architecture, the social graph is distributed on the P2P system: the traversal of the social graph translates into a socially-informed routing in the peer-to-peer layer. In this work we introduce the model of a projection graph that is the result of decentralizing a social graph onto a peer-to-peer network. We focus on three social network metrics: degree, node betweenness and edge betweenness centrality and analytically formulate the relation between metrics in the social graph and in the projection graph. We demonstrate with an application scenario on real social networks the usability of the projection graph properties in designing social search applications and unstructured P2P overlays that exhibit improved performance and reduced overhead over baseline scenarios."}
{"_id": "c9eb84b39351f840c4d1caa7924f27775f79a1fd", "title": "The Coherent Heart Heart \u2013 Brain Interactions , Psychophysiological Coherence , and the Emergence of System-Wide Order", "text": "This article presents theory and research on the scientific study of emotion that emphasizes the importance of coherence as an optimal psychophysiological state. A dynamic systems view of the interrelations between psychological, cognitive and emotional systems and neural communication networks in the human organism provides a foundation for the view presented. These communication networks are examined from an information processing perspective and reveal a fundamental order in heart-brain interactions and a harmonious synchronization of physiological systems associated with positive emotions. The concept of coherence is drawn on to understand optimal functioning which is naturally reflected in the heart\u2019s rhythmic patterns. Research is presented identifying various psychophysiological states linked to these patterns, with neurocardiological coherence emerging as having significant impacts on well being. These include psychophysiological as well as improved cognitive performance. From this, the central role of the heart is explored in terms of biochemical, biophysical and energetic interactions. Appendices provide further details and research on; psychophysiological functioning, reference previous research in this area, details on research linking coherence with optimal cognitive performance, heart brain synchronization and the energetic signature of the various psychophysiological modes."}
{"_id": "3274aa71b2bac7325e6abc84d814cc7da765b717", "title": "Preliminary, open-label, pilot study of add-on oral \u03949-tetrahydrocannabinol in chronic post-traumatic stress disorder.", "text": "BACKGROUND AND OBJECTIVES\nMany patients with post-traumatic stress disorder (PTSD) achieve but partial remission with current treatments. Patients with unremitted PTSD show high rates of substance abuse. Marijuana is often used as compassion add-on therapy for treatment-resistant PTSD. This open-label study evaluates the tolerance and safety of orally absorbable \u0394(9)-tetrahydrocannabinol (THC) for chronic PTSD.\n\n\nMETHODS\nTen outpatients with chronic PTSD, on stable medication, received 5\u00a0mg of \u0394(9)-THC twice a day as add-on treatment.\n\n\nRESULTS\nThere were mild adverse effects in three patients, none of which led to treatment discontinuation. The intervention caused a statistically significant improvement in global symptom severity, sleep quality, frequency of nightmares, and PTSD hyperarousal symptoms.\n\n\nCONCLUSIONS\nOrally absorbable \u0394(9)-THC was safe and well tolerated by patients with chronic PTSD."}
{"_id": "6f6dddd0b5767cdc9cd0c2d0d2805c08eb385292", "title": "Subjective Evaluation of Latency and Packet Loss in a Cloud-Based Game", "text": "On-demand multimedia services are more popular than ever and continue to grow. Consumers can now stream music, movies, television, and video games at the push of a button. Such services typically require a minimum connection speed to support streaming. However, transient network effects such as packet loss and delay variation can play a crucial role in determining the user quality of experience (QoE) in streaming multimedia systems. This paper will seek to establish the subjective impact of negative network effects on the user experience of a popular cloud-based on-demand video game service."}
{"_id": "9c79d6f2ce0e14a2e6d9d574cdc185fa204bfae7", "title": "Privacy preservation and public auditing for cloud data using ASS in Multi-cloud", "text": "Cloud computing is the new business paradigm introduced in the IT market in the current decade providing the customers with the option to pay for the amount of service utilized. The use of cloud computing has increased rapidly in many organizations. Ensuring security of data in a cloud is a major factor in the cloud computing environment, as users often store fragile information in cloud storage. Public auditing on shared data stored in the cloud is done by exploiting Aggregate Signature Scheme. With this mechanism, the identity of the signer of various blocks of shared data is kept private from public verifiers, who are able to verify efficiently the integrity of shared data without retrieving the entire file. However traceability is a major issue. A Multi-clouds Database Model (MCDB) which is based on Multi-clouds service providers instead of using single cloud service provider such as in Amazon cloud service is used to implement traceability. Also, dealing with single cloud providers is predicted to become less popular with customers due to risks of service availability failure and the possibility of malicious insiders in the single cloud. A movement towards multi-clouds is highly efficient and this system would also check for the data's which are attacked by malicious users, with short time and cost requirement."}
{"_id": "a902ae7016408a63c3a2539d012e07070c29f9d6", "title": "Event-Driven Semantic Concept Discovery by Exploiting Weakly Tagged Internet Images", "text": "Analysis and detection of complex events in videos require a semantic representation of the video content. Existing video semantic representation methods typically require users to pre-define an exhaustive concept lexicon and manually annotate the presence of the concepts in each video, which is infeasible for real-world video event detection problems. In this paper, we propose an automatic semantic concept discovery scheme by exploiting Internet images and their associated tags. Given a target event and its textual descriptions, we crawl a collection of images and their associated tags by performing text based image search using the noun and verb pairs extracted from the event textual descriptions. The system first identifies the candidate concepts for an event by measuring whether a tag is a meaningful word and visually detectable. Then a concept visual model is built for each candidate concept using a SVM classifier with probabilistic output. Finally, the concept models are applied to generate concept based video representations. We use the TRECVID Multimedia Event Detection (MED) 2013 as our video test set and crawl 400K Flickr images to automatically discover 2, 000 visual concepts. We show significant performance gains of the proposed concept discovery method over different video event detection tasks including supervised event modeling over concept space and semantic based zero-shot retrieval without training examples. Importantly, we show the proposed method of automatic concept discovery outperforms other well-known concept library construction approaches such as Classemes and ImageNet by a large margin (228%) in zero-shot event retrieval. Finally, subjective evaluation by humans also confirms clear superiority of the proposed method in discovering concepts for event representation."}
{"_id": "33fc7a58f2a4c924a5f3868eced1726ce961e559", "title": "Unsupervised Modeling of Twitter Conversations", "text": "We propose the first unsupervised approach to the problem of modeling dialogue acts in an open domain. Trained on a corpus of noisy Twitter conversations, our method discovers dialogue acts by clustering raw utterances. Because it accounts for the sequential behaviour of these acts, the learned model can provide insight into the shape of communication in a new medium. We address the challenge of evaluating the emergent model with a qualitative visualization and an intrinsic conversation ordering task. This work is inspired by a corpus of 1.3 million Twitter conversations, which will be made publicly available. This huge amount of data, available only because Twitter blurs the line between chatting and publishing, highlights the need to be able to adapt quickly to a new medium."}
{"_id": "52e011e52d1a9e2c9ea12f31f5a2d3c1b459017e", "title": "Neural Responding Machine for Short-Text Conversation", "text": "We propose Neural Responding Machine (NRM), a neural networ k-based response generator for Short-Text Conversation. NRM takes the general encoder-de coder framework: it formalizes the generation of response as a decoding process based on the lat ent representation of the input text, while both encoding and decoding are realized with recurren t n ural networks (RNN). The NRM is trained with a large amount of one-round conversation dat a collected from a microblogging service. Empirical study shows that NRM can generate gramma tically correct and content-wise appropriate responses to over 75% of the input text, outperf orming state-of-the-arts in the same setting, including retrieval-based and SMT-based models."}
{"_id": "4b0eaa20df186f40b88b70c2abeb360c8443bdf7", "title": "Development of an OCV prediction mechanism for lithium-ion battery system", "text": "This paper investigates the development of an open circuit voltage (OCV) prediction technique for lithium cells. The underlying principal of the technique described in this paper employs one simple equation to predict the equilibrated cell voltage after a small rest period. The technique was tested and verified by using results obtained from experiments conducted at the Centre for Automotive and Power System Engineering (CAPSE) battery laboratories at the University of South Wales. Lithium-ion battery system analysis was carried out with reference to the standard battery system models available in the literature."}
{"_id": "a04f625de804166567a73dcd67f5615ce0100641", "title": "'Just What the Doctor Ordered': A Revised UTAUT for EMR System Adoption and Use by Doctors", "text": "Electronic medical record (EMR) systems can deliver many benefits to healthcare organizations and the patients they serve. However, one of the biggest stumbling blocks in garnering these benefits is the limited adoption and use by doctors. We employ the unified theory of acceptance and use of technology (UTAUT) as the theoretical foundation and adapt the theory to the context of EMR system adoption and use by doctors. Specifically, we suggest that age will be the only significant moderator, and gender, voluntariness and experience will not play significant moderating roles. We tested our model in a longitudinal study over a 7-month period in a hospital implementing a new EMR system. We collected 3 waves of survey data from 141 doctors and used system logs to measure use. While the original UTAUT only predicted about 20% of the variance in intention, the modified UTAUT predicted 44%. Both models were comparable in their prediction of use. In addition to contributing to healthcare IT and UTAUT research, we hope this work will serve as a foundation for future work that integrates UTAUT with other theoretical perspectives."}
{"_id": "f57cd9b20629389b44d9e2e3ffad41ed2263297e", "title": "Optical mouse sensor for detecting height variation and translation of a surface", "text": "A new sensing system based on an optical mouse sensor is developed. The optical mouse sensor is traditionally used as a 2D motion sensor, and in this study it is creatively used for detecting the height variation and translation of the object surfaces at the same time. Based on the experimental characteristics of the sensing system, it can be seen that the sensing system is useful in many fields such as robot odometry. The sensing system is based on a low-cost photoelectric sensor, which is a commercial LED-based optical mouse sensor, and thus it is easily available and generalized. Specially, the outputs of the sensor are intuitively displayed on the computer screen and recorded on the hard disk, which greatly improves the convenience of the system. A periodically-square-shaped rubber surface and a periodically-arc-shaped rubber surface are studied in the experiments. Experiments show that the shape information of the object surfaces and the translation direction of the object surfaces can be obtained at the same time. Besides, in the experiments, it is found that the sensitivity of the optical mouse sensor in different translation direction is different."}
{"_id": "fa4fb6acb96a3cac8d276da6ea922bfa9322df72", "title": "Human-level intelligence or animal-like abilities?", "text": "What just happened in artificial intelligence and how it is being misunderstood."}
{"_id": "662dbb3ebc0ab5086dc0683deeb9ba41deda93a0", "title": "BCS: A Binary Cuckoo Search algorithm for feature selection", "text": "Feature selection has been actively pursued in the last years, since to find the most discriminative set of features can enhance the recognition rates and also to make feature extraction faster. In this paper, the propose a new feature selection called Binary Cuckoo Search, which is based on the behavior of cuckoo birds. The experiments were carried out in the context of theft detection in power distribution systems in two datasets obtained from a Brazilian electrical power company, and have demonstrated the robustness of the proposed technique against with several others nature-inspired optimization techniques."}
{"_id": "0ab3bb4dd28894d0fb23f00e0a1e39e06546cf0b", "title": "Controllable Procedural Content Generation via Constrained Multi-Dimensional Markov Chain Sampling", "text": "Statistical models, such as Markov chains, have recently started to be studied for the purpose of Procedural Content Generation (PCG). A major problem with this approach is controlling the sampling process in order to obtain output satisfying some desired constraints. In this paper we present three approaches to constraining the content generated using multi-dimensional Markov chains: (1) a generate and test approach that simply resamples the content until the desired constraints are satisfied, (2) an approach that finds and resamples parts of the generated content that violate the constraints, and (3) an incremental method that checks for constraint violations during sampling. We test our approaches by generating maps for two classic video games, Super Mario Bros. and Kid Icarus."}
{"_id": "21361d285fc47098ccb513744072368ad24eba12", "title": "Prometheus: User-Controlled P2P Social Data Management for Socially-Aware Applications", "text": "Recent Internet applications, such as online social networks and user-generated content sharing, produce an unprecedented amount of social information, which is further augmented by location or collocation data collected from mobile phones. Unfortunately, this wealth of social information is fragmented across many different proprietary applications. Combined, it could provide a more accurate representation of the social world, and it could enable a whole new set of socially-aware applications. We introduce Prometheus, a peer-to-peer service that collects and manages social information from multiple sources and implements a set of social inference functions while enforcing user-defined access control policies. Prometheus is socially-aware: it allows users to select peers that manage their social information based on social trust and exploits naturallyformed social groups for improved performance. We tested our Prometheus prototype on PlanetLab and built a mobile social application to test the performance of its social inference functions under real-time constraints. We showed that the social-based mapping of users onto peers improves the service response time and high service availability is achieved with low overhead."}
{"_id": "1f4205e63d63f75d40993c5cac8d051888bcfc9d", "title": "Design pattern implementation in Java and aspectJ", "text": "AspectJ implementations of the GoF design patterns show modularity improvements in 17 of 23 cases. These improvements are manifested in terms of better code locality, reusability, composability, and (un)pluggability.The degree of improvement in implementation modularity varies, with the greatest improvement coming when the pattern solution structure involves crosscutting of some form, including one object playing multiple roles, many objects playing one role, or an object playing roles in multiple pattern instances."}
{"_id": "91480c9cacdd0d630698fb1e30e7e8582e17751e", "title": "A Semantic Framework for the Security Analysis of Ethereum smart contracts", "text": "Smart contracts are programs running on cryptocurrency (e.g., Ethereum) blockchains, whose popularity stem from the possibility to perform financial transactions, such as payments and auctions, in a distributed environment without need for any trusted third party. Given their financial nature, bugs or vulnerabilities in these programs may lead to catastrophic consequences, as witnessed by recent attacks. Unfortunately, programming smart contracts is a delicate task that requires strong expertise: Ethereum smart contracts are written in Solidity, a dedicated language resembling JavaScript, and shipped over the blockchain in the EVM bytecode format. In order to rigorously verify the security of smart contracts, it is of paramount importance to formalize their semantics as well as the security properties of interest, in particular at the level of the bytecode being executed. In this paper, we present the first complete small-step semantics of EVM bytecode, which we formalize in the F* proof assistant, obtaining executable code that we successfully validate against the official Ethereum test suite. Furthermore, we formally define for the first time a number of central security properties for smart contracts, such as call integrity, atomicity, and independence from miner controlled parameters. This formalization relies on a combination of hyperand safety properties. Along this work, we identified various mistakes and imprecisions in existing semantics and verification tools for Ethereum smart contracts, thereby demonstrating once more the importance of rigorous semantic foundations for the design of security verification techniques."}
{"_id": "5de47aff8b6db8e2bc1cdd4a30a357453f6f37a8", "title": "Axe UX: Exploring long-term user experience with iScale and AttrakDiff", "text": "Positive user experience (UX) is an important goal in product design. Positive long-term UX is believed to improve customer loyalty, therefore being vital for continuous commercial success. Most UX research investigates momentary or short-term UX although the relationship between the user and the product evolves over time. There is a need to develop methods for measuring long-term UX and evaluate their feasibility in different product contexts. In this explorative study, 18 customers reported their experiences during the first three months of use of a non-interactive design tool, an axe. UX was evaluated with retrospective iScale tool and monthly repeated AttrakDiff questionnaire. iScale demonstrated the long-term trend of the attractiveness of the product well and provided information about the causes of the change in the experience. The AttrakDiff questionnaire was a good indicator of attractiveness during a longer period of time and is also well suited to remote studies."}
{"_id": "30023acba3ac198a7d260228dc51fda8414b8860", "title": "kAFL: Hardware-Assisted Feedback Fuzzing for OS Kernels", "text": "Many kinds of memory safety vulnerabilities have been endangering software systems for decades. Amongst other approaches, fuzzing is a promising technique to unveil various software faults. Recently, feedback-guided fuzzing demonstrated its power, producing a steady stream of security-critical software bugs. Most fuzzing efforts\u2014especially feedback fuzzing\u2014are limited to user space components of an operating system (OS), although bugs in kernel components are more severe, because they allow an attacker to gain access to a system with full privileges. Unfortunately, kernel components are difficult to fuzz as feedback mechanisms (i.e., guided code coverage) cannot be easily applied. Additionally, non-determinism due to interrupts, kernel threads, statefulness, and similar mechanisms poses problems. Furthermore, if a process fuzzes its own kernel, a kernel crash highly impacts the performance of the fuzzer as the OS needs to reboot. In this paper, we approach the problem of coverageguided kernel fuzzing in an OS-independent and hardware-assisted way: We utilize a hypervisor and Intel\u2019s Processor Trace (PT) technology. This allows us to remain independent of the target OS as we just require a small user space component that interacts with the targeted OS. As a result, our approach introduces almost no performance overhead, even in cases where the OS crashes, and performs up to 17,000 executions per second on an off-the-shelf laptop. We developed a framework called kernel-AFL (kAFL) to assess the security of Linux, macOS, and Windows kernel components. Among many crashes, we uncovered several flaws in the ext4 driver for Linux, the HFS and APFS file system of macOS, and the NTFS driver of Windows."}
{"_id": "3a8f765ec8e94abe35b8210d0422e768787feff4", "title": "High-Efficiency Slim Adapter With Low-Profile Transformer Structure", "text": "This paper presents the implementation of a high-efficiency slim adapter using an resonant converter. A new structure of the slim-type transformer is proposed, which is composed of copper wire as the primary winding and printed-circuit-board winding on the outer layer as the secondary winding. The proposed structure is suitable for a slim and high-efficiency converter because it has advantages of easy utilization and wide conductive cross-sectional area. In addition, the voltage-doubler rectifier is applied to the secondary side due to its simple structure of secondary winding, and a filter is adopted to reduce the output filter size. The validity of this study is confirmed by the experimental results from an 85-W prototype."}
{"_id": "3294c60580355ce3891970fa13f7216e69a1fcdf", "title": "Thing broker: a twitter for things", "text": "In the Web of Things, standard web technologies and protocols are used to represent and communicate with physical and virtual things. One challenge toward this vision is integrating things with different characteristics, protocols, interfaces and constraints while maintaining the simplicity and flexibility required for a variety of applications. In this paper we present the Thing Broker, a core platform for Web of Things that provides RESTFul interfaces to things using a Twitter-based set of abstractions and communication model. We present the key abstractions, a reference implementation and explain how a typical WoT application can be created using Thing Broker. We finish with a preliminary evaluation and draw some lessons from our experiences."}
{"_id": "3f0130a2a7a9887b840e25cffc70729787b0d0e7", "title": "Studying depression using imaging and machine learning methods", "text": "Depression is a complex clinical entity that can pose challenges for clinicians regarding both accurate diagnosis and effective timely treatment. These challenges have prompted the development of multiple machine learning methods to help improve the management of this disease. These methods utilize anatomical and physiological data acquired from neuroimaging to create models that can identify depressed patients vs. non-depressed patients and predict treatment outcomes. This article (1) presents a background on depression, imaging, and machine learning methodologies; (2) reviews methodologies of past studies that have used imaging and machine learning to study depression; and (3) suggests directions for future depression-related studies."}
{"_id": "4466dcf725e5383b10a6c5172c860f397e006b48", "title": "Unified temporal and spatial calibration for multi-sensor systems", "text": "In order to increase accuracy and robustness in state estimation for robotics, a growing number of applications rely on data from multiple complementary sensors. For the best performance in sensor fusion, these different sensors must be spatially and temporally registered with respect to each other. To this end, a number of approaches have been developed to estimate these system parameters in a two stage process, first estimating the time offset and subsequently solving for the spatial transformation between sensors. In this work, we present on a novel framework for jointly estimating the temporal offset between measurements of different sensors and their spatial displacements with respect to each other. The approach is enabled by continuous-time batch estimation and extends previous work by seamlessly incorporating time offsets within the rigorous theoretical framework of maximum likelihood estimation. Experimental results for a camera to inertial measurement unit (IMU) calibration prove the ability of this framework to accurately estimate time offsets up to a fraction of the smallest measurement period."}
{"_id": "54e33095e128db65efb3562a9d79bd654231c876", "title": "An empirical analysis of the distribution of unit test smells and their impact on software maintenance", "text": "Unit testing represents a key activity in software development and maintenance. Test suites with high internal quality facilitate maintenance activities, such as code comprehension and regression testing. Several guidelines have been proposed to help developers write good test suites. Unfortunately, such rules are not always followed resulting in the presence of bad test code smells (or simply test smells). Test smells have been defined as poorly designed tests and their presence may negatively affect the maintainability of test suites and production code. Despite the many studies that address code smells in general, until now there has been no empirical evidence regarding test smells (i) distribution in software systems nor (ii) their impact on the maintainability of software systems. This paper fills this gap by presenting two empirical studies. The first study is an exploratory analysis of 18 software systems (two industrial and 16 open source) aimed at analyzing the distribution of test smells in source code. The second study, a controlled experiment involving twenty master students, is aimed at analyzing whether the presence of test smells affects the comprehension of source code during software maintenance. The results show that (i) test smells are widely spread throughout the software systems studied and (ii) most of the test smells have a strong negative impact on the comprehensibility of test suites and production code."}
{"_id": "aa799b093a2c8dc8b31a72a164723404e67216f9", "title": "Regenerative braking control method and optimal scheme for electric motorcycle", "text": "This paper describes an optimal regenerative braking control scheme for a permanent magnet brushless DC motor of an electric motorcycle to achieve dual goals of the electric brake and the maximal power harvest in two cases of the road, downhill and flat, without any additional changes on the hardware. Based on regenerative braking of brushless DC motor in half bridge modulation mode, the relationship between average regenerative current on the DC bus and PWM duty cycle under different speeds was studied intensively from the three aspects of theoretical deduction, simulation based on Matlab/Simulink and platform experiments, and then a simple but effective scheme considering the actual application was proposed tentatively to achieve the dual goals. Since the braking kinetic energy is harvested in the form of the electric energy stored up in battery, the scheme could improve the driving range, safeness, noise preventability and cost effective of electric motorcycles."}
{"_id": "398b45f6c593791b3c9fff452b092bcf4fa69870", "title": "Design and optimisation of a voice coil motor (VCM) with a rotary actuator for an ultrasound scanner. IEEE Transactions on Industrial Electronics", "text": "This paper proposes a new application for the rotary VCM. In developing a low cost ultrasound scanner for the developing world an oscillating transducer is required to sweep over the skin. The ultrasound scanner must operate from a USB power supply in remote locations. The application requires a 3.3N force on the coils of the motor to overcome the inertia of the skin. A proof of concept prototype motor with electronics has been designed, simulated and tested. The VCM optimisation is discussed in detail with the unique separation of the magnets being critical to reduce the axial bearing forces for this application."}
{"_id": "cfb5e8322091852f72eaa170ceae4427258f66a3", "title": "Design and Modeling of a Textile Pressure Sensor for Sitting Posture Classification", "text": "A textile pressure sensor has been designed for measuring pressure distribution on the human body. Electrodes built with conductive textiles are arranged on both sides of a compressible spacer, forming a variable capacitor. The use of textiles makes unobtrusive, comfortable, lightweight, and washable sensors possible. This simplifies the goal to integrate such sensors into clothing in the future, to be simple, and fast to mount just as getting dressed. Hysteresis induced by the spacer of the sensor has been modeled with the Preisach model to reduce the measurement error from 24% to 5% on average and the maximal error from above 50% to below 10%. Standard textiles that are not specially optimized for low hysteresis can be used for designing the sensor due to the modeling. The model may also be used for other pressure or even strain sensors to reduce their hysteresis. The modeling enhances the accuracy of the textile sensor to those of commercial, nontextile pressure sensing mats. Furthermore, we used the sensor system for the classification of sitting postures in a chair. The data of nine subjects have been classified with Naive Bayes classifier, achieving an average recognition rate of 82%. We show that the textile sensor performs similarly to a commercial, nontextile pressure sensing mat for this application."}
{"_id": "819e12c46c95add85a4bbfcffa8fe6ba5b1924a4", "title": "PID control system analysis, design, and technology", "text": "Designing and tuning a proportional-integral-derivative (PID) controller appears to be conceptually intuitive, but can be hard in practice, if multiple (and often conflicting) objectives such as short transient and high stability are to be achieved. Usually, initial designs obtained by all means need to be adjusted repeatedly through computer simulations until the closed-loop system performs or compromises as desired. This stimulates the development of \"intelligent\" tools that can assist engineers to achieve the best overall PID control for the entire operating envelope. This development has further led to the incorporation of some advanced tuning algorithms into PID hardware modules. Corresponding to these developments, this paper presents a modern overview of functionalities and tuning methods in patents, software packages and commercial hardware modules. It is seen that many PID variants have been developed in order to improve transient performance, but standardising and modularising PID control are desired, although challenging. The inclusion of system identification and \"intelligent\" techniques in software based PID systems helps automate the entire design and tuning process to a useful degree. This should also assist future development of \"plug-and-play\" PID controllers that are widely applicable and can be set up easily and operate optimally for enhanced productivity, improved quality and reduced maintenance requirements."}
{"_id": "bbf043b283d9ff2de298d79c4e415dcc908f8a28", "title": "Design of PI and PID controllers with transient performance specification", "text": "Proportional-integral-derivative (PID) controllers are widely used in industrial control systems because of the reduced number of parameters to be tuned. The most popular design technique is the Ziegler\u2013Nichols method, which relies solely on parameters obtained from the plant step response. However, besides being suitable only for systems with monotonic step response, the compensated systems whose controllers are tuned in accordance with the Ziegler\u2013Nichols method have generally a step response with a high-percent overshoot. In this paper, tuning methods for proportional-integral (PI) and PID controllers are proposed that, like the Ziegler\u2013Nichols method, need only parameters obtained from the plant step response. The methodology also encompasses the design of PID controllers for plants with underdamped step response and provides the means for a systematic adjustment of the controller gain in order to meet transient performance specifications. In addition, since all the development of the methodology relies solely on concepts introduced in a frequency-domain-based control course, the paper has also a didactic contribution."}
{"_id": "edfd600b6d8b5c012279e9dbc669285a858f36ad", "title": "A zigbee-based home automation system", "text": "In recent years, the home environment has seen a rapid introduction of network enabled digital technology. This technology offers new and exciting opportunities to increase the connectivity of devices within the home for the purpose of home automation. Moreover, with the rapid expansion of the Internet, there is the added potential for the remote control and monitoring of such network enabled devices. However, the adoption of home automation systems has been slow. This paper identifies the reasons for this slow adoption and evaluates the potential of ZigBee for addressing these problems through the design and implementation of a flexible home automation architecture. A ZigBee based home automation system and Wi-Fi network are integrated through a common home gateway. The home gateway provides network interoperability, a simple and flexible user interface, and remote access to the system. A dedicated virtual home is implemented to cater for the system's security and safety needs. To demonstrate the feasibility and effectiveness of the proposed system, four devices, a light switch, radiator valve, safety sensor and ZigBee remote control have been developed and evaluated with the home automation system."}
{"_id": "1983baef43d57c3511530b9c42b5db186beeff39", "title": "CHARADE: Remote Control of Objects Using Free-Hand Gestures", "text": "This paper presents an application that uses hand gesture input to control a computer while giving a presentation. In order to develop a prototype of this application, we have defined an interaction model, a notation for gestures, and a set of guidelines to design gestural command sets. This works aims to define interaction styles that work in computerized reality environments. In our application, gestures are used for interacting with the computer as well as for communicating with other people or operating other devices."}
{"_id": "1634e2183186174989f1e2f378952250e13245da", "title": "Dynamic Materialized Views", "text": "A conventional materialized view blindly materializes and maintains all rows of a view, even rows that are never accessed. We propose a more flexible materialization strategy aimed at reducing storage space and view maintenance costs. A dynamic materialized view selectively materializes only a subset of rows, for example, the most frequently accessed rows. One or more control tables are associated with the view and define which rows are currently materialized. The set of materialized rows can be changed dynamically, either manually or automatically by an internal cache manager using a feedback loop. Dynamic execution plans are generated to decide whether the view is applicable at run time. Experimental results in Microsoft SQL Server show that compared with conventional materialized views, dynamic materialized views greatly reduce storage requirements and maintenance costs while achieving better query performance with improved buffer pool efficiency."}
{"_id": "09da677bdbba113374d8fe4bb15ecfbdb4c8fe40", "title": "Dual Path Networks", "text": "In this work, we present a simple, highly efficient and modularized Dual Path Network (DPN) for image classification which presents a new topology of connection paths internally. By revealing the equivalence of the state-of-the-art Residual Network (ResNet) and Densely Convolutional Network (DenseNet) within the HORNN framework, we find that ResNet enables feature re-usage while DenseNet enables new features exploration which are both important for learning good representations. To enjoy the benefits from both path topologies, our proposed Dual Path Network shares common features while maintaining the flexibility to explore new features through dual path architectures. Extensive experiments on three benchmark datasets, ImagNet-1k, Places365 and PASCAL VOC, clearly demonstrate superior performance of the proposed DPN over state-of-the-arts. In particular, on the ImagNet-1k dataset, a shallow DPN surpasses the best ResNeXt-101(64\u00d7 4d) with 26% smaller model size, 25% less computational cost and 8% lower memory consumption, and a deeper DPN (DPN-131) further pushes the state-of-the-art single model performance with about 2 times faster training speed. Experiments on the Places365 large-scale scene dataset, PASCAL VOC detection dataset, and PASCAL VOC segmentation dataset also demonstrate its consistently better performance than DenseNet, ResNet and the latest ResNeXt model over various applications."}
{"_id": "53492cb14b33a26b10c91102daa2d5a2a3ed069d", "title": "Improving Online Multiple Object tracking with Deep Metric Learning", "text": "Tracking by detection is a common approach to solving the Multiple Object Tracking problem. In this paper we show how deep metric learning can be used to improve three aspects of tracking by detection. We train a convolutional neural network to learn an embedding function in a Siamese configuration on a large person re-identification dataset offline. It is then used to improve the online performance of tracking while retaining a high frame rate. We use this learned appearance metric to robustly build estimates of pedestrian\u2019s trajectories in the MOT16 dataset. In breaking with the tracking by detection model, we use our appearance metric to propose detections using the predicted state of a tracklet as a prior in the case where the detector fails. This method achieves competitive results in evaluation, especially among online, realtime approaches. We present an ablative study showing the impact of each of the three uses of our deep appearance metric."}
{"_id": "840ce7a9b77f6d04f3356d28e544d31413b24a25", "title": "Learning to Rank for Consumer Health Search: A Semantic Approach", "text": "For many internet users, searching for health advice online is the first step in seeking treatment. We present a Learning to Rank system that uses a novel set of syntactic and semantic features to improve consumer health search. Our approach was evaluated on the 2016 CLEF eHealth dataset, outperforming the best method by 26.6% in NDCG@10."}
{"_id": "a064ab02a989a0da0240767871cc9c8270c7eca3", "title": "Adaptive Load Balancing: A Study in Multi-Agent Learning", "text": "We study the process of multi-agent reinforcement learning in the context of load balancing in a distributed system, without use of either central coordination or explicit communication. We rst de ne a precise framework in which to study adaptive load balancing, important features of which are its stochastic nature and the purely local information available to individual agents. Given this framework, we show illuminating results on the interplay between basic adaptive behavior parameters and their e ect on system e ciency. We then investigate the properties of adaptive load balancing in heterogeneous populations, and address the issue of exploration vs. exploitation in that context. Finally, we show that naive use of communication may not improve, and might even harm system e ciency."}
{"_id": "e243f9da7783b330234b22954f2b76a3222e0b22", "title": "Local connectome phenotypes predict social, health, and cognitive factors", "text": "The unique architecture of the human connectome is defined initially by genetics and subsequently sculpted over time with experience. Thus, similarities in predisposition and experience that lead to similarities in social, biological, and cognitive attributes should also be reflected in the local architecture of white matter fascicles. Here we employ a method known as local connectome fingerprinting that uses diffusion MRI to measure the fiber-wise characteristics of macroscopic white matter pathways throughout the brain. This fingerprinting approach was applied to a large sample (N = 841) of subjects from the Human Connectome Project, revealing a reliable degree of between-subject correlation in the local connectome fingerprints, with a relatively complex, low-dimensional substructure. Using a cross-validated, high-dimensional regression analysis approach, we derived local connectome phenotype (LCP) maps that could reliably predict a subset of subject attributes measured, including demographic, health, and cognitive measures. These LCP maps were highly specific to the attribute being predicted but also sensitive to correlations between attributes. Collectively, these results indicate that the local architecture of white matter fascicles reflects a meaningful portion of the variability shared between subjects along several dimensions.Author SummaryThe local connectome is the pattern of fiber systems (i.e., number of fibers, orientation, and size) within a voxel, and it reflects the proximal characteristics of white matter fascicles distributed throughout the brain. Here we show how variability in the local connectome is correlated in a principled way across individuals. This intersubject correlation is reliable enough that unique phenotype maps can be learned to predict between-subject variability in a range of social, health, and cognitive attributes. This work shows, for the first time, how the local connectome has both the sensitivity and the specificity to be used as a phenotypic marker for subject-specific attributes."}
{"_id": "2173124089eb267b66786ae3a08dc2fbf72de1e3", "title": "Python for Microwave and RF Engineers [Application Notes]", "text": "Python is a powerful programming language for handling engineering and scientific computational tasks efficiently [1]-[5]. Used by companies such as Google and Intel and by organizations including NASA and Los Alamos National Laboratory, it offers an extremely wide selection of tools for tasks such as scientific computing, signal processing, Web site construction, database programming, and graphical user interface (GUI) design. The language is platform independent with most programs running on Linux, Microsoft Windows, or MAC OS virtually unchanged. Python has been distributed as a part of Open Source Initiative, and most versions are General Public License (GPL) compatible. In microwave and radio frequency (RF) engineering, it can be used for numerical programming, automated RF testing, automated monolithic microwave integrated-circuit (MMIC) layout generation, automated netlist generation and simulator sequencing, and other tasks. In this article, we are going to provide examples demonstrating the application areas of Python in microwave and RF engineering. We are also going to give a brief introduction to the language highlighting its salient features."}
{"_id": "21704423268b6d23260527e97b37cfd49a6eafa7", "title": "Create Your Own Internet of Things: A survey of IoT platforms.", "text": "We live in the age of After Google (AG), where information is just one click away and talking just one touch away. The near future of the AG age is the Internet of Things (IoT), where physical things connected over a network will take part in Internet activities to exchange information about themselves and their surroundings. In other words, the IoT is nothing but a computing concept in which everyday objects with embedded hardware/devices are connected to a network or are simply online."}
{"_id": "38f1606ce1bf15a5035c651597977e86e97ed2d2", "title": "Model Checking Interlocking Control Tables", "text": "A challenging problem for model checking is represented by railway interlocking systems. It is a well known fact that interlocking systems, due to their inherent complexity related to the high number of variables involved, are not amenable to automatic verification, typically incurring in state space explosion problems. The literature is however quite scarce on data concerning the size of interlocking systems that have been successfully proved with model checking techniques. In this paper we attempt a systematic study of the applicability bounds for general purpose model checkers on this class of systems, by studying the typical characteristics of control tables and their size parameters. The results confirm that, although small scale interlocking systems can be addressed by model checking, interlockings that control medium or large railway yards can not, asking for specialized verification techniques."}
{"_id": "a0c15ca3c227b77df7b8e06d4c8c7c398f07d872", "title": "Image Segmentation and Classification of MRI Brain Tumor Based on Cellular Automata and Neural Networks", "text": "This paper proposes segmentation of MRI brain tumor using cellular automata and classification of tumors using Gray level Co-occurrence matrix features and artificial neural network. In this technique, cellular automata (CA) based seeded tumor segmentation method on magnetic resonance (MR) images, which uses volume of interest (VOI) and seed selection. Seed based segmentation is performed in the image for detecting the tumor region and then highlighting the region with help of level set method. The brain images are classified into three stages that are normal, benign and malignant. For this non knowledge based automatic image classification, image texture features and Artificial Neural Network are employed. The conventional method for medical resonance brain images classification and tumors detection is by human inspection. Decision making was performed in two stages: feature extraction using Gray level Co-occurrence matrix and the classification using Radial basis function which is the type of ANN. The performance of the ANN classifier was evaluated in terms of training performance and classification accuracies. Artificial Neural Network gives fast and accurate classification than other neural networks and it is a promising tool for classification of the tumors."}
{"_id": "cc1bd334ec89970265bd5f775d52208af5ac5818", "title": "FlowTime: Dynamic Scheduling of Deadline-Aware Workflows and Ad-Hoc Jobs", "text": "With rapidly increasing volumes of data to be processed in modern data analytics, it is commonplace to run multiple data processing jobs with inter-job dependencies in a datacenter cluster, typically as recurring data processing workloads. Such a group of inter-dependent data analytic jobs is referred to as a workflow, and may have a deadline due to its mission-critical nature. In contrast, non-recurring ad-hoc jobs are typically best-effort in nature, and rather than meeting deadlines, it is desirable to minimize their average job turnaround time. The state-of-the-art scheduling mechanisms focused on meeting deadlines for individual jobs only, and are oblivious to workflow deadlines. In this paper, we present FlowTime, a new system framework designed to make scheduling decisions for workflows so that their deadlines are met, while simultaneously optimizing the performance of ad-hoc jobs. To achieve this objective, we first adopt a divide-and-conquer strategy to transform the problem of workflow scheduling to a deadline-aware job scheduling problem, and then design an efficient algorithm that tackles the scheduling problem with both deadline-aware jobs and ad-hoc jobs by solving its corresponding optimization problem directly using a linear program solver. Our experimental results have clearly demonstrated that FlowTime achieves the lowest deadline-miss rates for deadline-aware workflows and 2-10 times shorter average job turnaround time, as compared to the state-of-the-art scheduling algorithms."}
{"_id": "83677c227f747e3875c7f00eb984d76367567844", "title": "The Finite Element Method with Anisotropic Mesh Grading for Elliptic Problems in Domains with Corners and Edges", "text": "This paper is concerned with a specific finite element strategy for solving elliptic boundary value problems in domains with corners and edges. First, the anisotropic singular behaviour of the solution is described. Then the finite element method with anisotropic, graded meshes and piecewise linear shape functions is investigated for such problems; the schemes exhibit optimal convergence rates with decreasing mesh size. For the proof, new local interpolation error estimates for functions from anisotropically weighted spaces are derived. Finally, a numerical experiment is described, that shows a good agreement of the calculated approximation orders with the theoretically predicted ones. ( 1998 B. G. Teubner Stuttgart\u2014John Wiley & Sons Ltd."}
{"_id": "04f044da13e618ca8b399f004e713603e54e17cb", "title": "Procedural Generation of Parcels in Urban Modeling", "text": "We present a method for interactive procedural generation of parcels within the urban modeling pipeline. Our approach performs a partitioning of the interior of city blocks using user-specified subdivision attributes and style parameters. Moreover, our method is both robust and persistent in the sense of being able to map individual parcels from before an edit operation to after an edit operation this enables transferring most, if not all, customizations despite small to large-scale interactive editing operations. The guidelines guarantee that the resulting subdivisions are functionally and geometrically plausible for subsequent building modeling and construction. Our results include visual and statistical comparisons that demonstrate how the parcel configurations created by our method can closely resemble those found in real-world cities of a large variety of styles. By directly addressing the block subdivision problem, we intend to increase the editability and realism of the urban modeling pipeline and to become a standard in parcel generation for future urban modeling methods."}
{"_id": "d8de109a60bb533ea20718cf94ebd8d73abd1108", "title": "TOM-Net: Learning Transparent Object Matting from a Single Image", "text": "This paper addresses the problem of transparent object matting. Existing image matting approaches for transparent objects often require tedious capturing procedures and long processing time, which limit their practical use. In this paper, we first formulate transparent object matting as a refractive flow estimation problem. We then propose a deep learning framework, called TOM-Net, for learning the refractive flow. Our framework comprises two parts, namely a multi-scale encoder-decoder network for producing a coarse prediction, and a residual network for refinement. At test time, TOM-Net takes a single image as input, and outputs a matte (consisting of an object mask, an attenuation mask and a refractive flow field) in a fast feed-forward pass. As no off-the-shelf dataset is available for transparent object matting, we create a large-scale synthetic dataset consisting of 178K images of transparent objects rendered in front of images sampled from the Microsoft COCO dataset. We also collect a real dataset consisting of 876 samples using 14 transparent objects and 60 background images. Promising experimental results have been achieved on both synthetic and real data, which clearly demonstrate the effectiveness of our approach."}
{"_id": "d3db914b1a25e6006402d4b3a570934376c70e71", "title": "Reinforcement Learning: A Review from a Machine Learning Perspective", "text": "Machine Learning is the study of methods for programming computers to learn. Reinforcement Learning is a type of Machine Learning and refers to learning problem which allows machines and software agents to automatically determine the ideal behaviour within a specific context, in order to maximize its performance. RL is inspired by behaviourist psychology based on the mechanism of learning from rewards and does not require prior knowledge and automatically get optimal policy with the help of knowledge obtained by trial-and-error and continuous interaction with the dynamic environment. This paper provides the overview of Reinforcement Learning from Machine learning perspective. It discusses the fundamental principles and techniques used to solve RL problems. It presents nature of RL problems, with focus on some influential model free RL algorithms, challenges and recent trends in theory and practice of RL. It concludes with the future scope of RL."}
{"_id": "865bd9a32fb375a80fca9f04bead4d0d96576709", "title": "Bootstrapping coreference resolution using word associations", "text": "In this paper, we present an unsupervised framework that bootstraps a complete coreference resolution (CoRe) system from word associations mined from a large unlabeled corpus. We show that word associations are useful for CoRe \u2013 e.g., the strong association between Obama and President is an indicator of likely coreference. Association information has so far not been used in CoRe because it is sparse and difficult to learn from small labeled corpora. Since unlabeled text is readily available, our unsupervised approach addresses the sparseness problem. In a self-training framework, we train a decision tree on a corpus that is automatically labeled using word associations. We show that this unsupervised system has better CoRe performance than other learning approaches that do not use manually labeled data."}
{"_id": "dc701e7fca147e2bf37f14481e35e1b975396809", "title": "A simplified extension of the Area under the ROC to the multiclass domain", "text": "The Receiver Operator Characteristic (ROC) plot allows a classifier to be evaluated and optimised over all possible operating points. The Area Under the ROC (AUC) has become a standard performance evaluation criterion in two-class pattern recognition problems, used to compare different classification algorithms independently of operating points, priors, and costs. Extending the AUC to the multiclass case is considered in this paper, called the volume under the ROC hypersurface (VUS). A simplified VUS measure is derived that ignores specific intraclass dimensions, and regards inter-class performances only. It is shown that the VUS measure generalises from the 2-class case, but the bounds between random and perfect classification differ, with the lower bound tending towards zero as the dimensionality increases. A number of experiments with known distributions are used to verify the bounds, and to investigate a numerical integration approach to estimating the VUS. Experiments on real data compare several competing classifiers in terms of both error-rate and VUS. It was found that some classifiers compete in terms of error-rate, but have significantly different VUS scores, illustrating the importance of the VUS approach."}
{"_id": "0b27eee3dc09f1a4ef541b808899e5a8b1d481f9", "title": "An Improved Lexicographical Sort Algorithm of Copy-move Forgery Detection", "text": "As a technique for digital image tampering, copy-move forgery is frequently used. In this paper, an improved lexicographical sort algorithm based on discrete cosine transform (DCT) is developed to detect copy-move forgery. Firstly, the image is split into 8*8 blocks, and the image data undergoes a DCT. Then the DCT coefficients were grouped to reduce the dimension according to the frequency property. Finally, the distance of eigenvectors, instead of the DCT coefficients, was taken as the eigenvalue to fulfill the block matching. Experiments results showed that the false matching ratio of the proposed algorithm was reduced while the detecting ability was maintained."}
{"_id": "55baef0d54403387f5cf28e2ae1ec850355cf60a", "title": "An Empirical Study of Rich Subgroup Fairness for Machine Learning", "text": "Kearns, Neel, Roth, and Wu [ICML 2018] recently proposed a notion of rich subgroup fairness intended to bridge the gap between statistical and individual notions of fairness. Rich subgroup fairness picks a statistical fairness constraint (say, equalizing false positive rates across protected groups), but then asks that this constraint hold over an exponentially or infinitely large collection of subgroups defined by a class of functions with bounded VC dimension. They give an algorithm guaranteed to learn subject to this constraint, under the condition that it has access to oracles for perfectly learning absent a fairness constraint. In this paper, we undertake an extensive empirical evaluation of the algorithm of Kearns et al. On four real datasets for which fairness is a concern, we investigate the basic convergence of the algorithm when instantiated with fast heuristics in place of learning oracles, measure the tradeoffs between fairness and accuracy, and compare this approach with the recent algorithm of Agarwal, Beygelzeimer, Dudik, Langford, and Wallach [ICML 2018], which implements weaker and more traditional marginal fairness constraints defined by individual protected attributes. We find that in general, the Kearns et al. algorithm converges quickly, large gains in fairness can be obtained with mild costs to accuracy, and that optimizing accuracy subject only to marginal fairness leads to classifiers with substantial subgroup unfairness. We also provide a number of analyses and visualizations of the dynamics and behavior of the Kearns et al. algorithm. Overall we find this algorithm to be effective on real data, and rich subgroup fairness to be a viable notion in practice."}
{"_id": "224a55fddd0646269cb12678843ff3d84f38c108", "title": "Deep transfer learning with ontology for image classification", "text": "Purely, data-driven large scale image classification has been achieved using various feature descriptors like SIFT, HOG etc. Major milestone in this regards is Convolutional Neural Networks (CNN) based methods which learn optimal feature descriptors as filters. Little attention has been given to the use of domain knowledge. Ontology plays an important role in learning to categorize images into abstract classes where there may not be a clear visual connect between category and image, for example identifying image mood - happy, sad and neutral. Our algorithm combines CNN and ontology priors to infer abstract patterns in Indian Monument Images. We use a transfer learning based approach in which, knowledge of domain is transferred to CNN while training (top down transfer) and inference is made using CNN prediction and ontology tree/priors (bottom up transfer ). We classify images to categories like Tomb, Fort and Mosque. We demonstrate that our method improves remarkably over logistic classifier and other transfer learning approach. We conclude with a remark on possible applications of the model and note about scaling this to bigger ontology."}
{"_id": "2e016c61724304980dc2c82b7d60896fd921c176", "title": "Multi-view 3D Object Detection Network for Autonomous Driving", "text": "This paper aims at high-accuracy 3D object detection in autonomous driving scenario. We propose Multi-View 3D networks (MV3D), a sensory-fusion framework that takes both LIDAR point cloud and RGB images as input and predicts oriented 3D bounding boxes. We encode the sparse 3D point cloud with a compact multi-view representation. The network is composed of two subnetworks: one for 3D object proposal generation and another for multi-view feature fusion. The proposal network generates 3D candidate boxes efficiently from the birds eye view representation of 3D point cloud. We design a deep fusion scheme to combine region-wise features from multiple views and enable interactions between intermediate layers of different paths. Experiments on the challenging KITTI benchmark show that our approach outperforms the state-of-the-art by around 25% and 30% AP on the tasks of 3D localization and 3D detection. In addition, for 2D detection, our approach obtains 14.9% higher AP than the state-of-the-art on the hard data among the LIDAR-based methods."}
{"_id": "f4976cc874ff737da775494260169ee75ae72a29", "title": "Artificial Intelligence in the NASA Volcano Sensorweb : Over a Decade in Operations", "text": "Volcanic activity can occur with little or no warning. Increasing numbers of space borne assets can enable coordinated measurements of volcanic events to enhance both scientific study and hazard response. We describe the use of space and ground measurements to target further measurements as part of a worldwide volcano monitoring system. We utilize a number of alert systems including the MODVOLC, GOESVOLC, US Air Force Weather Advisory, and Volcanic Ash Advisory Center (VAAC) alert systems. Additionally we use in-situ data from ground instrumentation at a number of volcanic sites, including Iceland. Artificial Intelligence Software plays a key role in the Volcano Sensorweb. First, several in-situ volcano monitoring networks use \u201cintelligent\u201d data interpretation software to trigger alerts that can then be used to allocate network resources, notify human agents, and even task space observations. Second, the Earth Observing One (EO-1) spacecraft uses Artificial Intelligence Software to automatically task the spacecraft to execute observations. Third, EO-1 also interprets thermal data onboard to allow for faster notifications of volcanic activity. Finally some data interpretation steps use intelligent software such as Random Decision Forest Methods used to automatically estimate volcanic plume heights in Worldview-2 Imagery."}
{"_id": "23fc83c8cfff14a16df7ca497661264fc54ed746", "title": "Comprehensive Database for Facial Expression Analysis", "text": "Within the past decade, significant effort has occurred in developing methods of facial expression analysis. Because most investigators have used relatively limited data sets, the generalizability of these various methods remains unknown. We describe the problem space for facial expression analysis, which includes level of description, transitions among expression, eliciting conditions, reliability and validity of training and test data, individual differences in subjects, head orientation and scene complexity, image characteristics, and relation to non-verbal behavior. We then present the CMU-Pittsburgh AU-Coded Face Expression Image Database, which currently includes 2105 digitized image sequences from 182 adult subjects of varying ethnicity, performing multiple tokens of most primary FACS action units. This database is the most comprehensive test-bed to date for comparative studies of facial expression"}
{"_id": "6be461dd5869d00fc09975a8f8e31eb5f86be402", "title": "Real Time Face Detection and Facial Expression Recognition: Development and Applications to Human Computer Interaction.", "text": "Computer animated agents and robots bring a social dimension to human computer interaction and force us to think in new ways about how computers could be used in daily life. Face to face communication is a real-time process operating at a a time scale in the order of 40 milliseconds. The level of uncertainty at this time scale is considerable, making it necessary for humans and machines to rely on sensory rich perceptual primitives rather than slow symbolic inference processes. In this paper we present progress on one such perceptual primitive. The system automatically detects frontal faces in the video stream and codes them with respect to 7 dimensions in real time: neutral, anger, disgust, fear, joy, sadness, surprise. The face finder employs a cascade of feature detectors trained with boosting techniques [15, 2]. The expression recognizer receives image patches located by the face detector. A Gabor representation of the patch is formed and then processed by a bank of SVM classifiers. A novel combination of Adaboost and SVM's enhances performance. The system was tested on the Cohn-Kanade dataset of posed facial expressions [6]. The generalization performance to new subjects for a 7- way forced choice correct. Most interestingly the outputs of the classifier change smoothly as a function of time, providing a potentially valuable representation to code facial expression dynamics in a fully automatic and unobtrusive manner. The system has been deployed on a wide variety of platforms including Sony's Aibo pet robot, ATR's RoboVie, and CU animator, and is currently being evaluated for applications including automatic reading tutors, assessment of human-robot interaction."}
{"_id": "1dff919e51c262c22630955972968f38ba385d8a", "title": "Toward an affect-sensitive multimodal human-computer interaction", "text": "The ability to recognize affective states of a person we are communicating with is the core of emotional intelligence. Emotional intelligenceisafacetofhumanintelligencethathasbeenarguedtobe indispensable and perhaps the most important for successful interpersonal social interaction. This paper argues that next-generation human\u2013computer interaction (HCI) designs need to include the essence of emotional intelligence\u2014the ability to recognize a user\u2019s affective states\u2014in order to become more human-like, more effective, and more efficient. Affective arousal modulates all nonverbal communicative cues (facial expressions, body movements, and vocal and physiological reactions). In a face-to-face interaction, humans detect and interpret those interactive signals of their communicator with little or no effort. Yet design and development of an automated system that accomplishes these tasks is rather difficult. This paper surveys the past work in solving these problems by a computer and provides a set of recommendations for developing the first part of an intelligent multimodal HCI\u2014an automatic personalized analyzer of a user\u2019s nonverbal affective feedback."}
{"_id": "3d838b7e02ecfafdd1ed81ac0f70d9996b4bdf20", "title": "Parametric Hidden Markov Models for Gesture Recognition", "text": "\u00d0A new method for the representation, recognition, and interpretation of parameterized gesture is presented. By parameterized gesture we mean gestures that exhibit a systematic spatial variation; one example is a point gesture where the relevant parameter is the two-dimensional direction. Our approach is to extend the standard hidden Markov model method of gesture recognition by including a global parametric variation in the output probabilities of the HMM states. Using a linear model of dependence, we formulate an expectation-maximization (EM) method for training the parametric HMM. During testing, a similar EM algorithm simultaneously maximizes the output likelihood of the PHMM for the given sequence and estimates the quantifying parameters. Using visually derived and directly measured three-dimensional hand position measurements as input, we present results that demonstrate the recognition superiority of the PHMM over standard HMM techniques, as well as greater robustness in parameter estimation with respect to noise in the input features. Last, we extend the PHMM to handle arbitrary smooth (nonlinear) dependencies. The nonlinear formulation requires the use of a generalized expectation-maximization (GEM) algorithm for both training and the simultaneous recognition of the gesture and estimation of the value of the parameter. We present results on a pointing gesture, where the nonlinear approach permits the natural spherical coordinate parameterization of pointing direction. Index Terms\u00d0Gesture recognition, hidden Markov models, expectation-maximization algorithm, time-series modeling, computer vision."}
{"_id": "15f932d189b13786ca54b1dc684902301d34ef65", "title": "A high-efficient LLCC series-parallel resonant converter", "text": "A high efficient LLCC-type resonant dc-dc converter is discussed in this paper for a low-power photovoltaic application. Emphasis is put on the different design mechanisms of the resonant tank. At the same time soft switching of the inverter as well as the rectifier bridge are regarded. Concerning the design rules, a new challenge is solved in designing a LLCC-converter with voltage-source output. Instead of the resonant elements, ratios of them, e.g. the ratio of inductances Ls/Lp is considered as design parameters first. Furthermore, the derived design rule for the transformer-inductor device fits directly into the overall LLCC-design. Due to the nature of transformers, i.e. the relation of the inductances Ls/Lp is only a function of geometry, this design parameter is directly considered by geometry. Experimental results demonstrate the high efficiency."}
{"_id": "9cab068f133e37b834b27bdc73f0823709b6d43e", "title": "A dynamic model for integrating simple web spam classification techniques", "text": "Over the last years, Internet spam content has spread enormously inside web sites mainly due to the emergence of new web technologies oriented towards the online sharing of resources and information. In such a situation, both academia and industry have shown their concern to accurately detect and effectively control web spam, resulting in a good number of anti-spam techniques currently available. However, the successful integration of different algorithms for web spam classification is still a challenge. In this context, the present study introduces WSF2, a novel web spam filtering framework specifically designed to take advantage of multiple classification schemes and algorithms. In detail, our approach encodes the life cycle of a case-based reasoning system, being able to use appropriate knowledge and dynamically adjust different parameters to ensure continuous improvement in filtering precision with the passage of time. In order to correctly evaluate the effectiveness of the dynamic model, we designed a set of experiments involving a publicly available corpus, as well as different simple well-known classifiers and ensemble approaches. The results revealed that WSF2 performed well, being able to take advantage of each classifier and to achieve a better performance when compared to other alternatives. WSF2 is an open-source project licensed under the terms of the LGPL publicly available at https://sourceforge.net/"}
{"_id": "72c62348d76db34d507695db5dcc774a0ff402f4", "title": "Broadband compact microstrip patch antenna design loaded by multiple split ring resonator superstrate and substrate", "text": "We present microstrip patch antenna loaded with multiple split ring resonator substrate and superstrate. We analyze how the loading of split ring resonator superstrate and substrate can improve the bandwidth compared to the simple microstrip patch antenna and microstrip patch antenna loaded with split ring resonator superstrate. Another important observation is made for multiple split ring resonator loading in superstrate and substrate of microstrip patch antenna. The design is compared for two, three, and four-ring split ring resonator loading. The designs are also compared for different gap spacing between the rings. All three designs are compared for small gap and large gap between the rings. The design results in the form of reflection coefficient and bandwidth is presented in this manuscript. The design results are also compared with previously published designs."}
{"_id": "2175d3d22066a58914554ba1a776e2e617e30469", "title": "Intentions to use social media in organizing and taking vacation trips", "text": "0747-5632/$ see front matter 2010 Elsevier Ltd. A doi:10.1016/j.chb.2010.05.022 * Corresponding author. Tel.: +34 922317075. E-mail addresses: eparra@ull.es (E. Parra-L\u00f3pez), Bulchand-Gidumal), dgtano@ull.es (D. Guti\u00e9rrez-Ta Armas). This work proposes a theoretical model to explain the factors determining the intentions to use social media when organizing and taking vacation trips. Understanding the antecedents of the tourists\u2019 use of these technologies is considered to be crucial for organization managers and destination policy makers. This use of social media technologies determines which elements of the trip might be used by the tourist thus having a great impact on the market. The model and its hypotheses have been tested by means of an approach based on structural equations with the PLS technique. The study was conducted on a sample of 404 individuals who normally use the Internet and had traveled on vacation in the previous 12 months. The conclusions of the study reveal that the intentions to use social media are directly influenced by the perceived benefits of that use (functional, psychological and hedonic and social); however, the costs do not significantly affect the predisposition to use such technologies. It is also shown that there is a series of incentives such as altruism, availability, individual predisposition or trust in the contributions of others which facilitate and promote the use of this type of technology when organizing and taking tourist trips. 2010 Elsevier Ltd. All rights reserved."}
{"_id": "b8813ca58d0fe6940e6d86f7ec8efc7d204cc8c8", "title": "Realistic image composite with best-buddy prior of natural image patches", "text": "Realistic image composite requires the appearance of foreground and background layers to be consistent. This is difficult to achieve because the foreground and the background may be taken from very different environments. This paper proposes a novel composite adjustment method that can harmonize appearance of different composite layers. We introduce the Best-Buddy Prior (BBP), which is a novel compact representations of the joint co-occurrence distribution of natural image patches. BBP can be learned from unlabelled images given only the unsupervised regional segmentation. The most-probable adjustment of foreground can be estimated efficiently in the BBP space as the shift vector to the local maximum of density function. Both qualitative and quantitative evaluations show that our method outperforms previous composite adjustment methods."}
{"_id": "29d16b420ff3ac0f3df6640fbf0d0f814a03de1b", "title": "Leveraging microservices architecture by using Docker technology", "text": "Microservices architecture is not a hype and for awhile, started getting attention from organizations who want to shorten time to market of a software product by improving productivity effect through maximizing the automation in all life circle of the product. However, microservices architecture approach also introduces a lot of new complexity and requires application developers a certain level of maturity in order to confidently apply the architectural style. Docker has been a disruptive technology which changes the way applications are being developed and distributed. With a lot of advantages, Docker is a very good fit to implementing microservices architecture. In this paper we will discuss about how Docker can effectively help in leveraging mircoservices architecture with a real working model as a case study."}
{"_id": "56b596c0762224d612a2cbf9b873423a12ff5b6b", "title": "Disco: Discover Your Processes", "text": "Disco is a complete process mining toolkit from Fluxicon that makes process mining fast, easy, and simply fun."}
{"_id": "072f49e07e02edc2a7f5b89b980a691248a5290e", "title": "Distributed Denial of Service (DDoS) Attack in Cloud- Assisted Wireless Body Area Networks: A Systematic Literature Review", "text": "Wireless Body Area Networks (WBANs) have emerged as a promising technology that has shown enormous potential in improving the quality of healthcare, and has thus found a broad range of medical applications from ubiquitous health monitoring to emergency medical response systems. The huge amount of highly sensitive data collected and generated by WBAN nodes requires an ascendable and secure storage and processing infrastructure. Given the limited resources of WBAN nodes for storage and processing, the integration of WBANs and cloud computing may provide a powerful solution. However, despite the benefits of cloud-assisted WBAN, several security issues and challenges remain. Among these, data availability is the most nagging security issue. The most serious threat to data availability is a distributed denial of service (DDoS) attack that directly affects the all-time availability of a patient\u2019s data. The existing solutions for standalone WBANs and sensor networks are not applicable in the cloud. The purpose of this review paper is to identify the most threatening types of DDoS attacks affecting the availability of a cloud-assisted WBAN and review the state-of-the-art detection mechanisms for the identified DDoS attacks."}
{"_id": "f09442e47cd2ff1383151325c0cb5f2e39f2d789", "title": "60GHz high-gain low-noise amplifiers with a common-gate inductive feedback in 65nm CMOS", "text": "In this paper, a novel design technique of common-gate inductive feedback is presented for millimeter-wave low-noise amplifiers (LNAs). For this technique, by adopting a gate inductor at the common-gate transistor of the cascode stage, the gain of the LNA can be enhanced even under a wideband operation. Using a 65nm CMOS process, transmission-line-based and spiral-inductor-based LNAs are fabricated for demonstration. With a dc power consumption of 33.6 mW from a 1.2-V supply voltage, the transmission-line-based LNA exhibits a gain of 20.6 dB and a noise figure of 5.4 dB at 60 GHz while the 3dB bandwidth is 14.1 GHz. As for the spiral-inductor-based LNA, consuming a dc power of 28.8 mW from a 1.2-V supply voltage, the circuit shows a gain of 18.0 dB and a noise figure of 4.5 dB at 60 GHz while the 3dB bandwidth is 12.2 GHz."}
{"_id": "6ff2796ff8c6a4b2f7e3855b9dae6766ce177e4f", "title": "COLOR EYE FUNDUS IMAGES", "text": "This paper presents an approach for automatic detection of microaneurysms and hemorrhages in fundus images. These lesions are considered the earliest signs of diabetic retinopathy. The diabetic retinopathy is a disease caused by diabetes and is considered as the major cause of blindness in working age population. The proposed method is based on mathematical morphology and consists in removing components of retinal anatomy to reach the lesions. This method consists of five stages: a) pre-processing; b) enhancement of low intensity structures; c) detection of blood vessels; d) elimination of blood vessels; e) elimination of the fovea. The accuracy of the method was tested on a public database of fundus images, where it achieved satisfactory results, comparable to other methods from the literature, reporting 87.69% and 92.44% of mean sensitivity and specificity, respectively."}
{"_id": "81f83f0f353d2a7ca639cc16ecaad6b3abdd6cf1", "title": "Smartwatch in vivo", "text": "In recent years, the smartwatch has returned as a form factor for mobile computing with some success. Yet it is not clear how smartwatches are used and integrated into everyday life differently from mobile phones. For this paper, we used wearable cameras to record twelve participants' daily use of smartwatches, collecting and analysing incidents where watches were used from over 34 days of user recording. This allows us to analyse in detail 1009 watch uses. Using the watch as a timepiece was the most common use, making up 50% of interactions, but only 14% of total watch usage time. The videos also let us examine why and how smartwatches are used for activity tracking, notifications, and in combination with smartphones. In discussion, we return to a key question in the study of mobile devices: how are smartwatches integrated into everyday life, in both the actions that we take and the social interactions we are part of?"}
{"_id": "4ac5145d2be39a6f7541ee7cb38d388790b6d1d1", "title": "Design of an Assistive Communication Glove Using Combined Sensory Channels", "text": "This paper presents a new design of a wireless sensor glove developed for American Sign Language finger spelling gesture recognition. Five contact sensors are installed on the glove, in addition to five flex sensors on the fingers and a 3D accelerometer on the back of the hand. Each pair of flex and contact sensors are combined into the same input channel on the BSN node in order to save the number of channels and the installation area. After which, the signal is analyzed and separated back into flex and contact features by software. With electrical contacts and wirings made of conductive fabric and threads, the glove design has become thinner and more flexible. For validation, ASL finger spelling gesture recognition experiments have been performed on signals collected from six speech-impaired subjects and a normal subject. With the new sensor glove design, the experimental results have shown a significant increase in classification accuracy."}
{"_id": "e45cf13bd9c0520e9efcc48dcf2e579cfca9490f", "title": "Sex for fun: a synthesis of human and animal neurobiology", "text": "Sex is a fundamental pleasure, and crucial to the survival of our species. Though not many people would disagree with the proposition that sexual behaviour depends on the brain, the neuroscientific study of human sex is still relatively taboo and much remains to be discovered. On the contrary, excellent experimental animal models (mostly rat) are available that have uncovered major behavioural, neurochemical, and neuroanatomical characteristics of sexual behaviour. Restructuring sexual behaviour into broader terms reflecting behavioural states (wanting, liking, and inhibition) facilitates species comparison, revealing many similarities between animal and human sexual pleasure cycles, some of which can serve as potential avenues of new human sex research. In particular, behavioural and brain evidence clearly shows that motivational and consummatory phases are fundamentally distinct, and that genitally-induced sexual reward is a major factor in sexual learning mechanisms."}
{"_id": "7f1b0539bec01e52c4ffa5543c5ed55880dead55", "title": "Neural Temporal Relation Extraction", "text": "We experiment with neural architectures for temporal relation extraction and establish a new state-of-the-art for several scenarios. We find that neural models with only tokens as input outperform state-ofthe-art hand-engineered feature-based models, that convolutional neural networks outperform LSTM models, and that encoding relation arguments with XML tags outperforms a traditional position-based encoding."}
{"_id": "f13902eb6429629179419c95234ddbd555eb2bb6", "title": "Analysis of Procrastination and Flow Experiences", "text": ""}
{"_id": "0432c63daa8356a7916d8475c854b6c774f31a55", "title": "Distributed Method of Multiplier for Coupled Lagrangian Problems: A Control Approach", "text": "In this paper, we propose a method for solving the distributed optimization problem in which the objective function is the sum of separable convex functions with linear constraints. In our approach, the primal variable is partially updated to make the Method of Multiplier algorithm distributed which is based on the suitable scaling of constraint matrix. The algorithm is then viewed as a dynamical system the convergence analysis of which is done using the passivity concepts of nonlinear control theory. The convexity of the function is related to the passivity of the non-linear functions which is in feedback with the positive real linear system."}
{"_id": "952e6fba33b3a5e525a59cb922307f61395ed450", "title": "SPDT Switch Using Both nMOS and pMOS Transistors for Improving Power Handling", "text": "An SPDT switch consisting of both nMOS and pMOS transistors is presented. Compared with conventional SPDT switches using only nMOS transistors under the same bias condition, the proposed switch exhibits better power-handling capability (PHC). The mechanism for the PHC improvement is explained. A prototype is implemented using a 0.18-um CMOS process. Measurement results show that, at 2.4 GHz, the insertion loss is 0.62 dB when the nMOS transistors are on and 0.91 dB when the pMOS transistors are on. For both modes, the measured return loss and isolation are better than 10 dB and 19 dB, respectively, up to 6 GHz. Under 1.8-V operation, the switch is able to handle a 26.1-dBm input power when the nMOS transistors are on and a 24.0-dBm input power when the pMOS transistors are on."}
{"_id": "70969a9322e4dec2ae1ff4c945d94ae707bccd25", "title": "Brain Connectivity Analysis Methods for Better Understanding of Coupling", "text": "Action, cognition, emotion and perception can be mapped in the brain by using set of techniques. Translating unimodal concepts from one modality to another is an important step towards understanding the neural mechanisms. This paper provides a comprehensive survey of multimodal analysis of brain signals such as fMRI, EEG, MEG, NIRS and motivations, assumptions and pitfalls associated with it. All these non-invasive brain modalities complement and restrain each other and hence improve our understating of functional and neuronal organization. By combining the various modalities together, we can exploit the strengths and flaws of individual brain imaging methods. Integrated anatomical analysis and functional measurements of human brain offer a powerful paradigm for the brain mapping. Here we provide the brief review on non invasive brain modalities, describe the future of co-analysis of these"}
{"_id": "f51820619ca42c5799f3c5acc3855671b905419c", "title": "THE SERIAL POSITION EFFECT OF FREE RECALL", "text": "Recently Murdock (1960) has shown that in free recall RI, the total number of words recalled after one presentation, is a linear function of t, total presentation time. Nothing was said about the serial position effect, though this is a well-known phenomenon of free recall (e.g., Deese & Kaufman, 1957). However, given that there is a serial position effect, the simple linear relationship between RI and t is rather surprising. In the customary serial position curve of free recall, probability of recall is plotted as a function of serial position. This means, then, that the area under the serial position curve is equal to RI, the number of words recalled after one presentation. If R j is a linear function of t then it must follow that the area under the serial position curve is also a linear function of t. However, it is not immediately apparent how the serial position curve varies with t in such a way as to maintain this simple linear relationship. The present experiment was designed as an attempt to determine how the serial position curve varied with list length and presentation rate while still maintaining this linear relationship. Unfortunately, at the end of the experiment it was still not clear how this relationship came about or, for that matter, whether the relationship was even linear after all. The basic reason for this failure was"}
{"_id": "8ad18a6f9bad5b9a68cf03623044c69335edc0a3", "title": "Ecosystem or Echo-System? Exploring Content Sharing across Alternative Media Domains", "text": "This research examines the competing narratives about the role and function of Syria Civil Defence, a volunteer humanitarian organization popularly known as the White Helmets, working in war-torn Syria. Using a mixed-method approach based on seed data collected from Twitter, and then extending out to the websites cited in that data, we examine content sharing practices across distinct media domains that functioned to construct, shape, and propagate these narratives. We articulate a predominantly alternative media \u201cecho-system\u201d of websites that repeatedly share content about the White Helmets. Among other findings, our work reveals a small set of websites and authors generating content that is spread across diverse sites, drawing audiences from distinct communities into a shared narrative. This analysis also reveals the integration of governmentfunded media and geopolitical think tanks as source content for anti-White Helmets narratives. More broadly, the analysis demonstrates the role of alternative newswire-like services in providing content for alternative media websites. Though additional work is needed to understand these patterns over time and across topics, this paper provides insight into the dynamics of this multi-layered media ecosystem."}
{"_id": "318d7da35307221267b6ce6ead995cc812245abb", "title": "Fake News Detection on Social Media using Geometric Deep Learning", "text": "Social media are nowadays one of the main news sources for millions of people around the globe due to their low cost, easy access, and rapid dissemination. This however comes at the cost of dubious trustworthiness and significant risk of exposure to \u2018fake news\u2019, intentionally written to mislead the readers. Automatically detecting fake news poses challenges that defy existing content-based analysis approaches. One of the main reasons is that often the interpretation of the news requires the knowledge of political or social context or \u2018common sense\u2019, which current natural language processing algorithms are still missing. Recent studies have empirically shown that fake and real news spread differently on social media, forming propagation patterns that could be harnessed for the automatic fake news detection. Propagation-based approaches have multiple advantages compared to their content-based counterparts, among which is language independence and better resilience to adversarial attacks. In this paper, we show a novel automatic fake news detection model based on geometric deep learning. The underlying core algorithms are a generalization of classical convolutional neural networks to graphs, allowing the fusion of heterogeneous data such as content, user profile and activity, social graph, and news propagation. Our model was trained and tested on news stories, verified by professional fact-checking organizations, that were spread on Twitter. Our experiments indicate that social network structure and propagation are important features allowing highly accurate (92.7% ROC AUC) fake news detection. Second, we observe that fake news can be reliably detected at an early stage, after just a few hours of propagation. Third, we test the aging of our model on training and testing data separated in time. Our results point to the promise of propagation-based approaches for fake news detection as an alternative or complementary strategy to content-based approaches. ar X iv :1 90 2. 06 67 3v 1 [ cs .S I] 1 0 Fe b 20 19"}
{"_id": "07d138a54c441d6ae9bff073025f8f5eeaac4da4", "title": "Performance Modeling and Scalability Optimization of Distributed Deep Learning Systems", "text": "Big deep neural network (DNN) models trained on large amounts of data have recently achieved the best accuracy on hard tasks, such as image and speech recognition. Training these DNNs using a cluster of commodity machines is a promising approach since training is time consuming and compute-intensive. To enable training of extremely large DNNs, models are partitioned across machines. To expedite training on very large data sets, multiple model replicas are trained in parallel on different subsets of the training examples with a global parameter server maintaining shared weights across these replicas. The correct choice for model and data partitioning and overall system provisioning is highly dependent on the DNN and distributed system hardware characteristics. These decisions currently require significant domain expertise and time consuming empirical state space exploration.\n This paper develops performance models that quantify the impact of these partitioning and provisioning decisions on overall distributed system performance and scalability. Also, we use these performance models to build a scalability optimizer that efficiently determines the optimal system configuration that minimizes DNN training time. We evaluate our performance models and scalability optimizer using a state-of-the-art distributed DNN training framework on two benchmark applications. The results show our performance models estimate DNN training time with high estimation accuracy and our scalability optimizer correctly chooses the best configurations, minimizing the training time of distributed DNNs."}
{"_id": "0d15202f2de79a5cdc4ba1ff005c3aef27aea2b0", "title": "Variance Reduction for Stochastic Gradient Optimization", "text": "Stochastic gradient optimization is a class of widely used algorithms for training machine learning models. To optimize an objective, it uses the noisy gradient computed from the random data samples instead of the true gradient computed from the entire dataset. However, when the variance of the noisy gradient is large, the algorithm might spend much time bouncing around, leading to slower convergence and worse performance. In this paper, we develop a general approach of using control variate for variance reduction in stochastic gradient. Data statistics such as low-order moments (pre-computed or estimated online) is used to form the control variate. We demonstrate how to construct the control variate for two practical problems using stochastic gradient optimization. One is convex\u2014the MAP estimation for logistic regression, and the other is non-convex\u2014stochastic variational inference for latent Dirichlet allocation. On both problems, our approach shows faster convergence and better performance than the classical approach."}
{"_id": "42dfb714a9faddf3f7047d482b2e0531d884dc67", "title": "Advances in Variational Inference", "text": "Many modern unsupervised or semi-supervised machine learning algorithms rely on Bayesian probabilistic models. These models are usually intractable and thus require approximate inference. Variational inference (VI) lets us approximate a high-dimensional Bayesian posterior with a simpler variational distribution by solving an optimization problem. This approach has been successfully used in various models and large-scale applications. In this review, we give an overview of recent trends in variational inference. We first introduce standard mean field variational inference, then review recent advances focusing on the following aspects: (a) scalable VI, which includes stochastic approximations, (b) generic VI, which extends the applicability of VI to a large class of otherwise intractable models, such as non-conjugate models, (c) accurate VI, which includes variational models beyond the mean field approximation or with atypical divergences, and (d) amortized VI, which implements the inference over local latent variables with inference networks. Finally, we provide a summary of promising future research directions."}
{"_id": "9855ae8dc6cfd578334e98edb748aa386c435eab", "title": "Optimized rate matching architecture for a LTE-Advanced FPGA-based PHY", "text": "In this paper we present an optimized rate matching architecture for a LTE-Advanced FPGA-based physical layer. Since LTE-Advanced can reach up to rates of 1 Gbps in downlink, and since rate matching is in that critical path, it is very important that the design of the hardware architecture be efficient enough to allow this high data rate with little resources as possible. If not well planned, implementations on FPGAs can be quite challenging, limiting the choices of speed grades and FPGAs sizes capable of supporting such requirements. We propose efficient hardware architecture for the LTE-Advanced rate matching generic procedure; occupying only 218 slices and 9 block RAMs and performing in frequencies greater than 400 MHz in a FPGA-based solution."}
{"_id": "67e2bf8d945fb5883cea23001dc722ec9f8cdda5", "title": "Linear model predictive control of the locomotion of Pepper, a humanoid robot with omnidirectional wheels", "text": "The goal of this paper is to present a new real-time controller based on linear model predictive control for an omnidirectionnal wheeled humanoid robot. It is able to control both the mobile base of the robot and its body, while taking into account dynamical constraints. It makes it possible to have high velocity and acceleration motions by predicting the dynamic behavior of the robot in the future. Experimental results are proposed on the robot Pepper made by Aldebaran, showing good performance in terms of robustness and tracking control, efficiently managing kinematic and dynamical constraints."}
{"_id": "43d00d394ff76c0a5426c37fe072038ac7ec7627", "title": "CLASSIFICATION USING NA\u00cfVE BAYES , VSM AND POS TAGGER", "text": "This research focuses on Text Classification. Text classification is the task of automatically sorting a set of documents into categories from a predefined set. The domain of this research is the combination of information ret rieval (IR) technology, Data mining and machine learning (ML) technology. This research will outline the fundamental traits of the technologies involved. This research uses three text classification algorithms (Naive Bayes, VSM for text classificatio n and the new technique -Use of Stanford Tagger for text classific ation) to classify documents into different categories, which is trained on two different datasets (20 Newsgroups and New news dataset for five categories).In regards to the above classifica tion strategies, Na\u00efve Bayes is potentially good at serving as a tex classification model due to its simplicity."}
{"_id": "d714e066a362efb3151eb9195c06fd56523cc399", "title": "Handling non-functional requirements for big data and IOT projects in Scrum", "text": "Agile methodologies are gaining popularity at a lightning pace in the software industry. Prominent agile methodologies, such as Scrum and Extreme Programming (XP), help quickly deliver a system that meets Functional Requirements (FRs), while adapting to the changes in the customer requirements and feedback. However, Non-Functional Requirements (NFRs) by and large have been either mostly ignored or introduced and retrofit late in the software development cycle, oftentimes leading to project failures. With the recent, increasing shifts towards cloud and emphasis in big data in the software industry, NFRs, such as security and performance, have become more critical than ever before. This research proposes a novel approach to handling security and performance NFRs for projects involving big data and cloud, using Scrum. An industrial case study, conducted over a span of 9 months, shows that the approach helps deal with security and performance requirements individually as well as conflicts between them in an agile methodology."}
{"_id": "0ce6df6d6f009cbdd5f8c395bd422581e9aa0769", "title": "An Active Testing Model for Tracking Roads in Satellite Images", "text": "We present a new approach for tracking roads from satellite images, and thereby illustrate a general computational strategy (\\active testing\") for tracking 1D structures and other recognition tasks in computer vision. Our approach is related to recent work in active vision on \\where to look next\" and motivated by the \\divide-and-conquer\" strategy of parlor games such as \\Twenty Questions.\" We choose \\tests\" (matched lters for short road segments) one at a time in order to remove as much uncertainty as possible about the \\true hypothesis\" (road position) given the results of the previous tests. The tests are chosen on-line based on a statistical model for the joint distribution of tests and hypotheses. The problem of minimizing uncertainty (measured by entropy) is formulated in simple and explicit analytical terms. To execute this entropy testing rule we then alternate between data collection and optimization: at each iteration new image data are examined and a new entropy minimization problem is solved (exactly), resulting in a new image location to inspect, and so forth. We report experiments using panchromatic SPOT satellite imagery with a ground resolution of ten meters: given a starting point and starting direction, we are able to rapidly track highways in southern France over distances on the order of one hundred kilometers without manual intervention. Une m ethode active pour le suivi de routes dans des images satellitaires R esum e : Nous pr esentons une approche originale pour suivre automatiquement une route dans une image de la Terre acquise par satellite. Ce faisant, nous illustrons une m ethodologie plus g en erale pour le probl eme du suivi de structures lin eiques et autres dans des images. Notre approche est li ee a des travaux r ecents en vision active concernant les strat egies de perception ainsi qu'' a la m ethode algorithmique \"diviser pour r egner\" dans le jeu des vingt questions. Nous choisissons des questions (d etecteur local pour la pr esence d'une route) s equentiel-lement, de mani ere a r eduire le plus possible l'incertitude sur l'inconnue (la localisation de la route dans l'image). Ce choix est bas e sur un mod ele statistique pour la distribution jointe de l'inconnue et des r eponses aux questions. Le probl eme de minimisation de l'incer-titude (mesur ee par l'entropie de C. Shannon) est formul e de mani ere analytique. Ex ecuter cette strat egie consiste donc a alterner entre \u2026"}
{"_id": "c7c7484916e63bcfcf937372d50e92403feeafdc", "title": "Effects of weightlifting vs. kettlebell training on vertical jump, strength, and body composition.", "text": "Effects of weightlifting vs. kettlebell training on vertical jump, strength, and body composition. J Strength Cond Res 26(5): 1199-1202, 2012-The present study compared the effects of 6 weeks of weightlifting plus traditional heavy resistance training exercises vs. kettlebell training on strength, power, and anthropometric measures. Thirty healthy men were randomly assigned to 1 of 2 groups: (a) weightlifting (n = 13; mean \u00b1 SD: age, 22.92 \u00b1 1.98 years; body mass, 80.57 \u00b1 12.99 kg; height, 174.56 \u00b1 5.80 cm) or (b) kettlebell (n = 17; mean \u00b1 SD: age, 22.76 \u00b1 1.86 years; body mass, 78.99 \u00b1 10.68 kg; height, 176.79 \u00b1 5.08 cm) and trained 2 times a week for 6 weeks. A linear periodization model was used for training; at weeks 1-3 volume was 3 \u00d7 6 (kettlebell swings or high pull), 4 \u00d7 4 (accelerated swings or power clean), and 4 \u00d7 6 (goblet squats or back squats), respectively, and the volume increased during weeks 4-6 to 4 \u00d7 6, 6 \u00d7 4, and 4 \u00d7 6, respectively. Participants were assessed for height (in centimeters), body mass (in kilograms), and body composition (skinfolds). Strength was assessed by the back squat 1 repetition maximum (1RM), whereas power was assessed by the vertical jump and power clean 1RM. The results of this study indicated that short-term weightlifting and kettlebell training were effective in increasing strength and power. However, the gain in strength using weightlifting movements was greater than that during kettlebell training. Neither method of training led to significant changes in any of the anthropometric measures. In conclusion, 6 weeks of weightlifting induced significantly greater improvements in strength compared with kettlebell training. No between-group differences existed for the vertical jump or body composition."}
{"_id": "f37dd651bb7fe32430f4e771045ba2e34477ae08", "title": "Probabilistic Analysis of Plug-In Electric Vehicles Impact on Electrical Grid Through Homes and Parking Lots", "text": "Plug-in electric vehicles in the future will possibly emerge widely in city areas. Fleets of such vehicles in large numbers could be regarded as considerable stochastic loads in view of the electrical grid. Moreover, they are not stabled in unique positions to define their impact on the grid. Municipal parking lots could be considered as important aggregators letting these vehicles interact with the utility grid in certain positions. A bidirectional power interface in a parking lot could link electric vehicles with the utility grid or any storage and dispersed generation. Such vehicles, depending on their need, could transact power with parking lots. Considering parking lots equipped with power interfaces, in more general terms, parking-to-vehicle and vehicle-to-parking are propose here instead of conventional grid-to-vehicle and vehicle-to-grid concepts. Based on statistical data and adopting general regulations on vehicles (dis)charging, a novel stochastic methodology is presented to estimate total daily impact of vehicles aggregated in parking lots on the grid. Different scenarios of plug-in vehicles' penetration are suggested in this paper and finally, the scenarios are simulated on standard grids that include several parking lots. The results show acceptable penetration level margins in terms of bus voltages and grid power loss."}
{"_id": "fbb751db839401b1f3d71859faffc15b72d5700e", "title": "Multiple-Loop Design Technique for High-Performance Low-Dropout Regulator", "text": "A new multiple-loop design technique for high-performance low-dropout (LDO) regulator designs has been proposed and successfully implemented in many commercial products for portable smart phone and tablet PC applications. The proposed LDO is composed of five loops that allows designers to obtain a good tradeoff between quiescent current and other performances, such as undershoot, overshoot, and so on. A total of one bandgap reference and 38 LDOs (including n-type and p-type LDOs, which will be named NLDO and PLDO, respectively) were integrated in one power-management IC chip for powering an application processor inside mobile devices. The proposed LDO has been fabricated based on 0.13- $\\mu {m}$ CMOS process and supplies various current capacities from 50 to 600 mA; 38 LDOs have been designed and supply different output voltage levels from 0.7 to 3.0 V. One of the proposed NLDOs consumes $14~\\mu {A}$ of the quiescent current and features under 56/24 mV of undershoot/overshoot at VOUT =1V as the load current steps up from 0 to 300 mA with 300 mA/1 $\\mu {s}$ on a 1- $\\mu {F}$ output capacitor. The measured output load and line regulations are 1.8 and 0.4 mV, respectively. The measured integrated output noise from 10 Hz to 100 kHz at $\\text{I}_{\\mathrm{ LOAD}}=10\\%$ of maximum current shows 80 $\\mu {V}$ rms. The package chip size is $6.25 \\times 6.25$ mm2 with 169 balls."}
{"_id": "e4b1da04377eae25f8c70794dc27cedabb147010", "title": "Non-rigid image registration using self-supervised fully convolutional networks without training data", "text": "A novel non-rigid image registration algorithm is built upon fully convolutional networks (FCNs) to optimize and learn spatial transformations between pairs of images to be registered in a self-supervised learning framework. Different from most existing deep learning based image registration methods that learn spatial transformations from training data with known corresponding spatial transformations, our method directly estimates spatial transformations between pairs of images by maximizing an image-wise similarity metric between fixed and deformed moving images, similar to conventional image registration algorithms. The image registration is implemented in a multi-resolution image registration framework to jointly optimize and learn spatial transformations and FCNs at different spatial resolutions with deep self-supervision through typical feedforward and backpropagation computation. The proposed method has been evaluated for registering 3D structural brain magnetic resonance (MR) images and obtained better performance than state-of-the-art image registration algorithms."}
{"_id": "74bed25930678a97e6e064ac88a61ff75493196a", "title": "Identifying HTTPS-Protected Netflix Videos in Real-Time", "text": "After more than a year of research and development, Netflix recently upgraded their infrastructure to provide HTTPS encryption of video streams in order to protect the privacy of their viewers. Despite this upgrade, we demonstrate that it is possible to accurately identify Netflix videos from passive traffic capture in real-time with very limited hardware requirements. Specifically, we developed a system that can report the Netflix video being delivered by a TCP connection using only the information provided by TCP/IP headers. To support our analysis, we created a fingerprint database comprised of 42,027 Netflix videos. Given this collection of fingerprints, we show that our system can differentiate between videos with greater than 99.99% accuracy. Moreover, when tested against 200 random 20-minute video streams, our system identified 99.5% of the videos with the majority of the identifications occurring less than two and a half minutes into the video stream."}
{"_id": "9837224b8920be02f27ad13abf0c05c89f7d6f75", "title": "TS-CLOCK: temporal and spatial locality aware buffer replacement algorithm for NAND flash storages", "text": "NAND flash storage is widely adopted in all classes of computing devices. However, random write performance and lifetime issues remain to be addressed. In this paper, we propose a novel buffer replacement algorithm called TS-CLOCK that effectively resolves the remaining problems. Our experimental results show that TS-CLOCK outperforms state-of-the-art algorithms in terms of performance and lifetime."}
{"_id": "7588d1f796299c71b416a84d1fc2e7e86294d558", "title": "Semantic Web User Interfaces : A Systematic Mapping Study and Review", "text": "Context : Exploration of user interfaces has attracted a considerable attention in Semantic Web research in the last several years. However, the overall impact of these research efforts and the level of the maturity of their results are still unclear. Objective: This study aims at surveying existing research on Semantic Web user interfaces in order to synthesise their results, assess the level of the maturity of the research results, and identify needs for future research. Method : This paper reports the results of a systematic literature review and mapping study on Semantic Web user interfaces published between 2003 and 2011. After collecting a corpus of 87 papers, the collected papers are analysed through four main dimensions: general information such as sources, types and time of publication; covered research topics; quality of evaluations; and finally, used research methods and reported rigour of their use. Results: Based on only the obtained data, the paper provides a high level overview of the works reviewed, a detailed quality assessment of papers involving human subjects and a discussion of the identified gaps in the analysed papers and draws promising venues for future research. Conclusion: We arrive at two main conclusions: important gaps exist which hinder a wide adoption of Semantic Web technologies (e.g. lack user interfaces based on semantic technologies for social and mobile applications) and experimental research hardly used any proven theories and instruments specifically designed for investigating technology acceptance."}
{"_id": "4c6a4143cab3754ed696c368896e8be1354a4d4b", "title": "System design of the internet of things for residential smart grid", "text": "The Internet of Things envisions integration, coordination, communication, and collaboration of real-world objects in order to perform daily tasks in a more intelligent and efficient manner. To comprehend this vision, this article studies the design of a large-scale IoT system for smart grid application, which constitutes a large number of home users and has the requirement of fast response time. In particular, we focus on the messaging protocol of a universal IoT home gateway, where our cloud enabled system consists of a back-end server, a unified home gateway (UHG) at the end users, and a user interface for mobile devices. We discuss the features of such an IoT system to support a large-scale deployment with a UHG and real-time residential smart grid applications. Based on the requirements, we design an IoT system using XMPP and implemented in a testbed for energy management applications. To show the effectiveness of the designed testbed, we present some results using the proposed IoT architecture."}
{"_id": "d8152278e81fbe66d20b4a1ac88e7b15b7af4a2e", "title": "Finding time period-based most frequent path in big trajectory data", "text": "The rise of GPS-equipped mobile devices has led to the emergence of big trajectory data. In this paper, we study a new path finding query which finds the most frequent path (MFP) during user-specified time periods in large-scale historical trajectory data. We refer to this query as time period-based MFP (TPMFP). Specifically, given a time period T, a source v_s and a destination v_d, TPMFP searches the MFP from v_s to v_d during T. Though there exist several proposals on defining MFP, they only consider a fixed time period. Most importantly, we find that none of them can well reflect people's common sense notion which can be described by three key properties, namely suffix-optimal (i.e., any suffix of an MFP is also an MFP), length-insensitive (i.e., MFP should not favor shorter or longer paths), and bottleneck-free (i.e., MFP should not contain infrequent edges). The TPMFP with the above properties will reveal not only common routing preferences of the past travelers, but also take the time effectiveness into consideration. Therefore, our first task is to give a TPMFP definition that satisfies the above three properties. Then, given the comprehensive TPMFP definition, our next task is to find TPMFP over huge amount of trajectory data efficiently. Particularly, we propose efficient search algorithms together with novel indexes to speed up the processing of TPMFP. To demonstrate both the effectiveness and the efficiency of our approach, we conduct extensive experiments using a real dataset containing over 11 million trajectories."}
{"_id": "1cc85952c8588a17fa086d94ae1493a3b0413872", "title": "VisComplete: Automating Suggestions for Visualization Pipelines", "text": "Building visualization and analysis pipelines is a large hurdle in the adoption of visualization and workflow systems by domain scientists. In this paper, we propose techniques to help users construct pipelines by consensus-automatically suggesting completions based on a database of previously created pipelines. In particular, we compute correspondences between existing pipeline subgraphs from the database, and use these to predict sets of likely pipeline additions to a given partial pipeline. By presenting these predictions in a carefully designed interface, users can create visualizations and other data products more efficiently because they can augment their normal work patterns with the suggested completions. We present an implementation of our technique in a publicly-available, open-source scientific workflow system and demonstrate efficiency gains in real-world situations."}
{"_id": "11f6d271aaa94ad718ec68164f1c5a7ae1255eae", "title": "Safe and Nested Endgame Solving for Imperfect-Information Games", "text": "In imperfect-information games, the optimal strategy in a subgame may depend on the strategy in other, unreached subgames. Thus a subgame cannot be solved in isolation and must instead consider the strategy for the entire game as a whole, unlike perfect-information games. Nevertheless, it is possible to first approximate a solution for the whole game and then improve it in individual subgames. This is referred to as subgame solving. We introduce subgame-solving techniques that outperform prior methods both in theory and practice. We also show how to adapt them, and past subgame-solving techniques, to respond to opponent actions that are outside the original action abstraction; this significantly outperforms the prior state-of-the-art approach, action translation. Finally, we show that subgame solving can be repeated as the game progresses down the game tree, leading to far lower exploitability. These techniques were a key component of Libratus, the first AI to defeat top humans in heads-up no-limit Texas hold\u2019em poker."}
{"_id": "8841ced8ffe2a0203679b8f97de4aa488732fd1f", "title": "Understanding and Constructing Shared Spaces with Mixed-Reality Boundaries", "text": "We propose an approach to creating shared mixed realities based on the construction of transparent boundaries between real and virtual spaces. First, we introduce a taxonomy that classifies current approaches to shared spaces according to the three dimensions of transportation, artificiality, and spatiality. Second, we discuss our experience of staging a poetry performance simultaneously within real and virtual theaters. This demonstrates the complexities involved in establishing social interaction between real and virtual spaces and motivates the development of a systematic approach to mixing realities. Third, we introduce and demonstrate the technique of mixed-reality boundaries as a way of joining real and virtual spaces together in order to address some of these problems."}
{"_id": "eee686b822950a55f31d4c9c33d02c1942424785", "title": "Design and Implementation of T-junction Triangular Microstrip Patch Antenna Array for Wireless Applications", "text": "Abstract\u2014 This paper describes 2 x 2 triangular microstrip patch antenna using T-junction with quarter wave transformer. By regulating the distance in patch antenna and adjusting feed position, bandwidth can be obtained and by using an array, directivity is enhanced. The requirement of large bandwidth, high directivity, and minimum size leads to the design of 2 x 2 triangular microstrip patch antenna array feeding with T-junction network operate at 5.5 GHz. An antenna designed on an FR4 substrate that had a dielectric constant (\uf065r) 4.4, a loss tangent 0.02 and thickness of 1.6 mm. Simulated results showed that designed antenna has directivity 12.91 dB and bandwidth 173 MHz with VSWR 1.07 using T-junction feeding network. The proposed 2 x 2 triangular array has the benefit of light weight, simplicity of fabrication, single layer structure, and high directivity. Keyword Bandwidth, Corporate feeding, Return Loss, T-junction, VSWR."}
{"_id": "ae432e6036c7c17d91f383ab623636bbb8827ac4", "title": "Canine responses to hypoglycemia in patients with type 1 diabetes.", "text": "OBJECTIVE\nAnecdotal evidence suggests that domestic dogs may be able to detect hypoglycemia in their human caregivers; scientific investigation of this phenomenon, however, is sorely lacking. This study thus aimed to investigate how pet dogs respond to the hypoglycemic episodes of their owners with type 1 diabetes.\n\n\nMETHODS\nTwo hundred and twelve dog owners (64.2% female) with medically diagnosed type 1 diabetes participated in the study. All participants owned at least 1 dog. Each person completed a purpose-designed questionnaire developed to collect information on their dogs' responses (if any) to their hypoglycemic episodes.\n\n\nRESULTS\nOne hundred and thirty-eight (65.1%) respondents indicated that their dog had shown a behavioral reaction to at least one of their hypoglycemic episodes, with 31.9% of animals reacting to 11 or more events. Canine sex, age, breed status, and length of pet ownership were unrelated to hypoglycemia-response likelihood. Thirty-six percent of the sample believed that their dogs reacted most of the times they went \"low\"; 33.6% indicated that their pets reacted before they themselves were aware they were hypoglycemic. Dogs' behavioral responses to their owners' hypoglycemic episodes varied. Most animals behaved in a manner suggestive of attracting their owners' attention, for example, vocalizing (61.5%), licking them (49.2%), nuzzling them (40.6%), jumping on top of them (30.4%), and/or staring intently at their faces (41.3%). A smaller proportion showed behavioral responses suggestive of fear, including trembling (7.2%), running away from the owner (5.1%), and/or hyperventilating (2.2%).\n\n\nCONCLUSIONS\nThe findings suggest that behavioral reactions to hypoglycemic episodes in pet owners with type 1 diabetes commonly occur in untrained dogs. Further research is now needed to elucidate the mechanism(s) that dogs use to perform this feat."}
{"_id": "8d9fc2259f8ab6d6b72d4bc762d6c8808235f509", "title": "Unsupervised pattern mining from symbolic temporal data", "text": "We present a unifying view of temporal concepts and data models in order to categorize existing approaches for unsupervised pattern mining from symbolic temporal data. In particular we distinguish time point-based methods and interval-based methods as well as univariate and multivariate methods. The mining paradigms and the robustness of many proposed approaches are compared to aid the selection of the appropriate method for a given problem. For time points, sequential pattern mining algorithms can be used to express equality and order of time points with gaps in multivariate data. For univariate data and limited gaps suffix tree methods are more efficient. Recently, efficient algorithms have been proposed to mine the more general concept of partial order from time points. For time interval data with precise start and end points the relations of Allen can be used to formulate patterns. The recently proposed Time Series Knowledge Representation is more robust on noisy data and offers an alternative semantic that avoids ambiguity and is more expressive. For both pattern languages efficient mining algorithms have been proposed."}
{"_id": "726d02d03a79f0b91203938d70a2e9f8b80c6973", "title": "Circuit techniques for reducing the effects of op-amp imperfections: autozeroing, correlated double sampling, and chopper stabilization", "text": "In linear IC's fabricated in a low-voltage CMOS technology, the reduction of the dynamic range due to the dc offset and low,frequency noise of the ampli$ers becomes increasingly significant. Also, the achievable aniplijier gain is often quite low in such a technology, since cascoding may not be a practical circuit option due to the resulting reduction of the output signal swing. In this paper, some old and some new circuit techniques will be described for the compensation of the amplifier most important nonideal effects including the noise (mainly thermal and l l f noise), the input-referred dc offset voltage, as well as the finite gain resulting in a nonideal virtual ground at the input."}
{"_id": "2074898fb3afab00a44439b33defd5d5f9b7a7c3", "title": "Only Aggressive Elephants are Fast Elephants", "text": "Yellow elephants are slow. A major reason is that they consume their inputs entirely before responding to an elephant rider\u2019s orders. Some clever riders have trained their yellow elephants to only consume parts of the inputs before responding. However, the teaching time to make an elephant do that is high. So high that the teaching lessons often do not pay off. We take a different approach. We make elephants aggressive; only this will make them very fast. We propose HAIL (Hadoop Aggressive Indexing Library), an enhancement of HDFS and Hadoop MapReduce that dramatically improves runtimes of several classes of MapReduce jobs. HAIL changes the upload pipeline of HDFS in order to create different clustered indexes on each data block replica. An interesting feature of HAIL is that we typically create a win-win situation: we improve both data upload to HDFS and the runtime of the actual Hadoop MapReduce job. In terms of data upload, HAIL improves over HDFS by up to 60% with the default replication factor of three. In terms of query execution, we demonstrate that HAIL runs up to 68x faster than Hadoop. In our experiments, we use six clusters including physical and EC2 clusters of up to 100 nodes. A series of scalability experiments also demonstrates the superiority of HAIL."}
{"_id": "29b3fdcdda43e08534d6cae560b08dd2fbe88427", "title": "Fast and practical indexing and querying of very large graphs", "text": "Many applications work with graph-structured data. As graphs grow in size, indexing becomes essential to ensure sufficient query performance. We present the GRIPP index structure (GRaph Indexing based on Pre- and Postorder numbering) for answering reachability queries in graphs.\n GRIPP requires only linear time and space. Using GRIPP, we can answer reachability queries on graphs with 5 million nodes on average in less than 5 milliseconds, which is unrivaled by previous methods. We evaluate the performance and scalability of our approach on real and synthetic random and scale-free graphs and compare our approach to existing indexing schemes. GRIPP is implemented as stored procedure inside a relational database management system and can therefore very easily be integrated into existing graph-oriented applications."}
{"_id": "31cf7b9bbc9199b0432483789ec65d81e64e4ddc", "title": "Computational augmented reality eyeglasses", "text": "In this paper we discuss the design of an optical see-through head-worn display supporting a wide field of view, selective occlusion, and multiple simultaneous focal depths that can be constructed in a compact eyeglasses-like form factor. Building on recent developments in multilayer desktop 3D displays, our approach requires no reflective, refractive, or diffractive components, but instead relies on a set of optimized patterns to produce a focused image when displayed on a stack of spatial light modulators positioned closer than the eye accommodation distance. We extend existing multilayer display ray constraint and optimization formulations while also purposing the spatial light modulators both as a display and as a selective occlusion mask. We verify the design on an experimental prototype and discuss challenges to building a practical display."}
{"_id": "5940229d572d7195469d2e7b83eaa63ede2ae452", "title": "Personal Recommendation as Link Prediction using a Bipartite Graph Kernel", "text": "We implement a personal recommendation system on the Yelp academic dataset using a novel algorithm proposed by [Li and Chen., 2012]. By constructing a bipartite userbusiness graph where positive reviews are edges between nodes, we pose the recommendation problem as link prediction. More specifically, we construct a kernel which combines the information from both the node features and local graph structure. This kernel is then put into a one-class support vector machine, which generates an ordering of likely user-business links. We downsample the graph and tune the parameters of our algorithm (via cross validation). In what follows we provide a summary of the Yelp data, describe the recommendation algorithm in detail, and present the results of our implementation."}
{"_id": "2c6e8c365ee78cbbb2e6278b5329710063a297ea", "title": "Optimizing F-Measure with Support Vector Machines", "text": "Support vector machines (SVMs) are regularly used for classification of unbalanced data by weighting more heavily the error contribution from the rare class. This heuristic technique is often used to learn classifiers with high F-measure, although this particular application of SVMs has not been rigorously examined. We provide significant and new theoretical results that support this popular heuristic. Specifically, we demonstrate that with the right parameter settings SVMs approximately optimize F-measure in the same way that SVMs have already been known to approximately optimize accuracy. This finding has a number of theoretical and practical implications for using SVMs in F-measure optimization."}
{"_id": "3a3eca7045d42cff3e66ac0395fa5c0f58773e0d", "title": "Decision Support Systems Using Intelligent Paradigms", "text": "Decision-making is a process of choosing among alternative courses of action for solving complicated problems where multi-criteria objectives are involved. The past few years have witnessed a growing recognition of Soft Computing (SC) technologies that underlie the conception, design and utilization of intelligent systems. In this paper, we present different SC paradigms involving an artificial neural network trained using the scaled conjugate gradient algorithm, two different fuzzy inference methods optimised using neural network learning/evolutionary algorithms and regression trees for developing intelligent decision support systems. We demonstrate the efficiency of the different algorithms by developing a decision support system for a Tactical Air Combat Environment (TACE). Some empirical comparisons between the different algorithms are also provided."}
{"_id": "22ab39b0d687837c9119ca6c2818bc14443fc758", "title": "Compact Dual-Band Antenna With Electronic Beam-Steering and Beamforming Capability", "text": "A low-cost compact dual-band antenna that can achieve electronic beam steering in the horizontal plane across a range from 0\u00b0 to 360\u00b0 and adaptive beamforming is presented. Multiple radiation patterns can be generated by the antenna for interference canceling. To reduce the size and cost, the antenna employs the configuration of electronically steerable parasitic array radiators (ESPARs). The parasitic elements are 12 folded monopole antennas, which surround a short monocone antenna. The short monocone antenna serves as the driven element. Each folded monopole is loaded with a p-i-n diode, and by controlling the dc voltages applied to the p-i-n diodes, the antenna can achieve electronic beam steering and adaptive beamforming. To prove the concept, a prototype antenna was developed whose frequency bands are 1.8-2.2 and 2.85-3.15 GHz. The height of the antenna has been reduced to 0.12 \u03bb at 1.8 GHz."}
{"_id": "478b0c2ffe23cdab8c78420e89d6ef023f1e550f", "title": "Effects of bacterial inoculants on the indigenous microbiome and secondary metabolites of chamomile plants", "text": "Plant-associated bacteria fulfill important functions for plant growth and health. However, our knowledge about the impact of bacterial treatments on the host's microbiome and physiology is limited. The present study was conducted to assess the impact of bacterial inoculants on the microbiome of chamomile plants Chamomilla recutita (L.) Rauschert grown in a field under organic management in Egypt. Chamomile seedlings were inoculated with three indigenous Gram-positive strains (Streptomyces subrutilus Wbn2-11, Bacillus subtilis Co1-6, Paenibacillus polymyxa Mc5Re-14) from Egypt and three European Gram-negative strains (Pseudomonas fluorescens L13-6-12, Stenotrophomonas rhizophila P69, Serratia plymuthica 3Re4-18) already known for their beneficial plant-microbe interaction. Molecular fingerprints of 16S rRNA gene as well as real-time PCR analyses did not show statistically significant differences for all applied bacterial antagonists compared to the control. In contrast, a pyrosequencing analysis of the 16S rRNA gene libraries revealed significant differences in the community structure of bacteria between the treatments. These differences could be clearly shown by a shift within the community structure and corresponding beta-diversity indices. Moreover, B. subtilis Co1-6 and P. polymyxa Mc5Re-14 showed an enhancement of the bioactive secondary metabolite apigenin-7-O-glucoside. This indicates a possible new function of bacterial inoculants: to interact with the plant microbiome as well as to influence the plant metabolome."}
{"_id": "b5ea982d6628f5fabb58a7d4f0c262de3a9b34df", "title": "Pitch Elevation in Male-to-female Transgender Persons-the W\u00fcrzburg Approach.", "text": "OBJECTIVES\nThe present study reports objective and subjective voice results of Wendler's glottoplasty modified by Hagen.\n\n\nSTUDY DESIGN\nThis is an outcomes research study.\n\n\nMETHODS\nA total of 21 patients underwent Wendler's glottoplasty modified by Hagen. Parameters in the follow-up session were laryngoscopy, voice range profile, Voice Handicap Index, Life Satisfaction Questionnaire, and a visual analog scale for individual satisfaction with the voice.\n\n\nRESULTS\nThe fundamental frequency was elevated into the typical female fundamental frequency range. Furthermore, an elevation of the lower frequency limit was shown without a reduction of the frequency range. About one third of the population feels affected by the restricted dynamic range. This change of the vocal pitch is seen as part of the voice feminization by some of the patients. The Dysphonia Severity Index as a marker for voice quality was unchanged. Subjective satisfaction with the voice showed a strong correlation with the individual elevation of the pitch.\n\n\nCONCLUSION\nWendler's glottoplasty modified by Hagen is an effective and low-risk method of raising the vocal pitch of male-to-female transgender persons. However, elevated Scores of the Voice Handicap Index indicated that in everyday life, transgender persons continue to feel handicapped because of their voice. Another indicator for the lack of social acceptance and integration is the reduced general life satisfaction in the Life Satisfaction Questionnaire especially in the domain \"friends, acquaintances, relatives.\" Therefore, a better multidisciplinary therapy concept for voice feminization is necessary."}
{"_id": "d5279ad0e320dada70bbb598c627af9cf57b8d8f", "title": "Zero-Bending Piezoelectric Micromachined Ultrasonic Transducer (pMUT) With Enhanced Transmitting Performance", "text": "A piezoelectric micromachined ultrasonic transducer (pMUT) has enabled numerous exciting ultrasonic applications. However, residual stress and initial buckling may worsen the transmitting sensitivity of a pMUT, and also limit its application and commercialization. In this paper, we report a new innovative pMUT with a perfectly flat membrane, i.e., zero-bending membrane. Leveraging on the stress-free AlN thin film, framelike top electrode layout, and integrated vacuum cavity, the initial deflection of suspended membrane is significantly suppressed to only 0.005%. The transmitting sensitivity of the zero-bending pMUT is measured as 123 nm/V at a resonant frequency of 2.21 MHz, which is 450% higher than that of the reference pMUT with slightly non-zero initial deflection. Compared with the simulation results, the measured data of zero-bending pMUT achieve 94.5% of its ideal transmitting sensitivity. It is solid evidence that our approach is an effective and reliable way to overcome the residual stress and the initial buckling issue."}
{"_id": "ae37774ff871575b7799411bf87f42eb52634390", "title": "Sparse reconstruction cost for abnormal event detection", "text": "We propose to detect abnormal events via a sparse reconstruction over the normal bases. Given an over-complete normal basis set (e.g., an image sequence or a collection of local spatio-temporal patches), we introduce the sparse reconstruction cost (SRC) over the normal dictionary to measure the normalness of the testing sample. To condense the size of the dictionary, a novel dictionary selection method is designed with sparsity consistency constraint. By introducing the prior weight of each basis during sparse reconstruction, the proposed SRC is more robust compared to other outlier detection criteria. Our method provides a unified solution to detect both local abnormal events (LAE) and global abnormal events (GAE). We further extend it to support online abnormal event detection by updating the dictionary incrementally. Experiments on three benchmark datasets and the comparison to the state-of-the-art methods validate the advantages of our algorithm."}
{"_id": "2e1c33f55f80cb7e440435deb6f5fdf8bed95f47", "title": "Compressed Representations for Web and Social Graphs 1", "text": "Compressed representations have become effective to store and access large Web and social graphs, in order to support various graph querying and mining tasks. The existing representations exploit various typical patterns in those networks and provide basic navigation support. In this paper we obtain unprecedented results by finding \u201cdense subgraph\u201d patterns and combining them with techniques such as node orderings and compact data structures. On those representations we support out-neighbor and out/in-neighbor queries, as well as mining queries based on dense subgraphs. First, we propose a compression scheme for Web graphs that reduces edges by representing dense subgraphs with \u201cvirtual nodes\u201d; over this scheme we apply node orderings and other compression techniques. With this approach we match the best current compression ratios that support out-neighbor queries (i.e., nodes pointed from a given node), using 1.0\u20131.8 bits per edge (bpe) on large Web graphs, and retrieving each neighbor of a node in 0.6\u20131.0 microseconds (\u03bcsec). When supporting both outand in-neighbor queries, instead, our technique generally offers the best time when using little space. If the reduced graph, instead, is represented with a compact data structure that supports bidirectional navigation, we obtain the most compact Web graph representations (0.9\u20131.5 bpe) that support out/in-neighbor navigation, yet the time per neighbor extracted raises to around 5\u201320 \u03bcsec. We also propose a compact data structure that represents dense subgraphs without using virtual nodes. It allows us to recover out/in-neighbors and answer other more complex queries on the dense subgraphs identified. This structure is not competitive on Web graphs, but on social networks it achieves 4\u201313 bpe and 8\u201312 \u03bcsec per out/in-neighbor retrieved, which improves upon all existing representations."}
{"_id": "b994b872aa88bec1f92d441263892939a21bfbff", "title": "Probabilistic models in noisy environments : and their application to a visual prosthesis for the blind/", "text": "In recent years, probabilistic models have become fundamental techniques in machine learning. They are successfully applied in various engineering problems, such as robotics, biometrics, brain-computer interfaces or artificial vision, and will gain in importance in the near future. This work deals with the difficult, but common situation where the data is, either very noisy, or scarce compared to the complexity of the process to model. We focus on latent variable models, which can be formalized as probabilistic graphical models and learned by the expectation-maximization algorithm or its variants (e.g., variational Bayes). After having carefully studied a non-exhaustive list of multivariate kernel density estimators, we established that in most applications locally adaptive estimators should be preferred. Unfortunately, these methods are usually sensitive to outliers and have often too many parameters to set. Therefore, we focus on finite mixture models, which do not suffer from these drawbacks provided some structural modifications. Two questions are central in this dissertation: (i) how to make mixture models robust to noise, i.e. deal efficiently with outliers, and (ii) how to exploit side-channel information, i.e. additional information intrinsic to the data. In order to tackle the first question, we extend the training algorithms of the popular Gaussian mixture models to the Student-t mixture models. The Student-t distribution can be viewed as a heavy-tailed alternative to the Gaussian distribution, the robustness being tuned by an extra parameter, the degrees of freedom. Furthermore, we introduce a new variational Bayesian algorithm for learning Bayesian Student-t mixture models. This algorithm leads to very robust density estimators and clustering. To address the second question, we introduce manifold constrained mixture models. This new technique exploits the information that the data are living on a manifold of lower dimension than the dimension of the feature space. Taking the implicit geometrical data arrangement into account results in better generalization on unseen data. Finally, we show that the latent variable framework used for learning mixture models can be extended to construct probabilistic regularization networks, such as the Relevance Vector Machines. Subsequently, we make use of these methods in the context of an optic nerve visual prosthesis to restore partial vision to blind people of whom the optic nerve is still functional. Although visual sensations"}
{"_id": "33e6ba0eaebf15334d08eba275c626338ef6068f", "title": "Dual-Band Wide-Angle Scanning Planar Phased Array in X/Ku-Bands", "text": "A novel planar dual-band phased array, operational in the X/Ku-bands and with wide-angle scanning capability is presented. The design, development and experimental demonstration are described. A new single-layer crossed L-bar microstrip antenna is used for the array design. The antenna has low-profile architecture, measuring only 0.33\u03bb\u00d70.33\u03bb, at the low frequency band of operation, with flexible resonance tuning capability offered by the use of a plate-through-hole and field-matching ring arrangement. A 49-element planar (7 \u00d7 7) array demonstrator has been built and its performance validated, exhibiting good agreement with full-wave simulations. The dual-band array supports a large frequency ratio of nearly 1.8:1, and also maintains good sub-band bandwidths. Wide-angle scanning up to a maximum of 60 \u00b0 and 50 \u00b0 are achieved at the low and high frequency bands of operation, respectively."}
{"_id": "bfa9386e2036fb3c4925455c9cc8d7a6f38ac8a2", "title": "Performance Optimization of Aho-Corasick Algorithm on a GPU", "text": "Aho-Corasick (AC) algorithm is a multiple patternsmatching algorithm commonly used for applications such as computer and network security, bioinformatics, image processing, among others. These applications are computationally demanding, thus optimizing performance for AC algorithm is crucial. In this paper, we present a performance optimization strategy for the AC algorithm on a Graphic Processing Unit(GPU). Our strategy efficiently utilizes the high degree of the on chip parallelism and the complicated memory hierarchy of the GPU so that the aggregate performance (or throughput) for the AC algorithm can be optimized. The strategy significantly cuts down the effective memory access latencies and efficiently utilizes the memory bandwidth. Also, it maximizes the effects of the multithreading capability of the GPU through optimal thread scheduling. Experimental results on Nvidia GeForce GTX 285 GPU show that our approach delivers up to 127 Gbps throughput performance and 222-times speedup compared with a serial version running on single core of 2.2Ghz Core2Duo Intel processor."}
{"_id": "ef9f29673efd78143410ed4e463f6f6c168d64a5", "title": "Robust acoustic echo cancellation in the short-time fourier transform domain using adaptive crossband filters", "text": "This paper presents a robust acoustic echo cancellation (AEC) system in the short-time Fourier transform (STFT) domain using adaptive crossband filters. The STFT-domain AEC allows for a simpler system structure compared to the traditional frequency-domain AEC, which normally requires several applications of the discrete Fourier transform (DFT) and the inverse DFT, while the robust AEC (RAEC) allows for continuous and stable filter updates during double talk without freezing the adaptive filter. The RAEC and the STFT-domain AEC have been investigated in the past in separate studies. In this work we propose a novel algorithm that combines the advantages of both approaches for robust update of the adaptive crossband filters even during double talk. Experimental results confirm the benefit of incorporating the robustness constraint for the adaptive crossband filters and show improved performance in terms of the echo reduction and the predicted sound quality."}
{"_id": "72af45375b1a0e29e9b635e4334e9b9ba9d15802", "title": "Sparsity and smoothness via the fused lasso", "text": "The lasso penalizes a least squares regression by the sum of the absolute values (L1-norm) of the coefficients. The form of this penalty encourages sparse solutions (with many coefficients equal to 0).We propose the \u2018fused lasso\u2019, a generalization that is designed for problems with features that can be ordered in some meaningful way. The fused lasso penalizes the L1-norm of both the coefficients and their successive differences. Thus it encourages sparsity of the coefficients and also sparsity of their differences\u2014i.e. local constancy of the coefficient profile.The fused lasso is especially useful when the number of features p is much greater than N , the sample size.The technique is also extended to the \u2018hinge\u2019 loss function that underlies the support vector classifier.We illustrate the methods on examples from protein mass spectroscopy and gene expression data."}
{"_id": "6db7d4109a61705f658d315787061f6fa8af3595", "title": "Efficient and Scalable IoT Service Delivery on Cloud", "text": "Nowadays, IoT services are typically delivered as physically isolated vertical solutions, in which all system components ranging from sensory devices to applications are customized and tightly coupled for the requirements of each specific project. The efficiency and scalability of such service delivery model are intrinsically limited, posing significant challenges to IoT solution providers. Therefore, we propose a novel PaaS framework that provides essential platform services for IoT solution providers to efficiently deliver and continuously extend their services. This paper first introduces the IoT PaaS architecture, on which IoT solutions can be delivered as virtual verticals by leveraging computing resources and middleware services on cloud. Then we present the detailed mechanism and implementation of domain mediation, which helps solution providers to efficiently provide domain-specific control applications. The proposed approaches are demonstrated through the implementation of a domain mediator for building management and two use cases using the mediator."}
{"_id": "3d3d51142c4245482729e8d6faf10ac2051c8dfe", "title": "Particle Systems - a Technique for Modeling a Class of Fuzzy Objects", "text": "This paper introduces particle systems--a method for modeling fuzzy objects such as fire, clouds, and water. Particle systems model an object as a cloud of primitive particles that define its volume. Over a period of time, particles are generated into the system, move and change form within the system, and die from the system. The resulting model is able to represent motion, changes of form, and dynamics that are not possible with classical surface-based representations. The particles can easily be motion blurred, and therefore do not exhibit temporal aliasing or strobing. Stochastic processes are used to generate and control the many particles within a particle system. The application of particle systems to the wall of fire element from the Genesis Demo sequence of the film Star Trek II: The Wrath of Khan [10] is presented."}
{"_id": "73e52c5005069100a3613fdf57e9a90feb03a9e2", "title": "Solid Modeling: A Historical Summary and Contemporary Assessment", "text": "A new generation of industrial geometry systems is emerging based on the technology of the 1970's. The generations of the eighties and nineties will require more research."}
{"_id": "e04d7772b91a83a901408eb0876bbb7814b1d4b5", "title": "An image synthesizer", "text": "We introduce the concept of a Pixel Stream Editor. This forms the basis for an interactive synthesizer for designing highly realistic Computer Generated Imagery. The designer works in an interactive Very High Level programming environment which provides a very fast concept/implement/view iteration cycle.Naturalistic visual complexity is built up by composition of non-linear functions, as opposed to the more conventional texture mapping or growth model algorithms. Powerful primitives are included for creating controlled stochastic effects. We introduce the concept of \"solid texture\" to the field of CGI.We have used this system to create very convincing representations of clouds, fire, water, stars, marble, wood, rock, soap films and crystal. The algorithms created with this paradigm are generally extremely fast, highly realistic, and asynchronously parallelizable at the pixel level."}
{"_id": "c707938422b60bf827ec161872641468ec1ffe00", "title": "The Value Function Polytope in Reinforcement Learning", "text": "We establish geometric and topological properties of the space of value functions in finite stateaction Markov decision processes. Our main contribution is the characterization of the nature of its shape: a general polytope (Aigner et al., 2010). To demonstrate this result, we exhibit several properties of the structural relationship between policies and value functions including the line theorem, which shows that the value functions of policies constrained on all but one state describe a line segment. Finally, we use this novel perspective to introduce visualizations to enhance the understanding of the dynamics of reinforcement learning algorithms."}
{"_id": "16de86b9fcedb7ee7ff5c4d2f5c28ef8e411d890", "title": "Visualizing Translation Variation: Shakespeare's Othello", "text": "Recognized as great works of world literature, Shakespeare\u2019s poems and plays have been translated into dozens of languages for over 300 years. Also, there are many re-translations into the same language, for example, there are more than 60 translations of Othello into German. Every translation is a different interpretation of the play. These large quantities of translations reflect changing culture and express individual thought by the authors. They demonstrate wide connections between different world regions today, and reveal a retrospective view of their cultural, intercultural, and linguistic histories. Researchers from Arts and Humanities at Swansea University are collecting a large number of translations of William Shakespeare\u2019s Othello. In this paper, we have developed an interactive visualization system to present, analyze and explore the variations among these different translations. Our system is composed of two parts: the structure-aware Treemap for document selection and meta data analysis, and Focus + Context parallel coordinates for in-depth document comparison and exploration. In particular, we want to learn more about which content varies highly with each translation, and which content remains stable. We also want to form hypotheses as to the implications behind these variations. Our visualization is evaluated by the domain experts from Arts and Humanities."}
{"_id": "7957c8194364c32a29c982442c9644ecf3d33362", "title": "A Concept Analysis of Systems Thinking.", "text": "PURPOSE\nThis concept analysis, written by the National Quality and Safety Education for Nurses (QSEN) RN-BSN Task Force, defines systems thinking in relation to healthcare delivery.\n\n\nMETHODS\nA review of the literature was conducted using five databases with the keywords \"systems thinking\" as well as \"nursing education,\" \"nursing curriculum,\" \"online,\" \"capstone,\" \"practicum,\" \"RN-BSN/RN to BSN,\" \"healthcare organizations,\" \"hospitals,\" and \"clinical agencies.\" Only articles that focused on systems thinking in health care were used. The authors identified defining attributes, antecedents, consequences, and empirical referents of systems thinking.\n\n\nFINDINGS\nSystems thinking was defined as a process applied to individuals, teams, and organizations to impact cause and effect where solutions to complex problems are accomplished through collaborative effort according to personal ability with respect to improving components and the greater whole. Four primary attributes characterized systems thinking: dynamic system, holistic perspective, pattern identification, and transformation.\n\n\nCONCLUSION\nUsing the platform provided in this concept analysis, interprofessional practice has the ability to embrace planned efforts to improve critically needed quality and safety initiatives across patients' lifespans and all healthcare settings."}
{"_id": "4e5a5939e1f0a2706dc4615fc6ea9d977ad8a51d", "title": "Democracy and Education Spending in Africa", "text": "While it is widely believed that electoral competition influences public spending decisions, there has been relatively little effort to examine how recent democratization in the developing world has resulted in changes in basic service provision. There have been even fewer attempts to investigate whether democracy matters for public spending in the poorest developing countries, where \u201cweak institutions\u201d may mean that the formal adoption of electoral competition has little effect on policy. In this article I confront these questions directly, asking whether the shift to multiparty competition in African countries has resulted in increased spending on primary education. I develop an argument, illustrated with a game-theoretic model, which suggests that the need to obtain an electoral majority may have prompted African governments to spend more on education and to prioritize primary schools over universities within the education budget. I test three propositions from the model using panel data on electoral competition and education spending in African countries. I find clear evidence that democratically elected African governments have spent more on primary education, while spending on universities appears unaffected by democratization."}
{"_id": "86854374c13516a8ad0dc28ffd9cd4be2bca9bfc", "title": "Unscented Kalman Filtering on Riemannian Manifolds", "text": "In recent years there has been a growing interest in problems, where either the observed data or hidden state variables are confined to a known Riemannian manifold. In sequential data analysis this interest has also been growing, but rather crude algorithms have been applied: either Monte Carlo filters or brute-force discretisations. These approaches scale poorly and clearly show a missing gap: no generic analogues to Kalman filters are currently available in non-Euclidean domains. In this paper, we remedy this issue by first generalising the unscented transform and then the unscented Kalman filter to Riemannian manifolds. As the Kalman filter can be viewed as an optimisation algorithm akin to the Gauss-Newton method, our algorithm also provides a general-purpose optimisation framework on manifolds. We illustrate the suggested method on synthetic data to study robustness and convergence, on a region tracking problem using covariance features, an articulated tracking problem, a mean value optimisation and a pose optimisation problem."}
{"_id": "f24df001ba37746d1d28919f38a7704db5d178c3", "title": "Active Learning Algorithms for Multi-label Data", "text": "The iterative supervised learning setting, in which learning algorithms can actively query an oracle for labels, e.g. a human annotator that understands the nature of the problem, is called active learning. As the learner is allowed to interactively choose the data from which it learns, it is expected that the learner would perform better with less training. The active learning approach is appropriate to machine learning applications where training labels are costly to obtain but unlabeled data is abundant. Although active learning has been widely considered for single-label learning, this is not the case for multi-label learning, in which objects can have more than one class label and a multi-label learner is trained to assign multiple labels simultaneously to an object. There are different scenarios to query the annotator. This work focuses on the scenario in which the evaluation of unlabeled data is taken into account to select the object to be labeled. In this scenario, several multilabel active learning algorithms were identified in the literature. These algorithms were implemented in a common framework and experimentally evaluated in two multi-label datasets which have different properties. The influence of the properties of the datasets in the results obtained by the multi-label active learning algoritm is highlighted."}
{"_id": "a075a513b2b1e8dbf9b5d1703a401e8084f9df9c", "title": "Microstrip antenna phased array with electromagnetic bandgap substrate", "text": "Uniplanar compact electromagnetic bandgap (UC-EBG) substrate has been proven to be an effective measure to reduce surface wave excitation in printed antenna geometries. This paper investigates the performance of a microstrip antenna phased array embedded in an UC-EBG substrate. The results show a reduction in mutual coupling between elements and provide a possible solution to the \"blind spots\" problem in phased array applications with printed elements. A novel and efficient UC-EBG array configuration is proposed. A probe fed patch antenna phased array of 7/spl times/5 elements on a high dielectric constant substrate was designed, built and tested. Simulation and measurement results show improvement in the active return loss and active pattern of the array center element. The tradeoffs used to obtain optimum performance are discussed."}
{"_id": "dc667bd6d0b2bbe6ef0330fe27d9ec24c13f3da8", "title": "Automatic Solution of Jigsaw Puzzles", "text": "We present a method for automatically solving apictorial jigsaw puzzles that is based on an extension of the method of differential invariant signatures. Our algorithms are designed to solve challenging puzzles, without having to impose any restrictive assumptions on the shape of the puzzle, the shapes of the individual pieces, or their intrinsic arrangement. As a demonstration, the method was successfully used to solve two commercially available puzzles. Finally we perform some preliminary investigations into scalability of the algorithm for even larger puzzles."}
{"_id": "2def77a5fc0d11228b5050daf28a1ddb546c76aa", "title": "A parameterized cell design for high-Q, variable width and spacing spiral inductors", "text": "The on-chip planar spiral inductors having variable width (W) and spacing (S) across their turns are known to exhibit higher quality factors (Q). In this paper, we present an efficient parameterized cell (pcell) design in cadence using SKILL scripts for automatic layout generation of these complex, high-Q, variable W&S spiral inductors comprising of single ended and symmetric structures with rectangular, hexagonal, octagonal and circular spirals. Electromagnetic simulations are performed on the inductor layouts generated using the developed pcells. The constant W&S and variable W&S spiral inductor structures are fabricated in a 0.18 \u03bcm silicon on insulator process. Measurements show ~25% improvement in the quality factor of variable W%S spiral inductors compared to their constant W&S counterparts and also validates the proper operation of the developed inductor parameterized cells. The presented variable W&S inductor pcell significantly reduces the layout design time of RF circuit designers and also helps in the design automation of these complex inductor structures to boost their own performance and the RF circuits as well."}
{"_id": "da71649cb6930a20d24447ed516b5656958c59cc", "title": "A Novel IoT Authorization Architecture on Hyperledger Fabric With Optimal Consensus Using Genetic Algorithm", "text": "This paper proposes a distributed authorization architecture for Internet of Things adopting a hyperledger fabric framework. This paper also applies Genetic Algorithm to enhance the consensus component, traditionally used for Kafka, in order to determine the best configuration over different parameters, such as input transactions and their success rates. The experimental results confirm our superior performance in that the transaction transfer rate of our optimization, called GA Kafka, outperforms the traditional Kafka by 69.43%, and 89.97% for transaction transfer success rate."}
{"_id": "6b23d614c79de546852a5d643845d8a548d4499d", "title": "Perspectives on usability guidelines for smartphone applications: An empirical investigation and systematic literature review", "text": "Context: Several usability guidelines have been proposed to improve the usability of smartphone apps. These guidelines can be classified into three disjoint sets: platform specific guidelines, genre specific guidelines, and generic guidelines. However, smartphone applications are usually developed for multiple platforms and targeted for a variety of users. Hence the usefulness of existing guidelines is severally limited. Objective: This study aims to develop a comprehensive list of usability guidelines suitable for multiple platforms and genres of smartphone applications."}
{"_id": "ea371dd66e63157588c2e16525c5669835bc21d1", "title": "For a social network analysis of computer networks: a sociological perspective on collaborative work and virtual community", "text": "When computer networks link people as well as machines, they become social networks. Social network analysis provides a useful approach to moving beyond the concept of \u201cgroup\u201d in studying virtual communities and computer supported cooperative work and telework. Such computer supported social networks (CSSNS) sustain strong, intermediate and weak ties that provide information and social support in both specialized and broadly-based relationships. They foster informal workplace communities that are usually partial and narrowly-focused, although some do become encompassing and broadly-based. CSSNS connect workers within and between organizations who often are physically dispersed. The nature of the medium both constrains and facilitates management control. Although many relationships function offline as well as online, CSSNS have developed their own norms and structures. They have strong societal implications, fostering situations that combine global connectivity, the fragmentation of solidarities and the de-emphasis of local organization, and the importance of home bases."}
{"_id": "529887205fa3840b8fd25662103caac5b3ebf543", "title": "World Knowledge as Indirect Supervision for Document Clustering", "text": "One of the key obstacles in making learning protocols realistic in applications is the need to supervise them, a costly process that often requires hiring domain experts. We consider the framework to use the world knowledge as indirect supervision. World knowledge is general-purpose knowledge, which is not designed for any specific domain. Then, the key challenges are how to adapt the world knowledge to domains and how to represent it for learning. In this article, we provide an example of using world knowledge for domain-dependent document clustering. We provide three ways to specify the world knowledge to domains by resolving the ambiguity of the entities and their types, and represent the data with world knowledge as a heterogeneous information network. Then, we propose a clustering algorithm that can cluster multiple types and incorporate the sub-type information as constraints. In the experiments, we use two existing knowledge bases as our sources of world knowledge. One is Freebase, which is collaboratively collected knowledge about entities and their organizations. The other is YAGO2, a knowledge base automatically extracted from Wikipedia and maps knowledge to the linguistic knowledge base, WordNet. Experimental results on two text benchmark datasets (20newsgroups and RCV1) show that incorporating world knowledge as indirect supervision can significantly outperform the state-of-the-art clustering algorithms as well as clustering algorithms enhanced with world knowledge features.\n A preliminary version of this work appeared in the proceedings of KDD 2015 [Wang et al. 2015a]. This journal version has made several major improvements. First, we have proposed a new and general learning framework for machine learning with world knowledge as indirect supervision, where document clustering is a special case in the original paper. Second, in order to make our unsupervised semantic parsing method more understandable, we add several real cases from the original sentences to the resulting logic forms with all the necessary information. Third, we add details of the three semantic filtering methods and conduct deep analysis of the three semantic filters, by using case studies to show why the conceptualization-based semantic filter can produce more accurate indirect supervision. Finally, in addition to the experiment on 20 newsgroup data and Freebase, we have extended the experiments on clustering results by using all the combinations of text (20 newsgroup, MCAT, CCAT, ECAT) and world knowledge sources (Freebase, YAGO2)."}
{"_id": "bdb2a4f197ba6da046ac8c41789fb53b1ba06c38", "title": "Exploratory Innovation, Exploitative Innovation, and Performance: Effects of Organizational Antecedents and Environmental Moderators", "text": "AND"}
{"_id": "4bcb3f436ba4b46648414e080fd3c9210040e95f", "title": "Unsupervised Multi-Target Domain Adaptation: An Information Theoretic Approach", "text": "Unsupervised domain adaptation (uDA) models focus on pairwise adaptation settings where there is a single, labeled, source and a single target domain. However, in many real-world settings one seeks to adapt to multiple, but somewhat similar, target domains. Applying pairwise adaptation approaches to this setting may be suboptimal, as they fail to leverage shared information among multiple domains. In this work we propose an information theoretic approach for domain adaptation in the novel context of multiple target domains with unlabeled instances and one source domain with labeled instances. Our model aims to find a shared latent space common to all domains, while simultaneously accounting for the remaining private, domainspecific factors. Disentanglement of shared and private information is accomplished using a unified information-theoretic approach, which also serves to establish a stronger link between the latent representations and the observed data. The resulting model, accompanied by an efficient optimization algorithm, allows simultaneous adaptation from a single source to multiple target domains. We test our approach on three challenging publicly-available datasets, showing that it outperforms several popular domain adaptation methods."}
{"_id": "a70862a82678df0fd711789b30a22cf0d589e766", "title": "An application of neural net chips: handwritten digit recognition", "text": "A general-purpose, fully interconnected neural-net chip was used to perform computationally intensive tasks for handwritten digit recognition. The chip has nearly 3000 programmable connections, which can be set for template matching. The templates can be reprogrammed as needed during the recognition sequence. The recognition process proceeds in four major steps. First, the image is captured using a TV camera and a digital framegrab. This image is converted, using a digital computer, to either black or white pixels and scaled to fill a 16*16-pixel frame. Next, using the neural-net chip, the image is skeletonized, i.e. the image is thinned to a backbone one pixel wide. Then, the chip is programmed, and a feature map is created by template-matching stored primitive patterns on the chip with regions on the skeletonized image. Finally, recognition, based on the feature map, is achieved using any one of a variety of statistical and heuristic techniques on a digital computer. Best scores range between 90% and 99% correct classification, depending on the quality of the original handwritten digits.<>"}
{"_id": "5b4a044b81d68a5ef27d6ea0fdf7d7d5d4c00d04", "title": "The biology of interleukin-6.", "text": "B CELLS ARE the only eukaryotic cells that are able to produce antibody molecules. Their growth and differentiation into antibody producing cells require the presence of T cells and macrophages; the function of these cells was found to be replaced by soluble factors. 3 In the early i980s it was shown that at least two different kinds of factors were required in the regulation of B cell response, one for growth of activated B cells, B-cell growth factor (BCGF), and the other for antibody induction in B cells, B-cell differentiation factor (BCDF). 6 Since then, a variety of factors regulating the B cell response have been reported in the human and murine systems. Finally, in 1986 the cDNAs for three B-cell stimulatory factors have been cloned; interleukin-4 (IL-4) (BCGF1 /BSF1 ) for the early activation of resting B cells,7\u20198 IL-5 (BCGFII) for the growth ofactivated B cells,9 and IL-6 (BCDF/BSF2) for the final differentiation of B cells into antibody producing cells\u2019#{176} (Fig 1). Human IL-6 (BSF2) was originally identified as a factor in the culture supernatants of mitogen or antigen-stimulated peripheral mononuclear cells, which induced immunoglobulin production in Epstein Barr virus (EBV) transformed B-cell lines or in Staphylococcus aureus Cowan 1 (SAC) stimulated normal B cells.t1 This molecule was found to be separable from other factors, such as IL-2 and BCGFs, and the establishment of human T cell hybridoma generating BSF2 activity confirmed that this is a distinct molecule from other cytokines.6 BSF2 was purified to homogeneity from the culture supernatant of a human T-cell leukemia virus type 1 (HTLV1 ) transformed T-cell line and its partial N-terminal amino acid sequence was determined.t2 Based on these findings, the cDNA encoding human BSF2 was cloned. #{176} Approximately at the same time, the molecular cloning and the nucleotide sequences of the molecules termed interferon / 2 (IFN 32)\u20193 and 26 Kd protein were reportedt4 and the results revealed that BSF2, IFN 32, and 26 Kd protein were identical. In I 980, an inducible mRNA species of about 1 3S encoding for a novel human fibroblast-type interferon (IFN), named 1FNf32 was reported.\u20195 The isolated cDNA clone for such an induced mRNA was trancribed in vitro into a protein of 26 Kd. One group detected an antiviral activity that was neutralized with anti-IFNf 2 and thus called this molecule IFNj32)5 On the other hand, another group could not detect any antiviral activity in this protein\u20196 and its interferon activity was controversial until recombinant molecules became available. In 1987 recombinant IL-6 (r IL-6) was shown to have no IFN activity and to have antigenically and functionally no relations with IFNfl.\u20197 Growth factors for plasmacytomas/myelomas have been reported by several investigators.\u20198\u201920 In 1 986, N-terminal amino acid sequence of a human cytokine that showed hybridoma/plasmacytoma growth factor activity was determined and the result again showed that the factor was identical with BSF2/IFN/32/26 Kd protein.2\u2019 Subsequently, the cDNA cloning of murine hybridoma/plasmacytoma growth factor was completed and the sequence indicated that it was the murine homologue of IL-6/BSF2.2223 Therefore, all the results demonstrated that IL-6 has the growth activity in plasmacytoma/myeloma cells. The other major activty of IL-6 is induction of acute phase proteins in hepatocytes. 
The studies with IL-6 and anti-IL-6 antibody carried out by Gauldie et a124 and Andus et a125 clearly demonstrated that IL-6 functioned as a hepatocyte stimulating factor (HSF) and induced the production of major acute phase proteins. As described, the molecular cloning of the cDNA of IL-6 indicated that the function of IL-6 is not restricted to B lineage cells but shows a wide variety of biological activities on various tissues and cells. Table 1 summarizes the activities exerted by molecules identified to be identical to IL-6."}
{"_id": "9e64bb4c144b76d5e8eefe6cb24df2254bccd2f9", "title": "FontMatcher: Font Image Paring for Harmonious Digital Graphic Design", "text": "One of the important aspects in graphic design is choosing the font of the caption that matches aesthetically the associated image. To obtain a good match, users would exhaustively examine a long font list requiring them a substantial effort. This paper presents FontMatcher, which supports users to design digital graphic works harmoniously pairing fonts with an image. The system provides three features, recommendation, explaination and feedback. If a warm feeling image is given as input, the system recommends warm feeling fonts, and then explains what is the distinguishing features of the recommendation, e.g. a cursive shape. Users can also provide feedback to find fonts which correspond to their intention. Our evaluation results show that the recommended fonts scored better than selected fonts by novices and provides competing results with the ones chosen by experienced graphic designers. The system also provides explanations that help increasing the reliability of the recommended results."}
{"_id": "89c1c6c952e1e962677026269c094b8c0a966ee0", "title": "A Low-Profile Third-Order Bandpass Frequency Selective Surface", "text": "We demonstrate a new class of low-profile frequency selective surfaces (FSS) with an overall thickness of lambda/24 and a third-order bandpass frequency response. The proposed FSS is a three layer structure composed of three metal layers, separated by two electrically thin dielectric substrates. Each layer is a two-dimensional periodic structure with sub-wavelength unit cell dimensions and periodicity. The unit cell of the proposed FSS is composed of a combination of resonant and non-resonant elements. It is shown that this arrangement acts as a spatial bandpass filter with a third-order bandpass response. However, unlike traditional third-order bandpass FSSs, which are usually obtained by cascading three identical first-order bandpass FSSs a quarter wavelength apart from one another and have thicknesses in the order of lambda/2 , the proposed structure has an extremely low profile and an overall thickness of about lambda/24 , making it an attractive choice for conformal FSS applications. As a result of the miniaturized unit cells and the extremely small overall thickness of the structure, the proposed FSS has a reduced sensitivity to the angle of incidence of the EM wave compared to traditional third-order frequency selective surfaces. The principles of operation along with guidelines for the design of the proposed FSS are presented in this paper. A prototype of the proposed third-order bandpass FSS is also fabricated and tested using a free space measurement system at C band."}
{"_id": "d59814a5014f220c1bd43fef6393b1250ecdc58b", "title": "A Coplanar Reconfigurable Folded Slot Antenna Without Bias Network for WLAN Applications", "text": "The design of a reconfigurable single folded slot antenna is shown. Metal strips are used in the slot to manipulate the ground size around the slot, which yields to changing the slot's perimeter and thus changing the resonant frequency of the antenna. The design of a single folded slot antenna is first illustrated. Later, the design is modified to be reconfigurable. The resonant frequencies for the reconfigurable design are chosen to be applicable with the WLAN applications. The antenna design, simulation, and measurements are described. The simulated results both for the return loss and maximum gain agree with the measurements. The antenna has similar radiation patterns in both bands. The advantage of this design is that the length of the bias lines does not affect the antenna performance, thus making its design, feeding, and matching an extremely simple and low-cost procedure for antenna designers."}
{"_id": "ef348f8538ae7ce39004a5ce961c7ff747dfba01", "title": "An electromechanical interpretation of electrowetting", "text": "Electrowetting on dielectric-coated electrodes involves two independently observable phenomena: (i) the well-known voltage dependence of the apparent contact angle and (ii) a central electromechanical force that can be exploited to move and manipulate small liquid volumes on a substrate. The electromechanical force does not depend upon field-mediated changes of the contact angle; it is operative even if the liquid surface is constrained. Forces on liquid volumes may be determined using capacitance or the Maxwell stress tensor with no reference made to liquid surface profiles. According to this interpretation, a nonlinear mechanism manifesting a voltage threshold is responsible for both contact angle saturation and the observed clamping of the electromechanical force."}
{"_id": "cdb5e5d28a9bd6a04f969d6465110f875e706e71", "title": "Java-based graphical user interface for MRUI, a software package for quantitation of in vivo/medical magnetic resonance spectroscopy signals", "text": "This article describes a Java-based graphical user interface for the magnetic resonance user interface (MRUI) quantitation package. This package allows MR spectroscopists to easily perform time-domain analysis of in vivo/medical MR spectroscopy data. We have found that the Java programming language is very well suited for developing highly interactive graphical software applications such as the MRUI system. We also have established that MR quantitation algorithms, programmed in the past in other languages, can easily be embedded into the Java-based MRUI by using the Java native interface (JNI)."}
{"_id": "16a0fde5a8ab5591a9b2985f60a04fdf50a18dc4", "title": "Improving Gait Cryptosystem Security Using Gray Code Quantization and Linear Discriminant Analysis", "text": "Gait has been considered as an efficient biometric trait for user authentication. Although there are some studies that address the task of securing gait templates/models in gait-based authentication systems, they do not take into account the low discriminability and high variation of gait data which significantly affects the security and practicality of the proposed systems. In this paper, we focus on addressing the aforementioned deficiencies in inertial-sensor based gait cryptosystem. Specifically, we leverage Linear Discrimination Analysis to enhance the discrimination of gait templates, and Gray code quantization to extract high discriminative and stable binary template. The experimental results on 38 different users showed that our proposed method significantly improve the performance and security of the gait cryptosystem. In particular, we achieved the False Acceptant Rate of 6\u00d710\u22125% (i.e., 1 fail in 16983 trials) and False Rejection Rate of 9.2% with 148-bit security."}
{"_id": "34e5eeb48da470e8b91693cc5d322959d79f1470", "title": "Optimal Reconstruction with a Small Number of Views", "text": "Estimating positions of world points from features observed in images is a key problem in 3D reconstruction, image mosaicking, simultaneous localization and mapping and structure from motion. We consider a special instance in which there is a dominant ground plane G viewed from a parallel viewing plane S above it. Such instances commonly arise, for example, in aerial photography. Consider a world point g \u2208 G and its worst case reconstruction uncertainty \u03b5(g,S) obtained by merging all possible views of g chosen from S. We first show that one can pick two views sp and sq such that the uncertainty \u03b5(g, {sp, sq}) obtained using only these two views is almost as good as (i.e. within a small constant factor of) \u03b5(g,S). Next, we extend the result to the entire ground plane G and show that one can pick a small subset of S \u2032 \u2286 S (which grows only linearly with the area of G) and still obtain a constant factor approximation, for every point g \u2208 G, to the minimum worst case estimate obtained by merging all views in S. Our results provide a view selection mechanism with provable performance guarantees which can drastically increase the speed of scene reconstruction algorithms. In addition to theoretical results, we demonstrate their effectiveness in an application where aerial imagery is used for monitoring farms and orchards."}
{"_id": "aae6a40b8fd8b6ab3ff136fcbd12e65d0507a2b9", "title": "Viewpoint - Why \"open source\" misses the point of free software", "text": "Decoding the important differences in terminology, underlying philosophy, and value systems between two similar categories of software."}
{"_id": "6a9ba089303fd7205844cefdafadba2b433dbe29", "title": "Fast Ego-motion Estimation with Multi-rate Fusion of Inertial and Vision", "text": "This paper presents a tracking system for ego-motion estimation which fuses vision and inertial measurements using EKF and UKF (Extended and Unscented Kalman Filters), where a comparison of their performance has been done. It also considers the multi-rate nature of the sensors: inertial sensing is sampled at a fast sampling frequency while the sampling frequency of vision is lower. the proposed approach uses a constant linear acceleration model and constant angular velocity model based on quaternions, which yields a non-linear model for states and a linear model in measurement equations. Results show that a significant improvement is obtained on the estimation when fusing both measurements with respect to just vision or just inertial measurements. It is also shown that the proposed system can estimate fast-motions even when vision system fails. Moreover, a study of the influence of the noise covariance is also performed, which aims to select their appropriate values at the tuning process. The setup is an end-effector mounted camera, which allow us to pre-define basic rotational and translational motions for validating results. KEY WORDS\u2014vision and inertial, multi-rate systems, sensor fusion, tracking"}
{"_id": "09aaf8b51ffea0816f7b62c448241de474ae65f7", "title": "Weaknesses in the Key Scheduling Algorithm of RC4", "text": "In this paper we present several weaknesses in the key scheduling algorithm of RC4, and describe their cryptanalytic signi cance. We identify a large number of weak keys, in which knowledge of a small number of key bits su ces to determine many state and output bits with non-negligible probability. We use these weak keys to construct new distinguishers for RC4, and to mount related key attacks with practical complexities. Finally, we show that RC4 is completely insecure in a common mode of operation which is used in the widely deployed Wired Equivalent Privacy protocol (WEP, which is part of the 802.11 standard), in which a xed secret key is concatenated with known IV modi ers in order to encrypt di erent messages. Our new passive ciphertext-only attack on this mode can recover an arbitrarily long key in a negligible amount of time which grows only linearly with its size, both for 24 and 128 bit IV modi ers."}
{"_id": "2eb4eb0df6ce8c76666393438b28e3b17e86e56f", "title": "Conceptual modeling and development of an intelligent agent-assisted decision support system for anti-money laundering", "text": "Criminal elements in today\u2019s technology-driven society are using every means available at their disposal to launder the proceeds from their illegal activities. In response, international anti-money laundering (AML) efforts are being made. The events of September 11, 2001, highlighted the need for more sophisticated AML and anti-terrorist financing programs across the industry and nation. In the wake of this, regulators are focusing on the role that technology can play in compliance with laws and ultimately in law enforcement. Banks will have to employ or enhance AML tools and technology to satisfy rising regulatory expectations. While many AML solutions have been in place for some time within the banks, they are faced with the challenge of adapting to the ever-changing risks and methods related to money laundering. In order to provide support for AML decisions, we have formulated an AML conceptual model by following Simon [Simon, H. A. (1977). The new science of management decision. Englewood Cliffs, NJ: Prentice-Hall] decision-making process model. Based on this model, a novel and open multi-agent AML system prototype has been designed and developed. Intelligent agents with their properties of autonomy, reactivity, and proactivity are well suited for dynamic, ill-structured, and complex ML prevention controls. The advanced architecture is able to provide more adaptive, intelligent, and flexible solution for AML. This paper is the first attempt at intelligent agent financial application in the AML domain, with a decision-making/problem-solving process model, an innovative agent-based architecture, and a prototype system. 2007 Elsevier Ltd. All rights reserved."}
{"_id": "86e04c917a516f40e95e3f27d54729d4b3c15aa3", "title": "Latent Skill Embedding for Personalized Lesson Sequence Recommendation", "text": "Students in online courses generate large amounts of data that can be used to personalize the learning process and improve quality of education. In this paper, we present the Latent Skill Embedding (LSE), a probabilistic model of students and educational content that can be used to recommend personalized sequences of lessons with the goal of helping students prepare for specific assessments. Akin to collaborative filtering for recommender systems, the algorithm does not require students or content to be described by features, but it learns a representation using access traces. We formulate this problem as a regularized maximum-likelihood embedding of students, lessons, and assessments from historical student-content interactions. An empirical evaluation on large-scale data from Knewton, an adaptive learning technology company, shows that this approach predicts assessment results competitively with benchmark models and is able to discriminate between lesson sequences that lead to mastery and failure."}
{"_id": "610a98a4ba40c831182887d0c5f90c3f47561927", "title": "A deep CNN based multi-class classification of Alzheimer's disease using MRI", "text": "In the recent years, deep learning has gained huge fame in solving problems from various fields including medical image analysis. This work proposes a deep convolutional neural network based pipeline for the diagnosis of Alzheimer's disease and its stages using magnetic resonance imaging (MRI) scans. Alzheimer's disease causes permanent damage to the brain cells associated with memory and thinking skills. The diagnosis of Alzheimer's in elderly people is quite difficult and requires a highly discriminative feature representation for classification due to similar brain patterns and pixel intensities. Deep learning techniques are capable of learning such representations from data. In this paper, a 4-way classifier is implemented to classify Alzheimer's (AD), mild cognitive impairment (MCI), late mild cognitive impairment (LMCI) and healthy persons. Experiments are performed using ADNI dataset on a high performance graphical processing unit based system and new state-of-the-art results are obtained for multiclass classification of the disease. The proposed technique results in a prediction accuracy of 98.8%, which is a noticeable increase in accuracy as compared to the previous studies and clearly reveals the effectiveness of the proposed method."}
{"_id": "9b0c1401ff3b905b22317e738b43945b5cf33cdd", "title": "Handwriting development, competency, and intervention.", "text": "Failure to attain handwriting competency during the school-age years often has far-reaching negative effects on both academic success and self-esteem. This complex occupational task has many underlying component skills that may interfere with handwriting performance. Fine motor control, bilateral and visual-motor integration, motor planning, in-hand manipulation, proprioception, visual perception, sustained attention, and sensory awareness of the fingers are some of the component skills identified. Poor handwriting may be related to intrinsic factors, which refer to the child's actual handwriting capabilities, or extrinsic factors which are related to environmental or biomechanical components, or both. It is important that handwriting performance be evaluated using a valid, reliable, standardized tool combined with informal classroom observation and teacher consultation. Studies of handwriting remediation suggest that intervention is effective. There is evidence to indicate that handwriting difficulties do not resolve without intervention and affect between 10 and 30% of school-aged children. Despite the widespread use of computers, legible handwriting remains an important life skill that deserves greater attention from educators and health practitioners."}
{"_id": "56b58ead464ea57ce3628742bf681298f603a68b", "title": "The formation and function of plant cuticles.", "text": "The plant cuticle is an extracellular hydrophobic layer that covers the aerial epidermis of all land plants, providing protection against desiccation and external environmental stresses. The past decade has seen considerable progress in assembling models for the biosynthesis of its two major components, the polymer cutin and cuticular waxes. Most recently, two breakthroughs in the long-sought molecular bases of alkane formation and polyester synthesis have allowed construction of nearly complete biosynthetic pathways for both waxes and cutin. Concurrently, a complex regulatory network controlling the synthesis of the cuticle is emerging. It has also become clear that the physiological role of the cuticle extends well beyond its primary function as a transpiration barrier, playing important roles in processes ranging from development to interaction with microbes. Here, we review recent progress in the biochemistry and molecular biology of cuticle synthesis and function and highlight some of the major questions that will drive future research in this field."}
{"_id": "8a5444e350e59f69ddf37121d9ede8a818f54a3d", "title": "Towards a psychographic user model from mobile phone usage", "text": "Knowing the users' personality can be a strategic advantage for the design of adaptive and personalized user interfaces. In this paper, we present the results of a first trial conducted with the aim of inferring people's personality traits based on their mobile phone call behavior. Initial findings corroborate the efficacy of using call detail records (CDR) and Social Network Analysis (SNA) of the call graph to infer the Big Five personality factors. On-going work includes a large-scale study that shall refine the accuracy of the models with a reduced margin of error."}
{"_id": "58ab158785a0899210369f76025c1e33c19299f5", "title": "Integration of Modular Process Units Into Process Control Systems", "text": "The modularization of process plants is regarded as a promising approach to cope with upcoming requirements in the process industry regarding flexibility. Within the Decentralized Intelligence for Modular Assets Project, a concept has been developed to overcome current deficiencies in implementing modular plants and especially their automation. On one hand, the approach considers a clear separation of engineering efforts into plant-independent module engineering and plant-specific integration engineering. On the other hand, the concept provides a method for the fast integration of a module's automation system into a higher level process control system. To do so, each module provides its process functionality in encapsulated services. The core of the integration method is a file-based description of the module including its services, operating screens, and communication variables. The so-called \u201cmodule-type package\u201d is currently under standardization by different organizations. This paper presents the general approach as well as current results of the standardization efforts. Furthermore, the results are executed on an application demonstrator to allow a practical assessment. Thus, module engineering and integration engineering, as well as corresponding software tools can be demonstrated."}
{"_id": "2a41247a1904e34307ac0577d756aedb8f7f3aed", "title": "Cryptanalytic Attacks on MIFARE Classic Protocol", "text": "MIFARE Classic is the most widely used contactless smart card in the world. It implements a proprietary symmetric-key mutual authentication protocol with a dedicated reader and a proprietary stream cipher algorithm known as CRYPTO1, both of which have been reverse engineered. The existing attacks in various scenarios proposed in the literature demonstrate that MIFARE Classic does not offer the desired 48-bit security level. The most practical scenario is the card-only scenario where a fake, emulated reader has a wireless access to a genuine card in the on-line stage of the attack. The most effective known attack in the card-only scenario is a differential attack, which is claimed to require about 10 seconds of average on-line time in order to reconstruct the secret key from the card. This paper presents a critical comprehensive survey of currently known attacks on MIFARE Classic, puts them into the right perspective in light of the prior art in cryptanalysis, and proposes a number of improvements. It is shown that the differential attack is incorrectly analyzed and is optimized accordingly. A new attack of a similar, differential type is also introduced. In comparison with the optimized differential attack, it has a higher success probability of about 0.906 and a more than halved on-line time of about 1.8 seconds."}
{"_id": "086cde207a125816091c34a076b473a465b5388f", "title": "NEXMark \u2013 A Benchmark for Queries over Data Streams DRAFT", "text": "A lot of research has focused recently on executing queries over data streams. This recent attention is due to the large number of data streams available, and the desire to get answers to queries on these streams in real time. There are many sources of data streams: environmental sensor networks, network routing traces, financial transactions and cell phone call records. Many systems are currently under development to execute queries over data streams [BW01, CCC02, MF02, NDM00, SH98]. Further, many ideas have been presented to improve the execution of queries over data streams [ABB02, MSHR02, TMSF02]. It is important to be able to measure the effectiveness of this work. To this end, we present the Niagara Extension to XMark benchmark (NEXMark). This is a work in progress. We are circulating it now for feedback. We have three goals: To define a benchmark, to provide stream generators, and to define metrics for measuring queries over continuous data streams. The XMark benchmark [SWK01] is designed to measure the performance of XML repositories. The benchmark provides a data generator that models the state of an auction in XML format, and various queries over the generated data. An abbreviated schema for XMark is shown in Figure 1."}
{"_id": "3ce25fbaa79ee9d7074832440a1c033ca9e2a709", "title": "A wearable 12-lead ECG acquisition system with fabric electrodes", "text": "Continuous electrocardiogram (ECG) monitoring is significant for prevention of heart disease and is becoming an important part of personal and family health care. In most of the existing wearable solutions, conventional metal sensors and corresponding chips are simply integrated into clothes and usually could only collect few leads of ECG signals that could not provide enough information for diagnosis of cardiac diseases such as arrhythmia and myocardial ischemia. In this study, a wearable 12-lead ECG acquisition system with fabric electrodes was developed and could simultaneously process 12 leads of ECG signals. By integrating the fabric electrodes into a T-shirt, the wearable system would provide a comfortable and convenient user interface for ECG recording. For comparison, the proposed fabric electrode and the gelled traditional metal electrodes were used to collect ECG signals on a subject, respectively. The approximate entropy (ApEn) of ECG signals from both types of electrodes were calculated. The experimental results show that the fabric electrodes could achieve similar performance as the gelled metal electrodes. This preliminary work has demonstrated that the developed ECG system with fabric electrodes could be utilized for wearable health management and telemedicine applications."}
{"_id": "1963c71b017106a16a734290d65d282cdb8be3c4", "title": "Local Rademacher Complexity Bounds based on Covering Numbers", "text": "This paper provides a general result on controlling local Rademacher complexities, which captures in an elegant form to relate the complexities with constraint on the expected norm to the corresponding ones with constraint on the empirical norm. This result is convenient to apply in real applications and could yield refined local Rademacher complexity bounds for function classes satisfying general entropy conditions. We demonstrate the power of our complexity bounds by applying them to derive effective generalization error bounds."}
{"_id": "b433b04f1e39e739305f8b99ac0ee3d0a939ea35", "title": "Movie recommendation system using clustering and pattern recognition network", "text": "As the world is going global, people have a lot to choose from when it comes to movies. There are different genres, cultures and languages to choose from in the world of movies. This arises the issue of recommending movies to users in automated systems. So far, a decent number of works has been done in this field. But there is always room for renovation. This paper proposed a machine learning approach to recommend movies to users using K-means clustering algorithm to separate similar users and creating a neural network for each cluster."}
{"_id": "58698fb2e9f114fae23fd6344bb597fe466c93ef", "title": "Role and users' approach to social networking sites (SNSs): a study of universities of North India", "text": "Purpose \u2013 The purpose of this paper is to evaluate and assess the awareness and extent of the use of social networking sites (SNSs) by the students and research scholars of universities of North India. Design/methodology/approach \u2013 The study is a questionnaire-based survey on the usage of SNSs among the students and research scholars of the universities of North India. The data of the study were collected through questionnaires, which were personally distributed to the identified population, i.e. undergraduate students, postgraduate students and research scholars, by the authors. The survey was based on a sample of 610 questionnaires; of which, 486 questionnaires were received, having a response rate of 79.67 per cent. Findings \u2013 The study showed that all the respondents were found to be aware of and making use of such applications in their academic affairs. It was revealed from the study that Facebook is the most popular SNS by all categories of respondents. To determine the purpose of SNSs, it emerged that SNSs are mostly used for entertainment and communication. The study also found that the majority of respondents were aware about the security aspects of SNSs. It signifies that excessive time consumption and fear of misusing personal information were the major hurdles in the way of accessing SNSs. Research limitations/implications \u2013 The study covers the students and research scholars of select universities of North India. It also signifies the use of SNSs in their research and academic environment. Originality/value \u2013 The paper provides reliable and authentic data. The study is worth, justifiable and enlightens the salient findings on the topic, which will be very useful for researchers in this area."}
{"_id": "aaf51f96ca1fe18852f586764bc3aa6e852d0cb6", "title": "A Tour of Reinforcement Learning: The View from Continuous Control", "text": "This manuscript surveys reinforcement learning from the perspective of optimization and control with a focus on continuous control applications. It surveys the general formulation, terminology, and typical experimental implementations of reinforcement learning and reviews competing solution paradigms. In order to compare the relative merits of various techniques, this survey presents a case study of the Linear Quadratic Regulator (LQR) with unknown dynamics, perhaps the simplest and best-studied problem in optimal control. The manuscript describes how merging techniques from learning theory and control can provide non-asymptotic characterizations of LQR performance and shows that these characterizations tend to match experimental behavior. In turn, when revisiting more complex applications, many of the observed phenomena in LQR persist. In particular, theory and experiment demonstrate the role and importance of models and the cost of generality in reinforcement learning algorithms. This survey concludes with a discussion of some of the challenges in designing learning systems that safely and reliably interact with complex and uncertain environments and how tools from reinforcement learning and control might be combined to approach these challenges."}
{"_id": "1862833e6e0eda58f82fe646578d671d0e13d54b", "title": "Markov Games as a Framework for Multi-Agent Reinforcement Learning", "text": "In the Markov decision process (MDP) formalization of reinforcement learning, a single adaptive agent interacts with an environment defined by a probabilistic transition function. In this solipsistic view, secondary agents can only be part of the environment and are therefore fixed in their behavior. The framework of Markov games allows us to widen this view to include multiple adaptive agents with interacting or competing goals. This paper considers a step in this direction in which exactly two agents with diametrically opposed goals share an environment. It describes a Q-learning-like algorithm for finding optimal policies and demonstrates its application to a simple two-player game in which the optimal policy is probabilistic."}
{"_id": "47960e52dd5968f83c29df6d594354c661043532", "title": "Prioritized Experience Replay", "text": "Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in the Deep Q-Network (DQN) algorithm, which achieved human-level performance in Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 42 out of 57 games."}
{"_id": "c017364530e9f113497c4532458fd70ac6f576d6", "title": "Deep reinforcement learning with experience replay based on SARSA", "text": "SARSA, as one kind of on-policy reinforcement learning methods, is integrated with deep learning to solve the video games control problems in this paper. We use deep convolutional neural network to estimate the state-action value, and SARSA learning to update it. Besides, experience replay is introduced to make the training process suitable to scalable machine learning problems. In this way, a new deep reinforcement learning method, called deep SARSA is proposed to solve complicated control problems such as imitating human to play video games. From the experiments results, we can conclude that the deep SARSA learning shows better performances in some aspects than deep Q learning."}
{"_id": "34b5da5845a4307eb8342efcb95e5155f661b5f8", "title": "One Intelligent Agent to Rule Them All Bachelor Thesis", "text": "In this work, we explore the possibility of training agents which are able to show intelligent behaviour in many different scenarios. We present the effectiveness of different machine learning algorithms in the StarCraft II Learning Environment [1] and their performance in different scenarios and compare them. In the end, we recreate DeepMind\u2019s FullyConv Agent with slightly better results."}
{"_id": "d7fd575c7fae05e055e47d898a5d9d2766f742b9", "title": "Topological pattern recognition for point cloud data \u2217", "text": ""}
{"_id": "f5f206e8d4c71ac1197eae0e203d5436cdc39ec9", "title": "Delay Analysis of Hybrid WiFi-LiFi System", "text": "Heterogeneous wireless networks are capable of effectivel y leveraging different access technologies to provide a wid e variety of coverage areas. In this paper, the coexistence of WiFi and visible light communication (VLC) is investigated as a para digm. The delay of two configurations of such heterogeneous system has been evaluated. In the first configuration, the non-aggregat ed system, any request is either allocated to WiFi or VLC. While in the se cond configuration, the aggregated system, each request is s plit into two pieces, one is forwarded to WiFi and the other is forw arded to VLC. Under the assumptions of Poisson arrival proce ss of requests and the exponential distribution of requests si ze, it is mathematically proved that the aggregated system p rovides lower minimum average system delay than that of the non-aggregate d system. For the non-aggregated system, the optimal traffic allo ation ratio is derived. For the aggregated system, an efficient sol ution for the splitting ratio is proposed. Empirical result show that the solution proposed here incurs a delay penalty (less than 3%) over the optimal result."}
{"_id": "85fcdef58294d1502bf8be6358ac1d31baaf0a85", "title": "Characterization of defense mechanisms against distributed denial of service attacks", "text": "We propose a characterization of distributed denial-of-s ervice (DDOS) defenses where reaction points are network-based and attack responses are active. The purpose is to provide a framework for comparing the performance and deployment of DDOS defenses. We identify the characteristics in attack detection algorithms and a ttack responses by reviewing defenses that have appeared in the literature. We expect that this char a terization will provide practitioners and academia insights into deploying DDOS defense as network services."}
{"_id": "bc6cc2d3a8e8a04b575dcb0df99697b5b446b26d", "title": "Bi-objective conflict detection and resolution in railway traffic management", "text": "Railway conflict detection and resolution is the daily task faced by railway managers that consists of adjusting train schedules whenever disturbances make the timetable infeasible. The main objective pursued by the infrastructure managers in this task is minimization of train delays, while train operating companies are also interested in other indicators of passengers dissatisfaction. The two objectives are conflicting whenever delay reduction requires cancellation of some connections between train services, which hampers passengers transfer. In fact, the infrastructure company and the train operating companies discuss on which connection to keep or drop in order to reach a compromise solution. In this paper we face the bi-objective problem of minimizing delays and missed connections to provide a set of feasible non-dominated schedules to support this decisional process. We use a detailed alternative graph model to ensure schedule feasibility and develop two heuristic algorithms to compute the Pareto front of non-dominated schedules. Our computational study, based on a complex and densely occupied Dutch railway network, shows that good coordination of connected train services is important to achieve real-time efficiency of railway services since the management of connections may heavily affect train punctuality. The two algorithms approximate accurately the Pareto front within a limited computational time."}
{"_id": "fbed5b41187e8c789115cf21a53282a3f5765f41", "title": "Beamforming for MIMO-OFDM Wireless Systems", "text": "The smart antennas are widely used for wireless com munication, because it has a ability to increase th coverage and capacity of a communication system. Smart anten na performs two main functions such as direction of arrival estimation (DOA) and beam forming. Using beam formi ng algorithm smart antenna is able to form main bea m towards desired user and null in the direction of i nterfering signals. In this project Direction of ar rival (DOA) is estimated by using MUSIC algorithm. Receive Beam fo r ing is performed by using LMS and LLMS algorithm .In this Paper, in order to perform secure transmission of signal over wireless communication we have used chaotic sequences. This paper evaluates the performance of B am forming with and without LMS and LLMS algorith m for MIMO-OFDM wireless system. The simulations are carr ied out using MATLAB."}
{"_id": "053b97e316fc43588e6235f88a1a7a4077342de7", "title": "A comparison of the Two One-Sided Tests Procedure and the Power Approach for assessing the equivalence of average bioavailability", "text": "The statistical test of the hypothesis of no difference between the average bioavailabilities of two drug formulations, usually supplemented by an assessment of what the power of the statistical test would have been if the true averages had been inequivalent, continues to be used in the statistical analysis of bioavailability/bioequivalence studies. In the present article, this Power Approach (which in practice usually consists of testing the hypothesis of no difference at level 0.05 and requiring an estimated power of 0.80) is compared to another statistical approach, the Two One-Sided Tests Procedure, which leads to the same conclusion as the approach proposed by Westlake (2) based on the usual (shortest) 1\u20132\u03b1 confidence interval for the true average difference. It is found that for the specific choice of \u03b1=0.05 as the nominal level of the one-sided tests, the two one-sided tests procedure has uniformly superior properties to the power approach in most cases. The only cases where the power approach has superior properties when the true averages are equivalent correspond to cases where the chance of concluding equivalence with the power approach when the true averages are notequivalent exceeds 0.05. With appropriate choice of the nominal level of significance of the one-sided tests, the two one-sided tests procedure always has uniformly superior properties to the power approach. The two one-sided tests procedure is compared to the procedure proposed by Hauck and Anderson (1)."}
{"_id": "7c4318bfdc3a12c2665814c1dd845349eef8ad10", "title": "Speech recognition for vocalized and subvocal modes of production using surface EMG signals from the neck and face", "text": "We report automatic speech recognition accuracy for individual words using eleven surface electromyographic (sEMG) recording locations on the face and neck during three speaking modes: vocalized, mouthed, and mentally rehearsed. An HMM based recognition system was trained and tested on a 65 word vocabulary produced by 9 American English speakers in all three speaking modes. Our results indicate high sEMG-based recognition accuracy for the vocalized and mouthed speaking modes (mean rates of 92.1% and 86.7% respectively), but an inability to conduct recognition on mentally rehearsed speech due to a lack of sufficient sEMG activity."}
{"_id": "fcb3402c8bbe7c75f207dde41affb2f688b77680", "title": "Avoiding Traffic Jam Using Ant Colony Optimization - A Novel Approach", "text": "Ant colony optimization (ACO) is a meta-heuristic based on colony of artificial ants which work cooperatively, building solutions by moving on the problem graph and by communicating through artificial pheromone trails mimicking real ants. One of the active research directions is the application of ACO algorithms to solve dynamic shortest path problems. Solving traffic jams is one such problem where the cost i.e. time to travel increases during rush hours resulting in tremendous strain on daily commuters and chaos. This paper describes a new approach-DSATJ (Dynamic System for Avoiding Traffic Jam) which aims at choosing an alternative optimum path to avoid traffic jam and then resuming that same path again when the traffic is regulated. The approach is inspired by variants of ACO algorithms. Traffic jam is detected through pheromone values on edges which are updated according to goodness of solution on the optimal tours only. Randomness is introduced in the probability function to ensure maximum exploration by ants. Experiments were carried out with the partial road map of North-West region of Delhi, India, to observe the performance of our approach."}
{"_id": "84ade3cb5b57624baee89d9e617bb5847ee07375", "title": "Mobile robot localization by tracking geometric beacons", "text": ""}
{"_id": "9e5158222c911bec96d4f533cd0d7a1a0cff1731", "title": "RF modules (Tx-Rx) with multifunctional MMICs", "text": "Next generation RF sensor modules for multifunction active electronically steered antenna (AESA) systems will need a combination of different operating modes, such as radar, electronic warfare (EW) functionalities and communications/datalinks within the same antenna frontend. They typically operate in C-Band, X-Band and Ku-Band and imply a bandwidth requirement of more than 10 GHz. For the realisation of modern active electronically steered antennas, the transmit/receive (T/R) modules have to match strict geometry demands. A major challenge for these future multifunction RF sensor modules is dictated by the half-wavelength antenna grid spacing, that limits the physical channel width to < 12 mm or even less, depending on the highest frequency of operation with accordant beam pointing requirements. A promising solution to overcome these geometry demands is the reduction of the total monolithic microwave integrated circuit (MMIC) chip area, achieved by integrating individual RF functionalities, which are commonly achieved through individual integrated circuits (ICs), into new multifunctional (MFC) MMICs. Various concepts, some of them already implemented, towards next generation RF sensor modules will be discussed and explained in this work."}
{"_id": "75d17e8fa5165a849ebe5f0475bdf77bf0b6be74", "title": "Context-Dependent Sentiment Analysis in User-Generated Videos", "text": "Multimodal sentiment analysis is a developing area of research, which involves the identification of sentiments in videos. Current research considers utterances as independent entities, i.e., ignores the interdependencies and relations among the utterances of a video. In this paper, we propose a LSTM-based model that enables utterances to capture contextual information from their surroundings in the same video, thus aiding the classification process. Our method shows 5-10% performance improvement over the state of the art and high robustness to generalizability."}
{"_id": "77a9473256f6841d40cb9198feb5b91dccf9ffd1", "title": "A dimmable charge-pump ZVS led driver with PFC", "text": "This paper presents a dimmable charge-pump driver to power light-emitting diodes (LEDs) with power factor correction (PFC) and Zero Voltage Switching (ZVS). The proposed LED driver does not utilize electrolytic capacitors, providing a high useful lifetime, and it can stabilize the output current in open loop control without needing current sensors, which reduces the cost. The output power is proportional to the switching frequency, which allows the LEDs dimming. A prototype with 22 W was implemented and experimental results were discussed. The prototype presented a power factor of 0.996 and an efficiency of 89.5 %. The driver output power was reduced by more than 40% through the switching frequency while varying of 53 kHz to 30 kHz and the converter has continues to operate in ZVS."}
{"_id": "371a80bcdd2538df8ef999c1216fd87aa8742562", "title": "The Bilingual's Language Modes.", "text": "GROSJEAN 2 Bilinguals who have reflected on their bilingualism will often report that they change their way of speaking when they are with monolinguals and when they are with bilinguals. Whereas they avoid using their other language with the former, they may call on it for a word or a sentence with the latter or even change over to it completely. In addition, bilinguals will also report that, as listeners, they are sometimes taken by surprise when they are spoken to in a language that they did not expect. Although these reports are quite anecdotal, they do point to an important phenomenon, language mode, which researchers have been alluding to over the years. For example, Weinreich (1966) writes that when speaking to a monolingual, the bilingual is subject to interlocutory constraint which requires that he or she limit interferences (Weinreich uses this as a cover term for any element of the other language) but when speaking to another bilingual, there is hardly any limit to interferences; forms can be transferred freely from one language to the other and often used in an unadapted way. A few years later, Hasselmo (1970) refers to three sets of \"norms\" or \"modes of speaking\" among Swedish-English bilinguals in the United States: English only for contact with English monolinguals, American Swedish with some bilinguals (the main language used is Swedish), and Swedish American with other bilinguals (here the main language is English). In the latter two cases, code-switching can take place in the other language. The author also notes that there exists two extremes in the behavior of certain bilinguals, one extreme involves minimal and the other maximal code-switching. A couple of years later, Clyne (1972) talks of three communication possibilities in bilingual discourse: in the first, both codes are used by both speakers; in the second, each one uses a different code but the two understand both codes; and, in the third, only one of the two speakers uses and understands both codes whereas the other speaker is monolingual in one of the codes. Finally, Baetens Beardsmore (1982) echoes these views when he writes that bilinguals in communication with other bilinguals may feel free to use both of their language repertoires. However, the same bilingual speakers in conversation with monoglots may not feel the same liberty and may well attempt to maximize alignment on monoglot norms by consciously reducing any formal \"interference\" features to \u2026"}
{"_id": "0c839ef5d71bb0878ce4df0f28eb7635795e850c", "title": "A Bottom-Up Approach to Sentence Ordering for Multi-Document Summarization", "text": "Ordering information is a difficult but important task for applications generating naturallanguage texts such as multi-document summarization, question answering, and conceptto-text generation. In multi-document summarization, information is selected from a set of source documents. However, improper ordering of information in a summary can confuse the reader and deteriorate the readability of the summary. Therefore, it is vital to properly order the information in multi-document summarization. We present a bottom-up approach to arrange sentences extracted for multi-document summarization. To capture the association and order of two textual segments (e.g. sentences), we define four criteria: chronology, topical-closeness, precedence, and succession. These criteria are integrated into a criterion by a supervised learning approach. We repeatedly concatenate two textual segments into one segment based on the criterion, until we obtain the overall segment with all sentences arranged. We evaluate the sentence orderings produced by the proposed method and numerous baselines using subjective gradings as well as automatic evaluation measures. We introduce the average continuity, an automatic evaluation measure of sentence ordering in a summary, and investigate its appropriateness for this task."}
{"_id": "d10df96b3fb0ab5c6b1d0cc22c7400d0acccc3cc", "title": "The Hitchhiker's Guide to Testing Statistical Significance in Natural Language Processing", "text": "Statistical significance testing is a standard statistical tool designed to ensure that experimental results are not coincidental. In this opinion/theoretical paper we discuss the role of statistical significance testing in Natural Language Processing (NLP) research. We establish the fundamental concepts of significance testing and discuss the specific aspects of NLP tasks, experimental setups and evaluation measures that affect the choice of significance tests in NLP research. Based on this discussion, we propose a simple practical protocol for statistical significance test selection in NLP setups and accompany this protocol with a brief survey of the most relevant tests. We then survey recent empirical papers published in ACL and TACL during 2017 and show that while our community assigns great value to experimental results, statistical significance testing is often ignored or misused. We conclude with a brief discussion of open issues that should be properly addressed so that this important tool can be applied in NLP research in a statistically sound manner1."}
{"_id": "0b5d303b212f99d8f04de77f0c7fd5724871b6b7", "title": "Mr. Bennet, his coachman, and the Archbishop walk into a bar but only one of them gets recognized: On The Difficulty of Detecting Characters in Literary Texts", "text": "Characters are fundamental to literary analysis. Current approaches are heavily reliant on NER to identify characters, causing many to be overlooked. We propose a novel technique for character detection, achieving significant improvements over state of the art on multiple datasets."}
{"_id": "b5fe4731ff6a7a7f1ad8232186e84b1f944162e0", "title": "Cross-Media Hashing with Neural Networks", "text": "Cross-media hashing, which conducts cross-media retrieval by embedding data from different modalities into a common low-dimensional hamming space, has attracted intensive attention in recent years. This is motivated by the facts a) the multi-modal data is widespread, e.g., the web images on Flickr are associated with tags, and b) hashing is an effective technique towards large-scale high-dimensional data processing, which is exactly the situation of cross-media retrieval. Inspired by recent advances in deep learning, we propose a cross-media hashing approach based on multi-modal neural networks. By restricting in the learning objective a) the hash codes for relevant cross-media data being similar, and b) the hash codes being discriminative for predicting the class labels, the learned Hamming space is expected to well capture the cross-media semantic relationships and to be semantically discriminative. The experiments on two real-world data sets show that our approach achieves superior cross-media retrieval performance compared with the state-of-the-art methods."}
{"_id": "1e8e1c16cea908db84b83d0ec1a37a497dd1f4de", "title": "A HOG-based Real-time and Multi-scale Pedestrian Detector Demonstration System on FPGA", "text": "Pedestrian detection will play a major role in future driver assistance and autonomous driving. One powerful algorithm in this field uses HOG features to describe the specific properties of pedestrians in images. To determine their locations, features are extracted and classified window-wise from different scales of an input image. The results of the classification are finally merged to remove overlapping detections. The real-time execution of this method requires specific FPGA- or ASIC-architectures. Recent work focused on accelerating the feature extraction and classification. Although merging is an important step in the algorithm, it is only rarely considered in hardware implementations. A reason for that could be its complexity and irregularity that is not trivial to implement in hardware. In this paper, we present a new bottom-up FPGA architecture that maps the full HOG-based algorithm for pedestrian detection including feature extraction, SVM classification, and multi-scale processing in combination with merging. For that purpose, we also propose a new hardware-optimized merging method. The resulting architecture is highly efficient. Additionally, we present an FPGA-based full real-time and multi-scale pedestrian detection demonstration system."}
{"_id": "7f9a1a8460201a5686672ce946f7631be6a41cf2", "title": "An Efficient Algorithm for Calculating the Exact Hausdorff Distance", "text": "The Hausdorff distance (HD) between two point sets is a commonly used dissimilarity measure for comparing point sets and image segmentations. Especially when very large point sets are compared using the HD, for example when evaluating magnetic resonance volume segmentations, or when the underlying applications are based on time critical tasks, like motion detection, then the computational complexity of HD algorithms becomes an important issue. In this paper we propose a novel efficient algorithm for computing the exact Hausdorff distance. In a runtime analysis, the proposed algorithm is demonstrated to have nearly-linear complexity. Furthermore, it has efficient performance for large point set sizes as well as for large grid size; performs equally for sparse and dense point sets; and finally it is general without restrictions on the characteristics of the point set. The proposed algorithm is tested against the HD algorithm of the widely used national library of medicine insight segmentation and registration toolkit (ITK) using magnetic resonance volumes with extremely large size. The proposed algorithm outperforms the ITK HD algorithm both in speed and memory required. In an experiment using trajectories from a road network, the proposed algorithm significantly outperforms an HD algorithm based on R-Trees."}
{"_id": "388874cac122fd33ef8eef839392b60cf6e9c508", "title": "Deep Network Cascade for Image Super-resolution", "text": "In this paper, we propose a new model called deep network cascade (DNC) to gradually upscale low-resolution images layer by layer, each layer with a small scale factor. DNC is a cascade of multiple stacked collaborative local auto-encoders. In each layer of the cascade, non-local self-similarity search is first performed to enhance high-frequency texture details of the partitioned patches in the input image. The enhanced image patches are then input into a collaborative local auto-encoder (CLA) to suppress the noises as well as collaborate the compatibility of the overlapping patches. By closing the loop on non-local self-similarity search and CLA in a cascade layer, we can refine the super-resolution result, which is further fed into next layer until the required image scale. Experiments on image super-resolution demonstrate that the proposed DNC can gradually upscale a low-resolution image with the increase of network layers and achieve more promising results in visual quality as well as quantitative performance."}
{"_id": "728de8bc39d7de12b31839924078c3fcd63b0a71", "title": "View invariant human action recognition using histograms of 3D joints", "text": "In this paper, we present a novel approach for human action recognition with histograms of 3D joint locations (HOJ3D) as a compact representation of postures. We extract the 3D skeletal joint locations from Kinect depth maps using Shotton et al.'s method [6]. The HOJ3D computed from the action depth sequences are reprojected using LDA and then clustered into k posture visual words, which represent the prototypical poses of actions. The temporal evolutions of those visual words are modeled by discrete hidden Markov models (HMMs). In addition, due to the design of our spherical coordinate system and the robust 3D skeleton estimation from Kinect, our method demonstrates significant view invariance on our 3D action dataset. Our dataset is composed of 200 3D sequences of 10 indoor activities performed by 10 individuals in varied views. Our method is real-time and achieves superior results on the challenging 3D action dataset. We also tested our algorithm on the MSR Action 3D dataset and our algorithm outperforms Li et al. [25] on most of the cases."}
{"_id": "0cacd99fc4f973b380e3893da06977a33292faf2", "title": "Sparse solutions to linear inverse problems with multiple measurement vectors", "text": "We address the problem of finding sparse solutions to an underdetermined system of equations when there are multiple measurement vectors having the same, but unknown, sparsity structure. The single measurement sparse solution problem has been extensively studied in the past. Although known to be NP-hard, many single-measurement suboptimal algorithms have been formulated that have found utility in many different applications. Here, we consider in depth the extension of two classes of algorithms-Matching Pursuit (MP) and FOCal Underdetermined System Solver (FOCUSS)-to the multiple measurement case so that they may be used in applications such as neuromagnetic imaging, where multiple measurement vectors are available, and solutions with a common sparsity structure must be computed. Cost functions appropriate to the multiple measurement problem are developed, and algorithms are derived based on their minimization. A simulation study is conducted on a test-case dictionary to show how the utilization of more than one measurement vector improves the performance of the MP and FOCUSS classes of algorithm, and their performances are compared."}
{"_id": "7344b60ef80831e5ce2a6ad4aa11c6055517cdd6", "title": "Low-voltage low-power CMOS full adder - Circuits, Devices and Systems, IEE Proceedings-", "text": "Low-power design of VLSI circuits has been identified as a critical technological need in recent years due to the high demand for portable consumer electronics products. In this regard many innovative designs for basic logic functions using pass transistors and transmission gates have appeared in the literature recently. These designs relied on the intuition and cleverness of the designers, without involving formal design procedures. Hence, a formal design procedure for realising a minimal transistor CMOS pass network XOR-XNOR cell, that is fully compensated for threshold voltage drop in MOS transistors, is presented. This new cell can reliably operate within certain bounds when the power supply voltage is scaled down, as long as due consideration is given to the sizing of the MOS transistors during the initial design step. A low transistor count full adder cell using the new XOR-XNOR cell is also presented."}
{"_id": "aab6d5f4954130fc16fb80e3a8de4c1418d18542", "title": "Associations between online friendship and Internet addiction among adolescents and emerging adults.", "text": "The past decades have witnessed a dramatic increase in the number of youths using the Internet, especially for communicating with peers. Online activity can widen and strengthen the social networks of adolescents and emerging adults (Subrahmanyam & Smahel, 2011), but it also increases the risk of Internet addiction. Using a framework derived from Griffiths (2000a), this study examined associations between online friendship and Internet addiction in a representative sample (n = 394) of Czech youths ages 12-26 years (M = 18.58). Three different approaches to friendship were identified: exclusively offline, face-to-face oriented, Internet oriented, on the basis of the relative percentages of online and offline associates in participants' friendship networks. The rate of Internet addiction did not differ by age or gender but was associated with communication styles, hours spent online, and friendship approaches. The study revealed that effects between Internet addiction and approaches to friendship may be reciprocal: Being oriented toward having more online friends, preferring online communication, and spending more time online were related to increased risk of Internet addiction; on the other hand, there is an alternative causal explanation that Internet addiction and preference for online communication conditions young people's tendency to seek friendship from people met online."}
{"_id": "351d8b6ae4d9d1c0b87a4d44b8c30e8dd4ba0d5a", "title": "Constructing a Hermitian Matrix from Its Diagonal Entries and Eigenvalues", "text": "Given two vectors a R the Schur Horn theorem states that a majorizes if and only if there exists a Hermitian matrix H with eigenvalues and diagonal entries a While the theory is regarded as classical by now the known proof is not constructive To construct a Hermitian matrix from its diagonal entries and eigenvalues therefore becomes an interesting and challenging inverse eigenvalue problem Two algorithms for determining the matrix numerically are proposed in this paper The lift and pro jection method is an iterative method which involves an interesting application of the Wielandt Ho man theorem The projected gradient method is a continuous method which besides its easy implementation o ers a new proof of existence because of its global convergence property"}
{"_id": "62649fa6b79a1d97ca86359ffb01e017b6acf0a0", "title": "Pisotriquetral joint disorders: an under-recognized cause of ulnar side wrist pain", "text": "Pisotriquetral joint disorders are often under-recognized in routine clinical practice. They nevertheless represent a significant cause of ulnar side wrist pain. The aim of this article is to present the main disorders of this joint and discuss the different imaging modalities that can be useful for its assessment."}
{"_id": "7aa63f414a4d7c6e4369a15a04dc5d3eb5da2b0e", "title": "Automatic Feature Engineering for Answer Selection and Extraction", "text": "This paper proposes a framework for automatically engineering features for two important tasks of question answering: answer sentence selection and answer extraction. We represent question and answer sentence pairs with linguistic structures enriched by semantic information, where the latter is produced by automatic classifiers, e.g., question classifier and Named Entity Recognizer. Tree kernels applied to such structures enable a simple way to generate highly discriminative structural features that combine syntactic and semantic information encoded in the input trees. We conduct experiments on a public benchmark from TREC to compare with previous systems for answer sentence selection and answer extraction. The results show that our models greatly improve on the state of the art, e.g., up to 22% on F1 (relative improvement) for answer extraction, while using no additional resources and no manual feature engineering."}
{"_id": "9814dd00440b08caf0df96988edb4c56cfcf7bd1", "title": "Active SLAM using Model Predictive Control and Attractor based Exploration", "text": "Active SLAM poses the challenge for an autonomous robot to plan efficient paths simultaneous to the SLAM process. The uncertainties of the robot, map and sensor measurements, and the dynamic and motion constraints need to be considered in the planning process. In this paper, the active SLAM problem is formulated as an optimal trajectory planning problem. A novel technique is introduced that utilises an attractor combined with local planning strategies such as model predictive control (a.k.a. receding horizon) to solve this problem. An attractor provides high level task intentions and incorporates global information about the environment for the local planner, thereby eliminating the need for costly global planning with longer horizons. It is demonstrated that trajectory planning with an attractor results in improved performance over systems that have local planning alone"}
{"_id": "30453ab7ea984609c135032988998234ce99e294", "title": "Domestic food and sustainable design: a study of university student cooking and its impacts", "text": "In four university student kitchens over twenty-one days, we captured participants' food preparation activity, quantified the greenhouse gas emissions and direct energy connected to the food and cooking, and talked to participants about their food practices. Grounded in this uniquely detailed micro-account, our findings inform sustainable design for cooking and eating at home and quantify the potential impacts. We outline the relation of the impacts to our participants' approaches to everyday food preparation, the organisation of their time, and the role of social meals. Our technique allows evaluation of opportunities for sustainable intervention design: at the appliance, in the digitally-mediated organisation of meals and inventory management, and more broadly in reflecting upon and reshaping diet."}
{"_id": "311a4e0aaa64ab73d0470ee565e3a368b2b4fcb2", "title": "Bayesian Neural Networks for Internet Traffic Classification", "text": "Internet traffic identification is an important tool for network management. It allows operators to better predict future traffic matrices and demands, security personnel to detect anomalous behavior, and researchers to develop more realistic traffic models. We present here a traffic classifier that can achieve a high accuracy across a range of application types without any source or destination host-address or port information. We use supervised machine learning based on a Bayesian trained neural network. Though our technique uses training data with categories derived from packet content, training and testing were done using features derived from packet streams consisting of one or more packet headers. By providing classification without access to the contents of packets, our technique offers wider application than methods that require full packet/payloads for classification. This is a powerful advantage, using samples of classified traffic to permit the categorization of traffic based only upon commonly available information"}
{"_id": "70e07ea983cfbe56e7ee897cf333a2fb45811b81", "title": "A VLSI Implementation of an Analog Neural Network Suited for Genetic Algorithms", "text": "The usefulness of an artificial analog neural network is closely bound to its trainability. This paper introduces a new analog neural network architecture using weights determined by a genetic algorithm. The first VLSI implementation presented in this paper achieves 200 giga connections per second with 4096 synapses on less than 1 mm silicon area. Since the training can be done at the full speed of the network, several hundred individuals per second can be tested by the genetic algorithm. This makes it feasible to tackle problems that require large multi-layered networks."}
{"_id": "820ee778e87f91719ec1c9f5b53a1c429a372490", "title": "Relationship Banking : What Do We Know ?", "text": "This paper briefly reviews the contemporary literature on relationship banking. We start out with a discussion of the raison d\u2019\u02c6 etre of banks in the context of the financial intermediation literature. From there we discuss how relationship banking fits into the core economic services provided by banks and point at its costs and benefits. This leads to an examination of the interrelationship between the competitive environment and relationship banking as well as a discussion of the empirical evidence. Journal of Economic LiteratureClassification Numbers: G20, G21, L10. C \u00a9 2000 Academic Press"}
{"_id": "f88aa878ad3005e28fd8c5cbeb1d450c4cd578d5", "title": "Microalgae as a raw material for biofuels production", "text": "Biofuels demand is unquestionable in order to reduce gaseous emissions (fossil CO2, nitrogen and sulfur oxides) and their purported greenhouse, climatic changes and global warming effects, to face the frequent oil supply crises, as a way to help non-fossil fuel producer countries to reduce energy dependence, contributing to security of supply, promoting environmental sustainability and meeting the EU target of at least of 10% biofuels in the transport sector by 2020. Biodiesel is usually produced from oleaginous crops, such as rapeseed, soybean, sunflower and palm. However, the use of microalgae can be a suitable alternative feedstock for next generation biofuels because certain species contain high amounts of oil, which could be extracted, processed and refined into transportation fuels, using currently available technology; they have fast growth rate, permit the use of non-arable land and non-potable water, use far less water and do not displace food crops cultures; their production is not seasonal and they can be harvested daily. The screening of microalgae (Chlorella vulgaris, Spirulina maxima, Nannochloropsis sp., Neochloris oleabundans, Scenedesmus obliquus and Dunaliella tertiolecta) was done in order to choose the best one(s), in terms of quantity and quality as oil source for biofuel production. Neochloris oleabundans (fresh water microalga) and Nannochloropsis sp. (marine microalga) proved to be suitable as raw materials for biofuel production, due to their high oil content (29.0 and 28.7%, respectively). Both microalgae, when grown under nitrogen shortage, show a great increase (~50%) in oil quantity. If the purpose is to produce biodiesel only from one species, Scenedesmus obliquus presents the most adequate fatty acid profile, namely in terms of linolenic and other polyunsaturated fatty acids. However, the microalgae Neochloris oleabundans, Nannochloropsis sp. and Dunaliella tertiolecta can also be used if associated with other microalgal oils and/or vegetable oils."}
{"_id": "dd97ac9c9d9267793054aa38782f03342e4ccb32", "title": "Error-Correcting Codes for Semiconductor Memory Applications: A State-of-the-Art Review", "text": "This paper presents a state-of-the-art review of error-correcting codes for computer semiconductor memory applications. The construction of four classes of error-correcting codes appropriate for semiconductor memory designs is described, and for each class of codes the number of check bits required for commonly used data lengths is provided. The implementation aspects of error correction and error detection are also discussed, and certain algorithms useful in extending the error-correcting capability for the correction of soft errors such as a-particle-induced errors are examined in some detail."}
{"_id": "f13a256c0a595ac5ee815e0f85e4ee1552bc08f0", "title": "Smarter Presentations: Exploiting Homography in Camera-Projector Systems", "text": "Standard presentation systems consisting of a laptop connected to a projector suffer from two problems: (1) the projected image appears distorted (keystoned) unless the projector is precisely aligned to the projection screen; (2) the speaker is forced to interact with the computer rather than the audience. This paper shows how the addition of an uncalibrated camera, aimed at the screen, solves both problems. Although the locations, orientations and optical parameters of the camera and projector are unknown, the projector-camera system calibrates itself by exploiting the homography between the projected slide and the camera image. Significant improvements are possible over passively calibrating systems since the projector actively manipulates the environment by placing feature points into the scene. For instance, using a low-resolution (160x120) camera, we can achieve an accuracy of \u00b13 pixels in a 1024x768 presentation slide. The camera-projector system infers models for the projector-to-camera and projector-to-screen mappings in order to provide two major benefits. First, images sent to the projector are pre-warped in such a way that the distortions induced by the arbitrary projector-screen geometry are precisely negated. This enables projectors to be mounted anywhere in the environment \u2013 for instance, at the side of the room, where the speaker is less likely to cast shadows on the screen, and where the projector does not occlude the audience\u2019s view. Second, the system detects the position of the user\u2019s laser pointer dot in the camera image at 20Hz, allowing the laser pointer to emulate the pointing actions of a mouse. This enables the user to activate virtual buttons in the presentation (such as \u201cnext slide\u201d) and draw on the projected image. The camera-assisted presentation system requires no special hardware (aside from the cheap camera) Rahul Sukthankar (rahuls@cs.cmu.edu) is now affiliated with Compaq Cambridge Research Lab and Carnegie Mellon; Robert Stockton and Matthew Mullin ({rstock,mmullin}@whizbang.com) are now with WhizBang! Labs. and runs on a standard laptop as a Java application. It is now used by the authors for all of their conference presentations."}
{"_id": "58ca8da8e71a832e225cbf5c2d00042e1e90b970", "title": "Scholarly paper recommendation via user's recent research interests", "text": "We examine the effect of modeling a researcher's past works in recommending scholarly papers to the researcher. Our hypothesis is that an author's published works constitute a clean signal of the latent interests of a researcher. A key part of our model is to enhance the profile derived directly from past works with information coming from the past works' referenced papers as well as papers that cite the work. In our experiments, we differentiate between junior researchers that have only published one paper and senior researchers that have multiple publications. We show that filtering these sources of information is advantageous -- when we additionally prune noisy citations, referenced papers and publication history, we achieve statistically significant higher levels of recommendation accuracy."}
{"_id": "c976999540e56ea82d511694a02c7fec1a20dc67", "title": "The role of the inferior frontal junction area in cognitive control", "text": "Cognitive control processes refer to our ability to coordinate thoughts and actions in accordance with internal goals. In the fronto-lateral cortex such processes have been primarily related to mid-dorsolateral prefrontal cortex (mid-DLPFC). However, recent brain-imaging and meta-analytic studies suggest that a region located more posterior in the fronto-lateral cortex plays a pivotal role in cognitive control as well. This region has been termed the inferior frontal junction area and can be functionally and structurally distinguished from mid-DLPFC."}
{"_id": "c8a8437e91717eb0a2d73d32ac936ea0797c1fa5", "title": "Self processes in interdependent relationships Partner affirmation and the Michelangelo phenomenon", "text": "This essay reviews theory and research regarding the \u201cMichelangelo phenomenon,\u201d which describes the manner in which close partners shape one another\u2019s dispositions, values, and behavioral tendencies. Individuals are more likely to exhibit movement toward their ideal selves to the degree that their partners exhibit affirming perception and behavior, exhibiting confidence in the self \u2019s capacity and enacting behaviors that elicit key features of the self \u2019s ideal. In turn, movement towards the ideal self yields enhanced personal well-being and couple well-being. We review empirical evidence regarding this phenomenon and discuss self and partner variables that contribute to the process."}
{"_id": "41d1f7d60e459eca8f5e62482d57a41b232be61d", "title": "Receiver Design for Downlink Non-Orthogonal Multiple Access (NOMA)", "text": "Non-orthogonal multiple access (NOMA) is one of the promising radio access techniques for further cellular enhancements toward 5G. Compared to orthogonal multiple access (OMA) such as orthogonal frequency-division multiple access (OFDMA), large performance gains were confirmed via system-level simulations. However, NOMA link-level simulations and the design of the receiver remain of great importance to validate NOMA performance gains. In this paper, we evaluate downlink NOMA link-level performance with multiple receiver designs and propose a novel NOMA transmitter and receiver design, where the signals of multi-users are jointly modulated at transmitter side and detected at receiver side. The predominant advantage of the proposed scheme is that at receiver side interference cancellation to the interference signal is not needed, thus low complexity is achieved. The performances of codeword-level SIC, symbol-level SIC and the proposed receiver are evaluated and compared with ideal SIC. Simulation results show that compared with ideal SIC, downlink NOMA link-level performance depends on actual receiver design and the difference in the power ratio split between the cell edge user and cell center user. In particular, it is shown that codeword-level SIC and the proposed receiver can both provide a good performance even when the power ratio difference between the cell center user and cell edge user is small and with real channel estimation."}
{"_id": "9c4c7d60cd883e0bf85dd9eaccc8ed49b481ba77", "title": "Sovrin : digital identities in the blockchain era", "text": "In this paper we describe a practical digital identity project of a global scale, which solves a number of privacy and scalability problems using the concepts of anonymous credentials and permissioned blockchains. Even though both ideas have been known and developed for a number of years, our system appears to be the first amalgamation. The prototype has been developed, tested, and published as open source. I. DIGITAL IDENTITY: HISTORY AND CHALLENGES The idea of digital identity can be traced to the beginning of the computer era, when the user profiles were first transferred across networks. In the real world, different authorities issue licenses, passports, identification cards, credit cards, checks, etc., generally called credentials, to ordinary users for their presentation to various service or goods providers. This brings the concept of Issuer, User, and Verifier as subjects of the identity system. It immediately became clear that issuance, storage, and presentation procedures must follow a number of security requirements to be competitive with the traditional paper credentials: \u2022 Compatibility. Credentials issued by different Issuers can be combined for a single presentation by User who is required to prove several properties simultaneously: e.g. that he is at least age 18 employed by XYZ Corp. with a driver license and good eyesight (does not require corrective lenses). \u2022 Unforgeability. A malicious User is unable to present a credential not issued by a legitimate Issuer. \u2022 Scalability. A system must handle hundreds of Issuers and billions of Users, such as an international passport system. It might be required to handle thousands of transactions per second like the Visa system [1]. \u2022 Performance/Low latency. Verification of credentials must be almost instant and not require broadband network connectivity per credential, even if it is issued by a previously unknown Issuer. Credentials can be stored and processed easily by the User even if he is offline. \u2022 Revocation. An Issuer can revoke any credential such that a User and any Verifiers can know within a reasonable amount of time that credential was revoked. Additionally, a digital identity system could provide features impossible or difficult to implement in a paper identity system: \u2022 Minimal dependencies. Aside from an earlier issuance, an Issuer should not be involved during the preparation and presentation of proofs, and the verification of credentials, including proof of nonrevocation. \u2022 Privacy/Anonymity. The real identity of a User can be kept secret from Verifiers who do not need it. \u2022 Unlinkability. Different presentations of the same credential cannot be linked together. \u2022 Selective disclosure. Any subset of attributes embedded in a credential can be kept secret from Verifiers. The latter group of properties is particularly attractive for Users from authoritarian countries or those who are concerned about leaks of their data from careless Issuers or Verifiers. The privacy-oriented digital identity schemes have been proposed as U-Prove [2] (mainly Microsoft, primarily based on Brands\u2019 credentials [3]) and Idemix [4] (mainly IBM, primarily based on the Camenisch-Lysyanskaya credentials [5]), with the ABC4Trust and FIDO [6] projects encompassing these two. Despite significant effort expended on both projects, neither is in widespread use today. 
We deduce that all these systems lack compatibility and scalability, as an actual implementation would require a meta-system that congregates multiple Issuers and Verifiers and manages credential schemas. Until recently, all such systems would require a (trusted) global third party for the exchange and distribution of Issuers\u2019 data and parameters. It seems that such a centralized party on a global scale could not exist. II. OUR CONTRIBUTIONS We designed and implemented a system called Sovrin, which integrates anonymous credentials with revocation [7], [8] for privacy, unforgeability, performance, unlinkability and a distributed ledger, taking best practices from Ethereum [9] and BFT protocols. The implementation is mostly finished and will go live in the near future. A. Anonymous credentials for privacy It is well known that the anonymity and unlinkability properties are provided by various kinds of anonymous credentials, the concept dating back to seminal works by David Chaum in the 1980s [10] and refined in a number of follow-up papers including the well-known [3], [7], [11]. Zero-knowledge proofs allow a User to prove the possession of a credential without showing the credential itself, thus providing unlinkability. Additional features include delegation [11] and revocation [8]. We reference the Idemix specification [4], [12] as the basis for our anonymous credential module as it provides unlinkability by default (in contrast to U-Prove). Although Idemix is available as a Java library, for portability and fast prototyping we select the Charm framework [13], which provides a Python API for large integers, signature schemes, and pairings. It also allows us to integrate the pairing-based revocation module easily. B. Our treatment of the revocation feature The revocation feature was missing in the original papers behind U-Prove and Idemix and was added later in different ways. The revocation procedure assumes that each credential has a special Revocation-ID attribute with value iR, and the Issuer can at any moment revoke a particular iR. For Verifiers this means that if a User with a revoked iR presents his credential, this can be detected by the Verifier. We observe that in addition to various drawbacks described above, the revocation feature introduces quite a privacy leak. Indeed, consider a typical attack setting: a Verifier colludes with an Issuer to disclose the identity of a User. The Issuer then revokes half of the issued credentials, and the Verifier asks the User to update his revocation data. If the User succeeds in presenting a valid non-revocation proof, he is among the users with non-revoked iR. Then the Issuer reinstates the earlier revocations and revokes another group of revocation IDs of equal size. Repeating this procedure log N times, the Issuer-Verifier team can uniquely identify one of N users. C. Our selection of revocation methods Since 2001, the following revocation methods have been proposed: \u2022 Signature lists [14]. Issuer regularly releases a list of signatures, one per non-revoked iR. User proves that his credential has the same ID as in some signature. Verification is fast, but the amount of data retrieved by User and computation he does is huge, and multiple presentations are linkable. \u2022 Cryptographic accumulators [7], [8]. The issued iR is added to (or removed from) a constant-size accumulator A, and a User must prove that his (non-disclosed) ID is still contained in A. This approach is reasonably fast for all parties. \u2022 Forward revocation lists [15]. 
For each revoked iR, an Issuer generates a revocation entry f(iR). A User presents a non-revocation proof to a Verifier, who then tests the proof against every entry in the list. If all matches fail, the credential is deemed valid. This method requires significant computational resources from the Verifier. We decided to use accumulators based on bilinear maps [8] for performance reasons. The small order of the target group allows for short accumulators and witnesses. We have our own implementation of accumulators, which is based on a corrected version of paper [8]. The drawback is that the User must be aware of all revoked credentials, as the proof is updated every time another credential is revoked and needs (public) Issuer-specific data for the update. If the User has limited storage capacity, the data he needs to update the non-revocation proof reveals his ID. Existing research papers suggest that the User connect to the Issuer every time before creating a non-revocation proof [8], which is privacy-vulnerable if the Issuer or his agent colludes with the Verifier. D. Revocation with attribute-based sharding. The accumulator-based revocation procedure requires the User to maintain his non-revocation data w using the indices {iR} of the credentials revoked and issued since the last update. To update w the User also needs to access a global set G = {gi} of validity tails, published by the Issuer. The tail set G is immutable but must be generated in advance and occupies considerable space (256 bytes per credential). Even though the number of tails needed for the update is proportional to the number of revoked credentials in the period, the tails retrieved by the User effectively disclose his own index iR and thus his identity. For a global Issuer the User would be unable to carry the entire set of tails on his device, as they would require GBytes of storage. To prevent the privacy leak, we propose to partition the credential IDs I into shards I1, I2, . . . , Is of limited size so that the tail set G(Ij) for each shard would be feasible to download and carry (say, a dozen MBytes). Together with the credential to present, the User will then inform the Verifier of his shard number j so that the Verifier uses the corresponding accumulator data Aj. To make the shard index privacy-oblivious, we suggest attribute-based sharding, where the Issuer selects q attributes {ai} of necessary granularity (year of birth, zip code, etc.) and partitions the credential set by each ai independently. For each attribute ai and each shard an accumulator Ai,j will be published. Nevertheless, since accumulators are of constant size, the total amount of space consumed by q \u00d7 s accumulators is negligible. At the time of presentation the User selects the attribute on which he will prove non-revocation. This attribute should be among those that are already disclosed to the Verifier. Before presentation, the User downloads the entire set Gi,j of tails that correspond to his shard j at attribute i. As the tails are signed, he can instead ask the Verifier to download them, and this does not reduce privacy. We also implement the revocati"}
{"_id": "893d5b19b0f973537f87710d1f5a5fcabb270ca6", "title": "Retrieval-Based Learning: Positive Effects of Retrieval Practice in Elementary School Children", "text": "A wealth of research has demonstrated that practicing retrieval is a powerful way to enhance learning. However, nearly all prior research has examined retrieval practice with college students. Little is known about retrieval practice in children, and even less is known about possible individual differences in retrieval practice. In three experiments, 88 children (mean age 10 years) studied a list of words and either restudied the items or practiced retrieving them. They then took a final free recall test (Experiments 1 and 2) or recognition test (Experiment 3). In all experiments, children showed robust retrieval practice effects. Although a range of individual differences in reading comprehension and processing speed were observed among these children, the benefits of retrieval practice were independent of these factors. The results contribute to the growing body of research supporting the mnemonic benefits of retrieval practice and provide preliminary evidence that practicing retrieval may be an effective learning strategy for children with varying levels of reading comprehension and processing speed."}
{"_id": "6496a1a9d96dcca186ff6bef41c65d9a5d5079f3", "title": "Human posture recognition using curved segments for image retrieval", "text": "This paper presents a human posture recognition method from a single image. We rst segment an image into homogeneous regions and extract curve segments corresponding to human body parts. Each body part is considered as a 2D ribbon. From the smooth curve segments in skin regions, 2D ribbons are extracted and a human body model is constructed. We assign a predeened posture type to the image according to the constructed body model. For the user input query to retrieve images containing human of speciic posture, the system convert the query to a body model. The body model is compared to other body models saved in the local storage of target images and images of good matches are retrieved. When a face detection result is available for the given image, it is also used to increase the reliability of body model. For the query human posture, our system retrieves images of the corresponding posture. As another application, the proposed method provides an initial location of a human body to track in a video sequence."}
{"_id": "5fd8082fd2533e3ec226cd727cac8dbd04889dae", "title": "A Study Investigating Typical Concepts and Guidelines for Ontology Building", "text": "In semantic technologies, the shared common understanding of the structure of information among artifacts (people or software agents) can be realized by building an ontology. To do this, it is imperative for an ontology builder to answer several questions: a) what are the main components of an ontology? b) How an ontology look likes and how it works? c) Verify if it is required to consider reusing existing ontologies or not? c) What is the complexity of the ontology to be developed? d) What are the principles of ontology design and development? e) How to evaluate an ontology? This paper answers all the key questions above. The aim of this paper is to present a set of guiding principles to help ontology developers and also inexperienced users to answer such questions."}
{"_id": "0080118b0eb02af581ff32b85a1bb6aed7081f45", "title": "Sinkhorn Distances: Lightspeed Computation of Optimal Transport", "text": "Optimal transport distances are a fundamental family of dis tances for probability measures and histograms of features. Despite their appeali ng theoretical properties, excellent performance in retrieval tasks and intuiti ve formulation, their computation involves the resolution of a linear program whose c ost an quickly become prohibitive whenever the size of the support of these me asur s or the histograms\u2019 dimension exceeds a few hundred. We propose in this work a new family of optimal transport distances that look at transport probl ems from a maximumentropy perspective. We smooth the classic optimal transpo rt pr blem with an entropic regularization term, and show that the resulting o ptimum is also a distance which can be computed through Sinkhorn\u2019s matrix scali ng algorithm at a speed that is several orders of magnitude faster than that of transport solvers. We also show that this regularized distance improves upon clas si optimal transport distances on the MNIST classification problem."}
{"_id": "439c038941d5bbeb1a2ee5f8691e6b617f8362ea", "title": "Supervised Word Mover's Distance", "text": "Recently, a new document metric called the word mover\u2019s distance (WMD) has been proposed with unprecedented results on kNN-based document classification. The WMD elevates high-quality word embeddings to a document metric by formulating the distance between two documents as an optimal transport problem between the embedded words. However, the document distances are entirely unsupervised and lack a mechanism to incorporate supervision when available. In this paper we propose an efficient technique to learn a supervised metric, which we call the Supervised-WMD (S-WMD) metric. The supervised training minimizes the stochastic leave-one-out nearest neighbor classification error on a perdocument level by updating an affine transformation of the underlying word embedding space and a word-imporance weight vector. As the gradient of the original WMD distance would result in an inefficient nested optimization problem, we provide an arbitrarily close approximation that results in a practical and efficient update rule. We evaluate S-WMD on eight real-world text classification tasks on which it consistently outperforms almost all of our 26 competitive baselines."}
{"_id": "6812fb9ef1c2dad497684a9020d8292041a639ff", "title": "Siamese Recurrent Architectures for Learning Sentence Similarity", "text": "We present a siamese adaptation of the Long Short-Term Memory (LSTM) network for labeled data comprised of pairs of variable-length sequences. Our model is applied to assess semantic similarity between sentences, where we exceed state of the art, outperforming carefully handcrafted features and recently proposed neural network systems of greater complexity. For these applications, we provide wordembedding vectors supplemented with synonymic information to the LSTMs, which use a fixed size vector to encode the underlying meaning expressed in a sentence (irrespective of the particular wording/syntax). By restricting subsequent operations to rely on a simple Manhattan metric, we compel the sentence representations learned by our model to form a highly structured space whose geometry reflects complex semantic relationships. Our results are the latest in a line of findings that showcase LSTMs as powerful language models capable of tasks requiring intricate understanding."}
{"_id": "9a4711072b94c25a18c07ebc771c9ad425d81cd9", "title": "Effective Feature Integration for Automated Short Answer Scoring", "text": "A major opportunity for NLP to have a realworld impact is in helping educators score student writing, particularly content-based writing (i.e., the task of automated short answer scoring). A major challenge in this enterprise is that scored responses to a particular question (i.e., labeled data) are valuable for modeling but limited in quantity. Additional information from the scoring guidelines for humans, such as exemplars for each score level and descriptions of key concepts, can also be used. Here, we explore methods for integrating scoring guidelines and labeled responses, and we find that stacked generalization (Wolpert, 1992) improves performance, especially for small training sets."}
{"_id": "f08b99098d6e29b7b92e01783e9e77506b0192cb", "title": "Automatic Segmentation of Hair in Images", "text": "Based on a multi-step process, an automatic hair segmentation method is created and tested on a database of 115 manually segmented hair images. By extracting various information components from an image, including background color, face position, hair color, skin color, and skin mask, a heuristic-based method is created for the detection and segmentation of hair that can detect hair with an accuracy of approximately 75% and with a false hair overestimation error of 34%. Furthermore, it is shown that downsampling the image down to a face width of 25px results in a 73% reduction in computation time with insignificant change in detection accuracy."}
{"_id": "e9b7854cf7e152e35defee61599d21b0d0cdab21", "title": "A clutched parallel elastic actuator concept: Towards energy efficient powered legs in prosthetics and robotics", "text": "Parallel passive-elastic elements can reduce the energy consumption and torque requirements for motors in powered legged systems. However, the hardware design for such combined actuators is challenged by the need to engage and disengage the parallel elasticity depending on the gait phase. Although clutches in the drive train are often proposed, compact and low cost solutions of clutched parallel elastic actuators have so far not been established. Here we present the design and control of an initial prototype for a parallel elastic actuator. The actuator combines a DC motor with a parallel spring that is engaged and disengaged by a commercially available, compact and low-cost electric clutch. In experiments that mimic the torque and motion patterns of knee extensor muscles in human rebounding tasks we find that the parallel spring in the prototype reduces the energy consumption of the actuator by about 80% and the peak torque requirement for the DC motor by about 66%. In addition, we find that a simple trigger-based control can reliably engage and disengage the electric clutch during the motion, allowing the spring to support the motor in rebound, to remove stored energy from the system as necessary for stopping, and to virtually disappear at the actuator output level. On the other hand, the hardware experiments also reveal that our initial design limits the precision in the torque control, and we propose specific improvements to overcome these limitations."}
{"_id": "c68f2e65f1785b1de7291161ad739047df29c118", "title": "Document Distance Estimation via Code Graph Embedding", "text": "Accurately representing the distance between two documents (i.e. pieces of textual information extracted from various software artifacts) has far-reaching applications in many automated software engineering approaches, such as concept location, bug location and traceability link recovery. This is a challenging task, since documents containing different words may have similar semantic meanings. In this paper, we propose a novel document distance estimation approach. This approach captures latent semantic associations between documents through analyzing structural information in software source code: first, we embed code elements as points in a shared representation space according to structural dependencies between them; then, we represent documents as weighted point clouds of code elements in the representation space and reduce the distance between two documents to an earth mover's distance transportation problem. We define a document classification task in StackOverflow dataset to evaluate the effectiveness of our approach. The empirical evaluation results show that our approach outperforms several state-of-the-art approaches."}
{"_id": "38516ceb8d87e4eb21983bd79f77b666b80f1199", "title": "Traffic road line detection based on the vanishing point and contour information", "text": "This paper describes an algorithm to detect the road lane boundary for a driver who changes lane as \u2018access road\u2019, \u2018passing lane\u2019, \u2018approach ramp\u2019, etc. The proposed algorithm starts with binarization and labeling for the image. The binary image is generated by adaptive threshold. The binary image is labeled recursively to blobs. From each blob, the algorithm analyses principal axes calculated by Eigen value and vector. Those are based on the covariance matrix of pixel distribution for the blobs. Then it shows intersection points by the combination of each long principal axis. Among intersection points, the highest density cluster is selected to a dominant vanishing point as the parallel road lines. In experiments of road line detection from the urban road image, the proposed method shows the effectiveness of lane boundary detection in the rate of more than 94.3%."}
{"_id": "bc32313c5b10212233007ebb38e214d713db99f9", "title": "Combining and benchmarking methods of\u00a0foetal ECG extraction without maternal or scalp electrode data.", "text": "Despite significant advances in adult clinical electrocardiography (ECG) signal processing techniques and the power of digital processors, the analysis of non-invasive foetal ECG (NI-FECG) is still in its infancy. The Physionet/Computing in Cardiology Challenge 2013 addresses some of these limitations by making a set of FECG data publicly available to the scientific community for evaluation of signal processing techniques.The abdominal ECG signals were first preprocessed with a band-pass filter in order to remove higher frequencies and baseline wander. A notch filter to remove power interferences at 50\u00a0Hz or 60\u00a0Hz was applied if required. The signals were then normalized before applying various source separation techniques to cancel the maternal ECG. These techniques included: template subtraction, principal/independent component analysis, extended Kalman filter and a combination of a subset of these methods (FUSE method). Foetal QRS detection was performed on all residuals using a Pan and Tompkins QRS detector and the residual channel with the smoothest foetal heart rate time series was selected.The FUSE algorithm performed better than all the individual methods on the training data set. On the validation and test sets, the best Challenge scores obtained were E1\u2009=\u2009179.44, E2\u2009=\u200920.79, E3\u2009=\u2009153.07, E4\u2009=\u200929.62 and E5\u2009=\u20094.67 for events 1-5 respectively using the FUSE method. These were the best Challenge scores for E1 and E2 and third and second best Challenge scores for E3, E4 and E5 out of the 53 international teams that entered the Challenge. The results demonstrated that existing standard approaches for foetal heart rate estimation can be improved by fusing estimators together. We provide open source code to enable benchmarking for each of the standard approaches described."}
{"_id": "c0d51beacbc3910abd55701a9b0a5e16fc3e71f2", "title": "Absolute Orientation for Word Embedding Alignment", "text": "We develop a family of techniques to align word embeddings which are derived from different source datasets or created using different mechanisms (e.g., GloVe or word2vec). Our methods are simple and have a closed form to optimally rotate, translate, and scale to minimize root mean squared errors or maximize the average cosine similarity between two embeddings of the same vocabulary into the same dimensional space. Our methods extend approaches known as Absolute Orientation, which are popular for aligning objects in three-dimensions, and generalize an approach by Smith etal (ICLR 2017). We prove new results for optimal scaling and for maximizing cosine similarity. Then we demonstrate how to evaluate the similarity of embeddings from different sources or mechanisms, and that certain properties like synonyms and analogies are preserved across the embeddings and can be enhanced by simply aligning and averaging ensembles of embeddings."}
{"_id": "1960889706e08935912e8ee44173288e2ff10ad7", "title": "Trust, But Verify: On the Importance of Chemical Structure Curation in Cheminformatics and QSAR Modeling Research", "text": "With the recent advent of high-throughput technologies for both compound synthesis and biological screening, there is no shortage of publicly or commercially available data sets and databases that can be used for computational drug discovery applications (reviewed recently in Williams et al.). Rapid growth of large, publicly available databases (such as PubChem or ChemSpider containing more than 20 million molecular records each) enabled by experimental projects such as NIH\u2019s Molecular Libraries and Imaging Initiative provides new opportunities for the development of cheminformatics methodologies and their application to knowledge discovery in molecular databases. A fundamental assumption of any cheminformatics study is the correctness of the input data generated by experimental scientists and available in various data sets. Nevertheless, a recent study showed that on average there are two errors per each medicinal chemistry publication with an overall error rate for compounds indexed in the WOMBAT database as high as 8%. In another recent study, the authors investigated several public and commercial databases to calculate their error rates: the latter were ranging from 0.1 to 3.4% depending on the database. How significant is the problem of accurate structure representation (given that the error rates in current databases may appear relatively low) since it concerns exploratory cheminformatics and molecular modeling research? Recent investigations by a large group of collaborators from six laboratories have clearly demonstrated that the type of chemical descriptors has much greater influence on the prediction performance of QSAR models than the nature of model optimization techniques. These findings suggest that having erroneous structures represented by erroneous descriptors should have a detrimental effect on model performance. Indeed, a recent seminal publication clearly pointed out the importance of chemical data curation in the context of QSAR modeling. The authors have discussed the error rates in several known databases and evaluated the consequences of both random and systematic errors with respect to the prediction performances of the derivative QSAR models. They also presented several illustrative examples of incorrect structures generated from either correct or incorrect SMILES. The main conclusions of the study were that small structural errors within a data set could lead to significant losses of predictive ability of QSAR models. The authors further demonstrated that manual curation of structural data leads to substantial increase in the model predictivity. This conclusion becomes especially important in light of the aforementioned study of Oprea et al. that cited a significant error rate in medicinal chemistry literature. Alarmed by these conclusions, we have examined several popular public databases of bioactive molecules to assess possible error rates in structure representation. For instance, the NCI AIDS Antiviral Screen (the paper describing this screen has been cited 57 times in PubMed Central only) comprises 42687 chemical records with their associated activity. Even quick analysis of this data sets revealed that 4202 (i.e., ca. 
10%) compounds should be either treated with care or removed before any cheminformatics investigation: 3350 compounds were mixtures and salts, and we detected 741 pairs of exact duplicates and stereoisomers, possessing different or opposite reported activities. Similar observations can be made for several well-known public databases such as NCI Human Tumor Cell Line and PubChem, as well as for smaller data sets studied and reported in published literature. For instance, in a study already mentioned above, six research teams each specializing in QSAR modeling (including the authors of this paper!) collaborated on the analysis of the Tetrahymena pyriformis aquatic toxicity data set comprising 1093 compounds. Later this exact data set was used by the organizers of CADASTER toxicity challenge. However, our re-examination of this data set showed the presence of six pairs of duplicates among 1093 compounds because of the presence of different metal cations in salts (with different aquatic toxicities measured by pIGC50 ranging from \u223c0.1 to 1 logarithmic unit. Furthermore, in the new external set compiled by the organizers of the CADASTER challenge to evaluate the comparative performance of competing models, eight out of 120 compounds were found to be structurally identical to modeling set compounds but with different toxicity values (\u223c0.3 pIGC50). Such disappointing observations may, at least in part, explain why QSAR models may sometimes fail, which is * To whom correspondence should be addressed. E-mail: alex_tropsha@ unc.edu. \u2020 University of North Carolina at Chapel Hill. \u2021 A.V. Bogatsky Physical-Chemical Institute NAS of Ukraine. J. Chem. Inf. Model. XXXX, xxx, 000 A"}
{"_id": "6f479ef860b94be2f3a39f1172611f0aee195094", "title": "Cyberbullying and Primary-School Aged Children : The Psychological Literature and the Challenge for Sociology", "text": "Cyberbullying is an international issue for schools, young people and their families. Whilst many research domains have explored this phenomenon, and bullying more generally, the majority of reported studies appear in the psychological and educational literatures, where bullying, and more recently, cyberbullying has been examined primarily at the individual level: amongst adolescents and young people, with a focus on the definition, its prevalence, behaviours, and impact. There also is growing evidence that younger children are increasingly accessing technology and engaging with social media, yet there is limited research dedicated to this younger age group. The purpose of this paper is to report on a systematic literature review from the psychological and educational research domains related to this younger age group, to inform future research across the disciplines. Younger children require different methods of engagement. This review highlights the methodological challenges associated with this age group present in the psychological literature, and argues for a greater use of sociological, child-centred approaches to data collection. This review examined studies published in English, between 2009 and 2014, and conducted with children aged 5\u201312 years, about their experiences with cyberbullying. Searches were conducted on seven key databases using keywords associated with cyberbullying and age of children. A Google Scholar search also examined published and unpublished reports. A total of 966 articles and reports were retrieved. A random peer review process was employed to establish inter-rater reliability and veracity of the review. Findings revealed 38 studies reported specifically on children aged 5\u201312 years. The dominant focus of these articles was on prevalence of cyberbullying, established through survey methodology. Few studies noted OPEN ACCESS Societies 2015, 5 493 impacts, understanding and behaviours or engaged children\u2019s independent voice. This review highlights current gaps in our knowledge about younger children\u2019s experiences with this form of bullying, and the importance of employing cross-disciplinary and developmentally appropriate methodologies to inform future research."}
{"_id": "32150cb42d549e47b5508c3593f8e54a69bb8002", "title": "Smooth contact-aware facial blendshapes transfer", "text": "In this paper, we propose a method to improve the quality of the facial blendshapes synthesized using deformation transfer by introducing the idea of contact and smoothness awareness. Simply applying deformation transfer for face models easily results in penetrations and separations which do not exist in the original data. As there is no concept of smoothness, crumpling artifacts appear where the mesh is extremely concentrated, e.g. eyelids and mouth corners. We solve these issues by first introducing the concept of virtual triangles and then adding a Laplacian energy term into the linear problem to be solved for the deformation transfer. Experimental results show that our system reduces the amount of artifacts caused by the original deformation transfer, improving the efficiency of the animation pipeline."}
{"_id": "843c7b9f500f6491ae56af4e2999e16558377bf6", "title": "Educational Software Applied in Teaching Electrocardiogram: A Systematic Review", "text": "Background\nThe electrocardiogram (ECG) is the most used diagnostic tool in medicine; in this sense, it is essential that medical undergraduates learn how to interpret it correctly while they are still on training. Naturally, they go through classic learning (e.g., lectures and speeches). However, they are not often efficiently trained in analyzing ECG results. In this regard, methodologies such as other educational support tools in medical practice, such as educational software, should be considered a valuable approach for medical training purposes.\n\n\nMethods\nWe performed a literature review in six electronic databases, considering studies published before April 2017. The resulting set comprises 2,467 studies. From this collection, 12 studies have been selected, initially, whereby we carried out a snowballing process to identify other relevant studies through the reference lists of these studies, resulting in five relevant studies, making up a total of 17 articles that passed all stages and criteria.\n\n\nResults\nThe results show that 52.9% of software types were tutorial and 58.8% were designed to be run locally on a computer. The subjects were discussed together with a greater focus on the teaching of electrophysiology and/or cardiac physiology, identifying patterns of ECG and/or arrhythmias.\n\n\nConclusions\nWe found positive results with the introduction of educational software for ECG teaching. However, there is a clear need for using higher quality research methodologies and the inclusion of appropriate controls, in order to obtain more precise conclusions about how beneficial the inclusion of such tools can be for the practices of ECG interpretation."}
{"_id": "15bfab0fdc01ceb865e3beb788aff760806dbf90", "title": "Self-disclosure on Facebook among female users and its relationship to feelings of loneliness", "text": "URLs: Self-disclosure on Facebook among female users and its relationship to feelings of loneliness Author(s): Al-Saggaf, Y.M. ; Nielsen, S.G. Computers in Human Behavior"}
{"_id": "a2415bf1c32c484f38adbbdd5633e0b67b6c8055", "title": "Optimal dimensioning of active cell balancing architectures", "text": "This paper presents an approach to optimal dimensioning of active cell balancing architectures, which are of increasing relevance in Electrical Energy Storages (EESs) for Electric Vehicles (EVs) or stationary applications such as smart grids. Active cell balancing equalizes the state of charge of cells within a battery pack via charge transfers, increasing the effective capacity and lifetime. While optimization approaches have been introduced into the design process of several aspects of EESs, active cell balancing architectures have, until now, not been systematically optimized in terms of their components. Therefore, this paper analyzes existing architectures to develop design metrics for energy dissipation, installation volume, and balancing current. Based on these design metrics, a methodology to efficiently obtain Pareto-optimal configurations for a wide range of inductors and transistors at different balancing currents is developed. Our methodology is then applied to a case study, optimizing two state-of-the-art architectures using realistic balancing algorithms. The results give evidence of the applicability of systematic optimization in the domain of cell balancing, leading to higher energy efficiencies with minimized installation space."}
{"_id": "882e93570eae184ae737bf0344cb50a2925e353d", "title": "Algorithms for Unevenly Spaced Time Series : Moving Averages and Other Rolling Operators", "text": "This paper describes algorithms for efficiently calculating certain rolling time series operators for unevenly spaced data. In particular, we show how to calculate simple moving averages (SMAs), exponential moving averages (EMAs), and related operators in linear time with respect to the number of observations in a time series. A web appendix provides an implementation of these algorithms in the programming language C and a package (forthcoming) for the statistical software R."}
{"_id": "e95ecc765814a21dc63f6ba9c8e721c1644bba31", "title": "Effects of strength training with blood flow restriction on torque, muscle activation and local muscular endurance in healthy subjects", "text": "The present study aimed to analyse the effects of six weeks of strength training (ST), with and without blood flow restriction (BFR), on torque, muscle activation, and local muscular endurance (LME) of the knee extensors. Thirty-seven healthy young individuals were divided into four groups: high intensity (HI), low intensity with BFR (LI+BFR), high intensity and low intensity + BFR (COMB), and low intensity (LI). Torque, muscle activation and LME were evaluated before the test and at the 2nd, 4th and 6th weeks after exercise. All groups had increased torque, muscle activation and LME (p<0.05) after the intervention, but the effect size and magnitude were greater in the HI, LI+BFR and COMB groups. In conclusion, the groups with BFR (LI+BFR and COMB) produced magnitudes of muscle activation, torque and LME similar to those of the HI group."}
{"_id": "9ee292dd1682d1787bd2ac7df59ef1dc1335774f", "title": "Discovery-driven graph summarization", "text": "Large graph datasets are ubiquitous in many domains, including social networking and biology. Graph summarization techniques are crucial in such domains as they can assist in uncovering useful insights about the patterns hidden in the underlying data. One important type of graph summarization is to produce small and informative summaries based on user-selected node attributes and relationships, and allowing users to interactively drill-down or roll-up to navigate through summaries with different resolutions. However, two key components are missing from the previous work in this area that limit the use of this method in practice. First, the previous work only deals with categorical node attributes. Consequently, users have to manually bucketize numerical attributes based on domain knowledge, which is not always possible. Moreover, users often have to manually iterate through many resolutions of summaries to identify the most interesting ones. This paper addresses both these key issues to make the interactive graph summarization approach more useful in practice. We first present a method to automatically categorize numerical attributes values by exploiting the domain knowledge hidden inside the node attributes values and graph link structures. Furthermore, we propose an interestingness measure for graph summaries to point users to the potentially most insightful summaries. Using two real datasets, we demonstrate the effectiveness and efficiency of our techniques."}
{"_id": "41d01883708e5f32b6480106ad373cbfdfced05f", "title": "Wound microbiology and associated approaches to wound management.", "text": "The majority of dermal wounds are colonized with aerobic and anaerobic microorganisms that originate predominantly from mucosal surfaces such as those of the oral cavity and gut. The role and significance of microorganisms in wound healing has been debated for many years. While some experts consider the microbial density to be critical in predicting wound healing and infection, others consider the types of microorganisms to be of greater importance. However, these and other factors such as microbial synergy, the host immune response, and the quality of tissue must be considered collectively in assessing the probability of infection. Debate also exists regarding the value of wound sampling, the types of wounds that should be sampled, and the sampling technique required to generate the most meaningful data. In the laboratory, consideration must be given to the relevance of culturing polymicrobial specimens, the value in identifying one or more microorganisms, and the microorganisms that should be assayed for antibiotic susceptibility. Although appropriate systemic antibiotics are essential for the treatment of deteriorating, clinically infected wounds, debate exists regarding the relevance and use of antibiotics (systemic or topical) and antiseptics (topical) in the treatment of nonhealing wounds that have no clinical signs of infection. In providing a detailed analysis of wound microbiology, together with current opinion and controversies regarding wound assessment and treatment, this review has attempted to capture and address microbiological aspects that are critical to the successful management of microorganisms in wounds."}
{"_id": "d39fe688a41ff3b105e39166a11edd454cf33d37", "title": "The Psychometric Properties of the Internet Addiction Test", "text": "There is growing concern about excessive Internet use and whether this can amount to an addiction. In researching this topic, a valid and reliable assessment instrument is essential. In her survey of Internet addiction, Young designed the Internet Addiction Test (IAT), which provides a basis for developments. The IAT has high face validity, but it has not been subjected to systematic psychometric testing. This study sought to replicate and expand Young's survey, and to examine the IAT more systematically. A questionnaire that existed as a Web page was devised, consisting of the IAT and 15 other questions regarding the respondents' demographic information and Internet usage. Participants were recruited through the Internet, yielding 86 valid responses (29 males and 57 females). Factor analysis of the IAT revealed six factors--salience, excessive use, neglecting work, anticipation, lack of control, and neglecting social life. These factors showed good internal consistency and concurrent validity, with salience being the most reliable. Younger and more recent users reported more problems, mainly concerning the neglect of work and social life. We expected interactive Internet functions to be more addictive; however, this was not found to be so. Overall, the IAT is a valid and reliable instrument that may be used in further research on Internet addiction."}
{"_id": "4e68fdf47bbe7fade49292121adb63a5a477f7e6", "title": "Futures of elderly care in Iran: A protocol with scenario approach", "text": "Background: The number of people aged 60 and older is increasing faster than other age groups worldwide. Iran will experience a sharp aging population increase in the next decades, and this will pose new challenges to the healthcare system. Since providing high quality aged-care services would be the major concern of the policymakers, this question arises that what types of aged care services should be organized in the coming 10 years? This protocol has been designed to develop a set of scenarios for the future of elderly care in Iran. Methods: In this study, intuitive logics approach and Global Business Network (GBN) model were used to develop scenarios for elderly care in Iran. In terms of perspective, the scenarios in this approach are normative, qualitative with respect to methodology and deductive in constructing the process of scenarios. The three phases of GBN model are as follows: 1) Orientation: Identifying strategic levels, stakeholders, participants and time horizon; 2) Exploration: Identifying the driving forces and key uncertainties; 3) Synthesis: Defining the scenario logics and constructing scenario storyline. Results: Presently, two phases are completed and the results will be published in mid-2016. Conclusion: This study delivers a comprehensive framework for taking appropriate actions in providing care for the elderly in the future. Moreover, policy makers should specify and provide the full range of services for the elderly, and in doing so, the scenarios and key findings of this study could be of valuable help."}
{"_id": "7946235f4f6a08bba2776652363a087a5cc5dbcc", "title": "Adolescent self-compassion : Associations with narcissism , self-esteem , aggression , and internalizing symptoms in at-risk males", "text": "Self-compassion is an attitude toward oneself that involves perceiving one\u2019s experiences as an opportunity for self-awareness and improvement, as well as limited self-judgment after failure. Self-compassion has not been extensively studied in adolescence, a time when self-perception and self-appraisals regarding success and failure take on notable importance. This study considered the connection between self-compassion, narcissism, self-esteem, aggression, and internalizing problems in a sample of 251 male adolescents, ages 16\u201318, attending a residential program. Self-compassion was negatively correlated with aggression and vulnerable narcissism and positively correlated with self-esteem. In general, selfcompassion did not exhibit the hypothesized protective effect on the relation between narcissism and aggression. Findings indicate that, as expected, self-compassion is indicative of a relatively secure, positive sense of self in adolescents. 2014 Elsevier Ltd. All rights reserved."}
{"_id": "84f060fd5e958ad38ddc1bcee99868fb3e0de4ef", "title": "Megaspeed Drive Systems: Pushing Beyond 1 Million r/min", "text": "The latest research in mesoscale drive systems is targeting rotational speeds toward 1 million r/min for a power range of 1-1 kW. Emerging applications for megaspeed drives (MegaNdrives) are to be found in future turbo compressor systems for fuel cells and heat pumps, generators/starters for portable nanoscale gas turbines, printed circuit board drilling and machining spindles, and electric power generation from pressurized gas flow. The selection of the machine type and the challenges involved in designing a machine for megaspeed operation such as the winding concepts, a mechanical rotor design capable of 1 000 000 r/min, the selection of magnetic materials for the stator, and the optimization concerning high-frequency losses and torque density are presented. Furthermore, a review of the advantageous inverter topologies, taking into account the extremely low stator inductance and possible high-speed bearing types such as ball bearings, air bearings, foil bearings, and magnetic bearings, are given. Finally, prototypes and experimental results originating from MegaNdrive research at Swiss Federal Institute of Technology Zurich are discussed and extreme temperature operation and power microelectricalmechanical system are identified as targets for future research."}
{"_id": "f313691d8910c231c91d48388a9163c91bb93a36", "title": "Potentials and limits of high-speed PM motors", "text": "This paper illustrates potentials and limits of high-speed permanent-magnet (PM) motors. The influence of materials chosen for PM, stator core, and retaining sleeve is highlighted. Slotted and slotless topologies are considered and compared, computing magnetic, electrical, mechanical, and thermal quantities by means of both analytical and finite-element approach. Thermal and PM demagnetization limits as well as rotor losses are evaluated. A criteria of optimization of the motor structure is described, with the diameter ratio and the iron flux density as main design variables."}
{"_id": "7ab809ba0bdd62cbdba44ec8dfcaf31d8ad2849b", "title": "An Integrated Flywheel Energy Storage System with a Homopolar Inductor Motor / Generator and High-Frequency Drive", "text": "The design, construction, and test of an integrated flywheel energy storage system with a homopolar inductor motor/generator and high-frequency drive is presented in this paper. The work is presented as an integrated design of flywheel system, motor, drive, and controller. The motor design features low rotor losses, a slotless stator, construction from robust and low cost materials, and a rotor that also serves as the energy storage rotor for the flywheel system. A high-frequency six-step drive scheme is used in place of pulsewidth modulation because of the high electrical frequencies. A speed-sensorless controller that works without state estimation is also described. A prototype of the flywheel system has been demonstrated at a power level of 9.4 kW, with an average system efficiency of 83% over a 30 000\u201360 000-r/min speed range."}
{"_id": "8240c06f8d7706fc41719aecf03e3c19ed866852", "title": "Analytical and Experimental Investigation of a Low Torque, Ultra-High Speed Drive System", "text": "New application areas are demanding the development of ultra-high speed electrical machines. Significant challenges exist to produce a test bench that is capable of withstanding operating speeds exceeding 500,000 rpm and measuring very low torque values of mNm. This paper describes a purpose built test bench that is able to characterize the operation of an ultra-high speed drive system. Furthermore, the calculation of electromagnetic losses, air friction and critical speeds is presented and a comparison of analytical and experimental results is given. The ultra-high speed machine has an efficiency of 95%, however, in the upper speed ranges the frictional losses become dominant"}
{"_id": "833ef3057b6560c0d249b69214b377d9d9ac93ec", "title": "Design of a 100 W, 500000 rpm permanent-magnet generator for mesoscale gas turbines", "text": "Mesoscale gas turbine generator systems are a promising solution for high energy and power density portable devices. This paper focuses on the design of a 100 W, 500000 rpm generator suitable for use with a gas turbine. The design procedure selects the suitable machine type and bearing technology, and determines the electromagnetic characteristics. The losses caused by the high frequency operation are minimized by optimizing the winding and the stator core material. The final design is a permanent-magnet machine with a volume of 3.5 cm/sup 3/ and experimental measurements from a test bench are presented."}
{"_id": "8d58fe45f099da0ca72dca7829a1cd0c21e7b46c", "title": "Chapter 1 Feature Engineering for Twitter-based Applications", "text": "1."}
{"_id": "c036d9343c7c6baf29b8900cf587941519163b98", "title": "Discovering Reader's Emotions Triggered from News Articles", "text": "The study of sentiment analysis aims to explore the occurrences of opinion words in a document, and their orientation and strength. Most existing researches focus on writer's emotions, which are the feelings that authors were expressing. In contrast, news articles are usually described in objective terms, in which opinion words are rare. In this paper, we propose to discover the reader's feelings when they read news articles. First, different feature units are extracted such as unigrams, bigrams, and segmented words. We compare several feature selection methods including document frequency variation, information gain, mutual information, and chi-square test, to select the candidate features for sentiment classification. Then, the performance for multi-class classification and multiple binary classification are evaluated. The experimental results on real news articles show the effectiveness of the proposed method for discovering reader's emotions triggered from news articles. Further investigation is needed to validate the performance in larger scales."}
{"_id": "9be90e9d40ded9ea8dafd2fad087681ffdddbd03", "title": "Visual discrimination learning requires sleep after training", "text": "Performance on a visual discrimination task showed maximal improvement 48\u201396 hours after initial training, even without intervening practice. When subjects were deprived of sleep for 30 hours after training and then tested after two full nights of recovery sleep, they showed no significant improvement, despite normal levels of alertness. Together with previous findings that subjects show no improvement when retested the same day as training, this demonstrates that sleep within 30 hours of training is absolutely required for improved performance."}
{"_id": "1e4b8045eec4790d2a92e11d8465d88b0b0c1be0", "title": "Automatic Simplification of Obfuscated JavaScript Code: A Semantics-Based Approach", "text": "JavaScript is a scripting language that is commonly used to create sophisticated interactive client-side web applications. However, JavaScript code can also be used to exploit vulnerabilities in the web browser and its extensions, and in recent years it has become a major mechanism for web-based malware delivery. In order to avoid detection, attackers often take advantage of the dynamic nature of JavaScript to create highly obfuscated code. This paper describes a semantics-based approach for automatic deobfuscation of JavaScript code. Experiments using a prototype implementation indicate that our approach is able to penetrate multiple layers of complex obfuscations and extract the core logic of the computation, which makes it easier to understand the behavior of the code."}
{"_id": "362ba8317aba71c78dafca023be60fb71320381d", "title": "Nighttime face recognition at large standoff: Cross-distance and cross-spectral matching", "text": "Face recognition in surveillance systems is important for security applications, especially in nighttime scenarios when the subject is far away from the camera. However, due to the face image quality degradation caused by large camera standoff and low illuminance, nighttime face recognition at large standoff is challenging. In this paper, we report a system that is capable of collecting face images at large standoff in both daytime and nighttime, and present an augmented heterogeneous face recognition (AHFR) approach for cross-distance (e.g., 150m probe vs. 1m gallery) and cross-spectral (near-infrared probe vs. visible light gallery) face matching. We recover high-quality face images from degraded probe images by proposing an image restoration method based on Locally Linear Embedding (LLE). The restored face images are matched to the gallery by using a heterogeneous face matcher. Experimental results show that the proposed AHFR approach significantly outperforms the state-of-the-art methods for cross-spectral and cross-distance face matching."}
{"_id": "f40ef58372fed77000a057db1f1454847189c26d", "title": "CCR - A Content-Collaborative Reciprocal Recommender for Online Dating", "text": "We present a new recommender system for online dating. Using a large dataset from a major online dating website, we first show that similar people, as defined by a set of personal attributes, like and dislike similar people and are liked and disliked by similar people. This analysis provides the foundation for our Content-Collaborative Reciprocal (CCR) recommender approach. The content-based part uses selected user profile features and similarity measure to generate a set of similar users. The collaborative filtering part uses the interactions of the similar users, including the people they like/dislike and are liked/disliked by, to produce reciprocal recommendations. CCR addresses the cold start problem of new users joining the site by being able to provide recommendations immediately, based on their profiles. Evaluation results show that the success rate of the recommendations is 69.26% compared with a baseline of 35.19% for the top 10 ranked recommendations."}
{"_id": "b7fae0bd4b12e1d066acb77da3ae8ce4f2f18779", "title": "Patch-based Image Correlation with Rapid Filtering", "text": "This paper describes a patch-based approach for rapid image correlation or template matching. By representing a template image with an ensemble of patches, the method is robust with respect to variations such as local appearance variation, partial occlusion, and scale changes. Rectangle filters are applied to each image patch for fast filtering based on the integral image representation. A new method is developed for feature dimension reduction by detecting the \"salient\" image structures given a single image. Experiments on a variety images show the success of the method in dealing with different variations in the test images. In terms of computation time, the approach is faster than traditional methods by up to two orders of magnitude and is at least three times faster than a fast implementation of normalized cross correlation."}
{"_id": "f35eccb6459a4240cf9567e93c401b15a3bbda61", "title": "Music and Cognitive Abilities", "text": "Does music make you smarter? Music listening and music lessons have been claimed to confer intellectual advantages. Any association between music and intellectual functioning would be notable only if the benefits apply reliably to nonmusical abilities and if music is unique in producing the effects. The available evidence indicates that music listening leads to enhanced performance on a variety of cognitive tests, but that such effects are shortterm and stem from the impact of music on arousal level and mood, which, in turn, affect cognitive performance; experiences other than music listening have similar effects. Music lessons in childhood tell a different story. They are associated with small but general and long-lasting intellectual benefits that cannot be attributed to obvious confounding variables such as family income and parents\u2019 education. The mechanisms underlying this association have yet to be determined. KEYWORDS\u2014music cognition; intelligence; cognitive development People\u2019s eagerness for quick fixes can be seen in the seemingly irrational appeal of crash diets, get-rich-quick schemes, and getsmart-fast programs. Is the claim of music as a route to enhanced intelligence yet another self-improvement fantasy? In the pages that follow, I outline the origins of the claim and evaluate the available evidence. Intellectual benefits of exposure to music would be noteworthy if (a) they extend to nonmusical abilities, (b) they are systematic and reliable, and (c) they are more likely to result from hearing or playing music than from other activities. Unlike in other cultures, where virtually everyone participates in music making, musical experiences in Western society typically involve listening and, only in some cases, lessons and performing. Music listening is ubiquitous, both purposefully (e.g., listening to the radio) and incidentally (e.g., background music in stores and restaurants). By contrast, relatively few individuals take music lessons for several years. The consequences of music listening are likely to differ quantitatively and qualitatively from those of music lessons."}
{"_id": "6c3a1c4709cf5f1fe11f4f408f38945f0195ec17", "title": "A Statistical Nonparametric Approach of Face Recognition: Combination of Eigenface & Modified k-Means Clustering", "text": "Keywords: Eigenface, face Recognition, k-means clustering, principle component analysis (PCA)"}
{"_id": "13b57229d87071b5438a9cd7921db320a0e1e5a1", "title": "Modified Caesar Cipher for Better Security Enhancement", "text": "Encryption is the process of scrambling a message so that only the intended recipient can read it. With the fast progression of digital data exchange in electronic way, Information Security is becoming much more important in data storage and transmission. Caesar cipher is a mono alphabetic cipher. It is a type of substitution cipher in which each letter in the plaintext is replaced by a letter. In this paper, author modified the traditional Caesar cipher and fixed the key size as one. Another thing alphabet index is checked if the alphabet index is even then increase the value by one else alphabet index is odd decrease the key value by one. Encryption and scrambling of the letters in the Cipher Text."}
{"_id": "442f574724af0dd2c04b7974510740ae8910f359", "title": "Addressing Legal Requirements in Requirements Engineering", "text": "Legal texts, such as regulations and legislation, are playing an increasingly important role in requirements engineering and system development. Monitoring systems for requirements and policy compliance has been recognized in the requirements engineering community as a key area for research. Similarly, regulatory compliance is critical in systems that are governed by regulations and law, especially given that non-compliance can result in both financial and criminal penalties. Working with legal texts can be very challenging, however, because they contain numerous ambiguities, cross-references, domain-specific definitions, and acronyms, and are frequently amended via new regulations and case law. Requirements engineers and compliance auditors must be able to identify relevant regulations, extract requirements and other key concepts, and monitor compliance throughout the software lifecycle. This paper surveys research efforts over the past 50 years in handling legal texts for systems development. These efforts include the use of symbolic logic, logic programming, first-order temporal logic, deontic logic, defeasible logic, goal modeling, and semi-structured representations. This survey can aid requirements engineers and auditors to better specify, monitor, and test software systems for compliance."}
{"_id": "81e8916b9d4eeb47c2e96c823b36d032b244842f", "title": "Modeling the Clickstream : Implications for Web-Based Advertising Efforts", "text": "Advertising revenues have become a critical element in the business plans of most commercial Web sites. Despite extensive research on advertising in traditional media, managers and researchers face considerable uncertainty about its role in the online environment. The benefits offered by the medium notwithstanding, the lack of models to measure and predict advertising performance is a major deterrent to acceptance of the Web by mainstream advertisers. Presently, existing media models based on aggregate vehicle and advertising exposure are being adapted which underutilize the unique characteristics of the medium. What is required are methods that measure how consumers interact with advertising stimuli in ad-supported Web sites, going beyond mere counts of individual \"eyeballs\" attending to media. Given the enormous potential of this dynamic medium, academic investigation is sorely needed before cost implications and skepticism endanger the ability of the medium to generate and maintain advertiser support. This paper addresses advertiser and publisher need to understand and predict how consumers interact with advertising stimuli placed at Web sites. We do so by developing a framework to formally model the commercial \u201cclickstream\u201d at an advertiser supported Web site with mandatory visitor registration. Consumers visiting such Web sites are conceptualized as deriving utility from navigating through editorial and advertising content subject to time constraints. The clickstream represents a new source of consumer response data detailing the content and banner ads that consumers click on during the online navigation process. To our knowledge, this paper is the first to model the clickstream from an actual commercial Web site. Clickstream data allow us to investigate how consumers respond to advertising over time at an individual level. Such modeling is not possible in broadcast media because the data do not exist. Our results contrast dramatically from those typically found in traditional broadcast media. First, 3 the effect of repeated exposures to banner ads is U-shaped. This is in contrast with the inverted Ushaped response found in broadcast media. Second, the differential effects of each successive ad exposure is initially negative, but non-linear, and becomes positive later at higher levels of passive ad exposures. Third, the negative response to repeated banner ad exposures increases for consumers who visit the site more frequently. Fourth, in contrast to findings in traditional media the effect of exposure to competing ads is either insignificant or positive. However, carryover effects of past advertising exposures are similar to those proposed in broadcast media. Finally, heterogeneity in cumulative effects of advertising exposure and involvement across consumers captured by cumulative click behavior across visits and click behavior during the visit was found to significantly contribute to differences in click response. This has implications for dynamic ad placement based on past history of consumer exposure and interaction with advertising at the Web site. Response parameters of consumer visitor segments can be used by media buyers and advertisers to understand the benefits consumers seek from the advertising vehicle, and thus guide advertising media placement decisions. 
In sum, our modeling effort offers an important first look into how advertisers can use the Web medium to maximize their desired advertising outcomes."}
{"_id": "c8010801fe989281d7e1a498b7331714d4243f19", "title": "Improving Fraud and Abuse Detection in General Physician Claims: A Data Mining Study.", "text": "BACKGROUND\nWe aimed to identify the indicators of healthcare fraud and abuse in general physicians' drug prescription claims, and to identify a subset of general physicians that were more likely to have committed fraud and abuse.\n\n\nMETHODS\nWe applied data mining approach to a major health insurance organization dataset of private sector general physicians' prescription claims. It involved 5 steps: clarifying the nature of the problem and objectives, data preparation, indicator identification and selection, cluster analysis to identify suspect physicians, and discriminant analysis to assess the validity of the clustering approach.\n\n\nRESULTS\nThirteen indicators were developed in total. Over half of the general physicians (54%) were 'suspects' of conducting abusive behavior. The results also identified 2% of physicians as suspects of fraud. Discriminant analysis suggested that the indicators demonstrated adequate performance in the detection of physicians who were suspect of perpetrating fraud (98%) and abuse (85%) in a new sample of data.\n\n\nCONCLUSION\nOur data mining approach will help health insurance organizations in low-and middle-income countries (LMICs) in streamlining auditing approaches towards the suspect groups rather than routine auditing of all physicians."}
{"_id": "09f13c590f19dce53dfd8530f8cbe8044cce33ed", "title": "Control of Flight Operation of a Quad rotor AR . Drone Using Depth Map from Microsoft Kinect Sensor", "text": "In recent years, many user-interface devices appear for managing variety the physical interactions. The Microsft Kinect camera is a revolutionary and useful depth camera giving new user experience of interactive gaming on the Xbox platform through gesture or motion detection. In this paper we present an approach for control the Quadrotor AR.Drone using the Microsft Kinect sensor."}
{"_id": "0c8cd095e42d995b38905320565e17d6afadf13f", "title": "Discriminant kernel and regularization parameter learning via semidefinite programming", "text": "Regularized Kernel Discriminant Analysis (RKDA) performs linear discriminant analysis in the feature space via the kernel trick. The performance of RKDA depends on the selection of kernels. In this paper, we consider the problem of learning an optimal kernel over a convex set of kernels. We show that the kernel learning problem can be formulated as a semidefinite program (SDP) in the binary-class case. We further extend the SDP formulation to the multi-class case. It is based on a key result established in this paper, that is, the multi-class kernel learning problem can be decomposed into a set of binary-class kernel learning problems. In addition, we propose an approximation scheme to reduce the computational complexity of the multi-class SDP formulation. The performance of RKDA also depends on the value of the regularization parameter. We show that this value can be learned automatically in the framework. Experimental results on benchmark data sets demonstrate the efficacy of the proposed SDP formulations."}
{"_id": "4cf04572c80dc70b3ff353b506f782d1bf43e4a0", "title": "To have or not to have Internet at home: Implications for online shopping", "text": "This paper analyzes the individual decision of online shopping, in terms of socioeconomic characteristics, internet related variables and location factors. Since online shopping is only observed for internet users, we use the two-step Heckman\u0092s model to correct sample selection. We argue that one of the relevant variables to explain online shopping, the existence of a home internet connection, can be endogenous. To account for this potential endogeneity, we jointly estimate the probability of online shopping and the probability of having internet at home. The dataset used in this paper comes from the Household Survey of ICT Equipment and Usage, conducted by the Spanish Statistical O\u00a2 ce on an annual basis. Our analysis covers the period 2004-2009. Our results show that not accounting for the endogeneity of having internet at home, yields to overestimate its e\u00a4ect on the probability of buying online. This \u0085nding can be important in the design of public policies aimed at enhancing e-commerce through providing households with internet connection at home. We also show that, compared to other variables that are also relevant for online shopping, the e\u00a4ect of internet at home is quite small. JEL Classi\u0085cation: D12, C35, C25"}
{"_id": "ca78c8c4dbe4c92ba90c8f6e1399b78ced3cf997", "title": "Surprisingly Easy Hard-Attention for Sequence to Sequence Learning", "text": "In this paper we show that a simple beam approximation of the joint distribution between attention and output is an easy, accurate, and efficient attention mechanism for sequence to sequence learning. The method combines the advantage of sharp focus in hard attention and the implementation ease of soft attention. On five translation and two morphological inflection tasks we show effortless and consistent gains in BLEU compared to existing attention mechanisms."}
{"_id": "bac736c70f85ffdd17bd2ba514e84e5db987f07a", "title": "Parkinsons disease patients perspective on context aware wearable technology for auditive assistance", "text": "In this paper we present a wearable assistive technology for the freezing of gait (FOG) symptom in patients with Parkinsons disease (PD), with emphasis on subjective user appreciation. Patients with advanced PD often suffer from FOG, which is a sudden and transient inability to move. It often causes falls, interferes with daily activities and significantly impairs quality of life. Because gait deficits in PD patients are often resistant to pharmacologic treatment, effective non+pharmacologic treatments are of special interest."}
{"_id": "d60fcb9b0ff0484151975c86c70c0cb314468ffb", "title": "Achieving IT-Business Strategic Alignment via Enterprise-Wide Implementation of Balanced Scorecards", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the \u201cContent\u201d) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content."}
{"_id": "abdb694ab4b1cb4f54f07ed16a657765ce8c47f5", "title": "Innovation characteristics and innovation adoption-implementation: A meta-analysis of findings", "text": "A review and meta-analysis was performed of seventy-five articles concerned with innovation characteristics and their relationship to innovation adoption and implementation. One part of the analysis consisted of constructing a methodological profile of the existing studies, and contrasting this with a hypothetical optimal approach. A second part of the study employed meta-analytic statistical techniques to assess the generality and consistency of existing empirical findings. Three innovation characteristics (compatibility, relative advantage, and complexity) had the most consistent significant relationship to innovation adoption. Suggestions for future research in the area were made."}
{"_id": "2402312da67861c63a103b92cfac7ee101ff0544", "title": "A Novel Bayesian Framework for Discriminative Feature Extraction in Brain-Computer Interfaces", "text": "As there has been a paradigm shift in the learning load from a human subject to a computer, machine learning has been considered as a useful tool for Brain-Computer Interfaces (BCIs). In this paper, we propose a novel Bayesian framework for discriminative feature extraction for motor imagery classification in an EEG-based BCI in which the class-discriminative frequency bands and the corresponding spatial filters are optimized by means of the probabilistic and information-theoretic approaches. In our framework, the problem of simultaneous spatiospectral filter optimization is formulated as the estimation of an unknown posterior probability density function (pdf) that represents the probability that a single-trial EEG of predefined mental tasks can be discriminated in a state. In order to estimate the posterior pdf, we propose a particle-based approximation method by extending a factored-sampling technique with a diffusion process. An information-theoretic observation model is also devised to measure discriminative power of features between classes. From the viewpoint of classifier design, the proposed method naturally allows us to construct a spectrally weighted label decision rule by linearly combining the outputs from multiple classifiers. We demonstrate the feasibility and effectiveness of the proposed method by analyzing the results and its success on three public databases."}
{"_id": "91c0245ceb0ba894a3f40e674a253e0893f3bcb6", "title": "High frequency data in financial markets : Issues and applications", "text": "The development of high frequency data bases allows for empirical investigations of a wide range of issues in the financial markets. In this paper, we set out some of the many important issues connected with the use, analysis, and application of high-frequency data sets. These include the effects of market structure on the availability and interpretation of the data, methodological issues such as the treatment of time, the effects of intra-day seasonals, and the effects of time-varying volatility, and the information content of various market data. We also address using high frequency data to determine the linkages between markets and to determine the applicability of temporal trading rules. The paper concludes with a discussion of the issues for future research. \u00a9 1997 Elsevier Science B.V. JEL classification: Cl0; C50; F30; G14; GI5"}
{"_id": "28e503e9538bee6f4bf51c5cc6388ceee452d7e7", "title": "Understanding the Metropolis-Hastings Algorithm", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/astata.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission."}
{"_id": "4b7d5d9203e4ec2f03c1ffeff97fe28ead9d9db1", "title": "A Bayesian Approach to Filtering Junk E-Mail", "text": "In addressing the growing problem of junk E-mail on the Internet, we examine methods for the automated construction of filters to eliminate such unwanted messages from a user\u2019s mail stream. By casting this problem in a decision theoretic framework, we are able to make use of probabilistic learning methods in conjunction with a notion of differential misclassification cost to produce filters Which are especially appropriate for the nuances of this task. While this may appear, at first, to be a straight-forward text classification problem, we show that by considering domain-specific features of this problem in addition to the raw text of E-mail messages, we can produce much more accurate filters. Finally, we show the efficacy of such filters in a real world usage scenario, arguing that this technology is mature enough for deployment."}
{"_id": "6aedc02edbd48c06a70b3e7bfef8888ded61c83d", "title": "Topic sentiment mixture: modeling facets and opinions in weblogs", "text": "In this paper, we define the problem of topic-sentiment analysis on Weblogs and propose a novel probabilistic model to capture the mixture of topics and sentiments simultaneously. The proposed Topic-Sentiment Mixture (TSM) model can reveal the latent topical facets in a Weblog collection, the subtopics in the results of an ad hoc query, and their associated sentiments. It could also provide general sentiment models that are applicable to any ad hoc topics. With a specifically designed HMM structure, the sentiment models and topic models estimated with TSM can be utilized to extract topic life cycles and sentiment dynamics. Empirical experiments on different Weblog datasets show that this approach is effective for modeling the topic facets and sentiments and extracting their dynamics from Weblog collections. The TSM model is quite general; it can be applied to any text collections with a mixture of topics and sentiments, thus has many potential applications, such as search result summarization, opinion tracking, and user behavior prediction."}
{"_id": "0853539a42df99848d0a046eaa33457eea2ad092", "title": "Beating the Bookies : Predicting the Outcome of Soccer Games", "text": "Soccer is the most popular sport in the world. With an estimated 3.5 billion fans around the world, it is enjoyed by nearly half the world\u2019s population. My project asks the following question: Is it possible to predict the outcome of soccer games with high accuracy automatically? This questions is not merely of interest to fans who must satisfy their curiosity, but it is also of significant economical relevance. A BBC article from 2013 (BBC) estimates that the soccer betting market is worth between 500 and 700 billion USD a year."}
{"_id": "ec8977e127c1bd1ffce46bc0f2532f420e69e97f", "title": "Exploiting Geographical Neighborhood Characteristics for Location Recommendation", "text": "Geographical characteristics derived from the historical check-in data have been reported effective in improving location recommendation accuracy. However, previous studies mainly exploit geographical characteristics from a user's perspective, via modeling the geographical distribution of each individual user's check-ins. In this paper, we are interested in exploiting geographical characteristics from a location perspective, by modeling the geographical neighborhood of a location. The neighborhood is modeled at two levels: the instance-level neighborhood defined by a few nearest neighbors of the location, and the region-level neighborhood for the geographical region where the location exists. We propose a novel recommendation approach, namely Instance-Region Neighborhood Matrix Factorization (IRenMF), which exploits two levels of geographical neighborhood characteristics: a) instance-level characteristics, i.e., nearest neighboring locations tend to share more similar user preferences; and b) region-level characteristics, i.e., locations in the same geographical region may share similar user preferences. In IRenMF, the two levels of geographical characteristics are naturally incorporated into the learning of latent features of users and locations, so that IRenMF predicts users' preferences on locations more accurately. Extensive experiments on the real data collected from Gowalla, a popular LBSN, demonstrate the effectiveness and advantages of our approach."}
{"_id": "282fadbd4b013a360877b999c6cd302190c56e50", "title": "Towards a Universal Grammar for Natural Language Processing", "text": "Universal Dependencies is a recent initiative to develop cross-linguistically consistent treebank annotation for many languages, with the goal of facilitating multilingual parser development, cross-lingual learning, and parsing research from a language typology perspective. In this talk, I outline the motivation behind the initiative and explain how the basic design principles follow from these requirements. I then discuss the different components of the annotation standard, including principles for word segmentation, morphological annotation, and syntactic annotation. I conclude with some thoughts on the challenges that lie ahead."}
{"_id": "62ea4b4fa88bd19e89017f71ec867f29f0b1a6dd", "title": "High voltage direct current transmission - A Review, Part II - Converter technologies", "text": "This paper is the 2nd part of the survey titled \u201cHigh Voltage Direct Current Transmission - A Review, Part I\u201d. The main converter technologies and HVDC systems' components will be discussed in this complementary paper."}
{"_id": "748381c36e82ebca65fa7265d08812a30ef8a954", "title": "Energy characterization of a tiled architecture processor with on-chip networks", "text": "Tiled architectures provide a paradigm for designers to turn silicon resources into processors with burgeoning quantities of programmable functional units and memories. The architecture has a dual responsibility: first, it must expose these resources in a way that is programmable. Second, it needs to manage the power associated with such resources.We present the power management facilities of the 16-tile Raw microprocessor. This design selectively turns on and off 48 SRAM macros, 96 functional unit clusters, 32 fetch units, and over 250 unique processor pipeline stages, all according to the needs of the computation and environment at hand."}
{"_id": "cb03f2d998b4978aacf6dd795fac01ac2b93608e", "title": "The nature of feelings: evolutionary and neurobiological origins", "text": "Feelings are mental experiences of body states. They signify physiological need (for example, hunger), tissue injury (for example, pain), optimal function (for example, well-being), threats to the organism (for example, fear or anger) or specific social interactions (for example, compassion, gratitude or love). Feelings constitute a crucial component of the mechanisms of life regulation, from simple to complex. Their neural substrates can be found at all levels of the nervous system, from individual neurons to subcortical nuclei and cortical regions."}
{"_id": "5bcb2e10f19e923df54f827c7cdd60f3db533268", "title": "Neural Network training model for weather forecasting using Fireworks Algorithm", "text": "Weather forecasting is the application of science and technology in order to predict the weather conditions. It is important for agricultural and industrial sectors. Models of Artificial Neural Networks with supervised learning paradigm are suitable for weather forecasting in complexity atmosphere. Training algorithm is required for providing weight and bias values to the model. This research proposed a weather forecasting method using Artificial Neural Networks trained by Fireworks Algorithm. Fireworks Algorithm is a recently developed Swarm Intelligence Algorithm for optimization. The main objective of the method is to predict daily mean temperature based on various measured parameters gained from the Meteorological Station, located in Bangkok. The experimental results indicate that the proposed method is advantageous for weather forecasting."}
{"_id": "518fd110bbf86df5259fb99126173d626a2ff744", "title": "Learning preferences for manipulation tasks from online coactive feedback", "text": "We consider the problem of learning preferences over trajectories for mobile manipulators such as personal robots and assembly line robots. The preferences we learn are more intricate than simple geometric constraints on trajectories; they are rather governed by the surrounding context of various objects and human interactions in the environment. We propose a coactive online learning framework for teaching preferences in contextually rich environments. The key novelty of our approach lies in the type of feedback expected from the user: the human user does not need to demonstrate optimal trajectories as training data, but merely needs to iteratively provide trajectories that slightly improve over the trajectory currently proposed by the system. We argue that this coactive preference feedback can be more easily elicited than demonstrations of optimal trajectories. Nevertheless, theoretical regret bounds of our algorithm match the asymptotic rates of optimal trajectory algorithms. We implement our algorithm on two high-degree-of-freedom robots, PR2 and Baxter, and present three intuitive mechanisms for providing such incremental feedback. In our experimental evaluation we consider two context rich settings, household chores and grocery store checkout, and show that users are able to train the robot with just a few feedbacks (taking only a few minutes)."}
{"_id": "d5e7342a1a364afab737a93c1abe6e0c12e4dd9e", "title": "Authentication Using Mobile Phone as a Security Token", "text": "Today security concerns are on the rise in all areas industries such as banks, governmental applications, healthcare industry, militaryorganization, educational institutions etc, with one common weak link being \u201cpasswords\u201d. Several proper strategies for using passwords have been proposed. Some of which are very difficult to use and others might not meet the company\u2019s security concerns. The usage of passwords for authentication is no longer sufficient and stronger authentication schemes are necessary. Two factor authentication uses elements or devices such as tokens and ATM cards. We have proposed to resolve the password problem, which could be inconvenient for the user and costly for the service providers to combat otherwise. To avoid the usage of additional device, the mobile phone is adopted as security token. In this paper several different authentication solutions using the mobile as authentication token are discussed, where these solutions vary in complexity, strength, security and user friendliness. One of the authentication schemes (OTP solution) is implemented to verify their usability. Hence, a classification and evaluation of the different solutions is provided according to defined criteria. Keywords\u2010 Two-factor authentication, Security token, UPIF, IMSI, IMEI, one time password (OTP)."}
{"_id": "5677e27a4317f61044d6538b33cbb0d6f0be6752", "title": "Survey on Performance of CMOS Sensors", "text": "This paper presents a survey on CMOS Image Sensors (CIS) based on active Pixel Sensors (APS). Due to its properties, the APS is used in low voltage, low power and its applications in portable imaging systems such as disposable cameras and on-chip autonomous security cameras are random access, faster readout and low noise with peripheral circuitry."}
{"_id": "307c0b7c163cc50d20483c18034be62b5ee22004", "title": "Fluid Vector Flow and Applications in Brain Tumor Segmentation", "text": "In this paper, we propose a new approach that we call the ldquofluid vector flowrdquo (FVF) active contour model to address problems of insufficient capture range and poor convergence for concavities. With the ability to capture a large range and extract concave shapes, FVF demonstrates improvements over techniques like gradient vector flow, boundary vector flow, and magnetostatic active contour on three sets of experiments: synthetic images, pediatric head MRI images, and brain tumor MRI images from the Internet brain segmentation repository."}
{"_id": "b9d7fd4458b5bf50a1b08335692c11d054a5da93", "title": "Compressing deep neural networks using a rank-constrained topology", "text": "We present a general approach to reduce the size of feedforward deep neural networks (DNNs). We propose a rankconstrained topology, which factors the weights in the input layer of the DNN in terms of a low-rank representation: unlike previous work, our technique is applied at the level of the filters learned at individual hidden layer nodes, and exploits the natural two-dimensional time-frequency structure in the input. These techniques are applied on a small-footprint DNN-based keyword spotting task, where we find that we can reduce model size by 75% relative to the baseline, without any loss in performance. Furthermore, we find that the proposed approach is more effective at improving model performance compared to other popular dimensionality reduction techniques, when evaluated with a comparable number of parameters."}
{"_id": "db5c41cebcba46f69164e27aa9e8d60fe1e4912a", "title": "RMDL: Random Multimodel Deep Learning for Classification", "text": "The continually increasing number of complex datasets each year necessitates ever improving machine learning methods for robust and accurate categorization of these data. This paper introduces Random Multimodel Deep Learning (RMDL): a new ensemble, deep learning approach for classification. Deep learning models have achieved state-of-the-art results across many domains. RMDL solves the problem of finding the best deep learning structure and architecture while simultaneously improving robustness and accuracy through ensembles of deep learning architectures. RDML can accept as input a variety data to include text, video, images, and symbolic. This paper describes RMDL and shows test results for image and text data including MNIST, CIFAR-10, WOS, Reuters, IMDB, and 20newsgroup. These test results show that RDML produces consistently better performance than standard methods over a broad range of data types and classification problems."}
{"_id": "9f927249d7b33b91ca23f8820e21b22a6951a644", "title": "Initial Beam Association in Millimeter Wave Cellular Systems: Analysis and Design Insights", "text": "Enabling the high data rates of millimeter wave (mmWave) cellular systems requires deploying large antenna arrays at both the basestations and mobile users. Prior work on coverage and rate of mmWave cellular networks focused on the case when basestations and mobile beamforming vectors are predesigned for maximum beamforming gains. Designing beamforming/combining vectors, though, requires training, which may impact both the SINR coverage and rate of mmWave systems. This paper evaluates mmWave cellular network performance while accounting for the beam training/association overhead. First, a model for the initial beam association is developed based on beam sweeping and downlink control pilot reuse. To incorporate the impact of beam training, a new metric, called the effective reliable rate, is defined and adopted. Using stochastic geometry, the effective rate of mmWave cellular networks is derived for two special cases: near-orthogonal pilots and full pilot reuse. Analytical and simulation results provide insights into the answers of two important questions. First, what is the impact of beam association on mmWave network performance? Then, should orthogonal or reused pilots be employed? The results show that unless the employed beams are very wide, initial beam training with full pilot reuse is nearly as good as perfect beam alignment."}
{"_id": "49a1adb73725feceed47ec08a23356b3e26dd266", "title": "Driving-Style-Based Codesign Optimization of an Automated Electric Vehicle: A Cyber-Physical System Approach", "text": "This paper studies the codesign optimization approach to determine how to optimally adapt automatic control of an intelligent electric vehicle to driving styles. A cyber-physical system (CPS)-based framework is proposed for codesign optimization of the plant and controller parameters for an automated electric vehicle, in view of vehicle's dynamic performance, drivability, and energy along with different driving styles. System description, requirements, constraints, optimization objectives, and methodology are investigated. Driving style recognition algorithm is developed using unsupervised machine learning and validated via vehicle experiments. Adaptive control algorithms are designed for three driving styles with different protocol selections. Performance exploration method is presented. Parameter optimizations are implemented based on the defined objective functions. Test results show that an automated vehicle with optimized plant and controller can perform its tasks well under aggressive, moderate, and conservative driving styles, further improving the overall performance. The results validate the feasibility and effectiveness of the proposed CPS-based codesign optimization approach."}
{"_id": "dfedc4173a1000b20d7bb0ef38ce72ffdea88dfb", "title": "Overview of the Photovoltaic System Topologies", "text": "\u2014 Due to increased interest for solar energy harvesting systems in recent years the number of developed system types is large. In order to choose the optimal one for future system development the analysis of common architectures being used for photovoltaic systems has been done. The paper contains the small description of different converter architectures and analysis of prototyped or simulated systems proposed in the references. Systems of different distribution level are observed from more integrated to more distributed topologies. Distribution level comes hand-to-hand to the overall system efficiency. Less distributed systems in case of low irradiation disturbance show better performance due to higher efficiency that could be achieved relatively easy. More distributed systems have better performance in cases of the frequent partial shading for example in building-integrated photovoltaic systems when there are relatively high objects (like trees, chimneys and other buildings) close to the PV-panel mounting place that can partially shade the PV. But this type of systems have large number of small-power converters that usually has smaller efficiency as the design of effective and small-power converters is hard technical task. The reason of better performance is that distributed PV-systems have larger number of maximum power point tracking converters and therefore increase the harvest of energy by keeping the photovoltaic power generation process optimal. This paper is made for choice of most suitable system topology for future PV-system implementation."}
{"_id": "ffd93a001fa5094263ac0ff1f29032667aa89476", "title": "Semantic Parsing with Syntax- and Table-Aware SQL Generation", "text": "We present a generative model to map natural language questions into SQL queries. Existing neural network based approaches typically generate a SQL query wordby-word, however, a large portion of the generated results is incorrect or not executable due to the mismatch between question words and table contents. Our approach addresses this problem by considering the structure of table and the syntax of SQL language. The quality of the generated SQL query is significantly improved through (1) learning to replicate content from column names, cells or SQL keywords; and (2) improving the generation of WHERE clause by leveraging the column-cell relation. Experiments are conducted on WikiSQL, a recently released dataset with the largest questionSQL pairs. Our approach significantly improves the state-of-the-art execution accuracy from 69.0% to 74.4%."}
{"_id": "6bd1f2782d6c8c3066d4e7d7e3afb995d79fa3dd", "title": "Deep Neural Networks for Semantic Segmentation of Multispectral Remote Sensing Imagery", "text": "A semantic segmentation algorithm must assign a label to every pixel in an image. Recently, semantic segmentation of RGB imagery has advanced significantly due to deep learning. Because creating datasets for semantic segmentation is laborious, these datasets tend to be significantly smaller than object recognition datasets. This makes it difficult to directly train a deep neural network for semantic segmentation, because it will be prone to overfitting. To cope with this, deep learning models typically use convolutional neural networks pre-trained on large-scale image classification datasets, which are then fine-tuned for semantic segmentation. For non-RGB imagery, this is currently not possible because large-scale labeled non-RGB datasets do not exist. In this paper, we developed two deep neural networks for semantic segmentation of multispectral remote sensing imagery. Prior to training on the target dataset, we initialize the networks with large amounts of synthetic multispectral imagery. We show that this significantly improves results on real-world remote sensing imagery, and we establish a new state-of-the-art result on the challenging Hamlin Beach State Park Dataset."}
{"_id": "bf17eb98696b2a8bb626d1d6afd20ac18044f8a9", "title": "ATRIUM: A Radar Target Simulator for Complex Traffic Scenarios", "text": "Future autonomous automobiles will require extremely reliable radar sensors to guarantee the safety of road users. This requires test environments to verify that automotive radars work properly even in complex traffic situations. Radar target simulators can generate virtual scattering centers in front of the radar sensor and are therefore a promising verification technique. However, currently available systems only offer a very basic approach to manipulating scattering centers. With the workflow presented in this publication it becomes possible to decompose high-level descriptions of traffic scenarios into parameters required for radar target simulation. The workflow can therefore be used to generate realistic traffic scenarios and thus increases the reliability of automotive radar sensor tests."}
{"_id": "45b418f04a3ab99952ebc90fa5cbeec708348df0", "title": "SentiWordNet for Indian Languages", "text": "The discipline where sentiment/ opinion/ emotion has been identified and classified in human written text is well known as sentiment analysis. A typical computational approach to sentiment analysis starts with prior polarity lexicons where entries are tagged with their prior out of context polarity as human beings perceive using their cognitive knowledge. Till date, all research efforts found in sentiment lexicon literature deal mostly with English texts. In this article, we propose multiple computational techniques like, WordNet based, dictionary based, corpus based or generative approaches for generating SentiWordNet(s) for Indian languages. Currently, SentiWordNet(s) are being developed for three Indian languages: Bengali, Hindi and Telugu. An online intuitive game has been developed to create and validate the developed SentiWordNet(s) by involving Internet population. A number of automatic, semi-automatic and manual validations and evaluation methodologies have been adopted to measure the coverage and credibility of the developed SentiWordNet(s)."}
{"_id": "9e9b8832b9e727d5f7a61cedfa4bdf44e8969623", "title": "Teaching-Learning-Based Optimization: An optimization method for continuous non-linear large scale problems", "text": "An efficient optimization method called \u2018Teaching\u2013Learning-Based Optimization (TLBO)\u2019 is proposed in this paper for large scale non-linear optimization problems for finding the global solutions. The proposed method is based on the effect of the influence of a teacher on the output of learners in a class. The basic philosophy of the method is explained in detail. The effectiveness of the method is tested on many benchmark problems with different characteristics and the results are compared with other population based methods. 2011 Elsevier Inc. All rights reserved."}
{"_id": "47565795cc8a46306fada69ef45b28be5b520060", "title": "The Evolution of Accuracy and Bias in Social Judgment", "text": "Humans are an intensely social species and therefore it is essential for our interpersonal judgments to be valid enough to help us to avoid enemies, form useful alliances and find suitable mates; flawed judgments can literally be fatal. An evolutionary perspective implies that humans ought to have developed sufficient skills at solving problems of interpersonal judgment, including gauging the personalities of others, to be useful for the basic tasks of survival and reproduction. Yet, the view to be derived from the large and influential bias-and-error literature of social psychology is decidedly different\u2014the social mind seems riddled with fundamental design flaws. We will argue in this paper that flawed design is probably the least plausible explanation for the existence of so many errors. We present an evolutionarily-based taxonomy of known bias effects that distinguishes between biases that are trivial or even artifactual and lead virtually nowhere, and those that have interesting implications and deserve further study. Finally, we present an evolutionary perspective that suggests that the ubiquity, automaticity, and success of interpersonal judgment, among other considerations, presents the possibility of a universal Personality Judgment Instinct. Archeological evidence and behavioral patterns observed in extant hunter-gatherer groups indicate that the human species has been intensely social for a long time (e.g., Chagnon, 1983, Tooby & Devore, 1987). Human offspring have a remarkably extended period of juvenile dependency, which both requires and provides the skills for surviving in a complex social world (Hrdy, 1999). Humans evolved language and universal emotional expressions which serve the social purpose of discerning and influencing the thoughts of others (e. and humans will infer social intentions on the basis of minimal cues, as Heider and Simmel (1944) demonstrated in their classic experiment involving chasing triangles and evading circles. Recent work has shown that children above age 4 and adults in disparate cultures (Germans and Amazonian Indians) can categorize intentions\u2014chasing, fighting, following, playing, and courting (for adults)\u2014 from no more than the motion patterns of computerized v-shaped arrowheads (Barrett, Todd, Miller, & Blythe, in press). Most notably, humans have a deeply-felt need for social inclusion. Deprivation of social contact produces anxiety, loneliness, and depression (Baumeister & Leary, 1995); indeed, as William James (1890) observed: \" Solitary confinement is by many regarded as a mode of torture Accuracy and Bias-3 too cruel and unnatural for civilised countries to adopt. \" Participants in laboratory studies who are left out of a face-to-face \u2026"}
{"_id": "87b1a994964d1a41858652da91e036243ed03051", "title": "Pre-Prosthetic Orthodontic Implant for Management of Congenitally Unerupted Lateral Incisors \u2013 A Case Report", "text": "The maxillary lateral incisor is one of the most common congenitally missing teeth of the permanent dentition. With the advent of implants in the field of restorative dentistry, a stable and predictable fixed prosthetic replacement has become a reality, especially for young adult patients who suffer from congenital absence of teeth. The dual goals of establishment of functional stability as well as enhancement of esthetic outcomes are made achievable by the placement of implants. A multidisciplinary team approach involving the triad of orthodontist, periodontist and restorative dentist will ensure the successful completion of the integrated treatment approach in these patients. The present case report achieved successful implant based oral rehabilitation in a patient diagnosed with congenital absence of bilateral maxillary lateral incisors utilizing a preprosthetic orthodontic implant site preparation for the purpose of space gain."}
{"_id": "51bb6450e617986d1bd8566878f7693ffd03132d", "title": "Efficient and accurate nearest neighbor and closest pair search in high-dimensional space", "text": "Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii) its query cost should increase sublinearly with the dataset size, regardless of the data and query distributions. Locality-Sensitive Hashing (LSH) is a well-known methodology fulfilling both requirements, but its current implementations either incur expensive space and query cost, or abandon its theoretical guarantee on the quality of query results.\n Motivated by this, we improve LSH by proposing an access method called the Locality-Sensitive B-tree (LSB-tree) to enable fast, accurate, high-dimensional NN search in relational databases. The combination of several LSB-trees forms a LSB-forest that has strong quality guarantees, but improves dramatically the efficiency of the previous LSH implementation having the same guarantees. In practice, the LSB-tree itself is also an effective index which consumes linear space, supports efficient updates, and provides accurate query results. In our experiments, the LSB-tree was faster than: (i) iDistance (a famous technique for exact NN search) by two orders of magnitude, and (ii) MedRank (a recent approximate method with nontrivial quality guarantees) by one order of magnitude, and meanwhile returned much better results.\n As a second step, we extend our LSB technique to solve another classic problem, called Closest Pair (CP) search, in high-dimensional space. The long-term challenge for this problem has been to achieve subquadratic running time at very high dimensionalities, which fails most of the existing solutions. We show that, using a LSB-forest, CP search can be accomplished in (worst-case) time significantly lower than the quadratic complexity, yet still ensuring very good quality. In practice, accurate answers can be found using just two LSB-trees, thus giving a substantial reduction in the space and running time. In our experiments, our technique was faster: (i) than distance browsing (a well-known method for solving the problem exactly) by several orders of magnitude, and (ii) than D-shift (an approximate approach with theoretical guarantees in low-dimensional space) by one order of magnitude, and at the same time, outputs better results."}
{"_id": "17f3358d219c05f3cb8d68bdfaf6424567d66984", "title": "Adversarial Examples for Generative Models", "text": "We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning architectures are known to be vulnerable to adversarial examples, but previous work has focused on the application of adversarial examples to classification tasks. Deep generative models have recently become popular due to their ability to model input data distributions and generate realistic examples from those distributions. We present three classes of attacks on the VAE and VAE-GAN architectures and demonstrate them against networks trained on MNIST, SVHN and CelebA. Our first attack leverages classification-based adversaries by attaching a classifier to the trained encoder of the target generative model, which can then be used to indirectly manipulate the latent representation. Our second attack directly uses the VAE loss function to generate a target reconstruction image from the adversarial example. Our third attack moves beyond relying on classification or the standard loss for the gradient and directly optimizes against differences in source and target latent representations. We also motivate why an attacker might be interested in deploying such techniques against a target generative network."}
{"_id": "9a25143ad5cb06a596eb8bda2dd87c99de36d12c", "title": "Exploring hacker assets in underground forums", "text": "Many large companies today face the risk of data breaches via malicious software, compromising their business. These types of attacks are usually executed using hacker assets. Researching hacker assets within underground communities can help identify the tools which may be used in a cyberattack, provide knowledge on how to implement and use such assets and assist in organizing tools in a manner conducive to ethical reuse and education. This study aims to understand the functions and characteristics of assets in hacker forums by applying classification and topic modeling techniques. This research contributes to hacker literature by gaining a deeper understanding of hacker assets in well-known forums and organizing them in a fashion conducive to educational reuse. Additionally, companies can apply our framework to forums of their choosing to extract their assets and appropriate functions."}
{"_id": "e09c38c84e953f85d4ebca9b5c2b9f9e25903d9f", "title": "Analysis of circulating tumor DNA to monitor metastatic breast cancer.", "text": "BACKGROUND\nThe management of metastatic breast cancer requires monitoring of the tumor burden to determine the response to treatment, and improved biomarkers are needed. Biomarkers such as cancer antigen 15-3 (CA 15-3) and circulating tumor cells have been widely studied. However, circulating cell-free DNA carrying tumor-specific alterations (circulating tumor DNA) has not been extensively investigated or compared with other circulating biomarkers in breast cancer.\n\n\nMETHODS\nWe compared the radiographic imaging of tumors with the assay of circulating tumor DNA, CA 15-3, and circulating tumor cells in 30 women with metastatic breast cancer who were receiving systemic therapy. We used targeted or whole-genome sequencing to identify somatic genomic alterations and designed personalized assays to quantify circulating tumor DNA in serially collected plasma specimens. CA 15-3 levels and numbers of circulating tumor cells were measured at identical time points.\n\n\nRESULTS\nCirculating tumor DNA was successfully detected in 29 of the 30 women (97%) in whom somatic genomic alterations were identified; CA 15-3 and circulating tumor cells were detected in 21 of 27 women (78%) and 26 of 30 women (87%), respectively. Circulating tumor DNA levels showed a greater dynamic range, and greater correlation with changes in tumor burden, than did CA 15-3 or circulating tumor cells. Among the measures tested, circulating tumor DNA provided the earliest measure of treatment response in 10 of 19 women (53%).\n\n\nCONCLUSIONS\nThis proof-of-concept analysis showed that circulating tumor DNA is an informative, inherently specific, and highly sensitive biomarker of metastatic breast cancer. (Funded by Cancer Research UK and others.)."}
{"_id": "9371a8f9916c0e2bd510d82436dedfc358c74ee4", "title": "Combining Bottom-Up, Top-Down, and Smoothness Cues for Weakly Supervised Image Segmentation", "text": "This paper addresses the problem of weakly supervised semantic image segmentation. Our goal is to label every pixel in a new image, given only image-level object labels associated with training images. Our problem statement differs from common semantic segmentation, where pixel-wise annotations are typically assumed available in training. We specify a novel deep architecture which fuses three distinct computation processes toward semantic segmentation – namely, (i) the bottom-up computation of neural activations in a CNN for the image-level prediction of object classes, (ii) the top-down estimation of conditional likelihoods of the CNNs activations given the predicted objects, resulting in probabilistic attention maps per object class, and (iii) the lateral attention-message passing from neighboring neurons at the same CNN layer. The fusion of (i)-(iii) is realized via a conditional random field as recurrent network aimed at generating a smooth and boundary-preserving segmentation. Unlike existing work, we formulate a unified end-to-end learning of all components of our deep architecture. Evaluation on the benchmark PASCAL VOC 2012 dataset demonstrates that we outperform reasonable weakly supervised baselines and state-of-the-art approaches."}
{"_id": "d64b5c107c7a8710f4e413a9b1fd224478def83e", "title": "Skipping Refinement", "text": "We introduce skipping refinement, a new notion of correctness for reasoning about optimized reactive systems. Reasoning about reactive systems using refinement involves defining an abstract, highlevel specification system and a concrete, low-level implementation system. One then shows that every behavior allowed by the implementation is also allowed by the specification. Due to the difference in abstraction levels, it is often the case that the implementation requires many steps to match one step of the specification, hence, it is quite useful for refinement to directly account for stuttering. Some optimized implementations, however, can actually take multiple specification steps at once. For example, a memory controller can buffer the commands to the memory and at a later time simultaneously update multiple memory locations, thereby skipping several observable states of the abstract specification, which only updates one memory location at a time. We introduce skipping simulation refinement and provide a sound and complete characterization consisting of \u201clocal\u201d proof rules that are amenable to mechanization and automated verification. We present case studies that highlight the applicability of skipping refinement: a JVM-inspired stack machine, a simple memory controller and a scalar to vector compiler transformation. Our experimental results demonstrate that current model-checking and automated theorem proving tools have difficultly automatically analyzing these systems using existing notions of correctness, but they can analyze the systems if we use skipping refinement."}
{"_id": "038df8f5f00817e29bdcb8822c905fc4bb7718e9", "title": "Fossil chrysophycean cyst flora of Racze Lake, Wolin Island (Poland) in relation to paleoenvironmental conditions", "text": "The study presents stratigraphic distribution of fossil chrysophycean cysts in the bottom sediments of Racze Lake (Poland). Thirty morphotypes are described, most of them for the first time. The description of the cyst includes SEM microphotographs. The long-term relationship between the lake's conditions and the occurrence of characteristic morphotypes of chrysophycean cysts is discussed."}
{"_id": "b8c1f2c1f266aa37d7b82731b90ec28b72197454", "title": "Design and implementation of a 120A resonant inverter for induction furnace", "text": "This paper presents the design and implementation of a series resonant inverter operating at high frequency for induction heating applications. The main advantage of this work is focused in applying power electronics, particularly a series resonant inverter based on an IGBT's full bridge and a series RLC Load matched with a high frequency coupling transformer. The series resonant inverter is designed to deliver a current load up to 120A with a one phase AC source. Due to the inherent high speed operation and high efficiency of switching converters, other advantages of the induction furnace presented in this paper are that it require less time to start melting, and require less physical space than other traditional furnaces."}
{"_id": "f1e364bab4646e45a4acd5638977a6cdf508cc90", "title": "Is Verbal Irony Special?", "text": "The way we speak can reveal much about what we intend to communicate, but the words we use often only indirectly relate to the meanings we wish to convey. Verbal irony is a commonly studied form of indirect speech in which a speaker produces an explicit evaluative utterance that implicates an unstated, opposing evaluation. Producing and understanding ironic language, as well as many other types of indirect speech, requires the ability to recognize mental states in others, sometimes described as a capacity for metarepresentation. This article aims to connect common elements between the major theoretical approaches to verbal irony to recent psycholinguistic, developmental, and neuropsychological research demonstrating the necessity for metarepresentation in the effective use of verbal irony in social interaction. Here I will argue that verbal irony is one emergent, strategic possibility given the interface between people\u2019s ability to infer mental states, and use language. Rather than think of ironic communication as a specialized cognitive ability, I will claim that it arises from the same set of abilities that underlie a wide range of inferential communicative behaviors. Language interaction involves a complex interplay of many cognitive abilities. Theorists across many disciplines struggle with questions of how to carve up these abilities, and whether they can be carved up in the first place (e.g., Christiansen and Chater 2008). Among the most difficult questions to address are those that involve how people recognize intentions in others\u2019 behavior, and how that is achieved through language in the context of many other sources of information. The way we speak can reveal much about what we intend to communicate, but the words we use are often quite different from the meanings we wish to convey. People often speak indirectly for a variety of strategic reasons, and these strategies rely intrinsically on social cognition. One well-researched example of this is the phenomenon of verbal irony \u2013 a type of indirect speech in which a speaker produces an explicit evaluative utterance that implicates an unstated, opposing evaluation. As described below, this trope has been traditionally defined in rather vague terms, but has generated a great deal of research as a phenomenon in need of special theoretical explanation. Here I will argue that verbal irony is one emergent, strategic possibility given the interface between people\u2019s ability to infer the mental states of others, and use language. Rather than think of ironic communication as a unique skill or specialized cognitive ability, I suggest that it arises from the same set of abilities that underlie a wide range of inferential communicative behaviors. The study of verbal irony production and comprehension has been a multidisciplinary effort lacking extensive interdisciplinary exchange. There have been various taxonomies and definitions of what constitutes irony, both verbal and situational \u2013 a struggling enterprise to say the least (Colston and Gibbs 2007). 
The traditional notion of verbal irony dates back at least to the Roman rhetorician Quintilian, who described irony as a kind of allegory expressing the opposite of what one means, often for mocking communicative effects. Some of the assumptions of this early categorization have carried over into modern psycholinguistic research without proper consideration of underlying communicative functions (Kreuz 2000). Many scholars have linked verbal irony to situational irony \u2013 particularly in how both forms seem to involve, at some level, a contradiction between what is expected, and what occurs. While verbal irony often points to unexpected or unfortunate outcomes in people\u2019s actions (Kumon-Nakamura, Glucksberg, and Brown 1995; Pexman 2008), it does not do so necessarily. At present, most contemporary researchers of figurative language and inferential communication have settled on some version of the basic claim that verbal irony is a class of indirect language use where explicit sentence meanings are conceptually contradictory to a network of related implied propositions. Many other tropes, however, are not obviously distinct (e.g., parody, pretense, double entendre, litotes, etc). One common thread between many theoretical and empirical approaches to verbal irony is the distinction language users must make between layers of meaning. This could be the difference between actual and attributed beliefs (Kreuz and Glucksberg 1989; Sperber and Wilson 1986, 1995), real and imagined discourse acts (Clark and Gerrig 1984), relevant or inappropriate to a particular context (Attardo 2000), what is negated and what is implicated (Giora 1995), or failed expectations and attitudes toward those failures (Kumon-Nakamura, Glucksberg, and Brown 1995). Speakers implicitly highlight the contrasts between these different levels of meaning, and in doing so communicate an attitude toward an attributed belief (Sperber and Wilson 1986, 1995). This attitudinal dissociation is typically considered a hallmark of ironic communication, and necessarily requires an implicit understanding of others\u2019 mental states. Consider the following exchange: John: I got another virus on my laptop. Mary: Aren\u2019t you glad you didn\u2019t switch to Mac? In this example John and Mary are discussing John\u2019s computer problems. John reports that he got a virus on his computer \u2013 not his first. Mary asks an ironic rhetorical question expressing her negative opinion of Windows PCs. The literal question is in particular opposition to the implied evaluation \u2013 the essence of irony. In this example, the irony is largely contained in a single lexical opposition \u2013 \u2018\u2018glad\u2019\u2019 instead of, for example, \u2018\u2018sad,\u2019\u2019 but is also communicated by the rhetorical aspect of the question. The question form is one format speakers use to express ironic intentions, but ironic forms can manifest themselves in various ways linguistically, and in a variety of media including television, music, and visual art (Bryant 2011; El Refaie 2005; Ettema and Glasser 1994; Nuolijarvi and Tiittula 2010; Scott 2003). Figurative language devices such as verbal irony can be quite powerful communicative tools. 
The sentences we speak are often produced in a manner that will allow our audience to derive certain unstated meanings, and the way we put them is often particularly economical because of our heavy reliance on others\u2019 inferential abilities. These meanings are not just efficient, but unique to indirect communication (Gibbs 2000a). The combination of affect directed toward some attributed belief, along with a direct utterance, makes ironic meanings difficult, if not impossible, to precisely capture. But this is due to inferential processes associated with mindreading and linguistic communication, not some distinct process of irony understanding. Similarly, it is difficult to capture, in linguistic terms, many kinds of inferences regarding others\u2019 intentions that people generate in social contexts. In our example, Mary, of course, communicates much more than a negative opinion of PCs. For instance, she implicitly reminds John of previous interactions, some being debates, when their differences of opinion about computer platforms were discussed (i.e., she echoes previous opinions expressed, perhaps even through literal quotations) (Kreuz and Glucksberg 1989; Sperber and Wilson 1986, 1995). In doing so she reiterates her claim that John\u2019s decision was a mistake, and that he would regret it. By using an ironic rhetorical question, Mary efficiently conveys a complex set of implied meanings that are relevant to John. The irony is not necessarily obvious to an outsider, as is the case with most occurrences of irony in everyday conversations (Gibbs 2000b), but John understands Mary immediately, and Mary knew he would. In the process, Mary potentially reduces John\u2019s perception that she is being critical, and makes him laugh (while actually criticizing him) (Dews and Winner 1995). Alternatively, Mary might be accentuating her criticism by highlighting a contrast between John\u2019s current situation and his earlier stated preference (Colston 1997). Speakers attempt to fulfill particular communicative goals by using indirect language such as verbal irony, and different forms can result in quite different emotional reactions \u2013 we should expect it to be used strategically and variably (Bryant 2011; Bryant and Fox Tree 2005; Leggitt and Gibbs 2000). Scholars have suggested many functions for irony in discourse, including speakers\u2019 attempts to be humorous, create solidarity, appear clever, increase memorability, save face, be polite, or alter the valence of an attack or praise (Gibbs 1994; Roberts and Kreuz 1994; Toplak and Katz 2000). The pragmatic uses of ironic speech are far-reaching \u2013 \u2018\u2018what can\u2019t irony do?\u2019\u2019 might be a better question. One reason that irony can fulfill so many discourse goals is that includable tokens in the category need only satisfy a couple of conditions; thus, an enormous variety of speech acts can qualify. Burgers, van Mulken, and Schellens (2011) recently proposed a procedure for identifying ironies in discourse, and for a basic definition, they looked at the commonalities between different contemporary theories. In their view, verbal irony involves an expressed evaluative utterance that implies an opposing evaluative appraisal. This implied meaning does not need to be the opposite of the stated meaning, but just differ in a scalar manner. 
For instance, an understatement might still be negative, but less negative than the intended meaning (e.g., when asked about a personal experience with a"}
{"_id": "64aa6e5e7b3f8a29bad2e97463ffb7bd43a8af9d", "title": "Sharpness Enhancement and Super-Resolution of Around-View Monitor Images", "text": "In the wide-angle (WA) images embedded in an around-view monitor system, the subject(s) in the peripheral region is normally small and has little information. Furthermore, since the outer region suffers from the non-uniform blur phenomenon and artifact caused by the inherent optical characteristic of WA lenses, its visual quality tends to deteriorate. According to our experiments, conventional image enhancement techniques rarely improve the degraded visual quality of the outer region of WA images. In order to solve the above-mentioned problem, this paper proposes a joint sharpness enhancement (SE) and super-resolution (SR) algorithm which can improve the sharpness and resolution of WA images together. The proposed SE algorithm improves the sharpness of the deteriorated WA images by exploiting self-similarity. Also, the proposed SR algorithm generates super-resolved images by using high-resolution information which is classified according to the extended local binary pattern-based classifier and learned on a pattern basis. Experimental results show that the proposed scheme effectively improves the sharpness and resolution of the input deteriorated WA images. Even in terms of quantitative metrics such as just noticeable blur, structural similarity, and peak signal-to-noise ratio. Finally, the proposed scheme guarantees real-time processing such that it achieves 720p video at 29 Hz on a low cost GPU platform."}
{"_id": "277e6234b9f512d410302665a60c680c75be1a37", "title": "Understanding the Potential of Interpreter-based Optimizations for Python", "text": "The increasing popularity of scripting languages as general purpose programming environments calls for more efficient execution. Most of these languages, such as Python, Ruby, PHP, and JavaScript are interpreted. Interpretation is a natural implementation given the dynamic nature of these languages and interpreter portability has facilitated wide-spread use. In this work, we analyze the performance of CPython, a commonly used Python interpreter, to identify major sources of overhead. Based on our findings, we investigate the efficiency of a number of optimizations and explore the design options and trade-offs involved."}
{"_id": "950c8d0b042b2a55a3f32fa9bba9af485a26ad8a", "title": "A Fast Parallel Algorithm for Thinning Digital Patterns", "text": "A fast parallel thinning algorithm is proposed in this paper. It consists of two subiterations: one aimed at deleting the south-east boundary points and the north-west corner points while the other one is aimed at deleting the north-west boundary points and the south-east corner points. End points and pixel connectivity are preserved. Each pattern is thinned down to a \"skeleton\" of unitary thickness. Experimental results show that this method is very effective."}
{"_id": "9228aa5523fd84616b7a4d199e2178050677ad9b", "title": "Lifetime study for a poly fuse in a 0.35 /spl mu/m polycide CMOS process", "text": "Poly fuses are used as the base element for one time programmable cells in a standard CMOS process. Using a defined programming current, the resistance of the poly fuse increases irreversibly over several orders of magnitude. The goal of this study is to show that a poly fuse has a sufficient life time stability to be used as a storage element even in high reliability circuits. This paper shows the drift of the resistance of a poly fuse over the whole range of programming currents for a standard polycide 0.35 /spl mu/m CMOS process. The poly fuse for the selected process is build using two different layers, which gives a special performance in terms of programming current."}
{"_id": "6e07fcf8327a3f53f90f86ea86ca084d6733fb88", "title": "The relative success of alternative approaches to strategic information systems planning: an empirical analysis", "text": "Strategic information systems planning (SISP) is an exercise or ongoing activity that enables organisations to develop priorities for information systems development. It has been suggested that the \u2018SISP approach\u2019, a combination of method, process and implementation, is the most complete way of describing SISP activity. Based upon questionnaire responses from 267 IT Directors, four distinct approaches to SISP have been derived using cluster analysis. A comparison of these four approaches with five approaches of Earl, M.J., 1993. Experiences in SISP, MIS Quarterly, (March), 1\u201324, indicates that three bear strong similarities to the \u2018organisational\u2019, \u2018business-led\u2019, and \u2018administrative\u2019 approaches, whilst the fourth cluster is related to both Earl\u2019s \u2018method-driven\u2019 and \u2018technological\u2019 approaches. An analysis of the relationship between SISP approach and SISP success demonstrates that the \u2018organisational approach\u2019 is significantly more successful than the other three approaches. q 1999 Elsevier Science B.V. All rights reserved."}
{"_id": "84f27dd934eed1b0bde59bbcb8a562cd3b670a87", "title": "Classroom Re-design to Facilitate Student Learning : A Case Study of Changes to a University Classroom", "text": "This case study examines the physical aspects of a particular university classroom, and what affect specific changes to the classroom had on the perceptions of students, instructors and observers regarding the room as an effective learning space. We compare survey and focus group data collected from students taking courses in the classroom prior to changes to the physical environment with comparable data from students taking courses in the same classroom after specific changes had been made. Immediately following changes to the classroom, notable increases were observed in reported perceptions of student satisfaction with the physical environment, including perceptions of the classroom as a more effective and engaging learning space. Similar perceptions of improvement as a teaching-learning space were reported by instructors and observers. However, subsequent follow-up data collection and analyses suggested little if any sustained increase in perceptions of efficacy of the room as a learning space; indeed, most reported variables returned to baseline levels. The implications of these findings and their relevance to classroom design nevertheless may provide insight regarding the manner in which physical space might support or even enhance teaching and learning. Keywords: learning spaces, active learning, classrooms, teaching and learning environment, classroom design For a number of years there has been an on-going pedagogical shift in higher education away from a traditional content delivery model of instruction to more active models of learning in which students play more involved and interactive roles within the classroom (Cornell, 2002; Brown, 2006). This movement has been coupled with the recognition that the traditional university classroom, with its unidirectional design and tiered, fixed theatre-like seating, is insufficient to accommodate what have increasingly become more varied teaching and learning practices. This growing realization that as the nature of teaching and learning evolves so too must teaching and learning spaces has, in recent years, resulted in a heightened interest among scholars in the examination of classroom space and, more specifically, the inquiry into the connection between classroom design and pedagogy and learning (Brooks, 2011). This study presents findings from an on-going research project examining classroom spaces on a small university campus. The present case study, which is one component of the larger project, compares perceptions of a specific classroom as an effective learning space before and after relatively extensive changes were made to the physical environment of the room. These changes were guided by feedback from students and instructors as well as by existing literature on innovative classroom designs. In particular, we were interested in assessing if these changes 1\t\r University\t\r of\t\r Lethbridge,\t\r Lethbridge,\t\r Alberta,\t\r Canada"}
{"_id": "4f8deabc58014eae708c3e6ee27114535325067b", "title": "Approximating the Kullback Leibler Divergence Between Gaussian Mixture Models", "text": "The Kullback Leibler (KL) divergence is a widely used tool in statistics and pattern recognition. The KL divergence between two Gaussian mixture models (GMMs) is frequently needed in the fields of speech and image recognition. Unfortunately the KL divergence between two GMMs is not analytically tractable, nor does any efficient computational algorithm exist. Some techniques cope with this problem by replacing the KL divergence with other functions that can be computed efficiently. We introduce two new methods, the variational approximation and the variational upper bound, and compare them to existing methods. We discuss seven different techniques in total and weigh the benefits of each one against the others. To conclude we evaluate the performance of each one through numerical experiments."}
{"_id": "5276d835a8cb96e06d19f0b8491dba3ab1642963", "title": "Interprocedural Semantic Change-Impact Analysis using Equivalence Relations", "text": "Change-impact analysis (CIA) is the task of determining the set of program elements impacted by a program change. Precise CIA has great potential to avoid expensive t sting and code reviews for (parts of) changes that are refactoring s (semantics-preserving). Existing CIA is imprecise becaus e it is coarse-grained, deals with only few refactoring patterns,or is unaware of the change semantics. We formalize the notion of change impact in terms of the trace semantics of two program versions. We show how to leverage equivalence relations to make dataflow-based CIA aware of th e change semantics, thereby improving precision in the prese nce of semantics-preserving changes. We propose an anytime algorithm that allows applying costly equivalence relation inference incrementally to refine the set of impacted statements. We ha ve implemented a prototype in SYM DIFF , and evaluated it on 322 real-world changes from open-source projects and benchmar k programs used by prior research. The evaluation results sho w an average 35% improvement in the size of the set of impacted statements compared to standard dataflow-based techniques ."}
{"_id": "c2a8d08805681ca258c101cdb850d7c3f81ed9a9", "title": "Dual Learning based Multi-Objective Pairwise Ranking", "text": "There are many different recommendation tasks in our real life. The item ranking task is ranking a set of items based on users\u2019 preferences. The user ranking task is referred to as user retrieval, find the potential users and recommend items to them. We find every recommendation task has it\u2019s own dual task, e.g., Recommend an item to a user versus the user is supposed to become the item\u2019s potential interested user. In this paper, we propose a novel dual learning based ranking framework with the overall aim of learning users\u2019 preferences over items and learning items\u2019 preferences over users by minimizing a pairwise ranking loss. We generate effective feedback signal from close loop formed by dual task. Through a reinforcement learning process, we can iteratively update these two tasks\u2019 models to catch relations between the item recommendation task and the user retrieval task. The experiments show that our method outperforms classical approaches for multi-object recommendation tasks, especially for user retrieval tasks."}
{"_id": "beceeaaeee67884b727248d1f9ecda075e4ce85d", "title": "Business Analytics in (a) Blink", "text": "The Blink project\u2019s ambitious goal is to answer all Business Intelligence (BI) queries in mere seconds, regardless of the database size, with an extremely low total cost of ownership. Blink is a new DBMS aimed primarily at read-mostly BI query processing that exploits scale-out of commodity multi-core processors and cheap DRAM to retain a (copy of a) data mart completely in main memory. Additionally, it exploits proprietary compression technology and cache-conscious algorithms that reduce memory bandwidth consumption and allow most SQL query processing to be performed on the compressed data. Blink always scans (portions of) the data mart in parallel on all nodes, without using any indexes or materialized views, and without any query optimizer to choose among them. The Blink technology has thus far been incorporated into two IBM accelerator products generally available since March 2011. We are now working on the next generation of Blink, which will significantly expand the \u201csweet spot\u201d of the Blink technology to much larger, disk-based warehouses and allow Blink to \u201cown\u201d the data, rather"}
{"_id": "4d977c0a2ce4d60258140645f3b13c9d69cc8f63", "title": "Hand synergies: Integration of robotics and neuroscience for understanding the control of biological and artificial hands.", "text": "The term 'synergy' - from the Greek synergia - means 'working together'. The concept of multiple elements working together towards a common goal has been extensively used in neuroscience to develop theoretical frameworks, experimental approaches, and analytical techniques to understand neural control of movement, and for applications for neuro-rehabilitation. In the past decade, roboticists have successfully applied the framework of synergies to create novel design and control concepts for artificial hands, i.e., robotic hands and prostheses. At the same time, robotic research on the sensorimotor integration underlying the control and sensing of artificial hands has inspired new research approaches in neuroscience, and has provided useful instruments for novel experiments. The ambitious goal of integrating expertise and research approaches in robotics and neuroscience to study the properties and applications of the concept of synergies is generating a number of multidisciplinary cooperative projects, among which the recently finished 4-year European project \"The Hand Embodied\" (THE). This paper reviews the main insights provided by this framework. Specifically, we provide an overview of neuroscientific bases of hand synergies and introduce how robotics has leveraged the insights from neuroscience for innovative design in hardware and controllers for biomedical engineering applications, including myoelectric hand prostheses, devices for haptics research, and wearable sensing of human hand kinematics. The review also emphasizes how this multidisciplinary collaboration has generated new ways to conceptualize a synergy-based approach for robotics, and provides guidelines and principles for analyzing human behavior and synthesizing artificial robotic systems based on a theory of synergies."}
{"_id": "67755a1c251882f97fb4f49a269579e0bd557c81", "title": "The thalamic dynamic core theory of conscious experience", "text": "I propose that primary conscious awareness arises from synchronized activity in dendrites of neurons in dorsal thalamic nuclei, mediated particularly by inhibitory interactions with thalamic reticular neurons. In support, I offer four evidential pillars: (1) consciousness is restricted to the results of cortical computations; (2) thalamus is the common locus of action of brain injury in vegetative state and of general anesthetics; (3) the anatomy and physiology of the thalamus imply a central role in consciousness; (4) neural synchronization is a neural correlate of consciousness."}
{"_id": "a44e60b839e861bea2340532a47b0902a19bfcdb", "title": "PFC Cuk Converter-Fed BLDC Motor Drive", "text": "This paper deals with a power factor correction (PFC)-based Cuk converter-fed brushless dc motor (BLDC) drive as a cost-effective solution for low-power applications. The speed of the BLDC motor is controlled by varying the dc-bus voltage of a voltage source inverter (VSI) which uses a low frequency switching of VSI (electronic commutation of the BLDC motor) for low switching losses. A diode bridge rectifier followed by a Cuk converter working in a discontinuous conduction mode (DCM) is used for control of dc-link voltage with unity power factor at ac mains. Performance of the PFC Cuk converter is evaluated under four different operating conditions of discontinuous and continuous conduction modes (CCM) and a comparison is made to select a best suited mode of operation. The performance of the proposed system is simulated in a MATLAB/Simulink environment and a hardware prototype of the proposed drive is developed to validate its performance over a wide range of speed with unity power factor at ac mains."}
{"_id": "2af34fe944e515869cb29707fd216308e9c788ce", "title": "Pok\u00e9mon Go: Impact on Yelp Restaurant Reviews", "text": "Pok\u00e9mon Go, the popular Augmented Reality based mobile application, launched in July of 2016. The game's meteoric rise in usage since that time has had an impact on not just the mobile gaming industry, but also the physical activity of players, where they travel, where they spend their money, and possibly how they interact with other social media applications. In this paper, we studied the impact of Pok\u00e9mon Go on Yelp reviews. For restaurants near Pok\u00e9Stops, we found a slight drop in the number of online reviews."}
{"_id": "75f1100f5e0411ba19b4c336be29c72efb8403b6", "title": "CIVSched: A Communication-Aware Inter-VM Scheduling Technique for Decreased Network Latency between Co-Located VMs", "text": "Server consolidation in cloud computing environments makes it possible for multiple servers or desktops to run on a single physical server for high resource utilization, low cost, and reduced energy consumption. However, the scheduler in the virtual machine monitor (VMM), such as Xen credit scheduler, is agnostic about the communication behavior between the guest operating systems (OS). The aforementioned behavior leads to increased network communication latency in consolidated environments. In particular, the CPU resources management has a critical impact on the network latency between co-located virtual machines (VMs) when there are CPUand I/O-intensive workloads running simultaneously. This paper presents the design and implementation of a communication-aware inter-VM scheduling (CIVSched) technique that takes into account the communication behavior between inter-VMs running on the same virtualization platform. The CIVSched technique inspects the network packets transmitted between local co-resident domains to identify the target VM and process that will receive the packets. Thereafter, the target VM and process are preferentially scheduled by the VMM and the guest OS. The cooperation of these two schedulers makes the network packets to be timely received by the target application. Experimental results on the Xen virtualization platform depict that the CIVSched technique can reduce the average response time of network traffic by approximately 19 percent for the highly consolidated environment, while keeping the inherent fairness of the VMM scheduler."}
{"_id": "da24419dcbc837782d704511552f085b9069983d", "title": "Classification of plant leaf images with complicated background", "text": "Keywords: Image segmentation Plant leaf Complicated background Watershed segmentation Hu geometric moments Zernike moment Moving center hypersphere (MCH) classifier a b s t r a c t Classifying plant leaves has so far been an important and difficult task, especially for leaves with complicated background where some interferents and overlapping phenomena may exist. In this paper, an efficient classification framework for leaf images with complicated background is proposed. First, a so-called automatic marker-controlled watershed segmen-tation method combined with pre-segmentation and morphological operation is introduced to segment leaf images with complicated background based on the prior shape information. Then, seven Hu geometric moments and sixteen Zernike moments are extracted as shape features from segmented binary images after leafstalk removal. In addition , a moving center hypersphere (MCH) classifier which can efficiently compress feature data is designed to address obtained mass high-dimensional shape features. Finally, experimental results on some practical plant leaves show that proposed classification framework works well while classifying leaf images with complicated background. There are twenty classes of practical plant leaves successfully classified and the average correct classification rate is up to 92.6%. Plants play the most important part in the cycle of nature. They are the primary producers that sustain all other life forms including people. This is because plants are the only organisms that can convert light energy from the sun into food. Animals, incapable of making their own food, depend directly or indirectly on plants for their supply of food. Moreover, all of the oxygen available for living organisms comes from plants. Plants are also the primary habitat for thousands of other organisms. In addition, many of the fuel people use today, such as coal, natural gas and gasoline, were made from plants that lived millions of years ago. However, in recent years, people have been seriously destroying the natural environments, so that many plants constantly die and even die out every year. Conversely, the resulting ecological crisis has brought many serious consequences including land desertion, climate anomaly, land flood, and so on, which have menaced the survival of human being and the development of society. Now, people have realized the importance and urgency of protecting plant resource. Besides taking effective measures to protect plants, it is important for ordinary people to know or classify plants which can further enhance the public's consciousness of plant protection. Therefore, besides professional botanists, many non-professional researchers have paid more \u2026"}
{"_id": "e28b42b12aaf9c812d20eed63f09ed612212c983", "title": "How do I communicate my emotions on SNS and IMs?", "text": "With influx of new devices and applications over the past few years, computer mediated communication has developed as an alternate to face-to-face communication. Through our study we attempt to understand how individuals communicate in mediated settings in emotion-laden situations. We explore their preferred medium of communication in different situations along with analysis of the content of communication. Spatial arrays, lexical surrogates, vocal spellings and grammatical markers were used as strategies by individuals for communicating non-verbal cues in text based communication. We also look at how messages sent on IMs or posts on social networks are interpreted by readers in terms of the emotional state of the sender. We found that while valence of the sender gets easily and accurately communicated, arousal is misinterpreted in most situations. In this paper, we present findings from our study which can be valuable for technology companies looking to better the current communication experience across different media."}
{"_id": "da9225a30b44afa24b819d9188bfc1a6e790ab96", "title": "Handling Multiword Expressions in Causality Estimation", "text": "Previous studies on causality estimation mainly aquire causal event pairs from a large corpus based on lexico-syntactic patterns and coreference relations, and estimate causality by a statistical method. However, most of the previous studies assume event pairs can be represented by a pair of single words, therefore they cannot estimate multiword causality correctly (e.g.\u201ctired\u201d-\u201cgive up\u201d) . In this paper, we create a list of multiword expressions and extend an existing method. Our evaluation demonstrates that the proper treatment of multiword expression events is effective and the proposed method outperforms the state-of-the-art causality estimation model."}
{"_id": "deeddd31fc92705f484d6bb1fcebd417b1a6119c", "title": "Health-Related Disaster Communication and Social Media: Mixed-Method Systematic Review.", "text": "This mixed-method evidence synthesis drew on Cochrane methods and principles to systematically review literature published between 2003 and 2016 on the best social media practices to promote health protection and dispel misinformation during disasters. Seventy-nine studies employing quantitative, qualitative, and mixed methods on risk communication during disasters in all UN-languages were reviewed, finding that agencies need to contextualize the use of social media for particular populations and crises. Social media are tools that still have not become routine practices in many governmental agencies regarding public health in the countries studied. Social media, especially Twitter and Facebook (and equivalents in countries such as China), need to be incorporated into daily operations of governmental agencies and implementing partners to build familiarity with them before health-related crises happen. This was especially observed in U.S. agencies, local government, and first responders but also for city governments and school administrations in Europe. For those that do use social media during health-related risk communication, studies find that public relations officers, governmental agencies, and the general public have used social media successfully to spread truthful information and to verify information to dispel rumors during disasters. Few studies focused on the recovery and preparation phases and on countries in the Southern hemisphere, except for Australia. The vast majority of studies did not analyze the demographics of social media users beyond their geographic location, their status of being inside/outside the disaster zone; and their frequency and content of posting. Socioeconomic demographics were not collected and/or analyzed to drill deeper into the implications of using social media to reach vulnerable populations. Who exactly is reached via social media campaigns and who needs to be reached with other means has remained an understudied area."}
{"_id": "6db33f4c15a34c3c777c3aaca663fb1a12db4bcd", "title": "Don't fall into a trap: Physical side-channel analysis of ChaCha20-Poly1305", "text": "The stream cipher ChaCha20 and the MAC function Poly1305 have been published as IETF RFC 7539. Since then, the industry is starting to use it more often. For example, it has been implemented by Google in their Chrome browser for TLS and also support has been added to OpenSSL, as well as OpenSSH. It is often claimed, that the algorithms are designed to be resistant to side-channel attacks. However, this is only true, if the only observable side-channel is the timing behavior. In this paper, we show that ChaCha20 is susceptible to power and EM side-channel analysis, which also translates to an attack on Poly1305, if used together with ChaCha20 for key generation. As a first countermeasure, we analyze the effectiveness of randomly shuffling the operations of the ChaCha round function."}
{"_id": "8e13be4e095a51475c62878643bf232a410f4afa", "title": "Depressive realism: a meta-analytic review.", "text": "The current investigation represents the first meta-analysis of the depressive realism literature. A search of this literature revealed 75 relevant studies representing 7305 participants from across the US and Canada, as well as from England, Spain, and Israel. Results generally indicated a small overall depressive realism effect (Cohen's d=-.07). Overall, however, both dysphoric/depressed individuals (d=.14) and nondysphoric/nondepressed individuals evidenced a substantial positive bias (d=.29), with this bias being larger in nondysphoric/nondepressed individuals. Examination of potential moderator variables indicated that studies lacking an objective standard of reality (d=-.15 versus -.03, for studies possessing such a standard) and that utilize self-report measures to measure symptoms of depression (d=.16 versus -.04, for studies which utilize structured interviews) were more likely to find depressive realism effects. Methodological paradigm was also found to influence whether results consistent with depressive realism were found (d's ranged from -.09 to .14)."}
{"_id": "b526780b22900c2e159a9fd63bf2046eff47606f", "title": "Pattern Reconfigurable Antenna Based on Morphing Bistable Composite Laminates", "text": "In this paper, a novel pattern reconfigurable antenna based on morphing bistable composite laminates is presented. The bistable asymmetric glass-fiber reinforced polymer (GFRP) composite laminates have two stable configurations with curvatures of opposite signs. The antenna pattern is reconfigured by transforming the configuration of the bistable GFRP laminate which acts as the substrate of the antenna. The coplanar waveguide transmission lines feeding technique is used for the microstrip quasi-Yagi antenna. A prototype of the proposed antenna is fabricated using a semi-automatic screen printer and an autoclave. The transformation between the two stable states of the proposed antenna using Ni/Ti shape memory alloy springs is investigated experimentally. The out-of-plane displacements, reflection coefficients and radiation patterns for the two stable configurations of the antenna are measured, which agree well with the simulated results. The main beam direction is 89\u00b0 and 59\u00b0 for the two stable configurations, respectively. In addition, the influences of various bending radii on the radiation patterns are investigated to gain a thorough understanding of the reconfigurable mechanism of the proposed antenna. Finally, a two-element array of such an antenna is presented and measured. The proposed antenna provides a potential application in multifunctional, conformal, morphing, and integrated structures."}
{"_id": "c85628aa0938ab4165b8ba548f7a4d42afb8dad4", "title": "Ventilation performance prediction for buildings : A method overview and recent applications", "text": "This paper presented an overview of the tools used to predict ventilation performance in buildings. The tools reviewed were analytical models, empirical models, small-scale experimental models, full-scale experimental models, multizone network models, zonal models, and Computational Fluid Dynamics (CFD) models. This review found that the analytical and empirical models had made minimal contributions to the research literature in the past year. The smalland full-scale experimental models were mainly used to generate data to validate numerical models. The multizone models were improving, and they were the main tool for predicting ventilation performance in an entire building. The zonal models had limited applications and could be replaced by the coarse-grid CFD models. The CFD models were most popular and contributed to 70 percent of the literature found in this review. Considerable efforts were still made to seek more reliable and accurate models. It has been a trend to improve their performance by coupling CFD with other building simulation models. The applications of CFD models were mainly for studying indoor air quality, natural ventilation, and stratified ventilation as they were difficult to be predicted by other models."}
{"_id": "608bcf0c7a9432458ba104b0147cddc73c416bc5", "title": "Bioconductor: open software development for computational biology and bioinformatics", "text": "The Bioconductor project is an initiative for the collaborative creation of extensible software for computational biology and bioinformatics. The goals of the project include: fostering collaborative development and widespread use of innovative software, reducing barriers to entry into interdisciplinary scientific research, and promoting the achievement of remote reproducibility of research results. We describe details of our aims and methods, identify current challenges, compare Bioconductor to other open bioinformatics projects, and provide working examples."}
{"_id": "c9851f86613c876129748b4626e309cde3d1ddbd", "title": "State of the Art in LP-WAN Solutions for Industrial IoT Services", "text": "The emergence of low-cost connected devices is enabling a new wave of sensorization services. These services can be highly leveraged in industrial applications. However, the technologies employed so far for managing this kind of system do not fully cover the strict requirements of industrial networks, especially those regarding energy efficiency. In this article a novel paradigm, called Low-Power Wide Area Networking (LP-WAN), is explored. By means of a cellular-type architecture, LP-WAN-based solutions aim at fulfilling the reliability and efficiency challenges posed by long-term industrial networks. Thus, the most prominent LP-WAN solutions are reviewed, identifying and discussing the pros and cons of each of them. The focus is also on examining the current deployment state of these platforms in Spain. Although LP-WAN systems are at early stages of development, they represent a promising alternative for boosting future industrial IIoT (Industrial Internet of Things) networks and services."}
{"_id": "8354adf7092db767712b9b95c8c58cd62752df9f", "title": "Facial performance sensing head-mounted display", "text": "There are currently no solutions for enabling direct face-to-face interaction between virtual reality (VR) users wearing head-mounted displays (HMDs). The main challenge is that the headset obstructs a significant portion of a user's face, preventing effective facial capture with traditional techniques. To advance virtual reality as a next-generation communication platform, we develop a novel HMD that enables 3D facial performance-driven animation in real-time. Our wearable system uses ultra-thin flexible electronic materials that are mounted on the foam liner of the headset to measure surface strain signals corresponding to upper face expressions. These strain signals are combined with a head-mounted RGB-D camera to enhance the tracking in the mouth region and to account for inaccurate HMD placement. To map the input signals to a 3D face model, we perform a single-instance offline training session for each person. For reusable and accurate online operation, we propose a short calibration step to readjust the Gaussian mixture distribution of the mapping before each use. The resulting animations are visually on par with cutting-edge depth sensor-driven facial performance capture systems and hence, are suitable for social interactions in virtual worlds."}
{"_id": "0a4f03e8e25d9deb2d9a4ac853b87ae22f908720", "title": "Machine Learning for the Internet of Things Security: A Systematic Review", "text": "Internet of things (IoT) is nowadays one of the fastest growing technologies for both private and business purposes. Due to a big number of IoT devices and their rapid introduction to the market, security of things and their services is often not at the expected level. Recently, machine learning algorithms, techniques, and methods are used in research papers to enhance IoT security. In this paper, we systematically review the state-of-the art to classify the research on machine learning for the IoT security. We analysed the primary studies, identify types of studies and publication fora. Next, we have extracted all machine learning algorithms and techniques described in primary studies, and identified the most used ones to tackle IoT security issues. We classify the research into three main categories (intrusion detection, authentication and other) and describe the primary studies in detail to analyse existing relevant works and propose topics for future research."}
{"_id": "9dbe40fb98577f53d03609f69c34d9b263ff2385", "title": "The impact of data replication on job scheduling performance in the Data Grid", "text": "In the Data Grid environment, the primary goal of data replication is to shorten the data access time experienced by the job and consequently reduce the job turnaround time. After introducing a Data Grid architecture that supports efficient data access for the Grid job, the dynamic data replication algorithms are put forward. Combined with different Grid scheduling heuristics, the performances of the data replication algorithms are evaluated with various simulations. The simulation results demonstrate f shortest lacement data e ists, d the ten-"}
{"_id": "1ea8085fe1c79d12adffb02bd157b54d799568e4", "title": "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection", "text": "We develop a face recognition algorithm which is insensitive to large variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3D linear subspace of the high dimensional image space\u2014if the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher\u2019s Linear Discriminant and produces well separated classes in a low-dimensional subspace, even under severe variation in lighting and facial expressions. The Eigenface technique, another method based on linearly projecting the image space to a low dimensional subspace, has similar computational requirements. Yet, extensive experimental results demonstrate that the proposed \u201cFisherface\u201d method has error rates that are lower than those of the Eigenface technique for tests on the Harvard and Yale Face Databases. Index Terms \u2014Appearance-based vision, face recognition, illumination invariance, Fisher\u2019s linear discriminant. \u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014 \u2726 \u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014"}
{"_id": "269d8ef5cbb1d6e83079a3ecd985b0dfa2c0426e", "title": "Hardware, Design and Implementation Issues on a Fpga-Based Smart Camera", "text": "Processing images to extract useful information in real-time is a complex task, dealing with large amounts of iconic data and requiring intensive computation. Smart cameras use embedded processing to save the host system from the low-level processing load and to reduce communication flows and overheads. Field programmable devices present special interest for smart cameras design: flexibility, reconfigurability and parallel processing skills are some specially important features. In this paper we present a FPGA-based smart camera research platform. The hardware architecture is described, and some design issues are discussed. Our goal is to use the possibility to reconfigure the FPGA device in order to adapt the system architecture to a given application. To that, a design methodology, based on pre-programmed processing elements, is proposed and sketched. Some implementation issues are discussed and a template tracking application is given as example, with its experimental results."}
{"_id": "7a4777371a4a993445bddb3d4ac6721e93654402", "title": "Oboe: auto-tuning video ABR algorithms to network conditions", "text": "Most content providers are interested in providing good video delivery QoE for all users, not just on average. State-of-the-art ABR algorithms like BOLA and MPC rely on parameters that are sensitive to network conditions, so may perform poorly for some users and/or videos. In this paper, we propose a technique called Oboe to auto-tune these parameters to different network conditions. Oboe pre-computes, for a given ABR algorithm, the best possible parameters for different network conditions, then dynamically adapts the parameters at run-time for the current network conditions. Using testbed experiments, we show that Oboe significantly improves BOLA, MPC, and a commercially deployed ABR. Oboe also betters a recently proposed reinforcement learning based ABR, Pensieve, by 24% on average on a composite QoE metric, in part because it is able to better specialize ABR behavior across different network states."}
{"_id": "89d642c21202d5905c59d23d735c302eaffbce08", "title": "An efficient method of license plate location", "text": "License plate location is an important stage in vehicle license plate recognition for automated transport system. This paper presents a real time and robust method of license plate location. License plate area contains rich edge and texture information. We first extract out the vertical edges of the car image using image enhancement and Sobel operator, then remove most of the background and noise edges by an effective algorithm, and finally search the plate region by a rectangle window in the residual edge image and segment the plate out from the original car image. Experimental results demonstrate the great robustness and efficiency of our method. 2005 Elsevier B.V. All rights reserved."}
{"_id": "287a11d7b95490866104535a0419aea5702192f5", "title": "How to Increase Users' Social Commerce Engagement? A Technology Attractiveness Model", "text": "With the proliferation of social networking and electronic commerce, social commerce helps people engage in various forms of online social commercial activities through sharing their product or service knowledge and experiences. A better understanding of users\u2019 engagement in social commerce websites thus become increasingly important. Based on the attractiveness theory, this study proposes a research model that highlights the unique role of technology attractiveness, including task, social, and physical attractiveness, in promoting user involvement, which in turn affects social commerce engagement. Results demonstrate that users\u2019 perceptions of technology attractiveness are positively associated with their involvement with social commerce websites, and further stimulate engagement. In addition, website involvement partially and fully mediates the effects of social and physical attractiveness, respectively, on social commerce engagement. The limitations and implications of this study for research and practice are further discussed."}
{"_id": "e62ffb7a63e063aedf046f8cf05b8aa47ddb930a", "title": "Geometry-Driven Diffusion in Computer Vision", "text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance."}
{"_id": "2b2e6b6ef4312146712aefafd28638b6a221d411", "title": "3D face matching and registration based on hyperbolic Ricci flow", "text": "3D surface matching is fundamental for shape analysis. As a powerful method in geometric analysis, Ricci flow can flexibly design metrics by prescribed target curvature. In this paper we describe a novel approach for matching surfaces with complicated topologies based on hyperbolic Ricci flow. For surfaces with negative Euler characteristics, such as a human face with holes (eye contours), the canonical hyperbolic metric is conformal to the original and can be efficiently computed. Then the surface can be canonically decomposed to hyperbolic hexagons. By matching the corresponding hyperbolic hexagons, the matching between surfaces can be easily established. Compared to existing methods, hyperbolic Ricci flow induces diffeomorphisms between surfaces with complicated topologies with negative Euler characteristics, while avoiding singularities. Furthermore, all the boundaries are intrinsically mapped to hyperbolic lines as alignment constraints. Finally, we demonstrate the applicability of this intrinsic shape representation for 3D face matching and registration."}
{"_id": "9c8c9d2f7803197cdf3b6def3a63429bfbeef9e7", "title": "The weight consistency matrix framework for general non-binary LDPC code optimization: Applications in flash memories", "text": "Transmission channels underlying modern memory systems, e.g., Flash memories, possess a significant amount of asymmetry. While existing LDPC codes optimized for symmetric, AWGN-like channels are being actively considered for Flash applications, we demonstrate that, due to channel asymmetry, such approaches are fairly inadequate. We propose a new, general, combinatorial framework for the analysis and design of non-binary LDPC (NB-LDPC) codes for asymmetric channels. We introduce a refined definition of absorbing sets, which we call general absorbing sets (GASs), and an important subclass of GASs, which we refer to as general absorbing sets of type two (GASTs). Additionally, we study the combinatorial properties of GASTs. We then present the weight consistency matrix (WCM), which succinctly captures key properties in a GAST. Based on these new concepts, we then develop a general code optimization framework, and demonstrate its effectiveness on the realistic highly-asymmetric normal-Laplace mixture (NLM) Flash channel. Our optimized codes enjoy over one order (resp., half of an order) of magnitude performance gain in the uncorrectable BER (UBER) relative to the unoptimized codes (resp. the codes optimized for symmetric channels)."}
{"_id": "1c8b317056f490f2c9e33d8a8141edac1a13d136", "title": "An approach to interactive deep reinforcement learning for serious games", "text": "Serious games receive increasing interest in the area of e-learning. Their development, however, is often still a demanding, specialized and arduous process, especially when regarding reasonable non-player character behaviour. Reinforcement learning and, since recently, also deep reinforcement learning have proven to automatically generate successful AI behaviour to a certain degree. These methods are computationally expensive and hardly scalable to various complex serious game scenarios. For this reason, we introduce a new approach of augmenting the application of deep reinforcement learning methods by interactively making use of domain experts' knowledge to guide the learning process. Thereby, we aim to create a synergistic combination of experts and emergent cognitive systems. We call this approach interactive deep reinforcement learning and point out important aspects regarding realization within a framework."}
{"_id": "6f3f457b1028a4376564579c69c799e8c22e71af", "title": "Gesture recognition using kinect for sign language translation", "text": "Sign Language is a widely used method of communication among the community of deaf-mute people. It contains some series of body gestures, which enables a person to interact without the need of spoken words. Although the use of sign language is very popular among the deaf-mute people but the other communities don't even try to learn it, this creates a gulf of communication and hence becomes a cause of the isolation of physically impaired people. The problem creates a requirement of a system which can facilitate a way of communication between these two communities. This paper presents a novel method for identification of an isolated sign language gesture using Microsoft Kinect. This paper presents the way of extracting some highly robust features from the depth image provided by Kinect and to use them in creating a robust and accurate gesture recognition system, for the purpose of sign language translation. Apart from translation, the proposed system also opens the doors of endless applications in the field of Human Computer Interaction. The proposed algorithm helps in translating a sign language gesture performed by a user, which in-turn can be used as an input for different types of applications."}
{"_id": "3ca6ab58ae015860098d800a9942af9df4d1e090", "title": "Software and Algorithms for Graph Queries on Multithreaded Architectures", "text": "Search-based graph queries, such as finding short paths and isomorphic subgraphs, are dominated by memory latency. If input graphs can be partitioned appropriately, large cluster-based computing platforms can run these queries. However, the lack of compute-bound processing at each vertex of the input graph and the constant need to retrieve neighbors implies low processor utilization. Furthermore, graph classes such as scale-free social networks lack the locality to make partitioning clearly effective. Massive multithreading is an alternative architectural paradigm, in which a large shared memory is combined with processors that have extra hardware to support many thread contexts. The processor speed is typically slower than normal, and there is no data cache. Rather than mitigating memory latency, multithreaded machines tolerate it. This paradigm is well aligned with the problem of graph search, as the high ratio of memory requests to computation can be tolerated via multithreading. In this paper, we introduce the multithreaded graph library (MTGL), generic graph query software for processing semantic graphs on multithreaded computers. This library currently runs on serial machines and the Cray MTA-2, but Sandia is developing a run-time system that will make it possible to run MTGL-based code on symmetric multiprocessors. We also introduce a multithreaded algorithm for connected components and a new heuristic for inexact subgraph isomorphism We explore the performance of these and other basic graph algorithms on large scale-free graphs. We conclude with a performance comparison between the Cray MTA-2 and Blue Gene/Light for s-t connectivity."}
{"_id": "b6e6112193da680f88502a0eb5b9e3478ba64109", "title": "The hymen morphology in normal newborn Saudi girls.", "text": "BACKGROUND\nHymen morphology has a medico-legal importance. In view of the lack of national norms, establishing the hymen morphology of Saudi newborn infants is essential.\n\n\nSUBJECTS AND METHODS\nOver a period of 4 months, the genitalia of 345 full-term female newborn infants were examined to determine the shape of the hymen. A previously described labia traction technique was used to classify the hymen morphology into annular, sleeve-like, fimbriated, crescentric, and other types.\n\n\nRESULTS\nThe hymen was present in all 345 female newborn infants examined. A total of 207 (60%) were of the annular type, 76 (22%) were sleeve-like, 43 (12.5%) fimbriated, 17 (4.9%) crescentric, and 2 (0.6%) of other types.\n\n\nCONCLUSION\nThe most common hymen morphology in Saudi newborn girls was annular, followed by sleeve-like, fimbriated, and crescentric. This study may be the first to define normal configuration of the hymen in this community."}
{"_id": "4aa5191088edafc2d3ae6232d9db4145d0099529", "title": "Self-Attentive Feature-Level Fusion for Multimodal Emotion Detection", "text": "Multimodal emotion recognition is the task of detecting emotions present in user-generated multimedia content. Such resources contain complementary information in multiple modalities. A stiff challenge often faced is the complexity associated with feature-level fusion of these heterogeneous modes. In this paper, we propose a new feature-level fusion method based on self-attention mechanism. We also compare it with traditional fusion methods such as concatenation, outer-product, etc. Analyzed using textual and speech (audio) modalities, our results suggest that the proposed fusion method outperforms others in the context of utterance-level emotion recognition in videos."}
{"_id": "9f0bca4ad46a2ee9f03da23bba619d6edc453cb1", "title": "Provenance: On and Behind the Screens", "text": "Collecting and processing provenance, i.e., information describing the production process of some end product, is important in various applications, e.g., to assess quality, to ensure reproducibility, or to reinforce trust in the end product. In the past, different types of provenance meta-data have been proposed, each with a different scope. The first part of the proposed tutorial provides an overview and comparison of these different types of provenance. To put provenance to good use, it is essential to be able to interact with and present provenance data in a user-friendly way. Often, users interested in provenance are not necessarily experts in databases or query languages, as they are typically domain experts of the product and production process for which provenance is collected (biologists, journalists, etc.). Furthermore, in some scenarios, it is difficult to use solely queries for analyzing and exploring provenance data. The second part of this tutorial therefore focuses on enabling users to leverage provenance through adapted visualizations. To this end, we will present some fundamental concepts of visualization before we discuss possible visualizations for provenance."}
{"_id": "2eafdb47aa9b5b510f7fcb113b22e6ab7c79d143", "title": "Using Genetic Algorithm to Solve Game of Go-Moku", "text": "Genetic algorithm is a stochastic parallel beam search that can be applied to many typical search problems. This paper describes a genetic algorithmic approach to a problem in artificial intelligence. During the process of evolution, the environment cooperates with the population by continuously making itself friendlier so as to lower the evolutionary pressure. Evaluations show the performance of this approach seems considerably effective in solving this type of board games. Game-playing programs are often described as being a combination of search and"}
{"_id": "476e152a5e9999cc252bbde95fb9dce8e73236c2", "title": "How to Normalize Co-Occurrence Data ? An Analysis of Some Well-Known Similarity Measures Nees", "text": "AND"}
{"_id": "c05b671fd1b6607fcf3eacfc605067c0c6e9d608", "title": "An improved algorithm for feature selection using fractal dimension", "text": "Dimensionality reduction is an important issue in data mining and machine learning. Traina[1] proposed a feature selection algorithm to select the most important attributes for a given set of n-dimensional vectors based on correlation fractal dimension. The author used a kind of multi-dimensional \u201cquad-tree\u201d structure to compute the fractal dimension. Inspired by his work, we propose a new and simpler algorithm to compute the fractal dimension, and design a novel and faster feature selection algorithm using correlation fractal dimension, whose time complexity is lower than that of Traina\u2019s. The main idea is when computing the fractal dimension of (d-1)-dimensional data, the intermediate generated results of the extended d-dimensional data is reused. It inherits the desirable properties described as in [1]. Also, Our algorithm does not require the grid sizes decrease by half as the original \u201cquad-tree\u201d algorithm. Experiments show our feature selection algorithm has a good efficiency over the test dataset."}
{"_id": "3055b4df91e92c32637f3a5146c13875c963dc72", "title": "Advancing interactive collaborative mediums through tele-immersive dance (TED): a symbiotic creativity and design environment for art and computer science", "text": "The Tele-immersive Dance Environment (TED) is a geographically distributed, real-time 3-D virtual room where multiple participants interact independent of physical distance. TED, a highly interactive collaborative environment, offers digital options with multiple viewpoints, enhancing the creative movement composition involved with dance choreography. A symbiotic relationship for creativity and design exists between dance artists and computer scientists as the tele-immersive environment is analyzed as a creativity and learning tool. We introduce the advancements of the interactive digital options, new interface developments, user study results, and the possibility of a computational model for human creativity through Laban Movement Analysis."}
{"_id": "6906431861dff300461e595f756c13c95a9507cb", "title": "A Survey on Security for Mobile Devices", "text": "Nowadays, mobile devices are an important part of our everyday lives since they enable us to access a large variety of ubiquitous services. In recent years, the availability of these ubiquitous and mobile services has significantly increased due to the different form of connectivity provided by mobile devices, such as GSM, GPRS, Bluetooth and Wi-Fi. In the same trend, the number and typologies of vulnerabilities exploiting these services and communication channels have increased as well. Therefore, smartphones may now represent an ideal target for malware writers. As the number of vulnerabilities and, hence, of attacks increase, there has been a corresponding rise of security solutions proposed by researchers. Due to the fact that this research field is immature and still unexplored in depth, with this paper we aim to provide a structured and comprehensive overview of the research on security solutions for mobile devices. This paper surveys the state of the art on threats, vulnerabilities and security solutions over the period 2004-2011, by focusing on high-level attacks, such those to user applications. We group existing approaches aimed at protecting mobile devices against these classes of attacks into different categories, based upon the detection principles, architectures, collected data and operating systems, especially focusing on IDS-based models and tools. With this categorization we aim to provide an easy and concise view of the underlying model adopted by each approach."}
{"_id": "fcce2d6be712765ffef7943ab364d1652db2820a", "title": "End-to-End Blind Image Quality Assessment Using Deep Neural Networks", "text": "We propose a multi-task end-to-end optimized deep neural network (MEON) for blind image quality assessment (BIQA). MEON consists of two sub-networks\u2014a distortion identification network and a quality prediction network\u2014sharing the early layers. Unlike traditional methods used for training multi-task networks, our training process is performed in two steps. In the first step, we train a distortion type identification sub-network, for which large-scale training samples are readily available. In the second step, starting from the pre-trained early layers and the outputs of the first sub-network, we train a quality prediction sub-network using a variant of the stochastic gradient descent method. Different from most deep neural networks, we choose biologically inspired generalized divisive normalization (GDN) instead of rectified linear unit as the activation function. We empirically demonstrate that GDN is effective at reducing model parameters/layers while achieving similar quality prediction performance. With modest model complexity, the proposed MEON index achieves state-of-the-art performance on four publicly available benchmarks. Moreover, we demonstrate the strong competitiveness of MEON against state-of-the-art BIQA models using the group maximum differentiation competition methodology."}
{"_id": "3e6fcc02692664f2ac92cef6cffe11ef4ec5b407", "title": "Quality, productivity and economic benefits of software reuse: a review of industrial studies", "text": "Systematic software reuse is proposed to increase productivity and software quality and lead to economic benefits. Reports of successful software reuse programs in industry have been published. However, there has been little effort to organize the evidence systematically and appraise it. This review aims to assess the effects of software reuse in industrial contexts. Journals and major conferences between 1994 and 2005 were searched to find observational studies and experiments conducted in industry, returning eleven papers of observational type. Systematic software reuse is significantly related to lower problem (defect, fault or error) density in five studies and to decreased effort spent on correcting problems in three studies. The review found evidence for significant gains in apparent productivity in three studies. Other significant benefits of software reuse were reported in single studies or the results were inconsistent. Evidence from industry is sparse and combining results was done by vote-counting. Researchers should pay more attention to using comparable metrics, performing longitudinal studies, and explaining the results and impact on industry. For industry, evaluating reuse of COTS or OSS components, integrating reuse activities in software processes, better data collection and evaluating return on investment are major challenges."}
{"_id": "5143b97d3fe8b881d0baa50984e3b99b7f13626e", "title": "NoSQL Systems for Big Data Management", "text": "The advent of Big Data created a need for out-of-the-box horizontal scalability for data management systems. This ushered in an array of choices for Big Data management under the umbrella term NoSQL. In this paper, we provide a taxonomy and unified perspective on NoSQL systems. Using this perspective, we compare and contrast various NoSQL systems using multiple facets including system architecture, data model, query language, client API, scalability, and availability. We group current NoSQL systems into seven broad categories: Key-Value, Table-type/Column, Document, Graph, Native XML, Native Object, and Hybrid databases. We also describe application scenarios for each category to help the reader in choosing an appropriate NoSQL system for a given application. We conclude the paper by indicating future research directions."}
{"_id": "8078a68fd78bb4cde311e53a5fc6a30cd2560fad", "title": "Interoperable Data Migration between NoSQL Columnar Databases", "text": "NoSQL databases have emerged as the solution to handle large quantities of user-generated contents still guaranteeing fault tolerance, availability and scalability. Each NoSQL database offers differentiated properties and characteristics as well as different data models and architectures. As a result, the development of applications exploiting such kind of technology is strictly dependent on the specific NoSQL solution being adopted, and the migration from a NoSQL to the other requires the development of ad-hoc code managing the transfer of data. In order to mitigate such issue, this paper proposes an interoperable migration system for columnar NoSQL databases. The proposed approach is based on an orginal Metamodel, capable of preserving both strong and weak consistency between data updates, secondary indexes and various data types. Moreover, the approach allows developers to easily add support for new databases."}
{"_id": "5107784c74f7f3a733e337a5190247cd04869cd6", "title": "SEQUEL: A Structured English Query Language", "text": "In this paper we present the data manipulation facility for a structured English query language (SEQUEL) which can be used for accessing data in an integrated relational data base. Without resorting to the concepts of bound variables and quantifiers SEQUEL identifies a set of simple operations on tabular structures, which can be shown to be of equivalent power to the first order predicate calculus. A SEQUEL user is presented with a consistent set of keyword English templates which reflect how people use tables to obtain information. Moreover, the SEQUEL user is able to compose these basic templates in a structured manner in order to form more complex queries. SEQUEL is intended as a data base sublanguage for both the professional programmer and the more infrequent data base user."}
{"_id": "676734e5b7ecffe5f71a01784a58093104777ae7", "title": "CAP twelve years later: How the \"rules\" have changed", "text": "The CAP theorem asserts that any networked shared-data system can have only two of three desirable properties. However, by explicitly handling partitions, designers can optimize consistency and availability, thereby achieving some trade-off of all three. The featured Web extra is a podcast from Software Engineering Radio, in which the host interviews Dwight Merriman about the emerging NoSQL movement, the three types of nonrelational data stores, Brewer's CAP theorem, and much more."}
{"_id": "4c5daf186822f3aa2924d151be43c5e049ba13ab", "title": "Research Methods in Computer Science: The Challenges and Issues", "text": "Research methods are essential parts in conducting any research project. Although they have been theorized and summarized based on best practices, every field of science requires an adaptation of the overall approaches to perform research activities. In addition, any specific research needs a particular adjustment to the generalized approach and specializing them to suit the project in hand. However, unlike most well-established science disciplines, computing research is not supported by well-defined, globally accepted methods. This is because of its infancy and ambiguity in its definition, on one hand, and its extensive coverage and overlap with other fields, on the other hand. This article discusses the research methods in science and engineering in general and in computing in particular. It shows that despite several special parameters that make research in computing rather unique, it still follows the same steps that any other scientific research would do. The article also shows the particularities that researchers need to consider when they conduct research in this field."}
{"_id": "35d60639efde74ccde49612d1773e66c96753042", "title": "Abiotic and Biotic Stress Responses in Solanum tuberosum Group Phureja DM1-3 516 R44 as Measured through Whole Transcriptome Sequencing", "text": "This study conducted an in-depth analysis of the potato (Solanum tuberosum L.) transcriptome in response to abiotic (salinity, drought, and heat) and biotic (Phytophthora infestans, DL-b amino-n-butyric acid, and acibenzolar-s-methyl) stresses and plant hormone treatment (abscisic acid, 6-benzylaminopurine, gibberellic acid, and indole-3-acetic acid) using ribonucleic acid sequencing (RNA-seq) of the doubled monoploid S. tuberosum Group Phureja DM1-3 516 R44 clone. Extensive changes in gene expression were observed with 37% of the expressed genes being differentially expressed in at least one comparison of stress to control tissue. Stress-inducible genes known to function in stress tolerance or to be involved in the regulation of gene expression were among the highest differentially expressed. Members of the MYB, APETALA2 (AP2)/ethylene-responsive element binding factor (ERF), and NAM, ATAF1/2, and CUC2 (NAC) transcription factor families were the most represented regulatory genes. A weighted gene co-expression network analysis yielded 37 co-expression modules containing 9198 genes. Fifty percent of the genes within these co-expression modules were specific to a stress condition indicating conditionspecific responses. Cross-species comparison between potato and Arabidopsis thaliana (L.) Heynh. uncovered differentially expressed orthologs and defined evolutionary conserved genes. Collectively, the transcriptional profiling of RNA-seq data presented here provide a valuable reference for potato stress responses to environmental factors that is supported by statistically significant differences in expression changes, highly interconnected genes in co-expression networks, and evidence of evolutionary conservation. Plants growing in natural habitats are exposed to multiple environmental stresses resulting from abiotic and biotic factors. Adaptation and response to these stresses is highly complex and involve changes at the molecular, cellular, and physiological levels. Abiotic stress factors such as heat, drought, and salinity have a significant impact on cultivated potato, affecting yield, tuber quality, and market value (Wang-Pruski and Schofield, 2012). Water availability is by far the most important limiting factor in crop production. Potato plants use water relatively efficiently; however, due in part to its sparse and shallow root system, potato is considered a drought-sensitive crop (Yuan et al., 2003). Salinity is another serious threat to crop production that may increase under anticipated climate change scenarios (Yeo, 1998). Although potatoes are considered moderately salt sensitive (Katerji et al., 2000), significant variations in salt tolerance exist among Solanum tuberosum cultivars and wild Solanum species (Khrais et al., 1998). Furthermore, potato responses to salt stress are related to the responses of other abiotic stresses, as revealed by microarray analysis of leaf responses to cold and salt Published in The Plant Genome 6 doi: 10.3835/plantgenome2013.05.0014 \u00a9 Crop Science Society of America 5585 Guilford Rd., Madison, WI 53711 USA An open-access publication All rights reserved. 
No part of this periodical may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Permission for printing and for reprinting the material contained herein has been obtained by the publisher. Dep. of Plant Biology, Michigan State Univ., East Lansing, MI 48824. Received 17 May 2013. *Corresponding author (buell@"}
{"_id": "81a23c5ebf2414d65e32e2a68903bbe0c056a3a7", "title": "G2G interaction among local agencies in developing countries based on diffusion of innovations theory", "text": "Technological advancement has allowed governments to meet the demands of its citizens electronically. Electronic government (e-Government) facilitates accurate and fast transactions and delivery of services and information to businesses, citizens, and government agencies. Moreover, e-Government helps enhance democracy. Agencies interact with one another electronically through the e-Government, which enhances efficiency. e-Government utilizes information and communication technology to provide the public access to various services. Leaders and information technology executives in the public sector have recognized the importance of sharing inter-organizational information to improve the efficiency of government agencies. Therefore, this study takes the diffusion of innovations theory as context to identify the most important factor affecting the electronic interaction between local agencies in developing countries."}
{"_id": "7a66e302652f50699304dedf46384d33edc9f4c1", "title": "A Kinect-Based Wearable Face Recognition System to Aid Visually Impaired Users", "text": "In this paper, we introduce a real-time face recognition (and announcement) system targeted at aiding the blind and low-vision people. The system uses a Microsoft Kinect sensor as a wearable device, performs face detection, and uses temporal coherence along with a simple biometric procedure to generate a sound associated with the identified person, virtualized at his/her estimated 3-D location. Our approach uses a variation of the K-nearest neighbors algorithm over histogram of oriented gradient descriptors dimensionally reduced by principal component analysis. The results show that our approach, on average, outperforms traditional face recognition methods while requiring much less computational resources (memory, processing power, and battery life) when compared with existing techniques in the literature, deeming it suitable for the wearable hardware constraints. We also show the performance of the system in the dark, using depth-only information acquired with Kinect's infrared camera. The validation uses a new dataset available for download, with 600 videos of 30 people, containing variation of illumination, background, and movement patterns. Experiments with existing datasets in the literature are also considered. Finally, we conducted user experience evaluations on both blindfolded and visually impaired users, showing encouraging results."}
{"_id": "fc2bbe930c124b9ed6ba8d486e764bdb36974ccf", "title": "Agent based Architecture for Modeling and Analysis of Self Adaptive Systems using Formal Methods", "text": "Self-adaptive systems (SAS) can modify their behavior during execution; this modification is done because of change in internal or external environment. The need for selfadaptive software systems has increased tremendously in last decade due to ever changing user requirements, improvement in technology and need for building software that reacts to user preferences. To build this type of software we need well establish models that have the flexibility to adjust to the new requirements and make sure that the adaptation is efficient and reliable. Feedback loop has proven to be very effective in modeling and developing SAS, these loops help the system to sense, analyze, plan, test and execute the adaptive behavior at runtime. Formal methods are well defined, rigorous and reliable mathematical techniques that can be effectively used to reason and specify behavior of SAS at design and run-time. Agents can play an important role in modeling SAS because they can work independently, with other agents and with environment as well. Using agents to perform individual steps in feedback loop and formalizing these agents using Petri nets will not only increase the reliability, but also, the adaptation can be performed efficiently for taking decisions at run time with increased confidence. In this paper, we propose a multi-agent framework to model self-adaptive systems using agent based modeling. This framework will help the researchers in implementation of SAS, which is more dependable, reliable, autonomic and flexible because of use of multi-agent based formal approach. Keywords\u2014Formal methods; self-adaptive systems; agent based modeling; feedback loop; Petri nets"}
{"_id": "d4c65ee21bb8d64b8e4380f80ad856a1629b5949", "title": "Development of folded dual-polarization dividers for broadband ortho-mode transducers", "text": "A waveguide divider with folded lateral arms is presented for separating dual orthogonal linear polarizations in broadband ortho-mode transducers. The structure is based on a well-known double symmetry junction, where the metallic pins have been eliminated and the lateral outputs have been folded to achieve a combined effect: matching for the vertical polarization and a very significant size reduction. In addition, since the path for the lateral branches has been reduced, the insertion losses for the different polarizations are balanced. The isolation between orthogonal polarizations is kept because of the double-symmetry of the junction. From the mechanical point of view, the proposed junction allows a more simple manufacture and assembly of the ortho-mode transducer parts, which has been shown with a Ku-band design, covering the full Ku-band from 12.6 to 18.25 GHz. The experimental prototype has demonstrated a measured return loss better than 28 dB in the design band and insertion loss smaller than 0.15 dB for both polarizations."}
{"_id": "a23cf642a98f67f9b8f1271705024a08e95fa1e7", "title": "Real-time wireless communication with per-packet deadlines", "text": "The last decades\u2019 tremendous advances in wireless communications have been driven mainly by personal communications. Radio resource allocation mechanisms for optimizing key metrics, such as average throughput and delay, for such traffic are by now rather well-developed. However, with the increased interest in wireless machine-to-machine communication, e.g. for industrial control or monitoring of large-scale infrastructures, new challenges emerge. The performance of an estimator or closed-loop control system that operates over an unreliable wireless network depends on the full latency and loss distributions and not only on their averages. For these applications, more suitable performance metrics are per-packet guarantees on latency and reliability (on-time delivery). This thesis studies optimal forwarding of deadline-constrained traffic over lossy multi-hop networks. We assume a routing topology in the form of a directed graph with packet loss on links described by finite-state Markov chains, and focus on a single transient packet scenario. Two problems are considered: maximizing the probability that packets are delivered within their specified deadline; and minimizing the expected energy cost with a guaranteed probability of on-time delivery. The first problem can be formulated as a finite-horizon Markov decision process (MDP), while the second problem is a finite-horizon constrained Markov decision process (CMDP). A general dynamic programming framework that solves a weighted sum of reliability and energy maximization problem is proposed. The maximum deadline-constrained reliability problem is solved by studying a simplification of the general dynamic programming. The minimum energy optimal policy is a random selection between two deterministic and computable forwarding policies, each of which can be found via a dynamic programming framework. Particular instances with Bernoulli and Gilbert-Elliot loss models that admit numerically efficient solutions are discussed. Finally, we show the application of the technique for a co-design of forwarding policies and controller for wireless control. Based on the recent result on the monotonicity of the optimal control loss, the problem of minimum control loss and the problem of minimum energy cost with a guaranteed control performance can be solved optimally."}
{"_id": "db3259ae9e7f18a319cc24229662da9bf400221a", "title": "Numerical methods for least square problems", "text": ""}
{"_id": "5a77bb301637660fb5e1fe96144c65cd8cc160d5", "title": "Analysis and Control of Multiphase Inductively Coupled Resonant Converter for Wireless Electric Vehicle Charger Applications", "text": "In this paper, an inductively coupled multiphase resonant converter is presented for wireless electric vehicle charging applications. As an alternative to the traditional frequency and phase shift control methods, a hybrid phase-frequency control strategy is implemented to improve the system efficiency. A theoretical analysis of the proposed system is carried out considering a wide battery state-of-charging range. In order to confirm the proposed converter and control technique, a laboratory prototype wireless charger is designed using 8-in air-gap coreless transformer and rectifier. The proposed control is compared with the conventional control methods for various load conditions at the different power levels. In comparison results, the proposed hybrid control methodology demonstrates the efficiency improvements of 1.1% at the heaviest load condition and 5.7% at the lightest load condition."}
{"_id": "b35cba9513348875c7dd28b72b70b95964737765", "title": "Arduino-based smart irrigation using water flow sensor, soil moisture sensor, temperature sensor and ESP8266 WiFi module", "text": "Emergence of Controlled Environment Agriculture (CEA) ranging from computer controlled water irrigation system to lightning and ventilation has changed the conventional scenario of farming. This paper proposes and demonstrate an economical and easy to use arduino based controlled irrigation system. The designed system deals with various environmental factors such as moisture, temperature and amount of water required by the crops using sensors like water flow sensor, temperature sensor and soil moisture sensor. Datas are collected and received by arduino which can be linked to an interactive website which show the real time values along with the standard values of different factor required by a crop. This allows user to control irrigation pumps and sprinklers from far distance through a website and to meet the standard values which would help the farmer to yield maximum and quality crops. Studies conducted on laboratory prototype suggested the designed system to be applicable which can be implemented."}
{"_id": "732e27b9f67daf5a761dce79495c5c1a4170707a", "title": "Methods for the prevention, detection and removal of software security vulnerabilities", "text": "Over the past decade, the need to build secure software has become a dominant goal in software development. Consequently, software researchers and practitioners have identified ways that malicious users can exploit software and how developers can fix the vulnerabilities. They have also built a variety of source code security checking software applications to partially automate the task of performing a security analysis of a program. Although great advances have been made in this area, the core problem of how the security vulnerabilities occur still exists. An answer to this problem could be a paradigm shift from imperative to functional programming techniques. This may hold the key to removing software vulnerabilities altogether."}
{"_id": "345ebabb5589d7d66955ac629429a72e4ebe26f0", "title": "Neurocognitive Architecture of Working Memory", "text": "A crucial role for working memory in temporary information processing and guidance of complex behavior has been recognized for many decades. There is emerging consensus that working-memory maintenance results from the interactions among long-term memory representations and basic processes, including attention, that are instantiated as reentrant loops between frontal and posterior cortical areas, as well as sub-cortical structures. The nature of such interactions can account for capacity limitations, lifespan changes, and restricted transfer after working-memory training. Recent data and models indicate that working memory may also be based on synaptic plasticity and that working memory can operate on non-consciously perceived information."}
{"_id": "6c3f3a0fc282a809fd0d46399077fd90049b407c", "title": "Imago: presence and emotion in virtual reality", "text": "As virtual reality becomes available to a wider audience, filmmakers are being challenged to create ever more immersive and creative narratives for this quickly evolving medium. While many of the early attempts at virtual reality filmmaking have been devoted to the creation of quick paced and aw-inspiring experiences, filmmakers such as Chris Milk and Oculus Story Studio have been more focused on more story driven experiences, using the medium to tell narratives in a new and exciting way.\n Project Hypnos was created at Carnegie Mellon University to explore immersive narrative filmmaking techniques for virtual reality. The resulting proof-of-concept film, Imago, was created using the techniques developed during this research. Imago plays with the concept of a viewer's presence in a film in an attempt to elicit both negative and positive emotions. The result is a story-driven experience that comes to life through live-action drama, dance, and abstract CG imagery."}
{"_id": "c412daa414b5951b915f37708ad6875d2db03b1a", "title": "Representations of recent and remote autobiographical memories in hippocampal subfields", "text": "The hippocampus has long been implicated in supporting autobiographical memories, but little is known about how they are instantiated in hippocampal subfields. Using high-resolution functional magnetic resonance imaging (fMRI) combined with multivoxel pattern analysis we found that it was possible to detect representations of specific autobiographical memories in individual hippocampal subfields. Moreover, while subfields in the anterior hippocampus contained information about both recent (2 weeks old) and remote (10 years old) autobiographical memories, posterior CA3 and DG only contained information about the remote memories. Thus, the hippocampal subfields are differentially involved in the representation of recent and remote autobiographical memories during vivid recall."}
{"_id": "6832638bc755b9e35bc29242fa22d7e0fb0eadb0", "title": "VT: An Expert Elevator Designer That Uses Knowledge-Based Backtracking", "text": "system for handling the design of elevator systems that is currently in use at Westinghouse Elevator Company Although VT tries to postpone each decision in creating a design until all information that constrains the decision is known, for many decisions this postponement is not possible In these cases, VT uses the strategy of constructing a plausible approximation and successively refining it VT uses domain-specific knowledge to guide its backtracking search for successful refinements The VT architecture provides the basis for a knowledge representation that is used by SALT, an automated knowledge-acquisition tool SALT was used to build VT and provides an analysis of VT\u2019s knowledge base to assess its potential for convergence on a solution VT: An Expert Elevator Designer That Uses Knowledge-Based Backtracking Sandra Marcus, Jeffrey Stout, John McDermott"}
{"_id": "9c7277b43fc8699a5d9c0ed95820bf5eed170eb8", "title": "Trust and Recommendations", "text": "Recommendation technologies and trust metrics constitute the two pillars of trust-enhanced recommender systems. We discuss and illustrate the basic trust concepts such as trust and distrust modeling, propagation and aggregation. These concepts are needed to fully grasp the rationale behind the trust-enhanced recommender techniques that are discussed in the central part of the chapter, which focuses on the application of trust metrics and their operators in recommender systems. We explain the benefits of using trust in recommender algorithms and give an overview of state-of-the-art approaches for trust-enhanced recommender systems. Furthermore, we explain the details of three well-known trust-based systems and provide a comparative analysis of their performance. We conclude with a discussion of some recent developments and open challenges, such as visualizing trust relationships in a recommender system, alleviating the cold start problem in a trust network of a recommender system, studying the effect of involving distrust in the recommendation process, and investigating the potential of other types of social relationships."}
{"_id": "c798f1785b243b64b4e2c0eceb36ea54d39feae9", "title": "The YAP Prolog System", "text": "Yet Another Prolog (YAP) is a Prolog system originally developed in the mid-eighties and that has been under almost constant development since then. This paper presents the general structure and design of the YAP system, focusing on three important contributions to the Logic Programming community. First, it describes the main techniques used in YAP to achieve an efficient Prolog engine. Second, most Logic Programming systems have a rather limited indexing algorithm. YAP contributes to this area by providing a dynamic indexing mechanism, or just-in-time indexer (JITI). Third, a important contribution of the YAP system has been the integration of both or-parallelism and tabling in a single Logic Programming system."}
{"_id": "82a0c5dfc66d0b8e26b962795695882f99929aa8", "title": "B-spline-based road model for 3d lane recognition", "text": "The assumption of a planar road is rarely fulfilled on rural roads and often violated in construction zones. Hills and dips are common and curves with small radius often have an inclination towards the center. In this paper a novel road model is presented, which uses B-Spline modeling to describe the 3D road course. The right and left vertical road courses are modeled separately to represent the roll angle of the road. The well-proven clothoid model used for lateral road course estimation is extended with a B-Spline curve resulting in a more flexible road model as needed for rural roads and construction sites. This road model is tracked over time using a Kalman filter. Stereo measurements provide height information of the road surface, which are used for the 3D lane recognition."}
{"_id": "11d2dccc6dd5e52dc18c4835bd5b84083fe7cb39", "title": "Are web applications ready for parallelism?", "text": "In recent years, web applications have become pervasive. Their backbone is JavaScript, the only programming language supported by all major web browsers. Most browsers run on desktop or mobile devices with parallel hardware. However, JavaScript is by design sequential, and current web applications make little use of hardware parallelism. Are web applications ready to exploit parallel hardware? We answer the question in two steps: First, we survey 174 web developers about the potential and challenges of using parallelism. Then, we study the performance and computation shape of a set of web applications that are representative for the emerging web. Our findings indicate that emerging web applications do have latent data parallelism, and JavaScript developers' programming style is not a significant impediment to exploiting this parallelism."}
{"_id": "10dae7fca6b65b61d155a622f0c6ca2bc3922251", "title": "Gradient-based learning algorithms for recurrent networks and their computational complexity", "text": ""}
{"_id": "12dd078034f72e4ebd9dfd9f80010d2ae7aaa337", "title": "Professor Forcing: A New Algorithm for Training Recurrent Networks", "text": "The Teacher Forcing algorithm trains recurrent networks by supplying observed sequence values as inputs during training and using the network\u2019s own one-stepahead predictions to do multi-step sampling. We introduce the Professor Forcing algorithm, which uses adversarial domain adaptation to encourage the dynamics of the recurrent network to be the same when training the network and when sampling from the network over multiple time steps. We apply Professor Forcing to language modeling, vocal synthesis on raw waveforms, handwriting generation, and image generation. Empirically we find that Professor Forcing acts as a regularizer, improving test likelihood on character level Penn Treebank and sequential MNIST. We also find that the model qualitatively improves samples, especially when sampling for a large number of time steps. This is supported by human evaluation of sample quality. Trade-offs between Professor Forcing and Scheduled Sampling are discussed. We produce T-SNEs showing that Professor Forcing successfully makes the dynamics of the network during training and sampling more similar."}
{"_id": "69ab9e293ae3b54f7b004ac89d789716e0ea5aa4", "title": "Style-based inverse kinematics", "text": "This paper presents an inverse kinematics system based on a learned model of human poses. Given a set of constraints, our system can produce the most likely pose satisfying those constraints, in real-time. Training the model on different input data leads to different styles of IK. The model is represented as a probability distribution over the space of all possible poses. This means that our IK system can generate any pose, but prefers poses that are most similar to the space of poses in the training data. We represent the probability with a novel model called a Scaled Gaussian Process Latent Variable Model. The parameters of the model are all learned automatically; no manual tuning is required for the learning component of the system. We additionally describe a novel procedure for interpolating between styles.Our style-based IK can replace conventional IK, wherever it is used in computer animation and computer vision. We demonstrate our system in the context of a number of applications: interactive character posing, trajectory keyframing, real-time motion capture with missing markers, and posing from a 2D image."}
{"_id": "756bfbefe59b7cd91eee3e432e59d8d83566cf43", "title": "Impact of increased penetration of photovoltaic generation on power systems", "text": "Present renewable portfolio standards are changing power systems by replacing conventional generation with alternate energy resources such as photovoltaic (PV) systems. With the increase in penetration of PV resources, power systems are expected to experience a change in dynamic and operational characteristics. This paper studies the impact of increased penetration of PV systems on static performance as well as transient stability of a large power system, in particular the transmission system. Utility scale and residential rooftop PVs are added to the aforementioned system to replace a portion of conventional generation resources. While steady state voltages are observed under various PV penetration levels, the impact of reduced inertia on transient stability performance is also examined. The studied system is a large test system representing a portion of the Western U.S. interconnection. The simulation results obtained effectively identify both detrimental and beneficial impacts of increased PV penetration both for steady state stability and transient stability performance."}
{"_id": "091c35d8f42ff9a571ef782f19a3290f1ac62d95", "title": "Bayesian Poisson Tensor Factorization for Inferring Multilateral Relations from Sparse Dyadic Event Counts", "text": "We present a Bayesian tensor factorization model for inferring latent group structures from dynamic pairwise interaction patterns. For decades, political scientists have collected and analyzed records of the form \"country i took action a toward country j at time t\" - known as dyadic events - in order to form and test theories of international relations. We represent these event data as a tensor of counts and develop Bayesian Poisson tensor factorization to infer a low-dimensional, interpretable representation of their salient patterns. We demonstrate that our model's predictive performance is better than that of standard non-negative tensor factorization methods. We also provide a comparison of our variational updates to their maximum likelihood counterparts. In doing so, we identify a better way to form point estimates of the latent factors than that typically used in Bayesian Poisson matrix factorization. Finally, we showcase our model as an exploratory analysis tool for political scientists. We show that the inferred latent factor matrices capture interpretable multilateral relations that both conform to and inform our knowledge of international a airs."}
{"_id": "808bf39b022006112a4950c69fe995fb481a7942", "title": "Optimizing Players' Expected Enjoyment in Interactive Stories", "text": "In interactive storytelling systems and other story-based computer games, a drama manager is a background agent that aims to bring about an enjoyable and coherent experience for the players. In this paper, we present a personalized drama manager that increases a player\u2019s expected enjoyment without removing player agency. Our personalized drama manager models a player\u2019s preference using data-driven techniques, predicts the probability the player transitioning to different story experiences, selects an objective experience that can maximize the player\u2019s expected enjoyment, and guides the player to the selected story experience. Human study results show that our drama manager can significantly increase players\u2019 enjoyment ratings in an interactive storytelling testbed, compared to drama managers in pre-"}
{"_id": "5021c5f6d94ffaf735ab941241ab21e0c491ffa1", "title": "Improving Performances of MSER Features in Matching and Retrieval Tasks", "text": "MSER features are redefined to improve their performances in matching and retrieval tasks. The proposed SIMSER features (i.e. scale-insensitive MSERs) are the extremal regions which are maximally stable not only under the threshold changes (like MSERs) but, additionally, under image rescaling (smoothing). Theoretical advantages of such a modification are discussed. It is also preliminarily verified experimentally that such a modification preserves the fundamental properties of MSERs, i.e. the average numbers of features, repeatability, and computational complexity (which is only multiplicatively increased by the number of scales used), while performances (measured by typical CBVIR metrics) can be significantly improved. In particular, results on benchmark datasets indicate significant increments in recall values, both for descriptor-based matching and word-based matching. In general, SIMSERs seem particularly suitable for a usage with large visual vocabularies, e.g. they can be prospectively applied to improve quality of BoW pre-retrieval operations in large-scale databases."}
{"_id": "6cbaa29841e96d7f5b910af0113adf13fd8a625c", "title": "Variational Inference for Data-Efficient Model Learning in POMDPs", "text": "Partially observable Markov decision processes (POMDPs) are a powerful abstraction for tasks that require decision making under uncertainty, and capture a wide range of real world tasks. Today, effective planning approaches exist that generate effective strategies given black-box models of a POMDP task. Yet, an open question is how to acquire accurate models for complex domains. In this paper we propose DELIP, an approach to model learning for POMDPs that utilizes structured, amortized variational inference. We empirically show that our model leads to effective control strategies when coupled with state-of-the-art planners. Intuitively, model-based approaches should be particularly beneficial in environments with changing reward structures, or where rewards are initially unknown. Our experiments confirm that DELIP is particularly effective in this setting."}
{"_id": "8776c004a351e23be9ef7a4d214da4fc93260484", "title": "Order-Preserving Encryption for Numeric Data", "text": "Encryption is a well established technology for protecting sensitive data. However, once encrypted, data can no longer be easily queried aside from exact matches. We present an order-preserving encryption scheme for numeric data that allows any comparison operation to be directly applied on encrypted data. Query results produced are sound (no false hits) and complete (no false drops). Our scheme handles updates gracefully and new values can be added without requiring changes in the encryption of other values. It allows standard databse indexes to be built over encrypted tables and can easily be integrated with existing database systems. The proposed scheme has been designed to be deployed in application environments in which the intruder can get access to the encrypted database, but does not have prior domain information such as the distribution of values and annot encrypt or decrypt arbitrary values of his choice. The encryption is robust against estimation of the true value in such environments."}
{"_id": "86f011e5ca736d337c342a7c3d942ee5b57be1f4", "title": "Genetics and Marker-assisted Selection of the Chemotype in Cannabis sativa L.", "text": "Cannabis sativa is an interesting crop for several industrial uses, but the legislations in Europe and USA require a tight control of cannabinoid type and content for cultivation and subsidies release. Therefore, cannabinoid survey by gas chromatography of materials under selection is an important step in hemp breeding. In this paper, a number of Cannabis accessions were examined for their cannabinoid composition. Their absolute and relative content was examined, and results are discussed in the light of both the current genetic model for cannabinoid\u2019s inheritance, and the legislation\u2019s requirements. In addition, the effectiveness of two different types of markers associated to the locus determining the chemotype in Cannabis was evaluated and discussed, as possible tools in marker-assisted selection in hemp, but also for possible applications in the forensic and pharmaceutical fields."}
{"_id": "8829ed626a278c2eba686ff973496f7c0825a2e8", "title": "Deep Context: End-to-end Contextual Speech Recognition", "text": "In automatic speech recognition (ASR) what a user says depends on the particular context she is in. Typically, this context is represented as a set of word n-grams. In this work, we present a novel, all-neural, end-to-end (E2E) ASR system that utilizes such context. Our approach, which we refer to as Contextual Listen, Attend and Spell (CLAS) jointly-optimizes the ASR components along with embeddings of the context n-grams. During inference, the CLAS system can be presented with context phrases which might contain-of-vocabulary (OOV) terms not seen during training. We compare our proposed system to a more traditional contextualization approach, which performs shallow-fusion between independently trained LAS and contextual n-gram models during beam search. Across a number of tasks, we find that the proposed CLAS system outperforms the baseline method by as much as 68% relative WER, indicating the advantage of joint optimization over individually trained components."}
{"_id": "f0b0ec10c5d737f9f11543ed27cfd2779f6bafd8", "title": "Balance control of two-wheeled self-balancing mobile robot based on TS fuzzy model", "text": "Two-wheeled self-balancing robot is a mechanically unstable, nonholonomic constraint robot. This paper has established a dynamics and kinematics for nonholonomic mobile robot. In order to control more effective for self-balancing robot's balance model, has designed fuzzy controller based on the T-S fuzzy model with the parallel distribution compensator (PDC) the structure. It realizes the robot balance and the movement control. Finally, validity and feasibility of the system modeling and the designed controller were confirmed through the actual test results and the simulation results contrast analysis."}
{"_id": "353350b5367727719e3f170fa932d76e2caa350e", "title": "A longitudinal study of follow predictors on twitter", "text": "Follower count is important to Twitter users: it can indicate popularity and prestige. Yet, holistically, little is understood about what factors -- like social behavior, message content, and network structure - lead to more followers. Such information could help technologists design and build tools that help users grow their audiences. In this paper, we study 507 Twitter users and a half-million of their tweets over 15 months. Marrying a longitudinal approach with a negative binomial auto-regression model, we find that variables for message content, social behavior, and network structure should be given equal consideration when predicting link formations on Twitter. To our knowledge, this is the first longitudinal study of follow predictors, and the first to show that the relative contributions of social behavior and mes-sage content are just as impactful as factors related to social network structure for predicting growth of online social networks. We conclude with practical and theoretical implications for designing social media technologies."}
{"_id": "f3c1a64a3288b819cb46c2b0c479f6e99707bf62", "title": "A service orchestration architecture for Fog-enabled infrastructures", "text": "The development of Fog Computing technology is crucial to address the challenges to come with the mass adoption of Internet Of Things technology, where the generation of data tends to grow at an unprecedented pace. The technology brings computing power to the surrounds of devices, to offer local processing, filtering, storage and analysis of data and control over actuators. Orchestration is a requirement of Fog Computing technology to deliver services, based on the composition of microservices. It must take into consideration the heterogeneity of the IoT environment and device's capabilities and constraints. This heterogeneity requires a different approach for orchestration, be it regarding infrastructure management, node selection and/or service placement. Orchestrations shall be manually or automatically started through event triggers. Also, the Orchestrator must be flexible enough to work in a centralized or distributed fashion. Orchestration is still a hot topic and can be seen in different areas, especially in the Service Oriented Architectures, hardware virtualization, in the Cloud, and in Network Virtualization Function. However, the architecture of these solutions is not enough to handle Fog Requirements, specially Fog's heterogeneity, and dynamics. In this paper, we propose an architecture for Orchestration for the Fog Computing environment. We developed a prototype to prof some concepts. We discuss in this paper the implementation, and the tools chose, and their roles. We end the paper with a discussion on performance indicators and future direction on the evaluation of non-functional aspects of the Architecture."}
{"_id": "41ca9fa673a3693f9e0f78c60925959b7c926816", "title": "A Better Way to Pretrain Deep Boltzmann Machines", "text": "We describe how the pretraining algorithm for Deep Boltzmann Machines (DBMs) is related to the pretraining algorithm for Deep Belief Networks and we show that under certain conditions, the pretraining procedure improves the variational lower bound of a two-hidden-layer DBM. Based on this analysis, we develop a different method of pretraining DBMs that distributes the modelling work more evenly over the hidden layers. Our results on the MNIST and NORB datasets demonstrate that the new pretraining algorithm allows us to learn better generative models."}
{"_id": "4fb40940a0814125c72ef7dfc18390f64521efa4", "title": "Age Estimation in Short Speech Utterances Based on LSTM Recurrent Neural Networks", "text": "Age estimation from speech has recently received increased interest as it is useful for many applications such as user-profiling, targeted marketing, or personalized call-routing. This kind of applications need to quickly estimate the age of the speaker and might greatly benefit from real-time capabilities. Long short-term memory (LSTM) recurrent neural networks (RNN) have shown to outperform state-of-the-art approaches in related speech-based tasks, such as language identification or voice activity detection, especially when an accurate real-time response is required. In this paper, we propose a novel age estimation system based on LSTM-RNNs. This system is able to deal with short utterances (from 3 to 10 s) and it can be easily deployed in a real-time architecture. The proposed system has been tested and compared with a state-of-the-art i-vector approach using data from NIST speaker recognition evaluation 2008 and 2010 data sets. Experiments on short duration utterances show a relative improvement up to 28% in terms of mean absolute error of this new approach over the baseline system."}
{"_id": "26a07743cc9e3864a85b1c4044202ab5103cc206", "title": "Do hospital fall prevention programs work? A systematic review.", "text": "OBJECTIVES\nTo analyze published hospital fall prevention programs to determine whether there is any effect on fall rates. To review the methodological quality of those programs and the range of interventions used. To provide directions for further research.\n\n\nDESIGN\nSystematic review of published hospital fall prevention programs. Meta-analysis.\n\n\nMETHODS\nKeyword searches of Medline, CINAHL, monographs, and secondary references. All papers were included that described fall rates before and during intervention. Risk ratios and 95% Confidence Intervals (95% CI) were estimated and random effects meta-analysis employed. Begg's test was applied to detect possible publication bias. Separate meta-analysis regressions were performed to determine whether individual components of multifaceted interventions were effective.\n\n\nRESULTS\nA total of 21 papers met the criteria (18 from North America), although only 10 contained sufficient data to allow calculation of confidence intervals. A rate ratio of <1 indicates a reduction in the fall rate, resulting from an intervention. Three were randomized controlled trials (pooled rate ratio 1.0 (CI 0.60, 1.68)), seven prospective studies with historical control (0.76 (CI 0.65, 0.88)). Pooled effect rate ratio from these 10 studies was 0.79 (CI 0.69, 0.89). The remaining 11 studies were prospective studies with historical control describing fall rates only. Individual components of interventions showed no significant benefit.\n\n\nDISCUSSION\nThe pooled effect of about 25% reduction in the fall rate may be a result of intervention but may also be biased by studies that used historical controls not allowing for historical trends in the fall rate before and during the intervention. The randomized controlled trials apparent lack of effect might be due to a change in practice when patients and controls were in the same unit at the same time during a study. Studies did not analyze compliance with the intervention or opportunity costs resulting from the intervention. Research and clinical programs in hospital fall prevention should pay more attention to study design and the nature of interventions."}
{"_id": "7956f57e3a3e7cc5c2eecc1365b557df744e68ad", "title": "Asymptomatic vulvar pigmentation.", "text": "A 29-year-old woman presented with a 3-month history of a macular lesion on the vaginal introit and labia minor, which was causing concern because of its crescent-shaped enlargement. There was no itching or vaginal discharge. The patient\u2019s medical history was unremarkable, particularly for dermatological diseases such as lichen planus or other inflammatory disorders. There was no family history of skin diseases or skin cancer, including melanoma. On physical examination, a dark-brown macule with irregular borders, measuring around 20 \u00b7 30 mm, was seen, localized to the vaginal introit and labia minora (Fig. 1). On palpation, the lesion was flat, having the contour of the normal vaginal mucosa, and was not indurated or tender. The patient had no oral mucosal, perioral or facial hyperpigmentation, and the remaining physical examination was normal. Dermatoscopy was performed, revealing diffuse pigmentation with a parallel pattern of partially linear and partially curvilinear brown streaks across the lesion."}
{"_id": "7072470b0a0654709107c8c8b335dbbacd9b3a8c", "title": "Inadvertent Harvest of the Median Nerve Instead of the Palmaris Longus Tendon.", "text": "BACKGROUND\nThe palmaris longus tendon is frequently used as a tendon graft or ligament replacement. In rare instances the median nerve has been inadvertently harvested instead of the palmaris longus for use as a tendon.\n\n\nMETHODS\nNineteen cases in which the median nerve had been mistakenly harvested instead of the palmaris longus tendon were collected from members of the American Society for Surgery of the Hand (ASSH) Listserve. Surgeons involved in the subsequent care of the subject who had had an inadvertent harvest were contacted or the chart was reviewed. The reason for the initial procedure, the skill level of the primary surgeon, and when the inadvertent harvest was recognized were documented. When possible, the method of harvest and subsequent treatment were also documented.\n\n\nRESULTS\nThe most common initial procedure was a reconstruction of the elbow ulnar collateral ligament, followed by basal joint arthroplasty, tendon reconstruction, and reconstruction of the ulnar collateral ligament of the thumb metacarpophalangeal joint. Only 7 of the inadvertent harvests were recognized intraoperatively; in the remaining 12 cases the nerve was used as a tendon graft. The sensory loss was not recognized as being due to the inadvertent harvest until the first postoperative visit (2 subjects), 3 to 4 weeks (2 subjects), 2 to 3 months (2 subjects), 5 to 7 months (2 subjects), 1 year (1 subject), 3 years (1 subject), or 10 years (1 subject). Preoperative clinical identification of the presence or absence of a palmaris longus did not necessarily prevent an inadvertent harvest.\n\n\nCONCLUSIONS\nKnowledge of the relevant anatomy is crucial to avoiding inadvertent harvest of the median nerve instead of the palmaris longus tendon."}
{"_id": "ae1f9609c250b38181f369d0215fad3a168b725d", "title": "Bayesian Point Cloud Reconstruction", "text": "In this paper, we propose a novel surface reconstruction technique based on Bayesian statistics: The measurement process as well as prior assumptions on the measured objects are modeled as probability distributions and Bayes\u2019 rule is used to infer a reconstruction of maximum probability. The key idea of this paper is to define both measurements and reconstructions as point clouds and describe all statistical assumptions in terms of this finite dimensional representation. This yields a discretization of the problem that can be solved using numerical optimization techniques. The resulting algorithm reconstructs both topology and geometry in form of a well-sampled point cloud with noise removed. In a final step, this representation is then converted into a triangle mesh. The proposed approach is conceptually simple and easy to extend. We apply the approach to reconstruct piecewise-smooth surfaces with sharp features and examine the performance of the algorithm on different synthetic and real-world"}
{"_id": "698f61abb89ced1f5d724a90ec0153aad624ff20", "title": "The impact of website content dimension and e-trust on e-marketing effectiveness: The case of Iranian commercial saffron corporations", "text": "By considering the problems that commercial saffron companies have faced in international markets, the aim of this study is to investigate the impact of website content, including informational and design dimensions, on the effectiveness of e-marketing and e-trust as mediator variables. These aspects are examined with reference to sales and marketing division managers in a sample of 100 commercial saffron corporations in the Khorasan province. The findings support the ideas that website content has an effect on e-marketing and e-trust and that e-trust plays a mediating role in the relationship between etrust and e-marketing effectiveness. 2013 Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +98 5118811240; fax: +98 5118811243. E-mail addresses: r-nia@ferdowsi.um.ac.ir, frahimnia@hotmail.com (F. Rahimnia)."}
{"_id": "02b465f012f7e1649d48f1cc59058006f1354887", "title": "Editorial: special issue on web content mining", "text": "With the phenomenal growth of the Web, there is an everincreasing volume of data and information published in numerous Web pages. The research in Web mining aims to develop new techniques to effectively extract and mine useful knowledge or information from these Web pages [8]. Due to the heterogeneity and lack of structure of Web data, automated discovery of targeted or unexpected knowledge/information is a challenging task. It calls for novel methods that draw from a wide range of fields spanning data mining, machine learning, natural language processing, statistics, databases, and information retrieval. In the past few years, there was a rapid expansion of activities in the Web mining field, which consists of Web usage mining, Web structure mining, and Web content mining. Web usage mining refers to the discovery of user access patterns from Web usage logs. Web structure mining tries to discover useful knowledge from the structure of hyperlinks. Web content mining aims to extract/mine useful information or knowledge from Web page contents. For this special issue, we focus on Web content mining."}
{"_id": "e23c9687ba0bf15940af76b7fa0e0c1af9d3156e", "title": "An analysis of sources of risk in the consumer electronics industry", "text": "The consumer electronics industry is a $ 240 billion global industry with a small number of highly competitive global players. We describe many of the risks associated with any global supply chain in this industry. As illustration, we also list steps that Samsung Electronics and its subsidiary, Samsung Electronics UK, have taken to mitigate these risks. Our description of the risks and illustration of mitigation efforts provides the backdrop to identify areas of future research."}
{"_id": "2f52cbef51a6a8a2a74119ad821526f9e0b57b39", "title": "SAP HANA database: data management for modern business applications", "text": "The SAP HANA database is positioned as the core of the SAP HANA Appliance to support complex business analytical processes in combination with transactionally consistent operational workloads. Within this paper, we outline the basic characteristics of the SAP HANA database, emphasizing the distinctive features that differentiate the SAP HANA database from other classical relational database management systems. On the technical side, the SAP HANA database consists of multiple data processing engines with a distributed query processing environment to provide the full spectrum of data processing -- from classical relational data supporting both row- and column-oriented physical representations in a hybrid engine, to graph and text processing for semi- and unstructured data management within the same system.\n From a more application-oriented perspective, we outline the specific support provided by the SAP HANA database of multiple domain-specific languages with a built-in set of natively implemented business functions. SQL -- as the lingua franca for relational database systems -- can no longer be considered to meet all requirements of modern applications, which demand the tight interaction with the data management layer. Therefore, the SAP HANA database permits the exchange of application semantics with the underlying data management platform that can be exploited to increase query expressiveness and to reduce the number of individual application-to-database round trips."}
{"_id": "3a011bd31f1de749210b2b188ffb752d9858c6a6", "title": "Graph cube: on warehousing and OLAP multidimensional networks", "text": "We consider extending decision support facilities toward large sophisticated networks, upon which multidimensional attributes are associated with network entities, thereby forming the so-called multidimensional networks. Data warehouses and OLAP (Online Analytical Processing) technology have proven to be effective tools for decision support on relational data. However, they are not well-equipped to handle the new yet important multidimensional networks. In this paper, we introduce Graph Cube, a new data warehousing model that supports OLAP queries effectively on large multidimensional networks. By taking account of both attribute aggregation and structure summarization of the networks, Graph Cube goes beyond the traditional data cube model involved solely with numeric value based group-by's, thus resulting in a more insightful and structure-enriched aggregate network within every possible multidimensional space. Besides traditional cuboid queries, a new class of OLAP queries, crossboid, is introduced that is uniquely useful in multidimensional networks and has not been studied before. We implement Graph Cube by combining special characteristics of multidimensional networks with the existing well-studied data cube techniques. We perform extensive experimental studies on a series of real world data sets and Graph Cube is shown to be a powerful and efficient tool for decision support on large multidimensional networks."}
{"_id": "4b573416043cf9cff42cbb7b753993c907a2be4a", "title": "The Graph Story of the SAP HANA Database", "text": "Many traditional and new business applications work with inherently graphstructured data and therefore benefit from graph abstractions and operations provided in the data management layer. The property graph data model not only offers schema flexibility but also permits managing and processing data and metadata jointly. By having typical graph operations implemented directly in the database engine and exposing them both in the form of an intuitive programming interface and a declarative language, complex business application logic can be expressed more easily and executed very efficiently. In this paper we describe our ongoing work to extend the SAP HANA database with built-in graph data support. We see this as a next step on the way to provide an efficient and intuitive data management platform for modern business applications with SAP HANA."}
{"_id": "c2a1201cc71b7a98564dbbfe40efba2675107ba4", "title": "miR-PREFeR: an accurate, fast and easy-to-use plant miRNA prediction tool using small RNA-Seq data", "text": "SUMMARY\nPlant microRNA prediction tools that use small RNA-sequencing data are emerging quickly. These existing tools have at least one of the following problems: (i) high false-positive rate; (ii) long running time; (iii) work only for genomes in their databases; (iv) hard to install or use. We developed miR-PREFeR (miRNA PREdiction From small RNA-Seq data), which uses expression patterns of miRNA and follows the criteria for plant microRNA annotation to accurately predict plant miRNAs from one or more small RNA-Seq data samples of the same species. We tested miR-PREFeR on several plant species. The results show that miR-PREFeR is sensitive, accurate, fast and has low-memory footprint.\n\n\nAVAILABILITY AND IMPLEMENTATION\nhttps://github.com/hangelwen/miR-PREFeR"}
{"_id": "09f47d7b8508f06ad808a965a9ad01bc0fe08dec", "title": "45\u00b0 Linearly Polarized Resonant Slot Array Antenna Based on Substrate Integrated Waveguide", "text": "In this paper, a new configuration of integrated 45deg linearly polarized slot array antenna based on substrate integrated waveguide (SIW) technology is proposed, designed and fabricated with normal printed circuit board (PCB) process. The antenna consists of eight 45deg inclined slots and the measured gain is higher than 10.27 dBi at X-band. The measured radiation patterns and the return loss are in agreement with the simulated results."}
{"_id": "72df769d2ab1c415c2a0ac900734ef18eb773b11", "title": "THE X-BAR THEORY OF PHRASE STRUCTURE", "text": "X-bar theory is widely regarded as a substantive theory of phrase structure properties in natural languages. In this paper we will demonstrate that a formalization of its content reveals very little substance in its claims. We state and discuss six conditions that encapsulate the claims of X-bar theory: Lexicality \u2014 each nonterminal is a projection of a preterminal; Succession \u2014 each X dominates an X for all n \u2265 0; Uniformity \u2014 all maximal projections have the same bar-level; Maximality \u2014 all non-heads are maximal projections; Centrality \u2014 the start symbol is a maximal projection; and Optionality \u2014 all and only non-heads are optional. We then consider recent proposals to \u2018eliminate\u2019 base components from transformational grammars and to reinterpret X-bar theory as a set of universal constraints holding for all languages at D-structure, arguing that this strategy fails. We show that, as constraints on phrase structure rule systems, the X-bar conditions have hardly any effect on the descriptive power of grammars, and that the principles with the most chance of making some descriptive difference are the least adhered to in practice. Finally, we reconstruct X-bar theory in a way that makes no reference to the notion of bar-level but instead makes the notion \u2018head of\u2019 the central one.\u2217"}
{"_id": "1e096fa5014c146d858b4b85517e94a36df7a9c8", "title": "Interactive technical illustration", "text": "A rendering is an abstraction that favors, preserves, or even emphasizes some qualities while sacrificing, suppressing, or omitting other characteristics that are not the focus of attention. Most computer graphics rendering activities have been concerned with photorealism, i.e., trying to emulate an image that looks like a highquality photograph. This laudable goal is useful and appropriate in many applications, but not in technical illustration where elucidation of structure and technical information is the preeminent motivation. This calls for a different kind of abstraction in which technical communication is central, but art and appearance are still essential instruments toward this end. Work that has been done on computer generated technical illustrations has focused on static images, and has not included all of the techniques used to hand draw technical illustrations. A paradigm for the display of technical illustrations in a dynamic environment is presented. This display environment includes all of the benefits of computer generated technical illustrations, such as a clearer picture of shape, structure, and material composition than traditional computer graphics methods. It also includes the three-dimensional interactive strength of modem display systems. This is accomplished by using new algorithms for real time drawing of silhouette curves, algorithms which solve a number of the problems inherent in previous methods. We incorporate current non-photorealistic lighting methods, and augment them with new shadowing algorithms based on accepted techniques used by artists and studies carried out in human perception. This paper, all of the images, and a mpeg video clip are available at http://www.cs.utah.edu/~bgooch/ITI CR Categories: I.3.0 [Computer Graphics]: General; I.3.6 [Computer Graphics]: Methodology and Techniques."}
{"_id": "93a4caae19b26f38b17d421c7545e32ca5c5b11a", "title": "A Patch-Based Saliency Detection Method for Assessing the Visual Privacy Levels of Objects in Photos", "text": "Photo privacy protection has recently received increasing attention from the public. However, the overprotection of photo privacy by hiding too much visual information can make photos meaningless. To avoid this, visual information with different degrees of privacy sensitivity can be filtered out using various image-processing techniques. Objects in a photo usually contain visual information that can potentially reveal private information; this potential depends on both the visual saliency of the objects and on the specific categories to which the objects belong. In this paper, we aim to quantitatively evaluate the influence of visual saliency information on privacy and objectively evaluate the levels of visual privacy that objects contain. Meeting this objective faces two challenges: 1) determining a method of effectively detecting generic objects in a photo for the extraction of saliency information and 2) determining a scientific method for assessing the visual private information contained in objects. To cope with these challenges, we first propose a hierarchical saliency detection method that combines a patch-based saliency detection strategy with an objectness estimation strategy to effectively locate salient objects and obtain the saliency information of each object. The proposed method results in a small set of class-independent locations with high quality and a mean average best overlap score of 0.627 at 1150 locations, which is superior to the score of other saliency detection methods. Second, we build a computational privacy assessment system to scientifically calculate and rank the privacy risks of objects in a photo by creating an improved risk matrix and using the Borda count method. The proposed computational privacy assessment method matches human evaluations to a relatively high degree."}
{"_id": "3a4202a16cf9137125ce054a9903d0f61767a277", "title": "Accelerating Apache Hive with MPI for Data Warehouse Systems", "text": "Data warehouse systems, like Apache Hive, have been widely used in the distributed computing field. However, current generation data warehouse systems have not fully embraced High Performance Computing (HPC) technologies even though the trend of converging Big Data and HPC is emerging. For example, in traditional HPC field, Message Passing Interface (MPI) libraries have been optimized for HPC applications during last decades to deliver ultra-high data movement performance. Recent studies, like DataMPI, are extending MPI for Big Data applications to bridge these two fields. This trend motivates us to explore whether MPI can benefit data warehouse systems, such as Apache Hive. In this paper, we propose a novel design to accelerate Apache Hive by utilizing DataMPI. We further optimize the DataMPI engine by introducing enhanced non-blocking communication and parallelism mechanisms for typical Hive workloads based on their communication characteristics. Our design can fully and transparently support Hive workloads like Intel HiBench and TPC-H with high productivity. Performance evaluation with Intel HiBench shows that with the help of light-weight DataMPI library design, efficient job start up and data movement mechanisms, Hive on DataMPI performs 30% faster than Hive on Hadoop averagely. And the experiments on TPC-H with ORCFile show that the performance of Hive on DataMPI can improve 32% averagely and 53% at most more than that of Hive on Hadoop. To the best of our knowledge, Hive on DataMPI is the first attempt to propose a general design for fully supporting and accelerating data warehouse systems with MPI."}
{"_id": "191314cc722196bf4dc0dbdbdb1cbec4bb67a2a9", "title": "Adaptive figure-ground classification", "text": "We propose an adaptive figure-ground classification algorithm to automatically extract a foreground region using a user-provided bounding-box. The image is first over-segmented with an adaptive mean-shift algorithm, from which background and foreground priors are estimated. The remaining patches are iteratively assigned based on their distances to the priors, with the foreground prior being updated online. A large set of candidate segmentations are obtained by changing the initial foreground prior. The best candidate is determined by a score function that evaluates the segmentation quality. Rather than using a single distance function or score function, we generate multiple hypothesis segmentations from different combinations of distance measures and score functions. The final segmentation is then automatically obtained with a voting or weighted combination scheme from the multiple hypotheses. Experiments indicate that our method performs at or above the current state-of-the-art on several datasets, with particular success on challenging scenes that contain irregular or multiple-connected foregrounds. In addition, this improvement in accuracy is achieved with low computational cost."}
{"_id": "127e802b3ae9096447201d3ecc53dcad95190083", "title": "Salamandra Robotica II: An Amphibious Robot to Study Salamander-Like Swimming and Walking Gaits", "text": "In this paper, we present Salamandra robotica II: an amphibious salamander robot that is able to walk and swim. The robot has four legs and an actuated spine that allow it to perform anguilliform swimming in water and walking on the ground. The paper first presents the new robot hardware design, which is an improved version of Salamandra robotica I. We then address several questions related to body\u2013limb coordination in robots and animals that have a sprawling posture like salamanders and lizards, as opposed to the erect posture of mammals (e.g., in cats and dogs). In particular, we investigate how the speed of locomotion and curvature of turning motions depend on various gait parameters such as the body\u2013limb coordination, the type of body undulation (offset, amplitude, and phase lag of body oscillations), and the frequency. Comparisons with animal data are presented, and our results show striking similarities with the gaits observed with real salamanders, in particular concerning the timing of the body\u2019s and limbs\u2019 movements and the relative speed of locomotion."}
{"_id": "2d2e53746269594ba3c6f677f6f2f19d6bf03dc5", "title": "Extracting Relations between Non-Standard Entities using Distant Supervision and Imitation Learning", "text": "Distantly supervised approaches have become popular in recent years as they allow training relation extractors without textbound annotation, using instead known relations from a knowledge base and a large textual corpus from an appropriate domain. While state of the art distant supervision approaches use off-theshelf named entity recognition and classification (NERC) systems to identify relation arguments, discrepancies in domain or genre between the data used for NERC training and the intended domain for the relation extractor can lead to low performance. This is particularly problematic for \u201cnon-standard\u201d named entities such as album which would fall into the MISC category. We propose to ameliorate this issue by jointly training the named entity classifier and the relation extractor using imitation learning which reduces structured prediction learning to classification learning. We further experiment with Web features different features and compare against using two off-the-shelf supervised NERC systems, Stanford NER and FIGER, for named entity classification. Our experiments show that imitation learning improves average precision by 4 points over an one-stage classification model, while removing Web features results in a 6 points reduction. Compared to using FIGER and Stanford NER, average precision is 10 points and 19 points higher with our imitation learning approach."}
{"_id": "6c88df76007b3fe839e78d7479d8cf739affe272", "title": "Growth Promotion of indigenous Scenedesmus dimorphus strain under different conditions using stirred tank", "text": "The effects of culture conditions at different temperature (15-25-35\u00b0C) and nitrate concentration (0.1-0.25-0.4g/L) on biomass production and growth rate of scenedesmus dimorphus were determined. The results shown that the dry biomass weight (0.153g/L) at 15\u00b0C was the lowest. No significant difference was observed between the dry biomasses at 25\u00b0C (0.289g/L) and 35\u00b0C (0.240g/L). While the lowest growth rate was at 15\u00b0C(0.157d), the highestone was at 25\u00b0C (0.229d) and 35\u00b0C (0.195d) with insignificant deference, and The nitrate concentration had no effect on both of biomass production and growth rate."}
{"_id": "a684fc37c79a3276af12a21c1af1ebd8d47f2d6a", "title": "linalg: Matrix Computations in Apache Spark", "text": "We describe matrix computations available in the cluster programming framework, Apache Spark. Out of the box, Spark comes with the mllib.linalg library, which provides abstractions and implementations for distributed matrices. Using these abstractions, we highlight the computations that were more challenging to distribute. When translating single-node algorithms to run on a distributed cluster, we observe that often a simple idea is enough: separating matrix operations from vector operations and shipping the matrix operations to be ran on the cluster, while keeping vector operations local to the driver. In the case of the Singular Value Decomposition, by taking this idea to an extreme, we are able to exploit the computational power of a cluster, while running code written decades ago for a single core. We conclude with a comprehensive set of benchmarks for hardware accelerated matrix computations from the JVM, which is interesting in its own right, as many cluster programming frameworks use the JVM."}
{"_id": "63fb426900361285bdb681a188cbed340ac119a8", "title": "Outlier-based Health Insurance Fraud Detection for U.S. Medicaid Data", "text": "Fraud, waste, and abuse in the U.S. healthcare system are estimated at $700 billion annually. Predictive analytics offers government and private payers the opportunity to identify and prevent or recover such billings. This paper proposes a data-driven method for fraud detection based on comparative research, fraud cases, and literature review. Unsupervised data mining techniques such as outlier detection are suggested as effective predictors for fraud. Based on a multi-dimensional data model developed for Medicaid claim data, specific metrics for dental providers were developed and evaluated in analytical experiments using outlier detection applied to claim, provider, and patient data in a state Medicaid program. The proposed methodology enabled successful identification of fraudulent activity, with 12 of the top 17 suspicious providers (71%) referred to officials for investigation with clearly anomalous and inappropriate activity. Future research is underway to extend the method to other specialties and enable its use by fraud analysts."}
{"_id": "9b0a376009cdf9b828b982760d4b85dbbee70c4a", "title": "FOR LEAP MOTION CONTROLLER", "text": "This thesis studies the new possibilities to gesture interfaces that emerged with a Leap Motion sensor. The Leap Motion is an innovative, 3D motion capturing device designed especially for hands and fingers tracking with precision up to 0.01mm. The outcome of the thesis is the LeapGesture library dedicated to the developers for Leap Motion Controller that contains algorithms allowing to learn and recognize gestures. The authors examined the data provided by the sensor in context of recognition of hand poses (static gestures), hand movements (dynamic gestures) and in task of a finger recognition. The static gestures are recognized using the Support Vector Machine (SVM) with median filtering an input data and using the correspondences between consecutive recognitions. The thesis contains evaluation of different feature sets, which have a significant impact on the recognition rate. The chosen feature set allowed to recognize a set of five gestures with 99% accuracy and a set of ten gestures with 85%. The dynamic gestures (movements of a hand and fingers) are recognized with the Hidden Markov Models (HMM). Recognition with HMMs allowed to achieve accuracy of 80% for a set containing six classes of dynamic gestures. Finger recognition algorithms proposed in this thesis works with 93% accuracy on a recorded dataset. The LeapGesture library contains presented approaches using a C++ interface, that can be easily used in any application for many purposes."}
{"_id": "9145a88fb828b7914f4e47348f117d120ee6be8f", "title": "A Hybrid Ant Colony Optimization Algorithm for Path Planning of Robot in Dynamic Environment", "text": "Ant colony optimization and artificial potential field were used respectively as global path planning and local path planning methods in this paper. Some modifications were made to accommodate ant colony optimization to path planning. Pheromone generated by ant colony optimization was also utilized to prevent artificial potential field from getting local minimum. Simulation results showed that the hybrid algorithm could satisfy the real-time demand. The comparison between ant colony optimization and genetic algorithm was also made in this paper."}
{"_id": "2b8e846cc1f7ce04f1f702168e6d9ae2e74c32d6", "title": "Co-designing accelerators and SoC interfaces using gem5-Aladdin", "text": "Increasing demand for power-efficient, high-performance computing has spurred a growing number and diversity of hardware accelerators in mobile and server Systems on Chip (SoCs). This paper makes the case that the co-design of the accelerator microarchitecture with the system in which it belongs is critical to balanced, efficient accelerator microarchitectures. We find that data movement and coherence management for accelerators are significant yet often unaccounted components of total accelerator runtime, resulting in misleading performance predictions and inefficient accelerator designs. To explore the design space of accelerator-system co-design, we develop gem5-Aladdin, an SoC simulator that captures dynamic interactions between accelerators and the SoC platform, and validate it to within 6% against real hardware. Our co-design studies show that the optimal energy-delay-product (EDP) of an accelerator microarchitecture can improve by up to 7.4\u00d7 when system-level effects are considered compared to optimizing accelerators in isolation."}
{"_id": "0d0b7fc76cdde25c24c7a7068f4fc9758d9db5af", "title": "Compatibility Family Learning for Item Recommendation and Generation", "text": "Compatibility between items, such as clothes and shoes, is a major factor among customer\u2019s purchasing decisions. However, learning \u201ccompatibility\u201d is challenging due to (1) broader notions of compatibility than those of similarity, (2) the asymmetric nature of compatibility, and (3) only a small set of compatible and incompatible items are observed. We propose an end-to-end trainable system to embed each item into a latent vector and project a query item into K compatible prototypes in the same space. These prototypes reflect the broad notions of compatibility. We refer to both the embedding and prototypes as \u201cCompatibility Family\u201d. In our learned space, we introduce a novel Projected Compatibility Distance (PCD) function which is differentiable and ensures diversity by aiming for at least one prototype to be close to a compatible item, whereas none of the prototypes are close to an incompatible item. We evaluate our system on a toy dataset, two Amazon product datasets, and Polyvore outfit dataset. Our method consistently achieves state-of-the-art performance. Finally, we show that we can visualize the candidate compatible prototypes using a Metric-regularized Conditional Generative Adversarial Network (MrCGAN), where the input is a projected prototype and the output is a generated image of a compatible item. We ask human evaluators to judge the relative compatibility between our generated images and images generated by CGANs conditioned directly on query items. Our generated images are significantly preferred, with roughly twice the number of votes as others."}
{"_id": "1343d58d5e39c3a06a5bde6289a94ee6f4a17773", "title": "ON SUPERVISED AND SEMI-SUPERVISED k-NEAREST NEIGHBOR ALGORITHMS", "text": "The k-nearest neighbor (kNN) is one of the simplest classification methods used in machine learning. Since the main component of kNN is a distance metric, kernelization of kNN is possible. In this paper kNN and semi-supervised kNN algorithms are empirically compared on two data sets (the USPS data set and a subset of the Reuters-21578 text categorization corpus). We use a soft version of the kNN algorithm to handle multi-label classification settings. Semi-supervision is performed by using data-dependent kernels."}
{"_id": "48fb21c5ae8678b24f4d7c95333f90fa5876cfd6", "title": "GaN HEMT based Doherty amplifier for 3.5-GHz WiMAX Applications", "text": "We have implemented a Doherty amplifier for 3.5-GHz World Interoperability for Microwave Access (WiMAX) applications using Eudyna 90-W (P3 dB) Gallium-Nitride (GaN) High Electron Mobility Transistor (HEMT) because of the poor efficiency of a standard class AB amplifier when the linearity performance is good. The load modulation performance of the GaN HEMT device for the Doherty operation is rather moderate but workable. The linearity is improved using the in-band error cancellation technique of the Doherty amplifier. The implemented Doherty amplifier has been designed at an average output power of 43 dBm, backed-off about 8 dB from the 51 dBm (P3 dB). For WiMAX signal with 28 MHz signal bandwidth, the measured drain efficiency of the amplifier is 27.8%, and the measured Relative Constellation Error (RCE) is -33.17 dB, while those of the comparable class AB amplifier are 19.42% and -24.26 dB, respectively, at the same average output power level."}
{"_id": "c1eee3a61fdadcba86564035c3aa7bdd3fe1e100", "title": "MICS: an efficient content space representation model for publish/subscribe systems", "text": "One of the main challenges faced by content-based publish/subscribe systems is handling large amount of dynamic subscriptions and publications in a multidimensional content space. To reduce subscription forwarding load and speed up content matching, subscription covering, subsumption and merging techniques have been proposed. In this paper we propose MICS, Multidimensional Indexing for Content Space that provides an efficient representation and processing model for large number of subscriptions and publications. MICS creates a one dimensional representation for publications and subscriptions using Hilbert space filling curve. Based on this representation, we propose novel content matching and subscription management (covering, subsumption and merging) algorithms. Our experimental evaluation indicates that the proposed approach significantly speeds up subscription management operations compared to the naive linear approach."}
{"_id": "db8aa4263d89b459ea1380bf8ec8f272880fe2f4", "title": "Named Entity Recognition and Compositional Morphology for Word Representations in Chinese", "text": "We approach the task of named entity recognition (NER) for Chinese by representing each entity with a composition of its traits: the token-entity, its characters, and its characters\u2019 main radicals. Character and radical-level information for each entity are included to provide additional relationships that might not be strictly captured within a token-entity\u2019s word embedding during training. We learn using neural networks that are some combination of the following traits: unidirectional or bidirectional; single or multi-layer; simple, gated recurrent unit (GRU), or long short term memory (LSTM) celled. We achieve a maximum not-O token-level F1 score of 76 and entity-level F1 of 70."}
{"_id": "25026b61bd339eb5e099fb6825438e68d05ddca6", "title": "The extraction of neural strategies from the surface EMG.", "text": "This brief review examines some of the methods used to infer central control strategies from surface electromyogram (EMG) recordings. Among the many uses of the surface EMG in studying the neural control of movement, the review critically evaluates only some of the applications. The focus is on the relations between global features of the surface EMG and the underlying physiological processes. Because direct measurements of motor unit activation are not available and many factors can influence the signal, these relations are frequently misinterpreted. These errors are compounded by the counterintuitive effects that some system parameters can have on the EMG signal. The phenomenon of crosstalk is used as an example of these problems. The review describes the limitations of techniques used to infer the level of muscle activation, the type of motor unit recruited, the upper limit of motor unit recruitment, the average discharge rate, and the degree of synchronization between motor units. Although the global surface EMG is a useful measure of muscle activation and assessment, there are limits to the information that can be extracted from this signal."}
{"_id": "7f5326ad8685a006ee19f37230d9f4cf0eacf89e", "title": "Device-to-device communication in 5G cellular networks: challenges, solutions, and future directions", "text": "In a conventional cellular system, devices are not allowed to directly communicate with each other in the licensed cellular bandwidth and all communications take place through the base stations. In this article, we envision a two-tier cellular network that involves a macrocell tier (i.e., BS-to-device communications) and a device tier (i.e., device-to-device communications). Device terminal relaying makes it possible for devices in a network to function as transmission relays for each other and realize a massive ad hoc mesh network. This is obviously a dramatic departure from the conventional cellular architecture and brings unique technical challenges. In such a two-tier cellular system, since the user data is routed through other users' devices, security must be maintained for privacy. To ensure minimal impact on the performance of existing macrocell BSs, the two-tier network needs to be designed with smart interference management strategies and appropriate resource allocation schemes. Furthermore, novel pricing models should be designed to tempt devices to participate in this type of communication. Our article provides an overview of these major challenges in two-tier networks and proposes some pricing schemes for different types of device relaying."}
{"_id": "611de477b62651bb84d5b6e36c25bc088f780d24", "title": "Device-to-device communication as an underlay to LTE-advanced networks", "text": "In this article device-to-device (D2D) communication underlaying a 3GPP LTE-Advanced cellular network is studied as an enabler of local services with limited interference impact on the primary cellular network. The approach of the study is a tight integration of D2D communication into an LTE-Advanced network. In particular, we propose mechanisms for D2D communication session setup and management involving procedures in the LTE System Architecture Evolution. Moreover, we present numerical results based on system simulations in an interference limited local area scenario. Our results show that D2D communication can increase the total throughput observed in the cell area."}
{"_id": "ca4f5d20ab55de912e25e5fc2e87b31d614da326", "title": "Design aspects of network assisted device-to-device communications", "text": "Device-to-device (D2D) communications underlaying a cellular infrastructure has been proposed as a means of taking advantage of the physical proximity of communicating devices, increasing resource utilization, and improving cellular coverage. Relative to the traditional cellular methods, there is a need to design new peer discovery methods, physical layer procedures, and radio resource management algorithms that help realize the potential advantages of D2D communications. In this article we use the 3GPP Long Term Evolution system as a baseline for D2D design, review some of the key design challenges, and propose solution approaches that allow cellular devices and D2D pairs to share spectrum resources and thereby increase the spectrum and energy efficiency of traditional cellular networks. Simulation results illustrate the viability of the proposed design."}
{"_id": "0fefc80a2ba5a93a115f07e38556efe16c881a41", "title": "Augmenting geospatial data provenance through metadata tracking in geospatial service chaining", "text": "In a service-oriented environment, heterogeneous data from distributed data archiving centers and various geo-processing services are chained together dynamically to generate on-demand data products. Creating an executable service chain requires detailed specification of metadata for data sets and service instances. Using metadata tracking, semantics-enabled metadata are generated and propagated through a service chain. This metadata can be employed to validate a service chain, e.g. whether metadata preconditions on the input data of services can be satisfied. This paper explores how this metadata can be further exploited to augment geospatial data provenance, i.e., how a geospatial data product is derived. Provenance information is automatically captured during the metadata tracking process. Semantic Web technologies, including OWL and SPARQL, are used for representation and query of this provenance information. The approach can not only contribute to the automatic recording of geospatial data provenance, but also provide a more informed understanding of provenance information using Semantic Web technologies. & 2009 Elsevier Ltd. All rights reserved."}
{"_id": "f5e43beaba18013d09ccd8086fcea3d07576461f", "title": "Accelerometer Based Joint Step Detection and Adaptive Step Length Estimation Algorithm Using Handheld Devices", "text": "\uf020\u2014The pedestrian inertial navigation systems are generally based on Pedestrian Dead Reckoning (PDR) algorithm. Considering the physiological characteristics of pedestrian movement, we use the cyclical characteristics and statistics of acceleration waveform and features which are associated with the walking speed to estimate the stride length. Due to the randomness of the pedestrian hand-held habit, the step events cannot always be detected by using the periods of zero velocity updates (ZUPTs). Furthermore, the signal patterns of the sensor could differ significantly depending on the carrying modes and the user\u2019s hand motion. Hence, the step detection and the associated adaptive step length model using a handheld device equipped with accelerometer is required to obtain high-accurate measurements. To achieve this goal, a compositional algorithm of empirical formula and Back-propagation neural network by using handheld devices is proposed to estimate the step length, as well as achieve the accuracy of step detection higher than 98%. Furthermore, the proposed joint step detection and adaptive step length estimation algorithm can help much in the development of Pedestrian Navigation Devices (PNDs) based on the handheld inertial sensors."}
{"_id": "fb27f57d902e79562dd7c36605a0ca473e24d901", "title": "Toward the Minimal Universal Petri Net", "text": "A universal Petri net with 14 places, 42 transitions, and 218 arcs was built in the class of deterministic inhibitor Petri nets (DIPNs); it is based on the minimal Turing machine (TM) of Woods and Neary with 6 states, 4 symbols, and 23 instructions, directly simulated by a Petri net. Several techniques were developed, including bi-tag system (BTS) construction on a DIPN, special encoding of TM tape by two stacks, and concise subnets that implement arithmetic encoding operations. The simulation using the BTS has cubic time and linear space complexity, while the resulting universal net runs in exponential time and quadratic space with respect to the target net transitions' firing sequence length. The technique is applicable for simulating any TM by the Petri net."}
{"_id": "d197357fb516b9b70f36911fa4bddd44b1489dd4", "title": "Transference of kettlebell training to strength, power, and endurance.", "text": "Kettlebells are a popular implement in many strength and conditioning programs, and their benefits are touted in popular literature, books, and videos. However, clinical data on their efficacy are limited. The purpose of this study was to examine whether kettlebell training transfers strength and power to weightlifting and powerlifting exercises and improves muscular endurance. Thirty-seven subjects were assigned to an experimental (EXP, n = 23; mean age = 40.9 \u00b1 12.9 years) or a control group (CON; n = 14; mean age = 39.6 \u00b1 15.8 years), range 18-72 years. The participants were required to perform assessments including a barbell clean and jerk, barbell bench press, maximal vertical jump, and 45\u00b0 back extensions to volitional fatigue before and after a 10-week kettlebell training program. Training was structured in a group setting for 2 d\u00b7wk(-1) for 10 weeks. A repeated measures analysis of variance was conducted to determine group \u00d7 time interactions and main effects. Post hoc pairwise comparisons were conducted when appropriate. Bench press revealed a time \u00d7 group interaction and a main effect (p < 0.05). Clean and jerk and back extension demonstrated a trend toward a time \u00d7 group interaction, but it did not reach significance (p = 0.053). However, clean and jerk did reveal a main effect for time (p < 0.05). No significant findings were reported for maximal vertical jump. The results demonstrate a transfer of power and strength in response to 10 weeks of training with kettlebells. Traditional training methods may not be convenient or accessible for strength and conditioning specialists, athletes, coaches, and recreational exercisers. The current data suggest that kettlebells may be an effective alternative tool to improve performance in weightlifting and powerlifting."}
{"_id": "6bcfcc4a0af2bf2729b5bc38f500cfaab2e653f0", "title": "Facial Expression Recognition in the Wild Using Improved Dense Trajectories and Fisher Vector Encoding", "text": "Improved dense trajectory features have been successfully used in video-based action recognition problems, but their application to face processing is more challenging. In this paper, we propose a novel system that deals with the problem of emotion recognition in real-world videos, using improved dense trajectory, LGBP-TOP, and geometric features. In the proposed system, we detect the face and facial landmarks from each frame of a video using a combination of two recent approaches, and register faces by means of Procrustes analysis. The improved dense trajectory and geometric features are encoded using Fisher vectors and classification is achieved by extreme learning machines. We evaluate our method on the extended Cohn-Kanade (CK+) and EmotiW 2015 Challenge databases. We obtain state-of the-art results in both databases."}
{"_id": "f3cfb09aad7972c7e1798b0887758ebd849d0681", "title": "Skew detection of scanned document images", "text": "\u2014Skewing of the scanned image is an inevitable process and its detection is an important issue for document recognition systems. The skew of the scanned document image specifies the deviation of the text lines from the horizontal or vertical axis. This paper surveys methods to detect this skew in"}
{"_id": "16f05c1b16efb9fd8f58a47496017d3286b35857", "title": "Road Lane Detection Robust to Shadows Based on a Fuzzy System Using a Visible Light Camera Sensor", "text": "Recently, autonomous vehicles, particularly self-driving cars, have received significant attention owing to rapid advancements in sensor and computation technologies. In addition to traffic sign recognition, road lane detection is one of the most important factors used in lane departure warning systems and autonomous vehicles for maintaining the safety of semi-autonomous and fully autonomous systems. Unlike traffic signs, road lanes are easily damaged by both internal and external factors such as road quality, occlusion (traffic on the road), weather conditions, and illumination (shadows from objects such as cars, trees, and buildings). Obtaining clear road lane markings for recognition processing is a difficult challenge. Therefore, we propose a method to overcome various illumination problems, particularly severe shadows, by using fuzzy system and line segment detector algorithms to obtain better results for detecting road lanes by a visible light camera sensor. Experimental results from three open databases, Caltech dataset, Santiago Lanes dataset (SLD), and Road Marking dataset, showed that our method outperformed conventional lane detection methods."}
{"_id": "a0ab5e8c3fc5f9eb62a52b19143618677d25dccc", "title": "Energy-Efficient Hybrid Analog and Digital Precoding for MmWave MIMO Systems With Large Antenna Arrays", "text": "Millimeter wave (mmWave) MIMO will likely use hybrid analog and digital precoding, which uses a small number of RF chains to reduce the energy consumption associated with mixed signal components like analog-to-digital components not to mention baseband processing complexity. However, most hybrid precoding techniques consider a fully connected architecture requiring a large number of phase shifters, which is also energy-intensive. In this paper, we focus on the more energy-efficient hybrid precoding with subconnected architecture, and propose a successive interference cancelation (SIC)-based hybrid precoding with near-optimal performance and low complexity. Inspired by the idea of SIC for multiuser signal detection, we first propose to decompose the total achievable rate optimization problem with nonconvex constraints into a series of simple subrate optimization problems, each of which only considers one subantenna array. Then, we prove that maximizing the achievable subrate of each subantenna array is equivalent to simply seeking a precoding vector sufficiently close (in terms of Euclidean distance) to the unconstrained optimal solution. Finally, we propose a low-complexity algorithm to realize SIC-based hybrid precoding, which can avoid the need for the singular value decomposition (SVD) and matrix inversion. Complexity evaluation shows that the complexity of SIC-based hybrid precoding is only about 10% as complex as that of the recently proposed spatially sparse precoding in typical mmWave MIMO systems. Simulation results verify that SIC-based hybrid precoding is near-optimal and enjoys higher energy efficiency than the spatially sparse precoding and the fully digital precoding."}
{"_id": "83e0ac51e6fa313c0aca3d945ccbca5df8479a09", "title": "Continuous and fine-grained breathing volume monitoring from afar using wireless signals", "text": "In this work, we propose for the first time an autonomous system, called WiSpiro, that continuously monitors a person's breathing volume with high resolution during sleep from afar. WiSpiro relies on a phase-motion demodulation algorithm that reconstructs minute chest and abdominal movements by analyzing the subtle phase changes that the movements cause to the continuous wave signal sent by a 2.4 GHz directional radio. These movements are mapped to breathing volume, where the mapping relationship is obtained via a short training process. To cope with body movement, the system tracks the large-scale movements and posture changes of the person, and moves its transmitting antenna accordingly to a proper location in order to maintain its beam to specific areas on the frontal part of the person's body. It also incorporates interpolation mechanisms to account for possible inaccuracy of our posture detection technique and the minor movement of the person's body. We have built WiSpiro prototype, and demonstrated through a user study that it can accurately and continuously monitor user's breathing volume with a median accuracy from 90% to 95.4% (or 0.0581 to 0.111 of error) to even in the presence of body movement. The monitoring granularity and accuracy are sufficiently high to be useful for diagnosis by clinical doctor."}
{"_id": "35bb7f871dbb9383854dbf48274aa839f9580564", "title": "Polygenic scores for schizophrenia and educational attainment are associated with behavioural problems in early childhood in the general population.", "text": "BACKGROUND\nGenome-wide association studies in adults have identified numerous genetic variants related to psychiatric disorders and related traits, such as schizophrenia and educational attainment. However, the effects of these genetic variants on behaviour in the general population remain to be fully understood, particularly in younger populations. We investigated whether polygenic scores of five psychiatric disorders and educational attainment are related to emotional and behaviour problems during early childhood.\n\n\nMETHODS\nFrom the Generation R Study, we included participants with available genotype data and behavioural problems measured with the Child Behavior Checklist (CBCL) at the age of 3 (n\u00a0=\u00a01,902), 6 (n\u00a0=\u00a02,202) and 10\u00a0years old (n\u00a0=\u00a01,843). Polygenic scores were calculated for five psychiatric disorders and educational attainment. These polygenic scores were tested for an association with the broadband internalizing and externalizing problem scales and the specific CBCL syndrome scale scores.\n\n\nRESULTS\nAnalysis of the CBCL broadband scales showed that the schizophrenia polygenic score was associated with significantly higher internalizing scores at 3, 6 and 10\u00a0years and higher externalizing scores at age 3 and 6. The educational attainment polygenic score was associated with lower externalizing scores at all time points and lower internalizing scores at age 3. No associations were observed for the polygenic scores of bipolar disorder, major depressive disorder and autism spectrum disorder. Secondary analyses of specific syndrome scores showed that the schizophrenia polygenic score was strongly related to the Thought Problems scores. A negative association was observed between the educational attainment polygenic score and Attention Problems scores across all age groups.\n\n\nCONCLUSIONS\nPolygenic scores for adult psychiatric disorders and educational attainment are associated with variation in emotional and behavioural problems already at a very early age."}
{"_id": "91c67e40a302edd4db02160f2a34d88b1c7aee8e", "title": "A decision cognizant Kullback-Leibler divergence", "text": "In decision making systems involving multiple classifiers there is the need to assess classifier (in)congruence, that is to gauge the degree of agreement between their outputs. A commonly used measure for this purpose is the Kullback-Leibler (KL) divergence. We propose a variant of the KL divergence, named decision cognizant Kullback-Leibler divergence (DC-KL), to reduce the contribution of the minority classes, which obscure the true degree of classifier incongruence. We investigate the properties of the novel divergence measure analytically and by simulation studies. The proposed measure is demonstrated to be more robust to minority class clutter. Its sensitivity to estimation noise is also shown to be considerably lower than that of the classical KL divergence. These properties render the DC-KL divergence a much better statistic for discriminating between classifier congruence and incongruence in pattern recognition systems."}
{"_id": "c44c2139458f1a25ba8526045857ec7ae9442dbf", "title": "Approaches , methods , metrics , measures , and subjectivity in ontology evaluation : A survey", "text": "Ontology evaluation is concerned with ascertain two important aspects of ontologies: quality and correctness. The distinction between the two is attempted in this survey as a way to better approach ontology evaluation. The role that ontologies play on the semantic web at large has been has been argued to have catalyzed the proliferation of ontologies in existence. This has also presented the challenge of deciding one the suitability of an given ontology to one\u2019s purposes as compared with another ontology in a similar domain. This survey intends to analyze the state of the art in ontology evaluate spanning such topics as the approaches to ontology evaluation, the metrics and measures used. Particular interest is given to Data-driven ontology evaluation with special emphasis on the notion of bias and it\u2019s relevance to evaluation results. Chief among the outputs of this survey is the gap analysis on the topic of ontology evaluation."}
{"_id": "4b928b6fd5e3830a916750d3e0ebadf6622d78e8", "title": "Radar interferometry offers new insights into threats to the Angkor site", "text": "The conservation of World Heritage is critical to the cultural and social sustainability of regions and nations. Risk monitoring and preventive diagnosis of threats to heritage sites in any given ecosystem are a complex and challenging task. Taking advantage of the performance of Earth Observation technologies, we measured the impacts of hitherto imperceptible and poorly understood factors of groundwater and temperature variations on the monuments in the Angkor World Heritage site (400 km2). We developed a two-scale synthetic aperture radar interferometry (InSAR) approach. We describe spatial-temporal displacements (at millimeter-level accuracy), as measured by high-resolution TerraSAR/TanDEM-X satellite images, to provide a new solution to resolve the current controversy surrounding the potential structural collapse of monuments in Angkor. Multidisciplinary analysis in conjunction with a deterioration kinetics model offers new insights into the causes that trigger the potential decline of Angkor monuments. Our results show that pumping groundwater for residential and touristic establishments did not threaten the sustainability of monuments during 2011 to 2013; however, seasonal variations of the groundwater table and the thermodynamics of stone materials are factors that could trigger and/or aggravate the deterioration of monuments. These factors amplify known impacts of chemical weathering and biological alteration of temple materials. The InSAR solution reported in this study could have implications for monitoring and sustainable conservation of monuments in World Heritage sites elsewhere."}
{"_id": "114e15d677d2055a3a627eb805c2e914f405404e", "title": "A fast and efficient sift detector using the mobile GPU", "text": "Emerging mobile applications, such as augmented reality, demand robust feature detection at high frame rates. We present an implementation of the popular Scale-Invariant Feature Transform (SIFT) feature detection algorithm that incorporates the powerful graphics processing unit (GPU) in mobile devices. Where the usual GPU methods are inefficient on mobile hardware, we propose a heterogeneous dataflow scheme. By methodically partitioning the computation, compressing the data for memory transfers, and taking into account the unique challenges that arise out of the mobile GPU, we are able to achieve a speedup of 4-7x over an optimized CPU version, and a 6.4x speedup over a published GPU implementation. Additionally, we reduce energy consumption by 87 percent per image. We achieve near-realtime detection without compromising the original algorithm."}
{"_id": "70555f7ae214caaecb19904456fa47f899905e3c", "title": "Home, habits, and energy: examining domestic interactions and energy consumption", "text": "This paper presents findings from a qualitative study of people's everyday interactions with energy-consuming products and systems in the home. Initial results from a large online survey are also considered. This research focuses not only on \"conservation behavior\" but importantly investigates interactions with technology that may be characterized as \"normal consumption\" or \"over-consumption.\" A novel vocabulary for analyzing and designing energy-conserving interactions is proposed based on our findings, including: cutting, trimming, switching, upgrading, and shifting. Using the proposed vocabulary, and informed by theoretical developments from various literatures, this paper demonstrates ways in which everyday interactions with technology in the home are performed without conscious consideration of energy consumption but rather are unconscious, habitual, and irrational. Implications for the design of energy-conserving interactions with technology and broader challenges for HCI research are proposed."}
{"_id": "660a0ac9f6ca2a185b89dfead5e95deb56d988dc", "title": "What affects web credibility perception - an analysis of textual justifications", "text": "In this paper, we present the findings of a qualitative analysis of 15,750 comments left by 2,041 participants in a Reconcile web credibility evaluation study. While assessing the credibility of the presented pages, respondents of the Reconcile studies were also asked to justify their ratings in writing. This work attempts to give an insight into the factors that affected the credibility assessment. To the best of our knowledge, the presented study is the most-recent large-scale study of its kind carried out since 2003, when the Fogg et al. \u015bHow do users evaluate the credibility of Web sites? A study with over 2,500 participants\u2019 paper was published. The performed analysis shows that the findings made a decade ago are still mostly valid today despite the passage of time and the advancement of Internet technologies. However we report a weaker impact of webpage appearance. A much bigger dataset (as compared to Fogg\u2019s studies) allowed respondents to reveal additional features, which influenced the credibility evaluations."}
{"_id": "8a438d15caa48c69fee55f20ada3ed28a1bb8a67", "title": "Semantics and implementation of continuous sliding window queries over data streams", "text": "In recent years the processing of continuous queries over potentially infinite data streams has attracted a lot of research attention. We observed that the majority of work addresses individual stream operations and system-related issues rather than the development of a general-purpose basis for stream processing systems. Furthermore, example continuous queries are often formulated in some declarative query language without specifying the underlying semantics precisely enough. To overcome these deficiencies, this article presents a consistent and powerful operator algebra for data streams which ensures that continuous queries have well-defined, deterministic results. In analogy to traditional database systems, we distinguish between a logical and a physical operator algebra. While the logical algebra specifies the semantics of the individual operators in a descriptive but concrete way over temporal multisets, the physical algebra provides efficient implementations in the form of stream-to-stream operators. By adapting and enhancing research from temporal databases to meet the challenging requirements in streaming applications, we are able to carry over the conventional transformation rules from relational databases to stream processing. For this reason, our approach not only makes it possible to express continuous queries with a sound semantics, but also provides a solid foundation for query optimization, one of the major research topics in the stream community. Since this article seamlessly explains the steps from query formulation to query execution, it outlines the innovative features and operational functionality implemented in our state-of-the-art stream processing infrastructure."}
{"_id": "aefceee6b13236a06418e28f025e96dac062149e", "title": "Atrial Fibrillation Detection Based on Poincar\u00e9 plot of RR Intervals", "text": "Atrial fibrillation (AF) is one of the most common types of arrhythmia which significantly increases the risk factor of stroke \u2013 especially in elderly population. In this paper an algorithm is presented which is suitable for the effective detection of AF, by using merely heart rate data as input. The method is based on the processing of Poincar\u00e9 plots constructed from the set of 30 RR intervals. During the analysis of Poincar\u00e9 plots the dispersion of points around the diagonal line is calculated and the number of clusters is determined by a self-developed cluster analyzer. The decision criterion of AF relies on these two parameters. On the one hand, the algorithm was tested on 10 AF and 10 normal rhythm ECG signals of the PhysioNet Database, achieving the average sensitivity (Se) of 98.69% and the average specificity (Sp) of 99.59%. On the other hand, 10 AF and 10 normal clinically confirmed records of a heart rate meter were also processed, resulting the average Se and Sp of 96.89% and 99.00%, respectively."}
{"_id": "defa8774d3c6ad46d4db4959d8510b44751361d8", "title": "FEBEI-Face Expression Based Emoticon Identification CSB 657 Computer Vision", "text": "The Face Expression Based Emoticon Identification (FEBEI) system is an open source extension to the Tracker.js framework which converts a human facial expression to the best matching emoticon. The contribution of this project was to build this robust classifier which can identify facial expression in real time without any reliance on an external server or computation node. An entirely client-side JavaScript implementation has clear privacy benefits as well as the avoidance of any lag inherent in uploading and downloading images. We accomplished this by utilizing several computationally efficient methods. Tracking.js provided a Viola Jones based face detector which we used to pass facial images to our own implementation of an eigenemotion detection system which was trained to distinguish between happy and angry faces. We have implemented a similar eigenface classifier in python and have trained a Convoluted Neural Network (CNN) to classify emotions to provide a comparative view of its advantages. We aim to make FEBEI easily extendable so that the developer community will be able to add classifiers for more emoticon."}
{"_id": "926fcaa58aa8ef387d4a7159b2fde267bcf8cebe", "title": "Analysis and Application of Discrete Halbach Magnet Array With Unequal Arc Lengths and Unequally Changed Magnetization Directions", "text": "This paper presents a general process to analyze and optimize the performance for an arbitrary discrete Halbach magnet array which is \u201cunnormal\u201d on magnetization direction and arc length of each segment. Besides the original type, a new arrangement of Halbach array is proposed, which does not have main PM segments. Two-dimensional field analytical models for predicting the air-gap field distribution of both arrangements are offered and validated by an FEA method. Then, a multi-objective optimizer was applied for different requirements, and comparison between normal and optimized 3-segment Halbach array is given to verify the effect of optimization. At last, some potential applications and follow-up study are discussed."}
{"_id": "52047f4929bf616ca6dfad6acf7d5da2a0c15aa8", "title": "Three-dimensional menus: A survey and taxonomy", "text": "Various interaction techniques have been developed in the field of virtual and augmented reality. Whereas techniques for object selection, manipulation, travel, and wayfinding have already been covered in existing taxonomies in some detail, application control techniques have not yet been sufficiently considered. However, they are needed by almost every mixed reality application, e.g. for choosing from alternative objects or options. For this purpose a great variety of distinct three-dimensional (3D) menu selection techniques is available. This paper surveys existing 3D menus from the corpus of literature and classifies them according to various criteria. The taxonomy introduced here assists developers of interactive 3D applications to better evaluate their options when choosing, optimizing, and implementing a 3D menu technique. Since the taxonomy spans the design space for 3D menu solutions, it also aids researchers in identifying opportunities to improve or create novel virtual menu techniques. r 2006 Elsevier Ltd. All rights reserved."}
{"_id": "661ce19f315aafbf5a3916684e0e7c10e642d5f1", "title": "ShoeSoleSense: proof of concept for a wearable foot interface for virtual and real environments", "text": "ShoeSoleSense is a proof of concept, novel body worn interface - an insole that enables location independent hands-free interaction through the feet. Forgoing hand or finger interaction is especially beneficial when the user is engaged in real world tasks. In virtual environments as moving through safety training applications is often conducted via finger input, which is not very suitable. To enable a more intuitive interaction, alternative control concepts utilize gesture control, which is usually tracked by statically installed cameras in CAVE-like-installations. Since tracking coverage is limited, problems may also occur. The introduced prototype provides a novel control concept for virtual reality as well as real life applications. Demonstrated functions include movement control in a virtual reality installation such as moving straight, turning and jumping. Furthermore the prototype provides additional feedback by heating up the feet and vibrating in dedicated areas on the surface of the insole."}
{"_id": "9810a7976242774b4d7878fd121f54094412ae40", "title": "A discussion of cybersickness in virtual environments", "text": "An important and troublesome problem with current virtual environment (VE) technology is the tendency for some users to exhibit symptoms that parallel symptoms of classical motion sickness both during and after the VE experience. This type of sickness, cybersickness, is distinct from motion sickness in that the user is often stationary but has a compelling sense of self motion through moving visual imagery. Unfortunately, there are many factors that can cause cybersickness and there is no foolproof method for eliminating the problem. In this paper, I discuss a number of the primary factors that contribute to the cause of cybersickness, describe three conflicting cybersickness theories that have been postulated, and discuss some possible methods for reducing cybersickness in VEs."}
{"_id": "16af753e94919ca257957cee7ab6c1b30407bb91", "title": "ChairIO--the Chair-Based Interface", "text": ""}
{"_id": "278df68251ff50faa36585ceb05253bcd3b06c32", "title": "Multi-agent quadrotor testbed control design: integral sliding mode vs. reinforcement learning", "text": "The Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC) is a multi-vehicle testbed currently comprised of two quadrotors, also called X4-flyers, with capacity for eight. This paper presents a comparison of control design techniques, specifically for outdoor altitude control, in and above ground effect, that accommodate the unique dynamics of the aircraft. Due to the complex airflow induced by the four interacting rotors, classical linear techniques failed to provide sufficient stability. Integral sliding mode and reinforcement learning control are presented as two design techniques for accommodating the nonlinear disturbances. The methods both result in greatly improved performance over classical control techniques."}
{"_id": "5d65ac7d6ae07247a6cc12c2aa87977ceee520f9", "title": "Design guidelines for wireless sensor networks: communication, clustering and aggregation", "text": "When sensor nodes are organized in clusters, they could use either single hop or multi-hop mode of communication to send their data to their respective cluster heads. We present a systematic cost-based analysis of both the modes, and provide results that could serve as guidelines to decide which mode should be used for given settings. We determine closed form expressions for the required number of cluster heads and the required battery energy of nodes for both the modes. We also propose a hybrid communication mode which is a combination of single hop and multi-hop modes, and which is more cost-effective than either of the two modes. Our problem formulation also allows for the application to be taken into account in the overall design problem through a data aggregation model. 2003 Elsevier B.V. All rights reserved."}
{"_id": "60f64addc751835709c9aa10efcec36e1cd74551", "title": "Grounded Learning of Color Semantics with Autoencoders", "text": "Humans learn language by grounding word meaning in the physical world. Recent efforts in natural language processing have attempted to model such multimodal learning by incorporating visual information into word and sentence representations. Here, we explore the task of grounding lexical color descriptions in their visual referents. We propose an RNN-based autoencoder model to learn vector representations of sentences that reflect their associated color values. Our model effectively learns a joint visual-lexical space that demonstrates compositionality and generalizes to unseen color names. As a demonstration of such a space learned, we show that our model can predict captions from color representations and color representations from captions. In addition to successfully modeling color language, this work provides a novel framework for grounded language learning."}
{"_id": "cc75568885ab99851cc0e0ea5679121606121e5d", "title": "Behavior recognition based on machine learning algorithms for a wireless canine machine interface", "text": "Training and handling working dogs is a costly process and requires specialized skills and techniques. Less subjective and lower-cost training techniques would not only improve our partnership with these dogs but also enable us to benefit from their skills more efficiently. To facilitate this, we are developing a canine body-area-network (cBAN) to combine sensing technologies and computational modeling to provide handlers with a more accurate interpretation for dog training. As the first step of this, we used inertial measurement units (IMU) to remotely detect the behavioral activity of canines. Decision tree classifiers and Hidden Markov Models were used to detect static postures (sitting, standing, lying down, standing on two legs and eating off the ground) and dynamic activities (walking, climbing stairs and walking down a ramp) based on the heuristic features of the accelerometer and gyroscope data provided by the wireless sensing system deployed on a canine vest. Data was collected from 6 Labrador Retrievers and a Kai Ken. The analysis of IMU location and orientation helped to achieve high classification accuracies for static and dynamic activity recognition."}
{"_id": "6c319487df3b3ef48fe6ea4b971151bd183ae5ea", "title": "Encoding User as More Than the Sum of Their Parts: Recurrent Neural Networks and Word Embedding for People-to-people Recommendation", "text": "Neural networks and word embeddings are powerful tools to capture latent factors. These tools can provide effective measures of similarities between users or items in the context of sparse data. We propose a novel approach that relies on neural networks and word embeddings to the problem of matching a learner looking for mentoring, and a tutor that is willing to provide this mentoring. Tutors and learners can issue multiple offers/requests on different topics. The approach matches over the whole array of topics specified by learners and tutors. Its performance for tutor-learner matching is compared with the state of the art. It yields similar results in terms of precision, but improves the recall."}
{"_id": "a27b8f921e475ab5c0aa93be8c566ffe1b529c79", "title": "MATLAB / SIMULINK Based Modelling Photovoltaic Array Fed T-Source Inverter", "text": "In order to utilize the solar energy for industrial, commercial and domestic applications the power conversion schemes plays an important role. The problem exists in conventional power conversion schemes are low efficiency, poor transient response, low voltage gain and more reactive components are being used .This paper proposes a single stage power conversion scheme called T-source inverter(TSI) to overcome these drawbacks. T-Source inverter (TSI) with simple boost control scheme is used as interface circuit between PV array and load. The PV array is analyzed under different irradiation and temperature value. The mathematical equations are verified with simulation and hardware. The verification shows the voltage gain of TSI was comparatively higher than ZSI. The reactive components in the circuit are less, fast transient response and low output ripple."}
{"_id": "b9225857342bf3ca201a4cca173cab3329db584f", "title": "A near-threshold 7T SRAM cell with high write and read margins and low write time for sub-20 nm FinFET technologies", "text": "In this paper, a 7T SRAM cell with differential write and single ended read operations working in the near-threshold region is proposed. The structure is based on modifying a recently proposed 5T cell which uses high and low VTH transistors to improve the read and write stability. To enhance the read static noise margin (RSNM) while keeping the high write margin and low write time, an extra access transistor is used and the threshold voltages of the SRAM transistors are appropriately set. In addition, to maintain the low leakage power of the cell and increase the Ion/Ioff ratio of its access transistors, a high VTH transistor is used in the pull down path of the cell. To assess the efficacy of the proposed cell, its characteristics are compared with those of 5T, 6T, 8T, and 9T SRAM cells. The characteristics are obtained from HSPICE simulations using 20 nm, 16 nm, 14 nm, 10 nm, and 7 nm FinFET technologies assuming a supply voltage of 500 mV. The results reveal high write and read margins, the highest Ion/Ioff ratio, a fast write, and ultra-low leakage power in the hold \u201c0\u201d state for the cell. Therefore, the suggested 7T cell may be considered as one of the better design choices for both high performance and low power applications. Also, the changes of cell parameters when the temperature rises from 40 1C to 100 1C are investigated. Finally, the write margin as well as the read and hold SNMs of the cell in the presence of the process variations are studied at two supply voltages of 400 mV and 500 mV. The study shows that the proposed cell meets the required cell sigma value (6\u03c3) under all conditions. & 2015 Elsevier B.V. All rights reserved."}
{"_id": "93a648c9d8b1d1128c9c3abf75d3c5b9b3bdcb02", "title": "Microcontroller-based quadratic buck converter used as led lamp driver", "text": "This paper presents a new proposal for driving LED lamps through the use of a micro controlled quadratic buck DC/DC converter. The main focus is to improve the LED driver's characteristics using the microcontroller PIC16F873A, thus achieving greater durability and efficiency. Mathematical analyses and experimental results are presented in this paper."}
{"_id": "694a40785f480cc0d65bd94a5e44f570aff5ea37", "title": "Integrating Grid-Based and Topological Maps for Mobile Robot Navigation", "text": "Research on mobile robot navigation has produced two major paradigms for mapping indoor environments: grid-based and topological. While grid-based methods produce accurate metric maps, their complexity often prohibits efficient planning and problem solving in large-scale indoor environments. Topological maps, on the other hand, can be used much more efficiently, yet accurate and consistent topological maps are considerably difficult to learn in large-scale environments. This paper describes an approach that integrates both paradigms: grid-based and topological. Grid-based maps are learned using artificial neural networks and Bayesian integration. Topological maps are generated on top of the grid-based maps, by partitioning the latter into coherent regions. By combining both paradigms\u2014grid-based and topological\u2014, the approach presented here gains the best of both worlds: accuracy/consistency and efficiency. The paper gives results for autonomously operating a mobile robot equipped with sonar sensors in populated multi-room envi-"}
{"_id": "0c6f14e13f475c56d45f1e9c91f6f05e199ca742", "title": "Seasonal Effect on Tree Species Classification in an Urban Environment Using Hyperspectral Data, LiDAR, and an Object-Oriented Approach", "text": "The objective of the current study was to analyze the seasonal effect on differentiating tree species in an urban environment using multi-temporal hyperspectral data, Light Detection And Ranging (LiDAR) data, and a tree species database collected from the field. Two Airborne Imaging Spectrometer for Applications (AISA) hyperspectral images were collected, covering the Summer and Fall seasons. In order to make both datasets spatially and spectrally compatible, several preprocessing steps, including band reduction and a spatial degradation, were performed. An object-oriented classification was performed on both images using training data collected randomly from the tree species database. The seven dominant tree species (Gleditsia triacanthos, Acer saccharum, Tilia Americana, Quercus palustris, Pinus strobus and Picea glauca) were used in the classification. The results from this analysis did not show any major difference in overall accuracy between the two seasons. Overall accuracy was approximately 57% for the Summer dataset and 56% for the Fall dataset. However, the Fall dataset provided more consistent results for all tree species while the Summer dataset had a few higher individual class accuracies. Further, adding LiDAR into the classification improved the results by 19% for both fall and summer. This is mainly due to the removal of shadow effect and the addition of elevation data to separate low and high vegetation."}
{"_id": "cd7d22c8dd0bbbedd7987abff7a3ee2a76b2572f", "title": "Gamification solutions for software acceptance: A comparative study of Requirements Engineering and Organizational Behavior techniques", "text": "Gamification is a powerful paradigm and a set of best practices used to motivate people carrying out a variety of ICT-mediated tasks. Designing gamification solutions and applying them to a given ICT system is a complex and expensive process (in time, competences and money) as software engineers have to cope with heterogeneous stakeholder requirements on one hand, and Acceptance Requirements on the other, that together ensure effective user participation and a high level of system utilization. As such, gamification solutions require significant analysis and design as well as suitable supporting tools and techniques. In this work, we compare concepts, tools and techniques for gamification design drawn from Software Engineering and Human and Organizational Behaviors. We conduct a comparison by applying both techniques to the specific Meeting Scheduling exemplar used extensively in the Requirements Engineering literature."}
{"_id": "274166f64e23a54d47fc75bb4d0f4d0225ce7dbf", "title": "Organic food: buying more safety or just peace of mind? A critical review of the literature.", "text": "Consumer concern over the quality and safety of conventional food has intensified in recent years, and primarily drives the increasing demand for organically grown food, which is perceived as healthier and safer. Relevant scientific evidence, however, is scarce, while anecdotal reports abound. Although there is an urgent need for information related to health benefits and/or hazards of food products of both origins, generalized conclusions remain tentative in the absence of adequate comparative data. Organic fruits and vegetables can be expected to contain fewer agrochemical residues than conventionally grown alternatives; yet, the significance of this difference is questionable, inasmuch as actual levels of contamination in both types of food are generally well below acceptable limits. Also, some leafy, root, and tuber organic vegetables appear to have lower nitrate content compared with conventional ones, but whether or not dietary nitrate indeed constitutes a threat to human health is a matter of debate. On the other hand, no differences can be identified for environmental contaminants (e.g. cadmium and other heavy metals), which are likely to be present in food from both origins. With respect to other food hazards, such as endogenous plant toxins, biological pesticides and pathogenic microorganisms, available evidence is extremely limited preventing generalized statements. Also, results for mycotoxin contamination in cereal crops are variable and inconclusive; hence, no clear picture emerges. It is difficult, therefore, to weigh the risks, but what should be made clear is that 'organic' does not automatically equal 'safe.' Additional studies in this area of research are warranted. At our present state of knowledge, other factors rather than safety aspects seem to speak in favor of organic food."}
{"_id": "4ce6a28609c225ed928bf585391995b985860709", "title": "Induction of pluripotent stem cells from fibroblast cultures", "text": "Clinical application of embryonic stem (ES) cells faces difficulties regarding use of embryos, as well as tissue rejection after implantation. One way to circumvent these issues is to generate pluripotent stem cells directly from somatic cells. Somatic cells can be reprogrammed to an embryonic-like state by the injection of a nucleus into an enucleated oocyte or by fusion with ES cells. However, little is known about the mechanisms underlying these processes. We have recently shown that the combination of four transcription factors can generate ES-like pluripotent stem cells directly from mouse fibroblast cultures. The cells, named induced pluripotent stem (iPS) cells, can be differentiated into three germ layers and committed to chimeric mice. Here we describe detailed methods and tips for the generation of iPS cells."}
{"_id": "b4fc77d6fa6bc4e7845fda834c314573b14a69f1", "title": "Foundations of Augmented Cognition: Neuroergonomics and Operational Neuroscience", "text": "This work evaluates the feasibility of a motor imagery-based optical brain-computer interface (BCI) for humanoid robot control. The functional near-infrared spectroscopy (fNIRS) based BCI-robot system developed in this study operates through a high-level control mechanism where user specifies a target action through the BCI and the robot performs the set of micro operations necessary to fulfill the identified goal. For the evaluation of the system, four motor imagery tasks (left hand, right hand, left foot, and right foot) were mapped to operational commands (turn left, turn right, walk forward, walk backward) that were sent to the robot in real time to direct the robot navigating a small room. An ecologically valid offline analysis with minimal preprocessing shows that seven subjects could achieve an average accuracy of 32.5 %. This was increased to 43.6 % just by including calibration data from the same day of the robot control using the same cap setup, indicating that day-of calibration following the initial training may be important for BCI control."}
{"_id": "24019386432f4b60842b86a4892d5d424e4c5e2e", "title": "Understanding Availability", "text": "This paper addresses a simple, yet fundamental question in the design of peer-to-peer systems: What does it mean when we say \u201cavailability\u201d and how does this understanding impact the engineering of practical systems? We argue that existing measurements and models do not capture the complex timevarying nature of availability in today\u2019s peer-to-peer environments. Further, we show that unforeseen methodological shortcomings have dramatically biased previous analyses of this phenomenon. As the basis of our study, we empirically characterize the availability of a large peer-to-peer system over a period of 7 days, analyze the dependence of the underlying availability distributions, measure host turnover in the system, and discuss how these results may affect the design of high-availability peer-to-peer services."}
{"_id": "c98fedc89caca35fe10d4118b7e984ac10737c3b", "title": "Photonic Crystal-Structures for THz Vacuum Electron Devices", "text": "The technology of photonic crystals (PhCs) is investigated here to improve the performance of THz vacuum electron devices. Compared with conventional metallic waveguides, the PhC arrangement alleviates typical issues in THz vacuum electron tubes, i.e. difficult vacuum pumping process and assembling, and improves the input/output coupling. A slow-wave structure (SWS) based on a corrugated waveguide assisted by PhC lateral walls and the efficient design of a PhC coupler for sheet-beam interaction devices are demonstrated. Based on the proposed technology, a backward-wave oscillator (BWO) is designed in this paper. Cold parameters of the novel PhC SWS as well as 3-D particle-in-cell simulations of the overall BWO are investigated, obtaining more than 70-mW-peak output power at 0.650 THz for beam voltage of 11 kV and beam current of 6 mA."}
{"_id": "7d88ddeddc8a8c8e71ade8ec4bcdb3dfdd8d0526", "title": "Presenting system uncertainty in automotive UIs for supporting trust calibration in autonomous driving", "text": "To investigate the impact of visualizing car uncertainty on drivers' trust during an automated driving scenario, a simulator study was conducted. A between-group design experiment with 59 Swedish drivers was carried out where a continuous representation of the uncertainty of the car's ability to autonomously drive during snow conditions was displayed to one of the groups, whereas omitted for the control group. The results show that, on average, the group of drivers who were provided with the uncertainty representation took control of the car faster when needed, while they were, at the same time, the ones who spent more time looking at other things than on the road ahead. Thus, drivers provided with the uncertainty information could, to a higher degree, perform tasks other than driving without compromising with driving safety. The analysis of trust shows that the participants who were provided with the uncertainty information trusted the automated system less than those who did not receive such information, which indicates a more proper trust calibration than in the control group."}
{"_id": "5cd1c62dc99b6c3ecb2e678aa6fb2bffe3853c28", "title": "Kernel-Based Learning of Hierarchical Multilabel Classification Models", "text": "We present a kernel-based algorithm for hierarchical text c lassification where the documents are allowed to belong to more than one category at a time. The clas sification model is a variant of the Maximum Margin Markov Network framework, where the classifi cation hierarchy is represented as a Markov tree equipped with an exponential family defined o n the edges. We present an efficient optimization algorithm based on incremental conditi onal gradient ascent in single-example subspaces spanned by the marginal dual variables. The optim ization is facilitated with a dynamic programming based algorithm that computes best update dire ctions in the feasible set. Experiments show that the algorithm can feasibly optimize t raining sets of thousands of examples and classification hierarchies consisting of hundreds of nodes. Training of the full hierarchical model is as efficient as training independent SVM-light clas sifiers for each node. The algorithm\u2019s predictive accuracy was found to be competitive with other r ecently introduced hierarchical multicategory or multilabel classification learning algorithms ."}
{"_id": "2fd54e111e77f6f5a7925592d6016af68d08c81e", "title": "A novel ultra-wideband 80 GHz FMCW radar system for contactless monitoring of vital signs", "text": "In this paper an ultra-wideband 80 GHz FMCW-radar system for contactless monitoring of respiration and heart rate is investigated and compared to a standard monitoring system with ECG and CO2 measurements as reference. The novel FMCW-radar enables the detection of the physiological displacement of the skin surface with submillimeter accuracy. This high accuracy is achieved with a large bandwidth of 10 GHz and the combination of intermediate frequency and phase evaluation. This concept is validated with a radar system simulation and experimental measurements are performed with different radar sensor positions and orientations."}
{"_id": "3d8c7499221033e2d1201cd842d88dfb7fed9af8", "title": "Optimizing F-measure: A Tale of Two Approaches", "text": "F-measures are popular performance metrics, particularly for tasks with imbalanced data sets. Algorithms for learning to maximize F-measures follow two approaches: the empirical utility maximization (EUM) approach learns a classifier having optimal performance on training data, while the decision-theoretic approach learns a probabilistic model and then predicts labels with maximum expected F-measure. In this paper, we investigate the theoretical justifications and connections for these two approaches, and we study the conditions under which one approach is preferable to the other using synthetic and real datasets. Given accurate models, our results suggest that the two approaches are asymptotically equivalent given large training and test sets. Nevertheless, empirically, the EUM approach appears to be more robust against model misspecification, and given a good model, the decision-theoretic approach appears to be better for handling rare classes and a common domain adaptation scenario."}
{"_id": "e6b6a75ac509d30089b63451c2ed640f471af18a", "title": "Diffusion maps for high-dimensional single-cell analysis of differentiation data", "text": "MOTIVATION\nSingle-cell technologies have recently gained popularity in cellular differentiation studies regarding their ability to resolve potential heterogeneities in cell populations. Analyzing such high-dimensional single-cell data has its own statistical and computational challenges. Popular multivariate approaches are based on data normalization, followed by dimension reduction and clustering to identify subgroups. However, in the case of cellular differentiation, we would not expect clear clusters to be present but instead expect the cells to follow continuous branching lineages.\n\n\nRESULTS\nHere, we propose the use of diffusion maps to deal with the problem of defining differentiation trajectories. We adapt this method to single-cell data by adequate choice of kernel width and inclusion of uncertainties or missing measurement values, which enables the establishment of a pseudotemporal ordering of single cells in a high-dimensional gene expression space. We expect this output to reflect cell differentiation trajectories, where the data originates from intrinsic diffusion-like dynamics. Starting from a pluripotent stage, cells move smoothly within the transcriptional landscape towards more differentiated states with some stochasticity along their path. We demonstrate the robustness of our method with respect to extrinsic noise (e.g. measurement noise) and sampling density heterogeneities on simulated toy data as well as two single-cell quantitative polymerase chain reaction datasets (i.e. mouse haematopoietic stem cells and mouse embryonic stem cells) and an RNA-Seq data of human pre-implantation embryos. We show that diffusion maps perform considerably better than Principal Component Analysis and are advantageous over other techniques for non-linear dimension reduction such as t-distributed Stochastic Neighbour Embedding for preserving the global structures and pseudotemporal ordering of cells.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe Matlab implementation of diffusion maps for single-cell data is available at https://www.helmholtz-muenchen.de/icb/single-cell-diffusion-map.\n\n\nCONTACT\nfbuettner.phys@gmail.com, fabian.theis@helmholtz-muenchen.de\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."}
{"_id": "848c717ba51e48afef714dfef4bd6ab1cc050dab", "title": "Algorithms for the Assignment and Transiortation Troblems*", "text": "In this paper we presen algorithms for the solution of the general assignment and transportation problems. In Section 1, a statement of the algorithm for the assignment problem appears, along with a proof for the correctness of the algorithm. The remarks which constitute the proof are incorporated parenthetically into the statement of the algorithm. Following this appears a discussion of certain theoretical aspects of the problem. In Section 2, the algorithm is generalized to one for the transportation problem. The algorithm of that section is stated as concisely as possible, with theoretical remarks omitted."}
{"_id": "4f929e3865e8543f581c1c4c891bfc27dd543d85", "title": "Segmentation of Dermoscopy Images Using Wavelet Networks", "text": "This paper introduces a new approach for the segmentation of skin lesions in dermoscopic images based on wavelet network (WN). The WN presented here is a member of fixed-grid WNs that is formed with no need of training. In this WN, after formation of wavelet lattice, determining shift and scale parameters of wavelets with two screening stage and selecting effective wavelets, orthogonal least squares algorithm is used to calculate the network weights and to optimize the network structure. The existence of two stages of screening increases globality of the wavelet lattice and provides a better estimation of the function especially for larger scales. R, G, and B values of a dermoscopy image are considered as the network inputs and the network structure formation. Then, the image is segmented and the skin lesions exact boundary is determined accordingly. The segmentation algorithm were applied to 30 dermoscopic images and evaluated with 11 different metrics, using the segmentation result obtained by a skilled pathologist as the ground truth. Experimental results show that our method acts more effectively in comparison with some modern techniques that have been successfully used in many medical imaging problems."}
{"_id": "a512385be058b1e2e1d8b418a097065707622ecd", "title": "Global cancer statistics.", "text": "The global burden of cancer continues to increase largely because of the aging and growth of the world population alongside an increasing adoption of cancer-causing behaviors, particularly smoking, in economically developing countries. Based on the GLOBOCAN 2008 estimates, about 12.7 million cancer cases and 7.6 million cancer deaths are estimated to have occurred in 2008; of these, 56% of the cases and 64% of the deaths occurred in the economically developing world. Breast cancer is the most frequently diagnosed cancer and the leading cause of cancer death among females, accounting for 23% of the total cancer cases and 14% of the cancer deaths. Lung cancer is the leading cancer site in males, comprising 17% of the total new cancer cases and 23% of the total cancer deaths. Breast cancer is now also the leading cause of cancer death among females in economically developing countries, a shift from the previous decade during which the most common cause of cancer death was cervical cancer. Further, the mortality burden for lung cancer among females in developing countries is as high as the burden for cervical cancer, with each accounting for 11% of the total female cancer deaths. Although overall cancer incidence rates in the developing world are half those seen in the developed world in both sexes, the overall cancer mortality rates are generally similar. Cancer survival tends to be poorer in developing countries, most likely because of a combination of a late stage at diagnosis and limited access to timely and standard treatment. A substantial proportion of the worldwide burden of cancer could be prevented through the application of existing cancer control knowledge and by implementing programs for tobacco control, vaccination (for liver and cervical cancers), and early detection and treatment, as well as public health campaigns promoting physical activity and a healthier dietary intake. Clinicians, public health professionals, and policy makers can play an active role in accelerating the application of such interventions globally."}
{"_id": "17d1949e6653ec81eb496b38f816e99572eabcb1", "title": "Border detection in dermoscopy images using statistical region merging.", "text": "BACKGROUND\nAs a result of advances in skin imaging technology and the development of suitable image processing techniques, during the last decade, there has been a significant increase of interest in the computer-aided diagnosis of melanoma. Automated border detection is one of the most important steps in this procedure, because the accuracy of the subsequent steps crucially depends on it.\n\n\nMETHODS\nIn this article, we present a fast and unsupervised approach to border detection in dermoscopy images of pigmented skin lesions based on the statistical region merging algorithm.\n\n\nRESULTS\nThe method is tested on a set of 90 dermoscopy images. The border detection error is quantified by a metric in which three sets of dermatologist-determined borders are used as the ground-truth. The proposed method is compared with four state-of-the-art automated methods (orientation-sensitive fuzzy c-means, dermatologist-like tumor extraction algorithm, meanshift clustering, and the modified JSEG method).\n\n\nCONCLUSION\nThe results demonstrate that the method presented here achieves both fast and accurate border detection in dermoscopy images."}
{"_id": "6d173e78ee719a0713abdcf2110cd7f38d082ae7", "title": "Lesion border detection in dermoscopy images", "text": "BACKGROUND\nDermoscopy is one of the major imaging modalities used in the diagnosis of melanoma and other pigmented skin lesions. Due to the difficulty and subjectivity of human interpretation, computerized analysis of dermoscopy images has become an important research area. One of the most important steps in dermoscopy image analysis is the automated detection of lesion borders.\n\n\nMETHODS\nIn this article, we present a systematic overview of the recent border detection methods in the literature paying particular attention to computational issues and evaluation aspects.\n\n\nCONCLUSION\nCommon problems with the existing approaches include the acquisition, size, and diagnostic distribution of the test image set, the evaluation of the results, and the inadequate description of the employed methods. Border determination by dermatologists appears to depend upon higher-level knowledge, therefore it is likely that the incorporation of domain knowledge in automated methods will enable them to perform better, especially in sets of images with a variety of diagnoses."}
{"_id": "df70d198aa5543c99d887573447ab939cb442293", "title": "Segmentation of digitized dermatoscopic images by two-dimensional color clustering", "text": "A color-based segmentation scheme applied to dermatoscopic images is proposed. The RGB image is processed in the L*u*v* color space. A 2D histogram is computed with the two principal components and then smoothed with a Gaussian low-pass filter. The maxima location and a set of features are computed from the histogram contour lines. These features are the number of enclosed pixels, the surface of the base and the height of the maximum. They allow for the selection of valid clusters which determine the number of classes. The image is then segmented using a modified version of the fuzzy c-means (FCM) clustering technique that takes into account the cluster orientation. Finally, the segmented image is cleaned using mathematical morphology, the region borders are smoothed and small components are removed."}
{"_id": "26b0f2c364c2512f7a39fcc0a42f36c6d2667c19", "title": "NICBPM: Non-invasive cuff-less blood pressure monitor", "text": "An ultra-low power, ultra portable, wearable integrated bio-medical device, the Non-Invasive Cuff-less Blood Pressure Monitor (NICBPM) is proposed to measure the Pulse Transit Time (PTT). The relationship between PTT and Blood Pressure (BP) is investigated using linear and non linear regression models. Simultaneous reading obtained from an FDA approved, commercially available Blood Pressure monitor and the (Electrocardiogram) ECG and (photoplethsymogram) PPG signal from the NICBPM are used to calibrate the device for 30 subjects during cardiopulmonary exercise test. Bland-Altman plots for the linear regression model revealed limits of agreement of \u221210.6 to 10.6 mmHg for systolic BP (sBP) and \u221210.5 to 10.5 mmHg for distolic BP (dBP). Bland-Altman plots for the non linear regression model revealed marginally better limits of agreement between \u22128.5 to 8.5 mmHg for sBP and \u22126.6 to 6.6 mmHg for dBP. sBP can be estimated from the NICBPM with 99% accuracy and 99% sensitivity."}
{"_id": "d759d76f5e4cd50b2f34d41b6ae1879d3bc295a1", "title": "A Dolev-Yao Model for Zero Knowledge", "text": "In cryptographic protocols, zero knowledge proofs are employed for a principal A to communicate some non-trivial information t to B while at the same time ensuring that B cannot derive any information \u201cstronger\u201d than t . O\ue18den this implies that B can verify that some property holds without being able to produce a proof of this. While a rich theory of zero knowledge proofs exists, there are few formal models addressing verification questions.We propose an extension of the standard Dolev-Yao model of cryptographic protocols which involves not only constructibility of terms but also a form of verifiability. We present a proof system for term derivability, which is employed to yield a decision procedure for checking whether a given protocol meets its zero knowledge specification."}
{"_id": "9d52d7f7e2e0ccd2c591c76545a05151d3715566", "title": "Tools for the Automatic Generation of Ontology Documentation: A Task-Based Evaluation", "text": "Ontologies are knowledge constructs essential for creation of the Web of Data. Good documentation is required to permit people to understand ontologies and thus employ them correctly, but this is costly to create by tradition authorship methods, and is thus inefficient to create in this way until an ontology has matured into a stable structure. The authors describe three tools, LODE, Parrot and the OWLDoc-based Ontology Browser, that can be used automatically to create documentation from a well-formed OWL ontology at any stage of its development. They contrast their properties and then report on the authors\u2019 evaluation of their effectiveness and usability, determined by two task-based user testing sessions."}
{"_id": "17033755729ccfdd25893778635031f9c60e4738", "title": "Digital Mosaic Frameworks - An Overview", "text": "Art often provides valuable hints for technological innovations especially in the field of Image Processing and Computer Graphics. In this paper we survey in a unified framework several methods to transform raster input images into good quality mosaics. For each of the major different approaches in literature the paper reports a short description and a discussion of the most relevant issues. To complete the survey comparisons among the different techniques both in terms of visual quality and computational complexity are provided."}
{"_id": "063571e4736d7ce60d9776ee8b583b2b324bb722", "title": "Human movement analysis using stereophotogrammetry. Part 1: theoretical background.", "text": "This paper sets the stage for a series of reviews dealing with the problems associated with the reconstruction and analysis of in vivo skeletal system kinematics using optoelectronic stereophotogrammetric data. Instantaneous bone position and orientation and joint kinematic variable estimations are addressed in the framework of rigid body mechanics. The conceptual background to these exercises is discussed. Focus is placed on the experimental and analytical problem of merging the information relative to movement and that relative to the morphology of the anatomical body parts of interest. The various global and local frames that may be used in this context are defined. Common anatomical and mathematical conventions that can be used to describe joint kinematics are illustrated in a comparative fashion. The authors believe that an effort to systematize the different theoretical and experimental approaches to the problems involved and related nomenclatures, as currently reported in the literature, is needed to facilitate data and knowledge sharing, and to provide renewed momentum for the advancement of human movement analysis."}
{"_id": "1820bdf677714baf6b538ea73439396ff1d8c316", "title": "Differentially Private Smart Metering with Battery Recharging", "text": "The energy industry has recently begun using smart meters to take fine-grained readings of energy usage. These smart meters enable flexible timeof-use billing, forecasting, and demand response, but they also raise serious user privacy concerns. We propose a novel technique for provably hiding sensitive power consumption information in the overall power consumption stream. Our technique relies on a rechargeable battery that is connected to the household\u2019s power supply. This battery is used to modify the household\u2019s power consumption by adding or subtracting noise (i.e., increasing or decreasing power consumption), in order to establish strong privacy guarantees in the sense of differential privacy. To achieve these privacy guarantees in realistic settings, we first investigate the influence of, and the interplay between, capacity and throughput bounds that batteries face in reality. We then propose an integrated method based on noise cascading that allows for recharging the battery on-the-fly so that differential privacy is retained, while adhering to capacity and throughput constraints, and while keeping the additional consumption of energy induced by our technique to a minimum."}
{"_id": "04e954c5afc21447cf43ba1420c9905d359eefd9", "title": "Diagnosing performance overheads in the xen virtual machine environment", "text": "Virtual Machine (VM) environments (e.g., VMware and Xen) are experiencing a resurgence of interest for diverse uses including server consolidation and shared hosting. An application's performance in a virtual machine environment can differ markedly from its performance in a non-virtualized environment because of interactions with the underlying virtual machine monitor and other virtual machines. However, few tools are currently available to help debug performance problems in virtual machine environments.In this paper, we present Xenoprof, a system-wide statistical profiling toolkit implemented for the Xen virtual machine environment. The toolkit enables coordinated profiling of multiple VMs in a system to obtain the distribution of hardware events such as clock cycles and cache and TLB misses. The toolkit will facilitate a better understanding of performance characteristics of Xen's mechanisms allowing the community to optimize the Xen implementation.We use our toolkit to analyze performance overheads incurred by networking applications running in Xen VMs. We focus on networking applications since virtualizing network I/O devices is relatively expensive. Our experimental results quantify Xen's performance overheads for network I/O device virtualization in uni- and multi-processor systems. With certain Xen configurations, networking workloads in the Xen environment can suffer significant performance degradation. Our results identify the main sources of this overhead which should be the focus of Xen optimization efforts. We also show how our profiling toolkit was used to uncover and resolve performance bugs that we encountered in our experiments which caused unexpected application behavior."}
{"_id": "3f073fb8d4db9c7d53478c59e4709e4d5edbe942", "title": "Adaptive Courseware: A Literature Review", "text": "This paper gives a detailed description of how adaptivity is realized in e-learning systems, with a focus on adaptivity of courseware. The review aims to answer three basic questions on adaptive courseware: what has been adapted, how and why. Researchers have tried to adapt according to these different elements: student knowledge, behaviour, interests, preferences, learning style, cognitive style, goals, courseware presentation, navigation, annotation, sequencing and knowledge testing. An historical background of adaptivity is described: from word definition, through adaptive instructions and adaptive systems, all the way to adaptive courseware. The biggest challenge was to find the most prominent representatives of adaptive systems in courseware generation that were selected after an exhaustive search through relevant scientific paper databases. Each prevailing system is briefly described and the student features, to which the system adapts, are highlighted, together with the level of adaptation. This paper aims to provide important information to researchers, educators and software developers of computer-based educational software ranging from elearning systems in general to intelligent tutoring systems in particular. Such an analysis has not been done so far, especially in a way that adaptive systems are described uniformly. Finally, a comparative analysis of those systems and conclusions and demands about the ultimate adaptive system is given. This paper can be used as a guide for making decisions about what to adapt, how and why, while designing an adaptive e-learning system."}
{"_id": "e15705c5af16304d127caf57c57ce0fecb8245bf", "title": "Influence of Rectifiers on High-Speed Permanent Magnet Generator Electromagnetic and Temperature Fields in Distributed Power Generation Systems", "text": "A power converter is very necessary in the system of high-speed permanent magnet generators (HSPMG), since the frequency of the output electrical energy is generally hundreds of Hertz or even more than 1000 Hz. However, the converter with switches in fast on-off transitions will have some effects on the generator and, therewith, some current harmonics will be also induced in the device. Taking a 117 kW 60 000 r/min HSPMG as an example, the harmonics in this generator were analyzed. And then, the voltage harmonic content, the current harmonic content, and the current total harmonic distortion could be obtained. In addition, stator core losses and rotor eddy current losses were analyzed comparatively, and the influence of the converter on the generator power factor was further studied. Based on the 3-D coupling field between fluid and temperature, the temperature distributions were obtained when the generator was connected to different loads, and the influence of the rectifier on the generator temperature field could be obtained, which was also compared with the test data."}
{"_id": "a246e6ab6b247eaecaff7d519903c9225e00d597", "title": "A Cross-Type Thermal Wind Sensor With Self-Testing Function", "text": "A 2-D smart flow sensor with self-testing function was designed, fabricated and tested in this paper. It is composed of cross-structure heaters and four symmetrically located sensing parts. When the flow angle changes in the clockwise direction, the temperature differences among the four parts, namely Mode 1 given by T13(T24) and Model 2 given by T12(T14) will be Sine(Cosine) functions of wind direction. Further study shows that the magnitude of mode 1 is 1.414 times that of Mode 2, and the phase of Mode 2 leads Mode 1 by 45\u00b0. By using the fixed phase gap between the two modes, the self-test function can be realized without additional components. In order to achieve a high sensitivity and robust sensor, thermal insulated substrate was introduced to fabricate calorimetric flow sensor instead of silicon wafers. The proposed sensor was fabricated on the Pyrex7740 substrate using single liftoff process. Finally, a wind tunnel test was carried out in constant power (CP) mode, and the test results show reasonable agreement with the simulation curves."}
{"_id": "c1c30c350dcb2d24ff1061f02112e5ed18fae190", "title": "Modeling the ownership of source code topics", "text": "Exploring linguistic topics in source code is a program comprehension activity that shows promise in helping a developer to become familiar with an unfamiliar software system. Examining ownership in source code can reveal complementary information, such as who to contact with questions regarding a source code entity, but the relationship between linguistic topics and ownership is an unexplored area. In this paper we combine software repository mining and topic modeling to measure the ownership of linguistic topics in source code. We conduct an exploratory study of the relationship between linguistic topics and ownership in source code using 10 open source Java systems. We find that classes that belong to the same linguistic topic tend to have similar ownership characteristics, which suggests that conceptually related classes often share the same owner(s). We also find that similar topics tend to share the same ownership characteristics, which suggests that the same developers own related topics."}
{"_id": "4fba1e4f2c6a4ca423a02c87e4dd4b554ef43929", "title": "Behavioural Finance : A Review and Synthesis", "text": "I provide a synthesis of the Behavioural finance literature over the past two decades. I review the literature in three parts, namely, (i) empirical and theoretical analyses of patterns in the cross-section of average stock returns, (ii) studies on trading activity, and (iii) research in corporate finance. Behavioural finance is an exciting new field because it presents a number of normative implications for both individual investors and CEOs. The papers reviewed here allow us to learn more about these specific implications."}
{"_id": "c4327457971686c16eebbc63a16738bc08ffa926", "title": "PATTERNS OF INFORMATION SEEKING ON THE WEB: A QUALITATIVE STUDY OF DOMAIN EXPERTISE AND WEB EXPERTISE", "text": "This research examines the pattern of Web information seeking in four groups of nurses with different combinations of domain expertise and Web expertise. Protocols were gathered as the nurses carried out information-seeking tasks in the domain of osteoporosis. Domain and Web novices searched breadth-first and did little or no evaluation of the results. Domain expert/Web novices also searched breadth-first but evaluated information more thoroughly using osteoporosis knowledge. Domain novice/Web experts searched in a mixed, breadth-first/depth-first pattern and attempted to evaluate information using general criteria. Domain expert/Web experts carried out depth-first searches, following deep trails of information and evaluated information based on the most varied and sophisticated criteria. The results suggest that there are distinct differences in searching patterns related to expertise. Implications of these findings and suggestions for future research are provided. _______________________ Christine A. Jenkins is a Solution Architect with Sprint Corporation. christine_jenkin@hotmail.com. Cynthia L. Corritore is a professor at the College of Business Administration, Creighton University, Omaha, NE 68178 USA. cindy@creighton.edu. Susan Wiedenbeck is a professor at the College of Information Science and Technology, Drexel University, Philadelphia, PA 19104 USA. susan.wiedenbeck@cis.drexel.edu."}
{"_id": "2df543e8d1e127a8e848fe709adc784b74da6e1c", "title": "A Softwarization Architecture for UAVs and WSNs as Part of the Cloud Environment", "text": "The development of Unmanned Aerial Vehicles (UAV) and wireless sensor network (WSN) applications depends on the availability of resources in UAVs and sensor nodes. In traditional applications, UAVs obtain the required data from specific sensors. However, this tightly coupled architecture restricts the usage of the infrastructure for specific applications. This paper proposes a softwarization architecture for UAVs and WSNs. In addition, the higher layers are proposed to be part of the cloud to benefit from the various cloud opportunities. The proposed architecture is based on the network softwarization concept and the Software Defined Networks (SDN) concept along with the Network Functional Virtualization (NFV) technologies. These concepts are associated with decoupling the hardware infrastructure from the control layer that virtualizes the infrastructure resources for the higher layers. The architecture is illustrated with an agricultural example for a cooperative system of a WSN and UAVs. We implement a prototype system that consists of a sensing node, UAV, WSN controller, UAV controller, and orchestration layer to provide a proof of concept for the proposed architecture."}
{"_id": "d838069b832a15ff73bbab3d33d9eeb93c19b54b", "title": "A Comprehensive Study for Software Testing and Test Cases Generation Paradigms", "text": "Software testing is accounted to be an essential part in software development life cycle in terms of cost and manpower, where its total cost is considerable high. Consequently, many studies [48] have been conducted to minimize the associated cost and human effort to fix bugs and errors, and to improve the quality of testing process by automatically generating test cases. Test cases can be generated from different phases (requirement phase, design phase and after development phase). Though, test case generation at early stages is more effective rather than that after development, where time and effort used for finding and fixing errors and bugs is less than that after development. At later stage, fixing errors results in enormous code correction, consuming a lot of time and effort. In this paper, we study the different paradigms of testing techniques for generating test cases, where we investigate their coverage and associated capabilities. We finally propose a preliminary model for a generic automated test cases generation."}
{"_id": "95f6f034fb772fdbbcaffb36f94b2b5e6e1060ae", "title": "Violent Crime Rate Studies in Philosophical Context : A Destructive Testing Approach to Heat and Southern Culture ofViolence Effects", "text": "The logic behind the translation of conceptual hypotheses into testable propositions was illustrated with the heat hypothesis, The destructive testing philosophy was introduced and applied, This consists of first showing that a predicted empirical relation exists, then attempting to break that relation. by adding competitor variables. The key question in destructive testing is \"How difficult was it to break the relation?'~ This approach was used to analyze the heat effect on violent crime rates (Study I ) and on White violent crime arrest rates (Study 2) in U.S. cities. One competitor variable was the particular focus of analysis: southern culture of violence. The heat hypothesis was supported by highly significant correlations between the warmth of a city and its violence rate. This heat effect survived multiple destructive tests. Some support for the southern culture effect was also found, but this effect was more easily broken."}
{"_id": "729b220b3f2de47f32c9454cd3814627fef141e0", "title": "2D Color Barcodes for Mobile Phones", "text": "We propose a new high capacity color barcode, named HCC2D (High Capacity Colored 2-Dimensional), which use colors to increase the barcode data density. The introduction and recognition of colored modules poses some new and non-trivial computer vision challenges, such as handling the color distortions introduced by the hardware equipment that realizes the Print&Scan process. We developed a prototype for generating and reading the HCC2D code format, both on desktops (Linux and Windows platforms) and on mobile phones (Android platforms). We tested this prototype in many experiments considering different operating scenarios and data densities, and compared it to known 2-dimensional barcodes."}
{"_id": "1bdb595caa3fd39811fa777c3815bec2f7b21697", "title": "Fast Computation of Graph Kernels", "text": "Using extensions of linear algebra concepts to Reproducing Kernel Hilbert Spaces (RKHS), we define a unifying framework for random walk kernels on graphs. Reduction to a Sylvester equation allows us to compute many of these kernels in O(n) worst-case time. This includes kernels whose previous worst-case time complexity was O(n), such as the geometric kernels of G\u00e4rtner et al. [1] and the marginal graph kernels of Kashima et al. [2]. Our algebra in RKHS allow us to exploit sparsity in directed and undirected graphs more effectively than previous methods, yielding sub-cubic computational complexity when combined with conjugate gradient solvers or fixed-point iterations. Experiments on graphs from bioinformatics and other application domains show that our algorithms are often more than 1000 times faster than existing approaches."}
{"_id": "3c30aa7f7b075dcc325d9c45173d4642016b85b6", "title": "Parallel and Distributed Data Pipelining with KNIME", "text": "In recent years a new category of data analysis applications have evolved, known as data pipelining tools, which enable even nonexperts to perform complex analysis tasks on potentially huge amounts of data. Due to the complex and computing intensive analysis processes and methods used, it is often neither sufficient nor possible to simply rely on the increase of performance of single processors. Promising solutions to this problem are parallel and distributed approaches that can accelerate the analysis process. In this paper we discuss the parallel and distribution potential of pipelining tools by demonstrating several parallel and distributed implementations in the open source pipelining platform KNIME. We verify the practical applicability in a number of real world experiments."}
{"_id": "37fa040ec0c4bc1b85f3ca2929445f3229ed7f72", "title": "A Neural Representation of Sketch Drawings", "text": "We present sketch-rnn, a recurrent neural network (RNN) able to construct stroke-based drawings of common objects. The model is trained on thousands of crude human-drawn images representing hundreds of classes. We outline a framework for conditional and unconditional sketch generation, and describe new robust training methods for generating coherent sketch drawings in a vector format."}
{"_id": "a2cf2c5c04a5fa7d9926d195f6d18bb6ecfb2e0a", "title": "Multi-Document Summarization with GIST EXTER", "text": "This paperpresentsthe architectureandthe multidocument summarizationtechniquesimplementedin the GISTEXTER system.The paperpresentsanalgorithmfor producing incrementalmulti-document summariesif extractiontemplatesof goodquality areavailable. An empirical methodof generatingad-hoctemplatesthat can be populated with information extractedfrom texts by automatically acquiredextractionpatternsis alsopresented . The resultsof GISTEXTER in the DUC-2001evaluations account for the advantagesof usingthetechniquespresentedin this paper ."}
{"_id": "b2b06560a1cd5f9c8c808866a77d583e26c999da", "title": "Robust Matrix Elastic Net based Canonical Correlation Analysis: An Effective Algorithm for Multi-View Unsupervised Learning", "text": "This paper presents a robust matrix elastic net based canonical correlation analysis (RMEN-CCA) for multiple view unsupervised learning problems, which emphasizes the combination of CCA and the robust matrix elastic net (RMEN) used as coupled feature selection. The RMEN-CCA leverages the strength of the RMEN to distill naturally meaningful features without any prior assumption and to measure effectively correlations between different \u2019views\u2019. We can further employ directly the kernel trick to extend the RMEN-CCA to the kernel scenario with theoretical guarantees, which takes advantage of the kernel trick for highly complicated nonlinear feature learning. Rather than simply incorporating existing regularization minimization terms into CCA, this paper provides a new learning paradigm for CCA and is the first to derive a coupled feature selection based CCA algorithm that guarantees convergence. More significantly, for CCA, the newly-derived RMEN-CCA bridges the gap between measurement of relevance and coupled feature selection. Moreover, it is nontrivial to tackle directly the RMEN-CCA by previous optimization approaches derived from its sophisticated model architecture. Therefore, this paper further offers a bridge between a new optimization problem and an existing efficient iterative approach. As a consequence, the RMEN-CCA can overcome the limitation of CCA and address large-scale and streaming data problems. Experimental results on four popular competing datasets illustrate that the RMEN-CCA performs more effectively and efficiently than do state-of-the-art approaches."}
{"_id": "59b5c521de0cb4e3e09aabae8248699a59e9d40c", "title": "Mars Sample Return campaign status", "text": "The proposed Mars Sample Return (MSR) Campaign would be the next big step in the Mars Exploration Program (MEP). Identified by the Planetary Decadal Survey of 2011 as the highest science priority for large missions, the Mars Program has been working in partnership with the European Space Agency (ESA) to define a campaign to return a surface sample of martian surface material to Earth for analysis. As currently envisioned, the MSR campaign concept consists of a series of flight missions to collect and return the sample, and a ground based project to develop a sample receiving facility. The first mission in this proposed campaign would be baselined for launch in 2018, and would consist of a rover capable of scientifically selecting and caching the sample for retrieval by a later mission. Subsequent missions would retrieve the cache, launch it into orbit around Mars, where it would be captured by an orbiter and returned to the Earth. This paper discusses the current status of the MSR Campaign architecture definition and requirements development and near term plans for technical definition and review. Also discussed are selected design drivers and proposed key requirements which would have to be addressed in the definition of the campaign.1, 2"}
{"_id": "58bb338cab40d084321f6d1dd7f4512896249566", "title": "Acceleration of Deep Neural Network Training with Resistive Cross-Point Devices: Design Considerations", "text": "In recent years, deep neural networks (DNN) have demonstrated significant business impact in large scale analysis and classification tasks such as speech recognition, visual object detection, pattern extraction, etc. Training of large DNNs, however, is universally considered as time consuming and computationally intensive task that demands datacenter-scale computational resources recruited for many days. Here we propose a concept of resistive processing unit (RPU) devices that can potentially accelerate DNN training by orders of magnitude while using much less power. The proposed RPU device can store and update the weight values locally thus minimizing data movement during training and allowing to fully exploit the locality and the parallelism of the training algorithm. We evaluate the effect of various RPU device features/non-idealities and system parameters on performance in order to derive the device and system level specifications for implementation of an accelerator chip for DNN training in a realistic CMOS-compatible technology. For large DNNs with about 1 billion weights this massively parallel RPU architecture can achieve acceleration factors of 30, 000 \u00d7 compared to state-of-the-art microprocessors while providing power efficiency of 84, 000 GigaOps\u2215s\u2215W. Problems that currently require days of training on a datacenter-size cluster with thousands of machines can be addressed within hours on a single RPU accelerator. A system consisting of a cluster of RPU accelerators will be able to tackle Big Data problems with trillions of parameters that is impossible to address today like, for example, natural speech recognition and translation between all world languages, real-time analytics on large streams of business and scientific data, integration, and analysis of multimodal sensory data flows from a massive number of IoT (Internet of Things) sensors."}
{"_id": "09bdbdf186e24db2cf11c4c1006c718478c10b3f", "title": "On-line cursive script recognition using time-delay neural networks and hidden Markov models", "text": "We present a writer-independent system for online handwriting recognition that can handle a variety of writing styles including cursive script and handprinting. The input to our system contains the pen trajectory information, encoded as a time-ordered sequence of feature vectors. A time-delay neural network is used to estimate a posteriori probabilities for characters in a word. A hidden Markov model segments the word in a way that optimizes the global word score, using a dictionary in the process. A geometrical normalization scheme and a fast but efficient dictionary search are also presented. Trained on 20 k words from 59 writers, using a 25 k-word dictionary, our system reached recognition rates of 89% for characters and 80% for words on test data from a disjoint set of writers."}
{"_id": "a695c1493aab6a4aa0a22491e5c85d5bdea90cf0", "title": "Force of Habit and Information Systems Usage: Theory and Initial Validation", "text": "Over the last two decades, information systems (IS) research has primarily focused on people\u2019s conscious (intentional) behavior when trying to explain and predict IS usage. Consequently, almost no research has investigated the potential importance of subconscious (automatic) behaviors, also known as habits. This study represents a first step toward validating the idea that one can add explanatory power to a behavioral model such as Ajzen\u2019s [1985] theory of planned behavior (TPB) by including the habit construct. We conducted a two-stage questionnaire-based survey involving two different groups of students who had access to a sophisticated internet-based communication tool (IBCT). These data were used to test a behavioral model integrating theoretical constructs of TPB and a relevant subset of Triandis\u2019 [1980] behavioral framework. Our findings highlight the importance of considering both conscious (intentions) and subconscious (habits) factors in explaining usage behavior. Furthermore, we share our observations about antecedents of IBCT usage in the educational context. Implications for practice and research are discussed."}
{"_id": "897343e1a63761c7dd1b156f2d92ac07e6cd1fa7", "title": "Interacting with in-vehicle systems: understanding, measuring, and evaluating attention", "text": "In-vehicle systems research is becoming a significant field as the market for in-vehicle systems continue to grow. As a consequence, researchers are increasingly concerned with opportunities and limitations of HCI in a moving vehicle. Especially aspects of attention constitute a challenge for in-vehicle systems development. This paper seeks to remedy this by defining and exemplifying attention understandings. 100 papers were classified in a two-fold perspective; under what settings are in-vehicle systems evaluated and how is driver attention measured in regard to in-vehicle systems HCI. A breakdown of the distribution of driving settings and measures is presented and the impact of driver attention is discussed. The classification revealed that most of the studies were conducted in driving simulators and real traffic driving, while lateral and longitudinal control and eye behaviour were the most used measures. Author"}
{"_id": "c81d111aa42569aadd48eddef0f5d293067e711d", "title": "Chemi-net: a graph convolutional network for accurate drug property prediction", "text": "Absorption, distribution, metabolism, and excretion (ADME) studies are critical for drug discovery. Conventionally, these tasks, together with other chemical property predictions, rely on domain-specific feature descriptors, or fingerprints. Following the recent success of neural networks, we developed Chemi-Net, a completely data-driven, domain knowledge-free, deep learning method for ADME property prediction. To compare the relative performance of Chemi-Net with Cubist, one of the popular machine learning programs used by Amgen, a large-scale ADME property prediction study was performed on-site at Amgen. The results showed that our deep neural network method improved current methods by a large margin. We foresee that the significantly increased accuracy of ADME prediction seen with Chemi-Net over Cubist will greatly accelerate drug discovery."}
{"_id": "4573ec8eeb5ed4a528c6044744e48d6c2c8c7abb", "title": "Threatening faces and social anxiety: a literature review.", "text": "A threatening facial expression is a potent social sign of hostility or dominance. During the past 20 years, photographs of threatening faces have been increasingly included as stimuli in studies with socially anxious participants, based on the hypothesis that a threatening face is especially salient to people with fears of social interaction or negative evaluation. The purpose of this literature review is to systematically evaluate the accumulated research and suggest possible avenues for further research. The main conclusion is that photographs of threatening faces engage a broad range of perceptual processes in socially anxious participants, particularly when exposure times are very short."}
{"_id": "c9293af4e528e6d20e61f21064a44338ed3b70dd", "title": "A 1 . 5V , 10-bit , 14 . 3-MS / s CMOS Pipeline Analog-to-Digital Converter", "text": "A 1.5-V, 10-bit, 14.3-MS/s pipeline analog-to-digital converter was implemented in a 0.6m CMOS technology. Emphasis was placed on observing device reliability constraints at low voltage. MOS switches were implemented without lowthreshold devices by using a bootstrapping technique that does not subject the devices to large terminal voltages. The converter achieved a peak signal-to-noise-and-distortion ratio of 58.5 dB, maximum differential nonlinearity of 0.5 least significant bit (LSB), maximum integral nonlinearity of 0.7 LSB, and a power consumption of 36 mW."}
{"_id": "27fca2a7c2d565be983b4a786194e92bdc3d6ac7", "title": "Remote Sensing Image Classification Based on Stacked Denoising Autoencoder", "text": "Focused on the issue that conventional remote sensing image classification methods have run into the bottlenecks in accuracy, a new remote sensing image classification method inspired by deep learning is proposed, which is based on Stacked Denoising Autoencoder. First, the deep network model is built through the stacked layers of Denoising Autoencoder. Then, with noised input, the unsupervised Greedy layer-wise training algorithm is used to train each layer in turn for more robust expressing, characteristics are obtained in supervised learning by Back Propagation (BP) neural network, and the whole network is optimized by error back propagation. Finally, Gaofen-1 satellite (GF-1) remote sensing data are used for evaluation, and the total accuracy and kappa accuracy reach 95.7% and 0.955, respectively, which are higher than that of the Support Vector Machine and Back Propagation neural network. The experiment results show that the proposed method can effectively improve the accuracy of remote sensing image classification."}
{"_id": "c4f08b29fe95183395f7fca6a85bbdf8c2b605f1", "title": "A wearable fingertip haptic device with 3 DoF asymmetric 3-RSR kinematics", "text": "A novel wearable haptic device for modulating skin stretch at the fingertip is presented. Rendering of skin stretch in 3 degrees of freedom (DoF), with contact - no contact capabilities, was implemented through rigid parallel kinematics. The novel asymmetrical three revolute-spherical-revolute (3-RSR) configuration allowed compact dimensions with minimum encumbrance of the hand workspace and minimum inter-finger interference. A differential method for solving the non-trivial inverse kinematics is proposed and implemented in real time for controlling the position of the skin tactor. Experiments involving the grasping of a virtual object were conducted using two devices (thumb and index fingers) in a group of 4 subjects: results showed that participants performed the grasping task more precisely and with grasping forces closer to the expected natural behavior when the proposed device provided haptic feedback."}
{"_id": "d589265e932f2d410fdd2ac5fa6c80e3649c49c8", "title": "Deformable Classifiers", "text": "Geometric variations of objects, which do not modify the object class, pose a major challenge for object recognition. These variations could be rigid as well as non-rigid transformations. In this paper, we design a framework for training deformable classifiers, where latent transformation variables are introduced, and a transformation of the object image to a reference instantiation is computed in terms of the classifier output, separately for each class. The classifier outputs for each class, after transformation, are compared to yield the final decision. As a by-product of the classification this yields a transformation of the input object to a reference pose, which can be used for downstream tasks such as the computation of object support. We apply a two-step training mechanism for our framework, which alternates between optimizing over the latent transformation variables and the classifier parameters to minimize the loss function. We show that multilayer perceptrons, also known as deep networks, are well suited for this approach and achieve state of the art results on the rotated MNIST and the Google Earth dataset, and produce competitive results on MNIST and CIFAR-10 when training on smaller subsets of training data."}
{"_id": "1531a0fddacbe55b389278abd029fb311acb1b5b", "title": "Evolutionary Delivery versus the \"waterfall model\"", "text": "The conventional wisdom of planning software engineering projects, using the widely cited \"waterfall model\" is not the only useful software development process model. In fact, the \"waterfall model\" may be unrealistic, and dangerous to the primary objectives of any software project.The alternative model, which I choose to call \"evolutionary delivery\" is not widely taught or practiced yet. But there is already more than a decade of practical experience in using it. In various forms. It is quite clear from these experiences that evolutionary delivery is a powerful general tool for both software development and associated systems development.Almost all experienced software developers do make use of some of the ideas in evolutionary development at one time or another. But, this is often unplanned, informal and it is an incomplete exploitation of this powerful method. This paper will try to expose the theoretical and practical aspects of the method in a fuller perspective. We need to learn the theory fully, so that we can apply and learn it completely."}
{"_id": "9f0f4ea0ec0701d0ff2738f0719ee52589cc3717", "title": "Deep Generative Models with Learnable Knowledge Constraints", "text": "The broad set of deep generative models (DGMs) has achieved remarkable advances. However, it is often difficult to incorporate rich structured domain knowledge with the end-to-end DGMs. Posterior regularization (PR) offers a principled framework to impose structured constraints on probabilistic models, but has limited applicability to the diverse DGMs that can lack a Bayesian formulation or even explicit density evaluation. PR also requires constraints to be fully specified a priori, which is impractical or suboptimal for complex knowledge with learnable uncertain parts. In this paper, we establish mathematical correspondence between PR and reinforcement learning (RL), and, based on the connection, expand PR to learn constraints as the extrinsic reward in RL. The resulting algorithm is modelagnostic to apply to any DGMs, and is flexible to adapt arbitrary constraints with the model jointly. Experiments on human image generation and templated sentence generation show models with learned knowledge constraints by our algorithm greatly improve over base generative models."}
{"_id": "412b3ef02c85087e5f1721176114672c722b17a4", "title": "A Taxonomy of Deep Convolutional Neural Nets for Computer Vision", "text": "Traditional architectures for solving computer vision problems and the degree of success they enjoyed have been heavily reliant on hand-crafted features. However, of late, deep learning techniques have offered a compelling alternative \u2013 that of automatically learning problem-specific features. With this new paradigm, every problem in computer vision is now being re-examined from a deep learning perspective. Therefore, it has become important to understand what kind of deep networks are suitable for a given problem. Although general surveys of this fast-moving paradigm (i.e., deep-networks) exist, a survey specific to computer vision is missing. We specifically consider one form of deep networks widely used in computer vision \u2013 convolutional neural networks (CNNs). We start with \u201cAlexNet\u201d as our base CNN and then examine the broad variations proposed over time to suit different applications. We hope that our recipe-style survey will serve as a guide, particularly for novice practitioners intending to use deep-learning techniques for computer vision."}
{"_id": "fbc27038b9c111dad2851397047c6230ece79c23", "title": "Glaucoma-Deep : Detection of Glaucoma Eye Disease on Retinal Fundus Images using Deep Learning Detection of Glaucoma by Abbas", "text": "Detection of glaucoma eye disease is still a challenging task for computer-aided diagnostics (CADx) systems. During eye screening process, the ophthalmologists measures the glaucoma by structure changes in optic disc (OD), loss of nerve fibres (LNF) and atrophy of the peripapillary region (APR). In retinal images, the automated CADx systems are developed to assess this eye disease through segmentation-based hand-crafted features. Therefore in this paper, the convolutional neural network (CNN) unsupervised architecture was used to extract the features through multilayer from raw pixel intensities. Afterwards, the deep-belief network (DBN) model was used to select the most discriminative deep features based on the annotated training dataset. At last, the final decision is performed by softmax linear classifier to differentiate between glaucoma and non-glaucoma retinal fundus image. This proposed system is known as Glaucoma-Deep and tested on 1200 retinal images obtained from publically and privately available datasets. To evaluate the performance of Glaucoma-Deep system, the sensitivity (SE), specificity (SP), accuracy (ACC), and precision (PRC) statistical measures were utilized. On average, the SE of 84.50%, SP of 98.01%, ACC of 99% and PRC of 84% values were achieved. Comparing to state-of-the-art systems, the Nodular-Deep system accomplished significant higher results. Consequently, the Glaucoma-Deep system can easily recognize the glaucoma eye disease to solve the problem of clinical experts during eye-screening process on large-scale environments. Keywords\u2014Fundus imaging; glaucoma; diabetic retinopathy; deep learning; convolutional neural networks; deep belief network"}
{"_id": "7dee8be2a8eccb892253a94d2fcaa0aa9971cc54", "title": "On Laser Ranging Based on High-Speed/Energy Laser Diode Pulses and Single-Photon Detection Techniques", "text": "This paper discusses the construction principles and performance of a pulsed time-of-flight (TOF) laser radar based on high-speed (FWHM ~100 ps) and high-energy (~1 nJ) optical transmitter pulses produced with a specific laser diode working in an \u201cenhanced gain-switching\u201d regime and based on single-photon detection in the receiver. It is shown by analysis and experiments that single-shot precision at the level of 2W3 cm is achievable. The effective measurement rate can exceed 10 kHz to a noncooperative target (20% reflectivity) at a distance of > 50 m, with an effective receiver aperture size of 2.5 cm2. The effect of background illumination is analyzed. It is shown that the gating of the SPAD detector is an effective means to avoid the blocking of the receiver in a high-level background illumination case. A brief comparison with pulsed TOF laser radars employing linear detection techniques is also made."}
{"_id": "0f7fb0e9bd0e0e16fcfbea8c6667c9d8c13ebf72", "title": "Detecting Structural Similarities between XML Documents", "text": "In this paper we propose a technique for detecting the similarity in the structure of XML documents. The technique is based on the idea of representing the structure of an XML document as a time series in which each occurrence of a tag corresponds to a given impulse. By analyzing the frequencies of the corresponding Fourier transform, we can hence state the degree of similarity between documents. The efficiency and effectiveness of this approach are compelling when compared with traditional ones."}
{"_id": "85aeaa9bd5bd55082d19e211a379a17d0de4cb4c", "title": "Translating Learning into Numbers: A Generic Framework for Learning Analytics", "text": "With the increase in available educational data, it is expected that Learning Analytics will become a powerful means to inform and support learners, teachers and their institutions in better understanding and predicting personal learning needs and performance. However, the processes and requirements behind the beneficial application of Learning and Knowledge Analytics as well as the consequences for learning and teaching are still far from being understood. In this paper, we explore the key dimensions of Learning Analytics (LA), the critical problem zones, and some potential dangers to the beneficial exploitation of educational data. We propose and discuss a generic design framework that can act as a useful guide for setting up Learning Analytics services in support of educational practice and learner guidance, in quality assurance, curriculum development, and in improving teacher effectiveness and efficiency. Furthermore, the presented article intends to inform about soft barriers and limitations of Learning Analytics. We identify the required skills and competences that make meaningful use of Learning Analytics data possible to overcome gaps in interpretation literacy among educational stakeholders. We also discuss privacy and ethical issues and suggest ways in which these issues can be addressed through policy guidelines and best practice examples."}
{"_id": "2a99978d060b84a3397d130808cc7d32c50ce1ff", "title": "Trust-aware crowdsourcing with domain knowledge", "text": "The rise of social network and crowdsourcing platforms makes it convenient to take advantage of the collective intelligence to estimate true labels of questions of interest. However, input from workers is often noisy and even malicious. Trust is used to model workers in order to better estimate true labels of questions. We observe that questions are often not independent in real life applications. Instead, there are logical relations between them. Similarly, workers that provide answers are not independent of each other either. Answers given by workers with similar attributes tend to be correlated. Therefore, we propose a novel unified graphical model consisting of two layers. The top layer encodes domain knowledge which allows users to express logical relations using first-order logic rules and the bottom layer encodes a traditional crowdsourcing graphical model. Our model can be seen as a generalized probabilistic soft logic framework that encodes both logical relations and probabilistic dependencies. To solve the collective inference problem efficiently, we have devised a scalable joint inference algorithm based on the alternating direction method of multipliers. Finally, we demonstrate that our model is superior to state-of-the-art by testing it on multiple real-world datasets."}
{"_id": "a1a1c4fb58a2bc056a056795609a2be307b6b9bf", "title": "GORAM \u2013 Group ORAM for Privacy and Access Control in Outsourced Personal Records 1", "text": "Cloud storage has rapidly become a cornerstone of many IT infrastructures, constituting a seamless solution for the backup, synchronization, and sharing of large amounts of data. Putting user data in the direct control of cloud service providers, however, raises security and privacy concerns related to the integrity of outsourced data, the accidental or intentional leakage of sensitive information, the profiling of user activities and so on. Furthermore, even if the cloud provider is trusted, users having access to outsourced files might be malicious and misbehave. These concerns are particularly serious in sensitive applications like personal health records and credit score systems. To tackle this problem, we present GORAM, a cryptographic system that protects the secrecy and integrity of outsourced data with respect to both an untrusted server and malicious clients, guarantees the anonymity and unlinkability of accesses to such data, and allows the data owner to share outsourced data with other clients, selectively granting them read and write permissions. GORAM is the first system to achieve such a wide range of security and privacy properties for outsourced storage. In the process of designing an efficient construction, we developed two new, generally applicable cryptographic schemes, namely, batched zero-knowledge proofs of shuffle and an accountability technique based on chameleon signatures, which we consider of independent interest. We implemented GORAM in Amazon Elastic Compute Cloud (EC2) and ran a performance evaluation demonstrating the scalability and efficiency of our construction."}
{"_id": "4ad35e94ec1edf7de498bf5007c0fb90448ff0aa", "title": "Position Control of a Permanent Magnet DC Motor by Model Reference Adaptive Control", "text": "Model reference adaptive control (MRAC) is one of the various techniques of solving the control problem when the parameters of the controlled process are poorly known or vary during normal operation. To understand the dynamic behavior of a dc motor it is required to know its parameters; armature inductance and resistance (La, Ra), inertia of the rotor (Jm), motor constant (Km), friction coefficient (Bm), etc. To identify these parameters, some experiments should be performed. However, motor parameters change under operation according to several conditions. Therefore, the performance of controller, which has been designed considering constant motor parameters becomes poorer. For this reason, a model reference adaptive control method is proposed to control the position of a dc motor without requiring fixed motor parameters. Experimental results show how well this method controls the position of a permanent magnet dc motor."}
{"_id": "c86c3c47beec8a1db3b4d1c28f71502a6091f059", "title": "Solutions Strategies for Die Shift Problem in Wafer Level Compression Molding", "text": "Die shift problem that arises during the wafer molding process in embedded micro wafer level package fabrication was systematically analyzed and solution strategies were developed. A methodology to measure die shift was developed and applied to create maps of die shift on an 8 inch wafer. A total of 256 dies were embedded in an 8 inch mold compound wafer using compression molding. Thermal and cure shrinkages of mold compound are determined to be the primary reasons for die shift in wafer molding. Die shift value increases as the distance from the center of the wafer increases. Pre-compensation of die shift during pick and place is demonstrated as an effective method to control die shift. Applying pre-compensation method 99% of dies can be achieved to have die shift values of less than 40 \u03bcm. Usage of carrier wafer during wafer molding reduces the maximum die shift in a wafer from 633 \u03bcm to 79 \u03bcm. Die area/package area ratio has a strong influence on the die shift values. Die area/package area ratios of 0.81, 0.49, and 0.25 lead to maximum die shift values of 26, 76, and 97 \u03bc.m, respectively. Wafer molding using low coefficient of thermal expansion (7 \u00d7 10-6/\u00b0C) and low cure shrinkage (0.094%) mold compounds is demonstrated to yield maximum die shift value of 28 \u03bcm over the whole 8 inch wafer area."}
{"_id": "60611f3395ba44d0a386d6eebdd4d0a7277d5df0", "title": "Gender differences in participation and reward on Stack Overflow", "text": "Programming is a valuable skill in the labor market, making the underrepresentation of women in computing an increasingly important issue. Online question and answer platforms serve a dual purpose in this field: they form a body of knowledge useful as a reference and learning tool, and they provide opportunities for individuals to demonstrate credible, verifiable expertise. Issues, such as male-oriented site design or overrepresentation of men among the site\u2019s elite may therefore compound the issue of women\u2019s underrepresentation in IT. In this paper we audit the differences in behavior and outcomes between men and women on Stack Overflow, the most popular of these Q&A sites. We observe significant differences in how men and women participate in the platform and how successful they are. For example, the average woman has roughly half of the reputation points, the primary measure of success on the site, of the average man. Using an Oaxaca-Blinder decomposition, an econometric technique commonly applied to analyze differences in wages between groups, we find that most of the gap in success between men and women can be explained by differences in their activity on the site and differences in how these activities are rewarded. Specifically, 1) men give more answers than women and 2) are rewarded more for their answers on average, even when controlling for possible confounders such as tenure or buy-in to the site. Women ask more questions and gain more reward per question. We conclude with a hypothetical redesign of the site\u2019s scoring system based on these behavioral differences, cutting the reputation gap in half."}
{"_id": "18e7dde371cf17ca4089a4a59660483e70160b09", "title": "Named data networking", "text": "Named Data Networking (NDN) is one of five projects funded by the U.S. National Science Foundation under its Future Internet Architecture Program. NDN has its roots in an earlier project, Content-Centric Networking (CCN), which Van Jacobson first publicly presented in 2006. The NDN project investigates Jacobson's proposed evolution from today's host-centric network architecture (IP) to a data-centric network architecture (NDN). This conceptually simple shift has far-reaching implications for how we design, develop, deploy, and use networks and applications. We describe the motivation and vision of this new architecture, and its basic components and operations. We also provide a snapshot of its current design, development status, and research challenges. More information about the project, including prototype implementations, publications, and annual reports, is available on named-data.net."}
{"_id": "9828fe888d14bca6dd4577a104451dc2ef64f48c", "title": "A survey of information-centric networking", "text": "The information-centric networking (ICN) concept is a significant common approach of several future Internet research activities. The approach leverages in-network caching, multiparty communication through replication, and interaction models decoupling senders and receivers. The goal is to provide a network infrastructure service that is better suited to today\u00bfs use (in particular. content distribution and mobility) and more resilient to disruptions and failures. The ICN approach is being explored by a number of research projects. We compare and discuss design choices and features of proposed ICN architectures, focusing on the following main components: named data objects, naming and security, API, routing and transport, and caching. We also discuss the advantages of the ICN approach in general."}
{"_id": "1e7eb118ec15b75fec0cd546a70b86a40fe1e7ce", "title": "Chronos : Serverless Multi-User Chat Over NDN", "text": "Multi-user applications are commonly implemented using a centralized server. This paper presents a new design for multi-user chat applications (Chronos) that works in a distributed, serverless fashion over Named Data Networking. In Chronos, all participants share their views by exchanging the cryptographic digests of the chat room data set. A newly generated message causes a change of the digest at the message originator, which leads to retrieving the new data by all other participants in an efficient way and resynchronization of chat room views. Chronos does not have a single point of failure and eliminates traffic concentration problem of server-based implementations. We use simulations to evaluate and compare Chronos with a traditional serverbased chat room implementation. Our results demonstrate Chronos\u2019 robustness and efficiency in data dissemination. Chronos\u2019 approach of replacing centralized servers by distributed data synchronization can be applied to a variety of distributed applications to simplify design and ease deployment."}
{"_id": "205d9beb9499fe4eebb2ff008d3eb57e9f823622", "title": "NLSR: named-data link state routing protocol", "text": "This paper presents the design of the Named-data Link State Routing protocol (NLSR), a routing protocol for Named Data Networking (NDN). Since NDN uses names to identify and retrieve data, NLSR propagates reachability to name prefixes instead of IP prefixes. Moreover, NLSR differs from IP-based link-state routing protocols in two fundamental ways. First, NLSR uses Interest/Data packets to disseminate routing updates, directly benefiting from NDN's data authenticity. Second, NLSR produces a list of ranked forwarding options for each name prefix to facilitate NDN's adaptive forwarding strategies. In this paper we discuss NLSR's main design choices on (1) a hierarchical naming scheme for routers, keys, and routing updates, (2) a hierarchical trust model for routing within a single administrative domain, (3) a hop-by-hop synchronization protocol to replace the traditional network-wide flooding for routing update dissemination, and (4) a simple way to rank multiple forwarding options. Compared with IP-based link state routing, NLSR offers more efficient update dissemination, built-in update authentication, and native support of multipath forwarding."}
{"_id": "3b25f61954558439e7c18b524ea320851f457d3e", "title": "Interest flooding attack and countermeasures in Named Data Networking", "text": "Distributed Denial of Service (DDoS) attacks are an ongoing problem in today's Internet, where packets from a large number of compromised hosts thwart the paths to the victim site and/or overload the victim machines. In a newly proposed future Internet architecture, Named Data Networking (NDN), end users request desired data by sending Interest packets, and the network delivers Data packets upon request only, effectively eliminating many existing DDoS attacks. However, an NDN network can be subject to a new type of DDoS attack, namely Interest packet flooding. In this paper we investigate effective solutions to mitigate Interest flooding. We show that NDN's inherent properties of storing per packet state on each router and maintaining flow balance (i.e., one Interest packet retrieves at most one Data packet) provides the basis for effective DDoS mitigation algorithms. Our evaluation through simulations shows that the solution can quickly and effectively respond and mitigate Interest flooding."}
{"_id": "058c115e0bf6ab42f15414aead1076493d8727da", "title": "Interpolating and approximating implicit surfaces from polygon soup", "text": "This paper describes a method for building interpolating or approximating implicit surfaces from polygonal data. The user can choose to generate a surface that exactly interpolates the polygons, or a surface that approximates the input by smoothing away features smaller than some user-specified size. The implicit functions are represented using a moving least-squares formulation with constraints integrated over the polygons. The paper also presents an improved method for enforcing normal constraints and an iterative procedure for ensuring that the implicit surface tightly encloses the input vertices."}
{"_id": "c4138fe0757c3d9eee85c4bd7bdaf48001430bd7", "title": "Adaptive Spatial-Scale-Aware Deep Convolutional Neural Network for High-Resolution Remote Sensing Imagery Scene Classification", "text": "High-resolution remote sensing (HRRS) scene classification plays an important role in numerous applications. During the past few decades, a lot of remarkable efforts have been made to develop various methods for HRRS scene classification. In this paper, focusing on the problems of complex context relationship and large differences of object scale in HRRS scene images, we propose a deep CNN-based scene classification method, which not only enables to enhance the ability of spatial representation, but adaptively recalibrates channel-wise feature responses to suppress useless feature channels. We evaluated the proposed method on a publicly large-scale dataset with several state-of-the-art convolutional neural network (CNN) models. The experimental results demonstrate that the proposed method is effective to extract high-level category features for HRRS scene classification."}
{"_id": "55c241f76d0c115189da2eefd34e1af98631fcbe", "title": "Sentiment Analysis: From Opinion Mining to Human-Agent Interaction", "text": "The opinion mining and human-agent interaction communities are currently addressing sentiment analysis from different perspectives that comprise, on the one hand, disparate sentiment-related phenomena and computational representations, and on the other hand, different detection and dialog management methods. In this paper we identify and discuss the growing opportunities for cross-disciplinary work that may increase individual advances. Sentiment/opinion detection methods used in human-agent interaction are indeed rare and, when they are employed, they are not different from the ones used in opinion mining and consequently not designed for socio-affective interactions (timing constraint of the interaction, sentiment analysis as an input and an output of interaction strategies). To support our claims, we present a comparative state of the art which analyzes the sentiment-related phenomena and the sentiment detection methods used in both communities and makes an overview of the goals of socio-affective human-agent strategies. We propose then different possibilities for mutual benefit, specifying several research tracks and discussing the open questions and prospects. To show the feasibility of the general guidelines proposed we also approach them from a specific perspective by applying them to the case of the Greta embodied conversational agents platform and discuss the way they can be used to make a more significative sentiment analysis for human-agent interactions in two different use cases: job interviews and dialogs with museum visitors."}
{"_id": "f38621de44b7a1ea27c5441d6bad9c5b278e92d0", "title": "Floor Layout Planning Using Artificial Intelligence Technique", "text": "In the era of e-commerce while buying furniture online the customers obviously feel the need for visual representation of the arrangement of their furniture. Even when doing interiors of the house it's difficult to just rely on assumptions about best layouts possible and professional help may become quite expensive. In this project, we make use of Genetic Algorithm (GA) which is an Artificial Intelligence technique to display various optimal arrangements of furniture. The basic idea behind using GA is developing an evolutionary design model. This is done by generating chromosomes for each possible solution and then performing a crossover between them in each generation until an optimum fitness function is reached. Modification in chromosome representation may also be done for better results. The proposed system will generate different layout designs for the furniture keeping in consideration the structure of a master bedroom."}
{"_id": "7737bb27dce71774457c08b597afa73895202b62", "title": "Generation of Panoramic View from 360 Degree Fisheye Images Based on Angular Fisheye Projection", "text": "this paper proposes a algorithm to generate the panoramic view from 360 \u00a1\u00e3fisheye images based on angular fisheye projection model. Firstly, the image plane will be mapped to the view plane, the sphere plane, to obtain the coordinates of spherical points. Then use the angular fisheye projection model to get fisheye images' radius. Secondly, calculate the coordinates of fisheye images' points with the radius of fisheye images and the coordinates of spherical points. Finally, map the points of fisheye images to the target plane by the way of backward mapping. Experiments show that this algorithm is simple and fast, and can obtain the ideal images."}
{"_id": "807d33eb53dcec2eb0dfb4e9d51520a2bd3c317a", "title": "Accident prediction model for railway-highway interfaces.", "text": "Considerable past research has explored relationships between vehicle accidents and geometric design and operation of road sections, but relatively little research has examined factors that contribute to accidents at railway-highway crossings. Between 1998 and 2002 in Korea, about 95% of railway accidents occurred at highway-rail grade crossings, resulting in 402 accidents, of which about 20% resulted in fatalities. These statistics suggest that efforts to reduce crashes at these locations may significantly reduce crash costs. The objective of this paper is to examine factors associated with railroad crossing crashes. Various statistical models are used to examine the relationships between crossing accidents and features of crossings. The paper also compares accident models developed in the United States and the safety effects of crossing elements obtained using Korea data. Crashes were observed to increase with total traffic volume and average daily train volumes. The proximity of crossings to commercial areas and the distance of the train detector from crossings are associated with larger numbers of accidents, as is the time duration between the activation of warning signals and gates. The unique contributions of the paper are the application of the gamma probability model to deal with underdispersion and the insights obtained regarding railroad crossing related vehicle crashes."}
{"_id": "4a358092d4771a33a26b3a8ffae3b0a3fa5aa2c1", "title": "Determination of spinal curvature from scoliosis X-ray images using K-means and curve fitting for early detection of scoliosis disease", "text": "One of the disease that require X-ray diagnosis is scoliosis. Early detection of scoliosis is important to do for anyone. From the early detection information, the doctor may take the firts step to further treatment quickly. Determination of spinal curvature is a first step method that used to measure how severe the degree of scoliosis. The severity degree of scoliosis can be assess by using Cobb angle. Therefore, by approximate the spinal curvature, we can approximate the cobb angle too. From previous work that interobserver measurement value may reach 11.8\u00b0 and intraobserver measurement error is 6\u00b0. So, as far as the cobb angle measuring, the subjectivity aspect is the natural thing and can be tolerated until now. This research propose an algorithm how to define spinal curvature with the aid of a computer in digital X-ray image quickly but has a standard error that can be tolerated. The preprocessing has been done by canny edge detection. The k-means clustering algorithm can detect the centroid point after segmentation preprocessing of the spinal segment and polynomial curve fitting will be used in the process for determining the spinal curve. From the spinal curvature information, the scoliosis curve can be classified into 4 condition, normal, mild, moderate, and severe scoliosis."}
{"_id": "9eb419d80f016905526e92ac1990ba8027e295c4", "title": "A study of attention-based neural machine translation model on Indian languages", "text": "Neural machine translation (NMT) models have recently been shown to be very successful in machine translation (MT). The use of LSTMs in machine translation has significantly improved the translation performance for longer sentences by being able to capture the context and long range correlations of the sentences in their hidden layers. The attention model based NMT system has become state-of-the-art, performing equal or better than other statistical MT approaches. In this paper, we studied the performance of the attention-model based NMT system on the Indian language pair, Hindi and Bengali. We analysed the types of errors that occur in morphologically rich languages when there is a scarcity of large parallel training corpus. We then carried out certain post-processing heuristic steps to improve the quality of the translated statements and suggest further measures."}
{"_id": "ae5df6f13f63b4d01505823f455d4ebb9d82ff1e", "title": "A Voltage Efficient PMOS Charge Pump Architecture", "text": "In this paper a charge pump circuit is presented: it is based on PMOS pass transistors with dynamic control of the gate and body voltages. By controlling the gate and the bulk of each PMOS pass transistor, the voltage loss due to the device threshold is removed and the charge is pumped from one stage to the other with negligible voltage drop. Furthermore, the overdrive voltage of the pass transistors grows progressively from the first to the last stage. Hence, a larger output voltage can be achieved compared to conventional charge pumps, still retaining a simple two-phase clocking scheme, along with the Dickson basic architecture. The prototype was fabricated using the STMicroelectronics T10 90nm CMOS process. Measurements results are also presented confirming the validity of the design"}
{"_id": "cca8a45987269f3a40e4e2adf64d391518985c68", "title": "A novel planer transition structure from micro-strip line to waveguide", "text": "A planer micro-strip to waveguide transition structure is given in this paper. Coupled patch on the bottom of the substrate is excited by the micro-strip feed-line on the top, which realizes the transformation from the TE10 mode in the waveguide to the quasi-TEM mode on the micro-strip line. Modeling and simulation of the structure are conducted for the design by using electromagnetic simulation software HFSS. The measurement of the manufactured structure exhibits an insertion loss less than 0.5dB within 33GHz~37GHz. Compared to traditional waveguide E-plane probe or H-plane probe transition, this design has properties of low profile and easy integration."}
{"_id": "76bcb14ad96df6811d2263db6b8b62659e239763", "title": "Single Shot Text Detector with Regional Attention", "text": "We present a novel single-shot text detector that directly outputs word-level bounding boxes in a natural image. We propose an attention mechanism which roughly identifies text regions via an automatically learned attentional map. This substantially suppresses background interference in the convolutional features, which is the key to producing accurate inference of words, particularly at extremely small sizes. This results in a single model that essentially works in a coarse-to-fine manner. It departs from recent FCN-based text detectors which cascade multiple FCN models to achieve an accurate prediction. Furthermore, we develop a hierarchical inception module which efficiently aggregates multi-scale inception features. This enhances local details, and also encodes strong context information, allowing the detector to work reliably on multi-scale and multi-orientation text with single-scale images. Our text detector achieves an F-measure of 77% on the ICDAR 2015 benchmark, advancing the state-of-the-art results in [18, 28]. Demo is available at: http://sstd.whuang.org/."}
{"_id": "3fbf6da0408cc06227e728d2389cdd005bdee827", "title": "A Novel Approach for Addressing Wandering Off Elderly Using Low Cost Passive RFID Tags", "text": "Wandering (e.g. elopement) by elderly persons at acute hospitals and nursing homes poses a significant problem to providing patient care as such incidents can lead to injury and even accidental morbidity. These problems are particularly serious given aging populations around the world. While various technologies exit (such as door alarms), they are expensive and the reliability of such systems have not been evaluated in the past. In this article we propose two novel methods for a very low cost solution to address the problem of wandering off patients using body worn low cost passive Radio Frequency Identification (RFID) tags using phase based measurements. Our approach requires no modification to the air interface protocols, firmware or hardware. Results from extensive experiments show that: i) the proposed algorithms can accurately identify whether a person is moving into or out of, for example, a room; and ii) it can be implemented in real-time to develop a low cost wandering off alarm."}
{"_id": "90c59c36389c3b42d72c23c8828a106b624a3317", "title": "Cybersecurity Games and Investments: A Decision Support Approach", "text": "In this paper we investigate how to optimally invest in cybersecurity controls. We are particularly interested in examining cases where the organization suffers from an underinvestment problem or inefficient spending on cybersecurity. To this end, we first model the cybersecurity environment of an organization. We then model non-cooperative cybersecurity control-games between the defender which abstracts all defense mechanisms of the organization and the attacker which can exploit different vulnerabilities at different network locations. To implement our methodology we use the SANS Top 20 Critical Security Controls and the 2011 CWE/SANS top 25 most dangerous software errors. Based on the profile of an organization, which forms its preferences in terms of indirect costs, its concerns about different kinds of threats and the importance of the assets given their associated risks we derive the Nash Equilibria of a series of control-games. These game solutions are then handled by optimization techniques, in particular multi-objective, multiple choice Knapsack to determine the optimal cybersecurity investment. Our methodology provides security effective and cost efficient solutions especially against commodity attacks. We believe our work can be used to advise security managers on how they should spend an available cybersecurity budget given their organization profile."}
{"_id": "0106220c414b7551229f67b33a605e18a4edf797", "title": "Parallel and distributed simulation", "text": "Research and development efforts in the parallel and distributed simulation field over the last 15 years has progressed, largely independently, in two separate camps: the largely academic high performance parallel and distributed (discrete event) simulation (PADS) community, and the DoD-centered distributed interactive simulation (DIS) community. This tutorial gives an overview and comparison of work in these two areas, emphasizing issues related to distributed execution where these fields have the most overlap. Differences in the fundamental assumptions routinely used within each community are contrasted, followed by overviews of work in each community."}
{"_id": "32527d9fcbfb0c84daf715d7e9a375f647b33c2c", "title": "Predicting online grocery buying intention: a comparison of the theory of reasoned action and the theory of planned behavior", "text": ""}
{"_id": "1b3c0d8e829bebfe71f311757e68311912b1ebaf", "title": "Visual Exploration of Climate Variability Changes Using Wavelet Analysis", "text": "Due to its nonlinear nature, the climate system shows quite high natural variability on different time scales, including multiyear oscillations such as the El Nino southern oscillation phenomenon. Beside a shift of the mean states and of extreme values of climate variables, climate change may also change the frequency or the spatial patterns of these natural climate variations. Wavelet analysis is a well established tool to investigate variability in the frequency domain. However, due to the size and complexity of the analysis results, only few time series are commonly analyzed concurrently. In this paper we will explore different techniques to visually assist the user in the analysis of variability and variability changes to allow for a holistic analysis of a global climate model data set consisting of several variables and extending over 250 years. Our new framework and data from the IPCC AR4 simulations with the coupled climate model ECHAM5/MPI-OM are used to explore the temporal evolution of El Nino due to climate change."}
{"_id": "40ae67c49a15a47d46cdc918eb95f7fb35590741", "title": "Highly reliable M1X MLC NAND flash memory cell with novel active air-gap and p+ poly process integration technologies", "text": "Our Middle-1X nm MLC NAND (M1X) flash cell is intensively characterized with respect to reliability and manufacturability. For the first time, the novel active air-gap technology is applied to alleviate the drop of channel boosting potential of program inhibition mode, BL-BL interference is reduced to our 2y nm node level by this novel integration technology. Furthermore, it also relaxes the effect of process variation like EFH (Effective Field oxide Height) on cell Vt distribution. Better endurance and retention characteristics can be obtained by p+ doped poly gate. By optimization of active air-gap profile and poly doping level, M1X nm MLC NAND flash memory has been successfully implemented with superior manufacturability and acceptable reliability."}
{"_id": "269ed5ba525519502123b58472e069d77c5bda14", "title": "Non-sentential Question Resolution using Sequence to Sequence Learning", "text": "An interactive Question Answering (QA) system frequently encounters non-sentential (incomplete) questions. These non-sentential questions may not make sense to the system when a user asks them without the context of conversation. The system thus needs to take into account the conversation context to process the incomplete question. In this work, we present a recurrent neural network (RNN) based encoder decoder network that can generate a complete (intended) question, given an incomplete question and conversation context. RNN encoder decoder networks have been show to work well when trained on a parallel corpus with millions of sentences, however it is extremely hard to obtain conversation data of this magnitude. We therefore propose to decompose the original problem into two separate simplified problems where each problem focuses on an abstraction. Specifically, we train a semantic sequence model to learn semantic patterns, and a syntactic sequence model to learn linguistic patterns. We further combine syntactic and semantic sequence models to generate an ensemble model. Our model achieves a BLEU score of 30.15 as compared to 18.54 using a standard RNN encoder decoder model."}
{"_id": "b76c6b58a4afa39e027c95a48dac876a37ed0299", "title": "Development of an Omni-Directional Mobile Robot with 3 DOF Decoupling Drive Mechanism", "text": "A ne'ui driving mech,anism for holonomic ornwidirectional mobi le robots i s designzed, ,which enu.bles 3 DOF m o t i o n control by three correspondent actuators in, (I decozipled ma'nner ,with n o redundan,cy. A prototype of the omni-drrectional mobile robot w i th th,e dr iu ing mechan i sm is developed including a parallel link suspemion m,echanism. T h e k inemat ics of th.e omnidirectionsal mobile robot i s also analyzed, and simulat i o n fo,r velocity control of the robot is perform.ed by od f o r ,oelocity modula t ion wi th in,terpolation. to uchieve the given target pos i t ion and velocity."}
{"_id": "182696253c4bee32c98c27e3252add22ac09b328", "title": "Residential exposure to aircraft noise and hospital admissions for cardiovascular diseases: multi-airport retrospective study", "text": "OBJECTIVE\nTo investigate whether exposure to aircraft noise increases the risk of hospitalization for cardiovascular diseases in older people (\u2265 65 years) residing near airports.\n\n\nDESIGN\nMulti-airport retrospective study of approximately 6 million older people residing near airports in the United States. We superimposed contours of aircraft noise levels (in decibels, dB) for 89 airports for 2009 provided by the US Federal Aviation Administration on census block resolution population data to construct two exposure metrics applicable to zip code resolution health insurance data: population weighted noise within each zip code, and 90th centile of noise among populated census blocks within each zip code.\n\n\nSETTING\n2218 zip codes surrounding 89 airports in the contiguous states.\n\n\nPARTICIPANTS\n6 027 363 people eligible to participate in the national medical insurance (Medicare) program (aged \u2265 65 years) residing near airports in 2009.\n\n\nMAIN OUTCOME MEASURES\nPercentage increase in the hospitalization admission rate for cardiovascular disease associated with a 10 dB increase in aircraft noise, for each airport and on average across airports adjusted by individual level characteristics (age, sex, race), zip code level socioeconomic status and demographics, zip code level air pollution (fine particulate matter and ozone), and roadway density.\n\n\nRESULTS\nAveraged across all airports and using the 90th centile noise exposure metric, a zip code with 10 dB higher noise exposure had a 3.5% higher (95% confidence interval 0.2% to 7.0%) cardiovascular hospital admission rate, after controlling for covariates.\n\n\nCONCLUSIONS\nDespite limitations related to potential misclassification of exposure, we found a statistically significant association between exposure to aircraft noise and risk of hospitalization for cardiovascular diseases among older people living near airports."}
{"_id": "4323ec9e3b5266d90ef9cf482bc09cfaab0abddb", "title": "Continuous Requirements Engineering and Human-Centered Agile Software Development", "text": "The idea of Continuous Requirements Engineering in relation to a Human-Centered Agile Development Process is discussed. First, it is argued that Continuous Requirements Engineering has to cover design-time and runtime aspects. In this way maintenance is covered as well. Second, arguments are provided for integrating aspects of usability and user experience into requirements specifications. This has to be done continuously. Therefore, the term Continuous Human-Centered Development is introduced and discussed. Based on a process model for SCRUM some aspects of integrating HCD into the development process are discussed."}
{"_id": "383746b84400b82972ab84fd6027f1524090fa38", "title": "From Non-Functional Requirements to Design through Patterns", "text": "Design patterns aid in documenting and communicating proven design solutions to recurring problems. They describe not only how to solve design problems, but also why a solution is chosen over others and what trade-offs are made. Non-functional requirements (NFRs) are pervasive in descriptions of design patterns. They play a crucial role in understanding the problem being addressed, the trade-offs discussed, and the design solution proposed. However, since design patterns are mostly expressed as informal text, the structure of the design reasoning is not systematically organised. This paper proposes a systematic treatment of NFRs in descriptions of patterns and when applying patterns during design. The approach organises, analyses and refines non-functional requirements, and provides guidance and reasoning support when applying patterns during the design of a software system. Three design patterns taken from the literature are used to illustrate this approach."}
{"_id": "1dc776a1f94f5a4e56ceb088827e34964a57c79c", "title": "Tools for ontology matching practical considerations", "text": "There exists a large body of scienti c literature devoted to ontology matching, aligning, mapping translating and merging. With it, comes a long list (90+) of tools that support various aspects of these operations. We have approached such tools from the perspective of the INTER-IoT project, in which one of the goals is to facilitate semantic interoperability of Internet of Things platforms. Thus, we had to answer a question: what is actually available when one needs to align/merge ontologies. Here, we summarize our ndings."}
{"_id": "13e6aa5f61267b2814fa9b32f47c17c0fcdef2d5", "title": "Speedy transactions in multicore in-memory databases", "text": "Silo is a new in-memory database that achieves excellent performance and scalability on modern multicore machines. Silo was designed from the ground up to use system memory and caches efficiently. For instance, it avoids all centralized contention points, including that of centralized transaction ID assignment. Silo's key contribution is a commit protocol based on optimistic concurrency control that provides serializability while avoiding all shared-memory writes for records that were only read. Though this might seem to complicate the enforcement of a serial order, correct logging and recovery is provided by linking periodically-updated epochs with the commit protocol. Silo provides the same guarantees as any serializable database without unnecessary scalability bottlenecks or much additional latency. Silo achieves almost 700,000 transactions per second on a standard TPC-C workload mix on a 32-core machine, as well as near-linear scalability. Considered per core, this is several times higher than previously reported results."}
{"_id": "6479bc321dd7859eb6b6b8cca100bade86940526", "title": "To Lock, Swap, or Elide: On the Interplay of Hardware Transactional Memory and Lock-Free Indexing", "text": "The release of hardware transactional memory (HTM) in commodity CPUs has major implications on the design and implementation of main-memory databases, especially on the architecture of highperformance lock-free indexing methods at the core of several of these systems. This paper studies the interplay of HTM and lockfree indexing methods. First, we evaluate whether HTM will obviate the need for crafty lock-free index designs by integrating it in a traditional B-tree architecture. HTM performs well for simple data sets with small fixed-length keys and payloads, but its benefits disappear for more complex scenarios (e.g., larger variable-length keys and payloads), making it unattractive as a general solution for achieving high performance. Second, we explore fundamental differences between HTM-based and lock-free B-tree designs. While lock-freedom entails design complexity and extra mechanism, it has performance advantages in several scenarios, especially high-contention cases where readers proceed uncontested (whereas HTM aborts readers). Finally, we explore the use of HTM as a method to simplify lock-free design. We find that using HTM to implement a multi-word compare-and-swap greatly reduces lockfree programming complexity at the cost of only a 10-15% performance degradation. Our study uses two state-of-the-art index implementations: a memory-optimized B-tree extended with HTM to provide multi-threaded concurrency and the Bw-tree lock-free B-tree used in several Microsoft production environments."}
{"_id": "12fab45bab0c56ac8bb29f83f431d7233ee9232c", "title": "An effective hybrid transactional memory system with strong isolation guarantees", "text": "We propose signature-accelerated transactional memory (SigTM), ahybrid TM system that reduces the overhead of software transactions. SigTM uses hardware signatures to track the read-set and write-set forpending transactions and perform conflict detection between concurrent threads. All other transactional functionality, including dataversioning, is implemented in software. Unlike previously proposed hybrid TM systems, SigTM requires no modifications to the hardware caches, which reduces hardware cost and simplifies support for nested transactions and multithreaded processor cores. SigTM is also the first hybrid TM system to provide strong isolation guarantees between transactional blocks and non-transactional accesses without additional read and write barriers in non-transactional code.\n Using a set of parallel programs that make frequent use of coarse-grain transactions, we show that SigTM accelerates software transactions by 30% to 280%. For certain workloads, SigTM can match the performance of a full-featured hardware TM system, while for workloads with large read-sets it can be up to two times slower. Overall, we show that SigTM combines the performance characteristics and strong isolation guarantees of hardware TM implementations with the low cost and flexibility of software TM systems."}
{"_id": "136eefe33796c388a15d25ca03cb8d5077d14f37", "title": "The End of an Architectural Era (It's Time for a Complete Rewrite)", "text": "In previous papers [SC05, SBC+07], some of us predicted the end of \u201cone size fits all\u201d as a commercial relational DBMS paradigm. These papers presented reasons and experimental evidence that showed that the major RDBMS vendors can be outperformed by 1-2 orders of magnitude by specialized engines in the data warehouse, stream processing, text, and scientific database markets. Assuming that specialized engines dominate these markets over time, the current relational DBMS code lines will be left with the business data processing (OLTP) market and hybrid markets where more than one kind of capability is required. In this paper we show that current RDBMSs can be beaten by nearly two orders of magnitude in the OLTP market as well. The experimental evidence comes from comparing a new OLTP prototype, H-Store, which we have built at M.I.T., to a popular RDBMS on the standard transactional benchmark, TPC-C. We conclude that the current RDBMS code lines, while attempting to be a \u201cone size fits all\u201d solution, in fact, excel at nothing. Hence, they are 25 year old legacy code lines that should be retired in favor of a collection of \u201cfrom scratch\u201d specialized engines. The DBMS vendors (and the research community) should start with a clean sheet of paper and design systems for tomorrow\u2019s requirements, not continue to push code lines and architectures designed for yesterday\u2019s needs."}
{"_id": "9c8f7d5747814d8b04dd822fe1c67bd6aaef9035", "title": "Biped gait controller for large speed variations, combining reflexes and a central pattern generator in a neuromuscular model", "text": "Controllers based on neuromuscular models hold the promise of energy-efficient and human-like walkers. However, most of them rely on optimizations or cumbersome hand-tuning to find controller parameters which, in turn, are usually working for a specific gait or forward speed only. Consequently, designing neuromuscular controllers for a large variety of gaits is usually challenging and highly sensitive. In this contribution, we propose a neuromuscular controller combining reflexes and a central pattern generator able to generate gaits across a large range of speeds, within a single optimization. Applying this controller to the model of COMAN, a 95 cm tall humanoid robot, we were able to get energy-efficient gaits ranging from 0.4 m/s to 0.9 m/s. This covers normal human walking speeds once scaled to the robot height. In the proposed controller, the robot speed could be continuously commanded within this range by changing three high-level parameters as linear functions of the target speed. This allowed large speed transitions with no additional tuning. By combining reflexes and a central pattern generator, this approach can also predict when the next strike will occur and modulate the step length to step over a hole."}
{"_id": "b35ba82ed916143390c6a14f750b87e7763fddbb", "title": "Using Argument Mining to Assess the Argumentation Quality of Essays", "text": "Argument mining aims to determine the argumentative structure of texts. Although it is said to be crucial for future applications such as writing support systems, the benefit of its output has rarely been evaluated. This paper puts the analysis of the output into the focus. In particular, we investigate to what extent the mined structure can be leveraged to assess the argumentation quality of persuasive essays. We find insightful statistical patterns in the structure of essays. From these, we derive novel features that we evaluate in four argumentation-related essay scoring tasks. Our results reveal the benefit of argument mining for assessing argumentation quality. Among others, we improve the state of the art in scoring an essay\u2019s organization and its argument strength."}
{"_id": "4d17b5a236d3af3487593ece93a3c34f69fd53b4", "title": "Ontology-Aware Token Embeddings for Prepositional Phrase Attachment", "text": "Type-level word embeddings use the same set of parameters to represent all instances of a word regardless of its context, ignoring the inherent lexical ambiguity in language. Instead, we embed semantic concepts (or synsets) as defined in WordNet and represent a word token in a particular context by estimating a distribution over relevant semantic concepts. We use the new, context-sensitive embeddings in a model for predicting prepositional phrase (PP) attachments and jointly learn the concept embeddings and model parameters. We show that using context-sensitive embeddings improves the accuracy of the PP attachment model by 5.4% absolute points, which amounts to a 34.4% relative reduction in errors."}
{"_id": "7d63336eefd130a70f7a333ddd23e454ae844089", "title": "Developing an Isolated Word Recognition System in MATLAB", "text": "The Development Workflow There are two major stages within isolated word recognition: a training stage and a testing stage. Training involves \u201cteaching\u201d the system by building its dictionary, an acoustic model for each word that the system needs to recognize. In our example, the dictionary comprises the digits \u2018zero\u2019 to \u2018nine\u2019. In the testing stage we use acoustic models of these digits to recognize isolated words using a classification algorithm. The development workflow consists of three steps: Speech acquisition \u2022 Speech analysis \u2022 User interface development \u2022"}
{"_id": "c50cd7df4271ef94a0a60894f0e2cf4ef89fb912", "title": "Ruminating Reader: Reasoning with Gated Multi-Hop Attention", "text": "To answer the question in machine comprehension (MC) task, the models need to establish the interaction between the question and the context. To tackle the problem that the single-pass model cannot reflect on and correct its answer, we present Ruminating Reader. Ruminating Reader adds a second pass of attention and a novel information fusion component to the Bi-Directional Attention Flow model (BIDAF). We propose novel layer structures that construct a query aware context vector representation and fuse encoding representation with intermediate representation on top of BIDAF model. We show that a multi-hop attention mechanism can be applied to a bi-directional attention structure. In experiments on SQuAD, we find that the Reader outperforms the BIDAF baseline by 2.1 F1 score and 2.7 EM score. Our analysis shows that different hops of the attention have different responsibilities in selecting answers."}
{"_id": "da073ff5c7ef69a35d6ae45d334067cab7458b24", "title": "An investigation of current models of second language speech perception: the case of Japanese adults' perception of English consonants.", "text": "This study reports the results of two experiments with native speakers of Japanese. In experiment 1, near-monolingual Japanese listeners participated in a cross-language mapping experiment in which they identified English and Japanese consonants in terms of a Japanese category, then rated the identifications for goodness-of-fit to that Japanese category. Experiment 2 used the same set of stimuli in a categorial discrimination test. Three groups of Japanese speakers varying in English-language experience, and one group of native English speakers participated. Contrast pairs composed of two English consonants, two Japanese consonants, and one English and one Japanese consonant were tested. The results indicated that the perceived phonetic distance of second language (L2) consonants from the closest first language (L1) consonant predicted the discrimination of L2 sounds. In addition, this study investigated the role of experience in learning sounds in a second language. Some of the consonant contrasts tested showed evidence of learning (i.e., significantly higher scores for the experienced than the relatively inexperienced Japanese groups). The perceived phonetic distance of L1 and L2 sounds was found to predict learning effects in discrimination of L1 and L2 sounds in some cases. The results are discussed in terms of models of cross-language speech perception and L2 phonetic learning."}
{"_id": "4b9f7c233a105eafd23cc3afdd39e6cd36993fa1", "title": "Resource Sharing in Continuous Sliding-Window Aggregates", "text": "We consider the problem of resource sharing when processing large numbers of continuous queries. We specifically address sliding-window aggregates over data streams, an important class of continuous operators for which sharing has not been addressed. We present a suite of sharing techniques that cover a wide range of possible scenarios: different classes of aggregation functions (algebraic, distributive, holistic), different window types (time-based, tuple-based, suffix, historical), and different input models (single stream, multiple substreams). We provide precise theoretical performance guarantees for our techniques, and show their practical effectiveness through experimental study."}
{"_id": "46c6e1f6ac7f19a659b197abcd10fa781ac2a30d", "title": "Design of Wireless Sensor System for Neonatal Monitoring", "text": "In this paper, we present the application of wireless sensor technology and the advantages it will inherently have for neonatal care and monitoring at Neonatal Intensive Care Units (NICU). An electrocardiography (ECG) readout board and a wireless transceiver module developed by IMEC at the Holst Centre in the Netherlands are embedded in the proposed wireless sensor systems in combination with the signal processing and software interface developed at the Department of Industrial Design, Eindhoven University of Technology (TU/e). Through development of this prototype system, we opt to ensure correct data transmission, detection and display. The wireless system is designed to be suitable for integration into non-invasive monitoring platforms such as a smart neonatal jacket developed at TU/e. Experiments at Maxima Medical Centre (MMC) in Veldhoven, the Netherlands demonstrate the wireless transmission of ECG data from the smart jacket integrated with textile electrodes."}
{"_id": "29366f39dc4237a380f20e9ae9fbeffa645200c9", "title": "Sentiment Analysis and Subjectivity", "text": "Textual information in the world can be broadly categorized into two main types: facts and opinions. Facts are objective expressions about entities, events and their properties. Opinions are usually subjective expressions that describe people\u2019s sentiments, appraisals or feelings toward entities, events and their properties. The concept of opinion is very broad. In this chapter, we only focus on opinion expressions that convey people\u2019s positive or negative sentiments. Much of the existing research on textual information processing has been focused on mining and retrieval of factual information, e.g., information retrieval, Web search, text classification, text clustering and many other text mining and natural language processing tasks. Little work had been done on the processing of opinions until only recently. Yet, opinions are so important that whenever we need to make a decision we want to hear others\u2019 opinions. This is not only true for individuals but also true for organizations."}
{"_id": "1f254564e7363a0a2e3b69aa99165792a3a7bac5", "title": "Comparison of Classifier Fusion Methods for Classification in Pattern Recognition Tasks", "text": "This work presents a comparison of current research in the use of voting ensembles of classifiers in order to improve the accuracy of single classifiers and make the performance more robust against the difficulties that each individual classifier may have. Also, a number of combination rules are proposed. Different voting schemes are discussed and compared in order to study the performance of the ensemble in each task. The ensembles have been trained on real data available for benchmarking and also applied to a case study related to statistical description models of melodies for music genre recognition."}
{"_id": "3a501b6b460bedfd8188b35967b7d2de93740a62", "title": "Unmanned Aerial Vehicle Path Following: A Survey and Analysis of Algorithms for Fixed-Wing Unmanned Aerial Vehicless", "text": "Unmanned aerial vehicles (UAVs) are mainly used by military and government organizations, but with low-cost sensors, electronics, and airframes there is significant interest in using low-cost UAVs among aircraft hobbyists, academic researchers, and industries. Applications such as mapping, search and rescue, patrol, and surveillance require the UAV to autonomously follow a predefined path at a prescribed height. The most commonly used paths are straight lines and circular orbits. Path-following algorithms ensure that the UAV will follow a predefined path in three or two dimensions at constant height. A basic requirement for these path-following algorithms is that they must be accurate and robust to wind disturbances."}
{"_id": "7b7e011fdc6d30c164a5728732d9293e29463ebf", "title": "Erratum to: Eye\u2013hand coordination in a sequential target contact task", "text": "Most object manipulation tasks involve a series of actions demarcated by mechanical contact events, and gaze is typically directed to the locations of these events as the task unfolds. Here, we examined the timing of gaze shifts relative to hand movements in a task in which participants used a handle to contact sequentially five virtual objects located in a horizontal plane. This task was performed both with and without visual feedback of the handle position. We were primarily interested in whether gaze shifts, which in our task shifted from a given object to the next about 100\u00a0ms after contact, were predictive or triggered by tactile feedback related to contact. To examine this issue, we included occasional catch contacts where forces simulating contact between the handle and object were removed. In most cases, removing force did not alter the timing of gaze shifts irrespective of whether or not vision of handle position was present. However, in about 30% of the catch contacts, gaze shifts were delayed. This percentage corresponded to the fraction of contacts with force feedback in which gaze shifted more than 130\u00a0ms after contact. We conclude that gaze shifts are predictively controlled but timed so that the hand actions around the time of contact are captured in central vision. Furthermore, a mismatch between the expected and actual tactile information related to the contact can lead to a reorganization of gaze behavior for gaze shifts executed greater than 130\u00a0ms after a contact event."}
{"_id": "56b7b23845d475309816cb7951d7cae061327b0d", "title": "Recognition with Local Features: the Kernel Recipe", "text": "Recent developments in computer vision have shown that local features can provide efficient representations suitable for robust object recognition. Support Vector Machines have been established as powerful learning algorithms with good generalization capabilities. In this paper, we combine these two approaches and propose a general kernel method for recognition with local features. We show that the proposed kernel satisfies the Mercer condition and that it is suitable for many established local feature frameworks. Large-scale recognition results are presented on three different databases, which demonstrate that SVMs with the proposed kernel perform better than standard matching techniques on local features. In addition, experiments on noisy and occluded images show that local feature representations significantly outperform global approaches."}
{"_id": "d43f0d39a09691ac35d7a9592c06d2a698a349d7", "title": "Fast k-Means Based on k-NN Graph", "text": "In the big data era, k-means clustering has been widely adopted as a basic processing tool in various contexts. However, its computational cost could be prohibitively high when the data size and the cluster number are large. The processing bottleneck of k-means lies in the operation of seeking the closest centroid in each iteration. In this paper, a novel solution towards the scalability issue of k-means is presented. In the proposal, k-means is supported by an approximate k-nearest neighbors graph. In the k-means iteration, each data sample is only compared to clusters that its nearest neighbors reside. Since the number of nearest neighbors we consider is much less than k, the processing cost in this step becomes minor and irrelevant to k. The processing bottleneck is therefore broken. The most interesting thing is that k-nearest neighbor graph is constructed by calling the fast k-means itself. Compared with existing fast k-means variants, the proposed algorithm achieves hundreds to thousands times speed-up while maintaining high clustering quality. As it is tested on 10 million 512-dimensional data, it takes only 5.2 hours to produce 1 million clusters. In contrast, it would take 3 years for traditional k-means to fulfill the same scale of clustering."}
{"_id": "6820155c6bcb273174775e64d5210e35efcaa580", "title": "Malassezia Intra-Specific Diversity and Potentially New Species in the Skin Microbiota from Brazilian Healthy Subjects and Seborrheic Dermatitis Patients", "text": "Malassezia yeasts are part of the resident cutaneous microbiota, and are also associated with skin diseases such as seborrheic dermatitis (SD). The role these fungi play in skin diseases and why they are pathogenic for only some individuals remain unclear. This study aimed to characterize Malassezia microbiota from different body sites in healthy and SD subjects from Brazil. Scalp and forehead samples from healthy, mild SD and severe SD subjects were collected. Non-scalp lesions from severe SD patients were also sampled. 5.8S rDNA/ITS2 amplicons from Malassezia sp. were analyzed by RFLP and sequencing. Results indicate that Malassezia microbiota did not group according to health condition or body area. Phylogenetic analysis revealed that three groups of sequences did not cluster together with any formally described species, suggesting that they might belong to potential new species. One of them was found in high proportions in scalp samples. A large variety of Malassezia subtypes were detected, indicating intra-specific diversity. Higher M. globosa proportions were found in non-scalp lesions from severe SD subjects compared with other areas, suggesting closer association of this species with SD lesions from areas other than scalp. Our results show the first panorama of Malassezia microbiota in Brazilian subjects using molecular techniques and provide new perspectives for further studies to elucidate the association between Malassezia microbiota and skin diseases."}
{"_id": "65910400a9d58b02ede95c8f8c987e102f739699", "title": "ECISER: Efficient Clip-art Image SEgmentation by Re-rasterization", "text": "a r t i c l e i n f o"}
{"_id": "27099ec9ea719f8fd919fb69d66af677a424143b", "title": "An integrated theory of the mind.", "text": "Adaptive control of thought-rational (ACT-R; J. R. Anderson & C. Lebiere, 1998) has evolved into a theory that consists of multiple modules but also explains how these modules are integrated to produce coherent cognition. The perceptual-motor modules, the goal module, and the declarative memory module are presented as examples of specialized systems in ACT-R. These modules are associated with distinct cortical regions. These modules place chunks in buffers where they can be detected by a production system that responds to patterns of information in the buffers. At any point in time, a single production rule is selected to respond to the current pattern. Subsymbolic processes serve to guide the selection of rules to fire as well as the internal operations of some modules. Much of learning involves tuning of these subsymbolic processes. A number of simple and complex empirical examples are described to illustrate how these modules function singly and in concert."}
{"_id": "a7aeb2596f78fa695d99a5b176c60ae3d2fbe920", "title": "Compact Broadband Microstrip Crossover With Isolation Improvement and Phase Compensation", "text": "In this letter, we present a broadband microstrip crossover structure in two different configurations. Both structures have compact size, i.e., less than 10 mm \u00d7 10 mm. Across the measured band from dc to 10 GHz, the structure 1 has less than 1 dB insertion loss, more than 20 dB return loss and at least 23.3 dB isolation. Within the same band, the structure 2 has less than 1 dB insertion loss, more than 22 dB return loss and at least 18.9 dB isolation. Phase difference between two paths in both structures is kept in less than 5 \u00b0."}
{"_id": "71c3e958e71c2e8138896d8523bf7a7aa310737f", "title": "Efficient Local Search Heuristics for Packing Irregular Shapes in Two-Dimensional Heterogeneous Bins", "text": "In this paper we proposed a local search heuristic and a genetic algorithm to solve the two-dimensional irregular multiple bin-size bin packing problem. The problem consists of placing a set of pieces represented as 2D polygons in rectangular bins with different dimensions such that the total area of bins used is minimized. Most packing algorithms available in the literature for 2D irregular bin packing consider single size bins only. However, for many industries the material can be supplied in a number of standard size sheets, for example, metal, foam, plastic and timber sheets. For this problem, the cut plans must decide the set of standard size stock sheets as well as which pieces to cut from each bin and how to arrange them in order to minimise waste material. Moreover, the literature constrains the orientation of pieces to a single or finite set of angles. This is often an artificial constraint that makes the solution space easier to navigate. In this paper we do not restrict the orientation of the pieces. We show that the local search heuristic and the genetic algorithm can address all of these decisions and obtain good solutions, with the local search performing better. We also discuss the affect of different groups of stock sheet sizes."}
{"_id": "9d2a99114e54afd50b2fac759943f1f95f1bd259", "title": "Why are some problems hard? Evidence from Tower of Hanoi", "text": "This paper analyzes the causes for large differences in difficulty of various isomorphic versions of the Tower of Hanoi problem. Some forms of the problem take 16 times as long to solve, on average, as other versions. Since isomorphism rules out size of task domain as a determinant of relative difficulty, these experiments seek and find causes for the differences in features of the problem representation. Analysis of verbal protocols and the temporal patterning of moves allows the problem-solving behavior to be divided into exploratory and final-path phases. Entry into the final-path phase depends on acquisition of the ability to plan pairs of moves, an achievement made difficult by the working memory load it entails. This memory load can be reduced by automating the rules governing moves, either through problem exploration or training. Once automation has occurred, the solution is obtained very rapidly. Memory load is also proposed as the locus of other differences in difficulty found between various problem rep-"}
{"_id": "b49bb7ecd2afd6461c78ff29536839b5ee45cd15", "title": "Problem-solving methods in artificial intelligence", "text": "Well, someone can decide by themselves what they want to do and need to do but sometimes, that kind of person will need some problem solving methods in artificial intelligence references. People with open minded will always try to seek for the new things and information from many sources. On the contrary, people with closed mind will always think that they can do it by their principals. So, what kind of person are you?"}
{"_id": "c963c36ed6275f1798e4f308f9071d89696ed63c", "title": "Product Geometric Crossover for the Sudoku Puzzle", "text": "Geometric crossover is a representation-independent definition of crossover based on the distance of the search space interpreted as a metric space. It generalizes the traditional crossover for binary strings and other important recombination operators for the most used representations. Using a distance tailored to the problem at hand, the abstract definition of crossover can be used to design new problem specific crossovers that embed problem knowledge in the search. In recent work, we have introduced the important notion of product geometric crossover that enables the construction of new geometric crossovers combining preexisting geometric crossovers in a simple way. In this paper, we use it to design an evolutionary algorithm to solve the Sudoku puzzle. The different types of constraints make Sudoku an interesting study case for crossover design. We conducted extensive experimental testing and found that, on medium and hard problems, the new geometric crossovers perform significantly better than hill-climbers and mutations alone."}
{"_id": "0630d2adcd2a2d177ee82905fdaf4b798d788c15", "title": "GramFuzz: Fuzzing testing of web browsers based on grammar analysis and structural mutation", "text": "Fuzz testing is an automated black-box testing technique providing random data as input to a software system in the hope to find vulnerability. In order to be effective, the fuzzed input must be common enough to pass elementary consistency checks. Web Browser accepts JavaScript, CSS files as well as the html as input, which must be considered in fuzzing testing, while traditional fuzzing technology generates test cases using simple mutation strategies, ignoring the grammar and code structure. In this article, vulnerability patterns are summarized and a new fuzzing testing method are proposed based on grammar analysis of input data and mutation of code structure. Combining methods of generation and mutation, test cases will be more effective in the fuzzing testing of web browsers. Applied on the Mozilla and IE web browsers, it discovered a total of 36 new severe vulnerabilities(and thus became one of the top security bug bounty collectors within this period)."}
{"_id": "836e9643c4e31171d4b653c0ecb1cb82ad714b48", "title": "Clinical and radiological pictures of two newborn babies with manifestations of chondrodysplasia punctata and review of available literature", "text": "BACKGROUND\nChondrodysplasia punctata (CDP) is a rare, heterogeneous congenital skeletal dysplasia, characterized by punctate or dot-like calcium deposits in cartilage observed on neonatal radiograms. A number of inborn metabolic diseases are associated with CDP, including peroxisomal and cholesterol biosynthesis dysfunction and other inborn errors of metabolism such as: mucolipidosis type II, mucopolysacharidosis type III, GM1 gangliosidosis. CDP is also related to disruption of vitamin K-dependent metabolism, causing secondary effects on the embryo, as well as fetal alcohol syndrome (FAS), chromosomal abnormalities that include trisomies 18 and 21, Turner syndrome.\n\n\nCASE REPORT\nThis article presents clinical data and diagnostic imaging findings of two newborn babies with chondrodysplasia punctata. Children presented with skeletal and cartilage anomalies, dysmorphic facial feature, muscles tone abnormalities, skin changes and breathing difficulties. One of the patients demonstrated critical stenosis of spinal canal with anterior subluxation of C1 vertebra relative to C2. The aim of this article is to present cases and briefly describe current knowledge on etiopathogenesis as well as radiological and clinical symptoms of diseases coexisting with CDP.\n\n\nCONCLUSIONS\nRadiological diagnostic imaging allows for visualization of punctate focal mineralization in bone epiphyses during neonatal age and infancy. Determining the etiology of chondrodysplasia punctata requires performing various basic as well as additional examinations, including genetic studies."}
{"_id": "8132941b5e43eb27546f6611c551b38599595e2f", "title": "Numerical Synthesis of Six-Bar Linkages for Mechanical Computation", "text": "This paper presents a design procedure for six-bar linkages that use eight accuracy points to approximate a specified input-output function. In the kinematic synthesis of linkages, this is known as the synthesis of a function generator to perform mechanical computation. Our formulation uses isotropic coordinates to define the loop equations of the Watt II, Stephenson II, and Stephenson III six-bar linkages. The result is 22 polynomial equations in 22 unknowns that are solved using the polynomial homotopy software Bertini. The bilinear structure of the system yields a polynomial degree of 705,432. Our first run of Bertini generated 92,736 nonsingular solutions, which were used as the basis of a parameter homotopy solution. The algorithm was tested on the design of the Watt II logarithmic function generator patented by Svoboda in 1944. Our algorithm yielded his linkage and 64 others, in 129 minutes of parallel computation on a Mac Pro with 12\u21e52.93 GHz processors. Three additional examples are provided as well."}
{"_id": "2f813885f3ac7be62894b182fa3c5d7a8226a480", "title": "Skin Segmentation Using YCBCR and RGB Color Models", "text": "Face detection is one of the challenging problems in image processing. This report proposes a novel technique for detecting faces in color images using skin color model algorithm combined with skin likely-hood, skin Segmentation, Morphological operation and Template matching. Color images with skin color in the chromatic and pure color space YCrCb, which separates luminance and chrominance components. A Gaussian probability density is estimated from skin samples, collected from different ethnic groups, via the maximum-likelihood criterion. Adaptive thresholding for segmentation to localize the faces within the detected skin regions. Further, mathematical morphological operators are used to remove noisy regions and fill holes in the skin-color region, so that candidate human face regions can be extracted. These systems can achieve high detection accuracy, high detection speed and reduce the false detecting rate. The two methodology used for the Skin Segmentation is YCbCr and RGB Model. In YCbCr model both the skin colour and texture of the image can be used to identify the particular object in the image,where as in RGB model only the skin color has to be used for identification of the person.Hence the YCbCr model is better than the RGB model . From our analysis we conclude that the new approach in modeling skin color can achieve good detection success rate. The algorithm gives computationally a very efficient as well as an accurate approach for skin detection. The performance of different color space may be dependent on the method used to model the color for skin pixel. Keywords\u2014Gaussian probability, YCrCb,RGB Model"}
{"_id": "130b4402f1238d48555d311aa68f9e834263d04f", "title": "Monte Carlo Filtering Using Kernel Embedding of Distributions", "text": "Recent advances of kernel methods have yielded a framework for representing probabilities using a reproducing kernel Hilbert space, called kernel embedding of distributions. In this paper, we propose a Monte Carlo filtering algorithm based on kernel embeddings. The proposed method is applied to state-space models where sampling from the transition model is possible, while the observation model is to be learned from training samples without assuming a parametric model. As a theoretical basis of the proposed method, we prove consistency of the Monte Carlo method combined with kernel embeddings. Experimental results on synthetic models and real vision-based robot localization confirm the effectiveness of the proposed approach."}
{"_id": "75a4681b7154661b99e2541ff1a4a03180c0cc13", "title": "Should I do that? using relational reinforcement learning and declarative programming to discover domain axioms", "text": "Robots assisting humans in complex domains need the ability to represent, reason with, and learn from, different descriptions of incomplete domain knowledge and uncertainty. This paper focuses on the challenge of incrementally and interactively discovering previously unknown axioms governing domain dynamics, and describes an architecture that integrates declarative programming and relational reinforcement learning to address this challenge. Answer Set Prolog (ASP), a declarative programming paradigm, is used to represent and reason with incomplete domain knowledge for planning and diagnostics. For any given goal, unexplained failure of plans created by ASP-based inference is taken to indicate the existence of unknown domain axioms. The task of discovering these axioms is formulated as a reinforcement learning problem, and a relational representation is used to incrementally generalize from specific axioms identified over time. These generic axioms are then added to the ASP-based representation for subsequent inference. The architecture's capabilities are demonstrated and evaluated in two domains, Blocks World and Robot Butler."}
{"_id": "904dbf0173ad1ad4e718d77d142476cfd9430193", "title": "The ONE simulator for DTN protocol evaluation", "text": "Delay-tolerant Networking (DTN) enables communication in sparse mobile ad-hoc networks and other challenged environments where traditional networking fails and new routing and application protocols are required. Past experience with DTN routing and application protocols has shown that their performance is highly dependent on the underlying mobility and node characteristics. Evaluating DTN protocols across many scenarios requires suitable simulation tools. This paper presents the Opportunistic Networking Environment (ONE) simulator specifically designed for evaluating DTN routing and application protocols. It allows users to create scenarios based upon different synthetic movement models and real-world traces and offers a framework for implementing routing and application protocols (already including six well-known routing protocols). Interactive visualization and post-processing tools support evaluating experiments and an emulation mode allows the ONE simulator to become part of a real-world DTN testbed. We show sample simulations to demonstrate the simulator\u2019s flexible support for DTN protocol evaluation."}
{"_id": "c7300430dda7a52d8284821c09267b552e61e7f1", "title": "Scene text extraction using stroke width transform for tourist translator on android platform", "text": "Scene text extraction is recent research area in the field of Computer Vision. Text component in an image is of particular interest, as it is easy to understand by both humans as well as machines and can be used in variety of applications like content-based image retrieval, aids for visually impaired people, translator for tourists. Although there exists a lot of research activities in this field, it is still remained as a challenging problem, mainly due to two issues: different variety of text patterns like fonts, colors, sizes, orientations; and presence of background outliers similar to text characters, such as windows, bricks. In this paper, an android application named as TravelMate is developed, which is a user-friendly application to assist the tourist navigation, while they are roaming in foreign countries. Proposed application is able to extract text from an image, which is captured by an android mobile camera. Extraction is performed using stroke width transform (SWT) approach and connected component analysis. Extracted texts are recognized using Google's open source optical character recognition (OCR) engine \u2018Tesseract\u2019 and translated to a target language using Google's translation. The entire result is displayed back onto the screen of the mobile Phone."}
{"_id": "6fdbf20f50dfd6276d9b89e494f86fbcc7b0b9b7", "title": "A Single-Channel Microstrip Electronic Tracking Antenna Array With Time Sequence Phase Weighting on Sub-Array", "text": "We have designed and tested a novel electronic tracking antenna array that is formed by 2 \u00d7 2 microstrip sub-arrays. Through time sequence phase weighting on each sub-array, the amplitude and phase on each sub-array can be recovered from the output of the resultant single channel. The amplitude and phase on each array can be used to produce the sum and difference radiation pattern by digital signal processing. In comparison with the monopulse system, the RF comparator is eliminated and the number of the receiver channels is reduced from 3 to 1. A proof-of-concept prototype was fabricated and tested. The measured results confirmed the validity and advantages of the proposed scheme. The procedure of channel correction is given."}
{"_id": "1acd0d579869d6ff8274da61a924178b31a8aedd", "title": "Insights into the neural basis of response inhibition from cognitive and clinical neuroscience", "text": "Neural mechanisms of cognitive control enable us to initiate, coordinate and update behaviour. Central to successful control is the ability to suppress actions that are no longer relevant or required. In this article, we review the contribution of cognitive neuroscience, molecular genetics and clinical investigations to understanding how response inhibition is mediated in the human brain. In Section 1, we consider insights into the neural basis of inhibitory control from the effects of neural interference, neural dysfunction, and drug addiction. In Section 2, we explore the functional specificity of inhibitory mechanisms among a range of related processes, including response selection, working memory, and attention. In Section 3, we focus on the contribution of response inhibition to understanding flexible behaviour, including the effects of learning and individual differences. Finally, in Section 4, we propose a series of technical and conceptual objectives for future studies addressing the neural basis of inhibition."}
{"_id": "214eff45ec18eb45cfd5c7268f29911b46df97d7", "title": "Web science and human-computer interaction: when disciplines collide", "text": "Web Science and Human-Computer Interaction (HCI) are interdisciplinary arenas concerned with the intersection of people and technology. After introducing the two disciplines we discuss overlaps and notable differences between them, covering subject matter, scope and methodology. Given the longer history of HCI, we identify and discuss some potential lessons that the Web Science community may be able to take from this field. These concern: the division between interpretivist and positivist approaches; methods and methodology; evaluation; and design focus and methods. In summary, this paper clarifies the relationship between the communities, signposting complementary aspects and ways in which they might collaborate in future."}
{"_id": "187bbfd80449a88a7c6a5c0fe8ec2dd947cd761c", "title": "Summarization Evaluation Methods: Experiments and Analysis", "text": "Two methods are used for evaluation of summarization systems: an evaluation of generated summaries against an \"ideal\" summary and evaluation of how well summaries help a person perform in a task such as informa. tion retrieval. We carried out two large experiments to study the two evaluation methods. Our results show that different parameters of an experiment can (h-amatically affect how well a system scores. For example, summary length was found to affect both types of evaluations. For the \"ideal\" summary based evaluation, accuracy decreases as summary length increases, while for task based evaluations summary length and accuracy on an information retrieval task appear to correlate randomly. In this paper, we show how this parameter and others can affect evaluation results and describe how parameters can be controlled to produce a sound evaluation."}
{"_id": "af25339054a73263643742ddd1468c8426592803", "title": "The 2 nd Workshop on Arabic Corpora and Processing Tools 2016 Theme :", "text": "Speech datasets and corpora are crucial for both developing and evaluating accurate Natural Language Processing systems. While Modern Standard Arabic has received more attention, dialects are drastically underestimated, even they are the most used in our daily life and the social media, recently. In this paper, we present the methodology of building an Arabic Speech Corpus for Algerian dialects, and the preliminary version of that dataset of dialectal arabic speeches uttered by Algerian native speakers selected from different Algeria\u2019s departments. In fact, by means of a direct recording way, we have taken into acount numerous aspects that foster the richness of the corpus and that provide a representation of phonetic, prosodic and orthographic varieties of Algerian dialects. Among these considerations, we have designed a rich speech topics and content. The annotations provided are some useful information related to the speakers, time-aligned orthographic word transcription. Many potential uses can be considered such as speaker/dialect identification and computational linguistic for Algerian sub-dialects. In its preliminary version, our corpus encompasses 17 sub-dialects with 109 speakers and more than 6 K utterances."}
{"_id": "3c66a7edbe21812702e55a37677716d87de01018", "title": "Bike Angels: An Analysis of Citi Bike's Incentive Program", "text": "Bike-sharing systems provide a sustainable and affordable transportation alternative in many American cities. However, they also face intricate challenges due to imbalance, caused by asymmetric traffic demand. That imbalance often-times leads to bike-sharing stations being empty (full), causing out-of-stock events for customers that want to rent (return) bikes at such stations. In recent years, the study of data-driven methods to help support the operation of such system, has developed as a popular research area.\n In this paper, we study the impact of Bike Angels, an incentive program New York City's Citi Bike system set up in 2015 to crowd-source some of its operational challenges related to imbalance. We develop a performance metric for both online- and offline-policies to set incentives within the system; our results indicate that though Citi Bike's original offline policy performed well in a regime in which incentives given to customers are not associated to costs, there is ample space for improvement when the costs of the incentives are taken into consideration. Motivated by these findings, we develop several online- and offline- policies to investigate the trade-offs between real-time and offline decision-making; one of our online policies has since been adopted by Citi Bike."}
{"_id": "aca8059854b85bef137ac4ded96f743a6b097d2d", "title": "Integration challenges of intelligent transportation systems with connected vehicle, cloud computing, and internet of things technologies", "text": "Transportation is a necessary infrastructure for our modern society. The performance of transportation systems is of crucial importance for individual mobility, commerce, and for the economic growth of all nations. In recent years modern society has been facing more traffic jams, higher fuel prices, and an increase in CO2 emissions. It is imperative to improve the safety and efficiency of transportation. Developing a sustainable intelligent transportation system requires the seamless integration and interoperability with emerging technologies such as connected vehicles, cloud computing, and the Internet of Things. In this article we present and discuss some of the integration challenges that must be addressed to enable an intelligent transportation system to address issues facing the transportation sector such as high fuel prices, high levels of CO2 emissions, increasing traffic congestion, and improved road safety."}
{"_id": "090ab6c395c010ced449b540c425a6a2835647a6", "title": "A Deep Dual-Branch Networks for Joint Blind Motion Deblurring and Super-Resolution", "text": "Image super-resolution is a fundamental pre-processing technique for the machine vision applications of robotics and other mobile platforms. Inevitably, images captured by the mobile camera tend to emerge severe motion blur and this degradation will deteriorate the performance of current state-of-the-art super-resolution methods. In this paper, we propose a deep dual-branch convolution neural network (CNN) to generate a clear high-resolution image from a single natural image with severe blurs. Compared to off-the-shelf methods, our method, called DB-SRN, can remove the complex non-uniform motion blurs and restore useful texture details simultaneously. By sharing the features from modified residual blocks (ResBlocks), the dual-branch design can promote the performances of both tasks other while retaining network simplicity. Extensive experiments demonstrate that our method produces remarkable deblurred and super-resolved images in terms of quality and quantity with high computational efficiency."}
{"_id": "419ee304092445c8e2de26229bc93ffca51e751b", "title": "Energy-Aware Autonomic Resource Allocation in Multitier Virtualized Environments", "text": "With the increase of energy consumption associated with IT infrastructures, energy management is becoming a priority in the design and operation of complex service-based systems. At the same time, service providers need to comply with Service Level Agreement (SLA) contracts which determine the revenues and penalties on the basis of the achieved performance level. This paper focuses on the resource allocation problem in multitier virtualized systems with the goal of maximizing the SLAs revenue while minimizing energy costs. The main novelty of our approach is to address-in a unifying framework-service centers resource management by exploiting as actuation mechanisms allocation of virtual machines (VMs) to servers, load balancing, capacity allocation, server power state tuning, and dynamic voltage/frequency scaling. Resource management is modeled as an NP-hard mixed integer nonlinear programming problem, and solved by a local search procedure. To validate its effectiveness, the proposed model is compared to top-performing state-of-the-art techniques. The evaluation is based on simulation and on real experiments performed in a prototype environment. Synthetic as well as realistic workloads and a number of different scenarios of interest are considered. Results show that we are able to yield significant revenue gains for the provider when compared to alternative methods (up to 45 percent). Moreover, solutions are robust to service time and workload variations."}
{"_id": "30d5877df650ac0cf278001abe96858137bc65e3", "title": "Extraction of Significant Patterns from Heart Disease Warehouses for Heart Attack Prediction", "text": "The diagnosis of diseases is a significant and tedious task in medicine. The detection of heart disease from various factors or symptoms is a multi-layered issue which is not free from false presumptions often accompanied by unpredictable effects. Thus the effort to utilize knowledge and experience of numerous specialists and clinical screening data of patients collected in databases to facilitate the diagnosis process is considered a valuable option. The healthcare industry gathers enormous amounts of heart disease data that regrettably, are not \u201cmined\u201d to determine concealed information for effective decision making by healthcare practitioners. In this paper, we have proposed an efficient approach for the extraction of significant patterns from the heart disease warehouses for heart attack prediction. Initially, the data warehouse is preprocessed to make it appropriate for the mining process. After preprocessing, the heart disease warehouse is clustered using the K-means clustering algorithm, which will extract the data relevant to heart attack from the warehouse. Subsequently the frequent patterns are mined from the extracted data, relevant to heart disease, using the MAFIA algorithm. Then the significant weightage of the frequent patterns are calculated. Further, the patterns significant to heart attack prediction are chosen based on the calculated significant weightage. These significant patterns can be used in the development of heart attack prediction system."}
{"_id": "973cb57bd2f4b67c99aea0bbdb74c41a29bf6d2a", "title": "A survey of decision tree classifier methodology", "text": "Decision Tree Classifiers (DTC's) are used successfully in many diverse areas such as radar signal classification, character recognition, remote sensing, medical diagnosis, expert systems, and speech recognition, to name only a few. Perhaps, the most important feature of DTC's is their capability to break down a complex decision-making process into a collection of simpler decisions, thus providing a solution which is often easier to interpret. This paper presents a survey of current methods for DTC designs and the various existing issues. After considering potential advantages of DTC's over single stage classifiers, subjects of tree structure design, feature selection at each internal node, and decision and search strategies are discussed_ Finally, several remarks are made concerning possible future research directions."}
{"_id": "f816edce638d88f7ee02370910852d6337d7d2fb", "title": "Combination Data Mining Methods with New Medical Data to Predicting Outcome of Coronary Heart Disease", "text": "The prediction of survival of Coronary heart disease (CHD) has been a challenging research problem for medical society. The goal of this paper is to develop data mining algorithms for predicting survival of CHD patients based on 1000 cases .We carry out a clinical observation and a 6-month follow up to include 1000 CHD cases. The survival information of each case is obtained via follow up. Based on the data, we employed three popular data mining algorithms to develop the prediction models using the 502 cases. We also used 10-fold cross-validation methods to measure the unbiased estimate of the three prediction models for performance comparison purposes. The results indicated that the SVM is the best predictor with 92.1 % accuracy on the holdout sample artificial neural networks came out to be the second with91.0% accuracy and the decision trees models came out to be the worst of the three with 89.6% accuracy. The comparative study of multiple prediction models for survival of CHD patients along with a 10-fold cross-validation provided us with an insight into the relative prediction ability of different data."}
{"_id": "0ed269ce5462a4a0ba22a1270d7810c7b3374a09", "title": "Selected techniques for data mining in medicine", "text": "Widespread use of medical information systems and explosive growth of medical databases require traditional manual data analysis to be coupled with methods for efficient computer-assisted analysis. This paper presents selected data mining techniques that can be applied in medicine, and in particular some machine learning techniques including the mechanisms that make them better suited for the analysis of medical databases (derivation of symbolic rules, use of background knowledge, sensitivity and specificity of induced descriptions). The importance of the interpretability of results of data analysis is discussed and illustrated on selected medical applications."}
{"_id": "37807e97c624fb846df7e559553b32539ba2ea5d", "title": "Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks", "text": "-We give conditions ensuring that multilayer jeedJorward networks with as Jew as a single hidden layer and an appropriately smooth hidden layer activation fimction are capable o f arbitrarily accurate approximation to an arbitrao' function and its derivatives. In fact, these networks can approximate functions that are not dtifferentiable in the classical sense, but possess only a generalized derivative, as is the case jor certain piecewise dtlf[erentiable Junctions. The conditions imposed on the hidden layer activation function are relatively mild: the conditions imposed on the domain o f the fimction to be approximated have practical intplications. Our approximation results provide a previously missing theoretical justification jor the use of multilayer /~'ed/brward networks in applications requiring simultaneous approximation o f a function and its derivatives. Keywords--Approximation, Derivatives, Sobolev space, Feedforward networks. 1. I N T R O D U C T I O N The capability of sufficiently complex multilayer feedforward networks to approximate an unknown mapping./~. R' ----~ R arbitrarily well has been recently investigated by Cybenko (1989), Funahashi (1989), Hecht-Nielsen (1989), Hornik, Stinchcombe, and White (1989) (HSW) (all for sigmoid hidden layer activation functions) and Stinchcombe and White (1989) (SW) (non-sigmoid hidden layer activation functions). In applications, it may be desirable to approximate not only the unknown mapping, but also its unknown derivatives. This is the case in Jordan's (1989) recent investigation of robot learning of smooth movement . Jordan states: The Jacobian matrix 8z/Ox . . . is the matrix that relates small changes in the controller output to small changes in the task space results and cannot be assumed to be available a priori, or provided by the environment. However, all of the derivatives in the matrix are forward derivatives. They are easily obtained by differentiation if a forward model is available. The forward model itself must be learned, but this can be achieved directly by system idenAcknowledgements: We arc indebted to Angelo Melino for pressing us on the issue addressed here and to the referees for numerous helpful suggestions. White's participation was supported by NSF Grant SES-8806990. Requests for reprints should be sent to Halbert White, Department of Economics, D-008, University of California, San Diego. La Jolla, CA 92093. tification. Once the model is accurate over a particular domain, its derivatives provide a learning operator that allows the system to convert errors in task space into errors in articulatory space and thereby change the controller. Thus, learning an adequate approximation to the Jacobian matrix of an unknown mapping is a key component of Jordan 's approach to robot learning of smooth movement . Despite the success of Jordan 's experiments, there is no existing theoretical guarantee that multilayer feedforward networks generally have the capability to approximate an unknown mapping and its derivatives simultaneously. For example, a network with hard limiting hidden layer activations approximates unknown mappings with a piecewise-constant function, the first derivatives of which exist and are zero almost everywhere. Obviously, the derivatives of such a network output function cannot approximate the derivatives of an arbitrary function. 
Intuition suggests that networks having smooth hidden layer activation functions ought to have output function derivatives that will approximate the derivatives of an unknown mapping. However , the justification for this intuition is not obvious. Consider the class of single hidden layer feedforward networks having network output functions belonging to the set 2. \"(G) -~ {g : x\"-~ R[ g(x) : /:,G(_i';,,):"}
{"_id": "cabf34028bc1e95fd38b95473bf2e6bbad8b79e6", "title": "Domainless Adaptation by Constrained Decoding on a Schema Lattice", "text": "In many applications such as personal digital assistants, there is a constant need for new domains to increase the system\u2019s coverage of user queries. A conventional approach is to learn a separate model every time a new domain is introduced. This approach is slow, inefficient, and a bottleneck for scaling to a large number of domains. In this paper, we introduce a framework that allows us to have a single model that can handle all domains: including unknown domains that may be created in the future as long as they are covered in the master schema. The key idea is to remove the need for distinguishing domains by explicitly predicting the schema of queries. Given permitted schema of a query, we perform constrained decoding on a lattice of slot sequences allowed under the schema. The proposed model achieves competitive and often superior performance over the conventional model trained separately per domain."}
{"_id": "9ef6c5ef82918e99b560d801c960f4cdeb3bc186", "title": "Some Basic Cryptographic Requirements for Chaos-Based Cryptosystems", "text": "In recent years, a large amount of work on chaos-based cryptosystems have been published. However many of the proposed schemes fail to explain or do not possess a number of features that are fundamentally important to all kind of cryptosystems. As a result, many proposed systems are difficult to implement in practice with a reasonable degree of security. Likewise, they are seldom accompanied by a thorough security analysis. Consequently, it is difficult for other researchers and end users to evaluate their security and performance. This work is intended to provide a common framework of basic guidelines that, if followed, every new cryptosystem would benefit from. The suggested guidelines address three main issues: implementation, key management, and security analysis, aiming at assisting designers of new cryptosystems to present their work in a more systematic and rigorous way to fulfill some basic cryptographic requirements. Meanwhile, several recommendations are made regarding some practical aspects of analog chaos-based secure communications, such as channel noise, limited bandwith, and attenuation."}
{"_id": "99c841b50a4010626667b78562d8ceb6b8e0b54d", "title": "A Framework for Cloud Forensic Readiness in Organizations", "text": "Many have argued that cloud computing is one of the fastest growing and most transformative technologies in the history of computing. It has radically changed the way in which information technologies can manage, access, deliver and create services. It has also brought numerous benefits to end-users and organizations. However, this rapid growth in cloud computing adoption has also seen it become a new arena for cybercrime. This has, in turn, led to new technical, legal and organizational challenges. In addition to the large number of attacks which affect cloud computing and the decentralized nature of data processing in the cloud, many concerns have been raised. One of these concerns is how to conduct a proper digital investigation in cloud environments and be ready to collect data proactively before an incident occurs in order to save time, money and effort. This paper proposes the technical, legal and organizational factors that influence digital forensic readiness for Infrastructure as a Service consumers."}
{"_id": "92dc827eaca6e31243b7c886d5ad4bb9289eee26", "title": "Scalable Formal Concept Analysis algorithm for large datasets using Spark", "text": "In the process of knowledge discovery and representation in large datasets using formal concept analysis, complexity plays a major role in identifying all the formal concepts and constructing the concept lattice(digraph of the concepts). For identifying the formal concepts and constructing the digraph from the identified concepts in very large datasets, various distributed algorithms are available in the literature. However, the existing distributed algorithms are not very well suitable for concept generation because it is an iterative process. The existing algorithms are implemented using distributed frameworks like MapReduce and Open MP, these frameworks are not appropriate for iterative applications. Hence, in this paper we proposed efficient distributed algorithms for both formal concept generation and concept lattice digraph construction in large formal contexts using Apache Spark. Various performance metrics are considered for the evaluation of the proposed work, the results of the evaluation proves that the proposed algorithms are efficient for concept generation and lattice graph construction in comparison with the existing algorithms."}
{"_id": "cf80e910f2690d795c45d3ad3d88854bb9eea080", "title": "Mass-media coverage , its influence on public awareness of climate-change issues , and implications for Japan \u2019 s national campaign to reduce greenhouse gas emissions", "text": "We analyse Japanese newspaper coverage of global warming from January 1998 to July 2007 and how public opinion during parts of that period were influenced by newspaper coverage. We show that a dramatic increase in newspaper coverage of global warming from January 2007 correlated with an increase in public concern for the issue. Before January 2007, we find that coverage of global warming had an immediate but short-term influence on public concern. With such transitory high levels of media coverage we suggest that for more effective communication of climate change, strategies aimed at maintaining mass-media coverage of global warming are required. 2008 Elsevier Ltd. All rights reserved."}
{"_id": "89720a2ccd0e68d6c4b007fe330ebcb85476c60c", "title": "mmWave communications for 5G: implementation challenges and advances", "text": "The requirement of the fifth generation (5G) wireless communication for high throughput motivates the wireless industry to use the mmWave (millimeter wave) communications for its wide bandwidth advantage. To compensate the heavy path loss and increase the communications capacity, phased array beamforming and massive multiple-input multiple-output (MIMO) techniques are employed at both the user equipment (UE) and base stations (BS). Considering the commercial requirements, 5G mmWave large array systems should be implemented in an energy- and cost-efficient way with a small form factor. To address above issues and realize a reliable communications link, taking into account the particular characteristics of 5G mmWave systems, this paper firstly examines the design challenges and trade-offs in system implementations, then some of the design strategies are summarized. At last, recent advance in RF front-end circuits and receiver sub-systems is then highlighted."}
{"_id": "c7e6f338aa5a42b6b5e292ff1c37dcd1108dcbd7", "title": "A 6 Vpp, 52 dB, 30-dB Dynamic Range, 43 Gb/s InP DHBT Differential Limiting Amplifier", "text": "We report the design, fabrication and measurement of a 43 Gb/s differential limiting amplifier integrating a transimpedance amplifier (TIA) as input stage and a distributed amplifier (Driver) as output stage, realised in a 1.5-\u00b5m indium phosphide (InP) double heterojunction bipolar transistor (DHBT) technology. With a differential gain higher than 52 dB, the TIA-Driver provides a constant 6 Vpp differential output amplitude for an input amplitude ranging from 12.7 mVpp to 405 mVpp, achieving more than 30 dB of electrical dynamic range, for a power consumption of 2.7 W. Designed to be used between a photodiode and an electro-optical modulator, this circuit is well suited for optical-to-electrical-to-optical (O-E-O) wavelength converters at up to 43 Gb/s."}
{"_id": "af3cadee7fedea6d6b68b9204d4e2711064450d3", "title": "Stress Detection and Management: A Survey of Wearable Smart Health Devices", "text": "In today's world, many people feel stressed out from school, work, or other life events. Therefore, it is important to detect stress and manage it to reduce the risk of damage to an individual's well being. With the emergence of the Internet of Things (IoT), devices can be made to detect stress and manage it effectively by using cloudbased services and smartphone apps to aggregate and compute large data sets that track stress behavior over long periods of time. Additionally, there is added convenience via the connectivity and portability of these IoT devices. They allow individuals to seek intervention prior to elevated health risks and achieve a less stressful life."}
{"_id": "f24eda86d06907ada2e627f93de2c612e50b2778", "title": "Personal Archive Service System using Blockchain Technology: Case Study, Promising and Challenging", "text": "This paper is to explore applications of the blockchain technology to the concept of \u201cproof of X\u201d such as proof of identity, proof of property ownership, proof of specific transaction, proof of college degree, proof of medical records, proof of academic achievements, etc. It describes a novel approach of building a decentralized transparent immutable secure personal archive management and service system. Personal archive is defined as a collection of various artifacts that reflect personal portfolio as well as personal unique identifications. Personal portfolio is beyond of a statement of personal achievement. It is an evidentiary document designed to provide qualitative and quantitative chronically documents and examples. Subjects can tag their information with proof, that is, certified by trusted entities or organizations like universities. Such proofs are associated with confidentiality levels exposed in the public domains. Personal identifications include biometrics as well as other multi-factors such as something the subject \u201chas\u201d, the subject \u201cknows\u201d or the subject \u201cacts\u201d. Stack holders in a consortium oriented blockchain network serve as verifiers and /or miners that provide their trusted services the delegated proof of stake. Such personal archive based system can be exploited to various applications including professional network like Linkedin, instant credit approval like alipay or live human from social bots like internet social media. A prototype simulation shows that such personal portfolio management and service system is feasible and immune to many ID attacks."}
{"_id": "1cb3a0ff57de3199fe7029db7a1b8bac310fa5f8", "title": "Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing", "text": "Deep neural networks such as Convolutional Networks (ConvNets) and Deep Belief Networks (DBNs) represent the state-of-the-art for many machine learning and computer vision classification problems. To overcome the large computational cost of deep networks, spiking deep networks have recently been proposed, given the specialized hardware now available for spiking neural networks (SNNs). However, this has come at the cost of performance losses due to the conversion from analog neural networks (ANNs) without a notion of time, to sparsely firing, event-driven SNNs. Here we analyze the effects of converting deep ANNs into SNNs with respect to the choice of parameters for spiking neurons such as firing rates and thresholds. We present a set of optimization techniques to minimize performance loss in the conversion process for ConvNets and fully connected deep networks. These techniques yield networks that outperform all previous SNNs on the MNIST database to date, and many networks here are close to maximum performance after only 20 ms of simulated time. The techniques include using rectified linear units (ReLUs) with zero bias during training, and using a new weight normalization method to help regulate firing rates. Our method for converting an ANN into an SNN enables low-latency classification with high accuracies already after the first output spike, and compared with previous SNN approaches it yields improved performance without increased training time. The presented analysis and optimization techniques boost the value of spiking deep networks as an attractive framework for neuromorphic computing platforms aimed at fast and efficient pattern recognition."}
{"_id": "0e80f3edf3f83d13ea329d9d30da49fb17b44b2f", "title": "Privacy protection based access control scheme in cloud-based services", "text": "With the rapid development of the computer technology, cloud-based services have become a hot topic. Cloud-based services not only provide users with convenience, but also bring many security issues. Therefore, the study of access control scheme to protect users' privacy in cloud environment is of great significance. In this paper, we present an access control system with privilege separation based on privacy protection (PS-ACS). In the PS-ACS scheme, we divide the users into personal domain (PSD) and public domain (PUD) logically. In the PSD, we set read and write access permissions for users respectively. The Key-Aggregate Encryption (KAE) is exploited to implement the read access permission which improves the access efficiency. A high degree of patient privacy is guaranteed simultaneously by exploiting an Improved Attribute-based Signature (IABS) which can determine the users' write access. For the users of PUD, a hierarchical attribute-based encryption (HABE) is applied to avoid the issues of single point of failure and complicated key distribution. Function and performance testing result shows that the PS-ACS scheme can achieve privacy protection in cloud-based services."}
{"_id": "c8d61031220cc756598bb88542508223d2f3b181", "title": "Input feature selection for classification problems", "text": "Feature selection plays an important role in classifying systems such as neural networks (NNs). We use a set of attributes which are relevant, irrelevant or redundant and from the viewpoint of managing a dataset which can be huge, reducing the number of attributes by selecting only the relevant ones is desirable. In doing so, higher performances with lower computational effort is expected. In this paper, we propose two feature selection algorithms. The limitation of mutual information feature selector (MIFS) is analyzed and a method to overcome this limitation is studied. One of the proposed algorithms makes more considered use of mutual information between input attributes and output classes than the MIFS. What is demonstrated is that the proposed method can provide the performance of the ideal greedy selection algorithm when information is distributed uniformly. The computational load for this algorithm is nearly the same as that of MIFS. In addition, another feature selection algorithm using the Taguchi method is proposed. This is advanced as a solution to the question as to how to identify good features with as few experiments as possible. The proposed algorithms are applied to several classification problems and compared with MIFS. These two algorithms can be combined to complement each other's limitations. The combined algorithm performed well in several experiments and should prove to be a useful method in selecting features for classification problems."}
{"_id": "fce63f2b4531ab2c2d6df239e63bad0f80b086ad", "title": "RF MEMS Actuated Reconfigurable Reflectarray Patch-Slot Element", "text": "This paper describes the design of a reconfigurable reflectarray element using commercially available radio frequency micro-electromechanical system (RF MEMS) switches. The element consists of a microstrip patch on the top surface and a slot with an actuated variable length in the ground plane. RF MEMS switches are mounted on the slot to electronically vary the slot length by actuating the switches and thus obtaining the desired phase response. Waveguide measurements and high frequency structure simulator (HFSS) simulations are used to characterize the reflectarray element. The four MEMS switches element gives 10 independent states with a phase swing of 150 deg and a loss variation from 0.4 dB to 1.5 dB at 2 GHz (more switches can provide larger phase shift). The loss is mainly attributed to the dielectric loss and the conductor loss, which occur due to the relatively strong electric fields in the substrate region below the patch and the large currents on the top surface of the patch, respectively, close to the patch resonance. Detailed analysis is performed to characterize the effect of the switches by taking into consideration the switch model and wire bonding effects."}
{"_id": "8ebe49fd0c1e05911841218cf2311b443efd1c3d", "title": "Generalizing and Improving Weight Initialization", "text": "We propose a new weight initialization suited for arbitrary nonlinearities by generalizing previous weight initializations. The initialization corrects for the influence of dropout rates and an arbitrary nonlinearity\u2019s influence on variance through simple corrective scalars. Consequently, this initialization does not require computing mini-batch statistics nor weight pre-initialization. This simple method enables improved accuracy over previous initializations, and it allows for training highly regularized neural networks where previous initializations lead to poor convergence."}
{"_id": "7fd2f04a48a9f45c1530cdd5f20efed215415a79", "title": "Probabilistic Inductive Logic Programming", "text": "Probabilistic inductive logic programming aka. statistical relational learning addresses one of the central questions of artificial intelligence: the integration of probabilistic reasoning with machine learning and first order and relational logic representations. A rich variety of different formalisms and learning techniques have been developed. A unifying characterization of the underlying learning settings, however, is missing so far. In this chapter, we start from inductive logic programming and sketch how the inductive logic programming formalisms, settings and techniques can be extended to the statistical case. More precisely, we outline three classical settings for inductive logic programming, namely learning from entailment, learning from interpretations, and learning from proofs or traces, and show how they can be adapted to cover state-of-the-art statistical relational learning approaches."}
{"_id": "63ddea17dbecfc43042d6ddc15803824e35cfab9", "title": "Intuitive visualization of Pareto Frontier for multi-objective optimization in n-dimensional performance space", "text": "A visualization methodology is presented in which a Pareto Frontier can be visualized in an intuitive and straightforward manner for an n-dimensional performance space. Based on this visualization, it is possible to quickly identify \u2018good\u2019 regions of the performance and optimal design spaces for a multi-objective optimization application, regardless of space complexity. Visualizing Pareto solutions for more than three objectives has long been a significant challenge to the multi-objective optimization community. The Hyper-space Diagonal Counting (HSDC) method described here enables the lossless visualization to be implemented. The proposed method requires no dimension fixing. In this paper, we demonstrate the usefulness of visualizing n-f space (i.e. for more than three objective functions in a multiobjective optimization problem). The visualization is shown to aid in the final decision of what potential optimal design point should be chosen amongst all possible Pareto solutions."}
{"_id": "ff1bc2fb305ce189c1f680c33a72ec9b60e2a0bc", "title": "MATCHINGS AND PAIRED DOMINATION IN FUZZY GRAPHS USING STRONG ARCS", "text": "The concepts of covering and matching in fuzzy graphs using strong arcs are introduced and obtained the relationship between them analogous to Gallai\u2019s results in graphs. The notion of paired domination in fuzzy graphs using strong arcs is also studied. The strong paired domination number \u03b3spr of complete fuzzy graph and complete bipartite fuzzy graph is determined and obtained bounds for the strong paired domination number of fuzzy graphs. An upper bound for the strong paired domination number of fuzzy graphs in terms of strong independence number is also obtained."}
{"_id": "821c7ecebefaf0fe1fea7121a26c3eace99e8ed5", "title": "Enabling internet banking adoption: An empirical examination with an augmented technology acceptance model (TAM)", "text": "Purpose \u2013 The integration of relevant antecedents into TAM would lead to better understanding of the decision factors which act as enablers for the adoption of internet banking. The purpose of the paper is to determine the influence of the antecedents subjective norm, image, banks initiative, internet banking self-efficacy, internet usage efficacy, trust, perceived risk, trialability and government support on the existing constructs of the technology acceptance model (TAM) and to test measurement invariance and the moderating effect of the demographic variables on the relationship between the latent constructs used in this augmented TAM. Design/methodology/approach \u2013 A survey questionnaire was administered on internet banking users and a total of 300 responses were collected. A two-step approach suggested by Hair et al. (2006) and Schumacker and Lomax (2004) was used in this study. The proposed model was assessed using the confirmatory factor analysis approach. The structural model was then tested in order to establish nomological validity. The data based on four demographic dimensions gender, age, income, education were divided into two groups for each of these demographic dimensions. The invariance test was first performed on the measurement model and then on the structural model. The measurement model and structural model were subjected to tests of equivalence of parameters across groups. Findings \u2013 To a large extent the results of the study supports the proposed model and thereby contributes to understand the influence of subjective norm, image, banks initiative, internet banking self-efficacy, internet usage efficacy, trust, perceived risk and government support on internet banking adoption. The predictor variables in the augmented TAM were able to explain 29.9 per cent of the variance in the actual usage of internet banking as compared to the TAM which was able to explain only 26.5 per cent variance in the actual usage of internet banking. A significant difference in the relationship between the different constructs of the model was observed when the model was subjected to multi-group invariance testing. Research limitations/implications \u2013 The study suffers from the same limitations as most other studies involving TAM. In this study self-reported measures about the usage were taken as the actual usage. The findings of the study can be of use to marketers for target-specific marketing by customizing the marketing campaign focussing on the factors that were found to be strong influencers leading to the usage of internet banking for each target audience. Originality/value \u2013 The main challenge in this study was to develop the conceptual model for the internet banking adoption by extending the TAM and to get a robust theoretical support from the extant literature for the relevant factors along with their relationship to uncover new insights about factors responsible for the internet banking adoption. The augmented model had an improved predictive capability and explanatory utility."}
{"_id": "049c6e5736313374c6e594c34b9be89a3a09dced", "title": "FeUdal Networks for Hierarchical Reinforcement Learning", "text": "We introduce FeUdal Networks (FuNs): a novel architecture for hierarchical reinforcement learning. Our approach is inspired by the feudal reinforcement learning proposal of Dayan and Hinton, and gains power and efficacy by decoupling end-to-end learning across multiple levels \u2013 allowing it to utilise different resolutions of time. Our framework employs a Manager module and a Worker module. The Manager operates at a lower temporal resolution and sets abstract goals which are conveyed to and enacted by the Worker. The Worker generates primitive actions at every tick of the environment. The decoupled structure of FuN conveys several benefits \u2013 in addition to facilitating very long timescale credit assignment it also encourages the emergence of sub-policies associated with different goals set by the Manager. These properties allow FuN to dramatically outperform a strong baseline agent on tasks that involve long-term credit assignment or memorisation. We demonstrate the performance of our proposed system on a range of tasks from the ATARI suite and also from a 3D DeepMind Lab environment."}
{"_id": "142497432fe179ddb6ffe600c64a837ec6179550", "title": "Parameter Space Noise for Exploration", "text": "Deep reinforcement learning (RL) methods generally engage in exploratory behavior through noise injection in the action space. An alternative is to add noise directly to the agent\u2019s parameters, which can lead to more consistent exploration and a richer set of behaviors. Methods such as evolutionary strategies use parameter perturbations, but discard all temporal structure in the process and require significantly more samples. Combining parameter noise with traditional RL methods allows to combine the best of both worlds. We demonstrate that both offand on-policy methods benefit from this approach through experimental comparison of DQN, DDPG, and TRPO on high-dimensional discrete action environments as well as continuous control tasks. Our results show that RL with parameter noise learns more efficiently than traditional RL with action space noise and evolutionary strategies individually."}
{"_id": "2fae24b2cea28785d6a0285cf864c2f6c09cd10f", "title": "BOA: the Bayesian optimization algorithm", "text": "In this paper, an algorithm based on the concepts of genetic algorithms that uses an estimation of a probability distribution of promising solutions in order to generate new candidate solutions is proposed. To estimate the distribution, techniques for modeling multivariate data by Bayesian networks are used. The proposed algorithm identifies, reproduces and mixes building blocks up to a specified order. It is independent of the ordering of the variables in the strings representing the solutions. Moreover, prior information about the problem can be incorporated into the algorithm. However, prior information is not essential. Preliminary experiments show that the BOA outperforms the simple genetic algorithm even on decomposable functions with tight building blocks as a problem size grows."}
{"_id": "4bb773040673cd6f60a2b3eeaeb2878dc6334523", "title": "Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation", "text": "Learning goal-directed behavior in environments with sparse feedback is a major challenge for reinforcement learning algorithms. One of the key difficulties is insufficient exploration, resulting in an agent being unable to learn robust policies. Intrinsically motivated agents can explore new behavior for their own sake rather than to directly solve external goals. Such intrinsic behaviors could eventually help the agent solve tasks posed by the environment. We present hierarchicalDQN (h-DQN), a framework to integrate hierarchical action-value functions, operating at different temporal scales, with goal-driven intrinsically motivated deep reinforcement learning. A top-level q-value function learns a policy over intrinsic goals, while a lower-level function learns a policy over atomic actions to satisfy the given goals. h-DQN allows for flexible goal specifications, such as functions over entities and relations. This provides an efficient space for exploration in complicated environments. We demonstrate the strength of our approach on two problems with very sparse and delayed feedback: (1) a complex discrete stochastic decision process with stochastic transitions, and (2) the classic ATARI game \u2013 \u2018Montezuma\u2019s Revenge\u2019."}
{"_id": "55f0f716a355adc7e29593bf650590b1a5a552b5", "title": "Cell biology of protein misfolding: The examples of Alzheimer's and Parkinson's diseases", "text": "The salutary intersection of fundamental cell biology with the study of disease is well illustrated by the emerging elucidation of neurodegenerative disorders. Novel mechanisms in cell biology have been uncovered through disease-orientated research; for example, the discovery of presenilin as an intramembrane aspartyl protease that processes many diverse proteins within the lipid bilayer. A common theme has arisen in this field: normally-soluble proteins accumulate, misfold and oligomerize, inducing cytotoxic effects that are particularly devastating in the post-mitotic milieu of the neuron."}
{"_id": "8c9a5f60fccede51f42aec7a9d0b433c035f477c", "title": "Topological Data Analysis of Biological Aggregation Models", "text": "We apply tools from topological data analysis to two mathematical models inspired by biological aggregations such as bird flocks, fish schools, and insect swarms. Our data consists of numerical simulation output from the models of Vicsek and D'Orsogna. These models are dynamical systems describing the movement of agents who interact via alignment, attraction, and/or repulsion. Each simulation time frame is a point cloud in position-velocity space. We analyze the topological structure of these point clouds, interpreting the persistent homology by calculating the first few Betti numbers. These Betti numbers count connected components, topological circles, and trapped volumes present in the data. To interpret our results, we introduce a visualization that displays Betti numbers over simulation time and topological persistence scale. We compare our topological results to order parameters typically used to quantify the global behavior of aggregations, such as polarization and angular momentum. The topological calculations reveal events and structure not captured by the order parameters."}
{"_id": "3701bdb05b6764b09a5735cdc3cb9c40736d9765", "title": "The Sound of APALM Clapping: Faster Nonsmooth Nonconvex Optimization with Stochastic Asynchronous PALM", "text": "We introduce the Stochastic Asynchronous Proximal Alternating Linearized Minimization (SAPALM) method, a block coordinate stochastic proximal-gradient method for solving nonconvex, nonsmooth optimization problems. SAPALM is the first asynchronous parallel optimization method that provably converges on a large class of nonconvex, nonsmooth problems. We prove that SAPALM matches the best known rates of convergence \u2014 among synchronous or asynchronous methods \u2014 on this problem class. We provide upper bounds on the number of workers for which we can expect to see a linear speedup, which match the best bounds known for less complex problems, and show that in practice SAPALM achieves this linear speedup. We demonstrate state-of-the-art performance on several matrix factorization problems."}
{"_id": "dfa2176f0c6bc721cee0c0f8d4c94a7d34615cd4", "title": "Characterization of Silicided Polysilicon Fuse Implemented in 65nm Logic CMOS Technology", "text": "NiSi electrically programmable fuses (eFUSE) were fabricated and investigated using 65 nm logic CMOS technology. The optimization of fuse program was achieved by analyzing electrical and physical responses of fuse bits for various conditions. Controlled electromigration of Ni during fuse program was identified as a key factor in achieving reliably high post-program fuse resistance."}
{"_id": "4ab41a28fde26e50a888b5780f56f688b27f57b5", "title": "Prediction of Microwave Absorption Behavior of Grading Honeycomb Composites Based on Effective Permittivity Formulas", "text": "Microwave absorbing honeycomb composites with grading coating thickness are prepared by impregnation of aramid paper frame within water solutions of conductive carbon blacks. Based on effective medium method, grading impedance in the composites can be theoretically expressed by a linear approximation of the inversely tapered effective permittivity at wave propagation direction, which provides a feasible way to optimize the design of microwave absorbing honeycomb structures. The consistent result of calculations and experimental measurements for positive impedance grading design confirms the broadband absorption effect in high frequencies."}
{"_id": "4b2d1869c9cda9630ba55bd87aba397ed2dea3b5", "title": "New support vector algorithms with parametric insensitive/margin model", "text": "In this paper, a modification of v-support vector machines (v-SVM) for regression and classification is described, and the use of a parametric insensitive/margin model with an arbitrary shape is demonstrated. This can be useful in many cases, especially when the noise is heteroscedastic, that is, the noise strongly depends on the input value x. Like the previous v-SVM, the proposed support vector algorithms have the advantage of using the parameter 0 < or = v < or = 1 for controlling the number of support vectors. To be more precise, v is an upper bound on the fraction of training errors and a lower bound on the fraction of support vectors. The algorithms are analyzed theoretically and experimentally."}
{"_id": "172c088dca699eae10250ab5f884f5ac47298da5", "title": "The feeling of effort during mental activity", "text": "The feeling of effort is familiar to most, if not all, humans. Prior research shows that the feeling of effort shapes judgments (e.g., of agency) and decisions (e.g., to quit the current task) in various ways, but the proximal causes of the feeling of effort are not well understood. In this research, I address these proximal causes. In particular, I conducted two preregistered experiments in which participants performed a difficult vs. easy cognitive task, while I measured effort-related phenomenology (feeling of effort) and physiology (pupil dilation) on a moment-to-moment basis. In both experiments, difficult tasks increased the feeling of effort; however, this effect could not be explained by concurrent increases in physiological effort. To explain these findings, I suggest that the feeling of effort during mental activity stems from the decision to exert physiological effort, rather than from physiological effort itself."}
{"_id": "c58a9cb9b17637f294c3a64d8cd102b5391626e6", "title": "Temporal and Real-Time Databases: A Survey", "text": "\u2022 a short biography of John Doe. \u2013 born on April 3rd, 1975 in Smallville. His birth was registered on April 4th, 1975. \u2013 He went to live on his own in Bigtown. Although he moved out on August 26th, 1994, he forgot to register the change of address officially. \u2013 He updated his record on December 27, 1994. \u2013 John Doe was accidentally hit by a truck on April 1st, 2001. The coroner reported his date of death on the next day."}
{"_id": "97cfc710079d8906712eab73e3716afbcf4f6c59", "title": "Conflict-Aware Event-Participant Arrangement and Its Variant for Online Setting", "text": "With the rapid development of Web 2.0 and Online To Offline (O2O) marketing model, various online event-based social networks (EBSNs) are getting popular. An important task of EBSNs is to facilitate the most satisfactory event-participant arrangement for both sides, i.e., events enroll more participants and participants are arranged with personally interesting events. Existing approaches usually focus on the arrangement of each single event to a set of potential users, or ignore the conflicts between different events, which leads to infeasible or redundant arrangements. In this paper, to address the shortcomings of existing approaches, we first identify a more general and useful event-participant arrangement problem, called Global E vent-participant Arrangement with Conflict and C apacity ($GEACC$ ) problem, focusing on the conflicts of different events and making event-participant arrangements in a global view. We find that the GEACC problem is NP-hard due to the conflicts among events. Thus, we design two approximation algorithms with provable approximation ratios and an exact algorithm with pruning technique to address this problem. In addition, we propose an online setting of GEACC, called OnlineGEACC, which is also practical in real-world scenarios. We further design an online algorithm with provable performance guarantee. Finally, we verify the effectiveness and efficiency of the proposed methods through extensive experiments on real and synthetic datasets."}
{"_id": "cdd48a0a8ae54d3f5f68483a48a54f614372bf4b", "title": "Interobserver variation in the assessment of pelvic organ prolapse", "text": "The aim of this study was to determine the interobserver agreement of two grading systems for pelvic organ prolapse: the vaginal profile and the International Continence Society (ICS) draft proposal. Forty-nine consecutive women referred for evaluation of urinary incontinence and/or pelvic organ prolapse were studied. Patients were first examined by a physician and a nurse clinician using the vaginal profile, followed by an examination according to the technique described in the ICS draft proposal for standardization of terminology (1994). \u03ba statistic and Pearson's correlation coefficient were used to determine interobserver variability for the ICS system by overall stage, by stage-specific comparison, and by specific anatomic location. The vaginal profile was evaluated by obtaining a \u03ba for overall degree of prolapse, stage-specific comparison and by anatomic area. The \u03ba for the ICS stage was 0.79 (P<0.001), and the \u03ba for the vaginal profile by area of greatest prolapse was 0.68 (P<0.001), indicating substantial interobserver agreement for both systems. The ICS system was noted to have substantial interobserver agreement by a stage-specific comparison. All anatomic locations of the ICS staging system were found to correlate significantly, and a high degree of interobserver precision was found. The vaginal profile also showed significant interobserver agreement by overall degree of prolapse, by specific degree of prolapse, and by anatomic area. It was concluded that both the proposed ICS staging system and the traditional vaginal profile show significant interobserver agreement both by overall stage, stage-specific analysis and specific location. The registered nurse examination correlated well with the physican examination, indicating that the most important factor in obtaining reproducible results may be definition and close attention to examination technique."}
{"_id": "e73c4118e4311a7a2f1c7b43fc57937c84fcaa38", "title": "\"Good job, you're so smart\": The effects of inconsistency of praise type on young children's motivation.", "text": "Previous research has demonstrated that generic praise (\"good drawer\") is related to children giving up after failure because failure implies the lack of a critical trait (e.g., drawing ability). Conversely, nongeneric praise (\"good job drawing\") is related to mastery motivation because it implies that success is related to effort. Yet children may receive a mixture of these praise types (i.e., inconsistent praise), the effects of which are unclear. We tested how inconsistent praise influenced two components of motivation: self-evaluation and persistence. Kindergarteners (N=135) were randomly assigned to one of five conditions in which consistency of praise type was varied. After two failure scenarios, children reported self-evaluations and persistence. Results indicated that more nongeneric praise related linearly to greater motivation, yet self-evaluation and persistence were impacted differently by inconsistent praise types. Hearing even a small amount of generic praise reduced persistence, whereas hearing a small amount of nongeneric praise preserved self-evaluation."}
{"_id": "1f9c6ce2a2b8d48b32ad26de2a99728ea3f0df04", "title": "Duration prediction using multi-level model for GPR-based speech synthesis", "text": "This paper introduces frame-based Gaussian process regression (GPR) into phone/syllable duration modeling for Thai speech synthesis. The GPR model is designed for predicting framelevel acoustic features using corresponding frame information, which includes relative position in each unit of utterance structure and linguistic information such as tone type and part of speech. Although the GPR-based prediction can be applied to a phone duration model, the use of phone duration model only is not always sufficient to generate natural sounding speech. Specifically, in some languages including Thai, syllable durations affect the perception of sentence structure. In this paper, we propose a duration prediction technique using a multi-level model which includes syllable and phone levels for prediction. In the technique, first, syllable durations are predicted, and then they are used as additional contexts in phone-level model to generate phone duration for synthesizing. Objective and subjective evaluation results show that GPR-based modeling with multi-level model for duration prediction outperforms the conventional HMM-based speech synthesis."}
{"_id": "415b0ed9197b634fed55aed351317e5880866ce1", "title": "The effect of glycolic acid on the treatment of acne in Asian skin.", "text": "BACKGROUND\nGlycolic acid has become important and popular for treating acne.\n\n\nOBJECTIVE\nTo evaluate the efficacy and safety of serial glycolic acid peels with glycolic acid home care products on facial acne lesions and other associated skin problems.\n\n\nMETHODS\nWe collected 40 Asian candidates with moderate to moderately severe acne. They were divided into two groups according to the degree of greasiness of their facial skin. The two groups' members were treated with four series of 35% and 50% glycolic acid peels, respectively. They also used 15% glycolic acid home care products during this study period. The improvement of acne as well as other associated problems were assessed by both the physicians and the patient themselves.\n\n\nRESULTS\nSignificant resolution of comedones, papules, and pustules was found. The skin texture of each candidate was dramatically rejuvenated. Consistent and repetitive treatment with glycolic acid was needed for the apparent improvement of acne scars and cystic lesions. The follicular pores also became comparatively smaller. Furthermore, most of the candidates had much brighter and lighter looking skin. Only small percentage of patients (5.6%) developed side effects.\n\n\nCONCLUSION\nGlycolic acid has considerable therapeutic value for acne with minimal side effects even in Asian skin. It may be an ideal adjunctive treatment of acne."}
{"_id": "69fbb90ef9aeedc588193e593387e040621b825d", "title": "Aspect Based Recommendations: Recommending Items with the Most Valuable Aspects Based on User Reviews", "text": "In this paper, we propose a recommendation technique that not only can recommend items of interest to the user as traditional recommendation systems do but also specific aspects of consumption of the items to further enhance the user experience with those items. For example, it can recommend the user to go to a specific restaurant (item) and also order some specific foods there, e.g., seafood (an aspect of consumption). Our method is called Sentiment Utility Logistic Model (SULM). As its name suggests, SULM uses sentiment analysis of user reviews. It first predicts the sentiment that the user may have about the item based on what he/she might express about the aspects of the item and then identifies the most valuable aspects of the user's potential experience with that item. Furthermore, the method can recommend items together with those most important aspects over which the user has control and can potentially select them, such as the time to go to a restaurant, e.g. lunch vs. dinner, and what to order there, e.g., seafood. We tested the proposed method on three applications (restaurant, hotel, and beauty & spa) and experimentally showed that those users who followed our recommendations of the most valuable aspects while consuming the items, had better experiences, as defined by the overall rating."}
{"_id": "264a84f4d27cd4bca94270620907cffcb889075c", "title": "Deep motion features for visual tracking", "text": "Robust visual tracking is a challenging computer vision problem, with many real-world applications. Most existing approaches employ hand-crafted appearance features, such as HOG or Color Names. Recently, deep RGB features extracted from convolutional neural networks have been successfully applied for tracking. Despite their success, these features only capture appearance information. On the other hand, motion cues provide discriminative and complementary information that can improve tracking performance. Contrary to visual tracking, deep motion features have been successfully applied for action recognition and video classification tasks. Typically, the motion features are learned by training a CNN on optical flow images extracted from large amounts of labeled videos. This paper presents an investigation of the impact of deep motion features in a tracking-by-detection framework. We further show that hand-crafted, deep RGB, and deep motion features contain complementary information. To the best of our knowledge, we are the first to propose fusing appearance information with deep motion features for visual tracking. Comprehensive experiments clearly suggest that our fusion approach with deep motion features outperforms standard methods relying on appearance information alone."}
{"_id": "00b5a6a84c8d516c750706dde9330d33f56e1059", "title": "A New Local Distance-Based Outlier Detection Approach for Scattered Real-World Data", "text": "Detecting outliers which are grossly different from or inconsistent with the remaining dataset is a major challenge in real-world KDD applications. Existing outlier detection methods are ineffective on scattered real-world datasets due to implicit data patterns and parameter setting issues. We define a novel Local Distance-based Outlier Factor (LDOF) to measure the outlier-ness of objects in scattered datasets which addresses these issues. LDOF uses the relative location of an object to its neighbours to determine the degree to which the object deviates from its neighbourhood. Properties of LDOF are theoretically analysed including LDOF\u2019s lower bound and its false-detection probability, as well as parameter settings. In order to facilitate parameter settings in real-world applications, we employ a top-n technique in our outlier detection approach, where only the objects with the highest LDOF values are regarded as outliers. Compared to conventional approaches (such as top-n KNN and top-n LOF), our method top-n LDOF is more effective at detecting outliers in scattered data. It is also easier to set parameters, since its performance is relatively stable over a large range of parameter values, as illustrated by experimental results on both real-world and synthetic datasets."}
{"_id": "f435d36163acde4ad11da04ba2c95dec9c69c00e", "title": "Unified Model for Contrast Enhancement and Denoising", "text": "In this paper, we attempt a challenging task to unify two important complementary operations, i.e. contrast enhancement and denoising, that is required in most image processing applications. The proposed method is implemented using practical analog circuit configurations that can lead to near real-time processing capabilities useful to be integrated with vision sensors. Metrics used for performance includes estimation of Residual Noise Level (RNL), Structural Similarity Index Measure (SSIM), Output-to-Input Contrast Ratio (CRo_i), and its combined score (SCD). The class of contrast stretching methods has resulted in higher noise levels (RNL \u2265 7) along with increased contrast measures (CRo-i \u2265 eight times than that of the input image) and SSIM \u2264 0.52. Denoising methods generates images with lesser noise levels (RNL \u2264 0.2308), poor contrast enhancements (CRo-i \u2264 1.31) and with best structural similarity (SSIM \u2265 0.85). In contrast, the proposed model offers best contrast stretching (CRo-i = 5.83), least noise (RNL = 0.02), a descent structural similarity (SSIM = 0.6453) and the highest combined score (SCD = 169)."}
{"_id": "f0ba1eec44247fb202b099b8f6c67813d37f403f", "title": "A Process Mining Driven Framework for Clinical Guideline Improvement in Critical Care", "text": "This paper presents a framework for process mining in critical care. The framework uses the CRISP-DM model, extended to incorporate temporal and multidimensional aspects (CRISP-TDM), combined with the Patient Journey Modeling Architecture (PaJMa), to provide a structured approach to knowledge discovery of new condition onset pathophysiologies in physiological data streams. The approach is based on temporal abstraction and mining of physiological data streams to develop process flow mappings that can be used to update patient journeys; instantiated in critical care within clinical practice guidelines. We demonstrate the framework within the neonatal intensive care setting, where we are performing clinical research in relation to pathophysiology within physiological streams of patients diagnosed with late onset neonatal sepsis. We present an instantiation of the framework for late onset neonatal sepsis, using CRISP-TDM for the process mining model and PaJMa for the knowledge representation."}
{"_id": "e72935d4d4f4beeed5f5050409b2e304bc5d0f45", "title": "Second to fourth digit ratio: a predictor of adult penile length.", "text": "The second to fourth digit ratio (2D:4D) has been proposed as a putative biomarker for prenatal testosterone and covaries with the sensitivity of the androgen receptor (AR). Both prenatal testosterone and the AR play a central role in penile growth. In this study, we investigated the relationship between digit ratio and penile length. Korean men who were hospitalized for urological surgery at a single tertiary academic centre were examined in this study, and 144 men aged 20 years or older who gave informed consent were prospectively enrolled. Right-hand second- and fourth-digit lengths were measured by a single investigator prior to measurement of penile length. Under anaesthesia, flaccid and stretched penile lengths were measured by another investigator who did not measure nor have any the information regarding the digit lengths. Univariate and multivariate analysis using linear regression models showed that only height was a significant predictive factor for flaccid penile length (univariate analysis: r=0.185, P=0.026; multivariate analysis: r=0.172, P=0.038) and that only digit ratio was a significant predictive factor for stretched penile length (univariate analysis:r=-0.216, P=0.009; multivariate analysis: r=-0.201, P=0.024; stretched penile length=-9.201\u00d7digit ratio + 20.577). Based on this evidence, we suggest that the digit ratio can predict adult penile size and that the effects of prenatal testosterone may in part explain the differences in adult penile length."}
{"_id": "0dfb47e206c762d2f4caeb99fd9019ade78c2c98", "title": "Custom Pictorial Structures for Re-identification", "text": "We propose a novel methodology for re-identification, based on Pictorial Structures (PS). Whenever face or other biometric information is missing, humans recognize an individual by selectively focusing on the body parts, looking for part-to-part correspondences. We want to take inspiration from this strategy in a re-identification context, using PS to achieve this objective. For single image re-identification, we adopt PS to localize the parts, extract and match their descriptors. When multiple images of a single individual are available, we propose a new algorithm to customize the fit of PS on that specific person, leading to what we call a Custom Pictorial Structure (CPS). CPS learns the appearance of an individual, improving the localization of its parts, thus obtaining more reliable visual characteristics for re-identification. It is based on the statistical learning of pixel attributes collected through spatio-temporal reasoning. The use of PS and CPS leads to state-of-the-art results on all the available public benchmarks, and opens a fresh new direction for research on re-identification."}
{"_id": "0ee9610d5771f4bd4b2ab9f077a1572a1007043a", "title": "Feature Mining for Image Classification", "text": "The efficiency and robustness of a vision system is often largely determined by the quality of the image features available to it. In data mining, one typically works with immense volumes of raw data, which demands effective algorithms to explore the data space. In analogy to data mining, the space of meaningful features for image analysis is also quite vast. Recently, the challenges associated with these problem areas have become more tractable through progress made in machine learning and concerted research effort in manual feature design by domain experts. In this paper, we propose a feature mining paradigm for image classification and examine several feature mining strategies. We also derive a principled approach for dealing with features with varying computational demands. Our goal is to alleviate the burden of manual feature design, which is a key problem in computer vision and machine learning. We include an in-depth empirical study on three typical data sets and offer theoretical explanations for the performance of various feature mining strategies. As a final confirmation of our ideas, we show results of a system, that utilizing feature mining strategies matches or outperforms the best reported results on pedestrian classification (where considerable effort has been devoted to expert feature design)."}
{"_id": "46638b810bf69023bca41db664b49bc935bcba3c", "title": "Unsupervised Salience Learning for Person Re-identification", "text": "Human eyes can recognize person identities based on some small salient regions. However, such valuable salient information is often hidden when computing similarities of images with existing approaches. Moreover, many existing approaches learn discriminative features and handle drastic viewpoint change in a supervised way and require labeling new training data for a different pair of camera views. In this paper, we propose a novel perspective for person re-identification based on unsupervised salience learning. Distinctive features are extracted without requiring identity labels in the training procedure. First, we apply adjacency constrained patch matching to build dense correspondence between image pairs, which shows effectiveness in handling misalignment caused by large viewpoint and pose variations. Second, we learn human salience in an unsupervised manner. To improve the performance of person re-identification, human salience is incorporated in patch matching to find reliable and discriminative matched patches. The effectiveness of our approach is validated on the widely used VIPeR dataset and ETHZ dataset."}
{"_id": "4fca83b685a6523f167da5cbe59a51e6b09db61e", "title": "Person Re-identification: What Features Are Important?", "text": "State-of-the-art person re-identification methods seek robust person matching through combining various feature types. Often, these features are implicitly assigned with a single vector of global weights, which are assumed to be universally good for all individuals, independent to their different appearances. In this study, we show that certain features play more important role than others under different circumstances. Consequently, we propose a novel unsupervised approach for learning a bottom-up feature importance, so features extracted from different individuals are weighted adaptively driven by their unique and inherent appearance attributes. Extensive experiments on two public datasets demonstrate that attribute-sensitive feature importance facilitates more accurate person matching when it is fused together with global weights obtained using existing methods."}
{"_id": "601c9ac5859021c5c1321adeb38b177ebad346f0", "title": "Salient Color Names for Person Re-identification", "text": "\uf06c A novel and effective salient color names (SCNCD) based color descriptor is proposed for person re-identification; \uf06c Background information is exploited to enrich the feature representation which is of good robustness against background interference and partial occlusions; \uf06c To tackle different types of illumination changes, color names distribution and color histograms are fused in four color spaces. Experimental Results"}
{"_id": "280bad45331f1bc6ef49d3d6a2c781e00927a2dc", "title": "A Beginner\u2019s Guide to the Mathematics of Neural Networks", "text": "In this paper I try to describe both the role of mathematics in shaping our understanding of how neural networks operate, and the curious new mathematical concepts generated by our attempts to capture neural networks in equations. My target reader being the non-expert, I will present a biased selection of relatively simple examples of neural network tasks, models and calculations, rather than try to give a full encyclopedic review-like account of the many mathematical developments in this eld."}
{"_id": "1da0b10ba41a613f76843e22b332fc019aa4ff9e", "title": "When and how to develop domain-specific languages", "text": "Domain-specific languages (DSLs) are languages tailored to a specific application domain. They offer substantial gains in expressiveness and ease of use compared with general-purpose programming languages in their domain of application. DSL development is hard, requiring both domain knowledge and language development expertise. Few people have both. Not surprisingly, the decision to develop a DSL is often postponed indefinitely, if considered at all, and most DSLs never get beyond the application library stage.Although many articles have been written on the development of particular DSLs, there is very limited literature on DSL development methodologies and many questions remain regarding when and how to develop a DSL. To aid the DSL developer, we identify patterns in the decision, analysis, design, and implementation phases of DSL development. Our patterns improve and extend earlier work on DSL design patterns. We also discuss domain analysis tools and language development systems that may help to speed up DSL development. Finally, we present a number of open problems."}
{"_id": "3b610c4467b32ad65cf3821ac2b8f0808d47cd51", "title": "Local sleep spindle modulations in relation to specific memory cues", "text": "Sleep spindles have been connected to memory processes in various ways. In addition, spindles appear to be modulated at the local cortical network level. We investigated whether cueing specific memories during sleep leads to localized spindle modulations in humans. During learning of word-location associations, words presented in the left and right visual hemifields were paired with different odors. By presenting a single odor during a subsequent nap, we aimed to selectively reactivate a subset of the studied material in sleeping subjects. During sleep, we observed topographically restricted spindle responses to memory cues, suggesting successful reactivation of specific memory traces. In particular, we found higher amplitude and greater incidence of fast spindles over posterior brain areas involved in visuospatial processing, contralateral to the visual field being cued. These results suggest that sleep spindles in different cortical areas reflect the reprocessing of specific memory traces."}
{"_id": "ae0514be12d200bd9fecf0d834bdcb30288c7a1e", "title": "Automatic Opinion Question Generation", "text": "We study the problem of opinion question generation from sentences with the help of community-based question answering systems. For this purpose, we use a sequence to sequence attentional model, and we adopt coverage mechanism to prevent sentences from repeating themselves. Experimental results on the Amazon question/answer dataset show an improvement in automatic evaluation metrics as well as human evaluations from the state-of-theart question generation systems."}
{"_id": "a51d49b75dff19be114a603cb8a80ff46ba9fdbb", "title": "Predicting IMDB Movie Ratings Using Social Media", "text": "We predict IMDb movie ratings and consider two sets of features: surface and textual features. For the latter, we assume that no social media signal is isolated and use data from multiple channels that are linked to a particular movie, such as tweets from Twitter and comments from YouTube. We extract textual features from each channel to use in our prediction model and we explore whether data from either of these channels can help to extract a better set of textual feature for prediction. Our best performing model is able to rate movies very close to the observed values."}
{"_id": "97137d5154a9f22a5d9ecc32e8e2b95d07a5a571", "title": "Facial expression recognition based on local region specific features and support vector machines", "text": "Facial expressions are one of the most powerful, natural and immediate means for human being to communicate their emotions and intensions. Recognition of facial expression has many applications including human-computer interaction, cognitive science, human emotion analysis, personality development etc. In this paper, we propose a new method for the recognition of facial expressions from single image frame that uses combination of appearance and geometric features with support vector machines classification. In general, appearance features for the recognition of facial expressions are computed by dividing face region into regular grid (holistic representation). But, in this paper we extracted region specific appearance features by dividing the whole face region into domain specific local regions. Geometric features are also extracted from corresponding domain specific regions. In addition, important local regions are determined by using incremental search approach which results in the reduction of feature dimension and improvement in recognition accuracy. The results of facial expressions recognition using features from domain specific regions are also compared with the results obtained using holistic representation. The performance of the proposed facial expression recognition system has been validated on publicly available extended Cohn-Kanade (CK+) facial expression data sets."}
{"_id": "9596656914665f262decbce2e9a757fbb9b47a64", "title": "Statistical physics of vaccination", "text": "Historically, infectious diseases caused considerable damage to human societies, and they continue to do so today. To help reduce their impact, mathematical models of disease transmission have been studied to help understand disease dynamics and inform prevention strategies. Vaccination\u2013one of the most important preventive measures of modern times\u2013is of great interest both theoretically and empirically. And in contrast to traditional approaches, recent research increasingly explores the pivotal implications of individual behavior and heterogeneous contact patterns in populations. Our report reviews the developmental arc of theoretical epidemiology with emphasis on vaccination, as it led from classical models assuming homogeneously mixing (mean-field) populations and ignoring human behavior, to recent models that account for behavioral feedback and/or population spatial/social structure. Many of the methods used originated in statistical physics, such as lattice and network models, and their associated analytical frameworks. Similarly, the feedback loop between vaccinating behavior and disease propagation forms a coupled nonlinear system with analogs in physics. We also review the new paradigm of digital epidemiology, wherein sources of digital data such as online social media are mined for high-resolution information on epidemiologically relevant individual behavior. Armed with the tools and concepts of statistical physics, and further assisted by new sources of digital data, models that capture nonlinear interactions between behavior and disease dynamics offer a novel way of modeling real-world phenomena, and can help improve health outcomes. We conclude the review by discussing open problems in the field and promising directions for future research."}
{"_id": "b6e330eb5e8299eae17b4ad629b852e4d8500026", "title": "A Study of Various Feature Extraction Methods on a Motor Imagery Based Brain Computer Interface System", "text": "INTRODUCTION\nBrain Computer Interface (BCI) systems based on Movement Imagination (MI) are widely used in recent decades. Separate feature extraction methods are employed in the MI data sets and classified in Virtual Reality (VR) environments for real-time applications.\n\n\nMETHODS\nThis study applied wide variety of features on the recorded data using Linear Discriminant Analysis (LDA) classifier to select the best feature sets in the offline mode. The data set was recorded in 3-class tasks of the left hand, the right hand, and the foot motor imagery.\n\n\nRESULTS\nThe experimental results showed that Auto-Regressive (AR), Mean Absolute Value (MAV), and Band Power (BP) features have higher accuracy values,75% more than those for the other features.\n\n\nDISCUSSION\nThese features were selected for the designed real-time navigation. The corresponding results revealed the subject-specific nature of the MI-based BCI system; however, the Power Spectral Density (PSD) based \u03b1-BP feature had the highest averaged accuracy."}
{"_id": "fc8b516c6f37b1be737891cdaedf73348a0400aa", "title": "A Test Generation Strategy for Pairwise Testing", "text": "\u00d0Pairwise testing is a specification-based testing criterion, which requires that for each pair of input parameters of a system, every combination of valid values of these two parameters be covered by at least one test case. In this paper, we propose a new test generation strategy for pairwise testing."}
{"_id": "5cd28cdc4c82f788dee27cb73d7d9280cf9c7343", "title": "Spatial graphlet matching kernel for recognizing aerial image categories", "text": "This paper presents a method for recognizing aerial image categories based on matching graphlets(i.e., small connected subgraphs) extracted from aerial images. By constructing a Region Adjacency Graph (RAG) to encode the geometric property and the color distribution of each aerial image, we cast aerial image category recognition as RAG-to-RAG matching. Based on graph theory, RAG-to-RAG matching is conducted by matching all their respective graphlets. Towards an effective graphlet matching process, we develop a manifold embedding algorithm to transfer different-sized graphlets into equal length feature vectors and further integrate these feature vectors into a kernel. This kernel is used to train a SVM [8] classifier for aerial image categories recognition. Experimental results demonstrate our method outperforms several state-of-the-art object/scene recognition models."}
{"_id": "dd4170d54b46d37792976fd4a961c6f389202981", "title": "A Study of Android Malware Detection Techniques and Machine Learning", "text": "Android OS is one of the widely used mobile Operating Systems. The number of malicious applications and adwares are increasing constantly on par with the number of mobile devices. A great number of commercial signature based tools are available on the market which prevent to an extent the penetration and distribution of malicious applications. Numerous researches have been conducted which claims that traditional signature based detection system work well up to certain level and malware authors use numerous techniques to evade these tools. So given this state of affairs, there is an increasing need for an alternative, really tough malware detection system to complement and rectify the signature based system. Recent substantial research focused on machine learning algorithms that analyze features from malicious application and use those features to classify and detect unknown malicious applications. This study summarizes the evolution of malware detection techniques based on machine learning algorithms focused"}
{"_id": "273235553c5e71ff27b505ec9d7fe8c986a626f6", "title": "Selecting image pairs for SfM by introducing Jaccard similarity", "text": "We present a new approach for selecting image pairs that are more likely to match in Structure from Motion (SfM). We propose to use Jaccard Similarity (JacS) which shows how many different visual words is shared by an image pair. In our method, the similarity between images is evaluated using JacS of bag-of-visual-words in addition to tf-idf, which is popular for this purpose. To evaluate the efficiency of our method, we carry out experiments on our original datasets as well as on \u201cPantheon\u201d dataset, which is derived from Flickr. The result of our method using both JacS and tf-idf is better than the results of a standard method using tf-idf only."}
{"_id": "ec03802473f86062e324a225e619866fec091135", "title": "Collective List-Only Entity Linking: A Graph-Based Approach", "text": "List-only entity linking (EL) is the task of mapping ambiguous mentions in texts to target entities in a group of entity lists. Different from traditional EL task, which leverages rich semantic relatedness in knowledge bases to improve linking accuracy, the list-only EL can merely take advantage of co-occurrences information in entity lists. State-of-the-art work utilizes co-occurrences information to enrich entity descriptions which are further used to calculate local compatibility between mentions and entities to determine results. Nonetheless, entity coherence is also deemed to play an important part in EL, which is yet currently neglected. In this paper, in addition to local compatibility, we take into account global coherence among entities. Specifically, we propose to harness co-occurrences in entity lists for mining both explicit and implicit entity relations. The relations are then integrated into an entity graph, on which personalized PageRank is incorporated to compute entity coherence. The final results are derived by combining local mention-entity similarity and global entity coherence. The experimental studies validate the superiority of our method. Our proposal not only improves the performance of the list-only EL, but also opens up the bridge between the list-only EL and conventional EL solutions."}
{"_id": "7bea79484eb27830076764bfa2c4adf1ec392a9a", "title": "Keeping up to date: An academic researcher's information journey", "text": "Keeping up to date with research developments is a central activity of academic researchers, but researchers face difficulties in managing the rapid growth of available scientific information. This study examined how researchers stay up to date, using the information journey model as a framework for analysis and investigating which dimensions influence information behaviors. We designed a 2-round study involving semistructured interviews and prototype testing with 61 researchers with 3 levels of seniority (PhD student to professor). Data were analyzed following a semistructured qualitative approach. Five key dimensions that influence information behaviors were identified: level of seniority, information sources, state of the project, level of familiarity, and how well defined the relevant community is. These dimensions are interrelated and their values determine the flow of the information journey. Across all levels of professional expertise, researchers used similar hard (formal) sources to access content, while soft (interpersonal) sources were used to filter information. An important \u201cpain point\u201d that future information tools should address is helping researchers filter information at the point of need. Introduction"}
{"_id": "e39a3f08e2f27f5799e2cd345c1c12eb6a6703e7", "title": "Aspects of anthocyanin absorption, metabolism and pharmacokinetics in humans.", "text": "Interest in the health-promoting properties of berry anthocyanins is intensifying; however, findings are primarily based on in vitro characteristics, leaving mechanisms associated with absorption, metabolism and pharmacokinetics largely unexplored. The present review integrates the available anthocyanin literature with that of similar flavonoids or polyphenols in order to form hypotheses regarding absorption, metabolism and clearance in humans. Of the limited available literature regarding the absorption and clearance kinetics of anthocyanins, maximum plasma concentrations are reported anywhere between 1.4 and 592 nmol/l and occur at 0.5-4 h post-consumption (doses; 68-1300 mg). Average urinary excretion is reported between 0.03 and 4 % of the ingested dose, having elimination half-lives of 1.5-3 h. In addition, much is unknown regarding the metabolism of anthocyanins. The most commonly cited conjugation reactions involved in the metabolism of other flavonoids include glucuronidation, methylation and sulfation. It is reasonable to suspect that anthocyanins are metabolised in much the same manner; however, until recently, there was little evidence to suggest that anthocyanins were metabolised to any significant extent. New evidence now suggests that anthocyanins are absorbed and transported in human serum and urine primarily as metabolites, with recent studies documenting as much as 68-80 % of anthocyanins as metabolised derivatives in human urine. Further research is required to resolve mechanisms associated with the absorption, metabolism and clearance of anthocyanins in order to establish their true biological activities and health effects. The presented evidence will hopefully focus future research, refining study design and propagating a more complete understanding of anthocyanins' biological significance in humans."}
{"_id": "cd596a2682d74bdfa7b7160dd070b598975e89d9", "title": "Mood Detection : Implementing a facial expression recognition system", "text": "Facial expressions play a significant role in human dialogue. As a result, there has been considerable work done on the recognition of emotional expressions and the application of this research will be beneficial in improving human-machine dialogue. One can imagine the improvements to computer interfaces, automated clinical (psychological) research or even interactions between humans and autonomous robots."}
{"_id": "5bda1eb97d05be39679c992b3e1251dc65908923", "title": "FRACTIONAL DIFFERENTIAL EQUATIONS IN BANACH SPACES", "text": "This paper deals with some impulsive fractional differential equations in Banach spaces. Utilizing the Leray-Schauder fixed point theorem and the impulsive nonlinear singular version of the Gronwall inequality, the existence of PC-mild solutions for some fractional differential equations with impulses are obtained under some easily checked conditions. At last, an example is given for demonstration."}
{"_id": "7ed6b4cb15c141118a721d576d801a1d0a2ebed8", "title": "Vehicle License Plate Recognition Based on Extremal Regions and Restricted Boltzmann Machines", "text": "This paper presents a vehicle license plate recognition method based on character-specific extremal regions (ERs) and hybrid discriminative restricted Boltzmann machines (HDRBMs). First, coarse license plate detection (LPD) is performed by top-hat transformation, vertical edge detection, morphological operations, and various validations. Then, character-specific ERs are extracted as character regions in license plate candidates. Followed by suitable selection of ERs, the segmentation of characters and coarse-to-fine LPD are achieved simultaneously. Finally, an offline trained pattern classifier of HDRBM is applied to recognize the characters. The proposed method is robust to illumination changes and weather conditions during 24 h or one day. Experimental results on thorough data sets are reported to demonstrate the effectiveness of the proposed approach in complex traffic environments."}
{"_id": "12fc800d955f137274a8b51bd7d2c46f5f7d7222", "title": "A vehicle license plate recognition system based on analysis of maximally stable extremal regions", "text": "Vehicle License Plate Recognition (VLPR) system is a core module in Intelligent Transportation Systems (ITS). In this paper, a VLPR system is proposed. Considering that license plate localization is the most important and difficult part in VLPR system, we present an effective license plate localization method based on analysis of Maximally Stable Extremal Region (MSER) features. Firstly, MSER detector is utilized to extract candidate character regions. Secondly, the exact locations of license plates are inferred according to the arrangement of characters in standard license plates. The advantage of this license plate localization method is that less assumption of environmental illumination, weather and other conditions is made. After license plate localization, we continue to recognize the license plate characters and color to complete the whole VLPR system. Finally, the proposed VLPR system is tested on our own collected dataset. The experimental results show the availability and effectiveness of our VLPR system in locating and recognizing all the explicit license plates in an image."}
{"_id": "64d8ec332c895522a0c2a72f379a454ce78779fe", "title": "A wideband unity-gain buffer in 0.13-\u03bcm CMOS", "text": "In this paper, an ultra wideband analog voltage-mode buffer is presented which can drive a load impedance of 50 \u03a9. The presented feedback-based buffer uses a compound amplifier which is a parallel combination of a high-DC gain operational amplifier and a operation transconductance amplifier to achieve a high unity gain bandwidth. A proof-of-concept prototype is designed and fabricated in a 0.13 \u03bcm CMOS process. The simulation and measurement results of the proposed buffer are in good agreement. The prototype buffer circuit consumes 7.34 mW from a 1.3-V supply, while buffering a 2 GHz sinusoidal input signal with a 0.4 V peak-to-peak (Vpp) amplitude and driving an AC-coupled 50-\u03a9 load."}
{"_id": "67027a3428c5f7246fa64b97c79ed0e2d268193d", "title": "How can human motion prediction increase transparency?", "text": "A major issue in the field of human-robot interaction for assistance to manipulation is transparency. This basic feature qualifies the capacity for a robot to follow human movements without any human-perceptible resistive forces. In this paper we address the issue of human motion prediction in order to increase the transparency of a robotic manipulator. Our aim is not to predict the motion itself, but to study how this prediction can be used to improve the robot transparency. For this purpose, we have designed a setup for performing basic planar manipulation tasks involving movements that are demanded to the subject and thus easily predictable. Moreover, we have developed a general controller which takes a predicted trajectory (recorded from offline free motion experiments) as an input and feeds the robot motors with a weighted sum of three controllers: torque feedforward, variable stiffness control and force feedback control. Subjects were then asked to perform the same task but with or without the robot assistance (which was not visible to the subject), and with several sets of gains for the controller tuning. First results seems to indicate that when a predictive controller with open loop torque feedforward is used, in conjunction with force- feedback control, the interaction forces are minimized. Therefore, the transparency is increased."}
{"_id": "a9514224ebb2e591312e02e00db9bfc2594f6c48", "title": "Brain development and ADHD.", "text": "Attention-Deficit/Hyperactivity Disorder (ADHD) is characterized by excessive inattention, hyperactivity, and impulsivity, either alone or in combination. Neuropsychological findings suggest that these behaviors result from underlying deficits in response inhibition, delay aversion, and executive functioning which, in turn, are presumed to be linked to dysfunction of frontal-striatal-cerebellar circuits. Over the past decade, magnetic resonance imaging (MRI) has been used to examine anatomic differences in these regions between ADHD and control children. In addition to quantifying differences in total cerebral volume, specific areas of interest have been prefrontal regions, basal ganglia, the corpus callosum, and cerebellum. Differences in gray and white matter have also been examined. The ultimate goal of this research is to determine the underlying neurophysiology of ADHD and how specific phenotypes may be related to alterations in brain structure."}
{"_id": "bdbc3f5c86165ec51375d77c5d38a65804a34ed6", "title": "Media Framing and Public Attitudes Toward Biofuels", "text": "This study investigates the relationship between media framing and public opinion on the issue of biofuels\u2014transportation fuels made from plants, animal products, or organic waste. First, the paper investigates how media framing of biofuels has changed since the issue regained national prominence in the early 2000s. Through a detailed content analysis of newspaper coverage, the paper documents an increase in negative frames between 1999 and 2008, especially frames focusing on the negative economic effects of biofuels on consumers. Second, using data from a 2010 Internet survey of a random sample of the U.S. public, the paper analyzes the relative influence of these new media frames on public attitudes toward biofuels compared with other common predictors of public opinion, such as party ID, regional economic interests, and personal identity as an environmentalist. In general, the results confirm that public attitudes toward biofuels appear to be shaped by these new media frames, especially among those who indicate a high degree of attention to the media, suggesting the relative importance of framing effects on policy attitudes for environmental and energy policies in general."}
{"_id": "0fb03917b4c6ec8b840cecf25b5d3cd0c101fb98", "title": "Looking Beyond the Visible Scene", "text": "A common thread that ties together many prior works in scene understanding is their focus on the aspects directly present in a scene such as its categorical classification or the set of objects. In this work, we propose to look beyond the visible elements of a scene; we demonstrate that a scene is not just a collection of objects and their configuration or the labels assigned to its pixels - it is so much more. From a simple observation of a scene, we can tell a lot about the environment surrounding the scene such as the potential establishments near it, the potential crime rate in the area, or even the economic climate. Here, we explore several of these aspects from both the human perception and computer vision perspective. Specifically, we show that it is possible to predict the distance of surrounding establishments such as McDonald's or hospitals even by using scenes located far from them. We go a step further to show that both humans and computers perform well at navigating the environment based only on visual cues from scenes. Lastly, we show that it is possible to predict the crime rates in an area simply by looking at a scene without any real-time criminal activity. Simply put, here, we illustrate that it is possible to look beyond the visible scene."}
{"_id": "07551d5841e695ab2bafd4a97f7231a18e13ba17", "title": "Single motor actuated peristaltic wave generator for a soft bodied worm robot", "text": "This paper presents the design and development of a single motor actuated peristaltic worm robot with three segments using a bio-inspired method of locomotion with one actuator that achieves optimised worm like peristaltic motion. Each segment consists of two solid circular disks that have a tendon connected through to the drive mechanism using a Bowden cable and a soft rubber skin that deforms under the compression of the tendon. Our hypothesis that a tuned peristaltic waveform can achieve improved performance of locomotion distance and clamping strength is proven using an initial test platform capable of demonstrating varying waveform types with multiple actuators. Three experiments were undertaken: (i) moving along a flat surface, (ii) moving through a confined tunnel (iii) moving through a confined tunnel whilst pulling a payload. Results from these experiments have identified the optimal parameters for a smart gearbox capable of achieving the same optimal peristaltic waveform signal as the more complex test platform but with only one actuator driving all three worm segments. Unlike other examples of peristaltic worm robots, this example uses a control method embedded within the mechanics of the design removing excessive number of actuators which contributes to miniaturisation, reduces power consumption and simplifies the overall design."}
{"_id": "290589f6d68cfcdd7c12e7fa6c07a863ec1e8e37", "title": "A Powerful Generative Model Using Random Weights for the Deep Image Representation", "text": "To what extent is the success of deep visualization due to the training? Could we do deep visualization using untrained, random weight networks? To address this issue, we explore new and powerful generative models for three popular deep visualization tasks using untrained, random weight convolutional neural networks. First we invert representations in feature spaces and reconstruct images from white noise inputs. The reconstruction quality is statistically higher than that of the same method applied on well trained networks with the same architecture. Next we synthesize textures using scaled correlations of representations in multiple layers and our results are almost indistinguishable with the original natural texture and the synthesized textures based on the trained network. Third, by recasting the content of an image in the style of various artworks, we create artistic images with high perceptual quality, highly competitive to the prior work of Gatys et al. on pretrained networks. To our knowledge this is the first demonstration of image representations using untrained deep neural networks. Our work provides a new and fascinating tool to study the representation of deep network architecture and sheds light on new understandings on deep visualization."}
{"_id": "a8b0db6cc13bb7d3292c26ee51efecd5e3565bf0", "title": "Analysis of Flux Leakage in a Segmented Core Brushless Permanent Magnet Motor", "text": "This paper presents an analytical model to analyze the leakage flux associated with a brushless permanent magnet motor that utilizes segmented stator core. Due to the presence of a segmented stator structure, the leakage of flux through the tooth tips becomes worthy of analysis and consideration in such a motor. A leakage parameter alpha is developed to show the effects of various design parameters on tooth-tip flux leakage that is essential in accurately predicting the air gap flux density and other output parameters in the motor. Finite-element analysis and experimental verifications are also done to validate the analytical model."}
{"_id": "6137115b6044d9eb7846eb4d9212b9aaa5483184", "title": "Double-Input Bidirectional DC/DC Converter Using Cell-Voltage Equalizer With Flyback Transformer", "text": "In this paper, a double-input bidirectional dc/dc converter that uses a rechargeable battery and an ultracapacitor (UC) is proposed. This converter is connected to a cell-voltage equalizer between the battery and UC. The cell-voltage equalizer enables cell-voltage equalization and energy transfer between the battery and UC. This converter has six operational modes. These modes are investigated by reduced-power-scale circuit experiment. In addition, the circuit operation under the combination of the six modes is verified using a PSIM simulator in a large power scale."}
{"_id": "7c59bd355b15b1b5c7c13a865ec9dca9e146a1f7", "title": "An Improved Multiobjective Optimization Evolutionary Algorithm Based on Decomposition for Complex Pareto Fronts", "text": "The multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been shown to be very efficient in solving multiobjective optimization problems (MOPs). In practice, the Pareto-optimal front (POF) of many MOPs has complex characteristics. For example, the POF may have a long tail and sharp peak and disconnected regions, which significantly degrades the performance of MOEA/D. This paper proposes an improved MOEA/D for handling such kind of complex problems. In the proposed algorithm, a two-phase strategy (TP) is employed to divide the whole optimization procedure into two phases. Based on the crowdedness of solutions found in the first phase, the algorithm decides whether or not to delicate computational resources to handle unsolved subproblems in the second phase. Besides, a new niche scheme is introduced into the improved MOEA/D to guide the selection of mating parents to avoid producing duplicate solutions, which is very helpful for maintaining the population diversity when the POF of the MOP being optimized is discontinuous. The performance of the proposed algorithm is investigated on some existing benchmark and newly designed MOPs with complex POF shapes in comparison with several MOEA/D variants and other approaches. The experimental results show that the proposed algorithm produces promising performance on these complex problems."}
{"_id": "4dfb425df1b7d4873b5a29c4cf7f7db9fa824133", "title": "Towards a truly mobile auditory brain-computer interface: exploring the P300 to take away.", "text": "In a previous study we presented a low-cost, small, and wireless 14-channel EEG system suitable for field recordings (Debener et al., 2012, psychophysiology). In the present follow-up study we investigated whether a single-trial P300 response can be reliably measured with this system, while subjects freely walk outdoors. Twenty healthy participants performed a three-class auditory oddball task, which included rare target and non-target distractor stimuli presented with equal probabilities of 16%. Data were recorded in a seated (control condition) and in a walking condition, both of which were realized outdoors. A significantly larger P300 event-related potential amplitude was evident for targets compared to distractors (p<.001), but no significant interaction with recording condition emerged. P300 single-trial analysis was performed with regularized stepwise linear discriminant analysis and revealed above chance-level classification accuracies for most participants (19 out of 20 for the seated, 16 out of 20 for the walking condition), with mean classification accuracies of 71% (seated) and 64% (walking). Moreover, the resulting information transfer rates for the seated and walking conditions were comparable to a recently published laboratory auditory brain-computer interface (BCI) study. This leads us to conclude that a truly mobile auditory BCI system is feasible."}
{"_id": "7232404c653814b90c044aa6d0d3ed83fd29f570", "title": "Exploring Recurrent Neural Networks for On-Line Handwritten Signature Biometrics", "text": "Systems based on deep neural networks have made a breakthrough in many different pattern recognition tasks. However, the use of these systems with traditional architectures seems not to work properly when the amount of training data is scarce. This is the case of the on-line signature verification task. In this paper, we propose a novel writer-independent on-line signature verification systems based on Recurrent Neural Networks (RNNs) with a Siamese architecture whose goal is to learn a dissimilarity metric from the pairs of signatures. To the best of our knowledge, this is the first time these recurrent Siamese networks are applied to the field of on-line signature verification, which provides our main motivation. We propose both Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) systems with a Siamese architecture. In addition, a bidirectional scheme (which is able to access both past and future context) is considered for both LSTM- and GRU-based systems. An exhaustive analysis of the system performance and also the time consumed during the training process for each recurrent Siamese network is carried out in order to compare the advantages and disadvantages for practical applications. For the experimental work, we use the BiosecurID database comprised of 400 users who contributed a total of 11,200 signatures in four separated acquisition sessions. Results achieved using our proposed recurrent Siamese networks have outperformed the state-of-the-art on-line signature verification systems using the same database."}
{"_id": "b2dac341df54e5f744d5b6562d725d254aae8e80", "title": "OpenHAR: A Matlab Toolbox for Easy Access to Publicly Open Human Activity Data Sets", "text": "This study introduces OpenHAR, a free Matlab toolbox to combine and unify publicly open data sets. It provides an easy access to accelerometer signals of ten publicly open human activity data sets. Data sets are easy to access as OpenHAR provides all the data sets in the same format. In addition, units, measurement range and labels are unified, as well as, body position IDs. Moreover, data sets with different sampling rates are unified using downsampling. What is more, data sets have been visually inspected to find visible errors, such as sensor in wrong orientation. OpenHAR improves re-usability of data sets by fixing these errors. Altogether OpenHAR contains over 65 million labeled data samples. This is equivalent to over 280 hours of data from 3D accelerometers. This includes data from 211 study subjects performing 17 daily human activities and wearing sensors in 14 different body positions."}
{"_id": "51bbd0d4695aabdf1eb2d9ff612499ec929e7812", "title": "Predicting Sequences of Clinical Events by Using a Personalized Temporal Latent Embedding Model", "text": "As a result of the recent trend towards digitization -- which increasingly affects evidence-based medicine, accountable care, personalized medicine, and medical \"Big Data\" analysis -- growing amounts of clinical data are becoming available for analysis. In this paper, we follow the idea that one can model clinical processes based on clinical data, which can then be the basis for many useful applications. We model the whole clinical evolution of each individual patient, which is composed of thousands of events such as ordered tests, lab results and diagnoses. Specifically, we base our work on a dataset provided by the Charite\u0301 University Hospital of Berlin which is composed of patients that suffered from kidney failure and either obtained an organ transplant or are still waiting for one. These patients face a lifelong treatment and periodic visits to the clinic. Our goal is to develop a system to predict the sequence of events recorded in the electronic medical record of each patient, and thus to develop the basis for a future clinical decision support system. For modelling, we use machine learning approaches which are based on a combination of the embedding of entities and events in a multidimensional latent space, in combination with Neural Network predictive models. Similar approaches have been highly successful in statistical models for recommendation systems, language models, and knowledge graphs. We extend existing embedding models to the clinical domain, in particular with respect to temporal sequences, long-term memories and personalization. We compare the performance of our proposed models with standard approaches such as K-nearest neighbors method, Nai\u0308ve Bayes classifier and Logistic Regression, and obtained favorable results with our proposed model."}
{"_id": "638d50100fe392ae705a0e5f7d9b47ca1dad3eea", "title": "A survey of video datasets for anomaly detection in automated surveillance", "text": "Automated anomaly detection in surveillance system demands to address various problems of real-time monitoring. Due to these challenges, the researchers from computer vision community focused on automation of the surveillance system. The research carried out in this domain has been proposed many datasets which are suitable for different applications. Though many prominent datasets are available, there is lack of description and categorization of these datasets according to the application demand. A survey on anomaly detection video datasets is necessary for researchers to understand and progress in this domain. This paper presents various datasets description briefly."}
{"_id": "5edd55391b2fcbad4f25c3da2194efda862f61fe", "title": "Logic and Artificial Intelligence", "text": "Nilsson, N.J., Logic and artificial intelligence, Artificial Intelligence 47 (1990) 31-56. The theoretical foundations of the logical approach to artificial intelligence are presented. Logical languages are widely used for expressing the declarative knowledge needed in artificial intelligence systems. Symbolic logic also provides a clear semantics for knowledge representation languages and a methodology for analyzing and comparing deductive inference techniques. Several observations gained from experience with the approach are discussed. Finally, we confront some challenging problems for artificial intelligence and describe what is being done in an attempt to solve them."}
{"_id": "a3bb4cebd1b33e4e780a57ba7ac088156f334730", "title": "Complete mesocolic excision with central vascular ligation produces an oncologically superior specimen compared with standard surgery for carcinoma of the colon.", "text": "PURPOSE\nThe plane of surgery in colonic cancer has been linked to patient outcome although the optimal extent of mesenteric resection is still unclear. Surgeons in Erlangen, Germany, routinely perform complete mesocolic excision (CME) with central vascular ligation (CVL) and report 5-year survivals of higher than 89%. We aimed to further investigate the importance of CME and CVL surgery for colonic cancer by comparison with a series of standard specimens.\n\n\nMETHODS\nThe fresh photographs of 49 CME and CVL specimens from Erlangen and 40 standard specimens from Leeds, United Kingdom, for primary colonic adenocarcinoma were collected. Precise tissue morphometry and grading of the plane of surgery were performed before comparison to histopathologic variables.\n\n\nRESULTS\nCME and CVL surgery removed more tissue compared with standard surgery in terms of the distance between the tumor and the high vascular tie (median, 131 v 90 mm; P < .0001), the length of large bowel (median, 314 v 206 mm; P < .0001), and ileum removed (median, 83 v 63 mm; P = .003), and the area of mesentery (19,657 v 11,829 mm(2); P < .0001). In addition, CME and CVL surgery was associated with more mesocolic plane resections (92% v 40%; P < .0001) and a greater lymph node yield (median, 30 v 18; P < .0001).\n\n\nCONCLUSION\nSurgeons in Erlangen routinely practicing CME and CVL surgery remove more mesocolon and are more likely to resect in the mesocolic plane when compared with standard excisions. This, along with the associated greater lymph node yield, may partially explain the high 5-year survival rates reported in Erlangen."}
{"_id": "7347b4601078bd52eec80d5de29f801890f82de3", "title": "Simple broadband Gysel combiner with a single coupled line", "text": "A coupled-Gysel broadband combiner/divider is proposed and demonstrated. The new concept relies on using a single coupled line segment in the design. Significant improvement in bandwidth are realized while maintaining low-loss, ease of design, and flexibility. The coupled-Gysel is demonstrated with a 2.5 - 8 GHz (105% fractional bandwidth) divider with 0.1 dB divider loss, and a 3.4 - 10.2 GHz (100% fractional bandwidth) with 0.2 dB divider loss."}
{"_id": "1b9eee299b5e6362782c3d4ad51e72039ed490e6", "title": "A Deep Learning Approach for Population Estimation from Satellite Imagery", "text": "Knowing where people live is a fundamental component of many decision making processes such as urban development, infectious disease containment, evacuation planning, risk management, conservation planning, and more. While bottom-up, survey driven censuses can provide a comprehensive view into the population landscape of a country, they are expensive to realize, are infrequently performed, and only provide population counts over broad areas. Population disaggregation techniques and population projection methods individually address these shortcomings, but also have shortcomings of their own. To jointly answer the questions of \"where do people live\" and \"how many people live there,\" we propose a deep learning model for creating high-resolution population estimations from satellite imagery. Specifically, we train convolutional neural networks to predict population in the USA at a 0.01\u00b0x0.01\u00b0 resolution grid from 1-year composite Landsat imagery. We validate these models in two ways: quantitatively, by comparing our model's grid cell estimates aggregated at a county-level to several US Census county-level population projections, and qualitatively, by directly interpreting the model's predictions in terms of the satellite image inputs. We find that aggregating our model's estimates gives comparable results to the Census county-level population projections and that the predictions made by our model can be directly interpreted, which give it advantages over traditional population disaggregation methods. In general, our model is an example of how machine learning techniques can be an effective tool for extracting information from inherently unstructured, remotely sensed data to provide effective solutions to social problems."}
{"_id": "1db02cdbd767ac812b3b3c43aee9baf73f292739", "title": "High Voltage Pulsed Power Supply Using IGBT Stacks", "text": "High voltage pulsed power supply using IGBT (insulated gate bipolar transistor) stacks and pulse transformer for plasma source ion implantation is proposed. To increase voltage rating, twelve IGBTs are used in series at each IGBT stack and a step-up pulse transformer is utilized. To increase the current rating, the proposed system makes use of synchronized three pulse generator modules composed of diodes, capacitors and IGBT stacks. The proposed pulsed power supply uses semiconductor switches as main switches. Hence, the system is compact, and has semi-infinite lifetime. In addition, it has high flexibility in parameters such as voltage magnitude (10-60 kV), pulse repetition rate (PRR) (10-2000 pps), and pulse width (2-5 muS)."}
{"_id": "938ca67787b1eb942648f7640c4c07994a0d74de", "title": "Super-resolution from image sequences-a review", "text": "Growing interest in super-resolution (SR) restoration of video sequences and the closely related problem of construction of SR still images from image sequences has led to the emergence of several competing methodologies. We review the state of the art of SR techniques using a taxonomy of existing techniques. We critique these methods and identify areas which promise performance improvements."}
{"_id": "639949d0dd53e8c36407c203135b3e4ae05ee50e", "title": "AppFusion: Interactive Appearance Acquisition Using a Kinect Sensor", "text": "We present an interactive material acquisition system for average users to capture the spatially-varying appearance of daily objects. While an object is being scanned, our system computes its appearance on-the-fly and provides quick visual feedback. We build the system entirely on low-end, off-the-shelf components: a Kinect, a mirror ball and printed markers. We exploit the Kinect infra-red emitter/receiver, originally designed for depth computation, as an active hand-held reflectometer, to segment the object into clusters of similar specular materials and estimate the roughness parameters of BRDFs simultaneously. Next, the diffuse albedo and specular intensity of the spatially-varying materials are rapidly computed in an inverse rendering framework, using data from the Kinect RGB camera. We demonstrate captured results of a range of materials, and physically validate our system. CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism\u2014Color, shading, shadowing, and texture"}
{"_id": "0d5619cf83cc951ced77a481b6497a558e61107f", "title": "Gaussian Process Priors with Uncertain Inputs - Application to Multiple-Step Ahead Time Series Forecasting", "text": "We consider the problem of multi-step ahead prediction in time series analysis using the non-parametric Gaussian process model. -step ahead forecasting of a discrete-time non-linear dynamic system can be performed by doing repeated one-step ahead predictions. For a state-space model of the form , the prediction of at time is based on the point estimates of the previous outputs. In this paper, we show how, using an analytical Gaussian approximation, we can formally incorporate the uncertainty about intermediate regressor values, thus updating the uncertainty on the current prediction."}
{"_id": "266f68df0dcc0931cf3c7c5e132ddd116fef8e2b", "title": "Minimum Complexity Echo State Network", "text": "Reservoir computing (RC) refers to a new class of state-space models with a fixed state transition structure (the reservoir) and an adaptable readout form the state space. The reservoir is supposed to be sufficiently complex so as to capture a large number of features of the input stream that can be exploited by the reservoir-to-output readout mapping. The field of RC has been growing rapidly with many successful applications. However, RC has been criticized for not being principled enough. Reservoir construction is largely driven by a series of randomized model-building stages, with both researchers and practitioners having to rely on a series of trials and errors. To initialize a systematic study of the field, we concentrate on one of the most popular classes of RC methods, namely echo state network, and ask: What is the minimal complexity of reservoir construction for obtaining competitive models and what is the memory capacity (MC) of such simplified reservoirs? On a number of widely used time series benchmarks of different origin and characteristics, as well as by conducting a theoretical analysis we show that a simple deterministically constructed cycle reservoir is comparable to the standard echo state network methodology. The (short-term) of linear cyclic reservoirs can be made arbitrarily close to the proved optimal value."}
{"_id": "8430c0b9afa478ae660398704b11dca1221ccf22", "title": "The''echo state''approach to analysing and training recurrent neural networks", "text": "The report introduces a constructive learning algorithm for recurrent neural networks, which modifies only the weights to output units in order to achieve the learning task. key words: recurrent neural networks, supervised learning Zusammenfassung. Der Report f\u00fchrt ein konstruktives Lernverfahren f\u00fcr rekurrente neuronale Netze ein, welches zum Erreichen des Lernzieles lediglich die Gewichte der zu den Ausgabeneuronen f\u00fchrenden Verbindungen modifiziert. Stichw\u00f6rter: rekurrente neuronale Netze, \u00fcberwachtes Lernen"}
{"_id": "60944018b088c537a91bf6e54ab5ce6bc7a25fd1", "title": "Reweighted Wake-Sleep", "text": "Training deep directed graphical models with many hidden variables and performing inference remains a major challenge. Helmholtz machines and deep belief networks are such models, and the wake-sleep algorithm has been proposed to train them. The wake-sleep algorithm relies on training not just the directed generative model but also a conditional generative model (the inference network) that runs backward from visible to latent, estimating the posterior distribution of latent given visible. We propose a novel interpretation of the wake-sleep algorithm which suggests that better estimators of the gradient can be obtained by sampling latent variables multiple times from the inference network. This view is based on importance sampling as an estimator of the likelihood, with the approximate inference network as a proposal distribution. This interpretation is confirmed experimentally, showing that better likelihood can be achieved with this reweighted wake-sleep procedure. Based on this interpretation, we propose that a sigmoid belief network is not sufficiently powerful for the layers of the inference network, in order to recover a good estimator of the posterior distribution of latent variables. Our experiments show that using a more powerful layer model, such as NADE, yields substantially better generative models."}
{"_id": "34bb750d3bb437b47b8ebb5998d40165fdad3ca8", "title": "Design and Synthesis of Synchronization Skeletons Using Branching Time Temporal Logic", "text": "We propose a method of constructing concurrent programs in which the synchronization skeleton of the program ~s automatically synthesized from a high-level (branching time) Temporal Logic specification. The synchronization skeleton is an abstraction of the actual program where detail irrelevant to synchronization is suppressed. For example, in the synchronization skeleton for a solution to the critical section problem each process's critical section may be viewed as a single node since the internal structure of the critical section is unimportant. Most solutions to synchronization problems in the literature are in fact given as synchronization skeletons. Because synchronization skeletons are in general finite state, the propositional version of Temporal Logic can be used to specify their properties. Our synthesis method exploits the (bounded) finite model property for an appropriate propositional Temporal Logic which asserts that if a formula of the logic is satisfiable, it is satisfiable in a finite model (of size bounded by a function of the length of the formula). Decision procedures have been devised which, given a formula of Temporal Logic, f, will decide whether f is satisfiable or unsatisfiable. If f is satisfiable, a finite model of f is constructed, in our application, unsatisfiability of f means that the specification is inconsistent (and must be reformulated). If the formula f is satisfiable, then the specification it expresses is consistent. A model for f with a finite number of states is constructed by the decision procedure. The synchronization skeleton of a program meeting the specification can be read from this model. The finite model property ensures that any program whose synchronization properties can be expressed in propositional Temporal Logic can be realized by a system of concurrently running processes, each of which is a finite state machine. initially, the synchronization skeletons we synthesize wi]] be for concurrent programs running in a shared-memory environment and for monitors. However, we believe that it is also possible to extend these techniques to synthesize distributed programs. One such application would be the automatic synthesis of network communication protocols from propositional Temporal Logic specifications. Previous efforts toward parallel program synthesis can be found in the work of [LA78] and iRK80]. [LA78] uses a specification language that is essentially predicate"}
{"_id": "53151847c0a12897f8b4fab841b8687412611619", "title": "Cross-Domain Perceptual Reward Functions", "text": "In reinforcement learning, we often define goals by specifying rewards within desirable states. One problem with this approach is that we typically need to redefine the rewards each time the goal changes, which often requires some understanding of the solution in the agent\u2019s environment. When humans are learning to complete tasks, we regularly utilize alternative sources that guide our understanding of the problem. Such task representations allow one to specify goals on their own terms, thus providing specifications that can be appropriately interpreted across various environments. This motivates our own work, in which we represent goals in environments that are different from the agent\u2019s. We introduce Cross-Domain Perceptual Reward (CDPR) functions, learned rewards that represent the visual similarity between an agent\u2019s state and a cross-domain goal image. We report results for learning the CDPRs with a deep neural network and using them to solve two tasks with deep reinforcement learning."}
{"_id": "5bdae41d1daec604c57c6b067c1160faae052fc2", "title": "Height and gradient from shading", "text": "The method described here for recovering the shape of a surface from a shaded image can deal with complex, wrinkled surfaces. Integrability can be enforced easily because both surface height and gradient are represented. (A gradient field is integrable if it is the gradient of some surface height function.) The robustness of the method stems in part from linearization of the reflectance map about the current estimate of the surface orientation at each picture cell. (The reflectance map gives the dependence of scene radiance on surface orientation.) The new scheme can find an exact solution of a given shape-from-shading problem even though a regularizing term is included. The reason is that the penalty term is needed only to stabilize the iterative scheme when it is far from the correct solution; it can be turned off as the solution is approached. This is a reflection of the fact that shape-from-shading problems are not ill posed when boundary conditions are available, or when the image contains singular points. This article includes a review of previous work on shape from shading and photoclinometry. Novel features of the new scheme are introduced one at a time to make it easier to see what each contributes. Included is a discussion of implementation details that are important if exact algebraic solutions of synthetic shape-from-shading problems are to be obtained. The hope is that better performance on synthetic data will lead to better performance on real data."}
{"_id": "f952ae12e53dca17a0a68ceeebc94fe20b75f4d5", "title": "Origami Pop-up Card Generation from 3D Models Using a Directed Acyclic Graph", "text": "Origami is a paper art used to create three-dimensional (3D) objects by cutting and folding a single sheet of paper. Origami is labor-intensive and requires a high skill level to generate two-dimensional (2D) objects that pop-up into realistic 3D objects. However, this special feature makes designing an origami architecture procedure challenging. This paper provides a novel algorithm to create an origami paper card from a 3D model with a user-specified folding line. The algorithm segments the 2D shape from a 3D model and creates layers of the pop-up paper card using a directed acyclic graph. After applying layers to the layout, the algorithm creates connections between layers, and then outputs the origami layout. Based on a directed acyclic graph, the algorithm computes a class of paper architectures containing two sets of layers and connections that approximate the input geometry while guaranteeing that a pop up card is physically realizable. The proposed method is demonstrated with a number of paper pop-ups, and physical experimental results are presented."}
{"_id": "2d737fbbe8e5f9e35b0ce0d3c6db1d7d1097fa7d", "title": "Misplaced Trust ? Exploring the Structure of the E-Government-Citizen Trust Relationship", "text": "A growing body of research focuses on the relationship between e-government, the relatively new mode of citizen-to-government contact founded in information and communications technologies, and citizen trust in government. For many, including both academics and policy makers, e-government is seen as a potentially transformational medium, a mode of contact that could dramatically improve citizen perceptions of government service delivery and possibly reverse the long-running decline in citizen trust in government. To date, however, the literature has left significant gaps in our understanding of the e-government-citizen trust relationship. This study intends to fill some of these gaps. Using a cross-sectional sample of 787 end users of US federal government services, data from the American Customer Satisfaction Index study, and structural equation modeling statistical techniques, this study explores the structure of the e-government-citizen trust relationship. Included in the model are factors influencing the decision to adopt e-government, as well as prior expectations, overall satisfaction, and outcomes including both confidence in the particular agency experienced and trust in the federal government overall. The findings suggest that although e-government The authors would like to thank Professor Claes Fornell of the University ofMichigan for granting access to the ACSI data that made this study possible. Thanks also to Jason Joyner of CFI Group for his extensive and helpful comments on an early version of the article. Forrest V. Morgeson III is a Research Scientist at the ACSI in Ann Arbor, MI. He has a PhD in Political Science from the University of Pittsburgh. His recent research focuses on the financial impact of consumer satisfaction and citizen satisfaction with federal government services\u2014both online and offline\u2014and has recently been published or is forthcoming in journals such as Public Administration Review, Electronic Government: an International Journal, Journal of Marketing, Marketing Science, and the International Journal of Research in Marking. David VanAmburg is Director of the American Customer Satisfaction Index. Mr VanAmburg has acted as Director for the ACSI\u2019s Federal government-wide satisfaction measurement since its inception in 1999 and of the ACSI as a whole since 2001. He has lectured extensively both in the United States and abroad on topics related to customer satisfaction, quality, customer loyalty, and shareholder value, and co-authored The American Customer Satisfaction Index at Ten Years, ACSI 1994\u20132004: A Summary of Findings: Implications for the Economy, Stock Returns andManagement. Sunil Mithas is an Assistant Professor at Robert H. Smith School of Business at University of Maryland. He has a PhD in Business from the Ross School of Business at the University of Michigan and an Engineering degree from Indian Institute of Technology, Roorkee, India. Prior to pursuing the PhD, he worked for about 10 years with the Tata group. His research focuses on strategic management and the impact of information technology resources and has appeared in journals that includeManagement Science, Information Systems Research, MIS Quarterly,Marketing Science, Journal of Marketing, and Production and Operations Management. 
His articles have won best article awards and best article nominations and have been featured in practice-oriented publications such as Harvard Business Review, Sloan Management Review, Bloomberg, Computerworld, and InformationWeek. Address correspondence to the author at morgeson@theacsi.org. doi:10.1093/jopart/muq006 a The Author 2010. Published by Oxford University Press on behalf of the Journal of Public Administration Research and Theory, Inc. All rights reserved. For permissions, please e-mail: journals.permissions@oxfordjournals.org Journal of Public Administration Research and Theory Advance Access published April 26, 2010 by on A ril 7, 2010 http://jpaordjournals.org D ow nladed fom may help improve citizens\u2019 confidence in the future performance of the agency experienced, it does not yet lead to greater satisfaction with an agency interaction nor does it correlate with greater generalized trust in the federal government overall. Explanations for these findings, including an assessment of the potential of e-government to help rebuild trust in government in the future, are offered."}
{"_id": "08c2844eed200baf07d892f55812849ea8f3a604", "title": "Death Due to Decapitation in Two Motorcyclists: A Description of a Unique Case and a Brief Review of the Literature.", "text": "Deaths due to decapitation, especially related to traffic accidents, are rarely found in forensic practice. The present case involves a man and a woman who died from decapitation due to an unusual mechanism while they were riding on a motorbike down a mountain road. The autopsy, which was completed as a physics study, allowed the accident to be reconstructed as follows: A plastic cable that had detached from a timber-transporting machine whipped the road and hit the two motorcyclists. The impact resulted in the complete severing of both riders' heads. Involving different scientists in this accident investigation was crucial to understanding the dynamics of the accident. In fact, in addition to scene inspection and autopsy, a physics study was carried out on the cable and the clamp involved, which led to an explanation for the abrupt movement of the cable and, thus, to a thorough reconstruction of the accident."}
{"_id": "692b30d03ab7f1fd797327d26d2392008b8e3a93", "title": "Revisiting Requirements Elicitation Techniques", "text": "The importance of Requirements Engineering (RE) has been well recognized by the research community in the last decade. There is no doubt to say that requirements phase is the foundation of the entire Software Development Life Cycle (SDLC) on which the entire software is built. With the proper management of various activities falling under requirements phase, a project can deliver the right solution within the time and budget. Requirement elicitation, requirement specification, and requirement validation are the important stages to assure the quality of requirement documentations. Out of these, elicitation is the first major activity, which requires proper attention by Requirement Engineers and other related stakeholders. Literature reveals various elicitation techniques, which can be used for the purpose depending upon the nature of project/s. In this paper, an attempt is made to cover all the major elicitation techniques along with their significant aspects at one place. Such a review would enable the concerned stakeholders to understand and select the most appropriate technique to be used for their project/s."}
{"_id": "a05d984443d62575c097ad65b747aae859a5f8b0", "title": "Video Gaming and Children\u2019s Psychosocial Wellbeing: A Longitudinal Study", "text": "The effects of video games on children's psychosocial development remain the focus of debate. At two timepoints, 1 year apart, 194 children (7.27-11.43 years old; male\u2009=\u200998) reported their gaming frequency, and their tendencies to play violent video games, and to game (a) cooperatively and (b) competitively; likewise, parents reported their children's psychosocial health. Gaming at time one was associated with increases in emotion problems. Violent gaming was not associated with psychosocial changes. Cooperative gaming was not associated with changes in prosocial behavior. Finally, competitive gaming was associated with decreases in prosocial behavior, but only among children who played video games with high frequency. Thus, gaming frequency was related to increases in internalizing but not externalizing, attention, or peer problems, violent gaming was not associated with increases in externalizing problems, and for children playing approximately 8\u2009h or more per week, frequent competitive gaming may be a risk factor for decreasing prosocial behavior. We argue that replication is needed and that future research should better distinguish between different forms of gaming for more nuanced and generalizable insight."}
{"_id": "8df9263c264edb54cb4a0922aa1fb40789f556e4", "title": "Classification of Substitution Ciphers using Neural Networks", "text": "Most of the time of a cryptanalyst is spent on finding the cipher technique used for encryption rather than the finding the key/ plaintext of the received cipher text. The Strength of the classical substitution cipher\u201fs success lie on the variety of characters employed to represent a single character. More, the characters employed more the complexity. Thus, in order to reduce the work of the cryptanalyst, neural network based identification is done based on the features of the cipher methods. In this paper, classical substitution ciphers, namely, Playfair, Vigen\u00e8re and Hill ciphers are considered. The features of the cipher methods under consideration were extracted and a backpropagation neural network was trained. The network was tested for random texts with random keys of various lengths. The cipher text size was fixed as 1Kb. The results so obtained were encouraging."}
{"_id": "771ee96c388e11e9643a44dccd1278a1e7694e5c", "title": "Efficient Verification of Holograms Using Mobile Augmented Reality", "text": "Paper documents such as passports, visas and banknotes are frequently checked by inspection of security elements. In particular, optically variable devices such as holograms are important, but difficult to inspect. Augmented Reality can provide all relevant information on standard mobile devices. However, hologram verification on mobiles still takes long and provides lower accuracy than inspection by human individuals using appropriate reference information. We aim to address these drawbacks by automatic matching combined with a special parametrization of an efficient goal-oriented user interface which supports constrained navigation. We first evaluate a series of similarity measures for matching hologram patches to provide a sound basis for automatic decisions. Then a re-parametrized user interface is proposed based on observations of typical user behavior during document capture. These measures help to reduce capture time to approximately 15 s with better decisions regarding the evaluated samples than what can be achieved by untrained users."}
{"_id": "51a083a0702159cddc803ce7126d52297e94821b", "title": "Rethinking Feelings: An fMRI Study of the Cognitive Regulation of Emotion", "text": "The ability to cognitively regulate emotional responses to aversive events is important for mental and physical health. Little is known, however, about neural bases of the cognitive control of emotion. The present study employed functional magnetic resonance imaging to examine the neural systems used to reappraise highly negative scenes in unemotional terms. Reappraisal of highly negative scenes reduced subjective experience of negative affect. Neural correlates of reappraisal were increased activation of the lateral and medial prefrontal regions and decreased activation of the amygdala and medial orbito-frontal cortex. These findings support the hypothesis that prefrontal cortex is involved in constructing reappraisal strategies that can modulate activity in multiple emotion-processing systems."}
{"_id": "2c4222413225d257cb1bf6d45ae1a70197da2fa3", "title": "Metacognitive Control of the Spacing of Study Repetitions", "text": "Rememberers play an active role in learning, not only by committing material more or less faithfully to memory, but also by selecting judicious study strategies (or not). In three experiments, subjects chose whether to mass or space the second presentation of to-be-learned paired-associate terms that were either normatively difficult or easy to remember, under the constraint that subjects needed to space exactly half of the items (and mass the other half). In contrast with recent findings that implemented no such constraint (Son, 2004), subjects chose to space more of the difficult pairs (in Experiments 1 and 2). Reduction in exposure time eliminated but did not reverse this effect (Experiment 3). Subjects who spaced more of the difficult pairs were more likely to exhibit superior memory performance, but, because subjects who made spacing selections that had no effect on the actual scheduling of items also showed this effect (Experiment 2), that enhancement in performance is more likely to reflect subject selection than strategy efficacy. Overall, these results suggest that choice constraints strongly elicit a discrepancy-reduction approach (Dunlosky & Hertzog, 1998) to strategic decisionmaking, but that reduced study time can eliminate this effect. Metacognitive control of spacing 3 The question of how people use their metacognitive knowledge to regulate their behaviors has been of much interest in recent years, particularly with regard to the implementation of study strategies. Metacognition plays an integral role in tasks such as selfdirected learning (Koriat & Goldsmith, 1996; Nelson & Narens, 1990), and understanding the means by which metacognitions guide learning processes is essential to facilitate and optimize the learning process itself (see Bjork, 1994). This article pursues that question by examining the strategies that subjects employ in the scheduling of learning events. Recent evidence revealed conditions under which subjects prefer to space easier materials and mass more difficult ones (Son, 2004). That result is fascinating because it either reveals that subjects choose to apply more effective study conditions to easier materials\u2014a result in conflict with the vast majority of findings from study-time allocation experiments\u2014or it reveals a fundamental misappreciation of the greater effectiveness of spacing in promoting learning (e.g., Baddeley & Longman, 1978). However, the present experiments reveal the opposite effect\u2014subjects choose to space difficult and mass easy items. These results thus suggest that, under some conditions, subjects do understand the beneficial effects of spacing and also choose to selectively utilize them with difficult materials. Self-regulation of learning Theories of self-regulated study claim that active learners use assessments of item difficulty and their own degree of learning in deciding whether to allocate further cognitive resources toward study of that item or to move on to other items (e.g. Dunlosky & Hertzog, 1998; Mazzoni, Cornoldi, & Marchitelli, 1990; Metcalfe, 2002; Nelson & Leonesio, 1988; Nelson & Narens, 1990; Thiede & Dunlosky, 1999). There is some debate, however, with regard to how difficulty and resource allocation \u2013 specifically, study time allotment \u2013 are related. Metacognitive control of spacing 4 Discrepancy reduction. 
One theory emphasizes a discrepancy-reduction mechanism (Dunlosky & Hertzog, 1998). According to this theory, the learner compares their perceived degree of learning of a to-be-learned item to their desired level of mastery for that item, also known as the norm of study (Le Ny, Denhiere, & Le Taillanter, 1972; see also Nelson & Narens, 1990), and if the degree of learning does not reach that criterion, additional study of that item ensues. Therefore, in reducing the discrepancy between an item\u2019s current and desired degree of learning, the model predicts an inverse relationship between the perceived (prior) degree of learning and study time, and hence suggests that people will allot more study time to judged-difficult than judged-easy items (Dunlosky & Hertzog, 1998; see also Thiede & Dunlosky, 1999). Indeed, a multitude of experiments have shown that people tend to study more difficult items for longer than they study easier items. In a comprehensive review, Son and Metcalfe (2000) reported that, of 46 treatment combinations in 19 published experiments in which subjects controlled the allocation of study time, 35 revealed a strategy of devoting more time to difficult items, and none showed the opposite strategy of devoting more time to easy items. These studies included subjective as well as objective measures of difficulty, and the results were consistently found across age groups and study materials. Proximal learning. However, Son and Metcalfe (2000) showed that total time constraints caused subjects to apportion more study time to judged-easy items than to judged-difficult items. That is, when the total study time allotted was likely insufficient to master all items, subjects chose to allocate their limited time to items that individually take less time to master, rather than slowly learn fewer difficult items. Similarly, Thiede and Dunlosky (1999) found that if their task was to remember only a small portion of the to-be-learned items, rather than the full set, subjects devoted more study time to easy items. Metcalfe (2002) surmised that the discrepancy-reduction model adequately accounted for subjects\u2019 allocation strategies only under certain conditions, and forwarded a more comprehensive model to incorporate the newer data. She argued that study time should be devoted to those items that are just beyond the current grasp of the individual, in the region of proximal learning (Metcalfe, 2002; see also Metcalfe & Kornell, 2003). In the case that those just-unlearned items are the most difficult to-be-learned items, the discrepancy-reduction and region of proximal learning models agree on what the appropriate strategy should be. However, in cases where easy items are still unlearned, the predictions of the two theories are in opposition. Whereas the discrepancy-reduction model suggests that learners will always devote more time to the difficult items, the proximal learning hypothesis implies that individual differences in expertise within a domain should influence study-time allocation. Metcalfe (2002) demonstrated this effect using English-Spanish vocabulary pairs. To monolingual speakers, even relatively easy items can be difficult to learn, and thus those speakers allocated more study time to those easy pairs accordingly. 
Experts, on the other hand, spent more time studying the difficult word pairs, and Metcalfe (2002) attributed the group differences in item selection to the difference between the two groups\u2019 regions of proximal learning. Novices chose to spend more time studying the easy, yet still unlearned, items before moving on to more difficult items, a result that is not predicted by the discrepancy-reduction model. Effects of strategy choice. In addition to identifying the strategies used in allocating study time, determining which strategy ultimately leads to superior performance on subsequent recall tests is of importance as well. The actual quality of any study strategy, after all, can only be evaluated by the outcome it produces on the subsequent test. As Son and Metcalfe (2000) pointed out, even though much of the previous literature suggests a tendency for subjects to study the difficult items for longer than the easy items, there are no data showing that subjects who employ such a strategy outperform subjects who spend equal amounts of time on easy and difficult items. While it is intuitive that increased duration of study should lead to higher recall performance for any given item, previous findings have suggested that once study time for an item has been sufficient for initial acquisition, continued immediate study of that item leads to little or no increase in the probability of future recall. This null increase in performance despite substantial increases in study time has been termed the \u201clabor-in-vain effect\u201d (Nelson & Leonesio, 1988, p. 680). Metcalfe and Kornell (2003) systematically investigated the effects of allocating study time to a particular subset of items on later recall performance. Their results showed that allocating study time to items of medium difficulty was more helpful than was allocating study time to easy or difficult items, presumably because learning of the easy items had already plateaued and would not benefit from additional study, whereas learning of the difficult items would require even larger amounts of additional study time to reach a level sufficient to improve later recall. The plateau occurs sooner for easy items than for medium items, and likewise sooner for medium items than for difficult items. If an optimal strategy of study-time allocation is one in which study of a particular item should be discontinued at the point when further study is not as advantageous as study of another item, a critical question for the metacognizer is where (or when) that point is. If information uptake has slowed sufficiently for an item, immediate further study of that item at the cost of a different item may be detrimental to overall recall performance, whereas further study at some later time may boost one\u2019s memory for that item and may benefit future recall. Such a strategy has already been shown in the context of semantic retrieval tasks in which subjects tried to name as many items as possible from two semantic categories in a short time. When retrieval for items in one category slowed, and the other category became relatively more appealing, subjects switched to the second category (Young, 2004). To test whether subjects employ this type of strategy in a learning task, a pa"}
{"_id": "5a47e047d4d41b61204255e1b265d704b7f265f4", "title": "Undefined By Data: A Survey of Big Data Definitions", "text": "The term big data has become ubiquitous. Owing to a shared origin between academia, industry and the media there is no single unified definition, and various stakeholders provide diverse and often contradictory definitions. The lack of a consistent definition introduces ambiguity and hampers discourse relating to big data. This short paper attempts to collate the various definitions which have gained some degree of traction and to furnish a clear and concise definition of an otherwise ambiguous term."}
{"_id": "14315bed06f687c029419d8a6cef2beaadc574b5", "title": "Explaining the Poor Performance of Consumption-Based Asset Pricing Models", "text": "The poor performance of consumption-based asset pricing models relative to traditional portfoliobased asset pricing models is one of the great disappointments of the empirical asset pricing literature. We show that the external habit-formation model economy of Campbell and Cochrane (1999) can explain this puzzle. Though artificial data from that economy conform to a consumption-based model by construction, the CAPM and its extensions are much better approximate models than is the standard power utility specification of the consumption-based model. Conditioning information is the central reason for this result. The model economy has one shock, so when returns are measured at sufficiently high frequency the consumption-based model and the CAPM are equivalent and perfect conditional asset pricing models. However, the model economy also produces time-varying expected returns, tracked by the dividend-price ratio. Portfolio-based models capture some of this variation in state variables, which a state-independent function of consumption cannot capture, and so portfolio-based models are better approximate unconditional asset pricing models. John Y. Campbell John H. Cochrane Department of Economics Graduate School of Business Harvard University University of Chicago Littauer Center 213 1101 E. 58th. St. Cambridge, MA 02138 Chicago, IL 60637 and NBER and NBER john_campbell@harvard.edu john.cochrane@gsb.uchicago.edu"}
{"_id": "7065e6b496af41bba16971246a02986f5e388860", "title": "Assessing Organizational Capabilities: Reviewing and Guiding the Development of Maturity Grids", "text": "Managing and improving organizational capabilities is a significant and complex issue for many companies. To support management and enable improvement, performance assessments are commonly used. One way of assessing organizational capabilities is by means of maturity grids. While maturity grids may share a common structure, their content differs and very often they are developed anew. This paper presents both a reference point and guidance for developing maturity grids. This is achieved by reviewing 24 existing maturity grids and by suggesting a roadmap for their development. The review places particular emphasis on embedded assumptions about organizational change in the formulation of the maturity ratings. The suggested roadmap encompasses four phases: planning, development, evaluation, and maintenance. Each phase discusses a number of decision points for development, such as the selection of process areas, maturity levels, and the delivery mechanism. An example demonstrating the roadmap's utility in industrial practice is provided. The roadmap can also be used to evaluate existing approaches. In concluding the paper, implications for management practice and research are presented."}
{"_id": "ec5f929b57cf12b4d624ab125f337c14ad642ab1", "title": "Modelling out-of-vocabulary words for robust speech recognition", "text": "This thesis concerns the problem of unknown or out-of-vocabulary (OOV) words in continuous speech recognition. Most of today's state-of-the-art speech recognition systems can recognize only words that belong to some predefined finite word vocabulary. When encountering an OOV word, a speech recognizer erroneously substitutes the OOV word with a similarly sounding word from its vocabulary. Furthermore, a recognition error due to an OOV word tends to spread errors into neighboring words; dramatically degrading overall recognition performance. In this thesis we propose a novel approach for handling OOV words within a single-stage recognition framework. To achieve this goal, an explicit and detailed model of OOV words is constructed and then used to augment the closed-vocabulary search space of a standard speech recognizer. This OOV model achieves open-vocabulary recognition through the use of more flexible subword units that can be concatenated during recognition to form new phone sequences corresponding to potential new words. Examples of such subword units are phones, syllables, or some automatically-learned multi-phone sequences. Subword units have the attractive property of being a closed set, and thus are able to cover any new words, and can conceivably cover most utterances with partially spoken words as well. The main challenge with such an approach is ensuring that the OOV model does not absorb portions of the speech signal corresponding to in-vocabulary (IV) words. In dealing with this challenge, we explore several research issues related to designing the subword lexicon, language model, and topology of the OOV model. We present a dictionary-based approach for estimating subword language models. Such language models are utilized within the subword search space to help recognize the underlying phonetic transcription of OOV words. We also propose a data-driven iterative bottom-up procedure for automatically creating a multi-phone subword inventory. Starting with individual phones, this procedure uses the maximum mutual information principle to successively merge phones to obtain longer subword units. The thesis also extends this OOV approach to modelling multiple classes of OOV words. Instead of augmenting the word search space with a single model, we add several models, one for each class of words. We present two approaches for designing the OOV word classes. The first approach relies on using common part-of-speech tags. The second approach is a data-driven two-step clustering procedure, where the first step uses agglomerative clustering to derive an initial class assignment, while the second step uses iterative clustering to move words from one class to another in order to reduce the model perplexity."}
{"_id": "9be48e5606c8add113041f94a5664d31313e4cda", "title": "A Digital Input Controller for Audio Class-D Amplifiers with 100W 0.004% THD+N and 113dB DR", "text": "A digital input controller for audio class-D amplifiers is presented. The controller utilizes specially configured integrated DAC and power stage feedback loop to suppress distortion components coming from power-stage switching with digital input capability. The class-D amplifier system with the controller and an existing power stage achieves 113dB DR, 0.0018% THD+N with 10W output power, and 0.004% THD+N with 100W output power into 4Omega load"}
{"_id": "3c89345bb88a440096f7a057c28857cc4baf3695", "title": "Understanding Latency Variation in Modern DRAM Chips: Experimental Characterization, Analysis, and Optimization", "text": "Long DRAM latency is a critical performance bottleneck in current systems. DRAM access latency is defined by three fundamental operations that take place within the DRAM cell array: (i) activation of a memory row, which opens the row to perform accesses; (ii) precharge, which prepares the cell array for the next memory access; and (iii) restoration of the row, which restores the values of cells in the row that were destroyed due to activation. There is significant latency variation for each of these operations across the cells of a single DRAM chip due to irregularity in the manufacturing process. As a result, some cells are inherently faster to access, while others are inherently slower. Unfortunately, existing systems do not exploit this variation. \n The goal of this work is to (i) experimentally characterize and understand the latency variation across cells within a DRAM chip for these three fundamental DRAM operations, and (ii) develop new mechanisms that exploit our understanding of the latency variation to reliably improve performance. To this end, we comprehensively characterize 240 DRAM chips from three major vendors, and make several new observations about latency variation within DRAM. We find that (i) there is large latency variation across the cells for each of the three operations; (ii) variation characteristics exhibit significant spatial locality: slower cells are clustered in certain regions of a DRAM chip; and (iii) the three fundamental operations exhibit different reliability characteristics when the latency of each operation is reduced. \n Based on our observations, we propose Flexible-LatencY DRAM (FLY-DRAM), a mechanism that exploits latency variation across DRAM cells within a DRAM chip to improve system performance. The key idea of FLY-DRAM is to exploit the spatial locality of slower cells within DRAM, and access the faster DRAM regions with reduced latencies for the fundamental operations. Our evaluations show that FLY-DRAM improves the performance of a wide range of applications by 13.3%, 17.6%, and 19.5%, on average, for each of the three different vendors' real DRAM chips, in a simulated 8-core system. We conclude that the experimental characterization and analysis of latency variation within modern DRAM, provided by this work, can lead to new techniques that improve DRAM and system performance."}
{"_id": "c1db41b8f039dc35536a8d777ad6dc9282eb3252", "title": "Autoimmune disorders: nail signs and therapeutic approaches.", "text": "Systemic sclerosis (scleroderma, SSc) is an autoimmune disease that targets small and medium-sized arteries and arterioles in the involved tissues, resulting in a fibrotic vasculopathy and tissue fibrosis. Several prominent nail and periungual changes are apparent in scleroderma. Examination of the nail fold capillaries can reveal the nature and extent of microvascular pathology in patients with collagen vascular disease and Raynaud's phenomenon. Among the complications stemming from Raynaud's phenomenon can be painful ischemic digital ulcers. This can be managed, and potentially prevented, through pharmacologic and nonpharmacologic means. Whereas oral calcium channel blockers remain the most convenient therapy, oral endothelin receptor antagonists and intravenous prostaglandins may be important therapeutic advances for ischemic digital vascular lesions."}
{"_id": "15d46ecc8ca5e426a45d8fe2df7bdecc9597d88a", "title": "An introduction to convex optimization for communications and signal processing", "text": "Convex optimization methods are widely used in the design and analysis of communication systems and signal processing algorithms. This tutorial surveys some of recent progress in this area. The tutorial contains two parts. The first part gives a survey of basic concepts and main techniques in convex optimization. Special emphasis is placed on a class of conic optimization problems, including second-order cone programming and semidefinite programming. The second half of the survey gives several examples of the application of conic programming to communication problems. We give an interpretation of Lagrangian duality in a multiuser multi-antenna communication problem; we illustrate the role of semidefinite relaxation in multiuser detection problems; we review methods to formulate robust optimization problems via second-order cone programming techniques"}
{"_id": "194b3c6c772d1883caff495f9781102a939a3c5c", "title": "Fading correlation and its effect on the capacity of multielement antenna systems", "text": "We investigate the effects of fading correlations in multielement antenna (MEA) communication systems. Pioneering studies showed that if the fades connecting pairs of transmit and receive antenna elements are independently, identically distributed, MEA\u2019s offer a large increase in capacity compared to single-antenna systems. An MEA system can be described in terms of spatial eigenmodes, which are single-input single-output subchannels. The channel capacity of an MEA is the sum of capacities of these subchannels. We will show that the fading correlation affects the MEA capacity by modifying the distributions of the gains of these subchannels. The fading correlation depends on the physical parameters of MEA and the scatterer characteristics. In this paper, to characterize the fading correlation, we employ an abstract model, which is appropriate for modeling narrow-band Rayleigh fading in fixed wireless systems."}
{"_id": "2720c554db24944546348fc0623a980953a3bbeb", "title": "Shifting the MIMO Paradigm", "text": "Multi-user MIMO (MU-MIMO) networks reveal the unique opportunities arising from a joint optimization of antenna combining techniques with resource allocation protocols. Furthermore, it brings robustness with respect to multipath richness, allowing for compact antenna spacing at the BS and, crucially, yielding the diversity and multiplexing gains without the need for multiple antenna user terminals. To realize these gains, however, the BS should be informed with the user's channel coefficients, which may limit practical application to TDD or low-mobility settings. To circumvent this problem and reduce feedback load, combining MU-MIMO with opportunistic scheduling seems a promising direction. The success for this type of scheduler is strongly traffic and QoS-dependent, however."}
{"_id": "5dd79167d714ff3907ffbba102b8e6fba49f053e", "title": "On Limits of Wireless Communications in a Fading Environment when Using Multiple Antennas", "text": "This paper is motivated by the need for fundamental understanding of ultimate limits of bandwidth efficient delivery of higher bit-rates in digital wireless communications and to also begin to look into how these limits might be approached. We examine exploitation of multi-element array (MEA) technology, that is processing the spatial dimension (not just the time dimension) to improve wireless capacities in certain applications. Specifically, we present some basic information theory results that promise great advantages of using MEAs in wireless LANs and building to building wireless communication links. We explore the important case when the channel characteristic is not available at the transmitter but the receiver knows (tracks) the characteristic which is subject to Rayleigh fading. Fixing the overall transmitted power, we express the capacity offered by MEA technology and we see how the capacity scales with increasing SNR for a large but practical number, , of antenna elements at bothtransmitter and receiver. We investigate the case of independent Rayleigh faded paths between antenna elements and find that with high probability extraordinary capacity is available. Compared to the baseline n = 1 case, which by Shannon\u2019s classical formula scales as one more bit/cycle for every 3 dB of signal-to-noise ratio (SNR) increase, remarkably with MEAs, the scaling is almost like n more bits/cycle for each 3 dB increase in SNR. To illustrate how great this capacity is, even for small n, take the cases n = 2, 4 and 16 at an average received SNR of 21 dB. For over 99% of the channels the capacity is about 7, 19 and 88 bits/cycle respectively, while if n = 1 there is only about 1.2 bit/cycle at the 99% level. For say a symbol rate equal to the channel bandwith, since it is the bits/symbol/dimension that is relevant for signal constellations, these higher capacities are not unreasonable. The 19 bits/cycle for n = 4 amounts to 4.75 bits/symbol/dimension while 88 bits/cycle for n = 16 amounts to 5.5 bits/symbol/dimension. Standard approaches such as selection and optimum combining are seen to be deficient when compared to what will ultimately be possible. New codecs need to be invented to realize a hefty portion of the great capacity promised."}
{"_id": "4f911fe6ee5040e6e46e84a9f1e211153943cd9b", "title": "Detecting A Continuum Of Compositionality In Phrasal Verbs", "text": ""}
{"_id": "2bda312479be9bcb2754e9c2fd80d6a89634c3e0", "title": "Growth and morphological development of laboratory-reared larval and juvenile giant gourami Osphronemus goramy (Perciformes: Osphronemidae)", "text": "Morphological development in laboratory-reared larval and juvenile Osphronemus goramy is described. Body lengths were 4.8\u00a0\u00b1\u00a00.1 (mean\u00a0\u00b1\u00a0SD) mm just after hatching, and reached 9.8\u00a0\u00b1\u00a00.8\u00a0mm by day 16 and 15.3\u00a0\u00b1\u00a00.6\u00a0mm by day 40. Aggregate fin ray numbers attained full complements by 11.9\u00a0mm BL on day 22. All larvae began feeding by day 5 (ca. 6.7\u00a0mm BL). Conical jaw teeth appeared on day 7 (ca. 7.0\u00a0mm BL). Notochord flexion began on day 3 (ca. 6.5\u00a0mm BL), the yolk being absorbed during days 14\u201315 (ca. 8.0\u201310.0\u00a0mm BL). Pale melanophores, initially visible on the upper yolksac surface during days 0 and 1, became more intense and numerous with growth, and covered almost the entire body in juveniles. Seven or eight vertical bars appeared on the body on day 13 (ca. 9.0\u00a0mm BL), thereafter becoming more discernible. Pre-anal length, body depth, and eye diameter proportions became relatively constant after yolk absorption, with those of the head/snout/upper jaw lengths subsequently increasing. The labyrinth organ began to develop in juveniles larger than ca. 14.5\u00a0mm BL (day 30), and air-breathing behavior was first observed on days 35\u201340."}
{"_id": "926c913e3974476078c72a51bea5ca3a8e4009ac", "title": "Optimal power allocation for DL NOMA systems", "text": "Non-orthogonal multiple access (NOMA) is an important candidate for 5G radio access technology. In NOMA transmitter, different users' signals are superposed on the same radio resource with different power allocation factors. The receiver removes other users' signals from the received signal before decoding its own signal. In this work, an iterative gradient ascent-based power allocation method is proposed for downlink NOMA transmitter. It maximizes the geometric mean of the throughputs of users who share the same radio resource to provide proportional fairness between users. Simulation results show that the method achieves theoretical best results in terms of the suggested metric. Also it is shown that it increases the efficiency as much as 80% when compared to orthogonal multiple access (OMA) and it gives better results than NOMA that uses fixed and fractional power allocation methods."}
{"_id": "52983eebb587fb6da50fbeb5e88e3288381999d9", "title": "The PeerRank Method for Peer Assessment", "text": "We propose the PeerRank method for peer assessment. This constructs a grade for an agent based on the grades proposed by the agents evaluating the agent. Since the grade of an agent is a measure of their ability to grade correctly, the PeerRank method weights grades by the grades of the grading agent. The PeerRank method also provides an incentive for agents to grade correctly. As the grades of an agent depend on the grades of the grading agents, and as these grades themselves depend on the grades of other agents, we define the PeerRank method by a fixed point equation similar to the PageRank method for ranking web-pages."}
{"_id": "14730bfcb9f6e2748c2f08ef025183979563bd16", "title": "Critical Issues Affecting an ERP Implementation", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the \u201cContent\u201d) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content."}
{"_id": "55135e17242d50d52df7d0be0d617888db96fd43", "title": "Analysis of Semipermeable Containment Sleeve Technology for High-Speed Permanent Magnet Machines", "text": "This paper presents an alternative permanent magnet rotor containment technology. The use of a semipermeable material in a laminated structure is shown to result in a significant improvement in magnetic loading for a given thickness of containment compared to the use of magnetically inert materials such as carbon fiber or Inconel, while minimizing eddy current losses in the containment layer. An analytical model is presented for determining the air-gap field delivered from a permanent magnet array with a semipermeable containment. The analysis is validated through finite element analysis. The fabrication of a prototype machine is detailed and the results presented show good agreement with the analysis. The validated modeling process is used to assess the potential of the new technology."}
{"_id": "083dc9609bd6ab54b4fc8e863244a12ecab0f5ff", "title": "A scalable end-to-end Gaussian process adapter for irregularly sampled time series classification", "text": "We present a general framework for classification of sparse and irregularly-sampled time series. The properties of such time series can result in substantial uncertainty about the values of the underlying temporal processes, while making the data difficult to deal with using standard classification methods that assume fixeddimensional feature spaces. To address these challenges, we propose an uncertaintyaware classification framework based on a special computational layer we refer to as the Gaussian process adapter that can connect irregularly sampled time series data to any black-box classifier learnable using gradient descent. We show how to scale up the required computations based on combining the structured kernel interpolation framework and the Lanczos approximation method, and how to discriminatively train the Gaussian process adapter in combination with a number of classifiers end-to-end using backpropagation."}
{"_id": "c7402d051f22e7046e35cd433abfa8d7df6f0420", "title": "Characterization of a RS-LiDAR for 3D Perception", "text": "High precision 3D LiDARs are still expensive and hard to acquire. This paper presents the characteristics of RSLiDAR, a model of low-cost LiDAR with sufficient supplies, in comparison with VLP-16. The paper also provides a set of evaluations to analyze the characterizations and performances of LiDARs sensors. This work analyzes multiple properties, such as drift effects, distance effects, color effects and sensor orientation effects, in the context of 3D perception. By comparing with Velodyne LiDAR, we found RS-LiDAR as a cheaper and acquirable substitute of VLP-16 with similar efficiency."}
{"_id": "59c5eecef1f35bd331f7e9ebe1a1a784bcdb76a5", "title": "Analytical Design of Decoupling Internal Model Control ( IMC ) Scheme for Two-Input-Two-Output ( TITO ) Processes with Time Delays", "text": "In this paper, a new decoupling control scheme, in terms of an internal model control (IMC) structure, is proposed for two-input -two-output (TITO) processes with time delays. Noteworthy improvement of the decoupling regulation can be achieved for the nominal system output responses, and moreover, either of the system output responses can be quantitatively regulated by virtue of the analytical relationship between the adjustable parameters of the decoupling controller matrix and the nominal system transfer matrix. The ideally optimal controller matrix is analytically derived by proposing the practically desired diagonal system transfer matrix, in terms of the robust H 2 optimal performance objective. Because of the difficulty of physical implementation, its practical form is carefully configured according to whether there exist any complex righthalf-plane (RHP) zeros in the process transfer matrix determinant. At the same time, tuning constraints for the proposed decoupling controller matrix to hold the control system robust stability are analyzed in the presence of the process additive and multiplicative uncertainties, and, accordingly, the on-line tuning rule is provided to cope with the process unmodeled dynamics in practice. Finally, illustrative simulation examples are included to demonstrate the remarkable superiority of the proposed method."}
{"_id": "a36d9dbf93d69b408bdec35e8efd3a52556134db", "title": "Benchmarking open government: An open data perspective", "text": "Available online 3 April 2014"}
{"_id": "91561c762467facfe2a7f7644b6dc4d2085c9092", "title": "Gesture recognition in smart home using passive RFID technology", "text": "Gesture recognition is a well-establish topic of research that is widely adopted for a broad range of applications. For instance, it can be exploited for the command of a smart environment without any remote control unit or even for the recognition of human activities from a set of video cameras deployed in strategic position. Many researchers working on assistive smart home, such as our team, believe that the intrusiveness of that technology will prevent the future adoption and commercialization of smart homes. In this paper, we propose a novel gesture recognition algorithm that is solely based on passive RFID technology. This technology enables the localization of small tags that can be embedded in everyday life objects (a cup or a book, for instance) while remaining non intrusive. However, until now, this technology has been largely ignored by researchers on gesture recognition, mostly because it is easily disturbed by noise (metal, human, etc.) and offer limited precision. Despite these issues, the localization algorithms have improved over the years, and our recent efforts resulted in a real-time tracking algorithm with a precision approaching 14cm. With this, we developed a gesture recognition algorithm able to perform segmentation of gestures and prediction on a spatio-temporal data series. Our new model, exploiting works on qualitative spatial reasoning, achieves recognition of 91%. Our goal is to ultimately use that knowledge for both human activity recognition and errors detection."}
{"_id": "bee97f419057ebe3826e17384387d030faf37ef8", "title": "Dirichlet Processes for Joint Learning of Morphology and PoS Tags", "text": "This paper presents a joint model for learning morphology and part-of-speech (PoS) tags simultaneously. The proposed method adopts a finite mixture model that groups words having similar contextual features thereby assigning the same PoS tag to those words. While learning PoS tags, words are analysed morphologically by exploiting similar morphological features of the learned PoS tags. The results show that morphology and PoS tags can be learned jointly in a fully unsupervised setting."}
{"_id": "c93b6b0b11b2d317c47b5b82c52f195df32ec8f0", "title": "Social media driven image retrieval", "text": "People often try to find an image using a short query and images are usually indexed using short annotations. Matching the query vocabulary with the indexing vocabulary is a difficult problem when little text is available. Textual user generated content in Web 2.0 platforms contains a wealth of data that can help solve this problem. Here we describe how to use Wikipedia and Flickr content to improve this match. The initial query is launched in Flickr and we create a query model based on co-occurring terms. We also calculate nearby concepts using Wikipedia and use these to expand the query. The final results are obtained by ranking the results for the expanded query using the similarity between their annotation and the Flickr model. Evaluation of these expansion and ranking techniques, over the Image CLEF 2010 Wikipedia Collection containing 237,434 images and their multilingual textual annotations, shows that a consistent improvement compared to state of the art methods."}
{"_id": "1d0dcb458aa4d30b51f7c74b159be687f39120a0", "title": "Pose-Driven Deep Convolutional Model for Person Re-identification", "text": "Feature extraction and matching are two crucial components in person Re-Identification (ReID). The large pose deformations and the complex view variations exhibited by the captured person images significantly increase the difficulty of learning and matching of the features from person images. To overcome these difficulties, in this work we propose a Pose-driven Deep Convolutional (PDC) model to learn improved feature extraction and matching models from end to end. Our deep architecture explicitly leverages the human part cues to alleviate the pose variations and learn robust feature representations from both the global image and different local parts. To match the features from global human body and local body parts, a pose driven feature weighting sub-network is further designed to learn adaptive feature fusions. Extensive experimental analyses and results on three popular datasets demonstrate significant performance improvements of our model over all published state-of-the-art methods."}
{"_id": "05378af0c67c59505067e2cbeb9ca29ed5f085e4", "title": "Probabilistic aggregation for data dissemination in VANETs", "text": "We propose an algorithm for the hierarchical aggregation of observations in dissemination-based, distributed traffic information systems. Instead of carrying specific values (e.g., the number offree parking places in a given area), our aggregates contain a modified Flajolet-Martin sketch as a probabilistic approximation. The main advantage of this approach is that the aggregates are duplicate insensitive. This overcomes two central problems of existing aggregation schemes for VANET applications. First, when multiple aggregates of observations for the same area are available, it is possible to combine them into an aggregate containing all information from the original aggregates. This is fundamentally different from existing approaches where typically one of the aggregates is selected for further use while the rest is discarded. Second, any observation or aggregate can be included into higher level aggregates, regardless if it has already been previously - directly or indirectly - added. As a result of those characteristics the quality of the aggregates is high, while their construction is very flexible. We demonstrate these traits of our approach by a simulation study."}
{"_id": "5ad5cbf8f34f5f671b46af827ff3d9b2557e9a55", "title": "Drugs of Abuse and Stress Trigger a Common Synaptic Adaptation in Dopamine Neurons", "text": "Drug seeking and drug self-administration in both animals and humans can be triggered by drugs of abuse themselves or by stressful events. Here, we demonstrate that in vivo administration of drugs of abuse with different molecular mechanisms of action as well as acute stress both increase strength at excitatory synapses on midbrain dopamine neurons. Psychoactive drugs with minimal abuse potential do not cause this change. The synaptic effects of stress, but not of cocaine, are blocked by the glucocorticoid receptor antagonist RU486. These results suggest that plasticity at excitatory synapses on dopamine neurons may be a key neural adaptation contributing to addiction and its interactions with stress and thus may be an attractive therapeutic target for reducing the risk of addiction."}
{"_id": "6d6461388d8f5d599866111d8b1c9abc8e47a976", "title": "Personalizing Similar Product Recommendations in Fashion E-commerce", "text": "In fashion e-commerce platforms, product discovery is one of the key components of a good user experience. There are numerous ways using which people find the products they desire. Similar product recommendations is one of the popular modes using which users find products that resonate with their intent. Generally these recommendations are not personalized to a specific user. Traditionally, collaborative filtering based approaches have been popular in the literature for recommending non-personalized products given a query product. Also, there has been focus on personalizing the product listing for a given user. In this paper, we marry these approaches so that users will be recommended with personalized similar products. Our experimental results on a large fashion ecommerce platform (Myntra) show that we can improve the key metrics by applying personalization on similar product recommendations."}
{"_id": "54254a717b6877fa7053595fe82fc34b5a475911", "title": "Reader-Aware Multi-Document Summarization via Sparse Coding", "text": "We propose a new MDS paradigm called readeraware multi-document summarization (RA-MDS). Specifically, a set of reader comments associated with the news reports are also collected. The generated summaries from the reports for the event should be salient according to not only the reports but also the reader comments. To tackle this RAMDS problem, we propose a sparse-coding-based method that is able to calculate the salience of the text units by jointly considering news reports and reader comments. Another reader-aware characteristic of our framework is to improve linguistic quality via entity rewriting. The rewriting consideration is jointly assessed together with other summarization requirements under a unified optimization model. To support the generation of compressive summaries via optimization, we explore a finer syntactic unit, namely, noun/verb phrase. In this work, we also generate a data set for conducting RA-MDS. Extensive experiments on this data set and some classical data sets demonstrate the effectiveness of our proposed approach."}
{"_id": "370844bdb26c72ee6ad402703e6c79eb46f5d992", "title": "Foundations of Cryptography: Basic Tools", "text": "If you really want to be smarter, reading can be one of the lots ways to evoke and realize. Many people who like reading will have more knowledge and experiences. Reading can be a way to gain information from economics, politics, science, fiction, literature, religion, and many others. As one of the part of book categories, foundations of cryptography basic tools always becomes the most wanted book. Many people are absolutely searching for this book. It means that many love to read this kind of book."}
{"_id": "2c2d5f44384b9505764ac51145c2b5d897b14a0c", "title": "An early fire-detection method based on image processing", "text": "The paper presents an early fire-alarm raising method based on video processing. The basic idea of the proposed of fire-detection is to adopt a RGB (red, green, blue) model based chromatic and disorder measurement for extracting fire-pixels and smoke-pixels. The decision function of fire-pixels is mainly deduced by the intensity and saturation of R component. The extracted fire-pixels will be verified if it is a real fire by both dynamics of growth and disorder, and further smoke. Based on iterative checking on the growing ratio of flames, a fire-alarm is given when the alarm-raising condition is met. Experimental results show that the developed technique can achieve fully automatic surveillance of fire accident with a lower false alarm rate and thus is very attractive for the important military, social security, commercial applications, and so on, at a general cost."}
{"_id": "3d5107b913ff3cc3da3455c186f1e949bfbcd94e", "title": "A Fire-Alarming Method Based on Video Processing", "text": "This paper presents a fire-alarming method based on video processing. We propose a system that uses color and motion information extracted from video sequences to detect fire. Flame can be recognized according to its color which is a primary element of fire images. Thus choosing a suitable color model is the key to detect flames from fire images. An effective fire detection criterion based on color model is proposed in this paper by intensive experiments and trainings. The detection criterion is used to make a raw localization of fire regions first. However, color alone is not enough for fire detection. To identify a real burning fire, in addition to chromatic features, dynamic features are usually adopted to distinguish other fire aliases. In this paper, both the growth of fire region and the invariability of flame are utilized to further detect the fire regions as a complement of the detection criterion. The effectiveness of the proposed fire-alarming method is demonstrated by the experiments implemented on a large number of scenes."}
{"_id": "1790ae874429c5e033677c358cb43a77e4486f55", "title": "Adaptive Background Mixture Models for Real-Time Tracking", "text": "A common method for real-time segmentation of moving regions in image sequences involves \u201cbackground subtraction,\u201d or thresholding the error between an estimate of the image without moving objects and the current image. The numerous approaches to this problem differ in the type of background model used and the procedure used to update the model. This paper discusses modeling each pixel as a mixture of Gaussians and using an on-line approximation to update the model. The Gaussian distributions of the adaptive mixture model are then evaluated to determine which are most likely to result from a background process. Each pixel is classified based on whether the Gaussian distribution which represents it most effectively is considered part of the background model. This results in a stable, real-time outdoor tracker which reliably deals with lighting changes, repetitive motions from clutter, and long-term scene changes. This system has been run almost continuously for 16 months, 24 hours a day, through rain and snow."}
{"_id": "2f83d2294d44b44ad07d327635a34276abe1ec55", "title": "Antenna in package design for WLAN using MCM-D manufacturing technology", "text": "This paper introduces an antenna design based on MCM-D manufacturing technology to realize an antenna-integrated package for IEEE 802.11b/g application. Co-design guidelines are employed to include the parasitic effects caused by the integration of the antenna and the RF module. The loop antenna is located on the second layer of the MCM-D substrate. The antenna incorporates the capacitively feed strip which is fed by the coplanar waveguide (CPW). By the coupling feed technique, the size of the proposed antenna is only 3.8 mm \u00d7 4.7 mm over the WLAN band (2.4\u20132.484 GHz). Furthermore, the resonant frequency can be adjusted by tuning the length of the coupling strip. The results show that the coupling-fed loop antenna achieved a gain of 1.6 dBi and radiation efficiency of 85 % at 2.45 GHz in a very compact size (0.03 \u03bb0 \u00d7 0.04 \u03bb0). In addition, the occupied area of the antenna is very small (4.4%) compared to the overall area of the package; therefore, the proposed method is very useful for package antenna design. The detailed parameters studies are presented, which demonstrate the feasibility of the proposed method."}
{"_id": "ed261525dd4648b01f494ffc3974b906d3fc5c85", "title": "Relationship between procrastination and academic performance among a group of undergraduate dental students in India.", "text": "Procrastination, generally defined as a voluntary, irrational delay of behavior, is a prevalent phenomenon among college students throughout the world and occurs at alarmingly high rates. For this study, a survey was conducted of 209 second-, third-, and fourth-year undergraduate dental students of Bapuji Dental College and Hospital, Davangere, India, to identify the relationship between their level of procrastination and academic performance. A sixteen-item questionnaire was used to assess the level of procrastination among these students. Data related to their academic performance were also collected. Spearman's correlation coefficient test was used to assess the relationship between procrastination and academic performance. It showed a negative correlation of -0.63 with a significance level of p<0.01 (two-tailed test), indicating that students who showed high procrastination scores performed below average in their academics. In addition, analysis with the Mann-Whitney U test found a significant difference in procrastination scores between the two gender groups (p<0.05). Hence, among the Indian undergraduate dental students evaluated in this study, it appeared that individuals with above average and average academic performance had lower scores of procrastination and vice versa."}
{"_id": "fee14e570861573f920036f2abaae119fc46a659", "title": "Detecting cyberbullying: query terms and techniques", "text": "In this paper we describe a close analysis of the language used in cyberbullying. We take as our corpus a collection of posts from Formspring.me. Formspring.me is a social networking site where users can ask questions of other users. It appeals primarily to teens and young adults and the cyberbullying content on the site is dense; between 7% and 14% of the posts we have analyzed contain cyberbullying content.\n The results presented in this article are two-fold. Our first experiments were designed to develop an understanding of both the specific words that are used by cyberbullies, and the context surrounding these words. We have identified the most commonly used cyberbullying terms, and have developed queries that can be used to detect cyberbullying content. Five of our queries achieve an average precision of 91.25% at rank 100.\n In our second set of experiments we extended this work by using a supervised machine learning approach for detecting cyberbullying. The machine learning experiments identify additional terms that are consistent with cyberbullying content, and identified an additional querying technique that was able to accurately assign scores to posts from Formspring.me. The posts with the highest scores are shown to have a high density of cyberbullying content."}
{"_id": "b7ebcec93f35e52ef14a41c9c9d55fa132987473", "title": "The Magma Algebra System I: The User Language", "text": "Magma is a new software system for computational algebra, the design of which is based on the twin concepts of algebraic structure and morphism. The design is intended to provide a mathematically rigorous environment for computing with algebraic structures (groups, rings, fields, modules and algebras), geometric structures (varieties, special curves) and combinatorial structures (graphs, designs and codes). The philosophy underlying the design of Magma is based on concepts from Universal Algebra and Category Theory. Key ideas from these two areas provide the basis for a general scheme for the specification and representation of mathematical structures. The user language includes three important groups of constructors that realize the philosophy in syntactic terms: structure constructors, map constructors and set constructors. The utility of Magma as a mathematical tool derives from the combination of its language with an extensive kernel of highly efficient C implementations of the fundamental algorithms for most branches of computational algebra. In this paper we outline the philosophy of the Magma design and show how it may be used to develop an algebraic programming paradigm for language design. In a second paper we will show how our design philosophy allows us to realize natural computational \u201cenvironments\u201d for different branches of algebra. An early discussion of the design of Magma may be found in Butler and Cannon (1989, 1990). A terse overview of the language together with a discussion of some of the implementation issues may be found in Bosma et al. (1994)."}
{"_id": "caf9b9c92d4b58ed3f5b517bfde1f2faf3bad34f", "title": "Quadcopter control system", "text": "The paper presents two different types of approach for mathematical modeling of quadcopter kinematics and dynamics. The first one is based on the equations of classical mechanics and the other one is derived from Denavit-Hartenberg formalism and Lagrangian mechanics. The obtained models were used to design the control of the quadcopter motion on one rotation axis. The paper also offers some details of the physical and software implementation of the system on which tests have been made."}
{"_id": "6a419bedf54e67dd3da2ee95f82b6a1eabe5db59", "title": "Recognition of Information Leakage of Computer via Conducted Emanations on the Power Line", "text": "The security problem of screen image leakage from a display unit has become serious with the rapid speed of signal transmission technology. This paper presents a novel investigation on the characteristics of the power line compromising channel. Moreover, a measurement system has been actually developed for the leakage signal analyzing and image reconstruction. In order to overcome the degradation of reconstructed motion images and enhance the reconstructed image quality, a multi-image blind deconvolution method was proposed and test experiments were carried out to verify the effectiveness of the multi-image blind deconvolution algorithm based on the conducted signal from the power line."}
{"_id": "ff0858c76442a86c50803917433f09ca0b42f3fe", "title": "Out-of-domain FrameNet Semantic Role Labeling", "text": "Domain dependence of NLP systems is one of the major obstacles to their application in large-scale text analysis, also restricting the applicability of FrameNet semantic role labeling (SRL) systems. Yet, current FrameNet SRL systems are still only evaluated on a single in-domain test set. For the first time, we study the domain dependence of FrameNet SRL on a wide range of benchmark sets. We create a novel test set for FrameNet SRL based on user-generated web text and find that the major bottleneck for out-of-domain FrameNet SRL is the frame identification step. To address this problem, we develop a simple, yet efficient system based on distributed word representations. Our system closely approaches the state-of-the-art in-domain while outperforming the best available frame identification system out-of-domain. We publish our system and test data for research purposes.1"}
{"_id": "0b85249b2707b9a5b5a170f1bde0afe168e3b51d", "title": "S-Match: an algorithm and an implementation of semantic matching", "text": "We think of Match as an operator which takes two graph-like structures (e.g., conceptual hierarchies or ontologies) and produces a mapping between those nodes of the two graphs that correspond semantically to each other. Semantic matching is a novel approach where semantic correspondences are discovered by computing, and returning as a result, the semantic information implicitly or explicitly codified in the labels of nodes and arcs. In this paper we present an algorithm implementing semantic matching, and we discuss its implementation within the S-Match system. We also test S-Match against three state of the art matching systems. The results, though preliminary, look promising, in particular for what concerns precision and recall."}
{"_id": "791f07be56d1188b69bf103786d8cf30a0c97f24", "title": "Behavioral Experiments for Assessing the Abstract Argumentation Semantics of Reinstatement", "text": "Argumentation is a very fertile area of research in Artificial Intelligence, and various semantics have been developed to predict when an argument can be accepted, depending on the abstract structure of its defeaters and defenders. When these semantics make conflicting predictions, theoretical arbitration typically relies on ad hoc examples and normative intuition about what prediction ought to be the correct one. We advocate a complementary, descriptive-experimental method, based on the collection of behavioral data about the way human reasoners handle these critical cases. We report two studies applying this method to the case of reinstatement (both in its simple and floating forms). Results speak for the cognitive plausibility of reinstatement and yet show that it does not yield the full expected recovery of the attacked argument. Furthermore, results show that floating reinstatement yields comparable effects to that of simple reinstatement, thus arguing in favor of preferred argumentation semantics, rather than grounded argumentation semantics. Besides their theoretical value for validating and inspiring argumentation semantics, these results have applied value for developing artificial agents meant to argue with human users."}
{"_id": "9afba7cb45c8cfb32fe9cb625444a54244b1d15f", "title": "An integrated framework for human activity recognition", "text": "This poster presents an integrated framework to enable using standard non-sequential machine learning tools for accurate multi-modal activity recognition. Our framework contains simple pre- and post-classification strategies such as class-imbalance correction on the learning data using structure preserving oversampling, leveraging the sequential nature of sensory data using smoothing of the predicted label sequence and classifier fusion, respectively, for improved performance. Through evaluation on recent publicly-available OPPORTUNITY activity datasets comprising of a large amount of multi-dimensional, continuous-valued sensory data, we show that our proposed strategies are effective in improving the performance over common techniques such as One Nearest Neighbor (1NN) and Support Vector Machines (SVM). Our framework also shows better performance over sequential probabilistic models, such as Conditional Random Field (CRF) and Hidden Markov Models (HMM) and when these models are used as meta-learners."}
{"_id": "3b01c11e8f1eaf57d484b1561b76b1d5b03129e2", "title": "The five-factor model of personality and managerial performance: validity gains through the use of 360 degree performance ratings.", "text": "This study investigated the usefulness of the five-factor model (FFM) of personality in predicting two aspects of managerial performance (task vs. contextual) assessed by utilizing the 360 degree performance rating system. The authors speculated that one reason for the low validity of the FFM might be the failure of single-source (e.g., supervisor) ratings to comprehensively capture the construct of managerial performance. The operational validity of personality was found to increase substantially (50%-74%) across all of the FFM personality traits when both peer and subordinate ratings were added to supervisor ratings according to the multitrait-multimethod approach. Furthermore, the authors responded to the recent calls to validate tests via a multivariate (e.g., multitrait-multimethod) approach by decomposing overall managerial performance into task and contextual performance criteria and by using multiple rating perspectives (sources). Overall, this study contributes to the evidence that personality may be even more useful in predicting managerial performance if the performance criteria are less deficient."}
{"_id": "4c12f8f4821eb40e2a0db1cd22fbce68a72406df", "title": "A bias for posterior \u03b1-band power suppression versus enhancement during shifting versus maintenance of spatial attention", "text": "Voluntarily directing visual attention to a cued position in space leads to improved processing of forthcoming visual stimuli at the attended position, and attenuated processing of competing stimuli elsewhere, due to anticipatory tuning of visual cortex activity. In EEG, recent evidence points to a determining role of modulations of posterior alpha-band activity (8-14 Hz) in such anticipatory facilitation (alpha-power decreases) versus inhibition (alpha-power increases). Yet, while such alpha-modulations are a common finding, the direction of modulation varies to a great extent across studies implying dependence on task demands. Here, we reveal opposite modulation of posterior alpha-power with early/initiation versus later/sustained processes of anticipatory attention orienting. Marked alpha-decreases were observed during shifting of attention (initial 700 ms) over occipito-parietal areas processing to-be-attended visual space, while alpha-increases dominated in the subsequent maintenance phase (>700 ms) over occipito-parietal cortex tuned to unattended positions. Notably, the presence of alpha-modulation strongly depended on individual resting alpha-power. Overall, this provides further support to an active facilitative versus inhibitory role of alpha-power decreases and increases and suggests that these attention-related changes are differentially deployed during anticipatory attention orienting to prepare versus maintain the cortex for optimal target processing."}
{"_id": "07e1c0c5e28491a8464052826eaa075db3abb9c2", "title": "A Novel Method for Accurate and Efficient Barcode Detection with Morphological Operations", "text": "Barcode technology is the pillar of automatic identification, that is used in a wide range of real-time applications with various types of codes. The different types of codes and applications impose special problems, so there is a continuous need for solutions with improved effectiveness. There are several methods for barcode localization, that are well characterized by accuracy and speed. Particularly, high-speed processing places need automatic barcode localization, e.g. conveyor belts, automated production, where missed detections cause loss of profit. In this paper, we mainly deal with segmentation of images with 1D barcode, but also analyze the operation of different methods for 2D barcode images as well. Our goal is to detect automatically, rapidly and accurately the barcode location by the help of extracted features. We compare some published method from the literature, which basically rely on the contrast between the background and the shape that represent the code. We also propose a novel algorithm, that outperforms the others in both accuracy and efficiency in detecting 1D codes."}
{"_id": "9c5fe19670efc5ccecaea9e2348d9e74c7f7a465", "title": "A Two-Layered Permission-Based Android Malware Detection Scheme", "text": "Android platform has become the main target of the malware developers in the past few years. One of Android's main defense mechanisms against malicious apps is a permission-based access control mechanism. It is a feasible approach to detect a potential malicious application based on the permissions it requested. In this paper, we proposed a two-layered permission based detection scheme for detecting malicious Android applications. Comparing with previous researches, we consider the apps requested permission pairs as the additional condition, and we also consider used permissions to improve detection accuracy. The result of an evaluation, performed using 28548 benign apps and 1536 malicious apps, indicates that a two-layered permission-based detector has high detection rate of malware."}
{"_id": "1b0a4e2db47d4788b3792234e467f27a9c2bba4e", "title": "A signal processing approach to fair surface design", "text": "In this paper we describe a new tool for interactive free-form fair surface design. By generalizing classical discrete Fourier analysis to two-dimensional discrete surface signals \u2013 functions defined on polyhedral surfaces of arbitrary topology \u2013, we reduce the problem of surface smoothing, or fairing, to low-pass filtering. We describe a very simple surface signal low-pass filter algorithm that applies to surfaces of arbitrary topology. As opposed to other existing optimization-based fairing methods, which are computationally more expensive, this is a linear time and space complexity algorithm. With this algorithm, fairing very large surfaces, such as those obtained from volumetric medical data, becomes affordable. By combining this algorithm with surface subdivision methods we obtain a very effective fair surface design technique. We then extend the analysis, and modify the algorithm accordingly, to accommodate different types of constraints. Some constraints can be imposed without any modification of the algorithm, while others require the solution of a small associated linear system of equations. In particular, vertex location constraints, vertex normal constraints, and surface normal discontinuities across curves embedded in the surface, can be imposed with this technique. CR"}
{"_id": "225395fd97c4c61b697ccfec911d498038e5d2ee", "title": "Filling Holes in Triangular Meshes in Engineering", "text": "In this paper, a novel hole-filling algorithm for triangular meshes is proposed. Firstly, the hole is triangulated into a set of new triangles using modified principle of minimum angle. Then the initial patching mesh is refined according to the density of vertices on boundary edges. Finally, the patching mesh is optimized via the bilateral filter to recover missed features. Experimental results demonstrate that the proposed algorithm fills complex holes robustly, and preserves geometric features to a certain extent as well. The resulted meshes are of good quality for engineering."}
{"_id": "ef8db0f78e035569e4085011fffcb8492f376a43", "title": "Polygon mesh repairing: An application perspective", "text": "Nowadays, digital 3D models are in widespread and ubiquitous use, and each specific application dealing with 3D geometry has its own quality requirements that restrict the class of acceptable and supported models. This article analyzes typical defects that make a 3D model unsuitable for key application contexts, and surveys existing algorithms that process, repair, and improve its structure, geometry, and topology to make it appropriate to case-by-case requirements.\n The analysis is focused on polygon meshes, which constitute by far the most common 3D object representation. In particular, this article provides a structured overview of mesh repairing techniques from the point of view of the application context. Different types of mesh defects are classified according to the upstream application that produced the mesh, whereas mesh quality requirements are grouped by representative sets of downstream applications where the mesh is to be used. The numerous mesh repair methods that have been proposed during the last two decades are analyzed and classified in terms of their capabilities, properties, and guarantees. Based on these classifications, guidelines can be derived to support the identification of repairing algorithms best-suited to bridge the compatibility gap between the quality provided by the upstream process and the quality required by the downstream applications in a given geometry processing scenario."}
{"_id": "17f31b10caacbc0f636c78ddaaa3fbd4499c4c96", "title": "An Integrated Approach to Filling Holes in Meshes", "text": "Data obtained by scanning 3D models typically contains missing pieces and holes. This paper describes an integrated approach for filling holes in triangular meshes obtained during the process of surface reconstruction, which consists of three steps: triangulation, refinement and smoothing. The approach first generates a minimal triangulation to the hole, and then iteratively subdivides the patching meshes making it assort with the density of surrounding meshes. Finally, geometric optimization is enforced to the patching meshes for improving regularity. The approach can deal with arbitrary holes in oriented connected manifold meshes. Experimental results on several triangular meshes are presented."}
{"_id": "1a02e6ce5f288c3c6a58d825a44e1cae69e35f97", "title": "Filling Holes in Complex Surfaces using Volumetric Diffusion", "text": "We address the problem of building watertight 3D models from surfaces that contain holes \u2212 for example, sets of range scans that observe most but not all of a surface. We specifically address situations in which the holes are too geometrically and topologically complex to fill using triangulation algorithms. Our solution begins by constructing a signed distance function, the zero set of which defines the surface. Initially, this function is defined only in the vicinity of observed surfaces. We then apply a diffusion process to extend this function through the volume until its zero set bridges whatever holes may be present. If additional information is available, such as known-empty regions of space inferred from the lines of sight to a 3D scanner, it can be incorporated into the diffusion process. Our algorithm is simple to implement, is guaranteed to produce manifold noninterpenetrating surfaces, and is efficient to run on large datasets because computation is limited to areas near holes."}
{"_id": "d8de994a13bbebb8d95b14429c5d51798e5187e3", "title": "Role-Based Access Control Model Supporting Regional Division in Smart Grid System", "text": "Smart Grid is a modern electric power infrastructure that incorporates elements of traditional power systems and information and communication technology (ICT) with the aim to improve the reliability, efficiency and safety as crucial requirements of critical infrastructure systems. Relying on the ICT, Smart Grid exposes electric power systems to new security issues and vulnerabilities over the network. Therefore, the security is becoming an increasing concern at both physical and information levels. Access controls are one of the most important aspects of information security. Role-based access control (RBAC) model is widely used in complex systems that are characterized by many participants. However, the existing models are not suitable for critical infrastructure systems with specialized features, such as a high number of devices dispersed over the vast geographical regions in \u201can Internetlike distributed environment\u201d. In this paper, access control mechanism optimized for Smart Grid systems is developed considering the regional division and concept of areas of responsibility (AOR). To this end, the standardized RBAC model will be extended to provide an efficient and consistent policy with greater level of granularity from the aspect of resource protection. The proposed RBACAOR model is formally described and then applied to the smart grid environment."}
{"_id": "51f885f8e84cd91a641bb10c51b363f62f29dc94", "title": "Polyphenolic content and antioxidant properties of Moringa oleifera leaf extracts and enzymatic activity of liver from goats supplemented with Moringa oleifera leaves/sunflower seed cake.", "text": "The study investigated antioxidant potency of Moringa oleifera leaves in different in vitro systems using standard phytochemical methods. The antioxidative effect on the activities of superoxide dismutase (SOD), catalase (CAT), lipid peroxidation (LPO) and reduced glutathione (GSH) were investigated in goats supplemented with M. oleifera (MOL) or sunflower seed cake (SC). The acetone extract had higher concentrations of total flavonoids (295.01 \u00b1 1.89 QE/g) followed by flavonols (132.74 \u00b1 0.83 QE/g), phenolics (120.33 \u00b1 0.76 TE/g) and then proanthocyanidins (32.59 \u00b1 0.50 CE/g) than the aqueous extract. The reducing power of both solvent extracts showed strong antioxidant activity in a concentration dependent manner. The acetone extract depicted higher percentage inhibition against DPPH, ABTS and nitric oxide radicals which were comparable with reference standard antioxidants (vitamin C and BHT). MOL increased the antioxidant activity of GSH (186%), SOD (97.8%) and catalase (0.177%). Lipid peroxidation was significantly reduced by MOL. The present study suggests that M. oleifera could be a potential source of compounds with strong antioxidant potential."}
{"_id": "9ac28d20f15539cb3c73a0da8cdffc8fbdd54687", "title": "A 10-GS/s 4-Bit Single-Core Digital-to-Analog Converter for Cognitive Ultrawidebands", "text": "This brief delineates the design and realization of a 10-GS/s 4-bit digital-to-analog converter (DAC) for the cognitive ultrawideband (CUWB), an emerging solution for low interference and efficient spectrum utilization in communication networks. The DAC serves as the data converter for the adaptive waveform transmitter therein, largely to reduce its power dissipation and hardware complexity. For reasons of low power dissipation and low-cost CUWB application, the resolution of the DAC is 4 bits, its realization is in standard 65-nm CMOS, and the architecture is a single core. The binary current-steering DAC includes critical building blocks such as current sources and a novel deglitcher circuit. The current sources are designed for small area with high linearity based on our derived relationship between current-source output resistance and linearity parameters [integral nonlinearity (INL) and spurious-free dynamic range (SFDR)]. The deglitcher design includes high-speed source followers as high-speed low voltage swing buffers to improve the linearity by decreasing the output glitch energy. The DAC embodies an in situ hardware efficient (small integrated-circuit area and reduced input/output pinout) tester that generates 4 \u00d7 10-Gb/s test-data pattern to facilitate functional verification. The designed DAC achieves \u2264 0.16-least significant bit INL/differential nonlinearity and > 23-dBc SFDR over the Nyquist bandwidth up to 4.53 GHz, and features the most competitive figures-of-merit of all similar DACs reported to date."}
{"_id": "dfd5dff7e5652d1472acf081908f1a35c3e7179e", "title": "Inferring causal impact using Bayesian structural time-series models", "text": "An important problem in econometrics and marketing is to infer the causal impact that a designed market intervention has exerted on an outcome metric over time. In order to allocate a given budget optimally, for example, an advertiser must determine the incremental contributions that different advertising campaigns have made to web searches, product installs, or sales. This paper proposes to infer causal impact on the basis of a diffusion-regression state-space model that predicts the counterfactual market response that would have occurred had no intervention taken place. In contrast to classical difference-in-differences schemes, state-space models make it possible to (i) infer the temporal evolution of attributable impact, (ii) incorporate empirical priors on the parameters in a fully Bayesian treatment, and (iii) flexibly accommodate multiple sources of variation, including the time-varying influence of contemporaneous covariates, i.e., synthetic controls. Using a Markov chain Monte Carlo algorithm for posterior inference, we illustrate the statistical properties of our approach on synthetic data. We then demonstrate its practical utility by evaluating the effect of an online advertising campaign on search-related site visits. We discuss the strengths and limitations of our approach in improving the accuracy of causal attribution, power analyses, and principled budget allocation."}
{"_id": "c62277e01c6dda0c195f494a424959368a077d60", "title": "Understanding the influence of dead-time on GaN based synchronous boost converter", "text": "Gallium Nitride (GaN) based power switching devices are known to be superior to conventional Si devices due to properties such as low loss and fast switching speed. However, as a new technology, some special characteristics of GaN-based power devices still remain unknown to the public. This paper tries to investigate the effect of dead-time on a GaN-based synchronous boost converter, as compared to its Si counterpart. It is found out that GaN-based converter is more sensitive to dead-time related loss as a result of fast switching transition and high reverse voltage drop. Improper selection of dead-time value can offset its advantage over Si converter. Analyses also show that GaN HEMTs have different switching characteristics compared to Si MOSFET due to low Cgd to Cds ratio and lack of reverse recovery. These critical findings will help power electronic engineers take better advantage of GaN technology in synchronous rectification and inverter applications."}
{"_id": "8196b0321d1699937a7444993731ab46aee8c9c8", "title": "Clonogenic assay of cells in vitro", "text": "Clonogenic assay or colony formation assay is an in vitro cell survival assay based on the ability of a single cell to grow into a colony. The colony is defined to consist of at least 50 cells. The assay essentially tests every cell in the population for its ability to undergo \u201cunlimited\u201d division. Clonogenic assay is the method of choice to determine cell reproductive death after treatment with ionizing radiation, but can also be used to determine the effectiveness of other cytotoxic agents. Only a fraction of seeded cells retains the capacity to produce colonies. Before or after treatment, cells are seeded out in appropriate dilutions to form colonies in 1\u20133 weeks. Colonies are fixed with glutaraldehyde (6.0% v/v), stained with crystal violet (0.5% w/v) and counted using a stereomicroscope. A method for the analysis of radiation dose\u2013survival curves is included."}
{"_id": "167e5b3ff2c2f1fc46fd309957d157b09d047f70", "title": "Animations 25 Years Later: New Roles and Opportunities", "text": "Animations are commonplace in today's user interfaces. From bouncing icons that catch attention, to transitions helping with orientation, to tutorials, animations can serve numerous purposes. We revisit Baecker and Small's pioneering work Animation at the Interface, 25 years later. We reviewed academic publications and commercial systems, and interviewed 20 professionals of various backgrounds. Our insights led to an expanded set of roles played by animation in interfaces today for keeping in context, teaching, improving user experience, data encoding and visual discourse. We illustrate each role with examples from practice and research, discuss evaluation methods and point to opportunities for future research. This expanded description of roles aims at inspiring the HCI research community to find novel uses of animation, guide them towards evaluation and spark further research."}
{"_id": "8440f7d0811ada6c3f0a0025b27a8fc3e6675faa", "title": "The knowledge pyramid: a critique of the DIKW hierarchy", "text": "The paper evaluates the Data-Information-Knowledge-Wisdom (DIKW) Hierarchy. This hierarchy is part of the canon of information science and management. The paper considers whether the hierarchy, also known as the \u2018Knowledge Hierarchy\u2019, is a useful and intellectually desirable construct to introduce, whether the views expressed about DIKW are true and have evidence in favour of them, and whether there are good reasons offered or sound assumptions made about DIKW. Arguments are offered that the hierarchy is unsound and methodologically undesirable. The paper identifies a central logical error that DIKW makes. The paper identifies the dated and unsatisfactory philosophical positions of operationalism and inductivism as the philosophical backdrop to the hierarchy. The paper concludes with a sketch of some positive theories, of value to information science, on the nature of the components of the hierarchy: that data is anything recordable in a semantically and pragmatically sound way, that information is what is known in other literature as \u2018weak knowledge\u2019, that knowledge also is \u2018weak knowledge\u2019 and that wisdom is the possession and use, if required, of wide practical knowledge, by an agent who appreciates the fallible nature of that knowledge."}
{"_id": "cf08bf7bcf3d3ec926d0cedf453e257e21cc398a", "title": "Testing of Digital Systems", "text": "Device testing represents the single largest manufacturing expense in the semiconductor industry, costing over $40 million a year. The most comprehensive and wide ranging book of its kind, Testing of Digital Systems covers everything you need to know about this vitally important subject. Starting right from the basics, the authors take the reader through automatic test pattern generation, design for testability and built-in self-test of digital circuits before moving on to more advanced topics such as IDDQ testing, functional testing, delay fault testing, memory testing, and fault diagnosis. The book includes detailed treatment of the latest techniques including test generation for various fault modes, discussion of testing techniques at different levels of integrated circuit hierarchy and a chapter on system-on-a-chip test synthesis. Written for students and engineers, it is both an excellent senior/graduate level textbook and a valuable reference."}
{"_id": "718a9dd7a7e82e891f1fb53b9316897d8cde3794", "title": "Understanding the Role of Social presence in Crowdfunding: Evidence from Leading U.S. And German Platforms", "text": "As a novel opportunity to acquire capital from the masses, crowdfunding has attracted great attention in academia and practice. So far, little is known about the factors that promote the success of crowdfunding projects, however. In this paper, we examine in how far the social presence perceived on a project\u2019s website influences the success of the respective crowdfunding project. Based on a data-driven analysis of 2.000 project websites from the largest crowdfunding platforms in the U.S. and Germany, we show that the perceived social presence has a significant influence on the success of crowdfunding projects. The obtained results indicate that using socially rich pictures and a socially rich description in the project presentation positively affects the success of a crowdfunding project. A socially rich profile page of the founder(s) in contrast appears to have a rather limited effect. The success of crowdfunding projects seems to be dependent on the participation behavior of the founder, however. Our results indicate that having backed other projects positively influences the success of one\u2019s own initiative. The number of answered comments might have a negative effect on the success of the initiative, though."}
{"_id": "18a7ff041c4c716fa212632a3a165d45ecbdccb9", "title": "Zero-Shot Learning Posed as a Missing Data Problem", "text": "This paper presents a method of zero-shot learning (ZSL) which poses ZSL as the missing data problem, rather than the missing label problem. Specifically, most existing ZSL methods focus on learning mapping functions from the image feature space to the label embedding space. Whereas, the proposed method explores a simple yet effective transductive framework in the reverse way - our method estimates data distribution of unseen classes in the image feature space by transferring knowledge from the label embedding space. Following the transductive setting, we leverage unlabeled data to refine the initial estimation. In experiments, our method achieves the highest classification accuracies on two popular datasets, namely, 96.00% on AwA and 60.24% on CUB."}
{"_id": "0c0bc2dd31ef54f1a89f9bc49694f103249c6605", "title": "Relationship between Variants of One-Class Nearest Neighbors and Creating Their Accurate Ensembles", "text": "In one-class classification problems, only the data for the target class is available, whereas the data for the non-target class may be completely absent. In this paper, we study one-class nearest neighbor (OCNN) classifiers and their different variants. We present a theoretical analysis to show the relationships among different variants of OCNN that may use different neighbors or thresholds to identify unseen examples of the non-target class. We also present a method based on inter-quartile range for optimizing parameters used in OCNN in the absence of non-target data during training. Then, we propose two ensemble approaches based on random subspace and random projection methods to create accurate OCNN ensembles. We tested the proposed methods on 15 benchmark and real world domain-specific datasets and show that random-projection ensembles of OCNN perform best."}
{"_id": "2964577b83daae60e582d357bd7c463c349f57f0", "title": "Bus detection device for the blind using RFID application", "text": "This paper outlines a bus detection mechanism for the blind in travelling from one place to another. In order to get transportation independently, the blind use auditory touched clues like walking stick or white cane. The limitation of the walking stick is that a blind person must come into close proximity with their surroundings to determine the location of an obstacle. For that basis, various devices have been developed such as the Sonicguide, the Mowat sensor, the Laser cane and the Navbelt [4]. However, these device can only assist the blind at a pedestrian crossing. Therefore, the project is aims to develop a bus detection prototype using Radio Frequency Identification (RFID) for the blind. The paper covers brief idea about the blind and RFID system, review relevant papers and summary of current research. The review of RFID system compare between families of auto-ID technology, the basic principle of RFID, the type of RFID tagging and the antenna characteristic. The summary of current research discussed about the development of prototype, the database system, the output mechanism and integration between hardware and software. Database management will provided. The information such as bus route, final destination and bus number are also provided. This paper also describes the future work intended to be done."}
{"_id": "8da15b6042d876e367333cda0e9ce2a15ebf3407", "title": "Association rules: Normalizing the lift", "text": "Association rules is a popular data mining technique for discovering relations between variables in large amounts of data. Support, confidence and lift are three of the most common measures for evaluating the usefulness of these rules. A concern with the lift measure is that it can only compare items within a transaction set. The main contribution of this paper is to develop a formula for normalizing the lift, as this will allow valid comparisons between distinct transaction sets. Traffic accident data was used to validate the revised formula for lift and the result of this analysis was very strong."}
{"_id": "5a014925a0ac9e329a77ed5101cc6949745f6344", "title": "Breast Cancer Multi-classification from Histopathological Images with Structured Deep Learning Model", "text": "Automated breast cancer multi-classification from histopathological images plays a key role in computer-aided breast cancer diagnosis or prognosis. Breast cancer multi-classification is to identify subordinate classes of breast cancer (Ductal carcinoma, Fibroadenoma, Lobular carcinoma, etc.). However, breast cancer multi-classification from histopathological images faces two main challenges from: (1) the great difficulties in breast cancer multi-classification methods contrasting with the classification of binary classes (benign and malignant), and (2) the subtle differences in multiple classes due to the broad variability of high-resolution image appearances, high coherency of cancerous cells, and extensive inhomogeneity of color distribution. Therefore, automated breast cancer multi-classification from histopathological images is of great clinical significance yet has never been explored. Existing works in literature only focus on the binary classification but do not support further breast cancer quantitative assessment. In this study, we propose a breast cancer multi-classification method using a newly proposed deep learning model. The structured deep learning model has achieved remarkable performance (average 93.2% accuracy) on a large-scale dataset, which demonstrates the strength of our method in providing an efficient tool for breast cancer multi-classification in clinical settings."}
{"_id": "3bd3cb377f6ba730ae502386dbcb244ca0bcc425", "title": "Effectiveness of domestic wastewater treatment using microbial fuel cells at ambient and mesophilic temperatures.", "text": "Domestic wastewater treatment was examined under two different temperature (23+/-3 degrees C and 30+/-1 degrees C) and flow modes (fed-batch and continuous) using single-chamber air-cathode microbial fuel cells (MFCs). Temperature was an important parameter for treatment efficiency and power generation. The highest power density of 422 mW/m(2) (12.8 W/m(3)) was achieved under continuous flow and mesophilic conditions, at an organic loading rate of 54 g COD/L-d, achieving 25.8% COD removal. Energy recovery was found to depend significantly on the operational conditions (flow mode, temperature, organic loading rate, and HRT) as well as the reactor architecture. The results demonstrate that the main advantages of using temperature-phased, in-series MFC configurations for domestic wastewater treatment are power savings, low solids production, and higher treatment efficiency."}
{"_id": "47079f0718de686ee3ee5c3a4061fd5f01c2aeb2", "title": "[Botulinum toxin injection techniques in the lower third and middle of the face, the neck and the d\u00e9collet\u00e9: the \"Nefertiti lift\"].", "text": "Although correction of the dynamic wrinkles of the upper part of the face is the major indication for botulinum toxin, there are also many possibilities for the middle and lower thirds of the face and neck. However, these injections are more delicate and require an experienced operator who has excellent knowledge of the muscles of these regions, their functions, the antagonist actions exercised on other muscles, particularly in terms of the complex equilibrium of the mouth. An excessive dose, an inappropriate injection point, or a centering mistake can all easily be responsible for undesirable side effects. However, the results obtained, often with lower doses than in the superior part of the face, can be highly satisfactory, notably in erasing bunny lines, improvement of marionette lines, peau d'orange chin, attenuation of peribuccal lines, melomental folds, correction of a gummy smile, and facial asymmetries. In the neck it is possible to reduce platysmal bands, horizontal lines, and diagonal lines of the neck and d\u00e9collet\u00e9. The face contours can also be improved by the Nefertiti lift. In the mid and lower regions of the face, botulinum toxin is often a complement to other esthetic techniques, particularly filling procedures."}
{"_id": "4f19a6383e509feef7b3a1cc8bd45dbb8af6669a", "title": "changepoint: An R Package for Changepoint Analysis", "text": "One of the key challenges in changepoint analysis is the ability to detect multiple changes within a given time series or sequence. The changepoint package has been developed to provide users with a choice of multiple changepoint search methods to use in conjunction with a given changepoint method and in particular provides an implementation of the recently proposed PELT algorithm. This article describes the search methods which are implemented in the package as well as some of the available test statistics whilst highlighting their application with simulated and practical examples. Particular emphasis is placed on the PELT algorithm and how results differ from the binary segmentation approach."}
{"_id": "5535bece0fd5e2f0b3eb8b8e2bf07221f85b5cc7", "title": "Eight-Band Antenna With A Small Ground Clearance for LTE Metal-Frame Mobile Phone Applications", "text": "This letter presents a compact eight-band monopole antenna whose size is 70\u00a0mm\u00a0\u00d7\u00a07\u00a0mm\u00a0\u00d7\u00a06\u00a0mm for LTE metal-frame mobile phones. The mainly radiating part of the proposed antenna is the metal frame. There are three gaps with the same size of 1\u00a0mm in the frame, and these gaps divide the antenna into two parts. Besides, a match circuit is used to enlarge the bandwidth and make the lower band matched. The advantage of the proposed antenna is that eight bands can be covered with a metal frame under the condition of only a 7\u00a0mm ground clearance. A prototype is manufactured and tested. The measured \u20136\u00a0dB impedance bandwidths are 375\u00a0MHz (0.675\u20131.05\u00a0GHz) and 1.2\u00a0GHz (1.6\u20132.8\u00a0GHz). The LTE700, GSM850, GSM900, GSM1800, GSM1900, UMTS, LTE2300, and LTE2500 bands are covered. The measured efficiencies are 52.7%\u201378.7% at the lower band and 45.6%\u201381% at the higher band. The measured gains are 0\u20132.1\u00a0dBi at the lower band and 1.6\u20133.6\u00a0dBi at the higher band."}
{"_id": "8f1fef0241a52edc455959bb00817c629770c7b1", "title": "Online Fault Detection and Diagnosis in Photovoltaic Systems Using Wavelet Packets", "text": "The nonlinear characteristics of a photovoltaic (PV) array, maximum power point tracking of the PV inverter, presence of blocking diodes, and lower irradiation prevent the conventional protection devices to trip under certain faults and lead to reduced system efficiency and fire hazards. Moreover, the PV characteristics under certain partial shading conditions are similar to that under fault conditions. Hence, it is imperative to detect faults and differentiate faults from the partial shading condition to avoid false tripping of the system. This paper proposes a simple fault detection method using the available data of array voltage and current by means of wavelet packets. The proposed scheme is simulated using MATLAB/Simulink and is experimentally tested on a 1.6-kW $\\text{4}\\,\\times \\,\\text{4}$ PV array to validate its performance."}
{"_id": "00af4fba4bc85262d381881848c3ad67536fcb6b", "title": "A Multivariate Timeseries Modeling Approach to Severity of Illness Assessment and Forecasting in ICU with Sparse, Heterogeneous Clinical Data", "text": "The ability to determine patient acuity (or severity of illness) has immediate practical use for clinicians. We evaluate the use of multivariate timeseries modeling with the multi-task Gaussian process (GP) models using noisy, incomplete, sparse, heterogeneous and unevenly-sampled clinical data, including both physiological signals and clinical notes. The learned multi-task GP (MTGP) hyperparameters are then used to assess and forecast patient acuity. Experiments were conducted with two real clinical data sets acquired from ICU patients: firstly, estimating cerebrovascular pressure reactivity, an important indicator of secondary damage for traumatic brain injury patients, by learning the interactions between intracranial pressure and mean arterial blood pressure signals, and secondly, mortality prediction using clinical progress notes. In both cases, MTGPs provided improved results: an MTGP model provided better results than single-task GP models for signal interpolation and forecasting (0.91 vs 0.69 RMSE), and the use of MTGP hyperparameters obtained improved results when used as additional classification features (0.812 vs 0.788 AUC)."}
{"_id": "23c3953fb45536c9129e86ac7a23098bd9f1381d", "title": "Machine Learning for Sequential Data: A Review", "text": "Statistical learning problems in many fields involve sequential data. This paper formalizes the principal learning tasks and describes the methods that have been developed within the machine learning research community for addressing these problems. These methods include sliding window methods, recurrent sliding windows, hidden Markov models, conditional random fields, and graph transformer networks. The paper also discusses some open research issues."}
{"_id": "604a82697d874c4da2aa07797c4b9f24c3dd272a", "title": "Lung cancer cell identification based on artificial neural network ensembles", "text": "An artificial neural network ensemble is a learning paradigm where several artificial neural networks are jointly used to solve a problem. In this paper, an automatic pathological diagnosis procedure named Neural Ensemble-based Detection (NED) is proposed, which utilizes an artificial neural network ensemble to identify lung cancer cells in the images of the specimens of needle biopsies obtained from the bodies of the subjects to be diagnosed. The ensemble is built on a two-level ensemble architecture. The first-level ensemble is used to judge whether a cell is normal with high confidence where each individual network has only two outputs respectively normal cell or cancer cell. The predictions of those individual networks are combined by a novel method presented in this paper, i.e. full voting which judges a cell to be normal only when all the individual networks judge it is normal. The second-level ensemble is used to deal with the cells that are judged as cancer cells by the first-level ensemble, where each individual network has five outputs respectively adenocarcinoma, squamous cell carcinoma, small cell carcinoma, large cell carcinoma, and normal, among which the former four are different types of lung cancer cells. The predictions of those individual networks are combined by a prevailing method, i.e. plurality voting. Through adopting those techniques, NED achieves not only a high rate of overall identification, but also a low rate of false negative identification, i.e. a low rate of judging cancer cells to be normal ones, which is important in saving lives due to reducing missing diagnoses of cancer patients."}
{"_id": "1a16866c9fce54eb9c21b7730a42f1f17372907f", "title": "Towards Online, Accurate, and Scalable QoS Prediction for Runtime Service Adaptation", "text": "Service-based cloud applications are typically built on component services to fulfill certain application logic. To meet quality-of-service (QoS) guarantees, these applications have to become resilient against the QoS variations of their component services. Runtime service adaptation has been recognized as a key solution to achieve this goal. To make timely and accurate adaptation decisions, effective QoS prediction is desired to obtain the QoS values of component services. However, current research has focused mostly on QoS prediction of the working services that are being used by a cloud application, but little on QoS prediction of candidate services that are also important for making adaptation decisions. To bridge this gap, in this paper, we propose a novel QoS prediction approach, namely adaptive matrix factorization (AMF), which is inspired from the collaborative filtering model used in recommender systems. Specifically, our AMF approach extends conventional matrix factorization into an online, accurate, and scalable model by employing techniques of data transformation, online learning, and adaptive weights. Comprehensive experiments have been conducted based on a real-world large-scale QoS dataset of Web services to evaluate our approach. The evaluation results provide good demonstration for our approach in achieving accuracy, efficiency, and scalability."}
{"_id": "ec0c3f8206f879857b9aea6d553084058131e7c3", "title": "Aromas of rosemary and lavender essential oils differentially affect cognition and mood in healthy adults.", "text": "This study was designed to assess the olfactory impact of the essential oils of lavender (Lavandula angustifolia) and rosemary (Rosmarlnus officinalis) on cognitive performance and mood in healthy volunteers. One hundred and forty-four participants were randomly assigned to one of three independent groups, and subsequently performed the Cognitive Drug Research (CDR) computerized cognitive assessment battery in a cubicle containing either one of the two odors or no odor (control). Visual analogue mood questionnaires were completed prior to exposure to the odor, and subsequently after completion of the test battery. The participants were deceived as to the genuine aim of the study until the completion of testing to prevent expectancy effects from possibly influencing the data. The outcome variables from the nine tasks that constitute the CDR core battery feed into six factors that represent different aspects of cognitive functioning. Analysis of performance revealed that lavender produced a significant decrement in performance of working memory, and impaired reaction times for both memory and attention based tasks compared to controls. In contrast, rosemary produced a significant enhancement of performance for overall quality of memory and secondary memory factors, but also produced an impairment of speed of memory compared to controls. With regard to mood, comparisons of the change in ratings from baseline to post-test revealed that following the completion of the cognitive assessment battery, both the control and lavender groups were significantly less alert than the rosemary condition; however, the control group was significantly less content than both rosemary and lavender conditions. These findings indicate that the olfactory properties of these essential oils can produce objective effects on cognitive performance, as well as subjective effects on mood."}
{"_id": "a9b545a0bf2e305caff18fe94c6dad12342554a6", "title": "Sentiment analysis of a document using deep learning approach and decision trees", "text": "The given paper describes modern approach to the task of sentiment analysis of movie reviews by using deep learning recurrent neural networks and decision trees. These methods are based on statistical models, which are in a nutshell of machine learning algorithms. The fertile area of research is the application of Google's algorithm Word2Vec presented by Tomas Mikolov, Kai Chen, Greg Corrado and Jeffrey Dean in 2013. The main idea of Word2Vec is the representations of words with the help of vectors in such manner that semantic relationships between words preserved as basic linear algebra operations. The extra advantage of the mentioned algorithm above the alternatives is computational efficiency. This paper focuses on using Word2Vec model for text classification by their sentiment type."}
{"_id": "76044a2bc14af5ab600908dcbc8ba8edfb58a673", "title": "A Hardware Design Language for Efficient Control of Timing Channels", "text": "Information security can be compromised by leakage via low-level hardware features. One recently prominent example is cache probing attacks, which rely on timing channels created by caches. We introduce a hardware design language, SecVerilog, which makes it possible to statically analyze information flow at the hardware level. With SecVerilog, systems can be built with verifiable control of timing channels and other information channels. SecVerilog is Verilog, extended with expressive type annotations that enable precise reasoning about information flow. It also comes with rigorous formal assurance: we prove that SecVerilog enforces timing-sensitive noninterference and thus ensures secure information flow. By building a secure MIPS processor and its caches, we demonstrate that SecVerilog makes it possible to build complex hardware designs with verified security, yet with low overhead in time, space, and HW designer effort."}
{"_id": "ffcfe390081aba6eb451a8465703415132cf8746", "title": "A comparison of Deep Learning methods for environmental sound detection", "text": "Environmental sound detection is a challenging application of machine learning because of the noisy nature of the signal, and the small amount of (labeled) data that is typically available. This work thus presents a comparison of several state-of-the-art Deep Learning models on the IEEE challenge on Detection and Classification of Acoustic Scenes and Events (DCASE) 2016 challenge task and data, classifying sounds into one of fifteen common indoor and outdoor acoustic scenes, such as bus, cafe, car, city center, forest path, library, train, etc. In total, 13 hours of stereo audio recordings are available, making this one of the largest datasets available."}
{"_id": "9682f17929a2a2f1ff9b812de37d23bdf40cb766", "title": "A Tristate Rigid Reversible and Non-Back-Drivable Active Docking Mechanism for Modular Robotics", "text": "This paper proposes a new active bonding mechanism that achieves rigid, reversible, and nonback-drivable coupling between modular mobile robots in a chain formation. The first merit of this interface lies in its ability to operate in three independent modes. In the drive mode, the motor torque is routed to drive the module. In the clamp mode, the motor torque is redirected toward an active joint that enables one module to rotate relative to its neighbors in the formation. In the neutral mode, the motor's rotation achieves alignment between the interface's components prior to the initiation of the drive and clamp modes. The second merit stems from the dual-rod slider rocker (DRSR) mechanism, which toggles between the interface's three modes of operation. The design details of the interface are presented, as well as the optimal kinematic synthesis and dynamic analysis of the DRSR mechanism. Simulation and experimental results validate the DRSR's unique kinematics, as well as the rigidity and the three operation modes of the docking interface."}
{"_id": "435277e20f62f2dcb40564a525cf7e73199eaaac", "title": "Sound-model-based acoustic source localization using distributed microphone arrays", "text": "Acoustic source localization and sound recognition are common acoustic scene analysis tasks that are usually considered separately. In this paper, a new source localization technique is proposed that works jointly with an acoustic event detection system. Given the identities and the end-points of simultaneous sounds, the proposed technique uses the statistical models of those sounds to compute a likelihood score for each model and for each signal at the output of a set of null-steering beamformers per microphone array. Those scores are subsequently combined to find the MAP-optimal event source positions in the room. Experimental work is reported for a scenario consisting of meeting-room acoustic events, either isolated or overlapped with speech. From the localization results, which are compared with those from the SRP-PHAT technique, it seems that the proposed model-based approach can be an alternative to current techniques for event-based localization."}
{"_id": "4a9b68db23584c3b2abfd751100e8b5b44630f9c", "title": "Graph Convolutional Neural Networks for ADME Prediction in Drug Discovery", "text": "ADME in-silico methods have grown increasingly powerful over the past twenty years, driven by advances in machine learning and the abundance of high-quality training data generated by laboratory automation. Meanwhile, in the technology industry, deep-learning has taken o\u21b5, driven by advances in topology design, computation, and data. The key premise of these methods is that the model is able to pass gradients back into the feature structure, engineering its own problem-specific representation for the data. Graph Convolutional Networks (GC-DNNs), a variation of neural fingerprints, allow for true deep-learning in the chemistry domain. We use this new approach to build human plasma protein binding, lipophilicty, and humans clearance models that significantly outperform random-forests and support-vector-regression."}
{"_id": "316293b426eb42853f30cd906f71fa6bc73dacce", "title": "System Dynamics: Modeling, Simulation, and Control of Mechatronic Systems", "text": "If your wanted solutions manual ins't on this list, also can ask me if _ is available. Theory of Applied Robotics: Kinematics, Dynamics and Control (Reza N. _ Jazar) System Dynamics: Modeling and Simulation of Mechatronic Systems (4th _. Solution Manual Dynamics of Mechanical Systems (C.T.F. Ross) Solution Manual Thermodynamics : An Integrated Learning System (Schmidt, Ezekoye, System Dynamics : Modeling, Simulation, and Control of Mechatronic Systems (5th."}
{"_id": "722ef716113443e2b13bd180dcfacf1c1f2054f2", "title": "A Universal-Input Single-Stage High-Power-Factor Power Supply for HB-LEDs Based on Integrated Buck-Flyback Converter", "text": "Due to the high rise in luminous efficiency that HB-LEDs have experienced in the last recent years, many new applications have been researched. In this paper, a streetlight LED application will be covered, using the Integrated Buck-Flyback Converter developed in previous works, which performs power factor correction (PFC) from a universal ac source, as well as a control loop using the LM3524 IC, which allows PWM dimming operation mode. Firstly, the LED load will be linearized and modeled in order to calculate the IBFC topology properly. Afterwards, the converter will be calculated, presenting the one built in the lab. Secondly, the converter will be modeled in order to build the closed loop system, modeling the current sensor as well in order to develop an adequate controller. Finally, experimental results obtained from the lab tests will be presented."}
{"_id": "0c23ebb3abf584fa5e0fde558584befc94fb5ea2", "title": "Parsing Natural Scenes and Natural Language with Recursive Neural Networks", "text": "Recursive structure is commonly found in the inputs of different modalities such as natural scene images or natural language sentences. Discovering this recursive structure helps us to not only identify the units that an image or sentence contains but also how they interact to form a whole. We introduce a max-margin structure prediction architecture based on recursive neural networks that can successfully recover such structure both in complex scene images as well as sentences. The same algorithm can be used both to provide a competitive syntactic parser for natural language sentences from the Penn Treebank and to outperform alternative approaches for semantic scene segmentation, annotation and classification. For segmentation and annotation our algorithm obtains a new level of state-of-theart performance on the Stanford background dataset (78.1%). The features from the image parse tree outperform Gist descriptors for scene classification by 4%."}
{"_id": "4cf4bdbf65d1db8fe22ac040591dba9622fce5b3", "title": "Correcting the Document Layout: A Machine Learning Approach", "text": "In this paper, a machine learning approach to support the user during the correction of the layout analysis is proposed. Layout analysis is the process of extracting a hierarchical structure describing the layout of a page. In our approach, the layout analysis is performed in two steps: firstly, the global analysis determines possible areas containing paragraphs, sections, columns, figures and tables, and secondly, the local analysis groups together blocks that possibly fall within the same area. The result of the local analysis process strongly depends on the quality of the results of the first step. We investigate the possibility of supporting the user during the correction of the results of the global analysis. This is done by allowing the user to correct the results of the global analysis and then by learning rules for layout correction from the sequence of user actions. Experimental results on a set of multi-page documents are reported and commented. 1. Background and motivation Strategies for the extraction of layout analysis have been traditionally classified as top-down or bottom-up [10]. In top-down methods, the document image is repeatedly decomposed into smaller and smaller components, while in bottom-up methods, basic layout components are extracted from bitmaps and then grouped together into larger blocks on the basis of their characteristics. In WISDOM++, a document image analysis system that can transform paper documents into XML format [1], the applied page decomposition method is hybrid, since it combines a top-down approach to segment the document image, and a bottom-up layout analysis method to assemble basic blocks into frames. Some attempts to learn the layout structure from a set of training examples have also been reported in the literature [2,3,4,7,11]. They are based on ad-hoc learning algorithms, which learn particular data structures, such as geometric trees and tree grammars. Results are promising, although it has been proven that good layout structures could also be obtained by exploiting generic knowledge on typographic conventions [5]. This is the case of WISDOM++, which analyzes the layout in two steps: 1. A global analysis, in order to determine possible areas containing paragraphs, sections, columns, figures and tables. This step is based on an iterative process, in which the vertical and horizontal histograms of text blocks are alternately analyzed, in order to detect columns and sections/paragraphs, respectively. 2. A local analysis to group together blocks that possibly fall within the same area. Generic knowledge on west-style typesetting conventions is exploited to group blocks together, such as \u201cthe first line of a paragraph can be indented\u201d and \u201cin a justified text, the last line of a paragraph can be shorter than the previous one\u201d. Experimental results proved the effectiveness of this knowledge-based approach on images of the first page of papers published in conference proceedings and journals [1]. However, performance degenerates when the system is tested on intermediate pages of multi-page articles, where the structure is much more variable, due to the presence of formulae, images, and drawings that can stretch over more than one column, or are quite close. 
The majority of errors made by the layout analysis module were in the global analysis step, while the local analysis step performed satisfactorily when the result of the global analysis was correct. In this paper, we investigate the possibility of supporting the user during the correction of the results of the global analysis. This is done by allowing the user to correct the results of the global analysis and then by learning rules for layout correction from his/her sequence of actions. This approach is different from those that learn the layout structure from scratch, since we try to correct the result of a global analysis returned by a bottom-up algorithm. Furthermore, we intend to capture knowledge on correcting actions performed by the user of the document image processing system. Other document processing systems allow users to correct the result of the layout analysis; nevertheless WISDOM++ is the only one that tries to learn correcting actions from user interaction with the system. Proceedings of the Seventh International Conference on Document Analysis and Recognition (ICDAR 2003) 0-7695-1960-1/03 $17.00 \u00a9 2003 IEEE In the following section, we describe the layout correction operations. The automated generation of training examples is explained in Section 3. Section 4 introduces the learning strategy, while Section 5 presents some experimental results. 2. Correcting the layout Global analysis aims at determining the general layout structure of a page and operates on a tree-based representation of nested columns and sections. The levels of columns and sections are alternated (Figure 1), which means that a column contains sections, while a section contains columns. At the end of the global analysis, the user can only see the sections and columns that have been considered atomic, that is, not subject to further decomposition (Figure 2). The user can correct this result by means of three different operations: \u2022 Horizontal splitting: a column/section is cut horizontally. \u2022 Vertical splitting: a column/section is cut vertically. \u2022 Grouping: two sections/columns are merged together. The cut point in the two splitting operations is automatically determined by computing either the horizontal or the vertical histogram on the basic blocks returned by the segmentation algorithm. The horizontal (vertical) cut point corresponds to the largest gap between two consecutive bins in the horizontal (vertical) histogram. Therefore, splitting operations can be described by means of a unary function, split(X), where X represents the column/section to be split and the range is the set {horizontal, vertical, no_split}. The grouping operation, which can be described by means of a binary predicate group(A,B), is applicable to two sections (columns) A and B and returns a new section (column) C, whose boundary is determined as follows. Let (leftX, topX) and (bottomX, rightX) be the coordinates of the top-left and bottom-right vertices of a column/section X, respectively. Then: leftC= min(leftA, leftB), rightC=max(rightA,rightB), topC=min(topA,topB), bottomC=max(bottomA,bottomB). Grouping is possible only if the following two conditions are satisfied: 1. C does not overlap another section (column) in the document. 2. A and B are nested in the same column (section). 
After each splitting/grouping operation, WISDOM++ recomputes the result of the local analysis process, so that the user can immediately perceive the final effect of the requested correction and can decide whether to confirm the correction or not. 3. Representing corrections From the user interaction, WISDOM++ implicitly generates some training observations describing when and how the user intended to correct the result of the global analysis. These training observations are used to learn correction rules of the result of the global analysis, as explained in the next section. The simplest representation describes, for each training observation, the page layout at the i-th correction step and the correcting operation performed by the user on that layout. Therefore, if the user performs n-1 correcting operations, n observations are generated. The last one corresponds to the page layout accepted by the user. In the learning phase, this representation may lead the system to generate rules which strictly take into account the exact user correction sequence. However, several alternative correction sequences, which lead to the same result, may be also possible. If they are not considered, the learning strategy will suffer from data overfitting problems. This issue was already discussed in a preliminary work [9]. A more sophisticated representation, which takes into account alternative correction sequences, is based on the Column level"}
{"_id": "a48cf75ec1105e72035c0b4969423dedb625c5bf", "title": "Building Lifecycle Management System for Enhanced Closed Loop Collaboration", "text": "In the past few years, the architecture, engineering and construction (AEC) industry has carried out efforts to develop BIM (Building Information Modelling) facilitating tools and standards for enhanced collaborative working and information sharing. Lessons learnt from other industries and tools such as PLM (Product Lifecycle Management) \u2013 established tool in manufacturing to manage the engineering change process \u2013 revealed interesting potential to manage more efficiently the building design and construction processes. Nonetheless, one of the remaining challenges consists in closing the information loop between multiple building lifecycle phases, e.g. by capturing information from middle-of-life processes (i.e., use and maintenance) to re-use it in end-of-life processes (e.g., to guide disposal decision making). Our research addresses this lack of closed-loop system in the AEC industry by proposing an open and interoperable Web-based building lifecycle management system. This paper gives (i) an overview of the requirement engineering process that has been set up to integrate efforts, standards and directives of both the AEC and PLM industries, and (ii) first proofs-of-concept of our system implemented on two distinct campus."}
{"_id": "dda8f969391d71dd88a4546c06f37a6c1bcd3412", "title": "Bridge Resonant Inverter For Induction Heating Applications", "text": "Induction heating is a well-known technique to produce very high temperature for applications. A large number of topologies have been developed in this area such as voltage and current source inverter. Recent developments in switching schemes and control methods have made the voltage-source resonant inverters widely used in several applications that require output power control. The series-resonant inverter needs an output transformer for matching the output power to the load but it carry high current as a result additional real power loss is occur and overall efficiency also is reduced. This project proposes a high efficiency LLC resonant inverter for induction heating applications by using asymmetrical voltage cancellation control .The proposed control method is implemented in a full-bridge topology for induction heating application. The output power is controlled using the asymmetrical voltage cancellation technique. The LLC resonant tank is designed without the use of output transformer. This results in an increase of the net efficiency of the induction heating system. The circuit is simulated using MATLAB .The circuit is implemented using PIC controller. Both simulation and hardware results are compared."}
{"_id": "c98bd092ebb68787da64e95202f922a0ed7fc588", "title": "Development and Initial Validation of a Multidimensional Scale Assessing Subjective Well-Being: The Well-Being Scale (WeBS).", "text": "Numerous scales currently exist that assess well-being, but research on measures of well-being is still advancing. Conceptualization and measurement of subjective well-being have emphasized intrapsychic over psychosocial domains of optimal functioning, and disparate research on hedonic, eudaimonic, and psychological well-being lacks a unifying theoretical model. Lack of systematic investigations on the impact of culture on subjective well-being has also limited advancement of this field. The goals of this investigation were to (1) develop and validate a self-report measure, the Well-Being Scale (WeBS), that simultaneously assesses overall well-being and physical, financial, social, hedonic, and eudaimonic domains of this construct; (2) evaluate factor structures that underlie subjective well-being; and (3) examine the measure's psychometric properties. Three empirical studies were conducted to develop and validate the 29-item scale. The WeBS demonstrated an adequate five-factor structure in an exploratory structural equation model in Study 1. Confirmatory factor analyses showed that a bifactor structure best fit the WeBS data in Study 2 and Study 3. Overall WeBS scores and five domain-specific subscale scores demonstrated adequate to excellent internal consistency reliability and construct validity. Mean differences in overall well-being and its five subdomains are presented for different ethnic groups. The WeBS is a reliable and valid measure of multiple aspects of well-being that are considered important to different ethnocultural groups."}
{"_id": "36675de101e2eb228727692e969a909d49fbbee2", "title": "Adoption of Mobile Internet Services: An Exploratory Study of Mobile Commerce Early Adopters", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the \u201cContent\u201d) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content."}
{"_id": "44c3dac2957f379e7646986f593b9a7db59bd714", "title": "Literary Fiction Improves Theory of Mind", "text": ""}
{"_id": "0d96ac48e92b6b42737276a319f48d9d27080fce", "title": "EvalAI: Towards Better Evaluation Systems for AI Agents", "text": "We introduce EvalAI, an open source platform for evaluating and comparing machine learning (ML) and artificial intelligence algorithms (AI) at scale. EvalAI is built to provide a scalable solution to the research community to fulfill the critical need of evaluating machine learning models and agents acting in an environment against annotations or with a human-in-the-loop. This will help researchers, students, and data scientists to create, collaborate, and participate in AI challenges organized around the globe. By simplifying and standardizing the process of benchmarking these models, EvalAI seeks to lower the barrier to entry for participating in the global scientific effort to push the frontiers of machine learning and artificial intelligence, thereby increasing the rate of measurable progress in this domain. Our code is available here."}
{"_id": "5a4bb08d4750d27bd5a2ad0a993d144c4fb9586c", "title": "Hype and Heavy Tails: A Closer Look at Data Breaches", "text": "Recent widely publicized data breaches have exposed the personal information of hundreds of millions of people. Some reports point to alarming increases in both the size and frequency of data breaches, spurring institutions around the world to address what appears to be a worsening situation. But, is the problem actually growing worse? In this paper, we study a popular public dataset and develop Bayesian Generalized Linear Models to investigate trends in data breaches. Analysis of the model shows that neither size nor frequency of data breaches has increased over the past decade. We find that the increases that have attracted attention can be explained by the heavy-tailed statistical distributions underlying the dataset. Specifically, we find that data breach size is log-normally distributed and that the daily frequency of breaches is described by a negative binomial distribution. These distributions may provide clues to the generative mechanisms that are responsible for the breaches. Additionally, our model predicts the likelihood of breaches of a particular size in the future. For example, we find that in the next year there is only a 31% chance of a breach of 10 million records or more in the US. Regardless of any trend, data breaches are costly, and we combine the model with two different cost models to project that in the next three years breaches could cost up to $55 billion."}
{"_id": "4665130fd1236d7cf5636b188ca7adcc4fd8d631", "title": "Smart Semantic Middleware for the Internet of Things", "text": "As ubiquitous systems become increasingly complex, traditional solutions to manage and control them reach their limits and pose a need for self-manageability. Also, heterogeneity of the ubiquitous components, standards, data formats, etc, creates significant obstacles for interoperability in such complex systems. The promising technologies to tackle these problems are the Semantic technologies, for interoperability, and the Agent technologies for management of complex systems. This paper describes our vision of a middleware for the Internet of Things, which will allow creation of self-managed complex systems, in particular industrial ones, consisting of distributed and heterogeneous components of different nature. We also present an analysis of issues to be resolved to realize such a middleware."}
{"_id": "48059d47eb27e58f40de3c1e6509b6ed3508b953", "title": "Strong Authentication for RFID Systems Using the AES Algorithm", "text": "Radio frequency identification (RFID) is an emerging technology which brings enormous productivity benefits in applications where objects have to be identified automatically. This paper presents issues concerning security and privacy of RFID systems which are heavily discussed in public. In contrast to the RFID community, which claims that cryptographic components are too costly for RFID tags, we describe a solution using strong symmetric authentication which is suitable for today\u2019s requirements regarding low power consumption and low die-size. We introduce an authentication protocol which serves as a proof of concept for authenticating an RFID tag to a reader device using the Advanced Encryption Standard (AES) as cryptographic primitive. The main part of this work is a novel approach of an AES hardware implementation which encrypts a 128-bit block of data within 1000 clock cycles and has a power consumption below 9 \u03bcA on a 0.35 \u03bcm CMOS process."}
{"_id": "e8e2c3d884bba807bcf7fbfa2c27f864b20ceb80", "title": "HMAC: Keyed-Hashing for Message Authentication", "text": "This memo provides information for the Internet community. This memo does not specify an Internet standard of any kind. Distribution of this memo is unlimited. Abstract This document describes HMAC, a mechanism for message authentication using cryptographic hash functions. HMAC can be used with any iterative cryptographic hash function, e.g., MD5, SHA-1, in combination with a secret shared key. The cryptographic strength of HMAC depends on the properties of the underlying hash function."}
{"_id": "f01d369becb42ff69d156d5e19d8af18dadacc6e", "title": "Security and Privacy in Sensor Networks", "text": "As wireless sensor networks (WSN) continue to grow, so does the need for effective security mechanisms. Since sensor networks can interact with sensitive data and/or operate in hostile unattended environments, it is imperative that these security concerns be addressed from the beginning of the system design. This paper aims at describing security solutions for collecting and processing data in WSNs. Adequate security capabilities for medium and large scale WSNs are a hard but necessary goal to achieve to prepare these networks for the market. The paper includes an overview on WSNs space security solutioins and reliability challenges for"}
{"_id": "442ef2e96ea0f56d50e027efdc9ed44ddb74ec59", "title": "An Approach for Learning and Construction of Expressive Ontology from Text in Natural Language", "text": "In this paper, we present an approach based on Ontology Learning and Natural Language Processing for automatic construction of expressive Ontologies, specifically in OWL DL with ALC expressivity, from a natural language text. The viability of our approach is demonstrated through the generation of descriptions of complex axioms from concepts defined by users and glossaries found at Wikipedia. We evaluated our approach in an experiment with entry sentences enriched with hierarchy axioms, disjunction, conjunction, negation, as well as existential and universal quantification to impose restriction of properties. The obtained results prove that our model is an effective solution for knowledge representation and automatic construction of expressive Ontologies. Thereby, it assists professionals involved in processes for obtain, construct and model knowledge domain."}
{"_id": "b2f8bc3b8f0d41a9d8223d88420619b66b9beb10", "title": "Multiagent Path Finding With Persistence Conflicts", "text": "Multiagent path finding is the problem of finding paths for a set of agents\u2014one for each agent\u2014in a graph that the agents navigate simultaneously. Agents must navigate from their individual start to goal vertices without any collision. We argue that the prevalent treatment of path conflicts in the literature is incomplete for applications, such as computer games and crowd simulations, and extend the definition of path conflicts to accommodate cases where agents persist in their intermediate locations and even after reaching their goals. We show that an existing algorithm, conflict-based search (CBS), can be extended to handle these cases while preserving its optimality. Experiments show that our variant of CBS generates fewer nodes and runs faster than a competing algorithm."}
{"_id": "b0767b46a7cb3459030dd538356cc547145c105c", "title": "Low-Profile Two-Degree-of-Freedom Wrist Exoskeleton Device Using Multiple Spring Blades", "text": "Robotically assisted rehabilitation therapy, which is therapy performed using a robotic device that guides or assists motion and in some cases triggered by a biological signal (such as a myoelectric signal or brain signal) is effective in recovering motor function following impairment. In the field of rehabilitation robots and human support robot, exoskeleton robots have been widely studied both for upper and lower limbs. Applying the exoskeleton to a wrist joint remains challenging because the wrist has two degrees of freedom in a structure smaller than the structures of other upper and lower limb joints and the center of rotation moves according to motion owing to the complex anatomical structure. To tackle this problem, we developed a wrist exoskeleton mechanism that consists of two elastic elements that largely deform during motion in transmitting forces from two linear actuators implemented on the user's forearm and in transforming the forces to wrist flexion/extension and adduction/abduction. The advantages of the presented device are the device's compactness, low weight, and inherent flexibility due to the elastic structure. The present letter presents an overview of the newly developed wrist exoskeleton mechanism, prototype implementation, and preliminary mechanical evaluations. The technical verification of the exoskeleton revealed the feasibility of generating forces that can generate wrist flexion/extension and adduction/abduction movements."}
{"_id": "70624f16968fe4f0c851398dbd46a1ebcce892ce", "title": "An efficient message access quality model in vehicular communication networks", "text": "In vehicular ad hoc network (VANET), vehicles equipped with computing, sensing, and communication capabilities can exchange information within a geographical area to distribute emergency messages and achieve safety system. Then how to enforce fine grained control of these messages and ensure the receiving messages coming from the claimed source in such a highly dynamic environments remains a key challenge that affects the quality of service. In this paper, we propose a hierarchical access control with authentication scheme for transmitted messages with security assurance over VANET. By extending ciphertext-policy attribute-based encryption (CP-ABE) with a hierarchical structure of multiple authorities, the scheme not only achieves scalability due to its hierarchical structure, but also inherits fine-grained access control on the transmitted messages. Also by exploiting attribute-based signature (ABS), the scheme can authorize the vehicles that can most appropriately deal with the message efficiently. The results of efficiency analysis and comparison with the related works show that the proposed scheme is efficient and scalable in dealing with access control and message authentication for data dissemination in VANET. & 2014 Elsevier B.V. All rights reserved."}
{"_id": "278ab6b7b30db9d013c49bf396404f37dc65c07b", "title": "Modelling of Parasitic Capacitances for Single-Gate, Double-Gate and Independent Double-Gate MOSFET", "text": "This paper discusses the type of capacitances for Single Gate MOSFET and Double Gate MOSFET including their quantity. The effect of parasitic capacitance makes double gate MOSFET more suitable component for the designing of digital logic switches than single gate MOSFET. Here, we introducing Independent double gate MOSFET operation based on VeSFET concept. Then introducing with the total capacitance model of Independent double gate MOSFET and compared its performance parameters with double gate MOSFET and the single gate MOSFET. Double gate transistor circuit is the first choice for reduction of short channel effect in application of MOSFET. The basic advantage of double gate MOSFET is its area required. But design CMOS double gate transistor for AND functionality suffering from high leakage current; while using Independent double gate MOSFET based on VeSFET concept reduction of that leakage current is possible. So, we can easily implement logic circuits while using CMOS design based on Independent double gate MOSFET which gives high performance."}
{"_id": "3ce92cac0f3694be2f2918bf122679c6664a1e16", "title": "Deep Relative Attributes", "text": "Visual attributes are great means of describing images or scenes, in a way both humans and computers understand. In order to establish a correspondence between images and to be able to compare the strength of each property between images, relative attributes were introduced. However, since their introduction, hand-crafted and engineered features were used to learn increasingly complex models for the problem of relative attributes. This limits the applicability of those methods for more realistic cases. We introduce a deep neural network architecture for the task of relative attribute prediction. A convolutional neural network (ConvNet) is adopted to learn the features by including an additional layer (ranking layer) that learns to rank the images based on these features. We adopt an appropriate ranking loss to train the whole network in an end-to-end fashion. Our proposed method outperforms the baseline and state-of-the-art methods in relative attribute prediction on various coarse and fine-grained datasets. Our qualitative results along with the visualization of the saliency maps show that the network is able to learn effective features for each specific attribute. Source code of the proposed method is available at https://github.com/yassersouri/ghiaseddin."}
{"_id": "249fdd8406a8aea1342dc6ea17b589181c9c37dd", "title": "A 330nA energy-harvesting charger with battery management for solar and thermoelectric energy harvesting", "text": "Harvesting energy from ambient energy sources such as solar and thermal gradients is one solution to address the dramatic increase in energy consumption of personal electronics. In this paper, an ultra low quiescent current charger and battery management IC that can efficiently extract energy from solar panels and thermoelectric generators to charge batteries and super capacitors is presented. While previous works on energy harvesting [1-5] report efficient DC-DC converters to extract the energy from the harvester, not all of them provide complete battery management functionality for various chemistries. Also, maximum power point tracking (MPPT) which is critical in energy harvesting applications is not provided in some of the published work. In this paper, a charger and battery management IC with 330nA quiescent current is presented. The IC can cold start from 330mV and 5\u03bcW of input power. The charger achieves efficiency greater than 80% at single cell solar voltages of 0.5V. A low quiescent current battery management architecture involving sampled circuits, sub regulated rails and clock gating is demonstrated."}
{"_id": "97d21e0b65845bfed7acf9721eaaa97c297a11f6", "title": "On the Design and Scalability of Distributed Shared-Data Databases", "text": "Database scale-out is commonly implemented by partitioning data across several database instances. This approach, however, has several restrictions. In particular, partitioned databases are inflexible in large-scale deployments and assume a partition-friendly workload in order to scale. In this paper, we analyze an alternative architecture design for distributed relational databases that overcomes the limitations of partitioned databases. The architecture is based on two fundamental principles: We decouple query processing and transaction management from data storage, and we share data across query processing nodes. The combination of these design choices provides scalability, elasticity, and operational flexibility without making any assumptions on the workload. As a drawback, sharing data among multiple database nodes causes synchronization overhead. To address this limitation, we introduce techniques for scalable transaction processing in shared-data environments. Specifically, we describe mechanisms for efficient data access, concurrency control, and data buffering. In combination with new hardware trends, the techniques enable performance characteristics that top state-of-the-art partitioned databases."}
{"_id": "f940006de9cc3e657c8f4d91354948eeac61c3e9", "title": "High-gain 60-GHz patch antenna array using GaN MMIC technology", "text": "In this paper, a high-gain 2\u00d72 antenna array at 60 GHz using GaN MMIC process technology is presented. For gain enhancement, a modified patch antenna element with two parasitic strips on each non-radiation edge and a parasitic patch on one of the radiation edges is proposed. This antenna element achieves a gain of 3.42 dBi at 60 GHz, about 2 dBi higher than a conventional patch antenna element using the same process technology. The 2\u00d72 antenna array consisting of four modified patch antenna element exhibits a 10-dB impedance bandwidth of 10.1% and a peak gain of 5.7 dBi at 60 GHz."}
{"_id": "81eba9ac90fbde82525949cda44e5a5db13c1e97", "title": "Increasing psychological well-being and resilience by psychotherapeutic methods.", "text": "A specific psychotherapeutic strategy for increasing psychological well-being and resilience, well-being therapy, has been developed and validated in a number of randomized controlled trials. The findings indicate that flourishing and resilience can be promoted by specific interventions leading to a positive evaluation of one's self, a sense of continued growth and development, the belief that life is purposeful and meaningful, the possession of quality relations with others, the capacity to manage effectively one's life, and a sense of self-determination. A decreased vulnerability to depression and anxiety has been demonstrated after well-being therapy in high-risk populations. There are important implications for the state/trait dichotomy in psychological well-being and for the concept of recovery in mood and anxiety disorders."}
{"_id": "ced401ba8a6418103386ced13734df2947acf149", "title": "MIMO for ATSC 3.0", "text": "This paper provides an overview of the optional multiple-input multiple-output (MIMO) antenna scheme adopted in ATSC 3.0 to improve robustness or increase capacity via additional spatial diversity and multiplexing by sending two data streams in a single radio frequency channel. Although it is not directly specified, it is expected in practice to use cross-polarized 2\u00d72 MIMO (i.e., horizontal and vertical polarization) to retain multiplexing capabilities in line-of-sight conditions. MIMO allows overcoming the channel capacity limit of single antenna wireless communications in a given channel bandwidth without any increase in the total transmission power. But in the U.S. MIMO can actually provide a larger comparative gain because it would be allowed to increase the total transmit power, by transmitting the nominal transmit power in each polarization. Hence, in addition to the MIMO gains (array, diversity, and spatial multiplexing), MIMO could exploit an additional 3 dB power gain. The MIMO scheme adopted in ATSC 3.0 re-uses the single-input single-output antenna baseline constellations, and hence it introduces the use of MIMO with non-uniform constellations."}
{"_id": "2de06861c42928581aec8ddd932e68ad450a2c0a", "title": "Adversarial support vector machine learning", "text": "Many learning tasks such as spam filtering and credit card fraud detection face an active adversary that tries to avoid detection. For learning problems that deal with an active adversary, it is important to model the adversary's attack strategy and develop robust learning models to mitigate the attack. These are the two objectives of this paper. We consider two attack models: a free-range attack model that permits arbitrary data corruption and a restrained attack model that anticipates more realistic attacks that a reasonable adversary would devise under penalties. We then develop optimal SVM learning strategies against the two attack models. The learning algorithms minimize the hinge loss while assuming the adversary is modifying data to maximize the loss. Experiments are performed on both artificial and real data sets. We demonstrate that optimal solutions may be overly pessimistic when the actual attacks are much weaker than expected. More important, we demonstrate that it is possible to develop a much more resilient SVM learning model while making loose assumptions on the data corruption models. When derived under the restrained attack model, our optimal SVM learning strategy provides more robust overall performance under a wide range of attack parameters."}
{"_id": "ef6e38c3fed47236d90b61e36cfea98990c07bbc", "title": "New Algorithms for SIMD Alignment", "text": "Optimizing programs for modern multiprocessor or vector platforms is a major important challenge for compilers today. In this work, we focus on one challenging aspect: the SIMD ALIGNMENT problem. Previously, only heuristics were used to solve this problem, without guarantees on the number of shifts in the obtained solution. We study two interesting and realistic special cases of the SIMD ALIGNMENT problem and present two novel and efficient algorithms that provide optimal solutions for these two cases. The new algorithms employ dynamic programming and a MIN-CUT/MAX-FLOW algorithm as subroutines. We also discuss the relation between the SIMD ALIGNMENT problem and the MULTIWAY CUT and NODE MULTIWAY CUT problems; and we show how to derive an approximated solution to the SIMD ALIGNMENT problem based on approximation algorithms to these two known problems."}
{"_id": "c0d6eaab60a57a5416490d5026a78770a5da3d57", "title": "Toward normative expert systems: Part I. The Pathfinder project.", "text": "Pathfinder is an expert system that assists surgical pathologists with the diagnosis of lymph-node diseases. The program is one of a growing number of normative expert systems that use probability and decision theory to acquire, represent, manipulate, and explain uncertain medical knowledge. In this article, we describe Pathfinder and our research in uncertain-reasoning paradigms that was stimulated by the development of the program. We discuss limitations with early decision-theoretic methods for reasoning under uncertainty and our initial attempts to use non-decision-theoretic methods. Then, we describe experimental and theoretical results that directed us to return to reasoning methods based in probability and decision theory."}
{"_id": "fb9fb159f620bf7520c009bb008c0d1579083d6d", "title": "Field and Torque Calculation and Transient Analysis in Variable Reluctance Machines", "text": "The variable reluctance machine (VRM), also known as the switched reluctance machine, is a class of electrical machines that exhibits many interesting characteristics. However, the satisfactory analytical formulation of its working equations (torque and voltage equations) in a saturated regime is not yet completely available for covering some of its operational aspects, such as the analysis of transient state. This paper provides such equations for the general case in which a VRM operates with a pronounced level of magnetic saturation. These equations together with help of lookup tables allow online access of the instant torque on the shaft as well as the flux linkage of a phase, since the computation times are reduced to a few microseconds, whereas in a numerical approach such as the finite-element method, would take a few minutes of CPU time, which is totally inappropriate for such investigations."}
{"_id": "b4ad98875181c6fd50d1921fe43cccdd383a067e", "title": "Interactions between auditory and dorsal premotor cortex during synchronization to musical rhythms", "text": "When listening to music, we often spontaneously synchronize our body movements to a rhythm's beat (e.g. tapping our feet). The goals of this study were to determine how features of a rhythm such as metric structure, can facilitate motor responses, and to elucidate the neural correlates of these auditory-motor interactions using fMRI. Five variants of an isochronous rhythm were created by increasing the contrast in sound amplitude between accented and unaccented tones, progressively highlighting the rhythm's metric structure. Subjects tapped in synchrony to these rhythms, and as metric saliency increased across the five levels, louder tones evoked longer tap durations with concomitant increases in the BOLD response at auditory and dorsal premotor cortices. The functional connectivity between these regions was also modulated by the stimulus manipulation. These results show that metric organization, as manipulated via intensity accentuation, modulates motor behavior and neural responses in auditory and dorsal premotor cortex. Auditory-motor interactions may take place at these regions with the dorsal premotor cortex interfacing sensory cues with temporally organized movement."}
{"_id": "417f12cd8dc5eec846b5f0f11ce5c58f5591c306", "title": "Vision Assisted SCARA Manipulator Design and Control Using Arduino and LabVIEW", "text": "Using vision systems with SCARA robotic manipulators is beneficial in many industrial applications specially in automated object picking and placing tasks. In this paper, the equations of forward and inverse kinematics of a SCARA manipulator are derived using D-H parameters convention. The mechanical structure, as well as the electrical, electronic and pneumatic components of the manipulator were designed and developed. The manipulator's control software, animation and simulation were developed using LabVIEW and Arduino. In addition to that, a vision system was developed and integrated with the manipulator system where a mobile phone camera was used to acquire images, then transmit them wirelessly to LabVIEW's Vision Assistant codes for processing and calibration. The developed vision system allowed detection and calculation of object positions within the manipulator's workspace. Finally, the manipulator's positioning errors were determined and discussed based on a number of repeatability experiments."}
{"_id": "6efcd0530cfee650a5317a59608dcc30e9c8a16e", "title": "Students' Misconceptions and Other Difficulties in Introductory Programming: A Literature Review", "text": "Efforts to improve computer science education are underway, and teachers of computer science are challenged in introductory programming courses to help learners develop their understanding of programming and computer science. Identifying and addressing students\u2019 misconceptions is a key part of a computer science teacher's competence. However, relevant research on this topic is not as fully developed in the computer science education field as it is in mathematics and science education. In this article, we first review relevant literature on general definitions of misconceptions and studies about students\u2019 misconceptions and other difficulties in introductory programming. Next, we investigate the factors that contribute to the difficulties. Finally, strategies and tools to address difficulties including misconceptions are discussed.\n Based on the review of literature, we found that students exhibit various misconceptions and other difficulties in syntactic knowledge, conceptual knowledge, and strategic knowledge. These difficulties experienced by students are related to many factors including unfamiliarity of syntax, natural language, math knowledge, inaccurate mental models, lack of strategies, programming environments, and teachers\u2019 knowledge and instruction. However, many sources of students\u2019 difficulties have connections with students\u2019 prior knowledge. To better understand and address students\u2019 misconceptions and other difficulties, various instructional approaches and tools have been developed. Nevertheless, the dissemination of these approaches and tools has been limited. Thus, first, we suggest enhancing the dissemination of existing tools and approaches and investigating their long-term effects. Second, we recommend that computing education research move beyond documenting misconceptions to address the development of students\u2019 (mis)conceptions by integrating conceptual change theories. Third, we believe that developing and enhancing instructors\u2019 pedagogical content knowledge (PCK), including their knowledge of students\u2019 misconceptions and ability to apply effective instructional approaches and tools to address students\u2019 difficulties, is vital to the success of teaching introductory programming."}
{"_id": "c7cbae07fd7d4a6cad6f987d1f8e1716a8b116b5", "title": "A 0.2\u20133.6-GHz 10-dBm B1dB 29-dBm IIP3 Tunable Filter for Transmit Leakage Suppression in SAW-Less 3G/4G FDD Receivers", "text": "A tunable N-path filter with more than one decade tuning range is demonstrated to address transmit (TX) leakage in a surface-acoustic-wave-less diversity path receiver for frequency-division duplexing cellular systems. The proposed filter simultaneously creates a reject band and passband anywhere from 0.2 to 3.6 GHz to suppress TX leakage while minimizing insertion loss of the receive signal and is implemented in 45-nm CMOS silicon-on-insulator. Measurements show the 3-dB bandwidth of the passband is greater than 80 MHz with an independently tunable reject band, providing the ultimate rejection is from 33 to 41 dB while the passband insertion loss is between 2.6 dB and 4.3 dB over the tuning range. The proposed filter offers 29-dBm out-of-band (OOB) third-order input-intercept power (IIP3) and 22-dBm in-band (IB) IIP3 with a 10-dBm blocker 1-dB compression point (B1dB). To the authors' knowledge, this is the highest B1dB, IB IIP3, and OOB IIP3 for a CMOS tunable filter."}
{"_id": "b94a965b18f632f1d9a5905915c7b722e916fd3b", "title": "A Li-Ion Battery Charger With Smooth Control Circuit and Built-In Resistance Compensator for Achieving Stable and Fast Charging", "text": "A built-in resistance compensator (BRC) technique is presented to speed up the charging time of a lithium-ion battery. A smooth control circuit (SCC) is proposed to ensure the stable transition from the constant-current (CC) to the constant-voltage (CV) stage. Due to the external parasitic resistance of the Li-ion battery-pack system, the charger circuit switches from the CC to the CV stage without fully charging the cell. The BRC technique dynamically estimates the external resistance to extend the CC stage. The experimental results show that the period of the CC stage can be extended to 40% of that of the original design. The charging time is effectively reduced."}
{"_id": "e6ebaa8b4292eab71cb6f51ed294babb33f13c70", "title": "Fuzzy-controlled Li-ion battery charge system with active state-of-charge controller", "text": "A fuzzy-controlled active state-of-charge controller (FC-ASCC) for improving the charging behavior of a lithium\u2013ion (Li\u2013ion) battery is proposed. The proposed FC-ASCC is designed to replace the general constant-voltage charging mode by two kinds of modes: sense and charge. A fuzzy-controlled algorithm is built with the predicted charger performance to program the charging trajectory faster and to remain the charge operation in a proposed safe-charge area (SCA). A modeling work is conducted for analyzing and describing the Li\u2013ion battery in charging process. A three-dimensional Y-mesh diagram for describing the charging trajectories of the proposed FC charger is simulated. A prototype of a Li\u2013ion battery charger with FC-ASCC is simulated and realized to assess the predicted charging performance. Experiment shows that the charging speed of the proposed FC charger compared with the general one increases about 23% and the charger can safely work in the SCA."}
{"_id": "10f76e733a49918fe3b0945ce10ff3be10bb1872", "title": "Advanced lithium ion battery charger", "text": "The requirements for state-of-charge and voltage control for lithium ion batteries are reviewed. Strategies for controlling the state-of-charge of the individual Li-ion cells that comprise a battery are described. The design and test results for several of these charge control strategies are presented."}
{"_id": "24eb46d0787176a54da61f08ce328e4d44a6a817", "title": "Accurate, Compact, and Power-Efficient Li-Ion Battery Charger Circuit", "text": "A novel, accurate, compact, and power-efficient lithium-ion (Li-Ion) battery charger designed to yield maximum capacity, cycle life, and therefore runtime is presented and experimentally verified. The proposed charger uses a diode to smoothly (i.e., continuously) transition between two high-gain linear feedback loops and control a single power MOS device, automatically charging the battery with constant current and then constant voltage. An adaptive power-efficient charging scheme in the form of a cascaded switching regulator supply ensures the voltage across the charging power-intensive pMOS remains low, thereby reducing its power losses and yielding up to 27% better overall power efficiency. An 83% power-efficient printed circuit board prototype was built and used to charge several Li-Ion batteries to within plusmn0.43% of their optimum full-charge voltage and therefore within a negligibly small fraction of their full capacity"}
{"_id": "f6d95dbfff265d28788f678a0264906a32cacf2d", "title": "Secure data transfer in IoT environment: Adopting both cryptography and steganography techniques", "text": "Owing to the unprecedented growth in computing power, electronics miniaturization and mobile and wireless network interconnections the internet has metamorphosed into Internet of Things (IoT) which refers to next stage of the information revolution whose context involves billions of individuals and devices interconnected to facilitate exchange of huge volume of data and information from diverse locations, demanding the consequent necessity for smart data aggregation followed by an increased obligation to index, hoard and process data with higher efficiency and effectiveness. But along with its myriad offered benefits and applications, emerges a novel complexity aspect in terms of many inherent hassles primarily security concerns during data transfer phases in IoT covering mostly data confidentiality and integrity features. Thus to enhance safe data transfer in smart IoT environment, a security scheme is proposed in this paper which addresses both the aforesaid issues, employing an integrated approach of lightweight cryptography and steganography (Simple LSB Substitution) technique during data transfer between IoT device and Home Server and adoption of combined approach of cryptography and steganography (Proposed MSB-LSB Substitution) technique during data transfer phases between Home Server and Cloud Servers."}
{"_id": "57c4bedce0dc991af1b67c795242303e43889962", "title": "Providing device integration with OPC UA", "text": "OPC UA allows the company wide access and integration of information. One important part of this information system is the automation system and its automation devices. The important issue for presenting devices in such an enterprise system is standardization of device representation. This document provides a proposal on how to standardize the presentation of devices based on a comparison of existing device integration technologies."}
{"_id": "e3bf66adcc1e338f3ca8d669876a4f36c5e68c2a", "title": "Seizure classification in EEG signals utilizing Hilbert-Huang transform", "text": "BACKGROUND\nClassification method capable of recognizing abnormal activities of the brain functionality are either brain imaging or brain signal analysis. The abnormal activity of interest in this study is characterized by a disturbance caused by changes in neuronal electrochemical activity that results in abnormal synchronous discharges. The method aims at helping physicians discriminate between healthy and seizure electroencephalographic (EEG) signals.\n\n\nMETHOD\nDiscrimination in this work is achieved by analyzing EEG signals obtained from freely accessible databases. MATLAB has been used to implement and test the proposed classification algorithm. The analysis in question presents a classification of normal and ictal activities using a feature relied on Hilbert-Huang Transform. Through this method, information related to the intrinsic functions contained in the EEG signal has been extracted to track the local amplitude and the frequency of the signal. Based on this local information, weighted frequencies are calculated and a comparison between ictal and seizure-free determinant intrinsic functions is then performed. Methods of comparison used are the t-test and the Euclidean clustering.\n\n\nRESULTS\nThe t-test results in a P-value < 0.02 and the clustering leads to accurate (94%) and specific (96%) results. The proposed method is also contrasted against the Multivariate Empirical Mode Decomposition that reaches 80% accuracy. Comparison results strengthen the contribution of this paper not only from the accuracy point of view but also with respect to its fast response and ease to use.\n\n\nCONCLUSION\nAn original tool for EEG signal processing giving physicians the possibility to diagnose brain functionality abnormalities is presented in this paper. The proposed system bears the potential of providing several credible benefits such as fast diagnosis, high accuracy, good sensitivity and specificity, time saving and user friendly. Furthermore, the classification of mode mixing can be achieved using the extracted instantaneous information of every IMF, but it would be most likely a hard task if only the average value is used. Extra benefits of this proposed system include low cost, and ease of interface. All of that indicate the usefulness of the tool and its use as an efficient diagnostic tool."}
{"_id": "a0fefc14c3dd10a031c9567fa136a64146eda925", "title": "Transfer Learning in Multi-Agent Reinforcement Learning Domains", "text": "Transfer learning refers to the process of reusing knowledge from past tasks in order to speed up the learning procedure in new tasks. In reinforcement learning, where agents often require a considerable amount of training, transfer learning comprises a suitable solution for speeding up learning. Transfer learning methods have primarily been applied in single-agent reinforcement learning algorithms, while no prior work has addressed this issue in the case of multi-agent learning. This work proposes a novel method for transfer learning in multi-agent reinforcement learning domains. We test the proposed approach in a multiagent domain under various setups. The results demonstrate that the method helps to reduce the learning time and increase the asymptotic performance."}
{"_id": "e19d5b44450255319fe03a599c633c8361f758c4", "title": "Lexicon-based sentiment analysis: Comparative evaluation of six sentiment lexicons", "text": "This paper introduces a new general-purpose sentiment lexicon called WKWSCI Sentiment Lexicon, and compares it with five existing lexicons: Hu & Liu Opinion Lexicon, MPQA Subjectivity Lexicon, General Inquirer, NRC Word-Sentiment Association Lexicon and SO-CAL lexicon. The effectiveness of the sentiment lexicons for sentiment categorization at the document-level and sentence-level was evaluated using an Amazon product review dataset and a news headlines dataset. WKWSCI, MPQA, Hu & Liu and SO-CAL lexicons are equally good for product review sentiment categorization, obtaining accuracy rates of 75% to 77% when appropriate weights are used for different categories of sentiment words. However, when a training corpus is not available, Hu & Liu obtained the best accuracy with a simple-minded approach of counting positive and negative words, for both document-level and sentencelevel sentiment categorization. The WKWSCI lexicon obtained the best accuracy of 69% on the news headlines sentiment categorization task, and the sentiment strength values obtained a Pearson correlation of 0.57 with human assigned sentiment values. It is recommended that the Hu & Liu lexicon be used for product review texts, and the WKWSCI lexicon for non-review texts."}
{"_id": "48db40abd81dae6e298d3dfd3d344edafd882e12", "title": "Bluetooth positioning using RSSI and triangulation methods", "text": "Location based services are the hottest applications on mobile devices nowadays and the growth is continuing. Indoor wireless positioning is the key technology to enable location based services to work well indoors, where GPS normally could not work. Bluetooth has been widely used in mobile devices like phone, PAD etc. therefore Bluetooth based indoor positioning has great market potential. Radio Signal Strength (RSS) is a key parameter for wireless positioning. New Bluetooth standard (since version 2.1) enables RSS to be discovered without time consuming pre-connection. In this research, general wireless positioning technologies are firstly analysed. Then RSS based Bluetooth positioning using the new feature is studied. The mathematical model is established to analyse the relation between RSS and the distance between two Bluetooth devices. Three distance-based algorithms are used for Bluetooth positioning: Least Square Estimation, Three-border and Centroid Method. Comparison results are analysed and the ways to improve the positioning accuracy are discussed."}
{"_id": "ba06127976efae136f6d870341535f75dff22d2d", "title": "Parent and family impact of autism spectrum disorders: a review and proposed model for intervention evaluation.", "text": "Raising a child with an autism spectrum disorder (ASD) can be an overwhelming experience for parents and families. The pervasive and severe deficits often present in children with ASD are associated with a plethora of difficulties in caregivers, including decreased parenting efficacy, increased parenting stress, and an increase in mental and physical health problems compared with parents of both typically developing children and children with other developmental disorders. In addition to significant financial strain and time pressures, high rates of divorce and lower overall family well-being highlight the burden that having a child with an ASD can place on families. These parent and family effects reciprocally and negatively impact the diagnosed child and can even serve to diminish the positive effects of intervention. However, most interventions for ASD are evaluated only in terms of child outcomes, ignoring parent and family factors that may have an influence on both the immediate and long-term effects of therapy. It cannot be assumed that even significant improvements in the diagnosed child will ameliorate the parent and family distress already present, especially as the time and expense of intervention can add further family disruption. Thus, a new model of intervention evaluation is proposed, which incorporates these factors and better captures the transactional nature of these relationships."}
{"_id": "61556fee73f0804fb0148ec93df8de8e595a3584", "title": "Marijuana Use in the Elderly: Implications and Considerations.", "text": "OBJECTIVE\nThis article reviews the literature on the use of marijuana in the elderly. Pharmacists play an important role in the management of medications including drug use of potentially illegal drugs, including marijuana. The use of both recreational and medical marijuana has grown exponentially in the general population, including in older adults. As of 2017, marijuana for medical use is legal in 26 states and the District of Columbia.\n\n\nDATA SOURCES\nPubMed and Internet search using the following terms: marijuana, cannabis, delta-9-tetrhydrocannabinol (THC), cannabidiol, cannabinoid, elderly, geriatric, and pharmacology. Findings are based on data collected from older adults (65 years of age and older) through August 2016.\n\n\nSTUDY SELECTION\nBecause of the lack of research and funding, reputable literature on the impact of marijuana on older adults is scarce. The available evidence suggests that elderly individuals should be cautious when consuming marijuana, especially those who have certain comorbid conditions.\n\n\nDATA EXTRACTION\nThe geriatric population has a higher likelihood of having multiple comorbidities and is subject to polypharmacy. Marijuana use, medicinal or recreational, complicates the picture with additive central nervous system side effects.\n\n\nDATA SYNTHESIS\nThis article reviews the growing information on marijuana use and discusses issues to consider and cautions in usage that can apply to day-to-day clinical practice and geriatric care. The role of the pharmacist in educating patients, caregivers, and health care providers is expanding with the growing number of states that have legalized medical marijuana (26 states and the District of Columbia, as of 2017). Important education points including drug-drug interactions, drug-disease interactions, and signs and symptoms of acute overdose should be considered.\n\n\nCONCLUSION\nWith this review, pharmacists will be informed on recommendations on the use of marijuana in the older adult. Monitoring of therapy, as well as adverse effects, will be reviewed, including some legal issues and challenges."}
{"_id": "94fb742ff47744d8104ac4451a9a9521fc58bd72", "title": "\u2018Internet Addiction\u2019: A Critical Review", "text": "It has been alleged by some academics that excessive Internet use can be pathological and addictive. This paper reviews what is known from the empirical literature on \u2018Internet addiction\u2019 and its derivatives (e.g., Internet Addiction Disorder, Pathological Internet Use, etc.) and assesses to what extent it exists. Empirical research into \u2018Internet addiction\u2019 can roughly be divided into five areas: (1) survey studies that compare excessive Internet users with non-excessive users, (2) survey studies that have examined vulnerable groups of excessive Internet use, most notably students, (3) studies that examine the psychometric properties of excessive Internet use, (4) case studies of excessive Internet users and treatment case studies, and (5) correlational studies examining the relationship of excessive Internet use with other behaviours (e.g., psychiatric problems, depression, self-esteem, etc.). Each of these areas is reviewed. It is concluded that if \u2018Internet addiction\u2019 does indeed exist, it affects a relatively small percentage of the online population. However, exactly what it is on the Internet that they are addicted to still remains unclear."}
{"_id": "c27701de54a6a90d3c4d57e75e3520bbbcd3cc91", "title": "Title Design and realization of a smart battery management system", "text": "Battery management system (BMS) emerges a decisive system component in battery-powered applications, such as (hybrid) electric vehicles and portable devices. However, due to the inaccurate parameter estimation of aged battery cells and multi-cell batteries, current BMSs cannot control batteries optimally, and therefore affect the usability of products. In this paper, we proposed a smart management system for multi-cell batteries, and discussed the development of our research study in three directions: i) improving the effectiveness of battery monitoring and current sensing, ii) modeling the battery aging process, and iii) designing a self-healing circuit system to compensate performance variations due to aging and other variations."}
{"_id": "6eb439a90456878df8b7a9c7501ab0eb3dc66eb3", "title": "ModSpec: An open, flexible specification framework for multi-domain device modelling", "text": "We describe ModSpec, a MATLAB/Octave based specification format suitable for modelling devices across a wide variety of application domains, including circuits, optics, fluidics and biology. The ModSpec format and associated API are centered around describing the nonlinear differential equations at the core of any device model. The format is open, general and easy to use, and is supported by toolchains that translate and automatically differentiate models, set up equations for systems of interacting devices, and provide simulation facilities. We illustrate the use of ModSpec for modelling semiconductor, photovoltaic, fluidic and neuronal devices and systems."}
{"_id": "029cf8445332eb9ac80d9e646cbda859fa0080a0", "title": "An efficient IP address lookup algorithm based on a small balanced tree using entry reduction", "text": "Due to a tremendous increase in internet traffic, backbone routers must have the capability to forward massive incoming packets at several gigabits per second. IP address lookup is one of the most challenging tasks for high-speed packet forwarding. Some high-end routers have been implemented with hardware parallelism using ternary content addressable memory (TCAM). However, TCAM is much more expensive in terms of circuit complexity as well as power consumption. Therefore, efficient algorithmic solutions are essentially required to be implemented using network processors as low cost solutions. Among the state-of-the-art algorithms for IP address lookup, a binary search based on a balanced tree is effective in providing a low-cost solution. In order to construct a balanced search tree, the prefixes with the nesting relationship should be converted into completely disjointed prefixes. A leaf-pushing technique is very useful to eliminate the nesting relationship among prefixes [V. Srinivasan, G. Varghese, Fast address lookups using controlled prefix expansion, ACM Transactions on Computer Systems 17 (1) (1999) 1\u201340]. However, it creates duplicate prefixes, thus expanding the search tree. This paper proposes an efficient IP address lookup algorithm based on a small balanced tree using entry reduction. The leaf-pushing technique is used for creating the completely disjointed entries. In the leaf-pushed prefixes, there are numerous pairs of adjacent prefixes with similarities in prefix strings and output ports. The number of entries can be significantly reduced by the use of a new entry reduction method which merges pairs with these similar prefixes. After sorting the reduced disjointed entries, a small balanced tree is constructed with a very small node size. Based on this small balanced tree, a native binary search can be effectively used in address lookup issue. In addition, we propose a new multi-way search algorithm to improve a binary search for IPv4 address lookup. As a result, the proposed algorithms offer excellent lookup performance along with reduced memory requirements. Besides, these provide good scalability for large amounts of routing data and for the address migration toward IPv6. Using both various IPv4 and IPv6 routing data, the performance evaluation results demonstrate that the proposed algorithms have better performance in terms of lookup speed, memory requirement and scalability for the growth of entries and IPv6, as compared with other algorithms based on a binary search. 2011 Elsevier B.V. All rights reserved. . All rights reserved. x: +82 2 313 8053."}
{"_id": "1ff2ed03f73b701eb5b39688b72c98a9f2c6f264", "title": "Happiness Runs in a Circular Motion : Evidence for a Positive Feedback Loop between Prosocial Spending and Happiness", "text": "We examine whether a positive feedback loop exists between spending money on others (i.e. prosocial spending) and happiness. Participants recalled a previous purchase made for either themselves or someone else and then reported their happiness. Afterward, participants chose whether to spend a monetary windfall on themselves or someone else. Participants assigned to recall a purchase made for someone else reported feeling significantly happier immediately after this recollection; most importantly, the happier participants felt, the more likely they were to choose to spend a windfall on someone else in the near future. Thus, by providing initial evidence for a positive feedback loop between prosocial spending and well-being, these data offer one potential path to sustainable happiness: prosocial spending increases happiness which in turn encourages prosocial spending."}
{"_id": "2b5f846701c6f64cf07425efce604bea709133ae", "title": "Accessory Axillary Breast Tissues", "text": "Dr. Emsen recently published a case report describing a patient with accessory axillary breast tissue treated with ultrasound-guided liposuction [3]. We recently treated a similar case of bilateral axillary breast tissue, and we offer our contribution to the debate (Figs. 1, 2). Accessory axillary breast tissues are common embriogenic alterations found in 1% to 6% of women that often manifest bilaterally [1,2]. Ectopic breast tissue is at risk for benign and malignant breast disease with reported diagnoses of fibrocystic disease, mastitis, fibroadenoma, atypical hyperplasia, and carcinoma [4 6]. In a review of 82 ectopic breast cancer cases, Marshall et al. [5] found an increased incidence of cancer in aberrant breast tissue. Ectopic cancers located in the axilla seemed to present with more extensive disease at an earlier age, suggesting that aberrant breast tissue may be at increased risk for malignant change. According to this view, as for any ectopic mass, tissue diagnosis is required and mandatory [4]. We recently treated a 43-year-old woman who was asymptomatic until her second pregnancy (2004), when two bilateral masses appeared in axillae and began to increase in size. Six months after parturition, a clear fluid was continuously secreted throughout few small pores on the skin. Ultrasounddiagnosed breast tissue and excision with histologic examination confirmed the radiologic findings. In this case, we preoperatively contemplated, as Dr. Emsen did, less invasive and aesthetic methods such as liposuction. However, a collegial discussion with our pathologist and an analysis of literature data suggested en bloc removal for a more precise analysis not only of cellular, but also of histologic characteristics of suspected mammary ducts. We believe that liposuction, more traumatic on tissues than simple excision, does not allow for this. Furthermore, although liposuction leaves minimal incision scars, the Fig. 1. A 43-year-old woman. Right axilla with accessory breast tissue."}
{"_id": "1dcf08c37fe2e8e78d3f1857547a965a0ac29526", "title": "2D ear classification based on unsupervised clustering", "text": "Ear classification refers to the process by which an input ear image is assigned to one of several pre-defined classes based on a set of features extracted from the image. In the context of large-scale ear identification, where the input probe image has to be compared against a large set of gallery images in order to locate a matching identity, classification can be used to restrict the matching process to only those images in the gallery that belong to the same class as the probe. In this work, we utilize an unsupervised clustering scheme to partition ear images into multiple classes (i.e., clusters), with each class being denoted by a prototype or a centroid. A given ear image is assigned class labels (i.e., cluster indices) that correspond to the clusters whose centroids are closest to it. We compare the classification performance of three different texture descriptors, viz. Histograms of Oriented Gradients, uniform Local Binary Patterns and Local Phase Quantization. Extensive experiments using three different ear datasets suggest that the Local Phase Quantization texture descriptor scheme along with PCA for dimensionality reduction results in a 96.89% hit rate (i.e., 3.11% pre-selection error rate) with a penetration rate of 32.08%. Further, we demonstrate that the hit rate improves to 99.01% with a penetration rate of 47.10% when a multi-cluster search strategy is employed."}
{"_id": "1f333ade9620ad695556353d5a052f1c71ae297b", "title": "Beyond the PDP-11: Architectural Support for a Memory-Safe C Abstract Machine", "text": "We propose a new memory-safe interpretation of the C abstract machine that provides stronger protection to benefit security and debugging. Despite ambiguities in the specification intended to provide implementation flexibility, contemporary implementations of C have converged on a memory model similar to the PDP-11, the original target for C. This model lacks support for memory safety despite well-documented impacts on security and reliability.\n Attempts to change this model are often hampered by assumptions embedded in a large body of existing C code, dating back to the memory model exposed by the original C compiler for the PDP-11. Our experience with attempting to implement a memory-safe variant of C on the CHERI experimental microprocessor led us to identify a number of problematic idioms. We describe these as well as their interaction with existing memory safety schemes and the assumptions that they make beyond the requirements of the C specification. Finally, we refine the CHERI ISA and abstract model for C, by combining elements of the CHERI capability model and fat pointers, and present a softcore CPU that implements a C abstract machine that can run legacy C code with strong memory protection guarantees."}
{"_id": "7a78d02931e0309b39f7ac1e1e21b7bdabda2bf2", "title": "Design principles for sensemaking support systems in environmental sustainability transformations", "text": "Correspondence: Stefan Seidel, University of Liechtenstein, Fuerst-Franz-Josef-Strasse 23, 9490 Vaduz, Liechtenstein. Tel: +423 265 1303; E-mail: stefan.seidel@uni.li Abstract This paper reports on the results of a design science research (DSR) study that develops design principles for information systems (IS) that support organisational sensemaking in environmental sustainability transformations. We identify initial design principles based on salient affordances required in organisational sensemaking and revise them through three rounds of developing, demonstrating and evaluating a prototypical implementation. Through our analysis, we learn how IS can support essential sensemaking practices in environmental sustainability transformations, including experiencing disruptive ambiguity through the provision of environmental data, noticing and bracketing, engaging in an open and inclusive communication and presuming potential alternative environmentally responsible actions. We make two key contributions: First, we provide a set of theory-inspired design principles for IS that support sensemaking in sustainability transformations, and revise them empirically using a DSR method. Second, we show how the concept of affordances can be used in DSR to investigate how IS can support organisational practices. While our findings are based on the investigation of the substantive context of environmental sustainability transformation, we suggest that they might be applicable in a broader set of contexts of organisational sensemaking and thus for a broader class of sensemaking support systems. European Journal of Information Systems (2017). doi:10.1057/s41303-017-0039-0"}
{"_id": "47824362bc0aa94b6a52c1412986ce633d70e52f", "title": "Transforming Web Tables to a Relational Database", "text": "HTML tables represent a significant fraction of web data. The often complex headers of such tables are determined accurately using their indexing property. Isolated headers are factored to extract category hierarchies. Web tables are then transformed into a canonical form and imported into a relational database. The proposed processing allows for the formulation of arbitrary SQL queries over the collection of induced relational tables. Keywords\u2014table segmentation; Wang categories; header paths; relational table SQL queries"}
{"_id": "a8c28b6172caf25a98f55dfc32d1f829d02ba84f", "title": "MULTIPLE-ATTRIBUTE TEXT REWRITING", "text": "The dominant approach to unsupervised \u201cstyle transfer\u201d in text is based on the idea of learning a latent representation, which is independent of the attributes specifying its \u201cstyle\u201d. In this paper, we show that this condition is not necessary and is not always met in practice, even with domain adversarial training that explicitly aims at learning such disentangled representations. We thus propose a new model that controls several factors of variation in textual data where this condition on disentanglement is replaced with a simpler mechanism based on back-translation. Our method allows control over multiple attributes, like gender, sentiment, product type, etc., and a more fine-grained control on the trade-off between content preservation and change of style with a pooling operator in the latent space. Our experiments demonstrate that the fully entangled model produces better generations, even when tested on new and more challenging benchmarks comprising reviews with multiple sentences and multiple attributes."}
{"_id": "14752aec362de9e3e64daeb38b79fcd09f32b298", "title": "Activity Recognition and Abnormal Behaviour Detection with Recurrent Neural Networks", "text": "In this paper, we study the problem of activity recognition and abnormal behaviour detection for elderly people with dementia. Very few studies have attempted to address this problem presumably because of the lack of experimental data in the context of dementia care. In particular, the paper investigates three variants of Recurrent Neural Networks (RNNs): Vanilla RNNs (VRNN), Long Short Term RNNs (LSTM) and Gated Recurrent Unit RNNs (GRU). Here activity recognition is considered as a sequence labelling problem, while abnormal behaviour is flagged based on the deviation from normal patterns. To provide an adequate discussion of the performance of RNNs in this context, we compare them against the state-of-art methods such as Support Vector Machines (SVMs), Na\u0131\u0308ve Bayes (NB), Hidden Markov Models (HMMs), Hidden Semi-Markov Models (HSMM) and Conditional Random Fields (CRFs). The results obtained indicate that RNNs are competitive with those state-of-art methods. Moreover, the paper presents a methodology for generating synthetic data reflecting on some behaviours of people with dementia given the difficulty of obtaining real-world data. c 6 t r . ublis e ls i r . . i ilit f t f re ce r ra hairs."}
{"_id": "18f5593d6082b1ba3c02cf64d64eb9d969db3e6b", "title": "Community Evaluation and Exchange of Word Vectors at wordvectors.org", "text": "Vector space word representations are useful for many natural language processing applications. The diversity of techniques for computing vector representations and the large number of evaluation benchmarks makes reliable comparison a tedious task both for researchers developing new vector space models and for those wishing to use them. We present a website and suite of offline tools that that facilitate evaluation of word vectors on standard lexical semantics benchmarks and permit exchange and archival by users who wish to find good vectors for their applications. The system is accessible at: www.wordvectors.org."}
{"_id": "07e880c468301a9d5b85718eee029a3cba21e5e0", "title": "Central pattern generators for locomotion control in animals and robots: A review", "text": "The problem of controlling locomotion is an area in which neuroscience and robotics can fruitfully interact. In this article, I will review research carried out on locomotor central pattern generators (CPGs), i.e. neural circuits capable of producing coordinated patterns of high-dimensional rhythmic output signals while receiving only simple, low-dimensional, input signals. The review will first cover neurobiological observations concerning locomotor CPGs and their numerical modelling, with a special focus on vertebrates. It will then cover how CPG models implemented as neural networks or systems of coupled oscillators can be used in robotics for controlling the locomotion of articulated robots. The review also presents how robots can be used as scientific tools to obtain a better understanding of the functioning of biological CPGs. Finally, various methods for designing CPGs to control specific modes of locomotion will be briefly reviewed. In this process, I will discuss different types of CPG models, the pros and cons of using CPGs with robots, and the pros and cons of using robots as scientific tools. Open research topics both in biology and in robotics will also be discussed."}
{"_id": "28d6f4e25dabf95d6e04b6f0b468087d754f5fea", "title": "Inverse Kinematics for a Point-Foot Quadruped Robot with Dynamic Redundancy Resolution", "text": "In this work we examine the control of center of mass and swing leg trajectories in LittleDog, a point-foot quadruped robot. It is not clear how to formulate a function to compute forward kinematics of the center of mass of the robot as a function of actuated joint angles because point-foot walkers have no direct actuation between the feet and the ground. Nevertheless, we show that a whole-body Jacobian exists and is well defined when at least three of the feet are on the ground. Also, the typical approach of work-space centering for redundancy resolution causes destabilizing motions when executing fast motions. An alternative redundancy resolution optimization is proposed which projects single-leg inverse kinematic solutions into the nullspace. This hybrid approach seems to minimize 1) unnecessary rotation of the body, 2) twisting of the stance legs, and 3) whole-body involvement in achieving a step leg trajectory. In simulation, this control allows the robot to perform significantly more dynamic behaviors while maintaining stability."}
{"_id": "3d4cea130fd9daaf49f4db49f9cab70e4e075042", "title": "Inverse dynamics control of floating base systems using orthogonal decomposition", "text": "Model-based control methods can be used to enable fast, dexterous, and compliant motion of robots without sacrificing control accuracy. However, implementing such techniques on floating base robots, e.g., humanoids and legged systems, is non-trivial due to under-actuation, dynamically changing constraints from the environment, and potentially closed loop kinematics. In this paper, we show how to compute the analytically correct inverse dynamics torques for model-based control of sufficiently constrained floating base rigid-body systems, such as humanoid robots with one or two feet in contact with the environment. While our previous inverse dynamics approach relied on an estimation of contact forces to compute an approximate inverse dynamics solution, here we present an analytically correct solution by using an orthogonal decomposition to project the robot dynamics onto a reduced dimensional space, independent of contact forces. We demonstrate the feasibility and robustness of our approach on a simulated floating base bipedal humanoid robot and an actual robot dog locomoting over rough terrain."}
{"_id": "6a316e0d44e35a55c41a442b3f0d0eb1f9d4d0ca", "title": "The 3D linear inverted pendulum mode: a simple modeling for a biped walking pattern generation", "text": "F or D walkingcontr ol of a biped robot we analyze the dynamics of a three dimensional inverted pendu lum in which motion is constr aine d to move along an arbitr arily de ned plane This analysis leads us a sim ple line ar dynamics the Thr ee Dimensional Linear In verte d Pendulum Mode D LIPM Geometric nature of trajectories under the D LIPM and a method for walking pattern gener ationare discusse d A simula tion r esult of a walking contr ol using a d o f biped robot model is also shown"}
{"_id": "6c267aafaceb9de1f1af22f6f566ba1d897054e4", "title": "Capture Point: A Step toward Humanoid Push Recovery", "text": "It is known that for a large magnitude push a human or a humanoid robot must take a step to avoid a fall. Despite some scattered results, a principled approach towards \"when and where to take a step\" has not yet emerged. Towards this goal, we present methods for computing capture points and the capture region, the region on the ground where a humanoid must step to in order to come to a complete stop. The intersection between the capture region and the base of support determines which strategy the robot should adopt to successfully stop in a given situation. Computing the capture region for a humanoid, in general, is very difficult. However, with simple models of walking, computation of the capture region is simplified. We extend the well-known linear inverted pendulum model to include a flywheel body and show how to compute exact solutions of the capture region for this model. Adding rotational inertia enables the humanoid to control its centroidal angular momentum, much like the way human beings do, significantly enlarging the capture region. We present simulations of a simple planar biped that can recover balance after a push by stepping to the capture region and using internal angular momentum. Ongoing work involves applying the solution from the simple model as an approximate solution to more complex simulations of bipedal walking, including a 3D biped with distributed mass."}
{"_id": "36a8cc9b9f2f64e883ec44d7252184aa61c3dd49", "title": "Zero is Not Enough : On The Lower Limit of Agent Intelligence For Continuous Double Auction Markets \u2020", "text": "agent continuous double auction, trade Gode and Sunder's (1993) results from using \"zero-intelligence\" (ZI) traders, that act randomly within a structured market, appear to imply that convergence to the theoretical equilibrium price in continuous double-auction markets is determined more by market structure than by the intelligence of the traders in that market. However, it is demonstrated here that the average transaction prices of ZI traders can vary significantly from the theoretical equilibrium value when the market supply and demand are asymmetric, and that the degree of difference from equilibrium is predictable from a priori probabilistic analysis. In this sense, it is shown here that Gode and Sunder's results are artefacts of their experimental regime. Following this, 'zero-intelligence-plus' (ZIP) traders are introduced: like ZI traders, these simple agents make stochastic bids. Unlike ZI traders, they employ an elementary form of machine learning. Groups of ZIP traders interacting in experimental markets similar to those used by Smith (1962) and Gode and Sunder (1993) are demonstrated, and we show that the performance of ZIP traders is significantly closer to the human data than is the performance of Gode and Sunder's ZI traders. Because ZI traders are too simple, we offer ZIP traders as a slight upward revision of the intelligence required to achieve equilibrium in continuous double auction markets."}
{"_id": "da90d0265ef675763e8b51efb3c2efba8f809c54", "title": "Durable write cache in flash memory SSD for relational and NoSQL databases", "text": "In order to meet the stringent requirements of low latency as well as high throughput, web service providers with large data centers have been replacing magnetic disk drives with flash memory solid-state drives (SSDs). They commonly use relational and NoSQL database engines to manage OLTP workloads in the warehouse-scale computing environments. These modern database engines rely heavily on redundant writes and frequent cache flushes to guarantee the atomicity and durability of transactional updates. This has become a serious bottleneck of performance in both relational and NoSQL database engines. This paper presents a new SSD prototype called DuraSSD equipped with tantalum capacitors. The tantalum capacitors make the device cache inside DuraSSD durable, and additional firmware features of DuraSSD take advantage of the durable cache to support the atomicity and durability of page writes. It is the first time that a flash memory SSD with durable cache has been used to achieve an order of magnitude improvement in transaction throughput without compromising the atomicity and durability. Considering that the simple capacitors increase the total cost of an SSD no more than one percent, DuraSSD clearly provides a cost-effective means for transactional support. DuraSSD is also expected to alleviate the problem of high tail latency by minimizing write stalls."}
{"_id": "87ae705a3ac98bb485e55a93b94855d66a220f14", "title": "Multi-Task Learning For Parsing The Alexa Meaning Representation Language", "text": "The Alexa Meaning Representation Language (AMRL) is a compositional graph-based semantic representation that includes fine-grained types, properties, actions, and roles and can represent a wide variety of spoken language. AMRL increases the ability of virtual assistants to represent more complex requests, including logical and conditional statements as well as ones with nested clauses. Due to this representational capacity, the acquisition of large scale data resources is challenging, which limits the accuracy of resulting models. This paper has two primary contributions. The first contribution is a linearization of the AMRL parses that aligns it to a related task of spoken language understanding (SLU) and a deep neural network architecture that uses multi-task learning to predict AMRL fine-grained types, properties and intents. The second contribution is a deep neural network architecture that leverages embeddings from the large-scale data resources that are available for SLU. When combined, these contributions enable the training of accurate models of AMRL parsing, even in the presence of data sparsity. The proposed models, which use the linearized AMRL parse, multi-task learning, residual connections and embeddings from SLU, decrease the error rates in the prediction of the full AMRL parse by 3.56%"}
{"_id": "6e3660e8efb976c916e61fe17ea90aea7d13aac4", "title": "BIER \u2014 Boosting Independent Embeddings Robustly", "text": "Learning similarity functions between image pairs with deep neural networks yields highly correlated activations of large embeddings. In this work, we show how to improve the robustness of embeddings by exploiting independence in ensembles. We divide the last embedding layer of a deep network into an embedding ensemble and formulate training this ensemble as an online gradient boosting problem. Each learner receives a reweighted training sample from the previous learners. This leverages large embedding sizes more effectively by significantly reducing correlation of the embedding and consequently increases retrieval accuracy of the embedding. Our method does not introduce any additional parameters and works with any differentiable loss function. We evaluate our metric learning method on image retrieval tasks and show that it improves over state-ofthe- art methods on the CUB-200-2011, Cars-196, Stanford Online Products, In-Shop Clothes Retrieval and VehicleID datasets by a significant margin."}
{"_id": "b6c86e3e0619497e79009451307ec632215bd408", "title": "MAGIX: Model Agnostic Globally Interpretable Explanations", "text": "Explaining the behavior of a black box machine learning model at the instance level is useful for building trust. However, it is also important to understand how the model behaves globally. Such an understanding provides insight into both the data on which the model was trained and the patterns that it learned. We present here an approach that learns if-then rules to globally explain the behavior of black box machine learning models that have been used to solve classification problems. The approach works by first extracting conditions that were important at the instance level and then evolving rules through a genetic algorithm with an appropriate fitness function. Collectively, these rules represent the patterns followed by the model for decisioning and are useful for understanding its behavior. We demonstrate the validity and usefulness of the approach by interpreting black box models created using publicly available data sets as well as a private digital marketing data set."}
{"_id": "1d25ecee0ac6348a3d4c2dafe7b2cfaf6a62b561", "title": "Integrating occlusion culling with parallel LOD for rendering complex 3D environments on GPU", "text": "In many research domains, such as mechanical engineering, game development and virtual reality, a typical output model is usually produced in a multi-object manner for efficient data management. To have a complete description of the whole model, tens of thousands of, or even millions of, such objects are created, which makes the entire dataset exceptionally complex. Consequently, visualizing the model becomes a computationally intensive process that impedes a real-time rendering and interaction."}
{"_id": "49057d3b250776e4dd45bc2f8abc1ed1c9da32b2", "title": "New Frontiers in Formal Methods: Learning, Cyber-Physical Systems, Education, and Beyond", "text": "We survey promising directions for research in the area of formal methods and its applications, including fundamental questions about the combination of formal methods and machine learning, and applications to cyberphysical systems and education."}
{"_id": "1c8c10d28359497dae24eb0d1391ddf1da599c6f", "title": "Electrick: Low-Cost Touch Sensing Using Electric Field Tomography", "text": "Current touch input technologies are best suited for small and flat applications, such as smartphones, tablets and kiosks. In general, they are too expensive to scale to large surfaces, such as walls and furniture, and cannot provide input on objects having irregular and complex geometries, such as tools and toys. We introduce Electrick, a low-cost and versatile sensing technique that enables touch input on a wide variety of objects and surfaces, whether small or large, flat or irregular. This is achieved by using electric field tomography in concert with an electrically conductive material, which can be easily and cheaply added to objects and surfaces. We show that our technique is compatible with commonplace manufacturing methods, such as spray/brush coating, vacuum forming, and casting/molding enabling a wide range of possible uses and outputs. Our technique can also bring touch interactivity to rapidly fabricated objects, including those that are laser cut or 3D printed. Through a series of studies and illustrative example uses, we show that Electrick can enable new interactive opportunities on a diverse set of objects and surfaces that were previously static."}
{"_id": "a4fe3a3bede8b0b648de7c9025d28a620a34581b", "title": "Treacher collins syndrome.", "text": "Treacher Collins syndrome is a genetic disorder resulting in congenital craniofacial malformation. Patients typically present with downslanting palpebral fissures, lower eyelid colobomas, microtia, and malar and mandibular hypoplasia. This autosomal dominant disorder has a variable degree of phenotypic expression, and patients have no associated developmental delay or neurologic disease. Care for these patients requires a multidisciplinary team from birth through adulthood. Proper planning, counseling and surgical techniques are essential for optimizing patient outcomes. Here the authors review the features, genetics, and treatment of Treacher Collins syndrome."}
{"_id": "ed4df64ac5a2829c2091ecb5b1be9068c9831497", "title": "Generating entangled photon pairs in a parallel crystal geometry", "text": "We present recent findings towards developing brighter entangled photon sources in critically phase matched (CPM) nonlinear crystals. Specifically, we use type-I collinear phase matching at non-degenerate wavelengths in parallel \u03b2-Barium Borate (BBO) crystals to generate pairs of polarization entangled photons for free-space quantum key distribution (QKD). We first review the entangled source configuration and then discuss ways to further improve the source brightness by means of tailoring the pump and collection modes. We present preliminary results that may lead to brighter entangled photon sources that are intrinsically robust to changes in their environment."}
{"_id": "cc383d9308c38e36d268b77bd6acee7bcd79fc10", "title": "A Survey on Wireless Body Area Networks for eHealthcare Systems in Residential Environments", "text": "Current progress in wearable and implanted health monitoring technologies has strong potential to alter the future of healthcare services by enabling ubiquitous monitoring of patients. A typical health monitoring system consists of a network of wearable or implanted sensors that constantly monitor physiological parameters. Collected data are relayed using existing wireless communication protocols to a base station for additional processing. This article provides researchers with information to compare the existing low-power communication technologies that can potentially support the rapid development and deployment of WBAN systems, and mainly focuses on remote monitoring of elderly or chronically ill patients in residential environments."}
{"_id": "2ab78d140d0bde65803057afa751ee6609f47ba7", "title": "Primary Object Segmentation in Videos via Alternate Convex Optimization of Foreground and Background Distributions", "text": "An unsupervised video object segmentation algorithm, which discovers a primary object in a video sequence automatically, is proposed in this work. We introduce three energies in terms of foreground and background probability distributions: Markov, spatiotemporal, and antagonistic energies. Then, we minimize a hybrid of the three energies to separate a primary object from its background. However, the hybrid energy is nonconvex. Therefore, we develop the alternate convex optimization (ACO) scheme, which decomposes the nonconvex optimization into two quadratic programs. Moreover, we propose the forward-backward strategy, which performs the segmentation sequentially from the first to the last frames and then vice versa, to exploit temporal correlations. Experimental results on extensive datasets demonstrate that the proposed ACO algorithm outperforms the state-of-the-art techniques significantly."}
{"_id": "50911111f1d575ac62e56bc5b4b933a78b7dfbbd", "title": "A qualitative evaluation of evolution of a learning analytics tool", "text": "New systems are designed with improved quality or features. However, not every new system lives to its pre-development expectations when tested with end-users. The evaluation of the first version of our LOCO-Analyst tool for providing teachers with feedback on students learning activities and performance led to the enhancing the visualization features of the tool. The second evaluation of the improved tool allowed us to see how the improvements affect the user perceptions of the tool. Here, we present the qualitative results of our two evaluations. The results show that educators find the kinds of feedback implemented in the tool informative and they value the mix of textual and graphical representations of different kinds of feedback provided by the tool."}
{"_id": "35a325cac27b6e04fd3fe4c3ccb7ccfc607bf154", "title": "Automatic Readability Classification of Crowd-Sourced Data based on Linguistic and Information-Theoretic Features", "text": "This paper presents a classifier of text readability based on information-theoretic features. The classifier was developed based on a linguistic approach to readability that explores lexical, syntactic and semantic features. For this evaluation we extracted a corpus of 645 articles from Wikipedia together with their quality judgments. We show that information-theoretic features perform as well as their linguistic counterparts even if we explore several linguistic levels at once."}
{"_id": "be1e60b65603b59ad666e47f105f39d88756e226", "title": "Why Deep Learning Is Changing the Way to Approach NGS Data Processing: A Review", "text": "Nowadays, big data analytics in genomics is an emerging research topic. In fact, the large amount of genomics data originated by emerging next-generation sequencing (NGS) techniques requires more and more fast and sophisticated algorithms. In this context, deep learning is re-emerging as a possible approach to speed up the DNA sequencing process. In this review, we specifically discuss such a trend. In particular, starting from an analysis of the interest of the Internet community in both NGS and deep learning, we present a taxonomic analysis highlighting the major software solutions based on deep learning algorithms available for each specific NGS application field. We discuss future challenges in the perspective of cloud computing services aimed at deep learning based solutions for NGS."}
{"_id": "fbe0653e31dbd1c230ea0d7726e8e87640e60a79", "title": "Rigor mortis in an unusual position: Forensic considerations", "text": "We report a case in which the dead body was found with rigor mortis in an unusual position. The dead body was lying on its back with limbs raised, defying gravity. Direction of the salivary stains on the face was also defying the gravity. We opined that the scene of occurrence of crime is unlikely to be the final place where the dead body was found. The clues were revealing a homicidal offence and an attempt to destroy the evidence. The forensic use of 'rigor mortis in an unusual position' is in furthering the investigations, and the scientific confirmation of two facts - the scene of death (occurrence) is different from the scene of disposal of dead body, and time gap between the two places."}
{"_id": "1c68e08907367cf1fadcccaa8b4d8a35dde256bd", "title": "A conceptual model of client-driven agile requirements prioritization: results of a case study", "text": "Requirements (re)prioritization is an essential mechanism of agile development approaches to maximize the value for the clients and to accommodate changing requirements. Yet, in the agile Requirements Engineering (RE) literature, very little is known about how agile (re)prioritization happens in practice. Conceptual models about this process are missing, which, in turn, makes it difficult for both practitioners and researchers to reason about requirements decision-making at inter-iteration time. We did a multiple case study on agile requirements prioritization methods to yield a conceptual model for understanding the inter-iteration prioritization process. The model is derived by using interview data from practitioners in 8 development organizations. Such a model makes explicit the concepts that are used tacitly in the agile requirements prioritization practice and can be used for structuring future empirical investigations about this topic, and for analyzing, supporting, and improving the process in real-life projects."}
{"_id": "21e222383b85483a28305a4d406f044fec00d114", "title": "A Penalty-Function Approach for Pruning Feedforward Neural Networks", "text": "This article proposes the use of a penalty function for pruning feedforward neural network by weight elimination. The penalty function proposed consists of two terms. The first term is to discourage the use of unnecessary connections, and the second term is to prevent the weights of the connections from taking excessively large values. Simple criteria for eliminating weights from the network are also given. The effectiveness of this penalty function is tested on three well-known problems: the contiguity problem, the parity problems, and the monks problems. The resulting pruned networks obtained for many of these problems have fewer connections than previously reported in the literature."}
{"_id": "1c40786dc5cc14efeb3a08e08bfdfdc52995b3ea", "title": "The Cascade-Correlation Learning Architecture", "text": "Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network. This research was sponsored in part by the National Science Foundation under Contract Number EET-8716324 and by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 4976 under Contract F33615-87-C-1499 and monitored by: Avionics Laboratory, Air Force Wright Aeronautical Laboratories, Aeronautical Systems Division (AFSC), Wright-Patterson AFB, OH 45433-6543. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the US Government. 1. Why is Back-Propagation Learning So Slow? The Cascade-Correlation learning algorithm was developed in an attempt to overcome certain problems and limitations of the popular back-propagation (or \u201cbackprop\u201d) learning algorithm [Rumelhart, 1986]. The most important of these limitations is the slow pace at which backprop learns from examples. Even on simple benchmark problems, a back-propagation network may require many thousands of epochs to learn the desired behavior from examples. (An epoch is defined as one pass through the entire set of training examples.) We have attempted to analyze the reasons why backprop learning is so slow, and we have identified two major problems that contribute to the slowness. We call these the step-size problem and the moving target problem. There may, of course, be other contributing factors that we have not yet identified. 1.1. The Step-Size Problem The step-size problem occurs because the standard back-propagation method computes only E w, the partial first derivative of the overall error function with respect to each weight in the network. Given these derivatives, we can perform a gradient descent in weight space, reducing the error with each step. It is straightforward to show that if we take infinitesimal steps down the gradient vector, running a new training epoch to recompute the gradient after each step, we will eventually reach a local minimum of the error function. Experience has shown that in most situations this local minimum will be a global minimum as well, or at least a \u201cgood enough\u201d solution to the problem at hand. In a practical learning system, however, we do not want to take infinitesimal steps; for fast learning, we want to take the largest steps that we can. Unfortunately, if we choose a step size that is too large, the network will not reliably converge to a good solution. 
In order to choose a reasonable step size, we need to know not just the slope of the error function, but something about its higher-order derivatives\u2014its curvature\u2014in the vicinity of the current point in weight space. This information is not available in the standard back-propagation algorithm. A number of schemes have been proposed for dealing with the step-size problem. Some form of \u201cmomentum\u201d [Rumelhart, 1986] is often used as a crude way of summarizing the slope of the error surface at earlier points in the computation. Conjugate gradient methods have been explored in the context of artificial neural networks by a number of researchers [Watrous, 1988, Lapedes, 1987, Kramer, 1989] with generally good results. Several schemes, for example [Franzini, 1987] and jacobs:dynamic, have been proposed that adjust the step-size dynamically, based on the change in gradient from one step to another. Becker and LeCun [Becker, 1988] explicitly compute an approximation to the second derivative of the error function at each step and use that information to guide the speed of descent. Fahlman\u2019s quickprop algorithm [Fahlman, 1988] is one of the more successful algorithms for handling the step-size problem in back-propagation systems. Quickprop computes the E w values just as in standard backprop, but instead of simple gradient descent, Quickprop uses a second-order method, related to Newton\u2019s method, to update the weights. On the learning benchmarks we have collected, quickprop consistently out-performs other backprop-like algorithms, sometimes by a large factor. Quickprop\u2019s weight-update procedure depends on two approximations: first, that small changes in one weight have relatively little effect on the error gradient observed at other weights; second, that the error function with respect to each weight is locally quadratic. For each weight, quickprop keeps a copy of"}
{"_id": "212a4ab68c4489eca22984ecd297e986693e5200", "title": "Dynamic node creation in backpropagation networks", "text": "Summary form only given. A novel method called dynamic node creation (DNC) that attacks issues of training large networks and of testing networks with different numbers of hidden layer units is presented. DNC sequentially adds nodes one at a time to the hidden layer(s) of the network until the desired approximation accuracy is achieved. Simulation results for parity, symmetry, binary addition, and the encoder problem are presented. The procedure was capable of finding known minimal topologies in many cases, and was always within three nodes of the minimum. Computational expense for finding the solutions was comparable to training normal backpropagation (BP) networks with the same final topologies. Starting out with fewer nodes than needed to solve the problem actually seems to help find a solution. The method yielded a solution for every problem tried. BP applied to the same large networks with randomized initial weights was unable, after repeated attempts, to replicate some minimum solutions found by DNC.<>"}
{"_id": "6c6590e1e27ba9df8145a0add4d8fe982ae4b57a", "title": "Emotion-based music recommendation by affinity discovery from film music", "text": "0957-4174/$ see front matter 2008 Elsevier Ltd. A doi:10.1016/j.eswa.2008.09.042 q Part of the content of this paper has been publi International Conference on Multimedia, 2005. * Corresponding author. Tel.: +886 2 29393091x67 E-mail address: mkshan@cs.nccu.edu.tw (M.-K. Sh With the growth of digital music, the development of music recommendation is helpful for users to pick desirable music pieces from a huge repository of music. The existing music recommendation approaches are based on a user\u2019s preference on music. However, sometimes, it might better meet users\u2019 requirement to recommend music pieces according to emotions. In this paper, we propose a novel framework for emotion-based music recommendation. The core of the recommendation framework is the construction of the music emotion model by affinity discovery from film music, which plays an important role in conveying emotions in film. We investigate the music feature extraction and propose the Music Affinity Graph and Music Affinity Graph-Plus algorithms for the construction of music emotion model. Experimental result shows the proposed emotion-based music recommendation achieves 85% accuracy in average. 2008 Elsevier Ltd. All rights reserved."}
{"_id": "b3e326f56fd2e32f33fd5a8f3138c6633da25786", "title": "Autonomous cleaning robot: Roboking system integration and overview", "text": "This paper would present the system integration and overview of the autonomous cleaning robot Roboking. Roboking is self-propelled autonomously navigating vacuum cleaning robot. It uses several sensors in order to protect indoor environments and itself while cleaning. In this paper, we would describe the principle of operation along with system structure, sensors, functions and integrated subsystems."}
{"_id": "aaa49eee7ed13b2945996446762c2d1f868d1310", "title": "Design of auxetic structures via mathematical optimization.", "text": "The design of auxetic structures is traditionally based on engineering experience, see, for instance, refs. [1,2]. The goal of this article is to demonstrate how an existing auxetic structure can be improved by structural optimization techniques and manufactured by a selective electron beam melting (SEBM) system. Auxetic materials possess negative Poisson\u2019s ratios \u03bd which is in contrast to almost all engineering materials (e. g. metals \u03bd \u2248 0.3). The negative Poisson\u2019s ratio translates into the unusual geometric effect that these materials expand when stretched and contract when compressed. This behavior is of interest for example for fastener systems and fi lters. [ 2 , 3 ] It also results in interesting mechanical behavior like increased penetration resistance for large negative Poisson\u2019s ratios [ 4 ] and in the case of isotropy for higher shear strength relative to the Young\u2019s modulus [ 2 ] of the structure. The auxetic behavior can always be traced back to complex mesoor microscopic geometric mechanisms, e. g. unfolding of inverted elements or rotation of star shaped rigid units relative to each other. [ 5 \u2013 7 ] These geometries can be realized on vastly different scales from specifi cally designed polymers to reactor cores. [ 2 , 8 ]"}
{"_id": "d7380a586ff83845263d98d77e3b0ec5ff00765d", "title": "CMT-bone: A Mini-App for Compressible Multiphase Turbulence Simulation Software", "text": "Designed with the goal of mimicking key features of real HPC workloads, mini-apps have become an important tool for co-design. An investigation of mini-app behavior can provide system designers with insight into the impact of architectures, programming models, and tools on application performance. Mini-apps can also serve as a platform for fast algorithm design space exploration, allowing the application developers to evaluate their design choices before significantly redesigning the application codes. Consequently, it is prudent to develop a mini-app alongside the full blown application it is intended to represent. In this paper, we present CMT-bone a mini-app for the compressible multiphase turbulence (CMT) application, CMT-nek, being developed to extend the physics of the CESAR Nek5000 application code. CMT-bone consists of the most computationally intensive kernels of CMT-nek and the communication operations involved in nearest-neighbor updates and vector reductions. The mini-app represents CMT-nek in its most mature state and going forward it will be developed in parallel with the CMT-nek application to keep pace with key new performance impacting changes. We describe these kernels and discuss the role that CMT-bone has played in enabling interdisciplinary collaboration by allowing application developers to work with computer scientists on performance optimization on current architectures and performance analysis on notional future systems."}
{"_id": "df1f684a65853983f9d04a99137c20207dec0c70", "title": "A typology for visualizing uncertainty", "text": "Information analysts, especially those working in the field of intelligence analysis, must assess not only the information presented to them, but the confidence they have in that information. Visual representations of information are challenged to incorporate a notion of confidence or certainty because the factors that influence the certainty or uncertainty of information vary with the type of information and the type of decisions being made. Visualization researchers have no model or framework for describing uncertainty as it relates to intelligence analysis, thus no consistent basis for constructing visualizations of uncertainty. This paper presents a typology describing the aspects of uncertainty related to intelligence analysis, drawing on frameworks for uncertainty representation in scientific computing."}
{"_id": "b36f2191fe33ef99a6870bcd3f282b7093b27302", "title": "Broadcast news story segmentation using latent topics on data manifold", "text": "This paper proposes to use Laplacian Probabilistic Latent Semantic Analysis (LapPLSA) for broadcast news story segmentation. The latent topic distributions estimated by LapPLSA are used to replace term frequency vector as the representation of sentences and measure the cohesive strength between the sentences. Subword n-gram is used as the basic term unit in the computation. Dynamic Programming is used for story boundary detection. LapPLSA projects the data into a low-dimensional semantic topic representation while preserving the intrinsic local geometric structure of the data. The locality preserving property attempts to make the estimated latent topic distributions more robust to the noise from automatic speech recognition errors. Experiments are conducted on the ASR transcripts of TDT2 Mandarin broadcast news corpus. Our proposed approach is compared with other approaches which use dimensionality reduction technique with the locality preserving property, and two different topic modeling techniques. Experiment results show that our proposed approach provides the highest F1-measure of 0.8228, which significantly outperforms the best previous approaches."}
{"_id": "4105dcca0c1c18c0420a98202567f036bf2b2bcc", "title": "A 10-year retrospective of research in health mass media campaigns: where do we go from here?", "text": "Mass media campaigns have long been a tool for promoting public health. How effective are such campaigns in changing health-related attitudes and behaviors, however, and how has the literature in this area progressed over the past decade? The purpose of the current article is threefold. First, I discuss the importance of health mass media campaigns and raise the question of whether they are capable of effectively impacting public health. Second, I review the literature and discuss what we have learned about the effectiveness of campaigns over the past 10 years. Finally, I conclude with a discussion of possible avenues for the health campaign literature over the next 10 years. The overriding conclusion is the following: The literature is beginning to amass evidence that targeted, well-executed health mass media campaigns can have small-to-moderate effects not only on health knowledge, beliefs, and attitudes, but on behaviors as well, which can translate into major public health impact given the wide reach of mass media. Such impact can only be achieved, however, if principles of effective campaign design are carefully followed."}
{"_id": "e49fc5ec302a4a95131bc798b946370c84ee9402", "title": "Conditional Generative Adversarial Networks for Commonsense Machine Comprehension", "text": "Recently proposed Story Cloze Test [Mostafazadeh et al., 2016] is a commonsense machine comprehension application to deal with natural language understanding problem. This dataset contains a lot of story tests which require commonsense inference ability. Unfortunately, the training data is almost unsupervised where each context document followed with only one positive sentence that can be inferred from the context. However, in the testing period, we must make inference from two candidate sentences. To tackle this problem, we employ the generative adversarial networks (GANs) to generate fake sentence. We proposed a Conditional GANs (CGANs) in which the generator is conditioned by the context. Our experiments show the advantage of the CGANs in discriminating sentence and achieve state-of-the-art results in commonsense story reading comprehension task compared with previous feature engineering and deep learning methods."}
{"_id": "09c4ba8dd3cc88289caf18d71e8985bdd11ad21c", "title": "Outliers , Level Shifts , and Variance Changes in Time Series", "text": "Outliers, level shifts, and variance changes are commonplace in applied time series analysis. However, their existence is often ignored and their impact is overlooked, for the lack of simple and useful methods to detect and handle those extraordinary events. The problem of detecting outliers, level shifts, and variance changes in a univariate time series is considered. The methods employed are extremely simple yet useful. Only the least squares techniques and residual variance ratios are used. The effectiveness of these simple methods is demonstrated by analysing three real data sets."}
{"_id": "1fbccf28aed3f085e385dc727d4ab8505598fea2", "title": "A simple tractor-trailer backing control law for path following", "text": "Backing of tractor-trailer systems is a problem addressed in many literatures. It is usually solved using various nonlinear-based control methods, which are often not easy to implement or tune. Similar to other work focused on backing a single axle trailer with a car like vehicle, we propose a two-tier controller that is simple and intuitive. However, ours is based upon curvature as opposed to hitch angle, which allows the control input to be more directly related to path specification and to handle path curvature discontinuity better. Experimental results are provided to illustrate the capability of this new algorithm applied to a full scale autonomous vehicle and trailer system in a real field environment using minimal sensing capability. Results demonstrate good performance on sloped grounds with various grades."}
{"_id": "254be802991ca4b4464c7abb5ff1a11803680950", "title": "Weighted graph comparison techniques for brain connectivity analysis", "text": "The analysis of brain connectivity is a vast field in neuroscience with a frequent use of visual representations and an increasing need for visual analysis tools. Based on an in-depth literature review and interviews with neuroscientists, we explore high-level brain connectivity analysis tasks that need to be supported by dedicated visual analysis tools. A significant example of such a task is the comparison of different connectivity data in the form of weighted graphs. Several approaches have been suggested for graph comparison within information visualization, but the comparison of weighted graphs has not been addressed. We explored the design space of applicable visual representations and present augmented adjacency matrix and node-link visualizations. To assess which representation best support weighted graph comparison tasks, we performed a controlled experiment. Our findings suggest that matrices support these tasks well, outperforming node-link diagrams. These results have significant implications for the design of brain connectivity analysis tools that require weighted graph comparisons. They can also inform the design of visual analysis tools in other domains, e.g. comparison of weighted social networks or biological pathways."}
{"_id": "2b1b3ae38a2d7bc13f63920641540d77eb2ac60d", "title": "Identifying Customer Preferences about Tourism Products Using an Aspect-based Opinion Mining Approach", "text": "In this study we extend Bing Liu\u2019s aspect-based opinion mining technique to apply it to the tourism domain. Using this extension, we also offer an approach for considering a new alternative to discover consumer preferences about tourism products, particularly hotels and restaurants, using opinions available on the Web as reviews. An experiment is also conducted, using hotel and restaurant reviews obtained from TripAdvisor, to evaluate our proposals. Results showed that tourism product reviews available on web sites contain valuable information about customer preferences that can be extracted using an aspect-based opinion mining approach. The proposed approach proved to be very effective in determining the sentiment orientation of opinions, achieving a precision and recall of 90%. However, on average, the algorithms were only capable of extracting 35% of the explicit aspect expressions. c \u00a9 2013 The Authors. Published by Elsevier B.V. Selection and peer-review under responsibility of KES International."}
{"_id": "12451c12aa782c3db7b63a990120d59a15a8e310", "title": "Cannabinoids and the skeleton: from marijuana to reversal of bone loss.", "text": "The active component of marijuana, Delta(9)-tetrahydrocannabinol, activates the CB1 and CB2 cannabinoid receptors, thus mimicking the action of endogenous cannabinoids. CB1 is predominantly neuronal and mediates the cannabinoid psychotropic effects. CB2 is predominantly expressed in peripheral tissues, mainly in pathological conditions. So far the main endocannabinoids, anandamide and 2-arachidonoylglycerol, have been found in bone at 'brain' levels. The CB1 receptor is present mainly in skeletal sympathetic nerve terminals, thus regulating the adrenergic tonic restrain of bone formation. CB2 is expressed in osteoblasts and osteoclasts, stimulates bone formation, and inhibits bone resorption. Because low bone mass is the only spontaneous phenotype so far reported in CB2 mutant mice, it appears that the main physiologic involvement of CB2 is associated with maintaining bone remodeling at balance, thus protecting the skeleton against age-related bone loss. Indeed, in humans, polymorphisms in CNR2, the gene encoding CB2, are strongly associated with postmenopausal osteoporosis. Preclinical studies have shown that a synthetic CB2-specific agonist rescues ovariectomy-induced bone loss. Taken together, the reports on cannabinoid receptors in mice and humans pave the way for the development of 1) diagnostic measures to identify osteoporosis-susceptible polymorphisms in CNR2, and 2) cannabinoid drugs to combat osteoporosis."}
{"_id": "85b3cd74945cc6517aa3a7017f89d8857c3600da", "title": "A Bayesian Approach to Predict Performance of a Student (BAPPS): A Case with Ethiopian Students", "text": "The importance of accurate estimation of student\u2019s future performance is essential in order to provide the student with adequate assistance in the learning process. To this end, this research aimed at investigating the use of Bayesian networks for predicting performance of a student, based on values of some identified attributes. We presented empirical experiments on the prediction of performance with a data set of high school students containing 8 attributes. The paper demonstrates an application of the Bayesian approach in the field of education and shows that the Bayesian network classifier has a potential to be used as a tool for prediction of student performance."}
{"_id": "c401b33b3611ccc3111ed50ba9f7c20ce7e4057f", "title": "A Prolog Framework for Integrating Business Rules into Java Applications", "text": "Business specifications \u2013 that formerly only supported IT development \u2013 increasingly become business configurations in the form of rules that can be loaded directly into IT solutions. PROLOG is well\u2013known for its qualities in the development of sophisticated rule systems. It is desirable to combine the advantages of PROLOG with JAVA, since JAVA has become one of the most used programming languages in industry. However, experts of both programming languages are rare. To overcome the resulting interoperability problems, we have developed a framework which generates a JAVA archive that provides methods to query a given set of PROLOG rules; it ensures that valid knowledge bases are transmitted between JAVA and PROLOG. We use XML Schema for describing the format for exchanging a knowledge base between PROLOG and JAVA. From the XML Schema desciption, we scaffold JAVA classes; the JAVA programmer can use them and fill in the open slots by statements accessing other JAVA data structures. The data structure on the JAVA side reflects the complex structured knowledge base, with which the PROLOG rules work, in an object\u2013oriented way. We can to some extend verify the correctness of the data set / knowledge base sent from JAVA to PROLOG using standard methods for XML Schema. Moreover, we can add constraints that go beyond XML. For instance, we can specify standard integrity constraints known from relational databases, such as primary key, foreign key, and not\u2013null constraints. Since we are dealing with complex structured XML data, however, there can be far more general integrity constraints. These can be expressed by standard PROLOG rules, which can be evaluated on the PROLOG side; they could also be compiled to JAVA by available PROLOG to JAVA converters such as Prolog Cafe \u2013 since they will usually be written in a supported subset of PROLOG. We have used our framework for integrating PROLOG business rules into a commercial E\u2013Commerce system written in JAVA."}
{"_id": "1491084572f77d15cb1367395a4ab8bb8b3cbe1a", "title": "Confluence: conformity influence in large social networks", "text": "Conformity is a type of social influence involving a change in opinion or behavior in order to fit in with a group. Employing several social networks as the source for our experimental data, we study how the effect of conformity plays a role in changing users' online behavior. We formally define several major types of conformity in individual, peer, and group levels. We propose Confluence model to formalize the effects of social conformity into a probabilistic model. Confluence can distinguish and quantify the effects of the different types of conformities. To scale up to large social networks, we propose a distributed learning method that can construct the Confluence model efficiently with near-linear speedup. Our experimental results on four different types of large social networks, i.e., Flickr, Gowalla, Weibo and Co-Author, verify the existence of the conformity phenomena. Leveraging the conformity information, Confluence can accurately predict actions of users. Our experiments show that Confluence significantly improves the prediction accuracy by up to 5-10% compared with several alternative methods."}
{"_id": "085085c28297570459bf8cbafeb285333cb610db", "title": "On board eco-driving system for varying road-traffic environments using model predictive control", "text": "This paper presents model predictive control of a vehicle in a varying road-traffic environment for ecological (eco) driving. The vehicle control input is derived by rigorous reasoning approach of model based anticipation of road, traffic and fuel consumption in a crowded road network regulated by traffic signals. Model predictive control with Continuation and generalized minimum residual method for optimization is used to calculate the sequence of control inputs aiming at long run fuel economy maintaining a safe driving. Performance of the proposed eco-driving system is evaluated through simulations in AIMSUN microscopic transport simulator. In spite of nonlinearity and discontinuous movement of other traffic and signals, the proposed system is robust enough to control the vehicle safely. The driving behavior with fuel saving aspects is graphically illustrated, compared and analyzed to signify the prospect of the proposed eco-driving of a vehicle."}
{"_id": "b8b11e8152fa900ab34aabb30f9542ffe7d4c453", "title": "Joint Computation Partitioning and Resource Allocation for Latency Sensitive Applications in Mobile Edge Clouds", "text": "The proliferation of mobile devices and ubiquitous access of the wireless network enables many new mobile applications such as augmented reality, mobile gaming and so on. As the applications are latency sensitive, researchers propose to offload the complex computations of these applications to the nearby mobile edge cloud, in order to reduce the latency. Existing works mostly consider the problem of partitioning the computations between the mobile device and the traditional cloud that has abundant resources. The proposed approaches can not be applied in the context of mobile edge cloud, because both the resources in the mobile edge cloud and the wireless access bandwidth to the edge cloud are constrained. In this paper, we study joint computation partitioning and resource allocation problem for latency sensitive applications in mobile edge clouds. The problem is novel in that we combine the computation partitioning and the two-dimensional resource allocations in both the computation resources and the network bandwidth. We develop a new and efficient method, namely Multi-Dimensional Search and Adjust (MDSA), to solve the problem. We compares MDSA with the classic list scheduling method and the SearchAdjust algorithm via comprehensive simulations. The results show that MDSA outperforms the benchmark algorithms in terms of the overall application latency."}
{"_id": "cb84881e3dcd2d6c908a3a88a3c7891728da3e39", "title": "smartPrediction: a real-time smartphone-based fall risk prediction and prevention system", "text": "The high risk of falls and the substantial increase in the elderly population have recently stimulated scientific research on Smartphone-based fall detection systems. Even though these systems are helpful for fall detection, the best way to reduce the number of falls and their consequences is to predict and prevent them from happening in the first place. To address the issue of fall prevention, in this paper, we propose a fall prediction system by integrating the sensor data of Smartphones and a Smartshoe. We designed and implemented a Smartshoe that contains four pressure sensors with a Wi-Fi communication module to unobtrusively collect data in any environment. By assimilating the Smartshoe and Smartphone sensors data, we performed an extensive set of experiments to evaluate normal and abnormal walking patterns. The system can generate an alert message in the Smartphone to warn the user about the high-risk gait patterns and potentially save them from an imminent fall. We validated our approach using a decision tree with 10-fold cross validation and found 97.2% accuracy in gait abnormality detection."}
{"_id": "8dfa1e41d8b6c3805be4f33f70dc191dd41a1180", "title": "Evaluation of LoRa LPWAN technology for remote health and wellbeing monitoring", "text": "Low power consumption, low transceiver chip cost and large coverage area are the main characteristics of the low power wide area networks (LPWAN) technologies. We expect that LPWAN can be part of enabling new human-centric health and wellness monitoring applications. Therefore in this work we study the indoor performance of one LPWAN technology, namely LoRa, by the means of real-life measurements. The measurements were conducted using the commercially available equipment in the main campus of the University of Oulu, Finland, which has an indoor area spanning for over 570 meters North to South and over 320 meters East to West. The measurements were executed for a sensor node operating close to human body that was periodically reporting the sensed data to a base station. The obtained results show that when using 14 dBm transmit power and the largest spreading factor of 12 for the 868 MHz ISM band, the whole campus area can be covered. Measured packet success delivery ratio was 96.7 % without acknowledgements and retransmissions."}
{"_id": "65e414d24a0cc51a4f431174120e3070b91043bc", "title": "Executive control in language processing", "text": "During communication, speakers and listeners need the mechanisms of executive control to organize thoughts and actions along internal goals. Speakers may use executive functions to select the right word over competing alternatives to refer to the concept in mind. Listeners may use executive functions to coordinate the outputs of multiple linguistic processes to reach a coherent interpretation of what others say. Bilinguals may use executive functions to control which language is to use or to switch from one language to another. The control mechanisms recruited in language processing may be similar to those recruited in perception and attention, supported by a network of frontal, parietal and sub-cortical brain structures. Here we review existing evidences regarding the involvement of domain-general executive control in language processing. We will explain how executive functions are employed to control interference in comprehension and production, within and across languages."}
{"_id": "6208388594a067d508aa3b8b874306e0ff9029a4", "title": "Soft-Switched CCM Boost Converters With High Voltage Gain for High-Power Applications", "text": "This paper proposes a new soft-switched continuous-conduction-mode (CCM) boost converter suitable for high-power applications such as power factor correction, hybrid electric vehicles, and fuel cell power conversion systems. The proposed converter achieves zero-voltage-switched (ZVS) turn-on of active switches in CCM and zero-current-switched turn-off of diodes leading to negligible reverse-recovery loss. The components' voltage ratings and energy volumes of passive components of the proposed converter are greatly reduced compared to the conventional zero-voltage-transition converter. Voltage conversion ratio is almost doubled compared to the conventional boost converter. Extension of the proposed concept to realize multiphase dc-dc converters is discussed. Experimental results from a 1.5-kW prototype are provided to validate the proposed concept."}
{"_id": "752cf3bfed094ff88307247947b39702e696c0d5", "title": "An Interleaved Boost Converter With Zero-Voltage Transition", "text": "This paper proposes a novel soft-switching interleaved boost converter composed of two shunted elementary boost conversion units and an auxiliary inductor. This converter is able to turn on both the active power switches at zero voltage to reduce their switching losses and evidently raise the conversion efficiency. Since the two parallel-operated elementary boost units are identical, operation analysis and design for the converter module becomes quite simple. A laboratory test circuit is built, and the circuit operation shows satisfactory agreement with the theoretical analysis. The experimental results show that this converter module performs very well with the output efficiency as high as 95%."}
{"_id": "76a06bd4caa423627d4946d8f98783a187679838", "title": "A new active clamping zero-voltage switching PWM current-fed half-bridge converter", "text": "A new active clamping zero-voltage switching (ZVS) pulse-width modulation (PWM) current-fed half-bridge converter (CFHB) is proposed in this paper. Its active clamping snubber (ACS) can not only absorb the voltage surge across the turned-off switch, but also achieve the ZVS of all power switches. Moreover, it can be applied to all current-fed power conversion topologies and its operation as well as structure is very simple. Since auxiliary switches in the snubber circuit are switched in a complementary way to main switches, an additional PWM IC is not necessary. In addition, it does not need any clamp winding and auxiliary circuit besides additional two power switches and one capacitor while the conventional current-fed half bridge converter has to be equipped with two clamp windings, two ZVS circuits, and two snubbers. Therefore, it can ensure the higher operating frequency, smaller-sized reactive components, lower cost of production, easier implementation, and higher efficiency. The operational principle, theoretical analysis, and design considerations are presented. To confirm the operation, validity, and features of the proposed circuit, experimental results from a 200-W, 24-200Vdc prototype are presented."}
{"_id": "df766770cc7a9fa75deea60257f28ae421fe74af", "title": "Landsat continuity : Issues and opportunities for land cover monitoring", "text": "Initiated in 1972, the Landsat program has provided a continuous record of earth observation for 35 years. The assemblage of Landsat spatial, spectral, and temporal resolutions, over a reasonably sized image extent, results in imagery that can be processed to represent land cover over large areas with an amount of spatial detail that is absolutely unique and indispensable for monitoring, management, and scientific activities. Recent technical problems with the two existing Landsat satellites, and delays in the development and launch of a successor, increase the likelihood that a gap in Landsat continuity may occur. In this communication, we identify the key features of the Landsat program that have resulted in the extensive use of Landsat data for large area land covermapping andmonitoring.We then augment this list of key features by examining the data needs of existing large area land cover monitoring programs. Subsequently, we use this list as a basis for reviewing the current constellation of earth observation satellites to identify potential alternative data sources for large area land cover applications. Notions of a virtual constellation of satellites to meet large area land cover mapping and monitoring needs are also presented. Finally, research priorities that would facilitate the integration of these alternative data sources into existing large area land cover monitoring programs are identified. Continuity of the Landsat program and the measurements provided are critical for scientific, environmental, economic, and social purposes. It is difficult to overstate the importance of Landsat; there are no other systems in orbit, or planned for launch in the short-term, that can duplicate or approach replication, of the measurements and information conferred by Landsat. While technical and political options are being pursued, there is no satellite image data stream poised to enter the National Satellite Land Remote Sensing Data Archive should system failures occur to Landsat-5 and -7. Crown Copyright \u00a9 2007 Published by Elsevier Inc. All rights reserved."}
{"_id": "8dfa93543b00e1e76a43102b87fa7636001520ca", "title": "Subcarrier-index modulation OFDM", "text": "A new transmission approach, referred to as subcarrier-index modulation (SIM) is proposed to be integrated with the orthogonal frequency division multiplexing (OFDM) systems. More specifically, it relates to adding an additional dimension to the conventional two-dimensional (2-D) amplitude/phase modulation (APM) techniques, i.e. amplitude shift keying (ASK) and quadrature amplitude modulation (QAM). The key idea of SIM is to employ the subcarrier-index to convey information to the receiver. Furthermore, a closed-form analytical bit error ratio (BER) of SIM OFDM in Rayleigh channel is derived. Analytical and simulation results show error probability performance gain of 4 dB over 4-QAM OFDM systems for both coded and uncoded data without power saving policy. Alternatively, power saving policy retains an average gain of 1 dB while using 3 dB less transmit power per OFDM symbol."}
{"_id": "86aa83ebab0f72ef84f8e6d62379c71c04cb6b68", "title": "Impact of aluminum wire and ribbon bonding technologies on D2PAK package reliability during thermal cycling applications", "text": ""}
{"_id": "60e38f92ba391b2b209b56c0faee5ee4cbcde0f9", "title": "Image Processing using IP Core Generator through FPGA", "text": "Xilinx CORE Generator System generates and delivers parameterizable cores optimized for Xilinx FPGAs. CORE Generator is mainly used to create high density, high performance designs in Xilinx FPGAs in less time. The CORE Generator is included with the ISE WebPack and ISE Foundation software and comes with an extensive library of Xilinx LogiCORE IP. These include DSP functions, memories, storage elements, math functions and a variety of basic elements. Xilinx provides a flexible Block Memory Generator core to create compact, high-performance memories running at up to 450 MHz. Block Memory Generator provides single port and dual port block memory. These memory types differ in selection of operating modes.Matlab tool is used to convert the image that is being processed to .coe file format."}
{"_id": "ebda1b2b2737ff2bcb9a68314299d933240a82a3", "title": "Improvement of the on-body performance of a dual-band textile antenna using an EBG structure", "text": "A new dual-band wearable textile antenna designed for body-centric communications is introduced. The antenna is fully characterized in free space and on the body model, with and without an electromagnetic band gap (EBG) substrate. The band gap array consists of 3 \u00d7 3 elements and is used to reduce the interaction with human tissues. With the EBG back reflector, the radiation into the body is reduced by more than 15 dB. Increases of 5.2 dB and 3 dB gain are noticed at 2.45 GHz and 5.5 GHz, respectively. The results are presented for the return loss, radiation pattern, efficiency, and specific absorption rate (SAR)."}
{"_id": "326178b72dc0ce59be8f221e095ac1f5f5b111f7", "title": "Procedural City Layout Generation Based on Urban Land Use Models", "text": "Training and simulation applications in virtual worlds require significant amounts of urban environments. Procedural generation is an efficient way to create such models. Existing approaches for procedural modelling of cities aim at facilitating the work of urban planners and artists, but either require expert knowledge or external input data to generate results that resemble real-life cities, or they have long computation times, and thus are unsuitable for non-experts such as training instructors. We propose a method that procedurally creates layouts of structurally plausible cities from high-level, intuitive user input such as city size, location and historic background. The resulting layouts consist of different kinds of city districts which are arranged using constraints derived from established models of urban land use. Our approach avoids the need for external expert engagement in the creation process, and allows for the generation of large city layouts in seconds, making it significantly faster than comparable agent-based software and thus supporting the needs of non-expert creators of virtual cities for many applications."}
{"_id": "14086835db435334398537a901117a89c2f16d77", "title": "A modular crawler-driven robot: Mechanical design and preliminary experiments", "text": "This paper presents a tracked robot composed of the proposed crawler mechanism, in which a planetary gear reducer is employed as the transmission device and provides two outputs in different forms with only one actuator. When the crawler moves in a rough environment, collision between mechanism and environment inevitably occurs. This under-actuated crawler can absorb the impact energy that should be transmitted to the actuator. A modular concept for the crawler is proposed for enlarging its use in robot systems and mechanical design of a modular crawler is conducted. Using this crawler module, a four-crawler-driven robot is realized by easily assembling. Experiments are conducted to verify the proposed concept and mechanical design. A single crawler module can well perform the proposed three locomotion modes. The four-crawler-driven robot has good adaptability to the environment which can get over obstacles both passively and actively."}
{"_id": "2d6436227825a574268f50cab90b19012dfc1084", "title": "An Algorithm for Training Polynomial Networks", "text": "We consider deep neural networks, in which the output of each node is a quadratic function of its inputs. Similar to other deep architectures, these networks can compactly represent any function on a finite training set. The main goal of this paper is the derivation of an efficient layer-by-layer algorithm for training such networks, which we denote as the Basis Learner. The algorithm is a universal learner in the sense that the training error is guaranteed to decrease at every iteration, and can eventually reach zero under mild conditions. We present practical implementations of this algorithm, as well as preliminary experimental results. We also compare our deep architecture to other shallow architectures for learning polynomials, in particular kernel learning."}
{"_id": "70c7ec6f4753354b2d3302b4ae62aa09e21fa1fa", "title": "Deep Learning Approach for Secondary Structure Protein Prediction based on First Level Features Extraction using a Latent CNN Structure", "text": "In Bioinformatics, Protein Secondary Structure Prediction (PSSP) has been considered as one of the main challenging tasks in this field. Today, secondary structure protein prediction approaches have been categorized into three groups (Neighbor-based, model-based, and meta predicator-based model). The main purpose of the model-based approaches is to detect the protein sequence-structure by utilizing machine learning techniques to train and learn a predictive model for that. In this model, different supervised learning approaches have been proposed such as neural networks, hidden Markov chain, and support vector machines have been proposed. In this paper, our proposed approach which is a Latent Deep Learning approach relies on detecting the first level features based on using Stacked Sparse Autoencoder. This approach allows us to detect new features out of the set of training data using the sparse autoencoder which will have used later as convolved filters in the Convolutional Neural Network (CNN) structure. The experimental results show that the highest accuracy of the prediction is 86.719% in the testing set of our approach when the backpropagation framework has been used to pre-trained techniques by relying on the unsupervised fashion where the whole network can be fine-tuned in a supervised learning fashion. Keywords\u2014Secondary structure protein prediction; secondary structure; fine-tuning; Stacked Sparse; Deep Learning; CNN"}
{"_id": "b2054dc469052bce103a44bbfd7cf0395d63c441", "title": "A Method for Identification of Basmati Rice grain of India and Its Quality Using Pattern Classification", "text": "The research work deals with an approach to perform texture and morphological based retrieval on a corpus of Basmati rice grain images. The work has been carried out using Image Warping and Image analysis approach. The method has been employed to normalize food grain images and hence eliminating the effects of orientation using image warping technique with proper scaling. The approach has been tested on sufficient number of basmati rice grain images of rice based on intensity, position and orientation. A digital image analysis algorithm based on color, morphological and textural features was developed to identify the six varieties of basmati rice seeds which are widely planted in India. Nine color and nine morphological and textural features were used for discriminant analysis. A back propagation neural network-based classifier was developed to identify the unknown grain types. The color and textural features were presented to the neural network for training purposes. The trained network was then used to identify the unknown grain types. It is also to find the percentage purity of hulled basmati rice grain sample by image processing technique. Commercially the purity test of basmati rice sample is done according to the size of the grain kernel (full, half or broken). By image processing we can also identify any broken basmati rice grains. Here we discuss the various procedures used to obtain the percentage quality of basmati rice grains. Keywords\u2014 warping, Image rectification, Image segmentation, Edge Detection, blurring image, Thresholding, Percentage Purity, Pixel area."}
{"_id": "e4e8e46f7f9c2160c801af406c61d529826d779e", "title": "OCR binarization and image pre-processing for searching historical documents", "text": "We consider the problem of document binarization as a pre-processing step for optical character recognition (OCR) for the purpose of keyword search of historical printed documents. A number of promising techniques from the literature for binarization, pre-filtering, and post-binarization denoising were implemented along with newly developed methods for binarization: an error diffusion binarization, a multiresolutional version of Otsu\u2019s binarization, and denoising by despeckling. The OCR in the ABBYY FineReader 7.1 SDK is used as a black box metric to compare methods. Results for 12 pages from six newspapers of differing quality show that performance varies widely by image, but that the classic Otsu method and Otsu-based methods perform best on average. 2006 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved."}
{"_id": "6654c2921b28cc80cc80a8d84c9ca74586571c8d", "title": "Multi-phase Word Sense Embedding Learning Using a Corpus and a Lexical Ontology", "text": "Word embeddings play a significant role in many modern NLP systems. However, most prevalent word embedding learning methods learn one representation per word which is problematic for polysemous words and homonymous words. To address this problem, we propose a multi-phase word sense embedding learning method which utilizes both a corpus and a lexical ontology to learn one embedding per word sense. We use word sense definitions and relations between word senses defined in a lexical ontology in a different way from existing systems. Experimental results on word similarity task show that our approach produces word sense embeddings of high quality."}
{"_id": "b250f48f02d97f33422f9ebf2983e3213655eddc", "title": "A Study of Attribute-based Proxy Re-encryption Scheme in Cloud Environments", "text": "Attribute-based proxy re-encryption (ABPRE) scheme is one of the proxy cryptography, which can delegate the reencryption capability to the proxy and re-encrypt the encrypted data by using the re-encryption key. ABPRE extending the traditional proxy cryptography and attributes plays an important role. In ABPRE, users are identified by attributes, and the access policy is designed to control the user\u2019s access. Using ABPRE can have these advantages: (i) The proxy can be delegated to execute the re-encryption operation, which reduces the computation overhead of the data owner; (ii) The authorized user just uses his own secret key to decrypt the encrypted data, and he doesn\u2019t need to store an additional decryption key for deciphering; (iii) The sensitive information cannot be revealed to the proxy in re-encryption, and the proxy only complies to the data owner\u2019s command. In this paper, we survey two various access policy attribute-based proxy re-encryption schemes and analyze these schemes. Thereafter, we list the comparisons of them by some criteria."}
{"_id": "1e0333823e74a33ad70d51fee122d37167419558", "title": "Dual-Band Transmitarrays With Dual-Linear Polarization at Ka-Band", "text": "Dual-band transmitarray antennas are demonstrated at Ka-band with the capability of forming independent linearly polarized beams with a given polarization in each frequency band, while sharing the same radiating aperture. The proposed three-layer unit-cell is based on identical narrow microstrip patches printed on both receiving and transmitting layers and connected by a metallized via hole. The metal layers are printed on two identical substrates bonded with a thin film, and the designed unit-cell exhibits a 180\u00b0 phase resolution (i.e., 1-b phase quantization). The dual-band dual-polarized property of the transmitarray is achieved by interleaving unit-cells operating in the down-link and up-link frequency bands. Four different prototypes are characterized to demonstrate the relevance of the proposed concepts. A good agreement is obtained between the radiation patterns, gain curves, and cross-polarization levels measured and computed in both frequency bands and polarizations."}
{"_id": "9c0537b930e3f79b5bc4405c947b6c76df28872d", "title": "The diffusion of political memes in social media: keynote abstract", "text": "This talk presents ongoing work on the study of information diffusion in social media, focusing in particular on political communication in the Twitter microblogging network. Social media platforms play an important role in shaping political discourse in the US and around the world. The truthy.indiana.edu infrastructure allows us to mine and visualize a large stream of social media data related to political themes. The analyses in this keynote address polarization and cross-ideological communication, and partisan asymmetries in the online political activities of social media users. Machine learning efforts can successfully leverage the structure of meme diffusion networks to detect orchestrated astroturf attacks that simulate grassroots campaigns, and to predict the political affiliation of active users. The retweet network segregates individuals into two distinct, homogenous communities of left- and right-leaning users. The mention network does not exhibit this kind of segregation, instead forming a communication bridge across which information flows between these two partisan communities. We propose a mechanism of action to explain these divergent topologies and provide statistical evidence in support of this hypothesis. Related to political communication are questions about the birth of online social movements. Social media data provides an opportunity to look for signatures that capture these seminal events. Finally, I will introduce a model of the competition for attention in social media. A dynamic of information diffusion emerges from this process, where a few ideas go viral while most do not. I will show that the relative popularity of different topics, the diversity of information to which we are exposed, and the fading of our collective interests for specific memes, can all be explained as deriving from a combination between the competition for limited attention and the structure of social networks. Surprisingly, one can reproduce the massive heterogeneity in the popularity and persistence of ideas without the need to assume different intrinsic values among those ideas."}
{"_id": "88f85283db95994d9a632a1c53573c3ba0d655ea", "title": "Factorization machines with follow-the-regularized-leader for CTR prediction in display advertising", "text": "Predicting ad click-through rates is the core problem in display advertising, which has received much attention from the machine learning community in recent years. In this paper, we present an online learning algorithm for click-though rate prediction, namely Follow-The-Regularized-Factorized-Leader (FTRFL), which incorporates the Follow-The-Regularized-Leader (FTRL-Proximal) algorithm with per-coordinate learning rates into Factorization machines. Experiments on a real-world advertising dataset show that the FTRFL method outperforms the baseline with stochastic gradient descent, and has a faster rate of convergence."}
{"_id": "c6173d86caa9d3688133e4200ddcc3178189fff4", "title": "Culture and diverging views of social events.", "text": "The authors compared East Asians' and Americans' views of everyday social events. Research suggests that Americans tend to focus more on the self and to have a greater sense of personal agency than East Asians. The authors assessed whether, compared to East Asians, Americans emphasize main characters even when events do not involve the self and whether they see more agency or intentionality in actions, even when the actions are not their own. Whether East Asians would observe more emotions in everyday scenarios than would Americans also was investigated. In Study 1, Chinese and Americans read alleged diary entries of another person. Americans did focus more on main characters and on characters' intentionality. Study 2 replicated these results comparing Taiwanese and Americans on free recall of events concerning the self and of narratives and videos concerning others. Study 2 also found that Taiwanese made more comments about the emotional states of characters."}
{"_id": "05e5e58edead6167befb089444d35fbd17b13414", "title": "A Fuzzy Approach for Mining Quantitative Association Rules", "text": ""}
{"_id": "286dc0c3992f5b9c78209827117546888371b818", "title": "Secure cloud computing: Benefits, risks and controls", "text": "Cloud computing presents a new model for IT service delivery and it typically involves over-a-network, on-demand, self-service access, which is dynamically scalable and elastic, utilising pools of often virtualized resources. Through these features, cloud computing has the potential to improve the way businesses and IT operate by offering fast start-up, flexibility, scalability and cost efficiency. Even though cloud computing provides compelling benefits and cost-effective options for IT hosting and expansion, new risks and opportunities for security exploits are introduced. Standards, policies and controls are therefore of the essence to assist management in protecting and safeguarding systems and data. Management should understand and analyse cloud computing risks in order to protect systems and data from security exploits. The focus of this paper is on mitigation for cloud computing security risks as a fundamental step towards ensuring secure cloud computing environments."}
{"_id": "9d19affb1521192b5a075f33ff517dd06a5c17dc", "title": "Improving the Neural Algorithm of Artistic Style", "text": "In this work we investigate different avenues of improving the Neural Algorithm of Artistic Style [7]. While showing great results when transferring homogeneous and repetitive patterns, the original style representation often fails to capture more complex properties, like having separate styles of foreground and background. This leads to visual artifacts and undesirable textures appearing in unexpected regions when performing style transfer. We tackle this issue with a variety of approaches, mostly by modifying the style representation in order for it to capture more information and impose a tighter constraint on the style transfer result. In our experiments, we subjectively evaluate our best method as producing from barely noticeable to significant improvements in the quality of style transfer."}
{"_id": "08485bbdfec76168205469ab971cd6b29a0e67ab", "title": "Deep Neural Networks Reveal a Gradient in the Complexity of Neural Representations across the Ventral Stream.", "text": "Converging evidence suggests that the primate ventral visual pathway encodes increasingly complex stimulus features in downstream areas. We quantitatively show that there indeed exists an explicit gradient for feature complexity in the ventral pathway of the human brain. This was achieved by mapping thousands of stimulus features of increasing complexity across the cortical sheet using a deep neural network. Our approach also revealed a fine-grained functional specialization of downstream areas of the ventral stream. Furthermore, it allowed decoding of representations from human brain activity at an unsurpassed degree of accuracy, confirming the quality of the developed approach. Stimulus features that successfully explained neural responses indicate that population receptive fields were explicitly tuned for object categorization. This provides strong support for the hypothesis that object categorization is a guiding principle in the functional organization of the primate ventral stream."}
{"_id": "10d39702b2b969ff8ba8645988910954864ccbec", "title": "High-Resolution Image Inpainting Using Multi-scale Neural Patch Synthesis", "text": "Recent advances in deep learning have shown exciting promise in filling large holes in natural images with semantically plausible and context aware details, impacting fundamental image manipulation tasks such as object removal. While these learning-based methods are significantly more effective in capturing high-level features than prior techniques, they can only handle very low-resolution inputs due to memory limitations and difficulty in training. Even for slightly larger images, the inpainted regions would appear blurry and unpleasant boundaries become visible. We propose a multi-scale neural patch synthesis approach based on joint optimization of image content and texture constraints, which not only preserves contextual structures but also produces high-frequency details by matching and adapting patches with the most similar mid-layer feature correlations of a deep classification network. We evaluate our method on the ImageNet and Paris Streetview datasets and achieved state-of-the-art inpainting accuracy. We show our approach produces sharper and more coherent results than prior methods, especially for high-resolution images."}
{"_id": "1963f5cf7741af4787875e92ff4cd5bd26be9390", "title": "A path decomposition approach for computing blocking probabilities in wavelength-routing networks", "text": "We study a class of circuit-switched wavelength-routing networks with fixed or alternate routing and with random wavelength allocation. We present an iterative path decomposition algorithm to evaluate accurately and efficiently the blocking performance of such networks with and without wavelength converters. Our iterative algorithm analyzes the original network by decomposing it into single-path subsystems. These subsystems are analyzed in isolation, and the individual results are appropriately combined to obtain a solution for the overall network. To analyze individual subsystems, we first construct an exact Markov process that captures the behavior of a path in terms of wavelength use. We also obtain an approximate Markov process which has a closed-form solution that can be computed efficiently for short paths. We then develop an iterative algorithm to analyze approximately arbitrarily long paths. The path decomposition approach naturally captures the correlation of both link loads and link blocking events. Our algorithm represents a simple and computationally efficient solution to the difficult problem of computing call-blocking probabilities in wavelength-routing networks. We also demonstrate how our analytical techniques can be applied to gain insight into the problem of converter placement in wavelength-routing networks."}
{"_id": "b15f8d389814a1a8d2580977f29f0ff705ace0d8", "title": "A neuro-stimulus chip with telemetry unit for retinal prosthetic device", "text": "In this retinal prosthesis project, a rehabilitative device is designed to replace the functionality of defective photoreceptors in patients suffering from retinitis pigmentosa (RP) and age-related macular degeneration (AMD). The device consists of an extraocular and an intraocular unit. The implantable component receives power and a data signal via a telemetric inductive link between the two units. The extraocular unit includes a video camera and video processing board, a telemetry protocol encoder chip, and an RF amplifier and primary coil. The intraocular unit consists of a secondary coil, a rectifier and regulator, a retinal chip with a telemetry protocol decoder, a stimulus signal generator, and an electrode array. This paper focuses on the design, fabrication, and testing of a microchip which serves as the telemetry protocol decoder and stimulus signal generator. It is fabricated by MOSIS with 1.2-mm CMOS technology and was demonstrated to provide the desired biphasic current stimulus pulses for an array of 100 retinal electrodes at video frame rates."}
{"_id": "1033ecb47d713d3ff21f0f22a12dc13a3e129348", "title": "The Causal Foundations of Structural Equation Modeling", "text": "The role of causality in SEM research is widely perceived to be, on the one hand, of pivotal methodological importance and, on the other hand, confusing, enigmatic and controversial. The confusion is vividly portrayed, for example, in the influential report of Wilkinson and Task Force\u2019s (1999) on \u201cStatistical Methods in Psychology Journals: Guidelines and Explanations.\u201d In discussing SEM, the report starts with the usual warning: \u201cIt is sometimes thought that correlation does not prove causation but \u2018causal modeling\u2019 does. [Wrong! There are] dangers in this practice.\u201d But then ends with a startling conclusion: \u201cThe use of complicated causal-modeling software [read SEM] rarely yields any results that have any interpretation as causal effects.\u201d The implication being that the entire enterprise of causal modeling, from Sewell Wright (1921) to Blalock (1964) and Duncan (1975), the entire literature in econometric research, including modern advances in graphical and nonparametric structural models have all been misguided, for they have been chasing parameters that have no causal interpretation. The motives for such overstatements notwithstanding, readers may rightly ask: \u201cIf SEM methods do not \u2018prove\u2019 causation, how can they yield results that have causal interpretation?\u201d Put another way, if the structural coefficients that SEM researchers labor to estimate can legitimately be interpreted as causal effects then, unless these parameters are grossly misestimated, why deny SEM researchers the honor of \u201cestablishing causation\u201d or at least of deriving some useful claims about causation. The answer is that a huge logical gap exists between \u201cestablishing causation,\u201d which requires careful manipulative experiments, and \u201cinterpreting parameters as causal effects,\u201d which may be based on firm scientific knowledge or on previously conducted experiments, perhaps by other researchers. One can legitimately be in a possession of a parameter that stands for a causal effect and still be unable, using statistical means alone, to determine the magnitude of that parameter given non-experimental data. As a matter of fact, we know that"}
{"_id": "0f415803e4b94cfbefa0bf118b04e9f7f79dcf36", "title": "Librispeech: An ASR corpus based on public domain audio books", "text": "This paper introduces a new corpus of read English speech, suitable for training and evaluating speech recognition systems. The LibriSpeech corpus is derived from audiobooks that are part of the LibriVox project, and contains 1000 hours of speech sampled at 16 kHz. We have made the corpus freely available for download, along with separately prepared language-model training data and pre-built language models. We show that acoustic models trained on LibriSpeech give lower error rate on the Wall Street Journal (WSJ) test sets than models trained on WSJ itself. We are also releasing Kaldi scripts that make it easy to build these systems."}
{"_id": "ddaf546e1b0ac14a7c4e41b4b8f4e165928eccf8", "title": "Study about IOT's application in \"Digital Agriculture\" construction", "text": "Along with acceleration of various countries' digitization and informationization, according to the 12th 5-year development plan, the internet of things (IOT) will become one of Chinese emerging strategic industries. Based on the analysis of IOT's basic concept and key technology, this paper studies IOT's application in \"Digital Agriculture\" construction. The main contents include: IOT's application in agricultural production, IOT's application in logistics and distribution of agricultural products, and IOT's application in agricultural products safety traceability."}
{"_id": "4ddfbdb5a0fcdeaa3fea3f75a54a18257c4d248a", "title": "A circle packing algorithm", "text": "A circle packing is a configuration P of circles realizing a specified pattern of tangencies. Radii of packin the euclidean and hyperbolic planes may be computed using an iterative process suggested by William T We describe an efficient implementation, discuss its performance, and illustrate recent applications. A cen is played by new and subtle monotonicity results for \u201cflowers\u201d of circles. \uf6d9 2003 Elsevier Science B.V. All rights reserved."}
{"_id": "0bbddcc8f83d6e97d203693451248945c8290df5", "title": "Help from the Sky: Leveraging UAVs for Disaster Management", "text": "This article presents a vision for future unmanned aerial vehicles (UAV)-assisted disaster management, considering the holistic functions of disaster prediction, assessment, and response. Here, UAVs not only survey the affected area but also assist in establishing vital wireless communication links between the survivors and nearest available cellular infrastructure. A perspective of different classes of geophysical, climate-induced, and meteorological disasters based on the extent of interaction between the UAV and terrestrially deployed wireless sensors is presented in this work, with suitable network architectures designed for each of these cases. The authors outline unique research challenges and possible solutions for maintaining connected aerial meshes for handoff between UAVs and for systems-specific, security- and energy-related issues. This article is part of a special issue on drones."}
{"_id": "44b1b0468bc03c255e7de1a141437887a4fdaaad", "title": "Cross-Level Sensor Network Simulation with COOJA", "text": "Simulators for wireless sensor networks are a valuable tool for system development. However, current simulators can only simulate a single level of a system at once. This makes system development and evolution difficult since developers cannot use the same simulator for both high-level algorithm development and low-level development such as device-driver implementations. We propose cross-level simulation, a novel type of wireless sensor network simulation that enables holistic simultaneous simulation at different levels. We present an implementation of such a simulator, COOJA, a simulator for the Contiki sensor node operating system. COOJA allows for simultaneous simulation at the network level, the operating system level, and the machine code instruction set level. With COOJA, we show the feasibility of the cross-level simulation approach"}
{"_id": "c9dc151f40cee3b1841dde868895df2183c76912", "title": "Design considerations and performance evaluation of 1200 V, 100 a SiC MOSFET based converter for high power density application", "text": "Silicon Carbide (SiC) MOSFET is capable of achieving better efficiency, power density and reliability of power converters due to its low on-state resistance, high temperature operation capability and lower switching losses compared to silicon (Si) IGBT. Operation of power converters at higher switching frequency using SiC devices allows reduction in filter size and hence improves the power to weight ratio of the converter. This paper presents switching characterization of 1200 V, 100 A SiC MOSFET module and compares efficiency of a Two Level Voltage Source Converter (2L-VSC) using SiC MOSFETs and Si IGBTs. Also, various design considerations of the 1200 V, 100 A SiC MOSFET based 2L-VSC including gate drive design, bus bar packaging and thermal management have been elaborated. The designed and developed 2L-VSC is operated to supply 35 kVA load at 20 kHz switching frequency with DC bus voltage at 800 V and the experimental results are presented."}
{"_id": "269c24a4aad9be622b609a0860f5df80688c2f93", "title": "A Reconfigurable Fabric for Accelerating Large-Scale Datacenter Services", "text": "Datacenter workloads demand high computational capabilities, flexibility, power efficiency, and low cost. It is challenging to improve all of these factors simultaneously. To advance datacenter capabilities beyond what commodity server designs can provide, we designed and built a composable, reconfigurable hardware fabric based on field programmable gate arrays (FPGA). Each server in the fabric contains one FPGA, and all FPGAs within a 48-server rack are interconnected over a low-latency, high-bandwidth network.\n We describe a medium-scale deployment of this fabric on a bed of 1632 servers, and measure its effectiveness in accelerating the ranking component of the Bing web search engine. We describe the requirements and architecture of the system, detail the critical engineering challenges and solutions needed to make the system robust in the presence of failures, and measure the performance, power, and resilience of the system. Under high load, the large-scale reconfigurable fabric improves the ranking throughput of each server by 95% at a desirable latency distribution or reduces tail latency by 29% at a fixed throughput. In other words, the reconfigurable fabric enables the same throughput using only half the number of servers."}
{"_id": "adb99ae529f0a64082fc8f9a86f97fbacab64ea2", "title": "Work Related Stress, Burnout, Job Satisfaction and General Health of Nurses", "text": "Gaps in research focusing on work related stress, burnout, job satisfaction and general health of nurses is evident within developing contexts like South Africa. This study identified the relationship between work related stress, burnout, job satisfaction and general health of nurses. A total of 1200 nurses from four hospitals were invited to participate in this cross-sectional study (75% response rate). Participants completed five questionnaires and multiple linear regression analysis was used to determine significant relationships between variables. Staff issues are best associated with burnout as well as job satisfaction. Burnout explained the highest amount of variance in mental health of nurses. These are known to compromise productivity and performance, as well as affect the quality of patient care. Issues, such as security risks in the workplace, affect job satisfaction and health of nurses. Although this is more salient to developing contexts it is important in developing strategies and intervention programs towards improving nurse and patient related outcomes."}
{"_id": "e155ab4112bbddddbd842c7861748765cf369478", "title": "Performance of PMASynRM With Ferrite Magnets for EV/HEV Applications Considering Productivity", "text": "Although motors that use rare-earth permanent magnets typically exhibit high performance, their cost is high, and there are concerns about the stability of the raw material supply. This paper proposes a permanent-magnet-assisted synchronous reluctance motor (PMASynRM) with a ferrite magnet that does not use rare-earth materials considering productivity. The performance of the proposed PMASynRM is evaluated based on the finite-element method and an experiment using a prototype machine. The analysis results reveal that the proposed PMASynRM has the same power density and an equivalent efficiency as rare-earth permanent-magnet synchronous motors for hybrid electric vehicles (2003 Toyota Prius). Furthermore, some experimental results are presented in order to validate the analytical results. As a result, the proposed PMASynRM was found to achieve high-power-density and high-efficiency performance."}
{"_id": "3779782a7bc5cd743e4029eba985e9a7c9bc3a00", "title": "Adaptive Speculative Processing of Out-of-Order Event Streams", "text": "Distributed event-based systems are used to detect meaningful events with low latency in high data-rate event streams that occur in surveillance, sports, finances, etc. However, both known approaches to dealing with the predominant out-of-order event arrival at the distributed detectors have their shortcomings: buffering approaches introduce latencies for event ordering, and stream revision approaches may result in system overloads due to unbounded retraction cascades.\n This article presents an adaptive speculative processing technique for out-of-order event streams that enhances typical buffering approaches. In contrast to other stream revision approaches developed so far, our novel technique encapsulates the event detector, uses the buffering technique to delay events but also speculatively processes a portion of it, and adapts the degree of speculation at runtime to fit the available system resources so that detection latency becomes minimal.\n Our technique outperforms known approaches on both synthetical data and real sensor data from a realtime locating system (RTLS) with several thousands of out-of-order sensor events per second. Speculative buffering exploits system resources and reduces latency by 40% on average."}
{"_id": "b2827253cb663812ff1065751823f0a6a3d020fd", "title": "Cerebellar ataxia rehabilitation trial in degenerative cerebellar diseases.", "text": "OBJECTIVE\nTo investigate short- and long-term effects of intensive rehabilitation on ataxia, gait, and activities of daily living (ADLs) in patients with degenerative cerebellar disease.\n\n\nMETHODS\nA total of 42 patients with pure cerebellar degeneration were randomly assigned to the immediate group or the delayed-entry control group. The immediate group received 2 hours of inpatient physical and occupational therapy, focusing on coordination, balance, and ADLs, on weekdays and 1 hour on weekends for 4 weeks. The control group received the same intervention after a 4-week delay. Short-term outcome was compared between the immediate and control groups. Long-term evaluation was done in both groups at 4, 12, and 24 weeks after the intervention. Outcome measures included the assessment and rating of ataxia, Functional Independence Measure, gait speed, cadence, functional ambulation category, and number of falls.\n\n\nRESULTS\nThe immediate group showed significantly greater functional gains in ataxia, gait speed, and ADLs than the control group. Improvement of truncal ataxia was more prominent than limb ataxia. The gains in ataxia and gait were sustained at 12 weeks and 24 weeks, respectively. At least 1 measure was better than at baseline at 24 weeks in 22 patients.\n\n\nCONCLUSIONS\nShort-term benefit of intensive rehabilitation was evident in patients with degenerative cerebellar diseases. Although functional status tended to decline to the baseline level within 24 weeks, gains were maintained in more than half of the participants."}
{"_id": "4664bd8c79051af855ee9e036a867bc35be4d20a", "title": "ON ATTRIBUTE BASED ACCESS CONTROL SCHEME IN CLOUD ENVIRONMENT", "text": "*Corresponding author: Email: divyasnkl@gmail.com; Tel.: +91 9486029378"}
{"_id": "9d85b10f5870e114240baebd34bb5c1ac3f79244", "title": "Interoperability evaluation models: A systematic review", "text": "Interoperability is defined as the ability for two (or more) systems or components to exchange information and to use the information that has been exchanged. There is increasing demand for interoperability between individual software systems. Developing an interoperability evaluation model between software and information systems is difficult, and becoming an important challenge. An interoperability evaluation model allows knowing the degree of interoperability, and lead to the improvement of interoperability. This paper describes the existing interoperability evaluation models, and performs a comparative analysis among their findings to determine the similarities and differences in their philosophy and implementation. This analysis yields a set of recommendations for any party that is open to the idea of creating or improving an interoperability evaluation model. 2013 Elsevier B.V. All rights reserved."}
{"_id": "f67076d44011b856baeb57d378eb2e1440b0985d", "title": "The partograph for the prevention of obstructed labor.", "text": "Obstructed labor is an important cause of maternal and perinatal mortality and morbidity. The partograph graphically represents key events in labor and provides an early warning system. The World Health Organization partographs are the best known partographs in low resource settings. Experiences with World Health Organization and other types of partographs in low resource settings suggest that when used with defined management protocols, this inexpensive tool can effectively monitor labor and prevent obstructed labor. However, challenges to implementation exist and these should be addressed urgently."}
{"_id": "ff2695513e7e0f49a0a484818c19b6346e076172", "title": "Divertible Protocols and Atomic Proxy Cryptography", "text": "First, we introduce the notion of divertibility as a protocol property as opposed to the existing notion as a language property (see Okamoto, Ohta [OO90]). We give a definition of protocol divertibility that applies to arbitrary 2-party protocols and is compatible with Okamoto and Ohta\u2019s definition in the case of interactive zero-knowledge proofs. Other important examples falling under the new definition are blind signature protocols. We propose a sufficiency criterion for divertibility that is satisfied by many existing protocols and which, surprisingly, generalizes to cover several protocols not normally associated with divertibility (e.g., Diffie-Hellman key exchange). Next, we introduce atomic proxy cryptography, in which an atomic proxy function, in conjunction with a public proxy key, converts ciphertexts (messages or signatures) for one key into ciphertexts for another. Proxy keys, once generated, may be made public and proxy functions applied in untrusted environments. We present atomic proxy functions for discrete-log-based encryption, identification, and signature schemes. It is not clear whether atomic proxy functions exist in general for all public-key cryptosystems. Finally, we discuss the relationship between divertibility and proxy cryptography."}
{"_id": "626ecbdfdf0f92ef306865cc28503350d2591008", "title": "Proxy Cryptography Revisited", "text": "In this work we revisit and formally study the notion of proxy cryptography. Intuitively, various proxy functions allow two cooperating parties F (the \u201cFBI\u201d) and P (the \u201cproxy\u201d) to duplicate the functionality available to t he third party U (the \u201cuser\u201d), without being able to perform this functionality on their own (without cooperation). The concept is closely related to the notion of threshold cryptography, except we deal with only two parties P andF , and place very strict restrictions on the way the operations are performed (which is done for the sake of efficiency, usability and scalability). For example, for decryption (res p. signature)P (F ) sends a single message to F (P ), after which the latter can decrypt (sign) the message. Our formal modeling of proxy cryptography significantly generalizes, simplifies and simultaneously clarifies the model of \u201catomic proxy\u201d suggested by Blaze and Strauss [4]. In particular, we define bidirectional and unidirectional var iants of our model 1, and show extremely simple generic solutions for proxy signature and encryption in these models. We also give more efficient solutions for several specific schemes. We conclude that proxy cryptography is a relatively simple concept to satisfy when looked from the correct and formal standpoint."}
{"_id": "65f77b5c1a03ab2beb949714dd9bb37b60a8ff0c", "title": "A Hard-Core Predicate for all One-Way Functions", "text": "A central tool in constructing pseudorandom generators, secure encryption functions, and in other areas are \u201chard-core\u201d predicates b of functions (permutations) \u0192, discovered in [Blum Micali 82]. Such b(x) cannot be efficiently guessed (substantially better than 50-50) given only \u0192(x). Both b, \u0192 are computable in polynomial time.\n[Yao 82] transforms any one-way function \u0192 into a more complicated one, \u0192*, which has a hard-core predicate. The construction applies the original \u0192 to many small pieces of the input to \u0192* just to get one \u201chard-core\u201d bit. The security of this bit may be smaller than any constant positive power of the security of \u0192. In fact, for inputs (to \u0192*) of practical size, the pieces effected by \u0192 are so small that \u0192 can be inverted (and the \u201chard-core\u201d bit computed) by exhaustive search.\nIn this paper we show that every one-way function, padded to the form \u0192(p, x) = (p, g(x)), \u2016\u2016p\u2016\u2016 = \u2016x\u2016, has by itself a hard-core predicate of the same (within a polynomial) security. Namely, we prove a conjecture of [Levin 87, sec. 5.6.2] that the scalar product of Boolean vectors p, x is a hard-core of every one-way function \u0192(p, x) = (p, g(x)). The result extends to multiple (up to the logarithm of security) such bits and to any distribution on the x's for which \u0192 is hard to invert."}
{"_id": "f982d95a9177088bb7cc3d0cccf06187251457ce", "title": "Zero-knowledge proofs of identity", "text": "In this paper we extend the notion of zero knowledge proofs of membership (which reveal one bit of information) to zero knowledge proofs of knowledge (which reveal no information whatsoever). After formally defining this notion, we show its relevance to identification schemes, in which parties prove their identity by demonstrating their knowledge rather than by proving the validity of assertions. We describe a novel scheme which is provably secure if factoring is difficult and whose practical implementations are about two orders of magnitude faster than RSA-based identification schemes. In the last part of the paper we consider the question of sequential versus parallel executions of zero knowledge protocols, define a new notion of \u201ctransferable information\u201d, and prove that the parallel version of our identification scheme (which is not known to be zero knowledge) is secure since it reveals no transferable information."}
{"_id": "4e0636a1b92503469b44e2807f0bb35cc0d97652", "title": "Adversarial Localization Network", "text": "We propose the Adversarial Localization Network, a novel weakly supervised approach to generate object masks in an image. We train a corruption net to mask out regions of the image to reduce prediction confidence of the classifier. To avoid generating adversarial artifacts due to vulnerability of deep networks to adversarial perturbation, we also co-train the classifier to correctly predict the corrupted images. Our approach could efficiently produce reasonable object masks with a simple architecture and few hyper-parameters. We achieve competitive results on the ILSVRC2012 dataset for object localization with only image level labels and no bounding boxes for training."}
{"_id": "cf887165c44ab9a5bf5dd3f399c9548b3b0bb96b", "title": "A Multi-Modal Approach to Infer Image Affect", "text": "\u008ce group a\u0082ect or emotion in an image of people can be inferred by extracting features about both the people in the picture and the overall makeup of the scene. \u008ce state-of-the-art on this problem investigates a combination of facial features, scene extraction and even audio tonality. \u008cis paper combines three additional modalities, namely, human pose, text-based tagging and CNN extracted features / predictions. To the best of our knowledge, this is the \u0080rst time all of the modalities were extracted using deep neural networks. We evaluate the performance of our approach against baselines and identify insights throughout this paper."}
{"_id": "b5569014fa6099f841213f8965f025ab39826a02", "title": "Privacy by BlockChain Design: A BlockChain-enabled GDPR-compliant Approach for Handling Personal Data", "text": "This paper takes an initial step forward in bringing to life the certification mechanisms according to Art. 42 of the General Data Protection Regulation (GDPR). These newly established methods of legal specification act not only as a central vehicle for overcoming widely articulated and discussed legal challenges, but also as a sandbox for the much needed close collaboration between computer sciences and legal studies. In order to illustrate, for example, what data protection seals could look like in the future, the authors propose a methodology for \"translating\" legal requirements into technical guidelines: architectural blueprints designed using legal requirements. The purpose of these blueprints is to show developers how their solutions might comply with the principle of Privacy by Design (Art. 25 GDPR). To demonstrate this methodology, the authors propose an architectural blueprint that embodies the legal concept of the data subject\u2019s consent (Art. 6 sec. 1 lit. a GDPR) and elevates best practice to a high standard of Privacy by Design. Finally, the authors highlight further legal problems concerning blockchain technology under the GDPR that will have to be addressed in order to achieve a comprehensive certification mechanism for Privacy by Blockchain Design in the future. ACM Classification"}
{"_id": "8c63d23cc29dc6221ed6bd0704fccc03baf20ebc", "title": "Generating Sentence Planning Variations for Story Telling", "text": "There has been a recent explosion in applications for dialogue interaction ranging from direction-giving and tourist information to interactive story systems. Yet the natural language generation (NLG) component for many of these systems remains largely handcrafted. This limitation greatly restricts the range of applications; it also means that it is impossible to take advantage of recent work in expressive and statistical language generation that can dynamically and automatically produce a large number of variations of given content. We propose that a solution to this problem lies in new methods for developing language generation resources. We describe the ES-TRANSLATOR, a computational language generator that has previously been applied only to fables, and quantitatively evaluate the domain independence of the EST by applying it to personal narratives from weblogs. We then take advantage of recent work on language generation to create a parameterized sentence planner for story generation that provides aggregation operations, variations in discourse and in point of view. Finally, we present a user evaluation of different personal narrative retellings."}
{"_id": "40ff6d970a68098a2b99cda4e2067282620be1ee", "title": "Persecutory delusions and catastrophic worry in psychosis: developing the understanding of delusion distress and persistence.", "text": "In a recent theoretical account of persecutory delusions, it is suggested that anxiety and worry are important factors in paranoid experience [Freeman, D., Garety, P. A., Kuipers, E., Fowler, D., & Bebbington, P. E. (2002). A cognitive model of persecutory delusions. British Journal of Clinical Psychology, 41(4), 331-347]. In emotional disorders worry has been understood in terms of catastrophising. In the current study, the concept of catastrophising is applied for the first time with persecutory delusions. Thirty individuals with current persecutory delusions and 30 non-clinical controls participated in a cross-sectional study. The group with persecutory delusions was also followed up at 3 months to assess predictors of delusion persistence. At its most severe, 21% of individuals with persecutory delusions had clinical worry, 68% had levels of worry comparable with treatment seeking GAD patients. Further, high levels of anxiety, worry and catastrophising were associated with high levels of persecutory delusion distress and with the persistence of delusions over 3 months. If future research replicates these findings, worry reduction interventions for individuals with persecutory delusions may be warranted."}
{"_id": "97d19d14679452e0da301933789f4e1d6cf855d9", "title": "Multigranulation decision-theoretic rough sets", "text": "Article history: Available online 23 March 2013"}
{"_id": "2b33a3d37a973ec5a28613fb0f83566ee9cde259", "title": "Perfectly Matched Layers for Elastodynamics : A New Absorbing Boundary", "text": "The use of perfectly matched layers (PML) has recently been introduced by Berenger as a material absorbing boundary condition (ABC) for electromagnetic waves. In this paper, we will rst prove that a ctitious elastodynamic material half-space exists that will absorb an incident wave for all angles and all frequencies. Moreover, the wave is attenuative in the second half-space. As a consequence, layers of such material could be designed at the edge of a computer simulation region to absorb outgoing waves. Since this is a material ABC, only one set of computer codes is needed to simulate an open region. Hence, it is easy to parallelize such codes on multiprocessor computers. For instance, it is easy to program massively parallel computers on the SIMD (single instruction multiple data) mode for such codes. We will show two and three dimensional computer simulations of the PML for the linearized equations of elastodyanmics. Comparison with Liao's ABC will be given."}
{"_id": "78bc141c3c3e3498856eee6641f321e21dee47ec", "title": "Mitigating IoT security threats with a trusted Network element", "text": "Securing the growing amount of IoT devices is a challenge for both the end-users bringing IoT devices into their homes, as well as the corporates and industries exposing these devices into the Internet as part of their service or operations. The exposure of these devices, often poorly configured and secured, offers malicious actors an easy access to the private information of their users, or potential to utilize the devices in further activities, e.g., attacks on other devices via Distributed Denial of Service. This paper discusses the current security challenges of IoT devices and proposes a solution to secure these devices via a trusted Network Edge Device. NED offloads the security countermeasures of the individual devices into the trusted network elements. The major benefit of this approach is that the system can protect the IoT devices with user-defined policies, which can be applied to all devices regardless of the constraints of computing resources in the IoT tags. Additional benefit is the possibility to manage the countermeasures of multiple IoT devices/gateways at once, via a shared interface, thus largely avoiding the per-device maintenance operations."}
{"_id": "607fb677be1c9799c3783750eb5f19d8d9bf9ffe", "title": "The All Relevant Feature Selection using Random Forest", "text": "In this paper we examine the application of the random forest classifier for the all relevant feature selection problem. To this end we first examine two recently proposed all relevant feature selection algorithms, both being a random forest wrappers, on a series of synthetic data sets with varying size. We show that reasonable accuracy of predictions can be achieved and that heuristic algorithms that were designed to handle the all relevant problem, have performance that is close to that of the reference ideal algorithm. Then, we apply one of the algorithms to four families of semi-synthetic data sets to assess how the properties of particular data set influence results of feature selection. Finally we test the procedure using a well-known gene expression data set. The relevance of nearly all previously established important genes was confirmed, moreover the relevance of several new ones is discovered."}
{"_id": "3c3a63ab5c8e6924082ccdde8132e8b42d910bed", "title": "Using Indirect Encoding of Multiple Brains to Produce Multimodal Behavior", "text": "An important challenge in neuroevolution is to evolve complex neural networks with multiple modes of behavior. Indirect encodings can potentially answer this challenge. Yet in practice, indirect encodings do not yield effective multimodal controllers. Thus, this paper introduces novel multimodal extensions to HyperNEAT, a popular indirect encoding. A previous multimodal HyperNEAT approach called situational policy geometry assumes that multiple brains benefit from being embedded within an explicit geometric space. However, experiments here illustrate that this assumption unnecessarily constrains evolution, resulting in lower performance. Specifically, this paper introduces HyperNEAT extensions for evolving many brains without assuming geometric relationships between them. The resulting MultiBrain HyperNEAT can exploit human-specified task divisions to decide when each brain controls the agent, or can automatically discover when brains should be used, by means of preference neurons. A further extension called module mutation allows evolution to discover the number of brains, enabling multimodal behavior with even less expert knowledge. Experiments in several multimodal domains highlight that multi-brain approaches are more effective than HyperNEAT without multimodal extensions, and show that brains without a geometric relation to each other outperform situational policy geometry. The conclusion is that Multi-Brain HyperNEAT provides several promising techniques for evolving complex multimodal behavior."}
{"_id": "43dfdf71c82d7a61367e94ea927ef1c33d4ac17a", "title": "Securing distributed storage: challenges, techniques, and systems", "text": "The rapid increase of sensitive data and the growing number of government regulations that require longterm data retention and protection have forced enterprises to pay serious attention to storage security. In this paper, we discuss important security issues related to storage and present a comprehensive survey of the security services provided by the existing storage systems. We cover a broad range of the storage security literature, present a critical review of the existing solutions, compare them, and highlight potential research issues."}
{"_id": "492eefbcae67edbfada670645033610cef04e487", "title": "Analysis of Ethernet AVB for automotive networks using Network Calculus", "text": "Future functions of vehicles to make driving safer, more comfortable and more exciting rely on suitable in-vehicle networks. Legacy Ethernet extended by AVB to meet recent and future requirements is discussed as a proper candidate. Investigation of its forwarding, synchronization and resource reservation mechanisms are an important prerequisite for the object: The progressive vehicle integration unleashing its whole potential as an in-vehicle network. This paper focuses on AVB's forwarding policy by using Network Calculus as a theory for calculating worst case delays."}
{"_id": "831d2fa6af688ef2d6b754bb315ef6cb20085763", "title": "Application of the Laminar Navier-Stokes Equations for Solving 2D and 3D Pathfinding Problems with Static and Dynamic Spatial Constraints: Implementation and Validation in Comsol Multiphysics", "text": "Pathfinding problems consist in determining the optimal shortest path, or at least one path, between two points in the space. In this paper, we propose a particular approach, based on methods used in Computational Fluid Dynamics, that intends to solve such problems. In particular, we reformulate pathfinding problems as the motion of a viscous fluid via the use of the laminar Navier-Stokes equations completed with suitable boundary conditions corresponding to some characteristics of the considered problem: position of the initial and final points, a-priori information of the terrain, One-way routes and dynamic spatial configuration. The advantages of this technique, regarding existing ones (e.g., A* algorithm) is that it does not require a pre-processing method (e.g., graph conversion) of the environment and can manage complex geometries. Then, we propose a particular numerical implementation of this methodology by using Comsol Multiphysics (i.e., a modeling software based on Finite Element Methods). Finally, we validate our approach by considering several 2D and 3D benchmark cases. Results are compared with the ones returned by a simple A* algorithm. From those numerical tests, we deduce that our algorithms generate suitable routes (but not the shortest ones) for the studied problems in a computational time similar to the considered A*."}
{"_id": "5229c1735d80a7bf20834a244c5b618d440c2eb4", "title": "One-shot learning for fine-grained relation extraction via convolutional siamese neural network", "text": "Extracting fine-grained relations between entities of interest is of great importance to information extraction and large-scale knowledge graph construction. Conventional approaches on relation extraction require an existing knowledge graph to start with or sufficient observed samples from each relation type in the training process. However, such resources are not always available, and fine-grained manual labeling is extremely time-consuming and requires extensive expertise for specific domains such as healthcare and bioinformatics. Additionally, the distribution of fine-grained relations is often highly imbalanced in practice. We tackle this label scarcity and distribution imbalance issue from a one-shot classification perspective via a convolutional siamese neural network which extracts discriminative semantic-aware features to verify the relations between a pair of input samples. The proposed siamese network effectively extracts uncommon relations with only limited observed samples on the tasks of 1-shot and few-shot classification, demonstrating significant benefits to domain-specific information extraction in practical applications."}
{"_id": "692265f55ec6c67453b86dd45f55b4a4afd812c2", "title": "Fault classification and fault signature production for rolling element bearings in electric machines", "text": "Most condition monitoring techniques for rolling element bearings are designed to detect the four characteristic fault frequencies. This has lead to the common practice of categorizing bearing faults according to fault location (i.e., inner race, outer race, ball, or cage fault). While the ability to detect the four characteristic fault frequencies is necessary, this approach neglects another important class of faults that arise in many industrial settings. This research introduces the notion of categorizing bearing faults as either single-point defects or generalized roughness. These classes separate bearing faults according to the fault signatures that are produced rather than by the physical location of the fault. Specifically, single-point defects produce the four predictable characteristic fault frequencies while faults categorized as generalized roughness produce unpredictable broadband changes in the machine vibration and stator current. Experimental results are provided from bearings failed in situ via a shaft current. These results illustrate the unpredictable and broadband nature of the effects produced by generalized roughness bearing faults. This issue is significant because a successful bearing condition monitoring scheme must be able to reliably detect both classes of faults."}
{"_id": "8d6b6d56226098bce86afb0a68567f60463bd4e2", "title": "DCA for bot detection", "text": "Ensuring the security of computers is a non-trivial task, with many techniques used by malicious users to compromise these systems. In recent years a new threat has emerged in the form of networks of hijacked zombie machines used to perform complex distributed attacks such as denial of service and to obtain sensitive data such as password information. These zombie machines are said to be infected with a dasiahotpsila - a malicious piece of software which is installed on a host machine and is controlled by a remote attacker, termed the dasiabotmaster of a botnetpsila. In this work, we use the biologically inspired dendritic cell algorithm (DCA) to detect the existence of a single hot on a compromised host machine. The DCA is an immune-inspired algorithm based on an abstract model of the behaviour of the dendritic cells of the human body. The basis of anomaly detection performed by the DCA is facilitated using the correlation of behavioural attributes such as keylogging and packet flooding behaviour. The results of the application of the DCA to the detection of a single hot show that the algorithm is a successful technique for the detection of such malicious software without responding to normally running programs."}
{"_id": "cc06ad0ed7b91c93b64d10ee4ed700c33f0d9f13", "title": "Neural systems behind word and concept retrieval", "text": "Using both the lesion method and functional imaging (positron emission tomography) in large cohorts of subjects investigated with the same experimental tasks, we tested the following hypotheses: (A) that the retrieval of words which denote concrete entities belonging to distinct conceptual categories depends upon partially segregated regions in higher-order cortices of the left temporal lobe; and (B) that the retrieval of conceptual knowledge pertaining to the same concrete entities also depends on partially segregated regions; however, those regions will be different from those postulated in hypothesis A, and located predominantly in the right hemisphere (the second hypothesis tested only with the lesion method). The analyses provide support for hypothesis A in that several regions outside the classical Broca and Wernicke language areas are involved in name retrieval of concrete entities, and that there is a partial segregation in the temporal lobe with respect to the conceptual category to which the entities belong, and partial support for hypothesis B in that retrieval of conceptual knowledge is partially segregated from name retrieval in the lesion study. Those regions identified here are seen as parts of flexible, multi-component systems serving concept and word retrieval for concrete entities belonging to different conceptual categories. By comparing different approaches the article also addresses a number of method issues that have surfaced in recent studies in this field."}
{"_id": "709cc149b7e17d03b8b92c181a98d6e44ad782e6", "title": "CortexNet: a Generic Network Family for Robust Visual Temporal Representations", "text": "In the past five years we have observed the rise of incredibly well performing feed-forward neural networks trained supervisedly for vision related tasks. These models have achieved super-human performance on object recognition, localisation, and detection in still images. However, there is a need to identify the best strategy to employ these networks with temporal visual inputs and obtain a robust and stable representation of video data. Inspired by the human visual system, we propose a deep neural network family, CortexNet, which features not only bottom-up feedforward connections, but also it models the abundant top-down feedback and lateral connections, which are present in our visual cortex. We introduce two training schemes \u2014 the unsupervised MatchNet and weakly supervised TempoNet modes \u2014 where a network learns how to correctly anticipate a subsequent frame in a video clip or the identity of its predominant subject, by learning egomotion clues and how to automatically track several objects in the current scene. Find the project website at tinyurl.com/CortexNet."}
{"_id": "4750b72b4735354429f544673fd50bd415cddf1b", "title": "Can we trust digital image forensics?", "text": "Compared to the prominent role digital images play in nowadays multimedia society, research in the field of image authenticity is still in its infancy. Only recently, research on digital image forensics has gained attention by addressing tamper detection and image source identification. However, most publications in this emerging field still lack rigorous discussions of robustness against strategic counterfeiters, who anticipate the existence of forensic techniques. As a result, the question of trustworthiness of digital image forensics arises. This work will take a closer look at two state-of-the-art forensic methods and proposes two counter-techniques; one to perform resampling operations undetectably and another one to forge traces of image origin. Implications for future image forensic systems will be discussed."}
{"_id": "097e157201da1c6abd06eb92c80de5e6762994d8", "title": "On Advances in Statistical Modeling of Natural Images", "text": "Statistical analysis of images reveals two interesting properties: (i) invariance of image statistics to scaling of images, and (ii) non-Gaussian behavior of image statistics, i.e. high kurtosis, heavy tails, and sharp central cusps. In this paper we review some recent results in statistical modeling of natural images that attempt to explain these patterns. Two categories of results are considered: (i) studies of probability models of images or image decompositions (such as Fourier or wavelet decompositions), and (ii) discoveries of underlying image manifolds while restricting to natural images. Applications of these models in areas such as texture analysis, image classification, compression, and denoising are also considered."}
{"_id": "14853b0ade80dad087ee8fc07748ea987b5ff953", "title": "Tamper Hiding: Defeating Image Forensics", "text": "This paper introduces novel hiding techniques to counter the detection of image manipulations through forensic analyses. The presented techniques allow to resize and rotate (parts of) bitmap images without leaving a periodic pattern in the local linear predictor coefficients, which has been exploited by prior art to detect traces of manipulation. A quantitative evaluation on a batch of test images proves the proposed method\u2019s efficacy, while controlling for key parameters and for the retained image quality compared to conventional linear interpolation."}
{"_id": "5f4e4ab4c87c82e5fdd3abe271303d2ddfcafea1", "title": "Statistical Tools for Digital Forensics", "text": "A digitally altered photograph, often leaving no visual clues of having been tampered with, can be indistinguishable from an authentic photograph. As a result, photographs no longer hold the unique stature as a definitive recording of events. We describe several statistical techniques for detecting traces of digital tampering in the absence of any digital watermark or signature. In particular, we quantify statistical correlations that result from specific forms of digital tampering, and devise detection schemes to reveal these correlations."}
{"_id": "1466678330c03592b1dbe344cf1277e867822bf6", "title": "Validation of Document Clustering based on Purity and Entropy measures", "text": "Document clustering aims to automatically group related document into clusters. If two documents are close to each other in the original document space, they are grouped into the same cluster. If the two documents are far away from each other in the original document space, they tend to be grouped into different cluster. The classical clustering algorithms assign each data to exactly one cluster, but fuzzy c-means allow data belong to different clusters. Fuzzy clustering is a powerful unsupervised method for the analysis of data. Cluster validity measure is useful in estimating the optimal number of clusters. Purity and Entropy are the validity measures used in this"}
{"_id": "af9c2284f387cc09109e628effc133eca225b6ec", "title": "Revised and abbreviated forms of the Genderism and Transphobia Scale: tools for assessing anti-trans prejudice.", "text": "Many studies of anti-trans prejudice have measured such attitudes using the Genderism and Transphobia Scale (GTS; Hill & Willoughby, 2005). The GTS is unique in assessing negative attitudes and propensity for violence toward trans people. The present research addressed previously observed limitations in the psychometric properties of data produced by the GTS, including inconsistencies in factor structure and subscale scoring across studies. Results across the present 2 studies (Ns = 314 and 250) yielded 2 refined versions of the GTS: the 22-item GTS-Revised (GTS-R) and a more abbreviated 13-item GTS-R-Short Form (GTS-R-SF), each of which produced stable 2-factor structures corresponding with the intended negative attitudes and propensity for violence dimensions of the GTS. The 2 versions differ in that the GTS-R-SF Genderism/Transphobia subscale focuses on more severe expressions of prejudicial attitudes, whereas the longer GTS-R Genderism/Transphobia subscale assesses subtle expressions of prejudice as well. The Gender-Bashing subscale is identical across the 2 versions. Thus, researchers and practitioners may choose the GTS-R or GTS-R-SF depending on the breadth of prejudicial attitudes they wish to assess. Reliability estimates for GTS-R and GTS-R-SF scale and subscale item responses were acceptable and stable across the 2 studies, and validity evidence was garnered in Study 2. These findings can inform use of the GTS-R and GTS-R-SF in research and practice settings, where psychometric precision and efficiency are both critical."}
{"_id": "80f64e6b90ecda36e5afe2214e24e767c3181d25", "title": "Development and Implementation of a Chat Bot in a Social Network", "text": "This document describes how to implement a Chat Bot on the Twitter social network for entertainment and viral advertising using a database and a simple algorithm. Having as a main theme a successfully implementation of a Chat Bot preventing people classify it as SPAM, as a result of this a Twitter account (@DonPlaticador) that works without the intervention of a person and every day earns more followers was obtained."}
{"_id": "5a2ace6627ca0c1fdc693dbc46ce385c77690789", "title": "A comprehensive security architecture for SDN", "text": "SDN enables the administrators to configure network resources very quickly and to adjust network-wide traffic flow to meet changing needs dynamically. However, there are some challenges for implementing a full-scale carrier SDN. One of the most important challenges is SDN security, which is beginning to receive attention. With new SDN architecture, some security threats are common to traditional networking, but the profile of these threats (including their likelihood and impact and hence their overall risk level) changes. Moreover, there are some new security challenges such as bypassing predefined mandatory policies by overwriting flow entries and data eavesdropping by inserting fraudulent flow entries. This paper is to design open-flow specific security solutions and propose a comprehensive security architecture to provide security services such as enforcing mandatory network policy correctly and receiving network policy securely for SDN in order to solve these common security issues and new security challenges. It can also help the developers to implement security functions to provide security services when developing the SDN controller."}
{"_id": "aab62a55f4696ef02e2a769c3500334db799c473", "title": "Adversarial synthesis learning enables segmentation without target modality ground truth", "text": "A lack of generalizability is one key limitation of deep learning based segmentation. Typically, one manually labels new training images when segmenting organs in different imaging modalities or segmenting abnormal organs from distinct disease cohorts. The manual efforts can be alleviated if one is able to reuse manual labels from one modality (e.g., MRI) to train a segmentation network for a new modality (e.g., CT). Previously, two stage methods have been proposed to use cycle generative adversarial networks (CycleGAN) to synthesize training images for a target modality. Then, these efforts trained a segmentation network independently using synthetic images. However, these two independent stages did not use the complementary information between synthesis and segmentation. Herein, we proposed a novel end-to-end synthesis and segmentation network (EssNet) to achieve the unpaired MRI to CT image synthesis and CT splenomegaly segmentation simultaneously without using manual labels on CT. The end-to-end EssNet achieved significantly higher median Dice similarity coefficient (0.9188) than the two stages strategy (0.8801), and even higher than canonical multi-atlas segmentation (0.9125) and ResNet method (0.9107), which used the CT manual labels."}
{"_id": "6514cf67be4b439534960cf389caba6abe0705d5", "title": "SVM based forest fire detection using static and dynamic features", "text": "A novel approach is proposed in this paper for automatic forest fire detection from video. Based on 3D point cloud of the collected sample fire pixels, Gaussian mixture model is built and helps segment some possible flame regions in single image. Then the new specific flame pattern is defined for forest, and three types of fire colors are labeled accordingly. With 11 static features including color distributions, texture parameters and shape roundness, the static SVM classifier is trained and filters the segmented results. Using defined overlapping degree and varying degree, the remained candidate regions are matched among consecutive frames. Subsequently the variations of color, texture, roundness, area, contour are computed, then the average and the mean square deviation of them are obtained. Together with the flickering frequency from temporal wavelet based Fourier descriptors analysis of flame contour, 27 dynamic features are used to train the dynamic SVM classifier, which is applied for final decision. Our approach has been tested with dozens of video clips, and it can detect forest fire while recognize the fire like objects, such as red house, bright light and flying flag. Except for the acceptable accuracy, our detection algorithm performs in real time, which proves its value for computer vision based forest fire surveillance."}
{"_id": "f28cb37e0f1a225f0d4f27f43ef4e05eee8b321c", "title": "SEAME: a Mandarin-English code-switching speech corpus in south-east asia", "text": "In Singapore and Malaysia, people often speak a mixture of Mandarin and English within a single sentence. We call such sentences intra-sentential code-switch sentences. In this paper, we report on the development of a Mandarin-English codeswitching spontaneous speech corpus: SEAME. The corpus is developed as part of a multilingual speech recognition project and will be used to examine how Mandarin-English codeswitch speech occurs in the spoken language in South-East Asia. Additionally, it can provide insights into the development of large vocabulary continuous speech recognition (LVCSR) for code-switching speech. The corpus collected consists of intra-sentential code-switching utterances that are recorded under both interview and conversational settings. This paper describes the corpus design and the analysis of collected corpus."}
{"_id": "604b2f8d5d6955da5094066348443e3c9d208284", "title": "FeatureAnalytics: An approach to derive relevant attributes for analyzing Android Malware", "text": "Ever increasing number of Android malware, has always been a concern for cybersecurity professionals. Even though plenty of anti-malware solutions exist, a rational and pragmatic approach for the same is rare and has to be inspected further. In this paper, we propose a novel two-set feature selection approach based on Rough Set and Statistical Test named as RSST to extract relevant system calls. To address the problem of higher dimensional attribute set, we derived suboptimal system call space by applying the proposed feature selection method to maximize the separability between malware and benign samples. Comprehensive experiments conducted on a dataset consisting of 3500 samples with 30 RSST derived essential system calls resulted in an accuracy of 99.9%, Area Under Curve (AUC) of 1.0, with 1% False Positive Rate (FPR). However, other feature selectors (Information Gain, CFsSubsetEval, ChiSquare, FreqSel and Symmetric Uncertainty) used in the domain of malware analysis resulted in the accuracy of 95.5% with 8.5% FPR. Besides, empirical analysis of RSST derived system calls outperform other attributes such as permissions, opcodes, API, methods, call graphs, Droidbox attributes and network traces."}
{"_id": "fd28fdd56d10f5f387b030de65b7c17c2820ace2", "title": "Comprehensive study of data analytics tools (RapidMiner, Weka, R tool, Knime)", "text": "In today's era, data has been increasing in volume, velocity and variety. Due to large and complex collection of datasets, it is difficult to process on traditional data processing application. So, this leads to emerging of new technology called data analytics. Data analytics is a science of exploring raw data and elicitation the useful information and hidden pattern. The main aim of data analytics is to use advance analytics techniques for huge and different datasets. Datasets may vary in sizes from terabytes to zettabytes and can be structured or unstructured. The paper gives the comprehensive and theoretical analysis of four open source data analytics tools which are RapidMiner, Weka, R Tool and KNIME. The study describes the technical specification, features, and specialization for each selected tool. By employing the study the choice and selection of tools can be made easy. The tools are evaluated on basis of various parameters like volume of data used, response time, ease of use, price tag, analysis algorithm and handling."}
{"_id": "ca58d8872fc0f7b59d827a79bc70496e41528d77", "title": "Pattern classification with missing data: a review", "text": "Pattern classification has been successfully applied in many problem domains, such as biometric recognition, document classification or medical diagnosis. Missing or unknown data are a common drawback that pattern recognition techniques need to deal with when solving real-life classification tasks. Machine learning approaches and methods imported from statistical learning theory have been most intensively studied and used in this subject. The aim of this work is to analyze the missing data problem in pattern classification tasks, and to summarize and compare some of the well-known methods used for handling missing values."}
{"_id": "996c790e05d113550cc1f6bb1143b92c6d07233f", "title": "A scalable purchase intention prediction system using extreme gradient boosting machines with browsing content entropy", "text": "Nowadays, a prosperity of electronic commerce (E-commerce) not only gives more convenience to consumers but brings more new opportunities in online advertising and marketing. Online advertisers can understand more about consumer preferences based on their daily online shopping and browsing behaviors. The development of big data and cloud computing techniques further empower advertisers and marketers to have a data-driven and consumer-specific preference recommendation based on the online browsing histories. In this research, a decision support system is proposed to predict a consumer purchase intention during browsing sessions. The proposed decision support system categorizes online browsing activities into purchase-oriented and general sessions using extreme boosting machines. With the browsing content entropy features, the proposed method achieves 41.81% recall and 34.35% F score. It further shows its strong predictive capability compared to other benchmark algorithms including logistic regression and traditional ensemble models. The proposed method can be implemented in real-time bidding algorithms for online advertising strategies. Ad deliveries on browsing session with potential purchase intention not only improve the effectiveness of advertisements but significantly increase last-touch attributions for campaign performance."}
{"_id": "4fdc539aafc82dd12574f4e0b1a763129b750a51", "title": "Cluster Ensembles for High Dimensional Clustering : An Empirical Study", "text": "This paper studies cluster ensembles for high dimensional data clustering. We examine three different approaches to constructing cluster ensembles. To address high dimensionality, we focus on ensemble construction methods that build on two popular dimension reduction techniques, random projection and principal component analysis (PCA). We present evidence showing that ensembles generated by random projection perform better than those by PCA and further that this can be attributed to the capability of random projection to produce diverse base clusterings. We also examine four different consensus functions for combining the clusterings of the ensemble. We compare their performance using two types of ensembles, each with different properties. In both cases, we show that a recent consensus function based on bipartite graph partitioning achieves the best performance."}
{"_id": "b594a248218121789e5073a90c31b261610478e0", "title": "Linear SLAM: A linear solution to the feature-based and pose graph SLAM based on submap joining", "text": "This paper presents a strategy for large-scale SLAM through solving a sequence of linear least squares problems. The algorithm is based on submap joining where submaps are built using any existing SLAM technique. It is demonstrated that if submaps coordinate frames are judiciously selected, the least squares objective function for joining two submaps becomes a quadratic function of the state vector. Therefore, a linear solution to large-scale SLAM that requires joining a number of local submaps either sequentially or in a more efficient Divide and Conquer manner, can be obtained. The proposed Linear SLAM technique is applicable to both feature-based and pose graph SLAM, in two and three dimensions, and does not require any assumption on the character of the covariance matrices or an initial guess of the state vector. Although this algorithm is an approximation to the optimal full nonlinear least squares SLAM, simulations and experiments using publicly available datasets in 2D and 3D show that Linear SLAM produces results that are very close to the best solutions that can be obtained using full nonlinear optimization started from an accurate initial value. The C/C++ and MATLAB source codes for the proposed algorithm are available on OpenSLAM."}
{"_id": "3b37d7e44686ad4ba02ee4050cd701ad8496055e", "title": "Open Database of Epileptic EEG with MRI and Postoperational Assessment of Foci\u2014a Real World Verification for the EEG Inverse Solutions", "text": "This paper introduces a freely accessible database http://eeg.pl/epi , containing 23 datasets from patients diagnosed with and operated on for drug-resistant epilepsy. This was collected as part of the clinical routine at the Warsaw Memorial Child Hospital. Each record contains (1) pre-surgical electroencephalography (EEG) recording (10\u201320 system) with inter-ictal discharges marked separately by an expert, (2) a full set of magnetic resonance imaging (MRI) scans for calculations of the realistic forward models, (3) structural placement of the epileptogenic zone, recognized by electrocorticography (ECoG) and post-surgical results, plotted on pre-surgical MRI scans in transverse, sagittal and coronal projections, (4) brief clinical description of each case. The main goal of this project is evaluation of possible improvements of localization of epileptic foci from the surface EEG recordings. These datasets offer a unique possibility for evaluating different EEG inverse solutions. We present preliminary results from a subset of these cases, including comparison of different schemes for the EEG inverse solution and preprocessing. We report also a finding which relates to the selective parametrization of single waveforms by multivariate matching pursuit, which is used in the preprocessing for the inverse solutions. It seems to offer a possibility of tracing the spatial evolution of seizures in time."}
{"_id": "1c0f7d631e708619d5d5f161a886fce5df91a7d8", "title": "ResearchGate versus Google Scholar: Which finds more early citations?", "text": "ResearchGate has launched its own citation index by extracting citations from documents uploaded to the site and reporting citation counts on article profile pages. Since authors may upload preprints to ResearchGate, it may use these to provide early impact evidence for new papers. This article assesses the whether the number of citations found for recent articles is comparable to other citation indexes using 2675 recently-published library and information science articles. The results show that in March 2017, ResearchGate found less citations than did Google Scholar but more than both Web of Science and Scopus. This held true for the dataset overall and for the six largest journals in it. ResearchGate correlated most strongly with Google Scholar citations, suggesting that ResearchGate is not predominantly tapping a fundamentally different source of data than Google Scholar. Nevertheless, preprint sharing in ResearchGate is substantial enough for authors to take seriously."}
{"_id": "50a6aca5abdb613da11cd29b5ee1b3a03f179d0d", "title": "A static hand gesture recognition algorithm using k-mean based radial basis function neural network", "text": "The accurate classification of static hand gestures is a vital role to develop a hand gesture recognition system which is used for human-computer interaction (HCI) and for human alternative and augmentative communication (HAAC) application. A vision-based static hand gesture recognition algorithm consists of three stages: preprocessing, feature extraction and classification. The preprocessing stage involves following three sub-stages: segmentation which segments hand region from its background images using a histogram based thresholding algorithm and transforms into binary silhouette; rotation that rotates segmented gesture to make the algorithm, rotation invariant; filtering that effectively removes background noise and object noise from binary image by morphological filtering technique. To obtain a rotation invariant gesture image, a novel technique is proposed in this paper by coinciding the 1st principal component of the segmented hand gestures with vertical axes. A localized contour sequence (LCS) based feature is used here to classify the hand gestures. A k-mean based radial basis function neural network (RBFNN) is also proposed here for classification of hand gestures from LCS based feature set. The experiment is conducted on 500 train images and 500 test images of 25 class grayscale static hand gesture image dataset of Danish/international sign language hand alphabet. The proposed method performs with 99.6% classification accuracy which is better than earlier reported technique."}
{"_id": "cd9551270d8fa0ed7752cace5619dc4af5d60ea4", "title": "Generative Event Schema Induction with Entity Disambiguation", "text": "This paper presents a generative model to event schema induction. Previous methods in the literature only use head words to represent entities. However, elements other than head words contain useful information. For instance, an armed man is more discriminative than man. Our model takes into account this information and precisely represents it using probabilistic topic distributions. We illustrate that such information plays an important role in parameter estimation. Mostly, it makes topic distributions more coherent and more discriminative. Experimental results on benchmark dataset empirically confirm this enhancement."}
{"_id": "221b59a2a6bc19302bac89abaf42c531f4dc4cf8", "title": "A Unified Probabilistic Framework for Name Disambiguation in Digital Library", "text": "Despite years of research, the name ambiguity problem remains largely unresolved. Outstanding issues include how to capture all information for name disambiguation in a unified approach, and how to determine the number of people K in the disambiguation process. In this paper, we formalize the problem in a unified probabilistic framework, which incorporates both attributes and relationships. Specifically, we define a disambiguation objective function for the problem and propose a two-step parameter estimation algorithm. We also investigate a dynamic approach for estimating the number of people K. Experiments show that our proposed framework significantly outperforms four baseline methods of using clustering algorithms and two other previous methods. Experiments also indicate that the number K automatically found by our method is close to the actual number."}
{"_id": "2d2e19cc2da24df9890bb91949269c996567e0e4", "title": "Broad expertise retrieval in sparse data environments", "text": "Expertise retrieval has been largely unexplored on data other than the W3C collection. At the same time, many intranets of universities and other knowledge-intensive organisations offer examples of relatively small but clean multilingual expertise data, covering broad ranges of expertise areas. We first present two main expertise retrieval tasks, along with a set of baseline approaches based on generative language modeling, aimed at finding expertise relations between topics and people. For our experimental evaluation, we introduce (and release) a new test set based on a crawl of a university site. Using this test set, we conduct two series of experiments. The first is aimed at determining the effectiveness of baseline expertise retrieval methods applied to the new test set. The second is aimed at assessing refined models that exploit characteristic features of the new test set, such as the organizational structure of the university, and the hierarchical structure of the topics in the test set. Expertise retrieval models are shown to be robust with respect to environments smaller than the W3C collection, and current techniques appear to be generalizable to other settings."}
{"_id": "69ab8fe2bdc2b1ea63d86c7fd64142e5d3ed88ec", "title": "Relevance-Based Language Models", "text": "We explore the relation between classical probabilistic models of information retrieval and the emerging language modeling approaches. It has long been recognized that the primary obstacle to effective performance of classical models is the need to estimate a relevance model: probabilities of words in the relevant class. We propose a novel technique for estimating these probabilities using the query alone. We demonstrate that our technique can produce highly accurate relevance models, addressing important notions of synonymy and polysemy. Our experiments show relevance models outperforming baseline language modeling systems on TREC retrieval and TDT tracking tasks. The main contribution of this work is an effective formal method for estimating a relevance model with no training data."}
{"_id": "177b89d8d4f2ced92ae6bf3175440f569daab902", "title": "Voting for candidates: adapting data fusion techniques for an expert search task", "text": "In an expert search task, the users' need is to identify people who have relevant expertise to a topic of interest. An expert search system predicts and ranks the expertise of a set of candidate persons with respect to the users' query. In this paper, we propose a novel approach for predicting and ranking candidate expertise with respect to a query. We see the problem of ranking experts as a voting problem, which we model by adapting eleven data fusion techniques.We investigate the effectiveness of the voting approach and the associated data fusion techniques across a range of document weighting models, in the context of the TREC 2005 Enterprise track. The evaluation results show that the voting paradigm is very effective, without using any collection specific heuristics. Moreover, we show that improving the quality of the underlying document representation can significantly improve the retrieval performance of the data fusion techniques on an expert search task. In particular, we demonstrate that applying field-based weighting models improves the ranking of candidates. Finally, we demonstrate that the relative performance of the adapted data fusion techniques for the proposed approach is stable regardless of the used weighting models."}
{"_id": "0ef844606017d650e71066955d04b1902ffa8f3d", "title": "A wirelessly controlled implantable LED system for deep brain optogenetic stimulation", "text": "In recent years optogenetics has rapidly become an essential technique in neuroscience. Its temporal and spatial specificity, combined with efficacy in manipulating neuronal activity, are especially useful in studying the behavior of awake behaving animals. Conventional optogenetics, however, requires the use of lasers and optic fibers, which can place considerable restrictions on behavior. Here we combined a wirelessly controlled interface and small implantable light-emitting diode (LED) that allows flexible and precise placement of light source to illuminate any brain area. We tested this wireless LED system in vivo, in transgenic mice expressing channelrhodopsin-2 in striatonigral neurons expressing D1-like dopamine receptors. In all mice tested, we were able to elicit movements reliably. The frequency of twitches induced by high power stimulation is proportional to the frequency of stimulation. At lower power, contraversive turning was observed. Moreover, the implanted LED remains effective over 50 days after surgery, demonstrating the long-term stability of the light source. Our results show that the wireless LED system can be used to manipulate neural activity chronically in behaving mice without impeding natural movements."}
{"_id": "5a4f62874520a97ec2002abcd6d906326291f07f", "title": "Optimal detection of fetal chromosomal abnormalities by massively parallel DNA sequencing of cell-free fetal DNA from maternal blood.", "text": "BACKGROUND\nMassively parallel DNA sequencing of cell-free fetal DNA from maternal blood can detect fetal chromosomal abnormalities. Although existing algorithms focus on the detection of fetal trisomy 21 (T21), these same algorithms have difficulty detecting trisomy 18 (T18).\n\n\nMETHODS\nBlood samples were collected from 1014 patients at 13 US clinic locations before they underwent an invasive prenatal procedure. All samples were processed to plasma, and the DNA extracted from 119 samples underwent massively parallel DNA sequencing. Fifty-three sequenced samples came from women with an abnormal fetal karyotype. To minimize the intra- and interrun sequencing variation, we developed an optimized algorithm by using normalized chromosome values (NCVs) from the sequencing data on a training set of 71 samples with 26 abnormal karyotypes. The classification process was then evaluated on an independent test set of 48 samples with 27 abnormal karyotypes.\n\n\nRESULTS\nMapped sites for chromosomes of interest in the sequencing data from the training set were normalized individually by calculating the ratio of the number of sites on the specified chromosome to the number of sites observed on an optimized normalizing chromosome (or chromosome set). Threshold values for trisomy or sex chromosome classification were then established for all chromosomes of interest, and a classification schema was defined. Sequencing of the independent test set led to 100% correct classification of T21 (13 of 13) and T18 (8 of 8) samples. Other chromosomal abnormalities were also identified.\n\n\nCONCLUSION\nMassively parallel sequencing is capable of detecting multiple fetal chromosomal abnormalities from maternal plasma when an optimized algorithm is used."}
{"_id": "3f9e38e101833621d88ec3d63d7bfc6d0d553ee5", "title": "A paradigmatic and methodological examination of knowledge management research: 2000 to 2004", "text": "Knowledge management (KM) research has been evolving for more than a decade, yet little is known about KM theoretical perspectives, research paradigms, and research methods. This paper explores KM research in influential journals for the period 2000\u20132004. A total of 160 KM articles in ten top-tier information systems and management journals are analyzed. Articles that may serve as useful exemplars of KM research from positivist, interpretivist, and critical pluralist paradigms are selected. We find that KM research in information systems journals differs from that in management journals, but neither makes a balanced use of positivist and non-positivist research approaches. \u00a9 2007 Elsevier B.V. All rights reserved."}
{"_id": "7b11c39cff62359e0bd18f88a5fe9907000ec5e6", "title": "A fast and memory-efficient Discrete Focal Stack Transform for plenoptic sensors", "text": "Article history: Available online 23 December 2014"}
{"_id": "d6faf1be4f776a849719a4ed060c205ad3f78707", "title": "How to Choose the Best Pivot Language for Automatic Translation of Low-Resource Languages", "text": "Recent research on multilingual statistical machine translation focuses on the usage of pivot languages in order to overcome language resource limitations for certain language pairs. Due to the richness of available language resources, English is, in general, the pivot language of choice. However, factors like language relatedness can also effect the choice of the pivot language for a given language pair, especially for Asian languages, where language resources are currently quite limited. In this article, we provide new insights into what factors make a pivot language effective and investigate the impact of these factors on the overall pivot translation performance for translation between 22 Indo-European and Asian languages. Experimental results using state-of-the-art statistical machine translation techniques revealed that the translation quality of 54.8% of the language pairs improved when a non-English pivot language was chosen. Moreover, 81.0% of system performance variations can be explained by a combination of factors such as language family, vocabulary, sentence length, language perplexity, translation model entropy, reordering, monotonicity, and engine performance."}
{"_id": "51ce45f4782b9496d03894173e984f222a3cfa3d", "title": "Sistemas de reconocimiento basados en la imagen facial", "text": "\u2500 This article summarizes the most important research projects developed in thearea of recognitions y stemsusing facial images. It presents the description of the principal lines of study about people identification systems using face recognition. Furthermore, it contains a synthesis of recent mathematical techniques for pattern sextraction in these systems."}
{"_id": "3462b7edb0cfa76048563072f9e9be9dbfd79182", "title": "An integrated approach to characterize genetic interaction networks in yeast metabolism", "text": "Although experimental and theoretical efforts have been applied to globally map genetic interactions, we still do not understand how gene-gene interactions arise from the operation of biomolecular networks. To bridge the gap between empirical and computational studies, we i, quantitatively measured genetic interactions between \u223c185,000 metabolic gene pairs in Saccharomyces cerevisiae, ii, superposed the data on a detailed systems biology model of metabolism and iii, introduced a machine-learning method to reconcile empirical interaction data with model predictions. We systematically investigated the relative impacts of functional modularity and metabolic flux coupling on the distribution of negative and positive genetic interactions. We also provide a mechanistic explanation for the link between the degree of genetic interaction, pleiotropy and gene dispensability. Last, we show the feasibility of automated metabolic model refinement by correcting misannotations in NAD biosynthesis and confirming them by in vivo experiments."}
{"_id": "ddcec795272d832b8b82b69abe90490709ea334b", "title": "Sequence of central nervous system myelination in human infancy. II. Patterns of myelination in autopsied infants.", "text": "The timing and synchronization of postnatal myelination in the human central nervous system (CNS) are complex. We found eight time-related patterns of CNS myelination during the first two postnatal years in autopsied infants. The intensity of myelination was graded in 162 infants with diverse diseases on an ordinal scale of degrees 0-4. The Ayer method for maximum likelihood estimates for censored data was utilized to generate curves of the temporal changes in the percent of infants with degrees 0 through 4 of myelin in 62 white matter sites. These sites fall into eight subgroups determined by the presence or absence of microscopic myelin (degree 1) at birth and the median age at which mature myelin (degree 3) is reached. There is variability in the timing of myelination within and across axonal systems, and early onset of myelination is not always followed by early myelin maturation. We reexamined general rules governing the timing of myelination proposed by previous investigators, and found that those rules are neither complete nor inviolate, and that there is a complex interplay among them. This study specifies distinct periods of maturation in which myelinating pathways are potentially vulnerable to insult during the first two postnatal years."}
{"_id": "e340a9b752f9b358cda4dc7a1c6e3c6280867158", "title": "Machine Invention of First Order Predicates by Inverting Resolution", "text": "\u0099-\u009a \u009bl\u009c\u0098\u009d\u008f\u009ew\u009fx\u009c z\u00a1c\u00a2\u00a4\u00a3x\u00a5\u008a\u00a6x\u00a7 \u0308\u00a1a\u00a9l\u00abN\u00acr\u00a9l\u00a9\u0098\u00abN\u00abw\u00a6x\u00a1a\u00a9l\u00ad \u00a1Q\u00a2\u00a4\u00a3\u008f\u00a1.\u00a1Q\u00a2w\u00a9 \u00ae5\u00a9\u0098 \u0304a\u00a7\u00b0\u00a6x \u0304Q\u00b1_\u00a32\u00ab\u00a43\u0098\u00a9;\u00a62\u00a7a\u00a9 \u0301|\u03bc\u0088\u00a5\u00b6\u00a1a\u03bc\u0088\u00abw\u00b7_ \u0327\u0088\u00a9l\u00a3x \u0304a\u00abw\u03bc\u0088\u00abw\u00b7A\u00a5a1|\u00a5\u00bbo \u00a1Q\u00a9\u0098\u00b1_\u00a5_\u03bc\u0091\u00a5A\u00a5\u00b6\u00a1a \u0304Q\u00a6x\u00abw\u00b7\u008b \u0327[1~\u00acw\u03bc\u0091\u00a3x\u00a5a\u00a9l\u00ad1\u20444\u00ac 1~\u00a1a\u00a2\u00a4\u00a9N1\u20442x\u00a6|3\u0098\u00a3x\u00acw3\u20444w \u0327\u0091\u00a32 \u0304Q1~\u00aew \u0304a\u00a6\u008f1\u20442 \u03bc\u0091\u00ad|\u00a9l\u00ad \u03bc[\u00ab\u00bf\u00a1Q\u00a2w\u00a9N\u00aew \u0304Q\u00a6x\u00acw \u0327\u0088\u00a9\u0098\u00b1\u008c\u00adw\u00a9 o \u00a5Q3 \u0304Q\u03bc\u0088\u00ae|\u00a1a\u03bc\u0088\u00a6x\u00ab( \u0327\u0088\u00a3x\u00abw\u00b7x3\u20444r\u00a32\u00b7x\u00a9\u008b\u00c0 \u00c1L\u00ab \u03bc\u0091\u00ad|\u00a9f\u00a32 \u0327\"\u00a5a1 \u00a5\u00b6\u00a1a\u00a9l\u00b1\u00c2\u00a5a\u00a2w\u00a6x3\u20444w \u0327\u0091\u00adA\u00ac5\u00a9 3\u0098\u00a3x\u00ae\u00a4\u00a32\u00ac\u00a4 \u0327[\u00a9 \u00a62\u00a7 \u00a6\u008f1\u20442x\u00a9\u0098 \u0304\u00c33 \u00a6\u008b\u00b14\u03bc[\u00abw\u00b73\u00a1a\u00a2w\u03bc\u0091\u00a5 \u0304Q\u00a9l\u00a5\u00b6\u00a1a \u0304Q\u03bc\u00913 \u00a1a\u03bc\u0088\u00a6x\u00ab\u0082\u00ac 1\u0080\u00adw\u00a9 \u00c4\u00a4\u00abw\u03bc\u0088\u00abw\u00b7N\u03bc[\u00a1Q\u00a5;\u00a6\u008f\u00c5\u008a\u00ab\u00821\u20442\u008b\u00a6 3l\u00a32\u00acw3\u20444\u00a4 \u0327\u0088\u00a3x \u0304a1\u008b\u00c0-\u00c6 3\u20444\u00a43\u00c3\u00a2W\u00a3N\u00a5a1 \u00a5\u00b6\u00a1a\u00a9l\u00b1\u00c7\u00c5)\u00a6x3\u20444w \u0327\u0091\u00ad\u00c8\u00acr\u00a9_ \u0327\u0088\u00a9l\u00a5Q\u00a5 \u0304Q\u00a9\u0098 \u0327\u0088\u03bc\u0091\u00a32\u00abq\u00a1 \u00a6\u008b\u00ab \u00a1Q\u00a2w\u00a9)\u00a1a\u00a9l\u00a3\u008b3\u00c3\u00a2w\u00a9\u0098 \u0304f\u00c9 \u00a5?\u03bc\u0088\u00abw\u00b7x\u00a9l\u00ab 3\u20444w\u03bcY\u00a1\u00bb1 \u03bc\u0088\u00ab \u00a5a3\u20444w\u00aew\u00ae\u00a4 \u0327[1 \u03bc\u0088\u00abw\u00b7 \u00a3x\u00ab \u00a3x\u00aew\u00aew \u0304Q\u00a6x\u00aew \u0304Q\u03bc\u0088\u00a32\u00a1a\u00a9/\u00aew \u0304a\u00a6\u008b\u00acw \u0327\u0088\u00a9\u0098\u00b1\u00ca \u0304Q\u00a9\u0098\u00ae|o \u0304Q\u00a9l\u00a5a\u00a9\u0098\u00abq\u00a1\u00c3\u00a3\u008f\u00a1a\u03bc\u0088\u00a6x\u00ab?\u00c0 \u00cb\u00a4\u00a6x \u0304 \u00a1a\u00a2\u00a4\u03bc\u0088\u00a5;\u00ae\u00a43\u20444w \u0304a\u00ae5\u00a6\u008b\u00a5a\u00a9 \u00c5/\u00a94\u00aew \u0304Q\u00a9l\u00a5a\u00a9\u0098\u00abq\u00a1 \u00a3(\u00b1 \u00a9f3\u00c3\u00a2\u00a4\u00a32\u00ab\u00a4\u03bc\u0088\u00a5a\u00b1&\u00a7\u00b0\u00a6x \u0304 \u00a3x3\u20444|\u00a1a\u00a6\u008b\u00b1_\u00a3\u008f\u00a1a\u03bc\u00913\u0098\u00a3x \u0327[ \u0327\u00881 \u03bc\u0088\u00ab 1\u20442x\u00a9l\u00ab\u008b\u00a1Q\u03bc[\u00ab\u00a4\u00b73\u00a32\u00ab\u00a4\u00ad4\u00b7\u008b\u00a9\u0098\u00abw\u00a9l \u0304Q\u00a3x \u0327[\u03bc\u0091\u00a5a\u03bc[\u00abw\u00b7 \u00c4r \u0304Q\u00a5\u00b6\u00a1\u00b6oz\u00a6x \u0304\u00c3\u00ad|\u00a9\u0098 \u0304/\u00cc.\u00a6x \u0304Q\u00abA3 
\u0327\u0091\u00a323\u20444r\u00a5\u00b6\u00a9.\u00ae\u00a4 \u0304a\u00a9f\u00ad|\u03bc\u00883l\u00a3\u008f\u00a1Q\u00a9l\u00a5l\u00c0 \u00cd\u00ce\u00a2w\u00a9L\u00b1 \u00a9\u0098\u00a1a\u00a2w\u00a6|\u00ad \u03bc\u0091\u00a5 \u00ac\u00a4\u00a3x\u00a5a\u00a9l\u00adK\u00a6x\u00abI\u03bc\u0088\u00ab 1\u20442x\u00a9l \u0304\u00b6\u00a1Q\u03bc[\u00abw\u00b7-\u00a1a\u00a2w\u00a9 \u00b14\u00a9l3\u00c3\u00a2\u00a4\u00a3x\u00abw\u03bc\u0091\u00a5\u00b6\u00b1+\u00a62\u00a7/ \u0304a\u00a9f\u00a5\u00b6\u00a6\u008b \u0327[3\u20444|\u00a1Q\u03bc[\u00a6\u008b\u00ab\"\u00c0 \u00cd\u00ce\u00a2w\u00a9 \u00a3x\u00aew\u00aew \u0304Q\u00a6\u008b\u00a3\u008b3\u00c3\u00a2N\u00a2\u00a4\u00a3x\u00a5 \u03bc[\u00a1Q\u00a5 \u0304Q\u00a6 \u00a62\u00a1\u00c3\u00a5\u00ce\u03bc[\u00ab(\u00a1a\u00a2\u00a4\u00a9 \u00cfL3\u20444\u00a43\u0098\u00a9 \u00a5\u00b61|\u00a5\u00b6\u00a1a\u00a9l\u00b11\u00a7\u00b0\u00a6x \u0304\u008a\u03bc\u0088\u00ab\u00a4\u00ad|3\u20444r3 \u00a1a\u03bc\u0088\u00a6x\u00ab \u00a6x\u00a7 \u00aew \u0304Q\u00a6x\u00ae5\u00a6\u008b\u00a5a\u03bc[\u00a1a\u03bc\u0088\u00a6x\u00ab\u00a4\u00a3x \u0327%\u00ccL\u00a6x \u0304Q\u00ab(3\u0098 \u0327\u0088\u00a3x3\u20444\u00a4\u00a5a\u00a9l\u00a5l\u00c0 \u00d0\u00c8\u00a9 \u00a2r\u00a3m1\u20442x\u00a9.\u03bc\u0088\u00b14\u00aew \u0327[\u00a9l\u00b14\u00a9\u0098\u00abq\u00a1a\u00a9f\u00ad4\u00a1a\u00a2w\u00a9c\u00abw\u00a9\u0098\u00c5\u00d1\u00b14\u00a9f3\u00c3\u00a2\u00a4\u00a32\u00abw\u03bc\u0091\u00a5a\u00b1\u00d2\u03bc\u0088\u00ab-\u00a33\u00a5a1|\u00a5\u00bb\u00a1Q\u00a9\u0098\u00b1\u00d33\u0098\u00a3x \u0327[ \u0327\u0088\u00a9l\u00ad \u00d4/ \u00b6\u00d5 \u00d6 \u00d7 \u00c0 \u00d4/ \u00b6\u00d5 \u00d6 \u00d7 3\u20444r\u00a5\u00b6\u00a9f\u00a5 \u03bc\u0088\u00ab\u00a43 \u0304Q\u00a9\u0098\u00b14\u00a9l\u00ab\u008b\u00a1\u00c3\u00a32 \u0327r\u03bc[\u00ab\u00a4\u00adw3\u20444\u00a43 \u00a1Q\u03bc[\u00a6\u008b\u00ab_\u00a1a\u00a63\u00a3x3\u20444w\u00b7x\u00b14\u00a9l\u00ab\u008b\u00a1/\u03bc[\u00ab\u00a43\u0098\u00a6x\u00b14\u00aew \u0327\u0088\u00a9 \u00a1Q\u00a9c3 \u0327\u0091\u00a323\u20444r\u00a5a\u00a3x \u0327w\u00a1a\u00a2\u00a4\u00a9\u0098\u00a6x \u0304Q\u03bc\u0088\u00a9l\u00a5l\u00c0 \u00c1\u00d8\u00a5\u00b6\u03bc\u0088\u00ab|o \u00b7\u008b \u0327[\u00a9\u008b\u00d9?3\u20444w\u00ab\u00a4\u03bcY\u00a7\u00b0\u00a6\u008b \u0304a\u00b1\u00db\u00daq\u00ab\u00a4\u00a6\u008f\u00c5\u008a \u0327[\u00a9f\u00ad|\u00b7x\u00a94 \u0304Q\u00a9\u0098\u00aew \u0304Q\u00a9l\u00a5a\u00a9\u0098\u00abq\u00a1Q\u00a32\u00a1a\u03bc\u0088\u00a6x\u00ab\u00c8\u00a32 \u0327\u0088 \u0327[\u00a6\u008f\u00c5.\u00a5 \u00a9 \u0301|\u03bc\u0091\u00a5\u00bb\u00a1Q\u03bc[\u00ab\u00a4\u00b7K3 \u0327\u0091\u00a323\u20444\u00a4\u00a5a\u00a9l\u00a5 \u00a1Q\u00a6N\u00acr\u00a9_3\u20444\u00a4\u00a5a\u00a9l\u00ad \u00a3\u008b\u00a5 \u00ac\u00a4\u00a3x3\u00c3\u00da \u00b7x \u0304Q\u00a6x3\u20444\u00a4\u00ab\u00a4\u00ad\u0082\u00da \u00abw\u00a6\u008f\u00c5\u008a \u0327[\u00a9f\u00ad|\u00b7x\u00a9 \u03bc[\u00ab1\u20444\u00a1a\u00a2\u00a4\u00a9 3\u0098\u00a6x\u00ab\u00a4\u00a5\u00b6\u00a1a \u0304Q3\u20444\u00a43 \u00a1a\u03bc\u0088\u00a6x\u00abW\u00a6x\u00a7L\u00abw\u00a9\u0098\u00c5\u00dc\u00aew \u0304Q\u00a9l\u00ad|\u03bc\u00913\u0098\u00a32\u00a1a\u00a9l\u00a5l\u00c0 \u00d5 \u03bc\u00881\u20442x\u00a9\u0098\u00ab \u00a9\u0098 \u0301w\u00a32\u00b14\u00aew \u0327\u0088\u00a9l\u00a5/\u00a62\u00a7 \u00a33\u00a2w\u03bc[\u00b7\u008b\u00a2|oz \u0327[\u00a9l1\u20442x\u00a9\u0098 \u0327r\u00aew \u0304Q\u00a9l\u00ad|\u03bc\u00913\u0098\u00a32\u00a1a\u00a9x\u00d9\u00a4\u00d4/ \u00b6\u00d5 \u00d6 
\u00d7I\u00b7x\u00a9\u0098\u00ab\u00a4\u00a9\u0098 \u0304\u00c3\u00a3\u008f\u00a1a\u00a9f\u00a5\u0092 \u0304Q\u00a9\u0098 \u0327\u0091\u00a3\u008f\u00a1Q\u00a9l\u00adA\u00a5a3\u20444w\u00ac|o\u00dd3 \u00a6\u008b\u00ab\u00a43 \u00a9l\u00ae|\u00a1Q\u00a5 \u00c5\u008a\u00a2\u00a4\u03bc\u00883\u00c3\u00a2\u00c8\u03bcY\u00a1 \u00a1Q\u00a2w\u00a9\u0098\u00ab\u0082\u00a3x\u00a5a\u00da|\u00a5c\u03bc[\u00a1Q\u00a5 \u00a2 3\u20444w\u00b1_\u00a3x\u00abK\u00a1a\u00a9f\u00a3x3\u00c3\u00a2w\u00a9l \u0304 \u00a1a\u00a6 \u00ab\u00a4\u00a32\u00b14\u00a9\u008b\u00c0 \u00d5 \u00a9\u0098\u00ab\u00a4\u00a9\u0098 \u0304\u00c3\u00a32 \u0327\u0088\u03bc\u0088\u00a5Q\u00a3\u008f\u00a1Q\u03bc[\u00a6\u008b\u00ab\u00a4\u00a5L\u00a6x\u00a7 \u00aew \u0304Q\u00a9l\u00ad|\u03bc[o 3l\u00a3\u008f\u00a1Q\u00a9l\u00a5;\u00a32 \u0304Q\u00a9 \u00a1Q\u00a9l\u00a5\u00b6\u00a1a\u00a9f\u00adI\u00ac 1I\u00a3\u008b\u00a5\u00b6\u00da \u03bc\u0088\u00abw\u00b7(\u00deq3\u20444w\u00a9f\u00a5\u00bb\u00a1Q\u03bc[\u00a6\u008b\u00ab\u00a4\u00a5 \u00a62\u00a7/\u00a1a\u00a2w\u00a94\u00a2 3\u20444w\u00b1_\u00a32\u00ab\u0080\u00a1a\u00a9f\u00a3x3\u00c3\u00a2w\u00a9l \u0304l\u00c0 \u00d4/ \u00b6\u00d5 \u00d6 \u00d7\u00bf\u00b7\u008b\u00a9\u0098\u00ab|o \u00a9l \u0304Q\u00a32\u00a1a\u00a9f\u00a5 \u00abw\u00a9\u0098\u00c5\u00d33 \u00a6\u008b\u00ab\u00a43 \u00a9l\u00ae|\u00a1Q\u00a5 \u00a3x\u00ab\u00a4\u00ad\u00c8\u00b7x\u00a9l\u00abw\u00a9\u0098 \u0304\u00c3\u00a32 \u0327\u0088\u03bc\u0088\u00a5Q\u00a3\u008f\u00a1Q\u03bc[\u00a6\u008b\u00ab\u00a4\u00a5 \u00c5\u008a\u03bcY\u00a1Q\u00a2~\u00a3 \u00aew \u0304a\u00a9\u0098\u00a7\u00b0\u00a9\u0098 \u0304Q\u00a9\u0098\u00ab\u00a43\u0098\u00a9 \u00a7\u00b0\u00a6x \u03043\u00a5a\u03bc[\u00b14\u00aew \u0327\u0088\u03bc\u00913 \u03bc[\u00a1\u00bb1x\u00c0 \u00d0\u00c8\u00a9 \u03bc[ \u0327\u0088 \u0327\u00883\u20444\u00a4\u00a5\u00bb\u00a1Q \u0304Q\u00a32\u00a1a\u00a9c\u00a1a\u00a2w\u00a9 \u00a6\u008b\u00aer\u00a9l \u0304Q\u00a32\u00a1a\u03bc\u0088\u00a6x\u00ab-\u00a6x\u00a7a\u00d4/ \u00b6\u00d5 \u00d6 \u00d7\u0080\u00acq1-\u00c5)\u00a3m14\u00a62\u00a7 1\u20442\u008f\u00a32 \u0304Q\u03bc[\u00a6\u008b3\u20444\u00a4\u00a5/\u00a5a\u00a9l\u00a5Q\u00a5a\u03bc[\u00a6\u008b\u00ab\u00a4\u00a5 \u03bc\u0088\u00ab(\u00c5\u008a\u00a2w\u03bc\u00883\u00c3\u00a2 \u00a3x3\u20444| \u0301|\u03bc[ \u0327\u0088\u03bc\u0091\u00a32 \u0304Q1_\u00aew \u0304a\u00a9f\u00ad|\u03bc\u00913\u0098\u00a3\u008f\u00a1Q\u00a9l\u00a5\u008a\u00a3x \u0304a\u00a9;\u00a323\u20444w\u00a1a\u00a6x\u00b1_\u00a32\u00a1a\u03bc\u00913\u0098\u00a32 \u0327\u0088 \u0327\u00881_\u03bc[\u00abq\u00a1a \u0304Q\u00a6|\u00ad|3\u20444\u00a43\u0098\u00a9l\u00ad \u00a32\u00ab\u00a4\u00ad \u00b7x\u00a9l\u00abw\u00a9\u0098 \u0304\u00c3\u00a32 \u0327\u0088\u03bc\u0091\u00a5\u00b6\u00a9f\u00ad!\u00c0"}
{"_id": "a908299e667ece1b15b058dcebe2ff42e05e7f45", "title": "DALEX: explainers for complex predictive models", "text": "Predictive modeling is invaded by elastic, yet complex methods such as neural networks or ensembles (model stacking, boosting or bagging). Such methods are usually described by a large number of parameters or hyper parameters a price that one needs to pay for elasticity. The very number of parameters makes models hard to understand. This paper describes a consistent collection of explainers for predictive models, a.k.a. black boxes. Each explainer is a technique for exploration of a black box model. Presented approaches are model-agnostic, what means that they extract useful information from any predictive method despite its internal structure. Each explainer is linked with a specific aspect of a model. Some are useful in decomposing predictions, some serve better in understanding performance, while others are useful in understanding importance and conditional responses of a particular variable. Every explainer presented in this paper works for a single model or for a collection of models. In the latter case, models can be compared against each other. Such comparison helps to find strengths and weaknesses of different approaches and gives additional possibilities for model validation. Presented explainers are implemented in the DALEX package for R. They are based on a uniform standardized grammar of model exploration which may be easily extended. The current implementation supports the most popular frameworks for classification and regression."}
{"_id": "cf092c9cd4ea8112ddb558591c1c1cb472c269f7", "title": "Visualizing Semantic Table Annotations with TableMiner+", "text": "This paper describes an extension of the TableMiner system, an open source Semantic Table Interpretation system that annotates Web tables using Linked Data in an effective and efficient approach. It adds a graphical user interface to TableMiner, to facilitate the visualization and correction of automatically generated annotations. This makes TableMiner an ideal tool for the semi-automatic creation of high-quality semantic annotations on tabular data, which facilitates the publication of Linked Data on the Web."}
{"_id": "daf3f52d90f73e1170ca4db688c1fdceea8d9f91", "title": "Adaptive sequential Monte Carlo by means of mixture of experts", "text": "Selecting appropriately the proposal kernel of particle filters is an issue of significant importance, since a bad choice may lead to deterioration of the particle sample and, consequently, waste of computational power. In this paper we introduce a novel algorithm approximating adaptively the so-called optimal proposal kernel by a mixture of integrated curved exponential distributions with logistic weights. This family of distributions, referred to as mixtures of experts, is broad enough to be used in the presence of multi-modality or strongly skewed distributions. The mixtures are fitted, via Monte Carlo EM or online-EM methods, to the optimal kernel through minimization of the Kullback-Leibler divergence between the auxiliary target and instrumental distributions of the particle filter. At each iteration of the particle filter, the algorithm is required to solve only a single optimization problem for the whole particle sample, as opposed to existing methods solving one problem per particle. In addition, we illustrate in a simulation study how the method can be successfully applied to optimal filtering in nonlinear state-space models."}
{"_id": "246ec7e1e97d2feec9c27e3a598253d8c50c1a82", "title": "Preservation of nostril morphology in nasal base reduction", "text": "Asian patients often desire reduction of the base and alar lobules of the Asian mesorrhine nose. Sill excision is commonly used, but may result in an angular or notched nostril rim. We developed an internal method of alar base reduction involving triangle flaps for sill resection. This method avoids alar rim notching and teardrop deformity. Cinching sutures and double-layer closure avoid tension on the wound. We categorized the results in 50 patients (4 men, 46 women) who underwent surgery between November 2012 and August 2015 and who could be followed up for more than 3\u00a0months. The mean age of the subjects was 26.3\u00a0years and the mean follow-up period was 8.9\u00a0months. Forty patients underwent base reduction with the internal method, while ten with alar flare were treated with additional external resection. The mean reduction of the nostril sill width was 4.8\u00a0mm for both methods. In the subjects receiving flare resection, the mean reduction of the lateral alar width was 4.4\u00a0mm. There was no notching at the suture site. Complications included a short scar running obliquely under the sill in 13 patients and a trap door deformity in one patient. Nasal base reduction is widely performed, but subject to outcomes with abnormal nostril contour. We used triangle flaps to narrow the sill, and cinching sutures to prevent tension on the wound. Our methods prevent nostril notching and/or teardrop deformity. Scarring can occur, but can be reduced using cinching sutures for wound relaxation. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266."}
{"_id": "b9b209152ccaea5b9b0618467305949b6753cdb6", "title": "A localization and tracking scheme for target gangs based on big data of Wi-Fi locations", "text": "The modeling and analysis of target gangs\u2019 usual haunts plays a very important role in law enforcement and supervision. Existing localization and tracking schemes usually need to deploy a large number of monitoring devices or continue to move with the target, which lead to high cost. In this paper, a localization and tracking scheme based on big data of Wi-Fi locations is proposed. Firstly, the characteristic of the smart mobile device that continuously broadcasts probe request frames is used to obtain its MAC address and Wi-Fi connection history. Secondly, the service set identifier (SSID) in the Wi-Fi connection history of smart mobile devices held by the target gangs are queried from the Wi-Fi location database, and the target gangs\u2019 usual haunts are gained by statistical analysis. Lastly, monitoring devices are deployed in these places, and most of the target gangs\u2019 activity pattern are known with only a small number of monitoring devices. The results of the related experimental tests demonstrate the feasibility of the proposed scheme."}
{"_id": "440bc8f1de3d48064c188a9d682679b8c95b1ef1", "title": "DO INVESTMENT-CASH FLOW SENSITIVITIES PROVIDE USEFUL MEASURES OF FINANCING CONSTRAINTS?*", "text": "No. This paper investigates the relationship between financing constraints and investment-cash flow sensitivities by analyzing the firms identified by Fazzari, Hubbard, and Petersen as having unusually high investment-cash flow sensitivities. We find that firms that appear less financially constrained exhibit significantly greater sensitivities than firms that appear more financially constrained. We find this pattern for the entire sample period, subperiods, and individual years. These results (and simple theoretical arguments) suggest that higher sensitivities cannot be interpreted as evidence that firms are more financially constrained. These findings call into question the interpretation of most previous research that uses this methodology."}
{"_id": "54eda1aa7a6078d21a905366d8217de3cd39b187", "title": "The Role of Gender Stereotypes in Perceptions of Entrepreneurs and Intentions to Become an Entrepreneur", "text": "In this study we examine the role of socially constructed gender stereotypes in entrepreneurship and their influence on men and women\u2019s entrepreneurial intentions. Data on characteristics of males, females, and entrepreneurs were collected from young adults in three countries. As hypothesized, entrepreneurs were perceived to have predominantly masculine characteristics. Additional results revealed that although both men and women perceive entrepreneurs to have characteristics similar to those of males (masculine genderrole stereotype), only women also perceived entrepreneurs and females as having similar characteristics (feminine gender-role stereotype). Further, though men and women did not differ in their entrepreneurial intentions, those who perceived themselves as more similar to males (high on male gender identification) had higher entrepreneurial intentions than those who saw themselves as less similar to males (low male gender identification). No such difference was found for people who saw themselves as more or less similar to females (female gender identification). The results were consistent across the three countries. Practical implications and directions for future research are discussed."}
{"_id": "5eb552c3b282227b6abd7745c666526c7b7a0dd2", "title": "Diced cartilage grafts in rhinoplasty surgery: current techniques and applications.", "text": "BACKGROUND\nThe author has used diced cartilage grafts in nasal surgery for more than 30 years. However, the number of cases and the variety of techniques have increased dramatically over the past 6 years.\n\n\nMETHODS\nThe author uses three methods of diced cartilage in rhinoplasty surgery: diced cartilage, diced cartilage wrapped in fascia (with fascia then sewn closed), and diced cartilage covered with fascia (with the recipient area covered with fascia after graft placement). The constructs are highly varied to fit the specific defect. Pieces of diced cartilage without any fascia can be placed in pockets in the peripyriform area, radix, or alongside structural rib grafts.\n\n\nRESULTS\nOver a 2-year period, the author treated 546 rhinoplasty cases in which 79 patients (14 percent) had a total of 91 diced cartilage grafts. There were 34 primary and 45 secondary operations involving the radix (n = 11), half-length grafts (n = 14), full-length dorsum (n = 43), peripyriform (n = 16), infralobule (n = 4), and lateral wall (n = 3). All charts were reviewed for the 256 rhinoplasties performed in 2006 of which 30 patients had 35 diced cartilage grafts. With a median follow-up of 19 months (range, 13 to 25 months), two patients had had revisions unrelated to their diced cartilage grafts. The three most common technical problems were overcorrection, visibility, and junctional step-offs.\n\n\nCONCLUSIONS\nDiced cartilage grafts are a valuable addition to rhinoplasty surgery. They are highly flexible and useful throughout the nose. Their use simplifies one of the greatest challenges in all of rhinoplasty--dorsal augmentation. Complications have been relatively minor and their correction relatively simple."}
{"_id": "2a894be44d07a963c28893cc6f45d29fbfa872f7", "title": "STRADS: a distributed framework for scheduled model parallel machine learning", "text": "Machine learning (ML) algorithms are commonly applied to big data, using distributed systems that partition the data across machines and allow each machine to read and update all ML model parameters --- a strategy known as data parallelism. An alternative and complimentary strategy, model parallelism, partitions the model parameters for non-shared parallel access and updates, and may periodically repartition the parameters to facilitate communication. Model parallelism is motivated by two challenges that data-parallelism does not usually address: (1) parameters may be dependent, thus naive concurrent updates can introduce errors that slow convergence or even cause algorithm failure; (2) model parameters converge at different rates, thus a small subset of parameters can bottleneck ML algorithm completion. We propose scheduled model parallelism (SchMP), a programming approach that improves ML algorithm convergence speed by efficiently scheduling parameter updates, taking into account parameter dependencies and uneven convergence. To support SchMP at scale, we develop a distributed framework STRADS which optimizes the throughput of SchMP programs, and benchmark four common ML applications written as SchMP programs: LDA topic modeling, matrix factorization, sparse least-squares (Lasso) regression and sparse logistic regression. By improving ML progress per iteration through SchMP programming whilst improving iteration throughput through STRADS we show that SchMP programs running on STRADS outperform non-model-parallel ML implementations: for example, SchMP LDA and SchMP Lasso respectively achieve 10x and 5x faster convergence than recent, well-established baselines."}
{"_id": "9c9eaf102b01d1a7ff017557a4989b8c9d40003c", "title": "Image Stylization using Depth Information for Hatching and Engraving effects by", "text": "In this thesis inspired by artists such as Gustave Dor\u00e9 and copperplate engravers, we present a new approach for producing a sketch-type hatching effect as well as engraving art seen in the US dollar bills using depth maps as input. The approach uses the acquired depth to generate pen-and-ink type of strokes that exaggerate surface characteristics of faces and objects. The background on this area is presented with an emphasis on line drawing, pen-and-ink illustrations, engraving art and other artistic styles. The patterns generated are composed of parallel lines that change their directionality smoothly while avoiding intersections between them. The style communicates tone and surface characteristics using curve deformation, thickness, density, and spacing while at the same time adding crosshatching effects. This is achieved with our algorithmic approach which uses the data to perform operations that deforms patterns. The components of our methodology and algorithms are detailed, as well as the equations that govern these effects. This work also presents an array of different results by varying parameters or by rendering variations."}
{"_id": "c29ba92a13603e7e7e1ff147ba38eb513384c579", "title": "Evaluating Personalization and Persuasion in E-Commerce", "text": "The use of personalization and persuasion has been shown to optimize customers\u2019 shopping experience in e-commerce. This study aims to identify the personalization methods and persuasive principles that make an ecommerce company successful. Using Amazon as a case study, we evaluated the personalization methods implemented using an existing process framework. We also applied the PSD model to Amazon to evaluate the persuasive principles it uses. Our results show that all the principles of the PSD model were implemented in Amazon. This study can serve as a guide to e-commerce businesses and software developers for building or improving existing e-commerce platforms."}
{"_id": "669e43e6f142a8be393b1003b219a3ff26af109f", "title": "DBpedia: A Multilingual Cross-domain Knowledge Base", "text": "The DBpedia project extracts structured information from Wikipedia editions in 97 different languages and combines this information into a large multi-lingual knowledge base covering many specific domains and general world knowledge. The knowledge base contains textual descriptions (titles and abstracts) of concepts in up to 97 languages. It also contains structured knowledge that has been extracted from the infobox systems of Wikipedias in 15 different languages and is mapped onto a single consistent ontology by a community effort. The knowledge base can be queried using a structured query language and all its data sets are freely available for download. In this paper, we describe the general DBpedia knowledge base and extended data sets that specifically aim at supporting computational linguistics tasks. These task include Entity Linking, Word Sense Disambiguation, Question Answering, Slot Filling and Relationship Extraction. These use cases are outlined, pointing at added value that the structured data of DBpedia provides."}
{"_id": "167abf2c9eda9ce21907fcc188d2e41da37d9f0b", "title": "Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection", "text": "Paraphrase detection is the task of examining two sentences and determining whether they have the same meaning. In order to obtain high accuracy on this task, thorough syntactic and semantic analysis of the two statements is needed. We introduce a method for paraphrase detection based on recursive autoencoders (RAE). Our unsupervised RAEs are based on a novel unfolding objective and learn feature vectors for phrases in syntactic trees. These features are used to measure the wordand phrase-wise similarity between two sentences. Since sentences may be of arbitrary length, the resulting matrix of similarity measures is of variable size. We introduce a novel dynamic pooling layer which computes a fixed-sized representation from the variable-sized matrices. The pooled representation is then used as input to a classifier. Our method outperforms other state-of-the-art approaches on the challenging MSRP paraphrase corpus."}
{"_id": "7988ef10dc9770e5fa4dc40d5d2f3693fd2ed917", "title": "A Comparison of Vector-based Representations for Semantic Composition", "text": "In this paper we address the problem of modeling compositional meaning for phrases and sentences using distributional methods. We experiment with several possible combinations of representation and composition, exhibiting varying degrees of sophistication. Some are shallow while others operate over syntactic structure, rely on parameter learning, or require access to very large corpora. We find that shallow approaches are as good as more computationally intensive alternatives with regards to two particular tests: (1) phrase similarity and (2) paraphrase detection. The sizes of the involved training corpora and the generated vectors are not as important as the fit between the meaning representation and compositional method."}
{"_id": "bdc6acc8d11b9ef1e8f0fe2f0f41ce7b6f6a100a", "title": "Learning Discriminative Projections for Text Similarity Measures", "text": "Traditional text similarity measures consider each term similar only to itself and do not model semantic relatedness of terms. We propose a novel discriminative training method that projects the raw term vectors into a common, low-dimensional vector space. Our approach operates by finding the optimal matrix to minimize the loss of the pre-selected similarity function (e.g., cosine) of the projected vectors, and is able to efficiently handle a large number of training examples in the highdimensional space. Evaluated on two very different tasks, cross-lingual document retrieval and ad relevance measure, our method not only outperforms existing state-of-the-art approaches, but also achieves high accuracy at low dimensions and is thus more efficient."}
{"_id": "b67bedf7ee4f4664541a1265dc816186b9d0160d", "title": "Ground-Penetrating Radar Theory and Application of Thin-Bed Offset-Dependent Reflectivity", "text": "Offset-dependent reflectivity or amplitude-variationwith-offset AVO analysis of ground-penetrating radar GPR data may improve the resolution of subsurface dielectric permittivity estimates. A horizontally stratified medium has a limiting layer thickness below which thin-bed AVO analysis is necessary. For a typical GPR signal, this limit is approximately 0.75 of the characteristic wavelength of the signal. Our approach to modeling the GPR thin-bed response is a broadband, frequency-dependent computation that utilizes an analytical solution to the three-interface reflectivity and is easy to implement for either transverse electric TE or transverse magnetic TM polarizations. The AVO curves for TE and TM modes differ significantly. In some cases, constraining the interpretation using both TE and TM data is critical. In two field examples taken from contaminated-site characterization data, we find quantitative thin-bed modeling agrees with the GPR field data and available characterization data."}
{"_id": "fe82d072a8d13cfefcd575db893f3374251f04a8", "title": "Multi-fiber Networks for Video Recognition", "text": "In this paper, we aim to reduce the computational cost of spatio-temporal deep neural networks, making them run as fast as their 2D counterparts while preserving state-of-the-art accuracy on video recognition benchmarks. To this end, we present the novel Multi-Fiber architecture that slices a complex neural network into an ensemble of lightweight networks or fibers that run through the network. To facilitate information flow between fibers we further incorporate multiplexer modules and end up with an architecture that reduces the computational cost of 3D networks by an order of magnitude, while increasing recognition performance at the same time. Extensive experimental results show that our multi-fiber architecture significantly boosts the efficiency of existing convolution networks for both image and video recognition tasks, achieving state-of-the-art performance on UCF-101, HMDB-51 and Kinetics datasets. Our proposed model requires over 9\u00d7 and 13\u00d7 less computations than the I3D [1] and R(2+1)D [2] models, respectively, yet providing higher accuracy."}
{"_id": "865a6ce9ae169f36afbcbdfd3bdc0c38fe7c7d3a", "title": "Evolved GANs for generating pareto set approximations", "text": "In machine learning, generative models are used to create data samples that mimic the characteristics of the training data. Generative adversarial networks (GANs) are neural-network based generator models that have shown their capacity to produce realistic samples in different domains. In this paper we propose a neuro-evolutionary approach for evolving deep GAN architectures together with the loss function and generator-discriminator synchronization parameters. We also propose the problem of Pareto set (PS) approximation as a suitable benchmark to evaluate the quality of neural-network based generators in terms of the accuracy of the solutions they generate. The covering of the Pareto front (PF) by the generated solutions is used as an indicator of the mode-collapsing behavior of GANs. We show that it is possible to evolve GANs that generate good PS approximations. Our method scales to up to 784 variables and that it is capable to create architecture transferable across dimensions and functions."}
{"_id": "17962df92ccdcc0e6339b614aee8842da3faf8de", "title": "Where Do Creative Interactions Come From? The Role of Tie Content and Social Networks", "text": "U the determinants of creativity at the individual and organizational level has been the focus of a long history of research in various disciplines from the social sciences, but little attention has been devoted to studying creativity at the dyadic level. Why are some dyadic interactions more likely than others to trigger the generation of novel and useful ideas in organizations? As dyads conduit both knowledge and social forces, they offer an ideal setting to disentangle the effects of knowledge diversity, tie strength, and network structure on the generation of creative thoughts. This paper not only challenges the current belief that sporadic and distant dyadic relationships (weak ties) foster individual creativity but also argues that diverse and strong ties facilitate the generation of creative ideas. From a knowledge viewpoint, our results suggest that ties that transmit a wide (rather than narrow) set of knowledge domains (within the same tie) favor creative idea generation if exchanges occur with sufficient frequency. From a social perspective, we find that strong ties serve as effective catalysts for the generation of creative ideas when they link actors who are intrinsically motivated to work closely together. Finally, this paper also shows that dyadic network cohesion (i.e., the connections from the focal dyad to common contacts) does not always hinder the generation of creative ideas. Our empirical evidence suggests that when cohesion exceeds its average levels, it becomes detrimental to creative idea generation. Hypotheses are tested in a sociometric study conducted within the development department of a software firm."}
{"_id": "63656b96f5026692ac174bc6cf6450aef5b6f37c", "title": "Validation, verification, and testing techniques throughout the life cycle of a simulation study", "text": "Life cycle validation, verification, and testing (VV&T) is extremely important for the success of a simulation study. This paper surveys current software VV&T techniques and current simulation model VV&T techniques and describes how they can all be applied throughout the life cycle of a simulation study. The processes and credibility assessment stages of the life cycle are described and the applicability of the VV&T techniques for each stage is stated."}
{"_id": "dc96c39ea571051866eded0d94cf9777814af9d9", "title": "Perceived qualities of smart wearables: determinants of user acceptance", "text": "Wearable computers are one of the new technologies that are expected to be a part of users' lives extensively in near future. While some of the users have positive attitudes towards these new products, some users may reject to use them due to different reasons. User experience is subjective, and effected by various parameters. Among these the first impression, namely the perceived qualities has an important impact on product acceptance. This paper aims to explore the perceived qualities of wearables and define the relations between them. An empirical study is conducted, to find out the hierarchy and meaningful relationships between the perceived qualities of smart wearables. The study is based on personal construct theory and data is presented by Cross-Impact Analysis. The patterns behind affection and affected qualities are explored to understand the design requirements for the best integration of wearables into daily lives."}
{"_id": "86ae8972ccbeaa3193398523fcac095c8e6a38a2", "title": "Let's talk about it: evaluating contributions through discussion in GitHub", "text": "Open source software projects often rely on code contributions from a wide variety of developers to extend the capabilities of their software. Project members evaluate these contributions and often engage in extended discussions to decide whether to integrate changes. These discussions have important implications for project management regarding new contributors and evolution of project requirements and direction. We present a study of how developers in open work environments evaluate and discuss pull requests, a primary method of contribution in GitHub, analyzing a sample of extended discussions around pull requests and interviews with GitHub developers. We found that developers raised issues around contributions over both the appropriateness of the problem that the submitter attempted to solve and the correctness of the implemented solution. Both core project members and third-party stakeholders discussed and sometimes implemented alternative solutions to address these issues. Different stakeholders also influenced the outcome of the evaluation by eliciting support from different communities such as dependent projects or even companies. We also found that evaluation outcomes may be more complex than simply acceptance or rejection. In some cases, although a submitter's contribution was rejected, the core team fulfilled the submitter's technical goals by implementing an alternative solution. We found that the level of a submitter's prior interaction on a project changed how politely developers discussed the contribution and the nature of proposed alternative solutions."}
{"_id": "50988101501366324c11e9e7a199e88a9a899bec", "title": "To Kill a Centrifuge A Technical Analysis of What Stuxnet \u2019 s Creators Tried to Achieve", "text": ""}
{"_id": "7777d299e7b4217fc4b80234994b5a68b3031199", "title": "Fixed-Rate Compressed Floating-Point Arrays", "text": "Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation."}
{"_id": "94e73164abda446b88c9523810a43aad6832377e", "title": "Literature Review of Teamwork Models", "text": "Both human collaboration and software agent collaboration have been thoroughly studied, but there is relatively little research on hybrid human-agent teamwork. Some research has identified the roles that agents could play in hybrid teams: supporting individual team members, being a teammate, or supporting the team as a whole [99]. Some other work [57] has investigated trust concepts as the fundamental building block for effective human-agent teamwork or posited the types of shared knowledge that promote mutual understanding between cooperating humans and agents [9, 68]. However, many of the facets of human agent teamwork models, such as communication protocols for forming mutual intelligibility, performing team monitoring to assess progress, forming joint goals, addressing task interdependencies in hybrid teamwork are still unexplored. In this report, we address the following questions: 1. what factors affect human team task performance and cognition? 2. how can agent coordination mechanisms be adapted for human-agent teams? 3. with current technologies, what roles can agents successfully fill in hybrid human-agent teams? 4. what are the barriers to human-agent interaction?"}
{"_id": "b2e68ca577636aaa6f6241c3af7478a3ae1389a7", "title": "Transformational leadership in nursing: a concept analysis.", "text": "AIM\nTo analyse the concept of transformational leadership in the nursing context.\n\n\nBACKGROUND\nTasked with improving patient outcomes while decreasing the cost of care provision, nurses need strategies for implementing reform in health care and one promising strategy is transformational leadership. Exploration and greater understanding of transformational leadership and the potential it holds is integral to performance improvement and patient safety.\n\n\nDESIGN\nConcept analysis using Walker and Avant's (2005) concept analysis method.\n\n\nDATA SOURCES\nPubMed, CINAHL and PsychINFO.\n\n\nMETHODS\nThis report draws on extant literature on transformational leadership, management, and nursing to effectively analyze the concept of transformational leadership in the nursing context.\n\n\nIMPLICATIONS FOR NURSING\nThis report proposes a new operational definition for transformational leadership and identifies model cases and defining attributes that are specific to the nursing context. The influence of transformational leadership on organizational culture and patient outcomes is evident. Of particular interest is the finding that transformational leadership can be defined as a set of teachable competencies. However, the mechanism by which transformational leadership influences patient outcomes remains unclear.\n\n\nCONCLUSION\nTransformational leadership in nursing has been associated with high-performing teams and improved patient care, but rarely has it been considered as a set of competencies that can be taught. Also, further research is warranted to strengthen empirical referents; this can be done by improving the operational definition, reducing ambiguity in key constructs and exploring the specific mechanisms by which transformational leadership influences healthcare outcomes to validate subscale measures."}
{"_id": "cf09f59c2530533bd7fc31d5bc92cf01a6e30cd2", "title": "Statistical Nested Sensor Array Signal Processing", "text": "OF THE DISSERTATION Statistical Nested Sensor Array Signal Processing by Keyong Han Doctor of Philosophy in Electrical Engineering Washington University in St. Louis, 2015 Professor Arye Nehorai, Chair Source number detection and direction-of-arrival (DOA) estimation are two major applications of sensor arrays. Both applications are often confined to the use of uniform linear arrays (ULAs), which is expensive and difficult to yield wide aperture. Besides, a ULA with N scalar sensors can resolve at most N \u2212 1 sources. On the other hand, a systematic approach was recently proposed to achieve O(N) degrees of freedom (DOFs) using O(N) sensors based on a nested array, which is obtained by combining two or more ULAs with successively increased spacing. This dissertation will focus on a fundamental study of statistical signal processing of nested arrays. Five important topics are discussed, extending the existing nested-array strategies to more practical scenarios. Novel signal models and algorithms are proposed. First, based on the linear nested array, we consider the problem for wideband Gaussian sources. To employ the nested array to the wideband case, we propose effective strategies to apply nested-array processing to each frequency component, and combine all the spectral xi information of various frequencies to conduct the detection and estimation. We then consider the practical scenario with distributed sources, which considers the spreading phenomenon of sources. Next, we investigate the self-calibration problem for perturbed nested arrays, for which existing works require certain modeling assumptions, for example, an exactly known array geometry, including the sensor gain and phase. We propose corresponding robust algorithms to estimate both the model errors and the DOAs. The partial Toeplitz structure of the covariance matrix is employed to estimate the gain errors, and the sparse total least squares is used to deal with the phase error issue. We further propose a new class of nested vector-sensor arrays which is capable of significantly increasing the DOFs. This is not a simple extension of the nested scalar-sensor array. Both the signal model and the signal processing strategies are developed in the multidimensional sense. Based on the analytical results, we consider two main applications: electromagnetic (EM) vector sensors and acoustic vector sensors. Last but not least, in order to make full use of the available limited valuable data, we propose a novel strategy, which is inspired by the jackknifing resampling method. Exploiting numerous iterations of subsets of the whole data set, this strategy greatly improves the results of the existing source number detection and DOA estimation methods."}
{"_id": "45df07d96c59a38d8e8033ad673ad289b2207352", "title": "PZT-Actuated and -Sensed Resonant Micromirrors with Large Scan Angles Applying Mechanical Leverage Amplification for Biaxial Scanning", "text": "This article presents design, fabrication and characterization of lead zirconate titanate (PZT)-actuated micromirrors, which enable extremely large scan angle of up to 106\u00b0 and high frequency of 45 kHz simultaneously. Besides the high driving torque delivered by PZT actuators, mechanical leverage amplification has been applied for the micromirrors in this work to reach large displacements consuming low power. Additionally, fracture strength and failure behavior of poly-Si, which is the basic material of the micromirrors, have been studied to optimize the designs and prevent the device from breaking due to high mechanical stress. Since comparing to using biaxial micromirror, realization of biaxial scanning using two independent single-axial micromirrors shows considerable advantages, a setup combining two single-axial micromirrors for biaxial scanning and the results will also be presented in this work. Moreover, integrated piezoelectric position sensors are implemented within the micromirrors, based on which closed-loop control has been developed and studied."}
{"_id": "576f2a0edb28d940427211727ced954d7df38a29", "title": "Streaming Hierarchical Video Segmentation", "text": "The use of video segmentation as an early processing step in video analysis lags behind the use of image segmentation for image analysis, despite many available video segmentation methods. A major reason for this lag is simply that videos are an order of magnitude bigger than images; yet most methods require all voxels in the video to be loaded into memory, which is clearly prohibitive for even medium length videos. We address this limitation by proposing an approximation framework for streaming hierarchical video segmentation motivated by data stream algorithms: each video frame is processed only once and does not change the segmentation of previous frames. We implement the graph-based hierarchical segmentation method within our streaming framework; our method is the first streaming hierarchical video segmentation method proposed. We perform thorough experimental analysis on a benchmark video data set and longer videos. Our results indicate the graph-based streaming hierarchical method outperforms other streaming video segmentation methods and performs nearly as well as the full-video hierarchical graph-based method."}
{"_id": "e8503fac5b092f8c3ce740cbd37291b082e2a041", "title": "Design and Implementation of Power Converters for Hybrid Wind-Solar Energy Conversion System with an Implementation of MPPT", "text": "This paper presents the design and implementation of power converters for wind conversion systems. The power converter can not only transfer the power from a wind generator, but also improve the stability and safety of the system. The proposed system consists of a Permanent magnet synchronous generator (PMSG); a DC/DC boosts converter, a bi-directional DC/DC converter and a full-bridge DC/AC inverter. The wind generator is the main power source of the system, and the battery is used for energy storage and power compensation to recover the natural irregularity of the wind power. In this paper presents a new system configuration of the front-end rectifier stage for a hybrid wind or photo voltaic energy system. The configuration allows the two sources to supply the load separately or simultaneously, depending on the availability of energy sources. The inherent nature of this cuk-scpic fused converter, additional input filters are not necessary to filter out high frequency harmonic content is determinant for the generator life span, heating issue and efficiency. The fused multi-input rectifier stage also allows maximum power from the wind and sun. When it is available an adaptive MPPT algorithm will be used for photo voltaic (PV) system. Operational analysis of the proposed system, will discoursed in this paper simulation results are given to highlight the merit of the proposed circuit Index terms \u2014Wind generator, PV and Fuel Cells, Bi-directional DC/DC converter, full-bridge DC/AC inverter, MPPT."}
{"_id": "4a44e491e09b75e4f8062c4aba801ab16f213559", "title": "A Novel, Planar, and Compact Crossover Design for Dual-Band Applications", "text": "This paper presents, for the first time, the novel design of a dual-band crossover. The proposed circuit features planar implementation with moderate line impedance, compact layout, and widely separated frequency bands. Moreover, simple and explicit closed-form expressions are available for the exact evaluation of the line impedances. For verification, both simulated and measured results of a microstrip crossover operating at 800 MHz and 1.2 GHz are given."}
{"_id": "43b9fc3dabc4f9cf6550f64e50b92bbe58dd3893", "title": "Practical condition synchronization for transactional memory", "text": "Few transactional memory implementations allow for condition synchronization among transactions. The problems are many, most notably the lack of consensus about a single appropriate linguistic construct, and the lack of mechanisms that are compatible with hardware transactional memory. In this paper, we introduce a broadly useful mechanism for supporting condition synchronization among transactions. Our mechanism supports a number of linguistic constructs for coordinating transactions, and does so without introducing overhead on in-flight hardware transactions. Experiments show that our mechanisms work well, and that the diversity of linguistic constructs allows programmers to chose the technique that is best suited to a particular application."}
{"_id": "135c5d0ad84b91303d42e3f0dfec1f13e9cea178", "title": "Wasserstein Generative Adversarial Network", "text": "Recent advances in deep generative models give us new perspective on modeling highdimensional, nonlinear data distributions. Especially the GAN training can successfully produce sharp, realistic images. However, GAN sidesteps the use of traditional maximum likelihood learning and instead adopts an two-player game approach. This new training behaves very differently compared to ML learning. There are still many remaining problem of GAN training. In this thesis, we gives a comprehensive review of recently published methods or analysis on GAN training, especially the Wasserstein GAN and FlowGAN model. We also discuss the limitation of the later model and use this as the motivation to propose a novel generator architecture using mixture models. Furthermore, we also modify the discriminator architecture using similar ideas to allow \u2019personalized\u2019 guidance. We refer the generator mixture model as Mixflow and mixture of discriminators as \u2019personalized GAN\u2019 (PGAN). In experiment chapter, we demonstrate their performance advantages using toy examples compared to single flow model. In the end, we test their performance on MNIST dataset and the Mixflow model not only achieves the best log likelihood but also produce reasonable images compared to state-of-art DCGAN generation."}
{"_id": "2a9fdd7f8ba315135c16335e19751c655ff09f1b", "title": "SparkClouds: Visualizing Trends in Tag Clouds", "text": "Tag clouds have proliferated over the web over the last decade. They provide a visual summary of a collection of texts by visually depicting the tag frequency by font size. In use, tag clouds can evolve as the associated data source changes over time. Interesting discussions around tag clouds often include a series of tag clouds and consider how they evolve over time. However, since tag clouds do not explicitly represent trends or support comparisons, the cognitive demands placed on the person for perceiving trends in multiple tag clouds are high. In this paper, we introduce SparkClouds, which integrate sparklines into a tag cloud to convey trends between multiple tag clouds. We present results from a controlled study that compares SparkClouds with two traditional trend visualizations-multiple line graphs and stacked bar charts-as well as Parallel Tag Clouds. Results show that SparkClouds' ability to show trends compares favourably to the alternative visualizations."}
{"_id": "5e83ef316ca3536d184f28307059ed51dd945543", "title": "A survey of glove-based input", "text": "Clumsy intermediary devices constrain our interaction with computers and their applications. Glove-based input devices let us apply our manual dexterity to the task. We provide a basis for understanding the field by describing key hand-tracking technologies and applications using glove-based input. The bulk of development in glove-based input has taken place very recently, and not all of it is easily accessible in the literature. We present a cross-section of the field to date. Hand-tracking devices may use the following technologies: position tracking, optical tracking, marker systems, silhouette analysis, magnetic tracking or acoustic tracking. Actual glove technologies on the market include: Sayre glove, MIT LED glove, Digital Data Entry Glove, DataGlove, Dexterous HandMaster, Power Glove, CyberGlove and Space Glove. Various applications of glove technologies include projects into the pursuit of natural interfaces, systems for understanding signed languages, teleoperation and robotic control, computer-based puppetry, and musical performance.<>"}
{"_id": "5ba7042c5220548c9d5636df3cc2c84bb8641e02", "title": "\"Put-that-there\": Voice and gesture at the graphics interface", "text": "Recent technological advances in connected-speech recognition and position sensing in space have encouraged the notion that voice and gesture inputs at the graphics interface can converge to provide a concerted, natural user modality.\n The work described herein involves the user commanding simple shapes about a large-screen graphics display surface. Because voice can be augmented with simultaneous pointing, the free usage of pronouns becomes possible, with a corresponding gain in naturalness and economy of expression. Conversely, gesture aided by voice gains precision in its power to reference."}
{"_id": "f172c1fc51ea749de17d5201d3797ad4282eb83f", "title": "The See-Through System: A VANET-enabled assistant for overtaking maneuvers", "text": "The use of wireless technology based on Vehicular Ad hoc Networks (VANETs) for information exchange can influence the drivers' behavior towards improving driving performance and reducing road accidents. This information can even be more relevant if it is presented as a video stream. In this paper we propose a system that relies on VANET and video-streaming technology: the See-Through System (STS). The system enhances driver's visibility and supports the driver's overtaking decision in challenging situations, such as overtaking a vision-obstructing vehicle. The use of the See-Through System provides the driver with an additional tool for determining if traffic conditions permit starting an overtaking maneuver thus reducing the risk of overtaking. We tested the See-Through System on an experimental vehicle on the road as well as in the context of a driving simulator for real world environment. Results are promising, since the use of the 802.11p standard wireless communication protocol allows a vehicle-to-vehicle connection without significant delay and the totality of the participants regarded the information provided by the STS as useful."}
{"_id": "246310099609c87cacec2ce27ad2cf4228d95ef8", "title": "The Collateral Damage of Internet Censorship by DNS Injection", "text": "Some ISPs and governments (most notably the Great Firewall of China) use DNS injection to block access to \u201cunwanted\u201d websites. The censorship tools inspect DNS queries near the ISP\u2019s boundary routers for sensitive domain keywords and injecting forged DNS responses, blocking the users from accessing censored sites, such as twitter.com and facebook. com. Unfortunately this causes large scale collateral damage, affecting communication beyond the censored networks when outside DNS traffic traverses censored links. In this paper, we analyze the causes of the collateral damages comprehensively and measure the Internet to identify the injecting activities and their effect. We find 39 ASes in China injecting forged replies even for transit DNS traffic, and 26% of 43,000 measured open resolvers outside China, distributed in 109 countries, may suffer some collateral damage. Different from previous work, we find that most collateral damage arises from resolvers querying TLD name servers who\u2019s transit passes through China rather than effects due to root servers (F, I, J) located in China."}
{"_id": "bdcdc95ef36b003fce90e8686bfd292c342b0b57", "title": "The Dreaming Variational Autoencoder for Reinforcement Learning Environments", "text": "Reinforcement learning has shown great potential in generalizing over raw sensory data using only a single neural network for value optimization. There are several challenges in the current state-of-the-art reinforcement learning algorithms that prevent them from converging towards the global optima. It is likely that the solution to these problems lies in shortand long-term planning, exploration and memory management for reinforcement learning algorithms. Games are often used to benchmark reinforcement learning algorithms as they provide a flexible, reproducible, and easy to control environment. Regardless, few games feature a state-space where results in exploration, memory, and planning are easily perceived. This paper presents The Dreaming Variational Autoencoder (DVAE), a neural network based generative modeling architecture for exploration in environments with sparse feedback. We further present Deep Maze, a novel and flexible maze engine that challenges DVAE in partial and fully-observable state-spaces, long-horizon tasks, and deterministic and stochastic problems. We show initial findings and encourage further work in reinforcement learning driven by generative exploration."}
{"_id": "7e5af1cf715305fc394b5d24fc1caf17643a9205", "title": "The Double Dance of Agency : a socio-theoretic account of how machines and humans interact", "text": "The nature of the relationship between information technology (IT) and organizations has been a long-standing debate in the Information Systems literature. Does IT shape organizations, or do people in organisations control how IT is used? To formulate the question a little differently: does agency (the capacity to make a difference) lie predominantly with machines (computer systems) or humans (organisational actors)? Many proposals for a middle way between the extremes of technological and social determinism have been put advanced; in recent years researchers oriented towards social theories have focused on structuration theory and (lately) actor network theory. These two theories, however, adopt different and incompatible views of agency. Thus, structuration theory sees agency as exclusively a property of humans, whereas the principle of general symmetry in actor network theory implies that machines may also be agents. Drawing on critiques of both structuration theory and actor network theory, this paper develops a theoretical account of the interaction between human and machine agency: the double dance of agency. The account seeks to contribute to theorisation of the relationship between technology and organisation by recognizing both the different character of human and machine agency, and the emergent properties of their interplay."}
{"_id": "d7cbedbee06293e78661335c7dd9059c70143a28", "title": "MobileFaceNets: Efficient CNNs for Accurate Real-time Face Verification on Mobile Devices", "text": "We present a class of extremely efficient CNN models, MobileFaceNets, which use less than 1 million parameters and are specifically tailored for high-accuracy real-time face verification on mobile and embedded devices. We first make a simple analysis on the weakness of common mobile networks for face verification. The weakness has been well overcome by our specifically designed MobileFaceNets. Under the same experimental conditions, our MobileFaceNets achieve significantly superior accuracy as well as more than 2 times actual speedup over MobileNetV2. After trained by ArcFace loss on the refined MS-Celeb-1M, our single MobileFaceNet of 4.0MB size achieves 99.55% accuracy on LFW and 92.59% TAR@FAR1e-6 on MegaFace, which is even comparable to state-of-the-art big CNN models of hundreds MB size. The fastest one of MobileFaceNets has an actual inference time of 18 milliseconds on a mobile phone. For face verification, MobileFaceNets achieve significantly improved efficiency over previous state-of-the-art mobile CNNs."}
{"_id": "871726e86897fd9012c853d3282796e69be5176e", "title": "Forensic Image Inspection Assisted by Deep Learning", "text": "Investigations on the charge of possessing child pornography usually require manual forensic image inspection in order to collect evidence. When storage devices are confiscated, law enforcement authorities are hence often faced with massive image datasets which have to be screened within a limited time frame. As the ability to concentrate and time are highly limited factors of a human investigator, we believe that intelligent algorithms can effectively assist the inspection process by rearranging images based on their content. Thus, more relevant images can be discovered within a shorter time frame, which is of special importance in time-critical investigations of triage character.\n While currently employed techniques are based on black- and whitelisting of known images, we propose to use deep learning algorithms trained for the detection of pornographic imagery, as they are able to identify new content. In our approach, we evaluated three state-of-the-art neural networks for the detection of pornographic images and employed them to rearrange simulated datasets of 1 million images containing a small fraction of pornographic content. The rearrangement of images according to their content allows a much earlier detection of relevant images during the actual manual inspection of the dataset, especially when the percentage of relevant images is low. With our approach, the first relevant image could be discovered between positions 8 and 9 in the rearranged list on average. Without using our approach of image rearrangement, the first relevant image was discovered at position 1,463 on average."}
{"_id": "44f18ef0800e276617e458bc21502947f35a7f94", "title": "EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras", "text": "Marker-based and marker-less optical skeletal motion-capture methods use an outside-in arrangement of cameras placed around a scene, with viewpoints converging on the center. They often create discomfort with marker suits, and their recording volume is severely restricted and often constrained to indoor scenes with controlled backgrounds. Alternative suit-based systems use several inertial measurement units or an exoskeleton to capture motion with an inside-in setup, i.e. without external sensors. This makes capture independent of a confined volume, but requires substantial, often constraining, and hard to set up body instrumentation. Therefore, we propose a new method for real-time, marker-less, and egocentric motion capture: estimating the full-body skeleton pose from a lightweight stereo pair of fisheye cameras attached to a helmet or virtual reality headset - an optical inside-in method, so to speak. This allows full-body motion capture in general indoor and outdoor scenes, including crowded scenes with many people nearby, which enables reconstruction in larger-scale activities. Our approach combines the strength of a new generative pose estimation framework for fisheye views with a ConvNet-based body-part detector trained on a large new dataset. It is particularly useful in virtual reality to freely roam and interact, while seeing the fully motion-captured virtual body."}
{"_id": "f741cba061655581f6fbb628613d0669c4bdecd5", "title": "Deep Cosine Metric Learning for Person Re-identification", "text": "Metric learning aims to construct an embedding where two extracted features corresponding to the same identity are likely to be closer than features from different identities. This paper presents a method for learning such a feature space where the cosine similarity is effectively optimized through a simple re-parametrization of the conventional softmax classification regime. At test time, the final classification layer can be stripped from the network to facilitate nearest neighbor queries on unseen individuals using the cosine similarity metric. This approach presents a simple alternative to direct metric learning objectives such as siamese networks that have required sophisticated pair or triplet sampling strategies in the past. The method is evaluated on two large-scale pedestrian re-identification datasets where competitive results are achieved overall. In particular, we achieve better generalization on the test set compared to a network trained with triplet loss."}
{"_id": "87bd7d70a9733a6856b78606d5fca85a4ce8e6d1", "title": "Grail: context-aware fixing of concurrency bugs", "text": "Writing efficient synchronization for multithreaded programs is notoriously hard. The resulting code often contains subtle concurrency bugs. Even worse, many bug fixes introduce new bugs. A classic example, seen widely in practice, is deadlocks resulting from fixing of an atomicity violation. These complexities have motivated the development of automated fixing techniques. Current techniques generate fixes that are typically conservative, giving up on available parallelism. Moreover, some of the techniques cannot guarantee the correctness of a fix, and may introduce deadlocks similarly to manual fix, whereas techniques that ensure correctness do so at the expense of even greater performance loss. We present Grail, a novel fixing algorithm that departs from previous techniques by simultaneously providing both correctness and optimality guarantees. Grail synthesizes bug-free yet optimal lock-based synchronization. To achieve this, Grail builds an analysis model of the buggy code that is both contextual, distinguishing different aliasing contexts to ensure efficiency, and global, accounting for the entire synchronization behavior of the involved threads to ensure correctness. Evaluation of Grail on 12 bugs from popular codebases confirms its practical advantages, especially compared with existing techniques: Grail patches are, in general, >=40% more efficient than the patches produced by other techniques, and incur only 2% overhead."}
{"_id": "a1328ce3f3a3245b39e61edd8ab1fb3eed9f0e33", "title": "Multiple Trench Split-gate SOI LDMOS Integrated With Schottky Rectifier", "text": "In this paper, a multiple trench split-gate silicon-on-insulator (SOI) lateral double-diffused MOSFET with a Schottky rectifier (MTS-SG-LDMOS) is proposed and its characteristics are studied using 2-D simulations. The new structure features double oxide trenches, a floating polysilicon split-gate, and a Schottky rectifier. Each oxide trench includes a vertical field plate which enhances the depletion of the drift region and modulates the bulk electric field. As the simulation results, when compared to the conventional SOI LDMOS (C-LDMOS), the breakdown voltage in the MTS-SG-LDMOS increases from 297 to 350 V, the specific on-state resistance ( ${R} _{ \\mathrm{\\scriptscriptstyle ON},\\mathsf {sp}}$ ) decreases from 142.9 to 48.2 $\\text{m}\\Omega \\cdot \\mathsf {cm}^{{\\mathsf {2}}}$ , and the gate\u2013drain charge ( ${Q} _{{\\mathsf {gd}}}$ ) decreases from 19 to 9 pC. Moreover, the reverse recovery time of the proposed structure shows a 73.6% reduction as compared to the C-LDMOS device."}
{"_id": "259b17967d7af2e8fac64a073aa9b3b37431440f", "title": "Crafting Adversarial Examples For Speech Paralinguistics Applications", "text": "Computational paralinguistic analysis is increasingly being used in a wide range of applications, including securitysensitive applications such as speaker verification, deceptive speech detection, and medical diagnostics. While state-ofthe-art machine learning techniques, such as deep neural networks, can provide robust and accurate speech analysis, they are susceptible to adversarial attacks. In this work, we propose a novel end-to-end scheme to generate adversarial examples by perturbing directly the raw waveform of an audio recording rather than specific acoustic features. Our experiments show that the proposed adversarial perturbation can lead to a significant performance drop of state-of-the-art deep neural networks, while only minimally impairing the audio quality."}
{"_id": "76dbaa744a42a95f799a95990273a716d8baaf1d", "title": "Wideband Patch Antenna for HPM Applications", "text": "This paper presents the design, fabrication, and characterization of a compact wideband antenna for high-power microwave applications. The antennas proposed are array of high-power wideband patches with high compactness and less than \u03bb/10 thick. The concept developed can be fed by high-voltage signals (up to 60 kV) in repetitive operation. Two designs are produced at central frequencies of 350 MHz and 1 GHz. Their relative bandwidth is larger than 40% at 350 MHz and 25% at 1 GHz for S11 <; - 10 dB, respectively. The arrays studied produce a gain of more than 14 dB."}
{"_id": "384c6943777e950f56effaeb6f9a0bf614145718", "title": "Aligning multiple genomic sequences with the threaded blockset aligner.", "text": "We define a \"threaded blockset,\" which is a novel generalization of the classic notion of a multiple alignment. A new computer program called TBA (for \"threaded blockset aligner\") builds a threaded blockset under the assumption that all matching segments occur in the same order and orientation in the given sequences; inversions and duplications are not addressed. TBA is designed to be appropriate for aligning many, but by no means all, megabase-sized regions of multiple mammalian genomes. The output of TBA can be projected onto any genome chosen as a reference, thus guaranteeing that different projections present consistent predictions of which genomic positions are orthologous. This capability is illustrated using a new visualization tool to view TBA-generated alignments of vertebrate Hox clusters from both the mammalian and fish perspectives. Experimental evaluation of alignment quality, using a program that simulates evolutionary change in genomic sequences, indicates that TBA is more accurate than earlier programs. To perform the dynamic-programming alignment step, TBA runs a stand-alone program called MULTIZ, which can be used to align highly rearranged or incompletely sequenced genomes. We describe our use of MULTIZ to produce the whole-genome multiple alignments at the Santa Cruz Genome Browser."}
{"_id": "b699999bc1dabc99355111fe0d3e97fd1dff7f30", "title": "Low-Voltage Wilson Current Mirrors in CMOS", "text": "In this paper, we describe three simple low-voltage CMOS analogs of the Wilson current mirror that function well at all current levels, ranging from weak inversion to strong inversion. Each of these current mirrors can operate on a low power-supply voltage of a diode drop plus two saturation voltages and features a wide output-voltage swing with a cascode-type incremental output impedance. Two of the circuits requires an input voltage of a diode drop plus a saturation voltage while the third one features a low input voltage of a saturation voltage. We present experimental results from versions of these three current mirrors that were fabricated in a 0.5-mum CMOS process through MOSIS, comparing them with CMOS implementations of the conventional Wilson and super-Wilson current mirrors."}
{"_id": "ddf4adbcf42fdbb39e560f205ab75ad9cc469b32", "title": "Trend Recalling Algorithm for Automated Online Trading in Stock Market", "text": "Unlike financial forecasting, a type of mechanical trading technique called Trend Following (TF) doesn\u2019t predict any market movement; instead it identifies a trend at early time of the day, and trades automatically afterwards by a pre-defined strategy regardless of the moving market directions during run time. Trend following trading has a long and successful history among speculators. The traditional TF trading method is by human judgment in setting the rules (aka the strategy). Subsequently the TF strategy is executed in pure objective operational manner. Finding the correct strategy at the beginning is crucial in TF. This usually involves human intervention in first identifying a trend, and configuring when to place an order and close it out, when certain conditions are met. In this paper, we presented a new type of TF, namely Trend Recalling algorithm that operates in a totally automated manner. It works by partially matching the current trend with one of the proven successful patterns from the past. Our experiments based on real stock market data show that this algorithm has an edge over the other trend following methods in profitability. The new algorithm is also compared to time-series forecasting type of stock trading, and it can even outperform the best forecasting type in a"}
{"_id": "4d439e84ce0be39e3b7653116c1d887d95d39437", "title": "Illuminant Aware Gamut-Based Color Transfer", "text": "This paper proposes a new approach for color transfer between two images. Our method is unique in its consideration of the scene illumination and the constraint that the mapped image must be within the color gamut of the target image. Specifically, our approach first performs a white-balance step on both images to remove color casts caused by different illuminations in the source and target image. We then align each image to share the same \u2018white axis\u2019 and perform a gradient preserving histogram matching technique along this axis to match the tone distribution between the two images. We show that this illuminant-aware strategy gives a better result than directly working with the original source and target image\u2019s luminance channel as done by many previous methods. Afterwards, our method performs a full gamut-based mapping technique rather than processing each channel separately. This guarantees that the colors of our transferred image lie within the target gamut. Our experimental results show that this combined illuminant-aware and gamut-based strategy produces more compelling results than previous methods. We detail our approach and demonstrate its effectiveness on a number of examples."}
{"_id": "db5511e90b2f4d650067ebf934294617eff81eca", "title": "Locate the Hate: Detecting Tweets against Blacks", "text": "Although the social medium Twitter grants users freedom of speech, its instantaneous nature and retweeting features also amplify hate speech. Because Twitter has a sizeable black constituency, racist tweets against blacks are especially detrimental in the Twitter community, though this effect may not be obvious against a backdrop of half a billion tweets a day. We apply a supervised machine learning approach, employing inexpensively acquired labeled data from diverse Twitter accounts to learn a binary classifier for the labels \u201cracist\u201d and \u201cnonracist.\u201d The classifier has a 76% average accuracy on individual tweets, suggesting that with further improvements, our work can contribute data on the sources of anti-black hate speech."}
{"_id": "3b4f66d5a51025a0feeab26e767869f17a711019", "title": "General 3D modelling of novel objects from a single view", "text": "In this paper we present a method for building models for grasping from a single 3D snapshot of a scene composed of objects of daily use in human living environments. We employ fast shape estimation, probabilistic model fitting and verification methods capable of dealing with different kinds of symmetries, and combine these with a triangular mesh of the parts that have no other representation to model previously unseen objects of arbitrary shape. Our approach is enhanced by the information given by the geometric clues about different parts of objects which serve as prior information for the selection of the appropriate reconstruction method. While we designed our system for grasping based on single view 3D data, its generality allows us to also use the combination of multiple views. We present two application scenarios that require complete geometric models: grasp planning and locating objects in camera images."}
{"_id": "16a39222c0297c55401b94aa3bed09ca825be732", "title": "Graph Drawing by Stress Majorization", "text": "One of the most popular graph drawing methods is based of achieving graphtheoretic target ditsances. This method was used by Kamada and Kawai [15], who formulated it as an energy optimization problem. Their energy is known in the multidimensional scaling (MDS) community as the stress function. In this work, we show how to draw graphs by stress majorization, adapting a technique known in the MDS community for more than two decades. It appears that majorization has advantages over the technique of Kamada and Kawai in running time and stability. We also present a few extensions to the basic energy model which can improve layout quality and computation speed in practice. Majorization-based optimization is essential to these extensions."}
{"_id": "198a8507c7b26f89419430ed51f1c7675e5fa6c7", "title": "Eigensolver Methods for Progressive Multidimensional Scaling of Large Data", "text": "We present a novel sampling-based approximation technique for classical multidimensional scaling that yields an extremely fast layout algorithm suitable even for very large graphs. It produces layouts that compare favorably with other methods for drawing large graphs, and it is among the fastest methods available. In addition, our approach allows for progressive computation, i.e. a rough approximation of the layout can be produced even faster, and then be refined until satisfaction."}
{"_id": "4c77e5650e2328390995f3219ec44a4efd803b84", "title": "Accelerating Large Graph Algorithms on the GPU Using CUDA", "text": "Large graphs involving millions of vertices are common in many practical applications and are challenging to process. Practical-time implementations using high-end computers are reported but are accessible only to a few. Graphics Processing Units (GPUs) of today have high computation power and low price. They have a restrictive programming model and are tricky to use. The G80 line of Nvidia GPUs can be treated as a SIMD processor array using the CUDA programming model. We present a few fundamental algorithms \u2013 including breadth first search, single source shortest path, and all-pairs shortest path \u2013 using CUDA on large graphs. We can compute the single source shortest path on a 10 million vertex graph in 1.5 seconds using the Nvidia 8800GTX GPU costing $600. In some cases optimal sequential algorithm is not the fastest on the GPU architecture. GPUs have great potential as high-performance co-processors."}
{"_id": "3578078d7071459135d4c89f2f557d29678d6be1", "title": "Self-monitoring: appraisal and reappraisal.", "text": "Theory and research on self-monitoring have accumulated into a sizable literature on the impact of variation in the extent to which people cultivate public appearances in diverse domains of social functioning. Yet self-monitoring and its measure, the Self-Monitoring Scale, are surrounded by controversy generated by conflicting answers to the critical question, Is self-monitoring a unitary phenomenon? A primary source of answers to this question has been largely neglected--the Self-Monitoring Scale's relations with external criteria. We propose a quantitative method to examine the self-monitoring literature and thereby address major issues of the controversy. Application of this method reveals that, with important exceptions, a wide range of external criteria tap a dimension directly measured by the Self-Monitoring Scale. We discuss what this appraisal reveals about with self-monitoring is and is not."}
{"_id": "20ee7a52a4a75762ddcb784b77286b4261e53723", "title": "Low cost high performance uncertainty quantification", "text": "Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques, that employ matrix factorizations, incur a cubic cost which quickly becomes intractable with the current explosion of data sizes. In this work we reduce this complexity to quadratic with the synergy of two algorithms that gracefully complement each other and lead to a radically different approach. First, we turned to stochastic estimation of the diagonal. This allowed us to cast the problem as a linear system with a relatively small number of multiple right hand sides. Second, for this linear system we developed a novel, mixed precision, iterative refinement scheme, which uses iterative solvers instead of matrix factorizations. We demonstrate that the new framework not only achieves the much needed quadratic cost but in addition offers excellent opportunities for scaling at massively parallel environments. We based our implementation on BLAS 3 kernels that ensure very high processor performance. We achieved a peak performance of 730 TFlops on 72 BG/P racks, with a sustained performance 73% of theoretical peak. We stress that the techniques presented in this work are quite general and applicable to several other important applications."}
{"_id": "da5075fa79da6cd7b81e5d3dc24161217ef86368", "title": "ViP-CNN: A Visual Phrase Reasoning Convolutional Neural Network for Visual Relationship Detection", "text": "As the intermediate level task connecting image captioning and object detection, visual relationship detection started to catch researchers\u2019 attention because of its descriptive power and clear structure. It localizes the objects and captures their interactions with a subject-predicateobject triplet, e.g. \u3008person-ride-horse\u3009. In this paper, the visual relationship is considered as a phrase with three components. So we formulate the visual relationship detection as three inter-connected recognition problems and propose a Visual Phrase reasoning Convolutional Neural Network (ViP-CNN) to address them simultaneously. In ViP-CNN, we present a Visual Phrase Reasoning Structure (VPRS) to set up the connection among the relationship components and help the model consider the three problems jointly. Corresponding non-maximum suppression method and model training strategy are also proposed. Experimental results show that our ViP-CNN outperforms the stateof-art method both in speed and accuracy. We further pretrain our model on our cleansed Visual Genome Relationship dataset, which is found to perform better than the pretraining on the ImageNet for this task."}
{"_id": "ac47fb864a0092a931b0b40e1821c9a0c74795ea", "title": "Goal Setting and Self-Efficacy During Self-Regulated Learning", "text": "This article focuses on the self-regulated learning processes of goal setting and perceived self-efficacy. Students enter learning activities with goals and self-efficacy for goal attainment. As learners work on tasks, they observe their own performances and evaluate their own goal progress. Self-efficacy and goal setting are affected by selfobservation, self-judgment, and self-reaction. When students perceive satisfactory goal progress, they feel capable of improving their skills; goal attainment, coupled with high self-efficacy, leads students to set new challenging goals. Research is reviewed on goal properties (specificity, proximity, difficulty), self-set goals, progress feedback, contracts and conferences, and conceptions of ability. Ways of teaching students to set realistic goals and evaluate progress include establishing upper and lower goal limits and employing games, contracts, and conferences. Future research might clarify the relation of goal setting and self-efficacy to transfer, goal orientations, and affective reactions. Article: Self-regulated learning occurs when students activate and sustain cognitions and behaviors systematically oriented toward attainment of learning goals. Self-regulated learning processes involve goal-directed activities that students instigate, modify, and sustain (Zimmerman, 1989). These activities include attending to instruction, processing and integrating knowledge, rehearsing information to be remembered, and developing and maintaining positive beliefs about learning capabilities and anticipated outcomes of actions (Schunk, 1989). This article focuses on two self-regulated learning processes: goal setting and perceived self-efficacy. As used in this article, a goal is what an individual is consciously trying to accomplish, goal setting involves establishing a goal and modifying it as necessary, and perceived self-efficacy refers to beliefs concerning one's capabilities to attain designated levels of performance (Bandura, 1986, 1988). I initially present a theoretical overview of self-regulated learning to include the roles of goal setting and self-efficacy. I discuss research bearing on these processes, and conclude with implications for educational practice and future research suggestions. THEORETICAL OVERVIEW Subprocesses of Self-Regulated Learning Investigators working within a social cognitive learning theory framework view self-regulation as comprising three subprocesses: self-observation, self-judgment, and self-reaction (Bandura, 1986; Kanfer & Gaelick, 1986; Schunk, 1989). A model highlighting goal setting and self-efficacy is portrayed in Figure 1. Students enter learning activities with such goals as acquiring knowledge, solving problems, and finishing workbook pages. Self-efficacy for goal attainment is influenced by abilities, prior experiences, attitudes toward learning, instruction, and the social context. As students work on tasks, they observe their performances, evaluate goal progress, and continue their work or change their task approach. Self-evaluation of goal progress as satisfactory enhances feelings of efficacy; goal attainment leads students to set new challenging goals. Self-observation. Self-observation, or deliberate attention to aspects of one's behaviors, informs and motivates. Behaviors can be assessed on such dimensions as quality, rate, quantity, and originality. 
The information gained is used to gauge goal progress. Self-observation also can motivate behavioral change. Many students with poor study habits are surprised to learn that they waste much study time on nonacademic activities. Sustained motivation depends on students believing that if they change their behavior they will experience better outcomes, valuing those outcomes, and feeling they can change those habits (high self-efficacy). Self-observation is aided with self-recording, where behavior instances are recorded along with such features as time, place, and duration of occurrence (Mace, Belfiore, & Shea, 1989). Without recording, observations may not faithfully reflect behaviors due to selective memory. Behaviors should be observed close in time to their occurrence and on a continuous basis rather than intermittently. Self-judgment. Self-judgment involves comparing present performance with one's goal. Self-judgments are affected by the type of standards employed, goal properties (discussed in next section), importance of goal attainment, and performance attributions. Learning goals may reflect absolute or normative standards (Bandura, 1986). Absolute standards are fixed (e.g., complete six workbook pages in 30 min). Grading systems often are based on absolute standards (A = 90-100, B = 80-89). Normative standards employ performances by others. Social comparison of one's performances with those of peers helps one determine behavioral appropriateness. Standards are informative; comparing one's performance with standards informs one of goal progress. Standards also can motivate when they show that goal progress is being made. Self-judgments can be affected by the importance of goal attainment. When individuals care little about how they perform, they may not assess their performance or expend effort to improve (Bandura, 1986). Judgments of goal progress are more likely to be made for goals one personally values. Attributions, or perceived causes of outcomes (successes, failures), influence achievement beliefs and behaviors (Weiner, 1985). Achievement outcomes often are attributed to such causes as ability, effort, task difficulty, and luck (Frieze, 1980; Weiner, 1979). Children view effort as the prime cause of outcomes (Nicholls, 1984). With development, ability attributions become increasingly important. Whether goal progress is judged as acceptable depends on its attribution. Students who attribute successes to teacher assistance may hold low self-efficacy for good performance if they believe they cannot succeed on their own. If they believe they lack ability, they may judge learning progress as deficient and be unmotivated to work harder. Self-reaction. Self-reactions to goal progress motivate behavior (Bandura, 1986). The belief that one's progress is acceptable, along with anticipated satisfaction of goal accomplishment, enhances self-efficacy and motivation. Negative evaluations will not decrease motivation if individuals believe they are capable of improving (Schunk, 1989). Motivation will not improve if students believe they lack the ability to succeed and increased effort will not help. Individuals routinely make such rewards as work breaks, new clothes, and nights on the town contingent on task progress or goal attainment. Anticipation of rewards enhances motivation and self-efficacy. Compensations raise efficacy when they are tied to accomplishments. 
If students are told that they will earn rewards based on what they accomplish, they become instilled with a sense of efficacy for learning. Self-efficacy is validated as students work at a task and note their own progress; receipt of the reward then becomes a symbol of the progress made. Goal Setting The effects of goals on behavior depend on their properties: specificity, proximity, and difficulty level (Bandura, 1988; Locke, Shaw, Saari, & Latham, 1981). Goals incorporating specific performance standards are more likely to enhance learning and activate self-evaluations than general goals (i.e., \"Do your best\"). Specific goals boost performance by greater specification of the amount of effort required for success and the self-satisfaction anticipated. Specific goals promote self-efficacy because progress is easy to gauge. Proximal goals result in greater motivation than distant goals. It is easier to gauge progress toward a proximal goal, and the perception of progress raises self-efficacy. Proximal goals are especially influential with young children, who do not represent distant outcomes in thought. Goal difficulty, or the level of task proficiency required as assessed against a standard, influences the effort learners expend to attain a goal. Assuming requisite skills, individuals expend greater effort to attain difficult goals than when standards are lower. Learners initially may doubt whether they can attain difficult goals, but working toward them builds self-efficacy. Self-Efficacy Self-efficacy is hypothesized to influence choice of activities, effort expended, and persistence (Bandura, 1986). Students who hold low self-efficacy for learning may avoid tasks; those who judge themselves efficacious are more likely to participate. When facing difficulties, self-efficacious learners expend greater effort and persist longer than students who doubt their capabilities. Students acquire information about their self-efficacy in a given domain from their performances, observations of models (i.e., vicarious experiences), forms of social persuasion, and physiological indexes (e.g., heart rate, sweating). Information acquired from these sources does not influence efficacy automatically but is cognitively appraised. Efficacy appraisal is an inferential process; persons weigh and combine the contributions of personal and situational factors. In assessing self-efficacy, students take into account such factors as perceived ability, expended effort, task difficulty, teacher assistance, other situational factors, and patterns of successes and failures. The notion that personal expectations influence behavior is not unique to self-efficacy theory. Self-efficacy is conceptually similar to such other constructs as perceived competence, expectations of success, and self-confidence. One means of distinguishing constructs involves the generality of the constructs. Some constructs (e.g., self-concept, self-esteem) are hypothesized to affect diverse areas of human functioning. Though perceptions of efficacy can generalize, they offer the best prediction of behavior within specific domains (e.g., self-efficacy for acquiring fraction"}
{"_id": "094ca99cc94e38984823776158da738e5bc3963d", "title": "Learning to predict by the methods of temporal differences", "text": "This article introduces a class of incremental learning procedures specialized for prediction-that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage."}
{"_id": "d379e858069861ca388659e578abb7a96c571e85", "title": "A review on methods and classifiers in lip reading", "text": "The idea of lip reading as a visual technique which people may use to translate lip movement into phrases without relying on speech itself is fascinating. There are numerous application areas in which lip reading could provide full assistance. Although there may be a downside to using the lip reading system, whether it may range from problems such as time constraint to minor word recognition mistakes, further development of the system should be an active cycle. Ongoing research into improvement of the lip reading system performance by way of suitable choices of feature extraction and classifiers is essential to track the developing trend in both fields of technology as well as in pattern recognition. This paper discusses a short review on the existing methods and classifiers which have been used in previous work in the field of lip reading."}
{"_id": "bba260607d53209373176bce4b515441de510d1e", "title": "Classical grasp quality evaluation: New algorithms and theory", "text": "This paper investigates theoretical properties of a well-known L1 grasp quality measure Q whose approximation Q-l is commonly used for the evaluation of grasps and where the precision of Q-l depends on an approximation of a cone by a convex polyhedral cone with l edges. We prove the Lipschitz continuity of Q and provide an explicit Lipschitz bound that can be used to infer the stability of grasps lying in a neighbourhood of a known grasp. We think of Q-l as a lower bound estimate to Q and describe an algorithm for computing an upper bound Q+. We provide worst-case error bounds relating Q and Q-l. Furthermore, we develop a novel grasp hypothesis rejection algorithm which can exclude unstable grasps much faster than current implementations. Our algorithm is based on a formulation of the grasp quality evaluation problem as an optimization problem, and we show how our algorithm can be used to improve the efficiency of sampling based grasp hypotheses generation methods."}
{"_id": "cd74b296bed69cb6e9f838e9330a975ebac5139c", "title": "A high frequency high voltage power supply", "text": "Novel compact high frequency high voltage (HV) power supply is introduced for small volume, lightweight, ultra fast high voltage output application. The two key factors high frequency HV transformer and voltage multiplier diodes are evaluated in this paper. The HV resonant tank size is only 1L for 400kHz\u223c500kHz 35kV 2kW output, the rise time for output HV is around 100us by lab prototype experiment results."}
{"_id": "45a33bddf460554b7c1f550aa382d63345a20704", "title": "Data-Driven Intelligent Transportation Systems: A Survey", "text": "For the last two decades, intelligent transportation systems (ITS) have emerged as an efficient way of improving the performance of transportation systems, enhancing travel security, and providing more choices to travelers. A significant change in ITS in recent years is that much more data are collected from a variety of sources and can be processed into various forms for different stakeholders. The availability of a large amount of data can potentially lead to a revolution in ITS development, changing an ITS from a conventional technology-driven system into a more powerful multifunctional data-driven intelligent transportation system (D2ITS) : a system that is vision, multisource, and learning algorithm driven to optimize its performance. Furthermore, D2ITS is trending to become a privacy-aware people-centric more intelligent system. In this paper, we provide a survey on the development of D2ITS, discussing the functionality of its key components and some deployment issues associated with D2ITS Future research directions for the development of D2ITS is also presented."}
{"_id": "f032295b9f8e7771c43a1296a1e305f23e5f4fcb", "title": "Modified Convolutional Neural Network Based on Dropout and the Stochastic Gradient Descent Optimizer", "text": "This study proposes a modified convolutional neural network (CNN) algorithm that is based on dropout and the stochastic gradient descent (SGD) optimizer (MCNN-DS), after analyzing the problems of CNNs in extracting the convolution features, to improve the feature recognition rate and reduce the time-cost of CNNs. The MCNN-DS has a quadratic CNN structure and adopts the rectified linear unit as the activation function to avoid the gradient problem and accelerate convergence. To address the overfitting problem, the algorithm uses an SGD optimizer, which is implemented by inserting a dropout layer into the all-connected and output layers, to minimize cross entropy. This study used the datasets MNIST, HCL2000, and EnglishHand as the benchmark data, analyzed the performance of the SGD optimizer under different learning parameters, and found that the proposed algorithm exhibited good recognition performance when the learning rate was set to [0.05, 0.07]. The performances of WCNN, MLP-CNN, SVM-ELM, and MCNN-DS were compared. Statistical results showed the following: (1) For the benchmark MNIST, the MCNN-DS exhibited a high recognition rate of 99.97%, and the time-cost of the proposed algorithm was merely 21.95% of MLP-CNN, and 10.02% of SVM-ELM; (2) Compared with SVM-ELM, the average improvement in the recognition rate of MCNN-DS was 2.35% for the benchmark HCL2000, and the time-cost of MCNN-DS was only 15.41%; (3) For the EnglishHand test set, the lowest recognition rate of the algorithm was 84.93%, the highest recognition rate was 95.29%, and the average recognition rate was 89.77%."}
{"_id": "2fa4e164014a36c55b6cf6ed283333fb0bfa82e7", "title": "Designing Sketches for Similarity Filtering", "text": "The amounts of currently produced data emphasize the importance of techniques for efficient data processing. Searching big data collections according to similarity of data well corresponds to human perception. This paper is focused on similarity search using the concept of sketches \u2013 a compact bit string representations of data objects compared by Hamming distance, which can be used for filtering big datasets. The object-to-sketch transformation is a form of the dimensionality reduction and thus there are two basic contradictory requirements: (1) The length of the sketches should be small for efficient manipulation, but (2) longer sketches retain more information about the data objects. First, we study various sketching methods for data modeled by metric space and we analyse their quality. Specifically, we study importance of several sketch properties for similarity search and we propose a high quality sketching technique. Further, we focus on the length of sketches by studying mutual influence of sketch properties such as correlation of their bits and the intrinsic dimensionality of a set of sketches. The outcome is an equation that allows us to estimate a suitable length of sketches for an arbitrary given dataset. Finally, we empirically verify proposed approach on two real-life datasets."}
{"_id": "163131a9ad9d106058000a45bb8f4b65599859b3", "title": "Are emoticons good enough to train emotion classifiers of Arabic tweets?", "text": "Nowadays, the automatic detection of emotions is employed by many applications across different fields like security informatics, e-learning, humor detection, targeted advertising, etc. Many of these applications focus on social media. In this study, we address the problem of emotion detection in Arabic tweets. We focus on the supervised approach for this problem where a classifier is trained on an already labeled dataset. Typically, such a training set is manually annotated, which is expensive and time consuming. We propose to use an automatic approach to annotate the training data based on using emojis, which are a new generation of emoticons. We show that such an approach produces classifiers that are more accurate than the ones trained on a manually annotated dataset. To achieve our goal, a dataset of emotional Arabic tweets is constructed, where the emotion classes under consideration are: anger, disgust, joy and sadness. Moreover, we consider two classifiers: Support Vector Machine (SVM) and Multinomial Naive Bayes (MNB). The results of the tests show that the automatic labeling approaches using SVM and MNB outperform manual labeling approaches."}
{"_id": "82f9ed9ea80957a74bfe57affe1a017c913be5e1", "title": "X-Ray PoseNet: 6 DoF Pose Estimation for Mobile X-Ray Devices", "text": "Precise reconstruction of 3D volumes from X-ray projections requires precisely pre-calibrated systems where accurate knowledge of the systems geometric parameters is known ahead. However, when dealing with mobile X-ray devices such calibration parameters are unknown. Joint estimation of the systems calibration parameters and 3d reconstruction is a heavily unconstrained problem, especially when the projections are arbitrary. In industrial applications, that we target here, nominal CAD models of the object to be reconstructed are usually available. We rely on this prior information and employ Deep Learning to learn the mapping between simulated X-ray projections and its pose. Moreover, we introduce the reconstruction loss in addition to the pose loss to further improve the reconstruction quality. Finally, we demonstrate the generalization capabilities of our method in case where poses can be learned on instances of the objects belonging to the same class, allowing pose estimation of unseen objects from the same category, thus eliminating the need for the actual CAD model. We performed exhaustive evaluation demonstrating the quality of our results on both synthetic and real data."}
{"_id": "56121c8b56688a0fcce0358f74a2c98566ff80e2", "title": "A low-cost smart sensor for non intrusive load monitoring applications", "text": "Next generation Smart Cities have the potential to enhance the citizens quality of life and to reduce the overall energy expenditure. In particular, emerging smart metering technologies promise to greatly increase energy efficiency in the residential and industrial areas. In this context, new power metering methods such as Non-Intrusive Load Monitoring (NILM) can provide important real-time information about the contribution of any single appliance. In this paper, we present a complete hardware-software design that concentrates locally an efficient event-based supervised NILM algorithm for load disaggregation. This new kind of power analysis, which usually requires high computing capability, is executed real-time on a low-cost and easy-to-install smart meter ready for the Internet of Things (IoT)."}
{"_id": "ac474d5a3cf7afeba424e9e64984f13eb22e2ec6", "title": "Linearity and efficiency enhancement strategies for 4G wireless power amplifier designs", "text": "Next generation wireless transmitters will rely on highly integrated silicon-based solutions to realize the cost and performance goals of the 4G market. This will require increased use of digital compensation techniques and innovative circuit approaches to maximize power and efficiency and minimize linearity degradation. This paper summarizes the circuit and system strategies being developed to meet these aggressive performance goals."}
{"_id": "51354699bfd423bb99d34c06d0061540d8ed178e", "title": "Vector space model adaptation and pseudo relevance feedback for content-based image retrieval", "text": "Image retrieval is an important problem for researchers in computer vision and content-based image retrieval (CBIR) fields. Over the last decades, many image retrieval systems were based on image representation as a set of extracted low-level features such as color, texture and shape. Then, systems calculate similarity metrics between features in order to find similar images to a query image. The disadvantage of this approach is that images visually and semantically different may be similar in the low level feature space. So, it is necessary to develop tools to optimize retrieval of information. Integration of vector space models is one solution to improve the performance of image retrieval. In this paper, we present an efficient and effective retrieval framework which includes a vectorization technique combined with a pseudo relevance model. The idea is to transform any similarity matching model (between images) to a vector space model providing a score. A study on several methodologies to obtain the vectorization is presented. Some experiments have been undertaken on Wang, Oxford5k and Inria Holidays datasets to show the performance of our proposed framework."}
{"_id": "8da17dde7a90af885d8d7413b2ae44202607ff54", "title": "Deep Network Embedding with Aggregated Proximity Preserving", "text": "Network embedding is an effective method to learn a low-dimensional feature vector representation for each node of a given network. In this paper, we propose a deep network embedding model with aggregated proximity preserving (DNE-APP). Firstly, an overall network proximity matrix is generated to capture both local and global network structural information, by aggregating different k-th order network proximities between different nodes. Then, a semi-supervised stacked auto-encoder is employed to learn the hidden representations which can best preserve the aggregated proximity in the original network, and also map the node pairs with higher proximity closer to each other in the embedding space. With the hidden representations learned by DNE-APP, we apply vector-based machine learning techniques to conduct node classification and link label prediction tasks on the real-world datasets. Experimental results demonstrate the superiority of our proposed DNE-APP model over the state-of-the-art network embedding algorithms."}
{"_id": "515ccd058edbbc588b4e98926897af5b1969b559", "title": "Counterfactual Fairness", "text": "Machine learning can impact people with legal or ethical consequences when it is used to automate decisions in areas such as insurance, lending, hiring, and predictive policing. In many of these scenarios, previous decisions have been made that are unfairly biased against certain subpopulations, for example those of a particular race, gender, or sexual orientation. Since this past data may be biased, machine learning predictors must account for this to avoid perpetuating or creating discriminatory practices. In this paper, we develop a framework for modeling fairness using tools from causal inference. Our definition of counterfactual fairness captures the intuition that a decision is fair towards an individual if it the same in (a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic group. We demonstrate our framework on a real-world problem of fair prediction of success in law school."}
{"_id": "1e295f4e195cf3d63a90ce85ca7ce29e0b42d8b7", "title": "A Corpus of Sentence-level Revisions in Academic Writing: A Step towards Understanding Statement Strength in Communication", "text": "The strength with which a statement is made can have a significant impact on the audience. For example, international relations can be strained by how the media in one country describes an event in another; and papers can be rejected because they overstate or understate their findings. It is thus important to understand the effects of statement strength. A first step is to be able to distinguish between strong and weak statements. However, even this problem is understudied, partly due to a lack of data. Since strength is inherently relative, revisions of texts that make claims are a natural source of data on strength differences. In this paper, we introduce a corpus of sentence-level revisions from academic writing. We also describe insights gained from our annotation efforts for this task."}
{"_id": "69b755a7593a94d39af20f781d8db0c6fd62be53", "title": "CVSS Attack Graphs", "text": "Attack models and attack graphs are efficient tools to describe and analyse attack scenarios aimed at computer networks. More precisely, attack graphs give all possible scenarios for an attacker to reach a certain goal, exploiting vulnerabilities of the targeted network. Nevertheless they give no information about the damages induced by these attacks, nor about the probability of exploitation of these scenarios. In this paper, we propose to combine attack graphs and CVSS framework, in order to add damage and exploitability probability information. Then, we define a notion of risk for each attack scenario, which is based on quantitative information added to attack graphs."}
{"_id": "604809686a977df714f76b823d4fce48e67ee3ba", "title": "Better Summarization Evaluation with Word Embeddings for ROUGE", "text": "ROUGE is a widely adopted, automatic evaluation measure for text summarization. While it has been shown to correlate well with human judgements, it is biased towards surface lexical similarities. This makes it unsuitable for the evaluation of abstractive summarization, or summaries with substantial paraphrasing. We study the effectiveness of word embeddings to overcome this disadvantage of ROUGE. Specifically, instead of measuring lexical overlaps, word embeddings are used to compute the semantic similarity of the words used in summaries instead. Our experimental results show that our proposal is able to achieve better correlations with human judgements when measured with the Spearman and Kendall rank co-"}
{"_id": "191e6f705beb6d37eb610763bd885e43b96988e8", "title": "QUOTUS: The Structure of Political Media Coverage as Revealed by Quoting Patterns", "text": "Given the extremely large pool of events and stories available, media outlets need to focus on a subset of issues and aspects to convey to their audience. Outlets are often accused of exhibiting a systematic bias in this selection process, with different outlets portraying different versions of reality. However, in the absence of objective measures and empirical evidence, the direction and extent of systematicity remains widely disputed. In this paper we propose a framework based on quoting patterns for quantifying and characterizing the degree to which media outlets exhibit systematic bias. We apply this framework to a massive dataset of news articles spanning the six years of Obama's presidency and all of his speeches, and reveal that a systematic pattern does indeed emerge from the outlet's quoting behavior. Moreover, we show that this pattern can be successfully exploited in an unsupervised prediction setting, to determine which new quotes an outlet will select to broadcast. By encoding bias patterns in a low-rank space we provide an analysis of the structure of political media coverage. This reveals a latent media bias space that aligns surprisingly well with political ideology and outlet type. A linguistic analysis exposes striking differences across these latent dimensions, showing how the different types of media outlets portray different realities even when reporting on the same events. For example, outlets mapped to the mainstream conservative side of the latent space focus on quotes that portray a presidential persona disproportionately characterized by negativity."}
{"_id": "f82e86b853fcda968d63c0196bf2df6748b13233", "title": "Greedy Attribute Selection", "text": "Many real-world domains bless us with a wealth of attributes to use for learning. This blessing is often a curse: most inductive methods generalize worse given too many attributes than if given a good subset of those attributes. We examine this problem for two learning tasks taken from a calendar scheduling domain. We show that ID3/C4.5 generalizes poorly on these tasks if allowed to use all available attributes. We examine five greedy hillclimbing procedures that search for attribute sets that generalize well with ID3/C4.5. Experiments suggest hillclimbing in attribute space can yield substantial improvements in generalization performance. We present a caching scheme that makes attribute hillclimbing more practical computationally. We also compare the results of hillclimbing in attribute space with FOCUS and RELIEF on the two tasks."}
{"_id": "3289b159b5e01d7fd024cac502edd94f7b49322d", "title": "RCS Reduction of Patch Array Antenna by Electromagnetic Band-Gap Structure", "text": "In this letter, electromagnetic band-gap (EBG) structure is used to reduce the radar cross section (RCS) of the patch array antenna. The proposition of this method is based on the high impedance characteristic of the mushroom-like EBG structure. The basic patch array antenna is designed with a central frequency of 5.0 GHz while replacing the substrate of the array with the mushroom-like EBG structure. The frequency band in which RCS of the patch array antenna reduced significantly can be adjusted by parameters of the EBG. The backward RCS of the patch array antenna with EBG can be reduced as much as 10 dB compared to that of the conventional array antenna, and the degradation of the antenna performance is not significant."}
{"_id": "b23e737180e833f2442199964ad39dc2b3cc6a7d", "title": "A Frequency Selective Radome With Wideband Absorbing Properties", "text": "A frequency selective radome is presented, acting as a pass band filter at a given frequency band, while behaving as an absorber above the transmission band. The pass band behavior is obtained by a metallic FSS realized through a compact interdigitated Jerusalem cross element characterized by a very large rejection band. The metallic FSS is used as the ground plane of a thin wideband absorber based on resistive high-impedance surfaces within the total reflection band. The outer absorber reduces the signature of the antenna system when the radome is illuminated by out of band signals. The resistive FSS which comprises the absorber is designed so to minimize losses within the transmitting band of the radome. The composite structure is thoroughly analyzed by an efficient equivalent circuit approach and by full-wave numerical simulations."}
{"_id": "b4818b0e19eb562dd8918bd2196013d0e024b635", "title": "Design of Polarization Reconfigurable Antenna Using Metasurface", "text": "A planar polarization-reconfigurable metasurfaced antenna (PRMS) designed using metasurface (MS) is proposed. The PRMS antenna consists of a planar MS placed atop of and in direct contact with a planar slot antenna, both having a circular shape with a diameter of 78 mm (0.9 \u03bb0), making it compact and low profile. By rotating the MS around the center with respect to the slot antenna, the PRMS antenna can be reconfigured to linear polarization, left-hand and right-hand circular polarizations. An equivalent circuit is used to explain the reconfigurability of the antenna. The PRMS antenna is studied and designed to operate at around 3.5 GHz using computer simulation. For verification of simulation results, the PRMS antenna is fabricated and measured. The antenna performance, in terms of polarization reconfigurability, axial-ratio bandwidth, impedance bandwidth, realized boresight gain and radiation pattern, is presented. Results show that the PRMS antenna in circular polarizations achieves an operating bandwidth of 3.3-3.7 GHz (i.e., fractional bandwidth 11.4%), a boresight gain of above 5 dBi and high-polarization isolation of larger than 15 dB. While the PRMS antenna in linear polarization achieves a gain of above 7.5 dBi with cross-polarization isolation larger than 50 dB."}
{"_id": "e6db4f3b1f12f3aaa08016babdeb8ff3a965ae42", "title": "Understanding of DC leakage and faults in floating photovoltaic systems: Trojan horse to undermine metallic infrastructure and safety", "text": "This article elaborates on a recently published issue that calls the entire PV community's and utility management attention. It specifically provides further analysis on DC leakage and fault detection blind spots associated with the operation of floating PV Systems. Floating systems are almost a standard practice in Europe when it comes to the operation of large-scale PV parks. A floating PV system has neither the positive nor the negative DC current-carrying conductors connected to earth. The system nevertheless benefits from other earthing connections, including the bonding to earth of exposed metals (e.g. metal frames of the PV modules, supporting infrastructure, combiner boxes) that could become energised in a fault situation or even under normal operation due to DC coupling mechanisms. These may entail safety as well as accelerated DC corrosion concerns."}
{"_id": "e66ac60a4cede28862bbcd55fda8d15d9b8dadf9", "title": "Combi-Tor: Track-to-Track Association Framework for Automotive Sensor Fusion", "text": "The data association algorithm plays the vital role of forming an appropriate and valid set of tracks from the available tracks at the fusion center, which are delivered by different sensor's local tracking systems. The architecture of the data association module has to be designed taking into account the fusion strategy of the sensor fusion system, the granularity and the quality of the data provided by the sensors. The current generation environment perception sensors used for automotive sensor fusion are capable of providing estimated kinematic and as well as non-kinematic information on the observed targets. This paper focuses on integrating the kinematic and non-kinematic information in a track-to-track association (T2TA) procedure. A scalable framework called Combi-Tor is introduced here that is designed to calculate the association decision using likelihood ratio tests based on the available kinematic and non-kinematic information on the targets, which are tracked and classified by different sensors. The calculation of the association decision includes the uncertainty in the sensor's local tracking and classification modules. The required sufficient statistical derivations are discussed. The performance of this T2TA framework and the traditional T2TA scheme considering only the kinematic information are evaluated using Monte-Carlo simulation. The initial results obtained using the real world sensor data is presented."}
{"_id": "b104a8302e512011776919c68d60c59140b03bbb", "title": "X-Ku Wide-Bandwidth GaN HEMT MMIC Amplifier with Small Deviation of Output Power and PAE", "text": "A new design methodology was proposed to obtain wide-bandwidth and flat-group-delay reactive-matching GaN HEMT MMIC amplifiers. Frequency dependence of the optimal source and load impedance of a GaN HEMT are derived as polynomial equations and matching circuits are designed by small signal simulation without the use of large-signal transistor model and large-signal simulation. Fabricated GaN HEMT MMIC amplifiers, which show a small deviation of Pout and PAE in the range of 8-18 GHz, prove that our methodology is suitable for the design of a wide-bandwidth MMIC amplifier."}
{"_id": "ed7d6051c2a51a0a3f2d4f5776a4a9917019907c", "title": "Capacity Analysis of Cooperative Relaying Systems Using Non-Orthogonal Multiple Access", "text": "In this letter, we propose the cooperative relaying system using non-orthogonal multiple access (NOMA) to improve the spectral efficiency. The achievable average rate of the proposed system is analyzed for independent Rayleigh fading channels, and also its asymptotic expression is provided. In addition, a suboptimal power allocation scheme for NOMA used at the source is proposed."}
{"_id": "457a3bff3d04b26c11ac770de1a6f3d018035a97", "title": "Tensor Decompositions, State of the Art and Applications", "text": "In this paper, we present a partial survey of the tools borrowed from tensor algebra, which have been utilized recently in Statistics and Signal Processing. It is shown why the decompositions well known in linear algebra can hardly be extended to tensors. The concept of rank is itself difficult to define, and its calculation raises difficulties. Numerical algorithms have nevertheless been developed, and some are reported here, but their limitations are emphasized. These reports hopefully open research perspectives for enterprising readers. in Mathematics in Signal Processing V, J. G. McWhirter and I. K. Proudler Eds., Oxford University Press, Oxford, UK, 2001"}
{"_id": "3ffdcfdd7511beab79539a96c2ae805c40160a36", "title": "Twin Clouds: An Architecture for Secure Cloud Computing (Extended Abstract)", "text": "Cloud computing promises a more cost effective enabling technology to outsource storage and computations. Existing approaches for secure outsourcing of data and arbitrary computations are either based on a single tamper-proof hardware, or based on recently proposed fully homomorphic encryption. The hardware based solutions are not scaleable, and fully homomorphic encryption is currently only of theoretical interest and very inefficient. In this paper we propose an architecture for secure outsourcing of data and arbitrary computations to an untrusted commodity cloud. In our approach, the user communicates with a trusted cloud (either a private cloud or built from multiple secure hardware modules) which encrypts and verifies the data stored and operations performed in the untrusted commodity cloud. We split the computations such that the trusted cloud is mostly used for security-critical operations in the less time-critical setup phase, whereas queries to the outsourced data are processed in parallel by the fast commodity cloud on encrypted data."}
{"_id": "3d088fe89529ea162282a2bc68234472ac43cad4", "title": "Etiology of the Psychopathic Serial Killer : An Analysis of Antisocial Personality Disorder , Psychopathy , and Serial Killer Personality and Crime Scene Characteristics", "text": "The purpose of this article is tomake the distinction between antisocial personality disorder and psychopathy, discuss possible etiologies of psychopathy, and analyze the crimes, personality characteristics, and historical aspects of psychopathic serial killers. The research indicates that both environmental and biological factors affect the development of psychopathy. Several different serial killers were compared to assess the similarities and differences between their histories, crimes, and personalities. Though there were marked differences between their crimes, startling historical and personality similarities were clearly identified. Based on these findings, the validity and reliability of offender profiling is also discussed. [Brief Treatment and Crisis Intervention 7:151\u2013160 (2007)]"}
{"_id": "77c26685b1eddc011c3b839edd6b0e13315711fa", "title": "May I help you? - Design of Human-like Polite Approaching Behavior-", "text": "When should service staff initiate interaction with a visitor? Neither simply-proactive (e.g. talk to everyone in a sight) nor passive (e.g. wait until being talked to) strategies are desired. This paper reports our modeling of polite approaching behavior. In a shopping mall, there are service staff members who politely approach visitors who need help. Our analysis revealed that staff members are sensitive to \"intentions\" of nearby visitors. That is, when a visitor intends to talk to a staff member and starts to approach, the staff member also walks a few steps toward the visitors in advance to being talked. Further, even when not being approached, staff members exhibit \"availability\" behavior in the case that a visitor's intention seems uncertain. We modeled these behaviors that are adaptive to pedestrians' intentions, occurred prior to initiation of conversation. The model was implemented into a robot and tested in a real shopping mall. The experiment confirmed that the proposed method is less intrusive to pedestrians, and that our robot successfully initiated interaction with pedestrians."}
{"_id": "c0f17f99c44807762f2a386ac6579c364330e082", "title": "A Review on Deep Learning Techniques Applied to Semantic Segmentation", "text": "Image semantic segmentation is more and more being of interest for computer vision and machine learning researchers. Many applications on the rise need accurate and efficient segmentation mechanisms: autonomous driving, indoor navigation, and even virtual or augmented reality systems to name a few. This demand coincides with the rise of deep learning approaches in almost every field or application target related to computer vision, including semantic segmentation or scene understanding. This paper provides a review on deep learning methods for semantic segmentation applied to various application areas. Firstly, we describe the terminology of this field as well as mandatory background concepts. Next, the main datasets and challenges are exposed to help researchers decide which are the ones that best suit their needs and their targets. Then, existing methods are reviewed, highlighting their contributions and their significance in the field. Finally, quantitative results are given for the described methods and the datasets in which they were evaluated, following up with a discussion of the results. At last, we point out a set of promising future works and draw our own conclusions about the state of the art of semantic segmentation using deep learning techniques."}
{"_id": "ad51140c8d5f56e2d205befa2f98c33566bd7b4f", "title": "A Comprehensive Survey of Recent Trends in Cloud Robotics Architectures and Applications", "text": "Cloud robotics has recently emerged as a collaborative technology between cloud computing and service robotics enabled through progress in wireless networking, large scale storage and communication technologies, and the ubiquitous presence of Internet resources over recent years. Cloud computing empowers robots by offering them faster and more powerful computational capabilities through massively parallel computation and higher data storage facilities. It also offers access to open-source, big datasets and software, cooperative learning capabilities through knowledge sharing, and human knowledge through crowdsourcing. The recent progress in cloud robotics has led to active research in this area spanning from the development of cloud robotics architectures to its varied applications in different domains. In this survey paper, we review the recent works in the area of cloud robotics technologies as well as its applications. We draw insights about the current trends in cloud robotics and discuss the challenges and limitations in the current literature, open research questions and future research directions."}
{"_id": "59ca2eaaee0d3486843f324dacf26b314203dc29", "title": "Standard Plane Localization in Fetal Ultrasound via Domain Transferred Deep Neural Networks", "text": "Automatic localization of the standard plane containing complicated anatomical structures in ultrasound (US) videos remains a challenging problem. In this paper, we present a learning-based approach to locate the fetal abdominal standard plane (FASP) in US videos by constructing a domain transferred deep convolutional neural network (CNN). Compared with previous works based on low-level features, our approach is able to represent the complicated appearance of the FASP and hence achieve better classification performance. More importantly, in order to reduce the overfitting problem caused by the small amount of training samples, we propose a transfer learning strategy, which transfers the knowledge in the low layers of a base CNN trained from a large database of natural images to our task-specific CNN. Extensive experiments demonstrate that our approach outperforms the state-of-the-art method for the FASP localization as well as the CNN only trained on the limited US training samples. The proposed approach can be easily extended to other similar medical image computing problems, which often suffer from the insufficient training samples when exploiting the deep CNN to represent high-level features."}
{"_id": "049724305ed60528958073d0c64d39977bb82c72", "title": "Alternative Splicing: New Insights from Global Analyses", "text": "Recent analyses of sequence and microarray data have suggested that alternative splicing plays a major role in the generation of proteomic and functional diversity in metazoan organisms. Efforts are now being directed at establishing the full repertoire of functionally relevant transcript variants generated by alternative splicing, the specific roles of such variants in normal and disease physiology, and how alternative splicing is coordinated on a global level to achieve cell- and tissue-specific functions. Recent progress in these areas is summarized in this review."}
{"_id": "86955608218ab293d41b6d1c0bf9e1be97f571d8", "title": "Theorizing Language Teacher Identity : Three Perspectives and Beyond", "text": ""}
{"_id": "a96ed352ac09fe88c79588d470c5fecca6324946", "title": "Survey of STT-MRAM Cell Design Strategies: Taxonomy and Sense Amplifier Tradeoffs for Resiliency", "text": "Spin-Transfer Torque Random Access Memory (STT-MRAM) has been explored as a post-CMOS technology for embedded and data storage applications seeking non-volatility, near-zero standby energy, and high density. Towards attaining these objectives for practical implementations, various techniques to mitigate the specific reliability challenges associated with STT-MRAM elements are surveyed, classified, and assessed in this article. Cost and suitability metrics assessed include the area of nanomagmetic and CMOS components per bit, access time and complexity, sense margin, and energy or power consumption costs versus resiliency benefits. Solutions to the reliability issues identified are addressed within a taxonomy created to categorize the current and future approaches to reliable STT-MRAM designs. A variety of destructive and non-destructive sensing schemes are assessed for process variation tolerance, read disturbance reduction, sense margin, and write polarization asymmetry compensation. The highest resiliency strategies deliver a sensing margin above 300mV while incurring low power and energy consumption on the order of picojoules and microwatts, respectively, and attaining read sense latency of a few nanoseconds down to hundreds of picoseconds for non-destructive and destructive sensing schemes, respectively."}
{"_id": "a8f9b218965d9c31f139be01da82bb90041d384f", "title": "Antropologia de la Informatica Social: Teoria de la Convergencia Tecno-Social", "text": "The traditional humanism of the twentieth century, inspired by the culture of the book, systematically distanced itself from the new society of digital information; the Internet and tools of information processing revolutionized the world, society during this period developed certain adaptive characteristics based on coexistence (Human Machine), this transformation sets based on the impact of three technology segments: devices, applications and infrastructure of social communication, which are involved in various physical, behavioural and cognitive changes of the human being; and the emergence of new models of influence and social control through the new ubiquitous communication; however in this new process of conviviality new models like the \"collaborative thinking\" and \"InfoSharing\" develop; managing social information under three Human ontological dimensions (h) Information (i) Machine (m), which is the basis of a new physical-cyber ecosystem, where they coexist and develop new social units called \"virtual communities \". This new communication infrastructure and social management of information given discovered areas of vulnerability \"social perspective of risk\", impacting all social units through massive impact vector (i); The virtual environment \"H + i + M\"; and its components, as well as the life cycle management of social information allows us to understand the path of integration \"Techno Social\" and setting a new contribution to cybernetics, within the convergence of technology with society and the new challenges of coexistence, aimed at a new holistic and not pragmatic vision, as the human component (h) in the virtual environment is the precursor of the future and needs to be studied not as an application, but as the hub of a new society."}
{"_id": "f0f700a162b932d24a12fe53dc7b603b136fb4f7", "title": "Sentiment-enhanced multidimensional analysis of online social networks: Perception of the mediterranean refugees crisis", "text": "We propose an analytical framework able to investigate discussions about polarized topics in online social networks from many different angles. The framework supports the analysis of social networks along several dimensions: time, space and sentiment. We show that the proposed analytical framework and the methodology can be used to mine knowledge about the perception of complex social phenomena. We selected the refugee crisis discussions over Twitter as a case study. This difficult and controversial topic is an increasingly important issue for the EU. The raw stream of tweets is enriched with space information (user and mentioned locations), and sentiment (positive vs. negative) w.r.t. refugees. Our study shows differences in positive and negative sentiment in EU countries, in particular in UK, and by matching events, locations and perception, it underlines opinion dynamics and common prejudices regarding the refugees."}
{"_id": "cbb47597ba3d5ec38c4a2beab1214af44d3152c5", "title": "Positive academic emotions moderate the relationship between self-regulation and academic achievement.", "text": "BACKGROUND\nResearch has shown how academic emotions are related to achievement and to cognitive/motivational variables that promote achievement. Mediated models have been proposed to account for the relationships among academic emotions, cognitive/motivational variables, and achievement, and research has supported such mediated models, particularly with negative emotions.\n\n\nAIMS\nThe study tested the hypotheses: (1) self-regulation and the positive academic emotions of enjoyment and pride are positive predictors of achievement; and (2) enjoyment and pride both moderate the relationship between self-regulation and achievement.\n\n\nSAMPLE\nParticipants were 1,345 students enrolled in various trigonometry classes in one university.\n\n\nMETHODS\nParticipants answered the Academic Emotions Questionnaire-Math (Pekrun, Goetz, & Frenzel, 2005) and a self-regulation scale (Pintrich, Smith, Garcia, & McKeachie, 1991) halfway through their trigonometry class. The students' final grades in the course were regressed to self-regulation, positive emotions, and the interaction terms to test the moderation effects.\n\n\nRESULTS AND CONCLUSIONS\nEnjoyment and pride were both positive predictors of grades; more importantly, both moderated the relationship between self-regulation and grades. For students who report higher levels of both positive emotions, self-regulation was positively associated with grades. However, for those who report lower levels of pride, self-regulation was not related to grades; and, for those who reported lower levels of enjoyment, self-regulation was negatively related to grades. The results are discussed in terms of how positive emotions indicate positive appraisals of task/outcome value, and thus enhance the positive links between cognitive/motivational variables and learning."}
{"_id": "4b06257a27f292d3025acaca05b3a27c9f1d1712", "title": "Adaptive systems for foreign exchange trading", "text": "Foreign exchange markets are notoriously difficult to predict. For many years academics and practitioners alike have tried to build trading models, but history has not been kind to their efforts. Consistently predicting FX markets has seemed like an impossible goal but recent advances in financial research now suggest otherwise. With newly developed computational techniques and newly available data, the development of successful trading models is looking possible. The Centre for Financial Research (CFR) at Cambridge University\u2019s Judge Institute of Management has been researching trading techniques in foreign exchange markets for a number of years. Over the last 18 months a joint project with HSBC Global Markets has looked at how the bank\u2019s proprietary information on customer order flow and on the customer limit order book can be used to enhance the profitability of technical trading systems in FX markets. Here we give an overview of that research and report our results."}
{"_id": "03aca587f27fda3cbdad708aa69c07fc71b691d7", "title": "Lung Pattern Classification for Interstitial Lung Diseases Using a Deep Convolutional Neural Network", "text": "Automated tissue characterization is one of the most crucial components of a computer aided diagnosis (CAD) system for interstitial lung diseases (ILDs). Although much research has been conducted in this field, the problem remains challenging. Deep learning techniques have recently achieved impressive results in a variety of computer vision problems, raising expectations that they might be applied in other domains, such as medical image analysis. In this paper, we propose and evaluate a convolutional neural network (CNN), designed for the classification of ILD patterns. The proposed network consists of 5 convolutional layers with 2 \u00d7 2 kernels and LeakyReLU activations, followed by average pooling with size equal to the size of the final feature maps and three dense layers. The last dense layer has 7 outputs, equivalent to the classes considered: healthy, ground glass opacity (GGO), micronodules, consolidation, reticulation, honeycombing and a combination of GGO/reticulation. To train and evaluate the CNN, we used a dataset of 14696 image patches, derived by 120 CT scans from different scanners and hospitals. To the best of our knowledge, this is the first deep CNN designed for the specific problem. A comparative analysis proved the effectiveness of the proposed CNN against previous methods in a challenging dataset. The classification performance ( ~ 85.5%) demonstrated the potential of CNNs in analyzing lung patterns. Future work includes, extending the CNN to three-dimensional data provided by CT volume scans and integrating the proposed method into a CAD system that aims to provide differential diagnosis for ILDs as a supportive tool for radiologists."}
{"_id": "33642a356552ca3c967119e98af14a6ec8d4e330", "title": "A convolutional neural network approach for face verification", "text": "In this paper, we present a convolutional neural network (CNN) approach for the face verification task. We propose a \u201cSiamese\u201d architecture of two CNNs, with each CNN reduced to only four layers by fusing convolutional and subsampling layers. Network training is performed using the stochastic gradient descent algorithm with annealed global learning rate. Generalization ability of network is investigated via unique pairing of face images, and testing is done on AT&T face database. Experimental work shows that the proposed CNN system can classify a pair of 46\u00d746 pixel face images in 0.6 milliseconds, which is significantly faster compared to equivalent network architecture with cascade of convolutional and subsampling layers. The verification accuracy achieved is 3.33% EER (equal error rate). Learning converges within 20 epochs, and the proposed technique can verify a test subject unseen in training. This work shows the viability of the \u201cSiamese\u201d CNN for face verification applications, and further improvements to the architecture are under construction to enhance its performance."}
{"_id": "6ed0f69caccf0966998484e742ee29a87e108353", "title": "Face Detection Using Convolutional Neural Networks and Gabor Filters", "text": "This paper proposes a method for detecting facial regions by combining a Gabor filter and a convolutional neural network. The first stage uses the Gabor filter which extracts intrinsic facial features. As a result of this transformation we obtain four subimages. The second stage of the method concerns the application of the convolutional neural network to these four images. The approach presented in this paper yields better classification performance in comparison to the results obtained by the convolutional neural network alone."}
{"_id": "6442bba9457682becf1ed9dbe6566ef64355d5e8", "title": "Multiclass Posterior Probability Support Vector Machines", "text": "Tao et. al. have recently proposed the posterior probability support vector machine (PPSVM) which uses soft labels derived from estimated posterior probabilities to be more robust to noise and outliers. Tao et. al.'s model uses a window-based density estimator to calculate the posterior probabilities and is a binary classifier. We propose a neighbor-based density estimator and also extend the model to the multiclass case. Our bias-variance analysis shows that the decrease in error by PPSVM is due to a decrease in bias. On 20 benchmark data sets, we observe that PPSVM obtains accuracy results that are higher or comparable to those of canonical SVM using significantly fewer support vectors."}
{"_id": "17d2f027221d60cda373ecf15b03706c9e60269b", "title": "A Generalized Representer Theorem", "text": "Wahba\u2019s classical representer theorem states that the solutions of certain risk minimization problems involving an empirical risk term and a quadratic regularizer can be written as expansions in terms of the training examples. We generalize the theorem to a larger class of regularizers and empirical risk terms, and give a self-contained proof utilizing the feature space associated with a kernel. The result shows that a wide range of problems have optimal solutions that live in the finite dimensional span of the training examples mapped into feature space, thus enabling us to carry out kernel algorithms independent of the (potentially infinite) dimensionality of the feature space."}
{"_id": "a522fb4646ad19fe88893b90e6fbc1faa1470976", "title": "An extensive comparative study of cluster validity indices", "text": "The validation of the results obtained by clustering algorithms is a fundamental part of the clustering process. The most used approaches for cluster validation are based on internal cluster validity indices. Although many indices have been proposed, there is no recent extensive comparative study of their performance. In this paper we show the results of an experimental work that compares 30 cluster validity indices in many different environments with different characteristics. These results can serve as a guideline for selecting the most suitable index for each possible application and provide a deep insight into the performance differences between the currently available indices. & 2012 Elsevier Ltd. All rights reserved."}
{"_id": "65313fc2b7d95141bcac589bab26b72e60ba2b97", "title": "Joint precision optimization and high level synthesis for approximate computing", "text": "Approximate computing has been recognized as an effective low power technique for applications with intrinsic error tolerance, such as image processing and machine learning. Existing efforts on this front are mostly focused on approximate circuit design, approximate logic synthesis or processor architecture approximation techniques. This work aims at how to make good use of approximate circuits at system and block level. In particular, approximation aware scheduling, functional unit allocation and binding algorithms are developed for data intensive applications. Simple yet credible error models, which are essential for precision control in the optimizations, are investigated. The algorithms are further extended to include bitwidth optimization in fixed point computations. Experimental results, including those from Verilog simulations, indicate that the proposed techniques facilitate desired energy savings under latency and accuracy constraints."}
{"_id": "38492609de2db25abbad067a3519acfcb827543c", "title": "On the impact of knowledge discovery and data mining", "text": "Knowledge Discovery and Data Mining are powerful automated data analysis tools and they are predicted to become the most frequently used analytical tools in the near future. The rapid dissemination of these technologies calls for an urgent examination of their social impact. This paper identifies social issues arising from Knowledge Discovery (KD) and Data Mining (DM). An overview of these technologies is presented, followed by a detailed discussion of each issue. The paper's intention is to primarily illustrate the cultural context of each issue and, secondly, to describe the impact of KD and DM in each case. Existing solutions specific to each issue are identified and examined for feasibility and effectiveness, and a solution that provides a suitably contextually sensitive means for gathering and analysing sensitive data is proposed and briefly outlined. The paper concludes with a discussion of topics for further consideration."}
{"_id": "34e703d61e29213f82e3a3e6776c17933219d63f", "title": "Analysis of Various Sentiment Classification Techniques", "text": "Sentiment analysis is an ongoing research area in the field of text mining. People post their review in form of unstructured data so opinion extraction provides overall opinion of reviews so it does best job for customer, people, organization etc. The main aim of this paper is to find out approaches that generate output with good accuracy. This paper presents recent updates on papers related to classification of sentiment analysis of implemented various approaches and algorithms. The main contribution of this paper is to give idea about that careful feature selection and existing classification approaches can give better accuracy."}
{"_id": "7e9a03e35b6b20c6ff0a221e3db281b69bb00e6c", "title": "Book recommendation system based on combine features of content based filtering, collaborative filtering and association rule mining", "text": "Recommendation systems are widely used to recommend products to the end users that are most appropriate. Online book selling websites now-a-days are competing with each other by many means. Recommendation system is one of the stronger tools to increase profit and retaining buyer. The book recommendation system must recommend books that are of buyer's interest. This paper presents book recommendation system based on combined features of content filtering, collaborative filtering and association rule mining."}
{"_id": "1925d9c7f30e3f4fc7b53828e32e3583197d92f4", "title": "A real-coded genetic algorithm for training recurrent neural networks", "text": "The use of Recurrent Neural Networks is not as extensive as Feedforward Neural Networks. Training algorithms for Recurrent Neural Networks, based on the error gradient, are very unstable in their search for a minimum and require much computational time when the number of neurons is high. The problems surrounding the application of these methods have driven us to develop new training tools. In this paper, we present a Real-Coded Genetic Algorithm that uses the appropriate operators for this encoding type to train Recurrent Neural Networks. We describe the algorithm and we also experimentally compare our Genetic Algorithm with the Real-Time Recurrent Learning algorithm to perform the fuzzy grammatical inference."}
{"_id": "9b4eeff022962192e2305bab461e70d22aa2d354", "title": "Behavioral inhibition , behavioral activation , and affective responses to impending reward and punishment : The BIS / BAS Scales", "text": "Gray (1981, 1982) holds that 2 general motivational systems underlie behavior and affect: a behavioral inhibition system (BIS) and a behavioral activation system (BAS). Self-report scales to assess dispositional BIS and BAS sensitivities were created. Scale development (Study 1) and convergent and discriminant validity in the form of correlations with alternative measures are reported (Study 2). In Study 3, a situation in which Ss anticipated a punishment was created. Controlling for initial nervousness, Ss high in BIS sensitivity (assessed earlier) were more nervous than those low. In Study 4, a situation in which Ss anticipated a reward was created. Controlling for initial happiness, Ss high in BAS sensitivity (Reward Responsiveness and Drive scales) were happier than those low. In each case the new scales predicted better than an alternative measure. Discussion is focused on conceptual implications."}
{"_id": "905612b48488b8b1972694119a74451eb20822b7", "title": "Workshop on novelty and diversity in recommender systems - DiveRS 2011", "text": "Novelty and diversity have been identified as key dimensions of recommendation utility in real scenarios, and a fundamental research direction to keep making progress in the field. Yet recommendation novelty and diversity remain a largely open area for research. The DiveRS workshop gathered researchers and practitioners interested in the role of these dimensions in recommender systems. The workshop seeks to advance towards a better understanding of what novelty and diversity are, how they can improve the effectiveness of recommendation methods and the utility of their outputs. The workshop pursued the identification of open problems, relevant research directions, and opportunities for innovation in the recommendation business."}
{"_id": "b073446c7de3e5c355ee84fcda52ae7f844f7958", "title": "Autonomous Indoor Robot Navigation Using Sketched Maps and Routes", "text": "Hand drawn sketches are natural means by which a high level description of an environment can be provided. They can be exploited to impart coarse prior information about the scene to a robot, thereby enabling it to perform autonomous navigation and exploration when a full metrical description of the scene is not available beforehand. In this paper, we present a navigation system supplemented by a tablet interface that allows a user to sketch a rough map of an indoor environment and a desired trajectory for the robot to follow. We propose a novel theoretical framework for sketch interpretation based upon the manifold formalism in which associations between the sketch and the real world are modeled as local deformation of a suitable metric manifold. We also present empirical results from experimental evaluations of our approach in real world scenarios both from the perspective of the navigation capability and the usability of the interface."}
{"_id": "4d6e01f9251d9cd783d5efcea4e460ea1f0dbcad", "title": "Analysis and Defensive Tools for Social-Engineering Attacks on Computer Systems", "text": "The weakest link in an information-security chain is often the user because people can be manipulated. Attacking computer systems with information gained from social interactions is one form of social engineering (K. Mitnick, et al. 2002). It can be much easier to do than targeting the complex technological protections of systems (J. McDermott, Social engineering - the weakest link in information security). In an effort to formalize social engineering for cyberspace, we are building models of trust and attack. Models help in understanding the bewildering number of different tactics that can be employed. Social engineering attacks can be complex with multiple ploys and targets; our models function as subroutines that are called multiple times to accomplish attack goals in a coordinated plan. Models enable us to infer good countermeasures to social engineering"}
{"_id": "f95c729b97769b2c7f579cd0a65afa176992ec72", "title": "Conducted EMI noise mitigation in DC-DC converters using active filtering method", "text": "Electromagnetic interference (EMI) noise mitigation is an important issue that should be addressed and emphasized when designing DC/DC converters. These later, are known to be the primary culprit of the EMI noise generation in most of the electronic systems, mainly due to the switching action of the MOSFET circuitries. Passive input EMI LC filters have been the intuitive solution for EMI noise mitigation; hence they have been integrated in almost every DC/DC converters. However, their size, weight and cost can cause a significant constraint in some applications. To overcome these constraints, an input active EMI filter is proposed. The active filter is based on the noise current phase shift and the injection of this noise current back to the DC input bus. However, the combination of the input active and the passive filters shows a substantial attenuation of the conducted emissions as compared to the passive filter only, which in turn contributes to the reduction of the size and weight of the input passive EMI filter. The proposed combination provides a design solution for compliance engineers where the PCB real-estate is an issue. Experimental results to demonstrate the performance and the effectiveness of the input active EMI filter in DC/DC converters are presented."}
{"_id": "893307f339e31e645fcacdd7a599bfdbd49451b2", "title": "ITU-T G.729.1: AN 8-32 Kbit/S Scalable Coder Interoperable with G.729 for Wideband Telephony and Voice Over IP", "text": "This paper describes the scalable coder - G.729.1 - which has been recently standardized by ITU-T for wideband telephony and voice over IP (VoIP) applications. G.729.1 can operate at 12 different bit rates from 32 down to 8 kbit/s with wideband quality starting at 14 kbit/s. This coder is a bitstream interoperable extension of ITU-T G.729 based on three embedded stages: narrowband cascaded CELP coding at 8 and 12 kbit/s, time-domain bandwidth extension (TDBWE) at 14 kbit/s, and split-band MDCT coding with spherical vector quantization (VQ) and pre-echo reduction from 16 to 32 kbit/s. Side information - consisting of signal class, phase, and energy - is transmitted at 12, 14 and 16 kbit/s to improve the resilience and recovery of the decoder in case of frame erasures. The quality, delay, and complexity of G.729.1 are summarized based on ITU-T results."}
{"_id": "005d92543a3ebf303d2b8e16c7c6a32d52c6618f", "title": "MultiSE: multi-path symbolic execution using value summaries", "text": "Dynamic symbolic execution (DSE) has been proposed to effectively generate test inputs for real-world programs. Unfortunately, DSE techniques do not scale well for large realistic programs, because often the number of feasible execution paths of a program increases exponentially with the increase in the length of an execution path. In this paper, we propose MultiSE, a new technique for merging states incrementally during symbolic execution, without using auxiliary variables. The key idea of MultiSE is based on an alternative representation of the state, where we map each variable, including the program counter, to a set of guarded symbolic expressions called a value summary. MultiSE has several advantages over conventional DSE and conventional state merging techniques: value summaries enable sharing of symbolic expressions and path constraints along multiple paths and thus avoid redundant execution. MultiSE does not introduce auxiliary symbolic variables, which enables it to 1) make progress even when merging values not supported by the constraint solver, 2) avoid expensive constraint solver calls when resolving function calls and jumps, and 3) carry out most operations concretely. Moreover, MultiSE updates value summaries incrementally at every assignment instruction, which makes it unnecessary to identify the join points and to keep track of variables to merge at join points. We have implemented MultiSE for JavaScript programs in a publicly available open-source tool. Our evaluation of MultiSE on several programs shows that 1) value summaries are an eective technique to take advantage of the sharing of value along multiple execution path, that 2) MultiSE can run significantly faster than traditional dynamic symbolic execution and, 3) MultiSE saves a substantial number of state merges compared to conventional state-merging techniques."}
{"_id": "e7ad8adbb447300ecafb4d00fb84efc3cf4996cf", "title": "Variational Bayes with synthetic likelihood", "text": "Synthetic likelihood is an attractive approach to likelihood-free inference when an approximately Gaussian summary statistic for the data, informative for inference about the parameters, is available. The synthetic likelihood method derives an approximate likelihood function from a plug-in normal density estimate for the summary statistic, with plug-in mean and covariance matrix obtained by Monte Carlo simulation from the model. In this article, we develop alternatives to Markov chain Monte Carlo implementations of Bayesian synthetic likelihoods with reduced computational overheads. Our approach uses stochastic gradient variational inference methods for posterior approximation in the synthetic likelihood context, employing unbiased estimates of the log likelihood. We compare the new method with a related likelihood free variational inference technique in the literature, while at the same time improving the implementation of that approach in a number of ways. These new algorithms are feasible to implement in situations which are challenging for conventional approximate Bayesian computation (ABC) methods, in terms of the dimensionality of the parameter and summary statistic."}
{"_id": "0fb45e704ef3ca1f9c70e7be3fb93b53714ed8b5", "title": "Head Pose and Expression Transfer Using Facial Status Score", "text": "We propose a method to transfer both head poseand facial expression of a source person in a video to the faceof a target person in an output video. Our method models theentire 2D frame instead of the 3D face, and it generates outputresults using a status score, which includes the relative facialstatus about the head pose and expression in a frame. From thetarget video, the learning process obtains frame features neededfor moving to each frame from the neutral frame for all frames,and generates the basis of these features via principal componentanalysis (PCA). Then, it learns to generate these features from agiven status score sequentially. In the transfer process, it obtainsa status score from a source frame of the video and generatesthe features from the given status score. Then, it generates theoutput frame using the reconstructed features. An output videois generated by repeating these steps for each source frame.Our method generates output results on the trajectory of thetarget video by using the advantage of PCA. Therefore, in theoutput results generated by our methods, both head pose andexpression are transferred correctly while the non-face regionsof the frames are supported. Finally, we experimentally comparethe effectiveness of our method and conventional methods."}
{"_id": "73a8c63210c3121fe6c5d43d89397ca46789b61e", "title": "Photovoltaic Power Conditioning System With Line Connection", "text": "A photovoltaic (PV) power conditioning system (PCS) with line connection is proposed. Using the power slope versus voltage of the PV array, the maximum power point tracking (MPPT) controller that produces a smooth transition to the maximum power point is proposed. The dc current of the PV array is estimated without using a dc current sensor. A current controller is suggested to provide power to the line with an almost-unity power factor that is derived using the feedback linearization concept. The disturbance of the line voltage is detected using a fast sensing technique. All control functions are implemented in software with a single-chip microcontroller. Experimental results obtained on a 2-kW prototype show high performance such as an almost-unity power factor, a power efficiency of 94%, and a total harmonic distortion (THD) of 3.6%"}
{"_id": "6e6cdf5b0282d89de45c407fc76a4c218696e3e3", "title": "Predicting Movie Revenue from IMDb Data", "text": "Given the information known about a movie in the we ek of its release, can we predict the total gross revenue for that movie? Such information would be u s f l to marketers, theater operators, and others i n the movie industry, but it is a hard problem, even for human beings. We found that, given a set of numeric , textbased, and sentiment features from IMDb, linear reg ession outperforms class-based logistic regression at predicting gross revenue. However, neither gives su fficiently precise results to be used in practice."}
{"_id": "13b5a23d06207fb9b40544f284165d0348f5720c", "title": "Semi-productive Polysemy and Sense Extension", "text": "In this paper we discuss various aspects of systematic or conventional polysemy and their formal treatment within an implemented constraint based approach to linguistic representation. We distinguish between two classes of systematic polysemy: constructional polysemy, where a single sense assigned to a lexical entry is contextually specialised, and sense extension, which predictably relates two or more senses. Formally the first case is treated as instantiation of an underspecified lexical entry and the second by use of lexical rules. The problems of distinguishing between these two classes are discussed in detail. We illustrate how lexical rules can be used both to relate fully conventionalised senses and also applied productively to recognise novel usages and how this process can be controlled to account for semi-productivity by utilising probabilities."}
{"_id": "ecb2aa44717429dea368da1bf87bff0ee3fe8465", "title": "Type shifting in construction grammar : An integrated approach to aspectual coercion *", "text": "Implicit type shifting, or coercion, appears to indicate a modular grammatical architecture, in which the process of semantic composition may add meanings absent from the syntax in order to ensure that certain operators, e.g., the progressive, receive suitable arguments (Jackendo\u00a4 1997; De Swart 1998). I will argue that coercion phenomena actually provide strong support for a sign-based model of grammar, in which rules of morphosyntactic combination can shift the designations of content words with which they combine. On this account, enriched composition is a by-product of the ordinary referring behavior of constructions. Thus, for example, the constraint which requires semantic concord between the syntactic sisters in the string a bottle is also what underlies the coerced interpretation found in a beer. If this concord constraint is stated for a rule of morphosyntactic combination, we capture an important generalization: a single combinatory mechanism, the construction, is responsible for both coerced and compositional meanings. Since both type-selecting constructions (e.g., the French Imparfait) and type-shifting constructions (e.g., English progressive aspect) require semantic concord between syntactic sisters, we account for the fact that constructions of both types perform coercion. Coercion data suggest that aspectual sensitivity is not merely a property of formally di\u00a4erentiated past tenses, as in French and Latin, but a general property of tense constructions, including the English present and past tenses."}
{"_id": "0b8fb343f979bf1bc48eed705cd1ada4075140de", "title": "Grammatical Constructions and Linguistic Generalizations : the What ' s X doing Y ? Construction 1", "text": "To adopt a constructional approach is to undertake a commitment in principle to account for the entirety of each language. This means that the relatively general patterns of the language, such as the one licensing the ordering of a finite auxiliary verb before its subject in English as illustrated in (1), and the more idiomatic patterns, such as those exemplified in (2), stand on an equal footing as data for which the grammar must provide an account."}
{"_id": "6faf939a9a1db55ec83d79e58297e828ba8e15b8", "title": "On the relationship between math anxiety and math achievement in early elementary school: The role of problem solving strategies.", "text": "Even at young ages, children self-report experiencing math anxiety, which negatively relates to their math achievement. Leveraging a large dataset of first and second grade students' math achievement scores, math problem solving strategies, and math attitudes, we explored the possibility that children's math anxiety (i.e., a fear or apprehension about math) negatively relates to their use of more advanced problem solving strategies, which in turn relates to their math achievement. Our results confirm our hypothesis and, moreover, demonstrate that the relation between math anxiety and math problem solving strategies is strongest in children with the highest working memory capacity. Ironically, children who have the highest cognitive capacity avoid using advanced problem solving strategies when they are high in math anxiety and, as a result, underperform in math compared with their lower working memory peers."}
{"_id": "e8183b2be9b3a9ded6f63fc36905ff0bcd25a878", "title": "Selective harmonic elimination modulation technique applied on four-leg NPC", "text": "In this paper the selective harmonic elimination pulse width modulation (SHE-PWM) technique is proposed to control the three-level four-leg neutral point clamped (NPC) inverter in order to have both advantages of low switching frequency of SHE and neutral point of fourth leg in four wire systems. In this proposed method, the obtained switching angles of inverter phase legs are used to eliminate the non-triplen, 5th to 23th harmonics orders from the output voltage. As well, switching angles calculated for the fourth leg are considered to eliminate the triplen harmonics containing 3th, 9th, 15th, 21th and 27th orders from phase voltage. The efficiency of the proposed modulation technique is verified by simulations of a four leg NPC inverter as an UPS feeding different types of dynamic and unbalanced loads."}
{"_id": "110df71198e5143391fbe6e7d074d52d951b7bc8", "title": "Evaluation of direct attacks to fingerprint verification systems", "text": "The vulnerabilities of fingerprint-based recognition systems to direct attacks with and without the cooperation of the user are studied. Two different systems, one minutiae-based and one ridge feature-based, are evaluated on a database of real and fake fingerprints. Based on the fingerprint images quality and on the results achieved on different operational scenarios, we obtain a number of statistically significant observations regarding the robustness of the systems."}
{"_id": "4b5daaa4edd3caadfb1fa214f44140877828051f", "title": "Conjunctive representation of position, direction, and velocity in entorhinal cortex.", "text": "Grid cells in the medial entorhinal cortex (MEC) are part of an environment-independent spatial coordinate system. To determine how information about location, direction, and distance is integrated in the grid-cell network, we recorded from each principal cell layer of MEC in rats that explored two-dimensional environments. Whereas layer II was predominated by grid cells, grid cells colocalized with head-direction cells and conjunctive grid x head-direction cells in the deeper layers. All cell types were modulated by running speed. The conjunction of positional, directional, and translational information in a single MEC cell type may enable grid coordinates to be updated during self-motion-based navigation."}
{"_id": "f60e88248c5b669175f267069c3319d9ac2d3e84", "title": "Compliance through Informed Consent: Semantic Based Consent Permission and Data Management Model", "text": "The General Data Protection Regulations (GDPR) imposes greater restrictions on obtaining valid user consents involving the use of personal data. A semantic model of consent can make the concepts of consent explicit, establish a common understanding and enable re-use of consent. Therefore, forming a semantic model of consent will satisfy the GDPR requirements of specificity and unambiguity and is an important step towards ensuring compliance. In this paper, we discuss obtaining an open vocabulary of expressing consent leveraging existing semantic models of provenance, processes, permission and obligations. We also present a reference architecture for the management of data processing according to consent permission. This data management model utilizes the open vocabulary of consent and incorporates the change of context into the data processing activity. By identifying and incorporating changes to the relational context between data controllers and data subjects into the data processing model, it aims to improve the integration of data management across different information systems specifically adhering to the GDPR and helping controllers to demonstrate compliance."}
{"_id": "75eb55a66fc4e7c1ed702be946dcfc24e86ecb4e", "title": "Risk and parameter convergence of logistic regression", "text": "The logistic loss is strictly convex and does not attain its infimum; consequently the solutions of logistic regression are in general off at infinity. This work provides a convergence analysis of stochastic and batch gradient descent for logistic regression. Firstly, under the assumption of separability, stochastic gradient descent minimizes the population risk at rate O(ln(t)/t) with high probability. Secondly, with or without separability, batch gradient descent minimizes the empirical risk at rate O(ln(t)/t). Furthermore, parameter convergence can be characterized along a unique pair of complementary subspaces defined by the problem instance: one subspace along which strong convexity induces parameters to converge at rate O(ln(t)/ \u221a t), and its orthogonal complement along which separability induces parameters to converge in direction at rate O(ln ln(t)/ ln(t)). 1 Overview Logistic regression is the task of finding a vector w \u2208 R which approximately minimizes the population or empirical logistic risk, namely Rlog(w) := E [ `log(\u3008w,\u2212yx\u3009) ] or R\u0302log(w) := 1 n n \u2211 i=1 `log(\u3008w,\u2212yixi\u3009), where `log(r) := ln ( 1 + exp(r) ) is the logistic loss. A traditional way to minimize Rlog or R\u0302log is to pick an arbitrary w0, and from there recursively construct stochastic gradient descent (SGD) or gradient descent (GD) iterates (wj)j\u22650 via wj+1 := wj \u2212 \u03b7jgj , where (\u03b7j)j\u22650 are step sizes, and gj := ` \u2032 log (\u3008 wj ,\u2212yjxj \u3009) (\u2212yjxj) for SGD or gj := \u2207R\u0302`log(wj) for GD. A common assumption is that the distribution on (x, y) is separable (Novikoff, 1962; Soudry et al., 2017). Under this assumption, the first result here is that SGD with constant step sizes minimizes the population risk Rlog at rate \u00d5(1/t) with high probability. Theorem 1.1 (Simplification of Theorem 2.1). Suppose there exists a unit vector \u016b with \u3008\u016b, yx\u3009 \u2265 \u03b3, and |yx| \u2264 1 almost surely. Consider SGD iterates (wj)j\u22650 as above, with w0 = 0 and \u03b7j = 1. Then for any t \u2265 1, |wt| \u2264 O(ln(t)), and with probability at least 1\u2212 \u03b4, the average iterate \u0175t := t\u22121 \u2211 j 0 for every w \u2208 R. Furthermore, not only do the guarantees here avoid strong convexity; the proofs are different. The proof of Theorem 1.1 is based on the perceptron proof, and might be of independent interest. Now consider batch gradient descent on empirical risk. In this case, a stronger result is possible: no assumption is made (separability is dropped), and the parameters may be precisely characterized. Theorem 1.2 (Simplification of Theorems 3.1, 3.2 and 3.6). Let examples ((xi, yi)) n i=1 be given with |xiyi| \u2264 1, along with a loss ` \u2208 {`log, exp}, with corresponding empirical risk R\u0302 as above. Consider GD iterates (wj)j\u22650 as above, with w0 = 0. 1. (Convergence in risk.) For any step sizes \u03b7j \u2264 1 and any t \u2265 1, R\u0302(wt)\u2212 inf w\u2208Rd R\u0302(w) = O ( 1 t + ln(t) \u2211 j 0 ; 3: for t = 1, 2, . . . 
do 4: \u000f\u0303t = \u03bd\u000f\u03030; \u03b8t = (c\u2212 1)/(c+ 2); 5: \u03bbt = \u03bd(\u03bb0 \u2212 \u03bb) + \u03bb; 6: Yt = Xt + \u03b8t(Xt \u2212Xt\u22121); 7: Z\u0303t = Yt + P\u03a9(O \u2212 Yt); 8: Vt\u22121 = Vt\u22121 \u2212 Vt(VtVt\u22121), remove zero columns; 9: Rt = QR([Vt, Vt\u22121]); 10: [Ut+1,\u03a3t+1, Vt+1] = approx-SVT(Z\u0303t, Rt, \u03bbt, \u000f\u0303t); 11: if F (Ut+1\u03a3t+1V > t+1) > F (Ut\u03a3tV > t ); c = 1 else c = c+ 1; 12: end for 13: return Xt+1 = Ut+1\u03a3t+1V > t+1."}
{"_id": "d7714e384e3d817fd77ec139faf318a5b7b03f6e", "title": "How Popular is Your Paper ? An Empirical Study of the Citation Distribution", "text": "Numerical data for the distribution of citations are examined for: (i) papers published in 1981 in journals which are catalogued by the Institute for Scientific Information (783,339 papers) and (ii) 20 years of publications in Physical Review D, vols. 11-50 (24,296 papers). A Zipf plot of the number of citations to a given paper versus its citation rank appears to be consistent with a power-law dependence for leading rank papers, with exponent close to \u22121/2. This, in turn, suggests that the number of papers with x citations, N(x), has a large-x power law decay N(x) \u223c x, with \u03b1 \u2248 3. PACS. 02.50.-r Probability theory, stochastic processes, and statistics \u2013 01.75.+m Science and society \u2013 89.90.+n Other areas of general interest to physicists In this article, I consider a question which is of relevance to those for whom scientific publication is a primary means of scholarly communication. Namely, how often is a paper cited? While the average or total number of citations are often quoted anecdotally and tabulations of highly-cited papers exist [1,2], the focus of this work is on the more fundamental distribution of citations, namely, the number of papers which have been cited a total of x times,N(x). In spite of the fact that many academics are obliged to document their citations for merit-based considerations, there have been only a few scientific investigations on quantifying citations or related measures of scientific productivity. In a 1957 study based on the publication record of the scientific research staff at Brookhaven National Laboratory, Shockley [3] claimed that the scientific publication rate is described by a log-normal distribution. Much more recently, Laherrere and Sornette [4] have presented numerical evidence, based on data of the 1120 most-cited physicists from 1981 through June 1997, that the citation distribution of individual authors has a stretched exponential form, N(x) \u221d exp[\u2212(x/x0) ] with \u03b2 \u2248 0.3. Both papers give qualitative justifications for their assertions which are based on plausible general principles; however, these arguments do not provide specific numerical predictions. Here, the citation distribution of scientific publications based on two relatively large data sets is investigated [5]. One (ISI) is the citation distribution of 783,339 papers (with 6,716,198 citations) published in 1981 and cited bea redner@sid.bu.edu tween 1981 \u2013 June 1997 that have been cataloged by the Institute for Scientific Information. The second (PRD) is the citation distribution, as of June 1997, of the 24,296 papers cited at least once (with 351,872 citations) which were published in volumes 11 through 50 of Physical Review D, 1975\u20131994. Unlike reference [4], the focus here is on citations of publications rather than citations of specific authors. A primary reason for this emphasis is that the publication citation count reflects on the publication itself, while the author citation count reflects ancillary features, such as the total number of author publications, the quality of each of these publications, and co-author attributes. Additionally, only most-cited author data is currently available; this permits reconstruction of just the large-citation tail of the citation distribution. 
The main result of this study is that the asymptotic tail of the citation distribution appears to be described by a power law, N(x) \u223c x\u2212\u03b1, with \u03b1 \u2248 3. This conclusion is reached indirectly by means of a Zipf plot (to be defined and discussed below), however, because Figure 1 indicates that the citation distribution is not described by a single function over the whole range of x. Since the distribution curves downward on a double logarithmic scale and upward on a semi-logarithmic scale (Figs. 1a and b respectively), a natural first hypothesis is that this distribution is a stretched exponential, N(x) \u221d exp[\u2212(x/x0) ]. Visually, the numerical data fit this form fairly well for x \u2264 200 (PRD) and x \u2264 500 (ISI) as indicated in Figure 1b, with best fit values \u03b2 \u2248 0.39 (PRD) and \u03b2 \u2248 0.44 (ISI). However, the stretched exponential is unsuitable to describe the large-x data. Here, 132 The European Physical Journal B 10 0 10 1 10 2 10 3 10 4 x 10 0 10 1 10 2 10 3 10 4 10 5"}
{"_id": "2d2a3ff0a7609f225534b746b8a1bb569cb77b12", "title": "Multi-Instance Multi-Label Learning with Weak Label", "text": "Multi-Instance Multi-Label learning (MIML) deals with data objects that are represented by a bag of instances and associated with a set of class labels simultaneously. Previous studies typically assume that for every training example, all positive labels are tagged whereas the untagged labels are all negative. In many real applications such as image annotation, however, the learning problem often suffers from weak label; that is, users usually tag only a part of positive labels, and the untagged labels are not necessarily negative. In this paper, we propose the MIMLwel approach which works by assuming that highly relevant labels share some common instances, and the underlying class means of bags for each label are with a large margin. Experiments validate the effectiveness of MIMLwel in handling the weak label problem."}
{"_id": "ab1975ae0996fe2358bbb33182c41097b198f7d5", "title": "Why the sunny side is up: association between affect and vertical position.", "text": "Metaphors linking spatial location and affect (e.g., feeling up or down) may have subtle, but pervasive, effects on evaluation. In three studies, participants evaluated words presented on a computer. In Study 1, evaluations of positive words were faster when words were in the up rather than the down position, whereas evaluations of negative words were faster when words were in the down rather than the up position. In Study 2, positive evaluations activated higher areas of visual space, whereas negative evaluations activated lower areas of visual space. Study 3 revealed that, although evaluations activate areas of visual space, spatial positions do not activate evaluations. The studies suggest that affect has a surprisingly physical basis."}
{"_id": "5b24f11ddcc7e9271eb0e44bdd37c4fab2afd600", "title": "A high-precision time-to-digital converter for pulsed time-of-flight laser radar applications", "text": "A time-to-digital converter (TDC) has been designed, and six units have been constructed and tested. It consists of integrated digital time-interval measurement electronics with a 10-ns resolution that can be improved to 10 ps by an analog interpolation method. Identical construction of the interpolation electronics leads to stable performance of the TDC. The drifts of all six TDC units remain within 10 ps over a temperature range from 10 C to+50 C. The stability can be improved further by a real-time calibration procedure developed here. The single-shot precision of the TDC is better than 15 ps (standard deviation), but precision can be improved to below 0.1 ps by averaging about 10 000 measurements at the maximum measurement speed of 100 kHz. The time range of the TDC is currently 2.55 s, but this can be increased by adding an external digital counter. The TDC suffers from a periodic nonlinearity over the measured time range from 0 to 1.3 s, the amplitude and period of the nonlinearity being 20 ps and 40 ns, respectively. The reason lies in the integrated digital part of the electronics. The linearity of interpolation in the 10-ns time range improves, however, as more results are averaged."}
{"_id": "849e5eb4589141fe177eff4812e1027f8e979cdd", "title": "Towards a better understanding of Vector Quantized Autoencoders", "text": "Deep neural networks with discrete latent variables offer the promise of better symbolic reasoning, and learning abstractions that are more useful to new tasks. There has been a surge in interest in discrete latent variable models, however, despite several recent improvements, the training of discrete latent variable models has remained challenging and their performance has mostly failed to match their continuous counterparts. Recent work on vector quantized autoencoders (VQ-VAE) has made substantial progress in this direction, with its perplexity almost matching that of a VAE on datasets such as CIFAR-10. In this work, we investigate an alternate training technique for VQ-VAE, inspired by its connection to the Expectation Maximization (EM) algorithm. Training the discrete bottleneck with EM helps us achieve better image reconstruction results on SVHN together with knowledge distillation, allows us to develop a non-autoregressive machine translation model whose accuracy almost matches a strong greedy autoregressive baseline Transformer, while being 3.3 times faster at inference."}
{"_id": "0c167008408c301935bade9536084a527527ec74", "title": "Automatic detection of player's identity in soccer videos using faces and text cues", "text": "In soccer videos, most significant actions are usually followed by close--up shots of players that take part in the action itself. Automatically annotating the identity of the players present in these shots would be considerably valuable for indexing and retrieval applications. Due to high variations in pose and illumination across shots however, current face recognition methods are not suitable for this task. We show how the inherent multiple media structure of soccer videos can be exploited to understand the players' identity without relying on direct face recognition. The proposed method is based on a combination of interest point detector to \"read\" textual cues that allow to label a player with its name, such as the number depicted on its jersey, or the superimposed text caption showing its name. Players not identified by this process are then assigned to one of the labeled faces by means of a face similarity measure, again based on the appearance of local salient patches. We present results obtained from soccer videos taken from various recent games between national teams."}
{"_id": "2b4b941b1d003cd6357aff531c298847824495ae", "title": "Deep neural networks employing Multi-Task Learning and stacked bottleneck features for speech synthesis", "text": "Deep neural networks (DNNs) use a cascade of hidden representations to enable the learning of complex mappings from input to output features. They are able to learn the complex mapping from text-based linguistic features to speech acoustic features, and so perform text-to-speech synthesis. Recent results suggest that DNNs can produce more natural synthetic speech than conventional HMM-based statistical parametric systems. In this paper, we show that the hidden representation used within a DNN can be improved through the use of Multi-Task Learning, and that stacking multiple frames of hidden layer activations (stacked bottleneck features) also leads to improvements. Experimental results confirmed the effectiveness of the proposed methods, and in listening tests we find that stacked bottleneck features in particular offer a significant improvement over both a baseline DNN and a benchmark HMM system."}
{"_id": "ece8c51bbb060061148e7361b93c47faa21a692f", "title": "Restructuring speech representations using a pitch-adaptive time-frequency smoothing and an instantaneous-frequency-based F0 extraction: Possible role of a repetitive structure in sounds", "text": "A set of simple new procedures has been developed to enable the real-time manipulation of speech parameters. The proposed method uses pitch-adaptive spectral analysis combined with a surface reconstruction method in the time\u00b1 frequency region. The method also consists of a fundamental frequency (F0) extraction using instantaneous frequency calculation based on a new concept called `fundamentalness'. The proposed procedures preserve the details of time\u00b1 frequency surfaces while almost perfectly removing \u00aene structures due to signal periodicity. This close-to-perfect elimination of interferences and smooth F0 trajectory allow for over 600% manipulation of such speech parameters as pitch, vocal tract length, and speaking rate, while maintaining high reproductive quality. \u00d3 1999 Elsevier Science B.V. All rights reserved."}
{"_id": "35dad46ad81a94453705a1a67a6d67e467ed61e3", "title": "Extracting deep bottleneck features using stacked auto-encoders", "text": "In this work, a novel training scheme for generating bottleneck features from deep neural networks is proposed. A stack of denoising auto-encoders is first trained in a layer-wise, unsupervised manner. Afterwards, the bottleneck layer and an additional layer are added and the whole network is fine-tuned to predict target phoneme states. We perform experiments on a Cantonese conversational telephone speech corpus and find that increasing the number of auto-encoders in the network produces more useful features, but requires pre-training, especially when little training data is available. Using more unlabeled data for pre-training only yields additional gains. Evaluations on larger datasets and on different system setups demonstrate the general applicability of our approach. In terms of word error rate, relative improvements of 9.2% (Cantonese, ML training), 9.3% (Tagalog, BMMI-SAT training), 12% (Tagalog, confusion network combinations with MFCCs), and 8.7% (Switchboard) are achieved."}
{"_id": "375d7b8a70277d5d7b5e0cc999b03ba395c42901", "title": "Auto-encoder bottleneck features using deep belief networks", "text": "Neural network (NN) bottleneck (BN) features are typically created by training a NN with a middle bottleneck layer. Recently, an alternative structure was proposed which trains a NN with a constant number of hidden units to predict output targets, and then reduces the dimensionality of these output probabilities through an auto-encoder, to create auto-encoder bottleneck (AE-BN) features. The benefit of placing the BN after the posterior estimation network is that it avoids the loss in frame classification accuracy incurred by networks that place the BN before the softmax. In this work, we investigate the use of pre-training when creating AE-BN features. Our experiments indicate that with the AE-BN architecture, pre-trained and deeper NNs produce better AE-BN features. On a 50-hour English Broadcast News task, the AE-BN features provide over a 1% absolute improvement compared to a state-of-the-art GMM/HMM with a WER of 18.8% and pre-trained NN hybrid system with a WER of 18.4%. In addition, on a larger 430-hour Broadcast News task, AE-BN features provide a 0.5% absolute improvement over a strong GMM/HMM baseline with a WER of 16.0%. Finally, system combination with the GMM/HMM baseline and AE-BN systems provides an additional 0.5% absolute on 430 hours over the AE-BN system alone, yielding a final WER of 15.0%."}
{"_id": "627e173dedf1282988d4f6436456c10a566a720e", "title": "Transformation of formants for voice conversion using artificial neural networks", "text": "In this paper we propose a scheme for developing a voice conversion system that converts the speech signal uttered by a source speaker to a speech signal having the voice characteristics of the target speaker. In particular, we address the issue of transformation of the vocal tract system features from one speaker to another. Formants are used to represent the vocal tract system features and a formant vocoder is used for synthesis. The scheme consists of a formant analysis phase, followed by a learning phase in which the implicit formant transformation is captured by a neural network. The transformed formants together with the pitch contour modified to suit the average pitch of the target speaker are used to synthesize speech with the desired vocal tract system characteristics."}
{"_id": "c25689ba6e1fc6fde50a86d2956c8f5837d009a6", "title": "Burnout in the Mental Health Workforce: A Review", "text": "There are enormous concerns regarding the recruitment, retention, training, and performance of the behavioral health workforce. Paramount among these concerns is turnover, which causes disruption in continuity of care, diminishes access to care while a position remains vacant, and poses financial hardship on the provider organization through costs related to recruitment, orientation, and training of a new hire. There is frequent mention of burnout within the literature and among behavioral health managers as a potential cause of turnover. However, there is no recent or comprehensive review of burnout that examines the evidence surrounding its validity, utility, and potential relationship to turnover. The purpose of this paper is to provide such a review by examining the construct of burnout, methodological and measurement issues, its prevalence in the mental health workforce, correlates of burnout, and interventions to decrease it. The implications for provider organizations and recommendations for future research are identified."}
{"_id": "b06c47f69e733f12d57752f443a8d62bc3faf7fc", "title": "Low-Temperature Sintering of Nanoscale Silver Paste for Attaching Large-Area $({>}100~{\\rm mm}^{2})$ Chips", "text": "A low-temperature sintering technique enabled by a nanoscale silver paste has been developed for attaching large-area (>100 mm2) semiconductor chips. This development addresses the need of power device or module manufacturers who face the challenge of replacing lead-based or lead-free solders for high-temperature applications. The solder-reflow technique for attaching large chips in power electronics poses serious concern on reliability at higher junction temperatures above 125\u00b0C. Unlike the soldering process that relies on melting and solidification of solder alloys, the low-temperature sintering technique forms the joints by solid-state atomic diffusion at processing temperatures below 275\u00b0C with the sintered joints having the melting temperature of silver at 961\u00b0C. Recently, we showed that a nanoscale silver paste could be used to bond small chips at temperatures similar to soldering temperatures without any externally applied pressure. In this paper, we extend the use of the nanomaterial to attach large chips by introducing a low pressure up to 5 MPa during the densification stage. Attachment of large chips to substrates with silver, gold, and copper metallization is demonstrated. Analyses of the sintered joints by scanning acoustic imaging and electron microscopy showed that the attachment layer had a uniform microstructure with micrometer-sized porosity with the potential for high reliability under high-temperature applications."}
{"_id": "08e09151bd3a2e3a1c4ca0b8450dded420b01452", "title": "Opportunities and Limits of Remote Timing Attacks", "text": "Many algorithms can take a variable amount of time to complete depending on the data being processed. These timing differences can sometimes disclose confidential information. Indeed, researchers have been able to reconstruct an RSA private key purely by querying an SSL Web server and timing the results. Our work analyzes the limits of attacks based on accurately measuring network response times and jitter over a local network and across the Internet. We present the design of filters to significantly reduce the effects of jitter, allowing an attacker to measure events with 15-100\u03bcs accuracy across the Internet, and as good as 100ns over a local network. Notably, security-related algorithms on Web servers and other network servers need to be carefully engineered to avoid timing channel leaks at the accuracy demonstrated in this article."}
{"_id": "3609c92bbcad4eaa6e239112fc2cadbf87bb3c33", "title": "Knowing Yourself: Improving Video Caption via In-depth Recap", "text": "Generating natural language descriptions for videos (a.k.a video captioning) has attracted much research attention in recent years, and a lot of models have been proposed to improve the caption performance. However, due to the rapid progress in dataset expansion and feature representation, newly proposed caption models have been evaluated on different settings, which makes it unclear about the contributions from either features or models. Therefore, in this work we aim to gain a deep understanding about \"where are we\" for the current development of video captioning. First, we carry out extensive experiments to identify the contribution from different components in video captioning task and make fair comparison among several state-of-the-art video caption models. Second, we discover that these state-of-the-art models are complementary so that we could benefit from \"wisdom of the crowd\" through ensembling and reranking. Finally, we give a preliminary answer to the question \"how far are we from the human-level performance in general'' via a series of carefully designed experiments. In summary, our caption models achieve the state-of-the-art performance on the MSR-VTT 2017 challenge, and it is comparable with the average human-level performance on current caption metrics. However, our analysis also shows that we still have a long way to go, such as further improving the generalization ability of current caption models."}
{"_id": "db8aa51c1d6f90f2f83cc0dcd76ec8167294b825", "title": "Cumulative Effect in Information Diffusion: Empirical Study on a Microblogging Network", "text": "Cumulative effect in social contagion underlies many studies on the spread of innovation, behavior, and influence. However, few large-scale empirical studies are conducted to validate the existence of cumulative effect in information diffusion on social networks. In this paper, using the population-scale dataset from the largest Chinese microblogging website, we conduct a comprehensive study on the cumulative effect in information diffusion. We base our study on the diffusion network of message, where nodes are the involved users and links characterize forwarding relationship among them. We find that multiple exposures to the same message indeed increase the possibility of forwarding it. However, additional exposures cannot further improve the chance of forwarding when the number of exposures crosses its peak at two. This finding questions the cumulative effect hypothesis in information diffusion. Furthermore, to clarify the forwarding preference among users, we investigate both structural motif in the diffusion network and temporal pattern in information diffusion process. Findings provide some insights for understanding the variation of message popularity and explain the characteristics of diffusion network."}
{"_id": "84bcb3e72aee77d69e7cadbf43b3d6026fb6f8f7", "title": "DudeTM: Building Durable Transactions with Decoupling for Persistent Memory", "text": "Emerging non-volatile memory (NVM) offers non-volatility, byte-addressability and fast access at the same time. To make the best use of these properties, it has been shown by empirical evidence that programs should access NVM directly through CPU load and store instructions, so that the overhead of a traditional file system or database can be avoided. Thus, durable transactions become a common choice of applications for accessing persistent memory data in a crash consistent manner. However, existing durable transaction systems employ either undo logging, which requires a fence for every memory write, or redo logging, which requires intercepting all memory reads within transactions.\n This paper presents DUDETM, a crash-consistent durable transaction system that avoids the drawbacks of both undo logging and redo logging. DUDETM uses shadow DRAM to decouple the execution of a durable transaction into three fully asynchronous steps. The advantage is that only minimal fences and no memory read instrumentation are required. This design also enables an out-of-the-box transactional memory (TM) to be used as an independent component in our system. The evaluation results show that DUDETM adds durability to a TM system with only 7.4 ~ 24.6% throughput degradation. Compared to the existing durable transaction systems, DUDETM provides 1.7times to 4.4times higher throughput. Moreover, DUDETM can be implemented with existing hardware TMs with minor hardware modifications, leading to a further 1.7times speedup."}
{"_id": "1dd9786e51dd4fbe5df185f4a6ae3e1d70113207", "title": "A Timing-Based Scheme for Rogue AP Detection", "text": "This paper considers a category of rogue access points (APs) that pretend to be legitimate APs to lure users to connect to them. We propose a practical timing-based technique that allows the user to avoid connecting to rogue APs. Our detection scheme is a client-centric approach that employs the round trip time between the user and the DNS server to independently determine whether an AP is a rogue AP without assistance from the WLAN operator. We implemented our detection technique on commercially available wireless cards to evaluate their performance. Extensive experiments have demonstrated the accuracy, effectiveness, and robustness of our approach. The algorithm achieves close to 100 percent accuracy in distinguishing rogue APs from legitimate APs in lightly loaded traffic conditions, and larger than 60 percent accuracy in heavy traffic conditions. At the same time, the detection only requires less than 1 second for lightly-loaded traffic conditions and tens of seconds for heavy traffic conditions."}
{"_id": "f047967b223e3ea65e84a4cad9c04560f9b98cb0", "title": "Segment-aware energy-efficient management of heterogeneous memory system for ultra-low-power IoT devices", "text": "The emergence of IoT (Internet of Things) has brought various studies on low-power techniques back to embedded systems. In general, minimizing power consumed by program executions is the main consideration of system design. While running programs, executing data-dependent program code results in a large number of memory accesses which consume huge amount of power. Further, most embedded systems consist of multiple types of memory devices, i.e., heterogeneous memory system, to benefit the different characteristics of memory devices. In this paper, we conduct a research on low-power techniques to reduce the power consumption of heterogeneous memory to achieve ultra-low-power in the system level. This study proposes a segment-aware energy-efficient management to improve the power efficiency considering the characteristics and structures of the memory devices. In the proposed approach, the technique migrates program code from allocated memory device to another if the consumed power is considered to be less. We also analyze and evaluate the comprehensive effects on energy efficiency by applying the technique as well. Compared to the unmodified program code, our model reduces power consumption up to 12.98% by migrating functions."}
{"_id": "ef48b1a5eece2c88826cf4de9ec9d42c63151605", "title": "A combinational incremental ensemble of classifiers as a technique for predicting students' performance in distance education", "text": "0950-7051/$ see front matter 2010 Elsevier B.V. A doi:10.1016/j.knosys.2010.03.010 * Corresponding author. Address: Software Quality L and Technology, Hellenic Open University, 12\u201315 Tsa 26222, Greece. Tel.: +3"}
{"_id": "be637e78493c50e2e37e9ad46e24dfaf532c9ff9", "title": "Online Robust Low-Rank Tensor Learning", "text": "The rapid increase of multidimensional data (a.k.a. tensor) like videos brings new challenges for low-rank data modeling approaches such as dynamic data size, complex high-order relations, and multiplicity of low-rank structures. Resolving these challenges require a new tensor analysis method that can perform tensor data analysis online, which however is still absent. In this paper, we propose an Online Robust Lowrank Tensor Modeling (ORLTM) approach to address these challenges. ORLTM dynamically explores the high-order correlations across all tensor modes for low-rank structure modeling. To analyze mixture data from multiple subspaces, ORLTM introduces a new dictionary learning component. ORLTM processes data streamingly and thus requires quite low memory cost that is independent of data size. This makes ORLTM quite suitable for processing large-scale tensor data. Empirical studies have validated the effectiveness of the proposed method on both synthetic data and one practical task, i.e., video background subtraction. In addition, we provide theoretical analysis regarding computational complexity and memory cost, demonstrating the efficiency of ORLTM rigorously."}
{"_id": "505345cc9de5388ef553c4d7f90b6136b3f4e520", "title": "Gesture Recognition: A Survey", "text": "With increasing use of computers in our daily lives, lately there has been a rapid increase in the efforts to develop a better human computer interaction interface. The need of easy to use and advance types of human-computer interaction with natural interfaces is more than ever. In the present framework, the UI (User Interface) of a computer allows user to interact with electronic devices with graphical icons and visual indicators, which is still inconvenient and not suitable for working in virtual environments. An interface which allow user to communicate through gestures is the next step in the direction of advance human computer interface. In the present paper author explore different aspects of gesture recognition techniques."}
{"_id": "d99b2aaac602455f8dafef725ea59298cd7d5871", "title": "Neurocognitive effects of clozapine, olanzapine, risperidone, and haloperidol in patients with chronic schizophrenia or schizoaffective disorder.", "text": "OBJECTIVE\nNewer antipsychotic drugs have shown promise in ameliorating neurocognitive deficits in patients with schizophrenia, but few studies have compared newer antipsychotic drugs with both clozapine and conventional agents, particularly in patients who have had suboptimal response to prior treatments.\n\n\nMETHOD\nThe authors examined the effects of clozapine, olanzapine, risperidone, and haloperidol on 16 measures of neurocognitive functioning in a double-blind, 14-week trial involving 101 patients. A global score was computed along with scores in four neurocognitive domains: memory, attention, motor function, and general executive and perceptual organization.\n\n\nRESULTS\nGlobal neurocognitive function improved with olanzapine and risperidone treatment, and these improvements were superior to those seen with haloperidol. Patients treated with olanzapine exhibited improvement in the general and attention domains but not more than that observed with other treatments. Patients treated with risperidone exhibited improvement in memory that was superior to that of both clozapine and haloperidol. Clozapine yielded improvement in motor function but not more than in other groups. Average effect sizes for change were in the small to medium range. More than half of the patients treated with olanzapine and risperidone experienced \"clinically significant\" improvement (changes in score of at least one-half standard deviation relative to baseline). These findings did not appear to be mediated by changes in symptoms, side effects, or blood levels of medications.\n\n\nCONCLUSIONS\nPatients with a history of suboptimal response to conventional treatments may show cognitive benefits from newer antipsychotic drugs, and there may be differences between atypical antipsychotic drugs in their patterns of cognitive effects."}
{"_id": "8a4d04379b8c232a9440df867c8c75f8e6ff6622", "title": "The reduction of graphene oxide", "text": "Graphene has attracted great interest for its excellent mechanical, electrical, thermal and optical properties. It can be produced by micro-mechanical exfoliation of highly ordered pyrolytic graphite, epitaxial growth, chemical vapor deposition, and the reduction of graphene oxide (GO). The first three methods can produce graphene with a relatively perfect structure and excellent properties, while in comparison, GO has two important characteristics: (1) it can be produced using inexpensive graphite as raw material by cost-effective chemical methods with a high yield, and (2) it is highly hydrophilic and can form stable aqueous colloids to facilitate the assembly of macroscopic structures by simple and cheap solution processes, both of which are important to the large-scale uses of graphene. A key topic in the research and applications of GO is the reduction, which partly restores the structure and properties of graphene. Different reduction processes result in different properties of reduced GO (rGO), which in turn affect the final performance of materials or devices composed of rGO. In this contribution, we review the state-of-art status of the reduction of GO on both techniques and mechanisms. The development in this field will speed the applications of graphene. 2011 Elsevier Ltd. All rights reserved."}
{"_id": "05ba00812bbbe15be83418df6657f74edf76f727", "title": "Sparse Code Filtering for Action Pattern Mining", "text": "Action recognition has received increasing attention during the last decade. Various approaches have been proposed to encode the videos that contain actions, among which self-similarity matrices (SSMs) have shown very good performance by encoding the dynamics of the video. However, SSMs become sensitive when there is a very large view change. In this paper, we tackle the multi-view action recognition problem by proposing a sparse code filtering (SCF) framework which can mine the action patterns. First, a class-wise sparse coding method is proposed to make the sparse codes of the between-class data lie close by. Then we integrate the classifiers and the class-wise sparse coding process into a collaborative filtering (CF) framework to mine the discriminative sparse codes and classifiers jointly. The experimental results on several public multi-view action recognition datasets demonstrate that the presented SCF framework outperforms other state-of-the-art methods."}
{"_id": "9aa687a89607bc0e2c0defa90efd6056be59f49f", "title": "Aging: a theory based on free radical and radiation chemistry.", "text": "The phenomenon of growth, decline and death\u2014aging\u2014has been the source of considerable speculation (1, 8, 10). This cycle seems to be a more or less direct function of the metabolic rate and this in turn depends on the species (animal or plant) on which are superimposed the factors of heredity and the effects of the stresses and strains of life\u2014which alter the metabolic activity. The universality of this phenomenon suggests that the reactions which cause it are basically the same in all living things. Viewing this process in the light of present day free radical and radiation chemistry and of radiobiology, it seems possible that one factor in aging may be related to deleterious side attacks of free radicals (which are normally produced in the course of cellular metabolism) on cell constituents.* Irradiation of living things induces mutation, cancer, and aging (9). Inasmuch as these also arise spontaneously in nature, it is natural to inquire if the processes might not be similar. It is believed that one mechanism of irradiation effect is through liberation of OH and HO2 radicals (12). There is evidence, although indirect, that these two highly active free radicals are produced normally in living systems. In the first place, free radicals are present in living cells; this was recently demonstrated in vivo by a paramagnetic resonance absorption method (3). Further, it was shown that the concentration of free radicals increased with increasing metabolic activity in conformity with the postulates set forth some years ago that free radicals were involved in biologic oxidation-reduction reactions (11, 13). Are some of these free radicals OH and/or HO2, or radicals of a similar high order of reactivity, and where might they arise in the cell?"}
{"_id": "c956b29a133673c32586c7736d12c606f2d59a21", "title": "Separate Hydrolysis and Fermentation of Pretreated Spruce Josefin Axelsson", "text": ""}
{"_id": "fc1655df6da906c66ff2a70d122e078137e29e71", "title": "Please Biofeed the Zombies: Enhancing the Gameplay and Display of a Horror Game Using Biofeedback", "text": "This paper describes an investigation into how real-time but low-cost biometric information can be interpreted by computer games to enhance gameplay without fundamentally changing it. We adapted a cheap sensor, (the Lightstone mediation sensor device by Wild Divine), to record and transfer biometric information about the player (via sensors that clip over their fingers) into a commercial game engine, Half-Life 2. During game play, the computer game was dynamically modified by the player\u2019s biometric information to increase the cinematically augmented \u201chorror\u201d affordances. These included dynamic changes in the game shaders, screen shake, and the creation of new spawning points for the game\u2019s non-playing characters (zombies), all these features were driven by the player\u2019s biometric data. To evaluate the usefulness of this biofeedback device, we compared it against a control group of players who also had sensors clipped on their fingers, but for the second group the gameplay was not modified by the biometric information of the players. While the evaluation results indicate biometric data can improve the situated feeling of horror, there are many design issues that will need to be investigated by future research, and the judicious selection of theme and appropriate interaction is vital. Author"}
{"_id": "487a5d756e29f24e737cfad8872d4fae5f072817", "title": "Compacting Neural Network Classifiers via Dropout Training", "text": "We introduce dropout compaction, a novel method for training feed-forward neural networks which realizes the performance gains of training a large model with dropout regularization, yet extracts a compact neural network for run-time efficiency. In the proposed method, we introduce a sparsity-inducing prior on the per unit dropout retention probability so that the optimizer can effectively prune hidden units during training. By changing the prior hyperparameters, we can control the size of the resulting network. We performed a systematic comparison of dropout compaction and competing methods on several realworld speech recognition tasks and found that dropout compaction achieved comparable accuracy with fewer than 50% of the hidden units, translating to a 2.5x speedup in run-time."}
{"_id": "9bebe8bfbe4e1e0f449f7478b0afce899d4ddbc7", "title": "Online search of overlapping communities", "text": "A great deal of research has been conducted on modeling and discovering communities in complex networks. In most real life networks, an object often participates in multiple overlapping communities. In view of this, recent research has focused on mining overlapping communities in complex networks. The algorithms essentially materialize a snapshot of the overlapping communities in the network. This approach has three drawbacks, however. First, the mining algorithm uses the same global criterion to decide whether a subgraph qualifies as a community. In other words, the criterion is fixed and predetermined. But in reality, communities for different vertices may have very different characteristics. Second, it is costly, time consuming, and often unnecessary to find communities for an entire network. Third, the approach does not support dynamically evolving networks. In this paper, we focus on online search of overlapping communities, that is, given a query vertex, we find meaningful overlapping communities the vertex belongs to in an online manner. In doing so, each search can use community criterion tailored for the vertex in the search. To support this approach, we introduce a novel model for overlapping communities, and we provide theoretical guidelines for tuning the model. We present several algorithms for online overlapping community search and we conduct comprehensive experiments to demonstrate the effectiveness of the model and the algorithms. We also suggest many potential applications of our model and algorithms."}
{"_id": "4d067fdbe62e8e1f30feb5f728e2113966b9ef2b", "title": "Virtual reality exposure therapy.", "text": "It has been proposed that virtual reality (VR) exposure may be an alternative to standard in vivo exposure. Virtual reality integrates real-time computer graphics, body tracking devices, visual displays, and other sensory input devices to immerse a participant in a computer-generated virtual environment. Virtual reality exposure is potentially an efficient and cost-effective treatment of anxiety disorders. VR exposure therapy reduced the fear of heights in the first controlled study of virtual reality in treatment of a psychiatric disorder. A case study supported the efficacy of VR exposure therapy for the fear of flying. The potential for virtual reality exposure treatment for these and other disorders is explored, and therapeutic issues surrounding the delivery of VR exposure are discussed."}
{"_id": "b697eed0a2ff305622ea4d6d3c8a3df68248ac3a", "title": "Virtual reality exposure therapy for Vietnam veterans with posttraumatic stress disorder.", "text": "BACKGROUND\nVirtual reality (VR) integrates real-time computer graphics, body-tracking devices, visual displays, and other sensory input devices to immerse a participant in a computer-generated virtual environment that changes in a natural way with head and body motion. VR exposure (VRE) is proposed as an alternative to typical imaginal exposure treatment for Vietnam combat veterans with posttraumatic stress disorder (PTSD).\n\n\nMETHOD\nThis report presents the results of an open clinical trial using VRE to treat Vietnam combat veterans who have DSM-IV PTSD. In 8 to 16 sessions, 10 male patients were exposed to 2 virtual environments: a virtual Huey helicopter flying over a virtual Vietnam and a clearing surrounded by jungle.\n\n\nRESULTS\nClinician-rated PTSD symptoms as measured by the Clinician Administered PTSD Scale, the primary outcome measure, at 6-month follow-up indicated an overall statistically significant reduction from baseline (p = .0021) in symptoms associated with specific reported traumatic experiences. All 8 participants interviewed at the 6-month follow-up reported reductions in PTSD symptoms ranging from 15% to 67%. Significant decreases were seen in all 3 symptom clusters (p < .02). Patient self-reported intrusion symptoms as measured by the Impact of Event Scale were significantly lower (p < .05) at 3 months than at baseline but not at 6 months, although there was a clear trend toward fewer intrusive thoughts and somewhat less avoidance.\n\n\nCONCLUSION\nVirtual reality exposure therapy holds promise for treating PTSD in Vietnam veterans."}
{"_id": "f39ed876490389f76ef20ba9bb071ba48fc1b5ca", "title": "Influence of emotional engagement and habituation on exposure therapy for PTSD.", "text": "This study examined 2 process variables, emotional engagement and habituation, and outcome of exposure therapy for posttraumatic stress disorder. Thirty-seven female assault victims received treatment that involved repeated imaginal reliving of their trauma, and rated their distress at 10-min intervals. The average distress levels during each of 6 exposure sessions were submitted to a cluster analysis. Three distinct groups of clients with different patterns of distress were found: high initial engagement and gradual habituation between sessions, high initial engagement without habituation, and moderate initial engagement without habituation. Clients with the 1st distress pattern improved more in treatment than the other clients. The results are discussed within the framework of emotional processing theory, emphasizing the crucial role of emotional engagement and habituation in exposure therapy."}
{"_id": "387d1c16cd89f19965e44c4cd38dc22c5c3163d2", "title": "Virtual Reality Exposure Therapy for World Trade Center Post-traumatic Stress Disorder: A Case Report", "text": "Done properly by experienced therapists, re-exposure to memories of traumatic events via imaginal exposure therapy can lead to a reduction of Post-traumatic Stress Disorder (PTSD) symptoms. Exposure helps the patient process and habituate to memories and strong emotions associated with the traumatic event: memories and emotions they have been carefully avoiding. But many patients are unwilling or unable to self-generate and re-experience painful emotional images. The present case study describes the treatment of a survivor of the World Trade Center (WTC) attack of 9-11-01 who had developed acute PTSD. After she failed to improve with traditional imaginal exposure therapy, we sought to increase emotional engagement and treatment success using virtual reality (VR) exposure therapy. Over the course of six 1-h VR exposure therapy sessions, we gradually and systematically exposed the PTSD patient to virtual planes flying over the World Trade Center, jets crashing into the World Trade Center with animated explosions and sound effects, virtual people jumping to their deaths from the burning buildings, towers collapsing, and dust clouds. VR graded exposure therapy was successful for reducing acute PTSD symptoms. Depression and PTSD symptoms as measured by the Beck Depression Inventory and the Clinician Administered PTSD Scale indicated a large (83%) reduction in depression, and large (90%) reduction in PTSD symptoms after completing VR exposure therapy. Although case reports are scientifically inconclusive by nature, these strong preliminary results suggest that VR exposure therapy is a promising new medium for treating acute PTSD. This study may be examined in more detail at www.vrpain.com."}
{"_id": "5867e37a74299f3d589b3eea63fcf70f5f0ed22b", "title": "Emotional processing of fear: exposure to corrective information.", "text": "In this article we propose mechanisms that govern the processing of emotional information, particularly those involved in fear reduction. Emotions are viewed as represented by information structures in memory, and anxiety is thought to occur when an information structure that serves as program to escape or avoid danger is activated. Emotional processing is denned as the modification of memory structures that underlie emotions. It is argued that some form of exposure to feared situations is common to many psychotherapies for anxiety, and that confrontation with feared objects or situations is an effective treatment. Physiological activation and habituation within and across exposure sessions are cited as indicators of emotional processing, and variables that influence activation and habituation of fear responses are examined. These variables and the indicators are analyzed to yield an account of what information must be integrated for emotional processing of a fear structure. The elements of such a structure are viewed as cognitive representations of the stimulus characteristic of the fear situation, the individual's responses in it, and aspects of its meaning for the individual. Treatment failures are interpreted with respect to the interference of cognitive defenses, autonomic arousal, mood state, and erroneous ideation with reformation of targeted fear structures. Applications of the concepts advanced here to therapeutic practice and to the broader study of psychopathology are discussed."}
{"_id": "9a55a63ce8fc4d9723db92f2f27bcd900e93d1da", "title": "A Simple and Effective Unsupervised Word Segmentation Approach", "text": "In this paper, we propose a new unsupervised approach for word segmentation. The core idea of our approach is a novel word induction criterion called WordRank, which estimates the goodness of word hypotheses (character or phoneme sequences). We devise a method to derive exterior word boundary information from the link structures of adjacent word hypotheses and incorporate interior word boundary information to complete the model. In light of WordRank, word segmentation can be modeled as an optimization problem. A Viterbistyled algorithm is developed for the search of the optimal segmentation. Extensive experiments conducted on phonetic transcripts as well as standard Chinese and Japanese data sets demonstrate the effectiveness of our approach. On the standard Brent version of BernsteinRatner corpora, our approach outperforms the state-ofthe-art Bayesian models by more than 3%. Plus, our approach is simpler and more efficient than the Bayesian methods. Consequently, our approach is more suitable for real-world applications."}
{"_id": "d9e03655b42ab9f0aeda3523961a04a1d6a54e14", "title": "Question Answering Over Knowledge Graphs: Question Understanding Via Template Decomposition", "text": "The gap between unstructured natural language and structured data makes it challenging to build a system that supports using natural language to query large knowledge graphs. Many existing methods construct a structured query for the input question based on a syntactic parser. Once the input question is parsed incorrectly, a false structured query will be generated, which may result in false or incomplete answers. The problem gets worse especially for complex questions. In this paper, we propose a novel systematic method to understand natural language questions by using a large number of binary templates rather than semantic parsers. As sufficient templates are critical in the procedure, we present a low-cost approach that can build a huge number of templates automatically. To reduce the search space, we carefully devise an index to facilitate the online template decomposition. Moreover, we design effective strategies to perform the two-level disambiguations (i.e., entity-level ambiguity and structure-level ambiguity) by considering the query semantics. Extensive experiments over several benchmarks demonstrate that our proposed approach is effective as it significantly outperforms state-of-the-art methods in terms of both precision and recall. PVLDB Reference Format: Weiguo Zheng, Jeffrey Xu Yu, Lei Zou, and Hong Cheng. Question Answering Over Knowledge Graphs: Question Understanding Via Template Decomposition. PVLDB, 11 (11): 1373-1386, 2018. DOI: https://doi.org/10.14778/3236187.3236192."}
{"_id": "92756b2cc917e307ab407cb59d2bf86d98f41a24", "title": "Demand Characteristics in Assessing Motion Sickness in a Virtual Environment: Or Does Taking a Motion Sickness Questionnaire Make You Sick?", "text": "The experience of motion sickness in a virtual environment may be measured through pre and postexperiment self-reported questionnaires such as the Simulator Sickness Questionnaire (SSQ). Although research provides converging evidence that users of virtual environments can experience motion sickness, there have been no controlled studies to determine to what extent the user's subjective response is a demand characteristic resulting from pre and posttest measures. In this study, subjects were given either SSQ's both pre and postvirtual environment immersion, or only postimmersion. This technique tested for contrast effects due to demand characteristics in which administration of the questionnaire itself suggested to the participant that the virtual environment may produce motion sickness. Results indicate that reports of motion sickness after immersion in a virtual environment are much greater when both pre and postquestionnaires are given than when only a posttest questionnaire is used. The implications for assessments of motion sickness in virtual environments are discussed."}
{"_id": "0b83159ebfdd930695afb54c151cde23774dc642", "title": "The security of modern password expiration: an algorithmic framework and empirical analysis", "text": "This paper presents the first large-scale study of the success of password expiration in meeting its intended purpose, namely revoking access to an account by an attacker who has captured the account's password. Using a dataset of over 7700 accounts, we assess the extent to which passwords that users choose to replace expired ones pose an obstacle to the attacker's continued access. We develop a framework by which an attacker can search for a user's new password from an old one, and design an efficient algorithm to build an approximately optimal search strategy. We then use this strategy to measure the difficulty of breaking newly chosen passwords from old ones. We believe our study calls into question the merit of continuing the practice of password expiration."}
{"_id": "86f530eb1e7cc8a9ec8e14e551d71c3fa324f689", "title": "Automatic Gleason grading of prostate cancer using quantitative phase imaging and machine learning.", "text": "We present an approach for automatic diagnosis of tissue biopsies. Our methodology consists of a quantitative phase imaging tissue scanner and machine learning algorithms to process these data. We illustrate the performance by automatic Gleason grading of prostate specimens. The imaging system operates on the principle of interferometry and, as a result, reports on the nanoscale architecture of the unlabeled specimen. We use these data to train a random forest classifier to learn textural behaviors of prostate samples and classify each pixel in the image into different classes. Automatic diagnosis results were computed from the segmented regions. By combining morphological features with quantitative information from the glands and stroma, logistic regression was used to discriminate regions with Gleason grade 3 versus grade 4 cancer in prostatectomy tissue. The overall accuracy of this classification derived from a receiver operating curve was 82%, which is in the range of human error when interobserver variability is considered. We anticipate that our approach will provide a clinically objective and quantitative metric for Gleason grading, allowing us to corroborate results across instruments and laboratories and feed the computer algorithms for improved accuracy."}
{"_id": "76cda01403872a672711a248c5acd1b63ff43304", "title": "Novel Six-Slot Four-Pole Axial Flux-Switching Permanent Magnet Machine for Electric Vehicle", "text": "This paper proposes several novel 6-slot/4-pole axial flux-switching machine topologies for electric and hybrid electric vehicles. The proposed topologies include stator shifting, rotor shifting, and a combination of shifting both the stator and the rotor. The proposed topologies eliminate the even harmonics of the flux linkage that create unbalanced flux linkage in the traditional 6-slot/4-pole machine. Fundamental frequency will be reduced by 60% compared with a 12-slot/10-pole axial flux-switching machine without unbalanced flux linkage. An analytical method is used to find the offset angle between stator a-axis and the rotor pole axis to realize the elimination of even harmonics of the flux linkage. The analytical method is verified by finite-element analysis and experiments. The results confirm that the proposed novel topologies efficiently eliminate the even harmonics of the flux linkage in the 6-slot/4-pole machine."}
{"_id": "4ea3ee7c8e65d599d36e5776428b9893f9ffa4e6", "title": "The inconspicuous penis in children", "text": "The term 'inconspicuous penis' refers to a group of anatomical abnormalities in which the penis looks smaller than is expected. Micropenis can be defined as 'true micropenis'\u2014which results from a defect in the hypothalamic\u2013pituitary\u2013gonadal axis\u2014and 'micropenis secondary to congenital anatomical anomalies of the surrounding and overlying structures'\u2014also known as 'concealed penis'. The different forms of concealed penis include webbed penis, congenital megaprepuce and partially hidden penis caused by prepubic adiposity. This disorder can also have iatrogenic causes resulting from adhesions that are secondary to circumcision\u2014this type of concealed penis is known as 'trapped penis'. However, in both groups, micropenis is defined as a stretched penile length that is at least 2.5 SD below the mean for the patient's age, but without any other penile defects. Patients with true micropenis can be managed with testosterone, which has demonstrated good penile elongation results in the long term. Surgery also has a pivotal role in reconstruction for elongating the penis and for correction of anatomical abnormalities in concealed penis."}
{"_id": "237a4d91b7c78ebe103d5c75d4319e5ccd1acd23", "title": "Neural Network based Performance Tuning in Distributed Database Environment", "text": "The self-tuning of database management systems has important advantages like efficient use of resources, improved performance, reduced total cost of ownership, improved customer satisfaction etc. To support large database driven web-applications with high scalability and availability, the data tier invariably will have distributed database systems. The self-tuning of database systems in a distributed database environment is most desirable as it not only improves the quality of service to the end user applications, but it also provides central control over the entire tuning process. In this paper, a novel technique that leverages the learning ability of the Artificial Neural Network(ANN) has been employed to estimate the extent of tuning required at each database location. Further, the estimated values are used to fine-tune the Database systems at every location from remote control panel. The proposed method has been tested under OLTP workload-type with database distributed on three nodes and the experimental results are validated with workload characteristics based selftuning method. The results show a significant performance improvement og 18.64% as compared to built in selftuning feature of the Database Management System. The self-tuning method not only makes the end-user applications more responsive, but it will also help improve the business prospects of the enterprise."}
{"_id": "9b248e8902d2ccf3b11a3696a2b18b6f11fbedef", "title": "Characterization of soft errors caused by single event upsets in CMOS processes", "text": "Radiation-induced single event upsets (SEUs) pose a major challenge for the design of memories and logic circuits in high-performance microprocessors in technologies beyond 90nm. Historically, we have considered power-performance-area trade offs. There is a need to include the soft error rate (SER) as another design parameter. In this paper, we present radiation particle interactions with silicon, charge collection effects, soft errors, and their effect on VLSI circuits. We also discuss the impact of SEUs on system reliability. We describe an accelerated measurement of SERs using a high-intensity neutron beam, the characterization of SERs in sequential logic cells, and technology scaling trends. Finally, some directions for future research are given."}
{"_id": "bf08522764aa3882f32bf87ff38653e29e1d3a1b", "title": "Multitechnique paraphrase alignment: A contribution to pinpointing sub-sentential paraphrases", "text": "This work uses parallel monolingual corpora for a detailed study of the task of sub-sentential paraphrase acquisition. We argue that the scarcity of this type of resource is compensated by the fact that it is the most suited type for studies on paraphrasing. We propose a large exploration of this task with experiments on two languages with five different acquisition techniques, selected for their complementarity, their combinations, as well as four monolingual corpus types of varying comparability. We report, under all conditions, a significant improvement over all techniques by validating candidate paraphrases using a maximum entropy classifier. An important result of our study is the identification of difficult-to-acquire paraphrase pairs, which are classified and quantified in a bilingual typology."}
{"_id": "421ccb793d042f186e2031bd5a07a05b5a6f8162", "title": "Automatic Model Generation from Documentation for Java API Functions", "text": "Modern software systems are becoming increasingly complex, relying on a lot of third-party library support. Library behaviors are hence an integral part of software behaviors. Analyzing them is as important as analyzing the software itself. However, analyzing libraries is highly challenging due to the lack of source code, implementation in different languages, and complex optimizations. We observe that many Java library functions provide excellent documentation, which concisely describes the functionalities of the functions. We develop a novel technique that can construct models for Java API functions by analyzing the documentation. These models are simpler implementations in Java compared to the original ones and hence easier to analyze. More importantly, they provide the same functionalities as the original functions. Our technique successfully models 326 functions from 14 widely used Java classes. We also use these models in static taint analysis on Android apps and dynamic slicing for Java programs, demonstrating the effectiveness and efficiency of our models."}
{"_id": "07ba4a933f34064a03a8e05be8f6de3492b3448e", "title": "Practical graph isomorphism, II", "text": "We report the current state of the graph isomorphism problem from the practical point of view. After describing the general principles of the refinement-individualization paradigm and proving its validity, we explain how it is implemented in several of the key programs. In particular, we bring the description of the best known program nauty up to date and describe an innovative approach called Traces that outperforms the competitors for many difficult graph classes. Detailed comparisons against saucy, Bliss and conauto are presented. Email addresses: bdm@cs.anu.edu.au (Brendan D. McKay), piperno@di.uniroma1.it (Adolfo Piperno). 1 Supported by the Australian Research Council. Preprint submitted to Elsevier \u2014 nautytraces2b \u2014 11 December 2013 \u2014 ar X iv :1 30 1. 14 93 v1 [ cs .D M ] 8 J an 2 01 3"}
{"_id": "644482c6c6ca800ccc4ef07505e34dbde8cefcb4", "title": "Breast Cancer Classification from Histopathological Images with Inception Recurrent Residual Convolutional Neural Network", "text": "The Deep Convolutional Neural Network (DCNN) is one of the most powerful and successful deep learning approaches. DCNNs have already provided superior performance in different modalities of medical imaging including breast cancer classification, segmentation, and detection. Breast cancer is one of the most common and dangerous cancers impacting women worldwide. In this paper, we have proposed a method for breast cancer classification with the Inception Recurrent Residual Convolutional Neural Network (IRRCNN) model. The IRRCNN is a powerful DCNN model that combines the strength of the Inception Network (Inception-v4), the Residual Network (ResNet), and the Recurrent Convolutional Neural Network (RCNN). The IRRCNN shows superior performance against equivalent Inception Networks, Residual Networks, and RCNNs for object recognition tasks. In this paper, the IRRCNN approach is applied for breast cancer classification on two publicly available datasets including BreakHis and Breast Cancer (BC) classification challenge 2015. The experimental results are compared against the existing machine learning and deep learning\u2013based approaches with respect to image-based, patch-based, image-level, and patient-level classification. The IRRCNN model provides superior classification performance in terms of sensitivity, area under the curve (AUC), the ROC curve, and global accuracy compared to existing approaches for both datasets."}
{"_id": "c9f56722d56454123e046d4bf2dbd9a69690942e", "title": "Co-word analysis and thematic landscapes in Spanish information science literature, 1985\u20132014", "text": "This paper discusses the thematic backdrop for Spanish library and information science output. It draws from Web of Science records on papers authored by researchers at Spanish institutions and published under the category \u2018Information Science & Library Science\u2019 between 1985 and 2014. Two analytical techniques were used, one based on co-keyword and the other on document co-citation networks. Burst detection was applied to noun phrases and references of the intellectual base. Co-citation analysis identified nine research fronts: \u2018digital rights management\u2019, \u2018citation analysis\u2019, \u2018translation services\u2019, \u2018bibliometric analysis\u2019, \u2018co-authorship\u2019, \u2018electronic books\u2019, \u2018webometrics\u2019, \u2018information systems\u2019 and \u2018world wide web\u2019. The most recent trends in the subject areas addressed in Spain were found to lie in metrics-related sub-specialities: the h-index, scientific collaboration, journal bibliometric indicators, rankings, universities and webometrics."}
{"_id": "626130a8907ee3514c411fa8df956c37a9cc35cb", "title": "Five Facts About Prices : A Reevaluation of Menu Cost Models", "text": "We establish five facts about prices in the U.S. economy: 1) The median duration of consumer prices when sales are excluded at the product level is 11 months. The median duration of finished goods producer prices is 8.7 months. 2) Two-thirds of regular price changes are price increases. 3) The frequency of price increases responds strongly to inflation while the frequency of price decreases and the size of price increases and price decreases do not. 4) The frequency of price change is highly seasonal: It is highest in the 1st quarter and lowest in the 4th quarter. 5) The hazard function of price changes for individual consumer and producer goods is downward sloping for the first few months and then flat (except for a large spike at 12 months in consumer services and all producer prices). These facts are based on CPI microdata and a new comprehensive data set of microdata on producer prices that we construct from raw production files underlying the PPI. We show that the 1st, 2nd and 3rd facts are consistent with a benchmark menu-cost model, while the 4th and 5th facts are not."}
{"_id": "790d2d77fa6439121baee5ec85920eb918561316", "title": "Random projections fuzzy k-nearest neighbor(RPFKNN) for big data classification", "text": "As the number of features in pattern recognition applications continuously grows, new algorithms are necessary to reduce the dimensionality of the feature space while producing comparable results. For example, a dynamic area of research, activity recognition, produces large quantities of high-velocity, high-dimensionality data that require real time classification. While dimensionality reduction approaches such as principle component analysis (PCA) and feature selection work well for datasets of reasonable size and dimensionality, they fail on big data. A possible approach to classification of high-dimensionality datasets is to combine a typical classifier, fuzzy k-nearest neighbor in our case (FKNN), with feature reduction by random projection (RP). As opposed to PCA where one projection matrix is computed based on least square optimality, in RP, a projection matrix is chosen at random multiple times. As the random projection procedure is repeated many times, the question is how to aggregate the values of the classifier obtained in each projection. In this paper we present a fusion strategy for RP FKNN, denoted as RPFKNN. The fusion strategy is based on the class membership values produced by FKNN and classification accuracy in each projection. We test RPFKNN on several synthetic and activity recognition datasets."}
{"_id": "77f7385a9d206640cc10efad29eb4ac656e7cb1b", "title": "Course Enrollment Recommender System", "text": "One of the main problems faced by university students is to create and manage the semester course plan. In this paper, we present a course enrollment recommender system based on data mining techniques. The system mainly helps with students\u2019 enrollment decisions. More specifically, it provides recommendation of selective and optional courses with respect to students\u2019 skills, knowledge, interests and free time slots in their timetables. The system also warns students against difficult courses and reminds them mandatory study duties. We evaluate the usability of designed methods by analyzing real-world data obtained from the Information System of Masaryk University."}
{"_id": "4d484c4d81899418dc61ac55f1293ccb7287866f", "title": "Convolution-based memory models Memory", "text": "Definition: Convolution-Based Memory Models are a mathematical model of neural storage of complex data structures using distributed representations. Data structures stored range from lists of pairs through sequences, trees and networks."}
{"_id": "28cf7bdd838aae51faebd2d78f8f1f80cd1e2e3d", "title": "MilkyWay-2 supercomputer: system and application", "text": "On June 17, 2013, MilkyWay-2 (Tianhe-2) supercomputer was crowned as the fastest supercomputer in the world on the 41th TOP500 list. This paper provides an overview of the MilkyWay-2 project and describes the design of hardware and software systems. The key architecture features of MilkyWay-2 are highlighted, including neo-heterogeneous compute nodes integrating commodity-off-the-shelf processors and accelerators that share similar instruction set architecture, powerful networks that employ proprietary interconnection chips to support the massively parallel message-passing communications, proprietary 16-core processor designed for scientific computing, efficient software stacks that provide high performance file system, emerging programming model for heterogeneous systems, and intelligent system administration. We perform extensive evaluation with wide-ranging applications from LINPACK and Graph500 benchmarks to massively parallel software deployed in the system."}
{"_id": "a71743e9ffa554c25b572c3bda9ac9676fca1061", "title": "Road traffic sign detection and classification", "text": "A vision-based vehicle guidance system for road vehicles can have three main roles: 1) road detection; 2) obstacle detection; and 3) sign recognition. The first two have been studied for many years and with many good results, but traffi c sign recognition is a less-studied field. Traffi c signs provide drivers with very valuable informatio n about the road, in order to make drivin g safer and easier. We think that traffi c signs must play the same role for autonomous vehicles. They are designed to be easily recognized by human driver s mainly because their color and shapes are very different from natural environments. The algorithm described in this paper takes advantage of these features. I t has two main parts. The first one, for the detection, uses color thresholding to segment the image and shape analysis to detect the signs. The second one, for the classification, uses a neural network. Some results from natural scenes are shown. On the other hand, the algorithm is valid to detect other kinds of marks that would tell the mobile robot to perform some task at that place."}
{"_id": "e49f811a191ff76a45b8310310acaa12f677eb9e", "title": "Study on WeChat User Behaviors of University Graduates", "text": "WeChat was published in China by Tencent company since Jan. 21st 2011. After 100 million users in Mar. 2012, Tencent announced that the number of WeChat's users reached 300 million in Jan 15th 2013. WeChat has become one of the most heated topics within the last 2 years and its every move is attracting attention from various academia, industries, and enterprises. This study selected 50 university graduates as object and analyzed their We Chat using patterns through investigation and survey with statistic methods. The author designed the methods of user study and invited users to participate in the user study at first. Second, by using the collected data from university graduates from 2013 to Mar 2014, the author analyzed the quantities and composition of their contacts in We Chat, figured out the source, type and goal of the public platform they are following, recorded the time, type, form and time relativity of their first 100 posts in the \"Moments\" function. Through analyzing the data, the author intended to study the real situation in which university graduates virtually socialize and communicate with each other using WeChat and the popularity of WeChat on building social network with strangers. Also, this paper has certain contribution to how to improve the function and service of WeChat. What's more, it analyzed both positive and negative influence of We Chat on students' socialization."}
{"_id": "216545a302687574b0cdbc0375483c726b990f80", "title": "Intrinsic Motivation Systems for Autonomous Mental Development", "text": "Exploratory activities seem to be intrinsically rewarding for children and crucial for their cognitive development. Can a machine be endowed with such an intrinsic motivation system? This is the question we study in this paper, presenting a number of computational systems that try to capture this drive towards novel or curious situations. After discussing related research coming from developmental psychology, neuroscience, developmental robotics, and active learning, this paper presents the mechanism of Intelligent Adaptive Curiosity, an intrinsic motivation system which pushes a robot towards situations in which it maximizes its learning progress. This drive makes the robot focus on situations which are neither too predictable nor too unpredictable, thus permitting autonomous mental development. The complexity of the robot's activities autonomously increases and complex developmental sequences self-organize without being constructed in a supervised manner. Two experiments are presented illustrating the stage-like organization emerging with this mechanism. In one of them, a physical robot is placed on a baby play mat with objects that it can learn to manipulate. Experimental results show that the robot first spends time in situations which are easy to learn, then shifts its attention progressively to situations of increasing difficulty, avoiding situations in which nothing can be learned. Finally, these various results are discussed in relation to more complex forms of behavioral organization and data coming from developmental psychology"}
{"_id": "41d205fd36883f506bccf56db442bac92a854ec3", "title": "PowerPlay: Training an Increasingly General Problem Solver by Continually Searching for the Simplest Still Unsolvable Problem", "text": "Most of computer science focuses on automatically solving given computational problems. I focus on automatically inventing or discovering problems in a way inspired by the playful behavior of animals and humans, to train a more and more general problem solver from scratch in an unsupervised fashion. Consider the infinite set of all computable descriptions of tasks with possibly computable solutions. Given a general problem-solving architecture, at any given time, the novel algorithmic framework PowerPlay (Schmidhuber, 2011) searches the space of possible pairs of new tasks and modifications of the current problem solver, until it finds a more powerful problem solver that provably solves all previously learned tasks plus the new one, while the unmodified predecessor does not. Newly invented tasks may require to achieve a wow-effect by making previously learned skills more efficient such that they require less time and space. New skills may (partially) re-use previously learned skills. The greedy search of typical PowerPlay variants uses time-optimal program search to order candidate pairs of tasks and solver modifications by their conditional computational (time and space) complexity, given the stored experience so far. The new task and its corresponding task-solving skill are those first found and validated. This biases the search toward pairs that can be described compactly and validated quickly. The computational costs of validating new tasks need not grow with task repertoire size. Standard problem solver architectures of personal computers or neural networks tend to generalize by solving numerous tasks outside the self-invented training set; PowerPlay's ongoing search for novelty keeps breaking the generalization abilities of its present solver. This is related to G\u00f6del's sequence of increasingly powerful formal theories based on adding formerly unprovable statements to the axioms without affecting previously provable theorems. The continually increasing repertoire of problem-solving procedures can be exploited by a parallel search for solutions to additional externally posed tasks. PowerPlay may be viewed as a greedy but practical implementation of basic principles of creativity (Schmidhuber, 2006a, 2010). A first experimental analysis can be found in separate papers (Srivastava et al., 2012a,b, 2013)."}
{"_id": "6352bbad35081aaca0ac6c3621f27386a4a44ed2", "title": "Overcoming catastrophic forgetting in neural networks", "text": "The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially."}
{"_id": "03b9cf696f119281334d8b7a71d64cd52560ca5c", "title": "Three Approaches to the Quantitative Definition of Information in an Individual Pure Quantum State", "text": "In analogy of classical Kolmogorov complexity we develop a theory of the algorithmic information in bits contained in any one of continuously many pure quantum states: quantum Kolmogorov complexity. Classical Kolmogorov complexity coincides with the new quantum Kolmogorov complexity restricted to the classical domain. Quantum Kolmogorov complexity is upper bounded and can be effectively approximated from above. With high probability a quantum object is incompressible. There are two alternative approaches possible: to define the complexity as the length of the shortest qubit program that effectively describes the object, and to use classical descriptions with computable real parameters."}
{"_id": "c050cd5d538f09f10269f430ecf40cc15599056b", "title": "Adding Semantic Role Annotation to a Corpus of Written Dutch", "text": "We present an approach to automatic semantic role labeling (SRL) carried out in the context of the Dutch Language Corpus Initiative (D-Coi) project. Adapting earlier research which has mainly focused on English to the Dutch situation poses an interesting challenge especially because there is no semantically annotated Dutch corpus available that can be used as training data. Our automatic SRL approach consists of three steps: bootstrapping from a syntactically annotated corpus by means of a rulebased tagger developed for this purpose, manual correction on the basis of the PropBank guidelines which have been adapted to Dutch and training a machine learning system on the manually corrected data."}
{"_id": "f36ef0d3e8d3abc1f30abc06603471c9aa1cc0d7", "title": "Impacts of increasing volume of digital forensic data: A survey and future research challenges", "text": ""}
{"_id": "a0e01cd16b129d590a9b250bb9935a7504302054", "title": "(Leader/Randomization/Signature)-free Byzantine Consensus for Consortium Blockchains", "text": "This paper presents a new resilience optimal Byzantine consensus algorithm targeting consortium blockchains. To this end, it first revisits the consensus validity property by requiring that the decided value satisfies a predefined predicate, which does not systematically exclude a value proposed only by Byzantine processes, thereby generalizing the validity properties found in the literature. Then the paper presents a simple and modular Byzantine consensus algorithm that relies neither on a leader, nor on signatures, nor on randomization. It consists of a reduction of multivalued Byzantine consensus to binary Byzantine consensus satisfying this validity property. This reduction terminates after a constant-sized sequence of binary Byzantine consensus instances. The idea is to spawn concurrent instances of binary consensus but to decide only after a sequence of two of these instances. The binary consensus instances result in a bitmask that the reduction applies to a vector of multivalued proposals to filter out a valid proposed value that is decided. The paper then presents an underlying binary Byzantine consensus algorithm that assumes eventual synchrony to terminate."}
{"_id": "ce90a7d21c1a3bd3767c12e4e84de965c5fa9b76", "title": "Social Interactions and Social Preferences in Social Networks", "text": "We extend the utility specification in Ballester et al. (2006) to study social interactions when individuals hold altruistic preferences in social networks. We show that more enriched network features can be captured in the best response function derived from the maximization of the extended utility which incorporates altruism, providing microfoundation for studies which investigate how network features mediate peer effects or other important features in social interactions. We demonstrate that often ignored altruism is another serious confounding factor of peer effects. Our results show that the estimates of peer affects are about 36% smaller under altruistic preferences. Furthermore, we are able to separate the two different types of effects generated by peers: the (usually positive) spillover effects from peers and the (negative or positive) externality effects directly generated from peers\u2019 outcomes, which is not possible in conventional social interactions model based on self-interest hypothesis. JEL Classification: I10 I20 C31"}
{"_id": "bedd052b8a6a28cb1e8d54f4c26deb5873d80fe7", "title": "Beating the adaptive bandit with high probability", "text": "We provide a principled way of proving \u00d5(\u221aT) high-probability guarantees for partial-information (bandit) problems over arbitrary convex decision sets. First, we prove a regret guarantee for the full-information problem in terms of \u201clocal\u201d norms, both for entropy and self-concordant barrier regularization, unifying these methods. Given one of such algorithms as a black-box, we can convert a bandit problem into a full-information problem using a sampling scheme. The main result states that a high-probability \u00d5(\u221aT) bound holds whenever the black-box, the sampling scheme, and the estimates of missing information satisfy a number of conditions, which are relatively easy to check. At the heart of the method is a construction of linear upper bounds on confidence intervals. As applications of the main result, we provide the first known efficient algorithm for the sphere with an \u00d5(\u221aT) high-probability bound. We also derive the result for the n-simplex, improving the O(\u221anT log(nT)) bound of Auer et al [3] by replacing the log T term with log log T and closing the gap to the lower bound of \u2126(\u221anT). While \u00d5(\u221aT) high-probability bounds should hold for general decision sets through our main result, construction of linear upper bounds depends on the particular geometry of the set; we believe that the sphere example already exhibits the necessary ingredients. The guarantees we obtain hold for adaptive adversaries (unlike the in-expectation results of [1]) and the algorithms are efficient, given that the linear upper bounds on confidence can be computed."}
{"_id": "c64d27b122d5b6ef0be135e63df05c3b24bd80c5", "title": "Inventory of Transcribed and Translated Talks", "text": "We describe here a Web inventory named WIT3 that offers access to a collection of transcribed and translated talks. The core of WIT3 is the TED Talks corpus, that basically redistributes the original content published by the TED Conference website (http://www.ted.com). Since 2007, the TED Conference, based in California, has been posting all video recordings of its talks together with subtitles in English and their translations in more than 80 languages. Aside from its cultural and social relevance, this content, which is published under the Creative Commons BYNC-ND license, also represents a precious language resource for the machine translation research community, thanks to its size, variety of topics, and covered languages. This effort repurposes the original content in a way which is more convenient for machine translation researchers."}
{"_id": "4bb8340a8c237efd80fac258398d180f9aa32887", "title": "Microservices approach for the internet of things", "text": "The microservice approach has created a hype in the domain of cloud and enterprise application business. Before, grown, monolithic, software has been pushed to the limits of maintainability and scalability. The microservice architecture approach utilizes the service oriented architecture together with best practices and recent developments in software virtualization to overcome those issues. One monolithic application is split up into a set of distributed services. Those are strongly decoupled to enable high maintainability and scalability. In this case an application is split up in a top down manner. In the internet of things, applications need to be put together from a set of small and independent services. Thus, creating value added services would require to freely combine services of different vendors to fully make use of the IoT's heterogeneity. Even though the direction is different, many of the requirements in microservices are similar to those of the internet of things. This paper investigates patterns and best practices that are used in the microservices approach and how they can be used in the internet of things. Since the companies using microservices have made considerations on how services have to be designed to work together properly, IoT applications might adopt several of these design decisions to improve the ability to create value added applications from a multitude of services."}
{"_id": "ab9922f5961e0607380da80b257d0adbb8b3bbcd", "title": "Blood pressure-lowering effect of Shinrin-yoku (Forest bathing): a systematic review and meta-analysis", "text": "BACKGROUND\nShinrin-yoku (experiencing the forest atmosphere or forest bathing) has received increasing attention from the perspective of preventive medicine in recent years. Some studies have reported that the forest environment decreases blood pressure. However, little is known about the possibility of anti-hypertensive applications of Shinrin-yoku. This study aimed to evaluate preventive or therapeutic effects of the forest environment on blood pressure.\n\n\nMETHODS\nWe systematically reviewed the medical literature and performed a meta-analysis.Four electronic databases were systematically searched for the period before May 2016 with language restriction of English and Japanese. The review considered all published, randomized, controlled trials, cohort studies, and comparative studies that evaluated the effects of the forest environment on changes in systolic blood pressure. A subsequent meta-analysis was performed.\n\n\nRESULTS\nTwenty trials involving 732 participants were reviewed. Systolic blood pressure of the forest environment was significantly lower than that of the non-forest environment. Additionally, diastolic blood pressure of the forest environment was significantly lower than that of the non-forest environment.\n\n\nCONCLUSIONS\nThis systematic review shows a significant effect of Shinrin-yoku on reduction of blood pressure."}
{"_id": "57cb33a6906bb05c51666e1ef58740d78f69e360", "title": "Origins of narcissism in children.", "text": "Narcissism levels have been increasing among Western youth, and contribute to societal problems such as aggression and violence. The origins of narcissism, however, are not well understood. Here, we report, to our knowledge, the first prospective longitudinal evidence on the origins of narcissism in children. We compared two perspectives: social learning theory (positing that narcissism is cultivated by parental overvaluation) and psychoanalytic theory (positing that narcissism is cultivated by lack of parental warmth). We timed the study in late childhood (ages 7-12), when individual differences in narcissism first emerge. In four 6-mo waves, 565 children and their parents reported child narcissism, child self-esteem, parental overvaluation, and parental warmth. Four-wave cross-lagged panel models were conducted. Results support social learning theory and contradict psychoanalytic theory: Narcissism was predicted by parental overvaluation, not by lack of parental warmth. Thus, children seem to acquire narcissism, in part, by internalizing parents' inflated views of them (e.g., \"I am superior to others\" and \"I am entitled to privileges\"). Attesting to the specificity of this finding, self-esteem was predicted by parental warmth, not by parental overvaluation. These findings uncover early socialization experiences that cultivate narcissism, and may inform interventions to curtail narcissistic development at an early age."}
{"_id": "9c573daa179718f6c362f296f123e8ea2a775082", "title": "Synthesis of a compound T-junction for a two-way splitter with arbitrary power ratio", "text": "We have developed a simple and efficient procedure to design H-plane rectangular waveguide T-junctions for equal and unequal two-way power splitters. This synthesis procedure is scalable, renders manufacturable structures, applicable to any arbitrary power split-ratio, and can offer wide band operation. In our implementation, we utilized wedges and inductive windows (being an integral part of the T-junctions), to provide more degrees of freedom, thus, excellent match at the input port, flat power-split ratio over the band with equal phase, where phase balance is essential for various antenna feeds."}
{"_id": "6e3a4eea1736d24494949d362430c9b9c5e5db7d", "title": "Brain MRI Findings of Nitrogen Gas Inhalation for Suicide Attempt: a Case Report", "text": "According to the Organization for Economic Cooperation and Development (OECD) report in 2015, South Korea had the highest suicide rate among all countries that belong to the OECD. In contrast to the pattern in most OECD countries, death rates from suicide in Korea have risen significantly in the last decade (1). Recently, several organizations and internet communities in favor of assisted suicide have promoted the use of nitrogen (N2) gas to that end (2). Nitrogen gas has caused accidental deaths in industrial or laboratory explosion, and during scuba diving and anesthesia (2). Although it is reported that industrial nitrogen asphyxiation hazards resulted in 80 deaths during the period 1992 through 2002, there is a paucity of documentation regarding nitrogen gas as a means of committing suicide (2, 3). Nitrogen is a colorless, odorless, nontoxic, and generally inert gas that is a normal component (78.09%) of the atmosphere, at standard temperature and pressure (4). However, nitrogen can be hazardous when it displaces oxygen resulting in hypoxic damage (2, 3). Nitrogen intoxication manifests with various symptoms such as progressive fatigue, loss of coordination, purposeful movement and balance, nausea, a complete inability to move and unconsciousness (2, 4). Here, we describe a case of brain magnetic resonance imaging (MRI) findings associated with nitrogen gas inhalation, which have been rarely reported previously. This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/ by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. Received: September 18, 2017 Revised: October 1, 2017 Accepted: October 12, 2017"}
{"_id": "eb55d6467632337c053d33a8833717f41f274309", "title": "Classification using Hybrid SVM and KNN Approach", "text": "Phishing is a potential web threat that includes mimicking official websites to trick users by stealing their important information such as username and password related to financial systems. The attackers use social engineering techniques like email, SMS and malware to fraud the users. Due to the potential financial losses caused by phishing, it is essential to find effective approaches for phishing websites detection. This paper proposes a hybrid approach for classifying the websites as Phishing, Legitimate or Suspicious websites, the proposed approach intelligently combines the K-nearest neighbors (KNN) algorithm with the Support Vector Machine (SVM) algorithm in two stages. Firstly, the KNN was utilized as a robust to noisy data and effective classifier. Secondly, the SVM is employed as a powerful classifier. The proposed approach integrates the simplicity of KNN with the effectiveness of SVM. The experimental results show that the proposed hybrid approach achieved the highest accuracy of 90.04% when compared with other approaches. Keywords\u2014Information security; phishing websites; support vector machine; K-nearest neighbors"}
{"_id": "672a1d2c9853e48dad8c01bbf006fe25cc1f892e", "title": "Electric Flight \u2013 Potential and Limitations", "text": "During the last years, the development of electric propulsion systems is pushed strongly, most notably for automotive applications. For ground based transportation, the system mass is relevant, but of relatively low importance. This paper addresses the application of electric propulsion systems in aeronautics, where all systems are much more constrained by their mass. After comparing different propulsion system architectures the focus is cast on the energy storage problem and here especially on battery storage systems. Using first order principles the limitations and required technology developments are demonstrated by introducing battery powered electric propulsion systems to small and medium sized aircraft. Abbreviations and Symbols b = wing span [m] C = capacity [Wh] D = drag force [N] * E = mass specific energy content [Wh/kg] e f = empty mass fraction e m /m g = gravity acceleration [m/s] L = lift force [N] m = mass [kg] initial m = initial mass at takeoff [kg] e m = empty mass without fuel and payload [kg] final m = final mass at landing [kg] battery m = battery mass [kg] payload m = payload mass [kg] pax m = mass per passenger [kg] PAX = number of passengers P = power [W] battery P = electric power provided by battery [W] R = range [m] t = time [s] T = thrust [N] * V = volume specific energy content [Wh/m] v\u00a5 = flight speed [m/s] h = efficiency \u00a5 r = density of air [kg/m ]"}
{"_id": "a27471a433129e0519ff1f3c21a16ce12cd0277a", "title": "ANANAS - A Framework for Analyzing Android Applications", "text": "Android is an open software platform for mobile devices with a large market share in the smart phone sector. The openness of the system as well as its wide adoption lead to an increasing amount of malware developed for this platform. ANANAS is an expandable and modular framework for analyzing Android applications. It takes care of common needs for dynamic malware analysis and provides an interface for the development of plugins. Adaptability and expandability have been main design goals during the development process. An abstraction layer for simple user interaction and phone event simulation is also part of the framework. It allows an analyst to script the required user simulation or phone events on demand or adjust the simulation to his needs. Six plugins have been developed for ANANAS. They represent well known techniques for malware analysis, such as system call hooking and network traffic analysis. The focus clearly lies on dynamic analysis, as five of the six plugins are dynamic analysis methods."}
{"_id": "2fbe574330abb86d3a544dff76adc6b7d0f89c44", "title": "Motion tracking and gait feature estimation for recognising Parkinson's disease using MS Kinect.", "text": "BACKGROUND\nAnalysis of gait features provides important information during the treatment of neurological disorders, including Parkinson's disease. It is also used to observe the effects of medication and rehabilitation. The methodology presented in this paper enables the detection of selected gait attributes by Microsoft (MS) Kinect image and depth sensors to track movements in three-dimensional space.\n\n\nMETHODS\nThe experimental part of the paper is devoted to the study of three sets of individuals: 18 patients with Parkinson's disease, 18 healthy aged-matched individuals, and 15 students. The methodological part of the paper includes the use of digital signal-processing methods for rejecting gross data-acquisition errors, segmenting video frames, and extracting gait features. The proposed algorithm describes methods for estimating the leg length, normalised average stride length (SL), and gait velocity (GV) of the individuals in the given sets using MS Kinect data.\n\n\nRESULTS\nThe main objective of this work involves the recognition of selected gait disorders in both the clinical and everyday settings. The results obtained include an evaluation of leg lengths, with a mean difference of 0.004 m in the complete set of 51 individuals studied, and of the gait features of patients with Parkinson's disease (SL: 0.38 m, GV: 0.61 m/s) and an age-matched reference set (SL: 0.54 m, GV: 0.81 m/s). Combining both features allowed for the use of neural networks to classify and evaluate the selectivity, specificity, and accuracy. The achieved accuracy was 97.2 %, which suggests the potential use of MS Kinect image and depth sensors for these applications.\n\n\nCONCLUSIONS\nDiscussion points include the possibility of using the MS Kinect sensors as inexpensive replacements for complex multi-camera systems and treadmill walking in gait-feature detection for the recognition of selected gait disorders."}
{"_id": "640eccc55eeb23f561efcb32ca97d445624cf326", "title": "BackIP : Backpressure Routing in IPv 6-Based Wireless Sensor Networks", "text": "Wireless sensor networks are increasingly being deployed in real-world applications ranging from energy monitoring to water-level measurement. To better integrate with existing network infrastructure, they are being designed to communicate using IPv6. The current de-facto standard for routing in IPv6-based sensor networks is the shortest-path-based RPL, developed by the IETF 6LoWPaN working group. This paper describes BackIP, an alternative routing protocol for data collection in IPv6-based wireless sensor networks that is based on the backpressure paradigm. In a backpressure-based protocol, routing decisions can be made on-the-fly on a perpacket basis by nodes based on the current locally observed state, and prior work has shown that they can offer superior throughput performance and responsiveness to dynamic conditions compared to shortest-path routing protocols. We discuss a number of design decisions that are needed to enable backpressure routing to work in a scalable and efficient manner with IPv6. We implement and evaluate the performance of this protocol on a TinyOS-based real wireless sensor network testbed."}
{"_id": "053912e76e50c9f923a1fc1c173f1365776060cc", "title": "On optimization methods for deep learning", "text": "The predominant methodology in training deep learning advocates the use of stochastic gradient descent methods (SGDs). Despite its ease of implementation, SGDs are difficult to tune and parallelize. These problems make it challenging to develop, debug and scale up deep learning algorithms with SGDs. In this paper, we show that more sophisticated off-the-shelf optimization methods such as Limited memory BFGS (L-BFGS) and Conjugate gradient (CG) with line search can significantly simplify and speed up the process of pretraining deep algorithms. In our experiments, the difference between LBFGS/CG and SGDs are more pronounced if we consider algorithmic extensions (e.g., sparsity regularization) and hardware extensions (e.g., GPUs or computer clusters). Our experiments with distributed optimization support the use of L-BFGS with locally connected networks and convolutional neural networks. Using L-BFGS, our convolutional network model achieves 0.69% on the standard MNIST dataset. This is a state-of-theart result on MNIST among algorithms that do not use distortions or pretraining."}
{"_id": "20cb9de9921d7efbc1add2848239d7916bf158b2", "title": "Activity Recognition from Accelerometer Data", "text": "Activity recognition fits within the bigger framework of context awareness. In this paper, we report on our efforts to recognize user activity from accelerometer data. Activity recognition is formulated as a classification problem. Performance of base-level classifiers and meta-level classifiers is compared. Plurality Voting is found to perform consistently well across different settings."}
{"_id": "55b81991fbb025038d98e8c71acf7dc2b78ee5e9", "title": "Practical recommendations for gradient-based training of deep architectures", "text": "Learning algorithms related to artificial neural networks and in particular for Deep Learning may seem to involve many bells and whistles, called hyperparameters. This chapter is meant as a practical guide with recommendations for some of the most commonly used hyper-parameters, in particular in the context of learning algorithms based on backpropagated gradient and gradient-based optimization. It also discusses how to deal with the fact that more interesting results can be obtained when allowing one to adjust many hyper-parameters. Overall, it describes elements of the practice used to successfully and efficiently train and debug large-scale and often deep multi-layer neural networks. It closes with open questions about the training difficulties observed with deeper architectures."}
{"_id": "cabcfc0c8704fa15fa8212a6f8944a249d5dcfa9", "title": "Miniaturized dual-band dipole antenna loaded with metamaterial based structure", "text": "In this paper, a new miniaturized double-sided printed dipole antenna loaded with balanced Capacitively Loaded Loops (CLLs) as Metamaterial structure is presented. CLLs placed close to the edge of the printed antenna cause the antenna to radiate at two different frequencies, one of which is lower than self-resonant frequency of dipole antenna. In the other words, the loaded dipole antenna can perform at low frequency as compared with natural resonance frequency of unload half wavelength dipole. Finally, the CLL element is integrated with chip capacitor to provide a larger capacitance which in turn allows the resulting CLL element to resonate at a lower frequency. It is demonstrated that the proposed loaded dipole antenna is a dual band radiator with sufficient gain suitable for applications such as mobile communication and Industrial, Scientific and Medical (ISM) system. Prototype of miniaturized double resonant dipole antenna is fabricated and tested. The measured results are in good agreement with those obtained from simulation."}
{"_id": "2b8b023c4a7bc7cd9d38a5acb6e2feaabbea9a36", "title": "Driver fatigue monitoring system using Support Vector Machines", "text": "Driver fatigue is one of the leading causes of traffic accidents. This paper presents a real-time non-intrusive fatigue monitoring system which exploits the driver's facial expression to detect and alert fatigued drivers. The presented approach adopts the Viola-Jones classifier to detect the driver's facial features. The correlation coefficient template matching method is then applied to derive the state of each feature on a frame by frame basis. A Support Vector Machine (SVM) is finally integrated within the system to classify the facial appearance as either fatigued or otherwise. Using this simple and cheap implementation, the overall system achieved an accuracy of 95.2%, outperforming other developed systems employing expensive hardware to reach the same objective."}
{"_id": "ded6e007efd7288ff2c35b5e95c77dba35058c0a", "title": "A generic camera calibration method for fish-eye lenses", "text": "Fish-eye lenses are convenient in such computer vision applications where a very wide angle of view is needed. However, their use for measurement purposes is limited by the lack of an accurate, generic, and easy-to-use calibration procedure. We hence propose a generic camera model for cameras equipped with fish-eye lenses and a method for calibration of such cameras. The calibration is possible by using only one view of a planar calibration object but more views should be used for better results. The proposed calibration method was evaluated with real images and the obtained results are promising. The calibration software becomes commonly available at the author's Web page."}
{"_id": "8ff67b8bb53883f2e1c30990bdf44ec2d8b502b9", "title": "Word Spotting and Recognition with Embedded Attributes", "text": "This paper addresses the problems of word spotting and word recognition on images. In word spotting, the goal is to find all instances of a query word in a dataset of images. In recognition, the goal is to recognize the content of the word image, usually aided by a dictionary or lexicon. We describe an approach in which both word images and text strings are embedded in a common vectorial subspace. This is achieved by a combination of label embedding and attributes learning, and a common subspace regression. In this subspace, images and strings that represent the same word are close together, allowing one to cast recognition and retrieval tasks as a nearest neighbor problem. Contrary to most other existing methods, our representation has a fixed length, is low dimensional, and is very fast to compute and, especially, to compare. We test our approach on four public datasets of both handwritten documents and natural images showing results comparable or better than the state-of-the-art on spotting and recognition tasks."}
{"_id": "4e4040cec4a97fe9c407e00331327d271ae38a18", "title": "SDN Multi-Domain Orchestration and Control: Challenges and innovative future directions", "text": "This paper proposes a hierarchical Software-Defined Networking (SDN) Multi-Domain Orchestration and Control architectural framework to address complex multi-tier governance domain deployments by Carriers, Service Providers (SP), Cloud Service Providers (CSP), large Enterprises, and Network Functions Virtualization (NFV) Infrastructure/Platform as a Service (NFVIaaS/NFVPaaS) providers. The proposed framework allows for simpler networking control and orchestration, more agile development processes, accelerated pace of innovation, and increased speed of deployment, while improving scalability and configuration automation as compared to existing frameworks."}
{"_id": "fad4cf87ca2f3948b7a71b306e321454af7b346b", "title": "Web Credibility: Features Exploration and Credibility Prediction", "text": "The open nature of the World Wide Web makes evaluating webpage credibility challenging for users. In this paper, we aim to automatically assess web credibility by investigating various characteristics of webpages. Specifically, we first identify features from textual content, link structure, webpages design, as well as their social popularity learned from popular social media sites (e.g., Facebook, Twitter). A set of statistical analyses methods are applied to select the most informative features, which are then used to infer webpages credibility by employing supervised learning algorithms. Real dataset-based experiments under two application settings show that we attain an accuracy of 75% for classification, and an improvement of 53% for the mean absolute error (MAE), with respect to the random baseline approach, for regression."}
{"_id": "620c39cbed9844a38636ae2e42ff82abb1a51f09", "title": "Comparative Performance Analysis of Wireless Communication Protocols for Intelligent Sensors and Their Applications", "text": "The systems based on intelligent sensors are currently expanding, due to theirs functions and theirs performances of intelligence: transmitting and receiving data in real-time, computation and processing algorithms, metrology remote, diagnostics, automation and storage measurements...The radio frequency wireless communication with its multitude offers a better solution for data traffic in this kind of systems. The mains objectives of this paper is to present a solution of the problem related to the selection criteria of a better wireless communication technology face up to the constraints imposed by the intended application and the evaluation of its key features. The comparison between the different wireless technologies (WiFi, Wi-Max, UWB, Bluetooth, ZigBee, ZigBeeIP, GSM/GPRS) focuses on their performance which depends on the areas of utilization. Furthermore, it shows the limits of their characteristics. Study findings can be used by the developers/ engineers to deduce the optimal mode to integrate and to operate a system that guarantees quality of communication, minimizing energy consumption, reducing the implementation cost and avoiding time constraints. Keywords\u2014Wireless communications; Performances; Energy; Protocols; Intelligent sensors; Applications"}
{"_id": "29629ff7367ec1b97e1d27816427c439222b93c6", "title": "A Parallel Proximal Algorithm for Anisotropic Total Variation Minimization", "text": "Total variation (TV) is a one of the most popular regularizers for stabilizing the solution of ill-posed inverse problems. This paper proposes a novel proximal-gradient algorithm for minimizing TV regularized least-squares cost functionals. Unlike traditional methods that require nested iterations for computing the proximal step of TV, our algorithm approximates the latter with several simple proximals that have closed form solutions. We theoretically prove that the proposed parallel proximal method achieves the TV solution with arbitrarily high precision at a global rate of converge that is equivalent to the fast proximal-gradient methods. The results in this paper have the potential to enhance the applicability of TV for solving very large-scale imaging inverse problems."}
{"_id": "42f9e29539f63ed000df96da2a2b7d0736698061", "title": "A compact ultra-wideband slot antenna with band notch characteristics", "text": "In this study, a compact wideband slot antenna with band notch performance is designed. An M-shaped slot on the feed line increases the impedance bandwidth of the square slot antenna. A stop-band notch can be achieved by using two L-shaped protruded strip inside the square-ring stub. The designed slot antenna has a compact size of 20\u00d720 mm, showing wideband characteristics in the frequency band of 4.4 to over 10.7 GHz with a band rejection performance in the frequency band of 5.0 to 5.4 GHz."}
{"_id": "2ce6545409d3169c6b8c68f3f38e9f73e58f8c64", "title": "Automatic Fruit Recognition Based on DCNN for Commercial Source Trace System", "text": "Automatically fruit recognition by using machine vision is considered as challenging task due to similarities between various types of fruits and external environmental changes e-g lighting. In this paper, fruit recognition algorithm based on Deep Convolution Neural Network(DCNN) is proposed. Most of the previous techniques have some limitations because they were examined and evaluated under limited dataset, furthermore they have not considered external environmental changes. Another major contribution in this paper is that we established fruit images database having 15 different categories comprising of 44406 images which were collected within a period of 6 months by keeping in view the limitations of existing dataset under different real-world conditions. Images were directly used as input to DCNN for training and recognition without extracting features, besides this DCNN learn optimal features from images through adaptation process. The final decision was totally based on a fusion of all regional classification using probability mechanism. Experimental results exhibit that the proposed approach have efficient capability of automatically recognizing the fruit with a high accuracy of 99% and it can also effectively meet real world application requirements."}
{"_id": "217577bdff47ff84bcec4b9610a04265f76c9515", "title": "Integrating Classification and Association Rule Mining: A Concept Lattice Framework", "text": "Concept lattice is an efficient tool for data analysis. In this paper we show how classification and association rule mining can be unified under concept lattice framework. We present a fast algorithm to extract association and classification rules from concept lattice."}
{"_id": "af24e10093658bd4f70a828372373011f5538cbb", "title": "Scalable Similarity Search With Topology Preserving Hashing", "text": "Hashing-based similarity search techniques is becoming increasingly popular in large data sets. To capture meaningful neighbors, the topology of a data set, which represents the neighborhood relationships between its subregions and the relative proximities between the neighbors of each subregion, e.g., the relative neighborhood ranking of each subregion, should be exploited. However, most existing hashing methods are developed to preserve neighborhood relationships while ignoring the relative neighborhood proximities. Moreover, most hashing methods lack in providing a good result ranking, since there are often lots of results sharing the same Hamming distance to a query. In this paper, we propose a novel hashing method to solve these two issues jointly. The proposed method is referred to as topology preserving hashing (TPH). TPH is distinct from prior works by also preserving the neighborhood ranking. Based on this framework, we present three different TPH methods, including linear unsupervised TPH, semisupervised TPH, and kernelized TPH. Particularly, our unsupervised TPH is capable of mining semantic relationship between unlabeled data without supervised information. Extensive experiments on four large data sets demonstrate the superior performances of the proposed methods over several state-of-the-art unsupervised and semisupervised hashing techniques."}
{"_id": "937c0f0719d9843b521c8c5b5e13737e8fd91bd4", "title": "LNSC: A Security Model for Electric Vehicle and Charging Pile Management Based on Blockchain Ecosystem", "text": "The Internet of Energy (IoE) provides an effective networking technology for distributed green energy, which allows the connection of energy anywhere at any time. As an important part of the IoE, electric vehicles (EVs), and charging pile management are of great significance to the development of the IoE industry. Previous work has mainly focused on network performance optimization for its management, and few studies have considered the security of the management between EVs and charging piles. Therefore, this paper proposes a decentralized security model based on the lightning network and smart contract in the blockchain ecosystem; this proposed model is called the lightning network and smart contract (LNSC). The overall model involves registration, scheduling, authentication, and charging phases. The new proposed security model can be easily integrated with current scheduling mechanisms to enhance the security of trading between EVs and charging piles. Experimental results according to a realistic infrastructure are presented in this paper. These experimental results demonstrate that our scheme can effectively enhance vehicle security. Different performances of LNSC-based scheduling strategies are also presented."}
{"_id": "1c9b97e644ce5dcc417121f0ae6dfb14cce9364c", "title": "Round-Robin Arbiter Design and Generation", "text": "In this paper, we introduce a Round-robin Arbiter Generator (RAG) tool. The RAG tool can generate a design for a Bus Arbiter (BA). The BA is able to handle the exact number of bus masters for both on-chip and off-chip buses. RAG can also generate a distributed and parallel hierarchical Switch Arbiter (SA). The first contribution of this paper is the automated generation of a round-robin token passing BA to reduce time spent on arbiter design. The generated arbiter is fair, fast, and has a low and predictable worst-case wait time. The second contribution of this paper is the design and integration of a distributed fast arbiter, e.g., for a terabit switch, based on 2x2 and 4x4 switch arbiters (SAs). Using a .25\u00b5 TSMC standard cell library from LEDA Systems [10, 14], we show the arbitration time of a 256x256 SA for a terabit switch and demonstrate that the SA generated by RAG meets the time constraint to achieve approximately six terabits of throughput in a typical network switch design. Furthermore, our generated SA performs better than the Ping-Pong Arbiter and Programmable Priority Encoder by a factor of 1.9X and 2.4X, respectively."}
{"_id": "0401bd2dd5e613b67e7277a7f519589be7360fd4", "title": "Extending the Coverage of DBpedia Properties using Distant Supervision over Wikipedia", "text": "DBpedia is a Semantic Web project aiming to extract structured data from Wikipedia articles. Due to the increasing number of resources linked to it, DBpedia plays a central role in the Linked Open Data community. Currently, the information contained in DBpedia is mainly collected from Wikipedia infoboxes, a set of subject-attribute-value triples that represents a summary of the Wikipedia page. These infoboxes are manually compiled by the Wikipedia contributors, and in more than 50% of the Wikipedia articles the infobox is missing. In this article, we use the distant supervision paradigm to extract the missing information directly from the Wikipedia article, using a Relation Extraction tool trained on the information already present in DBpedia. We evaluate our system on a data set consisting of seven DBpedia properties, demonstrating the suitability of the approach in extending the DBpedia coverage."}
{"_id": "44b5c555f9d5af4de380ac9d86c20c77521d6eee", "title": "Online Tourism Communities on the Path to Web 2.0: An Evaluation", "text": "In recent years a technological and sociological paradigm shift has taken place in the Internet that is often referred to as Web 2.0. Companies and individuals have started to adapt existing Web sites to the new standards and principles and created new types of Web services and communities. The tourism domain is no exception to this trend new tourism communities emerged and long-established ones integrated new features to keep up with this trend. In this paper we are evaluating eight tourism communities with respect to Web 2.0. Each community is evaluated based on a criteria catalogue that draws ideas from online community studies. The findings are discussed in the context of the tourist life cycle that is structured in a pre-trip, on-site and after-trip phase. The value for the traveller is highlighted for each phase and potential problems are discussed."}
{"_id": "4eff2292620c21abe259412e7b402fe50c0a100e", "title": "Wideband Circularly Polarized Patch Antennas on Reactive Impedance Substrates", "text": "A reduced-size wideband single-feed circularly polarized patch antenna is introduced for telemetry applications in S-band around 2300 MHz. The proposed structure consists of a slot-loaded patch antenna printed over an optimized metamaterial-inspired reactive impedance substrate (RIS). We demonstrate, step by step, the main role of each antenna element by comparing numerically and experimentally the performance of various antenna configurations: antenna over a single- or dual-layer substrate, standard patch or slot-loaded patch, antenna with or without RIS. The final optimized structure exhibits an axial-ratio bandwidth of about 15% and an impedance bandwidth better than 11%, which is much wider than the conventional printed antenna on the same materials."}
{"_id": "c601e52e0ebdfc1efddb30ec9710972e18b5652b", "title": "Magnetic Field Analysis of Halbach Permanent Magnetic Synchronous Machine", "text": "A numerical study of PMSM based on finite element method for estimating the airgap magnetic field distribution considering stator slots effect is presented both for traditional radial magnet structure and Halbach magnet array topology. Magnetic field, flux, torque and the induction in the air gap are presented. The two machines structures performances were compared which put in view the modifications brought by these structures.. According to numerical calculations, and harmonic analysis, the PMSM with Halbach magnet array provides a higher torque and better performance than the classical radial PMSM."}
{"_id": "4c32da988c7caef974a3fec6a2c96aee169df84f", "title": "Rice quality analysis using image processing techniques", "text": "In agricultural industries grain quality evaluation is very big challenge. Quality control is very important in food industry because after harvesting, based on quality parameters food products are classified and graded into different grades. Grain quality evaluation is done manually but it is relative, time consuming, may be varying results and costly. To overcome these limitations and shortcoming image processing techniques is the alternative solution can be used for grain quality analysis. Rice quality is nothing but the combination of physical and chemical characteristics. Grain size and shape, chalkiness, whiteness, milling degree, bulk density and moisture content are some physical characteristics while amylose content, gelatinization temperature and gel consistency are chemical characteristics of rice. The paper presents a solution of grading and evaluation of rice grains on the basis of grain size and shape using image processing techniques. Specifically edge detection algorithm is used to find out the region of boundaries of each grain. In this technique we find the endpoints of each grain and after using caliper we can measure the length and breadth of rice. This method requires minimum time and it is low in cost."}
{"_id": "b10d69c64adb973348c3259b1f3560c432f3234c", "title": "The British Industrial Revolution in Global Perspective", "text": "THE INDUSTRIAL REVOLUTION was a turning point in the history of the world and inaugurated two centuries of economic growth that have resulted in the high incomes enjoyed in developed countries today.1 Technological progress is the motor of economic growth, and the Industrial Revolution is defi ned by famous technological breakthroughs: machinery to spin and weave cotton, the use of coal to smelt and refi ne iron, and the steam engine.2 In the words of the schoolboy made famous by T. S. Ashton: \u2018About 1760 a wave of gadgets swept over England\u2019 (Ashton 1955: 42). The questions for today\u2019s lecture are: How can we explain the technological breakthroughs of the Industrial Revolution? And, why did the Industrial Revolution happen in Britain, rather than France, the Netherlands, or China? These questions will be answered by developing these themes: in comparison with other countries, Britain had an unusual structure of wages and prices in the eighteenth century, and this structure of wages and prices was a major factor in explaining why the revolution happened in Britain. In addition Britain had an effective \u2018innovation system\u2019 based on a high"}
{"_id": "298500897243b17fa2ebe7bde0a1b8ebc00ea07f", "title": "Stanley: The robot that won the DARPA Grand Challenge", "text": "Sebastian Thrun1, Mike Montemerlo1, Hendrik Dahlkamp1, David Stavens1, Andrei Aron1, James Diebel1, Philip Fong1, John Gale1, Morgan Halpenny1, Gabriel Hoffmann1, Kenny Lau1, Celia Oakley1, Mark Palatucci1, Vaughan Pratt1, Pascal Stang1, Sven Strohband2, Cedric Dupont2, Lars-Erik Jendrossek2, Christian Koelen2, Charles Markey2, Carlo Rummel2, Joe van Niekerk2, Eric Jensen2, Philippe Alessandrini2, Gary Bradski3, Bob Davies3, Scott Ettinger3, Adrian Kaehler3, Ara Nefian3, and Pamela Mahoney4"}
{"_id": "7e98801ab0b438115ea4d86fdd4babe7ae178561", "title": "Probabilistic Roadmaps for Path Planning in High-Dimensional Configuration Spaces", "text": "Real-time robot motion planning using rasterizing computer graphics hardware. In Proc. OY82] C. O'D unlaing and C.K. Yap. A retraction method for planning the motion of a disc. A local approach for path planning of manipulators with a high number of degrees of freedom. a path generation algorithm into oo-line programming of airbus panels. Motion planning for many degrees of freedom-random reeections at c-space obstacles. REFERENCES 31 iterative collision checker. We observed no dramatic slowdown of the planner. A challenging research goal would now be to extend the method to dynamic scenes. One rst question is: How should a roadmap computed for a given workspace be updated if a few obstacles are removed or added? Answering this question would be useful to apply our method to scenes subject to small incremental changes. Such changes occur in many manufacturing (e.g., assembly) cells; while most of the geometry of such a cell is permanent and stationary, a few objects (e.g., xtures) are added or removed between any two consecutive manufacturing operations. Similar incremental changes also occur in automatic graphic animation. A second question is: How should the learning and query phase be modiied if some obstacles are moving along known trajectories? An answer to this question might consist of applying our roadmap method in the conngurationtime space of the robot Lat91]. The roadmap would then have to be built as a directed graph, since local paths between any two nodes must monotonically progress along the time axis, with possibly additional constraints on their slope and curvature to reeect bounds on the robot's velocity and acceleration. A motion planner with performance proportional to task diiculty. 30 In KL94a, KL94b, O S94] prior versions of the method have been applied to a great variety of holonomic robots including planar and spatial articulated robots with revolute, prismatic, and/or spherical joints, xed or free base, and single or multiple kinematic chains. In Sve93, SO94] a variation of the method (essentially one with a diierent general local planner) was also run successfully on examples involving nonholonomic car-like robots. Experimental results show that our method can eeciently solve problems which are beyond the capabilities of other existing methods. For example, for planar articulated robots with many dofs, the customized implementation of Section 5 is much more consistent than the Randomized Path Planner (RPP) of BL91]. Indeed, the latter can be very fast on some diicult problems, but it \u2026"}
{"_id": "1fb6e0669e1e606112cec45c6d60b43655c4eb71", "title": "Mushroom immunomodulators: unique molecules with unlimited applications.", "text": "For centuries, mushrooms have been used as food and medicine in different cultures. More recently, many bioactive compounds have been isolated from different types of mushrooms. Among these, immunomodulators have gained much interest based on the increasing growth of the immunotherapy sector. Mushroom immunomodulators are classified under four categories based on their chemical nature as: lectins, terpenoids, proteins, and polysaccharides. These compounds are produced naturally in mushrooms cultivated in greenhouses. For effective industrial production, cultivation is carried out in submerged culture to increase the bioactive compound yield, decrease the production time, and reduce the cost of downstream processing. This review provides a comprehensive overview on mushroom immunomodulators in terms of chemistry, industrial production, and applications in medical and nonmedical sectors."}
{"_id": "4c33e154b49a2bc0dcef0d835febd63a8e62fb2c", "title": "One Road to Turnover: An Examination of Work Exhaustion in Technology Professionals", "text": "The concept of work exhaustion (or job burnout) from the management and psychology research literature is examined in the context of technology professionals. Data were collected from 270 IT professionals and managers in various industries across the United States. Through structural equation modeling, work exhaustion was shown to partially mediate the effects of workplace factors on turnover intention. In addition, the results of the study revealed that: (1) technology professionals experiencing higher levels of exhaustion reported higher intentions to leave the job and, (2) of the variables expected to influence exhaustion (work overload, role ambiguity and conflict, lack of autonomy and lack of rewards), work overload was the strongest contributor to exhaustion in the technology workers. Moreover, exhausted IT professionals identified insufficient staff and resources as a primary cause of work overload and exhaustion. Implications for practice and future research are discussed."}
{"_id": "2671bf82168234a25fce7950e0527eb03b201e0c", "title": "Reranking and Self-Training for Parser Adaptation", "text": "Statistical parsers trained and tested on the Penn Wall Street Journal (WSJ) treebank have shown vast improvements over the last 10 years. Much of this improvement, however, is based upon an ever-increasing number of features to be trained on (typically) the WSJ treebank data. This has led to concern that such parsers may be too finely tuned to this corpus at the expense of portability to other genres. Such worries have merit. The standard \u201cCharniak parser\u201d checks in at a labeled precisionrecall f -measure of 89.7% on the Penn WSJ test set, but only 82.9% on the test set from the Brown treebank corpus. This paper should allay these fears. In particular, we show that the reranking parser described in Charniak and Johnson (2005) improves performance of the parser on Brown to 85.2%. Furthermore, use of the self-training techniques described in (McClosky et al., 2006) raise this to 87.8% (an error reduction of 28%) again without any use of labeled Brown data. This is remarkable since training the parser and reranker on labeled Brown data achieves only 88.4%."}
{"_id": "916c33212aa906192a948aec0e8ce94ec90087c9", "title": "Social network, social trust and shared goals in organizational knowledge sharing", "text": "The aim of our study was to further develop an understanding of social capital in organizationalknowledge-sharing. We first developed a measurement tool and then a theoretical framework in which three social capital factors (social network, social trust, and shared goals) were combined with the theory of reasoned action; their relationships were then examined using confirmatory factoring analysis. We then surveyed of 190 managers from Hong Kong firms, we confirm that a social network and shared goals significantly contributed to a person\u2019s volition to share knowledge, and directly contributed to the perceived social pressure of the organization. The social trust has however showed no direct effect on the attitude and subjective norm of sharing knowledge. 2008 Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +852 3411 7582; fax: +852 3411 5585. E-mail addresses: vwschow@hkbu.edu.hk (W.S. Chow), sherry.ls.chan@gmail.com (L.S. Chan)."}
{"_id": "d4e974d68c36de92609fcffaa3ee11bbcbc9eb57", "title": "Leader Efficacy Linking Leadership to Student Learning : The Contributions of", "text": ""}
{"_id": "1ca26f0582c03d080777d59e2483cbc7b32aff40", "title": "Learning A Task-Specific Deep Architecture For Clustering", "text": "While sparse coding-based clustering methods have shown to be successful, their bottlenecks in both efficiency and scalability limit the practical usage. In recent years, deep learning has been proved to be a highly effective, efficient and scalable feature learning tool. In this paper, we propose to emulate the sparse coding-based clustering pipeline in the context of deep learning, leading to a carefully crafted deep model benefiting from both. A feed-forward network structure, named TAGnet, is constructed based on a graphregularized sparse coding algorithm. It is then trained with taskspecific loss functions from end to end. We discover that connecting deep learning to sparse coding benefits not only the model performance, but also its initialization and interpretation. Moreover, by introducing auxiliary clustering tasks to the intermediate feature hierarchy, we formulate DTAGnet and obtain a further performance boost. Extensive experiments demonstrate that the proposed model gains remarkable margins over several state-of-the-art methods."}
{"_id": "00cae399de72b2bea32f72223820553c22fd52ef", "title": "Vision based fire detection", "text": "Vision based fire detection is potentially a useful technique. With the increase in the number of surveillance cameras being installed, a vision based fire detection capability can be incorporated in existing surveillance systems at relatively low additional cost. Vision based fire detection offers advantages over the traditional methods. It will thus complement the existing devices. In this paper, we present spectral, spatial and temporal models of fire regions in visual image sequences. The spectral model is represented in terms of the color probability density of fire pixels. The spatial model captures the spatial structure within a fire region. The shape of a fire region is represented in terms of the spatial frequency content of the region contour using its Fourier coefficients. The temporal changes in these coefficients are used as the temporal signatures of the fire region. Specifically, an auto regressive model of the Fourier coefficient series is used. Experiments with a large number of scenes show that our method is capable of detecting fire reliably."}
{"_id": "e19b5b74311da1d36fd79187e7e5444c96253b05", "title": "The role of violent media preference in cumulative developmental risk for violence and general aggression.", "text": "The impact of exposure to violence in the media on the long-term development and short-term expression of aggressive behavior has been well documented. However, gaps in this literature remain, and in particular the role of violent media exposure in shaping violent and other serious antisocial behavior has not been investigated. Further, studies of violent media effects typically have not sampled from populations with confirmed histories of violent and/or nonviolent antisocial behavior. In this study, we analyzed data on 820 youth, including 390 juvenile delinquents and 430 high school students, to examine the relation of violent media use to involvement in violence and general aggression. Using criterion scores developed through cross-informant modeling of data from self, parent/guardian, and teacher/staff reports, we observed that childhood and adolescent violent media preferences contributed significantly to the prediction of violence and general aggression from cumulative risk totals. Findings represent a new and important direction for research on the role of violent media use in the broader matrix of risk factors for youth violence."}
{"_id": "953335aaea41ee7d112501f21f2b07d80170ab53", "title": "Towards Building a SentiWordNet for Tamil", "text": "Sentiment analysis is a discipline of Natural Language Processing which deals with analysing the subjectivity of the data. It is an important task with both commercial and academic functionality. Languages like English have several resources which assist in the task of sentiment analysis. SentiWordNet for English is one such important lexical resource that contains subjective polarity for each lexical item. With growing data in native vernacular, there is a need for language-specific SentiWordNet(s). In this paper, we discuss a generic approach followed for the development of a Tamil SentiWordNet using currently available resources in English. For Tamil SentiWordNet, a substantial agreement Fleiss Kappa score of 0.663 was obtained after verification from Tamil annotators. Such a resource would serve as a baseline for future improvements in the task of sentiment analysis specific to"}
{"_id": "170e2f4778f975768aaa5e888349fc0cb23d576f", "title": "Point Cloud GAN", "text": "Generative Adversarial Networks (GAN) can achieve promising performance on learning complex data distributions on different types of data. In this paper, we first show a straightforward extension of existing GAN algorithm is not applicable to point clouds, because the constraint required for discriminators is undefined for set data. We propose a two fold modification to GAN algorithm for learning to generate point clouds (PC-GAN). First, we combine ideas from hierarchical Bayesian modeling and implicit generative models by learning a hierarchical and interpretable sampling process. A key component of our method is that we train a posterior inference network for the hidden variables. Second, instead of using only state-of-the-art Wasserstein GAN objective, we propose a sandwiching objective, which results in a tighter Wasserstein distance estimate than the commonly used dual form. Thereby, PC-GAN defines a generic framework that can incorporate many existing GAN algorithms. We validate our claims on ModelNet40 benchmark dataset. Using the distance between generated point clouds and true meshes as metric, we find that PC-GAN trained by the sandwiching objective achieves better results on test data than the existing methods. Moreover, as a byproduct, PCGAN learns versatile latent representations of point clouds, which can achieve competitive performance with other unsupervised learning algorithms on object recognition task. Lastly, we also provide studies on generating unseen classes of objects and transforming image to point cloud, which demonstrates the compelling generalization capability and potentials of PC-GAN."}
{"_id": "f2795d7d07f4e34decbdd43059b0b333a5aff6c0", "title": "Understanding the Effects of Mobile Gamification on Learning Performance", "text": "With the development of mobile technology, mobile learning, which is not bound by time and space, has become one of the most essential learning methods for modern people. However, previous studies have suggested that improvements are still needed in mobile learning performance. One of the means to achieve this goal is situated learning, which enhances learners\u2019 autonomous motivation and make them more enthusiastic in learning. Nonetheless, few studies have attempted to increase learning performance through situated learning. This study attempts to explore the development of a productive learning atmosphere in the context of mobile situated learning. An experiment is conducted with university-level students having homogenous background and coursework by applying heterogeneous pedagogies, including textual pedagogy, video pedagogy, collaborative pedagogy, and gamification pedagogy. Moreover, the uses and gratification theory and the cognitive load theory were adopted to identify the cognitive factors that influence the learning performance in situated learning. It is hoped that education service providers can benefit from the insights discovered from this study and implement more effective learning strategies in mobile education."}
{"_id": "67352c19438591e2521f16260ecdd598b88ed752", "title": "Generating regular expression signatures for network traffic classification in trusted network management", "text": "Network traffic classification is a critical foundation for trusted network management and security systems. Matching application signatures in traffic payload is widely considered to be the most reliable classifying method. However, deriving accurate and efficient signatures for various applications is not a trivial task, for which current practice is mostly manual thus error-prone and of low efficiency. In this paper, we tackle the problem of automatic signature generation. In particular, we focus on generating regular expression signatures with a certain subset of standard syntax rules, which are of sufficient expressive power and compatible with most practical systems. We propose a novel approach that takes as input a labeled training data set and produces a set of signatures for matching the application classes presented in the data. The approach involves four procedures: pre-processing to extract application session payload, tokenization to find common substrings and incorporate position constraints, multiple sequence alignment to find common subsequences, and signature construction to transform the results into regular expressions. A real life full payload traffic trace is used to evaluate the proposed system, and signatures for a range of applications are automatically derived. The results indicate that the signatures are of high quality, and exhibit low false negatives and false positives. & 2011 Elsevier Ltd. All rights reserved."}
{"_id": "6799971d202fa5d10cc7f1cae7daf07c247ab641", "title": "Toward Automatic Sound Source Recognition : Identifying Musical Instruments", "text": "average pitch average spectral centroid avg. normalized spect. cent. maximum onset slope onset duration vibrato frequency vibrato strength tremolo frequency tremolo strength slope of onset harmonic skew variance of ons. harm. skew amplitude decay odd/even harmonic ratio Features violin (bowed, muted, pizzicato) viola (bowed, muted, pizzicato) cello (bowed, muted, pizzicato) double bass (bowed, muted, pizz.) trumpet (C, Bach, C with Harmon mute) horn (normal, muted) tenor trombone (normal, muted) tuba flute piccolo oboe english horn bassoon Bb clarinet Instruments"}
{"_id": "e51c0ba606e40c99b2a87f32728ba1e18c183540", "title": "Inversion of seismic reflection data in the acoustic approximation", "text": "The nonlinear inverse problem for seismic reflection data is solved in the acoustic approximation. The method is based on the generalized least-squares criterion, and it can handle errors in the data set and a priori information on the model. Multiply reflected energy is naturally taken into account, as well as refracted energy or surface waves. The inverse problem can be solved using an iterative algorithm which gives, at each iteration, updated values of bulk modulus, density, and time source function. Each step of the iterative algorithm essentially consists of a forward propagation of the actual sources in the current model and a forward propagation (backward in time) of the data residuals. The correlation at each point of the space of the two fields thus obtained yields the corrections of the bulk modulus and density models. This shows, in particular, that the general solution of the inverse problem can be attained by methods strongly related to the methods of migration of unstacked data, and commercially competitive with them."}
{"_id": "b095ddd10f9a4b8109799e3c229642e611417a6f", "title": "Neural Machine Translation Incorporating Named Entity", "text": "This study proposes a new neural machine translation (NMT) model based on the encoderdecoder model that incorporates named entity (NE) tags of source-language sentences. Conventional NMT models have two problems enumerated as follows: (i) they tend to have difficulty in translating words with multiple meanings because of the high ambiguity, and (ii) these models\u2019 ability to translate compound words seems challenging because the encoder receives a word, a part of the compound word, at each time step. To alleviate these problems, the encoder of the proposed model encodes the input word on the basis of its NE tag at each time step, which could reduce the ambiguity of the input word. Furthermore, the encoder introduces a chunk-level LSTM layer over a word-level LSTM layer and hierarchically encodes a source-language sentence to capture a compound NE as a chunk on the basis of the NE tags. We evaluate the proposed model on an English-to-Japanese translation task with the ASPEC, and English-to-Bulgarian and English-to-Romanian translation tasks with the Europarl corpus. The evaluation results show that the proposed model achieves up to 3.11 point improvement in BLEU."}
{"_id": "95badade580eeff6a2f512268a017b9b66c6a212", "title": "Software Defect Prediction via Convolutional Neural Network", "text": "To improve software reliability, software defect prediction is utilized to assist developers in finding potential bugs and allocating their testing efforts. Traditional defect prediction studies mainly focus on designing hand-crafted features, which are input into machine learning classifiers to identify defective code. However, these hand-crafted features often fail to capture the semantic and structural information of programs. Such information is important in modeling program functionality and can lead to more accurate defect prediction.In this paper, we propose a framework called Defect Prediction via Convolutional Neural Network (DP-CNN), which leverages deep learning for effective feature generation. Specifically, based on the programs' Abstract Syntax Trees (ASTs), we first extract token vectors, which are then encoded as numerical vectors via mapping and word embedding. We feed the numerical vectors into Convolutional Neural Network to automatically learn semantic and structural features of programs. After that, we combine the learned features with traditional hand-crafted features, for accurate software defect prediction. We evaluate our method on seven open source projects in terms of F-measure in defect prediction. The experimental results show that in average, DP-CNN improves the state-of-the-art method by 12%."}
{"_id": "2ab7edaf2113f8bb1e8cd6431dafbc1c708a9a18", "title": "Towards continuous control of flippers for a multi-terrain robot using deep reinforcement learning", "text": "In this paper we focus on developing a control algorithm for multi-terrain tracked robots with flippers using a reinforcement learning (RL) approach. The work is based on the deep deterministic policy gradient (DDPG) algorithm, proven to be very successful in simple simulation environments. The algorithm works in an end-to-end fashion in order to control the continuous position of the flippers. This end-to-end approach makes it easy to apply the controller to a wide array of circumstances, but the huge flexibility comes to the cost of an increased difficulty of solution. The complexity of the task is enlarged even more by the fact that real multi-terrain robots move in partially observable environments. Notwithstanding these complications, being able to smoothly control a multi-terrain robot can produce huge benefits in impaired people daily lives or in search and rescue situations. Manuscript received 22 September 2017 ar X iv :1 70 9. 08 43 0v 1 [ cs .R O ] 2 5 Se p 20 17"}
{"_id": "9f401bddfc5c1f859e6fcc347189553b788dd299", "title": "Computational approaches to fMRI analysis", "text": "Analysis methods in cognitive neuroscience have not always matched the richness of fMRI data. Early methods focused on estimating neural activity within individual voxels or regions, averaged over trials or blocks and modeled separately in each participant. This approach mostly neglected the distributed nature of neural representations over voxels, the continuous dynamics of neural activity during tasks, the statistical benefits of performing joint inference over multiple participants and the value of using predictive models to constrain analysis. Several recent exploratory and theory-driven methods have begun to pursue these opportunities. These methods highlight the importance of computational techniques in fMRI analysis, especially machine learning, algorithmic optimization and parallel computing. Adoption of these techniques is enabling a new generation of experiments and analyses that could transform our understanding of some of the most complex\u2014and distinctly human\u2014signals in the brain: acts of cognition such as thoughts, intentions and memories."}
{"_id": "e632c5f594ca5fb71408ed15d08075b836a887dd", "title": "iMAP-CampUS: Developing an Intelligent Mobile Augmented Reality Program on Campus as a Ubiquitous System", "text": "Augmented Reality (AR) offers a combination of physical and virtual objects, drawing on the strengths of each. Therefore, it is different from virtual reality since it permits users to sight the real world enhanced with virtual objects. Thus, AR technology holds potential to support users with valuable information about the surrounding area. In this paper, we demonstrate the development of an AR app that provides the students with rich information about the buildings nearby. The application, which we name \"iMAP-CampUS\", is designed to assist students at Macquarie University in locating places of interest close to them by moving the camera of the device in all probable directions to overlay information of places around them. The iMAP-CampUS has been developed for both iOS and Android platforms and run on smartphones and tablets with different screen sizes."}
{"_id": "72a2680a9043977f2f5ed776170b75beeff5b815", "title": "Deep Learning Based Forecasting of Critical Infrastructure Data", "text": "Intelligent monitoring and control of critical infrastructure such as electric power grids, public water utilities and transportation systems produces massive volumes of time series data from heterogeneous sensor networks. Time Series Forecasting (TSF) is essential for system safety and security, and also for improving the efficiency and quality of service delivery. Being highly dependent on various external factors, the observed system behavior is usually stochastic, which makes the next value prediction a tricky and challenging task that usually needs customized methods. In this paper we propose a novel deep learning based framework for time series analysis and prediction by ensembling parametric and nonparametric methods. Our approach takes advantage of extracting features at different time scales, which improves accuracy without compromising reliability in comparison with the state-of-the-art methods. Our experimental evaluation using real-world SCADA data from a municipal water management system shows that our proposed method outperforms the baseline methods evaluated here."}
{"_id": "fb9d0839e3a0ef0e567878b016be83616ee0aa90", "title": "A fast restoration method for atmospheric turbulence degraded images using non-rigid image registration", "text": "In this paper, a fast image restoration method is proposed to restore the true image from an atmospheric turbulence degraded video. A non-rigid image registration algorithm is employed to register all the frames of the video to a reference frame and determine the shift maps. The First Register Then Average And Subtract-variant (FRTAASv) method is applied to correct the geometric distortion of the reference frame. A performance comparison is presented between the proposed restoration method and the earlier Minimum Sum of Squared Differences (MSSD) image registration based FRTAASv method, in terms of processing time and accuracy. Simulation results show that the proposed method requires shorter processing time to achieve the same geometric accuracy."}
{"_id": "2811534b332206223188687e214ebbde0f2f294d", "title": "A framework for mesencephalic dopamine systems based on predictive Hebbian learning.", "text": "We develop a theoretical framework that shows how mesencephalic dopamine systems could distribute to their targets a signal that represents information about future expectations. In particular, we show how activity in the cerebral cortex can make predictions about future receipt of reward and how fluctuations in the activity levels of neurons in diffuse dopamine systems above and below baseline levels would represent errors in these predictions that are delivered to cortical and subcortical targets. We present a model for how such errors could be constructed in a real brain that is consistent with physiological results for a subset of dopaminergic neurons located in the ventral tegmental area and surrounding dopaminergic neurons. The theory also makes testable predictions about human choice behavior on a simple decision-making task. Furthermore, we show that, through a simple influence on synaptic plasticity, fluctuations in dopamine release can act to change the predictions in an appropriate manner."}
{"_id": "5af94797da17f5db17d7a5d7e85a725956d61249", "title": "Predictive reward signal of dopamine neurons.", "text": "The effects of lesions, receptor blocking, electrical self-stimulation, and drugs of abuse suggest that midbrain dopamine systems are involved in processing reward information and learning approach behavior. Most dopamine neurons show phasic activations after primary liquid and food rewards and conditioned, reward-predicting visual and auditory stimuli. They show biphasic, activation-depression responses after stimuli that resemble reward-predicting stimuli or are novel or particularly salient. However, only few phasic activations follow aversive stimuli. Thus dopamine neurons label environmental stimuli with appetitive value, predict and detect rewards and signal alerting and motivating events. By failing to discriminate between different rewards, dopamine neurons appear to emit an alerting message about the surprising presence or absence of rewards. All responses to rewards and reward-predicting stimuli depend on event predictability. Dopamine neurons are activated by rewarding events that are better than predicted, remain uninfluenced by events that are as good as predicted, and are depressed by events that are worse than predicted. By signaling rewards according to a prediction error, dopamine responses have the formal characteristics of a teaching signal postulated by reinforcement learning theories. Dopamine responses transfer during learning from primary rewards to reward-predicting stimuli. This may contribute to neuronal mechanisms underlying the retrograde action of rewards, one of the main puzzles in reinforcement learning. The impulse response releases a short pulse of dopamine onto many dendrites, thus broadcasting a rather global reinforcement signal to postsynaptic neurons. This signal may improve approach behavior by providing advance reward information before the behavior occurs, and may contribute to learning by modifying synaptic transmission. The dopamine reward signal is supplemented by activity in neurons in striatum, frontal cortex, and amygdala, which process specific reward information but do not emit a global reward prediction error signal. A cooperation between the different reward signals may assure the use of specific rewards for selectively reinforcing behaviors. Among the other projection systems, noradrenaline neurons predominantly serve attentional mechanisms and nucleus basalis neurons code rewards heterogeneously. Cerebellar climbing fibers signal errors in motor performance or errors in the prediction of aversive events to cerebellar Purkinje cells. Most deficits following dopamine-depleting lesions are not easily explained by a defective reward signal but may reflect the absence of a general enabling function of tonic levels of extracellular dopamine. Thus dopamine systems may have two functions, the phasic transmission of reward information and the tonic enabling of postsynaptic neurons."}
{"_id": "89158f7c4534024a25ef201e6c32be2f4ee587d9", "title": "FMRI Visualization of Brain Activity during a Monetary Incentive Delay Task", "text": "Comparative studies have implicated striatal and mesial forebrain circuitry in the generation of autonomic, endocrine, and behavioral responses for incentives. Using blood oxygen level-dependent functional magnetic resonance imaging, we sought to visualize functional activation of these regions in 12 normal volunteers as they anticipated and responded for monetary incentives. Both individual and group analyses of time-series data revealed significant activation of striatal and mesial forebrain structures (including insula, caudate, putamen, and mesial prefrontal cortex) during trials involving both monetary rewards and punishments. In addition to these areas, during trials involving punishment, group analysis revealed activation foci in the anterior cingulate and thalamus. These results corroborate comparative studies which implicate striatal and mesial forebrain circuitry in the elaboration of incentive-driven behavior. This report also introduces a new paradigm for probing the functional integrity of this circuitry in humans."}
{"_id": "5b3e0d2b5dd0ff205f5b29c0e934e81d07e9866f", "title": "Improved Single Image Dehazing Using Dark Channel Prior and Multi-scale Retinex", "text": "In this paper, we propose an improved image dehazing algorithm using dark channel prior and Multi-Scale Retinex. Main improvement lies in automatic and fast acquisition of transmission map of the scene. We implement the Multi-scale Retinex algorithm on the luminance component in YCbCr space, obtain a pseudo transmission map whose function is similar to the transmission map in original approach. Combining with the haze image model and the dark channel prior, we can recover a high quality haze-free image. Compared with the original method, our algorithm has two main advantages: (i) no user interaction is needed, and (ii) restoring the image much faster while maintaining comparable dehazing performance."}
{"_id": "c3681656b53cea3cd9e9854a1be082c2d3cc4d50", "title": "Formal Design and Verification of Self-Adaptive Systems with Decentralized Control", "text": "Feedback control loops that monitor and adapt managed parts of a software system are considered crucial for realizing self-adaptation in software systems. The MAPE-K (Monitor-Analyze-Plan-Execute over a shared Knowledge) autonomic control loop is the most influential reference control model for self-adaptive systems. The design of complex distributed self-adaptive systems having decentralized adaptation control by multiple interacting MAPE components is among the major challenges. In particular, formal methods for designing and assuring the functional correctness of the decentralized adaptation logic are highly demanded.\n This article presents a framework for formal modeling and analyzing self-adaptive systems. We contribute with a formalism, called self-adaptive Abstract State Machines, that exploits the concept of multiagent Abstract State Machines to specify distributed and decentralized adaptation control in terms of MAPE-K control loops, also possible instances of MAPE patterns. We support validation and verification techniques for discovering unexpected interfering MAPE-K loops, and for assuring correctness of MAPE components interaction when performing adaptation."}
{"_id": "d538f891652f0128a1017b5e13472d57f075a70c", "title": "Online Media Use of False News to Frame the 2016 Trump Presidential Campaign", "text": "The 2016 U.S. presidential election campaigns witnessed an unprecedented viral false news -- a type of misinformation referred to as \"factitious information blend\" that is motivated to discredit political rivals. Despite the different speculations of factors that might have influenced Donald Trump's surprised victory, empirical and theoretical research on the potential impacts of false news propagated by online news media during election campaigns on influencing voters' attitudes and public opinion is seriously lacking. By drawing on the literature on framing political-effects research and by developing our computational text analytics programs, we addressed questions regarding how online news media used false news to negatively frame the Trump presidential campaign. Our text analytics results indicate that although the negative frames against Trump far outnumbered those against Hillary Clinton, weak frames of unverifiable misinformation might have failed to influence the mass audience, leaving them to the power of Trump's direct political communications via Twitter."}
{"_id": "2e7883aa71c1030032a5e414a66e72c10851ed82", "title": "Seraph: an efficient, low-cost system for concurrent graph processing", "text": "Graph processing systems have been widely used in enterprises like online social networks to process their daily jobs. With the fast growing of social applications, they have to efficiently handle massive concurrent jobs. However, due to the inherent design for single job, existing systems incur great inefficiency in memory use and fault tolerance. Motivated by this, in this paper we introduce Seraph, a graph processing system that enables efficient job-level parallelism. Seraph is designed based on a decoupled data model, which allows multiple concurrent jobs to share graph structure data in memory. Seraph adopts a copy-on-write semantic to isolate the graph mutation of concurrent jobs, and a lazy snapshot protocol to generate consistent graph snapshots for jobs submitted at different time. Moreover, Seraph adopts an incremental checkpoint/regeneration model which can tremendously reduce the overhead of checkpointing. We have implemented Seraph, and the evaluation results show that Seraph significantly outperforms popular systems (such as Giraph and Spark) in both memory usage and job completion time, when executing concurrent graph jobs."}
{"_id": "08d2a558ea2deb117dd8066e864612bf2899905b", "title": "Person Re-identification with Deep Similarity-Guided Graph Neural Network", "text": "The person re-identification task requires to robustly estimate visual similarities between person images. However, existing person re-identification models mostly estimate the similarities of different image pairs of probe and gallery images independently while ignores the relationship information between different probe-gallery pairs. As a result, the similarity estimation of some hard samples might not be accurate. In this paper, we propose a novel deep learning framework, named Similarity-Guided Graph Neural Network (SGGNN) to overcome such limitations. Given a probe image and several gallery images, SGGNN creates a graph to represent the pairwise relationships between probegallery pairs (nodes) and utilizes such relationships to update the probegallery relation features in an end-to-end manner. Accurate similarity estimation can be achieved by using such updated probe-gallery relation features for prediction. The input features for nodes on the graph are the relation features of different probe-gallery image pairs. The probe-gallery relation feature updating is then performed by the messages passing in SGGNN, which takes other nodes\u2019 information into account for similarity estimation. Different from conventional GNN approaches, SGGNN learns the edge weights with rich labels of gallery instance pairs directly, which provides relation fusion more precise information. The effectiveness of our proposed method is validated on three public person re-identification datasets."}
{"_id": "b87641b544a74f71f848bc50e139f96687555ab5", "title": "Of, for, and by the people: the legal lacuna of synthetic persons", "text": "Conferring legal personhood on purely synthetic entities is a very real legal possibility, one under consideration presently by the European Union. We show here that such legislative action would be morally unnecessary and legally troublesome. While AI legal personhood may have some emotional or economic appeal, so do many superficially desirable hazards against which the law protects us. We review the utility and history of legal fictions of personhood, discussing salient precedents where such fictions resulted in abuse or incoherence. We conclude that difficulties in holding \u201celectronic persons\u201d accountable when they violate the rights of others outweigh the highly precarious moral interests that AI legal personhood might protect."}
{"_id": "b2031a633c90e1114664456f4537a4daba94fb22", "title": "Grasping analysis for a 3-Finger Adaptive Robot Gripper", "text": "A 3-Finger Adaptive Robot Gripper is an advanced robotic research that provides a robotic hand-like capabilities due to its flexibility and versatility. However, the grasping performance has to be analyzed and monitored based on the motor encoder, motor current, and force feedback so that the finger position and grasping force can be effectively controlled. This paper provides an open-loop grasping analysis for a 3-Finger Adaptive Robot Gripper. A series of grasping tests has been conducted to demonstrate the robot capabilities and functionalities. Different stiffness levels of the grasped objects have been chosen to demonstrate the grasping ability. In the experiment, a Modbus RTU protocol and Matlab/Simulink are used as communication and control platform. A specially modified interlink FSR sensor is proposed where a special plastic cover has been developed to enhance the sensor sensitivity. The Arduino IO Package is employed to interface the sensor and Matlab/Simulink. The results show that the significant relationships between finger position, motor current, and force sensor are found and the results can be used for a proper grasping performance."}
{"_id": "7b3e0b0f776c995375a8b47db094a9a5a601374f", "title": "Posterior short-segment fixation with or without fusion for thoracolumbar burst fractures. a five to seven-year prospective randomized study.", "text": "BACKGROUND\nThe impact of fusion as a supplement to short-segment instrumentation for the treatment of thoracolumbar burst fractures is unclear. We conducted a controlled clinical trial to define the effect of fusion on lumbar spine and patient-related functional outcomes.\n\n\nMETHODS\nFrom 2000 to 2002, seventy-three consecutive patients with a single-level Denis type-B burst fracture involving the thoracolumbar spine and a load-sharing score of ANDOOP for Java generates unit tests for Java code using feedback-directed random test generation. Below we describe RANDOOP's input, output, and test generation algorithm. We also give an overview of RANDOOP's annotation-based interface for specifying configuration parameters that affect RANDOOP's behavior and output."}
{"_id": "1fab8830baef8f353dce995c0269a32fd3105f64", "title": "An orchestrated survey of methodologies for automated software test case generation", "text": "Test case generation is among the most labour-intensive tasks in software testing. It also has a strong impact on the effectiveness and efficiency of software testing. For these reasons, it has been one of the most active research topics in software testing for several decades, resulting in many different approaches and tools. This paper presents an orchestrated survey of the most prominent techniques for automatic generation of software test cases, reviewed in self-standing sections. The techniques presented include: (a) structural testing using symbolic execution, (b) model-based testing, (c) combinatorial testing, (d) random testing and its variant of adaptive random testing, and (e) search-based testing. Each section is odel-based testing contributed by world-renowned active researchers on the technique, and briefly covers the basic ideas underlying the method, the current state of the art, a discussion of the open research problems, and a rchestrated survey earch-based software testing"}
{"_id": "4f789439fe5a121e6f47453d8a95ec733baca537", "title": "Pex-White Box Test Generation for .NET", "text": "Pex automatically produces a small test suite with high code coverage for a .NET program. To this end, Pex performs a systematic program analysis (using dynamic symbolic execution, similar to pathbounded model-checking) to determine test inputs for Parameterized Unit Tests. Pex learns the program behavior by monitoring execution traces. Pex uses a constraint solver to produce new test inputs which exercise different program behavior. The result is an automatically generated small test suite which often achieves high code coverage. In one case study, we applied Pex to a core component of the .NET runtime which had already been extensively tested over several years. Pex found errors, including a serious issue."}
{"_id": "1f0774eb8ef13971a545dbdc406d7560a073bbc9", "title": "Software Testing Research: Achievements, Challenges, Dreams", "text": "Software engineering comprehends several disciplines devoted to prevent and remedy malfunctions and to warrant adequate behaviour. Testing, the subject of this paper, is a widespread validation approach in industry, but it is still largely ad hoc, expensive, and unpredictably effective. Indeed, software testing is a broad term encompassing a variety of activities along the development cycle and beyond, aimed at different goals. Hence, software testing research faces a collection of challenges. A consistent roadmap of the most relevant challenges to be addressed is here proposed. In it, the starting point is constituted by some important past achievements, while the destination consists of four identified goals to which research ultimately tends, but which remain as unreachable as dreams. The routes from the achievements to the dreams are paved by the outstanding research challenges, which are discussed in the paper along with interesting ongoing work."}
{"_id": "307acb91ebc6333f044359aff9284bbe0d48e358", "title": "SEGAN: Speech Enhancement Generative Adversarial Network", "text": "Current speech enhancement techniques operate on the spectral domain and/or exploit some higher-level feature. The majority of them tackle a limited number of noise conditions and rely on first-order statistics. To circumvent these issues, deep networks are being increasingly used, thanks to their ability to learn complex functions from large example sets. In this work, we propose the use of generative adversarial networks for speech enhancement. In contrast to current techniques, we operate at the waveform level, training the model end-to-end, and incorporate 28 speakers and 40 different noise conditions into the same model, such that model parameters are shared across them. We evaluate the proposed model using an independent, unseen test set with two speakers and 20 alternative noise conditions. The enhanced samples confirm the viability of the proposed model, and both objective and subjective evaluations confirm the effectiveness of it. With that, we open the exploration of generative architectures for speech enhancement, which may progressively incorporate further speech-centric design choices to improve their performance."}
{"_id": "252385dabb18852c73a9285b5cfe5c3482c3a7f8", "title": "The power of naive query segmentation", "text": "We address the problem of query segmentation: given a keyword query submitted to a search engine, the task is to group the keywords into phrases, if possible. Previous approaches to the problem achieve good segmentation performance on a gold standard but are fairly intricate. Our method is easy to implement and comes with a comparable accuracy."}
{"_id": "6539f238add7b676428b623bda08813f67e86a81", "title": "Opposite Effects of \u0394-9-Tetrahydrocannabinol and Cannabidiol on Human Brain Function and Psychopathology", "text": "\u0394-9-tetrahydrocannabinol (\u0394-9-THC) and Cannabidiol (CBD), the two main ingredients of the Cannabis sativa plant have distinct symptomatic and behavioral effects. We used functional magnetic resonance imaging (fMRI) in healthy volunteers to examine whether \u0394-9-THC and CBD had opposite effects on regional brain function. We then assessed whether pretreatment with CBD can prevent the acute psychotic symptoms induced by \u0394-9-THC. Fifteen healthy men with minimal earlier exposure to cannabis were scanned while performing a verbal memory task, a response inhibition task, a sensory processing task, and when viewing fearful faces. Subjects were scanned on three occasions, each preceded by oral administration of \u0394-9-THC, CBD, or placebo. BOLD responses were measured using fMRI. In a second experiment, six healthy volunteers were administered \u0394-9-THC intravenously on two occasions, after placebo or CBD pretreatment to examine whether CBD could block the psychotic symptoms induced by \u0394-9-THC. \u0394-9-THC and CBD had opposite effects on activation relative to placebo in the striatum during verbal recall, in the hippocampus during the response inhibition task, in the amygdala when subjects viewed fearful faces, in the superior temporal cortex when subjects listened to speech, and in the occipital cortex during visual processing. In the second experiment, pretreatment with CBD prevented the acute induction of psychotic symptoms by \u0394-9-tetrahydrocannabinol. \u0394-9-THC and CBD can have opposite effects on regional brain function, which may underlie their different symptomatic and behavioral effects, and CBD's ability to block the psychotogenic effects of \u0394-9-THC."}
{"_id": "1e59f1071eeb6a662ad8cffad1752167b7cef451", "title": "Graph Based Relational Features for Collective Classification", "text": "Statistical Relational Learning (SRL) methods have shown that classification accuracy can be improved by integrating relations between samples. Techniques such as iterative classification or relaxation labeling achieve this by propagating information between related samples during the inference process. When only a few samples are labeled and connections between samples are sparse, collective inference methods have shown large improvements over standard feature-based ML methods. However, in contrast to feature based ML, collective inference methods require complex inference procedures and often depend on the strong assumption of label consistency among related samples. In this paper, we introduce new relational features for standard ML methods by extracting information from direct and indirect relations. We show empirically on three standard benchmark datasets that our relational features yield results comparable to collective inference methods. Finally we show that our proposal outperforms these methods when additional information is available."}
{"_id": "f2f51a5f1e2edface850ff423536091f5d344ab0", "title": "Key organizational factors in data warehouse architecture selection", "text": "a r t i c l e i n f o Keywords: Data warehousing Data warehouse architecture Archictecture selection Independent data marts Bus architecture Enterprise data warehouse Hub and spoke Federated Even though data warehousing has been in existence for over a decade, companies are still uncertain about a critical decision \u2014 which data warehouse architecture to implement? Based on the existing literature, theory, and interviews with experts, a research model was created that identifies the various contextual factors that affect the selection decision. The results from the field survey and multinomial logistic regression suggest that various combinations of organizational factors influence data warehouse architecture selection. The strategic view of the data warehouse prior to implementation emerged as a key determinant. The research suggests an overall model for predicting the data warehouse architecture selection decision. Over the past decade, many companies have made data warehouses the foundation of their decision support infrastructures. These data repositories provide a solution to a recurring problem that has plagued companies' decision support initiatives \u2014 the lack of clean, accurate, timely, and integrated data. Whether the data is used for queries and reporting, decision support systems (DSS), executive information systems (EIS), online analytical processing (OLAP), or data mining, data warehouses provide the data that \" fuels \" these applications [36]. Data warehouses are also critical enablers of current strategic initiatives such as customer relationship management (CRM), business performance management (BPM), and supply chain management [18,23,35,85]. Despite the importance and growing experience with data ware-housing, there is still a considerable discussion and disagreement over which architecture to use. Bill Inmon, commonly referred to as \" the father of data warehousing, \" advocates what is variously called the hub and spoke architecture (i.e., a centralized data warehouse with dependent data marts), Corporate Information Factory, DW 2.0 (i.e., Inmon's current term), or enterprise data warehouse architecture [23]. Another architecture alternative is the data mart bus architecture with linked dimensional data marts (i.e., bus architecture), advocated by Ralph Kimball, the other preeminent figure in data warehousing [52]. 1 These and other architectures (e.g., independent data marts and federated) have fundamental differences and strong advocates. The selection of an appropriate architecture is an important decision. A study by the Meta Group found that architecture selection is one of the key factors influencing data warehousing success [54]. A Gartner report identified the architecture selection decision as one of the five problem \u2026"}
{"_id": "18bcef5ad12592c6437d7f9dd35bce6a23227559", "title": "Experienced Grey Wolf Optimizer through Reinforcement Learning and Neural Networks", "text": "In this paper, a variant of Grey Wolf Optimizer (GWO) that uses reinforcement learning principles combined with neural networks to enhance the performance is proposed. The aim is to overcome, by reinforced learning, the common challenges of setting the right parameters for the algorithm. In GWO, a single parameter is used to control the exploration/exploitation rate which influences the performance of the algorithm. Rather than using a global way to change this parameter for all the agents, we use reinforcement learning to set it on an individual basis. The adaptation of the exploration rate for each agent depends on the agent\u2019s own experience and the current terrain of the search space. In order to achieve this, an experience repository is built based on the neural network to map a set of agents\u2019 states to a set of corresponding actions that specifically influence the exploration rate. The experience repository is updated by all the search agents to reflect experience and to enhance the future actions continuously. The resulted algorithm is called Experienced Grey Wolf Optimizer (EGWO) and its performance is assessed on solving feature selection problems and on finding optimal weights for neural networks algorithm. We use a set of performance indicators to evaluate the efficiency of the method. Results over various datasets demonstrate an advance of the EGWO over the original GWO and other meta-heuristics such as genetic algorithms and particle swarm optimization."}
{"_id": "335a69e88ab58e663f0ff9107e106f29d0179ae2", "title": "Second-order complex random vectors and normal distributions", "text": "We formulate as a deconvolution problem the causalhoncausal non-Gaussian multichannel autoregressive (AR) parameter estimation problem. The super exponential aljporithm presented in a recent paper by Shalvi and Weinstein is generalized to the vector case. We present an adaptive implementation that is very attractive since it is higher order statistics (HOS) based b u t does not present the high comlputational complexity of methods proposed up to now."}
{"_id": "d0b046f9e003af2e09ff6b52ca758b4d9a740a8b", "title": "One selector-one resistor (1S1R) crossbar array for high-density flexible memory applications", "text": "Lack of a suitable selection device to suppress sneak current has impeded the development of 4F2 crossbar memory array utilizing stable and scalable bipolar resistive-switching. We report a high-performance nonlinear bipolar selector realized by a simple Ni/TiO2/Ni MIM structure with a high current density of 105 A/cm2, and a Ni/TiO2/Ni/HfO2/Pt vertically stacked 1S1R cell capable of gigabit memory implementation. Furthermore, the demonstration of 1S1R array fabricated completely at room temperature on a plastic substrate highlights the promise of future extremely low-cost flexible nonvolatile memory."}
{"_id": "a263478403e855e39616009232ff06b66b8eda31", "title": "A Low Cost Eeg Based Bci Prosthetic Using Motor Imagery", "text": "Brain Computer Interfaces (BCI) provide the opportunity to control external devices using the brain ElectroEncephaloGram (EEG) signals. In this paper we propose two software framework in order to control a 5 degree of freedom robotic and prosthetic hand. Results are presented where an Emotiv Cognitive Suite (i.e. the 1 st framework) combined with an embedded software system (i.e. an open source Arduino board) is able to control the hand through character input associated with the taught actions of the suite. This system provides evidence of the feasibility of brain signals being a viable approach to controlling the chosen prosthetic. Results are then presented in the second framework. This latter one allowed for the training and classification of EEG signals for motor imagery tasks. When analysing the system, clear visual representations of the performance and accuracy are presented in the results using a confusion matrix, accuracy measurement and a feedback bar signifying signal strength. Experiments with various acquisition datasets were carried out and with a critical evaluation of the results given. Finally depending on the classification of the brain signal a Python script outputs the driving command to the Arduino to control the prosthetic. The proposed architecture performs overall good results for the design and implementation of economically convenient BCI and prosthesis."}
{"_id": "038b24b41b03ad93bde593d1f8a8a5016d97b770", "title": "Methods for ordinal peer grading", "text": "Massive Online Open Courses have the potential to revolutionize higher education with their wide outreach and accessibility, but they require instructors to come up with scalable alternates to traditional student evaluation. Peer grading -- having students assess each other -- is a promising approach to tackling the problem of evaluation at scale, since the number of \"graders\" naturally scales with the number of students. However, students are not trained in grading, which means that one cannot expect the same level of grading skills as in traditional settings. Drawing on broad evidence that ordinal feedback is easier to provide and more reliable than cardinal feedback [5, 38, 29, 9], it is therefore desirable to allow peer graders to make ordinal statements (e.g. \"project X is better than project Y\") and not require them to make cardinal statements (e.g. \"project X is a B-\"). Thus, in this paper we study the problem of automatically inferring student grades from ordinal peer feedback, as opposed to existing methods that require cardinal peer feedback. We formulate the ordinal peer grading problem as a type of rank aggregation problem, and explore several probabilistic models under which to estimate student grades and grader reliability. We study the applicability of these methods using peer grading data collected from a real class --- with instructor and TA grades as a baseline --- and demonstrate the efficacy of ordinal feedback techniques in comparison to existing cardinal peer grading methods. Finally, we compare these peer-grading techniques to traditional evaluation techniques."}
{"_id": "de743755180a6be3017a1cf4f16de60fa351aa55", "title": "Watch Me Code: Programming Mentorship Communities on Twitch.tv", "text": "Live streaming-an emerging practice of broadcasting video of oneself in real time to an online audience-is often used by people to portray themselves engaged in a craft such as programming. Viewers of these 'creative streams' gather to watch the streamer at work and to interact with the streamer and other audience members. However, little is known about how creative streamers engage with their audience, how their viewership communities form and operate, and how creative streams may support learning. In this study, we used a participant-observer method to study game development streams on the live streaming site Twitch.tv. We found that live streams support the growth of learning-focused communities that mentor both the streamer and each other during and after streams. We show the influence of streamers in creating a space for learning and motivating learners. Finally, we discuss implications for online education and communities of practice."}
{"_id": "c498b1cf7a98673ddda461225356610ac51ae59b", "title": "A multi-dimensional meter-adaptive method for automatic segmentation of music", "text": "Music structure appears on a wide variety of temporal levels (notes, bars, phrases, etc). Its highest-level expression is therefore dependent on music's lower-level organization, especially beats and bars. We propose a method for automatic structure segmentation that uses musically meaningful information and is content-adaptive. It relies on a meter-adaptive signal representation that prevents from the use of empirical parameters. Moreover, our method is designed to combine multiple signal features to account for various musical dimensions. Finally, it also combines multiple structural principles that yield complementary results. The resulting algorithm proves to already outperform state-of-the-art methods, especially within small tolerance windows, and yet offers several encouraging improvement directions."}
{"_id": "586b77c12ce869771321b853edaa31fb050b39b9", "title": "X-shaped inferior extensor retinaculum and its doubtful use in the Br\u00f6strom\u2013Gould procedure", "text": "The inferior extensor retinaculum (IER) is an aponeurotic structure located in the anterior aspect of the ankle. According to the literature, it can be used to reinforce a repair of the anterior talofibular ligament in ankle instability. Despite its usual description as an Y-shaped structure, it is still unclear which part of the retinaculum is used for this purpose, or if it is instead the crural fascia that is being used. The purpose of this study is to define the anatomical characteristics of the IER to better understand its role in the Brostr\u00f6m\u2013Gould procedure. Twenty-one ankles were dissected. The morphology of the IER and its relationship with neighbouring structures were recorded. Seventeen (81%) of the IER in this study had an X-shaped morphology, with the presence of an additional oblique superolateral band. This band, by far the thinnest of the retinaculum, is supposed to be used to reinforce the repair of the anterior talofibular ligament. The intermediate dorsal cutaneous nerve (lateral branch of the superficial peroneal nerve) was found to cross the retinaculum in all cases. The IER is most commonly seen as an X-shaped structure, but the fact that the oblique superolateral band is a thin band of tissue probably indicates that it may not add significant strength to ankle stability. Furthermore, the close relationship of the retinaculum with the superficial peroneal nerve is another factor to consider before deciding to perform a Brostr\u00f6m\u2013Gould procedure. These anatomical findings advise against the use of the Gould augmentation."}
{"_id": "0db98ae16ee22767d0c8b3e28417c7ea5460664f", "title": "Construction projects selection and risk assessment by fuzzy AHP and fuzzy TOPSIS methodologies", "text": "Construction projects are initiated in dynamic environment which result in circumstances of high uncertainty and risks due to accumulation of many interrelated parameters. The purpose of this study is to use novel analytic tools to evaluate the construction projects and their overall risks under incomplete and uncertain situations. It was also aimed to place the risk in a proper category and predict the level of it in advance to develop strategies and counteract the high-risk factors. The study covers identifying the key risk criteria of construction projects at King Abdulaziz University (KAU), and assessing the criteria by the integrated hybrid methodologies. The proposed hybrid methodologies were initiated with a survey for data collection. The relative importance index (RII) method was applied to prioritize the project risks based on the data obtained. The construction projects were then categorized by fuzzy AHP and fuzzy TOPSIS methodologies. Fuzzy AHP (FAHP) was used to create favorable weights for fuzzy linguistic variable of construction projects overall risk. The fuzzy TOPSIS method is very suitable for solving group decision making problems under the fuzzy environment. It attempted to incorporate vital qualitative attributes in performance analysis of construction projects and transformed the qualitative data into equivalent quantitative measures. Thirty construction projects were studied with respect to five main criteria that are the time, cost, quality, safety and environment sustainability. The results showed that these novel methodologies are able to assess the overall risks of construction projects, select the project that has the lowest risk with the contribution of relative importance index. This approach will have potential applications in the future. Crown Copyright \u00a9 2014 Published by Elsevier B.V. All rights reserved."}
{"_id": "5038365531534c6bd5f08543280c3c390b6925de", "title": "Noise Modeling for RF CMOS Circuit Simulation", "text": "The RF noise in 0.18m CMOS technology has been measured and modeled. In contrast to some other groups, we find only a moderate enhancement of the drain current noise for shortchannel MOSFETs. The gate current noise on the other hand is more significantly enhanced, which is explained by the effects of the gate resistance. The experimental results are modeled with a nonquasi-static RF model, based on channel segmentation, which is capable of predicting both drain and gate current noise accurately. Experimental evidence is shown for two additional noise mechanisms: 1) avalanche noise associated with the avalanche current from drain to bulk and 2) shot noise in the direct-tunneling gate leakage current. Additionally, we show low-frequency noise measurements, which strongly point toward an explanation of the1 noise based on carrier trapping, not only in n-channel MOSFETs, but also in p-channel MOSFETs."}
{"_id": "7b66c5c66d5d49c1be8d908b7db4df75552540bb", "title": "Topology Aware Fully Convolutional Networks for Histology Gland Segmentation", "text": "[1] Sirinukunwattana et al. Gland Segmentation in colon histology images: the GlaS contest, arXiv 2016 [2] Long et al. Fully convolutional networks for semantic segmentation, CVPR 2015 [3] Chend et al. Semantic image segmentation with deep convolutional nets and fully connected CRFs, arXiv 2014 [4] Zheng et al. Conditional random fields as recurrent neural networks, ICCV 2015 Alexnet-FCN (6 layers) FCN-8s (16 layers) UNet (23 layers) DN (26 layers) 0.93 0.79 0.86 0.83"}
{"_id": "c47bb003c2c8063cb45d690232448b3259462aaf", "title": "Learning to predict eye fixations for semantic contents using multi-layer sparse network", "text": "In this paper, we present a novel model for saliency prediction under a unified framework of feature integration. The model distinguishes itself by directly learning from natural images and automatically incorporating higher-level semantic information in a scalable manner for gaze prediction. Unlike most existing saliency models that rely on specific features or object detectors, our model learns multiple stages of features that mimic the hierarchical organization of the ventral stream in the visual cortex and integrate them by adapting their weights based on the ground-truth fixation data. To accomplish this, we utilize a multi-layer sparse network to learn low-, midand high-level features from natural images and train a linear support vector machine (SVM) for weight adaption and feature integration. Experimental results show that our model could learn high-level semantic features like faces and texts and can perform competitively among existing approaches in predicting eye fixations. & 2014 Elsevier B.V. All rights reserved."}
{"_id": "a7ee01555555c22ecd126d27bbde52691ab8a953", "title": "E2E NLG Challenge Submission: Towards Controllable Generation of Diverse Natural Language", "text": "In natural language generation (NLG), the task is to generate utterances from a more abstract input, such as structured data. An added challenge is to generate utterances that contain an accurate representation of the input, while reflecting the fluency and variety of human-generated text. In this paper, we report experiments with NLG models that can be used in task oriented dialogue systems. We explore the use of additional input to the model to encourage diversity and control of outputs. While our submission does not rank highly using automated metrics, qualitative investigation of generated utterances suggests the use of additional information in neural network NLG systems to be a promising research direction."}
{"_id": "6f5925371ac78b13d306c5123bcef34061b0ef50", "title": "A Novel Earth Mover's Distance Methodology for Image Matching with Gaussian Mixture Models", "text": "The similarity or distance measure between Gaussian mixture models (GMMs) plays a crucial role in content-based image matching. Though the Earth Mover's Distance (EMD) has shown its advantages in matching histogram features, its potentials in matching GMMs remain unclear and are not fully explored. To address this problem, we propose a novel EMD methodology for GMM matching. We first present a sparse representation based EMD called SR-EMD by exploiting the sparse property of the underlying problem. SR-EMD is more efficient and robust than the conventional EMD. Second, we present two novel ground distances between component Gaussians based on the information geometry. The perspective from the Riemannian geometry distinguishes the proposed ground distances from the classical entropy-or divergence-based ones. Furthermore, motivated by the success of distance metric learning of vector data, we make the first attempt to learn the EMD distance metrics between GMMs by using a simple yet effective supervised pair-wise based method. It can adapt the distance metrics between GMMs to specific classification tasks. The proposed method is evaluated on both simulated data and benchmark real databases and achieves very promising performance."}
{"_id": "67d3db2649bafde26f285088d36cb5d85a21a8b5", "title": "Collaborative color calibration for multi-camera systems", "text": "This paper proposes a collaborative color calibration meth od for multi-camera systems. The multi-camera color calibr t on problem is formulated as an overdetermined linear system, i n which a dynamic range shaping is incorporated to ensure the high contrasts for captured images. The cameras are calibrated w ith the parameters obtained by solving the linear system. Fo r non-planar multi-camera systems, we design a novel omnidirectional co lor checker and present a method for establishing global cor respondences to facilitate automatic color calibration without m anual adjustment. According to experimental results on bot h synthetic and real-system datasets, the proposed method shows high perfo rmance in achieving inter-camera color consistency and hig h dynamic range. Thanks to the generality of the linear system formula tion and the flexibility of the designed color checker, the pr oposed method is applicable to various multi-camera systems."}
{"_id": "2ed1e7bab25e414bff8e6b29412d63d27b6712bf", "title": "Dynamic Reserve Prices for Repeated Auctions: Learning from Bids - Working Paper", "text": "A large fraction of online advertisements are sold via repeated second price auctions. In these auctions, the reserve price is the main tool for the auctioneer to boost revenues. In this work, we investigate the following question: Can changing the reserve prices based on the previous bids improve the revenue of the auction, taking into account the long-term incentives and strategic behavior of the bidders? We show that if the distribution of the valuations is known and satisfies the standard regularity assumptions, then the optimal mechanism has a constant reserve. However, when there is uncertainty in the distribution of the valuations and competition among the bidders, previous bids can be used to learn the distribution of the valuations and to update the reserve prices. We present a simple approximately incentive-compatible and optimal dynamic reserve mechanism that can significantly improve the revenue over the best static reserve in such settings."}
{"_id": "0f366de3ea595932dad06389f6e61fe0dd8cbe74", "title": "DeepAnomaly: Combining Background Subtraction and Deep Learning for Detecting Obstacles and Anomalies in an Agricultural Field", "text": "Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained for detecting and classifying a predefined set of object types. These algorithms have difficulties in detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agriculture field is homogeneous, and obstacles, like people, animals and other obstacles, occur rarely and are of distinct appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogenous characteristics of a field to perform anomaly detection. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded and unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors including \"Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks\" (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45-90 m) than RCNN. RCNN has a similar performance at a short range (0-30 m). However, DeepAnomaly has much fewer model parameters and (182 ms/25 ms =) a 7.28-times faster processing time per image. Unlike most CNN-based methods, the high accuracy, the low computation time and the low memory footprint make it suitable for a real-time system running on a embedded GPU (Graphics Processing Unit)."}
{"_id": "45790b5bf6a3ad7c641809035661d14d73d6361b", "title": "CDnet 2014: An Expanded Change Detection Benchmark Dataset", "text": "Change detection is one of the most important lowlevel tasks in video analytics. In 2012, we introduced the changedetection.net (CDnet) benchmark, a video dataset devoted to the evalaution of change and motion detection approaches. Here, we present the latest release of the CDnet dataset, which includes 22 additional videos (70; 000 pixel-wise annotated frames) spanning 5 new categories that incorporate challenges encountered in many surveillance settings. We describe these categories in detail and provide an overview of the results of more than a dozen methods submitted to the IEEE Change DetectionWorkshop 2014. We highlight strengths and weaknesses of these methods and identify remaining issues in change detection."}
{"_id": "fc1e8b3c69084e48a98b2f4e688377e17b838d37", "title": "Does Online Technology Make Us More or Less Sociable? A Preliminary Review and Call for Research.", "text": "How does online technology affect sociability? Emerging evidence-much of it inconclusive-suggests a nuanced relationship between use of online technology (the Internet, social media, and virtual reality) and sociability (emotion recognition, empathy, perspective taking, and emotional intelligence). Although online technology can facilitate purely positive behavior (e.g., charitable giving) or purely negative behavior (e.g., cyberbullying), it appears to affect sociability in three ways, depending on whether it allows a deeper understanding of people's thoughts and feelings: (a) It benefits sociability when it complements already-deep offline engagement with others, (b) it impairs sociability when it supplants deeper offline engagement for superficial online engagement, and (c) it enhances sociability when deep offline engagement is otherwise difficult to attain. We suggest potential implications and moderators of technology's effects on sociability and call for additional causal research."}
{"_id": "abc7b6834e7e81088b9f2c03d246e2643e5f3c82", "title": "A Rule-Based Approach For Effective Sentiment Analysis", "text": "The success of Web 2.0 applications has made online social media websites tremendous assets for supporting critical business intelligence applications. The knowledge gained from social media can potentially lead to the development of novel services that are better tailored to users\u2019 needs and at the same time meet the objectives of businesses offering them. Online consumer reviews are one of the critical social media contents. Proper analysis of consumer reviews not only provides valuable information to facilitate the purchase decisions of customers but also helps merchants or product manufacturers better understand general responses of customers on their products for marketing campaign improvement. This study aims at designing an approach for supporting the effective analysis of the huge volume of online consumer reviews and, at the same time, settling the major limitations of existing approaches. Specifically, the proposed rule-based sentiment analysis (R-SA) technique employs the class association rule mining algorithm to automatically discover interesting and effective rules capable of extracting product features or opinion sentences for a specific product feature interested. According to our preliminary evaluation results, the R-SA technique performs well in comparison with its benchmark technique."}
{"_id": "2132af47c82d5722b86747dfe3a9d7cf9e0bb4e1", "title": "Making Sense of Entities and Quantities in Web Tables", "text": "HTML tables and spreadsheets on the Internet or in enterprise intranets often contain valuable information, but are created ad-hoc. As a result, they usually lack systematic names for column headers and clear vocabulary for cell values. This limits the re-use of such tables and creates a huge heterogeneity problem when comparing or aggregating multiple tables.\n This paper aims to overcome this problem by automatically canonicalizing header names and cell values onto concepts, classes, entities and uniquely represented quantities registered in a knowledge base.\n To this end, we devise a probabilistic graphical model that captures coherence dependencies between cells in tables and candidate items in the space of concepts, entities and quantities. We give specific consideration to quantities which are mapped into a \"measure, value, unit\" triple over a taxonomy of physical (e.g. power consumption), monetary (e.g. revenue), temporal (e.g. date) and dimensionless (e.g. counts) measures.\n Our experiments with Web tables from diverse domains demonstrate the viability of our method and its benefits over baselines."}
{"_id": "8423cc50c18d68f797adaa4f571f5e4efbe325a5", "title": "A Laplacian Framework for Option Discovery in Reinforcement Learning", "text": "Representation learning and option discovery are two of the biggest challenges in reinforcement learning (RL). Proto-RL is a well known approach for representation learning in MDPs. The representations learned with this framework are called proto-value functions (PVFs). In this paper we address the option discovery problem by showing how PVFs implicitly define options. We do it by introducing eigenpurposes, intrinsic reward functions derived from the learned representations. The options discovered from eigenpurposes traverse the principal directions of the state space. They are useful for multiple tasks because they are independent of the agents\u2019 intentions. Moreover, by capturing the diffusion process of a random walk, different options act at different time scales, making them helpful for exploration strategies. We demonstrate features of eigenpurposes in traditional tabular domains as well as in Atari 2600 games."}
{"_id": "86f4420551916d4a158a814605809ac45a5191df", "title": "Accessible Crowdwork?: Understanding the Value in and Challenge of Microtask Employment for People with Disabilities", "text": "We present the first formal study of crowdworkers who have disabilities via in-depth open-ended interviews of 17 people (disabled crowdworkers and job coaches for people with disabilities) and a survey of 631 adults with disabilities. Our findings establish that people with a variety of disabilities currently participate in the crowd labor marketplace, despite challenges such as crowdsourcing workflow designs that inadvertently prohibit participation by, and may negatively affect the worker reputations of, people with disabilities. Despite such challenges, we find that crowdwork potentially offers different opportunities for people with disabilities relative to the normative office environment, such as job flexibility and lack of a need to rely on public transit. We close by identifying several ways in which crowd labor platform operators and/or individual task requestors could improve the accessibility of this increasingly important form of employment."}
{"_id": "a7587234f5f102fad5fb0908af8e7069915fe813", "title": "Breakfast reduces declines in attention and memory over the morning in schoolchildren", "text": "Twenty-nine schoolchildren were tested throughout the morning on 4 successive days, having a different breakfast each day (either of the cereals Cheerios or Shreddies, glucose drink or No breakfast). A series of computerised tests of attention, working memory and episodic secondary memory was conducted prior to breakfast and again 30, 90, 150 and 210 min later. The glucose drink and No breakfast conditions were followed by declines in attention and memory, but the declines were significantly reduced in the two cereal conditions. This study provides objective evidence that a typical breakfast of cereal rich in complex carbohydrates can help maintain mental performance over the morning."}
{"_id": "e7ee730fa3d5083a7ec4e68bb0cc428862e86a51", "title": "Customer relationship management capabilities Measurement , antecedents and consequences", "text": "Purpose \u2013 This study seeks to extend the resource-based view to the context of customer relationship management. It is intended to develop a measurement model of customer relationship management (CRM) capabilities, and to explore the key antecedents and performance consequences of CRM capabilities. Design/methodology/approach \u2013 Questionnaire survey was used to collect data. In order to develop a reliable and valid measurement model of CRM capabilities, several rounds of questionnaire survey were conducted, and hypotheses were tested by utilizing the technique of structural equation modeling. Findings \u2013 A three-factor (customer interaction management capability, customer relationship upgrading capability and customer win-back capability) measurement model of CRM capabilities is developed and tested. Furthermore, results support the hypothesized influences of customer orientation, customer-centric organizational system and CRM technology on CRM capabilities, as well as the influence of CRM capabilities on organizational performance. Practical implications \u2013 This study provides a useful measurement mode of CRM capabilities that managers can use to evaluate the status in quo of CRM capabilities of their firms. Managers may also improve their CRM programs more effectively and efficiently by deploying such strategic resources of firms as customer orientation, customer-centric organizational system and CRM technology to build and strengthen their CRM capabilities. Originality/value \u2013 The paper addresses significant gaps in the current literature by taking a capability view of CRM, developing a valid measurement model of CRM capabilities, and examining how possession of important CRM resources influences the effective deployment of CRM capabilities."}
{"_id": "a048260ce47b07fb8e14eee42595414330059652", "title": "Reduced interference in working memory following mindfulness training is associated with increases in hippocampal volume", "text": "Proactive interference occurs when previously relevant information interferes with retaining newer material. Overcoming proactive interference has been linked to the hippocampus and deemed critical for cognitive functioning. However, little is known about whether and how this ability can be improved or about the neural correlates of such improvement. Mindfulness training emphasizes focusing on the present moment and minimizing distraction from competing thoughts and memories. It improves working memory and increases hippocampal density. The current study examined whether mindfulness training reduces proactive interference in working memory and whether such improvements are associated with changes in hippocampal volume. 79 participants were randomized to a 4-week web-based mindfulness training program or a similarly structured creative writing active control program. The mindfulness group exhibited lower proactive interference error rates compared to the active control group following training. No group differences were found in hippocampal volume, yet proactive interference improvements following mindfulness training were significantly associated with volume increases in the left hippocampus. These results provide the first evidence to suggest that (1) mindfulness training can protect against proactive interference, and (2) that these benefits are related to hippocampal volumetric increases. Clinical implications regarding the application of mindfulness training in conditions characterized by impairments to working memory and reduced hippocampal volume such as aging, depression, PTSD, and childhood adversity are discussed."}
{"_id": "4df7a155b4b8a3d6dbf5e214179b349ed46e7310", "title": "Real-Time Top-View People Counting Based on a Kinect and NVIDIA Jetson TK1 Integrated Platform", "text": "In this paper, we describe how to establish an embedded framework for real-time top-view people counting. The development of our system consists of two parts, i.e. establishing an embedded signal processing platform and designing a people counting algorithm for the embedded system. For the hardware platform construction, we use Kinect as the camera and exploit NVIDIA Jetson TK1 board as the embedded processing platform. We describe how to build a channel to make Kinect for windows version 2.0 communicate with Jetson TK1. Based on the embedded system, we adapt a water filling based scheme for top-view people counting, which integrates head detection based on water drop, people tracking and counting. Gaussian Mixture Model is used to construct and update the background model. The moving people in each video frame are extracted using background subtraction method. Additionally, the water filling algorithm is used to segment head area as Region Of Interest(ROI). Tracking and counting people are performed by calculating the distance of ROI center point before and after the frame. The whole framework is flexible and practical for real-time application."}
{"_id": "c8e0f9e65fd76d78f178206f3012add6ffc56b4d", "title": "Mobile technology and the digitization of healthcare.", "text": "The convergence of science and technology in our dynamic digital era has resulted in the development of innovative digital health devices that allow easy and accurate characterization in health and disease. Technological advancements and the miniaturization of diagnostic instruments to modern smartphone-connected and mobile health (mHealth) devices such as the iECG, handheld ultrasound, and lab-on-a-chip technologies have led to increasing enthusiasm for patient care with promises to decrease healthcare costs and to improve outcomes. This 'hype' for mHealth has recently intersected with the 'real world' and is providing important insights into how patients and practitioners are utilizing digital health technologies. It is also raising important questions regarding the evidence supporting widespread device use. In this state-of-the-art review, we assess the current literature of mHealth and aim to provide a framework for the advances in mHealth by understanding the various device, patient, and clinical factors as they relate to digital health from device designs and patient engagement, to clinical workflow and device regulation. We also outline new strategies for generation and analysis of mHealth data at the individual and population-based levels."}
{"_id": "432ebdde46a3323f377bbb04ed8954e1b7e012fe", "title": "Experimental implementation of underactuated potential energy shaping on a powered ankle-foot orthosis", "text": "Traditional control methodologies of rehabilitation orthoses/exoskeletons aim at replicating normal kinematics and thus fall into the category of kinematic control. This control paradigm depends on pre-defined reference trajectories, which can be difficult to adjust between different locomotor tasks and human subjects. An alternative control category, kinetic control, enforces kinetic goals (e.g., torques or energy) instead of kinematic trajectories, which could provide a flexible learning environment for the user while freeing up therapists to make corrections. We propose that the theory of underactuated potential energy shaping, which falls into the category of kinetic control, could be used to generate virtual body-weight support for stroke gait rehabilitation. After deriving the nonlinear control law and simulating it on a human-like biped model, we implemented this controller on a powered ankle-foot orthosis that was designed specifically for testing torque control strategies. Experimental results with an able-bodied human subject demonstrate the feasibility of the control approach for both positive and negative virtual body-weight augmentation."}
{"_id": "f99d1358e3d7198b85d3b689e743787c1c91c9fd", "title": "Distance estimation with 2 . 5 D anchors and its application to robot navigation", "text": "Estimating the distance of a target object from a single image is a challenging task since a large variation in the object appearance makes the regression of the distance difficult. In this paper, to tackle such the challenge, we propose 2.5D anchors which provide the candidate of distances based on a perspective camera model. This candidate is expected to relax the difficulty of the regression model since only the residual from the candidate distance needs to be taken into account. We show the effectiveness of the regression with our proposed anchors, by comparing with ordinary regression methods and state-of-the-art 3D object detection methods, through Pascal 3D+ TV monitor and KITTI car experiments. In addition, we also show an example of practical uses of our proposed method in a real-time system, robot navigation, by integrating with ROS-based simultaneous localization and mapping."}
{"_id": "9f8daa09a9b7918cb51825db86df6b21ff1410c2", "title": "Machine morality: bottom-up and top-down approaches for modelling human moral faculties", "text": "The implementation of moral decision making abilities in artificial intelligence (AI) is a natural and necessary extension to the social mechanisms of autonomous software agents and robots. Engineers exploring design strategies for systems sensitive to moral considerations in their choices and actions will need to determine what role ethical theory should play in defining control architectures for such systems. The architectures for morally intelligent agents fall within two broad approaches: the top-down imposition of ethical theories, and the bottom-up building of systems that aim at goals or standards which may or may not be specified in explicitly theoretical terms. In this paper we wish to provide some direction for continued research by outlining the value and limitations inherent in each of these approaches."}
{"_id": "503e91d69742a723be55ff4143a6c02457a6caca", "title": "A systematic review of implementation strategies for assessment, prevention, and management of ICU delirium and their effect on clinical outcomes", "text": "INTRODUCTION\nDespite recommendations from professional societies and patient safety organizations, the majority of ICU patients worldwide are not routinely monitored for delirium, thus preventing timely prevention and management. The purpose of this systematic review is to summarize what types of implementation strategies have been tested to improve ICU clinicians' ability to effectively assess, prevent and treat delirium and to evaluate the effect of these strategies on clinical outcomes.\n\n\nMETHOD\nWe searched PubMed, Embase, PsychINFO, Cochrane and CINAHL (January 2000 and April 2014) for studies on implementation strategies that included delirium-oriented interventions in adult ICU patients. Studies were suitable for inclusion if implementation strategies' efficacy, in terms of a clinical outcome, or process outcome was described.\n\n\nRESULTS\nWe included 21 studies, all including process measures, while 9 reported both process measures and clinical outcomes. Some individual strategies such as \"audit and feedback\" and \"tailored interventions\" may be important to establish clinical outcome improvements, but otherwise robust data on effectiveness of specific implementation strategies were scarce. Successful implementation interventions were frequently reported to change process measures, such as improvements in adherence to delirium screening with up to 92%, but relating process measures to outcome changes was generally not possible. In meta-analyses, reduced mortality and ICU length of stay reduction were statistically more likely with implementation programs that employed more (six or more) rather than less implementation strategies and when a framework was used that either integrated current evidence on pain, agitation and delirium management (PAD) or when a strategy of early awakening, breathing, delirium screening and early exercise (ABCDE bundle) was employed. Using implementation strategies aimed at organizational change, next to behavioral change, was also associated with reduced mortality.\n\n\nCONCLUSION\nOur findings may indicate that multi-component implementation programs with a higher number of strategies targeting ICU delirium assessment, prevention and treatment and integrated within PAD or ABCDE bundle have the potential to improve clinical outcomes. However, prospective confirmation of these findings is needed to inform the most effective implementation practice with regard to integrated delirium management and such research should clearly delineate effective practice change from improvements in clinical outcomes."}
{"_id": "4c7c5c2f9ea1f8803bf8ee3ee2840fca8f1d0872", "title": "Ontology and Information Systems", "text": "Ontology as a branch of philosophy is the science of what is, of the kinds and structures of objects, properties, events, processes and relations in every area of reality. \u2018Ontology\u2019 is often used by philosophers as a synonym for \u2018metaphysics\u2019 (literally: \u2018what comes after the Physics\u2019), a term which was used by early students of Aristotle to refer to what Aristotle himself called \u2018first philosophy\u2019. The term \u2018ontology\u2019 (or ontologia) was itself coined in 1613, independently, by two philosophers, Rudolf G\u00f6ckel (Goclenius), in his Lexicon philosophicum and Jacob Lorhard (Lorhardus), in his Theatrum philosophicum. The first occurrence in English recorded by the OED appears in Bailey\u2019s dictionary of 1721, which defines ontology as \u2018an Account of being in the Abstract\u2019."}
{"_id": "3af369fcb2837ddcfe1ffc488ec307dc5e3ed014", "title": "Deep convolutional acoustic word embeddings using word-pair side information", "text": "Recent studies have been revisiting whole words as the basic modelling unit in speech recognition and query applications, instead of phonetic units. Such whole-word segmental systems rely on a function that maps a variable-length speech segment to a vector in a fixed-dimensional space; the resulting acoustic word embeddings need to allow for accurate discrimination between different word types, directly in the embedding space. We compare several old and new approaches in a word discrimination task. Our best approach uses side information in the form of known word pairs to train a Siamese convolutional neural network (CNN): a pair of tied networks that take two speech segments as input and produce their embeddings, trained with a hinge loss that separates same-word pairs and different-word pairs by some margin. A word classifier CNN performs similarly, but requires much stronger supervision. Both types of CNNs yield large improvements over the best previously published results on the word discrimination task."}
{"_id": "1959b4c9c0c37acb19c14129389e15a6ced515bc", "title": "Water-walking devices", "text": "We report recent efforts in the design and construction of water-walking machines inspired by insects and spiders. The fundamental physical constraints on the size, proportion and dynamics of natural water-walkers are enumerated and used as design criteria for analogous mechanical devices. We report devices capable of rowing along the surface, leaping off the surface and climbing menisci by deforming the free surface. The most critical design constraint is that the devices be lightweight and non-wetting. Microscale manufacturing techniques and new man-made materials such as hydrophobic coatings and thermally actuated wires are implemented. Using highspeed cinematography and flow visualization, we compare the functionality and dynamics of our devices with those of their natural counterparts."}
{"_id": "6f898bea897bebb212463288823edcf4dae7ef46", "title": "Genome engineering in Saccharomyces cerevisiae using CRISPR-Cas systems", "text": "Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) and CRISPR-associated (Cas) systems in bacteria and archaea use RNA-guided nuclease activity to provide adaptive immunity against invading foreign nucleic acids. Here, we report the use of type II bacterial CRISPR-Cas system in Saccharomyces cerevisiae for genome engineering. The CRISPR-Cas components, Cas9 gene and a designer genome targeting CRISPR guide RNA (gRNA), show robust and specific RNA-guided endonuclease activity at targeted endogenous genomic loci in yeast. Using constitutive Cas9 expression and a transient gRNA cassette, we show that targeted double-strand breaks can increase homologous recombination rates of single- and double-stranded oligonucleotide donors by 5-fold and 130-fold, respectively. In addition, co-transformation of a gRNA plasmid and a donor DNA in cells constitutively expressing Cas9 resulted in near 100% donor DNA recombination frequency. Our approach provides foundations for a simple and powerful genome engineering tool for site-specific mutagenesis and allelic replacement in yeast."}
{"_id": "397f3ab172ec419639da5837bfe5d6d6da80f576", "title": "Cognitive programs: software for attention's executive", "text": "What are the computational tasks that an executive controller for visual attention must solve? This question is posed in the context of the Selective Tuning model of attention. The range of required computations go beyond top-down bias signals or region-of-interest determinations, and must deal with overt and covert fixations, process timing and synchronization, information routing, memory, matching control to task, spatial localization, priming, and coordination of bottom-up with top-down information. During task execution, results must be monitored to ensure the expected results. This description includes the kinds of elements that are common in the control of any kind of complex machine or system. We seek a mechanistic integration of the above, in other words, algorithms that accomplish control. Such algorithms operate on representations, transforming a representation of one kind into another, which then forms the input to yet another algorithm. Cognitive Programs (CPs) are hypothesized to capture exactly such representational transformations via stepwise sequences of operations. CPs, an updated and modernized offspring of Ullman's Visual Routines, impose an algorithmic structure to the set of attentional functions and play a role in the overall shaping of attentional modulation of the visual system so that it provides its best performance. This requires that we consider the visual system as a dynamic, yet general-purpose processor tuned to the task and input of the moment. This differs dramatically from the almost universal cognitive and computational views, which regard vision as a passively observing module to which simple questions about percepts can be posed, regardless of task. Differing from Visual Routines, CPs explicitly involve the critical elements of Visual Task Executive (vTE), Visual Attention Executive (vAE), and Visual Working Memory (vWM). Cognitive Programs provide the software that directs the actions of the Selective Tuning model of visual attention."}
{"_id": "317d42ec9d872445d7ea8373cf97c11e512d56d6", "title": "A line feature based SLAM with low grade range sensors using geometric constraints and active exploration for mobile robot", "text": "This paper describes a geometrically constrained Extended Kalman Filter (EKF) framework for a line feature based SLAM, which is applicable to a rectangular indoor environment. Its focus is on how to handle sparse and noisy sensor data, such as PSD infrared sensors with limited range and limited number, in order to develop a low-cost navigation system. It has been applied to a vacuum cleaning robot in our research. In order to meet the real-time objective with low computing power, we develop an efficient line feature extraction algorithm based upon an iterative end point fit (IEPF) technique assisted by our constrained version of the Hough transform. It uses a geometric constraint that every line is orthogonal or parallel to each other because in a general indoor setting, most furniture and walls satisfy this constraint. By adding this constraint to the measurement model of EKF, we build a geometrically constrained EKF framework which can estimate line feature positions more accurately as well as allow their covariance matrices to converge more rapidly when compared to the case of an unconstrained EKF. The experimental results demonstrate the accuracy and robustness to the presence of sensor noise and errors in an actual indoor environment."}
{"_id": "7aba8268cc4041f3de94399eb217e818111b414c", "title": "Ontology based Semantic Search Engine", "text": "We live in a connected age when every entity is linked to each other through numerous relationships thus forming complex networks, information networks etc., In this meshy scenario, storing analyzing, managing extracting information is going to be highly challenging task. Information retrieval happened to be an activity that only a few people engaged in. due to the advent of search engines and communication facilities, millions of people are engaged in information retrieval and extraction process. The information retrieval deals with finding a set of documents relevant to the user query. Commercial search engines like Google deals with key word search which is based on Boolean logical queries. The major disadvantage of this kind of keyword search is that it returns a lot of irrelevant information to the users which results in low precision. Nowadays the field of information retrieval is moving towards semantic level from syntactic level. These semantic search engines are based on the concept of ontology which gives a Meta data representation of concepts. In this paper we focus on the retrieval mechanisms related to it. This includes various phases in which the ontology is designed first and then the indexing and retrieval phases work. Key wordsOntology, semantic web, Information retrieval, Query builder I.INTRODUCTION In this digital age each and every concept is linked to each other through various relationships forming the complex information web. By passing through the web we face some sort of challenges in storing and extracting the information from them. We therefore rely on search engines to overcome those difficulties. Any way these search engines facilitate only keyword search which does not returns accurate results to the external user. Aiming to solve the limitations of keyword based models, the idea of semantic search, understood as searching by meanings rather than by literals has been the focus of a wide body of research in the information retrieval and the semantic web communities Semantic search has been present in the information retrieval since the early eighties. Some of these approaches are based on the statistical methods that study the co-occurrence of terms that capture and exploit tough and fuzzy conceptualizations. Other Information retrieval approaches apply linguistic algorithms modeled on human language processing structures and taxonomies, where the level of conceptualization is often shallow and sparse, especially the level of relations, which are commonly at the core of expressing user needs and finding the answers. Ontologies are used to represent knowledge in a conceptual manner that can be distributed among various applications. The ontology has its application in information retrieval where it exploits the knowledge bases enhancing the semantic search, steering in one hand the use of fully fledged ontologies in the semantic based perspective, and on the other the consideration of unstructured content as target search space. In other words, this work explores the use of semantic information to support more expensive queries and more accurate results, while the retrieval problem is formulated in a way that is consistent with the Information retrieval field, thus drawing benefit from the state of art in this area and enabling more realistic and applicable approaches. II. RELEATED WORK A. 
Semantic web Current World Wide Web (WWW) is a huge library of interlinked documents that are transferred by computers and presented to people. It has grown from hypertext systems, but the difference is that anyone can contribute to it. This also means that the quality of information or even the persistence of documents cannot be generally guaranteed. Current WWW contains a lot of information and knowledge, but machines usually serve only to deliver and present the content of documents describing the knowledge. People have to connect all the sources of relevant information and interpret them themselves Semantic web is an effort to enhance current web so that computers can process the information presented on WWW, interpret and connect it, to help humans to find required knowledge. In the same way as WWW is a huge distributed hypertext system, semantic web is intended to form a huge distributed knowledge based system. The focus of semantic web is to share data instead of documents. In other words, it is a project that should provide a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. It is a collaborative effort led by World Wide Web Consortium (W3C). The semantic web has got its peak in recent years by the use of explicit metadata representation of information in the web. The metadata representation is embedded in web page using RDF to enhance the visualization of results. Semantic Web can be seen as a huge engineering solution... but it is more than that. We will find that as it becomes easier to publish data in a repurposable form, so more people will want to publish data, and there will be a knock-on or domino effect. N.Vanjulavalli et al IJCSET |August 2012 | Vol 2, Issue 8, 1349-1353 ISSN:2231-0711 Available online at www.ijcset.net 1349 We may find that a large number of Semantic Web applications can be used for a variety of different tasks, increasing the modularity of applications on the Web. But enough subjective reasoning. onto how this will be accomplished. The Semantic Web is generally built on syntaxes which use URIs to represent data, usually in triples based structures: i.e. many triples of URI data that can be held in databases, or interchanged on the world Wide Web using a set of particular syntaxes developed especially for the task. These syntaxes are called \"Resource Description Framework\" syntaxes. Once information is in RDF form, it becomes easy to process it, since RDF is a generic format, which already has many parsers. XML RDF is quite a verbose specification, and it can take some getting used to (for example, to learn XML RDF properly, you need to understand a little about XML and namespaces beforehand...), but let's take a quick look at an example of XML RDF right now: Sean B. Palmer The Semantic Web: An Introduction"}
{"_id": "414b0d139d83024d47649ba37c3d11b1165057d6", "title": "Automatic crop irrigation system", "text": "India is agriculture based nation. It is necessary to improve the productivity and quality of agro based products. The proposed design is an automatic system that aids the famers in irrigation process. It keeps notifying the farmer through an on-board LCD display and messages that is sent on the farmer's cellular number. This proposed design is also helpful for the farmers who are facing power failure issues to maintain a uniform water supply due to power failure or inadequate and non-uniform water supply. The automatic irrigation system also keeps the farmer updated with all the background activities through a SIM900 Module that sends messages on the registered number. This device can be a turning point for our society. The device is easily affordable by the farmers of the country. This proposed design is helpful for reducing the human labour. This is a low budget system with an essential social application."}
{"_id": "6ed591fec03437ed2bf7479d92f49833e3851f71", "title": "Automatic drip irrigation system using fuzzy logic and mobile technology", "text": "An intelligent drip irrigation system optimizes water and fertilizer use for agricultural crops using wireless sensors and fuzzy logic. The wireless sensor networks consists of many sensor nodes, hub and control unit. The sensor collects real-time data such as temperature, soil humidity. This data is sent to the hub using the wireless technology. The hub processes the data using fuzzy logic and decides the time duration for keeping the valves open. Accordingly, the drip irrigation system is implemented for a particular amount of time. The whole system is powered by photovoltaic cells and has a communication link which allows the system to be monitored, controlled, and scheduled through cellular text messages. The system can quickly and accurately calculate water demand amount of crops, which can provide a scientific basis for water-saving irrigation, as well as a method to optimize the amount of fertilizer used."}
{"_id": "8075c73fd8b13fa9663230a383f5712bf210ebcf", "title": "Remote Sensing and Control of an Irrigation System Using a Distributed Wireless Sensor Network", "text": "Efficient water management is a major concern in many cropping systems in semiarid and arid areas. Distributed in-field sensor-based irrigation systemsoffer a potential solution to support site-specific irrigation management that allows producers to maximize their productivity while saving water. This paper describes details of the design and instrumentation of variable rate irrigation, a wireless sensor network, and software for real-time in-field sensing and control of a site-specific precision linear-move irrigation system. Field conditions were site-specifically monitored by six in-field sensor stations distributed across the field based on a soil property map, and periodically sampled and wirelessly transmitted to a base station. An irrigation machine was converted to be electronically controlled by a programming logic controller that updates georeferenced location of sprinklers from a differential Global Positioning System (GPS) and wirelessly communicates with a computer at the base station. Communication signals from the sensor network and irrigation controller to the base station were successfully interfaced using low-cost Bluetooth wireless radio communication. Graphic user interface-based software developed in this paper offered stable remote access to field conditions and real-time control and monitoring of the variable-rate irrigation controller."}
{"_id": "ebf9bfbb122237ffdde5ecbbb292181c92738fd4", "title": "AUTOMATED IRRIGATION SYSTEM USING THERMOELECTRIC GENERATOR AS SOIL MOISTURE DETECTOR", "text": "This paper shows the design and fabrication of a Thermo-electric generator (TEG) and the implementation of an automated irrigation system using this TEG as a soil moisture detector. The TEG inserted in two heat exchangers is capable of finding the thermal difference between the air and the soil that establishes a relationship with the soil\u2019s moisture condition. Being able to obtain the soil moisture level from the TEG\u2019s output, a microcontroller is used to automate the irrigation system. The irrigation system adapts to the soil area\u2019s condition it irrigates based from the moisture it detects via the TEG. The water consumption of the soil is controlled by the automated irrigation system based on the soil\u2019s condition and therefore, promotes water conservation compared to that of the water consumption of manual irrigation system. It also optimizes plant growth in that it waters it to the correct moisture level at the right time."}
{"_id": "59f153ddd37e22af153aa0d7caf3ec44053aa8e8", "title": "A Wireless Design of Low-Cost Irrigation System Using ZigBee Technology", "text": "At present, labor-saving and water-saving technology is a key issue in irrigation. A wireless solution for intelligent field irrigation system dedicated to Jew's-ear planting in Lishui, Zhejiang, China, based on ZigBee technology was proposed in this paper. Instead of conventional wired connection, the wireless design made the system easy installation and maintenance. The hardware architecture and software algorithm of wireless sensor/actuator node and portable controller, acting as the end device and coordinator in ZigBee wireless sensor network respectively, were elaborated in detail. The performance of the whole system was evaluated in the end. The long-time smooth and proper running of the system in the field proved its high reliability and practicability. As an explorative application of wireless sensor network in irrigation management, this paper offered a methodology to establish large-scale remote intelligent irrigation system."}
{"_id": "3a9f2d5e07f3ab1ebe45cd906dbc0a3b0257651a", "title": "Axial Force and Vibroacoustic Analysis of External-Rotor Axial-Flux Motors", "text": "This paper presents a detailed analysis of the axial electromagnetic force and vibroacoustic behavior of the external-rotor axial-flux motors. First, the spatial and temporal characteristics of axial force acting on the surface of magnets are derived analytically and validated through the 2-D Fourier decomposition. Subsequently, a multiphysics model is established to calculate the vibration and noise of an external-rotor axial-flux in-wheel motor, which integrates the control model, the electromagnetic model, the structural model, and the acoustic radiation model and takes the uneven distribution of axial force into account. The accuracy of the multiphysics model is verified by the noise test. The vibroacoustic mechanism of axial-flux motors is revealed based on the multiphysics model, the influence of current harmonics and dead time effect on the vibration and noise are also analyzed. Finally, the main orders of tested noise are explained by the theoretical analysis. It is found that the zeroth spatial order axial force is dominant for the generation of the vibration and noise in axial-flux motors. The harmonic current may deteriorate the vibroacoustic behavior, but depending on its amplitude and phase. This study provides guidance for the design of low-noise axial-flux motors."}
{"_id": "64bae20a8338707ed00bd9c551b8310d9ecb9407", "title": "Model-based policy gradients with parameter-based exploration by least-squares conditional density estimation", "text": "The goal of reinforcement learning (RL) is to let an agent learn an optimal control policy in an unknown environment so that future expected rewards are maximized. The model-free RL approach directly learns the policy based on data samples. Although using many samples tends to improve the accuracy of policy learning, collecting a large number of samples is often expensive in practice. On the other hand, the model-based RL approach first estimates the transition model of the environment and then learns the policy based on the estimated transition model. Thus, if the transition model is accurately learned from a small amount of data, the model-based approach is a promising alternative to the model-free approach. In this paper, we propose a novel model-based RL method by combining a recently proposed model-free policy search method called policy gradients with parameter-based exploration and the state-of-the-art transition model estimator called least-squares conditional density estimation. Through experiments, we demonstrate the practical usefulness of the proposed method."}
{"_id": "e1e2f5c00c9584deef1601181b7491bb71220d9b", "title": "A Smart Home Appliances Power Management System for Handicapped, Elder and Blind People", "text": "Smart home has become a visible concept that attracts the collaboration of various areas of science and engineering. Increasing the power efficiency and power management has become an important research topic since a decade. In this research article, the focus is to ease the life of elder and handicapped people. In point of fact as compare to the healthy people, elderly and disabled people experiences difficulties to perform their everyday activities. Elderly and disabled people can be supported by using smart homes by providing them secure, safe, and controlled environments. The developed system allows the users to be able to control the appliances with least physical efforts. More particularly, the designed system allow users to switch the home appliances ON and OFF just by sending message command by using Android application or SMS with the help of a cell phone. Not only the far remote monitoring, but the local management of appliances is made possible by using the Bluetooth technology. The experimental results demonstrate that the user can control the home appliances with the designed Android application by using the voice, Bluetooth, and SMS inputs."}
{"_id": "20eb16a035e0d81db83b48229b639cc6241e4e1b", "title": "Detecting Singleton Review Spammers Using Semantic Similarity", "text": "Online reviews have increasingly become a very important resource for consumers when making purchases. Though it is becoming more and more difficult for people to make well-informed buying decisions without being deceived by fake reviews. Prior works on the opinion spam problem mostly considered classifying fake reviews using behavioral user patterns. They focused on prolific users who write more than a couple of reviews, discarding one-time reviewers. The number of singleton reviewers however is expected to be high for many review websites. While behavioral patterns are effective when dealing with elite users, for one-time reviewers, the review text needs to be exploited. In this paper we tackle the problem of detecting fake reviews written by the same person using multiple names, posting each review under a different name. We propose two methods to detect similar reviews and show the results generally outperform the vectorial similarity measures used in prior works. The first method extends the semantic similarity between words to the reviews level. The second method is based on topic modeling and exploits the similarity of the reviews topic distributions using two models: bag-of-words and bag-of-opinion-phrases. The experiments were conducted on reviews from three different datasets: Yelp (57K reviews), Trustpilot (9K reviews) and Ott dataset (800 reviews)."}
{"_id": "a5dbaf9921e8f95b1738ff6c3ea16fc29f2a2dda", "title": "Fuzzy sets, advanced fuzzy sets and hybrids", "text": "Fuzzy Sets were proposed several years ago with various extensions in later years. Each extension has advantages over the fuzzy sets. Rough sets are used to handle incomplete information. Interval-valued fuzzy sets deal with uncertainty and vagueness. Intuitionistic fuzzy sets contain a sub-interval hesitation degree that lies between membership and non-membership degrees. Soft sets overcome the problem of insufficiency of parameterization. Advanced fuzzy sets have myriad number of advantages due to their applications in realistic examples. This paper briefly explains the abstract of fuzzy sets, advanced fuzzy sets and their hybrids. Advanced fuzzy sets include rough sets, intuitionistic fuzzy sets, interval-valued fuzzy sets and soft sets."}
{"_id": "37ef4d243da25edd9f02b9b44fdca04eae0c33b8", "title": "The habitual consumer", "text": "Consumers sometimes act like creatures of habit, automatically repeating past behavior with little regard to current goals and valued outcomes. To explain this phenomenon, we show that habits are a specific form of automaticity in which responses are directly cued by the contexts (e.g., locations, preceding actions) that consistently covaried with past performance. Habits are prepotent responses that are quick to activate in memory over alternatives and that have a slow-to-modify memory trace. In daily life, the tendency to act on habits is compounded by everyday demands, including time pressures, distraction, and self-control depletion. However, habits are not immune to deliberative processes. Habits are learned largely as people pursue goals in daily life, and habits are broken through the strategic deployment of effortful self-control. Also, habits influence the post hoc inferences that people make about their behavior. \u00a9 2009 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved."}
{"_id": "cc3fd13ec9b52ca1c3f13f1ed21928f1c2fd3914", "title": "Reputation and e-commerce : eBay auctions and the asymmetrical impact of positive and negative ratings", "text": "This analysis explores the impact and nature of reputation as related to e-commerce by looking at the importance of a seller\u2019s reputational rating on the final bid price associated with eBay auctions. Positive reputational ratings emerged as mildly influential in determining final bid price. However, negative reputational ratings emerged as highly influential and detrimental. Thus, we find strong evidence for the importance of reputation when engaging in e-commerce and equally strong evidence concerning the exaggerated influence of negative reputation. \u00a9 2001 Elsevier Science Inc. All rights reserved."}
{"_id": "48f315d2457ddb51b261fefef082ef6c43bad6e6", "title": "Substrate Modeling and Lumped Substrate Resistance Extraction for CMOS ESD/Latchup Circuit Simulation", "text": "Due to interactions through the common silicon substrate, the layout and placement of devices and substrate contacts can have signi cant impacts on a circuit's ESD (Electrostatic Discharge) and latchup behavior in CMOS technologies. Proper substrate modeling is thus required for circuit-level simulation to predict the circuit's ESD performance and latchup immunity. In this work we propose a new substrate resistance network model, and develop a novel substrate resistance extraction method that accurately calculates the distribution of injection current into the substrate during ESD or latchup events. With the proposed substrate model and resistance extraction, we can capture the three-dimensional layout parasitics in the circuit as well as the vertical substrate doping pro le, and simulate these e ects on circuit behavior at the circuit-level accurately. The usefulness of this work for layout optimization is demonstrated with an industrial circuit example."}
{"_id": "5937acec2b19a40a778f05e020979c894689dc44", "title": "Effects of controlled breathing, mental activity and mental stress with or without verbalization on heart rate variability.", "text": "OBJECTIVES\nTo assess whether talking or reading (silently or aloud) could affect heart rate variability (HRV) and to what extent these changes require a simultaneous recording of respiratory activity to be correctly interpreted.\n\n\nBACKGROUND\nSympathetic predominance in the power spectrum obtained from short- and long-term HRV recordings predicts a poor prognosis in a number of cardiac diseases. Heart rate variability is often recorded without measuring respiration; slow breaths might artefactually increase low frequency power in RR interval (RR) and falsely mimic sympathetic activation.\n\n\nMETHODS\nIn 12 healthy volunteers we evaluated the effect of free talking and reading, silently and aloud, on respiration, RR and blood pressure (BP). We also compared spontaneous breathing to controlled breathing and mental arithmetic, silent or aloud. The power in the so called low- (LF) and high-frequency (HF) bands in RR and BP was obtained from autoregressive power spectrum analysis.\n\n\nRESULTS\nCompared with spontaneous breathing, reading silently increased the speed of breathing (p < 0.05), decreased mean RR and RR variability and increased BP. Reading aloud, free talking and mental arithmetic aloud shifted the respiratory frequency into the LF band, thus increasing LF% and decreasing HF% to a similar degree in both RR and respiration, with decrease in mean RR but with minor differences in crude RR variability.\n\n\nCONCLUSIONS\nSimple mental and verbal activities markedly affect HRV through changes in respiratory frequency. This possibility should be taken into account when analyzing HRV without simultaneous acquisition and analysis of respiration."}
{"_id": "e7650b2cd8ffe1e73c306e61944d99b73b070e48", "title": "SensePlace2: GeoTwitter analytics support for situational awareness", "text": "Geographically-grounded situational awareness (SA) is critical to crisis management and is essential in many other decision making domains that range from infectious disease monitoring, through regional planning, to political campaigning. Social media are becoming an important information input to support situational assessment (to produce awareness) in all domains. Here, we present a geovisual analytics approach to supporting SA for crisis events using one source of social media, Twitter. Specifically, we focus on leveraging explicit and implicit geographic information for tweets, on developing place-time-theme indexing schemes that support overview+detail methods and that scale analytical capabilities to relatively large tweet volumes, and on providing visual interface methods to enable understanding of place, time, and theme components of evolving situations. Our approach is user-centered, using scenario-based design methods that include formal scenarios to guide design and validate implementation as well as a systematic claims analysis to justify design choices and provide a framework for future testing. The work is informed by a structured survey of practitioners and the end product of Phase-I development is demonstrated / validated through implementation in SensePlace2, a map-based, web application initially focused on tweets but extensible to other media."}
{"_id": "8400ad5e6026fba72d6152188339e427484ab15b", "title": "Formal Semantics and Analysis of BPMN Process Models using Petri Nets ?", "text": "The Business Process Modelling Notation (BPMN) is a standard for capturing business processes in the early phases of systems development. The mix of constructs found in BPMN makes it possible to obtain models with a range of semantic errors. The ability to statically check the semantic correctness of models is thus a desirable feature for modelling tools based on BPMN. However, the static analysis of BPMN models is hindered by ambiguities in the standard specification and the complexity of the language. The fact that BPMN integrates constructs from graph-oriented process definition languages with features for concurrent execution of multiple instances of a subprocess and exception handling, makes it challenging to provide a formal semantics of BPMN. Even more challenging is to define a semantics that can be used to analyse BPMN models. This paper proposes a formal semantics of BPMN defined in terms of a mapping to Petri nets, for which efficient analysis techniques exist. The proposed mapping has been implemented as a tool that generates code in the Petri Net Markup Language. This formalisation exercise has also led to the identification of a number of deficiencies in the BPMN standard specification."}
{"_id": "0427f233e50b314fa59c286e698954e072aded71", "title": "The Virtual Reality Modeling Language and Java", "text": "Why VRML and Java together? Over 20 million VRML browsers have been shipped with Web browsers, making interactive 3D graphics suddenly available for any desktop. Java adds complete programming capabilities plus network access, making VRML fully functional and portable. This is a powerful new combination, especially as ongoing research shows that VRML plus Java provide extensive support for building large-scale virtual environments (LSVEs). This article provides historical background, a detailed overview of VRML 3D graphics, sample VRML-Java test programs, and a look ahead at future work."}
{"_id": "d8ae769afaca5ff0546b45f71ae0cf1bd97038d4", "title": "Single actuator wave-like robot (SAW): design, modeling, and experiments.", "text": "In this paper, we present a single actuator wave-like robot, a novel bioinspired robot which can move forward or backward by producing a continuously advancing wave. The robot has a unique minimalistic mechanical design and produces an advancing sine wave, with a large amplitude, using only a single motor but with no internal straight spine. Over horizontal surfaces, the robot does not slide relative to the surface and its direction of locomotion is determined by the direction of rotation of the motor. We developed a kinematic model of the robot that accounts for the two-dimensional mechanics of motion and yields the speed of the links relative to the motor. Based on the optimization of the kinematic model, and accounting for the mechanical constraints, we have designed and built multiple versions of the robot with different sizes and experimentally tested them (see movie). The experimental results were within a few percentages of the expectations. The larger version attained a top speed of 57 cm s(-1) over a horizontal surface and is capable of climbing vertically when placed between two walls. By optimizing the parameters, we succeeded in making the robot travel by 13% faster than its own wave speed."}
{"_id": "96e92ff6c7642cc75dc856ae4b22a5409c69e6cb", "title": "Graph-based distributed cooperative navigation for a general multi-robot measurement model", "text": "Cooperative navigation (CN) enables a group of cooperative robots to reduce their individual navigation errors. For a general multi-robot (MR) measurement model that involves both inertial navigation data and other onboard sensor readings, taken at different time instances, the various sources of information become correlated. Thus, this correlation should be solved for in the process of information fusion to obtain consistent state estimation. The common approach for obtaining the correlation terms is to maintain an augmented covariance matrix. This method would work for relative pose measurements, but is impractical for a general MR measurement model, because the identities of the robots involved in generating the measurements, as well as the measurement time instances, are unknown a priori. In the current work, a new consistent information fusion method for a general MR measurement model is developed. The proposed approach relies on graph theory. It enables explicit on-demand calculation of the required correlation terms. The graph is locally maintained by every robot in the group, representing all the MR measurement updates. The developed method calculates the correlation terms in the most general scenarios of MR measurements while properly handling the involved process and measurement noise. A theoretical example and a statistical study are provided, demonstrating the performance of the method for vision-aided navigation based on a three-view measurement model. The method is compared, in a simulated environment, to a fixed-lag centralized smoothing approach. The method is also validated in an experiment that involved real imagery and navigation data. Computational complexity estimates show that the newly-developed method is computationally efficient."}
{"_id": "3704eef061227ffab99569fbf6b01b73b8e4031c", "title": "Validating a Deep Learning Framework by Metamorphic Testing", "text": "Deep learning has become an important tool for image classification and natural language processing. However, the effectiveness of deep learning is highly dependent on the quality of the training data as well as the net model for the learning. The training data set for deep learning normally is fairly large, and the net model is pretty complex. It is necessary to validate the deep learning framework including the net model, executing environment, and training data set before it is used for any applications. In this paper, we propose an approach for validating the classification accuracy of a deep learning framework that includes a convolutional neural network, a deep learning executing environment, and a massive image data set. The framework is first validated with a classifier built on support vector machine, and then it is tested using a metamorphic validation approach. The effectiveness of the approach is demonstrated by validating a deep learning classifier for automated classification of biology cell images. The proposed approach can be used for validating other deep learning framework for different applications."}
{"_id": "c614d687584f4bf0227679437a8f5bc02c4afe20", "title": "File Fragment Classification-The Case for Specialized Approaches", "text": "Increasingly advances in file carving, memory analysis and network forensics requires the ability to identify the underlying type of a file given only a file fragment. Work to date on this problem has relied on identification of specific byte sequences in file headers and footers, and the use of statistical analysis and machine learning algorithms taken from the middle of the file. We argue that these approaches are fundamentally flawed because they fail to consider the inherent internal structure in widely used file types such as PDF, DOC, and ZIP. We support our argument with a bottom-up examination of some popular formats and an analysis of TK PDF files. Based on our analysis, we argue that specialized methods targeted to each specific file type will be necessary to make progress in this area."}
{"_id": "fc20f0ce11946c7d17a676fd880fec6dfc1c0397", "title": "Self-traits and motivations as antecedents of digital media flow and addiction: The Internet, mobile phones, and video games", "text": ""}
{"_id": "d25d6d7162a62774f0ebf40fee545f9b8c0c49fe", "title": "Mobile robot localization and object pose estimation using optical encoder, vision and laser sensors", "text": "A key problem of a mobile robot system is how to localize itself and detect objects in the workspace. In this paper, a multiple sensor based robot localization and object pose estimation method is presented. First, optical encoders and odometry model are utilized to determine the pose of the mobile robot in the workspace, with respect to the global coordinate system. Next, a CCD camera is used as a passive sensor to find an object (a box) in the environment, including the specific vertical surfaces of the box. By identifying and tracking color blobs which are attached to the center of each vertical surface of the box, the robot rotates and adjusts its base pose to move the color blob into the center of the camera view in order to make sure that the box is in the range of the laser scanner. Finally, a laser range finder, which is mounted on the top of the mobile robot, is activated to detect and compute the distances and angles between the laser source and laser contact surface on the box. Based on the information acquired in this manner, the global pose of the robot and the box can be represented using the homogeneous transformation matrix. This approach is validated using the Microsoft Robotics Studio simulation environment."}
{"_id": "afd812d62479dbb9c7e586229b80de79d5ecf4b1", "title": "An In Vitro Study on the Effects of Post-Core Design and Ferrule on the Fracture Resistance of Endodontically Treated Maxillary Central Incisors", "text": "BACKGROUND\nEndodontically treated teeth have significantly different physical and mechanical properties compared to vital teeth and are more prone to fracture. The study aims to compare the fracture resistance of endodontically treated teeth with and without post reinforcement, custom cast post-core and prefabricated post with glass ionomer core and to evaluate the ferrule effect on endodontically treated teeth restored with custom cast post-core.\n\n\nMATERIALS AND METHODS\nA total of 40 human maxillary central incisors with similar dimensions devoid of any root caries, restorations, previous endodontic treatment or cracks were selected from a collection of stored extracted teeth. An initial silicone index of each tooth was made. They were treated endodontically and divided into four groups of ten specimens each. Their apical seal was maintained with 4 mm of gutta-percha. Root canal preparation was done and then post core fabrication was done. The prepared specimens were subjected to load testing using a computer coordinated UTM. The fracture load results were then statistically analyzed. One-way ANOVA was followed by paired t-test.\n\n\nRESULTS\n1. Reinforcement of endodontically treated maxillary central incisors with post and core, improved their fracture resistance to be at par with that of endodontically treated maxillary central incisor, with natural crown. 2. The fracture resistance of endodontically treated maxillary central incisors is significantly increased when restored with custom cast post-core and 2 mm ferrule.\n\n\nCONCLUSION\nWith 2 mm ferrule, teeth restored with custom cast post-core had a significantly higher fracture resistance than teeth restored with custom cast post-core or prefabricated post and glass ionomer core without ferrule."}
{"_id": "e1fa3fd2883cf04f84c811328aecaef976b68a34", "title": "UAV-based crop and weed classification for smart farming", "text": "Unmanned aerial vehicles (UAVs) and other robots in smart farming applications offer the potential to monitor farm land on a per-plant basis, which in turn can reduce the amount of herbicides and pesticides that must be applied. A central information for the farmer as well as for autonomous agriculture robots is the knowledge about the type and distribution of the weeds in the field. In this regard, UAVs offer excellent survey capabilities at low cost. In this paper, we address the problem of detecting value crops such as sugar beets as well as typical weeds using a camera installed on a light-weight UAV. We propose a system that performs vegetation detection, plant-tailored feature extraction, and classification to obtain an estimate of the distribution of crops and weeds in the field. We implemented and evaluated our system using UAVs on two farms, one in Germany and one in Switzerland and demonstrate that our approach allows for analyzing the field and classifying individual plants."}
{"_id": "724cc8ef4661cb4e7223977b5367ba1bf27aaf86", "title": "Understanding Android Security", "text": "Google's Android platform is a widely anticipated open source operating system for mobile phones. This article describes Android's security model and attempts to unmask the complexity of secure application development. The authors conclude by identifying lessons and opportunities for future enhancements."}
{"_id": "be7536d9baaef7ccdbff845f8e98c136b4c80bb3", "title": "Protecting privacy using the decentralized label model", "text": "Stronger protection is needed for the confidentiality and integrity of data, because programs containing untrusted code are the rule rather than the exception. Information flow control allows the enforcement of end-to-end security policies, but has been difficult to put into practice. This article describes the decentralized label model, a new label model for control of information flow in systems with mutual distrust and decentralized authority. The model improves on existing multilevel security models by allowing users to declassify information in a decentralized way, and by improving support for fine-grained data sharing. It supports static program analysis of information flow, so that programs can be certified to permit only acceptable information flows, while largely avoiding the overhead of run-time checking. The article introduces the language Jif, an extension to Java that provides static checking of information flow using the decentralized label model."}
{"_id": "eab0d3bdcb21364883bb55e295e55ab479a90181", "title": "Firewalls and internet security: Repelling the wily hacker, 2nd ed. [Book Review]", "text": "General Interest How the Internet Works, 7th ed., Preston Gralla (Que, 2004, 368 pp., ISBN 0-78972973-3, $29.99; www.quepublishing.com) Preston Gralla is a well-known technology columnist and author. In this large-format book lavishly illustrated by Michael Troller, Gralla explains all of the concepts that most people are likely to encounter in connecting to the Internet. The explanations are not deep, but they systematically cover the basics. Most people, even experts, will probably find at least some of the explanations helpful. For example, I feel that I know quite a bit about the Internet, but Gralla\u2019s explanation of proxy servers clarified the subject for me. The illustrations are colorful and attractive, but in many cases are merely decorative. Visual learners may find them helpful, though I doubt that anyone could get much from the book just by looking at the pictures. The text, on the other hand, can probably stand alone. If you know people who are just starting to use the Internet and electronic mail, consider giving them this book. It may save you from having to answer some difficult questions."}
{"_id": "23ff1f97e26c2ab9c4e37c63bb23cb49053ef753", "title": "Event Tactic Analysis Based on Broadcast Sports Video", "text": "Most existing approaches on sports video analysis have concentrated on semantic event detection. Sports professionals, however, are more interested in tactic analysis to help improve their performance. In this paper, we propose a novel approach to extract tactic information from the attack events in broadcast soccer video and present the events in a tactic mode to the coaches and sports professionals. We extract the attack events with far-view shots using the analysis and alignment of web-casting text and broadcast video. For a detected event, two tactic representations, aggregate trajectory and play region sequence, are constructed based on multi-object trajectories and field locations in the event shots. Based on the multi-object trajectories tracked in the shot, a weighted graph is constructed via the analysis of temporal\u2013spatial interaction among the players and the ball. Using the Viterbi algorithm, the aggregate trajectory is computed based on the weighted graph. The play region sequence is obtained using the identification of the active field locations in the event based on line detection and competition network. The interactive relationship of aggregate trajectory with the information of play region and the hypothesis testing for trajectory temporal\u2013spatial distribution are employed to discover the tactic patterns in a hierarchical coarse-to-fine framework. Extensive experiments on FIFA World Cup 2006 show that the proposed approach is highly effective."}
{"_id": "105a216d530d05e9f22be59b8be786735f9a19fa", "title": "Assessing Suicide Risk and Emotional Distress in Chinese Social Media: A Text Mining and Machine Learning Study", "text": "BACKGROUND\nEarly identification and intervention are imperative for suicide prevention. However, at-risk people often neither seek help nor take professional assessment. A tool to automatically assess their risk levels in natural settings can increase the opportunity for early intervention.\n\n\nOBJECTIVE\nThe aim of this study was to explore whether computerized language analysis methods can be utilized to assess one's suicide risk and emotional distress in Chinese social media.\n\n\nMETHODS\nA Web-based survey of Chinese social media (ie, Weibo) users was conducted to measure their suicide risk factors including suicide probability, Weibo suicide communication (WSC), depression, anxiety, and stress levels. Participants' Weibo posts published in the public domain were also downloaded with their consent. The Weibo posts were parsed and fitted into Simplified Chinese-Linguistic Inquiry and Word Count (SC-LIWC) categories. The associations between SC-LIWC features and the 5 suicide risk factors were examined by logistic regression. Furthermore, the support vector machine (SVM) model was applied based on the language features to automatically classify whether a Weibo user exhibited any of the 5 risk factors.\n\n\nRESULTS\nA total of 974 Weibo users participated in the survey. Those with high suicide probability were marked by a higher usage of pronoun (odds ratio, OR=1.18, P=.001), prepend words (OR=1.49, P=.02), multifunction words (OR=1.12, P=.04), a lower usage of verb (OR=0.78, P<.001), and a greater total word count (OR=1.007, P=.008). Second-person plural was positively associated with severe depression (OR=8.36, P=.01) and stress (OR=11, P=.005), whereas work-related words were negatively associated with WSC (OR=0.71, P=.008), severe depression (OR=0.56, P=.005), and anxiety (OR=0.77, P=.02). Inconsistently, third-person plural was found to be negatively associated with WSC (OR=0.02, P=.047) but positively with severe stress (OR=41.3, P=.04). Achievement-related words were positively associated with depression (OR=1.68, P=.003), whereas health- (OR=2.36, P=.004) and death-related (OR=2.60, P=.01) words positively associated with stress. The machine classifiers did not achieve satisfying performance in the full sample set but could classify high suicide probability (area under the curve, AUC=0.61, P=.04) and severe anxiety (AUC=0.75, P<.001) among those who have exhibited WSC.\n\n\nCONCLUSIONS\nSC-LIWC is useful to examine language markers of suicide risk and emotional distress in Chinese social media and can identify characteristics different from previous findings in the English literature. Some findings are leading to new hypotheses for future verification. Machine classifiers based on SC-LIWC features are promising but still require further optimization for application in real life."}
{"_id": "ea41089bdae06d05981e72c4871111ed7f0de88e", "title": "Architecture decisions: demystifying architecture", "text": "We believe that a key to demystifying architecture products lies in the architecture decisions concept. We can make the architecture more transparent and clarify its rationale for all stakeholders by explicitly documenting major architecture decisions."}
{"_id": "91a073617e082582cac90aea197535e9ab206349", "title": "k-Shape: Efficient and Accurate Clustering of Time Series", "text": "The proliferation and ubiquity of temporal data across many disciplines has generated substantial interest in the analysis and mining of time series. Clustering is one of the most popular data mining methods, not only due to its exploratory power, but also as a preprocessing step or subroutine for other techniques. In this paper, we describe k-Shape, a novel algorithm for time-series clustering. k-Shape relies on a scalable iterative refinement procedure, which creates homogeneous and well-separated clusters. As its distance measure, k-Shape uses a normalized version of the cross-correlation measure in order to consider the shapes of time series while comparing them. Based on the properties of that distance measure, we develop a method to compute cluster centroids, which are used in every iteration to update the assignment of time series to clusters. An extensive experimental evaluation against partitional, hierarchical, and spectral clustering methods, with the most competitive distance measures, showed the robustness of k-Shape. Overall, k-Shape emerges as a domain-independent, highly accurate, and efficient clustering approach for time series with broad applications."}
{"_id": "bef9d9edd340eb09e2cda37cb7f4d4886a36fe66", "title": "A survey on position-based routing protocols for Flying Ad hoc Networks (FANETs)", "text": ""}
{"_id": "75e614689c0fb9ea8ffa469a273c81d7c590c1db", "title": "Combining Numerous Uncorrelated MEMS Gyroscopes for Accuracy Improvement Based on an Optimal Kalman Filter", "text": "In this paper, an approach to improve the accuracy of microelectromechanical systems (MEMS) gyroscopes by combining numerous uncorrelated gyroscopes is presented. A Kalman filter (KF) is used to fuse the output signals of several uncorrelated sensors. The relationship between the KF bandwidth and the angular rate input is quantitatively analyzed. A linear model is developed to choose suitable system parameters for a dynamic application of the concept. Simulation and experimental tests of a six-gyroscope array proved that the presented approach was effective to improve the MEMS gyroscope accuracy. The experimental results indicate that six identical gyroscopes with a noise density of 0.11\u00b0/s/\u221aHz and a bias instability of 62\u00b0/h can be combined to form a virtual gyroscope with a noise density of 0.03\u00b0/s/\u221aHz and a bias instability of 16.8\u00b0/h . The accuracy improvement is better than that of a simple averaging process of the individual sensors."}
{"_id": "9ff962eef04a9ff83bf1912dde811e6e661b43d8", "title": "Autonomous collision avoidance for unmanned surface ships using onboard monocular vision", "text": "This study presents the development of vision-based techniques for autonomous collision avoidance by an unmanned surface ship using an onboard monocular camera. In order to determine the initiation of an evasive maneuver, the range and bearing measurements of each target traffic ship with respect to the observer (e.g., own ship) need to be provided for trajectory estimation. A tracking estimator is used to estimate the target ship trajectory in the framework of bearings-only tracking based on the extended Kalman filter (EKF) algorithm with a Continuous White Noise Acceleration (CWNA) model. To enhance the observability of the tracking filter, the vertical pixel distance from the water horizon to each target ship is used as a range measurement. When the estimated separation distance between the target and own ships is less than a predefined minimum separation, the own ship alters its heading angle to avoid an imminent collision following the standard marine traffic rules. Field experiment results are presented and discussed to demonstrate the feasibility and validity of the proposed approach."}
{"_id": "2bd8f5a11c4e6f3e7b596fe1d79c1ec510531191", "title": "Enhancing the Effectiveness of Work Groups and Teams.", "text": "Teams of people working together for a common purpose have been a centerpiece of human social organization ever since our ancient ancestors first banded together to hunt game, raise families, and defend their communities. Human history is largely a story of people working together in groups to explore, achieve, and conquer. Yet, the modern concept of work in large organizations that developed in the late 19th and early 20th centuries is largely a tale of work as a collection of individual jobs. A variety of global forces unfolding over the last two decades, however, has pushed organizations worldwide to restructure work around teams, to enable more rapid, flexible, and adaptive responses to the unexpected. This shift in the structure of work has made team effectiveness a salient organizational concern. Teams touch our lives everyday and their effectiveness is important to well-being across a wide range of societal functions. There is over 50 years of psychological research-literally thousands of studies-focused on understanding and influencing the processes that underlie team effectiveness. Our goal in this monograph is to sift through this voluminous literature to identify what we know, what we think we know, and what we need to know to improve the effectiveness of work groups and teams. We begin by defining team effectiveness and establishing the conceptual underpinnings of our approach to understanding it. We then turn to our review, which concentrates primarily on topics that have well-developed theoretical and empirical foundations, to ensure that our conclusions and recommendations are on firm footing. Our review begins by focusing on cognitive, motivational/affective, and behavioral team processes-processes that enable team members to combine their resources to resolve task demands and, in so doing, be effective. We then turn our attention to identifying interventions, or \"levers,\" that can shape or align team processes and thereby provide tools and applications that can improve team effectiveness. Topic-specific conclusions and recommendations are given throughout the review. There is a solid foundation for concluding that there is an emerging science of team effectiveness and that findings from this research foundation provide several means to improve team effectiveness. In the concluding section, we summarize our primary findings to highlight specific research, application, and policy recommendations for enhancing the effectiveness of work groups and teams."}
{"_id": "45af76c8542b68d15d17d088d2cbbd8df5a23f19", "title": "Data-Driven Modeling for Chinese Ancient Architecture", "text": "The existing 3D modeling studies of Chinese ancient architecture are mostly procedure driven and rely on fixed construction rules. Therefore, these methods have limited applications in virtual reality (VR) engineering. We propose a data-driven approach to synthesize 3D models from existing 3D data that provides more flexibility and fills the gap between academic studies and VR engineering. First, 3D architecture models were preprocessed and decomposed into components, and the components were clustered by their geometric features. Second, a Bayesian network was generated by learning from the dataset to represent the internal relationships between the architectural components. Third, the inference results of the trained network were utilized to generate a reasonable relationship matching to support the synthesis of the structural components. The proposed method can be used in 3D content creation for VR development and directly supports VR applications in practice."}
{"_id": "96230bbd9804f4e7ac0017f9065ebe488f30b642", "title": "The Anisotropic Noise in Stochastic Gradient Descent : Its Behavior of Escaping from Minima and Regularization Effects", "text": "Understanding the behavior of stochastic gradient descent (SGD) in the context of deep neural networks has raised lots of concerns recently. Along this line, we theoretically study a general form of gradient based optimization dynamics with unbiased noise, which unifies SGD and standard Langevin dynamics. Through investigating this general optimization dynamics, we analyze the behavior of SGD on escaping from minima and its regularization effects. A novel indicator is derived to characterize the efficiency of escaping from minima through measuring the alignment of noise covariance and the curvature of loss function. Based on this indicator, two conditions are established to show which type of noise structure is superior to isotropic noise in term of escaping efficiency. We further show that the anisotropic noise in SGD satisfies the two conditions, and thus helps to escape from sharp and poor minima effectively, towards more stable and flat minima that typically generalize well. We verify our understanding through comparing this anisotropic diffusion with full gradient descent plus isotropic diffusion (i.e. Langevin dynamics) and other types of position-dependent noise."}
{"_id": "ed7e7d643b0d97be0b5a139e808c0e17139ecd16", "title": "Ka-Band Linear to Circular Polarization Converter Based on Multilayer Slab With Broadband Performance", "text": "In this paper, a Ka-band polarization converter is presented, which is based on multilayer slab. In order to improve impedance matching, metallic circular traces are printed periodically on each dielectric multilayer slab. Simulated results of the polarizer show that it can transform linearly polarized (LP) to circularly polarized (CP) fields over a frequency band from 23 to 35GHz (42%) with an insertion loss less than 0.5 dB. The transmitted CP wave by the polarizer is approximately robust under oblique illuminations. The polarizer is fabricated and measured by a wideband horn antenna satisfying the simulated results. Next, in order to design a high-gain CP structure around 30 GHz, an 8-element LP array antenna with Chebyshev tapered distribution is designed and integrated with the polarizer. Obviously, the antenna limits the overall bandwidth (nearly 28 to 31.5 GHz) due to the narrowband nature of the LP antenna array. When the polarizer is illuminated by an incident LP wave, the two linear components of the transmitted wave with approximately equal amplitudes and 90\u00b0 phase difference on the frequency band of interest are produced. Experimental results of the proposed structure show a pure CP with a gain of 13 dBi at 30 GHz, which can be suitable for millimeter wave communication."}
{"_id": "c79d8c3768b8dbde4e9fbbd8924805d4a02a1158", "title": "Sequence-to-Dependency Neural Machine Translation", "text": "Nowadays a typical Neural Machine Translation (NMT) model generates translations from left to right as a linear sequence, during which latent syntactic structures of the target sentences are not explicitly concerned. Inspired by the success of using syntactic knowledge of target language for improving statistical machine translation, in this paper we propose a novel Sequence-to-Dependency Neural Machine Translation (SD-NMT) method, in which the target word sequence and its corresponding dependency structure are jointly constructed and modeled, and this structure is used as context to facilitate word generations. Experimental results show that the proposed method significantly outperforms state-of-the-art baselines on Chinese-English and JapaneseEnglish translation tasks."}
{"_id": "12e119a036533b0c5337a8abf99ca934082d3875", "title": "Overdose Fatalities Patterns of Abuse Among Unintentional Pharmaceutical", "text": "Supplementary material http://jama.ama-assn.org/cgi/content/full/300/22/2613/DC1 Report Video JAMA Correction Contact me if this article is corrected. Citations Contact me when this article is cited. This article has been cited 1 time. Topic collections Contact me when new articles are published in these topic areas. Drug Therapy, Other Pain; Public Health; Substance Abuse/ Alcoholism; Drug Therapy; Adverse Effects;"}
{"_id": "0d550ebf6510bc6bedde6951ef77c310b905185d", "title": "A Fiber Tracking Method for Building Patient Specific Dynamic Musculoskeletal Models from Diffusion Tensor Data", "text": "A new musculoskeltal simulation primitive, the strand, can provide an efficient way of simulating complicated muscle fiber architectures [1]. In this paper we present a new fiber tracking algorithm based on an energy minimizing active curve that is well suited for building these strand type models from diffusion tensor imaging data. We present fiber tracking results for the Brachioradialis muscle in the left forearm of a male subject. The algorithm produces a space filling arrangement of fibers that are uniformly distributed throughout the muscle and aligned with the underlying muscle fiber direction."}
{"_id": "7a40321919c6f6489ea62817aab83b12f32948b5", "title": "Brief screening instrument for post-traumatic stress disorder.", "text": "BACKGROUND\nBrief screening instruments appear to be a viable way of detecting post-traumatic stress disorder (PTSD) but none has yet been adequately validated.\n\n\nAIMS\nTo test and cross-validate a brief instrument that is simple to administer and score.\n\n\nMETHOD\nForty-one survivors of a rail crash were administered a questionnaire, followed by a structured clinical interview 1 week later.\n\n\nRESULTS\nExcellent prediction of a PTSD diagnosis was provided by respondents endorsing at least six re-experiencing or arousal symptoms, in any combination. The findings were replicated on data from a previous study of 157 crime victims.\n\n\nCONCLUSIONS\nPerformance of the new measure was equivalent to agreement achieved between two full clinical interviews."}
{"_id": "d908f630582f1a11b6d481e635fb1d06e7671f32", "title": "Image biomarker standardisation initiative version 1 . 4", "text": ""}
{"_id": "ecd0ae4cc17e29dc20265c049134b242c06a64b4", "title": "Treatment with the Self-Discovery Camp (SDiC) improves Internet gaming disorder.", "text": "INTRODUCTION\nInternet gaming disorder (IGD) is a novel behavioral addiction that influences the physical, mental, and social aspects of health due to excessive Internet gaming. One type of intensive treatment for IGD is the therapeutic residential camp (TRC), which comprises many types of therapies, including psychotherapy, psychoeducational therapy, and cognitive behavioral therapy. The TRC was developed in South Korea and has been administered to many patients with IGD; however, its efficacy in other countries remains unknown. We investigated the efficacy of the Self-Discovery Camp (SDiC), a Japanese version of a TRC, and the correlations between individual characteristics and outcome measures.\n\n\nMETHODS\nWe recruited 10 patients with IGD (all male, mean age=16.2years, diagnosed using the DSM-5) to spend 8 nights and 9days at the SDiC. We measured gaming time as well as self-efficacy (using the Stages of Change Readiness and Treatment Eagerness Scale, a measure of therapeutic motivation and problem recognition).\n\n\nRESULTS\nTotal gaming time was significantly lower 3months after the SDiC. Problem recognition and self-efficacy towards positive change also improved. Furthermore, there was a correlation between age of onset and problem recognition score.\n\n\nCONCLUSIONS\nOur results demonstrate the effectiveness of the SDiC for IGD, especially regarding gaming time and self-efficacy. Additionally, age of onset may be a useful predictor of IGD prognosis. Further studies with larger sample sizes and control groups, and that target long-term outcomes, are needed to extend our understanding of SDiC efficacy."}
{"_id": "925b3393b18489face6bb6a046db69da86a301bd", "title": "Incorporating Tweet Relationships into Topic Derivation", "text": "With its rapid users growth, Twitter has become an essential source of information about what events are happening in the world. It is critical to have the ability to derive the topics from Twitter messages (tweets), that is, to determine and characterize the main topics of the Twitter messages (tweets). However, tweets are very short in nature and therefore the frequency of term co-occurrences is very low. The sparsity in the relationship between tweets and terms leads to a poor characterization of the topics when only the content of the tweets is used. In this paper, we exploit the relationships between tweets and propose intLDA, a variant of Latent Dirichlet Allocation (LDA) that goes beyond content and directly incorporates the relationship between tweets. We have conducted experiments on a Twitter dataset and evaluated the performance in terms of both topic coherence and tweettopic accuracy. Our experiments show that intLDA outperforms methods that do not use relationship information. Keywords-Topic Derivation; Twitter; Tweets Relationship;"}
{"_id": "1874ed438725a069d2fd21f47e33f812513836c6", "title": "SHORTEST PATHS FOR THE REEDS-SHEPP CAR : A WORKED OUT EXAMPLE OF THE USE OF GEOMETRIC TECHNIQUES IN NONLINEAR OPTIMAL CONTROL", "text": "We illustrate the use of the techniques of modern geometric optimal control theory by studying the shortest paths for a model of a car that can move forwards and backwards. This problem was discussed in recent work by Reeds and Shepp who showed, by special methods, (a) that shortest path motion could always be achieved by means of trajectories of a special kind, namely, concatenations of at most five pieces, each of which is either a straight line or a circle, and (b) that these concatenations can be classified into 48 three-parameter families. We show how these results fit in a much more general framework, and can be discovered and proved by applying in a systematic way the techniques of Optimal Control Theory. It turns out that the \u201cclassical\u201d optimal control tools developed in the 1960\u2019s, such as the Pontryagin Maximum Principle and theorems on the existence of optimal trajectories, are helpful to go part of the way and get some information on the shortest paths, but do not suffice to get the full result. On the other hand, when these classical techniques are combined with the use of a more recently developed body of theory, namely, geometric methods based on the Lie algebraic analysis of trajectories, then one can recover the full power of the Reeds-Shepp results, and in fact slightly improve upon them by lowering their 48 to a 46."}
{"_id": "1eb4d3c0b2c40771a3de1329fed7f0dbd8427c23", "title": "Autonomous robotic exploration using occupancy grid maps and graph SLAM based on Shannon and R\u00e9nyi Entropy", "text": "In this paper we examine the problem of autonomously exploring and mapping an environment using a mobile robot. The robot uses a graph-based SLAM system to perform mapping and represents the map as an occupancy grid. In this setting, the robot must trade-off between exploring new area to complete the task and exploiting the existing information to maintain good localization. Selecting actions that decrease the map uncertainty while not significantly increasing the robot's localization uncertainty is challenging. We present a novel information-theoretic utility function that uses both Shannon's and Re\u0301nyi's definitions of entropy to jointly consider the uncertainty of the robot and the map. This allows us to fuse both uncertainties without the use of manual tuning. We present simulations and experiments comparing the proposed utility function to state-of-the-art utility functions, which only use Shannon's entropy. We show that by using the proposed utility function, the robot and map uncertainties are smaller than using other existing methods."}
{"_id": "ee458bee26e6371f9347b1972bbc9dc26b2f3713", "title": "Stacking-based deep neural network: Deep analytic network on convolutional spectral histogram features", "text": "Stacking-based deep neural network (S-DNN), in general, denotes a deep neural network (DNN) resemblance in terms of its very deep, feedforward network architecture. The typical S-DNN aggregates a variable number of individually learnable modules in series to assemble a DNN-alike alternative to the targeted object recognition tasks. This work likewise devises an S-DNN instantiation, dubbed deep analytic network (DAN), on top of the spectral histogram (SH) features. The DAN learning principle relies on ridge regression, and some key DNN constituents, specifically, rectified linear unit, fine-tuning, and normalization. The DAN aptitude is scrutinized on three repositories of varying domains, including FERET (faces), MNIST (handwritten digits), and CIFAR10 (natural objects). The empirical results unveil that DAN escalates the SH baseline performance over a sufficiently deep layer."}
{"_id": "9bbca97d7ddb129c3d4e8b8e6b0bf63164686a87", "title": "A Simple and Reliable PDMS and SU-8 Irreversible Bonding Method and Its Application on a Microfluidic-MEA Device for Neuroscience Research", "text": "Polydimethylsiloxane (PDMS) and SU-8 are currently two very commonly used polymeric materials in the microfluidics field for biological applications. However, there is a pressing need to find a simple, reliable, irreversible bonding method between these two materials for their combined use in innovative integrated microsystems. In this paper, we attempt to investigate the aminosilane-mediated irreversible bonding method for PDMS and SU-8 with X-Ray Photoelectron Spectroscopy (XPS) surface analysis and bonding strength tests. Additionally, the selected bonding method was applied in fabricating a microelectrode array (MEA) device, including microfluidic features, which allows electrophysiological observations on compartmentalized neuronal cultures. As there is a growing trend towards microfluidic devices for neuroscience research, this type of integrated microdevice, which can observe functional alterations on compartmentalized neuronal culture, can potentially be used for neurodegenerative disease research and pharmaceutical development."}
{"_id": "472c9d490912dacc8c8a22ce42826119c8ace5a9", "title": "Measurements and Simulations of the Convective Heat Transfer Coefficients on the End Windings of an Electrical Machine", "text": "The prediction of the thermal behavior of an electrical machine in the design process basically depends on the used boundary conditions such as the convective heat transfer coefficient. Due to the complicated shape of the end windings, the heat transfer coefficient has to be identified empirically by using thermopiles as heat flux sensors placed at certain positions on the end windings, because an analytical derivation is not feasible. This paper presents the results of the measured values of the heat transfer on the end windings of an electrical machine. The gained coefficients are compared to the results obtained by numerical simulations using commercial software for computational fluid dynamics. The simulation results are discussed according to the validity of the gained heat transfer coefficient. The configuration of the simulation has been tested on a cylindrical shape in a cross-flow."}
{"_id": "f9938e3c487364da21bedb4d64d4a18308026348", "title": "Modeling Credit Risk for SMEs : Evidence from the US Market", "text": "Considering the fundamental role played by small and medium sized enterprises (SMEs) in the economy of many countries and the considerable attention placed on SMEs in the new Basel Capital Accord, we develop a distress prediction model specifically for the SME sector and to analyze its effectiveness compared to a generic corporate model. The behaviour of financial measures for SMEs is analyzed and the most significant variables in predicting the entities\u2019 credit worthiness are selected in order to construct a default prediction model. Using a logit regression technique on panel data of over 2,000 US firms (with sales less than $65 million) over the period 1994-2002, we develop a one-year default prediction model. This model has an out-of-sample prediction power which is almost 30 percent higher than a generic corporate model. An associated objective is to observe our model\u2019s ability to lower bank capital requirements considering the new Basel Capital Accord\u2019s rules for SMEs."}
{"_id": "8ecd33a666d0208961d49841c3cb67dc7fcf6cd2", "title": "Query optimization using column statistics in hive", "text": "Hive is a data warehousing solution on top of the Hadoop MapReduce framework that has been designed to handle large amounts of data and store them in tables like a relational database management system or a conventional data warehouse while using the parallelization and batch processing functionalities of the Hadoop MapReduce framework to speed up the execution of queries. Data inserted into Hive is stored in the Hadoop FileSystem (HDFS), which is part of the Hadoop MapReduce framework. To make the data accessible to the user, Hive uses a query language similar to SQL, which is called HiveQL. When a query is issued in HiveQL, it is translated by a parser into a query execution plan that is optimized and then turned into a series of map and reduce iterations. These iterations are then executed on the data stored in the HDFS, writing the output to a file.\n The goal of this work is to to develop an approach for improving the performance of the HiveQL queries executed in the Hive framework. For that purpose, we introduce an extension to the Hive MetaStore which stores metadata that has been extracted on the column level of the user database. These column level statistics are then used for example in combination with join ordering algorithms which are adapted to the specific needs of the Hadoop MapReduce environment to improve the overall performance of the HiveQL query execution."}
{"_id": "9ccc85ffb24602cd198c0ebc7b673fa4e0409249", "title": "Perspectives of granular computing", "text": "Granular computing emerges as a new multi-disciplinary study and has received much attention in recent years. A conceptual framework is presented by extracting shared commonalities from many fields. The framework stresses multiple views and multiple levels of understanding in each view. It is argued that granular computing is more about a philosophical way of thinking and a practical methodology of problem solving. By effectively using levels of granularity, granular computing provides a systematic, natural way to analyze, understand, represent, and solve real world problems. With granular computing, one aims at structured thinking at the philosophical level, and structured problem solving at the practical level."}
{"_id": "1535d5db7078c85f0e2d565860a0fb4053a6090c", "title": "Learning Attribute-to-Feature Mappings for Cold-Start Recommendations", "text": "Cold-start scenarios in recommender systems are situations in which no prior events, like ratings or clicks, are known for certain users or items. To compute predictions in such cases, additional information about users (user attributes, e.g. gender, age, geographical location, occupation) and items (item attributes, e.g. genres, product categories, keywords) must be used. We describe a method that maps such entity (e.g. user or item) attributes to the latent features of a matrix (or higher-dimensional) factorization model. With such mappings, the factors of a MF model trained by standard techniques can be applied to the new-user and the new-item problem, while retaining its advantages, in particular speed and predictive accuracy. We use the mapping concept to construct an attribute-aware matrix factorization model for item recommendation from implicit, positive-only feedback. Experiments on the new-item problem show that this approach provides good predictive accuracy, while the prediction time only grows by a constant factor."}
{"_id": "e1b2a1effd0689c27ec65d120850639910d05ba8", "title": "Language and the Newborn Brain: Does Prenatal Language Experience Shape the Neonate Neural Response to Speech?", "text": "Previous research has shown that by the time of birth, the neonate brain responds specially to the native language when compared to acoustically similar non-language stimuli. In the current study, we use near-infrared spectroscopy to ask how prenatal language experience might shape the brain response to language in newborn infants. To do so, we examine the neural response of neonates when listening to familiar versus unfamiliar language, as well as to non language stimuli. Twenty monolingual English-exposed neonates aged 0-3\u2009days were tested. Each infant heard low-pass filtered sentences of forward English (familiar language), forward Tagalog (unfamiliar language), and backward English and Tagalog (non-language). During exposure, neural activation was measured across 12 channels on each hemisphere. Our results indicate a bilateral effect of language familiarity on neonates' brain response to language. Differential brain activation was seen when neonates listened to forward Tagalog (unfamiliar language) as compared to other types of language stimuli. We interpret these results as evidence that the prenatal experience with the native language gained in utero influences how the newborn brain responds to language across brain regions sensitive to speech processing."}
{"_id": "fb4e56efa789b8f37c0bcfccff05a61ab4265d3d", "title": "Progress and prospects in plant genome editing", "text": "The emergence of sequence-specific nucleases that enable genome editing is revolutionizing basic and applied biology. Since the introduction of CRISPR\u2013Cas9, genome editing has become widely used in transformable plants for characterizing gene function and improving traits, mainly by inducing mutations through non-homologous end joining of double-stranded breaks generated by CRISPR\u2013Cas9. However, it would be highly desirable to perform precision gene editing in plants, especially in transformation-recalcitrant species. Recently developed Cas9 variants, novel RNA-guided nucleases and base-editing systems, and DNA-free CRISPR\u2013Cas9 delivery methods now provide great opportunities for plant genome engineering. In this Review Article, we describe the current status of plant genome editing, focusing on newly developed genome editing tools and methods and their potential applications in plants. We also discuss the specific challenges facing plant genome editing, and future prospects."}
{"_id": "714fda699cda81daee22841452aa435148acf0df", "title": "Material-based Design Computation An Inquiry into Digital Simulation of Physical Material Properties as Design Generators", "text": "The paper demonstrates the association between geometry and material behavior, specifically the elastic properties of resin impregnated latex membranes, by means of homogenizing protocols which translate physical properties into geometrical functions. Resin-impregnation patterns are applied to 2-D pre-stretched form-active tension systems to induce 3-D curvature upon release.This method enables form-finding based on material properties, organization and behavior. Some theoretical foundations for material-computation are outlined. A digital tool developed in the Processing (JAVA coded) environment demonstrates the simulation of material behavior and its prediction under specific environmental conditions. Finally, conclusions are drawn from the physical and digital explorations which redefine generative material-based design computation, supporting a synergetic approach to design integrating form, structure, material and environment."}
{"_id": "431b472db742c2fb48f1d8d0e9376176d617959a", "title": "Cooperative Multi-Sensor Video Surveillance", "text": "Takeo Kanade, Robert T. Collins, and Alan J. Lipton Carnegie Mellon University, Pittsburgh, PA e-mail: fkanade,rcollins,ajlg@cs.cmu.edu homepage: http://www.cs.cmu.edu/ vsam P. Anandan, Peter Burt, and Lambert Wixson David Sarno Research Center, Princeton, NJ e-mail: fpanandan,pburt,lwixsong@sarno .com Abstract Carnegie Mellon University (CMU) and the David Sarno Research Center (Sarno ) have begun a joint, integrated feasibility demonstration in the area of Video Surveillance and Monitoring (VSAM). The objective is to develop a cooperative, multi-sensor video surveillance system that provides continuous coverage over large battle eld areas. Image Understanding (IU) technologies will be developed to: 1) coordinate multiple sensors to seamlessly track moving targets over an extended area, 2) actively control sensor and platform parameters to track multiple moving targets, 3) integrate multisensor output with collateral data to maintain an evolving, scene-level representation of all targets and platforms, and 4) monitor the scene for unusual \\trigger\" events and activities. These technologies will be integrated into an experimental testbed to support evaluation, data collection, and demonstration of other VSAM technologies developed within the DARPA IU community."}
{"_id": "637be4a09263c117e36342cfcb454bff8c9c7145", "title": "Test Case Selection and Prioritization Using Multiple Criteria", "text": "Regression testing activities such as test case selection and test case prioritization are ordinarily based on the criteria which focused around code coverage, code modifications and test execution costs. The approach mainly based on the multiple criteria of code coverage which performs efficient selection of test case. The method mainly aims to maximize the coverage size by executing the test cases effectively. The selected test cases are then prioritized based on the priority which depends on the code coverage to achieve the desired results. Keywords\u2014Test case selection, Test case prioritization, Cod coverage, Jaccard distance, Greedy algorithm."}
{"_id": "58e611c163f9cc2968a0eef36ce075d605eb5794", "title": "Studying weak central coherence at low levels: children with autism do not succumb to visual illusions. A research note.", "text": "While anecdotal reports of abnormal perceptual experiences in autism abound, there have been to date no experimental studies showing fundamental perceptual peculiarities. The present paper reports results from a first study of low-level visual integration in autism. Twenty-five subjects with autism, 21 normal 7- and 8-year-olds, and 26 children with learning difficulties were asked to make simple judgements about six well-known visual illusions. Two conditions were used, in an attempt to explore group differences; standard two-dimensional black and white line drawings, and the same figures augmented with raised coloured lines. The subjects with autism were less likely to succumb to the two-dimensional illusions than were the other groups, and were less aided by the three-dimensional 'disembedded' condition. These striking results are discussed with reference to the 'central coherence' account of autism."}
{"_id": "52b9956530b0fac8a5d663cd2fd822fd080569db", "title": "On Throughput Maximization of Time Division Multiple Access With Energy Harvesting Users", "text": "In this paper, we consider a multiple-access channel, where multiple users equipped with energy harvesting batteries communicate to an access point. To avoid consuming extra energy on competition for the channel, the users are supposed to share the channel via time division multiple access (TDMA). In many existing works, it is commonly assumed that the users' energy harvesting processes and storage status are known to all the users before transmissions. In practice, such knowledge may not be readily available. To avoid excessive overhead for real-time information exchange, we consider the scenario where the users schedule their individual transmissions according to the users' statistical energy harvesting profiles. We first study the optimal transmission scheme in the case where each node has an infinite-capacity battery. By optimization theory, we show that, to maximize the average system throughput, all the users should transmit at an identical optimal power, which solely depends on the energy harvesting rate per time slot. We then study the equal-power TDMA scheme in the case where each node is equipped with a battery of finite capacity. The system is formulated as a polling system consisting of multiple energy queues and one server. By the Markov chain modeling method, we derive the performance of equal-power TDMA in this case, in terms of the energy loss ratio and average system throughput. In addition, we develop an algorithm to efficiently compute the optimal transmission power in the finite-capacity battery case. We also consider an equal-time TDMA scheme, which assigns equal-length subslots to each user, and analyze its system performance. It is found that equal-power TDMA always outperforms equal-time TDMA in the infinite-capacity battery case, whereas equal-time TDMA exhibits compatible or even slightly better performance in some scenarios when the batteries have finite capacities."}
{"_id": "2138bc4fab4c6d9fbf78b2970be65f6770676da9", "title": "Predictive habitat distribution models in ecology", "text": "With the rise of new powerful statistical techniques and GIS tools, the development of predictive habitat distribution models has rapidly increased in ecology. Such models are static and probabilistic in nature, since they statistically relate the geographical distribution of species or communities to their present environment. A wide array of models has been developed to cover aspects as diverse as biogeography, conservation biology, climate change research, and habitat or species management. In this paper, we present a review of predictive habitat distribution modeling. The variety of statistical techniques used is growing. Ordinary multiple regression and its generalized form (GLM) are very popular and are often used for modeling species distributions. Other methods include neural networks, ordination and classification methods, Bayesian models, locally weighted approaches (e.g. GAM), environmental envelopes or even combinations of these models. The selection of an appropriate method should not depend solely on statistical considerations. Some models are better suited to reflect theoretical findings on the shape and nature of the species\u2019 response (or realized niche). Conceptual considerations include e.g. the trade-off between optimizing accuracy versus optimizing generality. In the field of static distribution modeling, the latter is mostly related to selecting appropriate predictor variables and to designing an appropriate procedure for model selection. New methods, including threshold-independent measures (e.g. receiver operating characteristic (ROC)-plots) and resampling techniques (e.g. bootstrap, cross-validation) have been introduced in ecology for testing the accuracy of predictive models. The choice of an evaluation measure should be driven primarily by the goals of the study. This may possibly lead to the attribution of different weights to the various types of prediction errors (e.g. omission, commission or confusion). Testing the model in a wider range of situations (in space and time) will permit one to define the range of applications for which the model predictions are suitable. In turn, the qualification of the model depends primarily on the goals of the study that define the qualification criteria and on the usability of the model, rather than on statistics alone. \u00a9 2000 Elsevier Science B.V. All rights reserved."}
{"_id": "5eb3577baa3a28b038b6d1e5c1511d72c09d07a3", "title": "Spectral conversion based on maximum likelihood estimation considering global variance of converted parameter", "text": "The paper describes a novel spectral conversion method for voice transformation. We perform spectral conversion between speakers using a Gaussian mixture model (GMM) on the joint probability density of source and target features. A smooth spectral sequence can be estimated by applying maximum likelihood (ML) estimation to the GMM-based mapping using dynamic features. However, there is still degradation of the converted speech quality due to an over-smoothing of the converted spectra, which is inevitable in conventional ML-based parameter estimation. In order to alleviate the over-smoothing, we propose an ML-based conversion taking account of the global variance of the converted parameter in each utterance. Experimental results show that the performance of the voice conversion can be improved by using the global variance information. Moreover, it is demonstrated that the proposed algorithm is more effective than spectral enhancement by postfiltering."}
{"_id": "6ed848acd9f796b1949ce4d1fa780b25ca799dc1", "title": "Transfer Learning Improves Supervised Image Segmentation Across Imaging Protocols", "text": "The variation between images obtained with different scanners or different imaging protocols presents a major challenge in automatic segmentation of biomedical images. This variation especially hampers the application of otherwise successful supervised-learning techniques which, in order to perform well, often require a large amount of labeled training data that is exactly representative of the target data. We therefore propose to use transfer learning for image segmentation. Transfer-learning techniques can cope with differences in distributions between training and target data, and therefore may improve performance over supervised learning for segmentation across scanners and scan protocols. We present four transfer classifiers that can train a classification scheme with only a small amount of representative training data, in addition to a larger amount of other training data with slightly different characteristics. The performance of the four transfer classifiers was compared to that of standard supervised classification on two magnetic resonance imaging brain-segmentation tasks with multi-site data: white matter, gray matter, and cerebrospinal fluid segmentation; and white-matter-/MS-lesion segmentation. The experiments showed that when there is only a small amount of representative training data available, transfer learning can greatly outperform common supervised-learning approaches, minimizing classification errors by up to 60%."}
{"_id": "d8a20e330efb3c4285a3698c32551552c46ddfcf", "title": "Learning to lip read words by watching videos", "text": "Our aim is to recognise the words being spoken by a talking face, given only the video but not the audio. Existing works in this area have focussed on trying to recognise a small number of utterances in controlled environments (e.g. digits and alphabets), partially due to the shortage of suitable datasets. We make three novel contributions: first, we develop a pipeline for fully automated data collection from TV broadcasts. With this we have generated a dataset with over a million word instances, spoken by over a thousand different people; second, we develop a two-stream convolutional neural network that learns a joint embedding between the sound and the mouth motions from unlabelled data. We apply this network to the tasks of audio-to-video synchronisation and active speaker detection; third, we train convolutional and recurrent networks that are able to effectively learn and recognize hundreds of words from this large-scale dataset. In lip reading and in speaker detection, we demonstrate results that exceed the current state-of-the-art on public benchmark datasets."}
{"_id": "5ef6f2a4274c4f61ebd46901a50ee8457dd68cf4", "title": "Older adults' views of \"successful aging\"--how do they compare with researchers' definitions?", "text": "OBJECTIVES\nTo determine whether older adults have thought about aging and aging successfully and to compare their perceptions of successful aging with attributes of successful aging identified in the published literature.\n\n\nDESIGN\nA cross-sectional, mailed survey.\n\n\nSETTING\nKing County, Washington.\n\n\nPARTICIPANTS\nNondemented participants from two cohorts. The first cohort, referred to as Kame, which means turtle, a symbol of longevity for Japanese, enrolled 1,985 Japanese Americans aged 65 and older and was established in 1992-94. The second cohort, Adult Changes in Thought, enrolled 2,581 white men and women aged 65 and older from a health maintenance organization and was established in 1994-96.\n\n\nMEASUREMENTS\nRespondents were asked whether they had ever thought about aging and aging successfully and whether these thoughts had changed over the previous 20 years and about how important specific attributes, originating from the published literature, were in characterizing successful aging.\n\n\nRESULTS\nOverall, 90% had previously thought about aging and aging successfully, and approximately 60% said their thoughts had changed over the previous 20 years. The Japanese-American group rated 13 attributes as important to successful aging; the white group rated the same 13 as important and added one additional attribute, learning new things.\n\n\nCONCLUSION\nOlder adults' definition of successful aging is multidimensional, encompassing physical, functional, psychological, and social health. In contrast, none of the published work describing attributes of successful aging includes all four dimensions. Future work would benefit from an expanded definition to adequately reflect the perceptions of older adults."}
{"_id": "27db63ab642d9c27601a9311d65b63e2d2d26744", "title": "Statistical Comparisons of Classifiers over Multiple Data Sets", "text": "While methods for comparing two learning algorithms on a single data set have been scrutinized for quite some time already, the issue of statistical tests for comparisons of more algorithms on multiple data sets, which is even more essential to typical machine learning studies, has been all but ignored. This article reviews the current practice and then theoretically and empirically examines several suitable tests. Based on that, we recommend a set of simple, yet safe and robust non-parametric tests for statistical comparisons of classifiers: the Wilcoxon signed ranks test for comparison of two classifiers and the Friedman test with the corresponding post-hoc tests for comparison of more classifiers over multiple data sets. Results of the latter can also be neatly presented with the newly introduced CD (critical difference) diagrams."}
{"_id": "3a5066bcd59f81228876b6bf7d5410c63a82f173", "title": "Rank aggregation methods for the Web", "text": "We consider the problem of combining ranking results from various sources. In the context of the Web, the main applications include building meta-search engines, combining ranking functions, selecting documents based on multiple criteria, and improving search precision through word associations. We develop a set of techniques for the rank aggregation problem and compare their performance to that of well-known methods. A primary goal of our work is to design rank aggregation techniques that can e ectively combat \\spam,\" a serious problem in Web searches. Experiments show that our methods are simple, e cient, and e ective."}
{"_id": "4dbd924046193a51e4a5780d0e6eb3a4705784cd", "title": "BayesOpt: A Bayesian Optimization Library for Nonlinear Optimization, Experimental Design and Bandits", "text": "BayesOpt is a library with state-of-the-art Bayesian optimization methods to solve nonlinear optimization, stochastic bandits or sequential experimental design problems. Bayesian optimization is sample efficient by building a posterior distribution to capture the evidence and prior knowledge for the target function. Built in standard C++, the library is extremely efficient while being portable and flexible. It includes a common interface for C, C++, Python, Matlab and Octave."}
{"_id": "0c2e493c3344208ef45199e49069eceff16be03a", "title": "A taxonomy of argumentation models used for knowledge representation", "text": "Understanding argumentation and its role in human reasoning has been a continuous subject of investigation for scholars from the ancient Greek philosophers to current researchers in philosophy, logic and artificial intelligence. In recent years, argumentation models have been used in different areas such as knowledge representation, explanation, proof elaboration, commonsense reasoning, logic programming, legal reasoning, decision making, and negotiation. However, these models address quite specific needs and there is need for a conceptual framework that would organize and compare existing argumentation-based models and methods. Such a framework would be very useful especially for researchers and practitioners who want to select appropriate argumentation models or techniques to be incorporated in new software systems with argumentation capabilities. In this paper, we propose such a conceptual framework, based on taxonomy of the most important argumentation models, approaches and systems found in the literature. This framework highlights the similarities and differences between these argumentation models. As an illustration of the practical use of this framework, we present a case study which shows how we used this framework to select and enrich an argumentation model in a knowledge acquisition project which aimed at representing argumentative knowledge contained in texts critiquing military courses of action."}
{"_id": "51603f87d13685f10dede6ebe2930a5e02c0bfe9", "title": "Bandwidth Modeling in Large Distributed Systems for Big Data Applications", "text": "The emergence of Big Data applications provides new challenges in data management such as processing and movement of masses of data. Volunteer computing has proven itself as a distributed paradigm that can fully support Big Data generation. This paradigm uses a large number of heterogeneous and unreliable Internet-connected hosts to provide Peta-scale computing power for scientific projects. With the increase in data size and number of devices that can potentially join a volunteer computing project, the host bandwidth can become a main hindrance to the analysis of the data generated by these projects, especially if the analysis is a concurrent approach based on either in-situ or in-transit processing. In this paper, we propose a bandwidth model for volunteer computing projects based on the real trace data taken from the Docking@Home project with more than 280,000 hosts over a 5-year period. We validate the proposed statistical model using model-based and simulation-based techniques. Our modeling provides us with valuable insights on the concurrent integration of data generation with in-situ and in-transit analysis in the volunteer computing paradigm."}
{"_id": "e0b5f0fee8ecb6df6266a6414fa6135873449e35", "title": "Instant messaging and the future of language", "text": "The writing style commonly used in IMing, texting, and other forms of computer-mediated communication need not spell the end of normative language."}
{"_id": "bc2d3dfd2555c7590021db88d60d729dde30be82", "title": "Which robot am I thinking about? The impact of action and appearance on people's evaluations of a moral robot", "text": "In three studies we found further evidence for a previously discovered Human-Robot (HR) asymmetry in moral judgments: that people blame robots more for inaction than action in a moral dilemma but blame humans more for action than inaction in the identical dilemma (where inaction allows four persons to die and action sacrifices one to save the four). Importantly, we found that people's representation of the \u201crobot\u201d making these moral decisions appears to be one of a mechanical robot. For when we manipulated the pictorial display of a verbally described robot, people showed the HR asymmetry only when making judgments about a mechanical-looking robot, not a humanoid robot. This is the first demonstration that robot appearance affects people's moral judgments about robots."}
{"_id": "0c0e19b74bc95b6e78bcd928b6678bcc1bdb8f3c", "title": "Satellite alerts track deforestation in real time", "text": "A satellite-based alert system could prove a potent weapon in the fight against deforestation. As few as eight hours after it detects that trees are being cut down, the system will send out e-mails warning that an area is endangered. That rapid response could enable environmental managers to catch illegal loggers before they damage large swathes of forest. \" It's going to be very, very helpful, \" says Brian Zutta Salazar, a remote-sensing scientist at the Peruvian Ministry of the Environment in Lima. Satellites are already valuable tools for monitoring deforestation; in recent decades, they have delivered consistent data on forest change over large and often remote areas. One such effort, the Real Time System for Detection of Deforestation, or DETER, has helped Brazil's government to reduce its deforestation rate by almost 80% since 2004, by alerting the country's environmental police to large-scale forest clearing. But DETER and other existing alert systems can be relatively slow to yield useful information. They use data from the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra satellite, which at its top resolution produces images with pixels covering an area 250 metres on each side, roughly equivalent to 10 football pitches. This is too big to spot small changes in land cover, so it can take computer programs that process MODIS data weeks or even months to detect that a forest is being cleared. \" By the time MODIS picks up on it, it's almost too late, \" says Peter Ellis, a forest-carbon scientist at the Nature Conservancy, a conservation group in Arlington, Virginia. Seeking to provide a sharper view, geographer Matthew Hansen of the University of Maryland in College Park and his colleagues published maps showing year-to-year changes in global forest cover from 2000 to 2012 (M. C. Hansen et al. Science 342, 850\u2013853; 2013). The researchers relied on data from NASA's two active Landsat satellites, which together photograph every spot on Earth every eight days. Each pixel in a Landsat image is 30 metres on each side \u2014 roughly the size of a baseball diamond. This means that an area covered by just one MODIS pixel is captured in roughly 70 smaller Landsat pixels. Hansen and his team wrote data-processing software that can use these higher-resolution images to recognize a disturbance as small as a road snaking its way through a previously untouched forest, something that often appears before clear-cutting begins. \" \u2026"}
{"_id": "d6944cce2ba156e02fbaee670630400b6699a498", "title": "OpenSense: open community driven sensing of environment", "text": "This paper outlines a vision for community-driven sensing of our environment. At its core, community sensing is a dynamic new form of mobile geosensor network. We believe that community sensing networks, in order to be widely deployable and sustainable, need to follow utilitarian approaches towards sensing and data management. Current projects exploring community sensing have paid less attention to these underlying fundamental principles. We illustrate this vision through OpenSense -- a large project that aims to explore community sensing driven by air pollution monitoring."}
{"_id": "eed567622453f29b7b2d72955d07d8aad5e86daf", "title": "Automated Essay Scoring 1 Automated Essay Scoring: Writing Assessment and Instruction", "text": "Automated Essay Scoring 2 Introduction This chapter documents the advent and rise of automated essay scoring (AES) as a means of both assessment and instruction. The first section discusses what AES is, how it works, and who the major purveyors of the technology are. The second section describes outgrowths of the technology as it applies to ongoing projects in measurement and education. In 1973, the late Ellis Page and colleagues at the University of Connecticut programmed the first successful automated essay scoring engine, \" Project Essay Grade (PEG) \" (1973). The technology was foretold some six years earlier in a landmark Phi Delta Kappan article entitled, \" The Imminence of Grading Essays by Computer \" (Page, 1966). At the time the article was provocative and a bit outrageous, though in hindsight, it can only be deemed prophetic. As a former high school English teacher , Page was convinced that students would benefit greatly by having access to technology that would provide quick feedback on their writing. He also realized that the greatest hindrance to having secondary students write more was the requirement that, ultimately, a teacher had to review stacks of papers. While PEG produced impressive results, the technology of the time was too primitive to make it a practical application. Text had to be typed on IBM 80-column punched cards and read into a mainframe computer before it could be evaluated. As a consequence, the technology sat dormant until the early 1990s and was revitalized with the confluence of two technological developments: microcomputers and the Internet. Microcomputers permitted the generation of electronic text from a regular keyboard and the internet provided a universal platform to submit text for review Automated Essay Scoring 3 Automated essay scoring is a measurement technology in which computers evaluate written work (Shermis & Burstein, 2003). Most of the initial applications have 2002). Computers do not \" understand \" the written text being evaluated. So, for example, the computer would not \" get \" the following joke. Q-When is a door not a door? A-When it is ajar. Unlike humans, a computer cannot interpret the play on words, and infer that the predicate in the answer (i.e., \" ajar \") is being cleverly used as a noun (i.e., \" a jar \"). What the computer does in an AES context is to analyze the written text into its observable components. Different AES systems evaluate different \u2026"}
{"_id": "cf328fb4887b27c5928b88d8e5fca465c3d2889c", "title": "A micro-video recommendation system based on big data", "text": "With the development of the Internet and social networking service, the micro-video is becoming more popular, especially for youngers. However, for many users, they spend a lot of time to get their favorite micro-videos from amounts videos on the Internet; for the micro-video producers, they do not know what kinds of viewers like their products. Therefore, this paper proposes a micro-video recommendation system. The recommendation algorithms are the core of this system. Traditional recommendation algorithms include content-based recommendation, collaboration recommendation algorithms, and so on. At the Bid Data times, the challenges what we meet are data scale, performance of computing, and other aspects. Thus, this paper improves the traditional recommendation algorithms, using the popular parallel computing framework to process the Big Data. Slope one recommendation algorithm is a parallel computing algorithm based on MapReduce and Hadoop framework which is a high performance parallel computing platform. The other aspect of this system is data visualization. Only an intuitive, accurate visualization interface, the viewers and producers can find what they need through the micro-video recommendation system."}
{"_id": "454e684e85d1f775480422932e92e7d1e923225f", "title": "SUIT: A Supervised User-Item Based Topic Model for Sentiment Analysis", "text": "Probabilistic topic models have been widely used for sentiment analysis. However, most of existing topic methods only model the sentiment text, but do not consider the user, who expresses the sentiment, and the item, which the sentiment is expressed on. Since different users may use different sentiment expressions for different items, we argue that it is better to incorporate the user and item information into the topic model for sentiment analysis. In this paper, we propose a new Supervised User-Item based Topic model, called SUIT model, for sentiment analysis. It can simultaneously utilize the textual topic and latent user-item factors. Our proposed method uses the tensor outer product of text topic proportion vector, user latent factor and item latent factor to model the sentiment label generalization. Extensive experiments are conducted on two datasets: review dataset and microblog dataset. The results demonstrate the advantages of our model. It shows significant improvement compared with supervised topic models and collaborative filtering methods."}
{"_id": "376529c7585902a0d4f64f160eae09ddcf22f323", "title": "Developing learner corpus annotation for Chinese grammatical errors", "text": "This study describes the construction of the TOCFL (Test Of Chinese as a Foreign Language) learner corpus, including the collection and grammatical error annotation of 2,837 essays written by Chinese language learners originating from a total of 46 different mother-tongue languages. We propose hierarchical tagging sets to manually annotate grammatical errors, resulting in 33,835 inappropriate usages. Our built corpus has been provided for the shared tasks on Chinese grammatical error diagnosis. These demonstrate the usability of our learner corpus annotation."}
{"_id": "16796761ccbeeff3b9368f5e2ad6cdb5238eaedf", "title": "Information, Utility & Bounded Rationality", "text": "Perfectly rational decision-makers maximize expected utility, but crucially ignore the resource costs incurred when determining optimal actions. Here we employ an axiomatic framework for bounded rational decision-making based on a thermodynamic interpretation of resource costs as information costs. This leads to a variational \u201cfree utility\u201d principle akin to thermodynamical free energy that trades off utility and information costs. We show that bounded optimal control solutions can be derived from this variational principle, which leads in general to stochastic policies. Furthermore, we show that risk-sensitive and robust (minimax) control schemes fall out naturally from this framework if the environment is considered as a bounded rational and perfectly rational opponent, respectively. When resource costs are ignored, the maximum expected utility principle is recovered."}
{"_id": "caa744e15f7a366d96d6df883c2973fd44b95b9e", "title": "Resilient 3D hierarchical architected metamaterials.", "text": "Hierarchically designed structures with architectural features that span across multiple length scales are found in numerous hard biomaterials, like bone, wood, and glass sponge skeletons, as well as manmade structures, like the Eiffel Tower. It has been hypothesized that their mechanical robustness and damage tolerance stem from sophisticated ordering within the constituents, but the specific role of hierarchy remains to be fully described and understood. We apply the principles of hierarchical design to create structural metamaterials from three material systems: (i) polymer, (ii) hollow ceramic, and (iii) ceramic-polymer composites that are patterned into self-similar unit cells in a fractal-like geometry. In situ nanomechanical experiments revealed (i) a nearly theoretical scaling of structural strength and stiffness with relative density, which outperforms existing nonhierarchical nanolattices; (ii) recoverability, with hollow alumina samples recovering up to 98% of their original height after compression to \u2265 50% strain; (iii) suppression of brittle failure and structural instabilities in hollow ceramic hierarchical nanolattices; and (iv) a range of deformation mechanisms that can be tuned by changing the slenderness ratios of the beams. Additional levels of hierarchy beyond a second order did not increase the strength or stiffness, which suggests the existence of an optimal degree of hierarchy to amplify resilience. We developed a computational model that captures local stress distributions within the nanolattices under compression and explains some of the underlying deformation mechanisms as well as validates the measured effective stiffness to be interpreted as a metamaterial property."}
{"_id": "801556eae6de26616d2ce90cdd4aecc4e2de7fe4", "title": "Common mode noise cancellation for electrically non-contact ECG measurement system on a chair.", "text": "Electrically non-contact ECG measurement system on a chair can be applied to a number of various fields for continuous health monitoring in daily life. However, the body is floated electrically for this system due to the capacitive electrodes and the floated body is very sensitive to the external noises or motion artifacts which affect the measurement system as the common mode noise. In this paper, the Driven-Seat-Ground circuit similar to the Driven-Right-Leg circuit is proposed to reduce the common mode noise. The analysis of this equivalent circuit is performed and the output signal waveforms are compared between with Driven-Seat-Ground and with capacitive ground. As the results, the Driven-Seat-Ground circuit improves significantly the properties of the fully capacitive ECG measurement system as the negative feedback."}
{"_id": "a96c22dfecb3e549441ad8ffe670c7bd23461c6d", "title": "Minimum Steiner Tree Construction", "text": "In optimizing the area of Very Large Scale Integrated (VLSI) layouts, circuit interconnections should generally be realized with minimum total interconnect. This chapter addresses several variations of the corresponding fundamental Steiner minimal tree (SMT) problem, where a given set of pins is to be connected using minimum total wirelength. Steiner trees are important in global routing and wirelength estimation [15], as well as in various non-VLSI applications such as phylogenetic tree reconstruction in biology [48], network routing [61], and civil engineering, among many other areas [21, 25, 26, 29, 51, 74]."}
{"_id": "2597874c62ab14a729e82239622cc56ed80afdfa", "title": "Dynamic path planning and replanning for mobile robots using RRT", "text": "It is necessary for a mobile robot to be able to efficiently plan a path from its starting, or current location to a desired goal location. This is a trivial task when the environment is static. However, the operational environment of the robot is rarely static, and it often has many moving obstacles. The robot may encounter one, or many of these unknown and unpredictable moving obstacles. The robot will need to decide how to proceed when one of these obstacles is obstructing it's path. A method of dynamic replanning using RRT\u2217 is presented. The robot will modify it's current plan when an unknown random moving obstacle obstructs the path. Various experimental results show the effectiveness of the proposed method."}
{"_id": "55d2934b568e0c73b0291d983ae3cfbf43592c8c", "title": "Impacts of IoT and big data to automotive industry", "text": "With the recent advancement in technologies such as embedded system, wireless distributed sensor, light weight material, smart cognitive radio networks, cloud computing, higher efficiency and ultra-low emission internal combustion engines, intelligent converter, high performance battery and fuel cell technology, the production of smarter, safer, energy efficient and zero emission vehicles is possible in near future. Apart from vehicle technologies, other factors such as road users', well maintained road infrastructure, well maintained vehicles, drivers' attitudes and law and enforcement are also important to be considered and they should work together in order to make our world natural resources can be preserved and maintain cleaner environment and produce sustainable mobility. This paper will discuss the impacts of IoT and Big Data and other emerging technologies mentioned above to the automotive industry. It will include discussion on education, economy, advanced technology, environment, safety and energy."}
{"_id": "87d7e335184eda7767b1e2af6e2c60c3b766f1c2", "title": "Efficient GPU Screen-Space Ray Tracing", "text": "We present an efficient GPU solution for screen-space 3D ray tracing against a depth buffer by adapting the perspective-correct DDA line rasterization algorithm. Compared to linear ray marching, this ensures sampling at a contiguous set of pixels and no oversampling. This paper provides for the first time full implementation details of a method that has been proven in production of recent major game titles. After explaining the optimizations, we then extend the method to support multiple depth layers for robustness. We include GLSL code and examples of pixel-shader ray tracing for several applications."}
{"_id": "23d1aae49d51df38df651fb4fd8c922a55004e4e", "title": "Compressive phase retrieval via generalized approximate message passing", "text": "In this paper, we propose a novel approach to compressive phase retrieval based on loopy belief propagation and, in particular, on the generalized approximate message passing (GAMP) algorithm. Numerical results show that the proposed PR-GAMP algorithm has excellent phase-transition behavior, noise robustness, and runtime. In particular, for successful recovery of synthetic Bernoulli-circular-Gaussian signals, PR-GAMP requires \u22484 times the number of measurements as a phase-oracle version of GAMP and, at moderate to large SNR, the NMSE of PR-GAMP is only \u22483 dB worse than that of phase-oracle GAMP. A comparison to the recently proposed convex-relation approach known as \u201cCPRL\u201d reveals PR-GAMP's superior phase transition and orders-of-magnitude faster runtimes, especially as the problem dimensions increase. When applied to the recovery of a 65k-pixel grayscale image from 32k randomly masked magnitude measurements, numerical results show a median PR-GAMP runtime of only 13.4 seconds."}
{"_id": "86a443116f801af8c16296762574c234d6e5abb9", "title": "A Rule-Based System to Extract Financial Information", "text": "Extracting up-to-date information from financial documents can be important in making investment decisions. However, the unstructured nature and enormity of the volume of such data makes manual analysis tedious and time consuming. Information extraction technology can be applied to automatically extract the most relevant and precise financial information. This paper introduces a rule-based information extraction methodology for the extraction of highly accurate financial information to aid investment decisions. The methodology includes a rule-based symbolic learning model trained by the Greedy Search algorithm and a similar model trained by the Tabu Search algorithm. The methodology has been found very effective in extracting financial information from NASDAQ-listed companies. Also, the Tabu Search based model performed better than some well-known systems. The simple rule structure makes the system portable and it should make parallel processing implementations easier."}
{"_id": "143d2e02ab91ae6259576ac50b664b8647af8988", "title": "Monte Carlo sampling methods using Markov chains and their applications", "text": "A generalization of the sampling method introduced by Metropolis et al. (1953) is presented along with an exposition of the relevant theory, techniques of application and methods and difficulties of assessing the error in Monte Carlo estimates. Examples of the methods, including the generation of random orthogonal matrices and potential applications of the methods to numerical problems arising in statistics, are discussed."}
{"_id": "15532ef54af96755f3e4166004e54fdf6b0aae6a", "title": "Metropolis procedural modeling", "text": "Procedural representations provide powerful means for generating complex geometric structures. They are also notoriously difficult to control. In this article, we present an algorithm for controlling grammar-based procedural models. Given a grammar and a high-level specification of the desired production, the algorithm computes a production from the grammar that conforms to the specification. This production is generated by optimizing over the space of possible productions from the grammar. The algorithm supports specifications of many forms, including geometric shapes and analytical objectives. We demonstrate the algorithm on procedural models of trees, cities, buildings, and Mondrian paintings."}
{"_id": "1cb0954115b1e2350627d9bfcab33cc44b635f15", "title": "Markov logic networks", "text": "We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC over the minimal subset of the ground network required for answering the query. Weights are efficiently learned from relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional clauses are learned using inductive logic programming techniques. Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach."}
{"_id": "4871b188dce83d59cf07a15c565729f8cbb705af", "title": "Delayed Rejection in Reversible", "text": "In a Metropolis-Hastings algorithm, rejection of proposed moves is an intrinsic part of ensuring that the chain converges to the intended target distribution. However, persistent rejection, perhaps in particular parts of the state space, may indicate that locally the proposal distribution is badly calibrated to the target. As an alternative to careful oo-line tuning of state-dependent proposals, the basic algorithm can be modiied so that on rejection, a second attempt to move is made. A diierent proposal can be generated from a new distribution, that is allowed to depend on the previously rejected proposal. We generalise this idea of delaying the rejection and adapting the proposal distribution, due to Tierney and Mira (1999), to generate a more exible class of methods, that in particular applies to a variable dimension setting. The approach is illustrated by two pedagogical examples, and a more realistic application, to a change-point analysis for point processes."}
{"_id": "6ef3da9d9c5a7e2f96a1470bad413c857f64aa95", "title": "A Pattern Language - Towns, Buildings, Construction", "text": "When there are many people who don't need to expect something more than the benefits to take, we will suggest you to have willing to reach all benefits. Be sure and surely do to take this a pattern language towns buildings construction that gives the best reasons to read. When you really need to get the reason why, this a pattern language towns buildings construction book will probably make you feel curious."}
{"_id": "78aba15cb889949c33e29afbdd3bde1525b09f8d", "title": "Adwords management for third-parties in SEM: An optimisation model and the potential of Twitter", "text": "In Search Engine Marketing (SEM), \u201cthird-party\u201d partners play an important intermediate role by bridging the gap between search engines and advertisers in order to optimise advertisers' campaigns in exchange of a service fee. In this paper, we present an economic analysis of the market involving a third-party broker in Google AdWords and the broker's customers. We show that in order to optimise his profit, a third-party broker should minimise the weighted average Cost Per Click (CPC) of the portfolio of keywords attached to customer's ads while still satisfying the negotiated customer's demand. To help the broker build and manage such portfolio of keywords, we develop an optimisation framework inspired from the classical Markowitz portfolio management which integrates the customer's demand constraint and enables the broker to manage the tradeoff between return on investment and risk through a single risk aversion parameter. We then propose a method to augment the keywords portfolio with relevant keywords extracted from trending and popular topics on Twitter. Our evaluation shows that such a keywords-augmented strategy is very promising and enables the broker to achieve, on average, four folds larger return on investment than with a non-augmented strategy, while still maintaining the same level of risk."}
{"_id": "101a1953a4cc711c05c7adc6e31634f5192caff0", "title": "Support vector machines under adversarial label contamination", "text": "Machine learning algorithms are increasingly being applied in security-related tasks such as spam and malware detection, although their security properties against deliberate attacks have not yet been widely understood. Intelligent and adaptive attackers may indeed exploit specific vulnerabilities exposed by machine learning techniques to violate system security. Being robust to adversarial data manipulation is thus an important, additional requirement for machine learning algorithms to successfully operate in adversarial settings. In this work, we evaluate the security of Support Vector Machines (SVMs) to well-crafted, adversarial label noise attacks. In particular, we consider an attacker that aims to maximize the SVM\u2019s classification error by flipping a number of labels in the training data. We formalize a corresponding optimal attack strategy, and solve it by means of heuristic approaches to keep the computational complexity tractable. We report an extensive experimental analysis on the effectiveness of the considered attacks against linear and non-linear SVMs, both on synthetic and real-world datasets. We finally argue that our approach can also provide useful insights for developing more secure SVM learning algorithms, and also novel techniques in a number of related research areas, such as semi-supervised and active learning."}
{"_id": "bb75de5280ff9b0bbdd74633b9887d10fbe0ae10", "title": "Holographic Embeddings of Knowledge Graphs", "text": "Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs. In this work, we propose holographic embeddings (HOLE) to learn compositional vector space representations of entire knowledge graphs. The proposed method is related to holographic models of associative memory in that it employs circular correlation to create compositional representations. By using correlation as the compositional operator, H OLE can capture rich interactions but simultaneously remains efficient to compute, easy to train, and scalable to very large datasets. Experimentally, we show that holographic embeddings are able to outperform state-ofthe-art methods for link prediction on knowledge graphs and relational learning benchmark datasets."}
{"_id": "396aabd694da04cdb846cb724ca9f866f345cbd5", "title": "Domain Adaptation via Pseudo In-Domain Data Selection", "text": "We explore efficient domain adaptation for the task of statistical machine translation based on extracting sentences from a large generaldomain parallel corpus that are most relevant to the target domain. These sentences may be selected with simple cross-entropy based methods, of which we present three. As these sentences are not themselves identical to the in-domain data, we call them pseudo in-domain subcorpora. These subcorpora \u2013 1% the size of the original \u2013 can then used to train small domain-adapted Statistical Machine Translation (SMT) systems which outperform systems trained on the entire corpus. Performance is further improved when we use these domain-adapted models in combination with a true in-domain model. The results show that more training data is not always better, and that best results are attained via proper domain-relevant data selection, as well as combining inand general-domain systems during decoding."}
{"_id": "95f388c8cd9db1e800e515e53aaaf4e9b433866f", "title": "Observations of achievement and motivation in using cloud computing driven CAD: Comparison of college students with high school and vocational high school backgrounds", "text": "0747-5632/$ see front matter 2012 Elsevier Ltd. A http://dx.doi.org/10.1016/j.chb.2012.08.001 \u21d1 Corresponding author. Tel.: +886 02 7734 3347; f E-mail address: joum@ntnu.edu.tw (M. Jou). Cloud computing technology has matured as it has been integrated with every kind of digitalization processes. It offers numerous advantages for data and software sharing, and thus making the management of complex IT systems much simpler. For education in engineering, cloud computing even provides students with versatile and ubiquitous access to software commonly used in the field without having to step into an actual computer lab. Our study analyzed learning attitudes and academic performances induced by the utilization of resources driven by cloud computing technologies. Comparisons were made between college students with high school and vocational high school backgrounds. One hundred and thirtytwo students who took the computer-aided designing (CAD) course participated in the study. Technology Acceptance Model (TAM) was used as the fundamental framework. Open-ended sets of questionnaires were designed to measure academic performance and causal attributions; the results indicated no significant differences in the cognitive domain between the two groups of students, though it is not so in both the psychomotor and the affective domains. College students with vocational high school background appeared to possess higher learning motivation in CAD applications. 2012 Elsevier Ltd. All rights reserved."}
{"_id": "e65f812a717d8fde5289849c8b31bad19e7969b9", "title": "Gang involvement: psychological and behavioral characteristics of gang members, peripheral youth, and nongang youth.", "text": "Research has noted the existence of a loose and dynamic gang structure. However, the psychological processes that underpin gang membership have only begun to be addressed. This study examined gang members, peripheral youth, and nongang youth across measures of criminal activity, the importance they attach to status, their levels of moral disengagement, their perceptions of out-group threat, and their attitudes toward authority. Of the 798 high school students who participated in this study, 59 were identified as gang members, 75 as peripheral youth, and 664 as nongang youth. Gang members and peripheral youth were more delinquent than nongang youth overall; however, gang members committed more minor offenses than nongang youth and peripheral youth committed more violent offenses than nongang youth. Gang members were more anti-authority than nongang youth, and both gang and peripheral youth valued social status more than nongang youth. Gang members were also more likely to blame their victims for their actions and use euphemisms to sanitize their behavior than nongang youth, whereas peripheral youth were more likely than nongang youth to displace responsibility onto their superiors. These findings are discussed as they highlight the importance of examining individual differences in the cognitive processes that relate to gang involvement."}
{"_id": "6506b7899516db29df714a02b3d5add377db4b61", "title": "Analysis and detection of low quality information in social networks", "text": "With social networks like Facebook, Twitter and Google+ attracting audiences of millions of users, they have been an important communication platform in daily life. This in turn attracts malicious users to the social networks as well, causing an increase in the incidence of low quality information. Low quality information such as spam and rumors is a nuisance to people and hinders them from consuming information that is pertinent to them or that they are looking for. Although individual social networks are capable of filtering a significant amount of low quality information they receive, they usually require large amounts of resources (e.g, personnel) and incur a delay before detecting new types of low quality information. Also the evolution of various low quality information posts lots of challenges to defensive techniques. My PhD thesis work focuses on the analysis and detection of low quality information in social networks. We introduce social spam analytics and detection framework SPADE across multiple social networks showing the efficiency and flexibility of cross-domain classification and associative classification. For evolutionary study of low quality information, we present the results on large-scale study on Web spam and email spam over a long period of time. Furthermore, we provide activity-based detection approaches to filter out low quality information in social networks: click traffic analysis of short URL spam, behavior analysis of URL spam and information diffusion analysis of rumor. Our framework and detection techniques show promising results in analyzing and detecting low quality information in social networks."}
{"_id": "3ec58519644b128388ca7336e09b59ffea32444b", "title": "African-American rhinoplasty.", "text": "Increased width, loss of definition, and lack of projection characterize the stereotypical African-American nose. Early rhinoplasty surgeons attempted strict adherence to neoclassical aesthetic ideals. However, in reality, the anatomy and aesthetic desires of these patients are much more complex. Building dorsal height, achieving nasal tip definition amidst thick skin, and producing a more aesthetically pleasing alar base are the major challenges. Surgical planning should be sensitive to both individual and cultural differences in aesthetic perception and expectations. Here we describe the techniques used by the senior author (R.W.H.K.)."}
{"_id": "d303dbd601f974fa462c503d3135e0f9575712f9", "title": "Business Case and Technology Analysis for 5G Low Latency Applications", "text": "A large number of new consumer and industrial applications are likely to change the classic operator\u2019s business models and provide a wide range of new markets to enter. This paper analyzes the most relevant 5G use cases that require ultra-low latency, from both technical and business perspectives. Low latency services pose challenging requirements to the network, and to fulfill them, operators need to invest in costly changes in their network. In this sense, it is not clear whether such investments are going to be amortized with these new business models. In light of this, specific applications and requirements are described and the potential market benefits for operators are analyzed. Conclusions show that the operators have clear opportunities to add value and position themselves strongly with the increasing number of services to be provided by 5G."}
{"_id": "5ac9f3697b91cb25e402ebb92a7cdfa1a2f93cc2", "title": "Parallel vision for perception and understanding of complex scenes: methods, framework, and perspectives", "text": "In the study of image and vision computing, the generalization capability of an algorithm often determines whether it is able to work well in complex scenes. The goal of this review article is to survey the use of photorealistic image synthesis methods in addressing the problems of visual perception and understanding. Currently, the ACP Methodology comprising artificial systems, computational experiments, and parallel execution is playing an essential role in modeling and control of complex systems. This paper extends the ACP Methodology into the computer vision field, by proposing the concept and basic framework of Parallel Vision. In this paper, we first review previous works related to Parallel Vision, in terms of synthetic data generation and utilization. We detail the utility of synthetic data for feature analysis, object analysis, scene analysis, and other analyses. Then we propose the basic framework of Parallel Vision, which is composed of an ACP trilogy (artificial scenes, computational experiments, and parallel execution). We also present some in-depth thoughts and perspectives on Parallel Vision. This paper emphasizes the significance of synthetic data to vision system design and suggests a novel research methodology for perception and understanding of complex scenes."}
{"_id": "bd56ea35c1f898841587ae017e282766c57c86df", "title": "The Semantic Web - ISWC 2003", "text": "The Semantic Network, a component of the Unified Medical Language System\u00ae (UMLS), describes core biomedical knowledge consisting of semantic types and relationships. It is a well established, semi-formal ontology in widespread use for over a decade. We expected to \u201cpublish\u201d this ontology on the Semantic Web, using OWL, with relatively little effort. However, we ran into a number of problems concerning alternative interpretations of the SN notation and the inability to express some of the interpretations in OWL. We detail these problems, as a cautionary tale to others planning to publish pre-existing ontologies on the Semantic Web, as a list of issues to consider when describing formally concepts in any ontology, and as a collection of criteria for evaluating alternative representations, which could form part of a methodology of ontology development."}
{"_id": "aa7802f0a552234f0b67c2be64b6867c073f2ec0", "title": "The Basal Ganglia and Chunking of Action Repertoires", "text": "The basal ganglia have been shown to contribute to habit and stimulus-response (S-R) learning. These forms of learning have the property of slow acquisition and, in humans, can occur without conscious awareness. This paper proposes that one aspect of basal ganglia-based learning is the recoding of cortically derived information within the striatum. Modular corticostriatal projection patterns, demonstrated experimentally, are viewed as producing recoded templates suitable for the gradual selection of new input-output relations in cortico-basal ganglia loops. Recordings from striatal projection neurons and interneurons show that activity patterns in the striatum are modified gradually during the course of S-R learning. It is proposed that this recoding within the striatum can chunk the representations of motor and cognitive action sequences so that they can be implemented as performance units. This scheme generalizes Miller's notion of information chunking to action control. The formation and the efficient implementation of action chunks are viewed as being based on predictive signals. It is suggested that information chunking provides a mechanism for the acquisition and the expression of action repertoires that, without such information compression would be biologically unwieldy or difficult to implement. The learning and memory functions of the basal ganglia are thus seen as core features of the basal ganglia's influence on motor and cognitive pattern generators."}
{"_id": "73c9e35ab40638ba2261f4cb3c5e01be47d0162b", "title": "ADVANCED POLARIMETRIC CONCEPTS \u2013 PART 1 ( Polarimetric Target Description , Speckle filtering and Decomposition Theorems )", "text": "There is currently a great deal of interest in the use of polarimetry for radar remote sensing. In this context, different and important objectives are to classify Earth terrain components within a fully polarimetric SAR image and then extract physical information from the observed scattering of microwaves by surface and volume structures. The most important observable measured by such radar systems is the 3x3-coherency matrix [T]. This matrix accounts for local variations in the scattering matrix and is the lowest order operator suitable to extract polarimetric parameters for distributed scatterers in the presence of additive (system) and/or multiplicative (speckle) noise. In the first part of this paper, the most important Target Polarimetry descriptors: Sinclair Matrix, target vectors, coherency matrix and the covariance matrix as well are presented, their interconnections and equivalences will be shown together with the respective transformations."}
{"_id": "e2413f14a014603253815398e56c7fee0ba01a3d", "title": "Chapter 2 INTRUSION DETECTION : A SURVEY", "text": "This chapter provides the overview of the state of the art in intrusion detection research. Intrusion detection systems are software and/or hardware components that monitor computer systems and analyze events occurring in them for signs of intrusions. Due to widespread diversity and complexity of computer infrastructures, it is difficult to provide a completely secure computer system. Therefore, there are numerous security systems and intrusion detection systems that address different aspects of computer security. This chapter first provides taxonomy of computer intrusions, along with brief descriptions of major computer attack categories. Second, a common architecture of intrusion detection systems and their basic characteristics are presented. Third, taxonomy of intrusion detection systems based on five criteria (information source, analysis strategy, time aspects, architecture, response) is given. Finally, intrusion detection systems are classified according to each of these categories and the most representative research prototypes are briefly described."}
{"_id": "f1e227b37387607849db4358ce2353b3d6e6f6b1", "title": "Reforming education sector through Big Data", "text": "Big data, which is maturing over time, has a high impact in analyzing scenarios and coming up with decision making strategies pertaining to any sector. Consequently the number of applications are also increasing where in the past, they were limited to the confines of company sales departments. In lieu, this paper is primarily focused on applying Data Sciences within the education sector. We will be discussing the overall application to address higher-education issues. We will also present a framework for capturing, analyzing it and concluding on data in order to benefit decision-makers, thus ultimately manifesting in informing design and finalizing policy."}
{"_id": "5b49c4c566bf85ac04cb40bb4d8997c9d6ba2173", "title": "Agile Modeling with the UML", "text": "This paper discusses a model-based approach to software development. It argues that an approach using models as central development artifact needs to be added to the portfolio of software engineering techniques, to further increase efficiency and flexibility of the development as well as quality and reusability of the results. Two major and strongly related techniques are identified and discussed: Test case modeling and an evolutionary approach to model transformation."}
{"_id": "677c2f8913e8321dbe4740b4e0eb5a354d7c956a", "title": "Near-Saddle-Node Bifurcation Behavior as Dynamics in Working Memory for Goal-Directed Behavior", "text": "In consideration of working memory as a means for goal-directed behavior in nonstationary environments, we argue that the dynamics of working memory should satisfy two opposing demands: long-term maintenance and quick transition. These two characteristics are contradictory within the linear domain. We propose the near-saddle-node bifurcation behavior of a sigmoidal unit with a self-connection as a candidate of the dynamical mechanism that satisfies both of these demands. It is shown in evolutionary programming experiments that the near-saddle-node bifurcation behavior can be found in recurrent networks optimized for a task that requires efficient use of working memory. The result suggests that the near-saddle-node bifurcation behavior may be a functional necessity for survival in nonstationary environments."}
{"_id": "ef5363f8d378ccbddbc8f2d3ec26517b75184f62", "title": "Data Base Sublanguage Founded on the Relational Calculus", "text": "Three principal types of language for data base manipulation are identified: the low-level, procedure-oriented (typified by the CODASYL-proposed DML), the intermediate level, algebraic (typified by the Project MAC MacAIMS language), and the high level, relational calculus-based data sublanguage, an example of which is described in this paper. The language description is informal and stresses concepts and principles. Following this, arguments are presented for the superiority of the calculus-based type of data base sub-language over the algebraic, and for the algebraic over the low-level procedural. These arguments are particularly relevant to the questions of inter-system compatibility and standardization."}
{"_id": "f4ce1a544bdceeab0de4b1ad5094c17ff3acadcd", "title": "The art of appeal in electronic commerce: Understanding the impact of product and website quality on online purchases", "text": "Purpose \u2013 This study advances product appeal and website appeal as focal psychological mechanisms that can be invoked by business-to-consumer (B2C) e-commerce sites to mitigate problems of information asymmetry via signaling to bolster consumers\u2019 purchase intention under the influence of trust. Design/methodology/approach \u2013 Survey approach was employed to validate our research model. Findings \u2013 Website appeal partially mediates the positive effect of product appeal on consumers\u2019 purchase intention. Trust in e-commerce sites not only increases purchase intention directly, but it also reinforces the positive relationship between website appeal and purchase intention while attenuating the positive relationship between product appeal and purchase intention. Service content quality, search delivery quality, and enjoyment are confirmed as positive antecedents of website appeal whereas diagnosticity and justifiability are established as positive antecedents of product appeal. Research implications \u2013 This study not only delineates product and website appeal as complementary drivers of consumer purchase on e-commerce sites, but it also derives five signals that aid in bolstering both product and website appeal. Trust is revealed to exert a moderating influence on the impact of product and website appeal on purchase intention. Practical implications \u2013 Practitioners should prioritize their resource allocation to enhance qualities most pertinent to product and website appeal. E-commerce sites should offer product-oriented functionalities to facilitate product diagnosticity and reassure consumers of their purchase decisions. Originality/value \u2013 This study distinguishes between product and website appeal as well as between their respective antecedents. It also uncovers how trust can alter the effects of both website and product appeal on consumers\u2019 purchase intention."}
{"_id": "4afd9f800ca2553556b888885162152969431434", "title": "From Classical To Hip-Hop : Can Machines Learn Genres ?", "text": "The ability to classify the genre of a song is an easy task for the human ear. After listening to a song for just several seconds, it is often not difficult to take note of that song\u2019s characteristics and subsequently identify this genre. However, this task is one that computers have historically not been able to solve well, at least not until the advent of more sophisticated machine learning techniques. Our team explored several of these techniques over the course of this project and successfully built a system that achieves 61.873% accuracy in classifying music genres. This paper discusses the methods we used for exploratory data analysis, feature selection, hyperparameter optimization, and eventual implementation of several algorithms for classification."}
{"_id": "2d89d66675b25aabcfe503f60235f2489d40e4e9", "title": "Approximate Inference in Additive Factorial HMMs with Application to Energy Disaggregation", "text": "This paper considers additive factorial hidden Markov models, an extension to HMMs where the state factors into multiple independent chains, and the output is an additive function of all the hidden states. Although such models are very powerful, accurate inference is unfortunately difficult: exact inference is not computationally tractable, and existing approximate inference techniques are highly susceptible to local optima. In this paper we propose an alternative inference method for such models, which exploits their additive structure by 1) looking at the observed difference signal of the observation, 2) incorporating a \u201crobust\u201d mixture component that can account for unmodeled observations, and 3) constraining the posterior to allow at most one hidden state to change at a time. Combining these elements we develop a convex formulation of approximate inference that is computationally efficient, has no issues of local optima, and which performs much better than existing approaches in practice. The method is motivated by the problem of energy disaggregation, the task of taking a whole home electricity signal and decomposing it into its component appliances; applied to this task, our algorithm achieves state-of-the-art performance, and is able to separate many appliances almost perfectly using just the total aggregate signal."}
{"_id": "d0f415a5d1ca91a934b72711b4594aa5396fe567", "title": "Recent approaches to non-intrusive load monitoring techniques in residential settings", "text": "The concept of Smart Grids is closely related to energy conservation and load shedding concepts. However, it is difficult to quantify the effectiveness of energy conservation efforts in residential settings without any sort of end-use energy information as feedback. In order to achieve that, load monitoring methods are normally used. In recent years, non-intrusive load monitoring (NILM) approaches are gaining popularity due to their minimal installation requirements and cost effectiveness. For a NILM system to work, only one sensor at the entry point to a home is required. Fluctuations in the aggregate power consumption signals are used to mathematically estimate the composition of operation of appliances. This approach eliminates the requirement of installing plug-meters for every appliance in the house. In this paper, we provide a review of recent research efforts on state-of-the-art NILM algorithms before concluding with a baseline and overall vision for our future research direction."}
{"_id": "ecd15fff71eaaf2350a7343fe068e21d825cc4a9", "title": "Toward Non-Intrusive Load Monitoring via Multi-Label Classification", "text": "Demand-side management technology is a key element of the proposed smart grid, which will help utilities make more efficient use of their generation assets by reducing consumers' energy demand during peak load periods. However, although some modern appliances can respond to price signals from the utility companies, there is a vast stock of older appliances that cannot. For such appliances, utilities must infer what appliances are operating in a home, given only the power signals on the main feeder to the home (i.e., the home's power consumption must be disaggregated into individual appliances). We report on an in-depth investigation of multi-label classification algorithms for disaggregating appliances in a power signal. A systematic review of this research topic shows that this class of algorithms has received little attention in the literature, even though it is arguably a more natural fit to the disaggregation problem than the traditional single-label classifiers used to date. We examine a multi-label meta-classification framework (RAkEL), and a bespoke multi-label classification algorithm (MLkNN), employing both time-domain and wavelet-domain feature sets. We test these classifiers on two real houses from the Reference Energy Disaggregation Dataset. We found that the multilabel algorithms are effective and competitive with published results on the datasets."}
{"_id": "0924f9c2f4d6eea7905d75070af40960efaeb330", "title": "ElectriSense: single-point sensing using EMI for electrical event detection and classification in the home", "text": "This paper presents ElectriSense, a new solution for automatically detecting and classifying the use of electronic devices in a home from a single point of sensing. ElectriSense relies on the fact that most modern consumer electronics and fluorescent lighting employ switch mode power supplies (SMPS) to achieve high efficiency. These power supplies continuously generate high frequency electromagnetic interference (EMI) during operation that propagates throughout a home's power wiring. We show both analytically and by in-home experimentation that EMI signals are stable and predictable based on the device's switching frequency characteristics. Unlike past transient noise-based solutions, this new approach provides the ability for EMI signatures to be applicable across homes while still being able to differentiate between similar devices in a home. We have evaluated our solution in seven homes, including one six-month deployment. Our results show that ElectriSense can identify and classify the usage of individual devices with a mean accuracy of 93.82%."}
{"_id": "94c7c2ddecd5dc724aafa0bf753b96b85b658b4c", "title": "Updatable Block-Level Message-Locked Encryption", "text": "Deduplication is a widely used technique for reducing storage space of cloud service providers. Yet, it is unclear how to support deduplication of encrypted data securely until the study of Bellareetal on message-locked encryption (Eurocrypt 2013). Since then, there are many improvements such as strengthening its security, reducing client storage, etc. While updating a (shared) file is common, there is little attention on how to efficiently update large encrypted files in a remote storage with deduplication. To modify even a single bit, existing solutions require the trivial and expensive way of downloading and decrypting the large ciphertext.\n We initiate the study of updatable block-level message-locked encryption. We propose a provably secure construction that is efficiently updatable with O(log|F|) computational cost, where |F| is the file size. It also supports proof-of-ownership, a nice feature which protects storage providers from being abused as a free content distribution network."}
{"_id": "a7b9cf2a6581947b2ed672cce90c37d0220a4459", "title": "LIGHTNETs: Smart LIGHTing and Mobile Optical Wireless NETworks \u2014 A Survey", "text": "Recently, rapid increase of mobile devices pushed the radio frequency (RF)-based wireless technologies to their limits. Free-space-optical (FSO), a.k.a. optical wireless, communication has been considered as one of the viable solutions to respond to the ever-increasing wireless capacity demand. Particularly, Visible Light Communication (VLC) which uses light emitting diode (LED) based smart lighting technology provides an opportunity and infrastructure for the high-speed low-cost wireless communication. Though stemming from the same core technology, the smart lighting and FSO communication have inherent tradeoffs amongst each other. In this paper, we present a tutorial and survey of advances in these two technologies and explore the potential for integration of the two as a single field of study: LIGHTNETs. We focus our survey to the context of mobile communications given the recent pressing needs in mobile wireless networking. We deliberate on key challenges involved in designing technologies jointly performing the two functions simultaneously: LIGHTing and NETworking."}
{"_id": "58962c541d3e03e6c49df29999981a9a1b9a12f6", "title": "Automatic Caption Generation for News Images", "text": "This paper is concerned with the task of automatically generating captions for images, which is important for many image-related applications. Examples include video and image retrieval as well as the development of tools that aid visually impaired individuals to access pictorial information. Our approach leverages the vast resource of pictures available on the web and the fact that many of them are captioned and colocated with thematically related documents. Our model learns to create captions from a database of news articles, the pictures embedded in them, and their captions, and consists of two stages. Content selection identifies what the image and accompanying article are about, whereas surface realization determines how to verbalize the chosen content. We approximate content selection with a probabilistic image annotation model that suggests keywords for an image. The model postulates that images and their textual descriptions are generated by a shared set of latent variables (topics) and is trained on a weakly labeled dataset (which treats the captions and associated news articles as image labels). Inspired by recent work in summarization, we propose extractive and abstractive surface realization models. Experimental results show that it is viable to generate captions that are pertinent to the specific content of an image and its associated article, while permitting creativity in the description. Indeed, the output of our abstractive model compares favorably to handwritten captions and is often superior to extractive methods."}
{"_id": "7106f693d5202ecc6d9281ae30cc881e9ce62896", "title": "Pileus: protecting user resources from vulnerable cloud services", "text": "Cloud computing platforms are now constructed as distributed, modular systems of cloud services, which enable cloud users to manage their cloud resources. However, in current cloud platforms, cloud services fully trust each other, so a malicious user may exploit a vulnerability in a cloud service to obtain unauthorized access to another user's data. To date, over 150 vulnerabilities have been reported in cloud services in the OpenStack cloud. Research efforts in cloud security have focused primarily on attacks originating from user VMs or compromised operating systems rather than threats caused by the compromise of distributed cloud services, leaving cloud users open to attacks from these vulnerable cloud services. In this paper, we propose the Pileus cloud service architecture, which isolates each user's cloud operations to prevent vulnerabilities in cloud services from enabling malicious users to gain unauthorized access. Pileus deploys stateless cloud services \"on demand\" to service each user's cloud operations, limiting cloud services to the permissions of individual users. Pileus leverages the decentralized information flow control (DIFC) model for permission management, but the Pileus design addresses special challenges in the cloud environment to: (1) restrict how cloud services may be allowed to make security decisions; (2) select trustworthy nodes for access enforcement in a dynamic, distributed environment; and (3) limit the set of nodes a user must trust to service each operation. We have ported the OpenStack cloud platform to Pileus, finding that we can systematically prevent compromised cloud services from attacking other users' cloud operations with less than 3% additional latency for the operation. Application of the Pileus architecture to Open-Stack shows that confined cloud services can service users' cloud operations effectively for a modest overhead."}
{"_id": "42cfb5614cbef64a5efb0209ca31efe760cec0fc", "title": "Novelty and Reinforcement Learning in the Value System of Developmental Robots", "text": "The value system of a developmental robot signals the occurrence of salient sensory inputs, modulates the mapping from sensory inputs to action outputs, and evaluates candidate actions. In the work reported here, a low level value system is modeled and implemented. It simulates the non-associative animal learning mechanism known as habituation e ect. Reinforcement learning is also integrated with novelty. Experimental results show that the proposed value system works as designed in a study of robot viewing angle selection."}
{"_id": "3c474a0967e4f74971a0a35fa1af03769c4dd469", "title": "A Linear Permanent-Magnet Motor for Active Vehicle Suspension", "text": "Traditionally, automotive suspension designs with passive components have been a compromise between the three conflicting demands of road holding, load carrying, and passenger comfort. Linear electromagnetic motor-based active suspension has superior controllability and bandwidth, provides shock load isolation between the vehicle chassis and wheel, and, therefore, has great potential. It also has the ability to recover energy that is dissipated in the shock absorber in the passive systems and results in a much more energy-efficient suspension system. This paper describes the issues pertinent to the design of a high force density tubular permanent-magnet (PM) motor for active suspension in terms of performance optimization, the use of a solid stator core for low-cost production and its impact on thrust force, and the assessment of demagnetization risk."}
{"_id": "923ec0da8327847910e8dd71e9d801abcbc93b08", "title": "Hide-and-Seek: Forcing a Network to be Meticulous for Weakly-Supervised Object and Action Localization", "text": "We propose \u2018Hide-and-Seek\u2019, a weakly-supervised framework that aims to improve object localization in images and action localization in videos. Most existing weakly-supervised methods localize only the most discriminative parts of an object rather than all relevant parts, which leads to suboptimal performance. Our key idea is to hide patches in a training image randomly, forcing the network to seek other relevant parts when the most discriminative part is hidden. Our approach only needs to modify the input image and can work with any network designed for object localization. During testing, we do not need to hide any patches. Our Hide-and-Seek approach obtains superior performance compared to previous methods for weakly-supervised object localization on the ILSVRC dataset. We also demonstrate that our framework can be easily extended to weakly-supervised action localization."}
{"_id": "73447b6a02d1caff0a96472a2e0b571e1be497c8", "title": "Externalising the autobiographical self: sharing personal memories online facilitated memory retention.", "text": "Internet technology provides a new means of recalling and sharing personal memories in the digital age. What is the mnemonic consequence of posting personal memories online? Theories of transactive memory and autobiographical memory would make contrasting predictions. In the present study, college students completed a daily diary for a week, listing at the end of each day all the events that happened to them on that day. They also reported whether they posted any of the events online. Participants received a surprise memory test after the completion of the diary recording and then another test a week later. At both tests, events posted online were significantly more likely than those not posted online to be recalled. It appears that sharing memories online may provide unique opportunities for rehearsal and meaning-making that facilitate memory retention."}
{"_id": "d591682c056fe9d082a180c14324ee46e5a618f0", "title": "Iterated extended Kalman filter based visual-inertial odometry using direct photometric feedback", "text": "This paper presents a visual-inertial odometry framework which tightly fuses inertial measurements with visual data from one or more cameras, by means of an iterated extended Kalman filter (IEKF). By employing image patches as landmark descriptors, a photometric error is derived, which is directly integrated as an innovation term in the filter update step. Consequently, the data association is an inherent part of the estimation process and no additional feature extraction or matching processes are required. Furthermore, it enables the tracking of non-corner shaped features, such as lines, and thereby increases the set of possible landmarks. The filter state is formulated in a fully robocentric fashion, which reduces errors related to nonlinearities. This also includes partitioning of a landmark\u2019s location estimate into a bearing vector and distance and thereby allows an undelayed initialization of landmarks. Overall, this results in a compact approach which exhibits a high level of robustness with respect to low scene texture and motion blur. Furthermore, there is no time-consuming initialization procedure and pose estimates are available starting at the second image frame. We test the filter on different real datasets and compare it to other state-of-the-art visual-inertial frameworks. The experimental results show that robust localization with high accuracy can be achieved with this filterbased framework."}
{"_id": "b3ede733fcd97271f745d8c0f71e44562abbb6d5", "title": "CareLog: a selective archiving tool for behavior management in schools", "text": "Identifying the function of problem behavior can lead to the development of more effective interventions. One way to identify the function is through functional behavior assessment (FBA). Teachers conduct FBA in schools. However, the task load of recording the data manually is high, and the challenge of accurately identifying antecedents and consequences is significant while interacting with students. These issues often result in imperfect information capture. CareLog allows teachers more easily to conduct FBAs and enhances the capture of relevant information. In this paper, we describe the design process that led to five design principles that governed the development of CareLog. We present results from a five-month, quasi-controlled study aimed at validating those design principles. We reflect on how various constraints imposed by special education settings impact the design and evaluation process for HCI practitioners and researchers."}
{"_id": "b60fa2d772633f957b2462dfb2a9a6bc5e6eaff8", "title": "A Note on the Convergence of SOR for the PageRank Problem", "text": "A curious phenomenon when it comes to solving the linear system formulation of the PageRank problem is that while the convergence rate of Gauss\u2013Seidel shows an improvement over Jacobi by a factor of approximately two, successive overrelaxation (SOR) does not seem to offer a meaningful improvement over Gauss\u2013Seidel. This has been observed experimentally and noted in the literature, but to the best of our knowledge there has been no analytical explanation for this thus far. This convergence behavior is surprising because there are classes of matrices for which Gauss\u2013Seidel is faster than Jacobi by a similar factor of two, and SOR accelerates convergence by an order of magnitude compared to Gauss\u2013Seidel. In this short paper we prove analytically that the PageRank model has the unique feature that there exist PageRank linear systems for which SOR does not converge outside a very narrow interval depending on the damping factor, and that in such situations Gauss\u2013Seidel may be the best choice among the relaxation parameters. Conversely, we show that within that narrow interval, there exists no PageRank problem for which SOR does not converge. Our result may give an analytical justification for the popularity of Gauss\u2013Seidel as a solver for the linear system formulation of PageRank."}
{"_id": "f6ce44ef8bc394abf2730131e6c9c7f9a9876852", "title": "A Card Game Description Language", "text": "We present initial research regarding a system capable of generating novel card games. We furthermore propose a method for computationally analysing existing games of the same genre. Ultimately, we present a formalisation of card game rules, and a context-free grammar Gcardgame capable of expressing the rules of a large variety of card games. Example derivations are given for the poker variant Texas hold \u2019em, Blackjack and UNO. Stochastic simulations are used both to verify the implementation of these well-known games, and to evaluate the results of new game rules derived from the grammar. In future work, this grammar will be used to evolve completely novel card games using a grammarguided genetic program."}
{"_id": "0988e3b54a7bffaa1a8328c39ee52090e58ffd5c", "title": "A Hexanucleotide Repeat Expansion in C9ORF72 Is the Cause of Chromosome 9p21-Linked ALS-FTD", "text": "The chromosome 9p21 amyotrophic lateral sclerosis-frontotemporal dementia (ALS-FTD) locus contains one of the last major unidentified autosomal-dominant genes underlying these common neurodegenerative diseases. We have previously shown that a founder haplotype, covering the MOBKL2b, IFNK, and C9ORF72 genes, is present in the majority of cases linked to this region. Here we show that there is a large hexanucleotide (GGGGCC) repeat expansion in the first intron of C9ORF72 on the affected haplotype. This repeat expansion segregates perfectly with disease in the Finnish population, underlying 46.0% of familial ALS and 21.1% of sporadic ALS in that population. Taken together with the D90A SOD1 mutation, 87% of familial ALS in Finland is now explained by a simple monogenic cause. The repeat expansion is also present in one-third of familial ALS cases of outbred European descent, making it the most common genetic cause of these fatal neurodegenerative diseases identified to date."}
{"_id": "45f639807caeb7e457904423972a2e1211e408ef", "title": "Precision and Recall for Regression", "text": "Cost sensitive prediction is a key task in many real world applications. Most existing research in this area deals with classification problems. This paper addresses a related regression problem: the prediction of rare extreme values of a continuous variable. These values are often regarded as outliers and removed from posterior analysis. However, for many applications (e.g. in finance, meteorology, biology, etc.) these are the key values that we want to accurately predict. Any learning method obtains models by optimizing some preference criteria. In this paper we propose new evaluation criteria that are more adequate for these applications. We describe a generalization for regression of the concepts of precision and recall often used in classification. Using these new evaluation metrics we are able to focus the evaluation of predictive models on the cases that really matter for these applications. Our experiments indicate the advantages of the use of these new measures when comparing predictive models in the context of our target applications."}
{"_id": "8bd63933f50f028e3cd7d50440e037280b3e5b5e", "title": "Review on trajectory similarity measures", "text": "The availability of devices that can be used to track moving objects has increased dramatically leading to a great growth in movement data from almost every application domain. Therefore, there has been an increasing interest in proposing new methodologies for indexing, classifying, clustering, querying and measuring similarity between moving objects' data. One of the main functions for a wide range of application domains is to measure the similarity between two moving objects' trajectories. In this paper, we present a comparative study between widely used trajectory similarity measures observing the advantages and disadvantages of these measures."}
{"_id": "c3cd42dd1031cf093bfa9b8743ab16cfe9a039c9", "title": "Use of Network Latency Profiling and Redundancy for Cloud Server Selection", "text": "As servers are placed in diverse locations in networked services today, it becomes vital to direct a client's request to the best server(s) to achieve both high performance and reliability. In this distributed setting, non-negligible latency and server availability become two major concerns, especially for highly-interactive applications. Profiling latencies and sending redundant data have been investigated as solutions to these issues. The notion of a cloudlet in mobile-cloud computing is also relevant in this context, as the cloudlet can supply these solution approaches on behalf of the mobile. In this paper, we investigate the effects of profiling and redundancy on latency when a client has a choice of multiple servers to connect to, using measurements from real experiments and simulations. We devise and test different server selection and data partitioning strategies in terms of profiling and redundancy. Our key findings are summarized as follows. First, intelligent server selection algorithms help find the optimal group of servers that minimize latency with profiling. Second, we can achieve good performance with relatively simple approaches using redundancy. Our analysis of profiling and redundancy provides insight to help designers determine how many servers and which servers to select to reduce latency."}
{"_id": "e6f6af7899a6aea57f4a4d87d45bd5719dc4e08d", "title": "Sustainable, effective implementation of a surgical preprocedural checklist: an \"attestation\" format for all operating team members.", "text": "BACKGROUND\nAdoption ofa preprocedural pause (PPP) associated with a checklist and a team briefing has been shown to improve teamwork function in operating rooms (ORs) and has resulted in improved outcomes. The format of the World Health Organization Safe Surgery Saves Lives checklist has been used as a template for a PPP. Performing a PPP, described as a \"time-out,\" is one of the three principal components, along with a preprocedure verification process and marking the procedure site, of the Joint Commission's Universal Protocol for Preventing Wrong Site, Wrong Procedure, Wrong Person Surgery. However, if the surgeon alone leads the pause, its effectiveness may be decreased by lack of input from other operating team members.\n\n\nMETHODS\nIn this study, the PPP was assessed to measure participation and input from operating team members. On the basis of low participation levels, the pause was modified to include an attestation from each member of the team.\n\n\nRESULTS\nPreliminary analysis of our surgeon-led pause revealed only 54% completion of all items, which increased to 97% after the intervention. With the new format, operating team members stopped for the pause in 96% of cases, compared with 78% before the change. Operating team members introduced themselves in 94% of cases, compared with 44% before the change. Follow-up analysis showed sustained performance at 18 months after implementation.\n\n\nCONCLUSIONS\nA preprocedural checklist format in which each member of the operating team provides a personal attestation can improve pause compliance and may contribute to improvements in the culture of teamwork within an OR. Successful online implementation of a PPP, which includes participation by all operating team members, requires little or no additional expense and only minimal formal coaching outside working situations."}
{"_id": "f62a8c59de7bb20da5030ed09a8892dd353f3a50", "title": "Local multidimensional scaling", "text": "In a visualization task, every nonlinear projection method needs to make a compromise between trustworthiness and continuity. In a trustworthy projection the visualized proximities hold in the original data as well, whereas a continuous projection visualizes all proximities of the original data. We show experimentally that one of the multidimensional scaling methods, curvilinear components analysis, is good at maximizing trustworthiness. We then extend it to focus on local proximities both in the input and output space, and to explicitly make a user-tunable parameterized compromise between trustworthiness and continuity. The new method compares favorably to alternative nonlinear projection methods."}
{"_id": "2564a92add8b1749dbe3648ff14421691b6bd7d8", "title": "A Mixed Fragmentation Methodology For Initial Distributed Database Design", "text": "We deene mixed fragmentation as a process of simultaneously applying the horizontal and vertical fragmentation on a relation. It can be achieved in one of two ways: by performing horizontal fragmentation followed by vertical fragmentation or by performing vertical fragmentation followed by horizontal fragmentation. The need for mixed fragmentation arises in distributed databases because database users usually access subsets of data which are vertical and horizontal fragments of global relations and there is a need to process queries or transactions that would access these fragments optimally. We present algorithms for generating candidate vertical and horizontal fragmentation schemes and propose a methodology for distributed database design using these fragmentation schemes. When applied together these schemes form a grid. This grid consisting of cells is then merged to form mixed fragments so as to minimize the number of disk accesses required to process the distributed transactions. We have implemented the vertical and horizontal fragmentation algorithms and are developing a Distributed Database Design Tool to support the proposed methodology."}
{"_id": "8178a3a6fbe7f5f29f615b151f1abe4ba6d2a473", "title": "Getting plugged in: an overview of internet addiction.", "text": "Internet addiction is not formally recognised as a clinical disorder by the WHO despite increasing evidence that excessive internet use can interfere with daily life and work. There is increasing pressure from Psychologists for Internet addiction to be recognised. This article explores the prevalence, symptoms and management of Internet addiction and the consequences of ignoring the ever growing concerns from public figures and institutions."}
{"_id": "b55cec15c675c4adf03e732de2e1b5217d529a58", "title": "A 96-GHz Ortho-Mode Transducer for the Polatron", "text": "We describe the design, simulation, fabrication, and performance of a 96-GHz ortho-mode transducer (OMT) to be used for the Polatron\u2014a bolometric receiver with polarization capability. The OMT has low loss, good isolation, and moderately broad bandwidth, and its performance closely resembles simulation results."}
{"_id": "0ee2efb0bacede2eaa4516795aa5150d20e3a266", "title": "A11y Attacks: Exploiting Accessibility in Operating Systems", "text": "Driven in part by federal law, accessibility (a11y) support for disabled users is becoming ubiquitous in commodity OSs. Some assistive technologies such as natural language user interfaces in mobile devices are welcomed by the general user population. Unfortunately, adding new features in modern, complex OSs usually introduces new security vulnerabilities. Accessibility support is no exception. Assistive technologies can be defined as computing subsystems that either transform user input into interaction requests for other applications and the underlying OS, or transform application and OS output for display on alternative devices. Inadequate security checks on these new I/O paths make it possible to launch attacks from accessibility interfaces. In this paper, we present the first security evaluation of accessibility support for four of the most popular computing platforms: Microsoft Windows, Ubuntu Linux, iOS, and Android. We identify twelve attacks that can bypass state-of-the-art defense mechanisms deployed on these OSs, including UAC, the Yama security module, the iOS sandbox, and the Android sandbox. Further analysis of the identified vulnerabilities shows that their root cause is that the design and implementation of accessibility support involves inevitable trade-offs among compatibility, usability, security, and (economic) cost. These trade-offs make it difficult to secure a system against misuse of accessibility support. Based on our findings, we propose a number of recommendations to either make the implementation of all necessary security checks easier and more intuitive, or to alleviate the impact of missing/incorrect checks. We also point out open problems and challenges in automatically analyzing accessibility support and identifying security vulnerabilities."}
{"_id": "21f47e1d9078d12de1bd06341619923e8b9d85bb", "title": "Low-cost traffic analysis of Tor", "text": "Tor is the second generation onion router supporting the anonymous transport of TCP streams over the Internet. Its low latency makes it very suitable for common tasks, such as Web browsing, but insecure against traffic-analysis attacks by a global passive adversary. We present new traffic-analysis techniques that allow adversaries with only a partial view of the network to infer which nodes are being used to relay the anonymous streams and therefore greatly reduce the anonymity provided by Tor. Furthermore, we show that otherwise unrelated streams can be linked back to the same initiator Our attack is feasible for the adversary anticipated by the Tor designers. Our theoretical attacks are backed up by experiments performed on the deployed, albeit experimental, Tor network. Our techniques should also be applicable to any low latency anonymous network. These attacks highlight the relationship between the field of traffic-analysis and more traditional computer security issues, such as covert channel analysis. Our research also highlights that the inability to directly observe network links does not prevent an attacker from performing traffic-analysis: the adversary can use the anonymising network as an oracle to infer the traffic load on remote nodes in order to perform traffic-analysis."}
{"_id": "77e7b0663f6774b3d6e1d51106020a9a0f96bcd2", "title": "Connecting \u201d and \u201c Disconnecting \u201d With Civic Life : Patterns of Internet Use and the Production of Social Capital", "text": "This article explores the relationship between Internet use and the individual-level production of social capital. To do so, the authors adopt a motivational perspective to distinguish among types of Internet use when examining the factors predicting civic engagement, interpersonal trust, and life contentment. The predictive power of new media use is then analyzed relative to key demographic, contextual, and traditional media use variables using the 1999 DDB Life Style Study. Although the size of associations is generally small, the data suggest that informational uses of the Internet are positively related to individual differences in the production of social capital, whereas social-recreational uses are negatively related to these civic indicators. Analyses within subsamples defined by generational age breaks further suggest that social capital production is related to Internet use among Generation X, while it is tied to television use among Baby Boomers and newspaper use among members of the Civic Generation. The possibility of life cycle and cohort effects is discussed."}
{"_id": "d7c30d97b6e1c73f733cbdab64948aaa666b4bf6", "title": "The spatio-temporal generalized additive model for criminal incidents", "text": "Law enforcement agencies need to model spatio-temporal patterns of criminal incidents. With well developed models, they can study the causality of crimes and predict future criminal incidents, and they can use the results to help prevent crimes. In this paper, we described our newly developed spatio-temporal generalized additive model (S-T GAM) to discover underlying factors related to crimes and predict future incidents. The model can fully utilize many different types of data, such as spatial, temporal, geographic, and demographic data, to make predictions. We efficiently estimated the parameters for S-T GAM using iteratively re-weighted least squares and maximum likelihood and the resulting estimates provided for model interpretability. In this paper we showed the evaluation of S-T GAM with the actual criminal incident data from Charlottesville, Virginia. The evaluation results showed that S-T GAM outperformed the previous spatial prediction models in predicting future criminal incidents."}
{"_id": "4bd1944714ca025237957053170206c14f940cbc", "title": "Characterizing and detecting performance bugs for smartphone applications", "text": "Smartphone applications\u2019 performance has a vital impact on user experience. However, many smartphone applications suffer from bugs that cause significant performance degradation, thereby losing their competitive edge. Unfortunately, people have little understanding of these performance bugs. They also lack effective techniques to fight with such bugs. To bridge this gap, we conducted a study of 70 real-world performance bugs collected from eight large-scale and popular Android applications. We studied the characteristics (e.g., bug types and how they manifested) of these bugs and identified their common patterns. These findings can support follow-up research on performance bug avoidance, testing, debugging and analysis for smartphone applications. To demonstrate the usefulness of our findings, we implemented a static code analyzer, PerfChecker, to detect our identified performance bug patterns. We experimentally evaluated PerfChecker by applying it to 29 popular Android applications, which comprise 1.1 million lines of Java code. PerfChecker successfully detected 126 matching instances of our performance bug patterns. Among them, 68 were quickly confirmed by developers as previously-unknown issues that affect application performance, and 20 were fixed soon afterwards by following our optimization suggestions."}
{"_id": "4b16c239044c839a0fcd1b27bf3c7212c4d4064a", "title": "Person re-identification in multi-camera system by signature based on interest point descriptors collected on short video sequences", "text": "We present and evaluate a person re-identification scheme for multi-camera surveillance system. Our approach uses matching of signatures based on interest-points descriptors collected on short video sequences. One of the originalities of our method is to accumulate interest points on several sufficiently time-spaced images during person tracking within each camera, in order to capture appearance variability. A first experimental evaluation conducted on a publicly available set of low-resolution videos in a commercial mall shows very promising inter-camera person re-identification performances (a precision of 82% for a recall of 78%). It should also be noted that our matching method is very fast: ~ 1/8s for re-identification of one target person among 10 previously seen persons, and a logarithmic dependence with the number of stored person models, making re- identification among hundreds of persons computationally feasible in less than ~ 1/5 second."}
{"_id": "6d60ed7e6eed6d5b54fa9ac0838b3ece708061ef", "title": "ThingTalk : A Distributed Language for a Social Internet of Things", "text": "The Internet of Things has increased the number of connected devices by an order of magnitude. This increase comes with a corresponding increase in complexity, which has led to centralized architectures that do not respect the privacy of the users and incur high development and deployment costs. We propose the Open Thing Platform as a new distributed architecture which is built around ThingTalk, a new high level declarative programming language that can express the interaction between multiple physical devices and online services. The architecture abstracts away the details of the communication without sacrificing privacy. We experiment with ThingTalk in multiple domains and we show that several applications can be written in a few lines of code and with no need for any server infrastructure."}
{"_id": "9b52a30439517b59edb401e671d12b895aababc2", "title": "Using Mobile LiDAR Data for Rapidly Updating Road Markings", "text": "Updating road markings is one of the routine tasks of transportation agencies. Compared with traditional road inventory mapping techniques, vehicle-borne mobile light detection and ranging (LiDAR) systems can undertake the job safely and efficiently. However, current hurdles include software and computing challenges when handling huge volumes of highly dense and irregularly distributed 3-D mobile LiDAR point clouds. This paper presents the development and implementation aspects of an automated object extraction strategy for rapid and accurate road marking inventory. The proposed road marking extraction method is based on 2-D georeferenced feature (GRF) images, which are interpolated from 3-D road surface points through a modified inverse distance weighted (IDW) interpolation. Weighted neighboring difference histogram (WNDH)-based dynamic thresholding and multiscale tensor voting (MSTV) are proposed to segment and extract road markings from the noisy corrupted GRF images. The results obtained using 3-D point clouds acquired by a RIEGL VMX-450 mobile LiDAR system in a subtropical urban environment are encouraging."}
{"_id": "cffb7da93d0f754f527ce7e507087e238e35527b", "title": "A review on the platform design, dynamic modeling and control of hybrid UAVs", "text": "This article presents a review on the platform design, dynamic modeling and control of hybrid Unmanned Aerial Vehicles (UAVs). For now, miniature UAVs which have experienced a tremendous development are dominated by two main types, i.e., fixed-wing UAV and Vertical Take-Off and Landing (VTOL) UAV, each of which, however, has its own inherent limitations on such as flexibility, payload, axnd endurance. Enhanced popularity and interest are recently gained by a newer type of UAVs, named hybrid UAV that integrates the beneficial features of both conventional ones. In this paper, a technical overview of the recent advances of the hybrid UAV is presented. More specifically, the hybrid UAV's platform design together with the associated technical details and features are introduced first. Next, the work on hybrid UAV's flight dynamics modeling is then categorized and explained. As for the flight control system design for the hybrid UAV, several flight control strategies implemented are discussed and compared in terms of theory, linearity and implementation."}
{"_id": "7a8fc6f785b9db5ac447f109144a5883892dac23", "title": "Power divider in ridge gap waveguide technology", "text": "A power divider in ridge gap waveguide technology has been designed to work at 15GHz. Measurements and simulations are provided. Besides, a broad numerical study of the characteristic impedance of a ridge gap waveguide is presented."}
{"_id": "f66a00d2bc5c04dd68f672659f97eff8e7084dc1", "title": "Mapping the antigenic and genetic evolution of influenza virus.", "text": "The antigenic evolution of influenza A (H3N2) virus was quantified and visualized from its introduction into humans in 1968 to 2003. Although there was remarkable correspondence between antigenic and genetic evolution, significant differences were observed: Antigenic evolution was more punctuated than genetic evolution, and genetic change sometimes had a disproportionately large antigenic effect. The method readily allows monitoring of antigenic differences among vaccine and circulating strains and thus estimation of the effects of vaccination. Further, this approach offers a route to predicting the relative success of emerging strains, which could be achieved by quantifying the combined effects of population level immune escape and viral fitness on strain evolution."}
{"_id": "18ab50a0cbb22da9253d86f2f31bf65bf6568ce8", "title": "Color image processing [basics and special issue overview]", "text": "umans have always seen the world in color but only recently have we been able to generate vast quantities of color images with such ease. In the last three decades, we have seen a rapid and enormous transition from grayscale images to color ones. Today, we are exposed to color images on a daily basis in print, photographs, television, computer displays, and cinema movies, where color now plays a vital role in the advertising and dissemination of information throughout the world. Color monitors, printers, and copiers now dominate the office and home environments, with color becoming increasingly cheaper and easier to generate and reproduce. Color demands have soared in the marketplace and are projected to do so for years to come. With this rapid progression, color and multispectral properties of images are becoming increasingly crucial to the field of image processing, often extending and/or replacing previously known grayscale techniques. We have seen the birth of color algorithms that range from direct extensions of grayscale ones, where images are treated as three monochrome separations, to more sophisticated approaches that exploit the correlations among the color bands, yielding more accurate results. Hence, it is becoming increasingly necessary for the signal processing community to understand the fundamental differences between color and grayscale imaging. There are more than a few extensions of concepts"}
{"_id": "49b4f6975e7a0e27e36153e27bbd6e333cd3ee73", "title": "Relational active learning for link-based classification", "text": "Many information tasks involve objects that are explicitly or implicitly connected in a network (or graph), such as webpages connected by hyperlinks or people linked by \u201cfriendships\u201d in a social network. Research on link-based classification (LBC) has shown how to leverage these connections to improve classification accuracy. Unfortunately, acquiring a sufficient number of labeled examples to enable accurate learning for LBC can often be expensive or impractical. In response, some recent work has proposed the use of active learning, where the LBC method can intelligently select a limited set of additional labels to acquire, so as to reduce the overall cost of learning a model with sufficient accuracy. This work, however, has produced conflicting results and has not considered recent progress for LBC inference and semi-supervised learning. In this paper, we evaluate multiple prior methods for active learning and demonstrate that none consistently improve upon random guessing. We then introduce two new methods that both seek to improve active learning by leveraging the link structure to identify nodes to acquire that are more representative of the underlying data. We show that both approaches have some merit, but that one method, by proactively acquiring nodes so as to produce a more representative distribution of known labels, often leads to significant accuracy increases with minimal computational cost."}
{"_id": "61e04278cc6478a3d3fb800d672a0425fabfea78", "title": "MRJS : A JavaScript MapReduce Framework for Web Browsers", "text": "This paper describes MRJS, a JavaScript based MapReduce framework. MRJS runs on the open internet over HTTP rather than in a data center. Instead of using clusters of machines for processing power, it relies on volunteer clients in the form of JavaScript engines in web browsers. Users can specify, execute, and fetch data for arbitrary MapReduce jobs with HTTP requests. Our centralized job server acts as a coordinator. We have implemented MRJS and results show that it performs well for certain types of MapReduce computations."}
{"_id": "a1aeb66b4b806c075438dff0ba26ebed41258d02", "title": "Privacy-preserving personal health record using multi-authority attribute-based encryption with revocation", "text": "Personal health record (PHR) service is an emerging model for health information exchange. In PHR systems, patient\u2019s health records and information are maintained by the patient himself through the Web. In reality, PHRs are often outsourced to be stored at the third parties like cloud service providers. However, there have been serious privacy concerns about cloud service as it may expose user\u2019s sensitive data like PHRs to those cloud service providers or unauthorized users. Using attribute-based encryption (ABE) to encrypt patient\u2019s PHRs in cloud environment, secure and flexible access control can be achieved. Yet, problems like scalability in key management, fine-grained access control, and efficient user revocation remain to be addressed. In this paper, we propose a privacy-preserving PHR, which supports fine-grained access control and efficient revocation. To be specific, our scheme achieves the goals (1) scalable and fine-grained access control for PHRs by using multi-authority ABE scheme, and (2) efficient on-demand user/attribute revocation and dynamic policy update. In our scheme, we consider the situation that multiple data owners exist, and patient\u2019s PHRs are encrypted and stored in semi-trust servers. The access structure in our scheme is expressive access tree structure, and the security of our scheme can be reduced to the standard decisional bilinear Diffie\u2013Hellman assumption."}
{"_id": "076be17f97325fda82d1537aaa48798eb66ba91f", "title": "Identity-based encryption with efficient revocation", "text": "Identity-based encryption (IBE) is an exciting alternative to public-key encryption, as IBE eliminates the need for a Public Key Infrastructure (PKI). The senders using an IBE do not need to look up the public keys and the corresponding certificates of the receivers, the identities (e.g. emails or IP addresses) of the latter are sufficient to encrypt. Any setting, PKI- or identity-based, must provide a means to revoke users from the system. Efficient revocation is a well-studied problem in the traditional PKI setting. However in the setting of IBE, there has been little work on studying the revocation mechanisms. The most practical solution requires the senders to also use time periods when encrypting, and all the receivers (regardless of whether their keys have been compromised or not) to update their private keys regularly by contacting the trusted authority. We note that this solution does not scale well -- as the number of users increases, the work on key updates becomes a bottleneck. We propose an IBE scheme that significantly improves key-update efficiency on the side of the trusted party (from linear to logarithmic in the number of users), while staying efficient for the users. Our scheme builds on the ideas of the Fuzzy IBE primitive and binary tree data structure, and is provably secure."}
{"_id": "e3b1622104c9d1be776f1ddb8c7299e86995fe33", "title": "Using Markov decision process in cognitive radio networks towards the optimal reward", "text": "The Learning is an indispensable phase in the cognition cycle of cognitive radio network. It corresponds between the executed actions and the estimated rewards. Based on this phase, the agent learns from past experiences to improve his actions in the next interventions. In the literature, there are several methods that treat the artificial learning. Among them, we cite the reinforcement learning that look for the optimal policy, for ensuring the maximum reward.\n The present work exposes an approach, based on a model of reinforcement learning, namely Markov decision process, to maximize the sum of transfer rates of all secondary users. Such conception defines all notions relative to an environment with finite set of states, including: the agent, all states, the allowed actions with a given state, the obtained reward after the execution of an action and the optimal policy. After the implementation, we remark a correlation between the started policy and the optimal policy, and we improve the performances by referring to a previous work."}
{"_id": "f5c63f9f0264fe9eb4fc1e10995dfc60bc09a969", "title": "High Performance Transactions via Early Write Visibility", "text": "In order to guarantee recoverable transaction execution, database systems permit a transaction\u2019s writes to be observable only at the end of its execution. As a consequence, there is generally a delay between the time a transaction performs a write and the time later transactions are permitted to read it. This delayed write visibility can significantly impact the performance of serializable database systems by reducing concurrency among conflicting transactions. This paper makes the observation that delayed write visibility stems from the fact that database systems can arbitrarily abort transactions at any point during their execution. Accordingly, we make the case for database systems which only abort transactions under a restricted set of conditions, thereby enabling a new recoverability mechanism, early write visibility, which safely makes transactions\u2019 writes visible prior to the end of their execution. We design a new serializable concurrency control protocol, piece-wise visibility (PWV), with the explicit goal of enabling early write visibility. We evaluate PWV against state-of-the-art serializable protocols and a highly optimized implementation of read committed, and find that PWV can outperform serializable protocols by an order of magnitude and read committed by 3X on high contention workloads."}
{"_id": "d8a747a134c289a6754d38e4adaa9127a7cdbf57", "title": "A Genetic NewGreedy Algorithm for Influence Maximization in Social Network", "text": "A user may be influenced by the other users of a social network by sharing information. Influence maximization is one of the critical research topics aimed at knowing the current circumstances of a social network, such as the general mood of the society. The goal of this problem is to find a seed set which has a maximum influence with respect to a propagation model. For the influence maximization problem is NP-Hard, it is obvious that an exhausted search algorithm is not able to find the solution in a reasonable time. It is also obvious that a greedy algorithm may not find a solution that satisfies all the requirements. Hence, a high-performance algorithm for solving the influence maximization problem, which leverages the strength of the greedy method and the genetic algorithm (GA) is presented in this paper. Experimental results show that the proposed algorithm can provide a better result than simple GA by about 10% in terms of the quality."}
{"_id": "e6298dd550a4a72f9819702ce6bee578b888d316", "title": "Hyperdimensional biosignal processing: A case study for EMG-based hand gesture recognition", "text": "The mathematical properties of high-dimensional spaces seem remarkably suited for describing behaviors produces by brains. Brain-inspired hyperdimensional computing (HDC) explores the emulation of cognition by computing with hypervectors as an alternative to computing with numbers. Hypervectors are high-dimensional, holographic, and (pseudo)random with independent and identically distributed (i.i.d.) components. These features provide an opportunity for energy-efficient computing applied to cyberbiological and cybernetic systems. We describe the use of HDC in a smart prosthetic application, namely hand gesture recognition from a stream of Electromyography (EMG) signals. Our algorithm encodes a stream of analog EMG signals that are simultaneously generated from four channels to a single hypervector. The proposed encoding effectively captures spatial and temporal relations across and within the channels to represent a gesture. This HDC encoder achieves a high level of classification accuracy (97.8%) with only 1/3 the training data required by state-of-the-art SVM on the same task. HDC exhibits fast and accurate learning explicitly allowing online and continuous learning. We further enhance the encoder to adaptively mitigate the effect of gesture-timing uncertainties across different subjects endogenously; further, the encoder inherently maintains the same accuracy when there is up to 30% overlapping between two consecutive gestures in a classification window."}
{"_id": "0202c8ac88c7a00d3f42f49160d9a174cca0502c", "title": "Ambulance redeployment: An approximate dynamic programming approach", "text": "Emergency medical service (EMS) providers are charged with the task of managing ambulances so that the time required to respond to emergency calls is minimized. One approach that may assist in reducing response times is ambulance redeployment, i.e., repositioning idle ambulances in real time. We formulate a simulation model of EMS operations to evaluate the performance of a given allocation policy and use this model in an approximate dynamic programming (ADP) context to compute high-quality redeployment policies. We find that the resulting ADP policies perform much better than sub-optimal static policies and marginally better than near-optimal static policies. Representative computational results for Edmonton, Alberta are included."}
{"_id": "e84c2882b691889ebd1c0565f32dab588dd4182c", "title": "Design of household appliances for a Dc-based nanogrid system: An induction heating cooktop study case", "text": "Efficient energy management is becoming an important issue when designing any electrical system. Recently, a significant effort has been devoted to the design of optimized micro and nanogrids comprising residential area subsystems. One of these approaches consists on the design of a dc-based nanogrid optimized for the interoperation of electric loads, sources, and storage elements. Home appliances are one of the main loads in such dc-based nanogrids. In this paper, the design and optimization of an appliance for operation in a dc-based nanogrid is detailed. An induction heating cooktop appliance will be considered as a reference example, being some of the design considerations generalizable to other appliances. The main design aspects, including the inductor system, power converter, EMC filter, and control are considered. Finally, some simulation results of the expected converter performance are shown."}
{"_id": "b954e91c53abdd2430c48a4e693123357162ee5f", "title": "PCS/W-CDMA dual-band MMIC power amplifier with a newly proposed linearizing bias circuit", "text": "A personal communications service/wide-band code division multiple access (PCS/W-CDMA) dual-band monolithic microwave integrated circuit (MMIC) power amplifier with a single-chip MMIC and a single-path output matching network is demonstrated by adopting a newly proposed on-chip linearizer. The linearizer is composed of the base\u2013emitter diode of an active bias transistor and a capacitor to provide an RF short at the base node of the active bias transistor. The linearizer enhances the linearity of the power amplifier effectively for both PCS and W-CDMA bands with no additional dc power consumption, and has negligible insertion power loss with almost no increase in die area. It improves the input 1-dB gain compression point by 18.5 (20) dB and phase distortion by 6.1 (12.42 ) at an output power of 28 (28) dBm for the PCS (W-CDMA) band while keeping the base bias voltage of the power amplifier as designed. A PCS and W-CDMA dual-band InGaP heterojunction bipolar transistor MMIC power amplifier with single input and output and no switch for band selection is embodied by implementing the linearizer and by designing the amplifier to have broad-band characteristics. The dual-band power amplifier exhibits an output power of 30 (28.5) dBm, power-added efficiency of 39.5% (36%), and adjacent channel power ratio of 46 ( 50) dBc at the output power of 28 (28) dBm under 3.4-V operation voltage for PCS (W-CDMA) applications."}
{"_id": "7028fbb34276c4b71aaadbc8e55ca2692ccce099", "title": "Punica granatum (pomegranate) and its potential for prevention and treatment of inflammation and cancer.", "text": "The last 7 years have seen over seven times as many publications indexed by Medline dealing with pomegranate and Punica granatum than in all the years preceding them. Because of this, and the virtual explosion of interest in pomegranate as a medicinal and nutritional product that has followed, this review is accordingly launched. The pomegranate tree, Punica granatum, especially its fruit, possesses a vast ethnomedical history and represents a phytochemical reservoir of heuristic medicinal value. The tree/fruit can be divided into several anatomical compartments: (1) seed, (2) juice, (3) peel, (4) leaf, (5) flower, (6) bark, and (7) roots, each of which has interesting pharmacologic activity. Juice and peels, for example, possess potent antioxidant properties, while juice, peel and oil are all weakly estrogenic and heuristically of interest for the treatment of menopausal symptoms and sequellae. The use of juice, peel and oil have also been shown to possess anticancer activities, including interference with tumor cell proliferation, cell cycle, invasion and angiogenesis. These may be associated with plant based anti-inflammatory effects, The phytochemistry and pharmacological actions of all Punica granatum components suggest a wide range of clinical applications for the treatment and prevention of cancer, as well as other diseases where chronic inflammation is believed to play an essential etiologic role."}
{"_id": "21bbec954226c5fdf53560cb072188a18051683c", "title": "Transfer learning using computational intelligence: A survey", "text": "26 27 28 29 30 31 32 33 34 35 36 37 38 Article history: Received 3 December 2014 Received in revised form 7 January 2015 Accepted 17 January 2015 Available online xxxx"}
{"_id": "45bcf14c659b2ac41eda26d334b0f38c95a45513", "title": "The development of a new test of agility for rugby league.", "text": "Agility requires change of direction speed (CODS) and also perceptual and decision-making skills and reaction speed. The purpose of this study was to develop a reliable and valid agility test for rugby league, which stressed all those dimensions. Players from a subelite rugby league team were tested twice on a sport-specific reactive agility test (RAT) and CODS test. Data were analyzed for reliability. For validity results from the subelite groups, first test was compared with data from an elite group. The RAT required participants to run toward an unpredictable life-size video of an attacking opponent and react to that video by changing direction. The CODS test required the same movement patterns however direction changes were preplanned. The subelite group's mean time to complete the CODS test and RAT on their first test was 1.67 \u00b1 0.15 and 1.98 \u00b1 0.16 seconds, respectively, and 1.62 \u00b1 0.14 and 1.91 \u00b1 0.17 seconds, respectively, on their second test (results are \u00b1 \u03c3). Statistical analyses revealed no significant difference in means (p < 0.05) and good correlation (intraclass correlation coefficient = 0.87 and 0.82, respectively). The elite group's mean time to complete the tests was 1.65 \u00b1 0.09 and 1.79 \u00b10.12 seconds, respectively. Statistical analyses revealed a significant difference in mean RAT time between the elite group and the subelite group (p < 0.05). The RAT was reliable and valid. Performance differences on the RAT were attributed to differences in perceptual skills and/or reaction ability. Testing and training agility should therefore stress those dimensions of agility and not just CODS."}
{"_id": "955964e3262e57f807151a6b7d77e339f3d5ecdb", "title": "Diabetes Prediction by using Bacterial Foraging Optimization Algorithm And Artificial Neural Network", "text": "Diagnosis of any diseases earlier is always preferable .diabetes is one such diseases.it reaches to the epidemic proportional in many developing countries and also reaching to industrialized nations. In this study, we investigate a programmable method to analyze diabetes diseases based on BFO and ANN.ANN is a powerful tool and solves the problems such as classification and prediction. Its performance is highly dependent on its architecture and weights. To gain high efficiency in ANN, an appropriate Architecture and learning Algorithm is preferable. The experiments on dataset occupied from undertaking repository to verify the potential of BFO algorithm and showed that BFO-ANN is more accurate than traditional ANN in terms of classification. General Terms-Artificial Neural Network, BacterialForaging Optimization Algorithm."}
{"_id": "c0f55a309a8652008cb1af16bd7bef7103080d7f", "title": "Cross-Domain Ranking via Latent Space Learning", "text": "We study the problem of cross-domain ranking, which addresses learning to rank objects from multiple interrelated domains. In many applications, we may have multiple interrelated domains, some of them with a large amount of training data and others with very little. We often wish to utilize the training data from all these related domains to help improve ranking performance. In this paper, we present a unified model: BayCDR for cross-domain ranking. BayCDR uses a latent space to measure the correlation between different domains, and learns the ranking functions from the interrelated domains via the latent space by a Bayesian model, where each ranking function is based on a weighted average model. An efficient learning algorithm based on variational inference and a generalization bound has been developed. To scale up to handle real large data, we also present a learning algorithm under the Map-Reduce programming model. Finally, we demonstrate the effectiveness and efficiency of BayCDR on large datasets."}
{"_id": "cd3405227b5d555e1e177cdece5e37745eeb049b", "title": "Mitigation of Continuous and Pulsed Radio Interference with GNSS Antenna Arrays", "text": "Dr. Andriy Konovaltsev received his engineer diploma and the Ph.D. degree in electrical engineering from Kharkov State Technical University of Radio Electronics, Ukraine in 1993 and 1996, correspondingly. He joined the Institute of Communications and Navigation of DLR in 2001. His research interests are in array processing for satellite navigation systems, signal processing algorithms for navigation receivers including synchronisation, multipath and radio interference mitigation."}
{"_id": "e496d6be415038de1636bbe8202cac9c1cea9dbe", "title": "Facial Expression Recognition in Older Adults using Deep Machine Learning", "text": "Facial Expression Recognition is still one of the challenging fields in pattern recognition and machine learning science. Despite efforts made in developing various methods for this topic, existing approaches lack generalizability and almost all studies focus on more traditional hand-crafted features extraction to characterize facial expressions. Moreover, effective classifiers to model the spatial and temporary patterns embedded in facial expressions ignore the effects of facial attributes, such as age, on expression recognition even though research indicates that facial expression manifestation varies with ages. Although there are large amount of benchmark datasets available for the recognition of facial expressions, only few datasets contains faces of older adults. Consequently the current scientific literature has not exhausted this topic. Recently, deep learning methods have been attracting more and more researchers due to their great success in various computer vision tasks, mainly because they avoid a process of feature definition and extraction which is often very difficult due to the wide variability of the facial expressions. Based on the deep learning theory, a neural network for facial expression recognition in older adults is constructed by combining a Stacked Denoising Auto-Encoder method to pre-train the network and a supervised training that provides a fine-tuning adjustment of the network. For the supervised classification layer, the -class softmax classifier was implemented, where is the number of expressions to be recognized. The performance are evaluated on two benchmark datasets (FACES and Lifespan), that are the only ones that contain facial expressions of the elderly. The achieved results show the superiority of the proposed deep learning approach compared to the conventional non-deep learning based facial expression recognition methods used in this context."}
{"_id": "2776dcecb444e6c02a56b06480b8346e00e78353", "title": "Wasserstein Propagation for Semi-Supervised Learning", "text": "Probability distributions and histograms are natural representations for product ratings, traffic measurements, and other data considered in many machine learning applications. Thus, this paper introduces a technique for graph-based semisupervised learning of histograms, derived from the theory of optimal transportation. Our method has several properties making it suitable for this application; in particular, its behavior can be characterized by the moments and shapes of the histograms at the labeled nodes. In addition, it can be used for histograms on non-standard domains like circles, revealing a strategy for manifold-valued semi-supervised learning. We also extend this technique to related problems such as smoothing distributions on graph nodes."}
{"_id": "04524df954801d9bea98ddc2db56ccbe5fb7f954", "title": "Cuckoo search algorithm and wind driven optimization based study of satellite image segmentation for multilevel thresholding using Kapur's entropy", "text": "The objective of image segmentation is to extract meaningful objects. A meaningful segmentation selects the proper threshold values to optimize a criterion using entropy. The conventional multilevel thresholding methods are efficient for bi-level thresholding. However, they are computationally expensive when extended to multilevel thresholding since they exhaustively search the optimal thresholds to optimize the objective functions. To overcome this problem, two successful swarm-intelligence-based global optimization algorithms, cuckoo search (CS) algorithm and wind driven optimization (WDO) for multilevel thresholding using Kapur\u2019s entropy has been employed. For this purpose, best solution as fitness function is achieved through CS and WDO algorithm using Kapur\u2019s entropy for optimal multilevel thresholding. A new approach of CS andWDO algorithm is used for selection of optimal threshold value. This algorithm is used to obtain the best solution or best fitness value from the initial random threshold values, and to evaluate the quality of a solution, correlation function is used. Experimental results have been examined on standard set of satellite images using various numbers of thresholds. The results based on Kapur\u2019s entropy reveal that CS, ELR-CS and WDO method can be accurately and efficiently used in multilevel thresholding problem. 2013 Elsevier Ltd. All rights reserved."}
{"_id": "25d0cd147e48cf277011c652192d737a55f994ea", "title": "Improved Relation Extraction with Feature-Rich Compositional Embedding Models", "text": "Compositional embedding models build a representation (or embedding) for a linguistic structure based on its component word embeddings. We propose a Feature-rich Compositional Embedding Model (FCM) for relation extraction that is expressive, generalizes to new domains, and is easy-to-implement. The key idea is to combine both (unlexicalized) handcrafted features with learned word embeddings. The model is able to directly tackle the difficulties met by traditional compositional embeddings models, such as handling arbitrary types of sentence annotations and utilizing global information for composition. We test the proposed model on two relation extraction tasks, and demonstrate that our model outperforms both previous compositional models and traditional feature rich models on the ACE 2005 relation extraction task, and the SemEval 2010 relation classification task. The combination of our model and a loglinear classifier with hand-crafted features gives state-of-the-art results. We made our implementation available for general use1."}
{"_id": "be7aa1dd1812910b6b02b972eaeaff1b62621c1e", "title": "Exploring Semantic Properties of Sentence Embeddings", "text": "Neural vector representations are ubiquitous throughout all subfields of NLP. While word vectors have been studied in much detail, thus far only little light has been shed on the properties of sentence embeddings. In this paper, we assess to what extent prominent sentence embedding methods exhibit select semantic properties. We propose a framework that generate triplets of sentences to explore how changes in the syntactic structure or semantics of a given sentence affect the similarities obtained between their sentence embeddings."}
{"_id": "d4dc1b58a4bd2db5ede3bc427306892e13e5aa64", "title": "High Gain Wide-Band U-Shaped Patch Antennas With Modified Ground Planes", "text": "A high gain wideband U-shaped patch antenna with two equal arms on poly tetra fluoro ethylene (PTFE) substrate is presented. An inverted U-shaped slot is introduced on the circular or square shaped ground plane just under the U-shaped patch. In this communication the effect of size and shape of the ground plane on impedance bandwidth is studied. Maximum impedance bandwidth of 86.79% (4.5\u201311.4 GHz) is obtained with circular shaped ground plane with diameter 36 mm. The highest gain achieved is 4.1 dBi. The simulated results are confirmed experimentally. The proposed antenna is simple in structure compared to the regular stacked or coplanar parasitic patch antennas. It is highly suitable for wireless communications."}
{"_id": "471e9d18671d713528bf2255eba68476d5acdf2f", "title": "Influence and correlation in social networks", "text": "In many online social systems, social ties between users play an important role in dictating their behavior. One of the ways this can happen is through social influence, the phenomenon that the actions of a user can induce his/her friends to behave in a similar way. In systems where social influence exists, ideas, modes of behavior, or new technologies can diffuse through the network like an epidemic. Therefore, identifying and understanding social influence is of tremendous interest from both analysis and design points of view.\n This is a difficult task in general, since there are factors such as homophily or unobserved confounding variables that can induce statistical correlation between the actions of friends in a social network. Distinguishing influence from these is essentially the problem of distinguishing correlation from causality, a notoriously hard statistical problem.\n In this paper we study this problem systematically. We define fairly general models that replicate the aforementioned sources of social correlation. We then propose two simple tests that can identify influence as a source of social correlation when the time series of user actions is available.\n We give a theoretical justification of one of the tests by proving that with high probability it succeeds in ruling out influence in a rather general model of social correlation. We also simulate our tests on a number of examples designed by randomly generating actions of nodes on a real social network (from Flickr) according to one of several models. Simulation results confirm that our test performs well on these data. Finally, we apply them to real tagging data on Flickr, exhibiting that while there is significant social correlation in tagging behavior on this system, this correlation cannot be attributed to social influence."}
{"_id": "62275e38e177af26079353b590658076cd24c94e", "title": "Simulation on Simulink AC4 model (200hp DTC induction motor drive) using Fuzzy Logic controller", "text": "Classical direct torque control (DTC) has advantage in absence of coordinate transform and voltage modulator block. However, it may have disadvantage in controlling electromagnetic torque. DTC produces high ripple in electromagnetic torque as it is not directly controlled. High torque ripple causes vibrations to the motor which may lead to component lose, bearing failure or resonance. Thus, Fuzzy Logic controller is applied to reduce electromagnetic torque ripple. This paper presents the simulation analysis of Fuzzy Logic Direct Torque Controller (FLDTC) of induction machine drives. However, only the design of the controller using built in DTC induction motor drive model of Simulink AC4 will be discussed in this paper. Using FLDTC, the resulting electromagnetic torque from produced less ripple than classical DTC."}
{"_id": "925e2f74cc2ac6ec2b2816a5a66a9a9738d6786e", "title": "Camera calibration with spheres: linear approaches", "text": "This paper addresses the problem of camera calibration from spheres. By studying the relationship between the dual images of spheres and that of the absolute conic, a linear solution has been derived from a recently proposed non-linear semi-definite approach. However, experiments show that this approach is quite sensitive to noise. In order to overcome this problem, a second approach has been proposed, where the orthogonal calibration relationship is obtained by regarding any two spheres as a surface of revolution. This allows a camera to be fully calibrated from an image of three spheres. Besides, a conic homography is derived from the imaged spheres, and from its eigenvectors the orthogonal invariants can be computed directly. Experiments on synthetic and real data show the practicality of such an approach."}
{"_id": "b2a898e40ed4ea31d818b58804c431079b764a5d", "title": "Smart music player integrating facial emotion recognition and music mood recommendation", "text": "Songs, as a medium of expression, have always been a popular choice to depict and understand human emotions. Reliable emotion based classification systems can go a long way in helping us parse their meaning. However, research in the field of emotion-based music classification has not yielded optimal results. In this paper, we present an affective cross-platform music player, EMP, which recommends music based on the real-time mood of the user. EMP provides smart mood based music recommendation by incorporating the capabilities of emotion context reasoning within our adaptive music recommendation system. Our music player contains three modules: Emotion Module, Music Classification Module and Recommendation Module. The Emotion Module takes an image of the user's face as an input and makes use of deep learning algorithms to identify their mood with an accuracy of 90.23%. The Music Classification Module makes use of audio features to achieve a remarkable result of 97.69% while classifying songs into 4 different mood classes. The Recommendation Module suggests songs to the user by mapping their emotions to the mood type of the song, taking into consideration the preferences of the user."}
{"_id": "e081165c4abd8fb773658db7e411384d6d02fb13", "title": "Jubatus : An Open Source Platform for Distributed Online Machine Learning", "text": "Distributed computing is essential for handling very large datasets. Online learning is also promising for learning from rapid data streams. However, it is still an unresolved problem how to combine them for scalable learning and prediction on big data streams. We propose a general computational framework called loose model sharing for online and distributed machine learning. The key is to share only models rather than data between distributed servers. We also introduce Jubatus, an open source software platform based on the framework. Finally, we describe the details of implementing classifier and nearest neighbor algorithms, and discuss our experimental evaluations."}
{"_id": "25b68d0a59d94da680c496a68fdfb3bb2400bc3f", "title": "Reinforcement Learning Testbed for Power-Consumption Optimization", "text": "Common approaches to control a data-center cooling system rely on approximated system/environment models that are built upon the knowledge of mechanical cooling and electrical and thermal management. These models are difficult to design and often lead to suboptimal or unstable performance. In this paper, we show how deep reinforcement learning techniques can be used to control the cooling system of a simulated data center. In contrast to common control algorithms, those based on reinforcement learning techniques can optimize a system\u2019s performance automatically without the need of explicit model knowledge. Instead, only a reward signal needs to be designed. We evaluated the proposed algorithm on the open source simulation platform EnergyPlus. The experimental results indicate that we can achieve 22% improvement compared to a model-based control algorithm built into the EnergyPlus. To encourage the reproduction of our work as well as future research, we have also publicly released an open-source EnergyPlus wrapper interface 1 directly compatible with existing reinforcement learning frameworks."}
{"_id": "08168dcf259f2b80e0ff5ecd5e1bca0cdea889eb", "title": "Social Network Structures in Open Source Software Development Teams", "text": "Drawing on social network theories and previous studies, this research examines the dynamics of social network structures in open source software (OSS) teams. Three projects were selected from SourceForge.net in terms of their similarities as well as their differences. Monthly data were extracted from the bug tracking systems in order to achieve a longitudinal view of the interaction pattern of each project. Social network analysis was used to generate the indices of social structure. The finding suggests that the interaction pattern of OSS projects evolves from a single hub at the beginning to a core/periphery model as the projects move forward. INtrODUctION The information system development arena has seen many revolutions and evolutions. We have witnessed the movement from structured development to object-oriented (OO) development. Modeling methods, such as data flow diagram and entity relationship diagram, are facing new OO modeling languages, such as the unified modeling language (UML) (see Siau & Cao, 2001; Siau, Erickson, & Lee, 2005; Siau & Loo, 2006) and OO methodologies, such as unified process (UP). The latest development includes agile modeling (see Erickson, Lyytinen, & Siau, 2005), extreme programming, and OSS development. While many of these changes are related to systems development paradigms, methodologies, methods, and techniques, the phenomenon of OSS development entails a different structure for software development teams. Chapter 2.17 Social Network Structures in Open Source Software Development Teams Yuan Long Colorado State University-Pueblo, USA Keng Siau University of Nebraska-Lincoln, USA Copyright \u00a9 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited. 646 Social Network Structures in Open Source Software Development Teams Unlike conventional software projects, the participants of OSS projects are volunteers. They are self-selected based on their interests and capability to contribute to the projects (Raymond, 2000). In addition, the developers of OSS projects are distributed all around the world. They communicate and collaborate with each other through the Internet, using e-mails or discussion boards. Therefore, effective and efficient communication and collaboration are critical to OSS success. However, little empirical research has been conducted to study the underlying interaction pattern of OSS teams, especially the dynamics of the social network structures in OSS development teams. To fill this gap, this study examines the evolvement of social structure in OSS teams. The study contributes to the enhancement of the understanding of OSS development, and provides foundation for future studies to analyze the antecedents and consequences of social networks in the OSS context. The remainder of the paper is structured as follows. First, prior studies on social network structures in OSS teams are reviewed. Second, theories related to social structure and social network theory are discussed. Third, the research methodology is presented, and the research results are reported. Next, discussions of the results, the limitations, and the implications are provided. The paper concludes with suggestions for future research."}
{"_id": "208afba2537f374da0cbaf2211585169ccb54b25", "title": "Mermaid: Integrating Vertex-Centric with Edge-Centric for Real-World Graph Processing", "text": "There has been increasing interests in processing large-scale real-world graphs, and recently many graph systems have been proposed. Vertex-centric GAS (Gather-Apply-Scatter) and Edge-centric GAS are two graph computation models being widely adopted, and existing graph analytics systems commonly follow only one computation model, which is not the best choice for real-world graph processing. In fact, vertex degrees in real-world graphs often obey skewed power-law distributions: most vertices have relatively few neighbors while a few have many neighbors. We observe that vertex-centric GAS for high-degree vertices and edge-centric GAS for low-degree vertices is a much better choice for real-world graph processing. In this paper, we present Mermaid, a system for processing large-scale real-world graphs on a single machine. Mermaid skillfully integrates vertex-centric GAS with edge-centric GAS through a novel vertex-mapping mechanism, and supports streamlined graph processing. On a total of 6 practical natural graph processing tasks, we demonstrate that, on average, Mermaid achieves 1.83x better performance than the state-of-the-art graph system on a single machine."}
{"_id": "123a89a9c28043f478906781bec55822abc6bae0", "title": "Bayesian Optimization Algorithm", "text": "In this paper an algorithm based on the concepts of genetic algorithms that uses an estimation of a probability distribution of promising solutions in order to generate new candidate solutions is proposed To esti mate the distribution techniques for model ing multivariate data by Bayesian networks are used The proposed algorithm identi es reproduces and mixes building blocks up to a speci ed order It is independent of the ordering of the variables in the strings rep resenting the solutions Moreover prior in formation about the problem can be incor porated into the algorithm However prior information is not essential Preliminary ex periments show that the BOA outperforms the simple genetic algorithm even on decom posable functions with tight building blocks as a problem size grows"}
{"_id": "1f61892374c2be70f5924281c0d0e88dc4e02a25", "title": "Space-Efficient Online Computation of Quantile Summaries", "text": "An \u2208-approximate quantile summary of a sequence of N elements is a data structure that can answer quantile queries about the sequence to within a precision of \u2208N.\nWe present a new online algorithm for computing\u2208-approximate quantile summaries of very large data sequences. The algorithm has a worst-case space requirement of &Ogr;(1\u00f7\u2208 log(\u2208N)). This improves upon the previous best result of &Ogr;(1\u00f7\u2208 log2(\u2208N)). Moreover, in contrast to earlier deterministic algorithms, our algorithm does not require a priori knowledge of the length of the input sequence.\nFinally, the actual space bounds obtained on experimental data are significantly better than the worst case guarantees of our algorithm as well as the observed space requirements of earlier algorithms."}
{"_id": "1b596269c1d3a08c159b99434055beb4e3157c07", "title": "Load Disaggregation Based on Aided Linear Integer Programming", "text": "Load disaggregation based on aided linear integer programming (ALIP) is proposed. We start with a conventional linear integer programming (IP)-based disaggregation and enhance it in several ways. The enhancements include additional constraints, correction based on a state diagram, median filtering, and linear-programming-based refinement. With the aid of these enhancements, the performance of IP-based disaggregation is significantly improved. The proposed ALIP system relies only on the instantaneous load samples instead of waveform signatures and, hence, works well on low-frequency data. Experimental results show that the proposed ALIP system performs better than conventional IP-based load disaggregation."}
{"_id": "88496bd36dd61ca42dbd5020d23e76ebeaa994a4", "title": "Information flow and cooperative control of vehicle formations", "text": "We consider the problem of cooperation among a collection of vehicles performing a shared task using intervehicle communication to coordinate their actions. Tools from algebraic graph theory prove useful in modeling the communication network and relating its topology to formation stability. We prove a Nyquist criterion that uses the eigenvalues of the graph Laplacian matrix to determine the effect of the communication topology on formation stability. We also propose a method for decentralized information exchange between vehicles. This approach realizes a dynamical system that supplies each vehicle with a common reference to be used for cooperative motion. We prove a separation principle that decomposes formation stability into two components: Stability of this is achieved information flow for the given graph and stability of an individual vehicle for the given controller. The information flow can thus be rendered highly robust to changes in the graph, enabling tight formation control despite limitations in intervehicle communication capability."}
{"_id": "d1b86fa7c4160811de7529bff6bb1cfc206ac1d0", "title": "Enhanced Visibility of MoS 2 , MoSe 2 , WSe 2 and Black-Phosphorus : Making Optical Identification of 2 D Semiconductors Easier", "text": "We explore the use of Si3N4/Si substrates as a substitute of the standard SiO2/Si substrates employed nowadays to fabricate nanodevices based on 2D materials. We systematically study the visibility of several 2D semiconducting materials that are attracting a great deal of interest in nanoelectronics and optoelectronics: MoS2, MoSe2, WSe2 and black-phosphorus. We find that the use of Si3N4/Si substrates provides an increase of the optical contrast up to a 50%\u2013100% and also the maximum contrast shifts towards wavelength values optimal for human eye detection, making optical identification of 2D semiconductors easier. OPEN ACCESS Electronics 2015, 4 848"}
{"_id": "65fa5b2b62e6cc460add773e2c355cb45a0db619", "title": "Scaling Similarity Joins over Tree-Structured Data", "text": "Given a large collection of tree-structured objects (e.g., XML documents), the similarity join finds the pairs of objects that are similar to each other, based on a similarity threshold and a tree edit distance measure. The state-ofthe-art similarity join methods compare simpler approximations of the objects (e.g., strings), in order to prune pairs that cannot be part of the similarity join result based on distance bounds derived by the approximations. In this paper, we propose a novel similarity join approach, which is based on the dynamic decomposition of the tree objects into subgraphs, according to the similarity threshold. Our technique avoids computing the exact distance between two tree objects, if the objects do not share at least one common subgraph. In order to scale up the join, the computed subgraphs are managed in a two-layer index. Our experimental results on real and synthetic data collections show that our approach outperforms the state-of-the-art methods by up to an order of magnitude."}
{"_id": "c20429ccb48d3efb871952d1fe348b075af46f53", "title": "Design and simulation of a Sierpinski carpet fractal antenna for 5G commercial applications", "text": "This article presents the design and simulation of a multi-band microstrip patch antenna, using Sierpinski carpet fractal concept. The proposed antenna is designed on a FR-4 substrate with a relative permittivity of 4.4 and excited using a microstrip feed line at the edge of the patch. Sierpinski carpet fractal with iteration-3 is applied to the radiating patch. After the application of fractal, this patch antenna resonates at 14.80 GHz, 17.92 GHz, 23.12 GHz, and 27.92 GHz with impedance bandwidth of 5800 MHz, 3040 MHz, 2960 MHz, and 3840 MHz respectively. With these four bands, this antenna finds application in commercial 5G wireless communications. To illustrate the effectiveness of the fractal concept, four patch antennas are designed independently at those four resonating frequencies and compared with the single fractal antenna in terms of their size and other performance parameters. All the antennas proposed here, are designed using HFSS15 and results are analyzed in terms of return loss, VSWR, gain, and bandwidth."}
{"_id": "97f8cece32f25f5f896b5660589b7c413d575702", "title": "OpenCV based disease identification of mango leaves", "text": "This paper aims in classifying and identifying the diseases of mango leaves for Indian agriculture. K-means algorithm is chosen for the disease segmentation, and the disease classification and identification is carried out using the SVM classifier. Disease identification based on analysis of patches or discoloring of leaf will hold good for some of the plant diseases, but some other diseases which will deform the leaf shape cannot be identified based on the same method. In this case leaf shape based disease identification has to be performed. Based on this analysis two topics are addressed in this research paper. (1) Disease identification using the OpenCV libraries (2) Leaf shape based disease identification. Keywordk-means,Principal Component Analysis (PCA), feature extraction, shape detection, disease identification, Elliptic fourier analysis, Support Vector Machine(SVM), Artificial Neural Network (ANN)"}
{"_id": "4d393dfa779b2a1de3f4a5b3cba058c898895137", "title": "Assessing the benefits of computational offloading in mobile-cloud applications", "text": "This paper presents the results of a formative study conducted to determine the effects of computation offloading in mobile applications by comparing \u201capplication performance\u201d (chiefly energy consumption and response time). The study examined two general execution scenarios: (1) computation is performed locally on a mobile device, and (2) when it is offloaded entirely to the cloud. The study also carefully considered the underlying network characteristics as an important factor affecting the performance. More specifically, we refactored two mobile applications to offload their computationally intensive functionality to execute in the cloud. We then profiled these applications under different network conditions, and carefully measured \u201capplication performance\u201d in each case. The results were not as conclusive as we had expected. On fast networks, offloading is almost always beneficial. However, on slower networks, the offloading cost-benefit analysis is not as clear cut. The characteristics of the data transferred between the mobile device and the cloud may be a deciding factor in determining whether offloading a computation would improve performance."}
{"_id": "c4e4149a36018af000d2091bfa53b80cd087b7ad", "title": "A Batteryless Sensor ASIC for Implantable Bio-Impedance Applications", "text": "The measurement of the biological tissue's electrical impedance is an active research field that has attracted a lot of attention during the last decades. Bio-impedances are closely related to a large variety of physiological conditions; therefore, they are useful for diagnosis and monitoring in many medical applications. Measuring living tissues, however, is a challenging task that poses countless technical and practical problems, in particular if the tissues need to be measured under the skin. This paper presents a bio-impedance sensor ASIC targeting a battery-free, miniature size, implantable device, which performs accurate 4-point complex impedance extraction in the frequency range from 2 kHz to 2 MHz. The ASIC is fabricated in 150 nm CMOS, has a size of 1.22 mm \u00d7 1.22 mm and consumes 165 \u03bcA from a 1.8 V power supply. The ASIC is embedded in a prototype which communicates with, and is powered by an external reader device through inductive coupling. The prototype is validated by measuring the impedances of different combinations of discrete components, measuring the electrochemical impedance of physiological solution, and performing ex vivo measurements on animal organs. The proposed ASIC is able to extract complex impedances with around 1 \u03a9 resolution; therefore enabling accurate wireless tissue measurements."}
{"_id": "f002724b9954d76b98afa71eee4a15d6e1a3f25e", "title": "Skeleton-based Adaptive In-between Generation for Hand-drawn Key Frames", "text": "For improving the efficiency of 2D animation production, this paper presents a method to create in-between frames based on hand-drawn key-frames. The outlines of characters or objects on two key-frames are used as inputs. First, a skeleton linkage of the target object is automatically constructed by rasterizing each of input key-frames and then applying a pixel-based skeleton extraction method. Secondary, a pair of skeleton linkages having corresponding structure between the current key-frame and the next key-frame is constructed by applying the stroke matching algorithm. After these processes, motion transitions between the skeleton linkages are generated based on our simulation model. When the in-between frames are created only in the 2D plane, the outlines at in-between frames can be generated by a 2D deformation. In case that the in-between transitions are containing a rotation around an axis which is no perpendicular to the drawing plane, however, a 3D structure is required. For achieving such in-between transitions, our method constructs a 3D structure by inflating 2D mesh based on the input outlines. Finally, the contours from the view-point for the created 3D structure are projected onto the 2D plane during in-between transitions. In our method, we adopt the Photic Extremum Lines (PEL) to extract the 2D contours from the obtained 3D shape. In this way, we achieve the in-between creation containing spatial rotation such as hand-flipping, which has not been achieved by general ways of in-between creation method."}
{"_id": "0a464df731945c16d4d97f77dc6d3f22b8a65bd2", "title": "Memory-Efficient Adaptive Optimization for Large-Scale Learning", "text": "Adaptive gradient-based optimizers such as AdaGrad and Adam are among the methods of choice in modern machine learning. These methods maintain second-order statistics of each parameter, thus doubling the memory footprint of the optimizer. In behemoth-size applications, this memory overhead restricts the size of the model being used as well as the number of examples in a mini-batch. We describe a novel, simple, and flexible adaptive optimization method with sublinear memory cost that retains the benefits of per-parameter adaptivity while allowing for larger models and mini-batches. We give convergence guarantees for our method and demonstrate its effectiveness in training very large deep models."}
{"_id": "42ee2f62883a0518d9c897589946d30e360a1425", "title": "Better Approximation of Betweenness Centrality", "text": "Centrality indices are used to classify a graph in important and unimportant vertices. A commonly used index is betweenness centrality which is based on shortest paths. For large graphs it is almost impossible to calculate exact betweenness centrality in appropriate time. The best known algorithm today is based on solving n SSSPs and requires O(nm+ n2 log(n)) time. But in most cases, e.g. on a home computer, resources are limited. Current approximation algorithms extrapolate values by solving only k \u226a n SSSPs. Vertices near the source of a SSSP are often overestimated. We introduce improvements which take the distance to the source into account. Euclidean distance between approximation and exact values is reduced by factor 4 with same runtime. Or runtime is 16 times faster with same Euclidean distance in a standard example, the movie actor network. Other real-world networks show similar promising results."}
{"_id": "b92bdab806509c2813d25058c7ae9408695de3bb", "title": "Am I wasting my time organizing email?: a study of email refinding", "text": "We all spend time every day looking for information in our email, yet we know little about this refinding process. Some users expend considerable preparatory effort creating complex folder structures to promote effective refinding. However modern email clients provide alternative opportunistic methods for access, such as search and threading, that promise to reduce the need to manually prepare. To compare these different refinding strategies, we instrumented a modern email client that supports search, folders, tagging and threading. We carried out a field study of 345 long-term users who conducted over 85,000 refinding actions. Our data support opportunistic access. People who create complex folders indeed rely on these for retrieval, but these preparatory behaviors are inefficient and do not improve retrieval success. In contrast, both search and threading promote more effective finding. We present design implications: current search-based clients ignore scrolling, the most prevalent refinding behavior, and threading approaches need to be extended."}
{"_id": "969c340d8b2a22a640b352fc8ef85ec5e547c4d3", "title": "Partially supervised clustering for image segmentation", "text": "All clustering algorithms process unlabeled data and, consequently, suffer from two problems: (P1) choosing and validating the correct number of clusters and (P2) insuring that algorithmic labels correspond to meaningful physical labels. Clustering algorithms such as hard and fuzzy c-means, based on optimizing sums of squared errors objective functions, suffer from a third problem: (P3) a tendency to recommend solutions that equalize cluster populations. The semi-supervised c-means algorithms introduced in this paper attempt to overcome these three problems for problem domains where a few data from each class can be labeled. Segmentation of magnetic resonance images is a problem of this type and we use it to illustrate the new algorithm. Our examples show that the semi-supervised approach provides MRI segmentations that are superior to ordinary fuzzy c-means and to the crisp k-nearest neighbor rule and further, that the new method ameliorates (P1)-(P3). Cluster analysis Fuzzy c-means Partial supervision Image segmentation Magnetic resonance images"}
{"_id": "a4a5ef4f2e937dc407a4089019316b054e8e3043", "title": "Design and Implementation of a Network Security Model for Cooperative Network", "text": "In this paper a design and implementation of a network security model was presented, using routers and firewall. Also this paper was conducted the network security weakness in router and firewall network devices, type of threats and responses to those threats, and the method to prevent the attacks and hackers to access the network. Also this paper provides a checklist to use in evaluating whether a network is adhering to best practices in network security and data confidentiality. The main aim of this research is to protect the network from vulnerabilities, threats, attacks, configuration weaknesses and security policy weaknesses. KeywordsNetwork security, network management, threats, security policy Received December 17, 2008; Accepted March 2 2009"}
{"_id": "17f5c7411eeeeedf25b0db99a9130aa353aee4ba", "title": "Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models", "text": "We investigate the task of building open domain, conversational dialogue systems based on large dialogue corpora using generative models. Generative models produce system responses that are autonomously generated word-by-word, opening up the possibility for realistic, flexible interactions. In support of this goal, we extend the recently proposed hierarchical recurrent encoder-decoder neural network to the dialogue domain, and demonstrate that this model is competitive with state-of-the-art neural language models and backoff n-gram models. We investigate the limitations of this and similar approaches, and show how its performance can be improved by bootstrapping the learning from a larger questionanswer pair corpus and from pretrained word embeddings."}
{"_id": "20690b7465e6fef5337f0c9be0a302d33b3c9b3a", "title": "The Dialog State Tracking Challenge", "text": "In a spoken dialog system, dialog state tracking deduces information about the user\u2019s goal as the dialog progresses, synthesizing evidence such as dialog acts over multiple turns with external data sources. Recent approaches have been shown to overcome ASR and SLU errors in some applications. However, there are currently no common testbeds or evaluation measures for this task, hampering progress. The dialog state tracking challenge seeks to address this by providing a heterogeneous corpus of 15K human-computer dialogs in a standard format, along with a suite of 11 evaluation metrics. The challenge received a total of 27 entries from 9 research groups. The results show that the suite of performance metrics cluster into 4 natural groups. Moreover, the dialog systems that benefit most from dialog state tracking are those with less discriminative speech recognition confidence scores. Finally, generalization is a key problem: in 2 of the 4 test sets, fewer than half of the entries out-performed simple baselines. 1 Overview and motivation Spoken dialog systems interact with users via natural language to help them achieve a goal. As the interaction progresses, the dialog manager maintains a representation of the state of the dialog in a process called dialog state tracking (DST). For example, in a bus schedule information system, the dialog state might indicate the user\u2019s desired bus route, origin, and destination. Dialog state tracking is difficult because automatic speech \u2217Most of the work for the challenge was performed when the second and third authors were with Honda Research Institute, Mountain View, CA, USA recognition (ASR) and spoken language understanding (SLU) errors are common, and can cause the system to misunderstand the user\u2019s needs. At the same time, state tracking is crucial because the system relies on the estimated dialog state to choose actions \u2013 for example, which bus schedule information to present to the user. Most commercial systems use hand-crafted heuristics for state tracking, selecting the SLU result with the highest confidence score, and discarding alternatives. In contrast, statistical approaches compute scores for many hypotheses for the dialog state (Figure 1). By exploiting correlations between turns and information from external data sources \u2013 such as maps, bus timetables, or models of past dialogs \u2013 statistical approaches can overcome some SLU errors. Numerous techniques for dialog state tracking have been proposed, including heuristic scores (Higashinaka et al., 2003), Bayesian networks (Paek and Horvitz, 2000; Williams and Young, 2007), kernel density estimators (Ma et al., 2012), and discriminative models (Bohus and Rudnicky, 2006). Techniques have been fielded which scale to realistically sized dialog problems and operate in real time (Young et al., 2010; Thomson and Young, 2010; Williams, 2010; Mehta et al., 2010). In end-to-end dialog systems, dialog state tracking has been shown to improve overall system performance (Young et al., 2010; Thomson and Young, 2010). Despite this progress, direct comparisons between methods have not been possible because past studies use different domains and system components, for speech recognition, spoken language understanding, dialog control, etc. Moreover, there is little agreement on how to evaluate dialog state tracking. 
Together these issues limit progress in this research area. The Dialog State Tracking Challenge (DSTC) provides a first common testbed and evaluation"}
{"_id": "cfdef0cd7ec53868c600005ec74a4a34f063a004", "title": "Partially observable Markov decision processes for spoken dialog systems", "text": "In a spoken dialog system, determining which action a machine should take in a given situation is a difficult problem because automatic speech recognition is unreliable and hence the state of the conversation can never be known with certainty. Much of the research in spoken dialog systems centres on mitigating this uncertainty and recent work has focussed on three largely disparate techniques: parallel dialog state hypotheses, local use of confidence scores, and automated planning. While in isolation each of these approaches can improve action selection, taken together they currently lack a unified statistical framework that admits global optimization. In this paper we cast a spoken dialog system as a partially observable Markov decision process (POMDP). We show how this formulation unifies and extends existing techniques to form a single principled framework. A number of illustrations are used to show qualitatively the potential benefits of POMDPs compared to existing techniques, and empirical results from dialog simulations are presented which demonstrate significant quantitative gains. Finally, some of the key challenges to advancing this method \u2013 in particular scalability \u2013 are briefly outlined. 2006 Elsevier Ltd. All rights reserved."}
{"_id": "7760b2869f83ff04e595a39a0070283d2ea31c29", "title": "Human balance and posture control during standing and walking", "text": "The fact that we as humans are bipeds and locomote over the ground with one foot in contact (walking), no feet in contact (running), or both feet in contact (standing) creates a major challenge to our balance control system. Because two-thirds of our body mass is located two-thirds of body height above the ground we are an inherently unstable system unless a control system is continuously acting. The purpose of this review is to introduce the reader to the basic anatomy, mechanics, and neuromuscular control that is seen in the current literature related to balance control. The degeneration of the balance control system in the elderly and in many pathologies has forced researchers and clinicians to understand more about how the system works and how to quantify its status at any point in time. With the increase in our ageing population and with increased life expectancy of our elderly the importance of maintaining mobility is becoming ever more critical. Injuries and loss of life due to falls in the elderly is a very major factor facing them. Table 1 compares the deaths due to falls with the deaths due to motor vehicle accidents (MVA) as a function of age as reported by Statistics Canada in 1991. The startling fact is that the deaths due to falls in the SO+ population is almost as high as the MVA deaths in the accident-prone l5-29-year group. However, when we look at the deaths per 100 000 for these two causes the figures are even more startling. The young 15-29-year-old group had 21.5 MVA deaths per 100 000 while the elderly deaths"}
{"_id": "9d622784f3599d34ed5ede0140272006df341ebb", "title": "Math Anxiety : Personal , Educational , and Cognitive Consequences", "text": "Highly math-anxious individuals are characterized by a strong tendency to avoid math, which ultimately undercuts their math competence and forecloses important career paths. But timed, on-line tests reveal math-anxiety effects on w h o l e-n u m b e r a r i t h m e t i c p r o b l e m s (e. g. , 4 6 \u03e9 2 7) , whereas achievement tests show no competence differences. Math anxiety disrupts cognitive processing by compromising ongoing activity in working memory. Although the causes of math anxiety are undetermined, some teaching styles are implicated as risk factors. We need research on the origins of math anxiety and on its \" signature \" in brain activity , to examine both its emo-t i o n a l a n d i t s c o g n i t i v e components. My graduate assistant recently told me about a participant he had tested in the lab. She exhibited increasing discomfort and nervousness as the testing session progressed , eventually becoming so distraught that she burst into tears. My assistant remarked that many of our participants show some un-ease or apprehension during test-ing\u2014trembling hands, nervous laughter, and so forth. Many ask, defensively, if their performance says anything about their overall intelligence. These occasionally extreme emotional reactions are not triggered by deliberately provocative procedures\u2014there are no personally sensitive questions or intentional manipulations of stress. Instead, we merely ask college adults to solve elementary-school arithmetic problems, such as 46 \u03e9 18 \u03ed ? and 34 \u03ea 19 \u03ed ? The reactions are obvious symptoms of anxiety, in this case math anxiety induced by ordinary arithmetic problems presented in timed tasks. On the one hand, it is almost unbelievable that tests on such fundamental topics can be so upsetting ; knowing that 15 \u03ea 8 \u03ed 7 ought to be as basic as knowing how to spell \" cat. \" On the other hand, U.S. culture abounds with attitudes that foster math anxiety: Math is thought to be inherently difficult (as Barbie dolls used to say, \" Math class is hard \"), aptitude is considered far more important than effort (Geary, 1994, chap. 7), and being good at math is considered relatively unimportant, or even optional. In this article, I discuss what has been learned about math anxiety across the past 30 years or so, and suggest \u2026"}
{"_id": "9c22a8108c96c39ea930a5123ac1f18b6f4bbc60", "title": "Enabling Public Verifiability and Data Dynamics for Storage Security", "text": "Cloud Computing has been envisioned as the next-generation architecture of IT Enterprise. It moves the application software and databases to the centralized large data centers, where the management of the data and services may not be fully trustworthy. This unique paradigm brings about many new security challenges, which have not been well understood. This work studies the problem of ensuring the integrity of data storage in Cloud Computing. In particular, we consider the task of allowing a third party auditor (TPA), on behalf of the cloud client, to verify the integrity of the dynamic data stored in the cloud. The introduction of TPA eliminates the involvement of client through the auditing of whether his data stored in the cloud is indeed intact, which can be important in achieving economies of scale for Cloud Computing. The support for data dynamics via the most general forms of data operation, such as block modification, insertion and deletion, is also a significant step toward practicality, since services in Cloud Computing are not limited to archive or backup data only. While prior works on ensuring remote data integrity often lacks the support of either public verifiability or dynamic data operations, this paper achieves both. We first identify the difficulties and potential security problems of direct extensions with fully dynamic data updates from prior works and then show how to construct an elegant verification scheme for seamless integration of these two salient features in our protocol design. In particular, to achieve efficient data dynamics, we improve the Proof of Retrievability model [1] by manipulating the classic Merkle Hash Tree (MHT) construction for block tag authentication. Extensive security and performance analysis show that the proposed scheme is highly efficient and provably secure."}
{"_id": "0dbb09d840e91c4054005af999b8c94d6bc987ed", "title": "The Effect of Noise on Sofware Engineers' Performance", "text": "Background: Noise, de\u0080ned as an unwanted sound, is one of the commonest factors that could a\u0082ect people\u2019s performance in their daily work activities. \u008ce so\u0089ware engineering research community has marginally investigated the e\u0082ects of noise on so\u0089ware engineers\u2019 performance. Aims: We studied if noise a\u0082ects so\u0089ware engineers\u2019 performance in: (i) comprehending functional requirements and (ii) \u0080xing faults in source code. Method: We conducted two experiments with \u0080nal-year undergraduate students in Computer Science. In the \u0080rst experiment, we asked 55 students to comprehend functional requirements exposing them or not to noise, while in the second experiment 42 students were asked to \u0080x faults in Java code. Results: \u008ce participants in the second experiment, when exposed to noise, had signi\u0080cantly worse performance in \u0080xing faults in source code. On the other hand, we did not observe any statistically signi\u0080cant di\u0082erence in the \u0080rst experiment. Conclusions: Fixing faults in source code seems to be more vulnerable to noise than comprehending functional requirements."}
{"_id": "c73bc56353b6194c8604035847a434624a2a18fd", "title": "Transcranial Direct Current Stimulation Modulates Neuronal Activity and Learning in Pilot Training", "text": "Skill acquisition requires distributed learning both within (online) and across (offline) days to consolidate experiences into newly learned abilities. In particular, piloting an aircraft requires skills developed from extensive training and practice. Here, we tested the hypothesis that transcranial direct current stimulation (tDCS) can modulate neuronal function to improve skill learning and performance during flight simulator training of aircraft landing procedures. Thirty-two right-handed participants consented to participate in four consecutive daily sessions of flight simulation training and received sham or anodal high-definition-tDCS to the right dorsolateral prefrontal cortex (DLPFC) or left motor cortex (M1) in a randomized, double-blind experiment. Continuous electroencephalography (EEG) and functional near infrared spectroscopy (fNIRS) were collected during flight simulation, n-back working memory, and resting-state assessments. tDCS of the right DLPFC increased midline-frontal theta-band activity in flight and n-back working memory training, confirming tDCS-related modulation of brain processes involved in executive function. This modulation corresponded to a significantly different online and offline learning rates for working memory accuracy and decreased inter-subject behavioral variability in flight and n-back tasks in the DLPFC stimulation group. Additionally, tDCS of left M1 increased parietal alpha power during flight tasks and tDCS to the right DLPFC increased midline frontal theta-band power during n-back and flight tasks. These results demonstrate a modulation of group variance in skill acquisition through an increasing in learned skill consistency in cognitive and real-world tasks with tDCS. Further, tDCS performance improvements corresponded to changes in electrophysiological and blood-oxygenation activity of the DLPFC and motor cortices, providing a stronger link between modulated neuronal function and behavior."}
{"_id": "e49827e5d65953835f2b7c48bdcb0d07e2d801f5", "title": "How to gamify software engineering", "text": "Software development, like any prolonged and intellectually demanding activity, can negatively affect the motivation of developers. This is especially true in specific areas of software engineering, such as requirements engineering, test-driven development, bug reporting and fixing, where the creative aspects of programming fall short. The developers' engagement might progressively degrade, potentially impacting their work's quality."}
{"_id": "8ffe02f01a7a471b5753049762999745bb2a0c65", "title": "A multi-method simulation of a high-frequency bus line", "text": "A methodology to model high-frequency bus lines is proposed and a realistic event-simulation model for such a line in the Netherlands is presented. This simulation model helps policy makers to predict changes that have to be made to bus routes and planned travel times before problems occur. With this model, different passenger growth scenarios can be easily evaluated. The model is validated using different key performance indicators, showing that under some model assumptions, it can realistically simulate real-life scenarios. The simulations workings are illustrated by a case study of passenger growth."}
{"_id": "f2ad525c6748bad505371dad125f5f86c5a17163", "title": "The role of mindfulness in a contextual cognitive-behavioral analysis of chronic pain-related suffering and disability", "text": "An increasing number of studies consider the specific processes by which distressing sensations, thoughts, and emotional experiences exert their influence on the daily functioning of those who suffer with chronic pain. Clinical methods of mindfulness and the processes that underlie them appear to have clear implications in this area, but have not been systematically investigated to this point in time. The purpose of the present study was to examine mindfulness in relation to the pain, emotional, physical, and social functioning of individuals with chronic pain. The present study included 105 consecutive patients attending a clinical assessment for treatment of chronic pain. Each completed a standardized battery of questionnaires, including a measure of mindfulness, the Mindful Attention Awareness Scale [Brown KW, Ryan RM. The benefits of being present: mindfulness and its role in psychological well-being. J Pers Soc Psychol 2003;84:822-48]. Correlation analyses indicated that mindfulness was unrelated to age, gender, education, or chronicity of pain, but was significantly related to multiple measures of patient functioning. In multiple regression analyses, after controlling for patient background variables, pain intensity, and pain-related acceptance, mindfulness accounted for significant variance in measures of depression, pain-related anxiety; physical, psychosocial, and \"other\" disability. In each instance greater mindfulness was associated with better functioning. The combined increments of variance explained from acceptance of pain and mindfulness were at least moderate and, in some cases, appeared potentially meaningful. The behavioral processes of mindfulness and their accessibility to scientific study are considered."}
{"_id": "bc7e8f56ce46bcef13f07840185461a0d5171bca", "title": "Integrative Neuromuscular Training in Youth Athletes. Part II", "text": "THE SECOND PART OF THIS REVIEW PROVIDES A FLEXIBLE APPROACH TO INTEGRATIVE NEUROMUSCULAR TRAINING (INT) WITH THE GOALS TO IMPROVE INJURY RESILIENCE AND TO ENHANCE SPORT AND MOTOR PERFORMANCE ABILITIES IN YOUTH POPULATIONS. THE PROPOSED MODEL OF INT IN THIS MANUSCRIPT PRESENTS 6 ESSENTIAL COMPONENTS: DYNAMIC STABILITY (LOWER LIMB AND CORE), STRENGTH, PLYOMETRICS, COORDINATION, SPEED AND AGILITY, AND FATIGUE RESISTANCE. THE DEVELOPMENT OF THESE 6 CAPACITIES ARE INTEGRAL IN ESTABLISHING AN IMPORTANT FOUNDATION BY INITIALLY DEVELOPING FUNDAMENTAL MOVEMENT SKILL COMPETENCY BEFORE BUILDING UPON THESE SKILLS TO ENRICH SPORTS-SPECIFIC AND ACTIVITY-SPECIFIC SKILL SETS. FOR A VIDEO ABSTRACT OF THIS ARTICLE, SEE SUPPLEMENTAL DIGITAL CONTENT 1 (SEE VIDEO, http://links.lww.com/SCJ/A190)."}
{"_id": "7a8c9d4c6fd88b305d7d99105d2ec0f52bc476ab", "title": "Coping theory and research: past, present, and future.", "text": "In this essay in honor of Donald Oken, I emphasize coping as a key concept for theory and research on adaptation and health. My focus will be the contrasts between two approaches to coping, one that emphasizes style\u2014that is, it treats coping as a personality characteristic\u2014and another that emphasizes process\u2014that is, efforts to manage stress that change over time and are shaped by the adaptational context out of which it is generated. I begin with an account of the style and process approaches, discuss their history briefly, set forth the principles of a process approach, describe my own efforts at measurement, and define coping and its functions from a process standpoint. This is followed by a digest of major generalizations that resulted from coping process research. The essay concludes with a discussion of special issues of coping measurement, in particular, the limitations of both coping style and process approaches and how these limitations might be dealt with. There has been a prodigious volume of coping research in the last decade or two, which I can only touch on very selectively. In this essay, I also ignore a host of important developmental issues that have to do with the emergence of coping and its cognitive and motivational bases in infants, as well as a growing literature on whether, how, and why the coping process changes with aging."}
{"_id": "7fe75f121d52d7b6bde5174b03249746c05f633d", "title": "Sentiment-specific word embedding for Indonesian sentiment analysis", "text": "Word embedding has been proven to improve performance of sentiment classification systems. A new proposed word embedding called sentiment-specific word embedding exploits sentiment information, in order to reach better performance of sentiment analysis systems using word embedding. In this paper, we build Indonesian sentiment-specific word embedding and apply it for sentiment analysis. We compare performance of sentiment classification based on sentiment-specific word embedding with other popular feature representations such as bag of words, Term Frequency-Inverse Document Frequency (TF-IDF), and also generic word embedding. Although our sentiment-specific word embedding models achieve a higher score than the Word2Vec embedding, the lexical representations, bag of words and TF-IDF still gained better performance."}
{"_id": "e6b803d9846bcfacc6c62572faa47655768b85e1", "title": "The Airport Ground Movement Problem : Past and Current Research and Future Directions", "text": "Determining efficient airport operations is an important and critical problem for airports, airlines, passengers and other stakeholders. Moreover, it is likely to become even more so given the traffic increases which are expected over the next few years. The ground movement problem forms the link between other airside problems, such as arrival sequencing, departure sequencing and gate/stand allocation. This paper provides an overview, categorisation and critical examination of the previous research for ground movement and highlights various important open areas of research. Of particular importance is the question of the integration of various airport operations and their relationships which are considered in this paper."}
{"_id": "88d6f9d4e9461081652a3b6f31560d4350b1c1d1", "title": "An Organization Ontology for Enterprise Modelling", "text": "The paper presents our exploration into an organization ontology for the TOVE enterprise model. Its primary focus has been in linking structure and behavior through the concept of empowerment. Empowerment is the right of an organization agent to perform status changing actions. This linkage is critical to the unification of enterprise models and their executability."}
{"_id": "058f6752d85a517aae298586fdf117acdd7560ea", "title": "VL2: a scalable and flexible data center network", "text": "To be agile and cost effective, data centers should allow dynamic resource allocation across large server pools. In particular, the data center network should enable any server to be assigned to any service. To meet these goals, we present VL2, a practical network architecture that scales to support huge data centers with uniform high capacity between servers, performance isolation between services, and Ethernet layer-2 semantics. VL2 uses (1) flat addressing to allow service instances to be placed anywhere in the network, (2) Valiant Load Balancing to spread traffic uniformly across network paths, and (3) end-system based address resolution to scale to large server pools, without introducing complexity to the network control plane. VL2's design is driven by detailed measurements of traffic and fault data from a large operational cloud service provider. VL2's implementation leverages proven network technologies, already available at low cost in high-speed hardware implementations, to build a scalable and reliable network architecture. As a result, VL2 networks can be deployed today, and we have built a working prototype. We evaluate the merits of the VL2 design using measurement, analysis, and experiments. Our VL2 prototype shuffles 2.7 TB of data among 75 servers in 395 seconds - sustaining a rate that is 94% of the maximum possible."}
{"_id": "19d56eebdac34f0848e00c8fce6ee96ba8dad362", "title": "Friendlink: Link prediction in social networks via bounded local path traversal", "text": "Online social networks (OSNs) like Facebook, Myspace, and Hi5 have become popular, because they allow users to easily share content or expand their social circle. OSNs recommend new friends to registered users based on local graph features (i.e. based on the number of common friends that two users share). However, OSNs do not exploit all different length paths of the network. Instead, they consider only pathways of maximum length 2 between a user and his candidate friends. On the other hand, there are global approaches, which detect the overall path structure in a network, being computationally prohibitive for huge-size social networks. In this paper, we provide friend recommendations, also known as the link prediction problem, by traversing all paths of a bounded length, based on the \u201calgorithmic small world hypothesis\u201d. As a result, we are able to provide more accurate and faster friend recommendations. We perform an extensive experimental comparison of the proposed method against existing link prediction algorithms, using two real data sets (Hi5 and Epinions). Our experimental results show that our FriendLink algorithm outperforms other approaches in terms of effectiveness and efficiency in both real data sets."}
{"_id": "35e846afa7e247ed7ff5acc2448d4e766d9183dc", "title": "Fast local search and guided local search and their application to British Telecom's workforce scheduling problem", "text": "This paper reports a Fast Local Search (FLS) algorithm which helps to improve the efficiency of hill climbing and a Guided Local Search (GLS) Algorithm which is developed to help local search to escape local optima and distribute search effort. To illustrate how these algorithms work, this paper describes their application to British Telecom\u2019s workforce scheduling problem, which is a hard real life problem. The effectiveness of FLS and GLS are demonstrated by the fact that they both out-perform all the methods applied to this problem so far, which include simulated annealing, genetic algorithms and constraint logic programming."}
{"_id": "2ac6fa8687e186ae65e0775db8d594660ebe8612", "title": "Implementation and Optimization of the Accelerator Based on FPGA Hardware for LSTM Network", "text": "Today, artificial neural networks (ANNs) are important machine learning methods which are widely used in a variety of applications. As the emerging field of ANNs, recurrent neural networks (RNNs) are often used for sequencerelated applications. And Long Short-Term Memory (LSTM) is an improved RNN which contains complex computational logic. To achieve high accuracy, researchers always build largescale LSTM networks which are time-consuming and powerconsuming. Thus the acceleration of LSTM networks, low power & energy consumption become the hot issues in today's research. In this paper, we present a hardware accelerator for the LSTM neural network layer based on FPGA Zedboard and use pipeline methods to parallelize the forward computing process. To optimize our implementation, we also use multiple methods including tiled matrix-vector multiplication, binary adder tree, and overlap of computation and data access. Through the acceleration and optimization methods, our accelerator is power-efficient and has a better performance than ARM Cortex A9 processor and Intel Core i5 processor."}
{"_id": "0e11777dce8074614a10a4d66d874a6fa9c38e82", "title": "Demonstration of MOS capacitor measurement for wafer manufacturing using a Direct Charge Measurement", "text": "Direct Charge Measurement (DCM) has a capability to improve the capacitance measurement time in parametric test. Through an actual wafer measurement, we have successfully verified that DCM can measure MOS capacitor much faster than an LCR meter while keeping good correlations for wafer manufacturing."}
{"_id": "e174cb593fb199ddc93d151c66710b6b3df91db4", "title": "NB-IoT and LoRA connectivity analysis for M2M/IoT smart grids applications", "text": "The evolution of the new M2M/IoT paradigm suggests new applications in several fields. However, the unique characteristics of M2M communications pose different challenges in terms of connectivity and capacity with respect to the conventional telecommunications networks. In particular, the harmful locations where IoT devices could be deployed and the massive number of the devices to be connected require a different approach with respect the traditional wireless communication. In this paper, we analyzed two different technologies expressly devoted the M2M communication: the Long Term Evolution (LTE) based solution, NB-IoT (Narrow Band \u2014 Internet of Things), and the LPWAN (Low Power Wide Area Network) based solution, LoRa (Low Range). The analysis has been provided taking into account different propagation scenarios and geographical information of the real MNOs (Mobile Network Operators) sites operating in the territory under test. The aim of this work is to check wireless access method of both cellular and proprietary technologies for the novel IoT Smart Grid applications requirements."}
{"_id": "0c50772e92971458402205097a67a2fd015575fd", "title": "A Survey of Context-Aware Mobile Computing Research", "text": "Context-aware computing is a mobile computing paradigm in which applications can discover and take advantage of contextual information (such as user location, time of day, nearby people and devices, and user activity). Since it was proposed about a decade ago, many researchers have studied this topic and built several context-aware applications to demonstrate the usefulness of this new technology. Context-aware applications (or the system infrastructure to support them), however, have never been widely available to everyday users. In this survey of research on context-aware systems and applications, we looked in depth at the types of context used and models of context information, at systems that support collecting and disseminating context, and at applications that adapt to the changing context. Through this survey, it is clear that context-aware research is an old but rich area for research. The difficulties and possible solutions we outline serve as guidance for researchers hoping to make context-aware computing a reality."}
{"_id": "f70f153743dc9586ebc2a6ca9946863c775288cf", "title": "Convolutional Neural Networks for Real-Time Beat Tracking: A Dancing Robot Application", "text": "In this paper a novel approach that adopts Convolutional Neural Networks (CNN) for the Beat Tracking task is proposed. The proposed architecture involves 2 convolutional layers with the CNN filter dimensions corresponding to time and band frequencies, in order to learn a Beat Activation Function (BAF) from a time-frequency representation. The output of each convolutional layer is computed only over the past values of the previous layer, to enable the computation of the BAF in an online fashion. The output of the CNN is post-processed by a dynamic programming algorithm in combination with a bank of resonators for calculating the salient rhythmic periodicities. The proposed method has been designed to be computational efficient in order to be embedded on a dancing NAO robot application, where the dance moves of the choreography are synchronized with the beat tracking output. The proposed system was submitted to the Signal Processing Cup Challenge 2017 and ranked among the top third algorithms."}
{"_id": "b2f2f1c695e3bf2471edfc2bdb0f806b55dd2a7a", "title": "Design of a Compact GPS and SDARS Integrated Antenna for Automotive Applications", "text": "This letter describes a novel compact solution for integrating a Global Positioning System (GPS) and a Satellite Digital Audio Radio Service (SDARS) antenna in a very small volume to satisfy the requirements of the automotive market. The GPS antenna is a classic small commercial ceramic patch, and the SDARS antenna is designed in order to fit in the reduced space without affecting radiation and band performance of the GPS antenna. The SDARS basic geometry is a square-ring microstrip antenna operating in the frequency range 2.320-2.345 GHz with left-hand circular polarization (LHCP), and it is wrapped around the GPS patch, which receives a right-hand circular polarization (RHCP) signal at 1.575 GHz. The overall volume of the presented integrated antenna solution is only 30 x 30 x 7.6 mm3, rendering it attractive for the automotive market."}
{"_id": "20539bcef61b020c77907aa9ffe59e1d85047e57", "title": "A DUAL BAND CIRCULARLY POLARIZED RING ANTENNA BASED ON COMPOSITE RIGHT AND LEFT HANDED METAMATERIALS", "text": "Abstract\u2014In this paper a dual band circularly polarized antenna is designed, fabricated, and measured. Based on the concept of composite right and left handed (CRLH) metamaterials, the same phase constants and current distributions on the ring antenna are achieved at two frequency bands. Thus, the antenna has similar radiation patterns at both bands. The circular polarization is implemented by feeding two vertical ports from power dividers that provide equal magnitudes and quadrature phase excitations."}
{"_id": "d4e74e30c20c390381ce1c55bf71cdd22dbe179b", "title": "Omnidirectional Circularly Polarized Antenna Utilizing Zeroth-Order Resonance of Epsilon Negative Transmission Line", "text": "The omnidirectional circularly polarized (CP) antenna using a circular mushroom structure with curved branches is proposed. The antenna is based on the zeroth-order resonance (ZOR) mode of epsilon negative (ENG) transmission line (TL) to obtain a vertical polarization and an omnidirectional radiation pattern. Also, the horizontal polarization is obtained by the curved branches. The 90\u00b0 phase difference between two orthogonal polarizations is inherently provided by the zeroth-order resonator. Therefore, the antenna has an omnidirectional CP radiation pattern in the azimuthal plane. In addition, this antenna is planar type and simply designed without a dual feeding structure and 90\u00b0 phase shifter. The measured average axial ratio and left-hand (LH) CP gain are 2.03 dB and - 0.40 dBic, respectively, in the azimuthal plane."}
{"_id": "4ae0ddfea80e52e8627e41ca901523390424efe0", "title": "Gaze Direction based Mobile Application for Quadriplegia Wheelchair Control System", "text": "People with quadriplegia recruit the interest of researchers in introducing automated movement systems for adopted special purpose wheelchairs. These systems were introduced for easing the movement of such type of disabled people independently. This paper proposed a comprehensive control system that can control the movement of Quadriplegia wheelchairs using gaze direction and blink detection. The presented system includes two main parts. The first part consists of a smartphone that applies the propose gaze direction detection based mobile application. It then sends the direction commands to the second part via Wi-Fi connection. The second part is a prototype representing the quadriplegia wheelchair that contains robotic car (two-wheel driving car), Raspberry Pi III and ultrasound sensors. The gaze direction commands, sent from the smartphone, are received by the Raspberry Pi for processing and producing the control signals. The ultrasound sensors are fixed at the front and back of the car for performing the emergency stop when obstacles are detected. The proposed system is based on gaze tracking and direction detection without the requirement of calibration with additional sensors and instruments. The obtained results show the superior performance of the proposed system that proves the claim of authors. The accuracy ratio is ranged between 66% and 82% depending on the environment (indoor and outdoor) and surrounding lighting as well as the smart phone type. Keywords\u2014Gaze direction detection; mobile application; obstacle detection; quadriplegia; Raspberry Pi microcomputer"}
{"_id": "44228af54a5c586e659bd282344aac3bbb52f723", "title": "Understanding crowdfunding work: implications for support tools", "text": "Crowdfunding is changing the way people realize their work by providing a new way to gain support from a distributed audience. This study seeks to understand the work of crowdfunding project creators in order to inform the design of crowdfunding support tools and systems. We conducted interviews with 30 project creators from three popular crowdfunding platforms in order to understand what tasks are involved and what tools people use to accomplish crowdfunding work. Initial results suggest that project creators carry out three main types of work-preparing the campaign material, marketing the project, and following through with project goals-and have adapted general support tools to facilitate doing this work. From our initial findings, we hope to improve and design future crowdfunding support tools and systems."}
{"_id": "96853f8184284f55f7315e3f3e73d18658381705", "title": "Delicate manipulations with compliant mechanism and electrostatic adhesion", "text": "Traditional rigid robotic hand manipulator has been used in many field nowadays due to its advantages of large gripping force and stable performance. However, this kind of rigid manipulator is not suitable for gripping fragile objects since it is motorized and force control can be a problem. It is also not suitable to grip object with different shapes since the manipulator is rigid and not compliant. In this study, a novel manipulator with gripping capability is designed and fabricated. The manipulator combines electrostatic adhesion actuation with soft manipulators. The manipulator has high flexibility and can be compliant to different shapes due to the property of the materials. It is very promising to do delicate manipulations in industry field and biomedical field."}
{"_id": "191372fd366ed4e1f6f34b018731c90f617a7396", "title": "A hybrid approach for database intrusion detection at transaction and inter-transaction levels", "text": "Nowadays, information plays an important role in the organizations. Sensitive information is often stored within the database. Traditional mechanisms such as encryption, access control, and authentication cannot provide a high level of confidence. Therefore, the existence of Intrusion Detection Systems in the database is a necessity. In this paper, we propose a type of intrusion detection system for detecting attacks in both database transaction level and inter-transaction level (user task level). For this purpose, we propose a detection method at transaction level, which is based on describing the expected transactions within the database applications. Then at inter-transaction level, we propose a detection method that is based on anomaly detection and uses data mining to find dependency and sequence rules. The advantage of this system compared to the previous database intrusion detection systems is that it can detect malicious behaviors in both transaction and inter-transaction levels. Also, it gains advantages of a hybrid method, including specification-based detection and anomaly detection, to minimize both false positive and false negative errors. In order to evaluate the accuracy of the proposed system, some experiments have been done. The experimental evaluation results show high accuracy and effectiveness of the proposed system."}
{"_id": "3ecc793f0b4f8b7a41ed25f7f3bb34d157329cf1", "title": "Deep Domain Adaptation in Action Space", "text": "In the general settings of supervised learning, human action recognition has been a widely studied topic. The classifiers learned in this setting assume that the training and test data have been sampled from the same underlying probability distribution. However, in most of the practical scenarios, this assumption is not true, resulting in a suboptimal performance of the classifiers. This problem, referred to as Domain Shift, has been extensively studied, but mostly for image/object classification task. In this paper, we investigate the problem of Domain Shift in action videos, an area that has remained under-explored, and propose two new approaches named Action Modeling on Latent Subspace (AMLS) and Deep Adversarial Action Adaptation (DAAA). In the AMLS approach, the action videos in the target domain are modeled as a sequence of points on a latent subspace and adaptive kernels are successively learned between the source domain point and the sequence of target domain points on the manifold. In the DAAA approach, an end-to-end adversarial learning framework is proposed to align the two domains. The action adaptation experiments were conducted using various combinations of multi-domain action datasets, including six common classes of Olympic Sports and UCF50 datasets and all classes of KTH, MSR and our own SonyCam datasets. In this paper, we have achieved consistent improvements over chosen baselines and obtained some state-of-the-art results for the datasets."}
{"_id": "d8a67b13e2d4051dd7a451232314a5d778a1b047", "title": "Parallel Computer Architecture: A Hardware/Software Approach", "text": "enter the circuit several times, possibly via different input processors. More elaborate combinatorial considerations than for standard VLSI circuits are needed to show how communication complexity can be used to get lower bounds on multilective VLSI computations. Near the end of the book, Hromkovic presents relations between communication complexity and some complexity measures of sequential computations. The core section introduces a uniform model of one-way communication protocols and shows that the corresponding uniform one-way communication complexity is strongly related to the size of deterministic finite automata. Then he relates communication complexity to the time and space complexity of Turing machines. Finally, he shows how communication complexity can be used to obtain lower bounds on the size and the depth of decisions nees and branching programs."}
{"_id": "4011e6397aba8369af14f35bbb7d15f7c13b8664", "title": "Sub-Nyquist Radar via Doppler Focusing", "text": "We investigate the problem of a monostatic pulse-Doppler radar transceiver trying to detect targets sparsely populated in the radar's unambiguous time-frequency region. Several past works employ compressed sensing (CS) algorithms to this type of problem but either do not address sample rate reduction, impose constraints on the radar transmitter, propose CS recovery methods with prohibitive dictionary size, or perform poorly in noisy conditions. Here, we describe a sub-Nyquist sampling and recovery approach called Doppler focusing, which addresses all of these problems: it performs low rate sampling and digital processing, imposes no restrictions on the transmitter, and uses a CS dictionary with size, which does not increase with increasing number of pulses P. Furthermore, in the presence of noise, Doppler focusing enjoys a signal-to-noise ratio (SNR) improvement, which scales linearly with P, obtaining good detection performance even at SNR as low as - 25 dB. The recovery is based on the Xampling framework, which allows reduction of the number of samples needed to accurately represent the signal, directly in the analog-to-digital conversion process. After sampling, the entire digital recovery process is performed on the low rate samples without having to return to the Nyquist rate. Finally, our approach can be implemented in hardware using a previously suggested Xampling radar prototype."}
{"_id": "ed74e838bf8b5e5d9134ff15ab17ea9390c2fc90", "title": "The diffusion of misinformation on social media: Temporal pattern, message, and source", "text": "This study examines dynamic communication processes of political misinformation on social media focusing on three components: the temp oral attern, content mutation, and sources of misinformation. We traced the lifecycle of 17 po pular political rumors that circulated on Twitter over 13 months during the 2012 U.S. preside ntial election. Using text analysis based on time series, we found that while false rumors (misi nformation) tend to come back multiple times after the initial publication, true rumors (facts) do not. Rumor resurgence continues, often accompanying textual changes, until the tension aro und the target dissolves. We observed that rumors resurface by partisan news websites that rep ckage the old rumor into news and, gain visibility by influential Twitter users who introdu ce such rumor into the Twittersphere. In this paper, we argue that media scholars should consider the mutability of diffusing information, temporal recurrence of such messages, and the mecha nism by which these messages evolve over time."}
{"_id": "65ba5f2f034e334c1b1d932d798df8f8a684a679", "title": "Autonomous railway crack detector robot for bangladesh: SCANOBOT", "text": "An automatic railway track crack detector system for Bangladesh Railway has been proposed here which aims in building a robot that can detect and analyze any kind of crack on the railway line and send the coordinates of that faulty line to the concerned authority. This robot includes two ultrasonic sensors, GPS, GSM modules, and Arduino Mega based crack detection assembly which is cost effective and robust to facilitate better safety standards in railways. As soon as the robot passed through a crack that might cause the derailment of a train, the ultrasonic sensors sense that and generate a signal. Then this signal is fed into the Arduino Mega. At that point, with the assistance of GSM and GPS modules, an alert SMS consist of the geographic coordinate of that damaged track is sent to the nearby railway authority who can easily take necessary steps to resolve the problem before any major accident occurs. This will save several trains in Bangladesh from an unwanted discontinuity from the rail track."}
{"_id": "f6e58f54b82e19ccf7c397cf8200ebf5160eb73c", "title": "End-to-End Learning of Latent Deformable Part-Based Representations for Object Detection", "text": "Object detection methods usually represent objects through rectangular bounding boxes from which they extract features, regardless of their actual shapes. In this paper, we apply deformations to regions in order to learn representations better fitted to objects. We introduce DP-FCN, a deep model implementing this idea by learning to align parts to discriminative elements of objects in a latent way, i.e. without part annotation. This approach has two main assets: it builds invariance to local transformations, thus improving recognition, and brings geometric information to describe objects more finely, leading to a more accurate localization. We further develop both features in a new model named DP-FCN2.0 by explicitly learning interactions between parts. Alignment is done with an in-network joint optimization of all parts based on a CRF with custom potentials, and deformations are influencing localization through a bilinear product. We validate our models on PASCAL VOC and MS COCO datasets and show significant gains. DP-FCN2.0 achieves state-of-the-art results of 83.3 and 81.2% on VOC 2007 and 2012 with VOC data only."}
{"_id": "9431b1a7453aa43a14579f047873c73e386d2c59", "title": "Suicidal strangulation by double ligature: a case report.", "text": "This article reports a self-strangulation suicide in which a double complex group of ligatures was used. Although similar cases have been referenced in forensic literature, this case is notable because of the unusual method used by the victim. A 61-year-old man was found in a locked room with a double elaborate ligature comprising six wire clothes hangers completely encircling the neck and a black rubber band in a double loop. Autopsy also documented parallel superficial cut lesions in proximal forearm interpreted as hesitation marks. It stresses the importance of characteristics such as analysis of number of ligatures, position of the knots, number of knots and turns of ligature marks and the absence of any defense and relevant internal injuries inherent to suicide ligature strangulation cases."}
{"_id": "0153664ba0f4d37155357c9c76b82bbc786de3ab", "title": "An Evaluation of Distributed Concurrency Control", "text": "Increasing transaction volumes have led to a resurgence of interest in distributed transaction processing. In particular, partitioning data across several servers can improve throughput by allowing servers to process transactions in parallel. But executing transactions across servers limits the scalability and performance of these systems. In this paper, we quantify the effects of distribution on concurrency control protocols in a distributed environment. We evaluate six classic and modern protocols in an in-memory distributed database evaluation framework called Deneva, providing an apples-to-apples comparison between each. Our results expose severe limitations of distributed transaction processing engines. Moreover, in our analysis, we identify several protocol-specific scalability bottlenecks. We conclude that to achieve truly scalable operation, distributed concurrency control solutions must seek a tighter coupling with either novel network hardware (in the local area) or applications (via data modeling and semantically-aware execution), or both."}
{"_id": "2ca7af4bce0979babcdeca008f64ef16bf172d7e", "title": "Hierarchical Self-Organization in Genetic programming", "text": "This paper presents an approach to automatic discovery of functions in Genetic Programming. The approach is based on discovery of useful building blocks by analyzing the evolution trace, generalizing blocks to define new functions, and finally adapting the problem representation onthe-fly. Adaptating the representation determines a hierarchical organization of the extended function set which enables a restructuring of the search space so that solutions can be found more easily. Measures of complexity of solution trees are defined for an adaptive representation framework. The minimum description length principle is applied to justify the feasibilityof approaches based on a hierarchy of discovered functions and to suggest alternative ways of defining a problem\u2019s fitness function. Preliminary empirical results are presented."}
{"_id": "36e7ce710c07071de3c56a433c9eaf4ca6f59c11", "title": "Contactless DC Connector Based on GaN LLC Converter for Next-Generation Data Centers", "text": "An inductively coupled contactless dc connector has been proposed for the next-generation 380-V dc distribution system in data centers. A LLC resonant dc-dc converter topology with gallium nitride (GaN) power transistors has been applied to realize the short-distance highly efficient contactless power transfer. A prototype of a 1.2-kW 384- to 192-V connector has been fabricated and the conversion efficiency of over 95% with the power density of 8.1 W/cm3 has been confirmed experimentally under 1000-kHz operation. The design consideration has been carried out and the potential to achieve 10.0 W/cm3 has been also shown taking the feature of the GaN power device and the characteristics of the magnetic core material for the transformer into account. The contactless dc connector integrates the functioning of an isolated dc-dc converter into a connector for space saving, and the dc current can be cut off without arc because of the inductive coupling. The proposed connector contributes to realizing a highly efficient, space saving, and reliable future 380-V dc distribution system."}
{"_id": "1bb91a73667a660f83285b306b40799ce17affb5", "title": "Deep Sentiment Representation Based on CNN and LSTM", "text": "Traditional machine learning techniques, including support vector machine (SVM), random walk, and so on, have been applied in various tasks of text sentiment analysis, which makes poor generalization ability in terms of complex classification problem. In recent years, deep learning has made a breakthrough in the research of Natural Language Processing. Convolutional neural network (CNN) and recurrent neural networks (RNNs) are two mainstream methods of deep learning in document and sentence modeling. In this paper, a model of capturing deep sentiment representation based on CNN and long short-term memory recurrent neural network (LSTM) is proposed. The model uses the pre-trained word vectors as input and employs CNN to gain significant local features of the text, then features are fed to two-layer LSTMs, which can extract context-dependent features and generate sentence representation for sentiment classification. We evaluate the proposed model by conducting a series of experiments on dataset. The experimental results show that the model we designed outperforms existing CNN, LSTM, CNN-LSTM (our implement of one-layer LSTM directly stacked on one-layer CNN) and SVM (support vector machine)."}
{"_id": "c3482f5eeb6f5e5a3a41e31d1be0b2eac7b8010b", "title": "What Do Customers Crave in Mobile 5G?: A survey spotlights four standout factors.", "text": "Increasing customer demand coupled with limitations in current fourth-generation (4G) wireless mobile networks and advances toward the next-generation wireless mobile network have all together created a need for us to look at what customers today expect from the upcoming fifth-generation (5G) network. This study was conducted among existing 4G users in the United States, the United Kingdom, Singapore, and Australia. The research was completed in two parts: qualitative and quantitative. The quantitative part was accomplished by performing data collection and selection through surveys and data evaluation by structural equation modeling (SEM) using SMARTPLS."}
{"_id": "6a97c80d113c0b965e28b34d6304325a1caabf6c", "title": "Multi-target tracking scheme using a track management table for automotive radar systems", "text": "In this paper, a multiple target tracking scheme for automotive radars is presented. First, a track table for efficient track management is proposed; then, a track management function is designed using a state machine. The proposed scheme can initiate, confirm, and delete all tracks of multiple targets. Moreover, it can determine whether the detected target is a real target or a ghost track, and a terminated track or a sudden missing track. The simulation results demonstrate that the proposed tracking scheme is processed normally."}
{"_id": "e7b6a604cf6fd18215a81b8f9c98a63e8e18e304", "title": "Technochange management: using IT to drive organizational change", "text": "Using IT in ways that can trigger major organizational changes creates high-risk, potentially high-reward, situations that I call technochange (for technology-driven organizational change). Technochange differs from typical IT projects and from typical organizational change programs and therefore requires a different approach. One major risk in technochange\u2014that people will not use information technology and related work practices\u2014is not thoroughly addressed by the discipline of IT project management, which focuses on project cost, project schedule, and solution functionality. Organizational change management approaches are also generally not effective on their own, because they take as a given the IT \u2018\u2018solutions\u2019\u2019 developed by a technical team. Consequently, the potential for the IT \u2018\u2018solution\u2019\u2019 to be misaligned with important organizational characteristics, such as culture or incentives, is great. Merely combining IT project management and organizational change management approaches does not produce the best results, for two reasons. First, the additive approach does not effectively address the many failure-threatening problems that can arise over the lengthy sequential process of the typical technochange lifecycle. Second, the additive approach is not structured to produce the characteristics of a good technochange solution: a complete intervention consisting of IT and complementary organizational changes, an implementable solution with minimal misfits with the existing organization, and an organization primed to appropriate the potential bene\u00a2ts of the technochange solution. With hard work and care, the combined IT project management plus organizational change approach can be made to work. However, an iterative, incremental approach to implementing technochange can be a better strategy in many situations. The essential characteristic of the technochange prototyping approach is that each phase involves both new IT functionality and related organizational changes, such as redesigned business processes, new performance metrics, and training. Journal of Information Technology (2004) 19, 4\u201320. doi:10.1057/palgrave.jit.2000002"}
{"_id": "41a0b5b93d37302264be50f6ce144c1d31c12ec6", "title": "An efficient FPGA-Based architecture for convolutional neural networks", "text": "The goal of this paper is to implement an efficient FPGA-based hardware architectures for the design of fast artificial vision systems. The proposed architecture is capable of performing classification operations of a Convolutional Neural Network (CNN) in realtime. To show the effectiveness of the architecture, some design examples such as hand posture recognition, character recognition, and face recognition are provided. Experimental results show that the proposed architecture is well suited for embedded artificial computer vision systems requiring high portability, high computational speed, and accurate classification."}
{"_id": "a3e31c892c165b293ba4832833e87e5e33d4d5a4", "title": "A BigData approach for classification and prediction of student result using MapReduce", "text": "In recent years the amount of data stored in educational database is growing rapidly. The stored database contains hidden information which if used aids improvement of student's performance and behaviour. In this paper predictive modelling approach is used for extracting this hidden information. Data is collected, a predictive model is formulated, predictions are made, and the model is validated as additional data becomes available. The predictive models will help the instructor to understand how well or how poorly the students in his/her class will perform, and hence the instructor can choose proper pedagogical and instructional interventions to enhance student learning outcomes. The implementation is done in Hadoop framework with MapReduce and Revolutionary R Enterprise RRE."}
{"_id": "7a58abc92dbe41c9e5b3c7b0a358ab9096880f25", "title": "\u201c Proof-of-Work \u201d Proves Not to Work version 0 . 1", "text": "A frequently proposed method of reducing unsolicited bulk email (\u201cspam\u201d) is for senders to pay for each email they send. Proof-ofwork schemes avoid charging real money by requiring senders to demonstrate that they have expended processing time in solving a cryptographic puzzle. We attempt to determine how difficult that puzzle should be so as to be effective in preventing spam. We analyse this both from an economic perspective, \u201chow can we stop it being cost-effective to send spam\u201d, and from a security perspective, \u201cspammers can access insecure end-user machines and will steal processing cycles to solve puzzles\u201d. Both analyses lead to similar values of puzzle difficulty. Unfortunately, realworld data from a large ISP shows that these difficulty levels would mean that significant numbers of senders of legitimate email would be unable to continue their current levels of activity. We conclude that proof-of-work will not be a solution to the problem of spam."}
{"_id": "836ce293d9a7d716b10bfb88cd66606613f13926", "title": "Strong Injection Locking in Low- $Q$ LC Oscillators: Modeling and Application in a Forwarded-Clock I/O Receiver", "text": "A general model for injection-locked LC oscillators (LC-ILOs) is presented that is valid for any tank quality factor and injection strength. Important properties of an ILO such as lock-range, phase shift, bandwidth and response to input jitter are described. An LC-ILO together with a half-rate data sampler is implemented as a forwarded-clock I/O receiver in 45-nm CMOS. A strongly-injected low-Q LC oscillator enables clock deskew across 1UI and rejects high-frequency clock jitter. The complete 27 Gb/s ILO-based data receiver has an overall power efficiency of 1.6 mW/Gb/s."}
{"_id": "a95206390c8ad724e00a5aa228651d8bdca0088a", "title": "A 27Gb/s Forwarded-Clock I/O Receiver Using an Injection-Locked LC-DCO in 45nm CMOS", "text": "This paper describes a method for both filtering and deskewing a link clock using a differential injection-locked LC-DCO and demonstrates a forwarded-clock data receiver using this technique operating at 27 Gb/s."}
{"_id": "13e30c5dccae82477ee5d38e4d9c96b504a13d29", "title": "Acoustic Modeling for Google Home", "text": "This paper describes the technical and system building advances made to the Google Home multichannel speech recognition system, which was launched in November 2016. Technical advances include an adaptive dereverberation frontend, the use of neural network models that do multichannel processing jointly with acoustic modeling, and Grid-LSTMs to model frequency variations. On the system level, improvements include adapting the model using Google Home specific data. We present results on a variety of multichannel sets. The combination of technical and system advances result in a reduction of WER of 8-28% relative compared to the current production system."}
{"_id": "35bef4597f5e514359ff45bea31be8b8239effe1", "title": "Path Reducing Watershed for the GPU", "text": "The watershed transform is a popular image segmentation procedure from mathematical morphology used in many applications of computer vision. This paper proposes a novel parallel watershed procedure designed for GPU implementation. Our algorithm constructs paths of steepest descent and reduces these paths into direct pointers to catchment basin minima in logarithmic time, also crucially incorporating successful resolution of plateaux. Three implementation variants and their parameters are analysed through experiments on 2D and 3D images; a comparison against the state-of-the-art shows a runtime improvement of around 30%. For 3D images of 128 megavoxels execution times of approximately 1.5\u20132 seconds are achieved."}
{"_id": "43dd2df0a123fedb022d353892713cd2eec7a6b4", "title": "Learning-based approach for online lane change intention prediction", "text": "Predicting driver behavior is a key component for Advanced Driver Assistance Systems (ADAS). In this paper, a novel approach based on Support Vector Machine and Bayesian filtering is proposed for online lane change intention prediction. The approach uses the multiclass probabilistic outputs of the Support Vector Machine as an input to the Bayesian filter, and the output of the Bayesian filter is used for the final prediction of lane changes. A lane tracker integrated in a passenger vehicle is used for real-world data collection for the purpose of training and testing. Data from different drivers on different highways were used to evaluate the robustness of the approach. The results demonstrate that the proposed approach is able to predict driver intention to change lanes on average 1.3 seconds in advance, with a maximum prediction horizon of 3.29 seconds."}
{"_id": "9b252a3318efbe39f95520d1d03a787059430035", "title": "On the Role of Seed Lexicons in Learning Bilingual Word Embeddings", "text": "A shared bilingual word embedding space (SBWES) is an indispensable resource in a variety of cross-language NLP and IR tasks. A common approach to the SBWES induction is to learn a mapping function between monolingual semantic spaces, where the mapping critically relies on a seed word lexicon used in the learning process. In this work, we analyze the importance and properties of seed lexicons for the SBWES induction across different dimensions (i.e., lexicon source, lexicon size, translation method, translation pair reliability). On the basis of our analysis, we propose a simple but effective hybrid bilingual word embedding (BWE) model. This model (HYBWE) learns the mapping between two monolingual embedding spaces using only highly reliable symmetric translation pairs from a seed document-level embedding space. We perform bilingual lexicon learning (BLL) with 3 language pairs and show that by carefully selecting reliable translation pairs our new HYBWE model outperforms benchmarking BWE learning models, all of which use more expensive bilingual signals. Effectively, we demonstrate that a SBWES may be induced by leveraging only a very weak bilingual signal (document alignments) along with monolingual data."}
{"_id": "9d7b00532ca316eb851fc844f8362fb900694dff", "title": "Beyond traditional DTN routing: social networks for opportunistic communication", "text": "This article examines the evolution of routing protocols for intermittently connected ad hoc networks and discusses the trend toward socialbased routing protocols. A survey of current routing solutions is presented, where routing protocols for opportunistic networks are classified based on the network graph employed. The need to capture performance trade-offs from a multi-objective perspective is highlighted."}
{"_id": "2d5660ce23fd9efa3ed2512605bbbae3f9ab7b3a", "title": "Modeling and animating eye blinks", "text": "Facial animation often falls short in conveying the nuances present in the facial dynamics of humans. In this article, we investigate the subtleties of the spatial and temporal aspects of eye blinks. Conventional methods for eye blink animation generally employ temporally and spatially symmetric sequences; however, naturally occurring blinks in humans show a pronounced asymmetry on both dimensions. We present an analysis of naturally occurring blinks that was performed by tracking data from high-speed video using active appearance models. Based on this analysis, we generate a set of key-frame parameters that closely match naturally occurring blinks. We compare the perceived naturalness of blinks that are animated based on real data to those created using textbook animation curves. The eye blinks are animated on two characters, a photorealistic model and a cartoon model, to determine the influence of character style. We find that the animated blinks generated from the human data model with fully closing eyelids are consistently perceived as more natural than those created using the various types of blink dynamics proposed in animation textbooks."}
{"_id": "42628ebecf7debccb9717251d7d02c9a5d040ecf", "title": "Emotion Detection From Text Documents", "text": "Emotion Detection is one of the most emerging issues in human computer interaction. A sufficient amount of work has been done by researchers to detect emotions from facial and audio information whereas recognizing emotions from textual data is still a fresh and hot research area. This paper presented a knowledge based survey on emotion detection based on textual data and the methods used for this purpose. At the next step paper also proposed a new architecture for recognizing emotions from text document. Proposed architecture is composed of two main parts, emotion ontology and emotion detector algorithm. Proposed emotion detector system takes a text document and the emotion ontology as inputs and produces one of the six emotion classes (i.e. love, joy, anger, sadness, fear and surprise) as the output."}
{"_id": "1ee177b7e24d00b0d20b26d4c871059d43c3009a", "title": "Honesty on the Streets : A Natural Field Experiment on Newspaper Purchasing", "text": "A publisher uses an honor system for selling a newspaper in the street. The customers are supposed to pay, but they can also pay less than the price or not pay at all. We conduct an experiment to study honesty in this market. The results show that appealing to honesty increases payments, whereas reminding the customers of the legal norm has no effect. Furthermore, appealing to honesty does not affect the behavior of the dishonest. These findings suggest that some people have internalized an honesty norm, whereas others have not, and that the willingness to pay to obey the norm differs among individuals. In a follow-up survey study we find that honesty is associated with family characteristics, self-esteem, social connectedness, trust in the legal system, and compliance with tax regulations."}
{"_id": "1c3aecadabc2a78c29de4bc5d0b796c00645e6bf", "title": "A hybrid edge-cloud architecture for reducing on-demand gaming latency", "text": "The cloud was originally designed to provide general-purpose computing using commodity hardware and its focus was on increasing resource consolidation as a means to lower cost. Hence, it was not particularly adapted to the requirements of multimedia applications that are highly latency sensitive and require specialized hardware, such as graphical processing units. Existing cloud infrastructure is dimensioned to serve general-purpose workloads and to meet end-user requirements by providing high throughput. In this paper, we investigate the effectiveness of using this general-purpose infrastructure for serving latency-sensitive multimedia applications. In particular, we examine on-demand gaming, also known as cloud gaming, which has the potential to change the video game industry. We demonstrate through a large-scale measurement study that the existing cloud infrastructure is unable to meet the strict latency requirements necessary for acceptable on-demand game play. Furthermore, we investigate the effectiveness of incorporating edge servers, which are servers located near end-users (e.g., CDN servers), to improve end-user coverage. Specifically, we examine an edge-server-only infrastructure and a hybrid infrastructure that consists of using edge servers in addition to the cloud. We find that a hybrid infrastructure significantly improves the number of end-users served. However, the number of satisfied end-users in a hybrid deployment largely depends on the various deployment parameters. Therefore, we evaluate various strategies that determine two such parameters, namely, the location of on-demand gaming servers and the games that are placed on these servers. We find that, through both a careful selection of on-demand gaming servers and the games to place on these servers, we significantly increase the number of end-users served over the basic random selection and placement strategies."}
{"_id": "d8ed29985999ca9642e7940fbf7484f544df54e5", "title": "Explainable , Normative , and Justified Agency", "text": "In this paper, we pose a new challenge for AI researchers \u2013 to develop intelligent systems that support justified agency. We illustrate this ability with examples and relate it to two more basic topics that are receiving increased attention \u2013 agents that explain their decisions and ones that follow societal norms. In each case, we describe the target abilities, consider design alternatives, note some open questions, and review prior research. After this, we return to justified agency, offering a hypothesis about its relation to explanatory and normative behavior. We conclude by proposing testbeds and experiments to evaluate this empirical claim and encouraging other researchers to contribute to this crucial area. 1 Background and Motivation Autonomous artifacts, from self-driving cars to drones to household robots, are becoming more widely adopted, and this trend seems likely to accelerate in the future. The increasing reliance on these devices has raised concerns about our ability to understand their behavior and our capacity to ensure their safety. Before intelligent agents can gain widespred acceptance in society, they must be able to communicate their decision making to humans in ways that convince us they actually share our aims. This challenge involves two distinct but complementary issues. The first is the need for agents to explain the reasons they carried out particular courses of action in terms that we can understand. Langley et al. (2017) have referred to this as explainable agency. The second is the need for assurance that, when agents pursue explicit goals, they will also follow the many implicit rules of society. We will refer to this ability as normative agency. Both of these functions are necessary underpinnings of trustable autonomous systems, and two capabilities are closely intertwined. Consider a scenario in which a person drives a friend with a ruptured appendix to the hospital. The driver exceeds the speed limit, weaves in and out of traffic, runs through red lights, and even drives on a sidewalk, although he is still careful to avoid hitting other cars or losing control on turns. Afterwards, the driver explains that he took such drastic actions because he thought the passenger\u2019s life was in danger, Copyright c \u00a9 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. so reaching the hospital in short order had higher priority than being polite to others along the way or obeying traffic laws. Humans can defend their behavior in this manner even when they violate serious societal norms, and we want intelligent agents of the future to exhibit the same ability to justify their decisions and actions on demand. We discuss this challenge in the sections that follow, arguing that such justified agency is an important topic that deserves substantial attention from AI researchers. We first examine the two related topics of explainable agency and normative agency. In each case, we describe the desired abilities, offer illustrative examples, and touch on relevant research. We also consider the space of agent designs, including some open issues that require additional work. After this, we turn to justified agency, arguing that it combines the first two abilities and proposing a hypothesis about what else is required to support it. 
We close by considering some testbeds and experiments that would let the research community evaluate progress in this critical arena. 2 Explainable Agency People can usually explain their reasons for making decisions and taking actions. When someone purchases a microwave oven, rearranges furniture in a room, or plans a vacation, he can state the choices considered, why he selected one alternative over the others, and even how his decision might have differed in other circumstances. Retrospective reports are seldom perfect, but they often offer important windows into the decision process. As intelligent agents become both more autonomous and more prevalent, it is essential that they support similar abilities. We will say that: • An intelligent system exhibits explainable agency if it can provide, on request, the reasons for its activities. Consider some examples from the realm of autonomous vehicles. If we ask a self-driving car why it followed a given route, it should be able to state that the path had few lights and stop signs while still being reasonably short. More importantly, when we ask the vehicle why it swerved into another car, it should explain that it was the only way to avoid hitting an unexpected jaywalker. Like humans, such agents should have reasons for their actions and they should be able to communicate them to others when asked. Tackling this problem requires that we make design decisions about representations and processes needed to support explainable agency, some of which we will borrow from Langley et al.’s (2017) analysis. One issue concerns what will count as legitimate explanations. Should plausible post hoc rationalizations be acceptable if an agent’s decision-making procedures are not interpretable, or should we require genuine insights into why the agent took its actions? We argue that only the latter should be viewed as reasonable accounts, which implies that the agents should make decisions in ways that are transparent and easily communicated. There are many well-established methods in the AI arsenal that meet this criterion, but opaque policies induced from large training sets will not suffice. Another design choice involves whether to explain activities in terms of individual actions or higher-level structures. Research in machine learning has emphasized reactive control (e.g., Erwig et al., 2018), which supports the first alternative, whereas AI planning systems make choices about entire sequences of actions. We predict that humans will find plan-oriented explanations more familiar and easier to understand, and thus offer a natural approach to adopt. Even when a plan goes awry during execution, an agent can still give the reasons it decided to change course and how its new plan responded to the surprise. Some frameworks, like hierarchical task networks, also specify plans at multiple levels of abstraction, which would let a system offer more or less detailed explanations, down to individual actions if desired. Any intelligent agent must use information to make decisions about which activities to pursue, and a third issue concerns how to encode this content. One response, widely adopted in AI planning research, relies on symbolic goals, often stated as logical expressions. Another option, popular in game-playing systems, instead uses numeric evaluation functions that comprise sets of weighted features. 
Each approach has advantages for explainable agency: goals provide explicit end points for chains of actions, while functions show how such plans handle trade-offs. Although these typically appear in isolation, they can also be combined. For instance, Langley et al. (2016) describe an agent architecture that associates functions with goals, using their weighted sum to guide planning. Such hybrid frameworks offer one promising approach to building explainable agents. Within this design space, we still need research on a number of open issues about explainable agency. These include extending intelligent systems to: • Generate explanatory content. When deciding on courses of action, an agent must consider different alternatives, evaluate them, and select one of the options to pursue. This should take place during generation of plans and during their execution, producing traces that can be used in later explanations of the agent’s activities. • Store generated content. As it makes these decisions, an agent must cache information about the choices that it considered and the reasons that it favored one over the others, in an episodic memory or similar repository. This requires not just retaining the content about decisions, but also indexing it in ways that support later access. • Retrieve stored content. After it has completed an activity, an agent must be able to retrieve decision traces that are relevant to different types of questions. This requires transforming the queries into cues and using them to access content in episodic memory about alternatives considered, their evaluations, and the final choices made. • Communicate retrieved content. Once it has retrieved episodic traces in response to a question, an agent must identify those aspects most relevant to the query, translate them into an understandable form, and share the answer. This should include no more detail than needed to convey the reasons for making the decision under examination. Research on analogical planning (e.g., Jones and Langley, 2005; Veloso et al., 1995) has addressed issues of storage, indexing, and retrieval, but not for the purpose of self-report. Leake (1992) presented a computational theory of explanation, but it focused on accounts of other agents’ behaviors. The AI literature also includes other research relevant to this topic. Early explanation systems recorded inference chains and recounted them when asked to justify their conclusions (Swartout, Paris, and Moore, 1991), with some systems supporting higher-level accounts with meta-level rules (Clancey, 1983), but these did not deal with physical activities. More relevant work comes from Johnson (1994) and van Lent et al. (2004), who developed agents for military missions that recorded their decisions, offered reasons on request, and answered counterfactual queries. However, they dealt with knowledge-guided execution rather than agent-generated plans and, despite linking actions to objectives, did not state why some activities were preferable to others. In more recent work, Briggs and Scheutz (2015) have reported an interactive robot that gives reasons why it cannot carry out some task, drawing on five explanation types, including lack of knowledge and physical ability. Th"}
{"_id": "cef9aef9b73c94eacc55670f3aa8f70329cd4bc6", "title": "Return-Oriented Flush-Reload Side Channels on ARM and Their Implications for Android Devices", "text": "Cache side-channel attacks have been extensively studied on x86 architectures, but much less so on ARM processors. The technical challenges to conduct side-channel attacks on ARM, presumably, stem from the poorly documented ARM cache implementations, such as cache coherence protocols and cache flush operations, and also the lack of understanding of how different cache implementations will affect side-channel attacks. This paper presents a systematic exploration of vectors for flush-reload attacks on ARM processors. flush-reload attacks are among the most well-known cache side-channel attacks on x86. It has been shown in previous work that they are capable of exfiltrating sensitive information with high fidelity. We demonstrate in this work a novel construction of flush-reload side channels on last-level caches of ARM processors, which, particularly, exploits return-oriented programming techniques to reload instructions. We also demonstrate several attacks on Android OS (e.g., detecting hardware events and tracing software execution paths) to highlight the implications of such attacks for Android devices."}
{"_id": "9045fdf362098d725db0ddc4386cde5625bd1b40", "title": "Learning Ontologies for the Semantic Web", "text": "The Semantic Web relies heavily on the formal ontologies that structure underlying data for the purpose of comprehensive and transportable machine understanding. Therefore, the success of the Semantic Web depends strongly on the proliferation of ontologies, which requires fast and easy engineering of ontologies and avoidance of a knowledge acquisition bottleneck. Ontology Learning greatly facilitates the construction of ontologies by the ontology engineer. The vision of ontology learning that we propose here includes a number of complementary disciplines that feed on di erent types of unstructured, semi-structured and fully structured data in order to support a semi-automatic, cooperative ontology engineering process. Our ontology learning framework proceeds through ontology import, extraction, pruning, re nement, and evaluation giving the ontology engineer a wealth of coordinated tools for ontology modeling. Besides of the general framework and architecture, we show in this paper some exemplary techniques in the ontology learning cycle that we have implemented in our ontology learning environment, TextTo-Onto, such as ontology learning from free text, from dictionaries, or from legacy ontologies, and refer to some others that need to complement the complete architecture, such as reverse engineering of ontologies from database schemata or learning from XML documents. 1. ONTOLOGIES FOR THE SEMANTIC WEB Conceptual structures that de ne an underlying ontology are germane to the idea of machine processable data on the Semantic Web. Ontologies are (meta)data schemas, providing a controlled vocabulary of concepts, each with an explicitly de ned and machine processable semantics. By de ning shared and common domain theories, ontologies help both people and machines to communicate concisely, supporting the exchange of semantics and not only syntax. Hence, the Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission by the authors. Semantic Web Workshop 2001 Hongkong, China Copyright by the authors. cheap and fast construction of domain-speci c ontologies is crucial for the success and the proliferation of the Semantic Web. Though ontology engineering tools have become mature over the last decade (cf. [9]), the manual acquisition of ontologies still remains a tedious, cumbersome task resulting easily in a knowledge acquisition bottleneck. Having developed our ontology engineering workbench, OntoEdit, we had to face exactly this issue, in particular we were given questions like Can you develop an ontology fast? (time) Is it di\u00c6cult to build an ontology? (di\u00c6culty) How do you know that you've got the ontology right? (con dence) In fact, these problems on time, di\u00c6culty and con dence that we ended up with were similar to what knowledge engineers had dealt with over the last two decades when they elaborated on methodologies for knowledge acquisition or workbenches for de ning knowledge bases. 
A method that proved extremely beneficial for the knowledge acquisition task was the integration of knowledge acquisition with machine learning techniques [33]. The drawback of these approaches, e.g. the work described in [21], however, was their rather strong focus on structured knowledge or data bases, from which they induced their rules. In contrast, in the Web environment that we encounter when building Web ontologies, the structured knowledge or data base is rather the exception than the norm. Hence, intelligent means for an ontology engineer takes on a different meaning than the (very seminal) integration architectures for more conventional knowledge acquisition [7]. Our notion of Ontology Learning aims at the integration of a multitude of disciplines in order to facilitate the construction of ontologies, in particular machine learning. Because the fully automatic acquisition of knowledge by machines remains in the distant future, we consider the process of ontology learning as semi-automatic with human intervention, adopting the paradigm of balanced cooperative modeling [20] for the construction of ontologies for the Semantic Web. This objective in mind, we have built an architecture that combines knowledge acquisition with machine learning, feeding on the resources that we nowadays find on the syntactic Web, viz. free text, semi-structured text, schema definitions (DTDs), etc. Thereby, modules in our framework serve different steps in the engineering cycle, which here consists of the following five steps (cf. Figure 1): First, existing ontologies are imported and reused by merging existing structures or defining mapping rules between existing structures and the ontology to be established. For instance, [26] describe how ontological structures contained in Cyc are used in order to facilitate the construction of a domain-specific ontology. Second, in the ontology extraction phase major parts of the target ontology are modeled with learning support feeding from web documents. Third, this rough outline of the target ontology needs to be pruned in order to better adjust the ontology to its prime purpose. Fourth, ontology refinement profits from the given domain ontology, but completes the ontology at a fine granularity (also in contrast to extraction). Fifth, the prime target application serves as a measure for validating the resulting ontology [31]. Finally, one may revolve again in this cycle, e.g. for including new domains into the constructed ontology or for maintaining and updating its scope. 2. THE ONTOLOGY LEARNING KILLER APPLICATION Though ontologies and their underlying data in the Semantic Web are envisioned to be reusable for a wide range of possibly unforeseen applications, a particular target application remains the touchstone for a given ontology. In our case, we have been dealing with ontology-based knowledge portals that structure Web content and that allow for structured provisioning and accessing of data [29, 30]. Knowledge portals are information intermediaries for knowledge accessing and sharing on the Web. The development of a knowledge portal consists of the tasks of structuring the knowledge, establishing means for providing new knowledge and accessing the knowledge contained in the portal. A considerable part of development and maintenance of the portal lies in integrating legacy information as well as in constructing and maintaining the ontology in vast, often unknown, terrain. 
For instance, a knowledge portal may focus on the electronics sector, integrating comparative shopping in conjunction with manuals, reports and opinions about current electronic products. The creation of the background ontology for this knowledge portal involves tremendous efforts for engineering the conceptual structures that underlie existing warehouse databases, product catalogues, user manuals, test reports and newsgroup discussions. Correspondingly, ontology structures must be constructed from database schemata, a given product thesaurus (like BMEcat), XML documents and document type definitions (DTDs), and free texts. Still worse, significant parts of these (meta-)data change extremely fast and, hence, require a regular update of the corresponding ontology parts. Thus, very different types of (meta-)data might be useful input for the construction of the ontology. However, in practice one needs comprehensive support in order to deal with this wealth. (Just like Tim Berners-Lee did not foresee online auctions being a common business model of the Web of 2000. Comprehensive support with ontology learning need not necessarily imply the top-notch learning algorithms, but may rely more heavily on appropriate tool support and methodology.) Hence, there comes the need for a range of different techniques: Structured data and meta data require reverse engineering approaches, free text may contribute to ontology learning directly or through information extraction approaches. Semi-structured data may finally require and profit from both. In the following we elaborate on our ontology learning framework. Thereby we approach different techniques for different types of data, showing parts of our architecture, its current status, and parts that may complement our current Text-To-Onto environment. A general overview of ontology learning techniques as well as corresponding references may be found in Section 9. 3. AN ARCHITECTURE FOR ONTOLOGY LEARNING Given the task of constructing and maintaining an ontology for a Semantic Web application, e.g. for an ontology-based knowledge portal that we have been dealing with (cf. [29]), we have produced a wish list of what kind of support we would fancy. 3.1 Ontology Engineering Workbench OntoEdit As core to our approach we have built a graphical user interface to support the ontology engineering process manually performed by the ontology engineer. Here, we offer sophisticated graphical means for manual modeling and refining the final ontology. Different views are offered to the user targeting the epistemological level rather than a particular representation language. However, the ontological structures built there may be exported to standard Semantic Web representation languages, such as OIL and DAML-ONT, as well as our own F-Logic based extensions of RDF(S). In addition, executable representations for constraint checking and application debugging can be generated and then accessed via SilRi, our F-Logic inference engine, that is directly connected with OntoEdit. The sophisticated ontology engineering tools we knew, e.g. the Protégé modeling environment for knowledge-based systems [9], would offer capabilities roughly comparable to OntoEdit. However, given the task of constructing a knowledge portal, we f"}
{"_id": "7c62f5e4a62758f44bd98f087f92b6b6b1f2043b", "title": "Combination features and models for human detection", "text": "This paper presents effective combination models with certain combination features for human detection. In the past several years, many existing features/models have achieved impressive progress, but their performances are still limited by the biases rooted in their self-structures, that is, a particular kind of feature/model may work well for some types of human bodies, but not for all the types. To tackle this difficult problem, we combine certain complementary features/models together with effective organization/fusion methods. Specifically, the HOG features, color features and bar-shape features are combined together with a cell-based histogram structure to form the so-called HOG-III features. Moreover, the detections from different models are fused together with the new proposed weighted-NMS algorithm, which enhances the probable \u201ctrue\u201d activations as well as suppresses the overlapped detections. The experiments on PASCAL VOC datasets demonstrate that, both the HOG-III features and the weighted-NMS fusion algorithm are effective (obvious improvement for detection performance) and efficient (relatively less computation cost): When applied to human detection task with the Grammar model and Poselet model, they can boost the detection performance significantly; Also, when extended to detection of the whole VOC 20 object categories with the deformable part-based model and deep CNN-based model, they still show competitive improvements."}
{"_id": "8f2c1e756fb187f4daff65649c8f3d51c64fe851", "title": "Multimedia, hypermedia, and hypertext: Motivation considered and reconsidered", "text": "Computer-based instruction (CBI) is becoming increasingly popular in the classroom, particularly because the latest technological advancements allow for visually rich and interactive environments. While the inherent nature of CBIs is often thought to engage learners, research examining the role of motivation in learning with these environments has resulted in mixed findings. These findings are further complicated by unique design characteristics of distinct CBIs. This literature review synthesizes research that has examined the role of theoretically-grounded constructs of motivation in the context of three popular CBIs, multimedia, hypermedia, and hypertext. Specifically, this literature review considered empirical studies that examined the effect of these CBIs on motivation, in addition to the effect of motivation on learning outcomes and the learning process within the context of these environments. The literature review concludes with a theoretical consideration of previous research and a discussion of a framework for future directions. 2010 Published by Elsevier Ltd."}
{"_id": "30d0600e9be1ce97a196f32be99b7ef57839b76f", "title": "Ensemble Methods in Data Mining: Improving Accuracy Through Combining Predictions", "text": "Following your need to always fulfil the inspiration to obtain everybody is now simple. Connecting to the internet is one of the short cuts to do. There are so many sources that offer and connect us to other world condition. As one of the products to see in internet, this website becomes a very available place to look for countless ensemble methods in data mining improving accuracy through combining predictions synthesis lectures on data mining and knowledge discovery sources. Yeah, sources about the books from countries in the world are provided."}
{"_id": "489be725fed18a7813d721f0817202a0a3fcf208", "title": "The commercial use of segmentation and predictive modeling techniques for database marketing in the Netherlands", "text": "Although the application of segmentation and predictive modeling is an important topic in the database marketing (DBM) literature, no study has yet investigated the extent of adoption of these techniques. We present the results of a Dutch survey involving 228 database marketing companies. We find that managers tend to rely on intuition and on the long-standing methods RFM and cross-tabulation. Our results indicate that the application of segmentation and response modeling is positively related to company and database size, frequency of customer contact, and the use of a direct channel of distribution. The respondents indicate that future research should focus on models applicable for Internet marketing, long-term effects of direct marketing, irritation from direct marketing offers, and segmentation and predictive modeling techniques. D 2002 Elsevier Science B.V. All rights reserved."}
{"_id": "d15b04f7b4caecb75bd9ed467e603d573e9d4b05", "title": "Design and analysis of an origami continuum manipulation module with torsional strength", "text": "This paper presents an origami-inspired cable-driven continuum manipulator module that offers low-cost, low-volume deployment, light weight, and inherently safe human interaction and collaboration. Each module has a mass of around 110 g and integrates the actuation, sensing, and control sub-systems necessary for operation. The origami structure has 7.311 Nm/rad (0.128 Nm/degree) torsional stiffness while being capable of bending in two directions and changing arc-length down to a fully collapsed state. A maximum contraction of 35 mm and bending angle of 35.5 degrees were achieved with 45 mm arc length. The module is capable of passively supporting a 1-kg mass at its tip, or 4 additional serially connected modules, bending approximately 6 degrees in the worst case. We also show that we can actively compensate for external moments by pre-compressing or pre-bending the module. We utilize an inverse kinematic control scheme and use it for both open and closed loop control following a circular trajectory. Our results indicate that the module motion follows the desired trajectory with an RMS error of 0.681 mm in the horizontal (x-y) plane and 0.373 mm in the z-axis with closed-loop control. We also assembled two origami modules in series and drove them independently, demonstrating the proof-of-concept of a modular origami continuum manipulator."}
{"_id": "d5b3b2a5d668ea53dac4881b012567fc5a4f7f12", "title": "Bayesian optimization for learning gaits under uncertainty", "text": "Designing gaits and corresponding control policies is a key challenge in robot locomotion. Even with a viable controller parametrization, finding near-optimal parameters can be daunting. Typically, this kind of parameter optimization requires specific expert knowledge and extensive robot experiments. Automatic black-box gait optimization methods greatly reduce the need for human expertise and time-consuming design processes. Many different approaches for automatic gait optimization have been suggested to date. However, no extensive comparison among them has yet been performed. In this article, we thoroughly discuss multiple automatic optimization methods in the context of gait optimization. We extensively evaluate Bayesian optimization, a model-based approach to black-box optimization under uncertainty, on both simulated problems and real robots. This evaluation demonstrates that Bayesian optimization is particularly suited for robotic applications, where it is crucial to find a good set of gait parameters in a small number of experiments."}
{"_id": "5040e59c29e62c58cb723042f0cadc3284915859", "title": "A Generative Approach to Audio-Visual Person Tracking", "text": "This paper focuses on the integration of acoustic and visual information for people tracking. The system presented relies on a probabilistic framework within which information from multiple sources is integrated at an intermediate stage. An advantage of the method proposed is that of using a generative approach which supports easy and robust integration of multi source information by means of sampled projection instead of triangulation. The system described has been developed in the EU funded CHIL Project research activities. Experimental results from the CLEAR evaluation workshop are reported."}
{"_id": "d66274419204a068c861ce5a479e82c6fa3806cf", "title": "Position-aware activity recognition with wearable devices", "text": "Reliable human activity recognition with wearable devices enables the development of human-centric pervasive applications.We aim to develop a robust wearable-based activity recognition system for real life situations where the device position is up to the user or where a user is unable to collect initial training data. Consequently, in this work we focus on the problem of recognizing the on-body position of the wearable device ensued by comprehensive experiments concerning subject-specific and cross-subjects activity recognition approaches that rely on acceleration data. We introduce a device localization method that predicts the on-body position with an F-measure of 89% and a cross-subjects activity recognition approach that considers common physical characteristics. In this context, we present a real world data set that has been collected from 15 participants for 8 common activities where they carried 7 wearable devices in different on-body positions. Our results show that the detection of the device position consistently improves the result of activity recognition for common activities. Regarding cross-subjects models, we identified the waist as the most suitable device location at which the acceleration patterns for the same activity across several people are most similar. In this context, our results provide evidence for the reliability of physical characteristics based cross-subjectsmodels. \u00a9 2017 Elsevier B.V. All rights reserved."}
{"_id": "efd2668600954872533239ceb93f5459677d2aac", "title": "Tears or Fears? Comparing Gender Stereotypes about Movie Preferences to Actual Preferences", "text": "This study investigated the accuracy of gender-specific stereotypes about movie-genre preferences for 17 genres. In Study 1, female and male participants rated the extent to which 17 movie genres are preferred by women or men. In Study 2, another sample of female and male participants rated their own preference for each genre. There were three notable results. First, Study 1 revealed the existence of gender stereotypes for the majority of genres (i.e., for 15 of 17 genres). Second, Study 2 revealed the existence of actual gender differences in preferences for the majority of genres (i.e., for 11 of 17 genres). Third, in order to assess the accuracy of gender stereotypes on movie preferences, we compared the results of both studies and found that the majority of gender stereotypes were accurate in direction, but inaccurate in size. In particular, the stereotypes overestimated actual gender differences for the majority of movie genres (i.e., 10 of 17). Practical and theoretical implications of these findings are discussed."}
{"_id": "10802b770eae576fdeff940f20ee1a1e99b406ad", "title": "Rule extraction from autoencoder-based connectionist computational models", "text": "1College of Computer Science, Sichuan Normal University, Chengdu, Sichuan, China 2Developmental Neurocognition Lab, Department of Psychological Sciences, Birkbeck, University of London, London, UK 3College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China Correspondence Juan Yang, College of Computer Science, Sichuan Normal University, Chengdu, Sichuan, China. Email: jkxy_yjuan@sicnu.edu.cn Funding information National Natural Science Foundation of China, Grant/Award Number: (61402309); UK Economic and Social Research Council, Grant/Award Number: RES-062-23-2721 Summary Mapping problems are typical research topics related to natural language learning, and they include not only classification mappings but also nonclassification mappings, such as verbs and their past tenses. Connectionist computational models are one of the most popular approaches for simulating those mapping problems; however, their lack of explanatory ability has prevented them from being further used to understand the language learning process. Therefore, the work of extracting rational rules from a connectionist model is as important as simulating the mapping behaviors. Unfortunately, there is no available technique that can be directly applied in those computational models to simulate nonclassification problems. In this paper, an autoencoder-based connectionist computational model is proposed to derive a rule extraction method that can construct \u201cIf-Then\u201d rational relations with high fidelity for nonclassification mapping problems. To demonstrate its generalizability, this computational model is extended to a modified version to address a multi-label classification mapping problem related to cognitive style prediction. Experiments prove this computational model's simulation ability and its explanatory ability on nonclassification problems by comparing its fidelity performances with those of the classical connectionist computational model (multilayer perceptron artificial neural network), and its similar ability on a multi-label classification problem (Felder-Silverman learning style classification) by comparing its prediction accuracy with those of other rule induction techniques."}
{"_id": "bb0ce42cc8ceb11d94973b8e2658a5f44d738826", "title": "Shape from periodic texture using the spectrogram", "text": "Texture has long been recognized in computer vision as an important monocular shape cue, with texture gradients yielding information on surface orientation. A more recent trend is the analysis of images in terms of local spatial frequencies, where each pixel has associated with it its own spatial frequency distribution. This has proven to be a successful method of reasoning about and exploiting many imaging phenomena. Thinking about both shape-from-texture and local spatial frequency, it seems that texture gradients would cause systematic changes in local frequency, and that these changes could be analyzed to extract shape information. However, there does not yet exist a theory that connects texture, shape, and the detailed behavior of local spatial frequency. We show in this paper how local spatial frequency is related to the surface normal of a textured surface. We find that the Fourier power spectra of any two similarly textured patches on a plane are approximately related to each other by an affine transformation. The transformation parameters are a function of the plane\u2019s surface normal. We use this relationship as the basis of a new algorithm for finding surface normals of textured shapes using the spectrogram, which is one type of local spatial frequency representation. We validate the relationship by testing the algorithm on real textures. By analyzing shape and texture in terms of the local spatial frequency representation, we can exploit the advantages of the representation for the shape-from-texture problem. Specifically, our algorithm requires no feature detection and can give correct results even when the texture is aliased."}
{"_id": "c7a97091a1179c9ff520b1fcbc9a97d1bb2f843b", "title": "Network traffic anomaly detection using clustering techniques and performance comparison", "text": "Real-time network traffic anomaly detection is crucial for the confidentiality, integrity, and security of network information. Machine learning approaches are widely used to distinguish traffic flow outliers based on different anomalies with unique statistical characteristics. K-means clustering and Gaussian Mixture Model (GMM) are effective clustering techniques with many variations and easy to implement. Fuzzy clustering is more flexible than hard clustering and is practical for intrusion detection because of the natural treatment of data using fuzzy clustering. Fuzzy c-means clustering (FCM) is an iteratively optimal algorithm normally based on the least square method to partition data sets, which has high computational overhead. This paper proposes modifications to the objective function and the distance function that reduce the computational complexity of FCM while keeping clustering accurate. A combination of FCM clustering GMM, and feature transformation methods are proposed and a comparison of the related testing results and clustering methods is presented."}
{"_id": "52ece1e929758e9d282e818e8e9985f88570f2dd", "title": "ElasticSwitch: practical work-conserving bandwidth guarantees for cloud computing", "text": "While cloud computing providers offer guaranteed allocations for resources such as CPU and memory, they do not offer any guarantees for network resources. The lack of network guarantees prevents tenants from predicting lower bounds on the performance of their applications. The research community has recognized this limitation but, unfortunately, prior solutions have significant limitations: either they are inefficient, because they are not work-conserving, or they are impractical, because they require expensive switch support or congestion-free network cores.\n In this paper, we propose ElasticSwitch, an efficient and practical approach for providing bandwidth guarantees. ElasticSwitch is efficient because it utilizes the spare bandwidth from unreserved capacity or underutilized reservations. ElasticSwitch is practical because it can be fully implemented in hypervisors, without requiring a specific topology or any support from switches. Because hypervisors operate mostly independently, there is no need for complex coordination between them or with a central controller. Our experiments, with a prototype implementation on a 100-server testbed, demonstrate that ElasticSwitch provides bandwidth guarantees and is work-conserving, even in challenging situations."}
{"_id": "bdcf5c5e058d21e19250c398171854f435d0f352", "title": "Approximate Reverse Carry Propagate Adder for Energy-Efficient DSP Applications", "text": "In this paper, a reverse carry propagate adder (RCPA) is presented. In the RCPA structure, the carry signal propagates in a counter-flow manner from the most significant bit to the least significant bit; hence, the carry input signal has higher significance than the output carry. This method of carry propagation leads to higher stability in the presence of delay variations. Three implementations of the reverse carry propagate full-adder (RCPFA) cell with different delay, power, energy, and accuracy levels are introduced. The proposed structure may be combined with an exact (forward) carry adder to form hybrid adders with tunable levels of accuracy. The design parameters of the proposed RCPA implementations and some hybrid adders realized utilizing these structures are studied and compared with those of the state-of-the-art approximate adders using HSPICE simulations in a 45-nm CMOS technology. The results indicate that employing the proposed RCPAs in the hybrid adders may provide, on average, 27%, 6%, and 31% improvements in delay, energy, and energy-delay-product while providing higher levels of accuracy. In addition, the structure is more resilient to delay variation compared to the conventional approximate adder. Finally, the efficacy of the proposed RCPAs is investigated in the discrete cosine transform (DCT) block of the JPEG compression and finite-impulse response (FIR) filter applications. The investigation reveals 60% and 39% energy saving in the DCT of JPEG and FIR filter, respectively, for the proposed RCPAs."}
{"_id": "5284e8897f3a73ff08da1f2ce744ba652583405a", "title": "Beyond Autograding: Advances in Student Feedback Platforms", "text": "1. SUMMARY Automatic grading of programming assignments has been a feature of computer science courses for almost as long as there have been computer science courses [1]. However, contemporary autograding systems in computer science courses have extended their scope far beyond performing automated assessment to include gamification [2], test coverage analysis [3], managing human-authored feedback, contest adjudication [4], secure remote code execution [5], and more. Many of these individual features have been described and evaluated in the computer science education literature, but little attention has been given to the practical benefits and challenges of using the systems that implement these features in computer science courses."}
{"_id": "89c399e10b8d1fec2fcb0f811c2564b58cdff92a", "title": "The promising future of microalgae: current status, challenges, and optimization of a sustainable and renewable industry for biofuels, feed, and other products", "text": "Microalgae have recently attracted considerable interest worldwide, due to their extensive application potential in the renewable energy, biopharmaceutical, and nutraceutical industries. Microalgae are renewable, sustainable, and economical sources of biofuels, bioactive medicinal products, and food ingredients. Several microalgae species have been investigated for their potential as value-added products with remarkable pharmacological and biological qualities. As biofuels, they are a perfect substitute to liquid fossil fuels with respect to cost, renewability, and environmental concerns. Microalgae have a significant ability to convert atmospheric CO2 to useful products such as carbohydrates, lipids, and other bioactive metabolites. Although microalgae are feasible sources for bioenergy and biopharmaceuticals in general, some limitations and challenges remain, which must be overcome to upgrade the technology from pilot-phase to industrial level. The most challenging and crucial issues are enhancing microalgae growth rate and product synthesis, dewatering algae culture for biomass production, pretreating biomass, and optimizing the fermentation process in case of algal bioethanol production. The present review describes the advantages of microalgae for the production of biofuels and various bioactive compounds and discusses culturing parameters."}
{"_id": "c08ba7374aabc9fbf6a4b88b5e03f744c78b9de8", "title": "Computational Thinking and Expository Writing in the Middle School", "text": "To broaden participation in computing we need to look beyond traditional domains of inquiry and expertise. We present results from a demonstration project in which interactive journalism was used to infuse computational thinking into the standard curriculum and regular classroom experience at a middle school with a diverse population. Outcomes indicate that we were able to develop positive attitudes about computational thinking and programming among students and teachers who did not necessarily view themselves as \u201cmath types.\u201d By partnering with language arts, technology and math teachers at Fisher Middle School, Ewing New Jersey, we introduced the isomorphism between the journalistic process and computational thinking to 7th and 8th graders. An intense summer institute, first with the teachers and then with students recruited from the school, immersed them in the \u201cnewsroom of the future\u201d where they researched and wrote news stories, shot and edited video, and developed procedural animations in Scratch to support their storylines. An afterschool club sustained the experience. The teachers adapted interactive journalism and Scratch programming to enrich standard language arts curriculum and are infusing computational thinking in classroom experiences throughout the school."}
{"_id": "c2748de616247677b44815c6a7477afed5c79084", "title": "Modeling HTTP/2 Speed from HTTP/1 Traces", "text": "With the standardization of HTTP/2, content providers want to understand the benefits and pitfalls of transitioning to the new standard. Using a large dataset of HTTP/1.1 resource timing data from production traffic on Akamai\u2019s CDN, and a model of HTTP/2 behavior, we obtain the distribution of performance differences between the protocol versions for nearly 280,000 downloads. We find that HTTP/2 provides significant performance improvements in the tail, and, for websites for which HTTP/2 does not improve median performance, we explore how optimizations like prioritization and push can improve performance, and how these improvements relate to page structure."}
{"_id": "5c5244811a3400ec9622a879b5ab279fea7a16c3", "title": "Guillain-Barr\u00e9 Syndrome in India: population-based validation of the Brighton criteria.", "text": "OBJECTIVE\nCase definitions of GBS were recently developed in response to the 2009 H1N1 vaccination programme but have undergone limited field testing. We validate the sensitivity of the Brighton Working Group case definitions for Guillain-Barr\u00e9 Syndrome (GBS) using a population-based cohort in India.\n\n\nMETHODS\nThe National Polio Surveillance Unit of India actively collects all cases of acute flaccid paralysis (AFP) in children <15 years old, including cases of GBS. Cases of GBS with available cerebrospinal fluid (CSF) and nerve conduction studies (NCS) results, neurological examination, clinical history, and exclusion of related diagnoses were selected (2002-2003). Relevant data were abstracted and entered into a central database. Sensitivity of the Brighton GBS criteria for level 3 of diagnostic certainty which requires no clinical laboratory testing, level 2 which employs CSF or NCS, and level 1 which employs both, were calculated.\n\n\nRESULTS\n79 cases of GBS (mean age 6.5 years, range 4.0-14.5; 39% female) met the case definition. GBS cases were ascending (79%), symmetrical (85%), and bilateral (100%); involving lower extremity hypotonia (86%) and weakness (100%), upper extremity hypotonia (62%) and weakness (80%), areflexia/hyporeflexia (88%), respiratory muscles (22%), bulbar muscles (22%), and cranial nerves (13%). Four limbs were involved in 80% of cases. Mean time to maximal weakness was 5.2 days (range 0.5-30 days) with nadir GBS disability scores of 3 (7%), 4 (67%), 5 (15%), 6 (10%), or unclear (1%). CSF (mean time to lumbar puncture 29 days) was normal in 29% with cytoalbuminologic dissociation in 65% (mean protein 105 mg/dL, range 10-1000; mean cell count 11/\u03bcL, range 0-220, n=4 with >50 cells/\u03bcL). Significant improvement occurred in 73% whereas death (9%) occurred 6-29 days after sensorimotor symptom onset. The majority of cases (86%) fulfilled Brighton level 3, level 2 (84%), and level 1 (62%) of diagnostic certainty.\n\n\nCONCLUSION\nThe diagnosis of GBS can be made using Brighton Working Group criteria in India with moderate to high sensitivity. Brighton Working Group case definitions are a plausible standard for capturing a majority of cases of GBS in field operations in low income settings during AFP surveillance."}
{"_id": "8a58a1107f790bc07774d18e0184e4bf9d1901ba", "title": "3 D Tracking via Body Radio Reflections by", "text": "This thesis presents WiTrack, a system that tracks the 3D motion of a user from the radio signals reflected off her body. It works even if the person is occluded from the WiTrack device or in a different room. WiTrack does not require the user to carry any wireless device, yet its accuracy exceeds current RF localization systems, which require the user to hold a transceiver. Empirical measurements with a WiTrack prototype show that, on average, it localizes the center of a human body to within a median of 10 to 13 cm in the x and y dimensions, and 21 cm in the z dimension. It also provides coarse tracking of body parts, identifying the direction of a pointing hand with a median of 11.20. WiTrack bridges a gap between RF-based localization systems which locate a user through walls and occlusions, and human-computer interaction systems like Kinect, which can track a user without instrumenting her body, but require the user to stay within the direct line of sight of the device. Thesis Supervisor: Dina Katabi Title: Professor of Computer Science and Engineering"}
{"_id": "221cc713ea68ed3baff7d66faa6b1b4659a40264", "title": "Genetic Ablation of Orexin Neurons in Mice Results in Narcolepsy, Hypophagia, and Obesity", "text": "Orexins (hypocretins) are a pair of neuropeptides implicated in energy homeostasis and arousal. Recent reports suggest that loss of orexin-containing neurons occurs in human patients with narcolepsy. We generated transgenic mice in which orexin-containing neurons are ablated by orexinergic-specific expression of a truncated Machado-Joseph disease gene product (ataxin-3) with an expanded polyglutamine stretch. These mice showed a phenotype strikingly similar to human narcolepsy, including behavioral arrests, premature entry into rapid eye movement (REM) sleep, poorly consolidated sleep patterns, and a late-onset obesity, despite eating less than nontransgenic littermates. These results provide evidence that orexin-containing neurons play important roles in regulating vigilance states and energy homeostasis. Orexin/ataxin-3 mice provide a valuable model for studying the pathophysiology and treatment of narcolepsy."}
{"_id": "3adb53d25aed38f6d8c12e02f47c5eaeb3796a41", "title": "Deep Enhanced Representation for Implicit Discourse Relation Recognition", "text": "Implicit discourse relation recognition is a challenging task as the relation prediction without explicit connectives in discourse parsing needs understanding of text spans and cannot be easily derived from surface features from the input sentence pairs. Thus, properly representing the text is very crucial to this task. In this paper, we propose a model augmented with different grained text representations, including character, subword, word, sentence, and sentence pair levels. The proposed deeper model is evaluated on the benchmark treebank and achieves stateof-the-art accuracy with greater than 48% in 11-way and F1 score greater than 50% in 4-way classifications for the first time according to our best knowledge."}
{"_id": "bb37ce8cab4e56e9d8c20b11f3757a50d13b16d3", "title": "Computational thinking (CT): on weaving it in", "text": "Paul Curzon Queen Mary University of London Mile End, London, E1 4NS UK +44 (20) 7882 5212 pc@dcs.qmul.ac.uk Joan Peckham Harriet Taylor NSF 4201 Wilson Blvd. Arlington, VA 22230, USA (703) 292-7344 jpeckham/htaylor@nsf.gov Amber Settle School of Computing DePaul University 243 S. Wabash Avenue Chicago, IL 60604, USA (312) 362-5324 asettle@cdm.depaul.edu Eric Roberts Computer Science Stanford University 202 Gates Stanford, CA 94305-901 650-723-3642 eroberts@cs.stanford.edu"}
{"_id": "562836843897bcb98801838b44f3e5ff09fe212a", "title": "A Survey of Computation Offloading for Mobile Systems", "text": "Mobile systems have limited resources, such as battery life, network bandwidth, storage capacity, and processor performance. These restrictions may be alleviated by computation of f loading: sending heavy computation to resourceful servers and receiving the results from these servers. Many issues related to offloading have been investigated in the past decade. This survey paper provides an overview of the background, techniques, systems, and research areas for offloading computation. We also describe directions for future research."}
{"_id": "67ab675999e4b11904743070e295bc22476080fe", "title": "Adaptive Computation Offloading from Mobile Devices into the Cloud", "text": "The inherently limited processing power and battery lifetime of mobile phones hinder the possible execution of computationally intensive applications like content-based video analysis or 3D modeling. Offloading of computationally intensive application parts from the mobile platform into a remote cloud infrastructure or nearby idle computers addresses this problem. This paper presents our Mobile Augmentation Cloud Services (MACS) middleware which enables adaptive extension of Android application execution from a mobile client into the cloud. Applications are developed by using the standard Android development pattern. The middleware does the heavy lifting of adaptive application partitioning, resource monitoring and computation offloading. These elastic mobile applications can run as usual mobile application, but they can also reach transparently remote computing resources. Two prototype applications using the MACS middleware demonstrate the benefits of the approach. The evaluation shows that applications, which involve complicated algorithms and large computations, can benefit from offloading with around 95% energy savings and significant performance gains compared to local execution only."}
{"_id": "415f425606c812f26087d39e50c27f0b7c6bbc57", "title": "Computing arbitrary functions of encrypted data", "text": "Suppose that you want to delegate the ability to process your data, without giving away access to it. We show that this separation is possible: we describe a \"fully homomorphic\" encryption scheme that keeps data private, but that allows a worker that does not have the secret decryption key to compute any (still encrypted) result of the data, even when the function of the data is very complex. In short, a third party can perform complicated processing of data without being able to see it. Among other things, this helps make cloud computing compatible with privacy."}
{"_id": "a60ae2987f8f55872e937505c404222b4d4c6f1f", "title": "Developing and Implementing Parametric Human Body Shape Models in Ergonomics Software", "text": "Human figure models included with today\u2019s high-end simulation systems are widely used for ergonomic analysis. Male and female figures can be scaled by inputting standard anthropometric dimensions, such as stature, body weight, and limb lengths to represent a large range of body sizes. However, the shapes of these scaled figures are often unrealistic, particularly for extreme body types, because only a relatively few dimensions of the figures are adjusted. Several software packages have the capability of adjusting the figure to match a body surface scan of an individual, but this capability is insufficient when individuals with a wide range of attributes must be simulated. We present a new figure generation capability in the Jack human modeling software based on a statistical analysis of laser scans of hundreds of men and women. A template mesh based on the Jack figure model was fitted to each scan, and a statistical shape model was created using principal component analysis and regression. The model was integrated within the Jack software to create Jack figures with highly realistic body shapes, usable natively with all simulation and analysis features."}
{"_id": "69e8fe6294561a239742beda9a34753ae7d23d0f", "title": "Adolescent development and the regulation of youth crime.", "text": "Elizabeth Scott and Laurence Steinberg explore the dramatic changes in the law's conception of young offenders between the end of the nineteenth century and the beginning of the twenty-first. At the dawn of the juvenile court era, they note, most youths were tried and punished as if they were adults. Early juvenile court reformers argued strongly against such a view, believing that the justice system should offer young offenders treatment that would cure them of their antisocial ways. That rehabilitative model of juvenile justice held sway until a sharp upswing in youth violence at the end of the twentieth century led both public opinion and public policy toward a view that youths should be held to the same standard of criminal accountability as adults. Lawmakers seemed to lose sight of developmental differences between adolescents and adults. But Scott and Steinberg note that lawmakers and the public appear now to be rethinking their views once more. A justice system that operates on the principle of \"adult time for adult crime\" now seems to many to take too little note of age and immaturity in calculating criminal punishment. In 2005 the United States Supreme Court abolished the juvenile death penalty as cruel and unusual punishment, emphasizing that the immaturity of adolescents made them less culpable than adult criminals. In addition, state legislatures recently have repealed or moderated some of the punitive laws they recently enacted. Meanwhile, observe the authors, public anger has abated and attitudes toward young offenders have softened somewhat. In response to these changes, Scott and Steinberg argue that it is appropriate to reexamine juvenile justice policy and to devise a new model for the twenty-first century. In this article, they propose what they call a developmental model. They observe that substantial new scientific evidence about adolescence and criminal activity by adolescents provides the building blocks for a new legal regime superior to today's policy. They put adolescent offenders into an intermediate legal category-neither children, as they were seen in the early juvenile court era, nor adults, as they often are seen today. They observe that such an approach is not only more compatible than the current regime with basic principles of fairness at the heart of the criminal law, but also more likely to promote social welfare by reducing the social cost of juvenile crime."}
{"_id": "739ae835adaf8589ca5cb78a5d2662001600bd92", "title": "A survey of consensus problems in multi-agent coordination", "text": "As a distributed solution to multi-agent coordination, consensus or agreement problems have been studied extensively in the literature. This paper provides a survey of consensus problems in multi-agent cooperative control with the goal of promoting research in this area. Theoretical results regarding consensus seeking under both time-invariant and dynamically changing information exchange topologies are summarized. Applications of consensus protocols to multiagent coordination are investigated. Future research directions and open problems are also proposed."}
{"_id": "de4117904423645dd43bf263b5cb9994445d9a1e", "title": "Software Security Requirements Gathering Instrument", "text": "Security breaches are largely caused by the vulnerable software. Since individuals and organizations mostly depend on softwares, it is important to produce in secured manner. The first step towards producing secured software is through gathering security requirements. This paper describes Software Security Requirements Gathering Instrument (SSRGI) that helps gather security requirements from the various stakeholders. This will guide the developers to gather security requirements along with the functional requirements and further incorporate security during other phases of software development. We subsequently present case studies that describe the integration of the SSRGI instrument with Software Requirements Specification (SRS) document as specified in standard IEEE 830-1998. Proposed SSRGI will support the software developers in gathering security requirements in detail during requirements gathering phase."}
{"_id": "9068c298b9a163c22406bc5fa84e633e65c3575e", "title": "DESIGN AND IMPLEMENTATION OF A VIRTUAL CLASSROOM SYSTEM", "text": "In the last few decades, education has witnessed some advances in technologies involving computeraided learning that promises to drastically change the methods of teaching and learning. The World Wide Web has played a major role in information storage and dissemination in the educational community. Conventional classroom based teaching involves the delivery of course materials by the lecturer in a particular place at a defined time. Hence it imposes a constraint of time and place on both the instructor and the student. Due to human factor arising from the traditional classroom method, the lecturer may not always be able to put in optimum effort towards preparing and delivering course materials. There may also be inconsistencies in the pedagogy and learning style due to repetitive nature of teaching/learning. The objective of this paper is to develop a virtual classroom system to enhance learning on campus. The system was developed using PHP and MySQL as server side programming and database respectively. The web-based virtual classroom provides a web enabled interactive model for e-learning in which the course material is presented using multimedia and hypermedia."}
{"_id": "df81f40750a45d1da0eef1b7b5e09eb2ac868ff0", "title": "Regenerative braking for electric vehicle based on fuzzy logic control strategy", "text": "In order to recycle more energy in the process of regenerative braking, we design a regenerative braking force calculation controller based on fuzzy logic. The sugeno's interface fuzzy logic controller has three-inputs including the driver's brake requirements, vehicle speed and batteries' SOC and one-output which is the regenerative braking force. To ensure charging safety, the influence of batteries' temperature is also taken into consideration. Besides, we modify the original braking force distribution regulations in the simulation software--ADVISOR. The simulation results verify that the method can not only recycle more energy from braking but also ensure the safety of braking and the batteries."}
{"_id": "5d53c8288ac996420745a21fb8adc14737cf297b", "title": "MATLAB functions to analyze directional (azimuthal) data - II: Correlation", "text": "Data that represent azimuthal directions cannot be analyzed in the same manner as ordinary linear variables that measure, for example, size or quality. This is especially true for correlations between variables. Although many statistical techniques have been developed for dealing with directional data, software is scarce for the unique calculations. This paper describes a MATLAB script, located on the IAMG server, that calculates measures of association (correlations) between two circular variables, or between a circular and a linear variable. The calculations include tests of significance of the associations. These tests use large-sample approximations or resampling methods. r 2005 Elsevier Ltd. All rights reserved."}
{"_id": "2e200af6f2a45a38ba22089f3ef86873176e1193", "title": "Multiscale contrast enhancement for radiographies: Laplacian pyramid versus fast wavelet transform", "text": "Contrast enhancement of radiographies based on a multiscale decomposition of the images recently has proven to be a far more versatile and efficient method than regular unsharp-masking techniques, while containing these as a subset. In this paper, we compare the performance of two multiscale-methods, namely the Laplacian Pyramid and the fast wavelet transform (FWT). We find that enhancement based on the FWT suffers from one serious drawback-the introduction of visible artifacts when large structures are enhanced strongly. By contrast, the Laplacian Pyramid allows a smooth enhancement of large structures, such that visible artifacts can be avoided. Only for the enhancement of very small details, for denoising applications or compression of images, the FWT may have some advantages over the Laplacian Pyramid."}
{"_id": "8c39a91a9fb915dd6b71fa51e8e89c6f913af681", "title": "A Survey of Denial-of-Service and Distributed Denial of Service Attacks and Defenses in Cloud Computing", "text": "Cloud Computing is a computing model that allows ubiquitous, convenient and on-demand access to a shared pool of highly configurable resources (e.g., networks, servers, storage, applications and services). Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) attacks are serious threats to the Cloud services\u2019 availability due to numerous new vulnerabilities introduced by the nature of the Cloud, such as multi-tenancy and resource sharing. In this paper, new types of DoS and DDoS attacks in Cloud Computing are explored, especially the XML-DoS and HTTP-DoS attacks, and some possible detection and mitigation techniques are examined. This survey also provides an overview of the existing defense solutions and investigates the experiments and metrics that are usually designed and used to evaluate their performance, which is helpful for the future research in the domain."}
{"_id": "a90be72735a8179d3efbbbb89098f1e7cdea396b", "title": "A benchmark for comparison of dental radiography analysis algorithms", "text": "Dental radiography plays an important role in clinical diagnosis, treatment and surgery. In recent years, efforts have been made on developing computerized dental X-ray image analysis systems for clinical usages. A novel framework for objective evaluation of automatic dental radiography analysis algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2015 Bitewing Radiography Caries Detection Challenge and Cephalometric X-ray Image Analysis Challenge. In this article, we present the datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of the dental anatomy data repository of bitewing radiographs, the creation of the anatomical abnormality classification data repository of cephalometric radiographs, and the definition of objective quantitative evaluation for comparison and ranking of the algorithms. With this benchmark, seven automatic methods for analysing cephalometric X-ray image and two automatic methods for detecting bitewing radiography caries have been compared, and detailed quantitative evaluation results are presented in this paper. Based on the quantitative evaluation results, we believe automatic dental radiography analysis is still a challenging and unsolved problem. The datasets and the evaluation software will be made available to the research community, further encouraging future developments in this field. (http://www-o.ntust.edu.tw/~cweiwang/ISBI2015/)."}
{"_id": "44035f565cd8cc78d5698b072d5cb2aef12ac49a", "title": "Anatomy of the Facial Danger Zones: Maximizing Safety during Soft-Tissue Filler Injections.", "text": "With limited downtime and immediate results, facial filler injections are becoming an ever more popular alternative to surgical rejuvenation of the face. The results, and the complications, can be impressive. To maximize safety during injections, the authors have outlined general injection principles followed by pertinent anatomy within six different facial danger zones. Bearing in mind the depth and the location of the vasculature within each zone, practitioners can tailor their injection techniques to prevent vessel injury and avoid cannulation."}
{"_id": "1f19532c0e2abbc152c57ee3b6b62eacd514069b", "title": "Feature Generation Using General Constructor Functions", "text": "Most classification algorithms receive as input a set of attributes of the classified objects. In many cases, however, the supplied set of attributes is not sufficient for creating an accurate, succinct and comprehensible representation of the target concept. To overcome this problem, researchers have proposed algorithms for automatic construction of features. The majority of these algorithms use a limited predefined set of operators for building new features. In this paper we propose a generalized and flexible framework that is capable of generating features from any given set of constructor functions. These can be domain-independent functions such as arithmetic and logic operators, or domain-dependent operators that rely on partial knowledge on the part of the user. The paper describes an algorithm which receives as input a set of classified objects, a set of attributes, and a specification for a set of constructor functions that contains their domains, ranges and properties. The algorithm produces as output a set of generated features that can be used by standard concept learners to create improved classifiers. The algorithm maintains a set of its best generated features and improves this set iteratively. During each iteration, the algorithm performs a beam search over its defined feature space and constructs new features by applying constructor functions to the members of its current feature set. The search is guided by general heuristic measures that are not confined to a specific feature representation. The algorithm was applied to a variety of classification problems and was able to generate features that were strongly related to the underlying target concepts. These features also significantly improved the accuracy achieved by standard concept learners, for a variety of classification problems."}
{"_id": "9c74e6e5cd3018118cc37758d01298889b9381e5", "title": "Towards Security as a Service (SecaaS): On the modeling of Security Services for Cloud Computing", "text": "The security of software services accessible via the Internet has always been a crosscutting non-functional requirement of uttermost importance. The recent advent of the Cloud Computing paradigm and its wide diffusion has given birth to new challenges towards the securing of existing Cloud services, by properly accounting the issues related to their delivery models and their usage patterns, and has opened the way to the new concept of Security as a Service(SecaaS), i.e. the ability of developing reusable software services which can be composed with standard Cloud services in order to offer them the suitable security features. In this context, there is a strong need for methods and tools for the modeling of security concerns, as well as for evaluation techniques, for supporting both the comparison of different design choices and the analysis of their impact on the behavior of new services before their actual realization. This paper proposes a meta-model for supporting the modeling of Security Services in a Cloud Computing environment as well as an approach for guiding the identification and the integration of security services within the standard Cloud delivery models. The proposal is exemplified through a case study."}
{"_id": "4b1c78cde5ada664f689e38217b4190e53d5bee7", "title": "Spatio-Temporal Graph Convolutional Networks: A Deep Learning Framework for Traffic Forecasting", "text": "Timely accurate traffic forecast is crucial for urban traffic control and guidance. Due to the high nonlinearity and complexity of traffic flow, traditional methods cannot satisfy the requirements of mid-and-long term prediction tasks and often neglect spatial and temporal dependencies. In this paper, we propose a novel deep learning framework, Spatio-Temporal Graph Convolutional Networks (STGCN), to tackle the time series prediction problem in traffic domain. Instead of applying regular convolutional and recurrent units, we formulate the problem on graphs and build the model with complete convolutional structures, which enable much faster training speed with fewer parameters. Experiments show that our model STGCN effectively captures comprehensive spatio-temporal correlations through modeling multi-scale traffic networks and consistently outperforms state-of-the-art baselines on various real-world traffic datasets."}
{"_id": "42004b6bdf5ea375dfaeb96c1fd6f8f77d908d65", "title": "The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections.", "text": "Internet search rankings have a significant impact on consumer choices, mainly because users trust and choose higher-ranked results more than lower-ranked results. Given the apparent power of search rankings, we asked whether they could be manipulated to alter the preferences of undecided voters in democratic elections. Here we report the results of five relevant double-blind, randomized controlled experiments, using a total of 4,556 undecided voters representing diverse demographic characteristics of the voting populations of the United States and India. The fifth experiment is especially notable in that it was conducted with eligible voters throughout India in the midst of India's 2014 Lok Sabha elections just before the final votes were cast. The results of these experiments demonstrate that (i) biased search rankings can shift the voting preferences of undecided voters by 20% or more, (ii) the shift can be much higher in some demographic groups, and (iii) search ranking bias can be masked so that people show no awareness of the manipulation. We call this type of influence, which might be applicable to a variety of attitudes and beliefs, the search engine manipulation effect. Given that many elections are won by small margins, our results suggest that a search engine company has the power to influence the results of a substantial number of elections with impunity. The impact of such manipulations would be especially large in countries dominated by a single search engine company."}
{"_id": "85295b5c4544b2a5f554e628745dd6c46cd1c41a", "title": "Analysis of Software Defined Network firewall (SDF)", "text": "Software Defined Networks (SDN) technology can alternate the network, unlock critical intelligence, and help deliver the new services and analytics needed to run on-demand applications for today's businesses and consumers. So, SDN has been much request for adoption in mostly data centre. SDN provides network administrators to clear overview of the entire network architecture. It decouples a network's control and forwarding functions such that the physical and logical networks can be treated independently. This method allows network traffic flows to be programmatically and effectively reallocated to meet changing demands. That's what SDN does. Software defined network have too many scarcity like throughput, packet loss and network attack defense. Software defined firewalls helps to overcome lack of SDN and also reduce ratios of attack traffic. Here we analyze some SDF techniques and challenges related to software defined network."}
{"_id": "fd77bd8601a743a4705863010129345f1b15379f", "title": "REVISED NOTE ON LEARNING QUADRATIC ASSIGNMENT WITH GRAPH NEURAL NETWORKS", "text": "Inverse problems correspond to a certain type of optimization problems formulated over appropriate input distributions. Recently, there has been a growing interest in understanding the computational hardness of these optimization problems, not only in the worst case, but in an average-complexity sense under this same input distribution.In this revised note, we are interested in studying another aspect of hardness, related to the ability to learn how to solve a problem by simply observing a collection of previously solved instances. These \u2018planted solutions\u2019 are used to supervise the training of an appropriate predictive model that parametrizes a broad class of algorithms, with the hope that the resulting model will provide good accuracy-complexity tradeoffs in the average sense.We illustrate this setup on the Quadratic Assignment Problem, a fundamental problem in Network Science. We observe that data-driven models based on Graph Neural Networks offer intriguingly good performance, even in regimes where standard relaxation based techniques appear to suffer."}
{"_id": "45f9f16d144998bfa4ed121e986fe3095f48a52e", "title": "The Six Pillars for Building Big Data Analytics Ecosystems", "text": "With almost everything now online, organizations look at the Big Data collected to gain insights for improving their services. In the analytics process, derivation of such insights requires experimenting-with and integrating different analytics techniques, while handling the Big Data high arrival velocity and large volumes. Existing solutions cover bits-and-pieces of the analytics process, leaving it to organizations to assemble their own ecosystem or buy an off-the-shelf ecosystem that can have unnecessary components to them. We build on this point by dividing the Big Data Analytics problem into six main pillars. We characterize and show examples of solutions designed for each of these pillars. We then integrate these six pillars into a taxonomy to provide an overview of the possible state-of-the-art analytics ecosystems. In the process, we highlight a number of ecosystems to meet organizations different needs. Finally, we identify possible areas of research for building future Big Data Analytics Ecosystems."}
{"_id": "6b2a2b5b965e831e425aee45741866f1c7cdfd1c", "title": "Iterative thresholding based image segmentation using 2D improved Otsu algorithm", "text": "Threshold based segmentation methods can effectively separate objects from the background. The conventional Otsu's method can achieve good result only when the histogram of an image has two distinct peaks. In this paper, an iterative thresholding based on 2D improved Otsu method using a novel threshold value recognition function is proposed which is used to find the optimum threshold value in different types of histograms and separate into two classes. For the two classes separated by the above threshold value, mean values are computed. Based on the threshold value and the two mean values, the histogram of an image is divided iteratively into three regions, namely, the Foreground (FG) region, Background (BG) region and To-Be-Processed (TBP) region. The pixels greater than the larger mean belongs to FG region, the pixel which are lesser than the smaller mean belongs to BG region. The pixel that lies between the two mean values belongs to the TBP region and only this region is processed at next iteration, the FG and BG region are not processed further. Again for that TBP region, the novel threshold value recognition function is used to find a new threshold value and then the two mean values are computed to separate into three classes, namely, the FG, BG and TBP region. Then the new TBP region is processed in a similar manner iteratively until the two mean values computed at successive iterations are not changed. Finally all the previously determined foreground regions are combined separately and the background regions are combined separately to give the segmented image. The experimental results show that the proposed method performs well in detecting the objects and has better anti-noise performance compared with the existing methods."}
{"_id": "5d28237872f8597f8e72b65e5a0eaed82123958c", "title": "Understanding the Spatial and Temporal Activity Patterns of Subway Mobility Flows", "text": "In urban transportation systems, mobility flows in the subway system reflect the spatial and temporal dynamics of working days. To investigate the variability of mobility flows, we analyse the spatial community through a series of snapshots of subway stations over sequential periods. Using Shanghai as a case study, we find that the spatial community snapshots reveal dynamic passenger activities. Adopting a dual-perspective, we apply spatial and temporal models separately to explore where and when individuals travel for entertainment. In the two models, microblog topics and spatial facilities such as food venues and entertainment businesses are used to characterise the spatial popularity of each station and people\u2019s travelling perceptions. In the studied case, the city centre is characterised by greater social influence, and it is better described by the spatial model. In the temporal model, shorter travel distances motivate individuals to start their trips earlier. Interestingly, as the number of food-related facilities near the starting station increases, until it exceeds 1563, the speed of people\u2019s journeys slows down. This study provides a method for modelling the effects of social features on mobility flows and for predicting the spatial-temporal mobility flows of newly built subway stations."}
{"_id": "41e2112d71ae72a7f460a31e946216bb07d092a4", "title": "Optimal crawling strategies for web search engines", "text": "Web Search Engines employ multiple so-called crawlers to maintain local copies of web pages. But these web pages are frequently updated by their owners, and therefore the crawlers must regularly revisit the web pages to maintain the freshness of their local copies. In this paper, we propose a two-part scheme to optimize this crawling process. One goal might be the minimization of the average level of staleness over all web pages, and the scheme we propose can solve this problem. Alternatively, the same basic scheme could be used to minimize a possibly more important search engine embarrassment level metric: The frequency with which a client makes a search engine query and then clicks on a returned url only to find that the result is incorrect. The first part our scheme determines the (nearly) optimal crawling frequencies, as well as the theoretically optimal times to crawl each web page. It does so within an extremely general stochastic framework, one which supports a wide range of complex update patterns found in practice. It uses techniques from probability theory and the theory of resource allocation problems which are highly computationally efficient -- crucial for practicality because the size of the problem in the web environment is immense. The second part employs these crawling frequencies and ideal crawl times as input, and creates an optimal achievable schedule for the crawlers. Our solution, based on network flow theory, is exact as well as highly efficient. An analysis of the update patterns from a highly accessed and highly dynamic web site is used to gain some insights into the properties of page updates in practice. Then, based on this analysis, we perform a set of detailed simulation experiments to demonstrate the quality and speed of our approach."}
{"_id": "30a7fcdaa836837d87a8e4702ed015cd66e6ad03", "title": "Neural Network Recognizer for Hand-Written Zip Code Digits", "text": "This paper describes the construction of a system that recognizes hand-printed digits, using a combination of classical techniques and neural-net methods. The system has been trained and tested on real-world data, derived from zip codes seen on actual U.S. Mail. The system rejects a small percentage of the examples as unclassifiable, and achieves a very low error rate on the remaining examples. The system compares favorably with other state-of-the art recognizers. While some of the methods are specific to this task, it is hoped that many of the techniques will be applicable to a wide range of recognition tasks."}
{"_id": "b10440620da8a43a1b97e3da4b1ff13746306475", "title": "Consistent inference of probabilities in layered networks: predictions and generalizations", "text": "The problem of learning a general input-output relation using a layered neural network is discussed in a statistical framework. By imposing the consistency condition that the error minimization be equivalent to a likelihood maximization for training the network, the authors arrive at a Gibbs distribution on a canonical ensemble of networks with the same architecture. This statistical description enables them to evaluate the probability of a correct prediction of an independent example, after training the network on a given training set. The prediction probability is highly correlated with the generalization ability of the network, as measured outside the training set. This suggests a general and practical criterion for training layered networks by minimizing prediction errors. The authors demonstrate the utility of this criterion for selecting the optimal architecture in the continuity problem. As a theoretical application of the statistical formalism, they discuss the question of learning curves and estimate the sufficient training size needed for correct generalization, in a simple example.<>"}
{"_id": "cce178917c44c188d15f27d77fca4e8d37eb28ec", "title": "Inferring Employee Engagement from Social Media", "text": "Employees increasingly are expressing ideas and feelings through enterprise social media. Recent work in CHI and CSCW has applied linguistic analysis towards understanding employee experiences. In this paper, we apply dictionary based linguistic analysis to measure 'Employee Engagement'. Employee engagement is a measure of employee willingness to apply discretionary effort towards organizational goals, and plays an important role in organizational outcomes such as financial or operational results. Organizations typically use surveys to measure engagement. This paper describes an approach to model employee engagement based on word choice in social media. This method can potentially complement surveys, thus providing more real-time insights into engagement and allowing organizations to address engagement issues faster. Our results predicting engagement scores on a survey by combining demographics with social media text demonstrate that social media text has significant predictive power compared to demographic data alone. We also find that engagement may be a state than a stable trait since social media posts closer to the administration of the survey had the most predictive power. We further identify the minimum number of social media posts required per employee for the best prediction."}
{"_id": "00816b774f52031ea160c05181af3251a76220e6", "title": "Chrome Extensions: Threat Analysis and Countermeasures", "text": "The widely popular browser extensions now become one of the most commonly used malware attack vectors. The Google Chrome browser, which implements the principles of least privileges and privilege separation by design, offers a strong security mechanism to protect malicious websites from damaging the whole browser system via extensions. In this study, we however reveal that Chrome\u2019s extension security model is not a panacea for all possible attacks with browser extensions. Through a series of practical bot-based attacks that can be performed even under typical settings, we demonstrate that malicious Chrome extensions pose serious threats, including both information dispersion and harvesting, to browsers. We further conduct an in-depth analysis of Chrome\u2019s extension security model, and conclude that its vulnerabilities are rooted fro m the violation of the principles of least privileges and priv ilege separation. Following these principles, we propose a set of countermeasures that enforce the policies of microprivilege management and differentiating DOM elements. Using a prototype developed on the latest Chrome browser, we show that they can effectively mitigate the threats posed by malicious Chrome extensions with little effect on normal browsing experience."}
{"_id": "5bdb0d7329da678a1324573a13cc7c3c5728578e", "title": "Segmented handwritten text recognition with recurrent neural network classifiers", "text": "Recognition of handwritten text is a useful technique that can be applied in different applications, such as signature recognition, bank check recognition, etc. However, the off-line handwritten text recognition in an unconstrained situation is still a very challenging task due to the high complexity of text strokes and image background. This paper presents a novel segmented handwritten text recognition technique that ensembles recurrent neural network (RNN) classifiers. Two RNN models are first trained that take advantage of the widely used geometrical feature and the Histogram of Oriented Gradient (HOG) feature, respectively. Given a handwritten word image, the optimal recognition result is then obtained by integrating the two trained RNN models together with a lexicon. Experiments on public datasets show the superior performance of our proposed technique."}
{"_id": "e1df8d30462995c299ac33e954dc1715d150cd83", "title": "A Hypercube-Based Indirect Encoding for Evolving Large-Scale Neural Networks", "text": "Research in neuroevolution, i.e. evolving artificial neural networks (ANNs) through evolutionary algorithms, is inspired by the evolution of biological brains. Because natural evolution discovered intelligent brains with billions of neurons and trillions of connections, perhaps neuroevolution can do the same. Yet while neuroevolution has produced successful results in a variety of domains, the scale of natural brains remains far beyond reach. This paper presents a method called Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) that aims to narrow this gap. HyperNEAT employs an indirect encoding called connective Compositional Pattern Producing Networks (connective CPPNs) that can produce connectivity patterns with symmetries and repeating motifs by interpreting spatial patterns generated within a hypercube as connectivity patterns in a lower-dimensional space. The advantage of this approach is that it can exploit the geometry of the task by mapping its regularities onto the topology of the network, thereby shifting problem difficulty away from dimensionality to underlying problem structure. Furthermore, connective CPPNs can represent the same connectivity pattern at any resolution, allowing ANNs to scale to new numbers of inputs and outputs without further evolution . HyperNEAT is demonstrated through visual discrimination and food gathering tasks, including successful visual discrimination networks containing over eight million connections. The main conclusion is that the ability to explore the space of regular connectivity patterns opens up a new class of complex high-dimensional tasks to neuroevolution."}
{"_id": "3715eb263afcd8bb5ed70ad53a6953e3b86a31ef", "title": "Answer Selection in Community Question Answering via Attentive Neural Networks", "text": "Answer selection in community question answering (cQA) is a challenging task in natural language processing. The difficulty lies in that it not only needs the consideration of semantic matching between question answer pairs but also requires a serious modeling of contextual factors. In this letter, we propose an attentive deep neural network architecture so as to learn the deterministic information for answer selection. The architecture can support various input formats through the organization of convolutional neural networks, attention-based long short-term memory, and conditional random fields. Experiments are carried out on the SemEval-2015 cQA dataset. We attain 58.35% on macroaveraged $\\mathrm{F_1}$, which outperforms the Top-1 system in the shared task by 1.16% and improves the state-of-the-art deep-neural-network-based method by 2.21%."}
{"_id": "a93344eac662814a544b4e824a1e7383e31a337d", "title": "Thermodynamics and NMR of internal G.T mismatches in DNA.", "text": "Thermodynamics of 39 oligonucleotides with internal G.T mismatches dissolved in 1 M NaCl were determined from UV absorbance versus temperature profiles. These data were combined with literature values of six sequences to derive parameters for 10 linearly independent trimer and tetramer sequences with G.T mismatches and Watson-Crick base pairs. The G.T mismatch parameters predict DeltaG degrees 37, DeltaH degrees , DeltaS degrees , and TM with average deviations of 5.1%, 7.5%, 8.0%, and 1.4 degrees C, respectively. These predictions are within the limits of what can be expected for a nearest-neighbor model. The data show that the contribution of a single G.T mismatch to helix stability is context dependent and ranges from +1.05 kcal/mol for AGA/TTT to -1.05 kcal/mol for CGC/GTG. Several tests of the applicability of the nearest-neighbor model to G.T mismatches are described. Analysis of imino proton chemical shifts show that structural perturbations from the G.T mismatches are highly localized. One-dimensional NOE difference spectra demonstrate that G.T mismatches form stable hydrogen-bonded wobble pairs in diverse contexts. Refined nearest-neighbor parameters for Watson-Crick base pairs are also presented."}
{"_id": "5e23bd6ef03102013a27c28d4a1bd0f0cc702722", "title": "Comparing Blockchain and Cloud Services for Business Process Execution", "text": "Blockchain is of rising importance as a technology for engineering applications in cross-organizational settings, avoiding reliance on central trusted third-parties. The use of blockchain, instead of traditional databases or services, is an architectural choice in the development of a software system. The costs of execution and storage are important non-functional qualities, but as yet very little has been done to study them for blockchain-based systems. We investigate the cost of using blockchain using business process execution as a lens. Specifically, we compare the cost for computation and storage of business process execution on blockchain vs. a popular cloud service. First, we capture the cost models for both alternatives. Second, we implemented and measured the cost of business process execution on blockchain and cloud services for an example business process model from the literature. We observe two orders of magnitude difference in this cost."}
{"_id": "25d0e335e0fe3e1707354bfccca9b534d7bb4420", "title": "Bipartite graph reinforcement model for web image annotation", "text": "Automatic image annotation is an effective way for managing and retrieving abundant images on the internet. In this paper, a bipartite graph reinforcement model (BGRM) is proposed for web image annotation. Given a web image, a set of candidate annotations is extracted from its surrounding text and other textual information in the hosting web page. As this set is often incomplete, it is extended to include more potentially relevant annotations by searching and mining a large-scale image database. All candidates are modeled as a bipartite graph. Then a reinforcement algorithm is performed on the bipartite graph to re-rank the candidates. Only those with the highest ranking scores are reserved as the final annotations. Experimental results on real web images demonstrate the effectiveness of the proposed model."}
{"_id": "539b70f3b08532fda306258f9039d75f6977d762", "title": "Design and prototyping of 3-phase BLDC motor", "text": "The development of electric vehicle is now growing rapidly. Demands to deliver a reliable and easy to drive in motor control causes Brushless Direct Current (BLDC) motor becomes a potential candidate. A BLDC motor drive is a potential option for an electric vehicle since it has a high reliability, simple design, and ability to work at high rotation per minute (RPM). This paper discussed the Permanent Magnet BLDC Motor design method. The structure of an interior rotor permanent magnet type is selected to be used in the design of Permanent Magnet BLDC motor so that it can be applied in a drive that requires a large torque and capable of acceleration and deceleration with good response. Selection of 12 slots and 8 poles configuration aims for improving the motor performance. The motor is designed and simulated using a software-based Motor Solve FEA (Finite Element Analysis). Based on this design and simulation results, a prototype of BLDC motor is built. Parameters testing as stator resistance, inductances (the d-axis and q-axis inductance), and the back emf constant (Ke) were used to evaluate the result of the design and prototype motor. Measuring the prototype motor's parameters was carried out by several different methods depending on the parameters tested. Stator resistance testing is performed with the measurement of current in the coil which is then obtained by calculating the magnitude of stator resistance as 0.14710296 \u03a9. Measurements of d-axis stator inductance, q-axis stator inductance, and back emf constant of prototype permanent magnet BLDC motor is obtained as results of 0.35304710 mH, 0.38246769 mH and 0.09690626 Vs/rad respectively. The test results between design and prototype testing were quite good. The difference between the test results and the design of the prototype test results was caused by incompatibility of material composition although using the same type of material. The evaluation shows the electromagnetic parameters is influenced by its constituent materials."}
{"_id": "ef681d066c5b17506558f00c33704362f5722259", "title": "Edge Computing \u2013 EDGE 2018", "text": "Edge computing systems (Cloudlet, Fog Computing, Multi-access Edge Computing) provide numerous benefits to information technology: reduced latency, improved bandwidth, battery lifetime, etc. Despite all the benefits, edge computing systems have several issues that could significantly reduce the performance of certain applications. Indeed, current edge computing technologies do not assure ultra-low latency for real-time applications and they encounter overloading issues for data processing. To solve the aforementioned issues, we propose Home Edge Computing (HEC): a new three-tier edge computing architecture that provides data storage and processing in close proximity to the users. The term \u201cHome\u201d in Home Edge Computing does not restrain our work to the homes of the users, we take into account other places where the users could connect to the Internet such as: companies, shopping malls, hospitals, etc. Our three-tier architecture is composed of a Home Server, an Edge Server and a Central Cloud which we also find in traditional edge computing architectures. The Home Server is located within the vicinities of the users which allow the achievement of ultra-low latency for applications that could be processed by the said server; this also help reduce the amount of data that could be treated in the Edge Server and the Central Cloud. We demonstrate the validity of our architecture by leveraging the EdgeCloudSim simulation platform. The results of the simulation show that our proposal can, in fact, help achieve ultra-low latency and reduce overloading issues."}
{"_id": "dd767f7d238668a757709475cf3d158f1d458e73", "title": "Network Decoupling: From Regular to Depthwise Separable Convolutions", "text": "Depthwise separable convolution has shown great efficiency in network design, but requires time-consuming training procedure with full training-set available. This paper first analyzes the mathematical relationship between regular convolutions and depthwise separable convolutions, and proves that the former one could be approximated with the latter one in closed form. We show depthwise separable convolutions are principal components of regular convolutions. And then we propose network decoupling (ND), a training-free method to accelerate convolutional neural networks (CNNs) by transferring pre-trained CNN models into the MobileNet-like depthwise separable convolution structure, with a promising speedup yet negligible accuracy loss. We further verify through experiments that the proposed method is orthogonal to other training-free methods like channel decomposition, spatial decomposition, etc. Combining the proposed method with them will bring even larger CNN speedup. For instance, ND itself achieves about 2\u00d7 speedup for the widely used VGG16, and combined with other methods, it reaches 3.7\u00d7 speedup with graceful accuracy degradation. We demonstrate that ND is widely applicable to classification networks like ResNet, and object detection network like SSD300."}
{"_id": "1b524842a471e157ae6b8a91fbfd52def35d4f8a", "title": "Sixty years of fear appeal research: current state of the evidence.", "text": "Fear arousal is widely used in persuasive campaigns and behavioral change interventions. Yet, experimental evidence argues against the use of threatening health information. The authors reviewed the current state of empirical evidence on the effectiveness of fear appeals. Following a brief overview of the use of fear arousal in health education practice and the structure of effective fear appeals according to two main theoretical frameworks-protection motivation theory and the extended parallel process model-the findings of six meta-analytic studies in the effectiveness of fear appeals are summarized. It is concluded that coping information aimed at increasing perceptions of response effectiveness and especially self-efficacy is more important in promoting protective action than presenting threatening health information aimed at increasing risk perceptions and fear arousal. Alternative behavior change methods than fear appeals should be considered."}
{"_id": "aaaa8fa3554a680842499f9880d084a41c2a1ac6", "title": "The Prediction of Earnings Using Financial Statement Information: Empirical Evidence With Logit Models and Artificial Neural Networks", "text": "In the past three decades, earnings have been one of the most researched variables in accounting. Empirical research provided substantial evidence on its usefulness in the capital markets but evidence in predicting earnings has been limited, yielding inconclusive results. The purpose of this study is to validate and extend prior research in predicting earnings by examining aggregate and industry-specific data. A sample of 10,509 firm-year observations included in the Compustat database for the period 1982-91 is used in the study. The stepwise logistic regression results of the present study indicated that nine earnings and non-earnings variables can be used to predict earnings. These predictor variables are not identical to those reported in prior studies. These results are also extended to the manufacturing industry. Two new variables are identified to be significant in this industry. Moreover, an Artificial Neural Network (ANN) approach is employed to complement the logistic regression results. The ANN model's performance is at least as high as the logistic regression model's predictive ability."}
{"_id": "44f80bcdf36ebb83457dc091c6ceedc1c651af34", "title": "Compact-Size Low-Profile Wideband Circularly Polarized Omnidirectional Patch Antenna With Reconfigurable Polarizations", "text": "A compact-size low-profile wideband circularly polarized (CP) omnidirectional antenna with reconfigurable polarizations is presented in this communication. This design is based on a low-profile omnidirectional CP antenna which consists of a vertically polarized microstrip patch antenna working in TM01/TM02 modes and sequentially bended slots etched on the ground plane for radiating horizontally polarized electric field. The combined radiation from both the microstrip patch and the slots leads to a CP omnidirectional radiation pattern. The polarization reconfigurability is realized by introducing PIN diodes on the slots. By electronically controlling the states of the PIN diodes, the effective orientation of the slots on ground plane can be changed dynamically and the polarization of antenna can be altered between left-hand circular polarization (LHCP) and right-hand circular polarization (RHCP). The proposed antenna exhibits a wide-operational bandwidth of 19.8% (2.09-2.55 GHz) with both axial ratio below 3 dB and return loss above 10 dB when radiates either LHCP or RHCP waves. Experimental results show good agreement with the simulation results. The present design has a compact size, a thickness of only 0.024\u03bb and exhibits stable CP omnidirectional conical-beam radiation patterns within the entire operating frequency band with good circular polarization."}
{"_id": "fd0ff7873f6465c6dd0ed0bdfe4b10c91ce1fa7d", "title": "Substrate Integrated Waveguide Loaded by Complementary Split-Ring Resonators and Its Applications to Miniaturized Waveguide Filters", "text": "A substrate integrated waveguide with square complementary split-ring resonators (CSRRs) etched on the waveguide surface is investigated in this paper. The proposed structures allow the implementation of a forward-wave passband propagating below the characteristic cutoff frequency of the waveguide. By changing the orientations of the CSRRs, which are incorporated in the waveguide surface and can be interpreted in terms of electric dipoles, varied passband characteristics are observed. A detailed explanation for the generation and variations of the passbands has been illuminated. The application of this waveguide and CSRR combination technique to the design of miniaturized waveguide bandpass filters characterized by transmission zeros is then illustrated. Filter design methodology is examined. These proposed filters exhibit high selectivity and compact size due to the employment of the subwavelength resonators and an evanescent-wave transmission. By slightly altering the configuration of the CSRRs, we find that the propagation of the TE10 mode can be suppressed and filters with improved selectivity and stopband rejection can be obtained. To verify the presented concept, three different types of filters are fabricated based on the standard printed circuit board process. The measured results are in good agreement with the simulation."}
{"_id": "a1af56c0c448d57b4313778072892655ce35fb00", "title": "Head-worn displays: a review", "text": "Head-worn display design is inherently an interdisciplinary subject fusing optical engineering, optical materials, optical coatings, electronics, manufacturing techniques, user interface design, computer science, human perception, and physiology for assessing these displays. This paper summarizes the state-of-the-art in head-worn display design (HWD) and development. This review is focused on the optical engineering aspects, divided into different sections to explore principles and applications. Building on the guiding fundamentals of optical design and engineering, the principles section includes a summary of microdisplay or laser sources, the Lagrange invariant for understanding the trade-offs in optical design of HWDs, modes of image presentation (i.e., monocular, biocular, and stereo) and operational modes such as optical and video see-through. A brief summary of the human visual system pertinent to the design of HWDs is provided. Two optical design forms, namely, pupil forming and non-pupil forming are discussed. We summarize the results from previous design work using aspheric, diffractive, or holographic elements to achieve compact and lightweight systems. The applications section is organized in terms of field of view requirements and presents a reasonable collection of past designs"}
{"_id": "84d3ecf404b0de3f215131bea32ad4315b7399c6", "title": "Large-Scale Automatic Species Identification", "text": "The crowd-sourced Naturewatch GBIF dataset is used to obtain a species classification dataset containing approximately 1.2 million photos of nearly 20 thousand different species of biological organisms observed in their natural habitat. We present a general hierarchical species identification system based on deep convolutional neural networks trained on the NatureWatch dataset. The dataset contains images taken under a wide variety of conditions and is heavily imbalanced, with most species associated with only few images. We apply multi-view classification as a way to lend more influence to high frequency details, hierarchical fine-tuning to help with class imbalance and provide regularisation, and automatic specificity control for optimising classification depth. Our system achieves 55.8% accuracy when identifying individual species and around 90% accuracy at an average taxonomy depth of 5.1\u2014equivalent to the taxonomic rank of \u201dfamily\u201d\u2014when applying automatic specificity control."}
{"_id": "605a12a1d02451157cc5fd4dc475d5cbddd5cb01", "title": "Health Monitoring and Assistance to Support Aging in Place", "text": "To many people, home is a sanctuary. For those people who need special medical care, they may need to be pulled out of their home to meet their medical needs. As the population ages, the percentage of people in this group is increasing and the effects are expensive as well as unsatisfying. We hypothesize that many people with disabilities can lead independent lives in their own homes with the aid of athome automated assistance and health monitoring. In order to accomplish this, robust methods must be developed to collect relevant data and process it dynamically and adaptively to detect and/or predict threatening long-term trends or immediate crises. The main objective of this paper is to investigate techniques for using agent-based smart home technologies to provide this at-home health monitoring and assistance. To this end, we have developed novel inhabitant modeling and automation algorithms that provide remote health monitoring for caregivers. Specifically, we address the following technological challenges: 1) identifying lifestyle trends, 2) detecting anomalies in current data, and 3) designing a reminder assistance system. Our solution approaches are being tested in simulation and with volunteers at the UTA\u2019s MavHome site, an agent-based"}
{"_id": "793fa365083be47b0e92e35cf19400347acd7cbd", "title": "An Empirical Evaluation of Deep Learning for Network Anomaly Detection", "text": "Deep learning has been given a great deal of attention with its success story in many areas such as image analysis and speech recognition. In particular, deep learning is good at dealing with high-dimensional data exhibiting non-linearity. Our preliminary study reveals a very high degree of non-linearity from network traffic data, which explains why it is hard to improve the detection accuracy by using conventional machine learning techniques (e.g., SVM, Random Forest, Ad-aboosting). In this study, we empirically evaluate deep learning to see its feasibility for network anomaly detection. We examine a set of deep learning models constructed based on the Fully Connected Network (FCN), Variational AutoEncoder (VAE), and Long Short-Term Memory with Sequence to Sequence (LSTM Seq2Seq) structures, with two public traffic data sets that have distinctive properties with respect to the distribution of normal and attack populations. Our experimental results confirm the potential of deep learning models for network anomaly detection, and the model based on the LSTM Seq2Seq structure shows a highly promising performance, yielding 99% of binary classification accuracy on the public data sets."}
{"_id": "55adaacf46d8ef42f93d8229e734e5ae3c02cf86", "title": "A study of speaker adaptation for DNN-based speech synthesis", "text": "A major advantage of statistical parametric speech synthesis (SPSS) over unit-selection speech synthesis is its adaptability and controllability in changing speaker characteristics and speaking style. Recently, several studies using deep neural networks (DNNs) as acoustic models for SPSS have shown promising results. However, the adaptability of DNNs in SPSS has not been systematically studied. In this paper, we conduct an experimental analysis of speaker adaptation for DNN-based speech synthesis at different levels. In particular, we augment a low-dimensional speaker-specific vector with linguistic features as input to represent speaker identity, perform model adaptation to scale the hidden activation weights, and perform a feature space transformation at the output layer to modify generated acoustic features. We systematically analyse the performance of each individual adaptation technique and that of their combinations. Experimental results confirm the adaptability of the DNN, and listening tests demonstrate that the DNN can achieve significantly better adaptation performance than the hidden Markov model (HMM) baseline in terms of naturalness and speaker similarity."}
{"_id": "e7bca1a5b3331d7755d5cf8ebfc16327fc3435f7", "title": "Noise and Bandwidth analysis for PMOS based wide swing cascode current mirror using 0.35 um CMOS-MEMS technology", "text": "In this work, we propose a PMOS based wide swing cascode current mirror (WSSCM) for 0.35 um CMOS-MEMS technology. The proposed circuit shows a high bandwidth current mirror with high output resistance capability operating at low voltage levels. Noise, output resistance and bandwidth analysis are presented. CADENCE SPECTRE circuit design tool using MIMOS 0.35 um and BSIM3 transistor models is used to carry out the simulations. The circuit achieves an amplification of 120.04 uA, the input referred noise of the circuit is 75.77 nV/\u221aHz with a minimum power consumption of 0.33 mW and a high output resistance of 15 MO ."}
{"_id": "161b07f86b5f8a7749480c793330cb780006ae88", "title": "Review of Inflatable Booms for Deployable Space Structures : Packing and Rigidization", "text": "Inflatable structures offer the potential of compactly stowing lightweight structures, which assume a fully deployed state in space. An important category of space inflatables are cylindrical booms, which may form the structural members of trusses or the support structure for solar sails. Two critical and interdependent aspects of designing inflatable cylindrical booms for space applications are i) packaging methods that enable compact stowage and ensure reliable deployment, and ii) rigidization techniques that provide long-term structural ridigity after deployment. The vast literature in these two fields is summarized to establish the state of the art."}
{"_id": "a64be66a4d9819b33a0f5f34127e58443fd56cdd", "title": "Simulation of Rigid Origami", "text": "This paper presents a system for computer based interactive simulation of origami, based on rigid origami model. Our system shows the continuous process of folding a piece of paper into a folded shape by calculating the configuration from the crease pattern. The configuration of the model is represented by crease angles, and the trajectory is calculated by projecting angles motion to the constrained space. Additionally, polygons are triangulated and crease lines are adjusted in order to make the model more flexible. Flexible motion and comprehensible representation of origami help users to understand the structure of the model and to fold the model from a crease pattern."}
{"_id": "132c24f524c66d380ea948e432fbdbc95a08b1d4", "title": "One-DOF Cylindrical Deployable Structures with Rigid Quadrilateral Panels", "text": "In this paper,we present a novel cylindrical deployable structure and variations of its design with the following characteristics: 1. Flat-foldable: The shape flattens into a compact 2D configuration. 2. Rigid-foldable: Each element does not deform throughout the transformation. 3. One-DOF: The mechanism has exactly one degree of freedom. 4. Thick: Facets can be substituted with thick or multilayered panels without introducing the distortion of elements."}
{"_id": "3774f733990bf18c458a0f41ddfa1b5d7460b1fc", "title": "Robotic origami folding", "text": "Origami, the human art of paper sculpture, is a fresh challenge for the field of robotic manipulation, and provides a concrete example for many difficult and general manipulation problems. This thesis will present some initial results, including the world\u2019s first origami-folding robot, some new theorems about foldability, definition of a simple class of origami for which I have designed a complete automatic planner, analysis of the kinematics of more complicated folds, and some observations about the configuration spaces of compound spherical closed chains."}
{"_id": "1f40e03bde5f6f9fd6f22573271a421ae6179b4c", "title": "Swarm Control of UAVs for Cooperative Hunting with DDDAS", "text": "Swarm control is a problem of increasing importance with technological advancements. Recently, governments have begun employing UAVs for reconnaissance, including swarms of drones searching for evasive targets. An agent-based simulation for dynamic cooperative cleaning is augmented with additional behaviors and implemented into a Dynamic Data-Driven Application System (DDDAS) framework for dynamic swarm control."}
{"_id": "3e4120537d14c1884089f998a9e0a0fafd00ae1a", "title": "Machine-to-machine communications for home energy management system in smart grid", "text": "Machine-to-machine (M2M) communications have emerged as a cutting edge technology for next-generation communications, and are undergoing rapid development and inspiring numerous applications. This article presents an investigation of the application of M2M communications in the smart grid. First, an overview of M2M communications is given. The enabling technologies and open research issues of M2M communications are also discussed. Then we address the network design issue of M2M communications for a home energy management system (HEMS) in the smart grid. The network architecture for HEMS to collect status and power consumption demand from home appliances is introduced. Then the optimal HEMS traffic concentration is presented and formulated as the optimal cluster formation. A dynamic programming algorithm is applied to obtain the optimal solution. The numerical results show that the proposed optimal traffic concentration can minimize the cost of HEMS."}
{"_id": "3fc33bbd5c801d5a2cac0d1def5f571e7a7bb111", "title": "Security and usability: the gap in real-world online banking", "text": "Online banking is one of the most sensitive tasks performed by general Internet users. Most traditional banks now offer online banking services, and strongly encourage customers to do online banking with 'peace of mind.' Although banks heavily advertise an apparent '100% online security guarantee,' typically the fine print makes this conditional on users fulfilling certain security requirements. We examine some of these requirements as set by major Canadian banks, in terms of security and usability. We opened personal checking accounts at the five largest Canadian banks, and one online-only bank. We found that many security requirements are too difficult for regular users to follow, and believe that some marketing-related messages about safety and security actually mislead users. We are also interested in what kind of computer systems people really use for online banking, and whether users satisfy common online banking requirements. Our survey of 123 technically advanced users from a university environment strongly supports our view of an emerging gap between banks' expectations (or at least what their written customer policy agreements imply) and users' actions related to security requirements of online banking. Our participants, being more security-aware than the general population, arguably makes our results best-case regarding what can be expected from regular users. Yet most participants failed to satisfy common security requirements, implying most online banking customers do not (or cannot) follow banks' stated end-user security requirements and guidelines. The survey also sheds light on the security settings of systems used for sensitive online transactions. This work is intended to spur a discussion on real-world system security and user responsibilities, in a scenario where everyday users are heavily encouraged to perform critical tasks over the Internet, despite the continuing absence of appropriate tools to do so."}
{"_id": "f1873f1185738f6eb7e181ffd7984d2882101891", "title": "Sensor Log: A mobile data collection and annotation application", "text": "Smartphones equipped with sensors can be used for monitoring simple daily activities (e.g., walking, climbing stairs etc.). The data collected via sensors need to be partially or completely annotated in order to be used in the training phase of learning algorithms. In order to ease the time consuming and burdensome annotation process, a mobile data collection and annotation application named Sensor Log is developed. The annotation feature and its ease of use make the developed application superior to existing systems. The application can be downloaded from the Android applications distribution platform."}
{"_id": "e66a39104d4962dfb4c82cb7e076f0b9918bd537", "title": "Analysis and design for RCD clamped snubber used in output rectifier of phase-shift full-bridge ZVS converters", "text": "Detailed analysis and parameter design for a resistorcapacitor-diode (RCD) clamped snubber used in the output rectifier of phase-shift full-bridge zero-voltage-switching (PS-FB-ZVS) converters are presented. Design equations and some properties of the clamped circuit are highlighted."}
{"_id": "86260c16d8a2db7fffb06143bdbfd6eafc5a8e11", "title": "Mining advertiser-specific user behavior using adfactors", "text": "Consider an online ad campaign run by an advertiser. The ad serving companies that handle such campaigns record users' behavior that leads to impressions of campaign ads, as well as users' responses to such impressions. This is summarized and reported to the advertisers to help them evaluate the performance of their campaigns and make better budget allocation decisions.\n The most popular reporting statistics are the click-through rate and the conversion rate. While these are indicative of the effectiveness of an ad campaign, the advertisers often seek to understand more sophisticated long-term effects of their ads on the brand awareness and the user behavior that leads to the conversion, thus creating a need for the reporting measures that can capture both the duration and the frequency of the pathways to user conversions.\n In this paper, we propose an alternative data mining framework for analyzing user-level advertising data. In the aggregation step, we compress individual user histories into a graph structure, called the adgraph, representing local correlations between ad events. For the reporting step, we introduce several scoring rules, called the adfactors (AF), that can capture global role of ads and ad paths in the adgraph, in particular, the structural correlation between an ad impression and the user conversion. We present scalable local algorithms for computing the adfactors; all algorithms were implemented using the MapReduce programming model and the Pregel framework.\n Using an anonymous user-level dataset of sponsored search campaigns for eight different advertisers, we evaluate our framework with different adgraphs and adfactors in terms of their statistical fit to the data, and show its value for mining the long-term behavioral patterns in the advertising data."}
{"_id": "494fc1e30be172fbe393e0d68695ae318e23da8c", "title": "An Organizational Theoretic Review of Green Supply Chain Management Literature", "text": "Green supply chain management (GSCM) has gained increasing attention within both academia and industry. As the literature grows, finding new directions by critically evaluating the research and identifying future directions becomes important in advancing knowledge for the field. Using organizational theories to help categorize the literature provides opportunities to address both the objectives of understanding where the field currently stands and identifying research opportunities and directions. After providing a background discussion on GSCM we categorize and review recent GSCM literature under nine broad organizational theories. Within this review framework, we also identify GSCM research questions that are worthy of investigation. Additional organizational theories which are considered valuable for future GSCM research are also identified with a conclusion for this review."}
{"_id": "d4cd5e77f24a6004c5a6956a044f732c56b5d335", "title": "Design of biped robot inspired by cats for fast running", "text": "ELECT A novel design of a biped robot inspired by domestic cats for fast running is introduced. The skeletomuscular system of a cat is analysed and applied to determine the link parameters and the linkage structure of the proposed mechanism. The linkage design of the leg mechanism is explained and a kinematic analysis based on vector loop equations is performed. The effectiveness of the proposed mechanism is verified experimentally. The biped robot runs at an average speed of 2 m/s at a step frequency of 4 Hz. This leg mechanism can facilitate the development of fast running robot systems."}
{"_id": "0935abb3bcd51c7496b9c9ed49297ba8ec0ec125", "title": "Paper 265-25 Sample Size Computations and Power Analysis with the SAS System", "text": "Statistical power analysis characterizes the ability of a study to detect a meaningful effect size\u2014for example, the difference between two population means. It also determines the sample size required to provide a desired power for an effect of scientific interest. Proper planning reduces the risk of conducting a study that will not produce useful results and determines the most sensitive design for the resources available. Power analysis is now integral to the health and behavioral sciences, and its use is steadily increasing wherever empirical studies are performed. SAS Institute is working to implement power analysis for common situations such as t-tests, ANOVA, comparison of binomial proportions, equivalence testing, survival analysis, contingency tables and linear models, and eventually for a wide range of models and designs. An effective graphical user interface reveals the contribution to power of factors such as effect size, sample size, inherent variability, type I error rate, and choice of design and analysis. This presentation demonstrates issues involved in power analysis, summarizes the current state of methodology and software, and outlines future directions."}
{"_id": "58496266bde6ed47a85a886bd0d984fd0de6bce3", "title": "TopicRNN: A Recurrent Neural Network with Long-Range Semantic Dependency", "text": "In this paper, we propose TopicRNN, a recurrent neural network (RNN)-based language model designed to directly capture the global semantic meaning relating words in a document via latent topics. Because of their sequential nature, RNNs are good at capturing the local structure of a word sequence \u2013 both semantic and syntactic \u2013 but might face difficulty remembering long-range dependencies. Intuitively, these long-range dependencies are of semantic nature. In contrast, latent topic models are able to capture the global semantic structure of a document but do not account for word ordering. The proposed TopicRNN model integrates the merits of RNNs and latent topic models: it captures local (syntactic) dependencies using an RNN and global (semantic) dependencies using latent topics. Unlike previous work on contextual RNN language modeling, our model is learned endto-end. Empirical results on word prediction show that TopicRNN outperforms existing contextual RNN baselines. In addition, TopicRNN can be used as an unsupervised feature extractor for documents. We do this for sentiment analysis on the IMDB movie review dataset and report an error rate of 6.28%. This is comparable to the state-of-the-art 5.91% resulting from a semi-supervised approach. Finally, TopicRNN also yields sensible topics, making it a useful alternative to document models such as latent Dirichlet allocation."}
{"_id": "8dc160b3b53686689d3bb168c1c4e963aaca2819", "title": "Advances in direct transesterification of algal oils from wet biomass.", "text": "An interest in biodiesel as an alternative fuel for diesel engines has been increasing because of the issue of petroleum depletion and environmental concerns related to massive carbon dioxide emissions. Researchers are strongly driven to pursue the next generation of vegetable oil-based biodiesel. Oleaginous microalgae are considered to be a promising alternative oil source. To commercialize microalgal biodiesel, cost reductions in oil extraction and downstream biodiesel conversion are stressed. Herein, starting from an investigation of oil extraction from wet microalgae, a review is conducted of transesterification using enzymes, homogeneous and heterogeneous catalysts, and yield enhancement by ultrasound, microwave, and supercritical process. In particular, there is a focus on direct transesterification as a simple and energy efficient process that omits a separate oil extraction step and utilizes wet microalgal biomass; however, it is still necessary to consider issues such as the purification of microalgal oils and upgrading of biodiesel properties."}
{"_id": "13fa8063363c28790a1bdb77e895800937a8edb2", "title": "An Atomatic Fundus Image Analysis System for Clinical Diagnosis of Glaucoma", "text": "Glaucoma is a serious ocular disease and leads blindness if it couldn\u00a1\u00a6t be detected and treated in proper way. The diagnostic criteria for glaucoma include intraocular pressure measurement, optic nerve head evaluation, retinal nerve fiber layer and visual field defect. The observation of optic nerve head, cup to disc ratio and neural rim configuration are important for early detecting glaucoma in clinical practice. However, the broad range of cup to disc ratio is difficult to identify early changes of optic nerve head, and different ethnic groups possess various features in optic nerve head structures. Hence, it is still important to develop various detection techniques to assist clinicians to diagnose glaucoma at early stages. In this study, we developed an automatic detection system which contains two major phases: the first phase performs a series modules of digital fundus retinal image analysis including vessel detection, vessel in painting, cup to disc ratio calculation, and neuro-retinal rim for ISNT rule, the second phase determines the abnormal status of retinal blood vessels from different aspect of view. In this study, the novel idea of integrating image in painting and active contour model techniques successfully assisted the correct identification of cup and disk regions. Several clinical fundus retinal images containing normal and glaucoma images were applied to the proposed system for demonstration."}
{"_id": "6254b3c10d52044bf78c3fb12b6eb42d0be3dac7", "title": "A Practical Approach to Recognizing Physical Activities", "text": "We are developing a personal activity recognition system that is practical, reliable, and can be incorporated into a variety of health-care related applications ranging from personal fitness to elder care. To make our system appealing and useful, we require it to have the following properties: (i) data only from a single body location needed, and it is not required to be from the same point for every user; (ii) should work out of the box across individuals, with personalization only enhancing its recognition abilities; and (iii) should be effective even with a cost-sensitive subset of the sensors and data features. In this paper, we present an approach to building a system that exhibits these properties and provide evidence based on data for 8 different activities collected from 12 different subjects. Our results indicate that the system has an accuracy rate of approximately 90% while meeting our requirements. We are now developing a fully embedded version of our system based on a cell-phone platform augmented with a Bluetooth-connected sensor board."}
{"_id": "c3a41f97b29c6abce6f75ee9c668584d77a84170", "title": "Agronomy for sustainable agriculture. A review", "text": "Sustainability rests on the principle that we must meet the needs of the present without compromising the ability of future generations to meet their own needs. Starving people in poor nations, obesity in rich nations, increasing food prices, on-going climate changes, increasing fuel and transportation costs, flaws of the global market, worldwide pesticide pollution, pest adaptation and resistance, loss of soil fertility and organic carbon, soil erosion, decreasing biodiversity, desertification, and so on. Despite unprecedented advances in sciences allowing us to visit planets and disclose subatomic particles, serious terrestrial issues about food show clearly that conventional agriculture is no longer suited to feeding humans and preserving ecosystems. Sustainable agriculture is an alternative for solving fundamental and applied issues related to food production in an ecological way (Lal (2008) Agron. Sustain. Dev. 28, 57\u201364.). While conventional agriculture is driven almost solely by productivity and profit, sustainable agriculture integrates biological, chemical, physical, ecological, economic and social sciences in a comprehensive way to develop new farming practices that are safe and do not degrade our environment. To address current agronomical issues and to promote worldwide discussions and cooperation we implemented sharp changes at the journal Agronomy for Sustainable Development from 2003 to 2006. Here we report (1) the results of the renovation of the journal and (2) a short overview of current concepts of agronomical research for sustainable agriculture. Considered for a long time as a soft, side science, agronomy is rising fast as a central science because current issues are about food, and humans eat food. This report is the introductory article of the book Sustainable Agriculture, volume 1, published by EDP Sciences and Springer (Lichtfouse et al. (2009) Sustainable Agriculture, Vol. 1, Springer, EDP Sciences, in press)."}
{"_id": "5f20f07013d2c1d1e442af5cb460f1f1bcb4aa47", "title": "Efficient Nash equilibrium approximation through Monte Carlo counterfactual regret minimization", "text": "Recently, there has been considerable progress towards algorithms for approximating Nash equilibrium strategies in extensive games. One such algorithm, Counterfactual Regret Minimization (CFR), has proven to be effective in two-player zero-sum poker domains. While the basic algorithm is iterative and performs a full game traversal on each iteration, sampling based approaches are possible. For instance, chance-sampled CFR considers just a single chance outcome per traversal, resulting in faster but less precise iterations. While more iterations are required, chance-sampled CFR requires less time overall to converge. In this work, we present new sampling techniques that consider sets of chance outcomes during each traversal to produce slower, more accurate iterations. By sampling only the public chance outcomes seen by all players, we take advantage of the imperfect information structure of the game to (i) avoid recomputation of strategy probabilities, and (ii) achieve an algorithmic speed improvement, performing O(n) work at terminal nodes in O(n) time. We demonstrate that this new CFR update converges more quickly than chance-sampled CFR in the large domains of poker and Bluff."}
{"_id": "3075bc058312110e647ab3c09e27e500da2343f3", "title": "Learning Content-rich Diffusion Network Embedding 1", "text": "Information networks are ubiquitous in the real world, while embedding, as a kind of network representation, has received attention from many researchers because of its effectiveness in preserving the semantics of the network and its broad application including classification, link prediction etc. Previously, many methods have been proposed to learn network embedding from non-attributed and static networks. Networks are treated simply as nodes and links. However, information is not merely encoded in the structure, nodes itself may have their intrinsic attributes and little research have been done to incorporate this information. Furthermore, outside information will also diffuse on the network over the time and for a specific time we can have a different diffusion structure. In this paper, we will be introducing the idea of diffusion network, and we propose two models for embedding learning that aim to efficiently capture the rich content of nodes as well as that of the diffusion. We then evaluate the quality of our embedding by conducting node classification experiments, the result of which shows that our method for generating embedding outperforms other baselines."}
{"_id": "981f8cde8085df066402e1b4cfe89bb31ce71c79", "title": "Automatic Shoeprint Retrieval Algorithm for Real Crime Scenes", "text": "This study is to propose a fully automatic crime scene shoeprint retrieval algorithm that can be used to link scenes of crime or determine the brand of a shoe. A shoeprint contour model is proposed to roughly correct the geometry distortions. To simulate the character of the forensic experts, a region priority match and similarity estimation strategy is also proposed. The shoeprint is divided into two semantic regions, and their confidence values are computed based on the priority in the forensic practice and the quantity of reliable information. Similarities of each region are computed respectively, and the matching score between the reference image and an image in the database is the weighted sum. For regions with higher confidence value, the similarities are computed based on the proposed coarse-to-fine global invariant descriptors, which are based on Wavelet-Fourier transform and are invariant under slight geometry distortions and interference such as breaks and small holes, etc. For regions with lower confidence value, similarities are estimated based on computed similarities of regions with higher confidence value. Parameters of the proposed algorithm have learned from huge quantity of crime scene shoeprints and standard shoeprints which can cover most practical cases, and the algorithm can have better performance with minimum user intervention. The proposed algorithm has been tested on the crime scene shoeprint database composed of 210,000 shoeprints provided by the third party, and the cumulative matching score of the top 2 percent is 90.87."}
{"_id": "b97957b43862b9462a8aca454cf16036bf735638", "title": "Sales Demand Forecast in E-commerce using a Long Short-Term Memory Neural Network Methodology", "text": "Generating accurate and reliable sales forecasts is crucial in the E-commerce business. The current state-of-theart techniques are typically univariate methods, which produce forecasts considering only the historical sales data of a single product. However, in a situation where large quantities of related time series are available, conditioning the forecast of an individual time series on past behaviour of similar, related time series can be beneficial. Given that the product assortment hierarchy in an E-commerce platform contains large numbers of related products, in which the sales demand patterns can be correlated, our attempt is to incorporate this cross-series information in a unified model. We achieve this by globally training a Long Short-Term Memory network (LSTM) that exploits the nonlinear demand relationships available in an E-commerce product assortment hierarchy. Aside from the forecasting engine, we propose a systematic pre-processing framework to overcome the challenges in an E-commerce setting. We also introduce several product grouping strategies to supplement the LSTM learning schemes, in situations where sales patterns in a product portfolio are disparate. We empirically evaluate the proposed forecasting framework on a real-world online marketplace dataset from Walmart.com. Our method achieves competitive results on category level and super-departmental level datasets, outperforming stateof-the-art techniques."}
{"_id": "8216ca257a33d0d64cce02f5bb37de31c5b824f8", "title": "An objective and standardized test of hand function.", "text": ""}
{"_id": "1cd60b8211a335505054f920ee111841792ff393", "title": "Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions", "text": "Neural correlates of the often-powerful emotional responses to music are poorly understood. Here we used positron emission tomography to examine cerebral blood flow (CBF) changes related to affective responses to music. Ten volunteers were scanned while listening to six versions of a novel musical passage varying systematically in degree of dissonance. Reciprocal CBF covariations were observed in several distinct paralimbic and neocortical regions as a function of dissonance and of perceived pleasantness/unpleasantness. The findings suggest that music may recruit neural mechanisms similar to those previously associated with pleasant/unpleasant emotional states, but different from those underlying other components of music perception, and other emotions such as fear."}
{"_id": "2fededbe97de622b55ff2358028975741c053635", "title": "Listening to polyphonic music recruits domain-general attention and working memory circuits.", "text": "Polyphonic music combines multiple auditory streams to create complex auditory scenes, thus providing a tool for investigating the neural mechanisms that orient attention in natural auditory contexts. Across two fMRI experiments, we varied stimuli and task demands in order to identify the cortical areas that are activated during attentive listening to real music. In individual experiments and in a conjunction analysis of the two experiments, we found bilateral blood oxygen level dependent (BOLD) signal increases in temporal (the superior temporal gyrus), parietal (the intraparietal sulcus), and frontal (the precentral sulcus, the inferior frontal sulcus and gyrus, and the frontal operculum) areas during selective and global listening, as compared with passive rest without musical stimulation. Direct comparisons of the listening conditions showed significant differences between attending to single timbres (instruments) and attending across multiple instruments, although the patterns that were observed depended on the relative demands of the tasks being compared. The overall pattern of BOLD signal increases indicated that attentive listening to music recruits neural circuits underlying multiple forms of working memory, attention, semantic processing, target detection, and motor imagery. Thus, attentive listening to music appears to be enabled by areas that serve general functions, rather than by music-specific cortical modules."}
{"_id": "361f606333f62187b5ef9f144da0097db7fc4d52", "title": "Brain organization for music processing.", "text": "Research on how the brain processes music is emerging as a rich and stimulating area of investigation of perception, memory, emotion, and performance. Results emanating from both lesion studies and neuroimaging techniques are reviewed and integrated for each of these musical functions. We focus our attention on the common core of musical abilities shared by musicians and nonmusicians alike. Hence, the effect of musical training on brain plasticity is examined in a separate section, after a review of the available data regarding music playing and reading skills that are typically cultivated by musicians. Finally, we address a currently debated issue regarding the putative existence of music-specific neural networks. Unfortunately, due to scarcity of research on the macrostructure of music organization and on cultural differences, the musical material under focus is at the level of the musical phrase, as typically used in Western popular music."}
{"_id": "84bf29d37c50115451975a25256150394b7e7da3", "title": "Taming the beast: Free and open-source massive point cloud web visualization", "text": "Powered by WebGL, some renderers have recently become available for the visualization of point cloud data over the web, for example Plasio or Potree. We have extended Potree to be able to visualize massive point clouds and we have successfully used it with the second national Lidar survey of the Netherlands, AHN2, with 640 billion points. In addition to the visualization, the publicly available service at http://ahn2.pointclouds.nl/ also features a multi-resolution download tool, a geographic name search bar, a measurement toolkit, a 2D orientation map with field of view depiction, a demo mode and the tuning of the visualization parameters. Potree relies on reorganizing the point cloud data into an multi-resolution octree data structure. However, this reorganization is very time consuming for massive data sets. Hence, we have used a divide and conquer approach to decrease the octree creation time. To achieve such performance improvement we divided the entire space into smaller cells, generated an octree for each of them in a distributed manner and then we merged them into a single massive octree. The merging is possible because the extent of all the nodes of the octrees is known and fixed. All the developed tools are free and open-source (FOSS) and they can be used to visualize over the web other massive point clouds."}
{"_id": "beaf1f989e63f8c25a444854b3e561ea621e8d9d", "title": "Development and Validation of a Surgical Workload Measure: The Surgery Task Load Index (SURG-TLX)", "text": "The purpose of the present study was to develop and validate a multidimensional, surgery-specific workload measure (the SURG-TLX), and to determine its utility in providing diagnostic information about the impact of various sources of stress on the perceived demands of trained surgical operators. As a wide range of stressors have been identified for surgeons in the operating room, the current approach of considering stress as a unidimensional construct may not only limit the degree to which underlying mechanisms may be understood but also the degree to which training interventions may be successfully matched to particular sources of stress. The dimensions of the SURG-TLX were based on two current multidimensional workload measures and developed via focus group discussion. The six dimensions were defined as mental demands, physical demands, temporal demands, task complexity, situational stress, and distractions. Thirty novices were trained on the Fundamentals of Laparoscopic Surgery (FLS) peg transfer task and then completed the task under various conditions designed to manipulate the degree and source of stress experienced: task novelty, physical fatigue, time pressure, evaluation apprehension, multitasking, and distraction. The results were supportive of the discriminant sensitivity of the SURG-TLX to different sources of stress. The sub-factors loaded on the relevant stressors as hypothesized, although the evaluation pressure manipulation was not strong enough to cause a significant rise in situational stress. The present study provides support for the validity of the SURG-TLX instrument and also highlights the importance of considering how different stressors may load surgeons. Implications for categorizing the difficulty of certain procedures, the implementation of new technology in the operating room (man\u2013machine interface issues), and the targeting of stress training strategies to the sources of demand are discussed. Modifications to the scale to enhance clinical utility are also suggested."}
{"_id": "1e90b34da4627115d2d6e9a9ff950a8044ad5c26", "title": "SMC: A Practical Schema for Privacy-Preserved Data Sharing over Distributed Data Streams", "text": "Data collection is required to be safe and efficient considering both data privacy and system performance. In this paper, we study a new problem: distributed data sharing with privacy-preserving requirements. Given a data demander requesting data from multiple distributed data providers, the objective is to enable the data demander to access the distributed data without knowing the privacy of any individual provider. The problem is challenged by two questions: how to transmit the data safely and accurately; and how to efficiently handle data streams? As the first study, we propose a practical method, Shadow Coding, to preserve the privacy in data transmission and ensure the recovery in data collection, which achieves privacy preserving computation in a data-recoverable, efficient, and scalable way. We also provide practical techniques to make Shadow Coding efficient and safe in data streams. Extensive experimental study on a large-scale real-life dataset offers insight into the performance of our schema. The proposed schema is also implemented as a pilot system in a city to collect distributed mobile phone data."}
{"_id": "2d4ae26c3904fb2d89d2e3c4c1b78e4b0b8d6eee", "title": "HMM-based Address Parsing with Massive Synthetic Training Data Generation", "text": "Record linkage is the task of identifying which records in one or more data collections refer to the same entity, and address is one of the most commonly used fields in databases. Hence, segmentation of the raw addresses into a set of semantic fields is the primary step in this task. In this paper, we present a probabilistic address parsing system based on the Hidden Markov Model. We also introduce several novel approaches of synthetic training data generation to build robust models for noisy real-world addresses, obtaining 95.6% F-measure. Furthermore, we demonstrate the viability and efficiency of this system for large-scale data by scaling it up to parse billions of addresses."}
{"_id": "11ff11e3aedaa41a67bea3623e5d2a805db6166e", "title": "Copysets: Reducing the Frequency of Data Loss in Cloud Storage", "text": "Random replication is widely used in data center storage systems to prevent data loss. However, random replication is almost guaranteed to lose data in the common scenario of simultaneous node failures due to cluster-wide power outages. Due to the high fixed cost of each incident of data loss, many data center operators prefer to minimize the frequency of such events at the expense of losing more data in each event. We present Copyset Replication, a novel generalpurpose replication technique that significantly reduces the frequency of data loss events. We implemented and evaluated Copyset Replication on two open source data center storage systems, HDFS and RAMCloud, and show it incurs a low overhead on all operations. Such systems require that each node\u2019s data be scattered across several nodes for parallel data recovery and access. Copyset Replication presents a near optimal tradeoff between the number of nodes on which the data is scattered and the probability of data loss. For example, in a 5000-node RAMCloud cluster under a power outage, Copyset Replication reduces the probability of data loss from 99.99% to 0.15%. For Facebook\u2019s HDFS cluster, it reduces the probability from 22.8% to 0.78%."}
{"_id": "c11dd3224cd7ae2ba8d1e2ae603fb7a010443b5f", "title": "Bend, stretch, and touch: Locating a finger on an actively deformed transparent sensor array", "text": "The development of bendable, stretchable, and transparent touch sensors is an emerging technological goal in a variety of fields, including electronic skin, wearables, and flexible handheld devices. Although transparent tactile sensors based on metal mesh, carbon nanotubes, and silver nanowires demonstrate operation in bent configurations, we present a technology that extends the operation modes to the sensing of finger proximity including light touch during active bending and even stretching. This is accomplished using stretchable and ionically conductive hydrogel electrodes, which project electric field above the sensor to couple with and sense a finger. The polyacrylamide electrodes are embedded in silicone. These two widely available, low-cost, transparent materials are combined in a three-step manufacturing technique that is amenable to large-area fabrication. The approach is demonstrated using a proof-of-concept 4 \u00d7 4 cross-grid sensor array with a 5-mm pitch. The approach of a finger hovering a few centimeters above the array is readily detectable. Light touch produces a localized decrease in capacitance of 15%. The movement of a finger can be followed across the array, and the location of multiple fingers can be detected. Touch is detectable during bending and stretch, an important feature of any wearable device. The capacitive sensor design can be made more or less sensitive to bending by shifting it relative to the neutral axis. Ultimately, the approach is adaptable to the detection of proximity, touch, pressure, and even the conformation of the sensor surface."}
{"_id": "1518c8dc6a07c2391e58ece6e2ad8edca87be56e", "title": "A Survey of Distributed Mining of Data Streams", "text": "With advances in data collection and generation technologies, organizations and researchers are faced with the ever growing problem of how to manage and analyze large dynamic datasets. Environments that produce streaming sources of data are becoming common place. Examples include stock market, sensor, web click stream, and network data. In many instances, these environments are also equipped with multiple distributed computing nodes that are often located near the data sources. Analyzing and monitoring data in such environments requires data mining technology that is cognizant of the mining task, the distributed nature of the data, and the data influx rate. In this chapter, we survey the current state of the field and identify potential directions of future research."}
{"_id": "9ae2a02751433f806d319e613dbc59ca6ee04904", "title": "Toward Detecting Malicious Links in Online Social Networks through User Behavior", "text": "It is becoming increasingly difficult to ignore the importance of using online social networks (OSNs) for various purposes such as marketing, education, entertainment, and business. However, OSNs also open the door for harmful activities and behaviors. Committing financial fraud and propagating malware and spam advertisements are very common criminal actions that people engage in by accessing uniform resource locators (URLs). It has been reported that advanced attackers tend to exploit human flaws rather than system flaws, thus, users are targeted in social media threats by hour. This research aims to understand the state of literature on detecting malicious URLs in OSNs, with a focus on two major aspects: URL and OSN objects. Although the literature presents these two aspects in a different context, this paper will mainly focus on their features in relation to malicious URL detection using classification methods. We firstly review three features of URLs: lexical features, hosts, and domains, then we discuss their limitations. We then introduce social spam analytics and detection models using combined features from both URLs and OSNs, particularly the use of user profiles and posts together with URL features, to enhance the detection of malicious behavior. This combination can help in understanding the interests of the user either explicitly, by stating choices in the profile, or implicitly, by analyzing the post behavior, as the spammers do not maintain a regular interest and tend to exploit events or top trend topics."}
{"_id": "435de469e2b2b4bcdc57f3603eaf790bf98bf657", "title": "Side channels in cloud services , the case of deduplication in cloud storage", "text": "Cloud storage services commonly use deduplication, which eliminates redundant data by storing only a single copy of each file or block. Deduplication reduces the space and bandwidth requirements of data storage services, and is most effective when applied across multiple users, a common practice by cloud storage offerings. We study the privacy implications of cross-user deduplication. We demonstrate how deduplication can be used as a side channel which reveals information about the contents of files of other users. In a different scenario, deduplication can be used as a covert channel by which malicious software can communicate with its control center, regardless of any firewall settings at the attacked machine. Due to the high savings offered by cross-user deduplication, cloud storage providers are unlikely to stop using this technology. We therefore propose simple mechanisms that enable cross-user deduplication while greatly reducing the risk of data leakage."}
{"_id": "c537a1b27cc3fbc06a62b6334f2953377736220c", "title": "Real-Time Deep Learning Method for Abandoned Luggage Detection in Video", "text": "Recent terrorist attacks in major cities around the world have brought many casualties among innocent citizens. One potential threat is represented by abandoned luggage items (that could contain bombs or biological warfare) in public areas. In this paper, we describe an approach for real-time automatic detection of abandoned luggage in video captured by surveillance cameras. The approach is comprised of two stages: (i) static object detection based on background subtraction and motion estimation and (ii) abandoned luggage recognition based on a cascade of convolutional neural networks (CNN). To train our neural networks we provide two types of examples: images collected from the Internet and realistic examples generated by imposing various suitcases and bags over the scene's background. We present empirical results demonstrating that our approach yields better performance than a strong CNN baseline method."}
{"_id": "600081cefad09cf337b978bb6e1ec53309ee69b3", "title": "Authenticated storage using small trusted hardware", "text": "A major security concern with outsourcing data storage to third-party providers is authenticating the integrity and freshness of data. State-of-the-art software-based approaches require clients to maintain state and cannot immediately detect forking attacks, while approaches that introduce limited trusted hardware (e.g., a monotonic counter) at the storage server achieve low throughput. This paper proposes a new design for authenticating data storage using a small piece of high-performance trusted hardware attached to an untrusted server. The proposed design achieves significantly higher throughput than previous designs. The server-side trusted hardware allows clients to authenticate data integrity and freshness without keeping any mutable client-side state. Our design achieves high performance by parallelizing server-side authentication operations and permitting the untrusted server to maintain caches and schedule disk writes, while enforcing precise crash recovery and write access control."}
{"_id": "2eddfa4d27dbadc94a0e248592347e85b5d5e4d5", "title": "An introduction to modern missing data analyses.", "text": "A great deal of recent methodological research has focused on two modern missing data analysis methods: maximum likelihood and multiple imputation. These approaches are advantageous to traditional techniques (e.g. deletion and mean imputation techniques) because they require less stringent assumptions and mitigate the pitfalls of traditional techniques. This article explains the theoretical underpinnings of missing data analyses, gives an overview of traditional missing data techniques, and provides accessible descriptions of maximum likelihood and multiple imputation. In particular, this article focuses on maximum likelihood estimation and presents two analysis examples from the Longitudinal Study of American Youth data. One of these examples includes a description of the use of auxiliary variables. Finally, the paper illustrates ways that researchers can use intentional, or planned, missing data to enhance their research designs."}
{"_id": "7111d0ae2766f7c3c30118a877991b8fa45b05cf", "title": "The immune contexture in human tumours: impact on clinical outcome", "text": "Tumours grow within an intricate network of epithelial cells, vascular and lymphatic vessels, cytokines and chemokines, and infiltrating immune cells. Different types of infiltrating immune cells have different effects on tumour progression, which can vary according to cancer type. In this Opinion article we discuss how the context-specific nature of infiltrating immune cells can affect the prognosis of patients."}
{"_id": "3cb28d6838e0158bcd989241731e8e7d727ddccd", "title": "Resilience to natural hazards : How useful is this concept ?", "text": "Resilience is widely seen as a desirable system property in environmental management. This paper explores the concept of resilience to natural hazards, using weather-related hazards in coastal megacities as an example. The paper draws on the wide literature on megacities, coastal hazards, hazard risk reduction strategies, and resilience within environmental management. Some analysts define resilience as a system attribute, whilst others use it as an umbrella concept for a range of system attributes deemed desirable. These umbrella concepts have not been made operational to support planning or management. It is recommended that resilience only be used in a restricted sense to describe specific system attributes concerning (i) the amount of disturbance a system can absorb and still remain within the same state or domain of attraction and (ii) the degree to which the system is capable of selforganisation. The concept of adaptive capacity, which has emerged in the context of climate change, can then be adopted as the umbrella concept, where resilience will be one factor influencing adaptive capacity. This improvement to conceptual clarity would foster much-needed communication between the natural hazards and the climate change communities and, more importantly, offers greater potential in application, especially when attempting to move away from disaster recovery to hazard prediction, disaster prevention, and preparedness. r 2004 Elsevier Ltd. All rights reserved."}
{"_id": "286465103c7e72c05f811e4e0c5671678b7a3db5", "title": "A rating approach based on sentiment analysis", "text": "With the advent to social media the number of reviews for any particular product is in millions, as there exist thousand of websites where that particular product exists. As the numbers of reviews are very high the user ends up spending a lot of time for searching best product based on the experiences shared by review writers. This paper presents a sentiment based rating approach for food recipes which sorts food recipes present on various website on the basis of sentiments of review writers. The results are shown with the help of a mobile application: Foodoholic. The output of the application is an ordered list of recipes with user input as their core ingredient."}
{"_id": "438997f5eb454e8f55023f402ee237314f5f7dda", "title": "Poisson surface reconstruction", "text": "We show that surface reconstruction from oriented points can be cast as a spatial Poisson problem. This Poisson formulation considers all the points at once, without resorting to heuristic spatial partitioning or blending, and is therefore highly resilient to data noise. Unlike radial basis function schemes, our Poisson approach allows a hierarchy of locally supported basis functions, and therefore the solution reduces to a well conditioned sparse linear system. We describe a spatially adaptive multiscale algorithm whose time and space complexities are proportional to the size of the reconstructed model. Experimenting with publicly available scan data, we demonstrate reconstruction of surfaces with greater detail than previously achievable."}
{"_id": "fc8294a1b8243b90f79abe6c2939f778a2d609be", "title": "A study of count-based exploration and bonus for reinforcement learning", "text": "In order to better balance exploration and exploitation and solve the problem of sparse reward in reinforcement learning, this paper focus on changing the traditional exploration strategy, such as \u03b5-greedy strategy, by introducing the count for state-action visitation into the Boltzmann distribution method as a new behavioral exploration strategy, and adding count-based exploration bonus to guide the agent to exploration, proposes a method of count-based exploration and bonus for reinforcement learning. Selecting Sarsa(\u03bb) learning algorithm as the basis, the effectiveness of the count-based Sarsa(\u03bb) learning algorithm is verified by carrying out comparative experiments of the Q learning algorithm based on the \u03b5-greedy strategy, the Sarsa(\u03bb) learning algorithm based on the \u03b5-greedy strategy, and the count-based Sarsa(\u03bb) learning algorithm in the tank combat simulation problem. Due to its simplicity, it provides an easy yet powerful baseline for solving MDPs that require informed exploration."}
{"_id": "a32c9afad21bda9c2624dc95570146438c5ffa08", "title": "Development of a Compact Switched-Reluctance Motor Drive for EV Propulsion With Voltage-Boosting and PFC Charging Capabilities", "text": "This paper presents a compact battery-powered switched-reluctance motor (SRM) drive for an electric vehicle with voltage-boosting and on-board power-factor-corrected-charging capabilities. Although the boost-type front-end DC/DC converter is externally equipped, the on-board charger is formed by the embedded components of SRM windings and converter. In the driving mode, the DC/DC boost converter is controlled to establish boostable well-regulated DC-link voltage for the SRM drive from the battery. During demagnetization of each stroke, the winding stored energy is automatically recovered back to the battery. Moreover, the regenerative braking is achieved by properly setting the SRM in the regenerating mode. The controls of two power stages are properly designed and digitally realized using a common digital signal processor. Good winding current and speed dynamic responses of the SRM drive are obtained. In addition, the static operation characteristics are also improved. In the charging mode, the power devices that were embedded in the SRM converter and the three motor-phase windings are used to form a buck-boost switch-mode rectifier to charge the battery from utility with good power quality. All constituted circuit components of the charger are placed on board, and only the insertion of power cords is needed."}
{"_id": "1ab2e6e827156c648e6a2ff4d97fe03437d74014", "title": "On the Mathematics of Flat Origamis\u2217", "text": "Origami is the art of folding pieces of paper into works of sculpture without the aid of scissors or glue. Modern advancements in the complexity of origami (e.g., the work of Montroll and Maekawa) reveal a rich geometric structure governing the possibilities of paperfolding. In this paper we initiate a mathematical study of this \u201corigami geometry\u201d and explore the possibilities of a graph theoretic model. In particular, we study the properties of origami models which fold flat (i.e., can be pressed in a book without crumpling). Necessary and sufficient conditions are given for an origami model to locally fold flat, and the problems encountered in trying to extend these results globally are discussed."}
{"_id": "342c8308f0fb0742e8d4ef5fc470f57c10af6e27", "title": "Cocaine cues and dopamine in dorsal striatum: mechanism of craving in cocaine addiction.", "text": "The ability of drugs of abuse to increase dopamine in nucleus accumbens underlies their reinforcing effects. However, preclinical studies have shown that with repeated drug exposure neutral stimuli paired with the drug (conditioned stimuli) start to increase dopamine by themselves, which is an effect that could underlie drug-seeking behavior. Here we test whether dopamine increases occur to conditioned stimuli in human subjects addicted to cocaine and whether this is associated with drug craving. We tested eighteen cocaine-addicted subjects using positron emission tomography and [11C]raclopride (dopamine D2 receptor radioligand sensitive to competition with endogenous dopamine). We measured changes in dopamine by comparing the specific binding of [11C]raclopride when subjects watched a neutral video (nature scenes) versus when they watched a cocaine-cue video (scenes of subjects smoking cocaine). The specific binding of [11C]raclopride in dorsal (caudate and putamen) but not in ventral striatum (in which nucleus accumbens is located) was significantly reduced in the cocaine-cue condition and the magnitude of this reduction correlated with self-reports of craving. Moreover, subjects with the highest scores on measures of withdrawal symptoms and of addiction severity that have been shown to predict treatment outcomes, had the largest dopamine changes in dorsal striatum. This provides evidence that dopamine in the dorsal striatum (region implicated in habit learning and in action initiation) is involved with craving and is a fundamental component of addiction. Because craving is a key contributor to relapse, strategies aimed at inhibiting dopamine increases from conditioned responses are likely to be therapeutically beneficial in cocaine addiction."}
{"_id": "4a49bfb7d79e6645c651d55ff1eef8c69335b259", "title": "Detection of data injection attack in industrial control system using long short term memory recurrent neural network", "text": "In 2010, the outbreak of Stuxnet sounded a warning in the field of industrial control.security. As the major attack form of Stuxnet, data injection attack is characterized by high concealment and great destructiveness. This paper proposes a new method to detect data injection attack in Industrial Control Systems (ICS), in which Long Short Term Memory Recurrent Neural Network (LSTM-RNN) is a temporal sequences predictor. We then use the Euclidean detector to identify attacks in a model of a chemical plant. With simulation and evaluation in Tennessee Eastman (TE) process, we show that this method is able to detect various types of data injection attacks."}
{"_id": "dd86669b91927f4c4504786269f93870854e117f", "title": "Theorieentwicklung in der Akzeptanzforschung: Entwicklung eines Modells auf Basis einer qualitativen Studie", "text": "Die Untersuchung der Akzeptanz von Technologien im Allgemeinen und Software im Besonderen ist ein fruchtbares Feld in der angloamerikanischen Forschung der Disziplinen (Management) Information Systems und der deutschen Wirtschaftsinformatik. Trotz zahlreicher Untersuchungen, die ihren Ursprung in dem Technologie-Akzeptanzmodell und verwandten Theorien haben, mehren sich Beitr\u00e4ge, welche die Defizite bisherigerer Studien und Forschungsans\u00e4tze herausstellen. Eine wesentliche Ursache ist Fokussierung auf quantitative Forschungsmethoden, wie wir anhand von Metastudien und einer eigenen Literaturrecherche aufzeigen. W\u00e4hrend quantitative Verfahren sich in der Regel gut f\u00fcr eine \u00dcberpr\u00fcfung gegebener Theorien eignen, ist ihr Beitrag f\u00fcr die Bildung neuer Theorien begrenzt. Im vorliegenden Beitrag wird aufgezeigt, wie ein qualitatives Verfahren zur besseren Theoriebildung genutzt werden kann. Am Beispiel der Untersuchung der Akzeptanz von Projektmanagement-Software (PMS) kann aufgezeigt werden, dass dieses Vorgehen zu neuen Konstrukten f\u00fchrt, w\u00e4hrend einige Konstrukte bestehender Akzeptanz-Theorien nicht best\u00e4tigt werden konnten."}
{"_id": "1371bd3e6834136d08e917c6e51ca9ff61bbd323", "title": "A Kalman Filter-Based Algorithm for IMU-Camera Calibration: Observability Analysis and Performance Evaluation", "text": "Vision-aided inertial navigation systems (V-INSs) can provide precise state estimates for the 3-D motion of a vehicle when no external references (e.g., GPS) are available. This is achieved by combining inertial measurements from an inertial measurement unit (IMU) with visual observations from a camera under the assumption that the rigid transformation between the two sensors is known. Errors in the IMU-camera extrinsic calibration process cause biases that reduce the estimation accuracy and can even lead to divergence of any estimator processing the measurements from both sensors. In this paper, we present an extended Kalman filter for precisely determining the unknown transformation between a camera and an IMU. Contrary to previous approaches, we explicitly account for the time correlation of the IMU measurements and provide a figure of merit (covariance) for the estimated transformation. The proposed method does not require any special hardware (such as spin table or 3-D laser scanner) except a calibration target. Furthermore, we employ the observability rank criterion based on Lie derivatives and prove that the nonlinear system describing the IMU-camera calibration process is observable. Simulation and experimental results are presented that validate the proposed method and quantify its accuracy."}
{"_id": "5c17dcf68621a2513eb2eb3b283ac69c5273006e", "title": "Estimation and Removal of Clock Skew from Network Delay Measurements", "text": "Packet delay and loss traces are frequently used by network engineers, as well as network applications, to analyze netw ork performance. The clocks on the end-systems used to measure the dela ys, however, are not always synchronized, and this lack of synchron ization reduces the accuracy of these measurements. Therefore, estimating and removing relative skews and offsets from delay measurements between sender and receiver clocks are critical to the accurate assessment and analysis of network performance. In this paper we introduce a linear programming-based algorithm to estimate the clock skew in network delay measur ements and compare it with three other algorithms. We show that our algorithm has time complexity of O(N), leaves the delay after the skew removal positive, and is robust in the sense that the error margin of the skew est imate is independent of the magnitude of the skew. We use traces of real Int ernet delay measurements to assess the algorithm, and compare its perfo rmance to that of three other algorithms. Furthermore, we show through simulation that our algorithm is unbiased, and that the sample variance of the skew estimate is better (smaller) than existing algorithms. Keywords\u2014 clock skew, clock ratio, end-to-end delay, delay measurement."}
{"_id": "76d3036f2988ceb693e6f72bac37569a9c4797d9", "title": "A synchronized visual-inertial sensor system with FPGA pre-processing for accurate real-time SLAM", "text": "Robust, accurate pose estimation and mapping at real-time in six dimensions is a primary need of mobile robots, in particular flying Micro Aerial Vehicles (MAVs), which still perform their impressive maneuvers mostly in controlled environments. This work presents a visual-inertial sensor unit aimed at effortless deployment on robots in order to equip them with robust real-time Simultaneous Localization and Mapping (SLAM) capabilities, and to facilitate research on this important topic at a low entry barrier. Up to four cameras are interfaced through a modern ARM-FPGA system, along with an Inertial Measurement Unit (IMU) providing high-quality rate gyro and accelerometer measurements, calibrated and hardware-synchronized with the images. This facilitates a tight fusion of visual and inertial cues that leads to a level of robustness and accuracy which is difficult to achieve with purely visual SLAM systems. In addition to raw data, the sensor head provides FPGA-pre-processed data such as visual keypoints, reducing the computational complexity of SLAM algorithms significantly and enabling employment on resource-constrained platforms. Sensor selection, hardware and firmware design, as well as intrinsic and extrinsic calibration are addressed in this work. Results from a tightly coupled reference visual-inertial motion estimation framework demonstrate the capabilities of the presented system."}
{"_id": "0bce67a3c38ac1634e59e5309707cdfca588195a", "title": "TICSync: Knowing when things happened", "text": "Modern robotic systems are composed of many distributed processes sharing a common communications infrastructure. High bandwidth sensor data is often collected on one computer and served to many consumers. It is vital that every device on the network agrees on how time is measured. If not, sensor data may be at best inconsistent and at worst useless. Typical clocks in consumer grade PCs are highly inaccurate and temperature sensitive. We argue that traditional approaches to clock synchronization, such as the use of NTP are inappropriate in the robotics context. We present an extremely efficient algorithm for learning the mapping between distributed clocks, which typically achieves better than millisecond accuracy within just a few seconds. We also give a probabilistic analysis providing an upper-bound error estimate."}
{"_id": "5094cc8e7ef25e3d47d3c645ce7a5cc44a9102a2", "title": "A 3.3-V single-poly CMOS audio ADC delta-sigma modulator with 98-dB peak SINAD and 105-dB peak SFDR", "text": "This paper presents a second-order /spl Delta//spl Sigma/ modulator for audio-band analog-to-digital conversion implemented in a 3.3-V, 0.5-/spl mu/m, single-poly CMOS process using metal-metal capacitors that achieves 98-dB peak signal-to-noise-and-distortion ratio and 105-dB peak spurious-free dynamic range. The design uses a low-complexity, first-order mismatch shaping 33-level digital-to-analog converter and a 33-level flash analog-to-digital converter with digital common-mode rejection and dynamic element matching of comparator offsets. These signal-processing innovations, combined with established circuit techniques, enable state-of-the art performance in CMOS technology optimized for digital circuits."}
{"_id": "97087e9f331de56ebcf1ed36d3a98b9d758fac4c", "title": "The influence of chronotype and intelligence on academic achievement in primary school is mediated by conscientiousness, midpoint of sleep and motivation.", "text": "Individuals differ in their timing of sleep (bed times, rise times) and in their preference for morning or evening hours. Previous work focused on the relationship between academic achievement and these variables in secondary school students. The main aim of the study is to investigate the relationship between chronotype and academic achievement in 10-year-old children (n\u2009=\u20091125) attending 4th grade of primary school. They filled a cognitive test (Culture Fair Intelligence Test, CFT 20-R) and questions about rise times and bed times, academic achievement, conscientiousness and motivation. We used the \"scales for the assessment of learning and performance motivation\" (SELLMO; Skalen zur Erfassung der Lern- und Leistungsmotivation for motivation), the short version of the Five-Factor Personality Inventory Children (FFPI-C) to measure conscientiousness, and the Composite Scale of Morningness (CSM) to assess morningness-eveningness. Mean CSM score was 37.84\u2009\u00b1\u20096.66, midpoint of sleep was 1:36\u2009\u00b1\u200900:25 and average sleep duration (time in bed) was 10:15\u2009\u00b1\u20090:48. Morningness orientation was positively related to intelligence, conscientiousness and learning objectives. Eveningness orientation was related to avoidance performance objectives and work avoidance. Early midpoint of sleep, conscientiousness and intelligence were associated with better grades. The multivariate model showed that intelligence was the strongest predictor of good grades. Conscientiousness, motivation, younger age and an earlier midpoint of sleep were positively related to good grades. This is the first study in primary school pupils, and it shows that the relationship between evening orientation and academic achievement is already prevalent at this age even when controlling for important predictors of achievement."}
{"_id": "4a4c28b55c65f0eb021029d8276fcf1358ce3d26", "title": "Learning Policies for First Person Shooter Games Using Inverse Reinforcement Learning", "text": "The creation of effective autonomous agents (bots) for combat scenarios has long been a goal of the gaming industry. However, a secondary consideration is whether the autonomous bots behave like human players; this is especially important for simulation/training applications which aim to instruct participants in real-world tasks. Bots often compensate for a lack of combat acumen with advantages such as accurate targeting, predefined navigational networks, and perfect world knowledge, which makes them challenging but often predictable opponents. In this paper, we examine the problem of teaching a bot to play like a human in first-person shooter game combat scenarios. Our bot learns attack, exploration and targeting policies from data collected from expert human player demonstrations in Unreal Tournament. We hypothesize that one key difference between human players and autonomous bots lies in the relative valuation of game states. To capture the internal model used by expert human players to evaluate the benefits of different actions, we use inverse reinforcement learning to learn rewards for different game states. We report the results of a human subjects\u2019 study evaluating the performance of bot policies learned from human demonstration against a set of standard bot policies. Our study reveals that human players found our bots to be significantly more human-like than the standard bots during play. Our technique represents a promising stepping-stone toward addressing challenges such as the Bot Turing Test (the CIG Bot 2K Competition)."}
{"_id": "fb088049e4698f8517e5d83dc134e756ffae7eb8", "title": "Service Level Agreement in Cloud Computing", "text": "Cloud computing that provides cheap and pay-as-you-go computing resources is rapidly gaining momentum as an alternative to traditional IT Infrastructure. As more and more consumers delegate their tasks to cloud providers, Service Level Agreements(SLA) between consumers and providers emerge as a key aspect. Due to the dynamic nature of the cloud, continuous monitoring on Quality of Service (QoS) attributes is necessary to enforce SLAs. Also numerous other factors such as trust (on the cloud provider) come into consideration, particularly for enterprise customers that may outsource its critical data. This complex nature of the cloud landscape warrants a sophisticated means of managing SLAs. This paper proposes a mechanism for managing SLAs in a cloud computing environment using the Web Service Level Agreement(WSLA) framework, developed for SLA monitoring and SLA enforcement in a Service Oriented Architecture (SOA). We use the third party support feature of WSLA to delegate monitoring and enforcement tasks to other entities in order to solve the trust issues. We also present a real world use case to validate our proposal."}
{"_id": "b6e0fad2e3d8a39c457abecac22d4d1e011e546a", "title": "Ensemble Named Entity Recognition (NER): Evaluating NER Tools in the Identification of Place Names in Historical Corpora", "text": "The field of Spatial Humanities has advanced substantially in the past years. The identification and extraction of toponyms and spatial information mentioned in historical text collections has allowed its use in innovative ways, making possible the application of spatial analysis and the mapping of these places with geographic information systems. For instance, automated place name identification is possible with Named Entity Recognition (NER) systems. Statistical NER methods based on supervised learning, in particular, are highly successful with modern datasets. However, there are still major challenges to address when dealing with historical corpora. These challenges include language changes over time, spelling variations, transliterations, OCR errors, and sources written in multiple languages among others. In this article, considering a task of place name recognition over two collections of historical correspondence, we report an evaluation of five NER systems and an approach that combines these through a voting system. We found that although individual performance of each NER system was corpus dependent, the ensemble combination was able to achieve consistent measures of precision and recall, outperforming the individual NER systems. In addition, the results showed that these NER systems are not strongly dependent on preprocessing and translation to Modern English."}
{"_id": "c7e2c97049b7f365cc42cd6bc795d1209753f4c0", "title": "Flickr Circles: Aesthetic Tendency Discovery by Multi-View Regularized Topic Modeling", "text": "Aesthetic tendency discovery is a useful and interesting application in social media. In this paper we propose to categorize large-scale Flickr users into multiple circles, each containing users with similar aesthetic interests (e.g., landscapes). We notice that: (1) an aesthetic model should be flexible as different visual features may be used to describe different image sets; (2) the numbers of photos from different Flickr users vary significantly and some users may have very few photos; and (3) visual features from each Flickr photo should be seamlessly integrated at both low-level and high-level. To meet these challenges, we propose to fuze color, textural, and semantic channel features using a multi-view learning framework, where the feature weights are adjusted automatically. Then, a regularized topic model is developed to quantify each user's aesthetic interest as a distribution in the latent space. Afterward, a graph is constructed to describe the discrepancy of aesthetic interests among users. Apparently, densely connected users are with similar aesthetic interests. Thus, an efficient dense subgraph mining algorithm is adopted to group Flickr users into multiple circles. Experiments have shown that our approach performs competitively on a million-scale image set crawled from Flickr. Besides, our method can enhance the transferal-based photo cropping [40] as reported by the user study."}
{"_id": "6f5270009b669ba0e963878bcbbbf2a47ef75e1b", "title": "SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming", "text": "Support vector machine (SVM) soft margin classifiers are important learning algorithms for classification problems. They can be stated as convex optimization problems and are suitable for a large data setting. Linear programming SVM classifiers are especially efficient for very large size samples. But little is known about their convergence, compared with the well-understood quadratic programming SVM classifier. In this article, we point out the difficulty and provide an error analysis. Our analysis shows that the convergence behavior of the linear programming SVM is almost the same as that of the quadratic programming SVM. This is implemented by setting a stepping-stone between the linear programming SVM and the classical 1-norm soft margin classifier. An upper bound for the misclassification error is presented for general probability distributions. Explicit learning rates are derived for deterministic and weakly separable distributions, and for distributions satisfying some Tsybakov noise condition."}
{"_id": "74dd6b583c834bf68916a3a46c654e335dff1d2e", "title": "Python robotics: an environment for exploring robotics beyond LEGOs", "text": "This paper describes Pyro, a robotics programming environment designed to allow inexperienced undergraduates to explore topics in advanced robotics. Pyro, which stands for Python Robotics, runs on a number of advanced robotics platforms. In addition, programs in Pyro can abstract away low-level details such that individual programs can work unchanged across very different robotics hardware. Results of using Pyro in an undergraduate course are discussed."}
{"_id": "ac5371fe31b7dce69284f4b0e919654b13ce3ffb", "title": "Heat, vibration, dust, salt spray, weather - taking wireless positioning to the extreme", "text": "There are lots of local positioning systems available for indoor applications, partly even for outdoor use. Many of them rely on a rather clean environment which does not interfere in their measurements too much. Industrial environments, in- or outdoors, often do not meet these requirements. The positioning applications here are surrounded by dirt, dust and vibrations. The number of local positioning systems being able to perform properly here at all times is limited to such not dependent on optical or sonic measurements. Local secondary radar systems proved to be a reliable source of distance and position measurements, even in harsh environments. This paper examines a few use cases and depicts the performance of local positioning radar systems there."}
{"_id": "7ff5d85d135c0d618cca36cd52ae4c9323895ea4", "title": "Gender differences in mathematics anxiety and the relation to mathematics performance while controlling for test anxiety", "text": "Mathematics anxiety (MA), a state of discomfort associated with performing mathematical tasks, is thought to affect a notable proportion of the school age population. Some research has indicated that MA negatively affects mathematics performance and that girls may report higher levels of MA than boys. On the other hand some research has indicated that boys\u2019 mathematics performance is more negatively affected by MA than girls\u2019 performance is. The aim of the current study was to measure girls\u2019 and boys\u2019 mathematics performance as well as their levels of MA while controlling for test anxiety (TA) a construct related to MA but which is typically not controlled for in MA studies. Four-hundred and thirty three British secondary school children in school years 7, 8 and 10 completed customised mental mathematics tests and MA and TA questionnaires. No gender differences emerged for mathematics performance but levels of MA and TA were higher for girls than for boys. Girls and boys showed a positive correlation between MA and TA and a negative correlation between MA and mathematics performance. TA was also negatively correlated with mathematics performance, but this relationship was stronger for girls than for boys. When controlling for TA, the negative correlation between MA and performance remained for girls only. Regression analyses revealed that MA was a significant predictor of performance for girls but not for boys. Our study has revealed that secondary school children experience MA. Importantly, we controlled for TA which is typically not controlled for in MA studies. Girls showed higher levels of MA than boys and high levels of MA were related to poorer levels of mathematics performance. As well as potentially having a detrimental effect on \u2018online\u2019 mathematics performance, past research has shown that high levels of MA can have negative consequences for later mathematics education. Therefore MA warrants attention in the mathematics classroom, particularly because there is evidence that MA develops during the primary school years. Furthermore, our study showed no gender difference in mathematics performance, despite girls reporting higher levels of MA. These results might suggest that girls may have had the potential to perform better than boys in mathematics however their performance may have been attenuated by their higher levels of MA. Longitudinal research is needed to investigate the development of MA and its effect on mathematics performance."}
{"_id": "50c4bd192e861a74769686970c582268995a7d78", "title": "The Virtual Human Interface: A Photorealistic Digital Human", "text": "o mimic the quality of everyday human communication, future computer interfaces must combine the benefits of high visual fidelity with conversational intelligence and\u2014above all\u2014the ability to modulate the emotions of their users. From a computer graphics or visual perspective, achieving this goal requires creating synthetic digital humans that have photorealistic faces capable of expressing the finest shades of emotion. From a perceptual point of view, the same system must also read users' emotional reactions and adapt the behavior of the digital human accordingly. Our research focuses on implementing this concept to create a face-to-face system called the Virtual Human Interface (VHI). Photorealistic virtual humans\u2014 in contrast to nonrealistic virtual characters\u2014make a more effective communication interface between computers and humans because they can stimulate declarative and procedural memory in the human brain. Declarative memory is our memory of information and events. Procedural memory, on the other hand, is the memory of how to do something, such as riding a bike. Most computer interfaces primarily access users' declarative memory; the person learns information about a particular topic. The VHI system, however, engages the user through entertaining emotional responses that help engrave knowledge in the user's mind and even modify the user's behavior through procedural memory. During the past several decades, researchers have conducted countless studies on agents and human animation. Early work mainly addressed low-polygon virtual humans that users could animate in real time. 1 Many of these animation systems initially addressed purposes other than the needs of human\u2013computer interaction. However, the requirements of facial ani-mation\u2014and especially speech synthesis\u2014demand a different underlying architecture that can effectively model how real faces move and change. 2-4 As a result, research began to focus on creating autonomous agents that could exhibit rich personalities and interact in virtual worlds inhabited by other characters. To provide the illusion of a lifelike character, researchers have developed detailed emotional and personality models that can control the animation channels as a function of the virtual human's personality, mood, and emotions. 7-9 The real-time interaction with these virtual characters poses an extra set of technical challenges in terms of the speed, computational power, and visual quality required to make the user believe that he or she is interacting with a living creature. To achieve this goal, researchers eventually replaced precrafted animated actions with intelligent behavior modules that could control speech, locomotion, gaze, blinks, gestures (including various postures), and interaction with the \u2026"}
{"_id": "96bc75787e7ef09b869fad7a360c432aa0150008", "title": "SwellShark: A Generative Model for Biomedical Named Entity Recognition without Labeled Data", "text": "We present SWELLSHARK, a framework for building biomedical named entity recognition (NER) systems quickly and without hand-labeled data. Our approach views biomedical resources like lexicons as function primitives for autogenerating weak supervision. We then use a generative model to unify and denoise this supervision and construct large-scale, probabilistically labeled datasets for training high-accuracy NER taggers. In three biomedical NER tasks, SWELLSHARK achieves competitive scores with state-of-the-art supervised benchmarks using no hand-labeled training data. In a drug name extraction task using patient medical records, one domain expert using SWELLSHARK achieved within 5.1% of a crowdsourced annotation approach \u2013 which originally utilized 20 teams over the course of several weeks \u2013 in 24 hours."}
{"_id": "b22b1e9b20a34e4d8efeb024499556752f74f1bd", "title": "Robust Federated Learning Using ADMM in the Presence of Data Falsifying Byzantines", "text": "In this paper, we consider the problem of federated (or decentralized) learning using ADMM with multiple agents. We consider a scenario where a certain fraction of agents (referred to as Byzantines) provide falsified data to the system. In this context, we study the convergence behavior of the decentralized ADMM algorithm. We show that ADMM converges linearly to a neighborhood of the solution to the problem under certain conditions. We next provide guidelines for network structure design to achieve faster convergence. Next, we provide necessary conditions on the falsified updates for exact convergence to the true solution. To tackle the data falsification problem, we propose a robust variant of ADMM. We also provide simulation results to validate the analysis and show the resilience of the proposed algorithm to Byzantines."}
{"_id": "bd707485346ce54ea5983e96b3d6443534757eb0", "title": "PIP Loss : A Unitary-invariant Metric for Understanding Functionality and Dimensionality of Vector Embeddings", "text": "In this paper, we present a theoretical framework for understanding vector embedding, a fundamental building block of many deep learning models, especially in NLP. We discover a natural unitary-invariance in vector embeddings, which is required by the distributional hypothesis. This unitary-invariance states the fact that two embeddings are essentially equivalent if one can be obtained from the other by performing a relative-geometry preserving transformation, for example a rotation. This idea leads to the Pairwise Inner Product (PIP) loss, a natural unitary-invariant metric for the distance between two embeddings. We demonstrate that the PIP loss captures the difference in functionality between embeddings, and that it is tightly connect to two fundamental properties, namely similarity and compositionality. By formulating the embedding training process as matrix factorization under noise, we reveal a fundamental bias-variance tradeoff in dimensionality selection. With tools from perturbation and stability theory, we provide an upper bound on the PIP loss using the signal spectrum and noise variance, both of which can be readily inferred from data. Our framework sheds light on many empirical phenomena, including the existence of an optimal dimension, and the robustness of embeddings against over-parametrization. The bias-variance tradeoff of PIP loss explicitly answers the fundamental open problem of dimensionality selection for vector embeddings. 1 ar X iv :1 80 3. 00 50 2v 3 [ st at .M L ] 1 2 A pr 2 01 8"}
{"_id": "bd6e0eded16cfe1526330c31971e6b92f968bae0", "title": "A Big Data system supporting Bosch Braga Industry 4.0 strategy", "text": "People, devices, infrastructures and sensors can constantly communicate exchanging data and generating new data that trace many of these exchanges. This leads to vast volumes of data collected at ever increasing velocities and of different variety, a phenomenon currently known as Big Data. In particular, recent developments in Information and Communications Technologies are pushing the fourth industrial revolution, Industry 4.0, being data generated by several sources like machine controllers, sensors, manufacturing systems, among others. Joining volume, variety and velocity of data, with Industry 4.0, makes the opportunity to enhance sustainable innovation in the Factories of the Future. In this, the collection, integration, storage, processing and analysis of data is a key challenge, being Big Data systems needed to link all the entities and data needs of the factory. Thereby, this paper addresses this key challenge, proposing and implementing a Big Data Analytics architecture, using a multinational organisation (Bosch Car Multimedia \u2013 Braga) as a case study. In this work, all the data lifecycle, from collection to analysis, is handled, taking into consideration the different data processing speeds that can exist in the real environment of a factory (batch or stream)."}
{"_id": "da5ace0b68a5e6a3a8aeabff07cbc63eb28e0309", "title": "UAVSim: A simulation testbed for unmanned aerial vehicle network cyber security analysis", "text": "Increased use of unmanned systems in various tasks enables users to complete important missions without risking human lives. Nonetheless, these systems pose a huge threat if the operational cyber security is not handled properly, especially for the unmanned aerial vehicle systems (UAVS), which can cause catastrophic damages. Therefore, it is important to check the impact of various attack attempts on the UAV system. The most economical and insightful way to do this is to simulate operational scenarios of UAVs in advance. In this paper, we introduce UAVSim, a simulation testbed for Unmanned Aerial Vehicle Networks cyber security analysis. The testbed allows users to easily experiment by adjusting different parameters for the networks, hosts and attacks. In addition, each UAV host works on well-defined mobility framework and radio propagation models, which resembles real-world scenarios. Based on the experiments performed in UAVSim, we evaluate the impact of Jamming attacks against UAV networks and report the results to demonstrate the necessity and usefulness of the testbed."}
{"_id": "2213f31b361f0793edf70d641d5b1572d436d0d0", "title": "Evaluation of the Patient with Nasal Obstruction.", "text": "Nasal obstruction is often multifactorial and knowledge of the contributing factors is critical to appropriate evaluation, diagnosis, and execution of a treatment plan. Recognizing and appropriately managing all components of nasal obstruction will increase the likelihood of symptomatic improvement and patient satisfaction."}
{"_id": "768a2c5134f855b8bb19c745321a0027c307e90c", "title": "Research in Education ( CPRE ) 1-2001 Teacher Turnover , Teacher Shortages , and the Organization of Schools", "text": "Contemporary educational theory holds that one of the pivotal causes of inadequate school performance is the inability of schools to adequately staff classrooms with qualified teachers. Contemporary theory also holds that these staffing problems are primarily due to shortages of teachers, which, in turn, are primarily due to recent increases in teacher retirements and student enrollments. This analysis investigates the possibility that there are other factors that might have an impact on teacher turnover levels, and, in turn, the staffing problems of schools, factors rooted in the organizational characteristics and conditions of schools. The data utilized in this investigation are from the Schools and Staffing Survey and its supplement, the Teacher Followup Survey, a large, comprehensive, nationally representative survey of teachers and schools conducted by the National Center for Education Statistics. The results of this analysis show that, net of teacher effects, there are significant effects of school characteristics and organizational conditions on teacher turnover which have largely been overlooked by previous research. For example, the data show that while high-poverty public schools have moderately higher rates, contrary to conventional wisdom, neither larger schools, nor public schools in large school districts, nor urban public schools have especially high rates of teacher turnover. In contrast, small private schools stand out for their high rates of turnover. Moreover, the data show, again contrary to popular wisdom, that the amount of turnover accounted for by retirement is relatively minor, especially when compared to that resulting from two related causes \u2013 teacher job dissatisfaction and teachers pursuing other jobs. The data show that, in particular, low salaries, inadequate support from the school administration, student discipline problems, and limited faculty input into school decision-making all contribute to higher rates of turnover, after controlling for the characteristics of both teachers and schools. The results of this investigation suggest that school staffing problems are neither synonymous with, nor primarily due to, teacher shortages in the conventional sense of a deficit in the supply of teachers. Rather, this study suggests that school staffing problems are primarily due to excess demand resulting from a \"revolving door\" \u2013 where large numbers of teachers depart their jobs for reasons other than retirement. This study also suggests that popular education initiatives, such as teacher recruitment programs, will not solve the staffing problems of such schools if they do not also address the organizational sources of low teacher retention. Disciplines Educational Administration and Supervision | Education Economics | Social and Philosophical Foundations of Education | Teacher Education and Professional Development Comments View on the CPRE website. This report is available at ScholarlyCommons: https://repository.upenn.edu/cpre_researchreports/12 ctp Center for the Study of Teaching and Policy U N I V E R S I T Y O F W A S H I N G T O N Teacher Turnover, Teacher Shortages, and the Organization of Schools by Richard M. Ingersoll University of Pennsylvania"}
{"_id": "4e9809960be9ec7f1ad8fadf61dad8cb1c8818d0", "title": "Dynamic Modelling of Differential-Drive Mobile Robots using Lagrange and Newton-Euler Methodologies: A Unified Framework", "text": "In recent years, there has been a considerable interest in the area of mobile robotics and educational technologies [1-7]. For control engineers and researchers, there is a wealth of literature dealing with wheeled mobile robots (WMR) control and their applications. However, while the subject of kinematic modeling of WMR is well documented and easily understood by students, the subject of dynamic modeling of WMR has not been addressed adequately in the literature. The dynamics of WMR are highly nonlinear and involve non-holonomic constraints which makes difficult their modeling and analysis especially for new engineering students starting their research in this field. Therefore, a detailed and accurate dynamic model describing the WMR motion need to be developed to offer students a general framework for simulation analysis and model based control system design."}
{"_id": "16ce0d2712a8f8f4797496cc6c306d6c2b8535e9", "title": "Research Paper: Creating an Online Dictionary of Abbreviations from MEDLINE", "text": "OBJECTIVE\nThe growth of the biomedical literature presents special challenges for both human readers and automatic algorithms. One such challenge derives from the common and uncontrolled use of abbreviations in the literature. Each additional abbreviation increases the effective size of the vocabulary for a field. Therefore, to create an automatically generated and maintained lexicon of abbreviations, we have developed an algorithm to match abbreviations in text with their expansions.\n\n\nDESIGN\nOur method uses a statistical learning algorithm, logistic regression, to score abbreviation expansions based on their resemblance to a training set of human-annotated abbreviations. We applied it to Medstract, a corpus of MEDLINE abstracts in which abbreviations and their expansions have been manually annotated. We then ran the algorithm on all abstracts in MEDLINE, creating a dictionary of biomedical abbreviations. To test the coverage of the database, we used an independently created list of abbreviations from the China Medical Tribune.\n\n\nMEASUREMENTS\nWe measured the recall and precision of the algorithm in identifying abbreviations from the Medstract corpus. We also measured the recall when searching for abbreviations from the China Medical Tribune against the database.\n\n\nRESULTS\nOn the Medstract corpus, our algorithm achieves up to 83% recall at 80% precision. Applying the algorithm to all of MEDLINE yielded a database of 781,632 high-scoring abbreviations. Of all the abbreviations in the list from the China Medical Tribune, 88% were in the database.\n\n\nCONCLUSION\nWe have developed an algorithm to identify abbreviations from text. We are making this available as a public abbreviation server at \\url[http://abbreviation.stanford.edu/]."}
{"_id": "1f1eaf19e38b541eec8a02f099e3090536a4c936", "title": "The Unified Medical Language System (UMLS): integrating biomedical terminology", "text": "The Unified Medical Language System (http://umlsks.nlm.nih.gov) is a repository of biomedical vocabularies developed by the US National Library of Medicine. The UMLS integrates over 2 million names for some 900,000 concepts from more than 60 families of biomedical vocabularies, as well as 12 million relations among these concepts. Vocabularies integrated in the UMLS Metathesaurus include the NCBI taxonomy, Gene Ontology, the Medical Subject Headings (MeSH), OMIM and the Digital Anatomist Symbolic Knowledge Base. UMLS concepts are not only inter-related, but may also be linked to external resources such as GenBank. In addition to data, the UMLS includes tools for customizing the Metathesaurus (MetamorphoSys), for generating lexical variants of concept names (lvg) and for extracting UMLS concepts from text (MetaMap). The UMLS knowledge sources are updated quarterly. All vocabularies are available at no fee for research purposes within an institution, but UMLS users are required to sign a license agreement. The UMLS knowledge sources are distributed on CD-ROM and by FTP."}
{"_id": "2dff153c498c79232ea5bab523aeca697ae18823", "title": "Untangling Text Data Mining", "text": "The possibilities for data mining from large text collections are virtually untapped. Text expresses a vast, rich range of information, but encodes this information in a form that is difficult to decipher automatically. Perhaps for this reason, there has been little work in text data mining to date, and most people who have talked about it have either conflated it with information access or have not made use of text directly to discover heretofore unknown information. In this paper I will first define data mining, information access, and corpus-based computational linguistics, and then discuss the relationship of these to text data mining. The intent behind these contrasts is to draw attention to exciting new kinds of problems for computational linguists. I describe examples of what I consider to be real text data mining efforts and briefly outline recent ideas about how to pursue exploratory data analysis over text."}
{"_id": "da6c3fdf8ef9aae979a5dd156e074ba6691b2e2c", "title": "GENIA corpus - a semantically annotated corpus for bio-textmining", "text": "MOTIVATION\nNatural language processing (NLP) methods are regarded as being useful to raise the potential of text mining from biological literature. The lack of an extensively annotated corpus of this literature, however, causes a major bottleneck for applying NLP techniques. GENIA corpus is being developed to provide reference materials to let NLP techniques work for bio-textmining.\n\n\nRESULTS\nGENIA corpus version 3.0 consisting of 2000 MEDLINE abstracts has been released with more than 400,000 words and almost 100,000 annotations for biological terms."}
{"_id": "869f4fc59d74a96098ed46935cb6fd1a537c38ce", "title": "Information theoretic MPC for model-based reinforcement learning", "text": "We introduce an information theoretic model predictive control (MPC) algorithm capable of handling complex cost criteria and general nonlinear dynamics. The generality of the approach makes it possible to use multi-layer neural networks as dynamics models, which we incorporate into our MPC algorithm in order to solve model-based reinforcement learning tasks. We test the algorithm in simulation on a cart-pole swing up and quadrotor navigation task, as well as on actual hardware in an aggressive driving task. Empirical results demonstrate that the algorithm is capable of achieving a high level of performance and does so only utilizing data collected from the system."}
{"_id": "9249389a2fbc2151a80b4731f007c780616b067a", "title": "Fading memory and the problem of approximating nonlinear operators with volterra series", "text": "Ahsfract-Using the notion of fading memory we prove very strong versions of two folk theorems. The first is that any time-inuariant (TZ) con~inuou.r nonlinear operator can be approximated by a Volterra series operator, and the second is that the approximating operator can be realized as a finiiedimensional linear dynamical system with a nonlinear readout map. While previous approximation results are valid over finite time inlero& and for signals in compact sets, the approximations presented here hold for all time and for signals in useful (noncompact) sets. The discretetime analog of the second theorem asserts that nny TZ operator with fading memory can be approximated (in our strong sense) by a nonlinear movingaverage operator. Some further discussion of the notion of fading memory is given."}
{"_id": "6829120ef2f031003ec855b536e67015d337fc3c", "title": "Multimodal depression recognition with dynamic visual and audio cues", "text": "In this paper, we present our system design for audio visual multi-modal depression recognition. To improve the estimation accuracy of the Beck Depression Inventory (BDI) score, besides the Low Level Descriptors (LLD) features and the Local Gabor Binary Pattern-Three Orthogonal Planes (LGBP-TOP) features provided by the 2014 Audio/Visual Emotion Challenge and Workshop (AVEC2014), we extract extra features to capture key behavioural changes associated with depression. From audio we extract the speaking rate, and from video, the head pose features, the Space-Temporal Interesting Point (STIP) features, and local kinematic features via the Divergence-Curl-Shear descriptors. These features describe body movements, and spatio-temporal changes within the image sequence. We also consider global dynamic features, obtained using motion history histogram (MHH), bag of words (BOW) features and vector of local aggregated descriptors (VLAD). To capture the complementary information within the used features, we evaluate two fusion systems - the feature fusion scheme, and the model fusion scheme via local linear regression (LLR). Experiments are carried out on the training set and development set of the Depression Recognition Sub-Challenge (DSC) of AVEC2014, we obtain root mean square error (RMSE) of 7.6697, and mean absolute error (MAE) of 6.1683 on the development set, which are better or comparable with the state of the art results of the AVEC2014 challenge."}
{"_id": "99e6d870a04e5ad7f883d37274537e664391fd5d", "title": "Null hypothesis significance tests. A mix-up of two different theories: the basis for widespread confusion and numerous misinterpretations", "text": "Null hypothesis statistical significance tests (NHST) are widely used in quantitative research in the empirical sciences including scientometrics. Nevertheless, since their introduction nearly a century ago significance tests have been controversial. Many researchers are not aware of the numerous criticisms raised against NHST. As practiced, NHST has been characterized as a \u2018null ritual\u2019 that is overused and too often misapplied and misinterpreted. NHST is in fact a patchwork of two fundamentally different classical statistical testing models, often blended with some wishful quasi-Bayesian interpretations. This is undoubtedly a major reason why NHST is very often misunderstood. But NHST also has intrinsic logical problems and the epistemic range of the information provided by such tests is much more limited than most researchers recognize. In this article we introduce to the scientometric community the theoretical origins of NHST, which is mostly absent from standard statistical textbooks, and we discuss some of the most prevalent problems relating to the practice of NHST and trace these problems back to the mix-up of the two different theoretical origins. Finally, we illustrate some of the misunderstandings with examples from the scientometric literature and bring forward some modest recommendations for a more sound practice in quantitative data analysis."}
{"_id": "4ef2dc2e02883a0ec13ab7c07121012197a41b76", "title": "Comparative analysis of Role Base and Attribute Base Access Control Model in Semantic Web", "text": "To control the unauthorized access to the resources and services is an emerging security issue in Semantic Web (SW). There are various existing access control models such as Role base, Attribute base, Credential base, Concept level access control models. They all have some strengths and weaknesses with them. In this paper we first take an overview of history of access control models and the need of access control models in semantic web. This paper has a discussion of strengths and weaknesses of the RBAC and ABAC. Than we have a comparative analysis of RBAC and ABAC with some points of issue"}
{"_id": "fa310b5c0d5b1466a1234b2bbc396fe8fcf18cad", "title": "SDR-Based Resilient Wireless Communications", "text": "As the use of wireless technologies increases significantly due to ease of deployment, cost-effectiveness and the increase in bandwidth, there is a critical need to make the wireless communications secure, and resilient to attacks or faults (malicious or natural). Wireless communications are inherently prone to cyberattacks due to the open access to the medium. While current wireless protocols have addressed the privacy issues, they have failed to provide effective solutions against denial of service attacks, session hijacking and jamming attacks.In this paper, we present a resilient wireless communication architecture based on Moving Target Defense, and Software Defined Radios (SDRs). The approach achieves its resilient operations by randomly changing the runtime characteristics of the wireless communications channels between different wireless nodes to make it extremely difficult to succeed in launching attacks. The runtime characteristics that can be changed include packet size, network address, modulation type, and the operating frequency of the channel. In addition, the lifespan for each configuration will be random. To reduce the overhead in switching between two consecutive configurations, we use two radio channels that are selected at random from a finite set of potential channels, one will be designated as an active channel while the second acts as a standby channel. This will harden the wireless communications attacks because the attackers have no clue on what channels are currently being used to exploit existing vulnerability and launch an attack. The experimental results and evaluation show that our approach can tolerate a wide range of attacks (Jamming, DOS and session attacks) against wireless networks."}
{"_id": "1f000e2ca3552bd76f439dbb77c5f7c6c154a662", "title": "VisDB: database exploration using multidimensional visualization", "text": "Discusses how the VisDB system supports the query specification process by representing the result visually. The main idea behind the system stems from the view of relational database tables as sets of multidimensional data where the number of attributes corresponds to the number of dimensions. In such a view, it is often unclear. In this system, each display pixel represents one database item. Pixels are arranged and colored to indicate the item's relevance to a user query and to give a visual impression of the resulting data set.<>"}
{"_id": "5bf59c644b7e9b5162e37b9c275b5b577505bfff", "title": "A High-Performance Drive Circuit for All Solid-State Marx Generator", "text": "In recent years, all solid-state Marx generators have been found more and more extensive applications in industrial, environmental, and biological fields. All solid-state Marx generators have many requirements for their drive circuits, such as good synchrony for driving signals at different stages, fine isolation between control signals and the main circuits, adjustable pulsewidths and frequencies, and good driving abilities. This paper proposes a high-performance drive circuit for a 24-stage Marx generator based on IGBTs. A half-bridge circuit using IR2110 outputs a high-current turning-on drive signal (positive) and a turning-off drive signal (negative) with adjustable dead time. The control drivers are input to the common primary side of 24 nanocrystalline magnetic transformers, which isolate the control circuit and the main circuit. Through gate circuits at the second sides of the magnetic cores, the turning-on drive signal charges 24 gate-emitter capacitors to required voltages and consequently all IGBTs move into on state until the turning-off signal arrives. Similarly, the negative turning-off drive signal charges all gate-emitter capacitors to a negative voltage which ensures all IGBTs stay in the off state. Therefore, the pulsewidth is determined by the phase difference between the turning-on and turning-off drive signals. Equipped with this drive circuit, the 24-stage Marx generator is able to produce stable high voltage pulse with a peak value of -9.6 kV, PRF $0.05\\sim 5$ kHz, and different pulsewidths. In this paper, the design details and experimental confirmation of the proposed drive circuit are illustrated."}
{"_id": "ef8af16b408a7c78ab0780fe419d37130f2efe4c", "title": "New classes of miniaturized planar Marchand baluns", "text": "Three new classes of miniaturized Marchand balun are defined based on the synthesis of filter prototypes. They are suitable for mixed lumped-distributed planar realizations with small size resulting from transmission-line resonators being a quarter-wavelength long at frequencies higher than the passband center frequency. Each class corresponds to an S-plane bandpass prototype derived from the specification of transmission zero locations. A tunable 50:100-/spl Omega/ balun is realized at 1 GHz to demonstrate the advantages of the approach presented here."}
{"_id": "73e25148341c7b19a0bac423a36c88f6d3da8530", "title": "A multichannel CSMA MAC protocol for multihop wireless networks", "text": "We describe a new carrier\u2013sense multiple access (CSMA) protocol for multihop wireless networks, sometimes also called ad hoc networks. The CSMA protocol divides the available bandwidth into several channels and selects an id le channel randomly for packet transmission. It also employs a notion of \u201csoft\u201d channel reservation as it gives preference to the channel that was used for the last successful transmission. We show via simulations that this multichannel CSMA protocol provides a higher throughput compared to its single channel counterpart by reducing the packet loss due to collisions. We also show that the use of channel reservation provides better performance than multichannel CSMA with purely random idle channel selection."}
{"_id": "8968a106e831d668fc44f2083f0abebbcf86f574", "title": "DemNet: A Convolutional Neural Network for the detection of Alzheimer's Disease and Mild Cognitive Impairment", "text": "The early diagnosis of Alzheimer's Disease (AD) and its prodromal form, Mild Cognitive Impairment (MCI), has been the subject of extensive research in recent years. Some recent studies have shown promising results in the diagnosis of AD and MCI using structural Magnetic Resonance Imaging (MRI) scans. In this paper, we propose the use of a Convolutional Neural Network (CNN) in the detection of AD and MCI. In particular, we modified the 16-layered VGGNet for the 3-way classification of AD, MCI and Healthy Controls (HC) on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset achieving an overall accuracy of 91.85% and outperforming several classifiers from other studies."}
{"_id": "b44c4d7825747205f5a8feffbc4955ce11eaf69d", "title": "Thoraco-omphalopagus conjoined twins in Chamois-coloured domestic goat kids.", "text": "Conjoined twins have been observed in a wide range of mammalian and non-mammalian species; they are considered to be more common in bovine, less frequent in sheep and pig and extremely rare in horse and goat. A pair of female conjoined twins was delivered from a 2-year-old Chamois-coloured domestic goat. Post-mortem examination revealed two identical and symmetrical twins, fused from the manubrium sterni to the region just caudal to the umbilicus. The rib cages were conjoined in the ventral plane with a single set of pericardial, pleural and peritoneal cavities. Internal examination revealed the presence of a common diaphragm and a single enlarged liver. Within a single central pericardium, two malformed hearts were present. Reports on this type of congenital duplication in goats have not been found in the literature. Thoracopagus and thoraco-omphalopagus are the most common types of conjoined twins in human beings and are associated with the highest mortality because of the frequent incidence of complex cardiac anatomy."}
{"_id": "a7474af315a38fc6d1f87945fc9ea70eae427ead", "title": "Advertising and Consumers' Communications", "text": "U recently, brand identities were built by firms via brand image advertising. However, the flourishing consumer communication weakened the firms\u2019 grip on their brands. The interaction between advertising and consumer communications and their joint impact on brand identity is the focal point of this paper. We present a model in which consumer preference for functional attributes may correlate with the identity they desire to project of themselves. This correlation is known to the firm but not to the consumers. Both the firm and the consumers can communicate their desired brand identity, although the actual brand identity is determined endogeneously by the composition of consumers who purchase it (i.e., what types of people consume the brand). We find that sometimes the firm can strengthen the identity of its brand by refraining from advertising. This result is based on the following intermediate finding: advertising can diminish the endogeneous informativeness of consumer communications by making it one-sided. Furthermore, it turns out that refraining from brand image advertising may be optimal for the firm when the product is especially well positioned to create a strong identity\u2014i.e., when consumer preferences for functional and self-expressive attributes are highly correlated."}
{"_id": "2d09d2343f319624d80cfd11c993a4b666958c3d", "title": "Extreme visualization: squeezing a billion records into a million pixels", "text": "Database searches are usually performed with query languages and form fill in templates, with results displayed in tabular lists. However, excitement is building around dynamic queries sliders and other graphical selectors for query specification, with results displayed by information visualization techniques. These filtering techniques have proven to be effective for many tasks in which visual presentations enable discovery of relationships, clusters, outliers, gaps, and other patterns. Scaling visual presentations from millions to billions of records will require collaborative research efforts in information visualization and database management to enable rapid aggregation, meaningful coordinated windows, and effective summary graphics. This paper describes current and proposed solutions (atomic, aggregated, and density plots) that facilitate sense-making for interactive visual exploration of billion record data sets."}
{"_id": "57bf84b70746c195eaab1ffbc0102673704cc25c", "title": "Discovering interpretable geo-social communities for user behavior prediction", "text": "Social community detection is a growing field of interest in the area of social network applications, and many approaches have been developed, including graph partitioning, latent space model, block model and spectral clustering. Most existing work purely focuses on network structure information which is, however, often sparse, noisy and lack of interpretability. To improve the accuracy and interpretability of community discovery, we propose to infer users' social communities by incorporating their spatiotemporal data and semantic information. Technically, we propose a unified probabilistic generative model, User-Community-Geo-Topic (UCGT), to simulate the generative process of communities as a result of network proximities, spatiotemporal co-occurrences and semantic similarity. With a well-designed multi-component model structure and a parallel inference implementation to leverage the power of multicores and clusters, our UCGT model is expressive while remaining efficient and scalable to growing large-scale geo-social networking data. We deploy UCGT to two application scenarios of user behavior predictions: check-in prediction and social interaction prediction. Extensive experiments on two large-scale geo-social networking datasets show that UCGT achieves better performance than existing state-of-the-art comparison methods."}
{"_id": "f742a220109064ae379a73f6c299ed1b2ca76609", "title": "Mobile Phone Use and Sleep Quality and Length in College Students", "text": "Proper sleep length and quality are essential for physical and mental health and have found to be related to a variety of negative outcomes (Brown, Buboltz, &Soper, 2002). College is a time,a transition for individuals where they begin laying a foundation for their future and acquiring sufficient sleep is of great value. College students are recognized as one of the most sleep-deprived groups, but also, as one of the most technologically-oriented population. Due to this combination, college students\u2019 sleep habits and mobile phone use habits havebegun to receiveattention. The purpose of this study was to examine the relationship between sleep quality/length and mobile phone use among college students. Three hundred and fifty college students voluntarily participated by completing the Sleep Quality Index (SQI), The Sleep Habits Survey,the Mobile Phone Problem Use Scale(MPPUS), the SMS Problem Use Scale (SMS-PUDQ) and the Mini IPIP. Results indicate that various aspects of mobile phone use such as problem mobile phone use, addictive text messaging, problematic texting, and pathological texting are related to sleep quality, but not sleep length. Additionally, extraverted individuals were found to engage in greater mobile phone use and problematic text messaging."}
{"_id": "d0d5c4fc59d9e82318f33c75346de6c4f828a7e0", "title": "Energy Big Data Analytics and Security: Challenges and Opportunities", "text": "The limited available fossil fuels and the call for sustainable environment have brought about new technologies for the high efficiency in the use of fossil fuels and introduction of renewable energy. Smart grid is an emerging technology that can fulfill such demands by incorporating advanced information and communications technology (ICT). The pervasive deployment of the advanced ICT, especially the smart metering, will generate big energy data in terms of volume, velocity, and variety. The generated big data can bring huge benefits to the better energy planning, efficient energy generation, and distribution. As such data involve end users' privacy and secure operation of the critical infrastructure, there will be new security issues. This paper is to survey and discuss new findings and developments in the existing big energy data analytics and security. Several taxonomies have been proposed to express the intriguing relationships of various variables in the field."}
{"_id": "89b99f6eb70193b64fd9748b63d773b613b5ed7c", "title": "A Feature-Based, Robust, Hierarchical Algorithm for Registering Pairs of Images of the Curved Human Retina", "text": "\u00d0This paper describes a robust hierarchical algorithm for fully-automatic registration of a pair of images of the curved human retina photographed by a fundus microscope. Accurate registration is essential for mosaic synthesis, change detection, and design of computer-aided instrumentation. Central to the new algorithm is a 12-parameter interimage transformation derived by modeling the retina as a rigid quadratic surface with unknown parameters, imaged by an uncalibrated weak perspective camera. The parameters of this model are estimated by matching vascular landmarks extracted by an algorithm that recursively traces the blood vessel structure. The parameter estimation technique, which could be generalized to other applications, is a hierarchy of models and methods: an initial match set is pruned based on a zeroth order transformation estimated as the peak of a similarity-weighted histogram; a first order, affine transformation is estimated using the reduced match set and least-median of squares; and the final, second order, 12-parameter transformation is estimated using an M-estimator initialized from the first order estimate. This hierarchy makes the algorithm robust to unmatchable image features and mismatches between features caused by large interframe motions. Before final convergence of the M-estimator, feature positions are refined and the correspondence set is enhanced using normalized sum-of-squared differences matching of regions deformed by the emerging transformation. Experiments involving 3,000 image pairs (I; HPR \u00c2 I; HPR pixels) from 16 different healthy eyes were performed. Starting with as low as 20 percent overlap between images, the algorithm improves its success rate exponentially and has a negligible failure rate above 67 percent overlap. The experiments also quantify the reduction in errors as the model complexities increase. Final registration errors less than a pixel are routinely achieved. The speed, accuracy, and ability to handle small overlaps compare favorably with retinal image registration techniques published in the literature."}
{"_id": "8263a8e5bcdcb7385196d30de828060e33f8441b", "title": "Enhanced differential super class-AB OTA", "text": "A fully differential Super Class AB Operational Transconductance Amplifier (OTA) is presented. To provide additional dynamic current boosting and increased gain-bandwidth product (GBW), not only adaptive biasing techniques has been applied at the differential input stage, but also Local Common Mode Feedback (LCMFB). Additionally, Quasi Floating Gate (QFG) transistors are employed, enhancing the performance of the amplifier. The OTA has been fabricated in a standard 0.5-\u03bcm CMOS process. Simulation results yield a slew rate improvement by factor 86 and a GBW enhancement by factor 16 when compared to the class A OTA while driving the same 70-pF load. Supply voltages are \u00b11-V and the value of the quiescent current is 10 \u03bcA. The overhead in terms of quiescent power, silicon area, and noise level is small."}
{"_id": "03be48912be80303dc5c012896fe77c174829a18", "title": "The neural correlates of maternal and romantic love", "text": "Romantic and maternal love are highly rewarding experiences. Both are linked to the perpetuation of the species and therefore have a closely linked biological function of crucial evolutionary importance. Yet almost nothing is known about their neural correlates in the human. We therefore used fMRI to measure brain activity in mothers while they viewed pictures of their own and of acquainted children, and of their best friend and of acquainted adults as additional controls. The activity specific to maternal attachment was compared to that associated to romantic love described in our earlier study and to the distribution of attachment-mediating neurohormones established by other studies. Both types of attachment activated regions specific to each, as well as overlapping regions in the brain's reward system that coincide with areas rich in oxytocin and vasopressin receptors. Both deactivated a common set of regions associated with negative emotions, social judgment and 'mentalizing', that is, the assessment of other people's intentions and emotions. We conclude that human attachment employs a push-pull mechanism that overcomes social distance by deactivating networks used for critical social assessment and negative emotions, while it bonds individuals through the involvement of the reward circuitry, explaining the power of love to motivate and exhilarate."}
{"_id": "291c9d66cf886ed27d89dc03d42a4612b66ab007", "title": "Improved Optimization for the Robust and Accurate Linear Registration and Motion Correction of Brain Images", "text": "Linear registration and motion correction are important components of structural and functional brain image analysis. Most modern methods optimize some intensity-based cost function to determine the best registration. To date, little attention has been focused on the optimization method itself, even though the success of most registration methods hinges on the quality of this optimization. This paper examines the optimization process in detail and demonstrates that the commonly used multiresolution local optimization methods can, and do, get trapped in local minima. To address this problem, two approaches are taken: (1) to apodize the cost function and (2) to employ a novel hybrid global-local optimization method. This new optimization method is specifically designed for registering whole brain images. It substantially reduces the likelihood of producing misregistrations due to being trapped by local minima. The increased robustness of the method, compared to other commonly used methods, is demonstrated by a consistency test. In addition, the accuracy of the registration is demonstrated by a series of experiments with motion correction. These motion correction experiments also investigate how the results are affected by different cost functions and interpolation methods."}
{"_id": "5628d1eaa67da88d5574b9485071d67f5503d295", "title": "The rises and falls of disconnection syndromes.", "text": "In a brain composed of localized but connected specialized areas, disconnection leads to dysfunction. This simple formulation underlay a range of 19th century neurological disorders, referred to collectively as disconnection syndromes. Although disconnectionism fell out of favour with the move against localized brain theories in the early 20th century, in 1965, an American neurologist brought disconnection to the fore once more in a paper entitled, 'Disconnexion syndromes in animals and man'. In what was to become the manifesto of behavioural neurology, Norman Geschwind outlined a pure disconnectionist framework which revolutionized both clinical neurology and the neurosciences in general. For him, disconnection syndromes were higher function deficits that resulted from white matter lesions or lesions of the association cortices, the latter acting as relay stations between primary motor, sensory and limbic areas. From a clinical perspective, the work reawakened interest in single case studies by providing a useful framework for correlating lesion locations with clinical deficits. In the neurosciences, it helped develop contemporary distributed network and connectionist theories of brain function. Geschwind's general disconnectionist paradigm ruled clinical neurology for 20 years but in the late 1980s, with the re-emergence of specialized functional roles for association cortex, the orbit of its remit began to diminish and it became incorporated into more general models of higher dysfunction. By the 1990s, textbooks of neurology were devoting only a few pages to classical disconnection theory. Today, new techniques to study connections in the living human brain allow us, for the first time, to test the classical formulation directly and broaden it beyond disconnections to include disorders of hyperconnectivity. In this review, on the 40th anniversary of Geschwind's publication, we describe the changing fortunes of disconnection theory and adapt the general framework that evolved from it to encompass the entire spectrum of higher function disorders in neurology and psychiatry."}
{"_id": "70b0a9d2920ad66dc89b497d6548d9933fddf841", "title": "EEG-correlated fMRI of human alpha activity", "text": "Electroencephalography-correlated functional magnetic resonance imaging (EEG/fMRI) can be used to identify blood oxygen level-dependent (BOLD) signal changes associated with both physiological and pathological EEG events. Here, we implemented continuous and simultaneous EEG/fMRI to identify BOLD signal changes related to spontaneous power fluctuations in the alpha rhythm (8-12 Hz), the dominant EEG pattern during relaxed wakefulness. Thirty-two channels of EEG were recorded in 10 subjects during eyes-closed rest inside a 1.5-T magnet resonance (MR) scanner using an MR-compatible EEG recording system. Functional scanning by echoplanar imaging covered almost the entire cerebrum every 4 s. Off-line MRI artifact subtraction software was applied to obtain continuous EEG data during fMRI acquisition. The average alpha power over 1-s epochs was derived at several electrode positions using a Fast Fourier Transform. The power time course was then convolved with a canonical hemodynamic response function, down-sampled, and used for statistical parametric mapping of associated signal changes in the image time series. At all electrode positions studied, a strong negative correlation of parietal and frontal cortical activity with alpha power was found. Conversely, only sparse and nonsystematic positive correlation was detected. The relevance of these findings is discussed in view of the current theories on the generation and significance of the alpha rhythm and the related functional neuroimaging findings."}
{"_id": "548b32cd88ec3ae2454a3af4adacb42710eaae10", "title": "An evaluation framework for viable business models for m-commerce in the information technology sector", "text": "This paper presents a study of the characteristics of viable business models in the field of Mobile Commerce (m-commerce). Mobility has given new dimensions to the way commerce works. All over the world various stakeholder organisations are consistently probing into the areas where m-commerce can be exploited and can generate revenue or value for them, even though some of those implementations are making the business environment more complex and uncertain. This paper proposes a viable business model evaluation framework, based on the VISOR model, which helps in determining the sustainability capabilities of a business model. Four individual cases were conducted with diverse organisations in the Information Technology sector. The four cases discussed dealt with mobile business models and the primary data was collected via semi structured interviews, supplemented by an extensive range of secondary data. A cross-case comparative data analysis was used to review the patterns of different viable business components across the four cases and, finally, the findings and conclusions of the study are presented."}
{"_id": "cf8ed2793bc6aec88da5306fe2de560dc0be9b15", "title": "Delving into adversarial attacks on deep policies", "text": "Adversarial examples have been shown to exist for a variety of deep learning architectures. Deep reinforcement learning has shown promising results on training agent policies directly on raw inputs such as image pixels. In this paper we present a novel study into adversarial attacks on deep reinforcement learning polices. We compare the effectiveness of the attacks using adversarial examples vs. random noise. We present a novel method for reducing the number of times adversarial examples need to be injected for a successful attack, based on the value function. We further explore how re-training on random noise and FGSM perturbations affects the resilience against adversarial examples."}
{"_id": "536f0d0ce5e7d5d3622220da1d5f0cded04436d0", "title": "Columbia-IBM news video story segmentation in trecvid 2004", "text": "In this technical report, we give an overview our technical developments in the story segmentation task in TRECVID 2004. Among them, we propose an information-theoretic framework, visual cue cluster construction (VC), to automatically discover adequate mid-level features. The problem is posed as mutual information maximization, through which optimal cue clusters are discovered to preserve the highest information about the semantic labels. We extend the Information Bottleneck framework to high-dimensional continuous features and further propose a projection method to map each video into probabilistic memberships over all the cue clusters. The biggest advantage of the proposed approach is to remove the dependence on the manual process in choosing the mid-level features and the huge labor cost involved in annotating the training corpus for training the detector of each mid-level feature. When tested in TRECVID 2004 news video story segmentation, the proposed approach achieves promising performance gain over representations derived from conventional clustering techniques and even the mid-level features selected manually; meanwhile, it achieved one of the top performances, F1=0.65, close to the highest performance, F1=0.69, by other groups. We also experiment with other promising visual features and continue investigating effective prosody features. The introduction of post-processing also provides practical improvements. Furthermore, the fusion from other modalities, such as speech prosody features and ASR-based segmentation scores are significant and have been confirmed again in this experiment."}
{"_id": "cf43fcad93c5dc1fe2059be5203cf3d9ab450ebf", "title": "Detection and Discrimination of Open-Phase Fault in Permanent Magnet Synchronous Motor Drive System", "text": "Open-phase fault in permanent magnet synchronous motor (PMSM) drive system occurs as the phase winding is disconnected or one leg of the inverter bridge fails. It may generate large electromagnetic torque ripple and serious mechanical vibration. Therefore, a rapid fault detection method is greatly required to identify this fault at early stage and prevent damage to the system. This paper develops a method of the open-phase fault detection and discrimination for the PMSM drive system based on the zero-sequence voltage components, in which the discrimination of the fault types, namely internal stator winding failure and switches failure of the inverter is realized. Then, appropriate fault-tolerant measures may be taken according to the different fault types. The experimental platform is established, and the experimental results verify the effectiveness of the proposed method, showing that not only the open-phase fault can be rapidly detected, but also the fault type can be effectively discriminated."}
{"_id": "1562ff06a7e75adc97cbc5ac7142eebdbaf098d7", "title": "Google App Engine", "text": "Cloud computing becomes more popular day by day. The \u201cdream\u201d of users to \u201cto get rid\u201d of administrators, huge investments on expensive hardware and human resources becomes true. The goal of this paper is to present a general overview of the new coming service offered by Google company Google App Engine. \u201cWhat is it for?\u201d, \u201cWhat can I do with the system?\u201d and \u201cHow does it work?\u201d are some of the questions on which this paper will try to give an answer. In addition, brief overview and description of \u201ccloud computing\u201d and comparison of Google App Engine with other similar products will be provided to the reader as well."}
{"_id": "6c50395aef0b90cca822bb595eba3d722e60896f", "title": "The utility of low frequency heart rate variability as an index of sympathetic cardiac tone: a review with emphasis on a reanalysis of previous studies.", "text": "This article evaluates the suitability of low frequency (LF) heart rate variability (HRV) as an index of sympathetic cardiac control and the LF/high frequency (HF) ratio as an index of autonomic balance. It includes a comprehensive literature review and a reanalysis of some previous studies on autonomic cardiovascular regulation. The following sources of evidence are addressed: effects of manipulations affecting sympathetic and vagal activity on HRV, predictions of group differences in cardiac autonomic regulation from HRV, relationships between HRV and other cardiac parameters, and the theoretical and mathematical bases of the concept of autonomic balance. Available data challenge the interpretation of the LF and LF/HF ratio as indices of sympathetic cardiac control and autonomic balance, respectively, and suggest that the HRV power spectrum, including its LF component, is mainly determined by the parasympathetic system."}
{"_id": "c8af8073f2dd942be364d2fe9fdf8e204131be4a", "title": "Solving ill-posed inverse problems using iterative deep neural networks", "text": "We propose a partially learned approach for the solution of ill posed inverse problems with not necessarily linear forward operators. The method builds on ideas from classical regularization theory and recent advances in deep learning to perform learning while making use of prior information about the inverse problem encoded in the forward operator, noise model and a regularizing functional. The method results in a gradient-like iterative scheme, where the \u201cgradient\u201d component is learned using a convolutional network that includes the gradients of the data discrepancy and regularizer as input in each iteration. We present results of such a partially learned gradient scheme on a non-linear tomographic inversion problem with simulated data from both the Sheep-Logan phantom as well as a head CT. The outcome is compared against filtered backprojection and total variation reconstruction and the proposed method provides a 5.4 dB PSNR improvement over the total variation reconstruction while being significantly faster, giving reconstructions of 512\u00d7 512 pixel images in about 0.4 s using a single graphics processing unit (GPU)."}
{"_id": "ce636ad263167b5612c4a2a85376855f39de9eb2", "title": "Two-stream Attentive CNNs for Image Retrieval", "text": "In content-based image retrieval, the most challenging (and ambiguous) part is to define the similarity between images. For the human-being, such similarity can be defined with respect to where they pay attention to and what semantic attributes they understand. Inspired by this fact, this paper presents two-stream attentive CNNs for image retrieval. As the human-being does, the proposed network has two streams that simultaneously handle two tasks. The Main stream focuses on extracting discriminative visual features that are tightly correlated with semantic attributes. Meanwhile, the Auxiliary stream aims to facilitate the main stream by redirecting the feature extraction operation mainly to the image content that human may pay attention to. By fusing these two streams into the Main and Auxiliary CNNs (MAC), image similarity can be computed as the human-being does by reserving the conspicuous content and suppressing the irrelevant regions. Extensive experiments show that the proposed model achieves impressive performance in image retrieval on four public datasets."}
{"_id": "7d1b596463fe21fea2ec06cc7460145ebc056495", "title": "A Hierarchical Recurrent Neural Network for Symbolic Melody Generation", "text": "In recent years, neural networks have been used to generate symbolic melodies. However, the long-term structure in the melody has posed great difficulty for designing a good model. In this paper, we present a hierarchical recurrent neural network for melody generation, which consists of three Long-Short-Term-Memory (LSTM) subnetworks working in a coarse-to-fine manner along time. Specifically, the three subnetworks generate bar profiles, beat profiles and notes in turn, and the output of the high-level subnetworks are fed into the low-level subnetworks, serving as guidance for generating the finer time-scale melody components in low-level subnetworks. Two human behavior experiments demonstrate the advantage of this structure over the single-layer LSTM which attempts to learn all hidden structures in melodies. Compared with the state-of-the-art models MidiNet (Yang, Chou, and Yang 2017) and MusicVAE (Roberts et al. 2018), the hierarchical recurrent neural network produces better melodies evaluated by humans. Introduction Automatic music generation using neural networks has attracted much attention. There are two classes of music generation approaches, symbolic music generation (Hadjeres, Pachet, and Nielsen 2017)(Waite et al. 2016)(Yang, Chou, and Yang 2017) and audio music generation (van den Oord et al. )(Mehri et al. 2016). In this study, we focus on symbolic melody generation, which requires learning from sheet music. Many music genres such as pop music consist of melody and harmony. Since usually beautiful harmonies can be ensured by using legitimate chord progressions which have been summarized by musicians, we only focus on melody generation, similar to some recent studies (Waite et al. 2016)(Yang, Chou, and Yang 2017)(Colombo, Seeholzer, and Gerstner 2017)(Roberts et al. 2018). This greatly simplifies the melody generation problem. Melody is a linear succession of musical notes along time. It has both short time scale such as notes and long time scale such as phrases and movements, which makes the melody generation a challenging task. Existing methods generate pitches and rhythm simultaneously (Waite et al. 2016) or sequentially (Chu, Urtasun, and Fidler 2016) using Recurrent Neural Networks (RNNs), but they usually work on the note scale without explicitly modeling the larger time-scale components such as rhythmic patterns. It is difficult for them to learn long-term dependency or structure in melody. Theoretically, an RNN can learn the temporal structure of any length in the input sequence, but in reality, as the sequence gets longer it is very hard to learn long-term structure. Different RNNs have different learning capability, e.g., LSTM (Hochreiter and Schmidhuber 1997) performs much better than the simple Elman network. But any model has a limit for the length of learnable structure, and this limit depends on the complexity of the sequence to be learned. To enhance the learning capability of an RNN, one approach is to invent a new structure. In this work we take another approach: increase the granularity of the input. Since each symbol in the sequence corresponds to longer segment than the original representation, the same model would learn longer temporal structure. To implement this idea, we propose a Hierarchical Recurrent Neural Network (HRNN) for learning melody. 
It consists of three LSTM-based sequence generators \u2014 Bar Layer, Beat Layer and Note Layer. The Bar Layer and Beat Layer are trained to generate bar profiles and beat profiles, which are designed to represent the high-level temporal features of melody. The Note Layer is trained to generate melody conditioned on the bar profile sequence and beat profile sequence output by the Bar Layer and Beat Layer. By learning on different time scales, the HRNN can grasp the general regular patterns of human composed melodies in different granularities, and generate melody with realistic long-term structures. This method follows the general idea of granular computing (Bargiela and Pedrycz 2012), in which different resolutions of knowledge or information is extracted and represented for problem solving. With the shorter profile sequences to guide the generation of note sequence, the difficulty of generating note sequence with well-organized structure is alleviated. Related Work Melody Generation with Neural Networks There is a long history of generating melody with RNNs. A recurrent autopredictive connectionist network called CONCERT is used to compose music (Mozer 1994). With a set of composition rules as constraints to evaluate melodies, an evolving neural network is employed to create melodies (Chen and Miikkulainen 2001). As an important form of RNN, LSTM (Hochreiter and Schmidhuber 1997) is used to capture the global music structure and improve the quality of the generated music (Eck and Schmidhuber 2002). Boulanger-Lewandowski, Bengio, and Vincent explore complex polyphonic music generation with an RNN-RBM model (Boulanger-Lewandowski, Bengio, and Vincent 2012). Lookback RNN and Attention RNN are proposed to tackle the problem of creating melody\u2019s long-term structure (Waite et al. 2016). The Lookback RNN introduces a handcrafted lookback feature that makes it easier for the model to repeat sequences, while the Attention RNN leverages an attention mechanism to learn longer-term structures. Inspired by convolution, two variants of RNN are employed to attain transposition invariance (Johnson 2017). To model the relation between rhythm and melody flow, a melody is divided into pitch sequence and duration sequence and these two sequences are processed in parallel (Colombo et al. 2016). This approach is further extended in (Colombo, Seeholzer, and Gerstner 2017). A hierarchical VAE is employed to learn the distribution of melody pieces in (Roberts et al. 2018), the decoder of which is similar to our model. The major difference is that the higher layer of its decoder uses the automatically learned representation of bars, while our higher layers use predefined representation of bars and beats which makes the learning problem easier. Generative Adversarial Networks (GANs) have also been used to generate melodies. For example, RNN-based GAN (Mogren 2016) and CNN-based GAN (Yang, Chou, and Yang 2017) are employed to generate melodies, respectively. However, the generated melodies also lack realistic long-term structures. Some models are proposed to generate multi-track music. A 4-layer LSTM is employed to produce the key, press, chord and drum of pop music separately (Chu, Urtasun, and Fidler 2016). With pseudo-Gibbs sampling, a model can generate highly convincing chorales in the style of Bach (Colombo, Seeholzer, and Gerstner 2017). Three GANs for symbolic-domain multi-track music generation were proposed (Dong et al. 2018). 
An end-to-end melody and arrangement generation framework XiaoIce Band was proposed to generate a melody track with accompanying tracks with RNN (Zhu et al. 2018). Hierarchical and Multiple Time Scales Networks The idea of hierarchical or multiple time scales has been used in neural network design, especially in the area of natural language processing. The Multiple Timescale Recurrent Neural Network (MTRNN) realizes the self-organization of a functional hierarchy with two types of neurons \u201cfast\u201d unit and \u201cslow\u201d unit (Yamashita and Tani 2008). Then it is shown that the MTRNN can acquire the capabilities to recognize, generate, and correct sentences in a hierarchical way: characters grouped into words, and words into sentences (Hinoshita et al. 2011). An LSTM auto-encoder is trained to preserve and reconstruct paragraphs by hierarchically building embeddings of words, sentences and paragraphs (Li, Luong, and Jurafsky 2015)."}
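A minimal PyTorch sketch of the coarse-to-fine conditioning described for the HRNN: bar-level states are repeated to the beat rate and fed to the beat LSTM, whose states are in turn repeated to the note rate and fed to the note LSTM. All dimensions, the vocabulary size and the 4-beats-per-bar/4-steps-per-beat ratios are assumptions for illustration.

```python
import torch
import torch.nn as nn

class HRNNSketch(nn.Module):
    """Coarse-to-fine stack: a bar-level LSTM conditions a beat-level LSTM,
    which in turn conditions the note-level LSTM."""
    def __init__(self, bar_dim=8, beat_dim=8, note_vocab=130, hidden=64):
        super().__init__()
        self.bar_lstm = nn.LSTM(bar_dim, hidden, batch_first=True)
        self.beat_lstm = nn.LSTM(beat_dim + hidden, hidden, batch_first=True)
        self.note_lstm = nn.LSTM(note_vocab + hidden, hidden, batch_first=True)
        self.note_out = nn.Linear(hidden, note_vocab)

    def forward(self, bars, beats, notes):
        h_bar, _ = self.bar_lstm(bars)                  # (B, n_bars, H)
        h_bar = h_bar.repeat_interleave(4, dim=1)       # 4 beats per bar
        h_beat, _ = self.beat_lstm(torch.cat([beats, h_bar], -1))
        h_beat = h_beat.repeat_interleave(4, dim=1)     # 4 steps per beat
        h_note, _ = self.note_lstm(torch.cat([notes, h_beat], -1))
        return self.note_out(h_note)                    # next-note logits

model = HRNNSketch()
logits = model(torch.randn(1, 2, 8),     # 2 bar profiles
               torch.randn(1, 8, 8),     # 8 beat profiles
               torch.randn(1, 32, 130))  # 32 note steps (one-hot-like)
print(logits.shape)                      # (1, 32, 130)
```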
{"_id": "293a18fb1624f1882a067dc4be846fe98b8ebb95", "title": "Less guilty by reason of adolescence: developmental immaturity, diminished responsibility, and the juvenile death penalty.", "text": "The authors use a developmental perspective to examine questions about the criminal culpability of juveniles and the juvenile death penalty. Under principles of criminal law, culpability is mitigated when the actor's decision-making capacity is diminished, when the criminal act was coerced, or when the act was out of character. The authors argue that juveniles should not be held to the same standards of criminal responsibility as adults, because adolescents' decision-making capacity is diminished, they are less able to resist coercive influence, and their character is still undergoing change. The uniqueness of immaturity as a mitigating condition argues for a commitment to a legal environment under which most youths are dealt with in a separate justice system and none are eligible for capital punishment."}
{"_id": "93a3ef1b3c6ea02640ece9aabb0d92402838b67a", "title": "Alzheimer disease therapy\u2014moving from amyloid-\u03b2 to tau", "text": "Disease-modifying treatments for Alzheimer disease (AD) have focused mainly on reducing levels of amyloid-\u03b2 (A\u03b2) in the brain. Some compounds have achieved this goal, but none has produced clinically meaningful results. Several methodological issues relating to clinical trials of these agents might explain this failure; an additional consideration is that the amyloid cascade hypothesis\u2014which places amyloid plaques at the heart of AD pathogenesis\u2014does not fully integrate a large body of data relevant to the emergence of clinical AD. Importantly, amyloid deposition is not strongly correlated with cognition in multivariate analyses, unlike hyperphosphorylated tau, neurofibrillary tangles, and synaptic and neuronal loss, which are closely associated with memory deficits. Targeting tau pathology, therefore, might be more clinically effective than A\u03b2-directed therapies. Furthermore, numerous immunization studies in animal models indicate that reduction of intracellular levels of tau and phosphorylated tau is possible, and is associated with improved cognitive performance. Several tau-related vaccines are in advanced preclinical stages and will soon enter clinical trials. In this article, we present a critical analysis of the failure of A\u03b2-directed therapies, discuss limitations of the amyloid cascade hypothesis, and suggest the potential value of tau-targeted therapy for AD."}
{"_id": "9eb08dca8edca20f644b1e8dc250e5116eb10136", "title": "What maintains parental support for vaccination when challenged by anti-vaccination messages? A qualitative study.", "text": "This study sought to explore how parents respond to competing media messages about vaccine safety. Six focus groups with mothers of infants were shown television vignettes of typical pro- and anti-vaccination claims. Thematic analysis of transcripts was undertaken. Mothers expressed surprise and concern about alleged vaccine risks but quickly reinstated their support for vaccination by deference to authority figures; type-casting immunisation opponents; and notions of anticipatory regret, good parenting and social responsibility. We conclude that personal experiences, value systems and level of trust in health professionals are fundamental to parental decision making about vaccination. Vaccination advocacy should increase the focus on matters of process such as maintaining trust and public confidence, particularly in health professionals. Stories about people affected by vaccine-preventable diseases need to re-enter the public discourse."}
{"_id": "6049d1e5db61afad835f10be445cf08d91f0dd76", "title": "SuggestBot: using intelligent task routing to help people find work in wikipedia", "text": "Member-maintained communities ask their users to perform tasks the community needs. From Slashdot, to IMDb, to Wikipedia, groups with diverse interests create community-maintained artifacts of lasting value (CALV) that support the group's main purpose and provide value to others. Said communities don't help members find work to do, or do so without regard to individual preferences, such as Slashdot assigning meta-moderation randomly. Yet social science theory suggests that reducing the cost and increasing the personal value of contribution would motivate members to participate more.We present SuggestBot, software that performs intelligent task routing (matching people with tasks) in Wikipedia. SuggestBot uses broadly applicable strategies of text analysis, collaborative filtering, and hyperlink following to recommend tasks. SuggestBot's intelligent task routing increases the number of edits by roughly four times compared to suggesting random articles. Our contributions are: 1) demonstrating the value of intelligent task routing in a real deployment; 2) showing how to do intelligent task routing; and 3) sharing our experience of deploying a tool in Wikipedia, which offered both challenges and opportunities for research."}
{"_id": "40f04909aaa24b09569863aa71e76fe3d284cdb0", "title": "Application-Managed Flash", "text": "In flash storage, an FTL is a complex piece of code that resides completely inside the storage device and is provided by the manufacturer. Its principal virtue is providing interoperability with conventional HDDs. However, this virtue is also its biggest impediment in reaching the full performance of the underlying flash storage. We propose to refactor the flash storage architecture so that it relies on a new block I/O interface which does not permit overwriting of data without intervening erasures.We demonstrate how high-level applications, in particular file systems, can deal with this restriction efficiently by employing append-only segments. This refactoring dramatically reduces flash management overhead and improves performance of applications, such as file systems and databases, by permitting them to directly manage flash storage. Our experiments on a machine with the new block I/O interface show that DRAM in the flash controller is reduced by 128X and the performance of the file system improves by 80% over conventional SSDs."}
{"_id": "c031b30e37912448953c07b8761136e9533ce7b6", "title": "Expected Stock Returns and Volatility", "text": "This paper examines the relation between stock returns and stock market volatility. We find evidence that the expected market risk premium (the expected return on a stock portfolio minus the Treasury bill yield) is positively related to the predictable volatility of stock returns. There is also evidence that unexpected stock market returns are negatively related to the unexpected change in the volatility of stock returns. This negative relation provides indirect evidence of a positive relation between expected risk premiums and volatility. Disciplines Finance | Finance and Financial Management Comments At the time of publication, author Robert F. Stambaugh was affiliated with the University of Chicago. Currently, he is a faculty member at the Wharton School at the University of Pennsylvania. This journal article is available at ScholarlyCommons: http://repository.upenn.edu/fnce_papers/363 Expected Stock Returns and Volatility by Kenneth R. French* G. William Schwert** Robert F. Starnbaugh*"}
{"_id": "87eeb5622d8fbe4dca5f1c9b4190f719818c4d6e", "title": "Rated aspect summarization of short comments", "text": "Web 2.0 technologies have enabled more and more people to freely comment on different kinds of entities (e.g. sellers, products, services). The large scale of information poses the need and challenge of automatic summarization. In many cases, each of the user-generated short comments comes with an overall rating. In this paper, we study the problem of generating a ``rated aspect summary'' of short comments, which is a decomposed view of the overall ratings for the major aspects so that a user could gain different perspectives towards the target entity. We formally define the problem and decompose the solution into three steps. We demonstrate the effectiveness of our methods by using eBay sellers' feedback comments. We also quantitatively evaluate each step of our methods and study how well human agree on such a summarization task. The proposed methods are quite general and can be used to generate rated aspect summary automatically given any collection of short comments each associated with an overall rating."}
{"_id": "6da97aa50c0c4f6d7473b607f872cd6bcb940c60", "title": "Deep Learning as Feature Encoding for Emotion Recognition", "text": "Deep learning is popular as an end-to-end framework extracting the prominent features and performing the classification also. In this paper, we extensively investigate deep networks as an alternate to feature encoding technique of lowlevel descriptors for emotion recognition on the benchmark EmoDB dataset. Fusion performance with such obtained encoded features with other available features is also investigated. Highest performance to date in the literature is observed."}
{"_id": "26ca2ce169a9844dda7dbcb90c2149b3424f46a5", "title": "Camera/Laser/GPS Fusion Method for Vehicle Positioning Under Extended NIS-Based Sensor Validation", "text": "Vehicle localization and autonomous navigation consist of precisely positioning a vehicle on road by the use of different kinds of sensors. This paper presents a vehicle localization method by integrating a stereoscopic system, a laser range finder (LRF) and a global localization sensor GPS. For more accurate LRF-based vehicle motion estimation, an outlier-rejection invariant closest point method (ICP) is proposed to reduce the matching ambiguities of scan alignment. The fusion approach starts by a sensor selection step that is applied to validate the coherence of the observations from different sensors. Then the information provided by the validated sensors is fused with an unscented information filter. To demonstrate its performance, the proposed multisensor localization method is tested with real data and evaluated by RTK-GPS data as ground truth. The fusion approach also facilitates the incorporation of more sensors if needed."}
{"_id": "78ea9dea4915d5b2a86eb1854145c91ac2c8aba5", "title": "mOS: A Reusable Networking Stack for Flow Monitoring Middleboxes", "text": "Stateful middleboxes, such as intrusion detection systems and application-level firewalls, have provided key functionalities in operating modern IP networks. However, designing an efficient middlebox is challenging due to the lack of networking stack abstraction for TCP flow processing. Thus, middlebox developers often write the complex flow management logic from scratch, which is not only prone to errors, but also wastes efforts for similar functionalities across applications. This paper presents the design and implementation of mOS, a reusable networking stack for stateful flow processing in middlebox applications. Our API allows developers to focus on the core application logic instead of dealing with low-level packet/flow processing themselves. Under the hood, it implements an efficient event system that scales to monitoring millions of concurrent flow events. Our evaluation demonstrates that mOS enables modular development of stateful middleboxes, often significantly reducing development efforts represented by the source lines of code, while introducing little performance overhead in multi-10Gbps network environments."}
{"_id": "fd6a5692f39cd8c18632ea68658282f26b725c63", "title": "An architecture of diversity for commonsense reasoning", "text": "Although computers excel at certain bounded tasks that are difficult for humans, such as solving integrals, they have difficulty performing commonsense tasks that are easy for humans, such as understanding stories. In this Technical Forum contribution, we discuss commonsense reasoning and what makes it difficult for computers. We contend that commonsense reasoning is too hard a problem to solve using any single artificial intelligence technique. We propose a multilevel architecture consisting of diverse reasoning and representation techniques that collaborate and reflect in order to allow the best techniques to be used for the many situations that arise in commonsense reasoning. We present story understanding\u2014specifically, understanding and answering questions about progressively harder children\u2019s texts\u2014as a task for evaluating and scaling up a commonsense reasoning system."}
{"_id": "59e59d3657817c513272aa8096d4e384e6f3dadb", "title": "The role of satellites in 5G", "text": "The next generation of mobile radio communication systems - so called 5G - will provide some major changes to those generations to date. The ability to cope with huge increases in data traffic at reduced latencies and improved quality of user experience together with major reduction in energy usage are big challenges. In addition future systems will need to embody connections to billions of objects - the so called Internet of Things (IoT) which raise new challenges. Visions of 5G are now available from regions across the World and research is ongoing towards new standards. The consensus is a flatter architecture that adds a dense network of small cells operating in the millimetre wave bands and which are adaptable and software controlled. But what place for satellites in such a vision? The paper examines several potential roles for satellite including coverage extension, content distribution, providing resilience, improved spectrum utilisation and integrated signalling systems."}
{"_id": "bbc76f0e50ab96e7318816e24c65fd3459d0497c", "title": "Survey of Pedestrian Detection for Advanced Driver Assistance Systems", "text": "Advanced driver assistance systems (ADASs), and particularly pedestrian protection systems (PPSs), have become an active research area aimed at improving traffic safety. The major challenge of PPSs is the development of reliable on-board pedestrian detection systems. Due to the varying appearance of pedestrians (e.g., different clothes, changing size, aspect ratio, and dynamic shape) and the unstructured environment, it is very difficult to cope with the demanded robustness of this kind of system. Two problems arising in this research area are the lack of public benchmarks and the difficulty in reproducing many of the proposed methods, which makes it difficult to compare the approaches. As a result, surveying the literature by enumerating the proposals one--after-another is not the most useful way to provide a comparative point of view. Accordingly, we present a more convenient strategy to survey the different approaches. We divide the problem of detecting pedestrians from images into different processing steps, each with attached responsibilities. Then, the different proposed methods are analyzed and classified with respect to each processing stage, favoring a comparative viewpoint. Finally, discussion of the important topics is presented, putting special emphasis on the future needs and challenges."}
{"_id": "619e9486c8769f6c80ae60ee52b15da154787fb9", "title": "A study on LIWC categories for opinion mining in Spanish reviews", "text": "With the exponential growth of social media i.e. blogs and social networks, organizations and individual persons are increasingly using the number of reviews of these media for decision making about a product or service. Opinion mining detects whether the emotions of an opinion expressed by a user on Web platforms in natural language, is positive or negative. This paper presents extensive experiments to study the effectiveness of the classification of Spanish opinions in five categories: highly positive, highly negative, positive, negative and neutral, using the combination of the psychological and linguistic features of LIWC. LIWC is a text analysis software that enables the extraction of different psychological and linguistic features from natural language text. For this study, two corpora have been used, one about movies and one about technological products. Furthermore, we have conducted a comparative assessment of the performance of various classification techniques: J48, SMO and BayesNet, using precision, recall and F-measure metrics. All in all, findings have revealed that the positive and negative categories provide better results than the other categories. Finally, experiments on both corpora indicated that SMO produces better results than BayesNet and J48 algorithms, obtaining an F-measure of 90.4% and 87.2% in each domain."}
{"_id": "fb3ba5c7446fe8767dd9da5f6228f8bb15b5396b", "title": "Crowdsourcing for on-street smart parking", "text": "Crowdsourcing has inspired a variety of novel mobile applications. However, identifying common practices across different applications is still challenging. In this paper, we use smart parking as a case study to investigate features of crowdsourcing that may apply to other mobile applications. Based on this we derive principles for efficiently harnessing crowdsourcing. We draw three key guidelines: First, we suggest that that the organizer can play an important role in coordinating participants', a key factor to successful crowdsourcing experience. Second, we suggest that the expected participation rate is a key factor when designing the crowdsourcing system: a system with a lower expected participation rate will place a higher burden in individual participants (e.g., through more complex interfaces that aim to improve the accuracy of the collected data). Finally, we suggest that not only above certain threshold of contributors, a crowdsourcing-based system is resilient to freeriding but, surprisingly, that including freeriders (i.e., actors that do not participate in system effort but share its benefits in terms of coordination) benefits the entire system."}
{"_id": "661b27166c83ca689e287de41602f7444121f4af", "title": "Optimal Dynamic Assortment Planning with Demand Learning", "text": "We study a family of stylized assortment planning problems, where arriving customers make purchase decisions among offered products based on maximizing their utility. Given limited display capacity and no a priori information on consumers\u2019 utility, the retailer must select which subset of products to offer. By offering different assortments and observing the resulting purchase behavior, the retailer learns about consumer preferences, but this experimentation should be balanced with the goal of maximizing revenues. We develop a family of dynamic policies that judiciously balance the aforementioned tradeoff between exploration and exploitation, and prove that their performance cannot be improved upon in a precise mathematical sense. One salient feature of these policies is that they \u201cquickly\u201d recognize, and hence limit experimentation on, strictly suboptimal products."}
{"_id": "ed73e48720cf1780fee1fe15536fde2f31edf914", "title": "A Fast Deep Learning Model for Textual Relevance in Biomedical Information Retrieval", "text": "Publications in the life sciences are characterized by a large technical vocabulary, with many lexical and semantic variations for expressing the same concept. Towards addressing the problem of relevance in biomedical literature search, we introduce a deep learning model for the relevance of a document\u2019s text to a keyword style query. Limited by a relatively small amount of training data, the model uses pre-trained word embeddings. With these, the model first computes a variable-length Delta matrix between the query and document, representing a difference between the two texts, which is then passed through a deep convolution stage followed by a deep feed-forward network to compute a relevance score. This results in a fast model suitable for use in an online search engine. The model is robust and outperforms comparable state-of-the-art deep learning approaches."}
{"_id": "7852d1f7ed81875445caa40b65e67e2813a484d2", "title": "SK-LSH: An Efficient Index Structure for Approximate Nearest Neighbor Search", "text": "Approximate Nearest Neighbor (ANN) search in high dimensional space has become a fundamental paradigm in many applications. Recently, Locality Sensitive Hashing (LSH) and its variants are acknowledged as the most promising solutions to ANN search. However, state-of-the-art LSH approaches suffer from a drawback: accesses to candidate objects require a large number of random I/O operations. In order to guarantee the quality of returned results, sufficient objects should be verified, which would consume enormous I/O cost. To address this issue, we propose a novel method, called SortingKeys-LSH (SK-LSH), which reduces the number of page accesses through locally arranging candidate objects. We firstly define a new measure to evaluate the distance between the compound hash keys of two points. A linear order relationship on the set of compound hash keys is then created, and the corresponding data points can be sorted accordingly. Hence, data points that are close to each other according to the distance measure can be stored locally in an index file. During the ANN search, only a limited number of disk pages among few index files are necessary to be accessed for sufficient candidate generation and verification, which not only significantly reduces the response time but also improves the accuracy of the returned results. Our exhaustive empirical study over several real-world data sets demonstrates the superior efficiency and accuracy of SK-LSH for the ANN search, compared with state-of-the-art methods, including LSB, C2LSH and CK-Means."}
{"_id": "3b9ab87d793e3a391af6e44e7ec170facb102bb9", "title": "Ontology-based Aspect Extraction for an Improved Sentiment Analysis in Summarization of Product Reviews", "text": "Current approaches in aspect-based sentiment analysis ignore or neutralize unhandled issues emerging from the lexicon-based scoring (i.e., SentiWordNet), whereby lexical sentiment analysis only classifies text based on affect word presence and word count are limited to these surface features. This is coupled with considerably low detection rate among implicit concepts in the text. To address this issues, this paper proposed the use of ontology to i) enhance aspect extraction process by identifying features pertaining to implicit entities, and ii) eliminate lexicon-based sentiment scoring issues which, in turn, improve sentiment analysis and summarization accuracy. Concept-level sentiment analysis aims to go beyond word-level analysis by employing ontologies which act as a semantic knowledge base rather than the lexicon. The outcome is an Ontology-Based Product Sentiment Summarization (OBPSS) framework which outperformed other existing summarization systems in terms of aspect extraction and sentiment scoring. The improved performance is supported by the sentence-level linguistic rules applied by OBPSS in providing a more accurate sentiment analysis."}
{"_id": "df13ae1e19995a7326dfdcf19f2e7d308b24ef58", "title": "RECOGNITION WITH WEIGHTED FINITE-STATE TRANSDUCERS", "text": "This chapter describes a general representation and algorithmic framework for speech recognition based on weighted finite-state transducers. These transducers provide a common and natural representation for major components of speech recognition systems, including hidden Markov models (HMMs), context-dependency models, pronunciation dictionaries, statistical grammars, and word or phone lattices. General algorithms for building and optimizing transducer models are presented, including composition for combining models, weighted determinization and minimization for optimizing time and space requirements, and a weight pushing algorithm for redistributing transition weights optimally for speech recognition. The application of these methods to large-vocabulary recognition tasks is explained in detail, and experimental results are given, in particular for the North American Business News (NAB) task, in which these methods were used to combine HMMs, full cross-word triphones, a lexicon of forty thousand words, and a large trigram grammar into a single weighted transducer that is only somewhat larger than the trigram word grammar and that runs NAB in real-time on a very simple decoder. Another example demonstrates that the same methods can be used to optimize lattices for second-pass recognition."}
{"_id": "3cdd95fd3ef3e3195e159901e34bad302e120754", "title": "Ranking in context using vector spaces", "text": "This paper presents a principled approach to the problem of retrieval in context. The notion of basis of a vector space was introduced previously to describe context. Any basis vectors represents a distinct piece of context, such as time, space, or word meaning. A vector is generated by a basis just as an informative object or an information need is generated in a context. As a consequence a different basis generates a different vector as a different context would generate different information needs or informative objects. Also the Vector Space Model (VSM) describes information needs and informative objects as query vectors and document vectors, respectively. However the VSM assumes that there is a unique basis, which is the set of versors and always generates the same vector provided the same coefficients. Thus a drawback of the VSM is that the vectors are insensitive to context, i.e. they are generated and ranked in the same way independently of the context in which information need and informative objects are. This paper also proposes a function to rank documents in context. Since a basis spans a subspace, which includes all the vectors of object being in the same context, the ranking function is a distance measure between the document vector and the subspace. Even though ranking is still based on an inner product between two vectors, the basic difference is that projection and distance depend on the basis, i.e. on the pieces of context and then ultimately on context. Since an informative object can be produced by different contexts, different bases can arise and then ranking can change. Mathematically, object vector generation is given by x = p1b1 + \u00b7 \u00b7 \u00b7 p k b k = x = B \u00b7 p, where B is a n \u00d7 k (k \u2264 n) complex matrix and p is a k \u00d7 1 real vector. The b's are independent vectors and as such form a basis B of a subspace L(B) of C n. The basis generates all the vectors in L(B) and every vector in it describes an informative object produced within the context described by B. It should be clear that every vector x in L(B) is entirely contained in the subspace spanned by B. Any other vector of the vector space may not be entirely contained in the subspace L(B) and may be more or less close to it. A vector y \u2026"}
{"_id": "6df60e5294ad8403fc069a267d11139280b65c74", "title": "A Comparative Study of Bug Algorithms for Robot Navigation", "text": "This paper presents a literature survey and a comparative study of Bug Algorithms, with the goal of investigating their potential for robotic navigation. At first sight, these methods seem to provide an efficient navigation paradigm, ideal for implementations on tiny robots with limited resources. Closer inspection, however, shows that many of these Bug Algorithms assume perfect global position estimate of the robot which in GPS-denied environments implies considerable expenses of computation and memory \u2013 relying on accurate Simultaneous Localization And Mapping (SLAM) or Visual Odometry (VO) methods. We compare a selection of Bug Algorithms in a simulated robot and environment where they endure different types noise and failure-cases of their on-board sensors. From the simulation results, we conclude that the implemented Bug Algorithms\u2019 performances are sensitive to many types of sensor-noise, which was most noticeable for odometry-drift. This raises the question if Bug Algorithms are suitable for real-world, on-board, robotic navigation as is. Variations that use multiple sensors to keep track of their progress towards the goal, were more adept in completing their task in the presence of sensor-failures. This shows that Bug Algorithms must spread their risk, by relying on the readings of multiple sensors, to be suitable for real-world deployment."}
{"_id": "6bc3c59f072f57662368f28fea9f33db878de193", "title": "Adaptive Binary Search Trees Jonathan Carlyle Derryberry", "text": "A ubiquitous problem in the field of algorithms and data structures is that of searching for an element from an ordered universe. The simple yet powerful binary search tree (BST) model provides a rich family of solutions to this problem. Although BSTs require \u03a9(lg n) time per operation in the worst case, various adaptive BST algorithms are capable of exploiting patterns in the sequence of queries to achieve tighter, input-sensitive, bounds that can be o(lg n) in many cases. This thesis furthers our understanding of what is achievable in the BST model along two directions. First, we make progress in improving instance-specific lower bounds in the BST model. In particular, we introduce a framework for generating lower bounds on the cost that any BST algorithm must pay to execute a query sequence, and we show that this framework generalizes previous lower bounds. This suggests that the optimal lower bound in the framework is a good candidate for being tight to within a constant factor of the optimal BST algorithm for each input. Further, we show that lower bounds in this framework are also valid lower bounds for instances of the partial-sums problem in a restricted model of computation, which suggests that augmented BSTs may be the most efficient way of maintaining sums over ranges of an array when the entries of the array can be updated throughout time. Second, we improve the input-sensitive upper bounds that are known to be achievable in the BST model by introducing two new BST algorithms, skip-splay and cache-splay. These two algorithms are the first BSTs that are known to have running times that have nontrivial competitiveness to Iacono\u2019s Unified Bound, which is a generalization of the dynamic finger and working set bounds. Skip-splay is a simple algorithm that is nearly identical to splaying, and it achieves a running time that is within additive O(lg lg n) per operation of the Unified Bound. Cache-splay is a slightly more complicated splay-based algorithm that is the first BST to achieve the Unified Bound to within a constant factor."}
{"_id": "aba97bb55ea74e5edc379af2844f023a1ba8ccaf", "title": "Multilevel Color Transfer on Images for Providing an Artistic Sight of the World", "text": "This paper represents a twofold approach. On the one hand, we introduce a simple method that encompasses an ensemble of image processing algorithms with a multilevel color transfer approach at its core. On the other hand, the method is applied for providing an artistic look to standard images. The approach proposes a multilevel color transfer in a chromatic channel of the CIELAB color space. Once converted from red, green and blue, a specific channel on both images, input and target (reference), is thresholded in a number of levels. Later, the color transfer is performed between regions from corresponding levels using a classical color transfer method. In the application phase, the color palette of a recognized artwork of the Fauve movement is mapped to the input image, emulating the sight of the artist, characterized by the use of vivid colors. Filtering techniques are applied to the outcome, in order to emulate the basic brushstrokes of the artist. Experimental results are shown, visualizing and comparing the input images with the outcomes."}
{"_id": "c17abaf848f2e8005f7c67074b8248efa1e7a79f", "title": "Collaborative process planning and manufacturing in product lifecycle management", "text": "Companies are moving towards quickly providing better customer-centric products and services improve market share and market size with continuously growing revenue. As such, the effective collaboration among customers, developers, suppliers, and manufacturers throughout the entire product lifecycle is becoming much more important for the most advanced competitiveness. To address this need, a framework for product lifecycle collaboration is proposed in this study. The details of these collaboration models throughout the entire product lifecycle are depicted. As one of the key elements in product lifecycle collaboration, technology to support collaborative product manufacturing is proposed, developed and implemented in this study. It is hoped that the developed technology for collaborative product manufacturing will lay a frontier basis for further research and development in product lifecycle management. # 2007 Elsevier B.V. All rights reserved."}
{"_id": "2d78dd731f860fc1a4f8086a530b39e0ba888135", "title": "Kalman Filtering in Wireless Sensor Networks", "text": "Challenges associated with the scarcity of bandwidth and power in wireless communications have to be addressed. For the state-estimation problems discussed in the paper, observations about a common state are collected by physically distributed terminals. To perform state estimation, wireless sensor networks (WSNs) may share these observations with each other or communicate them to a fusion center for centralized processing. With K vector observations {yk(n)}K k=1 available, the optimal mean squared error (MSE) estimation of the state x(n) for the linear model is accomplished by a Kalman filter."}
{"_id": "09d7415dec64ebdd2cd14ae2105eb0a3d1c28d1c", "title": "A Novel Four-DOF Parallel Manipulator Mechanism and Its Kinematics", "text": "A novel 4-UPU parallel manipulator mechanism that can perform three-dimensional translations and rotation about Z axis is presented. The principle that the mechanism can perform the above motions is analyzed based on the screw theory, the mobility of the mechanism is calculated, and the rationality of the chosen input joints is discussed. The forward and inverse position kinematics solutions of the mechanism and corresponding numerical examples are given, the workspace and the singularity of the parallel mechanism are discussed. The mechanism having the advantages of simple symmetric structure and large stiffness can be applied to the developments of NC positioning platforms, parallel machine tools, four-dimensional force sensors and micro-positional parallel manipulators"}
{"_id": "626d68fbbb10182a72d1ac305fbb52ae7e47f0dc", "title": "A highly efficient power harvester with wide dynamic input power range for 900 MHz wireless power transfer applications", "text": "This work demonstrates the design of an adaptive reconfigurable rectifier to address the issue of early breakdown voltage in a conventional rectifier and extends the rectifier's operation for wide dynamic input power range. A depletion-mode field-effect transistor has been introduced to operate as a switch and compensate at low and high input power levels for rectifier. This design accomplishes 40% of RF-DC power conversion efficiency over a wide dynamic input power range from -10 dBm to 27 dBm, while exhibiting 78% of peak power efficiency at 22 dBm. The power harvester is designed to operate in the 900 MHz ISM band and suitable for Wireless Power Transfer applications."}
{"_id": "0200506b4a0b582859ef24b9a946871d29dde0b4", "title": "FCNN: Fourier Convolutional Neural Networks", "text": "The Fourier domain is used in computer vision and machine learning as image analysis tasks in the Fourier domain are analogous to spatial domain methods but are achieved using different operations. Convolutional Neural Networks (CNNs) use machine learning to achieve state-of-the-art results with respect to many computer vision tasks. One of the main limiting aspects of CNNs is the computational cost of updating a large number of convolution parameters. Further, in the spatial domain, larger images take exponentially longer than smaller image to train on CNNs due to the operations involved in convolution methods. Consequently, CNNs are often not a viable solution for large image computer vision tasks. In this paper a Fourier Convolution Neural Network (FCNN) is proposed whereby training is conducted entirely within the Fourier domain. The advantage offered is that there is a significant speed up in training time without loss of effectiveness. Using the proposed approach larger images can therefore be processed within viable computation time. The FCNN is fully described and evaluated. The evaluation was conducted using the benchmark Cifar10 and MNIST datasets, and a bespoke fundus retina image dataset. The results demonstrate that convolution in the Fourier domain gives a significant speed up without adversely affecting accuracy. For simplicity the proposed FCNN concept is presented in the context of a basic CNN architecture, however, the FCNN concept has the potential to improve the speed of any neural network system involving convolution."}
{"_id": "30531392043ea99462d41b2b429590a0fcd842ff", "title": "Information Gain-based Exploration Using Rao-Blackwellized Particle Filters", "text": "This paper presents an integrated approach to exploration, mapping, and localization. Our algorithm uses a highly efficient Rao-Blackwellized particle filter to represent the posterior about maps and poses. It applies a decision-theoretic framework which simultaneously considers the uncertainty in the map and in the pose of the vehicle to evaluate potential actions. Thereby, it trades off the cost of executing an action with the expected information gain and takes into account possible sensor measurements gathered along the path taken by the robot. We furthermore describe how to utilize the properties of the Rao-Blackwellization to efficiently compute the expected information gain. We present experimental results obtained in the real world and in simulation to demonstrate the effectiveness of our approach."}
{"_id": "231c39621b9cf3af5bebcc788c3dde7dd3023aa8", "title": "A Hybrid Approach to Vietnamese Word Segmentation Using Part of Speech Tags", "text": "Word segmentation is one of the most important tasks in NLP. This task, within Vietnamese language and its own features, faces some challenges, especially in words boundary determination. To tackle the task of Vietnamese word segmentation, in this paper, we propose the WS4VN system that uses a new approach based on Maximum matching algorithm combining with stochastic models using part-of-speech information. The approach can resolve word ambiguity and choose the best segmentation for each input sentence. Our system gives a promising result with an F-measure of 97%, higher than the results of existing publicly available Vietnamese word segmentation systems."}
{"_id": "6108e1d3f5037365213130b13a694c6a404543ca", "title": "20 years of the SMART protein domain annotation resource", "text": "SMART (Simple Modular Architecture Research Tool) is a web resource (http://smart.embl.de) for the identification and annotation of protein domains and the analysis of protein domain architectures. SMART version 8 contains manually curated models for more than 1300 protein domains, with approximately 100 new models added since our last update article (1). The underlying protein databases were synchronized with UniProt (2), Ensembl (3) and STRING (4), doubling the total number of annotated domains and other protein features to more than 200 million. In its 20th year, the SMART analysis results pages have been streamlined again and its information sources have been updated. SMART's vector based display engine has been extended to all protein schematics in SMART and rewritten to use the latest web technologies. The internal full text search engine has been redesigned and updated, resulting in greatly increased search speed."}
{"_id": "767755e5c7389eefb8b60e784dc8395c8d0f417a", "title": "2-hop Blockchain : Combining Proof-of-Work and Proof-of-Stake Securely", "text": "Cryptocurrencies like Bitcoin have proven to be a phenomenal success. Bitcoin-like systems use proofof-work mechanism which is therefore considered as 1-hop blockchain, and their security holds if the majority of the computing power is under the control of honest players. However, this assumption has been seriously challenged recently and Bitcoin-like systems will fail when this assumption is broken. We propose the \u0080rst provably secure 2-hop blockchain by combining proof-of-work (\u0080rst hop) and proof-of-stake (second hop) mechanisms. On top of Bitcoin\u2019s brilliant ideas of utilizing the power of the honest miners, via their computing resources, to secure the blockchain, we further leverage the power of the honest users/stakeholders, via their coins/stake, to achieve this goal. \u008ce security of our blockchain holds if the honest players control majority of the collective resources (which consists of both computing power and stake). \u008cat said, even if the adversary controls more than 50% computing power, the honest players still have the chance to defend the blockchain via honest stake. \u2217An early version with title \u201cSecuring Bitcoin-like Blockchains against a Malicious Majority of Computing Power\u201d appeared in ePrint Archive in July 2016. \u008ce current version shares the same motivation. But the construction idea and modeling approach have been completely revised. \u2020Virginia Commonwealth University. Email: duong\u00823@vcu.edu. \u2021Shanghai Jiao Tong University. Most work done while visiting Cryptography Lab at Virginia Commonwealth University. Email: fanlei@sjtu.edu.cn. \u00a7Virginia Commonwealth University. Email: hszhou@vcu.edu."}
{"_id": "db030e37e8fe57bc3d3c5ce54123852c0160e0b4", "title": "An Approximation Algorithm for the Noah's Ark Problem with Random Feature Loss", "text": "The phylogenetic diversity (PD) of a set of species is a measure of their evolutionary distinctness based on a phylogenetic tree. PD is increasingly being adopted as an index of biodiversity in ecological conservation projects. The Noah's Ark Problem (NAP) is an NP-Hard optimization problem that abstracts a fundamental conservation challenge in asking to maximize the expected PD of a set of taxa given a fixed budget, where each taxon is associated with a cost of conservation and a probability of extinction. Only simplified instances of the problem, where one or more parameters are fixed as constants, have as of yet been addressed in the literature. Furthermore, it has been argued that PD is not an appropriate metric for models that allow information to be lost along paths in the tree. We therefore generalize the NAP to incorporate a proposed model of feature loss according to an exponential distribution and term this problem NAP with Loss (NAPL). In this paper, we present a pseudopolynomial time approximation scheme for NAPL."}
{"_id": "8cc2a9f2c886007bb5d30994dee241f192d4d405", "title": "Digital anthropometry for human body measurement on android platform", "text": "Anthropometry is the science of measuring the size and proportions of the human body. Based on the current researches, callipers and tape measures are the instruments used in measuring body length of an individual. Anthropometry is mostly conducted by the tailors as well as interior designers. This study aims to develop Digital Anthropometry for Human Body Measurement Using OpenCV on Android Platform which automatically measures human body using android phone. These body parts to be measured are represented by a marker; shoulder width, sleeve length, body length, back-waist length, back-neck-to-cuff, cross-back, outseam and inseam. For the accuracy, the distance requirement of the device is 30 inches away from the subject for upper body and 62 inches for the lower body. The required color shirt of the subject is a white and black pants or shorts, the required background color is white and bright surrounding is required."}
{"_id": "e4f70c9e6961766a19862a8443a1566a9e182e87", "title": "Skin lesions on yellowfin tuna Thunnus albacares from Gulf of Mexico outer continental shelf: Morphological, molecular, and histological diagnosis of infection by a capsalid monogenoid.", "text": "We characterize lesion-associated capsaline infections on yellowfin tuna, Thunnus albacares, in the Gulf of Mexico by comparing our specimens with published descriptions and museum specimens ascribed to Capsala biparasiticum and its synonyms: vouchers of C. biparasiticum from parasitic copepods; the holotype of Capsala neothunni; and vouchers of Capsala abidjani. Those from parasitic copepods differed by having a small, rounded body, large anterior attachment organs, closely spaced dorsomarginal body sclerites, small testes, and a short and wide testicular field. No morphometric feature in the holotype of C. neothunni ranged outside of that reported for the newly-collected specimens, indicating conspecificity of our specimens. The specimens of C. abidjani differed by having a large anterior attachment organ, few and dendritic testes, and a short, wide testicular field. Large subunit ribosomal DNA (28S) sequences grouped our specimens and Capsala sp. as sister taxa and indicated a phylogenetic affinity of Nasicola klawei. The haptoral attachment site comprised a crater-like depression surrounded by a blackish-colored halo of extensively rugose skin, with abundant pockmarked-like, irregularly-shaped oblong or semi-circular epidermal pits surrounding these attachment sites. Histology confirmed extensive folding of epidermis and underlying stratum laxum, likely epidermal hyperplasia, foci of weak cell-to-cell adhesions among apical malpighian cells as well as that between stratum germinativum and stratum laxum, myriad goblet cells in epidermis, rodlet cells in apical layer of epidermis, and lymphocytic infiltrates and melanin in dermis. The present study comprises (i) the first published report of this parasite from yellowfin tuna captured in the Gulf of Mexico-NW Atlantic Ocean Basin, (ii) confirmation of its infection on the skin (rather than on a parasitic copepod), (iii) the first molecular data for this capsaline, and (iv) the first observations of histopathological changes associated with a capsalid infection on a wild-caught epipelagic fish."}
{"_id": "b9102e71dea65a3030254268156f1ae2ec6facd2", "title": "Capturing Handwritten Ink Strokes with a Fast Video Camera", "text": "We present a system for capturing ink strokes written with ordinary pen and paper using a fast camera with a frame rate comparable to a stylus digitizer. From the video frames, ink strokes are extracted and used as input to an online handwriting recognition engine. A key component in our system is a pen up/down detection model for detecting the contact of the pen-tip with the paper in the video frames. The proposed model consists of feature representation with convolutional neural networks and classification with a recurrent neural network. We also use a high speed tracker with kernelized correlation filters to track the pen-tip. For training and evaluation, we collected labeled video data of users writing English and Japanese phrases from public datasets, and we report on character accuracy scores for different frame rates in the two languages."}
{"_id": "62a450ad1d47fdd9a4c53222bd2281ba3d7c3cb9", "title": "Accelerating computer vision algorithms using OpenCL framework on the mobile GPU - A case study", "text": "Recently, general-purpose computing on graphics processing units (GPGPU) has been enabled on mobile devices thanks to the emerging heterogeneous programming models such as OpenCL. The capability of GPGPU on mobile devices opens a new era for mobile computing and can enable many computationally demanding computer vision algorithms on mobile devices. As a case study, this paper proposes to accelerate an exemplar-based inpainting algorithm for object removal on a mobile GPU using OpenCL. We discuss the methodology of exploring the parallelism in the algorithm as well as several optimization techniques. Experimental results demonstrate that our optimization strategies for mobile GPUs have significantly reduced the processing time and make computationally intensive computer vision algorithms feasible for a mobile device. To the best of the authors' knowledge, this work is the first published implementation of general-purpose computing using OpenCL on mobile GPUs."}
{"_id": "be223f332c19b13d39dfcc6e8d58f16a0253f8fe", "title": "Business ecosystem analysis framework", "text": "The concept of business ecosystems has gained popularity from researchers and practitioners since first introduced in Moore's article in Harvard Business Review in 1993 \u201cPredators and prey: a new ecology of competition\u201d. Inspired by biology, the concept of business ecosystems provides a metaphor to comprehend the inter-wined nature of industries [1]. The reason behind the popularity of this concept and its differentiation from other business theories like Osterwalder's Business Model Canvas [2] and Porter's Value Chain Approach [3] is its comprehensiveness and inclusion of several other components surrounding any business. In this paper, the necessary methods, components and tools for creating, analyzing and managing a business ecosystem successfully will be introduced. The aim of the paper is to propose a Business Ecosystem Analysis Framework that covers all the steps needed to analyze, manage and ensure the feasibility and sustainability of any ecosystem. This framework can be used to locate strengths and weaknesses of an ecosystem, identify its main actors and define how their maturity, interconnectedness and continuity can be improved. The results of the research can be applied to any industry sector."}
{"_id": "ffaca05d50e18d04c7cdf731fd983f9f68bb11be", "title": "A compact cavity filter with novel TM mode dielectric resonator structure", "text": "A novel transverse magnetic (TM) mode dielectric resonator and its filter realization are proposed for mobile communication system miniaturization applications. Single dielectric resonator and an eight-pole filter with three-transmission-zeros are modeled and designed for desired specification by using efficient optimization technique. About 30% horizontal dimensions are saved comparing with the common coaxial filter. Excellent filter responses are obtained and measured."}
{"_id": "0db01127c3a919df8891d85c270109894c67e3f8", "title": "Exploiting Morphological Regularities in Distributional Word Representations", "text": "We present an unsupervised, language agnostic approach for exploiting morphological regularities present in high dimensional vector spaces. We propose a novel method for generating embeddings of words from their morphological variants using morphological transformation operators. We evaluate this approach on MSR word analogy test set (Mikolov et al., 2013d) with an accuracy of 85% which is 12% higher than the previous best known system."}
{"_id": "a293b3804d1972c9f72ed3490eaafa66349d1597", "title": "Evolution for automatic assessment of the difficulty of sokoban boards", "text": "Many games have a collection of boards with the difficulty of an instance of the game determined by the starting configuration of the board. Correctly rating the difficulty of the boards is somewhat haphazard and required either a remarkable level of understanding of the game or a good deal of play-testing. In this study we explore evolutionary algorithms as a tool to automatically grade the difficulty of boards for a version of the game sokoban. Mean time-to-solution by an evolutionary algorithm and number of failures to solve a board are used as a surrogate for the difficulty of a board. Initial testing with a simple string-based representation, giving a sequence of moves for the sokoban agent, provided very little signal; it usually failed. Two other representations, based on a reactive linear genetic programming structure called an ISAc list, generated useful hardness-classification information for both hardness surrogates. These two representations differ in that one uses a randomly initialized population of ISAc lists while the other initializes populations with competent agents pre-trained on random collections of sokoban boards. The study encompasses four hardness surrogates: probability-of-failure and mean time-to-solution for each of these two representations. All four are found to generate similar information about board hardness, but probability-of-failure with pre-evolved agents is found to be faster to compute and to have a clearer meaning than the other three board-hardness surrogates."}
{"_id": "be6d4cecf5e26bd05ab51e61c3399ce00ece2c57", "title": "Survey of microphthalmia in Japan", "text": "To report the current status of patients with microphthalmia based on a cross-sectional survey of patient hospital visits. A questionnaire was sent to the departments of ophthalmology in 1,151 major Japanese hospitals to survey the following: the number of patients with microphthalmia who visited the outpatient clinics between January 2008 and December 2009; gender; age; family history; associated ocular anomalies; complications and systemic diseases; surgical treatment; vision and management. A retrospective quantitative registry of 1,254 microphthalmic eyes (851 patients) from 454 hospitals (39.4%) was compiled. Of the patients for whom data were available, 50% ranged in age from 0 to 9\u00a0years. The major ocular findings were nanophthalmos, coloboma, and vitreoretinal malformations. Ocular complications frequently developed, including cataracts, glaucoma, and retinal detachment. Surgery was performed in 21.4% of all cases, and systemic diseases were present in 31% of all cases. The vision associated with microphthalmia exceeded 0.1 in about 30% of the eyes. Glasses and low vision aids were used by 21.6% of patients. Patients with microphthalmia often have ocular and systemic anomalies. Early assessment and preservation of vision and long-term complication management are needed."}
{"_id": "4011609d66df3fbb629ec075ccd7b5ae1c146f5f", "title": "Pose Estimation from Line Correspondences using Direct Linear Transformation", "text": "Ansar, the method by Ansar and Daniilidis (2003), implementation from Xu et al. (2016). Mirzaei, the method by Mirzaei and Roumeliotis (2011). RPnL, the method by Zhang et al. (2013). ASPnL, the method by Xu et al. (2016). LPnL Bar LS, the method by Xu et al. (2016). LPnL Bar ENull, the method by Xu et al. (2016). DLT-Lines, the method by Hartley and Zisserman (2004, p. 180), our implementation. DLT-Pl\u00fccker-Lines, the method by P\u0159ibyl et al. (2015), our implementation. DLT-Combined-Lines, the proposed method."}
{"_id": "8d626dce3daad4ebfca730d4ded4b10d69648d42", "title": "New Maximum Power Point Tracker Using Sliding-Mode Observer for Estimation of Solar Array Current in the Grid-Connected Photovoltaic System", "text": "A new maximum power point tracker (MPPT) for a grid-connected photovoltaic system without solar array current sensor is proposed. The solar array current information is obtained from the sliding-mode observer and fed into the MPPT to generate the reference voltage. The parameter values such as capacitances can be changed up to 50% from their nominal values, and the linear observer cannot estimate the correct state values under the parameter variations and noisy environments. The structure of a sliding-mode observer is simple, but it shows the robust tracking property against modeling uncertainties and parameter variations. In this paper, the sliding-mode observer for the solar array current has been proposed to compensate for the parameter variations. The mathematical modeling and the experimental results verify the validity of the proposed method"}
{"_id": "42bc977ae4630021b92b92eaee3a905e4aaa65a6", "title": "THE CRIMINAL PSYCHOPATH: HISTORY, NEUROSCIENCE, TREATMENT, AND ECONOMICS.", "text": "The manuscript surveys the history of psychopathic personality, from its origins in psychiatric folklore to its modern assessment in the forensic arena. Individuals with psychopathic personality, or psychopaths, have a disproportionate impact on the criminal justice system. Psychopaths are twenty to twenty-five times more likely than non-psychopaths to be in prison, four to eight times more likely to violently recidivate compared to non-psychopaths, and are resistant to most forms of treatment. This article presents the most current clinical efforts and neuroscience research in the field of psychopathy. Given psychopathy's enormous impact on society in general and on the criminal justice system in particular, there are significant benefits to increasing awareness of the condition. This review also highlights a recent, compelling and cost-effective treatment program that has shown a significant reduction in violent recidivism in youth on a putative trajectory to psychopathic personality."}
{"_id": "a22f5971b02929326c0b86a335248c62de3c70a1", "title": "A Precoding-Based Multicarrier Non-Orthogonal Multiple Access Scheme for 5G Cellular Networks", "text": "Non-orthogonal multiple access (NOMA) has become one of the desirable schemes for the 5G cellular network standard due to its better cell coverage capability, higher data rate, and massive connectivity. Orthogonal frequency division multiplexing (OFDM) can be combined with NOMA to get the higher spectral efficiency. The main drawback of OFDM-based NOMA (OFDM-NOMA) scheme is high peak-to-average power ratio (PAPR). Therefore, this paper presents a new discrete-cosine transform matrix precoding-based uplink OFDM-NOMA scheme for PAPR reduction. Additionally, the proposed precoding-based uplink multicarrier NOMA scheme takes advantage of information spreading over the entire signal spectrum; thus, bit-error rate is also reduced. Simulation results show that the proposed precoding-based NOMA scheme outperforms as compared with the non-precoding-based NOMA schemes available in the literature."}
{"_id": "5af39fdae2cc2e054bb324b102b45ff67b325e89", "title": "Moral Responsibility and Determinism : The Cognitive Science of Folk Intuitions", "text": "The dispute between compatibilists and incompatibilists must be one of the most persistent and heated deadlocks in Western philosophy. Incompatibilists maintain that people are not fully morally responsible if determinism is true, i.e., if every event is an inevitable consequence of the prior conditions and the natural laws. By contrast, compatibilists maintain that even if determinism is true our moral responsibility is not undermined in the slightest, for determinism and moral responsibility are perfectly consistent.1 The debate between these two positions has invoked many different resources, including quantum mechanics, social psychology, and basic metaphysics. But recent discussions have relied heavily on arguments that draw on people\u2019s intuitions about particular cases. Some philosophers have claimed that people have incompatibilist intuitions (e.g., Kane 1999, 218; Strawson 1986, 30; Vargas 2006); others have challenged this claim and suggested that people\u2019s intuitions actually fit with compatibilism (Nahmias et al. 2005). But although philosophers have constructed increasingly sophisticated arguments about the implications of people\u2019s intuitions, there has been remarkably little discussion about why people have the intuitions they do. That is to say, relatively little has been said about the specific psychological processes that generate or sustain people\u2019s intuitions. And yet, it seems clear that questions about the sources of people\u2019s intuitions could have a major impact on debates"}
{"_id": "b315e736aff647c6ab496b958c62c737c41e9e13", "title": "Persian-English Machine Translation: An Overview of the Shiraz Project", "text": "This report describes the Shiraz project MT prototype for a Persian to English machine translation system using typed feature structures and unification. An overview of the linguistic properties of Persian is presented and the morphological and syntactic grammars developed within the Shiraz project are discussed. The underlying model for the system is a layered chart, capable of representing heterogeneous types of hypotheses in an integrated way. .. 2 ...... 2 ..... 4 ... 5 ..... 5 ... .. 9 ... 9 .... 9 .. 11 ... 11 ... 11 ... 11 13 ... 13 ..... 15 ..... 15 ... .... ... 16 ... 17 .... 17 ... 18 ... 18 ... 18 .... 19 ... 19 ... 19 .... 19 .... 20"}
{"_id": "17bb76f79a0aca5abc36096bcb36c2611c0d1d71", "title": "False data injection attacks against state estimation in electric power grids", "text": "A power grid is a complex system connecting electric power generators to consumers through power transmission and distribution networks across a large geographical area. System monitoring is necessary to ensure the reliable operation of power grids, and state estimation is used in system monitoring to best estimate the power grid state through analysis of meter measurements and power system models. Various techniques have been developed to detect and identify bad measurements, including interacting bad measurements introduced by arbitrary, nonrandom causes. At first glance, it seems that these techniques can also defeat malicious measurements injected by attackers.\n In this article, we expose an unknown vulnerability of existing bad measurement detection algorithms by presenting and analyzing a new class of attacks, called false data injection attacks, against state estimation in electric power grids. Under the assumption that the attacker can access the current power system configuration information and manipulate the measurements of meters at physically protected locations such as substations, such attacks can introduce arbitrary errors into certain state variables without being detected by existing algorithms. Moreover, we look at two scenarios, where the attacker is either constrained to specific meters or limited in the resources required to compromise meters. We show that the attacker can systematically and efficiently construct attack vectors in both scenarios to change the results of state estimation in arbitrary ways. We also extend these attacks to generalized false data injection attacks, which can further increase the impact by exploiting measurement errors typically tolerated in state estimation. We demonstrate the success of these attacks through simulation using IEEE test systems, and also discuss the practicality of these attacks and the real-world constraints that limit their effectiveness."}
{"_id": "844b795767b7c382808cc866ffe0c74742f706d4", "title": "Real versus imagined locomotion: A [18F]-FDG PET-fMRI comparison", "text": "The cortical, cerebellar and brainstem BOLD-signal changes have been identified with fMRI in humans during mental imagery of walking. In this study the whole brain activation and deactivation pattern during real locomotion was investigated by [(18)F]-FDG-PET and compared to BOLD-signal changes during imagined locomotion in the same subjects using fMRI. Sixteen healthy subjects were scanned at locomotion and rest with [(18)F]-FDG-PET. In the locomotion paradigm subjects walked at constant velocity for 10 min. Then [(18)F]-FDG was injected intravenously while subjects continued walking for another 10 min. For comparison fMRI was performed in the same subjects during imagined walking. During real and imagined locomotion a basic locomotion network including activations in the frontal cortex, cerebellum, pontomesencephalic tegmentum, parahippocampal, fusiform and occipital gyri, and deactivations in the multisensory vestibular cortices (esp. superior temporal gyrus, inferior parietal lobule) was shown. As a difference, the primary motor and somatosensory cortices were activated during real locomotion as distinct to the supplementary motor cortex and basal ganglia during imagined locomotion. Activations of the brainstem locomotor centers were more prominent in imagined locomotion. In conclusion, basic activation and deactivation patterns of real locomotion correspond to that of imagined locomotion. The differences may be due to distinct patterns of locomotion tested. Contrary to constant velocity real locomotion (10 min) in [(18)F]-FDG-PET, mental imagery of locomotion over repeated 20-s periods includes gait initiation and velocity changes. Real steady-state locomotion seems to use a direct pathway via the primary motor cortex, whereas imagined modulatory locomotion an indirect pathway via a supplementary motor cortex and basal ganglia loop."}
{"_id": "af2beeea9c0df5964d924389bfc4855f78db4da2", "title": "ToneTrack: Leveraging Frequency-Agile Radios for Time-Based Indoor Wireless Localization", "text": "Indoor localization of mobile devices and tags has received much attention recently, with encouraging fine-grained localization results available with enough line-of-sight coverage and hardware infrastructure. Some of the most promising techniques analyze the time-of-arrival of incoming signals, but the limited bandwidth available to most wireless transmissions fundamentally constrains their resolution. Frequency-agile wireless networks utilize bandwidths of varying sizes and locations in a wireless band to efficiently share the wireless medium between users. ToneTrack is an indoor location system that achieves sub-meter accuracy with minimal hardware and antennas, by leveraging frequency-agile wireless networks to increase the effective bandwidth. Our novel signal combination algorithm combines time-of-arrival data from different transmissions as a mobile device hops across different channels, approaching time resolutions previously not possible with a single narrowband channel. ToneTrack's novel channel combination and spectrum identification algorithms together with the triangle inequality scheme yield superior results even in non-line-of-sight scenarios with one to two walls separating client and APs and also in the case where the direct path from mobile client to an AP is completely blocked. We implement ToneTrack on the WARP hardware radio platform and use six of them served as APs to localize Wi-Fi clients in an indoor testbed over one floor of an office building. Experimental results show that ToneTrack can achieve a median 90 cm accuracy when 20 MHz bandwidth APs overhear three packets from adjacent channels."}
{"_id": "8af1ba21a954c910994f5ad599777640452a3d6b", "title": "The synthesis and rendering of eroded fractal terrains", "text": "In standard fractal terrain models based on fractional Brownian motion the statistical character of the surface is, by design, the same everywhere. A new approach to the synthesis of fractal terrain height fields is presented which, in contrast to previous techniques, features locally independent control of the frequencies composing the surface, and thus local control of fractal dimension and other statistical characteristics. The new technique, termed noise synthesis, is intermediate in difficulty of implementation, between simple stochastic subdivision and Fourier filtering or generalized stochastic subdivision, and does not suffer the drawbacks of creases or periodicity. Varying the local crossover scale of fractal character or the fractal dimension with altitude or other functions yields more realistic first approximations to eroded landscapes. A simple physical erosion model is then suggested which simulates hydraulic and thermal erosion processes to create gloabl stream/valley networks and talus slopes. Finally, an efficient ray tracing algorithm for general height fields, of which most fractal terrains are a subset, is presented."}
{"_id": "61dafde5a30d88185321cb598ef445c6071d1a66", "title": "Design of Two-stage High Gain Operational Amplifier Using Current Buffer Compensation for Low Power Applications", "text": "i ii ACKNOWLEDGEMENT I wish to express my sincere appreciation to my thesis advisor, Mr. B. K. Hemant for his timely, informative feedback and support during this effort. It has been a great pleasure to learn from him during the process of expanding and refining my research. He has been a generous mentor. I also like to express my gratefulness to Dr. his perpetual encouragement, generous help and inspiring guidance. for their support and guidance. I am also very grateful for all the help I received from my classmates."}
{"_id": "35fee50ee6668113528602764e874ab159e5c649", "title": "Simplifying and improving mobile based data collection", "text": "Social programs often require collection of demographic and program related information. Pen-paper surveys have been the most favorable way of collecting such information. However, with proliferation of smartphones, low cost mobile connectivity with good coverage and availability of several data collection applications that can even work around the connectivity concerns, pen-paper surveys are now being replaced by mobile based data collection. In this work, we discuss the enhancement of Open Data Kit (ODK), an existing mobile based data collection application, with features such as pre-filling and validation SMS. The additional features were motivated from the real world requirements of reducing the data collection effort and minimizing the discrepancies in records for a project requiring information dissemination to beneficiaries of National Rural Employment Guarantee Scheme (NREGS), a government of India scheme. Data collection exercise, using our extended tool, is currently ongoing in two districts across two different states in India. Preliminary results from one deployment indicate that even with 8% of pre-filled fields in a form, there was a 16.1% decrease in the errors. Additionally, time per survey reduced with the pre-filling option. Preliminary results from another deployment indicate that 35% of the pre-filled forms had mismatched or changed information when compared with the data available in government records."}
{"_id": "56501b3f890e39582367cb2a0542579b71200d14", "title": "DakNet: rethinking connectivity in developing nations", "text": "DakNet provides extraordinarily low-cost digital communication, letting remote villages leapfrog past the expense of traditional connectivity solutions and begin development of a full-coverage broadband wireless infrastructure. What is the basis for a progressive, market-driven migration from e-governance to universal broadband connectivity that local users will pay for? DakNet, an ad hoc network that uses wireless technology to provide asynchronous digital connectivity, is evidence that the marriage of wireless and asynchronous service may indeed be the beginning of a road to universal broadband connectivity. DakNet has been successfully deployed in remote parts of both India and Cambodia at a cost two orders of magnitude less than that of traditional landline solutions."}
{"_id": "301acc8ad92493fff25592d3bcf92d2656d944a1", "title": "Open data kit: tools to build information services for developing regions", "text": "This paper presents Open Data Kit (ODK), an extensible, open-source suite of tools designed to build information services for developing regions. ODK currently provides four tools to this end: Collect, Aggregate, Voice, and Build. Collect is a mobile platform that renders application logic and supports the manipulation of data. Aggregate provides a \"click-to-deploy\" server that supports data storage and transfer in the \"cloud\" or on local servers. Voice renders application logic using phone prompts that users respond to with keypad presses. Finally, Build is a application designer that generates the logic used by the tools. Designed to be used together or independently, ODK core tools build on existing open standards and are supported by an open-source community that has contributed additional tools. We describe four deployments that demonstrate how the decisions made in the system architecture of ODK enable services that can both push and pull information in developing regions."}
{"_id": "353343336dd1e61f57f7adb75852e051ae01ce50", "title": "MyExperience: a system for in situ tracing and capturing of user feedback on mobile phones", "text": "This paper presents MyExperience, a system for capturing both objective and subjective in situ data on mobile computing activities. MyExperience combines the following two techniques: 1) passive logging of device usage, user context, and environmental sensor readings, and 2) active context-triggered user experience sampling to collect in situ, subjective user feedback. MyExperience currently runs on mobile phones and supports logging of more than 140 event types, including: 1) device usage such as communication, application usage, and media capture, 2) user context such as calendar appointments, and 3) environmental sensing such as Bluetooth and GPS. In addition, user experience sampling can be targeted to moments of interest by triggering off sensor readings. We present several case studies of field deployments on people's personal phones to demonstrate how MyExperience can be used effectively to understand how people use and experience mobile technology."}
{"_id": "58f6e4ecf7412422265282d20a8f227265af1b8e", "title": "Study of Brightness Preservation Histogram Equalization Techniques", "text": "Image captured under various environment conditions with low contrast results in visual deterioration. To enhance the contrast of images a number of techniques have been proposed over the years. Histogram equalization is one of those methods which could be used for this purpose. For the contrast enhancement purpose histogram equalization (HE) attempts reduction in the number of gray levels. Contrast enhancement by histogram equalization method involve mapping of gray levels on the basis of changeover function that is derived from the probability distribution of gray levels embedded in input image. For this, the neighbouring gray levels in image having low probability density values are combined into single gray level value. Whereas for neighbouring gray levels having high probability density value the gap between them is increased, HE often makes over-enhancement of images with frequent gray level region resulting in loss of contrast for less frequent area of the images. Thus, the HE technique is not able to preserve image brightness. So wherever the preservation of image brightness is required this method is not preferred. This paper review various histogram equalization techniques used in preserving image brightness as well as the contrast enhancement is presented. The key approach for this aim is to use image segmentation. The criteria used are decomposition of images and the presentation of the basic difference between these techniques. For comparative study of performance of these techniques, absolute mean brightness error (AMBE) is used to determine degree of brightness preservation while peak signal to noise ratio (PSNR) have been used to fine\\ degree of contrast enhancement."}
{"_id": "cd7911bd866f176ed86934152f5fec3335683b51", "title": "Recurrent neural network based recommendation for time heterogeneous feedback", "text": "In recommender systems, several kinds of user feedback with time stamps are collected and used for recommendation, which is called the time heterogeneous feedback recommendation problem. The existing recommendation methods can handle only one kind of feedback or ignore the time stamps of the feedback. To solve the time heterogeneous feedback recommendation problem, in this paper, we propose a recurrent neural network model to predict the probability that the user will access an item given the time heterogeneous feedback of this user. To our best knowledge, it is the first time to solve the time heterogeneous feedback recommendation problem by deep neural network model. The proposed model is learned by back propagation algorithm and back propagation through time algorithm. The comparison results on four real-life datasets indicate that the proposed method outperforms the compared state-ofthe-art approaches. \u00a9 2016 Elsevier B.V. All rights reserved."}
{"_id": "7f781d073b9bfba2c00d15fb098e057d8be57717", "title": "GaitViewer: Semantic Gait Data Analysis and Visualization Tool", "text": "Clinical gait analysis studies human locomotion by characterizing movement patterns with heterogeneous acquired gait data (e.g., spatio-temporal parameters, geometry of motion, measures of force). Lack of semantic integration of these heterogeneous data slows down collaborative studies among the laboratories that have different acquisition systems. In this work we propose a semantic integration methodology for gait data, and present GaitViewer a prototype web application for semantic analysis and visualization of gait data. The proposed semantic integration methodology separates heterogeneous and mixed numerical and meta information in gait data. Ontology concepts represent the separated meta information, while numerical information is stored in a NoSQL database. Parallel coordinates visual analytics technique are used as an interface to the analytics tools proposed by the NoSQL database. We tailor GaitViewer for two common use-cases in clinical gait analysis: correlation of measured signals for different subjects, and follow-up analysis of the same subject. Finally, we discuss the potential of a large-scale adoption of frameworks such as GaitViewer for the next generation diagnosis systems for movement disorders."}
{"_id": "d372629db7d6516c4729c847eb3f6484ee86de94", "title": "The VQA-Machine: Learning How to Use Existing Vision Algorithms to Answer New Questions", "text": "One of the most intriguing features of the Visual Question Answering (VQA) challenge is the unpredictability of the questions. Extracting the information required to answer them demands a variety of image operations from detection and counting, to segmentation and reconstruction. To train a method to perform even one of these operations accurately from {image, question, answer} tuples would be challenging, but to aim to achieve them all with a limited set of such training data seems ambitious at best. Our method thus learns how to exploit a set of external off-the-shelf algorithms to achieve its goal, an approach that has something in common with the Neural Turing Machine [10]. The core of our proposed method is a new co-attention model. In addition, the proposed approach generates human-readable reasons for its decision, and can still be trained end-to-end without ground truth reasons being given. We demonstrate the effectiveness on two publicly available datasets, Visual Genome and VQA, and show that it produces the state-of-the-art results in both cases."}
{"_id": "641e4a8a49d08e432646463bb4c5afc7d58cde46", "title": "BJOLP : The Big Joint Optimal Landmarks Planner", "text": "BJOLP, The Big Joint Optimal Landmarks Planner uses landmarks to derive an admissible heuristic, which is then used to guide a search for a cost-optimal plan. In this paper we review landmarks and describe how they can be used to derive an admissible heuristic. We conclude with presenting the BJOLP"}
{"_id": "1742e3704c2d5f3b9327d71120f18c4d686abdec", "title": "On Minimal Sets to Destroy the k-Core in Random Networks", "text": "We study the problem of finding the smallest set of nodes in a network whose removal results in an empty k-core; where the k-core is the sub-network obtained after the iterative removal of all nodes of degree smaller than k. This problem is also known in the literature as finding the minimal contagious set. The main contribution of our work is an analysis of the performance of the recently introduced corehd algorithm [Scientific Reports, 6, 37954 (2016)] on random networks taken from the configuration model via a set of deterministic differential equations. Our analyses provides upper bounds on the size of the minimal contagious set that improve over previously known bounds. Our second contribution is a new heuristic called the weak-neighbor algorithm that outperforms all currently known local methods in the regimes considered."}
{"_id": "1b3dfa08444f249934d61ab01139b9eafa64f94c", "title": "Random k-Labelsets for Multilabel Classification", "text": "A simple yet effective multilabel learning method, called label powerset (LP), considers each distinct combination of labels that exist in the training set as a different class value of a single-label classification task. The computational efficiency and predictive performance of LP is challenged by application domains with large number of labels and training examples. In these cases, the number of classes may become very large and at the same time many classes are associated with very few training examples. To deal with these problems, this paper proposes breaking the initial set of labels into a number of small random subsets, called labelsets and employing LP to train a corresponding classifier. The labelsets can be either disjoint or overlapping depending on which of two strategies is used to construct them. The proposed method is called RAkEL (RAndom k labELsets), where k is a parameter that specifies the size of the subsets. Empirical evidence indicates that RAkEL manages to improve substantially over LP, especially in domains with large number of labels and exhibits competitive performance against other high-performing multilabel learning methods."}
{"_id": "11d3700945bddfea49e8538c419e1bf45ffdcd29", "title": "Financial Frictions and the Persistence of History : A Quantitative Exploration", "text": "The transitional dynamics of the neoclassical growth model is at odds with the growth experiences of many developing economies. For example, consider the post-communist transition of Eastern Europe: Following liberalization, output grew slowly (and even fell), while interest rates and investment rates remained low, contrary to the predictions of the standard growth theory. We incorporate financial frictions and resource misallocation into an otherwise-standard growth model, and account for the observed growth dynamics. We discipline the model by calibrating its stationary equilibrium to the US data on standard macroeconomic aggregates, wealth distribution, and firms\u2019 internal v. external financing. Our model economy converges slowly to the steady state, with the interest rate and the investment rate starting low and rising over time. We also obtain a new result on heterogeneity and macroeconomic dynamics: Even though strict approximate aggregation\u2014that relies only on unconditional first moments\u2014does not hold in our economy, the effect of heterogeneity on aggregate dynamics is almost completely captured by conditional first moments. Department of Economics, Northwestern University, 2001 Sheridan Road, Evanston, IL 60208, USA (e-mail: f-buera@northwestern.edu). Department of Economics, University of Wisconsin, 1180 Observatory Drive, Madison, WI 53706, USA (email: yshin@ssc.wisc.edu). The transitional dynamics of the neoclassical growth model is characterized by swift convergence to the steady state, staggeringly high real interest rates in the early stages of development, and decreasing investment-to-output ratio over time. However, most economies\u2019 growth experiences are at odds with these neoclassical predictions. One such example is the \u201cmiracle\u201d economies of East Asia. As Figure 1 illustrates, growth and investment rates were relatively low initially and peaked in the latter stages. More dramatic evidence against the neoclassical growth model can be found in the post-communist transition of the Eastern European economies. The liberalization of these economies was followed by disappointingly low\u2014even negative\u2014growth rates and slowly increasing investment rates (Figure 2).1 Even after going through an exhaustive list of modified neoclassical growth models, King and Rebelo (1993) conclude that neoclassical transitional dynamics can only play a minor role in explaining observed growth experiences. This is a fundamental challenge for the neoclassical growth theory. Output Per Capita Relative to the US + + + + + + + + + + + JPN KOR SGP bc bc bc bc bc bc bc bc bc bc TWN rs rs rs rs rs rs rs rs rs rs rs THA 195"}
{"_id": "2fd8e5fbbdee225b0b9a35ab2b4be9cc2a9788ad", "title": "Neural interactions between motor cortical hemispheres during bimanual and unimanual arm movements.", "text": "Cortico-cortical connections through the corpus callosum are a major candidate for mediating bimanual coordination. However, aside from the deficits observed after lesioning this connection, little positive evidence indicates its function in bimanual tasks. In order to address this issue, we simultaneously recorded neuronal activity at multiple sites within the arm area of motor cortex in both hemispheres of awake primates performing different bimanual and unimanual movements. By employing an adapted form of the joint peri-stimulus time histogram technique, we discovered rapid movement-related correlation changes between the local field potentials (LFPs) of the two hemispheres that escaped detection by time-averaged cross-correlation methods. The frequency and amplitude of dynamic modifications in correlations between the hemispheres were similar to those within the same hemisphere. As in previous EEG studies, we found that, on average, correlation decreased during movements. However, a subset of recording site pairs did show transiently increased correlations around movement onset (57% of all pairs and conditions in monkey G, 39% in monkey P). In interhemispheric pairs, these increases were consistently related to the mode of coupling between the two arms. Both the correlations between the movements themselves and the interhemispheric LFP correlation increases were strongest during bimanual symmetric movements, and weaker during bimanual asymmetric and unimanual movements. Increased correlations occurred mainly around movement onset, whilst decreases in correlation dominated during movement execution. The task-specific way in which interhemispheric correlations are modulated is compatible with the notion that interactions between the hemispheres contribute to behavioural coupling between the arms."}
{"_id": "d4a00cd12d904c9562b66dc82fab9dfd71f887f4", "title": "Modeling and performance evaluation of IEEE 802.22 physical layer", "text": "IEEE 802.22, also called Wireless Regional Area Network (WRAN), is the newest wireless standard being developed for remote and rural areas. In this paper an overview of the standard, and more specifically its PHY layer is introduced. In order to evaluate the performance of the system, we model the PHY layer in MATLAB/SIMULINK and extract the Bit Error Rate (BER) of the system for different code rates and modulation schemes with noisy channel."}
{"_id": "37efe61ca10790c88f502a0939c6b8d590e8dacb", "title": "Using deep networks for fraud detection in the credit card transactions", "text": "Deep learning is a very noteworthy technic that is take into consideration in the several fields. One of the most attractive subjects that need more attention in the prediction accuracy is fraud detection. As the deep network can gradually learn the concepts of any complicated problem, using this technic in this realm is very beneficial. To do so, we propose a deep autoencoder to extract best features from the information of the credit card transactions and then append a softmax network to determine the class labels. Regarding the effect of features in such data employing an overcomplete autoencoder can map data to a high dimensional space and using the sparse models leads to be in a discriminative space that is useful for classification aims. The benefit of this method is the generality virtues that we can use such networks in several realms e.g. national intelligence, cyber security, marketing, medical informatics and so on. Another advantage is the ability to facing big datasets. As the learning phase is offline we can use it for a huge amount of data and generalize that is earned. Results can reveal the advantages of proposed method comparing to the state of the arts."}
{"_id": "4d535b22c5a9dca418154ec5d198d64ac4f7cb44", "title": "Robust control and H\u221e-optimization - Tutorial paper", "text": "-The paper presents a tutorial exposition of ~=-optimal regulation theory, emphasizing the relevance of the mixed sensitivity problem for robust control system design."}
{"_id": "8e9119613bceb83cc8a5db810cf5fd015cf75739", "title": "Rogue-Access-Point Detection: Challenges, Solutions, and Future Directions", "text": "Rogue devices are an increasingly dangerous reality in the insider threat problem domain. Industry, government, and academia need to be aware of this problem and promote state-of-the-art detection methods."}
{"_id": "dfb9cf8b6ec11a87a864ff30fbe2d8377fb66340", "title": "Using Summarization to Discover Argument Facets in Online Idealogical Dialog", "text": "More and more of the information available on the web is dialogic, and a significant portion of it takes place in online forum conversations about current social and political topics. We aim to develop tools to summarize what these conversations are about. What are the CENTRAL PROPOSITIONS associated with different stances on an issue; what are the abstract objects under discussion that are central to a speaker\u2019s argument? How can we recognize that two CENTRAL PROPOSITIONS realize the same FACET of the argument? We hypothesize that the CENTRAL PROPOSITIONS are exactly those arguments that people find most salient, and use human summarization as a probe for discovering them. We describe our corpus of human summaries of opinionated dialogs, then show how we can identify similar repeated arguments, and group them into FACETS across many discussions of a topic. We define a new task, ARGUMENT FACET SIMILARITY (AFS), and show that we can predict AFS with a .54 correlation score, versus an ngram system baseline of .39 and a semantic textual similarity system baseline of .45."}
{"_id": "95a213c530b605b28e1db4fcad6c3e8e1944f48b", "title": "Ensemble Methods in Machine Learning", "text": ""}
{"_id": "7fe46dbe358e5f209544e7896c03c52679ae13fc", "title": "QELAR: A Machine-Learning-Based Adaptive Routing Protocol for Energy-Efficient and Lifetime-Extended Underwater Sensor Networks", "text": "Underwater sensor network (UWSN) has emerged in recent years as a promising networking technique for various aquatic applications. Due to specific characteristics of UWSNs, such as high latency, low bandwidth, and high energy consumption, it is challenging to build networking protocols for UWSNs. In this paper, we focus on addressing the routing issue in UWSNs. We propose an adaptive, energy-efficient, and lifetime-aware routing protocol based on reinforcement learning, QELAR. Our protocol assumes generic MAC protocols and aims at prolonging the lifetime of networks by making residual energy of sensor nodes more evenly distributed. The residual energy of each node as well as the energy distribution among a group of nodes is factored in throughout the routing process to calculate the reward function, which aids in selecting the adequate forwarders for packets. We have performed extensive simulations of the proposed protocol on the Aqua-sim platform and compared with one existing routing protocol (VBF) in terms of packet delivery rate, energy efficiency, latency, and lifetime. The results show that QELAR yields 20 percent longer lifetime on average than VBF."}
{"_id": "c4e73fc1a5b3195ee7f1ac6b025eb35cc7f78c84", "title": "Mechanical design of a novel Hand Exoskeleton for accurate force displaying", "text": "This paper deals with the mechanical design of a novel haptic Hand Exoskeleton (HE) that allows exerting controlled forces on the fingertip of the index and thumb of the operator. The proposed device includes several design solutions for optimizing the accuracy and mechanical performances. Remote Centers of Motion mechanisms have been adopted for delocalizing the encumbrance of linkages of the structure away from the operator's fingers. An improved stiffness of the transmission and reduced requirements for the actuators have been achieved thanks to a novel Patent Pending principle for integrating speed reduction ratio with the transmission system."}
{"_id": "018fd30f1a51c6523b382b6f7db87ddd865e393d", "title": "End-fire Quasi-Yagi antennas with pattern diversity on LTCC technology for 5G mobile communications", "text": "We have designed two end-fire antennas on LTCC with horizontal and vertical polarizations respectively. The antennas operate at 38GHz, a potential frequency for 5G applications. The horizontally-polarized antenna provide a broadband performance about 27% and 6dB end-fire gain and the vertically-polarized one provide a 12.5% bandwidth and 5dB gain. Both antenna are integrated under a compact substrate. Excellent isolation is achieved between the nearby elements making these antennas suitable for corner elements in 5G mobile system."}
{"_id": "93cc0cd1659751e43822d3cced5832324653c697", "title": "Algorithmic Bias in Autonomous Systems", "text": "Algorithms play a key role in the functioning of autonomous systems, and so concerns have periodically been raised about the possibility of algorithmic bias. However, debates in this area have been hampered by different meanings and uses of the term, \u201cbias.\u201d It is sometimes used as a purely descriptive term, sometimes as a pejorative term, and such variations can promote confusion and hamper discussions about when and how to respond to algorithmic bias. In this paper, we first provide a taxonomy of different types and sources of algorithmic bias, with a focus on their different impacts on the proper functioning of autonomous systems. We then use this taxonomy to distinguish between algorithmic biases that are neutral or unobjectionable, and those that are problematic in some way and require a response. In some cases, there are technological or algorithmic adjustments that developers can use to compensate for problematic bias. In other cases, however, responses require adjustments by the agent, whether human or autonomous system, who uses the results of the algorithm. There is no \u201cone size fits all\u201d solution to algorithmic bias."}
{"_id": "30f11d456739d8d83d8cbf240dea46a26bca6509", "title": "A power line communication network infrastructure for the smart home", "text": "Low voltage electrical wiring in homes has largely been dismissed as too noisy and unpredictable to support high speed communication signals. However, recent advances in communication and modulation methodologies as well as in adaptive digital signal processing and error detection and correction have spawned novel media access control (MAC) and physical layer (PHY) protocols, capable of supporting power line communication networks at speeds comparable to wired local area networks (LANs). In this paper we motivate the use of power line LAN\u2019s as a basic infrastructure for building integrated \u201dsmart homes,\u201d wherein information appliances (IA)\u2014ranging from simple control or monitoring devices to multimedia entertainment systems\u2014are seamlessly interconnected by the very wires which provide them electricity. By simulation and actual measurements using \u201dreference design\u201d prototype commercial powerline products, we show that the HomePlugMAC and PHY layers can guarantee QoS for real-time communications, supporting delay sensitive data streams for \u201dSmart Home\u201d applications."}
{"_id": "6032773345c73957f87178fd5d0556870299c4e1", "title": "Learning Deep Boltzmann Machines using Adaptive MCMC", "text": "When modeling high-dimensional richly structured data, it is often the case that the distribution defined by the Deep Boltzmann Machine (DBM) has a rough energy landscape with many local minima separated by high energy barriers. The commonly used Gibbs sampler tends to get trapped in one local mode, which often results in unstable learning dynamics and leads to poor parameter estimates. In this paper, we concentrate on learning DBM\u2019s using adaptive MCMC algorithms. We first show a close connection between Fast PCD and adaptive MCMC. We then develop a Coupled Adaptive Simulated Tempering algorithm that can be used to better explore a highly multimodal energy landscape. Finally, we demonstrate that the proposed algorithm considerably improves parameter estimates, particularly when learning large-scale DBM\u2019s."}
{"_id": "6e7dadd63455c194e3472bb181aaf509f89b9166", "title": "Classification using discriminative restricted Boltzmann machines", "text": "Recently, many applications for Restricted Boltzmann Machines (RBMs) have been developed for a large variety of learning problems. However, RBMs are usually used as feature extractors for another learning algorithm or to provide a good initialization for deep feed-forward neural network classifiers, and are not considered as a standalone solution to classification problems. In this paper, we argue that RBMs provide a self-contained framework for deriving competitive non-linear classifiers. We present an evaluation of different learning algorithms for RBMs which aim at introducing a discriminative component to RBM training and improve their performance as classifiers. This approach is simple in that RBMs are used directly to build a classifier, rather than as a stepping stone. Finally, we demonstrate how discriminative RBMs can also be successfully employed in a semi-supervised setting."}
{"_id": "a0117ec4cd582974d06159644d12f65862a8daa3", "title": "Deep Belief Networks Are Compact Universal Approximators", "text": "Deep belief networks (DBN) are generative models with many layers of hidden causal variables, recently introduced by Hinton, Osindero, and Teh (2006), along with a greedy layer-wise unsupervised learning algorithm. Building on Le Roux and Bengio (2008) and Sutskever and Hinton (2008), we show that deep but narrow generative networks do not require more parameters than shallow ones to achieve universal approximation. Exploiting the proof technique, we prove that deep but narrow feedforward neural networks with sigmoidal units can represent any Boolean expression."}
{"_id": "4a92f809913c9e6d4f50b8ceaaf6bb7a5a43c8f7", "title": "Global, regional, and national causes of child mortality: an updated systematic analysis for 2010 with time trends since 2000", "text": "BACKGROUND\nInformation about the distribution of causes of and time trends for child mortality should be periodically updated. We report the latest estimates of causes of child mortality in 2010 with time trends since 2000.\n\n\nMETHODS\nUpdated total numbers of deaths in children aged 0-27 days and 1-59 months were applied to the corresponding country-specific distribution of deaths by cause. We did the following to derive the number of deaths in children aged 1-59 months: we used vital registration data for countries with an adequate vital registration system; we applied a multinomial logistic regression model to vital registration data for low-mortality countries without adequate vital registration; we used a similar multinomial logistic regression with verbal autopsy data for high-mortality countries; for India and China, we developed national models. We aggregated country results to generate regional and global estimates.\n\n\nFINDINGS\nOf 7\u00b76 million deaths in children younger than 5 years in 2010, 64\u00b70% (4\u00b7879 million) were attributable to infectious causes and 40\u00b73% (3\u00b7072 million) occurred in neonates. Preterm birth complications (14\u00b71%; 1\u00b7078 million, uncertainty range [UR] 0\u00b7916-1\u00b7325), intrapartum-related complications (9\u00b74%; 0\u00b7717 million, 0\u00b7610-0\u00b7876), and sepsis or meningitis (5\u00b72%; 0\u00b7393 million, 0\u00b7252-0\u00b7552) were the leading causes of neonatal death. In older children, pneumonia (14\u00b71%; 1\u00b7071 million, 0\u00b7977-1\u00b7176), diarrhoea (9\u00b79%; 0\u00b7751 million, 0\u00b7538-1\u00b7031), and malaria (7\u00b74%; 0\u00b7564 million, 0\u00b7432-0\u00b7709) claimed the most lives. Despite tremendous efforts to identify relevant data, the causes of only 2\u00b77% (0\u00b7205 million) of deaths in children younger than 5 years were medically certified in 2010. Between 2000 and 2010, the global burden of deaths in children younger than 5 years decreased by 2 million, of which pneumonia, measles, and diarrhoea contributed the most to the overall reduction (0\u00b7451 million [0\u00b7339-0\u00b7547], 0\u00b7363 million [0\u00b7283-0\u00b7419], and 0\u00b7359 million [0\u00b7215-0\u00b7476], respectively). However, only tetanus, measles, AIDS, and malaria (in Africa) decreased at an annual rate sufficient to attain the Millennium Development Goal 4.\n\n\nINTERPRETATION\nChild survival strategies should direct resources toward the leading causes of child mortality, with attention focusing on infectious and neonatal causes. More rapid decreases from 2010-15 will need accelerated reduction for the most common causes of death, notably pneumonia and preterm birth complications. Continued efforts to gather high-quality data and enhance estimation methods are essential for the improvement of future estimates.\n\n\nFUNDING\nThe Bill & Melinda Gates Foundation."}
{"_id": "274946a974bc2bbbfe89c7f6fd3751396f295625", "title": "Theory and Applications of Robust Optimization", "text": "In this paper we survey the primary research, both theoretical and applied, in the area of Robust Optimization (RO). Our focus will be on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying the most prominent theoretical results of RO over the past decade, we will also present some recent results linking RO to adaptable models for multi-stage decision-making problems. Finally, we will highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering."}
{"_id": "40768956409dbafee1bb6c89a329bdf17e0efb0f", "title": "Building a motor simulation de novo: Observation of dance by dancers", "text": "Research on action simulation identifies brain areas that are active while imagining or performing simple overlearned actions. Are areas engaged during imagined movement sensitive to the amount of actual physical practice? In the present study, participants were expert dancers who learned and rehearsed novel, complex whole-body dance sequences 5 h a week across 5 weeks. Brain activity was recorded weekly by fMRI as dancers observed and imagined performing different movement sequences. Half these sequences were rehearsed and half were unpracticed control movements. After each trial, participants rated how well they could perform the movement. We hypothesized that activity in premotor areas would increase as participants observed and simulated movements that they had learnt outside the scanner. Dancers' ratings of their ability to perform rehearsed sequences, but not the control sequences, increased with training. When dancers observed and simulated another dancer's movements, brain regions classically associated with both action simulation and action observation were active, including inferior parietal lobule, cingulate and supplementary motor areas, ventral premotor cortex, superior temporal sulcus and primary motor cortex. Critically, inferior parietal lobule and ventral premotor activity was modulated as a function of dancers' ratings of their own ability to perform the observed movements and their motor experience. These data demonstrate that a complex motor resonance can be built de novo over 5 weeks of rehearsal. Furthermore, activity in premotor and parietal areas during action simulation is enhanced by the ability to execute a learned action irrespective of stimulus familiarity or semantic label."}
{"_id": "4891676f862db6800afbfea4503835d957fe2458", "title": "A Novel Miniature YIG Tuned Oscillator Achieves Octave Tuning Bandwidth with Ultra Low Phase Noise in X and Ku Bands", "text": "Traditional YIG tuned oscillators use negative resistance type circuits and large magnetic structure to produce excellent phase noise with octave plus tuning ranges, but at the expense of size and weight. This paper describes a revolutionary approach to YIG tuned oscillator design which makes use of a unique combination of a miniaturized magnetic structure, and a GaAs HBT ring oscillator circuit topology, working through a YIG tuned filter to produce octave tuning ranges in X and Ku bands with phase noise less than -125 dBC per Hz at 100 KHz. Frequency pulling is less than 10 KHz into a 2:1 VSWR load over all phases. Simulations and measured data are in excellent agreement"}
{"_id": "4ee7f38c36959f673937d0504120e829f5e719cd", "title": "Perceptual category mapping between English and Korean obstruents in non-CV positions: Prosodic location effects in second language identification skills", "text": "This study examines the degree to which mapping patterns between native language (L1) and second language (L2) categories for one prosodic context will generalize to other prosodic contexts, and how position-specific neutralization in the L1 influences the category mappings. Forty L1-Korean learners of English listened to English nonsense words consisting of /p b t d f v h \u00f0/ and /\u0251/, with the consonants appearing in pre-stressed intervocalic, poststressed intervocalic, or coda context, and were asked to identify the consonant with both Korean and English labeling and to give gradient evaluations of the goodness of each label to the stimuli. Results show that the mapping patterns differ extensively from those found previously with the same subjects for consonants in initial, onset context. The mapping patterns for the intervocalic context also differed by position with respect to stress location. Coda consonants elicited poor goodness-of-fit and noisier mapping patterns for all segments, suggesting that an L1 coda neutralization process put all L2-English sounds in codas as \u201cnew\u201d sounds under the Speech Learning Model (SLM) framework (Flege, 1995). Taken together, the results indicate that consonant learning needs to be evaluated in terms of position-by-position variants, rather than just being a general property of the overall"}
{"_id": "a6de0d1389e897cb2c7266401a57e6f10beddcf8", "title": "Viraliency: Pooling Local Virality", "text": "In our overly-connected world, the automatic recognition of virality – the quality of an image or video to be rapidly and widely spread in social networks – is of crucial importance, and has recently awaken the interest of the computer vision community. Concurrently, recent progress in deep learning architectures showed that global pooling strategies allow the extraction of activation maps, which highlight the parts of the image most likely to contain instances of a certain class. We extend this concept by introducing a pooling layer that learns the size of the support area to be averaged: the learned top-N average (LENA) pooling. We hypothesize that the latent concepts (feature maps) describing virality may require such a rich pooling strategy. We assess the effectiveness of the LENA layer by appending it on top of a convolutional siamese architecture and evaluate its performance on the task of predicting and localizing virality. We report experiments on two publicly available datasets annotated for virality and show that our method outperforms state-of-the-art approaches."}
{"_id": "dc2dcde8e068ba9a9073972cd406df873e9dfd93", "title": "Indirect Image Registration with Large Diffeomorphic Deformations", "text": "The paper adapts the large deformation diffeomorphic metric mapping framework for image registration to the indirect setting where a template is registered against a target that is given through indirect noisy observations. The registration uses diffeomorphisms that transform the template through a (group) action. These diffeomorphisms are generated by solving a flow equation that is defined by a velocity field with certain regularity. The theoretical analysis includes a proof that indirect image registration has solutions (existence) that are stable and that converge as the data error tends so zero, so it becomes a well-defined regularization method. The paper concludes with examples of indirect image registration in 2D tomography with very sparse and/or highly noisy data."}
{"_id": "573cd4046fd8b899a7753652cd0f4cf6e351c5ae", "title": "Shape-based recognition of wiry objects", "text": "We present an approach to the recognition of complex-shaped objects in cluttered environments based on edge information. We first use example images of a target object in typical environments to train a classifier cascade that determines whether edge pixels in an image belong to an instance of the desired object or the clutter. Presented with a novel image, we use the cascade to discard clutter edge pixels and group the object edge pixels into overall detections of the object. The features used for the edge pixel classification are localized, sparse edge density operations. Experiments validate the effectiveness of the technique for recognition of a set of complex objects in a variety of cluttered indoor scenes under arbitrary out-of-image-plane rotation. Furthermore, our experiments suggest that the technique is robust to variations between training and testing environments and is efficient at runtime."}
{"_id": "e73b9e7dd4f948553e8de4a0d46c24508b7a00f9", "title": "Consistent Linear Tracker With Converted Range, Bearing, and Range Rate Measurements", "text": "Active sonar and radar systems often include a measurement of range rate in addition to the position-based measurements of range and bearing. Due to the nonlinearity of the range rate measurement with respect to a target state in the Cartesian coordinate system, this measurement is not always fully exploited. The state of the art methods to utilize range rate include the extended Kalman filter and the unscented Kalman filter with sequential processing of the position based measurements and the range rate measurement. Common to these approaches is that the measurement prediction function remains nonlinear. The goal of this work is to develop a measurement conversion from range, bearing, and range rate to Cartesian position and velocity that is unbiased and consistent, with appropriate elimination of estimation bias. The converted measurement is then used with a linear Kalman filter. Performance of this new method is compared to state of the art techniques and shown to match or exceed that of existing techniques over a wide range of scenarios."}
{"_id": "2c92e8f93be31eb7d82f21b64e5b72fec735c169", "title": "A term-based methodology for query reformulation understanding", "text": "Key to any research involving session search is the understanding of how a user\u2019s queries evolve throughout the session. When a user creates a query reformulation, he or she is consciously retaining terms from their original query, removing others and adding new terms. By measuring the similarity between queries we can make inferences on the user\u2019s information need and how successful their new query is likely to be. By identifying the origins of added terms we can infer the user\u2019s motivations and gain an understanding of their interactions. In this paper we present a novel term-based methodology for understanding and interpreting query reformulation actions. We use TREC Session Track data to demonstrate how our technique is able to learn from query logs and we make use of click data to test user interaction behavior when reformulating queries. We identify and evaluate a range of term-based query reformulation strategies and show that our methods provide valuable insight into understanding query reformulation in session search."}
{"_id": "592885ae6cd235f8c642c2ec8765a857b84fee66", "title": "Object-Aware Identification of Microservices", "text": "Microservices is an architectural style inspired by service-oriented computing that structures an application as a collection of cohesive and loosely coupled components, which implement business capabilities. One of today\u2019s problems in designing microservice architectures is to decompose a system into cohesive, loosely coupled, and fine-grained microservices. Identification of microservices is usually performed intuitively, based on the experience of the system designers, however, if the functionalities of a system are highly interconnected, it is a challenging task to decompose the system into appropriate microservices. To tackle this challenge, we present a microservice identification method that decomposes a system using clustering technique. To this end, we model a system as a set of business processes and take two aspects of structural dependency and data object dependency of functionalities into account. Furthermore, we conduct a study to evaluate the effect of process characteristics on the accuracy of identification approaches."}
{"_id": "c1792ff52189bdec1de609cb0d00d64a7d7f128f", "title": "On the Reception and Detection of Pseudo-profound Bullshit", "text": "Although bullshit is common in everyday life and has attracted attention from philosophers, its reception (critical or ingenuous) has not, to our knowledge, been subject to empirical investigation. Here we focus on pseudo-profound bullshit, which consists of seemingly impressive assertions that are presented as true and meaningful but are actually vacuous. We presented participants with bullshit statements consisting of buzzwords randomly organized into statements with syntactic structure but no discernible meaning (e.g., \u201cWholeness quiets infinite phenomena\u201d). Across multiple studies, the propensity to judge bullshit statements as profound was associated with a variety of conceptually relevant variables (e.g., intuitive cognitive style, supernatural belief). Parallel associations were less evident among profundity judgments for more conventionally profound (e.g., \u201cA wet person does not fear the rain\u201d) or mundane (e.g., \u201cNewborn babies require constant attention\u201d) statements. These results support the idea that some people are more receptive to this type of bullshit and that detecting it is not merely a matter of indiscriminate skepticism but rather a discernment of deceptive vagueness in otherwise impressive sounding claims. Our results also suggest that a bias toward accepting statements as true may be an important component of pseudo-profound bullshit receptivity."}
{"_id": "2980fd9cc4f599bb93e0a5a11f4bab67364a4dde", "title": "Model Shrinking for Embedded Keyword Spotting", "text": "In this paper we present two approaches to improve computational efficiency of a keyword spotting system running on a resource constrained device. This embedded keyword spotting system detects a pre-specified keyword in real time at low cost of CPU and memory. Our system is a two stage cascade. The first stage extracts keyword hypotheses from input audio streams. After the first stage is triggered, hand-crafted features are extracted from the keyword hypothesis and fed to a support vector machine (SVM) classifier on the second stage. This paper focuses on improving the computational efficiency of the second stage SVM classifier. More specifically, select a subset of feature dimensions and merge the SVM classifier to a smaller size, while maintaining the keyword spotting performance. Experimental results indicate that we can remove more than 36% of the non-discriminative SVM features, and reduce the number of support vectors by more than 60% without significant performance degradation. This results in more than 15% relative reduction in CPU utilization."}
{"_id": "afe45eea3e954805df2dac45b55726773d221f1b", "title": "Predictive Analytics On Public Data - The Case Of Stock Markets", "text": "This work examines the predictive power of public data by aggregating information from multiple online sources. Our sources include microblogging sites like Twitter, online message boards like Yahoo! Finance, and traditional news articles. The subject of prediction are daily stock price movements from Standard & Poor\u2019s 500 index (S&P 500) during a period from June 2011 to November 2011. To forecast price movements we filter messages by stocks, apply state-of-the-art sentiment analysis to message texts, and aggregate message sentiments to generate trading signals for daily buy and sell decisions. We evaluate prediction quality through a simple trading model considering real-world limitations like transaction costs or broker commission fees. Considering 833 virtual trades, our model outperformed the S&P 500 and achieved a positive return on investment of up to ~0.49% per trade or ~0.24% when adjusted by market, depending on supposed trading costs."}
{"_id": "45a4fe01625c47f335e12d67602d3a02d713f095", "title": "Soft pneumatic actuator with adjustable stiffness layers for Multi-DoF Actuation", "text": "The soft pneumatic actuators (SPAs) are a solution toward the highly customizable and light actuators with the versatility of actuation modes, and an inherent compliance. Such flexibility allows SPAs to be considered as alternative actuators for wearable rehabilitative devices and search and rescue robots. The actuator material and air-chamber design dictate the actuator's mechanical performance. Therefore, each actuator design with a single pressure source produces a highly customized motion but only a single degree of freedom (DoF). We present a novel design and fabrication method for a SPA with different modes of actuation using integrated adjustable stiffness layers (ASLs). Unlike the most SPA designs where one independent chamber is needed for each mode of actuation, here we have a single chamber that drives three different modes of actuation by activating different combinations of ASLs. Adapting customized micro heaters and thermistors for modulating the temperature and stiffness of ASLs, we considerably broaden the work space of the SPA actuator. Here, a thorough characterization of the materials and the modeling of the actuator are presented. We propose a design methodology for developing application specific actuators with multi-DoFs that are light and compact."}
{"_id": "3f4cb985e14b975e2b588c8ebd0685ad1d895c23", "title": "Deep learning traffic sign detection, recognition and augmentation", "text": "Driving is a complex, continuous, and multitask process that involves driver's cognition, perception, and motor movements. The way road traffic signs and vehicle information is displayed impacts strongly driver's attention with increased mental workload leading to safety concerns. Drivers must keep their eyes on the road, but can always use some assistance in maintaining their awareness and directing their attention to potential emerging hazards. Research in perceptual and human factors assessment is needed for relevant and correct display of this information for maximal road traffic safety as well as optimal driver comfort. In-vehicle contextual Augmented Reality (AR) has the potential to provide novel visual feedbacks to drivers for an enhanced driving experience. In this paper, we present a new real-time approach for fast and accurate framework for traffic sign recognition, based on Cascade Deep learning and AR, which superimposes augmented virtual objects onto a real scene under all types of driving situations, including unfavorable weather conditions. Experiments results show that, by combining the Haar Cascade and deep convolutional neural networks show that the joint learning greatly enhances the capability of detection and still retains its realtime performance."}
{"_id": "5c6caa27256bea3190bab5c9e3ffcba271806321", "title": "A Design and Assessment of a Direction Finding Proximity Fuze Sensor", "text": "This paper presents the implementation and assessment of a direction finding proximity fuze sensor for anti-aircrafts or anti-air missiles. A higher rejection of clutter signals is achieved by employing a binary phase shift keying modulation using Legendre sequence. The direction finding is implemented by comparing received powers from six receiving antennas equally spaced by an angle of 60\u00b0 around a cylindrical surface. In addition, target detection algorithms have been developed for a robust detection of the target, taking the wide variation of target related parameters into considerations. The performances of the developed fuze sensor are experimentally verified by constructing the fuze-specific encounter simulation test apparatus, which collects and analyzes reflected signals from a standard target. The developed fuze sensor can successfully detect the -10 dBsm target over a 10 m range as well as the direction with an out-of-range rejection of about 40 dB. Furthermore, the developed fuze sensor can clearly detect the target with mesh clutter environment. To assess realistic operation, the fuze sensor is tested using 155 mm gun firing test setup. Through the gun firing test, the successful fuzing range and direction finding performances are demonstrated."}
{"_id": "211cd0a2ea0039b459fe3bbd06a2b34cddfb4cfe", "title": "Cognitive Context Models and Discourse", "text": "1. Mental models Since the early 1980s, the notion of mental model has been quite successful in cognitive psychology in general, and in the theory of text processing in particu-Such models have been conceptualized as representations in episodic memory of situations, acts or events spoken or thought about, observed or participated in by human actors, that is of experiences (Ehrlich, Tardieu & Cavazza 1993). In the theory of text processing, such (situation or event) models played a crucial role in establishing the necessary referential basis for the processing of anaphora and other phenomena of coherence (Albrecht & O Brien 1993). They further explained, among many other things, why text recall does not seem to be based on semantic representations of texts, but rather on the mental model construed or updated of the event the text is about (Bower & Morrow 1990). Conversely, mental models also play a role in the much neglected theory of discourse production, viz., as the mental point of departure of all text and talk, from which relevant information may be selected for the strategic construction of their global and local semantic structures. Many experiments have confirmed these hypotheses, and have shown at text comprehension and recall essentially involve a strategic manipulation of models, for instance by matching text information with structures of the The notion of mental space is sometimes also used in formal linguistics as a construct that has similar functions as our notion of a mental model (Faucormier 1985)."}
{"_id": "940279fe8a2a6e9a2614b8cbe391e56c0fb5ceab", "title": "XBOS: An Extensible Building Operating System", "text": "We present XBOS, an eXtensible Building Operating System for integrated management of previously isolated building subsystems. The key contribution of XBOS is the development of the Building Profile, a canonical, executable description of a building and its subsystems that changes with a building and evolves the control processes and applications accordingly. We discuss the design and implementation of XBOS in the context of 6 dimensions of an effective BAS \u2013 hardware presentation layer, canonical metadata, control process management, building evolution management, security, scalable UX and API \u2013 and evaluate against several recent building and sensor management systems. Lastly, we examine the evolution of a real, 10 month deployment of XBOS in a 7000 sq ft office building."}
{"_id": "0b05900471e6c7599028141836a742c25b84a671", "title": "Survey on Collaborative AR for Multi-user in Urban Simulation", "text": "This paper describes an augmented reality (AR) environment that allows multiple participants or multi-user to interact with 2D and 3D data. AR simply can provide a collaborative interactive AR environment for urban simulation, where users can interact naturally and intuitively. In addition, the collaborative AR makes multi-user in urban simulation to share simultaneously a real world and virtual world. The fusion between real and virtual world, existed in AR environment by see-through HMDs, achieves higher interactivity as a key features of collaborative AR. In real-time, precise registration between both worlds and multiuser are crucial for the collaborations. Collaborative AR allow multi-user to simultaneously share a real world surrounding them and a virtual world. Common problems in AR environment will be discussed and major issues in collaborative AR will be explained details in this paper. The features of collaboration in AR environment are will be identified and the requirements of collaborative AR will be defined. This paper will give an overview on collaborative AR environment for multi-user in urban studies and planning. The work will also cover numerous systems of collaborative AR environments for multi-user."}
{"_id": "39b3d021bc834948221334d26c9ef255741524a3", "title": "Training Deep and Recurrent Networks with Hessian-Free Optimization", "text": "Hessian-Free optimization (HF) is an approach for unconstrained minimization of real-valued smooth objective functions. Like standard Newton\u2019s method, it uses local quadratic approximations to generate update proposals. It belongs to the broad class of approximate Newton methods that are practical for problems of very high dimensionality, such as the training objectives of large neural networks. Different algorithms that use many of the same key principles have appeared in the literatures of various communities under different names such as Newton-CG, CG-Steihaug, Newton-Lanczos, and Truncated Newton [27, 28], but applications to machine learning and especially neural networks, have been limited or non-existent until recently. With the work of Martens [22] and later Martens and Sutskever [23] it has been demonstrated that such an approach, if carefully designed and implemented, can work very well for optimizing nonconvex functions such as the training objective for deep neural networks and recurrent neural networks (RNNs), given sensible random initializations. This was significant because gradient descent methods have been observed to be very slow and sometimes completely ineffective [16, 4, 17] on these problems, unless special non-random initializations schemes like layer-wise pre-training [16, 15, 3] are used. HF, which is a general optimization-based approach, can be used in conjunction with or as an alternative to existing pre-training methods and is more widely applicable, since it relies on fewer assumptions about the specific structure of the network. In this report we will first describe the basic HF approach, and then examine well-known general purpose performance-improving techniques as well as others that are specific to HF (versus other Truncated-Newton type approaches) or to neural networks. We will also provide practical tips for creating efficient and correct implementations, and discuss the pitfalls which may arise when designing and using an HF-based approach in a particular application."}
{"_id": "8a7acaf6469c06ae5876d92f013184db5897bb13", "title": "Neuronlike adaptive elements that can solve difficult learning control problems", "text": "It is shown how a system consisting of two neuronlike adaptive elements can solve a difficult learning control problem. The task is to balance a pole that is hinged to a movable cart by applying forces to the cart's base. It is argued that the learning problems faced by adaptive elements that are components of adaptive networks are at least as difficult as this version of the pole-balancing problem. The learning system consists of a single associative search element (ASE) and a single adaptive critic element (ACE). In the course of learning to balance the pole, the ASE constructs associations between input and output by searching under the influence of reinforcement feedback, and the ACE constructs a more informative evaluation function than reinforcement feedback alone can provide. The differences between this approach and other attempts to solve problems using neurolike elements are discussed, as is the relation of this work to classical and instrumental conditioning in animal learning studies and its possible implications for research in the neurosciences."}
{"_id": "08ac954ed1628d97548a125b4d95d871efff219c", "title": "Convergent Temporal-Difference Learning with Arbitrary Smooth Function Approximation", "text": "We introduce the first temporal-difference learning algorithms that converge with smooth value function approximators, such as neural networks. Conventional temporal-difference (TD) methods, such as TD(\u03bb), Q-learning and Sarsa have been used successfully with function approximation in many applications. However, it is well known that off-policy sampling, as well as nonlinear function approximation, can cause these algorithms to become unstable (i.e., the parameters of the approximator may diverge). Sutton et al. (2009a, 2009b) solved the problem of off-policy learning with linear TD algorithms by introducing a new objective function, related to the Bellman error, and algorithms that perform stochastic gradient-descent on this function. These methods can be viewed as natural generalizations to previous TD methods, as they converge to the same limit points when used with linear function approximation methods. We generalize this work to nonlinear function approximation. We present a Bellman error objective function and two gradient-descent TD algorithms that optimize it. We prove the asymptotic almost-sure convergence of both algorithms, for any finite Markov decision process and any smooth value function approximator, to a locally optimal solution. The algorithms are incremental and the computational complexity per time step scales linearly with the number of parameters of the approximator. Empirical results obtained in the game of Go demonstrate the algorithms\u2019 effectiveness."}
{"_id": "71d8a29a8292f1a177b5e45d195d96d181c72601", "title": "A 0.5V supply, 49nW band-gap reference and crystal oscillator in 40nm CMOS", "text": "This paper presents the co-design of 0.5V operational band-gap reference (BGR) and 32kHz crystal oscillator (XO) in 40nm CMOS process. The proposed BJT-based BGR offers 10\u00d7 resistor area reduction compared to the conventional BGR. The proposed XO provides trans-conductance enhancement without the need for large coupling capacitor and uses a feedback loop to regulate its bias current, oscillation amplitude across PVT. The total power consumption is less than 49nW from \u221240\u00b0C to 120\u00b0C. Without any trimming, the BGR achieves temperature coefficients of 8ppm/\u00b0C and XO has temperature coefficient of 0.25ppm/\u00b0C. The BGR offers 5\u00d7 lower area, 2\u00d7 lower power consumption at 120\u00b0C compared to prior work on resistor-based, non-duty cycled BGR. The XO improves power consumption by at least 2\u00d7 compared to prior work at 120\u00b0C, 0.5V supply."}
{"_id": "ab5932bf82b396ad4d3c447d1acd4154e67921eb", "title": "Multiobjective Optimization of Ultraflat Magnetic Components With PCB-Integrated Core", "text": "In future applications, e.g., in ultra-flat OLED lamp drivers or flat screen power supplies, ultra-flat ac/dc and dc/dc converter systems are highly demanded. Therefore, the design and implementation of a printed circuit board (PCB)-integrated flyback transformer for a 1-mm-thin single-phase power factor correction rectifier is under investigation. In this paper, first an overview on several integration methods is given. It is shown that the PCB integration of magnetic cores allows us to achieve the required thickness of 1 mm and a high energy density. In a next step, the design and the realization of ultra-flat magnetic components with PCB-integrated cores are discussed in detail. The presented multi-objective design procedure determines the inductor and/or transformer setup optimal with respect to minimal losses and/or minimal footprint area; for this purpose, all required electrical, magnetic, and geometrical parameters of the magnetic component are considered in the design process. Furthermore, all specific implications entailed by the PCB-integrated core, e.g., the core setup, anisotropic core losses, the interleaving of windings, or an accurate reluctance model are treated. Finally, experimental results are used to verify the design procedure."}
{"_id": "361a19d477497905b9d4835533b6a0f0ed86ffa0", "title": "The Logic of Plausible Reasoning: A Core Theory", "text": "1. o formal representation of plausible inference patterns; such OS deductions, inductions, ond analogies, thot ore frequently employed in answering every doy questions: 2. a set of parameters, such OS conditional likelihood, typicality, and similarity, that affect the certainty of people\u2019s onswers to such questions; ond 3. a system relating the different plorisible inference patterns and the different certainty parameters."}
{"_id": "d7fcd46b68a89905c8a7ca46d997dce4af5c2e1f", "title": "A FPGA-Based Neuromorphic Locomotion System for Multi-Legged Robots", "text": "The paper develops a neuromorphic system on a Spartan 6 field programmable gate array (FPGA) board to generate locomotion patterns (gaits) for three different legged robots (biped, quadruped, and hexapod). The neuromorphic system consists of a reconfigurable FPGA-based architecture for a 3G artificial neural network (spiking neural network), which acts as a Central Pattern Generator (CPG). The locomotion patterns, are then generated by the CPG through a general neural architecture, which parameters are offline estimated by means of grammatical evolution and Victor-Purpura distance-based fitness function. The neuromorphic system is fully validated on real biped, quadruped, and hexapod robots."}
{"_id": "0fb0b6dbf2a5271b60f7eb837797617b13438801", "title": "An Empirical Study of Algorithms for Point-Feature Label Placement", "text": "A major factor affecting the clarity of graphical displays that include text labels is the degree to which labels obscure display features (including other labels) as a result of spatial overlap. Point-feature label placement (PFLP) is the problem of placing text labels adjacent to point features on a map or diagram so as to maximize legibility. This problem occurs frequently in the production of many types of informational graphics, though it arises most often in automated cartography. In this paper we present a comprehensive treatment of the PFLP problem, viewed as a type of combinatorial optimization problem. Complexity analysis reveals that the basic PFLP problem and most interesting variants of it are NP-hard. These negative results help inform a survey of previously reported algorithms for PFLP; not surprisingly, all such algorithms either have exponential time complexity or are incomplete. To solve the PFLP problem in practice, then, we must rely on good heuristic methods. We propose two new methods, one based on a discrete form of gradient descent, the other on simulated annealing, and report on a series of empirical tests comparing these and the other known algorithms for the problem. Based on this study, the first to be conducted, we identify the best approaches as a function of available computation time."}
{"_id": "19db6ae591a0f8543680abe3e688b6033b685473", "title": "Mapping Agricultural Ecosystem Services across Scales", "text": "Given the cross-scale interactions of agricultural ecosystems, it is important to collect ecosystem service data at the multiple spatial scales they operate at. Mapping of ecosystem services helps to assess their spatial and temporal distribution and is a popular communication tool of their availability and value. For example, maps can be used to quantify distance between areas of available ecosystem services and their beneficiaries and how services fluctuate with changes in land use patterns over time, allowing identification of synergies and trade-offs. However, a lack of local context and too large a resolution can reduce the utility of these maps, whilst masking heterogeneities in access due to equity dynamics. This review identifies and summarizes eight main methods of ESS mapping found in the literature \u2013 remote sensing, biophysical modelling, agent based modelling, economic valuation, expert opinion, user preference, participatory mapping, and photo-elicitation. We consider what spatial scales these methods are utilized at and the transferability of data created by each method. The analysis concludes with a methodological framework for mapping ecosystem services, intended to help researchers identify appropriate methods for a multi-scale research design. The framework is exemplified with an overview of a research project in Ethiopia."}
{"_id": "c48b689acb424be665b6dfdafd4963f72df0c696", "title": "Evidence for physiotherapy practice: a survey of the Physiotherapy Evidence Database (PEDro).", "text": "Evidence-based practice involves the use of evidence from systematic reviews and randomised controlled trials, but the extent of this evidence in physiotherapy has not previously been surveyed. The aim of this survey is to describe the quantity and quality of randomised controlled trials and the quantity of systematic reviews relevant to physiotherapy. The Physiotherapy Evidence Database (PEDro) was searched. The quality of trials was assessed with the PEDro scale. The search identified a total of 2,376 randomised controlled trials and 332 systematic reviews. The first trial was published in 1955 and the first review was published in 1982. Since that time, the number of trials and reviews has grown exponentially. The mean PEDro quality score has increased from 2.8 in trials published between 1955 and 1959 to 5.0 for trials published between 1995 and 1999. There is a substantial body of evidence about the effects of physiotherapy. However, there remains scope for improvements in the quality of the conduct and reporting of clinical trials."}
{"_id": "3d8169438f7a423df1b6008caacc63c80ef54cca", "title": "A Guided Tour Puzzle for Denial of Service Prevention", "text": "Various cryptographic puzzle schemes are proposed as a defense mechanism against denial of service attack. But, all these puzzle schemes face a dilemma when there is a large disparity between the computational power of attackers and legitimate clients: increasing the difficulty of puzzles might unnecessarily restrict legitimate clients too much, and lower difficulty puzzles cannot sufficiently block attackers with large computational resources. In this paper, we introduce guided tour puzzle, a novel puzzle scheme that is not affected by such resource disparity. A guided tour puzzle requires a client to visit a predefined set of nodes, called tour guides, in a certain sequential order to retrieve an n piece answer, one piece from each tour guide that appears in the tour. This puzzle solving process is non-parallelizable, thus cheating by trying to solve the puzzle in parallel is not possible. Guided tour puzzle not only achieves all previously defined desired properties of a cryptographic puzzle scheme, but it also satisfies more important requirements, such as puzzle fairness and minimum interference, that we identified. The number of tour guides required by the scheme can be as few as two, and this extra cost can be amortized by sharing the same set of tour guides among multiple servers."}
{"_id": "d45f16e1003d0f9122c1a27cbbb6d1e094d24a4e", "title": "On-Chip Compensation of Ring VCO Oscillation Frequency Changes Due to Supply Noise and Process Variation", "text": "A novel circuit technique that stabilizes the oscillation frequency of a ring-type voltage-controlled oscillator (RVCO) is demonstrated. The technique uses on-chip bias-current and voltage-swing controllers, which compensate RVCO oscillation frequency changes caused by supply noise and process variation. A prototype phase-locked loop (PLL) having the RVCO with the compensation circuit is fabricated with 0.13-\u03bcm CMOS technology. At the operating frequency of 4 GHz, the measured PLL rms jitter improves from 20.11 to 5.78 ps with 4-MHz RVCO supply noise. Simulation results show that the oscillation frequency difference between FF and SS corner is reduced from 63% to 6% of the NN corner oscillation frequency."}
{"_id": "6430bcfc57e7ff285e55fb9dd5b7a2c5a2451be8", "title": "Learning Bayesian Belief Network Classifiers: Algorithms and System", "text": "This paper investigates the methods for learning Bayesian belief network (BN) based predictive models for classification. Our primary interests are in the unrestricted Bayesian network and Bayesian multi-net based classifiers. We present our algorithms for learning these classifiers and also the methods for fighting the overfitting problem. A natural method for feature subset selection is also studied. Using a set of standard classification problems, we empirically evaluate the performance of various BN based classifiers. The results show that the proposed BN and Bayes multi-net classifiers are competitive with (or superior to) the best known classifiers, based on both BN and other formalisms; and that the computational time for learning and using these classifiers is relatively small. We also briefly introduce our BN classifier learning system \u2013 BN PowerPredictor. We argue that BN based classifiers deserve more attention in the data mining community."}
{"_id": "e9b7367c63ba970cc9a0360116b160dbe1eb1bb4", "title": "Programmatically Interpretable Reinforcement Learning", "text": "We present a reinforcement learning framework, called Programmatically Interpretable Reinforcement Learning (PIRL), that is designed to generate interpretable and verifiable agent policies. Unlike the popular Deep Reinforcement Learning (DRL) paradigm, which represents policies by neural networks, PIRL represents policies using a high-level, domain-specific programming language. Such programmatic policies have the benefits of being more easily interpreted than neural networks, and being amenable to verification by symbolic methods. We propose a new method, called Neurally Directed Program Search (NDPS), for solving the challenging nonsmooth optimization problem of finding a programmatic policy with maximal reward. NDPS works by first learning a neural policy network using DRL, and then performing a local search over programmatic policies that seeks to minimize a distance from this neural \u201coracle\u201d. We evaluate NDPS on the task of learning to drive a simulated car in the TORCS carracing environment. We demonstrate that NDPS is able to discover human-readable policies that pass some significant performance bars. We also show that PIRL policies can have smoother trajectories, and can be more easily transferred to environments not encountered during training, than corresponding policies discovered by DRL."}
{"_id": "787c7abd9a00c3700e994b253afa5e9aa07a38ab", "title": "Estimator initialization in vision-aided inertial navigation with unknown camera-IMU calibration", "text": "This paper focuses on motion estimation using inertial measurements and observations of naturally occurring point features. To date, this task has primarily been addressed using filtering methods, which track the system state starting from known initial conditions. However, when no prior knowledge of the initial system state is available, (e.g., at the onset of the system's operation), the existing approaches are not applicable. To address this problem, in this work we present algorithms for computing the system's observable quantities (platform attitude and velocity, feature positions, and IMU-camera calibration) directly from the sensor measurements, without any prior knowledge. A key contribution of this work is a convex-optimization based algorithm for computing the rotation matrix between the camera and IMU. We show that once this rotation matrix has been computed, all remaining quantities can be determined by solving a quadratically constrained least-squares problem. To increase their accuracy, the initial estimates are refined by an iterative maximum-likelihood estimator."}
{"_id": "78cdec0c6779b513daa84fe5b9ec15db966e21a6", "title": "LKIM : The Linux Kernel Integrity", "text": "509 he Linux Kernel Integrity Measurer (LKIM) is a next-generation technology for the detection of malicious modifications to a running piece of software. Unlike traditional antivirus systems, LKIM does not rely on a database of known malware signatures; instead, LKIM uses a precise model of expected program behavior to verify the consistency of critical data structures at runtime. APL and the Research Directorate of the National Security Agency (NSA) developed the LKIM prototype and are now working to transition the technology to a variety of critical government applications. LKIM: The Linux Kernel Integrity Measurer"}
{"_id": "92f36bf25e8b71bf3b75abfa47517d9ebfa70f55", "title": "A Wide-Field-of-View Monocentric Light Field Camera", "text": "Light field (LF) capture and processing are important in an expanding range of computer vision applications, offering rich textural and depth information and simplification of conventionally complex tasks. Although LF cameras are commercially available, no existing device offers wide field-of-view (FOV) imaging. This is due in part to the limitations of fisheye lenses, for which a fundamentally constrained entrance pupil diameter severely limits depth sensitivity. In this work we describe a novel, compact optical design that couples a monocentric lens with multiple sensors using microlens arrays, allowing LF capture with an unprecedented FOV. Leveraging capabilities of the LF representation, we propose a novel method for efficiently coupling the spherical lens and planar sensors, replacing expensive and bulky fiber bundles. We construct a single-sensor LF camera prototype, rotating the sensor relative to a fixed main lens to emulate a wide-FOV multi-sensor scenario. Finally, we describe a processing toolchain, including a convenient spherical LF parameterization, and demonstrate depth estimation and post-capture refocus for indoor and outdoor panoramas with 15 x 15 x 1600 x 200 pixels (72 MPix) and a 138° FOV."}
{"_id": "0547d31228bf0347df8490ea92e04cc8dbbc1411", "title": "A lightweight gravity-balanced exoskeleton for home rehabilitation of upper limbs", "text": "Stroke recovery requires intensive and continuous rehabilitation over a long period of time. Access to existing rehabilitation devices is limited to hospitals due to the considerable cost and maintenance. This paper proposes a lightweight, low-cost gravity-balanced exoskeleton for home rehabilitation of upper limbs. Gravity balancing is based on a two-bar mechanism that can fit the periphery of a human arm. A type of planar flexural spring is proposed to achieve the required spring potential energy. This type of spring has a thin cross-section and can be designed and fabricated conveniently. A model is constructed to optimize the geometry and dimension of the exoskeleton. Performance is evaluated through a static analysis. The results show that the proposed exoskeleton is both simpler and lighter than existing passive devices. It can be easily custom-made for different arm masses and sizes. We expect the exoskeleton to be used in both clinical and homecare environments for the arm rehabilitation of stroke patients."}
{"_id": "6f9fbc3973bfc501619277a2b1e7bd0ff06df71f", "title": "Using Risk to Balance Agile and Plan-Driven Methods", "text": "M ethodologies such as Extreme Programming (XP), Scrum, and agile software development promise increased customer satisfaction, lower defect rates, faster development times, and a solution to rapidly changing requirements. Plan-driven approaches such as Cleanroom, the Personal Software Process, or methods based on the Capability Maturity Model promise predictability , stability, and high assurance. However, both agile and planned approaches have situation-dependent shortcomings that, if left unaddressed, can lead to project failure. The challenge is to balance the two approaches to take advantage of their strengths in a given situation while compensating for their weaknesses. We present a risk-based approach for structuring projects to incorporate both agile and plan-driven approaches in proportion to a project's needs. We drew this material from our book, Balancing Agility and Discipline: A Guide for the Perplexed, to be published in 2003. Our method uses risk analysis and a unified process framework to tailor risk-based processes into an overall development strategy. This method enhances the ability of key development team members to understand their environment and organizational capabilities and to identify and collaborate with the project's stakeholders. We use risk analysis to define and address risks particularly associated with agile and plan-driven methods. The Risk-Based Spiral Model Anchor Points 2 provide the framework for this process. Both the Rational Unified Process 3 (RUP) and the Model-Based Architecting and Software Engineering (Mbase) process 4 have adopted these anchor points, which are essentially an integrated set of decision criteria for stakeholder commitment at specific points in the development process. Our method consists of five steps. Step 1 First, we apply risk analysis to specific risk areas associated with agile and plan-driven methods. We identify three specific risk categories: environmental , agile, and plan-driven. While not a simple task, Step 1 provides the basis for making decisions about the development strategy later in the process. If we uncover too much uncertainty about some risk categories, spending resources early to buy information about the project aspects that create the uncertainty may prove prudent. The candidate risks we describe are just that\u2014 candidates for consideration. They are neither complete nor always applicable, but serve as guides to stimulate participants' thinking. Next, we evaluate the risk analysis results to determine if the project at hand is appropriate for either purely agile or purely plan-driven methods. In these cases, the project characteristics fall squarely in the home ground of one approach or A \u2026"}
{"_id": "9f573d557b3e76053736e179ee23bea4c4f2222f", "title": "Why healthcare workers don't wash their hands: a behavioral explanation.", "text": "OBJECTIVE\nTo elucidate behavioral determinants of handwashing among nurses.\n\n\nDESIGN\nStatistical modeling using the Theory of Planned Behavior and relevant components to handwashing behavior by nurses that were derived from focus-group discussions and literature review.\n\n\nSETTING\nThe community and 3 tertiary care hospitals.\n\n\nPARTICIPANTS\nChildren aged 9-10 years, mothers, and nurses.\n\n\nRESULTS\nResponses from 754 nurses were analyzed using backward linear regression for handwashing intention. We reasoned that handwashing results in 2 distinct behavioral practices--inherent handwashing and elective handwashing--with our model explaining 64% and 76%, respectively, of the variance in behavioral intention. Translation of community handwashing behavior to healthcare settings is the predominant driver of all handwashing, both inherent (weighted beta =2.92) and elective (weighted beta =4.1). Intended elective in-hospital handwashing behavior is further significantly predicted by nurses' beliefs in the benefits of the activity (weighted beta =3.12), peer pressure of senior physicians (weighted beta =3.0) and administrators (weighted beta =2.2), and role modeling (weighted beta =3.0) but only to a minimal extent by reduction in effort (weighted beta =1.13). Inherent community behavior (weighted beta =2.92), attitudes (weighted beta =0.84), and peer behavior (weighted beta =1.08) were strongly predictive of inherent handwashing intent.\n\n\nCONCLUSIONS\nA small increase in handwashing adherence may be seen after implementing the use of alcoholic hand rubs, to decrease the effort required to wash hands. However, the facilitation of compliance is not simply related to effort but is highly dependent on altering behavioral perceptions. Thus, introduction of hand rub alone without an associated behavioral modification program is unlikely to induce a sustained increase in hand hygiene compliance."}
{"_id": "5e29ccd8eba6482848c18adda9012bb537419fbb", "title": "Platelet-rich plasma vs hyaluronic acid to treat knee degenerative pathology: study design and preliminary results of a randomized controlled trial", "text": "BACKGROUND\nPlatelet rich plasma (PRP), a blood-derived product rich in growth factors, is a promising treatment for cartilage defects but there is still a lack of clinical evidence. The aim of this study is to show, through a randomized double blind prospective trial, the efficacy of this procedure, by comparing PRP to Hyaluronic Acid (HA) injections for the treatment of knee chondropathy or osteoarthritis (OA).\n\n\nMETHODS\n109 patients (55 treated with HA and 54 with PRP) were treated and evaluated at 12 months of follow-up. The patients were enrolled according to the following inclusion criteria: age > 18 years, history of chronic (at least 4 months) pain or swelling of the knee and imaging findings of degenerative changes of the joint (Kellgren-Lawrence Score up to 3). A cycle of 3 weekly injections was administered blindly. All patients were prospectively evaluated before and at 2, 6, and 12 months after the treatment by: IKDC, EQ-VAS, TEGNER, and KOOS scores. Range of motion and knee circumference changes were measured over time. Adverse events and patient satisfaction were also recorded.\n\n\nRESULTS\nOnly minor adverse events were detected in some patients, such as mild pain and effusion after the injections, in particular in the PRP group, where a significantly higher post-injective pain reaction was observed (p=0.039). At the follow-up evaluations, both groups presented a clinical improvement but the comparison between the two groups showed a not statistically significant difference in all scores evaluated. A trend favorable for the PRP group was only found in patients with low grade articular degeneration (Kellgren-Lawrence score up to 2).\n\n\nCONCLUSIONS\nResults suggest that PRP injections offer a significant clinical improvement up to one year of follow-up. However, conversely to what was shown by the current literature, for middle-aged patients with moderate signs of OA, PRP results were not better than those obtained with HA injections, and thus it should not be considered as first line treatment. More promising results are shown for its use in low grade degeneration, but they still have to be confirmed."}
{"_id": "4f6571c4d7712e5903611055774223257015894c", "title": "CamOdoCal: Automatic intrinsic and extrinsic calibration of a rig with multiple generic cameras and odometry", "text": "Multiple cameras are increasingly prevalent on robotic and human-driven vehicles. These cameras come in a variety of wide-angle, fish-eye, and catadioptric models. Furthermore, wheel odometry is generally available on the vehicles on which the cameras are mounted. For robustness, vision applications tend to use wheel odometry as a strong prior for camera pose estimation, and in these cases, an accurate extrinsic calibration is required in addition to an accurate intrinsic calibration. To date, there is no known work on automatic intrinsic calibration of generic cameras, and more importantly, automatic extrinsic calibration of a rig with multiple generic cameras and odometry. We propose an easy-to-use automated pipeline that handles both intrinsic and extrinsic calibration; we do not assume that there are overlapping fields of view. At the begining, we run an intrinsic calibration for each generic camera. The intrinsic calibration is automatic and requires a chessboard. Subsequently, we run an extrinsic calibration which finds all camera-odometry transforms. The extrinsic calibration is unsupervised, uses natural features, and only requires the vehicle to be driven around for a short time. The intrinsic parameters are optimized in a final bundle adjustment step in the extrinsic calibration. In addition, the pipeline produces a globally-consistent sparse map of landmarks which can be used for visual localization. The pipeline is publicly available as a standalone C++ package."}
{"_id": "8c4098807f2890706fa32ff3e401fad8694ae817", "title": "PSRR of bridge-tied load PWM Class D Amps", "text": "In this paper, the effects of power supply noise, qualified by power supply rejection ratio (PSRR), on two types of bridge-tied load (BTL) pulse width modulation (PWM) class D amps (denoted as Type-I BTL and Type-II BTL respectively) are investigated and the analytical expressions for PSRR of the two designs derived. The derived analytical expressions are verified by means of HSPICE simulations. The relationships derived herein provide good insight to the design of BTL class D amps, including how various parameters may be varied/optimized to meet a given PSRR specification. Furthermore, the PSRR of the two BTL Class D amps are compared against the single-ended class D amp, and the former designs show superior PSRR compared to the latter."}
{"_id": "55c1aacbbbb4655effa3733275104f92b07eb815", "title": "ARMageddon: Cache Attacks on Mobile Devices", "text": "In the last 10 years, cache attacks on Intel x86 CPUs have gained increasing attention among the scientific community and powerful techniques to exploit cache side channels have been developed. However, modern smartphones use one or more multi-core ARM CPUs that have a different cache organization and instruction set than Intel x86 CPUs. So far, no cross-core cache attacks have been demonstrated on non-rooted Android smartphones. In this work, we demonstrate how to solve key challenges to perform the most powerful cross-core cache attacks Prime+Probe, Flush+Reload, Evict+Reload, and Flush+Flush on non-rooted ARM-based devices without any privileges. Based on our techniques, we demonstrate covert channels that outperform state-of-the-art covert channels on Android by several orders of magnitude. Moreover, we present attacks to monitor tap and swipe events as well as keystrokes, and even derive the lengths of words entered on the touchscreen. Eventually, we are the first to attack cryptographic primitives implemented in Java. Our attacks work across CPUs and can even monitor cache activity in the ARM TrustZone from the normal world. The techniques we present can be used to attack hundreds of millions of Android devices."}
{"_id": "70fb3cea8335aefdf849597e9d9dd7512d722d88", "title": "Cache Template Attacks: Automating Attacks on Inclusive Last-Level Caches", "text": "Recent work on cache attacks has shown that CPU caches represent a powerful source of information leakage. However, existing attacks require manual identification of vulnerabilities, i.e., data accesses or instruction execution depending on secret information. In this paper, we present Cache Template Attacks. This generic attack technique allows us to profile and exploit cachebased information leakage of any program automatically, without prior knowledge of specific software versions or even specific system information. Cache Template Attacks can be executed online on a remote system without any prior offline computations or measurements. Cache Template Attacks consist of two phases. In the profiling phase, we determine dependencies between the processing of secret information, e.g., specific key inputs or private keys of cryptographic primitives, and specific cache accesses. In the exploitation phase, we derive the secret values based on observed cache accesses. We illustrate the power of the presented approach in several attacks, but also in a useful application for developers. Among the presented attacks is the application of Cache Template Attacks to infer keystrokes and\u2014even more severe\u2014the identification of specific keys on Linux and Windows user interfaces. More specifically, for lowercase only passwords, we can reduce the entropy per character from log2(26) = 4.7 to 1.4 bits on Linux systems. Furthermore, we perform an automated attack on the Ttable-based AES implementation of OpenSSL that is as efficient as state-of-the-art manual cache attacks."}
{"_id": "86d31093d7e01930cb9da37b4d48fb77ead68449", "title": "TruSpy: Cache Side-Channel Information Leakage from the Secure World on ARM Devices", "text": "As smart, embedded devices are increasingly integrated into our daily life, the security of these devices has become a major concern. The ARM processor family, which powers more than 60% of embedded devices, introduced TrustZone technology to offer security protection via an isolated execution environment called secure world. Caches in TrustZone-enabled processors are extended with a nonsecure (NS) bit to indicate whether a cache line is used by the secure world or the normal world. This cache design improves system performance by eliminating the need to perform cache flush during world switches; however, it also enables cache contention between the two worlds. In this work, we present TruSpy, the first study of timingbased cache side-channel information leakage of TrustZone. Our proposed attack exploits the cache contention between normal world and secure world to recover secret information from secure world. Two attacks are proposed in TruSpy, namely, the normal world OS attack and the normal world Android app attack. In the OS-based attack, the attacker is able to access virtual-to-physical address translation and high precision timers. In the Android app-based attack, these tools are unavailable to the attacker, so we devise a novel method that uses the expected channel statistics to allocate memory for cache probing. We also show how an attacker might use the less accurate performance event interface as a timer. Using the T-table based AES implementation in OpenSSL 1.0.1f as an example, we demonstrate that it is possible for a normal world attacker to steal a fine-grained secret from the secure world using a timing-based cache side-channel. We can recover the full AES encryption key via either the OS-based attack or the Android app-based attack. Since our zero permission TruSpy attack is based on the cache design in TrustZone enabled ARM processors, it poses a significant threat to a wide array of devices. To mitigate the newly discovered threat, we also propose both application-based and system-oriented countermeasures."}
{"_id": "a20af3a9ce01c331a5966d5efb1d71ccf35e877b", "title": "Using GLCM and Gabor filters for classification of PAN images", "text": "In the present research we have used GLCM and Gabor filters to extract texture features in order to classify PAN images. The main drawback of GLCM algorithm is its time-consuming nature. In this work, we proposed a fast GLCM algorithm to overcome the mentioned weakness of traditional GLCM. The fast GLCM is capable of extracting approximately the same features as the traditional GLCM does, but in a really much less time (in the best case, 180 times faster, and in the worst case, 30 times faster). The other weakness of the traditional GLCM is its lower accuracies in the region near the class borders. As Gabor filters are more powerful in border regions, we have tried to combine Gabor features with GLCM features. In this way we would compensate the latter mentioned weakness of GLCM. Experimental results show good capabilities of the proposed fast GLCM and the feature fusion method in classification of PAN images."}
{"_id": "fc2ff011f59498ef613d3f0a756e11b1291b3f2e", "title": "Exploring knowledge sharing in ERP implementation: an organizational culture framework", "text": "This is a multi-site case study of firms that have implemented enterprise resource planning (ERP) systems. It examines eight dimensions of culture and their impact on how ERP implementation teams are able to effectively share knowledge across diverse functions and perspectives during ERP implementation. Through synthesizing the data, we develop a cultural configuration that shows the dimensions of culture that best facilitate knowledge sharing in ERP implementation. The results also indicate ways that firms may overcome cultural barriers to knowledge sharing. A model is developed that demonstrates the link between the dimensions of culture and knowledge sharing during ERP implementation. Possible research questions on which future research can be based are also identified. D 2004 Elsevier B.V. All rights reserved."}
{"_id": "f8edbc3155b3b1d351dc577e57ad4663943d02ea", "title": "External Data Access And Indexing In AsterixDB", "text": "Traditional database systems offer rich query interfaces (SQL) and efficient query execution for data that they store. Recent years have seen the rise of Big Data analytics platforms offering query-based access to \"raw\" external data, e.g., file-resident data (often in HDFS). In this paper, we describe techniques to achieve the qualities offered by DBMSs when accessing external data. This work has been built into Apache AsterixDB, an open source Big Data Management System. We describe how we build distributed indexes over external data, partition external indexes, provide query consistency across access paths, and manage external indexes amidst concurrent activities. We compare the performance of this new AsterixDB capability to an external-only solution (Hive) and to its internally managed data and indexes."}
{"_id": "f71761c4060fb3d7deefce973780ad96ddf47fbf", "title": "Realtime FPGA-based processing unit for a high-resolution automotive MIMO radar platform", "text": "Next generations of high-resolution automotive radar sensors rely on powerful, real-time capable processing units. High sampling rates, large antenna arrays as well as hard real-time constraints require the use of both parallel architectures and high-bandwidth memory interfaces. This paper presents a novel architecture of a FPGA-based multiple-input multiple-output (MIMO) radar sensor processing unit. Sampling rates up to 250 MSPS and a maximum of 16 parallel receiving channels can be used, resulting in a maximum data rate of 56GBit/s. The processing chain consists of a flexible FIR filter, a range-Doppler processing unit using windowed FFTs and an ordered statistics constant-false-alarm-rate (OS-CFAR) unit for optimal target detection and data reduction. The realized target system is composed of a Virtex7-FPGA and a 1GByte SDRAM memory. The resource usage of the FPGA implementation is analyzed in order to provide estimations for future system designs. Finally, the resulting performance of the system is verified in connection with a prototype MIMO radar front-end. High-resolution measurements of moving scenes have been carried out to validate the correct operation of the system."}
{"_id": "a31ecfed1505b34fae0bfd536420baad3bc5ae64", "title": "A tensor-based deep learning framework", "text": "This paper presents an unsupervised deep learning framework that derives spatio-temporal features for human-robot interaction. The respective models extract high-level features from low-level ones through a hierarchical network, viz. the Hierarchical Temporal Memory (HTM), providing at the same time a solution to the curse of dimensionality in shallow techniques. The presented work incorporates the tensor-based framework within the operation of the nodes and, thus, enhances the feature derivation procedure. This is due to the fact that tensors allow the preservation of the initial data format and their respective correlation and, moreover, attain more compact representations. The computational nodes form spatial and temporal groups by exploiting the multilinear algebra, subsequently express the samples according to those groups in terms of proximity. This generic framework may be applied in a diverse of visual data, whilst it has been examined on sequences of color and depth images, exhibiting remarkable performance."}
{"_id": "c6b739711e0379ccb18213c90c2487dbfebc4f2c", "title": "On Resampling Algorithms for Particle Filters", "text": "In this paper a comparison is made between four frequently encountered resampling algorithms for particle filters. A theoretical framework is introduced to be able to understand and explain the differences between the resampling algorithms. This facilitates a comparison of the algorithms with respect to their resampling quality and computational complexity. Using extensive Monte Carlo simulations the theoretical results are verified. It is found that systematic resampling is favourable, both in terms of resampling quality and computational complexity."}
{"_id": "595ff00c3a60cc2879688e0b1d6e71d93ee07d38", "title": "Contributions from research on anger and cognitive dissonance to understanding the motivational functions of asymmetrical frontal brain activity", "text": "Research has suggested that approach-related positive emotions are associated with greater left frontal brain activity and that withdrawal-related negative emotions are associated with greater right frontal brain activity. Different explanations have been proposed. One posits that frontal asymmetry is due to emotional valence (positivity/negativity), one posits that frontal asymmetry is due to motivational direction (approach/withdrawal), and one posits that frontal asymmetry is due to a combination of emotional valence and motivational direction (positive-approach/negative-withdrawal). Because research had confounded emotional valence and motivational direction, the theoretical explanation was muddled. Solely supporting the motivational direction model, recent research has revealed that anger and cognitive dissonance, emotions with negative valence and approach motivational tendencies, are related to relatively greater left frontal activity."}
{"_id": "d154d857d27aa02a1c42ede6213ab7510f15f6e1", "title": "Motives for using Facebook, patterns of Facebook activities, and late adolescents' social adjustment to college.", "text": "Previous studies have confirmed that Facebook, the leading social networking site among young people, facilitates social connections among college students, but the specific activities and motives that foster social adjustment remain unclear. This study examined associations between patterns of Facebook activity, motives for using Facebook, and late adolescents' social adjustment to the college environment. Anonymous self-report survey data from 193 mostly European American students (M age = 20.32; 54 % female) attending a major Midwestern university indicated that motives and activity patterns were associated directly with social adjustment, but the association between one activity, status updating, and social adjustment also was moderated by the motive of relationship maintenance. Findings provide a more comprehensive portrait of how Facebook use may foster or inhibit social adjustment in college."}
{"_id": "177162dee1a16ac3d526f8fcf01b153bf8983975", "title": "The eyeBook \u2013 Using Eye Tracking to Enhance the Reading Experience", "text": "Introduction A rapid development of eye tracking technology has been observed in recent years. Today\u2019s eye trackers can determine the current focus point of the eye precisely while being relatively unobtrusive in their application. Also, a variety of research and commercial groups has been working on this technology, and there is a growing interest for such devices on the market. Eye tracking has great potential and it can be assumed that it will advance further and might become a widespread technology used at a large number of personal or office computer workplaces. Approaches using simple webcams for eye tracking already exist, for example webcams integrated into laptop computers by default. Thus, they allow for new kinds of applications using eye gaze data. However, not only eye tracking technology is advancing rapidly to an easily usable state. Additionally, during the past 100 years researchers gathered a considerable amount of knowledge on eye movements, why and how they occur, and what they might mean. So, today, we have the technology and knowledge for tracking and analyzing eye movements, making an excellent starting point for sophisticated interactive gaze-based applications. Naive approaches where gaze data is directly employed for interacting with the system, e. g., pressing buttons on the screen with the \u201cblink of an eye\u201d generally have serious problems. Because the eyes are organs used for perceiving the world and not for manipulating the world, it is hard and against human nature to control eye movements deliberately. However, a highly promising approach is just to observe eye movements of the user during his or her daily work in front of the computer, to infer user intentions based on eye movement behavior, and to provide assistance where helpful. Gaze can be seen as a proxy for the user\u2019s attention, and eye movements are known to be usually tightly coupled with cognitive processes in the brain, so that a great deal about those processes can be observed by eye tracking. For example, by interpreting eye movements, reading behavior of the user can be detected, which most likely entails cognitive processes of understanding with regard to the currently read text. In this paper we are focusing particularly on reading behavior since reading is probably the most common activity of knowledge workers sitting in front of a computer screen. We present an algorithm for online reading detection based on eye tracking data and introduce an application for assisted and augmented reading called the eyeBook. The idea behind the eyeBook is to create an interactive and entertaining reading experience. The system observes, which text parts are currently being read by the user on the screen and generates appropriate effects such as playing sounds, presenting"}
{"_id": "abee26e68603fb37999b6ecc4f706e18a6a9ad34", "title": "CRM Strategies for A Small-Sized Online Shopping Mall Based on Association Rules and Sequential Patterns", "text": "Data mining has a tremendous contribution to the extraction of knowledge and information which have been hidden in a large volume of data. This study has proposed customer relationship management (CRM) strategies for a small-sized online shopping mall based on association rules and sequential patterns obtained by analyzing the transaction data of the shop. We first defined the VIP customer in terms of recency, frequency and monetary value. Then, we developed a model which classifies customers into VIP or non-VIP, using various techniques such as decision tree, artificial neural network and bagging with each of these as a base classifier. Last, we identified association rules and sequential patterns from the transactions of VIPs, and then these rules and patterns were utilized to propose CRM strategies for the online shopping mall."}
{"_id": "35a30f0cba6da00bcb679c44fd8441c9cdb4f398", "title": "Research of Workflow Access Control Strategy based on Trust", "text": "The traditional workflow access control strategy has often found to be inadequate to detect and restrain malicious behavior effectively. With the aim to solve this problem, this paper presents a new workflow access control model based on trust, and a new access control strategy with an authorization process. This strategy introduces user behavior evaluation, trust computation and role hierarchy into role access control strategy. Through the trust computation of user behavior, it can dynamically adjust user's role and permissions, realizing the dynamic authorization process. Theory analysis and simulation experiments show that this access control strategy is more sensitive in dynamic authorization, and it has fine-grained trust computation. Also this strategy can detect malicious behaviors in time, effectively restraining malicious behavior harming the system so enhancing the security of the system."}
{"_id": "00b1efa6107bee4d69cb50b1d8bbe5a5537ddde6", "title": "Design of Metal Interconnects for Stretchable Electronic Circuits using Finite Element Analysis", "text": "In this work, the design of flexible and stretchable interconnections is presented. These interconnections are done by embedding sinuous electroplated metallic wires in a stretchable substrate material. A silicone material was chosen as substrate because of its low stiffness and high elongation before break. Common metal conductors used in the electronic industry have very limited elastic ranges; therefore a metallization design is crucial to allow stretchability of the conductors going up to 100%. Different configurations were simulated and compared among them and based on these results, a horseshoe like shape was suggested. This design allows a large deformation with the minimum stress concentration. Moreover, the damage in the metal is significantly reduced by applying narrow metallization schemes. In this way, each conductor track has been split in four parallel lines of 15 mum and 15 mum space in order to improve the mechanical performance without limiting the electrical characteristics. Compared with the single copper or gold trace, the calculated stress was reduced up to 10 times."}
{"_id": "98c66f7745916c539456d7d63686d7be5de92ecb", "title": "Using a Graph-based Coherence Model in Document-Level Machine Translation", "text": "Although coherence is an important aspect of any text generation system, it has received little attention in the context of machine translation (MT) so far. We hypothesize that the quality of document-level translation can be improved if MT models take into account the semantic relations among sentences during translation. We integrate the graph-based coherence model proposed by Mesgar and Strube (2016) with Docent1 (Hardmeier et al., 2012; Hardmeier, 2014) a document-level machine translation system. The application of this graph-based coherence modeling approach is novel in the context of machine translation. We evaluate the coherence model and its effects on the quality of the machine translation. The result of our experiments shows that our coherence model slightly improves the quality of translation in terms of the average Meteor score."}
{"_id": "e8ffecd8541f5f7539c51a54aeff66e95886f1c7", "title": "Post-Processing OCR Text using Web-Scale Corpora", "text": "We introduce a (semi-)automatic OCR post-processing system that utilizes web-scale linguistic corpora in providing high-quality correction. This paper is a comprehensive system overview with the focus on the computational procedures, applied linguistic analysis, and processing optimization."}
{"_id": "6f93ccb777dccea26838bfcb74dc27fb7bab6503", "title": "Characteristics of personal space during obstacle circumvention in physical and virtual environments.", "text": "It is not known how the flexible protective zone maintained around oneself during locomotion (personal space or PS; see [G\u00e9rin-Lajoie M, Richards CL, McFadyen BJ. The negotiation of stationary and moving obstructions during walking: anticipatory locomotor adaptations and preservation of personal space. Motor Control 2005;9:242-69]) is modulated with walking speed, whether both sides of the PS are symmetrical, and whether the circumvention of physical and virtual obstructions elicit the same use of such PS. Personal space was measured in ten adults as they circumvented a cylindrical obstacle that was stationary within their path. Both left and right passes were performed at natural self-selected, slow and fast walking speeds. The same circumvention task was also performed at natural speeds in an immersive virtual environment (VE) replicating the same obstruction scenario. The shape and size of PS were maintained across walking speeds, and a smaller PS was generally observed on the dominant side. The general shape and lateral bias of the PS were preserved in the VE while its size was slightly increased. The systematic behavior across walking speeds and types of environment and the lateral bias suggest that PS is used to control navigation. This study deepens our understanding of normal adaptive walking behavior and has implications for the development of better tools for the assessment and retraining of locomotor capacity in different populations, from people with walking deficits to elite athletes. Since the PS behavior was shown to be robust in the VE used for this study, the virtual reality technology is proposed as a promising platform for the development of such assessment and retraining applications."}
{"_id": "2fb19f33df18e6975653b7574ab4c897d9b6ba06", "title": "Multi-scale retinex for color image enhancement", "text": "The retinex is a human perception-based image processing algorithm which provides color constancy and dynamic range compression. We have previously reported on a single-scale retinex (SSR) and shown that it can either achieve color/lightness rendition or dynamic range compression, but not both simultaneously. We now present a multi-scale retinex (MSR) which overcomes this limitation for most scenes. Both color rendition and dynamic range compression are successfully accomplished except for some \\patho-logical\" scenes that have very strong spectral characteristics in a single band."}
{"_id": "f34677c65a9d3778517c0df4637564a61c28464b", "title": "Extracting lines of curvature from noisy point clouds", "text": "We present a robust framework for extracting lines of curvature from point clouds. First, we show a novel approach to denoising the input point cloud using robust statistical estimates of surface normal and curvature which automatically rejects outliers and corrects points by energy minimization. Then the lines of curvature are constructed on the point cloud with controllable density. Our approach is applicable to surfaces of arbitrary genus, with or without boundaries, and is statistically robust to noise and outliers while preserving sharp surface features. We show our approach to be effective over a range of synthetic and real-world input datasets with varying amounts of noise and outliers. The extraction of curvature information can benefit many applications in CAD, computer vision and graphics for point cloud shape analysis, recognition and segmentation. Here, we show the possibility of using the lines of curvature for feature-preserving mesh construction directly from noisy point clouds."}
{"_id": "26df186db79e927fb8099c52fac56d812b2bb4c6", "title": "Visual Interactive Creation, Customization, and Analysis of Data Quality Metrics", "text": "During data preprocessing, analysts spend a significant part of their time and effort profiling the quality of the data along with cleansing and transforming the data for further analysis. While quality metrics\u2014ranging from general to domain-specific measures\u2014support assessment of the quality of a dataset, there are hardly any approaches to visually support the analyst in customizing and applying such metrics. Yet, visual approaches could facilitate users\u2019 involvement in data quality assessment. We present MetricDoc, an interactive environment for assessing data quality that provides customizable, reusable quality metrics in combination with immediate visual feedback. Moreover, we provide an overview visualization of these quality metrics along with error visualizations that facilitate interactive navigation of the data to determine the causes of quality issues present in the data. In this article, we describe the architecture, design, and evaluation of MetricDoc, which underwent several design cycles, including heuristic evaluation and expert reviews as well as a focus group with data quality, human-computer interaction, and visual analytics experts."}
{"_id": "a4ba13f755adffe70d4c319c396bc369cc9e13b1", "title": "Using Features From Pre-trained TimeNET For Clinical Predictions", "text": "Predictive models based on Recurrent Neural Networks (RNNs) for clinical time series have been successfully used for various tasks such as phenotyping, in-hospital mortality prediction, and diagnostics. However, RNNs require large labeled data for training and are computationally expensive to train. Pre-training a network for some supervised or unsupervised tasks on a dataset, and then fine-tuning via transfer learning for a related end-task can be an efficient way to leverage deep models for scenarios that lack in either computational resources or labeled data, or both. In this work, we consider an approach to leverage a deep RNN \u2013 namely TimeNet [Malhotra et al., 2017] \u2013 that is pre-trained on a large number of diverse publicly available time-series from UCR Repository [Chen et al., 2015]. TimeNet maps varyinglength time series to fixed-dimensional feature vectors and acts as an off-the-shelf feature extractor. TimeNet-based approach overcome the need for hand-crafted features, and allows for use of traditional easy-to-train and interpretable linear models for the end-task, while still leveraging the features from a deep neural network. Empirical evaluation of the proposed approach on MIMIC-III1 data suggests promising direction for future exploration: our results are comparable to existing benchmarks while our models require lesser training and hyperparameter tuning effort."}
{"_id": "25d7e0385edeadc378f73138a9a8f5da594a1d76", "title": "\"You've Got E-Mail!\" ... Shall I Deal With It Now? Electronic Mail From the Recipient's Perspective", "text": "This article considers the nature of e-mail from the recipient\u2019s perspective\u2014what the seeminglyfreeandeasycommunicationreallycosts therecipient. Informationgathered by electronic monitoring software is shown to be at odds with the results of an online survey of e-mail users\u2019 perceptions of their e-mail experience\u2014users drastically underestimate the disruptive effects of e-mail. The conclusion is that the constant monitoring of e-mail actually reduces productivity and that there is a need for increased power, control,andawarenessonthepartof thee-mail recipient toensure thate-mail remainsatool rather than a tyrant. It is necesssary to alert the user of the true cost of e-mail alerts."}
{"_id": "48d103f81e9b70dc2de82508973bc35f61f8ed01", "title": "Future developments trend for Ku and Ka antenna for satcom on the move", "text": "This document presents the state-of-the-art of TeS Ku band antennas for mobile satellite communications on board high speed trains and ground vehicles, and his evolution in terms of Ku band antenna performances improvement and upgrade to Ka band terminals."}
{"_id": "7294d9fa5c5524a43619ad7e52d132a90cfe91bb", "title": "Ka-Band Phased Array Antenna for High-Data-Rate SATCOM", "text": "The general issue of this letter deals with the design of a phased array antenna for high-data-rate SATCOM. A final demonstrator antenna could be installed on an unmanned aerial vehicle (UAV) to communicate with a satellite in Ka-band. First, a compact reflection-type phase shifter is designed and realized. Second, the conception of a phased array antenna prototype is detailed. Third, a new calibration method is involved that can provide the bias voltage to be applied to each phase shifter in order to scan the beam in the desired direction."}
{"_id": "b40b8a2b528a88f45bba5ecd23cec02840798e72", "title": "Dual-band leaky-wave antenna based on a dual-layer frequency selective surface for bi-directional satcom-on-the-move in Ka-band", "text": "A 2D-periodic leaky-wave antenna for a Ka-band satcom-on-the-move ground user terminal is presented. The antenna panel operates at the 20 GHz downlink as well as the 30 GHz uplink bands with the respective circular polarizations, using a common radiation aperture and a common phase centre. The dual-band performance is achieved by a carefully designed stacked dual-layer frequency selective surface, with one layer operating at 20 GHz and being transparent at 30 GHz, and the second layer acting vice versa. The paper describes the design of the circularly polarized primary feed, the dual-layer structures, and the complete compact leaky-wave antenna panel. The measured radiation performance reveals realized-gain values above 22 dBi and efficiencies above 60%. The cross-polarization discrimination and sidelobe level are suitable to meet the power spectral requirements for satellite communications at Ka-band."}
{"_id": "55ac65d1fed89dcebb1f60a954f00e8a6682434b", "title": "Simulation of brake by wire system with dynamic force control", "text": "By wire technology is recently developed to improve the reliability, safety, and performance of vehicular drive technology. Brake system is the most important control system for vehicle safety. By wire technology development has encouraged the development of brake by wire systems to reduce traditional mechanical and hydraulic systems usage in automobiles. This paper proposes a novel brake by wire controller that uses a reaction force based bilateral motor controlling method. The proposed system uses two linear actuators with disturbance observer and reaction force observers to provide pedal force amplification and pedal retraction capabilities. The system includes a force controller to provide pedal feel to drivers. Electro mechanical brake position control is used to provide the brake force. The proposed system is simulated for different conditions to measure the performance and robustness. The simulation results provide evidence for robustness, force amplification, and pedal and brake retraction capabilities of the system."}
{"_id": "af0eb0860e44d2e766398f1d0894d4f2d2b08a95", "title": "Linguistic Annotation in/for Corpus Linguistics", "text": "This article surveys linguistic annotation in corpora and corpus linguistics.Wefirst define the concept of \u2018corpus\u2019 as a radial category and then, in Sect. 2, discuss a variety of kinds of information for which corpora are annotated and that are exploited in contemporary corpus linguistics. Section3 then exemplifies many current formats of annotation with an eye to highlighting both the diversity of formats currently available and the emergence of XML annotation as, for now, the most widespread form of annotation. Section4 summarizes and concludes with desiderata for future developments."}
{"_id": "80ff6ad15a7b571f9577058c49a935b44baa0db1", "title": "FTP-SC: Fuzzy Topology Preserving Stroke Correspondence", "text": "Stroke correspondence construction is a precondition for vectorized 2D animation inbetweening and remains a challenging problem. This paper introduces the FTP-SC, a fuzzy topology preserving stroke correspondence technique, which is accurate and provides the user more effective control on the correspondence result than previous matching approaches. The method employs a two-stage scheme to progressively establish the stroke correspondence construction between the keyframes. In the first stage, the stroke correspondences with high confidence are constructed by enforcing the preservation of the so-called \u201cfuzzy topology\u201d which encodes intrinsic connectivity among the neighboring strokes. Starting with the high-confidence correspondences, the second stage performs a greedy matching algorithm to generate a full correspondence between the strokes. Experimental results show that the FTP-SC outperforms the existing approaches and can establish the stroke correspondence with a reasonable amount of user interaction even for keyframes with large geometric and spatial variations between strokes. CCS Concepts \u2022Computing methodologies \u2192 Animation; Parametric curve and surface models;"}
{"_id": "5b5a2917a43ff430f6810ec069ddddb135885373", "title": "Dynamic analysis and control of lego mindstorms NXT bicycle", "text": "The paper is inspired from a challenging experiment conducted with undergraduate students for bachelor project within Automatic Control and Computer Engineering Faculty, Ias\u0327i. A bicycle type system was implemented, analyzed and some control strategies were implemented and tested in order to self-stabilize the system. The bicycle study is enjoyable both to undergraduate students and researchers because it uses a \u201chi-tech\u201d relevant concrete example. This work encompasses and uses a full range of system-theoretic aspects which covers knowledge from analytical, computational and experimental point of view. The dynamic description of the bicycle is done using a set of dynamic equations that describes the location and orientation of the body, considering four rigid subsystems that form the bicycle. The control issues were conducted in order to follow a desired forward speed using classical PI controller, followed by the LQR approach for steering angle."}
{"_id": "3848d8181efd4a504dfa02b531b7713ab4a2573a", "title": "Prevalence and predictors of internet bullying.", "text": "PURPOSE\nWith the Internet quickly becoming a new arena for social interaction, it has also become a growing venue for bullying among youth. The purpose of the present study was to contrast the prevalence of Internet bullying with physical and verbal bullying among elementary, middle, and high school boys and girls, and to examine whether key predictors of physical and verbal bullying also predicted Internet bullying.\n\n\nMETHODS\nAs part of an ongoing, statewide bullying prevention initiative in Colorado, 3,339 youth in Grades 5, 8, and 11 completed questionnaires in 78 school sites during the fall of 2005, and another 2,293 youth in that original sample participated in a follow-up survey in 65 school sites in the spring of 2006. Questionnaires included measures of bullying perpetration and victimization, normative beliefs about bullying, perceptions of peer social support, and perceptions of school climate.\n\n\nRESULTS\nThe highest prevalence rates were found for verbal, followed by physical, and then by Internet bullying. Physical and Internet bullying peaked in middle school and declined in high school. Verbal bullying peaked in middle school and remained relatively high during high school. Males were more likely to report physical bullying than females, but no gender differences were found for Internet and verbal bullying. All three types of bullying were significantly related to normative beliefs approving of bullying, negative school climate, and negative peer support.\n\n\nCONCLUSIONS\nPreventive interventions that target school bullying by changing norms about bullying and school context may also impact Internet bullying, given the shared predictors."}
{"_id": "5a7abdeed65d52049d04b6daed74e989e864e9d0", "title": "Supply Chain Optimization within Aviation MRO", "text": "Maintenance Repair and Operations (MRO) supply chain not only provides one of the best opportunities to reduce cost of operation and increase productivity but also offers complex challenges such as end to end integrated planning, increased availability of assets, inventory optimization and effective spend management. This paper gives an overview of the challenges faced in MRO supply chains and highlights some of the approaches to meet these challenges, drive efficiencies and optimize the entire supply chain. KeywordsERP; Supply chain, MRO, Aviation,"}
{"_id": "5d73ed46f7d87d91ca077481ce1ec7bcd3eafec8", "title": "Role of culture in gambling and problem gambling.", "text": "There has been a significant gap in the gambling literature regarding the role of culture in gambling and problem gambling (PG). This paper aims to this such gap by presenting a systematic review of the cultural variations in gambling and PG as well as a discussion of the role cultural variables can play in the initiation and maintenance of gambling in order to stimulate further research. The review shows that although studies investigating prevalence rates of gambling and PG among different cultures are not plentiful, evidence does suggest certain cultural groups are more vulnerable to begin gambling and to develop PG. Significant factors including familial/genetic, sociological, and individual factors have been found in the Western gambling literature as playing important roles in the development and maintenance of PG. These factors need to be examined now in other cultural groups so we can better understand the etiological processes involved in PG and design culturally sensitive treatments. In addition, variables, such as cultural values and beliefs, the process of acculturation, and the influence of culturally determined, help-seeking behaviors need to be also examined in relation to the role they could play in the initiation of and maintenance of gambling. Understanding the contribution of cultural variables will allow us to devise better prevention and treatment options for PG. Methodological problems in this area of research are highlighted, and suggestions for future research are included."}
{"_id": "3880be8db2be1b87e8a13f992d7b402bf78cae6d", "title": "Survey on UAV Cellular Communications: Practical Aspects, Standardization Advancements, Regulation, and Security Challenges", "text": "The rapid growth of consumer Unmanned Aerial Vehicles (UAVs) is creating promising new business opportunities for cellular operators. On the one hand, UAVs can be connected to cellular networks as new types of user equipment, therefore generating significant revenues for the operators that guarantee their stringent service requirements. On the other hand, UAVs offer the unprecedented opportunity to realize UAV-mounted flying base stations that can dynamically reposition themselves to boost coverage, spectral efficiency and user quality of experience. Indeed, the standards bodies are currently exploring possibilities for serving commercial UAVs with cellular networks. Industries are beginning to trial early prototypes of flying base stations or user equipments, while academia is in full swing researching mathematical and algorithmic solutions to many interesting new problems arising from flying nodes in cellular networks. In this article, we provide a comprehensive survey of all of these developments promoting smooth integration of UAVs in cellular networks. Specifically, we survey the types of consumer UAVs currently available off-the-shelf, the interference issues and potential solutions addressed by standardization bodies for serving aerial users with existing terrestrial base stations, the challenges and opportunities for assisting cellular communications with UAVbased flying relays and base stations, the ongoing prototyping and test bed activities, the new regulations being developed to manage the commercial use of UAVs, and the cyber-physical security of UAV-assisted cellular communications."}
{"_id": "45373921f06a6efebefa6189d2dd80362ab0836e", "title": "Illuminating search spaces by mapping elites", "text": "Nearly all science and engineering fields use search algorithms, which automatically explore a search space to find high-performing solutions: chemists search through the space of molecules to discover new drugs; engineers search for stronger, cheaper, safer designs, scientists search for models that best explain data, etc. The goal of search algorithms has traditionally been to return the single highestperforming solution in a search space. Here we describe a new, fundamentally different type of algorithm that is more useful because it provides a holistic view of how highperforming solutions are distributed throughout a search space. It creates a map of high-performing solutions at each point in a space defined by dimensions of variation that a user gets to choose. This Multi-dimensional Archive of Phenotypic Elites (MAP-Elites) algorithm illuminates search spaces, allowing researchers to understand how interesting attributes of solutions combine to affect performance, either positively or, equally of interest, negatively. For example, a drug company may wish to understand how performance changes as the size of molecules and their costto-produce vary. MAP-Elites produces a large diversity of high-performing, yet qualitatively different solutions, which can be more helpful than a single, high-performing solution. Interestingly, because MAP-Elites explores more of the search space, it also tends to find a better overall solution than state-of-the-art search algorithms. We demonstrate the benefits of this new algorithm in three different problem domains ranging from producing modular neural networks to designing simulated and real soft robots. Because MAPElites (1) illuminates the relationship between performance and dimensions of interest in solutions, (2) returns a set of high-performing, yet diverse solutions, and (3) improves the state-of-the-art for finding a single, best solution, it will catalyze advances throughout all science and engineering fields."}
{"_id": "c5842100002ade4b163d3350641deeec30decaf3", "title": "Fitting Multiple Connected Ellipses to an Image Silhouette Hierarchically", "text": "In this paper, we seek to fit a model, specified in terms of connected ellipses, to an image silhouette. Some algorithms that have attempted this problem are sensitive to initial guesses and also may converge to a wrong solution when they attempt to minimize the objective function for the entire ellipse structure in one step. We present an algorithm that overcomes these issues. Our first step is to temporarily ignore the connections, and refine the initial guess using unconstrained Expectation-Maximization (EM) for mixture Gaussian densities. Then the ellipses are reconnected linearly. Lastly, we apply the Levenberg-Marquardt algorithm to fine-tune the ellipse shapes to best align with the contour. The fitting is achieved in a hierarchical manner based upon the joints of the model. Experiments show that our algorithm can robustly fit a complex ellipse structure to a corresponding shape for several applications."}
{"_id": "c4df3bbcc011e4f2a53ae0885ac484bd15e40824", "title": "Fast Road Sign Detection Using Hough Transform for Assisted Driving of Road Vehicles", "text": "A system for real-time traffic sign detection is described in this paper. The system uses restricted Hough transform for circumferences in order to detect circular signs, and for straight lines for triangular ones. Some results obtained from a set of real road images captured under both normal and adverse weather conditions are presented as well in order to illustrate the robustness of the detection system. The average processing time is 30 ms per frame, what makes the system a good approach to work in real time conditions."}
{"_id": "e8c2ff800b496f870ae9625ec9d2f549119bbfa7", "title": "Classification of Ulcerative Colitis Severity in Colonoscopy Videos using CNN", "text": "Ulcerative Colitis (UC) is a chronic inflammatory disease characterized by periods of relapses and remissions affecting more than 500,000 people in the United States. The therapeutic goals of UC are to first induce and then maintain disease remission. However, it is very difficult to evaluate the severity of UC objectively because of non-uniform nature of symptoms and large variations in their patterns. To address this, we already developed an approach using the image textures in our previous work. But, we found that it could not handle larger number of variations in their patterns. In this paper, we propose a different approach using CNN (Convolutional Neural Network) to measure and classify objectively the severity of UC presented in optical colonoscopy video frames. We call the proposed approach using CNN as Ulcerative Colitis Severity CNN (UCS-CNN) which utilizes endoscopic domain knowledge and convolutional neural network to classify different UC severity of colonoscopy images. The experimental results show that the proposed UCS-CNN can evaluate the severity of UC reasonably."}
{"_id": "a80d7094d4dcb5434ca6e618b0902198e16dd311", "title": "Leakiness and creepiness in app space: perceptions of privacy and mobile app use", "text": "Mobile devices are playing an increasingly intimate role in everyday life. However, users can be surprised when informed of the data collection and distribution activities of apps they install. We report on two studies of smartphone users in western European countries, in which users were confronted with app behaviors and their reactions assessed. Users felt their personal space had been violated in \"creepy\" ways. Using Altman's notions of personal space and territoriality, and Nissenbaum's theory of contextual integrity, we account for these emotional reactions and suggest that they point to important underlying issues, even when users continue using apps they find creepy."}
{"_id": "abd38dc07fc8a31aaf5c0812e2f6cde72d4a2757", "title": "REDUNDANT IMUS FOR PRECISE TRAJECTORY DETERMINATION", "text": "A redundant inertial measurement unit (IMU) is an inertial sensing device composed by more than three accelerometers and three gyroscopes. This paper analyses the performance of redundant IMUs and their potential benefits and applications in airborne remote sensing and photogrammetry. The theory of redundant IMUs is presented through two different algorithmic approaches. The first approach is to combine the inertial observations, in the observation space, to generate a \u201csynthetic\u201d non-redundant IMU. The second approach modifies the INS mechanization equations so that they directly account for the observational redundancy. The paper ends with an empirical assesment of the concept. For this purpose, redundant IMU data was generated by combining two IMUs in a non-orthogonal configuration and flying them. Preliminary results of this flight are presented."}
{"_id": "30fed63d76f963d1f9cd6666e202d35753767215", "title": "Geometry Optimize of Printed Circuit Board Stator Winding in Coreless Axial Field Permanent Magnet Motor", "text": "Axial Field Permanent Magnet Motor (AFPM) can be easily integrated into wheel-hub motor of Electrical Vehicle (EV) as impact structure and light weight. Improved hexagonal asymmetry coil is presented in Printed Circuit Board (PCB) stator of coreless AFPM. Geometry parameters of coil shape is optimized. According to the structure characteristics of coreless AFPM, dimension formulas of PCB coil are derived. Analytical calculation models of back-EMF and winding co-efficiency are built up. To maximize ratio of torque to copper loss, coil parameters such as radial length, inclined angle of active conductors etc. are optimized. Finally comparison and verification are performed by means of 3D finite element method and prototype test. The ratio of torque to copper loss is increased 30%. As a result, motor rated torque is also increased at certain motor size, heating load and dissipation condition."}
{"_id": "1f009366a901c403a8aad65c94ec2fecf3428081", "title": "Neural Machine Translation with Gumbel-Greedy Decoding", "text": "Previous neural machine translation models used some heuristic search algorithms (e.g., beam search) in order to avoid solving the maximum a posteriori problem over translation sentences at test phase. In this paper, we propose the GumbelGreedy Decoding which trains a generative network to predict translation under a trained model. We solve such a problem using the Gumbel-Softmax reparameterization, which makes our generative network differentiable and trainable through standard stochastic gradient methods. We empirically demonstrate that our proposed model is effective for generating sequences of discrete words."}
{"_id": "44680ac375502a3f250aea019b7083eca7b92c8b", "title": "Data-Warehousing 3.0 - Die Rolle von Data-Warehouse-Systemen auf Basis von In-Memory Technologie", "text": "In diesem Beitrag widmen wir uns der Frage, welche Rolle aktuelle Trends der Hardund Software f\u00fcr Datenbanksysteme spielen, um als Enabler f\u00fcr neuartige Konzepte im Umfeld des Data-Warehousing zu dienen. Als zentraler Schritt der Evolution im Kontext des Data-Warehousing wird dabei die enge Kopplung zu operativen Systemen gesehen, um eine direkte R\u00fcckkopplung bzw. Einbettung in operationale Gesch\u00e4ftsprozesse zu realisieren. In diesem Papier diskutieren wir die Fragen, wie In-Memory-Technologie das Konzept von Echtzeit-DWH-Systemen unterst\u00fctzt bzw. erm\u00f6glicht. Dazu stellen wir zum einen eine Referenzarchitektur f\u00fcr DWH-Systeme vor, die insbesondere pushund pullbasierte Datenversorgung ber\u00fccksichtigt. Zum anderen diskutieren wir die konkrete Rolle von In-Memory-Systemen mit Blick auf konkrete Aspekte wie der Frage optionaler Persistenzschichten, Reduktion der Batchgr\u00f6\u00dfe, Positionierung von In-Memory-Techniken f\u00fcr den Aufbau eines Corporate Memorys und die schnelle Bereitstellung externer Datenbest\u00e4nde zur Unterst\u00fctzung situativer BISzenarien."}
{"_id": "196fd77d4d49295c5dbe8df5af0afa43880a836e", "title": "Relation extraction and the influence of automatic named-entity recognition", "text": "We present an approach for extracting relations between named entities from natural language documents. The approach is based solely on shallow linguistic processing, such as tokenization, sentence splitting, part-of-speech tagging, and lemmatization. It uses a combination of kernel functions to integrate two different information sources: (i) the whole sentence where the relation appears, and (ii) the local contexts around the interacting entities. We present the results of experiments on extracting five different types of relations from a dataset of newswire documents and show that each information source provides a useful contribution to the recognition task. Usually the combined kernel significantly increases the precision with respect to the basic kernels, sometimes at the cost of a slightly lower recall. Moreover, we performed a set of experiments to assess the influence of the accuracy of named-entity recognition on the performance of the relation-extraction algorithm. Such experiments were performed using both the correct named entities (i.e., those manually annotated in the corpus) and the noisy named entities (i.e., those produced by a machine learning-based named-entity recognizer). The results show that our approach significantly improves the previous results obtained on the same dataset."}
{"_id": "04f6a5dc6c2aac0586f8f1e83b434ea96fffcd66", "title": "Fast userspace packet processing", "text": "In recent years, we have witnessed the emergence of high speed packet I/O frameworks, bringing unprecedented network performance to userspace. Using the Click modular router, we first review and quantitatively compare several such packet I/O frameworks, showing their superiority to kernel-based forwarding. We then reconsider the issue of software packet processing, in the context of modern commodity hardware with hardware multi-queues, multi-core processors and non-uniform memory access. Through a combination of existing techniques and improvements of our own, we derive modern general principles for the design of software packet processors. Our implementation of a fast packet processor framework, integrating a faster Click with both Netmap and DPDK, exhibits up-to about 2.3x speed-up compared to other software implementations, when used as an IP router."}
{"_id": "77a86695df168a010c01cec1f3ffa1e4b0e17ac8", "title": "Combining content with user preferences for non-fiction multimedia recommendation: a study on TED lectures", "text": "This paper introduces a new dataset and compares several methods for the recommendation of non-fiction audio visual material, namely lectures from the TED website. The TED dataset contains 1,149 talks and 69,023 profiles of users, who have made more than 100,000 ratings and 200,000 comments. The corresponding metadata, which we make available, can be used for training and testing generic or personalized recommender systems. We define content-based, collaborative, and combined recommendation methods for TED lectures and use cross-validation to select the best parameters of keyword-based (TFIDF) and semantic vector space-based methods (LSI, LDA, RP, and ESA). We compare these methods on a personalized recommendation task in two settings, a cold-start and a non-cold-start one. In the cold-start setting, semantic vector spaces perform better than keywords. In the non-cold-start setting, where collaborative information can be exploited, content-based methods are outperformed by collaborative filtering ones, but the proposed combined method shows acceptable performances, and can be used in both settings. For the generic recommendation task, LSI and RP again outperform TF-IDF."}
{"_id": "8e49caba006e1832a70162f3a93a31be25927349", "title": "Cognitive radar: a way of the future", "text": "This article discusses a new idea called cognitive radar. Three ingredients are basic to the constitution of cognitive radar: 1) intelligent signal processing, which builds on learning through interactions of the radar with the surrounding environment; 2) feedback from the receiver to the transmitter, which is a facilitator of intelligence; and 3) preservation of the information content of radar returns, which is realized by the Bayesian approach to target detection through tracking. All three of these ingredients feature in the echo-location system of a bat, which may be viewed as a physical realization (albeit in neurobiological terms) of cognitive radar. Radar is a remote-sensing system that is widely used for surveillance, tracking, and imaging applications, for both civilian and military needs. In this article, we focus on future possibilities of radar with particular emphasis on the issue of cognition. As an illustrative case study along the way, we consider the problem of radar surveillance applied to an ocean environment."}
{"_id": "f584c3d1c2d0e4baa7f8fc72fcab7b9395970ef5", "title": "Dynamic programming.", "text": "Little has been done in the study of these intriguing questions, and I do not wish to give the impression that any extensive set of ideas exists that could be called a \"theory.\" What is quite surprising, as far as the histories of science and philosophy are concerned, is that the major impetus for the fantastic growth of interest in brain processes, both psychological and physiological, has come from a device, a machine, the digital computer. In dealing with a human being and a human society, we enjoy the luxury of being irrational, illogical, inconsistent, and incomplete, and yet of coping. In operating a computer, we must meet the rigorous requirements for detailed instructions and absolute precision. If we understood the ability of the human mind to make effective decisions when confronted by complexity, uncertainty, and irrationality then we could use computers a million times more effectively than we do. Recognition of this fact has been a motivation for the spurt of research in the field of neurophysiology. The more we study the information processing aspects of the mind, the more perplexed and impressed we become. It will be a very long time before we understand these processes sufficiently to reproduce them. In any case, the mathematician sees hundreds and thousands of formidable new problems in dozens of blossoming areas, puzzles galore, and challenges to his heart's content. He may never resolve some of these, but he will never be bored. What more can he ask?"}
{"_id": "abd57a2ce8987ff5e809984c62cc06effa6ab836", "title": "Do Personal Ethics Influence Corporate Ethics ?", "text": "In this paper, we introduce a new measure of personal ethics in the form of marital cheating to examine the relationship between personal ethics and corporate misconduct. Firms with CEOs and CFOs who use a marital infidelity website are more than twice as likely to engage in two forms of corporate misconduct. The relationship is not explained by a wide range of regional, firm, and executive characteristics or by the infidelity website usage of other executives. Additionally, white-collar SEC defendants also have elevated levels of infidelity website usage. Our findings suggest that personal and professional ethics are closely related. (JEL G30, M14)"}
{"_id": "76cc7d1b3824e9cf810e3a7b85d4e53007ffc523", "title": "IGBT and Diode Behavior During Short-Circuit Type 3", "text": "A short circuit during inverter operation can result in the so-called short-circuit types 2 and 3. The short-circuit type 3 results in an interaction with several current commutations between the Insulate-Gate Bipolar Transistor (IGBT) and the antiparallel diode. The switched ON IGBT, conducting no current before the short circuit occurs, has no plasma inside and can block any voltage. It behaves like a voltage-controlled current source, which can stress the diode by commutating the complete short-circuit current into the diode. An avalanche arises inside the diode, which can lead to a destruction of the diode. The current through the diode determines dv/dt and makes the diode a dominating semiconductor during the short circuit. The current source behavior of the IGBT can be verified with semiconductor simulations and measurements on a high-power 3.3-kV IGBT. In the simulations, the IGBT is replaced by a current source, which gives similar results. For the measurements, the gate-emitter voltage is varied to change the IGBT current without influencing the collector-emitter voltage."}
{"_id": "53574d7884904866221cdc033da1d645a3bddd1a", "title": "Educational data mining and its role in determining factors affecting students academic performance: A systematic review", "text": "Education helps people develop as individuals. It not only helps build social skills but also enhances the problem solving and decision making skills of individuals. With the growing number of schools, colleges and universities around the globe, education now has taken new dimension. The main focus of higher Educational Institutes is to improve the overall quality and effectiveness of education. Predicting the performance of student and Predicting the Future in this world of Big Data is the need of Today in Education. The crude data from educational organizations act as a gold mine which facilitate the analysts and designers to make a system that will improve the overall teaching and learning process. This new field coined as Educational Data mining helps in Predicting the Future and Changing the Future. In this paper we have studied what is Educational Data Mining and what are the Application areas of EDM, various techniques of EDM and the factors affecting the students' Academic Performance and the Teaching-Learning process."}
{"_id": "3fd29f96b58e0355a78d5821ed19359442b4e9e0", "title": "Maternal Control of Child Feeding During the Weaning Period: Differences Between Mothers Following a Baby-led or Standard Weaning Approach", "text": "A controlling maternal feeding style has been shown to have a negative impact on child eating style and weight in children over the age of 12\u00a0months. The current study explores maternal feeding style during the period of 6\u201312\u00a0months when infants are introduced to complementary foods. Specifically it examines differences between mothers who choose to follow a traditional weaning approach using spoon feeding and pure\u00e9s to mothers following a baby-led approach where infants are allowed to self feed foods in their solid form. Seven hundred and two mothers with an infant aged 6\u201312\u00a0months provided information regarding weaning approach alongside completing the Child Feeding Questionnaire. Information regarding infant weight and perceived size was also collected. Mothers following a baby-led feeding style reported significantly lower levels of restriction, pressure to eat, monitoring and concern over child weight compared to mothers following a standard weaning response. No association was seen between weaning style and infant weight or perceived size. A baby-led weaning style was associated with a maternal feeding style which is low in control. This could potentially have a positive impact upon later child weight and eating style. However due to the cross sectional nature of the study it cannot be ascertained whether baby-led weaning encourages a feeding style which is low in control to develop or whether mothers who are low in control choose to follow a baby-led weaning style."}
{"_id": "edbb8cce0b813d3291cae4088914ad3199736aa0", "title": "Efficient Subspace Segmentation via Quadratic Programming", "text": "We explore in this paper efficient algorithmic solutions to robust subspace segmentation. We propose the SSQP, namely Subspace Segmentation via Quadratic Programming, to partition data drawn from multiple subspaces into multiple clusters. The basic idea of SSQP is to express each datum as the linear combination of other data regularized by an overall term targeting zero reconstruction coefficients over vectors from different subspaces. The derived coefficient matrix by solving a quadratic programming problem is taken as an affinity matrix, upon which spectral clustering is applied to obtain the ultimate segmentation result. Similar to sparse subspace clustering (SCC) and low-rank representation (LRR), SSQP is robust to data noises as validated by experiments on toy data. Experiments on Hopkins 155 database show that SSQP can achieve competitive accuracy as SCC and LRR in segmenting affine subspaces, while experimental results on the Extended Yale Face Database B demonstrate SSQP\u2019s superiority over SCC and LRR. Beyond segmentation accuracy, all experiments show that SSQP is much faster than both SSC and LRR in the practice of subspace segmentation."}
{"_id": "276285e996c08a26706979be838fd94abae2fed9", "title": "Locality-sensitive hashing scheme based on dynamic collision counting", "text": "Locality-Sensitive Hashing (LSH) and its variants are well-known methods for solving the c-approximate NN Search problem in high-dimensional space. Traditionally, several LSH functions are concatenated to form a \"static\" compound hash function for building a hash table. In this paper, we propose to use a base of m single LSH functions to construct \"dynamic\" compound hash functions, and define a new LSH scheme called Collision Counting LSH (C2LSH). If the number of LSH functions under which a data object o collides with a query object q is greater than a pre-specified collision threhold l, then o can be regarded as a good candidate of c-approximate NN of q. This is the basic idea of C2LSH.\n Our theoretical studies show that, by appropriately choosing the size of LSH function base m and the collision threshold l, C2LSH can have a guarantee on query quality. Notably, the parameter m is not affected by dimensionality of data objects, which makes C2LSH especially good for high dimensional NN search. The experimental studies based on synthetic datasets and four real datasets have shown that C2LSH outperforms the state of the art method LSB-forest in high dimensional space."}
{"_id": "65a3e9228f5dfba5ead7b6c412390d86de75ab9e", "title": "Design of the Artificial: lessons from the biological roots of general intelligence", "text": "Our desire and fascination with intelligent machines dates back to the antiquity\u2019s mythical automaton Talos, Aristotle\u2019s mode of mechanical thought (syllogism) and Heron of Alexandria\u2019s mechanical machines and automata. However, the quest for Artificial General Intelligence (AGI) is troubled with repeated failures of strategies and approaches throughout the history. This decade has seen a shift in interest towards bio-inspired software and hardware, with the assumption that such mimicry entails intelligence. Though these steps are fruitful in certain directions and have advanced automation, their singular design focus renders them highly inefficient in achieving AGI. Which set of requirements have to be met in the design of AGI? What are the limits in the design of the artificial? Here, a careful examination of computation in biological systems hints that evolutionary tinkering of contextual processing of information enabled by a hierarchical architecture is the key to build AGI."}
{"_id": "bfa1ddc28be92576dcb0ba618a61f18f8acfc4a9", "title": "Clustering Algorithms Applied in Educational Data Mining", "text": "Fifty years ago there were just a handful of universities across the globe that could provide for specialized educational courses. Today Universities are generating not only graduates but also massive amounts of data from their systems. So the question that arises is how can a higher educational institution harness the power of this didactic data for its strategic use? This review paper will serve to answer this question. To build an Information system that can learn from the data is a difficult task but it has been achieved successfully by using various data mining approaches like clustering, classification, prediction algorithms etc. However the use of these algorithms with educational dataset is quite low. This review paper focuses to consolidate the different types of clustering algorithms as applied in Educational Data Mining context."}
{"_id": "860f8ea71abdf1bb3ac4f1c1df366bb6abbe59dd", "title": "Arbitrarily shaped formations of mobile robots: artificial potential fields and coordinate transformation", "text": "In this paper we describe a novel decentralized control strategy to realize formations of mobile robots. We first describe how to design artificial potential fields to obtain a formation with the shape of a regular polygon. We provide a formal proof of the asymptotic stability of the sys tem, based on the definition of a proper Lyapunov function. We also prove that our control strategy is not affected by the problem of local minima. Then, we exploit a bijective coordinate transformation to deform the polygonal formation, thus obtaining a completely arbitrarily shaped forma tion. Simulations and experimental tests are provided to va lidate the control strategy."}
{"_id": "7c91c58581fa17a2da4dcf1e8bd281854cc35527", "title": "Advanced analytics: opportunities and challenges", "text": "Purpose \u2013 Advanced analytics-driven data analyses allow enterprises to have a complete or \u201c360 degrees\u201d view of their operations and customers. The insight that they gain from such analyses is then used to direct, optimize, and automate their decision making to successfully achieve their organizational goals. Data, text, and web mining technologies are some of the key contributors to making advanced analytics possible. This paper aims to investigate these three mining technologies in terms of how they are used and the issues that are related to their effective implementation and management within the broader context of predictive or advanced analytics. Design/methodology/approach \u2013 A range of recently published research literature on business intelligence (BI); predictive analytics; and data, text and web mining is reviewed to explore their current state, issues and challenges learned from their practice. Findings \u2013 The findings are reported in two parts. The first part discusses a framework for BI using the data, text, and web mining technologies for advanced analytics; and the second part identifies and discusses the opportunities and challenges the business managers dealing with these technologies face for gaining competitive advantages for their businesses. Originality/value \u2013 The study findings are intended to assist business managers to effectively understand the issues and emerging technologies behind advanced analytics implementation."}
{"_id": "9d2222506f6a076e2c9803ac4ea5414eda881c73", "title": "Predicting driver drowsiness using vehicle measures: recent insights and future challenges.", "text": "INTRODUCTION\nDriver drowsiness is a significant contributing factor to road crashes. One approach to tackling this issue is to develop technological countermeasures for detecting driver drowsiness, so that a driver can be warned before a crash occurs.\n\n\nMETHOD\nThe goal of this review is to assess, given the current state of knowledge, whether vehicle measures can be used to reliably predict drowsiness in real time.\n\n\nRESULTS\nSeveral behavioral experiments have shown that drowsiness can have a serious impact on driving performance in controlled, experimental settings. However, most of those studies have investigated simple functions of performance (such as standard deviation of lane position) and results are often reported as averages across drivers, and across time.\n\n\nCONCLUSIONS\nFurther research is necessary to examine more complex functions, as well as individual differences between drivers.\n\n\nIMPACT ON INDUSTRY\nA successful countermeasure for predicting driver drowsiness will probably require the setting of multiple criteria, and the use of multiple measures."}
{"_id": "e54877a44357402a72ccb8c99dd72504618ac9c4", "title": "Curved double-layer strip folded dipole antenna for WBAN applications", "text": "This paper presents an optimal design of strip folded dipole antenna with a stacking technique to stabilize the resonance frequency. The designed double-layer strip folded dipole antenna worked well with the cylindrically curved surface on the human body. While being bent, the antenna will change the extended length in each layer within the stacking flexible copper-clad laminate. In this case, the properties of the antenna will be automatically adjusted. However, the extended length can occur at the contact point in each layer. So, we can choose between two points and four points by selecting different types of the designs. This reason, the performance of the extended length can be added by two and four when being compared to the previous multilayer design and the total layer of the new antenna was only two. The extended length of the antenna resulted in making antenna resonance frequency improvement become more stable. The contribution of proposed antenna was used only two layers of flexible copper-clad laminate that reduces a number of antenna's layers with a folded dipole antenna, which causes the designed less complicated, and the flexibility is advantageous for WBAN applications. The double-layer strip folded dipole antenna is designed for 2.45GHz WBAN application."}
{"_id": "cecf83f0ebc708c8b522815afd1cc5b140177a38", "title": "The Effect of Synonym Substitution on Search Results", "text": "Synonyms or other semantic associations can be used in web search in query substitution to improve or augment the query to retrieve more relevant search results. The value of substitution depends on how well the synonyms preserve semantic meaning, as any attrition in meaning can result in semantic drift of query results. Many synonyms are not synonyms in the traditional, thesaurus sense, but are semantic associations discovered automatically from online data, with the risk of semantic drift in substitution. This discovery of synonyms or other semantic associations arises from different methods applied over web search logs, and in this paper we review the candidate synonym pairs of words or phrases generated from three different methods applied over the same web search logs. The suitability of the candidate synonym pairs for the purpose of query substitution is evaluated in an experiment where 68 subjects assessed the search results generated by both the original query and the substituted query. It was found that two of the discovery methods returned significantly worse results with the substitution than were returned by the original query for the majority of queries, with only around 20-22% of substituted queries generating either improved or equally-relevant results. The third method however returned a very similar level of superior results as the original query, and saw over 71% of substituted queries generating either improved or equally-relevant results. These results indicate that even when candidate synonym pairs are confirmed as being semantically associated using other methods, they still may not be suitable for query substitution, depending on the method of synonym discovery."}
{"_id": "7bbd56f4050eb9f8b63f0eacb58ad667aaf49f25", "title": "Millimeter-wave mobile broadband with large scale spatial processing for 5G mobile communication", "text": "The phenomenal growth in mobile data traffic calls for a drastic increase in mobile network capacity beyond current 3G/4G networks. In this paper, we propose a millimeter wave mobile broadband (MMB) system for the next generation mobile communication system (5G). MMB taps into the vast spectrum in 3-300 GHz range to meet this growing demand. We reason why the millimeter wave spectrum is suitable for mobile broadband applications. We discuss the unique advantages of millimeter waves such as spectrum availability and large beamforming gain in small form factors. We also describe a practical MMB system design capable of providing Gb/s data rates at distances up to 500 meters and supports mobility up to 350 km/h. By means of system simulations, we show that a basic MMB system is capable of delivering an average cell throughput and cell-edge throughput performances that is 10-100 times better than the current 20MHz LTE-Advanced systems."}
{"_id": "14c2ac5b61cc4813cfe2e5787680001f2794d7ec", "title": "Emotion circuits in the brain.", "text": "The field of neuroscience has, after a long period of looking the other way, again embraced emotion as an important research area. Much of the progress has come from studies of fear, and especially fear conditioning. This work has pinpointed the amygdala as an important component of the system involved in the acquisition, storage, and expression of fear memory and has elucidated in detail how stimuli enter, travel through, and exit the amygdala. Some progress has also been made in understanding the cellular and molecular mechanisms that underlie fear conditioning, and recent studies have also shown that the findings from experimental animals apply to the human brain. It is important to remember why this work on emotion succeeded where past efforts failed. It focused on a psychologically well-defined aspect of emotion, avoided vague and poorly defined concepts such as \"affect,\" \"hedonic tone,\" or \"emotional feelings,\" and used a simple and straightforward experimental approach. With so much research being done in this area today, it is important that the mistakes of the past not be made again. It is also time to expand from this foundation into broader aspects of mind and behavior."}
{"_id": "8915a82752ecf8fc762fb48362decd06429177ee", "title": "Unsupervised Histopathology Image Synthesis", "text": "Hematoxylin and Eosin stained histopathology image analysis is essential for the diagnosis and study of complicated diseases such as cancer. Existing state-of-the-art approaches demand extensive amount of supervised training data from trained pathologists. In this work we synthesize in an unsupervised manner, large histopathology image datasets, suitable for supervised training tasks. We propose a unified pipeline that: a) generates a set of initial synthetic histopathology images with paired information about the nuclei such as segmentation masks; b) refines the initial synthetic images through a Generative Adversarial Network (GAN) to reference styles; c) trains a task-specific CNN and boosts the performance of the task-specific CNN with onthe-fly generated adversarial examples. Our main contribution is that the synthetic images are not only realistic, but also representative (in reference styles) and relatively challenging for training task-specific CNNs. We test our method for nucleus segmentation using images from four cancer types. When no supervised data exists for a cancer type, our method without supervision cost significantly outperforms supervised methods which perform across-cancer generalization. Even when supervised data exists for all cancer types, our approach without supervision cost performs better than supervised methods."}
{"_id": "e1250228bf1525d5def11a410bdf03441251dff6", "title": "Moving object surveillance using object proposals and background prior prediction", "text": "In this paper, a moving object detection algorithm is combined with a background estimate and a Bing (Binary Norm Gradient) object is proposed in video surveillance. A simple background estimation method is used to detect rough images of a group of moving foreground objects. The foreground setting in the foreground will estimate another set of candidate object windows, and the target (pedestrian / vehicle) from the intersection area comes from the first two steps. In addition, the time cost is reduced by the estimated area. Experiments on outdoor datasets show that the proposed method can not only achieve high detection rate (DR), but also reduce false alarm rate (FAR) and time cost."}
{"_id": "d4f353f3a78b8cb6d5a394f8efe9e69c41eae2a5", "title": "Progressive Geometric Algorithms", "text": "Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms for two geometric problems: computing the convex hull of a planar point set, and finding popular places in a set of trajectories."}
{"_id": "1ee98b76ea263617e8b6726a7282f0cfcd37e168", "title": "A SURVEY OF DISTRIBUTED FILE SYSTEMS", "text": "This paper is a survey of the current state of the art in the design and implementation of distributed file systems. It consists of four major parts: an overview of background material, case studies of a number of contemporary file systems, identification of key design techniques, and an examination of current research issues. The systems surveyed are Sun NFS, Apollo Domain, Andrew, IBM AIX DS, AT&T RFS, and Sprite. The coverage of background material includes a taxonomy of file system issues, a brief history of distributed file systems, and a summary of empirical research on file properties. A comprehensive bibliography forms an important of the paper."}
{"_id": "1ccf5670461638542b32fc7bd86cd47bf2f9d050", "title": "Combining Language and Vision with a Multimodal Skip-gram Model", "text": "We extend the SKIP-GRAM model of Mikolov et al. (2013a) by taking visual information into account. Like SKIP-GRAM, our multimodal models (MMSKIP-GRAM) build vector-based word representations by learning to predict linguistic contexts in text corpora. However, for a restricted set of words, the models are also exposed to visual representations of the objects they denote (extracted from natural images), and must predict linguistic and visual features jointly. The MMSKIP-GRAM models achieve good performance on a variety of semantic benchmarks. Moreover, since they propagate visual information to all words, we use them to improve image labeling and retrieval in the zero-shot setup, where the test concepts are never seen during model training. Finally, the MMSKIP-GRAM models discover intriguing visual properties of abstract words, paving the way to realistic implementations of embodied theories of meaning."}
{"_id": "3bc9f8eb5ba303816fd5f642f2e7408f0752d3c4", "title": "WORDNET: A Lexical Database For English", "text": ""}
{"_id": "3c802de1c9e4ab4ffc4669eb7e39093c73b4d708", "title": "Computing Lexical Contrast", "text": "Knowing the degree of semantic contrast between words has widespread application in natural language processing, including machine translation, information retrieval, and dialogue systems. Manually created lexicons focus on opposites, such as hot and cold. Opposites are of many kinds such as antipodals, complementaries, and gradable. Existing lexicons often do not classify opposites into the different kinds, however. They also do not explicitly list word pairs that are not opposites but yet have some degree of contrast in meaning, such as warm and cold or tropical and freezing. We propose an automatic method to identify contrasting word pairs that is based on the hypothesis that if a pair of words, A and B, are contrasting, then there is a pair of opposites, C and D, such that A and C are strongly related and B and D are strongly related. (For example, there exists the pair of opposites hot and cold such that tropical is related to hot, and freezing is related to cold.) We will call this the contrast hypothesis.We begin with a large crowdsourcing experiment to determine the amount of human agreement on the concept of oppositeness and its different kinds. In the process, we flesh out key features of different kinds of opposites. We then present an automatic and empirical measure of lexical contrast that relies on the contrast hypothesis, corpus statistics, and the structure of a Roget-like thesaurus. We show how, using four different data sets, we evaluated our approach on two different tasks, solving \u201cmost contrasting word\u201d questions and distinguishing synonyms from opposites. The results are analyzed across four parts of speech and across five different kinds of opposites. We show that the proposed measure of lexical contrast obtains high precision and large coverage, outperforming existing methods."}
{"_id": "808a225d7a943041612411b23c0b68f63b89a437", "title": "Word Embedding-based Antonym Detection using Thesauri and Distributional Information", "text": "This paper proposes a novel approach to train word embeddings to capture antonyms. Word embeddings have shown to capture synonyms and analogies. Such word embeddings, however, cannot capture antonyms since they depend on the distributional hypothesis. Our approach utilizes supervised synonym and antonym information from thesauri, as well as distributional information from large-scale unlabelled text data. The evaluation results on the GRE antonym question task show that our model outperforms the state-of-the-art systems and it can answer the antonym questions in the F-score of 89%."}
{"_id": "44d62adc08e878499f775569385386590bf20032", "title": "Forensic Analysis of WhatsApp on Android Smartphones", "text": "ii Acknowledgements The research and writing of this thesis has been one of the most significant academic challenges I ever had to face. Without the support and guidance of the following people, this study would not have been completed. It is to them that I owe my deepest gratitude. I would sincerely like to thank my advisor, Dr. Golden G. Richard III for giving me an opportunity to conduct this thesis research under his excellent guidance. His exceptional knowledge, wisdom and understanding have inspired and motivated me. My professors, Dr. Shengru Tu and Dr. Adlai DePano for being on my thesis committee and mentoring me during my Master's degree in Computer Science at University of New Orleans. I am thankful to my friend Thomas Sires, for initiating my interest in Linux, making sure I keep it up and help me solve technical issues. I appreciate the help from Aisha Ali-Gombe, Joe Sylve and Andrew Case who are on our Information Assurance group, for their contributions in the Information Assurance field and answering my questions. Finally, I would like to thank my Parents and each member in my family and friends for their unconditional love and support. I specially want to thank my husband Mr. Naveen Singh, who has been my core source of strength through every phase of this effort and constantly encouraged me to do my best. His systematic approach and kind feedback towards my thesis has helped me improve. Words cannot express how grateful I am to you. My heartfelt gratitude to my precious daughter Navya Singh, who was born during my Master's program and has spent many days without my complete attention. With her never give up attitude she has taught me perseverance to work around a problem until I succeed. Thank you for all your love my little girl, it kept me going. Abstract Android forensics has evolved over time offering significant opportunities and exciting challenges. On one hand, being an open source platform Android is giving developers the freedom to contribute to the rapid growth of the Android market whereas on the other hand Android users may not be aware of the security and privacy implications of installing these applications on their phones. Users may assume that a password-locked device protects their personal information, but applications may retain private information on devices, in ways that users might not anticipate. In this thesis we will be \u2026"}
{"_id": "48ae0201cd987b62e971d14e93faf3a050d94b4c", "title": "Yet another DSL for cross-platforms mobile development", "text": "With the growing success of mobility, mobile platforms (iOS, Android, WindowsPhone, etc.) multiply, each requiring specific development skills. Given this situation, it becomes very difficult for software developers to duplicate their apps accordingly. Meanwhile, web-based applications have evolved to be \"mobile-friendly\" but it appears that this is not a silver bullet: the user experience and the overall quality is still better with native applications. Of course, cross-platform mobile development tools have emerged in recent years. This paper provides a survey of these tools and points out that a full-fledged language for mobile development is highly desirable. Consequently, we present a preliminary work on Xmob, a technology-neutral DSL intended to be cross-compiled to produce native code for a variety of platforms."}
{"_id": "27d209044d591da0795424ee747d101a4d998dd7", "title": "Molecular phylogenetic evidence for the reorganization of the Hyperiid amphipods, a diverse group of pelagic crustaceans.", "text": "Within the crustaceans, the Amphipoda rank as one of the most speciose extant orders. Amphipods have successfully invaded and become major constituents of a variety of ecosystems. The hyperiid amphipods are classically defined as an exclusively pelagic group broadly inhabiting oceanic midwater environments and often having close associations with gelatinous zooplankton. As with other amphipod groups they have largely been classified based on appendage structures, however evidence suggests that at least some of these characters are the product of convergent evolution. Here we present the first multi-locus molecular phylogenetic assessment of relationships among the hyperiid amphipods. We sampled 51 species belonging to 16 of the 23 recognized hyperiidian families for three nuclear loci (18S, 28S, and H3) and mitochondrial COI. We performed both Bayesian Inference and Maximum Likelihood analyses of concatenated sequences. In addition, we also explored the utility of species-tree methods for reconstructing deep evolutionary histories using the Minimize Deep Coalescence (MDC) approach. Our results are compared with previous molecular analyses and traditional systematic groupings. We discuss these results within the context of adaptations correlated with the pelagic life history of hyperiid amphipods. Within the infraorder Physocephalata (Bowman and Gruner, 1973) we inferred support for three reciprocally monophyletic clades; the Platysceloidea, Vibilioidea, and Phronimoidea. Our results also place the enigmatic Cystisomatidae and Paraphronimidae at the base of the infraorder Physosomata (Bowman and Gruner, 1973) suggesting that Physosomata as traditionally recognized is paraphyletic. Based on our multilocus phylogeny, major rearrangements to existing taxonomic groupings of hyperiid amphipods are warranted."}
{"_id": "8e855f0474941c624b04317ce46086ba4d501c08", "title": "Classification of Battlefield Ground Vehicles Using Acoustic Features and Fuzzy Logic Rule-Based Classifiers", "text": "In this paper, we demonstrate, through the multicategory classification of battlefield ground vehicles using acoustic features, how it is straightforward to directly exploit the information inherent in a problem to determine the number of rules, and subsequently the architecture, of fuzzy logic rule-based classifiers (FLRBC). We propose three FLRBC architectures, one non-hierarchical and two hierarchical (HFLRBC), conduct experiments to evaluate the performances of these architectures, and compare them to a Bayesian classifier. Our experimental results show that: 1) for each classifier the performance in the adaptive mode that uses simple majority voting is much better than in the non-adaptive mode; 2) all FLRBCs perform substantially better than the Bayesian classifier; 3) interval type-2 (T2) FLRBCs perform better than their competing type-1 (T1) FLRBCs, although sometimes not by much; 4) the interval T2 nonhierarchical and HFLRBC-series architectures perform the best; and 5) all FLRBCs achieve higher than the acceptable 80% classification accuracy"}
{"_id": "7537c9c66246e1a5962f3c5fc38df566626efaa4", "title": "Current sensor in PCB technology", "text": "A novel DC/AC current sensor works on the fluxgate principle. The core in the form of a 7/10-mm ring made of electrodeposited permalloy is sandwiched in the middle of a printed circuit board (PCB), whereas the sensor excitation winding is also integrated in the copper layers of the PCB. To lower the sensor power consumption, the excitation winding was tuned by a parallel capacitor. The peak-peak/rms ratio of 5.2 was achieved for the excitation frequency of 25 kHz. The open-loop sensor has 100-mV/A sensitivity; the characteristics have a linearity error of 10% and hysteresis below 0.1% in the 1-A range."}
{"_id": "3b46294188e26bbd340ecc43d4f301c0f8368174", "title": "Kernel regression in mixed feature spaces for spatio-temporal saliency detection", "text": "Spatio-temporal saliency detection has attracted lots of research interests due to its competitive performance on wide multimedia applications. For spatio-temporal saliency detection, existing bottom-up algorithms often over-simplify the fusion strategy, which results in the inferior performance than the human vision system. In this paper, a novel bottom-up spatio-temporal saliency model is proposed to improve the accuracy of attentional region estimation in videos through fully exploiting the merit of fusion. In order to represent the space constructed by several types of features such as location, appearance and temporal cues extracted from video, kernel regression in mixed feature spaces (KR-MFS) including three approximation entity-models is proposed. Using KR-MFS, a hybrid fusion strategy which considers the combination of spatial and temporal saliency of each individual unit and incorporates the impacts from the neighboring units is presented and embedded into the spatio-temporal saliency model. The proposed model has been evaluated on the publicly available dataset. Experimental results show that the proposed spatio-temporal saliency model can achieve better performance than the state-of-the-art approaches. 2015 Elsevier Inc. All rights reserved."}
{"_id": "e400213e5f4b3e1ff09c60232f6dacc2ad5c8c76", "title": "Gamification of Clinical Routine: The Dr. Fill Approach", "text": "Gamification is used in clinical context in the health care education. Furthermore, it has shown great promises to improve the performance of the health care staff in their daily routine. In this work we focus on the medication sorting task, which is performed manually in hospitals. This task is very error prone and needs to be performed daily. Nevertheless, errors in the medication are crucial and lead to serious complications. In this work we present a real world gamification approach of the medication sorting task in a patient\u2019s daily pill organizer. The player of the game needs to sort the correct medication into the correct dispenser slots and is rewarded or punished in real time. At the end of the game, a score is given and the user can register in a leaderboard."}
{"_id": "7b91f9e74dcc604cf13220e149120114c96d23ff", "title": "A data-centric approach to synchronization", "text": "Concurrency-related errors, such as data races, are frustratingly difficult to track down and eliminate in large object-oriented programs. Traditional approaches to preventing data races rely on protecting instruction sequences with synchronization operations. Such control-centric approaches are inherently brittle, as the burden is on the programmer to ensure that all concurrently accessed memory locations are consistently protected. Data-centric synchronization is an alternative approach that offloads some of the work on the language implementation. Data-centric synchronization groups fields of objects into atomic sets to indicate that these fields must always be updated atomically. Each atomic set has associated units of work, that is, code fragments that preserve the consistency of that atomic set. Synchronization operations are added automatically by the compiler. We present an extension to the Java programming language that integrates annotations for data-centric concurrency control. The resulting language, called AJ, relies on a type system that enables separate compilation and supports atomic sets that span multiple objects and that also supports full encapsulation for more efficient code generation. We evaluate our proposal by refactoring classes from standard libraries, as well as a number of multithreaded benchmarks, to use atomic sets. Our results suggest that data-centric synchronization is easy to use and enjoys low annotation overhead, while successfully preventing data races. Moreover, experiments on the SPECjbb benchmark suggest that acceptable performance can be achieved with a modest amount of tuning."}
{"_id": "e9c76af1da23f63753a4dfbbfafeef756652ae37", "title": "Data Mining of Causal Relations from Text: Analysing Maritime Accident Investigation Reports", "text": "Text mining is a process of extracting information of interest from text. Such a method includes techniques from various areas such as Information Retrieval (IR), Natural Language Processing (NLP), and Information Extraction (IE). In this study, text mining methods are applied to extract causal relations from maritime accident investigation reports collected from the Marine Accident Investigation Branch (MAIB). These causal relations provide information on various mechanisms behind accidents, including human and organizational factors relating to the accident. The objective of this study is to facilitate the analysis of the maritime accident investigation reports, by means of extracting contributory causes with more feasibility. A careful investigation of contributory causes from the reports provide opportunity to improve safety in future. Two methods have been employed in this study to extract the causal relations. They are 1) Pattern classification method and 2) Connectives method. The earlier one uses na\u0131\u0308ve Bayes and Support Vector Machines (SVM) as classifiers. The latter simply searches for the words connecting cause and effect in sentences. The causal patterns extracted using these two methods are compared to the manual (human expert) extraction. The pattern classification method showed a fair and sensible performance with F-measure(average) = 65% when compared to connectives method with F-measure(average) = 58%. This study is an evidence, that text mining methods could be employed in extracting causal relations from marine accident investigation reports."}
{"_id": "fb4401254c4528f7a60f42e993cab79a0ceca28d", "title": "SemEval-2016 Task 7: Determining Sentiment Intensity of English and Arabic Phrases", "text": "We present a shared task on automatically determining sentiment intensity of a word or a phrase. The words and phrases are taken from three domains: general English, English Twitter, and Arabic Twitter. The phrases include those composed of negators, modals, and degree adverbs as well as phrases formed by words with opposing polarities. For each of the three domains, we assembled the datasets that include multi-word phrases and their constituent words, both manually annotated for real-valued sentiment intensity scores. The three datasets were presented as the test sets for three separate tasks (each focusing on a specific domain). Five teams submitted nine system outputs for the three tasks. All datasets created for this shared task are freely available to the research community."}
{"_id": "7af4feb3cc7c8ce856ad0b2b43f1adc9c6b46be9", "title": "Risk assessment method for cyber security of cyber physical systems", "text": "Cyber security is one of the most important risks for all types of cyber-physical systems (CPS). To evaluate the cyber security risk of CPS, a quantitative hierarchized assessment model consists of attack severity, attack success probability and attack consequence is proposed, which can assess the risk caused by an ongoing attack at host level and system level. Then the definitions and calculation methods of the three indexes are discussed in detail. Finally, this paper gives the risk assessment algorithm which describes the steps of implementation. Numerical example shows that the model can response to the attack timely and obtain the system security risk change curve. So that it can help users response to the risk timely. The risk change curve can also be used to predict the risk for the future time."}
{"_id": "7ef7d9345fc6f6c78d3445dcd0e3dd3327e223e9", "title": "Monthly Rainfall Prediction Using Wavelet Neural Network Analysis", "text": "Rainfall is one of the most significant parameters in a hydrological model. Several models have been developed to analyze and predict the rainfall forecast. In recent years, wavelet techniques have been widely applied to various water resources research because of their timefrequency representation. In this paper an attempt has been made to find an alternative method for rainfall prediction by combining the wavelet technique with Artificial Neural Network (ANN). The wavelet and ANNmodels have been applied to monthly rainfall data of Darjeeling rain gauge station. The calibration and validation performance of the models is evaluated with appropriate statistical methods. The results of monthly rainfall series modeling indicate that the performances of wavelet neural network models are more effective than the ANN models."}
{"_id": "f4da29b70b21798dd80ba35e8166a8c34c9a91f1", "title": "On Dimensionality Reduction Techniques for Cross-Language Information Retrieval", "text": "With the advent of the Web, cross-language information retrieval (CLIR) becomes important not only to satisfy the information need across languages but to mine resources for multiple languages e.g. parallel or comparable documents. Broadly CLIR techniques are of two types, in the first case, either queries or documents are translated to the language of comparison while the other type tries to project the vector space representation of the text to a shared translingual space which represents the \u201csemantics\u201d of the documents. In this study, we review the state-of-the-art for CLIR by means of the latter approach and identify the scope for further research."}
{"_id": "fa2b49a77082a3a3a975c4af5e2a36b69ff65545", "title": "A memetic algorithm based extreme learning machine for classification", "text": "Extreme Learning Machine (ELM) is an elegant technique for training Single-hidden Layer Feedforward Networks (SLFNs) with extremely fast speed that attracts significant interest recently. One potential weakness of ELM is the random generation of the input weights and hidden biases, which may deteriorate the classification accuracy. In this paper, we propose a new Memetic Algorithm (MA) based Extreme Learning Machine (M-ELM) for classification problems. M-ELM uses Memetic Algorithm which is a combination of population-based global optimization technique and individual-based local heuristic search method to find optimal network parameters for ELM. The optimized network parameters will enhance the classification accuracy and generalization performance of ELM. Experiments and comparisons on 22 benchmark data sets demonstrate that M-ELM is able to provide highly competitive results compared with other state-of-the-art varieties of ELM algorithms."}
{"_id": "29bddd305e1a6c4be700605016cadd2df27140af", "title": "Best Keyword Cover Search", "text": "It is common that the objects in a spatial database (e.g., restaurants/hotels) are associated with keyword(s) to indicate their businesses/services/features. An interesting problem known as Closest Keywords search is to query objects, called keyword cover, which together cover a set of query keywords and have the minimum inter-objects distance. In recent years, we observe the increasing availability and importance of keyword rating in object evaluation for the better decision making. This motivates us to investigate a generic version of Closest Keywords search called Best Keyword Cover which considers inter-objects distance as well as the keyword rating of objects. The baseline algorithm is inspired by the methods of Closest Keywords search which is based on exhaustively combining objects from different query keywords to generate candidate keyword covers. When the number of query keywords increases, the performance of the baseline algorithm drops dramatically as a result of massive candidate keyword covers generated. To attack this drawback, this work proposes a much more scalable algorithm called keyword nearest neighbor expansion (keyword-NNE). Compared to the baseline algorithm, keyword-NNE algorithm significantly reduces the number of candidate keyword covers generated. The in-depth analysis and extensive experiments on real data sets have justified the superiority of our keyword-NNE algorithm."}
{"_id": "d18973a9e0ba1c425e3817e0e34643a18a602d80", "title": "Debugging the Internet of Things: The Case of Wireless Sensor Networks", "text": "The Internet of Things (IoT) has the strong potential to support a human society interacting more symbiotically with its physical environment. Indeed, the emergence of tiny devices that sense environmental cues and trigger actuators after consulting logic and human preferences promises a more environmentally aware and less wasteful society. However, the IoT inherently challenges software development processes, particularly techniques for ensuring software reliability. Researchers have developed debugging tools for wireless sensor networks (WSNs), which can be viewed as the enablers of perception in the IoT. These tools gather run-time information on individual sensor node executions and node interactions and then compress that information."}
{"_id": "848d63c2e4ff4c3d031c30e481b1c1ff050949ce", "title": "MedLeaf: Mobile biodiversity informatics tool for mapping and identifying Indonesian medicinal Plants", "text": "We presents a mobile biodiversity informatics tools for identifying and mapping Indonesian medicinal plants. The system - called MedLeaf - has been developed as a prototype data resource for documenting, integrating, disseminating, and identifying of Indonesian medicinal plants. Identification of medicinal plant is done automatically based on digital image processing. Fuzzy Local Binary Pattern (LBP) and geometrical features are used to extract leaves features. Probabilistic Neural Network is used as classifier for discrimination. Data set consist of 85 species of Indonesian medicinal plants with 3,502 leaves digital images. Our results indicate that combination of leaves features outperform than using single features with accuracy 88.5%. The distribution of medicinal plants can be shown on mobile phone using GIS application. The application is essential to help people identify the medicinal plants and disseminate information of medicinal plants distribution in Indonesia."}
{"_id": "203a6c87ca3a45f7d0a37c5e529dcf81dcbf59b1", "title": "Discriminative Word Alignment with Conditional Random Fields", "text": "In this paper we present a novel approach for inducing word alignments from sentence aligned data. We use a Conditional Random Field (CRF), a discriminative model, which is estimated on a small supervised training set. The CRF is conditioned on both the source and target texts, and thus allows for the use of arbitrary and overlapping features over these data. Moreover, the CRF has efficient training and decoding processes which both find globally optimal solutions. We apply this alignment model to both French-English and Romanian-English language pairs. We show how a large number of highly predictive features can be easily incorporated into the CRF, and demonstrate that even with only a few hundred word-aligned training sentences, our model improves over the current state-ofthe-art with alignment error rates of 5.29 and 25.8 for the two tasks respectively."}
{"_id": "90d68ee3484d93947299881d3b1bf5a2e6f7b587", "title": "Fingerprint classification using convolutional neural networks and ridge orientation images", "text": "Deep learning is currently popular in the field of computer vision and pattern recognition. Deep learning models such as Convolutional Neural Networks (CNNs) have been shown to be extremely effective at solving computer vision problems with state-of-the-art results. This work uses the CNN model to achieve a high degree of accuracy on the problem of fingerprint image classification. It will also be shown that effective image preprocessing can greatly reduce the dimensionality of the problem, allowing fast training times without compromising classification accuracy, even in networks of moderate depth. The proposed approach has achieved 95.9% classification accuracy on the NIST-DB4 dataset with zero sample rejection."}
{"_id": "43c83047ce71bc69786da967e6365c46306a404c", "title": "The cognitive control of emotion", "text": "The capacity to control emotion is important for human adaptation. Questions about the neural bases of emotion regulation have recently taken on new importance, as functional imaging studies in humans have permitted direct investigation of control strategies that draw upon higher cognitive processes difficult to study in nonhumans. Such studies have examined (1) controlling attention to, and (2) cognitively changing the meaning of, emotionally evocative stimuli. These two forms of emotion regulation depend upon interactions between prefrontal and cingulate control systems and cortical and subcortical emotion-generative systems. Taken together, the results suggest a functional architecture for the cognitive control of emotion that dovetails with findings from other human and nonhuman research on emotion."}
{"_id": "60e9e6bb6d53770010f195dfe477d0e384654923", "title": "Deceiving Others: Distinct Neural Responses of the Prefrontal Cortex and Amygdala in Simple Fabrication and Deception with Social Interactions", "text": "Brain mechanisms for telling lies have been investigated recently using neuroimaging techniques such as functional magnetic resonance imaging and positron emission tomography. Although the advent of these techniques has gradually enabled clarification of the functional contributions of the prefrontal cortex in deception with respect to executive function, the specific roles of subregions within the prefrontal cortex and other brain regions responsible for emotional regulation or social interactions during deception are still unclear. Assuming that the processes of falsifying truthful responses and deceiving others are differentially associated with the activities of these regions, we conducted a positron emission tomography experiment with 2 (truth, lie) 2 (honesty, dishonesty) factorial design. The main effect of falsifying the truthful responses revealed increased brain activity of the left dorsolateral and right anterior prefrontal cortices, supporting the interpretation of previous studies that executive functions are related to making untruthful responses. The main effect of deceiving the interrogator showed activations of the ventromedial prefrontal (medial orbitofrontal) cortex and amygdala, adding new evidence that the brain regions assumed to be responsible for emotional processing or social interaction are active during deceptive behavior similar to that in real-life situations. Further analysis revealed that activity of the right anterior prefrontal cortex showed both effects of deception, indicating that this region has a pivotal role in telling lies. Our results provide clear evidence of functionally dissociable roles of the prefrontal subregions and amygdala for human deception."}
{"_id": "8c74fc02ff5cbece8e813ab99defabffb24249ec", "title": "Functional Neuroanatomy of Emotion: A Meta-Analysis of Emotion Activation Studies in PET and fMRI", "text": "Neuroimagingstudies with positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) have begun to describe the functional neuroanatomy of emotion. Taken separately, specific studies vary in task dimensions and in type(s) of emotion studied and are limited by statistical power and sensitivity. By examining findings across studies, we sought to determine if common or segregated patterns of activations exist across various emotional tasks. We reviewed 55 PET and fMRI activation studies (yielding 761 individual peaks) which investigated emotion in healthy subjects. Peak activation coordinates were transformed into a standard space and plotted onto canonical 3-D brain renderings. We divided the brain into 20 nonoverlapping regions, and characterized each region by its responsiveness across individual emotions (positive, negative, happiness, fear, anger, sadness, disgust), to different induction methods (visual, auditory, recall/imagery), and in emotional tasks with and without cognitive demand. Our review yielded the following summary observations: (1) The medial prefrontal cortex had a general role in emotional processing; (2) fear specifically engaged the amygdala; (3) sadness was associated with activity in the subcallosal cingulate; (4) emotional induction by visual stimuli activated the occipital cortex and the amygdala; (5) induction by emotional recall/imagery recruited the anterior cingulate and insula; (6) emotional tasks with cognitive demand also involved the anterior cingulate and insula. This review provides a critical comparison of findings across individual studies and suggests that separate brain regions are involved in different aspects of emotion."}
{"_id": "2168c44666d95962339a91f4ddba873985b4a5ab", "title": "The retrosplenial cortex and emotion: new insights from functional neuroimaging of the human brain", "text": "Little is known about the function of the retrosplenial cortex and until recently, there was no evidence that it had any involvement in emotional processes. Surprisingly, recent functional neuroimaging studies show that the retrosplenial cortex is consistently activated by emotionally salient words. A review of the functional neuroimaging literature reveals a previously overlooked pattern of observations: the retrosplenial cortex is the cortical region most consistently activated by emotionally salient stimuli. Evidence that this region is also involved in episodic memory suggests that it might have a role in the interaction between emotion and episodic memory. Recognition that the retrosplenial cortex has a prominent role in the processing of emotionally salient stimuli invites further studies to define its specific functions and its interactions with other emotion-related brain regions."}
{"_id": "ac5ab8f71edde6d1a2129da12d051ed03a8446a1", "title": "Comparator Networks", "text": "The objective of this work is set-based verification, e.g. to decide if two sets of images of a face are of the same person or not. The traditional approach to this problem is to learn to generate a feature vector per image, aggregate them into one vector to represent the set, and then compute the cosine similarity between sets. Instead, we design a neural network architecture that can directly learn set-wise verification. Our contributions are: (i) We propose a Deep Comparator Network (DCN) that can ingest a pair of sets (each may contain a variable number of images) as inputs, and compute a similarity between the pair \u2013 this involves attending to multiple discriminative local regions (landmarks), and comparing local descriptors between pairs of faces; (ii) To encourage highquality representations for each set, internal competition is introduced for recalibration based on the landmark score; (iii) Inspired by image retrieval, a novel hard sample mining regime is proposed to control the sampling process, such that the DCN is complementary to the standard image classification models. Evaluations on the IARPA Janus face recognition benchmarks show that the comparator networks outperform the previous state-of-the-art results by a large margin."}
{"_id": "6660d9102a02ffc6df5bd4019b27e68f433fccd5", "title": "Artificial moral agents are infeasible with foreseeable technologies", "text": "For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility."}
{"_id": "74f2b3adbc2b62ac8727d4d98935b3799b53119a", "title": "IEEE 802.22: The first cognitive radio wireless regional area network standard", "text": "This article presents a high-level overview of the IEEE 802.22 standard for cognitive wireless regional area networks (WRANs) that is under development in the IEEE 802 LAN/MAN Standards Committee."}
{"_id": "8b2c14d0e727b49d7ec1d00ab4dc24957e6f6c2a", "title": "Fingertip friction modulation due to electrostatic attraction", "text": "The human fingertip is extremely sensitive to lateral (shear) forces that arise in surface exploration. We and others have developed haptic displays that work by modulating surface friction via electrostatic attraction. Despite the demonstrated ability of these displays to render haptic environments, an understanding of the fingertip-surface interface is lacking. We have developed a tribometer for measuring the lateral frictional forces on a fingertip under well-controlled conditions. We show an expected square law dependence of frictional force (and inferred electrostatic normal force) on actuation voltage, although we observe a large person to person variability. We model an expected dependence of the frictional force on the frequency of the actuation voltage, predicting a first order cut off below about 500Hz. However, our measurements are unambiguously at odds with the model's predictions."}
{"_id": "75ba51ffd9d2bed8a65029c9340d058f587059da", "title": "Understanding the HyperLogLog : a Near-Optimal Cardinality Estimation Algorithm", "text": "The HyperLogLog algorithm (HLL) is a method to estimate the number of distinct elements in large datasets i.e. cardinality, in a single pass, and using a very small amount of memory. The HLL algorithm is an optimization of the method presented in 2003 by Durand and Flajolet in the paper LogLog Counting of Large Cardinalities. In this report we analyze the so called cornerstone of Big Data infrastructures, give a detailed description of the algorithm and explain the mathematical intuition behind it."}
{"_id": "d6800fa7409dbe896d2bfb0e0084fc4bfba43c70", "title": "Semi-supervised Named Entity Recognition in noisy-text", "text": "Many of the existing Named Entity Recognition (NER) solutions are built based on news corpus data with proper syntax. These solutions might not lead to highly accurate results when being applied to noisy, user generated data, e.g., tweets, which can feature sloppy spelling, concept drift, and limited contextualization of terms and concepts due to length constraints. The models described in this paper are based on linear chain conditional random fields (CRFs), use the BIEOU encoding scheme, and leverage random feature dropout for up-sampling the training data. The considered features include word clusters and pre-trained distributed word representations, updated gazetteer features, and global context predictions. The latter feature allows for ingesting the meaning of new or rare tokens into the system via unsupervised learning and for alleviating the need to learn lexicon based features, which usually tend to be high dimensional. In this paper, we report on the solution [ST] we submitted to the WNUT 2016 NER shared task. We also present an improvement over our original submission [SI], which we built by using semi-supervised learning on labelled training data and pre-trained resourced constructed from unlabelled tweet data. Our ST solution achieved an F1 score of 1.2% higher than the baseline (35.1% F1) for the task of extracting 10 entity types. The SI resulted in an increase of 8.2% in F1 score over the baseline (7.08% over ST). Finally, the SI model\u2019s evaluation on the test data achieved a F1 score of 47.3% (~1.15% increase over the 2 best submitted solution). Our experimental setup and results are available as a standalone twitter NER tool at https://github.com/napsternxg/TwitterNER."}
{"_id": "44e15324bcb114227f99db8fe5b425eaab0777ca", "title": "The web impact factor: a critical review", "text": "Purpose \u2013 We analyse the link-based web site impact measure known as the Web Impact Factor (WIF). It is a quantitative tool for evaluating and ranking web sites, top-level domains and sub-domains. We also discuss the WIF's advantages and disadvantages, data collection problems, and validity and reliability of WIF results. Design/methodology/approach \u2013 A key to webometric studies has been the use of largescale search engines, such as Yahoo and AltaVista that allow measurements to be made of the total number of pages in a web site and the total number of backlinks to the web site. These search engines provide similar possibilities for the investigation of links between web sites/pages to those provided by the academic journals citation databases from the Institute of Scientific Information (ISI). But the content of the Web is not of the same nature and quality as the databases maintained by the ISI. Findings \u2013 This paper reviews how the WIF has been developed and applied. It has been suggested that Web Impact Factors can be calculated as a way of comparing the attractiveness of web sites or domains on the Web. It is concluded that, while the WIF is arguably useful for quantitative intra-country comparison, application beyond this (i.e., to inter-country assessment) has little value. Originality/value \u2013 The paper attempts to make a critical review over literature on the WIF and associated indicators."}
{"_id": "51503422e36cf69b9eed45daffdad03c60f5847a", "title": "Stock-out Prediction in Multi-echelon Networks", "text": "In multi-echelon inventory systems the performance of a given node is affected by events that occur at many other nodes and in many other time periods. For example, a supply disruption upstream will have an effect on downstream, customer-facing nodes several periods later as the disruption \"cascades\" through the system. There is very little research on stock-out prediction in single-echelon systems and (to the best of our knowledge) none on multi-echelon systems. However, in real the world, it is clear that there is significant interest in techniques for this sort of stock-out prediction. Therefore, our research aims to fill this gap by using deep neural networks (DNN) to predict stock-outs in multi-echelon supply chains."}
{"_id": "cd48c935c121c180b68967514f179e8b1e04c936", "title": "Detection of Occluding Contours and Occlusion by Active Binocular Stereo", "text": "|We propose a reliable method to detect occluding contours and occlusion by active binocular stereo with an occluding contour model that describes the correspondence between the contour of curved objects and its projected image. Applying the image ow generated by moving one camera to the model, we can restrict possible matched points seen by the other camera. We detect occluding contours by tting these matched points to the model. This method can nd occluding contours and occlusion more reliably than conventional ones because stereo matching is performed by using the geometric constraint based on the occluding contour model instead of heuristic constraints. Experimental results of the proposed method applied to the actual scene are presented."}
{"_id": "6bb63518e5a33543196489657e634d532e0b4051", "title": "Anchored Correlation Explanation: Topic Modeling with Minimal Domain Knowledge", "text": "While generative models such as Latent Dirichlet Allocation (LDA) have proven fruitful in topic modeling, they often require detailed assumptions and careful specification of hyperparameters. Such model complexity issues only compound when trying to generalize generative models to incorporate human input. We introduce Correlation Explanation (CorEx), an alternative approach to topic modeling that does not assume an underlying generative model, and instead learns maximally informative topics through an information-theoretic framework. This framework naturally generalizes to hierarchical and semi-supervised extensions with no additional modeling assumptions. In particular, word-level domain knowledge can be flexibly incorporated within CorEx through anchor words, allowing topic separability and representation to be promoted with minimal human intervention. Across a variety of datasets, metrics, and experiments, we demonstrate that CorEx produces topics that are comparable in quality to those produced by unsupervised and semi-supervised variants of LDA."}
{"_id": "4b2e99971fdc57a050a9693e95c828721f207355", "title": "Salt resistance genes revealed by functional metagenomics from brines and moderate-salinity rhizosphere within a hypersaline environment", "text": "Hypersaline environments are considered one of the most extreme habitats on earth and microorganisms have developed diverse molecular mechanisms of adaptation to withstand these conditions. The present study was aimed at identifying novel genes from the microbial communities of a moderate-salinity rhizosphere and brine from the Es Trenc saltern (Mallorca, Spain), which could confer increased salt resistance to Escherichia coli. The microbial diversity assessed by pyrosequencing of 16S rRNA gene libraries revealed the presence of communities that are typical in such environments and the remarkable presence of three bacterial groups never revealed as major components of salt brines. Metagenomic libraries from brine and rhizosphere samples, were transferred to the osmosensitive strain E. coli MKH13, and screened for salt resistance. Eleven genes that conferred salt resistance were identified, some encoding for well-known proteins previously related to osmoadaptation such as a glycerol transporter and a proton pump, whereas others encoded proteins not previously related to this function in microorganisms such as DNA/RNA helicases, an endonuclease III (Nth) and hypothetical proteins of unknown function. Furthermore, four of the retrieved genes were cloned and expressed in Bacillus subtilis and they also conferred salt resistance to this bacterium, broadening the spectrum of bacterial species in which these genes can function. This is the first report of salt resistance genes recovered from metagenomes of a hypersaline environment."}
{"_id": "e48281d7934ffd16647ac112f0356813602278a7", "title": "Process modeling for industry 4.0 applications: Towards an industry 4.0 process modeling language and method", "text": "The term Industry 4.0 derives from the new (fourth) industrial revolution enabling suppliers and manufacturers to leverage new technological concepts like Internet of Things, Big Data, and Cloud Computing: New or enhanced products and services can be created, cost be reduced and productivity be increased. Similar terms are Smart Factory or Smart Manufacturing. The ideas, concepts and technologies are not hype anymore - they are at least partly reality, but many software specification and development aspects are still not sufficiently covered, e.g. standardization, specification and modeling languages. This paper presents an Industry 4.0 process modeling language (I4PML) that is an extension (UML profile with stereotypes) of OMG's BPMN (Business Process Model and Notation) standard. We also describe a method for the specification of Industry 4.0 applications using UML and I4PML."}
{"_id": "5d02e076e23121d1a09515974febc98311e3685b", "title": "Strategies for Evaluating Software Usability", "text": "This paper presents a usage analysis and taxonomy of methods which are used to evaluate the usability of computer systems. To accommodate the analysis and taxonomy, a matrix of strategies which can be used for effective usability evaluation is presented. Such an analysis, taxonomy and strategies support human-computer interaction (HCI) professionals who have the responsibility for ensuring computer system usability. The strategies outlined are named Virtual Engineering, Soft Modelling, Hard Review and Real World. This paper also uses a composite set of existing popular generic evaluation methods which can be used as part of these strategies. The methods used are observation, questionnaire, interview, empirical methods, user groups, cognitive walkthroughs, heuristic methods, review methods and model methods. The paper continues by presenting a Usage Analysis Table of these methods and concludes by grouping them into a Taxonomy of Usability Evaluation Methods. A key emphasis of this paper is the appropriateness of individual methods to lifecycle timing."}
{"_id": "bb16aeb12a98cdabf6f6054dd16302645a474d88", "title": "Formal Query Generation for Question Answering over Knowledge Bases", "text": "Question answering (QA) systems often consist of several components such as Named Entity Disambiguation (NED), Relation Extraction (RE), and Query Generation (QG). In this paper, we focus on the QG process of a QA pipeline on a large-scale Knowledge Base (KB), with noisy annotations and complex sentence structures. We therefore propose SQG, a SPARQL Query Generator with modular architecture, enabling easy integration with other components for the construction of a fully functional QA pipeline. SQG can be used on large open-domain KBs and handle noisy inputs by discovering a minimal subgraph based on uncertain inputs, that it receives from the NED and RE components. This ability allows SQG to consider a set of candidate entities/relations, as opposed to the most probable ones, which leads to a significant boost in the performance of the QG component. The captured subgraph covers multiple candidate walks, which correspond to SPARQL queries. To enhance the accuracy, we present a ranking model based on Tree-LSTM that takes into account the syntactical structure of the question and the tree representation of the candidate queries to find the one representing the correct intention behind the question. SQG outperforms the baseline systems and achieves a macro F1-measure of 75% on the LC-QuAD dataset."}
{"_id": "6732435ab6e0eb4e0d14662cf724f3be54807a35", "title": "ECONOMICS AND IDENTITY", "text": "This paper considers how identity, a person\u2019s sense of self, affects economic outcomes. We incorporate the psychology and sociology of identity into an economic model of behavior. In the utility function we propose, identity is associated with different social categories and how people in these categories should behave. We then construct a simple game-theoretic model showing how identity can affect individual interactions. The paper adapts these models to gender discrimination in the workplace, the economics of poverty and social exclusion, and the household division of labor. In each case, the inclusion of identity substantively changes conclusions of previous economic analysis."}
{"_id": "f12829890f06758abff0f3b727b1f4c714371782", "title": "10 An Empirical Approach for the Evaluation of Voice User Interfaces", "text": "Nowadays, the convergence of devices, electronic computing, and massive media produces huge volumes of information, which demands the need for faster and more efficient interaction between users and information. How to make information access manageable, efficient, and easy becomes the major challenge for Human-Computer Interaction (HCI) researchers. The different types of computing devices, such as PDAs (personal digital assistants), tablet PCs, desktops, game consoles, and the next generation phones, provide many different modalities for information access. This makes it possible to dynamically adapt application user interfaces to the changing context. However, as applications go more and more pervasive, these devices show theirs limited input/output capacity caused by small visual displays, use of hands to operate buttons and the lack of an alphanumeric keyboard and mouse (Gu & Gilbert, 2004). Voice User Interface (VUI) systems are capable of, besides recognizing the voice of their users, to understand voice commands, and to provide responses to them, usually, in real time. The state-of-the-art in speech technology already allows the development of automatic systems designed to work in real conditions. VUI is perhaps the most critical factor in the success of any automated speech recognition (ASR) system, determining whether the user experience will be satisfying or frustrating, or even whether the customer will remain one. This chapter describes a practical methodology for creating an effective VUI design. The methodology is scientifically based on principles in linguistics, psychology, and language technology (Cohen et al. 2004; San-Segundo et al., 2005). Given the limited input/output capabilities of mobile devices, speech presents an excellent way to enter and retrieve information either alone or in combination with other modalities. Furthermore, people with disabilities should be provided with a wide range of alternative interaction modalities other than the traditional screen-mouse based desktop computing devices. Whether the disability is temporary or permanent, people with reading difficulty, visual impairment, and/or any difficulty using a keyboard, or mouse can rely on speech as an alternate approach for information access. Source: User Interfaces, Book edited by: Rita M\u00e1trai, ISBN 978-953-307-084-1, pp. 270, May 2010, INTECH, Croatia, downloaded from SCIYO.COM"}
{"_id": "8d778d57c182bff1a0a3c5e8dac83b58012c0383", "title": "Handwriting instruction in elementary schools.", "text": "OBJECTIVE\nClassroom teachers teach handwriting, but when problems arise, students are referred to occupational therapy for remediation. This study, conducted by occupational therapists, reviews handwriting instruction by classroom teachers in one school district.\n\n\nMETHOD\nTeachers from kindergarten through grade 6 were asked to complete an open-ended questionnaire regarding handwriting instruction.\n\n\nRESULTS\nTeachers differed in their methods of instruction, including in the programs and paper used, and practice provided. Teachers of grades 5 and 6 had to continue to review handwriting instruction, because all students could not fluently use handwriting as a tool of expression.\n\n\nCONCLUSION\nElementary students need structured instruction to develop the motor skill of writing. School-based occupational therapists can support effective handwriting instruction by interpreting information from motor learning theory pertaining to instruction and practice, which supports acquisition, transfer, and retention of handwriting skills. They also need to be cognizant of prior handwriting instruction when addressing handwriting difficulties."}
{"_id": "06f5a18780d7332ed68a9c786e1c597b27a8e0f6", "title": "From basic network principles to neural architecture: emergence of orientation columns.", "text": "Orientation-selective cells--cells that are selectively responsive to bars and edges at particular orientations--are a salient feature of the architecture of mammalian visual cortex. In the previous paper of this series, I showed that such cells emerge spontaneously during the development of a simple multilayered network having local but initially random feedforward connections that mature, one layer at a time, according to a simple development rule (of Hebb type). In this paper, I show that, in the presence of lateral connections between developing orientation cells, these cells self-organize into banded patterns of cells of similar orientation. These patterns are similar to the \"orientation columns\" found in mammalian visual cortex. No orientation preference is specified to the system at any stage, none of the basic developmental rules is specific to visual processing, and the results emerge even in the absence of visual input to the system (as has been observed in macaque monkey)."}
{"_id": "29026f502e747508d2251df104caebfda0a1da12", "title": "A simplified neuron model as a principal component analyzer.", "text": "A simple linear neuron model with constrained Hebbian-type synaptic modification is analyzed and a new class of unconstrained learning rules is derived. It is shown that the model neuron tends to extract the principal component from a stationary input vector sequence."}
{"_id": "705cf26ba1c3ffed659ad59e350520850a3c5934", "title": "From basic network principles to neural architecture: emergence of orientation-selective cells.", "text": "This is the second paper in a series of three that explores the emergence of several prominent features of the functional architecture of visual cortex, in a \"modular self-adaptive network\" containing several layers of cells with parallel feedforward connections whose strengths develop according to a Hebb-type correlation-rewarding rule. In the present paper I show that orientation-selective cells, similar to the \"simple\" cortical cells of Hubel and Wiesel [Hubel, D. H. & Wiesel, T. N. (1962) J. Physiol. 160, 106-154], emerge in such a network. No orientation preference is specified to the system at any stage, the orientation-selective cell layer emerges even in the absence of environmental input to the system, and none of the basic developmental rules is specific to visual processing."}
{"_id": "f5821548720901c89b3b7481f7500d7cd64e99bd", "title": "Auto-association by multilayer perceptrons and singular value decomposition", "text": "The multilayer perceptron, when working in auto-association mode, is sometimes considered as an interesting candidate to perform data compression or dimensionality reduction of the feature space in information processing applications. The present paper shows that, for auto-association, the nonlinearities of the hidden units are useless and that the optimal parameter values can be derived directly by purely linear techniques relying on singular value decomposition and low rank matrix approximation, similar in spirit to the well-known Karhunen-Lo\u00e8ve transform. This approach appears thus as an efficient alternative to the general error back-propagation algorithm commonly used for training multilayer perceptrons. Moreover, it also gives a clear interpretation of the r\u00f4le of the different parameters."}
{"_id": "3f7f4f9b51409d8dafa37666859b456c01811269", "title": "Exploratory analysis of OpenStreetMap for land use classification", "text": "In the last years, volunteers have been contributing massively to what we know nowadays as Volunteered Geographic Information. This huge amount of data might be hiding a vast geographical richness and therefore research needs to be conducted to explore their potential and use it in the solution of real world problems. In this study we conduct an exploratory analysis of data from the OpenStreetMap initiative. Using the Corine Land Cover database as reference and continental Portugal as the study area, we establish a possible correspondence between both classification nomenclatures, evaluate the quality of OpenStreetMap polygon features classification against Corine Land Cover classes from level 1 nomenclature, and analyze the spatial distribution of OpenStreetMap classes over continental Portugal. A global classification accuracy around 76% and interesting coverage areas' values are remarkable and promising results that encourages us for future research on this topic."}
{"_id": "20e86f51f90b1fa9ae48752f73a757d1272ca26a", "title": "An Empirical Study on Detecting and Fixing Buffer Overflow Bugs", "text": "Buffer overflow is one of the most common types of software security vulnerabilities. Although researchers have proposed various static and dynamic techniques for buffer overflow detection, buffer overflow attacks against both legacy and newly-deployed software systems are still quite prevalent. Compared with dynamic detection techniques, static techniques are more systematic and scalable. However, there are few studies on the effectiveness of state-of-the-art static buffer overflow detection techniques. In this paper, we perform an in-depth quantitative and qualitative study on static buffer overflow detection. More specifically, we obtain both the buggy and fixed versions of 100 buffer overflow bugs from 63 real-world projects totalling 28 MLoC (Millions of Lines of Code) based on the reports in Common Vulnerabilities and Exposures (CVE). Then, quantitatively, we apply Fortify, Checkmarx, and Splint to all the buggy versions to investigate their false negatives, and also apply them to all the fixed versions to investigate their false positives. We also qualitatively investigate the causes for the false-negatives and false-positives of studied techniques to guide the design and implementation of more advanced buffer overflow detection techniques. Finally, we also categorized the patterns of manual buffer overflow repair actions to guide automated repair techniques for buffer overflow. The experiment data is available at http://bo-study.github.io/Buffer-Overflow-Cases/."}
{"_id": "18568b54cc2e808da64b499baf740c9d907c8286", "title": "Applied Nonlinear Control of Unmanned Vehicles with Uncertain Dynamics", "text": "The presented research concerns the control of unmanned vehicles. The results introduced in this dissertation provide a solid control framework for a wide class of nonlinear uncertain systems, with a special emphasis on issues related to implementation, such as control input amplitude and rate saturation, or partial state measurements availability. More specifically, an adaptive control framework, allowing to enforce amplitude and rate saturation of the command, is developed. The motion control component of this framework, which works in conjunction with a saturation algorithm, is then specialized to different types of vehicles. Vertical take-off and landing aerial vehicles and a general class of autonomous marine vehicles are considered. A nonlinear control algorithm addressing the tracking problem for a class of underactuated, non-minimum phase marine vehicles is then introduced. This motion controller is extended, using direct and indirect adaptive techniques, to handle parametric uncertainties in the system model. Numerical simulations are used to illustrate the efficacy of the algorithms. Next, the output feedback control problem is treated, for a large class of nonlinear and uncertain systems. The proposed solution relies on a novel nonlinear observer which uses output measurements and partial knowledge of the system\u2019s dynamics to reconstruct the entire state for a wide class of nonlinear systems. The observer is then extended to operate in conjunction with a full state feedback control law and solve both the output feedback control problem and the state observation problem simultaneously. The resulting output feedback control algorithm is then adjusted to provide a high level of robustness to both parametric and structural model uncertainties. Finally, in a natural extension of these results from motion control of a single system to collaborative control of a group of vehicles, a cooperative control framework addressing limited communication issues is introduced."}
{"_id": "59139d77a5418826d75da14fbebefeed180e61f5", "title": "Multiple Sensor Fusion and Classification for Moving Object Detection and Tracking", "text": "The accurate detection and classification of moving objects is a critical aspect of advanced driver assistance systems. We believe that by including the object classification from multiple sensor detections as a key component of the object's representation and the perception process, we can improve the perceived model of the environment. First, we define a composite object representation to include class information in the core object's description. Second, we propose a complete perception fusion architecture based on the evidential framework to solve the detection and tracking of moving objects problem by integrating the composite representation and uncertainty management. Finally, we integrate our fusion approach in a real-time application inside a vehicle demonstrator from the interactIVe IP European project, which includes three main sensors: radar, lidar, and camera. We test our fusion approach using real data from different driving scenarios and focusing on four objects of interest: pedestrian, bike, car, and truck."}
{"_id": "7a360d5561aad3202b2ec4cd6ed9c5ef7aea9fe8", "title": "AN APPROACH TO THE PSYCHOLOGY OF INSTRUCTION", "text": "The relationship between a theory of learning and a theory of instruction is discussed. Examples are presented that illustrate how to proceed from a theoretical description of the learning process to the specification of an optimal strategy for carrying out instruction. The examples deal with fairly simple learning tasks and are admittedly of limited generality. Nevertheless, they clearly define the steps necessary for deriving and testing instructional strategies, thereby providing a set of procedures for analyzing more complex problems. The parameter-dependent optimization strategies are of particular importance because they take into account individual differences among learners as well as differences in difficulty among curriculum units. Experimental evaluations indicate that the parameter-dependent strategies lead to major gains in learning, when compared with strategies that do not take individual differences into account."}
{"_id": "6fce5049bfd7dbc842a65db851e061e31a9cb1cc", "title": "The Impact of Brand Equity and Innovation on the Long-Term Effectiveness of Promotions", "text": "Vol. XLV (June 2008), 293\u2013306 293 \u00a9 2008, American Marketing Association ISSN: 0022-2437 (print), 1547-7193 (electronic) *Rebecca J. Slotegraaf is Assistant Professor of Marketing and Eli Lilly Faculty Fellow, Kelley School of Business, Indiana University (e-mail: rslotegr@indiana.edu). Koen Pauwels is Associate Professor of Business Administration, Tuck School of Business, Dartmouth College (e-mail: koen.h.pauwels@dartmouth.edu). Both authors contributed equally to this research. The authors thank Kevin Keller; Don Lehmann; the two anonymous JMR reviewers; and seminar participants at Emory University, the London Business School, Rice University, University of Michigan, University of Illinois, Georgia Institute of Technology, ANZMAC, and the Marketing Dynamics Conference for their valuable comments. The authors also appreciate the generous financial support of the 3M University Relations faculty grant awarded to the first author. Michel Wedel served as associate editor for this article. REBECCA J. SLOTEGRAAF and KOEN PAUWELS*"}
{"_id": "0c83eeceee8f55fb47aed1420b5510aa185feace", "title": "Interactive Visual Exploration of a Large Spatio-temporal Dataset: Reflections on a Geovisualization Mashup.", "text": "Exploratory visual analysis is useful for the preliminary investigation of large structured, multifaceted spatio-temporal datasets. This process requires the selection and aggregation of records by time, space and attribute, the ability to transform data and the flexibility to apply appropriate visual encodings and interactions. We propose an approach inspired by geographical 'mashups' in which freely-available functionality and data are loosely but flexibly combined using de facto exchange standards. Our case study combines MySQL, PHP and the LandSerf GIS to allow Google Earth to be used for visual synthesis and interaction with encodings described in KML. This approach is applied to the exploration of a log of 1.42 million requests made of a mobile directory service. Novel combinations of interaction and visual encoding are developed including spatial 'tag clouds', 'tag maps', 'data dials' and multi-scale density surfaces. Four aspects of the approach are informally evaluated: the visual encodings employed, their success in the visual exploration of the dataset, the specific tools used and the 'mashup' approach. Preliminary findings will be beneficial to others considering using mashups for visualization. The specific techniques developed may be more widely applied to offer insights into the structure of multifarious spatio-temporal data of the type explored here."}
{"_id": "ad9fff7f1da9876ed2cabbee0fec42f9e52a2e9a", "title": "A Self-tuning Failure Detection Scheme for Cloud Computing Service", "text": "Cloud computing is an increasingly important solution for providing services deployed in dynamically scalable cloud networks. Services in the cloud computing networks may be virtualized with specific servers which host abstracted details. Some of the servers are active and available, while others are busy or heavy loaded, and the remaining are offline for various reasons. Users would expect the right and available servers to complete their application requirements. Therefore, in order to provide an effective control scheme with parameter guidance for cloud resource services, failure detection is essential to meet users' service expectations. It can resolve possible performance bottlenecks in providing the virtual service for the cloud computing networks. Most existing Failure Detector (FD) schemes do not automatically adjust their detection service parameters for the dynamic network conditions, thus they couldn't be used for actual application. This paper explores FD properties with relation to the actual and automatic fault-tolerant cloud computing networks, and find a general non-manual analysis method to self-tune the corresponding parameters to satisfy user requirements. Based on this general automatic method, we propose specific and dynamic Self-tuning Failure Detector, called SFD, as a major breakthrough in the existing schemes. We carry out actual and extensive experiments to compare the quality of service performance between the SFD and several other existing FDs. Our experimental results demonstrate that our scheme can automatically adjust SFD control parameters to obtain corresponding services and satisfy user requirements, while maintaining good performance. Such an SFD can be extensively applied to industrial and commercial usage, and it can also significantly benefit the cloud computing networks."}
{"_id": "7c55f61a6cdf053973c1b557f585583890f0eb54", "title": "Noise Compensation using Spectrogram Morphological Filtering", "text": "This paper describes the application of morphological filtering to speech spectrograms for noise robust automatic speech recognition. Speech regions of the spectrogram are identified based on the proximity of high energy regions to neighbouring high energy regions in the three-dimensional space. The process of erosion can remove noise while dilation can then restore any erroneously removed speech regions. The combination of the two techniques results in a non-linear, time-frequency filter. Automatic speech recognition experiments are performed on the AURORA database and results show an average relative improvement of 10% is delivered with the morphological filtering approach. When combined with quantile-based noise estimation and non-linear spectral subtraction, the average relative performance improvement is also 10% but with a different performance profile in terms of SNR."}
{"_id": "5828e56f81ab62ed335eab6a5b53c7e2c132c5ab", "title": "ACStream: Enforcing Access Control over Data Streams", "text": "In this demo proposal, we illustrate ACStream, a system built on top of Stream Base [1], to specify and enforce access control policies over data streams. ACStream supports a very flexible role-based access control model specifically designed to protect against unauthorized access to streaming data. The core component of ACStream is a query rewriting mechanism that, by exploiting a set of secure operators proposed by us in [2], rewrites a user query in such a way that it does not violate the specified access control policies during its execution. The demo will show how policies modelling a variety of access control requirements can be easily specified and enforced using ACStream."}
{"_id": "555e36baa734ffa4c6e4947a49e5085c990dad83", "title": "Fast Anomaly Detection for Streaming Data", "text": "This paper introduces Streaming Half-Space-Trees (HS-Trees), a fast one-class anomaly detector for evolving data streams. It requires only normal data for training and works well when anomalous data are rare. The model features an ensemble of random HS-Trees, and the tree structure is constructed without any data. This makes the method highly efficient because it requires no model restructuring when adapting to evolving data streams. Our analysis shows that Streaming HS-Trees has constant amortised time complexity and constant memory requirement. When compared with a state-of-theart method, our method performs favourably in terms of detection accuracy and runtime performance. Our experimental results also show that the detection performance of Streaming HS-Trees is not sensitive to its parameter settings."}
{"_id": "3c098162f41f452f0edc52d48c50e7602b0c6709", "title": "A method for learning matching errors for stereo computation", "text": "This paper describes a novel learning-based approach for improving the performance of stereo computation. It is based on the observation that whether the image matching scores lead to true or erroneous depth values is dependent on the original stereo images and the underlying scene structure. This function is learned from training data and is integrated into a depth estimation algorithm using the MAP-MRF framework. Because the resultant likelihood function is dependent on the states of a large neighboring region around each pixel, we propose to solve the high-order MRF inference problem using the simulated annealing algorithm combined with a Metropolis-Hastings sampler. A segmentation-based approach is proposed to accelerate the computational speed and improve the performance. Preliminary experimental results show that the learning process captures common errors in SSD matching including the fattening effect, the aperture effect, and mismatches in occluded or low texture regions. It is also demonstrated that the proposed approach significantly improves the accuracy of the depth computation."}
{"_id": "0e2bdc45c08acef3cabfd9a41bad4f79099c5166", "title": "From machu_picchu to \"rafting the urubamba river\": anticipating information needs via the entity-query graph", "text": "We study the problem of anticipating user search needs, based on their browsing activity. Given the current web page p that a user is visiting we want to recommend a small and diverse set of search queries that are relevant to the content of p, but also non-obvious and serendipitous.\n We introduce a novel method that is based on the content of the page visited, rather than on past browsing patterns as in previous literature. Our content-based approach can be used even for previously unseen pages.\n We represent the topics of a page by the set of Wikipedia entities extracted from it. To obtain useful query suggestions for these entities, we exploit a novel graph model that we call EQGraph (Entity-Query Graph), containing entities, queries, and transitions between entities, between queries, as well as from entities to queries. We perform Personalized PageRank computation on such a graph to expand the set of entities extracted from a page into a richer set of entities, and to associate these entities with relevant query suggestions. We develop an efficient implementation to deal with large graph instances and suggest queries from a large and diverse pool.\n We perform a user study that shows that our method produces relevant and interesting recommendations, and outperforms an alternative method based on reverse IR."}
{"_id": "c5dae4440044b015fd4ae8fd59aba43d7515c889", "title": "Big Data Analytics for Earth Sciences: the EarthServer approach", "text": "Large-Scale Scientific Information Systems, Jacobs University, Bremen, Germany; Rasdaman GmbH, Bremen, Germany; CNR-IIA, National Research Council of Italy, Institute of Atmospheric Pollution Research, Florence, Italy; EOX IT Services GmbH, Vienna, Austria; Consorzio COMETA, Catania, Italy; Division of Catania, Italian National Institute for Nuclear Physics, 15 Catania, Italy; Department of Physics and Astronomy, University of Catania, Catania, Italy; MEEO S.r.l., Ferrara, Italy; Plymouth Marine Laboratory, Plymouth, UK; Fraunhofer IGD, Darmstadt, Germany; Athena Research and Innovation Center in Information Communication & Knowledge Technologies, Athens, Greece; British Geological Survey, Edinburgh, UK; Software Engineering Italia S.r.l., Catania, Italy; British Geological Survey, Keyworth, UK; NASA Ames 20 Research Center, Moffett Field, CA, USA"}
{"_id": "0342313b6941e28f21d4d67cda12a40829285b03", "title": "Multi-criteria ABC analysis using artificial-intelligence-based classification techniques", "text": "ABC analysis is a popular and effective method used to classify inventory items into specific categories that can be managed and controlled separately. Conventional ABC analysis classifies inventory items three categories: A, B, or C based on annual dollar usage of an inventory item. Multi-criteria inventory classification has been proposed by a number of researchers in order to take other important criteria into consideration. These researchers have compared artificial-intelligence (AI)-based classification techniques with traditional multiple discriminant analysis (MDA). Examples of these AI-based techniques include support vector machines (SVMs), backpropagation networks (BPNs), and the k-nearest neighbor (k-NN) algorithm. To test the effectiveness of these techniques, classification results based on four benchmark techniques are compared. The results show that AI-based techniques demonstrate superior accuracy to MDA. Statistical analysis reveals that SVM enables more accurate classification than other AI-based techniques. This finding suggests the possibility of implementing AI-based techniques for multi-criteria ABC analysis in enterprise resource planning (ERP) systems. 2010 Elsevier Ltd. All rights reserved."}
{"_id": "d6f6f4695ed5fff107b5dbd65e4b8af0b22809c0", "title": "Improving Sub-Phone Modeling for Better Native Language Identification with Non-Native English Speech", "text": "Identifying a speaker\u2019s native language with his speech in a second language is useful for many human-machine voice interface applications. In this paper, we use a sub-phone-based i-vector approach to identify non-native English speakers\u2019 native languages by their English speech input. Time delay deep neural networks (TDNN) are trained on LVCSR corpora for improving the alignment of speech utterances with their corresponding sub-phonemic \u201csenone\u201d sequences. The phonetic variability caused by a speaker\u2019s native language can be better modeled with the sub-phone models than the conventional phone model based approach. Experimental results on the database released for the 2016 Interspeech ComParE Native Language challenge with 11 different L1s show that our system outperforms the best system by a large margin (87.2% UAR compared to 81.3% UAR for the best system from the 2016 ComParE challenge)."}
{"_id": "660a9612cdc5baf70c82fa1db291ded7bef39546", "title": "Software reuse research: status and future", "text": "This paper briefly summarizes software reuse research, discusses major research contributions and unsolved problems, provides pointers to key publications, and introduces four papers selected from The Eighth International Conference on Software Reuse (ICSR8)."}
{"_id": "5145a92f0ee20025f246cab0f4b6dd6d7bcbee6c", "title": "Hempseed as a nutritional resource: An overview", "text": "The seed of Cannabis sativa L. has been an important source of nutrition for thousands of years in Old World cultures. Non-drug varieties of Cannabis, commonly referred to as hemp, have not been studied extensively for their nutritional potential in recent years, nor has hempseed been utilized to any great extent by the industrial processes and food markets that have developed during the 20th century. Technically a nut, hempseed typically contains over 30% oil and about 25% protein, with considerable amounts of dietary fiber, vitamins and minerals. Hempseed oil is over 80% in polyunsaturated fatty acids (PUFAs), and is an exceptionally rich source of the two essential fatty acids (EFAs) linoleic acid (18:2 omega-6) and alpha-linolenic acid (18:3 omega-3). The omega-6 to omega-3 ratio (n6/n3) in hempseed oil is normally between 2:1 and 3:1, which is considered to be optimal for human health. In addition, the biological metabolites of the two EFAs, gamma-linolenic acid (18:3 omega-6; \u2018GLA\u2019) and stearidonic acid (18:4 omega-3; \u2018SDA\u2019), are also present in hempseed oil. The two main proteins in hempseed are edestin and albumin. Both of these high-quality storage proteins are easily digested and contain nutritionally significant amounts of all essential amino acids. In addition, hempseed has exceptionally high levels of the amino acid arginine. Hempseed has been used to treat various disorders for thousands of years in traditional oriental medicine. Recent clinical trials have identified hempseed oil as a functional food, and animal feeding studies demonstrate the long-standing utility of hempseed as an important food resource."}
{"_id": "26cd7807529f2074a385d94f4950d5166c2371d8", "title": "Are Attractive People Rewarding? Sex Differences in the Neural Substrates of Facial Attractiveness", "text": "The current study examined the neural substrates of facial attractiveness judgments. Based on the extant behavioral literature, it was hypothesized that brain regions involved in identifying the potential reward value of a stimulus would be more active when men viewed attractive women than when women viewed attractive men. To test this hypothesis, we conducted an event-related functional magnetic resonance imaging experiment during which participants provided explicit attractiveness judgments for faces of the opposite sex. These individual ratings were subsequently used to perform analyses aimed at identifying the brain regions preferentially responsive to attractive faces for both sex groups. The results revealed that brain regions comprising the putative reward circuitry (e.g., nucleus accumbens [NAcc], orbito-frontal cortex [OFC]) showed a linear increase in activation with increased judgments of attractiveness. However, further analysis also revealed sex differences in the recruitment of OFC, which distinguished attractive and unattractive faces only for male participants."}
{"_id": "6f4ba09880fb687b471672216824daccc4275db6", "title": "Compact UWB Bandpass Filter With Ultra Narrow Notched Band", "text": "A compact ultra-wideband (UWB) bandpass filter with an ultra narrow notched band is proposed using a hybrid microstrip and coplanar waveguide (CPW) structure. The CPW detached-mode resonator (DMR) composed of a quarter-wavelength (\u00bf/4) nonuniform CPW resonator with a short-stub and a \u00bf/4 single-mode CPW resonator (SMCR) can allocate three split frequencies at the lower end, middle, higher end of the UWB passband. The conventional broadside-coupled microstrip/CPW structure is introduced to improve the bandwidth enhancement around the split frequencies, which leads to good UWB operation. To avoid the interferences such as WLAN signals, the \u00bf/4 meander slot-line structure embedded in the DMR is employed to obtain the notched band inside the UWB passband. The design is then verified by experiment. Good passband and stopband performances are achieved. Specifically, the fabricated filter has a 10 dB notched fractional bandwidth (FBW) of 2.06% at the notched center frequency of 5.80 GHz."}
{"_id": "15e452861af23692c75627fa86448424340ded44", "title": "Engineering Route Planning Algorithms", "text": "Algorithms for route planning in transportation networks have recently undergone a rapid development, leading to methods that are up to three million times faster than Dijkstra\u2019s algorithm. We give an overview of the techniques enabling this development and point out frontiers of ongoing research on more challenging variants of the problem that include dynamically changing networks, time-dependent routing, and flexible objective functions."}
{"_id": "2693c442990c55f2785e4142470b047e2095a8b8", "title": "Computing the shortest path: A search meets graph theory", "text": "We propose shortest path algorithms that use A* search in combination with a new graph-theoretic lower-bounding technique based on landmarks and the triangle inequality. Our algorithms compute optimal shortest paths and work on any directed graph. We give experimental results showing that the most efficient of our new algorithms outperforms previous algorithms, in particular A* search with Euclidean bounds, by a wide margin on road networks and on some synthetic problem families."}
{"_id": "331ac2552bddae766784fd659816fe7a04d3d97a", "title": "Dynamic Highway-Node Routing", "text": "We introduce a dynamic technique for fast route planning in large road networks. For the first time, it is possible to handle the practically relevant scenarios that arise in present-day navigation systems: When an edge weight changes (e.g., due to a traffic jam), we can update the preprocessed information in 2\u201340 ms allowing subsequent fast queries in about one millisecond on average. When we want to perform only a single query, we can skip the comparatively expensive update step and directly perform a prudent query that automatically takes the changed situation into account. If the overall cost function changes (e.g., due to a different vehicle type), recomputing the preprocessed information takes typically less than two minutes. The foundation of our dynamic method is a new static approach that generalises and combines several previous speedup techniques. It has outstandingly low memory requirements of only a few bytes per node."}
{"_id": "11e907ef1dad5daead606ce6cb69ade18828cc39", "title": "Partitioning graphs to speedup Dijkstra's algorithm", "text": "We study an acceleration method for point-to-point shortest-path computations in large and sparse directed graphs with given nonnegative arc weights. The acceleration method is called the arc-flag approach and is based on Dijkstra's algorithm. In the arc-flag approach, we allow a preprocessing of the network data to generate additional information, which is then used to speedup shortest-path queries. In the preprocessing phase, the graph is divided into regions and information is gathered on whether an arc is on a shortest path into a given region. The arc-flag method combined with an appropriate partitioning and a bidirected search achieves an average speedup factor of more than 500 compared to the standard algorithm of Dijkstra on large networks (1 million nodes, 2.5 million arcs). This combination narrows down the search space of Dijkstra's algorithm to almost the size of the corresponding shortest path for long-distance shortest-path queries. We conduct an experimental study that evaluates which partitionings are best suited for the arc-flag method. In particular, we examine partitioning algorithms from computational geometry and a multiway arc separator partitioning. The evaluation was done on German road networks. The impact of different partitions on the speedup of the shortest path algorithm are compared. Furthermore, we present an extension of the speedup technique to multiple levels of partitions. With this multilevel variant, the same speedup factors can be achieved with smaller space requirements. It can, therefore, be seen as a compression of the precomputed data that preserves the correctness of the computed shortest paths."}
{"_id": "251b0baea0be8f34c7bb25aa2ee214c5875d9e64", "title": "Porcelain laminate veneers: 6- to 12-year clinical evaluation--a retrospective study.", "text": "The purpose of this study was to retrospectively evaluate the clinical performance of laminate veneers placed in the anterior segments of the dental arches over a 12-year period at two different private dental practices. Forty-six patients were restored with 182 porcelain laminate veneers. The veneers were studied for a mean observation time of 5.69 years. Color match, porcelain surface, marginal discoloration, and marginal integrity were clinically examined following modified CDA/Ryge criteria. On the basis of the criteria used, most of the veneers rated A. Risk of fracture was determined with a Kaplan-Meier survival analysis. Probability of survival of the 182 veneers was 94.4% at 12 years, with a low clinical failure rate (approximately 5.6%). Porcelain veneers must be bonded with a correct adhesive technique to reach this successful survival rate."}
{"_id": "275896f126776bc7e7bb960d169591bd5140ee68", "title": "Owlready: Ontology-oriented programming in Python with automatic classification and high level constructs for biomedical ontologies", "text": "OBJECTIVE\nOntologies are widely used in the biomedical domain. While many tools exist for the edition, alignment or evaluation of ontologies, few solutions have been proposed for ontology programming interface, i.e. for accessing and modifying an ontology within a programming language. Existing query languages (such as SPARQL) and APIs (such as OWLAPI) are not as easy-to-use as object programming languages are. Moreover, they provide few solutions to difficulties encountered with biomedical ontologies. Our objective was to design a tool for accessing easily the entities of an OWL ontology, with high-level constructs helping with biomedical ontologies.\n\n\nMETHODS\nFrom our experience on medical ontologies, we identified two difficulties: (1) many entities are represented by classes (rather than individuals), but the existing tools do not permit manipulating classes as easily as individuals, (2) ontologies rely on the open-world assumption, whereas the medical reasoning must consider only evidence-based medical knowledge as true. We designed a Python module for ontology-oriented programming. It allows access to the entities of an OWL ontology as if they were objects in the programming language. We propose a simple high-level syntax for managing classes and the associated \"role-filler\" constraints. We also propose an algorithm for performing local closed world reasoning in simple situations.\n\n\nRESULTS\nWe developed Owlready, a Python module for a high-level access to OWL ontologies. The paper describes the architecture and the syntax of the module version 2. It details how we integrated the OWL ontology model with the Python object model. The paper provides examples based on Gene Ontology (GO). We also demonstrate the interest of Owlready in a use case focused on the automatic comparison of the contraindications of several drugs. This use case illustrates the use of the specific syntax proposed for manipulating classes and for performing local closed world reasoning.\n\n\nCONCLUSION\nOwlready has been successfully used in a medical research project. It has been published as Open-Source software and then used by many other researchers. Future developments will focus on the support of vagueness and additional non-monotonic reasoning feature, and automatic dialog box generation."}
{"_id": "e5c2867c103468402e4287147bf8e7ed2426de92", "title": "Continuous Auctions and Insider Trading", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/econosoc.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission."}
{"_id": "a9c928a85843bd2e110907ced1252b9b9061650f", "title": "Simulating cartoon style animation", "text": "Traditional hand animation is in many cases superior to simulated motion for conveying information about character and events. Much of this superiority comes from an animator's ability to abstract motion and play to human perceptual effects. However, experienced animators are difficult to come by and the resulting motion is typically not interactive. On the other hand, procedural models for generating motion, such as physical simulation, can create motion on the fly but are poor at stylizing movement. We start to bridge this gap with a technique that creates cartoon style deformations automatically while preserving desirable qualities of the object's appearance and motion. Our method is focused on squash-and-stretch deformations based on the velocity and collision parameters of the object, making it suitable for procedural animation systems. The user has direct control of the object's motion through a set of simple parameters that drive specific features of the motion, such as the degree of squash and stretch. We demonstrate our approach with examples from our prototype system."}
{"_id": "17af8a709578c3881292643f3e0bdb1b3de62bbf", "title": "Multilevel resilience analysis of transportation and communication networks", "text": "For many years the research community has attempted to model the Internet in order to better understand its behaviour and improve its performance. Since much of the structural complexity of the Inter*Work performed while Egemen K. \u00c7etinkaya, Andrew M. Peck, and Justin P. Rohrer were at the University of Kansas. Egemen K. \u00c7etinkaya Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, MO 65409, USA cetinkayae@mst.edu, +1 573 341 6887 Mohammed J.F. Alenazi Department of Electrical Engineering and Computer Science, Information and Telecommunication Technology Center, The University of Kansas, Lawrence, KS 66045, USA malenazi@ittc.ku.edu Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia Andrew M. Peck apeck@ittc.ku.edu; andrewpeck11@gmail.com Justin P. Rohrer rohrej@ittc.ku.edu Department of Computer Science, Naval Postgraduate School, Monterey, CA 93943\u20135285, jprohrer@nps.edu James P.G. Sterbenz Department of Electrical Engineering and Computer Science, Information and Telecommunication Technology Center, The University of Kansas, Lawrence, KS 66045, USA https://www.ittc.ku.edu/resilinets jpgs@ittc.ku.edu, +1 508 944 3067 School of Computing and Communications, InfoLab21 Lancaster University, Lancaster, UK, jpgs@comp.lancs.ac.uk Department of Computing, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong jpgs@comp.polyu.edu.hk net is due to its multilevel operation, the Internet\u2019s multilevel nature is an important and non-trivial feature that researchers must consider when developing appropriate models. In this paper, we compare the normalised Laplacian spectra of physicaland logical-level topologies of four commercial ISPs and two research networks against the US freeway topology, and show analytically that physical level communication networks are structurally similar to the US freeway topology. We also generate synthetic Gabriel graphs of physical topologies and show that while these synthetic topologies capture the grid-like structure of actual topologies, they are more expensive than the actual physical level topologies based on a network cost model. Moreover, we introduce a distinction between geographic graphs that include degree-2 nodes needed to capture the geographic paths along which physical links follow, and structural graphs that eliminate these degree-2 nodes and capture only the interconnection properties of the physical graph and its multilevel relationship to logical graph overlays. Furthermore, we develop a multilevel graph evaluation framework and analyse the resilience of single and multilevel graphs using the flow robustness metric. We then confirm that dynamic routing performed over the lower levels helps to improve the performance of a higher level service, and that adaptive challenges more severely impact the performance of the higher levels than non-adaptive challenges."}
{"_id": "1530f3394c91d75642b0b6907455d3abaad30dcb", "title": "Hand Gesture Recognition with Leap Motion", "text": "The recent introduction of depth cameras like Leap Motion Controller allows researchers to exploit the depth information to recognize hand gesture more robustly. This paper proposes a novel hand gesture recognition system with Leap Motion Controller. A series of features are extracted from Leap Motion tracking data, we feed these features along with HOG feature extracted from sensor images into a multiclass SVM classifier to recognize performed gesture, dimension reduction and feature weighted fusion are also discussed. Our results show that our model is much more accurate than previous work."}
{"_id": "31cb1a18b77350a6cf04db2894d7534e132cd1b4", "title": "Generative Adversarial Networks in Estimation of Distribution Algorithms for Combinatorial Optimization", "text": "Estimation of Distribution Algorithms (EDAs) require flexible probability models that can be efficiently learned and sampled. Generative Ad-versarial Networks (GAN) are generative neural networks which can be trained to implicitly model the probability distribution of given data, and it is possible to sample this distribution. We integrate a GAN into an EDA and evaluate the performance of this system when solving combinatorial optimization problems with a single objective. We use several standard benchmark problems and compare the results to state-of-the-art multivariate EDAs. GAN-EDA doe not yield competitive results \u2013 the GAN lacks the ability to quickly learn a good approximation of the probability distribution. A key reason seems to be the large amount of noise present in the first EDA generations."}
{"_id": "3b3ccdbfd5d8034da0011d5fa13f1a3d10d5e2fa", "title": "Discovery of significant parameters in kidney dialysis data sets by K-means algorithm", "text": "The contributing factors for kidney dialysis such as creatinine, sodium, urea plays an important role in deciding the survival prediction of the patients as well as the need for undergoing kidney transplantation. Several attempts have been made to derive automated decision making procedure for earlier prediction. This preliminary study investigates the importance of clustering technique for identifying the influence of kidney dialysis parameters. A simple K-means algorithm is used to elicit knowledge about the interaction between many of these measured parameters and patient survival. The clustering procedure predicts the survival period of the patients who is undergoing the dialysis procedure."}
{"_id": "a374d6b042b8a0f1bb09c0c19d9cf8f9a203e5e5", "title": "Vascular Complications After Chin Augmentation Using Hyaluronic Acid", "text": "Vascular complications after hyaluronic acid (HA) filling of the chin have rarely been reported. In this report, two cases of vascular occlusion after HA augmentation of the mentum are presented. The first case involved local skin necrosis that resulted from a massive microcirculatory embolism and/or external compression of the chin skin microvasculature. The second case involved vascular compromise in the tongue that resulted from HA injection in the chin. The diagnosis was established on the basis of interventional angiography findings. Concerning the pathogenesis, we hypothesized that the filler embolus flowed into the branches of the deep lingual artery through the rich vascular anastomoses among the submental, sublingual, and deep lingual arteries, after being accidentally injected into the submental artery (or its branches). Level of Evidence V This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266."}
{"_id": "26f0fae43021f6fb8ed457e5afe5421d68982620", "title": "Clinical human factors: the need to speak up to improve patient safety.", "text": "This article aims to inspire nurses to recognise how human factors affect individual and team performance. Use of a case study and learning derived from a subsequent independent inquiry exposes the dynamics that can affect teamwork and inhibit effective communication with devastating consequences. The contribution of situational awareness and the importance of nurses speaking up when they have concerns are demonstrated as vital components in the delivery of safe patient care."}
{"_id": "5a2f792d60eebce68d69ad391dbc4211ec266d2e", "title": "Virtual Notepad: handwriting in immersive VR", "text": "We present Virtual Notepad, a collection of interface tools that allows the user to take notes, annotate documents and input text using a pen, while still immersed in virtual environments (VEs). Using a spatially-tracked, pressure-sensitive graphics tablet, pen and handwriting recognition software, Virtual Notepad explores handwriting as a new modality for interaction in immersive VEs. This paper reports details of the Virtual Notepad interface and interaction techniques, discusses implementation and design issues, reports the results of initial evaluation and overviews possible applications of virtual handwriting."}
{"_id": "5f6dcff2576d424adc8574be0a7f03309db4997a", "title": "Food Addiction and Bulimia Nervosa: New Data Based on the Yale Food Addiction Scale 2.0.", "text": "Previous research on 'food addiction' as measured with the Yale Food Addiction Scale (YFAS) showed a large overlap between addiction-like eating and bulimia nervosa. Most recently, a revised version of the YFAS has been developed according to the changes made in the diagnostic criteria for substance use disorder in the Diagnostic and Statistical Manual of Mental Disorders fifth edition. The current study examined prevalence and correlates of the YFAS2.0 in individuals with bulimia (n\u2009=\u2009115) and controls (n\u2009=\u2009341). Ninety-six per cent of participants with bulimia and 14% of controls received a YFAS2.0 diagnosis. A higher number of YFAS2.0 symptoms was associated with lower interoceptive awareness, higher depressiveness, and higher impulsivity in both groups. However, a higher number of YFAS2.0 symptoms was associated with higher body mass and weight suppression in controls only and not in participants with bulimia. The current study is the first to show a large overlap between bulimia and 'food addiction' as measured with the YFAS2.0, replicating and extending findings from studies, which used the previous version of the YFAS. Compensatory weight control behaviours in individuals with bulimia likely alleviate the association between addiction-like eating and higher body mass. Thus, the large overlap between bulimia and 'food addiction' should be taken into consideration when examining the role of addiction-like eating in weight gain and obesity. Copyright \u00a9 2016 John Wiley & Sons, Ltd and Eating Disorders Association."}
{"_id": "4945037c4aacb776c4dfacee42aae9ad50922791", "title": "Database forensic analysis through internal structure carving", "text": "Forensic tools assist analysts with recovery of both the data and system events, even from corrupted storage. These tools typically rely on \u201cfile carving\u201d techniques to restore files after metadata loss by analyzing the remaining raw file content. A significant amount of sensitive data is stored and processed in relational databases thus creating the need for database forensic tools that will extend file carving solutions to the database realm. Raw database storage is partitioned into individual \u201cpages\u201d that cannot be read or presented to the analyst without the help of the database itself. Furthermore, by directly accessing raw database storage, we can reveal things that are normally hidden from database users. There exists a number of database-specific tools developed for emergency database recovery, though not usually for forensic analysis of a database. In this paper, we present a universal tool that seamlessly supports many different databases, rebuilding table and other data content from any remaining storage fragments on disk or in memory. We define an approach for automatically (with minimal user intervention) reverse engineering storage in new databases, for detecting volatile data changes and discovering user action artifacts. Finally, we empirically verify our tool's ability to recover both deleted and partially corrupted data directly from the internal storage of different databases. \u00a9 2015 The Authors. Published by Elsevier Ltd on behalf of DFRWS. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/"}
{"_id": "186904c4f644cceddd6ffe649177787f00fcef49", "title": "Software Defined Wireless Networks: A Survey of Issues and Solutions", "text": "Wireless networks such as mobile networks, with their inflexible and expensive network infrastructure, are facing various challenges in efficiently handling the exponentially growing traffic demands of users. Hence, mobile network operators (MNOs) are looking forward to novel networking paradigms which could simplify the task of network management and control and allow faster deployment of newer solutions on top of existing hardware by software updates. Software defined networking (SDN) is a revolutionary technology which makes networks more agile and flexible by separation of data plane and control plane tasks. SDN is playing a key role in deploying mobile network services on network function virtualization (NFV) infrastructure for elastic and flexible deployment of core network services e.g., virtualization of evolved packet core in LTE to efficiently handle huge control signal overhead and traffic demands from machine-to-machine and internet of things devices. Besides, NFV and SDN are offering scalable, flexible and adaptable network service chaining platforms as a replacement for inflexible middleboxes. Mobile edge computing (MEC) is a new evolving platform which brings together IT services and cloud computing for offering end-users network aware services and solutions. In this survey, we enumerate numerous issues and challenges in designing SDN based wireless networks and review various SDN based seminal solutions for 4G/5G. Finally, this survey presents the role of SDN, NFV and MEC in designing 5G networks."}
{"_id": "1dec50169a6f60c8612abc2a729166f5127f1522", "title": "Improving Storage System Reliability with Proactive Error Prediction", "text": "This paper proposes the use of machine learning techniques to make storage systems more reliable in the face of sector errors. Sector errors are partial drive failures, where individual sectors on a drive become unavailable, and occur at a high rate in both hard disk drives and solid state drives. The data in the affected sectors can only be recovered through redundancy in the system (e.g. another drive in the same RAID) and is lost if the error is encountered while the system operates in degraded mode, e.g. during RAID reconstruction. In this paper, we explore a range of different machine learning techniques and show that sector errors can be predicted ahead of time with high accuracy. Prediction is robust, even when only little training data or only training data for a different drive model is available. We also discuss a number of possible use cases for improving storage system reliability through the use of sector error predictors. We evaluate one such use case in detail: We show that the mean time to detecting errors (and hence the window of vulnerability to data loss) can be greatly reduced by adapting the speed of a scrubber based on error predictions."}
{"_id": "3b893914c95fc708db03bcda5b2fe4e993b3442a", "title": "Planar Ultra Wideband bandpass filter using Defected Ground Structure", "text": "The design of planar Ultra Wide Band (UWB) band pass filter using periodic Defected Ground Structure (DGS) is presented. The filter is realized with interdigital coupled lines, stepped impedance open stub and slotted DGS. Interdigital coupled lines are used to widen the band width of the filter. Stub is to provide transmission zeros in upper and lower stop band where as DGS improves the return loss of the filter. It provides an insertion loss of 0.3 dB with a wide pass band from 3.1 - 10.6 GHz. It also provides better return loss of 19 dB."}
{"_id": "92a0841c8850893e4db773f887bfeb1bd97fe9fd", "title": "Naive Bayes for Text Classification with Unbalanced Classes", "text": "Abstract. Multinomial naive Bayes (MNB) is a popular method for document classification due to its computational efficiency and relatively good predictive performance. It has recently been established that predictive performance can be improved further by appropriate data transformations [1, 2]. In this paper we present another transformation that is designed to combat a potential problem with the application of MNB to unbalanced datasets. We propose an appropriate correction by adjusting attribute priors. This correction can be implemented as another data normalization step, and we show that it can significantly improve the area under the ROC curve. We also show that the modified version of MNB is very closely related to the simple centroid-based classifier and compare the two methods empirically."}
{"_id": "09be020a9738464799740602d7cf3273c1416c6a", "title": "Texture synthesis using convolutional neural networks with long-range consistency and spectral constraints", "text": "Procedural texture generation enables the creation of more rich and detailed virtual environments without the help of an artist. However, finding a flexible generative model of real world textures remains an open problem. We present a novel Convolutional Neural Network based texture model consisting of two summary statistics (the Gramian and Translation Gramian matrices), as well as spectral constraints. We investigate the Fourier Transform or Window Fourier Transform in applying spectral constraints, and find that the Window Fourier Transform improved the quality of the generated textures. We demonstrate the efficacy of our system by comparing generated output with that of related state of the art systems."}
{"_id": "916823fe0525f8db8bf5a6863bcab1c7077ff59e", "title": "Properties and performance of a center/surround retinex", "text": "The last version of Land's (1986) retinex model for human vision's lightness and color constancy has been implemented and tested in image processing experiments. Previous research has established the mathematical foundations of Land's retinex but has not subjected his lightness theory to extensive image processing experiments. We have sought to define a practical implementation of the retinex without particular concern for its validity as a model for human lightness and color perception. We describe the trade-off between rendition and dynamic range compression that is governed by the surround space constant. Further, unlike previous results, we find that the placement of the logarithmic function is important and produces best results when placed after the surround formation. Also unlike previous results, we find the best rendition for a \"canonical\" gain/offset applied after the retinex operation. Various functional forms for the retinex surround are evaluated, and a Gaussian form is found to perform better than the inverse square suggested by Land. Images that violate the gray world assumptions (implicit to this retinex) are investigated to provide insight into cases where this retinex fails to produce a good rendition."}
{"_id": "9fa3c3f1fb6f1566638f97fcb993fe121646433e", "title": "Real-time single image dehazing using block-to-pixel interpolation and adaptive dark channel prior", "text": ""}
{"_id": "2d6d5cfa8e99dd53e50bf870e24e72b0be7f7aeb", "title": "Decision Combination in Multiple Classifier Systems", "text": "A multiple classifier system is a powerful solution to difficult pattern recognition problems involving large class sets and noisy input because it allows simultaneous use of arbitrary feature descriptors and classification procedures. Decisions by the classifiers can be represented as rankings of classes so that they are comparable across different types of classifiers and different instances of a problem. The rankings can be combined by methods that either reduce or rerank a given set of classes. An intersection method and a union method are proposed for class set reduction. Three methods based on the highest rank, the Borda count, and logistic regression are proposed for class set reranking. These methods have been tested in applications on degraded machine-printed characters and words from large lexicons, resulting in substantial improvement in overall correctness."}
{"_id": "6baaf1b2dc375e21a8ca8e8d17e5dc9d7483f4e8", "title": "Keyword Search on Spatial Databases", "text": "Many applications require finding objects closest to a specified location that contains a set of keywords. For example, online yellow pages allow users to specify an address and a set of keywords. In return, the user obtains a list of businesses whose description contains these keywords, ordered by their distance from the specified address. The problems of nearest neighbor search on spatial data and keyword search on text data have been extensively studied separately. However, to the best of our knowledge there is no efficient method to answer spatial keyword queries, that is, queries that specify both a location and a set of keywords. In this work, we present an efficient method to answer top-k spatial keyword queries. To do so, we introduce an indexing structure called IR2-Tree (Information Retrieval R-Tree) which combines an R-Tree with superimposed text signatures. We present algorithms that construct and maintain an IR2-Tree, and use it to answer top-k spatial keyword queries. Our algorithms are experimentally compared to current methods and are shown to have superior performance and excellent scalability."}
{"_id": "271c1be41d746e146c62313f810476daa21523bf", "title": "Imitation, mirror neurons and autism", "text": "Various deficits in the cognitive functioning of people with autism have been documented in recent years but these provide only partial explanations for the condition. We focus instead on an imitative disturbance involving difficulties both in copying actions and in inhibiting more stereotyped mimicking, such as echolalia. A candidate for the neural basis of this disturbance may be found in a recently discovered class of neurons in frontal cortex, 'mirror neurons' (MNs). These neurons show activity in relation both to specific actions performed by self and matching actions performed by others, providing a potential bridge between minds. MN systems exist in primates without imitative and 'theory of mind' abilities and we suggest that in order for them to have become utilized to perform social cognitive functions, sophisticated cortical neuronal systems have evolved in which MNs function as key elements. Early developmental failures of MN systems are likely to result in a consequent cascade of developmental impairments characterised by the clinical syndrome of autism."}
{"_id": "7181a8918e34f13402ec8ecf5ff92dff9ddc03f9", "title": "Synergistic Image and Feature Adaptation: Towards Cross-Modality Domain Adaptation for Medical Image Segmentation", "text": "This paper presents a novel unsupervised domain adaptation framework, called Synergistic Image and Feature Adaptation (SIFA), to effectively tackle the problem of domain shift. Domain adaptation has become an important and hot topic in recent studies on deep learning, aiming to recover performance degradation when applying the neural networks to new testing domains. Our proposed SIFA is an elegant learning diagram which presents synergistic fusion of adaptations from both image and feature perspectives. In particular, we simultaneously transform the appearance of images across domains and enhance domain-invariance of the extracted features towards the segmentation task. The feature encoder layers are shared by both perspectives to grasp their mutual benefits during the end-to-end learning procedure. Without using any annotation from the target domain, the learning of our unified model is guided by adversarial losses, with multiple discriminators employed from various aspects. We have extensively validated our method with a challenging application of crossmodality medical image segmentation of cardiac structures. Experimental results demonstrate that our SIFA model recovers the degraded performance from 17.2% to 73.0%, and outperforms the state-of-the-art methods by a significant margin."}
{"_id": "be61fc5903f22384c90b94013a51448e8d5610eb", "title": "Lower bounds by probabilistic arguments", "text": "The purpose of this paper is to resolve several open problems in the current literature on Boolean circuits, communication complexity, and hashing functions. These lower bound results share the common feature that their proofs utilize probabilistic arguments in an essential way. Specifically, we prove that, to compute the majority function of n Boolean variables, the size of any depth-3 monotone circuit must be greater than 2n\u03b5, and the size of any width-2 branching program must have super-polynomial growth. We also show that, for the problem of deciding whether i \u2264 j for two n-bit integers i and j, the probabilistic \u03b5-error one-way communication complexity is of order \u03b8(n), while the two-way \u03b5-error complexity is O((log n)2). We will also prove that, to compute i \u00bf j mod p for an n-bit prime p, the probabilistic \u03b5-error two-way communication complexity is of order \u03b8(n). Finally, we prove a conjecture of Ullman that uniform hashing is asymptotically optimal in its expected retrieval cost among open address hashing schemes."}
{"_id": "d632593703a604d9ab27e9310ecd9a849d405346", "title": "Analysis of Load Balancing Techniques in Cloud Computing", "text": "Cloud Computing is an emerging computing paradigm. It aims to share data, calculations, and service transparently over a scalable network of nodes. Since Cloud computing stores the data and disseminated resources in the open environment. So, the amount of data storage increases quickly. In the cloud storage, load balancing is a key issue. It would consume a lot of cost to maintain load information, since the system is too huge to timely disperse load. Load balancing is one of the main challenges in cloud computing which is required to distribute the dynamic workload across multiple nodes to ensure that no single node is overwhelmed. It helps in optimal utilization of resources and hence in enhancing the performance of the system. A few existing scheduling algorithms can maintain load balancing and provide better strategies through efficient job scheduling and resource allocation techniques as well. In order to gain maximum profits with optimized load balancing algorithms, it is necessary to utilize resources efficiently. This paper discusses some of the existing load balancing algorithms in cloud computing and also their challenges."}
{"_id": "1464776f20e2bccb6182f183b5ff2e15b0ae5e56", "title": "Benchmarking Deep Reinforcement Learning for Continuous Control", "text": "Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released open-source in order to facilitate experimental reproducibility and to encourage adoption by other researchers."}
{"_id": "eae004dd2cd6ef7e31824ac1f50473812101fda0", "title": "Leaky wave antenna integrated into gap waveguide technology", "text": "A novel leaky wave antenna, based on the gap waveguide technology, is here proposed. A groove gap-waveguide is used as feeding and it also acts as antenna at the same time. The proposed antenna provides an excellent performance while maintaining a simple design. To demonstrate the potential of this radiation mechanism, an antenna was designed to operate between 9 GHz and 11.5 GHz. The antenna has a gain around 12 dB and provides a radiation pattern that steers its radiation direction with the frequency."}
{"_id": "40388bf28adc81ee9208217e7966e9b0b8e81456", "title": "What is Gab: A Bastion of Free Speech or an Alt-Right Echo Chamber", "text": "Over the past few years, a number of new \u201cfringe\u201d communities, like 4chan or certain subreddits, have gained traction on the Web at a rapid pace. However, more often than not, little is known about how they evolve or what kind of activities they attract, despite recent research has shown that they influence how false information reaches mainstream communities. This motivates the need to monitor these communities and analyze their impact on the Web\u2019s information ecosystem. In August 2016, a new social network called Gab was created as an alternative to Twitter. It positions itself as putting \u201cpeople and free speech first\u201d, welcoming users banned or suspended from other social networks. In this paper, we provide, to the best of our knowledge, the first characterization of Gab. We collect and analyze 22M posts produced by 336K users between August 2016 and January 2018, finding that Gab is predominantly used for the dissemination and discussion of news and world events, and that it attracts alt-right users, conspiracy theorists, and other trolls. We also measure the prevalence of hate speech on the platform, finding it to be much higher than Twitter, but lower than 4chan\u2019s Politically Incorrect board."}
{"_id": "0c6345c83c6454d26124c583f8d00b935eefefba", "title": "Lesion-specific coronary artery calcium quantification better predicts cardiac events", "text": "CT-based coronary artery calcium (CAC) scanning has been introduced as a non-invasive, low-radiation imaging technique for the assessment of the overall coronary arterial atherosclerotic burden. A three dimensional CAC volume contains significant clinically relevant information, which is unused by conventional whole-heart CAC quantification methods. In this paper, we have developed a more detailed distance-weighted lesion-specific CAC quantification framework that predicts cardiac events better than the conventional whole-heart CAC measures. This framework consists of (1) a novel lesion-specific CAC quantification tool that measures each calcific lesion's attenuation, morphologic and geometric statistics; (2) a distance-weighted event risk model to estimate the risk probability caused by each lesion, and (3) a Naive Bayesian technique for risk integration. We have tested our lesion-specific event predictor on 30 CAC positive scans (10 with events and 20 without events), and compared it with conventional whole-heart CAC scores. Experiment results showed our novel approach significantly improves the prediction accuracy, including AUC of ROC analysis was improved from 66 \u223c 68% to 75%, and sensitivities was improved by 20 \u223c 30% at the cutpoints of 80% specificity."}
{"_id": "a190742917469c60e96e401140a73de5548be360", "title": "Link prediction in bipartite graphs using internal links and weighted projection", "text": "Many real-world complex networks, like client-product or file-provider relations, have a bipartite nature and evolve during time. Predicting links that will appear in them is one of the main approach to understand their dynamics. Only few works address the bipartite case, though, despite its high practical interest and the specific challenges it raises. We define in this paper the notion of internal links in bipartite graphs and propose a link prediction method based on them. We describe the method and experimentally compare it to a basic collaborative filtering approach. We present results obtained for two typical practical cases. We reach the conclusion that our method performs very well, and that internal links play an important role in bipartite graphs and their dynamics."}
{"_id": "b25613704201c0c9dbbb8223c61cd3cbb0c51168", "title": "Modeling social interestingness in conversational stories", "text": "Telling stories about our daily lives is one of the most ubiquitous, consequential and seamless ways in which we socialize. Current narrative generation methods mostly require specification of a priori knowledge or comprehensive domain models, which are not generalizable across contexts. Hence, such approaches do not lend themselves well to new and unpredictable domains of observation and interaction, in which social stories usually occur. In this paper, we describe a methodology for categorizing event descriptions as being socially interesting. The event sequences are drawn from crowd-sourced Plot Graphs. The models include low-level natural language and higher-level features. The results from classification and regression tasks look promising overall, indicating that general metrics of social interestingness of stories could be modeled for sociable agents."}
{"_id": "682a6668e853992ef57b2d20f4e5cb67f09aa87f", "title": "Depth-of-field-based alpha-matte extraction", "text": "In compositing applications, objects depicted in images frequently have to be separated from their background, so that they can be placed in a new environment. Alpha mattes are important tools aiding the selection of objects, but cannot normally be created in a fully automatic way. We present an algorithm that requires as input two images---one where the object is in focus, and one where the background is in focus---and then automatically produces an alpha matte indicating which pixels belong to the object. This algorithm is inspired by human visual processing and involves nonlinear response compression, center-surround mechanisms as well as a filling-in stage. The output can then be refined with standard computer vision techniques."}
{"_id": "f8f92624c8794d54e08b3a8f94910952ae03cade", "title": "CamStyle: A Novel Data Augmentation Method for Person Re-Identification", "text": "Person re-identification (re-ID) is a cross-camera retrieval task that suffers from image style variations caused by different cameras. The art implicitly addresses this problem by learning a camera-invariant descriptor subspace. In this paper, we explicitly consider this challenge by introducing camera style (CamStyle). CamStyle can serve as a data augmentation approach that reduces the risk of deep network overfitting and that smooths the CamStyle disparities. Specifically, with a style transfer model, labeled training images can be style transferred to each camera, and along with the original training samples, form the augmented training set. This method, while increasing data diversity against overfitting, also incurs a considerable level of noise. In the effort to alleviate the impact of noise, the label smooth regularization (LSR) is adopted. The vanilla version of our method (without LSR) performs reasonably well on few camera systems in which overfitting often occurs. With LSR, we demonstrate consistent improvement in all systems regardless of the extent of overfitting. We also report competitive accuracy compared with the state of the art on Market-1501 and DukeMTMC-re-ID. Importantly, CamStyle can be employed to the challenging problems of one view learning and unsupervised domain adaptation (UDA) in person re-identification (re-ID), both of which have critical research and application significance. The former only has labeled data in one camera view and the latter only has labeled data in the source domain. Experimental results show that CamStyle significantly improves the performance of the baseline in the two problems. Specially, for UDA, CamStyle achieves state-of-the-art accuracy based on a baseline deep re-ID model on Market-1501 and DukeMTMC-reID. Our code is available at: https://github.com/zhunzhong07/CamStyle."}
{"_id": "42fdf2999c46babe535974e14375fbb224445757", "title": "Comparative accuracies of artificial neural networks and discriminant analysis in predicting forest cover types from cartographic variables", "text": "This study compared two alternative techniques for predicting forest cover types from cartographic variables. The study evaluated four wilderness areas in the Roosevelt National Forest, located in the Front Range of northern Colorado. Cover type data came from US Forest Service inventory information, while the cartographic variables used to predict cover type consisted of elevation, aspect, and other information derived from standard digital spatial data processed in a geographic information system (GIS). The results of the comparison indicated that a feedforward artificial neural network model more accurately predicted forest cover type than did a traditional statistical model based on Gaussian discriminant analysis. \u00a9 1999 Elsevier Science B.V. All rights reserved."}
{"_id": "d4884e0c1059046b42c0770f051e485275b93724", "title": "Integrating Algorithmic Planning and Deep Learning for Partially Observable Navigation", "text": "We propose to take a novel approach to robot system design where each building block of a larger system is represented as a differentiable program, i.e. a deep neural network. This representation allows for integrating algorithmic planning and deep learning in a principled manner, and thus combine the benefits of model-free and model-based methods. We apply the proposed approach to a challenging partially observable robot navigation task. The robot must navigate to a goal in a previously unseen 3-D environment without knowing its initial location, and instead relying on a 2-D floor map and visual observations from an onboard camera. We introduce the Navigation Networks (NavNets) that encode state estimation, planning and acting in a single, end-to-end trainable recurrent neural network. In preliminary simulation experiments we successfully trained navigation networks to solve the challenging partially observable navigation task."}
{"_id": "1c66fa1f3f189ca25bd657f53d77fde64c75f3da", "title": "Adaptive Color Attributes for Real-Time Visual Tracking", "text": "Visual tracking is a challenging problem in computer vision. Most state-of-the-art visual trackers either rely on luminance information or use simple color representations for image description. Contrary to visual tracking, for object recognition and detection, sophisticated color features when combined with luminance have shown to provide excellent performance. Due to the complexity of the tracking problem, the desired color feature should be computationally efficient, and possess a certain amount of photometric invariance while maintaining high discriminative power. This paper investigates the contribution of color in a tracking-by-detection framework. Our results suggest that color attributes provides superior performance for visual tracking. We further propose an adaptive low-dimensional variant of color attributes. Both quantitative and attribute-based evaluations are performed on 41 challenging benchmark color sequences. The proposed approach improves the baseline intensity-based tracker by 24 % in median distance precision. Furthermore, we show that our approach outperforms state-of-the-art tracking methods while running at more than 100 frames per second."}
{"_id": "fb340484981edfa25965c8b5e9751a1e3e7b5a41", "title": "SDN-Based Data Center Networking With Collaboration of Multipath TCP and Segment Routing", "text": "Large-scale data centers are major infrastructures in the big data era. Therefore, a stable and optimized architecture is required for data center networks (DCNs) to provide services to the applications. Many studies use software-defined network (SDN)-based multipath TCP (MPTCP) implementation to utilize the entire DCN\u2019s performance and achieve good results. However, the deployment cost is high. In SDN-based MPTCP solutions, the flow allocation mechanism leads to a large number of forwarding rules, which may lead to storage consumption. Considering the advantages and limitations of the SDN-based MPTCP solution, we aim to reduce the deployment cost due to the use of an extremely expensive storage resource\u2014ternary content addressable memory (TCAM). We combine MPTCP and segment routing (SR) for traffic management to limit the storage requirements. And to the best of our knowledge, we are among the first to use the collaboration of MPTCP and SR in multi-rooted DCN topologies. To explain how MPTCP and SR work together, we use four-layer DCN architecture for better description, which contains physical topology, SR over the topology, multiple path selection supplied by MPTCP, and traffic scheduling on the selected paths. Finally, we implement the proposed design in a simulated SDN-based DCN environment. The simulation results reveal the great benefits of such a collaborative approach."}
{"_id": "20d2438efcd5c21b13f9c226f023506ee98411f2", "title": "Visualization Techniques for Mining Large Databases: A Comparison", "text": "Visual data mining techniques ha ve proven to be of high v alue in exploratory data analysis and the y also have a high potential for mining lar ge databases. In this article, we describe and e valuate a ne w visualization-based approach to mining lar ge databases. The basic idea of our visual data mining techniques is to represent as man y d ta items as possible on the screen at the same time by mapping each data v alue o a pixel of the screen and arranging the pixels adequately . The major goal of this article is to e valuate our visual data mining techniques and to compare them to other well-kno wn visualization techniques for multidimensional data: the parallel coordinate and stick figure visualization techniques. F or the evaluation of visual data mining techniques, in the first place the perception of properties of the data counts, and only in the second place the CPU time and the number of secondary storage accesses are important. In addition to testing the visualization techniques using real data, we de velop d a testing environment for database visualizations similar to the benchmark approach used for comparing the performance of database systems. The testing en vironment allows the generation of test data sets with predefined data characteristics which are important for comparing the perceptual abilities of visual data mining techniques."}
{"_id": "579eb8d73a0b9739b48e34c477c097ebe72d6311", "title": "Angular Brushing of Extended Parallel Coordinates", "text": "In this paper we present angularbrushingfor parallel coordinates (PC) as a new approach to high-light rational dataproperties, i.e., features which depend on two data dimensions (instead of one). We also demonstrate smoothbrushing of PC as an intuitive tool to specify non-binary degreeof-interest functions (then used for F+C visualization). We also shortly describe our implementation as well as its application to the visualization of CFD data."}
{"_id": "22f06b536105fca8a9448f1f04b83563a123b587", "title": "Inkjet-Configurable Gate Arrays (IGA)", "text": "Implementation of organic digital circuits (or printed electronic circuits) has been under an extensive investigation, facing some critical challenges such as process variability, device performance, cell design styles and circuit yield. Failure in any single Organic Thin Film Transistor (OTFT) often causes the whole circuit to fail since integration density is still low. For the same reason, the application of fault tolerant techniques is not that popular in these circuits. In this paper, we propose an approach for the direct mapping of digital functions on top of new prefabricated structures: Inkjet-Configurable Gate Arrays (IGA). This alternative has two main advantages. First, it helps to obtain high yield circuits out of mid-yield foils, and second, it allows implementing individual circuit personalization at a very low cost by using additive mask-less printing techniques thus avoiding the need for OTPROM-like (or E2PROM) devices. All along the design process of IGA cells and structures we used the scalability and technology-independent strategies provided by parameterizable cells (PCell) what helps dealing with current fast technology evolution."}
{"_id": "0d60c710406c5f3c1ffa2c225e1b901efa3a7d2e", "title": "A State-of-the-Art Survey on Real-Time Issues in Embedded Systems Virtualization", "text": "Virtualization has gained great acceptance in the server and cloud computing arena. In recent years, it has also been widely applied to real-time embedded systems with stringent timing constraints. We present a comprehensive survey on real-time issues in virtualization for embedded systems, covering popular virtualization systems including KVM, Xen, L4 and others."}
{"_id": "6772164c3dd4ff6e71ba58c5c4c22fa092b9fe55", "title": "Recent advances in deep learning for speech research at Microsoft", "text": "Deep learning is becoming a mainstream technology for speech recognition at industrial scale. In this paper, we provide an overview of the work by Microsoft speech researchers since 2009 in this area, focusing on more recent advances which shed light to the basic capabilities and limitations of the current deep learning technology. We organize this overview along the feature-domain and model-domain dimensions according to the conventional approach to analyzing speech systems. Selected experimental results, including speech recognition and related applications such as spoken dialogue and language modeling, are presented to demonstrate and analyze the strengths and weaknesses of the techniques described in the paper. Potential improvement of these techniques and future research directions are discussed."}
{"_id": "87fab32403d27e9b46effc69b7d55d65706ca26f", "title": "An automatic vision-based malaria diagnosis system.", "text": "Malaria is a worldwide health problem with 225 million infections each year. A fast and easy-to-use method, with high performance is required to differentiate malaria from non-malarial fevers. Manual examination of blood smears is currently the gold standard, but it is time-consuming, labour-intensive, requires skilled microscopists and the sensitivity of the method depends heavily on the skills of the microscopist. We propose an easy-to-use, quantitative cartridge-scanner system for vision-based malaria diagnosis, focusing on low malaria parasite densities. We have used special finger-prick cartridges filled with acridine orange to obtain a thin blood film and a dedicated scanner to image the cartridge. Using supervised learning, we have built a Plasmodium falciparum detector. A two-step approach was used to first segment potentially interesting areas, which are then analysed in more detail. The performance of the detector was validated using 5,420 manually annotated parasite images from malaria parasite culture in medium, as well as using 40 cartridges of 11,780 images containing healthy blood. From finger prick to result, the prototype cartridge-scanner system gave a quantitative diagnosis in 16 min, of which only 1 min required manual interaction of basic operations. It does not require a wet lab or a skilled operator and provides parasite images for manual review and quality control. In healthy samples, the image analysis part of the system achieved an overall specificity of 99.999978% at the level of (infected) red blood cells, resulting in at most seven false positives per microlitre. Furthermore, the system showed a sensitivity of 75% at the cell level, enabling the detection of low parasite densities in a fast and easy-to-use manner. A field trial in Chittagong (Bangladesh) indicated that future work should primarily focus on improving the filling process of the cartridge and the focus control part of the scanner."}
{"_id": "4b1a998af284a12e8ac17b3b036d77f933c1f0fe", "title": "Geocoding location expressions in Twitter messages: A preference learning method", "text": "Resolving location expressions in text to the correct physical location, also known as geocoding or grounding, is complicated by the fact that so many places around the world share the same name. Correct resolution is made even more difficult when there is little context to determine which place is intended, as in a 140-character Twitter message, or when location cues from different sources conflict, as may be the case among different metadata fields of a Twitter message. We used supervised machine learning to weigh the different fields of the Twitter message and the features of a world gazetteer to create a model that will prefer the correct gazetteer candidate to resolve the extracted expression. We evaluated our model using the F1 measure and compared it to similar algorithms. Our method achieved results higher than state-of-the-art competitors."}
{"_id": "700fca96469c56254562bfdd9ac6e519c4660e94", "title": "FREEFORM ORIGAMI TESSELLATIONS BY GENERALIZING RESCH \u2019 S PATTERNS", "text": "In this research, we study a method to produce families of origami tessellations from given polyhedral surfaces. The resulting tessellated surfaces generalize the patterns proposed by Ron Resch and allow the construction of an origami tessellation that approximates a given surface. We will achieve these patterns by first constructing an initial configuration of the tessellated surfaces by separating each facets and inserting folded parts between them based on the local configuration. The initial configuration is then modified by solving the vertex coordinates to satisfy geometric constraints of developability, folding angle limitation, and local non-intersection. We propose a novel robust method for avoiding intersections between facets sharing vertices. Such generated polyhedral surfaces are not only applied to folding paper but also sheets of metal that does not allow 180\u25e6 folding."}
{"_id": "1519790fdcaa4373236e5100ebadbe2e36eb36b9", "title": "On Scalable and Robust Truth Discovery in Big Data Social Media Sensing Applications", "text": "Identifying trustworthy information in the presence of noisy data contributed by numerous unvetted sources from online social media (e.g., Twitter, Facebook, and Instagram) has been a crucial task in the era of big data. This task, referred to as truth discovery, targets at identifying the reliability of the sources and the truthfulness of claims they make without knowing either a priori. In this work, we identified three important challenges that have not been well addressed in the current truth discovery literature. The first one is \u201cmisinformation spread\u201d where a significant number of sources are contributing to false claims, making the identification of truthful claims difficult. For example, on Twitter, rumors, scams, and influence bots are common examples of sources colluding, either intentionally or unintentionally, to spread misinformation and obscure the truth. The second challenge is \u201cdata sparsity\u201d or the \u201clong-tail phenomenon\u201d where a majority of sources only contribute a small number of claims, providing insufficient evidence to determine those sources\u2019 trustworthiness. For example, in the Twitter datasets that we collected during real-world events, more than 90% of sources only contributed to a single claim. Third, many current solutions are not scalable to large-scale social sensing events because of the centralized nature of their truth discovery algorithms. In this paper, we develop a Scalable and Robust Truth Discovery (SRTD) scheme to address the above three challenges. In particular, the SRTD scheme jointly quantifies both the reliability of sources and the credibility of claims using a principled approach. We further develop a distributed framework to implement the proposed truth discovery scheme using Work Queue in an HTCondor system. The evaluation results on three real-world datasets show that the SRTD scheme significantly outperforms the state-of-the-art truth discovery methods in terms of both effectiveness and efficiency."}
{"_id": "97a6913dbe4d78e3c07fdfb86628d77bb6288502", "title": "Dense Multimodal Fusion for Hierarchically Joint Representation", "text": "Multiple modalities can provide more valuable information than single one by describing the same contents in various ways. Hence, it is highly expected to learn effective joint representation by fusing the features of different modalities. However, previous methods mainly focus on fusing the shallow features or high-level representations generated by unimodal deep networks, which only capture part of the hierarchical correlations across modalities. In this paper, we propose to densely integrate the representations by greedily stacking multiple shared layers between different modality-specific networks, which is named as Dense Multimodal Fusion (DMF). The joint representations in different shared layers can capture the correlations in different levels, and the connection between shared layers also provides an efficient way to learn the dependence among hierarchical correlations. These two properties jointly contribute to the multiple learning paths in DMF, which results in faster convergence, lower training loss, and better performance. We evaluate our model on three typical multimodal learning tasks, including audiovisual speech recognition, cross-modal retrieval, and multimodal classification. The noticeable performance in the experiments demonstrates that our model can learn more effective joint representation."}
{"_id": "f612f3c61c2cd092dee2bdb05d80a0e09b97c2f6", "title": "Hiding Behind the Shoulders of Giants: Abusing Crawlers for Indirect Web Attacks", "text": "It could be argued that without search engines, the web would have never grown to the size that it has today. To achieve maximum coverage and provide relevant results, search engines employ large armies of autonomous crawlers that continuously scour the web, following links, indexing content, and collecting features that are then used to calculate the ranking of each page. In this paper, we describe how autonomous crawlers can be abused by attackers to exploit vulnerabilities on thirdparty websites while hiding the true origin of the attacks. Moreover, we show how certain vulnerabilities on websites that are currently deemed unimportant, can be abused in a way that would allow an attacker to arbitrarily boost the rankings of malicious websites in the search results of popular search engines. Motivated by the potentials of these vulnerabilities, we propose a series of preventive and defensive countermeasures that website owners and search engines can adopt to minimize, or altogether eliminate, the effects of crawler-abusing attacks."}
{"_id": "d93dd8ac7691967d2de139e2f701dcb28f69abdd", "title": "Current advances in the fabrication of microneedles for transdermal delivery.", "text": "The transdermal route is an excellent site for drug delivery due to the avoidance of gastric degradation and hepatic metabolism, in addition to easy accessibility. Although offering numerous attractive advantages, many available transdermal systems are not able to deliver drugs and other compounds as desired. The use of hypodermic needles, associated with phobia, pain and accidental needle-sticks has been used to overcome the delivery limitation of macromolecular compounds. The means to overcome the disadvantages of hypodermic needles has led to the development of microneedles for transdermal delivery. However, since the initial stages of microneedle fabrication, recent research has been conducted integrating various fabrication techniques for generating sophisticated microneedle devices for transdermal delivery including progress on their commercialization. A concerted effort has been made within this review to highlight the current advances of microneedles, and to provide an update of pharmaceutical research in the field of microneedle-assisted transdermal drug delivery systems."}
{"_id": "efb20efeb70822b2d9e42542f22793ebbedd5906", "title": "Applications of data mining in robotics", "text": "From a high-level viewpoint, robotics is all about data. In other words, each robot agent for performing its designated tasks should deal with data in some form - from collecting data from the environment to generating internal data. For this reason and due to broad range of possible applications, managing and dealing with such data is an important issue in the field. On the other side, data - in its different types - is known as an invaluable treasure of knowledge and insight within any information system. In this regard, extracting such hidden knowledge through data mining has been in the center of focus for more than two decades. In fact, data mining could be considered as a super-influential phenomenon on all scientific fields and robotics is not an exception. Over the recent years, many studies have been conducted on different aspects of leveraging data mining techniques in the robotic domain to improve performance of robots in various ways. However, despite the current disparate and unorganized studies in different directions, there are some potential areas that have not received enough attention. In this paper, we provide a high-level overview of the field and try to organize the current studies as a reference point and road map for future works. Moreover, challenges and future possible directions will be studied."}
{"_id": "15bb79dc0da9c86bf98d8f787d3be2e5df5c8f5b", "title": "A carpet cloak for visible light.", "text": "We report an invisibility carpet cloak device, which is capable of making an object undetectable by visible light. The cloak is designed using quasi conformal mapping and is fabricated in a silicon nitride waveguide on a specially developed nanoporous silicon oxide substrate with a very low refractive index (n<1.25). The spatial index variation is realized by etching holes of various sizes in the nitride layer at deep subwavelength scale creating a local effective medium index. The fabricated device demonstrates wideband invisibility throughout the visible spectrum with low loss. This silicon nitride on low index substrate can also be a general scheme for implementation of transformation optical devices at visible frequencies."}
{"_id": "3b4287e9a8338ac1f900b1e598816ee7fb22d72b", "title": "Tag suggestion and localization in user-generated videos based on social knowledge", "text": "Nowadays, almost any web site that provides means for sharing user-generated multimedia content, like Flickr, Facebook, YouTube and Vimeo, has tagging functionalities to let users annotate the material that they want to share. The tags are then used to retrieve the uploaded content, and to ease browsing and exploration of these collections, e.g. using tag clouds. However, while tagging a single image is straightforward, and sites like Flickr and Facebook allow also to tag easily portions of the uploaded photos, tagging a video sequence is more cumbersome, so that users just tend to tag the overall content of a video. Moreover, the tagging process is completely manual, and often users tend to spend as few time as possible to annotate the material, resulting in a sparse annotation of the visual content. A semi-automatic process, that helps the users to tag a video sequence would improve the quality of annotations and thus the overall user experience. While research on image tagging has received a considerable attention in the latest years, there are still very few works that address the problem of automatically assigning tags to videos, locating them temporally within the video sequence. In this paper we present a system for video tag suggestion and temporal localization based on collective knowledge and visual similarity of frames. The algorithm suggests new tags that can be associated to a given keyframe exploiting the tags associated to videos and images uploaded to social sites like YouTube and Flickr and visual features."}
{"_id": "65b56807239cca8067b28669843dd21a22458dd9", "title": "ESD protection circuit schemes for DDR3 DQ drivers", "text": "The high-speed interface of DQ pins in DDR3 DRAM requires special ESD considerations. During ESD characterization testing, high voltages from the power rail could pass through the pre-driver PMOSFET to damage the pull-down NMOSFET gate oxide. Special circuit design of the pre-driver circuit is required to eliminate this ESD failure mechanism."}
{"_id": "41642c470d82762821b509a66dad7c04bcd96559", "title": "2 A Physically Unclonable Function with BER < 10-8 for Robust Chip Authentication Using Oscillator Collapse in 40 nm CMOS", "text": "Security is a key concern in today\u2019s mobile devices and a number of hardware implementations of security primitives have been proposed, including true random number generators, differential power attack avoidance, and chip-ID generators [1-4]. Recently, physically unclonable functions (PUFs) were proposed as a secure method for chip authentication in unsecure environments [5-7]. A PUF is a function that maps an input code (\u201cchallenge\u201d) to an output code (\u201cresponse\u201d) in a manner that is unique for every chip. PUFs are increasingly used for IC authentication to offer protection against identity theft, cloning, and counterfeit components [2-4]."}
{"_id": "61ec08b1fd5dc1a0e000d9cddee6747f58d928ec", "title": "Bootstrap Confidence Intervals", "text": "This article surveys bootstrap methods for producing good approximate confidence intervals. The goal is to improve by an order of magnitude upon the accuracy of the standard intervals \u03b8\u0302 \u00b1 z\u0090\u03b1\u0091\u03c3\u0302 , in a way that allows routine application even to very complicated problems. Both theory and examples are used to show how this is done. The first seven sections provide a heuristic overview of four bootstrap confidence interval procedures: BCa, bootstrap-t, ABC and calibration. Sections 8 and 9 describe the theory behind these methods, and their close connection with the likelihood-based confidence interval theory developed by Barndorff-Nielsen, Cox and Reid and others."}
{"_id": "bce8e851f2214e6684b0652b792faa8cc02f1448", "title": "Wireless Body Area Network Security and Privacy Issue in E-Healthcare", "text": "Wireless Body Area Network (WBAN) is a collection of wireless sensor nodes which can be placed within the body or outside the body of a human or a living person which in result observes or monitors the functionality and adjoining situations of the body. Utilizing a Wireless Body Area Network, the patient encounters a greater physical versatility and is never again constrained to remain in the hospital. As the Wireless Body Area Network sensor devices is being utilized for gathering the sensitive data and possibly will run into antagonistic situations, they require complicated and very secure security medium or structure to avoid the vitriolic communications within the system. These devices represent various security and privacy protection of touchy and private patient medical data. The medical records of patients are a significant and an unsolved situation due to which a changing or exploitation to the system is possible. In this research paper, we first present an overview of WBAN, how they utilized for healthcare monitoring, its architecture then highlight major security and privacy requirements and assaults at different network layer in a WBAN and we finally talk about various cryptographic algorithms and laws for providing solution of security and privacy in WBAN. Keywords\u2014E-Health; privacy; security; wireless body area networks"}
{"_id": "303f17eb4620d18f98d4ab8de9be146dbd59ba12", "title": "Self-Compassion and Body Dissatisfaction in Women : A Randomized Controlled Trial of a Brief Meditation Intervention", "text": "Body dissatisfaction is a major source of suffering among women of all ages. One potential factor that could mitigate body dissatisfaction is self-compassion, a construct that is garnering increasing research attention due to its strong association with psychological health. This study investigated whether a brief 3-week period of self-compassion meditation training would improve body satisfaction in a multigenerational group of women. Participants were randomized either to the meditation intervention group (N=98; M age=38.42) or to a waitlist control group (N=130; M age=36.42). Results suggested that compared to the control group, intervention participants experienced significantly greater reductions in body dissatisfaction, body shame, and contingent self-worth based on appearance, as well as greater gains in self-compassion and body appreciation. All improvements were maintained when assessed 3 months later. Self-compassion meditation may be a useful and cost-effective means of improving body image in"}
{"_id": "cc0ed3c19b45f805064e528b7f5adc3cb10486a3", "title": "How Affective Is a \"Like\"?: The Effect of Paralinguistic Digital Affordances on Perceived Social Support", "text": "A national survey asked 323 U.S. adults about paralinguistic digital affordances (PDAs) and how these forms of lightweight feedback within social media were associated with their perceived social support. People perceived PDAs (e.g., Likes, Favorites, and Upvotes) as socially supportive both quantitatively and qualitatively, even without implicit meaning associated with them. People who are highly sensitive about what others think of them and have high self-esteem are more likely to perceive higher social support from PDAs."}
{"_id": "7776ccac08ee57fa7495e764ccc1eff052227b7e", "title": "Tweet Sarcasm Detection Using Deep Neural Network", "text": "Sarcasm detection has been modeled as a binary document classification task, with rich features being defined manually over input documents. Traditional models employ discrete manual features to address the task, with much research effect being devoted to the design of effective feature templates. We investigate the use of neural network for tweet sarcasm detection, and compare the effects of the continuous automatic features with discrete manual features. In particular, we use a bi-directional gated recurrent neural network to capture syntactic and semantic information over tweets locally, and a pooling neural network to extract contextual features automatically from history tweets. Results show that neural features give improved accuracies for sarcasm detection, with different error distributions compared with discrete manual features."}
{"_id": "1e74abb7d331b8dddd4326fddd13d487fc8891fa", "title": "Stable Proportional-Derivative Controllers", "text": "In computer animation, the proportional-derivative (PD) controller is a common technique for tracking characters' motion. A new formulation of the PD controller-stable PD (SPD)-allows arbitrarily high gains, even at large time steps. The key is to determine joint forces and torques while taking into account the character's positions and velocities in the next time step. SPD is stable even when combined with a physics simulator that uses simple Euler integration. SPD controllers have a variety of uses, including motion tracking in a physics simulator, keyframe interpolation with secondary motion, and constraint satisfaction for simulation."}
{"_id": "305917e0b5799ef496a1e24a052965424d3e6c13", "title": "Flocks , Herds , and Schools : A Distributed Behavioral Model", "text": "The aggregate motion of a flock of birds, a herd of land animals, or a school of fish is a beautiful and familiar part of the n a t u r a l world. But this type of complex motion is rarely seen in computer animation. This paper explores an approach based on simulation as an alternative to scripting the paths of each bird individually. The simulated flock is an elaboration of a particle system, with the simulated birds being the particles. The aggregate motion of the simulated flock is created by a distributed behavioral model much like that at work in a natural flock; the birds choose their own course. Each simulated bird is implemented as an independent actor that navigates according to its local perception of the dynamic environment, the laws of simulated physics that rule its motion, and a set of behaviors programmed into it by the \"animator.\" The aggregate motion of the simulated flock is the result of the dense interaction of the relatively simple behaviors of the individual simulated birds."}
{"_id": "8263efcbc5e6b3c7c022b1131038b888babc8548", "title": "A new optimizer using particle swarm theory", "text": "The optimization of nonlinear functions using particle swarm methodology is described. Implementations of two paradigms are discussed and compared, including a recently developed locally oriented paradigm. Benchmark testing of both paradigms is described, and applications, including neural network training and robot task learning, are proposed. Relationships between particle swarm optimization and both artificial life and evolutionary computation are reviewed."}
{"_id": "3bb5280796b7a51f1b55d86c698043d3dcb52748", "title": "Smalltalk-80: The Language and Its Implementation", "text": "Covers Smalltalk language and concepts from Xerox PARC. This need to a valid in with hexadecimal number before. Dwight thanks adele goldberg david robson copyright addison wesley isbn edition used was at compile. Smalltalk 80 syntax uses punctuation in, the language they can be significantly more popular. There is not this may be written as artifacts such messages. By storing them are parsed according to evaluate? The boolean receiver as lisp often even the virtual machine. One left smalltalk 80 is, the modern open. The increasing usage visualworks has recently been used was an image. Alan kay one thing developers to a transcript window would. Even primitives such interleaving of the, language and dec objectstudio each restart. Enfin was an object holds is, extremely important. Also be recreated explicitly define the, book presents a copy of the agile methods messages."}
{"_id": "54acdb67ca083326c34eabdeb59bfdc01c748df0", "title": "Handbook of genetic algorithms", "text": "In this paper, the effect of Genetic algorithm (GA) on the enhancement of the texture and quality of plants visual L. Davis, The handbook of Genetic Algorithms. Genetic Algorithm is one of many methods that can be used to create a schedule. This method Davis, L.: Handbook on Genetic Algorithms. Van Nostrand. In this research, genetic algorithm approach has been applied for solving exam (18) Davis L., Handbook of Genetic Algorithms, New York, Van Nostrand."}
{"_id": "44208019613d0c78a4d37fe77b870b0e55e8ab28", "title": "Solid state transformer application to grid connected photovoltaic inverters", "text": "In the paper, an architecture, including a solid state transformer (SST) which is different from the conventional style is proposed The photovoltaic system with SST consists of two power stages. A dual half bridge (DHB) converter constitutes the first power stage, which aims to achieve the maximum power point tracking (MPPT) functions. A grid connected inverter is used to stabilize the DC bus voltage and make the output current sinusoidal. The control technique is presented and validated by simulation implemented on a photovoltaic system with SST. The simulation results prove the effectiveness of the proposed photovoltaic system and the proposed control technique."}
{"_id": "568d94b4717ccd73e0f8b36c2c27cf6dff82bc6a", "title": "Statistical Machine Translation: A Guide for Linguists and Translators", "text": "This paper presents an overview of Statistical Machine Translation (SMT), which is currently the dominant approach in Machine Translation (MT) research. In Way and Hearne (2010), we describe how central linguists and translators are to the MT process, so that SMT developers and researchers may better understand how to include these groups in continuing to advance the stateof-the-art. If these constituencies are to make an impact in the field of MT, they need to know how their input is used by SMT systems. Accordingly, our objective in this paper is to present the basic principles underpinning SMT in a way that linguists and translators will find accessible"}
{"_id": "ccd0797e458bba75f0c536e06c2132a6ece08b3c", "title": "Organic Computing \u2013 Technical Systems for Survival in the Real World", "text": "ion is the selection and (possibly) coarsening (i.e. quantisation) of certain system characteristics (attributes, performance indicators, parameters) from the total set of system characteristics. The abstraction process comprises: \u2022 A simplification (Example: The colour Red is an abstraction, which neglects the different possible shades of Red, the wavelengths, the intensity etc.), \u2022 an aggregation (Example: \u2018Temperature\u2019 condenses the myriad of individual molecule movements in a gas volume into a single number.), \u2022 and consequently: loss of information. The opposite of abstraction is concretisation. It comprises: \u2022 gain of information, \u2022 detailing, \u2022 refinement, \u2022 disaggregation, \u2022 in engineering: the design process. 3.2.2 System Boundary Systems are defined by their system boundary, which makes a distinction between inside and outside possible (i.e. between self and non-self, see Sect. 4.1). When 3.2 What is a System? 87 Fig. 3.3 Trade-off between initial development cost and later adaptation cost systems are designed, we must choose where to place the boundary. This involves a consideration of cost (Fig. 3.3): A narrow boundary will reduce the cost for planning and development in the first place but bears the risk of a high effort in case a subsequent adaptation and extension of the system should be necessary. A wide system boundary reverses the cost curves. Apparently, the total cost minimum lies somewhere in the middle. 3.2.3 Some System Types and Properties Most systems are transient, i.e. they change over time. They are developed, assembled, modified, destroyed. Reactive systems react on inputs (events, signals, sensor data) by applying outputs (events, control signals, commands) to the environment. All open systems are reactive! In a more focused definition: Only systems reacting in real time (within a predefined time) are reactive systems. Such real-time systems are characterised by (1) time restrictions (deadlines), and (2) time determinism (i.e. guarantees). Planned vs. Unplanned systems: During the development process of technical systems, only predictable events are taken into account in the designed core. But there will always be an unpredictable rest. Therefore, the \u2018spontaneous closure\u2019 (Fig. 3.4) has to cover these events. In technical systems, the exception handler or the diagnosis system play the role of a simple spontaneous closure. Living systems are characterised by a powerful spontaneous closure. Recurring reactions of the spontaneous closure become part of the designed core. This amounts to learning."}
{"_id": "a9a6322f5d6575adb04d9ed670ffdef741840958", "title": "Micro-punch technique for treatment of Fordyce spots: a surgical approach for an unpleasant condition of the male genital.", "text": "Fordyce spots are ectopic sebaceous glands, ranging between 2 and 3\u00a0mm in diameter. These benign lesions are most frequently located in the oral mucosa and the genital skin. Especially in the male genital region they can cause itching, discomfort during sexual activities and are aesthetically unpleasant. So far, a variety of therapeutic procedures have been reported with varying success and recurrence rates. In the present retrospective study (n\u00a0=\u00a023 patients between 2003 and 2011), we present our surgical approach by means of the micro-punch technique. Using this effective method, we achieved very satisfactory functional and cosmetic results. There were no signs of recurrence during postoperative observations from 12 up to 84 months (median\u00a0=\u00a051.3 months)."}
{"_id": "2520cfc29a521f2333fda020d7ae41860f8dfebd", "title": "ERMIA: Fast Memory-Optimized Database System for Heterogeneous Workloads", "text": "Large main memories and massively parallel processors have triggered not only a resurgence of high-performance transaction processing systems optimized for large main-memory and massively parallel processors, but also an increasing demand for processing heterogeneous workloads that include read-mostly transactions. Many modern transaction processing systems adopt a lightweight optimistic concurrency control (OCC) scheme to leverage its low overhead in low contention workloads. However, we observe that the lightweight OCC is not suitable for heterogeneous workloads, causing significant starvation of read-mostly transactions and overall performance degradation.\n In this paper, we present ERMIA, a memory-optimized database system built from scratch to cater the need of handling heterogeneous workloads. ERMIA adopts snapshot isolation concurrency control to coordinate heterogeneous transactions and provides serializability when desired. Its physical layer supports the concurrency control schemes in a scalable way. Experimental results show that ERMIA delivers comparable or superior performance and near-linear scalability in a variety of workloads, compared to a recent lightweight OCC-based system. At the same time, ERMIA maintains high throughput on read-mostly transactions when the performance of the OCC-based system drops by orders of magnitude."}
{"_id": "5fc9f743d5d689dca6d749549a1334b562ed6b17", "title": "Abstraction-Based Livelock/Deadlock Checking for Hardware Verification", "text": "ion-Based Livelock/Deadlock Checking for Hardware Verification In-Ho Moon and Kevin Harer Synopsys Inc. {mooni, kevinh}@synopsys.com"}
{"_id": "3da5842fc08f12c4846dd12cfb21037f385984c4", "title": "Automatic Forest-Fire Measuring Using Ground Stations and Unmanned Aerial Systems", "text": "This paper presents a novel system for automatic forest-fire measurement using cameras distributed at ground stations and mounted on Unmanned Aerial Systems (UAS). It can obtain geometrical measurements of forest fires in real-time such as the location and shape of the fire front, flame height and rate of spread, among others. Measurement of forest fires is a challenging problem that is affected by numerous potential sources of error. The proposed system addresses them by exploiting the complementarities between infrared and visual cameras located at different ground locations together with others onboard Unmanned Aerial Systems (UAS). The system applies image processing and geo-location techniques to obtain forest-fire measurements individually from each camera and then integrates the results from all the cameras using statistical data fusion techniques. The proposed system has been extensively tested and validated in close-to-operational conditions in field fire experiments with controlled safety conditions carried out in Portugal and Spain from 2001 to 2006."}
{"_id": "300831b4ae35b20d0ef179fc9677f32f447fa43a", "title": "Travi-Navi: self-deployable indoor navigation system", "text": "We present Travi-Navi - a vision-guided navigation system that enables a self-motivated user to easily bootstrap and deploy indoor navigation services, without comprehensive indoor localization systems or even the availability of floor maps. Travi-Navi records high quality images during the course of a guider's walk on the navigation paths, collects a rich set of sensor readings, and packs them into a navigation trace. The followers track the navigation trace, get prompt visual instructions and image tips, and receive alerts when they deviate from the correct paths. Travi-Navi also finds the most efficient shortcuts whenever possible. We encounter and solve several challenges, including robust tracking, shortcut identification, and high quality image capture while walking. We implement Travi-Navi and conduct extensive experiments. The evaluation results show that Travi-Navi can track and navigate users with timely instructions, typically within a 4-step offset, and detect deviation events within 9 steps."}
{"_id": "d5f4701bed2879e958d34a9f2394f841cb2ce4ce", "title": "Gender and language-variety Identification with MicroTC", "text": "In this notebook, we describe our approach to cope with the Author Profiling task on PAN17 which consists of both gender and language identification for Twitter\u2019s users. We used our MicroTC (\u03bcTC) framework as the primary tool to create our classifiers. \u03bcTC follows a simple approach to text classification; it converts the problem of text classification to a model selection problem using several simple text transformations, a combination of tokenizers, a term-weighting scheme, and finally, it classifies using a Support Vector Machine. Our approach reaches accuracies of 0.7838, 0.8054, 0.7957, and 0.8538, for gender identification; and for language variety, it achieves 0.8275, 0.9004, 0.9554, and 0.9850. All these, for Arabic, English, Spanish, and Portuguese languages, respectively."}
{"_id": "4e49c3162f9a2fed15689f4e2211a790b0563490", "title": "Multilingual Neural Network Acoustic Modelling for ASR of Under-Resourced English-isiZulu Code-Switched Speech", "text": "Although isiZulu speakers code-switch with English as a matter of course, extremely little appropriate data is available for acoustic modelling. Recently, a small five-language corpus of code-switched South African soap opera speech was compiled. We used this corpus to evaluate the application of multilingual neural network acoustic modelling to English-isiZulu code-switched speech recognition. Our aim was to determine whether English-isiZulu speech recognition accuracy can be improved by incorporating three other language pairs in the corpus: English-isiXhosa, English-Setswana and English-Sesotho. Since isiXhosa, like isiZulu, belongs to the Nguni language family, while Setswana and Sesotho belong to the more distant Sotho family, we could also investigate the merits of additional data from within and across language groups. Our experiments using both fully connected DNN and TDNN-LSTM architectures show that English-isiZulu speech recognition accuracy as well as language identification after code-switching is improved more by the incorporation of English-isiXhosa data than by the incorporation of the other language pairs. However additional data from the more distant language group remained beneficial, and the best overall performance was always achieved with a multilingual neural network trained on all four language pairs."}
{"_id": "7910e30380895294678089ff818c320f6de1f39e", "title": "Accelerating MapReduce on a coupled CPU-GPU architecture", "text": "The work presented here is driven by two observations. First, heterogeneous architectures that integrate a CPU and a GPU on the same chip are emerging, and hold much promise for supporting power-efficient and scalable high performance computing. Second, MapReduce has emerged as a suitable framework for simplified parallel application development for many classes of applications, including data mining and machine learning applications that benefit from accelerators.\n This paper focuses on the challenge of scaling a MapReduce application using the CPU and GPU together in an integrated architecture. We develop different methods for dividing the work, which are the map-dividing scheme, where map tasks are divided between both devices, and the pipelining scheme, which pipelines the map and the reduce stages on different devices. We develop dynamic work distribution schemes for both the approaches. To achieve high load balance while keeping scheduling costs low, we use a runtime tuning method to adjust task block sizes for the map-dividing scheme. Our implementation of MapReduce is based on a continuous reduction method, which avoids the memory overheads of storing key-value pairs.\n We have evaluated the different design decisions using 5 popular MapReduce applications. For 4 of the applications, our system achieves 1.21 to 2.1 speedup over the better of the CPU-only and GPU-only versions. The speedups over a single CPU core execution range from 3.25 to 28.68. The runtime tuning method we have developed achieves very low load imbalance, while keeping scheduling overheads low. Though our current work is specific to MapReduce, many underlying ideas are also applicable towards intra-node acceleration of other applications on integrated CPU-GPU nodes."}
{"_id": "970698bf0a66ddf935b14e433c81e1175c0e8307", "title": "Security Challenges in the IP-based Internet of Things", "text": "A direct interpretation of the term Internet of Things refers to the use of standard Internet protocols for the human-to-thing or thingto-thing communication in embedded networks. Although the security needs are well-recognized in this domain, it is still not fully understood how existing IP security protocols and architectures can be deployed. In this paper, we discuss the applicability and limitations of existing Internet protocols and security architectures in the context of the Internet of Things. First, we give an overview of the deployment model and general security needs. We then present challenges and requirements for IP-based security solutions and highlight specific technical limitations of standard IP security protocols."}
{"_id": "f35c2ed2cd50527a67eb88602ed342fd01150f63", "title": "Privacy is dead, long live privacy", "text": "Protecting social norms as confidentiality wanes."}
{"_id": "36db754fb93811418320cbb6e08bd765fc3370ed", "title": "PMSG fault diagnosis in marine application", "text": "This paper presents a fault diagnosis (FD) method for on line monitoring of stator winding partial inter-turn fault and severe faults, in permanent magnet synchronous generator (PMSG) used in shaft generator system in marine applications. The faulty machine is represented in state space model based on the dq-frame transformation. The Extended Kalman Filter (EKF) parameters estimation technique is used to estimate the value of stator winding short-circuit parameters of the machine. The proposed technique has been simulated for different fault scenarios using Matlab\u00ae/Simulink\u00ae. Simulation results show that the proposed technique is robust and effective. Moreover it can be applied at different operating conditions to prevent development of complete failure."}
{"_id": "c84fa0d065161a0e421aa464d2f95f20fbce6834", "title": "Why are some people happier than others? The role of cognitive and motivational processes in well-being.", "text": "Addressing the question of why some people are happier than others is important for both theoretical and practical reasons and should be a central goal of a comprehensive positive psychology. Following a construal theory of happiness, the author proposes that multiple cognitive and motivational processes moderate the impact of the objective environment on well-being. Thus, to understand why some people are happier than others, one must understand the cognitive and motivational processes that serve to maintain, and even enhance, enduring happiness and transient mood. The author's approach has been to explore hedonically relevant psychological processes, such as social comparison, dissonance reduction, self-reflection, self-evaluation, and person perception, in chronically happy and unhappy individuals. In support of a construal framework, self-rated happy and unhappy people have been shown to differ systematically in the particular cognitive and motivational strategies they use. Promising research directions for positive psychology in pursuit of the sources of happiness, as well as the implications of the construal approach for prescriptions for enhancing well-being, are discussed."}
{"_id": "a7f2c47f3c658f1600f8963d52bf9c9bd93b1466", "title": "Cooperative map building using qualitative reasoning for several AIBO robots", "text": "The problem that a robot navigates autonomously through its environment, builds its own map and localizes itself in the map, is still an open problem. It is known as the SLAM (Simultaneous Localization and Map Building) problem. This problem is made even more difficult when we have several robots cooperating to build a common map of an unknown environment, due to the problem of map integration of several submaps built independently by each robot, and with a high degree of error, making the map matching specially difficult. Most of the approaches to solve map building problems are quantitative, resulting in a great computational cost and a low level of abstraction. In order to fulfil these drawbacks qualitative models have been recently used. However, qualitative models are non deterministic. Therefore, the solution recently adopted has been to mix both qualitative and quantitative models to represent the environment and build maps. However, no reasoning process has been used to deal with the information stored in maps up to now, therefore maps are only static storage of landmarks. In this paper we propose a novel method for cooperative map building based on hybrid (qualitative+quantitative) representation which includes also a reasoning process. Distinctive landmarks acquisition for map representation is provided by the cognitive vision and infrared modules which compute differences from the expected data according to the current map and the actual information perceived. We will store in the map the relative orientation information of the landmarks which appear in the environment, after a qualitative reasoning process, therefore the map will be independent of the point of view of the robot. Map integration will then be achieved by localizing each robot in the maps made by the other robots, through a process of pattern matching of the hybrid maps elaborated by each robot, resulting in an integrated map which all robots share, and which is the main objective of this work. This map building method is currently being tested on a team of Sony AIBO four"}
{"_id": "2fc10e50d648c32c764250e72a2eb6404aaef6fd", "title": "Sword: Scalable and Flexible Workload Generator for Distributed Data Processing Systems", "text": "Workload generation is commonly employed for performance characterization, testing and benchmarking of computer systems and networks. Workload generation typically aims at simulating or emulating traffic generated by different types of applications, protocols and activities, such as web browsing, email, chat, as well as stream multimedia traffic. We present a Scalable WORkloaD generator (SWORD) that we have developed for the testing and benchmarking of high-volume data processing systems. The tool is not only scalable but is also flexible and extensible allowing the generation of workload of a variety of types of applications and of contents."}
{"_id": "aa0f0b5b722fac211b0bcfd773762626931743fe", "title": "Design and implementation of a fire detection and control system for automobiles using fuzzy logic", "text": "The immense benefits of fire in road transport cannot be overemphasized. However more than two thousand vehicles are damaged by unwanted fire on a daily basis. On a global scale, incendiary losses to the automobile and insurance industries have ran into billions of dollars over the last decade. A not-so-distant contributory factor is the lack of a sophisticated fire safety system on the automobile. This has been addressed by designing and implementing fuzzy logic control system with feedback over an Arduino micro-controller system. The automatic system consisting of flame sensors, temperature sensors, smoke sensors and a re-engineered mobile carbon dioxide air-conditioning unit was tested on a medium sized physical car. Results show that the automobile fire detection and control system devoid of false alarms, detects and extinguishes fire under 20 seconds. An innovative, very promising solution module for hardware implementation in fire detection and control for automobiles has been developed by using new algorithms and fuzzy logic."}
{"_id": "b5a47b7f891fdf7f33f6cc95dcf95272563769d8", "title": "Seven V's of Big Data understanding Big Data to extract value", "text": "Big Data has shown lot of potential in real world industry and research community. We support the power and potential of it in solving real world problems. However, it is imperative to understand Big Data through the lens of 7 V's. 7th V as `Value' is desired output for industry challenges and issues. We provide a brief survey study of 7 Vs of Big Data in order to understand Big Data and extract Value concept in general. Finally we conclude by showing our vision of improved healthcare, a product of Big Data Utilization, as a future work for researchers and students, while moving forward."}
{"_id": "1f6358ab4d84fd1192623e81e6be5d0087a3e827", "title": "Using the Output Embedding to Improve Language Models", "text": "We study the topmost weight matrix of neural network language models. We show that this matrix constitutes a valid word embedding. When training language models, we recommend tying the input embedding and this output embedding. We analyze the resulting update rules and show that the tied embedding evolves in a more similar way to the output embedding than to the input embedding in the untied model. We also offer a new method of regularizing the output embedding. Our methods lead to a significant reduction in perplexity, as we are able to show on a variety of neural network language models. Finally, we show that weight tying can reduce the size of neural translation models to less than half of their original size without harming their performance."}
{"_id": "e750631326e7514cbaa338544e48b50016ba8a40", "title": "A Single-Handed Partial Zooming Technique for Touch-Screen Mobile Devices", "text": "Despite its ubiquitous use, the pinch zooming technique is not effective for one-handed interaction. We propose ContextZoom, a novel technique for single-handed zooming on touch-screen mobile devices. It allows users to specify any place on a device screen as the zooming center to ensure that the intended zooming target is always visible on the screen after zooming. ContextZoom supports zooming in/out a portion of a viewport, and provides a quick switch between the partial and whole viewports. We conducted an empirical evaluation of ContextZoom through a controlled lab experiment to compare ContextZoom and the Google maps\u2019 single-handed zooming technique. Results show that ContextZoom outperforms the latter in task completion time and the number of discrete actions taken. Participants also reported higher levels of perceived effectiveness and overall satisfaction with ContextZoom than with the Google maps\u2019 single-handed zooming technique, as well as a similar level of perceived ease of use."}
{"_id": "46fb7520f6ce62ceed1fdf85154f719607209db1", "title": "From goods to service ( s ) : Divergences and convergences of logics", "text": "There are two logics or mindsets from which to consider and motivate a transition from goods to service(s). The first, \u201cgoods-dominant (G-D) logic\u201d, views services in terms of a type of (e.g., intangible) good and implies that goods production and distribution practices should be modified to deal with the differences between tangible goods and services. The second logic, \u201cservice-dominant (S-D) logic\u201d, considers service \u2013 a process of using ones resources for the benefit of and in conjunction with another party \u2013 as the fundamental purpose of economic exchange and implies the need for a revised, service-driven framework for all of marketing. This transition to a service-centered logic is consistent with and partially derived from a similar transition found in the business-marketing literature \u2014 for example, its shift to understanding exchange in terms value rather than products and networks rather than dyads. It also parallels transitions in other sub-disciplines, such as service marketing. These parallels and the implications for marketing theory and practice of a full transition to a service-logic are explored. \u00a9 2008 Elsevier Inc. All rights reserved. Over the last several decades, leading-edge firms, as well as many business scholars and consultants, have advocated the need for refocusing substantial firm activity or transforming the entire firm orientation from producing output, primarily manufactured goods, to a concern with service(s) (see, e.g., Davies, Brady, & Hobday, 2007; Gebauer& Fleisch, 2007). These initiatives can be found in both business-to-business (e.g., IBM, GE) and businessto-consumer enterprises (e.g. Lowe's, Kodak, Apple) and in some cases entire industries (e.g., software-as-a-service). The common justification is that these initiatives are analogous with the shift from a manufacturing to a service economy in developed countries, if not globally. That is, it is based on the idea that virtually all economies are producing and exchanging more services than they are goods; thus, services require increased attention. This perception suggests that firms need to redirect the production and marketing strategy that they have adopted for manufactured goods by adjusting them for the distinguishing characteristics of services. \u204e Corresponding author. Tel.: +1 808 956 8167; fax: +1 808 956 9886. E-mail addresses: svargo@hawaii.edu (S.L. Vargo), rlusch@eller.arizona.edu (R.F. Lusch). 1 Tel.: +1 520 621 7480. 0019-8501/$ see front matter \u00a9 2008 Elsevier Inc. All rights reserved. doi:10.1016/j.indmarman.2007.07.004 Please cite this article as: Vargo, S. L., & Lusch, R. F., From goods to service(s): (2008), doi:10.1016/j.indmarman.2007.07.004 This logic of the need for a shift in the activities of the enterprise and/or industry to match the analogous shift in the economy is so intuitively compelling that it is an apparent truism. It is a logic that follows naturally from marketing's traditional foundational thought. But is it the only logic; is it the correct logic? Does it move business-to-business (B2B) firms and/or academic marketing thought in a desirable and enhanced direction? 
While we agree that a shift to a service focus is desirable, if not essential to a firm's well being and the advancement of academic thought, we question the conventional, underlying rationale and the associated, implied approach. The purpose of this commentary is to explore this traditional logical foundation with its roots in the manufacturing and provision of tangible output and to propose an alternative logic, one grounded in a revised understanding of the meaning of service as a process and its central role in economic exchange. It is a logic that represents the convergence and extension of divergent marketing thought by sub-disciplines and other research initiatives. We argue that this more service-centric logic not only amplifies the necessity for the development of a service focus but it also provides a stronger foundation for theory development and, consequently, application. It is a logic that provides a framework for elevating knowledge discovery in business marketing (as well as other \u201csub-disciplines\u201d) beyond the identification and explanation of B2B marketing differences from other forms of marketing to a level capable of informing not only the business-marketing firm but \u201cmainstream\u201d marketing in general. Thus, we argue a service-centered focus is enriching and unifying. 1. Alternative logics Broadly speaking, there are two perspectives for the consideration of service(s). One views goods (tangible output embedded with value) as the primary focus of economic exchange and \u201cservices\u201d (usually plural) as either (1) a restricted type of (intangible) good (i.e., as units of output) or (2) an add-on that enhances the value of a good. We (Vargo & Lusch, 2004a; Lusch & Vargo, 2006a) call this logic goods-dominant (G-D) logic. Others have referred to it as the \u201cneoclassical economics research tradition\u201d (e.g., Hunt, 2000), \u201cmanufacturing logic\u201d (e.g., Normann, 2001), \u201cold enterprise logic\u201d (Zuboff & Maxmin, 2002), and \u201cmarketing management\u201d (Webster, 1992). Regardless of the label, G-D logic points toward using principles developed to manage goods production to manage services \u201cproduction\u201d and \u201cdelivery\u201d. The second logic considers \u201cservice\u201d (singular) \u2013 a process of doing something for another party \u2013 in its own right, without reference to goods and identifies service as the primary focus of exchange activity. We (Vargo & Lusch, 2004a, 2006) call this logic service-dominant (S-D) logic. In S-D logic, goods continue to play an important, service-delivery role, at least in a subset of economic exchange. In contrast to implying the modification of goods-based models of exchange to fit a transition to service, S-D logic provides a service-based foundation centered on service-driven principles. We show that this transition is highly consistent with many contemporary business-marketing models. 2. Goods-dominant logic As the label implies, G-D logic is centered on the good \u2013 or more recently, the \u201cproduct\u201d, to include both tangible (goods) and intangible (services) units of output \u2013 as archetypical units of exchange. 
The essence of G-D logic is that economic exchange is fundamentally concerned with units of output (products) that are embedded with value during the manufacturing (or farming, or extraction) process. For efficiency, this production ideally takes place in isolation from the customer and results in standardized, inventoriable goods. The roots of G-D logic are found in the work of Smith (1776) and took firmer, paradigmatic grasp in the context of the Industrial Revolution during the quest for a science of economics, at a time when \u201cscience\u201d meant Newtonian mechanics, a paradigm for which the idea of goods embedded with value was particularly amenable. Management and mainstream academic marketing, as well as society in general, inherited this logic from economics (see Vargo, Lusch, & Morgan, 2006; Vargo & Morgan, 2005). However, since formal marketing thought emerged over 100 years ago, G-D logic and its associated concept of embedded value (or utility) have caused problems for marketers. For example, in the mid 20th Century, it caused Alderson (1957, p. 69) to declare: \u201cWhat is needed is not an interpretation of the utility created by marketing, but a marketing interpretation of the whole process of creating utility\u201d. But the G-D-logic-based economic theory, with its co-supportive concepts of embedded value (production) and value destruction (consumption) was itself deeply embedded in marketing thought. It was not long after this period, we believe for related reasons, that academic marketing started becoming fragmented, with various marketing concerns taking on an increasingly separate, or sub-disciplinarian, identity. 3. Subdividing and breaking free from G-D logic Arguably, the establishment of many of the sub-disciplines of marketing, such as business-to-business marketing, services marketing, and international marketing, is a response to the limitations and lack of robustness of G-D logic as a foundation for understanding value creation and exchange. That is, while G-D logic might have been reasonably adequate as a foundation when marketing was primarily concerned with the distribution of commodities, the foundation was severely restricted as marketing expanded its scope to the more general issues of value creation and exchange. 3.1. Business-to-business marketing Initial sub-disciplinary approaches have typically involved trying to fit the models of mainstream marketing to the particular phenomena of concern. For example, as marketers (both academic and applied) began to address issues of industrial marketing and found that many mainstream marketing models did not seem to apply, the natural course of action was not to question the paradigmatic foundation but rather first to identify how B2B marketing was different from mainstream, consumer marketing and then to identify the ways that business marketers needed to adjust. Thus, early attempts led to the identification of prototypical characteristics of business marketing \u2014 derived demand, fluctuating demand, professional buyers, etc. (see Fern & Brown, 1984). 
But we suggest that the creation of business-tobusiness marketing as a sub-discipline was more because of the inability of the G-D-logic-grounded mainstream marketing to provide a suitable foundation for understanding inter-enterprise exchange phenomena than it was because of any real and essential difference compared to enterprise-to-individual exchange. Support for this contention can be found in the fact that busin"}
{"_id": "7a19282f51b78f7c18fd1e94cf0e9fa5b1599b73", "title": "A research manifesto for services science", "text": "The services sector has grown over the last 50 years to dominate economic activity in most advanced industrial economies, yet scientific understanding of modern services is rudimentary. Here, we argue for a services science discipline to integrate across academic silos and advance service innovation more rapidly."}
{"_id": "7b97e7e267862103cedbe7a56f8bce0e53a01f07", "title": "Cutting Corners and Working Overtime: Quality Erosion in the Service Industry", "text": "The erosion of service quality throughout the economy is a frequent concern in the popular press. The American Customer Satisfaction Index for services fell in 2000 to 69.4%, down 5 percentage points from 1994. We hypothesize that the characteristics of services\u2014 inseparability, intangibility, and labor intensity\u2014interact with management practices to bias service providers toward reducing the level of service they deliver, often locking entire industries into a vicious cycle of eroding service standards. To explore this proposition we develop a formal model that integrates the structural elements of service delivery. We use econometric estimation, interviews, observations, and archival data to calibrate the model for a consumer-lending service center in a major bank in the United Kingdom. We find that temporary imbalances between service capacity and demand interact with decision rules for effort allocation, capacity management, overtime, and quality aspirations to yield permanent erosion of the service standards and loss of revenue. We explore policies to improve performance and implications for organizational design in the service sector. (Organizational Learning; Service Management Performance; Service Operations; Service Quality; Simulation; System Dynamics)"}
{"_id": "8d600a8caf4bf94093332a8723d7d174f7ae1e56", "title": "Informed Recommender: Basing Recommendations on Consumer Product Reviews", "text": "Recommender systems attempt to predict items in which a user might be interested, given some information about the user's and items' profiles. Most existing recommender systems use content-based or collaborative filtering methods or hybrid methods that combine both techniques (see the sidebar for more details). We created Informed Recommender to address the problem of using consumer opinion about products, expressed online in free-form text, to generate product recommendations. Informed recommender uses prioritized consumer product reviews to make recommendations. Using text-mining techniques, it maps each piece of each review comment automatically into an ontology."}
{"_id": "a5c342e5e998e3bb15bdb0bc5ec03973ebe844a0", "title": "Performance comparison of adder architectures on 28nm FPGA", "text": "Adders are one of the most desirable entities in processor data path architecture. Since long, VLSI engineers are working towards optimization and miniaturization of adder architectures to ultimately improve the performance of processors. As the technology is scaling down, challenges towards optimization are also increasing. In this paper we present the implementation of various 16 bit adder architectures including parallel prefix adders and their comparative analysis based on ultimate performance parameters-area and power at 28nm technology. All designs have been synthesized and implemented on Xilinx Zynq-7000 FPGA board using Xilinx Vivado 14.4 design tool. Verilog HDL is used as programming HDL. Weinberger adder provides the best area solution."}
{"_id": "635ec8608146628cb7d345a0e9147f4a6d38d19c", "title": "Smart Homes and Quality of Life for the Elderly: Perspective of Competing Models", "text": "The percentage of the elderly population has been on rise steadily over the past few years across the globe, which is a cause of serious concern among the research fraternity. A lot of research is going on that tries to harness the benefits of the various information and communication technologies to enable these elderly people to live independently and promote a sense of overall well-being. Although in the evolutionary phase, yet smart homes can help these elderly people in their daily life. However, for the success of such smart systems, the intention of the users towards using those systems must be understood. But, there is a serious lack of relevant exploratory research that tries to measure and explain the adoption of such smart homes by the elderly population from a multiple theory perspective: namely, the technology acceptance model, the theory of reasoned action and the theory of planned behavior. The main aim of this paper is to fill up this void of a lack of theoretical approach by testing the three models in the context of the adoption of smart homes by the elderly population. In order to do so, we conducted a survey with $N = 239$ (after screening) and analyzed the results using the structural equation modeling and confirmatory factor analysis techniques. Results suggest that all the three models are valid although they do not take into account certain factors that are unique to this context. This paper paper provides the initial groundwork to explore the process of adopting smart home services by the elderly with potential future research areas."}
{"_id": "7a621c248d67635b71d294eaacbb39a5ffce0359", "title": "Paraphrase Detection Using Recursive Autoencoder", "text": "In this paper, we tackle the paraphrase detection task. We present a novel recursive autoencoder architecture that learns representations of phrases in an unsupervised way. Using these representations, we are able to extract features for classification algorithms that allow us to outperform many results from previous works."}
{"_id": "dca1d4371d04143d752b51fc0e87b82606c81612", "title": "Scaling Scrum in a Large Distributed Project", "text": "This paper presents a currently ongoing single case study on adopting and scaling Scrum in a large software development project distributed across four sites. The data was gathered by 19 semi-structured interviews of project personnel, including managers, architects, developers and testers. At the time of the interviews the project had grown in size during the past 2,5 years from two collocated Scrum teams to 20 teams located in four countries and employing over 170 persons. In this paper we first describe our research approach and goals. Then we briefly summarize the preliminary results of this ongoing study: we explain how Scrum practices were scaled, as well as discuss the successes and challenges experienced when adopting the agile practices and scaling them, while at the same time growing the project size at a fast pace. Even though this project has been very successful from the business point of view, it has experienced a lot of problems in applying Scrum, especially related to scaling the agile practices. Thus, it seems that adapting Scrum practices to work well in a large distributed setting is challenging."}
{"_id": "96f78f409e4e3f25276d9d98977ef67f2a801abf", "title": "The Statistical Recurrent Unit", "text": "Sophisticated gated recurrent neural network architectures like LSTMs and GRUs have been shown to be highly effective in a myriad of applications. We develop an un-gated unit, the statistical recurrent unit (SRU), that is able to learn long term dependencies in data by only keeping moving averages of statistics. The SRU\u2019s architecture is simple, un-gated, and contains a comparable number of parameters to LSTMs; yet, SRUs perform favorably to more sophisticated LSTM and GRU alternatives, often outperforming one or both in various tasks. We show the efficacy of SRUs as compared to LSTMs and GRUs in an unbiased manner by optimizing respective architectures\u2019 hyperparameters in a Bayesian optimization scheme for both synthetic and realworld tasks."}
{"_id": "414c781042e236e7d396cc9385b86644d566975c", "title": "Normalizing Flows on Riemannian Manifolds", "text": "We consider the problem of density estimation on Riemannian manifolds. Density estimation on manifolds has many applications in fluid-mechanics, optics and plasma physics and it appears often when dealing with angular variables (such as used in protein folding, robot limbs, gene-expression) and in general directional statistics. In spite of the multitude of algorithms available for density estimation in the Euclidean spaces R that scale to large n (e.g. normalizing flows, kernel methods and variational approximations), most of these methods are not immediately suitable for density estimation in more general Riemannian manifolds. We revisit techniques related to homeomorphisms from differential geometry for projecting densities to sub-manifolds and use it to generalize the idea of normalizing flows to more general Riemannian manifolds. The resulting algorithm is scalable, simple to implement and suitable for use with automatic differentiation. We demonstrate concrete examples of this method on the n-sphere S. In recent years, there has been much interest in applying variational inference techniques to learning large scale probabilistic models in various domains, such as images and text [1, 2, 3, 4, 5, 6]. One of the main issues in variational inference is finding the best approximation to an intractable posterior distribution of interest by searching through a class of known probability distributions. The class of approximations used is often limited, e.g., mean-field approximations, implying that no solution is ever able to resemble the true posterior distribution. This is a widely raised objection to variational methods, in that unlike MCMC, the true posterior distribution may not be recovered even in the asymptotic regime. To address this problem, recent work on Normalizing Flows [7], Inverse Autoregressive Flows [8], and others [9, 10] (referred collectively as normalizing flows), focused on developing scalable methods of constructing arbitrarily complex and flexible approximate posteriors from simple distributions using transformations parameterized by neural networks, which gives these models universal approximation capability in the asymptotic regime. In all of these works, the distributions of interest are restricted to be defined over high dimensional Euclidean spaces. There are many other distributions defined over special homeomorphisms of Euclidean spaces that are of interest in statistics, such as Beta and Dirichlet (n-Simplex); Norm-Truncated Gaussian (n-Ball); Wrapped Cauchy and Von-Misses Fisher (n-Sphere), which find little applicability in variational inference with large scale probabilistic models due to the limitations related to density complexity and gradient computation [11, 12, 13, 14]. Many such distributions are unimodal and generating complicated distributions from them would require creating mixture densities or using auxiliary random variables. Mixture methods require further knowledge or tuning, e.g. number of mixture components necessary, and a heavy computational burden on the gradient computation in general, e.g. with quantile functions [15]. Further, mode complexity increases only linearly with mixtures as opposed to exponential increase with normalizing flows. Conditioning on auxiliary variables [16] on the other hand constrains the use of the created distribution, due to the need for integrating out the auxiliary factors in certain scenarios. 
In all of these methods, computation of low-variance gradients is difficult due to the fact that simulation of random variables cannot in general be reparameterized (e.g. rejection sampling [17]). In this work, we present methods that generalize previous work on improving variational inference in R^n using normalizing flows to Riemannian manifolds of interest such as spheres S^n, tori T^n and their product topologies with R^n, like infinite cylinders. [Figure 1: Left: construction of a complex density on S^2 by first projecting the manifold to R^2, transforming the density and projecting it back to S^2. Right: illustration of transformed (S^2 \u2192 R^2) densities corresponding to a uniform density on the sphere (blue: empirical density obtained by Monte Carlo; red: analytical density from equation (4); green: density computed ignoring the intrinsic dimensionality of S^2).] These special manifolds M \u2282 R^m are homeomorphic to the Euclidean space R^n, where n corresponds to the dimensionality of the tangent space of M at each point. A homeomorphism is a continuous function between topological spaces with a continuous inverse (bijective and bicontinuous). It maps points in one space to the other in a unique and continuous manner. An example manifold is the unit 2-sphere, the surface of a unit ball, which is embedded in R^3 and homeomorphic to R^2 (see Figure 1). In normalizing flows, the main result of differential geometry that is used for computing the density updates is given by d~x = |det J_\u03c6| d~u, and represents the relationship between differentials (infinitesimal volumes) of two equidimensional Euclidean spaces using the Jacobian of the function \u03c6 : R^n \u2192 R^n that transforms one space to the other. This result only applies to transforms that preserve dimensionality. However, transforms that map an embedded manifold to its intrinsic Euclidean space do not preserve the dimensionality of the points, and the result above becomes obsolete. Jacobians of such transforms \u03c6 : R^m \u2192 R^n with m > n are rectangular, and an infinitesimal cube in R^n maps to an infinitesimal parallelepiped on the manifold. The relation between these volumes is given by d~x = \u221a(det G) d~u, where G = J_\u03c6^T J_\u03c6 is the metric induced by the embedding \u03c6 on the tangent space TxM [18, 19, 20]. The correct formula for computing the density over M now becomes: \u222b"}
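The record above ends mid-formula, but the volume relation it states (with induced metric G = J_φ^T J_φ) determines the standard completion. The LaTeX block below is a plausible reconstruction under that assumption, not a quotation of the paper's equation (4).

```latex
% Volume relation for an embedding \phi : \mathbb{R}^n \to \mathcal{M} \subset \mathbb{R}^m:
\[
  d\vec{x} = \sqrt{\det G}\, d\vec{u},
  \qquad G = J_\phi^{\top} J_\phi .
\]
% Assumed completion of the truncated sentence: the density of
% \vec{x} = \phi(\vec{u}) on \mathcal{M} then follows as
\[
  p_{\mathcal{M}}(\vec{x}) = p_u\big(\phi^{-1}(\vec{x})\big)\,\big(\det G\big)^{-1/2},
  \qquad \text{so that} \quad
  \int_{\mathcal{M}} p_{\mathcal{M}}(\vec{x})\, d\vec{x}
  = \int_{\mathbb{R}^n} p_u(\vec{u})\, d\vec{u} = 1 .
\]
```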
{"_id": "b71018f8b81718e3a36daaa6c09eae2683a9b6db", "title": "Flight Dynamics Modeling of Dual-Spin Guided Projectiles", "text": "This paper presents a complete nonlinear parameter-dependent mathematical model, as well as a procedure for computing the quasi-linear parameter-varying (q-LPV) model of a class of spin-stabilized canard-controlled guided projectiles. The proposed projectile concept possesses a so-called dual-spin configuration: the forward section contains the necessary control and guidance software and hardware, whereas the aft roll-decoupled and rapidly spinning section contains the payload. Wind-axes instead of body-axes variables, as is the case in the existing literature, are preferred for the modeling of the highly coupled pitch/yaw airframe nonlinear dynamics, since they are better suited to control synthesis. A q-LPV model, approximating these nonlinear dynamics around the system equilibrium manifold and capturing their dependence on diverse varying flight parameters, is analytically obtained. In addition, a detailed stability analysis specific to this kind of weapons is performed throughout their large flight envelope using the aforementioned q-LPV model. Furthermore, a study is conducted in order to quantify the influence of reducing the dimension of the flight parameter vector on the exactness of the q-LPV model. Finally, the critical influence on the pitch/yaw-dynamics of the nose-embedded sensor position, and of uncertainty on the various static and dynamic aerodynamic coefficients as well as the aerodynamic angles, is shown."}
{"_id": "8c3da6f9c4950faf6846963898f063d714fe4282", "title": "Information Retrieval", "text": "Chinese input recommendation plays an important role in alleviating human cost in typing Chinese words, especially in the scenario of mobile applications. The fundamental problem is to predict the conditional probability of the next word given the sequence of previous words. Therefore, statistical language models, i.e. n-grams based models, have been extensively used on this task in real application. However, the characteristics of extremely different typing behaviors usually lead to serious sparsity problem, even n-gram with smoothing will fail. A reasonable approach to tackle this problem is to use the recently proposed neural models, such as probabilistic neural language model, recurrent neural network and word2vec. They can leverage more semantically similar words for estimating the probability. However, there is no conclusion on which approach of the two will work better in real application. In this paper, we conduct an extensive empirical study to show the differences between statistical and neural language models. The experimental results show that the two different approach have individual advantages, and a hybrid approach will bring a significant improvement."}
{"_id": "1d6425237ea96823e56ad38d4405c3c54297f58f", "title": "On Trade-offs of Applying Block Chains for Electronic Voting Bulletin Boards", "text": "This paper takes a critical look at the recent trend of building electronic voting systems on top of block chain technology. Even though being very appealing from the election integrity perspective, block chains have numerous technical, economical and even political drawbacks that need to be taken into account. Selecting a good trade-off between desirable properties and restrictions imposed by different block chain implementations is a highly non-trivial task. This paper aims at bringing some clarity into performing this task. We will mostly be concentrating on public permissionless block chains and their applications as bulletin board implementations as these are the favourite choices in majority of the recent block chain based voting protocol proposals."}
{"_id": "3646caf12ff38117a0f1b0d4470c034e15021021", "title": "Q-Learning-Based Power Control for LTE Enterprise Femtocell Networks", "text": "Small cells are beneficial for both users and operators to improve communication quality. The femtocell is one type of small cell used for indoor coverage. Enterprise femtocell installation inevitably produces intralayer and interlayer interference management issues. Enterprise femtocell optimization and indoor propagation modeling are built in this work based on the communications environment characteristics. Q-learning-based distributed and hybrid power control strategies are then proposed. The throughput, energy efficiency, and user experience satisfaction with conventional scheduling methods are compared. The simulation results show that distributed Q-learning performs better local optimization while hybrid Q-learning enhances global performance."}
{"_id": "361e8d550e6b91261299540d6b7f1a4a1187053a", "title": "Discovering the invisible city: Location-based games for learning in smart cities", "text": "In this paper we discuss how location-based mobile games can be designed for learning in modern technology enhanced public spaces. We start with the description of the design process and we identify the main challenges faced. We elaborate the case of the game Invisible City: Rebels vs. Spies, a game to be played in a city centre using mobile devices. Through this case we highlight the adaptation of an original party game into a mobile form, the issues we faced and the key aspects conductive to learning in a smart city. It is claimed that creating mobile city games for learning is a new challenge, as our city landscapes are augmented with an increasing number of layers of digital information in which a new generation of city games are played."}
{"_id": "d40355618f3b647584770659d9ba1707b233aa76", "title": "Security and Privacy in Cloud Computing: Vision, Trends, and Challenges", "text": "Cloud computing offers cost-effective solutions via a variety of flexible services. However, security concerns related to managing data, applications, and interactions hamper the rapid deployment of cloud-based services on a large scale. Although many solutions exist, efficiency, scalability, and provable security still have issues that need to be properly addressed. This article explores the various challenges, existing solutions, and limitations of cloud security, with a focus on data utilization management aspects, including data storage, data analytics, and access control. The article concludes with a discussion on future research directions that might lead to more trustworthy cloud security and privacy."}
{"_id": "0f72f5a636d5d86df1088bc7185a7fa2c92f244b", "title": "The sequential maximum angle convex cone ( SMACC ) endmember model", "text": "A new endmember extraction method has been developed that is based on a convex cone model for representing vector data. The endmembers are selected directly from the data set. The algorithm for finding the endmembers is sequential: the convex cone model starts with a single endmember and increases incrementally in dimension. Abundance maps are simultaneously generated and updated at each step. A new endmember is identified based on the angle it makes with the existing cone. The data vector making the maximum angle with the existing cone is chosen as the next endmember to add to enlarge the endmember set. The algorithm updates the abundances of previous endmembers and ensures that the abundances of previous and current endmembers remain positive or zero. The algorithm terminates when all of the data vectors are within the convex cone, to some tolerance. The method offers advantages for hyperspectral data sets where high correlation among channels and pixels can impair un-mixing by standard techniques. The method can also be applied as a band-selection tool, finding end-images that are unique and forming a convex cone for modeling the remaining hyperspectral channels. The method is described and applied to hyperspectral data sets."}
{"_id": "1d726a638d2ba232cb785f83f9902b8f6175e820", "title": "A survey: Attacks on RPL and 6LoWPAN in IoT", "text": "6LoWPAN (IPv6 over Low-Power Wireless Personal Area Networks) standard allows heavily constrained devices to connect to IPv6 networks. 6LoWPAN is novel IPv6 header compression protocol, it may go easily under attack. Internet of Things consist of devices which are limited in resource like battery powered, memory and processing capability etc. for this a new network layer routing protocol is designed called RPL (Routing Protocol for low power Lossy network). RPL is light weight protocol and doesn't have the functionality like of traditional routing protocols. This rank based routing protocol may goes under attack. Providing security in Internet of Things is challenging as the devices are connected to the unsecured Internet, limited resources, the communication links are lossy and set of novel technologies used such as RPL, 6LoWPAN etc. This paper focus on possible attacks on RPL and 6LoWPAN network, counter measure against them and consequences on network parameters. Along with comparative analysis of methods to mitigate these attacks are done and finally the research opportunities in network layer security are discussed."}
{"_id": "0eb349c49ee5718945f70d619a023531930ad1f3", "title": "Imbalanced-learn: A Python Toolbox to Tackle the Curse of Imbalanced Datasets in Machine Learning", "text": "imbalanced-learn is an open-source python toolbox aiming at providing a wide range of methods to cope with the problem of imbalanced dataset frequently encountered in machine learning and pattern recognition. The implemented state-of-the-art methods can be categorized into 4 groups: (i) under-sampling, (ii) over-sampling, (iii) combination of overand under-sampling, and (iv) ensemble learning methods. The proposed toolbox depends only on numpy, scipy, and scikit-learn and is distributed under MIT license. Furthermore, it is fully compatible with scikit-learn and is part of the scikit-learn-contrib supported project. Documentation, unit tests as well as integration tests are provided to ease usage and contribution. Source code, binaries, and documentation can be downloaded from https://github.com/scikit-learn-contrib/imbalanced-learn."}
{"_id": "140df6ceb211239b36ff1a7cfdc871f06d787d11", "title": "Function-Private Functional Encryption in the Private-Key Setting", "text": "Functional encryption supports restricted decryption keys that allow users to learn specific functions of the encrypted messages. Although the vast majority of research on functional encryption has so far focused on the privacy of the encrypted messages, in many realistic scenarios it is crucial to offer privacy also for the functions for which decryption keys are provided. Whereas function privacy is inherently limited in the public-key setting, in the private-key setting it has a tremendous potential. Specifically, one can hope to construct schemes where encryptions of messages $$\\mathsf{m}_1, \\ldots , \\mathsf{m}_T$$ m 1 , \u2026 , m T together with decryption keys corresponding to functions $$f_1, \\ldots , f_T$$ f 1 , \u2026 , f T , reveal essentially no information other than the values $$\\{ f_i(\\mathsf{m}_j)\\}_{i,j\\in [T]}$$ { f i ( m j ) } i , j \u2208 [ T ] . Despite its great potential, the known function-private private-key schemes either support rather limited families of functions (such as inner products) or offer somewhat weak notions of function privacy. We present a generic transformation that yields a function-private functional encryption scheme, starting with any non-function-private scheme for a sufficiently rich function class. Our transformation preserves the message privacy of the underlying scheme and can be instantiated using a variety of existing schemes. Plugging in known constructions of functional encryption schemes, we obtain function-private schemes based either on the learning with errors assumption, on obfuscation assumptions, on simple multilinear-maps assumptions, and even on the existence of any one-way function (offering various trade-offs between security and efficiency)."}
{"_id": "9ca7adb63f3259740d98f49f82fc758dcac92df6", "title": "Dermoscopy: basic concepts.", "text": "Dermoscopy is a very useful technique for the analysis of pigmented skin lesions. It represents a link between clinical and histological views, permitting an earlier diagnosis of skin melanoma. It also helps in the diagnosis of many other pigmented skin lesions, such as seborrheic keratosis, pigmented basal cell carcinoma, hemangioma, blue nevus, atypical nevus, and mole, which can often clinically simulate melanoma. In this article, dermoscopy is reviewed from its history to the basic concepts of the interpretation of dermoscopic images. The goal is to introduce this subject to those not yet familiar with it, in order to instigate and encourage the training and practice of this technique of growing importance for everyday usage."}
{"_id": "89ed5d61242011990b72282aa8ea7ab932da056c", "title": "Deep learning to classify difference image for image change detection", "text": "Image change detection is a process to analyze multi-temproal images of the same scene for identifying the changes that have occurred. In this paper, we propose a novel difference image analysis approach based on deep neural networks for image change detection problems. The deep neural network learning algorithm for classification includes unsupervised feature learning and supervised fine-tuning. Some samples with the labels of high accuracy obtained by a pre-classification are used for fine-tuning. Since a deep neural network can learn complicated functions that can represent high-level abstractions, it can obtain satisfactory results. Theoretical analysis and experiment results on real datasets show that the proposed method outperforms some other methods."}
{"_id": "71922c86b76b8f15e90e5f5f385d17985515e630", "title": "Category fluency test: normative data for English- and Spanish-speaking elderly.", "text": "Category fluency tasks are an important component of neuropsychological assessment, especially when evaluating for dementia syndromes. The growth in the number of Spanish-speaking elderly in the United States has increased the need for appropriate neuropsychological measures and normative data for this population. This study provides norms for English and Spanish speakers, over the age of 50, on 3 frequently used measures of category fluency: animals, vegetables, and fruits. In addition, it examines the impact of age, education, gender, language, and depressed mood on total fluency scores and on scores on each of these fluency measures. A sample of 702 cognitively intact elderly, 424 English speakers, and 278 Spanish speakers, participated in the study. Normative data are provided stratified by language, age, education, and gender. Results evidence that regardless of the primary language of the examinee, age, education, and gender are the strongest predictors of total category fluency scores, with gender being the best predictor of performance after adjusting for age and education. English and Spanish speakers obtained similar scores on animal and fruit fluency, but English speakers generated more vegetable exemplars than Spanish speakers. Results also indicate that different fluency measures are affected by various factors to different degrees."}
{"_id": "3c458fe8db3ca5fb5b2456b534fbbd213025a55a", "title": "Towards subjective quality of experience assessment for omnidirectional video streaming", "text": "Currently, we witness dramatically increasing interest in immersive media technologies like Virtual Reality (VR), particularly in omnidirectional video (OV) streaming. Omnidirectional (also called 360-degree) videos are panoramic spherical videos in which the user can look around during playback and which therefore can be understood as hybrids between traditional movie streaming and interactive VR worlds. Unfortunately, streaming this kind of content is extremely bandwidth intensive (compared to traditional 2D video) and therefore, Quality of Experience (QoE) tends to deteriorate significantly in absence of continuous optimal bandwidth conditions. In this paper, we present a first approach towards subjective QoE assessment for omnidirectional video (OV) streaming. We present the results of a lab study on the QoE impact of stalling in the context of OV streaming using head-mounted displays (HMDs). Our findings show that subjective testing for immersive media like OV is not trivial, with even simple cases like stalling leading to unexpected results. After a discussion of characteristic pitfalls and lessons learned, we provide a a set of recommendations for upcoming OV assessment studies."}
{"_id": "2c6d7bcf2ae79b73ad5888f591e159a3d994322b", "title": "Three Decades of Driver Assistance Systems: Review and Future Perspectives", "text": "This contribution provides a review of fundamental goals, development and future perspectives of driver assistance systems. Mobility is a fundamental desire of mankind. Virtually any society strives for safe and efficient mobility at low ecological and economic costs. Nevertheless, its technical implementation significantly differs among societies, depending on their culture and their degree of industrialization. A potential evolutionary roadmap for driver assistance systems is discussed. Emerging from systems based on proprioceptive sensors, such as ABS or ESC, we review the progress incented by the use of exteroceptive sensors such as radar, video, or lidar. While the ultimate goal of automated and cooperative traffic still remains a vision of the future, intermediate steps towards that aim can be realized through systems that mitigate or avoid collisions in selected driving situations. Research extends the state-of-the-art in automated driving in urban traffic and in cooperative driving, the latter addressing communication and collaboration between different vehicles, as well as cooperative vehicle operation by its driver and its machine intelligence. These steps are considered important for the interim period, until reliable unsupervised automated driving for all conceivable traffic situations becomes available. The prospective evolution of driver assistance systems will be stimulated by several technological, societal and market trends. The paper closes with a view on current research fields."}
{"_id": "31847f39d468ee33e114a785bdf63926e13c1b66", "title": "USARSim: a robot simulator for research and education", "text": "This paper presents USARSim, an open source high fidelity robot simulator that can be used both for research and education. USARSim offers many characteristics that differentiate it from most existing simulators. Most notably, it constitutes the simulation engine used to run the virtual robots competition within the Robocup initiative. We describe its general architecture, describe examples of utilization, and provide a comprehensive overview for those interested in robot simulations for education, research and competitions."}
{"_id": "b92f68b6d3e05222312d251f658343ef3bfe21ee", "title": "Noise and the Reality Gap: The Use of Simulation in Evolutionary Robotics", "text": "The pitfalls of naive robot simulations have been recognised for areas such as evolutionary robotics. It has been suggested that carefully validated ispell slides.tex simulations with a proper treatment of noise may overcome these problems. This paper reports the results of experiments intended to test some of these claims. A simulation was constructed of a two-wheeled Khepera robot with IR and ambient light sensors. This included detailed mathematical models of the robot-environment interaction dynamics with empirically determined parameters. Artiicial evolution was used to develop recurrent dynamical network controllers for the simulated robot, for obstacle-avoidance and light-seeking tasks, using diierent levels of noise in the simulation. The evolved controllers were down-loaded onto the real robot and the correspondence between behaviour in simulation and in reality was tested. The level of correspondence varied according to how much noise was used in the simulation, with very good results achieved when realistic quantities were applied. It has been demonstrated that it is possible to develop successful robot controllers in simulation that generate almost identical behaviours in reality, at least for a particular class of robot-environment interaction dynamics."}
{"_id": "ec26d7b1cb028749d0d6972279cf4090930989d8", "title": "Making Bertha Drive\u2014An Autonomous Journey on a Historic Route", "text": "125 years after Bertha Benz completed the first overland journey in automotive history, the Mercedes Benz S-Class S 500 INTELLIGENT DRIVE followed the same route from Mannheim to Pforzheim, Germany, in fully autonomous manner. The autonomous vehicle was equipped with close-to-production sensor hardware and relied solely on vision and radar sensors in combination with accurate digital maps to obtain a comprehensive understanding of complex traffic situations. The historic Bertha Benz Memorial Route is particularly challenging for autonomous driving. The course taken by the autonomous vehicle had a length of 103 km and covered rural roads, 23 small villages and major cities (e.g. downtown Mannheim and Heidelberg). The route posed a large variety of difficult traffic scenarios including intersections with and without traffic lights, roundabouts, and narrow passages with oncoming traffic. This paper gives an overview of the autonomous vehicle and presents details on vision and radar-based perception, digital road maps and video-based self-localization, as well as motion planning in complex urban scenarios."}
{"_id": "e84b54bbc144bfeb6b993e4bbb4811c40476a368", "title": "Identifying technological competition trends for R&D planning using dynamic patent maps: SAO-based content analysis", "text": "Patent maps showing competition trends in technological development can provide valuable input for decision support on research and development (R&D) strategies. By introducing semantic patent analysis with advantages in representing technological objectives and structures, this paper constructs dynamic patent maps to show technological competition trends and describes the strategic functions of the dynamic maps. The proposed maps are based on subject-action-object (SAO) structures that are syntactically ordered sentences extracted using the natural language processing of the patent text; the structures of a patent encode the key findings of the invention and expertise of its inventors. Therefore, this paper introduces a method of constructing dynamic patent maps using SAO-based content analysis of patents and presents several types of dynamic patent maps by combining patent bibliographic information and patent mapping and clustering techniques. Building on the maps, this paper provides further analyses to identify technological areas in which patents have not been granted (\u201cpatent vacuums\u201d), areas in which many patents have actively appeared (\u201ctechnological hot spots\u201d), R&D overlap of technological competitors, and characteristics of patent clusters. The proposed analyses of dynamic patent maps are illustrated using patents related to the synthesis of carbon nanotubes. We expect that the proposed method will aid experts in understanding technological competition trends in the process of formulating R&D strategies."}
{"_id": "96c4234eb0e9d3443d09bca7d21caeef8733b26b", "title": "Classification of Arrhythmia Using Conjunction of Machine Learning Algorithms and ECG Diagnostic Criteria", "text": "This paper proposes a classification technique using conjunction of Machine Learning Algorithms and ECG Diagnostic Criteria which improves the accuracy of detecting Arrhythmia using Electrocardiogram (ECG) data. ECG is the most widely used first line clinical instrument to record the electrical activities of the heart. The data-set from UC Irvine (UCI) Machine Learning Repository was used to implement a multi-class classification for different types of heart abnormalities. After implementing rigorous data preprocessing and feature selection techniques,different machine learning algorithms such as Neural Networks, Decision trees, Random Forest, Gradient Boosting and Support Vector Machines were used. Maximum experimental accuracy of 84.82% was obtained via the conjunction of SVM and Gradient Boosting. A further improvement in accuracy was obtained by validating the factors which were important for doctors to decide between normal and abnormal heart conditions.The performance of classification is evaluated using measures such as confusion matrix, kappa-score, confidence interval, Area under curve (AUC) and overall-accuracy. Key\u2013Words: Machine Learning, Arrhythmia Classification, ECG, Neural Networks, SVM, Gradient Boosting"}
{"_id": "a74534569c5532ea62da572e3b617dedc825b262", "title": "Document clustering using nonnegative matrix factorization", "text": "A methodology for automatically identifying and clustering semantic features or topics in a heterogeneous text collection is presented. Textual data is encoded using a low rank nonnegative matrix factorization algorithm to retain natural data nonnegativity, thereby eliminating the need to use subtractive basis vector and encoding calculations present in other techniques such as principal component analysis for semantic feature abstraction. Existing techniques for nonnegative matrix factorization are reviewed and a new hybrid technique for nonnegative matrix factorization is proposed. Performance evaluations of the proposed method are conducted on a few benchmark text collections used in standard topic detection studies."}
{"_id": "d0895e18d0553b9a35cff80bd7bd5619a19d51fb", "title": "A 107 GHz 55 dB-Ohm InP Broadband Transimpedance Amplifier IC for High-Speed Optical Communication Links", "text": "We report a 107 GHz baseband differential transimpedance amplifier IC for high speed optical communication links. The amplifier, comprised of two Darlington resistive feedback stages, was implemented in a 500 nm InP HBT process and demonstrates 55 dB\u03a9 differential transimpedance gain, 30 ps group delay, P1dB = 1 dBm, and is powered by a 5.2 V supply. Differential input and output impedances are 50\u03a9. The IC interfaces to -2V DC at the input for connections to high-speed photodiodes and -450 mV DC at the output for interfaces to Gilbert-cell mixers and to ECL logic."}
{"_id": "76a0fc54737e1ec1764b523b22b0d79947c95dde", "title": "Gravity-independent mobility and drilling on natural rock using microspines", "text": "To grip rocks on the surfaces of asteroids and comets, and to grip the cliff faces and lava tubes of Mars, a 250 mm diameter omni-directional anchor is presented that utilizes a hierarchical array of claws with suspension flexures, called microspines, to create fast, strong attachment. Prototypes have been demonstrated on vesicular basalt and a'a lava rock supporting forces in all directions away from the rock. Each anchor can support >;160 N tangent, >;150 N at 45\u00b0, and >;180 N normal to the surface of the rock. A two-actuator selectively-compliant ankle interfaces these anchors to the Lemur IIB robot for climbing trials. A rotary percussive drill was also integrated into the anchor, demonstrating self-contained rock coring regardless of gravitational orientation. As a harder-than-zero-g proof of concept, 20mm diameter boreholes were drilled 83 mm deep in vesicular basalt samples, retaining a 12 mm diameter rock core in 3-6 pieces while in an inverted configuration, literally drilling into the ceiling."}
{"_id": "3f1750fa8779b9e90f12f6d2b29db3d6ca533e92", "title": "Multilayer Extreme Learning Machine With Subnetwork Nodes for Representation Learning", "text": "The extreme learning machine (ELM), which was originally proposed for \u201cgeneralized\u201d single-hidden layer feedforward neural networks, provides efficient unified learning solutions for the applications of clustering, regression, and classification. It presents competitive accuracy with superb efficiency in many applications. However, ELM with subnetwork nodes architecture has not attracted much research attentions. Recently, many methods have been proposed for supervised/unsupervised dimension reduction or representation learning, but these methods normally only work for one type of problem. This paper studies the general architecture of multilayer ELM (ML-ELM) with subnetwork nodes, showing that: 1) the proposed method provides a representation learning platform with unsupervised/supervised and compressed/sparse representation learning and 2) experimental results on ten image datasets and 16 classification datasets show that, compared to other conventional feature learning methods, the proposed ML-ELM with subnetwork nodes performs competitively or much better than other feature learning methods."}
{"_id": "472d88f0a5370f8c4c138c9301a961d32d3eec06", "title": "Effects of users' envy and shame on social comparison that occurs on social network services", "text": "In the context of the social network service environment, we explore how discrete emotions\u2014envy and shame, in particular\u2014may mediate the effects of social comparison on behavior intention and psychological responses. Based on the survey responses of 446 university students, the results suggest that social comparison to media figures correlates with a range of emotional responses as well as with behavioral intention and psychological responses. Envy maintained a significantly greater association with switch intention as a behavioral intention compared to shame. Conversely, shame was significantly related to burnout as a psychological response. Further, mediational analyses were consistent with the argument that envy and shame mediate the social comparison\u2013outcome variables relationship. This research helps to illuminate the driving mechanism for the emotional effect that social comparison on social network service could elicit from a user. This predicts the nature of the behavioral and psychological outcome associated with the comparison and has implications for an enhanced understanding of the way in which the unique social network service communication environment may stimulate this process. 2015 Elsevier Ltd. All rights reserved."}
{"_id": "315aca11365e4441962f072c1ac5286a9641e3ab", "title": "Pre-clinical antigenicity studies of an innovative multivalent vaccine for human visceral leishmaniasis", "text": "The notion that previous infection by Leishmania spp. in endemic areas leads to robust anti-Leishmania immunity, supports vaccination as a potentially effective approach to prevent disease development. Nevertheless, to date there is no vaccine available for human leishmaniasis. We optimized and assessed in vivo the safety and immunogenicity of an innovative vaccine candidate against human visceral leishmaniasis (VL), consisting of Virus-Like Particles (VLP) loaded with three different recombinant proteins (LJL143 from Lutzomyia longipalpis saliva as the vector-derived (VD) component, and KMP11 and LeishF3+, as parasite-derived (PD) antigens) and adjuvanted with GLA-SE, a TLR4 agonist. No apparent adverse reactions were observed during the experimental time-frame, which together with the normal hematological parameters detected seems to point to the safety of the formulation. Furthermore, measurements of antigen-specific cellular and humoral responses, generally higher in immunized versus control groups, confirmed the immunogenicity of the vaccine formulation. Interestingly, the immune responses against the VD protein were reproducibly more robust than those elicited against leishmanial antigens, and were apparently not caused by immunodominance of the VD antigen. Remarkably, priming with the VD protein alone and boosting with the complete vaccine candidate contributed towards an increase of the immune responses to the PD antigens, assessed in the form of increased ex vivo CD4+ and CD8+ T cell proliferation against both the PD antigens and total Leishmania antigen (TLA). Overall, our immunogenicity data indicate that this innovative vaccine formulation represents a promising anti-Leishmania vaccine whose efficacy deserves to be tested in the context of the \"natural infection\"."}
{"_id": "ff42790d9da50a91a61e49f6c59c37d7c90d603c", "title": "Optimization of photovoltaic farm under partial shading effects using artificial intelligent based matrix switch controller", "text": "Partial shading on photovoltaic array due to weather conditions reduces the energy generated and worsening the entire photovoltaic farm efficiency. The purpose of this research was to electric physically reconfigurate PV string through switch matrix by control of smart controller. It is great to find current and voltage at TCT point approaching work of range from MPPT, despite being awarded shading effect. The research method was reflected on the algorithms design. The aim of algorithms was to clustering in two steps. The first step was clustering one or more photovoltaic string based on parallelly current. Result of first clustering continued of second step clustering, clustering the first step result in series. The algorithm work smartly by current input from current sensors. Input processing from sensor by smart controller was signal control which for switch on/off the switches function in switch matrix circuit. Switch matrix output will be forwarded to MPPT circuit. This mechanism being novelty of this research. Based on the testing, it's mechanism able to protect the current at 18A level for all cases with each case voltage consecutively 3060V, 1360V, 2040V and 1020 V. By this mechanism, can be optimality power system increased up to 9.S % .similarly be affected or not partial shading effect."}
{"_id": "a4fec4782afe867b00bb4f0f0157c12fca4b52b0", "title": "High-Efficiency Single-Input Multiple-Output DC\u2013DC Converter", "text": "The aim of this study is to develop a high-efficiency single-input multiple-output (SIMO) dc-dc converter. The proposed converter can boost the voltage of a low-voltage input power source to a controllable high-voltage dc bus and middle-voltage output terminals. The high-voltage dc bus can take as the main power for a high-voltage dc load or the front terminal of a dc-ac inverter. Moreover, middle-voltage output terminals can supply powers for individual middle-voltage dc loads or for charging auxiliary power sources (e.g., battery modules). In this study, a coupled-inductor-based dc-dc converter scheme utilizes only one power switch with the properties of voltage clamping and soft switching, and the corresponding device specifications are adequately designed. As a result, the objectives of high-efficiency power conversion, high step-up ratio, and various output voltages with different levels can be obtained. Some experimental results via a kilowatt-level prototype are given to verify the effectiveness of the proposed SIMO dc-dc converter in practical applications."}
{"_id": "6d7f9304cf5ae6b27376ba136b8dfc681a844993", "title": "Phishing Detection: A Literature Survey", "text": "This article surveys the literature on the detection of phishing attacks. Phishing attacks target vulnerabilities that exist in systems due to the human factor. Many cyber attacks are spread via mechanisms that exploit weaknesses found in end-users, which makes users the weakest element in the security chain. The phishing problem is broad and no single silver-bullet solution exists to mitigate all the vulnerabilities effectively, thus multiple techniques are often implemented to mitigate specific attacks. This paper aims at surveying many of the recently proposed phishing mitigation techniques. A high-level overview of various categories of phishing mitigation techniques is also presented, such as: detection, offensive defense, correction, and prevention, which we belief is critical to present where the phishing detection techniques fit in the overall mitigation process."}
{"_id": "7dc5bc3e44e1be7313f287b36c420e5ef5e2803e", "title": "A packing algorithm for non-Manhattan hexagon/triangle placement design by using an adaptive O-tree representation", "text": "A non-Manhattan Hexagon/Triangle Placement (HTP for short) paradigm is proposed in the present paper. Main feature of this paradigm lies in adapting to the Y- architecture which is one of the promising non-Manhattan VLSI circuit layout architectures. Aim of the HTP is to place a set of equilateral triangles with given size onto a hexagonal chip with maximal chip area usage. Based on the O-tree representation, some adaptive packing rules are adopted to develop an effective placement algorithm for solving the HTP problem in BBL mode. Two examples with benchmark data transformed from the Manhattan BBL mode placement (ami33/49) are presented to justify the feasibility and effectiveness of our algorithms. Experiment results demonstrate that the chip area usage of 94% is achieved through simulated annealing optimization."}
{"_id": "af9e08efed2f5bc58d71761a6c0c493e2b988e22", "title": "Computer animation of knowledge-based human grasping", "text": "The synthesis of human hand motion and grasping of arbitrary shaped objects is a very complex problem. Therefore high-level control is needed to perform these actions. In order to satisfy the kinematic and physical constraints associated with the human hand and to reduce the enormous search space associated with the problem of grasping objects, a knowledge based approach is used. A three-phased scheme is presented which incorporates the role of the hand, the object, the environment and the animator. The implementation of a hand simulation system HANDS is discussed."}
{"_id": "73e30965ab41161ef7ecbed00133a931dbc2efba", "title": "MIDeA: a multi-parallel intrusion detection architecture", "text": "Network intrusion detection systems are faced with the challenge of identifying diverse attacks, in extremely high speed networks. For this reason, they must operate at multi-Gigabit speeds, while performing highly-complex per-packet and per-flow data processing. In this paper, we present a multi-parallel intrusion detection architecture tailored for high speed networks. To cope with the increased processing throughput requirements, our system parallelizes network traffic processing and analysis at three levels, using multi-queue NICs, multiple CPUs, and multiple GPUs. The proposed design avoids locking, optimizes data transfers between the different processing units, and speeds up data processing by mapping different operations to the processing units where they are best suited. Our experimental evaluation shows that our prototype implementation based on commodity off-the-shelf equipment can reach processing speeds of up to 5.2 Gbit/s with zero packet loss when analyzing traffic in a real network, whereas the pattern matching engine alone reaches speeds of up to 70 Gbit/s, which is an almost four times improvement over prior solutions that use specialized hardware."}
{"_id": "54454ca1f4b7f4e6c93137e429b1d9a127f5eb40", "title": "A German Twitter Snapshot", "text": "We present a new corpus of German tweets. Due to the relatively small number of German messages on Twitter, it is possible to collect a virtually complete snapshot of German twitter messages over a period of time. In this paper, we present our collection method which produced a 24 million tweet corpus, representing a large majority of all German tweets sent in April, 2013. Further, we analyze this representative data set and characterize the German twitterverse. While German Twitter data is similar to other Twitter data in terms of its temporal distribution, German Twitter users are much more reluctant to share geolocation information with their tweets. Finally, the corpus collection method allows for a study of discourse phenomena in the Twitter data, structured into discussion threads."}
{"_id": "89c37955f34b2c65c6ea7dc3010366e41ba7a389", "title": "Work strain, health, and absenteeism: a meta-analysis.", "text": "Work strain has been argued to be a significant cause of absenteeism in the popular and academic press. However, definitive evidence for associations between absenteeism and strain is currently lacking. A theory focused meta-analysis of 275 effects from 153 studies revealed positive but small associations between absenteeism and work strain, psychological illness, and physical illness. Structural equation modeling results suggested that the strain-absence connection may be mediated by psychological and physical symptoms. Little support was received for the purported volitional distinction between absence frequency and time lost absence measures on the basis of illness. Among the moderators examined, common measurement, midterm and stable sources of variance, and publication year received support."}
{"_id": "1db92c5ce364d2f459f264108f1044242583aa30", "title": "Extreme Learning Machine for Multi-Label Classification", "text": "Xia Sun 1,*, Jingting Xu 1, Changmeng Jiang 1, Jun Feng 1, Su-Shing Chen 2 and Feijuan He 3 1 School of Information Science and Technology, Northwest University, Xi\u2019an 710069, China; jingtingxu03@gmail.com (J.X.); yadxjcm@163.com (C.J.); fengjun@nwu.edu.cn (J.F.) 2 Computer Information Science and Engineering, University of Florida, Gainesville, FL 32608, USA; suchen@cise.ufl.edu 3 Department of Computer Science, Xi\u2019an Jiaotong University City College, Xi\u2019an 710069, China; hfj@mail.xjtu.edu.cn * Correspondence: raindy@nwu.edu.cn; Tel.: +86-29-8830-8119"}
{"_id": "b146f33d2079ec418bbf26c26fa3c9949bc284db", "title": "Computational Thinking \u2026 and Doing", "text": "A t our most recent CiSE meeting, one of the board members approached me during a short break and asked: \" Why do you do this? you seem to have better things to do. \" seeking clarification , i came to understand that he Was Wondering Why a computer scientist such as myself would want to be on the editorial board of a publication that isn't necessarily focused on the \" tradition \" of computer science. then and there, i realized that a computer sci-entist's role is something worth explaining and understanding. much has been made recently of computational thinking, a term coined by Jeanette Wing at the us national science foundation. 1 While i have my doubts about the term's longevity, 2 it struck me as an excellent starting point for explaining what a computer scientist is and does, especially in the CiSE context. computer scientists are an increasingly important part of interdisciplinary\u2014or multidisciplinary, a term i prefer\u2014problem solving. We help others understand the merits of bringing the right methods (algorithms), architecture/design (software engineering), and programming tools (languages and systems) and techniques (visualization) to science and engineering applications. so, as i see it, computer scientists bring a panoply of options to the table that enhance multi disciplinary research. if you will, we're about computational doing\u2014and doing it right. as the former chez thiruvathukal in scientific programming , i like to view computer science's various ideas as the masala found in my (and possibly your) favorite curries. this brings me back to why i'm here. i've always had an eye to applying computer science to other disciplines, scientific and otherwise. in the past few months, i've begun working on two new (funded) projects\u2014one addresses healthcare (modeling health issues in social networks using agent-based simulation) and the other addresses digital humanities (developing collaborative tools for tex-tual studies). in both cases, computer science plays a critical role in that both projects have expected software and experimental outcomes. i don't always write about my other disciplinary interests in CiSE, but given my emphasis on programming and emerging technologies , it's comforting to know that i've joined a group of people committed to exploring the intersections with and showing respect for other disciplines. for the sake of scholarship and meaningful discourse, we need more places like CiSE, and we must keep the content fresh and exciting\u2014even during these challenging times when all publications face \u2026"}
{"_id": "b9d5f2da9408d176a5cfc6dc0912b6d72e0ea989", "title": "On structuring probabilistic dependences in stochastic language modelling", "text": ""}
{"_id": "28538bff0938665b72b9c10f030b8f661b514305", "title": "Translational Recommender Networks", "text": "Representing relationships as translations in vector space lives at the heart of many neural embedding models such as word embeddings and knowledge graph embeddings. In this work, we study the connections of this translational principle with collaborative \u0080ltering algorithms. We propose Translational Recommender Networks (TransRec), a new a\u008aentive neural architecture that utilizes the translational principle to model the relationships between user and item pairs. Our model employs a neural a\u008aention mechanism over a Latent Relational A\u0088entive Memory (LRAM) module to learn the latent relations between user-item pairs that best explains the interaction. By exploiting adaptive user-item speci\u0080c translations in vector space, our model also alleviates the geometric in\u0083exibility problem of other metric learning algorithms while enabling greater modeling capability and \u0080ne-grained \u0080\u008aing of users and items in vector space. \u008ce proposed architecture not only demonstrates the state-of-the-art performance across multiple recommendation benchmarks but also boasts of improved interpretability. \u008balitative studies over the LRAM module shows evidence that our proposed model is able to infer and encode explicit sentiment, temporal and a\u008aribute information despite being only trained on implicit feedback. As such, this ascertains the ability of TransRec to uncover hidden relational structure within implicit datasets."}
{"_id": "7cabb5223aa526d2f85ad4c1e675c089cb581895", "title": "How STEM Education Improves Student Learning", "text": "The intent of this article is to examine current taxonomies used to design and deliver Science, Technology, Engineering, and Mathematics (i.e., STEM) curriculum in an effort to attract a greater variety of students to the STEM field of study in the K-12 public school environment. I analyzed specific aspects of STEM-based programs to compare STEM education components to traditional college-preparatory methods of instruction, including looking for environmental practices that may attract female and first-generation college attendees toward developing a positive attitude toward attaining a STEM education. I also made connections between current instructional trends and their impact on student mastery."}
{"_id": "8395bf934f4589a4c2d38eac828d185a955d98c5", "title": "Health Information Management System for Elderly Health Sector: A Qualitative Study in Iran.", "text": "BACKGROUND\nThere are increasing change and development of information in healthcare systems. Given the increase in aging population, managers are in need of true and timely information when making decision.\n\n\nOBJECTIVES\nThe aim of this study was to investigate the current status of the health information management system for the elderly health sector in Iran.\n\n\nMATERIALS AND METHODS\nThis qualitative study was conducted in two steps. In the first step, required documents for administrative managers were collected using the data gathering form and observed and reviewed by the researcher. In the second step, using an interview guide, the required information was gathered through interviewing experts and faculty members. The convenience, purposeful and snowball sampling methods were applied to select interviewees and the sampling continued until reaching the data saturation point. Finally, notes and interviews were transcribed and content analysis was used to analyze them.\n\n\nRESULTS\nThe results of the study showed that there was a health information management system for the elderly health sector in Iran. However, in all primary health care centers the documentation of data was done manually; the data flow was not automated; and the analysis and reporting of data are also manually. Eventually, decision makers are provided with delayed information.\n\n\nCONCLUSIONS\nIt is suggested that the steward of health in Iran, the ministry of health, develops an appropriate infrastructure and finally puts a high priority on the implementation of the health information management system for elderly health sector in Iran."}
{"_id": "4424a63d792f259dc7bb13612323c2831c7a94ea", "title": "A 0.18\u03bcm CMOS fully integrated RFDAC and VGA for WCDMA transmitters", "text": "A digital IF to RF converter architecture with an RF VGA targeted for WCDMA transmitters is presented. The RFDAC consists of a 1.5-bits current steering DAC with an embedded semi-digital FIR filter and a mixer with an LC load. The VGA controls the output power in 10 dB steps. The prototype is fabricated in 0.18 mum CMOS technology and achieves 0 dBm output power at 2 GHz and 33 dB image rejection. The measured rms EVM and ACPR show the system fulfills the standard specifications for the WCDMA. The chip draws 123.9 mA from 1.8 V supply. The die area is 0.8 times 1.6 mm2."}
{"_id": "3440ebeda19d658b91c51206be3519343663b9c1", "title": "Personalized Multitask Learning for Predicting Tomorrow \u2019 s Mood , Stress , and Health", "text": "While accurately predicting mood and wellbeing could have a number of important clinical benefits, traditional machine learning (ML) methods frequently yield low performance in this domain. We posit that this is because a one-size-fits-all machine learning model is inherently ill-suited to predicting outcomes like mood and stress, which vary greatly due to individual differences. Therefore, we employ Multitask Learning (MTL) techniques to train personalized ML models which are customized to the needs of each individual, but still leverage data from across the population. Three formulations of MTL are compared: i) MTL deep neural networks, which share several hidden layers but have final layers unique to each task; ii) Multi-task Multi-Kernel learning, which feeds information across tasks through kernel weights on feature types; and iii) a Hierarchical Bayesian model in which tasks share a common Dirichlet Process prior. We offer the code for this work in open source. These techniques are investigated in the context of predicting future mood, stress, and health using data collected from surveys, wearable sensors, smartphone logs, and the weather. Empirical results demonstrate that using MTL to account for individual differences provides large performance improvements over traditional machine learning methods and provides personalized, actionable insights."}
{"_id": "7164e00258f870d7fb5d4f0135274fc08ae3f792", "title": "Automatically extracting cancer disease characteristics from pathology reports into a Disease Knowledge Representation Model", "text": "We introduce an extensible and modifiable knowledge representation model to represent cancer disease characteristics in a comparable and consistent fashion. We describe a system, MedTAS/P which automatically instantiates the knowledge representation model from free-text pathology reports. MedTAS/P is based on an open-source framework and its components use natural language processing principles, machine learning and rules to discover and populate elements of the model. To validate the model and measure the accuracy of MedTAS/P, we developed a gold-standard corpus of manually annotated colon cancer pathology reports. MedTAS/P achieves F1-scores of 0.97-1.0 for instantiating classes in the knowledge representation model such as histologies or anatomical sites, and F1-scores of 0.82-0.93 for primary tumors or lymph nodes, which require the extractions of relations. An F1-score of 0.65 is reported for metastatic tumors, a lower score predominantly due to a very small number of instances in the training and test sets."}
{"_id": "d540b5337f30b567cb1024465bc2f535f68c1366", "title": "A taxonomy and rating system to measure situation awareness in resuscitation teams", "text": "Team SA involves a common perspective between two or more individuals regarding current environmental events, their meaning, and projected future status. Team SA has been theorized to be important for resuscitation team effectiveness. Accordingly, multidimensional frameworks of observable behaviors relevant to resuscitation teams are needed to understand more deeply the nature of team SA, its implications for team effectiveness, and whether it can be trained. A seven-dimension team resuscitation SA framework was developed following a literature review and consensus process using a modified Delphi approach with a group of content experts. We applied a pre-post design within a day-long team training program involving four video-recorded simulated resuscitation events and 42 teams across Canada. The first and fourth events represented \"pre\" and \"post\" training events, respectively. Teams were scored on SA five times within each 15-minute event. Distractions were introduced to investigate whether SA scores would be affected. The current study provides initial construct validity evidence for a new measure of SA and explicates SA's role in resuscitation teams."}
{"_id": "8fa631d11e63567d84ded6e46340f81ed66fcfa6", "title": "Compact Aspect Embedding for Diversified Query Expansions", "text": "Diversified query expansion (DQE) based approaches aim to select a set of expansion terms with less redundancy among them while covering as many query aspects as possible. Recently they have experimentally demonstrate their effectiveness for the task of search result diversification. One challenge faced by existing DQE approaches is to ensure the aspect coverage. In this paper, we propose a novel method for DQE, called compact aspect embedding, which exploits trace norm regularization to learn a low rank vector space for the query, with each eigenvector of the learnt vector space representing an aspect, and the absolute value of its corresponding eigenvalue representing the association strength of that aspect to the query. Meanwhile, each expansion term is mapped into the vector space as well. Based on this novel representation of the query aspects and expansion terms, we design a greedy selection strategy to choose a set of expansion terms to explicitly cover all possible aspects of the query. We test our method on several TREC diversification data sets, and show that our method significantly outperforms the state-of-the-art search result diversification approaches."}
{"_id": "24e65484bccb3acd6653dda3c30817d70c2ab01d", "title": "LATIONS FOR KNOWLEDGE BASE COMPLETION", "text": "In this work we present a novel approach for the utilization of observed relations between entity pairs in the task of triple argument prediction. The approach is based on representing observations in a shared, continuous vector space of structured relations and text. Results on a recent benchmark dataset demonstrate that the new model is superior to existing sparse feature models. In combination with state-of-the-art models, we achieve substantial improvements when observed relations are available."}
{"_id": "6910f307fef66461f5cb561b4d4fc8caf8594af5", "title": "What's in an Embedding? Analyzing Word Embeddings through Multilingual Evaluation", "text": "In the last two years, there has been a surge of word embedding algorithms and research on them. However, evaluation has mostly been carried out on a narrow set of tasks, mainly word similarity/relatedness and word relation similarity and on a single language, namely English. We propose an approach to evaluate embeddings on a variety of languages that also yields insights into the structure of the embedding space by investigating how well word embeddings cluster along different syntactic features. We show that all embedding approaches behave similarly in this task, with dependency-based embeddings performing best. This effect is even more pronounced when generating low dimensional embed-"}
{"_id": "8d31dbda7c58de30ada8616e1fcb011d32d5cf83", "title": "A Practical Security Architecture for In-Vehicle CAN-FD", "text": "The controller area network with flexible data rate (CAN-FD) is attracting attention as the next generation of in-vehicle network technology. However, security issues have not been completely taken into account when designing CAN-FD, although every bit of information transmitted could be critical to driver safety. If we fail to solve the security vulnerabilities of CAN-FD, we cannot expect Vehicle-Information and Communications Technology (Vehicle-ICT) convergence to continue to develop. Fortunately, secure in-vehicle CAN-FD communication environments can be constructed using the larger data payload of CAN-FD. In this paper, we propose a security architecture for in-vehicle CAN-FD as a countermeasure (designed in accordance with CAN-FD specifications). We considered the characteristics of the International Organization for Standardization (ISO) 26262 Automotive Safety Integrity Level and the in-vehicle subnetwork to design a practical security architecture. We also evaluated the feasibility of the proposed security architecture using three kinds of microcontroller unit and the CANoe software. Our evaluation findings may be used as an indicator of the performance level of electronic control units for manufacturing next-generation vehicles."}
{"_id": "9ef8121d9969e9d7d0779f04f0fb26af923579bf", "title": "Design, measurement and equivalent circuit synthesis of high power HF transformer for three-phase composite dual active bridge topology", "text": "High voltage high frequency (HF) transformer provides the isolation between high and low dc link voltages in dual active bridge (DAB) converters. Such DAB converters are finding wide applications as an intermediate DC-DC converter in transformerless intelligent power substation (TIPS), which is proposed as an alternative for conventional distribution-transformer connecting 13.8 kV and 480 V grids. The design of HF transformer used in DAB stage of such application is very challenging considering the required isolation, size and cost. In this paper, the specification generation, design, characterization, test and measurement results on a 10kHz HF transformer are presented, highlighting the above challenges."}
{"_id": "5202bffff82e94b3d2308774371ef3c1e49f3daa", "title": "Analysis of the weighted Chinese air transportation multilayer network", "text": "Air transportation system is a crucial infrastructure of modern world. In this paper, we analyze the weighted Chinese air transportation network with the framework of multilayer network, in which each layer is defined by a commercial airline (company) and the weight of links is set respectively by the number of flights, the number of seats and the geographical distance between pairs of airports. It is shown that the size of the airlines can be characterized approximatively by the number of air routes they operated. The numeric results also indicate that the CCA (air China) is of a considerable higher value of maximal degree and betweenness than other top airlines. Our work will be helpful to better understand and model the Chinese air transportation network."}
{"_id": "2878b7c87cc084e90f066627b5525d8d4bf64f01", "title": "Domain Transfer SVM for video concept detection", "text": "Cross-domain learning methods have shown promising results by leveraging labeled patterns from auxiliary domains to learn a robust classifier for target domain, which has a limited number of labeled samples. To cope with the tremendous change of feature distribution between different domains in video concept detection, we propose a new cross-domain kernel learning method. Our method, referred to as Domain Transfer SVM (DTSVM), simultaneously learns a kernel function and a robust SVM classifier by minimizing both the structural risk functional of SVM and the distribution mismatch of labeled and unlabeled samples between the auxiliary and target domains. Comprehensive experiments on the challenging TRECVID corpus demonstrate that DTSVM outperforms existing cross-domain learning and multiple kernel learning methods."}
{"_id": "3e4886e512a1424f1c78a631491ae353c4fe86bd", "title": "Characterization of a Novel Nine-Transistor SRAM Cell", "text": "Data stability of SRAM cells has become an important issue with the scaling of CMOS technology. Memory banks are also important sources of leakage since the majority of transistors are utilized for on-chip caches in today's high performance microprocessors. A new nine-transistor (9T) SRAM cell is proposed in this paper for simultaneously reducing leakage power and enhancing data stability. The proposed 9T SRAM cell completely isolates the data from the bit lines during a read operation. The read static-noise-margin of the proposed circuit is thereby enhanced by 2 X as compared to a conventional six-transistor (6T) SRAM cell. The idle 9T SRAM cells are placed into a super cutoff sleep mode, thereby reducing the leakage power consumption by 22.9% as compared to the standard 6T SRAM cells in a 65-nm CMOS technology. The leakage power reduction and read stability enhancement provided with the new circuit technique are also verified under process parameter variations."}
{"_id": "c9c9443b41c54a2413740502d12ab43cdccd0767", "title": "Labelled data collection for anomaly detection in wireless sensor networks", "text": "Security of wireless sensor networks (WSN) is an important research area in computer and communications sciences. Anomaly detection is a key challenge in ensuring the security of WSN. Several anomaly detection algorithms have been proposed and validated recently using labeled datasets that are not publicly available. Our group proposed an ellipsoid-based anomaly detection algorithm but demonstrated its performance using synthetic datasets and real Intel Berkeley Research Laboratory and Grand St. Bernard datasets which are not labeled with anomalies. This approach requires manual assignment of the anomalies' positions based on visual estimates for performance evaluation. In this paper, we have implemented a single-hop and multi-hop sensor-data collection network. In both scenarios we generated real labeled data for anomaly detection and identified different types of anomalies. These labeled sensor data and types of anomalies are useful for research, such as machine learning, and this information will be disseminated to the research community."}
{"_id": "3e13bea4bac2b6d5cbb0a58bec225bfb56687574", "title": "Performance of LoRa-Based IoT Applications on Campus", "text": "To promote smart campus, National Chiao Tung University is deploying several location-based IoT applications based on wireless communications technologies including LoRa and NB-IoT. The applications are indoor/outdoor environment conditions monitoring (such as PM2.5, temperature, and humidity), emergency buttons, and so on. This paper uses PM2.5 measurement application as an example to present the performance of LoRa when it is used on campus. We investigated the LoRa transmission performance from LoRa end-devices to the LoRa gateway. We show how packet losses were affected by the distance between the end-device and the gateway, the transmit power, the payload length, the antenna angle, the time of day, and the weather conditions. The pattern of LoRa packet losses were also measured and we found that in our long-term PM2.5 monitoring application, more than 99\\% of LoRa packet losses occurred with three or less consecutive packet losses."}
{"_id": "2d83ba2d43306e3c0587ef16f327d59bf4888dc3", "title": "Large-Scale Video Classification with Convolutional Neural Networks", "text": "Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Encouraged by these results, we provide an extensive empirical evaluation of CNNs on large-scale video classification using a new dataset of 1 million YouTube videos belonging to 487 classes. We study multiple approaches for extending the connectivity of a CNN in time domain to take advantage of local spatio-temporal information and suggest a multiresolution, foveated architecture as a promising way of speeding up the training. Our best spatio-temporal networks display significant performance improvements compared to strong feature-based baselines (55.3% to 63.9%), but only a surprisingly modest improvement compared to single-frame models (59.3% to 60.9%). We further study the generalization performance of our best model by retraining the top layers on the UCF-101 Action Recognition dataset and observe significant performance improvements compared to the UCF-101 baseline model (63.3% up from 43.9%)."}
{"_id": "bd1f3039beefba764b818a3f2f842542690b42cb", "title": "Mechanisms governing empowerment effects: a self-efficacy analysis.", "text": "This experiment tested the hypotheses that perceived coping and cognitive control self-efficacy govern the effects of personal empowerment over physical threats. Women participated in a mastery modeling program in which they mastered the physical skills to defend themselves successfully against unarmed sexual assailants. Multifaceted measures of theoretically relevant variables were administered within a staggered intragroup control design to test the immediate and long-term effects of the empowerment program and the mechanisms through which it produced its effects. Mastery modeling enhanced perceived coping and cognitive control efficacy, decreased perceived vulnerability to assault, and reduced the incidence of intrusive negative thinking and anxiety arousal. These changes were accompanied by increased freedom of action and decreased avoidant behavior. Path analyses of causal structures revealed a dual path of regulation of behavior by perceived coping self-efficacy, one mediated through perceived vulnerability and risk discernment and the other through perceived cognitive control self-efficacy and intrusive negative thinking."}
{"_id": "d822a0dbb0607f99a412620fa275c5ad65d74d01", "title": "Computationally efficient low-pass FIR filter design using Cuckoo Search with adaptive Levy step size", "text": "This paper looks into efficient implementation of one dimensional low pass Finite Impulse Response filters using certain commonly used and state-of-the-art optimization techniques. Methods like Parks-McClellan (PM) equiripple design, Quantum-behaved Particle Swarm Optimization (QPSO) and Cuckoo Search Algorithm (CSA) with Levy Flight are employed and overall performance is further improved by hybridization and adaptive step size update. Various performance metrics are analyzed with a focus on increasing convergence speed to reach global optima faster. It is seen that the improved search methods used in this work, i.e., Simulated Annealing based Weighted Mean Best QPSO (SAWQPSO) and Adaptive CSA (ACSA) effect significant reductions in convergence time with ACSA proving to be the faster one. The designed filter is used in the receiver stage of a Frequency Modulated Radio Transmission model using a Quadrature-Phase Shift Keyed (QPSK) Modulator and Demodulator. Its efficiency is validated by obtaining near perfect correlation between the message and recovered signals."}
{"_id": "141e6c1dd532504611266d08458dbe2a0dbb4e98", "title": "Multiple kernel learning, conic duality, and the SMO algorithm", "text": "While classical kernel-based classifiers are based on a single kernel, in practice it is often desirable to base classifiers on combinations of multiple kernels. Lanckriet et al. (2004) considered conic combinations of kernel matrices for the support vector machine (SVM), and showed that the optimization of the coefficients of such a combination reduces to a convex optimization problem known as a quadratically-constrained quadratic program (QCQP). Unfortunately, current convex optimization toolboxes can solve this problem only for a small number of kernels and a small number of data points; moreover, the sequential minimal optimization (SMO) techniques that are essential in large-scale implementations of the SVM cannot be applied because the cost function is non-differentiable. We propose a novel dual formulation of the QCQP as a second-order cone programming problem, and show how to exploit the technique of Moreau-Yosida regularization to yield a formulation to which SMO techniques can be applied. We present experimental results that show that our SMO-based algorithm is significantly more efficient than the general-purpose interior point methods available in current optimization toolboxes."}
{"_id": "e0cf9c51192f63c3a9e3b23aa07ee5654fc97b68", "title": "Large Margin Methods for Structured and Interdependent Output Variables", "text": "Learning general functional dependencies between arbitra ry input and output spaces is one of the key challenges in computational intelligence. While recent progress in machine learning has mainly focused on designing flexible and powerful input representa tions, this paper addresses the complementary issue of designing classification algorithms that c an deal with more complex outputs, such as trees, sequences, or sets. More generally, we consider pr oblems involving multiple dependent output variables, structured output spaces, and classifica tion problems with class attributes. In order to accomplish this, we propose to appropriately generalize the well-known notion of a separation margin and derive a corresponding maximum-margin formulat ion. While this leads to a quadratic program with a potentially prohibitive, i.e. exponential, number of constraints, we present a cutting plane algorithm that solves the optimization problem i n polynomial time for a large class of problems. The proposed method has important applications i n areas such as computational biology, natural language processing, information retrieval/extr action, and optical character recognition. Experiments from various domains involving different types o f output spaces emphasize the breadth and generality of our approach."}
{"_id": "0948365ef39ef153e61e9569ade541cf881c7c2a", "title": "Learning the Kernel Matrix with Semi-Definite Programming", "text": "Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive definite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space\u2014classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semi-definite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm\u2014 using the labelled part of the data one can learn an embedding also for the unlabelled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method to learn the 2-norm soft margin parameter in support vector machines, solving another important open problem. Finally, the novel approach presented in the paper is supported by positive empirical results."}
{"_id": "b85509aa44db10d3116b36599014df39164c2a2a", "title": "Open-World Knowledge Graph Completion", "text": "Knowledge Graphs (KGs) have been applied to many tasks including Web search, link prediction, recommendation, natural language processing, and entity linking. However, most KGs are far from complete and are growing at a rapid pace. To address these problems, Knowledge Graph Completion (KGC) has been proposed to improve KGs by filling in its missing connections. Unlike existing methods which hold a closed-world assumption, i.e., where KGs are fixed and new entities cannot be easily added, in the present work we relax this assumption and propose a new open-world KGC task. As a first attempt to solve this task we introduce an openworld KGC model called ConMask. This model learns embeddings of the entity\u2019s name and parts of its text-description to connect unseen entities to the KG. To mitigate the presence of noisy text descriptions, ConMask uses a relationshipdependent content masking to extract relevant snippets and then trains a fully convolutional neural network to fuse the extracted snippets with entities in the KG. Experiments on large data sets, both old and new, show that ConMask performs well in the open-world KGC task and even outperforms existing KGC models on the standard closed-world KGC task. Introduction Knowledge Graphs (KGs) are a special type of information network that represents knowledge using RDF-style triples \u3008h, r, t\u3009, where h represents some head entity and r represents some relationship that connects h to some tail entity t. In this formalism a statement like \u201cSpringfield is the capital of Illinois\u201d can be represented as \u3008Springfield, capitalOf, Illinois\u3009. Recently, a variety of KGs, such as DBPedia (Lehmann et al. 2015), and ConceptNet (Speer, Chin, and Havasi 2017), have been curated in the service of fact checking (Shi and Weninger 2016), question answering (Lukovnikov et al. 2017), entity linking (Hachey et al. 2013), and for many other tasks (Nickel et al. 2016). Despite their usefulness and popularity, KGs are often noisy and incomplete. For example, DBPedia, which is generated from Wikipedia\u2019s infoboxes, contains 4.6 million entities, but half of these entities contain less than 5 relationships. Based on this observation, researchers aim to improve the accuracy and reliability of KGs by predicting the existence Copyright c \u00a9 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. (or probability) of relationships. This task is often called Knowledge Graph Completion (KGC). Continuing the example from above, suppose the relationship capitalOf is missing between Indianapolis and Indiana; the KGC task might predict this missing relationship based on the topological similarity between this part of the KG and the part containing Springfield and Illinois. Progress in vector embeddings originating with word2vec has produced major advancements in the KGC task. Typical embedding-based KGC algorithms like TransE (Bordes et al. 2013) and others learn lowdimensional representations (i.e., embeddings) for entities and relationships using topological features. These models are able to predict the existence of missing relationships thereby \u201ccompleting\u201d the KG. Existing KGC models implicitly operate under the Closed-World Assumption (Reiter 1978) in which all entities and relationships in the KG cannot be changed \u2013 only discovered. 
We formally define the Closed-world KGC task as follows: Definition 1 Given an incomplete Knowledge Graph G = (E,R,T), where E, R, and T are the entity set, relationship set, and triple set respectively, Closed-World Knowledge Graph Completion completes G by finding a set of missing triples T\u2032 = {\u3008h, r, t\u3009 | h \u2208 E, r \u2208 R, t \u2208 E, \u3008h, r, t\u3009 \u2209 T} in the incomplete Knowledge Graph G. Closed-world KGC models heavily rely on the connectivity of the existing KG and are best able to predict relationships between existing, well-connected entities. Unfortunately, because of their strict reliance on the connectivity of the existing KG, closed-world KGC models are unable to predict the relationships of poorly connected or new entities. Therefore, we assess that closed-world KGC is most suitable for fixed or slowly evolving KGs. However, most real-world KGs evolve quickly with new entities and relationships being added by the minute. For example, in the 6 months between DBPedia\u2019s October 2015 release and its April 2016 release, 36,340 new English entities were added \u2013 a rate of 200 new entities per day. Recall that DBPedia merely tracks changes to Wikipedia infoboxes, so these updates do not include newly added articles without valid infobox data. Because of the accelerated growth of online information, repeatedly re-training closed-world models every day (or hour) has become impractical. In the present work we borrow the idea of the open-world assumption from probabilistic database literature (Ceylan, Darwiche, and Van den Broeck 2016) and relax the closed-world assumption to develop an Open-World Knowledge Graph Completion model capable of predicting relationships involving unseen entities or those entities that have only a few connections. Formally we define the open-world KGC task as follows: Definition 2 Given an incomplete Knowledge Graph G = (E,R,T), where E, R, and T are the entity set, relationship set, and triple set respectively, Open-World Knowledge Graph Completion completes G by finding a set of missing triples T\u2032 = {\u3008h, r, t\u3009 | \u3008h, r, t\u3009 \u2209 T, h \u2208 E\u2032, t \u2208 E\u2032, r \u2208 R} in the incomplete Knowledge Graph G, where E\u2032 \u2287 E is an entity superset. In Defn. 2 we relax the constraint on the triple set T\u2032 so that triples in T\u2032 can contain entities that are absent from the original entity set E. Closed-world KGC models learn entity and relationship embedding vectors by updating an initially random vector based on the KG\u2019s topology. Therefore, any triple \u3008h, r, t\u3009 \u2208 T\u2032 such that h \u2209 E or t \u2209 E will only ever be represented by its initial random vector because its absence does not permit updates from any inference function. In order to predict the missing connections for unseen entities, it is necessary to develop alternative features to replace the topological features used by closed-world models. Text content is a natural substitute for the missing topological features of disconnected or newly added entities. Indeed, most KGs such as FreeBase (Bollacker et al. 2008), DBPedia (Lehmann et al. 2015), and SemMedDB (Kilicoglu et al. 2012) were either directly extracted from (Lin et al. 2016; Ji and Grishman 2011), or are built in parallel to some underlying textual descriptions. 
However, open-world KGC differs from the standard information extraction task because 1) Rather than extracting triples from a large text corpus, the goal of open-world KGC is to discover missing relationships; and 2) Rather than a pipeline of independent subtasks like Entity Linking (Francis-Landau, Durrett, and Klein 2016) and Slot-filling (Liu and Lane 2016), etc., open-world KGC is a holistic task that operates as a single model. Although it may seem intuitive to simply include an entity\u2019s description into an existing KGC model, we find that learning useful vector embeddings from unstructured text is much more challenging than learning topology-embeddings as in the closed-world task. First, in closed-world KGC models, each entity will have a unique embedding, which is learned from its directly connected neighbors; whereas open-world KGC models must fuse entity embeddings with the word embeddings of the entity\u2019s description. These word embeddings must be updated by entities sharing the same words regardless of their connectivity status. Second, because of the inclusion of unstructured content, open-world models are likely to include noisy or redundant information. With respect to these challenges, the present work makes the following contributions: 1. We describe an open-world KGC model called ConMask that uses relationship-dependent content masking to reduce noise in the given entity description and uses fully convolutional neural networks (FCN) to fuse related text into a relationship-dependent entity embedding. 2. We release two new Knowledge Graph Completion data sets constructed from DBPedia and Wikipedia for use in closed-world and open-world KGC evaluation. Before introducing the ConMask model, we first present preliminary material by describing relevant KGC models. Then we describe the methodology, data sets, and a robust case study of closed-world and open-world KGC tasks. Finally, we draw conclusions and offer suggestions for future work. Closed-World Knowledge Graph Completion A variety of models have been developed to solve the closed-world KGC task. The most fundamental and widely used model is a translation-based Representation Learning (RL) model called TransE (Bordes et al. 2013). TransE assumes there exists a simple function that can translate the embedding of the head entity to the embedding of some tail entity via some relationship:"}
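The TransE translation function is truncated at the end of the record above. For completeness, the standard form from Bordes et al. (2013), which the record cites, is the translation assumption below with the usual L1 or L2 distance score:

```latex
% TransE (Bordes et al. 2013): the relationship r translates the head
% embedding onto the tail embedding, scored by an L1 or L2 distance.
\mathbf{h} + \mathbf{r} \approx \mathbf{t},
\qquad
f(h, r, t) = \bigl\lVert \mathbf{h} + \mathbf{r} - \mathbf{t} \bigr\rVert_{1\,\text{or}\,2}
```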
{"_id": "232b43584b2236669c0a53702ad89ab10c3886ea", "title": "Implicit Quantile Networks for Distributional Reinforcement Learning", "text": "For the merging function m, the simplest choice would be a simple vector concatenation of \u03c8(x) and \u03c6(\u03c4). Note however, that the MLP f which takes in the output of m and outputs the action-value quantiles, only has a single hidden layer in the DQN network. Therefore, to force a sufficiently early interaction between the two representations, we also considered a multiplicative function m(\u03c8, \u03c6) = \u03c8 \u03c6, where denotes the element-wise (Hadamard) product of two vectors, as well as a \u2018residual\u2019 function m(\u03c8, \u03c6) = \u03c8 (1 + \u03c6)."}
{"_id": "476f764f44d9d31b6bb86dc72b34de681b8b4b03", "title": "DeepLiDAR: Deep Surface Normal Guided Depth Prediction for Outdoor Scene from Sparse LiDAR Data and Single Color Image", "text": "In this paper, we propose a deep learning architecture that produces accurate dense depth for the outdoor scene from a single color image and a sparse depth. Inspired by the indoor depth completion, our network estimates surface normals as the intermediate representation to produce dense depth, and can be trained end-to-end. With a modified encoder-decoder structure, our network effectively fuses the dense color image and the sparse LiDAR depth. To address outdoor specific challenges, our network predicts a confidence mask to handle mixed LiDAR signals near foreground boundaries due to occlusion, and combines estimates from the color image and surface normals with learned attention maps to improve the depth accuracy especially for distant areas. Extensive experiments demonstrate that our model improves upon the state-of-the-art performance on KITTI depth completion benchmark. Ablation study shows the positive impact of each model components to the final performance, and comprehensive analysis shows that our model generalizes well to the input with higher sparsity or from indoor scenes."}
{"_id": "e5aad460c7b8b1fff70e45a4d1fb79180cf4d098", "title": "Introduction Welcome to this early seminal work , Fostering Resiliency in Kids : Protective Factors in the Family , School and Community", "text": "The field of prevention, both research and practice, came a long way in the1980s: from short-term, even one-shot, individual-focused interventions in the school classroom to a growing awareness and beginning implementation of long-term, comprehensive, environmental-focused interventions expanding beyond the school to include the community. Furthermore, in the mid-1980s we finally started to hear preventionists talking about prevention strategies and programs based on research identifying the underlying risk factors for problems such as alcohol and other drug abuse, teen pregnancy, delinquency and gangs, and dropping out (Hawkins, Lishner, and Catalano, 1985). While certainly a giant step in the right direction, the identification of risks does not necessarily provide us with a clear sense of just what strategies we need to implement to reduce the risks. More recently, we are hearing preventionists talk about \u201cprotective factors,\u201d about building \u201cresiliency\u201d in youth, about basing our strategies on what research hast old us about the environmental factors that facilitate the development of youth who do not get involved in life-compromising problems (Benard, March 1987). What clearly becomes the challenge for the l990s is the implementation of prevention strategies that strengthen protective factors in our families, schools, an communities. As Gibbs and Bennett (1990) conceptualize the process, we must\u201dturn the situation around by translating negative risk factors into positive action strategies\u201d which are, in essence, protective factors. After a brief overview of the protective factor research phenomenon, this paper will discuss the major protective factors that research has identified as contributing to the development of resiliency in youth and the implications of this for building effective prevention programs."}
{"_id": "acdbcb6f40a178b4a1cb95bdfc812f21e3110d54", "title": "A Novel AHRS Inertial Sensor-Based Algorithm for Wheelchair Propulsion Performance Analysis", "text": "With the increasing rise of professionalism in sport, athletes, teams, and coaches are looking to technology to monitor performance in both games and training in order to find a competitive advantage. The use of inertial sensors has been proposed as a cost effective and adaptable measurement device for monitoring wheelchair kinematics; however, the outcomes are dependent on the reliability of the processing algorithms. Though there are a variety of algorithms that have been proposed to monitor wheelchair propulsion in court sports, they all have limitations. Through experimental testing, we have shown the Attitude and Heading Reference System (AHRS)-based algorithm to be a suitable and reliable candidate algorithm for estimating velocity, distance, and approximating trajectory. The proposed algorithm is computationally inexpensive, agnostic of wheel camber, not sensitive to sensor placement, and can be embedded for real-time implementations. The research is conducted under Griffith University Ethics (GU Ref No: 2016/294)."}
{"_id": "75514c2623a0c02776b42689963e987c29b42c05", "title": "Tuning of PID controller based on Fruit Fly Optimization Algorithm", "text": "The Proportional - Integral - Derivative (PID) controllers are one of the most popular controllers used in industry because of their remarkable effectiveness, simplicity of implementation and broad applicability. PID tuning is the key issue in the design of PID controllers and most of the tuning processes are implemented manually resulting in difficulty and time consuming. To enhance the capabilities of traditional PID parameters tuning techniques, modern heuristics approaches, such as Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), are employed recent years. In this paper, a novel tuning method based on Fruit Fly Optimization Algorithm (FOA) is proposed to optimize PID controller parameters. Each fruit fly's position represents a candidate solution for PID parameters. When the fruit fly swarm flies towards one location, it is treated as the evolution of each iterative swarm. After hundreds of iteration, the tuning results - the best PID controller parameters can be obtained. The main advantages of the proposed method include ease of implementation, stable convergence characteristic, large searching range, ease of transformation of such concept into program code and ease of understanding. Simulation results demonstrate that the FOA-Based optimized PID (FOA - PID) controller is with the capability of providing satisfactory closed - loop performance."}
{"_id": "91cf9beb696cbb0818609614f4da7351262eac86", "title": "Event detection over twitter social media streams", "text": "In recent years, microblogs have become an important source for reporting real-world events. A real-world occurrence reported in microblogs is also called a social event. Social events may hold critical materials that describe the situations during a crisis. In real applications, such as crisis management and decision making, monitoring the critical events over social streams will enable watch officers to analyze a whole situation that is a composite event, and make the right decision based on the detailed contexts such as what is happening, where an event is happening, and who are involved. Although there has been significant research effort on detecting a target event in social networks based on a single source, in crisis, we often want to analyze the composite events contributed by different social users. So far, the problem of integrating ambiguous views from different users is not well investigated. To address this issue, we propose a novel framework to detect composite social events over streams, which fully exploits the information of social data over multiple dimensions. Specifically, we first propose a graphical model called location-time constrained topic (LTT) to capture the content, time, and location of social messages. Using LTT, a social message is represented as a probability distribution over a set of topics by inference, and the similarity between two messages is measured by the distance between their distributions. Then, the events are identified by conducting efficient similarity joins over social media streams. To accelerate the similarity join, we also propose a variable dimensional extendible hash over social streams. We have conducted extensive experiments to prove the high effectiveness and efficiency of the proposed approach."}
{"_id": "e10642453c5c99442eb24743c4bab60a3a0b6273", "title": "Kullback-Leibler Divergence Constrained Distributionally Robust Optimization", "text": "In this paper we study distributionally robust optimization (DRO) problems where the ambiguity set of the probability distribution is defined by the Kullback-Leibler (KL) divergence. We consider DRO problems where the ambiguity is in the objective function, which takes a form of an expectation, and show that the resulted minimax DRO problems can be formulated as a one-layer convex minimization problem. We also consider DRO problems where the ambiguity is in the constraint. We show that ambiguous expectation-constrained programs may be reformulated as a one-layer convex optimization problem that takes the form of the Benstein approximation of Nemirovski and Shapiro (2006). We further consider distributionally robust probabilistic programs. We show that the optimal solution of a probability minimization problem is also optimal for the distributionally robust version of the same problem, and also show that the ambiguous chance-constrained programs (CCPs) may be reformulated as the original CCP with an adjusted confidence level. A number of examples and special cases are also discussed in the paper to show that the reformulated problems may take simple forms that can be solved easily. The main contribution of the paper is to show that the KL divergence constrained DRO problems are often of the same complexity as their original stochastic programming problems and, thus, KL divergence appears a good candidate in modeling distribution ambiguities in mathematical programming."}
{"_id": "33dc678ba56819b2fd05c1b538b0398125f4ccd3", "title": "Measuring the effectiveness of answers in Yahoo! Answers", "text": "Purpose \u2013 This study investigates the ways in which effectiveness of answers in Yahoo! Answers, one of the largest community question answering sites (CQAs), is related to question types and answerer reputation. Effective answers are defined as those that are detailed, readable, superior in quality, and contributed promptly. Five question types that were studied include factoid, list, definition, complex interactive, and opinion. Answerer reputation refers to the past track record of answerers in the community. Design/Methodology/Approach \u2013 The dataset comprises 1,459 answers posted in Yahoo! Answers in response to 464 questions that were distributed across the five question types. The analysis was done using factorial analysis of variance. Findings \u2013 The results indicate that factoid, definition and opinion questions were comparable in attracting high quality as well as readable answers. Although reputed answerers generally fared better in offering detailed and high quality answers, novices were found to submit more readable responses. Moreover, novices were more prompt in answering factoid, list and definition questions. Originality/value \u2013 By analyzing variations in answer effectiveness with a twin-focus on question types and answerer reputation, this study explores a strand of CQA research that has hitherto received limited attention. The findings offer insights to users and designers of CQAs."}
{"_id": "41e07d21451df21dacda2fea6f90b53bf4b89b27", "title": "An Inference Model for Semantic Entailment in Natural Language", "text": "Semantic entailment is the problem of determining if the meaning of a given sentence entails that of another. This is a fundamental problem in natural language understanding that provides a broad framework for studying language variability and has a large number of applications. We present a principled approach to this problem that builds on inducing rerepresentations of text snippets into a hierarchical knowledge representation along with a sound inferential mechanism that makes use of it to prove semantic entailment."}
{"_id": "d744e447012f2dea175462c1a15472787b171adf", "title": "The social role of social media: the case of Chennai rains-2015", "text": "Social media has altered the way individuals communicate in present scenario. Individuals feel more connected on Facebook and Twitter with greater communication freedom to chat, share pictures, and videos. Hence, social media is widely employed by various companies to promote their product and services and establish better customer relationships. Owing to the increasing popularity of these social media platforms, their usage is also expanding significantly. Various studies have discussed the importance of social media in the corporate world for effective marketing communication, customer relationships, and firm performance, but no studies have focused on the social role of social media, i.e., in disaster resilience in India. Various academicians and practitioners have advocated the importance and use of social media in disaster resilience. This article focuses on the role that social media can play during the time of natural disasters, with the help of the recent case of Chennai floods in India. This study provides a better understanding about the role social media can play in natural disaster resilience in Indian context."}
{"_id": "9c40ba65099210f9af2a25567fd1a5461e1fb80d", "title": "A New Algorithm for Detecting Text Line in Handwritten Documents", "text": "Curvilinear text line detection and segmentation in handwritten documents is a significant challenge for handwriting recognition. Given no prior knowledge of script, we model text line detection as an image segmentation problem by enhancing text line structure using a Gaussian window, and adopting the level set method to evolve text line boundaries. Experiments show that the proposed method achieves high accuracy for detecting text lines in both handwritten and machine printed documents with many scripts."}
{"_id": "e43651a1dc78f5420268c49508dce13e1f154a7a", "title": "Modelling and control of a 2-DOF planar parallel manipulator for semiconductor packaging systems", "text": "A novel direct-drive planar parallel manipulator for high-speed and high-precision semiconductor packaging systems is presented. High precision kinematics design, significant reduction on moving mass and driving power of the actuators over traditional XY motion stages are the benefits of the proposed manipulator. The mathematical model of the manipulator is obtained using the Newton-Euler method and a practical model-based control design approach is employed to design the PID computed-torque controller. Experimental results demonstrate that the proposed planar parallel manipulator has significant improvements on motion performance in terms of positioning accuracy, settling time and stability when compared with traditional XY stages. This shows that the proposed planar parallel manipulator can provide a superior alternative for replacing traditional XY motion stages in high precision low-payload applications"}
{"_id": "5e38a1f41a5ab3c04cfbfb1d9de4ba8a7721d8b7", "title": "BuildingRules: a trigger-action based system to manage complex commercial buildings", "text": "Modern Building Management Systems (BMSs) provide limited amount of control to its occupants, and typically allow only the facility manager to set the building policies. BuildingRules let occupants to customise their office spaces using trigger-action programming. In order to accomplish this task, BuildingRules automatically detects conflicts among the policies expressed by the users using a SMT based logic. We tested our system with 23 users across 17 days in a virtual office building, and evaluate the effectiveness and scalability of the system."}
{"_id": "eb70e45a5f4b74edc1e1fdfa052905184daf655c", "title": "Social Media, News and Political Information during the US Election: Was Polarizing Content Concentrated in Swing States?", "text": "US voters shared large volumes of polarizing political news and information in the form of links to content from Russian, WikiLeaks and junk news sources. Was this low quality political information distributed evenly around the country, or concentrated in swing states and particular parts of the country? In this data memo we apply a tested dictionary of sources about political news and information being shared over Twitter over a ten day period around the 2016 Presidential Election. Using self-reported location information, we place a third of users by state and create a simple index for the distribution of polarizing content around the country. We find that (1) nationally, Twitter users got more misinformation, polarizing and conspiratorial content than professionally produced news. (2) Users in some states, however, shared more polarizing political news and information than users in other states. (3) Average levels of misinformation were higher in swing states than in uncontested states, even when weighted for the relative size of the user population in each state. We conclude with some observations about the impact of strategically disseminated polarizing information on public life. COMPUTATIONAL PROPAGANDA AND THE 2016 US ELECTION Social media plays an important role in the circulation of ideas about public policy and politics. Political actors and governments worldwide are deploying both people and algorithms to shape public life. Bots are pieces of software intended to perform simple, repetitive, and robotic tasks. They can perform legitimate tasks on social media like delivering news and information\u2014real news as well as junk\u2014or undertake malicious activities like spamming, harassment and hate speech. Whatever their uses, bots on social media platforms are able to rapidly deploy messages, replicate themselves, and pass as human users. They are also a pernicious means of spreading junk news over social networks of family and friends. Computational propaganda flourished during the 2016 US Presidential Election. There were numerous examples of misinformation distributed online with the intention of misleading voters or simply earning a profit. Multiple media reports have investigated how \u201cfake news\u201d may have propelled Donald J. Trump to victory. What kinds of political news and information were social media users in the United States sharing in advance of voting day? How much of it was extremist, sensationalist, conspiratorial, masked commentary, fake, or some other form of junk news? Was this misleading information concentrated in the battleground states where the margins of victory for candidates had big consequences for electoral outcomes? SOCIAL MEDIA AND JUNK NEWS Junk news, widely distributed over social media platforms, can in many cases be considered to be a form of computational propaganda. Social media platforms have served significant volumes of fake, sensational, and other forms of junk news at sensitive moments in public life, though most platforms reveal little about how much of this content there is or what its impact on users may be. The World Economic Forum recently identified the rapid spread of misinformation online as among the top 10 perils to society. 
Prior research has found that social media favors sensationalist content, regardless of whether the content has been fact checked or is from a reliable source. When junk news is backed by automation, either through dissemination algorithms that the platform operators cannot fully explain or through political bots that promote content in a preprogrammed way, political actors have a powerful set of tools for computational propaganda. Both state and non-state political actors deliberately manipulate and amplify non-factual information online. Fake news websites deliberately publish misleading, deceptive or incorrect information purporting to be real news for political, economic or cultural gain. These sites often rely on social media to attract web traffic and drive engagement. Both fake news websites and political bots are crucial tools in digital propaganda attacks\u2014they aim to influence conversations, demobilize opposition and generate false support. SAMPLING AND METHOD Our analysis is based on a dataset of 22,117,221 tweets collected between November 1-11, 2016, that contained hashtags related to politics and the election in the US. Our previous analyses have been based on samples of political conversation, over Twitter that used hashtags that were relevant to the US election as a whole. In this analysis, we selected users who provided some evidence of physical location across the United States in their profiles. Within our initial sample, approximately 7,083,691 tweets, 32 percent of the total"}
{"_id": "122ab8b1ac332bceacf556bc50268b9d80552bb3", "title": "Uptane: Security and Customizability of Software Updates for Vehicles", "text": "A widely accepted premise is that complex software frequently contains bugs that can be remotely exploited by attackers. When this software is on an electronic control unit (ECU) in a vehicle, exploitation of these bugs can have life or death consequences. Since software for vehicles is likely to proliferate and grow more complex in time, the number of exploitable vulnerabilities will increase. As a result, manufacturers are keenly aware of the need to quickly and efficiently deploy updates so that software vulnerabilities can be remedied as soon as possible."}
{"_id": "8d41667f77c8d2aad1eb0ad2b8501e6080f235f1", "title": "Automated classification of security requirements", "text": "Requirement engineers are not able to elicit and analyze the security requirements clearly, that are essential for the development of secure and reliable software. Proper identification of security requirements present in the Software Requirement Specification (SRS) document has been a problem being faced by the developers. As a result, they are not able to deliver the software free from threats and vulnerabilities. Thus, in this paper, we intend to mine the descriptions of security requirements present in the SRS document and thereafter develop the classification models. The security-based descriptions are analyzed using text mining techniques and are then classified into four types of security requirements viz. authentication-authorization, access control, cryptography-encryption and data integrity using J48 decision tree method. Corresponding to each type of security requirement, a prediction model has been developed. The effectiveness of the prediction models is evaluated against requirement specifications collected from 15 projects which have been developed by MS students at DePaul University. The result analysis indicated that all the four models have performed very well in predicting their respective type of security requirements."}
{"_id": "735762290da16163e6f408e1dd138d47ae400745", "title": "CitySpectrum: a non-negative tensor factorization approach", "text": "People flow at a citywide level is in a mixed state with several basic patterns (e.g. commuting, working, commercial), and it is therefore difficult to extract useful information from such a mixture of patterns directly. In this paper, we proposed a novel tensor factorization approach to modeling city dynamics in a basic life pattern space (CitySpectral Space). To obtain the CitySpectrum, we utilized Non-negative Tensor Factorization (NTF) to decompose a people flow tensor into basic life pattern tensors, described by three bases i.e. the intensity variation among different regions, the time-of-day and the sample days. We apply our approach to a big mobile phone GPS log dataset (containing 1.6 million users) to model the fluctuation in people flow before and after the Great East Japan Earthquake from a CitySpectral perspective. In addition, our framework is extensible to a variety of auxiliary spatial-temporal data. We parametrize a people flow with a spatial distribution of the Points of Interest (POIs) to quantitatively analyze the relationship between human mobility and POI distribution. Based on the parametric people flow, we propose a spectral approach for a site-selection recommendation and people flow simulation in another similar area using POI distribution."}
{"_id": "2f3d178126225f8618ab01cfd3786edcec0d30b7", "title": "A Novel Artificial Bee Colony Algorithm Based on Modified Search Equation and Orthogonal Learning", "text": "The artificial bee colony (ABC) algorithm is a relatively new optimization technique which has been shown to be competitive to other population-based algorithms. However, ABC has an insufficiency regarding its solution search equation, which is good at exploration but poor at exploitation. To address this concerning issue, we first propose an improved ABC method called as CABC where a modified search equation is applied to generate a candidate solution to improve the search ability of ABC. Furthermore, we use the orthogonal experimental design (OED) to form an orthogonal learning (OL) strategy for variant ABCs to discover more useful information from the search experiences. Owing to OED's good character of sampling a small number of well representative combinations for testing, the OL strategy can construct a more promising and efficient candidate solution. In this paper, the OL strategy is applied to three versions of ABC, i.e., the standard ABC, global-best-guided ABC (GABC), and CABC, which yields OABC, OGABC, and OCABC, respectively. The experimental results on a set of 22 benchmark functions demonstrate the effectiveness and efficiency of the modified search equation and the OL strategy. The comparisons with some other ABCs and several state-of-the-art algorithms show that the proposed algorithms significantly improve the performance of ABC. Moreover, OCABC offers the highest solution quality, fastest global convergence, and strongest robustness among all the contenders on almost all the test functions."}
{"_id": "3a043714354fe498752b45e4cf429dbae0fb2558", "title": "Large-scale cluster management at Google with Borg", "text": "Google's Borg system is a cluster manager that runs hundreds of thousands of jobs, from many thousands of different applications, across a number of clusters each with up to tens of thousands of machines.\n It achieves high utilization by combining admission control, efficient task-packing, over-commitment, and machine sharing with process-level performance isolation. It supports high-availability applications with runtime features that minimize fault-recovery time, and scheduling policies that reduce the probability of correlated failures. Borg simplifies life for its users by offering a declarative job specification language, name service integration, real-time job monitoring, and tools to analyze and simulate system behavior.\n We present a summary of the Borg system architecture and features, important design decisions, a quantitative analysis of some of its policy decisions, and a qualitative examination of lessons learned from a decade of operational experience with it."}
{"_id": "0831a5baf38c9b3d43c755319a602b15fc01c52d", "title": "The tail at scale", "text": "Software techniques that tolerate latency variability are vital to building responsive large-scale Web services."}
{"_id": "3a257a87ab5d1e317336a6cefb50fee1958bd84a", "title": "Characterizing Machines and Workloads on a Google Cluster", "text": "Cloud computing offers high scalability, flexibility and cost-effectiveness to meet emerging computing requirements. Understanding the characteristics of real workloads on a large production cloud cluster benefits not only cloud service providers but also researchers and daily users. This paper studies a large-scale Google cluster usage trace dataset and characterizes how the machines in the cluster are managed and the workloads submitted during a 29-day period behave. We focus on the frequency and pattern of machine maintenance events, job- and task-level workload behavior, and how the overall cluster resources are utilized."}
{"_id": "71ad31bd506ea571f6c04a293ff298f42fa7b47c", "title": "Characterizing Cloud Applications on a Google Data Center", "text": "In this paper, we characterize Google applications, based on a one-month Google trace with over 650k jobs running across over 12000 heterogeneous hosts from a Google data center. On one hand, we carefully compute the valuable statistics about task events and resource utilization for Google applications, based on various types of resources (such as CPU, memory) and execution types (e.g., whether they can run batch tasks or not). Resource utilization per application is observed with an extremely typical Pareto principle. On the other hand, we classify applications via a K-means clustering algorithm with optimized number of sets, based on task events and resource usage. The number of applications in the K-means clustering sets follows a Pareto-similar distribution. We believe our work is very interesting and valuable for the further investigation of Cloud environment."}
{"_id": "fe6645c5c9f91f3dfacbbc1e7ce995aa4ab8d963", "title": "The Autism Diagnostic Observation Schedule, Toddler Module: Standardized Severity Scores.", "text": "Standardized calibrated severity scores (CSS) have been created for Autism Diagnostic Observation Schedule, 2nd edition (ADOS-2) Modules 1-4 as a metric of the relative severity of autism-specific behaviors. Total and domain CSS were created for the Toddler Module to facilitate comparison to other modules. Analyses included 388 children with ASD age 12-30 months and were replicated on 435 repeated assessments from 127 children with ASD. Compared to raw scores, associations between total and domain CSS and participant characteristics were reduced in the original sample. Verbal IQ effects on Social Affect-CSS were not reduced in the replication sample. Toddler Module CSS increases comparability of ADOS-2 scores across modules and allows studies of symptom trajectories to extend to earlier ages."}
{"_id": "d03b77a5f76764819315ce2daf22bb59d39b7832", "title": "Arx : A DBMS with Semantically Secure Encryption Rishabh", "text": "In recent years, encrypted databases have emerged as a promising direction that provides data confidentiality without sacrificing functionality: queries are executed on encrypted data. However, existing practical proposals rely on a set of weak encryption schemes that have been shown to leak sensitive data. In this paper, we propose Arx, a practical and functionally rich database system that encrypts the data only with semantically secure encryption schemes. We show that Arx supports real applications such as ShareLaTeX and a health data cloud provider with a modest performance overhead."}
{"_id": "47729d2dcaf76092364bd207a8ccd53e4959b5a5", "title": "Design and Vlsi Implementation of Anti- Collision Enabled Robot Processor Using Rfid Technology", "text": "RFID is a low power wireless emerging technology which has given rise to highly promising applications in real life. It can be employed for robot navigation. In multi-robot environment, when many robots are moving in the same workspace, there is a possibility of their physical collision with themselves as well as with physical objects. In the present work, we have proposed and developed a processor incorporating smart algorithm for avoiding such collisions with the help of RFID technology and implemented it by using VHDL. The design procedure and the simulated results are very useful in designing and implementing a practical RFID system. The RTL schematic view of the processor is achieved by successfully synthesizing the proposed design."}
{"_id": "f5f527d7870fc7698be16145674a08ed4d0c8a13", "title": "Text Steganography in SMS", "text": "One of the services used in mobile phone is the short message service (SMS) which is widely used by the public in all parts of the world especially in Asia and Europe. This service enables people to write and exchange short messages via mobile phone. Due to the limited size of SMS, lack of a proper keyboard on the mobile phone and to improve the speed of typing, new abbreviations have been invented for different words and phrases which has lead to the invention of a new language called SMS-texting. One of the main issues in communication is information security and privacy. There are many methods for secret communication and many researchers are working on steganography. In steganography the data is hidden in a cover media such as picture or text. The present paper offers a new method for secret exchange of information through SMS by using and developing abbreviation text steganography with the use of the invented language of SMS-texting. This project has been implemented by J2ME (Java 2 Micro Edition) programming language and tested on a Nokia N71 mobile phone."}
{"_id": "8d7053c21f626df572c56561b6fb87ae62f2612b", "title": "Synchronization and cell search algorithms in 3GPP long term evolution systems (FDD mode)", "text": "In this paper initial downlink synchronization (sync) and cell identification algorithms for the 3rd generation partnership project (3GPP) long term evolution (LTE) systems are presented. The frequency division duplex (FDD) mode is used in the downlink radio frame structure. A user equipment digital receiver architecture is proposed. The orthogonality of signals in orthogonal frequency division multiplexing (OFDM) may be lost due to some impairments such as frequency, time, and phase offsets. Therefore, major parts of this research are involved in restoring the orthogonality at the user equipment (UE) by using some techniques to estimate the sync parameters, and to detect cell identity among candidate cells based on sync signals. A Farrow structure interpolator is used to compensate the fractional timing offset. Both inter site synchronous and inter site asynchronous networks are presented. Computer simulations are used to demonstrate the performance of the proposed schemes with multipath Rayleigh fading channel, frequency offsets, timing offsets, and additive white Gaussian noise (AWGN). Results show high probability of cell identification in a very short time 20 ms in both multi cell model scenarios, especially when multiple input multiple output (MIMO) technique, oversampling at the UE, and high order Farrow structure interpolator are used. Key-Words: 3GPP LTE, OFDM, MIMO, Frequency and Time Synchronization, Cell Search"}
{"_id": "faca93d1fa9044cb250e1ed9195c94c1b5bc75da", "title": "Social norms and human cooperation", "text": "The existence of social norms is one of the big unsolved problems in social cognitive science. Although no other concept is invoked more frequently in the social sciences, we still know little about how social norms are formed, the forces determining their content, and the cognitive and emotional requirements that enable a species to establish and enforce social norms. In recent years, there has been substantial progress, however, on how cooperation norms are enforced. Here we review evidence showing that sanctions are decisive for norm enforcement, and that they are largely driven by non-selfish motives. Moreover, the explicit study of sanctioning behavior provides instruments for measuring social norms and has also led to deeper insights into the proximate and ultimate forces behind human cooperation."}
{"_id": "a9acb57442845180ec31be5ea2b068e0a4feb756", "title": "Circular Pulley Versus Variable Radius Pulley: Optimal Design Methodologies and Dynamic Characteristics Analysis", "text": "Human-centered robotics has received growing interest in low-impedance actuations. In particular, pneumatic artificial muscles (PAMs) provide compliance and high force-to-weight ratio, which allow for safe actuation. However, several performance drawbacks prevent PAMs from being more pervasive. Although many approaches have been proposed to overcome the low control bandwidth of PAMs, some limitations of PAMs, such as restricted workspace and torque capacity, remain to be addressed. This paper analyzes the characteristics and limitations of PAMs-driven joints and subsequently provides an optimization strategy for circular pulleys (CPs) in order to improve joint torque capacity over a large workspace. In addition to CPs, this paper proposes a design methodology to synthesize a pair of variable radius pulleys (VRPs) for further improvement. Simulation and experimental results show that newly synthesized VRPs significantly improve torque capacity in the enlarged workspace without loss of dynamic performance. Finally, the characteristics of CPs and VRPs are discussed in terms of physical human-robot interaction."}
{"_id": "92478ad8a5f36ac8fe4a55b2ad7ebdea75053e60", "title": "Privacy-Preserving Profile Matching for Proximity-Based Mobile Social Networking", "text": "Proximity-based mobile social networking (PMSN) refers to the social interaction among physically proximate mobile users. The first step toward effective PMSN is for mobile users to choose whom to interact with. Profile matching refers to two users comparing their personal profiles and is promising for user selection in PMSN. It, however, conflicts with users' growing privacy concerns about disclosing their personal profiles to complete strangers. This paper tackles this open challenge by designing novel fine-grained private matching protocols. Our protocols enable two users to perform profile matching without disclosing any information about their profiles beyond the comparison result. In contrast to existing coarse-grained private matching schemes for PMSN, our protocols allow finer differentiation between PMSN users and can support a wide range of matching metrics at different privacy levels. The performance of our protocols is thoroughly analyzed and evaluated via real smartphone experiments."}
{"_id": "71aea2ba81ca21dc38e4a36a8fc10052266abcf2", "title": "A Survey of Machine Learning Techniques for Spam Filtering", "text": "Email spam or junk e-mail (unwanted e-mail \u201cusually of a commercial nature sent out in bulk\u201d) is one of the major problems of the today's Internet, bringing financial damage to companies and annoying individual users. Among the approaches developed to stop spam, filtering is an important and popular one. Common uses for mail filters include organizing incoming email and removal of spam and computer viruses. A less common use is to inspect outgoing email at some companies to ensure that employees comply with appropriate laws. Users might also employ a mail filter to prioritize messages, and to sort them into folders based on subject matter or other criteria. Mail filters can be installed by the user, either as separate programs, or as part of their email program (email client). In email programs, users can make personal, \"manual\" filters that then automatically filter mail according to the chosen criteria. In this paper, we present a survey of the performance of five commonly used machine learning methods in spam filtering. Most email programs now also have an automatic spam filtering function."}
{"_id": "0c17001c83d97a1c18a52f3818b616a71c3457b3", "title": "Splitting Compounds by Semantic Analogy", "text": "Compounding is a highly productive word-formation process in some languages that is often problematic for natural language processing applications. In this paper, we investigate whether distributional semantics in the form of word embeddings can enable a deeper, i.e., more knowledge-rich, processing of compounds than the standard string-based methods. We present an unsupervised approach that exploits regularities in the semantic vector space (based on analogies such as \u201cbookshop is to shop as bookshelf is to shelf\u201d) to produce compound analyses of high quality. A subsequent compound splitting algorithm based on these analyses is highly effective, particularly for ambiguous compounds. German to English machine translation experiments show that this semantic analogy-based compound splitter leads to better translations than a commonly used frequency-based method."}
{"_id": "7dfce578644bc101ae4ffcd0184d2227c6d07809", "title": "Polymorphic Encryption and Pseudonymisation for Personalised Healthcare", "text": "Polymorphic encryption and Pseudonymisation, abbreviated as PEP, form a novel approach for the management of sensitive personal data, especially in health care. Traditional encryption is rather rigid: once encrypted, only one key can be used to decrypt the data. This rigidity is becoming an every greater problem in the context of big data analytics, where different parties who wish to investigate part of an encrypted data set all need the one key for decryption. Polymorphic encryption is a new cryptographic technique that solves these problems. Together with the associated technique of polymorphic pseudonymi-sation new security and privacy guarantees can be given which are essential in areas such as (personalised) healthcare, medical data collection via self-measurement apps, and more generally in privacy-friendly identity management and data analytics. The key ideas of polymorphic encryption are: 1. Directly after generation, data can be encrypted in a 'polymorphic' manner and stored at a (cloud) storage facility in such a way that the storage provider cannot get access. Crucially, there is no need to a priori fix who gets to see the data, so that the data can immediately be protected. For instance a PEP-enabled self-measurement device will store all its measurement data in polymorphically encrypted form in a back-end data base. 2. Later on it can be decided who can decrypt the data. This decision will be made on the basis of a policy, in which the data subject should play a key role. The user of the PEP-enabled device can, for instance, decide that doctors X, Y, Z may at some stage decrypt to use the data in their diagnosis, or medical researcher groups A, B, C may use it for their investigations, or third parties U, V, W may use it for additional services, etc. 3. This 'tweaking' of the encrypted data to make it decryptable by a specific party can be done in a blind manner. It will have to be done by a trusted party who knows how to tweak the ciphertext for whom. This PEP technology can provide the necessary security and privacy infrastructure for big data analytics. People can entrust their data in polymorphically encrypted form, and each time decide later to make (parts of) it available (de-cryptable) for specific parties, for specific analysis purposes. In this way users remain in control, and can monitor which of their data is used where by whom for which purposes. The \u2026"}
{"_id": "fd3d94fac6a282414406716040b10c1746634ecd", "title": "Video Captioning by Adversarial LSTM", "text": "In this paper, we propose a novel approach to video captioning based on adversarial learning and long short-term memory (LSTM). With this solution concept, we aim at compensating for the deficiencies of LSTM-based video captioning methods that generally show potential to effectively handle temporal nature of video data when generating captions but also typically suffer from exponential error accumulation. Specifically, we adopt a standard generative adversarial network (GAN) architecture, characterized by an interplay of two competing processes: a \u201cgenerator\u201d that generates textual sentences given the visual content of a video and a \u201cdiscriminator\u201d that controls the accuracy of the generated sentences. The discriminator acts as an \u201cadversary\u201d toward the generator, and with its controlling mechanism, it helps the generator to become more accurate. For the generator module, we take an existing video captioning concept using LSTM network. For the discriminator, we propose a novel realization specifically tuned for the video captioning problem and taking both the sentences and video features as input. This leads to our proposed LSTM\u2013GAN system architecture, for which we show experimentally to significantly outperform the existing methods on standard public datasets."}
{"_id": "8206e427d56419589a2eddaf9d64eecbbc6d3c59", "title": "Reader-Aware Multi-Document Summarization: An Enhanced Model and The First Dataset", "text": "We investigate the problem of readeraware multi-document summarization (RA-MDS) and introduce a new dataset for this problem. To tackle RA-MDS, we extend a variational auto-encodes (VAEs) based MDS framework by jointly considering news documents and reader comments. To conduct evaluation for summarization performance, we prepare a new dataset. We describe the methods for data collection, aspect annotation, and summary writing as well as scrutinizing by experts. Experimental results show that reader comments can improve the summarization performance, which also demonstrates the usefulness of the proposed dataset. The annotated dataset for RA-MDS is available online1."}
{"_id": "26f686da522f76d0108d5518ee490c8a4344dc64", "title": "Bee Colony Optimization (BCO)", "text": "Swarm Intelligence is the part of Artificial Intel ligence based on study of actions of individuals in various decentralized sys tems. The Bee Colony Optimization (BCO) metaheuristic has been introduce fairly recently as a new direction in the field of Swarm Intelligence. Artif icial bees represent agents, which collaboratively solve complex combinatorial optimiz ation problem. The chapter presents a classification and analysis of the resul ts achieved using Bee Colony Optimization (BCO) to model complex engineering and management processes. The primary goal of this chapter is to acquaint readers with the basic principles of Bee Colony Optimization, as well as to indicate potenti al BCO applications in engineering and management."}
{"_id": "3fb9d25025f4db4218b84ba340a7916af75377e3", "title": "Big data needs approximate computing: technical perspective", "text": "\u4ee5\u5bb9\u5fcd\u3002\u8fd8\u6709\u4e00\u70b9\u8ba9\u4eba\u5370\u8c61\u6df1\u523b: \u5bf9\u4e8e\u4e0d\u540c\u7684\u7a0b\u5e8f,\u8fd9\u4e9b\u63d0\u5347\u548c\u80fd \u8017\u964d\u4f4e\u5e76\u4e0d\u9700\u8981\u4f7f\u7528\u4e0d\u540c\u7684\u52a0\u901f \u5668\u5b9e\u73b0\u65b9\u5f0f\u2014\u2014\u5b83\u4eec\u901a\u8fc7\u76f8\u540c\u7684 \u8fd1\u4f3c\u52a0\u901f\u5668\u5f97\u5230,\u90a3\u4e9b\u52a0\u901f\u5668\u53ef \u7528\u4e8e\u6240\u6709\u7a0b\u5e8f\u4e2d\u6240\u6709\u7c7b\u578b\u7684\u53ef\u8fd1 \u4f3c\u4ee3\u7801\u533a\u3002 \u9664\u4e86\u6587\u4e2d\u63cf\u8ff0\u7684\u4e4b\u5916,\u901a\u8fc7\u8ba9 \u8fd1\u4f3c\u8ba1\u7b97\u5f15\u64ce\u5728\u660e\u663e\u4f4e\u5f97\u591a\u7684\u7535\u538b \u4e0b\u8fd0\u884c,\u8fd8\u80fd\u8282\u7ea6\u66f4\u591a\u7684\u80fd\u6e90\u3002\u4e1a \u754c\u589e\u957f\u7684\u771f\u6b63\u5f15\u64ce\u2014\u2014\u767b\u7eb3\u5fb7\u7f29\u653e (Dennard scaling)\u2014\u2014\u5141\u8bb8\u6676\u4f53 \u7ba1\u51cf\u5c11\u5c3a\u5bf8,\u540c\u65f6\u6d88\u8017\u66f4\u5c11\u7684\u80fd\u6e90, \u5f97\u5230\u66f4\u9ad8\u7684\u6027\u80fd\u3002\u4f46\u662f\u767b\u7eb3\u5fb7\u7f29\u653e \u6b63\u8d8b\u8fd1\u6781\u9650\u3002\u8bbe\u5907\u7684\u5c3a\u5bf8\u5df2\u7ecf\u7f29\u51cf \u5230\u4e86\u539f\u5b50\u7ea7\u522b\u3002\u800c\u4e14,\u5728\u4e0d\u727a\u7272\u8bbe \u5907\u53ef\u9760\u6027\u7684\u60c5\u51b5\u4e0b,\u65e0\u6cd5\u7ee7\u7eed\u663e\u8457 \u964d\u4f4e\u7535\u538b\u3002\u65b0\u6280\u672f\u4f1a\u5e26\u6765\u4e0d\u9519\u7684\u80fd \u8017\u6027\u80fd,\u4f46\u7531\u4e8e\u9700\u8981\u5229\u7528\u5197\u4f59\u6765\u8865 \u507f\u5b83\u4eec\u7684\u4e0d\u53ef\u9760\u6027,\u8fd9\u4e00\u4f18\u52bf\u88ab\u62b5 \u6d88\u4e86\u3002\u56e0\u6b64,\u5bf9\u4e8e\u4f20\u7edf\u7684\u4ea4\u6613\u7cfb\u7edf \u800c\u8a00,\u672a\u6765\u7684\u6280\u672f\u4e0d\u5927\u53ef\u80fd\u5927\u5e45\u63d0 \u5347\u5176\u80fd\u8017\u6027\u80fd\u3002\u7136\u800c,\u5bf9\u4e8e\u90a3\u4e9b\u751f \u6210\u8fd1\u4f3c\u7ed3\u679c\u7684\u7cfb\u7edf(\u6bd4\u5982\u4f5c\u8005\u63cf\u8ff0 \u7684\u7cfb\u7edf),\u9700\u8981\u7684\u5197\u4f59\u53ef\u80fd\u6709\u6240\u4e0d \u540c\u4e14\u8981\u7b80\u5355\u5f97\u591a\u3002\u5bf9\u4e8e\u8fd9\u4e9b\u7cfb\u7edf\u7c7b \u578b,\u4f4e\u7535\u538b\u8fd0\u884c\u662f\u53ef\u884c\u7684\u3002 \u672a \u6765, \u5f53 \u6211 \u4eec \u4f7f \u7528 \u7f29 \u5c0f \u7684 CMOS \u548c\u540e\u7845\u6280\u672f,\u5229\u7528\u6d77\u91cf\u8ba1\u7b97 \u5904\u7406\u4e16\u4e0a\u4ea7\u751f\u7684\u6d77\u91cf\u6570\u636e\u65f6,\u8fd1\u4f3c \u8ba1\u7b97\u4f3c\u4e4e\u662f\u6211\u4eec\u6700\u6709\u53ef\u80fd\u53d6\u5f97\u6210\u529f \u7684\u5e0c\u671b\u3002\u4e0b\u9762\u7684\u6587\u7ae0\u5c31\u662f\u4e3a\u5b9e\u73b0\u6b64 \u76ee\u6807\u6240\u9700\u5f00\u5c55\u7684\u5de5\u4f5c\u7684\u4e00\u4e2a\u5f88\u597d\u7684 \u4f8b\u5b50\u3002"}
{"_id": "646d86fb543133b597c786d33994cf387eb6356e", "title": "Torus data center network with smart flow control enabled by hybrid optoelectronic routers [Invited]", "text": "A torus-topology photonic data center network with smart flow control is presented, where optical packet switching (OPS), optical circuit switching (OCS), and virtual OCS (VOCS) schemes are all enabled on a unified hardware platform via the combination of hybrid optoelectronic routers (HOPR) and a centralized network controller. An upgraded HOPR is being developed with the goal of handling 100 Gbps optical packets with a high energy efficiency of 90 mW/Gbps and low latency of less than 100 ns. The architecture of HOPR and its enabling technologies are reviewed, including a label processor, optical switch, and optoelectronic shared buffer. We also explain the concept and operation mechanisms of both the data and control planes in the hybrid OPS/OCS/VOCS torus network. The performance of these three transmission modes is evaluated by numerical simulations for a network composed of 4096 HOPRs."}
{"_id": "831e1bd33a97478d0a24548ce68eb9f809d9b968", "title": "ANNarchy: a code generation approach to neural simulations on parallel hardware", "text": "Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which allows to easily define and simulate rate-coded and spiking networks, as well as combinations of both. The interface in Python has been designed to be close to the PyNN interface, while the definition of neuron and synapse models can be specified using an equation-oriented mathematical description similar to the Brian neural simulator. This information is used to generate C++ code that will efficiently perform the simulation on the chosen parallel hardware (multi-core system or graphical processing unit). Several numerical methods are available to transform ordinary differential equations into an efficient C++code. We compare the parallel performance of the simulator to existing solutions."}
{"_id": "0426408774fea8d724609769d6954dd75454a97e", "title": "How to Train Deep Variational Autoencoders and Probabilistic Ladder Networks", "text": "Variational autoencoders are a powerful framework for unsupervised learning. However, previous work has been restricted to shallow models with one or two layers of fully factorized stochastic latent variables, limiting the flexibility of the latent representation. We propose three advances in training algorithms of variational autoencoders, for the first time allowing to train deep models of up to five stochastic layers, (1) using a structure similar to the Ladder network as the inference model, (2) warm-up period to support stochastic units staying active in early training, and (3) use of batch normalization. Using these improvements we show state-of-the-art log-likelihood results for generative modeling on several benchmark datasets."}
{"_id": "04640006606ddfb9d6aa4ce8f55855b1f23ec7ed", "title": "Wide Residual Networks", "text": "Deep residual networks were shown to be able to scale up to thousands of layers and still have improving performance. However, each fraction of a percent of improved accuracy costs nearly doubling the number of layers, and so training very deep residual networks has a problem of diminishing feature reuse, which makes these networks very slow to train. To tackle these problems, in this paper we conduct a detailed experimental study on the architecture of ResNet blocks, based on which we propose a novel architecture where we decrease depth and increase width of residual networks. We call the resulting network structures wide residual networks (WRNs) and show that these are far superior over their commonly used thin and very deep counterparts. For example, we demonstrate that even a simple 16-layer-deep wide residual network outperforms in accuracy and efficiency all previous deep residual networks, including thousand-layerdeep networks, achieving new state-of-the-art results on CIFAR, SVHN, COCO, and significant improvements on ImageNet. Our code and models are available at https: //github.com/szagoruyko/wide-residual-networks."}
{"_id": "126df9f24e29feee6e49e135da102fbbd9154a48", "title": "Deep learning in neural networks: An overview", "text": "In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks."}
{"_id": "bd36469a4996503d1d5ba2bd5a75e6febe730cc8", "title": "Video games do affect social outcomes: a meta-analytic review of the effects of violent and prosocial video game play.", "text": "Whether video game play affects social behavior is a topic of debate. Many argue that aggression and helping are affected by video game play, whereas this stance is disputed by others. The present research provides a meta-analytical test of the idea that depending on their content, video games do affect social outcomes. Data from 98 independent studies with 36,965 participants revealed that for both violent video games and prosocial video games, there was a significant association with social outcomes. Whereas violent video games increase aggression and aggression-related variables and decrease prosocial outcomes, prosocial video games have the opposite effects. These effects were reliable across experimental, correlational, and longitudinal studies, indicating that video game exposure causally affects social outcomes and that there are both short- and long-term effects."}
{"_id": "49c59164962b847d9f35bc506d92d92f7a4f0ae7", "title": "A Privacy Preservation Model for Facebook-Style Social Network Systems", "text": "Recent years have seen unprecedented growth in the popularity of social network systems, with Facebook being an archetypical example. The access control paradigm behind the privacy preservation mechanism of Facebook is distinctly different from such existing access control paradigms as Discretionary Access Control, Role-Based Access Control, Capability Systems, and Trust Management Systems. This work takes a first step in deepening the understanding of this access control paradigm, by proposing an access control model that formalizes and generalizes the privacy preservation mechanism of Facebook. The model can be instantiated into a family of Facebook-style social network systems, each with a recognizably different access control mechanism, so that Facebook is but one instantiation of the model. We also demonstrate that the model can be instantiated to express policies that are not currently supported by Facebook but possess rich and natural social significance. This work thus delineates the design space of privacy preservation mechanisms for Facebook-style social network systems, and lays out a formal framework for policy analysis in these systems."}
{"_id": "3212929ad5121464ac49741dd3462a5d469e668d", "title": "Adversarial machine learning", "text": "In this paper (expanded from an invited talk at AISEC 2010), we discuss an emerging field of study: adversarial machine learning---the study of effective machine learning techniques against an adversarial opponent. In this paper, we: give a taxonomy for classifying attacks against online machine learning algorithms; discuss application-specific factors that limit an adversary's capabilities; introduce two models for modeling an adversary's capabilities; explore the limits of an adversary's knowledge about the algorithm, feature space, training, and input data; explore vulnerabilities in machine learning algorithms; discuss countermeasures against attacks; introduce the evasion challenge; and discuss privacy-preserving learning techniques."}
{"_id": "236cb915a1724c4b9ec3e53cfed64ce169358cda", "title": "Voxel-Based Morphometry\u2014The Methods", "text": "At its simplest, voxel-based morphometry (VBM) involves a voxel-wise comparison of the local concentration of gray matter between two groups of subjects. The procedure is relatively straightforward and involves spatially normalizing high-resolution images from all the subjects in the study into the same stereotactic space. This is followed by segmenting the gray matter from the spatially normalized images and smoothing the gray-matter segments. Voxel-wise parametric statistical tests which compare the smoothed gray-matter images from the two groups are performed. Corrections for multiple comparisons are made using the theory of Gaussian random fields. This paper describes the steps involved in VBM, with particular emphasis on segmenting gray matter from MR images with nonuniformity artifact. We provide evaluations of the assumptions that underpin the method, including the accuracy of the segmentation and the assumptions made about the statistical distribution of the data."}
{"_id": "49dc23abd2ba04e30c5dc6084b99e8215228f1be", "title": "Efficient Algorithms for Convolutional Sparse Representations", "text": "When applying sparse representation techniques to images, the standard approach is to independently compute the representations for a set of overlapping image patches. This method performs very well in a variety of applications, but results in a representation that is multi-valued and not optimized with respect to the entire image. An alternative representation structure is provided by a convolutional sparse representation, in which a sparse representation of an entire image is computed by replacing the linear combination of a set of dictionary vectors by the sum of a set of convolutions with dictionary filters. The resulting representation is both single-valued and jointly optimized over the entire image. While this form of a sparse representation has been applied to a variety of problems in signal and image processing and computer vision, the computational expense of the corresponding optimization problems has restricted application to relatively small signals and images. This paper presents new, efficient algorithms that substantially improve on the performance of other recent methods, contributing to the development of this type of representation as a practical tool for a wider range of problems."}
{"_id": "0c1e5dc8776fa497672af67375d5cd2f500b24ba", "title": "Blind Trust Online : Experimental Evidence from Baseball Cards", "text": "Before reaping transaction cost savings from the Internet, online sellers must credibly communicate the quality of their goods or their reliability of delivery. We use baseball cards as an example to address (1) whether consumers fully understand the risk of online trading, (2) the extent to which consumers employ seller reputation and professional grading to alleviate the risk they are exposed to, and (3) the impact of online trading on the retail market and supporting industries. An experiment is carried out to obtain actual baseball cards from both online and retail markets whose quality is then professionally graded and compared to the prices paid by online buyers for goods with similar claims. Our findings indicate that some uninformed online buyers are misled by non-credible claims of quality; they pay higher prices but do not receive better quality and in fact are defrauded more often. Although a combination of professional grading and reputation mechanisms appear to have the potential to solve the problem, they have only had limited success so far. Data on structural changes in the baseball card industry suggest the problems stemming from buyer misconception may have far-reaching consequences, extending beyond the immediate losses of defrauded consumers to influencing producer and supply decisions. Our findings also suggest that empirical studies using online price data should be careful of what assumptions they are making of online users and what behaviors of buyers and sellers are likely to drive the price data. Department of Economics, University of Maryland, 3115 G Tydings, College Park, MD 20742. Contact Email: jin@econ.umd.edu. We thank Seth Sanders, John List, John Shea, Dan Vincent, David Lucking-Reiley, Larry Ausubel, Peter Cramton, V. Joseph Hotz, Jimmy Chan, Vincent Crawford, attendants in Johns Hopkins UMD conference and participants of UCLA,UCSD and FTC seminars for constructive advice and suggestions. We are grateful to 8 friends who acted as our agents in purchasing baseball cards in retail markets, and numerous sports card store owners for their time and insights regarding the sports card industry. Excellent research assistance from Randy Alexander Moore and Krzysztof Fizyta is gratefully acknowledged. Any remaining errors are ours."}
{"_id": "8816173fda63de0053ff78ba0b5412780422543e", "title": "Tail call elimination on the Java Virtual Machine", "text": "A problem that often has to be solved by compilers for functional languages targeting the Java Virtual Machine is the elimination of tail calls. This paper explains how we solved it in our Funnel compiler and presents some experimental results about the impact our technique has on both performance and size of the compiled programs."}
{"_id": "4201edcb2f82bf78abbdcb9b3fb4ce1841462f72", "title": "Vehicle location finder using Global position system and Global System for Mobile", "text": "Each year, the number of stolen vehicle is on the rise. Usually, to prevent theft, a physical type countermeasure is used such as padlock, disk break lock and other more which is a preventive action but it is not enough safe. The objective of this study is to create a controllable system that can display the location of a vehicle using Global position system (GPS) to pin point the location and Global System for Mobile (GSM) as a mean for communicating with the vehicle for ease of finding after a theft attempt. The system is made to test the accuracy of the location that is send to the user when the vehicle is in motion and stationary in the city and suburb. The system is made by combining a micro controller with GPS and GSM, then comparing it with other similar device available in the market like Garmin and a reference website to find the radius of error. The study of proposed device begins by studying IEEE journal about alternative product and the vehicle itself. The hardware and program development is done by research and trial and error as the controller do not interact with both module at the same time, after successfully programming both module, it is combined into a single program with addition of interrupt program. The experiment is done in three set of tests so that the system accuracy can be determine when stationary and in motion on vehicle, output controlling is the test to determine if the controller can be made into anti-theft system. The result of the test concludes that the system can provide standard GPS coordinate when requested via Short Message Service (SMS). The system can also be used to control an actuator."}
{"_id": "3c9ac1876a69b4e35b5f0690ea817de6ac26295d", "title": "Time is of the essence: improving recency ranking using Twitter data", "text": "Realtime web search refers to the retrieval of very fresh content which is in high demand. An effective portal web search engine must support a variety of search needs, including realtime web search. However, supporting realtime web search introduces two challenges not encountered in non-realtime web search: quickly crawling relevant content and ranking documents with impoverished link and click information. In this paper, we advocate the use of realtime micro-blogging data for addressing both of these problems. We propose a method to use the micro-blogging data stream to detect fresh URLs. We also use micro-blogging data to compute novel and effective features for ranking fresh URLs. We demonstrate these methods improve effective of the portal web search engine for realtime web search."}
{"_id": "1ac7f1621387603ebbf7a5612f6b0a839967d904", "title": "Mining asynchronous periodic patterns in time series data", "text": "Periodicy detection in time series data is a challenging pro blem of great importance in many applications. Most previous work focused on mining synchronous p eriodic patterns and did not recognize misaligned presence of a pattern due to the intervention of r andom noise. In this paper, we propose a more flexible model of asynchronous periodic pattern that ma y be present only within a subsequence and whose occurrences may be shifted due to disturbance. Two parametersmin rep andmax dis are employed to specify the minimum number of repetitions th at is required within each segment of non-disrupted pattern occurrences and the maximum allowed disturbance between any two successive valid segments. Upon satisfying these two requirements, th e longest valid subsequence of a pattern is returned. A two phase algorithm is devised to first generate p o ntial periods by distance-based pruning followed by an iterative procedure to derive and validat e candidate patterns and locate the longest valid subsequence. We also show that this algorithm can not o nly provide linear time complexity with respect to the length of the sequence but also achieve space e ffici ncy."}
{"_id": "4c976124994660d9144938113281978a71370d42", "title": "Incremental Fuzzy Clustering With Multiple Medoids for Large Data", "text": "As an important technique of data analysis, clustering plays an important role in finding the underlying pattern structure embedded in unlabeled data. Clustering algorithms that need to store all the data into the memory for analysis become infeasible when the dataset is too large to be stored. To handle such large data, incremental clustering approaches are proposed. The key idea behind these approaches is to find representatives (centroids or medoids) to represent each cluster in each data chunk, which is a packet of the data, and final data analysis is carried out based on those identified representatives from all the chunks. In this paper, we propose a new incremental clustering approach called incremental multiple medoids-based fuzzy clustering (IMMFC) to handle complex patterns that are not compact and well separated. We would like to investigate whether IMMFC is a good alternative to capturing the underlying data structure more accurately. IMMFC not only facilitates the selection of multiple medoids for each cluster in a data chunk, but also has the mechanism to make use of relationships among those identified medoids as side information to help the final data clustering process. The detailed problem formulation, updating rules derivation, and the in-depth analysis of the proposed IMMFC are provided. Experimental studies on several large datasets that include real world malware datasets have been conducted. IMMFC outperforms existing incremental fuzzy clustering approaches in terms of clustering accuracy and robustness to the order of data. These results demonstrate the great potential of IMMFC for large-data analysis."}
{"_id": "d621abffc233507bc615b7b22343a40f72f0df73", "title": "M-learning in Nigerian higher education: an experimental study with Edmodo", "text": "Social media technologies have recently gained remarkable popularity in the education sector. Recent research indicated that students rely on social media for educational purposes like social networking, chatting, and knowledge sharing. With social media aiding learning experiences, m-learning is anticipated to improve the application of social media. In the paper, we investigated the preference of social media tools and mobile devices for learning, their benefits and effectiveness, and how they can possibly improve learning process in Nigeria. Furthermore, we evaluated learning experiences of students using Edmodo social media-based learning environment in a Nigerian university. We used a mixed method research approach, where the data was triangulated from two questionnaires and students\u2019 interviews. Data gathered after the course shows that students learning response was improved. Students were eager to use mobile devices for accessing social media sites, thus providing experimental evidence of the place of social media in m-learning."}
{"_id": "543d27dda4fd8db3d0b0939aedcfa99a996d92a3", "title": "Scientific Workflow: A Survey and Research Directions", "text": "Workflow technologies are emerging as the dominant approach to coordinate groups of distributed services. However with a space filled with competing specifications, standards and frameworks from multiple domains, choosing the right tool for the job is not always a straightforward task. Researchers are often unaware of the range of technology that already exists and focus on implementing yet another proprietary workflow system. As an antidote to this common problem, this paper presents a concise survey of existing workflow technology from the business and scientific domain and makes a number of key suggestions towards the future development of scientific workflow systems."}
{"_id": "0106ac3ed6867af22d732cf7640b766cc98d256b", "title": "Deliberation Networks: Sequence Generation Beyond One-Pass Decoding", "text": "The encoder-decoder framework has achieved promising progress for many sequence generation tasks, including machine translation, text summarization, dialog system, image captioning, etc. Such a framework adopts an one-pass forward process while decoding and generating a sequence, but lacks the deliberation process: A generated sequence is directly used as final output without further polishing. However, deliberation is a common behavior in human\u2019s daily life like reading news and writing papers/articles/books. In this work, we introduce the deliberation process into the encoder-decoder framework and propose deliberation networks for sequence generation. A deliberation network has two levels of decoders, where the first-pass decoder generates a raw sequence and the second-pass decoder polishes and refines the raw sentence with deliberation. Since the second-pass deliberation decoder has global information about what the sequence to be generated might be, it has the potential to generate a better sequence by looking into future words in the raw sentence. Experiments on neural machine translation and text summarization demonstrate the effectiveness of the proposed deliberation networks. On the WMT 2014 English-to-French translation task, our model establishes a new state-of-the-art BLEU score of 41.5."}
{"_id": "5f7296baf41119ab68ba1433a44d64a86ca83b9c", "title": "Real-Time Semantic Segmentation with Label Propagation", "text": "Despite of the success of convolutional neural networks for semantic image segmentation, CNNs cannot be used for many applications due to limited computational resources. Even efficient approaches based on random forests are not efficient enough for real-time performance in some cases. In this work, we propose an approach based on superpixels and label propagation that reduces the runtime of a random forest approach by factor 192 while increasing the segmentation accuracy."}
{"_id": "12893f0569dab51b88d5e1bc318bd3b576b6c2d1", "title": "Mitigation of Denial of Service Attack with Hardware Trojans in NoC Architectures", "text": "As Multiprocessor System-on-Chips (MPSoCs) continue to scale, security for Network-on-Chips (NoCs) is a growing concern as rogue agents threaten to infringe on the hardware's trust and maliciously implant Hardware Trojans (HTs) to undermine their reliability. The trustworthiness of MPSoCs will rely on our ability to detect Denial-of-Service (DoS) threats posed by the HTs and mitigate HTs in a compromised NoC to permit graceful network degradation. In this paper, we propose a new light-weight target-activated sequential payload (TASP) HT model that performs packet inspection and injects faults to create a new type of DoS attack. Faults injected are used to trigger a response from error correction code (ECC) schemes and cause repeated retransmission to starve network resources and create deadlocks capable of rendering single-application to full chip failures. To circumvent the threat of HTs, we propose a heuristic threat detection model to classify faults and discover HTs within compromised links. To prevent further disruption, we propose several switch-to-switch link obfuscation methods to avoid triggering of HTs in an effort to continue using links instead of rerouting packets with minimal overhead (1-3 cycles). Our proposed modifications complement existing fault detection and obfuscation methods and only adds 2% in area overhead and 6% in excess power consumption in the NoC micro-architecture."}
{"_id": "feebbb3378245c28c708a290a888248026b06ca8", "title": "Quadrifilar helical antennas : Wideband and multiband behavior for GPS applications", "text": "A novel multi-frequency printed quadrifilar helix antenna based on Multiple arm technique is presented in this paper. Double frequency and satisfactory antenna characteristics are achieved. The antenna has relatively compact size and hemispherical pattern with excellent circularly polarized coverage. The antenna is designed and simulated with the application of HFSS software. The simulation results and analyses are presented."}
{"_id": "1ee72ed1db4ddbc49922255194890037c7a2f797", "title": "A Broadband Passive Monopulse Comparator MMIC", "text": "A broadband monopulse comparator MMIC (Monolithic Microwave Integrated Circuit) based on GaAs process is presented in this Letter. The comparator network constructed by three magic tees and one lumped power divider is proposed for one sum channel and two delta channels. The measurement results show that very wide frequency band from 15 to 30 GHz (66.7% relatively frequency bandwidth) with less than 2.5-dB loss can be achieved for the sum channel. And the null depth is more than 22 dB in 15\u201327GHz and 17 dB in 27\u201330 GHz for two delta channels. The total chip size is 3.4 mm 3.4 mm ( $0.26\\lambda _{0}~0.26\\lambda _{0}$ at the center frequency of 22.5 GHz)."}
{"_id": "202b3b3bb4a5190ce53b77564f9ae1dc65f3489b", "title": "A Framework for the Automatic Extraction of Rules from Online Text", "text": ""}
{"_id": "76d259940fe57f399a992ea0e2bb6cd5304b6a23", "title": "Why did my car just do that ? Explaining semi-autonomous driving actions to improve driver understanding , trust , and performance", "text": "This study explores, in the context of semiautonomous driving, how the content of the verbalized message accompanying the car\u2019s autonomous action affects the driver\u2019s attitude and safety performance. Using a driving simulator with an auto-braking function, we tested different messages that provided advance explanation of the car\u2019s imminent autonomous action. Messages providing only \u201chow\u201d information describing actions (e.g., \u201cThe car is braking\u201d) led to poor driving performance, whereas \u201cwhy\u201d information describing reasoning for actions (e.g., \u201cObstacle ahead\u201d) was preferred by drivers and led to better driving performance. Providing both \u201chow and why\u201d resulted in the safest driving performance but increased negative feelings in drivers. These results suggest that, to increase overall safety, car makers need to attend not only to the design of autonomous actions but also to the right way to explain these actions to the drivers."}
{"_id": "8eea0da60738a54c0fc6a092aecf0daf0c51cee3", "title": "Public opinion on automated driving : Results of an international questionnaire among 5000 respondents", "text": "This study investigated user acceptance, concerns, and willingness to buy partially, highly, and fully automated vehicles. By means of a 63-question Internet-based survey, we collected 5000 responses from 109 countries (40 countries with at least 25 respondents). We determined cross-national differences, and assessed correlations with personal variables, such as age, gender, and personality traits as measured with a short version of the Big Five Inventory. Results showed that respondents, on average, found manual driving the most enjoyable mode of driving. Responses were diverse: 22% of the respondents did not want to pay more than $0 for a fully automated driving system, whereas 5% indicated they would be willing to pay more than $30,000, and 33% indicated that fully automated driving would be highly enjoyable. 69% of respondents estimated that fully automated driving will reach a 50% market share between now and 2050. Respondents were found to be most concerned about software hacking/misuse, and were also concerned about legal issues and safety. Respondents scoring higher on neuroticism were slightly less comfortable about data transmitting, whereas respondents scoring higher on agreeableness were slightly more comfortable with this. Respondents from more developed countries (in terms of lower accident statistics, higher education, and higher income) were less comfortable with their vehicle transmitting data, with cross-national correlations between q = 0.80 and q = 0.90. The present results indicate the major areas of promise and concern among the international public, and could be useful for vehicle developers and other stakeholders. 2015 Elsevier Ltd. All rights reserved."}
{"_id": "eab0415ebbf5a2e163737e34df4d008405c2be5f", "title": "Effects of adaptive cruise control and highly automated driving on workload and situation awareness : A review of the empirical evidence", "text": "Department of BioMechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Mekelweg 2, 2628 CD Delft, The Netherlands Centre for Transport Studies, University of Twente, Drienerlolaan 5, 7522 NB Enschede, The Netherlands c TNO Human Factors, Kampweg 5, 3769 DE Soesterberg, The Netherlands d Transportation Research Group, Civil, Maritime, Environmental Engineering and Science, Engineering and the Environment, University of Southampton, United Kingdom"}
{"_id": "2160b8f38538320ff6bbd1c35de7b441541b8007", "title": "The expansion of Google Scholar versus Web of Science: a longitudinal study", "text": "Web of Science (WoS) and Google Scholar (GS) are prominent citation services with distinct indexing mechanisms. Comprehensive knowledge about the growth patterns of these two citation services is lacking. We analyzed the development of citation counts in WoS and GS for two classic articles and 56 articles from diverse research fields, making a distinction between retroactive growth (i.e., the relative difference between citation counts up to mid-2005 measured in mid-2005 and citation counts up to mid-2005 measured in April 2013) and actual growth (i.e., the relative difference between citation counts up to mid-2005 measured in April 2013 and citation counts up to April 2013 measured in April 2013). One of the classic articles was used for a citation-by-citation analysis. Results showed that GS has substantially grown in a retroactive manner (median of 170\u00a0% across articles), especially for articles that initially had low citations counts in GS as compared to WoS. Retroactive growth of WoS was small, with a median of 2\u00a0% across articles. Actual growth percentages were moderately higher for GS than for WoS (medians of 54 vs. 41\u00a0%). The citation-by-citation analysis showed that the percentage of citations being unique in WoS was lower for more recent citations (6.8\u00a0% for citations from 1995 and later vs. 41\u00a0% for citations from before 1995), whereas the opposite was noted for GS (57 vs. 33\u00a0%). It is concluded that, since its inception, GS has shown substantial expansion, and that the majority of recent works indexed in WoS are now also retrievable via GS. A discussion is provided on quantity versus quality of citations, threats for WoS, weaknesses of GS, and implications for literature research and research evaluation."}
{"_id": "2d4f10ccd2503c37ec32aa0033d3e5b3559f4404", "title": "Toward a Theory of Situation Awareness in Dynamic Systems", "text": "Situational awareness has become an increasingly salient factor contributing to flight safety and operational performance, and the research has burgeoned to cope with the human performance challenges associated with the installation of advanced avionics systems in modern aircraft. The systematic study and application of situational awareness has also extended beyond the cockpit to include air traffic controllers and personnel operating within other complex, high consequence work domains. This volume offers a collection of essays that have made important contributions to situational awareness research and practice. To this end, it provides unique access to key readings that address the conceptual development of situational awareness, methods for its assessment, and applications to enhance situational awareness through training and design."}
{"_id": "643bbe8450a52ff474b8194a2c95097d02387610", "title": "The Influence of Culture on Memory", "text": "The study of cognition across cultures offers a useful approach to both identifying bottlenecks in information processing and suggesting culturespecific strategies to alleviate these limitations. The recent emphasis on applying cognitive neuroscience methods to the study of culture further aids in specifying which processes differ cross-culturally. By localizing cultural differences to distinct neural regions, the comparison of cultural groups helps to identify candidate information processing mechanisms that can be made more efficient with augmented cognition and highlights the unique solutions that will be required for different groups of information processors."}
{"_id": "8fdeffbf8d1fa688d74db882a1d8c7ebc596534b", "title": "Multidisciplinary Perspectives on Music Emotion Recognition : Implications for Content and Context-Based Models", "text": "The prominent status of music in human culture and every day life is due in large part to its striking ability to elicit emotions, which may manifest from slight variation in mood to changes in our physical condition and actions. In this paper, we first review state of the art studies on music and emotions from different disciplines including psychology, musicology and music information retrieval. Based on these studies, we then propose new insights to enhance automated music emotion recog-"}
{"_id": "62bdc8ec0f4987f80100cd825dce9cb102c1e7fa", "title": "Spiking Neural Networks: Principles and Challenges", "text": "Over the last decade, various spiking neural network models have been proposed, along with a similarly increasing interest in spiking models of computation in computational neuroscience. The aim of this tutorial paper is to outline some of the common ground in state-of-the-art spiking neural networks as well as open challenges."}
{"_id": "bafd94ad2520a4e10ba3e9b5b665a3a474624cf4", "title": "STRATOS: Using Visualization to Support Decisions in Strategic Software Release Planning", "text": "Software is typically developed incrementally and released in stages. Planning these releases involves deciding which features of the system should be implemented for each release. This is a complex planning process involving numerous trade-offs-constraints and factors that often make decisions difficult. Since the success of a product depends on this plan, it is important to understand the trade-offs between different release plans in order to make an informed choice. We present STRATOS, a tool that simultaneously visualizes several software release plans. The visualization shows several attributes about each plan that are important to planners. Multiple plans are shown in a single layout to help planners find and understand the trade-offs between alternative plans. We evaluated our tool via a qualitative study and found that STRATOS enables a range of decision-making processes, helping participants decide on which plan is most optimal."}
{"_id": "5100b28dd2098b4b4aaa29655c4fd7af7ee8e4e9", "title": "Rainy weather recognition from in-vehicle camera images for driver assistance", "text": "We propose a weather recognition method from in-vehicle camera images that uses a subspace method to judge rainy weather by detecting raindrops on the windshield. \"Eigendrops\" represent the principal components extracted from raindrop images in the learning stage. Then the method detects raindrops by template matching. In experiments using actual video sequences, our method showed good detection ability of raindrops and promising results for rainfall judgment from detection results."}
{"_id": "3542254ecc9f57d19f00e9fcc645b3d44469a6ba", "title": "Scaling Up Mixed Workloads: A Battle of Data Freshness, Flexibility, and Scheduling", "text": "The common \u201cone size does not fit all\u201d paradigm isolates transactional and analytical workloads into separate, specialized database systems. Operational data is periodically replicated to a data warehouse for analytics. Competitiveness of enterprises today, however, depends on real-time reporting on operational data, necessitating an integration of transactional and analytical processing in a single database system. The mixed workload should be able to query and modify common data in a shared schema. The database needs to provide performance guarantees for transactional workloads, and, at the same time, efficiently evaluate complex analytical queries. In this paper, we share our analysis of the performance of two main-memory databases that support mixed workloads, SAP HANA and HyPer, while evaluating the mixed workload CHbenCHmark. By examining their similarities and differences, we identify the factors that affect performance while scaling the number of concurrent transactional and analytical clients. The three main factors are (a) data freshness, i.e., how recent is the data processed by analytical queries, (b) flexibility, i.e., restricting transactional features in order to increase optimization choices and enhance performance, and (c) scheduling, i.e., how the mixed workload utilizes resources. Specifically for scheduling, we show that the absence of workload management under cases of high concurrency leads to analytical workloads overwhelming the system and severely hurting the performance of transactional workloads."}
{"_id": "2d3ab651aa843229932ed63f19986dae0c6aace0", "title": "Locally Weighted Regression : An Approach to Regression Analysis by Local Fitting", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/astata.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission."}
{"_id": "fb8f7d172d47286101fc7f8b34a863162adc0e8e", "title": "Histogram of Oriented Lines for Palmprint Recognition", "text": "Subspace learning methods are very sensitive to the illumination, translation, and rotation variances in image recognition. Thus, they have not obtained promising performance for palmprint recognition so far. In this paper, we propose a new descriptor of palmprint named histogram of oriented lines (HOL), which is a variant of histogram of oriented gradients (HOG). HOL is not very sensitive to changes of illumination, and has the robustness against small transformations because slight translations and rotations make small histogram value changes. Based on HOL, even some simple subspace learning methods can achieve high recognition rates."}
{"_id": "4a00102f2a850bb208911a7541b4174e3b7ac71c", "title": "Automatically verifying and reproducing event-based races in Android apps", "text": "Concurrency has been a perpetual problem in Android apps, mainly due to event-based races. Several event-based race detectors have been proposed, but they produce false positives, cannot reproduce races, and cannot distinguish be- tween benign and harmful races. To address these issues, we introduce a race verification and reproduction approach named ERVA. Given a race report produced by a race detector, ERVA uses event dependency graphs, event flipping, and replay to verify the race and determine whether it is a false positive, or a true positive; for true positives, ERVA uses state comparison to distinguish benign races from harmful races. ERVA automatically produces an event schedule that can be used to deterministically reproduce the race, so developers can fix it. Experiments on 16 apps indicate that only 3% of the races reported by race detectors are harmful, and that ERVA can verify an app in 20 minutes on average."}
{"_id": "658ae58cc25748e5767ca17710c42d9b0d9061be", "title": "Adolescents' attitudes toward sports, exercise, and fitness predict physical activity 5 and 10 years later.", "text": "OBJECTIVE\nTo determine whether adolescent attitudes towards sports, exercise, and fitness predict moderate-to-vigorous physical activity 5 and 10 years later.\n\n\nMETHOD\nA diverse group of 1902 adolescents participating in Project Eating and Activity in Teens, reported weekly moderate-to-vigorous physical activity and attitudes toward sports, exercise, and fitness in Eating and Activity in Teens-I (1998-99), Eating and Activity in Teens-II (2003-04), and Eating and Activity in Teens-III (2008-09).\n\n\nRESULTS\nMean moderate-to-vigorous physical activity was 6.4, 5.1, and 4.0 hours/week at baseline, 5-year, and 10-year follow-up, respectively. Attitudes toward sports, exercise, and fitness together predicted moderate-to-vigorous physical activity at 5 and 10 years. Among the predictors of 5- and 10-year moderate-to-vigorous physical activity, attitude's effect size, though modest, was comparable to the effect sizes for sports participation and body mass index. Adolescents with more-favorable attitudes toward sports, exercise, and fitness engaged in approximately 30%-40% more weekly moderate-to-vigorous physical activity at follow-up (2.1 hour/week at 5 years and 1.2 hour/week at 10 years) than those with less-favorable attitudes.\n\n\nCONCLUSION\nAdolescents' exercise-related attitudes predict subsequent moderate-to-vigorous physical activity independent of baseline behavior suggesting that youth moderate-to-vigorous physical activity promotion efforts may provide long-term benefits by helping youth develop favorable exercise attitudes."}
{"_id": "c57111a50b9e0818138b19cb9d8f01871145ee0a", "title": "Validation of the Ten-Item Internet Gaming Disorder Test (IGDT-10) and evaluation of the nine DSM-5 Internet Gaming Disorder criteria.", "text": "INTRODUCTION\nThe inclusion of Internet Gaming Disorder (IGD) in the DSM-5 (Section 3) has given rise to much scholarly debate regarding the proposed criteria and their operationalization. The present study's aim was threefold: to (i) develop and validate a brief psychometric instrument (Ten-Item Internet Gaming Disorder Test; IGDT-10) to assess IGD using definitions suggested in DSM-5, (ii) contribute to ongoing debate regards the usefulness and validity of each of the nine IGD criteria (using Item Response Theory [IRT]), and (iii) investigate the cut-off threshold suggested in the DSM-5.\n\n\nMETHODS\nAn online gamer sample of 4887 gamers (age range 14-64years, mean age 22.2years [SD=6.4], 92.5% male) was collected through Facebook and a gaming-related website with the cooperation of a popular Hungarian gaming magazine. A shopping voucher of approx. 300 Euros was drawn between participants to boost participation (i.e., lottery incentive). Confirmatory factor analysis and a structural regression model were used to test the psychometric properties of the IGDT-10 and IRT analysis was conducted to test the measurement performance of the nine IGD criteria. Finally, Latent Class Analysis along with sensitivity and specificity analysis were used to investigate the cut-off threshold proposed in the DSM-5.\n\n\nRESULTS\nAnalysis supported IGDT-10's validity, reliability, and suitability to be used in future research. Findings of the IRT analysis suggest IGD is manifested through a different set of symptoms depending on the level of severity of the disorder. More specifically, \"continuation\", \"preoccupation\", \"negative consequences\" and \"escape\" were associated with lower severity of IGD, while \"tolerance\", \"loss of control\", \"giving up other activities\" and \"deception\" criteria were associated with more severe levels. \"Preoccupation\" and \"escape\" provided very little information to the estimation IGD severity. Finally, the DSM-5 suggested threshold appeared to be supported by our statistical analyses.\n\n\nCONCLUSIONS\nIGDT-10 is a valid and reliable instrument to assess IGD as proposed in the DSM-5. Apparently the nine criteria do not explain IGD in the same way, suggesting that additional studies are needed to assess the characteristics and intricacies of each criterion and how they account to explain IGD."}
{"_id": "91d056b60f2391317c1dd2adaad01a92087b066e", "title": "Optimizing Hierarchical Visualizations with the Minimum Description Length Principle", "text": "In this paper we examine how the Minimum Description Length (MDL) principle can be used to efficiently select aggregated views of hierarchical datasets that feature a good balance between clutter and information. We present MDL formulae for generating uneven tree cuts tailored to treemap and sunburst diagrams, taking into account the available display space and information content of the data. We present the results of a proof-of-concept implementation. In addition, we demonstrate how such tree cuts can be used to enhance drill-down interaction in hierarchical visualizations by implementing our approach in an existing visualization tool. Validation is done with the feature congestion measure of clutter in views of a subset of the current DMOZ web directory, which contains nearly half million categories. The results show that MDL views achieve near constant clutter level across display resolutions. We also present the results of a crowdsourced user study where participants were asked to find targets in views of DMOZ generated by our approach and a set of baseline aggregation methods. The results suggest that, in some conditions, participants are able to locate targets (in particular, outliers) faster using the proposed approach."}
{"_id": "a7621b4ec18719b08f3a2a444b6d37a2e20227b7", "title": "Fast Training of Convolutional Networks through FFTs", "text": "Convolutional networks are one of the most widely employed architectures in computer vision and machine learning. In order to leverage their ability to learn complex functions, large amounts of data are required for training. Training a large convolutional network to produce state-of-the-art results can take weeks, even when using modern GPUs. Producing labels using a trained network can also be costly when dealing with web-scale datasets. In this work, we present a simple algorithm which accelerates training and inference by a significant factor, and can yield improvements of over an order of magnitude compared to existing state-of-the-art implementations. This is done by computing convolutions as pointwise products in the Fourier domain while reusing the same transformed feature map many times. The algorithm is implemented on a GPU architecture and addresses a number of related challenges."}
{"_id": "6ef1e1c54855fc1d1de1066510e7ea3e588a38c6", "title": "Buccinator myomucosal flap: clinical results and review of anatomy, surgical technique and applications.", "text": "BACKGROUND\nThe buccinator musculomucosal flap is an axial-pattern flap based on either the buccal or the facial artery. We present our experience with this flap and describe its surgical anatomy, the surgical techniques utilised to raise the flap and its clinical applications.\n\n\nMATERIALS AND METHODS\nWe retrospectively reviewed all patients who had had buccinator myomucosal flaps created at the Groote Schuur Hospital between 1999 and 2004. Patients were also recalled to assess flap sensation and to record reduction of mouth opening as a consequence of donor site scarring.\n\n\nRESULTS\nOf the 14 patients who had had a buccinator myomucosal flap created, there was one flap failure. Sensation was present in 71 per cent of flaps, and there was no trismus due to donor site scarring.\n\n\nCONCLUSIONS\nThe buccinator myomycosal flap is a dependable flap with good functional outcome and low morbidity."}
{"_id": "24851ce5f8e097b1053d372cd81717d183b11d96", "title": "Signal Space CoSaMP for Sparse Recovery With Redundant Dictionaries", "text": "Compressive sensing (CS) has recently emerged as a powerful framework for acquiring sparse signals. The bulk of the CS literature has focused on the case where the acquired signal has a sparse or compressible representation in an orthonormal basis. In practice, however, there are many signals that cannot be sparsely represented or approximated using an orthonormal basis, but that do have sparse representations in a redundant dictionary. Standard results in CS can sometimes be extended to handle this case provided that the dictionary is sufficiently incoherent or well conditioned, but these approaches fail to address the case of a truly redundant or overcomplete dictionary. In this paper, we describe a variant of the iterative recovery algorithm CoSaMP for this more challenging setting. We utilize the \\mbi D-RIP, a condition on the sensing matrix analogous to the well-known restricted isometry property. In contrast to prior work, the method and analysis are \u201csignal-focused\u201d; that is, they are oriented around recovering the signal rather than its dictionary coefficients. Under the assumption that we have a near-optimal scheme for projecting vectors in signal space onto the model family of candidate sparse signals, we provide provable recovery guarantees. Developing a practical algorithm that can provably compute the required near-optimal projections remains a significant open problem, but we include simulation results using various heuristics that empirically exhibit superior performance to traditional recovery algorithms."}
{"_id": "15a4cfb61baedfeccd741c1a09001a9b4e560663", "title": "Traffic sign recognition based on color, shape, and pictogram classification using support vector machines", "text": "Traffic sign recognition is the second part of traffic sign detection and recognition systems. It plays a crucial role in driver assistance systems and provides drivers with crucial safety and precaution information. In this study, the recognition of the TS is performed based on its border color, shape, and pictogram information. This technique breaks down the recognition system into small parts, which makes it efficient and accurate. Moreover, this makes it easy to understand TS components. The proposed technique is composed of three independent stages. The first stage involves extracting the border colors using an adaptive image segmentation technique that is based on learning vector quantization. Then, the shape of the TS is detected using a fast and simple matching technique based on the logical exclusive OR operator. Finally, the pictogram is extracted and classified using a support vector machines classifier model. The proposed technique is applied on the German traffic sign recognition benchmark and achieves an overall recognition rate of 98.23%, with an average computational speed of 30\u00a0ms."}
{"_id": "742c49afaa003dcbed07e4b7aabd32f14f5e2617", "title": "Adaptive item-based learning environments based on the item response theory: possibilities and challenges", "text": "The popularity of intelligent tutoring systems (ITSs) is increasing rapidly. In order to make learning environments more efficient, researchers have been exploring the possibility of an automatic adaptation of the learning environment to the learner or the context. One of the possible adaptation techniques is adaptive item sequencing by matching the difficulty of the items to the learner\u2019s knowledge level. This is already accomplished to a certain extent in adaptive testing environments, where the test is tailored to the person\u2019s ability level by means of the item response theory (IRT). Even though IRT has been a prevalent computerized adaptive test (CAT) approach for decades and applying IRT in item-based ITSs could lead to similar advantages as in CAT (e.g. higher motivation and more efficient learning), research on the application of IRT in such learning environments is highly restricted or absent. The purpose of this paper was to explore the feasibility of applying IRT in adaptive item-based ITSs. Therefore, we discussed the two main challenges associated with IRT application in such learning environments: the challenge of the data set and the challenge of the algorithm. We concluded that applying IRT seems to be a viable solution for adaptive item selection in item-based ITSs provided that some modifications are implemented. Further research should shed more light on the adequacy of the proposed solutions."}
{"_id": "5e3fd9e6e7bcfc37fa751385ea3c8c7c7ac80c43", "title": "Maximum Principle Based Algorithms for Deep Learning", "text": "The continuous dynamical system approach to deep learning is explored in order to devise alternative frameworks for training algorithms. Training is recast as a control problem and this allows us to formulate necessary optimality conditions in continuous time using the Pontryagin\u2019s maximum principle (PMP). A modification of the method of successive approximations is then used to solve the PMP, giving rise to an alternative training algorithm for deep learning. This approach has the advantage that rigorous error estimates and convergence results can be established. We also show that it may avoid some pitfalls of gradient-based methods, such as slow convergence on flat landscapes near saddle points. Furthermore, we demonstrate that it obtains favorable initial convergence rate periteration, provided Hamiltonian maximization can be efficiently carried out a step which is still in need of improvement. Overall, the approach opens up new avenues to attack problems associated with deep learning, such as trapping in slow manifolds and inapplicability of gradient-based methods for discrete trainable variables."}
{"_id": "7023fcaa06fc9e66f1e5d35a0b13548e2a8ebb66", "title": "Word Spotting in Cursive Handwritten Documents using Modified Character Shape Codes", "text": "There is a large collection of Handwritten English paper documents of Historical and Scientific importance. But paper documents are not recognised directly by computer. Hence the closest way of indexing these documents is by storing their document digital image. Hence a large database of document images can replace the paper documents. But the document and data corresponding to each image cannot be directly recognised by the computer. This paper applies the technique of word spotting using Modified Character Shape Code to Handwritten English document images for quick and efficient query search of words on a database of document images. It is different from other Word Spotting techniques as it implements two level of selection for word segments to match search query. First based on word size and then based on character shape code of query. It makes the process faster and more efficient and reduces the need of multiple pre-processing."}
{"_id": "7276372b4685c49c1f4fc394fb3fd0ab5fc9db3a", "title": "Visualization of boundaries in volumetric data sets using LH histograms", "text": "A crucial step in volume rendering is the design of transfer functions that highlights those aspects of the volume data that are of interest to the user. For many applications, boundaries carry most of the relevant information. Reliable detection of boundaries is often hampered by limitations of the imaging process, such as blurring and noise. We present a method to identify the materials that form the boundaries. These materials are then used in a new domain that facilitates interactive and semiautomatic design of appropriate transfer functions. We also show how the obtained boundary information can be used in region-growing-based segmentation."}
{"_id": "107a53a46f3acda939d93c47009ab960d6a33464", "title": "Improving Adversarial Robustness by Data-Specific Discretization", "text": "A recent line of research proposed (either implicitly or explicitly) gradient-masking preprocessing techniques to improve adversarial robustness. However, as shown by Athaley-Carlini-Wagner, essentially all these defenses can be circumvented if an attacker leverages approximate gradient information with respect to the preprocessing. This thus raises a natural question of whether there is a useful preprocessing technique in the context of white-box attacks, even just for only mildly complex datasets such as MNIST. In this paper we provide an affirmative answer to this question. Our key observation is that for several popular datasets, one can approximately encode entire dataset using a small set of separable codewords derived from the training set, while retaining high accuracy on natural images. The separability of the codewords in turn prevents small perturbations as in `\u221e attacks from changing feature encoding, leading to adversarial robustness. For example, for MNIST our code consists of only two codewords, 0 and 1, and the encoding of any pixel is simply 1[x > 0.5] (i.e., whether a pixel x is at least 0.5). Applying this code to a naturally trained model already gives high adversarial robustness even under strong white-box attacks based on Backward Pass Differentiable Approximation (BPDA) method of Athaley-Carlini-Wagner that takes the codes into account. We give density-estimation based algorithms to construct such codes, and provide theoretical analysis and certificates of when our method can be effective. Systematic evaluation demonstrates that our method is effective in improving adversarial robustness on MNIST, CIFAR-10, and ImageNet, for either naturally or adversarially trained models."}
{"_id": "1f779bb04bf3510cc8b0fc8f5dba896f5e4e6b95", "title": "The DARPA Twitter Bot Challenge", "text": "From politicians and nation states to terrorist groups, numerous organizations reportedly conduct explicit campaigns to influence opinions on social media, posing a risk to freedom of expression. Thus, there is a need to identify and eliminate \"influence bots\" - realistic, automated identities that illicitly shape discussions on sites like Twitter and Facebook - before they get too influential."}
{"_id": "086a7d7e8315ce3d25e1a2a7a2602b1d2c60eb7a", "title": "The Unity and Diversity of Executive Functions and Their Contributions to Complex \u201cFrontal Lobe\u201d Tasks: A Latent Variable Analysis", "text": "This individual differences study examined the separability of three often postulated executive functions-mental set shifting (\"Shifting\"), information updating and monitoring (\"Updating\"), and inhibition of prepotent responses (\"Inhibition\")-and their roles in complex \"frontal lobe\" or \"executive\" tasks. One hundred thirty-seven college students performed a set of relatively simple experimental tasks that are considered to predominantly tap each target executive function as well as a set of frequently used executive tasks: the Wisconsin Card Sorting Test (WCST), Tower of Hanoi (TOH), random number generation (RNG), operation span, and dual tasking. Confirmatory factor analysis indicated that the three target executive functions are moderately correlated with one another, but are clearly separable. Moreover, structural equation modeling suggested that the three functions contribute differentially to performance on complex executive tasks. Specifically, WCST performance was related most strongly to Shifting, TOH to Inhibition, RNG to Inhibition and Updating, and operation span to Updating. Dual task performance was not related to any of the three target functions. These results suggest that it is important to recognize both the unity and diversity of executive functions and that latent variable analysis is a useful approach to studying the organization and roles of executive functions."}
{"_id": "8ed819fe682efc79801d73e6343885be23d36fb8", "title": "The processing-speed theory of adult age differences in cognition.", "text": "A theory is proposed to account for some of the age-related differences reported in measures of Type A or fluid cognition. The central hypothesis in the theory is that increased age in adulthood is associated with a decrease in the speed with which many processing operations can be executed and that this reduction in speed leads to impairments in cognitive functioning because of what are termed the limited time mechanism and the simultaneity mechanism. That is, cognitive performance is degraded when processing is slow because relevant operations cannot be successfully executed (limited time) and because the products of early processing may no longer be available when later processing is complete (simultaneity). Several types of evidence, such as the discovery of considerable shared age-related variance across various measures of speed and large attenuation of the age-related influences on cognitive measures after statistical control of measures of speed, are consistent with this theory."}
{"_id": "2a6dc23584804d65e5c572881b55dc26abd034c5", "title": "Storage and executive processes in the frontal lobes.", "text": "The human frontal cortex helps mediate working memory, a system that is used for temporary storage and manipulation of information and that is involved in many higher cognitive functions. Working memory includes two components: short-term storage (on the order of seconds) and executive processes that operate on the contents of storage. Recently, these two components have been investigated in functional neuroimaging studies. Studies of storage indicate that different frontal regions are activated for different kinds of information: storage for verbal materials activates Broca's area and left-hemisphere supplementary and premotor areas; storage of spatial information activates the right-hemisphere premotor cortex; and storage of object information activates other areas of the prefrontal cortex. Two of the fundamental executive processes are selective attention and task management. Both processes activate the anterior cingulate and dorsolateral prefrontal cortex."}
{"_id": "501d2a27605221a3719baeed470f7d2137667913", "title": "Cognitive Complexity and Attentional Control in the Bilingual Mind", "text": "In the analysis and control framework, Bialystok identifies analysis (representation) and control (selective attention) as components of language processing and has shown that one of these, control, develops earlier in bilingual children than in comparable monolinguals. In the theory of cognitive complexity and control (CCC), Zelazo and Frye argue that preschool children lack the conscious representation and executive functioning needed to solve problems based on conflicting rules. The present study investigates whether the bilingual advantage in control can be found in a nonverbal task, the dimensional change card sort, used by Zelazo and Frye to assess CCC. This problem contains misleading information characteristic of high-control tasks but minimal demands for analysis. Sixty preschool children, half of whom were bilingual, were divided into a group of younger ( M 5 4,2) and older ( M 5 5,5) children. All the children were given a test of English proficiency (PPVT-R) and working memory (Visually-Cued Recall Task) to assure comparability of the groups and then administered the dimensional change card sort task and the moving word task. The bilingual children were more advanced than the monolinguals in the solving of experimental problems requiring high levels of control. These results demonstrate the role of attentional control in both these tasks, extends our knowledge about the cognitive development of bilingual children, and provides a means of relating developmental proposals articulated in two different theoretical frameworks, namely, CCC and analysis-control."}
{"_id": "26f6ea5492062d9a5a64867b4ed5695132d4083d", "title": "Adaptive Nonlocal Filtering: A Fast Alternative to Anisotropic Diffusion for Image Enhancement", "text": "Nonlinear anisotropic diffusion algorithms provide significant improvement in image enhancement as compared to linear filters. However, the excessive computational cost of solving nonlinear PDEs precludes their use in real-time vision applications. In the present paper, we show that two orders of magnitude speed improvement is provided by a new image filtering paradigm in which an adaptively determined vector field specifies nonlocal application points for an image filter."}
{"_id": "0ee6f663f89e33eb84093ea9cd94212d1a8170c9", "title": "Realization of a composite right/left-handed leaky-wave antenna with circular polarization", "text": "A leaky-wave antenna (LWA) with circular polarization based on the composite right/left-handed (CRLH) substrate integrated waveguide (SIW) is investigated and presented. Series interdigital capacitors have been introduced into the circuit by etching the slots on the waveguide surface achieving a CRLH functionality. Two symmetrical leaky traveling-wave transmission lines with orthogonal polarizations are placed side-by-side and excited with 90\u00b0 phase difference generating a pure circular polarization mode. The main beam of this antenna can be steered continuously by varying the frequency while maintaining a low axial ratio (below 3 dB) within the main beam direction. The performance of this LWA is verified through both full-wave simulation and measurement of a fabricated prototype showing a good agreement."}
{"_id": "3d70eef1a7538460760d85398acbe7154ebbbc16", "title": "Embodiment of Wearable Augmented Reality Technology in Tourism Experiences", "text": "The increasing use of wearable devices for tourism purposes sets the stage for a critical discussion on technological mediation in tourism experiences. This paper provides a theoretical reflection on the phenomenon of embodiment relation in technological mediation and then assesses the embodiment of wearable augmented reality technology in a tourism attraction. The findings suggest that technology embodiment is a multidimensional construct consisting of ownership, location, and agency. These support the concept of technology withdrawal, where technology disappears as it becomes part of human actions, and contest the interplay of subjectivity and intentionality between humans and technology in situated experiences such as tourism. It was also found that technology embodiment affects enjoyment and enhances experience with tourism attractions."}
{"_id": "ebd76f8e4ce274bd48d94f16fd9aeaa78b5ce005", "title": "Document concept lattice for text understanding and summarization", "text": "We argue that the quality of a summary can be evaluated based on how many concepts in the original document(s) that can be preserved after summarization. Here, a concept refers to an abstract or concrete entity or its action often expressed by diverse terms in text. Summary generation can thus be considered as an optimization problem of selecting a set of sentences with minimal answer loss. In this paper, we propose a document concept lattice that indexes the hierarchy of local topics tied to a set of frequent concepts and the corresponding sentences containing these topics. The local topics will specify the promising sub-spaces related to the selected concepts and sentences. Based on this lattice, the summary is an optimized selection of a set of distinct and salient local topics that lead to maximal coverage of concepts with the given number of sentences. Our summarizer based on the concept lattice has demonstrated competitive performance in Document Understanding Conference 2005 and 2006 evaluations as well as follow-on tests. 2007 Elsevier Ltd. All rights reserved."}
{"_id": "38402724788ba6cb9f108292fd9f090db2ae3bda", "title": "Mobility of an in-pipe robot with screw drive mechanism inside curved pipes", "text": "This paper presents motion analyses and experiments of an in-pipe robot with screw drive mechanism while it moves inside curved pipes. This robot is driven by only one motor and composed of two units. One unit works as a rotator and the another one works as a stator. Screw drive mechanism is a better way to travel inside pipelines of small diameter for inspection, because the area for moving of the robot is narrow and the number of actuators can be reduced (minimum is just one). Therefore, the robot can be smaller and lighter. Furthermore, the control becomes easier. Although many kinds of in-pipe robots with screw drive mechanism have been reported to date, the most important problem of such drive mechanism is the difficulty of traveling in pipes other than straight ones. Examples of such pipes are curved pipes like elbow and bent, branch pipes like a T-shape and the pipe where diameter changes. A concrete analysis has not been done yet. In this paper, we concentrate on a helical driving motion of the robot inside the curved pipe and finally perform experiments to determine the characteristics of it."}
{"_id": "15da0849a1e601e0ba7ae4aa7558f7a76ef049d5", "title": "High speed, Low power Approximate Multipliers", "text": "Adders and multipliers form the significant components in all kind of digital applications and has always been a thought-provoking topic for the researchers. Multiplication process will initially find out the set of partial products and then groups the partial products according to their respective bit positions and takes the summation. Among the phases of multiplication, partial product reduction tree consumes the maximum power and area as it is done using several adders. In this paper, a High Speed, Low Power Approximate Multiplier is proposed in which a new 4\u20132 compressor is being projected and is made use while performing vector merge addition after transforming the partial product terms to lead varying probability terms. Simulation results shows that the proposed Approximate Multiplier achieved 3.19% less Area, 53.74% less Delay, 3.79% less Power compared to Approximate Multiplier [6] and 31% less Area, 81.35% less Delay and 59.83% less Power compared to that of an Exact Multiplier. When we make use of Approximate Multipliers, there are likelihoods of error occurring in some cases. Hence approximate multipliers are suggested only for Error Resilient applications. Video compression, Digital Image processing and Test Data compression can be categorized as such applications. The proposed approximate multiplier takes less Power, Area and Time and also can be paralleled to the existing approximate multiplier algorithms. Hence for Error resilient application, it is always sagacious to use less power consuming constituents since power is a crucial one to save in the current scenario."}
{"_id": "1132c72ac86b3f5d5820bf4ab40c05449b1cc756", "title": "Fast Marine Route Planning for UAV Using Improved Sparse A* Algorithm", "text": "This paper focuses on route planning, especially for unmanned aircrafts in marine environment. Firstly, new heuristic information is adopted such as threat-zone, turn maneuver and forbid-zone based on voyage heuristic information. Then, the cost function is normalized to obtain more flexible and reasonable routes. Finally, an improved sparse A* search algorithm is employed to enhance the planning efficiency and reduce the planning time. Experiment results showed that the improved algorithm for aircraft in maritime environment could find a combinational optimum route quickly, which detoured threat-zones, with fewer turn maneuver, totally avoiding forbid-zones, and shorter voyage."}
{"_id": "baf606103d091afcd8da202608dd3322e1969e8e", "title": "The role of Kaizen (continuous improvement) in improving companies' performance: A case study", "text": "Kaizen refers to the philosophy or practices that focus upon continuous improvement of processes in manufacturing, engineering and business management. The improvements are usually accomplished at little or no expense and without sophisticated techniques. The purpose of this paper is to present the results of implementing the Kaizen principles at Port Installed Options Center in Toyota Saudi Arabia. Using relevant Kaizen tools including Toyota Production System (TPS), 5S and the seven Muda enabled us to understand the system and identify the most critical problem areas. The Kaizen target was set after analyzing the problems and identifying the gap between the existing system and the TPS. The main results of the study can be summarized as: using the same facilities it was possible to be more efficient by reducing the manpower needed by 26.9% (from 349 to 275 installers) and more effective by increasing the annual output by 13% (from 188000 to 212400 vehicles); improve the Associates Engagement Index (Q12) by 6.4% (from 2.91 to 3.11); potential for inventory reduction due to the use of Kanban system with just-in-time production; avoid investing in new facilities needed to meet the increase in the demand in the market. All these resulted in improving the production line productivity which were appreciated and acknowledged by Toyota Motor Corporation."}
{"_id": "54176121b8f557e3f3a8f904ddf1b685fa9d636c", "title": "Simulation of CAN bus physical layer using SPICE", "text": "This paper deals with a computational simulation of high speed CAN bus physical layer using SPICE. At the beginning, the general electric parameters of the physical layer are described - these parameters are in compliance with the international standard ISO 11898-2. The next part of the paper describes a SPICE library, which contains all basic electronic parts of physical layer, including CAN transceivers, transmission lines, common mode chokes and ESD protections. The model of ESD generator is also included in the library. In the paper are shown the most important results from the illustrative simulation and the results are compared with the results of the experimental measurement."}
{"_id": "396f057ea64d9c7b932eb7f8c5b4ecfc5a663fd1", "title": "Low-Power High Speed 1-bit Full Adder Circuit Design", "text": "In this paper we presented a new 13T full adder design based on hybrid --CMOS logic design style. Adders are one of the most basic building blocks in digital components present in the Arithmetic Logic Unit (ALU). The performance of an adder have a significant impact on the overall performance of a digital system. The new design is compared with some existing designs for power consumption, delay, PDP at various frequencies viz 10 MHz, 200 MHz and 1 GHz. the simulations are carried out on Cadence Virtuoso at 180nm CMOS technology and the simulation results are analyzed to verify the superiority of the proposed design over the existing designs. Maximum saving of power delay product is at low frequency by proposed circuit is 96.8% with respect to C-CMOS and significant improvement is observed at other frequencies also. The power consumption increases at a slow rate in comparison to other adders with increase in frequency."}
{"_id": "83fd67f0c9e8e909dc7b90025e64bde0385a9a3a", "title": "1 . 2 Named Entity Recognition Task", "text": "In this report, we explore various methods that are applied to solve NER. In section 1, we introduce the named entity problem. In section 2, various named entity recognition methods are discussed in three three broad categories of machine learning paradigm and explore few learning techniques in them. In the first part, we discuss various supervised techniques. Subsequently we move to semi-supervised and unsupervised techniques. In the end we discuss about the method from deep learning to solve NER."}
{"_id": "d6029ad04295079b3634abd013fde50dee3c0fea", "title": "Semantic description, discovery and integration for the Internet of Things", "text": "To share and publish the domain knowledge of IoT objects, the development of a semantic IoT model based directory system that manages meta-data and relationships of IoT objects is required. Many researches focus on static relationships between IoT objects. However, because complex relationships between various resources change with time in an IoT environment, an efficient method for updating the meta-data is required. Thus, we propose an IoT-DS as the IoT directory that supports semantic description, discovery, and integration of IoT objects. Firstly, we introduce a semantic IoT component model to establish a shared conceptualization. Secondly, we present general cases of relationships to efficiently interact between IoT-DS and IoT objects. Thirdly, we construct IoT-DS as a Web portal. Finally, we verify in our evaluation study that the query processing time and communication workload imposed by the proposed approach are reduced."}
{"_id": "ad8c8baf4291ba1481d533b9f5d8ed2097e636a2", "title": "How to evaluate word embeddings? On importance of data efficiency and simple supervised tasks", "text": "Maybe the single most important goal of representation learning is making subsequent learning faster. Surprisingly, this fact is not well reflected in the way embeddings are evaluated. In addition, recent practice in word embeddings points towards importance of learning specialized representations. We argue that focus of word representation evaluation should reflect those trends and shift towards evaluating what useful information is easily accessible. Specifically, we propose that evaluation should focus on data efficiency and simple supervised tasks, where the amount of available data is varied and scores of a supervised model are reported for each subset (as commonly done in transfer learning). In order to illustrate significance of such analysis, a comprehensive evaluation of selected word embeddings is presented. Proposed approach yields a more complete picture and brings new insight into performance characteristics, for instance information about word similarity or analogy tends to be non\u2013linearly encoded in the embedding space, which questions the cosine\u2013based, unsupervised, evaluation methods. All results and analysis scripts are available online."}
{"_id": "5caf0a60ed00c0df6bf3bc4eadcd33cbff228f8a", "title": "Torque-Maximizing Design of Double-Stator, Axial-Flux, PM Machines Using Simple Non-Linear Magnetic Analysis", "text": "This paper presents a torque-maximizing design of a double-stator, axial-flux PM machine using a simple non-linear magnetic analysis. The proposed analysis consists of a geometric- flux-tube-based equivalent-magnetic-circuit model. The model can take into account not only the magnetic saturation but also the three-dimensional(3D) flux distribution. The proposed analysis has realized a dramatic reduction in the computation time compared to 3D-FEA, keeping reasonable analytical accuracy. At first, the validity of the proposed analysis is examined for the various design parameters. After verifying the accuracy of the torque computation of the proposed analysis through the comparisons with 3D-FEA, an 8-pole axial-flux PM machine with double-stator-single-rotor is optimally designed. The performance of the optimized motor are finally verified by 3D-FEA."}
{"_id": "c1f8a3a1b4df9b7856d4fbcfa91ef2752bcc7070", "title": "Fast and Accurate Closest Point Search on Triangulated Surfaces and its Application to Head Motion Estimation", "text": "The Iterative Closest Point (ICP) algorithm is widely used to register two roughly aligned surfaces. Its most expensive step is the search for the closest point. Many efficient variants have been proposed to speed up this step, however they do not guarantee that the closest point on a triangulated surface is found. Instead they might result in an approximation of the closest point. In this paper we present a method for the closest point search that is fast and exact. The method was implemented and used to evaluate the accuracy of head motion estimation using dense 3D data obtained from a stereo camera system. The results show that the accuracy of the estimated head motion is better than 1 mm for translational movements and better than 1 degree for rotations. Key-Words: Iterative Closest Point, 3D Registration, Stereo, Head Motion Estimation"}
{"_id": "aa6272b3fa662866a253c5a7471ace604e01a4d2", "title": "Twitter vigilance: A multi-user platform for cross-domain Twitter data analytics, NLP and sentiment analysis", "text": "The growth and diffusion of online social media have been enormously increased in recent years, as well as the research and commercial interests toward these rising sources of information as a direct public expression of the communities. Moreover, the depth and the quality of data that can be harvested by monitoring and analysis tools have evolved significantly. In particular, Twitter has revealed to be one of the most widespread microblogging services for instantly publishing and sharing opinions, feedbacks, ratings etc., contributing in the development of the emerging role of users as sensors. However, due to the huge amount of data to be collected and analyzed and limitations on data access imposed by Twitter public APIs, more efficient requirements are needed for analytics tools, both in terms of data ingestion and processing, as well as for the computation of analysis metrics, to be provided for deeper statistic insights and further investigations. In this paper, the Twitter Vigilance platform is presented, realized by the DISIT Lab at University of Florence. Twitter Vigilance has been designed as a cross-domain, multi-user tool for collecting and analyzing Twitter data, providing aggregated metrics based on the volume of tweets and retweets, users' influence network, Natural Language Processing and Sentiment Analysis of textual content. The proposed architecture has been validated against a dataset of about 270 million tweets showing a high efficiency in recovering Twitter data. For this reason it has been adopted by a number of researchers as a study platform for social media analysis, early warning, etc."}
{"_id": "376fe2897965854ff024fd87d9a63f5d91e91ba7", "title": "Stochastic Dynamics for Video Infilling.", "text": "In this paper, we introduce a stochastic generation framework (SDVI) to infill long intervals in video sequences. To enhance the temporal resolution, video interpolation aims to produce transitional frames for a short interval between every two frames. Video Infilling, however, aims to complete long intervals in a video sequence. Our framework models the infilling as a constrained stochastic generation process and sequentially samples dynamics from the inferred distribution. SDVI consists of two parts: (1) a bi-directional constraint propagation to guarantee the spatial-temporal coherency among frames, (2) a stochastic sampling process to generate dynamics from the inferred distributions. Experimental results show that SDVI can generate clear and varied sequences. Moreover, motions in the generated sequence are realistic and able to transfer smoothly from the referenced start frame to the end frame."}
{"_id": "c2b80648b2e274f28346b202eea9320ecd66c78d", "title": "A Study of Parsing Process on Natural Language Processing in Bahasa Indonesia", "text": "Research on Natural Language Processing (NLP) in Indonesian is still limited and the results of available research that can be used for further research are also limited. In a series of natural language processing, the initial step is parsing the sentence in a particular language based on the grammar in order to help understanding the meaning of a sentence. This research aims to produce a simulation of Indonesian parser by adapting the process which was conducted by using Collins Algorithm. The three main stages are: 1) preprocessing to generate corpus and events files, 2) lexical analysis to convert the corpus into tokens, and 3) syntax analysis to build parse tree that requires file events to calculate the probability of the grammar by count the occurrence frequency on file events to determine the best sentence trees. An evaluation was performed to the parser using 30 simple sentences and the outcomes were able to generate a corpus file, file events, parse-tree and probability calculations. Nevertheless some sentences could not be parsed completely true because of the limitations of the Tree bank file in Indonesian. Some future works are to develop complete and valid Tree bank and Lexicon files."}
{"_id": "50bc77f3ec070940b1923b823503a4c2b09e9921", "title": "PHANTOM: A Scalable BlockDAG Protocol", "text": ""}
{"_id": "48b38420f9c39c601dcf81621609d131b8035f94", "title": "Smart Health Monitoring Systems: An Overview of Design and Modeling", "text": "Health monitoring systems have rapidly evolved during the past two decades and have the potential to change the way health care is currently delivered. Although smart health monitoring systems automate patient monitoring tasks and, thereby improve the patient workflow management, their efficiency in clinical settings is still debatable. This paper presents a review of smart health monitoring systems and an overview of their design and modeling. Furthermore, a critical analysis of the efficiency, clinical acceptability, strategies and recommendations on improving current health monitoring systems will be presented. The main aim is to review current state of the art monitoring systems and to perform extensive and an in-depth analysis of the findings in the area of smart health monitoring systems. In order to achieve this, over fifty different monitoring systems have been selected, categorized, classified and compared. Finally, major advances in the system design level have been discussed, current issues facing health care providers, as well as the potential challenges to health monitoring field will be identified and compared to other similar systems."}
{"_id": "02460220d72144f971f39613897b6fb67d2253b1", "title": "Aerial-guided navigation of a ground robot among movable obstacles", "text": "We demonstrate the fully autonomous collaboration of an aerial and a ground robot in a mock-up disaster scenario. Within this collaboration, we make use of the individual capabilities and strengths of both robots. The aerial robot first maps an area of interest, then it computes the fastest mission for the ground robot to reach a spotted victim and deliver a first-aid kit. Such a mission includes driving and removing obstacles in the way while being constantly monitored and commanded by the aerial robot. Our mission-planning algorithm distinguishes between movable and fixed obstacles and considers both the time for driving and removing obstacles. The entire mission is executed without any human interaction once the aerial robot is launched and requires a minimal amount of communication between the robots. We describe both the hardware and software of our system and detail our mission-planning algorithm. We present exhaustive results of both simulation and real experiments. Our system was successfully demonstrated more than 20 times at a trade fair."}
{"_id": "08d65470ed0994085a2bfbed17052e50023feeb5", "title": "Beat them or ban them: the characteristics and social functions of anger and contempt.", "text": "This article reports 3 studies in which the authors examined (a) the distinctive characteristics of anger and contempt responses and (b) the interpersonal causes and effects of both emotions. In the 1st study, the authors examined the distinction between the 2 emotions; in the 2nd study, the authors tested whether contempt could be predicted from previous anger incidents with the same person; and in the 3rd study, the authors examined the effects of type of relationship on anger and contempt reactions. The results of the 3 studies show that anger and contempt often occur together but that there are clear distinctions between the 2 emotions: Anger is characterized more by short-term attack responses but long-term reconciliation, whereas contempt is characterized by rejection and social exclusion of the other person, both in the short-term and in the long-term. The authors also found that contempt may develop out of previously experienced anger and that a lack of intimacy with and perceived control over the behavior of the other person, as well as negative dispositional attributions about the other person, predicted the emergence of contempt."}
{"_id": "93a3703a113eb7726345f7cc3c8dbc564d4ece15", "title": "FPMR: MapReduce Framework on FPGA A Case Study of RankBoost Acceleration", "text": "Machine learning and data mining are gaining increasing attentions of the computing society. FPGA provides a highly parallel, low power, and flexible hardware platform for this domain, while the difficulty of programming FPGA greatly limits its prevalence. MapReduce is a parallel programming framework that could easily utilize inherent parallelism in algorithms. In this paper, we describe FPMR, a MapReduce framework on FPGA, which provides programming abstraction, hardware architecture, and basic building blocks to developers. An on-chip processor scheduler is implemented to maximize the utilization of computation resources and achieve better load balancing. An efficient data access scheme is carefully designed to maximize data reuse and throughput. Meanwhile, the FPMR framework hides the task control, synchronization, and communication away from designers so that more attention can be paid to the application itself. A case study of RankBoost acceleration based on FPMR demonstrates that FPMR efficiently helps with the development productivity; and the speedup is 31.8x versus CPU-based implementation. This performance is comparable to a fully manually designed version, which achieves 33.5x speedup. Two other applications: SVM, PageRank are also discussed to show the generalization of the framework."}
{"_id": "7e58e877c4236913a7a2dd0e3a1e6f9b60721cd7", "title": "Area-efficient high-speed hybrid 1-bit full adder circuit using modified XNOR gate", "text": "A hybrid 1-bit full adder design is presented here using modified 3T-XNOR gate to improve the area and speed performance. The design is implemented for 1-bit full adder and then is scaled to 32-bit adder. Combination of CMOS and transmission gate logic is used to enhance the performance in terms of area, delay and power. The performance of the proposed design is evaluated through the simulation analysis in 90-nm technology with 1.2v supply voltage. The effect of scaling on the overall performance is also analyzed through the performance evaluation of 1-bit and 32-bit adder. The performance of proposed design is also compared with conventional design to verify the effectiveness in terms of area, power, delay."}
{"_id": "69f1806ca756846d144e552770e1c71592809397", "title": "Evaluating Knowledge Representation and Reasoning Capabilites of Ontology Specification Languages", "text": "The interchange of ontologies across the World Wide Web (WWW) and the cooperation among heterogeneous agents placed on it is the main reason for the development of a new set of ontology specification languages, based on new web standards such as XML or RDF. These languages (SHOE, XOL, RDF, OIL, etc) aim to represent the knowledge contained in an ontology in a simple and human-readable way, as well as allow for the interchange of ontologies across the web. In this paper, we establish a common framework to compare the expressiveness of \u201c traditional\u201d ontology languages (Ontolingua, OKBC, OCML, FLogic, LOOM) and \u201cweb-based\u201d ontology languages. As a result of this study, we conclude that different needs in KR and reasoning may exist in the building of an ontology-based application, and these needs must be evaluated in order to choose the most suitable ontology language(s)."}
{"_id": "ebbf93b4c79d6bdac6ce5b56aa9cf24b3bb1f542", "title": "The SWAL-QOL Outcomes Tool for Oropharyngeal Dysphagiain Adults: II. Item Reduction and Preliminary Scaling", "text": "The SWAL-QOL outcomes tool was constructed for use in clinical research for patients with oropharyngeal dysphagia. The SWAL-QOL was constructed a priori to enable preliminary psychometric analyses of items and scales before its final validation. This article describes data analysis from a pretest of the SWAL-QOL. We evaluated the different domains of the SWAL-QOL for respondent burden, data quality, item variability, item convergent validity, internal consistency reliability as measured by Cronbach's alpha, and range and skewness of scale scores upon aggregation and floor and ceiling effects. The item reduction techniques outlined reduced the SWAL-QOL from 185 to 93 items. The pretest of the SWAL-QOL afforded us the opportunity to select items for the ongoing validation study which optimally met our a priori psychometric criteria of high data quality, normal item distributions, and robust evidence of item convergent validity."}
{"_id": "5fe85df7321aad95d4a5ec58889ea55f7f0c5ff2", "title": "A Descriptive Content Analysis of Trust-Building Measures in B2B Electronic Marketplaces", "text": "Because business-to-business (B2B) electronic marketplaces (e-marketplaces) facilitate transactions between buyers and sellers, they strive to foster a trustworthy trading environment with a variety of trust-building measures. However, little research has been undertaken to explore trust-building measures used in B2B e-marketplaces, or to determine to what extent these measures are applied in B2B e-marketplaces and how they are applied. Based on reviews of the scholarly, trade, and professional literature on trust in electronic commerce, we identified 11 trustbuilding measures used to create trust in B2B e-marketplaces. Zucker\u2019s trust production theory [1986] was applied to understand how these trust-building measures will enhance participants\u2019 trust in buyers and sellers in B2B e-marketplaces or in B2B e-marketplace providers. A descriptive content analysis of 100 B2B e-marketplaces was conducted to survey the current usage of the 11 trust-building measures. Many of the trust-building measures were found to be widely used in the B2B e-marketplaces. However, although they were proven to be effective in building trust-related beliefs in online business environments, several institutional-based trustbuilding measures, such as escrow services, insurance and third-party assurance seals, are not widely used in B2B e-marketplaces."}
{"_id": "66336d0b89c3eca3dec0a41d2696a0fda23b6957", "title": "A Low-Profile Broadband 32-Slot Continuous Transverse Stub Array for Backhaul Applications in $E$ -Band", "text": "A high-gain, broadband, and low-profile continuous transverse stub antenna array is presented in E-band. This array comprises 32 long slots excited in parallel by a uniform corporate parallel-plate-waveguide beamforming network combined to a pillbox coupler. The radiating slots and the corporate feed network are built in aluminum whereas the pillbox coupler and its focal source are fabricated in printed circuit board technology. Specific transitions have been designed to combine both fabrication technologies. The design, fabrication, and measurement results are detailed, and a simple design methodology is proposed. The antenna is well matched ( $S_{11} < -13.6$ dB) between 71 and 86 GHz, and an excellent agreement is found between simulations and measurements, thus validating the proposed design. The antenna gain is higher than 29.3 dBi over the entire bandwidth, with a peak gain of 30.8 dBi at 82.25 GHz, and a beam having roughly the same half-power beamwidth in E- and H-planes. This antenna architecture is considered as an innovative solution for long-distance millimeter-waves telecommunication applications such as fifth-generation backhauling in E-band."}
{"_id": "663444f7bb70eb20c1f9c6c084db4d7a1dff21c4", "title": "News Article Teaser Tweets and How to Generate Them", "text": "We define the task of teaser generation and provide an evaluation benchmark and baseline systems for it. A teaser is a short reading suggestion for an article that is illustrative and includes curiosity-arousing elements to entice potential readers to read the news item. Teasers are one of the main vehicles for transmitting news to social media users. We compile a novel dataset of teasers by systematically accumulating tweets and selecting ones that conform to the teaser definition. We compare a number of neural abstractive architectures on the task of teaser generation and the overall best performing system is See et al. (2017)\u2019s seq2seq with pointer network."}
{"_id": "07d9dd5c25c944bf009256cdcb622feda53dabba", "title": "Markov Chain Monte Carlo Maximum Likelihood", "text": "Markov chain Monte Carlo (e. g., the Metropolis algorithm and Gibbs sampler) is a general tool for simulation of complex stochastic processes useful in many types of statistical inference. The basics of Markov chain Monte Carlo are reviewed, including choice of algorithms and variance estimation, and some new methods are introduced. The use of Markov chain Monte Carlo for maximum likelihood estimation is explained, and its performance is compared with maximum pseudo likelihood estimation."}
{"_id": "5e7cf311e1d0f0a137add1b1d373b185c52f04bc", "title": "Modeling and power conditioning for thermoelectric generation", "text": "In this paper, the principle and basic structure of the thermoelectric module is introduced. The steady- state and dynamic behaviors of a single TE module are characterized. An electric model of TE modules is developed and can be embedded in the simulation software for circuit analysis and design. The issues associated with the application of the TEG models is analyzed and pointed out. Power electronic technologies provide solutions for thermoelectric generation with features such as load interfacing, maximum power point tracking, power conditioning and failed module bypassing. A maximum power point tracking algorithm is developed and implemented with a DC-DC converter and low cost microcontroller. Experimental results demonstrated that the power electronic circuit can extract the maximum electrical power from the thermoelectric modules and feed electric loads regardless of the thermoelectric module's heat flux and load impedance or conditions."}
{"_id": "411500f7aaa0f3c00d492b78b24b33da0fd0d58d", "title": "Identifying Analogies Across Domains", "text": "Identifying analogies across domains without supervision is an important task for artificial intelligence. Recent advances in cross domain image mapping have concentrated on translating images across domains. Although the progress made is impressive, the visual fidelity many times does not suffice for identifying the matching sample from the other domain. In this paper, we tackle this very task of finding exact analogies between datasets i.e. for every image from domain A find an analogous image in domain B. We present a matching-by-synthesis approach: AN-GAN, and show that it outperforms current techniques. We further show that the cross-domain mapping task can be broken into two parts: domain alignment and learning the mapping function. The tasks can be iteratively solved, and as the alignment is improved, the unsupervised translation function reaches quality comparable to full supervision."}
{"_id": "a05e4272d00860c701fa2110365bddaa5a169b80", "title": "Photovoltaic based water pumping system", "text": "In this paper a stand-alone Photovoltaic (PV) systems is presented for water pumping. Solar PV water pumping systems are used for irrigation and drinking water. PV based pumping systems without battery can provide a cost-effective use of solar energy. For the purpose of improving efficiency of the system perturb and observe (P&O) algorithm based Maximum Power Point Tracker (MPPT) is connected to this system. The aim of this paper is to show how to achieve an effective photovoltaic pumping system without battery storage. Results are presented based on different cases of irrigation pumping application and availability of solar irradiance. Simulation results using MATLAB/SIMULINK show that the performance of the controllers both in transient as well as in steady state is quite satisfactory."}
{"_id": "4fa65adc0ad69471b9f146407399db92d17b5140", "title": "Quality Management in Systems Development: An Organizational System Perspective", "text": "We identify top management leadership, a sophisticated management infrastructure, process management efficacy, and stakeholder participation as important elements of a quality-oriented organizational system for software development. A model interrelating these constructs and quality performance is proposed. Data collected through a national survey of IS executives in Fortune 1000 companies and government agencies was used to Robert Zmud was the accepting senior editor for this paper. test the model using a Partial Least Squares analysis methodology. Our results suggest that software quality goals are best attained when top management creates a management infrastructure that promotes improvements in process design and encourages stakeholders to evolve the design of the development processes. Our results also suggest that all elements of the organizational system need to be developed in order to attain quality goals and that piecemeal adoption of select quality management practices are unlikely to be effective. Implications of this research for IS theory and practice are discussed."}
{"_id": "83bd2592ae88603a397d63815b353816a7cec4b1", "title": "Experiences with Selecting Search Engines Using Metasearch", "text": "Search engines are among the most useful and high-profile resources on the Internet. The problem of finding information on the Internet has been replaced with the problem of knowing where search engines are, what they are designed to retrieve, and how to use them. This article describes and evaluates SavvySearch, a metasearch engine designed to intelligently select and interface with multiple remote search engines. The primary metasearch issue examined is the importance of carefully selecting and ranking remote search engines for user queries. We studied the efficacy of SavvySearch's incrementally acquired metaindex approach to selecting search engines by analyzing the effect of time and experience on performance. We also compared the metaindex approach to the simpler categorical approach and showed how much experience is required to surpass the simple scheme."}
{"_id": "8a76de2c9fc7021e360c770b96640860571037a4", "title": "Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1", "text": "We introduce a method to train Binarized Neural Networks (BNNs) neural networks with binary weights and activations at run-time. At training-time the binary weights and activations are used for computing the parameters gradients. During the forward pass, BNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations, which is expected to substantially improve power-efficiency. To validate the effectiveness of BNNs we conduct two sets of experiments on the Torch7 and Theano frameworks. On both, BNNs achieved nearly state-of-the-art results over the MNIST, CIFAR-10 and SVHN datasets. Last but not least, we wrote a binary matrix multiplication GPU kernel with which it is possible to run our MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The code for training and running our BNNs is available on-line."}
{"_id": "e86f2ad8b768df4f34031e20f07fdeb19af505a9", "title": "An Accurate Multi-Row Panorama Generation Using Multi-Point Joint Stitching", "text": "Most of the existing panorama generation tools require the input images to be captured along one direction, and yield a narrow strip panorama. To generate a large viewing field panorama, this paper proposes a multi-row panorama generation (MRPG) method. For a pan/tilt camera whose scanning path covers a wide range of horizontal and vertical views, the image frames in different views correspond to different coordinate benchmarks and different projections. And the image frame should not only be aligned with the continuous frames in timeline but also be aligned with other frames in spatial neighborhood even with long time intervals. For these problems, MRPG first designs an optimal scanning path to cover the large viewing field, and chooses the center frame as the reference frame to start to stitch. The stitching order of multi-frame is arranged in first-column and second-row to ensure a small alignment error. Moreover, MRPG proposes a multi-point joint stitching method to eliminate the seams and correct the distortions, which makes the current frame accurately integrated into the panoramic canvas from all directions. Experimental results show that MRPG can generate a more accurate panorama than other state-of-the-art image stitching methods, and give a better visual effect for a large viewing field panorama."}
{"_id": "13f4d056f6ba318074187ea5ade7f231803e2879", "title": "Privacy-Preserving Image Denoising From External Cloud Databases", "text": "Along with the rapid advancement of digital image processing technology, image denoising remains a fundamental task, which aims to recover the original image from its noisy observation. With the explosive growth of images on the Internet, one recent trend is to seek high quality similar patches at cloud image databases and harness rich redundancy therein for promising denoising performance. Despite the well-understood benefits, such a cloud-based denoising paradigm would undesirably raise security and privacy issues, especially for privacy-sensitive image data sets. In this paper, we initiate the first endeavor toward privacy-preserving image denoising from external cloud databases. Our design enables the cloud hosting encrypted databases to provide secure query-based image denoising services. Considering that image denoising intrinsically demands high quality similar image patches, our design builds upon recent advancements on secure similarity search, Yao\u2019s garbled circuits, and image denoising operations, where each is used at a different phase of the design for the best performance. We formally analyze the security strengths. Extensive experiments over real-world data sets demonstrate that our design achieves the denoising quality close to the optimal performance in plaintext."}
{"_id": "e3435df055d3ea85b5e213c5a9bceb542eeb236c", "title": "Induction and synchronous reluctance motors comparison", "text": "The aim of this paper is to investigate and compare the performances of two induction motors and two transverse laminated synchronous reluctance motors (induction motor nameplate data: 2.2 and 4 kW, 380 V, 50 Hz, 4 pole). Each induction motor is compared with a synchronous reluctance motor, that has the same stator lamination and winding, but, obviously, different rotor. The analysis has been based both on an analytical and experimental approach. The results have shown that the synchronous reluctance motor, compared to the induction motor, is capable of around a 10% to 15% larger rated torque for a given frame size. The direct comparison of the performances of the motor types has been made both for constant load and constant dissipated power conditions."}
{"_id": "2ffd5c401a958e88c80291c48738d21d96942c1a", "title": "The learning and use of traversability affordance using range images on a mobile robot", "text": "We are interested in how the concept of affordances can affect our view to autonomous robot control, and how the results obtained from autonomous robotics can be reflected back upon the discussion and studies on the concept of affordances. In this paper, we studied how a mobile robot, equipped with a 3D laser scanner, can learn to perceive the traversability affordance and use it to wander in a room tilled with spheres, cylinders and boxes. The results showed that after learning, the robot can wander around avoiding contact with non-traversable objects (i.e. boxes, upright cylinders, or lying cylinders in certain orientation), but moving over traversable objects (such as spheres, and lying cylinders in a rollable orientation with respect to the robot) rolling them out of its way. We have shown that for each action approximately 1% of the perceptual features were relevant to determine whether it is afforded or not and that these relevant features are positioned in certain regions of the range image. The experiments are conducted both using a physics-based simulator and on a real robot."}
{"_id": "788121f29d86021a99a4d1d8ba53bb1312334b16", "title": "The role of tutoring in problem solving.", "text": "THIS PAPER is concerned with the nature of the tutorial process; the means whereby an adult or \"expert\" helps somebody who is less adult or less expert. Though its aim is general, it is expressed in terms of a particular task: a tutor seeks to teach children aged 3, 4 and 5 yr to build a particular three-dimensional structure that requires a degree of skill that is initially beyond them. It is the usual type of tutoring situation in which one member \"knows the answer\" and the other does not, rather like a \"practical\" in which only the instructor \"knows how\". The changing interaction of tutor and children provide our data. A great deal of early problem solving by the developing child is of this order. Although from the earliest months of life he is a \"natural\" problem solver in his own right (e.g. Bruner, 1973) it is often the ease that his efforts are assisted and fostered by others who are more skilful than he is (Kaye, 1970). Whether he is learning the procedures that constitute the skills of attending, communicating, manipulating objects, locomoting, or, indeed, a more effective problem solving procedure itself, there are usually others in attendance who help him on his way. Tutorial interactions are, in short, a crucial feature of infancy and childhood. Our species, moreover, appears to be the only one in which any \"intentional\" tutoring goes on (Bruner, 1972; Hinde, 1971). For although it is true that many of the higher primate species learn by observation of their elders (Hamburg, 1968; van Lawick-Goodall, 1968), there is no evidence that those elders do anything to instruct their charges in the performance of the skill in question. What distinguishes man as a species is not only his capacity for learning, but for teaching as well. It is the main aim of this paper to examine some of the major implications of this interactive, instructional relationship between the developing child and his elders for the study of skill acquisition and problem solving. The acquisition of skill in the human child can be fruitfully conceived as a hierarchical program in which component skills are combined into \"higher skills\" by appropriate orchestration to meet new, more complex task requirements (Bruner, 1973). The process is analogous to problem solving in which mastery of \"lower order\" or constituent problems in a sine qua non for success with a larger jjroblcm, each level influencing the other\u2014as with reading where the deciphering of words makes possible the deciphering of sentences, and sentences then aid in the deciphering of particular words (F. Smith, 1971). Given persistent intention in the young learner, given a \"lexicon\" of constituent skills, the crucial task is often one of com-"}
{"_id": "8b3939b8d31fd384cbb6fc7cb95f06425f035f6d", "title": "Increasing fruit and vegetable intake by changing environments, policy and pricing: restaurant-based research, strategies, and recommendations.", "text": "BACKGROUND\nRestaurants are among the most important and promising venues for environmental, policy, and pricing initiatives to increase fruit and vegetable (F&V) intake. This article reviews restaurant-based environmental, policy and pricing strategies for increasing intake of fruits and vegetables and identifies promising strategies, research needs, and innovative opportunities for the future.\n\n\nMETHODS\nThe strategies, examples, and research reported here were identified through an extensive search of published journal articles, government documents, the internet, and inquiries to leaders in the field. Recommendations were expanded based on discussion by participants in the CDC/ACS-sponsored Fruit and Vegetable, Environment Policy and Pricing Workshop held in September of 2002.\n\n\nRESULTS\nSix separate types of restaurant-based interventions were identified: increased availability, increased access, reduced prices and coupons, catering policies, point-of-purchase (POP) information, and promotion and communication. Combination approaches have also been implemented. Evaluation data on these interventions show some significant impact on healthful diets, particularly with point-of-purchase information. However, most published reports emphasize low-fat eating, and there is a need to translate and evaluate interventions focused on increasing fruit and vegetable intake.\n\n\nCONCLUSIONS\nSeveral models for changing environments, policy and pricing to increase fruit and vegetable availability, access, attractiveness and consumption in restaurants have been tested and found to have some promise. There is a need to evaluate fruit and vegetable-specific strategies; to obtain data from industry; to disseminate promising programs; and to enhance public-private partnerships and collaboration to expand on current knowledge."}
{"_id": "0560497d97603c56ebe32bc84ff8b42572be66cc", "title": "Many regression algorithms, one unified model: A review", "text": "Regression is the process of learning relationships between inputs and continuous outputs from example data, which enables predictions for novel inputs. The history of regression is closely related to the history of artificial neural networks since the seminal work of Rosenblatt (1958). The aims of this paper are to provide an overview of many regression algorithms, and to demonstrate how the function representation whose parameters they regress fall into two classes: a weighted sum of basis functions, or a mixture of linear models. Furthermore, we show that the former is a special case of the latter. Our ambition is thus to provide a deep understanding of the relationship between these algorithms, that, despite being derived from very different principles, use a function representation that can be captured within one unified model. Finally, step-by-step derivations of the algorithms from first principles and visualizations of their inner workings allow this article to be used as a tutorial for those new to regression."}
{"_id": "4bf5bd97dc53422d864046562f4078f156b5b726", "title": "User Models for Adaptive Hypermedia and Adaptive Educational Systems", "text": "One distinctive feature of any adaptive system is the user model that represents essential information about each user. This chapter complements other chapters of this book in reviewing user models and user modeling approaches applied in adaptive Web systems. The presentation is structured along three dimensions: what is being modeled, how it is modeled, and how the models are maintained. After a broad overview of the nature of the information presented in these various user models, the chapter focuses on two groups of approaches to user model representation and maintenance: the overlay approach to user model representation and the uncertainty-based approach to user"}
{"_id": "db8e01da48065130581027a66267df41bea4ab17", "title": "Multimedia Question Answering", "text": "By surveying recent research in multimedia question answering, this article explores QA's evolution from text to multimedia and identifies the challenges in the field."}
{"_id": "8dc35218acf021b61c48ff99ecc7fa58df1b3adc", "title": "Sexual Orientation and Gender Identity Data Collection in Clinical Settings and in Electronic Health Records: A Key to Ending LGBT Health Disparities.", "text": "The Institute of Medicine's (IOM's) 2011 report on the health of LGBT people pointed out that there are limited health data on these populations and that we need more research. It also described what we do know about LGBT health disparities, including lower rates of cervical cancer screening among lesbians, and mental health issues related to minority stress. Patient disclosure of LGBT identity enables provider-patient conversations about risk factors and can help us reduce and better understand disparities. It is essential to the success of Healthy People 2020's goal of eliminating LGBT health disparities. This is why the IOM's report recommended data collection in clinical settings and on electronic health records (EHRs). The Center for Medicare and Medicaid Services and the Office of the National Coordinator of Health Information Technology rejected including sexual orientation and gender identity (SOGI) questions in meaningful use guidelines for EHRs in 2012 but are considering this issue again in 2013. There is overwhelming community support for the routine collection of SOGI data in clinical settings, as evidenced by comments jointly submitted by 145 leading LGBT and HIV/AIDS organizations in January 2013. Gathering SOGI data in EHRs is supported by the 2011 IOM's report on LGBT health, Healthy People 2020, the Affordable Care Act, and the Joint Commission. Data collection has long been central to the quality assurance process. Preventive health care from providers knowledgeable of their patients' SOGI can lead to improved access, quality of care, and outcomes. Medical and nursing schools should expand their attention to LGBT health issues so that all clinicians can appropriately care for LGBT patients."}
{"_id": "55da105bcec35b10103671fe87572aec515754b8", "title": "Collision-free object movement using vector fields", "text": "The topics of computer animation and object collision detection and resolution have become increasingly desirable in a wide range of applications including autonomous vehicle navigation, robotics, and the movie industry. However, the techniques currently available are typically difficult to use or are restricted to a particular domain. This paper presents a technique for providing automatic animation and collision avoidance of arbitrary objects in a computer graphics system. The underlying construct used in this process is a surrounding volume octree vector field. The system automatically generates these structures around objects in the scene. By judicious creation and use of these vector fields, objects in the scene move and interact, but do not collide. The manner in which these vector fields are created is given. Two applications cloud movement over terrain and autonomous aircraft navigation are presented which show typical usage of this technique."}
{"_id": "7b59298fa604fee66e41f16fcea5c481d9fe56b4", "title": ": Android Deobfuscation via \u201c Big Code \u201d", "text": "This work presents a new approach for deobfuscating Android APKs based on probabilistic learning of large code bases (termed \u201cBig Code\u201d). The key idea is to learn a probabilistic model over thousands of non-obfuscated Android applications and to use this probabilistic model to deobfuscate new, unseen Android APKs. The concrete focus of the paper is on reversing layout obfuscation, a popular transformation which renames key program elements such as classes, packages and methods, thus making it difficult to understand what the program does. Concretely, the paper: (i) phrases the layout deobfuscation problem of Android APKs as structured prediction in a probabilistic graphical model, (ii) instantiates this model with a rich set of features and constraints that capture the Android setting, ensuring both semantic equivalence and high prediction accuracy, and (iii) shows how to leverage powerful inference and learning algorithms to achieve overall precision and scalability of the probabilistic predictions. We implemented our approach in a tool called DeGuard and used it to: (i) reverse the layout obfuscation performed by the popular ProGuard system on benign, open-source applications, (ii) predict third-party libraries imported by benign APKs (also obfuscated by ProGuard), and (iii) rename obfuscated program elements of Android malware. The experimental results indicate that DeGuard is practically effective: it recovers 79.1% of the program element names obfuscated with ProGuard, it predicts third-party libraries with accuracy of 91.3%, and it reveals string decoders and classes that handle sensitive data in Android malware."}
{"_id": "d8a82d1887d66515ca974edd6a9e35fb52767a27", "title": "Detection of Slang Words in e-Data using semi-Supervised Learning", "text": "The proposed algorithmic approach deals with finding the sense of a word in an electronic data. Now a day, in different communication mediums like internet, mobile services etc. people use few words, which are slang in nature. This approach detects those abusive words using supervised learning procedure. But in the real life scenario, the slang words are not used in complete word forms always. Most of the times, those words are used in different abbreviated forms like sounds alike forms, taboo morphemes etc. This proposed approach can detect those abbreviated forms also using semi supervised learning procedure. Using the synset and concept analysis of the text, the probability of a suspicious word to be a slang word is also evaluated."}
{"_id": "8a2dc3e8c6d8de5b5215167a2a7ab1d6d9bb7e3b", "title": "Thermally Stable Enhancement-Mode GaN Metal-Isolator-Semiconductor High-Electron-Mobility Transistor With Partially Recessed Fluorine-Implanted Barrier", "text": "Al2O3/AlGaN/GaN enhancement-mode metalisolator-semiconductor high-electron-mobility transistor (MIS-HEMT) featuring a partially recessed (Al) GaN barrier was realized by a fluorine plasma implantation/etch technique. By properly adjusting the RF power driving the fluorine plasma, the fluorine plasma is able to produce two desirable results: 1) a well-controlled slow dry etching for gate recess and 2) implanting fluorine ions into the AlGaN barrier. The fluorine ions become negatively charged in the barrier layer and induce a positive shift in the threshold voltage. The proposed MIS-HEMT exhibits a threshold voltage (VTH) of +0.6 V at a drain current of 10 \u03bcA/mm, a maximum drive current of 730 mA/mm, an ON-resistance of 7.07 \u03a9 \u00b7 mm, and an OFF-state breakdown voltage of 703 V at an OFF-state drain leakage current of 1 \u03bcA/mm. From room temperature to 200 \u00b0C, the device exhibits a small negative shift of VTH (~0.5 V) that is attributed to the high-quality dielectric/F-implanted-(Al) GaN interface and the partially recessed barrier."}
{"_id": "a42ab729d016fd07a8718d8f5aa9d3956bd6127c", "title": "Inference and Design in Kuba and Zillij Art with Shape Grammars", "text": "We present a simple method for structural inference in African Kuba cloth, and Moorish zillij mosaics. Our work is based on Stiny and Gips\u2019 formulation of \u201cShape Grammars\u201d. It provides art scholars and geometers with a simple yet powerful medium to perform analysis of existing art and draw inspiration for new designs. The analysis involves studying an artwork and capturing its structure as simple shape grammar rules. We then show how interesting families of artworks could be generated using simple variations in their corresponding grammars."}
{"_id": "bc8e11b8cdf0cfbedde798a53a0318e8d6f67e17", "title": "Deep Learning for Fixed Model Reuse", "text": "Model reuse attempts to construct a model by utilizing existing available models, mostly trained for other tasks, rather than building a model from scratch. It is helpful to reduce the time cost, data amount, and expertise required. Deep learning has achieved great success in various tasks involving images, voices and videos. There are several studies have the sense of model reuse, by trying to reuse pre-trained deep networks architectures or deep model features to train a new deep model. They, however, neglect the fact that there are many other fixed models or features available. In this paper, we propose a more thorough model reuse scheme, FMR (Fixed Model Reuse). FMR utilizes the learning power of deep models to implicitly grab the useful discriminative information from fixed model/features that have been widely used in general tasks. We firstly arrange the convolution layers of a deep network and the provided fixed model/features in parallel, fully connecting to the output layer nodes. Then, the dependencies between the output layer nodes and the fixed model/features are knockdown such that only the raw feature inputs are needed when the model is being used for testing, though the helpful information in the fixed model/features have already been incorporated into the model. On one hand, by the FMR scheme, the required amount of training data can be significantly reduced because of the reuse of fixed model/features. On the other hand, the fixed model/features are not explicitly used in testing, and thus, the scheme can be quite useful in applications where the fixed model/features are protected by patents or commercial secrets. Experiments on five real-world datasets validate the effectiveness of FMR compared with state-of-the-art deep methods."}
{"_id": "cbe32f6f76486438df0bd4b6811b7ff1d11ecf18", "title": "A mixture model for preferences data analysis", "text": "A mixture model for preferences data, which adequately represents the composite nature of the elicitation mechanism in ranking processes, is proposed. Both probabilistic features of the mixture distribution and inferential and computational issues arising from the maximum likelihood parameters estimation are addressed. Moreover, empirical evidence from different data sets confirming the goodness of fit of the proposed model to many real preferences data is shown. \u00a9 2004 Elsevier B.V. All rights reserved."}
{"_id": "0808bb50993547a533ea5254e0454024d98c5e2f", "title": "A Recurrent Latent Variable Model for Sequential Data", "text": "In this paper, we explore the inclusion of latent random variables into the dynamic hidden state of a recurrent neural network (RNN) by combining elements of the variational autoencoder. We argue that through the use of high-level latent random variables, our variational RNN (VRNN) is able to learn to model the kind of variability observed in highly-structured sequential data (such as speech). We empirically evaluate the proposed model against related sequential models on five sequential datasets, four of speech and one of handwriting. Our results show the importance of the role that latent random variables can play in the RNN dynamic hidden state."}
{"_id": "b2b427219bed5c0f83ad3fe5a3d285313759f5d2", "title": "Least Squares Temporal Difference Learning and Galerkin \u2019 s Method", "text": "The problem of estimating the value function underlying a Markovian reward process is considered. As it is well known, the value function underlying a Markovian reward process satisfied a linear fixed point equation. One approach to learning the value function from finite data is to find a good approximation to the value function in a given (linear) subspace of the space of value functions. We review some of the issues that arise when following this approach, as well as some results that characterize the finite-sample performance of some of the algorithms. 1 Markovian Reward Processes Let X be a measurable space and consider a stochastic process (X0, R1, X1, R2, X2, . . .), where Xt \u2208 X and Rt+1 \u2208 R, t = 0, 1, 2, . . .. The process is called a Markovian Reward process if \u2022 (X0, X1, . . .) is a Markov process, and \u2022 for any t \u2265 0, given Xt, Xt+1 the distribution of Rt+1 is independent of the history of the process. Here, Xt is called the state of the system at time t, while Rt+1 is the reward associated to transitioning from Xt to Xt+1. We shall denote by P the Markovian kernel underlying the process: Thus, the distribution of (Xt+1, Rt+1) given Xt is given by P(\u00b7, \u00b7|Xt), t = 0, 1, . . .. Fix the so-called discount factor 0 \u2264 \u03b3 \u2264 1 and define the (total discounted) return associated to the process R = \u221e \u2211"}
{"_id": "513cc8a1e3963b7c2c774ba7600ef08c0acb8917", "title": "A Dual Path Deep Network for Single Image Super-Resolution Reconstruction", "text": "Super-resolution reconstruction based on deep learning has come a long way since the first proposed method in 2015. Numerous methods have been developed for this task using deep learning approaches. Among these methods, residual deep learning algorithms have shown better performance. Although all early proposed deep learning based super-resolution frameworks used bicubic upsampled versions of low resolution images as the main input, most of the current ones use the low resolution images directly by adding up-sampling layers to their networks. In this work, we propose a new method by using both low resolution and bicubic upsampled images as the inputs to our network. The final results confirm that decreasing the depth of the network in lower resolution space and adding the bicubic path lead to almost similar results to those of the deeper networks in terms of PSNR and SSIM, yet making the network computationally inexpensive and more efficient."}
{"_id": "6e8e0727414f753b5a556559556a3922666838d2", "title": "BLE Beacon Based Patient Tracking in Smart Care Facilities", "text": "Patient tracking is an important component toward creating smart cities. In this demo we use Bluetooth Low Energy beacons and single board computers to track patients in the emerging field of smart care facilities. Our model utilizes a fixed scanner moving transmitter method for wireless tracking of patients through the facility. The data collected by all scanners is stored within a central database that is designed to be efficiently queried. We show how inexpensive components in conjunction with free open source software can be used to implement a patient tracking system. We focus on the pipeline between acquisition and display of the location data. Additionally, we also discuss the manipulation of the data required for usability, and optional filtering operations that improve accuracy."}
{"_id": "f654bcf17bb5d6ee0359f0db8ecb47dcf0c19b33", "title": "Immediate effect of a slow pace breathing exercise Bhramari pranayama on blood pressure and heart rate.", "text": "The study was carried out to evaluate the immediate effect Bhramari pranayama, a slow breathing exercise for 5 minutes on heart rate and blood pressure. Heart rate and blood pressure of volunteers were recorded. The subject was directed to inhale slowly up to the maximum for about 5 seconds and then to exhale slowly up to the maximum for about 15 sec keeping two thumbs on two external auditory canal, index and middle finger together on two closed eyes and ring finger on the two sides of the nose. During exhalation the subject must chant the word \"O-U-Mmmma\" with a humming nasal sound mimicking the sound of a humming wasp, so that the laryngeal walls and the inner walls of the nostril mildly vibrate (Bhramari pranayama, respiratory rate 3/min). After 5 minutes of this exercise, the blood pressure and heart rate were recorded again. Both the systolic and diastolic blood pressure were found to be'decreased with a slight fall in heart rate. Fall of diastolic pressure and mean pressure were significant. The result indicated that slow pace Bhramari pranayama for 5 minutes, induced parasympathetic dominance on cardiovascular system."}
{"_id": "713f73ce5c3013d9fb796c21b981dc6629af0bd5", "title": "Deep Neural Networks for Object Detection", "text": "Deep Neural Networks (DNNs) have recently shown outstanding performance on image classification tasks [14]. In this paper we go one step further and address the problem of object detection using DNNs, that is not only classifying but also precisely localizing objects of various classes. We present a simple and yet powerful formulation of object detection as a regression problem to object bounding box masks. We define a multi-scale inference procedure which is able to produce high-resolution object detections at a low cost by a few network applications. State-of-the-art performance of the approach is shown on Pascal VOC."}
{"_id": "0926257e7672c799b6fc6da1dadb45481bfedc20", "title": "Point-and-shoot memories: the influence of taking photos on memory for a museum tour.", "text": "Two studies examined whether photographing objects impacts what is remembered about them. Participants were led on a guided tour of an art museum and were directed to observe some objects and to photograph others. Results showed a photo-taking-impairment effect: If participants took a photo of each object as a whole, they remembered fewer objects and remembered fewer details about the objects and the objects' locations in the museum than if they instead only observed the objects and did not photograph them. However, when participants zoomed in to photograph a specific part of the object, their subsequent recognition and detail memory was not impaired, and, in fact, memory for features that were not zoomed in on was just as strong as memory for features that were zoomed in on. This finding highlights key differences between people's memory and the camera's \"memory\" and suggests that the additional attentional and cognitive processes engaged by this focused activity can eliminate the photo-taking-impairment effect."}
{"_id": "13eba30632154428725983fcd8343f3f1b3f0695", "title": "A Fast and Accurate Dependency Parser using Neural Networks", "text": "Almost all current dependency parsers classify based on millions of sparse indicator features. Not only do these features generalize poorly, but the cost of feature computation restricts parsing speed significantly. In this work, we propose a novel way of learning a neural network classifier for use in a greedy, transition-based dependency parser. Because this classifier learns and uses just a small number of dense features, it can work very fast, while achieving an about 2% improvement in unlabeled and labeled attachment scores on both English and Chinese datasets. Concretely, our parser is able to parse more than 1000 sentences per second at 92.2% unlabeled attachment score on the English Penn Treebank."}
{"_id": "14c146d457bbd201f3a117ee9c848300d341e5d0", "title": "The CoNLL 2008 Shared Task on Joint Parsing of Syntactic and Semantic Dependencies", "text": "The Conference on Computational Natural Language Learning is accompanied every year by a shared task whose purpose is to promote natural language processing applications and evaluate them in a standard setting. In 2008 the shared task was dedicated to the joint parsing of syntactic and semantic dependencies. This shared task not only unifies the shared tasks of the previous four years under a unique dependency-based formalism, but also extends them significantly: this year\u2019s syntactic dependencies include more information such as named-entity boundaries; the semantic dependencies model roles of both verbal and nominal predicates. In this paper, we define the shared task and describe how the data sets were created. Furthermore, we report and analyze the results and describe the approaches of the participating systems."}
{"_id": "1ac3a1c91c982666ad63d082ffd3646b09817908", "title": "The Stanford Typed Dependencies Representation", "text": "This paper examines the Stanford typed dependencies representation, which was designed to provide a straightforward description of grammatical relations for any user who could benefit from automatic text understanding. For such purposes, we argue that dependency schemes must follow a simple design and provide semantically contentful information, as well as offer an automatic procedure to extract the relations. We consider the underlying design principles of the Stanford scheme from this perspective, and compare it to the GR and PARC representations. Finally, we address the question of the suitability of the Stanford scheme for parser evaluation."}
{"_id": "2b5aadb2586707433beb21121677289d196e6144", "title": "brat: a Web-based Tool for NLP-Assisted Text Annotation", "text": "We introduce the brat rapid annotation tool (BRAT), an intuitive web-based tool for text annotation supported by Natural Language Processing (NLP) technology. BRAT has been developed for rich structured annotation for a variety of NLP tasks and aims to support manual curation efforts and increase annotator productivity using NLP techniques. We discuss several case studies of real-world annotation projects using pre-release versions of BRAT and present an evaluation of annotation assisted by semantic class disambiguation on a multicategory entity mention annotation task, showing a 15% decrease in total annotation time. BRAT is available under an opensource license from: http://brat.nlplab.org"}
{"_id": "4672aea3dfa891e6d5752b9b81f773f8ce15993a", "title": "Measuring Similarity between Semantic Business Process Models", "text": "A business process may be modeled in different ways by different modelers even when utilizing the same modeling language. An appropriate method for solving ambiguity issues in process models caused by the use of synonyms, homonyms or different abstraction levels for process element names is the use of ontologybased descriptions of process models. So-called semantic business process models promise to support business process interoperability and interconnectivity. But, for (semi-) automatic process interoperability and interconnectivity two problems need to be solved. How can similar terms for process element names be automatically discovered and how can semantic business process composition be facilitated. In this paper we will present solutions for these problems based upon an OWL DL-based description of Petri nets."}
{"_id": "31d3d17a7de5013e5e810e257a707db5e0e64cc1", "title": "Development of knee power assist using backdrivable electro-hydrostatic actuator", "text": "Backdrivability is a keyword with rising importance, not only in humanoid robots, but also to wearable robots. Power assist robots, namely exoskeletons are expected to provide mobility and independence to elderly and people with disability. In such robots, without backdrivability, error between human and robot motion can be painful; sometimes dangerous. In this research, we apply a type of hydraulic actuator, specially designed to realize backdrivability, to exoskeletal robot to improve backdrivability. We present the basic methodology to apply such hydraulic actuator to wearable robots. We developed prototype of knee joint power assist exoskeleton and performed fundamental tests to verify the design method validity."}
{"_id": "c5ee2621e5a0692677890df9a10963293ab14fc2", "title": "Feature Engineering for Knowledge Base Construction", "text": "Knowledge base construction (KBC) is the process of populating a knowledge base, i.e., a relational database together with inference rules, with information extracted from documents and structured sources. KBC blurs the distinction between two traditional database problems, information extraction and information integration. For the last several years, our group has been building knowledge bases with scientific collaborators. Using our approach, we have built knowledge bases that have comparable and sometimes better quality than those constructed by human volunteers. In contrast to these knowledge bases, which took experts a decade or more human years to construct, many of our projects are constructed by a single graduate student. Our approach to KBC is based on joint probabilistic inference and learning, but we do not see inference as either a panacea or a magic bullet: inference is a tool that allows us to be systematic in how we construct, debug, and improve the quality of such systems. In addition, inference allows us to construct these systems in a more loosely coupled way than traditional approaches. To support this idea, we have built the DeepDive system, which has the design goal of letting the user \u201cthink about features\u2014 not algorithms.\u201d We think of DeepDive as declarative in that one specifies what they want but not how to get it. We describe our approach with a focus on feature engineering, which we argue is an understudied problem relative to its importance to end-to-end quality."}
{"_id": "c22f9e2f3cc1c2296f7edb4cf780c6503e244a49", "title": "A precise ranking method for outlier detection", "text": ""}
{"_id": "eea31cf3deea85d456423470da215ba50c97146a", "title": "Crawler Detection: A Bayesian Approach", "text": "In this paper, we introduce a probabilistic modeling approach for addressing the problem of Web robot detection from Web-server access logs. More specifically, we construct a Bayesian network that classifies automatically access-log sessions as being crawler- or human-induced, by combining various pieces of evidence proven to characterize crawler and human behavior. Our approach uses machine learning techniques to determine the parameters of the probabilistic model. We apply our method to real Web-server logs and obtain results that demonstrate the robustness and effectiveness of probabilistic reasoning for crawler detection"}
{"_id": "b222eddf83d5ff8e772e8fcb8eb9c290cad1a056", "title": "Analytical Design Methodology for Litz-Wired High-Frequency Power Transformers", "text": "In the last quarter of a century, high-frequency (HF) transformer design has been one of the major concerns to power electronics designers in order to increase converter power densities and efficiencies. Conventional design methodologies are based on iterative processes and rules of thumb founded more on expertise than on theoretical developments. This paper presents an analytical design methodology for litz-wired HF power transformers that provides a deep insight into the transformer design problem making it a powerful tool for converter designers. The most suitable models for the calculation of core and winding losses and the transformer thermal resistance are first selected and then validated with a 5-kW 50-kHz commercial transformer for a photovoltaic application. Based on these models, the design methodology is finally proposed, reducing the design issue to directly solve a five-variable nonlinear optimization problem. The methodology is illustrated with a detailed design in terms of magnetic material, core geometry, and primary and secondary litz-wire sizing. The optimal design achieves a 46.5% power density increase and a higher efficiency of 99.70% when compared with the commercial one."}
{"_id": "87ba68678a1ae983cee474e4bfdd27257e45ca3d", "title": "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position", "text": "A neural network model for a mechanism of visual pattern recognition is proposed in this paper. The network is self-organized by \u201clearning without a teacher\u201d, and acquires an ability to recognize stimulus patterns based on the geometrical similarity (Gestalt) of their shapes without affected by their positions. This network is given a nickname \u201cneocognitron\u201d. After completion of self-organization, the network has a structure similar to the hierarchy model of the visual nervous system proposed by Hubel and Wiesel. The network consits of an input layer (photoreceptor array) followed by a cascade connection of a number of modular structures, each of which is composed of two layers of cells connected in a cascade. The first layer of each module consists of \u201cS-cells\u201d, which show characteristics similar to simple cells or lower order hypercomplex cells, and the second layer consists of \u201cC-cells\u201d similar to complex cells or higher order hypercomplex cells. The afferent synapses to each S-cell have plasticity and are modifiable. The network has an ability of unsupervised learning: We do not need any \u201cteacher\u201d during the process of self-organization, and it is only needed to present a set of stimulus patterns repeatedly to the input layer of the network. The network has been simulated on a digital computer. After repetitive presentation of a set of stimulus patterns, each stimulus pattern has become to elicit an output only from one of the C-cell of the last layer, and conversely, this C-cell has become selectively responsive only to that stimulus pattern. That is, none of the C-cells of the last layer responds to more than one stimulus pattern. The response of the C-cells of the last layer is not affected by the pattern's position at all. Neither is it affected by a small change in shape nor in size of the stimulus pattern."}
{"_id": "0dbf801def1dd27800f3e0ab9cbf4d36c38a1250", "title": "Photoshopping the selfie: Self photo editing and photo investment are associated with body dissatisfaction in adolescent girls.", "text": "OBJECTIVE\nSocial media engagement by adolescent girls is high. Despite its appeal, there are potential negative consequences for body dissatisfaction and disordered eating from social media use. This study aimed to examine, in a cross-sectional design, the relationship between social media use in general, and social media activities related to taking \"selfies\" and sharing specifically, with overvaluation of shape and weight, body dissatisfaction, and dietary restraint.\n\n\nMETHOD\nParticipants were 101 grade seven girls (M(age) \u2009=\u200913.1, SD\u2009=\u20090.3), who completed self-report questionnaires of social media use and body-related and eating concerns measures.\n\n\nRESULTS\nResults showed that girls who regularly shared self-images on social media, relative to those who did not, reported significantly higher overvaluation of shape and weight, body dissatisfaction, dietary restraint, and internalization of the thin ideal. In addition, among girls who shared photos of themselves on social media, higher engagement in manipulation of and investment in these photos, but not higher media exposure, were associated with greater body-related and eating concerns, including after accounting for media use and internalization of the thin ideal.\n\n\nDISCUSSION\nAlthough cross-sectional, these findings suggest the importance of social media activities for body-related and eating concerns as well as potential avenues for targeted social-media-based intervention."}
{"_id": "b4f608ec910e9d8949c883257cda0ee0028b7fbb", "title": "Improving OCR Accuracy on Early Printed Books by combining Pretraining, Voting, and Active Learning", "text": "We combine three methods which significantly improve the OCR accuracy of OCR models trained on early printed books: (1) The pretraining method utilizes the information stored in already existing models trained on a variety of typesets (mixed models) instead of starting the training from scratch. (2) Performing cross fold training on a single set of ground truth data (line images and their transcriptions) with a single OCR engine (OCRopus) produces a committee whose members then vote for the best outcome by also taking the top-N alternatives and their intrinsic confidence values into account. (3) Following the principle of maximal disagreement we select additional training lines which the voters disagree most on, expecting them to offer the highest information gain for a subsequent training (active learning). Evaluations on six early printed books yielded the following results: On average the combination of pretraining and voting improved the character accuracy by 46% when training five folds starting from the same mixed model. This number rose to 53% when using different models for pretraining, underlining the importance of diverse voters. Incorporating active learning improved the obtained results by another 16% on average (evaluated on three of the six books). Overall, the proposed methods lead to an average error rate of 2.5% when training on only 60 lines. Using a substantial ground truth pool of 1,000 lines brought the error rate down even further to less than 1% on average."}
{"_id": "85206fe2439bd23e6e452870b2c9da7f233b6956", "title": "Inflammation and atherosclerosis.", "text": "Atherosclerosis, formerly considered a bland lipid storage disease, actually involves an ongoing inflammatory response. Recent advances in basic science have established a fundamental role for inflammation in mediating all stages of this disease from initiation through progression and, ultimately, the thrombotic complications of atherosclerosis. These new findings provide important links between risk factors and the mechanisms of atherogenesis. Clinical studies have shown that this emerging biology of inflammation in atherosclerosis applies directly to human patients. Elevation in markers of inflammation predicts outcomes of patients with acute coronary syndromes, independently of myocardial damage. In addition, low-grade chronic inflammation, as indicated by levels of the inflammatory marker C-reactive protein, prospectively defines risk of atherosclerotic complications, thus adding to prognostic information provided by traditional risk factors. Moreover, certain treatments that reduce coronary risk also limit inflammation. In the case of lipid lowering with statins, this anti-inflammatory effect does not appear to correlate with reduction in low-density lipoprotein levels. These new insights into inflammation in atherosclerosis not only increase our understanding of this disease, but also have practical clinical applications in risk stratification and targeting of therapy for this scourge of growing worldwide importance."}
{"_id": "cf8c0e56c22836be4c80440e0def07b79c107c6d", "title": "A Perspective Analysis of Traffic Accident using Data Mining Techniques", "text": "Data Mining is taking out of hidden patterns from huge database. It is commonly used in a marketing, surveillance, fraud detection and scientific discovery. In data mining, machine learning is mainly focused as research which is automatically learnt to recognize complex patterns and make intelligent decisions based on data. Nowadays traffic accidents are the major causes of death and injuries in this world. Roadway patterns are useful in the development of"}
{"_id": "39c715c182cb74dd3ad29e549218f45a4edfbcba", "title": "Sleep viewed as a state of adaptive inactivity", "text": "Sleep is often viewed as a vulnerable state that is incompatible with behaviours that nourish and propagate species. This has led to the hypothesis that sleep has survived because it fulfills some universal, but as yet unknown, vital function. I propose that sleep is best understood as a variant of dormant states seen throughout the plant and animal kingdoms and that it is itself highly adaptive because it optimizes the timing and duration of behaviour. Current evidence indicates that ecological variables are the main determinants of sleep duration and intensity across species."}
{"_id": "3d2868ed26c83fba3385b903750e4637179b9c5f", "title": "Exploring Customer Reviews for Music Genre Classification and Evolutionary Studies", "text": "In this paper, we explore a large multimodal dataset of about 65k albums constructed from a combination of Amazon customer reviews, MusicBrainz metadata and AcousticBrainz audio descriptors. Review texts are further enriched with named entity disambiguation along with polarity information derived from an aspect-based sentiment analysis framework. This dataset constitutes the cornerstone of two main contributions: First, we perform experiments on music genre classification, exploring a variety of feature types, including semantic, sentimental and acoustic features. These experiments show that modeling semantic information contributes to outperforming strong bag-of-words baselines. Second, we provide a diachronic study of the criticism of music genres via a quantitative analysis of the polarity associated to musical aspects over time. Our analysis hints at a potential correlation between key cultural and geopolitical events and the language and evolving sentiments found in music reviews."}
{"_id": "91e6696dc746580177550571486bcec3fcd73d1b", "title": "A Flexible Low-Power 130 nm CMOS Read-Out Circuit With Tunable Sensitivity for Commercial Robotic Resistive Pressure Sensors", "text": "This paper presents a flexible and low-power ReadOut Circuit (ROC) with tunable sensitivity, designed to interface a wide range of commercial resistive pressure sensors for robotic applications. The ROC provides contact detection, monitoring small capacitance variations at low pressure (<;100 mbar), and encodes pressure measurement on 8 bit, evaluating resistive variations. Two all-digital circuits implement the conversion of the input resistance and capacitance-to-frequency, exploiting an on-chip ring oscillator as timing reference. A 130 nm RFCMOS prototype (with an active area of 428 \u00d7 159 \u03bcm2) has a power consumption of 27.2 \u03bcW, for VDD 1 V. Using digital control inputs, the ROC allows a wide range tuning of measurement sensitivity (6.7-46.4 mbar/LSB) and adjustable acquisition time (226.6-461.7 \u03bcs and 648-890 \u03bcs, for contact detection and pressure evaluation, respectively). The read-out time of ~1 ms is compatible with human response after touch."}
{"_id": "e6fdef12057818aeeab41015eb885c42d799ba3a", "title": "Improving search engines by query clustering", "text": "search engine queries whose aim is to identify groups of queries used to search for similar information on the Web. The framework is based on a novel term vector model of queries that integrates user selections and the content of selected documents extracted from the logs of a search engine. The query representation obtained allows us to treat query clustering similarly to standard document clustering. We study the application of the clustering framework to two problems: relevance ranking boosting and query recommendation. Finally, we evaluate with experiments the effectiveness of our approach."}
{"_id": "e63e6a8b2d8a3f94ef6d830a0240fc6aad28eb22", "title": "Which phoneme-to-viseme maps best improve visual-only computer lip-reading?", "text": "A critical assumption of all current visual speech recognition systems is that there are visual speech units called visemes which can be mapped to units of acoustic speech, the phonemes. Despite there being a number of published maps it is infrequent to see the effectiveness of these tested, particularly on visual-only lip-reading (many works use audio-visual speech). Here we examine 120 mappings and consider if any are stable across talkers. We show a method for devising maps based on phoneme confusions from an automated lip-reading system, and we present new mappings that show improvements for individual talkers."}
{"_id": "7c43d01dd667723787392fb299b826a596882425", "title": "Design and Analysis of Stacked Power Amplifier in Series-Input and Series-Output Configuration", "text": "The stacked-device power-combining technique is a proven method to increase the output power and load impedance of a power amplifier (PA) simultaneously. The series-input configuration is physically realizable for multicell stacked device configuration in monolithic circuits. The series-input and series-output stack configuration is rigorously analyzed and proven to increase both the input impedance and output impedance simultaneously, easing the matching circuit designs in high PAs. The effects of asymmetry of the input feed and amplifier cells due to distributed effects and process variation on the performance of the stack amplifier are discussed. A four-cell HBT amplifier operating at 5-6 GHz is demonstrated to validate the circuit concept."}
{"_id": "2bddc10b0bf0f2f17c57ecd9285d9e76bcb56c74", "title": "Semi-Supervised AUC Optimization Without Guessing Labels of Unlabeled Data", "text": "Semi-supervised learning, which aims to construct learners that automatically exploit the large amount of unlabeled data in addition to the limited labeled data, has been widely applied in many real-world applications. AUC is a well-known performance measure for a learner, and directly optimizing AUC may result in a better prediction performance. Thus, semi-supervised AUC optimization has drawn much attention. Existing semi-supervised AUC optimization methods exploit unlabeled data by explicitly or implicitly estimating the possible labels of the unlabeled data based on various distributional assumptions. However, these assumptions may be violated in many real-world applications, and estimating labels based on the violated assumption may lead to poor performance. In this paper, we argue that, in semi-supervised AUC optimization, it is unnecessary to guess the possible labels of the unlabeled data or prior probability based on any distributional assumptions. We analytically show that the AUC risk can be estimated unbiasedly by simply treating the unlabeled data as both positive and negative. Based on this finding, two semi-supervised AUC optimization methods named SAMULT and SAMPURA are proposed. Experimental results indicate that the proposed methods outperform the existing methods. Introduction In many real-world applications, collecting a large amount of unlabeled data is relatively easy, while obtaining the labels for the collected data is rather expensive since much human effort and expertise is required for labeling. Semisupervised learning (Chapelle, Schlkopf, and Zien 2006; Zhu et al. 2009), aiming to construct learners that automatically exploit the large amount of unlabeled data in addition to the limited labeled data in the purpose of improving the learning performance, has drawn significant attention. Many semi-supervised learning methods have been proposed. To effectively exploit the unlabeled data, almost all of these methods elaborate to link the labeled data and the unlabeled data based on certain distributional assumption (Chapelle, Schlkopf, and Zien 2006), such as the cluster assumption, the manifold assumption, etc., and construct the learner by explicitly or implicitly estimating the labels of unlabeled instances. Copyright c \u00a9 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. AUC (area under ROC curve) (Hanley and McNeil 1982), which measures the probability of a randomly drawn positive instance being ranked before a randomly drawn negative instance, is a widely-used performance measure for a learner, especially when the data distribution exhibits certain imbalance. Directly optimizing the AUC during the learning procedure usually lead to a better prediction performance. Many studies are elaborated to show that AUC can be effectively and efficiently optimized (Herschtal and Raskutti 2004; Gao and Zhou 2015; Gao et al. 2013; Ying, Wen, and Lyu 2016). Some efforts have been devoted to AUC optimization in semi-supervised settings (Amini, Truong, and Goutte 2008; Fujino and Ueda 2016). These methods rely on the aforementioned distributional assumptions. 
However, those assumptions may be violated in real-world applications and the learner may be biased when explicitly or implicitly estimating the labels based on the violated assumptions, resulting in poor performance or even performance degradation (Cozman and Cohen 2002). Sakai, Niu, and Sugiyama (2017) proposed an unbiased semisupervised AUC optimization method based on positiveunlabeled learning. However, such a method requires an accurate estimation of the prior probability to reweight the unlabeled data, which is usually difficult especially when the number of labeled data is extremely small. In this paper, we argue that in semi-supervised AUC optimization problem, it is unnecessary to guess the possible labels of the unlabeled data or prior probability based on any distributional assumptions. We theoretically show that the AUC risk can be estimated unbiasedly by treating the unlabeled data as both positive and negative without any distributional assumptions. Such a theoretical finding enables us to address semisupervised AUC optimization problem by simply treating the unlabeled data as both positive and negative data, without designing specific strategies to identify the possible labels of the unlabeled data. Based on this theoretical finding, we propose two novel semi-supervised AUC optimization methods: SAMULT, a straightforward unbiased method by regarding the unlabeled data as both positive and negative data, and SAMPURA, an ensemble method by random partitioning the unlabeled data into pseudo-positive and pseudonegative sets to train base classifiers. The experimental reThe Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18)"}
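The central idea stated in the abstract above, scoring ranking risk with each unlabeled point acting as both a pseudo-negative (against labeled positives) and a pseudo-positive (against labeled negatives), can be sketched roughly as follows. The squared pairwise surrogate loss and the equal weighting of the two terms are simplifying assumptions for illustration, not the paper's exact estimator.

```python
# A rough numpy sketch of treating unlabeled data as both positive and
# negative in a pairwise AUC-style risk. Loss and weighting are simplified.
import numpy as np

def pairwise_risk(pos_scores, neg_scores):
    """Mean squared pairwise loss for ranking pos above neg by margin 1."""
    diff = pos_scores[:, None] - neg_scores[None, :]
    return np.mean((1.0 - diff) ** 2)

def semi_sup_auc_risk(s_pos, s_neg, s_unl):
    """Unlabeled scores serve as negatives against labeled positives and as
    positives against labeled negatives; average the two pairwise risks."""
    return 0.5 * (pairwise_risk(s_pos, s_unl) + pairwise_risk(s_unl, s_neg))

rng = np.random.default_rng(0)
s_pos = rng.normal(1.0, 1.0, 20)    # scores of labeled positives
s_neg = rng.normal(-1.0, 1.0, 20)   # scores of labeled negatives
s_unl = rng.normal(0.0, 1.5, 200)   # scores of unlabeled points
print(semi_sup_auc_risk(s_pos, s_neg, s_unl))
```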
{"_id": "3dd9793bc7b1f97115c45e90c8874f786262f466", "title": "Offloading in Heterogeneous Networks: Modeling, Analysis, and Design Insights", "text": "Pushing data traffic from cellular to WiFi is an example of inter radio access technology (RAT) offloading. While this clearly alleviates congestion on the over-loaded cellular network, the ultimate potential of such offloading and its effect on overall system performance is not well understood. To address this, we develop a general and tractable model that consists of M different RATs, each deploying up to K different tiers of access points (APs), where each tier differs in transmit power, path loss exponent, deployment density and bandwidth. Each class of APs is modeled as an independent Poisson point process (PPP), with mobile user locations modeled as another independent PPP, all channels further consisting of i.i.d. Rayleigh fading. The distribution of rate over the entire network is then derived for a weighted association strategy, where such weights can be tuned to optimize a particular objective. We show that the optimum fraction of traffic offloaded to maximize SINR coverage is not in general the same as the one that maximizes rate coverage, defined as the fraction of users achieving a given rate."}
{"_id": "e7b4ea66dff3966fc9da581f32cb69132a7bbd99", "title": "Throughput Optimization, Spectrum Allocation, and Access Control in Two-Tier Femtocell Networks", "text": "The deployment of femtocells in a macrocell network is an economical and effective way to increase network capacity and coverage. Nevertheless, such deployment is challenging due to the presence of inter-tier and intra-tier interference, and the ad hoc operation of femtocells. Motivated by the flexible subchannel allocation capability of OFDMA, we investigate the effect of spectrum allocation in two-tier networks, where the macrocells employ closed access policy and the femtocells can operate in either open or closed access. By introducing a tractable model, we derive the success probability for each tier under different spectrum allocation and femtocell access policies. In particular, we consider joint subchannel allocation, in which the whole spectrum is shared by both tiers, as well as disjoint subchannel allocation, whereby disjoint sets of subchannels are assigned to both tiers. We formulate the throughput maximization problem subject to quality of service constraints in terms of success probabilities and per-tier minimum rates, and provide insights into the optimal spectrum allocation. Our results indicate that with closed access femtocells, the optimized joint and disjoint subchannel allocations provide the highest throughput among all schemes in sparse and dense femtocell networks, respectively. With open access femtocells, the optimized joint subchannel allocation provides the highest possible throughput for all femtocell densities."}
{"_id": "09168f7259e0df1484115bfd44ce4fdcafdc15f7", "title": "Power control in two-tier femtocell networks", "text": "In a two tier cellular network - comprised of a central macrocell underlaid with shorter range femtocell hotspots - cross-tier interference limits overall capacity with universal frequency reuse. To quantify near-far effects with universal frequency reuse, this paper derives a fundamental relation providing the largest feasible cellular Signal-to-Interference-Plus-Noise Ratio (SINR), given any set of feasible femtocell SINRs. We provide a link budget analysis which enables simple and accurate performance insights in a two-tier network. A distributed utility- based SINR adaptation at femtocells is proposed in order to alleviate cross-tier interference at the macrocell from cochannel femtocells. The Foschini-Miljanic (FM) algorithm is a special case of the adaptation. Each femtocell maximizes their individual utility consisting of a SINR based reward less an incurred cost (interference to the macrocell). Numerical results show greater than 30% improvement in mean femtocell SINRs relative to FM. In the event that cross-tier interference prevents a cellular user from obtaining its SINR target, an algorithm is proposed that reduces transmission powers of the strongest femtocell interferers. The algorithm ensures that a cellular user achieves its SINR target even with 100 femtocells/cell-site (with typical cellular parameters) and requires a worst case SINR reduction of only 16% at femtocells. These results motivate design of power control schemes requiring minimal network overhead in two-tier networks with shared spectrum."}
{"_id": "babb0a5119837dbad49f220c2f3845fa582d47df", "title": "Component Model of Addiction Treatment: A Pragmatic Transdiagnostic Treatment Model of Behavioral and Substance Addictions", "text": "Behavioral addictions such as gambling, video games, sex, and shopping share many clinical features with substance use addictions including etiology, course, and neurobiology. Yet, the treatment of behavioral and substance use addictions tends to be separated. However, we argue that a more effective and efficient treatment approach is to conceptualize behavioral and substance use addictions as different expressions of a common underlying disorder and, in treatment, to address the underlying mechanisms common to both. To this end, the article presents a developing transdiagnostic treatment model of addictions that targets underlying similarities between behavioral and substance use addictions, called the component model of addiction treatment (CMAT). The CMAT is transdiagnostic in that it can be used in the treatment of both behavioral and substance use addictions. It is pragmatic in that it targets component vulnerabilities, which are enduring, yet malleable, individual psychological, cognitive, and neurobiological characteristics that are common to all addictive disorders and have been demonstrated to be modifiable. A working model of CMAT is presented, including proposed component vulnerabilities: lack of motivation, urgency, maladaptive expectancies, deficits in self-control, deficits in social support, and compulsivity, as well as their potential intervention possibilities. Future directions and potential implications of the CMAT are discussed."}
{"_id": "fa3041fbd9a3d37c61a2a19b7bf2a9812c824cc2", "title": "Collaborative data mining for clinical trial analytics", "text": "This paper proposes a collaborative data mining technique to provide multi-level analysis from clinical trials data. Clinical trials for clinical research and drug development generate large amount of data. Due to dispersed nature of clinical trial data, it remains a challenge to harness this data for analytics. In this paper, we propose a novel method using master data management (MDM) for analyzing clinical trial data, scattered across multiple databases, through collaborative data mining. Our aim is to validate findings by collaboratively utilizing multiple data mining techniques such as classification, clustering, and association rule mining. We complement our results with the help of interactive visualizations. The paper also demonstrates use of data stratification for identifying disparities between various subgroups of clinical trial participants. Overall, our approach aims at extracting useful knowledge from clinical trial data in order to improve design of clinical trials by gaining confidence in the outcomes using multi-level analysis. We provide experimental results in drug abuse clinical trial data."}
{"_id": "19d09e51b67ec623bab78ee3d84473f58c31c3fb", "title": "Human-Like Rewards to Train a Reinforcement Learning Controller for Planar Arm Movement", "text": "High-level spinal cord injury (SCI) in humans causes paralysis below the neck. Functional electrical stimulation (FES) technology applies electrical current to nerves and muscles to restore movement, and controllers for upper extremity FES neuroprostheses calculate stimulation patterns to produce desired arm movement. However, currently available FES controllers have yet to restore natural movements. Reinforcement learning (RL) is a reward-driven control technique; it can employ user-generated rewards, and human preferences can be used in training. To test this concept with FES, we conducted simulation experiments using computer-generated \u201cpseudo-human\u201d rewards. Rewards with varying properties were used with an actor-critic RL controller for a planar two-degree-of-freedom biomechanical human arm model performing reaching movements. Results demonstrate that sparse, delayed pseudo-human rewards permit stable and effective RL controller learning. The frequency of reward is proportional to learning success, and human-scale sparse rewards permit greater learning than exclusively automated rewards. Diversity of training task sets did not affect learning. Long-term stability of trained controllers was observed. Using human-generated rewards to train RL controllers for upper-extremity FES systems may be useful. Our findings represent progress toward achieving human-machine teaming in control of upper-extremity FES systems for more natural arm movements based on human user preferences and RL algorithm learning capabilities."}
{"_id": "b226c311fb481cc909735f0b190ad0758c2fb0d4", "title": "Fixed-Point Factorized Networks", "text": "In recent years, Deep Neural Networks (DNN) based methods have achieved remarkable performance in a wide range of tasks and have been among the most powerful and widely used techniques in computer vision. However, DNN-based methods are both computational-intensive and resource-consuming, which hinders the application of these methods on embedded systems like smart phones. To alleviate this problem, we introduce a novel Fixed-point Factorized Networks (FFN) for pretrained models to reduce the computational complexity as well as the storage requirement of networks. The resulting networks have only weights of -1, 0 and 1, which significantly eliminates the most resource-consuming multiply-accumulate operations (MACs). Extensive experiments on large-scale ImageNet classification task show the proposed FFN only requires one-thousandth of multiply operations with comparable accuracy."}
{"_id": "97014530aff6891442c456ebae22b95f5ffe8436", "title": "The Method of 2/3 Sampled Sub-Pixel Rendering for AMOLED Display", "text": "Due to the capability of the AMOLED (Active matrix organic light emitting diode) display can provide superior color saturation, flexible utilization, slimmer screen profile and more impressive image quality, it has received increasing demand for application in electronic devices. The organic light emitting diode (OLED) is current driven device so that the characteristics of thin field transistor (TFT) influence panel uniformity. Multiple TFT of pixel circuitry has been designed to compensate the Vth (threshold voltage) variations of array process and other driving issues. The fine metal mask (FMM) used in the OLED evaporation is an accurate mask having physical margin limitations in the high PPI (pixel per inch) AMOLED application. This paper details the utilization of a 2/3 sampled sub-pixel rendering (SPR) method to liberate more 50% layout area than the RGB stripe configuration. The proposed algorithm utilizes edge detection and one-side color diffusion to improve display performance as well as RGB stripe compatibility. A 4.65\u201d HD (720 \u00d71280) AMOLED panel with a 317 PPI retina display was implemented to realize the development of our SPR technology."}
{"_id": "f9999a1190bdc0a7c6779d3abbed1b070d52026f", "title": "Bare-Bone Particle Swarm Optimisation for Simultaneously Discretising and Selecting Features for High-Dimensional Classification", "text": "Feature selection and discretisation have shown their effectiveness for data preprocessing especially for high-dimensional data with many irrelevant features. While feature selection selects only relevant features, feature discretisation finds a discrete representation of data that contains enough information but ignoring some minor fluctuation. These techniques are usually applied in two stages, discretisation and then selection since many feature selection methods work only on discrete features. Most commonly used discretisation methods are univariate in which each feature is discretised independently; therefore, the feature selection stage may not work efficiently since information showing feature interaction is not considered in the discretisation process. In this study, we propose a new method called PSO-DFS using bare-bone particle swarm optimisation (BBPSO) for discretisation and feature selection in a single stage. The results on ten high-dimensional datasets show that PSO-DFS obtains a substantial dimensionality reduction for all datasets. The classification performance is significantly improved or at least maintained on nine out of ten datasets by using the transformed \u201csmall\u201d data obtained from PSO-DFS. Compared to applying the two-stage approach which uses PSO for feature selection on the discretised data, PSO-DFS achieves better performance on six datasets, and similar performance on three datasets with a much smaller number of features selected."}
{"_id": "5309b8f4723d44de2fa51cd2c15bffebf541ef57", "title": "A novel high-gain quasi-Yagi antenna with a parabolic reflector", "text": "The simplicity and intuitive design of traditional planar printed quasi-Yagi antennas has led to its widespread popularity for its good directivity. In this paper, a novel quasi-Yagi antenna with a single director and a concave parabolic reflector, operating in S-band, is proposed. The impedance characteristic and radiation characteristic are simulated with CST-Microwave Studio, and the antenna is fabricated and measured. The measured results indicate that the antenna which can operate at 2.28-2.63GHz can achieve an average gain of 6.5dBi within the operating frequency range, especially a highest gain of 7.5dBi at 2.5GHz. The proposed antenna can be widely used in WLAN/TD-LTE/BD1 and so on."}
{"_id": "00bde36ffb01d5f15ad7489671116338ef560df8", "title": "SRP-PHAT methods of locating simultaneous multiple talkers using a frame of microphone array data", "text": "Two new methods for locating multiple sound sources using a single segment of data from a large-aperture microphone array are presented. Both methods employ the proven-robust steered response power using the phase transform (SRP-PHAT) as a functional. To cluster the data points into highly probable regions containing global peaks, the first method fits a Gaussian mixture model (GMM), whereas the second one sequentially finds the points with highest SRP-PHAT values that most likely represent different clusters. Then the low-cost global optimization method, stochastic region contraction (SRC), is applied to each cluster to find the global peaks. We test the two methods using real data from five simultaneous talkers in a room with high noise and reverberation. Results are presented and discussed."}
{"_id": "d9f6b24838820325494624115545726a39ad533d", "title": "Mean and Variance of the Sampling Distribution of Particle Swarm Optimizers During Stagnation", "text": "Several theoretical analyses of the dynamics of particle swarms have been offered in the literature over the last decade. Virtually all rely on substantial simplifications, often including the assumption that the particles are deterministic. This has prevented the exact characterization of the sampling distribution of the particle swarm optimizer (PSO). In this paper we introduce a novel method that allows us to exactly determine all the characteristics of a PSO sampling distribution and explain how it changes over any number of generations, in the presence stochasticity. The only assumption we make is stagnation, i.e., we study the sampling distribution produced by particles in search for a better personal best. We apply the analysis to the PSO with inertia weight, but the analysis is also valid for the PSO with constriction and other forms of PSO."}
{"_id": "2633f2d2e096f5962fa2589fc2f1b1db645ec28e", "title": "Hybrid decision tree", "text": "In this paper, a hybrid learning approach named HDT is proposed. HDT simulates human reasoning by using symbolic learning to do qualitative analysis and using neural learning to do subsequent quantitative analysis. It generates the trunk of a binary hybrid decision tree according to the binary information gain ratio criterion in an instance space defined by only original unordered attributes. If unordered attributes cannot further distinguish training examples falling into a leaf node whose diversity is beyond the diversity-threshold, then the node is marked as a dummy node. After all those dummy nodes are marked, a specific feedforward neural network named FANNC that is trained in an instance space defined by only original ordered attributes is exploited to accomplish the learning task. Moreover, this paper distinguishes three kinds of incremental learning tasks. Two incremental learning procedures designed for example-incremental learning with different storage requirements are provided, which enables HDT to deal gracefully with data sets where new data are frequently appended. Also a hypothesis-driven constructive induction mechanism is provided, which enables HDT to generate compact concept descriptions."}
{"_id": "d3e8579caf258737033f76e404abad96ec6fc9ab", "title": "Artificial Bee Colony algorithm for optimization of truss structures", "text": "The main goal of the structural optimization is to minimize the weight of structures while satisfying all design requirements imposed by design codes. In this paper, the Artificial Bee Colony algorithm with an adaptive penalty function approach (ABC-AP) is proposed to minimize the weight of truss structures. The ABC algorithm is swarm intelligence based optimization technique inspired by the intelligent foraging behavior of honeybees. Five truss examples with fixed-geometry and up to 200 elements were studied to verify that the ABC algorithm is an effective optimization algorithm in the creation of an optimal design for truss structures. The results of the ABC-AP compared with results of other optimization algorithms from the literature show that this algorithm is a powerful search and optimization technique for structural design. \u00a9 2010 Elsevier B.V. All rights reserved."}
{"_id": "4f7e07d71a54b257261aefecef390f7f66b79f61", "title": "Kqueue - A Generic and Scalable Event Notification Facility", "text": "ApplicationsrunningonaUNIX platformneedto benotified whensomeactivity occurson asocketor otherdescriptor, andthis is traditionallydonewith theselect()or poll() systemcalls. However, it hasbeenshown that the performanceof thesecallsdoesnotscalewell with anincreasingnumberof descriptors.Theseinterfacesarealso limited in therespecthatthey areunableto handleother potentiallyinterestingactivitiesthatanapplicationmight be interestedin, thesemight includesignals,file system changes,and AIO completions. This paperpresentsa genericevent delivery mechanism,which allows an application to selectfrom a wide rangeof event sources, andbenotifiedof activity on thesesourcesin a scalable andefficient manner . The mechanismmay be extended to coverfutureeventsourceswithoutchangingtheapplicationinterface."}
{"_id": "34e906436dce1cc67c5bc1b95a906416eee802dd", "title": "Multiscale Rotation-Invariant Convolutional Neural Networks for Lung Texture Classification", "text": "We propose a new multiscale rotation-invariant convolutional neural network (MRCNN) model for classifying various lung tissue types on high-resolution computed tomography. MRCNN employs Gabor-local binary pattern that introduces a good property in image analysis\u2014invariance to image scales and rotations. In addition, we offer an approach to deal with the problems caused by imbalanced number of samples between different classes in most of the existing works, accomplished by changing the overlapping size between the adjacent patches. Experimental results on a public interstitial lung disease database show a superior performance of the proposed method to state of the art."}
{"_id": "6a71306536498fa6ae535156b59c89a316e9b828", "title": "Learning force control policies for compliant manipulation", "text": "Developing robots capable of fine manipulation skills is of major importance in order to build truly assistive robots. These robots need to be compliant in their actuation and control in order to operate safely in human environments. Manipulation tasks imply complex contact interactions with the external world, and involve reasoning about the forces and torques to be applied. Planning under contact conditions is usually impractical due to computational complexity, and a lack of precise dynamics models of the environment. We present an approach to acquiring manipulation skills on compliant robots through reinforcement learning. The initial position control policy for manipulation is initialized through kinesthetic demonstration. We augment this policy with a force/torque profile to be controlled in combination with the position trajectories. We use the Policy Improvement with Path Integrals (PI2) algorithm to learn these force/torque profiles by optimizing a cost function that measures task success. We demonstrate our approach on the Barrett WAM robot arm equipped with a 6-DOF force/torque sensor on two different manipulation tasks: opening a door with a lever door handle, and picking up a pen off the table. We show that the learnt force control policies allow successful, robust execution of the tasks."}
{"_id": "1f9fb489892227e9b8db62abf8db83b1694243ed", "title": "A generalized back-door criterion", "text": "We generalize Pearl\u2019s back-door criterion for directed acyclic graphs (DAGs) to more general types of graphs that describe Markov equivalence classes of DAGs and/or allow for arbitrarily many hidden variables. We also give easily checkable necessary and sufficient graphical criteria for the existence of a set of variables that satisfies our generalized back-door criterion, when considering a single intervention and a single outcome variable. Moreover, if such a set exists, we provide an explicit set that fulfills the criterion. We illustrate the results in several examples. R-code is available in the R-package pcalg."}
{"_id": "0fa88943665de1176b0fc6de4ed7469b40cdb08c", "title": "Learning to Draw Samples: With Application to Amortized MLE for Generative Adversarial Learning", "text": "We propose a simple algorithm to train stochastic neural networks to draw samples from given target distributions for probabilistic inference. Our method is based on iteratively adjusting the neural network parameters so that the output changes along a Stein variational gradient (Liu & Wang, 2016) that maximumly decreases the KL divergence with the target distribution. Our method works for any target distribution specified by their unnormalized density function, and can train any black-box architectures that are differentiable in terms of the parameters we want to adapt. As an application of our method, we propose an amortized MLE algorithm for training deep energy model, where a neural sampler is adaptively trained to approximate the likelihood function. Our method mimics an adversarial game between the deep energy model and the neural sampler, and obtains realisticlooking images competitive with the state-of-the-art results."}
{"_id": "15f9e214384b65559ed5c3fa0833ad4acb8f254e", "title": "Modeling and Signal Processing of Acoustic Gunshot Recordings", "text": "Audio recordings of gunshots can provide information about the gun location with respect to the microphone(s), the speed and trajectory of the projectile, and in some cases the type of firearm and ammunition. Recordings obtained under carefully controlled conditions can be well-modeled by geometrical acoustics. Special acoustic processing systems for real time gunshot detection and localization are used by the military and law enforcement agencies for sniper detection. Forensic analysis of audio recordings is also used to provide evidence in criminal and civil cases. This paper reviews the distinctive features and limitations of acoustic gunshot analysis using DSP techniques"}
{"_id": "3757e42710488825d298ca3ea55ab112e5103c2d", "title": "Toward an Ecological Theory of Adaptation and Aging", "text": "1.3 The environmental docility hypothesis suggests that environmental stimuli (\"press\", in Murray's terms) have a greater demand quality as the competence of the individual decreases. The dynamics of ecological transactions are considered as a function of personal competence, strength of environmental press, the dual characteristics of the individual's response (affective quality and adaptiveness of behavior), adaptation level, and the optimization function. Behavioral homeostasis is maintained by the individual as both respondent and initiator in interaction with his environment. Hypotheses are suggested to account for striving vs. relaxation and for changes in the individual's level of personal competence. Four transaction-al types discussed are environmental engineering, rehabilitation and therapy, individual growth, and active change of the environment. Recent work in the psychology of stimulation (1) has led to theoretical advances in the area of social ecology. We propose an elaboration in this area that is middle-range, in the sense of attempting to account for a limited aspect of human behavior. This contribution to the theory of man-environment relationships deals with the aspects of human responses that can be viewed in evaluative terms, that is, behavior that can be rated on the continuum of adaptiveness, and inner states that can be rated on the continuum of positive to negative. This is, perhaps, a limited view of the human response repertory, but it stems from the traditional concern of the psychologist with mental health and mental illness. Similarly, our view of environment for this purpose is limited to the \"demand quality\" of the environment, an abstraction that represents only one of many ways of dimensionalizing the environment. We shall use our knowledge from the area of gerontology to provide content for the theoretical structure, but suggest that the constructs are more generally applicable to any area involving the understanding of mental or social pathology [see Lawton and Nahemow, (2) for a more complete discussion]. One way to begin is to look at the old ecological equation B = f (P, E) to acknowledge its veracity and familiarity, but linger on a few of its implications: 1. All behavior is transactional, that is, not explainable solely on the basis of knowledge about either the person behaving or the environment in which it occurs."}
{"_id": "6dfd3788021de100726a5a0ff804cb993cefc08c", "title": "Adaptive fuzzy PID temperature control system based on OPC and modbus/TCP protocol", "text": "In this paper, an adaptive fuzzy PID controller is designed to improve the control performance of the resistance furnace, and then is applied to a remote temperature control system based on modbus/TCP industrial Ethernet via OPC communication. The adaptive PID controller is developed by the fuzzy toolbox in Matlab. The data acquisition and devices driving are realized by PCAuto, a SCADA configuration software which is widely used in industry. The data exchange between Matlab and PCAuto is realized by OPC technology. The control process and results verify that the performance of resistance furnace temperature control system is satisfactory and the performance of OPC communication is very efficient and reliable."}
{"_id": "2f04ba35cc3666cbebb0a4a61655324daee175c5", "title": "Spotlight SAR data focusing based on a two-step processing approach", "text": "We present a new spotlight SAR data-focusing algorithm based on a two-step processing strategy that combines the advantages of two commonly adopted processing approaches: the efficiency of SPECAN algorithms and the precision of stripmap focusing techniques. The first step of the proposed algorithm implements a linear and space-invariant azimuth filtering that is carried out via a deramping-based technique representing a simplified version of the SPECAN approach. This operation allows us to perform a bulk azimuth raw data compression and to achieve a pixel spacing smaller than (or equal to) the expected azimuth resolution of the fully focused image. Thus, the azimuth spectral folding phenomenon, typically affecting the spotlight data, is overcome, and the space-variant characteristics of the stripmap system transfer function are preserved. Accordingly, the residual and precise focusing of the SAR data is achieved by applying a conventional stripmap processing procedure requiring a minor modification and implemented in the frequency domain. The extension of the proposed technique to the case of high bandwidth transmitted chirp signals is also discussed. Experiments carried out on real and simulated data confirm the validity of the presented approach, which is mainly focused on spaceborne systems."}
{"_id": "8df39cb8eb8951d434f5b33628dd52313302ccab", "title": "Cooperative Control Strategy of Energy Storage System and Microsources for Stabilizing the Microgrid during Islanded Operation", "text": "In this paper, the cooperative control strategy of microsources and the energy storage system (ESS) during islanded operation is presented and evaluated by a simulation and experiment. The ESS handles the frequency and the voltage as a primary control. And then, the secondary control in microgrid management system returns the current power output of the ESS into zero. The test results show that the proposed cooperative control strategy can regulate the frequency and the voltage, and the secondary control action can contribute to improve the control capability."}
{"_id": "181006ddd152e70c0009199135b408d3cc2e2fa4", "title": "Modeling, Analysis and Testing of Autonomous Operation of an Inverter-Based Microgrid", "text": "The analysis of the small-signal stability of conventional power systems is well established, but for inverter based microgrids there is a need to establish how circuit and control features give rise to particular oscillatory modes and which of these have poor damping. This paper develops the modeling and analysis of autonomous operation of inverter-based microgrids. Each sub-module is modeled in state-space form and all are combined together on a common reference frame. The model captures the detail of the control loops of the inverter but not the switching action. Some inverter modes are found at relatively high frequency and so a full dynamic model of the network (rather than an algebraic impedance model) is used. The complete model is linearized around an operating point and the resulting system matrix is used to derive the eigenvalues. The eigenvalues (termed \"modes\") indicate the frequency and damping of oscillatory components in the transient response. A sensitivity analysis is also presented which helps identifying the origin of each of the modes and identify possible feedback signals for design of controllers to improve the system stability. With experience it is possible to simplify the model (reduce the order) if particular modes are not of interest as is the case with synchronous machine models. Experimental results from a microgrid of three 10-kW inverters are used to verify the results obtained from the model"}
{"_id": "35a9d3f5f19a7573b459efa6b1133d7c9f681ff0", "title": "Making microgrids work", "text": "Distributed energy resources including distributed generation and distributed storage are sources of energy located near local loads and can provide a variety of benefits including improved reliability if they are properly operated in the electrical distribution system. Microgrids are systems that have at least one distributed energy resource and associated loads and can form intentional islands in the electrical distribution systems. This paper gives an overview of the microgrid operation. Microgrid testing experiences from different counties was also provided."}
{"_id": "98f5bb02b2aab2d748572bb7d5c4f6e2afbe6c3d", "title": "Smart home research", "text": "This paper is a survey for smart home research, from definition to current research status. First we give a definition to smart home, and then describe the smart home elements, typical research projects, smart home networks research status, smart home appliances and challenges at last."}
{"_id": "798b4637984c9f8f18ceba332ff483d64830c2ac", "title": "Machine Learning under Attack: Vulnerability Exploitation and Security Measures", "text": "Learning to discriminate between secure and hostile patterns is a crucial problem for species to survive in nature. Mimetism and camouflage are well-known examples of evolving weapons and defenses in the arms race between predators and preys. It is thus clear that all of the information acquired by our senses should not be considered necessarily secure or reliable. In machine learning and pattern recognition systems, however, we have started investigating these issues only recently. This phenomenon has been especially observed in the context of adversarial settings like malware detection and spam filtering, in which data can be purposely manipulated by humans to undermine the outcome of an automatic analysis. As current pattern recognition methods are not natively designed to deal with the intrinsic, adversarial nature of these problems, they exhibit specific vulnerabilities that an attacker may exploit either to mislead learning or to evade detection. Identifying these vulnerabilities and analyzing the impact of the corresponding attacks on learning algorithms has thus been one of the main open issues in the novel research field of adversarial machine learning, along with the design of more secure learning algorithms.\n In the first part of this talk, I introduce a general framework that encompasses and unifies previous work in the field, allowing one to systematically evaluate classifier security against different, potential attacks. As an example of application of this framework, in the second part of the talk, I discuss evasion attacks, where malicious samples are manipulated at test time to evade detection. I then show how carefully-designed poisoning attacks can mislead some learning algorithms by manipulating only a small fraction of their training data. In addition, I discuss some defense mechanisms against both attacks in the context of real-world applications, including biometric identity recognition and computer security. Finally, I briefly discuss our ongoing work on attacks against clustering algorithms, and sketch some promising future research directions."}
{"_id": "50edb17bb311757206a60801a25dd56ca2b342dd", "title": "Profile hidden Markov models", "text": "The recent literature on profile hidden Markov model (profile HMM) methods and software is reviewed. Profile HMMs turn a multiple sequence alignment into a position-specific scoring system suitable for searching databases for remotely homologous sequences. Profile HMM analyses complement standard pairwise comparison methods for large-scale sequence analysis. Several software implementations and two large libraries of profile HMMs of common protein domains are available. HMM methods performed comparably to threading methods in the CASP2 structure prediction exercise."}
{"_id": "ad1087d3a14ba512740fbf2ab4195edae361ae26", "title": "Scaling Agile Development in Mechatronic Organizations - A Comparative Case Study", "text": "Agile software development principles enable companies to successfully and quickly deliver software by meeting their customers' expectations while focusing on high quality. Many companies working with pure software systems have adopted these principles, but implementing them in companies dealing with non-pure software products is challenging. We identified a set of goals and practices to support large-scale agile development in companies that develop software-intense mechatronic systems. We used an inductive approach based on empirical data collected during a longitudinal study with six companies in the Nordic region. The data collection took place over two years through focus group workshops, individual on-site interviews, and complementary surveys. The primary benefit of large-scale agile development is improved quality, enabled by practices that support regular or continuous integration between teams delivering software, hardware, and mechanics. In this regard, the most beneficial integration cycle for deliveries is every four weeks, while continuous integration on a daily basis would favor software teams, other disciplines does not seem to benefit from faster integration cycles. We identified 108 goals and development practices supporting agile principles among the companies, most of them concerned with integration, therefrom, 26 agile practices are unique to the mechatronics domain to support adopting agile beyond pure software development teams. 16 of these practices are considered as key enablers, confirmed by our control cases."}
{"_id": "44e1725621ec51b8bc37f68e89151295570bdd0b", "title": "Induced pluripotent stem cell-based modeling of neurodegenerative diseases: a focus on autophagy", "text": "The advent of cell reprogramming has enabled the generation of induced pluripotent stem cells (iPSCs) from patient skin fibroblasts or blood cells and their subsequent differentiation into tissue-specific cells, including neurons and glia. This approach can be used to recapitulate disease-specific phenotypes in classical cell culture paradigms and thus represents an invaluable asset for disease modeling and drug validation in the framework of personalized medicine. The autophagy pathway is a ubiquitous eukaryotic degradation and recycling system, which relies on lysosomal degradation of unwanted and potentially cytotoxic components. The relevance of autophagy in the pathogenesis of neurodegenerative diseases is underlined by the observation that disease-linked genetic variants of susceptibility factors frequently result in dysregulation of autophagic-lysosomal pathways. In particular, disrupted autophagy is implied in the accumulation of potentially neurotoxic products such as protein aggregates and their precursors and defective turnover of dysfunctional mitochondria. Here, we review the current state of iPSC-based assessment of autophagic dysfunction in the context of neurodegenerative disease modeling. The collected data show that iPSC technology is capable to reveal even subtle alterations in subcellular homeostatic processes, which form the molecular basis for disease manifestation."}
{"_id": "406cebc2b0195cda4e2d175bf9e1871bb8237b69", "title": "Semantic Granularity in Ontology-Driven Geographic Information Systems", "text": "The integration of information of different kinds, such as spatial and alphanumeric at different levels of detail, is a challenge. While a solution is not reached, it is widely recognized that the need to integrate information is so pressing that it does not matter if detail is lost, as long as integration is achieved. This paper shows the potential for information retrieval at different levels of granularity inside the framework of information systems based on ontologies. Ontologies are theories that use a specific vocabulary to describe entities, classes, properties and functions related to a certain view of the world. The use of an ontology, translated into an active information system component, leads to ontology-driven information systems and, in the specific case of GIS, leads to what we call ontology-driven geographic information systems."}
{"_id": "253e4bd4fcdab919a8967e21721e9893ee6f14e2", "title": "Bidirectional Current-Fed Flyback-Push-Pull DC-DC Converter", "text": "This paper proposes a new dc-dc static power converter, designated Bidirectional Current-Fed Flyback-Push-Pull DC-DC Converter. Circuit operation, analysis, simulation, design example and experimental results are included in the paper. What distinguishes the proposed converter from the previous circuits is the existence of input and output inductors, which provides a significant reduction of both the power source and the load side current ripple. The proposed converter is suitable for the renewable electric power systems, such as those having fuel cells as the DC-source power supply. It is also a good candidate for electric vehicle power systems, where bidirectional power flow related with battery charge and discharge is necessary."}
{"_id": "1943e83066c5eb7f25545e939b42882572734dc7", "title": "Semantic Linking of Learning Object Repositories to DBpedia", "text": "Large-sized repositories of learning objects (LOs) are difficult to create and also to maintain. In this paper we propose a way to reduce this drawback by improving the classification mechanisms of the LO repositories. Specifically, we present a solution to automate the LO classification of the Universia repository, a collection of more than 15 million of LOs represented according to the IEEE LOM standard. Although a small part of these LOs is correctly classified, most are unclassified and therefore searching and accessing information is difficult. Our solution makes use of the categories provided by DBpedia, a linked data repository, to automatically improve its classification through a graph-based filtering algorithm, which selects the most suitable categories for describing a LO. Once selected, these categories will classify the LO, linking its classification metadata with a set of DBpedia categories."}
{"_id": "0e6d98fe0e365c6b841161d62fff890d9552b318", "title": "Digital color imaging", "text": "This paper surveys current technology and research in the area of digital color imaging. In order to establish the background and lay down terminology, fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology. Present-day color recording and reproduction systems are reviewed along with the common mathematical models used for representing these devices. Algorithms for processing color images for display and communication are surveyed, and a forecast of research trends is attempted. An extensive bibliography is provided."}
{"_id": "e59212f34799665d3f68d49309f4dd10c84b8e71", "title": "Finding an emotional face in a crowd: emotional and perceptual stimulus factors influence visual search efficiency.", "text": "In this article, we examine how emotional and perceptual stimulus factors influence visual search efficiency. In an initial task, we run a visual search task, using a large number of target/distractor emotion combinations. In two subsequent tasks, we then assess measures of perceptual (rated and computational distances) and emotional (rated valence, arousal and potency) stimulus properties. In a series of regression analyses, we then explore the degree to which target salience (the size of target/distractor dissimilarities) on these emotional and perceptual measures predict the outcome on search efficiency measures (response times and accuracy) from the visual search task. The results show that both emotional and perceptual stimulus salience contribute to visual search efficiency. The results show that among the emotional measures, salience on arousal measures was more influential than valence salience. The importance of the arousal factor may be a contributing factor to contradictory history of results within this field."}
{"_id": "3d2126066c6244e05ec4d2631262252f4369d9c1", "title": "Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding", "text": "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \u201cdeep compression\u201d, a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35\u00d7 to 49\u00d7 without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9\u00d7 to 13\u00d7; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35\u00d7, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49\u00d7 from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3\u00d7 to 4\u00d7 layerwise speedup and 3\u00d7 to 7\u00d7 better energy efficiency."}
{"_id": "201c6dae375bb0a8e96b214cb5c71098ffaefa28", "title": "Swarm intelligence and robotics", "text": "Purpose \u2013 The aim of this paper is to provide a review of recent developments in the application of swarm intelligence to robotics. Design/methodology/approach \u2013 This paper initially considers swarm intelligence and then discusses its application to robotics through reference to a number of recent research programmes. Findings \u2013 Based on the principles of swarm intelligence, which is derived from the swarming behaviour of biological entities, swarm robotics research is widespread but still at an early stage. Much aims to gain an understanding of biological swarming and apply it to autonomous, mobile multi-robot systems. European activities are particularly strong and several large, collaborative projects are underway. Research in the USA has a military bias and much is funded by defence agencies. Originality/value \u2013 The paper provides an up-to-date insight into swarm robot research and development."}
{"_id": "0319f7be5926072d0586dd01a7a5df57b485e524", "title": "A Span-Extraction Dataset for Chinese Machine Reading Comprehension", "text": "Machine Reading Comprehension (MRC) has become enormously popular recently and has attracted a lot of attention. However, the existing reading comprehension datasets are mostly in English. In this paper, we introduce a SpanExtraction dataset for Chinese Machine Reading Comprehension to add language diversities in this area. The dataset is composed by near 20,000 real questions annotated by human on Wikipedia paragraphs. We also annotated a challenge set which contains the questions that need comprehensive understanding and multi-sentence inference throughout the context. With the release of the dataset, we hosted the Second Evaluation Workshop on Chinese Machine Reading Comprehension (CMRC 2018). We hope the release of the dataset could further accelerate the machine reading comprehension research in Chinese language.1"}
{"_id": "48c4fd399291372d774c668fc4dcfd7cff0a382f", "title": "VNF-FG design and VNF placement for 5G mobile networks", "text": "Network function virtualization (NFV) is envisioned as one of the critical technologies in 5th-Generation (5G) mobile networks. This paper investigates the virtual network function forwarding graph (VNF-FG) design and virtual network function (VNF) placement for 5G mobile networks. We first propose a two-step method composed of flow designing and flow combining for generating VNF-FGs according to network service requests. For mapping VNFs in the generated VNF-FG to physical resources, we then modify the hybrid NFV environment with introducing more types of physical nodes and mapping modes for the sake of completeness and practicality, and formulate the VNF placement optimization problem for achieving lower bandwidth consumption and lower maximum link utilization simultaneously. To resolve this problem, four genetic algorithms are proposed on the basis of the frameworks of two existing algorithms (multiple objective genetic algorithm and improved non-dominated sorting genetic algorithm). Simulation results show that Greedy-NSGA-II achieves the best performance among our four algorithms. Compared with three non-genetic algorithms (random, backtracking mapping and service chains deployment with affiliation-aware), Greedy-NSGA-II reduces 97.04%, 87.76% and 88.42% of the average total bandwidth consumption, respectively, and achieves only 13.81%, 25.04% and 25.41% of the average maximum link utilization, respectively. Moreover, using our VNF-FG design method and Greedy-NSGA-II together can also reduce the total bandwidth consumption remarkably."}
{"_id": "58e4f10114e3fc61f05ff6dfcb35a98a61bb12a8", "title": "A Survey on Semi-Supervised Learning Techniques", "text": "Semi-supervised learning is a learning standard which deals with the study of how computers and natural systems such as human beings acquire knowledge in the presence of both labeled and unlabeled data. Semi\u2013supervised learning based methods are preferred when compared to the supervised and unsupervised learning because of the improved performance shown by the semi-supervised approaches in the presence of large volumes of data. Labels are very hard to attain while unlabeled data are surplus, therefore semi-supervised learning is a noble indication to shrink human labor and improve accuracy. There has been a large spectrum of ideas on semi-supervised learning. In this paper we bring out some of the key approaches for semi-supervised learning."}
{"_id": "14a0df2faff846c0e71cc6af199f687237c505f2", "title": "Explicit reference modeling methodology in parametric CAD system", "text": "Today parametric associative CAD systems must help companies to create more efficient virtual development processes. While dealing with complex parts (e.g. the number of surfaces of the solid) no CAD modeling methodology is existing. Based on the analysis of industrial designers\u2019 practices as well as student practices on CAD, we identified key factors that lead to better performance. Our objective in this article is to propose a practical method for complex parts modeling in parametric CAD system. An illustration of the performances and the results obtained by this method are presented comparing the traditional method with the proposed one while using an academic case and then an industrial case. 2013 Elsevier B.V. All rights reserved."}
{"_id": "ace1bb979baf3eea22ec8bac57630e702ce31bf0", "title": "Memory-Size-Assisted Buffer Overflow Detection", "text": "-Since the first buffer overflow problem occurred, many detection techniques have been presented. These techniques are effective in detecting most attacks, but some attacks still remain undetected. In order to be more effective, a memory-size-assisted buffer overflow detection(MBOD) is presented. The key feature of buffer overflow is that the size of the source memory is bigger than the size of the destination memory when memory copying operation occurs. By capturing memory copying operation and comparing memory size at run time, MBOD detects buffer overflow. MBOD collects the information of memory size in both dynamic way and static way. An implementation shows that the technique is feasible."}
{"_id": "9f76bb90b4ecca71f0e69084c00c03ead41462f0", "title": "Discussions on control loop design in average current mode control [PWM DC/DC power convertors]", "text": "Large control output ripple in ACMC (average current mode control) PWM DC/DC converters exposes challenges in accurate modeling and design. In this paper, superposition models are utilized to perform control design analysis for ACMC PWM DC/DC converters. The ACMC DC/DC PWM converter system is converted to a cascade control structure and a step-by-step control design algorithm is then proposed. The control design algorithms differ from existing, conventional algorithms because they carefully compensate for newly discovered feed forward influence in the model."}
{"_id": "264a5794a4bec9f742d37162ddfc86f399c7862c", "title": "A Reinforcement Learning Approach to job-shop Scheduling", "text": "We apply reinforce merit learning methods to learn domain-specific heuristics for job shop scheduling A repair-based scheduler starts with a critical-path schedule and incrementally repairs constraint violations with the goal of finding a short conflict-free schedule The tem poral difference algorithm is applied to tram a neural network to learn a heuristic evaluation function over s ta tes This evaluation function is used by a one-step lookahead search procedure to find good solutions to new scheduling problems We evaluate this approach on synthetic problems and on problems from a NASA space shuttle pay load processing task The evaluation function is trained on problems involving a small number of jobs and then tested on larger proble ms The TD sclied uler performs better than the best known existing algorithm for this task\u2014Zwehen s iterative repair method based on simulated annealing The results suggest that reinforcement l $8{\\times} 8$ antenna array is 16.3% from 35.4 to 41.7 GHz for $|S_{11}|\\leqslant -10$ dB, and the measured peak gain is 26.7 dBi at 40 GHz. The measured radiation efficiency of the antenna array is 83.2% at 40 GHz. The proposed antenna array which is suitable for 5G applications at 37 and 39 GHz bands shows stable radiation patterns, high gain, and broad bandwidth."}
{"_id": "d24c81f1e2904ba6ec3f341161865ef93247855b", "title": "Scalable and Secure Logistic Regression via Homomorphic Encryption", "text": "Logistic regression is a powerful machine learning tool to classify data. When dealing with sensitive data such as private or medical information, cares are necessary. In this paper, we propose a secure system for protecting the training data in logistic regression via homomorphic encryption. Perhaps surprisingly, despite the non-polynomial tasks of training in logistic regression, we show that only additively homomorphic encryption is needed to build our system. Our system is secure and scalable with the dataset size."}
{"_id": "783b0c1b297dba90e586f435f54410f2a2082ef3", "title": "Cinnamon for diabetes mellitus.", "text": "BACKGROUND\nDiabetes mellitus is a chronic metabolic disorder that is associated with an increased risk of cardiovascular disease, retinopathy, nephropathy, neuropathy, sexual dysfunction and periodontal disease. Improvements in glycaemic control may help to reduce the risk of these complications. Several animal studies show that cinnamon may be effective in improving glycaemic control. While these effects have been explored in humans also, findings from these studies have not yet been systematically reviewed.\n\n\nOBJECTIVES\nTo evaluate the effects of cinnamon in patients with diabetes mellitus.\n\n\nSEARCH METHODS\nPertinent randomised controlled trials were identified through AARP Ageline, AMED, AMI, BioMed Central gateway, CAM on PubMed, CINAHL, Dissertations Abstracts International, EMBASE, Health Source Nursing/Academic edition, International Pharmaceutical Abstracts, MEDLINE, Natural medicines comprehensive database, The Cochrane Library and TRIP database. Clinical trial registers and the reference lists of included trials were searched also (all up to January 2012). Content experts and manufacturers of cinnamon extracts were also contacted.\n\n\nSELECTION CRITERIA\nAll randomised controlled trials comparing the effects of orally administered monopreparations of cinnamon (Cinnamomum spp.) to placebo, active medication or no treatment in persons with either type 1 or type 2 diabetes mellitus.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently selected trials, assessed risk of bias and trial quality, and extracted data. We contacted study authors for missing information.\n\n\nMAIN RESULTS\nTen prospective, parallel-group design, randomised controlled trials, involving a total of 577 participants with type 1 and type 2 diabetes mellitus, were identified. Risk of bias was high or unclear in all but two trials, which were assessed as having moderate risk of bias. Risk of bias in some domains was high in 50% of trials. Oral monopreparations of cinnamon (predominantly Cinnamomum cassia) were administered at a mean dose of 2 g daily, for a period ranging from 4 to 16 weeks. The effect of cinnamon on fasting blood glucose level was inconclusive. No statistically significant difference in glycosylated haemoglobin A1c (HbA1c), serum insulin or postprandial glucose was found between cinnamon and control groups. There were insufficient data to pool results for insulin sensitivity. No trials reported health-related quality of life, morbidity, mortality or costs. Adverse reactions to oral cinnamon were infrequent and generally mild in nature.\n\n\nAUTHORS' CONCLUSIONS\nThere is insufficient evidence to support the use of cinnamon for type 1 or type 2 diabetes mellitus. Further trials, which address the issues of allocation concealment and blinding, are now required. The inclusion of other important endpoints, such as health-related quality of life, diabetes complications and costs, is also needed."}
{"_id": "bde40c638fd03b685114d8854de2349969f2e091", "title": "A Context-aware Collaborative Filtering Approach for Urban Black Holes Detection", "text": "Urban black hole, as a traffic anomaly, has caused lots of catastrophic accidents in many big cities nowadays. Traditional methods only depend on the single source data (e.g., taxi trajectories) to design blackhole detection algorithm from one point of view, which is rather incomplete to describe the regional crowd flow. In this paper, we model the urban black holes in each region of New York City (NYC) at different time intervals with a 3-dimensional tensor by fusing cross-domain data sources. Supplementing the missing entries of the tensor through a context-aware tensor decomposition approach, we leverage the knowledge from geographical features, 311 complaint features and human mobility features to recover the blackhole situation throughout NYC. The information can facilitate local residents and officials' decision making. We evaluate our model with five datasets related to NYC, diagnosing the urban black holes that cannot be identified (or earlier than those detected) by a single dataset. Experimental results demonstrate the advantages beyond four baseline methods."}
{"_id": "28443c658054b019e146356948256fb3ee149c4d", "title": "Optimum auto exposure based on high-dynamic-range histogram", "text": "This paper introduces a new approach for optimum auto exposure (AE) based on the high-dynamic-range histogram (HDH) of a scene. The exposure control is optimal in terms of recorded information of the scene. The advantages over AE based on mean values of images will be shown as well as a proof of concept implementation and results. Futhermore possible extensions to this approach are discussed. The consideration of the HDH for auto exposure enables new possibilities for auto exposure control for mutliple-slope cameras. An AE method for multiple-slope cameras is proposed which controls exposure as well as the transition curve parameters in order to record a maximum of information from the scene. It considers controllable dynamic range as well as corse quantization in bright image regions. Key\u2013Words: Optimum Auto Exposure, High Dynamic Range (HDR), multiple-slope camera January 26, 2007"}
{"_id": "09f83b83fd3b0114c2c902212101152c2d2d1259", "title": "3HAN: A Deep Neural Network for Fake News Detection", "text": ""}
{"_id": "041e4ff180c804576a968f7269df656f4ff1d279", "title": "A Middleware for Fast and Flexible Sensor Network Deployment", "text": "A key problem in current sensor network technology is the heterogeneity of the available software and hardware platforms which makes deployment and application development a tedious and time consuming task. To minimize the unnecessary and repetitive implementation of identical functionalities for different platforms, we present our Global Sensor Networks (GSN) middleware which supports the flexible integration and discovery of sensor networks and sensor data, enables fast deployment and addition of new platforms, provides distributed querying, filtering, and combination of sensor data, and supports the dynamic adaption of the system configuration during operation. GSN's central concept is the virtual sensor abstraction which enables the user to declaratively specify XML-based deployment descriptors in combination with the possibility to integrate sensor network data through plain SQL queries over local and remote sensor data sources. In this demonstration, we specifically focus on the deployment aspects and allow users to dynamically reconfigure the running system, to add new sensor networks on the fly, and to monitor the effects of the changes via a graphical interface. The GSN implementation is available from http://globalsn.sourceforge.net/."}
{"_id": "31ae73e38bdfd5d901fabf7ec437d88e2f3104bc", "title": "Discovery of Online Shopping Patterns Across Websites", "text": "\u2022 In the online world, customers can easily navigate to different online stores to make purchases \u2022 Market basket analysis is often used to discover associations among products for brick-and-mortar stores, but rarely for online shops \u2022 Defined to online shopping patterns and developed 2 novel methods to perform market basket analysis across websites \u2022 The methods presented in this paper can be only not only to online shopping application, but to other dimensions as well. Uyanga, Do young suk"}
{"_id": "8acc12ccc3487093314ee1d20c51ce73210ba8eb", "title": "Towards Literate Artificial Intelligence", "text": "Speaker: Mrinmaya Sachan Thesis Committee: Eric P. Xing, Chair Jaime Carbonell Tom Mitchell Dan Roth (University of Pennsylvania) Thesis Proposal Jan. 22, 2018 1:00pm 8102 GHC Link to draft document: http://www.cs.cmu.edu/~mrinmays/thesis_proposal.pdf Standardized tests are often used to test students as they progress in the formal education system. These tests are widely available and measurable with clear evaluation procedures and metrics. Hence, these can serve as good tests for AI. We propose approaches for solving some of these tests. We broadly categorize these tests into two categories: open domain question answering tests such as reading comprehensions and elementary school science tests, and closed domain question answering tests such as intermediate or advanced math and science tests. We present alignment based approach with multi-task learning for the former. For closed domain tests, we propose a parsing to programs approach which can be seen as a natural language interface to expert systems. We also describe approaches for question generation based on instructional material in both open domain as well as closed domain settings. Finally, we show that we can improve both the question answering and question generation models by learning them jointly. This mechanism also allows us to leverage cheap unlabelled data for learning the two models. Our work can potentially be applied for the social good in the education domain. We perform studies on human subjects who found our approaches useful as assistive tools in education."}
{"_id": "9104c57122b03f29ea848ccddd87f8652e7305c8", "title": "Plasmodium vivax segmentation using modified fuzzy divergence", "text": "This paper aims at introducing a new approach to Plasmodium vivax (P. vivax) detection from Leishman stained thin blood film. This scheme follows retrospective study design protocol where patients were selected at random in the clinic. The scheme consists of two main stages - firstly artefacts reduction, and secondly fuzzy divergence based segmentation of P. vivax infected region(s) from erythrocytes. Here, malaria parasite segmentation is done using divergence based threshold selection. Fuzzy approach is chosen to minimize ambiguity inherent in the microscopic images. Divergence algorithm is derived from Cauchy membership function to overcome the drawbacks in comparison with other well known membership functions."}
{"_id": "2ad0ee93d029e790ebb50574f403a09854b65b7e", "title": "Acquiring linear subspaces for face recognition under variable lighting", "text": "Previous work has demonstrated that the image variation of many objects (human faces in particular) under variable lighting can be effectively modeled by low-dimensional linear spaces, even when there are multiple light sources and shadowing. Basis images spanning this space are usually obtained in one of three ways: a large set of images of the object under different lighting conditions is acquired, and principal component analysis (PCA) is used to estimate a subspace. Alternatively, synthetic images are rendered from a 3D model (perhaps reconstructed from images) under point sources and, again, PCA is used to estimate a subspace. Finally, images rendered from a 3D model under diffuse lighting based on spherical harmonics are directly used as basis images. In this paper, we show how to arrange physical lighting so that the acquired images of each object can be directly used as the basis vectors of a low-dimensional linear space and that this subspace is close to those acquired by the other methods. More specifically, there exist configurations of k point light source directions, with k typically ranging from 5 to 9, such that, by taking k images of an object under these single sources, the resulting subspace is an effective representation for recognition under a wide range of lighting conditions. Since the subspace is generated directly from real images, potentially complex and/or brittle intermediate steps such as 3D reconstruction can be completely avoided; nor is it necessary to acquire large numbers of training images or to physically construct complex diffuse (harmonic) light fields. We validate the use of subspaces constructed in this fashion within the context of face recognition."}
{"_id": "06150e6e69a379c27e1d0100fcd7660f073cbacf", "title": "Local Decorrelation For Improved Pedestrian Detection", "text": "Even with the advent of more sophisticated, data-hungry methods, boosted decision trees remain extraordinarily successful for fast rigid object detection, achieving top accuracy on numerous datasets. While effective, most boosted detectors use decision trees with orthogonal (single feature) splits, and the topology of the resulting decision boundary may not be well matched to the natural topology of the data. Given highly correlated data, decision trees with oblique (multiple feature) splits can be effective. Use of oblique splits, however, comes at considerable computational expense. Inspired by recent work on discriminative decorrelation of HOG features, we instead propose an efficient feature transform that removes correlations in local neighborhoods. The result is an overcomplete but locally decorrelated representation ideally suited for use with orthogonal decision trees. In fact, orthogonal trees with our locally decorrelated features outperform oblique trees trained over the original features at a fraction of the computational cost. The overall improvement in accuracy is dramatic: on the Caltech Pedestrian Dataset, we reduce false positives nearly tenfold over the previous state-of-the-art."}
{"_id": "0a6c36de8726b6feaab586046ddc1d1a008f44f9", "title": "Filtered channel features for pedestrian detection", "text": "This paper starts from the observation that multiple top performing pedestrian detectors can be modelled by using an intermediate layer filtering low-level features in combination with a boosted decision forest. Based on this observation we propose a unifying framework and experimentally explore different filter families. We report extensive results enabling a systematic analysis. Using filtered channel features we obtain top performance on the challenging Caltech and KITTI datasets, while using only HOG+LUV as low-level features. When adding optical flow features we further improve detection quality and report the best known results on the Caltech dataset, reaching 93% recall at 1 FPPI."}
{"_id": "f672d2abd0f08da0f9aafa1d496b411301d895ca", "title": "Crossing Nets: Dual Generative Models with a Shared Latent Space for Hand Pose Estimation", "text": "State-of-the-art methods for 3D hand pose estimation from depth images require large amounts of annotated training data. We propose to model the statistical relationships of 3D hand poses and corresponding depth images using two deep generative models with a shared latent space. By design, our architecture allows for learning from unlabeled image data in a semi-supervised manner. Assuming a one-to-one mapping between a pose and a depth map, any given point in the shared latent space can be projected into both a hand pose and a corresponding depth map. Regressing the hand pose can then be done by learning a discriminator to estimate the posterior of the latent pose given some depth map. To improve generalization and to better exploit unlabeled depth maps, we jointly train a generator and a discriminator. At each iteration, the generator is updated with the back-propagated gradient from the discriminator to synthesize realistic depth maps of the articulated hand, while the discriminator benefits from an augmented training set of synthesized and unlabeled samples. The proposed discriminator network architecture is highly efficient and runs at 90\u1e1ePS on the CPU with accuracies comparable or better than state-of-art on 3 publicly available benchmarks."}
{"_id": "e5cb4ce2ea172181ea37d3395f4b04100eeb8bb1", "title": "Evaluation of the Minstrel rate adaptation algorithm in IEEE 802.11g WLANs", "text": "Rate adaptation varies the transmission rate of a wireless sender to match the wireless channel conditions, in order to achieve the best possible performance. It is a key component of IEEE 802.11 wireless networks. Minstrel is a popular rate adaptation algorithm due to its efficiency and availability in commonly used wireless drivers. However, despite its popularity, little work has been done on evaluating the performance of Minstrel or comparing it to the performance of fixed rates. In this paper, we conduct an experimental study that compares the performance of Minstrel against fixed rates in an IEEE 802.11g testbed. The experiment results show that whilst Minstrel performs reasonably well in static wireless channel conditions, in some cases the algorithm has difficulty selecting the optimal data rate in the presence of dynamic channel conditions. In addition, Minstrel performs well when the channel condition improves from bad quality to good quality. However, Minstrel has trouble selecting the optimal rate when the channel condition deteriorates from good quality to bad quality."}
{"_id": "931c937d1a50f3eb7d3d81e471b9149c6bed9245", "title": "FEATURE ATTRIBUTION AS FEATURE SELECTION", "text": "Feature attribution methods identify \u201crelevant\u201d features as an explanation of a complex machine learning model. Several feature attribution methods have been proposed; however, only a few studies have attempted to define the \u201crelevance\u201d of each feature mathematically. In this study, we formalize the feature attribution problem as a feature selection problem. In our proposed formalization, there arise two possible definitions of relevance. We name the feature attribution problems based on these two relevances as Exclusive Feature Selection (EFS) and Inclusive Feature Selection (IFS). We show that several existing feature attribution methods can be interpreted as approximation algorithms for EFS and IFS. Moreover, through exhaustive experiments, we show that IFS is better suited as the formalization for the feature attribution problem than EFS."}
{"_id": "adff201360be57586bcb1578f06856079006ea29", "title": "Android Malware Characterization Using Metadata and Machine Learning Techniques", "text": "Android Malware has emerged as a consequence of the increasing popularity of smartphones and tablets. While most previous work focuses on inherent characteristics of Android apps to detect malware, this study analyses indirect features and meta-data to identify patterns in malware applications. Our experiments show that: (1) the permissions used by an application offer only moderate performance results; (2) other features publicly available at Android Markets are more relevant in detecting malware, such as the application developer and certificate issuer, and (3) compact and efficient classifiers can be constructed for the early detection of malware applications prior to code inspection or sandboxing."}
{"_id": "959b41cf42e4c2275965ce79d43df084c54fec30", "title": "Can machine learning explain human learning?", "text": "Learning Analytics (LA) has a major interest in exploring and understanding the learning process of humans and, for this purpose, benefits from both Cognitive Science, which studies how humans learn, and Machine Learning, which studies how algorithms learn from data. Usually, Machine Learning is exploited as a tool for analyzing data coming from experimental studies, but it has been recently applied to humans as if they were algorithms that learn from data. One example is the application of Rademacher Complexity, which measures the capacity of a learning machine, to human learning, which led to the formulation of Human Rademacher Complexity (HRC). In this line of research, we propose here a more powerful measure of complexity, the Human Algorithmic Stability (HAS), as a tool to better understand the learning process of humans. The experimental results from three different empirical studies, on more than 600 engineering students from the University of Genoa, showed that HAS (i) can be measured without the assumptions required by HRC, (ii) depends not only on the knowledge domain, as HRC, but also on the complexity of the problem, and (iii) can be exploited for better understanding of the human learning process. & 2016 Elsevier B.V. All rights reserved."}
{"_id": "7725926520ecd7558e0f0f28581b24a76204972e", "title": "Children learning with a social robot", "text": "We used a social robot as a teaching assistant in a class for children's collaborative learning. In the class, a group of 6th graders learned together using Lego Mindstorms. The class consisted of seven lessons with Robovie, a social robot, followed by one lesson to test their individual achievement. Robovie managed the class and explained how to use Lego Mindstorms. In addition to such basic management behaviors for the class, we prepared social behaviors for building relationships with the children and encouraging them. The result shows that the social behavior encouraged children to work more in the first two lessons, but did not affect them in later lessons. On the other hand, social behavior contributed to building relationships and attaining better social acceptance."}
{"_id": "696fdd1ba1b8520731b00cc3a45dfbb504a3d93f", "title": "Best basis-based wavelet packet entropy feature extraction and hierarchical EEG classification for epileptic detection", "text": "In this study, a hierarchical electroencephalogram (EEG) classification system for epileptic seizure detection is proposed. The system includes the following three stages: (i) original EEG signals representation by wavelet packet coefficients and feature extraction using the best basis-based wavelet packet entropy method, (ii) cross-validation (CV) method together with k-Nearest Neighbor (k-NN) classifier used in the training stage to hierarchical knowledge base (HKB) construction, and (iii) in the testing stage, computing classification accuracy and rejection rate using the top-ranked discriminative rules from the HKB. The data set is taken from a publicly available EEG database which aims to differentiate healthy subjects and subjects suffering from epilepsy diseases. Experimental results show the efficiency of our proposed system. The best classification accuracy is about 100% via 2-, 5-, and 10-fold cross-validation, which indicates the proposed method has potential in designing a new intelligent EEG-based assistance diagnosis system for early detection of the electroencephalographic changes. 2011 Elsevier Ltd. All rights reserved."}
{"_id": "d5a1286e9b19fd250b7fa9b5574d4f71945ab3b3", "title": "Accelerator-Aware Pruning for Convolutional Neural Networks", "text": "Convolutional neural networks have shown tremendous performance in computer vision tasks, but their excessive amount of weights and operations prevent them from being adopted in embedded environments. One of the solutions involves pruning, where some unimportant weights are forced to be zero. Many pruning schemes have been proposed, but have focused mainly on the number of pruned weights. The previous pruning schemes hardly considered ASIC or FPGA accelerator architectures. When the pruned networks are run on the accelerators, the lack of architecture consideration casues some inefficiency problems including internal buffer mis-alignment and load imbalance. This paper proposes a new pruning scheme that reflects accelerator architectures. In the proposed scheme, pruning is performed so that the same number of weights remain for each weight group corresponding to activations fetched simultaneously. In this way, the pruning scheme resolves the inefficiency problems. Even with the constraint, the proposed pruning scheme reached a pruning ratio similar to that of the previous unconstrained pruning schemes not only in AlexNet and VGG16 but also in the state-of-the-art very-deep networks like ResNet. Furthermore, the proposed scheme demonstrated a comparable pruning ratio in slimmed networks that were already pruned channel-wisely. In addition to improving the efficiency of previous sparse accelerators, it will be also shown that the proposed pruning scheme can be used to reduce the logic complexity of sparse accelerators."}
{"_id": "47e52ec494c9eb7dc8c1bddc482f2ee36c992b08", "title": "Hierarchical Attention Networks", "text": "We propose a novel attention network, which accurately attends to target objects of various scales and shapes in images through multiple stages. The proposed network enables multiple layers to estimate attention in a convolutional neural network (CNN). The hierarchical attention model gradually suppresses irrelevant regions in an input image using a progressive attentive process over multiple CNN layers. The attentive process in each layer determines whether to pass or suppress feature maps for use in the next convolution. We employ local contexts to estimate attention probability at each location since it is difficult to infer accurate attention by observing a feature vector from a single location only. The experiments on synthetic and real datasets show that the proposed attention network outperforms traditional attention methods in various attribute prediction tasks."}
{"_id": "21fd90733adbdd901920aca449907c6a3e8c5b83", "title": "A persistence landscapes toolbox for topological statistics", "text": "Topological data analysis provides a multiscale description of the geometry and topology of quantitative data. The persistence landscape is a topological summary that can be easily combined with tools from statistics and machine learning. We give efficient algorithms for calculating persistence landscapes, their averages, and distances between such averages. We discuss an implementation of these algorithms and some related procedures. These are intended to facilitate the combination of statistics and machine learning with topological data analysis. We present an experiment showing that the low-dimensional persistence landscapes of points sampled from spheres (and boxes) of varying dimensions differ."}
{"_id": "ef945a01e4e64adc0ed697c7d2ed2ac5076c666d", "title": "Anime Artist Analysis based on CNN", "text": "Artist recognition is not easy for human beings if the artworks are general and 1 share many similarities, but modern machine learning or deep learning models may 2 help people identify the artwork authors. In this project, we trained Convolutional 3 Neural Networks (CNN) identifying Japanese anime artists. Our dataset was 4 crawled from Pixiv, a well-known Japanese website containing lots of fine anime 5 artworks. Specifically, the dataset has 9 authors and each one has 200 paintings. 6 We trained different kinds of networks, from a very basic Conv2D net to more 7 advanced networks, in order to carry out the identification. It turned out that 8 traditional neural networks can in some way identify different authors, but the 9 accuracy could be improved to a better level by tuning hyperparameters or using 10 more advanced networks such as ResNet and VGGNet. 11"}
{"_id": "e905396dce34e495b32e40b93195deeba7096476", "title": "Wideband and Low-Profile H-Plane Ridged SIW Horn Antenna Mounted on a Large Conducting Plane", "text": "This communication presents a wide-band and low-profile H-plane horn antenna based on ridged substrate integrated waveguide (SIW) with a large conducting ground. The horn antenna is implemented in a single substrate with a thickness of 0.13 \u03bb0 at the center frequency. Despite its low profile, the new H-plane horn antenna achieves a very wide bandwidth by employing an arc-shaped copper taper printed on the extended dielectric slab and a three-step ridged SIW transition. The ridged SIW is critical for widening the operation bandwidth and lowering the characteristic impedance so that an excellent impedance matching from the coaxial probe to the narrow SIW can be obtained over a wide frequency range. Measured VSWR of the fabricated horn antenna is below 2.5 from 6.6 GHz to 18 GHz. The antenna also exhibits stable radiation beam over the same frequency range. It is observed that measured results agree well with simulated ones."}
{"_id": "79c029667ffd0629a95d51e295d9a1b3db4efa80", "title": "Simurg: An Extendable Multilingual Corpus for Abstractive Single Document Summarization", "text": "Abstractive single document summarization is considered as a challenging problem in the field of artificial intelligence and natural language processing. Meanwhile and specifically in the last two years, several deep learning summarization approaches were proposed that once again attracted the attention of researchers to this field.\n It is a well-known issue that deep learning approaches do not work well with small amounts of data. With some exceptions, this is, unfortunately, the case for most of the datasets available for the summarization task. Besides this problem, it should be considered that phonetic, morphological, semantic and syntactic features of the language are constantly changing over the time and unfortunately most of the summarization corpora are constructed from old resources. Another problem is the language of the corpora. Not only in the summarization field, but also in other fields of natural language processing, most of the corpora are only available in English. In addition to the above problems, license terms, and fees of the corpora are obstacles that prevent many academics and specifically non-academics from accessing these data.\n This work describes an open source framework to create an extendable multilingual corpus for abstractive single document summarization that addresses the above-mentioned problems. We describe a tool consisted of a scalable crawler and a centralized key-value store database to construct a corpus of an arbitrary size using a news aggregator service."}
{"_id": "ffb93c8d412af27b840c2d168246fc9bb70b2e8f", "title": "Detecting Curve Text in the Wild: New Dataset and New Solution", "text": "Scene text detection has been made great progress in recent years. The detection manners are evolving from axisaligned rectangle to rotated rectangle and further to quadrangle. However, current datasets contain very little curve text, which can be widely observed in scene images such as signboard, product name and so on. To raise the concerns of reading curve text in the wild, in this paper, we construct a curve text dataset named CTW1500, which includes over 10k text annotations in 1,500 images (1000 for training and 500 for testing). Based on this dataset, we pioneering propose a polygon based curve text detector (CTD) which can directly detect curve text without empirical combination. Moreover, by seamlessly integrating the recurrent transverse and longitudinal offset connection (TLOC), the proposed method can be end-to-end trainable to learn the inherent connection among the position offsets. This allows the CTD to explore context information instead of predicting points independently, resulting in more smooth and accurate detection. We also propose two simple but effective post-processing methods named nonpolygon suppress (NPS) and polygonal non-maximum suppression (PNMS) to further improve the detection accuracy. Furthermore, the proposed approach in this paper is designed in an universal manner, which can also be trained with rectangular or quadrilateral bounding boxes without extra efforts. Experimental results on CTW-1500 demonstrate our method with only a light backbone can outperform state-of-the-art methods with a large margin. By evaluating only in the curve or non-curve subset, the CTD + TLOC can still achieve the best results. Code is available at https://github.com/Yuliang-Liu/Curve-Text-Detector."}
{"_id": "e79aea7ff591664c3544ae68cfcd0f1d18beaedd", "title": "Building and evaluating web corpora representing national varieties of English", "text": "Corpora are essential resources for language studies, as well as for training statistical natural language processing systems. Although very large English corpora have been built, only relatively small corpora are available for many varieties of English. National top-level domains (e.g., .au, .ca) could be exploited to automatically build web corpora, but it is unclear whether such corpora would reflect the corresponding national varieties of English; i.e., would a web corpus built from the .ca domain correspond to Canadian English? In this article we build web corpora from national top-level domains corresponding to countries in which English is widely spoken. We then carry out statistical analyses of these corpora in terms of keywords, measures of corpus comparison based on the Chi-square test and spelling variants, and the frequencies of words known to be marked in particular varieties of English. We find evidence that the web corpora indeed reflect the corresponding national varieties of English. We then demonstrate, through a case study on the analysis of Canadianisms, that these corpora could be valuable lexicographical resources."}
{"_id": "6c3ec890029ced10d8d982b1836ac8ff98be3103", "title": "Benefiting from Social Capital in Online Support Groups: An Empirical Study of Cancer Patients", "text": "With measures specific to the online cancer environment and data from an online survey of cancer patients, the current study finds support for the following model: asynchronous online communication --> social interaction --> social support --> positive health outcomes in terms of stress, depression, and coping. The findings suggest that the Internet can be a positive cyber venue for cancer patients as they confront illness, undergo treatment, and seek out support."}
{"_id": "2b94195e42f93929992f56011683da32ec691158", "title": "Contemporary reconstruction of the mandible.", "text": "Reconstruction of the mandible has evolved significantly over the last 40years. Early attempts were often disfiguring and wrought with complications but with the introduction of free tissue transfer of well vascularized bone in the 1970's there was a significant improvement in outcomes. In recent years the harvest, inset, and microvascular anatomosis have been refined to the point that success rates are reported as high as 99% throughout the literature. Focus has now shifted to optimizing functional and aesthetic outcomes after mandible reconstruction. This paper will be a review defect classification, goals of reconstruction, the various donor sites, dental rehabilitation, new advances, and persistent problems. Reconstruction of segmental mandibular defects after ablative surgery is best accomplished using free tissue transfer to restore mandibular continuity and function. Reestablishing occlusion and optimizing tongue mobility are important to post-operative oral function. Persistent problems in oro-mandibular reconstruction relate to the effects of radiation treatment on the native tissue and include xerostomia, dysgeusia, osteoradionecrosis and trismus. These problems continue to plague the oral cancer patient despite the significant advances that allow a far more complete functional restoration than could be accomplished a mere two decades ago."}
{"_id": "d5a2f44980e517f5692d4eb4b1cbe55855fe99fa", "title": "Interactive Groundwater (IGW): An innovative digital laboratory for groundwater education and research", "text": "In this study, we present an award-wining software environment for groundwater education and research. The software functions as a \u2018\u2018digital laboratory\u2019\u2019 in which students can freely explore: visually creating an aquifer of desired configurations and then immediately investigating and visualizing the groundwater system. Students learn by active exploration and interaction. The software allows introducing routinely research and complex problem-solving into the classroom. 2004 Wiley Periodicals, Inc. Comput Appl Eng Educ 11: 179 202, 2003; Published online in Wiley InterScience (www.interscience.wiley.com.); DOI 10.1002/cae.10052"}
{"_id": "2e5d468fddff60336944cc83eb54e68747616f9f", "title": "Intelligent accident identification system using GPS , GSM modem", "text": "Recently technological and population development, the usage of vehicles are rapidly increasing and at the same time the occurrence accident is also increased. Hence, the value of human life is ignored. No one can prevent the accident, but can save their life by expediting the ambulance to the hospital in time. A new vivid scheme called Intelligent Transportation System (ITS) is introduced. The objective of this scheme is to minimize the delay caused by traffic congestion and to provide the smooth flow of emergency vehicles. The concept of this scheme is to green the traffic signal in the path of ambulance automatically with the help of RF module. So that the ambulance can reach the spot in time and human life can be saved and the accident location is identified sends the accident location immediately to the main server. The main server finds the nearest ambulance to the accident zone and sends the exact accident location to the emergency vehicle. The control unit monitors the ambulance and provides the shortest path to the ambulance at the same time it controls the traffic light according to the ambulance location and thus arriving at the hospital safely. This scheme is fully automated, thus it locates the accident spot accurately, controls the traffic lights, provide the shortest path to reach the location and to the hospital in time."}
{"_id": "685a91c9aeb6cc08eb09ef6e838b1e5a3bd07bf3", "title": "Development of vehicle tracking system using GPS and GSM modem", "text": "The ability to track vehicles is useful in many applications including security of personal vehicles, public transportation systems, fleet management and others. Furthermore, the number of vehicles on the road globally is also expected to increase rapidly. Therefore, the development of vehicle tracking system using the Global Positioning System (GPS) and Global System for Mobile Communications (GSM) modem is undertaken with the aim of enabling users to locate their vehicles with ease and in a convenient manner. The system will provide users with the capability to track vehicle remotely through the mobile network. This paper presents the development of the vehicle tracking system's hardware prototype. Specifically, the system will utilize GPS to obtain a vehicle's coordinate and transmit it using GSM modem to the user's phone through the mobile network. The main hardware components of the system are u-blox NEO-6Q GPS receiver module, u-blox LEON-G100 GSM module and Arduino Uno microcontroller. The developed vehicle tracking system demonstrates the feasibility of near real-time tracking of vehicles and improved customizability, global operability and cost when compared to existing solutions."}
{"_id": "c05a52b795776a4d13d99896449b8e0b7d3dad08", "title": "The survey of GSM wireless communication system", "text": "In the past decade, wireless communications experienced an explosive growth period and became an integral part of modern society. The convenience and flexibility offered by mobile communications have made it one of the fastest growing areas of telecommunications. Mobile communication systems have experienced rapid growth in the number of users as well as the range of services provided during the last two decades. The Global System for Mobile communications, GSM, is a pan-European Mobile communication system in the 900 MHz band which was first introduced in the early years of this decade. An increasing demand for data transmission services over the GSM system has been driven by the wide use of the internet applications. In this thesis, we focus on the survey of GSM for wireless communication, which is one of the most widely deployed second generation wireless cellular systems in the world. While voice was the primary service provided early communication systems, current systems offer other transmission services. One of the most commonly used cellular systems is based on the Global System for Mobile communication, GSM, standard. We briefly introduce the GSM System configuration and major properties. It included five parts. It is about service and features, architecture of GSM system, channel and frame structure of GSM, GSM security features, data in the GSM System."}
{"_id": "5049d8f5fe35b0e9e24f3e3cf06f4a69758cb118", "title": "Adaptive postfiltering for quality enhancement of coded speech", "text": "AbstrucfAn adaptive postfiltering algorithm for enhancing the perceptual quality of coded speech is presented. The posffilter consists of a long-term postfilter section in cascade with a shortterm postfilter section and includes spectral tilt compensation and automatic gain control. The long-term section emphasizes pitch harmonics and attenuates the spectral valleys between pitch harmonics. The short-term section, on the other hand, emphasizes speech formants and attenuates the spectral valleys between formants. Both filter sections have poles and zeros. Unlike earlier postfilters that often introduced a substantial amount of muffling to the output speech, our postfilter significantly reduces this effect by minimizing the spectral tilt in its frequency response. As a result, this postlilter achieves noticeable noise reduction while introducing only minimal distortion in speech. The complexity of the postfilter is quite low. Variations of this postlilter are now being used in several national and international speech coding standards. This paper presents for the first time a complete description of our original postfiltering algorithm and the underlying ideas that motivated its development."}
{"_id": "a089b70b04955d19488905c22e174cfc7c679e9e", "title": "Developing a Web-Based Application using OWL and SWRL", "text": "Data integration is central in Web application development because these applications typically deal with a variety of information formats. Ontology-driven applications face the additional challenge of integrating these multiple formats with the information stored in ontologies. A number of mappings are required to reconcile the variety of formats to produce a coherent overall system. To address these mappings we have developed a number of open source tools that support transformations between some of the common formats encountered when developing an ontology-driven Web application. The Semantic Web Rule Language (SWRL) is a central building block in these tools. We describe these tools and illustrate their use in the development of a prototype Web-based application."}
{"_id": "da8c867f7044d4b641697bededb2d8b074aa3d0b", "title": "Styling with Attention to Details", "text": "Fashion as characterized by its nature, is driven by style. In this paper, we propose a method that takes into account the style information to complete a given set of selected fashion items with a complementary fashion item. Complementary items are those items that can be worn along with the selected items according to the style. Addressing this problem facilitates in automatically generating stylish fashion ensembles leading to a richer shopping experience for users. Recently, there has been a surge of online social websites where fashion enthusiasts post the outfit of the day and other users can like and comment on them. These posts contain a gold-mine of information about style. In this paper, we exploit these posts to train a deep neural network which captures style in an automated manner. We pose the problem of predicting complementary fashion items as a sequence to sequence problem where the input is the selected set of fashion items and the output is a complementary fashion item based on the style information learned by the model. We use the encoder decoder architecture to solve this problem of completing the set of fashion items. We evaluate the goodness of the proposed model through a variety of experiments. We empirically observe that our proposed model outperforms competitive baseline like apriori algorithm by ~28% in terms of accuracy for top-1 recommendation to complete the fashion ensemble. We also perform retrieval based experiments to understand the ability of the model to learn style and rank the complementary fashion items and find that using attention in our encoder decoder model helps in improving the mean reciprocal rank by ~24%. Qualitatively we find the complementary fashion items generated by our proposed model are richer than the apriori algorithm."}
{"_id": "2183ac8f89deaf25da63adedaedfebfe6306bd85", "title": "A Framework for Discovering Internal Financial Fraud Using Analytics", "text": "In today's knowledge based society, financial fraud has become a common phenomenon. Moreover, the growth in knowledge discovery in databases and fraud audit has made the detection of internal financial fraud a major area of research. On the other hand, auditors find it difficult to apply a majority of techniques in the fraud auditing process and to integrate their domain knowledge in this process. In this Paper a framework called \"Knowledge-driven Internal Fraud Detection (KDIFD)\" is proposed for detecting internal financial frauds. The framework suggests a process-based approach that considers both forensic auditor's tacit knowledge base and computer-based data analysis and mining techniques. The proposed framework can help auditor in discovering internal financial fraud more efficiently."}
{"_id": "fb924aad6784fca1fc137fd40c6c0ee3c3817f99", "title": "Modified image segmentation method based on region growing and region merging", "text": "Image segmentation is one of the basic concepts widely used in each and every fields of image processing. The entire process of the proposed work for image segmentation comprises of 3 phases: Threshold generation with dynamic Modified Region Growing phase (DMRG), texture feature generation phase and region merging phase. by dynamically changing two thresholds, the given input image can be performed as DMRG, in which the cuckoo search optimization algorithm helps to optimize the two thresholds in modified region growing. after obtaining the region growth segmented image, the edges are detected with edge detection algorithm. In the second phase, the texture feature is extracted using entropy based operation from the input image. In region merging phase, the results obtained from the texture feature generation phase is combined with the results of DMRG phase and similar regions are merged by using a distance comparison between regions. The proposed work is implemented using Mat lab platform with several medical images. the performance of the proposed work is evaluated using the metrics sensitivity, specificity and accuracy. the results show that this proposed work provides very good accuracy for the segmentation process in images."}
{"_id": "a33a1c0f69327b9bc112ee4857112312c41b13ff", "title": "Style Augmentation: Data Augmentation via Style Randomization", "text": "We introduce style augmentation, a new form of data augmentation based on random style transfer, for improving the robustness of convolutional neural networks (CNN) over both classification and regression based tasks. During training, our style augmentation randomizes texture, contrast and color, while preserving shape and semantic content. This is accomplished by adapting an arbitrary style transfer network to perform style randomization, by sampling input style embeddings from a multivariate normal distribution instead of inferring them from a style image. In addition to standard classification experiments, we investigate the effect of style augmentation (and data augmentation generally) on domain transfer tasks. We find that data augmentation significantly improves robustness to domain shift, and can be used as a simple, domain agnostic alternative to domain adaptation. Comparing style augmentation against a mix of seven traditional augmentation techniques, we find that it can be readily combined with them to improve network performance. We validate the efficacy of our technique with domain transfer experiments in classification and monocular depth estimation, illustrating consistent improvements in generalization."}
{"_id": "f838788faf2cdc54f4f3b9212fdc3df53313452b", "title": "Globally optimal toon tracking", "text": "The ability to identify objects or region correspondences between consecutive frames of a given hand-drawn animation sequence is an indispensable tool for automating animation modification tasks such as sequence-wide recoloring or shape-editing of a specific animated character. Existing correspondence identification methods heavily rely on appearance features, but these features alone are insufficient to reliably identify region correspondences when there exist occlusions or when two or more objects share similar appearances. To resolve the above problems, manual assistance is often required. In this paper, we propose a new correspondence identification method which considers both appearance features and motions of regions in a global manner. We formulate correspondence likelihoods between temporal region pairs as a network flow graph problem which can be solved by a well-established optimization algorithm. We have evaluated our method with various animation sequences and results show that our method consistently outperforms the state-of-the-art methods without any user guidance."}
{"_id": "e8cd8e69401360e40327b9d77a22e4f04cff690c", "title": "Aesthetic quality classification of photographs based on color harmony", "text": "Aesthetic quality classification plays an important role in how people organize large photo collections. In particular, color harmony is a key factor in the various aspects that determine the perceived quality of a photo, and it should be taken into account to improve the performance of automatic aesthetic quality classification. However, the existing models of color harmony take only simple color patterns into consideration\u2013e.g., patches consisting of a few colors\u2013and thus cannot be used to assess photos with complicated color arrangements. In this work, we tackle the challenging problem of evaluating the color harmony of photos with a particular focus on aesthetic quality classification. A key point is that a photograph can be seen as a collection of local regions with color variations that are relatively simple. This led us to develop a method for assessing the aesthetic quality of a photo based on the photo's color harmony. We term the method \u2018bags-of-color-patterns.\u2019 Results of experiments on a large photo collection with user-provided aesthetic quality scores show that our aesthetic quality classification method, which explicitly takes into account the color harmony of a photo, outperforms the existing methods. Results also show that the classification performance is improved by combining our color harmony feature with blur, edges, and saliency features that reflect the aesthetics of the photos."}
{"_id": "4d7bfd4eddb0d238c7e00b511f936d98c7dd6b99", "title": "Estimation of IMU and MARG orientation using a gradient descent algorithm", "text": "This paper presents a novel orientation algorithm designed to support a computationally efficient, wearable inertial human motion tracking system for rehabilitation applications. It is applicable to inertial measurement units (IMUs) consisting of tri-axis gyroscopes and accelerometers, and magnetic angular rate and gravity (MARG) sensor arrays that also include tri-axis magnetometers. The MARG implementation incorporates magnetic distortion compensation. The algorithm uses a quaternion representation, allowing accelerometer and magnetometer data to be used in an analytically derived and optimised gradient descent algorithm to compute the direction of the gyroscope measurement error as a quaternion derivative. Performance has been evaluated empirically using a commercially available orientation sensor and reference measurements of orientation obtained using an optical measurement system. Performance was also benchmarked against the propriety Kalman-based algorithm of orientation sensor. Results indicate the algorithm achieves levels of accuracy matching that of the Kalman based algorithm; < 0.8\u00b0 static RMS error, < 1.7\u00b0 dynamic RMS error. The implications of the low computational load and ability to operate at small sampling rates significantly reduces the hardware and power necessary for wearable inertial movement tracking, enabling the creation of lightweight, inexpensive systems capable of functioning for extended periods of time."}
{"_id": "d50398463b7b2bbf2aee9686d8ebf426b18e098a", "title": "Mori-Zwanzig formalism as a practical computational tool.", "text": "An operational procedure is presented to compute explicitly the different terms in the generalized Langevin equation (GLE) for a few relevant variables obtained within Mori-Zwanzig formalism. The procedure amounts to introducing an artificial controlled parameter which can be tuned in such a way that the so-called projected dynamics becomes explicit and the GLE reduces to a Markovian equation. The projected dynamics can be realised in practice by introducing constraints, and it is shown that the Green-Kubo formulae computed with these dynamics do not suffer from the plateau problem. The methodology is illustrated in the example of star polymer molecules in a melt using their center of mass as relevant variables. Through this example, we show that not only the effective potentials, but also the friction forces and the noise play a very important role in the dynamics."}
{"_id": "245f3aa8423563b78367f7726399c8fa1841d7bc", "title": "Revisiting Random Binning Features: Fast Convergence and Strong Parallelizability", "text": "Kernel method has been developed as one of the standard approaches for nonlinear learning, which however, does not scale to large data set due to its quadratic complexity in the number of samples. A number of kernel approximation methods have thus been proposed in the recent years, among which the random features method gains much popularity due to its simplicity and direct reduction of nonlinear problem to a linear one. Different random feature functions have since been proposed to approximate a variety of kernel functions. Among them the Random Binning (RB) feature, proposed in the first random-feature paper [21], has drawn much less attention than the Random Fourier (RF) feature proposed also in [21]. In this work, we observe that the RB features, with right choice of optimization solver, could be orders-of-magnitude more efficient than other random features and kernel approximation methods under the same requirement of accuracy. We thus propose the first analysis of RB from the perspective of optimization, which by interpreting RB as a Randomized Block Coordinate Descent in the infinite-dimensional space, gives a faster convergence rate compared to that of other random features. In particular, we show that by drawing R random grids with at least \u03ba number of non-empty bins per grid in expectation, RB method achieves a convergence rate of O(1/\u03ba R)), which not only sharpens its O(1/\u221aR) rate from Monte Carlo analysis, but also shows a \u03ba times speedup over other random features under the same analysis framework. In addition, we demonstrate another advantage of RB in the L1-regularized setting, where unlike other random features, a RB-based Coordinate Descent solver can be parallelized with guaranteed speedup proportional to \u03ba. Our extensive experiments demonstrate the superior performance of the RB features over other random features and kernel approximation methods."}
{"_id": "91acde3f3db1f793070d9e58b05c48401ff46925", "title": "HHCART: An Oblique Decision Tree", "text": "Decision trees are a popular technique in statistical data classification. They recursively partition the feature space into disjoint sub-regions until each sub-region becomes homogeneous with respect to a particular class. The basic Classification and Regression Tree (CART) algorithm partitions the feature space using axis parallel splits. When the true decision boundaries are not aligned with the feature axes, this approach can produce a complicated boundary structure. Oblique decision trees use oblique decision boundaries to potentially simplify the boundary structure. The major limitation of this approach is that the tree induction algorithm is computationally expensive. In this article we present a new decision tree algorithm, called HHCART. The method utilizes a series of Householder matrices to reflect the training data at each node during the tree construction. Each reflection is based on the directions of the eigenvectors from each classes\u2019 covariance matrix. Considering axis parallel splits in the reflected training data provides an efficient way of finding oblique splits in the unreflected training data. Experimental results show that the accuracy and size of the HHCART trees are comparable with some benchmark methods in the literature. The appealing feature of HHCART is that it can handle both qualitative and quantitative features in the same oblique split."}
{"_id": "9bcc65754a9cfe57956d1f8816f4ca7d3a35da95", "title": "Stillbirths: economic and psychosocial consequences", "text": "Despite the frequency of stillbirths, the subsequent implications are overlooked and underappreciated. We present findings from comprehensive, systematic literature reviews, and new analyses of published and unpublished data, to establish the effect of stillbirth on parents, families, health-care providers, and societies worldwide. Data for direct costs of this event are sparse but suggest that a stillbirth needs more resources than a livebirth, both in the perinatal period and in additional surveillance during subsequent pregnancies. Indirect and intangible costs of stillbirth are extensive and are usually met by families alone. This issue is particularly onerous for those with few resources. Negative effects, particularly on parental mental health, might be moderated by empathic attitudes of care providers and tailored interventions. The value of the baby, as well as the associated costs for parents, families, care providers, communities, and society, should be considered to prevent stillbirths and reduce associated morbidity."}
{"_id": "db3fed1a69977d9b0cf2a3a44b05854940f80430", "title": "Improved Text Extraction from PDF Documents for Large-Scale Natural Language Processing", "text": "The inability of reliable text extraction from arbitrary documents is often an obstacle for large scale NLP based on resources crawled from the Web. One of the largest problems in the conversion of PDF documents is the detection of the boundaries of common textual units such as paragraphs, sentences and words. PDF is a file format optimized for printing and encapsulates a complete description of the layout of a document including text, fonts, graphics and so on. This paper describes a tool for extracting texts from arbitrary PDF files for the support of largescale data-driven natural language processing. Our approach combines the benefits of several existing solutions for the conversion of PDF documents to plain text and adds a language-independent post-processing procedure that cleans the output for further linguistic processing. In particular, we use the PDF-rendering libraries pdfXtk, Apache Tika and Poppler in various configurations. From the output of these tools we recover proper boundaries using on-the-fly language models and languageindependent extraction heuristics. In our research, we looked especially at publications from the European Union, which constitute a valuable multilingual resource, for example, for training statistical machine translation models. We use our tool for the conversion of a large multilingual database crawled from the EU bookshop with the aim of building parallel corpora. Our experiments show that our conversion software is capable of fixing various common issues leading to cleaner data sets in the end."}
{"_id": "77edc5099fc2df0efe644a0ea63a936ac2ac0940", "title": "Depeche Mood: a Lexicon for Emotion Analysis from Crowd Annotated News", "text": "While many lexica annotated with words polarity are available for sentiment analysis, very few tackle the harder task of emotion analysis and are usually quite limited in coverage. In this paper, we present a novel approach for extracting \u2013 in a totally automated way \u2013 a highcoverage and high-precision lexicon of roughly 37 thousand terms annotated with emotion scores, called DepecheMood. Our approach exploits in an original way \u2018crowd-sourced\u2019 affective annotation implicitly provided by readers of news articles from rappler.com. By providing new state-of-the-art performances in unsupervised settings for regression and classification tasks, even using a na\u0131\u0308ve approach, our experiments show the beneficial impact of harvesting social media data for affective lexicon building."}
{"_id": "7748514058675841f46836a9bc3b6aa8ab76c9ca", "title": "Response and Habituation of the Human Amygdala during Visual Processing of Facial Expression", "text": "We measured amygdala activity in human volunteers during rapid visual presentations of fearful, happy, and neutral faces using functional magnetic resonance imaging (fMRI). The first experiment involved a fixed order of conditions both within and across runs, while the second one used a fully counterbalanced order in addition to a low level baseline of simple visual stimuli. In both experiments, the amygdala was preferentially activated in response to fearful versus neutral faces. In the counterbalanced experiment, the amygdala also responded preferentially to happy versus neutral faces, suggesting a possible generalized response to emotionally valenced stimuli. Rapid habituation effects were prominent in both experiments. Thus, the human amygdala responds preferentially to emotionally valenced faces and rapidly habituates to them."}
{"_id": "8de30f8ec37ad31525f753ab2ccbace638291bad", "title": "Deception through telling the truth ? ! : Experimental evidence from individuals and teams", "text": "Informational asymmetries abound in economic decision making and often provide an incentive for deception through telling a lie or misrepresenting information. In this paper I use a cheap-talk sender-receiver experiment to show that telling the truth should be classified as deception too if the sender chooses the true message with the expectation that the receiver will not follow the sender\u2019s (true) message. The experimental data reveal a large degree of \u2018sophisticated\u2019 deception through telling the truth. The robustness of my broader definition of deception is confirmed in an experimental treatment where teams make decisions. JEL-classification: C72, C91, D82"}
{"_id": "1fbc1cc8a85b50c15742672033f51a2f57f86692", "title": "Metacognitive Beliefs About Procrastination : Development and Concurrent Validity of a Self-Report Questionnaire", "text": "This article describes the development of a questionnaire on metacognitive beliefs about procrastination. In Study 1 we performed a principal axis factor analysis that suggested a twofactor solution for the data obtained from the preliminary questionnaire. The factors identified were named positive and negative metacognitive beliefs about procrastination. The factor analysis reduced the questionnaire from 22 to 16 items, with each factor consisting of 8 items. In Study 2 we performed a confirmatory factor analysis that provided support for the twofactor solution suggested by the exploratory factor analysis. Both factors had adequate internal consistency. Concurrent validity was partially established through correlation analyses. These showed that positive metacognitive beliefs about procrastination were positively correlated with decisional procrastination, and that negative metacognitive beliefs about procrastination were positively correlated with both decisional and behavioral procrastination. The Metacognitive Beliefs About Procrastination Questionnaire may aid future research into procrastination and facilitate clinical assessment and case formulation."}
{"_id": "672c4db75dd626d6b8f152ec8ddfe0171ffe5f8b", "title": "One-Class Kernel Spectral Regression for Outlier Detection", "text": "The paper introduces a new efficient nonlinear oneclass classifier formulated as the Rayleigh quotient criterion optimisation. The method, operating in a reproducing kernel Hilbert subspace, minimises the scatter of target distribution along an optimal projection direction while at the same time keeping projections of positive observations distant from the mean of the negative class. We provide a graph embedding view of the problem which can then be solved efficiently using the spectral regression approach. In this sense, unlike previous similar methods which often require costly eigen-computations of dense matrices, the proposed approach casts the problem under consideration into a regression framework which is computationally more efficient. In particular, it is shown that the dominant complexity of the proposed method is the complexity of computing the kernel matrix. Additional appealing characteristics of the proposed one-class classifier are: 1-the ability to be trained in an incremental fashion (allowing for application in streaming data scenarios while also reducing the computational complexity in a non-streaming operation mode); 2-being unsupervised, but providing the option for refining the solution using negative training examples, when available; Last but not least, 3-the use of the kernel trick which facilitates a nonlinear mapping of the data into a high-dimensional feature space to seek better solutions. Extensive experiments conducted on several datasets verify the merits of the proposed approach in comparison with other alternatives."}
{"_id": "f05f698f418478575b8a1be34b5020e08f9fbba2", "title": "Very greedy crossover in a genetic algorithm for the traveling salesman problem", "text": "In the traveling salesman problem, we are given a set of cities and the distances between them, and we seek a shortest tour that visits each city exactly once and returns to the starting city. Many researchers have described genetic algorithms for this problem, and they have often focused on the crossover operator, which builds offspring tours by combining two parental tours. Very greedy crossover extends several of these operators; as it builds a tour, it always appends the shortest parental edge to a city not yet visited, if there is such an edge. A steady-state genetic algorithm using this operator, mutation by inversion, and rank-based probabilities for both selection and deletion shows good results on a suite of flve test problems."}
{"_id": "867e2293e9780b729705b4ba48d6b11e3778e999", "title": "Phishing detection based Associative Classification data mining", "text": "Website phishing is considered one of the crucial security challenges for the online community due to the massive numbers of online transactions performed on a daily basis. Website phishing can be described as mimicking a trusted website to obtain sensitive information from online users such as usernames and passwords. Black lists, white lists and the utilisation of search methods are examples of solutions to minimise the risk of this problem. One intelligent approach based on data mining called Associative Classification (AC) seems a potential solution that may effectively detect phishing websites with high accuracy. According to experimental studies, AC often extracts classifiers containing simple \u2018\u2018If-Then\u2019\u2019 rules with a high degree of predictive accuracy. In this paper, we investigate the problem of website phishing using a developed AC method called Multi-label Classifier based Associative Classification (MCAC) to seek its applicability to the phishing problem. We also want to identify features that distinguish phishing websites from legitimate ones. In addition, we survey intelligent approaches used to handle the phishing problem. Experimental results using real data collected from different sources show that AC particularly MCAC detects phishing websites with higher accuracy than other intelligent algorithms. Further, MCAC generates new hidden knowledge (rules) that other algorithms are unable to find and this has improved its classifiers predictive performance. 2014 Elsevier Ltd. All rights reserved."}
{"_id": "ba1c6e772e72fd04fba3585cdd75e063286f4f6d", "title": "Assessing the severity of phishing attacks: A hybrid data mining approach", "text": "Available online 19 August 2010"}
{"_id": "12d6cf6346f6d693b6dc3b88d176a8a7b192355c", "title": "Why phishing works", "text": "To build systems shielding users from fraudulent (or phishing) websites, designers need to know which attack strategies work and why. This paper provides the first empirical evidence about which malicious strategies are successful at deceiving general users. We first analyzed a large set of captured phishing attacks and developed a set of hypotheses about why these strategies might work. We then assessed these hypotheses with a usability study in which 22 participants were shown 20 web sites and asked to determine which ones were fraudulent. We found that 23% of the participants did not look at browser-based cues such as the address bar, status bar and the security indicators, leading to incorrect choices 40% of the time. We also found that some visual deception attacks can fool even the most sophisticated users. These results illustrate that standard security indicators are not effective for a substantial fraction of users, and suggest that alternative approaches are needed."}
{"_id": "1b99fe2f4e680ebb7fe82ec7054d034c8ab8c79d", "title": "Decision strategies and susceptibility to phishing", "text": "Phishing emails are semantic attacks that con people into divulging sensitive information using techniques to make the user believe that information is being requested by a legitimate source. In order to develop tools that will be effective in combating these schemes, we first must know how and why people fall for them. This study reports preliminary analysis of interviews with 20 non-expert computer users to reveal their strategies and understand their decisions when encountering possibly suspicious emails. One of the reasons that people may be vulnerable to phishing schemes is that awareness of the risks is not linked to perceived vulnerability or to useful strategies in identifying phishing emails. Rather, our data suggest that people can manage the risks that they are most familiar with, but don't appear to extrapolate to be wary of unfamiliar risks. We explore several strategies that people use, with varying degrees of success, in evaluating emails and in making sense of warnings offered by browsers attempting to help users navigate the web."}
{"_id": "9458b89b323af84b4635dbcf7d18114f9af19c96", "title": "Wearable Real-Time Stereo Vision for the Visually Impaired", "text": "In this paper, a development of image processing, stereovision methodology and a sonification procedure for image sonification system for vision substitution are presented. The hardware part consists of a sunglass fitted with two mini cameras, laptop computer and stereo earphones. The image of the scene in front of blind people is captured by stereo cameras. The captured image is processed to enhance the important features in the scene in front of blind user. The image processing is designed to extract the objects from the image and the stereo vision method is applied to calculate the disparity which is required to determine the distance between the blind user and the objects. The processed image is mapped on to stereo sound for the blind\u2019s understanding of the scene in front. Experimentations were conducted in the indoor environment and the proposed methodology is found to be effective for object identification, and thus the sound produced will assists the visually impaired for their collision free navigation."}
{"_id": "896e160b98d52d13a97caa664038e37e86075ee4", "title": "NIMA: Neural Image Assessment", "text": "Automatically learned quality assessment for images has recently become a hot topic due to its usefulness in a wide variety of applications, such as evaluating image capture pipelines, storage techniques, and sharing media. Despite the subjective nature of this problem, most existing methods only predict the mean opinion score provided by data sets, such as AVA and TID2013. Our approach differs from others in that we predict the distribution of human opinion scores using a convolutional neural network. Our architecture also has the advantage of being significantly simpler than other methods with comparable performance. Our proposed approach relies on the success (and retraining) of proven, state-of-the-art deep object recognition networks. Our resulting network can be used to not only score images reliably and with high correlation to human perception, but also to assist with adaptation and optimization of photo editing/enhancement algorithms in a photographic pipeline. All this is done without need for a \u201cgolden\u201d reference image, consequently allowing for single-image, semantic- and perceptually-aware, no-reference quality assessment."}
{"_id": "8fd1910454feb9a28741992b87a271381edd1af8", "title": "Convolutional neural networks and multimodal fusion for text aided image classification", "text": "With the exponential growth of web meta-data, exploiting multimodal online sources via standard search engine has become a trend in visual recognition as it effectively alleviates the shortage of training data. However, the web meta-data such as text data is usually not as cooperative as expected due to its unstructured nature. To address this problem, this paper investigates the numerical representation of web text data. We firstly adopt convolutional neural network (CNN) for web text modeling on top of word vectors. Combined with CNN for image, we present a multimodal fusion to maximize the discriminative power of visual and textual modality data for decision level and feature level simultaneously. Experimental results show that the proposed framework achieves significant improvement in large-scale image classification on Pascal VOC-2007 and VOC-2012 datasets."}
{"_id": "a91c89c95dbb49b924434cd00dfcef7e635ea8cb", "title": "Wireless Measurement of RFID IC Impedance", "text": "Accurate knowledge of the input impedance of a radio-frequency identification (RFID) integrated circuit (IC) at its wake-up power is valuable as it enables the design of a performance-optimized tag for a specific IC. However, since the IC impedance is power dependent, few methods exist to measure it without advanced equipment. We propose and demonstrate a wireless method, based on electromagnetic simulation and threshold power measurement, applicable to fully assembled RFID tags, to determine the mounted IC's input impedance in the absorbing state, including any parasitics arising from the packaging and the antenna-IC connection. The proposed method can be extended to measure the IC's input impedance in the modulating state as well."}
{"_id": "9422b8b7bec2ec3860902481ff2977211d65112f", "title": "Real-time High Performance Anomaly Detection over Data Streams: Grand Challenge", "text": "Real-time analytics over data streams are crucial for a wide range of use cases in industry and research. Today's sensor systems can produce high throughput data streams that have to be analyzed in real-time. One important analytic task is anomaly or outlier detection from the streaming data. In many industry applications, sensing devices produce a data stream that can be monitored to know the correct operation of industry devices and consequently avoid damages by triggering reactions in real-time.\n While anomaly detection is a well-studied topic in data mining, the real-time high-performance anomaly detection from big data streams require special studies and well-optimized implementation. This paper presents our implementation of a real-time anomaly detection system over data streams. We outline details of our two separate implementations using the Java and C++ programming languages, and provide technical details about the data processing pipelines. We report experimental results and describe performance tuning strategies."}
{"_id": "a70198f652f40ec18a7416ffb1fc858d1a203f84", "title": "MORE ON THE FIBONACCI SEQUENCE AND HESSENBERG MATRICES", "text": "Five new classes of Fibonacci-Hessenberg matrices are introduced. Further, we introduce the notion of two-dimensional Fibonacci arrays and show that three classes of previously known Fibonacci-Hessenberg matrices and their generalizations satisfy this property. Simple systems of linear equations are given whose solutions are Fibonacci fractions."}
{"_id": "ed464cf1f7ceff8e9bb1497c746c8abb510835a6", "title": "Municipal solid waste source-separated collection in China: A comparative analysis.", "text": "A pilot program focusing on municipal solid waste (MSW) source-separated collection was launched in eight major cities throughout China in 2000. Detailed investigations were carried out and a comprehensive system was constructed to evaluate the effects of the eight-year implementation in those cities. This paper provides an overview of different methods of collection, transportation, and treatment of MSW in the eight cities; as well as making a comparative analysis of MSW source-separated collection in China. Information about the quantity and composition of MSW shows that the characteristics of MSW are similar, which are low calorific value, high moisture content and high proportion of organisms. Differences which exist among the eight cities in municipal solid waste management (MSWM) are presented in this paper. Only Beijing and Shanghai demonstrated a relatively effective result in the implementation of MSW source-separated collection. While the six remaining cities result in poor performance. Considering the current status of MSWM, source-separated collection should be a key priority. Thus, a wider range of cities should participate in this program instead of merely the eight pilot cities. It is evident that an integrated MSWM system is urgently needed. Kitchen waste and recyclables are encouraged to be separated at the source. Stakeholders involved play an important role in MSWM, thus their responsibilities should be clearly identified. Improvement in legislation, coordination mechanisms and public education are problematic issues that need to be addressed."}
{"_id": "8cfb316b3233d9b598265e3b3d40b8b064014d63", "title": "Video classification with Densely extracted HOG/HOF/MBH features: an evaluation of the accuracy/computational efficiency trade-off", "text": "The current state-of-the-art in video classification is based on Bag-of-Words using local visual descriptors. Most commonly these are histogram of oriented gradients (HOG), histogram of optical flow (HOF) and motion boundary histograms (MBH) descriptors. While such approach is very powerful for classification, it is also computationally expensive. This paper addresses the problem of computational efficiency. Specifically: (1) We propose several speed-ups for densely sampled HOG, HOF and MBH descriptors and release Matlab code; (2) We investigate the trade-off between accuracy and computational efficiency of descriptors in terms of frame sampling rate and type of Optical Flow method; (3) We investigate the trade-off between accuracy and computational efficiency for computing the feature vocabulary, using and comparing most of the commonly adopted vector quantization techniques: $$k$$ k -means, hierarchical $$k$$ k -means, Random Forests, Fisher Vectors and VLAD."}
{"_id": "e8b5c97a5d9e1a4ece710b2ab1ba93f659e6bb9c", "title": "A Faster Scrabble Move Generation Algorithm", "text": "Appel and Jacobson1 presented a fast algorithm for generating every possible move in a given position in the game of Scrabble using a DAWG, a finite automaton derived from the trie of a large lexicon. This paper presents a faster algorithm that uses a GADDAG, a finite automaton that avoids the non-deterministic prefix generation of the DAWG algorithm by encoding a bidirectional path starting from each letter of each word in the lexicon. For a typical lexicon, the GADDAG is nearly five times larger than the DAWG, but generates moves more than twice as fast. This time/space trade-off is justified not only by the decreasing cost of computer memory, but also by the extensive use of move-generation in the analysis of board positions used by Gordon in the probabilistic search for the most appropriate play in a given position within realistic time constraints."}
{"_id": "6da978116b0a492decb8d2860540146d5aa7e170", "title": "FraudFind: Financial fraud detection by analyzing human behavior", "text": "Financial fraud is commonly represented by the use of illegal practices where they can intervene from senior managers until payroll employees, becoming a crime punishable by law. There are many techniques developed to analyze, detect and prevent this behavior, being the most important the fraud triangle theory associated with the classic financial audit model. In order to perform this research, a survey of the related works in the existing literature was carried out, with the purpose of establishing our own framework. In this context, this paper presents FraudFind, a conceptual framework that allows to identify and outline a group of people inside an banking organization who commit fraud, supported by the fraud triangle theory. FraudFind works in the approach of continuous audit that will be in charge of collecting information of agents installed in user's equipment. It is based on semantic techniques applied through the collection of phrases typed by the users under study for later being transferred to a repository for later analysis. This proposal encourages to contribute with the field of cybersecurity, in the reduction of cases of financial fraud."}
{"_id": "42ca3deda9064f7bc93aa4cca783dbfb71a292d4", "title": "Social Brand Value and the Value Enhancing Role of Social Media Relationships for Brands", "text": "Due to the social media revolution and the emergence of communities, social networks, and user generated content portals, prevalent branding concepts need to catch up with this reality. Given the importance of social ties, social interactions and social identity in the new media environment, there is a need to account for a relationship measure in marketing and branding. Based on the concept of social capital we introduce the concept of social brand value, defined as the perceived value derived by exchange and interactions with other users of the brand within a `community. Within a qualitative study marketing experts were interviewed and highlighted the importance towards social media activities, but also indicated that they do not have a clear picture on how strategies should look like and how their success can be measured. A second quantitative study was conducted which demonstrates the influence the social brand value construct has for consumers brand evangelism and willingness to pay a price premium and hence the value contribution of the social brand value for consumers."}
{"_id": "71ac04e7af020c4fe5a609730ab73fb0ab8b2bfd", "title": "Information security incident management process", "text": "The modern requirements and the best practices in the field of Information Security (IS) Incident Management Process (ISIMP) are analyzed. \"IS event\" and \"IS incident\" terms, being used for ISIMP, have been defined. An approach to ISIMP development has been created. According to this approach ISIMP processes are described. As an example the \u00abVulnerabilities, IS events and incidents detection and notification\u00bb joint process is examined in detail."}
{"_id": "b22685ddab32febe76a5bcab358387a7c73d0f68", "title": "Directly Addressable Variable-Length Codes", "text": "We introduce a symbol reordering technique that implicitly synchronizes variable-length codes, such that it is possible to directly access the i-th codeword without need of any sampling method. The technique is practical and has many applications to the representation of ordered sets, sparse bitmaps, partial sums, and compressed data structures for suffix trees, arrays, and inverted indexes, to name just a few. We show experimentally that the technique offers a competitive alternative to other data structures that handle this problem."}
{"_id": "2579b2066d0fcbeda5498f5053f201b10a8e254b", "title": "Deconstructing the Ladder Network Architecture", "text": "The manual labeling of data is and will remain a costly endeavor. For this reason, semi-supervised learning remains a topic of practical importance. The recently proposed Ladder Network is one such approach that has proven to be very successful. In addition to the supervised objective, the Ladder Network also adds an unsupervised objective corresponding to the reconstruction costs of a stack of denoising autoencoders. Although the empirical results are impressive, the Ladder Network has many components intertwined, whose contributions are not obvious in such a complex architecture. In order to help elucidate and disentangle the different ingredients in the Ladder Network recipe, this paper presents an extensive experimental investigation of variants of the Ladder Network in which we replace or remove individual components to gain more insight into their relative importance. We find that all of the components are necessary for achieving optimal performance, but they do not contribute equally. For semi-supervised tasks, we conclude that the most important contribution is made by the lateral connection, followed by the application of noise, and finally the choice of what we refer to as the \u2018combinator function\u2019 in the decoder path. We also find that as the number of labeled training examples increases, the lateral connections and reconstruction criterion become less important, with most of the improvement in generalization being due to the injection of noise in each layer. Furthermore, we present a new type of combinator function that outperforms the original design in both fullyand semi-supervised tasks, reducing record test error rates on Permutation-Invariant MNIST to 0.57% for the supervised setting, and to 0.97% and 1.0% for semisupervised settings with 1000 and 100 labeled examples respectively."}
{"_id": "9992626e8e063c1b23e1920efd63ab4f008710ac", "title": "Using PMU Data to Increase Situational Awareness Final Project Report", "text": ""}
{"_id": "9fe6002f53c4ca692c232d85a64c617ac3db3b18", "title": "Recent Advances in Indoor Localization: A Survey on Theoretical Approaches and Applications", "text": "The availability of location information has become a key factor in today\u2019s communications systems allowing location based services. In outdoor scenarios, the mobile terminal position is obtained with high accuracy thanks to the global positioning system (GPS) or to the standalone cellular systems. However, the main problem of GPS and cellular systems resides in the indoor environment and in scenarios with deep shadowing effects where the satellite or cellular signals are broken. In this paper, we survey different technologies and methodologies for indoor and outdoor localization with an emphasis on indoor methodologies and concepts. Additionally, we discuss in this review different localization-based applications, where the location information is critical to estimate. Finally, a comprehensive discussion of the challenges in terms of accuracy, cost, complexity, security, scalability, etc. is given. The aim of this survey is to provide a comprehensive overview of existing efforts as well as auspicious and anticipated dimensions for future work in indoor localization techniques and applications."}
{"_id": "ce086e383e1ef18f60eedd2248fcefd8bf4a213d", "title": "Speed control and electrical braking of axial flux BLDC motor", "text": "Axial flux brushless direct current motors (AFBLDC) are becoming popular in many applications including electrical vehicles because of their ability to meet the demand of high power density, high efficiency, wide speed range, robustness, low cost and less maintenance. In this paper, AFBLDC motor drive with single sided configuration having 24 stator poles and 32 permanent magnets on the rotor is proposed. It is driven by six pulse inverter that is fed from a single phase AC supply through controlled AC to DC converter. The speed control and braking methods are also proposed based on pulse width modulation technique. The overall scheme is simulated in MATLAB environment and tested under different operating conditions. A prototype of proposed AFBLDC motor drive is designed and fabricated. The control methods are implemented using DSC dsPIC33EP256MC202 digital signal controller. Tests are performed on this prototype to validate its performance at different speeds with and without braking mode. It is observed that the proposed scheme works effectively and can be used as wheel direct driven motor for electrical vehicle."}
{"_id": "db5baea8e4b2dbe725e587c023b60b5a1658afe1", "title": "Visual Gesture Character String Recognition by Classification-Based Segmentation with Stroke Deletion", "text": "The recognition of character strings in visual gestures has many potential applications, yet the segmentation of characters is a great challenge since the pen lift information is not available. In this paper, we propose a visual gesture character string recognition method using the classification-based segmentation strategy. In addition to the character classifier and character geometry models used for evaluating candidate segmentation-recognition paths, we introduce deletion geometry models for deleting stroke segments that are likely to be ligatures. To perform experiments, we built a Kinect-based fingertip trajectory capturing system to collect gesture string data. Experiments of digit string recognition show that the deletion geometry models improve the string recognition accuracy significantly. The string-level correct rate is over 80%."}
{"_id": "225f7d72eacdd136b0ceb0a522e3a3930c5af9b8", "title": "Automatic Identification of Word Translations from Unrelated English and German Corpora", "text": "Algorithms for the alignment of words in translated texts are w ell established. However, only recently, new approaches have been proposed to identify word translations f rom non-parallel or even unrelated texts. This task is more difficult, because most statistical clues useful in the processing of parallel texts can not be applied for non-parallel tex ts. For this reason, whereas for parallel texts in some studies up to 99% of the word alignments have been shown to be correct, the accuracy for non-parallel texts has been around 30% up to now. The current study, which is based on the assumption that there is a correlation between the patterns of word cooccurrences in corpora of different languages, makes a significant improvement to about 72% of word translations identified correctly."}
{"_id": "11a81c78412f6a8a70b9e450260fb30257126817", "title": "Clustering on Multi-Layer Graphs via Subspace Analysis on Grassmann Manifolds", "text": "Relationships between entities in datasets are often of multiple nature, like geographical distance, social relationships, or common interests among people in a social network, for example. This information can naturally be modeled by a set of weighted and undirected graphs that form a global multi-layer graph, where the common vertex set represents the entities and the edges on different layers capture the similarities of the entities in term of the different modalities. In this paper, we address the problem of analyzing multi-layer graphs and propose methods for clustering the vertices by efficiently merging the information provided by the multiple modalities. To this end, we propose to combine the characteristics of individual graph layers using tools from subspace analysis on a Grassmann manifold. The resulting combination can then be viewed as a low dimensional representation of the original data which preserves the most important information from diverse relationships between entities. As an illustrative application of our framework, we use our algorithm in clustering methods and test its performance on several synthetic and real world datasets where it is shown to be superior to baseline schemes and competitive to state-of-the-art techniques. Our generic framework further extends to numerous analysis and learning problems that involve different types of information on graphs."}
{"_id": "f2fa6cc53919ab8f310cbc56de2f85bd9e07c9a6", "title": "Solar powered unmanned aerial vehicle for continuous flight: Conceptual overview and optimization", "text": "An aircraft that is capable of continuous flight offers a new level of autonomous capacity for unmanned aerial vehicles. We present an overview of the components and concepts of a small scale unmanned aircraft that is capable of sustaining powered flight without a theoretical time limit. We then propose metrics that quantify the robustness of continuous flight achieved and optimization criteria to maximize these metrics. Finally, the criteria are applied to a fabricated and flight tested small scale high efficiency aircraft prototype to determine the optimal battery and photovoltaic array mass for robust continuous flight."}
{"_id": "1617124b134afed8b369f32640b56674caba5e4d", "title": "Aerial acoustic communications", "text": "This paper describes experiments in using audible sound as a means for wireless device communications. The direct application of standard modulation techniques to sound, without further improvements, results in sounds that are immediately perceived as digital communications and that are fairly aggressive and intrusive. We observe that some parameters of the modulation that have an impact in the data rate, the error probability and the computational overhead at the receiver also have a tremendous impact in the quality of the sound as perceived by humans. This paper focuses on how to vary those parameters in standard modulation techniques such as ASK, FSK and Spread-Spectrum to obtain communication systems in which the messages are musical and other familiar sounds, rather than modem sounds. A prototype called Digital Voices demonstrates the feasibility of this music-based communication technology. Our goal is to lay out the basis of sound design for aerial acoustic communications so that the presence of such communications, though noticeable, is not intrusive and can even be considered as part of musical compositions and sound tracks."}
{"_id": "3979cf5a013063e98ad0caf2e7110c2686cf1640", "title": "Basic local alignment search tool.", "text": "A new approach to rapid sequence comparison, basic local alignment search tool (BLAST), directly approximates alignments that optimize a measure of local similarity, the maximal segment pair (MSP) score. Recent mathematical results on the stochastic properties of MSP scores allow an analysis of the performance of this method as well as the statistical significance of alignments it generates. The basic algorithm is simple and robust; it can be implemented in a number of ways and applied in a variety of contexts including straightforward DNA and protein sequence database searches, motif searches, gene identification searches, and in the analysis of multiple regions of similarity in long DNA sequences. In addition to its flexibility and tractability to mathematical analysis, BLAST is an order of magnitude faster than existing sequence comparison tools of comparable sensitivity."}
{"_id": "04d9d54fa90e0cb54dbd63f2f42688b4cd2f6f99", "title": "CLUSTAL W: improving the sensitivity of progressive multiple sequence alignment through sequence weighting, position-specific gap penalties and weight matrix choice.", "text": "The sensitivity of the commonly used progressive multiple sequence alignment method has been greatly improved for the alignment of divergent protein sequences. Firstly, individual weights are assigned to each sequence in a partial alignment in order to down-weight near-duplicate sequences and up-weight the most divergent ones. Secondly, amino acid substitution matrices are varied at different alignment stages according to the divergence of the sequences to be aligned. Thirdly, residue-specific gap penalties and locally reduced gap penalties in hydrophilic regions encourage new gaps in potential loop regions rather than regular secondary structure. Fourthly, positions in early alignments where gaps have been opened receive locally reduced gap penalties to encourage the opening up of new gaps at these positions. These modifications are incorporated into a new program, CLUSTAL W which is freely available."}
{"_id": "78edc6c1f80cd2343b3b9d185453cac07a152663", "title": "Text-based adventures of the golovin AI agent", "text": "The domain of text-based adventure games has been recently established as a new challenge of creating the agent that is both able to understand natural language, and acts intelligently in text-described environments. In this paper, we present our approach to tackle the problem. Our agent, named Golovin, takes advantage of the limited game domain. We use genre-related corpora (including fantasy books and decompiled games) to create language models suitable to this domain. Moreover, we embed mechanisms that allow us to specify, and separately handle, important tasks as fighting opponents, managing inventory, and navigating on the game map. We validated usefulness of these mechanisms, measuring agent's performance on the set of 50 interactive fiction games. Finally, we show that our agent plays on a level comparable to the winner of the last year Text-Based Adventure AI Competition."}
{"_id": "ffc5c53be8ec7e8e60d7a03f3e874bc1ca2b9f2d", "title": "One for All: Towards Language Independent Named Entity Linking", "text": "Entity linking (EL) is the task of disambiguating mentions in text by associating them with entries in a predefined database of mentions (persons, organizations, etc). Most previous EL research has focused mainly on one language, English, with less attention being paid to other languages, such as Spanish or Chinese. In this paper, we introduce LIEL, a Language Independent Entity Linking system, which provides an EL framework which, once trained on one language, works remarkably well on a number of different languages without change. LIEL makes a joint global prediction over the entire document, employing a discriminative reranking framework with many domain and language-independent feature functions. Experiments on numerous benchmark datasets, show that the proposed system, once trained on one language, English, outperforms several state-of-the-art systems in English (by 4 points) and the trained model also works very well on Spanish (14 points better than a competitor system), demonstrating the viability of the approach."}
{"_id": "1a8fd4b2f127d02f70f1c94f330628be31d18681", "title": "An approach to fuzzy control of nonlinear systems: stability and design issues", "text": ""}
{"_id": "c7e6b6b3b98e992c3fddd93e7a7c478612dacf94", "title": "Mining Professional's Data from LinkedIn", "text": "Social media has become very popular communication tool among internet users in the recent years. A large unstructured data is available for analysis on the social web. The data available on these sites have redundancies as users are free to enter the data according to their knowledge and interest. This data needs to be normalized before doing any analysis due to the presence of various redundancies in it. In this paper, LinkedIn data is extracted by using LinkedIn API and normalized by removing redundancies. Further, data is also normalized according to locations of LinkedIn connections using geo coordinates provided by Microsoft Bing. Then, clustering of this normalized data set is done according to job title, company names and geographic locations using Greedy, Hierarchical and K-Means clustering algorithms and clusters are visualized to have a better insight into them."}
{"_id": "bf3416ea02ea0d9327a7886136d7d7d5a66cf491", "title": "What do online behavioral advertising privacy disclosures communicate to users?", "text": "Online Behavioral Advertising (OBA), the practice of tailoring ads based on an individual's online activities, has led to privacy concerns. In an attempt to mitigate these privacy concerns, the online advertising industry has proposed the use of OBA disclosures: icons, accompanying taglines, and landing pages intended to inform users about OBA and provide opt-out options. We conducted a 1,505-participant online study to investigate Internet users' perceptions of OBA disclosures. The disclosures failed to clearly notify participants about OBA and inform them about their choices. Half of the participants remembered the ads they saw but only 12% correctly remembered the disclosure taglines attached to ads. When shown the disclosures again, the majority mistakenly believed that ads would pop up if they clicked on disclosures, and more participants incorrectly thought that clicking the disclosures would let them purchase advertisements than correctly understood that they could then opt out of OBA. \"AdChoices\", the most commonly used tagline, was particularly ineffective at communicating notice and choice. A majority of participants mistakenly believed that opting out would stop all online tracking, not just tailored ads. We dicuss challenges in crafting disclosures and provide suggestions for improvement."}
{"_id": "a32e46aba17837384af88c8b74e8d7ef702c35f6", "title": "Discrete Wigner Function Derivation of the Aaronson-Gottesman Tableau Algorithm", "text": "The Gottesman\u2013Knill theorem established that stabilizer states and Clifford operations can be efficiently simulated classically. For qudits with odd dimension three and greater, stabilizer states and Clifford operations have been found to correspond to positive discrete Wigner functions and dynamics. We present a discrete Wigner function-based simulation algorithm for odd-d qudits that has the same time and space complexity as the Aaronson\u2013Gottesman algorithm for qubits. We show that the efficiency of both algorithms is due to harmonic evolution in the symplectic structure of discrete phase space. The differences between the Wigner function algorithm for odd-d and the Aaronson\u2013Gottesman algorithm for qubits are likely due only to the fact that the Weyl\u2013Heisenberg group is not in SU(d) for d = 2 and that qubits exhibit state-independent contextuality. This may provide a guide for extending the discrete Wigner function approach to qubits."}
{"_id": "386fd8503314f0fa289cf52244fc71f851f20770", "title": "LRAGE: Learning Latent Relationships With Adaptive Graph Embedding for Aerial Scene Classification", "text": "The performance of scene classification relies heavily on the spatial and structural features that are extracted from high spatial resolution remote-sensing images. Existing approaches, however, are limited in adequately exploiting latent relationships between scene images. Aiming to decrease the distances between intraclass images and increase the distances between interclass images, we propose a latent relationship learning framework that integrates an adaptive graph with the constraints of the feature space and label propagation for high-resolution aerial image classification. To describe the latent relationships among scene images in the framework, we construct an adaptive graph that is embedded into the constrained joint space for features and labels. To remove redundant information and improve the computational efficiency, subspace learning is introduced to assist in the latent relationship learning. To address out-of-sample data, linear regression is adopted to project the semisupervised classification results onto a linear classifier. Learning efficiency is improved by minimizing the objective function via the linearized alternating direction method with an adaptive penalty. We test our method on three widely used aerial scene image data sets. The experimental results demonstrate the superior performance of our method over the state-of-the-art algorithms in aerial scene image classification."}
{"_id": "49b176947e06521e7c5d6966c7f94c78a1a975a8", "title": "Forming a story: the health benefits of narrative.", "text": "Writing about important personal experiences in an emotional way for as little as 15 minutes over the course of three days brings about improvements in mental and physical health. This finding has been replicated across age, gender, culture, social class, and personality type. Using a text-analysis computer program, it was discovered that those who benefit maximally from writing tend to use a high number of positive-emotion words, a moderate amount of negative-emotion words, and increase their use of cognitive words over the days of writing. These findings suggest that the formation of a narrative is critical and is an indicator of good mental and physical health. Ongoing studies suggest that writing serves the function of organizing complex emotional experiences. Implications for these findings for psychotherapy are briefly discussed."}
{"_id": "d880d303ee0bfdbc80fc34df0978088cd15ce861", "title": "Video Anomaly Detection and Localization via Gaussian Mixture Fully Convolutional Variational Autoencoder", "text": "\uf020 Abstract\u2014We present a novel end-to-end partially supervised deep learning approach for video anomaly detection and localization using only normal samples. The insight that motivates this study is that the normal samples can be associated with at least one Gaussian component of a Gaussian Mixture Model (GMM), while anomalies either do not belong to any Gaussian component. The method is based on Gaussian Mixture Variational Autoencoder, which can learn feature representations of the normal samples as a Gaussian Mixture Model trained using deep learning. A Fully Convolutional Network (FCN) that does not contain a fully-connected layer is employed for the encoder-decoder structure to preserve relative spatial coordinates between the input image and the output feature map. Based on the joint probabilities of each of the Gaussian mixture components, we introduce a sample energy based method to score the anomaly of image test patches. A two-stream network framework is employed to combine the appearance and motion anomalies, using RGB frames for the former and dynamic flow images, for the latter. We test our approach on two popular benchmarks (UCSD Dataset and Avenue Dataset). The experimental results verify the superiority of our method compared to the state of the arts."}
{"_id": "9b1e6362fcfd298cd382c0c24f282d86f174cac8", "title": "Text Classification from Positive and Unlabeled Data using Misclassified Data Correction", "text": "This paper addresses the problem of dealing with a collection of labeled training documents, especially annotating negative training documents and presents a method of text classification from positive and unlabeled data. We applied an error detection and correction technique to the results of positive and negative documents classified by the Support Vector Machines (SVM). The results using Reuters documents showed that the method was comparable to the current state-of-the-art biasedSVM method as the F-score obtained by our method was 0.627 and biased-SVM was 0.614."}
{"_id": "decb7e746acb87710c2a15585cd22133ffc2cc95", "title": "General Video Game AI: Competition, Challenges and Opportunities", "text": "The General Video Game AI framework and competition pose the problem of creating artificial intelligence that can play a wide, and in principle unlimited, range of games. Concretely, it tackles the problem of devising an algorithm that is able to play any game it is given, even if the game is not known a priori. This area of study can be seen as an approximation of General Artificial Intelligence, with very little room for game-dependent heuristics. This talk summarizes the motivation, infrastructure, results and future plans of General Video Game AI, stressing the findings and first conclusions drawn after two editions of our competition, presenting the tracks that will be held in 2016 and outlining our future plans."}
{"_id": "418e0760a75b0797ee355d4ca4f6db83df664f0f", "title": "Piaget : Implications for Teaching", "text": "Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission. The JSTOR Archive is a trusted digital repository providing for long-term preservation and access to leading academic journals and scholarly literature from around the world. The Archive is supported by libraries, scholarly societies, publishers, and foundations. It is an initiative of JSTOR, a not-for-profit organization with a mission to help the scholarly community take advantage of advances in technology. For more information regarding JSTOR, please contact support@jstor.org."}
{"_id": "03e8abcc388cc41b590d04289deafbf6075fdadf", "title": "Learning to Discriminate Noises for Incorporating External Information in Neural Machine Translation", "text": "Previous studies show that incorporating external information could improve the translation quality of Neural Machine Translation (NMT) systems. However, there are inevitably noises in the external information, severely reducing the benefit that the existing methods could receive from the incorporation. To tackle the problem, this study pays special attention to the discrimination of the noises during the incorporation. We argue that there exist two kinds of noise in this external information, i.e. global noise and local noise, which affect the translations for the whole sentence and for some specific words, respectively. Accordingly, we propose a general framework that learns to jointly discriminate both the global and local noises, so that the external information could be better leveraged. Our model is trained on the dataset derived from the original parallel corpus without any external labeled data or annotation. Experimental results in various real-world scenarios, language pairs, and neural architectures indicate that discriminating noises contributes to significant improvements in translation quality by being able to better incorporate the external information, even in very noisy conditions."}
{"_id": "c34bd1038a798a08fff2112a1a8815cd32f74ca1", "title": "Combining Multimodal Features with Hierarchical Classifier Fusion for Emotion Recognition in the Wild", "text": "Emotion recognition in the wild is a very challenging task. In this paper, we investigate a variety of different multimodal features from video and audio to evaluate their discriminative ability to human emotion analysis. For each clip, we extract SIFT, LBP-TOP, PHOG, LPQ-TOP and audio features. We train different classifiers for every kind of features on the dataset from EmotiW 2014 Challenge, and we propose a novel hierarchical classifier fusion method for all the extracted features. The final achievement we gained on the test set is 47.17% which is much better than the best baseline recognition rate of 33.7%."}
{"_id": "9ada9b211cd11406a7d71707598b2a9466fcc8c9", "title": "Efficient and robust feature extraction and selection for traffic classification", "text": "Given the limitations of traditional classification methods based on port number and payload inspection, a large number of studies have focused on developing classification approaches that use Transport Layer Statistics (TLS) features and Machine Learning (ML) techniques. However, classifying Internet traffic data using these approaches is still a difficult task because (1) TLS features are not very robust for traffic classification because they cannot capture the complex non-linear characteristics of Internet traffic, and (2) the existing Feature Selection (FS) techniques cannot reliably provide optimal and stable features for ML algorithms. With the aim of addressing these problems, this paper presents a novel feature extraction and selection approach. First, multifractal features are extracted from traffic flows using a Wavelet Leaders Multifractal Formalism(WLMF) to depict the traffic flows; next, a Principal Component Analysis (PCA)-based FS method is applied on these multifractal features to remove the irrelevant and redundant features. Based on real traffic traces, the experimental results demonstrate significant improvement in accuracy of Support Vector Machines (SVMs) comparing to the TLS features studied in existing ML-based approaches. Furthermore, the proposed approach is suitable for real time traffic classification because of the ability of classifying traffic at the early stage of traffic transmission."}
{"_id": "6b7370613bca4047addf8fba1e3a465c47cef4f3", "title": "Unsupervised Interpretable Pattern Discovery in Time Series Using Autoencoders", "text": "We study the use of feed-forward convolutional neural networks for the unsupervised problem of mining recurrent temporal patterns mixed in multivariate time series. Traditional convolutional autoencoders lack interpretability for two main reasons: the number of patterns corresponds to the manually-fixed number of convolution filters, and the patterns are often redundant and correlated. To recover clean patterns, we introduce different elements in the architecture, including an adaptive rectified linear unit function that improves patterns interpretability, and a group-lasso regularizer that helps automatically finding the relevant number of patterns. We illustrate the necessity of these elements on synthetic data and real data in the context of activity mining in videos."}
{"_id": "f5af95698fe16f17aeee452e7bf5463c0ce1b1c5", "title": "A Comparison of Controllers for Balancing Two Wheeled Inverted Pendulum Robot", "text": "One of the challenging tasks concerning two wheeled inverted pendulum (TWIP) mobile robot is balancing its tilt to upright position, this is due to its inherently open loop instability. This paper presents an experimental comparison between model based controller and non-model based controllers in balancing the TWIP mobile robot. A Fuzzy Logic Controller (FLC) which is a non-model based controller and a Linear Quadratic Controller (LQR) which is a model-based controller, and the conventional controller, Proportional Integral Derivative (PID) were implemented and compared on a real time TWIP mobile robot. The FLC controller performance has given superior result as compared to LQR and PID in terms of speed response but consumes higher energy. Index Term-Two Wheeled Inverted Pendulum (TWIP), Fuzzy Logic Controller (FLC), Linear Quadratic Controller (LQR), Euler Lagrange equations."}
{"_id": "c872aef8c29a4717c04de39298ef7069967ae7a5", "title": "Cognitive enhancement.", "text": "Cognitive enhancement refers to the improvement of cognitive ability in normal healthy individuals. In this article, we focus on the use of pharmaceutical agents and brain stimulation for cognitive enhancement, reviewing the most common methods of pharmacologic and electronic cognitive enhancement, and the mechanisms by which they are believed to work, the effectiveness of these methods and their prevalence. We note the many gaps in our knowledge of these matters, including open questions about the size, reliability and nature of the enhancing effects, and we conclude with recommendations for further research. WIREs Cogn Sci 2014, 5:95-103. doi: 10.1002/wcs.1250 CONFLICT OF INTEREST: The authors have declared no conflicts of interest for this article. For further resources related to this article, please visit the WIREs website."}
{"_id": "89d09965626167360aea4e414f889b06074491da", "title": "Belt: An Unobtrusive Touch Input Device for Head-worn Displays", "text": "Belt is a novel unobtrusive input device for wearable displays that incorporates a touch surface encircling the user's hip. The wide input space is leveraged for a horizontal spatial mapping of quickly accessible information and applications. We discuss social implications and interaction capabilities for unobtrusive touch input and present our hardware implementation and a set of applications that benefit from the quick access time. In a qualitative user study with 14 participants we found out that for short interactions (2-4 seconds), most of the surface area is considered as appropriate input space, while for longer interactions (up to 10 seconds), the front areas above the trouser pockets are preferred."}
{"_id": "a3a7ba295543b637eac79db24436b96356944375", "title": "Redirected Walking in Place", "text": "This paper describes a method for allowing people to virtually move around a CAVETM without ever having to turn to face the missing back wall. We describe the method, and report a pilot study of 28 participants, half of whom moved through the virtual world using a hand-held controller, and the other half used the new technique called \u2018Redirected Walking in Place\u2019 (RWP). The results show that the current instantiation of the RWP technique does not result in a lower frequency of looking towards the missing wall. However, the results also show that the sense of presence in the virtual environment is significantly and negatively correlated with the amount that the back wall is seen. There is evidence that RWP does reduce the chance of seeing the blank wall for some participants. The increased sense of presence through never having to face the blank wall, and the results of this pilot study show the RWP has promise and merits further development."}
{"_id": "036b5edb74849dd68ad44be84f951c139c3b1738", "title": "On temporal-spatial realism in the virtual reality environment", "text": "The Polhemus Isotrak is often used as an orientation and position tracking device in virtual reality environments. When it is used to dynamically determine the user\u2019s viewpoint and line of sight ( e.g. in the case of a head mounted display) the noise and delay in its measurement data causes temporal-spatial distortion, perceived by the user as jittering of images and lag between head movement and visual feedback. To tackle this problem, we first examined the major cause of the distortion, and found that the lag felt by the user is mainly due to the delay in orientation data, and the jittering of images is caused mostly by the noise in position data. Based on these observations, a predictive Kalman filter was designed to compensate for the delay in orientation data, and an anisotropic low pass filter was devised to reduce the noise in position data. The effectiveness and limitations of both approaches were then studied, and the results shown to be satisfactory."}
{"_id": "07d6e71722aac1ad7e51c48955ea5a04fcaedf35", "title": "Virtual Reality on a WIM: Interactive Worlds in Miniature", "text": "This paper explores a user interface technique which augments an immersive head tracked display with a hand-held miniature copy of the virtual environment. We call this interface technique the Worlds in Miniature (WIM) metaphor. In addition to the first-person perspective offered by a virtual reality system, a World in Miniature offers a second dynamic viewport onto the virtual environment. Objects may be directly manipulated either through the immersive viewport or through the three-dimensional viewport offered by the WIM. In addition to describing object manipulation, this paper explores ways in which Worlds in Miniature can act as a single unifying metaphor for such application independent interaction techniques as object selection, navigation, path planning, and visualization. The WIM metaphor offers multiple points of view and multiple scales at which the user can operate, without requiring explicit modes or commands. Informal user observation indicates that users adapt to the Worlds in Miniature metaphor quickly and that physical props are helpful in manipulating the WIM and other objects in the environment."}
{"_id": "0a5ad27461c93fefd2665e550776417f416997d4", "title": "Recognizing Textual Entailment via Multi-task Knowledge Assisted LSTM", "text": "Recognizing Textual Entailment (RTE) plays an important role in NLP applications like question answering, information retrieval, etc. Most previous works either use classifiers to employ elaborately designed features and lexical similarity or bring distant supervision and reasoning technique into RTE task. However, these approaches are hard to generalize due to the complexity of feature engineering and are prone to cascading errors and data sparsity problems. For alleviating the above problems, some work use LSTM-based recurrent neural network with word-by-word attention to recognize textual entailment. Nevertheless, these work did not make full use of knowledge base (KB) to help reasoning. In this paper, we propose a deep neural network architecture called Multi-task Knowledge Assisted LSTM (MKAL), which aims to conduct implicit inference with the assistant of KB and use predicate-topredicate attention to detect the entailment between predicates. In addition, our model applies a multi-task architecture to further improve the performance. The experimental results show that our proposed method achieves a competitive result compared to the previous work."}
{"_id": "7bcd8c63eee548a4d269d6572af0d35a837aaea8", "title": "Leaf segmentation in plant phenotyping: a collation study", "text": "Image-based plant phenotyping is a growing application area of computer vision in agriculture. A key task is the segmentation of all individual leaves in images. Here we focus on the most common rosette model plants, Arabidopsis and young tobacco. Although leaves do share appearance and shape characteristics, the presence of occlusions and variability in leaf shape and pose, as well as imaging conditions, render this problem challenging. The aim of this paper is to compare several leaf segmentation solutions on a unique and first-of-its-kind dataset containing images from typical phenotyping experiments. In particular, we report and discuss methods and findings of a collection of submissions for the first Leaf Segmentation Challenge of the Computer Vision Problems in Plant Phenotyping workshop in 2014. Four methods are presented: three segment leaves by processing the distance transform in an unsupervised fashion, and the other via optimal template selection and Chamfer matching. Overall, we find that although separating plant from background can be accomplished with satisfactory accuracy ( $$>$$ > 90\u00a0% Dice score), individual leaf segmentation and counting remain challenging when leaves overlap. Additionally, accuracy is lower for younger leaves. We find also that variability in datasets does affect outcomes. Our findings motivate further investigations and development of specialized algorithms for this particular application, and that challenges of this form are ideally suited for advancing the state of the art. Data are publicly available (online at http://www.plant-phenotyping.org/datasets ) to support future challenges beyond segmentation within this application domain."}
{"_id": "09f02eee625b7aa6ba7e6f31cfb56f6d4ddd0fdd", "title": "MAPS: A Multi Aspect Personalized POI Recommender System", "text": "The evolution of the World Wide Web (WWW) and the smart-phone technologies have played a key role in the revolution of our daily life. The location-based social networks (LBSN) have emerged and facilitated the users to share the check-in information and multimedia contents. The Point of Interest (POI) recommendation system uses the check-in information to predict the most potential check-in locations. The different aspects of the check-in information, for instance, the geographical distance, the category, and the temporal popularity of a POI; and the temporal check-in trends, and the social (friendship) information of a user play a crucial role in an efficient recommendation. In this paper, we propose a fused recommendation model termed MAPS (Multi Aspect Personalized POI Recommender System) which will be the first in our knowledge to fuse the categorical, the temporal, the social and the spatial aspects in a single model. The major contribution of this paper are: (i) it realizes the problem as a graph of location nodes with constraints on the category and the distance aspects (i.e. the edge between two locations is constrained by a threshold distance and the category of the locations), (ii) it proposes a multi-aspect fused POI recommendation model, and (iii) it extensively evaluates the model with two real-world data sets."}
{"_id": "764c2fb8d8a7972eb0d520db3db53c38668c0c87", "title": "A Double-Sided Parallel-Strip Line Push\u2013Pull Oscillator", "text": "A novel double-sided parallel-strip line (DSPSL) push-pull oscillator using two identical sub-oscillators on the opposite sides of a dielectric substrate is proposed. The two sub-oscillators, sharing a common DSPSL resonator and common ground in the middle of the substrate, generate out-of-phase fundamental signals and in-phase second harmonics. At the common DSPSL output, the second harmonics are cancelled out while the fundamental signals are well combined. By this design, an additional combiner at the output, as required by the conventional push-pull circuits, is not needed, which greatly reduces the circuit size and simplifies the design procedures of the proposed push-pull oscillator."}
{"_id": "f719dac4d748bc8b8f371ec14f72d96be34b2c28", "title": "Use of Non-conductive Film (NCF) with Nano-Sized Filler Particles for Solder Interconnect: Research and Development on NCF Material and Process Characterization", "text": "As three-dimensional Through-Silicon Via (3D-TSV) packaging is emerging in the semiconductor market to integrate multiple functions in a system for further miniaturization, thermal compression bonding (TCB), which stacks multiple bare chips on top of each other with Cu pillar bumps with solder cap, has become an indispensable new packaging technology. The novel non-conductive film (NCF) described in this paper is an epoxy-type thermosetting material in film form, with lower density of Nano-sized silica filler particles (average particle size is 100 Nano meters). Advantages of this NCF material with Nano-sized fillers include: transparency of the NCF material so that the TCB bonder's image recognition system can easily identify fiducial marks on the chip, ability of the Nano-filler to flow out with the NCF resin during thermal compression bonding, to mitigate filler entrapment between solder joints, which is critical to ensure reliable solder connections, compatibility with fine pitch applications with extremely narrow chip-to-chip and chip-to-substrate gaps to form void-free underfill. Instead of the previous processes of die attach, fluxing, traditional oven mass re-flow, flux cleaning, and capillary underfill (CUF), the current process involves the use of NCF & TCB. The NCF is applied on a wafer in a film form by lamination, and then a wafer is diced with the pre-laminated NCF in pieces. Then they are joined by TCB, with typical parameters of 250\u00b0C for 10 seconds with 80N (solder cap on 15\u03bcm diameter Cu pillar, 40\u03bcm pitch, 1000 bumps). NCF heated by TCB is quickly liquidized to lower its viscosity after a few seconds. Other advantages of NCF is that it has a fluxing function included in the material, which eliminates the need for separate flux apply and flux clean steps, which simplifies process and costs. Also, NCF can control extrusion along the package's edge line since NCF is a half-cured B-stage material which has some mechanical rigidity during handling and lamination. Thus NCF Thermal Compression Flip Chip Bonding method is the vital solution to ensure good reliability with mass productivity for next generation's packaging. The characterization of NCF material and the importance of controlling viscosity and elastic modulus during thermal-compression will be analyzed and discussed in this paper. Process and reliability data on test vehicles will be shown."}
{"_id": "4e821539277add3f2583845864cc6741216f0328", "title": "Asymmetry-Aware Link Quality Services in Wireless Sensor Networks", "text": "Recent study in wireless sensor networks (WSN) has found that the irregular link quality is a common phenomenon. The irregular link quality, especially link asymmetry, has significant impacts on the design of WSN protocols, such as MAC protocols, neighborhood and topology discovery protocols, and routing protocols. In this paper, we propose asymmetryaware link quality services, including the neighborhood link quality service (NLQS) and the link relay service (LRS), to provide the timeliness link quality information of neighbors, and build a relay framework to alleviate effects of the link asymmetry. To demonstrate proposed link quality services, we design and implement two example applications, the shortest hops routing tree (SHRT) and the best path reliability routing tree (BRRT), in the TinyOS platform. To evaluate proposed link quality services, we have conducted both static analysis and simulation through the TOSSIM simulator, in terms of four performance metrics. We found that the performance of two example applications was improved substantially. More than 40% of nodes identify more outbound neighbors and the percentage of increased outbound neighbors is between 14% and 100%. In SHRT, more than 15% of nodes reduce hops of the routing tree and the percentage of reduced hops is between 14% and 100%. In BRRT, more than 16% of nodes improve the path reliability of the routing tree and the percentage of the improved path reliability is between 2% to 50%."}
{"_id": "6b99529d792427f037bf7b415128649943f757e4", "title": "Mortality in British vegetarians: review and preliminary results from EPIC-Oxford.", "text": "BACKGROUND\nThree prospective studies have examined the mortality of vegetarians in Britain.\n\n\nOBJECTIVE\nWe describe these 3 studies and present preliminary results on mortality from the European Prospective Investigation into Cancer and Nutrition-Oxford (EPIC-Oxford).\n\n\nDESIGN\nThe Health Food Shoppers Study and the Oxford Vegetarian Study were established in the 1970s and 1980s, respectively; each included about 11 000 subjects and used a short questionnaire on diet and lifestyle. EPIC-Oxford was established in the 1990s and includes about 56 000 subjects who completed detailed food frequency questionnaires. Mortality in all 3 studies was followed though the National Health Service Central Register.\n\n\nRESULTS\nOverall, the death rates of all the subjects in all 3 studies are much lower than average for the United Kingdom. Standardized mortality ratios (95% CIs) for all subjects were 59% (57%, 61%) in the Health Food Shoppers Study, 52% (49%, 56%) in the Oxford Vegetarian Study, and 39% (37%, 42%) in EPIC-Oxford. Comparing vegetarians with nonvegetarians within each cohort, the death rate ratios (DRRs), adjusted for age, sex and smoking, were 1.03 (0.95, 1.13) in the Health Food Shoppers Study, 1.01 (0.89, 1.14) in the Oxford Vegetarian Study, and 1.05 (0.86, 1.27) in EPIC-Oxford. DRRs for ischemic heart disease in vegetarians compared with nonvegetarians were 0.85 (0.71, 1.01) in the Health Food Shoppers Study, 0.86 (0.67, 1.12) in the Oxford Vegetarian Study, and 0.75 (0.41, 1.37) in EPIC-Oxford.\n\n\nCONCLUSIONS\nThe mortality of both the vegetarians and the nonvegetarians in these studies is low compared with national rates. Within the studies, mortality for major causes of death was not significantly different between vegetarians and nonvegetarians, but the nonsignificant reduction in mortality from ischemic heart disease among vegetarians was compatible with the significant reduction previously reported in a pooled analysis of mortality in Western vegetarians."}
{"_id": "56a2d82f1b89304a80fbeea91accba776a07e55a", "title": "Cloud-assisted industrial cyber-physical systems: An insight", "text": "The development of industrialization and information communication technology (ICT) has deeply changed our way of life. In particular, with the emerging theory of \u2018\u2018Industry 4.0\u201d, the integration of cloud technologies and industrial cyber-physical systems (ICPS) becomes increasingly important, as this will greatly improve the manufacturing chain and business services. In this paper, we first describe the development and character of ICPS. ICPS will inevitably play an important role in manufacturing, sales, and logistics. With the support of the cloud, ICPS development will impact value creation, business models, downstream services, and work organization. Then, we present a service-oriented ICPS model. With the support of the cloud, infrastructure platform and service application, ICPS will promote the manufacturing efficiency, increase quality of production, enable a sustainable industrial system and more environmentally friendly businesses. Thirdly, we focus on some key enabling technologies, which are critical in supporting smart factories. These key enabling technologies will also help companies to realize high quality, high output, and low cost. Finally, we talk about some challenges of ICPS implementation and the future work. 2015 Elsevier B.V. All rights reserved."}
{"_id": "c6240d0cb51e0fe7f838f5437463f5eeaf5563d0", "title": "Lessons Learned: The Complexity of Accurate Identification of in-Text Citations", "text": "The importance of citations is widely recognized by the scientific community. Citations are being used in making a number of vital decisions such as calculating impact factor of journals, calculating impact of a researcher (H-Index), ranking universities and research organizations. Furthermore, citation indexes, along with other criteria, employ citation counts to retrieve and rank relevant research papers. However, citing patterns and in-text citation frequency are not used for such important decisions. The identification of in-text citation from a scientific document is an important problem. However, identification of in-text citation is a tough ask due to the ambiguity between citation tag and content. This research focuses on in-text citation analysis and makes the following specific contributions such as: Provides detailed in-text citation analysis on 16,000 citations of an online journal, reports different pattern of citations-tags and its in-text citations and highlights the problems (mathematical ambiguities, wrong allotments, commonality in content and string variation) in identifying in-text citations from scientific documents. The accurate identification of in-text citations will help information retrieval systems, digital libraries and citation indexes."}
{"_id": "238188daf1ceb8000447c4321125a30ad45c55b8", "title": "Transimpedance Amplifier ( TIA ) Design for 400 Gb / s Optical Fiber Communications", "text": "(ABSTRACT) Analogcircuit/IC design for high speed optical fiber communication is a fairly new research area in Dr. Ha's group. In the first project sponsored by ETRI (Electronics and Telecommunication Research Institute) we started to design the building blocks of receiver for next generation 400 Gb/s optical fiber communication. In this thesis research a transceiver architecture based on 4x100 Gb/s parallel communication is proposed. As part of the receiver, a transimpedance amplifier for 100 Gb/s optical communication is designed, analyzed and simulated. Simulation results demonstrate the excellent feasibility of proposed architecture. always dominated the high speed optical transceiver design because of their inherent properties of high mobility and low noise. But they are power hungry and bulky in size which made them less attractive for highly integrated circuit design. On the contrary, CMOS technology always drew attraction because of low cost, low power dissipation and high level of integration facility. But their notorious parasitic characteristic and inferior noise performance makes high speed transceiver design very challenging. The emergence of nano-scale CMOS offer highly scaled feature sized transistors with transition frequencies exceeding 200 GHz and can improve optical receiver performance significantly. Increasing bandwidth to meet the target data rate is the most challenging task of TIA design especially in CMOS technology. Several CMOS TIA architectures have been published recently [6]-[11] for 40 Gb/s data rate having bandwidth no more than 40 GHz. In contrast to existing works, the goal of this research is to step further and design a single channel stand-alone iii TIA compatible in serial 100 Gb/s data rate with enhanced bandwidth and optimized transimpedance gain, input referred noise and group delay variation. A 100 Gb/s transimpedance amplifier (TIA) for optical receiver front end is designed in this work. To achieve wide bandwidth and low group delay variation a differential TIA with active feedback network is proposed. Proposed design also combines regulated cascode front end, peaking inductors and capacitive degeneration to have wide band response. Simulation results show 70 GHz bandwidth, 42 dB\u03a9 transimpedance gain and 2.8 ps of group delay variation for proposed architecture. Input referred noise current density is 26 pA/\u221a while the total power dissipation from 1.2V supply is 24mW. Performance of the proposed TIA is compared with other existing TIAs, and the proposed TIA shows significant improvement in bandwidth and group delay variation compared to other existing TIA architectures. iv To my parents v Acknowledgements First \u2026"}
{"_id": "b58a85e46d365e47ce937ccc09d60fbcd0fc22d4", "title": "Gadge me if you can: secure and efficient ad-hoc instruction-level randomization for x86 and ARM", "text": "Code reuse attacks such as return-oriented programming are one of the most powerful threats to contemporary software. ASLR was introduced to impede these attacks by dispersing shared libraries and the executable in memory. However, in practice its entropy is rather low and, more importantly, the leakage of a single address reveals the position of a whole library in memory. The recent mitigation literature followed the route of randomization, applied it at different stages such as source code or the executable binary. However, the code segments still stay in one block. In contrast to previous work, our randomization solution, called Xifer, (1) disperses all code (executable and libraries) across the whole address space, (2) re-randomizes the address space for each run, (3) is compatible to code signing, and (4) does neither require offline static analysis nor source-code. Our prototype implementation supports the Linux ELF file format and covers both mainstream processor architectures x86 and ARM. Our evaluation demonstrates that Xifer performs efficiently at load- and during run-time (1.2% overhead)."}
{"_id": "8c0031cd1df734ac224c8c1daf3ce858140c99d5", "title": "Correlated Topic Models", "text": "Topic models, such as latent Dirichlet allocation (LDA), can be useful tools for the statistical analysis of document collections and other discrete data. The LDA model assumes that the words of each document arise from a mixture of topics, each of which is a distribution over the vocabulary. A limitation of LDA is the inability to model topic correlation even though, for example, a document about genetics is more likely to also be about disease than x-ray astronomy. This limitation stems from the use of the Dirichlet distribution to model the variability among the topic proportions. In this paper we develop the correlated topic model (CTM), where the topic proportions exhibit correlation via the logistic normal distribution [1]. We derive a mean-field variational inference algorithm for approximate posterior inference in this model, which is complicated by the fact that the logistic normal is not conjugate to the multinomial. The CTM gives a better fit than LDA on a collection of OCRed articles from the journal Science. Furthermore, the CTM provides a natural way of visualizing and exploring this and other unstructured data sets."}
{"_id": "10d10df314c1b58f5c83629e73a35185876cd4e2", "title": "Multi-task Gaussian Process Prediction", "text": "In this paper we investigate multi-task learning in the context of Gaussian Processes (GP). We propose a model that learns a shared covariance function on input-dependent features and a \u201cfree-form\u201d covariance matrix over tasks. This allows for good flexibility when modelling inter-task dependencies while avoiding the need for large amounts of data for training. We show that under the assumption of noise-free observations and a block design, predictions for a given task only depend on its target values and therefore a cancellation of inter-task transfer occurs. We evaluate the benefits of our model on two practical applications: a compiler performance prediction problem and an exam score prediction task. Additionally, we make use of GP approximations and properties of our model in order to provide scalability to large data sets."}
{"_id": "c3e2ad2da16f15d212817f833d7dec238a45154d", "title": "Recognizing Human Activities from Raw Accelerometer Data Using Deep Neural Networks", "text": "Activity recognition from wearable sensor data has been researched for many years. Previous works usually extracted features manually, which were hand-designed by the researchers, and then were fed into the classifiers as the inputs. Due to the blindness of manually extracted features, it was hard to choose suitable features for the specific classification task. Besides, this heuristic method for feature extraction could not generalize across different application domains, because different application domains needed to extract different features for classification. There was also work that used auto-encoders to learn features automatically and then fed the features into the K-nearest neighbor classifier. However, these features were learned in an unsupervised manner without using the information of the labels, thus might not be related to the specific classification task. In this paper, we recommend deep neural networks (DNNs) for activity recognition, which can automatically learn suitable features. DNNs overcome the blindness of hand-designed features and make use of the precious label information to improve activity recognition performance. We did experiments on three publicly available datasets for activity recognition and compared deep neural networks with traditional methods, including those that extracted features manually and auto-encoders followed by a K-nearest neighbor classifier. The results showed that deep neural networks could generalize across different application domains and got higher accuracy than traditional methods."}
{"_id": "730fb0508ddb4dbcbd009f326b0298bfdbe9da8c", "title": "Sketch Recognition by Ensemble Matching of Structured Features", "text": "Sketch recognition aims to automatically classify human hand sketches of objects into known categories. This has become increasingly a desirable capability due to recent advances in human computer interaction on portable devices. The problem is nontrivial because of the sparse and abstract nature of hand drawings as compared to photographic images of objects, compounded by a highly variable degree of details in human sketches. To this end, we present a method for the representation and matching of sketches by exploiting not only local features but also global structures of sketches, through a star graph based ensemble matching strategy. Different local feature representations were evaluated using the star graph model to demonstrate the effectiveness of the ensemble matching of structured features. We further show that by encapsulating holistic structure matching and learned bag-of-features models into a single framework, notable recognition performance improvement over the state-of-the-art can be observed. Extensive comparative experiments were carried out using the currently largest sketch dataset released by Eitz et al. [15], with over 20,000 sketches of 250 object categories generated by AMT (Amazon Mechanical Turk) crowd-sourcing."}
{"_id": "5551e57e8d215519d8a671321d7a0d99e5ad53f0", "title": "Measuring complexity using FuzzyEn, ApEn, and SampEn.", "text": "This paper compares three related measures of complexity, ApEn, SampEn, and FuzzyEn. Since vectors' similarity is defined on the basis of the hard and sensitive boundary of Heaviside function in ApEn and SampEn, the two families of statistics show high sensitivity to the parameter selection and may be invalid in case of small parameter. Importing the concept of fuzzy sets, we developed a new measure FuzzyEn, where vectors' similarity is defined by fuzzy similarity degree based on fuzzy membership functions and vectors' shapes. The soft and continuous boundaries of fuzzy functions ensure the continuity as well as the validity of FuzzyEn at small parameters. The more details obtained by fuzzy functions also make FuzzyEn a more accurate entropy definition than ApEn and SampEn. In addition, similarity definition based on vectors' shapes, together with the exclusion of self-matches, earns FuzzyEn stronger relative consistency and less dependence on data length. Both theoretical analysis and experimental results show that FuzzyEn provides an improved evaluation of signal complexity and can be more conveniently and powerfully applied to short time series contaminated by noise."}
{"_id": "4d276039b421dfcd9328f452c20a890d3ed2ac96", "title": "Cannabis, a complex plant: different compounds and different effects on individuals.", "text": "Cannabis is a complex plant, with major compounds such as delta-9-tetrahydrocannabinol and cannabidiol, which have opposing effects. The discovery of its compounds has led to the further discovery of an important neurotransmitter system called the endocannabinoid system. This system is widely distributed in the brain and in the body, and is considered to be responsible for numerous significant functions. There has been a recent and consistent worldwide increase in cannabis potency, with increasing associated health concerns. A number of epidemiological research projects have shown links between dose-related cannabis use and an increased risk of development of an enduring psychotic illness. However, it is also known that not everyone who uses cannabis is affected adversely in the same way. What makes someone more susceptible to its negative effects is not yet known, however there are some emerging vulnerability factors, ranging from certain genes to personality characteristics. In this article we first provide an overview of the biochemical basis of cannabis research by examining the different effects of the two main compounds of the plant and the endocannabinoid system, and then go on to review available information on the possible factors explaining variation of its effects upon different individuals."}
{"_id": "144fea3fd43bd5ef6569768925425e5607afa1f0", "title": "Insertion, Deletion, or Substitution? Normalizing Text Messages without Pre-categorization nor Supervision", "text": "Most text message normalization approaches are based on supervised learning and rely on human labeled training data. In addition, the nonstandard words are often categorized into different types and specific models are designed to tackle each type. In this paper, we propose a unified letter transformation approach that requires neither pre-categorization nor human supervision. Our approach models the generation process from the dictionary words to nonstandard tokens under a sequence labeling framework, where each letter in the dictionary word can be retained, removed, or substituted by other letters/digits. To avoid the expensive and time consuming hand labeling process, we automatically collected a large set of noisy training pairs using a novel webbased approach and performed character-level alignment for model training. Experiments on both Twitter and SMS messages show that our system significantly outperformed the stateof-the-art deletion-based abbreviation system and the jazzy spell checker (absolute accuracy gain of 21.69% and 18.16% over jazzy spell checker on the two test sets respectively)."}
{"_id": "04d7b7851683809cab561d09b5c5c80bd5c33c80", "title": "What's in an Explanation? Characterizing Knowledge and Inference Requirements for Elementary Science Exams", "text": "QA systems have been making steady advances in the challenging elementary science exam domain. In this work, we develop an explanation-based analysis of knowledge and inference requirements, which supports a fine-grained characterization of the challenges. In particular, we model the requirements based on appropriate sources of evidence to be used for the QA task. We create requirements by first identifying suitable sentences in a knowledge base that support the correct answer, then use these to build explanations, filling in any necessary missing information. These explanations are used to create a fine-grained categorization of the requirements. Using these requirements, we compare a retrieval and an inference solver on 212 questions. The analysis validates the gains of the inference solver, demonstrating that it answers more questions requiring complex inference, while also providing insights into the relative strengths of the solvers and knowledge sources. We release the annotated questions and explanations as a resource with broad utility for science exam QA, including determining knowledge base construction targets, as well as supporting information aggregation in automated inference."}
{"_id": "22f7c40bd3c188e678796d2f1ad9c19a745e83c7", "title": "Is imitation learning the route to humanoid robots?", "text": "This review investigates two recent developments in artificial intelligence and neural computation: learning from imitation and the development of humanoid robots. It is postulated that the study of imitation learning offers a promising route to gain new insights into mechanisms of perceptual motor control that could ultimately lead to the creation of autonomous humanoid robots. Imitation learning focuses on three important issues: efficient motor learning, the connection between action and perception, and modular motor control in the form of movement primitives. It is reviewed here how research on representations of, and functional connections between, action and perception have contributed to our understanding of motor acts of other beings. The recent discovery that some areas in the primate brain are active during both movement perception and execution has provided a hypothetical neural basis of imitation. Computational approaches to imitation learning are also described, initially from the perspective of traditional AI and robotics, but also from the perspective of neural network models and statistical-learning research. Parallels and differences between biological and computational approaches to imitation are highlighted and an overview of current projects that actually employ imitation learning for humanoid robots is given."}
{"_id": "248040fa359a9f18527e28687822cf67d6adaf16", "title": "A survey of robot learning from demonstration", "text": "We present a comprehensive survey of robot Learning from Demonstration (LfD), a technique that develops policies from example state to action mappings. We introduce the LfD design choices in terms of demonstrator, problem space, policy derivation and performance, and contribute the foundations for a structure in which to categorize LfD research. Specifically, we analyze and categorize the multiple ways in which examples are gathered, ranging from teleoperation to imitation, as well as the various techniques for policy derivation, including matching functions, dynamics models and plans. To conclude we discuss LfD limitations and related promising areas for future research."}
{"_id": "37ca75f5f6664fc9ed835a53b48258ec92eb73cd", "title": "Learning Dependency-Based Compositional Semantics", "text": "Suppose we want to build a system that answers a natural language question by representing its semantics as a logical forxm and computing the answer given a structured database of facts. The core part of such a system is the semantic parser that maps questions to logical forms. Semantic parsers are typically trained from examples of questions annotated with their target logical forms, but this type of annotation is expensive.Our goal is to instead learn a semantic parser from question\u2013answer pairs, where the logical form is modeled as a latent variable. We develop a new semantic formalism, dependency-based compositional semantics (DCS) and define a log-linear distribution over DCS logical forms. The model parameters are estimated using a simple procedure that alternates between beam search and numerical optimization. On two standard semantic parsing benchmarks, we show that our system obtains comparable accuracies to even state-of-the-art systems that do require annotated logical forms."}
{"_id": "900cfd2af153772ffe0db3e60a6e2b9ec381e12f", "title": "Understanding physiological responses to stressors during physical activity", "text": "With advances in physiological sensors, we are able to understand people's physiological status and recognize stress to provide beneficial services. Despite the great potential in physiological stress recognition, there are some critical issues that need to be addressed such as the sensitivity and variability of physiology to many factors other than stress (e.g., physical activity). To resolve these issues, in this paper, we focus on the understanding of physiological responses to both stressor and physical activity and perform stress recognition, particularly in situations having multiple stimuli: physical activity and stressors. We construct stress models that correspond to individual situations, and we validate our stress modeling in the presence of physical activity. Analysis of our experiments provides an understanding on how physiological responses change with different stressors and how physical activity confounds stress recognition with physiological responses. In both objective and subjective settings, the accuracy of stress recognition drops by more than 14% when physical activity is performed. However, by modularizing stress models with respect to physical activity, we can recognize stress with accuracies of 82% (objective stress) and 87% (subjective stress), achieving more than a 5-10% improvement from approaches that do not take physical activity into account."}
{"_id": "3572b462c94b5aba749f567628606c46fa124118", "title": "Identifying Learning Styles in Learning Management Systems by Using Indications from Students' Behaviour", "text": "Making students aware of their learning styles and presenting them with learning material that incorporates their individual learning styles has potential to make learning easier for students and increase their learning progress. This paper proposes an automatic approach for identifying learning styles with respect to the Felder-Silverman learning style model by inferring their learning styles from their behaviour during they are learning in an online course. The approach was developed for learning management systems, which are commonly used in e-learning. In order to evaluate the proposed approach, a study with 127 students was performed, comparing the results of the automatic approach with those of a learning style questionnaire. The evaluation yielded good results and demonstrated that the proposed approach is suitable for identifying learning styles. By using the proposed approach, studentspsila learning styles can be identified automatically and be used for supporting students by considering their individual learning styles."}
{"_id": "24acb0110e57de29f5be55f52887b3cd41d1bf12", "title": "Disentangling top-down vs. bottom-up and low-level vs. high-level influences on eye movements over time", "text": "Bottom-up and top-down, as well as low-level and high-level factors influence where we fixate when viewing natural scenes. However, the importance of each of these factors and how they interact remains a matter of debate. Here, we disentangle these factors by analysing their influence over time. For this purpose we develop a saliency model which is based on the internal representation of a recent early spatial vision model to measure the low-level bottom-up factor. To measure the influence of high-level bottom-up features, we use a recent DNN-based saliency model. To account for top-down influences, we evaluate the models on two large datasets with different tasks: first, a memorisation task and, second, a search task. Our results lend support to a separation of visual scene exploration into three phases: The first saccade, an initial guided exploration characterised by a gradual broadening of the fixation density, and an steady state which is reached after roughly 10 fixations. Saccade target selection during the initial exploration and in the steady state are related to similar areas of interest, which are better predicted when including high-level features. In the search dataset, fixation locations are determined predominantly by top-down processes. In contrast, the first fixation follows a different fixation density and contains a strong central fixation bias. Nonetheless, first fixations are guided strongly by image properties and as early as 200ms after image onset, fixations are better predicted by high-level information. We conclude that any low-level bottom-up factors are mainly limited to the generation of the first saccade. All saccades are better explained when high-level features are considered, and later this high-level bottom-up control can be overruled by top-down influences."}
{"_id": "89c44bb1af32a27e020a703e02bbe738158081cd", "title": "Workspace characterization for concentric tube continuum robots", "text": "Concentric tube robots exhibit complex workspaces due to the way their component tubes bend and twist as they interact with one another. This paper explores ways to compute and characterize their workspaces. We use Monte Carlo random samples of the robot's joint space and a discrete volumetric workspace representation, which can describe both reachability and redundancy. Experiments on two physical prototypes are provided to illustrate the proposed approach."}
{"_id": "fc73204706bf79a52ba7b65bbf2cf77fa5072799", "title": "The use of vacuum assisted closure (VAC\u2122) in soft tissue injuries after high energy pelvic trauma", "text": "Application of vacuum-assisted closure (VAC\u2122) in soft tissue defects after high-energy pelvic trauma is described as a retrospective study in a level one trauma center. Between 2002 and 2004, 13 patients were treated for severe soft tissue injuries in the pelvic region. All musculoskeletal injuries were treated with multiple irrigation and debridement procedures and broad-spectrum antibiotics. VAC\u2122 was applied as a temporary coverage for defects and wound conditioning. The injuries included three patients with traumatic hemipelvectomies. Seven patients had pelvic ring fractures with five Morel\u2013Lavallee lesions and two open pelviperineal trauma. One patient suffered from an open iliac crest fracture and a Morel\u2013Lavallee lesion. Two patients sustained near complete pertrochanteric amputations of the lower limb. The average injury severity score was 34.1\u2009\u00b1\u20091.4. The application of VAC\u2122 started in average 3.8\u2009\u00b1\u20090.4\u00a0days after trauma and was used for 15.5\u2009\u00b1\u20091.8\u00a0days. The dressing changes were performed in average every 3\u00a0days. One patient (8%) with a traumatic hemipelvectomy died in the course of treatment due to septic complications. High-energy trauma causing severe soft tissues injuries requires multiple operative debridements to prevent high morbidity and mortality rates. The application of VAC\u2122 as temporary coverage of large tissue defects in pelvic regions supports wound conditioning and facilitates the definitive wound closure."}
{"_id": "71b090c082cd80ca82d5e8170cc08f13e2e85837", "title": "Evaluating the Impact of Social Selfishness on the Epidemic Routing in Delay Tolerant Networks", "text": "To cope with the uncertainty of transmission opportunities between mobile nodes, Delay Tolerant Networks (DTN) routing exploits opportunistic forwarding mechanism. This mechanism requires nodes to forward messages in a cooperative and altruistic way. However, in the real world, most of the nodes exhibit selfish behaviors such as individual and social selfishness. In this paper, we investigate the problem of how social selfishness influences the performance of epidemic routing in DTN. First, we model the message delivery process with social selfishness as a two dimensional continuous time Markov chain. Then, we obtain the system performance of message delivery delay and delivery cost by explicit expressions. Numerical results show that DTN is quite robust to social selfishness, which increases the message delivery delay, but there is more reducing of delivery cost."}
{"_id": "0dcad1ae3bc99c5f03625255ac4261bc6cbfdf91", "title": "Crowdsourcing for book search evaluation: impact of hit design on comparative system ranking", "text": "The evaluation of information retrieval (IR) systems over special collections, such as large book repositories, is out of reach of traditional methods that rely upon editorial relevance judgments. Increasingly, the use of crowdsourcing to collect relevance labels has been regarded as a viable alternative that scales with modest costs. However, crowdsourcing suffers from undesirable worker practices and low quality contributions. In this paper we investigate the design and implementation of effective crowdsourcing tasks in the context of book search evaluation. We observe the impact of aspects of the Human Intelligence Task (HIT) design on the quality of relevance labels provided by the crowd. We assess the output in terms of label agreement with a gold standard data set and observe the effect of the crowdsourced relevance judgments on the resulting system rankings. This enables us to observe the effect of crowdsourcing on the entire IR evaluation process. Using the test set and experimental runs from the INEX 2010 Book Track, we find that varying the HIT design, and the pooling and document ordering strategies leads to considerable differences in agreement with the gold set labels. We then observe the impact of the crowdsourced relevance label sets on the relative system rankings using four IR performance metrics. System rankings based on MAP and Bpref remain less affected by different label sets while the Precision@10 and nDCG@10 lead to dramatically different system rankings, especially for labels acquired from HITs with weaker quality controls. Overall, we find that crowdsourcing can be an effective tool for the evaluation of IR systems, provided that care is taken when designing the HITs."}
{"_id": "1f8b930d3a19f8b2ed37808d9e5c2344fad1942e", "title": "Information quality benchmarks: product and service performance", "text": "Information quality (IQ) is an inexact science in terms of assessment and benchmarks. Although various aspects of quality and information have been investigated [1, 4, 6, 7, 9, 12], there is still a critical need for a methodology that assesses how well organizations develop information products and deliver information services to consumers. Benchmarks developed from such a methodology can help compare information quality across organizations, and provide a baseline for assessing IQ improvements."}
{"_id": "22f31560263e4723a7f16ae6313109b43e0944d3", "title": "Recoding, storage, rehearsal and grouping in verbal short-term memory: an fMRI study", "text": "Functional magnetic resonance imaging (fMRI) of healthy volunteers is used to localise the processes involved in verbal short-term memory (VSTM) for sequences of visual stimuli. Specifically, the brain areas underlying (i) recoding, (ii) storage, (iii) rehearsal and (iv) temporal grouping are investigated. Successive subtraction of images obtained from five tasks revealed a network of left-lateralised areas, including posterior temporal regions, supramarginal gyri, Broca's area and dorsolateral premotor cortex. The results are discussed in relation to neuropsychological distinctions between recoding and rehearsal, previous neuroimaging studies of storage and rehearsal, and, in particular, a recent connectionist model of VSTM that makes explicit assumptions about the temporal organisation of rehearsal. The functional modules of this model are tentatively mapped onto the brain in light of the imaging results. Our findings are consistent with the representation of verbal item information in left posterior temporal areas and short-term storage of phonological information in left supramarginal gyrus. They also suggest that left dorsolateral premotor cortex is involved in the maintenance of temporal order, possibly as the location of a timing signal used in the rhythmic organisation of rehearsal, whereas Broca's area supports the articulatory processes required for phonological recoding of visual stimuli."}
{"_id": "d4a051278307269ce63a8822e1b08b84a5c543e4", "title": "Discourse Annotation of Non-native Spontaneous Spoken Responses Using the Rhetorical Structure Theory Framework", "text": "The availability of the Rhetorical Structure Theory (RST) Discourse Treebank has spurred substantial research into discourse analysis of written texts; however, limited research has been conducted to date on RST annotation and parsing of spoken language, in particular, nonnative spontaneous speech. Considering that the measurement of discourse coherence is typically a key metric in human scoring rubrics for assessments of spoken language, we initiated a research effort to obtain RST annotations of a large number of non-native spoken responses from a standardized assessment of academic English proficiency. The resulting inter-annotator \u03ba agreements on the three different levels of Span, Nuclearity, and Relation are 0.848, 0.766, and 0.653, respectively. Furthermore, a set of features was explored to evaluate the discourse structure of non-native spontaneous speech based on these annotations; the highest performing feature showed a correlation of 0.612 with scores of discourse coherence provided by expert human raters."}
{"_id": "79a630e45169a73d872f4c76b48a020569c41047", "title": "Evaluating C-RAN fronthaul functional splits in terms of network level energy and cost savings", "text": "The placement of the complete baseband processing in a centralized pool results in high data rate requirement and inflexibility of the fronthaul network, which challenges the energy and cost effectiveness of the cloud radio access network (C-RAN). Recently, redesign of the C-RAN through functional split in the baseband processing chain has been proposed to overcome these challenges. This paper evaluates, by mathematical and simulation methods, different splits with respect to network level energy and cost efficiency having in the mind the expected quality of service. The proposed mathematical model quantifies the multiplexing gains and the trade-offs between centralization and decentralization concerning the cost of the pool, fronthaul network capacity and resource utilization. The event-based simulation captures the influence of the traffic load dynamics and traffic type variation on designing an efficient fronthaul network. Based on the obtained results, we derive a principle for fronthaul dimensioning based on the traffic profile. This principle allows for efficient radio access network with respect to multiplexing gains while achieving the expected users' quality of service."}
{"_id": "0a3d4fe59e92e486e5d00aba157f3fdfdad0e0c5", "title": "Classes of Multiagent Q-learning Dynamics with epsilon-greedy Exploration", "text": "Q-learning in single-agent environments is known to converge in the limit given sufficient exploration. The same algorithm has been applied, with some success, in multiagent environments, where traditional analysis techniques break down. Using established dynamical systems methods, we derive and study an idealization of Q-learning in 2-player 2-action repeated general-sum games. In particular, we address the discontinuous case of -greedy exploration and use it as a proxy for value-based algorithms to highlight a contrast with existing results in policy search. Analogously to previous results for gradient ascent algorithms, we provide a complete catalog of the convergence behavior of the -greedy Q-learning algorithm by introducing new subclasses of these games. We identify two subclasses of Prisoner\u2019s Dilemma-like games where the application of Q-learning with -greedy exploration results in higher-than-Nash average payoffs for some initial conditions."}
{"_id": "b927e4a17a0bf5624c0438308093969957cd764e", "title": "Behaviour Analysis of Multilayer Perceptrons with Multiple Hidden Neurons and Hidden Layers", "text": "\u2014The terms \" Neural Network \" (NN) and \" Artificial Neural Network \" (ANN) usually refer to a Multilayer Perceptron Network. It process the records one at a time, and \"learn\" by comparing their prediction of the record with the known actual record. The problem of model selection is considerably important for acquiring higher levels of generalization capability in supervised learning. This paper discussed behavioral analysis of different number of hidden layers and different number of hidden neurons. It's very difficult to select number of hidden layers and hidden neurons. There are different methods like Akaike's Information Criterion, Inverse test method and some traditional methods are used to find Neural Network architecture. What to do while neural network is not getting train or errors are not getting reduced. To reduce Neural Network errors, what we have to do with Neural Network architecture. These types of techniques are discussed and also discussed experiment and result. To solve different problems a neural network should be trained to perform correct classification.."}
{"_id": "5bfb9f011d5e5414d6d5463786bdcbaee7292737", "title": "Chemical crystal identification with deep learning machine vision", "text": "This study was carried out with the purpose of testing the ability of deep learning machine vision to identify microscopic objects and geometries found in chemical crystal structures. A database of 6994 images taken with a light microscope showing microscopic crystal details of selected chemical compounds along with 180 images of an unknown chemical was created to train and test, respectively the deep learning models. The models used were GoogLeNet (22 layers deep network) and VGG-16 (16 layers deep network), based on the Caffe framework (University of California, Berkeley, CA) of the DIGITS platform (NVIDIA Corporation, Santa Clara, CA). The two models were successfully trained with the images, having validation accuracy values of 97.38% and 99.65% respectively. Finally, both models were able to correctly identify the unknown chemical sample with a high probability score of 93.34% (GoogLeNet) and 99.41% (VGG-16). The positive results found in this study can be further applied to other unknown sample identification tasks using light microscopy coupled with deep learning machine vision."}
{"_id": "0743af243b8912abfde5a75bcb9147d3734852b5", "title": "Opinion mining from student feedback data using supervised learning algorithms", "text": "This paper explores opinion mining using supervised learning algorithms to find the polarity of the student feedback based on pre-defined features of teaching and learning. The study conducted involves the application of a combination of machine learning and natural language processing techniques on student feedback data gathered from module evaluation survey results of Middle East College, Oman. In addition to providing a step by step explanation of the process of implementation of opinion mining from student comments using the open source data analytics tool Rapid Miner, this paper also presents a comparative performance study of the algorithms like SVM, Nai\u0308ve Bayes, K Nearest Neighbor and Neural Network classifier. The data set extracted from the survey is subjected to data preprocessing which is then used to train the algorithms for binomial classification. The trained models are also capable of predicting the polarity of the student comments based on extracted features like examination, teaching etc. The results are compared to find the better performance with respect to various evaluation criteria for the different algorithms."}
{"_id": "c9aa18b67ffda7a867cf431ff0b382a60ac8998c", "title": "Physical experience enhances science learning.", "text": "Three laboratory experiments involving students' behavior and brain imaging and one randomized field experiment in a college physics class explored the importance of physical experience in science learning. We reasoned that students' understanding of science concepts such as torque and angular momentum is aided by activation of sensorimotor brain systems that add kinetic detail and meaning to students' thinking. We tested whether physical experience with angular momentum increases involvement of sensorimotor brain systems during students' subsequent reasoning and whether this involvement aids their understanding. The physical experience, a brief exposure to forces associated with angular momentum, significantly improved quiz scores. Moreover, improved performance was explained by activation of sensorimotor brain regions when students later reasoned about angular momentum. This finding specifies a mechanism underlying the value of physical experience in science education and leads the way for classroom practices in which experience with the physical world is an integral part of learning."}
{"_id": "4c9bd91bd044980f5746d623315be5285cc799c9", "title": "Enhanced Sphere Tracing", "text": "In this paper we present several performance and quality enhancements to classical sphere tracing: First, we propose a safe, over-relaxation-based method for accelerating sphere tracing. Second, a method for dynamically preventing self-intersections upon converting signed distance bounds enables controlling precision and rendering performance. In addition, we present a method for significantly accelerating the sphere tracing intersection test for convex objects that are enclosed in convex bounding volumes. We also propose a screen-space metric for the retrieval of a good intersection point candidate, in case sphere tracing does not converge thus increasing rendering quality without sacrificing performance. Finally, discontinuity artifacts common in sphere tracing are reduced using a fixed-point iteration algorithm. We demonstrate complex scenes rendered in real-time with our method. The methods presented in this paper have more universal applicability beyond rendering procedurally generated scenes in real-time and can also be combined with path-tracing-based global illumination solutions."}
{"_id": "e8252ce73e330990b66441842f62f73c8cea56e4", "title": "Random Walks on Graphs: a Survey", "text": "Dedicated to the marvelous random walk of Paul Erd} os through universities, continents, and mathematics Various aspects of the theory of random walks on graphs are surveyed. In particular, estimates on the important parameters of access time, commute time, cover time and mixing time are discussed. Connections with the eigenvalues of graphs and with electrical networks, and the use of these connections in the study of random walks is described. We also sketch recent algorithmic applications of random walks, in particular to the problem of sampling. 0. Introduction Given a graph and a starting point, we select a neighbor of it at random, and move to this neighbor; then we select a neighbor of this point at random, and move to it etc. The (random) sequence of points selected this way is a random walk on the graph. A random walk is a nite Markov chain that is time-reversible (see below). In fact, there is not much diierence between the theory of random walks on graphs and the theory of nite Markov chains; every Markov chain can be viewed as random walk on a directed graph, if we allow weighted edges. Similarly, time-reversible Markov chains can be viewed as random walks on undirected graphs, and symmetric Markov chains, as random walks on regular symmetric graphs. In this paper we'll formulate the results in terms of random walks, and mostly restrict our attention to the undirected case. 2 L. Lovv asz Random walks arise in many models in mathematics and physics. In fact, this is one of those notions that tend to pop up everywhere once you begin to look for them. For example, consider the shuuing of a deck of cards. Construct a graph whose nodes are all permutations of the deck, and two of them are adjacent if they come by one shuue move (depending on how you shuue). Then repeated shuue moves correspond to a random walk on this graph (see Diaconis 20]). The Brownian motion of a dust particle is random walk in the room. Models in statistical mechanics can be viewed as random walks on the set of states. The classical theory of random walks deals with random walks on simple , but innnite graphs, like grids, and studies their qualitative behaviour: does the random walk return to its starting point with probability one? does it return innnitely often? For example, PP olya (1921) proved that \u2026"}
{"_id": "d15cfd75d77ba7ef8aaed6c584d3f743aa4080fa", "title": "An improved auto-tuning scheme for PID controllers.", "text": "An improved auto-tuning scheme is proposed for Ziegler-Nichols (ZN) tuned PID controllers (ZNPIDs), which usually provide excessively large overshoots, not tolerable in most of the situations, for high-order and nonlinear processes. To overcome this limitation ZNPIDs are upgraded by some easily interpretable heuristic rules through an online gain modifying factor defined on the instantaneous process states. This study is an extension of our earlier work [Mudi RK., Dey C. Lee TT. An improved auto-tuning scheme for PI controllers. ISA Trans 2008; 47: 45-52] to ZNPIDs, thereby making the scheme suitable for a wide range of processes and more generalized too. The proposed augmented ZNPID (AZNPID) is tested on various high-order linear and nonlinear dead-time processes with improved performance over ZNPID, refined ZNPID (RZNPID), and other schemes reported in the literature. Stability issues are addressed for linear processes. Robust performance of AZNPID is observed while changing its tunable parameters as well as the process dead-time. The proposed scheme is also implemented on a real time servo-based position control system."}
{"_id": "38b1eb892e51661cd0e3c9f6c38f1f7f8def1317", "title": "Vision: automated security validation of mobile apps at app markets", "text": "Smartphones and \"app\" markets are raising concerns about how third-party applications may misuse or improperly handle users' privacy-sensitive data. Fortunately, unlike in the PC world, we have a unique opportunity to improve the security of mobile applications thanks to the centralized nature of app distribution through popular app markets. Thorough validation of apps applied as part of the app market admission process has the potential to significantly enhance mobile device security. In this paper, we propose AppInspector, an automated security validation system that analyzes apps and generates reports of potential security and privacy violations. We describe our vision for making smartphone apps more secure through automated validation and outline key challenges such as detecting and analyzing security and privacy violations, ensuring thorough test coverage, and scaling to large numbers of apps."}
{"_id": "2c68c7faa89b104b78e2850dbade5a81f0743874", "title": "A formal study of information retrieval heuristics", "text": "Empirical studies of information retrieval methods show that good retrieval performance is closely related to the use of various retrieval heuristics, such as TF-IDF weighting. One basic research question is thus what exactly are these \"necessary\" heuristics that seem to cause good retrieval performance. In this paper, we present a formal study of retrieval heuristics. We formally define a set of basic desirable constraints that any reasonable retrieval function should satisfy, and check these constraints on a variety of representative retrieval functions. We find that none of these retrieval functions satisfies all the constraints unconditionally. Empirical results show that when a constraint is not satisfied, it often indicates non-optimality of the method, and when a constraint is satisfied only for a certain range of parameter values, its performance tends to be poor when the parameter is out of the range. In general, we find that the empirical performance of a retrieval formula is tightly related to how well it satisfies these constraints. Thus the proposed constraints provide a good explanation of many empirical observations and make it possible to evaluate any existing or new retrieval formula analytically."}
{"_id": "ae0067338dff4e6b48b6b798776ec88a6c242aae", "title": "Some Simple Effective Approximations to the 2-Poisson Model for Probabilistic Weighted Retrieval", "text": "The 2-Poisson model for term frequencies is used to suggest ways of incorporating certain variables in probabilistic models for information retrieval. The variables concerned are within-document term tkequency, document length, and within-query term frequency. Simple weighting functions are developed, and tested on the TREC test collection. Considerable performance improvements (over simple inverse collection frequency weighting) are demonstrated."}
{"_id": "f5041acfd76ed6da212e694909e435cd57935598", "title": "Actuator fault detection and isolation system for an hexacopter", "text": "The problem of detection and isolation of actuator faults in hexacopter vehicles is address in this article. The aim is to detect and isolate possible faults which can occur on the vehicle actuation system (i.e. rotors and propellers). The dynamic nonlinear model of an hexarotor vehicle is first derived to build a model-based diagnosis system. Then a Thau observer is developed to estimate the states of the hexarotor and, on the usage of this information, to generate the needed set of residuals. The developed fault detection and isolation system is finally tested in a high fidelity hexacopter simulator in which different fault situations are considered."}
{"_id": "31c22efa184fcfd208c2f907c07f27418e7ae705", "title": "Interleukin-17 inhibitors. A new era in treatment of psoriasis and other skin diseases", "text": "Psoriasis is a chronic skin disease caused by the excessive secretion of inflammatory cytokines. Available therapeutic options include biologic drugs such as tumor necrosis factor alpha inhibitors and interleukin 12/23 (IL-12/23) inhibitors. The recent discovery of IL-17, which contributes to development of psoriasis, opened new possibilities for further treatment modalities. Currently, one anti-IL17 biological agent is approved for the treatment - a fully human monoclonal antibody that targets IL-17A (secukinumab). Further clinical trials, including a humanized IgG4 specific for IL-17 (ixekizumab) and a fully human antibody that targets the IL-17 receptor A (brodalumab)."}
{"_id": "b46fad3756c6379e04f281c2ae1051bce546a487", "title": "Personalized recommendation engine using HADOOP", "text": "More and more E-commerce Websites provide products with different prices which made it hard for consumers to find the products and services they want. In order to overcome this data overload, personalized recommendation engines are used to suggest products and to provide consumers with relevant data to help them decide which products to purchase. Recommendation engines are highly computational and hence ideal for the Hadoop Platform. This system aims at building a book recommendation engine which uses item or user based recommendation from Mahout for recommending books. It will analyze the data and give suggestions based on what similar users did and on the past transaction history of the user."}
{"_id": "f2a5dc568534610b07be965d25ff490db21ec4e8", "title": "Field assessment of Serious Games for Entrepreneurship in Higher Education", "text": "The potential of Serious Games (SGs) in education is widely recognized, and their adoption is significant in particular in children instruction. However, the deployment rate of SGs in higher education (HE) and their proper insertion in meaningful curricula is still quite low. This paper intends to make a first step in the direction of a better characterization of the pedagogical effectiveness of SGs in HE, by providing a qualitative analysis based on our field experience using three games for entrepreneurship, that we have studied in the light of two well established pedagogical paradigms, such as the Revised Bloom\u2019s taxonomy and the Kolb\u2019s Learning stages. In general, we observe that SGs address several goals of the Bloom\u2019s taxonomy, in particular at the lower levels. Moreover, the cyclical nature of the business simulations can be directly mapped to the sequential steps described by Kolb. However, our analysis also shows that SGs have still to significantly evolve in order to become an effective and efficient tool that could be successfully and reliably used in HE. In the light of our experience, we also propose a schema for a proper integration of SGs by supporting different goals in different steps of a formal education process, Our study finally suggests directions for future research in the field."}
{"_id": "7df3eab32b051b1bd94ca11120a656590c5508d5", "title": "The Effect of Music on the Human Stress Response", "text": "BACKGROUND\nMusic listening has been suggested to beneficially impact health via stress-reducing effects. However, the existing literature presents itself with a limited number of investigations and with discrepancies in reported findings that may result from methodological shortcomings (e.g. small sample size, no valid stressor). It was the aim of the current study to address this gap in knowledge and overcome previous shortcomings by thoroughly examining music effects across endocrine, autonomic, cognitive, and emotional domains of the human stress response.\n\n\nMETHODS\nSixty healthy female volunteers (mean age = 25 years) were exposed to a standardized psychosocial stress test after having been randomly assigned to one of three different conditions prior to the stress test: 1) relaxing music ('Miserere', Allegri) (RM), 2) sound of rippling water (SW), and 3) rest without acoustic stimulation (R). Salivary cortisol and salivary alpha-amylase (sAA), heart rate (HR), respiratory sinus arrhythmia (RSA), subjective stress perception and anxiety were repeatedly assessed in all subjects. We hypothesized that listening to RM prior to the stress test, compared to SW or R would result in a decreased stress response across all measured parameters.\n\n\nRESULTS\nThe three conditions significantly differed regarding cortisol response (p = 0.025) to the stressor, with highest concentrations in the RM and lowest in the SW condition. After the stressor, sAA (p=0.026) baseline values were reached considerably faster in the RM group than in the R group. HR and psychological measures did not significantly differ between groups.\n\n\nCONCLUSION\nOur findings indicate that music listening impacted the psychobiological stress system. Listening to music prior to a standardized stressor predominantly affected the autonomic nervous system (in terms of a faster recovery), and to a lesser degree the endocrine and psychological stress response. These findings may help better understanding the beneficial effects of music on the human body."}
{"_id": "2c35a03ea6f98c0e77701075d6efbfcf05a18f22", "title": "TeamBeam - Meta-Data Extraction from Scientific Literature", "text": "An important aspect of the work of researchers as well as librarians is to manage collections of scientific literature. Social research networks, such as Mendeley and CiteULike, provide services that support this task. Meta-data plays an important role in providing services to retrieve and organise the articles. In such settings, meta-data is rarely explicitly provided, leading to the need for automatically extracting this valuable information. The TeamBeam algorithm analyses a scientific article and extracts out structured meta-data, such as the title, journal name and abstract, as well as information about the article\u2019s authors (e.g. names, e-mail addresses, affiliations). The input of the algorithm is a set of blocks generated from the article text. A classification algorithm, which takes the sequence of the input into account, is then applied in two consecutive phases. In the evaluation the performance of the algorithm is compared against two heuristics and three existing meta-data extraction systems. Three different data sets with varying characteristics are used to assess the quality of the extraction results. TeamBeam performs well under testing and compares favourably with existing approaches."}
{"_id": "824cd124da2067b5c0db37ddd6c929cd7036c480", "title": "DONet: A Data-Driven Overlay Network For Efficient Live Media Streaming", "text": "This paper presents DONet, a Data-driven Overlay Network for live media streaming. The core operations in DONet are very simple: every node periodically exchanges data availability information with a set of partners, and retrieves unavailable data from one or more partners, or supplies available data to partners. We emphasize three salient features of this data-driven design: 1) easy to implement , as it does not have to construct and maintain a complex global structure; 2) efficient, as data forwarding is dynamically determined according to data availability while not restricted by specific directions; and 3) robust and resilient , as the partnerships enable adaptive and quick switching among multi-suppliers. We show through analysis that DONet is scalable with bounded delay. We also address a set of practical challenges for realizing DONet, and propose an efficient memberand partnership management algorithm, together with an intelligent scheduling algorithm that achieves real-time and continuous distribution of streaming contents. We have extensively evaluated the performance of DONet over the PlanetLab. Our experiments, involving almost all the active PlanetLab nodes, demonstrate that DONet achieves quite good streaming quality even under formidable network conditions. Moreover, its control overhead and transmission delay are both kept at low levels. An Internet-based DONet implementation, calledCoolStreaming v.0.9, was released on May 30, 2004, which has attracted over 30000 distinct users with more than 4000 simultaneously being online at some peak times. We discuss the key issues toward designingCoolStreaming in this paper, and present several interesting observations from these large-scale tests; in particular, the larger the overlay size, the better the streaming quality it can deliver."}
{"_id": "64d2b06a557c47a7e0651e141e9daf93b1fe9b6f", "title": "Unbalanced Three-Phase Optimal Power Flow for Smart Grids", "text": "Advanced distribution management system (DMS), an evolution of supervisory control and data acquisition obtained by extending its working principles from transmission to distribution, is the brain of a smart grid. Advanced DMS assesses smart functions in the distribution system and is also responsible for assessing control functions such as reactive dispatch, voltage regulation, contingency analysis, capability maximization, or line switching. Optimal power flow (OPF)-based tools can be suitably adapted to the requirements of smart distribution network and be employed in an advanced DMS framework. In this paper, the authors present a methodology for unbalanced three-phase OPF (TOPF) for DMS in a smart grid. In the formulation of the TOPF, control variables of the optimization problem are actual active load demand and reactive power outputs of microgenerators. The TOPF is based on a quasi-Newton method and makes use of an open-source three-phase unbalanced distribution load flow. Test results are presented on the IEEE 123-bus Radial Distribution Feeder test case."}
{"_id": "4e3374c01a059f7ce02d81c5f96a3ef50b97a42b", "title": "Mental balance and well-being: building bridges between Buddhism and Western psychology.", "text": "Clinical psychology has focused primarily on the diagnosis and treatment of mental disease, and only recently has scientific attention turned to understanding and cultivating positive mental health. The Buddhist tradition, on the other hand, has focused for over 2,500 years on cultivating exceptional states of mental well-being as well as identifying and treating psychological problems. This article attempts to draw on centuries of Buddhist experiential and theoretical inquiry as well as current Western experimental research to highlight specific themes that are particularly relevant to exploring the nature of mental health. Specifically, the authors discuss the nature of mental well-being and then present an innovative model of how to attain such well-being through the cultivation of four types of mental balance: conative, attentional, cognitive, and affective."}
{"_id": "ee5a54f7d6187666fbdf8e962f1058e4ce9b51a1", "title": "Characterization of Traffic Analysis based video stream source identification", "text": "This paper presents the concept and characterization of Traffic Analysis (TA) for identifying sources of tunneled video streaming traffic. Such identification can be used in enterprise firewalls for blocking unauthorized viewing of tunneled video. We attempt to characterize and evaluate the impacts of the primary TA-influencing factors, namely, streaming protocol, codec, and the actual video content. A test environment is built to study the influence of those factors while Packet Size Distribution is used as the classification feature during Traffic Analysis. Analysis done on data obtained from the test environment has shown that the streaming protocols provide the most dominant source identification distinction. Also, while the codecs provide some weak distinctions, the influence of video content is marginal. In addition to in-laboratory experiments, a real-world verification for corroborating those observations is also made with commercial streaming service providers. Such long-haul experiments indicate that the end-to-end network conditions between the streaming server and video client can act as an additional influencing factor for traffic analysis towards video stream source identification. Overall, the results suggest the feasibility of TA for unknown video stream source identification with sufficiently diverse video examples."}
{"_id": "0b5742378d804b4e7a454bf66e5c489c344329ac", "title": "Neural Multi-task Learning in Automated Assessment", "text": "Grammatical error detection and automated essay scoring are two tasks in the area of automated assessment. Traditionally these tasks have been treated independently with different machine learning models and features used for each task. In this paper, we develop a multi-task neural network model that jointly optimises for both tasks, and in particular we show that neural automated essay scoring can be significantly improved. We show that while the essay score provides little evidence to inform grammatical error detection, the essay score is highly influenced by error detection."}
{"_id": "ad0c3b51c78ed394424707653c133f55bd072d79", "title": "Intellectual capital disclosure in Malaysia", "text": "This study investigates intellectual capital (IC) disclosure in Malaysian public listed companies. Specifically, this research examines the relationship between IC disclosure and companies' profitability, productivity, and firm size. We measure the IC disclosure using the IC disclosure index and I C disclosure frequency. The sample includes 255 firm-year observations from 2006-2008. The results from our content analysis confirm those of prior research that the IC disclosure has been increasing over time. The study finds that human capital is the most reported intellectual capital disclosure. In addition, we also find that firm size positively contribute to the disclosure of IC. The result of this study added to the body of literature by providing evidence on the relationship between IC disclosure and firms' characteristics."}
{"_id": "51bdd0b839fe8ab1daafa05d1ed03edff601f4c7", "title": "Named Entity Recognition using an HMM-based Chunk Tagger", "text": "This paper proposes a Hidden Markov Model (HMM) and an HMM-based chunk tagger, from which a named entity (NE) recognition (NER) system is built to recognize and classify names, times and numerical quantities. Through the HMM, our system is able to apply and integrate four types of internal and external evidences: 1) simple deterministic internal feature of the words, such as capitalization and digitalization; 2) internal semantic feature of important triggers; 3) internal gazetteer feature; 4) external macro context feature. In this way, the NER problem can be resolved effectively. Evaluation of our system on MUC-6 and MUC-7 English NE tasks achieves F-measures of 96.6% and 94.1% respectively. It shows that the performance is significantly better than reported by any other machine-learning system. Moreover, the performance is even consistently better than those based on handcrafted rules."}
{"_id": "dde14d9766e594f5ea41de746ab2fcef4d95bf01", "title": "Heat-induced post-mortem defect of the skull simulating an exit gunshot wound of the calvarium.", "text": "A severely burned body was found in a burnt-out apartment following a house fire. At the death scene, police investigators noted a defect at the back of the head of the deceased. This defect perforated the occipital bone and showed external bevelling. To clarify whether this skull wound corresponded to a gunshot exit wound and for identification purposes, a medico-legal autopsy was ordered. External examination of the body showed intense charring with skin burned away and musculature exposed, fourth degree burns of the remaining skin and heat flexures of the limbs as well as heat-related rupture of both the abdominal cavity and chest wall (Fig. 1). An oval defect of the occipital bone with cratering of the external table of the cranium was found (Fig. 2). The defect measured 2 cm in diameter, was sharp-edged and the excavation of the outer table of the cranium showed no charring in contrast to the surrounding parts of the occipital bone (Fig. 3). Apart from this finding, radiology of the body before autopsy was unremarkable. At autopsy, gross sections of the mucosa of the trachea and main bronchi were covered by a thick layer of soot with soot particles also present within the esophageus and the stomach. The brain showed a reduced volume, flattening of gyri and obliteration of sulci but no signs of trauma. Except for the defect of the occipital bone, no other signs of ante-mortem or post-mortem trauma could be detected. The level of carboxyhemoglobin in heart blood was 65% and the blood alcohol level was 285 mg/dL."}
{"_id": "74aa22cc343c2f9e9c07abddf26d7e6e250c3490", "title": "Mapping the Americanization of English in space and time", "text": "As global political preeminence gradually shifted from the United Kingdom to the United States, so did the capacity to culturally influence the rest of the world. In this work, we analyze how the world-wide varieties of written English are evolving. We study both the spatial and temporal variations of vocabulary and spelling of English using a large corpus of geolocated tweets and the Google Books datasets corresponding to books published in the US and the UK. The advantage of our approach is that we can address both standard written language (Google Books) and the more colloquial forms of microblogging messages (Twitter). We find that American English is the dominant form of English outside the UK and that its influence is felt even within the UK borders. Finally, we analyze how this trend has evolved over time and the impact that some cultural events have had in shaping it."}
{"_id": "4b0288f32a53094b5386643bbea56cb3803941bc", "title": "Glial and neuronal control of brain blood flow", "text": "Blood flow in the brain is regulated by neurons and astrocytes. Knowledge of how these cells control blood flow is crucial for understanding how neural computation is powered, for interpreting functional imaging scans of brains, and for developing treatments for neurological disorders. It is now recognized that neurotransmitter-mediated signalling has a key role in regulating cerebral blood flow, that much of this control is mediated by astrocytes, that oxygen modulates blood flow regulation, and that blood flow may be controlled by capillaries as well as by arterioles. These conceptual shifts in our understanding of cerebral blood flow control have important implications for the development of new therapeutic approaches."}
{"_id": "6f19bbf19682e9acaec32259d5579b8ccfe3289f", "title": "Investigating Players' Engagement, Immersion, and Experiences in Playing Pok\u00e9mon Go", "text": "In recent years, Augmented Reality (AR) based mobile games have become popular among players. The Pok\u00e9mon Go is one of the well-known examples. Although Pok\u00e9mon Go game has become a global phenomenon, there is a limited study about players' experiences in the gameplay. Especially, little is known about players' engagement and immersion in playing AR-based mobile games in which physical movements in the real world are largely required to play the game. In this study, we conducted a pilot study with eight participants to investigate players' engagement, immersion, and experiences in the Pok\u00e9mon Go gameplay. We report and discuss the preliminary findings from the study, which can help game designers understand players' experiences in the gameplay so that they can design more creative AR games in the future. Furthermore, the findings are also useful for future studies with larger sample size."}
{"_id": "9adfdba705e8303f91cf1091908c125d865ef017", "title": "Advances on Interactive Machine Learning", "text": "Interactive Machine Learning (IML) is an iterative learning process that tightly couples a human with a machine learner, which is widely used by researchers and practitioners to effectively solve a wide variety of real-world application problems. Although recent years have witnessed the proliferation of IML in the field of visual analytics, most recent surveys either focus on a specific area of IML or aim to summarize a visualization field that is too generic for IML. In this paper, we systematically review the recent literature on IML and classify them into a task-oriented taxonomy built by us. We conclude the survey with a discussion of open challenges and research opportunities that we believe are inspiring for future work in IML."}
{"_id": "98eec958fad32c98db12efbcc2585e4b0c0d181b", "title": "Doubly Attentive Transformer Machine Translation", "text": "In this paper a doubly attentive transformer machine translation model (DATNMT) is presented in which a doubly-attentive transformer decoder normally joins spatial visual features obtained via pretrained convolutional neural networks, conquering any gap between image captioning and translation. In this framework, the transformer decoder figures out how to take care of source-language words and parts of an image freely by methods for two separate attention components in an Enhanced Multi-Head Attention Layer of doubly attentive transformer, as it generates words in the target language. We find that the proposed model can effectively exploit not just the scarce multimodal machine translation data, but also large general-domain textonly machine translation corpora, or imagetext image captioning corpora. The experimental results show that the proposed doublyattentive transformer-decoder performs better than a single-decoder transformer model, and gives the state-of-the-art results in the EnglishGerman multimodal machine translation task."}
{"_id": "166a5840e66c01f2e7b2f5305c3e4a4c52550eb6", "title": "High Performance Clustering Based on the Similarity Join", "text": "A broad class of algorithms for knowledge discovery in databases (KDD) relies heavily on similarity queries, i.e. range queries or nearest neighbor queries, in multidimensional feature spaces. Many KDD algorithms perform a similarity query for each point stored in the database. This approach causes serious performance degenerations if the considered data set does not fit into main memory. Usual cache strategies such as LRU fail because the locality of KDD algorithms is typically not high enough. In this paper, we propose to replace repeated similarity queries by the similarity join, a database primitive prevalent in multimedia database systems. We present a schema to transform query intensive KDD algorithms into a representation using the similarity join as a basic operation without affecting the correctness of the result of the considered algorithm. In order to perform a comprehensive experimental evaluation of our approach, we apply the proposed transformation to the clustering algorithm DBSCAN and to the hierarchical cluster structure analysis method OPTICS. Our technique allows the application of any similarity join algorithm, which may be based on index structures or not. In our experiments, we use a similarity join algorithm based on a variant of the X-tree. The experiments yield substantial performance improvements of our technique over the original algorithms. The traditional techniques are outperformed by factors of up to 33 for the X-tree and 54 for the R*-tree."}
{"_id": "1fe91d40305ac006999ee866873032e0a5b153dc", "title": "Adding Gradient Noise Improves Learning for Very Deep Networks", "text": "{ Adding Gradient Noise Improves Learning for Very Deep Networks. Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, James Martens. International Conference on Learning Representations Workshop (ICLR WS), 2016 { Generating Sentences from a Continuous Space. Samuel Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, Samy Bengio. International Conference on Learning Representations Workshop (ICLR WS), 2016 { Bethe Projections for Non-Local Inference. Luke Vilnis, David Belanger, Daniel Sheldon, Andrew McCallum. Uncertainty in Artificial Intelligence (UAI), 2015 { Learning Dynamic Feature Selection for Fast Sequential Prediction. Emma Strubell, Luke Vilnis, Kate Silverstein, Andrew McCallum. Annual Meeting of the Association for Computational Linguistics (ACL), 2015 { Word Representations via Gaussian Embedding. Luke Vilnis, Andrew McCallum. International Conference on Learning Representations (ICLR), 2015 { Generalized Eigenvectors for Large Multiclass Problems. Luke Vilnis, Nikos Karampatziakis, Paul Mineiro. Neural Information Processing Systems Workshop on Representation and Learning Methods for Complex Outputs (NIPS WS), 2014 { Training for Fast Sequential Prediction Using Dynamic Feature Selection. Emma Strubell, Luke Vilnis, Andrew McCallum. Neural Information Processing Systems Workshop on Modern Machine Learning and Natural Language Processing (NIPS WS), 2014"}
{"_id": "09e8e7401e02465d3fd7a4f5179c596245cd363a", "title": "Natural language processing in mental health applications using non-clinical texts", "text": "Natural language processing (NLP) techniques can be used to make inferences about peoples' mental states from what they write on Facebook, Twitter and other social media. These inferences can then be used to create online pathways to direct people to health information and assistance and also to generate personalized interventions. Regrettably, the computational methods used to collect, process and utilize online writing data, as well as the evaluations of these techniques, are still dispersed in the literature. This paper provides a taxonomy of data sources and techniques that have been used for mental health support and intervention. Specifically, we review how social media and other data sources have been used to detect emotions and identify people who may be in need of psychological assistance; the computational techniques used in labeling and diagnosis; and finally, we discuss ways to generate and personalize mental health interventions. The overarching aim of this scoping review is to highlight areas of research where NLP has been applied in the mental health literature and to help develop a common language that draws together the fields of mental health, human-computer interaction and NLP."}
{"_id": "9ab8aa61f74a0b7061791c728013a0ba42a983b7", "title": "A review of the production of ethanol from softwood", "text": "Ethanol produced from various lignocellulosic materials such as wood, agricultural and forest residues has the potential to be a valuable substitute for, or complement to, gasoline. One of the major resources in the Northern hemisphere is softwood. This paper reviews the current status of the technology for ethanol production from softwood, with focus on hemicellulose and cellulose hydrolysis, which is the major problem in the overall process. Other issues of importance, e.g. overall process configurations and process economics are also considered."}
{"_id": "e36b5f110e1d783801f37d4bcc57d07e9935e3aa", "title": "Autonomous Navigation and Mapping for Inspection of Penstocks and Tunnels With MAVs", "text": "In this paper, we address the estimation, control, navigation and mapping problems to achieve autonomous inspection of penstocks and tunnels using aerial vehicles with on-board sensing and computation. Penstocks and tunnels have the shape of a generalized cylinder. They are generally dark and featureless. State estimation is challenging because range sensors do not yield adequate information and cameras do not work in the dark. We show that the six degrees of freedom (DOF) pose and velocity can be estimated by fusing information from an inertial measurement unit (IMU), a lidar and a set of cameras. This letter discusses in detail the range-based estimation part while leaving the details of vision component to our earlier work. The proposed algorithm relies only on a model of the generalized cylinder and is robust to changes in shape of the tunnel. The approach is validated through real experiments showing autonomous and shared control, state estimation and environment mapping in the penstock at Center Hill Dam, TN. To our knowledge, this is the first time autonomous navigation and mapping has been achieved in a penstock without any external infrastructure such GPS or external cameras."}
{"_id": "e837dfa120e8ce3cd587bde7b0787ef43fa7832d", "title": "Sensitivity and Generalization in Neural Networks: an Empirical Study", "text": "In practice it is often found that large over-parameterized neural networks generalize better than their smaller counterparts, an observation that appears to conflict with classical notions of function complexity, which typically favor smaller models. In this work, we investigate this tension between complexity and generalization through an extensive empirical exploration of two natural metrics of complexity related to sensitivity to input perturbations. Our experiments survey thousands of models with various fully-connected architectures, optimizers, and other hyper-parameters, as well as four different image classification datasets. We find that trained neural networks are more robust to input perturbations in the vicinity of the training data manifold, as measured by the norm of the input-output Jacobian of the network, and that it correlates well with generalization. We further establish that factors associated with poor generalization \u2013 such as full-batch training or using random labels \u2013 correspond to lower robustness, while factors associated with good generalization \u2013 such as data augmentation and ReLU non-linearities \u2013 give rise to more robust functions. Finally, we demonstrate how the input-output Jacobian norm can be predictive of generalization at the level of individual test points."}
{"_id": "74640bdf33a1e8b7a319fbbbaeccf681f80861cc", "title": "THE K FACTOR : A NEW MATHEMATICAL TOOL FOR STABILITY ANALYSIS AND SYNTHESIS", "text": ""}
{"_id": "e2f15cf76eed3c53cbcf21d3383085494bb6a89a", "title": "Multi-scale Internet traffic forecasting using neural networks and time series methods", "text": "This article presents three methods to forecast accurately the amount of traffic in TCP/IP based networks: a novel neural network ensemble approach and two important adapted time series methods (ARIMA and Holt-Winters). In order to assess their accuracy, several experiments were held using realworld data from two large Internet service providers. In addition, different time scales (five minutes, one hour and one day) and distinct forecasting lookaheads were analyzed. The experiments with the neural ensemble achieved the best results for five minutes and hourly data, while the Holt-Winters is the best option for the daily forecasts. This research opens possibilities for the development of more efficient traffic engineering and anomaly detection tools, which will result in financial gains from better network resource management."}
{"_id": "437ee173b7504b808201b5166721ec3b5ce42a64", "title": "Regional coherence changes in the early stages of Alzheimer\u2019s disease: A combined structural and resting-state functional MRI study", "text": "Recent functional imaging studies have indicated that the pathophysiology of Alzheimer's disease (AD) can be associated with the changes in spontaneous low-frequency (<0.08 Hz) blood oxygenation level-dependent fluctuations (LFBF) measured during a resting state. The purpose of this study was to examine regional LFBF coherence patterns in early AD and the impact of regional brain atrophy on the functional results. Both structural MRI and resting-state functional MRI scans were collected from 14 AD subjects and 14 age-matched normal controls. We found significant regional coherence decreases in the posterior cingulate cortex/precuneus (PCC/PCu) in the AD patients when compared with the normal controls. Moreover, the decrease in the PCC/PCu coherence was correlated with the disease progression measured by the Mini-Mental State Exam scores. The changes in LFBF in the PCC/PCu may be related to the resting hypometabolism in this region commonly detected in previous positron emission tomography studies of early AD. When the regional PCC/PCu atrophy was controlled, these results still remained significant but with a decrease in the statistical power, suggesting that the LFBF results are at least partly explained by the regional atrophy. In addition, we also found increased LFBF coherence in the bilateral cuneus, right lingual gyrus and left fusiform gyrus in the AD patients. These regions are consistent with previous findings of AD-related increased activation during cognitive tasks explained in terms of a compensatory-recruitment hypothesis. Finally, our study indicated that regional brain atrophy could be an important consideration in functional imaging studies of neurodegenerative diseases."}
{"_id": "26b405527380b60378e5310929a3afe7b13b1262", "title": "A self-tuning system based on application Profiling and Performance Analysis for optimizing Hadoop MapReduce cluster configuration", "text": "One of the most widely used frameworks for programming MapReduce-based applications is Apache Hadoop. Despite its popularity, however, application developers face numerous challenges in using the Hadoop framework, which stem from them having to effectively manage the resources of a MapReduce cluster, and configuring the framework in a way that will optimize the performance and reliability of MapReduce applications running on it. This paper addresses these problems by presenting the Profiling and Performance Analysis-based System (PPABS) framework, which automates the tuning of Hadoop configuration settings based on deduced application performance requirements. The PPABS framework comprises two distinct phases called the Analyzer, which trains PPABS to form a set of equivalence classes of MapReduce applications for which the most appropriate Hadoop config- uration parameters that maximally improve performance for that class are determined, and the Recognizer, which classifies an incoming unknown job to one of these equivalence classes so that its Hadoop configuration parameters can be self-tuned. The key research contributions in the Analyzer phase includes modifications to the well-known k - means + + clustering and Simulated Annealing algorithms, which were required to adapt them to the MapReduce paradigm. The key contributions in the Recognizer phase includes an approach to classify an unknown, incoming job to one of the equivalence classes and a control strategy to self-tune the Hadoop cluster configuration parameters for that job. Experimental results comparing the performance improvements for three different classes of applications running on Hadoop clusters deployed on Amazon EC2 show promising results."}
{"_id": "f4c6240b68e97d6f3b9bc67a701f10e49a1b1dab", "title": "Hierarchical Learning in Stochastic Domains: Preliminary Results", "text": "This paper presents the HDG learning algorithm, which uses a hierarchical decomposition of the state space to make learning to achieve goals more efficient with a small penalty in path quality. Special care must be taken when performing hierarchical planning and learning in stochastic domains, because macro-operators cannot be executed ballistically. The HDG algorithm, which is a descendent of Watkins\u2019 Q-learning algorithm, is described here and preliminary empirical results are presented."}
{"_id": "5c5e03254848dd25aa5880df6fa27675545d6c9d", "title": "Social Security and Social Welfare Data Mining: An Overview", "text": "The importance of social security and social welfare business has been increasingly recognized in more and more countries. It impinges on a large proportion of the population and affects government service policies and people's life quality. Typical welfare countries, such as Australia and Canada, have accumulated a huge amount of social security and social welfare data. Emerging business issues such as fraudulent outlays, and customer service and performance improvements challenge existing policies, as well as techniques and systems including data matching and business intelligence reporting systems. The need for a deep understanding of customers and customer-government interactions through advanced data analytics has been increasingly recognized by the community at large. So far, however, no substantial work on the mining of social security and social welfare data has been reported. For the first time in data mining and machine learning, and to the best of our knowledge, this paper draws a comprehensive overall picture and summarizes the corresponding techniques and illustrations to analyze social security/welfare data, namely, social security data mining (SSDM), based on a thorough review of a large number of related references from the past half century. In particular, we introduce an SSDM framework, including business and research issues, social security/welfare services and data, as well as challenges, goals, and tasks in mining social security/welfare data. A summary of SSDM case studies is also presented with substantial citations that direct readers to more specific techniques and practices about SSDM."}
{"_id": "3667784cac9c62fb4d9aa6a4ca074283a040326a", "title": "Record linkage software in the public domain: a comparison of Link Plus, The Link King, and a 'basic' deterministic algorithm", "text": "The study objective was to compare the accuracy of a deterministic record linkage algorithm and two public domain software applications for record linkage (The Link King and Link Plus). The three algorithms were used to unduplicate an administrative database containing personal identifiers for over 500,000 clients. Subsequently, a random sample of linked records was submitted to four research staff for blinded clerical review. Using reviewers' decisions as the 'gold standard', sensitivity and positive predictive values (PPVs) were estimated. Optimally, sensitivity and PPVs in the mid 90s could be obtained from both The Link King and Link Plus. Sensitivity and PPVs using a basic deterministic algorithm were 79 and 98 per cent respectively. Thus the full feature set of The Link King makes it an attractive option for SAS users. Link Plus is a good choice for non-SAS users as long as necessary programming resources are available for processing record pairs identified by Link Plus."}
{"_id": "48c1cbd9b7c059f5f585c003b68ad1ec32606d41", "title": "Natural terrain classification using three-dimensional ladar data for ground robot mobility", "text": "In recent years, much progress has been made in outdoor autonomo us navigation. However, safe navigation is still a daunting challenge in terrain containing vegetation. In this paper, we f ocus on the segmentation of ladar data into three classes using local three-dimensional point cloud statistics. The classes are: \u201dscatter\u201d to represent porous volumes such as grass and tree canopy, \u201dlinear\u201d to capture thin objects like wires or tree branches, and finally \u201dsurface\u201d to capture solid objects like ground surface, rocks or large trunks. We present the details of the proposed method, and the modifications we made to implement it on-board an autonom ous ground vehicle for real-time data processing. Finally, we present results produced from different sta tionary laser sensors and from field tests using an unmanned ground vehicle."}
{"_id": "05e6ca796d628f958d9084eee481b3d2f0046c1b", "title": "On the selection of tags for tag clouds", "text": "We examine the creation of a tag cloud for exploring and understanding a set of objects (e.g., web pages, documents). In the first part of our work, we present a formal system model for reasoning about tag clouds. We then present metrics that capture the structural properties of a tag cloud, and we briefly present a set of tag selection algorithms that are used in current sites (e.g., del.icio.us, Flickr, Technorati) or that have been described in recent work. In order to evaluate the results of these algorithms, we devise a novel synthetic user model. This user model is specifically tailored for tag cloud evaluation and assumes an \"ideal\" user. We evaluate the algorithms under this user model, as well as the model itself, using two datasets: CourseRank (a Stanford social tool containing information about courses) and del.icio.us (a social bookmarking site). The results yield insights as to when and why certain selection schemes work best."}
{"_id": "922781377baf5f245389d02fa41109535643f43b", "title": "Sneak-path constraints in memristor crossbar arrays", "text": "In a memristor crossbar array, a memristor is positioned on each row-column intersection, and its resistance, low or high, represents two logical states. The state of every memristor can be sensed by the current flowing through the memristor. In this work, we study the sneak path problem in crossbars arrays, in which current can sneak through other cells, resulting in reading a wrong state of the memristor. Our main contributions are a new characterization of arrays free of sneak paths, and efficient methods to read the array cells while avoiding sneak paths. To each read method we match a constraint on the array content that guarantees sneak-path free readout, and calculate the resulting capacity."}
{"_id": "b7634a0ac84902b135b6073b61ed6a1909f89bd2", "title": "The impact of non-cognitive skills on outcomes for young people Literature review 21 November 2013 Institute of Education", "text": ""}
{"_id": "c7b007d546d24322152719898c2836910f0d3939", "title": "Students' self-presentation on Facebook: An examination of personality and self-construal factors", "text": "The present research seeks to extend existing theory on self-disclosure to the online arena in higher educational institutions and contribute to the knowledge base and understanding about the use of a popular social networking site (SNS), Facebook, by college students. We conducted a non-experimental study to investigate how university students (N = 463) use Facebook, and examined the roles that personality and culture play in disclosure of information in online SNS-based environments. Results showed that individuals do disclose differently online vs. in-person, and that both culture and personality matter. Specifically, it was found that collectivistic individuals low on extraversion and interacting in an online environment disclosed the least honest and the most audience-relevant information, as compared to others. Exploratory analyses also indicate that students use sites such as Facebook primarily to maintain existing personal relationships and selectively used privacy settings to control their self-presentation on SNSs. The findings of this study offer insight into understanding college students\u2019 self-disclosure on SNS, add to the literature on personality and self-disclosure, and shape future directions for research and practice on online self-presentation. Published by Elsevier Ltd."}
{"_id": "6059142cefd9f436d8b33d13584219e7264f3982", "title": "Computational Linguistics and Deep Learning", "text": "Deep Learning waves have lapped at the shores of computational linguistics for several years now, but 2015 seems like the year when the full force of the tsunami hit the major Natural Language Processing (NLP) conferences. However, some pundits are predicting that the final damage will be even worse. Accompanying ICML 2015 in Lille, France, there was another, almost as big, event: the 2015 Deep Learning Workshop. The workshop ended with a panel discussion, and at it, Neil Lawrence said, \u201cNLP is kind of like a rabbit in the headlights of the Deep Learning machine, waiting to be flattened.\u201d Now that is a remark that the computational linguistics community has to take seriously! Is it the end of the road for us? Where are these predictions of steamrollering coming from? At the June 2015 opening of the Facebook AI Research Lab in Paris, its director Yann LeCun said: \u201cThe next big step for Deep Learning is natural language understanding, which aims to give machines the power to understand not just individual words but entire sentences and paragraphs.\u201d1 In a November 2014 Reddit AMA (Ask Me Anything), Geoff Hinton said, \u201cI think that the most exciting areas over the next five years will be really understanding text and videos. I will be disappointed if in five years\u2019 time we do not have something that can watch a YouTube video and tell a story about what happened. In a few years time we will put [Deep Learning] on a chip that fits into someone\u2019s ear and have an English-decoding chip that\u2019s just like a real Babel fish.\u201d2 And Yoshua Bengio, the third giant of modern Deep Learning, has also increasingly oriented his group\u2019s research toward language, including recent exciting new developments in neural machine translation systems. It\u2019s not just Deep Learning researchers. When leading machine learning researcher Michael Jordan was asked at a September 2014 AMA, \u201cIf you got a billion dollars to spend on a huge research project that you get to lead, what would you like to do?\u201d, he answered: \u201cI\u2019d use the billion dollars to build a NASA-size program focusing on natural language processing, in all of its glory (semantics, pragmatics, etc.).\u201d He went on: \u201cIntellectually I think that NLP is fascinating, allowing us to focus on highly structured inference problems, on issues that go to the core of \u2018what is thought\u2019 but remain eminently practical, and on a technology"}
{"_id": "7284b35cb62b1cb0cf9016e087f11f0768aaae6c", "title": "An analysis of accelerator coupling in heterogeneous architectures", "text": "Existing research on accelerators has emphasized the performance and energy e ciency improvements they can provide, devoting little attention to practical issues such as accelerator invocation and interaction with other on-chip components (e.g. cores, caches). In this paper we present a quantitative study that considers these aspects by implementing seven high-throughput accelerators following three design models: tight coupling behind a CPU, loose out-of-core coupling with Direct Memory Access (DMA) to the LLC, and loose out-of-core coupling with DMA to DRAM. A salient conclusion of our study is that working sets of non-trivial size are best served by loosely-coupled accelerators that integrate private memory blocks tailored to their needs."}
{"_id": "ae5e1b0941153ebc0de90b4830893618b81a7169", "title": "Constructing elastic distinguishability metrics for location privacy", "text": "With the increasing popularity of hand-held devices, location-based applications and services have access to accurate and real-time location information, raising serious privacy concerns for their users. The recently introduced notion of geo-indistinguishability tries to address this problem by adapting the well-known concept of differential privacy to the area of location-based systems. Although geo-indistinguishability presents various appealing aspects, it has the problem of treating space in a uniform way, imposing the addition of the same amount of noise everywhere on the map. In this paper we propose a novel elastic distinguishability metric that warps the geometrical distance, capturing the different degrees of density of each area. As a consequence, the obtained mechanism adapts the level of noise while achieving the same degree of privacy everywhere. We also show how such an elastic metric can easily incorporate the concept of a \u201cgeographic fence\u201d that is commonly employed to protect the highly recurrent locations of a user, such as his home or work. We perform an extensive evaluation of our technique by building an elastic metric for Paris\u2019 wide metropolitan area, using semantic information from the OpenStreetMap database. We compare the resulting mechanism against the Planar Laplace mechanism satisfying standard geo-indistinguishability, using two real-world datasets from the Gowalla and Brightkite location-based social networks. The results show that the elastic mechanism adapts well to the semantics of each area, adjusting the noise as we move outside the city center, hence offering better overall privacy. 1"}
{"_id": "812183caa91bab9c2729de916ee3789b68023f39", "title": "Modeling the Detection of Textual Cyberbullying", "text": "The scourge of cyberbullying has assumed alarming proportions with an ever-increasing number of adolescents admitting to having dealt with it either as a victim or as a bystander. Anonymity and the lack of meaningful supervision in the electronic medium are two factors that have exacerbated this social menace. Comments or posts involving sensitive topics that are personal to an individual are more likely to be internalized by a victim, often resulting in tragic outcomes. We decompose the overall detection problem into detection of sensitive topics, lending itself into text classification sub-problems. We experiment with a corpus of 4500 YouTube comments, applying a range of binary and multiclass classifiers. We find that binary classifiers for individual labels outperform multiclass classifiers. Our findings show that the detection of textual cyberbullying can be tackled by building individual topic-sensitive classifiers."}
{"_id": "0e3b17c933dcd8c8cb48ed6d23f358b94d1fda60", "title": "A Graph-based System for Network-vulnerability Analysis", "text": "This paper presents a graph-based approach to network vulnerability analysis. The method is flexible, allowing analysis of attacks from both outside and inside the network. It can analyze risks to a specific network asset, or examine the universe of possible consequences following a successful attack. The graph-based tool can identify the set of attack paths that have a high probability of success (or a low \"effort\" cost) for the attacker. The system could be used to test the effectiveness of making configuration changes, implementing an intrusion detection system, etc. The analysis system requires as input a database of common attacks, broken into atomic steps, specific network configuration and topology information, and an attacker profile. The attack information is \"matched\" with the network configuration information and an attacker profile to create a superset attack graph. Nodes identify a stage of attack, for example the class of machines the attacker has accessed and the user privilege level he or she has compromised. The arcs in the attack graph represent attacks or stages of attacks. By assigning probabilities of success on the arcs or costs representing level-of-effort for the attacker, various graph algorithms such as shortest-path algorithms can identify the attack paths with the highest probability of success."}
{"_id": "d8fb5a42cb0705716bf41cd23e1a2a9012b76f99", "title": "Using attack graphs for correlating, hypothesizing, and predicting intrusion alerts", "text": "To defend against multi-step intrusions in high-speed networks, efficient algorithms are needed to correlate isolated alerts into attack scenarios. Existing correlation methods usually employ an in-memory index for fast searches among received alerts. With finite memory, the index can only be built on a limited number of alerts inside a sliding window. Knowing this fact, an attacker can prevent two attack steps from both falling into the sliding window by either passively delaying the second step or actively injecting bogus alerts between the two steps. In either case, the correlation effort is defeated. In this paper, we first address the above issue with a novel queue graph (QG) approach. Instead of searching all the received alerts for those that prepare for a new alert, we only search for the latest alert of each type. The correlation between the new alert and other alerts is implicitly represented using the temporal order between alerts. Consequently, our approach can correlate alerts that are arbitrarily far away, and it has a linear (in the number of alert types) time complexity and quadratic memory requirement. Then, we extend the basic QG approach to a unified method to hypothesize missing alerts and to predict future alerts. Finally, we propose a compact representation for the result of alert correlation. Empirical results show that our method can fulfill correlation tasks faster than an IDS can report alerts. Hence, the method is a promising solution for administrators to monitor and predict the progress of intrusions and thus to take appropriate countermeasures in a timely manner. 2006 Elsevier B.V. All rights reserved."}
{"_id": "1e3b61f29e5317ef59d367e1a53ba407912d240e", "title": "Computers and Intractability: A Guide to the Theory of NP-Completeness", "text": "After the files from my ipod just to say is intelligent enough room. Thanks I use your playlists or cmd opt. Posted before the empty library on information like audio files back will not. This feature for my music folder and copying but they do not sure you. I was originally trying the ipod to know my tunes versiondo you even. To work fine but majority of transferring. Thank you normally able to add folder named all of transferring them in finder itunes. Im out itunes music and would solve. But the entire music folder that, are not method because file access? D I just accepted files to enable disk mode. Thanks the scope of ipod itself all. Probably take too much obliged but never full. I tuneaid went to add this tutorial have you. My uncle will do the itunes. Thanks further later in fact the use. On your advice by other information contained in my ipod. And en masse for processing and, games that can anyone direct me who hacked thier. Help youll have an external hard drives. My new synchronization from the major advantage. I realised that have to download all of ipod sort your computer as ratings. Thank you can I wouldnt read, a bunch. Software listed procedure it did and uncheck. My quest for a problem if I didnt know just. Itunes and itunes can not want. Some songs from basic copying files now I can anyone advise can. I knew that you not recognize the add songs to recover."}
{"_id": "235f3b46f88e8696a8bc55c4eb616a8a27043540", "title": "Constructing attack scenarios through correlation of intrusion alerts", "text": "Traditional intrusion detection systems (IDSs) focus on low-level attacks or anomalies, and raise alerts independently, though there may be logical connections between them. In situations where there are intensive intrusions, not only will actual alerts be mixed with false alerts, but the amount of alerts will also become unmanageable. As a result, it is difficult for human users or intrusion response systems to understand the alerts and take appropriate actions. This paper presents a practical technique to address this issue. The proposed approach constructs attack scenarios by correlating alerts on the basis of prerequisites and consequences of intrusions. Intuitively, the prerequisite of an intrusion is the necessary condition for the intrusion to be successful, while the consequence of an intrusion is the possible outcome of the intrusion. Based on the prerequisites and consequences of different types of attacks, the proposed approach correlates alerts by (partially) matching the consequence of some previous alerts and the prerequisite of some later ones. The contribution of this paper includes a formal framework for alert correlation, the implementation of an off-line alert correlator based on the framework, and the evaluation of our method with the 2000 DARPA intrusion detection scenario specific datasets. Our experience and experimental results have demonstrated the potential of the proposed method and its advantage over alternative methods."}
{"_id": "c83abfeb5a2f7d431022cd1f8dd7da41431c4810", "title": "A Framework for Estimating Driver Decisions Near Intersections", "text": "We present a framework for the estimation of driver behavior at intersections, with applications to autonomous driving and vehicle safety. The framework is based on modeling the driver behavior and vehicle dynamics as a hybrid-state system (HSS), with driver decisions being modeled as a discrete-state system and the vehicle dynamics modeled as a continuous-state system. The proposed estimation method uses observable parameters to track the instantaneous continuous state and estimates the most likely behavior of a driver given these observations. This paper describes a framework that encompasses the hybrid structure of vehicle-driver coupling and uses hidden Markov models (HMMs) to estimate driver behavior from filtered continuous observations. Such a method is suitable for scenarios that involve unknown decisions of other vehicles, such as lane changes or intersection access. Such a framework requires extensive data collection, and the authors describe the procedure used in collecting and analyzing vehicle driving data. For illustration, the proposed hybrid architecture and driver behavior estimation techniques are trained and tested near intersections with exemplary results provided. Comparison is made between the proposed framework, simple classifiers, and naturalistic driver estimation. Obtained results show promise for using the HSS-HMM framework."}
{"_id": "4c4f026f8a0fc5e9c72825dd79677a4d205e5588", "title": "Phone-aware LSTM-RNN for voice conversion", "text": "This paper investigates a new voice conversion technique using phone-aware Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs). Most existing voice conversion methods, including Joint Density Gaussian Mixture Models (JDGMMs), Deep Neural Networks (DNNs) and Bidirectional Long Short-Term Memory Recurrent Neural Networks (BLSTM-RNNs), only take acoustic information of speech as features to train models. We propose to incorporate linguistic information to build voice conversion system by using monophones generated by a speech recognizer as linguistic features. The monophones and spectral features are combined together to train LSTM-RNN based voice conversion models, reinforcing the context-dependency modelling of LSTM-RNNs. The results of the 1st voice conversion challenge shows our system achieves significantly higher performance than baseline (GMM method) and was found among the most competitive scores in similarity test. Meanwhile, the experimental results show phone-aware LSTM-RNN method obtains lower Mel-cepstral distortion and higher MOS scores than the baseline LSTM-RNNs."}
{"_id": "c36720be7f2c8dcbfc00a6a84e17c3378ac15c9c", "title": "Formal Verification of CNN-based Perception Systems", "text": "We address the problem of verifying neural-based perception systems implemented by convolutional neural networks. We define a notion of local robustness based on affine and photometric transformations. We show the notion cannot be captured by previously employed notions of robustness. The method proposed is based on reachability analysis for feed-forward neural networks and relies on MILP encodings of both the CNNs and transformations under question. We present an implementation and discuss the experimental results obtained for a CNN trained from the MNIST data set."}
{"_id": "a8c1347b82ba3d7ce03122955762db86d44186d0", "title": "Submodular video hashing: a unified framework towards video pooling and indexing", "text": "This paper develops a novel framework for efficient large-scale video retrieval. We aim to find video according to higher level similarities, which is beyond the scope of traditional near duplicate search. Following the popular hashing technique we employ compact binary codes to facilitate nearest neighbor search. Unlike the previous methods which capitalize on only one type of hash code for retrieval, this paper combines heterogeneous hash codes to effectively describe the diverse and multi-scale visual contents in videos. Our method integrates feature pooling and hashing in a single framework. In the pooling stage, we cast video frames into a set of pre-specified components, which capture a variety of semantics of video contents. In the hashing stage, we represent each video component as a compact hash code, and combine multiple hash codes into hash tables for effective search. To speed up the retrieval while retaining most informative codes, we propose a graph-based influence maximization method to bridge the pooling and hashing stages. We show that the influence maximization problem is submodular, which allows a greedy optimization method to achieve a nearly optimal solution. Our method works very efficiently, retrieving thousands of video clips from TRECVID dataset in about 0.001 second. For a larger scale synthetic dataset with 1M samples, it uses less than 1 second in response to 100 queries. Our method is extensively evaluated in both unsupervised and supervised scenarios, and the results on TRECVID Multimedia Event Detection and Columbia Consumer Video datasets demonstrate the success of our proposed technique."}
{"_id": "8e225be4620909ff31690a524096be92b0bfc991", "title": "An algorithmic overview of surface registration techniques for medical imaging", "text": "This paper presents a literature survey of automatic 3D surface registration techniques emphasizing the mathematical and algorithmic underpinnings of the subject. The relevance of surface registration to medical imaging is that there is much useful anatomical information in the form of collected surface points which originate from complimentary modalities and which must be reconciled. Surface registration can be roughly partitioned into three issues: choice of transformation, elaboration of surface representation and similarity criterion, and matching and global optimization. The first issue concerns the assumptions made about the nature of relationships between the two modalities, e.g. whether a rigid-body assumption applies, and if not, what type and how general a relation optimally maps one modality onto the other. The second issue determines what type of information we extract from the 3D surfaces, which typically characterizes their local or global shape, and how we organize this information into a representation of the surface which will lead to improved efficiency and robustness in the last stage. The last issue pertains to how we exploit this information to estimate the transformation which best aligns local primitives in a globally consistent manner or which maximizes a measure of the similarity in global shape of two surfaces. Within this framework, this paper discusses in detail each surface registration issue and reviews the state-of-the-art among existing techniques."}
{"_id": "047eeb7fce304fdb2b41f3c4d0b393dd1137bdab", "title": "Infrastructure support for mobile computing", "text": "Mobile computing is emerging as the prime focus of next generation computing .One of the prime issues of mobile computing is to provide infrastructure support in terms of computing devices, seamless mobility, application middleware, data and user security, and user applications/services. Mobile commerce is one of the driving forces that has evinced enormous interest in mobile computing .The thought of conducting commerce on the go is what is driving the huge investments corporations are making in researching this area. This paper discusses the various challenges in providing infrastructure for wireless computing."}
{"_id": "9a85ba84848e201bbd3587e86a7ac9e762935faa", "title": "A signal conditioner for high-frequency inductive position sensors", "text": "The use of integrated circuit techniques in position sensing applications can offer significant advantages over discrete component solutions, such as miniaturization, reduced cost, increased reliability and enhanced sensitivity. We realized a signal conditioner that includes an application specific integrated circuit (ASIC) and an external microcontroller for readout and control of non-contact high-frequency inductive position sensors. Both the system architecture and details on the key blocks are described. The ASIC was designed in a 0.6-\u00bfm CMOS process technology and preliminary measured results are presented. The signal conditioner architecture is universal and can be used for other sensor types such as LVDTs."}
{"_id": "0faccce84266d2a8f0c4fa08c33b357b42cf17f2", "title": "Latent Predictor Networks for Code Generation", "text": "Many language generation tasks require the production of text conditioned on both structured and unstructured inputs. We present a novel neural network architecture which generates an output sequence conditioned on an arbitrary number of input functions. Crucially, our approach allows both the choice of conditioning context and the granularity of generation, for example characters or tokens, to be marginalised, thus permitting scalable and effective training. Using this framework, we address the problem of generating programming code from a mixed natural language and structured specification. We create two new data sets for this paradigm derived from the collectible trading card games Magic the Gathering and Hearthstone. On these, and a third preexisting corpus, we demonstrate that marginalising multiple predictors allows our model to outperform strong benchmarks."}
{"_id": "9fb438389d032453de0d1b5619e19696bf1a95bb", "title": "A huge, undescribed soil ciliate (Protozoa: Ciliophora) diversity in natural forest stands of Central Europe", "text": "We investigated 12 natural forest stands in eastern Austria for soil ciliate diversity, viz., eight beech forests and two lowland and Pinus nigra forests each. The stands span a wide range of climatic (e.g., 543\u20131759 mm precipitation, 160\u20131035 m above sea-level) and abiotic (e.g., pH 4\u20137.4) factors. Samples were taken twice in autumn and late spring and analysed with the non-flooded Petri dish method. Species were identified in vivo, in silver preparations, and in the scanning electron microscope. A total of 233 species were found, of which 30 were undescribed, a surprising number showing our ignorance of soil ciliate diversity, even in Central Europe. Species number varied highly from 45 (acidic beech on silicate) to 120 (floodplain forest) and was strongly correlated with pH and overall habitat quality, as measured by climate, the C/P quotient (ratio of r-selected colpodean and k-selected polyhymenophorean ciliates), and the proportion of mycophagous ciliate species; multivariate analysis showed further important variables, viz., the general nutrient status (glucose, nitrogen, C/ N ratio) and microbial (urease) activity. The highest species number occurred in one of the two floodplain soils, supporting the intermediate disturbance hypothesis. The three main forest types could be clearly distinguished by their ciliate communities, using similarity indices and multidimensional scaling. Individual numbers varied highly from 135\u22121 (lowland forest) to 10,925 ml\u22121 (beech on silicate) soil percolate and showed, interestingly, a weak correlation with soil protozoan phospholipid fatty acids. Eight of the 30 new species found and a forgotten species, Arcuospathidium coemeterii (Kahl 1943) nov. comb., are described in detail, as examples of how species were recognized and soil protozoan diversity should be analyzed: Latispathidium truncatum bimicronucleatum, Protospathidium fusioplites, Erimophrya sylvatica, E. quadrinucleata, Paragonostomum simplex, Periholosticha paucicirrata, P. sylvatica, and Australocirrus zechmeisterae."}
{"_id": "dde39a1ec354605233f3ccec63b3bf61995206f5", "title": "Image Enhancement in Encrypted Domain over Cloud", "text": "Cloud-based multimedia systems are becoming increasingly common. These systems offer not only storage facility, but also high-end computing infrastructure which can be used to process data for various analysis tasks ranging from low-level data quality enhancement to high-level activity and behavior identification operations. However, cloud data centers, being third party servers, are often prone to information leakage, raising security and privacy concerns. In this article, we present a Shamir's secret sharing based method to enhance the quality of encrypted image data over cloud. Using the proposed method we show that several image enhancement operations such as noise removal, antialiasing, edge and contrast enhancement, and dehazing can be performed in encrypted domain with near-zero loss in accuracy and minimal computation and data overhead. Moreover, the proposed method is proven to be information theoretically secure."}
{"_id": "ac569822882547080d3dc51fed10c746946a6cfd", "title": "IT-Forensic Automotive Investigations on the Example of Route Reconstruction on Automotive System and Communication Data", "text": ""}
{"_id": "69e8fb8b29de8427f69676a2ffdd9699b3f2089e", "title": "BLE device indoor localization based on RSS fingerprinting mapped by propagation modes", "text": "Nowadays, Bluetooth Low Energy (BLE) technology has a great attention in the field of wireless localization techniques, especially in the case of indoor scenarios. Study presented in this paper explores BLE localization performance in indoor scenarios based on a received signal strength (RSS). Firstly, we present a Ray-Launching based application to emulate BLE radio frequency (RF) signal propagation in the office environment. Secondly, an appropriate measurement workplace and its setup is proposed to measure RSS values. Obtained results are used to create an RSS-fingerprinting map. Furthermore, accuracy of position determination is calculated and evaluated. Results show advantage of BLE technology for indoor localization purposes (without calibration measurement). However, its performance highly depends on the number of considered BLE nodes and on applied evaluation method (e.g. number of considered sectors)."}
{"_id": "5c219b9fe36ea7bb22d11bad634b64e244f4d9d0", "title": "Visualization of medical data based on EHR standards.", "text": "BACKGROUND\nTo organize an efficient interaction between a doctor and an EHR the data has to be presented in the most convenient way. Medical data presentation methods and models must be flexible in order to cover the needs of the users with different backgrounds and requirements. Most visualization methods are doctor oriented, however, there are indications that the involvement of patients can optimize healthcare.\n\n\nOBJECTIVES\nThe research aims at specifying the state of the art of medical data visualization. The paper analyzes a number of projects and defines requirements for a generic ISO 13606 based data visualization method. In order to do so it starts with a systematic search for studies on EHR user interfaces.\n\n\nMETHODS\nIn order to identify best practices visualization methods were evaluated according to the following criteria: limits of application, customizability, re-usability. The visualization methods were compared by using specified criteria.\n\n\nRESULTS\nThe review showed that the analyzed projects can contribute knowledge to the development of a generic visualization method. However, none of them proposed a model that meets all the necessary criteria for a re-usable standard based visualization method. The shortcomings were mostly related to the structure of current medical concept specifications.\n\n\nCONCLUSION\nThe analysis showed that medical data visualization methods use hardcoded GUI, which gives little flexibility. So medical data visualization has to turn from a hardcoded user interface to generic methods. This requires a great effort because current standards are not suitable for organizing the management of visualization data. This contradiction between a generic method and a flexible and user-friendly data layout has to be overcome."}
{"_id": "b728f6867d7cc038fc2127c85c09d4f56df90789", "title": "\u201cThe magic finger technique\u201d a simplified approach for more symmetric results in alar base resection", "text": "Alar base surgery is one of the most important and challenging steps of aesthetic rhinoplasty. While an ideally shaped alar base is the goal in a desired nose, nearly all patients have asymmetric nostrils preoperatively. Ethnicity, trauma, cocaine use, or previous rhinoplasties are some factors affecting the width and shape of the nasal base. After the conclusion of all planned rhinoplasty sequences and closure of the mid-columellarincision, we mark the midline inferior to the columella at the nasolabial junction and use acaliper to measure an equal distance from the mid-columellar point to the alar creases on eachside, and mark the medial points of the alar creases. Next we draw on the natural creasesbilaterally extending to 3 o\u2019clock on the right side and 9 o\u2018clock on the left side as the limit ofthe lateral excisions to avoid scarring. We then gently depress the alae and alar-facial grooveswith the index finger and allow the formation of new creases superior to the original alarcreases in order to detect excess skin to remove. After marking, the resection was performed with a no. 15 blade. The excision was closed using 6-0 Prolene sutures. We aimed to describe a simple technique for making asymmetric resections in which theapplication of pressure by a finger reveals excess skin in both nostril sill and nostril flareindependently for each alar base. With these asymmetric excisions from the right and left alar bases, a more symmetric nostrils and nasal base can be achieved. Level of Evidence: Level IV, therapeutic study."}
{"_id": "483e8e8d19a56405d303cd3dc4b1dc16d1e9fa90", "title": "Exploiting Eigenposteriors for Semi-Supervised Training of DNN Acoustic Models with Sequence Discrimination", "text": "Deep neural network (DNN) acoustic models yield posterior probabilities of senone classes. Recent studies support the existence of low-dimensional subspaces underlying senone posteriors. Principal component analysis (PCA) is applied to identify eigenposteriors and perform low-dimensional projection of the training data posteriors. The resulted enhanced posteriors are applied as soft targets for training better DNN acoustic model under the student-teacher framework. The present work advances this approach by studying incorporation of sequence discriminative training. We demonstrate how to combine the gains from eigenposterior based enhancement with sequence discrimination to improve ASR using semi-supervised training. Evaluation on AMI meeting corpus yields nearly 4% absolute reduction in word error rate (WER) compared to the baseline DNN trained with cross entropy objective. In this context, eigenposterior enhancement of the soft targets is crucial to enable additive improvement using out-of-domain untranscribed data."}
{"_id": "6d28fb585dc903bc21e999db83f9101d91c13856", "title": "The five-repetition sit-to-stand test as a functional outcome measure in COPD.", "text": "BACKGROUND\nMoving from sitting to standing is a common activity of daily living. The five-repetition sit-to-stand test (5STS) is a test of lower limb function that measures the fastest time taken to stand five times from a chair with arms folded. The 5STS has been validated in healthy community-dwelling adults, but data in chronic obstructive pulmonary disease (COPD) populations are lacking.\n\n\nAIMS\nTo determine the reliability, validity and responsiveness of the 5STS in patients with COPD.\n\n\nMETHODS\nTest-retest and interobserver reliability of the 5STS was measured in 50 patients with COPD. To address construct validity we collected data on the 5STS, exercise capacity (incremental shuttle walk (ISW)), lower limb strength (quadriceps maximum voluntary contraction (QMVC)), health status (St George's Respiratory Questionnaire (SGRQ)) and composite mortality indices (Age Dyspnoea Obstruction index (ADO), BODE index (iBODE)). Responsiveness was determined by measuring 5STS before and after outpatient pulmonary rehabilitation (PR) in 239 patients. Minimum clinically important difference (MCID) was estimated using anchor-based methods.\n\n\nRESULTS\nTest-retest and interobserver intraclass correlation coefficients were 0.97 and 0.99, respectively. 5STS time correlated significantly with ISW, QMVC, SGRQ, ADO and iBODE (r=-0.59, -0.38, 0.35, 0.42 and 0.46, respectively; all p<0.001). Median (25th, 75th centiles) 5STS time decreased with PR (Pre: 14.1 (11.5, 21.3) vs Post: 12.4 (10.2, 16.3) s; p<0.001). Using different anchors, a conservative estimate for the MCID was 1.7 s.\n\n\nCONCLUSIONS\nThe 5STS is reliable, valid and responsive in patients with COPD with an estimated MCID of 1.7 s. It is a practical functional outcome measure suitable for use in most healthcare settings."}
{"_id": "a659283e1c28aa14bd86e15b1b08fe6cf7be4162", "title": "Change Detection in Feature Space Using Local Binary Similarity Patterns", "text": "In general, the problem of change detection is studied in color space. Most proposed methods aim at dynamically finding the best color thresholds to detect moving objects against a background model. Background models are often complex to handle noise affecting pixels. Because the pixels are considered individually, some changes cannot be detected because it involves groups of pixels and some individual pixels may have the same appearance as the background. To solve this problem, we propose to formulate the problem of background subtraction in feature space. Instead of comparing the color of pixels in the current image with colors in a background model, features in the current image are compared with features in the background model. The use of a feature at each pixel position allows accounting for change affecting groups of pixels, and at the same time adds robustness to local perturbations. With the advent of binary feature descriptors such as BRISK or FREAK, it is now possible to use features in various applications at low computational cost. We thus propose to perform background subtraction with a small binary descriptor that we named Local Binary Similarity Patterns (LBSP). We show that this descriptor outperforms color, and that a simple background subtractor using LBSP outperforms many sophisticated state of the art methods in baseline scenarios."}
{"_id": "233880c776b55c140b32409cbad068f82d36f404", "title": "Customer Knowledge Contribution Behavior in Social Shopping Communities", "text": "Social shopping communities, a special form of social media, have offered fertile ground for customers to communicate their opinions and exchange product information. Although social shopping communities have the potential to transform the way online customers acquire knowledge in everyday life, research in information systems has paid little attention to this emerging type of social media. Thus, the goal of this paper is to enhance our understanding of user behavior in this new form of community. We propose and empirically test an integrative theoretical model of customer knowledge contribution based on social capital theory. By analyzing panel data collected over two weeks from 2,251 customers in a social shopping community, we found that reputation, reciprocity, network centrality, as well as customer expertise have significant impact on customer knowledge contribution. These results contribute significantly to the literature and provide important implications for future research and practice."}
{"_id": "2e99aadbf04eb7f7a1f5ea99e5358048c2f70282", "title": "A Framework for Structural Risk Minimisation", "text": "The paper introduces a framework for studying structural risk minimisation. The model views structural risk minimisation in a PAC context. It then considers the more general case when the hierarchy of classes is chosen in response to the data. This theoretically explains the impressive performance of the maximal margin hyperplane algorithm of Vapnik. It may also provide a general technique for exploiting serendipitous simplicity in observed data to obtain better prediction accuracy from small training sets."}
{"_id": "11afd01a3c1f9713f9b04d4b1327326dc357d14d", "title": "Analysis of H.264 Bitstream Prioritization for Dual TCP/UDP Streaming of HD video over WLANs", "text": "Flexible Dual-TCP/UDP Streaming Protocol (FDSP) is a new method for streaming H.264-encoded HD video over wireless networks. The method takes advantage of the hierarchical structure of H.264/AVC syntax and uses TCP to transmit important syntax elements of H.264/AVC video and UDP to transmit less important elements. FDSP was shown to outperform pure-UDP streaming in visual quality and pure-TCP streaming in delay. In this work, FDSP is expanded to include a new parameter called Bitstream Prioritization (BP). The newly modified algorithm, FDSP-BP, is analyzed to measure the impact of BP on the quality of streaming for partially and fully congested networks. Our analysis shows that FDSP-BP is superior to pure-TCP streaming methods with respect to rebuffering instances, while still maintaining high visual quality."}
{"_id": "e70ea58d023df2c31325a9b409ee4493e38b6768", "title": "Business intelligence and analytics: From big data to big impact", "text": ""}
{"_id": "f1559567fbcc1dad4d99574eff3d6dcc216c51f6", "title": "Next-Generation Big Data Analytics: State of the Art, Challenges, and Future Research Topics", "text": "The term big data occurs more frequently now than ever before. A large number of fields and subjects, ranging from everyday life to traditional research fields (i.e., geography and transportation, biology and chemistry, medicine and rehabilitation), involve big data problems. The popularizing of various types of network has diversified types, issues, and solutions for big data more than ever before. In this paper, we review recent research in data types, storage models, privacy, data security, analysis methods, and applications related to network big data. Finally, we summarize the challenges and development of big data to predict current and future trends."}
{"_id": "3895912b187adee599b1ea662da92865dd0b197d", "title": "Big data analytics in healthcare: promise and potential", "text": "OBJECTIVE\nTo describe the promise and potential of big data analytics in healthcare.\n\n\nMETHODS\nThe paper describes the nascent field of big data analytics in healthcare, discusses the benefits, outlines an architectural framework and methodology, describes examples reported in the literature, briefly discusses the challenges, and offers conclusions.\n\n\nRESULTS\nThe paper provides a broad overview of big data analytics for healthcare researchers and practitioners.\n\n\nCONCLUSIONS\nBig data analytics in healthcare is evolving into a promising field for providing insight from very large data sets and improving outcomes while reducing costs. Its potential is great; however there remain challenges to overcome."}
{"_id": "f4f7581e8bd35c6decf72b60bc5e4a2bce723adc", "title": "Local energy-based shape histogram feature extraction technique for breast cancer diagnosis", "text": "25 26 27 28 29 30 31 32 33 34 35 Article history: Available online xxxx"}
{"_id": "b43d9c9908ca7d71dd2033aa6baa382b8188dc12", "title": "An unusual case of predation: dog pack or cougar attack?", "text": "Injuries produced by animals are capable of leaving severe patterns and in some cases may result in the death of the attacked individual. Law enforcement authorities may come to erroneous conclusions about the source of the bites based on their awareness of animals present and similarities of the injuries to the untrained eye, with dreadful consequences. Expertise of a carnivore biologist and an odontologist that indentifies the particularities of bite marks may be useful for identifying the attacking species. We present the investigation of a fatal dog pack attack involving a 43-year-old man in Bell Ville (Argentina) where the evidence provided by a forensic dentist and a biologist was categorical for establishing the animal species involved. Because of the unusual characteristics of the wounds and the initial hypothesis made by local authorities of a cougar attack, habits and specific patterns of both dog pack and cougar predation on humans are discussed."}
{"_id": "9463da61f5ec3a558253efa651f2715c9ff40e1f", "title": "Diagnosability of discrete-event systems", "text": "Abstruct-Fault detection and isolation is a crucial and challenging task in the automatic control of large complex systems. We propose a discrete-event system (DES) approach to the problem of failure diagnosis. We introduce two related notions of diagnosability of DES\u2019s in the framework of formal languages and compare diagnosability with the related notions of observability and invertibility. We present a systematic procedure for detection and isolation of failure events using diagnosers and provide necessary and sufficient conditions for a language to be diagnosable. The diagnoser performs diagnostics using online observations of the system behavior; it is also used to state and verify off-line the necessary and sufficient conditions for diagnosability. These conditions are stated on the diagnoser or variations thereof. The approach to failure diagnosis presented in this paper is applicable to systems that fall naturally in the class of DES\u2019s; moreover, for the purpose of diagnosis, most continuous variable dynamic systems can be viewed as DES\u2019s at a higher level of abstraction. In a companion paper [20], we provide a methodology for building DES models for the purpose of failure diagnosis and present applications of the theory developed in this paper."}
{"_id": "8a30c83da72edf574520cbb4ff8e570eb05954ce", "title": "A NOVEL OF RECONFIGURABLE PLANAR ANTENNA ARRAY ( RPAA ) WITH BEAM STEERING CONTROL", "text": "A new antenna structure is formed by combining the concept of reconfigurable planar antenna array (RPAA) with the parasitic elements to produce beam steering patterns. The antenna has been integrated with the PIN diode switches that enable the beam to be steered in the desired direction. This has been done by changing the switch state to either on or off mode. In this work, a number of parasitic elements have been applied to the antenna, namely reflectors and directors. They are placed in between the driven elements, which is aimed to improve the beam steering angle. With such configuration, the main beam radiated by the array can be tilted due to the effect of mutual coupling between the driven elements and parasitic elements (reflectors and director). The unique property of this antenna design is that instead of fabricating all together in the same plane, the antenna\u2019s feeding network is separated from the antenna radiating elements (the patches) by an air gap distance. This allows reducing the spurious effects from the feeding line. The optimization results for the resonant Corresponding author: M. T. Ali (mizi732002@yahoo.com)."}
{"_id": "af34fc0aebff011b56ede8f46ca0787cfb1324ac", "title": "A Deep Learning Approach to Universal Skin Disease Classification", "text": "Skin diseases are very common in people\u2019s daily life. Each year, millions of people in American are affected by all kinds of skin disorders. Diagnosis of skin diseases sometimes requires a high-level of expertise due to the variety of their visual aspects. As human judgment are often subjective and hardly reproducible, to achieve a more objective and reliable diagnosis, a computer aided diagnostic system should be considered. In this paper, we investigate the feasibility of constructing a universal skin disease diagnosis system using deep convolutional neural network (CNN). We train the CNN architecture using the 23,000 skin disease images from the Dermnet dataset and test its performance with both the Dermnet and OLE, another skin disease dataset, images. Our system can achieve as high as 73.1% Top-1 accuracy and 91.0% Top-5 accuracy when testing on the Dermnet dataset. For the test on the OLE dataset, Top-1 and Top-5 accuracies are 31.1% and 69.5%. We show that these accuracies can be further improved if more training images are used."}
{"_id": "73a19026fb8a6ef5bf238ff472f31100c33753d0", "title": "Association Rules Mining : A Recent Overview", "text": "In this paper, we provide the preliminaries of basic concepts about association rule mining and survey the list of existing association rule mining techniques. Of course, a single article cannot be a complete review of all the algorithms, yet we hope that the references cited will cover the major theoretical issues, guiding the researcher in interesting research directions that have yet to be explored."}
{"_id": "d3acac834f244deaff211c8e128eb82937a947de", "title": "Network Hardware-Accelerated Consensus", "text": "Consensus protocols are the foundation for building many fault-tolerant distributed systems and services. This paper posits that there are significant performance benefits to be gained by offering consensus as a network service (CAANS). CAANS leverages recent advances in commodity networking hardware design and programmability to implement consensus protocol logic in network devices. CAANS provides a complete Paxos protocol, is a dropin replacement for software-based implementations of Paxos, makes no restrictions on network topologies, and is implemented in a higher-level, data-plane programming language, allowing for portability across a range of target devices. At the same time, CAANS significantly increases throughput and reduces latency for consensus operations. Consensus logic executing in hardware can transmit consensus messages at line speed, with latency only slightly higher than simply forwarding packets. Report Info"}
{"_id": "d3f811997b68ad3c2fc72f2cabfd420fdefa1549", "title": "Major components of the gravity recommendation system", "text": "The Netflix Prize is a collaborative filtering problem. This subfield of machine learning became popular in the late 1990s with the spread of online services that used recommendation systems (e.g. Amazon, Yahoo! Music, and of course Netflix). The aim of such a system is to predict what items a user might like based on his/her and other users' previous ratings. The Netflix Prize dataset is much larger than former benchmark datasets, therefore the scalability of the algorithms is a must. This paper describes the major components of our blending based solution, called the Gravity Recommendation System (GRS). In the Netflix Prize contest, it attained RMSE 0.8743 as of November 2007. We now compare the effectiveness of some selected individual and combined approaches on a particular subset of the Prize dataset, and discuss their important features and drawbacks."}
{"_id": "75e5ba7621935b57b2be7bf4a10cad66a9c445b9", "title": "Equidistant prototypes embedding for single sample based face recognition with generic learning and incremental learning", "text": "We develop a parameter-free face recognition algorithm which is insensitive to large variations in lighting, expression, occlusion, and age using a single gallery sample per subject. We take advantage of the observation that equidistant prototypes embedding is an optimal embedding that maximizes the minimum one-against-the-rest margin between the classes. Rather than preserving the global or local structure of the training data, our method, called linear regression analysis (LRA), applies least-square regression technique to map gallery samples to the equally distant locations, regardless of the true structure of training data. Further, a novel generic learning method, which maps the intra-class facial differences of the generic faces to the zero vectors, is incorporated to enhance the generalization capability of LRA. Using this novel method, learning based on only a handful of generic classes can largely improve the face recognition performance, even when the generic data are collected from a different database and camera set-up. The incremental learning based on the Greville algorithm makes the mapping matrix efficiently updated from the newly coming gallery classes, training samples, or generic variations. Although it is fairly simple and parameter-free, LRA, combined with commonly used local descriptors, such as Gabor representation and local binary patterns, outperforms the state-of-the-art methods for several standard experiments on the Extended Yale B, CMU PIE, AR, and \u2217Corresponding author. Tel:+86 10 62283059 Fax: +86 10 62285019 Email address: whdeng@bupt.edu.cn (Weihong Deng) Preprint submitted to Elsevier March 28, 2014"}
{"_id": "f866aa1f65eebcbde49cb9cc6fc0bf98c860234e", "title": "A multilevel meta-analysis of studies reporting correlations between the h index and 37 different h index variants", "text": "This paper presents the first meta-analysis of studies that computed correlations between the h index and variants of the h index (such as the g index; in total 37 different variants) that have been proposed and discussed in the literature. A high correlation between the h index and its variants would indicate that the h index variants hardly provide added information to the h index. This meta-analysis included 135 correlation coefficients from 32 studies. The studies were based on a total sample size of N = 9005; on average, each study had a sample size of n = 257. The results of a three-level cross-classified mixed-effects metaanalysis show a high correlation between the h index and its variants: Depending on the model, the mean correlation coefficient varies between .8 and .9. This means that there is redundancy between most of the h index variants and the h index. There is a statistically significant study-to-study variation of the correlation coefficients in the information they yield. The lowest correlation coefficients with the h index are found for the h index variants MII and m index. Hence, these h index variants make a non-redundant contribution to the h index. \u00a9 2011 Elsevier Ltd. All rights reserved."}
{"_id": "5e8d51a1f6ba313a38a35af414a00bcfd3b5c0ae", "title": "Autonomous Ground Vehicles\u2014Concepts and a Path to the Future", "text": "Autonomous vehicles promise numerous improvements to vehicular traffic: an increase in both highway capacity and traffic flow because of faster response times, less fuel consumption and pollution thanks to more foresighted driving, and hopefully fewer accidents thanks to collision avoidance systems. In addition, drivers can save time for more useful activities. In order for these vehicles to safely operate in everyday traffic or in harsh off-road environments, a multitude of problems in perception, navigation, and control have to be solved. This paper gives an overview of the most current trends in autonomous vehicles, highlighting the concepts common to most successful systems as well as their differences. It concludes with an outlook into the promising future of autonomous vehicles."}
{"_id": "f9b0950b5dbc033e41670caa6309fb35d2ec6f05", "title": "Acupuncture and electro-acupuncture for people diagnosed with subacromial pain syndrome: A multicentre randomized trial.", "text": "BACKGROUND\nMusculoskeletal disorders have been identified globally as the second most common healthcare problem for 'years lived with disability', and of these shoulder conditions are amongst the most common, frequently associated with substantial pain and morbidity. Exercise and acupuncture are often provided as initial treatments for musculoskeletal shoulder conditions but their clinical effectiveness is uncertain. This study compared group exercise with group exercise plus either acupuncture or electro-acupuncture in patients with subacromial pain syndrome.\n\n\nMETHODS\nTwo hundred and twenty-seven participants were recruited to a three-arm parallel-group randomized clinical trial. The primary outcome measure was the Oxford Shoulder Score. Follow-up was post treatment, and at 6 and 12\u00a0months. Between-group differences (two comparisons: the exercise group versus each of the acupuncture groups) were analysed at 6\u00a0months. A similar comparison across all follow-up time points was also conducted. Data were analysed on intention-to-treat principles with imputation of missing values.\n\n\nRESULTS\nTreatment groups were similar at baseline, and all treatment groups demonstrated an improvement over time. Between-group estimates at 6\u00a0months were, however, small and non-significant, for both of the comparisons. The analyses across all follow-up time points yielded similar conclusions. There was a high rate of missing values (22% for the Oxford Shoulder Score). A sensitivity analysis using complete data gave similar conclusions to the analysis with missing values imputed.\n\n\nCONCLUSIONS\nIn the current investigation, neither acupuncture nor electro-acupuncture were found to be more beneficial than exercise alone in the treatment of subacromial pain syndrome. These findings may support clinicians with treatment planning.\n\n\nSIGNIFICANCE\nShoulder pain is common and associated with substantial morbidity. Acupuncture is a popular treatment for shoulder pain. The findings suggest that acupuncture and electro-acupuncture offer no additional benefit over exercise in the treatment of shoulder pain of musculoskeletal origin."}
{"_id": "c45b9770546574ea53acfd2eb45906d4620211a1", "title": "COMPACT WIDE-SLOT TRI-BAND ANTENNA FOR WLAN/WIMAX APPLICATIONS", "text": "In this paper, a wide-slot triple band antenna fed by a coplanar waveguide (CPW) for WLAN/WiMAX applications is proposed. The antenna mainly comprises a ground with a wide square slot in the center, a rectangular feeding strip and two pairs of planar inverted L strips (PIL) connecting with the slotted ground. By introducing the two pairs of PIL\u2019s, three resonant frequencies, 2.4/5.5GHz for WLAN, and 3.5 GHz for WiMAX, are excited. Prototypes of the antenna are fabricated and tested. The simulated and measured results show that the proposed antenna has three good impedance bandwidths (S11 better than \u221210 dB) of 300MHz (about 12.6% centered at 2.39 GHz), 280 MHz (about 8% centered at 3.49GHz) and 790 MHz (about 14.5% centered at 5.43GHz), which make it easily cover the required bandwidths for WLAN band (2.4\u2013 2.48GHz, 5.15\u20135.35 GHz, and 5.725\u20135.825GHz) and WiMAX (3.43.6GHz) applications. Moreover, the obtained radiation patterns demonstrate that the proposed antenna has figure-eight patterns in E-plane, and is omni-directional in H-plane. The gains of the antenna at operation bands are stable."}
{"_id": "4f0d7f2926061c9dd6081d95eb9755812f9a16c2", "title": "Videos as Space-Time Region Graphs", "text": "How do humans recognize the action \u201copening a book\u201d? We argue that there are two important cues: modeling temporal shape dynamics and modeling functional relationships between humans and objects. In this paper, we propose to represent videos as space-time region graphs which capture these two important cues. Our graph nodes are defined by the object region proposals from different frames in a long range video. These nodes are connected by two types of relations: (i) similarity relations capturing the long range dependencies between correlated objects and (ii) spatial-temporal relations capturing the interactions between nearby objects. We perform reasoning on this graph representation via Graph Convolutional Networks. We achieve state-of-the-art results on both Charades and Something-Something datasets. Especially for Charades, we obtain a huge 4.4% gain when our model is applied in complex environments."}
{"_id": "03806959f76ef423fd7a7224bfceeb68871ce114", "title": "Enhancing an Interactive Recommendation System with Review-based Information Filtering", "text": "Integrating interactive faceted filtering with intelligent recommendation techniques has shown to be a promising means for increasing user control in Recommender Systems. In this paper, we extend the concept of blended recommending by automatically extracting meaningful facets from social media by means of Natural Language Processing. Concretely, we allow users to influence the recommendations by selecting facet values and weighting them based on information other users provided in their reviews. We conducted a user study with an interactive recommender implemented in the hotel domain. This evaluation shows that users are consequently able to find items fitting interests that are typically difficult to take into account when only structured content data is available. For instance, the extracted facets representing the opinions of hotel visitors make it possible to effectively search for hotels with comfortable beds or that are located in quiet surroundings without having to read the user reviews."}
{"_id": "69286213254fa42dcc35da9fb6fe4bbc26a6f92a", "title": "Hardware Covert Attacks and Countermeasures", "text": "Computing platforms deployed in many critical infrastructures, such as smart grid, financial systems, governmental organizations etc., are subjected to security attacks and potentially devastating consequences. Computing platforms often get attacked 'physically' by an intruder accessing stored information, studying the internal structure of the hardware or injecting a fault. Even if the attackers fail to gain sensitive information stored in hardware, they may be able to disrupt the hardware or deny service leading to other kinds of security failures in the system. Hardware attacks could be covert or overt, based on the awareness of the intended system. This work classifies existing hardware attacks. Focusing mainly on covert attacks, they are quantified using a proposed schema. Different countermeasure techniques are proposed to prevent such attacks."}
{"_id": "b91b48e33d6b0591e156b2ad63590cd80299a2c3", "title": "Can e-learning replace classroom learning?", "text": "In an e-learning environment that emphasizes learner-centered activity and system interactivity, remote learners can outperform traditional classroom students."}
{"_id": "7ef197525d18e2fbd1ef5a313776d4a7d65803ab", "title": "Model-based object tracking in monocular image sequences of road traffic scenes", "text": "Moving vehicles are detected and tracked automatically in monocular image sequences from road traffic scenes recorded by a stationary camera. In order to exploit the a priori knowledge about shape and motion of vehicles in traffic scenes, a parameterized vehicle model is used for an intraframe matching process and a recursive estimator based on a motion model is used for motion estimation. An interpretation cycle supports the intraframe matching process with a state MAP-update step. Initial model hypotheses are generated using an image segmentation component which clusters coherently moving image features into candidate representations of images of a moving vehicle. The inclusion of an illumination model allows taking shadow edges of the vehicle into account during the matching process. Only such an elaborate combination of various techniques has enabled us to track vehicles under complex illumination conditions and over long (over 400 frames) monocular image sequences. Results on various real-world road traffic scenes are presented and open problems as well as future work are outlined."}
{"_id": "a653a4bc78cd6fb3090da1df5b729ec9755ff273", "title": "Pedestrian Detection Using Wavelet Templates", "text": "This paper presents a trainable object detection architecture that is applied to detecting people in static images of cluttered scenes. This problem poses several challenges. People are highly non-rigid objects with a high degree of variability in size, shape, color, and texture. Unlike previous approaches, this system learns from examples and does not rely on any a priori (hand-crafted) models or on motion. The detection technique is based on the novel idea of the wavelet template that deenes the shape of an object in terms of a subset of the wavelet coeecients of the image. It is invariant to changes in color and texture and can be used to robustly deene a rich and complex class of objects such as people. We show how the invariant properties and computational eeciency of the wavelet template make it an eeective tool for object detection."}
{"_id": "d11bf8b06cf96d90e1ee3dd6467c5c92ac53e9a1", "title": "Fitting Parameterized Three-Dimensional Models to Images", "text": "Model-based recognition and motion tracking depends upon the ability to solve for projection and model parameters that will best fit a 3-D model to matching 2-D image features. This paper extends current methods of parameter solving to handle objects with arbitrary curved surfaces and with any number of internal parameters representing articulations, variable dimensions, or surface deformations. Numerical stabilization methods are developed that take account of inherent inaccuracies in the image measurements and allow useful solutions to be determined even when there are fewer matches than unknown parameters. The LevenbergMarquardt method is used to always ensure convergence of the solution. These techniques allow model-based vision to be used for a much wider class of problems than was possible with previous methods. Their application is demonstrated for tracking the motion of curved, parameterized objects. This paper has been published in IEEE Transactions on Pattern Analysis and Machine Intelligence, 13, 5 (May 1991), pp. 441\u2013450."}
{"_id": "7e1d2befac6946112cd80652d0c2b962e3c0d342", "title": "FA-RPN: Floating Region Proposals for Face Detection", "text": "We propose a novel approach for generating region proposals for performing face-detection. Instead of classifying anchor boxes using features from a pixel in the convolutional feature map, we adopt a pooling-based approach for generating region proposals. However, pooling hundreds of thousands of anchors which are evaluated for generating proposals becomes a computational bottleneck during inference. To this end, an efficient anchor placement strategy for reducing the number of anchor-boxes is proposed. We then show that proposals generated by our network (Floating Anchor Region Proposal Network, FA-RPN) are better than RPN for generating region proposals for face detection. We discuss several beneficial features of FA-RPN proposals like iterative refinement, placement of fractional anchors and changing anchors which can be enabled without making any changes to the trained model. Our face detector based on FA-RPN obtains 89.4% mAP with a ResNet-50 backbone on the WIDER dataset."}
{"_id": "2030931dad55a6c18eb26d9dfd8cd3688305f489", "title": "Image Segmentation by Nested Cuts", "text": "We present a new image segmentation algorithm based on graph cuts. Our main tool is separation of each pixel p from a special point outside the image by a cut of a minimum cost. Such a cut creates a group of pixels Cp around each pixel. We show that these groups Cp are either disjoint or nested in each other, and so they give a natural segmentation of the image. In addition this property allows an efficient implementation of the algorithm because for most pixelsp the computation of Cp is not performed on the whole graph. We inspect all Cp\u2019s and discard those which are not interesting, for example if they are too small. This procedure automatically groups small components together or merges them into nearby large clusters. Effectively our segmentation is performed by extracting significant non-intersecting closed contours. We present interes ting segmentation results on real and artificial images."}
{"_id": "d2dd68d72f34e8e4f80f3dc099a4477d6bdd5cf3", "title": "A systematic literature review on Enterprise Architecture Implementation Methodologies", "text": "http://dx.doi.org/10.1016/j.infsof.2015.01.012 0950-5849/ 2015 Elsevier B.V. All rights reserved. \u21d1 Corresponding author. E-mail addresses: drbabak2@live.utm.my (B.D. Rouhani), mdnazrim@utm.my (M.N. Mahrin), fa.nikpay@siswa.um.edu.my (F. Nikpay), rodina@um.edu.my (R.B. npourya2@live.utm.my (P. Nikfard). Babak Darvish Rouhani a,\u21d1, Mohd Naz\u2019ri Mahrin , Fatemeh Nikpay , Rodina Binti Ahmad , Pourya Nikfard c"}
{"_id": "5119705ea5ce5ff01c855479f55a446353c4aea4", "title": "De-Anonymisierungsverfahren: Kategorisierung und Anwendung f\u00fcr Datenbankanfragen(De-Anonymization: Categorization and Use-Cases for Database Queries)", "text": "The project PArADISE deals with activity and intention recognition in smart environments. This can be used in apartments, for example, to recognize falls of elderly people. While doing this, the privacy concerns of the user should be kept. To reach this goal, the processing of the data is done as close as possible at those sensors collecting the data. Only in cases where the processing is not possible on local nodes the data will be transferred to the cloud. But before transferring, it is checked against the privacy concerns using some measures for the anonymity of the data. If the data is not valid against these checks, some additional anonymizations will be done. This anonymization of data must be done quite carefully. Mistakes might cause the problem that data can be reassigned to persons and anonymized data might be reproduced. This paper gives an overview about recent methods for anonymizing data while showing their weaknesses. How these weaknesses can be used to invert the anonymization (called de-anonymization) is shown as well. Our attacks representing the de-anonymization should help to find weaknesses in methods used to anonymize data and how these can be eliminated. Zusammenfassung: Im Projekt PArADISE sollen Aktivit\u00e4tsund Intentionserkennungen in smarten Systemen, etwa Assistenzsystemen in Wohnungen, so durchgef\u00fchrt werden, dass Privatheitsanforderungen des Nutzers gewahrt bleiben. Dazu werden einerseits Auswertungen der Daten sehr nah an den Sensoren, die die Daten erzeugen, vorgenommen. Eine \u00dcbertragung von Daten in die Cloud findet nur im Notfall statt. Zus\u00e4tzlich werden aber vor der \u00dcbertragung der nicht vorausgewerteten Daten in die Cloud diese auf Privatheitsanforderungen hin gepr\u00fcft, indem Anonymisierungsma\u00dfe getestet und eventuell weitere Anonymisierungen von Daten vorgenommen werden. Diese Anonymisierung von Datenbest\u00e4nden muss mit gro\u00dfer Sorgfalt geschehen. Fehler k\u00f6nnen sehr schnell dazu f\u00fchren, dass anonymisierte Datenbest\u00e4nde wieder personalisiert werden k\u00f6nnen und Daten, die eigentlich entfernt wurden, wieder zur\u00fcckgewonnen werden k\u00f6nnen. Dieser Artikel betrachtet aktuelle Verfahren zur Anonymisierung und zeigt Schwachstellen auf, die zu Problemen oder gar der Umkehrung der Methoden f\u00fchren k\u00f6nnen. Unsere k\u00fcnstlich erzeugten Angriffe durch De-Anonymisierungen sollen helfen, Schwachstellen in den Anonymisierungsverfahren zu entdecken und zu beheben."}
{"_id": "db5b706aad90258fa4b97b05acf62361931689c0", "title": "An Ensemble Model for Diabetes Diagnosis in Large-scale and Imbalanced Dataset", "text": "Diabetes is becoming a more and more serious health challenge worldwide with the yearly rising prevalence, especially in developing countries. The vast majority of diabetes are type 2 diabetes, which has been indicated that about 80% of type 2 diabetes complications can be prevented or delayed by timely detection. In this paper, we propose an ensemble model to precisely diagnose the diabetic on a large-scale and imbalance dataset. The dataset used in our work covers millions of people from one province in China from 2009 to 2015, which is highly skew. Results on the real-world dataset prove that our method is promising for diabetes diagnosis with a high sensitivity, F3 and G --- mean, i.e, 91.00%, 58.24%, 86.69%, respectively."}
{"_id": "7a14b0d6cdb11d825c699ea730d9e8d2e7de1edf", "title": "Seeded watershed cut uncertainty estimators for guided interactive segmentation", "text": "Watershed cuts are among the fastest segmentation algorithms and therefore well suited for interactive segmentation of very large 3D data sets. To minimize the number of user interactions (\u201cseeds\u201d) required until the result is correct, we want the computer to actively query the human for input at the most critical locations, in analogy to active learning. These locations are found by means of suitable uncertainty measures. We propose various such measures for watershed cuts along with a theoretical analysis of some of their properties. Extensive evaluation on two types of 3D electron microscopic volumes of neural tissue shows that measures which estimate the non-local consequences of new user inputs achieve performance close to an oracle endowed with complete knowledge of the ground truth."}
{"_id": "fdf10c65ea965c3680b070a7acc2ecd1d0ceaa9c", "title": "Electrodermal responses: what happens in the brain.", "text": "Electrodermal activity (EDA) is now the preferred term for changes in electrical conductance of the skin, including phasic changes that have been referred to as galvanic skin responses (GSR), that result from sympathetic neuronal activity. EDA is a sensitive psychophysiological index of changes in autonomic sympathetic arousal that are integrated with emotional and cognitive states. Until recently there was little direct knowledge of brain mechanisms governing generation and control of EDA in humans. However, studies of patients with discrete brain lesions and, more recently, functional imaging techniques have clarified the contribution of brain regions implicated in emotion, attention, and cognition to peripheral EDA responses. Moreover, such studies enable an understanding of mechanisms by which states of bodily arousal, indexed by EDA, influence cognition and bias motivational behavior."}
{"_id": "a708c17008a4c381592f3246dbd67f49d3fdf5c6", "title": "Tuning in to RF MEMS", "text": "RF MEMS technology was initially developed as a replacement for GaAs HEMT switches and p-i-n diodes for low-loss switching networks and X-band to mm-wave phase shifters. However, we have found that its very low loss properties (high device Q), its simple microwave circuit model and zero power consumption, its high power (voltage/current) handling capabilities, and its very low distortion properties, all make it the ideal tuning device for reconfigurable filters, antennas and impedance matching networks. In fact, reconfigurable networks are currently being funded at the same level-if not higher-than RF MEMS phase shifters, and in our opinion, are much more challenging for high-Q designs."}
{"_id": "f69cd17bb72d459be3db526d9ede41a0fada169d", "title": "EEG-based automatic emotion recognition: Feature extraction, selection and classification methods", "text": "Automatic emotion recognition is an interdisciplinary research field which deals with the algorithmic detection of human affect, e.g. anger or sadness, from a variety of sources, such as speech or facial gestures. Apart from the obvious usage for industry applications in human-robot interaction, acquiring the emotional state of a person automatically also is of great potential for the health domain, especially in psychology and psychiatry. Here, evaluation of human emotion is often done using oral feedback or questionnaires during doctor-patient sessions. However, this can be perceived as intrusive by the patient. Furthermore, the evaluation can only be done in a noncontinuous manner, e.g. once a week during therapy sessions. In contrast, using automatic emotion detection, the affect state of a person can be evaluated in a continuous non-intrusive manner, for example to detect early on-sets of depression. An additional benefit of automatic emotion recognition is the objectivity of such an approach, which is not influenced by the perception of the patient and the doctor. To reach the goal of objectivity, it is important, that the source of the emotion is not easily manipulable, e.g. as in the speech modality. To circumvent this caveat, novel approaches in emotion detection research the potential of using physiological measures, such as galvanic skin sensors or pulse meters. In this paper we outline a way of detecting emotion from brain waves, i.e., EEG data. While EEG allows for a continuous, real-time automatic emotion recognition, it furthermore has the charm of measuring the affect close to the point of emergence: the brain. Using EEG data for emotion detection is nevertheless a challenging task: Which features, EEG channel locations and frequency bands are best suited for is an issue of ongoing research. In this paper we evaluate the use of state of the art feature extraction, feature selection and classification algorithms for EEG emotion classification using data from the de facto standard dataset, DEAP. Moreover, we present results that help choose methods to enhance classification performance while simultaneously reducing computational complexity."}
{"_id": "8b53d050a92332a3b8f185e334bf0c1cf9670f22", "title": "Using the random forest classifier to assess and predict student learning of Software Engineering Teamwork", "text": "The overall goal of our Software Engineering Teamwork Assessment and Prediction (SETAP) project is to develop effective machine-learning-based methods for assessment and early prediction of student learning effectiveness in software engineering teamwork. Specifically, we use the Random Forest (RF) machine learning (ML) method to predict the effectiveness of software engineering teamwork learning based on data collected during student team project development. These data include over 100 objective and quantitative Team Activity Measures (TAM) obtained from monitoring and measuring activities of student teams during the creation of their final class project in our joint software engineering classes which ran concurrently at San Francisco State University (SFSU), Fulda University (Fulda) and Florida Atlantic University (FAU). In this paper we provide the first RF analysis results done at SFSU on our full data set covering four years of our joint SE classes. These data include 74 student teams with over 380 students, totaling over 30000 discrete data points. These data are grouped into 11 time intervals, each measuring important phases of project development during the class (e.g. early requirement gathering and design, development, testing and delivery). We briefly elaborate on the methods of data collection and describe the data itself. We then show prediction results of the RF analysis applied to this full data set. Results show that we are able to detect student teams who are bound to fail or need attention in early class time with good (about 70%) accuracy. Moreover, the variable importance analysis shows that the features (TAM measures) with high predictive power make intuitive sense, such as late delivery/late responses, time used to help each other, and surprisingly statistics on commit messages to the code repository, etc. In summary, we believe we demonstrate the viability of using ML on objective and quantitative team activity measures to predict student learning of software engineering teamwork, and point to easy-to-measure factors that can be used to guide educators and software engineering managers to implement early intervention for teams bound to fail. Details about the project and the complete ML training database are downloadable from the project web site."}
{"_id": "21768c4bf34b9ba0af837bd8cda909dad4fb57d4", "title": "A self biased operational amplifier at ultra low power supply voltage", "text": "This paper discusses the design of a self-biased folded cascode operational amplifier at an ultra low power supply voltage. The proposed design is first of its kind at 0.5 V where self-biasing techniques are used to reduce power and area overheads. The self-biasing scheme in this design is developed by using a current mirror for low voltage operation. This design is implemented in a 90 nm CMOS technology using Cadence General Purpose Design Kit (GPDK)."}
{"_id": "cc5276141c4dc2c245d84e97ad2adc90148be137", "title": "The Business Model as a Tool of Improving Value Creation in Complex Private Service System-Case : Value Network of Electric Mobility", "text": "This paper shows how the concept of business model can be used as an analysis tool when attempting to understand and describe the complexity of the value creation of a developing industry, namely electric mobility. The business model concept is applied as a theoretical framework in action research based and facilitated workshops with nine case companies operating in the field of electric mobility. The concept turned out to be a promising tool in creating a better understanding of the value creation in electric mobility, and thus being a potential framework for the development work for the actors in that field. The concept can help companies to improve cooperation with other players by being a common terminology."}
{"_id": "208986a77f330b6bb66f33d1f7589bd7953f0a7a", "title": "Exact robot navigation using artificial potential functions", "text": "We present a new methodology for exact robot motion planning and control that unifies the purely kinematic path planning problem with the lower level feedback controller design. Complete information about the freespace and goal is encoded in the form of a special artificial potential function a navigation function that connects the kinematic planning problem with the dynamic execution problem in a provably correct fashion. The navigation function automatically gives rise to a bounded-torque feedback controller for the robot's actuators that guarantees collision-free motion and convergence to the destination from almost all initial free configurations. Since navigation functions exist for any robot and obstacle course, our methodology is completely general in principle. However, this paper is mainly concerned with certain constructive techniques for a particular class of motion planning problems. Specifically, we present a formula for navigation functions that guide a point-mass robot in a generalized sphere world. The simplest member of this family is a space obtained by puncturing a disc by an arbitrary number of smaller disjoint discs representing obstacles. The other spaces are obtained from this model by a suitable coordinate transformation that we show how to build. Our constructions exploit these coordinate transformations to adapt a navigation function on the model space to its more geometrically complicated (but topologically equivalent) instances. The formula that we present admits sphere-worlds of arbitrary dimension and is directly applicable to configuration spaces whose forbidden regions can be modeled by such generalized discs. We have implemented these navigation functions on planar scenarios, and simulation results are provided throughout the paper. Disciplines Robotics Comments Copyright 1992 IEEE. Reprinted from IEEE Transactions on Robotics and Automation, Volume 8, Issue 5, October 1992, pages 501-518. This material is posted here with permission of the IEEE. Such permission of the IEEE does not in any way imply IEEE endorsement of any of the University of Pennsylvania's products or services. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to pubs-permissions@ieee.org. By choosing to view this document, you agree to all provisions of the copyright laws protecting it. NOTE: At the time of publication, Daniel Koditschek was affiliated with the University of Michigan. Currently, he is a faculty member at the School of Engineering of the University of Pennsylvania. This journal article is available at ScholarlyCommons: http://repository.upenn.edu/ese_papers/323"}
{"_id": "0e3a73bc01e5cb377b49c11440ba717f33c443ed", "title": "AmbiguityVis: Visualization of Ambiguity in Graph Layouts", "text": "Node-link diagrams provide an intuitive way to explore networks and have inspired a large number of automated graph layout strategies that optimize aesthetic criteria. However, any particular drawing approach cannot fully satisfy all these criteria simultaneously, producing drawings with visual ambiguities that can impede the understanding of network structure. To bring attention to these potentially problematic areas present in the drawing, this paper presents a technique that highlights common types of visual ambiguities: ambiguous spatial relationships between nodes and edges, visual overlap between community structures, and ambiguity in edge bundling and metanodes. Metrics, including newly proposed metrics for abnormal edge lengths, visual overlap in community structures and node/edge aggregation, are proposed to quantify areas of ambiguity in the drawing. These metrics and others are then displayed using a heatmap-based visualization that provides visual feedback to developers of graph drawing and visualization approaches, allowing them to quickly identify misleading areas. The novel metrics and the heatmap-based visualization allow a user to explore ambiguities in graph layouts from multiple perspectives in order to make reasonable graph layout choices. The effectiveness of the technique is demonstrated through case studies and expert reviews."}
{"_id": "cb6be7b2eb8382a85fdc48f1ca123d59d7b003ce", "title": "Definition Extraction with LSTM Recurrent Neural Networks", "text": "Definition extraction is the task to identify definitional sentences automatically from unstructured text. The task can be used in the aspects of ontology generation, relation extraction and question answering. Previous methods use handcraft features generated from the dependency structure of a sentence. During this process, only part of the dependency structure is used to extract features, thus causing information loss. We model definition extraction as a supervised sequence classification task and propose a new way to automatically generate sentence features using a Long Short-Term Memory neural network model. Our method directly learns features from raw sentences and corresponding part-ofspeech sequence, which makes full use of the whole sentence. We experiment on the Wikipedia benchmark dataset and obtain 91.2% on F1 score which outperforms the current state-of-the-art methods by 5.8%. We also show the effectiveness of our method in dealing with other languages by testing on a Chinese dataset and obtaining 85.7% on F1 score."}
{"_id": "63aecf799caa0357c8cc0e40a49116c56703895a", "title": "Path Planning with Phased Array SLAM and Voronoi Tessellation", "text": "Autonomous vehicles must often navigate environments that are at least partially unknown. They are faced with the tasks of creating a coordinate system to localize themselves on, identify the positions of obstacles, and chart safe paths through the environment. This process is known as Simultaneous Localization and Mapping, or SLAM. SLAM has traditionally been executed using measurements of distance to features in the environment. We propose an angle based methodology using a single phased array antenna and DSP.aimed to reduce this requirement to a single path for each data type. Additionally, our method makes use of rudimentary echo-location to discover reflective obstacles. Finally, our method uses Voronoi Tessellation for path planning."}
{"_id": "f3218cff7b9233ebeffa8e912ee0f40cd6da331e", "title": "Word-Formation in English", "text": "published by the press syndicate of the university of cambridge A catalogue record for this book is available from the British Library Library of Congress Cataloguing in Publication data Plag, Ingo. Word-formation in English / Ingo Plag. p. cm. \u2013 (Cambridge textbooks in linguistics) Includes bibliographical references (p.) and indexes. Contents Preface page xi Abbreviations and notational conventions xiii Introduction 1 1 Basic concepts 4 1.1 What is a word? 4 1.2 Studying word-formation 9 1.3 Inflection and derivation 14 1.4 Summary 17 Further reading 18 Exercises 18 2 Studying complex words 20 2.1 Identifying morphemes 20 2.1.1 The morpheme as the minimal linguistic sign 20 2.1.2 Problems with the morpheme: the mapping of form and meaning 22 2.2 Allomorphy 27 2.3 Establishing word-formation rules 30 2.4 Multiple affixation 38 2.5 Summary 41 Further reading 41 Exercises 41"}
{"_id": "30f0e97d498fc3684761d4eec1ea957887399a9e", "title": "Securing SMS Based One Time Password Technique from Man in the Middle Attack", "text": "Security of financial transactions in E-Commerce is difficult to implement and there is a risk that user\u2019s confidential data over the internet may be accessed by hackers. Unfortunately, interacting with an online service such as a banking web application often requires certain degree of technical sophistication that not all Internet users possess. For the last couple of year such naive users have been increasingly targeted by phishing attacks that are launched by miscreants who are aiming to make an easy profit by means of illegal financial transactions. In this paper, we have proposed an idea for securing e-commerce transaction from phishing attack. An approach already exists where phishing attack is prevented using one time password which is sent on user\u2019s registered mobile via SMS for authentication. But this method can be counter attacked by \u201cMan in the Middle\u201d. In our paper, a new idea is proposed which is more secure compared to the existing online payment system using OTP. In this mechanism OTP is combined with the secure key and is then passed through RSA algorithm to generate the Transaction password. A Copy of this password is maintained at the server side and is being generated at the user side using a mobile application; so that it is not transferred over the insecure network leading to a fraudulent transaction. Keywords\u2014Phishing, Replay attack, MITM attack, RSA, Random Generator."}
{"_id": "e913e0d75d00fb790d2d3e75d2ea6e2645757b2c", "title": "Contingencies of self-worth.", "text": "Research on self-esteem has focused almost exclusively on level of trait self-esteem to the neglect of other potentially more important aspects such as the contingencies on which self-esteem is based. Over a century ago, W. James (1890) argued that self-esteem rises and falls around its typical level in response to successes and failures in domains on which one has staked self-worth. We present a model of global self-esteem that builds on James' insights and emphasizes contingencies of self-worth. This model can help to (a) point the way to understanding how self-esteem is implicated in affect, cognition, and self-regulation of behavior; (b) suggest how and when self-esteem is implicated in social problems; (c) resolve debates about the nature and functioning of self-esteem; (d) resolve paradoxes in related literatures, such as why people who are stigmatized do not necessarily have low self-esteem and why self-esteem does not decline with age; and (e) suggest how self-esteem is causally related to depression. In addition, this perspective raises questions about how contingencies of self-worth are acquired and how they change, whether they are primarily a resource or a vulnerability, and whether some people have noncontingent self-esteem."}
{"_id": "f0d401122c35b43ceec1441c548be75d96783673", "title": "A cognitive-affective system theory of personality: reconceptualizing situations, dispositions, dynamics, and invariance in personality structure.", "text": "A theory was proposed to reconcile paradoxical findings on the invariance of personality and the variability of behavior across situations. For this purpose, individuals were assumed to differ in (a) the accessibility of cognitive-affective mediating units (such as encodings, expectancies and beliefs, affects, and goals) and (b) the organization of relationships through which these units interact with each other and with psychological features of situations. The theory accounts for individual differences in predictable patterns of variability across situations (e.g., if A then she X, but if B then she Y), as well as for overall average levels of behavior, as essential expressions or behavioral signatures of the same underlying personality system. Situations, personality dispositions, dynamics, and structure were reconceptualized from this perspective."}
{"_id": "29f94f5a209294554615d925a53c53b3a9649dd1", "title": "AIVAT: A New Variance Reduction Technique for Agent Evaluation in Imperfect Information Games", "text": "Evaluating agent performance when outcomes are stochastic and agents use randomized strategies can be challenging when there is limited data available. The variance of sampled outcomes may make the simple approach of Monte Carlo sampling inadequate. This is the case for agents playing heads-up no-limit Texas hold\u2019em poker, where manmachine competitions typically involve multiple days of consistent play by multiple players, but still can (and sometimes did) result in statistically insignificant conclusions. In this paper, we introduce AIVAT, a low variance, provably unbiased value assessment tool that exploits an arbitrary heuristic estimate of state value, as well as the explicit strategy of a subset of the agents. Unlike existing techniques which reduce the variance from chance events, or only consider game ending actions, AIVAT reduces the variance both from choices by nature and by players with a known strategy. The resulting estimator produces results that significantly outperform previous state of the art techniques. It was able to reduce the standard deviation of a Texas hold\u2019em poker man-machine match by 85% and consequently requires 44 times fewer games to draw the same statistical conclusion. AIVAT enabled the first statistically significant AI victory against professional poker players in no-limit hold\u2019em. Furthermore, the technique was powerful enough to produce statistically significant results versus individual players, not just an aggregate pool of the players. We also used AIVAT to analyze a short series of AI vs human poker tournaments, producing statistical significant results with as few as 28 matches."}
{"_id": "105e77b8182b6e591247918f45f04c6a67f9a72f", "title": "Methods of suicide: international suicide patterns derived from the WHO mortality database.", "text": "OBJECTIVE\nAccurate information about preferred suicide methods is important for devising strategies and programmes for suicide prevention. Our knowledge of the methods used and their variation across countries and world regions is still limited. The aim of this study was to provide the first comprehensive overview of international patterns of suicide methods.\n\n\nMETHODS\nData encoded according to the International Classification of Diseases (10th revision) were derived from the WHO mortality database. The classification was used to differentiate suicide methods. Correspondence analysis was used to identify typical patterns of suicide methods in different countries by providing a summary of cross-tabulated data.\n\n\nFINDINGS\nPoisoning by pesticide was common in many Asian countries and in Latin America; poisoning by drugs was common in both Nordic countries and the United Kingdom. Hanging was the preferred method of suicide in eastern Europe, as was firearm suicide in the United States and jumping from a high place in cities and urban societies such as Hong Kong Special Administrative Region, China. Correspondence analysis demonstrated a polarization between pesticide suicide and firearm suicide at the expense of traditional methods, such as hanging and jumping from a high place, which lay in between.\n\n\nCONCLUSION\nThis analysis showed that pesticide suicide and firearm suicide replaced traditional methods in many countries. The observed suicide pattern depended upon the availability of the methods used, in particular the availability of technical means. The present evidence indicates that restricting access to the means of suicide is more urgent and more technically feasible than ever."}
{"_id": "65f34550cac522b9cd702d2a1bf81de66406ee24", "title": "Active Contour Models for Extracting Ground and Forest Canopy Curves from Discrete Laser Altimeter Data", "text": "The importance of finding efficient ways of quantifying terrestrial carbon stocks at a global scale has increased due to the concerns about global climate change. Exchange of carbon between forests and the atmosphere is a vital component of the global carbon cycle (Nelson et al. 2003). Recent advances in remote sensing technology have facilitated rapid and inexpensive measurements of topography over large areas (Zhang et al. 2003)."}
{"_id": "762f678d97238c407aa5d63ae5aaaa963e1c4c7e", "title": "Managing Complexity in Aircraft Design Using Design Structure Matrix", "text": "Modern aerospace systems have reached a level of complexity that requires systematic methods for their design. The development of products in aircraft industry involves numerous engineers from different disciplines working on independent components. Routine activities consume a significant part of aircraft development time. To be competitive, aircraft industry needs to manage complexity and readjust the ratio of innovative versus routine work. Thus, the main objective is to develop an approach that can manage complexity in engineering design, reduce the design cycle time, and reduce the product development cost. The design structure matrix (DSM) is a simple tool to perform both analysis and management of complex systems. It enables the user to model, visualize, and analyze the dependencies among the functional group of any system and derive suggestions for the improvement or synthesis of a system. This article illustrates with a case study how DSM can be used to manage complexity in aircraft design process. The result of implementing the DSM approach for Light Combat Aircraft (Navy) at Hindustan Aeronautics Limited, India, (in aircraft design) show primary benefits of 75% reduction in routine activities, 33% reduction in design cycle time, 50% reduction in rework cost, and 30% reduction in product and process development cost of aircraft."}
{"_id": "5260a9aa1ba1e46f1acbd4472a2d3bdb175fc67c", "title": "Sub-event detection from tweets", "text": "Social media plays an important role in communication between people in recent times. This includes information about news and events that are currently happening. Most of the research on event detection concentrates on identifying events from social media information. These models assume an event to be a single entity and treat it as such during the detection process. This assumption ignores that the composition of an event changes as new information is made available on social media. To capture the change in information over time, we extend an already existing Event Detection at Onset algorithm to study the evolution of an event over time. We introduce the concept of an event life cycle model that tracks various key events in the evolution of an event. The proposed unsupervised sub-event detection method uses a threshold-based approach to identify relationships between sub-events over time. These related events are mapped to an event life cycle to identify sub-events. We evaluate the proposed sub-event detection approach on a large-scale Twitter corpus."}
{"_id": "6babe6becc5e13ae72d19dde27dc7f80a9642d59", "title": "Asynchronous Complex Analytics in a Distributed Dataflow Architecture", "text": "Scalable distributed dataflow systems have recently experienced widespread adoption, with commodity dataflow engines such as Hadoop and Spark, and even commodity SQL engines routinely supporting increasingly sophisticated analytics tasks (e.g., support vector machines, logistic regression, collaborative filtering). However, these systems\u2019 synchronous (often Bulk Synchronous Parallel) dataflow execution model is at odds with an increasingly important trend in the machine learning community: the use of asynchrony via shared, mutable state (i.e., data races) in convex programming tasks, which has\u2014in a single-node context\u2014delivered noteworthy empirical performance gains and inspired new research into asynchronous algorithms. In this work, we attempt to bridge this gap by evaluating the use of lightweight, asynchronous state transfer within a commodity dataflow engine. Specifically, we investigate the use of asynchronous sideways information passing (ASIP) that presents single-stage parallel iterators with a Volcano-like intra-operator iterator that can be used for asynchronous information passing. We port two synchronous convex programming algorithms, stochastic gradient descent and the alternating direction method of multipliers (ADMM), to use ASIPs. We evaluate an implementation of ASIPs within on Apache Spark that exhibits considerable speedups as well as a rich set of performance trade-offs in the use of these asynchronous algorithms."}
{"_id": "af8a5f3b44be82472ed40f14313e7f7ef9fb148c", "title": "Motor-learning-related changes in piano players and non-musicians revealed by functional magnetic-resonance signals", "text": "In this study, we investigated blood-flow-related magnetic-resonance (MR) signal changes and the time course underlying short-term motor learning of the dominant right hand in ten piano players (PPs) and 23 non-musicians (NMs), using a complex finger-tapping task. The activation patterns were analyzed for selected regions of interest (ROIs) within the two examined groups and were related to the subjects\u2019 performance. A functional learning profile, based on the regional blood-oxygenation-level-dependent (BOLD) signal changes, was assessed in both groups. All subjects achieved significant increases in tapping frequency during the training session of 35 min in the scanner. PPs, however, performed significantly better than NMs and showed increasing activation in the contralateral primary motor cortex throughout motor learning in the scanner. At the same time, involvement of secondary motor areas, such as bilateral supplementary motor area, premotor, and cerebellar areas, diminished relative to the NMs throughout the training session. Extended activation of primary and secondary motor areas in the initial training stage (7\u201314 min) and rapid attenuation were the main functional patterns underlying short-term learning in the NM group; attenuation was particularly marked in the primary motor cortices as compared with the PPs. When tapping of the rehearsed sequence was performed with the left hand, transfer effects of motor learning were evident in both groups. Involvement of all relevant motor components was smaller than after initial training with the right hand. Ipsilateral premotor and primary motor contributions, however, showed slight increases of activation, indicating that dominant cortices influence complex sequence learning of the non-dominant hand. In summary, the involvement of primary and secondary motor cortices in motor learning is dependent on experience. Interhemispheric transfer effects are present."}
{"_id": "5cfd05faf428bcef6670978adb520564d0d69d32", "title": "Analysis of political discourse on twitter in the context of the 2016 US presidential elections", "text": "Social media now plays a pivotal role in electoral campaigns. Rapid dissemination of information through platforms such as Twitter has enabled politicians to broadcast their message to a wide audience. In this paper, we investigated the sentiment of tweets by the two main presidential candidates, Hillary Clinton and Donald Trump, along with almost 2.9 million tweets by Twitter users during the 2016 US Presidential Elections. We analyzed these short texts to evaluate how accurately Twitter represented the public opinion and real world events of significance related with the elections. We also analyzed the behavior of over a million distinct Twitter users to identify whether the platform was used to share original opinions and to interact with other users or whether few opinions were repeated over and over again with little inter-user dialogue. Finally, we wanted to assess the sentiment of tweets by both candidates and their impact on the election related discourse on Twitter. Some of our findings included the discovery that little original content was created by users and Twitter was primarily used for rebroadcasting already present opinions in the form of retweets with little communication between users. Also of significance was the finding that sentiment and topics expressed on Twitter can be a good proxy of public opinion and important election related events. Moreover, we found that Donald Trump offered a more optimistic and positive campaign message than Hillary Clinton and enjoyed better sentiment when mentioned in messages by Twitter users."}
{"_id": "b1cfe7f8b8557b03fa38036030f09b448d925041", "title": "Unsupervised texture segmentation using Gabor filters", "text": "-This paper presents a texture segmentation algorithm inspired by the multi-channel filtering theory for visual information processing in the early stages of human visual system. The channels are characterized by a bank of Gabor filters that nearly uniformly covers the spatial-frequency domain, and a systematic filter selection scheme is proposed, which is based on reconstruction of the input image from the filtered images. Texture features are obtained by subjecting each (selected) filtered image to a nonlinear transformation and computing a measure of \"energy\" in a window around each pixel. A square-error clustering algorithm is then used to integrate the feature images and produce a segmentation. A simple procedure to incorporate spatial information in the clustering process is proposed. A relative index is used to estimate the \"'true\" number of texture categories. Texture segmentation Multi-channel filtering Clustering Clustering index Gabor filters Wavelet transform I . I N T R O D U C T I O N Image segmentation is a difficult yet very important task in many image analysis or computer vision applications. Differences in the mean gray level or in color in small neighborhoods alone are not always sufficient for image segmentation. Rather, one has to rely on differences in the spatial arrangement of gray values of neighboring pixels-that is, on differences in texture. The problem of segmenting an image based on textural cues is referred to as the texture segmentation problem. Texture segmentation involves identifying regions with \"uniform\" textures in a given image. Appropriate measures of texture are needed in order to decide whether a given region has uniform texture. Sklansky (o has suggested the following definition of texture which is appropriate in the segmentation context: \"A region in an image has a constant texture if a set of local statistics or other local properties of the picture are constant, slowly varying, or approximately periodic\". Texture, therefore, has both local and global connotations--i t is characterized by invariance of certain local measures or properties over an image region. The diversity of natural and artificial textures makes it impossible to give a universal definition of texture. A large number of techniques for analyzing image texture has been proposed in the past two decades/2,3) In this paper, we focus on a particular approach to texture analysis which is referred to as \u00b0 This work was supported in part by the National Science Foundation infrastructure grant CDA-8806599, and by a grant from E. I. Du Pont De Nemours & Company Inc. the multi-channel filtering approach. This approach is inspired by a multi-channel filtering theory for processing visual information in the early stages of the human visual system. First proposed by Campbell and Robson (4) the theory holds that the visual system decomposes the retinal image into a number of filtered images, each of which contains intensity variations over a narrow range of frequency (size) and orientation. The psychophysical experiments that suggested such a decomposition used various grating patterns as stimuli and were based on adaptation techniques. I~l Subsequent psychophysiological experiments provided additional evidence supporting the theory. De Valois et al. 
,(5) for example, recorded the response of simple cells in the visual cortex of the Macaque monkey to sinusoidal gratings with different frequencies and orientations. It was observed that each cell responds to a narrow range of frequency and orientation only. Therefore, it appears that there are mechanisms in the visual cortex of mammals that are tuned to combinations of frequency and orientation in a narrow range. These mechanisms are often referred to as channels, and are appropriately interpreted as band-pass filters. The multi-channel filtering approach to texture analysis is intuitively appealing because it allows us to exploit differences in dominant sizes and orientations of different textures. Today, the need for a multi-resolution approach to texture analysis is well recognized. While other approaches to texture analysis have had to be extended to accommodate this paradigm, the multi-channel filtering approach, is inherently multi-resolutional. Another important"}
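A compact sketch of this pipeline (filter-bank parameters, the nonlinearity, and the smoothing window are illustrative choices, not the paper's tuned values):

```python
# Sketch of the pipeline in the abstract: filter with a small Gabor bank,
# apply a nonlinear transform, compute local "energy" features, then cluster.
import numpy as np
from scipy.ndimage import convolve, gaussian_filter
from skimage.filters import gabor_kernel
from sklearn.cluster import KMeans

image = np.random.rand(64, 64)                      # placeholder texture image

features = []
for frequency in (0.1, 0.2, 0.4):                   # spatial frequencies (cycles/pixel)
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        kernel = np.real(gabor_kernel(frequency, theta=theta))
        filtered = convolve(image, kernel, mode="wrap")
        # nonlinear transform + local averaging = "energy" in a window
        energy = gaussian_filter(np.abs(np.tanh(filtered)), sigma=3)
        features.append(energy.ravel())

X = np.stack(features, axis=1)                      # one feature vector per pixel
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)   # square-error clustering
segmentation = labels.reshape(image.shape)
```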
{"_id": "6e412334c5cb6e59a1c7dc2e4594b9c0af52bc97", "title": "Warpage control of silicon interposer for 2.5D package application", "text": "In order to achieve high speed transmission and large volume data processing, large size silicon-interposer has been required. Warpage caused by the CTE mismatch between a large silicon-interposer and an organic substrate is the most significant problem. In this study, we investigated several warpage control techniques for 2.5D package assembly process. First was assembly process sequence. One is called \u201cchip first process\u201d that is, chips are mounted on Si-interposer at first. The other is called \u201cchip last process\u201d that is, silicon-interposer is mounted on organic substrate at first and chips are mounted on at last. The chip first process successfully processed using conventional mass reflow. By using the chip first process, apparent CTE of a large silicon-interposer become close to that of an organic substrate. Second was the warpage control using underfill resin. We focused on the selection of underfill materials for 0 level assembly. And third was the warpage control technique with Sn-57Bi solder using conventional reflow process. We observed warpage change during simulated reflow process using three-dimensional digital image correlation system (3D-DIC). Sn-57Bi solder joining has been noted as a low temperature bonding methods. It is possible to lower peak temperature 45-90 degree C during reflow compared with using Sn3.0wt%Ag0.5wt%Cu (SAC305). By using Sn-57Bi solder, the warpage after reflow was reduced to 75% of that using SAC305. The full assembly was successfully processed using conventional assembly equipment and processes. The full assembly packages were evaluated by some reliability tests. All samples passed each reliability test."}
{"_id": "30ee737e5cc4dd048edf48b6f26ceba5d7b9b1cb", "title": "KCT: a MATLAB toolbox for motion control of KUKA robot manipulators", "text": "The Kuka Control Toolbox (KCT) is a collection of MATLAB functions for motion control of KUKA robot manipulators, developed to offer an intuitive and high-level programming interface to the user. The toolbox, which is compatible with all 6 DOF small and low payload KUKA robots that use the Eth.RSIXML, runs on a remote computer connected with the KUKA controller via TCP/IP. KCT includes more than 30 functions, spanning operations such as forward and inverse kinematics computation, point-to-point joint and Cartesian control, trajectory generation, graphical display and diagnostics. The flexibility, ease of use and reliability of the toolbox is demonstrated through two applicative examples."}
{"_id": "2d69249e404fc7aa66eba313e020b721bb4e6c0b", "title": "Using Data Mining Techniques for Sentiment Shifter Identification", "text": "Sentiment shifters, i.e., words and expressions that can affect text polarity, play an important role in opinion mining. However, the limited ability of current automated opinion mining systems to handle shifters represents a major challenge. The majority of existing approaches rely on a manual list of shifters; few attempts have been made to automatically identify shifters in text. Most of them just focus on negating shifters. This paper presents a novel and efficient semi-automatic method for identifying sentiment shifters in drug reviews, aiming at improving the overall accuracy of opinion mining systems. To this end, we use weighted association rule mining (WARM), a well-known data mining technique, for finding frequent dependency patterns representing sentiment shifters from a domain-specific corpus. These patterns that include different kinds of shifter words such as shifter verbs and quantifiers are able to handle both local and long-distance shifters. We also combine these patterns with a lexicon-based approach for the polarity classification task. Experiments on drug reviews demonstrate that extracted shifters can improve the precision of the lexicon-based approach for polarity classification 9.25 percent."}
{"_id": "5757dd57950f6b3c4d90a342a170061c8c535536", "title": "Computing the Stereo Matching Cost with a Convolutional Neural Network Seminar Recent Trends in 3 D Computer Vision", "text": "This paper presents a novel approach to the problem of computing the matching-cost for stereo vision. The approach is based upon a Convolutional Neural Network that is used to compute the similarity of input patches from stereo image pairs. In combination with state-ofthe-art stereo pipeline steps, the method achieves top results in major stereo benchmarks. The paper introduces the problem of stereo matching, discusses the proposed method and shows results from recent stereo datasets."}
{"_id": "9dcb91c79a70913763091ff1a20c4b7cb46b96fd", "title": "Evolution of fashion brands on Twitter and Instagram", "text": "Social media has become a popular platform for marketing and brand advertisement especially for fashion brands. To promote their products and gain popularity, different brands post their latest products and updates as photos on different social networks. Little has been explored on how these fashion brands use different social media to reach out to their customers and obtain feedback on different products. Understanding this can help future consumers to focus on their interested brands on specific social media for better information and also can help the marketing staff of different brands to understand how the other brands are utilizing the social media. In this article we focus on the top-20 fashion brands and comparatively analyze how they target their current and potential future customers on Twitter and Instagram. Using both linguistic and deep image features, our work reveals an increasing diversification of trends accompanied by a simultaneous concentration towards a few selected trends. It provides insights about brand marketing strategies and their respective competencies. Our investigations show that the brands are using Twitter and Instagram in a distinctive manner."}
{"_id": "ec5c2c30a4bcc66a97e8d9d20ccc4f7616f5505f", "title": "Organizational Learning and Communities of Practice ; A social constructivist perspective", "text": "In this paper, the relationship between organizational learning (OL) and communities of practice (COP) is addressed. A social constructivist lens is used to analyzing the potential contributions of COP\u2019s in supporting learning by organizations. A social constructivist approach sees organizational learning as an institutionalizing process. The attention is on the process through which individual or local knowledge is transformed into collective knowledge as well as the process through which this socially constructed knowledge influences, and is part of, local knowledge. In order to analyse COP\u2019s contribution to OL, we use the three phases or \u2018moments\u2019 described by Berger and Luckman (1966) that can be discerned during institutionalizing knowledge: \u2018externalizing, objectifying and internalizing\u2019. Externalizing knowledge refers to the process through which personal knowledge is exchanged with others. Objectifying knowledge refers to the process through which knowledge becomes an objective reality. Internalizing knowledge refers to the process through which objectified knowledge is used by individuals in the course of their socialization. In relation to OL processes, learning can be analyzed as consisting of these three knowledge sharing activities: externalizing individual knowledge resulting in shared knowledge; objectifying shared knowledge resulting in organizational knowledge; internalizing organizational knowledge resulting in individual knowledge. These various processes that in combination make up OL processes, are visualized by the use of a OL cycle. The cycle provides a simplified picture of OL seen as a process of institutionalization. The cycle is subsequently used to analyze the possible contribution of COP\u2019s to support organizational learning. The paper concludes that COP\u2019s are well suited to support processes of internalization and externalization. As a result, COP\u2019s stimulate social learning processes within organizations. However, COP\u2019s do not seem to be the appropriate means to support the process of objectification. This means that COP\u2019s contribution in supporting learning at the organizational level or \u2018organizational learning\u2019 is much more complicated."}
{"_id": "4b65024cd376067156a5ac967899a7748fa31f6f", "title": "The Dataflow Model: A Practical Approach to Balancing Correctness, Latency, and Cost in Massive-Scale, Unbounded, Out-of-Order Data Processing", "text": "Unbounded, unordered, global-scale datasets are increasingly common in day-to-day business (e.g. Web logs, mobile usage statistics, and sensor networks). At the same time, consumers of these datasets have evolved sophisticated requirements, such as event-time ordering and windowing by features of the data themselves, in addition to an insatiable hunger for faster answers. Meanwhile, practicality dictates that one can never fully optimize along all dimensions of correctness, latency, and cost for these types of input. As a result, data processing practitioners are left with the quandary of how to reconcile the tensions between these seemingly competing propositions, often resulting in disparate implementations and systems. We propose that a fundamental shift of approach is necessary to deal with these evolved requirements in modern data processing. We as a field must stop trying to groom unbounded datasets into finite pools of information that eventually become complete, and instead live and breathe under the assumption that we will never know if or when we have seen all of our data, only that new data will arrive, old data may be retracted, and the only way to make this problem tractable is via principled abstractions that allow the practitioner the choice of appropriate tradeoffs along the axes of interest: correctness, latency, and cost. In this paper, we present one such approach, the Dataflow Model, along with a detailed examination of the semantics it enables, an overview of the core principles that guided its design, and a validation of the model itself via the real-world experiences that led to its development. We use the term \u201cDataflow Model\u201d to describe the processing model of Google Cloud Dataflow [20], which is based upon technology from FlumeJava [12] and MillWheel [2]. This work is licensed under the Creative Commons AttributionNonCommercial-NoDerivs 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/3.0/. Obtain permission prior to any use beyond those covered by the license. Contact copyright holder by emailing info@vldb.org. Articles from this volume were invited to present their results at the 41st International Conference on Very Large Data Bases, August 31st September 4th 2015, Kohala Coast, Hawaii. Proceedings of the VLDB Endowment, Vol. 8, No. 12 Copyright 2015 VLDB Endowment 2150-8097/15/08."}
{"_id": "3f5e13e951b58c1725250cb60afc27f08d8bf02c", "title": "A Trusted Safety Verifier for Process Controller Code", "text": "Attackers can leverage security vulnerabilities in control systems to make physical processes behave unsafely. Currently, the safe behavior of a control system relies on a Trusted Computing Base (TCB) of commodity machines, firewalls, networks, and embedded systems. These large TCBs, often containing known vulnerabilities, expose many attack vectors which can impact process safety. In this paper, we present the Trusted Safety Verifier (TSV), a minimal TCB for the verification of safety-critical code executed on programmable controllers. No controller code is allowed to be executed before it passes physical safety checks by TSV. If a safety violation is found, TSV provides a demonstrative test case to system operators. TSV works by first translating assembly-level controller code into an intermediate language, ILIL. ILIL allows us to check code containing more instructions and features than previous controller code safety verification techniques. TSV efficiently mixes symbolic execution and model checking by transforming an ILIL program into a novel temporal execution graph that lumps together safetyequivalent controller states. We implemented TSV on a Raspberry Pi computer as a bump-in-the-wire that intercepts all controllerbound code. Our evaluation shows that it can test a variety of programs for common safety properties in an average of less than three minutes, and under six minutes in the worst case\u2014a small one-time addition to the process engineering life cycle."}
{"_id": "40c3b350008ada8f3f53a758e69992b6db8a8f95", "title": "Discriminative Decorrelation for Clustering and Classification", "text": "Object detection has over the past few years converged on using linear SVMs over HOG features. Training linear SVMs however is quite expensive, and can become intractable as the number of categories increase. In this work we revisit a much older technique, viz. Linear Discriminant Analysis, and show that LDA models can be trained almost trivially, and with little or no loss in performance. The covariance matrices we estimate capture properties of natural images. Whitening HOG features with these covariances thus removes naturally occuring correlations between the HOG features. We show that these whitened features (which we call WHO) are considerably better than the original HOG features for computing similarities, and prove their usefulness in clustering. Finally, we use our findings to produce an object detection system that is competitive on PASCAL VOC 2007 while being considerably easier to train and test."}
{"_id": "c2505e3f0e19d50bdd418ca9becf8cbb08f61dc1", "title": "GUS, A Frame-Driven Dialog System", "text": "GUS is the first o f a series o f experimental computer systems that we intend to construct as part o f a program of research on language understanding. In large measure, these systems will fill the role o f periodic progress reports, summarizing what we have learned, assessing the mutual coherence o f the various lines o f investigation we have been following, and saggestin# where more emphasis is needed in future work. GUS (Genial Understander System) is intended to engage a sympathetic and highly cooperative human in an English dialog, directed towards a specific goal within a very restricted domain o f discourse. As a starting point, G US was restricted to the role o f a travel agent in a conversation with a client who wants to make a simple return trip to a single city in California. There is good reason for restricting the domain o f discourse for a computer system which is to engage in an English dialog. Specializing the subject matter that the system can talk about permiis it to achieve some measure o f realism without encompassing all the possibilities o f human knowledge or o f the English language. It also provides the user with specific motivation for participating in the conversation, thus narrowing the range o f expectations that GUS must have about the user's purposes. A system restricted in this way will be more able to guide the conversation within the boundaries o f its competence. 1. Motivation and Design Issues Within its limitations, ous is able to conduct a more-or-less realistic dialog. But the outward behavior of this first system is not what makes it interesting or significant. There are, after all, much more convenient ways to plan a trip and, unlike some other artificial intelligence programs, (;us does not offer services or furnish information that are otherwise difficult or impossible to obtain. The system is i nteresting because of the phenomena of natural dialog that it attempts to model tThis work was done by the language understander project at the Xerox Palo Alto Research center. Additional affiliations: D. A. Norman, University of California, San Diego; H. Thompso6, University of California, Berkeley; and T. Winograd, Stanford University. Artificial Intelligence 8 0977), 155-173 Copyright \u00a9 1977 by North-Holland Publishing Company 156 D . G . BOBI~OW ET AL. and because of the principles of program organization around which it was de, Signed. Among the hallmarks of natural dialogs are unexpected and seemingly unpredictable sequences of events. We describe some of the forms that these can take below. \"We then go on to discuss the modular design which makes the system re!atively insensitive t o the vagaries of ordinary conversation. 1.1. Problems of natural dialog The simple dialog shown in Fig. 1 illustrates some of the language-understanding problems we attacked. (The parenthesized numbers are for reference in the text). The problems illustrated in this figure, and described in the paragraphs below, include allowing both the client and the system to take the initiative, understanding indirect answers to questions, resolving anaphora, understanding fragments of sentences offered as answers to questions, and interpreting the discourse in the light of known conversational patterns. 1.1.1. 
Mixed initiative A typical contribution to a dialog, in addition to its more obvious functions, conveys an expectation about how the other participant will respond. This is clearest in the ease of a question, but it is true of all dialog. If one of the participants has very particular expectations and states them strongly whenever he speaks, and ff the other always responds in such a way as to meet the expectations conveyed, then the initiative remains with the first participant throughout. The success of interactive computer systems can often be traced to the skill with which their designers were able to assure them such a dominating position in the interaction. In natural conversations between humans, however, each participant usually assumes the initiative from time to time. Either clear expectations are not stated or simply not honored. GUS attempts to retain the initiative, but not to the extent of jeopardizing the natural flow of the conversation. To this extent it is a mixed-initiative system (see Carbonell [5, 6]). This is exemplified in the dialogue at (1) where the client volunteers more information than GUS requested. In addition to his destination, the client gives the date on which he wants to travel. Line (3) illustrates a ease where the client takes control of the conversation. GUS had found a potentially acceptable flight and asked for the client's approval. Instead of either giving or denying it, the client replied with a question of his own. 1.1.2. Indirect answers It is by no means always clear what constitutes an answer to a question. Frequently the purported answer is at best only a basis on which to infer the information requested. For example, when ous asks \"Whatt ime do you want to leave?\" it is seeking information to constrain the selection of a flight, client's res onse t o \u2022 \u2022 P \" . this question, a t (2), does constrain the flight selection, b u t only indirectly. In Artificial Intelligence8 (1977), 155--17a : -"}
{"_id": "35c92fe4f113f09cfbda873231ca51cdce8d995a", "title": "Fast Robust Logistic Regression for Large Sparse Datasets with Binary Outputs", "text": "Although popular and extremely well established in mainstream statistical data analysis, logistic regression is strangely absent in the field of data mining. There are two possible explanations of this phenomenon. First, there might be an assumption that any tool which can only produce linear classification boundaries is likely to be trumped by more modern nonlinear tools. Second, there is a legitimate fear that logistic regression cannot practically scale up to the massive dataset sizes to which modern data mining tools are This paper consists of an empirical examination of the first assumption, and surveys, implements and compares techniques by which logistic regression can be scaled to data with millions of attributes and records. Our results, on a large life sciences dataset, indicate that logistic regression can perform surprisingly well, both statistically and computationally, when compared with an array of more recent classification algorithms."}
{"_id": "3b8e7c8220d3883d54960d896a73045f3c70ac17", "title": "Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms", "text": "We describe new algorithms for training tagging models, as an alternative to maximum-entropy models or conditional random elds (CRFs). The algorithms rely on Viterbi decoding of training examples, combined with simple additive updates. We describe theory justifying the algorithms through a modi cation of the proof of convergence of the perceptron algorithm for classi cation problems. We give experimental results on part-of-speech tagging and base noun phrase chunking, in both cases showing improvements over results for a maximum-entropy tagger."}
{"_id": "a287d39c42eb978e379ee79011f4441ee7de96be", "title": "Gratitude and depressive symptoms: the role of positive reframing and positive emotion.", "text": "Eight studies (N=2,973) tested the theory that gratitude is related to fewer depressive symptoms through positive reframing and positive emotion. Study 1 found a direct path between gratitude and depressive symptoms. Studies 2-5 demonstrated that positive reframing mediated the relationship between gratitude and depressive symptoms. Studies 6-7 showed that positive emotion mediated the relationship between gratitude and depressive symptoms. Study 8 found that positive reframing and positive emotion simultaneously mediated the relationship between gratitude and depressive symptoms. In sum, these eight studies demonstrate that gratitude is related to fewer depressive symptoms, with positive reframing and positive emotion serving as mechanisms that account for this relationship."}
{"_id": "147f0d86753413f65fce359a3a9b9a0503813b8c", "title": "A hybrid SoC interconnect with dynamic TDMA-based transaction-less buses and on-chip networks", "text": "The two dominant architectural choices for implementing efficient communication fabrics for SoC's have been transaction-based buses and packet-based networks-on-chip (NoC). Both implementations have some inherent disadvantages - the former resulting from poor scalability and the transactional character of their operation, and the latter from inconsistent access times and deterioration of performance at high injection rates. In this paper, we propose a transaction-less, time-division-based bus architecture, which dynamically allocates timeslots on-the-fly - the dTDMA bus. This architecture addresses the contention issues of current bus architectures, while avoiding the multi-hop overhead of NoC's. It is compared to traditional bus architectures and NoC's and shown to outperform both for configurations with fewer than 10 PE's. In order to exploit the advantages of the dTDMA bus for smaller configurations, and the scalability of NoC's, we propose a new hybrid SoC interconnect combining the two, showing significant improvement in both latency and power consumption."}
{"_id": "5e214a2af786fadb419e9e169a252c6ca6e7d9f0", "title": "Information Extraction: Techniques and Challenges", "text": "This volume takes a broad view of information extraction as any method for ltering information from large volumes of text. This includes the retrieval of documents from collections and the tagging of particular terms in text. In this paper we shall use a narrower de nition: the identi cation of instances of a particular class of events or relationships in a natural language text, and the extraction of the relevant arguments of the event or relationship. Information extraction therefore involves the creation of a structured representation (such as a data base) of selected information drawn from the text. The idea of reducing the information in a document to a tabular structure is not new. Its feasibility for sublanguage texts was suggested by Zellig Harris in the 1950's, and an early implementation for medical texts was done at New York University by Naomi Sager[20]. However, the speci c notion of information extraction described here has received wide currency over the last decade through the series of Message Understanding Conferences [1, 2, 3, 4, 14]. We shall discuss these Conferences in more detail a bit later, and shall use simpli ed versions of extraction tasks from these Conferences as examples throughout this paper. Figure 1 shows a simpli ed example from one of the earlier MUC's, involving terrorist events (MUC-3) [1]. For each terrorist event, the system had to determine the type of attack (bombing, arson, etc.), the date, location, perpetrator (if stated), targets, and e ects on targets. Other examples of extraction tasks are international joint ventures (where the arguments included the partners, the new venture, its product or service, etc.) and executive succession (indicating who was hired or red by which company for which position). Information extraction is a more limited task than \\full text understanding\". In full text understanding, we aspire to represent in a explicit fashion all the information in a text. In contrast, in information extraction we delimit in advance, as part of the speci cation of the task, the semantic range of the output: the relations we will represent, and the allowable llers in each slot of a relation."}
{"_id": "28e0c6088cf444e8694e511148a8f19d9feaeb44", "title": "Deployable Helical Antennas for CubeSats", "text": "This paper explores the behavior of a self-deploying helical pantograph antenna for CubeSats. The helical pantograph concept is described along with concepts for attachment to the satellite bus. Finite element folding simulations of a pantograph consisting of eight helices are presented and compared to compaction force experiments done on a prototype antenna. Reflection coefficient tests are also presented, demonstrating the operating frequency range of the prototype antenna. The helical pantograph is shown to be a promising alternative to current small satellite antenna solutions."}
{"_id": "c3deb9745320563d5060e568aeb294469f6582f6", "title": "Knowledge Extraction from Structured Engineering Drawings", "text": "As a typical type of structured documents, table drawings are widely used in engineering fields. Knowledge extraction of such structured documents plays an important role in automatic interpretation systems. In this paper, we propose a new knowledge extraction method based on automatically analyzing drawing layout and extracting physical or logical structures from the given engineering table drawings. Then based on the automatic interpretation results, we further propose normalization method to integrate varied types of engineering tables with other engineering drawings and extract implied domain knowledge."}
{"_id": "7b7b0b0239072d442e0620a2801c47036ae05251", "title": "Public Goods and Ethnic Divisions", "text": "We present a model that links heterogeneity of preferences across ethnic groups in a city to the amount and type of public good the city supplies. We test the implications of the model with three related data sets: U. S. cities, U. S. metropolitan areas, and U. S. urban counties. Results show that the shares of spending on productive public goods -education, roads, sewers and trash pickup -in U. S. cities (metro areas/urban counties) are inversely related to the city\u2019s (metro area\u2019s/county\u2019s) ethnic fragmentation, even after controlling for other socioeconomic and demographic determinants. We conclude that ethnic conflict is an important determinant of local public finances."}
{"_id": "01dc8e05ad9590b3a1bf2f42226123c7da4b9fd1", "title": "Guided Image Filtering", "text": "In this paper, we propose a novel explicit image filter called guided filter. Derived from a local linear model, the guided filter computes the filtering output by considering the content of a guidance image, which can be the input image itself or another different image. The guided filter can be used as an edge-preserving smoothing operator like the popular bilateral filter [1], but it has better behaviors near edges. The guided filter is also a more generic concept beyond smoothing: It can transfer the structures of the guidance image to the filtering output, enabling new filtering applications like dehazing and guided feathering. Moreover, the guided filter naturally has a fast and nonapproximate linear time algorithm, regardless of the kernel size and the intensity range. Currently, it is one of the fastest edge-preserving filters. Experiments show that the guided filter is both effective and efficient in a great variety of computer vision and computer graphics applications, including edge-aware smoothing, detail enhancement, HDR compression, image matting/feathering, dehazing, joint upsampling, etc."}
{"_id": "876ba98cfa676430933057be8ffbe61d5d83335a", "title": "Self-organization patterns in wasp and open source communities", "text": "In this paper, we conducted a comparative study of how social organization takes place in a wasp colony and OSS developer communities. Both these systems display similar global organization patterns, such as hierarchies and clear labor divisions. As our analysis shows, both systems also define interacting agent networks with similar common features that reflect limited information sharing among agents. As far as we know, this is the first research study analyzing the patterns and functional significance of these systems' weighted-interaction networks. By illuminating the extent to which self-organization is responsible for patterns such as hierarchical structure, we can gain insight into the origins of organization in OSS communities."}
{"_id": "5c0edc899359a69c3769da238491f93e7a2f6d6d", "title": "Representing Attitude : Euler Angles , Unit Quaternions , and Rotation Vectors", "text": "We present the three main mathematical constructs used to represent the attitude of a rigid body in threedimensional space. These are (1) the rotation matrix, (2) a triple of Euler angles, and (3) the unit quaternion. To these we add a fourth, the rotation vector, which has many of the benefits of both Euler angles and quaternions, but neither the singularities of the former, nor the quadratic constraint of the latter. There are several other subsidiary representations, such as Cayley-Klein parameters and the axis-angle representation, whose relations to the three main representations are also described. Our exposition is catered to those who seek a thorough and unified reference on the whole subject; detailed derivations of some results are not presented. Keywords\u2013Euler angles, quaternion, Euler-Rodrigues parameters, Cayley-Klein parameters, rotation matrix, direction cosine matrix, transformation matrix, Cardan angles, Tait-Bryan angles, nautical angles, rotation vector, orientation, attitude, roll, pitch, yaw, bank, heading, spin, nutation, precession, Slerp"}
{"_id": "cb0618d4566b71ec0ea928d35899745fe36d501d", "title": "Evaluation is a Dynamic Process : Moving Beyond Dual System Models", "text": "Over the past few decades, dual attitude \u2044 process \u2044 system models have emerged as the dominant framework for understanding a wide range of psychological phenomena. Most of these models characterize the unconscious and conscious mind as being built from discrete processes or systems: one that is reflexive, automatic, fast, affective, associative, and primitive, and a second that is deliberative, controlled, slow, cognitive, propositional, and more uniquely human. Although these models serve as a useful heuristic for characterizing the human mind, recent developments in social and cognitive neuroscience suggest that the human evaluative system, like most of cognition, is widely distributed and highly dynamic. Integrating these advances with current attitude theories, we review how the recently proposed Iterative Reprocessing Model can account for apparent dual systems as well as discrepancies between traditional dual system models and recent research revealing the dynamic nature of evaluation. Furthermore, we describe important implications this dynamical system approach has for various social psychological domains. For nearly a century, psychologists have sought to understand the unconscious and conscious processes that allow people to evaluate their surroundings (Allport, 1935; Freud, 1930). Building on a model of the human mind rooted in classic Greek philosophy (Annas, 2001), many contemporary psychologists have characterized the mind as possessing discrete processes or systems: one that is evolutionarily older, reflexive, automatic, fast, affective, associative, and the other that is more uniquely human, deliberative, controlled, slow, cognitive, and propositional (see Figure 1). These dual process or system models have been highly influential throughout psychology for the past three decades (Chaiken & Trope, 1999). Indeed, a dual system model of the human mind permeates research in a wide range of psychological domains, such as attitudes and persuasion (Chaiken, 1980; Fazio, 1990; Gawronski & Bodenhausen, 2006; Petty & Cacioppo, 1986; Rydell & McConnell, 2006; Wilson, Samuel, & Schooler, 2000), stereotypes and prejudice (Crandall & Eshleman, 2003; Devine, 1989; Gaertner & Dovidio, 1986; Pettigrew & Meertens, 1995), person perception (Brewer, 1988; Fiske & Neuberg, 1990; Macrae & Bodenhausen, 2000), self-regulation (Baumeister & Heatherton, 1996; Freud, 1930; Hofmann, Friese, & Strack, 2009; Strack & Deutsch, 2004), moral cognition (Greene, Sommerville, Nystrom, Darley, & Cohen, 2001; Haidt, 2001), learning and memory (Smith & DeCoster, 2000; Sun, 2002), and decision-making (Kahneman, 2003; Sloman, 1996). Although dual system models provide generative frameworks for understanding a wide range of psychological phenomenon, recent developments in social and affective neuroscience suggest that the human evaluative system, like most of cognition, is widely distributed and highly dynamic (e.g., Ferguson & Wojnowicz, 2011; Freeman & Ambady, Social and Personality Psychology Compass 6/6 (2012): 438\u2013454, 10.1111/j.1751-9004.2012.00438.x a 2012 Blackwell Publishing Ltd 2011; Scherer, 2009). 
Integrating these advances with current attitude theories, we review how the recently proposed Iterative Reprocessing Model (Cunningham & Zelazo, 2007; Cunningham, Zelazo, Packer, & Van Bavel, 2007) can account for apparent dual systems as well as discrepancies between traditional dual system models and recent research revealing the dynamic nature of evaluation. The model also address why the nature of evaluative processing differs across people (e.g., Cunningham, Raye, & Johnson, 2005; Park, Van Bavel, Vasey, & Thayer, forthcoming). Although we focus primarily on dual models of attitudes and evaluation due to space constraints, we believe the premises of our dynamic model can be generalized to other domains where dual system models of typically invoked (Chaiken & Trope, 1999), including social cognition, self-regulation, prejudice and stereotyping, and moral cognition. Therefore, we very briefly discuss the implications of our model for these other domains in the final section of this paper and encourage interested readers to read our more extensive treatment of these issues in the domain of stereotypes and prejudice (Cunningham & Van Bavel, 2009a; Van Bavel & Cunningham, 2011) and emotion (Cunningham & Van Bavel, 2009b; Kirkland & Cunningham, 2011, forthcoming). Attitudes and evaluation Attitudes are one of the most central constructs in social psychology, yet there has been considerable debate regarding the most fundamental aspects of attitudes (Fazio, 2007; Schwarz & Bohner, 2001). Allport (1935) defined an attitude as \u2018\u2018a mental and neural state of readiness, organized through experience, exerting a directive or dynamic influence upon the individual\u2019s response to all objects and situations with which it is related\u2019\u2019 (p. 810). Throughout the history of attitude research, theorists and researchers have attempted to provide a complete yet parsimonious definition of this construct. Wellknown examples include the one-component perspective (Thurstone, 1928), the tripartite model (Affective, Behavior, Cognition; Katz & Stotland, 1959; Rosenberg & Hovland, 1960), and more recently, a host of dual attitudes (e.g., Greenwald & Banaji, 1995; Rydell & McConnell, 2006; Wilson et al., 2000) and dual process models (e.g., Chaiken, 1980; Fazio, 1990; Gawronski & Bodenhausen, 2006; Petty & Cacioppo, 1986). It is widely assumed that attitudes are stored associations between objects and their evaluations, which can be accessed from memory very quickly with little conscious effort Figure 1 Illustrative example of the process and content of a dual system model (cited in Kahneman, 2003, p. 698). Evaluation is a Dynamic Process 439 a 2012 Blackwell Publishing Ltd Social and Personality Psychology Compass 6/6 (2012): 438\u2013454, 10.1111/j.1751-9004.2012.00438.x (Fazio, 2001; Fazio, Sanbonmatsu, Powell, & Kardes, 1986; but see Schwarz, 2007). For example, people categorize positive and negative words more quickly when these words are preceded by a similarly valenced stimuli, suggesting that attitudes are automatically activated by the mere presence of the attitude object in the environment (Fazio et al., 1986). Moreover, people may have access to evaluative information about stimuli prior to their semantic content (Bargh, Litt, Pratto, & Spielman, 1989; but see Storbeck & Clore, 2007). 
Such research has led to the conclusion that the initial evaluative classification of stimuli as good or bad can be activated automatically and guide the perceiver\u2019s interpretation of his or her environment (Houston & Fazio, 1989; Smith, Fazio, & Cejka, 1996). Dual attitudes and dual process models of attitudes The recent development of a wide variety of implicit attitude measures (Petty, Fazio, & Bri\u00f1ol, 2009; Wittenbrink & Schwarz, 2007), including measures of human physiology (Cunningham, Packer, Kesek, & Van Bavel, 2009), has fueled an explosion of research on dual attitude \u2044process \u2044 system models of attitudes and evaluations (see Table 1). Most of these models infer dual process architecture from observable dissociations between implicit and explicit measures of behavior (e.g., Dovidio, Kawakami, & Gaertner, 2002; McConnell & Leibold, 2001; Rydell & McConnell, 2006). Although many dual models generally share a common set of assumptions about the human mind, the specific features of each model differ. Therefore, we propose a rough taxonomy to characterize different classes of these models. \u2018\u2018Dual attitudes models\u2019\u2019 tend to dichotomize the representations of attitudes into distinct automatic versus controlled constructs (Greenwald & Banaji, 1995; Rydell & McConnell, 2006; Wilson et al., 2000). In contrast, \u2018\u2018dual process models\u2019\u2019 tend to dichotomize the processing of attitudinal representations into automatic versus controlled processes. There is considerable debate over whether these two types of processes are independent or communicate with one another (i.e., information from one system is available to the other system) (Fazio, 1990; Gawronski & Bodenhausen, 2006; Gilbert, Pelham, & Krull, 1988; Petty, Brinol, & DeMarree, 2007). In the latter case, interdependent dual process models have generally been proposed to operate in a corrective fashion, such that \u2018\u2018controlled\u2019\u2019 processes can influence otherwise \u2018\u2018automatic\u2019\u2019 responses (e.g., Wegener & Petty, 1997). Although dual attitudes models likely require dual processes to integrate different attitudinal representations into evaluations and behaviors, dual process models are less likely to require the assumption of dual attitude representations (e.g., Fazio, 1990). For the purpose of clarity, we use \u2018\u2018dual system models\u2019\u2019 to capture models that assume dual attitudes and processes that do not interact (e.g., Rydell & McConnell, 2006; Wilson et al., 2000). There are, of course, many ways to hook up a dual system (see Gilbert, 1999 for a discussion). A complete discussion of all possible dual models and interconnections between these systems is beyond the scope of this article. Therefore, we focus on several core premises that many models have in common. Likewise, we focus on the core premises from our own model \u2013 rather than an exhaustive discussion (e.g., Cunningham et al., 2007) \u2013 in order to communicate key similarities and differences between dual models and our proposed dynamic model. 
Furthermore, we recognize that dual models and our proposed dynamic model do not exhaust all types of models of attitudes and evaluation \u2013 some extant models do include more than two processes (e.g., Beer, Knight, & D\u2019Esposito, 2006; Conrey, Sherman, Gawronski, Hugenberg, & Groom, 2005) and many allow for interactive processes that operate in a post hoc, corrective fashion (e.g., Chen & Chaiken, 1999; Gawronski & 440 Evaluation is a Dynamic Process a 2012 Blackwell Publishing Ltd Social and Personality Psychology Compass 6/6 (2012):"}
{"_id": "c99d36c9de8685052d2ebc4043510c2dafbbd166", "title": "Clues for detecting irony in user-generated contents: oh...!! it's \"so easy\" ;-)", "text": "We investigate the accuracy of a set of surface patterns in identifying ironic sentences in comments submitted by users to an on-line newspaper. The initial focus is on identifying irony in sentences containing positive predicates since these sentences are more exposed to irony, making their true polarity harder to recognize. We show that it is possible to find ironic sentences with relatively high precision (from 45% to 85%) by exploring certain oral or gestural clues in user comments, such as emoticons, onomatopoeic expressions for laughter, heavy punctuation marks, quotation marks and positive interjections. We also demonstrate that clues based on deeper linguistic information are relatively inefficient in capturing irony in user-generated content, which points to the need for exploring additional types of oral clues."}
{"_id": "c27af6bc9d274a4e95cddb5e5ed61dee05c1ed76", "title": "Novel compact spider microstrip antenna with new Defected Ground Structure", "text": "Two novel Defected Ground Structures (DGS) were first proposed, which have better results than that of the dumbbell (published shape). Using the general model of DGS, its equivalent parameters were extracted. The two new proposed shapes of DGS were then used to design a novel compact spider microstrip antenna to minimize its area. The size of the developed antenna was reduced to about 90.5% of that of the conventional one. This antenna with two different novel shapes of DGS was designed and simulated by using the ready-made software package Zeland-IE3D. Finally, it was fabricated by using thin film and photolithographic technique and measured by using vector network analyzer. Good agreements were found between the simulated and measured results."}
{"_id": "29968249a86aaa258aae95a9781d4d025b3c7658", "title": "Fraud Detection From Taxis' Driving Behaviors", "text": "Taxi is a major transportation in the urban area, offering great benefits and convenience to our daily life. However, one of the major business fraud in taxis is the charging fraud, specifically overcharging for the actual distance. In practice, it is hard for us to always monitor taxis and detect such fraud. Due to the Global Positioning System (GPS) embedded in taxis, we can collect the GPS reports from the taxis' locations, and thus, it is possible for us to retrieve their traces. Intuitively, we can utilize such information to construct taxis' trajectories, compute the actual service distance on the city map, and detect fraudulent behaviors. However, in practice, due to the extremely limited reports, notable location errors, complex city map, and road networks, our task to detect taxi fraud faces significant challenges, and the previous methods cannot work well. In this paper, we have a critical and interesting observation that fraudulent taxis always play a secret trick, i.e., modifying the taximeter to a smaller scale. As a result, it not only makes the service distance larger but also makes the reported taxi speed larger. Fortunately, the speed information collected from the GPS reports is accurate. Hence, we utilize the speed information to design a system, which is called the Speed-based Fraud Detection System (SFDS), to model taxi behaviors and detect taxi fraud. Our method is robust to the location errors and independent of the map information and road networks. At the same time, the experiments on real-life data sets confirm that our method has better accuracy, scalability, and more efficient computation, compared with the previous related methods. Finally, interesting findings of our work and discussions on potential issues are provided in this paper for future city transportation and human behavior research."}
{"_id": "db2e6623c8c0f42e29baf066f4499015c8397dae", "title": "Implementation of Running Average Background Subtraction Algorithm in FPGA for Image Processing Applications", "text": "In this paper a new background subtraction algorithm was developed to detect moving objects from a stable system in which visual surveillance plays a major role. Initially it was implemented in MATLAB. Among all existing algorithms running average algorithm was choosen because of low computational complexity which is the major parameter of time in VLSI. The concept of the background subtraction is to subtract the current image with respect to the reference image and compare it with to the certain threshold values. We propose a new real time background subtraction algorithm which was implemented with verilog hdl in order to detect moving objects accurately. Our method involves three important modules background modelling; adaptive threshold estimation and finally fore ground extraction. Compared to all existing algorithms our method having low power consumption and low resource utilization. Here we have written the"}
{"_id": "15684058d73560590931596da9208804c2e14884", "title": "Analyzing Neighborhoods of Falsifying Traces in Cyber-Physical Systems", "text": "We study the problem of analyzing falsifying traces of cyber-physical systems. Specifically, given a system model and an input which is a counterexample to a property of interest, we wish to understand which parts of the inputs are \"responsible\" for the counterexample as a whole. Whereas this problem is well known to be hard to solve precisely, we provide an approach based on learning from repeated simulations of the system under test.\n Our approach generalizes the classic concept of \"one-at-a-time\" sensitivity analysis used in the risk and decision analysis community to understand how inputs to a system influence a property in question. Specifically, we pose the problem as one of finding a neighborhood of inputs that contains the falsifying counterexample in question, such that each point in this neighborhood corresponds to a falsifying input with a high probability. We use ideas from statistical hypothesis testing to infer and validate such neighborhoods from repeated simulations of the system under test. This approach not only helps to understand the sensitivity of these counterexamples to various parts of the inputs, but also generalizes or widens the given counterexample by returning a neighborhood of counterexamples around it.\n We demonstrate our approach on a series of increasingly complex examples from automotive and closed loop medical device domains. We also compare our approach against related techniques based on regression and machine learning."}
{"_id": "6d41c259fe6588681194f0ba22d26a11ebd5ce3d", "title": "Adhesive capsulitis: sonographic changes in the rotator cuff interval with arthroscopic correlation", "text": "To evaluate the sonographic findings of the rotator interval in patients with clinical evidence of adhesive capsulitis immediately prior to arthroscopy. We prospectively compared 30\u00a0patients with clinically diagnosed adhesive capsulitis (20\u00a0females, 10\u00a0males, mean age 50\u00a0years) with a control population of 10\u00a0normal volunteers and 100\u00a0patients with a clinical suspicion of rotator cuff tears. Grey-scale and colour Doppler sonography of the rotator interval were used. Twenty-six patients (87%) demonstrated hypoechoic echotexture and increased vascularity within the rotator interval, all of whom had had symptoms for less than 1\u00a0year. Three patients had hypoechoic echotexture but no increase in vascularity, and one patient had a normal sonographic appearance. All patients were shown to have fibrovascular inflammatory soft-tissue changes in the rotator interval at arthroscopy commensurate with adhesive capsulitis. None of the volunteers or the patients with a clinical diagnosis of rotator cuff tear showed such changes. Sonography can provide an early accurate diagnosis of adhesive capsulitis by assessing the rotator interval for hypoechoic vascular soft tissue."}
{"_id": "14c4e4b83f936184875ba79e6df1ac10ec556bdd", "title": "Unsupervised learning of models for object recognition", "text": "A method is presented to learn object class models from unlab eled and unsegmented cluttered scenes for the purpose of visual object recognition. The variabili ty across a class of objects is modeled in a principled way, treating objects as flexible constellatio ns f rigid parts (features). Variability is represented by a joint probability density function (pdf) o n the shape of the constellation and the output of part detectors. Corresponding \u201cconstellation mo dels\u201d can be learned in a completely unsupervised fashion. In a first stage, the learning method a utomatically identifies distinctive parts in the training set by applying a clustering algorithm to pat terns selected by an interest operator. It then learns the statistical shape model using expectation m aximization. Mixtures of constellation models can be defined and applied to \u201cdiscover\u201d object catego ri s in an unsupervised manner. The method achieves very good classification results on human fa ces, cars, leaves, handwritten letters, and cartoon characters."}
{"_id": "88a4f9ab70cdb9aea823d76e54d08c28a17ee501", "title": "Latent factor models with additive and hierarchically-smoothed user preferences", "text": "Items in recommender systems are usually associated with annotated attributes: for e.g., brand and price for products; agency for news articles, etc. Such attributes are highly informative and must be exploited for accurate recommendation. While learning a user preference model over these attributes can result in an interpretable recommender system and can hands the cold start problem, it suffers from two major drawbacks: data sparsity and the inability to model random effects. On the other hand, latent-factor collaborative filtering models have shown great promise in recommender systems; however, its performance on rare items is poor. In this paper we propose a novel model LFUM, which provides the advantages of both of the above models. We learn user preferences (over the attributes) using a personalized Bayesian hierarchical model that uses a combination(additive model) of a globally learned preference model along with user-specific preferences. To combat data-sparsity, we smooth these preferences over the item-taxonomy using an efficient forward-filtering and backward-smoothing inference algorithm. Our inference algorithms can handle both discrete attributes (e.g., item brands) and continuous attributes (e.g., item prices). We combine the user preferences with the latent-factor models and train the resulting collaborative filtering system end-to-end using the successful BPR ranking algorithm. In our extensive experimental analysis, we show that our proposed model outperforms several commonly used baselines and we carry out an ablation study showing the benefits of each component of our model."}
{"_id": "e5b8caf98c9746525c0b34bd89bf35f431040920", "title": "Finger Vein Recognition With Anatomy Structure Analysis", "text": "Finger vein recognition has received a lot of attention recently and is viewed as a promising biometric trait. In related methods, vein pattern-based methods explore intrinsic finger vein recognition, but their performance remains unsatisfactory owing to defective vein networks and weak matching. One important reason may be the neglect of deep analysis of the vein anatomy structure. By comprehensively exploring the anatomy structure and imaging characteristic of vein patterns, this paper proposes a novel finger vein recognition framework, including an anatomy structure analysis-based vein extraction algorithm and an integration matching strategy. Specifically, the vein pattern is extracted from the orientation map-guided curvature based on the valley- or half valley-shaped cross-sectional profile. In addition, the extracted vein pattern is further thinned and refined to obtain a reliable vein network. In addition to the vein network, the relatively clear vein branches in the image are mined from the vein pattern, referred to as the vein backbone. In matching, the vein backbone is used in vein network calibration to overcome finger displacements. The similarity of two calibrated vein networks is measured by the proposed elastic matching and further recomputed by integrating the overlap degree of corresponding vein backbones. Extensive experiments on two public finger vein databases verify the effectiveness of the proposed framework."}
{"_id": "1e7efea26cfbbcd2905d63451e77a02f1031ea12", "title": "A Novel Global Path Planning Method for Mobile Robots Based on Teaching-Learning-Based Optimization", "text": "The Teaching-Learning-Based Optimization (TLBO) algorithm has been proposed in recent years. It is a new swarm intelligence optimization algorithm simulating the teaching-learning phenomenon of a classroom. In this paper, a novel global path planning method for mobile robots is presented, which is based on an improved TLBO algorithm called Nonlinear Inertia Weighted Teaching-Learning-Based Optimization (NIWTLBO) algorithm in our previous work. Firstly, the NIWTLBO algorithm is introduced. Then, a new map model of the path between start-point and goal-point is built by coordinate system transformation. Lastly, utilizing the NIWTLBO algorithm, the objective function of the path is optimized; thus, a global optimal path is obtained. The simulation experiment results show that the proposed method has a faster convergence rate and higher accuracy in searching for the path than the basic TLBO and some other algorithms as well, and it can effectively solve the optimization problem for mobile robot global path planning."}
{"_id": "a86fed94c9d97e052d0ff84b2403b10200280c6b", "title": "Large Scale Distributed Data Science from scratch using Apache Spark 2.0", "text": "Apache Spark is an open-source cluster computing framework. It has emerged as the next generation big data processing engine, overtaking Hadoop MapReduce which helped ignite the big data revolution. Spark maintains MapReduce\u2019s linear scalability and fault tolerance, but extends it in a few important ways: it is much faster (100 times faster for certain applications), much easier to program in due to its rich APIs in Python, Java, Scala, SQL and R (MapReduce has 2 core calls) , and its core data abstraction, the distributed data frame. In addition, it goes far beyond batch applications to support a variety of compute-intensive tasks, including interactive queries, streaming, machine learning, and graph processing. With massive amounts of computational power, deep learning has been shown to produce state-of-the-art results on various tasks in different fields like computer vision, automatic speech recognition, natural language processing and online advertising targeting. Thanks to the open-source frameworks, e.g. Torch, Theano, Caffe, MxNet, Keras and TensorFlow, we can build deep learning model in a much easier way. Among all these framework, TensorFlow is probably the most popular open source deep learning library. TensorFlow 1.0 was released recently, which provide a more stable, flexible and powerful computation tool for numerical computation using data flow graphs. Keras is a highlevel neural networks library, written in Python and capable of running on top of either TensorFlow or Theano. It was developed with a focus on enabling fast experimentation. This tutorial will provide an accessible introduction to large-scale distributed machine learning and data mining, and to Spark and its potential to revolutionize academic and commercial data science practices. It is divided into three parts: the first part will cover fundamental Spark concepts, including Spark Core, functional programming ala map-reduce, data frames, the Spark Shell, Spark Streaming, Spark SQL, MLlib, and more; the second part will focus on hands-on algorithmic design and development with Spark (developing algorithms from scratch such as decision tree learning, association rule mining (aPriori), graph processing algorithms such as pagerank/shortest path, gradient descent algorithms such as support vectors machines and matrix factorization. Industrial applications and deployments of Spark will also be presented.; the third part will introduce deep learning concepts, how to implement a deep learning model through TensorFlow, Keras and run the model on Spark. Example code will be made available in python (pySpark) notebooks."}
{"_id": "070874b011f8eb2b18c8aa521ad0a7a932b4d9ad", "title": "Action Recognition with Improved Trajectories", "text": "Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art."}
{"_id": "1aad2da473888cb7ebc1bfaa15bfa0f1502ce005", "title": "First-Person Activity Recognition: What Are They Doing to Me?", "text": "This paper discusses the problem of recognizing interaction-level human activities from a first-person viewpoint. The goal is to enable an observer (e.g., a robot or a wearable camera) to understand 'what activity others are performing to it' from continuous video inputs. These include friendly interactions such as 'a person hugging the observer' as well as hostile interactions like 'punching the observer' or 'throwing objects to the observer', whose videos involve a large amount of camera ego-motion caused by physical interactions. The paper investigates multi-channel kernels to integrate global and local motion information, and presents a new activity learning/recognition methodology that explicitly considers temporal structures displayed in first-person activity videos. In our experiments, we not only show classification results with segmented videos, but also confirm that our new approach is able to detect activities from continuous videos reliably."}
{"_id": "659fc2a483a97dafb8fb110d08369652bbb759f9", "title": "Improving the Fisher Kernel for Large-Scale Image Classification", "text": "The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets."}
{"_id": "014e1186209e4f942f3b5ba29b6b039c8e99ad88", "title": "Social interactions: A first-person perspective", "text": "This paper presents a method for the detection and recognition of social interactions in a day-long first-person video of u social event, like a trip to an amusement park. The location and orientation of faces are estimated and used to compute the line of sight for each face. The context provided by all the faces in a frame is used to convert the lines of sight into locations in space to which individuals attend. Further, individuals are assigned roles based on their patterns of attention. The rotes and locations of individuals are analyzed over time to detect and recognize the types of social interactions. In addition to patterns of face locations and attention, the head movements of the first-person can provide additional useful cues as to their attentional focus. We demonstrate encouraging results on detection and recognition of social interactions in first-person videos captured from multiple days of experience in amusement parks."}
{"_id": "d0f4eb19708c0261aa07519279792e19b793b863", "title": "Real-Time EMG Driven Lower Limb Actuated Orthosis for Assistance As Needed Movement Strategy", "text": "This paper presents a new approach to control a wearable knee joint exoskeleton driven through the wearer\u2019 s intention. A realistic bio-inspired musculoskeletal knee joint model is used to control the exoskeleton. This model takes in to account changes in muscle length and joint moment arms as well as the dynamics of muscle activation and muscle contrac tion during lower limb movements. Identification of the model parameters is done through an unconstrained optimization problem formulation. A control law strategy based on the principle of assistance as needed is proposed. This approach guarantees asymptotic stability of the knee joint orthosis and adaptat ion to human-orthosis interaction. Moreover, the proposed contr ol law is robust with respect to external disturbances. Experimental validations are conducted online on a healthy subject during flexion and extenion of their knee joint. The proposed control strategy has shown satisfactory performances in terms of tr acking trajectory and adaptation to human tasks completion."}
{"_id": "cf4ed483e83e32e4973fdaf25377f867723c19b0", "title": "Coronal positioning of existing gingiva: short term results in the treatment of shallow marginal tissue recession.", "text": "Although not a new procedure, coronal positioning of existing gingiva may be used to enhance esthetics and reduce sensitivity. Unfortunately when recession is minimal and the marginal tissue is healthy, many periodontists do not suggest treatment. This article outlines a simple surgical technique with the criteria for its use which results in a high degree of predictability and patient satisfaction."}
{"_id": "46cfc1870b1aa0876e213fb08dc23f09420fcca6", "title": "Effective Dynamic Voltage Scaling Through CPU-Boundedness Detection", "text": "Dynamic voltage scaling (DVS) allows a program to execute at a non-peak CPU frequency in order to reduce CPU power, and hence, energy consumption; however, it is done at the cost of performance degradation. For a program whose execution time is bounded by peripherals\u2019 performance rather than the CPU speed, applying DVS to the program will result in negligible performance penalty. Unfortunately, existing DVS-based power-management algorithms are conservative in the sense that they overly exaggerate the impact that the CPU speed has on the execution time, e.g., they assume that the execution time will double if the CPU speed is halved. Based on a new single-coefficient performance model, we propose a DVS algorithm that detects the CPU-boundedness of a program on the fly (via a regression method on the past MIPS rate) and then adjusts the CPU frequency accordingly. To illustrate its effectiveness, we compare our algorithm with other DVS algorithms on real systems via physical measurements."}
{"_id": "6761bca93f1631f93967097c9f0859d3c7cdc233", "title": "The Relationships among Acceleration , Agility , Sprinting Ability , Speed Dribbling Ability and Vertical Jump Ability in 14-Year-Old Soccer Players", "text": "The aim of this study was to evaluate the relationships among acceleration, agility, sprinting ability, speed dribbling ability and vertical jump ability in 14-year-old soccer players. Twenty-five young soccer players (average age 13.52 \u00b1 0.51 years; height 158.81 \u00b1 5.76 cm; weight 48.92 \u00b1 6.48 kg; training age 3.72 \u00b1 0.64 years) performed a series of physical tests: Yo-Yo Intermittent Recovery Test Level 1 (YYIRT); zigzag agility with the ball (ZAWB); without the ball (ZAWOB); sprinting ability (10-m, 20-m and 30-m); speed dribbling ability (SDA), acceleration ability (FLST) and jumping ability (counter-movement jump (CMJ), squat jump (SJ) and drop jump (DJ). The results showed that 10-m sprint was correlated with 20-m (r = 0.682), 30-m sprint (r = 0.634) and also SDA, FLST and ZAWOB (r = 0.540, r = 0.421; r = 0.533 respectively). Similarly, 20-m sprint was correlated with 30-m sprint (r = 0.491) and ZAWOB (r = 0.631). 30-m sprint was negatively correlated with CMJ (r = -0.435), while strong and moderate correlated with FLST (r = 0.742), ZAWOB (r = 0.657). In addition, CMJ strong correlated with SJ and DJ (r = 0.779, r = 0.824 respectively). CMJ, SJ and DJ were negatively correlated with ZAWOB. Furthermore, SDA strong and moderate correlated with FLST (r = 0.875), ZAWB (r = 0.718) and ZAWOB (r = 0.645). Similarly, a correlation was also found among FLST, ZAWB and ZAWOB (r = 0.421, r = 0.614 respectively). Finally, ZAWOB was strong and moderate correlated with the performance of soccer players in different field tests. In conclusion, the findings of the present study indicated that agility without the ball associated with the all physical fitness components. In addition, agility performance affects acceleration, sprint and jumping performances in young soccer players. Therefore, soccer players should focus on agility exercises in order to improve their acceleration, sprinting and jumping performance."}
{"_id": "4f72e1cc3fd59bd177a9ad4a6045cb863b977f6e", "title": "Vehicle Applications of Controller Area Network", "text": "The Controller Area Network (CAN) is a serial bus communications protocol developed by Bosch in the early 1980s. It defines a standard for efficient and reliable communication between sensor, actuator, controller, and other nodes in real-time applications. CAN is the de facto standard in a large variety of networked embedded control systems. The early CAN development was mainly supported by the vehicle industry: CAN is found in a variety of passenger cars, trucks, boats, spacecraft, and other types of vehicles. The protocol is also widely used today in industrial automation and other areas of networked embedded control, with applications in diverse products such as production machinery, medical equipment, building automation, weaving machines, and wheelchairs. In the automotive industry, embedded control has grown from stand-alone systems to highly integrated and networked control systems [11, 7]. By networking electro-mechanical subsystems, it becomes possible to modularize functionalities and hardware, which facilitates reuse and adds capabilities. Fig. 1 shows an example of an electronic control unit (ECU) mounted on a diesel engine of a Scania truck. The ECU handles the control of engine, turbo, fan, etc. but also the CAN communication. Combining networks and mechatronic modules makes it possible to reduce both the cabling and the number"}
{"_id": "ce5a5ef51ef1b470c00a7aad533d5e8ad2ef9363", "title": "A Universal Intelligent System-on-Chip Based Sensor Interface", "text": "The need for real-time/reliable/low-maintenance distributed monitoring systems, e.g., wireless sensor networks, has been becoming more and more evident in many applications in the environmental, agro-alimentary, medical, and industrial fields. The growing interest in technologies related to sensors is an important indicator of these new needs. The design and the realization of complex and/or distributed monitoring systems is often difficult due to the multitude of different electronic interfaces presented by the sensors available on the market. To address these issues the authors propose the concept of a Universal Intelligent Sensor Interface (UISI), a new low-cost system based on a single commercial chip able to convert a generic transducer into an intelligent sensor with multiple standardized interfaces. The device presented offers a flexible analog and/or digital front-end, able to interface different transducer typologies (such as conditioned, unconditioned, resistive, current output, capacitive and digital transducers). The device also provides enhanced processing and storage capabilities, as well as a configurable multi-standard output interface (including plug-and-play interface based on IEEE 1451.3). In this work the general concept of UISI and the design of reconfigurable hardware are presented, together with experimental test results validating the proposed device."}
{"_id": "830aa3e8a0cd17d77c96695373469ba2af23af38", "title": "Efficient Nonsmooth Nonconvex Optimization for Image Restoration and Segmentation", "text": "In this article, we introduce variational image restoration and segmentation models that incorporate the L1 data-fidelity measure and a nonsmooth, nonconvex regularizer. The L1 fidelity term allows us to restore or segment an image with low contrast or outliers, and the nonconvex regularizer enables homogeneous regions of the objective function (a restored image or an indicator function of a segmented region) to be efficiently smoothed while edges are well preserved. To handle the nonconvexity of the regularizer, a multistage convex relaxation method is adopted. This provides a better solution than the classical convex total variation regularizer, or than the standard L1 convex relaxation method. Furthermore, we design fast and efficient optimization algorithms that can handle the non-differentiability of both the fidelity and regularization terms. The proposed iterative algorithms asymptotically solve the original nonconvex problems. Our algorithms output a restored image or segmented regions in the image, as well as an edge indicator that characterizes the edges of the output, similar to Mumford\u2013Shah-like regularizing functionals. Numerical examples demonstrate the promising results of the proposed restoration and segmentation models."}
{"_id": "b32895c83296cff134e98e431e6c65d2d6ae9bcf", "title": "Hybrid models based on rough set classifiers for setting credit rating decision rules in the global banking industry", "text": "Banks are important to national, and even global, economic stability. Banking panics that follow bank insolvency or bankruptcy, especially of large banks, can severely jeopardize economic stability. Therefore, issuers and investors urgently need a credit rating indicator to help identify the financial status and operational competence of banks. A credit rating provides financial entities with an assessment of credit worthiness, investment risk, and default probability. Although numerous models have been proposed to solve credit rating problems, they have the following drawbacks: (1) lack of explanatory power; (2) reliance on the restrictive assumptions of statistical techniques; and (3) numerous variables, which result in multiple dimensions and complex data. To overcome these shortcomings, this work applies two hybrid models that solve the practical problems in credit rating classification. For model verification, this work uses an experimental dataset collected from the Bankscope database for the period 1998\u20132007. Experimental results demonstrate that the proposed hybrid models for credit rating classification outperform the listing models in this work. A set of decision rules for classifying credit ratings is extracted. Finally, study findings and managerial implications are provided for academics and practitioners. 2012 Elsevier B.V. All rights reserved."}
{"_id": "ad49a96e17d8e1360477bd4649ba8d83173b1c3a", "title": "A 5.4-Gbit/s Adaptive Continuous-Time Linear Equalizer Using Asynchronous Undersampling Histograms", "text": "We demonstrate a new type of adaptive continuous-time linear equalizer (CTLE) based on asynchronous undersampling histograms. Our CTLE automatically selects the optimal equalizing filter coefficient among several predetermined values by searching for the coefficient that produces the largest peak value in histograms obtained with asynchronous undersampling. This scheme is simple and robust and does not require clock synchronization for its operation. A prototype chip realized in 0.13-\u03bcm CMOS technology successfully achieves equalization for 5.4-Gbit/s 231 - 1 pseudorandom bit sequence data through 40-, 80-, and 120-cm PCB traces and 3-m DisplayPort cable. In addition, we present the results of statistical analysis with which we verify the reliability of our scheme for various sample sizes. The results of this analysis are confirmed with experimental data."}
{"_id": "b620ce13b5dfaabc8b837b569a2265ff8c0e4e71", "title": "Social media and internet-based data in global systems for public health surveillance: a systematic review.", "text": "CONTEXT\nThe exchange of health information on the Internet has been heralded as an opportunity to improve public health surveillance. In a field that has traditionally relied on an established system of mandatory and voluntary reporting of known infectious diseases by doctors and laboratories to governmental agencies, innovations in social media and so-called user-generated information could lead to faster recognition of cases of infectious disease. More direct access to such data could enable surveillance epidemiologists to detect potential public health threats such as rare, new diseases or early-level warnings for epidemics. But how useful are data from social media and the Internet, and what is the potential to enhance surveillance? The challenges of using these emerging surveillance systems for infectious disease epidemiology, including the specific resources needed, technical requirements, and acceptability to public health practitioners and policymakers, have wide-reaching implications for public health surveillance in the 21st century.\n\n\nMETHODS\nThis article divides public health surveillance into indicator-based surveillance and event-based surveillance and provides an overview of each. We did an exhaustive review of published articles indexed in the databases PubMed, Scopus, and Scirus between 1990 and 2011 covering contemporary event-based systems for infectious disease surveillance.\n\n\nFINDINGS\nOur literature review uncovered no event-based surveillance systems currently used in national surveillance programs. While much has been done to develop event-based surveillance, the existing systems have limitations. Accordingly, there is a need for further development of automated technologies that monitor health-related information on the Internet, especially to handle large amounts of data and to prevent information overload. The dissemination to health authorities of new information about health events is not always efficient and could be improved. No comprehensive evaluations show whether event-based surveillance systems have been integrated into actual epidemiological work during real-time health events.\n\n\nCONCLUSIONS\nThe acceptability of data from the Internet and social media as a regular part of public health surveillance programs varies and is related to a circular challenge: the willingness to integrate is rooted in a lack of effectiveness studies, yet such effectiveness can be proved only through a structured evaluation of integrated systems. Issues related to changing technical and social paradigms in both individual perceptions of and interactions with personal health data, as well as social media and other data from the Internet, must be further addressed before such information can be integrated into official surveillance systems."}
{"_id": "af0f4459cd6a41a582e309a461a4cfa4846edefd", "title": "Functional cultures and health benefits", "text": "A number of health benefits have been claimed for probiotic bacteria such as Lactobacillus acidophilus, Bifidobacterium spp., and L. casei. These benefits include antimutagenic effects, anticarcinogenic properties, improvement in lactose metabolism, reduction in serum cholesterol, and immune system stimulation. Because of the potential health benefits, these organisms are increasingly being incorporated into dairy foods, particularly yoghurt. In addition to yoghurt, fermented functional foods with health benefits based on bioactive peptides released by probiotic organisms, including Evolus and Calpis, have been introduced in the market. To maximize effectiveness of bifidus products, prebiotics are used in probiotic foods. Synbiotics are products that contain both prebiotics and probiotics. r 2007 Elsevier Ltd. All rights reserved."}
{"_id": "139ded7450fc0f838f8784053f656114cbdb9a0d", "title": "Good Question! Statistical Ranking for Question Generation", "text": "We address the challenge of automatically generating questions from reading materials for educational practice and assessment. Our approach is to overgenerate questions, then rank them. We use manually written rules to perform a sequence of general purpose syntactic transformations (e.g., subject-auxiliary inversion) to turn declarative sentences into questions. These questions are then ranked by a logistic regression model trained on a small, tailored dataset consisting of labeled output from our system. Experimental results show that ranking nearly doubles the percentage of questions rated as acceptable by annotators, from 27% of all questions to 52% of the top ranked 20% of questions."}
{"_id": "1e8233a8c8271c3278f1b84bed368145c0034a35", "title": "Maximizing Throughput of Overprovisioned HPC Data Centers Under a Strict Power Budget", "text": "Building future generation supercomputers while constraining their power consumption is one of the biggest challenges faced by the HPC community. For example, US Department of Energy has set a goal of 20 MW for an exascale (1018 flops) supercomputer. To realize this goal, a lot of research is being done to revolutionize hardware design to build power efficient computers and network interconnects. In this work, we propose a software-based online resource management system that leverages hardware facilitated capability to constrain the power consumption of each node in order to optimally allocate power and nodes to a job. Our scheme uses this hardware capability in conjunction with an adaptive runtime system that can dynamically change the resource configuration of a running job allowing our resource manager to re-optimize allocation decisions to running jobs as new jobs arrive, or a running job terminates.\n We also propose a performance modeling scheme that estimates the essential power characteristics of a job at any scale. The proposed online resource manager uses these performance characteristics for making scheduling and resource allocation decisions that maximize the job throughput of the supercomputer under a given power budget. We demonstrate the benefits of our approach by using a mix of jobs with different power-response characteristics. We show that with a power budget of 4.75 MW, we can obtain up to 5.2X improvement in job throughput when compared with the SLURM scheduling policy that is power-unaware. We corroborate our results with real experiments on a relatively small scale cluster, in which we obtain a 1.7X improvement."}
{"_id": "6129378ecc501b88f4fbfe3c0dfc20d09764c5ee", "title": "The Impact of Flow on Online Consumer Behavior", "text": "Previous research has acknowledged flow as a useful construct for explaining online consumer behavior. However, there is dearth of knowledge about what dimensions of flow and how they actually influence online consumer behavior as flow is difficult to conceptualize and measure. This research examines flow and its effects on online consumer behavior in a unified model which draws upon theory of planned behavior (TPB). The four important dimensions of flow (concentration, enjoyment, time distortion, telepresence) are explored in terms of their antecedent effects on online consumer behavior. Results of this empirical study show that flow influences online consumer behavior through several important latent constructs. Findings of this research not only extend the existing knowledge of flow and its antecedent effects on online consumer behavior but also provide new insights into how flow can be conceptualized and studied in the e-commerce setting."}
{"_id": "9562f19f1b6e6bfaeb02f39dc12e6fc262938543", "title": "DuDe: The Duplicate Detection Toolkit", "text": "Duplicate detection, also known as entity matching or record linkage, was first defined by Newcombe et al. [19] and has been a research topic for several decades. The challenge is to effectively and efficiently identify pairs of records that represent the same real world entity. Researchers have developed and described a variety of methods to measure the similarity of records and/or to reduce the number of required comparisons. Comparing these methods to each other is essential to assess their quality and efficiency. However, it is still difficult to compare results, as there usually are differences in the evaluated datasets, the similarity measures, the implementation of the algorithms, or simply the hardware on which the code is executed. To face this challenge, we are developing the comprehensive duplicate detection toolkit \u201cDuDe\u201d. DuDe already provides multiple methods and datasets for duplicate detection and consists of several components with clear interfaces that can be easily served with individual code. In this paper, we present the DuDe architecture and its workflow for duplicate detection. We show that DuDe allows to easily compare different algorithms and similarity measures, which is an important step towards a duplicate detection benchmark. 1. DUPLICATE DETECTION FRAMEWORKS The basic problem of duplicate detection has been studied under various names, such as entity matching, record linkage, merge/purge or record reconciliation. Given a set of entities, the goal is to identify the represented set of distinct real-world entities. Proposed algorithms in the area of duplicate detection aim to improve the efficiency or the effectiveness of the duplicate detection process. The goal of efficiency is usually to reduce the number of pairwise comparisons. In a naive approach this is quadratic in the number of records. By making intelligent guesses which records have a high probability of representing the same real-world entity, the search space is reduced with the drawback that some duPermission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, to post on servers or to redistribute to lists, requires a fee and/or special permission from the publisher, ACM. VLDB \u201810, September 13-17, 2010, Singapore Copyright 2010 VLDB Endowment, ACM 000-0-00000-000-0/00/00. plicates might be missed. Effectiveness, on the other hand, aims at classifying pairs of records accurately as duplicate or non-duplicate [17]. Elmagarmid et al. have compiled a survey of existing algorithms and techniques for duplicate detection [11]. K\u00f6pcke and Rahm give a comprehensive overview about existing duplicate detection frameworks [15]. They compare eleven frameworks and distinguish between frameworks without training (BN [16], MOMA [24], SERF [1]), training-based frameworks (Active Atlas [22], [23], MARLIN [2, 3], Multiple Classifier System [27], Operator Trees [4]) and hybrid frameworks (TAILOR [10], FEBRL [6], STEM [14], Context Based Framework [5]). Not included in the overview is STRINGER [12], which deals with approximate string matching in large data sources. 
K\u00f6pcke and Rahm use several comparison criteria, such as supported entity types (e.g. relational entities, XML), availability of partitioning methods to reduce the search space, used matchers to determine whether two entities are similar enough to represent the same real-world entity, the ability to combine several matchers, and, where necessary, the selection of training data. In their summary, K\u00f6pcke and Rahm criticize that the frameworks use diverse methodologies, measures, and test problems for evaluation and that it is therefore difficult to assess the efficiency and effectiveness of each single system. They argue that standardized entity matching benchmarks are needed and that researchers should provide prototype implementations and test data with their algorithms. This agrees with Neiling et al. [18], where desired properties of a test framework for object identification solutions are discussed. Moreover, Weis et al. [25] argue for a duplicate detection benchmark. Both papers see the necessity for standardized data from real-world or artificial datasets, which must also contain information about the real-world pairs. Additionally, clearly defined quality criteria with a description of their computation, and a detailed specification of the test procedure are required. An overview of quality and complexity measures for data linkage and deduplication can be found in Christen and Goiser [7]. With DuDe, we provide a toolkit for duplicate detection that can easily be extended by new algorithms and components. Conducted experiments are comprehensible and can be compared with former ones. Additionally, several algorithms, similarity measures, and datasets with gold standards are provided, which is a requirement for a duplicate detection benchmark. DuDe and several datasets are available for download at http://www.hpi.uni-potsdam.de/naumann/projekte/dude.html."}
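The efficiency/effectiveness split discussed in the DuDe record maps onto two code-level levers: a blocking key that prunes the quadratic pair space, and a similarity measure with a threshold that classifies the surviving pairs. Below is a minimal sketch with invented records and an invented blocking key, not DuDe's actual architecture.

```python
# Blocking + pairwise similarity sketch. Records, the blocking key, and the
# 0.5 threshold are illustrative assumptions.
from itertools import combinations

def jaccard(a, b):
    ta = set(a.lower().replace(",", "").split())
    tb = set(b.lower().replace(",", "").split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

records = [
    {"id": 1, "name": "John A. Smith"},
    {"id": 2, "name": "Smith, John"},
    {"id": 3, "name": "Jane Doe"},
]

# Blocking: only compare records that share a key; here, the initial of the
# longest name token (a crude stand-in for a real blocking key).
blocks = {}
for r in records:
    key = max(r["name"].replace(",", "").split(), key=len)[0].lower()
    blocks.setdefault(key, []).append(r)

# Pairwise comparison within blocks, classified by a similarity threshold.
duplicates = []
for block in blocks.values():
    for r1, r2 in combinations(block, 2):
        if jaccard(r1["name"], r2["name"]) >= 0.5:
            duplicates.append((r1["id"], r2["id"]))
print(duplicates)  # [(1, 2)]
```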
{"_id": "c71217b2b111a51a31cf1107c71d250348d1ff68", "title": "One Network to Solve Them All \u2014 Solving Linear Inverse Problems Using Deep Projection Models", "text": "While deep learning methods have achieved state-of-theart performance in many challenging inverse problems like image inpainting and super-resolution, they invariably involve problem-specific training of the networks. Under this approach, each inverse problem requires its own dedicated network. In scenarios where we need to solve a wide variety of problems, e.g., on a mobile camera, it is inefficient and expensive to use these problem-specific networks. On the other hand, traditional methods using analytic signal priors can be used to solve any linear inverse problem; this often comes with a performance that is worse than learning-based methods. In this work, we provide a middle ground between the two kinds of methods \u2014 we propose a general framework to train a single deep neural network that solves arbitrary linear inverse problems. We achieve this by training a network that acts as a quasi-projection operator for the set of natural images and show that any linear inverse problem involving natural images can be solved using iterative methods. We empirically show that the proposed framework demonstrates superior performance over traditional methods using wavelet sparsity prior while achieving performance comparable to specially-trained networks on tasks including compressive sensing and pixel-wise inpainting."}
{"_id": "df3a7c3b90190a91ca645894da25c56ebe13b6e6", "title": "Automatic Deobfuscation and Reverse Engineering of Obfuscated Code", "text": ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 CHAPTER"}
{"_id": "14ad0bb3167974a17bc23c94e2a8644e93e57d76", "title": "Static test case prioritization using topic models", "text": "Software development teams use test suites to test changes to their source code. In many situations, the test suites are so large that executing every test for every source code change is infeasible, due to time and resource constraints. Development teams need to prioritize their test suite so that as many distinct faults as possible are detected early in the execution of the test suite. We consider the problem of static black-box test case prioritization (TCP), where test suites are prioritized without the availability of the source code of the system under test (SUT). We propose a new static black-box TCP technique that represents test cases using a previously unused data source in the test suite: the linguistic data of the test cases, i.e., their identifier names, comments, and string literals. Our technique applies a text analysis algorithm called topic modeling to the linguistic data to approximate the functionality of each test case, allowing our technique to give high priority to test cases that test different functionalities of the SUT. We compare our proposed technique with existing static black-box TCP techniques in a case study of multiple real-world open source systems: several versions of Apache Ant and Apache Derby. We find that our static black-box TCP technique outperforms existing static black-box TCP techniques, and has comparable or better performance than two existing execution-based TCP techniques. Static black-box TCP methods are widely applicable because the only input they require is the source code of the test cases themselves. This contrasts with other TCP techniques which require access to the SUT runtime behavior, to the SUT specification models, or to the SUT source code."}
{"_id": "04e54373e680e507908b77b641d57da80ecf77a2", "title": "A survey of current paradigms in machine translation", "text": "This paper is a survey of the current machine translation research in the US, Europe and Japan. A short history of machine translation is presented rst, followed by an overview of the current research work. Representative examples of a wide range of diierent approaches adopted by machine translation researchers are presented. These are described in detail along with a discussion of the practicalities of scaling up these approaches for operational environments. In support of this discussion, issues in, and techniques for, evaluating machine translation systems are addressed."}
{"_id": "bbceefc86d8744c9d929a0930721f8415df97b11", "title": "On-line monitoring of power curves", "text": "A data-driven approach to the performance analysis of wind turbines is presented. Turbine performance is captured with a power curve. The power curves are constructed using historical wind turbine data. Three power curve models are developed, one by the least squares method and the other by the maximum likelihood estimation method. The models are solved by an evolutionary strategy algorithm. The power curve model constructed by the least squares method outperforms the one built by the maximum likelihood approach. The third model is non-parametric and is built with the k-nearest neighbor (k-NN) algorithm. The least squares (parametric) model and the non-parametric model are used for on-line monitoring of the power curve and their performance is analyzed. 2008 Elsevier Ltd. All rights reserved."}
{"_id": "e0306476d4c3021cbda7189ee86390d81a6d7e36", "title": "The infrared camera-based system to evaluate the human sleepiness", "text": "The eye's blinking is a significant indicator of the sleepiness. The existing systems of blink detection and sleepiness analysis require usually to fix camera on spectacle frame or a special helmet that is not convenient and can affect the obtained result. In this paper, the infrared camera-based contact-less system is proposed to evaluate the human's sleepiness. The infrared light switching is used to detect the pupil in each frame and, as a result, the blink event. The active pan-tilt unit is used to make possible free head movements. The algorithm is pointed out to process the camera frames in order to distinguish the involuntary blinks from voluntary ones. Preliminary experimental tests are shown with the intent to validate the proposed hardware and software system pointed out."}
{"_id": "15340aab3a8b8104117f5462788c945194bce782", "title": "Context-Independent Claim Detection for Argument Mining", "text": "Argumentation mining aims to automatically identify structured argument data from unstructured natural language text. This challenging, multifaceted task is recently gaining a growing attention, especially due to its many potential applications. One particularly important aspect of argumentation mining is claim identification. Most of the current approaches are engineered to address specific domains. However, argumentative sentences are often characterized by common rhetorical structures, independently of the domain. We thus propose a method that exploits structured parsing information to detect claims without resorting to contextual information, and yet achieve a performance comparable to that of state-of-the-art methods that heavily rely on the context."}
{"_id": "49e1066b2e61c6ef5b935355cd9b8a0283963288", "title": "Identifying Appropriate Support for Propositions in Online User Comments", "text": "The ability to analyze the adequacy of supporting information is necessary for determining the strength of an argument.1 This is especially the case for online user comments, which often consist of arguments lacking proper substantiation and reasoning. Thus, we develop a framework for automatically classifying each proposition as UNVERIFIABLE, VERIFIABLE NONEXPERIENTIAL, or VERIFIABLE EXPERIENTIAL2, where the appropriate type of support is reason, evidence, and optional evidence, respectively3. Once the existing support for propositions are identified, this classification can provide an estimate of how adequately the arguments have been supported. We build a goldstandard dataset of 9,476 sentences and clauses from 1,047 comments submitted to an eRulemaking platform and find that Support Vector Machine (SVM) classifiers trained with n-grams and additional features capturing the verifiability and experientiality exhibit statistically significant improvement over the unigram baseline, achieving a macro-averaged F1 of 68.99%."}
{"_id": "652a0ac5aea769387ead37225829e7dfea562bdc", "title": "Why do humans reason? Arguments for an argumentative theory.", "text": "Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing, but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found."}
{"_id": "97876c2195ad9c7a4be010d5cb4ba6af3547421c", "title": "Report on a general problem-solving program.", "text": ""}
{"_id": "436094815ec69b668b0895ec4f301c1fd63a8ce6", "title": "Effect of Speech Recognition Errors on Text Understandability for People who are Deaf or Hard of Hearing", "text": "Recent advancements in the accuracy of Automated Speech Recognition (ASR) technologies have made them a potential candidate for the task of captioning. However, the presence of errors in the output may present challenges in their use in a fully automatic system. In this research, we are looking more closely into the impact of different inaccurate transcriptions from the ASR system on the understandability of captions for Deaf or Hard-of-Hearing (DHH) individuals. Through a user study with 30 DHH users, we studied the effect of the presence of an error in a text on its understandability for DHH users. We also investigated different prediction models to capture this relation accurately. Among other models, our random forest based model provided the best mean accuracy of 62.04% on the task. Further, we plan to improve this model with more data and use it to advance our investigation on ASR technologies to improve ASR based captioning for DHH users."}
{"_id": "6c165e30b7621cbbfc088ef0bd813330a5b1450c", "title": "IoT security: A layered approach for attacks & defenses", "text": "Internet of Things (IoT) has been a massive advancement in the Information and Communication Technology (ICT). It is projected that over 50 billion devices will become part of the IoT in the next few years. Security of the IoT network should be the foremost priority. In this paper, we evaluate the security challenges in the four layers of the IoT architecture and their solutions proposed from 2010 to 2016. Furthermore, important security technologies like encryption are also analyzed in the IoT context. Finally, we discuss countermeasures of the security attacks on different layers of IoT and highlight the future research directions within the IoT architecture."}
{"_id": "7925d49dfae7e062d6cf39416a0c3105dd2414c6", "title": "Foldio: Digital Fabrication of Interactive and Shape-Changing Objects With Foldable Printed Electronics", "text": "Foldios are foldable interactive objects with embedded input sensing and output capabilities. Foldios combine the advantages of folding for thin, lightweight and shape-changing objects with the strengths of thin-film printed electronics for embedded sensing and output. To enable designers and end-users to create highly custom interactive foldable objects, we contribute a new design and fabrication approach. It makes it possible to design the foldable object in a standard 3D environment and to easily add interactive high-level controls, eliminating the need to manually design a fold pattern and low-level circuits for printed electronics. Second, we contribute a set of printable user interface controls for touch input and display output on folded objects. Moreover, we contribute controls for sensing and actuation of shape-changeable objects. We demonstrate the versatility of the approach with a variety of interactive objects that have been fabricated with this framework."}
{"_id": "1e1c3c3f1ad33e1424584ba7b2f5b6681b842dce", "title": "Empirical Guidance on Scatterplot and Dimension Reduction Technique Choices", "text": "To verify cluster separation in high-dimensional data, analysts often reduce the data with a dimension reduction (DR) technique, and then visualize it with 2D Scatterplots, interactive 3D Scatterplots, or Scatterplot Matrices (SPLOMs). With the goal of providing guidance between these visual encoding choices, we conducted an empirical data study in which two human coders manually inspected a broad set of 816 scatterplots derived from 75 datasets, 4 DR techniques, and the 3 previously mentioned scatterplot techniques. Each coder scored all color-coded classes in each scatterplot in terms of their separability from other classes. We analyze the resulting quantitative data with a heatmap approach, and qualitatively discuss interesting scatterplot examples. Our findings reveal that 2D scatterplots are often 'good enough', that is, neither SPLOM nor interactive 3D adds notably more cluster separability with the chosen DR technique. If 2D is not good enough, the most promising approach is to use an alternative DR technique in 2D. Beyond that, SPLOM occasionally adds additional value, and interactive 3D rarely helps but often hurts in terms of poorer class separation and usability. We summarize these results as a workflow model and implications for design. Our results offer guidance to analysts during the DR exploration process."}
{"_id": "c8b16a237b5f46b8ad6de013d140ddba41fff614", "title": "Genetic analysis of host resistance: Toll-like receptor signaling and immunity at large.", "text": "Classical genetic methods, driven by phenotype rather than hypotheses, generally permit the identification of all proteins that serve nonredundant functions in a defined biological process. Long before this goal is achieved, and sometimes at the very outset, genetics may cut to the heart of a biological puzzle. So it was in the field of mammalian innate immunity. The positional cloning of a spontaneous mutation that caused lipopolysaccharide resistance and susceptibility to Gram-negative infection led directly to the understanding that Toll-like receptors (TLRs) are essential sensors of microbial infection. Other mutations, induced by the random germ line mutagen ENU (N-ethyl-N-nitrosourea), have disclosed key molecules in the TLR signaling pathways and helped us to construct a reasonably sophisticated portrait of the afferent innate immune response. A still broader genetic screen--one that detects all mutations that compromise survival during infection--is permitting fresh insight into the number and types of proteins that mammals use to defend themselves against microbes."}
{"_id": "259c25242db4a0dc1e1b5e61fd059f8949bdb79d", "title": "Parallel geometric algorithms for multi-core computers", "text": "Computers with multiple processor cores using shared memory are now ubiquitous. In this paper, we present several parallel geometric algorithms that specifically target this environment, with the goal of exploiting the additional computing power. The d-dimensional algorithms we describe are (a) spatial sorting of points, as is typically used for preprocessing before using incremental algorithms, (b) kd-tree construction, (c) axis-aligned box intersection computation, and finally (d) bulk insertion of points in Delaunay triangulations for mesh generation algorithms or simply computing Delaunay triangulations. We show experimental results for these algorithms in 3D, using our implementations based on the Computational Geometry Algorithms Library (CGAL, http://www.cgal.org/). This work is a step towards what we hope will become a parallel mode for CGAL, where algorithms automatically use the available parallel resources without requiring significant user intervention."}
{"_id": "a9116f261c1b6ca543cba3ee95f846ef3934efad", "title": "Facial Component-Landmark Detection With Weakly-Supervised LR-CNN", "text": "In this paper, we propose a weakly supervised landmark-region-based convolutional neural network (LR-CNN) framework to detect facial component and landmark simultaneously. Most of the existing course-to-fine facial detectors fail to detect landmark accurately without lots of fully labeled data, which are costly to obtain. We can handle the task with a small amount of finely labeled data. First, deep convolutional generative adversarial networks are utilized to generate training samples with weak labels, as data preparation. Then, through weakly supervised learning, our LR-CNN model can be trained effectively with a small amount of finely labeled data and a large amount of generated weakly labeled data. Notably, our approach can handle the situation when large occlusion areas occur, as we localize visible facial components before predicting corresponding landmarks. Detecting unblocked components first helps us to focus on the informative area, resulting in a better performance. Additionally, to improve the performance of the above tasks, we design two models as follows: 1) we add AnchorAlign in the region proposal networks to accurately localize components and 2) we propose a two-branch model consisting classification branch and regression branch to detect landmark. Extensive evaluations on benchmark datasets indicate that our proposed approach is able to complete the multi-task facial detection and outperforms the state-of-the-art facial component and landmark detection algorithms."}
{"_id": "089e7c81521f43c5f4ae0ec967d668bc9ea73db7", "title": "On-Line Fingerprint Verification", "text": "Fingerprint verification is one of the most reliable personal identification methods. However, manual fingerprint verification is so tedious, time-consuming, and expensive that it is incapable of meeting today\u2019s increasing performance requirements. An automatic fingerprint identification system (AFIS) is widely needed. It plays a very important role in forensic and civilian applications such as criminal identification, access control, and ATM card verification. This paper describes the design and implementation of an on-line fingerprint verification system which operates in two stages: minutia extraction and minutia matching. An improved version of the minutia extraction algorithm proposed by Ratha et al., which is much faster and more reliable, is implemented for extracting features from an input fingerprint image captured with an on-line inkless scanner. For minutia matching, an alignment-based elastic matching algorithm has been developed. This algorithm is capable of finding the correspondences between minutiae in the input image and the stored template without resorting to exhaustive search and has the ability of adaptively compensating for the nonlinear deformations and inexact pose transformations between fingerprints. The system has been tested on two sets of fingerprint images captured with inkless scanners. The verification accuracy is found to be acceptable. Typically, a complete fingerprint verification procedure takes, on an average, about eight seconds on a SPARC 20 workstation. These experimental results show that our system meets the response time requirements of on-line verification with high accuracy."}
{"_id": "16155ac9c52a11f732a020adaad457c36655969c", "title": "Improving Acoustic Models in TORGO Dysarthric Speech Database", "text": "Assistive speech-based technologies can improve the quality of life for people affected with dysarthria, a motor speech disorder. In this paper, we explore multiple ways to improve Gaussian mixture model and deep neural network (DNN) based hidden Markov model (HMM) automatic speech recognition systems for TORGO dysarthric speech database. This work shows significant improvements over the previous attempts in building such systems in TORGO. We trained speaker-specific acoustic models by tuning various acoustic model parameters, using speaker normalized cepstral features and building complex DNN-HMM models with dropout and sequence-discrimination strategies. The DNN-HMM models for severe and severe-moderate dysarthric speakers were further improved by leveraging specific information from dysarthric speech to DNN models trained on audio files from both dysarthric and normal speech, using generalized distillation framework. To the best of our knowledge, this paper presents the best recognition accuracies for TORGO database till date."}
{"_id": "ac4a2337afdf63e9b3480ce9025736d71f8cec1a", "title": "A wearable system to assist walking of Parkinson s disease patients.", "text": "BACKGROUND\nAbout 50% of the patients with advanced Parkinson's disease (PD) suffer from freezing of gait (FOG), which is a sudden and transient inability to walk. It often causes falls, interferes with daily activities and significantly impairs quality of life. Because gait deficits in PD patients are often resistant to pharmacologic treatment, effective non-pharmacologic treatments are of special interest.\n\n\nOBJECTIVES\nThe goal of our study is to evaluate the concept of a wearable device that can obtain real-time gait data, processes them and provides assistance based on pre-determined specifications.\n\n\nMETHODS\nWe developed a real-time wearable FOG detection system that automatically provides a cueing sound when FOG is detected and which stays until the subject resumes walking. We evaluated our wearable assistive technology in a study with 10 PD patients. Over eight hours of data was recorded and a questionnaire was filled out by each patient.\n\n\nRESULTS\nTwo hundred and thirty-seven FOG events have been identified by professional physiotherapists in a post-hoc video analysis. The device detected the FOG events online with a sensitivity of 73.1% and a specificity of 81.6% on a 0.5 sec frame-based evaluation.\n\n\nCONCLUSIONS\nWith this study we show that online assistive feedback for PD patients is possible. We present and discuss the patients' and physiotherapists' perspectives on wearability and performance of the wearable assistant as well as their gait performance when using the assistant and point out the next research steps. Our results demonstrate the benefit of such a context-aware system and motivate further studies."}
{"_id": "eafcdab44124661cdeba5997d4e2ca3cf5a7627e", "title": "Acne and Rosacea", "text": "Acne, one of the most common skin diseases, affects approximately 85% of the adolescent population, and occurs most prominently at skin sites with a high density of sebaceous glands such as the face, back, and chest. Although often considered a disease of teenagers, acne is occurring at an increasingly early age. Rosacea is a chronic facial inflammatory dermatosis characterized by flushing (or transient facial erythema), persistent central facial erythema, inflammatory papules/pustules, and telangiectasia. Both acne and rosacea have a multifactorial pathology that is incompletely understood. Increased sebum production, keratinocyte hyper-proliferation, inflammation, and altered bacterial colonization with Propionibacterium acnes are considered to be the underlying disease mechanisms in acne, while the multifactorial pathology of rosacea is thought to involve both vasoactive and neurocutaneous mechanisms. Several advances have taken place in the past decade in the research field of acne and rosacea, encompassing pathogenesis and epidemiology, as well as the development of new therapeutic interventions. In this article, we provide an overview of current perspectives on the pathogenesis and treatment of acne and rosacea, including a summary of findings from recent landmark pathophysiology studies considered to have important implications for future clinical practice. The advancement of our knowledge of the different pathways and regulatory mechanisms underlying acne and rosacea is thought to lead to further advances in the therapeutic pipeline for both conditions, ultimately providing a greater array of treatments to address gaps in current management practices."}
{"_id": "59aa6691d7122074cc069e6d9952a2e83e428af5", "title": "Improving Unsupervised Defect Segmentation by Applying Structural Similarity to Autoencoders", "text": "Convolutional autoencoders have emerged as popular methods for unsupervised defect segmentation on image data. Most commonly, this task is performed by thresholding a per-pixel reconstruction error based on an `-distance. This procedure, however, leads to large residuals whenever the reconstruction includes slight localization inaccuracies around edges. It also fails to reveal defective regions that have been visually altered when intensity values stay roughly consistent. We show that these problems prevent these approaches from being applied to complex real-world scenarios and that they cannot be easily avoided by employing more elaborate architectures such as variational or feature matching autoencoders. We propose to use a perceptual loss function based on structural similarity that examines inter-dependencies between local image regions, taking into account luminance, contrast, and structural information, instead of simply comparing single pixel values. It achieves significant performance gains on a challenging real-world dataset of nanofibrous materials and a novel dataset of two woven fabrics over state-of-the-art approaches for unsupervised defect segmentation that use per-pixel reconstruction"}
{"_id": "96a8de3d1c93835515bd8c76aa5257f41e6420cf", "title": "Cellulose: fascinating biopolymer and sustainable raw material.", "text": "As the most important skeletal component in plants, the polysaccharide cellulose is an almost inexhaustible polymeric raw material with fascinating structure and properties. Formed by the repeated connection of D-glucose building blocks, the highly functionalized, linear stiff-chain homopolymer is characterized by its hydrophilicity, chirality, biodegradability, broad chemical modifying capacity, and its formation of versatile semicrystalline fiber morphologies. In view of the considerable increase in interdisciplinary cellulose research and product development over the past decade worldwide, this paper assembles the current knowledge in the structure and chemistry of cellulose, and in the development of innovative cellulose esters and ethers for coatings, films, membranes, building materials, drilling techniques, pharmaceuticals, and foodstuffs. New frontiers, including environmentally friendly cellulose fiber technologies, bacterial cellulose biomaterials, and in-vitro syntheses of cellulose are highlighted together with future aims, strategies, and perspectives of cellulose research and its applications."}
{"_id": "1a345b4ca7acb172c977f5a4623138ce83e485b1", "title": "Virtual Dermatologist: An application of 3D modeling to tele-healthcare", "text": "In this paper, we present preliminary results towards the development of the Virtual Dermatologist: A 3D image and tactile database for virtual examination of dermatology patients. This system, which can be installed and operated by non-dermatologists in remotes areas where access to a dermatologist is difficult, will enhance and broaden the application of tele-healthcare, and it will greatly facilitate the surveillance and consequent diagnosis of various skin diseases. Unlike other systems that monitor the progress of skin diseases using qualitative data on simple baseline (2D) photography, the proposed system will also allow for the quantitative assessment of the progress of the disease over time (e.g. thickness, size, roughness, etc). In fact, the 3D model created by the proposed system will let the dermatologist perform dermatoscopic-like examinations over specially annotated areas of the 3D model of the patient's body (i.e. higher definition areas of the 3D model). As part of its future development, the system will also allow the dermatologist to virtually touch and feel the lesion through a haptic interface. In its current form, the system can detect skin lesions smaller than 1mm, as we demonstrate in the result section."}
{"_id": "dfa4765a2cd3e8910ef6e56f0b40e70b4881d56a", "title": "A tool-supported compliance process for software systems", "text": "Laws and regulations impact the design of software systems, as they may introduce additional requirements and possible conflicts with pre-existing requirements. We propose a systematic, tool-supported process for establishing compliance of a software system with a given law. The process elicits new requirements from the law, compares them with existing ones and manages conflicts, exploiting a set of heuristics, partially supported by a tool. We illustrate our proposal through an exploratory study using the Italian Privacy Law. We also present results of a preliminary empirical study that indicates that adoption of the process improves compliance analysis for a simple compliance scenario."}
{"_id": "9fc1d0e4da751a09b49f5b0f7e61eb71d587c20f", "title": "Adapting microsoft SQL server for cloud computing", "text": "Cloud SQL Server is a relational database system designed to scale-out to cloud computing workloads. It uses Microsoft SQL Server as its core. To scale out, it uses a partitioned database on a shared-nothing system architecture. Transactions are constrained to execute on one partition, to avoid the need for two-phase commit. The database is replicated for high availability using a custom primary-copy replication scheme. It currently serves as the storage engine for Microsoft's Exchange Hosted Archive and SQL Azure."}
{"_id": "0e410a7baeae7f1c8676a6c72898650d1f144ba5", "title": "An end-to-end approach to host mobility", "text": "We present the design and implementation of an end-to-end architecture for Internet host mobility using dynamic updates to the Domain Name System (DNS) to track host location. Existing TCP connections are retained using secure and efficient connection migration, enabling established connections to seamlessly negotiate a change in endpoint IP addresses without the need for a third party. Our architecture is secure\u2014name updates are effected via the secure DNS update protocol, while TCP connection migration uses a novel set of Migrate options\u2014and provides a pure end-system alternative to routing-based approaches such as Mobile IP.\nMobile IP was designed under the principle that fixed Internet hosts and applications were to remain unmodified and only the underlying IP substrate should change. Our architecture requires no changes to the unicast IP substrate, instead modifying transport protocols and applications at the end hosts. We argue that this is not a hindrance to deployment; rather, in a significant number of cases, it allows for an easier deployment path than Mobile IP, while simultaneously giving better performance. We compare and contrast the strengths of end-to-end and network-layer mobility schemes, and argue that end-to-end schemes are better suited to many common mobile applications. Our performance experiments show that hand-off times are governed by TCP migrate latencies, and are on the order of a round-trip time of the communicating peers."}
{"_id": "00da506d8b50ba47313feb642c0caef2352080bd", "title": "Ocular Pain and Impending Blindness During Facial Cosmetic Injections: Is Your Office Prepared?", "text": "Soft tissue filler injections are the second most common non-surgical procedure performed by the plastic surgeon. Embolization of intravascular material after facial injection is a rare but terrifying outcome due to the high likelihood of long-term sequela such as blindness and cerebrovascular accident. The literature is replete with examples of permanent blindness caused by injection with autologous fat, soft tissue fillers such as hyaluronic acid, PLLA, calcium hydroxyl-apatite, and even corticosteroid suspensions. However, missing from the discussion is an effective treatment algorithm that can be quickly and safely followed by injecting physicians in the case of an intravascular injection with impending blindness. In this report, we present the case of a 64-year-old woman who suffered from blindness and hemiparesis after facial cosmetic injections performed by a family physician. We use this case to create awareness that this complication has become more common as the number of injectors and patients seeking these treatments have increased exponentially over the past few years. We share in this study our experience with the incorporation of a \u201cblindness safety kit\u201d in each of our offices to promptly initiate treatment in someone with embolization and impending blindness. The kit contains a step-by-step protocol to follow in the event of arterial embolization of filler material associated with ocular pain and impending loss of vision. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 ."}
{"_id": "577f5fcadbb97d73c1a41a4fcb17873ad959319c", "title": "CATS: Collection and Analysis of Tweets Made Simple", "text": "Twitter presents an unparalleled opportunity for researchers from various \u00ef\u00ac\u0081elds to gather valuable and genuine textual data from millions of people. However, the collection pro-cess, as well as the analysis of these data require different kinds of skills (e.g. programing, data mining) which can be an obstacle for people who do not have this background. In this paper we present CATS, an open source, scalable, Web application designed to support researchers who want to carry out studies based on tweets. The purpose of CATS is twofold: (i) allow people to collect tweets (ii) enable them to analyze these tweets thanks to ef\u00ef\u00ac\u0081cient tools (e.g. event detection, named-entity recognition, topic modeling, word-clouds). What is more, CATS relies on a distributed imple-mentation which can deal with massive data streams."}
{"_id": "c7af905452b3d70a7da377c2e31ccf364e8dbed8", "title": "Multiobjective optimization and multiple constraint handling with evolutionary algorithms. I. A unified formulation", "text": "In optimization, multiple objectives and constraints cannot be handled independently of the underlying optimizer. Requirements such as continuity and di erentiability of the cost surface add yet another con icting element to the decision process. While \\better\" solutions should be rated higher than \\worse\" ones, the resulting cost landscape must also comply with such requirements. Evolutionary algorithms (EAs), which have found application in many areas not amenable to optimization by other methods, possess many characteristics desirable in a multiobjective optimizer, most notably the concerted handling of multiple candidate solutions. However, EAs are essentially unconstrained search techniques which require the assignment of a scalar measure of quality, or tness, to such candidate solutions. After reviewing current evolutionary approaches to multiobjective and constrained optimization, the paper proposes that tness assignment be interpreted as, or at least related to, a multicriterion decision process. A suitable decision making framework based on goals and priorities is subsequently formulated in terms of a relational operator, characterized, and shown to encompass a number of simpler decision strategies. Finally, the ranking of an arbitrary number of candidates is considered. The e ect of preference changes on the cost surface seen by an EA is illustrated graphically for a simple problem. The paper concludes with the formulation of a multiobjective genetic algorithm based on the proposed decision strategy. Niche formation techniques are used to promote diversity among preferable candidates, and progressive articulation of preferences is shown to be possible as long as the genetic algorithm can recover from abrupt changes in the cost landscape."}
{"_id": "0b099066706cb997feb7542d4bf502c6be38e755", "title": "Model-Driven Design for the Visual Analysis of Heterogeneous Data", "text": "As heterogeneous data from different sources are being increasingly linked, it becomes difficult for users to understand how the data are connected, to identify what means are suitable to analyze a given data set, or to find out how to proceed for a given analysis task. We target this challenge with a new model-driven design process that effectively codesigns aspects of data, view, analytics, and tasks. We achieve this by using the workflow of the analysis task as a trajectory through data, interactive views, and analytical processes. The benefits for the analysis session go well beyond the pure selection of appropriate data sets and range from providing orientation or even guidance along a preferred analysis path to a potential overall speedup, allowing data to be fetched ahead of time. We illustrate the design process for a biomedical use case that aims at determining a treatment plan for cancer patients from the visual analysis of a large, heterogeneous clinical data pool. As an example for how to apply the comprehensive design approach, we present Stack'n'flip, a sample implementation which tightly integrates visualizations of the actual data with a map of available data sets, views, and tasks, thus capturing and communicating the analytical workflow through the required data sets."}
{"_id": "06bd4d2d21624c7713d7f10ccb7df61bf6b9ee71", "title": "Cache-oblivious streaming B-trees", "text": "A streaming B-tree is a dictionary that efficiently implements insertions and range queries. We present two cache-oblivious streaming B-trees, the shuttle tree, and the cache-oblivious lookahead array (COLA).\n For block-transfer size B and on N elements, the shuttle tree implements searches in optimal O(log B+1N) transfers, range queries of L successive elements in optimal O(log B+1N +L/B) transfers, and insertions in O((log B+1N)/B\u0398(1/(log log B)2)+(log2N)/B) transfers, which is an asymptotic speedup over traditional B-trees if B \u2265 (log N)1+c log log log2 N for any constant c >1.\n A COLA implements searches in O(log N) transfers, range queries in O(log N + L/B) transfers, and insertions in amortized O((log N)/B) transfers, matching the bounds for a (cache-aware) buffered repository tree. A partially deamortized COLA matches these bounds but reduces the worst-case insertion cost to O(log N) if memory size M = \u03a9(log N). We also present a cache-aware version of the COLA, the lookahead array, which achieves the same bounds as Brodal and Fagerberg's (cache-aware) B\u03b5-tree.\n We compare our COLA implementation to a traditional B-tree. Our COLA implementation runs 790 times faster for random inser-tions, 3.1 times slower for insertions of sorted data, and 3.5 times slower for searches."}
{"_id": "d09bdfbf43bf409bc3bce436ba7a5374456b3c74", "title": "Dynamic Behaviour of an Electronically Commutated ( Brushless DC ) Motor Drive with Back-emf Sensing", "text": "Conventionally, BLDC motors are commutated in six-step pattern with commutation controlled by position sensors. To reduce cost and complexity of the drive system, sensorless drive is preferred. The existing sensorless control scheme with the conventional back EMF sensing based on motor neutral voltage for BLDC has certain drawbacks, which limit its applications. This paper presents the dynamic behaviour of an analytical and circuit model of a Brushless DC (BLDC) motors with back emf sensing. The circuit model was simulated using LTspice and the results obtained were compared with the experimental results. The value of the motor constant and the back emf measured from the experiment agreed with the simulated model. The starting behaviour of the motor, changing of load torque when current are varied and disturbance of sensing method at peak load shows that the dynamic behaviour results of the experiment obtained from oscilloscope are similar to the simulated value."}
{"_id": "415b85c2f3650ac233399a6f147763055475126d", "title": "Quasi-Cyclic LDPC Codes: Influence of Proto- and Tanner-Graph Structure on Minimum Hamming Distance Upper Bounds", "text": "Quasi-cyclic (QC) low-density parity-check (LDPC) codes are an important instance of proto-graph-based LDPC codes. In this paper we present upper bounds on the minimum Hamming distance of QC LDPC codes and study how these upper bounds depend on graph structure parameters (like variable degrees, check node degrees, girth) of the Tanner graph and of the underlying proto-graph. Moreover, for several classes of proto-graphs we present explicit QC LDPC code constructions that achieve (or come close to) the respective minimum Hamming distance upper bounds. Because of the tight algebraic connection between QC codes and convolutional codes, we can state similar results for the free Hamming distance of convolutional codes. In fact, some QC code statements are established by first proving the corresponding convolutional code statements and then using a result by Tanner that says that the minimum Hamming distance of a QC code is upper bounded by the free Hamming distance of the convolutional code that is obtained by \u201cunwrapping\u201d the QC code."}
{"_id": "0af8c168f4423535773afea201c05a9e63ee9515", "title": "Piranha: a scalable architecture based on single-chip multiprocessing", "text": "The microprocessor industry is currently struggling with higher development costs and longer design times that arise from exceedingly complex processors that are pushing the limits of instruction-level parallelism. Meanwhile, such designs are especially ill suited for important commercial applications, such as on-line transaction processing (OLTP), which suffer from large memory stall times and exhibit little instruction-level parallelism. Given that commercial applications constitute by far the most important market for high-performance servers, the above trends emphasize the need to consider alternative processor designs that specifically target such workloads. The abundance of explicit thread-level parallelism in commercial workloads, along with advances in semiconductor integration density, identify chip multiprocessing (CMP) as potentially the most promising approach for designing processors targeted at commercial servers.\n This paper describes the Piranha system, a research prototype being developed at Compaq that aggressively exploits chip multi-processing by integrating eight simple Alpha processor cores along with a two-level cache hierarchy onto a single chip. Piranha also integrates further on-chip functionality to allow for scalable multiprocessor configurations to be built in a glueless and modular fashion. The use of simple processor cores combined with an industry-standard ASIC design methodology allow us to complete our prototype within a short time-frame, with a team size and investment that are an order of magnitude smaller than that of a commercial microprocessor. Our detailed simulation results show that while each Piranha processor core is substantially slower than an aggressive next-generation processor, the integration of eight cores onto a single chip allows Piranha to outperform next-generation processors by up to 2.9 times (on a per chip basis) on important workloads such as OLTP. This performance advantage can approach a factor of five by using full-custom instead of ASIC logic. In addition to exploiting chip multiprocessing, the Piranha prototype incorporates several other unique design choices including a shared second-level cache with no inclusion, a highly optimized cache coherence protocol, and a novel I/O architecture."}
{"_id": "20948c07477fe449dc3da2f06b8a68b3e76e2b08", "title": "Short-Circuit Detection for Electrolytic Processes Employing Optibar Intercell Bars", "text": "This paper presents a method to detect metallurgical short circuits suitable for Optibar intercell bars in copper electrowinning and electrorefining processes. One of the primary achievements of this bar is to limit short-circuit currents to a maximum of 1.5 p.u. of the actual process current. However, low-current short circuits are more difficult to detect. Thus, conventional short-circuit detection instruments like gaussmeters and infrared cameras become ineffective. To overcome this problem, the proposed method is based on detecting the voltage drop across anode-cathode pairs. The method does not affect the operation of the process and does not require modifications of the industrial plant. In order to verify the performance of this proposal, experimental measurements done over a period of four months at a copper refinery are presented. A 100% success rate was obtained."}
{"_id": "6afe915d585ee9471c39efc7de245ec9db4072cb", "title": "Rating Image Aesthetics Using Deep Learning", "text": "This paper investigates unified feature learning and classifier training approaches for image aesthetics assessment . Existing methods built upon handcrafted or generic image features and developed machine learning and statistical modeling techniques utilizing training examples. We adopt a novel deep neural network approach to allow unified feature learning and classifier training to estimate image aesthetics. In particular, we develop a double-column deep convolutional neural network to support heterogeneous inputs, i.e., global and local views, in order to capture both global and local characteristics of images . In addition, we employ the style and semantic attributes of images to further boost the aesthetics categorization performance . Experimental results show that our approach produces significantly better results than the earlier reported results on the AVA dataset for both the generic image aesthetics and content -based image aesthetics. Moreover, we introduce a 1.5-million image dataset (IAD) for image aesthetics assessment and we further boost the performance on the AVA test set by training the proposed deep neural networks on the IAD dataset."}
{"_id": "99b4ec66e2c732e4127e13b0ff2d90c80e31be7d", "title": "Vehicles Capable of Dynamic Vision", "text": "A survey is given on two decades of developments in the field, encompassing an increase in computing power by four orders of magnitude. The '4-D approach' integrating expectation-based methods from systems dynamics and control engineering with methods from AI has allowed to create vehicles with unprecedented capabilities in the technical realm: Autonomous road vehicle guidance in public traffic on freeways at speeds beyond 130 km/h, on-board-autonomous landing approaches of aircraft, and landmark navigation for AGV's, for road vehicles including turn-offs onto cross-roads, and for helicopters in low-level flight (real-time, hardware-in-the-loop simulations in the latter case)."}
{"_id": "3ecc4821d55c0e528690777be3588fc9cf023882", "title": "DeLS-3D: Deep Localization and Segmentation with a 3D Semantic Map", "text": "For applications such as augmented reality, autonomous driving, self-localization/camera pose estimation and scene parsing are crucial technologies. In this paper, we propose a unified framework to tackle these two problems simultaneously. The uniqueness of our design is a sensor fusion scheme which integrates camera videos, motion sensors (GPS/IMU), and a 3D semantic map in order to achieve robustness and efficiency of the system. Specifically, we first have an initial coarse camera pose obtained from consumer-grade GPS/IMU, based on which a label map can be rendered from the 3D semantic map. Then, the rendered label map and the RGB image are jointly fed into a pose CNN, yielding a corrected camera pose. In addition, to incorporate temporal information, a multi-layer recurrent neural network (RNN) is further deployed improve the pose accuracy. Finally, based on the pose from RNN, we render a new label map, which is fed together with the RGB image into a segment CNN which produces perpixel semantic label. In order to validate our approach, we build a dataset with registered 3D point clouds and video camera images. Both the point clouds and the images are semantically-labeled. Each video frame has ground truth pose from highly accurate motion sensors. We show that practically, pose estimation solely relying on images like PoseNet [25] may fail due to street view confusion, and it is important to fuse multiple sensors. Finally, various ablation studies are performed, which demonstrate the effectiveness of the proposed system. In particular, we show that scene parsing and pose estimation are mutually beneficial to achieve a more robust and accurate system."}
{"_id": "440099b3dfff6d553b237e14985ee558b39d57dd", "title": "The learning curve in microtia surgery.", "text": "Reconstruction of the auricle is known to be complex. Our objective was to evaluate the improvement of the outcome of the lobulus-type microtia reconstruction. Patient satisfaction was also evaluated. There are no previous reports of the learning process in this field. Postoperative photographs of 51 microtia reconstructions were assessed and rated by a panel made up of six surgeons. The ratings were gathered to generate learning curves. Twenty-two patients assessed the outlook of their reconstructed ears, and the results were analyzed as a self-assessment group. The reliability of the rating by a panel was tested by intraclass correlations. There is a highly significant increasing trend in learning ( P = 0.000001). This trend is not constantly upward, and the steady state was not reached during the study. In the self-assessment group, females were significantly more critical than males ( P = 0.014). Intraclass correlation for six panel members was 0.90, and the rating was considered reliable. Thus, a long and gentle learning curve does exist in microtia reconstruction. To secure good quality and continuity, centralization of the operations and trainee arrangements are highly advisable. Outcomes of plastic surgery can reliably be rated by an evaluation panel."}
{"_id": "f96bdd1e2a940030fb0a89abbe6c69b8d7f6f0c1", "title": "Comparison of human and computer performance across face recognition experiments", "text": "a r t i c l e i n f o Since 2005, human and computer performance has been systematically compared as part of face recognition competitions, with results being reported for both still and video imagery. The key results from these competitions are reviewed. To analyze performance across studies, the cross-modal performance analysis (CMPA) framework is introduced. The CMPA framework is applied to experiments that were part of face a recognition competition. The analysis shows that for matching frontal faces in still images, algorithms are consistently superior to humans. For video and difficult still face pairs, humans are superior. Finally, based on the CMPA framework and a face performance index, we outline a challenge problem for developing algorithms that are superior to humans for the general face recognition problem."}
{"_id": "6a74e6b9bbbb093ebe928bc5c233953d74813392", "title": "Facilitating relational governance through service level agreements in IT outsourcing: An application of the commitment-trust theory", "text": "Article history: Received 18 June 2007 Received in revised form 9 June 2008 Accepted 20 June 2008 Available online 25 June 2008 Firms increasingly rely on outsourcing for strategic IT decisions, and themany sophisticated forms of outsourcing require significant management attention to ensure their success. Two forms of interorganizational governance\u2013formal control and relational\u2013have been used to examine the management of IT outsourcing relationships. Contrary to the conventional substitution view, recent studies have found that these two governance modes are complementary; however, the dynamics of their interactions remain unexplored. Based on the commitment\u2013trust theory, this paper focuses on how the formal controlmechanism can influence the relational governance in an outsourcing engagement. Using service level agreements (SLAs) as a proxy for formal control, this studyfinds that eleven contractual elements, characterized as foundation, governance, and change management variables in an SLA, are positively related to the trust and relationship commitment among the parties. Trust and commitment, in turn, positively influence relational outcomes that we theorize would contribute to outsourcing success. Both research and practical implications of the results are discussed. \u00a9 2008 Elsevier B.V. All rights reserved."}
{"_id": "633614f969b869388508c636a322eba35fe1f280", "title": "Plan 9 , A Distributed System", "text": "Plan 9 is a computing environment physically distributed across many machines. The distribution itself is transparent to most programs giving both users and administrators wide latitude in configuring the topology of the environment. Two properties make this possible: a per process group name space and uniform access to all resources by representing them as files."}
{"_id": "fe400b814cfea5538887c92040f1ab0d6fb45bfe", "title": "Measuring the Diversity of Automatic Image Descriptions", "text": "Automatic image description systems typically produce generic sentences that only make use of a small subset of the vocabulary available to them. In this paper, we consider the production of generic descriptions as a lack of diversity in the output, which we quantify using established metrics and two new metrics that frame image description as a word recall task. This framing allows us to evaluate system performance on the head of the vocabulary, as well as on the long tail, where system performance degrades. We use these metrics to examine the diversity of the sentences generated by nine state-of-the-art systems on the MS COCO data set. We find that the systems trained with maximum likelihood objectives produce less diverse output than those trained with additional adversarial objectives. However, the adversarially-trained models only produce more types from the head of the vocabulary and not the tail. Besides vocabulary-based methods, we also look at the compositional capacity of the systems, specifically their ability to create compound nouns and prepositional phrases of different lengths. We conclude that there is still much room for improvement, and offer a toolkit to measure progress towards the goal of generating more diverse image descriptions."}
{"_id": "26e9de9f9675bdd6550e72ed0eb2c25327bf3e19", "title": "Delaunay Meshing of Isosurfaces", "text": "We present an isosurface meshing algorithm, DelIso, based on the Delaunay refinement paradigm. This paradigm has been successfully applied to mesh a variety of domains with guarantees for topology, geometry, mesh gradedness, and triangle shape. A restricted Delaunay tri- angulation, dual of the intersection between the surface and the three dimensional Voronoi diagram, is often the main ingredient in Delaunay refinement. Computing and storing three dimensional Voronoi/Delaunay diagrams become bottlenecks for Delaunay refinement techniques since isosurface computations generally have large input datasets and output meshes. A highlight of our algorithm is that we find a simple way to recover the restricted Delaunay triangulation of the surface without computing the full 3D structure. We employ techniques for efficient ray tracing of isosurfaces to generate surface sample points, and demonstrate the effectiveness of our implementation using a variety of volume datasets."}
{"_id": "c498b2dd59e7e097742f7cdcaed91e1228ec6224", "title": "The Many Faces of Formative Assessment.", "text": "In this research paper we consider formative assessment (FA) and discuss ways in which it has been implemented in four different university courses. We illustrate the different aspects of FA by deconstructing it and then demonstrating effectiveness in improving both teaching and student achievement. It appears that specifically \u201cwhat is done\u201d was less important since there were positive achievement gains in each study. While positive gains were realized with use of technology, gains were also realized with implementation of nontechnology dependent techniques. Further, gains were independent of class size or subject matter."}
{"_id": "ef5f6d6b3a5d3436f1802120f71e765a1ec72c2f", "title": "A review of issues and challenges in designing Iris recognition Systems for noisy imaging environment", "text": "Iris recognition is a challenging task in a noisy imaging environment. Nowadays researcher's primary focus is to develop reliable Iris recognition System that can work in noisy imaging environment and to increase the iris recognition rate on different iris database. But there are major issues involved in designing such systems like occlusion by eyelashes, eyelids, glass frames, off-angle imaging, presence of contact lenses, poor illumination, motion blur, close-up of iris image acquired at a large standoff distance and specular reflections etc. Because of these issues the quality of acquired iris image gets affected. The performance of the iris based recognition system will deteriorate abruptly, when the iris mask is not accurate. This results in lower recognition rate. In this review paper different challenges in designing iris recognition systems for noisy imaging environment are reviewed and methodologies involved in overcoming these issues are discussed. At the end, some measures to improve the accuracy of such systems are suggested."}
{"_id": "2a68c39e3586f87da501bc2a5ae6138469f50613", "title": "Mining Multi-label Data", "text": "A large body of research in supervised learning deals with the analysis of singlelabel data, where training examples are associated with a single label \u03bb from a set of disjoint labels L. However, training examples in several application domains are often associated with a set of labels Y \u2286 L. Such data are called multi-label. Textual data, such as documents and web pages, are frequently annotated with more than a single label. For example, a news article concerning the reactions of the Christian church to the release of the \u201cDa Vinci Code\u201d film can be labeled as both religion and movies. The categorization of textual data is perhaps the dominant multi-label application. Recently, the issue of learning from multi-label data has attracted significant attention from a lot of researchers, motivated from an increasing number of new applications, such as semantic annotation of images [1, 2, 3] and video [4, 5], functional genomics [6, 7, 8, 9, 10], music categorization into emotions [11, 12, 13, 14] and directed marketing [15]. Table 1 presents a variety of applications that are discussed in the literature. This chapter reviews past and recent work on the rapidly evolving research area of multi-label data mining. Section 2 defines the two major tasks in learning from multi-label data and presents a significant number of learning methods. Section 3 discusses dimensionality reduction methods for multi-label data. Sections 4 and 5 discuss two important research challenges, which, if successfully met, can significantly expand the real-world applications of multi-label learning methods: a) exploiting label structure and b) scaling up to domains with large number of labels. Section 6 introduces benchmark multi-label datasets and their statistics, while Section 7 presents the most frequently used evaluation measures for multi-label learn-"}
{"_id": "1d9dece252de9457f504c8e79efe50fda73a2199", "title": "Prediction of central nervous system embryonal tumour outcome based on gene expression", "text": "Embryonal tumours of the central nervous system (CNS) represent a heterogeneous group of tumours about which little is known biologically, and whose diagnosis, on the basis of morphologic appearance alone, is controversial. Medulloblastomas, for example, are the most common malignant brain tumour of childhood, but their pathogenesis is unknown, their relationship to other embryonal CNS tumours is debated, and patients\u2019 response to therapy is difficult to predict. We approached these problems by developing a classification system based on DNA microarray gene expression data derived from 99 patient samples. Here we demonstrate that medulloblastomas are molecularly distinct from other brain tumours including primitive neuroectodermal tumours (PNETs), atypical teratoid/rhabdoid tumours (AT/RTs) and malignant gliomas. Previously unrecognized evidence supporting the derivation of medulloblastomas from cerebellar granule cells through activation of the Sonic Hedgehog (SHH) pathway was also revealed. We show further that the clinical outcome of children with medulloblastomas is highly predictable on the basis of the gene expression profiles of their tumours at diagnosis."}
{"_id": "4dc881bf2fb04ffe71bed7a9e0612cb93a9baccf", "title": "Problems in dealing with missing data and informative censoring in clinical trials", "text": "Acommon problem in clinical trials is the missing data that occurs when patients do not complete the study and drop out without further measurements. Missing data cause the usual statistical analysis of complete or all available data to be subject to bias. There are no universally applicable methods for handling missing data. We recommend the following: (1) Report reasons for dropouts and proportions for each treatment group; (2) Conduct sensitivity analyses to encompass different scenarios of assumptions and discuss consistency or discrepancy among them; (3) Pay attention to minimize the chance of dropouts at the design stage and during trial monitoring; (4) Collect post-dropout data on the primary endpoints, if at all possible; and (5) Consider the dropout event itself an important endpoint in studies with many."}
{"_id": "5b62860a9eb3492c5c2d7fb42fd023cae891df45", "title": "Missing value estimation methods for DNA microarrays", "text": "MOTIVATION\nGene expression microarray experiments can generate data sets with multiple missing expression values. Unfortunately, many algorithms for gene expression analysis require a complete matrix of gene array values as input. For example, methods such as hierarchical clustering and K-means clustering are not robust to missing data, and may lose effectiveness even with a few missing values. Methods for imputing missing data are needed, therefore, to minimize the effect of incomplete data sets on analyses, and to increase the range of data sets to which these algorithms can be applied. In this report, we investigate automated methods for estimating missing data.\n\n\nRESULTS\nWe present a comparative study of several methods for the estimation of missing values in gene microarray data. We implemented and evaluated three methods: a Singular Value Decomposition (SVD) based method (SVDimpute), weighted K-nearest neighbors (KNNimpute), and row average. We evaluated the methods using a variety of parameter settings and over different real data sets, and assessed the robustness of the imputation methods to the amount of missing data over the range of 1--20% missing values. We show that KNNimpute appears to provide a more robust and sensitive method for missing value estimation than SVDimpute, and both SVDimpute and KNNimpute surpass the commonly used row average method (as well as filling missing values with zeros). We report results of the comparative experiments and provide recommendations and tools for accurate estimation of missing microarray data under a variety of conditions."}
{"_id": "125d7bd51c44907e166d82469aa4a7ba1fb9b77f", "title": "Molecular classification of cancer: class discovery and class prediction by gene expression monitoring.", "text": "Although cancer classification has improved over the past 30 years, there has been no general approach for identifying new cancer classes (class discovery) or for assigning tumors to known classes (class prediction). Here, a generic approach to cancer classification based on gene expression monitoring by DNA microarrays is described and applied to human acute leukemias as a test case. A class discovery procedure automatically discovered the distinction between acute myeloid leukemia (AML) and acute lymphoblastic leukemia (ALL) without previous knowledge of these classes. An automatically derived class predictor was able to determine the class of new leukemia cases. The results demonstrate the feasibility of cancer classification based solely on gene expression monitoring and suggest a general strategy for discovering and predicting cancer classes for other types of cancer, independent of previous biological knowledge."}
{"_id": "2dfae0e14aea9cc7fae04aa8662765e6227439ae", "title": "Multiple Imputation for Missing Data: Concepts and New Development", "text": "Multiple imputation provides a useful strategy for dealing with data sets with missing values. Instead of filling in a single value for each missing value, Rubin\u2019s (1987) multiple imputation procedure replaces each missing value with a set of plausible values that represent the uncertainty about the right value to impute. These multiply imputed data sets are then analyzed by using standard procedures for complete data and combining the results from these analyses. No matter which complete-data analysis is used, the process of combining results from different imputed data sets is essentially the same. This results in valid statistical inferences that properly reflect the uncertainty due to missing values. This paper reviews methods for analyzing missing data, including basic concepts and applications of multiple imputation techniques. The paper also presents new SAS R procedures for creating multiple imputations for incomplete multivariate data and for analyzing results from multiply imputed data sets. These procedures are still under development and will be available in experimental form in Release 8.1 of the SAS System. Introduction Most SAS statistical procedures exclude observations with any missing variable values from the analysis. These observations are called incomplete cases. While using only complete cases has its simplicity, you lose information in the incomplete cases. This approach also ignores the possible systematic difference between the complete cases and incomplete cases, and the resulting inference may not be applicable to the population of all cases, especially with a smaller number of complete cases. Some SAS procedures use all the available cases in an analysis, that is, cases with available information. For example, PROC CORR estimates a variable mean by using all cases with nonmissing values on this variable, ignoring the possible missing values in other variables. PROC CORR also estimates a correlation by using all cases with nonmissing values for this pair of variables. This may make better use of the available data, but the resulting correlation matrix may not be positive definite. Another strategy is simple imputation, in which you substitute a value for each missing value. Standard statistical procedures for complete data analysis can then be used with the filled-in data set. For example, each missing value can be imputed from the variable mean of the complete cases, or it can be imputed from the mean conditional on observed values of other variables. This approach treats missing values as if they were known in the complete-data analyses. Single imputation does not reflect the uncertainty about the predictions of the unknown missing values, and the resulting estimated variances of the parameter estimates will be biased toward zero. Instead of filling in a single value for each missing value, a multiple imputation procedure (Rubin 1987) replaces each missing value with a set of plausible values that represent the uncertainty about the right value to impute. The multiply imputed data sets are then analyzed by using standard procedures for complete data and combining the results from these analyses. No matter which complete-data analysis is used, the process of combining results from different data sets is essentially the same. 
Multiple imputation does not attempt to estimate each missing value through simulated values but rather to represent a random sample of the missing values. This process results in valid statistical inferences that properly reflect the uncertainty due to missing values; for example, valid confidence intervals for parameters. Multiple imputation inference involves three distinct phases: The missing data are filled in m times to generate m complete data sets. The m complete data sets are analyzed by using standard procedures. The results from the m complete data sets are combined for the inference. A new SAS/STAT R procedure, PROC MI, is a multiple imputation procedure that creates multiply imputed data sets for incomplete p-dimensional multivariate data. It uses methods that incorporate appropriate variability across the m imputations. Once the m complete data sets are analyzed by using standard procedures, another new procedure, PROC MIANALYZE, can be used to generate valid statistical inferences about these parameters by combining results from the m complete data sets. Statistics and Data Analysis"}
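The combining phase that PROC MIANALYZE automates follows Rubin's (1987) rules, which take only a few lines to state. A minimal sketch in Python (assuming NumPy; the example estimates and variances are made up):

```python
import numpy as np

def pool_rubin(estimates, variances):
    # Rubin's rules for m multiply imputed analyses: the pooled point
    # estimate is the mean of the per-imputation estimates, and the
    # total variance adds within- and between-imputation components.
    q = np.asarray(estimates, dtype=float)  # one estimate per imputed data set
    u = np.asarray(variances, dtype=float)  # its squared standard error
    m = len(q)
    q_bar = q.mean()
    w = u.mean()                 # within-imputation variance
    b = q.var(ddof=1)            # between-imputation variance
    t = w + (1.0 + 1.0 / m) * b  # total variance
    return q_bar, t

# e.g. five imputed regression coefficients and their variances
est, total_var = pool_rubin([1.02, 0.95, 1.10, 0.99, 1.05], [0.04] * 5)
```

The between-imputation term is exactly what single imputation omits, which is why single-imputation variance estimates are biased toward zero.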
{"_id": "4c1b6d34c0c35e41fb0b0e76794f04d1d871d34b", "title": "An Image-based Feature Extraction Approach for Phishing Website Detection", "text": "Phishing website creators and anti-phishing defenders are in an arms race. Cloning a website is fairly easy and can be automated by any junior programmer. Attempting to recognize numerous phishing links posted in the wild e.g. on social media sites or in email is a constant game of escalation. Automated phishing website detection systems need both speed and accuracy to win. We present a new method of detecting phishing websites and a prototype system LEO (Logo Extraction and cOmparison) that implements it. LEO uses image feature recognition to extract \u201cvisual hotspots\u201d of a webpage, and compare these parts with known logo images. LEO can recognize phishing websites that has different layout from the original websites, or logos embedded in images. Comparing to existing visual similaritybased methods, our method has a much wider application range and higher detection accuracy. Our method successfully recognized 24 of 25 random URLs from PhishTank that previously evaded detection of other visual similarity-based methods."}
{"_id": "22c33a890c0bf4fc2a2d354d48ee9e00bffcc9a6", "title": "Clustering based anomalous transaction reporting", "text": "Anti-money laundering (AML) refers to a set of financial and technological controls that aim to combat the entrance of dirty money into financial systems. A robust AML system must be able to automatically detect any unusual/anomalous financial transactions committed by a customer. The paper presents a hybrid anomaly detection approach that employs clustering to establish customers\u2019 normal behaviors and uses statistical techniques to determine deviation of a particular transaction from the corresponding group behavior. The approach implements a variant of Euclidean Adaptive Resonance Theory, termed as TEART, to group customers in different clusters. The paper also suggests an anomaly index, named AICAF, for ranking transactions as anomalous. The approach has been tested on a real data set comprising of 8.2 million transactions and the results suggest that TEART scales well in terms of the partitions obtained when compared to the traditional K-means algorithm. The presented approach marks transactions having high AICAF values as suspicious."}
{"_id": "4cb04f57941ed2a5335cdb82e3db9bdd5079bd87", "title": "Decomposing Adult Age Differences in Working Memory", "text": "Two studies, involving a total of 460 adults between 18 and 87 years of age, were conducted to determine which of several hypothesized processing components was most responsible for age-related declines in working memory functioning. Significant negative correlations between age and measures of working memory (i.e., from -.39 to -.52) were found in both studies, and these relations were substantially attenuated by partialing measures hypothesized to reflect storage capacity, processing efficiency, coordination effectiveness, and simple comparison speed. Because the greatest attenuation of the age relations occurred with measures of simple processing speed, it was suggested that many of the age differences in working memory may be mediated by age-related reductions in the speed of executing elementary operations."}
{"_id": "b97cdc4bd0b021caefe7921c8c637b76f8a8114b", "title": "The Deep Regression Bayesian Network and Its Applications: Probabilistic Deep Learning for Computer Vision", "text": "Deep directed generative models have attracted much attention recently due to their generative modeling nature and powerful data representation ability. In this article, we review different structures of deep directed generative models and the learning and inference algorithms associated with the structures. We focus on a specific structure that consists of layers of Bayesian networks (BNs) due to the property of capturing inherent and rich dependencies among latent variables. The major difficulty of learning and inference with deep directed models with many latent variables is the intractable inference due to the dependencies among the latent variables and the exponential number of latent variable configurations. Current solutions use variational methods, often through an auxiliary network, to approximate the posterior probability inference. In contrast, inference can also be performed directly without using any auxiliary network to maximally preserve the dependencies among the latent variables. Specifically, by exploiting the sparse representation with the latent space, max-max instead of maxsum operation can be used to overcome the exponential number of latent configurations. Furthermore, the max-max operation and augmented coordinate ascent (AugCA) are applied to both supervised and unsupervised learning as well as to various inference. Quantitative evaluations on benchmark data sets of different models are given for both data representation and feature-learning tasks."}
{"_id": "ec3cd5873b32221677df219fb7a06876fdd1de49", "title": "Making working memory work: a meta-analysis of executive-control and working memory training in older adults.", "text": "This meta-analysis examined the effects of process-based executive-function and working memory training (49 articles, 61 independent samples) in older adults (> 60 years). The interventions resulted in significant effects on performance on the trained task and near-transfer tasks; significant results were obtained for the net pretest-to-posttest gain relative to active and passive control groups and for the net effect at posttest relative to active and passive control groups. Far-transfer effects were smaller than near-transfer effects but were significant for the net pretest-to-posttest gain relative to passive control groups and for the net gain at posttest relative to both active and passive control groups. We detected marginally significant differences in training-induced improvements between working memory and executive-function training, but no differences between the training-induced improvements observed in older adults and younger adults, between the benefits associated with adaptive and nonadaptive training, or between the effects in active and passive control conditions. Gains did not vary with total training time."}
{"_id": "5d1730136d23d5f1a6d0fea50a2203d8df6eb3db", "title": "Direct Torque and Indirect Flux Control of Brushless DC Motor", "text": "In this paper, the position-sensorless direct torque and indirect flux control of brushless dc (BLDC) motor with nonsinusoidal back electromotive force (EMF) has been extensively investigated. In the literature, several methods have been proposed for BLDC motor drives to obtain optimum current and torque control with minimum torque pulsations. Most methods are complicated and do not consider the stator flux linkage control, therefore, possible high-speed operations are not feasible. In this study, a novel and simple approach to achieve a low-frequency torque ripple-free direct torque control (DTC) with maximum efficiency based on dq reference frame is presented. The proposed sensorless method closely resembles the conventional DTC scheme used for sinusoidal ac motors such that it controls the torque directly and stator flux amplitude indirectly using d-axis current. This method does not require pulsewidth modulation and proportional plus integral regulators and also permits the regulation of varying signals. Furthermore, to eliminate the low-frequency torque oscillations, two actual and easily available line-to-line back EMF constants ( kba and kca) according to electrical rotor position are obtained offline and converted to the dq frame equivalents using the new line-to-line park transformation. Then, they are set up in the look-up table for torque estimation. The validity and practical applications of the proposed sensorless three-phase conduction DTC of BLDC motor drive scheme are verified through simulations and experimental results."}
{"_id": "b0de31324518f5281c769b8047fae7c2cba0de5c", "title": "Automatic Identification and Classification of Misogynistic Language on Twitter", "text": "Hate speech may take different forms in online social media. Most of the investigations in the literature are focused on detecting abusive language in discussions about ethnicity, religion, gender identity and sexual orientation. In this paper, we address the problem of automatic detection and categorization of misogynous language in online social media. The main contribution of this paper is two-fold: (1) a corpus of misogynous tweets, labelled from different perspective and (2) an exploratory investigations on NLP features and ML models for detecting and classifying misogynistic language."}
{"_id": "017f511734b7094c360ac7854d39f2fa063e8c9c", "title": "Role of IL-33 in inflammation and disease", "text": "Interleukin (IL)-33 is a new member of the IL-1 superfamily of cytokines that is expressed by mainly stromal cells, such as epithelial and endothelial cells, and its expression is upregulated following pro-inflammatory stimulation. IL-33 can function both as a traditional cytokine and as a nuclear factor regulating gene transcription. It is thought to function as an 'alarmin' released following cell necrosis to alerting the immune system to tissue damage or stress. It mediates its biological effects via interaction with the receptors ST2 (IL-1RL1) and IL-1 receptor accessory protein (IL-1RAcP), both of which are widely expressed, particularly by innate immune cells and T helper 2 (Th2) cells. IL-33 strongly induces Th2 cytokine production from these cells and can promote the pathogenesis of Th2-related disease such as asthma, atopic dermatitis and anaphylaxis. However, IL-33 has shown various protective effects in cardiovascular diseases such as atherosclerosis, obesity, type 2 diabetes and cardiac remodeling. Thus, the effects of IL-33 are either pro- or anti-inflammatory depending on the disease and the model. In this review the role of IL-33 in the inflammation of several disease pathologies will be discussed, with particular emphasis on recent advances."}
{"_id": "60686a80b91ce9518428e00dea95dfafadadd93c", "title": "A Dual-Fed Aperture-Coupled Microstrip Antenna With Polarization Diversity", "text": "This communication presents a dual-port reconfigurable square patch antenna with polarization diversity for 2.4 GHz. By controlling the states of four p-i-n diodes on the patch, the polarization of the proposed antenna can be switched among linear polarization (LP), left- or right-hand circular polarization (CP) at each port. The air substrate and aperture-coupled feed structure are employed to simplify the bias circuit of p-i-n diodes. With high isolation and low cross-polarization level in LP modes, both ports can work simultaneously as a dual linearly polarized antenna for polarimetric radars. Different CP waves are obtained at each port, which are suitable for addressing challenges ranging from mobility, adverse weather conditions and non-line-of-sight applications. The antenna has advantages of simple biasing network, easy fabrication and adjustment, which can be widely applied in polarization diversity applications."}
{"_id": "0cea7a2f9e0d156af3ce6ff3ebf9b07fbd98a90d", "title": "Expression Cloning of TMEM16A as a Calcium-Activated Chloride Channel Subunit", "text": "Calcium-activated chloride channels (CaCCs) are major regulators of sensory transduction, epithelial secretion, and smooth muscle contraction. Other crucial roles of CaCCs include action potential generation in Characean algae and prevention of polyspermia in frog egg membrane. None of the known molecular candidates share properties characteristic of most CaCCs in native cells. Using Axolotl oocytes as an expression system, we have identified TMEM16A as the Xenopus oocyte CaCC. The TMEM16 family of \"transmembrane proteins with unknown function\" is conserved among eukaryotes, with family members linked to tracheomalacia (mouse TMEM16A), gnathodiaphyseal dysplasia (human TMEM16E), aberrant X segregation (a Drosophila TMEM16 family member), and increased sodium tolerance (yeast TMEM16). Moreover, mouse TMEM16A and TMEM16B yield CaCCs in Axolotl oocytes and mammalian HEK293 cells and recapitulate the broad CaCC expression. The identification of this new family of ion channels may help the development of CaCC modulators for treating diseases including hypertension and cystic fibrosis."}
{"_id": "6af807ff627e9fff8742e6d9196d8cbe79007f85", "title": "An improved wavelet based shock wave detector", "text": "In this paper, the detection of shock wave that generated by supersonic bullet is considered. A wavelet based multi-scale products method has been widely used for detection. However, the performance of method decreased at low signal-to-noise ratio (SNR). It is noted that the method does not consider the distribution of the signal and noise. Thus we analyze the method under the standard likelihood ratio test in this paper. It is found that the multi-scale product method is made in an assumption that is extremely restricted, just hold for a special noise condition. Based on the analysis, a general condition is considered for the detection. An improved detector under the standard likelihood ratio test is proposed. Monte Carlo simulations is conducted with simulated shock waves under additive white Gaussian noise. The result shows that this new detection algorithm outperforms the conventional detection algorithm."}
{"_id": "dd41010faa2c848729bc79614f1846a2267f1904", "title": "Cascaded Random Forest for Fast Object Detection", "text": "A Random Forest consists of several independent decision trees arranged in a forest. A majority vote over all trees leads to the final decision. In this paper we propose a Random Forest framework which incorporates a cascade structure consisting of several stages together with a bootstrap approach. By introducing the cascade, 99% of the test images can be rejected by the first and second stage with minimal computational effort leading to a massively speeded-up detection framework. Three different cascade voting strategies are implemented and evaluated. Additionally, the training and classification speed-up is analyzed. Several experiments on public available datasets for pedestrian detection, lateral car detection and unconstrained face detection demonstrate the benefit of our contribution."}
{"_id": "f212f69199a4ca3ca7c5b59cd6325d06686c1956", "title": "A density-based cluster validity approach using multi-representatives", "text": "Although the goal of clustering is intuitively compelling and its notion arises in many fields, it is difficult to define a unified approach to address the clustering problem and thus diverse clustering algorithms abound in the research community. These algorithms, under different clustering assumptions, often lead to qualitatively different results. As a consequence the results of clustering algorithms (i.e. data set partitionings) need to be evaluated as regards their validity based on widely accepted criteria. In this paper a cluster validity index, CDbw, is proposed which assesses the compactness and separation of clusters defined by a clustering algorithm. The cluster validity index, given a data set and a set of clustering algorithms, enables: i) the selection of the input parameter values that lead an algorithm to the best possible partitioning of the data set, and ii) the selection of the algorithm that provides the best partitioning of the data set. CDbw handles efficiently arbitrarily shaped clusters by representing each cluster with a number of points rather than by a single representative point. A full implementation and experimental results confirm the reliability of the validity index showing also that its performance compares favourably to that of several others."}
{"_id": "144ea690592d0dce193cbbaac94266a0c3c6f85d", "title": "Multilevel Inverters for Electric Vehicle Applications", "text": "\uf8e7 This paper presents multilevel inverters as an application for all-electric vehicle (EV) and hybrid-electric vehicle (HEV) motor drives. Diode-clamped inverters and cascaded H-bridge inverters, (1) can generate near-sinusoidal voltages with only fundamental frequency switching; (2) have almost no electromagnetic interference (EMI) and commonmode voltage; and (3) make an EV more accessible/safer and open wiring possible for most of an EV\u2019s power system. This paper explores the benefits and discusses control schemes of the cascade inverter for use as an EV motor drive or a parallel HEV drive and the diode-clamped inverter as a series HEV motor drive. Analytical, simulated, and experimental results show the superiority of these multilevel inverters for this new niche."}
{"_id": "d2f4fb27454bb92f63446e4a059f59b35f4c2508", "title": "X-band FMCW radar system with variable chirp duration", "text": "For application in a short range ground based surveillance radar a combination between frequency modulated continuous wave (FMCW) transmit signals and a receive antenna array system is considered in this paper. The target echo signal is directly down converted by the instantaneous transmit frequency. The target range R will be estimated based on the measured frequency shift fB between transmit and receive signal. Due to an extremely short chirp duration Tchirp, the target radial velocity \u03c5\u03c4 has only a very small influence to the measured frequency shift fB. Therefore the radial velocity \u03c5\u03c4 will not be measured inside a single FMCW chirp but in a sequence of chirp signals and inside each individual range gate. Finally, the target azimuth angle is calculated utilizing the receive antenna array and applying a digital beamforming scheme. Furthermore, in order to unambiguously measure even high radial velocities, a variable chirp duration is proposed on a dwell to dwell basis."}
{"_id": "0d11248c42d5a57bb28b00d64e21a32d31bcd760", "title": "Code-Red: a case study on the spread and victims of an internet worm", "text": "On July 19, 2001, more than 359,000 computers connected to the Internet were infected with the Code-Red (CRv2) worm in less than 14 hours. The cost of this epidemic, including subsequent strains of Code-Red, is estimated to be in excess of $2.6 billion. Despite the global damage caused by this attack, there have been few serious attempts to characterize the spread of the worm, partly due to the challenge of collecting global information about worms. Using a technique that enables global detection of worm spread, we collected and analyzed data over a period of 45 days beginning July 2nd, 2001 to determine the characteristics of the spread of Code-Red throughout the Internet.In this paper, we describe the methodology we use to trace the spread of Code-Red, and then describe the results of our trace analyses. We first detail the spread of the Code-Red and CodeRedII worms in terms of infection and deactivation rates. Even without being optimized for spread of infection, Code-Red infection rates peaked at over 2,000 hosts per minute. We then examine the properties of the infected host population, including geographic location, weekly and diurnal time effects, top-level domains, and ISPs. We demonstrate that the worm was an international event, infection activity exhibited time-of-day effects, and found that, although most attention focused on large corporations, the Code-Red worm primarily preyed upon home and small business users. We also qualified the effects of DHCP on measurements of infected hosts and determined that IP addresses are not an accurate measure of the spread of a worm on timescales longer than 24 hours. Finally, the experience of the Code-Red worm demonstrates that wide-spread vulnerabilities in Internet hosts can be exploited quickly and dramatically, and that techniques other than host patching are required to mitigate Internet worms."}
{"_id": "bd58d8547ca844e6dc67f41c953bf133ce11d9b7", "title": "On the Generation of Skeletons from Discrete Euclidean Distance Maps", "text": "The skeleton is an important representation for shape analysis. A common approach for generating discrete skeletons takes three steps: 1) computing the distance map, 2) detecting maximal disks from the distance map, and 3) linking the centers of maximal disks (CMDs) into a connected skeleton. Algorithms using approximate distance metrics are abundant and their theory has been well established. However, the resulting skeletons may be inaccurate and sensitive to rotation. In this paper, we study methods for generating skeletons based on the exact Euclidean metric. We first show that no previous algorithms identifies the exact set of discrete maximal disks under the Euclidean metric. We then propose new algorithms and show that they are correct. To link CMDs into connected skeletons, we examine two prevalent approaches: connected thinning and steepest ascent. We point out that the connected thinning approach does not work properly for Euclidean distance maps. Only the steepest ascent algorithm produces skeletons that are truly medially placed. The resulting skeletons have all the desirable properties: they have the same simple connectivity as the figure, they are well-centered, they are insensitive to rotation, and they allow exact reconstruction. The effectiveness of our algorithms is demonstrated with numerous examples."}
{"_id": "0e62a123913b0dca9e1697a3cbf978d69dd9284d", "title": "CloudSpeller: query spelling correction by using a unified hidden markov model with web-scale resources", "text": "Query spelling correction is an important component of modern search engines that can help users to express an information need more accurately and thus improve search quality. In this work we proposed and implemented an end-to-end speller correction system, namely CloudSpeller. The CloudSpeller system uses a Hidden Markov Model to effectively model major types of spelling errors in a unified framework, in which we integrate a large-scale lexicon constructed using Wikipedia, an error model trained from high confidence correction pairs, and the Microsoft Web N-gram service. Our system achieves excellent performance on two search query spelling correction datasets, reaching 0.960 and 0.937 F1 scores on the TREC dataset and the MSN dataset respectively."}
{"_id": "ddc334306f269968451ca720b3d804e9b0911765", "title": "Unsupervised Event Tracking by Integrating Twitter and Instagram", "text": "This paper proposes an unsupervised framework for tracking real world events from their traces on Twitter and Instagram. Empirical data suggests that event detection from Instagram streams errs on the false-negative side due to the relative sparsity of Instagram data (compared to Twitter data), whereas event detection from Twitter can suffer from false-positives, at least if not paired with careful analysis of tweet content. To tackle both problems simultaneously, we design a unified unsupervised algorithm that fuses events detected originally on Instagram (called I-events) and events detected originally on Twitter (called T-events), that occur in adjacent periods, in an attempt to combine the benefits of both sources while eliminating their individual disadvantages. We evaluate the proposed framework with real data crawled from Twitter and Instagram. The results indicate that our algorithm significantly improves tracking accuracy compared to baselines."}
{"_id": "92319b104fe2e8979e8237a587bdf455bc7fbc83", "title": "Design consideration of recent advanced low-voltage CMOS boost converter for energy harvesting", "text": "With the emergence of nanoscale material-based energy harvesters such as thermoelectric generator and microbial fuel cells, energy-harvesting-assisted self-powered electronics systems are gaining popularity. The state-of-the-art low-voltage CMOS boost converter, a critical voltage converter circuit for low power energy harvesting sources will be reviewed in this paper. Fundamentals of the boost converter circuit startup problem are discussed and recent circuit solutions to solve this problem are compared and analyzed. Necessary design considerations and trade-offs regarding circuit topology, component and CMOS process are also addressed."}
{"_id": "0462a4fcd991f8d6f814337882da182c504d1d7b", "title": "Syntactic Annotations for the Google Books NGram Corpus", "text": "We present a new edition of the Google Books Ngram Corpus, which describes how often words and phrases were used over a period of five centuries, in eight languages; it reflects 6% of all books ever published. This new edition introduces syntactic annotations: words are tagged with their part-of-speech, and headmodifier relationships are recorded. The annotations are produced automatically with statistical models that are specifically adapted to historical text. The corpus will facilitate the study of linguistic trends, especially those related to the evolution of syntax."}
{"_id": "00daf408c36359b14a92953fda814b6e3603b522", "title": "A Bayesian framework for word segmentation: Exploring the effects of context", "text": "Since the experiments of Saffran et al. [Saffran, J., Aslin, R., & Newport, E. (1996). Statistical learning in 8-month-old infants. Science, 274, 1926-1928], there has been a great deal of interest in the question of how statistical regularities in the speech stream might be used by infants to begin to identify individual words. In this work, we use computational modeling to explore the effects of different assumptions the learner might make regarding the nature of words--in particular, how these assumptions affect the kinds of words that are segmented from a corpus of transcribed child-directed speech. We develop several models within a Bayesian ideal observer framework, and use them to examine the consequences of assuming either that words are independent units, or units that help to predict other units. We show through empirical and theoretical results that the assumption of independence causes the learner to undersegment the corpus, with many two- and three-word sequences (e.g. what's that, do you, in the house) misidentified as individual words. In contrast, when the learner assumes that words are predictive, the resulting segmentation is far more accurate. These results indicate that taking context into account is important for a statistical word segmentation strategy to be successful, and raise the possibility that even young infants may be able to exploit more subtle statistical patterns than have usually been considered."}
{"_id": "0dde53334f17ac4a2b9aee0915ab001f8add692f", "title": "Quantifying the evolutionary dynamics of language", "text": "Human language is based on grammatical rules. Cultural evolution allows these rules to change over time. Rules compete with each other: as new rules rise to prominence, old ones die away. To quantify the dynamics of language evolution, we studied the regularization of English verbs over the past 1,200\u2009years. Although an elaborate system of productive conjugations existed in English\u2019s proto-Germanic ancestor, Modern English uses the dental suffix, \u2018-ed\u2019, to signify past tense. Here we describe the emergence of this linguistic rule amidst the evolutionary decay of its exceptions, known to us as irregular verbs. We have generated a data set of verbs whose conjugations have been evolving for more than a millennium, tracking inflectional changes to 177 Old-English irregular verbs. Of these irregular verbs, 145 remained irregular in Middle English and 98 are still irregular today. We study how the rate of regularization depends on the frequency of word usage. The half-life of an irregular verb scales as the square root of its usage frequency: a verb that is 100 times less frequent regularizes 10 times as fast. Our study provides a quantitative analysis of the regularization process by which ancestral forms gradually yield to an emerging linguistic rule."}
{"_id": "8f563b44db3e9fab315b78cbcccae8ad69f0a000", "title": "Internet Privacy Concerns Confirm the Case for Intervention", "text": "yberspace is invading private space. Controversies about spam, cookies, and the clickstream are merely the tip of an iceberg. Behind them loom real-time person location technologies includ-It's small wonder that lack of public confidence is a serious impediment to the take-up rate of consumer e-commerce. The concerns are not merely about security of value, but about something much more significant: trust in the information society. Conventional thinking has been the Internet renders laws less relevant. On the contrary, this article argues that the current debates about privacy and the Internet are the harbingers of a substantial shift. Because the U.S. has held off general privacy protections for so long, it will undergo much more significant adjustments than European countries. Privacy is often thought of as a moral right or a legal right. But it's often more useful to perceive privacy as the interest that individuals have in sustaining a personal space, free from interference by other people and organizations. Personal space has multiple dimensions, in particular , privacy of the person (concerned with the integrity of the individual's body), privacy of personal behavior, privacy of personal communications , and privacy of personal data. Information privacy refers to the claims of individuals that data about themselves should generally not be available to other individuals and organizations, and that, where data is possessed by another party, the individual must be able to exercise a substantial degree of control over that data and its use. (Definitional issues are examined in [6].) Information privacy has been under increasing threat as a result of the rapid replacement of expensive physical surveillance by what I referred to in Communications over a decade ago as \" dataveillance: \" the systematic use of personal data systems in the investigation or monitoring of people's actions or communications [2]. Intensive data trails about each individual provide a basis for the exercise of power over I In nt te er rv ve en nt ti io on n Public confidence in matters of online privacy seemingly lessens as the Internet grows. Indeed, there is mounting evidence the necessary remedy may be a protective framework that includes (gulp) legislative provisions."}
{"_id": "fbf7e8e8ecc47eceee4e3f86e3eecf5b489a350b", "title": "An Engineering Model for Color Difference as a Function of Size", "text": "This work describes a first step towards the creation of an engineering model for the perception of color difference as a function of size. Our approach is to non-uniformly rescale CIELAB using data from crowdsourced experiments, such as those run on Amazon Mechanical Turk. In such experiments, the inevitable variations in viewing conditions reflect the environment many applications must run in. Our goal is to create a useful model for design applications where it is important to make colors distinct, but for which a small set of highly distinct colors is inadequate."}
{"_id": "3aeb560af8ff8509e6ef0010ae2b53bd15726230", "title": "Generating UML Diagrams from Natural Language Specifications", "text": "The process of generating UML Diagrams from natural language specification is a highly challenging task. This paper proposes a method and tool to facilitate the requirements analysis process and extract UML diagrams from textual requirements using natural language processing (NLP) and Domain Ontology techniques. Requirements engineers analyze requirements manually to understand the scope of the system. The time spent on the analysis and the low quality of human analysis justifies the need of a tool for better understanding of the system. \u201cRequirement analysis to Provide Instant Diagrams (RAPID)\u201d is a desktop tool to assist requirements analysts and Software Engineering students to analyze textual requirements, finding core concepts and its relationships, and extraction UML diagrams. The evaluation of RAPID system is in the process and will be conducted through two forms of evaluation, experimental and expert evaluation."}
{"_id": "02eff775e05d9e67e2498fe464be598be4ab84ce", "title": "Chatbot for admissions", "text": "The communication of potential students with a university department is performed manually and it is a very time consuming procedure. The opportunity to communicate with on a one-to-one basis is highly valued. However with many hundreds of applications each year, one-to-one conversations are not feasible in most cases. The communication will require a member of academic staff to expend several hours to find suitable answers and contact each student. It would be useful to reduce his costs and time. The project aims to reduce the burden on the head of admissions, and potentially other users, by developing a convincing chatbot. A suitable algorithm must be devised to search through the set of data and find a potential answer. The program then replies to the user and provides a relevant web link if the user is not satisfied by the answer. Furthermore a web interface is provided for both users and an administrator. The achievements of the project can be summarised as follows. To prepare the background of the project a literature review was undertaken, together with an investigation of existing tools, and consultation with the head of admissions. The requirements of the system were established and a range of algorithms and tools were investigated, including keyword and template matching. An algorithm that combines keyword matching with string similarity has been developed. A usable system using the proposed algorithm has been implemented. The system was evaluated by keeping logs of questions and answers and by feedback received by potential students that used it. 3 Acknowledgements I would like to thank Dr Peter Hancox for his immeasurable help and support throughout this project. I also need to express my thanks to the computer support team for their excellent help and instructions. Finally, I feel the need to acknowledge the constant support offered by my parents. Introduction This chapter covers an introduction to the project including the context, a description of aims and objectives, a description of what has been achieved, contributions and the structure of the report. Although the admissions process works properly as it is, it is very difficult and time consuming to contact a member of staff of the university. However, the problem would be partially solved if the applicant could talk to a convincing chatbot, able to respond to their concerns with information about admissions, booking accommodation, paying fees in instalments and what pre-sessional courses are on offer. The \u2026"}
{"_id": "5cc695c35e87c91c060aa3fbf9305b4fdc960c9f", "title": "Levofloxacin implants with predefined microstructure fabricated by three-dimensional printing technique.", "text": "A novel three-dimensional (3D) printing technique was utilized in the preparation of drug implants that can be designed to have complex drug release profiles. The method we describe is based on a lactic acid polymer matrix with a predefined microstructure that is amenable to rapid prototyping and fabrication. We describe how the process parameters, especially selection of the binder, were optimized. Implants containing levofloxacin (LVFX) with predefined microstructures using an optimized binder solution of ethanol and acetone (20:80, v/v) were prepared by a 3D printing process that achieved a bi-modal profile displaying both pulsatile and steady state LVFX release from a single implant. The pulse release appeared from day 5 to 25, followed by a steady state phase of 25 days. The next pulse release phase then began at the 50th day and ended at the 80th day. To evaluate the drug implants structurally and analytically, the microscopic morphologies and the in vitro release profiles of the implants fabricated by both the 3D printing technique and the conventional lost mold technique were assessed using environmental scanning electron microscopy (ESEM) and UV absorbance spectrophotometry. The results demonstrate that the 3D printing technology can be used to fabricate drug implants with sophisticated micro- and macro-architecture in a single device that may be rapidly prototyped and fabricated. We conclude that drug implants with predefined microstructure fabricated by 3D printing techniques can have clear advantages compared to implants fabricated by conventional compressing methods."}
{"_id": "30fa9a026e511ee1f00f57c761b62f59c0c4b7c0", "title": "A Machine Learning Approach to Pronoun Resolution in Spoken Dialogue", "text": "We apply a decision tree based approach to pronoun resolution in spoken dialogue. Our system deals with pronouns with NPand non-NP-antecedents. We present a set of features designed for pronoun resolution in spoken dialogue and determine the most promising features. We evaluate the system on twenty Switchboard dialogues and show that it compares well to Byron\u2019s (2002) manually tuned system."}
{"_id": "a4f649c50b328705540652cb26e0e8a1830ff676", "title": "Smart Home Automated Control System Using Android Application and Microcontroller", "text": "Smart Home System (SHS) is a dwelling incorporating a communications network that connects the electrical appliances and services allowing them to be remotely controlled, monitored or accessed. SHS includes different approaches to achieve multiple objectives range from enhancing comfort in daily life to enabling a more independent life for elderly and handicapped people. In this paper, the main four fields for SHS which are, home automation and remote monitoring, environmental monitoring, including humidity, temperature, fault tracking and management and finally the health monitoring have been considered. The system design is based on the Microcontroller MIKRO C software; multiple passive and active sensors and also a wireless internet services which is used in different monitoring and control processes .This paper presents the hardware implementation of a multiplatform control system for house automation and combines both hardware and software technologies. The system results shows that it can be classified as a comfortable, secure, private, economic and safe system in addition to its great flexibility and reliability."}
{"_id": "221d61b5719c3c66109d476f3b35b1f557a60769", "title": "Regression Shrinkage and Selection via the Elastic Net , with Applications to Microarrays", "text": "We propose the elastic net, a new regression shrinkage and selection method. Real data and a simulation study show that the elastic net often outperforms the lasso, while it enjoys a similar sparsity of representation. In addition, the elastic net encourages a grouping effect, where strong correlated predictors are kept in the model. The elastic net is particularly useful in the analysis of microarray data in which the number of genes (predictors) is much bigger than the number of samples (observations). We show how the elastic net can be used to construct a classification rule and do automatic gene selection at the same time in microarray data, where the lasso is not very satisfied. We also propose an efficient algorithm for solving the elastic net based on the recently invented LARS algorithm. keywords: Gene selection; Grouping effect; Lasso; LARS algorithm; Microarray classification."}
{"_id": "5999a5f3a49a53461b02c139e16f79cf820a5774", "title": "Path-planning strategies for a point mobile automaton moving amidst unknown obstacles of arbitrary shape", "text": "The problem of path planning for an automaton moving in a two-dimensional scene filled with unknown obstacles is considered. The automaton is presented as a point; obstacles can be of an arbitrary shape, with continuous boundaries and of finite size; no restriction on the size of the scene is imposed. The information available to the automaton is limited to its own current coordinates and those of the target position. Also, when the automaton hits an obstacle, this fact is detected by the automaton's \u201ctactile sensor.\u201d This information is shown to be sufficient for reaching the target or concluding in finite time that the target cannot be reached. A worst-case lower bound on the length of paths generated by any algorithm operating within the framework of the accepted model is developed; the bound is expressed in terms of the perimeters of the obstacles met by the automaton in the scene. Algorithms that guarantee reaching the target (if the target is reachable), and tests for target reachability are presented. The efficiency of the algorithms is studied, and worst-case upper bounds on the length of generated paths are produced."}
{"_id": "a1dcc2a3bbd58befa7ba4b9b816aabc4aa450b38", "title": "Obsessive-compulsive disorder and gut microbiota dysregulation.", "text": "Obsessive-compulsive disorder (OCD) is a debilitating disorder for which the cause is not known and treatment options are modestly beneficial. A hypothesis is presented wherein the root cause of OCD is proposed to be a dysfunction of the gut microbiome constituency resulting in a susceptibility to obsessional thinking. Both stress and antibiotics are proposed as mechanisms by which gut microbiota are altered preceding the onset of OCD symptomology. In this light, pediatric autoimmune neuropsychiatric disorders associated with streptococcal infections (PANDAS) leading to episodic OCD is explained not by group A beta-hemolytic streptococcal infections, but rather by prophylactic antibiotics that are administered as treatment. Further, stressful life events known to trigger OCD, such as pregnancy, are recast to show the possibility of altering gut microbiota prior to onset of OCD symptoms. Suggested treatment for OCD would be the directed, specie-specific (re)introduction of beneficial bacteria modifying the gut microbiome, thereby ameliorating OCD symptoms. Special considerations should be contemplated when considering efficacy of treatment, particularly the unhealthy coping strategies often observed in patients with chronic OCD that may need addressing in conjunction with microbiome remediation."}
{"_id": "a67f9ecae9ccab7e13630f90cdbf826ba064eef7", "title": "Event-Based Mobile Social Networks: Services, Technologies, and Applications", "text": "Event-based mobile social networks (MSNs) are a special type of MSN that has an immanently temporal common feature, which allows any smart phone user to create events to share group messaging, locations, photos, and insights among participants. The emergence of Internet of Things and event-based social applications integrated with context-awareness ability can be helpful in planning and organizing social events like meetings, conferences, and tradeshows. This paper first provides review of the event-based social networks and the basic principles and architecture of event-based MSNs. Next, event-based MSNs with smartphone contained technology elements, such as context-aware mobility and multimedia sharing, are presented. By combining the feature of context-aware mobility with multimedia sharing in event-based MSNs, event organizers, and planners with the service providers optimize their capability to recognize value for the multimedia services they deliver. The unique features of the current event-based MSNs give rise to the major technology trends to watch for designing applications. These mobile applications and their main features are described. At the end, discussions on the evaluation of the event-based mobile applications based on their main features are presented. Some open research issues and challenges in this important area of research are also outlined."}
{"_id": "c75c82be2e98a8d66907742a89b886902c1a0162", "title": "Fully Integrated Startup at 70 mV of Boost Converters for Thermoelectric Energy Harvesting", "text": "This paper presents an inductive DC-DC boost converter for energy harvesting using a thermoelectric generator with a minimum startup voltage of 70 mV and a regulated output voltage of 1.25 V. With a typical generator resistance of 40 \u03a9, an output power of 17 \u03bcW can be provided, which translates to an end-to-end efficiency of 58%. The converter employs Schmitt-trigger logic startup control circuitry and an ultra-low voltage charge pump using modified Schmitt-trigger driving circuits optimized for driving capacitive loads. Together with a novel ultra-low leakage power switch and the required control scheme, to the best of the authors' knowledge, this enables the lowest minimum voltage with fully integrated startup."}
{"_id": "9ce153635b16fed63a2ec5023533f1143c19e619", "title": "Adaptation and the SetPoint Model of Subjective Well-Being Does Happiness Change After Major Life Events ?", "text": "Hedonic adaptation refers to the process by which individuals return to baseline levels of happiness following a change in life circumstances. Dominant models of subjective well-being (SWB) suggest that people can adapt to almost any life event and that happiness levels fluctuate around a biologically determined set point that rarely changes. Recent evidence from large-scale panel studies challenges aspects of this conclusion. Although inborn factors certainly matter and some adaptation does occur, events such as divorce, death of a spouse, unemployment, and disability are associated with lasting changes in SWB. These recent studies also show that there are considerable individual differences in the extent to which people adapt. Thus, happiness levels do change, and adaptation is not inevitable. KEYWORDS\u2014happiness; subjective well-being; adaptation; set-point theory People\u2019s greatest hopes and fears often center on the possible occurrence of rare but important life events. People may dread the possibility of losing a loved one or becoming disabled, and they may go to great lengths to find true love or to increase their chances of winning the lottery. In many cases, people strive to attain or avoid these outcomes because of the outcomes\u2019 presumed effect on happiness. But do these major life events really affect long-term levels of subjective well-being (SWB)? Dominant models of SWB suggest that after experiencing major life events, people inevitably adapt. More specifically, set-point theorists posit that inborn personality factors cause an inevitable return to genetically determined happiness set points. However, recent evidence from large-scale longitudinal studies challenges some of the stronger conclusions from these models. ADAPTATION RESEARCH AND THEORY Although the thought that levels of happiness cannot change may distress some people, researchers believe that adaptation processes serve important functions (Frederick & Loewenstein, 1999). For one thing, these processes protect people from potentially dangerous psychological and physiological consequences of prolonged emotional states. In addition, because adaptation processes allow unchanging stimuli to fade into the attentional background, these processes ensure that change in the environment receives extra attention. Attention to environmental change is advantageous because threats that have existed for prolonged periods of time are likely to be less dangerous than novel threats. Similarly, because rewards that have persisted are less likely to disappear quickly than are novel rewards, it will often be advantageous to attend and react more strongly to these novel rewards. Finally, by reducing emotional reactions over time, adaptation processes allow individuals to disengage from goals that have little chance of success. Thus, adaptation can be beneficial, and some amount of adaptation to life circumstances surely occurs. Yet many questions about the strength and ubiquity of adaptation effects remain, partly because of the types of evidence that have been used to support adaptation theories. In many cases, adaptation is not directly observed. Instead, it must be inferred from indirect evidence. For instance, psychologists often cite the low correlation between happiness and life circumstances as evidence for adaptation effects. 
Factors such as income, age, health, marital status, and number of friends account for only a small percentage of the variance in SWB (Diener, Suh, Lucas, & Smith, 1999). One explanation that has been offered for this counterintuitive finding is that these factors initially have an impact but that people adapt over time. However, the weak associations between life circumstances and SWB themselves provide only suggestive evidence for this explanation. Additional indirect support for the set-point model comes from research that takes a personality perspective on SWB. Three pieces of evidence are relevant (Lucas, in press-b). First, SWB exhibits moderate stability even over very long periods of time and even in the face of changing life circumstances. Recent reviews suggest that approximately 30 to 40% of the variance in life-satisfaction measures is stable over periods as long as 20 years. Second, a number of studies have shown that well-being variables are about 40 to 50% heritable. These heritability estimates appear to be even higher (about 80%) for long-term levels of happiness (Lykken & Tellegen, 1996). Finally, personality variables like extroversion and neuroticism are relatively strong predictors of happiness, at least when compared to the predictive power of external factors. The explanation for this set of findings is that events can influence short-term levels of happiness, but personality-based adaptation processes inevitably move people back to their genetically determined set point after a relatively short period of time. More direct evidence for hedonic adaptation comes from studies that examine the well-being of individuals who have experienced important life events. However, even these studies can be somewhat equivocal. For instance, one of the most famous studies is that of Brickman, Coates, and Janoff-Bulman (1978) comparing lottery winners and patients with spinal-cord injuries to people in a control group. Brickman et al. showed that lottery winners were not significantly happier than the control-group participants and that individuals with spinal-cord injuries \u2018\u2018did not appear nearly as unhappy as might be expected\u2019\u2019 (p. 921). This study appears to show adaptation to even the most extreme events imaginable. What is often not mentioned, however, is that although the participants with spinal-cord injuries were above neutral on the happiness scale (which is what led Brickman et al. to conclude that they were happier than might be expected), they were significantly less happy than the people in the control group, and the difference between the groups was actually quite large. Individuals with spinal-cord injuries were more than three quarters of a standard deviation below the mean of the control group. This means that the average participant from the control group was happier than approximately 78% of participants with spinal-cord injuries. This result has now been replicated quite often\u2014most existing studies show relatively large differences between individuals with spinal-cord injuries and healthy participants in control groups (Dijkers, 1997). 
In addition to problems that result from the interpretation of effect sizes, methodological limitations restrict the conclusions that can be drawn from many existing studies of adaptation. Most studies are not longitudinal, and even fewer are prospective (though there are some notable exceptions; see e.g., Bonanno, 2004; Caspi et al., 2003). Because participants\u2019 pre-event levels of SWB are not known, it is always possible that individuals who experienced an event were more or less happy than average before the event occurred. Certain people may be predisposed to experience life events, and these predisposing factors may be responsible for their happiness levels being lower than average. For instance, in a review of the literature examining the well-being of children who had lost limbs from various causes, Tyc (1992) suggested that those who lost limbs due to accidents tended to have higher levels of premorbid psychological disorders than did those who lost limbs due to disease. Thus, simply comparing the well-being of children who lost limbs to those who did not might overestimate the effect of the injury. Psychologists have demonstrated that level of happiness predicts the occurrence of a variety of events and outcomes (Lyubomirsky, King, & Diener, 2005), and therefore, studies that compare individuals who have experienced a particular event with those who have not but that do not take into account previous happiness level must be interpreted cautiously. A second methodological concern relates to what are known as demand characteristics. When researchers recruit participants specifically because they have experienced a given life event, participants may over- or underreport SWB. These reports may occur because people believe the life event should have an impact, because they want to appear well-adjusted, or simply because the context of the study makes the event more salient. For instance, Smith, Schwarz, Roberts, and Ubel (2006) showed that patients with Parkinson\u2019s disease reported lower life satisfaction when the study instructions indicated that Parkinson\u2019s disease was a focus than when the instructions indicated that the study focused on the general population. USING LARGE-SCALE PANEL STUDIES TO ASSESS ADAPTATION TO LIFE EVENTS Recently, my colleagues and I have turned to archival data analysis using large, nationally representative panel studies to address questions about adaptation to life events. These studies have a number of advantages over alternative designs. First, they are prospective, which means that pre-event levels of SWB are known. Second, they are longitudinal, which means that change over time can be accurately modeled. Third, very large samples are often involved, which means that even rare events are sampled. Finally, because designers of these studies often recruit nationally representative samples, and because the questionnaires often focus on a variety of issues, demand characteristics are unlikely to have much of an effect. We have used two such panel studies\u2014the German Socioeconomic Panel Study (GSOEP) and the British Household Panel Study (BHPS)\u2014to examine the amount of adaptation that occurs following major life events. The GSOEP includes almost 40,000 individuals living in Germ"}
{"_id": "9312a805f90f0858ae421a8472dc794fe8f1cf03", "title": "Comparison of perioperative outcomes between robotic and laparoscopic partial nephrectomy: a systematic review and meta-analysis.", "text": "CONTEXT\nRobotic partial nephrectomy (RPN) is rapidly increasing; however, the benefit of RPN over laparoscopic partial nephrectomy (LPN) is controversial.\n\n\nOBJECTIVE\nTo compare perioperative outcomes of RPN and LPN.\n\n\nEVIDENCE ACQUISITION\nWe searched Ovid-Medline, Ovid-Embase, the Cochrane Library, KoreaMed, KMbase, KISS, RISS, and KisTi from their inception through August 2013. Two independent reviewers extracted data using a standardized form. Quality of the selected studies was assessed using the methodological index for nonrandomized studies.\n\n\nEVIDENCE SYNTHESIS\nA total of 23 studies and 2240 patients were included. All studies were cohort studies with no randomization, and the methodological quality varied. There was no significant difference between the two groups regarding complications of Clavien-Dindo classification grades 1-2 (p=0.62), Clavien-Dindo classification grades 3-5 (p=0.78), change of serum creatinine (p=0.65), operative time (p=0.35), estimated blood loss (p=0.76), and positive margins (p=0.75). The RPN group had a significantly lower rate of conversion to open surgery (p=0.02) and conversion to radical surgery (p=0.0006), shorter warm ischemia time (WIT; p=0.005), smaller change of estimated glomerular filtration rate (eGFR; p=0.03), and shorter length of stay (LOS; p=0.004).\n\n\nCONCLUSIONS\nThis meta-analysis shows that RPN is associated with more favorable results than LPN in conversion rate to open or radical surgery, WIT, change of eGFR, and shorter LOS. To establish the safety and effectiveness outcomes of robotic surgery, well-designed randomized clinical studies with long-term follow-up are needed.\n\n\nPATIENT SUMMARY\nRobotic partial nephrectomy (PN) is more favorable than laparoscopic PN in terms of lower conversion rate to radical nephrectomy, a favorable renal function indexed estimated glomerular filtration rate, shorter length of hospital stay, and shorter warm ischemia time."}
{"_id": "6cec70991312db3341b597892f79d218a5b2b047", "title": "Bonding-wire triangular spiral inductor for on-chip switching power converters", "text": "This work presents the first design and modelling of bonding-wire-based triangular spiral inductors (Fig. 1), targeting their application to on-chip switching power converters. It is demonstrated that the equilateral triangular shape compared to other polygonal shapes best balances the inductive density as well as the total Equivalent Series Resistance (ESR). Afterwards, a design procedure is presented in order to optimize the inductor design, in terms of ESR and occupied area reduction. Finally, finite-elements simulation results of an optimized design (27nH, 1 \u2126) are presented to validate the proposed expressions."}
{"_id": "d31798506874705f900e72203515abfaa9278409", "title": "Off-line recognition of realistic Chinese handwriting using segmentation-free strategy", "text": "Article history: Received 26 August 2007 Received in revised form 7 May 2008 Accepted 13 May 2008"}
{"_id": "1bfa7d524c649bd81ef5bf0b01e4524d28c6895e", "title": "Formal Analysis of Enhanced Authorization in the TPM 2.0", "text": "The Trusted Platform Module (TPM) is a system component that provides a hardware-based approach to establish trust in a platform by providing protected storage, robust platform integrity measurement, secure platform attestation and other secure functionalities. The access to TPM commands and TPM-resident key objects are protected via an authorization mechanism. Enhanced Authorization (EA) is a new mechanism introduced by the TPM 2.0 to provide a rich authorization model for specifying flexible access control policies for TPM-resident objects.\n In our paper, we conduct a formal verification of the EA mechanism. Firstly, we propose a model of the TPM 2.0 EA mechanism in a variant of the applied pi calculus. Secondly, we identify and formalize the security properties of the EA mechanism (Prop.1 and 2) in its design. We also give out a misuse problem that is easily to be neglected (Lemma 7). Thirdly, using the SAPIC tool and the tamarin prover, we have verified both the two security properties. Meanwhile, we have found 3 misuse cases and one of them leads to an attack on the application in [12]."}
{"_id": "d37e6593c2b14e319d7e4a8c18c8ef9f4e3ef168", "title": "RUNNING HEAD : THE SOCIAL ORIENTATION HYPOTHESIS-1The Origin of Cultural Differences in Cognition : The Social Orientation Hypothesis", "text": "A large body of research documents cognitive differences between Westerners and East Asians. Westerners tend to be more analytic and East Asians tend to be more holistic. These findings have often been explained as being due to corresponding differences in social orientation. Westerners are more independent and Easterners are more interdependent. However, comparisons of the cognitive tendencies of Westerners and East Asians do not allow us to rule out alternative explanations for the cognitive differences, such as linguistic and genetic differences, as well as cultural differences other than social orientation. In this review we summarize recent developments that provide stronger support for the social-orientation hypothesis. Keywordsculture; cross-cultural differences; within-culture differences; reasoning; independence/interdependence; holistic/analytic cognition RUNNING HEAD: THE SOCIAL ORIENTATION HYPOTHESIS 3 Cultural psychologists have consistently found different patterns of thinking and perception in different societies, with some cultures demonstrating a more analytic pattern and others a more holistic pattern (see Table 1). Analytic cognition is characterized by taxonomic and rule-based categorization of objects, a narrow focus in visual attention, dispositional bias in causal attribution, and the use of formal logic in reasoning. In contrast, holistic cognition is characterized by thematic and familyresemblance-based categorization of objects, a focus on contextual information and relationships in visual attention, an emphasis on situational causes in attribution, and dialecticism (Nisbett, Peng, Choi, & Norenzayan, 2001). What unites the elements of the analytic style is a tendency to focus on a single dimension or aspect\u2014whether in categorizing objects or evaluating arguments\u2014and a tendency to disentangle phenomena from the contexts in which they are embedded\u2014for example, focusing on the individual as a causal agent or attending to focal objects in visual scenes. What unites the elements of the holistic style is a broad attention to context and relationships in visual attention, categorizing objects, and explaining social behavior. Table 1 about here Cultures also differ in their social orientations (independence vs. interdependence) (see Table 2). Cultures that endorse and afford independent social orientation tend to emphasize self-direction, autonomy, and self-expression. Cultures that endorse and afford interdependent social orientation tend to emphasize harmony, relatedness, and connection. Independently oriented cultures tend to view the self as bounded and separate from social others, whereas interdependently oriented cultures tend to view the self as interconnected and as encompassing important relationships (e.g. Markus & Kitayama, 1991; Triandis, 1989). In independently oriented cultural contexts, happiness is most often experienced as a socially disengaging emotion (i.e. pride), whereas in interdependently oriented cultural contexts, happiness is most often experienced as a RUNNING HEAD: THE SOCIAL ORIENTATION HYPOTHESIS 4 socially engaging emotion (i.e. sense of closeness to others; Kitayama, Mesquita, & Karasawa, 2006). 
Finally, in cultures that have an independent social orientation, people are more motivated to symbolically enhance the self at the expense of others; this tendency is not as common in interdependently oriented cultures (Kitayama, Ishii, Imada, Takemura, & Ramaswamy, 2006; Kitayama, Mesquita, & Karasawa, 2006). The proposition that cultures differing in their social orientation (independence vs. interdependence) also differ in their cognitive habits (analytic vs. holistic cognition) is by no means new (e.g. Markus & Kitayama, 1991; Witkin & Berry, 1975). Indeed one can trace the origin of this claim back at least to T\u00f6nnies (/2002). And certainly a large body of literature has demonstrated that cultures which differ in social orientation also show corresponding differences in cognitive style; Western societies tend to be more independent and more analytic, while East Asian societies tend to be more interdependent and holistic (Nisbett et al., 2001). On the basis of such evidence, it has been proposed that differences in social orientation are the driving force behind cultural differences in cognition (Markus & Kitayama, 1991; Nisbett et al., 2001). While the link between social orientation and cognitive style has been widely accepted, the evidence presented until recently has not provided strong support for this connection. East Asia and the West are huge geographic and cultural areas differing from one another in many ways. There are fairly large genetic differences between the two populations. The linguistic differences are large. Western languages are almost all Indo-European in origin and differ in many systematic ways from the major languages of East Asia. And there are many large cultural differences between the two regions other than in social orientation along lines of independence and interdependence. East Asia was heavily influenced by Confucian values and ways of thought and European cultures were heavily influenced by ancient Greek, specifically Aristotelian, values and ways of thought (Lloyd, 1996). Just within this broad set of cultural differences it would be possible to find many hypotheses that might account for the kind of cognitive differences that have been observed between East and West. Examples of other large societal differences between East and West have to do with the length of time that the respective societies have been industrialized and the degree to which political institutions in these societies have a tradition of being democratic. Both of these latter dimensions are frequently invoked to account for a host of differences between East and West. In the present review, we focus on recent studies that narrow the plausible range of candidates for explaining the cognitive differences. These studies look at much tighter cultural comparisons than those found in previous research. These studies compare Eastern and Western Europe, Europe with the United States, northern and southern Italy, Hokkaido and Mainland Japan, adjacent villages in Turkey, and middle-class and working-class Americans. All of these comparisons involve contrasting more interdependent cultures with more independent cultures. We also review research that manipulates independence vs. interdependence and finds differences in analytic vs. 
holistic cognition. The recent studies make it much less likely that the cognitive differences observed between East and West are due to large genetic or linguistic differences and make it more plausible that the cognitive differences are indeed due to differences in social orientation having to do with independence vs. interdependence rather than to societal differences such as Aristotelian vs. Confucian intellectual traditions or degree of industrialization. CROSS-CULTURAL COMPARISONS Several recent studies have shown that the covariation between social orientation and cognitive style is not confined to North America and East Asia. Even within societies that are part of the European cultural tradition, one observes that cultures differing in social orientation also differ in terms of cognitive style. For example, East Europeans and Americans differ along these dimensions. Russians are more interdependent than Americans (Grossmann, 2009; Matsumoto, Takeuchi, Andayani, Kouznetsova, & Krupp, 1998) and are more holistic in terms of categorization, attribution, visual attention, and reasoning about change (Grossmann, 2009). Similarly, Croats are more interdependent than Americans (\u0160verko, 1995) and show more holistic patterns of cognition in terms of categorization and visual attention (Varnum, Grossmann, Katunar, Nisbett, & Kitayama, 2008). Recent evidence suggests that similar differences exist within Europe. Russians, who are more interdependent than Germans (Naumov, 1996), also show more contextual patterns of visual attention (Medzheritskaya, 2008). WITHIN-CULTURE DIFFERENCES The fact that social orientation and cognitive style covary in comparisons across and within broad cultural regions does not fully address alternative explanations for this pattern. Cross-cultural differences in cognition might conceivably be accounted for by differences in linguistics, genetics, and degree and recency of industrialization and democratization. However, studies comparing groups within the same culture tend to argue against such interpretations. In a recent study comparing Hokkaido Japanese with those from mainland Japan, Kitayama and colleagues (Kitayama, Ishii, et al., 2006) found that those from Hokkaido (settled by pioneers from the southern Japanese islands) were more independent than those from the main islands and also showed more dispositional bias in attribution. Similarly, Northern Italians, who are more independent than Southern Italians (Martella & Maass, 2000), also show more analytic cognitive habits, categorizing objects in a more taxonomic fashion (Knight & Nisbett, 2007). Even more fine-grained comparisons have found that, within a culture, groups differing in social orientation also differ in cognitive style. For example, Uskul and colleagues compared neighboring villages in the Black Sea region of Turkey that differed in terms of their primary economic activity (Uskul, Kitayama, & Nisbett, 2008). Previous research has found that more sedentary communities (such as farming communities and cooperative fishing communities) tend to be characterized by a more interdependent social orientation and holistic cognition (specifically field dependence or the tendency to have difficulty separating objects from their contexts; Berry, 1966; Witkin & Berry, 1975). Less sedentary communiti"}
{"_id": "cd426e8e7c356d5c31ac786749ac474d8e583937", "title": "Application of Data Mining Techniques in IoT: A Short Review", "text": "Internet of Things (IoT) has been growing rapidly due to recent advancements in communications and sensor technologies. Interfacing an every object together through internet looks very difficult, but within a frame of time Internet of Things will drastically change our life. The enormous data captured by the Internet of Things (IoT) are considered of high business as well as social values and extracting hidden information from raw data, various data mining algorithm can be applied to IoT data. In this paper, We survey systematic review of various data mining models as well as its application in Internet of Thing (IoT) field along with its merits and demerits. At last, we discussed challenges in IoT."}
{"_id": "2dd2c7602d7f4a0b78494ac23ee1e28ff489be88", "title": "Large scale metric learning from equivalence constraints", "text": "In this paper, we raise important issues on scalability and the required degree of supervision of existing Mahalanobis metric learning methods. Often rather tedious optimization procedures are applied that become computationally intractable on a large scale. Further, if one considers the constantly growing amount of data it is often infeasible to specify fully supervised labels for all data points. Instead, it is easier to specify labels in form of equivalence constraints. We introduce a simple though effective strategy to learn a distance metric from equivalence constraints, based on a statistical inference perspective. In contrast to existing methods we do not rely on complex optimization problems requiring computationally expensive iterations. Hence, our method is orders of magnitudes faster than comparable methods. Results on a variety of challenging benchmarks with rather diverse nature demonstrate the power of our method. These include faces in unconstrained environments, matching before unseen object instances and person re-identification across spatially disjoint cameras. In the latter two benchmarks we clearly outperform the state-of-the-art."}
{"_id": "6d96f946aaabc734af7fe3fc4454cf8547fcd5ed", "title": "The AR face database", "text": ""}
{"_id": "db5aa767a0e8ceb09e7202f708e15a37bbc7ca01", "title": "Universal approximation using incremental constructive feedforward networks with random hidden nodes", "text": "According to conventional neural network theories, single-hidden-layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes are universal approximators when all the parameters of the networks are allowed adjustable. However, as observed in most neural network implementations, tuning all the parameters of the networks may cause learning complicated and inefficient, and it may be difficult to train networks with nondifferential activation functions such as threshold networks. Unlike conventional neural network theories, this paper proves in an incremental constructive method that in order to let SLFNs work as universal approximators, one may simply randomly choose hidden nodes and then only need to adjust the output weights linking the hidden layer and the output layer. In such SLFNs implementations, the activation functions for additive nodes can be any bounded nonconstant piecewise continuous functions g : R --> R and the activation functions for RBF nodes can be any integrable piecewise continuous functions g : R --> R and integral of R g(x)dx not equal to 0. The proposed incremental method is efficient not only for SFLNs with continuous (including nondifferentiable) activation functions but also for SLFNs with piecewise continuous (such as threshold) activation functions. Compared to other popular methods such a new network is fully automatic and users need not intervene the learning process by manually tuning control parameters."}
{"_id": "0e3fcfe63b7b6620e3c47e9751fe3456e85cc52f", "title": "Robust Discriminative Response Map Fitting with Constrained Local Models", "text": "We present a novel discriminative regression based approach for the Constrained Local Models (CLMs) framework, referred to as the Discriminative Response Map Fitting (DRMF) method, which shows impressive performance in the generic face fitting scenario. The motivation behind this approach is that, unlike the holistic texture based features used in the discriminative AAM approaches, the response map can be represented by a small set of parameters and these parameters can be very efficiently used for reconstructing unseen response maps. Furthermore, we show that by adopting very simple off-the-shelf regression techniques, it is possible to learn robust functions from response maps to the shape parameters updates. The experiments, conducted on Multi-PIE, XM2VTS and LFPW database, show that the proposed DRMF method outperforms state-of-the-art algorithms for the task of generic face fitting. Moreover, the DRMF method is computationally very efficient and is real-time capable. The current MATLAB implementation takes 1 second per image. To facilitate future comparisons, we release the MATLAB code and the pre-trained models for research purposes."}
{"_id": "451f72230e607cb59d60f996299c578623a19294", "title": "Permission Re-Delegation: Attacks and Defenses", "text": "Modern browsers and smartphone operating systems treat applications as mutually untrusting, potentially malicious principals. Applications are (1) isolated except for explicit IPC or inter-application communication channels and (2) unprivileged by default, requiring user permission for additional privileges. Although inter-application communication supports useful collaboration, it also introduces the risk of permission redelegation. Permission re-delegation occurs when an application with permissions performs a privileged task for an application without permissions. This undermines the requirement that the user approve each application\u2019s access to privileged devices and data. We discuss permission re-delegation and demonstrate its risk by launching real-world attacks on Android system applications; several of the vulnerabilities have been confirmed as bugs. We discuss possible ways to address permission redelegation and present IPC Inspection, a new OS mechanism for defending against permission re-delegation. IPC Inspection prevents opportunities for permission redelegation by reducing an application\u2019s permissions after it receives communication from a less privileged application. We have implemented IPC Inspection for a browser and Android, and we show that it prevents the attacks we found in the Android system applications."}
{"_id": "97aa698a422d037ed322ef093371d424244cb131", "title": "Spatio-temporal proximity and social distance: a confirmation framework for social reporting", "text": "Social reporting is based on the idea that the members of a location-based social network observe real-world events and publish reports about their observations. Application scenarios include crisis management, bird watching or even some sorts of mobile games. A major issue in social reporting is the quality of the reports. We propose an approach to the quality problem that is based on the reciprocal confirmation of reports by other reports. This contrasts with approaches that require users to verify reports, that is, to explicitly evaluate their veridicality. We propose to use spatio-termporal proximity as a first criterion for confirmation and social distance as a second one. By combining these two measures we construct a graph containing the reports as nodes connected by confirmation edges that can adopt positive as well as negative values. This graph builds the basis for the computation of confirmation values for individual reports by different aggregation measures. By applying our approach to two use cases, we show the importance of a weighted combination, since the meaningfulness of the constituent measures varies between different contexts."}
{"_id": "e070de33e302b7e8270c3ef12ff5a47f5f700194", "title": "Modeling and Verification of a Six-Phase Interior Permanent Magnet Synchronous Motor", "text": "In this paper, a new mathematical modeling for a six-phase interior permanent magnet synchronous motor (IPMSM) is presented. The proposed model utilizes two synchronous reference frames. First, the flux model in the $abcxyz$ frame is mapped into the stationary $dq$ frames and then to two synchronous rotating frames. Then, differentiating the flux models, voltage equations are derived in rotating frames. Through this analysis, the interaction between the $abc$ and $xyz$ subsystems is properly described by a coupling matrix. The torque equation is also derived using the two reference current variables. Flux model was verified through FEM analysis. Experiments were done using a 100\u00a0kW six-phase IPMSM in a dynamo system. The validity of the torque equation was checked with some experimental results under a shorted condition on an $xyz$ subsystem."}
{"_id": "f84070f5ecd2d9be81e09e5a3699a525382309e3", "title": "Autonomous exploration of motor skills by skill babbling", "text": "Autonomous exploration of motor skills is a key capability of learning robotic systems. Learning motor skills can be formulated as inverse modeling problem, which targets at finding an inverse model that maps desired outcomes in some task space, e.g., via points of a motion, to appropriate actions, e.g., motion control policy parameters. In this paper, autonomous exploration of motor skills is achieved by incrementally learning inverse models starting from an initial demonstration. The algorithm is referred to as skill babbling, features sample-efficient learning, and scales to high-dimensional action spaces. Skill babbling extends ideas of goal-directed exploration, which organizes exploration in the space of goals. The proposed approach provides a modular framework for autonomous skill exploration by separating the learning of the inverse model from the exploration mechanism and a model of achievable targets, i.e. the workspace. The effectiveness of skill babbling is demonstrated for a range of motor tasks comprising the autonomous bootstrapping of inverse kinematics and parameterized motion primitives."}
{"_id": "f1e2d4d8c7ca6e2b2a25f935501031a4ce3e9912", "title": "NestedNet: Learning Nested Sparse Structures in Deep Neural Networks", "text": "Recently, there have been increasing demands to construct compact deep architectures to remove unnecessary redundancy and to improve the inference speed. While many recent works focus on reducing the redundancy by eliminating unneeded weight parameters, it is not possible to apply a single deep network for multiple devices with different resources. When a new device or circumstantial condition requires a new deep architecture, it is necessary to construct and train a new network from scratch. In this work, we propose a novel deep learning framework, called a nested sparse network, which exploits an n-in-1-type nested structure in a neural network. A nested sparse network consists of multiple levels of networks with a different sparsity ratio associated with each level, and higher level networks share parameters with lower level networks to enable stable nested learning. The proposed framework realizes a resource-aware versatile architecture as the same network can meet diverse resource requirements, i.e., anytime property. Moreover, the proposed nested network can learn different forms of knowledge in its internal networks at different levels, enabling multiple tasks using a single network, such as coarse-to-fine hierarchical classification. In order to train the proposed nested network, we propose efficient weight connection learning and channel and layer scheduling strategies. We evaluate our network in multiple tasks, including adaptive deep compression, knowledge distillation, and learning class hierarchy, and demonstrate that nested sparse networks perform competitively, but more efficiently, compared to existing methods."}
{"_id": "838a8c607a993b2448636e2a89262eb3490dbdb4", "title": "Marketing actions can modulate neural representations of experienced pleasantness.", "text": "Despite the importance and pervasiveness of marketing, almost nothing is known about the neural mechanisms through which it affects decisions made by individuals. We propose that marketing actions, such as changes in the price of a product, can affect neural representations of experienced pleasantness. We tested this hypothesis by scanning human subjects using functional MRI while they tasted wines that, contrary to reality, they believed to be different and sold at different prices. Our results show that increasing the price of a wine increases subjective reports of flavor pleasantness as well as blood-oxygen-level-dependent activity in medial orbitofrontal cortex, an area that is widely thought to encode for experienced pleasantness during experiential tasks. The paper provides evidence for the ability of marketing actions to modulate neural correlates of experienced pleasantness and for the mechanisms through which the effect operates."}
{"_id": "1c26786513a0844c3a547118167452bed17abf5d", "title": "Automatic Transliteration of Proper Nouns from Arabic to English", "text": "After providing a brief introduction to the transliteration problem, and highlighting some issues specific to Arabic to English translation, a three phase algorithm is introduced as a computational solution to the problem. The algorithm is based on a Hidden Markov Model approach, but also leverages information available in on-line databases. The algorithm is then evaluated, and shown to achieve accuracy approaching 80%."}
{"_id": "886bc30c4709535031a36b390bf5ad8dbca2a916", "title": "A glasses-type wearable device for monitoring the patterns of food intake and facial activity", "text": "Here we present a new method for automatic and objective monitoring of ingestive behaviors in comparison with other facial activities through load cells embedded in a pair of glasses, named GlasSense. Typically, activated by subtle contraction and relaxation of a temporalis muscle, there is a cyclic movement of the temporomandibular joint during mastication. However, such muscular signals are, in general, too weak to sense without amplification or an electromyographic analysis. To detect these oscillatory facial signals without any use of obtrusive device, we incorporated a load cell into each hinge which was used as a lever mechanism on both sides of the glasses. Thus, the signal measured at the load cells can detect the force amplified mechanically by the hinge. We demonstrated a proof-of-concept validation of the amplification by differentiating the force signals between the hinge and the temple. A pattern recognition was applied to extract statistical features and classify featured behavioral patterns, such as natural head movement, chewing, talking, and wink. The overall results showed that the average F1 score of the classification was about 94.0% and the accuracy above 89%. We believe this approach will be helpful for designing a non-intrusive and un-obtrusive eyewear-based ingestive behavior monitoring system."}
{"_id": "20f2f3775df4c2f93188311d8d66d6dd8a308c43", "title": "A survey of dynamic replication and replica selection strategies based on data mining techniques in data grids", "text": "Mining grid data is an interesting research field which aims at analyzing grid systems with data mining techniques in order to efficiently discover new meaningful knowledge to enhance grid management. In this paper, we focus particularly on how extracted knowledge enables enhancing data replication and replica selection strategies which are important data management techniques commonly used in data grids. Indeed, relevant knowledge such as file access patterns, file correlations, user or job access behavior, prediction of future behavior or network performance, and so on, can be efficiently discovered. These findings are then used to enhance both data replication and replica selection strategies. Various works in this respect are then discussed along with their merits and demerits. In addition, we propose a new guideline to data mining application in the context of data replication and replica selection strategies. & 2015 Elsevier Ltd. All rights reserved."}
{"_id": "e9302c3fee03abb5dd6e134118207272c1dcf303", "title": "Neural embedding-based indices for semantic search", "text": "Traditional information retrieval techniques that primarily rely on keyword-based linking of the query and document spaces face challenges such as the vocabulary mismatch problem where relevant documents to a given query might not be retrieved simply due to the use of different terminology for describing the same concepts. As such, semantic search techniques aim to address such limitations of keyword-based retrieval models by incorporating semantic information from standard knowledge bases such as Freebase and DBpedia. The literature has already shown that while the sole consideration of semantic information might not lead to improved retrieval performance over keyword-based search, their consideration enables the retrieval of a set of relevant documents that cannot be retrieved by keyword-based methods. As such, building indices that store and provide access to semantic information during the retrieval process is important. While the process for building and querying keyword-based indices is quite well understood, the incorporation of semantic information within search indices is still an open challenge. Existing work have proposed to build one unified index encompassing both textual and semantic information or to build separate yet integrated indices for each information type but they face limitations such as increased query process time. In this paper, we propose to use neural embeddings-based representations of term, semantic entity, semantic type and documents within the same embedding space to facilitate the development of a unified search index that would consist of these four information types. We perform experiments on standard and widely used document collections including Clueweb09-B and Robust04 to evaluate our proposed indexing strategy from both effectiveness and efficiency perspectives. Based on our experiments, we find that when neural embeddings are used to build inverted indices; hence relaxing the requirement to explicitly observe the posting list key in the indexed document: (a) retrieval efficiency will increase compared to a standard inverted index, hence reduces the index size and query processing time, and (b) while retrieval efficiency, which is the main objective of an efficient indexing mechanism improves using our proposed method, retrieval effectiveness also retains competitive performance compared to the baseline in terms of retrieving a reasonable number of relevant documents from the indexed corpus. Email addresses: fatemeh.lashkari@unb.ca (Fatemeh Lashkari), bagheri@ryerson.ca (Ebrahim Bagheri), ghorbani@unb.ca (Ali A. Ghorbani) Preprint submitted to Information Processing and Management September 11, 2018"}
{"_id": "9133753f7f1c5bddc85e5435478b10f04ae37ac3", "title": "Visualizing RFM Segmentation", "text": "Segmentation based on RFM (Recency, Frequency, and Monetary) has been used for over 50 years by direct marketers to target a subset of their customers, save mailing costs, and improve profits. RFM analysis is commonly performed using the Arthur Hughes method, which bins each of the three RFM attributes independently into five equal frequency bins. The resulting 125 cells are depicted in a tabular format or as bar graphs and analyzed by marketers, who determine the best cells (customer segments) to target. We propose an interactive visualization of RFM that helps marketers visualize and quickly identify important customer segments. Additionally, we show an integrated filtering approach that allows marketers to interactively explore the RFM segments in relation to other customer attributes, such as behavioral or demographic, to identify interesting subsegments in the customer base. We depict these RFM visualizations on two large real-world data sets and discuss how customers have used these visualizations in practice to glean interesting insights from their data. Given, the widespread use of RFM as a critical, and many times the only, segmentation tool, we believe that the proposed intuitive and interactive visualization will provide significant business value."}
{"_id": "cbdc32f6bc16cc8271dbba13cc7d6338b2be3d38", "title": "Prognostics and Health Management of Industrial Equipment", "text": "Prognostics and health management (PHM) is a field of research and application which aims at making use of past, present and future information on the environmental, operational and usage conditions of an equipment in order to detect its degradation, diagnose its faults, predict and proactively manage its failures. The present paper reviews the state of knowledge on the methods for PHM, placing these in context with the different information and data which may be available for performing the task and identifying the current challenges and open issues which must be addressed for achieving reliable deployment in practice. The focus is predominantly on the prognostic part of PHM, which addresses the prediction of equipment failure occurrence and associated residual useful life (RUL)."}
{"_id": "d48a5454562adfdef47f3ec2e6fdef3ddaf317cb", "title": "Constraint-based sequential pattern mining: the pattern-growth methods", "text": "Constraints are essential for many sequential pattern mining applications. However, there is no systematic study on constraint-based sequential pattern mining. In this paper, we investigate this issue and point out that the framework developed for constrained frequent-pattern mining does not fit our mission well. An extended framework is developed based on a sequential pattern growth methodology. Our study shows that constraints can be effectively and efficiently pushed deep into the sequential pattern mining under this new framework. Moreover, this framework can be extended to constraint-based structured pattern mining as well."}
{"_id": "614a793cb5d8d05fd259bf2832d76018fb31cb35", "title": "Bad to the bone: facial structure predicts unethical behaviour.", "text": "Researchers spanning many scientific domains, including primatology, evolutionary biology and psychology, have sought to establish an evolutionary basis for morality. While researchers have identified social and cognitive adaptations that support ethical behaviour, a consensus has emerged that genetically determined physical traits are not reliable signals of unethical intentions or actions. Challenging this view, we show that genetically determined physical traits can serve as reliable predictors of unethical behaviour if they are also associated with positive signals in intersex and intrasex selection. Specifically, we identify a key physical attribute, the facial width-to-height ratio, which predicts unethical behaviour in men. Across two studies, we demonstrate that men with wider faces (relative to facial height) are more likely to explicitly deceive their counterparts in a negotiation, and are more willing to cheat in order to increase their financial gain. Importantly, we provide evidence that the link between facial metrics and unethical behaviour is mediated by a psychological sense of power. Our results demonstrate that static physical attributes can indeed serve as reliable cues of immoral action, and provide additional support for the view that evolutionary forces shape ethical judgement and behaviour."}
{"_id": "75f52663f803d5253690442dcd4f9995009af601", "title": "Impact of social media usage on students academic performance in Saudi Arabia", "text": "Social media is a popular method for communication amongst university students in Saudi Arabia. However excessive social media use can raise questions about whether academic performance is affected. This research explores this question by conducting a survey on university students in Saudi Arabia in regards to social media usage and their academic performance. The survey also explored which social network is the most popular amongst Saudi students, what students thought about their social media usage and factors besides social media usage which negatively affect academic performance. The survey received 108 responses and descriptive statistics including normality tests i.e. scatter plots were used to examine the relationship between the average number of hours students spent of social media a week and GPA scores of the students. The results demonstrated that there was no linear relationship between social media usage in a week and GPA score. Students highlighted that besides social media use, time management is a factor which affects students \u2018studies negatively. The findings of the paper can be used to propose the effective plans for improving the academic performance of the students in such a way that a balance in the leisure, information exchange and academic performance can be maintained. 2014 Elsevier Ltd. All rights reserved."}
{"_id": "9652745bcecd6f50fb2b8319862bfbf0ea4c0d7a", "title": "Patterns of Play: Play-Personas in User-Centred Game Development", "text": "In recent years certain trends from User-Centered design have been seeping into the practice of designing computer games. The balance of power between game designers and players is being renegotiated in order to find a more active role for players and provide them with control in shaping the experiences that games are meant to evoke. A growing player agency can turn both into an increased sense of player immersion and potentially improve the chances of critical acclaim. This paper presents a possible solution to the challenge of involving the user in the design of interactive entertainment by adopting and adapting the \"persona\" framework introduced by Alan Cooper in the field of Human Computer Interaction. The original method is improved by complementing the traditional ethnographic descriptions of personas with parametric, quantitative, data-oriented models of patterns of user behaviour for computer games. Author"}
{"_id": "dbc82e5b8b17faec972e1d09c34ec9f9cd1a33ea", "title": "Common Consensus : a web-based game for collecting commonsense goals", "text": "In our research on Commonsense reasoning, we have found that an especially important kind of knowledge is knowledge about human goals. Especially when applying Commonsense reasoning to interface agents, we need to recognize goals from user actions (plan recognition), and generate sequences of actions that implement goals (planning). We also often need to answer more general questions about the situations in which goals occur, such as when and where a particular goal might be likely, or how long it is likely to take to achieve. In past work on Commonsense knowledge acquisition, users have been directly asked for such information. Recently, however, another approach has emerged\u2014to entice users into playing games where supplying the knowledge is the means to scoring well in the game, thus motivating the players. This approach has been pioneered by Luis von Ahn and his colleagues, who refer to it as Human Computation. Common Consensus is a fun, self-sustaining web-based game, that both collects and validates Commonsense knowledge about everyday goals. It is based on the structure of the TV game show Family Feud1. A small user study showed that users find the game fun, knowledge quality is very good, and the rate of knowledge collection is rapid. ACM Classification: H.3.3 [INFORMATION STORAGE AND RETRIEVAL]: Information Search and Retrieval; I.2.6 [ARTIFICIAL INTELLIGENCE]: Learning"}
{"_id": "0d635696ef2c768095d9f6378df93241a0e78d16", "title": "Collaborative Filtering with Graph Information: Consistency and Scalable Methods", "text": "Low rank matrix completion plays a fundamental role in collaborative filtering applications, the key idea being that the variables lie in a smaller subspace than the ambient space. Often, additional information about the variables is known, and it is reasonable to assume that incorporating this information will lead to better predictions. We tackle the problem of matrix completion when pairwise relationships among variables are known, via a graph. We formulate and derive a highly efficient, conjugate gradient based alternating minimization scheme that solves optimizations with over 55 million observations up to 2 orders of magnitude faster than state-of-the-art (stochastic) gradient-descent based methods. On the theoretical front, we show that such methods generalize weighted nuclear norm formulations, and derive statistical consistency guarantees. We validate our results on both real and synthetic datasets."}
{"_id": "4672f24bf1828452dc367669ab8a29f79834ad58", "title": "Collaborative Deep Learning for Recommender Systems", "text": "Collaborative filtering (CF) is a successful approach commonly used by many recommender systems. Conventional CF-based methods use the ratings given to items by users as the sole source of information for learning to make recommendation. However, the ratings are often very sparse in many applications, causing CF-based methods to degrade significantly in their recommendation performance. To address this sparsity problem, auxiliary information such as item content information may be utilized. Collaborative topic regression (CTR) is an appealing recent method taking this approach which tightly couples the two components that learn from two different sources of information. Nevertheless, the latent representation learned by CTR may not be very effective when the auxiliary information is very sparse. To address this problem, we generalize recently advances in deep learning from i.i.d. input to non-i.i.d. (CF-based) input and propose in this paper a hierarchical Bayesian model called collaborative deep learning (CDL), which jointly performs deep representation learning for the content information and collaborative filtering for the ratings (feedback) matrix. Extensive experiments on three real-world datasets from different domains show that CDL can significantly advance the state of the art."}
{"_id": "4ef807650090b4a18910701d697d038c5ab0bcf0", "title": "Social collaborative filtering for cold-start recommendations", "text": "We examine the cold-start recommendation task in an online retail setting for users who have not yet purchased (or interacted in a meaningful way with) any available items but who have granted access to limited side information, such as basic demographic data (gender, age, location) or social network information (Facebook friends or page likes). We formalize neighborhood-based methods for cold-start collaborative filtering in a generalized matrix algebra framework that does not require purchase data for target users when their side information is available. In real-data experiments with 30,000 users who purchased 80,000+ books and had 9,000,000+ Facebook friends and 6,000,000+ page likes, we show that using Facebook page likes for cold-start recommendation yields up to a 3-fold improvement in mean average precision (mAP) and up to 6-fold improvements in Precision@k and Recall@k compared to most-popular-item, demographic, and Facebook friend cold-start recommenders. These results demonstrate the substantial predictive power of social network content, and its significant utility in a challenging problem - recommendation for cold-start users."}
{"_id": "01ba3b2c57f2a1145c219976787480102148669c", "title": "Predicting purchase behaviors from social media", "text": "In the era of social commerce, users often connect from e-commerce websites to social networking venues such as Facebook and Twitter. However, there have been few efforts on understanding the correlations between users' social media profiles and their e-commerce behaviors. This paper presents a system for predicting a user's purchase behaviors on e-commerce websites from the user's social media profile. We specifically aim at understanding if the user's profile information in a social network (for example Facebook) can be leveraged to predict what categories of products the user will buy from (for example eBay Electronics). The paper provides an extensive analysis on how users' Facebook profile information correlates to purchases on eBay, and analyzes the performance of different feature sets and learning algorithms on the task of purchase behavior prediction."}
{"_id": "78746473cbf9452cd0d35f7bbbb26b50ef9dc730", "title": "Efficient Character Skew Rectification in Scene Text Images", "text": "We present an efficient method for character skew rectification in scene text images. The method is based on a novel skew estimators, which exploit intuitive glyph properties and which can be efficiently computed in a linear time. The estimators are evaluated on a synthetically generated data (including Latin, Cyrillic, Greek, Runic scripts) and real scene text images, where the skew rectification by the proposed method improves the accuracy of a state-of-the-art scene text recognition pipeline."}
{"_id": "65124306996ec4ec68f7b2eb889e93728ec3629e", "title": "Why Do Those With Long-Term Substance Use Disorders Stop Abusing Substances? A Qualitative Study", "text": "Although a significant proportion of adults recover from substance use disorders (SUDs), little is known about how they reach this turning point or why they stop using. The purpose of the study was to explore the factors that influence reasoning and decision making about quitting substance use after a long-term SUD. Semistructured interviews were conducted with 18 participants, each of whom had been diagnosed with a SUD and had been abstinent for at least 5\u2009years. A resource group of peer consultants in long-term recovery from SUDs contributed to the study's planning, preparation, and initial analyses. Participants recalled harmful consequences and significant events during their years of substance use. Pressure and concern from close family members were important in their initial efforts to abstain from substance use. Being able to imagine a different life, and the awareness of existing treatment options, promoted hope and further reinforced their motivation to quit. Greater focus on why those with SUDs want to quit may help direct treatment matching; treatment completion may be more likely if the person's reasons for seeking help are addressed."}
{"_id": "b47812577acbb67c58b432e2f2bc0a5eb091bc61", "title": "Play Therapy: Practitioners' Perspectives on Implementation and Effectiveness", "text": "The purpose of the present research was to explore practitioners\u2019 perspectives on play therapy as an intervention when working with a child who has experienced trauma, has present PTSD symptoms and has a co-morbid mental health diagnosis. Play therapy has been accepted as an effective intervention to utilize with children who have been exposed to trauma (Schaefer, 1994). However, there is currently limited research evaluating play therapy as an intervention with children who have been traumatized and have developed PTSD or other mental health symptoms/disorders. The current study aimed to supplement the gap in existing research. Two agencies that serve early childhood mental health clients agreed to participate in the present study by completing an online survey. Data was gathered from 22 practitioner respondents. The results indicate that practitioners believe that play therapy is an effective intervention when treating children with trauma histories, PTSD symptoms, and mental health disorders. The results of the present research support findings from previous literature regarding play therapy when used as an intervention for treating trauma and/or mental health disorders. Furthermore, the present research confirms the notion that creating a safe space for their clients using play therapy is an important part of the intervention process. Given the gap in research surrounding play therapy as an intervention when PTSD and a co-morbid mental health disorders occur concurrently, further research would be beneficial to the field of social work and would positively inform the practitioners who work in early intervention settings. PRACTITIONERS\u2019\t\r PERSPECTIVES 2"}
{"_id": "a0650d278aa0f50e2ca59e770782b94ffcdd47ce", "title": "A Reliability Perspective of the Smart Grid", "text": "Increasing complexity of power grids, growing demand, and requirement for greater reliability, security and efficiency as well as environmental and energy sustainability concerns continue to highlight the need for a quantum leap in harnessing communication and information technologies. This leap toward a \u00bfsmarter\u00bf grid is widely referred to as \u00bfsmart grid.\u00bf A framework for cohesive integration of these technologies facilitates convergence of acutely needed standards, and implementation of necessary analytical capabilities. This paper critically reviews the reliability impacts of major smart grid resources such as renewables, demand response, and storage. We observe that an ideal mix of these resources leads to a flatter net demand that eventually accentuates reliability challenges further. A gridwide IT architectural framework is presented to meet these challenges while facilitating modern cybersecurity measures. This architecture supports a multitude of geographically and temporally coordinated hierarchical monitoring and control actions over time scales from milliseconds and up."}
{"_id": "73aa92ce51fa7107f4c34b5f2e7b45b3694e19ec", "title": "An Approach to Generate Topic Similar Document by Seed Extraction-Based SeqGAN Training for Bait Document", "text": "In recent years, topic similar document generation has drawn more and more attention in both academia and industry. Especially, bait document generation is very important for security. For more-like and fast bait document generation, we proposed the topic similar document generation model based on SeqGAN model (TSDG-SeqGAN). In the training phrase, we used jieba word segmentation tool for training text to greatly reduce the training time. In the generation phrase, we extract keywords and key sentence from the subject document as seeds, and then enter the seeds into the trained generation network. Next, we get keyword-based documents and documents based on key sentences from generation network. Finally, we output documents that are most similar to the subject document as the final result. Experiments show the effectiveness of our model."}
{"_id": "82d7a7ab3fc4aa0bb545deb2b3ac172b39cfec26", "title": "NB-IoT Technology Overview and Experience from Cloud-RAN Implementation", "text": "The 3GPP has introduced a new narrowband radio technology called narrowband Internet of Things (NB-IoT) in Release 13. NB-IoT was designed to support very low power consumption and low-cost devices in extreme coverage conditions. NB-IoT operates in very small bandwidth and will provide connectivity to a large number of low-data-rate devices. This article highlights some of the key features introduced in NB-IoT and presents performance results from real-life experiments. The experiments were carried out using an early-standard-compliant prototype based on a software defined radio partial implementation of NB-IoT that runs on a desktop computer connected to the network. It is found that a cloud radio access network is a good candidate for NB-IoT implementation."}
{"_id": "b1c6f513e347ed9fbf508bd67f763407fa6d5ec6", "title": "RGB-H-CbCr skin colour model for human face detection", "text": "While the RGB, HSV and YUV (YCbCr) are standard models used in various colour imaging applications, not all of their information are necessary to classify skin colour. This paper presents a novel skin colour model, RGB-H-CbCr for the detection of human faces. Skin regions are extracted using a set of bounding rules based on the skin colour distribution obtained from a training set. The segmented face regions are further classified using a parallel combination of simple morphological operations. Experimental results on a large photo data set have demonstrated that the proposed model is able to achieve good detection success rates for near-frontal faces of varying orientations, skin colour and background environment. The results are also comparable to that of the AdaBoost face classifier."}
{"_id": "51e95da85a91844ee939147c6f647f749437f42c", "title": "Multilabel SVM active learning for image classification", "text": "Image classification is an important task in computer vision. However, how to assign suitable labels to images is a subjective matter, especially when some images can be categorized into multiple classes simultaneously. Multilabel image classification focuses on the problem that each image can have one or multiple labels. It is known that manually labelling images is time-consuming and expensive. In order to reduce the human effort of labelling images, especially multilabel images, we proposed a multilabel SVM active learning method. We also proposed two selection strategies: Max Loss strategy and Mean Max Loss strategy. Experimental results on both artificial data and real-world images demonstrated the advantage of proposed method."}
{"_id": "9785c1040a2cdb5d905f8721991c3480d73769cf", "title": "Unhealthy region of citrus leaf detection using image processing techniques", "text": "Producing agricultural products are difficult task as the plant comes to an attack from various micro-organisms, pests and bacterial diseases. The symptoms of the attacks are generally distinguished through the leaves, steams or fruit inspection. The present paper discusses the image processing techniques used in performing early detection of plant diseases through leaf features inspection. The objective of this work is to implement image analysis and classification techniques for extraction and classification of leaf diseases. Leaf image is captured and then processed to determine the status of each plant. Proposed framework is model into four parts image preprocessing including RGB to different color space conversion, image enhancement; segment the region of interest using K-mean clustering for statistical usage to determine the defect and severity areas of plant leaves, feature extraction and classification. texture feature extraction using statistical GLCM and color feature by means of mean values. Finally classification achieved using SVM. This technique will ensure that chemicals only applied when plant leaves are detected to be effected with the disease."}
{"_id": "bcf6433c1a37328c554063447262c574bf3d0f27", "title": "New step-up DC-DC converters for PV power generation systems", "text": "This paper proposes new step-up DC-DC converters with high ratio capability. The proposed DC-DC converters are derived using combination of boost converters and buck-boost converters. A high ratio capability is achieved by parallel input and series output combination so that the efficiency is better than the one that is achieved by using a conventional boost converter. A method to reduce the input and output ripples are also proposed in this paper. Simulated and experimental results are included to show the validity of the proposed converters."}
{"_id": "fe6685dac1680be4aaa5a3c42f12127fd492a78e", "title": "Multi-document Summarization Using Support Vector Regression", "text": "Most multi-document summarization systems follow the extractive framework based on various features. While more and more sophisticated features are designed, the reasonable combination of features becomes a challenge. Usually the features are combined by a linear function whose weights are tuned manually. In this task, Support Vector Regression (SVR) model is used for automatically combining the features and scoring the sentences. Two important problems are inevitably involved. The first one is how to acquire the training data. Several automatic generation methods are introduced based on the standard reference summaries generated by human. Another indispensable problem in SVR application is feature selection, where various features will be picked out and combined into different feature sets to be tested. With the aid of DUC 2005 and 2006 data sets, comprehensive experiments are conducted with consideration of various SVR kernels and feature sets. Then the trained SVR model is used in the main task of DUC 2007 to get the extractive summaries."}
{"_id": "21b90a8e802e00af13aa959039f7687d8df42902", "title": "A smooth-walled spline-profile horn as an alternative to the corrugated horn for wide band millimeter-wave applications", "text": "At millimeter-wave frequencies, corrugated horns can be difficult and expensive to manufacture. As an alternative we present here the results of a theoretical and measurement study of a smooth-walled spline-profile horn for specific application in the 80-120 GHz band. While about 50% longer than its corrugated counterpart, the smooth-walled horn is shown to give improved performance across the band as well as being much easier to manufacture."}
{"_id": "b2b627f803890f8ae1ff75f840fa26e83db32214", "title": "An Analysis and Comparison of CDN-P2P-hybrid Content Delivery System and Model", "text": "In order to fully utilize the stable edge transmission capability of CDN and the scalable last-mile transmission capability of P2P, while at the same time avoiding ISP-unfriendly policies and unlimited usage of P2P delivery, some researches have begun focusing on CDN-P2P-hybrid architecture and ISP-friendly P2P content delivery technology in recent years. In this paper, we first survey CDN-P2P-hybrid architecture technology, including current industry efforts and academic efforts in this field. Second, we make comparisons between CDN and P2P. And then we explore and analyze main issues, including overlay route hybrid issues, and playing buffer hybrid issues. After that we focus on CDN-P2P-hybrid model analysis and design, we compare the tightlycoupled hybrid model with the loosely-coupled hybrid model, and we propose that there are some main common models which need further study. At last, we analyze the prospective research direction and propose our future work. KeywordsCDN, P2P, P2P Streaming, CDN-P2P-hybrid Architecture, Live Streaming, VoD Streaming Note: This work is supported by 2009 National Science Foundation of China (60903164): Research on Model and Algorithm of New-Generation Controllable, Trustworthy, Network-friendly CDN-P2P hybrid Content Delivery."}
{"_id": "8139e1c8c6d8b36ad898d477b83b17e959628b3e", "title": "Optimality principles in sensorimotor control", "text": "The sensorimotor system is a product of evolution, development, learning and adaptation\u2014which work on different time scales to improve behavioral performance. Consequently, many theories of motor function are based on 'optimal performance': they quantify task goals as cost functions, and apply the sophisticated tools of optimal control theory to obtain detailed behavioral predictions. The resulting models, although not without limitations, have explained more empirical phenomena than any other class. Traditional emphasis has been on optimizing desired movement trajectories while ignoring sensory feedback. Recent work has redefined optimality in terms of feedback control laws, and focused on the mechanisms that generate behavior online. This approach has allowed researchers to fit previously unrelated concepts and observations into what may become a unified theoretical framework for interpreting motor function. At the heart of the framework is the relationship between high-level goals, and the real-time sensorimotor control strategies most suitable for accomplishing those goals."}
{"_id": "a143dde8bc102eb8bd79d5e9b7440709f89142a2", "title": "New Developments in Space Syntax Software", "text": "The Spatial Positioning tool (SPOT) is an isovist-based spatial analysis software, and is written in Java working as a stand-alone program. SPOT differs from regular Space syntax software as it can produce integration graphs and intervisibility graphs from a selection of positions. The concept of the software originates from a series of field studies on building interiors highly influenced by organizations and social groups. We have developed SPOT as a prototype. Basic SPOT operations use selections of positions and creations of isovist sets. The sets are color-coded and layered; the layers can be activated and visible by being turned on or off. At this point, there are two graphs produced in SPOT, the isovist overlap graph that shows intervisibility between overlapping isovist fields and the network integration analysis built on visibility relations. The program aims to be used as a fast and interactive sketch tool as well as a precise analysis tool. Data, images, and diagrams can be exported for use in conjunction with other CAD or illustration programs. The first stage of development is to have a functioning prototype with the implementation of all the basic algorithms and a minimal basic functionality in respect to user interaction."}
{"_id": "65a72b15d87fe93725ac3fca3b1c60009ec0af66", "title": "A collaborative filtering approach to ad recommendation using the query-ad click graph", "text": "Search engine logs contain a large amount of click-through data that can be leveraged as soft indicators of relevance. In this paper we address the sponsored search retrieval problem which is to find and rank relevant ads to a search query. We propose a new technique to determine the relevance of an ad document for a search query using click-through data. The method builds on a collaborative filtering approach to discover new ads related to a query using a click graph. It is implemented on a graph with several million edges and scales to larger sizes easily. The proposed method is compared to three different baselines that are state-of-the-art for a commercial search engine. Evaluations on editorial data indicate that the model discovers many new ads not retrieved by the baseline methods. The ads from the new approach are on average of better quality than the baselines."}
{"_id": "f8b1534b26c1a4a30d32aec408614ecff2412156", "title": "Example-Based Methods for Estimating 3 D Human Pose from Silhouette Image using Approximate Chamfer Distance and Kernel Subspace", "text": ""}
{"_id": "4d657fb9382c8d8367a331496288313d64415518", "title": "Toward More Efficient NoC Arbitration : A Deep Reinforcement Learning Approach", "text": "The network on-chip (NoC) is a critical resource shared by various on-chip components. An efficient NoC arbitration policy is crucial in providing global fairness and improving system performance. In this preliminary work, we demonstrate an idea of utilizing deep reinforcement learning to guide the design of more efficient NoC arbitration policies. We relate arbitration to a self-learning decision making process. Results show that the deep reinforcement learning approach can effectively reduce packet latency and has potential for identifying interesting features that could be utilized in more practical hardware designs."}
{"_id": "35a9c2fad935a2389a7b6e3a53d88ea476db611e", "title": "TCA: An Efficient Two-Mode Meta-Heuristic Algorithm for Combinatorial Test Generation (T)", "text": "Covering arrays (CAs) are often used as test suites for combinatorial interaction testing to discover interaction faults of real-world systems. Most real-world systems involve constraints, so improving algorithms for covering array generation (CAG) with constraints is beneficial. Two popular methods for constrained CAG are greedy construction and meta-heuristic search. Recently, a meta-heuristic framework called two-mode local search has shown great success in solving classic NPhard problems. We are interested whether this method is also powerful in solving the constrained CAG problem. This work proposes a two-mode meta-heuristic framework for constrained CAG efficiently and presents a new meta-heuristic algorithm called TCA. Experiments show that TCA significantly outperforms state-of-the-art solvers on 3-way constrained CAG. Further experiments demonstrate that TCA also performs much better than its competitors on 2-way constrained CAG."}
{"_id": "64ca5af9c23518607da217349b39f3646d0beb23", "title": "Methods and protocols of modern solid phase Peptide synthesis.", "text": "The purpose of this article is to delineate strategic considerations and provide practical procedures to enable non-experts to synthesize peptides with a reasonable chance of success. This article is not encyclopedic but rather devoted to the Fmoc/tBu approach of solid phase peptide synthesis (SPPS), which is now the most commonly used methodology for the production of peptides. The principles of SPPS with a review of linkers and supports currently employed are presented. Basic concepts for the different steps of SPPS such as anchoring, deprotection, coupling reaction and cleavage are all discussed along with the possible problem of aggregation and side-reactions. Essential protocols for the synthesis of fully deprotected peptides are presented including resin handling, coupling, capping, Fmoc-deprotection, final cleavage and disulfide bridge formation."}
{"_id": "b82d8fba4dfa2bac63e5be1bf8ef4b7bb24b8e9c", "title": "Understanding the Intention of Information Contribution to Online Feedback Systems from Social Exchange and Motivation Crowding Perspectives", "text": "The online feedback system (OFS) has been touted to be an effective artifact for electronic word-of-mouth (EWOM). Accumulating sufficient detailed consumption information in the OFS is essential to the success of OFS. Yet, past research has focused on the effects of OFS on building trust and promoting sales and little knowledge about information provision to OFS has been developed. This study attempts to fill this gap by developing and testing a theoretical model to identify the possible antecedents that lead to the intention of consumers' information contribution to OFS. The model employs social exchange theory to identify benefit and cost factors influencing consumer intention, and motivation crowding theory to explore the moderating effects from environmental interventions that are embodied in OFS. Our preliminary results in general provide empirical support for the model. Practical implications are offered to OFS designers for system customization"}
{"_id": "1e43d706d38cbacac563de9d0659230de00d73f2", "title": "Paragon: QoS-aware scheduling for heterogeneous datacenters", "text": "Large-scale datacenters (DCs) host tens of thousands of diverse applications each day. However, interference between colocated workloads and the difficulty to match applications to one of the many hardware platforms available can degrade performance, violating the quality of service (QoS) guarantees that many cloud workloads require. While previous work has identified the impact of heterogeneity and interference, existing solutions are computationally intensive, cannot be applied online and do not scale beyond few applications.\n We present Paragon, an online and scalable DC scheduler that is heterogeneity and interference-aware. Paragon is derived from robust analytical methods and instead of profiling each application in detail, it leverages information the system already has about applications it has previously seen. It uses collaborative filtering techniques to quickly and accurately classify an unknown, incoming workload with respect to heterogeneity and interference in multiple shared resources, by identifying similarities to previously scheduled applications. The classification allows Paragon to greedily schedule applications in a manner that minimizes interference and maximizes server utilization. Paragon scales to tens of thousands of servers with marginal scheduling overheads in terms of time or state.\n We evaluate Paragon with a wide range of workload scenarios, on both small and large-scale systems, including 1,000 servers on EC2. For a 2,500-workload scenario, Paragon enforces performance guarantees for 91% of applications, while significantly improving utilization. In comparison, heterogeneity-oblivious, interference-oblivious and least-loaded schedulers only provide similar guarantees for 14%, 11% and 3% of workloads. The differences are more striking in oversubscribed scenarios where resource efficiency is more critical."}
{"_id": "3c0bc4e9d30719269b0048d4f36752ab964145dd", "title": "Bubble-up: Increasing utilization in modern warehouse scale computers via sensible co-locations", "text": "As much of the world's computing continues to move into the cloud, the overprovisioning of computing resources to ensure the performance isolation of latency-sensitive tasks, such as web search, in modern datacenters is a major contributor to low machine utilization. Being unable to accurately predict performance degradation due to contention for shared resources on multicore systems has led to the heavy handed approach of simply disallowing the co-location of high-priority, latency-sensitive tasks with other tasks. Performing this precise prediction has been a challenging and unsolved problem.\n In this paper, we present Bubble-Up, a characterization methodology that enables the accurate prediction of the performance degradation that results from contention for shared resources in the memory subsystem. By using a bubble to apply a tunable amount of \"pressure\" to the memory subsystem on processors in production datacenters, our methodology can predict the performance interference between co-locate applications with an accuracy within 1% to 2% of the actual performance degradation. Using this methodology to arrive at \"sensible\" co-locations in Google's production datacenters with real-world large-scale applications, we can improve the utilization of a 500-machine cluster by 50% to 90% while guaranteeing a high quality of service of latency-sensitive applications."}
{"_id": "28d527e3ca0b13bb053324661ea4abe4195e0f59", "title": "Factored recurrent neural network language model in TED lecture transcription", "text": "In this study, we extend recurrent neural network-based language models (RNNLMs) by explicitly integrating morphological and syntactic factors (or features). Our proposed RNNLM is called a factored RNNLM that is expected to enhance RNNLMs. A number of experiments are carried out on top of state-of-the-art LVCSR system that show the factored RNNLM improves the performance measured by perplexity and word error rate. In the IWSLT TED test data sets, absolute word error rate reductions over RNNLM and n-gram LM are 0.4\u223c0.8 points."}
{"_id": "43b0f7b134d529505cc6d4dc492995c70518d8ac", "title": "Discriminative Multi-View Interactive Image Re-Ranking", "text": "Given an unreliable visual patterns and insufficient query information, content-based image retrieval is often suboptimal and requires image re-ranking using auxiliary information. In this paper, we propose a discriminative multi-view interactive image re-ranking (DMINTIR), which integrates user relevance feedback capturing users\u2019 intentions and multiple features that sufficiently describe the images. In DMINTIR, heterogeneous property features are incorporated in the multi-view learning scheme to exploit their complementarities. In addition, a discriminatively learned weight vector is obtained to reassign updated scores and target images for re-ranking. Compared with other multi-view learning techniques, our scheme not only generates a compact representation in the latent space from the redundant multi-view features but also maximally preserves the discriminative information in feature encoding by the large-margin principle. Furthermore, the generalization error bound of the proposed algorithm is theoretically analyzed and shown to be improved by the interactions between the latent space and discriminant function learning. Experimental results on two benchmark data sets demonstrate that our approach boosts baseline retrieval quality and is competitive with the other state-of-the-art re-ranking strategies."}
{"_id": "c411632b42c037220372c06d9cf6cd4e919ee23c", "title": "PTBI: An efficient privacy-preserving biometric identification based on perturbed term in the cloud", "text": "Biometric identification has played an important role in achieving user authentication. For efficiency and economic savings, biometric data owners are motivated to outsource the biometric data and identification tasks to a third party, which however introduces potential threats to user\u2019s privacy. In this paper, we propose a new privacy-preserving biometric identification scheme which can release the database owner from heavy computation burden. In the proposed scheme, we design concrete biometric data encryption and matching algorithms, and introduce perturb terms in each biometric data. A thorough analysis indicates that our schemes are secure, and the ultimate scheme offers a high level of privacy protection. In addition, the performance evaluations via extensive simulations demonstrate our schemes\u2019 efficiency. \u00a9 2017 Elsevier Inc. All rights reserved."}
{"_id": "b299f97b3b978dea6b358e1b09991c0640913ee3", "title": "An Energy-Efficient Architecture for the Internet of Things (IoT)", "text": "Internet of things (IoT) is a smart technology that connects anything anywhere at any time. Such ubiquitous nature of IoT is responsible for draining out energy from its resources. Therefore, the energy efficiency of IoT resources has emerged as a major research issue. In this paper, an energy-efficien t architecture for IoT has been proposed, which consists of three layers, namely, sensing and control, information processing, and presentation. The architectural design allows the system to predict the sleep interval of sensors based upon their remaining battery level, their previous usage history, and quality of information required for a particular application. The predicted value can be used to boost the utilization of cloud resources by reprovisioning the allocated resources when the corresponding sensory nodes are in sleep mode. This mechanism allows the energy-efficient utilization of all the IoT resources. The experimental results show a significant amount of energy saving in the case of sensor nodes and improved resource utilization of cloud resources."}
{"_id": "13dff6a28d24e4fe443161fcb7d96b68a085a3d4", "title": "Tight bounds for rumor spreading in graphs of a given conductance", "text": "We study the connection between the rate at which a rumor spreads throughout a graph and the conductance of the graph\u2014a standard measure of a graph\u2019s expansion properties. We show that for any n-node graph with conductance \u03c6, the classical PUSH-PULL algorithm distributes a rumor to all nodes of the graph in O(\u03c6 log n) rounds with high probability (w.h.p.). This bound improves a recent result of Chierichetti, Lattanzi, and Panconesi [6], and it is tight in the sense that there exist graphs where \u03a9(\u03c6 log n) rounds of the PUSH-PULL algorithm are required to distribute a rumor w.h.p. We also explore the PUSH and the PULL algorithms, and derive conditions that are both necessary and sufficient for the above upper bound to hold for those algorithms as well. An interesting finding is that every graph contains a node such that the PULL algorithm takes O(\u03c6 log n) rounds w.h.p. to distribute a rumor started at that node. In contrast, there are graphs where the PUSH algorithm requires significantly more rounds for any start node. 1998 ACM Subject Classification G.3 [Mathematics of Computing]: Probability and Statistics"}
{"_id": "4c479f8d18badb29ec6a2a49d6ca8e36d833fbe9", "title": "Coccydynia: an overview of the anatomy, etiology, and treatment of coccyx pain.", "text": "BACKGROUND\nDespite its small size, the coccyx has several important functions. Along with being the insertion site for multiple muscles, ligaments, and tendons, it also serves as one leg of the tripod-along with the ischial tuberosities-that provides weight-bearing support to a person in the seated position. The incidence of coccydynia (pain in the region of the coccyx) has not been reported, but factors associated with increased risk of developing coccydynia include obesity and female gender.\n\n\nMETHODS\nThis article provides an overview of the anatomy, physiology, and treatment of coccydynia.\n\n\nRESULTS\nConservative treatment is successful in 90% of cases, and many cases resolve without medical treatment. Treatments for refractory cases include pelvic floor rehabilitation, manual manipulation and massage, transcutaneous electrical nerve stimulation, psychotherapy, steroid injections, nerve block, spinal cord stimulation, and surgical procedures.\n\n\nCONCLUSION\nA multidisciplinary approach employing physical therapy, ergonomic adaptations, medications, injections, and, possibly, psychotherapy leads to the greatest chance of success in patients with refractory coccyx pain. Although new surgical techniques are emerging, more research is needed before their efficacy can be established."}
{"_id": "f52cb1ff135992751374c5e596c56beec9d07141", "title": "Passive ultrasonic irrigation of the root canal: a review of the literature.", "text": "Ultrasonic irrigation of the root canal can be performed with or without simultaneous ultrasonic instrumentation. When canal shaping is not undertaken the term passive ultrasonic irrigation (PUI) can be used to describe the technique. In this paper the relevant literature on PUI is reviewed from a MEDLINE database search. Passive ultrasonic irrigation can be performed with a small file or smooth wire (size 10-20) oscillating freely in the root canal to induce powerful acoustic microstreaming. PUI can be an important supplement for cleaning the root canal system and, compared with traditional syringe irrigation, it removes more organic tissue, planktonic bacteria and dentine debris from the root canal. PUI is more efficient in cleaning canals than ultrasonic irrigation with simultaneous ultrasonic instrumentation. PUI can be effective in curved canals and a smooth wire can be as effective as a cutting K-file. The taper and the diameter of the root canal were found to be important parameters in determining the efficacies of dentine debris removal. Irrigation with sodium hypochlorite is more effective than with water and ultrasonic irrigation is more effective than sonic irrigation in the removal of dentine debris from the root canal. The role of cavitation during PUI remains inconclusive. No detailed information is available on the influence of the irrigation time, the volume of the irrigant, the penetration depth of the instrument and the shape and material properties of the instrument. The influence of irrigation frequency and intensity on the streaming pattern as well as the complicated interaction of acoustic streaming with the adherent biofilm needs to be clarified to reveal the underlying physical mechanisms of PUI."}
{"_id": "c12f63273cdc2e71a475de49efee5437ca5416a5", "title": "Web-based expert systems: benefits and challenges", "text": "Convergence of technologies in the Internet and the field of expert systems has offered new ways of sharing and distributing knowledge. However, there has been a general lack of research in the area of web-based expert systems (ES). This paper addresses the issues associated with the design, development, and use of web-based ES from a standpoint of the benefits and challenges of developing and using them. The original theory and concepts in conventional ES were reviewed and a knowledge engineering framework for developing them was revisited. The study considered three web-based ES: WITS-Advisor \u2013 for e-business strategy development, Fish-Expert \u2013 For fish disease diagnosis, and IMIS \u2013 to promote intelligent interviews. The benefits and challenges in developing and using ES are discussed by comparing them with traditional standalone systems from development and application perspectives."}
{"_id": "c00ba120de1964b444807255030741d199ba6e04", "title": "Identification, characterization, and grounding of gradable terms in clinical text", "text": "Gradable adjectives are inherently vague and are used by clinicians to document medical interpretations (e.g., severe reaction, mild symptoms). We present a comprehensive study of gradable adjectives used in the clinical domain. We automatically identify gradable adjectives and demonstrate that they have a substantial presence in clinical text. Further, we show that there is a specific pattern associated with their usage, where certain medical concepts are more likely to be described using these adjectives than others. Interpretation of statements using such adjectives is a barrier in medical decision making. Therefore, we use a simple probabilistic model to ground their meaning based on their usage in context."}
{"_id": "c47a688459c49eae0c77f24f19ec9fa063300ce2", "title": "A compact printed wide-slot UWB antenna with band-notched characteristics", "text": "In this paper, we present an offset microstrip-fed ultrawideband antenna with band notched characteristics. The antenna structure consists of rectangular radiating patch and ground plane with rectangular shaped slot, which increases impedance bandwidth upto 123.52%(2.6\u201311GHz). A new modified U slot is etched in the radiating patch to create band-notched properties in the WiMAX (3.3\u20133.7GHz) and C-band satellite communication (3.7\u20134.15GHz). Furthermore, parametric studies have been conducted using EM simulation software CADFEKO suite(7.0). A prototype of antenna is fabricated on 1.6mm thick FR-4 substrate with dielectric constant of 4.4 and loss tangent of 0.02. The proposed antenna exhibits directional and omnidirectional radiation patterns along E and H-plane with stable efficiency over the frequency band from 2.6GHz to 11GHz with VSWR less than 2, except 3.3\u20134.15GHz notched frequency band. The proposed antenna shows good time domain analysis."}
{"_id": "066a65dd36d95bdeff06ef703ba30b0c28397cd0", "title": "Causes and consequences of microRNA dysregulation in cancer", "text": "Over the past several years it has become clear that alterations in the expression of microRNA (miRNA) genes contribute to the pathogenesis of most \u2014 if not all \u2014 human malignancies. These alterations can be caused by various mechanisms, including deletions, amplifications or mutations involving miRNA loci, epigenetic silencing or the dysregulation of transcription factors that target specific miRNAs. Because malignant cells show dependence on the dysregulated expression of miRNA genes, which in turn control or are controlled by the dysregulation of multiple protein-coding oncogenes or tumour suppressor genes, these small RNAs provide important opportunities for the development of future miRNA-based therapies."}
{"_id": "deb4c203a0f0e1fa77843754655a3447ee7bf6a3", "title": "A review on memristor applications", "text": "This article presents a review on the main applications of the fourth fundamental circuit element, named \"memristor\", which had been proposed for the first time by Leon Chua and has recently been developed by a team at HP Laboratories led by Stanley Williams. In particular, after a brief analysis of memristor theory with a description of the first memristor, manufactured at HP Laboratories, we present its main applications in the circuit design and computer technology, together with future developments."}
{"_id": "c524089e59615f90f36e3f89aeb4485441cd7c06", "title": "A Customer Loyalty Model for E-Service Context", "text": "While the importance of customer loyalty has been recognized in the marketing literature for at least three decades, the conceptualization and empirical validation of a customer loyalty model for e-service context has not been addressed. This paper describes a theoretical model for investigating the three main antecedent influences on loyalty (attitudinal commitment and behavioral loyalty) for e-service context: trust, customer satisfaction, and perceived value. Based on the theoretical model, a comprehensive set of hypotheses were formulated and a methodology for testing them was outlined. These hypotheses were tested empirically to demonstrate the applicability of the theoretical model. The results indicate that trust, customer satisfaction, perceived value, and commitment are separate constructs that combine to determine the loyalty, with commitment exerting a stronger influence than trust, customer satisfaction, and perceived value. Customer satisfaction and perceived value were also indirectly related to loyalty through commitment. Finally, the authors discuss the managerial and theoretical implications of these results."}
{"_id": "398a53c795b2403a6ed2a28ea22a306af67b5597", "title": "A multi-scale target detection method for optical remote sensing images", "text": "Faster RCNN is a region proposal based object detection approach. It integrates the region proposal stage and classification stage into a single pipeline, which has both rapid speed and high detection accuracy. However, when the model is applied to the target detection of remote sensing imagery, faced with multi-scale targets, its performance is degraded. We analyze the influences of pooling operation and target size on region proposal, then a modified solution for region proposal is introduced to improve recall rate of multi-scale targets. To speed up the convergence of the region proposal networks, an improved generation strategy of foreground samples is proposed, which could suppresses the generation of non-effective foreground samples. Extensive evaluations on the remote sensing image dataset show that the proposed model can obviously improve detection accuracy for multi-scale targets, moreover the training of the model is rapid and high-efficient."}
{"_id": "c106bff1fcab9cd0ecdd842cc61da4b43fcce39c", "title": "A Linearly Relaxed Approximate Linear Program for Markov Decision Processes", "text": "Approximate linear programming (ALP) and its variants have been widely applied to Markov decision processes (MDPs) with a large number of states. A serious limitation of ALP is that it has an intractable number of constraints, as a result of which constraint approximations are of interest. In this paper, we define a linearly relaxed approximation linear program (LRALP) that has a tractable number of constraints, obtained as positive linear combinations of the original constraints of the ALP. The main contribution is a novel performance bound for LRALP."}
{"_id": "cc8a5356c1ffd78f2ae93793447d9fd325be31bb", "title": "Data science is science's second chance to get causal inference right: A classification of data science tasks", "text": "1. Departments of Epidemiology and Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA 2. Harvard-MIT Division of Health Sciences and Technology, Boston, MA 3. Mongan Institute, Massachusetts General Hospital, Boston, MA 4. Department of Health Care Policy, Harvard Medical School, Boston, MA 5. Department of Neurology, Harvard Medical School, Partners MS Center, Brigham and Women\u2019s Hospital, Boston, MA 6. Biostatistics Center, Massachusetts General Hospital, Boston, MA"}
{"_id": "f3391de90960c8eaf9c0742a54a5680a0ba49186", "title": "Attention Alignment Multimodal LSTM for Fine-Gained Common Space Learning", "text": "We address the problem common space learning approach that maps all related multimodal information into a common space for multimodal data. To establish a fine-grained common space, the aligned relevant local information of different modalities is used to learn a common subspace where the projected fragmented information is further integrated according to intra-modal semantic relationships. Specifically, we propose a novel multimodal LSTM with an attention alignment mechanism, namely attention alignment multimodal LSTM (AAM-LSTM), which mainly includes attentional alignment recurrent network (AA-R) and hierarchical multimodal LSTM (HM-LSTM). Different from the traditional methods which operate on the full modal data directly, the proposed model exploits the inter-modal and intra-modal semantic relationships of local information, to jointly establish a uniform representation of multi-modal data. Specifically, AA-R automatically captures semantic-aligned local information to learn common subspace without the need of supervised labels, then HM-LSTM leverages the potential relationships of these local information to learn a fine-grained common space. The experimental results on Filker30K, Filker8K, and Filker30K entities verify the performance and effectiveness of our model, which compares favorably with the state-of-the-art methods. In particular, the experiment of phrase localization on AA-R with Filker30K entities shows the expected accurate attention alignment. Moreover, from the experiment results of image-sentence retrieval tasks, it can be concluded that the proposed AAM-LSTM outperforms benchmark algorithms by a large margin."}
{"_id": "56a7559948605a39197d154186161a41edb27023", "title": "Chip-level and board-level CDM ESD tests on IC products", "text": "The electrostatic discharge (ESD) transient currents and failure analysis (FA) between chip-level and board-level charged-device-model (CDM) ESD tests are investigated in this work. The discharging current waveforms of three different printed circuit boards (PCBs) are characterized first. Then, the chip-level and board-level CDM ESD tests are performed to an ESD-protected dummy NMOS and a high-speed receiver front-end circuit, respectively. Scanning electron microscope (SEM) failure pictures show that the board-level CDM ESD test causes much severer failure than that caused by the chip-level CDM ESD test."}
{"_id": "0b611a4a4134ffd8865c79bf1bace1c19114e3f8", "title": "Classifying Objectionable Websites Based on Image Content", "text": "This paper describes IBCOW (Image-based Classi cation of Objectionable Websites), a system capable of classifying a website as objectionable or benign based on image content. The system uses WIPETM (Wavelet Image Pornography Elimination) and statistics to provide robust classi cation of on-line objectionable World Wide Web sites. Semantically-meaningful feature vector matching is carried out so that comparisons between a given on-line image and images marked as \"objectionable\" and \"benign\" in a training set can be performed efciently and e ectively in the WIPE module. If more than a certain number of images sampled from a site is found to be objectionable, then the site is considered to be objectionable. The statistical analysis for determining the size of the image sample and the threshold number of objectionable images is given in this paper. The system is practical for real-world applications, classifying a Web site at a speed of less than 2 minutes each, including the time to compute the feature vector for the images downloaded from the site, on a Pentium Pro PC. Besides its exceptional speed, it has demonstrated 97% sensitivity and 97% speci city in classifying a Web site based solely on images. Both the sensitivity and the speci city in real-world applications is expected to be higher because our performance evaluation is relatively conservative and surrounding text can be used to assist the classi cation process."}
{"_id": "e42fa48f49e7e18bf1e4e5e76bb55ae8432beae1", "title": "Knowledge-Based Distant Regularization in Learning Probabilistic Models", "text": "Exploiting the appropriate inductive bias based on the knowledge of data is essential for achieving good performance in statistical machine learning. In practice, however, the domain knowledge of interest often provides information on the relationship of data attributes only distantly, which hinders direct utilization of such domain knowledge in popular regularization methods. In this paper, we propose the knowledge-based distant regularization framework, in which we utilize the distant information encoded in a knowledge graph for regularization of probabilistic model estimation. In particular, we propose to impose prior distributions on model parameters specified by knowledge graph embeddings. As an instance of the proposed framework, we present the factor analysis model with the knowledge-based distant regularization. We show the results of preliminary experiments on the improvement of the generalization capability of such model."}
{"_id": "878ad1aa6e7858d664c0269d60451c91f144e0b9", "title": "Representation Learning for Grounded Spatial Reasoning", "text": "The interpretation of spatial references is highly contextual, requiring joint inference over both language and the environment. We consider the task of spatial reasoning in a simulated environment, where an agent can act and receive rewards. The proposed model learns a representation of the world steered by instruction text. This design allows for precise alignment of local neighborhoods with corresponding verbalizations, while also handling global references in the instructions. We train our model with reinforcement learning using a variant of generalized value iteration. The model outperforms state-of-the-art approaches on several metrics, yielding a 45% reduction in goal localization error."}
{"_id": "0989bbd8c15f9aac24e8832327df560dc8ec5324", "title": "Lower Extremity Exoskeletons and Active Orthoses: Challenges and State-of-the-Art", "text": "In the nearly six decades since researchers began to explore methods of creating them, exoskeletons have progressed from the stuff of science fiction to nearly commercialized products. While there are still many challenges associated with exoskeleton development that have yet to be perfected, the advances in the field have been enormous. In this paper, we review the history and discuss the state-of-the-art of lower limb exoskeletons and active orthoses. We provide a design overview of hardware, actuation, sensory, and control systems for most of the devices that have been described in the literature, and end with a discussion of the major advances that have been made and hurdles yet to be overcome."}
{"_id": "42ad00c8ed436f6b8f0a4a73f55018210181e4a3", "title": "Brain\u2013machine interfaces: past, present and future", "text": "Since the original demonstration that electrical activity generated by ensembles of cortical neurons can be employed directly to control a robotic manipulator, research on brain-machine interfaces (BMIs) has experienced an impressive growth. Today BMIs designed for both experimental and clinical studies can translate raw neuronal signals into motor commands that reproduce arm reaching and hand grasping movements in artificial actuators. Clearly, these developments hold promise for the restoration of limb mobility in paralyzed subjects. However, as we review here, before this goal can be reached several bottlenecks have to be passed. These include designing a fully implantable biocompatible recording device, further developing real-time computational algorithms, introducing a method for providing the brain with sensory feedback from the actuators, and designing and building artificial prostheses that can be controlled directly by brain-derived signals. By reaching these milestones, future BMIs will be able to drive and control revolutionary prostheses that feel and act like the human arm."}
{"_id": "1d92229ad2ad5127fe4d8d25e036debcbe22ef2e", "title": "A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects.", "text": "Multiple factors simultaneously affect the spiking activity of individual neurons. Determining the effects and relative importance of these factors is a challenging problem in neurophysiology. We propose a statistical framework based on the point process likelihood function to relate a neuron's spiking probability to three typical covariates: the neuron's own spiking history, concurrent ensemble activity, and extrinsic covariates such as stimuli or behavior. The framework uses parametric models of the conditional intensity function to define a neuron's spiking probability in terms of the covariates. The discrete time likelihood function for point processes is used to carry out model fitting and model analysis. We show that, by modeling the logarithm of the conditional intensity function as a linear combination of functions of the covariates, the discrete time point process likelihood function is readily analyzed in the generalized linear model (GLM) framework. We illustrate our approach for both GLM and non-GLM likelihood functions using simulated data and multivariate single-unit activity data simultaneously recorded from the motor cortex of a monkey performing a visuomotor pursuit-tracking task. The point process framework provides a flexible, computationally efficient approach for maximum likelihood estimation, goodness-of-fit assessment, residual analysis, model selection, and neural decoding. The framework thus allows for the formulation and analysis of point process models of neural spiking activity that readily capture the simultaneous effects of multiple covariates and enables the assessment of their relative importance."}
{"_id": "606b34547b0edc5b4fba669d1a027961ee517cdd", "title": "Power assist method for HAL-3 using EMG-based feedback controller", "text": "We have developed the exoskeletal robotics suite HAL (Hybrid Assisitve Leg) which is integrated with human and assists suitable power for \u2018lower limb of people with goit disorder. This study proposes the method of assist motion and ossist torque to realize a power assist corresponding to the operator\u2019s intention. In the method of a,ssist motion, we adopted Phase Sequence control which generates a series of assist m,otions hi/ transiti.n,q .some simple basic motions called Phase. w e used the feedback controller to adjust the assist torque to m.ointain myoelectricity signals which were generated while performing the power assist uiolking. The experiment xsults showed the effective power assist according to operator\u2019s intention b y using these control methods."}
{"_id": "725fc2767cd7049b5c0111d392a1b8c8c9da6c2f", "title": "Adaptive control of a variable-impedance ankle-foot orthosis to assist drop-foot gait", "text": "An active ankle-foot orthoses (AAFO) is presented where the impedance of the orthotic joint is modulated throughout the walking cycle to treat drop-foot gait. During controlled plantar flexion, a biomimetic torsional spring control is applied where orthotic joint stiffness is actively adjusted to minimize forefoot collisions with the ground. Throughout late stance, joint impedance is minimized so as not to impede powered plantar flexion movements, and during the swing phase, a torsional spring-damper control lifts the foot to provide toe clearance. To assess the clinical effects of variable-impedance control, kinetic and kinematic gait data were collected on two drop-foot participants wearing the AAFO. For each participant, zero, constant, and variable impedance control strategies were evaluated and the results were compared to the mechanics of three age, weight, and height matched normals. We find that actively adjusting joint impedance reduces the occurrence of slap foot allows greater powered plantar flexion and provides for less kinematic difference during swing when compared to normals. These results indicate that a variable-impedance orthosis may have certain clinical benefits for the treatment of drop-foot gait compared to conventional ankle-foot orthoses having zero or constant stiffness joint behaviors."}
{"_id": "5569dd2da7a10c1480ca6159ff746122222d0e9c", "title": "Comparison of Parametric and Nonparametric Techniques for Non-peak Traffic Forecasting", "text": "Accurately predicting non-peak traffic is crucial to daily traffic for all forecasting models. In the paper, least squares support vector machines (LS-SVMs) are investigated to solve such a practical problem. It is the first time to apply the approach and analyze the forecast performance in the domain. For comparison purpose, two parametric and two non-parametric techniques are selected because of their effectiveness proved in past research. Having good generalization ability and guaranteeing global minima, LS-SVMs perform better than the others. Providing sufficient improvement in stability and robustness reveals that the approach is practically promising. Keywords\u2014Parametric and Nonparametric Techniques, Non-peak Traffic Forecasting"}
{"_id": "479e8eaa115d9c5c204e2d8a80cd16be204182f1", "title": "Ultrasound-Assisted Evaluation ofthe Airway in Clinical AnesthesiaPractice: Past, Present and Future", "text": "Introduction: The incidence of difficulties encountered in perioperative airway management has been reported to range from 1% to 4%. In patients with head and neck cancers, the incidence can be dramatically higher. Because of high quality of imaging, non-invasiveness and relatively low cost, ultrasonography has been utilized as a valuable adjunct to the clinical assessment of the airway. A review of the literature was conducted with the objective of summarizing the available evidence concerning the use of ultrasound (US) for assessment of the airway, with special emphasis on head and neck cancers. Methods and Materials: A systematic search of the literature in the MEDLINE database was performed. A total of 42 manuscripts from 329 searched articles were included in this review. Results: Ultrasonography was found to give high-resolution images of the anatomic structures of the upper airway comparable to computed tomography and magnetic resonance imaging. Several ultrasonographic parameters (soft tissue thickness at level of hyoid bone, epiglottis and vocal cords, visibility of hyoid bone in sublingual ultrasound, hyomental distance in head-extended position and hyomental distance ratio) were found to be independent predictors of difficult laryngoscopy in obese and non-obese patients. In conjunction with elastosonography, it also provided valuable information regarding tumors, infiltration, and edema as well as fibrosis of the head and neck. Conclusion: Ultrasound-assisted evaluation of the difficult airway offers many important advantages. The ready availability of US machines in anesthesiology departments, familiarity of anesthesia providers with USguided procedures and the portability of US machines allow real-time, point-of-care assessment. It will undoubtedly become more popular and will greatly contribute to improve perioperative patient safety."}
{"_id": "acabaadfabb6d4bb8af779060aeff01ba8de4b29", "title": "Time series forecasting using Artificial Neural Networks vs. evolving models", "text": "Time series forecasting plays an important role in many fields such as economics, finance, business intelligence, natural sciences, and the social sciences. This forecasting task can be achieved by using different techniques such as statistical methods or Artificial Neural Networks (ANN). In this paper, we present two different approaches to time series forecasting: evolving Takagi-Sugeno (eTS) fuzzy model and ANN. These two different methods will be compared taking into account the different characteristic of each approach."}
{"_id": "5220abb07c64daf01b02bbc9ed20380c0481120c", "title": "Practice Makes Perfect ? When Does Massed Learning Improve Product Usage Proficiency ?", "text": "Previous research has shown that spacing of information (over time) leads to better learning of product information. We develop a theoretical framework to describe how massed or spaced learning schedules interact with different learning styles to influence product usage proficiency. The core finding is that with experiential learning, proficiency in a product usage task is better under massed conditions, whereas with verbal learning, spacing works better. This effect is demonstrated for usage proficiency assessed via speed as well as quality of use. Further, massed learning also results in better usage proficiency on transfer tasks, for both experiential and verbal learning. We also find that massed learning in experiential learning conditions leads not only to better usage proficiency but also to positive perceptions of the product. Overall, the pattern of results is consistent with a conceptual mapping account, with massed experiences leading to a superior mental model of usage and thus to better usage proficiency."}
{"_id": "54cd6ff8b5c0a9c903ba9096ade7aaa264453dfb", "title": "Multi-Modal Fashion Product Retrieval", "text": "Finding a product in the fashion world can be a daunting task. Everyday, e-commerce sites are updating with thousands of images and their associated metadata (textual information), deepening the problem. In this paper, we leverage both the images and textual metadata and propose a joint multi-modal embedding that maps both the text and images into a common latent space. Distances in the latent space correspond to similarity between products, allowing us to effectively perform retrieval in this latent space. We compare against existing approaches and show significant improvements in retrieval tasks on a largescale e-commerce dataset."}
{"_id": "bf9e2c212f0ad4e159c731aa91fb7ccaf6c82203", "title": "A 3.4 \u2013 6.2 GHz Continuously tunable electrostatic MEMS resonator with quality factor of 460\u2013530", "text": "In this paper we present the first MEMS electrostatically-tunable loaded-cavity resonator that simultaneously achieves a very high continuous tuning range of 6.2 GHz:3.4 GHz (1.8:1) and quality factor of 460\u2013530 in a volume of 18\u00d730\u00d74 mm3 including the actuation scheme and biasing lines. The operating principle relies on tuning the capacitance of the loaded-cavity by controlling the gap between an electrostatically-actuated membrane and the cavity post underneath it. Particular attention is paid on the fabrication of the tuning mechanism in order to avoid a) quality factor degradation due to the biasing lines and b) hysteresis and creep issues. A single-crystal silicon membrane coated with a thin gold layer is the key to the success of the design."}
{"_id": "a6c55b820d125f6c936814e3fa6e1cab2594b696", "title": "Recommendations for the Assessment of Blend and Content Uniformity: Modifications to Withdrawn FDA Draft Stratified Sampling Guidance", "text": "The following paper describes the International Society for Pharmaceutical Engineering (ISPE)-sponsored Blend Uniformity and Content Uniformity Group\u2019s proposed modifications to the withdrawn FDA draft guidance document for industry \u201cPowder Blends and Finished Dosage Units\u2014Stratified In-Process Dosage Unit Sampling and Assessment.\u201d The modifications targeted FDA\u2019s primary concerns that led to the withdrawal of the draft guidance document, which were insufficient blend uniformity testing and that a one-time passing of the criteria stated in USP General Chapter <905> Uniformity of Dosage Units testing lacks confidence to ensure the content uniformity of a batch. The Group\u2019s approach discusses when triplicate blend samples should be analyzed and the importance of performing variance component analysis on the data to identify root causes of non-uniformity. The Group recommends the use of statistically based approaches, acceptance criteria, and sampling plans for assessing content uniformity for batch release that provide increased confidence that future samples drawn from the batch will comply with USP <905>. Alternative statistical approaches, sampling plans, and acceptance criteria, including modern analytical method (e.g., process analytical technology (PAT)) sampling plans, may be substituted for those mentioned in this paper, with justification. This approach also links blend and content uniformity testing to the three stages of the life cycle process validation approach. A framework for the assessment of blend and content uniformity that provides greater assurance of passing USP <905> is presented."}
{"_id": "4adffe0ebdda59d39e43d42a41e1b6f80164f07e", "title": "Symmetric Nonnegative Matrix Factorization: Algorithms and Applications to Probabilistic Clustering", "text": "Nonnegative matrix factorization (NMF) is an unsupervised learning method useful in various applications including image processing and semantic analysis of documents. This paper focuses on symmetric NMF (SNMF), which is a special case of NMF decomposition. Three parallel multiplicative update algorithms using level 3 basic linear algebra subprograms directly are developed for this problem. First, by minimizing the Euclidean distance, a multiplicative update algorithm is proposed, and its convergence under mild conditions is proved. Based on it, we further propose another two fast parallel methods: \u03b1-SNMF and \u03b2 -SNMF algorithms. All of them are easy to implement. These algorithms are applied to probabilistic clustering. We demonstrate their effectiveness for facial image clustering, document categorization, and pattern clustering in gene expression."}
{"_id": "f6632983aa0d3a6dd9c39e89964347a09937cd9f", "title": "Child maltreatment and the developing brain : A review of neuroscience perspectives \u2606", "text": "a r t i c l e i n f o Keywords: Child maltreatment Neuroscience Brain plasticity Stress system dysregulation Brain development In this article we review neuroscience perspectives on child maltreatment to facilitate understanding of the rapid integration of neuroscience knowledge into the academic, clinical, and lay literature on this topic. Seminal articles from developmental psychology and psychiatry, a discussion of brain plasticity, and a summary of recent reviews of research on stress system dysregulation are presented with some attention to methodological issues. A common theme is that maltreatment during childhood is an experience that may affect the course of brain development, potentially leading to differences in brain anatomy and functioning with lifelong consequences for mental health. The design of prevention and intervention strategies for child maltreatment may benefit from considering neuroscience perspectives along with those of other disciplines."}
{"_id": "ff88cc5e63cb22fef2fd074eea91d9cf1889277b", "title": "WHY SUMMARIES OF RESEARCH ON PSYCHOLOGICAL THEORIES ARE OFTEN UNINTERPRETABLE", "text": "\u2014Null hypothesis testing of correlational predictions from weak substantive theories in soft psychology is subject to the influence of ten obfuscating factors whose effects are usually (1) sizeable, (2) opposed, (3) variable, and (4) unknown The net epistemic effect of these ten obfuscating influences is that the usual research literature review is well nigh uninterpretable Major changes in graduate education, conduct of research, and editorial policy are proposed"}
{"_id": "fc73c7fe1e56a81324f3174e193fbd8acc811b05", "title": "An Improved Variable On-Time Control Strategy for a CRM Flyback PFC Converter", "text": "The traditional critical conduction mode (CRM) flyback PFC converter with constant on-time control strategy usually suffers low power factor (PF) and high total harmonic distortion (THD) due to the nonsinusoidal input current waveform. In order to solve this problem, an improved variable on-time control strategy for the CRM flyback PFC converter is proposed in this letter. A simple analog divider circuit consisting of an operational amplifier, two signal switches, and an RC filter is proposed to modulate the turn-on time of the primary switch, and the PF and THD of the CRM flyback PFC converter can be evidently improved. The theoretical analysis is presented and the experiment results verify the advantages of the proposed control scheme."}
{"_id": "733fc2181e89c48ca4ff6b1b9e9b211262a4e6ac", "title": "Range-Based Localization in Wireless Networks Using Density-Based Outlier Detection", "text": "Node localization is commonly employed in wireless networks. For example, it is used to improve routing and enhance security. Localization algorithms can be classified as range-free or range-based. Range-based algorithms use location metrics such as ToA, TDoA, RSS, and AoA to estimate the distance between two nodes. Proximity sensing between nodes is typically the basis for range-free algorithms. A tradeoff exists since range-based algorithms are more accurate but also more complex. However, in applications such as target tracking, localization accuracy is very important. In this paper, we propose a new range-based algorithm which is based on the density-based outlier detection algorithm (DBOD) from data mining. It requires selection of the K-nearest neighbours (KNN). DBOD assigns density values to each point used in the location estimation. The mean of these densities is calculated and those points having a density larger than the mean are kept as candidate points. Different performance measures are used to compare our approach with the linear least squares (LLS) and weighted linear least squares based on singular value decomposition (WLS-SVD) algorithms. It is shown that the proposed algorithm performs better than these algorithms even when the anchor geometry about an unlocalized node is poor."}
{"_id": "7c1cbfd084827ff63b1600c64c533bacb4df2ae4", "title": "Zika virus impairs growth in human neurospheres and brain organoids", "text": "Since the emergence of Zika virus (ZIKV), reports of microcephaly have increased considerably in Brazil; however, causality between the viral epidemic and malformations in fetal brains needs further confirmation. We examined the effects of ZIKV infection in human neural stem cells growing as neurospheres and brain organoids. Using immunocytochemistry and electron microscopy, we showed that ZIKV targets human brain cells, reducing their viability and growth as neurospheres and brain organoids. These results suggest that ZIKV abrogates neurogenesis during human brain development."}
{"_id": "951fe3fef08bbf76fe5d61ef8fd84cfb6f9ae006", "title": "Research on non-invasive glucose concentration measurement by NIR transmission", "text": "Diabetes is a widely spreading disease which is known as one of the life threatening disease in the world. It occurs not only among adults and elderly, but also among infants and children. Blood glucose measurements are indispensable to diabetes patients to determine their insulin dose intake. Invasive blood glucose measurement ways which are high in accuracy are common but they are uncomfortable and have higher risk of infections especially for elders, pregnant and children. As a change, non-invasive blood glucose measurement techniques are introduced to provide a reliable and pain free method for monitoring glucose level without puncturing the skin. In this paper, a non-invasive glucose monitoring setup was developed using near infrared by detecting the transmission laser power. The detecting system included the semiconductor laser diode as light source, the S302C light power probe which detected the incident light and, the PM100USB transmit data to the computer. The specific infrared spectrum (1310 nm) was used as the incident beam. A proportional relationship between the laser power and the glucose concentration was proved by comparing the resulting laser power for a few of glucose aqueous solution samples with glucose concentration estimated value at the same circumstances."}
{"_id": "590f892cb4582738e836b225a293e2692f8552e0", "title": "LSD-induced entropic brain activity predicts subsequent personality change.", "text": "Personality is known to be relatively stable throughout adulthood. Nevertheless, it has been shown that major life events with high personal significance, including experiences engendered by psychedelic drugs, can have an enduring impact on some core facets of personality. In the present, balanced-order, placebo-controlled study, we investigated biological predictors of post-lysergic acid diethylamide (LSD) changes in personality. Nineteen healthy adults underwent resting state functional MRI scans under LSD (75\u00b5g, I.V.) and placebo (saline I.V.). The Revised NEO Personality Inventory (NEO-PI-R) was completed at screening and 2 weeks after LSD/placebo. Scanning sessions consisted of three 7.5-min eyes-closed resting-state scans, one of which involved music listening. A standardized preprocessing pipeline was used to extract measures of sample entropy, which characterizes the predictability of an fMRI time-series. Mixed-effects models were used to evaluate drug-induced shifts in brain entropy and their relationship with the observed increases in the personality trait openness at the 2-week follow-up. Overall, LSD had a pronounced global effect on brain entropy, increasing it in both sensory and hierarchically higher networks across multiple time scales. These shifts predicted enduring increases in trait openness. Moreover, the predictive power of the entropy increases was greatest for the music-listening scans and when \"ego-dissolution\" was reported during the acute experience. These results shed new light on how LSD-induced shifts in brain dynamics and concomitant subjective experience can be predictive of lasting changes in personality. Hum Brain Mapp 37:3203-3213, 2016. \u00a9 2016 Wiley Periodicals, Inc."}
{"_id": "8724631b1b16469fb57df1568d41d1039067c717", "title": "Data Structures and Algorithms for Nearest Neighbor Search in General Metric Spaces", "text": "We consider the computational problem of nding nearest neighbors in general metric spaces Of particular interest are spaces that may not be conveniently embedded or approxi mated in Euclidian space or where the dimensionality of a Euclidian representation is very high Also relevant are high dimensional Euclidian settings in which the distribution of data is in some sense of lower di mension and embedded in the space The vp tree vantage point tree is introduced in several forms together with associated algorithms as an improved method for these di cult search problems Tree construc tion executes in O n log n time and search is under certain circumstances and in the limit O log n expected time The theoretical basis for this approach is developed and the results of several experiments are reported In Euclidian cases kd tree performance is compared"}
{"_id": "2a4423b10725e54ad72f4f1fcf77db5bc835f0a6", "title": "Optimization by simulated annealing.", "text": "There is a deep and useful connection between statistical mechanics (the behavior of systems with many degrees of freedom in thermal equilibrium at a finite temperature) and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters). A detailed analogy with annealing in solids provides a framework for optimization of the properties of very large and complex systems. This connection to statistical mechanics exposes new information and provides an unfamiliar perspective on traditional optimization problems and methods."}
{"_id": "dec997b20ebe2b867f68cc5c123d9cb9eafad6bb", "title": "Deriving optimal weights in deep neural networks", "text": "Training deep neural networks generally requires massive amounts of data and is very computation intensive. We show here that it may be possible to circumvent the expensive gradient descent procedure and derive the parameters of a neural network directly from properties of the training data. We show that, near convergence, the gradient descent equations for layers close to the input can be linearized and become stochastic equations with noise related to the covariance of data for each class. We derive the distribution of solutions to these equations and discover that it is related to a \u201csupervised principal component analysis.\u201d We implement these results on image datasets MNIST, CIFAR10 and CIFAR100 and find that, indeed, pretrained layers using our findings performs comparable or superior to neural networks of the same size and architecture trained with gradient descent. Moreover, our pretrained layers can often be calculated using a fraction of the training data, owing to the quick convergence of the covariance matrix. Thus, our findings indicate that we can cut the training time both by requiring only a fraction of the data used for gradient descent, and by eliminating layers in the costly backpropagation step of the training. Additionally, these findings partially elucidate the inner workings of deep neural networks and allow us to mathematically calculate optimal solutions for some stages of classification problems, thus significantly boosting our ability to solve such problems efficiently."}